Manifold had an offsite in Mexico City recently. It has good public transit, lots of plants, low cost of living, and a big expat community (which we didn’t speak to much). I strongly recommend the city.
The Cause Exploration Prize has ended, but we just released a similar tournament for ClearerThinking.org's regrant project. Details here: https://manifold.markets/group/clearer-thinking-regrants/about
A few thoughts.

I'm skeptical of using this for AI alignment. AI risk is already well funded, so if all it took was adding more resources or hitting a metric, existing orgs could just buy that directly. I think the economic issue with AI risk is less a lack of legible, liquid resources and more the difficulty of getting the AI field as a whole to cooperate and not race.

However, I still think pool-less quadratic funding is very exciting for donations to causes that have room for more funding (like direct charity or meta-EA tooling).

I disagree with the strategic thinking section. People don't think in terms of maximizing leverage, but in maximizing good-to-themselves per $ spent. When other people donate after you, you spend slightly more than you already did, and you get a lot more of the public good "for free" (paid for by other people), which makes it worth it. And to the extent people are more altruistic, they'll generally fund these goods more rather than less.
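To make the leverage point concrete, here's a minimal sketch of the standard (pooled) quadratic funding formula, where a project receives the square of the sum of the square roots of its contributions. The donor amounts are hypothetical, and pool-less variants redistribute the match differently, but the marginal-incentive arithmetic is the same:

```python
import math

def qf_total(contributions):
    """Standard quadratic funding: total = (sum of sqrt(c_i))^2."""
    return sum(math.sqrt(c) for c in contributions) ** 2

# Hypothetical example: three donors give $100 each.
donors = [100.0, 100.0, 100.0]
total = qf_total(donors)      # (3 * sqrt(100))^2 = 900
match = total - sum(donors)   # 600 of public good "for free" vs. direct giving

# Marginal value of one extra dollar from the first donor:
marginal = qf_total([101.0, 100.0, 100.0]) - total
print(total, match, marginal)  # each extra $1 unlocks roughly $3 of funding here
```

The point: once others have donated, each additional dollar you give is multiplied by everyone else's square-root weights, so you get more total public good per personal dollar spent, not less.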
The main benefit of prediction markets in posts is not betting on the performance of particular posts, but betting on the claims in the post. I see it more like:

Epistemic status:
- I think a 60% to 70% chance of X (and click to bet over/under)
- Y odds-ratio in favor of X (and link to my bet on an existing market for X)
- I'm not betting on this, but...

And the analogies:
- Post summary -> testable prediction in market title
- Epistemic comments vs vibe comments -> comments with bets vs without
- Epistemic likes vs vibe likes -> market movement vs karma
- Paid peer review for X -> market subsidy