This is a special post for quick takes by Eli Rose. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

There is/was a debate on LessWrong about how valid the efficient market hypothesis is. I think this is super interesting stuff, but I want to claim (with only some brief sketches of arguments here) that, regarding EA projects, the efficient market hypothesis is not at all valid (that is, I think it's a poor way to model the situation that will lead you to make systematically wrong judgments). I think the main reasons for this are:

  1. EA and the availability of lots of funding for it are relatively new — there's just not that much time for "market inefficiencies" to have been filled.
  2. The number of people in EA who are able to get funding for, and are excited to start, new projects is really small relative to the number of people doing this in the wider world.

I don't see the connection between EMH and EA projects. Can you elaborate on how those two intersect?

EMH says that we shouldn't expect great opportunities to make money to just be "lying around" ready for anyone to take. EMH says that, if you have an amazing startup idea, you have to answer "why didn't anyone do this before?" (ofc. this is a simplification; EMH isn't really one coherent view.)

One might also think that there aren't great EA projects just "lying around" ready for anyone to do. This would be an "EMH for EA." But I think it's not true.

I had to use Wikipedia to get a concise definition of EMH, rather than rely on my memory:

The efficient-market hypothesis (EMH) is a hypothesis in financial economics that states that asset prices reflect all available information. A direct implication is that it is impossible to "beat the market" consistently on a risk-adjusted basis since market prices should only react to new information. [1]

This appears to me to apply exclusively to financial (securities) markets, and I think we would be taking it (too) far out of its original context in trying to use it to answer questions about whether great EA projects exist. In that sense, I completely agree with you that:

it's a poor way to model the situation that will lead you to make systematically wrong judgments

 In the real (non-financial) world, there are plenty of opportunities to make money, which is one reason entrepreneurs exist and are valuable. Are you aware of people using EMH to suggest we should not expect to find good philanthropic opportunities?

  1. ^ Wikipedia, "Efficient-market hypothesis"

What I'm talking about tends to be more of an informal thing which I'm using "EMH" as a handle for. I'm talking about a mindset where, when you think of something that could be an impactful project, your next thought is "but why hasn't EA done this already?" I think this is pretty common and it's reasonably well-adapted to the larger world, but not very well-adapted to EA.

"but why hasn't EA done this already?"

still seems like a fair question. I think the underlying problem you're pointing to might be that people will then give up on their projects or ideas without having come up with a good answer. An "EMH-style" mindset seems to point to an analytical shortcut: if it hasn't already been done, it probably isn't worth doing. Which, I agree, is wrong.

I still think EMH has no relevance in this context and that should be the main argument against applying it to EA projects. 

If you're an EA who's just about to graduate, you're very involved in the community, and most of the people you think are really cool are EAs, I think there's a decent chance you're overrating jobs at EA orgs in your job search. Per the common advice, I think most people in this position should be looking primarily at the "career capital" their first role can give them (skills, connections, resume-building, etc.) rather than the direct impact it will let them have.

At first blush it seems like this recommends you should almost never take an EA job early in your career — since jobs at EA orgs are such a small proportion of all jobs, what are the odds that such a job was optimal from a career capital perspective? I think this is wrong for a number of reasons, but it's instructive to actually run through the list. One is that a job being at an EA org is correlated with it being good in other ways — e.g. with it having smart, driven colleagues that you get on well with, or with it being in a field connected to one of the world's biggest problems. Another is that some types of career capital are best gotten at EA orgs or by doing EA projects — e.g. if you want to upskill for community-building work, there's plausibly no Google/McKinsey of community-building where you can go to get useful career capital. (Though I do think some types of experience, like startup experience, are often transferable to community-building.)

I think a good orientation to have towards this is to try your hardest, when looking at jobs as a new grad, to "wipe the slate clean" of tribal-affiliation-related considerations, and (to a large extent) of impact-related considerations, and assess mostly based on career-capital considerations.

(Context: I worked at an early-stage non-EA startup for 3 years before getting my current job at Open Phil. This was an environment where I was pushed to work really hard, take on a lot of responsibility, and produce high-quality work. I think I'd be way worse at my current job [and less likely to have gotten it] without this experience. My co-workers cared about lots of instrumental stuff EA cares about, like efficiency, good management, feedback culture, etc. I liked them a lot and was really motivated. However, this doesn't happen to everyone at every startup, and I was plausibly unusually well-suited to it or unusually lucky.)

I agree with this take (and also happen to be sitting next to Eli right now talking to him about it :). I think working at a fast-growing startup in an emerging technology is one of the best opportunities for career capital: https://forum.effectivealtruism.org/posts/ejaC35E5qyKEkAWn2/early-career-ea-s-should-consider-joining-fast-growing 
