This is a special post for quick takes by Joey🔸. Only they can create top-level comments.

A thing that seems valuable but is not talked about much is whether an organization brings new talent into the EA/impact-focused charity world, re-uses people already in the movement, or turns people off the movement. The difference between these effects seems both significant and pretty consistent within an organization. Founders Pledge is a good example of an organization that, I think, brings talent into the effective charities world on net. I often see their hires go on, after leaving FP, to pretty impactful roles that it's not clear they would have taken absent their experience working for FP. I wish more organizations did this rather than re-using people or turning them off.

I find it a bit surprising that your point is so well-taken and has met no disagreement so far, though I am inclined to agree with it.

Another way of framing "orgs that bring talent into the EA/impact-focused charity world" is orgs whose hiring is less focused on value alignment, insofar as involvement in the movement corresponds with EA value alignment. One might worry that a less aligned hire would do well on metrics that can easily be ascertained or credited by their immediate employer, but ignore other opportunities or considerations regarding impact because they are narrowly concerned with legible job performance and personal career capital. On this view, they could go on to use the career capital they develop to displace more aligned individuals. If funding, rather than labor willing to work for pay, is the larger constraint on impactful work, "re-using" people in the community may make sense, because the impact premium from value alignment is worth the marginal delta from a seemingly superior resume.

Of course, another view is that hiring someone into an EA org can create buy-in and "convert" someone into the community, or allow them to discover a community they already agree with.

Something that gives me pause about giving too much credit for bringing in additional talent is that, for many kinds of talent, there is already a lot of EA talent chasing limited paid opportunities. Expanding the labor pool in some areas is probably much less important because funding, not labor, is the limiting factor.

I think it would be cool if someone scraped LinkedIn and made some sort of diagram of talent flows like this. I imagine it could be done in a weekend and might yield interesting results. A rough sketch of the aggregation step is below.
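For concreteness, a minimal sketch of what that aggregation might look like once you have employment histories in hand. The CSV name, its columns, and the org list are hypothetical placeholders, and it assumes hand-collected or exported data rather than live scraping:

```python
import csv
from collections import Counter

# Hypothetical, illustrative org list; a real analysis needs a curated one.
EA_ORGS = {"Founders Pledge", "GiveWell", "Charity Entrepreneurship"}

inflow, outflow = Counter(), Counter()

# Hypothetical file: one row per job stint, with columns person, org, prior_org, next_org.
with open("stints.csv") as f:
    for row in csv.DictReader(f):
        org = row["org"]
        if org not in EA_ORGS:
            continue
        # Was this hire brought into the movement, or re-used from within it?
        kind = "brought in" if row["prior_org"] not in EA_ORGS else "re-used"
        inflow[(org, kind)] += 1
        # Where did leavers end up?
        if row["next_org"]:
            dest = "stayed in movement" if row["next_org"] in EA_ORGS else "left movement"
            outflow[(org, dest)] += 1

for (org, kind), n in sorted((inflow + outflow).items()):
    print(f"{org:<25} {kind:<20} {n}")
```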

Automated scraping is against LinkedIn's ToS, so anyone attempting this should be aware that their account may get banned.

I might do this. What organizations would you be most interested in seeing this for?

Nice! My guess is that the most immediate use of this data would be for organizations funded on the basis of a "meta" theory of change (e.g., by OP, EAIF, or MCF) to get more or less funding depending on whether they turn out to be doing more or less to bring people in than expected. So maybe I would start with organizations funded by those groups, along with some other class of organizations as a control.
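A hedged sketch of the simplest version of that comparison: do meta-funded orgs bring in a larger share of outside hires than a control set? All counts below are made-up placeholders, and the 2x2 Fisher exact test is just one reasonable choice for small samples:

```python
from scipy.stats import fisher_exact

# Hypothetical aggregated hire counts per group: (brought in, re-used).
meta_funded = (40, 60)   # orgs funded by OP/EAIF/MCF-style "meta" grants
control = (25, 75)       # comparison orgs

# 2x2 contingency table: rows are groups, columns are hire origins.
odds_ratio, p_value = fisher_exact([list(meta_funded), list(control)])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```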

Sorry for demanding the spoon-feeding, but where do I find a list of such organizations?

  1. OP Grantees
  2. EA Funds
  3. I don't think MCF has a database (maybe @Joey 🔸 knows?), but this post and this post list their grants

If you're looking for the meta organisations Ben is talking about, you can see all of the city and national groups funded by the Centre for Effective Altruism's Community Building Grants programme under the 'Groups' tab on this page. This is probably one of the bigger groupings of meta organisations (in terms of long-term stable funding). You can also check Marieke's mindmap for a bunch of other meta organisations.

That’s an interesting point, and it does seem impactful when organisations succeed in introducing new talent to the EA/impact space, especially when it leads to long-term contributions. Isn’t this a key focus for most community building organisations, though? Or is there a nuance in the approach you’re describing that perhaps I’m missing?

This take was aimed more at hiring/staffing than at direct outreach/EA chapters.

Part of this long but highly interesting blog series stood out to me:



What the heck happened here? Why such a big difference? Was it:

  1. His spending was not high at the time of the podcast.
  2. It was high, but 80k/EA didn't know about it.
  3. It was high, and 80k/EA did know, but it was introduced like this anyway.

Does anyone have a sense of which it was, or a link if this was discussed elsewhere?
 

Here's a comment from the 80k interviewer 2 years ago: https://forum.effectivealtruism.org/posts/RPTPo8eHTnruoFyRH/some-important-questions-for-the-ea-leadership?commentId=Xr27yCbC72ZPh5Fzn
 

Hi Oli — I was very saddened to hear that you thought the most likely explanation for the discussion of frugality in my interview with Sam was that I was deliberately seeking to mislead the audience.

I had no intention to mislead people into thinking Sam was more frugal than he was. I simply believed the reporting I had read about him and he didn’t contradict me.

It’s only in recent weeks that I learned that some folks such as you thought the impression about his lifestyle was misleading, notwithstanding Sam's reference to 'nice apartments' in the interview:

"I don’t know, I kind of like nice apartments. ... I’m not really that much of a consumer exactly. It’s never been what’s important to me. And so I think overall a nice place is just about as far as it gets."

Unfortunately as far as I can remember nobody else reached out to me after the podcast to correct the record either.

In recent years, in pursuit of better work-life balance, I’ve been spending less time socialising with people involved in the EA community, and when I do, I discuss work with them much less than in the past. I also last visited the SF Bay Area way back in 2019 and am certainly not part of the 'crypto' social scene. That may help to explain why this issue never came up in casual conversation.

Inasmuch as the interview gave listeners a false impression about Sam I am sorry about that, because we of course aim for the podcast to be as informative and accurate as possible.

 

Two years later, after having read way too many posts, comments, podcasts and a book about SBF, my understanding is that the most likely interpretation is that SBF was actually frugal for a billionaire.

  1. By far the main thing is the Bahamas luxury penthouse and properties, where he lived with at least 6 other coworkers. The person who bought the property recently did an interview claiming that the high cost was due to a lack of supply of real estate in the Bahamas, the need to incentivize employees to move there, and the fact that FTX believed it was very rich: https://x.com/TuckerCarlson/status/1844100642979099054?t=3585 . I think this is not an absurd explanation, even in hindsight. That person does not talk positively about SBF or the "effective altruism cult" at all in the interview, but describes SBF as obsessed with work and not caring about much else.
  2. On cars: Caroline Ellison testified at SBF's trial that they were originally assigned luxury cars, but that Bankman-Fried suggested they switch to a Toyota Corolla and a Honda Civic. I can't find any reliable source saying that Bankman-Fried owned a $110k "BMW X7". The source that Thorstad uses is this website, which doesn't look reliable. It seems that the judge who sentenced Bankman-Fried drives a BMW X7, so maybe it was generated from that? In any case, $110k seems like a cheap car for a billionaire, and the fact that it made the list of most luxurious expenditures seems telling.
  3. On restaurants: the fact that they spent up to ~$40 per day per employee (if I'm doing the math right) doesn't seem crazy to me.
  4. On the extravagant lifestyle article: most of the expenses seem to be buying favour with the local authorities. The most extravagant thing for employees seems to be this:

employees could request any groceries they wanted twice a week and frequently received comped meals. And there were parties at Albany, which “at some point I got tired of.”
The level of spoilage was such that once, she recalled an FTX employee requesting a pair of toenail clippers over Slack, which was quickly delivered.

 

Even Habryka, who (after the FTX collapse) claimed that SBF "was actually living a quite lavish lifestyle"[1], also claimed that SBF was in many ways frugal. Two years later, as far as I know, none of the people who were close to SBF at the time have described him as lavish.

 

In general, I would encourage a lot of scepticism when reading Thorstad. I think that if he were writing similar articles on any topic besides "EA criticism", people would point out that they are extremely misleading and often straight-up false.

 

  1. ^

    But note that two independent sources told me in private that the post-FTX-collapse comments from Habryka are inconsistent with what he used to say about SBF/FTX as late as April 2022, so I don't know how reliable they are, and afaik there are no public claims from anyone before November 2022 that SBF himself was lavish.

I agree with some of the thrust of this question, but want to flag that I think these sources and this post somewhat conflate FTX being extravagant and SBF personally being so. E.g., if you click through, the restaurant tabs were about DoorDash orders for FTX, not SBF personally. I think it's totally consistent to believe it's worth spending a lot on employee food (especially given they were trying to retain top talent in a difficult location in a high-paying field) while being personally more abstemious.

As an EA at the time (let's say mid-2022), I knew there were aspects of the FTX situation that were very plush. I still believed it was part of SBF's efforts to make as much money as possible for good causes. I had heard SBF say things communicating that he thought it was worth spending a lot in the course of optimizing intensely for the best shot at making a ton of money in the long run, and that he was generally skeptical of the impact of aiming for frugality. My impression at the time was indeed that the Corolla was a bit of a gimmick (and that the beanbag was about working longer, not saving money), but that SBF was genuinely very altruistic and giving his wealth away extremely quickly by the standards of new billionaires.

I don't think I saw the 80k thing in particular at the time.

I think that EA outreach can be net positive in a lot of circumstances, but there is one version of it that always makes me cringe: the targeting of really young people (for this quick take, anyone under 20). This would basically include any high school targeting and most early-stage college targeting. I dislike it for two reasons: 1) it feels a bit like targeting the young/naive in a way I wish we did not have to, given the quality of our ideas, and 2) these folks are typically far from making a real impact, and there is lots of time for them to lose interest or get lost along the way.

Interestingly, this stands in contrast to my personal experience—I found EA when I was in my early 20s and would have benefited significantly from hearing about it in my teenage years.

Eh, I'm with Aristotle on this one: it's better to start early with moral education. If anything, I think EA leaves it too late. We should be thinking about how to encourage the virtues of scope-sensitive beneficentrism (obviously not using those terms!) starting in early childhood.

(Or, rather, since most actual EAs aren't qualified to do this, we should hope to win over some early childhood educators who would be competent to do this!)

Are you imagining this being taught to children in a philosophy class, alongside topics like virtue ethics etc., or do you think that "scope-sensitive beneficentrism" should be taught just as students are taught the golden rule and not to bully one another?

I think there could be ways of doing both. But yeah, I think the core idea of "it's good to actively help people, and helping more is better than helping less" should be a core component of civic virtue that's taught as plain commonsense wisdom alongside "racism is bad", etc.

I think you raise some good points. Two potential countervailing considerations:

19 year olds are legally adults - they can (varying a bit by country) vote, drink, buy firearms, join the army, get married, raise children.

It's also common for other ideological movements to target much younger people. For example, both environmentalism and feminism are taught in elementary schools.

Can you expand a bit more on why? I found out about EA when I was 23, and I wish I had found out about it at perhaps 16 or 17, or even earlier. It's obviously hard to know, but I think I would have made better and different choices on career path, study, etc., so it's advantageous to learn about EA earlier in life despite being far from making direct impact.

I also suspect (though correct me if I'm wrong) that behind point 1 is an assumption that EA is bad for people's personal welfare. I don't know if this is true.

You highlight a couple of downsides. Far from all of the downsides of course, but none of the advantages either.

I feel a bit sad to read this, since I've worked for years on something related[1] to what you post about. I'm also a bit confused about why you posted this; do you think EAs are underrating these two downsides? (If not, it just feels a bit unnecessarily disparaging to people trying their best to do good in the world.)

Appreciate you highlighting your personal experience though; that's a useful anecdote.

 

  1. ^

    "Targeting of really young people" is certainly not the framing I would use; there's genuine demand for the services that we offer, as demonstrated by the tens of thousands of applications received across Leaf, Non-Trivial, Atlas, Pivotal, and SPARC/ESPR. But it's of course accurate in the sense that our target audience consists of (subsets of) young people.

Hey Jamie, sorry my post made you feel bad. Indeed there are more nuances, and it would be interesting to compile a more thorough pros and cons list on the topic of targeting younger folks. When AIM and I have thought about the pros and cons in more depth, we have tended to come out negative on it; specifically, I do think both value drift and flow-through ecosystem effects on other parts of the movement are on average under-valued by EAs. I wanted to call some attention to these two cons.

Do you think there's a difference between developmentally and otherwise appropriate engagement focused on younger people and problematic targeting? Your statement that the cringe-inducing activities would basically include "most early-stage college targeting" along with "any" targeting at the high school level implies that there may be some difference at the young adult level in your mind, but maybe not at the not-quite-adult level.

My usual approach on these sorts of questions is to broaden the question to include what kinds of stuff I would think appropriate for analogous altruistic/charitable movements, and then decide whether EA has any special features that justify a deviation from that baseline. If I deploy that approach, my baseline would be (e.g.) that there are certainly things that are inappropriate for under-20s but that one could easily extend a norm too broadly. Obviously, the younger the age in question, the less that would be appropriate -- but I don't think I'm left with a categorical bar for engagement directed at under-18s.

(Whether investing resources in under-20s is a strategically wise use of resources is a different question to me, but does not bring up feelings of cringe for me.)

I think the possibility that outreach to younger age groups[1] might be net negative is relatively neglected. That said, the two possible reasons suggested here didn't strike me as particularly conclusive.

The main reasons why I'm somewhat wary of outreach to younger ages (though there are certainly many considerations on both sides):

  • It seems quite plausible that people are less apt to adopt EA at younger ages because their thinking is 'less developed' in some relevant way that seems associated with interest in EA.
    • I think something related to but distinct from your factor (2) could also be an influence here, namely reaching out to people close to the time when they are making relevant decisions might be more effective at engaging people.
  • It also seems possible (though far from certain) that the counterfactual for many people engaged by outreach to younger age groups, is that they could have been reached by outreach targeted at a later date, i.e. many people we reach as high schoolers could simply have been reached once they were at university. 

These questions seem very uncertain, but also empirically tractable, so it's a shame that more hasn't been done to try to address them. For example, it seems relatively straightforward to compare the success rates of outreach targeting different ages. 

We previously did a little work to look at the relationship between the age when people first got involved in EA and their level of engagement. Prima facie, younger age of involvement seemed associated with higher engagement, though there's a relative dearth of people who joined EA at younger ages, making the estimates uncertain (when comparing <20s to early 20s, for example), and we'd need to spend more time on it to disentangle other possible confounds.
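As an illustration of that kind of cohort comparison, a minimal sketch follows. The file name, column names, and 1-5 engagement scale are hypothetical, and the bootstrap intervals are just one way to reflect how sparse the youngest cohort is; it doesn't address the confounds mentioned above:

```python
import numpy as np
import pandas as pd

# Hypothetical survey export: one row per respondent.
df = pd.read_csv("ea_survey.csv")  # columns: age_joined, engagement (1-5)
df["cohort"] = pd.cut(df["age_joined"], bins=[0, 20, 25, 30, 120],
                      labels=["<20", "20-24", "25-29", "30+"])

rng = np.random.default_rng(0)

def boot_ci(values, n_boot=2000):
    """95% bootstrap CI for the mean; wide intervals flag sparse cohorts."""
    arr = np.asarray(values, dtype=float)
    means = [rng.choice(arr, size=len(arr), replace=True).mean()
             for _ in range(n_boot)]
    lo, hi = np.percentile(means, [2.5, 97.5])
    return lo, hi

for cohort, grp in df.groupby("cohort", observed=True):
    lo, hi = boot_ci(grp["engagement"])
    print(f"{cohort}: n={len(grp)}, mean={grp['engagement'].mean():.2f}, "
          f"95% CI [{lo:.2f}, {hi:.2f}]")
```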

 

 

  1. ^

    Or it might be that 'life stages' are the relevant factor rather than age per se, i.e. a younger person who's already an undergrad might have similar outcomes when exposed to EA as a typical-age undergrad, whereas reaching out to people while in high school (regardless of age) might be associated with negative outcomes.

Do you think there's a way to tell the former group apart from people who are closer to your experience (hearing earlier would be beneficial)?

I think a semi-decent amount of broadly targeted adult-based outreach would have resulted in me finding out about EA (e.g., I watched a lot of TED Talks and likely would have found out about EA if it had TED Talks at that point). I also think mediums that are not focused on a given age but also do not penalize someone for it would have been effective. For example, when I was young, I took part in a lot of forums in part because they didn't care about or know my age.
