This is a special post for quick takes by Austin. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Anthropic's donation program seems to have been recently pared down? I recalled it as 3:1; see eg this comment from Feb 2023. But right now on https://www.anthropic.com/careers:
> Optional equity donation matching at a 1:1 ratio, up to 25% of your equity grant

Curious if anyone knows the rationale for this -- I'm thinking through how to structure Manifund's own compensation program to tax-efficiently encourage donations, and was looking at the Anthropic program for inspiration.

I'm also wondering whether existing Anthropic employees still get the 3:1 terms, or whether the program has been changed for everyone going forward. Given the rumored $60b raise, Anthropic equity donations are set to be a substantial share of future EA giving, so the precise mechanics of the donation program could change funding considerations by a lot.

One (conservative imo) ballpark:

  • If founders + employees broadly own 30% of outstanding equity
  • 50% of that has been assigned and vested
  • 20% of employees will donate
  • 20% of their equity within the next 4 years

then $60b x 0.3 x 0.5 x 0.2 x 0.2 / 4 = $90m/y. And the difference between 1:1 and 3:1 match is the difference between $180m/y of giving and $360m/y.
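To make the arithmetic explicit, here's a minimal sketch of that back-of-envelope estimate (all inputs are the assumptions from the bullets above, not real data; a 1:1 match doubles the donated amount and a 3:1 match quadruples it):

```python
# Back-of-envelope estimate of annual donations out of Anthropic equity.
# Every input is a rough assumption from the bullets above, not real data.
valuation = 60e9          # rumored valuation, $
insider_share = 0.30      # founders + employees own ~30% of outstanding equity
vested_share = 0.50       # fraction assigned and vested
donor_fraction = 0.20     # fraction of employees who donate
donated_fraction = 0.20   # fraction of their equity donated over the period
years = 4

donated = valuation * insider_share * vested_share * donor_fraction * donated_fraction / years
print(f"Donated equity: ${donated / 1e6:.0f}m/y")        # ~$90m/y
print(f"With 1:1 match: ${2 * donated / 1e6:.0f}m/y")    # ~$180m/y
print(f"With 3:1 match: ${4 * donated / 1e6:.0f}m/y")    # ~$360m/y
```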

It's been confirmed that the donation matching still applies to early employees: https://www.lesswrong.com/posts/HE3Styo9vpk7m8zi4/evhub-s-shortform?commentId=oeXHdxZixbc7wwqna 

I would be surprised if the 3:1 match applied to founders as well. Also, I think 20% of employees donating 20% of their equity within the next 4 years is very optimistic.

My guess is that donations from Anthropic/OpenAI will depend largely on what the founders decide to do with their money. Forbes estimates Altman and Daniela Amodei at ~$1B each, and Altman signed the Giving Pledge.


See also this article from Jan 8: 

> At Anthropic’s new valuation, each of its seven founders — [...] — are set to become billionaires. Forbes estimates that each cofounder will continue to hold more than 2% of Anthropic’s equity each, meaning their net worths are at least $1.2 billion.

I don't think Forbes' numbers are particularly reliable, and I think there's a significant chance that Anthropic and/or OpenAI equity goes to 0; but in general, I expect founders both to have much more money than employees and to be more inclined to donate significant parts of it (partly because of the diminishing marginal returns of wealth).

It's a good point about how it applies to founders specifically - under the old terms (3:1 match up to 50% of stock grant) it would imply a maximum extra cost from Anthropic of 1.5x whatever the founders currently hold. That's a lot! 

Those bottom-line figures don't seem crazy optimistic to me, though - like, my guess is a bunch of folks at Anthropic expect AGI inside of 4 years, and Anthropic is the go-to example of "founded by EAs". I would take an even-odds bet that the total amount donated to charity out of Anthropic equity, excluding matches, is >$400m in 4 years' time.

> I would take an even-odds bet that the total amount donated to charity out of Anthropic equity, excluding matches, is >$400m in 4 years' time.

If Anthropic doesn't lose >85% of its valuation (which can definitely happen), I would expect way more.

As mentioned above, each of its seven cofounders is likely to become worth >$500m, and I would expect many of them to donate significantly.

 

> Anthropic is the go-to example of "founded by EAs"

I find these kinds of statements a bit weird. My sense is that it used to be true, but they don't necessarily identify with the EA movement anymore: it's never mentioned in interviews, and when asked by journalists they explicitly deny it.

Some reflections on the Manifest 2024 discourse:

  1. I’m annoyed (with “the community”, but mostly with human nature & myself) that this kind of drama gets so much more attention than eg typical reviews of the Manifest experience, or our retrospectives of work on Manifund, which I wish got even 10% of this engagement. It's fun to be self-righteous on the internet, fun to converse with many people who I respect, fun especially when they come to your defense (thanks!), but I feel guilty at the amount of attention this has sucked up for everyone involved.

    This bit from Paul Graham makes a lot more sense to me now:

    > When someone contradicts you, they're in a sense attacking you. Sometimes pretty overtly. Your instinct when attacked is to defend yourself. But like a lot of instincts, this one wasn't designed for the world we now live in. Counterintuitive as it feels, it's better most of the time not to defend yourself. Otherwise these people are literally taking your life.

    Kudos to all y'all who are practicing the virtue of silence and avoiding engaging with this.
  2. While it could have been much, much better written, on net I’m glad the Guardian article exists. And not just in an "all PR is good PR" sense, or even a “weak opponents are superweapons” sense; I think there's a legitimate concern there that's worthy of reporting. I like the idea of inviting the journalists to come to Manifest in the future.
  3. That said, I am quite annoyed that many people who didn’t attend Manifest may now think of it as "Edgelordcon". I once again encourage people who weren't there to look at our actual schedule, or to skim some of the many, many post-Manifest reports, to get a more representative sense of what Manifest is like or about.
  4. If Edgelordcon is what you really wanted, consider going to something like Hereticon instead of Manifest, thanks.
  5. Not sure how many people already know this, but I formally left Manifold a couple months ago. I'm the most comfortable writing publicly out of the 3 founders, and while I'm still on the company board, I expect Manifold's views and my own to diverge more over time.
  6. Also, Rachel and Saul were much more instrumental in making Manifest 2024 happen than me. Their roles were approximately co-directors, while I'm more like a producer of the event. So most of the credit for a well-run event goes to them; I wish more people engaged with their considerations, rather than mine. (Blame for the invites, as I mentioned, falls on me.)
  7. EA Forum is actually pretty good for having nuanced discussion: the threading, upvote vs agreevote, and reactions all help compared to other online discourse. Kudos to the team! (Online text-based discourse does remain intrinsically more divisive than offline, though, which I don't love. I wish more people eg took Saul up on his offer to call with folks.)
  8. Overall my impression of the state of the EA community has ticked upwards as a result of all this. I’m glad to be here!
  9. Some of my favorite notes amidst all this: Isa, huw, TracingWoodgrains, and Nathan Young on their experiences, Richard Ngo against deplatforming, Jacob and Oli on their thoughts, Bentham's Bulldog and Theo Jaffee on their defenses of the event, and Saul and Rachel on their perspectives as organizers.

cosigned, generally.

most strongly, i agree with:

  • (1), (3), (4)

i also somewhat agree with:

  • (2), (7), (8), (9)

[the rest of this comment is a bit emotional, a bit of a rant/ramble. i don't necessarily reflectively endorse the below, but i think it pretty accurately captures my state of mind while writing.]

but man, people can be mean. twitter is a pretty low bar, and although the discourse on twitter isn't exactly enjoyable, my impression of the EA forum has also gone down over the last few days. most of the comments that critique my/rachel's/austin's decisions (and many of the ones supporting our decisions!) have made me quite sad/anxious/ashamed in ways i don't endorse — and (most) have done ~nothing to reduce the likelihood that i invite speakers who the commenters consider racist to the next manifest.

i'm a little confused about the goals of a lot of the folks who're commenting. like, their (your?) marginal 20 minutes would be WAY more effective by... idk, hopping on a call with me or something?[1]  [june23-2024 — edit: jeff's comment has explained why: yes, 1:1 discussion with me is better for the goal of improving/changing manifest's decisions, but many of the comments are "trying to hash out what EA community ... norms should be in this sort of situation, and that seems ... reasonably well suited for public discussion."]

there have been a few comments that are really great, both some that are in support of our decisions & some that are against them — austin highlighted a few that i had in mind, like Isa's and huw's. and, a few folks have reached out independently to offer their emotional support, which is really kind of them. these are the things that make me agree with (8): i don't think that, in many communities, folks who might disagree with me on the object level would offer their emotional support for me on the meta-level.

i'm grateful to the folks who're disagreeing (& agreeing) with me constructively; to everyone else... idk, man, at least hold off on commenting until you've given me a call or let me buy you a coffee or something. [june23-2024 — see edit above]

  1. ^

    and i would explicitly encourage you, dear reader, to do so! please! i would like to talk to you much more than i would like to read your comment on the EA forum, and way more than i'd like to read your twitter post! i would very much like to adjust my decision-making process to be better, and insofar as you think that's good, please do so through a medium that's much higher bandwidth!

> i'm a little confused about the goals of a lot of the folks who're commenting. like, their (your?) marginal 20 minutes would be WAY more effective by... idk, hopping on a call with me or something?

To the extent that people are trying to influence future Manifest decisions or your views in particular, I agree that 1:1 private discussion would often be better. But I read a lot of the discussion as people trying to hash out what EA community (broadly construed) norms should be in this sort of situation, and that seems to me like it's reasonably well suited for public discussion?

thanks, this has cleared things up quite a bit for me. i edited my comment to reflect it!

I’d strongly recommend against inviting them. If they decide to come, then I’d probably let them, but intentionally bringing in people who want to stir up drama is a bad idea and would ruin the vibes.

Fwiw, I think the main thing getting missed in this discourse is that if even 3 out of your 50 speakers (especially if they're near the top of the bill) are mostly known for a cluster of edgy views that are not welcome in most similar spaces, then people who really want to gather to discuss those edgy and typically unwelcome views will be a seriously disproportionate share of attendees, and this will have significant repercussions for the experience of the attendees who were primarily interested in the other 47 speakers.

> Missing-but-wanted children now substantially outnumber unwanted births. Missing kids are a global phenomenon, not just a rich-world problem. Multiplying out each country’s fertility gap by its population of reproductive age women reveals that, for women entering their reproductive years in 2010 in the countries in my sample, there are likely to be a net 270 million missing births—if fertility ideals and birth rates hold stable. Put another way, over the 30 to 40 years these women would potentially be having children, that’s about 6 to 10 million missing babies per year thanks to the global undershooting of fertility.

https://ifstudies.org/blog/the-global-fertility-gap
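A quick sanity check of the per-year figure quoted above (the 270M total is taken directly from the article; the window length is the only variable):

```python
# The article's 270M net missing births, spread over a 30-40 year reproductive window.
missing_births = 270e6

for window_years in (30, 40):
    per_year = missing_births / window_years / 1e6
    print(f"{window_years}-year window: ~{per_year:.1f}m missing births/year")
# ~9.0m/y over 30 years, ~6.8m/y over 40 years -- in line with the quoted 6-10m/y
```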

For reference - malaria kills 600k a year. Covid has killed 6m to date.

If you believe creating an extra life is worth about the same as preventing an extra death (very controversial, but I hold something like this), then increasing fertility is an excellent cause area.

What's the QALY cost of the sanctions on Russia? How does it compare to the QALY lost in the Ukraine conflict?

My sense of the media narrative has been "Russia/Putin bad, Ukraine good, sanctions good". But if you step back (a lot) and squint, both direct warfare and economic sanctions share the property of being negative-sum transactions. Has anyone done an order-of-magnitude calculation for the cost of this?

(extremely speculative)

Quick stab: valuing one QALY at $100k (rough figure for the US), Russian GDP was $1.4T and the ruble has lost 30% of its value. If we take that to correspond to a 10% GDP contraction, $140B / $100k = 1.4M QALYs lost; if 80 QALYs = 1 life, then 17.5k lives lost.
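The same chain of (very rough) assumptions, written out:

```python
# Order-of-magnitude sketch of the quick stab above; every input is a rough assumption.
qaly_value = 100_000        # $ per QALY, rough US figure
russian_gdp = 1.4e12        # $ per year, pre-war
contraction = 0.10          # assumed GDP contraction
qalys_per_life = 80         # QALYs per statistical life

gdp_loss = russian_gdp * contraction             # $140B
qalys_lost = gdp_loss / qaly_value               # 1.4M QALYs
lives_equivalent = qalys_lost / qalys_per_life   # 17,500 lives
print(f"~{qalys_lost / 1e6:.1f}M QALYs lost, equivalent to {lives_equivalent:,.0f} lives")
```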

Edit re: downvotes for OP: to clarify, I support the downvotes and don't endorse the premise of the question - damage to the Russian economy and its indirect health effects are not the dominant consideration here. Because Ukraine will suffer much more, the question's premise is naive and insensitive. I tried to answer this because I wanted to show how much indirect harm Putin inflicted on Russia by starting this war, which might outweigh the direct casualties on the Russian side.

Countries usually value a QALY at 1-3x their GDP per capita.

But also, GDP reduction and QALYs might not be commensurable in that way...

I have a more detailed note on diminishing returns here. In brief, a simple rule of thumb under logarithmic utility is that a dollar is worth 1/X times as much if you are X times richer. So doubling someone's income is worth the same amount no matter where they start, and if GDP per capita is $10k, a $1 reduction is 10x less bad than at the $1k mark. In other words, people would probably rather give up money than health on the current margin.
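A minimal illustration of that rule of thumb (log utility; the incomes are arbitrary example values):

```python
import math

# Under log utility u(c) = ln(c), the marginal value of a dollar is 1/c,
# so someone X times richer values a marginal dollar 1/X as much.
def marginal_dollar_value(income):
    return 1 / income

# $1 at $1k income is 10x as valuable as $1 at $10k income:
print(marginal_dollar_value(1_000) / marginal_dollar_value(10_000))  # 10.0

# And doubling income adds the same utility wherever you start:
print(math.log(2_000) - math.log(1_000))     # ~0.693
print(math.log(20_000) - math.log(10_000))   # ~0.693
```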

But there are ways to calculate this and it's probably gonna be bad...

One Lancet study suggests that the 2008 economic crisis caused 0.5 million excess cancer-related deaths worldwide. This is just cancer, which is about 15% of global mortality, so a naive extrapolation might suggest mortality figures in the millions. There are 50m deaths per year globally, so maybe there was a 10% increase.

Russia has about 2m deaths per year.

GDP loss is projected to be similar to 2008 or Covid.

https://vizhub.healthdata.org/gbd-compare/ 

https://www.sciencedirect.com/science/article/pii/S0140673618314855 

https://ars.els-cdn.com/content/image/1-s2.0-S0140673618314855-mmc1.pdf
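Putting those figures together as a naive order-of-magnitude extrapolation (the final Russia step is my own extension of the numbers above, not a claim from the study):

```python
# Naive extrapolation from the Lancet cancer figure to all-cause mortality.
excess_cancer_deaths = 0.5e6        # excess cancer deaths linked to the 2008 crisis
cancer_share_of_mortality = 0.15    # cancer is ~15% of global mortality
global_deaths_per_year = 50e6
russia_deaths_per_year = 2e6

excess_all_cause = excess_cancer_deaths / cancer_share_of_mortality   # ~3.3M
increase = excess_all_cause / global_deaths_per_year                  # ~7%, order of 10%
print(f"Implied global excess deaths: ~{excess_all_cause / 1e6:.1f}M (~{increase:.0%} of annual deaths)")

# If Russia saw a similar ~10% bump, that would be on the order of 200k extra deaths/year.
print(f"Russia at a 10% bump: ~{0.10 * russia_deaths_per_year:,.0f} extra deaths/year")
```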

Thank you for taking the time to write this response!

I'm not exactly sure what premise downvoters are reading into my question. To be clear, I think the war is a horrible idea and it's important to punish defection in a negative-sum way (e.g. imposing sanctions on countries that violate international law).

The main point I wanted to entertain was: it's sad when we have to impose sanctions on countries; lots of people will suffer. In the same way it's sad when a war is fought, and lots of people suffer. We should be careful not to treat economic punishment as qualitatively different from or intrinsically superior to direct violence; it's a question of how much net utility different responses produce for the world.

Thanks for clarifying - fwiw I didn't think you were ill-intentioned... and at its core your question re: innocent Russians suffering due to sanctions is a valid one - as you say, all suffering counts equally, independent of who suffers (and Russians will definitely suffer much more than most people living relatively affluent lives in the west). But because Ukrainians are currently suffering disproportionately more than Russians, the question might have struck some people as tone-deaf or inappropriate. Even setting aside the terrible direct humanitarian impact of the war, consider that Russia's GDP per capita was ~$10k while Ukraine's was ~$3k before the war, and Ukraine will take a much bigger hit to its economy.
