I agree with your last sentence, and I think in some versions of this it's the vast majority of people. A lot of charity advertising seems to encourage a false sense of confidence, e.g. "Feed this child for $1" or "adopt this manatee". I think this exploits a near-universal human bias which probably has a name, but which I am not recalling at the moment. For a less deceptive version of this, note how much effort AMF and GiveDirectly seem to have put into tracking the concrete impact of your specific donation.
Orthogonally, I think most people are willing to pay more for a more legible/direct theory of impact.
"I give $2800, this kid has lifesaving heart surgery" is certainly more legible and direct than a GiveWell-type charity. In the former case, the donor doesn't have to trust GiveWell's methodologies, data gathering abilities, and freedom from bias. I've invested a significant amount of time and thought into getting to my current high level of confidence in GiveWell's analyses, more time than most people are prepared to spend thinking about their charit...
Building off of Jason's comment: Another way to express this is that comparing directly to the $5,500 GiveWell bar is only fair for risk-neutral donors (I think?). Most potential donors are not really risk neutral, and would rather spend $5,001 to definitely save one life than $5,000 to have a 10% chance of saving 10 lives. Risk neutrality is a totally defensible position, but so is non-neutrality. It's good to have the option of paying a "premium" for higher confidence (but lower risk-neutral EV).
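To make the arithmetic explicit, here's a toy sketch of the comparison (my own illustration, with made-up numbers matching the example above; not any official GiveWell calculation):

```python
# Toy comparison of two hypothetical donation options.
# Illustrative numbers only, taken from the example in the comment above.

def expected_lives_saved(prob: float, lives: int) -> float:
    """Expected value of a donation gamble: success probability times lives saved."""
    return prob * lives

certain = expected_lives_saved(1.0, 1)   # $5,001: definitely save one life
gamble = expected_lives_saved(0.1, 10)   # $5,000: 10% chance of saving ten lives

# Both options have the same expected value (1.0 lives), but the gamble
# costs $1 less, so a strictly risk-neutral donor prefers it per dollar.
# A risk-averse donor may still happily pay the $1 "premium" for certainty.
print(certain, gamble)
```

The point of the sketch is just that the two options are nearly identical in risk-neutral EV terms, so the entire difference in preference comes from one's attitude toward risk.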
Leaving math mode...I love this post. It made me emotiona...
Thanks so much for the encouragement, I really do appreciate it.
Great point! I hadn't thought about risk neutrality vs. non-neutrality here, or that there might be a pool of people even within EA who would rather pay a "premium" for higher confidence. Outside EA, my experience has been that perhaps even the majority of people would prefer to pay for higher confidence.
Very nice post. "Anarchists have no idols" strikes me as very similar to the popular anarchist slogan, "No gods, no masters." Perhaps the person who said it to you was riffing on that?
I think a simpler explanation for his bizarre actions is that he is probably the most stressed-out person on the face of the earth right now. Or he's not seeing the situation clearly, or some combination of the two. Also probably sleep-deprived, struggling to get good advice from people around him, etc.
(This is not meant to excuse any of his actions or words, I think he's 100% responsible for everything he says and does.)
This sort of falls under the second category, "Grantees who received funds, but want to set them aside to return to creditors or depositors." At least that's how I read it, though the more I think about it the more this category is kind of confusing and your wording seems more direct.
I think it'd be preferable to explicitly list as a reason for applying something along the lines of "Grantees who received funds, but want to set them aside to protect themselves from potential clawbacks".
Less importantly, it'd possibly be better to make it separate from "to return to creditors or depositors".
Thanks for the clarification. I agree that the FTX problems are clearly related to crypto being such a new unregulated area, and I was wrong to try to downplay that causal link.
I don't think anonymized donations would help mitigate conflicts of interest. In fact I think it would encourage COIs, since donors could directly buy influence without anyone knowing they were doing so. Currently one of our only tools for identifying otherwise-undisclosed COIs is looking at flows of money. If billionaire A donates to org B, we have a norm that org B shouldn't do st...
Downvoted because I think this is too harsh and accusatory:
I cannot believe that some of you delete your posts simply because it ends up being downvoted.
Also because I disagree in the following ways:
Sorry that the post came off as having a very harsh and accusatory tone. I mainly meant to express my exasperation with how quickly the situation unfolded. I'm worried about how it will affect the community in the coming months and in the long term.
Clearly, revealing who is donating is good for transparency. However, if donations were anonymized from the perspective of the recipients, I think that would help mitigate conflicts of interest. I think there needs to be more dialogue about how we can mitigate conflicts of interest, regardless of whether we a...
Yep this is a great point and overlaps with Vardev's comment. If I thought that the money was gained immorally, it would be pretty bad to just return it to the people who did the immoral thing!
Yeah this seems super relevant, great point! To be honest I'm skeptical of how separate "FTX Foundation, Inc." is/was from the rest of the FTX conglomerate. Would be useful to see the Foundation's finances after this all shakes out.
Put very vaguely: If it turned out that the money BERI received was made through means which I consider to be immoral, then I think I would return the money, even if that meant cancelling the projects it funded.
But of course I don't know where my bar for "immoral" is in this case. Also, it's probably not the case that all of FTX's profits were immoral. So how do I determine (even in theory) whether the money BERI received was part of the "good profits" or the "bad profits"?
What if there were a norm in EA of not accepting large amounts of funding unless a third-party auditor of some sort has done a thorough review of the funder's finances and found them to be above-board? Obviously there are lots of variables in this proposal, but I think something like this is plausibly good, and I would be interested to hear pushback.
I disagree with this. I think we should receive money from basically arbitrary sources, but I think that money should not come with associated status and reputation from within the community. If an old mafia boss wants to buy malaria nets, I think it's much better if they can than if they cannot.
I think the key thing that went wrong was that in addition to Sam giving us money and receiving charitable efforts in return, he also received a lot of status and in many ways became one of the central faces of the EA community, and I think that was quite bad...
I don't know much about how this all works but how relevant do you think this point is?
If Sequoia Capital can get fooled - presumably after more due diligence and apparent access to books than you could possibly have gotten while dealing with the charitable arm of FTX FF that was itself almost certainly in the dark - then there is no reasonable way you could have known.
[Edit: I don't think the OP had included the Eliezer tweet in the question when I originally posted this. My point is basically already covered in the OP now.]
What are the specific things you'd want to see on a transparency page? I think transparency is important, and I try to maintain BERI's transparency page, but I'm wondering if it meets your standards.
I'd guess the reason this was done for comments first is that posts are much longer and more complicated, such that it's often not clear what "agreeing" with the post even means. I think it's plausibly a good feature for posts, but I think it makes a lot more sense for comments.
It might be tough to implement this in a way that doesn't boost linkposts (which I think would be counter to your purpose).
Love this, great work. I especially appreciate your honest opinions on what mistakes you think you made and how the survey could have been improved. If JERIS continues next year, those thoughts will enable a lot of improvement!
Consider adding the Berkeley Existential Risk Initiative (BERI) to the list, either under Professional Services or under Financial and other material support. Suggested description: "Supports university research groups working to reduce x-risk, by providing them with free services and support."
Great post. This put words to some vague concerns I've had lately with people valorizing "agent-y" characteristics. I'm agentic in some ways and very unagentic in other ways, and I'm mostly happy with my impact, reputation, and "social footprint". I like your section on not regulating consumption of finite resources: I think that modeling all aspects of a community as a free market is really bad (I think you agree with this, at least directionally).
This post, especially the section on "Assuming that it is low-cost for others to say 'no' to requests" ...
Good catch, thanks! I can't find my original quote, so I think this was a recent change. I will edit my post accordingly.
Great points, thanks David. I especially like the compare and contrast between personal connections and academic credentials. I think probably you're more experienced with academia and non-EA philanthropy than I am, so your empirical views are different. But I also think that even if EA is better than these other communities, we should still be thinking about (1) keeping it that way, and (2) maybe getting even less reliant. This is part of what I was saying with:
...None of this is unique to EA. While I think EA is particularly guilty of some of these issues,
I think the extent to which "member of the EA community" comes along with a certain way of thinking (i.e. "a lot of useful frames") is exaggerated by many people I've heard talk about this sort of thing. I think ~50% of the perceived similarity is better described as similar ways of speaking and knowledge of jargon. I think there are actually not that many people who have fully internalized new ways of thinking that are (1) very rare outside of EA, and (2) shared across most EA hiring managers.
Another way to put this would be: I think EA hiring managers o...
Explicitly asking for a reference the head organizer knows personally.
That feels pretty bad to me! I can imagine some reason that this would be necessary for some programs, but in general requiring this doesn't seem healthy.
I find the request for references on the EA Funds application to be a good middle ground. There are several sentences to it, but the most relevant one is:
...References by people who are directly involved in effective altruism and adjacent communities are particularly useful, especially if we are likely to be familiar with their work and thi
Thanks Chi, this was definitely a mistake on my part and I will edit the post. I do think that your website's "Get Involved" -> "CLR Fund" might not be the clearest path for people looking for funding, but I also think I should have spent more time looking.
Thanks for the thoughtful feedback Chris!
I think that the author undervalues value alignment and how the natural state is towards one of regression to the norm unless specific action is taken to avoid this
I think there is a difference between "value alignment" and "personal connection". I agree that the former is important, and I think the latter is often used (mostly successfully) as a tool to encourage the former. I addressed one aspect of this in the Hiring Managers section.
...I agree that as EA scales, we will be less able to rely personal relationshi
tension between reliance on personal connections and high rates of movement growth. You take this to be a reason for relying on personal connections less, but one may argue it is a reason for growing more slowly.
I completely agree! I think probably some combination is best, and/or it could differ between subcommunities.
Also thanks for pointing out the FTX Future Fund's experience, I'd forgotten about that. I completely agree that this is evidence against my hypothesis, specifically in the case of grantee-grantor relationships.
Great point about mitigating as opposed to solving. It's possible that my having a "solutions" section wasn't the best framing. I definitely don't think personal connections should be vilified or gotten rid of entirely (if that were even possible), and going too far in this direction would be really bad.
Thanks Stefan! I agree with those strengths of personal connections, and I think there are many others. I mainly tried to argue that there are negative consequences as well, and that the negatives might outweigh the positives at some level of use. Did any of the problems I mentioned in the post strike you as wrong? (Either you think they don't tend to arise from reliance on personal connections, or you think they're not important problems even if they do arise?)
Something that didn't strike me as wrong, but as something worth reflecting more on, is your analysis of the tension between reliance on personal connections and high rates of movement growth. You take this to be a reason for relying on personal connections less, but one may argue it is a reason for growing more slowly.
Another point bearing in mind is that your (correct) observation that many EA orgs do not take general applications may itself be (limited) evidence against your thesis. For example, the Future Fund has made a special effort to test a variet...
I think this is a good idea as a neutral tracking resource, but I might be against it if it had the effect of heaping additional praise on the billionaires. (I don't like Elliot's Impact List idea.) I think transparency is good.
Hi Lucas! If you're still looking, you might consider applying for the Deputy Director position at the Berkeley Existential Risk Initiative. Let me know if you have any questions.
I'm excited to see this happening and I think you're one of the better people to be launching it. I think there's probably some helpful overlap with BERI's world here, so please reach out if you'd like to talk about anything.
The Berkeley Existential Risk Initiative (BERI) is seeking a Deputy Director to help me grow BERI's university collaborations program and create new programs, all with the mission of improving human civilization’s long-term prospects for survival and flourishing.
This is BERI's first "core" hire since I was hired 3 years ago—all of our hires since then are embedded at some particular research group, and aren't responsible for running BERI as an organization.
This is a great opportunity for an early- to mid-career person with some experience and interest in o...
But do oracular funders (e.g. OpenPhil, Future Fund) pay taxes at all, or benefit from tax-deductibility? I'm not clear on this.
In theory it would be great to get a lawyer/money manager from one of these orgs to comment on this, but I don't expect that to happen, so I'm going to give my guess as someone who runs a charity that has gotten money from both of these orgs.
I think most of Open Phil's money is stored at a DAF at SVCF. Dustin presumably got a big tax deduction when he donated to that DAF. Open Phil also sometimes distributes money in other ways, whi...
The relative value of taxes vs donations underlies a lot of EA thinking and doesn't get discussed much, so I'm glad you brought this up. I think it's important how one defines "evading taxes". If we grant the argument that "taxes are not your money" (which is plausible and appeals to me aesthetically), it's pretty critical to identify the "correct amount" of taxes which one owes. I might say the correct amount is whatever the tax authorities say I need to pay, which basically amounts to "whatever I can get away with". Or you might say a bunch of the normal...
Note for readers: At Adam's request I reviewed and approved the section on BERI prior to posting. I feel that it presents BERI accurately, and I can't think of any improvements that would be important enough to include.
I'm excited for FAR's work and I'm glad to see this post!
Today is Asteroid Day. From the website:
Asteroid Day as observed annually on 30 June is the United Nations sanctioned day of public awareness of the risks of asteroid impacts. Our mission is to educate the public about the risks and opportunities of asteroids year-round by hosting events, providing educational resources and regular communications to our global audience on multiple digital platforms.
I didn't know about this until today. Seems like a potential opportunity for more general communication on global catastrophic risks.
Hi Sawyer, thanks for this question! We are in touch with Fønix and think these are both valid approaches in moving the realization of shelters forward.
We believe there is great value in getting together at SHELTER Weekend while being mindful of the fact that there will most probably be different organisations and approaches pushing shelters forward in the future. Ulrik has been warmly invited to come to SHELTER Weekend!
I really enjoyed this post. In addition to being well-written and a nice read, it's packed full with great links supporting and contextualizing your thoughts. Given how much has been written about related topics recently, I was happy you chose to make those connections explicitly. I feel like it helped me understand where you were positioning your own arguments in the wider conversation.
This might be a disagreement about whether or not it's appropriate to use "infinity" as a number (i.e. a value). Mathematically, if a function approaches infinity as the input approaches infinity, I think you're typically supposed to say the limit is "undefined", as opposed to saying the limit is "infinity". So whether this is (a) underselling it or (b) just writing accurately depends on the audience.
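For what it's worth, a sketch of the standard-notation distinction I have in mind (my phrasing, not anything from the post being discussed):

```latex
% We commonly write
\lim_{x \to \infty} f(x) = \infty ,
% but this is shorthand for unbounded growth:
% for every $M$ there exists $N$ such that $x > N \implies f(x) > M$.
% The limit does not exist as a real number; ``$\infty$'' here is
% notation for a mode of divergence, not a value in $\mathbb{R}$.
```

So "the limit is infinity" and "the limit is undefined (in the reals)" are both defensible ways of describing the same situation, which is why audience matters.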
SBF's Protect Our Future PAC has put more than $7M towards Flynn's campaign. I think this is what _pk and others are concerned about, not direct donations. And this is what most people concerned with "buying elections" are concerned about. (This is what the Citizens United controversy is about.)
That seems like a positive adjustment from my perspective! I think the interviews are valuable content, so I'd still encourage you to add a link to the interview, with the name and topic of the person you interviewed. That way interested Substack readers will still see it.
As Nathan Young mentioned in his comment, this argument is also similar to Carl Shulman's view expressed in this podcast: https://80000hours.org/podcast/episodes/carl-shulman-common-sense-case-existential-risks/
(Assuming you mean "rot")
As far as specific needs, nothing very specific. Sometimes I wonder how much overlap in grantees there is between the different grantmakers, and having all the grants in one table where they can be collectively sorted and filtered would make that easier to check. I just generally think it's good to have transparency in grantmaking, and a single source that covers >90% of what people might consider "EA grantmaking" is more transparent than asking people to look at several different HTML tables or non-tabular lists.
Why might one believe that MacAskill and Ord's idea of The Long Reflection is actually a bad idea, or impossible, or that it should be dropped from longtermist discourse for some other reason?
Robin Hanson's argument here: https://www.overcomingbias.com/2021/10/long-reflection-is-crazy-bad-idea.html
Very cool! I'm super happy that this exists, and I'm excited by this first issue. On the constructive criticism side, I think this is too long for a newsletter. I think it's unlikely that I fully read future editions, and if they're all this long, I might unsubscribe at some point. So consider this one vote for trying to make the newsletter shorter :)
Thanks Yonatan, this is great! Glad to see this was so straightforward, I appreciate you putting it together. Misha seems to have taken care of the EA Funds part, at least up to mid-2021, so we're getting close. I'm planning to merge them in one direction or another.
Very cool Misha, thanks! Do you plan to keep this updated over time? If so, I think integrating Yonatan's sheet into this Airtable (or vice versa) would already accomplish most of what I was looking for.
Within EA, work on x-risk is very siloed by type of threat: There are the AI people, the bio people, etc. Is this bad, or good?
Which of these is the correct analogy?
EAs seem to implicitly think analogy 1 is correct: some interdisciplinary work is nice (biophysics) but most biologists can just be biologists (i.e. most AI x-risk people can just do AI).
The "existential risk studies" model (popular with CSER, SERI, and lots of other non-EA academics) ... (read more)