Thanks for writing this! I have a more philosophical counter that I'd love for you to respond to.
The idea of haggling doesn't sit well with me or my idea of what a good society should be like. It feels competitive, uncooperative, and zero-sum, when I want to live in a society where people are honest and cooperative. Specifically, it seems to encourage deceptive pricing and reward people who are willing to be manipulative and stretch the truth.
In other words, haggling gives me bad vibes.
When you think about haggling/negotiating in altruistic context, do you...
I agree that those companies are worth distinguishing. I just think calling them "labs" is a confusing way to do so. If the purpose was only to distinguish them from other AI companies, you could call them "AI bananas" and it would be just as useful. But "AI bananas" is unhelpful and confusing. I think "AI labs" is the same (to a lesser but still important degree).
I think this is a useful distinction, thanks for raising it. I support terms like "frontier AI company," "company making frontier AI," and "company making foundation models," all of which help distinguish OpenAI from Palantir. Also it seems pretty likely that within a few years, most companies will be AI companies!? So we'll need new terms. I just don't want that term to be "lab".
Another thing you might be alluding to is that "lab" is less problematic when talking to people within the AI safety community, and more problematic the further out you go. I thi...
Interesting point! I'd be OK with people calling them "evil mad scientist labs," but I still think the generic "lab" has more of a positive, harmless connotation than this negative one.
I'd also be more sympathetic to calling them "labs" if (1) we had actual regulations around them or (2) they were government projects. Biosafety and nuclear weapons labs have a healthy reputation for being dangerous and unfriendly, in a way "computer labs" do not. Also, private companies may have biosafety containment labs on premises, and the people working within them are ...
There are many examples of organizations with high funding transparency, including BERI (which I run), ACE, and MIRI (transparency page and top contributors page).
I think this dynamic is generally overstated, at least in the existential risk space that I work in. I've personally asked all of our medium and large funders for permission, and the vast majority of them have given permission. Most of the funding comes from Open Philanthropy and SFF, both of which publicly announce all of their grants—when recipients decided not to list those funders, it's not because the funders don't want them to. There are many examples of organizations with high funding transparency, including BERI (which I run), ACE, and MIRI (transp...
Nonprofit organizations should make their sources of funding really obvious and clear: how much money they received from which grantmakers, and approximately when. Any time I go on some org's website and can't find information about their major funders, it's a big red flag. At a bare minimum they should have a list of funders, and I'm confused why more orgs don't do this.
I think people would say that the dog was stronger and faster than all previous dog breeds, not that it was "more capable". It's in fact significantly less capable at not attacking its owner, which is an important dog capability. I just think the language of "capability" is somewhat idiosyncratic to AI research and industry, and I'm arguing that it's not particularly useful or clarifying language.
More to my point (though probably orthogonal to your point), I don't think many people would buy this dog, because most people care more about not getting attacke...
What is "capabilities"? What is "safety"? People often talk about the alignment tax: the magnitude of capabilities/time/cost a developer loses by implementing an aligned/safe system. But why should we consider an unaligned/unsafe system "capable" at all? If someone developed a commercial airplane that went faster than anything else on the market, but it exploded on 1% of flights, no one would call that a capable airplane.
This idea overlaps with safety culture and safety engineering and is not new. But alongside recent criticism of the terms "safety" and "alignment", I'm starting to think that the term "capabilities" is unhelpful, capturing different things for different people.
This is truly crushing news. I met Marisa at a CFAR workshop in 2020. She was open, kind, and grateful to everyone, and it was joyful to be around her. I worked with her a bit revitalizing the EA Operations Slack Workspace in 2020, and had only had a few conversations with her since then, here and there at EA events. Marisa (like many young EAs) made me excited for a future that would benefit from her work, ambition, and positivity. Now she's gone. She was a good person, I'm glad she was alive, and I am so sad she's gone.
One thing I think is often missing from these sorts of conversations is that "alignment with EA" and "alignment with my organization's mission" are not the same thing! It's a mistake to assume that the only people who understand and believe in your organization’s mission are members of the effective altruism community. EA ideas don’t have to come in a complete package. People can believe that one organization’s mission is really valuable and important, for different reasons, coming from totally different values, and without also believing that a bunch of o...
Within EA, work on x-risk is very siloed by type of threat: There are the AI people, the bio people, etc. Is this bad, or good?
Which of these is the correct analogy?
EAs seem to implicitly think analogy 1 is correct: some interdisciplinary work is nice (biophysics) but most biologists can just be biologists (i.e. most AI x-risk people can just do AI).
The "existential risk studies" model (popular with CSER, SERI, and lots of other non-EA academics) ...
I agree with your last sentence, and I think in some versions of this it's the vast majority of people. A lot of charity advertising seems to encourage a false sense of confidence, e.g. "Feed this child for $1," or "adopt this manatee". I think this makes use of a near-universal human bias which probably has a name but which I am not recalling at the moment. For a less deceptive version of this, note how much effort AMF and GiveDirectly seem to have put into tracking the concrete impact of your specific donation.
Orthogonally, I think most people are willing to pay more for a more legible/direct theory of impact.
"I give $2800, this kid has lifesaving heart surgery" is certainly more legible and direct than a GiveWell-type charity. In the former case, the donor doesn't have to trust GiveWell's methodologies, data gathering abilities, and freedom from bias. I've invested a significant amount of time and thought into getting to my current high level of confidence in GiveWell's analyses, more time than most people are prepared to spend thinking about their charit...
Building off of Jason's comment: Another way to express this is that comparing directly to the $5,500 GiveWell bar is only fair for risk-neutral donors (I think?). Most potential donors are not really risk neutral, and would rather spend $5,001 to definitely save one life than $5,000 to have a 10% chance of saving 10 lives. Risk neutrality is a totally defensible position, but so is non-neutrality. It's good to have the option of paying a "premium" for a higher confidence (but lower risk-neutral EV).
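To make the arithmetic explicit (a minimal sketch, using the hypothetical numbers above):

```latex
% Risk-neutral expected value of the two hypothetical options above
\[
E[\text{lives} \mid \text{certain}] = 1 \ \text{for } \$5001, \qquad
E[\text{lives} \mid \text{risky}] = 0.1 \times 10 = 1 \ \text{for } \$5000
\]
```

A risk-neutral donor is essentially indifferent (and in fact slightly prefers the risky option, since it buys the same expected number of lives for $1 less); the extra $1 is the "premium" a risk-averse donor pays for certainty.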
Leaving math mode...I love this post. It made me emotiona...
Thanks so much for the encouragement, really do appreciate it.
Great point! I hadn't thought about risk neutrality vs. non-neutrality here, or that there might be a pool of people even within EA who would rather pay a "premium" for higher confidence. Outside EA, my experience has been that perhaps even the majority of people would prefer to pay for higher confidence.
I think a simpler explanation for his bizarre actions is that he is probably the most stressed-out person on the face of the earth right now. Or he's not seeing the situation clearly, or some combination of the two. Also probably sleep-deprived, struggling to get good advice from people around him, etc.
(This is not meant to excuse any of his actions or words, I think he's 100% responsible for everything he says and does.)
I think it'd be preferable to explicitly list, as a reason for applying, something along the lines of "Grantees who received funds but want to set them aside to protect themselves from potential clawbacks".
Less importantly, it'd possibly be better to make it separate from "to return to creditors or depositors".
Thanks for the clarification. I agree that the FTX problems are clearly related to crypto being such a new unregulated area, and I was wrong to try to downplay that causal link.
I don't think anonymized donations would help mitigate conflicts of interest. In fact I think it would encourage COIs, since donors could directly buy influence without anyone knowing they were doing so. Currently one of our only tools for identifying otherwise-undisclosed COIs is looking at flows of money. If billionaire A donates to org B, we have a norm that org B shouldn't do st...
Downvoted because I think this is too harsh and accusatory:
I cannot believe that some of you delete your posts simply because it ends up being downvoted.
Also because I disagree in the following ways:
Sorry that the post came off as having a very harsh and accusatory tone. I mainly meant to express my exasperation with how quickly the situation unfolded. I'm worried about the coming months and how they will affect the community, both now and in the long term.
Clearly, revealing who is donating is good for transparency. However, if donations were anonymized from the perspective of the recipients, I think that would help mitigate conflicts of interest. I think there needs to be more dialogue about how we can mitigate conflicts of interest, regardless of whether we a...
Put very vaguely: If it turned out that the money BERI received was made through means which I consider to be immoral, then I think I would return the money, even if that meant cancelling the projects it funded.
But of course I don't know where my bar for "immoral" is in this case. Also it's probably not the case that all of FTX's profits were immoral. So how do I determine (even in theory) whether the money BERI received was part of the "good profits" or the "bad profits"?
What if there were a norm in EA of not accepting large amounts of funding unless a third-party auditor of some sort has done a thorough review of the funder's finances and found them to be above-board? Obviously there are lots of variables in this proposal, but I think something like this is plausibly good, and I would be interested to hear pushback.
I disagree with this. I think we should receive money from basically arbitrary sources, but I think that money should not come with associated status and reputation from within the community. If an old mafia boss wants to buy malaria nets, I think it's much better if they can than if they cannot.
I think the key thing that went wrong was that in addition to Sam giving us money and receiving charitable efforts in return, he also received a lot of status and in many ways became one of the central faces of the EA community, and I think that was quite bad...
I don't know much about how this all works, but how relevant do you think this point is?
If Sequoia Capital can get fooled - presumably after more due diligence and apparent access to books than you could possibly have gotten while dealing with the charitable arm of FTX FF that was itself almost certainly in the dark - then there is no reasonable way you could have known.
[Edit: I don't think the OP had included the Eliezer tweet in the question when I originally posted this. My point is basically already covered in the OP now.]
What are the specific things you'd want to see on a transparency page? I think transparency is important, and I try to maintain BERI's transparency page, but I'm wondering if it meets your standards.
Consider adding the Berkeley Existential Risk Initiative (BERI) to the list, either under Professional Services or under Financial and other material support. Suggested description: "Supports university research groups working to reduce x-risk, by providing them with free services and support."
Great post. This put words to some vague concerns I've had lately with people valorizing "agent-y" characteristics. I'm agentic in some ways and very unagentic in other ways, and I'm mostly happy with my impact, reputation, and "social footprint". I like your section on not regulating consumption of finite resources: I think that modeling all aspects of a community as a free market is really bad (I think you agree with this, at least directionally).
This post, especially the section on "Assuming that it is low-cost for others to say 'no' to requests" ...
Great points, thanks David. I especially like the compare and contrast between personal connections and academic credentials. I think you're probably more experienced with academia and non-EA philanthropy than I am, so your empirical views are different. But I also think that even if EA is better than these other communities, we should still be thinking about (1) keeping it that way, and (2) maybe becoming even less reliant on personal connections. This is part of what I was saying with:
...None of this is unique to EA. While I think EA is particularly guilty of some of these issues,
I think the extent to which "member of the EA community" comes along with a certain way of thinking (i.e. "a lot of useful frames") is exaggerated by many people I've heard talk about this sort of thing. I think ~50% of the perceived similarity is better described as similar ways of speaking and knowledge of jargon. I think there are actually not that many people who have fully internalized new ways of thinking that are (1) very rare outside of EA, and (2) shared across most EA hiring managers.
Another way to put this would be: I think EA hiring managers o...
Explicitly asking for a reference the head organizer knows personally.
That feels pretty bad to me! I can imagine some reason that this would be necessary for some programs, but in general requiring this doesn't seem healthy.
I find the request for references on the EA Funds application to be a good middle ground. There are several sentences to it, but the most relevant one is:
...References by people who are directly involved in effective altruism and adjacent communities are particularly useful, especially if we are likely to be familiar with their work and thi
Thanks for the thoughtful feedback Chris!
I think that the author undervalues value alignment and how the natural state is towards one of regression to the norm unless specific action is taken to avoid this
I think there is a difference between "value alignment" and "personal connection". I agree that the former is important, and I think the latter is often used (mostly successfully) as a tool to encourage the former. I addressed one aspect of this in the Hiring Managers section.
...I agree that as EA scales, we will be less able to rely personal relationshi
tension between reliance on personal connections and high rates of movement growth. You take this to be a reason for relying on personal connections less, but one may argue it is a reason for growing more slowly.
I completely agree! I think probably some combination is best, and/or it could differ between subcommunities.
Also thanks for pointing out the FTX Future Fund's experience, I'd forgotten about that. I completely agree that this is evidence against my hypothesis, specifically in the case of grantee-grantor relationships.
Great point about mitigating as opposed to solving. It's possible that my having a "solutions" section wasn't the best framing. I definitely don't think personal connections should be vilified or gotten rid of entirely (if that were even possible), and going too far in this direction would be really bad.
Thanks Stefan! I agree with those strengths of personal connections, and I think there are many others. I mainly tried to argue that there are negative consequences as well, and that the negatives might outweigh the positives at some level of use. Did any of the problems I mentioned in the post strike you as wrong? (Either you think they don't tend to arise from reliance on personal connections, or you think they're not important problems even if they do arise?)
Something that didn't strike me as wrong, but as something worth reflecting more on, is your analysis of the tension between reliance on personal connections and high rates of movement growth. You take this to be a reason for relying on personal connections less, but one may argue it is a reason for growing more slowly.
Another point worth bearing in mind is that your (correct) observation that many EA orgs do not take general applications may itself be (limited) evidence against your thesis. For example, the Future Fund has made a special effort to test a variet...
Thanks for writing and posting this! I've had these sorts of feelings floating around in my head for a while, but this is the best term I've heard for it.