Meta is paying billions of dollars to recruit people with proven experience developing relevant AI models.
Does the set of "people with proven experience in building AI models" overlap with "people who defer to Eliezer on whether AI is safe" at all? I doubt it.
Indeed, given that Yudkowsky's arguments on AI are not universally admired, and that people who have made a career of building the thing he says will kill everyone are particularly likely to be sceptical of his convictions on that issue, an endorsement from him might even be net negative.
The opportunity cost only exists for those with a high chance of securing comparable-level roles at AI companies, or very senior roles at non-AI companies, in the near future. Clearly this applies to some people working in AI capabilities research,[1] but if you wish to imply it applies to everyone working at MIRI and similar AI research organizations, I think the burden of proof actually rests on you. As for Eliezer, I don't think his motivation for dooming is profit, but it's beyond dispute that dooming is profitable for him. Could he earn orders of magnitude more money from building benevolent superintelligence based on his decision theory, as he once hoped to? Well yes, but it'd have to actually work.[2]
Anyway, my point was less to question MIRI's motivations or Thomas' observation that Nate could earn at least as much if he decided to work for a pro-AI organization, and more to point out that (i) no, really, those industry-norm salaries are very high compared with pretty much any quasi-academic research job not related to treating superintelligence as imminent, and especially with roles typically considered "altruistic", and (ii) if we're worried that money gives AI company founders the wrong incentives, we should worry about the whole EA-AI ecosystem and the talent pipeline EA is backing, especially since that pipeline incubated those founders.
$235K is not very much money. I made close to Nate's salary as basically an unproductive intern at MIRI.
I understand the point being made (Nate could plausibly get a pay rise from an accelerationist AI company in Silicon Valley, even if the work involved were pure safetywashing, because those companies have even deeper pockets), but I would stress that these two sentences underline just how lucrative peddling doom has become for MIRI,[1] as well as how uniquely well positioned, financially, all sides of the AI safety movement are.
There are not many organizations whose messaging has resonated with deep-pocketed donors to the extent that they can afford to pay their [unproductive] interns north of $200k pro rata to brainstorm with them.[2] Or indeed up to $450k to someone with interesting ideas for experiments to test AI threats, communication skills, and at least enough knowledge of software to write basic Python data-processing scripts. So the financial motivations to believe that AI is really important exist on either side of the debate; the real asymmetry is between the earning potential of having really strong views on AI versus really strong views on the need to eliminate malaria or factory farming.
tbf to Eliezer, he appears to have been prophesying imminent tech-enabled doom/salvation since he was a teenager on quirky extropian mailing lists, so one thing he cannot be accused of is bandwagon-jumping.
Outside the Valley bubble, plenty of people at profitable or well-backed companies with specialist STEM skillsets or leadership roles are not earning that for shipping product under pressure, never mind junior research hires at nonprofits with nominally altruistic missions.
It's easier to persuade commercial entities of the merits of making more money (by incidentally doing the right thing) than to persuade a reviewer of multiple competitive funding bids scoped for habitat preservation to fund a study into lab-grown meat. At the end of the day, proposals written by biodiversity enthusiasts with biodiversity rationales and very specific biodiversity metrics are just going to be more plausible,[1] even if they turn out to be ineffective.
For similar reasons, I don't expect EA animal welfare funds to award funding to an economic think tank proposing to research how to grow the economy, even if the think tank insists its true goal is animal welfare and provides a lot of evidence that investment in meat alternatives and enforcement of animal welfare legislation are linked to overall economic growth.
Biobanks and biodiversity charity effectiveness research might stand a chance, obviously.
Also, failures when trying to do really outlandish things, like bribing Congresspeople to endorse Jim Mattis as a centrist candidate in the 2024 US Presidential Election, are likely to backfire in more spectacular ways than, say, providing malaria nets to a region where malaria is already falling, or losing a court case against a factory-farming conglomerate. That said, this criticism does apply to some other things EAs are interested in, particularly actions purportedly addressing x-risks.
Feels like in the real world you describe, in which few or no cause areas are actually saturated with funding, neglectedness is of interest mainly in how it interacts with tractability.
If your small amount of effort kickstarts an area of research, rather than merely adding some marginal quantity of additional research or funding, you might get some sort of multiplier on your efforts, assuming others find your case persuasive. And the fact that certain problems have been neglected due to the relative obscurity or rarity of who or what they affect might indicate that more tractable interventions still exist (if there were a simple cure for common cancers, it would be remarkable that we had not found it yet; conversely, certain obscure diseases have been the subject of comparatively little research). On the other hand, the relationship doesn't always run that way: some causes, like world peace, are neglected precisely because, however important they might be, there doesn't appear to be an efficacious solution.
This stands in notable contrast to most other religious and philosophical traditions, which tend to focus on timescales of centuries or millennia at most, or alternatively posit an imminent end-times scenario.
Feels like the time-of-perils hypothesis (and its associated imperatives to act and magnitude-of-reward scenarios) popular with longtermists maps rather more closely onto the imminent end-times scenarios common to many eras and cultures than onto Buddhist views of short and long cycles and an eventual[1] Maitreya Buddha...
There have also been Buddhists acting on the belief that the Maitreya was imminent, or the claim that they were the Maitreya...
It's a little different, but I'm not sure indexing to the consumption preferences of a certain class of US citizen in 2025 represents a better index, or one particularly close to Rawls' concept of primary goods. The "climate-controlled space" in particular feels oddly specific (both because much of the world doesn't need full climate control, and because 35m² is not a particularly "elite" apportionment of space).
To the extent the VPP concept is useful, I'd say it's mostly in indicating that, no matter how much it bumps GDP per capita, AI isn't going to automagically reduce the cost of land and buildings, and that it is currently driving up the amount of compute and bandwidth a "US coastal elite" person directly or indirectly consumes very rapidly...
It would probably be worthwhile to encourage legally binding versions of the Giving Pledge in general.
Donations before death are optimal, but it's particularly easy to ensure the pledge is at least met at that stage, via a will which can be updated at the time of signing. (I presume most of the 64% did have a will, but chose to leave their fortunes to others. I guess it's possible some fortunes inherited by widow[er]s will be donated to pledged causes in the fullness of time.)
I don't think this should replace the Giving Pledge, since some people's intentions and financial situations are too complex to write into a binding contract, but such pledges should be taken more seriously (even though in practice they are still likely to be reversible).