David T

Comments
Meta is paying billions of dollars to recruit people with proven experience at developing relevant AI models.

Does the set of "people with proven experience in building AI models" overlap with "people who defer to Eliezer on whether AI is safe" at all? I doubt it.

Indeed, given that Yudkowsky's arguments on AI are not universally admired, and that people whose chosen career is building the thing he says will make everybody die are particularly likely to be sceptical of his convictions on that issue, an endorsement might even be net negative.

The opportunity cost only exists for those with a high chance of securing comparable-level roles at AI companies, or very senior roles at non-AI companies, in the near future. Clearly this applies to some people working in AI capabilities research,[1] but if you wish to imply it applies to everyone working at MIRI and similar AI research organizations, I think the burden of proof rests on you. As for Eliezer, I don't think his motivation for dooming is profit, but it's beyond dispute that dooming is profitable for him. Could he earn orders of magnitude more money from building benevolent superintelligence based on his decision theory, as he once hoped to? Well yes, but it'd have to actually work.[2]

Anyway, my point was less to question MIRI's motivations or Thomas's observation that Nate could earn at least as much if he decided to work for a pro-AI organization, and more to point out that (i) no, really, those industry-norm salaries are very high compared with pretty much any quasi-academic research job not related to treating superintelligence as imminent, and especially compared with roles typically considered "altruistic"; and (ii) if we're worried that money gives AI company founders the wrong incentives, we should worry about the whole EA-AI ecosystem and talent pipeline EA is backing, especially since that pipeline incubated those founders.

  1. ^

    including Nate

  2. ^

    and work in a way that didn't kill everyone, I guess...

$235K is not very much money. I made close to Nate's salary as basically an unproductive intern at MIRI.

I understand the point being made (Nate plausibly could get a pay rise from an accelerationist AI company in Silicon Valley, even if the work involved was pure safetywashing, because those companies have even deeper pockets), but I would stress that these two sentences underline just how lucrative peddling doom has become for MIRI[1] as well as how uniquely positioned all sides of the AI safety movement are.

There are not many organizations whose messaging has resonated with deep-pocketed donors to the extent that they can afford to pay their [unproductive] interns north of $200k pro rata to brainstorm with them.[2] Or indeed up to $450k to someone with interesting ideas for experiments to test AI threats, communication skills, and at least enough knowledge of software to write basic Python data-processing scripts. So the financial motivations to believe that AI is really important are there on either side of the debate; the real asymmetry is between the earning potential of having really strong views on AI versus really strong views on the need to eliminate malaria or factory farming.

  1. ^

    tbf to Eliezer, he appears to have been prophesying imminent tech-enabled doom/salvation since he was a teenager on quirky extropian mailing lists, so one thing he cannot be accused of is bandwagon jumping.

  2. ^

    Outside the Valley bubble, plenty of people at profitable or well-backed companies with specialist STEM skillsets or leadership roles are not earning that for shipping product under pressure, never mind junior research hires at nonprofits with nominally altruistic missions.

It's easier to persuade commercial entities of the merits of making more money (by incidentally doing the right thing) than to persuade a reviewer of multiple competitive funding bids scoped for habitat preservation to fund a study into lab-grown meat. At the end of the day, the proposals written by biodiversity enthusiasts with biodiversity rationales and very specific biodiversity metrics are just going to be more plausible,[1] even if they turn out to be ineffective.

For similar reasons, I don't expect EA animal welfare funds to award funding to an economic think tank proposing to research how to grow the economy, even if the economic think tank insists its true goal is animal welfare and provides a lot of evidence that investment in meat alternatives and enforcement of animal welfare legislation is linked to overall economic growth.

  1. ^

    Biobanks and biodiversity charity effectiveness research might stand a chance, obviously.

Also, failures when trying to do really outlandish things, like bribing Congresspeople to endorse Jim Mattis as a centrist candidate in the 2024 US Presidential Election, are likely to backfire in more spectacular ways than (say) providing malaria nets to a region where malaria is already falling, or losing a court case against a factory-farming conglomerate. That said, this criticism does apply to some other things EAs are interested in, particularly actions purportedly addressing x-risks.

Feels like in the real world you describe, in which few or no cause areas are actually saturated with funding, neglectedness is of interest mainly in how it interacts with tractability.

If your small amount of effort kickstarts an area of research rather than merely adding some marginal quantity of additional research or funding, you might get some sort of multiplier on your efforts, assuming others find your case persuasive. And the fact that certain problems have been neglected due to the relative obscurity or rarity of who or what they affect might be an indication that more tractable interventions exist (if there is a simple cure for common cancers, it is remarkable we have not found it yet; conversely, certain obscure diseases have been the subject of comparatively little research). On the other hand, the relationship doesn't always run that way: some causes, like world peace, are neglected precisely because, however important they might be, there doesn't appear to be an efficacious solution.

This stands in notable contrast to most other religious and philosophical traditions, which tend to focus on timescales of centuries or millennia at most, or alternatively posit an imminent end-times scenario. 

Feels like the time of perils hypothesis (and its associated imperatives to act and magnitude-of-reward scenario) popular with longtermists maps rather more closely to the imminent end-times scenarios common to many eras and cultures than to Buddhist views of short and long cycles and an eventual[1] Maitreya Buddha...

 

  1. ^

    there have also been Buddhists acting on the belief that the Maitreya was imminent or the claim that they were the Maitreya...

I thought Altman and the Amodeis had already altruistically devoted their lives to saving us from grey goo. Since they're going to do this before 2027, you may already be too late.

Peter Thiel wants to know if your AI can be unfriendly enough to make a weapon out of it.

It's a little different, but I'm not sure indexing to the consumption preferences of a certain class of US citizen in 2025 represents a better index, or one particularly close to Rawls's concept of primary goods. The "climate controlled space" feels particularly oddly specific (both because much of the world doesn't need full climate control, and because 35m^2 is not a particularly "elite" apportionment of space).

To the extent the VPP concept is useful, I'd say it's mostly in indicating that no matter how much it bumps GDP per capita, AI isn't going to automagically reduce the costs of land and buildings, and it is currently driving up, very rapidly, the amount of compute and bandwidth a "US coastal elite" person directly or indirectly consumes...

I don't have a global audience, but if I did I wouldn't have shared this view I expressed to individuals back when COVID was first reported: 

probably this isn't going to become a global pandemic or affect us at all; but the WHO overreacting to unknown new diseases is what prevents pandemics from happening 

That take illustrates two things: firstly, that there are actual lifesaving reasons for communicating messages slightly differently to your personal level of concern about an issue; and secondly, that hunches about what is going to happen next can be very wrong.

In fact, semi-informed contrarian hunches were shared frequently by public intellectuals throughout the pandemic, often with [sincere] high confidence. They predicted it would cease to be a thing as soon as the weather got warmer, were far too clever to wear masks because they knew that protective effects which might be statistically significant at population level had negligible impact upon them personally, decided they didn't have to worry about infection any more because they were using Ivermectin as a prophylactic, and were keen to express their concerns about vaccines.[1] Piper's hunch is probably unusual in being directionally correct. Of all the possible cases for public intellectuals sharing everything they think about an issue, COVID is probably the worst example. Many did, and many readers and listeners died.

Being a domain expert relative to one's audience doesn't seem nearly enough to justify contradicting actual experts with speculation on health in other contexts either.[2]

Similarly, I'm unfamiliar with Ball, but if he is "probably way above replacement level for 'Trump admin-approved intellectual'" he should probably try to stay in post. There are many principled reasons to cease to be a White House adviser, but pursuing a particular cause by placing less emphasis on the arguments they might be receptive to and more on others isn't really one of them. It's not like theories that Open Source AI might be valuable as an alternative to an oligarchy dominated by US Americans who orbit the White House struggle to get aired in other environments. Political lobbying is the ur-case for emphasizing the bits the audience cares about, and I'm really struggling to imagine any benefit to Ball giving the same message to 80k Hours and the Trump administration, unless the intention is for both audiences to ignore him.

I haven't read either of MacAskill's full-length books so I'm less sure on this one, but my understanding is that one focuses on various approaches to addressing poverty and the other focuses on the long term, in much the same way as Famine, Affluence and Morality has nothing to say on liberating animals and Animal Liberation has little to say on duties to save human lives.[3] I don't think there's anything deceptive in editorial focus, and I think if readers conclude from reading one of those texts that Singer doesn't care about animals or that MacAskill doesn't care about the future, the problem of jumping to inaccurate conclusions is all theirs. MacAskill has written about other priorities since; I don't think he owes his audience apologies for not covering everything he cares about in the same book.

I do have an issue with the "bait and switch" of using events nominally about the best way to address global poverty to segue into "actually, all these calculations to save children's lives we agonised over earlier are moot; turns out the most important thing is to support these AI research organizations we're affiliated with",[4] but I consider that fundamentally different to editorial focus.

  1. ^

    These are just the good-faith beliefs held by intellectuals with more than a passing interest in the subject. Needless to say, not all the people amplifying them had such good intentions or any relevant knowledge at all...

  2. ^

    At least with COVID, public health authorities were also acting on low information. The same is not true in other cases, where on the one hand there is a mountain of evidence-based medicine and, on the other, a smart, more influential person idly speculating otherwise.

  3. ^

    Even though Singer has had absolutely no issue with writing about some seriously unpopular positions he holds, he still doesn't emphasize everything important in everything he writes... 

  4. ^

    Apart from the general ickiness, I'm not even convinced it's a particularly good way to recruit the most promising AI researchers...
