Hi, I'm Max :)
Huh, I actually kinda thought that Open Phil also had a mixed portfolio, just less prominently/extensively than GiveWell. Mostly based on hearing once or twice that they were in talks with interested UHNW people, and a vague memory of somebody at Open Phil mentioning they were interested in expanding their donors beyond DM&CT...
Cool!
The article is very fair, perhaps even positive!
Just read the whole thing, wondering whether it gets less positive after the excerpt here. And no, it's all very positive. Thank you guys for your work, so good to see forecasting gaining momentum.
For example, the fact that it took us more than ten years to seriously consider the option of "slowing down AI" seems perhaps a bit puzzling. One possible explanation is that some of us have had a bias towards doing intellectually interesting AI alignment research rather than low-status, boring work on regulation and advocacy.
I'd guess it's also that advocacy and regulation just seemed less marginally useful in most worlds, given the suspected AI timelines of even 3 years ago?
Hmmm, your reply makes me more worried than before that you'll engage in actions that increase the overall adversarial tone in a way that seems counterproductive to me. :')
I also think we should reconceptualize what the AI companies are doing as hostile, aggressive, and reckless. EA is too much in a frame where the AI companies are just doing their legitimate jobs, and we are the ones that want this onerous favor of making sure their work doesn’t kill everyone on earth.
I'm not completely sure what you're referring to with "legitimate jobs", but I generally have the impression that EAs working on AI risks have very mixed feelings about AI companies advancing cutting-edge capabilities? Or sharing models openly? And I think reconceptualizing "the behavior of AI companies" (I would suggest trying to be more concrete in public, even here) as aggressive and hostile will itself be perceived as hostile, which you said you wouldn't do? I think that's definitely not "the most bland advocacy" anymore?
Also, the way you frame your pushback makes me worry that you'll lose patience with considerate advocacy way too quickly:
"There’s no reason to rush to hostility"
"If showing hostility works to convey the situation, then hostility could be merited."
"And I really hope it’s not necessary to advance into hostility."
Thanks for working on this, Holly, I really appreciate more people thinking through these issues, and found this interesting and a good overview of considerations I previously learned about.
I'm possibly much more concerned than you about politicization and a general, vague sense of downside risks. You write:
[Politicization] is a real risk that any cause runs when it seeks public attention, and unfortunately I don’t think there’s much we can do to avoid it. Unfortunately, though, AI is going to become politicized whether we get involved in it or not. (I would argue that many of the predominant positions on AI in the community are already markers of grey tribe membership.)
I spontaneously feel like I'd want you to spend more time thinking about politicization risks than this cursory treatment suggests.
More generally, I'm pretty positively surprised by how things are going on the political side of AI, and I'm a bit protective of it. While I don't have any insider knowledge and haven't thought much about all of this, I see bipartisan and sensible-sounding stuff from Congress, I see Ursula von der Leyen calling AI a potential x-risk in front of the EU Parliament, I see the UK AI Safety Summit, I see the Frontier Model Forum, and the UN says things about existential risks. As a consequence, I'd spontaneously rather see more reasonable voices being supportive, encouraging, and protective of the current momentum, rather than potentially increasing the adversarial tone and "politicization noise", making things more hot-button, less open and transparent, etc.
One random concrete way public protests could affect things negatively: if AI pause protests had started half a year earlier, would e.g. Microsoft chief executives still have signed the CAIS open letter?
On the discussion of AI having deficits in expressing care and eliciting trust, I feel like he’s neglecting that AI systems can easily be given a digital face and a warm voice for this purpose?
Interesting discussion, thanks! The discussion of AI potentially driving explosive innovations seemed much more relevant than the job replacement you spent most of the time discussing, and at the same time unfortunately much more rushed.
But it’s a kind of thing where, you know, I can keep coming up with new bottlenecks [for explosive innovations leading to economic growth], and [Tom Davidson] can keep dismissing them, and we can keep going on forever.
Relatedly, I'd have been interested in how Michael relates to the Age of Em scenario, in which IIRC explosive innovation and economic growth happen mostly in a parallel digital economy of digital minds. For the next two decades I kinda expect some mild version of such a parallel digital economy, where growth in AI mostly affects stuff like software development, biotech, R&D generally, content creation, finance, and personal productivity services. It would be interesting to dig into the bottlenecks that Michael foresees in this case; spontaneously, I'm not convinced that there isn't room for explosive growth in the digital sphere.
Hey Kieren :) Thanks, yeah, it was intentional but badly worded on my part. :D I adopted your suggestion.
Thanks a lot for sharing, for your work supporting his family, and for generally helping the people who knew him process this loss. I only recently got to know him, during the last two EA conferences I attended, but he left a strong impression of being a very kind, caring, and thoughtful person.