MaxRa

3676 karma · Joined March 2017

Bio

Hi, I'm Max :)

  • working in AI governance (strategy, expert surveys, research infrastructure, EU tech policy fellow)
  • background in cognitive science & biology (did research on metacognition and confidence judgements)
  • most worried about AI going badly for technical & coordination reasons
  • vegan for the animals
  • doing my own forecasts: https://www.metaculus.com/accounts/profile/110500/

Comments

For example, the fact that it took us more than ten years to seriously consider the option of "slowing down AI" seems perhaps a bit puzzling. One possible explanation is that some of us have had a bias towards doing intellectually interesting AI alignment research rather than low-status, boring work on regulation and advocacy.

I'd guess it's also that advocacy and regulation just seemed less marginally useful in most worlds, given the AI timelines people suspected even three years ago?

Hmmm, your reply makes me more worried than before that you'll engage in actions that increase the overall adversarial tone in a way that seems counterproductive to me. :')

I also think we should reconceptualize what the AI companies are doing as hostile, aggressive, and reckless. EA is too much in a frame where the AI companies are just doing their legitimate jobs, and we are the ones that want this onerous favor of making sure their work doesn’t kill everyone on earth.

I'm not completely sure what you're referring to with "legitimate jobs", but I generally have the impression that EAs working on AI risks have very mixed feelings about AI companies advancing cutting-edge capabilities? Or sharing models openly? And I think reconceptualizing "the behavior of AI companies" (I would suggest trying to be more concrete in public, even here) as aggressive and hostile will itself be perceived as hostile, which you said you wouldn't do? I think that's definitely not "the most bland advocacy" anymore?

Also, the way you frame your pushback makes me worry that you'll lose patience with considerate advocacy way too quickly:

"There’s no reason to rush to hostility"

"If showing hostility works to convey the situation, then hostility could be merited."

"And I really hope it’s not necessary to advance into hostility."


Thanks for working on this, Holly, I really appreciate more people thinking through these issues, and I found this interesting and a good overview of considerations I had previously learned about.

I'm possibly much more concerned than you about politicization and a general vague feeling of downside risks. You write:

[Politicization] is a real risk that any cause runs when it seeks public attention, and unfortunately I don’t think there’s much we can do to avoid it. Unfortunately, though, AI is going to become politicized whether we get involved in it or not. (I would argue that many of the predominant positions on AI in the community are already markers of grey tribe membership.)

I spontaneously feel like I'd want you to spend more time thinking about politicization risks than this cursory treatment indicates.

  • E.g. politicization is probably not binary, and I'd plausibly be very grateful for work that on the margin reduces the intensity of politicization.
  • E.g. politicization can probably take thousands of different shapes, some of which are much more conducive to policymakers still having reasonably sane discussions on issues relevant to existential risks.

More generally, I'm pretty positively surprised by how things are going on the political side of AI, and I'm a bit protective of it. While I don't have any insider knowledge and haven't thought much about all of this, I see bipartisan and sensible-sounding stuff from Congress, I see Ursula von der Leyen calling AI a potential x-risk in front of the EU parliament, I see the UK AI Safety Summit, I see the Frontier Model Forum, and I see the UN saying things about existential risks. As a consequence, I'd spontaneously rather see more reasonable voices being supportive, encouraging, and protective of the current momentum, rather than potentially increasing the adversarial tone and "politicization noise", making things more hot-button, less open and transparent, etc.

One random concrete way public protests could affect things negatively: if AI pause protests had started half a year earlier, would e.g. Microsoft executives still have signed the CAIS open letter?

On the discussion that AI will have deficits in expressing care and eliciting trust, I feel like he’s neglecting that AI systems could easily be given a digital face and a warm voice for this purpose?

Interesting discussion, thanks! The discussion of AI potentially driving explosive innovation seemed much more relevant than the job replacement you spent most of the time discussing, and at the same time was unfortunately much more rushed.

But it’s a kind of thing where, you know, I can keep coming up with new bottlenecks [for explosive innovations leading to economic growth], and [Tom Davidson] can keep dismissing them, and we can keep going on forever.

Relatedly, I'd have been interested in how Michael relates to the Age of Em scenario, in which IIRC explosive innovation and economic growth happen mostly in a parallel digital economy of digital minds. For the next two decades I kinda expect some mild version of such a parallel digital economy, where growth in AI mostly affects stuff like software development, biotech, R&D generally, content creation, finance, and personal productivity services. It would be interesting to dig into the bottlenecks Michael foresees in this case; spontaneously, I'm not convinced that there isn't room for explosive growth in the digital sphere.

Hey Kieren :) Thanks, yeah, it was intentional but badly worded on my part. :D I adopted your suggestion.

(Very off-hand and uncharitably phrased and likely misleading reaction to the "Holden vs. hardcore utilitarianism" bit, though it's just useful enough to quickly share anyway)

  • Holden's and Rob's takes felt a bit like "Hey, we have these confused ideas of infinities, and then we apply them to Utilitarianism and make Utilitarianism confusing ➔ let's throw out Utilitarianism and deprioritize the welfare of future generations relative to what the caring and calculating approach tells us. And maybe even consider becoming nihilists haha, but for real, let's just lean into our parochial moral intuitions more."
  • Instead, my response to this cluster of thinking: don't break your brain on infinite universes, just be a common-sense futurist. Don't think about Everett branches, think about the concrete bread-and-butter stars and planets out there that are not confusing, that exist today! We can see them with our own eyes! They are not infinite, but they are huge, likely much, much, much bigger than everything that has ever happened on Earth, and that indeed does swamp a lot of other very important things.

Fwiw, despite the tournament feeling like a drag at points, I think I kept at it due to a mix of:
a) having committed to it and wanting to fulfill that commitment (which I suppose is conscientiousness),
b) generally strongly sharing the motivations for having more forecasting, and
c) having the money as a reward for good performance and for just keeping at it.

I was also a participant. I engaged less than I wanted to, mostly due to the amount of effort it demanded and a steady loss of intrinsic motivation.

Some vague recollections:

  • Everything took more time than expected, and that decreased my motivation a bunch
    • E.g. I just saw a note that one initial pandemic-related forecast took me ~90 minutes
    • I think making legible notes requires effort, and I invested more time into this than others did
    • Also, reading up on things takes a bunch of time if you're new to a field (I think GPT-4 would've especially helped with making this faster)
  • Getting feedback from others took a while, IIRC most often more than a week? By the time I received feedback I had basically forgotten everything again and could only go by my own notes
  • It was effortful to read the notes from most others; I think they were often just written hastily

What could have caused me to engage more with others?

  • Having experts focus only on questions they have some expertise in seems like a good way to get me to think through other people's vague and messy notes more, ask more questions, etc.
  • Probably also having smaller teams (like ~3-5 people) would've made the tournament feel more engaging; I basically didn't develop anything close to a connection with anyone on my team because they were just a bunch of anonymized usernames.