simeon_c

366 · Joined Aug 2021

Comments (43)
One of my friends and collaborators built this app, which is aimed at estimating the likelihood that we go extinct: https://xriskcalculator.vercel.app/

It might be useful!

It was a way of saying that if you think intelligence is perfectly correlated with being "morally good", then you're fine. But you're right that it doesn't cover all the ways you could reject the orthogonality thesis.

Why do you think your approach is better than working straight on alignment? 

1. intelligence peaks more closely to humans, and superintelligence doesn't yield significant increases in growth.

Even if an AI only has human-ish intelligence, most of its advantage comes from its other features:
- You can process any type of data, orders of magnitude faster than a human, and once you know how to do a task, you deterministically know how to do it.
- You can just double the number of GPUs and double the number of AIs. If you pair two AIs and have them interact at high speed, that's much more powerful than anything human-ish.
These are two of the many features that make AI radically different and mean that it will shape the future.

2. superintelligence in one domain doesn't yield superintelligence in others, leading to some, but limited growth, like most other technologies.

That's very (very) unlikely given recent observations on transformers: you can take a model trained on text, plug it into images, train it a tiny bit more (compared with the initial training), and it works. Add to that the fact that it can do maths, and that it's becoming more and more sample efficient.

3. we develop EMs, which radically change the world, including growth trajectories, before we develop superintelligence.

I think that's the most plausible of the three claims, but I still think it's only between 0.1% and 1% likely. Whereas we have a pretty clear path in mind for reaching AIs powerful enough to change the world, we have no idea how to build EMs. Also, this doesn't directionally change my argument, because no one in the EA community works on EMs. If you think that EMs are likely to change the world and that EAs should work on them, you should probably write about it and make the case for it. But I think it's unlikely that EMs are a significant thing we should care about right now.
 

If you have other examples, I'm happy to consider them, but I suspect you don't have better examples than those.

Meta-point: I think that you should be more inside-viewy when considering claims.
"Engineers can barely design a bike that will work on the first try, what possibly makes you think you can create an accurate theoretical model on a topic that is so much more complex?"
This class of argument, for instance, is pretty bad IMO. Uncertainty doesn't prevent you from thinking about expected value (EV), and here I was mainly arguing that if you care about long-term EV, AI is very likely to be its first-order determinant. Uncertainty should make us willing to do some exploration, and I'm not arguing against that, but in other cause areas we're doing much more than exploration. 5% of longtermists would be sufficient to do all types of exploration on many topics, even EMs.
 

Yes, that's right, but it's very different to be somewhere and affect AGI by chance versus being somewhere because you think it's your best way to affect AGI.
And I think that if you're optimizing for the latter, you're not very likely to end up working in nuclear weapons policy (even if there might be a few people for whom it is the best fit).
 

I think that this comment is way too outside-viewy.

Could you concretely mention one of the "many options" that would directionally change the conclusion of the post?

The claim is "AGI will radically change X". And I tried to argue that if you care about X and want to impact it, then, to first order, you can calculate your impact on X just by measuring your impact on AGI.

"The superintelligence is misaligned with our own objectives but is benign". 
You could have an AI with some meta-cognition, able to figure out what's good and maximize it, in the same way EAs try to figure out what's good and maximize it with parts of their lives. This view mostly makes sense if you give some credence to moral realism.

"My personal view on your subject is that you don't have to work in AI to shape its future." 
Yes, that's what I wrote in the post. 

"You can also do that by bringing the discussion into the public and create awareness for the dangers."
I don't think that's a good method, and I think you should target a much more specific audience, but yes, I know what you mean.

I think that on the EA community's AGI timelines, yes, other X-risks have a probability of causing extinction roughly indistinguishable from 0.
And conditional on AGI going well, we'll most likely also get past the other risks.

Whereas without AGI, bio X-risks might become a thing, not in the short run but in the second half of the century.

 

That's right! I just think that the base rate for "civilisation collapse prevents us from ever becoming a happy intergalactic civilisation" is very low. 
And multiplying any probability by 0.1 really does matter, because when we're talking about AGI, we're talking about things that are >=10% likely to happen according to a lot of people (I put a higher likelihood on it than that, but Toby Ord putting it at 10% is sufficient).

So it means that even if you assume biorisks are the same as AGI on everything else (which is the point I argue against), you still need biorisks to be >5% likely to lead to civilizational collapse by the end of the century for my point not to hold, i.e. that 95% of longtermists should work on AI (19 out of 20 people, plus the assumption of linear returns for the first few thousand people).
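To make that arithmetic explicit, here is a rough sketch of the comparison as I read it, using assumed inputs only: ~10% AGI X-risk (Toby Ord's figure), a 0.1 factor for a collapse actually being unrecoverable, and people allocated in proportion to long-term X-risk under linear returns:

```latex
% Back-of-envelope check, under the assumed inputs above:
%   - AGI X-risk ~ 10% (Toby Ord's figure)
%   - factor 0.1 that a civilisational collapse is actually unrecoverable
%   - workers allocated proportionally to long-term X-risk (linear returns)
% Biorisk earns at least 1 in 20 longtermists only if its long-term X-risk
% is at least ~1/19 of AGI's:
\[
P(\text{bio collapse by 2100}) \times 0.1 \;\ge\; \frac{10\%}{19} \approx 0.5\%
\quad\Longleftrightarrow\quad
P(\text{bio collapse by 2100}) \;\ge\; \text{roughly } 5\%.
\]
```

On those numbers, biorisk displaces part of the 95%-on-AI allocation only if the chance of a bio-driven collapse this century clears roughly 5%, which is the threshold in the paragraph above.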
