All of Otto's Comments + Replies

Should we buy coal mines?

Anyway, I posted this here because I think it somewhat resembles the policy of buying and closing coal mines: you're deliberately creating scarcity. Since there are losers when you do that, policymakers might respond. I think creating scarcity in carbon rights is more efficient and much easier to implement than creating scarcity in coal, but it does indeed suffer from some of the same drawbacks.

Should we buy coal mines?

Possibly, in the medium term. To counter that, you might also want to support groups that lobby for lower ceilings in carbon trading schemes.

How I failed to form views on AI safety

Hey I wasn't saying it wasn't that great :)

I agree that the difficult part is getting to general intelligence, also where data is concerned. Compute, algorithms, and data availability are all needed to get to this point. It seems really hard to know beforehand what kinds of algorithms and how much data one would need. I agree that basically only one data source, text, could well be insufficient. There was a post I read on a forum somewhere (could have been here) from someone who let GPT-3 solve questions including things like 'let all odd rows of your answer be... (read more)

Should we buy coal mines?

If you want to spend money quickly on reducing carbon dioxide emissions, you can buy emission rights and destroy them. In schemes such as the EU ETS, destroying emission rights should lead directly to emission reductions. Technically, this has already been implemented. It is probably even cheaper to buy and destroy rights in similar schemes in other regions.

John G. Halstead, 22d:
There's some endogeneity in the policy, though - policymakers probably respond to that kind of activity, especially if it happens at scale.
How I failed to form views on AI safety

Hi AM, thanks for your reply.

Regarding your example, I think it's quite specific, as you note too. That doesn't mean I think it's invalid, but it does get me thinking: how would a human learn this task? A human intelligence wasn't trained on many specific tasks in order to be able to do them all. Rather, it first acquired general intelligence (apparently, somewhere), and was later able to apply this to an almost infinite number of specific tasks, typically with only a few examples needed. I would guess that an AGI would solve problems in a similar way. So... (read more)

Ada-Maaria Hyvärinen, 24d:
Hi Otto! I agree that the example was not that great and that a lack of data sources can definitely be countered with general intelligence, like you describe. So it could definitely be possible that a generally intelligent agent could plan around to gather the needed data. My gut feeling is still that it is impossible to develop such intelligence based on one data source (for example text, however large the amounts), but of course there are already technologies that combine different data sources (such as self-driving cars), so this is clearly not the limit either. I'll have to think more about where this intuition of lack of data being a limit comes from, since it still feels relevant to me. Of course 100 years is a lot of time to gather data. I'm not sure if imagination is the difference either. Maybe it is the belief in somebody actually implementing things that can be imagined.
How I failed to form views on AI safety

Thanks for the reply, and for trying to attach numbers to your thoughts!

So our main disagreement lies in (1). I think this is a common source of disagreement, so it's important to look into it further.

Would you say that the chance to ever build AGI is similarly tiny? Or is it just the next hundred years? In other words, is this a possibility or a timeline discussion?

Ada-Maaria Hyvärinen, 1mo:
Hmm, given a non-zero probability in the next 100 years, the likelihood over a longer time frame should be bigger, since there is nothing that makes developing AGI more difficult as more time passes, and I would imagine it is more likely to get easier than harder (unless something catastrophic happens). In other words, I don't think it is certainly impossible to build AGI, but I am very pessimistic about anything like current ML methods leading to AGI. A lot of people in the AI safety community seem to disagree with me on that, and I have not completely understood why.
How I failed to form views on AI safety

Hi Ada-Maaria, glad to have talked to you at EAG and congrats on writing this post - I think it's very well written and interesting from start to finish! I also think you're better informed on the topic than most people in EA who are convinced of AI xrisk, surely including myself.

As an AI xrisk-convinced person, I find it helpful to divide AI xrisk into these three steps. I think the probability of superintelligence xrisk is the product of these three probabilities:

1) P(AGI in next 100 years)
2) P(AGI leads to superintelligence)
3) P(superintelligence destroys humanity)... (read more)
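
In symbols (a rough paraphrase of the decomposition above, reading each step as conditional on the previous one, not Otto's exact notation): P(superintelligence xrisk) ≈ P(1) × P(2 | 1) × P(3 | 2), i.e. the three estimates are multiplied together.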

Ada-Maaria Hyvärinen, 1mo:
Hi Otto! Thanks, it was nice talking to you at EAG. (I did not include any interactions/information I got from this weekend's EAG in the post because I had written it before the conference, felt like it should not be any longer than it already was, but wanted to wait until my friends who are described as "my friends" in the post had read it before publishing it.) I am not that convinced AGI is necessarily the most important component of x-risk from AI – I feel like there could be significant risks from powerful non-generally intelligent systems, but of course it is important to avoid all x-risk, so x-risk from AGI specifically is also worth talking about. I don't enjoy putting numbers to estimates but I understand why it can be a good idea, so I will try. At least then I can later see if I have changed my mind and by how much. I would give quite a low probability to 1), perhaps 1%? (I know this is lower than average estimates by AI researchers.) I think 2) on the other hand is very likely, maybe 99%, on the assumption that there can be enough differences between implemented AGIs to make a team of AGIs surpass a team of humans through, for example, more efficient communication (basically what Russell says in Human Compatible on this seems credible to me). Note that even if this would be superhuman intelligence, it could still be more stupid than some superintelligence scenarios. I would give a much lower probability to superintelligence like Bostrom describes it. 3) is hard to estimate without knowing much about the type of superintelligence, but I would spontaneously say something high, like 80%? So because of the low probability on 1), my concatenated estimate is still significantly lower than yours. I definitely would love to read more research on this as well.
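
Taking these estimates at face value, the concatenated figure works out to roughly 0.01 × 0.99 × 0.80 ≈ 0.008, i.e. about a 0.8% probability; a rough back-of-the-envelope reading, illustrating how the low estimate for 1) dominates the product.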
Existential Risk Observatory: results and 2022 targets

Thanks for that context and for your thoughts! We understand the worries that you mention, and as you say, op-eds are a good way to avoid those. Most (>90%) of the other mainstream media articles we've seen about existential risk (there are a few dozen) did not suffer from these issues either, fortunately.

Existential Risk Observatory: results and 2022 targets

Thank you for the heads up! We would love to have more information about general audience attitudes towards existential risk, especially related to AI and other novel tech. Particularly interesting for us would be research into which narratives work best. We've done some of this ourselves, but it would be interesting to see whether our results match others'. So yes, please let us know when you have this available!

PeterSlattery, 4mo:
Hey Otto, We have 580 responses from international data [https://drive.google.com/drive/folders/1OPgolKrjr-49fpJi3kcTPbbLO0qa36bi?usp=sharing] collected during the SCRUB project [https://www.scrubcovid19.org/] (though these responses may not all be complete). We collected them in Waves 5&6. You can see the questionnaires used for each wave in the Word files in the linked folder. 'GCR_' are the relevant variables. We also have 1400 domestic (Australian) responses. These are embargoed by our government partner. We can only get access by requesting the data for a paper (which we plan to do eventually). Let us know if you would be interested in collaborating on that. Let me know if you have any questions.