Anyway, I posted this here because I think it somewhat resembles the policy of buying and closing coal mines. You're deliberately creating scarcity. Since there are losers when you do that, policymakers might respond. I think creating scarcity in carbon rights is more efficient and much easier to implement than creating scarcity in coal, but it does suffer from some of the same drawbacks.
Possibly, in the medium term. To counter that, you might want to support groups who lobby for lower carbon scheme ceilings as well.
Hey, I wasn't saying it wasn't that great :) I agree that the difficult part is getting to general intelligence, also regarding data. Compute, algorithms, and data availability are all needed to get to this point. It seems really hard to know beforehand what kinds of algorithms and how much data one would need. I agree that relying on basically only one source of data, text, could well be insufficient. There was a post I read on a forum somewhere (could have been here) from someone who let GPT-3 solve questions including things like 'let all odd rows of your answer be...
If you want to spend money quickly on reducing carbon dioxide emissions, you can buy emission rights and destroy them. In schemes such as the EU ETS, destroyed emission rights should lead to direct emission reduction. This has technically been implemented already. Even cheaper is probably to buy and destroy rights in similar schemes in other regions.
Hi AM, thanks for your reply.
Regarding your example, I think it's quite specific, as you notice too. That doesn't mean I think it's invalid, but it does get me thinking: how would a human learn this task? A human intelligence wasn't trained on many specific tasks in order to be able to do them all. Rather, it first acquired general intelligence (apparently, somewhere) and was later able to apply it to an almost infinite number of specific tasks, typically with only a few examples needed. I would guess that an AGI would solve problems in a similar way. So...
Thanks for the reply, and for trying to attach numbers to your thoughts!

So our main disagreement lies in (1). I think this is a common source of disagreement, so it's important to look into it further.

Would you say that the chance to ever build AGI is similarly tiny? Or is it just the next hundred years? In other words, is this a possibility discussion or a timeline discussion?
Hi Ada-Maaria, glad to have talked to you at EAG and congrats on writing this post - I think it's very well written and interesting from start to finish! I also think you're more informed on the topic than most people in EA who are convinced of AI xrisk, surely including myself.
As someone who is convinced of AI xrisk, I find it helpful to divide AI xrisk into these three steps. I think superintelligence xrisk probability is the product of these three probabilities:
1) P(AGI in next 100 years)
2) P(AGI leads to superintelligence)
3) P(superintelligence destroys humanity)
...
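To make the decomposition above concrete, here is a minimal sketch of how the three factors multiply out. The probability values are hypothetical placeholders for illustration only, not anyone's actual estimates:

```python
# Three-factor decomposition of superintelligence xrisk probability.
# All numbers below are hypothetical placeholders, not real estimates.
p_agi_100y = 0.5    # P(AGI in next 100 years)
p_agi_to_si = 0.5   # P(AGI leads to superintelligence)
p_si_doom = 0.2     # P(superintelligence destroys humanity)

p_xrisk = p_agi_100y * p_agi_to_si * p_si_doom
print(p_xrisk)  # 0.05 with these placeholder inputs
```

Under this framing, a disagreement about (1) alone can shift the overall estimate by orders of magnitude even if (2) and (3) are agreed on.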
Thanks for that context and for your thoughts! We understand the worries that you mention, and as you say, op-eds are a good way to avoid those. Most (>90%) of the other mainstream media articles we've seen about existential risk (there are a few dozen) did not suffer from these issues either, fortunately.
Thank you for the heads up! We would love to have more information about general audience attitudes towards existential risk, especially related to AI and other novel tech. Particularly interesting for us would be research into which narratives work best. We've done some of this ourselves, but it would be interesting to see if our results match others'. So yes, please let us know when you have this available!