titotal

Computational Physicist
5682 karma · Joined Jul 2022

Bio

I'm a computational physicist, and I generally donate to global health. I am skeptical of AI x-risk and of big-R Rationalism, and I intend to explain why in great detail.

Comments (517)

In the '90s and 2000s, many people, such as Eric Drexler, were extremely worried about nanotechnology and viewed it as an existential threat through the "gray goo" scenario. Yudkowsky predicted Drexler-style nanotech would arrive by 2010, using very similar language to what he is currently saying about AGI.

It turned out they were all absurdly overoptimistic about how soon the technology would arrive; the whole Drexlerite nanotech project flamed out by the end of the 2000s and has pretty much not progressed since. I think a similar dynamic playing out with AGI is less likely, but still very plausible.

A lot of people here donate to givedirectly.org, with the philosophy that we should let the world's poorest decide where money needs to be spent to improve their lives. Grassroots projects like this seem like a natural extension of that: a community as a whole decides where it needs resources in order to uplift everyone. I'm no GHD expert, and I would encourage an in-depth analysis, but it's at least plausible that this could be more effective than GiveDirectly, as this project is too large to be paid for under that model.

Grassroots organising seems like a good idea in general: cutting most of the Westerners out of the process means more of the money goes into the third-world economy. We could also see knock-on effects: maybe altruistic philosophy becomes more popular throughout Uganda, and people there become more receptive to, say, animal rights later on in their development.

I think more cost-effectiveness estimates are a good idea, but EA has funded far more speculative and dubious projects in recent memory. I would encourage EA funders to give the proposal a fair shot.

I'm fine with CEAs; my problem is that this one seems to have been trotted out selectively in order to dismiss Anthony's proposal in particular, even though EA discusses and sometimes funds proposals that make the supposed "16 extra deaths" look like peanuts by comparison.

The Wytham Abbey project has been sold, so we know its overall impact was to throw something like a million pounds down the drain (once you factor in stamp duty, etc.). I think it's deeply unfair to frame Anthony's proposal as possibly letting 16 people die while not doing the same for Wytham, which (in this framing, at the implied £5,500 or so per life) definitively let 180 people die.

Also, the cost-effectiveness analysis hasn't even been done yet! I find it kind of suspect that this is getting such a hostile response when EA insiders propose ineffective projects all the time with much less pushback. There are also other factors here worth considering, like helping EA build links with grassroots orgs, indirectly spreading EA ideas to organisers in the third world, etc. EA spends plenty of money on "community building"; would this not count?

The HPMOR thing is a side note, but I vehemently disagree with your analysis, and with the initial grant, because the counterfactual in this case is not doing nothing; it's sending them a link to the website where HPMOR is hosted for free for everybody, which costs nothing. Plus, HPMOR only tangentially advocates for EA causes anyway! A huge number of people have read HPMOR, and only a small proportion have gone on to become EA members. Your numbers are absurdly overoptimistic.

Okay, that makes a lot more sense, thank you. 

I think the talk of transition risks and sail metaphors isn't actually that relevant to your argument here. Wouldn't a gradual and continuous decrease in state risk, like the Kuznets curve shown in Thorstad's paper here, have the same effect?

I guess at a very high level, I think: either there are accessible arrangements for society at some level of technological advancement which drive risk very low, or there aren't. If there aren't, it's very unlikely that the future will be very large. If there are, then there's a question of whether the world can reach such a state before an existential catastrophe.

This reasoning seems off. Why would it have to drive things to a very low level of risk, rather than to a low but significant level, like we have today with nuclear weapons? Why would it be impossible to find arrangements that keep state risk at something like 1%?

AI risk thinking seems to have a lot of "all or nothing" reasoning that seems completely unjustified to me. 


I don't like that this "converting to lives" thing is being done on this kind of post and seemingly nowhere else? 

Like, if we applied it to the Wytham Abbey purchase (I don't know if the 15 mill figure is accurate, but whatever), that's 2,700 people EA let die in order to purchase a manor house. Or what about the fund that gave $28,000 to print out Harry Potter fanfiction and give it to math olympians? That's 6 dead children sacrificed for printouts of freely available fiction!
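(To spell out the implicit conversion, since no per-life figure is actually stated in this thread and the exact number is my assumption: these "lives" estimates just divide the spend by an assumed cost to save one life,

$$\text{lives forgone} \approx \frac{\text{amount spent}}{\text{cost to save one life}},$$

and working backwards from the figures above gives $\$15{,}000{,}000 / 2{,}700 \approx \$5{,}600$ per life and $\$28{,}000 / 6 \approx \$4{,}700$ per life, i.e. roughly the ballpark of commonly cited GiveWell-style cost-per-life-saved estimates.)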

I hope you see why I don't like this type of rhetoric. 

Instead it's saying that it may be more natural to have the object-level conversations about transitions rather than about risk-per-century.

 

Hmm, I definitely think there's an object level disagreement about the structure of risk here. 

Take the invention of nuclear weapons for example. This was certainly a "transition" in society relevant to existential risk. But it doesn't make sense to me to analogise it to a risk in putting up a sail. Instead, nuclear weapons are just now a permanent risk to humanity, which goes up or down depending on geopolitical strategy. 

I don't see why future developments wouldn't work the same way. It seems that, since early humanity, state risk has only increased further and further as technology has developed. I know there are arguments for why it could suddenly drop, but I agree with the linked Thorstad analysis that this seems unlikely.

I think this sail metaphor is more obfuscatory than revealing. If you think that the risk will drop by orders of magnitude and stay there, then it's fine to say so, and you should make your object-level arguments for that. Calling it a transition doesn't really add anything: society has been "transitioning" between different states for its entire lifetime, so why is this one different?

Thorstad has previously written a paper specifically addressing the time of perils hypothesis, summarised in seven parts here.

One of the points is that just being in a time of perils is not enough to debunk his arguments, it has to be a short time of perils, and the time of perils ending has to drop the risk by many orders of magnitude. These assumptions seem highly uncertain to me. 
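To see why the size of the drop matters, here is a back-of-envelope version of the structure (a simplified sketch, not necessarily the exact model in Thorstad's paper): with a constant per-century extinction risk $r$, the probability of surviving $N$ centuries is $(1-r)^N$, and the expected number of future centuries is

$$\sum_{n=1}^{\infty}(1-r)^n = \frac{1-r}{r} \approx \frac{1}{r} \quad \text{for small } r,$$

so cutting $r$ from 1% per century to 0.1% only buys about a tenfold longer expected future. Getting the astronomically large futures that these arguments rely on requires the post-perils risk to fall by many orders of magnitude and stay there.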

I'm actually finishing up an article on this exact topic! 

I'll explain more there, but I think the major reason is this: if Leif Wenar didn't hate EA, he wouldn't have bothered to write the article. You need a reason to do things, and hatred is one of the most motivating ones.
