titotal

Computational Physicist
7056 karma

Bio

I'm a computational physicist, and I generally donate to global health. I am skeptical of AI x-risk and of big-R Rationalism, and I intend to explain why in great detail.

Comments
580

Yeah, perhaps I am subtly misrepresenting the argument. Trying again, I interpret it as saying:

People have justified longtermism by pointing to actions that seem sensible, such as the claim that it made sense in the past to end slavery, and that it makes sense currently to prevent existential risk. But both of these examples can be justified with a lot more certainty by appealing to the short-term future. So in order to justify longtermism in particular, you have to point to proposed policies that seem a lot less sensible and rely on a lot less certainty.

It might help to clarify that in the article they are defining “long-term future” as a scale of millions of years.

The "distant country" objection does not defend against the argument that "We Are Not in a Position to Predict the Best Actions for the Far Future". 

We can go to a distant country, observe what is going on there, and make reasonably informed decisions about how to help. A more accurate analogy would be if we were trying to help a distant country that we hadn't seen, couldn't communicate with, and knew next to nothing about.

It also doesn't work as a counterargument to "The Far Future Must Conflict with the Near Future to be Morally Relevant". The authors are claiming that anything that helps the far future can also be accomplished by helping people in the present. The analogous argument, that anything that helps distant countries can also be accomplished by helping people in this country, is just wrong.

Answer by titotal

The best EA critic is David Thorstad; his work is compiled at "Reflective Altruism". I also have a lot of critiques that I post here as well as on my blog. There are plenty of other internal EA critics you can find with the criticisms tag. (I'll probably add to this and make it its own post at some point.)

With regard to AI x-risk in particular, there are a few places where you can find frequent critique. These are not endorsements or anti-endorsements: I like some of them and dislike others. I've ranked them in rough order of how convincing I would expect them to be for the average EA (most convincing first).

First, Magnus Vinding has already prepared an anti-foom reading list, compiling arguments against the "AI foom" hypothesis. His other articles on the subject of foom are also good.

AI Optimism, by Nora Belrose and Quintin Pope, argues the case that AI will be naturally helpful, by looking at present-day systems and debunking poor x-risk arguments.

The AI Snake Oil blog and book, by computer scientists Arvind Narayanan and Sayash Kapoor, try to deflate AI hype.

Gary Marcus is a psychologist who has been predicting that deep learning will hit a wall since before the genAI boom, and he continues to maintain that position. Blog, Twitter

Yann LeCun is a prestigious deep learning expert who works for Meta AI; he is strongly in favour of open-source AI and strongly against doomerism.

Emily Bender is a linguist and the lead author on the famous "stochastic parrots" paper. She hosts a podcast, "Mystery AI Hype Theater 3000", attacking AI hype.

The effective accelerationists (e/accs), like Marc Andreessen, are strongly "full steam ahead" on AI. I haven't looked into them much (they seem dumb), so I don't have any links here.

Nirit Weiss-Blatt runs the AI Panic blog, another blog attacking AI hype.

The old Tumblr user su3su2u1 was a frequent critic of rationalism, and of MIRI in particular. Sadly his blog has mostly been deleted, but his critique of HPMOR has been preserved here.

David Gerard is mainly a cryptocurrency critic, but he has been criticizing rationalism and EA for a very long time, and runs the "Pivot to AI" blog attacking AI hype.

Tech Won't Save Us is a left-wing podcast that attacks the tech sector in general, with many episodes on EA and AI x-risk figures.

Timnit Gebru is the ex-head of AI ethics at Google. She strongly dislikes EA, and often associates with the significantly less credible Émile Torres.

Émile Torres is an ex-EA who is highly worried about longtermism. They are very disliked here for a number of questionable actions.

r/sneerclub has massively dropped off in activity but has been mocking rationalism for more than a decade now, and as such has accumulated a lot of critiques. 

Has this analysis been checked by any qualified biologists? I'm seeing a lot of uncited speculative claims here, and I don't want to form a strong opinion on these things without subject matter experts weighing in. 

(For the record, I am in favour of gene drive research; Target Malaria seems like a worthy org.)

So, most of this is a heavily biased and cherry-picked polemic against regulation in general. For example, they look at climate change, pick on EV subsidies, and move on, not mentioning all the other climate interventions that are actually working. I don't think anyone credibly thinks the free market would have solved climate change on its own.

With regard to AI, I agree with them that the future is very hard to predict. But the present isn't, and I think there are present-day, real-world harms that can and should be regulated.

The concerns in this case seem to be proportional to the actual real-world impact. The jump from ChatGPT and Midjourney not existing to being available to millions was extremely noticeable: suddenly the average person could chat with a realistic bot and use it to cheat on their homework, or conjure up a realistic-looking image from a simple text prompt.

In contrast, for the average person, not a lot has changed from 2022 to now. The chatbots give out fewer errors and the images are a lot better, but they aren't able to accomplish significantly more actual tasks than they could in 2022. Companies are shoving AI tools into everything, but people are mostly ignoring them.

My best guess is that the All or Nothing theorist associates numbers with mathematical certainty. So, to use numbers to present one’s best estimate inherently “projects absolute confidence”

I think a version of this critique is still entirely fair. My problem here is that the numbers are often presented or spread without uncertainty qualifications.

For example, the EA page on the Against Malaria Foundation states:

As of July 2022, GiveWell estimates that AMF can deliver a LLIN at a cost of about $5, and that a donation to AMF has an average cost-effectiveness of $5,500 per life saved.[7][8][9] 

This statement gives no information about how sure they are about the $5 or $5,500 figures. Is GiveWell virtually certain the cost-effectiveness is in the range of $5,000 to $6,000? Or do they think it could be between $2,000 and $9,000? GiveWell explains its methodology in detail, but their uncertainty ranges are dropped when this claim is spread (do you know off the top of your head what their uncertainty is?). Absent these ranges, I see these claims repeated all over the place as if $5,500 really is an objectively correct answer and not a rough estimate.

I think "scout mindset" vs "soldier mindset" in individuals is the wrong thing to be focusing on in general (. You will never succeed in making individuals perfectly unbiased. In science, plenty of people with "soldier mindset" do great work and make great discoveries. 

What matters is that the system as a whole is epistemologically healthy and has mechanisms to successfully counteract people's biases. A "soldier" in science is still meant to be honest and argue for their views with evidence and experimentation, and other scientists are incentivized to probe their arguments for weaknesses. 

A culture where fewer people quit of their own accord, but more people are successfully pressured to leave due to high levels of skeptical scrutiny, might be superior.

I've had similar worries. Most extremely impactful projects look destined to fail half a dozen times before they blow up.

Do you have any examples of this? 

When I plug the training prompt from the technical report (last page of the paper) into the free version of ChatGPT, it gives a response that seems very similar to what FiveThirtyNine says. This is despite my ChatGPT prompt not including any of the retrieved sources.

Have I interpreted this wrong, or is it possible that the retrieval of sources is basically doing nothing here? 

EDIT: I did another experiment that seems even more damning. I gave ChatGPT an even simpler prompt: "what is the probability that china lands on the moon before 2050? Please give a detailed analysis and present your final estimate as a single number between 0% and 100%"

The result is a very detailed analysis and a final answer of 85%. 

Asking FiveThirtyNine the same question, "what is the probability that china lands on the moon before 2050?", I get a response of pretty much the same detail and the exact same final answer of 85%.

I've tried this with a few other prompts and it usually gives similar results. I see no proof that the sources do anything. 
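If anyone wants to rerun the ChatGPT side of this comparison themselves, here is a rough sketch of how it could be done through the API rather than the free web interface. This is only an illustration, not what I actually ran: it assumes you have an OpenAI API key set in your environment, the model name is a placeholder, and FiveThirtyNine's answer still has to be copied over by hand, since I'm not aware of a public API for it.

```python
# Minimal sketch: ask ChatGPT the bare question with no retrieved sources,
# then eyeball the final percentage against FiveThirtyNine's answer.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = (
    "what is the probability that china lands on the moon before 2050? "
    "Please give a detailed analysis and present your final estimate "
    "as a single number between 0% and 100%"
)

# Plain chat completion; note there is no retrieval step and no sources
# are included in the prompt.
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": question}],
)

print(response.choices[0].message.content)
# Compare the final percentage by hand with the answer FiveThirtyNine
# gives for the same question.
```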
