titotal

Computational Physicist
7593 karma

Bio

I'm a computational physicist, and I generally donate to global health. I am skeptical of AI x-risk and of big-R Rationalism, and I intend to explain why in great detail.

Comments (616)

Though I think it would be a grave mistake to conclude from the fact that ChatGPT mostly complies with developer and user intent that we have any reliable way of controlling an actual machine superintelligence. The top researchers in the field say we don’t

The link you posted does not support your claim. The linked paper's 24 authors include some top AI researchers, like Geoffrey Hinton and Stuart Russell, but they are obviously not all of the top researchers in the field, nor a representative sample of them. The author list also includes people with limited expertise in the subject, including a psychologist and a medieval historian.

As for your overall point, it does not rebut the idea that some people have been cynically exploiting AI fears for their own gain. Remember that OpenAI was founded as an AI safety organisation. The actions of Sam Altman seem entirely consistent with someone hyping x-risk in order to get funding and support for OpenAI, then pivoting to downplaying risk as soon as ditching safety became more profitable. I doubt this applies to all such people, or even the majority, but it does seem to have happened at least once.

The EA space in general has fairly weak defenses against ideas that sound persuasive but don't actually hold up to detailed scrutiny. An initiative like this, if implemented correctly, seems like a step in the right direction.

I find it unusual that this end of year review contains barely any details of things you've actually done this year. Why should donors consider your organization as opposed to other AI risk orgs?

"It seems hard to predict whether superintelligence will kill everyone or not, but there's a worryingly high chance it will, and Earth isn't prepared," and seems to think the latter framing is substantially driven by concerns about what can be said "in polite company."

Funnily enough, I think this is true in the opposite direction. There is massive social pressure in EA spaces to take AI x-risk and the doomer arguments seriously. I don't think it's uncommon for someone who secretly suspects it's all a load of nonsense to diplomatically say a statement like the above, in "polite EA company".

Like you, I urge people who think AI x-risk is overblown to make their arguments loudly and repeatedly.

To be clear, Thorstad has written around a hundred articles critiquing EA positions in depth, including significant amounts of object-level criticism.

I find it quite irritating that no matter how much in-depth, object-level criticism people like Thorstad or I make, if we dare to mention meta-level problems at all we often get treated like rabid social justice vigilantes. This is just mud-slinging: both meta-level and object-level issues are important for the epistemological health of the movement.

I'm worried that a lot of these "questions" are really attempts to push a belief, phrased as questions in order to get out of actually providing evidence for said belief.

Why has Open Philanthropy decided not to invest in genetic engineering and reproductive technology, despite many notable figures (especially within the MIRI ecosystem) saying that this would be a good avenue to work in to improve the quality of AI safety research?

First, AI safety people here tend to think that super-AI is imminent within a decade or so, so none of this stuff would kick in in time. Second, this stuff is a form of eugenics, which has a deservedly bad reputation and raises thorny ethical issues even divorced from its traditional role in murder and genocide. Third, it's all untested and based on questionable science, and I suspect it wouldn't actually work very well, if at all.

Has anyone considered possible perverse incentives that the aforementioned CEA Community Health team may experience, in that they may have incentives to exaggerate problems in the community to justify their own existence? If so, what makes CEA as a whole think that their continued existence is worth the cost?

Have you considered that the rest of EA is incentivised to pretend there aren't problems in EA, for reputational reasons? If so, why shouldn't community health be expanded instead of reduced? 

This question is basically a baseless accusation rephrased as a question in order to get away with making it. I can't think of a major scandal in EA that was first raised by the community health team.

Why have so few people, both within EA and within popular discourse more broadly, drawn parallels between the "TESCREAL" conspiracy theory and antisemitic conspiracy theories?

Because this is a dumb and baseless parallel? There's a lot more to antisemitic conspiracy theories than "powerful people controlling things". In fact, Torres's usual accusation is to associate TESCREAL with white supremacist eugenicists, which feels like the opposite end of the scale.

Why aren't there more organizations within EA that are trying to be extremely hardcore and totalizing, to the level of religious orders, the Navy SEALs, the Manhattan Project, or even a really intense start-up? It seems like that is the kind of organization you would want to join, if you truly internalize the stakes here.

Because this is a terrible idea, and on multiple occasions has already led to harmful cult-like organisations. AI safety people have already spilled a lot of ink about why a maximising AI would be extremely dangerous, so why the hell would you want to do maximising yourself?

For as long as it's existed, the "AI safety" movement has been trying to convince people that superintelligent AGI is imminent and immensely powerful. You can't act all shocked-Pikachu when some people ignore the danger warnings and take that as a cue to build it before someone else does. This was a quite predictable result of your actions.

I would like to humbly suggest that people not engage in active plots to destroy humanity based on their personal back-of-the-envelope moral calculations.

I think the other 8 billion of us might want a say, and I'd guess we'd not be particularly happy if we got collectively eviscerated because some random person made a math error.

On multiple occasions, I've found a "quantified" analysis to be indistinguishable from a "vibes-based" analysis: the vibes have simply been assigned numbers, often ones basically pulled out of thin air. (I haven't looked enough into shrimp to know whether this is one of those cases.)

I think it is entirely sensible to strongly prefer cause estimates that are backed by extremely strong evidence such as meta-reviews of randomised trials, rather than cause estimates based on vibes that are essentially made up. Part of the problem I have with naive expected value reasoning is that it seemingly does not take this entirely reasonable preference into account.
