Thanks Cody. I appreciate the thoughtfulness of the replies given by you and others. I'm not sure if you were expecting the community response to be as it is. 

My expressed thoughts were a bit muddled. I have a few reasons why I think 80k's change is not good. It's unclear how AI will develop further, and multiple worlds seem plausible; some of my reasons apply to some worlds and not others, and that inconsistent overlap is perhaps leading to a lack of clarity. Here is the more general failure mode I was trying to point to.

I think in cases where AGI does lead to explosive outcomes soon, it becomes very unclear what is best, or even good. It's something like a wicked problem, with lots of unexpected second-order effects and so on. I don't think we have a good track record of thinking about this problem in a way that leads to solutions even at the level of first-order effects, as Geoffrey Miller highlighted earlier in the thread. In most of these worlds, what I expect will happen is something like:

  1. Thinkers and leaders in the movement have genuinely interesting ideas and insights about what AGI could imply at an abstract or cosmic level.
  2. Other leaders start working out what this actually implies individuals and organisations should do. This doesn't work, though, because we don't know what we're doing. Due to unknown unknowns, the most important things are missed, and because of the massive level of detail in reality, the things that are suggested are significantly wrong at load-bearing points. There are also suggestions in the spirit of "we're not sure which of these directly opposing views, X or Y, is correct, and we encourage careful consideration", because it is genuinely hard.
  3. People looking for career advice or organisational direction etc. try to think carefully about things, but in the end, most just use the advice to rationalise a messy choice between X and Y that they actually make based on factors like convenience, cost and reputational risk.

I think the impact of most actions here is basically chaotic. There are some things that are probably good, like trying to ensure AGI isn't controlled by a single individual. I also think "make the world better in meaningful ways in our usual cause areas before AGI is here" probably helps in many worlds, for reasons like: AI might try to copy our values; AI might be controlled by a body like the UN, in which case it's good to get as much moral progress in there as possible beforehand; or it simply changes the amount of morally aligned training data being used.

There are worlds where AGI doesn't take off soon. I think that more serious consideration of the Existential Risk Persuasion Tournament (XPT) leads one to conclude that wildly transformational outcomes just aren't that likely in the short or medium term. I'm aware the XPT doesn't ask about that specifically, but it seems like one of the better data points we have. I worry that focusing on things like expected value leads to some kind of Pascal's mugging, which is a shame because the counterfactual (refusing to be mugged) is still good in this case.

I still think AI is an issue worth considering seriously, dedicating many resources to addressing, and so on. But I think significant de-emphasis on other cause areas is not good. Depending on how long 80k make the change for, it also plausibly leads to new people not entering other cause areas in significant numbers for quite some time, which is probably bad in movement-building ways that are greater than the sum of their parts (fewer people lead to feelings of defeat and stagnation, and fewer new people mean that better, newer ideas can't take over).

I hope 80k reverse this change after the first year or two. I hope that, if they don't, it's worth it. 

I applaud the decision to take a big swing, but I think the reasoning is unsound and probably leads to worse worlds.

I think there are actions that look like “making AI go well” but are actually worse than doing nothing at all, because things like “keep humans in control of AI” can very easily lead to something like value lock-in, or at least leave control in the hands of immoral stewards. It’s plausible that if ASI is developed and remains controlled by humans, hundreds of trillions of animals will suffer, because humans still want to eat meat from animals. I think it’s far from clear that factors like faster alternative-protein development outweigh or outpace this risk: it’s plausible humans will always want animal meat over identical cultured meat, for similar reasons to why some prefer human-created art over AI-created art.

If society had positive valence, I think redirecting more resources to AI and minimising x-risk would be worth it: the “neutral” outcome is plausibly that things just scale up to galactic scales, which seems OK or good, and “doom” is worse than that. However, when farmed animals are considered, I think civilisation's valence is probably significantly negative. If the “neutral” scale-up occurs, astronomical suffering seems plausible. That seems worse than “doom”.

Meanwhile, in worlds where ASI isn’t achieved soon, or is achieved but doesn’t lead to explosive economic growth or other transformative outcomes, redirecting people towards focusing on AI instead of other cause areas probably isn’t very good.

Promoting a wider portfolio of career paths/cause areas seems more sensible, and more beneficial to the world.

79% disagree

Essentially the Brian Kateman view: civilisation's valence seems massively negative due to farmed animal suffering. This is only getting worse despite people being able to change right now. There's a very significant chance that people will continue to prefer animal meat, even if cultured meat is competitive on price etc. "Astronomical suffering" is a real concern.

Hi Lewis,

I'm an aspiring food scientist halfway through my degree. Do you think there is more potential for impact if I focus on plant-based or clean meats? Plant-based seems easier from a scientific point of view, but more of the low-hanging fruit there seems to have been taken already. Thanks!