
Pablo

9924 karma · Joined Aug 2014 · Working (6–15 years) · Madrid, Spain
www.stafforini.com/

Bio

Every post, comment, or Wiki edit I authored is hereby licensed under a Creative Commons Attribution 4.0 International License

Sequences: 1 (Future Matters)

Comments: 1151

Topic Contributions: 4110

Thanks for the feedback! Although I am no longer working on this project, I am interested in your thoughts because I am currently developing a website with Spanish translations, which will also feature a system where each tag is also a wiki article and vice versa. I do think that tags and wiki articles have somewhat different functions, and that integrating them in this way can sometimes create problems. But I'm not sure I agree that the right approach is to map multiple tags onto a single article. In my view, a core function of a Wiki is to provide concise definitions of key terms and expressions (as a sort of interactive glossary), and this means that one wants the articles to be as granular as the tags. The case of "AI safety" vs. "AI risk" vs. "AI alignment" seems to me more like a situation where the underlying taxonomy is unclear, and this affects the Wiki entries considered both as articles and as tags. But perhaps there are other cases I'm missing.

Tagging @Lizka and @Amber Dawn.

But if they both use microdooms, they can compare things 1:1 in terms of their effect on the future, without having to flesh out all of the post-AGI cruxes.

I don't think this is the case for all key disagreements, because people can disagree a lot about the duration of the period of heightened existential risk, whereas microdooms are defined as a reduction in total existential risk rather than in terms of per-period risk reduction. So two people can agree that AI safety work aimed at reducing existential risk will decrease risk by a certain amount over a given period, but one may believe such work averts 100x as many microdooms as the other because they believe the period of heightened risk is 100x shorter.

“the attainment of capabilities affording a level of economic productivity and control over nature close to the maximum that could feasibly be achieved.” (Nick Bostrom (2013) ‘Existential risk prevention as global priority’, Global Policy, vol. 4, no. 1, p. 19.)

Loneliness in a house share situation depends entirely on whether your housemates are good, and there is no guarantee that your co-workers are good housemates just because they are EA.

The most plausible version of this argument is not that someone will be a good housemate just because they are EA. It is that banning or discouraging EA co-living makes it more difficult for people to find any co-living arrangement.

Thanks for the update: I have retracted the relevant part of my previous comment.

Talking with some physicist friends helped me debunk the many worlds thing Yud has going.

Yudkowsky may be criticized for being overconfident in the many-worlds interpretation, but to feel that you have “debunked” it after talking to some physicist friends shows excessive confidence in the opposite direction. Have you considered how your views about this question would have changed if e.g. David Wallace had been among the physicists you talked to?

Also, my sense is that “Yud” was a nickname popularized by members of the SneerClub subreddit (one of the most intellectually dishonest communities I have ever encountered). Given its origin, using that nickname seems disrespectful toward Yudkowsky.

I'm very excited to see this.

One minor correction: Toby Ord assigns a 1-in-6 chance of existential catastrophe this century, which isn't equivalent to a 1-in-6 chance of human extinction.

Indeed. And there are other forecasting failures by Mearsheimer, including one in which he himself apparently admitted (prior to resolution) that such a failure would constitute a serious blow to his theory. Here’s a relevant passage from a classic textbook on nuclear strategy:[1]

In an article that gained considerable attention, largely for its resolute refusal to share the general mood of optimism that surrounded the events of 1989, John Mearsheimer assumed that Germany would become a nuclear power. Then, as the Soviet Union collapsed, he explained why it might make sense for Ukraine to hold on to its nuclear bequest. In the event Germany made an explicit renunciation of the nuclear option at the time of the country’s unification in 1990, while Japan, the other defeated power of 1945, continued to insist that it had closed off this option. Nor in the end did Kiev agree that the nuclear component of Ukraine’s Soviet inheritance provided a natural and even commendable way of affirming a new-found statehood. Along with Belarus and Kazakhstan, Ukraine eased out of its nuclear status. As it gained its independence from the USSR, Ukraine adopted a non-nuclear policy. The idea that a state with nuclear weapons would choose to give them up, especially when its neighbour was a nuclear state with historic claims on its territory, was anathema to many realists. One of his critics claimed that when asked in 1992, ‘What would happen if Ukraine were to give up nuclear weapons?’ Mearsheimer responded, ‘That would be a tremendous blow to realist theory.’

1. Lawrence Freedman & Jeffrey Michaels, The Evolution of Nuclear Strategy, 4th ed., London, 2019, pp. 579–580.

In my view, the comment isn't particularly responsive to the post.

Shouldn’t we expect people who believe that a comment isn’t responsive to its parent post to downvote it rather than to disagree-vote it, if they don’t have any substantive disagreements with it?

I am puzzled that, at the time of writing, this comment has received as many disagreement votes as agreement votes. Shouldn't we all agree that the EA community should allocate significantly more resources to an area if that allocation would do by far the most good and there are sound public arguments for this conclusion? What are the main reasons for disagreement?
