All of RhysSouthan's Comments + Replies

Oxford college choice from EA perspective?

Balliol tends to have a lot of philosophy graduate students, and Wadham is considered one of the most left-wing colleges. Looking at the list of current Oxford philosophy graduate students, I noticed there are a lot at St Anne's right now as well. But this can change from year to year, and being a philosophy student obviously doesn't mean being an EA. I would be surprised if any college reliably had a higher number of EAs.

AlasdairGives' suggestion to consider funding options makes sense, though you should also keep in mind that the wealthiest colleges get the most applications. If you apply to St John's, there's more of a risk they won't pick you, and then there's more randomness in which college you end up at.

AMA: Toby Ord, author of "The Precipice" and co-founder of the EA movement

I had a similar question myself. It seems like believing in a "long reflection" period requires denying that there will be a human-aligned AGI. My understanding would have been that once a human-aligned AGI is developed, there would not be much need for human reflection—and whatever human reflection did take place could be accelerated through interactions with the superintelligence, and would therefore not be "long." I would have thought, then, that most of the reflection on our values would need to have been completed before the creation of an AGI. From what I've read of The Precipice, there is no explanation for how a long reflection is compatible with the creation of a human-aligned AGI.

Institutions for Future Generations

A lot of good ideas here!

I'm interested in how Demeny Voting is expected to work psychologically. I would expect just about everyone who is given a second vote (which they are told to submit on behalf of future generations) to use that second vote as a second vote for whatever their first vote was for. I imagine they would either think their first vote was for the best policy/person, in which case they could convince themselves that's best for future generations too, or they would realize their first vote is only good for the short term, but they would...

tylermjohn (2y): Surprising (and confusing!) as it may be, there is some evidence that voters would vote differently with their Demeny vote than with their first vote. I've asked Ben Grodeck (who clued me into Demeny voting) to weigh in with more data, but for now see this study from Japanese economist Reiko Aoki, who found (Table 8 and Figure 7) that surveyed participants who are permitted to cast one vote on behalf of themselves and one vote on behalf of their child sometimes vote differently on their second vote. The effect isn't drastic, but it is certainly non-trivial. http://hermes-ir.lib.hit-u.ac.jp/rs/bitstream/10086/22250/1/cis_dp539.pdf The study authors further find that policy preferences on behalf of oneself and on behalf of one's children diverge to a greater degree, and they hypothesize that we would see more divergence between the multiple votes of Demeny voters if they had different political options that better reflected the divergence between these sets of preferences. Thus, they think that instituting Demeny voting would cause party platforms to change to try to cater to the policy preferences of parents voting on behalf of their children.

Philosophical Critiques of Effective Altruism by Prof Jeff McMahan

I haven't read the Srinivasan, Gray, and Nussbaum critiques. However, I did read the Krishna critique, and that one uses another rhetorical technique (aside from the sneering dismissal McMahan mentions) to watch out for in critiques of effective altruism. The technique is for the critic of EA to write in as beautiful, literary, and nuanced a way as possible, in part to subtly frame the critic as a much more fully developed, artistic, and mature human than the (implied) shallow utilitarian robots who devote their lives to doing a lot of good.

Effective altrui...

Austen_Forrester (5y): Absolutely. That is such a common tactic. I think all of the criticisms against EA use one cheap rhetorical trick or another. Someone needs to make a definitive web page that lists all the criticisms of EA with responses and, most importantly, calls out the rhetorical device that was used. It's mostly the same tired, discredited criticisms and persuasive tricks used over and over, so rather than responding to each individually, we can simply refer people to the web page.

Sure, you may have saved hundreds of lives, but your essays feature too few obscure literary references, you monstrous, pathetic excuse for a human being.

A response to Matthews on AI Risk

I forgot to mention that your post did help to clarify points and alleviate some of my confusion, particularly the idea that an ultra-powerful AI tool (which may or may not be sentient) "would still permit one human to wield power over all others."

The hypothetical of an AI wiping out all of humanity because it figures out (or thinks it figures out) that it will increase overall utility by doing so is just one extreme possibility. There must be a lot of credible-seeming scenarios opposed to this one in which an AI could be used to increase overal...

Evan_Gaensbauer (6y): Brian Tomasik is a self-described "negative-leaning" hedonic utilitarian who is a prominent thinker for effective altruism. He's written about how humanity might have values which lead us to generate much suffering in the future, but he also worries a machine superintelligence might end up doing the same. There are myriad reasons he thinks this that I can't do justice to here. I believe right now he thinks the best course of action is to try steering the values of present-day humanity, as much of it (or at least a crucially influential subset) as possible, towards neglecting suffering less. He also believes in doing foundational research to better ascertain the chances of a singleton promulgating suffering throughout space in the future. To this end he both does research with and funds colleagues at the Foundational Research Institute. His whole body of work concerning future suffering is referred to as "astronomical suffering" considerations, a sort of complementary utilitarian consideration to Dr Bostrom's astronomical waste argument. You can read more of Mr. Tomasik's work on the far future and related topics here: http://www.utilitarian-essays.com. Note that some of it is advanced and may require background reading to understand all the premises in his essays, but he usually provides citations for all of this.

A response to Matthews on AI Risk

I haven't explored the debate over AI risk in the EA movement in depth, so I'm not informed enough to take a strong position. But Kosta's comment gets at one of the things that has puzzled me -- as basically an interested outsider -- about the concern for x-risk in EA. A very strong fear of human extinction seems to treat humanity as innately important. But in a hedonic utilitarian framework, humanity is only contingently important to the extent that the continuation of humanity improves overall utility. If an AI or AIs could improve overall utility by des...

Owen_Cotton-Barratt (6y): If you're a hedonic utilitarian, you might retain some uncertainty over this, and think it's best to at least hold off on destroying humanity for a while out of deference to other moral theories, and because of the option value. Even if someone took the view you describe, though, it's not clear that it would be a helpful one to communicate, because talking about "AI destroying humanity" does a good job of communicating concern about the scenarios you're worried about (where AI destroys humanity without this being a good outcome) to other people. As the exceptions are things people generally won't even think of, caveating might well cause more confusion than clarity.
RyanCarey (6y): Cautious support of giving an AI control is not opposed to x-risk reduction. Existential risk is defined in terms of curtailing the potential of Earth-originating life, so reducing x-risk means protecting that potential. Turning civilisation over to AIs or ems might be inevitable, but it would still be safety-critical. A non-careful transition to AI is bad for utilitarians and many others because of its irreversibility. Once you codify values (a definition of happiness and whatever else) in an AI, they're stuck, unless you've programmed into the AI a way for it to reflect on its values. When combined with Bostrom's argument in Astronomical Waste, that the eventual awesomeness of a technologically mature civilisation is more important than when it is achieved, this gives a strong reason for caution.