All of Question Mark's Comments + Replies

Donation in 2021

If you're still trying to decide what to donate to, Brian Tomasik wrote this article on his donation recommendations, which may give you some useful insight. His top donation recommendations are the Center on Long-Term Risk and the Center for Reducing Suffering. Both of these organizations focus on reducing S-risks, or risks of astronomical suffering. There was also a post here from a few months ago giving shallow evaluations of various longtermist organizations.

AI Timelines: Where the Arguments, and the "Experts," Stand

Brian Tomasik wrote a similar article several years ago on Predictions of AGI Takeoff Speed vs. Years Worked in Commercial Software. In general, AI experts with the most experience working in commercial software tend to expect a soft takeoff, rather than a hard takeoff.

anonymous_ea (+7, 2mo): I appreciate you posting this picture, which I had not seen before. I just want to add that this was compiled in 2014, and some of the people in the picture have likely shifted in their views since then.
List of AI safety courses and resources

These aren't entirely about AI, but Brian Tomasik's Essays on Reducing Suffering and Tobias Baumann's articles on S-risks are also worth reading. They contain a lot of articles related to futurism and scenarios that could result in astronomical suffering. On the topic of AI alignment, Tomasik wrote this article on the risks of a "near miss" in AI alignment, and how a slightly misaligned AI may create far more suffering than a completely unaligned AI.

Gifted $1 million. What to do? (Not hypothetical)

There was a post here a few months ago giving brief evaluations of various longtermist organizations, which briefly commented on the Qualia Research Institute. It described QRI's pathway to impact as "implausible" and "overly ambitious". What would be your response to this?

andzuck (+6, 2mo): Hi Question Mark. While Nuño evaluated many longtermist orgs in that post, he didn’t actually evaluate QRI. Here’s the full quote: “Below is a list of perhaps notable organizations which I could have evaluated but didn't. […] Qualia Research Institute. Its pathway to impact appears implausible and overly ambitious.” It’s unfortunate that no explanation is actually given for why the view is held. The name of any longtermist org could have replaced QRI’s name and the statement would sound exactly the same.

QRI’s path to impact has three steps.

Step 1: understand what things are conscious and how to measure and quantify valence (how good or bad an experience feels). Fortunately, we’re in a great position to make progress. Michael Johnson’s Principia Qualia [https://www.qualiaresearchinstitute.org/pdf/Principia-Qualia.pdf] breaks down the problem of consciousness into eight clear sub-problems and lays out a testable theory for what valence is. You can read about the progress made since the theory was presented here [https://www.qualiaresearchinstitute.org/blog/a-primer-on-the-symmetry-theory-of-valence].

After we can measure valence, step 2 is to do just that in humans, animals, and anything we suspect is conscious. We'll do it in a variety of situations and conditions. We’d confirm whether valence follows a log scale, as Andrés Gomez Emilsson has suggested [https://www.qualiaresearchinstitute.org/blog/log-scales]. All this data will make it easier to make economic decisions, allocate capital, and do effective altruism. It’ll also let us learn what the situation is with the quintillions of organisms on the planet and come up with a triage plan to help. For more on this topic, see Johnson’s “Effective Altruism, and building a better QALY [https://opentheory.net/2015/06/effective-altruism-and-building-a-better-qaly/]."

Step 3: reduce suffering, increase baseline well-being, reach new heights of happiness.
Solving valence measurement will probably yield insight into
Gifted $1 million. What to do? (Not hypothetical)

Brian Tomasik wrote this article on his donation recommendations, which may provide you with some useful insight. His top donation recommendations are the Center on Long-Term Risk and the Center for Reducing Suffering. CLR and CRS are doing research on cause prioritization and reducing S-risks, i.e. risks of astronomical suffering. S-risks are a neglected priority, so additional funding for S-risk research will likely have greater marginal impact than funding for other causes.

What would you do if you had half a million dollars?

Thanks. Although whether increasing the population is a good thing depends on whether you are an average utilitarian or a total utilitarian. With more people, both the number of hedons and the number of dolors will increase, with the ratio of hedons to dolors skewed in favor of hedons. If you're a total utilitarian, the net hedons will be higher with more people, so adding more people is rational. If you're an average utilitarian, the ratio of hedons to dolors and the average level of happiness per capita will be roughly the same, so adding more people wouldn't necessarily increase expected utility.

jackmalde (+9, 3mo): Yes, that is true. For what it's worth, most people who have looked into population ethics at all reject average utilitarianism, as it has some extremely unintuitive implications, such as the "sadistic conclusion": one can make things better by bringing into existence people with terrible lives, as long as doing so still raises the average wellbeing level, i.e. if existing people have even worse lives.
What would you do if you had half a million dollars?

Sure, and there could be more suffering than happiness in the future, but people go with their best guess about what is more likely and I think most in the EA community side with a future that has more happiness than suffering.

Would you mind linking some posts or articles assessing the expected value of the long-term future? If the basic argument for the far future being far better than the present is because life now is better than it was thousands of years ago, this is, in my opinion, a weak argument. Even if people like Steven Pinker are right,  yo... (read more)

jackmalde (+4, 3mo): You're right to question this as it is an important consideration. The Global Priorities Institute has highlighted "The value of the future of humanity" in their research agenda [https://globalprioritiesinstitute.org/wp-content/uploads/GPI-research-agenda-version-2.1.pdf] (pages 10-13). Have a look at the "existing informal discussion" on pages 12 and 13, some of which argues that the expected value of the future is positive.

I think you misunderstood what I was trying to say. I was saying that even if we reach the limits of individual happiness, we can just create more and more humans to increase total happiness.
What would you do if you had half a million dollars?

There is still the possibility that the Pinkerites are wrong though, and quality of life is not improving. Even though poverty is far lower and medical care is far better than in the past, there may also be more mental illness and loneliness than in the past. The mutational load within the human population may also be increasing. Taking the hedonic treadmill into account, happiness levels in general should be roughly stable in the long run regardless of life circumstances. One may object to this by saying that wireheading may become feasible in the far fut... (read more)

jackmalde (+2, 3mo): Sure, and there could be more suffering than happiness in the future, but people go with their best guess about what is more likely and I think most in the EA community side with a future that has more happiness than suffering.

Maybe, but if we can't make people happier we can always just make more happy people. This would be very highly desirable if you have a total view of population ethics.

This is a fair point. What I would say though is that extinction risk is only a very small subset of existential risk, so desiring extinction doesn't necessarily mean you shouldn't want to reduce most forms of existential risk.
What would you do if you had half a million dollars?

If one values reducing suffering and increasing happiness equally, it isn't clear that reducing existential risk is justified either. Existential risk reduction and space colonization mean that the far future can be expected to have both more happiness and more suffering, which would seem to even out the expected utility. More happiness + more suffering isn't necessarily better than less happiness + less suffering. Focusing on reducing existential risks would only seem to be justified if either A) you believe in Positive Utilitarianism, i.e. increas... (read more)

B) the far future can be reasonably expected to have significantly more happiness than suffering

I think EAs who want to reduce x-risk generally do believe that the future should have more happiness than suffering, conditional on no existential catastrophe occurring. I think these people generally argue that quality of life has improved over time and believe that this trend should continue (e.g. Steven Pinker's The Better Angels of Our Nature). Of course life for farmed animals has got worse...but I think people believe we should successfully render factory... (read more)

What would you do if you had half a million dollars?

Even if you value reducing suffering and increasing happiness equally, reducing S-risks would likely still greatly increase the expected value of the far future. Efforts to reduce S-risks would almost certainly reduce the risk of extreme suffering being created in the far future, but it's not clear that they would reduce happiness much.

I'm not saying that reducing S-risks isn't a great thing to do, nor that it would reduce happiness. I'm just saying that it isn't clear that a focus on reducing S-risks rather than on reducing existential risk is justified if one values reducing suffering and increasing happiness equally.

What would you do if you had half a million dollars?

Brian Tomasik wrote this article on his donation recommendations, which may provide you with some useful insight. His top donation recommendations are the Center on Long-Term Risk and the Center for Reducing Suffering. In terms of the long-term future, reducing suffering in the far future may be more important than reducing existential risk. If life in the far future is significantly bad on average, space colonization could potentially create and spread a large amount of suffering.

antimonyanthony (+8, 3mo): This is easy for me to say as someone who agrees with these donation recommendations, but I find it disappointing that this comment apparently has gotten several downvotes. The comment calls attention to a neglected segment of longtermist causes, and briefly discusses what sorts of considerations would lead you to prioritize those causes. Seems like a useful contribution.
MichaelStJules (+3, 3mo): Note that s-risks are existential risks (or at least some s-risks are, depending on the definition). Extinction risks are specific existential risks, too.

My understanding is that Brian Tomasik has a suffering-focused view of ethics, in that he sees reducing suffering as inherently more important than increasing happiness, even if the 'magnitude' of the happiness and suffering are the same.

If one holds a more symmetric view where suffering and happiness are both equally important, it isn't clear how useful his donation recommendations are.

How to Stop The Deadliest Animal on Earth (A Happier World video)

The Gates Foundation is financing a campaign to genetically engineer the mosquito population in order to control malaria. Nassim Taleb compares this to Mao Zedong's Four Pests Campaign, in which Mao's attempt to wipe out the sparrow population contributed to the Great Chinese Famine. Taleb argues that genetically modifying mosquitoes could have similar unintended consequences. He also talks about processes that are too fast for nature, drawing a graph that relates the speed at which the ecosystem changes to the corresponding risk of harm, with harm scaling non-linearly in proportion to speed.

Aaron Gertler (+4, 4mo): Thanks for sharing a summary! It doesn't seem like it applies to AMF's work, but it does describe other malaria control efforts. My impression is that the scientists who work on these things all day often pay more attention to risks and safety than other people realize, but I hope that the initial tests being run on this technology include appropriate follow-up to understand any unintended consequences.
BrianTan (+2, 4mo): Don't have the time to read into it but I think that total extinction of the biosphere would very likely not be a good thing.
People working on x-risks: what emotionally motivates you?

I'm mostly concerned with S-risks, i.e. risks of astronomical suffering. I view S-risk reduction as a more rational form of Pascal's Wager, and as a form of extreme longtermist self-interest. Since there is still a >0% chance that some form of afterlife or a bad form of quantum immortality exists, raising awareness of S-risks and donating to S-risk reduction organizations like the Center on Long-Term Risk and the Center for Reducing Suffering likely reduces my risk of going to "hell". See The Dilemma of Worse Than Death Scenarios.

The dilemma is that it does not seem

... (read more)
How to Stop The Deadliest Animal on Earth (A Happier World video)

What do you think the unintended consequences of these efforts to stop malaria could be? Nassim Taleb argues that the Gates Foundation is repeating the errors of Mao Zedong. It's also possible that donating malaria nets could cause local net manufacturers to go out of business, which could increase African dependence on foreign aid in the long run.

Aaron Gertler (+8, 4mo): I've seen you link to the Mao video multiple times. Whenever you're linking to a long resource in a way that isn't self-explanatory, it really helps to share a summary of what you mean. Mao Zedong made (to be charitable) many errors, so that summary is much less informative than "cause local net manufacturers to go out of business".

But since you've already seen the video, you could probably write a brief summary in much less time than it will take, say, five interested readers to watch enough of the video to see what you mean. And you'll be able to use that summary in other threads where you want to raise the same question, so it pays dividends.
Aaron Gertler (+3, 4mo): On the question about AMF's impact on local manufacturers, here's Rob Mather, head of AMF, on exactly those concerns [https://www.againstmalaria.com/NewsItem.aspx?newsitem=Where-do-we-buy-our-nets-from]. The response (copied below) is ten years old, so the information may be out of date.

It sounds like a difficult trade-off, and I'd be happy to see data on manufacturing conditions or other economic conditions in areas where AMF has worked, or on longer-term malaria rates that might reflect the impact of nets becoming less available locally. But I'll note that I haven't really seen a "go out of business" argument that reflects these points:

* Lower malaria rates obviously increase productivity in a vacuum. I'd expect that losing a child, or having to care for a sick child, also has a negative impact on productivity. If one local manufacturer goes out of business, but thousands of additional cases of malaria are prevented, what's the net economic effect?
* If a net manufacturer goes out of business, and AMF's nets only last a few years, how often can that manufacturer (or another one) get back into business? Consider that:
  * Any local business must have been a startup at some point, grown from nothing.
  * Someone who used to run such a business would have useful contacts and experience for starting it again — presumably, that's easier than starting up the first time!
  * If lots more people are now accustomed to sleeping under nets, local demand for nets may be higher post-AMF, another good sign for local manufacturers.
* Given that AMF targets areas with very high numbers of people not sleeping under nets [https://www.givewell.org/charities/amf#Are_LLINs_targeted_at_people_who_do_not_already_have_them], how often are they actually competing with local manufacturers?
* Rob's answer seems to imply that many areas can't actually support local manufacturers (I don't know how common t
Intactivism as a potential Effective Altruist cause area?

I don't know enough about the cultures and internal workings of Australia, Canada, the UK, etc. to give you a good answer for how precisely this shift took place. But the fact of the matter is that something took place in these countries that caused the practice of circumcision to be abandoned en masse.

The point I'm trying to get at is that there's a risk that circumcision won't decline in the US as it has in other countries, and that it will keep being practiced for centuries. The longer circumcision continues, the more culturally entrenched it will get, ... (read more)

Intactivism as a potential Effective Altruist cause area?

I appreciate the breakdown of importance, tractability, and crowdedness here, but I don't think this post uses scout mindset; it's written to persuade, and leaves out a lot of contradictory evidence while overstating the strength of other evidence.

I did link to a number of resources that address the arguments from circumcision proponents though, such as Eric Clopper’s lecture. I also mentioned the possibility of infants not being sentient, which would weaken the case for it as a cause area.

In the end, I decided to downvote; once I'd spent ~90 minutes readi

... (read more)
Shallow evaluations of longtermist organizations

Would you consider reviewing the Center for Reducing Suffering? They are an organization similar to the Center on Long-Term Risk in the sense that their main focus is reducing S-risks, i.e. risks of astronomical suffering, but are less focused on AI. CRS is currently Brian Tomasik's top charity recommendation.

NunoSempere (+2, 4mo): In what capacity are you asking? I'd be more likely to do so if you were asking as a team member, because the organization right now looks fairly small and I would almost be evaluating individuals.
Resilient food

Brian Tomasik's article on the amount of suffering produced by various animal foods is worth reading. If you're not willing to go vegan, it's probably a good idea to generally eat meat/animal products from larger animals, namely beef and milk. Since fewer animals are needed per unit of meat/food, these foods cause far less animal suffering. It may also be a good idea to eat less bread/rice/pasta/cereal and more beans, nuts, and potatoes.

MichaelA (+4, 4mo): Hey Question Mark, this page is for behind-the-scenes discussion of this wiki entry, rather than discussion of the topic. This is analogous to Wikipedia's Talk pages, which each say at the top: (Melodramatic example [https://en.wikipedia.org/wiki/Talk:Adolf_Hitler].) Speaking of which, I'll talk to the moderators about maybe adding a banner like that to the top of these pages to avoid future confusion.

(Also, this entry is about a specific type of alternative foods, not about things like veganism, though unfortunately the term that's currently used is pretty vague and ambiguous. Hopefully in future the common term will be "resilient foods" instead.)
The positive case for a focus on achieving safe AI?

Brian Tomasik believes that there's a chance that AI alignment may itself be dangerous, since a "near miss" in AI alignment could cause vastly more suffering than a paperclip maximizer. In his article on his donation recommendations, he estimates that organizations like MIRI may have a ~38% chance of doing active harm.

Intactivism as a potential Effective Altruist cause area?

In the United States, Canada, and South Korea, the vast majority of circumcisions are secular and performed in hospitals. They persist for social reasons, hospitals operating for profit, and because of various health myths, rather than because of religion. Personally, I am circumcised, and my father is an atheist. 

As for specific policy changes, I will admit that reducing religious circumcision among Jews and Muslims is much more intractable than reducing secular circumcisions among Americans, and an outright ban is almost impossible. Efforts toward r... (read more)

What are some moral catastrophes events in history?

Male genital mutilation is far more widespread and is arguably just as horrible as female genital mutilation.

What are some moral catastrophes events in history?

Abortion is only a moral catastrophe if you reject antinatalism. From an antinatalist/negative utilitarian perspective, one could argue that abortion prevents an entire lifetime worth of suffering. This is especially the case if abortion disproportionately targets fetuses that would have lived lives that are worse than average.