If you're still trying to decide what to donate to, Brian Tomasik wrote this article on his donation recommendations, which may give you some useful insight. His top donation recommendations are the Center on Long-Term Risk and the Center for Reducing Suffering. Both of these organizations focus on reducing S-risks, or risks of astronomical suffering. There was also a post here from a few months ago giving shallow evaluations of various longtermist organizations.
Brian Tomasik wrote a similar article several years ago on Predictions of AGI Takeoff Speed vs. Years Worked in Commercial Software. In general, AI experts with the most experience working in commercial software tend to expect a soft takeoff, rather than a hard takeoff.
These aren't entirely about AI, but Brian Tomasik's Essays on Reducing Suffering and Tobias Baumann's articles on S-risks are also worth reading. They contain a lot of articles related to futurism and scenarios that could result in astronomical suffering. On the topic of AI alignment, Tomasik wrote this article on the risks of a "near miss" in AI alignment, and how a slightly misaligned AI may create far more suffering than a completely unaligned AI.
There was a post here a few months ago giving brief evaluations of various longtermist organizations, which briefly commented on the Qualia Research Institute. It described QRI's pathway to impact as "implausible" and "overly ambitious". How would you respond to this?
Brian Tomasik wrote this article on his donation recommendations, which may provide you with some useful insight. His top donation recommendations are the Center on Long-Term Risk and the Center for Reducing Suffering. CLR and CRS are doing research on cause prioritization and reducing S-risks, i.e. risks of astronomical suffering. S-risks are a neglected priority, so additional funding for S-risk research is likely to have greater marginal impact than funding for less neglected causes.
Thanks. Although whether increasing the population is a good thing depends on whether you are an average utilitarian or a total utilitarian. With more people, the numbers of both hedons and dolors will increase, with the ratio of hedons to dolors skewed in favor of hedons. If you're a total utilitarian, the net hedons will be higher with more people, so adding more people is rational. If you're an average utilitarian, the ratio of hedons to dolors and the average level of happiness per capita will be roughly the same, so adding more people wouldn't necessarily increase expected utility.
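The arithmetic behind that distinction can be made concrete with a toy model. The numbers below are purely illustrative, not empirical estimates of anyone's welfare:

```python
# Toy model: total vs. average utilitarianism under population growth.
# All figures are illustrative assumptions, chosen only to show the structure.

def total_utility(population, hedons_per_person, dolors_per_person):
    """Net hedons summed over everyone (what a total utilitarian maximizes)."""
    return population * (hedons_per_person - dolors_per_person)

def average_utility(population, hedons_per_person, dolors_per_person):
    """Net hedons per capita (what an average utilitarian maximizes)."""
    return hedons_per_person - dolors_per_person

# Same per-capita hedon/dolor profile, two different population sizes.
small = total_utility(1_000, hedons_per_person=10, dolors_per_person=6)
large = total_utility(10_000, hedons_per_person=10, dolors_per_person=6)

# Total view: tenfold population means tenfold net utility.
assert large == 10 * small

# Average view: per-capita utility is identical, so population size is irrelevant.
assert average_utility(1_000, 10, 6) == average_utility(10_000, 10, 6)
```

So on the total view adding people (with net-positive lives) always helps, while on the average view it changes nothing unless it shifts the per-capita figures.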
Brian Tomasik recommends the Center on Long-Term Risk and the Center for Reducing Suffering.
Sure, and there could be more suffering than happiness in the future, but people go with their best guess about what is more likely, and I think most in the EA community expect a future with more happiness than suffering.
Would you mind linking some posts or articles assessing the expected value of the long-term future? If the basic argument for the far future being far better than the present is that life now is better than it was thousands of years ago, that is, in my opinion, a weak argument. Even if people like Steven Pinker are right, yo...
There is still the possibility that the Pinkerites are wrong though, and quality of life is not improving. Even though poverty is far lower and medical care is far better than in the past, there may also be more mental illness and loneliness than in the past. The mutational load within the human population may also be increasing. Taking the hedonic treadmill into account, happiness levels in general should be roughly stable in the long run regardless of life circumstances. One may object to this by saying that wireheading may become feasible in the far fut...
If one values reducing suffering and increasing happiness equally, it isn't clear that reducing existential risk is justified either. Existential risk reduction and space colonization mean that the far future can be expected to have both more happiness and more suffering, which would seem to even out the expected utility. More happiness + more suffering isn't necessarily better than less happiness + less suffering. Focusing on reducing existential risks would only seem to be justified if either A) you believe in Positive Utilitarianism, i.e. increas...
B) the far future can be reasonably expected to have significantly more happiness than suffering
I think EAs who want to reduce x-risk generally do believe that the future should have more happiness than suffering, conditional on no existential catastrophe occurring. I think these people generally argue that quality of life has improved over time and believe that this trend should continue (e.g. Steven Pinker's The Better Angels of Our Nature). Of course life for farmed animals has got worse...but I think people believe we should successfully render factory...
Even if you value reducing suffering and increasing happiness equally, reducing S-risks would likely still greatly increase the expected value of the far future. Efforts to reduce S-risks would almost certainly reduce the risk of extreme suffering being created in the far future, but it's not clear that they would reduce happiness much.
I'm not saying that reducing S-risks isn't a great thing to do, nor that it would reduce happiness, I'm just saying that it isn't clear that a focus on reducing S-risks rather than on reducing existential risk is justified if one values reducing suffering and increasing happiness equally.
Brian Tomasik wrote this article on his donation recommendations, which may provide you with some useful insight. His top donation recommendations are the Center on Long-Term Risk and the Center for Reducing Suffering. In terms of the long-term future, reducing suffering in the far future may be more important than reducing existential risk. If life in the far future is significantly bad on average, space colonization could potentially create and spread a large amount of suffering.
My understanding is that Brian Tomasik has a suffering-focused view of ethics in that he sees reducing suffering as inherently more important than increasing happiness - even if the 'magnitude' of the happiness and suffering are the same.
If one holds a more symmetric view where suffering and happiness are both equally important it isn't clear how useful his donation recommendations are.
The Gates Foundation is financing a campaign to genetically engineer the mosquito population in order to control malaria. Nassim Taleb compares it to Mao Zedong's Four Pests Campaign, in which Mao's attempt to wipe out the sparrow population contributed to the Great Chinese Famine. Taleb argues that genetically modifying mosquitoes could have similarly severe unintended consequences. He also discusses processes that are too fast for nature, drawing a graph that compares the speed at which an ecosystem changes to the corresponding risk of harm, with harm scaling non-linearly with speed.
I'm mostly concerned with S-risks, i.e. risks of astronomical suffering. I view it as a more rational form of Pascal's Wager, and as a form of extreme longtermist self-interest. Since there is still a >0% chance of some form of afterlife or a bad form of quantum immortality existing, raising awareness of S-risks and donating to S-risk reduction organizations like the Center on Long-Term Risk and the Center for Reducing Suffering likely reduces my risk of going to "hell". See The Dilemma of Worse Than Death Scenarios.
The dilemma is that it does not seem
What do you think the unintended consequences of these efforts to stop malaria could be? Nassim Taleb argues that the Gates Foundation is repeating the errors of Mao Zedong. It's also possible that donating malaria nets could cause local net manufacturers to go out of business, which could increase African dependence on foreign aid in the long run.
I don't know enough about the cultures and internal workings of Australia, Canada, the UK, etc. to give you a good answer for how precisely this shift took place. But the fact of the matter is that something took place in these countries that caused the practice of circumcision to be abandoned en masse.
The point I'm trying to get at is that there's a risk that circumcision won't decline in the US as it has in other countries, and that it will keep being practiced for centuries. The longer circumcision continues, the more culturally entrenched it will get, ...
I appreciate the breakdown of importance, tractability, and crowdedness here, but I don't think this post uses scout mindset; it's written to persuade, and leaves out a lot of contradictory evidence while overstating the strength of other evidence.
I did link to a number of resources that address the arguments from circumcision proponents though, such as Eric Clopper’s lecture. I also mentioned the possibility of infants not being sentient, which would weaken the case for it as a cause area.
In the end, I decided to downvote; once I'd spent ~90 minutes readi...
Would you consider reviewing the Center for Reducing Suffering? They are an organization similar to the Center on Long-Term Risk in the sense that their main focus is reducing S-risks, i.e. risks of astronomical suffering, but are less focused on AI. CRS is currently Brian Tomasik's top charity recommendation.
Brian Tomasik's article on the amount of suffering produced by various animal foods is worth reading. If you're not willing to go vegan, it's probably a good idea to generally eat meat/animal products from larger animals, namely beef and milk. Since fewer animals are needed per unit of meat/food, these foods cause far less animal suffering. It may also be a good idea to eat less bread/rice/pasta/cereal and more beans, nuts, and potatoes.
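The intuition behind the larger-animals heuristic is a simple ratio: a cow yields orders of magnitude more meat than a chicken, so far fewer animal-days of life (and of potential suffering) go into each kilogram. A back-of-envelope sketch, using rough illustrative lifespan and yield figures that are my own assumptions rather than numbers from Tomasik's article:

```python
# Back-of-envelope: animal-days of life required per kilogram of meat.
# Lifespan and yield figures are rough illustrative assumptions, not data
# taken from Tomasik's article.

FOODS = {
    # food: (days the animal is alive, kg of edible meat produced)
    "beef":    (550, 220.0),  # assumption: ~18-month lifespan, ~220 kg beef per cow
    "chicken": (42, 1.7),     # assumption: ~6-week lifespan, ~1.7 kg meat per broiler
}

def animal_days_per_kg(food):
    """Days of animal life 'spent' to produce one kg of this food."""
    days_alive, kg_yield = FOODS[food]
    return days_alive / kg_yield

for food in FOODS:
    print(f"{food}: ~{animal_days_per_kg(food):.1f} animal-days per kg")
```

Under these assumptions, a kilogram of chicken costs roughly an order of magnitude more animal-days than a kilogram of beef, which is the core of the argument for preferring products from larger animals (setting aside other factors like welfare conditions and climate impact).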
Brian Tomasik believes that there's a chance that AI alignment may itself be dangerous, since a "near miss" in AI alignment could cause vastly more suffering than a paperclip maximizer. In his article on his donation recommendations, he estimates that organizations like MIRI may have a ~38% chance of doing active harm.
In the United States, Canada, and South Korea, the vast majority of circumcisions are secular and performed in hospitals. They persist because of social pressure, hospitals' profit motives, and various health myths, rather than because of religion. Personally, I am circumcised, and my father is an atheist.
As for specific policy changes, I will admit that reducing religious circumcision among Jews and Muslims is much more intractable than reducing secular circumcisions among Americans, and an outright ban is almost impossible. Efforts toward r...
Male genital mutilation is far more widespread and is arguably just as horrible as female genital mutilation.
Abortion is only a moral catastrophe if you reject antinatalism. From an antinatalist/negative utilitarian perspective, one could argue that abortion prevents an entire lifetime worth of suffering. This is especially the case if abortion disproportionately targets fetuses that would have lived lives that are worse than average.