Question Mark

Donation in 2021

If you're still trying to decide what to donate to, Brian Tomasik wrote this article on his donation recommendations, which may give you some useful insight. His top donation recommendations are the Center on Long-Term Risk and the Center for Reducing Suffering. Both of these organizations focus on reducing S-risks, or risks of astronomical suffering. There was also a post here from a few months ago giving shallow evaluations of various longtermist organizations.

AI Timelines: Where the Arguments, and the "Experts," Stand

Brian Tomasik wrote a similar article several years ago on Predictions of AGI Takeoff Speed vs. Years Worked in Commercial Software. In general, AI experts with the most experience working in commercial software tend to expect a soft takeoff, rather than a hard takeoff.

List of AI safety courses and resources

These aren't entirely about AI, but Brian Tomasik's Essays on Reducing Suffering and Tobias Baumann's articles on S-risks are also worth reading. They contain a lot of articles related to futurism and scenarios that could result in astronomical suffering. On the topic of AI alignment, Tomasik wrote this article on the risks of a "near miss" in AI alignment, and how a slightly misaligned AI may create far more suffering than a completely unaligned AI.

Gifted $1 million. What to do? (Not hypothetical)

There was a post here a few months ago giving brief evaluations of various longtermist organizations, which commented briefly on the Qualia Research Institute. It described QRI's pathway to impact as "implausible" and "overly ambitious". What would be your response to this?

Gifted $1 million. What to do? (Not hypothetical)

Brian Tomasik wrote this article on his donation recommendations, which may provide you with some useful insight. His top donation recommendations are the Center on Long-Term Risk and the Center for Reducing Suffering. CLR and CRS are doing research on cause prioritization and reducing S-risks, i.e. risks of astronomical suffering. S-risks are a neglected priority, so additional funding for S-risk research will likely have greater marginal impact than funding for other causes.

What would you do if you had half a million dollars?

Thanks. Although whether increasing the population is a good thing depends on whether you are an average utilitarian or a total utilitarian. With more people, the numbers of both hedons and dolors will increase, with the ratio of hedons to dolors skewed in favor of hedons. If you're a total utilitarian, the net hedons will be higher with more people, so adding more people is rational. If you're an average utilitarian, the ratio of hedons to dolors and the average level of happiness per capita will be roughly the same, so adding more people wouldn't necessarily increase expected utility.
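To make the arithmetic behind that distinction explicit (a rough sketch in my own notation; $N$, $\bar{h}$, and $\bar{d}$ stand for population size and average hedons and dolors per person, and aren't terms from the discussion above):

$$U_{\text{total}} = N(\bar{h} - \bar{d}), \qquad U_{\text{average}} = \bar{h} - \bar{d}$$

If $\bar{h} - \bar{d} > 0$, then $U_{\text{total}}$ grows roughly in proportion to $N$, while $U_{\text{average}}$ stays the same so long as the added people are about as happy as the existing average.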

What would you do if you had half a million dollars?

Sure, and there could be more suffering than happiness in the future, but people go with their best guess about what is more likely, and I think most in the EA community side with a future that has more happiness than suffering.

Would you mind linking some posts or articles assessing the expected value of the long-term future? If the basic argument for the far future being far better than the present is that life now is better than it was thousands of years ago, this is, in my opinion, a weak argument. Even if people like Steven Pinker are right, you are extrapolating billions of years from the past few thousand years. To say that this is wild extrapolation is an understatement. I know Jacy Reese talks about it in this post, yet he admits that the expected value of the far future could be close to zero. Brian Tomasik also wrote this article about how a "near miss" in AI alignment could create astronomical amounts of suffering.

Maybe, but if we can't make people happier, we can always just make more happy people. This would be highly desirable if you have a total view of population ethics.

Sure, it's possible that some form of eugenics or genetic engineering could be implemented to raise the average hedonic set-point of the population and give everyone hyperthymia. But you must remember that millions of years of evolution put our hedonic set-points where they are for a reason. It's possible that genetically engineered hyperthymia might be evolutionarily maladaptive, and that the "super happy people" will die out in the long run.

What would you do if you had half a million dollars?

There is still the possibility that the Pinkerites are wrong, though, and that quality of life is not improving. Even though poverty is far lower and medical care is far better than in the past, there may also be more mental illness and loneliness. The mutational load within the human population may also be increasing. Taking the hedonic treadmill into account, happiness levels should be roughly stable in the long run regardless of life circumstances. One may object that wireheading could become feasible in the far future. Yet wireheading may be evolutionarily maladaptive, and pure replicators may dominate the future instead. Andrés Gómez Emilsson has also talked about this in A Universal Plot - Consciousness vs. Pure Replicators.

Regarding averting extinction and option value, deciding to go extinct is far easier said than done. You can't just convince everyone that life ought to go extinct. Collectively deciding to go extinct would likely require a singleton, as in Thomas Metzinger's BAAN scenario. Even if you could convince a sizable portion of the population that extinction is desirable, these people would simply be removed by natural selection, and the remaining portion of the population would continue existing and reproducing. Thus, if extinction turns out to be desirable, engineered extinction would most likely have to be done without the consent of the majority of the population. In any case, it is probably far easier to go extinct now, while we are confined to a single planet, than it would be during an age of galaxy-wide colonization.

What would you do if you had half a million dollars?

If one values reducing suffering and increasing happiness equally, it isn't clear that reducing existential risk is justified either. Existential risk reduction and space colonization mean that the far future can be expected to have both more happiness and more suffering, which would seem to even out the expected utility. More happiness + more suffering isn't necessarily better than less happiness + less suffering. Focusing on reducing existential risks would only seem to be justified if A) you believe in positive utilitarianism, i.e. that increasing happiness is more important than reducing suffering, B) the far future can reasonably be expected to have significantly more happiness than suffering, or C) reducing existential risk is a terminal value in and of itself.
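A rough way to write out that condition (my own notation, not from the original discussion): let $H$ and $S$ be the expected happiness and suffering in the far future, weighted by $\alpha$ and $\beta$ respectively, so that

$$\mathbb{E}[U] = \alpha H - \beta S$$

If existential risk reduction and space colonization roughly scale both $H$ and $S$ by some factor $k > 1$, the change in expected utility is $(k - 1)(\alpha H - \beta S)$, which is positive only when $\alpha H > \beta S$, i.e. only under something like A) ($\alpha > \beta$) or B) ($H$ significantly larger than $S$).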
