Frank_R

Comments

Is Bitcoin Dangerous?

Thank you for writing this piece! I think there should be a serious discussion about whether crypto is net positive or net negative for the world.

In my opinion, there are a few more ways in which crypto could contribute to existential risk. Since you can accept donations in Monero, it is much easier to make a living by spreading dangerous ideologies (that human extinction is a worthy goal, that political measures against existential risk are totalitarian, etc.). Of course, you can also use crypto to support an atheist blogger in Iran or a whistleblower in China, but it is very hard to tell whether the advantages or the disadvantages weigh more.

Moreover, crypto can be used to fund more dangerous things than "just" artificial pathogens. Think of AGIs built for criminal purposes, imperfect mind uploads that suffer, perfect mind uploads built with the intention of torturing them, underground companies that evade AI regulation in order to cut costs, etc.

These scenarios do not prove that cryptocurrencies are net negative, especially since it may be possible to build DAOs that solve some coordination problems. Nevertheless, I would be happy if more smart people were thinking hard about these issues.

Why do you find the Repugnant Conclusion repugnant?

I suggest the following thought experiment. Imagine that wild animal suffering can be solved. Then it would be possible to populate a square mile with millions of happy insects instead of a few happy human beings. If the Repugnant Conclusion were true, the best world would be populated with as many insects as possible and only a few human beings who make sure that there is no wild animal suffering.

Even more radically, the best thing to do would be to fill as much of the future light cone as possible with hedonium. Neither scenario matches the moral intuitions of most people.
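To make the total-utilitarian arithmetic behind this explicit, here is a minimal sketch in Python; all the welfare numbers are made-up illustrative assumptions, not anything from the original post:

```python
# Toy total-utilitarian comparison with made-up welfare numbers.
humans = 1_000                 # a square mile of very happy humans
human_welfare = 100.0          # welfare per human (arbitrary units)

insects = 50_000_000           # the same square mile filled with happy insects
insect_welfare = 0.01          # tiny positive welfare per insect

human_world = humans * human_welfare      # 100,000
insect_world = insects * insect_welfare   # 500,000

# Under pure total utilitarianism the insect world wins,
# even though each individual life is barely worth living.
print(insect_world > human_world)  # True
```

Whatever per-capita numbers you pick, the ranking flips once the population is large enough, which is exactly what makes the conclusion feel repugnant.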

If you believe the opposite, namely that a world with fewer individuals of higher cognitive function is more valuable, you may arrive at the conclusion that the best world is populated by a few planet-sized AIs.

As other people have said, every kind of population ethics leads to some counter-intuitive conclusions. The most conservative solution is to aim for outcomes that are not bad according to many ethical theories.

Is it no longer hard to get a direct work job?

An important factor is how many people in the EA movement are actively searching for EA jobs and how many applications they write per year. Maybe this would be a good question for the next EA survey.

What's the GiveDirectly of longtermism & existential risk?

Genomic mass screening of wastewater for unknown pathogens, as described here:

A Global Nucleic Acid Observatory for Biodefense and Planetary Health (arXiv:2108.02678)

A few test sites could already help to detect a new (natural or man-made) pandemic at an early stage. Nevertheless, there is room for a few billion dollars if you want to build a global screening network.
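To illustrate why even early, partial coverage could buy meaningful warning time, here is a toy exponential-growth sketch in Python; the doubling time and both detection thresholds are illustrative assumptions of mine, not numbers from the paper:

```python
import math

# Toy model: how much earlier does wastewater sequencing catch an outbreak
# than waiting for clinically obvious cases? All parameters are assumptions.
doubling_time_days = 3.0        # prevalence doubles every 3 days
detect_fraction = 1e-4          # prevalence at which sequencing flags the pathogen
symptomatic_fraction = 1e-2     # prevalence at which cases become clinically obvious

growth_rate = math.log(2) / doubling_time_days

def days_to_reach(prevalence, start=1e-8):
    """Days for exponential growth to carry prevalence from `start` upward."""
    return math.log(prevalence / start) / growth_rate

lead_time = days_to_reach(symptomatic_fraction) - days_to_reach(detect_fraction)
print(f"Sequencing wins roughly {lead_time:.0f} days of warning")  # ~20 days
```

With these assumptions the lead time depends only on the ratio of the two thresholds, which is why lowering the detection threshold (more sites, deeper sequencing) translates directly into extra days of warning.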

Unfortunately, I do not know whether there is any organisation working on this that needs funding.

Open Thread: Winter 2021

I agree with Linch's comment, but I want to mention a further point. Let us suppose that the well-being of all non-human animals between now and the death of the sun is the most important value. This idea can be justified since there are many more animals than humans.

Let us suppose furthermore that the future of human civilization has no impact on the lives of animals in the far future. [I disagree with this point, since future humans might abolish wild animal suffering, or, in the bad case, take wild animals with them when they colonize the stars and thus extend wild animal suffering.] Nevertheless, let us assume that we cannot have any impact on animals in the far future.

In my opinion, the most logical thing would be to focus on the things we can change (x-risks, animal suffering today, etc.) and to develop a stoic attitude towards the things we cannot change.

Frank_R's Shortform

There is a short piece on longtermism in Spiegel Online, which is probably the biggest news site in Germany:

Longtermism: Was ist das - Rettung oder Gefahr? ("Longtermism: what is it - salvation or danger?") - Kolumne - DER SPIEGEL

Google Translate:

Longtermism: Was ist das - Rettung oder Gefahr? - Kolumne - DER SPIEGEL (www-spiegel-de.translate.goog)

As far as I know, this is the first time that longtermism has been mentioned in a major German news outlet. The author mentions some key ideas and acknowledges that short-term thinking is a big problem in society, but he is rather critical of the longtermist movement. For example, he thinks that climate change is neglected within longtermism, and he cites Phil Torres's article on Aeon.

I will probably not find the time to comment on each of the points in the article, and I do not know whether that would be the most productive thing to do, but maybe some of you will find the article interesting.

Robin Hanson on the Long Reflection

I think that it is not possible to delay technological progress when there are strong near-term and/or egoistic reasons to accelerate the development of new technologies.

As an example, let us assume that it will be possible to stop biological aging within a timeframe of 100 years. Of course, you can argue that this is an irreversible change that may or may not be good for humankind's long-term future. But I do not think it is realistic to say, "Let's fund Alzheimer's research and senolytics, but everything that prolongs life expectancy beyond 120 years will be forbidden for the next millennia, until we have figured out whether we want a society of ageless people."

On the other hand, my argument does not rule out that it is possible to delay technologies that are very expensive to develop and have no clear value from an egoistic point of view.

The problem of artificial suffering

I think that it is possible that whole brain emulation (WBE) will be developed before AGI and that there are s-risks associated with WBE. It seems to me that most people in the s-risk community work on AI risks. 

Do you know of any research that deals specifically with the prevention of s-risks from WBE? Since an emulated mind should resemble the original person, it would be difficult to tweak the code of the emulation so that extreme suffering is impossible. Although this may work for AGI, you probably need a different strategy for emulated minds.

On the Expanded Implementation of Nuclear Energy: An evaluation of the present technology and its potential to reduce global CO2 Emissions

Thank you very much for sharing your paper. I have heard that thorium reactors could be a big deal in the fight against climate change. The advantages would be that thorium reserves are larger than uranium reserves and that it is much harder to use the thorium fuel cycle to build nuclear weapons. Do you have an opinion on whether the technology can be developed fast enough and deployed worldwide?

Magnitude of uncertainty with longtermism

I think that the case for longtermism gets stronger if you consider truly irreversible catastrophic risks, for example human extinction. Let's say there is a 10% chance of human extinction. Suppose you propose a policy that reduces this risk by 2 percentage points but introduces a new extinction risk with a probability of 1%. Then it would be wise to enact the policy.
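As a quick sanity check, treating the two risks as independent (a simplifying assumption on my part), the combined extinction probability under the policy is 1 - (1 - 0.08) * (1 - 0.01), which is about 8.9% and thus below the original 10%:

```python
# Back-of-the-envelope check, assuming the two extinction risks are independent.
baseline = 0.10   # extinction risk without the policy
reduced = 0.08    # baseline risk after the policy (reduced by 2 points)
new_risk = 0.01   # extinction risk the policy newly introduces

# Survive only if neither risk materializes.
with_policy = 1 - (1 - reduced) * (1 - new_risk)   # ~0.0892

print(f"without policy: {baseline:.3f}, with policy: {with_policy:.3f}")
```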

This kind of reasoning would probably be wrong if you had a 2% chance of a very good outcome, such as unlimited cheap energy, but an additional extinction risk of 1%.

Moreover, you cannot argue that everything will be OK several thousand years in the future if humankind is eradicated, rather than "just" reduced to a much smaller population size.

Your forum and your blog post contain many interesting thoughts, and I think that the role of high variance in longtermist reasoning is indeed underexplored. Nevertheless, even if everything you have written is correct, it would still be sensible to limit global warming and to care about extinction risks.
