Imagine: would you board an airplane if 50% of the engineers who built it said there was a 10% chance that everybody on board dies?
In the context of the OP, the thought experiment would need to be extended.
"Would you risk a 10% chance of a deadly crash to go to [random country]" -> ~100% of people reply no.
"Would you risk a 10% chance of a deadly crash to go to a utopia without material scarcity, conflict, or disease?" -> One would expect a much more mixed response.
The main ethical problem is that in the scenario of global AI progress, everyone is forced to board the plane, irrespective of their preferences.
Idk, academia doesn't care about the things we care about, and as a result it is hard to publish there. Long-term, it seems we want to build a branch of academia that cares about what we care about, but until then it seems pretty bad to subject yourself to peer reviewers who argue that your work is useless because they don't care about the future, and/or to rewrite your paper so that regular academics understand it while other EAs who actually care about it don't. (I think this is the situation of AI safety.)
It seems like an overstatement that the topics of EA are completely disjoint with topics of interest to various established academic disciplines.
I didn't mean to say this; there's certainly overlap. My claim is that (at least in AI safety, and I would guess in other EA areas as well) the reasons we do the research we do are different from those of most academics. It's certainly possible to repackage the research in a format more suited to academia -- but it must be repackaged, which leads to having to
"rewrite your paper so that regular academics understand it whereas other EAs who actually care about it don't."
I think in community building, it is a good trajectory to start with strong homogeneity and strong reference to 'stars' that act as reference points and communication hubs, and then to incrementally soften and expand as time passes. It is much harder, or even impossible, to do this in reverse, as that risks yielding a fuzzy community that lacks the mechanisms to attract talent and converge on anything.
With that in mind, I think some of the rigidity of EA thinking in the past might have been good, but the time has come to re-think how the EA community should evolve from here on out.
1. Artificial general intelligence, or an AI which is able to out-perform humans in essentially all human activities, is developed within the next century.
2. This artificial intelligence acquires the power to usurp humanity and achieve a position of dominance on Earth.
3. This artificial intelligence has a reason/motivation/purpose to usurp humanity and achieve a position of dominance on Earth.
4. This artificial intelligence either brings about the extinction of humanity, or otherwise retains permanent dominance over humanity in a manner so as to significan...
I think so as well. I have started drafting an article about this intervention. Feel free to give feedback / share with others who might have valuable expertise:
Culturally acceptable DIY respiratory protection: an urgent intervention for COVID-19 mitigation in countries outside East Asia?
https://docs.google.com/document/d/11HvoN43aQrx17EyuDeEKMtR_hURzBBrYfxzseOCL5JM/edit#
I think the classic 'drop out of college and start your own thing' mentality makes the most sense if your own thing 1) is in the realm of software development, where the job market is still rather forgiving and job opportunities abound in case of failure, and 2) would generate large monetary profit in the case of success.
Perhaps many Pareto fellowship applications do not meet these criteria, and applicants therefore prefer to err on the safe side regarding personal career risk?
By the way, let me know if you want to collaborate on driving this forward (I would be very interested!). I think next steps would be to
Excellent review! I started researching this topic myself a few weeks ago, with the intention of writing an overview like this one -- you beat me to it :)
Based on my current reading of the literature, I tend to think that opting for total eradication of mosquito-borne disease vectors (i.e. certain species of mosquito) via CRISPR gene drives seems like the most promising approach.
I also came to the conclusion that accelerating the translation of gene drive technology from research to implementation should be a top priority for the EA community right now. I ...
Or rather: people failing to list high earning careers that are comparatively easy to get.
I think popularizing earning-to-give among people who are already in high-income professions or on high-income career trajectories is a very good strategy. But as career advice for young people interested in EA, it seems to be of rather limited utility.
This seems like a good idea! Gleb, perhaps you should collect your EA outreach activities (e.g., the 'Valentine’s Day Gift That Saves Lives' article) under such a monthly thread, since the content might be too well-known to most of the participants of this forum?
"For me, these kinds of discussions suggest that most self-declared consequentialists are not consequentialists"
"Well, too bad, because I am a consequentialist."
To clarify, this remark was not directed towards you, but referred to others further up in the thread who argued against moral offsetting.
Perhaps you could further describe 1) Why you think that offsetting meat consumption is different from offsetting killing a person 2) How meat consumption can "affect the extent to which one is able to be ethically productive"
For me, these kinds of discussions suggest that most self-declared consequentialists are not consequentialists, but deontologists using consequentialist decision making in certain aspects of their lives. I think acknowledging this fact would be a step towards greater intellectual honesty.
I think a very good heuristic is to look out for current social taboos. Some examples that come to mind:
Psychopharmacology. There is still a huge taboo against handing out drugs that make people feel better because of fears of abuse or the simple 'immorality' of the idea. Many highly effective drug development leads might also not be pursued because of fear of abuse.
End-of-life suffering, effective palliative medicine and assisted suicide. A lot of extreme suffering might be concentrated around the last months and years of life, both in developing and in developed nations. Most people prefer not to think about it too hard, and the topic is very loaded with religious concerns.
The thing that is quite unique about EAG compared to other conferences is the strong reliance on one-on-ones planned through the app. I think this comes with some advantages, but also downsides.
In 'normal' scientific conferences, one would approach people in a less planned, more organic way. Discussions would usually involve more than two people. The recent EAG London conference felt like it was so dominated by pre-planned one-on-ones that these other ways of interaction suffered.