All of matthias_samwald's Comments + Replies

What is quite unique about EAG compared to other conferences is the strong reliance on one-on-ones planned through the app. I think this comes with some advantages, but also downsides.

In 'normal' scientific conferences, one would approach people in a less planned, more organic way. Discussions would usually involve more than two people. The recent EAG London conference felt like it was so dominated by pre-planned one-on-ones that these other ways of interaction suffered.

Imagine: would you board an airplane if 50% of airplane engineers who built it said there was a 10% chance that everybody on board dies?

In the context of the OP, the thought experiment would need to be extended.

"Would you risk a 10% chance of a deadly crash to go to [random country]" -> ~100% of people reply no.

"Would you risk a 10% chance of a deadly crash to go to a Utopia without material scarcity, conflict, or disease?" -> One would expect a much more mixed response.

The main ethical problem is that in the scenario of global AI progress, everyone is forced to board the plane, irrespective of their preferences.

6
Linch
1y
I agree with you more than with Akash/Tristan Harris here, but note that death and Utopia are not the only possible outcomes! It's more like "Would you risk a 10% chance of a deadly crash for a chance to go to a Utopia without material scarcity, conflict, disease?"
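The thought experiment above is really an expected-value comparison, and it can be made concrete with a toy calculation. The utility numbers below are made-up illustrations (they appear nowhere in the comments); the point is only that the same 10% risk flips from unacceptable to arguable once the upside changes.

```python
# Toy expected-value sketch of the airplane thought experiment.
# All utility values are illustrative assumptions, not from the comments.

def expected_utility(p_crash, u_crash, u_arrive):
    """Expected utility of boarding: p_crash chance of a deadly crash,
    otherwise arriving at the destination."""
    return p_crash * u_crash + (1 - p_crash) * u_arrive

U_STATUS_QUO = 0.0   # staying home
U_CRASH = -100.0     # everybody on board dies

# Destination is a random country: little to gain, so nobody boards.
random_country = expected_utility(0.10, U_CRASH, u_arrive=1.0)

# Destination is a post-scarcity Utopia: large upside, so the same
# 10% risk can now look worth taking to some people.
utopia = expected_utility(0.10, U_CRASH, u_arrive=50.0)

print(random_country)  # -9.1: worse than staying home
print(utopia)          # 35.0: better than staying home
```

The mixed survey responses then just reflect disagreement about how large the Utopia upside is relative to the disutility of death; the ethical problem in the original comment is separate, namely that no individual gets to opt out of the gamble.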
Idk, academia doesn't care about the things we care about, and as a result it is hard to publish there. It seems like long-term we want to make a branch of academia that cares about what we care about, but before that it seems pretty bad to subject yourself to peer reviews that argue that your work is useless because they don't care about the future, and/or to rewrite your paper so that regular academics understand it whereas other EAs who actually care about it don't. (I think this is the situation of AI safety.)

It seems like an overstatem... (read more)

It seems like an overstatement that the topics of EA are completely disjoint with topics of interest to various established academic disciplines.

I didn't mean to say this, there's certainly overlap. My claim is that (at least in AI safety, and I would guess in other EA areas as well) the reasons we do the research we do are different from those of most academics. It's certainly possible to repackage the research in a format more suited to academia -- but it must be repackaged, which leads to

rewrite your paper so that regular academics unders
... (read more)

I think in community building, it is a good trajectory to start with strong homogeneity and strong reliance on 'stars' that act as reference points and communication hubs, and then to incrementally soften and expand as time passes. It is much harder, or even impossible, to do this in reverse, as that risks yielding a fuzzy community that lacks the mechanisms to attract talent and converge on anything.

With that in mind, I think some of the rigidity of EA thinking in the past might have been good, but the time has come to re-think how the EA community should evolve from here on out.

1. Artificial general intelligence, or an AI which is able to out-perform humans in essentially all human activities, is developed within the next century.
2. This artificial intelligence acquires the power to usurp humanity and achieve a position of dominance on Earth.
3. This artificial intelligence has a reason/motivation/purpose to usurp humanity and achieve a position of dominance on Earth.
4. This artificial intelligence either brings about the extinction of humanity, or otherwise retains permanent dominance over humanity in a manner so as to significan
... (read more)

I think so as well. I have started drafting an article about this intervention. Feel free to give feedback / share with others who might have valuable expertise:

Culturally acceptable DIY respiratory protection: an urgent intervention for COVID-19 mitigation in countries outside East Asia?

https://docs.google.com/document/d/11HvoN43aQrx17EyuDeEKMtR_hURzBBrYfxzseOCL5JM/edit#

I think the classic 'drop out of college and start your own thing' mentality makes most sense if your own thing 1) is in the realm of software development, where the job market is still rather forgiving and job opportunities in case of failure abound, and 2) would generate large monetary profit in the case of success.

Perhaps many Pareto fellowship applications do not meet these criteria and therefore applicants want to be more on the safe side regarding personal career risk?

By the way, let me know if you want to collaborate on driving this forward (I would be very interested!). I think next steps would be to

  • Make a more structured review of all the arguments for and against implementation
  • Identify nascent communities and stakeholders that will be vital in deciding on implementation
  • Identify methods for speeding up the process (research, advocacy)

Excellent review! I started researching this topic myself a few weeks ago, with the intention of writing an overview like this one -- you beat me to it :)

Based on my current reading of the literature, I tend to think that opting for total eradication of mosquito-borne disease vectors (i.e. certain species of mosquito) via CRISPR gene drives seems like the most promising approach.

I also came to the conclusion that accelerating the translation of gene drive technology from research to implementation should be a top priority for the EA community right now. I ... (read more)


Or rather: people failing to list high earning careers that are comparatively easy to get.

I think popularizing earning-to-give among persons who already are in high-income professions or career trajectories is a very good strategy. But as career advice for young people interested in EA, it seems to be of rather limited utility.

This seems like a good idea! Gleb, perhaps you should collect your EA outreach activities (e.g., the 'Valentine’s Day Gift That Saves Lives' article) under such a monthly thread, since the content might be too well-known to most of the participants of this forum?

1
Gleb_T
8y
Oh, thanks for the idea! This might work well.

"For me, these kinds of discussions suggest that most self-declared consequentialists are not consequentialists"

"Well, too bad, because I am a consequentialist."

To clarify, this remark was not directed towards you, but referred to others further up in the thread who argued against moral offsetting.

0
kbog
8y
Oh, right. Yes that's true, sorry for misunderstanding.

Perhaps you could further describe 1) Why you think that offsetting meat consumption is different from offsetting killing a person 2) How meat consumption can "affect the extent to which one is able to be ethically productive"

For me, these kinds of discussions suggest that most self-declared consequentialists are not consequentialists, but deontologists using consequentialist decision making in certain aspects of their lives. I think acknowledging this fact would be a step towards greater intellectual honesty.

2
Benjamin_Todd
8y
Many EAs don't self-declare as consequentialists. And even if you think consequentialism is the best guess moral theory, due to moral uncertainty, you should still care about what other perspectives might say. https://80000hours.org/2012/01/practical-ethics-given-moral-uncertainty/
1
kbog
8y
"Perhaps you could further describe 1) Why you think that offsetting meat consumption is different from offsetting killing a person"

Because meat consumption has the potential to save money and/or time for the average person, which murder doesn't.

"2) How meat consumption can "affect the extent to which one is able to be ethically productive""

As the OP, Katja Grace, and others have pointed out, if being vegetarian incurs any social or monetary costs in the range of under, say, $100 a year, then it's far more inefficient than donations simply to animal charities, while human poverty charities and existential risk charities could do even better. Personally, I save $100 a month by living in an apartment without a kitchen where I can't cook meat substitutes, I save an additional $10 or so per week on groceries (based on comparison between my expenses when I was living vegetarian and now), I spend less of my time cooking, and my diet is more complete.
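The dollar figures kbog reports can be annualized as a quick back-of-the-envelope check (figures taken from the comment above; the weekly-to-yearly conversion is the only assumption added here):

```python
# Annualize the savings reported in the comment above.
apartment_saving = 100 * 12   # $100/month from the kitchen-less apartment
grocery_saving = 10 * 52      # roughly $10/week on groceries

annual_saving = apartment_saving + grocery_saving
print(annual_saving)  # 1720
```

That total sits well above the ~$100/year threshold mentioned in the same comment, which is the crux of the cost-effectiveness comparison being made.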

I think a very good heuristic is to look out for current social taboos. Some examples that come to mind:

  • Psychopharmacology. There is still a huge taboo against handing out drugs that make people feel better because of fears of abuse or the simple 'immorality' of the idea. Many highly effective drug development leads might also not be pursued because of fear of abuse.

  • End-of-life suffering, effective palliative medicine and assisted suicide. A lot of extreme suffering might be concentrated around the last months and years of life, both in developing and in developed nations. Most people prefer not to think about it too hard, and the topic is very loaded with religious concerns.