matthias_samwald

Comments

Objections to Value-Alignment between Effective Altruists
"Idk, academia doesn't care about the things we care about, and as a result it is hard to publish there. It seems like long-term we want to make a branch of academia that cares about what we care about, but before that it seems pretty bad to subject yourself to peer reviews that argue that your work is useless because they don't care about the future, and/or to rewrite your paper so that regular academics understand it whereas other EAs who actually care about it don't. (I think this is the situation of AI safety.)"

It seems like an overstatement that the topics of EA are completely disjoint from the topics of interest to various established academic disciplines. I do agree that many of the intellectual and methodological approaches are still very uncommon in academia.

It is not hard to imagine ideas from EA (and also the rationality community) becoming a well-recognized part of some branches of mainstream academia. And this would be extremely valuable, because it would unlock resources (both monetary and intellectual) that go far beyond anything that is currently available.

And because of this, it is unfortunate that so little effort goes into establishing EA thinking in academia, especially since it is not *that* hard:

  • In addition to posting articles directly to a forum, consider such a post a pre-print and go the extra mile of also submitting it as a research paper or commentary to a peer-reviewed open-access journal. This way, you gain additional readers from outside the core EA group, and you make it easier for others to cite your work as a reputable source.
    • Note that this also makes it easier to write grant proposals about EA-related topics. Writing a proposal right now, I have the feeling that 50% of my citations would be to blog posts, which feels like a disadvantage.
    • Also note that this increases the pool of EA-friendly reviewers for future papers and grant proposals. Reviewers are often picked from the pool of people who are cited by an article or grant under review, or who pop up in related literature searches. If most of the relevant literature is locked into blog posts, this system does not work.
  • Organize scientific conferences
  • Form an academic society / association

etc.

Objections to Value-Alignment between Effective Altruists

I think in community building, it is a good trajectory to start with strong homogeneity and strong reference to 'stars' who act as reference points and communication hubs, and then to incrementally soften and expand as time passes. It is much harder, or even impossible, to do this in reverse, as that risks yielding a fuzzy community that lacks the mechanisms to attract talent and converge on anything.

With that in mind, I think some of the rigidity of EA thinking in the past might have been good, but the time has come to re-think how the EA community should evolve from here on out.

Critical Review of 'The Precipice': A Reassessment of the Risks of AI and Pandemics

1. Artificial general intelligence, or an AI which is able to out-perform humans in essentially all human activities, is developed within the next century.
2. This artificial intelligence acquires the power to usurp humanity and achieve a position of dominance on Earth.
3. This artificial intelligence has a reason/motivation/purpose to usurp humanity and achieve a position of dominance on Earth.
4. This artificial intelligence either brings about the extinction of humanity, or otherwise retains permanent dominance over humanity in a manner so as to significantly diminish our long-term potential.

I think one problem here is phrasing 2-4 in the singular ("This artificial intelligence"), when the plural would be more appropriate. If the technological means are available, it is likely that many actors will create powerful AI systems. If the offense-defense balance is unfavorable (i.e., it is much easier for the AGI systems available at a given time to do harm than to protect from harm), then a catastrophic event might be triggered by just one of very many AGI systems becoming unaligned (the 'unilateralist's curse').
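To make the 'at least one' framing concrete, here is a simple worked equation under a purely illustrative independence assumption (my addition, not part of the original argument): if each of $N$ deployed AGI systems independently has probability $p$ of becoming unaligned, then

\[
P(\text{at least one unaligned}) = 1 - (1 - p)^N,
\]

which grows quickly with $N$; for example, $p = 0.01$ and $N = 100$ already give $1 - 0.99^{100} \approx 0.63$.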

So I would rephrase your estimates like this:

1. Artificial general intelligence (AGI), or an AI which is able to out-perform humans in essentially all human activities, is developed within the next century.

2. AT LEAST ONE of a large number of AGI systems acquires the capability to usurp humanity and achieve a position of dominance on Earth.

3. AT LEAST ONE of those AGI systems has a reason/motivation/purpose to usurp humanity and achieve a position of dominance on Earth (unaligned AGI).

4. The offense-defense balance between AGI systems available at the time is unfavorable (i.e., defense from unaligned AGI through benevolent AGI is difficult).

5. The unaligned AGI either brings about the extinction of humanity, or otherwise retains permanent dominance over humanity in a manner so as to significantly diminish our long-term potential.

My own estimate when phrasing it this way would be 0.99 * 0.99 * 0.99 * 0.5 * 0.1 ≈ 0.05, i.e., roughly a 5% risk, with high uncertainty.

This would make the risk of an unfavorable offense-defense balance (here estimated at 0.5) one of the major determining parameters in my estimate.
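To make the arithmetic explicit, here is a minimal sketch in Python; the variable names and the sensitivity loop are my own additions, and the numbers are just the illustrative point estimates from above, not a calibrated model:

```python
# Chained estimate from the five steps above (illustrative numbers only).
p_agi        = 0.99  # 1. AGI is developed within the next century
p_capability = 0.99  # 2. at least one AGI system acquires the capability to usurp humanity
p_unaligned  = 0.99  # 3. at least one such system is motivated to do so (unaligned AGI)
p_offense    = 0.5   # 4. the offense-defense balance is unfavorable
p_outcome    = 0.1   # 5. extinction or permanent loss of potential follows

total = p_agi * p_capability * p_unaligned * p_offense * p_outcome
print(f"overall risk: {total:.3f}")  # ~0.049, i.e., roughly 5%

# Because the factors simply multiply, the overall estimate scales linearly
# with the offense-defense parameter:
for p_off in (0.25, 0.5, 0.75):
    risk = p_agi * p_capability * p_unaligned * p_off * p_outcome
    print(f"p_offense={p_off}: risk={risk:.3f}")
```

The loop illustrates the point in the preceding sentence: halving or doubling the offense-defense parameter halves or doubles the overall risk estimate.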

Are there any public health funding opportunities with COVID-19 that are plausibly competitive with Givewell top charities per dollar?

I think so as well. I have started drafting an article about this intervention. Feel free to give feedback / share with others who might have valuable expertise:

Culturally acceptable DIY respiratory protection: an urgent intervention for COVID-19 mitigation in countries outside East Asia?

https://docs.google.com/document/d/11HvoN43aQrx17EyuDeEKMtR_hURzBBrYfxzseOCL5JM/edit#

Should you start your own project now rather than later?

I think the classic 'drop out of college and start your own thing' mentality makes the most sense if your own thing 1) is in the realm of software development, where the job market is still rather forgiving and job opportunities in case of failure abound, and 2) would generate large monetary profit in case of success.

Perhaps many Pareto Fellowship applicants' projects do not meet these criteria, and the applicants therefore want to stay on the safe side regarding personal career risk?

A review of the safety & efficacy of genetically engineered mosquitoes

By the way, let me know if you want to collaborate on driving this forward (I would be very interested!). I think the next steps would be to:

  • Make a more structured review of all the arguments for and against implementation
  • Identify nascent communities and stakeholders that will be vital in deciding on implementation
  • Identify methods for speeding up the process (research, advocacy)

A review of the safety & efficacy of genetically engineered mosquitoes

Excellent review! I started researching this topic myself a few weeks ago, with the intention of writing an overview like this one -- you beat me to it :)

Based on my current reading of the literature, I tend to think that opting for total eradication of mosquito-borne disease vectors (i.e. certain species of mosquito) via CRISPR gene drives seems like the most promising approach.

I also came to the conclusion that accelerating the translation of gene drive technology from research to implementation should be a top priority for the EA community right now. I have little doubt that we will see implementation of these technologies eventually, but the potentially unwarranted delay to implementation could have severe consequences.

Counterarguments to swift implementation seem rather weak and suffer from an enormous status-quo bias. The current death toll of malaria alone exceeds 500,000 people per year. If this were a novel threat (global terrorism, an emerging pandemic), nobody would think for a second that the vague negative consequences of losing Anopheles spp. as pollinators would outweigh the benefits. These are precisely the kinds of biases that EA reasoning and advocacy could help overcome.

What is especially scary is how open-ended the potential debates causing delay are. How long would it take to conduct research to prove or disprove that eradicating Anopheles spp. has negative ecological consequences that outweigh the utility of eradicating malaria? Who decides what the criteria are? What should a committee that decides on the implementation of CRISPR gene drives look like, given that the effects of implementation are bound to cross borders and the technology is without precedent?

I am afraid that implementation might be severely delayed by committees deciding which committees are responsible. This is a terrible situation when every single day of delay causes roughly 1,400 avoidable deaths from malaria alone. I think the EA community has all the right expertise, the arguments, and, increasingly, the connections to help avoid such a tragedy.
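To spell out the per-day arithmetic behind that figure, using the annual toll cited above:

\[
\frac{500{,}000\ \text{deaths/year}}{365\ \text{days/year}} \approx 1{,}370\ \text{deaths/day},
\]

consistent with the roughly 1,400 cited here.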

It seems like these committees are currently forming. We should devise strategies for influencing them in the most positive and effective way we can.

The Effective Altruism Newsletter & Open Thread – February 2016

Or rather: people failing to list high-earning careers that are comparatively easy to get.

I think popularizing earning-to-give among people who are already in high-income professions or on such career trajectories is a very good strategy. But as career advice for young people interested in EA, it seems to be of rather limited utility.

Accomplishments Open Thread - February 2016

This seems like a good idea! Gleb, perhaps you should collect your EA outreach activities (e.g., the 'Valentine’s Day Gift That Saves Lives' article) under such a monthly thread, since the content might be too well-known to most participants of this forum?

Don't sweat diet?

"For me, these kinds of discussions suggest that most self-declared consequentialists are not consequentialists"

Well, too bad, because I am a consequentialist. <<

To clarify, this remark was not directed towards you, but referred to others further up in the thread who argued against moral offsetting.
