All of DM's Comments + Replies

DM · 1y

As a caveat, there are some nuances to Wikipedia editing you should be aware of to make sure you're following community standards; I've tried to lay these out in my post. In particular, before investing a lot of time writing a new article, you should check whether someone else has tried that before and/or whether the same content is already covered elsewhere. For example, there have been previous unsuccessful efforts to create an 'Existential risk' Wikipedia article. Those attempts failed in part because the relevant content is already covered in the 'Global catastrophic risks' article.

DM · 1y

One other relevant resource I'd recommend is Will and Toby's joint keynote speech at the 2016 EA Global conference in San Francisco. It discusses some of the history of EA (focusing on the Oxford community in particular) and some historical precursors: https://youtu.be/VH2LhSod1M4

DM · 1y

I enjoyed reading this and would love to see more upbeat and celebratory posts like this. The EA community is very self-critical (which is good!) but we shouldn't lose sight of all the awesome things community members accomplish.

DM · 2y

I recently had to make an important and urgent career decision and found it tremendously valuable to speak with several dozen wonderful people about this at EA Global SF. I'm immensely grateful both to the people giving me advice and to CEA for organizing my favorite EA Global yet.

DM · 2y

Going very broad, I'd recommend going through the EA Forum Topics Wiki and considering the concepts included there. Similarly, you may look at the posts that make up the EA Handbook and look for suitable concepts there.

DM · 2y

At the risk of self-promotion, I wrote a motivational essay on EA a few years ago, Framing Effective Altruism as Overcoming Indifference

michel · 2y
Thanks for sharing! I hadn’t come across this but I like the framing.

DM · 2y

Well done! The article receives about 50,000 page views each year, so there are a lot of people out there who benefit from your contribution.

DirectedEvolution · 2y
Wow, I hadn't thought to check! Thanks for pointing that out, and for writing this post!

Answer by DM · May 25, 2022

Toby Ord explains several related distinctions very clearly in his paper 'The Edges of Our Universe'. Highly recommended: https://arxiv.org/abs/2104.01191

Answer by DM · May 24, 2022

Copied from my post: Notes on "The Myth of the Nuclear Revolution" (Lieber & Press, 2020)

I recently completed a graduate school class on nuclear weapons policy, where we read the 2020 book “The Myth of the Nuclear Revolution: Power Politics in the Atomic Age” by Keir A. Lieber and Daryl G. Press. It is the most insightful nuclear security book I have read to date and while I disagree with some of the book’s outlook and conclusions, it is interesting and well written. The book is also very accessible and fairly short (180 pages). In sum, I believe more

... (read more)

Answer by DM · Apr 16, 2022

In "The Definition of Effective Altruism", William MacAskill writes that 

"Effective altruism is often considered to simply be a rebranding of utilitarianism, or to merely refer to applied utilitarianism...It is true that effective altruism has some similarities with utilitarianism: it is maximizing, it is primarily focused on improving wellbeing, many members of the community make significant sacrifices in order to do more good, and many members of the community self-describe as utilitarians.

But this is very different from effective altruism being the... (read more)

Answer by DM · Apr 16, 2022

The following paper is relevant: Pummer & Crisp (2020). Effective Justice, Journal of Moral Philosophy, 17(4):398-415.

From the abstract: 
"Effective Justice, a possible social movement that would encourage promoting justice most effectively, given limited resources. The latter minimal view reflects an insight about justice, and our non-diminishing moral reason to promote more of it, that surprisingly has gone largely unnoticed and undiscussed. The Effective Altruism movement has led many to reconsider how best to help others, but relatively little ... (read more)

DM · 2y

Great post! While I agree with your main claims, I believe the numbers for the multipliers (especially in aggregate and for ex ante impact evaluations) are nowhere near as extreme in reality as your article suggests for the reasons that Brian Tomasik elaborates on in these two articles:

(i) Charity Cost-Effectiveness in an Uncertain World 

(ii) Why Charities Usually Don't Differ Astronomically in Expected Cost-Effectiveness  

Adam Binks · 2y
I agree - and if the multiplier numbers are lower, then some claims don't hold; e.g., this doesn't hold if the set of multipliers includes 1.5x, for example. Instead we might want to talk about the importance of hitting as many big multipliers as possible, and being willing to spend more effort on these over the smaller (e.g. 1.1x) ones. (But I want to add that I think the post in general is great! Thanks for writing this up!)

gwern · 2y
Well, you know what the stereotype is about women in Silicon Valley high tech companies & their sock needs... (Incidentally, when I wrote a sock-themed essay, which was really not about socks, I was surprised how many strong opinions on sock brands people had, and how expensive socks could be.) If you don't like the example 'buy socks', perhaps one can replace it with real-world examples like spending all one's free time knitting sweaters for penguins. (With the rise of Ravelry and other things, knitting is more popular than it has been in a long time.)

I mostly agree; the uncertain flow-through effects of giving socks to one's colleagues totally overwhelm the direct impact and are probably at least 1/1000 as big as the effects of being a charity entrepreneur (when you take the expected value according to our best knowledge right now). If Ana is trying to do good by donating socks, instead of saying she's doing 1/20,000,000th the good she could be, perhaps it's more accurate to say that she has an incorrect theory of change and is doing good (or harm) by accident.

I think the direct impacts of the best int... (read more)

DM · 2y

Excellent post! I really appreciate your proposal and framing for a book on utilitarianism. In line with your point, William MacAskill and Richard Yetter Chappell also perceived a lack of accessible, modern, and high-quality resources on utilitarianism (and related ideas), motivating the creation of utilitarianism.net, an online textbook on utilitarianism. The website has been getting a lot of traction over the past year, and it's still under development (including plans to experiment with non-text media and translations into other languages).

Thanks for working on this website, it's a great idea!

Possible additions to your list of books (I've only read the first one so forgive me if they aren't as good/relevant as I think they are):

... (read more)
finm · 2y
Big fan of utilitarianism.net — not sure how I forgot to mention it!

JackRoyal · 2y
As a beginner exploring normative ethics this looks very helpful. Thanks!

DM · 2y

Strong upvote! I found reading this very interesting and the results seem potentially quite useful to inform EA community building efforts.

DM · 2y

Hi Timothy, it's great that you found your way here! There's a vibrant German EA community (including an upcoming conference in Berlin in September/October that you may want to join). 

Regarding your university studies, I essentially agree with Ryan's comment. However, while studying in the UK and US can be great, I appreciate that doing so may be daunting and financially infeasible for many young Germans. If you decide to study in Germany and are more interested in the social sciences than in the natural sciences, I would encourage you (like Ryan) to ... (read more)

DM · 2y

Time to up your game, Linch! 😉

Linch · 2y
I'm ahead of both him and MichaelA now. Currently #2!

DM · 2y

I want to express my deep gratitude to you, Patrick, for running EA Radio for all these years! 🙏 Early in my EA involvement (2015-16), I listened to all the EA Radio talks available at the time and found them very valuable. 

DM · 2y

Excellent! A well-deserved second prize in the Creative Writing Contest.

Answer by DM · Dec 25, 2021

In my experience, many EAs have a fairly nuanced perspective on technological progress and aren't unambiguous techno-optimists. 

For instance, a substantial fraction of the community is very concerned about the potential negative impacts of advanced technologies (AI, biotech, solar geoengineering, cyber, etc.) and actively works to reduce the associated risks. 

Moreover, some people in the community have promoted the idea of "differential (technological) progress" to suggest that we should work to (i) accelerate risk-reducing, welfare-enhancing tec... (read more)

DM · 2y

Utilitarianism.net has also recently published an article on Arguments for Utilitarianism, written by Richard Yetter Chappell. (I'm sharing this article since it may interest readers of this post)

Omnizoid · 2y
Yeah, that has some good arguments, thank you for sharing that.  

DM · 2y

Thanks, it's valuable to hear your more skeptical view on this point! I've included it after several reviewers of my post brought it up and still think it was probably worth including as one of several potential self-interested benefits of Wikipedia editing. 

I was mainly trying to draw attention to the fact that it is possible to link a Wikipedia user account to a real person and that it is worth considering whether to include it in certain applications (something I've done in previous applications). I still think Wikipedia editing is a decent signal ... (read more)

DM · 2y

Thanks for this comment, Michael! I agree with all the points you make and should have been more careful to compare Wikipedia editing against the alternatives (I began doing this in an earlier draft of this post and then cut it because it became unwieldy). 

In my experience, few EAs I've talked to have ever seriously considered Wikipedia editing. Therefore, my main objective with this post was to get more people to recognize it as one option of something valuable they might do with a part of their time; I wasn't trying to argue that Wikipedia editing i... (read more)

DM · 2y

I strongly agree that we should learn our lessons from this incident and seriously try to avoid any repetition of something similar. In my view, the key lessons are something like:

  1. It's probably best to avoid paid Wikipedia editing
  2. It's crucial to respect the Wikipedia community's rules and norms (I've really tried to emphasize this heavily in this post)
  3. It's best to really approach Wikipedia editing with a mindset of "let's look for actual gaps in quality and coverage of important articles" and avoid anything that looks like promotional editing

I think it wou... (read more)

Jonathan Zimmermann · 1y
I'm actually working on a similar project, www.oka.wiki, focused on funding Wikipedia translators (whom we train in using Wikipedia and hire in countries with a low cost of living). We currently have ~10 FTE and have already published hundreds of articles. We initially got some pushback from the community, but so far it seems like the solutions we have implemented (around increasing transparency and more thorough quality checks) have helped. I'd be happy to share more about the project and our experience if that's helpful. I was planning to write a post about it in a couple of months once I have gathered more data/experience with this.

Pablo · 2y
I strongly endorse each of these points.

DM · 2y

As an example, look at this overview of the Wikipedia pages that Brian Tomasik has created and their associated pageview numbers (screenshot of the top 10 pages below). The pages created by Brian mostly cover very important (though fringe) topics and attract ~ 100,000 pageviews every year.  (Note that this overview ignores all the pages that Brian has edited but didn't create himself.)

DM · 2y

Someone (who is not me) just started a proposal for a WikiProject on Effective Altruism! To be accepted, this proposal will need to be supported by at least 6-12 active Wikipedia editors. If you're interested in contributing to such a WikiProject, please express "support" for the proposal on the proposal page.  

The proposal passed!! Everyone who's interested should add themselves as a participant on the official WikiProject! https://en.wikipedia.org/wiki/Wikipedia:WikiProject_Effective_Altruism

DM · 2y

This is the best tool I know of to get an overview of Wikipedia article pageview counts (as mentioned in the post); the only limitation with it is that pageview data "only" goes back to 2015.
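
For anyone who prefers to pull these numbers programmatically rather than through the web tool, here is a minimal sketch using the public Wikimedia Pageviews REST API (the endpoint layout and field names are to the best of my knowledge, and the function name and example article titles are purely illustrative, so please double-check against the API documentation before relying on it):

```python
import requests

def yearly_pageviews(article: str, year: int, project: str = "en.wikipedia") -> int:
    """Sum the monthly pageviews of one Wikipedia article for a given year.

    Uses the public Wikimedia Pageviews REST API; note that data "only" goes back to 2015.
    """
    url = (
        "https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/"
        f"{project}/all-access/user/{article}/monthly/{year}0101/{year}1231"
    )
    # Wikimedia asks API users to identify themselves via a User-Agent header.
    resp = requests.get(url, headers={"User-Agent": "pageview-prioritization-sketch"})
    resp.raise_for_status()
    return sum(item["views"] for item in resp.json()["items"])

# Example: compare a few articles to decide which one is worth improving first.
for title in ["Utilitarianism", "Global_catastrophic_risk"]:
    print(title, yearly_pageviews(title, 2021))
```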

Answer by DM · Nov 04, 2021

Create a page on biological weapons. This could include, for instance,

  1. An overview of offensive BW programs over time (when they were started, stopped, funding, staffing, etc.; perhaps with a separate section on the Soviet BW program)
  2. An overview of different international treaties relating to BW, including timelines and membership over time (i.e., the Geneva Protocol, the Biological Weapons Convention (BWC), Australia Group, UN Security Council Resolution 1540)
  3. Submissions of Confidence-Building Measures in the BWC over time (including as a percentage of the
... (read more)

EdMathieu · 2y
Not much yet, but on (5) we now have this world map: Number of biosafety level 4 facilities

(This does sound useful, though I'd note this is also a relatively sensitive area and OWID are - thankfully! - a quite prominent site, so OWID may wish to check in with global catastrophic biorisk researchers regarding whether anything they'd intend to include on such a page might be best left out.)

DM · 2y

For many people interested in but not yet fully committed to biosecurity, it may make more sense to choose a more general master's program in international affairs/security and then concentrate on biosecurity/biodefense to the extent possible within their program.

Some of the best master's programs to consider to this end:

  1. Georgetown University: MA in Security Studies (Washington, DC; 2 years) 
  2. Johns Hopkins University: MA in International Relations (Washington, DC; 2 years)
  3. Stanford University: Master's in International Policy (2 years)
  4. King's College Lon
... (read more)

DM · 2y

The GMU Biodefense Master's is also offered as an online-only degree.

Answer by DM · Oct 29, 2021

Georgetown University offers a 2-semester MSc in "Biohazardous Threat Agents & Emerging Infectious Diseases". Course description from the website: "a one year program designed to provide students with a solid foundation in the concepts of biological risk, disease threat, and mitigation strategies. The curriculum covers classic biological threats agents, global health security, emerging diseases, technologies, CBRN risk mitigation, and CBRN security."

DM · 3y

Website traffic was initially low (i.e. 21k pageviews by 9k unique visitors from March to December 2020) but has since been gaining steam (i.e. 40k pageviews by 20k unique visitors in 2021 to date) as the website's search performance has improved. We expect traffic to continue growing significantly as we add more content, gather more backlinks and rise up the search rankings. For comparison, the Wikipedia article on utilitarianism has received ~480k pageviews in 2021 to date, which suggests substantial room for growth for utilitarianism.net.

trammell · 3y
Thanks!

kuhanj · 3y
Pageviews would also go up a lot if (as suggested in the post) articles from the website were included in intro fellowships and other educational programs. I'll discuss adding these and other articles on the site to our intro syllabi.

One potential concern with adding articles from utilitarianism.net is that many new-to-EA people (in my experience running many fellowships) have negative views towards utilitarianism: e.g. they find it off-putting, think people use it to justify selfish/horrible/misguided actions, think it's too demanding (e.g. the implications of the drowning child argument), or think it's naive. I think utilitarianism is often not brought up very charitably in philosophy and other classes (again, based on my impressions from running fellowships), so I worry about introducing ideas through the lens of utilitarianism. One potential solution is to include these readings in fellowship syllabi after talking about utilitarianism more broadly (for what it's worth, in our fellowship we try to present utilitarianism as we/EAs tend to interpret it and address misconceptions, but we can only do so much), or to bring them up in in-depth fellowships/non-intro programs where what I've brought up might be less of a concern.

DM · 3y

I'm not sure what counts as 'astronomically' more cost effective, but if it means ~1000x more important/cost-effective I might agree with (ii).

This may be the crux - I would not count a ~ 1000x multiplier as anywhere near "astronomical" and should probably have made this clearer in my original comment. 

Claim (i), that the value of the long-term (in terms of lives, experiences, etc.) is astronomically larger than the value of the near-term, refers to differences in value of something like 10^30x.

All my comment was meant to say is that it seems hi... (read more)

DM · 3y

I'd like to point to the essay Multiplicative Factors in Games and Cause Prioritization as a relevant resource for the question of how we should apportion the community's resources across (longtermist and neartermist) causes:

TL;DR: If the impacts of two causes add together, it might make sense to heavily prioritize the one with the higher expected value per dollar.  If they multiply, on the other hand, it makes sense to more evenly distribute effort across the causes.  I think that many causes in the effective altruism sphere interact more multip

... (read more)
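
To make the TL;DR above concrete, here is a toy sketch (my own illustrative numbers, not taken from the essay) showing how the optimal split of a fixed budget differs when the two causes' impacts add versus multiply:

```python
import numpy as np

# Illustrative per-dollar effectiveness of two causes (toy numbers).
a, b = 3.0, 1.0       # cause A is 3x as effective per dollar as cause B
budget = 1.0
x = np.linspace(0.0, budget, 1001)   # amount of budget given to cause A

additive = a * x + b * (budget - x)            # impacts simply add up
multiplicative = (a * x) * (b * (budget - x))  # impacts multiply

print(f"Additive impacts:       give {100 * x[np.argmax(additive)]:.0f}% to cause A")
print(f"Multiplicative impacts: give {100 * x[np.argmax(multiplicative)]:.0f}% to cause A")
# With additive impacts the optimum is to put everything into the more effective
# cause; with multiplicative impacts the optimum is an even 50/50 split (the
# maximum of (a*x)*(b*(budget-x)) is at x = budget/2 regardless of a and b).
```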

DM · 3y

Please see my above response to jackmalde's comment. While I understand and respect your argument, I don't think we are justified in placing high confidence in this  model of the long-term flowthrough effects of near-term targeted interventions. There are many similar more-or-less plausible models of such long-term flowthrough effects, some of which would suggest a positive net effect of near-term targeted interventions on the long-term future, while others would suggest a negative net effect. Lacking strong evidence that would allow us to accurately ... (read more)

Yep, not placing extreme weight. Just medium levels of confidence that when summed over, add up to something pretty low or maybe mildly negative. I definitely am not like 90%+ confidence on the flowthrough effects being negative.

DM · 3y

No, we probably don't. All of our actions plausibly affect the long-term future in some way, and it is difficult to achieve (or be justified in holding) very high levels of confidence about the expected long-term impacts of specific actions. We would require an exceptional degree of confidence to claim that the long-term effects of our specific longtermist intervention are astronomically (i.e. by many orders of magnitude) larger than the long-term effects of some random neartermist interventions (or even doing nothing at all). Of course, this claim is perfectly... (read more)

[anonymous] · 3y
Phil Trammell's point in "Which World Gets Saved" is also relevant:

Jack Malde · 3y
For the record, I'm not really sure about 10^30 times, but I'm open to 1000s of times. Pretty much every action has an expected impact on the future in that we know it will radically alter the future, e.g. by altering the times of conceptions and therefore who lives in the future. But that doesn't necessarily mean we have any idea of the magnitude or sign of this expected impact. When it comes to giving to the Against Malaria Foundation, for example, I have virtually no idea what the expected long-run impacts are or whether they would even be positive or negative - I'm just clueless. I also have no idea what the flow-through effects of giving to AMF are on existential risks. If I'm utterly clueless about giving to AMF but I think giving to an AI research org has an expected value of 10^30, then in a sense my expected value of giving to the AI org is astronomically greater than giving to AMF (although it's sort of like comparing 10^30 to undefined, so it does get a bit weird...). Does that make any sense?

DM · 3y

Agreed, I'd love this feature! I also frequently rely on pageview statistics to prioritize which Wikipedia articles to improve.

DM · 3y

There is a big difference between (i) the very plausible claim that the value of the long-term (in terms of lives, experiences, etc.) is astronomically larger than the value of the near-term, and (ii) the rather implausible claim that interventions targeted at improving the long-term are astronomically more important/cost-effective than those targeted at improving the near-term. It seems to me that many longtermists believe (i) but that almost no-one believes (ii).

Basically, in this context the same points apply that Brian Tomasik made in his essay "Why Ch... (read more)

I tentatively believe (ii), depending on some definitions. I'm somewhat surprised to see Ben and Darius implying it's a really weird view, which makes me wonder what I'm missing.

I don't want the EA community to stop working on all non-longtermist things. But the reason is because I think many of those things have positive indirect effects on the EA community. (I just mean indirect effects on the EA community, and maybe on the broader philanthropic community, I don't mean indirect effects more broadly in the sense of 'better health in poor countries' --> '... (read more)

Habryka · 3y
I think I believe (ii), but it's complicated and I feel a bit confused about it. This is mostly because many interventions that target the near-term seem negative from a long-term perspective, because they increase anthropogenic existential risk by accelerating the speed of technological development. So it's pretty easy for there to be many orders of magnitude in effectiveness between different interventions (in some sense infinitely many, if I think that many interventions that look good from a short-term perspective are actually bad in the long term).

It seems to me that many longtermists believe (i) but that almost no-one believes (ii).

Really? This surprises me. Combine (i) with the belief that we can tractably influence the far future and don't we pretty much get to (ii)?

Davidmanheim · 3y
I'm unwilling to pin this entirely on the epistemic uncertainty, and specifically don't think everyone agrees that, for example, interventions targeting AI safety aren't the only thing that matters, period. (Though this is arguably not even a longtermist position.) But more generally, I want to ask the least-convenient-world question of what the balance should be if we did have certainty about impacts, given that you seem to agree strongly with (i).

DM · 3y

I really appreciated the many useful links you included in this post and would like to encourage others to strive to do the same when writing EA Forum articles.

rgb · 3y

Thanks Darius! It was my pleasure.

DM · 3y

Happy to have you here, Linda! It sounds like you have some really important skills to offer, and I hope you find great opportunities to apply them.

[anonymous] · 3y
Thank you, Darius! I'm excited to use my skills to give back after a long career.

DM · 3y

The listed application documents include a "Short essay (≤500 words)" without further details. Can you say more about what this entails and what you are looking for?

Technology Policy Fellowship · 3y
The specific prompts were included in the application form. Apologies that this was not clear. We’ve now added a note along those lines to the fellowship page. The prompts are:
* Personal statement: “What do you want to get out of this fellowship? Why do you think you are a good fit? Please describe your interest in (and any experience with) policy as well as your area of focus (e.g. AI or biosecurity).”
* Short essay: “What is one specific policy idea related to AI, biosecurity, or related emerging technology areas that you think the US government should pursue today? Why do you think this idea would be beneficial?”
* Statement of motivation: “How do your interests and plans align with Open Philanthropy's goals related to societal impacts of technology?”

DM · 3y

Are non-US citizens who hold a US work authorization disadvantaged in the application process even if they seek to enter a US policy career (and perhaps aim to become naturalized eventually)?

Technology Policy Fellowship · 3y
Non-citizens are eligible to apply for the program if they do not require visa sponsorship in order to receive a placement. For example, someone with a green card should be eligible to work at any think tank. As long as applicants are eligible to work in the roles that they are applying for, non-citizens who aspire to US policy careers will not be disadvantaged. It’s our understanding that it is difficult for non-citizens to get a security clearance, which is required for many federal government roles, and executive branch offices are generally hesitant about bringing on non-citizens. Congressional offices are legally allowed to take on permanent residents (and even some temporary visa holders), but individual offices may adopt policies favoring US citizens. Out of the three categories, we therefore expect non-citizens to have the easiest time matching with a think tank. However, a lot depends on individual circumstances, so it is difficult to generalize. We encourage non-citizens with work authorization to apply, and would work through these sorts of questions with them individually if they reach the later stages of the application process.

Answer by DM · Jul 25, 2021

There is Eliezer Yudkowsky's Harry Potter fan fiction "Harry Potter and the Methods of Rationality" (HPMOR), which conveys many ideas and concepts that are relevant to EA: http://www.hpmor.com/

Please note that there is also a fan-produced audio version of HPMOR: https://hpmorpodcast.com/

MaxRa · 3y
I also really enjoyed the unofficial sequel, Significant Digits. http://www.anarchyishyperbole.com/p/significant-digits.html

DM · 3y

Great initiative! Unfortunately, I cannot seem to find the podcast on either of my two podcast apps (BeyondPod and Podcast Addict). Do you plan to make the podcast available across all major platforms?

D0TheMath · 3y
Anchor sends messages to podcast platforms to get the podcast on them. They say this takes a few business days to complete. In the meantime, you can use Ben Schifman's method.

BenSchifman · 3y
On mobile (but for some reason not on the web version) there is a "more platforms" button that gives you an RSS feed that should work on any player: https://anchor.fm/s/62cbeec4/podcast/rss

DM · 3y

Strongly agree! I'm currently writing an EA Forum post making the case for Wikipedia editing.

ChristianKleineidam · 3y
Given the discussion here and over at LessWrong, where I crossposted this, I think that when it comes to writing a larger post to make a more effective argument, it's important to explain how Wikipedia works. It seems to me that many people think changing Wikipedia articles is just about making an edit and hoping it doesn't get reverted. This works for smaller issues, but for big issues it takes more than one person to create change. I'm currently in a deep discussion on a contentious issue where I wrote a lot. If 3-4 people joined in and backed me up, I could likely make the change, and it wouldn't take much effort for any of those people. When it comes to voting in an election, you don't need to explain to people that even though they didn't get what they wanted, this doesn't mean there wasn't a democratic election. People have a mental model for how elections work, but they don't have one for how decisions on Wikipedia get made, and thus think that if they alone don't have the power to create change, it's not worth speaking up on the talk page.

I also read that people think the goal of Wikipedia is truth, when it isn't: it's to reflect what secondary sources say. While it might be great to have an encyclopedia that has truth as a goal, having a place where you find a synthesis of other secondary sources is valuable. Understanding that helps you know when it's worth speaking up and when it isn't.

DM · 3y

Awesome episode! I really enjoyed listening to it when it came out and was excited for Sam's large audiences across Waking Up and Making Sense to learn about EA in this way.
