Recent Discussion

When talking about causes, I'd like to see comments like "there hasn't been enough analysis of the effectiveness of meta-science interventions". 

Khorton (4 points · 16h): I assume you don't have a problem with it when people are making the claim specifically about EA, as opposed to the wider world? Like if I said "Building teams that come from a variety of relevant backgrounds and diverse demographics is neglected in EA", even if you disagreed with the statement, you probably wouldn't mind the "neglected in EA" part? Although I agree that "neglected in EA" often leads to lazy writing... I think the argument above could be phrased a lot more clearly.
Mati_Roy (3 points · 17h): Although, maybe the EA Community has a certain prestige that makes it a good position from which to propagate ideas through society. So if, for example, the EA Community broadly acknowledged anti-aging as an important problem, even without working much on it, it might get other people to work on it that would have otherwise worked on something less important. So in that sense it might make sense. But still, I would prefer if it was phrased more explicitly as such, like "The EA Community should acknowledge X as an important problem". Posted a similar version of this comment here: https://www.facebook.com/groups/effective.altruists/permalink/3166557336733935/?comment_id=3167088476680821&reply_comment_id=3167117343344601

80,000 Hours has outlined many career paths where it is possible to do an extraordinary amount of good. To maximize my impact, I should consider these careers. Many of these paths are very competitive and require enormous specialization. I will not be done with my studies for potentially many years to come. How will the landscape look then? Will there still be the same need for an AI specialist, or will entirely new pressing issues have crept up on us, as operations management recently did so swiftly?

80,000 Hours is working hard to identify key bottlenecks in the community. MIRI has long s... (Read more)

I'd agree with the idea that people should take personal fit very seriously, with passion/motivation for a career path being a key part of that. And I'd agree with your rationale for that.

But I also think that many people could become really, genuinely fired up about a wider range of career paths than they might currently think (if they haven't yet tried or thought about those career paths). And I also think that many people could be similarly good fits for, or similarly passionate about, multiple career paths. For these people, which career path... (read more)

MichaelA (2 points · 5h): I've seen indications and arguments that suggest this is true when 80,000 Hours releases data or statements they don't want people to take too seriously. Do you (or does anyone else) have thoughts on whether it's the case that anyone releasing "substandard" (but somewhat relevant and accurate) data on a topic will tend to be worse than there being no explicit data on a topic?

Basically, I'm tentatively inclined to think that some explicit data is often better than no explicit data, as long as it's properly caveated, because people can just update their beliefs by only the appropriate amount. (Though that's definitely not fully or always true; see e.g. here: https://forum.effectivealtruism.org/posts/pYHZ8dhZWPCSZ66dX/i-knew-a-bit-about-misinformation-and-fact-checking-in-2017?commentId=tLokkQXf7M6WpNW6f .)

But then 80k is very prestigious and trusted by much of the EA community, so I can see why people might take statements or data from 80k too seriously, even if 80k tells them not to. So maybe it'd be net positive for something like what the OP requests to be done by the EA Survey or some random EA, but net negative if 80k did it?
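(To illustrate that last point with a minimal sketch: if readers know a signal is noisy, a Bayesian update on it moves beliefs only slightly. All numbers below are invented for illustration; nothing here comes from 80,000 Hours or the comment above.)

```python
# Toy Bayesian update for a binary hypothesis (invented numbers).
# A weakly diagnostic ("substandard" but caveated) signal shifts beliefs
# less than a strongly diagnostic one -- the sense in which readers can
# update "by only the appropriate amount".
def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """P(H | E) via Bayes' rule."""
    num = prior * p_evidence_if_true
    return num / (num + (1 - prior) * p_evidence_if_false)

prior = 0.5
print(posterior(prior, 0.9, 0.2))  # high-quality evidence: ~0.82, a large update
print(posterior(prior, 0.6, 0.5))  # noisy, caveated evidence: ~0.55, a small update
```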
3 suggestions about jargon in EA
80 points · 6d · 5 min read

Summary and purpose

I suggest that effective altruists should:

  1. Be careful to avoid using jargon to convey something that isn’t what the jargon is actually meant to convey, and that could be conveyed well without any jargon.
    • As examples, I’ll discuss misuses I’ve seen of the terms existential risk and the unilateralist’s curse, and the jargon-free statements that could’ve been used instead.
  2. Provide explanations and/or hyperlinks to explanations the first time they use jargon.
  3. Be careful to avoid implying jargon or concepts originated in EA when they did not.

I… (Read more)

kl (3 points · 6h): Thanks for this post! Jargon has another important upside: its use is a marker of in-group belonging. So, especially IRL, employing jargon might be psychologically or socially useful for people who are not immediately perceived as belonging in EA, or who feel uncertain whether they are being perceived as belonging or not. Because jargon is a marker of in-group belonging, I fear that giving an unprompted explanation could be alienating to someone who infers that jargon is being explained to them because they're perceived as not belonging. (E.g., "I know what existential risk is! Would this person feel the need to explain this to me if I were white/male/younger?") In some circumstances, explaining jargon unprompted will be appreciated and inclusionary, but I think it's a judgment call.

Yes, I think these are all valid points. So my suggestion would indeed be to often provide a brief explanation and/or a link, rather than to always do that. I do think I've sometimes seen people explain jargon unnecessarily in a way that's a bit awkward and presumptuous, and perhaps sometimes been that person myself.

In my articles for the EA Forum, I often include just links rather than explanations, as that gives readers the choice to get an explanation if they wish. And in person, I guess I'd say that it's worth:

  • entertaining both the
... (read more)

In the recent article Some promising career ideas beyond 80,000 Hours' priority paths, Arden Koehler (on behalf of the 80,000 Hours team) highlights the pathway “Become a historian focusing on large societal trends, inflection points, progress, or collapse”. I share the view that historical research is plausibly highly impactful, and I’d be excited to see more people explore that area.

I commented on that article to list some history topics I’d be excited to see people investigate, as well as to provide some general thoughts on the intersection of history resea... (Read more)

MathiasKirkBonde (3 points · 6h): Great write-up, though I feel slight regret reading it, as there are now a further 10 things in my life to be annoyed I don't know more about! Maybe it would be valuable to try crowdsourcing research such as this? Start a shared G Suite document where we can coordinate and collaborate. I would find it fairly fun to research one of these topics in my free time, but doubt I'd commit the full energy it requires to produce a thorough analysis. I could state publicly, somewhere others can see, that I'm willing to work 7 hours a week on, e.g., studying societal collapse. Then someone else looking to do the same can coordinate and collaborate with me, and we could potentially produce a much better output. Even if collaboration turns out to be unfruitful, coordination might at least prevent duplicated work.

That definitely sounds good to me. My personal impression is that there are many EAs who could be doing some good research on the side (in a volunteer-type capacity), and many research questions worth digging into, and that we should therefore be able to match these people with these questions and get great stuff done. And it seems good to have some sort of way of coordinating that.

Though I also get the impression that this is harder than it sounds, for reasons I don't fully understand, and that mentorship (rather than just collaboration) is also qui... (read more)

This quote from Kelsey Piper:

Maybe pretty early on, it just became obvious that there wasn’t a lot of value in preaching to people on a topic that they weren’t necessarily there for, and that I had a lot of thoughts on the conversations people were already having.
Then I think one thing you can do to share any reasoning system (though it works particularly well for effective altruism) is just to apply it consistently, in a principled way, to problems that people care about. Then, they'll see whether your tools look like useful tools. If th
... (read more)

This is just a thought I had today while listening to the most recent episode with Ben Garfinkel. There are times, when listening to 80,000 Hours episodes, when I wonder what an expert on 'the other side of the argument' would say in response to a particular point. Hosts like Rob Wiblin and Howie Lempel do a good job of challenging guests in this way, but it's not quite the same as having two experts on opposite sides of an argument respond to each other in real time with a moderator.

An example of such a debate was a recent episode on The Future of Life Institute podcast where Stuart Russell a... (Read more)

I would love to see events or podcasts for good-faith debates on important topics (even those that fall outside of the top EA causes) from any EA-aligned people or organisations.

I think it could help us engage productively with audiences we don't usually reach, and would be a great way of demonstrating our values and methods to a broader audience.

As an example, EA Philadelphia hosted an animal welfare debate on Abolitionism vs Welfarism a few months ago (which you can view here), which went really well and was one of our highest attended events.

Concern, and hope
100 points · 6d · 1 min read

I am worried.

The last month or so has been very emotional for a lot of people in the community, culminating in the Slate Star Codex controversy of the past two weeks. On one side, we've had multiple posts talking about the risks of an incipient new Cultural Revolution; on the other, we've had someone accuse a widely-admired writer associated with the movement of abetting some pretty abhorrent worldviews. At least one prominent member of an EA org I know, someone I deeply respect, deleted their Forum account this week. I expect there are more I don't know about.

Both groups feel like they and th

... (Read more)

The witch hunts were sometimes endorsed/supported by the authorities, and other times not, just like the Red Guards:

Under Charlemagne, for example, Christians who practiced witchcraft were enslaved by the Church, while those who worshiped the Devil (Germanic gods) were killed outright.

By early 1967 Red Guard units were overthrowing existing party authorities in towns, cities, and entire provinces. These units soon began fighting among themselves, however, as various factions vied for power amidst each one’s claims that it was the true representative o

... (read more)

[Content warning: discussion of violence and child abuse. No graphic images in this post, but some links may contain disturbing material.]

In July 2017, a Facebook user posts a video of an execution. He is a member of the Libyan National Army, and in the video, kneeling on the ground before his brigade, are twenty people dressed in prisoner orange and wearing bags over their heads. In the description, the uploader states that these people were members of the Islamic State. The brigade proceeds to execute the prisoners, one by one, by gunshot.

The video was uploaded along with other execu... (Read more)

Problem areas beyond 80,000 Hours' current priorities mentions "Broadly promoting positive values".


I have some questions:

What are the values that are needed to further EA's interests?

Where (in which cultures or areas of culture at large) are they deficient, or where might they become deficient in the future?

Problem areas... mentions "altruism" and "concern for other sentient beings". Maybe those are the two that EA is most essentially concerned with. If so, what are the support values needed for maximizing those values?

A few free ideas occasioned by this:

1. The fact that this is a government paper makes me think of "people coming together to write a mission statement." To an extent, values are agreed upon by society, and it's good to bear that in mind: working with widespread values instead of against them, accepting that to an extent values are socially constructed (or aren't, but the crowd could be objectively right and you wrong), and adjusting to what's popular instead of using a lot of energy to try to change things.

2. My first reaction ... (read more)

Longtermism ⋂ Twitter
48 points · 1mo · 1 min read

There's now a medium-sized amount of discussion of longtermism on Twitter, and I've noticed a bunch of people newly using the term (such as some of those listed by Stefan Schubert here).

Twitter seems like a potentially underrated platform for longtermists. Like the EA Forum, Twitter promotes "liked" content. It allows us to follow content of interest to us. But it also differs from the EA Forum in some ways:

  • It promotes concise discussion.
  • It allows distribution of content to non-EA audiences.
  • It allows reading content from non-EA contributors.
  • It promotes content from top contributo
... (Read more)

Counterpoints:

  • "If you have a large follower account twitter is mostly experienced like this: you share a thought optimized for group x. Members of group y,z, and v automatically start sharing it as the textbook example of why group x deserves crucifixion." https://twitter.com/Scholars_Stage/status/1281583686295719936
  • "Bad faith is the condition of the modern internet, and shitposting is the lingua franca of the online world. And not just online: A troll is president. Trolling won. Perhaps we can agree that these platforms aren't suited
... (read more)

Resources spent

  • Leverage Research has now existed for over 7.5 years [1]
  • Since 2011, it has consumed over 100 person-years of human capital.
  • From 2012-16, Leverage Research spent $2.02 million, and the associated Institute for Philosophical Research spent $310k. [2][3]

Outputs

Some of the larger outputs of Leverage Research include:

  • Work on Connection Theory: although this does not include the initial creation of the theory itself, which was done by Geoff Anders prior to founding Leverage Research
  • Contributions to productivity of altruists via the a
... (Read more)

About two years have now passed since the post. Main updates:

  • Leverage Research appears to be just four people. They have announced new plans, and released a short introduction to their interests in early stage science, but not any other work. Their history of Leverage Research appears to have stalled at the fourth chapter.
  • Reserve seems to be ten people, about seven of whom were involved with Leverage Research. Reserve Rights is up by about 160% since being floated two years ago.
  • Paradigm Research is now branding as a self-help organisation.

I feel that older EA Forum posts are not read nearly as much as they should be. Hence, I collected the ones that seemed to be the most useful and still relevant today. I recommend going through this list in the same way you would go through the frontpage of this forum: reading the titles and clicking on the ones that seem interesting and relevant to you. Note that you can hover over links to see more details about each post.

Also note that many of these posts have lower karma scores than most posts posted nowadays. This is in large part because until September 2018, all votes were

... (Read more)

July 9 update:

Development Media International's COVID-19 prevention campaign (28:52) costs, at the margin, about USD 0.017 per person informed. The cost per life saved is between $50 and $1,000 (31:55–32:20). In comparison, EA Cameroon's cost is USD 0.0283 per person. However, EACAM adds personal delivery of informational flyers to local community leaders, workshops on making one's own masks, and newspaper articles. Also, if only some of the activities to inform the Santa community are selected, the cost per person will decrease. Thus, donating to EA Cameroon for the COVID-19 prevention campaig... (Read more)
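(For illustration only, a back-of-envelope extrapolation from the figures above. It rests on an assumption the comment does not make: that EACAM's campaign is about as effective per person informed as DMI's.)

```python
# Sketch: infer DMI's implied "people informed per life saved", then, under the
# stated assumption, what EACAM's cost per life saved would be.
dmi_cost_per_person = 0.017      # USD per person informed (DMI, at the margin)
dmi_cost_per_life = (50, 1_000)  # USD per life saved (quoted range)
eacam_cost_per_person = 0.0283   # USD per person informed (EA Cameroon)

for cost_per_life in dmi_cost_per_life:
    people_per_life = cost_per_life / dmi_cost_per_person
    implied_eacam = eacam_cost_per_person * people_per_life
    print(f"${cost_per_life}/life -> {people_per_life:,.0f} people informed/life "
          f"-> EACAM ~${implied_eacam:,.0f}/life")
# Under this assumption, EACAM would land at roughly $83-$1,665 per life saved.
```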

Sure! I am currently connecting with EAs in sub-Saharan Africa with the intention of building the EA community there. During these conversations, I identified a project that the EA community may be interested in and offered to edit the writing of EA Cameroon.

Konrad Seifert and I are writing "a field guide to place future generations at the core of policy-making". To make it maximally relevant to the EA community, please ask us related questions, share criticism, and give feedback on the current version of the book proposal.

Let us know your thoughts, questions and feedback in the comments or via email max@eageneva.org by 31 July 2020. Thank you in advance!

Read the full proposal here (~2700 words). Or get a quick overview below:

Goal

Longtermist scholarship still needs to translate its ideas into policy change to achieve large-scale impa... (Read more)

I also want to clarify that my statement that this was "low-medium value" was based on the current plan – I think there is valuable stuff here that could be teased out to make this useful to people in policy.

A good book summarising the academic work on how policy is made, how change happens, and how external influences work – mapping out the whole space and giving an overview of different perspectives – could be really useful.

I wouldn’t give up on this idea – just maybe develop it further – can talk more if useful.

Summary

I argue that space governance has been overlooked as a potentially promising cause area for longtermist effective altruists. While many uncertainties remain, there is a reasonably strong case that such work is important, time-sensitive, tractable and neglected, and should therefore be part of the longtermist EA portfolio.

I also suggest criteria for what good space governance should look like, and outline possible directions for further work on the topic.

What is space governance?

It’s plausible that humans, or their successors, will eventually be able to colonise space. There are alrea

... (Read more)

Why is that? I don't know much about the area, but my impression is that we currently don't know what kind of space governance would be good from an EA perspective, so we can't advocate for any specific improvement. Advocating for more generic research into space governance would probably be net positive, but it seems a lot less leveraged than having EAs look into the area, since I expect longtermists to have different priorities and pay attention to different things (e.g. that laws should be robust to vastly improved technology, and that colonization of other solar systems matters more than asteroid mining, despite being further away in time).

After reading this I thought that a natural next step for the self-interested rational actor that wants to short nuclear war would be to invest in efforts to reduce its likelihood, no? Then one might simply look at the yearly donation numbers of a pool of such efforts.
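(A toy expected-value sketch of that logic, with all numbers invented: an actor "short" nuclear war holds a position that pays out only if no war occurs, so spending that lowers the war probability can raise the position's expected value.)

```python
# Invented numbers throughout; illustrative only.
payout_if_no_war = 1_000_000  # USD: hypothetical position that pays if no war
p_war = 0.05                  # baseline probability of war (invented)
donation = 10_000             # spending on risk-reduction efforts (invented)
p_war_after = 0.03            # assumed probability after the donation (invented)

ev_without = (1 - p_war) * payout_if_no_war
ev_with = (1 - p_war_after) * payout_if_no_war - donation
print(ev_without, ev_with)  # 950000.0 vs 960000.0: the donation pays for itself
```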

One of the most crucial considerations in cause prioritisation is figuring out how much moral weight we should place on the lives and preferences of non-human animals. Jason Schukraft has written about this recently here and here.

I have been wondering about this problem from an evolutionary perspective, which leads to my question: What was the first being on Earth to experience suffering?

I feel very uncertain whether this was a simple organism living in the sea millions of years ago, the first mammal, the first hominid, the first Homo sapiens, or anywhere in between!

The answer, of course, will... (Read more)

I haven't investigated this question in any detail, but a natural thought is that the emergence of sentience coincided with the Cambrian Explosion, ~540 million years ago (either as a byproduct of it or a causal factor in it). The capacity for valenced experience probably arose either simultaneously with the capacity for general awareness or shortly thereafter. With the capacity for valenced experience comes the capacity for negative hedonic states, which under many circumstances would constitute suffering, in my view. Depending on how robustly you're defining 'de

... (read more)