80,000 Hours has outlined many career paths where it is possible to do an extraordinary amount of good. To maximize my impact, I should consider these careers. Many of these paths are very competitive and require enormous specialization. I will not be done with my studies for potentially many years to come. How will the landscape look then? Will there still be the same need for AI specialists, or will entirely new pressing issues have crept up on us, as operations management did so swiftly in recent years?
80,000 Hours is working hard at identifying key bottlenecks in the community. MIRI has long s...
I suggest that effective altruists should:
I'...
In the recent article “Some promising career ideas beyond 80,000 Hours' priority paths”, Arden Koehler (on behalf of the 80,000 Hours team) highlights the pathway “Become a historian focusing on large societal trends, inflection points, progress, or collapse”. I share the view that historical research is plausibly highly impactful, and I’d be excited to see more people explore that area.
I commented on that article to list some history topics I’d be excited to see people investigate, as well as to provide some general thoughts on the intersection of history resea...
This is just a thought I had today listening to the most recent episode with Ben Garfinkel. There are times, when listening to 80,000 Hours episodes, that I wonder what an expert on 'the other side of the argument' would say to a particular point made. Hosts like Rob Wiblin and Howie Lempel do a good job of challenging guests in this way, but it's not quite the same as having two experts on opposite sides of an argument respond to each other in real time with a moderator.
An example of such a debate was a recent episode on The Future of Life Institute podcast where Stuart Russell a...
I am worried.
The last month or so has been very emotional for a lot of people in the community, culminating in the Slate Star Codex controversy of the past two weeks. On one side, we've had multiple posts talking about the risks of an incipient new Cultural Revolution; on the other, we've had someone accuse a widely-admired writer associated with the movement of abetting some pretty abhorrent worldviews. At least one prominent member of an EA org I know, someone I deeply respect, deleted their Forum account this week. I expect there are more I don't know about.
Both groups feel like they and th...
[Content warning: discussion of violence and child abuse. No graphic images in this post, but some links may contain disturbing material.]
In July 2017, a Facebook user posts a video of an execution. He is a member of the Libyan National Army, and in the video, kneeling on the ground before his brigade, are twenty people dressed in prisoner orange and wearing bags over their heads. In the description, the uploader states that these people were members of the Islamic State. The brigade proceeds to execute the prisoners, one by one, by gunshot.
The video was uploaded along with other execu...
The article “Problem areas beyond 80,000 Hours' current priorities” mentions "Broadly promoting positive values".
I have some questions:
What are the values that are needed to further EA's interests?
Where (in which cultures or areas of culture at large) are they deficient, or where might they become deficient in the future?
Problem areas... mentions "altruism" and "concern for other sentient beings". Maybe those are the two that EA is most essentially concerned with. If so, what are the support values needed for maximizing those values?
There's now a medium-sized amount of discussion of longtermism on Twitter, and I've noticed a bunch of people newly using it (such as some of those listed by Stefan Schubert here).
Twitter seems like a potentially underrated platform for longtermists. Like the EA Forum, Twitter promotes "liked" content. It allows us to follow content of interest to us. But it also differs from the EA Forum in some ways:
Some of the larger outputs of Leverage Research include:
I feel that older EA Forum posts are not read nearly as much as they should be. Hence, I collected the ones that seemed most useful and still relevant today. I recommend going through this list the same way you would browse the frontpage of this forum: reading the titles and clicking on the ones that seem interesting and relevant to you. Note that you can hover over links to see more details about each post.
Also note that many of these posts have lower karma scores than most posts posted nowadays. This is in large part because until September 2018, all votes were...
July 9 update:
The Development Media International COVID-19 prevention campaign (28:52) costs, at the margin, about USD 0.017 per person informed. The cost per life saved is between $50 and $1,000 (31:55–32:20). In comparison, EA Cameroon's cost is USD 0.0283 per person. However, EACAM adds personal delivery of informational flyers to local community leaders, workshops on making one's own masks, and newspaper articles. Also, if only some of the activities to inform the Santa community are selected, the cost per person will decrease. Thus, donating to EA Cameroon for the COVID-19 prevention campaig...
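To make the comparison above concrete, here is a minimal back-of-the-envelope sketch in Python. Only the quoted figures ($0.017/person, $0.0283/person, $50–$1,000 per life saved) come from the post; the idea of scaling DMI's cost-per-life-saved range by the cost-per-person ratio is my own illustrative assumption, not a claim either organization makes.

```python
# Back-of-the-envelope comparison of marginal cost per person informed.
# Figures quoted from the post; the scaling step below is illustrative only.

dmi_cost_per_person = 0.017     # USD, Development Media International
eacam_cost_per_person = 0.0283  # USD, EA Cameroon (includes extra activities)

ratio = eacam_cost_per_person / dmi_cost_per_person
print(f"EACAM costs {ratio:.2f}x as much per person informed as DMI")

# DMI's estimated cost per life saved spans $50-$1,000. Naively scaling
# that range by the cost-per-person ratio (a strong, purely illustrative
# assumption) gives a very rough range for EACAM.
low, high = 50, 1000  # USD per life saved (DMI estimate)
print(f"Naively scaled cost per life saved: ${low * ratio:.0f}-${high * ratio:.0f}")
```

The ratio comes out around 1.66x, so even under this naive scaling the implied cost per life saved stays well within the range usually considered highly cost-effective.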
Konrad Seifert and I are writing “a field guide to place future generations at the core of policy-making”. To make it maximally relevant to the EA community, please ask us related questions, share criticism, and give feedback on the current version of the book proposal.
Let us know your thoughts, questions and feedback in the comments or via email email@example.com by 31 July 2020. Thank you in advance!
Read the full proposal here (~2700 words). Or get a quick overview below:
Longtermist scholarship still needs to translate its ideas into policy change to achieve large-scale impa...
I argue that space governance has been overlooked as a potentially promising cause area for longtermist effective altruists. While many uncertainties remain, there is a reasonably strong case that such work is important, time-sensitive, tractable and neglected, and should therefore be part of the longtermist EA portfolio.
I also suggest criteria for what good space governance should look like, and outline possible directions for further work on the topic.
It’s plausible that humans, or their successors, will eventually be able to colonise space. There are alrea...
One of the most crucial considerations in cause prioritisation is figuring out how much moral weight we should place on the lives and preferences of non-human animals. Jason Schukraft has written about this recently here and here.
I have been wondering about this problem from an evolutionary perspective, which leads to my question: What was the first being on Earth to experience suffering?
I feel very uncertain whether this was a simple organism living in the sea millions of years ago, the first mammal, the first hominid, the first Homo sapiens, or anywhere in between!
The answer, of course, will...