News from the World of Effective Altruism
We hope you enjoyed the first Effective Altruism Newsletter. Below is another round of the latest posts, articles, news, announcements, and job postings from the world of effective altruism.
On the EA forum, the newsletter doubles as an Open Thread where you can discuss anything related or not related to the newsletter.
We’ve also added a new section, “Timeless Classics”, in which we share articles and other links that deserve to be seen again.
Thanks a lot to everyone who submitted feedback last time! Let us know how we’re doing and how we can improve this newsletter. Feel free to submit interesting links and articles through this form.
The Effective Altruism Newsletter Team
What is EA?
If you’re not familiar with effective altruism, EA – to use its common abbreviation – is a growing social movement founded on the desire to make the world as good a place as it can be, the use of evidence and reason to find out how to do so, and the audacity to actually try.
Articles and Community Posts
GiveWell has released an article on potential new high-impact charities they would like to see. Their research has shown that there are some priority programs that no one is carrying out yet. Are you the right person to set one up? GiveWell might be able to support you in doing so.
On the Giving What We Can blog, Scott Weathers writes about the Reach Act that could prove to be a high-impact reform for USAID.
Julia Wise published a short post about the importance of setting boundaries and taking care of yourself when engaging in altruistic work.
Swedish EA Stefan Schubert, who runs an organization to promote evidence-based policy, has released a “Fact-Checking 2.0” page together with clearerthinking.org. So far, they provide argument-checking and fact-checking for a large chunk of the current US presidential debates. They aim to improve argumentative standards, which will hopefully lead to more evidence-based politics.
Updates from EA Organizations and Projects
Animal Charity Evaluators
Following last year’s great success, ACE has announced the start of its fall matching campaign. All gifts will be matched up to $50,000.
Future of Humanity Institute
Toby Ord published a paper on “moral trade” in top journal Ethics (open access version here). Nick Bostrom appeared in a half-hour prime time interview on BBC’s HardTalk (only available in the UK) and spoke at the UN Headquarters (see below).
The Life You Can Save
Check out The Life You Can Save's updated Impact Calculator, which shows the interventions your donation can buy with each of their 16 recommended charities, providing calculations in the currency of your country.
The New EA Wiki Is Online
Eric Bruylant has shared some great news: a team of .impact volunteers has moved the EA Wiki, which was previously hosted on Wikia, to the EA Hub. This allows for far more flexibility with zero ads. There’s lots of great content to check out. Happy reading, editing and contributing!
Nick Bostrom Speaks at the UN Headquarters
Nick Bostrom, Director of the Future of Humanity Institute, has given a talk about threats from future technologies, including artificial intelligence, at the UN Headquarters during the UN General Assembly. Watch the briefing here (starts at 2:14:30).
Advice from Effective Altruism Action
EA Action offers free one-on-one Skype sessions to effective altruists – both for newcomers and veterans. They exist to help you decide what course of action makes sense for your unique circumstances and help you navigate the vast landscape of EA organizations, projects and causes.
The EA movement is growing quickly, and whenever EA organizations are looking for people, we'll post the jobs here. There's also a Facebook group with more job postings.
Future of Humanity Institute
FHI is looking to hire three research fellows for their new Strategic AI Research Centre. They are looking for:
- An emerging technology policy analyst
- A multidisciplinary scientist
- A computer scientist with a strong background in machine learning and the control problem.
The posts are not officially open yet, but they are encouraging expressions of interest. Full details are here.
In this section we’ll be sharing some great content from the past which deserves to be seen again.
Helen Toner’s post “Effective Altruism is a Question (not an ideology)” reminds us of some crucial characteristics that distinguish EA from many other movements: EA is a process rather than an answer, and we’re always on the lookout for information that will change our minds so we can have even more impact.
Effective Altruism Companies
The idea of 'EA companies' is a notion some find problematic. Here is a rough framework I've developed for determining what might count as an effective altruist company. By 'company' in this case, I mean a for-profit organization, though its structure isn't important (e.g., startup, small business, corporation, investment firm, etc.). Some corporations have their own private foundations, such as Google with its philanthropic arm, Google.org. I don't know yet whether to count these as EA companies or nonprofits, as they're in an ambiguous space.
Below is my current (and incomplete) list of EA companies. Please feel free to add any you believe meet the criteria listed. If you want to suggest a company which doesn't meet any of the criteria, rather than making a special exception for them, suggest a new criterion for me to add to the list which the company in question meets.
Health eFilings is type 1.
Right, I'm now aware 10MinutePQRS has become Health eFilings.
Avant is Type-3.
Also, I thought Lendlayer was sold to Affirm. http://fortune.com/2015/08/05/affirm-acquires-lendlayer-to-bolster-education-loans/
It appears LendLayer has been acquired by Affirm. I wasn't aware of this. I'd be interested to know what the founders of LendLayer, particularly those who are also 80,000 Hours members, are doing now. I'll try finding out.
Matt Gibb is working for Affirm now. I don't know what Ben Gilbert is up to. And I don't know who the other co-founders are.
Agora For Good is type 2. We're building a donation platform that:
a) makes it easy for donors (in particular, people outside the EA community who have attachments to causes other than Global Health, X-Risk or EA-Meta-Charities) to find the most impactful charities they can within their cause area.
b) makes it easy and cheap for donors to give online to any nonprofit they want, while tracking their giving over time. (And, through the same process, makes it easier for nonprofits to set up online donation collecting and tracking.)
c) develops, over time, a scalable process to sort charities roughly into the buckets of "possibly high impact", "probably high impact" and "definitely high impact", so that as effective giving becomes mainstream we're able to handle the increased flow of donations.
This is great. I didn't expect to see another type-2 company besides Wave, so I'm pleased with this development.
I think MaxMind is type 1.
I checked out MaxMind's website, and they might be the best example of an "EA company" I never knew about. They're excellent. Thanks!
Quixey is plausibly type-1, not type-3.
Both Liron Shapira as a private individual, and Quixey as a company, have donated $15,000 to MIRI. That definitely constitutes "plausibly type-1". For a medium-sized company, I'd want to know more about what they intend to do regarding donations. I'll look into it some more.
How's that? When I watched the livestream of EAG 2015, the speakers invited any attendees in the audience to come up and address everyone with any important points they had. Topher Hallquist, an employee of Quixey, hopped on stage and told anyone qualified to apply to Quixey because the company was actively hiring. I figure Mr. Hallquist wouldn't have done this if Quixey wasn't a type-3 company.
So, what's the mistake I'm making in reading the situation?
Has there been an evaluation of the impact of EA Global yet? Do we have any indication of the wins it yielded?
As far as I know, there hasn't been. Within the community, I think it likely bolstered people's connection to the movement. Many of my friends work at Charity Science. Attending EA Global yielded at least one new hire for Charity Science, and probably helped secure additional funding for expanded operations, given that they've added two other permanent staff since EA Global. I'm guessing this is the same for other organizations that attended EA Global.
Additionally, EA Global likely exposed some effective altruists to new causes or projects they hadn't considered before, changing their minds about cause selection or presenting them with new opportunities. This seemed to be the outcome of the 2014 EA Summit, which I attended. I think effective altruism conferences tend to improve the value of existing ties, but maybe in subtle ways whose first-order effects don't show for several months. So, it's difficult to quantify the impact of EA Global.
There was some negative coverage of EA Global, such as the Vox article that made A.I. risk seem weird. This doesn't seem to have left a lasting negative impression of A.I. risk reduction or its proponents; based on my observations, it isn't taken any less seriously in the news than it was before EA Global. I know there were some gaffes in planning EA Global that offended some animal advocates: the event was promised to be vegan up until the day before the conference, but some meat was offered when attendees arrived. Additionally, EA Global was scheduled on the same weekend as an annual animal rights conference in the U.S.; I'm not sure which date was set first. Anyway, I'm aware some animal advocates felt snubbed, or felt that effective altruism only pays lip service to the cause. So, a lot of animal advocates in effective altruism felt EA Global was problematic, and I've heard rumors that a few newer effective animal altruists left the community. I don't know how many people that amounts to.
I wrote about the above problems with EA Global in detail so you know what they're about, but its mistakes aren't necessarily more pronounced or bigger than its wins. I think it was net positive. However, since EA Global, I haven't perceived any changes in the community which have empowered it. Like last year, there was some hubbub, both positive and negative, and then the community regressed to business as usual, without any of the dramatic changes we might expect from an event as big as EA Global.
Overall, I think conferences like EA Global indicate a failure to capitalize on a lot of goodwill and enthusiasm in the community leading up to them. It's not so much about what was definitely done wrong as about what wasn't done right. The opportunity costs of subpar execution are worthy of criticism. These opinions of mine apply to the EA Global event held at the Google headquarters in Mountain View, California, not to the events in Oxford or Melbourne.
I'm thinking of publishing a post on the Forum critical of the planning and/or execution of EA Global 2015, acknowledging what the organizers did right, but also pointing out mistakes that would be inexcusable to repeat.
Small thing but maybe put the "What is EA" bit at the bottom from now on, since it won't be the most important thing for most readers.
Even better, make an auto-welcome email that fires when you first join the list, and drop this section.
I have some evidence that there are many software engineers who would gladly volunteer to code for EA causes (and some access to such engineers). What volunteering opportunities like that are available? EA organizations that need coders? Open source projects that can be classified as EA causes? Anything else?
I'm trying to build a complete and exhaustive directory of effective altruism organizations. By this, I mean organizations that even a substantial minority of effective altruists within a given cause area consider effective, even if the organization itself doesn't self-identify as 'effective altruist'. Some association with effective altruism is sufficient, even if it's only unilateral or peripheral.
I'm going to use this thread to list organizations under one of the three object-level causes effective altruism favors. Some organizations might focus on 'meta' projects, or pursue cross-cutting or less typical activity, but I'll try to save those for a later list. Duplicates will be edited out. Please add any organizations you think I missed.
Global Poverty Reduction and Global Public Health
Animal Advocacy (including animal welfare, rights, and liberation)
Global Catastrophic and Existential Risk Reduction
The problem with listing GCRI's directory as effective altruism organizations is that, of the 131 organizations GCRI lists, most aren't affiliated with effective altruism in any way I can tell. For GCR reduction and the other causes, organizations I listed as "effective altruist" are either recognized by some authority as effective, or cited by a substantial minority of a given cause area as aligned with effective altruism. This is not the case for most organizations in GCRI's directory. This also raises the issue of whether government organizations or publicly funded institutions count as effective altruism, since effective altruism has most typically meant private ways of doing good.
Copenhagen Consensus Center as poverty meta.
Copenhagen Consensus is meta everything. They're the Skoll Foundation of Europe. Thanks, though; I'll add it to the list.