The Monthly EA Newsletter – February 2016 Edition

Hey all,

We hope 2016 has been treating you well so far.

Last week’s announcement that the UK Government and the Bill & Melinda Gates Foundation pledged £3 billion to help end malaria deaths will hopefully be a sign of many more altruistic endeavours to come!

Stay hungry. Stay foolish. 

The Team
 
Articles and Community Posts
 
What did EAs change their mind about in 2015? Find out in this Facebook discussion.

If you want to have some fun with stretching and testing your intuitions, you might enjoy this comic on the tricky moral dilemmas surrounding self-driving cars.

Kahneman, Tversky and the World Bank: Sebastian Roing writes about the promise of low-cost interventions at the intersection of behavioral economics and international development.

See how malaria eradication can induce economic growth, as James Snowden explains how the benefit of fighting this disease extends beyond the number of deaths averted.

Holden Karnofsky compares the value of a smaller number of rigorous studies to a larger number of flawed studies.

Kieran Grieg provides a run-down on how to conduct effective studies relating to animal advocacy.

What are the limits to ethical offsetting and earning to give? Rob Wiblin looks at just how bad it is to be a CEO in the tobacco industry. Here is a response to him on the EA Forum.
 
Updates from EA Organizations
 
80,000 Hours

80,000 Hours’ impact-adjusted significant plan changes grew 50% over the last month. The team also developed a new afternoon-long career workshop, which they delivered to 130 people in Cambridge. They received great feedback, including: “this is the best workshop I’ve been to so far” and “this was the first career event I’ve been to I haven’t hated”.

Animal Charity Evaluators

ACE released their annual Year in Review. Of note, money moved increased from at least $141,000 in 2014 to at least $828,000 in 2015. ACE also plans to host an academic conference at Princeton in late 2016.

Charity Entrepreneurship

The Charity Entrepreneurship team is now in India: read their reflections on their first slum visit and the broad phases of their project, and give feedback on their research process.

Effective Altruism Foundation

Adriano Mannino, President of EAF, recently gave a comprehensive German-language introductory talk on EA.

GiveWell

GiveWell published an update on its web traffic and money moved. As of late January 2016, GiveWell has now tracked more than $100 million in money moved to its recommended charities.

Giving What We Can

This New Year, a group of people from around the world coordinated to take the pledge or sign up to Try Giving as part of the Giving What We Can Pledge Event. It was a great success: around 80 people joined and together they pledged more than $17 million.

Local Effective Altruism Network

LEAN’s newest employee, Georgie Mallett, recently joined the team in Vancouver. LEAN has also seeded 34 new EA groups, from Capilano University to South Korea – see if there’s a new group near you.

Other Announcements

Apply by Feb. 14 to the Pareto Fellowship, a new summer program from the Centre for Effective Altruism that includes three months of training and project work in the San Francisco Bay Area.

If you are interested in hosting an EA Global X conference this year but missed the deadline or don’t feel ready to apply yet, send a quick email to roxanne@eaglobal.org.

The Good Technology Project was recently announced. Their goal is to influence talented people within technology to work on higher-impact projects. They are looking for feedback as well as people who might want to get involved.
 
Job Postings

The Schistosomiasis Control Initiative (SCI), a charity recommended by both GiveWell and Giving What We Can, is hiring a full-time Communications & Development Manager. The application deadline is Feb. 16.
 
Timeless Classics

“Imagine you are setting out on a dangerous expedition through the Arctic on a limited budget.” What essentials do you take? Discover what that has to do with effective altruism in the classic “Efficient Charity: Do Unto Others…”.
 
Go forth and do the most good!

Let us know how you liked this edition and how we can improve further.

See you again on March 3!

Georgie, Michał, Pascal and Sören
– The Effective Altruism Newsletter Team

The Effective Altruism Newsletter is a joint project between the Centre for Effective Altruism, the Effective Altruism Hub and .impact.

Comments (9)



I'm new to the EA Forum. It was suggested to me that I crosspost this LessWrong post, which criticizes Jeff Kaufman's EA Global 2015 talk 'Why Global Poverty?', to the EA Forum, but I need 5 karma to make my first post.

EDIT: Here it is.

"And I would argue that any altruist is doing the same thing when they have to choose between causes before they can make observations. There are a million other things that the founders of the Against Malaria Foundation could have done, but they took the risk of riding on distributing bed nets, even though they had yet to see it actually work."

This point should be rewritten, I think. I'm not sure what the "it" you're talking about here actually is.

Sorry about the confusion. I meant to say that even though the Against Malaria Foundation observes evidence of the effectiveness of its interventions all the time, and this is good, its founders had to choose an initial action before they had made any observations about the effectiveness of their interventions. Presumably there was some first village or region of trial subjects that first empirically demonstrated the effectiveness of durable insecticidal bednets. But before that first experiment, AMF presumably had to rely on correct reasoning alone, without corroborating observations to support their arguments. Nonetheless, their reasoning was correct. Experiment is a way to increase our confidence in our reasoning, and it is good to use it when it's available, but we can sometimes have justified confidence without it. I use these points to argue that people successfully reason without being able to test the effectiveness of their actions all the time, and that they often have to.

The more general point is that people often use a very simple heuristic to decide whether or not something academic is worthy of interest: Is it based on evidence and empirical testing? 'Evidence-based medicine' is synonymous with 'safe, useful medicine,' depending on who you ask. Things are bad if they are not based on evidence. But in the case of existential risk interventions, it is a property of the situation that we cannot empirically test the effectiveness of our interventions. It is thus necessary to reason without conducting empirical tests. This difficulty is a reason to take the problem more seriously, as opposed to the reaction of some others, who treat the 'lack of evidence-based methods' as a point against trying to solve the problem at all.

And in the case of some risks, like AI, it is actually dangerous to conduct empirical testing. It's plausible that sufficiently intelligent unsafe AIs would mimic safe AIs until they gain a decisive strategic advantage. See Bostrom's 'treacherous turn' for more on this.

This is an interesting discussion in which people list high-earning careers that are comparatively easy to get: https://www.facebook.com/groups/effective.altruists/permalink/1002743319782025/

Or rather: people failing to list high-earning careers that are comparatively easy to get.

I think popularizing earning to give among people who are already in high-income professions or on high-income career trajectories is a very good strategy. But as career advice for young people interested in EA, it seems to be of rather limited utility.

What luck have the big EA charities (GiveWell and CEA come to mind as the obvious candidates) had with building up a non-EA donor base? (By which I mean one which wouldn't otherwise donate to what'd generally be considered EA picks, like GiveWell recommendations, meta charities, etc.)


Is there an old Facebook or Forum thread where people describe how many people they've 'recruited' to EA (to some extent, and in some shape or form)?
