
I took a deep breath seeing the dreaded combination of SBF and EA on the front page of the BBC and...

I breathed a sigh of relief seeing that the article is pretty fair, leaning perhaps slightly towards the negative but with many positive things to say as well. There's also little if any poorly informed negativity. Yes, the common narrative comes through of EA originally being about helping people now (framed positively), with a later shift towards longtermism (framed negatively), but that's probably to be expected and understandable.

A huge thanks to Brian Berkey for providing much of the content for the article. He framed things so well that he was heavily quoted. It's great to see some openly EA people high up in academia who can be called on when needed for articles like this!

Some Positive Comments

"Effective altruism is a philosophy that aims to do as much good as possible," explains Brian Berkey, associate professor of legal studies and business ethics at The Wharton School of the University of Pennsylvania, US. "It's how to help ensure people's time and resources are spent well in making the world a better place. Through empirical evidence, individuals can make more informed decisions over which charitable causes to support."

"An early focus of EA, says Berkey, was the movement's collaboration with the Against Malaria Foundation to donate money towards mosquito bed nets: a cheap solution to one of Sub-Saharan Africa's biggest killers. The program generated maximum gains for minimal costs. "Many resources put to charitable use are often done so inefficiently," he says. "Directing funds towards an unheralded charity that does 10,000 times as much good as a popular organisation that receives millions of dollars every year means achieving massive differences through the same resources."

"Despite criticism, effective altruism has had real results in some cases. By March 2022, Giving What We Can had raised more than $2.5bn in pledges, with $8.6m donated to the UK-based Against Malaria Foundation – enough to save approximately 2,000 lives, most of which are children under the age of five. Funds amounting to $3.7m have gone to Schistosomiasis Control Initiative and Deworm the World, enough to remove parasitic worms from 3.7m children."

"When thinking of how to make the world a better place, many people may choose to work for a charity or in political activism," says Joshua Hobbs, lecturer and consultant in applied ethics at the University of Leeds, UK. "However, many effective altruists believe that rather than slog away in a soup kitchen, you can create a greater impact by working in say, investment banking, earn higher wages and donate greater sums to charity." - Although I would disagree that slogging away in a soup kitchen and working in investment banking are mutually exclusive ;)





It's interesting that all the aforementioned examples of EA doing concrete good pertain to global health and development, while EA is becoming highly skewed towards AI risk and longtermist causes, where it will be much more difficult to justify the good that can potentially be done. Advocating for EA will be much harder in the coming years, sadly.

I agree it might be more difficult, but there are steps that I think could make the advocacy easier. Obviously there are always tradeoffs here.

  1. Having a more compassionate and caring tone when talking about X-risk causes. I think EA has a bit of a tone problem when it comes to outward-facing materials. For example, the 80,000 Hours page is friendly and very well communicated, but there are few (if any) warm and compassionate vibes. The idea that EAs are into X-risk mitigation because they really care about people and the future of humanity in general could be more front and center.

    The climate change movement for example talks about things like "creating a positive future for our grandchildren", maybe we could take a leaf out of that book.
  2. Acknowledge and lean into the good vibes that global health and development work gives off by putting it a bit more front and center, even if it means sacrificing pure epistemic integrity at times.

Nick - yes, absolutely. The main PR problem with longtermism and X-risk is that we haven't quite found the most effective ways to express kindness and benevolence towards future people, including our own kids, grandkids, and descendants. I agree that 'creating a positive future for our grandchildren' is a good start.

As a rabid pronatalist, I've noticed that EAs often seem quite reluctant to advocate for 'selfish' emphasis on kids, families, and lineages... as if that's an unseemly shrinking of the 'moral circle'. But most adults are parents, and most parents care deeply about the world that their kids will inhabit. I think we have to be willing to reframe X risk minimization as concrete parental protectiveness, rather than some abstract concern for generic 'future people'.

I agree with you, Nick, that we should present AI risks in a much more human way; I just don't think that's the path taken by the loudest voices on AI risk right now, and that's a shame. And I see no incompatibility between good epistemics and wanting to make the field of AI safety more inclusive and kind, so that it includes everybody and not just software engineers who came into EA because there was money (see the post on the large amount of funding going to AI safety positions that pay three times what researchers working in hospitals earn, etc.) and prestige (they've been into ML for so long, and now is their chance to get opportunities and recognition). I want to dive deeper into how EA-oriented these new EAs really are, measured against the core values that created the EA movement.

On a constructive note, as a community builder, I am building projects from the ground up that focus on the role of AI risks in soaring inequalities, or the possibility of AI being used by a tyrannical power: themes with clear signalling of impact for everyone, rather than staying in the realm of singletons and other abstract figures just because it's intellectually satisfying to think about them.

Yeah, I love that. I agree that communicating well about the inequality, authoritarianism, and violence risks that AI could present is another potentially great angle, even if that doesn't describe the X-risks we are most worried about.

Classic x-risk concerns (the murder of all humans) seem pretty violent to me.

For sure, that's mainly my point: the communication line could be more about preventing "death and violence" rather than "mitigating X-risk".

And yeah, I was talking about a different context of AI-enabled violence than X-risk, but my point is about how we communicate, not the outcome.

Yeah, I think I was overworried about this.