
I took a deep breath seeing the dreaded combination of SBF and EA on the front page of the BBC and...

I breathed a sigh of relief on seeing that the article is actually pretty fair, leaning perhaps slightly negative but with many positive things to say as well. There's also little, if any, poorly informed negativity. Yes, the common narrative comes through of EA originally being about helping people now (framed positively), with a later shift towards longtermism (framed negatively), but that's probably to be expected and is understandable.

A huge thanks to Brian Berkey for providing much of the content for the article. He framed things so well that he was heavily quoted. It's great to see openly EA people high up in academia who can be called on when needed for articles like this!


Some Positive Comments

"Effective altruism is a philosophy that aims to do as much good as possible," explains Brian Berkey, associate professor of legal studies and business ethics at The Wharton School of the University of Pennsylvania, US. "It's how to help ensure people's time and resources are spent well in making the world a better place. Through empirical evidence, individuals can make more informed decisions over which charitable causes to support."

"An early focus of EA, says Berkey, was the movement's collaboration with the Anti Malaria Foundation to donate money towards mosquito bed nets: a cheap solution to one of Sub-Saharan Africa's biggest killers. The program generated maximum gains for minimal costs. "Many resources put to charitable use are often done so inefficiently," he says. "Directing funds towards an unheralded charity that does 10,000-times as much good as a popular organisation that receives millions of dollars every year means achieving massive differences through the same resources."

"Despite criticism, effective altruism has had real results in some cases. By March 2022, Giving What We Can had raised more than $2.5bn in pledges, with $8.6m donated to the UK-based Against Malaria Foundation – enough to save approximately 2,000 lives, most of which are children under the age of five. Funds amounting to $3.7m have gone to Schistosomiasis Control Initiative and Deworm the World, enough to remove parasitic worms from 3.7m children."

"When thinking of how to make the world a better place, many people may choose to work for a charity or in political activism," says Joshua Hobbs, lecturer and consultant in applied ethics at the University of Leeds, UK. "However, many effective altruists believe that rather than slog away in a soup kitchen, you can create a greater impact by working in say, investment banking, earn higher wages and donate greater sums to charity." - Although I would disagree that slogging away in a soup kitchen and working in investment banking are mutually exclusive ;)


It's interesting that all the aforementioned examples of the concrete good EA does pertain to global health and development, while EA is becoming heavily skewed towards AI risk and longtermist causes, where it will be much harder to justify the good that can potentially be done. Advocating for EA will be much more difficult in the coming years, sadly.

I agree it might be more difficult, but I think there are steps that could make the advocacy easier. Obviously there are always tradeoffs here.

  1. Having a more compassionate and caring tone when talking about X-risk causes. I think EA has a bit of a tone problem in its outward-facing materials. For example, the 80,000 Hours page is friendly and very well communicated, but there are few (if any) warm and compassionate vibes. The idea that EAs work on X-risk mitigation because they really care about people and the future of humanity in general could be more front and center.

    The climate change movement, for example, talks about things like "creating a positive future for our grandchildren"; maybe we could take a leaf out of that book.
     
  2. Acknowledging and leaning into the good vibes that global health and development work gives out, by putting it a bit more front and center, even if it means sacrificing pure epistemic integrity at times.

Nick - yes, absolutely. The main PR problem with longtermism and X-risk is that we haven't quite found the most effective ways to express kindness and benevolence towards future people, including our own kids, grandkids, and descendants. I agree that 'creating a positive future for our grandchildren' is a good start.

As a rabid pronatalist, I've noticed that EAs often seem quite reluctant to advocate for a 'selfish' emphasis on kids, families, and lineages... as if that's an unseemly shrinking of the 'moral circle'. But most adults are parents, and most parents care deeply about the world their kids will inhabit. I think we have to be willing to reframe X-risk minimization as concrete parental protectiveness, rather than some abstract concern for generic 'future people'.

I agree with you, Nick, that we should present AI risk in a much more human way; I just don't think that's the path taken by the loudest voices on AI risk right now, and that's a shame. And I see no incompatibility between good epistemics and wanting to make the field of AI safety more inclusive and kind, so that it includes everybody and not just software engineers who came into EA because there was money (see the post on the large amount of funding going to AI safety positions that pay three times what researchers working in hospitals make) and prestige (they've been into ML for so long and now is their chance to get opportunities and recognition). I want to dive deeper into how EA-oriented these new EAs are when it comes to the core values that created the EA movement.

On a constructive note, as a community builder I am building projects from the ground up that focus on the role of AI risk in soaring inequality, or in increasing the likelihood of AI being used by a tyrannical power: themes that clearly signal impact for everyone, rather than staying in the realm of singletons and other abstract figures because it's intellectually satisfying to think about these things.

Yeah, I love that. I agree that communicating well about the inequality, authoritarianism, and violence risks that AI could present is another potentially great angle, even if that doesn't describe the X-risk we are most worried about.

Classic X-risk concerns (the murder of all humans) seem pretty violent to me.

For sure; that's mainly my point: the communication line could be more about preventing "death and violence" rather than "mitigating X-risk".

And yeah, I was talking about a different context of AI-enabled violence than X-risk, but my point is about how we communicate, not the outcome.

Yeah, I think I was overly worried about this.
