Since everyone is having fun with ChatGPT this week, I thought I'd play around and ask it to write a critical post on EA that also offers solutions. Note that the post below was written entirely by ChatGPT and isn't intended to reflect my own views.

"Write a critical post about Effective Altruism, and offer suggestions on how to improve the movement."

Effective Altruism (EA) is a movement that aims to use evidence and reason to determine the most effective ways to improve the world and then take action based on those findings.

This approach is based on the recognition that our resources, whether they be time, money, or expertise, are limited and that we have a moral obligation to use them in the most effective ways possible. By using data and analysis to identify the interventions that have the greatest impact, EA helps to ensure that our efforts to make the world a better place are not misguided or wasted. This rigorous and evidence-based approach is critical to the success of the movement and sets it apart from many other charitable efforts.

While the goals of EA are admirable and the movement has made significant contributions to global charitable efforts, there are some criticisms and areas for improvement.

One criticism of EA is that it can be overly focused on quantitative metrics, such as cost-per-life-saved, to the exclusion of other important factors. This can lead to a narrow and potentially misguided focus on certain interventions, such as distributing insecticide-treated bed nets to prevent malaria, while ignoring others that may be equally or even more effective but are harder to measure.

Another criticism is that EA can be overly individualistic and neglect the collective action and systemic change needed to address global problems. For example, EA often emphasizes the importance of individual charitable giving and volunteering, but downplays the role of collective action, such as advocacy and policy change, in addressing systemic issues like poverty and inequality.

To improve the movement, EA could benefit from incorporating a more diverse range of perspectives and expertise. This could include engaging more with communities and groups that are directly affected by global problems, as well as incorporating insights from disciplines such as sociology, political science, and ethics.

Additionally, EA could prioritize greater transparency and accountability in its decision-making and operations. This could include being more explicit about the assumptions and evidence behind its recommendations and being more responsive to feedback and criticism.

Overall, while Effective Altruism has made important contributions to global charitable efforts, there are opportunities for the movement to improve and address some of the criticisms that have been levied against it. By incorporating a more diverse range of perspectives and prioritizing transparency and accountability, EA could become an even more effective force for positive change in the world.

That's it!

Curious to know what everyone thinks. Personally, it reads a bit generic and doesn't (to me) offer anything new that I haven't already read or pondered in one form or another. I have been playing around with some nuanced follow-up questions that produced much more interesting responses, but to keep this post brief I've excluded them for now.

Here's a list of follow-up questions I asked that you could replicate:

  • What other important factors are you referring to?
  • Which examples are harder to measure, and how can we improve those measurements?
  • What systemic change are you referring to?
  • What types of policy changes would benefit EA's specific AI Safety goals?
  • What communities and groups would be most open to joining EA?

Personally, I do slightly enjoy the thought of using AI to bolster our movement (and, perhaps ironically, AI Safety itself), but whether that's a good idea, or whether we're even there yet, is a question I'm not really qualified to answer.


Comments:

I have to agree with what you've said about it being generic:

X should be more diverse and accountable and transparent

This is about as generic as you can get in terms of criticism.

On the plus side, we may get better criticism as people strive to avoid sounding like a language model.

There's nothing wrong with giving the generic criticism when it's true and is being actively ignored.

Have you read "Criticism of Criticism of Criticism"? In my view, it argues persuasively that generic criticism is far less useful than your comment suggests. Here's my favourite part of the piece, where it touches on the difference between generic criticism (e.g. racism in psychiatry) and specific criticism (e.g. the esketamine critique):

All of these are the opposite of the racism critique: they’re minor, finicky points entirely within the current paradigm that don’t challenge any foundational assumptions at all.

But you can actually start fights if you bring them up, instead of getting people to nod along and smile vacuously.

The racism critique doesn’t imply any specific person is doing any specific thing wrong. Certainly not you! It doesn’t demand any specific response except maybe more awareness, saying the right slogans, and maybe having some other person form a committee to make meaningless changes to some set of bylaws. But the esketamine critique actually demands that you in particular go out and learn about a different medication which is kind of scary and could get you in trouble if you use it wrong. It implies that you personally are failing patients, in a way that some other doctors aren’t failing patients. Maybe it means those other doctors are better than you! And so the knives come out.

It's possible to make the same criticism in a less generic way.

Generic or not, I think the more important point is that criticism with generic suggestions tends to be less helpful, even if it sounds good. For example, saying that EA could be more diverse is easy; offering actionable solutions for how to make it more diverse is more useful (especially when the solutions are designed to fit EA's specific culture and challenges), but also more nuanced and harder to do.

I do think ChatGPT has the capability to tackle such questions, but as of now it requires a lot of follow-up questions worded in a specific way to get output that isn't generic.

I tried something similar and got the same criticisms of focus on quantitative metrics, lack of diversity, transparency, and accountability. Very similar style, structure, word choice, etc.