
Isn’t it better to spend money and time on changing the capitalist system rather than donating it to charities?


4 Answers

When I used to give introductory EA talks I got this question now and then, from people who mean well and ask because it's something they are honestly wondering about. I can imagine people downvote because the question gives the impression that you didn't spend much time thinking before asking (e.g. what's meant by "capitalist system", what changes seem promising to you). Have you searched for "capitalism" or "systemic change" on the forum and not found any useful discussion that helps clarify your question? If so, you could mention that, so people know you have already invested some of your time and are really interested in improving your understanding. E.g. this article seems related: https://forum.effectivealtruism.org/posts/ktEfsoGfBFGsaiY46/overview-of-capitalism-and-socialism-for-effective-altruism

Thanks for your response. Yes, I did look at the systemic change area, albeit quite quickly. I also read Singer's response in The Life You Can Save, and I didn't find it convincing. So I wondered what people in this forum think. Thank you for the capitalism article, I'll have a read.

MaxRa
I think a paragraph on why you didn't find Singer's response convincing would've improved this question considerably, as it would've allowed people to see where you are coming from and what kind of concrete answer would be useful for you.

Also, this downvoting made me question whether I understand the purpose of the forum. Isn't it for people to discuss issues related to EA? Or is it only to agree with what EAs say? I don't see how my question gave the impression that I haven't thought about it. Is the topic of capitalism a bit sensitive here?

MaxRa
There's a popular guide on asking good questions to a technical community, which I found useful (though it's pretty biting): http://www.catb.org/~esr/faqs/smart-questions.html  I think asking "Is capitalism the root of all evil?" without much context gives the strong impression of a less useful question, as it's very unspecific and in my experience value-laden and requires a tonne of unpacking (the article I linked discusses this a bit).
Ayman
Thanks again for your response. I didn't know that only volunteers in EA respond to these questions. I didn't ask the question to get a clear answer; I asked it to see what people think (as a form of discussion). I thought anyone could see these questions, not only volunteers. So I apologise for wasting anyone's time.
Will Bradshaw
To clarify, when meerpirat says that people on the EA Forum are volunteers, they don't (I assume) mean that there is some dedicated team of volunteers whose job it is to answer Forum questions. Rather, they simply mean that most users of the Forum are not paid to use it. (I'm not sure if you were in fact confused about this, but I thought your comment above potentially implied that you were, so I wanted to make this clear just in case.)
Ayman
Thank you for this clarification. I certainly thought that there were dedicated volunteers.

Please let me know how to delete my question so other volunteers don't have to respond. And just to clarify, my question wasn't whether capitalism is the root of all evil; that was just the title. But anyway, thank you for your time and directions. I will read around the EA Forum about this topic and make up my own mind. Thanks again.

Many EAs, me included, are pretty sympathetic to capitalism as an economic system – certainly much more so than many other communities that place a strong emphasis on helping others.

This certainly isn't universal within EA, but it is common. Personally, I think this has become a bit of a tribal signal within EA, such that people are a bit too ready to downvote anti-capitalist content. That said, given this context, it's probably a good idea to ask questions like this in a somewhat more measured style, and provide some concrete arguments that people can engage with.

(One thing that would significantly update me in an anti-capitalist direction, for example, would be to provide evidence that capitalism leads to significantly more factory farming than other economic systems, even accounting for differences in wealth.)

Thank you very much for this! I certainly got the impression that this topic is a bit sensitive here.

You are right, I could have given an argument or asked the question in a different way, although I did write the question that I really wanted to ask under the title; I don't know if that showed up for people. If it did, I am surprised no one focused on that rather than the title. For me, capitalism is the cause of, one of the biggest contributors to, or the maintainer of many of the issues we are facing, global poverty and climate change to name a few. To not ... (read more)

Will Bradshaw
I might give you climate change – though I would note that e.g. communist states also have very bad environmental records (see e.g. the Aral Sea), so there is still some work to be done to strengthen that case. I don't agree with global poverty – I currently think capitalism has historically been, and will continue to be, one of the most important forces bringing people out of poverty. People say a lot of silly things regarding the connection between capitalism and poverty. There might be a more sensible case for a link between the two, but I haven't heard it yet.
Will Bradshaw
I can certainly see a story where capitalism was the genesis and remains the driver of factory farming. But the more fundamental problem is that most people don't see animals as morally important, and that applies across nearly all economic systems. There's a complicating factor here, which is that capitalism makes countries rich, and rich people want to eat more meat, so it's possible that that is most of the driver here. One could respond that, in that case, making countries rich is bad on net, but I think any path to a good world is going to involve making everyone in the world a lot richer, so if factory farming is near-inevitable in rich countries (in the absence of good technological alternatives) then I'm reluctant to blame capitalism, as opposed to humans in general. I would be interested in seeing data on factory farming in less capitalist countries (e.g. the Soviet Union), compared to more free-market countries of similar wealth (if one can find any).

I think we should be open to the idea that our current societal structures are far from perfect. It's a good habit to question the premises of our society that are typically taken for granted.

I think you can make a case for it if you truly think capitalism is the root of all evil. I stubbed my toe earlier today and I must admit I'm not sure how to pin that on capitalism. Capitalism certainly is the root of some evil, but I think it's a stretch to say it's the root of all evil.

Thanks for your response. Of course capitalism is not the root of all evil! I was just being dramatic in my title; I didn't think people would take it literally. It seems like this topic is a bit sensitive.

This is a fair question to ask, and an important conversation to have – both the title and the question under it. It is rather obvious that the "all" in the title question shouldn't be taken literally, just as you wouldn't take the "everyone" in "everyone knows that Earth revolves around the Sun" in its absolutely literal sense.
