
Brad West

Founder & CEO @ Profit for Good Initiative
1779 karma · Roselle, IL, USA · Profit4good.org/

Bio


Looking to advance businesses in which charities hold the vast majority shareholder position. Check out my TEDx talk for why I believe Profit for Good businesses could be a profound force for good in the world.


Comments (280)

One question I often grapple with is the true benefit of having EAs fill certain roles, particularly compared to non-EAs. It would be valuable to see an analysis—perhaps there’s something like this on 80,000 Hours—of the types of roles where having an EA as opposed to a non-EA would significantly increase counterfactual impact. If an EA doesn’t outperform the counterfactual non-EA hire, their impact is neutralized. This is why I believe that earning to give should be a strong default for many EAs. If they choose a different path, they should consider whether:

  1. They are providing specialized and scarce labor in a high-impact area where their contribution is genuinely advancing the field. This seems more applicable in specialized research than in general management or operations.
  2. They are exceptionally competent, yet the market might not compensate them adequately, thus allowing highly effective organizations to benefit from their undercompensated talent.
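To make the replaceability point above explicit, here is a rough sketch (with $V(\cdot)$ as shorthand, introduced here, for the value an organization derives from a hire):

$$\text{counterfactual impact} \approx V(\text{EA hire}) - V(\text{best non-EA hire})$$

When that difference is near zero, the direct impact of taking the role is neutralized, and the comparison with earning to give reduces to the donations forgone.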

I tend to agree more with you on the "doer" aspect—EAs who independently seek out opportunities to improve the world and act on these insights often have a significant impact.

I appreciate the depth and seriousness with which suffering-focused ethics addresses the profound impact of extreme negative experiences. I'm sympathetic to the idea that such suffering often carries more moral weight than extreme positive experiences. For example, being tortured is not merely "worse" than having a pleasurable experience; it is disproportionately more severe. The extreme nature of certain sufferings makes it challenging, if not impossible, to identify positive experiences that one would reasonably accept in exchange for enduring them.

However, I maintain a classical utilitarian framework, which, while recognizing the disproportionate severity of certain forms of suffering, also acknowledges the significant value of positive experiences. The example involving a toothache and heaven illustrates why positive experiences cannot be dismissed: ending a state of eternal bliss (or preventing it from ever occurring) simply to avoid a trivial negative experience like a toothache is both absurd and morally troubling. It suggests a kind of ethical myopia that undervalues the richness and depth of joy, love, and fulfillment that life can offer.

Imagine individuals behind a veil of ignorance, choosing between two potential lives: one filled with immense joy but punctuated by occasional bad days, versus a life that is consistently mediocre, without significant pain but also devoid of substantial positive experiences. It seems intuitive that most would choose the former. The prospect of immense joy outweighs the temporary pain that accompanies it, suggesting that the value of positive experiences should not be discounted but rather carefully weighed alongside the potential for suffering.

The sensible approach, in my view, is not to eliminate or devalue the significance of joy and positive experiences, but to acknowledge the depth and intensity of potential suffering alongside them. By doing so, we can ensure that our ethical frameworks remain balanced, appropriately weighting the full spectrum of the experiences of conscious beings without overcorrecting in a way that leads to counterintuitive and undesirable outcomes.

In summary, while suffering-focused ethics rightly highlights the importance of alleviating extreme suffering, we must also recognize and value the profound positive experiences that give life its richness and meaning. Both extremes of the human condition (and those of other conscious beings)—intense suffering and intense joy—deserve our moral attention and appropriate weighting in our ethical considerations.

I think Peter Singer's book, The Life You Can Save, addresses this question more fully. But I would say that the obligation of people in wealthy countries is to make life choices, including sharing their own wealth, in a way that shows some degree of consideration for their ability to help others so efficiently.

Failing to make some significant effort to help, perhaps to the degree of the 10% pledge, would fall short of that obligation (though I think that in many situations even more than that would be morally required). I do not know exactly where I would draw the line, but some degree of commitment similar to the 10% pledge would be a minimum.

I definitely think that the very demanding requirement you stated above would make more sense than no requirement whatsoever, under which one may implicitly value others at less than a thousandth of how one values oneself.

My intuition doesn't change significantly if you convert the obligation from a financial one to the corresponding amount of labor.

If I recall correctly, the value of a statistical life used by government agencies is about $10 million per life. It is calculated from how much people implicitly value their own lives through their choices: incurring costs to avoid risk, and accepting risk to themselves in exchange for benefits.

If we round the cost to save a life in the developing world up to $10k, people in the developed world could save 1,000 lives for the amount at which they value their own lives.
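To make the arithmetic explicit (a back-of-the-envelope calculation using the rounded figures above):

$$\frac{\$10{,}000{,}000\ \text{(value of a statistical life)}}{\$10{,}000\ \text{(cost to save a life)}} = 1{,}000\ \text{lives}$$

In other words, the implied exchange rate is one's own life against a thousand others'.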

I simply think that acting as though you value another person 1,000 times less than you value yourself is immoral. This is why I do think that giving some degree of weight to the value of other conscious beings is morally required.

Yeah, I think that in the case of both choosing not to act to save the kid and acting to kill the kid (in this narrow hypothetical), you're violating the kid's rights just the same (privileging your financial interests over his life).

And regarding your point about conscience: you're appealing to moral intuitions whose validity we can question, particularly with thought experiments such as these.

I suppose I would agree that acting as a moral person requires significant consideration of other conscious beings in our choices, and I think the vast majority of people fail to give them adequate consideration. I suppose that's how I think of my own "conscience": am I making choices with sufficient regard for the interests of other beings across space and time? I think attempting to act accordingly is part of my "inner goodness".

I'm not saying you're not legally entitled to the money.

I'm saying that, in an ultimate sense, the kid is more morally entitled not to die from malaria than you are to retain your $6k.

And there are no norms that would develop in the thought experiment; your activity would be totally secret. Broader policy considerations might indicate that people ought to have a right to their money, but that does not bear on whether they are morally obligated to exercise that right in certain ways.

I don't think that EA should be graduated from. I think that it's a matter of continuing to develop in both the "effective" and "altruistic" components.

With "Effective", I'd say we're talking about an epistemic process: learning the relevant things about the world and yourself so that the resources within your control that you decide to deploy for altruistic purposes do the most good.

With "Altruism", that would be digging deep within yourself so that you can deploy more of those resources. The ideal, in my mind, would be having no more partiality to your own interests than to those of other conscious beings across space, species, and time.

So, I don't see an endpoint, but rather a constant striving for knowledge, wisdom, and will.

I would have a lot less concern about more central control of funding within EA if there were more genuine interest within those funding circles in broad exploration and in developing evidence from new ideas within the community. Currently, I think a handful of (very good) notions about the most promising areas (anthropogenic near-term existential or major risks such as AI, nuclear weapons, and pandemics/bioweapons, along with animal welfare and global health and development) guide the 'spotlight' under which major funders are looking. This spotlight is not just about these important areas; it is also shaped by strong intuitions and priors about the value of prestige and the manner in which ideas are presented. While these methodologies have merit, they can create an environment where the kinds of thinking and approaches that align with these expectations are more likely to receive funding. This incentivizes pattern-matching to established norms rather than encouraging genuinely new ideas.

The idea of experimenting with a more democratic distribution of funding, as you suggest, raises an interesting question: would this approach help incentivize and enable more exploration within EA? On one hand, by decentralizing decision-making and involving the broader community in cause area selection, such a model could potentially diversify the types of projects that receive funding. This could help break the current pattern-matching incentives, allowing for a wider array of ideas to be explored and tested, particularly those that might not align with the established priorities of major funders.

However, there are significant challenges to consider. New and unconventional ideas often require deeper analysis and nuanced understanding, which may not be easily accessible to participants in a direct democratic process. The reality is that many people, even within the EA community, might not have the time or expertise to thoroughly evaluate novel ideas. As a result, they may default to allocating funds toward causes and approaches they are already familiar with, rather than taking the risk on something unproven or less understood.

In light of this, a more 'republican' system, where the community plays a role in selecting qualified assessors who are tasked with evaluating new ideas and allocating funds, might offer a better balance. Such a system would allow for informed decision-making while still reflecting the community’s values and priorities. These assessors could be chosen based on their expertise and commitment to exploring a wide range of ideas, thereby ensuring that unconventional or nascent ideas receive the consideration they deserve. This approach could combine the benefits of broad community input with the depth of analysis needed to make wise funding decisions, potentially leading to a richer diversity of projects being supported and a more dynamic, exploratory EA ecosystem.

Ultimately, while direct democratic funding models have the potential to diversify funding, they also risk reinforcing existing biases towards familiar ideas. A more structured approach, where the community helps select knowledgeable assessors, might strike a better balance between exploration and empirical rigor, ensuring that new and unconventional ideas have a fair chance to develop and prove their worth.

EDIT:

I wanted to clarify that I recognize the 'republican' nature of your proposal, where fund managers have the discretion to determine how best to advance the selected cause areas. My suggestion builds on this by advocating for even greater flexibility for these representatives. Specifically, I propose that the community select assessors who would have broader autonomy, not just to optimize within established areas but to explore and fund unconventional or emerging ideas that might not yet have strong empirical support. This could help ensure a more dynamic and innovative approach to funding within the EA community.

I don't believe the "meat eater problem" should be ignored, but rather approached with great care. It's easy to imagine the negative press and public backlash that could arise from expressing views suggesting it might be better for people to die or discouraging support for charities that save lives in the developing world.

The Effective Altruism community is very small, with estimates around 10,000 people—a tiny fraction of the nearly 8 billion people on the planet. If we want to create a world without factory farming, we need to focus on bringing more people into the fold who care about animals. Spotlighting an analysis that essentially suggests it's good when young children die and that we should discourage saving them doesn't seem like the path to growing the movement that can end the horrors of factory farming.

By treating this problem with care, we can ensure that our efforts to improve the world are effective without alienating those who might otherwise join us in the fight against animal suffering.

The "meat eater problem" raises an intriguing ethical question, but I'm inclined to think (with low confidence) that even if the concern is valid, the proliferation of this idea could have negative expected value. By focusing on such a divisive concept, we risk alienating potential supporters of the animal welfare movement, which could ultimately hinder efforts to reduce animal suffering. That said, this is a distinct question from whether the average human's impact on factory farming should alter personal donation decisions.
