Brad West🔸

Founder & CEO @ Profit for Good Initiative
1920 karma · Joined Roselle, IL, USA · Profit4good.org/

Bio

Looking to advance businesses with charities in the vast majority shareholder position. Check out my TEDx talk for why I believe Profit for Good businesses could be a profound force for good in the world.

Posts (19)

Comments (301)

I don't think people are saying that putting time and/or money toward charities that serve the poor in rich countries doesn't help people, merely that you could help more poor people in poor countries with the same resources. Thus, if we are weighing the interests of the unfortunate in poor and rich countries equally, we would want to commit our limited resources to the developing world.

I think EAs often assume a fixed set of resources that they have committed to doing good. Under that assumption, the counterfactual of a donation to the food pantry is a donation to a more cost-effective charity. The "warm fuzzy/utilon" dichotomy that you deride here actually supports your notion that the food pantry could compete with the donor's luxury consumption instead: warm fuzzies (the donor's psychic benefit from giving) could potentially substitute for the consumption of luxury goods (going out to eat, etc.).

So the concept of fuzzies (albeit perhaps framed in language you find off-putting) actually supports your notion that, within individual donation decisions, helping locally does not always compete with effective giving.

I think the sort of world that could be achieved by the massive funding of effective charities is a rather inspiring vision. In her TED Talk, Natalie Cargill, Longview Philanthropy's CEO, lays out a rather amazing set of outcomes that could be achieved.

I think a realistic method of achieving these levels of funding is Profit for Good businesses, as I lay out in my TEDx Talk. It is realistic because most people don't want to give something up to fund charities, as donating would require; but if they could help solve world problems by buying products or services they want or need, of similar quality and at the same price, they would.

I find it a bit surprising that your point is so well taken and has met no disagreement so far, though I am inclined to agree with it.

Another way of framing "orgs that bring talent into the EA/impact-focused charity world" is orgs whose hiring is less focused on value alignment, insofar as involvement in the movement corresponds with EA value alignment. One might worry that a less aligned hire would do well on metrics that can be easily ascertained or credited by their immediate employer, but ignore other opportunities or considerations regarding impact because they are narrowly concerned with legible job performance and personal career capital. On this view, they could go on to use the career capital they develop to displace more aligned individuals. If funding, rather than labor willing to work for pay, is the larger constraint on impactful work, "re-using" people in the community may make sense because the impact premium from value alignment is worth more than the marginal delta from a seemingly superior resume.

Of course, another view is that hiring someone into an EA org can create buy-in and "convert" them into the community, or allow them to discover a community they already agree with.

Something that gives me pause about crediting orgs too much for bringing in additional talent is that, for many kinds of talent, there is already a lot of EA talent chasing limited paid opportunities. Expanding the labor pool in some areas is probably much less important because funding is the limiting factor.

I agree with your post overall and think that EA can be very pedantic, professorial, and overly averse to persuasion. I am very glad you wrote this post and believe that EAs should give more credit to the importance of persuasion (and probably be more receptive to positive persuasion, as opposed to criticism).

However, the title of your post suggests that the scout mindset is valuable only as a servant of persuasion. It is important to note that the scout mindset has other valuable applications.

On the subject of redirecting streams of money from less impactful causes to EA causes, I feel I need to beat my drum regarding the potential of Profit for Good businesses (businesses with charities in all, or almost all, of the shareholder position). To the extent an EA PFG's profits displace those of normal businesses, funds are diverted from the average shareholder to an effective charity.

So when a business like Humanitix (a PFG helping projects in the developing world, with $4 million AUD given to The Life You Can Save) takes market share from Ticketmaster, funds are diverted not from other charities but from the business's competitors. This method of diversion seems less difficult because the operative actors (consumers, employees, business partners) are not deciding between an EA charity and a strong non-EA charity optimized for warm fuzzies and marketing; they are choosing between products with similar value propositions, where engaging with one implies, on top of that value proposition, helping fight malaria or something instead of enriching a random investor.

If you're interested in learning more about Profit for Good, here is a reading list on the subject.

Perhaps the most compelling reason for independent donors to contribute is that organizations like OP may have methodologies and assumptions that cause important opportunities to be missed. Independent donors likely have a different set of methodologies, assumptions, and ideas they are exposed to, enabling them to spot and support high-impact opportunities that OP overlooks or undervalues due to its particular perspectives, biases, or simple lack of awareness.

Given the vast landscape of potential research areas, decisions about which causes to investigate, even by large institutions, are often made using rough back-of-the-envelope calculations. And given the importance of finality and focus, promising ideas and cause areas can be rather cavalierly dismissed. Even if these calculations are approximately correct, categorically including or excluding entire areas means that promising interventions atypical of their category may be missed. Independent funders would not necessarily be burdened by having removed areas from consideration (although this certainly trades off against OP's ability to zoom in and more fully explore the areas it has positively categorized).

By bringing diverse viewpoints to the table, independent donors can fund innovative projects that might otherwise be overlooked, enriching the philanthropic landscape beyond what a single major funder can achieve.

It seems to me that the proof is in the pudding. The content can be evaluated on what it brings to the discourse, and the tools used to produce it are relevant only insofar as they result in undesirable content. Rather than questioning whether the post was written by generative AI, I would give feedback on the aspects of the content you are criticizing.

You seem to indicate that "maximizing" for some value, such as the well-being of moral patients across spacetime, would lead to, or tend to lead to, poor mental health. I can understand how one might think this of "naïve maximization," where one depletes oneself, in terms of one's effort, time, and resources, at a rate that leaves one burned out or barely able to function. But this is like suggesting that to get the most out of a car, you should drive it frequently and relentlessly without providing the vehicle needed upkeep and repairs.

But one who does not incorporate one's own needs, including mental health needs, into one's determination of how to maximize for a value is not operating optimally as a maximizer. I will note that others have said that viewing the satisfaction of their own needs or desires as primarily instrumental, rather than terminal, goals somewhat diminishes them. In my personal experience, I strive to "maximize": I want to live my life in a way best calculated to reduce suffering and increase the flourishing of conscious beings, but I recognize that taking care of my health is part of how to do so.

I would be curious whether other "maximizers" would say they are capable of integrating their own health into their decisions such that they can maintain adequate health.

When I have seen efforts to improve community relations, they have typically been in the "Community Health" context, relating to complaints about people in the community or other conflicts. I haven't seen as much concerted effort in connecting people working on different EA projects that might add value to each other.

A lot of what I have seen regarding "EA Community teams" seems to be about managing conflicts between individuals.

It would be interesting to see an organization or individual that explicitly specialized in knowing the different individuals and organizations in the space and the projects they are working on, and that could connect people who might add value to each other's projects. It strikes me that there are a lot of opportunities for collaboration but not much organization around mapping out the EA space at a more granular level.
