
 

I’ve spent a lot of time thinking and writing about how we ought to live. The aim has always been to get people to behave more ethically, and so to bring about a better world. The Life You Can Save (TLYCS), an organization I founded and which I now serve as board president, focuses on making effective giving part of mainstream thinking about charitable giving. I hope some of you will want to take a look at the organization’s Strategic Plan for producing large-scale behavioral change. It provides some insights into our thinking, where we are today, and where we want to be in the future.

 

In 2012, The Life You Can Save consisted of a book, which I published in 2009, and a website set up by a friend and supporter, intended to promote the ideas of the book and encourage people to pledge to donate a percentage of their income to effective charities. At that time Charlie Bresler approached me, offering to turn TLYCS into a meta-charity, contributing both his time and work (he became its unpaid Executive Director) and his funding. I thought it was risky (relative to Charlie’s alternative of giving his money to the Against Malaria Foundation) but a risk worth taking. Since then, I’ve become more and more convinced that the bet has paid off handsomely. Last year, we moved $2.7m (a conservative estimate) to our recommended charities, more than $9 for every dollar spent on operating expenses. These metrics should continue to improve, as growth in money moved has so far been strong in the current year, while expenses are roughly the same as last year.

 

TLYCS has built a small but talented team led by Charlie (former president of the Men’s Wearhouse) and COO Jon Behar (a 10-year veteran of the world’s largest hedge fund). Our impact to date has been significant and is growing on a steep trajectory, but this progress represents only the early stages of our plans. Ultimately, TLYCS wants to develop the capacity to introduce huge numbers of people to the idea of effective giving, and to have the tools and messaging available to get them to act. We also want to build a community that will nurture and increase their involvement over time. Our Strategic Plan explains the vision for making this happen, and how added capacity will translate into more impact. I hope you’ll read the Strategic Plan and consider supporting TLYCS.

 


I've long seen things this way:

  • GiveWell: emphasizes effectiveness: the logic pull
  • TLYCS: emphasizes altruism: the emotion pull
  • GWWC: emphasizes the pledge: the act that unifies us as a common movement (or I think+feel it does)

One cute EA family.

Thanks for the info.

I have some doubts generally about the principle of mainstreaming. It seems to me that mainstreaming utilises dominant ideologies 'strategically', thus reifying them. In the animal movement this is very much the case with One Step for Animals, Pro-Veg and The Vegan Strategist. All these groups and organisations have adopted a mainstream 'pragmatic' approach which concurrently undermines social justice.

This is of course one approach, but I do not believe there is sufficient evidence to pursue it, or that it stands to reason. It would be far better for these mainstream groups to avoid social justice issues completely, which would include rights and veganism (the cessation of exploitation), rather than essentially undermining them to privilege their approach.

For example, I think it is deeply unfortunate that Matt Ball recently said we need to utilise the idea that people hate vegans in order to appeal to non-vegans and 'help' animals. I would question the ethics of this, and also whether it is in fact true that 'people' hate vegans, or that forming and perpetuating this idea would be a good thing anyway. This is one example, but in my view mainstreaming sets forth a cascade against people who are trying to do good pro-intersectional social justice work. It is, I believe, also true that groups involved in 'mainstreaming' have not sufficiently evaluated their approach, so it seems unworthwhile to support it, even whilst many EAs seem to do just that.

In which way do you believe that pragmatism undermines social justice? Couldn't it be that a pragmatic approach increases social justice, if it is shown to be the most effective?


What empirical tests can we make to measure which approach is more effective? What predictions can be made in advance of those tests?

First of all, we would need to accept that there are different approaches, and consider what they are, before evaluating effectiveness.

The issue with Effective Altruism is that it is fairly one-dimensional when it comes to animal advocacy. That is, it works with the system of animal exploitation rather than counter to it, primarily through welfarism and reducetarianism. In relation to these ideas we need to see the supporting counterfactual analysis, and yet where is it? I've asked these sorts of questions and it seems that people haven't applied some fundamental aspects of Effective Altruism to these issues. They are merely assumed.

For some time it has appeared as if EA has been working off a strictly utilitarian script, and has ignored or marginalised other ideas. Partly this has arisen because of the limited pool of expertise that EA has chosen to draw upon, and this has had a self-replicating effect.

Recently I read through some of Holden Karnofsky's thoughts on Hits-based Giving, and something towards the end of the essay particularly chimed with me.

"Respecting those we interact with and avoiding deception, coercion, and other behavior that violates common-sense ethics. In my view, arrogance is at its most damaging when it involves “ends justify the means” thinking. I believe a great deal of harm has been done by people who were so convinced of their contrarian ideas that they were willing to violate common-sense ethics for them (in the worst cases, even using violence).

As stated above, I’d rather live in a world of individuals pursuing ideas that they’re excited about, with the better ideas gaining traction as more work is done and value is demonstrated, than a world of individuals reaching consensus on which ideas to pursue. That’s some justification for a hits-based approach. But with that said, I’d also rather live in a world where individuals pursue their own ideas while adhering to a baseline of good behavior and everyday ethics than a world of individuals lying to each other, coercing each other, and actively interfering with each other to the point where coordination, communication and exchange break down.

On this front, I think our commitment to being honest in our communications is important. It reflects that we don’t think we have all the answers, and we aren’t interested in being manipulative in pursuit of our views; instead, we want others to freely decide, on the merits, whether and how they want to help us in our pursuit of our mission. We aspire to simultaneously pursue bold ideas and remember how easy it would be for us to be wrong."

I think in time we will view the present EAA approach as having commonalities with Karnofsky's concerns, and steps will be taken to broaden the EAA agenda to be more inclusive. I think it is unlikely, however, that these changes will be sought or encouraged by movement leaders, and even within groups such as ACE I remain concerned about bias within leadership toward the 'mainstream' approach. Unfortunately, ACE has historically been underfunded, and has not received the support it has needed to properly account for movement issues, or to increase the range of the work it undertakes. I think this is partly a leadership issue, in that aims and goals have not been reasonably set and pursued, and also an EA movement issue, where a certain complacency has set in.

http://www.openphilanthropy.org/blog/hits-based-giving

I don't see how TLYCS is selling out at all. They have the same maximizing-impact message as other EA groups, just with a more engaging feel that also appeals to emotions (the only driver of action in almost all people).

Matt Ball is more learned and impact-focused than anyone in the animal rights field. One Step for Animals and the Reducetarian Foundation were formed to save as many animals as possible -- complementing, not replacing, vegan advocacy. Far from selling out, One Step and Reducetarian are exceptions to the many in animal rights who have traded their compassion for animals for feelings of superiority.

Maximising impact wouldn't necessarily rely on messaging that undermines other groups in the broader animal movement. I don't think it is a good thing to take such an approach, either in relation to Effective Altruism or within the movement itself.

Matt Ball's recent Vox article stated that people love animals and hate vegans, and that we need to act on this. I think this isn't a good thing, particularly where someone as respected as Matt Ball is equating vegans to Hezbollah via someone as dedicated to animal exploitation as Bourdain. This of course is quite an extreme example compared to what many 'pragmatists' (for instance Tobias Leenaert) have been doing for some time. Yet it has become a dominant theme in Effective Altruism, and it isn't justified. Instead, I would argue it is actually quite harmful.

In terms of where we should be aiming, I believe we ought not to be undermining veganism on an institutional basis, as Reducetarianism and One Step do (so they shouldn't utilise a misrepresentation of veganism to privilege their approach). Recycling anti-vegan rhetoric or irrational justifications for animal consumption doesn't reflect well on the integrity of Effective Altruism, nor is there any evidence for it being a particularly 'effective' approach, beyond its popularity among people who have been conditioned to exploit animals. Popularity need not be pursued through the replication of carnism, or the utility of the carnist system; there are other values and methods with which to make appeals.

It's also really not a question of superiority; that is something generally brought up to dismiss the issue. Instead it is a question of integrity, responsibility and consideration. I think these are all central values of Effective Altruism, and they need to be applied.
