Evan_Gaensbauer's Comments

Expert Communities and Public Revolt

This still neglects the possibility that if governments across the world are acting suboptimally, then cooperation among them, and a close and cozy relationship between expert communities and governments, may come at the cost of a negative relationship with broad sections of the public. Who and what 'the public' is should usually be unpacked, but suffice it to say there are sections of civil society that, as far as expert communities are concerned, are closer than governments to correctly diagnosing the problems and solutions of social crises. For example, expert communities sometimes have more success achieving their goals by working with environmental movements around the world to indirectly move government policy than by working with governments directly. This is sometimes observed today in progress made in tackling the climate crisis. Similarly, during the Cold War, social movements (anti-war, anti-nuclear, and environmental movements) in countries on both sides played a crucial role in moving governments toward policies that deescalated nuclear tensions, like the SALT treaties, of the kind an expert organization like the Bulletin of the Atomic Scientists (BAS) would advocate for. It's not clear that movements within the scientific community to deescalate nuclear tensions between governments would have succeeded without broader movements in society pursuing the same goals.

Obviously such movements can also hinder the world-improving goals pursued by expert communities, when governments would otherwise be the institutions that advance progress toward those goals better than the movements would. A key example: while environmental movements have played a positive role in combating pollution and in deescalating nuclear tensions during the Cold War, they've been counterproductive in decreasing public acceptance and the political pursuit of the safest forms of nuclear energy. Many governments around the world that would otherwise build more nuclear reactors to produce energy and electricity to replace fossil fuels don't do so because they rightly fear the public backlash that would be whipped up by environmental movements. Some sections of the global environmental movement have become quite effective at freezing the progress on climate change that could be made by governments around the world building more nuclear reactors.

Expert communities face trade-offs in building relationships with sections of the public, like social movements, versus governments. I haven't done enough research to know whether there is a highly effective strategy for an expert community to decide what to do under any given conditions. Suffice it to say, there are no easy answers for effective altruism as a social and intellectual movement, or for the expert communities to which we're connected, in resolving these issues.

While we're on this topic, I thought it fitting to acknowledge the similar issues effective altruism faces as a movement. Effective altruism as a global community has been crucial to the growing acceptance of AI alignment as a global priority among some institutions in Silicon Valley and other influential research institutions across the world, both academic and corporate. We've also influenced some NGOs involved in policymaking, and world governments, to take seriously transformative AI and the risks it poses. Yet that influence has mostly been indirect, has had little visible impact, and hasn't produced a better, ongoing relationship between EA as a set of institutions and governments.

We're now in a position where, as much as EA might be integrated with efforts in AI security in Silicon Valley and universities around the world, the governments of Russia, China, and South Korea, the European Union, and at least the military and intelligence institutions of the American government are focused on it. Those governments' increased focus on AI security is partly a consequence of EA spreading greater public consciousness of AI alignment (the far bigger factor being the corporate and academic sectors achieving major research progress in AI, as recognized through significant milestones and breakthroughs). There are good reasons why some EA-aligned organizations would keep private that they've developed working relationships with the research arms of world governments on the subject of AI security. Yet from what we can observe publicly, it's not clear that perspectives from EA and the expert communities we work with currently have more than a middling influence on the choices world governments make regarding security in AI R&D.

AMA: "The Oxford Handbook of Social Movements"

I've identified the chapters in the OHSM where, if the book holds an answer to these questions, it will be found. There are five chapters, totaling roughly 100 pages. Half the chapters focus on ties to other social movements, and half focus on political parties and ideologies. I can and will read them, but to give a complete answer to your questions, I'd have to read most of at least a couple of chapters. That will take time. Maybe I can provide specific answers to more pointed questions. If you've read this comment, pick one goal from one cause area, and decide whether you think achieving that goal depends more on EA's relationship to another social movement or to a political ideology. At that level of specificity, I expect I can give one or two academic citations that should answer the question. I could still answer the question at the highest level, but at that point I'd be writing a mini-book review on the EA Forum that would take me a couple of weeks to complete.

AMA: "The Oxford Handbook of Social Movements"

I'm aware of a practical framework that social movements, along with other kinds of organizations, can use. Different versions of this framework exist, for example, in start-up culture. I'm going to use the version I'm familiar with from social movements. I haven't yet taken the time to check in the OHSM whether this framework is widely and effectively employed by social movements overall.

A mission is what a movement ultimately seeks to accomplish. It's usually the very thing that inspires the creation of a movement. It's so vast it often goes unstated. For example, the global climate change movement has a mission of 'stopping the catastrophic impact of climate change'. Yet that's so obvious that environmentalists don't need to establish at meetings that the reason they've gathered is to stop climate change. It's common knowledge.

The mission of effective altruism is, more or less, "to do the most good". Other movements have cause areas similarly broad to effective altruism's, but a cause area is not the same thing as a mission. The cause area someone focuses on follows from their perception of how to do the most good, or their evaluation of how they can personally do the most good. So each cause area in EA represents a different interpretation of how to do the most good, as opposed to being a mission or goal in and of itself.

Goals are the milestones a movement believes must be completed to accomplish its mission. The movement believes each goal is individually necessary for completing the mission, and that the full set of goals combined is sufficient to complete it. So for the examples you gave, the setup would be as follows:

Cause: Global poverty alleviation

Mission: End extreme global poverty.

Goals: Improve trade and foreign aid.

Cause: Factory Farming

Mission: End factory farming.

Goals: Gain popular support for legal and corporate reforms.

Cause: Existential risk reduction

Mission: Avoid extinction.

Goals: Mitigate extinction risk from AI, pandemics, and nuclear weapons.

Cause: Climate Change

Mission: Address climate change.

Goals: Pursue cap-and-trade, carbon taxes, and clean tech.

Cause: Wild Animal Welfare

Mission: Improve the welfare of wild animals.

Goals: Do research to figure out how to do that.

Having laid it out like this, it's easier to see (1) why a "cause" isn't a "mission" or "goal", and (2) how this framework can be crucial for clarifying what a movement is about at the highest level of abstraction. For example, while the mission of the cause of 'global poverty alleviation' is 'eliminate extreme global poverty', goals of systemic international policy reform don't match what EA primarily focuses on to alleviate global poverty, which is a lot of fundraising, philanthropy, research, and field activity, focused on global health rather than public policy. Your framing assumes 'existential risk reduction' refers to 'extinction risk', but 'existential risk' has been defined as long-term outcomes that permanently and irreversibly alter the trajectory of life, humanity, intelligence, and civilization on Earth or in the universe. That includes extinction risks but can also include risks of astronomical suffering. If nitpicking the difference between missions and goals seems like needless semantics, remember that because EA as a community doesn't have a clear and common framework for defining these things, we've been debating and discussing them for years.

Below goals are strategy and tactics. A strategy is the framework a movement employs for how to achieve its goals. Tactics are the concrete, action-oriented steps the movement takes to implement the strategy. The mission is to the goals as the strategy is to the tactics. There is more to say about strategy and tactics, but this discussion is too abstract to get into that here. For figuring out what an effective social movement is, and how it becomes effective, it's enough to start thinking in terms of missions and goals.

AMA: "The Oxford Handbook of Social Movements"

This isn't from the OHSM, but two resources to learn more about this topic are the Wikipedia article on 'satisficing', a commonly suggested strategy for adapting utilitarianism in response to the demandingness criticism, and this section of the 'consequentialism' article on the Stanford Encyclopedia of Philosophy focused on the demandingness criticism.

AMA: "The Oxford Handbook of Social Movements"

As with my response to your other questions in your other comment, it's easier to operationalize 'success', 'failure', and 'support' with missions, goals, and objectives in mind. I believe I can find answers to the other questions more easily, but these ones aren't answerable without specified goals.

AMA: "The Oxford Handbook of Social Movements"

These questions seem too general for a satisfying answer; I'd have to quote a few whole chapters to answer them completely. An answer applicable to effective altruism depends on making assumptions about the community's goals. I think it's safe to make some assumptions here for the sake of argument. To start, it's safe to say effective altruism is in practice a reformist rather than a revolutionary movement. Beyond that, it would help to specify what kinds of goals you have in mind, and which means of achieving them are preferred and/or believed to be most effective.

What are the key ongoing debates in EA?

Whether effective altruism should be sanitized seems like an issue separate from how big the movement can or should grow. I'm also not sure questions of sanitization should be reduced to either doing weird things openly or not doing them at all. That framing ignores the possibility that something can be changed to be less 'weird', as has been done with AI alignment or, to a lesser extent, wild animal welfare. Someone could figure out how betting on pandemics, or whatever else, can be done without it becoming a liability for the reputation of effective altruism.

Did Geoff Anders ever write a post about the performance of Leverage Research and their recent disbanding?

Frankly, I'm unsure how much there is to learn from or about Leverage Research at this point. Having been in the effective altruism movement for almost as long as Leverage Research has been around, an organization which has had some kind of association with effective altruism since soon after it was founded, I see Leverage Research's history as one of failed projects, many linked to the mismanagement of Leverage Research as an ecosystem of projects. In effective altruism, one of our goals in learning from mistakes, including the mistakes of others, is to avoid making the same kinds of mistakes ourselves. It's usually more prudent to judge mistakes on a case-by-case basis, rather than by the actor or agency that perpetrates them. Yet other times there is a common thread. When there is evidence of repeated failures born of systematic errors in an organization's operations and worldview, often the most prudent lessons we can learn are why that organization repeatedly and consistently failed, and why its environment enabled a culture of barely ever course-correcting or being receptive to feedback. What we might be able to learn from Leverage Research is how EA(-adjacent) organizations should not operate, and how effective altruism as a community can learn to interact with them better.

[This comment is no longer endorsed by its author]