I blog about political/economic theory and moral philosophy.
Hi Brad,
The counterfactual is definitely something that I think I should examine in more detail.
Agreed that the marginal effect would be fairly logarithmic, and I should have accounted for the fact that there is quite a lot of competition for employment at Earthjustice (i.e. one would need to be in the top 0.001% of lawyers to have counterfactual impact).
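The logarithmic-returns intuition above can be sketched with a toy model: if total impact grows roughly like log(x) in resources or talent, each additional unit is worth roughly 1/x, so early units dwarf later ones. All numbers here are illustrative assumptions, not estimates about Earthjustice.

```python
import math

def total_impact(resources):
    """Toy model: total impact grows logarithmically with resources."""
    return math.log(resources)

def marginal_impact(resources, delta=1.0):
    """Extra impact from adding `delta` more units at a given level."""
    return total_impact(resources + delta) - total_impact(resources)

# The marginal unit at a small organization matters far more than
# the same unit at a large, well-resourced one.
print(marginal_impact(1))   # ~0.693
print(marginal_impact(10))  # ~0.095
```

Under this (purely assumed) functional form, the tenth unit of resources is worth less than a seventh of the second, which is the shape of the diminishing-returns argument being conceded here.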
I'm actually fairly convinced by the argument that seeking to work for Earthjustice is worse than ETG, so I may go back and make some sweeping modifications to the post.
I think that the exercise does at least stand as a demonstration of the potential impact of systems change nonprofits with new/neglected focus and that Earthjustice is a success story in this realm.
Do you have a high level of confidence that Earthjustice is too large/established for it to compete with funding new and/or neglected projects?
Hi Jason, thanks for the response.
Agree that marginal increases have lower impact. I assume GiveWell-style research on the inner workings of the organization would be needed to see if funding efficacy is actually currently comparable to AMF, and I don't presume to have that level of know-how. I'm just hoping to bring more attention to this area.
What tools are used to assess likely funging? Is a large deficit as a percentage of operating costs a sign that funging would be relatively low, or are most organizations that don't have the explicit goal of continuing to scale assumed to have very high funging rates of, say, 50% or more?
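The arithmetic behind the question is simple: funging discounts a donation's counterfactual value by the fraction that merely displaces money other funders would have given anyway. A minimal sketch, with made-up numbers:

```python
def effective_donation(amount, funging_rate):
    """Counterfactual value of a donation after funging.

    funging_rate: fraction of the donation that displaces money
    other funders would otherwise have given (0.0 to 1.0).
    These inputs are hypothetical, for illustration only.
    """
    return amount * (1.0 - funging_rate)

# At a 50% funging rate, a $10,000 gift only moves $5,000 of
# counterfactual money to the organization.
print(effective_donation(10_000, 0.5))  # 5000.0
```

So the question of whether a deficit implies a low funging rate matters a lot: it directly scales how much of each marginal dollar actually counts.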
Other species are instrumentally very useful to humans, providing ecosystem functions, food, and sources of material (including genetic material).
On the AI side, it seems possible that a powerful misaligned AGI would find ecosystems and/or biological materials valuable, or that it would be cheaper to use humans for some tasks than machines. I think these factors would raise the odds that some humans (or human-adjacent engineered beings) survive in worlds dominated by such an AGI.
I think it is potentially difficult to determine how good the average doctor is in a particular place, and how much better one would be than that average; but if one could be reasonably confident of making a large counterfactual difference to patient outcomes, the impact could be significant. The easiest way I can think of to be sure of these factors would be to go somewhere with a well-documented shortage of good doctors, while trying to learn about and emulate the attributes of good doctors.
Being a doctor may not be one of the highest-impact career paths on Earth, but it might be the highest-impact and/or most fulfilling path for a particular person. High impact and personal fit/fulfillment are fairly highly correlated, I think, and it's worth exploring a variety of career options efficiently while making those decisions. In my experience, it can be very difficult to know one's best path, but what has helped me most so far is getting a taste of the day-to-day in a role, and talking to people already established in the paths I'm considering.
EA should add systems change as a cause area - MacAskill or Ord vs. [someone with a view of history that favors systems change more who has been on 80,000 Hours].
From hazy memory of their episodes it seems like Ian Morris, Mushtaq Khan, Christopher Brown, or Bear Braumoeller might espouse this type of view.
True. I think they meant that it's plausible humans would convert the entire population of cows into spare parts, instead of just the ones that have reached a certain age or state, if it served human needs better for cows to not exist.
I agree that activism in particular has a lot of idiosyncrasies, even within the broader field of systems change, that make it harder to model or understand but do not invalidate its worth. I think it is worthwhile to attempt to better understand activism, and systems change in general, and to do so, EA methodology would need to be comfortable with much looser expected value calculations than it normally uses. In particular, a separate framework from ITN may be preferable in this context, because "scale, neglectedness, and tractability" may be less useful for deciding what kind of activism to do than concepts like "momentum, potential scope, likely impact of a movement at maximum scope and likely impact at minimum or median scope/success, personal skill/knowledge fit, personal belief alignment," etc.
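The alternative criteria suggested above could be turned into a very rough napkin framework, e.g. a weighted score per movement. The weights and scores below are illustrative assumptions, not empirical estimates; the point is only that even loose criteria can be made comparable.

```python
# Hypothetical weights over the criteria named above (must sum to 1).
CRITERIA_WEIGHTS = {
    "momentum": 0.20,
    "potential_scope": 0.25,
    "impact_at_max_scope": 0.25,
    "impact_at_median_scope": 0.15,
    "personal_fit": 0.10,
    "belief_alignment": 0.05,
}

def movement_score(scores):
    """Weighted average of 0-10 scores, one per criterion."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

# Made-up scores for a hypothetical movement:
example = {
    "momentum": 6,
    "potential_scope": 8,
    "impact_at_max_scope": 7,
    "impact_at_median_scope": 4,
    "personal_fit": 9,
    "belief_alignment": 8,
}
print(round(movement_score(example), 2))  # 6.85
```

A framework this loose obviously can't settle which movement is best, but it forces the evaluator to state which criteria they're weighting and why, which is most of the value of a napkin calculation.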
I think it's worth attempting these sorts of napkin calculations and inventing frameworks for things in the category of "things that don't usually meet the minimum quantifiability bar for EA," as a thought exercise to clarify one's beliefs if nothing else. Besides, regardless of whether moderately rigorous investigation endorses the efficacy of various systems change mechanisms, it seems straightforwardly good to develop tools that help those interested in systems change maximize their positive impact. Even if the EA movement itself remained less focused on systems change, people in EA are capable of producing accurate and insightful literature and research on the huge and extremely important fields of public policy and social change, and those contributions may be taken up by other groups, hopefully raising the sanity waterline on the meta-decision of which movements to invest time and effort in. After all, there are literally millions of activist groups and systems-change-focused movements out there, and developing tools to make sense of that primordial muck could aid many people in their search for the most impactful and fulfilling movements to engage with.
We may never know whether highly quantifiable non-systems-change interventions or harder-to-quantify systems change interventions are more effective, but developing an effectiveness methodology for both spheres seems better than restricting one's contributions to one. For example, spreading good ideas in the other sphere may boost the general influence of a group's ideals and methodologies, and advances there can cross-pollinate back. If EA maximizes for peak highly quantifiable action, ought there to be a subgroup that maximizes for peak implementation of "everything that doesn't meet the typical minimum quantifiability bar for EA"?
Race for the Galaxy is an excellent game.
Gratitude can sound cheesy, but it's one of the most scientifically backed ways to make humans happy. One nice ritual to do with the people you live with: before eating dinner together, go around the table and have each person say something they're grateful for.
It's true that you didn't technically advocate for it, but in context it's implied that subsidized abortions for people addicted to drugs would be a good policy to consider.
"We are not going to stop hearing about eugenics. Every time someone tries to call it something different, the “e” word and its association with historic injustice and abuse is invoked to end the discussion before it can begin.
When someone says that screening embryos for genetic diseases, giving educated women incentives to have children (like free child care for college educated women), or offering subsidized abortions for women addicted to drugs is "eugenics" they are absolutely using the term correctly."
I accept that the idea "abortion is eugenics" is already advanced by some conservatives. However, I think that the policy of targeted abortion subsidies would convince more people that "abortion is eugenics," and I think that this would make it easier to ban abortion.
I think the fact that Israel already has a very different cultural environment regarding genetic interventions means that those examples of targeted subsidies may well be much more controversial in other countries.
I'm glad you agree on that last point.
For me it's been good to make a habit of looking for the least controversial policy that achieves desired goals. I often discover reasons that the more controversial options were actually less desirable in some way than the less controversial ones. This isn't always the case, but in my experience it has been a definite pattern.
I suspect that the social cost of making "I have [better/worse genetics] than this person" a widespread, politically relevant, and socially permissible subject outweighs the potential benefits of policies like subsidized abortions for people addicted to drugs and special incentives for educated women to have kids.
With regard to targeted abortion subsidies, what about the risk of reanimating the "abortion is eugenics" argument against its legality, particularly in the US, where abortion has been banned in many states? If you believe that abortion's legality has had very positive genetic effects, then shouldn't preserving that legality be an extremely high priority, and the political cost of proposing this policy prohibitive?
If you want to make abortions more accessible, why not make them free for everyone? That would make abortion a better option for people who can't face the short-term financial burden of its cost, reduce the odds of it being banned by reinforcing the idea that it is a right, and avoid the backlash that a targeted subsidy would unleash.
It seems like ensuring that everyone gets to decide when, if ever, they are best prepared to have and raise kids has such incredibly high positive externalities that, if anything, there is a strong argument abortion should already be free. The same goes for condoms, birth control pills, and IUDs.
Why not advocate for new, universal public goods, rather than policies that unnecessarily risk negative social and political impacts?