I was discussing EA with a friend the other night and they made some criticisms I couldn't answer. I think that's usually illustrative, so I'll attempt to structure the discussion:
Effective Altruism must exist due to market failure
The market is the least bad allocator of resources we have created. I agree that a failure to price in x-risks and s-risks (existential and suffering risks) is a market failure here. Companies will make no money in a future where we are all dead, and consumers would pay a lot to avoid significant suffering, and yet companies aren't investing. We are subsidising catastrophe while we let this continue.
Effective Altruism does not primarily focus on correcting that market failure
While it is an aspect of EA to lobby government to consider x- and s-risks, it is not (as far as I can tell) the primary focus, nor is it what most people seem to spend their time doing. In other ideologies it might be reasonable to say we are doing what we can whilst carrying on, but since we are about finding the most effective way to do things, if this were the most important thing to do, we should all do it. We should found or convince a political party and either campaign ourselves or pay others to. Why don't we?
80000 hours is basically central planning and is unsustainable as the movement grows
If people choose work based on 80k hours' advice rather than their own desires/market incentives, this is a break from market allocation and a backtrack towards central planning, which has always performed worse in the past. Why is it better here?
Likewise, the 80k hours advice isn't scalable. If 10% of the workforce were reading 80k hours, it would make much more sense to change government than to tell each individual which job they ought to be doing. At that point, rather than saying it should mainly be AI and biorisk, you'd be thinking about how to shift the whole economy, which is more effectively done by the market than by central planning. Rather than advising individuals, you'd work at the level of legislation (to correct externalities).
So why is there a goldilocks zone where it makes sense to tell individuals to change their lives, when elsewhere they should all lobby government? Why are we working to do something which we don’t intend to do indefinitely? I can’t help but think it seems as if much of EA is a stopgap right now to demonstrate our legitimacy so that we can convince others to join and eventually move to our real purpose - wholescale legislative change. If that’s the case we should be honest about it. If that isn’t the case, where is this argument wrong?
A core question for me is still, "Is EA's main aim to grow to affect govt policy?". This would deal with the problems EA organisations work on at the level of incentives, such that non-EAs would be properly motivated to solve problems that affect all our wellbeing.
In that sense, correcting an externality is better than lobbying firms/consumers to ignore it (which is roughly what we currently do). Am I wrong here? If growth isn't EA's main aim, why not? Something doesn't add up.
I suppose the best answer I can expect is "we don't know that's more effective" — thanks to Aaron, who showed me how GiveWell is starting to look at this. But at some level that will stop being true: if EA had 51% support then we could just vote through the measures we wanted (some ethical nuances aside).
So the secondary question is: do we have any idea when this shift from lobbying individuals to lobbying/participating in government ought to take place? How many EAs should exist in a country before they make a concerted effort to lobby directly? That seems a fairly crucial detail.