
Following up on my post criticising liberal-progressive criticisms of EA, I’m bringing you suggestions from further to the left on how EA could improve.

In general, libertarian socialism opposes the concentration of power and wealth and aims to redistribute them, either as a terminal goal or an instrumental goal.

This post is divided into three sections: Meta EA, EA Interventions and EA Philosophy.

Meta EA

Most of the interventions I propose are improvements in EA’s institutional design and safeguards, which should, in theory, increase the chances that resources are spent optimally.

Whether we are spending resources optimally is near-impossible to measure and evaluate, so we have to rely on theory. Regardless of whether my proposed interventions work or fail, there would be no evidence for it. 

EA relies on highly-uncertain, vulnerable-to-motivated-reasoning expected value (EV) calculations and is *no less* vulnerable to motivated reasoning than other ideologies. Because it is not possible to *detect* suboptimal spending, we should not wait for strong evidence of mistakes or outright fraud and corruption to make improvements, and we should be willing to bear small costs to reap long term benefits. 

EA priors on the influence of self-serving biases are too weak

In my view, EAs underestimate the role that self-serving biases play in imprecise, highly uncertain expected value calculations around decisions such as buying luxurious conference venues, lavish community-building expenditure, funding ready meals and funding Ubers, leading to suboptimal allocation of resources.

When concerns are raised, I notice that some EAs ask for “evidence” that decisions are influenced by self-serving biases. But that is not how motivated reasoning works - you will rarely find *concrete evidence* for motivated reasoning. Depending on the strength of self-serving biases, they could influence expected value calculations in ways that justify the most suboptimal, most luxurious purchases, with no *evidence* of the biases existing. 

Concrete suggestions for improving EV calculations, which I also discussed in another post:

  1. Have two (or more) individuals, or groups, independently calculate the expected value of an intervention and compare results
  2. In expected value calculations, identify a theoretical cost at which the intervention would no longer be approximately maximising expected value from the resources
  3. Keep in mind that EA aims to make decisions that approximately maximise expected value from a set of resources, rather than decisions which merely have net positive expected value (a minimal sketch of suggestions 1 and 2 follows this list)
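
To illustrate suggestions 1 and 2, here is a minimal sketch in Python. Every number, probability and name in it is a hypothetical placeholder rather than a figure from any real grant assessment:

```python
# Minimal sketch of suggestions 1 and 2. All figures are hypothetical placeholders.

def expected_value(p_success, benefit_if_success):
    """Toy EV model: probability of success times benefit (in arbitrary welfare units)."""
    return p_success * benefit_if_success

# Suggestion 1: two assessors estimate the same intervention independently.
ev_assessor_a = expected_value(p_success=0.30, benefit_if_success=10_000)
ev_assessor_b = expected_value(p_success=0.10, benefit_if_success=8_000)

# A large gap between independent estimates is a prompt to debate the inputs
# before committing funds, rather than a reason to quietly average and move on.
print(ev_assessor_a, ev_assessor_b)  # 3000.0 800.0

# Suggestion 2: given a funding bar (welfare units required per dollar), solve for
# the maximum cost at which the intervention would still clear the bar.
funding_bar = 5.0  # hypothetical bar: 5 welfare units per dollar spent
max_justifiable_cost = min(ev_assessor_a, ev_assessor_b) / funding_bar
print(max_justifiable_cost)  # 160.0 (dollars, in this toy example)
```

The point is not the particular numbers but the discipline: divergent independent estimates and an explicit break-even cost both make motivated reasoning easier to spot.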

EAs underestimate the importance of conflicts of interest and the distribution of power inside EA

There is a huge amount of overlap across the boards, governance and leadership of key EA organisations. This increases the risk of suboptimal allocation of resources, since funders may, in theory, give too much funding to organisations with connected leadership.

Although I think a certain degree of coordination via events such as the Leaders Summit is good, a greater degree of independence between institutions may help reduce biases and safeguard against misallocation. 

Concrete suggestion:
I would recommend that individuals are only allowed to hold leadership, board or governance positions in one EA organisation each. Beyond reducing risks of bias in funding allocation, this would also help to distribute power at the top of EA, safeguarding against individual irrationality and increasing diversity of thought, which may generate additional benefits.

If this seems like a bad idea, try the reversal test: do you think EA orgs should become more integrated?

EDIT 1 at 43 upvotes: Another potential intervention could be to split up existing organisations into more organisations. I can't think of an organisation where this would be obviously suitable so am not advocating for this happening right now, but I think it would make sense for organisations to split as they grow further in the future.

EA organisations underinvest in transparency

Many EA organisations do not write up funding decisions, or do so with massive delays, due to low capacity. This weakens safeguards against misallocation of resources by making it more difficult for the community to scrutinise grants and detect conflicts of interest, biased reasoning or outright corruption.

Previous discussion of this on the EA Forum has indicated what I consider to be overconfidence in decision making by funders. Others have implied that a low probability of failures currently happening may justify not investing more in transparency. 

Firstly, as I often say, funding decisions in EA rely on highly uncertain EV calculations which are highly prone to motivated reasoning. The probability of biased reasoning and correctable misallocation of resources does not seem low. The probability of *outright* corruption, on the other hand, does seem low. 

But importantly, the function of transparency is primarily as a long-term safeguard and disincentive against these things. *Detecting* poor reasoning, bias and corruption is only a secondary function of transparency. 

There are costs in implementing transparency. I don’t think EA should aim to maximise transparency, eg - by publishing grant write-ups the same day grants are made, but transparency should be increased from where it currently is. I think the costs of improving transparency are worth bearing. 

If this seems implausible, try the reversal test: do you think EA orgs should invest less in transparency than they do now, to allow faster grantmaking?

I made a separate post about this recently: https://forum.effectivealtruism.org/posts/G9RHEcHMLguGJY7uP/you-should-have-capacity-for-more-transparency-and

More on this topic:

https://forum.effectivealtruism.org/posts/4iLeA9uwdAqXS3Jpc/the-case-for-transparent-spending

https://forum.effectivealtruism.org/posts/sEpWkCvvJfoEbhnsd/the-ftx-crisis-highlights-a-deeper-cultural-problem-within

https://forum.effectivealtruism.org/posts/PkFenL3DcEJDjERwY/ftx-prob-related-strongly-recommending-creating-an-internal

EDIT 2 at 108 Upvotes: 

Concrete suggestion:

Grantmaking orgs should set and adhere to targets of writing up the reasoning behind every approved grant on the EA Forum within a certain timescale (eg - 1 month)

EA hasn’t sufficiently encouraged entrepreneurship-to-give as a strategy to diversify funding

“Diversify donors” is an obviously intractable solution to power concentration in EA - EA isn’t exactly turning large donors away to protect Dustin Moskovitz’s influence. 

But as I have written elsewhere, EA earning-to-give discussions have focused too much on high paying jobs, and not enough on entrepreneurship-to-give, which may be more likely to generate more large donors to distribute power away from Open Philanthropy, Cari Tuna and Dustin Moskovitz. (I also think entrepreneurship-to-give will just generate more money overall).

More on this topic:

https://forum.effectivealtruism.org/posts/cdBo2HuXA5FJpya4H/entrepreneurship-etg-might-be-better-than-80k-thought

https://forum.effectivealtruism.org/posts/JXDi8tL6uoKPhg4uw/earning-to-give-should-have-focused-more-on-entrepreneurship

EA grant making decisions are too technocratic

This section may seem similar to Luke Kemp and Carla Zoe Cremer’s criticisms, but I don’t think they properly explore the downsides of democratising things. Exploring those downsides doesn’t mean being opposed to democratisation; it helps us do it in the best way.

As I wrote in a previous post, it can help to view decision-making structures on a scale from highly technocratic to populist. We should not be trying to make the world as populist as possible and governments should not be making every decision using referendums and citizens assemblies. Public opinion is notoriously unstable, highly susceptible to influence by politicians, corporations and the media, sometimes in conflict with facts supported by strong evidence, and historically, has been extremely racist and homophobic.

But I think EA funding decisions are on the extreme technocratic end of the scale at the moment, and should be made less technocratic. I think this would improve long-term impact by benefiting from the wisdom of the crowd, incorporating diversity of thought, reducing bias and improving accountability to the community. It would also have the instrumental value of making community members feel empowered, which could help retain EAs.

Concrete suggestions:

  1. For grant decisions where expected value calculations put projects just above or just below an EA funder’s funding bar, the decision on whether or not to fund the project should be put to a vote on the EA forum (restricted to users with a certain amount of karma), or a new voting platform should be created for people accepted to EAG.
  2. Instead of individual EAs donating to EA Funds, a pooled fund should be created for individual EAs to donate to. Projects could apply to this fund and individual donor EAs could then vote on which grants to approve, similar to a DAO (decentralised autonomous organisation) or a co-operative (see the sketch after this list). Again, this can be restricted to people with a certain amount of karma on the EA forum or people accepted to EAG to protect the EA character of the fund and ensure that it doesn’t all get spent on problems in Western countries or problems which already receive lots of attention. 
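
As a rough illustration of the second suggestion, here is a minimal sketch of how a pooled fund might tally donor votes over grant applications. The mechanism (simple approval voting with a majority threshold) and all project names are hypothetical choices made for the example, not a description of any existing EA Funds process:

```python
from collections import Counter

# Hypothetical applications to the pooled fund.
applications = [
    "corporate chicken welfare campaign",
    "malaria net distribution",
    "biosecurity fellowship",
]

# Each eligible donor (however eligibility is gated - karma, EAG acceptance, a
# membership fee) approves any subset of the applications.
ballots = [
    {"corporate chicken welfare campaign", "malaria net distribution"},
    {"malaria net distribution"},
    {"malaria net distribution", "biosecurity fellowship"},
]

approvals = Counter()
for ballot in ballots:
    for project in ballot:
        approvals[project] += 1

# Fund every application approved by more than half the voters, budget permitting.
quorum = len(ballots) / 2
funded = [project for project in applications if approvals[project] > quorum]
print(funded)  # ['malaria net distribution']
```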

EA Interventions

EAs underestimate the tractability of party politics

EA has accepted the “politics is the mind-killer” dogma from the rationalist community too strongly and has become too discouraged by Carrick Flynn’s run for office.

What’s intractable and of low expected value is focusing on making your side win and throwing lots of money at things.

But if you’re in Western Europe, interested in party politics and extroverted, there’s a good chance that this is the highest EV career for you, especially when you’re trying to co-ordinate with other EAs and avoid single-player thinking. If you’re right leaning, then I’m extra confident in this, because most smart, educated young people join centre-left parties and most EAs are centre-left. Importantly, you can do this in a voluntary capacity alongside work or studies. Network, seek to build coalitions supporting causes, be patient and be on the lookout for political opportunities to promote overseas development assistance, climate change mitigation, pro-LMIC foreign policy and trade policy, investments for pandemic preparedness and farmed animal welfare.

EDIT 2 at 108 votes: 

Concrete suggestion: Regular EAG and EAGx talks, workshops and meetups focused on party politics.

EAs underestimate the expected value of advocacy, campaigning and protest

I’m just going to link https://www.socialchangelab.org here.

EDIT 2 at 108 votes: 


Concrete suggestion: Regular EAG and EAGx talks, workshops and meetups focused on advocacy, campaigning and protest.

EAs undervalue distributing power

Libertarian socialists see distributing power as an end in itself. Most EAs do not. But EAs underestimate the instrumental value of distributing power and making it easier for people to advocate for improvements to their welfare themselves over the long-term, instead of having to rely on charity. 

An example of an intervention that is currently looking tractable is campaigning for the African Union to be given a seat at the G20. In the long-term, EAs could campaign for disenfranchised groups such as foreigners, children, future generations and animals to be given political representation in democracies.

EA Philosophy

EAs depoliticise distance in charitable giving

EA philosophy suggests that rich Westerners don’t donate to charities abroad because they are further away, but glosses over key factors other than distance - nationalism, localism and racism.

Many EAs don’t place much value on the distribution of utility

This one is more of a straightforward disagreement: libertarian socialists inherently care about how power and welfare are distributed across individuals, while many EAs do not. That being said, EA does seem to instrumentally value equality in the pursuit of maximising welfare.

EA undervalues rights

Libertarian socialists place inherent value on strengthening rights and distributing power, while EAs only value this instrumentally. But I think EAs underestimate the instrumental value of strengthening rights too. Valuing rights more would probably make the expected value of political campaigns to influence legislation look higher, especially in the context of farmed animal welfare and international development.

EAs underestimate uncertainty in cause prioritisation and grantmaking

I discuss uncertainty in EA in more detail here but in a different context.

EA relies on highly-uncertain, imprecise, vulnerable-to-motivated-reasoning expected value (EV) calculations. These calculations often use probabilities derived from belief which aren’t based on empirical evidence. Taking 80,000 Hours’ views on uncertainty literally, they think it is plausible for land use reform to be as pressing as biosecurity.

Work in the randomista development wing of EA and prioritisation between interventions in this area is highly empirical, able to use high quality evidence and unusually resistant to irrationality. Since this is the wing of EA that initially draws many EAs to the movement, I think it can give them the misconception that decision making across EA is also highly empirical and unusually resistant to irrationality, when this is not true.

I think the underestimation of uncertainty in decision making may be why EAs are overconfident in decision making, undervalue transparency and the distribution of power within EA, and may be why EAs underestimate the effects of self-serving biases.

EA expectations of extinction may mean that EAs undervalue long-term benefits of interventions

Many of the interventions I have proposed are intended to generate long-term benefits to EA while imposing short-term costs, because the risk of severe misallocations of resources increases over time. I think the expectation among many EAs that AGI will cause extinction in the next 20-30 years leads them to value these interventions less than I do.

Comments

If this seems like a bad idea, try the reversal test: do you think EA orgs should become more integrated?

For what it's worth, this does seem good to me: even the largest EA organizations are tiny compared to for-profit companies, and we miss out on a bunch of economies of scale as a result. There are reasonable criticisms to be made of how EVF (my employer) has done fiscal sponsorship (e.g. perhaps more stuff should have been based in the US instead of the UK) but I would still encourage any new organization to get fiscal sponsorship (from someone besides EVF, if they want) instead of being independent.

I'd be interested to hear your response to my most recent questions about alternatives here, as well as Brendon's response.

My current sense from that thread is still that having to set up an org without fiscal sponsorship is a cost that could be substantially reduced by being provided as a service, and that unifying the organisations has large and subtle costs that aren't being sufficiently acknowledged.

Thanks for sharing these!

Regardless of whether my proposed interventions work or fail, there would be no evidence for it. 

I guess this is maybe true for some strict definition of "evidence", but I would find these suggestions much more helpful if they came with:

  1. More concreteness. E.g. what things do you think organizations should be transparent about? Is it just that you think grantmakers should publish grant writeups more quickly?
  2. Actual calculations of trade-offs. E.g. how many additional hours of labor would it take to be transparent in the way that you suggest? What are the actual odds that this transparency results in getting suggestions that improve the organization? Can you make a BOTEC which quantifies the benefits here?
  3. Specific examples of how these suggestions would have been helpful in the past. E.g. are there historical instances of corruption that your transparency proposal would have caught? How valuable would this have been?

Right now I can't even tell[1] if I'm one of the people you're criticizing (maybe my work is as transparent as you want, I don't know) much less whether I agree with your suggestions.

(Note: it's obviously way more expensive to do what I suggest than to just briefly list your suggestions. But my guess is that it would be substantially more impactful to go into one of these in detail than to give this current high-level list.)


Asking individuals to quantify such benefits seems like a de facto way of not actually considering them - individuals very rarely have time to do a thorough job, and any work they publish will inevitably be speculative and easy enough to criticise on the margins that orgs that don't want to change their behaviour will be able to find a reason not to.

Since EA orgs' lack of transparency is a widespread concern among EAs, it seems a reasonable use of resources for EA orgs that don't think it's worth it to produce a one-off (or perhaps once-every-n-years) report giving their own reasons as to why it isn't. Then the community as a whole can discuss the report, and if the sentiment is broadly positive the org can confidently go on as they are, and if there's a lot of pushback on it, a) the org might choose to listen  and change its practices and b) if they don't, it will at least be more evident that they've explicitly chosen not to heed the community's views, which I'd hope would guide them towards more caution, and gradually separate the visionaries from the motivated reasoners.

Another option would be a one-time or periodic "EA Governance and Transparency Red Teaming Contest" with volunteer judges who were not affiliated with the large meta organizations. I do not think a six-figure prize fund would be necessary; to be honest, a major purpose of there being a prize fund for this contest would be to credibly signal to would-be writers that the organizations are seriously interested in ideas about improving governance and transparency. 

To build off of what you said, it's really hard for people to feel motivated to do even a moderately thorough job on a proposal or a cost-effectiveness analysis without a credible signal that there is a sufficient likelihood that the organization(s) in question will actually be responsive to a proposal/analysis. Right now, it would feel like sending an unsolicited grant proposal to an organization that doesn't list your cause area as one of its interests and has not historically funded in that area. At least in that example, the author potentially stands to gain from a grant acceptance, while the author of a governance/transparency proposal benefits no more than any other member of the community.

I mean, I don't even know what the claim is that I'm supposed to produce a report giving my own reasons for. I guess the answer is "nothing."

(Which obviously is fine! Not all forum posts need to be targeted at getting me to change my behavior. In fact, almost none are. But I thought I might have been in the target audience, hence the comment.)

I think the suggestion is something like this (I am elaborating a bit)-- certain organizations should consider producing a report that explains:

(1) How their organization displays good governance, accountability, and transparency ("GAT");

(2) Why the organization believes its current level of GAT is sufficient under the circumstances; and possibly

(3) Why the organization believes that future improvements in GAT that might be considered would not be cost-effective / prudent / advisable.

Of course, if the organization thought it should improve its GAT, it could say that instead.

(3) would probably need a crowdsourced list of ideas and a poll on which ones the community was most interested in.

Thanks for your comment!

By transparency, I mean publishing explanations behind important decisions much more regularly and quickly to the EA Forum. This is mostly relevant for grantmakers and grantmaking organisations and isn’t super relevant for your role.

But for example, if you decided to make a big change to the karma system on the EA Forum, I would like you to publish an explanation of your decision for the sake of transparency.

Agree that this would be better but as you say it is obviously very time consuming. I (ironically) don’t really have capacity soon to do this, but would encourage others to have a go at some BOTECs related to this post.

I’m not aware of any examples of outright corruption in EA.

I think an example of the kind of decision for which reasoning should be published on the EA Forum is when 80,000 Hours starts listing multiple jobs at a new organisation on its job board. Doing this for OpenAI might have led to earlier scrutiny.

Another example might be the Wytham Abbey purchase but I’m not sure how much time had passed between the purchase and the discussion on this forum.

I think a great example of transparency was this post (https://forum.effectivealtruism.org/posts/4JF39v548SETuMewp/?commentId=R2Axqfvbyq89fSRYQ) from the EAG organisers explaining why they’re making a set of changes to EAG, allowing scrutiny from the EA community.

(This meta-analysis (https://journals.sagepub.com/doi/full/10.1177/00208523211033236) suggests that transparency has a small effect on government corruption, but I would not put too much weight on the results since effects seem to be context specific and I’m not sure how much we can extrapolate from governments to a network of organisations. )

While I don't think it would be that difficult to write up a BOTEC on the costs side (e.g., here are some ways EA Funds could be more transparent and I estimate the cost of the package as $50K over five years), quantifying benefits for this kind of thing seems awfully difficult. For instance, I could point to some posts on the forum as evidence that some people are bothered by what they perceive as inadequate transparency, and might be reasonably expected to donate less, not apply / get disillusioned, etc. My sense is that this is true of quite a bit in the meta space, and I am not sure it is reasonable to expect transparency spending to be quantified cleanly if similar spending isn't held to the same standard. 

When you say that "EA philosophy . . . glosses over key factors other than distance," do you mean that EAs do not believe that "nationalism, localism and racism" are meaningful factors in explaining mainstream Western charitable priorities, or that EAs do not spend enough time talking about those factors? I would be surprised if you polled a number of EAs and any significant number disagreed with this belief.

My take is that telling potential donors that they have been doing charity in a nationalist/racist manner is much more likely to get them to stop listening to you than it is to change their practices -- and much, if not most, of the critique of mainstream Western charitable priorities is geared toward outsiders. So it may be more instrumentally effective to lead with and focus on a rationale based on a universal cognitive bias that potential donors can accept without having to label their past charitable behavior as racist/nationalist.

Do you think this may be an instance where the difference lies mainly in considerations of inherent vs. instrumental value? Or do you think EAs tend to get the tactical approach here wrong on instrumental grounds alone?

Thanks for your comment!

In terms of things EAs actually believe, I think EAs overestimate the contribution of cognitive biases relating to distance, and underestimate the contributions of nationalism, localism and racism, to charity priorities in rich countries.

Luckily, I don’t think my disagreement with most EAs here is super action-relevant, other than that I think EAs who are interested in promoting broad social values should consider promoting internationalism.

In terms of strategy, I agree that it mostly makes sense to emphasise factors that will offend potential donors less, such as cognitive biases relating to distance (and maybe it is this strategy that causes EAs to overestimate the importance of this factor compared to other factors).

Although I think when pitching effective giving to people we know are left wing or progressives, it might be more effective to emphasise the nationalism and racism elements, since I expect left-wingers to be keen to position themselves against these ideologies.

Hello OP,

Thanks for creating this write-up. On the democratizing of grantmaking part of this post: one of the reasons that I gave my full internship pay of 14k as a rising senior to the EA Animal Welfare Fund was that I didn't have to put any effort into thinking about which organizations/opportunities were effective. I really valued that I could outsource this labor to Lewis Bollard and other grantmakers that have a good track record. 
If I feel they are not doing a good job I will switch my donations to ACE though >:3

Some model of liquid democracy could help with that kind of thing. In short, it allows voters to either cast their vote themselves or delegate it to others (who can delegate their pool further, and so on).
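
A minimal sketch of the delegation mechanism described above (the names, votes and delegation chains are all hypothetical):

```python
from collections import Counter

# Minimal sketch of liquid democracy: each voter either votes directly or delegates
# to another voter; delegations chain until they reach a direct vote.
# All names and choices here are hypothetical.

direct_votes = {"alice": "fund project X", "bob": "fund project Y"}
delegations = {"carol": "alice", "dave": "carol", "erin": "bob"}

def resolve(voter, seen=None):
    """Follow the delegation chain from `voter` to a direct vote (None on a cycle or dead end)."""
    seen = seen or set()
    if voter in direct_votes:
        return direct_votes[voter]
    if voter in seen or voter not in delegations:
        return None  # cycle or missing delegate: treat as an abstention
    seen.add(voter)
    return resolve(delegations[voter], seen)

voters = ["alice", "bob", "carol", "dave", "erin"]
tally = Counter(resolve(v) for v in voters)
print(tally)  # Counter({'fund project X': 3, 'fund project Y': 2})
```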

That makes sense. I think it would be best to retain this option alongside the ideas that I propose.

I would love to see more EAs succeed as entrepreneurs so that we're less reliant on Open Philanthropy, not only so that we have more money, but also to balance out Open Philanthropy's influence.

I would recommend that individuals are only allowed to hold leadership, board or governance positions in one EA organisation each.

I would heavily bet on this leading to worse governance, as it would mean you couldn't recruit to your board someone who was demonstrating their competence by leading an EA organisation well. And having such a person on your board could be of great assistance, especially for a new org.

One specific thing I appreciate about GiveWell is the policy that no one donor can fund more than 20% of their operating expenses. I think there is a particular need for certain work to be broadly funded; generally, that work is at places like GiveWell, RP, ACE, etc., and will have a significant influence on what else gets funded / gets done.

There's probably a happy medium between a hard one-organization limit and  having no rules on multiple board service / board overlap.

I'm actually not convinced that we need a policy here, especially since people have very limited time and I suspect if someone is on like 12 different boards then they will have very little influence at each org because they're spreading their time so thin. But I don’t have board experience, so I could be wrong here.

We consider CEOs of large companies capable of steering the whole company for better or worse, and they can have far more staff and decision-making requirements than the whole EA nonprofit world combined.

If someone is on so many boards that they have minimal influence at each, that is an independent reason to limit their service and ask someone else to serve. I'm really impressed, for instance, by RP's open call for board member applications.

I'm more concerned about someone being on the board / in leadership of 3-4 particularly important organizations than in 12.

To be fair to OpenPhil, it's common to have a major donor in a board seat as a means of providing transparency/accountability to that donor...

Thanks for your comment!

I would have thought that board experience at a non-EA org would be very similar to board experience at an EA org, but interested to hear why this may not be the case.

we have less money

I think you mean "more money"

Thanks, corrected. Reasons why I shouldn't multitask.

Strongly upvoted, but I specifically disagree with the suggestions on which group of people should be able to vote on things:

  1. "High karma users" selects, at minimum, for people with lots of time to spend on the internet. This means it will be more accessible to e.g. rich people. It probably also selects for people who agree with most of the popular opinions in EA (although I might be a counterexample), which goes against diversity of thought.

  2. "People admitted to EAG" lets the funders choose the people who'll vote about their decisions.

In line with what (I think) happens in EA Germany and EA Czech Republic, I'd propose a much simpler criterion - membership in an organisation which requires a small yearly fee.

https://decidim.org/ seems like a natural fit for what the OP and you are suggesting. The app and the org can help enable more participation when it comes to projects and more transparency when it comes to implementation and funding, and you can gate certain features through a membership fee as well.

Thanks for writing this!

EA relies on highly-uncertain, imprecise, vulnerable-to-motivated-reasoning expected value (EV) calculations. These calculations often use probabilities derived from belief which aren’t based on empirical evidence.

Having now done a few explicit cost-effectiveness analyses myself, I can see how this point is quite important. It is easy to underestimate the uncertainty of inputs which are not much based on empirical evidence. However:

  • I think it motivates (even) more efforts to assess the effect of interventions, not moving away from EV calculations. 
  • I would also say that it applies not only to explicit EV calculations, but also (or even more) to other tools/mechanisms used in decision-making. 

Agree, I don’t advocate for moving away from EV calculations, just for improving the way we use them!

You mention a few times that EV calculations are susceptible to motivated reasoning. But this conflicts with my understanding, which is that EV calculations are useful partly (largely) because they help to prevent motivated reasoning from guiding our decisions too heavily 

(e.g. You can imagine a situation where charity Y performs an intervention that is more cost effective than charity X. By following on EV calculation, one might switch their donation from charity X to charity Y, despite that charity X sounds intuitively better.)

Maybe you could include some examples/citations of where you think this "EV motivated reasoning" has occurred. Otherwise I find it hard to believe that EV calculations are worse than the alternative, from a "susceptible-to-motivated-reasoning" perspective (here, the alternative is not using EV calculations).

Thanks for your comment.

I don't think EV calculations directly guard against motivated reasoning.

I think the main benefit of EV calculations is that they allow more precise comparison between interventions (compared to say, just calling many interventions 'good').

However, many EV calculations involve probabilities and estimates derived from belief rather than from empirical evidence. These probabilities and estimates are highly prone to motivated reasoning and cognitive biases. 

For example, if I was to calculate the EV of an EA funding org investing in more transparency, I might need to estimate a percentage of grants which were approved but ideally should not have been. As someone who has a strong prior in favour of transparency, I might estimate this to be much higher than someone who has a strong prior against transparency. This could have a large effect on my calculated EV. 
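
A minimal sketch of how much that single belief-derived input can swing the result; the formula and every number below are hypothetical illustrations, not figures from any actual grantmaker:

```python
# Toy model: the benefit of extra transparency is the misallocated funding that
# added scrutiny would redirect, minus the cost of producing the write-ups.
# All numbers are hypothetical.

def ev_of_transparency(grant_budget, share_misallocated, share_recoverable, cost):
    benefit = grant_budget * share_misallocated * share_recoverable
    return benefit - cost

budget = 10_000_000  # hypothetical annual grant budget (dollars)
cost = 50_000        # hypothetical annual cost of extra write-ups (dollars)

# A strong prior in favour of transparency (5% of grants misallocated)...
print(ev_of_transparency(budget, share_misallocated=0.05, share_recoverable=0.5, cost=cost))   # 200000.0
# ...versus a strong prior against it (0.5% misallocated): the sign of the EV flips.
print(ev_of_transparency(budget, share_misallocated=0.005, share_recoverable=0.5, cost=cost))  # -25000.0
```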

That being said, there are certainly EV calculations where all the inputs can be pegged to empirical evidence, especially in the cause area of international development and global health. These EV calculations are less prone to motivated reasoning, but motivated reasoning remains nonetheless, because where there is empirical evidence available from different sources, motivated reasoning may affect the source used. (Guy Raveh points out some other ways that motivated reasoning can affect these calculations too)

With sufficient transparency, I think EV calculations can help reduce motivated reasoning since people can debate the inputs into the EV calculation, allowing the probabilities and estimates derived from belief to be refined, which may make them more accurate than before.

I agree that EV calculations are less susceptible to motivated reasoning than alternative approaches, but I think they are very susceptible nonetheless, which is why I think we should make certain changes to how they are used and implement stronger safeguards against motivated reasoning.

Here are some places where motivated reasoning can come in. It's past 2am here so I'll only give an example to some.

  1. In which interventions you choose to compare or to ignore, or which aspects you choose to include in your assessment.

  2. In how you estimate the consequences of your choices. Often, EV calculations in EA rely on guesses or deference to prediction markets (or even play-money ones like Metaculus). These all have as strong biases as you'd find anywhere else. As an explicit example, some Longtermists like Bostrom rely on figures for how many people may live in the future (10^(a lot), allegedly) and these figures are almost purely fictional.

  3. In how you choose to apply EV-maximisation reasoning in situations where it's unclear if that's the right thing to do. For example, if you're not entirely risk-neutral, it only makes sense to try to maximise the expected value of decisions if you know there is a large number of independent ones. But this is not what we do: (a) we rank charities in ways that make donation decisions highly correlated with each other; (b) we treat sequential decisions as if they were independent even when that's not true; (c) we use EV reasoning on big one-off decisions (like double-or-nothing experiments). (A minimal sketch of point (c) follows below.)
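
To make point (c) concrete, here is a minimal sketch (the numbers are purely illustrative, not drawn from the comment) of how maximising EV on a single double-or-nothing bet can conflict with a risk-averse utility function:

```python
import math

# Hypothetical one-off double-or-nothing bet on an entire endowment: a 51% chance
# of doubling it, otherwise it falls to a small residual floor (so log utility is defined).
endowment = 100.0
p_win = 0.51
floor = 1.0

ev_bet = p_win * (2 * endowment) + (1 - p_win) * floor
print(ev_bet)  # 102.49 > 100, so pure EV-maximisation says take the bet

# A risk-averse (log-utility) agent facing this single, non-repeated decision refuses it:
eu_bet = p_win * math.log(2 * endowment) + (1 - p_win) * math.log(floor)
eu_keep = math.log(endowment)
print(round(eu_bet, 2), round(eu_keep, 2))  # 2.7 vs 4.61: keeping the endowment is preferred
```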

Another potential intervention could be to split up existing organisations into more organisations. I can't think of an organisation where this would be obviously suitable so am not advocating for this happening right now, but I think it would make sense for organisations to split as they grow further in the future.

I explicitly argued for this here, albeit mainly on the grounds that it gives better fidelity of feedback, thus allowing for better competition. My current views, tentatively:

  • CEA has substantially too many (and too unrelated) responsibilities. 
  • I'm less clear about 80k and Founders Pledge, who have the next widest remits, but who each have a clearer unifying theme running through their projects. 
  • I'm also very worried about 5 trustees having theoretical authority over all the orgs within Effective Ventures, which collectively constitute basically the whole 'movement' part of 'the EA movement'. There's a case that the legal upside of this is so high that it outweighs the risks, which I discussed in a thread starting here with Peter Wildeford, but after his most recent response as well as Brendon Wong's comment, I still have the impression that this case is not very robust, and that healthier alternative paths exist.

In each of the above discussions  I've generally been quite downvoted, so perhaps I should take that as evidence that my views are wrong - but in the subsequent discussions I've never felt like the points I raised had been adequately resolved.

There are also potentially other options to manage the five-trustee problem. 

If EVF is basically supposed to be a fiscal sponsor-like entity that makes the trains run on time and supports (rather than dictates to) its constituent organizations, it's not clear why people like Will MacAskill need to be on the board at all (or why it is the highest and best use of their time). The problem could be mitigated by expanding the board to seven, nine or eleven members and by choosing people who do not have loads of "soft power" in the community for most of the seats.

I am not sure about UK law, but on the US end, you can have a nonprofit corporation whose board is elected by members (and you can define members however you want, it doesn't have to be open enrollment). Those members could even be other EA organizations if desired.  So if the costs of splitting up EVF were thought too high, adding a layer of members (maybe 50-100?) whose sole purpose would basically be to re-elect (or remove) board members as necessary would at least provide some protection against the concentration of control.

Your suggestion of having multiple individuals or groups independently calculate the expected value of an intervention is an interesting one. It could increase objectivity, reduce the influence of motivated reasoning or self-serving biases, and leave us with not only better judgments but also several times more research and considerations. 

Do you know of any EA organizations that are considering it or any prior debate about this idea in the forum?

It would be interesting to see if this method would lead to more accurate expected value calculations in practice. Additionally, I am curious about how the process of comparing results and coming to a consensus would be handled in this approach.

Interesting post! Broadly I agree on most of the stuff in the meta-section, which I think has been under-researched and under-explained by the orgs in question, and disagree on the interventions, which I think have been extremely well researched and explained. 

But importantly, the function of transparency is primarily as a long-term safeguard and disincentive against these things. *Detecting* poor reasoning, bias and corruption is only a secondary function of transparency. 

I think this is a really important point, which seems to have been overlooked by many of the responses to recent lack-of-transparency criticisms (cf. e.g. Owen's explanation of Wytham Abbey, which says that they didn't tell anyone because 'I'm not a fan of trying to create hype' and 'it felt a bit gauche', which sounds like they gave basically no thought to the precedents and incentives that not announcing it was establishing).

EA grant making decisions are too technocratic

Inasmuch as this is a problem, ironically, it seems like FTX Foundation were the organisation doing the most to redress this via their regranting program.

That said, I think a focus on technocracy per se is misguided. As I understand it, Tuna and Moskovitz have completely relinquished control and sometimes knowledge of the money they donated. If there is a concern to be had with OP and formerly FTX, it's that the people they relinquished control to are a small number of closely networked individuals who a) tend to be involved with multiple EA orgs and b) tend not to worry about conflict of interest (eg as I understand it Nick Beckstead has simultaneously been a trustee for Effective Ventures and a fund manager for OP directing large amounts of money to EV subsidiaries).

restricted to users with a certain amount of karma

I would like to see a more transparent alternative to EA Funds, but karma is a really bad proxy for contribution value - the highest-karma people are, almost inevitably, those with the biggest network of other high-karma users to strong-upvote them. Plus newer posts get upvoted far more than older posts, both because there are more users and because of general karma inflation. 

Perhaps a more substantial issue with the alternatives you propose is that, assuming the money would go to organisations, it would be very difficult for them to work with such uncertain income sources. Small orgs benefit greatly from a clear conversation about what their funders' expectations of them are, and what milestones would be necessary/sufficient to secure them funding. Without such predictability, it's very difficult for them to hire staff, which often (especially for meta-orgs) constitutes a majority of their expenses. 

Worth mentioning also that EA Funds is a tiny pool of money relative to OpenPhil and (at least on paper) the Founders Pledge commitments. 

EAs underestimate the tractability of party politics... EAs underestimate the expected value of advocacy, campaigning and protest

I'm not sure either of these claims are true. 80k have a longstanding problem profile on the value of being a civil servant, which has been extremely influential in the UK at least (I'm not sure why there specifically), to the extent that it's been one of the most popular career paths for UK-based EAs. And many reports have strongly recommended giving to advocacy organisations (off the top of my head, ACE advocate giving to the Humane League campaigns, FP advocate Clean Air Task Force and formerly Coalition for Rainforest Nations).

That said, I still think Scott's warning about systemic change is a strong argument for caution.
