(I realised after writing this that the metaphor between brains and epistemic communities is less fruitful than I make it seem, but it's still a helpful frame for understanding the differences, so I'm posting it here. ^^)
TL;DR: I think people should consider searching for giving opportunities in their networks, because a community that efficiently capitalises on insider information may end up doing more efficient and more varied research. There are, as you would expect, both problems and advantages to this, but it definitely seems good to encourage on the margin.
Some reasons to prefer decentralised funding and insider trading
I think people are too worried about making their donations appear justifiable to others. And what people expect will appear justifiable to others is based on the most visibly widespread evidence they can think of.[1] It just so happens that this is also the basket of information everyone else bases their opinions on. The net effect is that far less information gets considered in total.
Even so, there are very good reasons to defer to consensus among people who know more, not act unilaterally, and be epistemically humble. I'm not arguing that we shouldn't take these considerations into account. What I'm trying to say is that even after you've given them adequate consideration, there are separate social reasons that could make it tempting to defer, and we should keep this distinction in mind so we don't handicap ourselves just to fit in.
Consider the community from a bird's eye perspective for a moment. Imagine zooming out, and seeing EA as a single organism. Information goes in, and causal consequences go out. Now, what happens when you make most of the little humanoid neurons mimic their neighbours in proportion to how many neighbours they have doing the same thing?
What you end up with is a Matthew effect not only for ideas, but also for the bits of information that get promoted to public consciousness. Imagine ripples of information flowing in only to be suppressed at the periphery, way before they've had a chance to be adequately processed. Bits of information accumulate trust in proportion to how much trust they already have, and there are no well-coordinated checks that can reliably abort a cascade past a point.
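This mimicry dynamic can be made concrete with a toy simulation (entirely illustrative; the parameters and setup are my own invention, not from any established model): each agent holds a private signal, but with some probability repeats whatever is already the most-reported signal instead.

```python
from collections import Counter
import random

def modal_share(n_agents=1000, n_signals=20, conformity=0.9, seed=0):
    """Toy cascade: each agent privately observes one of `n_signals`, but
    with probability `conformity` publicly repeats whichever signal is
    currently the most-reported one. Returns the share of public reports
    captured by the single most popular signal."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(n_agents):
        private = rng.randrange(n_signals)
        if counts and rng.random() < conformity:
            public = counts.most_common(1)[0][0]  # mimic the current favourite
        else:
            public = private
        counts[public] += 1
    return counts.most_common(1)[0][1] / n_agents

# With no mimicry, reports spread across all the signals; with heavy
# mimicry, whichever signal got an early lead captures most of the
# public conversation, and the rest is suppressed at the periphery.
print(modal_share(conformity=0.0), modal_share(conformity=0.9))
```

The point of the sketch is just that the runaway concentration (the Matthew effect) comes from the copying rule alone, not from any signal actually being more informative.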
To be clear, this isn't how the brain works. The brain is designed very meticulously to ensure that only the most surprising information gets promoted to universal recognition ("consciousness"). The signals that can already be predicted by established paradigms are suppressed, and novel information gets passed along with priority.[2] While it doesn't work perfectly for all things, consider just the fact that our entire perceptual field gets replaced instantly every time we turn our heads.
And because neurons have been harshly optimised for their collective performance, they show a remarkable level of competitive coordination aimed at making sure there are no informational short-circuits or redundancies.
Returning to the societal perspective again, what would it look like if the EA community were arranged in a similar fashion?
I think it would be a community optimised for the early detection and transmission of market-moving information--which in a finance context refers to information that would cause any reasonable investor to immediately make a decision upon hearing it. In the case where, for example, someone invests in a company because they're friends with the CEO and received private information, it's called "insider trading" and is illegal in some countries.
But it's not illegal for altruistic giving! Funding decisions based on highly valuable information only you have access to is precisely the thing we'd want to see happening.
If, say, you have a friend who's trying to get time off from work in order to start a project, but no one's willing to fund them because they're a weird-but-brilliant dropout with no credentials, you may have insider information about their trustworthiness. That kind of information doesn't transmit very readily, so if we insist on centralised funding mechanisms, we're unknowingly losing out on all those insider trading opportunities.
Where the architecture of the brain efficiently promotes the most novel information to consciousness for processing, EA has the problem where unusual information doesn't even pass the first layer.
(I should probably mention that there are obviously biases that come into play when evaluating people you're close to, and those could easily interfere with good judgment. It's a crucial consideration. I'm mainly presenting the case for decentralisation here, since centralisation is the default, so I urge you to keep some skepticism in mind.)
There is no way around having to make trade-offs here. One reason to prefer a central team of highly experienced grant-makers to do most of the funding is that they're likely to be better at evaluating impact opportunities. But this needn't matter much if they're bottlenecked by bandwidth--both in terms of having less information reach them and in terms of having less time available to analyse what does come through.[3]
On the other hand, if you believe that most of the relevant market-moving information in EA is already being captured by relevant funding bodies, then their ability to separate the wheat from the chaff may be the dominating consideration.
While I think the above considerations make a strong case for encouraging people to look for giving opportunities in their own networks, I think they apply with greater force to adopting a model like impact markets.
They're a sort of compromise between centralised and decentralised funding. The idea is that everyone has an incentive to fund individuals or projects where they believe they have insider information indicating that the project will prove impactful later on. If the projects they opportunistically funded at an early stage do end up producing a lot of impact, a central funding body rewards the maverick funder by "purchasing the impact" second-hand.
Once a system like that is up and running, people can reliably expect the retroactive funders to make it worth their while to search for promising projects. And when people are incentivised to locate and fund projects at their earliest bottlenecks, the community could end up capitalising on a lot more (insider) information than would be possible if everything had to be evaluated centrally.
(There are of course, more complexities to this, and you can check out the previous discussions on the forum.)
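The incentive logic of retroactive purchasing can be sketched in a few lines (all numbers below are hypothetical, chosen only to make the arithmetic visible):

```python
def maverick_ev(cost, p_success, retro_price):
    """Expected value, to an early funder, of funding a project that a
    retroactive funder will later 'buy' at retro_price if it succeeds.
    Purely a toy model: one binary outcome, risk-neutral funder."""
    return p_success * retro_price - cost

# Without a retro buyer, the early funder simply eats the cost. With one,
# even a 25% shot at a 5x retroactive purchase is worth taking:
print(maverick_ev(cost=10_000, p_success=0.25, retro_price=50_000))  # 2500.0
```

The design choice being illustrated: the retroactive funder only ever pays for realised impact, while the search for promising projects is outsourced to whoever holds the insider information.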
This doesn't necessarily mean that people defer to the most popular beliefs, but rather that even if they do their own thinking, they're still reluctant to use information that other people don't have access to, so it amounts to nearly the same thing.
This is sometimes called predictive processing. Sensory information comes in and gets passed along through increasingly conceptual layers. Higher-level layers are successively trying to anticipate the information coming in from below, and if they succeed, they just aren't interested in passing it along.
(Imagine if it were the other way around, and neurons were increasingly shy to pass along information in proportion to how confused or surprised they were. What a brain that would be!)
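As a toy illustration of that gating (a loose caricature of predictive processing, not a model of real neurons; the function and threshold are made up for illustration): a layer forwards only the prediction error, and only when it is surprising enough.

```python
def pass_up(signal, prediction, threshold=1.0):
    """Toy predictive-processing gate: a layer forwards only the part of
    the incoming signal its prediction failed to anticipate, and only if
    that prediction error is big enough to count as 'surprising'."""
    error = signal - prediction
    return error if abs(error) > threshold else None

print(pass_up(signal=5.0, prediction=4.8))  # well-predicted: suppressed
print(pass_up(signal=5.0, prediction=1.0))  # surprising: error passed up
```

The contrast with the funding case is that here the *unexpected* signal is the one that gets priority, rather than the one that gets filtered out.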
As an extreme example of how bad this can get, an Australian study of medical research funding noted that grant proposals are "between 80 and 120 pages long and panel members are expected to read and rank between 50 and 100 proposals. It is optimistic to expect accurate judgements in this sea of excessive information." -- (Herbert et al., 2013)
Luckily it's nowhere near as bad for EA research, but the Australian case is a clear example of how a funding process can end up severely misaligned with the goal of producing good research.
Marcus Daniell appreciation note
@Marcus Daniell, cofounder of High Impact Athletes, came back from knee surgery and is donating half of his prize money this year. He projects raising $100,000. Through a partnership with Momentum, people can pledge to donate for each point he gets; he has raised $28,000 through this so far. It's cool to see this, and I'm wishing him luck for his final year of professional play!
Effective giving quick take for giving season
This is quite half-baked because my social circle doesn’t contain many earning-to-give folks, but I have a feeling that when EA suddenly came into a lot more funding and the word on the street was that we were “talent constrained, not funding constrained”, some people earning to give ended up pretty jerked around, or at least feeling that way. They may have picked jobs and life plans based on the earning-to-give model, where it would be years before the plans came to fruition, and in the meantime they lost status and attention from their community. There might have been an additional dynamic where the people who took the advice most seriously ended up deeply embedded in other professional communities, so they heard about the switch later or found it harder to reconnect with the community and the new priorities.
I really don’t have an overall view on how bad all of this was, or if anyone should have done anything differently, but I do have a sense that EA has a bit of a feature of jerking people around like this, where priorities and advice change faster than the advice can be fully acted on. The world and the right priorities really do change, though; I’m not sure what should be done except to be clearer about all this, but I suspect it’s hard to properly convey “this seems like the absolute best thing in the world to do, also next year my view could be that it’s basically useless” even if you use those exact words. And maybe people have done this, or maybe it’s worth trying harder. Another approach would be something like insurance.
A frame I’ve been more interested in lately (definitely not original to me) is that earning to give is a kind of resilience / robustness-add for EA, where more donors just means better ability to withstand crazy events, even if in most worlds the small donors aren’t adding much in the way of impact. Not clear that that nets out, but “good in case of tail risk” seems like an important aspect.
A more
The Effective Ventures Foundation UK’s Full Accounts for Fiscal Year 2022 have been released via the UK Companies House filings (August 30 2023 entry - it won't let me direct-link the PDF).
* Important to note that as of June 2022 “EV UK is no longer the sole member of EV US and now operate as separate organizations but coordinate per an affiliation agreement (p11).”
* It’s noted that Open Philanthropy was, for the 2021/2022 fiscal year, the primary funder for the organization (p8).
* EVF (UK&US) had consolidated income of just over £138 million (as of June 2022). That’s a ~£95 million increase from 2021.
* Consolidated expenses for 2022 were ~ £79 million - an increase of £56 million from 2021 (still p8).
* By end of fiscal year, consolidated net funds were just over £87 million, of which £45.7 million was unrestricted.
* (p10) outlines EVF’s approach to risk management and mentions the FTX collapse.
* A lot of boilerplate in this document, so you may want to skip ahead to page 26 for more specific breakdowns
* EVF made grants totaling ~£50 million (to institutions and 826 individuals), an almost £42 million increase in one year (p27)
* A list of grant breakdowns (p28); a lot of recognizable organizations listed, from AMF to BERI and ACE
* also a handful of orgs I do not recognize, or vague groupings like “other EA organizations” for almost £3 million
* Expenses details (p30): main programs are (1) Core Activities, (2) 80,000 Hours, (3) Forethought, and (4) Grant-making
* Expenses totaled £79 million for 2022 (a £65 million increase from 2021) which seems like a huge jump for just one year
* further expense details are on p31-33 and tentatively show a £23.3 million jump between 2021 and 2022 [but the table line items are NOT the same across 2021/2022, so it’s hard to tell - if anyone can break this down better, please do in the comments]
* We may now have a more accurate number of £1.6 million spent on marketing for What We Owe The Future (which i
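A quick arithmetic pass over the figures quoted above (all in £ millions, as given in the summary) shows the implied 2021 baselines; note that the two expense increases quoted in the bullets (£56m per the p8 summary, £65m in the later expenses bullet) imply different 2021 figures, so at least one of them is off:

```python
# Implied 2021 baselines from the figures quoted above (£ millions).
income_2022, income_rise = 138, 95
print(income_2022 - income_rise)  # implied 2021 income: 43

# The two expense increases quoted in the summary disagree:
expenses_2022 = 79
print(expenses_2022 - 56)  # implied 2021 expenses per the p8 figure: 23
print(expenses_2022 - 65)  # implied 2021 expenses per the later bullet: 14
```

Nothing here goes beyond the numbers already quoted; it just makes the inconsistency between the two stated increases explicit.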
I was finding it hard to keep track of all the different organizations posting about their marginal funding plans recently, so I made a simple spreadsheet:
https://docs.google.com/spreadsheets/d/19nZWRPsVd_-MzzA63_qWpuMDcoNFzj0wPi8fIXQVxhs/edit
Feel free to add any other EA orgs or fix errors or re-arrange everything or whatever.
I feel a bit confused about how much I should be donating.
1. On the one hand, there’s a straightforward case that donating could help many sentient beings to a greater degree than it helps me. On the other hand, donating 10% for me feels like it’s coming from a place of fitting in with the EA consensus, gaining a certain kind of status, and feeling good, rather than believing it’s the best thing for me to do.
2. I’m also confused about whether I’m already donating a substantial fraction of my income.
* I’m pretty confident that I’m taking at least a 10% pay-cut in my current role. If nothing else, my salary right now is not adjusted for inflation, which was ~8% last year, so it feels like I’m underpaid by at least that amount (though it’s possible they were overpaying me before). Many of my friends earn more than twice as much as I do, and I think if I negotiated hard for a 100% salary increase, the board would likely comply.
* So how much of my lost salary should I consider to be a donation? I think numbers between 0% and 100% are plausible. -50% also isn’t insane to me, as my salary does funge with other people’s donations to charities.
* One solution is that I should just negotiate for my salary from a non-altruistic perspective, and then decide how much I want to donate back to my organisation after that. This seems a bit inefficient though and I think we should be able to do better.
3. One reason I don’t donate ~50% of my salary is that I genuinely believe it’s more cost-effective for me to build runway than to donate right now. I quite like the idea of discussing this with someone I admire who strongly disagrees with me, and seeing if they come round to my position. It feels a bit too easy to find reasons not to give, and I’m very aware of my own selfishness in many parts of my life.
Welcome to the effective giving subforum!
This is a dedicated space for discussions about effective giving.
Get involved:
* ❤️ Donate via Giving What We Can
* Join the discussion
* Share where you're donating this giving season — and why!
* Start a new thread in this subforum[1]
* Ask questions about donation decisions
* Discuss strategic considerations about giving
* Explore other opportunities for donating or raising money
* Explore updated giving recommendations from GiveWell, Animal Charity Evaluators, Giving What We Can, and Happier Lives Institute
* Book an effective giving talk at your workplace
* Give the Forum team feedback about this beta subforum
* Reach us at forum@centreforeffectivealtruism.org or comment on this post.
1. ^
Threads can be casual! This will only appear in this subforum or for people who've joined the subforum.
The Happier Lives Institute have helped many people (including me) open their eyes to subjective wellbeing (SWB), and perhaps even updated us on its potential value. The recent heavy discussion (60+ comments) on their fundraising thread disheartened me. Although I agree with much of the criticism against them, the hammering they took felt rough at best, and perhaps even unfair. I'm not sure exactly why I felt this way, but here are a few ideas.
* (High certainty) HLI have openly published their research and ideas, posted almost everything on the forum, and engaged deeply with criticism, which is amazing - more than perhaps any other org I have seen. This may (uncertain) have hurt them more than it has helped them.
* (High certainty) When other orgs are criticised or asked questions, they often don't reply at all, or get surprisingly little criticism for what I and many EAs might consider poor epistemics and defensiveness in their posts (for charity, I'm not going to link to the handful I can think of). Why does HLI get such a hard time while others get a pass? Especially when HLI's funding is less than that of many orgs that have not been scrutinised as much.
* (Low certainty) The degree of scrutiny and analysis applied to some development orgs like HLI seems to exceed that applied to AI orgs, funding orgs, and community-building orgs. This scrutiny has been intense - more than one amazing statistician has picked apart their analysis. This expert-level scrutiny is fantastic; I just wish it could be applied to other orgs as well. Very few EA orgs (at least of those that have posted on the forum) produce full papers with publishable-level deep statistical analysis like HLI have at least attempted to do. Does there need to be a "scrutiny rebalancing" of sorts? I would rather other orgs get more scrutiny than development orgs get less.
Other orgs might see threads like the HLI funding thread hammering and compare it with other threads where orgs are criticised and don't eng