toonalfrink

This topic seems even more relevant today than in 2019, when I wrote it. At EAG London I saw an explosion of initiatives, and there is now even more money going unspent. I've also seen an increase in the attention EA is giving to this problem, both from leadership and on the forum.

Increase fidelity for better delegation

In 2021 I still like to frame this as a principal-agent problem.

First of all, there's the risk of Goodharting. One prominent grantmaker recounted to me that, back when a certain org was giving out grants, people would simply frame what they were already doing as EA, and then keep doing it anyway.

This is not actually an unsolved problem if you look elsewhere in the world. Just look at your average company. Sure, employees like to sugarcoat their work a bit, but we don't often see a total departure from what their boss wants from them. Why not?

Well, I recently applied for funding from the EA Meta Fund. The project was a bit wacky, so we gave it a 20% chance of being approved. The rejection e-mail contained a whopping ~0.3 bits of information: "No". It's like that popular meme where a guy asks his girlfriend what she wants to eat, makes a lot of guesses, and she just keeps saying "no" without giving him any hints.
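(For the curious, the bit count is just the self-information of the outcome: given our 20% prior of approval, a bare "no" was 80% expected, and so carried

$$-\log_2 P(\text{no}) = -\log_2 0.8 \approx 0.32 \text{ bits.}$$

A "yes" would have carried $-\log_2 0.2 \approx 2.3$ bits, and any actual feedback far more.)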

So how are we going to find out what grantmakers want from us, if not by the official route? Perhaps this is why it seems so common for people close to the grantmaker to get funded: they do get to have high-fidelity communication.

If this reads as cynicism, I'm sorry. For all I know, they've got perfect reasons for keeping me guessing. Perhaps they want me to generate a good model by myself, as a proof of competence? There's always a high-trust interpretation, and despite everything, I insist on mistake theory.

The subscription model

My current boss talks to me for about an hour, about once a month. This is where I tell him how my work is going. If I'm off the rails somehow, this is where he would tell me. If my work were to become a bad investment for him, this is where he would fire me.

I had a similar experience back when I was running RAISE. Near the end, there was one person from Berkeley who was funding us. About once a month, for about an hour, we would talk about whether it was a good idea to continue the funding. When he updated away from my project being a good investment, he discontinued it. This finally gave me the high-fidelity information I needed to decide to quit. If not for him, who knows how much longer I would have continued.

So if I were to attempt a practical solution: train more grantmakers. Allow grantmakers to make exploratory grants unilaterally, to speed things up. Fund applicants according to a subscription model: be especially liberal with the first grant, but only fund them for a short period. Talk to them after every period. Discontinue funding as soon as you stop believing in their project. Give them a cooldown period between projects so they don't leech off of you.

I have added a note to my RAISE post-mortem, which I'm cross-posting here:

Edit November 2021: there is now the Cambridge AGI Safety Fundamentals course, which promises to be successful. It is enlightening to compare that project with RAISE. Why is it succeeding where RAISE did not? I'm quite surprised to find that the answer isn't so much about more funding, more senior people to execute it, more time, etc. They're simply using existing materials instead of creating their own. This makes it orders of magnitude easier to produce the thing; you can just focus on the delivery. Why didn't I, or anyone around me, think of this? I'm honestly perplexed. It's worth thinking about.

You might feel that this whole section is overly deferential. The OpenPhil staff are not omniscient. They have limited research capacity. As Joy's Law states, "no matter who you are, most of the smartest people work for someone else."

But unlike in competitive business, I expect those very smart people to inform OpenPhil of their insights. If I personally had an insight into a new giving opportunity, I would not proceed to donate; I would proceed to write up my thoughts on the EA Forum and get feedback. Since there's an existing popular venue for crowdsourcing ideas, I'm even less willing to believe that large EA foundations have simply missed a good opportunity.

I would like to respond specifically to this reasoning.

Consider the scenario in which a random (i.e. probably not EA-affiliated) genius comes up with an idea that is, as a matter of fact, of high value.

Simplifying a lot, there are two possibilities here: (X) their idea falls within the window of what the EA community regards as effective, or (Y) it does not.

The probabilities of X and Y could be hotly debated, but I'm comfortable stating that the probability of X is less than 0.5. That is, we may have a high success rate within our scope of expertise, but the share of good ideas that EA can recognize as good is not that high.

The ideas that reach OpenPhil via the EA community might be good, but not all good ideas make it through the EA community.
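In symbols, the claim amounts to saying that, for a randomly drawn high-value idea,

$$P(X) = P(\text{idea is inside EA's window} \mid \text{idea is high value}) < 0.5,$$

so even if our hit rate inside the window were perfect, the pipeline as a whole would still miss more than half of the good ideas.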

To me, reducing your weirdness is equivalent to defection in a prisoner's dilemma, where the least weird person gets the most reward but the total reward shrinks as the total weirdness shrinks.
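One toy way to make that precise (the payoff form and parameters here are mine, purely for illustration): suppose each of $n$ people picks a weirdness level $w_i \in \{0, 1\}$ at personal cost $c$, and everyone benefits $B$ from each unit of weirdness in the community, with $c > B$ but $nB > c$. Then

$$u_i = B \sum_j w_j - c\, w_i,$$

so each person's own weirdness nets them $B - c < 0$ (toning it down is the dominant strategy), while total welfare $\sum_i u_i = (nB - c) \sum_j w_j$ grows with every unit of weirdness the community keeps.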

Of course you can't just go all-out on weirdness, because the cost you'd incur would be too great. My recommendation is to be slightly more weird than average. Or: be as weird as you perceive you can afford to be, but no weirder. If everyone did that, we would gradually expand the range of acceptable things outward.

Because if there is excess funding and fewer applicants, I'd assume such applicants would also get funding.

I have seen examples of this at EA Funds, but it's not clear to me whether this is being broadly deployed.

Let's interpret "study" as broadly as we can: is there not anything that someone can do on their own initiative, and do better if they have more time, that increases their leadership capacity?

I think the biggest constraint for having more people working on EA projects is management and leadership capacity. But those aren't things you can (solely) self-study; you need to practice management and leadership in order to get good at them.

What about those people who already have management and leadership skills, but lack things like:

  • Connections with important actors
  • Awareness of the incentives and the models of the important actors
  • Awareness of important bottlenecks in the movement
  • Background knowledge as a source of legitimacy
  • Skin in the game / a track record as a source of legitimacy

If I take my best self as a model for leadership (which feels like a status grab, but I hope you'll excuse me; it's the best data I have), then good leadership requires a lot of affinity, domain knowledge, vision, and previous interactions with the thing that is being led. Can this not be cultivated?

There is also a significant loss caused by moving to a different town, i.e. the loss of important connections with friends and family at home, but we're tempted not to count those.

I would train more grantmakers. Not because they're necessarily overburdened, but because, if they had more resources per applicant, they could double as mentors.

I suspect there is a significant set of funding applicants that don't meet the bar but would if they received regular high-quality feedback from a grantmaker.

(like myself in 2019)
