Owen Cotton-Barratt
(My personal take based on general theory; not representing any kind of official position or based on specifics of EVF:)

Yeah, combining lots of projects in a small number of legal entities probably increases risk aversion some, relative to them each having their own legal entities. There are various reasons for this, and it’s not clear whether it’s net good.

On the hard analysis (i.e. just looking at ~economic incentives): the first-order effect is that it decreases inappropriate risk tolerance, since projects that might be judgement-proof by themselves are no longer so as part of a larger entity. OTOH it might be that the ecosystem systematically underincentivizes taking upside risks. If large upside risks were correlated with large downside risks (e.g. some activities are just high-variance), which is plausible, it could be bad to asymmetrically make projects internalize downside risk, even though internalizing externalities is usually good. (Impact markets might help here, but have issues of their own …)

On the soft analysis: people may be inclined towards ambiguity aversion, and towards really not wanting any project to have serious downsides for other projects. This suggests you might get more risk aversion than is appropriate. OTOH the whole setup could lead to more systematic analysis of risks, in a way that helps to avoid unknowingly taking risks, which is probably an improvement.

Or if you’re asking about the introduction of the Interim CEOs: you might have a concern that they’d be overly risk-averse, if they get the blame for big problems, but don’t get credit for big successes by the projects. I agree that this is a worry in theory; pragmatically, the respective boards will be holding Howie and Zach accountable for “is this a structure which encourages project leads to make appropriately ambitious plans?”, which should help to mitigate it some (probably not all the way because it’s a harder thing to hold them accountable for than whether there were big problems).

Overall my guess is that “effect on risk aversion” is not one of the most important factors for whether this is a good setup.

Hi Jeff —

Side point: The Alphabet and Meta analogies work better for the relationship between EVF (either / both of US and UK) and the projects hosted within it than for the relationship between EVF UK and EVF US. That is to say, Google rebranded to Alphabet to avoid confusion between the parent company (Alphabet) and the well-known subsidiary company (Google). Similarly, CEA rebranded to EVF to avoid confusion between the legal entities (EVF) and the well-known subsidiary project (CEA).

Why two CEOs? While the projects are hosted within the broader EVF legal entities, neither EVF UK nor EVF US subsumes the other entity. As distinct nonprofits operating in distinct countries, they need distinct leadership; consequently the boards appointed separate Interim CEOs. Similarly, as is typical for nonprofits, each charity has its own board, which is responsible for providing governance oversight (EVF UK used to be a member of EVF US but isn’t any more). There’s a lot in flux, the interim CEO appointments are new, and various things may change over time. We’re still exploring how this best works, and what the right long-term structure will be.

The purchase was in April 2022, not in 2021; however, the rest of your comment seems fair.

FYI: I added a brief explanation of why we hadn't posted publicly about it before now to the end of my answer.

(I edited in a way which changed which paragraph was penultimate. I believe Larks was referring to the content which is now expanded on in paragraphs starting "We wanted ..." and "We thought ...".)

I've edited my reply to add a bit more detail on this point.

Hey,

First I want to explain that I think it's misleading to think of this as a CEA decision (I've edited to be more explicit about this). To explain that I need to disambiguate between:

  1. CEA, the project that runs the EA Forum, EA Global, etc.
    • This is what I think ~everyone usually thinks of when they think of "CEA", as it's the group that's been making public use of that brand
  2. CEA, the former name of a legal entity which hosts lots of projects (including #1)
    • This is a legacy naming issue ... 
      • The name of the legal entity was originally intended as a background brand to house 80,000 Hours and Giving What We Can; other projects have been added since, especially in recent years
      • Since then the idea of "effective altruism" has become somewhat popular in its own right! And one of the projects within the entity started making good use of the name "CEA"
    • We’ve now renamed the legal entity to EVF, basically in order to avoid this kind of ambiguity!
       

Wytham Abbey was bought by #2, and isn’t directly related to #1, except for being housed within the same legal entity. I was the person who owned the early development of the project idea, and fundraised for it. (The funding comes from a grant specifically for this project, and is not FTX-related.) I brought it to the rest of the board of EVF to ask for fiscal sponsorship (i.e. I would direct the funding to EVF, and EVF would buy the property and employ staff to work on the project).

So EVF made two decisions here: they approved fiscal sponsorship, agreeing to take funds for this new project; and they then followed through and bought the property with the funds that had been earmarked for that. The second of these is technically a decision to buy the building (and was made by a legal entity at the time called CEA), but at that point it was fulfilling an obligation to the donor, so it would have been wild to decide anything else. The first is a real decision, but the decision was to offer sponsorship to a project that would likely otherwise have happened through another vehicle, not to use funds to buy a building rather than for another purpose.

Neither of these decisions was made by any staff of the group people generally understand as "CEA". (All of this ambiguity/confusion is on us, not on readers.)

I’d also like to speak briefly to the “why” — i.e. why I thought this was a good idea. The central case was this: 

I’ve personally been very impressed by specialist conference centres. When I was doing my PhD, I think the best workshops I went to were at Oberwolfach, a mathematics research centre funded by the German government. Later I went to an extremely productive workshop on ethical issues in measuring the global burden of disease at the Brocher Foundation. Talking to other researchers, including in other fields, I don’t think my impression was an outlier. Having an immersive environment which was more about exploring new ideas than showing off results was just very good for intellectual progress. In theory this would be possible without specialist venues, but researchers want to spend time thinking about ideas not event logistics. Having a venue which makes itself available to experts hosting events avoids this issue.

In the last few years, I’ve been seeing the rise of what seems to me an extremely important cluster of ideas — around asking what’s most important to do in the world, and taking chains of reasoning from there seriously. I think this can lead to tentative answers like “effective altruism” or “averting existential risk”, but for open-minded intellectual exploration I think it’s better to have the focus on questions than answers. I thought it would be great if we could facilitate more intellectual work of this type, and the specialist-venue model was a promising one to try. We will experiment with a variety of event types. 

We had various calculations about costings, which made it look somewhere between “moderately money-saving” and “mildly money-spending” vs renting venues for events that would happen anyway, depending on various assumptions (e.g. about usage) that we couldn’t get great data on before running the experiment. The main case for the project was not a cost-saving one, but that if it was a success it could generate many more valuable workshops than would otherwise exist. Note that this is a much less expensive experiment than it may look at face value, since we retain the underlying asset of the building.

We wanted to be close to Oxford for easy access to the intellectual communities there. (Property prices didn’t fall off significantly with distance until travel time from Oxford and London became substantially higher.) We looked at a lot of properties online, and visited the three properties we found for sale with 20+ bedrooms within about 50 minutes of Oxford. These were all "country houses", which are commonly repurposed as event venues in England. The other two were cheaper (one ~£6M and one ~£9M at the end of a competitive process, compared to a purchase price for Wytham of a bit under £15M) but needed significantly more work before they were usable, which would have added large expense (running into the millions) and delay (likely years). (And renovation expense isn’t obviously recoverable if one sells — it depends on how much the buyers want the same things from the property as you do.)

We thought Wytham had the most long-term potential as a venue because it had multiple large common rooms that could take >40 people. The other properties had one large room each holding perhaps a max of 40, but there would be pressure on this space since it would be wanted as both a dining space and for workshop sessions, and would also reduce flexibility of use for meetings (extra construction might have been able to address this, but it was a big question mark whether you could get planning consent). Wytham also benefited from being somewhat larger (about 27,000 sq ft vs roughly 20,000 sq ft for each of the other two) and a more accessible location. Overall we thought that a combination of factors made it the most appropriate choice.

I did feel a little nervous about the optical effects, but think it’s better to let decisions be guided less by what we think looks good, and more by what we think is good — ultimately this was a decision I felt happy to defend.

On why we hadn’t posted publicly about this before: I'm not a fan of trying to create hype. I thought the natural time to post about the project publicly would be when we were ready to accept public applications to run events, and it felt a bit gauche to post before that. Now that there's a public discussion, of course, it seemed worth explaining some of the thinking.

I hope this is helpful.

I agree with all this. I meant to state that I was assuming logarithmic returns for the example, although I do think some smoothness argument should be enough to get it to work for small shifts.

Sorry I don't have a link. Here's an example that's a bit more spelled out (but still written too quickly to be careful):

Suppose there are two possible worlds, S and L (e.g. "short timelines" and "long timelines"). You currently assign 50% probability to each. You invest in actions which help with either until your expected marginal returns from investment in either are equal. If the two worlds have the same returns curves for actions on both, then you'll want a portfolio which is split 50/50 across the two (if you're the only investor; otherwise you'll want to push the global portfolio towards that).

Now suppose you update towards S being 1% more likely (51%, with L at 49%).

This changes your estimate of the value of marginal returns on S and on L. You rebalance the portfolio until the marginal returns are equal again, which (given the identical returns curves) means 51% spending on S and 49% spending on L.

So you eliminated the marginal 1% spending on L and shifted it to a marginal 1% spending on S. How much better spent, on average, was the reallocated capital compared to before? Around 1%. So you got a 1% improvement on 1% of your spending.

If you'd made a 10% update you'd get roughly a 10% improvement on 10% of your spending. If you updated all the way to certainty on S you'd get to shift all of your money into S, and it would be a big improvement for each dollar shifted.
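The example above can be checked numerically. Here's a minimal sketch, assuming logarithmic returns in each world (the assumption flagged in the earlier comment); under log returns the optimal allocation to S is just the probability of S, and the function names are illustrative:

```python
import math

def expected_value(p, q):
    # Expected log-returns when the true probability of world S is p
    # but a fraction q of the budget goes to S (and 1 - q to L).
    return p * math.log(q) + (1 - p) * math.log(1 - q)

def gain_from_update(p_new, p_old=0.5):
    # Value of rebalancing from the old optimal allocation (p_old)
    # to the new optimal allocation (p_new) after updating to p_new.
    return expected_value(p_new, p_new) - expected_value(p_new, p_old)

g1 = gain_from_update(0.51)   # a 1% update
g10 = gain_from_update(0.60)  # a 10% update
print(g1, g10, g10 / g1)      # the 10% update is ~100x as valuable
```

A 10x larger update comes out roughly 100x as valuable, matching the "1% improvement on 1% of your spending" framing: the gain is quadratic in the size of the shift.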

On the face of it, an update 10% of the way towards a threshold should only be about 1% as valuable to decision-makers as an update all the way to the threshold.

(Two intuition pumps for why this is quadratic: a tiny shift in probabilities only affects a tiny fraction of prioritization decisions, and only improves them by a tiny amount; and getting 100 updates of size 1% of the way to a threshold is super unlikely to actually get you to the threshold, since many of them are likely to cancel out.)
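The second intuition pump can be illustrated with a quick simulation sketch (the setup is illustrative): treat each 1% update as an unbiased ±1% step, since future evidence is as likely to reverse an update as to compound it, and look at where 100 such steps leave you.

```python
import random

random.seed(0)

def final_shift(n_updates=100, step=0.01):
    # Each update moves the probability estimate 1% up or down at random
    # (a martingale: updates are as likely to reverse as to compound).
    return sum(random.choice([-step, step]) for _ in range(n_updates))

trials = [final_shift() for _ in range(10_000)]
# Typical net movement after 100 one-percent updates:
rms = (sum(x * x for x in trials) / len(trials)) ** 0.5
reached = sum(abs(x) >= 1.0 for x in trials) / len(trials)
print(rms, reached)
```

The typical net movement is around 10% (standard deviation scales with the square root of the number of steps), and reaching a full 100% shift would require all 100 steps to point the same way, which essentially never happens.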

However you might well want to pay for information that leaves you better informed even if it doesn't change decisions (in expectation it could change future decisions).

Re. arguments split across multiple posts, perhaps it would be ideal to first decide the total prize pool depending on the value/magnitude of the total updates, and then decide on the share of credit allocation for the updates. I think that would avoid the weirdness about post order or incentivizing either bundling/unbundling considerations, while still paying out appropriately more for very large updates.
