TaraMacAulay

I think their approach is highly speculative, even if you were to agree with their overall plan. I think Leverage has contributed to EA in the past, and I expect them to continue doing so, but this alone isn't enough to make them a better donation target than orgs like CEA or 80K.

I'm glad they exist, and I hope they continue to exist; I just don't think Leverage or Paradigm are the most effective use of my money or time. I feel similarly about CFAR. Supporting movement building and long-termism is already meta enough for me.

Note: I was previously CEO of CEA, but stepped down from that role about 9 months ago.

I've long been confused about the reputation Leverage has in the EA community. After hearing lots of conflicting reports, both extremely positive and extremely negative, I decided to investigate a little myself. As a result, I've had multiple conversations with Geoff and attended a training weekend run by Paradigm. I can understand why many people get a poor impression and question the validity of their early-stage research. I think that in the past, Leverage has done a poor job of communicating their mission and their relationship to the EA movement. I'd like to see Leverage continue to improve transparency, and I am pleased with Geoff's comments below.

Despite some initial hesitation, I found the Paradigm training I attended surprisingly useful, perhaps even more so than the CFAR workshop I attended. The workshop was competently run, and the content was delivered in a polished fashion. I didn't go in expecting the content to be scientifically rigorous; most self-improvement content isn't. It was fun, engaging, and useful enough to justify the time spent.

Paradigm is now running the EA Summit. I know Mindy and Peter, some of the key organisers, through their long-standing contributions to EA. They were both involved in running a successful student group, and Peter worked at CEA, helping us to organise EAG 2015. I believe that Mindy and Peter are dedicated EAs who decided to organise this event because they would really like to see more focus on movement building in the EA community.

I've been wanting to see new, more movement-building-focused activities in EA. CEA can't do it all alone, and I generally support people in the EA community attempting ambitious movement-building projects. Given this, and my positive experience attending an event put on by Paradigm, I decided to provide some funding for the EA Summit personally.

I don't think that Leverage, Paradigm, or related projects are a good use of EA time or money, but I do think the level of hostility towards them that I've seen in this community is unwarranted, and I'd like to see us do better.

I know it's outside the scope of this writeup, but I just wanted to say that I found this really helpful, and I'm looking forward to seeing an evaluation of MIRI's other research.

I'd also be really excited to see more posts about which research pathways you think are most promising in general, and how you compare work on field building, strategy and policy approaches, and technical research.

Another thing I should have mentioned: if you're in a similar position and are not planning to donate within the next 3 months, but are very likely to do so in the future, you can indicate your support for the project by filling out our feedback form and telling us roughly how much you'd be interested in donating and how you would allocate your donation between the 4 funds. A couple of you have done so already. We plan to take these 'pledged donations' into account when reviewing whether to continue the project.

Thanks for raising this point. We intend to reassess at 3 months, as we think it is prudent to reassess early, rather than risk wasting a year of staff time. We should expect to have some evidence either way, even if it is not definitive, and hope to learn much more if we extend the trial.

We were also concerned that it might be difficult to raise sufficient funds to really test whether this idea is worthwhile, given that the vast majority of giving, even within the EA sphere, occurs towards the end of the calendar year. One successful outcome for this project would be to accumulate a reasonable number of monthly recurring donations in each cause area, which could help ameliorate some of the coordination issues that arise from these predictable patterns in giving.

At present, organizations continue to hold fundraisers close to the end of the year because that's when people decide where to give, and donors continue to wait until the end of the year because that's when organizations typically post the detailed updates that make it easier to decide where to donate. It is not very practical for an individual donor to assess an organization's funding gap, or its strategy, when that organization is not currently engaged in fundraising.

Our hope is that pooling donations will help Fund Managers access significant economies of scale, making it more practical to assess these gaps quarterly. In addition, monthly recurring donations will indicate to Fund Managers that there is an appetite within the EA community for continued investment in a particular cause area. This may encourage them to recommend slightly more speculative grants, and could also lead them to feel more comfortable supporting existing, highly promising organizations at higher levels of funding than they might otherwise recommend.

We will likely extend the project beyond the 3-month period if conditions like the following are met:

  • We come to believe that this project is something the community wants, using donations as a proxy for interest.
  • At the end of the 3-month period, the Fund Managers are satisfied that the total value of donations received is sufficient to justify their continued investment in the project.

The EA Funds are now live and accepting donations. You can read about the Far Future fund here.

We plan to send quarterly updates to all EA Funds donors detailing the total size of the fund and details of any grants made in the period. We will also publish grant reports on the EA Funds website and will keep an updated grant history on the fund description page, in much the same manner as Open Phil. We plan to publish a more detailed review of the project in 3 months, at which time we will reassess and possibly make significant changes to the current iteration of the funds.

While the EA Giving Group DAF (EAGG) will continue to run, we suspect that many donors interested in the EAGG will prefer to donate to the EA Community fund or the Far Future fund. These funds will be easier to use, will be tax-deductible in both the UK and the US, and will not have a large minimum donation amount. We were actually inspired to create these funds, in part, by the success of the EAGG - we saw the EAGG as something like a super-MVP version of this idea.

Hi AGB, you are correct on both counts - the linked budget is for CEA UK only, and the $3.1M figure is enough to allow us to end 2017 with at least 12 months of reserves.

The reason that we're raising more than the total projected spend for 2017 is that we are hoping to build up our reserves to ensure we do not need to fundraise mid-year. We aim to maintain a minimum of 12 months of reserves, in line with recommended best practices for non-profits. Prior to the start of this fundraiser, we had planned to let our reserves fall far below this limit towards the end of 2016, as we identified some particularly promising opportunities late in the year, including the Doing Good Better giveaway campaign and marketing the EA Newsletter and the Giving What We Can Pledge. Having experimented with these new approaches, we want to further test and expand upon these activities in 2017, while rebuilding our reserves to a more sustainable level. This means that we need to raise roughly 18 months' worth of expenses to fully fund our current mainline plans and avoid the need to fundraise mid-year.
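As a rough sketch of that arithmetic (a back-of-the-envelope reading of the figures above; the starting reserve level is only implied, not stated exactly):

$$\text{amount to raise} \;\approx\; \underbrace{12}_{\text{2017 spend}} \;+\; \underbrace{(12 - R)}_{\text{reserve top-up}} \;\text{months of expenses} \;\approx\; 18 \text{ months if } R \approx 6,$$

where $R$ stands for the months of reserves we expect to hold entering 2017. Roughly 18 months of expenses is what the $3.1M target is intended to cover.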

Our plans following the fundraiser are as follows:

  • If we raise the full $3.1M, we will not run another fundraiser until late 2017. We will plan to end 2017 with around 12-16 months of reserves.

  • If we raise less than $2.1M, we will reevaluate our 2017 plans. In this scenario, we would likely reduce our planned spending on marketing activities during Y Combinator, reduce the amount we plan to spend on EAGx and student group grants, and delay or cancel some planned hires.

  • If we raise an amount between $2.1M and $3.1M, we will proceed with our mainline plans for 2017, but we will likely not pursue any additional activities, and we will be more cautious with some of our more flexible spending, such as EAGx grants and the marketing spend we have planned during Y Combinator. We will then reevaluate our financial position mid-year and may decide to run a smaller fundraiser at that point to cover any gaps.

Great post - identifying experts, and in particular comparing expertise between similar candidates, is exceptionally difficult; using even a rough model seems likely to greatly improve our ability to undertake this task.

While it seems possible to make some progress on the problem of independently assessing expertise, I want to stress that we should still expect to fail if we try to do so entirely independently, without consulting a domain expert. Great! Now we have a simpler problem: how do we identify the best domain expert to help us build a framework for assessing candidates?

Tyler's model seems somewhat helpful here, and adding the components from John's model improves it further. My prior approach was a simpler one, but it shares some characteristics. I usually look for evidence of exceptional accomplishments that are rare or unprecedented, and ignore most examples of accomplishments which are difficult or competitive but common. Peer recognition is also a good barometer, more so if you ask people who are field insiders but have merely a casual acquaintance with the person in question. In the case of picking an expert who can help me identify predictors of expertise in their field, I'm less concerned with my ability to rate and compare their level of expertise against other top-level experts, as it's fairly low cost to seek out the opinions of multiple experts.

When we were considering hiring a digital marketer, I sought input from 4 people whom I will call experts; doing so dramatically improved my ability to pick the best candidates from the pool. I tested my predictions against the experts by rating the applications of the top 5 candidates myself, then having the domain expert rank them, comparing our scores, and watching them as they did so. Watching the expert evaluate other candidates helped me pick out further elements which were not in their original verbal model. This part seems quite different from Tyler's approach, as it is about identifying domain-specific expertise rather than searching for domain-general predictors of expertise, but it seems important to mention. I worry that neglecting to seek out domain-specific predictors would lead to a poorer outcome.

I also want to tease apart the question of attaining domain-level expertise from that of having a good process for generating expertise. I imagine that it is possible for those who have a good process (these people would, I expect, score well on Tyler's model) to become experts more quickly. I imagine there is another class of experts who have decades of experience, rich implicit models, and impressive achievements, but who would struggle to give concise, detailed answers if you asked them to share their wisdom. I suspect that quiet observation of such a person in their work environment, rather than asking them questions, would yield a better measure of their level of expertise, but this requires considerable skill on the part of the observer.

I'd love to think about this more; I'm looking forward to trying out your framework and playing around with it.