All of Grayden's Comments + Replies

Thanks for sharing. It’s a start, but it’s certainly not a proven Theory of Change. For example, Tetlock himself said that nebulous long-term forecasts are hard to do because there’s no feedback loop. Hence, a prediction market on an existential risk will be inherently flawed.

Nathan Young
1mo
I don't think that really works. You can get feedback on 5-year forecasts in 5 years. Metaculus already has some suggestions as to people who are good 5-year forecasters. None of the above are prediction markets.

Preventing catastrophic risks, improving global health and improving animal welfare are goals in themselves. At best, forecasting is a meta topic that supports other goals.

Austin
2mo
Yes, it's a meta topic; I'm commenting less on the importance of forecasting in an ITN framework and more on its neglectedness. This stuff basically doesn't get funding outside of EA, and even inside EA it has had no institutional commitment; outside of random one-off grants, the largest forecasting funding program I'm aware of over the last 2 years was the $30k in "minigrants" funded by Scott Alexander out of pocket.

But on the importance of it: insofar as you think future people matter and that we have the ability and responsibility to help them, forecasting the future is paramount. Steering today's world without understanding the future would be like trying to help people in Africa without overseas reporting to guide you - you'll obviously do worse if you can't see the outcomes of your actions.

You can make a reasonable argument (as some other commenters do!) that the tractability of forecasting to date hasn't been great; I agree that the most common approaches of "tournament setting forecasting" or "superforecaster consulting" haven't produced much of decision-relevance. But there are many other possible approaches (e.g. FutureSearch.ai is doing interesting things using an LLM to forecast), and I'm again excited to see what Ben and Javier do here.

Thanks for sharing, but nobody on that thread seems to be able to explain it! Most people there, like here, seem very sceptical.

You might be right but just to add a datapoint: I was featured in an article in 2016. I don’t regret it but I was careful about (1) the journalist and (2) what I said on the record.

Grayden
2mo

I think forecasting is attractive to many people in EA like myself because EA skews towards curious people from STEM backgrounds who like games. However, I have yet to see a robust case for it being an effective use of charitable funds (if there is one, please point me to it). I'm worried we are not being objective enough, and are trying to find the facts that support the conclusion rather than the other way round.

Nathan Young
1mo
COI: I work in forecasting.

Whether or not forecasting is a good use of funds, good decision-making is probably correlated with impact. So I'm open to the idea that forecasting hasn't been a good use of funds, but a priori it seems it should be. Forecasting, in one sense, is predicting how decisions will go. How could that not be a good idea in theory?

More robust cases in practice:

  1. Forecasters have good track records and are provably good thinkers.
     1. They can red-team institutional decisions ("what will be the impacts of this?").
     2. In some sense this is similar to research.
  2. Forecasting is becoming a larger part of the discourse, and this is probably good. It is much more common to see the Economist, the FT, Matt Yglesias, and Twitter discourse referencing specific testable predictions.
  3. In making AI policy specifically, it seems very valuable to guess progress and guess the impact of changes.
     1. To me it looks like Epoch and Metaculus do useful work here that people find valuable.

The interest within the EA community in forecasting long predates the existence of any gamified forecasting platforms, so it seems pretty unlikely that, at a high level, the EA community is primarily interested because it's a fun game. (This doesn't prove more recent interest isn't driven by the gamified platforms, though my sense is that the current level of relative interest is similar to where it was a decade ago, so it doesn't feel like they made a huge shift.)

Also, AI timelines forecasting work has been highly decision-relevant to a large number of peop...

Mo Putera
2mo
Would you count Holden's take here as a robust case for funding forecasting as an effective use of charitable funds? This is my own (possibly very naive) interpretation of one motivation behind some of Open Phil's forecasting-related grants.

Actually, maybe it's also useful to just look at the biggest grants from that list:
joshcmorrison
2mo
Personally, I think forecasting specifically for drug development could be very impactful: both in the general sense of aligning fields around the probability of success of different approaches (at a range of scales - very relevant both for scientists and funders) and in the more specific regulatory use case (public predictions of the safety/efficacy of medications as part of approvals by the FDA/EMA etc.).

More broadly, predicting the future is hugely valuable. Insofar as effective altruism aims to achieve consequentialist goals, the greatest weakness of consequentialism is uncertainty about the effects of our actions. Forecasting targets that problem directly. The financial system creates a robust set of incentives to predict future financial outcomes - trying to use forecasting to build a tool with broader purpose than finance seems like it could be extremely valuable.

I don't really do forecasting myself, so I can't speak to the field's practical ability to achieve its goals (though as an outsider I feel optimistic); perhaps there are practical reasons it might not be a good investment. But overall it definitely feels to me like the right thing to be aiming at.

I'm considering elaborating on this in a full post, but I will do so quickly here as well: It appears to me that there's potentially a misunderstanding here, leading to unnecessary disagreement.

I think that the nature of forecasting in the context of decision-making within governments and other large institutions is very different from what is typically seen on platforms like Manifold, PolyMarket, or even Metaculus. I agree that these platforms often treat forecasting more as a game or hobby, which is fine, but very different from the kind of questions pol...

I think the fact that forecasting is a popular hobby is probably pretty distorting of priorities.

There are now thousands of EAs whose experience of forecasting is participating in fun competitions which have been optimised for their enjoyment. This mass of opinion and consequent discourse has very little connection to what should be the ultimate end goal of forecasting: providing useful information to decision makers.

For example, I’d love to know how INFER is going. Are the forecasts relevant to decision makers? Who reads their reports? How well do people ...

Vasco Grilo
2mo
Thanks for the comment, Grayden. For context, readers may want to check the question post "Why is EA so enthusiastic about forecasting?".

Insolvency happens on an entity-by-entity level. I don’t know which FTX entity gave money to EA orgs (if anyone knows, please say), or whether it went first via the founders personally. I would have thought it’s possible that FTX fully repays its creditors, so there is value in the shares, but then FTX’s investors go after the founders personally and they are declared bankrupt.

Jason
2mo
If I remember a report from Ray et al. correctly, there were a bunch of intertwined bank accounts. I believe some transactions were made from Alameda-owned accounts, some from North Dimension-owned accounts, etc. without much rhyme or reason.

I’m hugely in favour of principles first as I think it builds a more healthy community. However, my concern is that if you try too hard to be cause neutral, you end up artificially constrained. For example, Global Health and Wellbeing is often a good introduction point to the concept of effectiveness. Then once people are focused on maximisation, it’s easier to introduce Animal Welfare and X-Risk.

I agree that GHW is an excellent introduction to effectiveness and we should watch out for the practical limitations of going too meta, but I want to flag that seeing GHW as a pipeline to animal welfare and longtermism is problematic, both from a common-sense / moral uncertainty view (it feels deceitful and that’s something to avoid for its own sake) and a long-run strategic consequentialist view (I think the EA community would last longer and look better if it focused on being transparent, honest, and upfront about what most members care about, and it’s really important for the long term future of society that the core EA principles don’t die).

calebp
4mo
I agree with the overall point, though I'm not sure I've seen much empirical evidence for the claim that GHD is a good starting point (or at least I think it's often overstated). I got into EA stuff through GHD, but this may have just been because there were a lot more GHD/EA intro materials at the time. I think that the ecosystem is now a lot more developed, and I wouldn't be surprised if GHD didn't have much of an edge over cause-first outreach (for AW or x-risk). Maybe our analysis should be focused on EA principles, but the interventions themselves can be branded however they like? E.g. we're happy to fund GHD giving games because we believe that they contribute to promoting caring about impartiality and cost-effectiveness in doing good - but they don't get much of a boost or penalty from being GHD giving games (as opposed to some other suitable cause area).

When you are a start-up non-profit, it can be hard to find competent people outside your social circle, which is why I created the EA Good Governance Project to make life easier for people.

I think it's important:

  1. To put in place good practices (e.g. the board regularly meeting without the CEO) BEFORE they are needed.
  2. For FUNDERS to ask questions about effective governance and bear responsibility when they get it wrong.

My two cents:

  • Most governments heavily subsidise R&D (which is equivalent to a deliberate negative externality), often through tax credits
  • The patent system allows companies to extract abnormal profits for 20 years and incentivises a race (even if somebody independently develops the technology, they can’t use it if somebody else has patented it). This system is a deliberately inefficient market
  • Corporate R&D tends to be much more short-term and customer-focused. If you come from an academic background, you will be shocked by what is counted as R&...

Funding to EA orgs has roughly halved in the last year, so a recession would barely be noticed! More broadly, the point you make is valid. One of the reasons I’ve stayed earning to give is that I’ve never been confident in the stability of EA funding over my future career.

Donating a kidney results in an over 1300% increase in the risk of kidney disease. A risk-averse interpretation of the data puts the increase in year-to-year mortality after donation upwards of 240%.
Could you provide these in absolute terms? Relative terms on their own are pretty meaningless and rhetorical.
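
To make the relative-vs-absolute point concrete, here is a minimal sketch in Python (using a made-up baseline rate, not a figure from the study under discussion) of how a large relative increase can read very differently in absolute terms:

```python
# Hypothetical illustration: converting a relative risk increase to an absolute risk.
# The baseline below is invented for illustration, not taken from the
# kidney-donation literature.

def absolute_risk(baseline: float, relative_increase_pct: float) -> float:
    """Absolute risk after applying a relative increase (in percent) to a baseline."""
    return baseline * (1 + relative_increase_pct / 100)

baseline = 0.003  # assume a 0.3% baseline lifetime risk (hypothetical)
after = absolute_risk(baseline, 1300)  # "over 1300% increase"
print(f"{baseline:.1%} baseline -> {after:.1%} absolute risk")  # 0.3% -> 4.2%
# A large relative increase on a small baseline can still be a modest absolute
# risk; the same 1300% on a 3% baseline would instead mean a 42% absolute risk.
```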

Great article. Very concise, clear and actionable!

You raise some good points, so I have removed that point from the main article.

Each one of us only has a single perspective and it’s human nature to assume other people have similar perspectives. EA is a bubble and there are certainly bubbles within the bubble, e.g. I understand the Bay Area is very AI-focused while London is more plural.

Articles like this that attempt to replace one person’s perspective with hard data are really useful. Thank you.

At EA for Christians, we often interact with people who are altruistic and focused on impact but do not want to associate with EA because of its perceived anti-religion ethos.

On the flip side, since becoming involved with EA for Christians, a number of people have told me they are Christian but keep it quiet for fear it will damage their career prospects.

And to add another way the anti-religion ethos is harmful, people may not be comfortable talking to their Christian friends about EA (or even about topics considered aligned with EA) in the first place.

We should all try to maximise our impact and there’s a good argument for specialisation.

However, I’m concerned by a few things:

  • It’s not obvious to me that spending more money on yourself will make you better at your job
  • There’s a danger of arrogance clouding our judgment, e.g. I don’t think 99% of people in EA should be flying business class
  • Donating has value for many people due to the “skin in the game” effect
carter allen
11mo
Agree with everything here. But the argument isn’t that people should spend the money they’d otherwise donate to charity on themselves/flying business class, it’s that they should use it to further whatever their particular path to impact is.

Highly engaged EAs were much more likely to select research (25.0% vs 15.1%) and much less likely to select earning to give (5.7% vs 15.7%)

are you sure this isn’t just a function of the definition of highly engaged?

are you sure this isn’t just a function of the definition of highly engaged?

 

No, I think it probably is partly explained by that.

For context for other readers: the highest level of engagement on the engagement scale is defined as "I am heavily involved in the effective altruism community, perhaps helping to lead an EA group or working at an EA-aligned organization. I make heavy use of the principles of effective altruism when I make decisions about my career or charitable donations." The next highest category of engagement ("I’ve engaged exte...

Larks
1y
Should redefine engagement in terms of total $ donated to charity in the last year and see how the stats look.

Yes! A rather important typo! I’ve now fixed it.

The Parable of the Good Samaritan seems to lean towards impartiality. Although the injured man was lying in front of the Samaritan (geographic proximity), the Samaritan was considered a foreigner / enemy (no proximity of relationship).

Luke Eure
1y
It's the geographic proximity that I get hung up on, though. He is right in front of the Samaritan. I can't think of any parables that involve someone showing mercy to a person who is not right in front of them. Every time Jesus performs a miracle, it is for someone right in front of him. I am strongly in favor of more impartiality, but I think most Christians find it a stretch to say that the Good Samaritan parable is meant to imply we should care for future people and people on the other side of the world whom we will never meet.
dominicroser
1y
Is there a typo in the first sentence - should it say impartiality rather than partiality?

Did the EV US Board consider running an open recruitment process and inviting applications from people outside of their immediate circle? If so, why did it decide against?

The EV US board was (in my opinion) significantly undersized to handle a major operational crisis. I suspect it knew at some point that Rebecca Kagan might be stepping down soon and that existing members might have to recuse from important decisions for various reasons. Thus, it would have been reasonable in my eyes to prioritize getting two new people on ASAP and to defer a broader recruitment effort until further expansion.

Thanks, Ben. This is a really thoughtful post.

I wondered if you had any update on the blurring between EA and longtermism. I've seen a lot of criticism of EA that is really just low-quality criticism of longtermism, because the conclusions can be weird.

Sorry if I wasn’t clear. My claim was not "Every organisation has a COO"; it was "If an organisation has a COO, the department they manage is typically front-office rather than back-office and often the largest department".

For Apple, they do indeed manage front-office operations: “Jeff Williams is Apple’s chief operating officer reporting to CEO Tim Cook. He oversees Apple’s entire worldwide operations, as well as customer service and support. He leads Apple’s renowned design team and the software and hardware engineering for Apple Watch. Jeff a...

I also found these charts a little confusing. A single value for each, or a clustered column chart, might be clearer.

David_Moss
1y
Thanks for the additional comment. If it helps, you can simply look at the relationships for longtermism (left) and neartermism (right) separately, and ignore the other.

I think it is relatively safe in this instance to look at the relationship with mean longtermism-minus-neartermism scores (shown below), but only because we first examined the individual relationships (shown above), since, as I noted in my reply to Nathan, support for longtermist causes and support for neartermist causes don't reflect a single dimension.
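
As an illustration of that methodological point, here is a minimal sketch (with invented ratings and column names, not the survey's actual data): check each dimension's relationship separately before trusting a single difference score.

```python
import pandas as pd

# Invented per-respondent ratings, for illustration only.
df = pd.DataFrame({
    "longtermism": [5, 3, 4, 2, 5],
    "neartermism": [2, 4, 3, 5, 1],
    "predictor":   [4, 2, 3, 1, 5],  # some hypothetical attitude measure
})

# Examine each dimension's relationship individually first...
print(df["predictor"].corr(df["longtermism"]))
print(df["predictor"].corr(df["neartermism"]))

# ...and only then collapse to a difference score, which on its own can hide
# whether one dimension, the other, or both are driving the relationship.
df["lt_minus_nt"] = df["longtermism"] - df["neartermism"]
print(df["predictor"].corr(df["lt_minus_nt"]))
```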

Two quick points:

  1. Yes, legal control is the first consideration, but governance requires skill not just value-alignment
  2. I think in 2023 the skills you want largely exist within the community; it’s just that (a) people can’t find them easily (hence I founded the EA Good Governance Project) and (b) people need to be willing to appoint outside their clique

Thanks, Ben. I agree with what you are saying. However, I think that on a practical level, what you are arguing for is not what happens. EA boards tend to be filled with people who work full-time in EA roles, not with fully aligned, talented individuals from the private sector (e.g. lawyers, corporate managers) who might be earning to give having followed 80k’s advice 10 years ago.

Thanks for this! You might be right about the non-profit vs. for-profit distinction in 'operations' and your point about the COO being 'Operating' rather than 'Operations' is a good one.

Re avoiding managers doing paperwork, I agree with that way of putting it. However, I think EA needs to recognise that management is an entirely different skill. The best researcher at a research organization should definitely not have to handle lots of paperwork, but I'd argue they probably shouldn't be the manager in the first place! Management is a very different skillset involving people management, financial planning, etc. - responsibilities that are often pushed onto operations teams by people who shouldn't be managers.

abrahamrowe
1y
Yeah, I definitely agree with that - I think a pretty common issue is people entering into people management on the basis of their skills at research, and the two don't seem particularly likely to be correlated. I also think organizations sometimes struggle to provide pathways to more senior roles outside of management, and that seems like an issue when you have ambitious people who want to grow professionally but no options except people management.

Most organizations do not divide tasks between core and non-core. The ones that do (and are probably most similar to a lot of EA orgs) are professional services ones.

Administration definitely sounds less appealing, but maybe it would be more honest and reduce churn?

I don’t work in ops or within an EA org, but my observation from the outside is that the way EA does ops is very weird. Note these are my impressions from the outside, so they may not be reflective of the truth:

  • The term “Operations” is not used in the same way outside EA. In EA, it normally seems to mean “everything back office that the CEO doesn’t care about, as long as it’s done”. Outside of EA, it normally means the main function of the organisation (the COO normally has the highest number of people reporting to them after the CEO)
  • EA takes highly talented peopl...
Linch
1y
I was surprised by this claim, so I checked each of the 3 non-EA orgs I've worked at. Not only is it not true that "the COO normally has the highest number of people reporting to them after the CEO", literally none of them even have a COO for the whole org.

To check whether my experiences were representative, I went through this list of the largest companies. It looks like of the 5 largest companies by market cap, 2 of them have COOs (Apple, Amazon). Microsoft doesn't have a designated COO, but they had a Chief Human Resources Officer and a Chief Financial Officer, which in smaller orgs would probably be a COO job[1]. So maybe an appropriate prior is 50%? This is a very quick spot-check, however; I would be interested in more representative data.

[1] Notably, they didn't have a CTO, which surprised me.

I agree with several of your points here, especially the reinventing-the-wheel one, but I think the first and last miss something. I'll caveat this by saying I work in operations for a large (by EA standards) organization that might have more "normal" operations due to its size.

The term “Operations” is not used in the same way outside EA. In EA, it normally seems to mean “everything back office that the CEO doesn’t care about as long as it’s done. Outside of EA, it normally means the main function of the organisation (the COO normally has the highest

...
Joseph Lemien
1y
I agree that this is weird. In EA, operations is something like "everything that supports the core work and allows other people to focus on the core work", while outside of EA, operations is the core work of a company. Although I wish that EA hadn't invented its own definition for operations, at this point I don't see any realistic options for it changing.

There are some very competent leaders within EA, so I don’t think we should make sweeping assumptions. I think we need to make EA a meritocracy.

Ben Stewart
1y
Sure, but my impression of the number of them and of their competence has decreased. It’s still moderately high. And meritocracy cuts both ways: meritocracy would push harder on judging current leaders by their past success - i.e. harshly - and not be as beholden to contingent or social reasons for believing they’re competent.

@Jack Lewars is spot on. If you don’t believe him, take a look at the list of ~70 individuals on the EA Good Governance Project’s trustee directory. In order to effectively govern you need competence and no collective blindspots, not just value alignment.

Benjamin_Todd
1y
I'm definitely not saying value alignment is the only thing to consider.

Thanks, Joey. Really appreciate you taking the time to engage on these questions.

To be clear, I’m not seriously suggesting ignoring all research from before the decision. I’m just saying that, mathematically, an independent test needs its backtest data to exclude all calibration data.
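
A minimal sketch of that mathematical point (hypothetical dates and records, not CE's actual evaluation setup): any observation available before the decision belongs to the calibration set, and an independent backtest may only score against what came after.

```python
from datetime import date

# Hypothetical outcome observations, each tagged with when it was collected.
observations = [
    {"collected": date(2021, 5, 1), "outcome": 0.7},
    {"collected": date(2023, 2, 1), "outcome": 0.4},
    {"collected": date(2023, 9, 1), "outcome": 0.9},
]

decision_date = date(2022, 6, 1)  # when the "high impact" call was made

# Data available before the decision informed the original estimate
# ("calibration" data); an independent backtest must exclude all of it
# and score the estimate only against data gathered afterwards.
calibration = [o for o in observations if o["collected"] < decision_date]
backtest = [o for o in observations if o["collected"] >= decision_date]
```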

It strikes me that there are broadly 3 buckets of risk / potential failure:

  1. Execution risk - this is significant and you can only find out by trying, but you only really know if you’re being successful with the left hand side of the theory of change
  2. Logic risk - having an exte...

Thank you for writing this and for all the work you (and others) have put in over the years.

My question is to what extent you think CE’s impact measurement is tautological. If you determine something to be a high-impact opportunity and then go and do it, aren’t you by definition doing things you estimate to be high impact (as long as you don’t screw up the execution or realise you made an error)? To fully adjust for the selection effect, you would have to ignore all research conducted before the decision was made and rely solely on new data, which is probably q...

Joey
1y

I think tautological measurement is a real concern for basically every meta charity, although I'm not sure I agree with your solution. I think the better solution is external evaluation, someone like GiveWell or Founders Pledge who does not have any reason to value CE charities. Typically, these organizations do their own independent research and compare it across their current portfolio of projects. If CE can, for example, fairly consistently incubate charities that GW/FP/etc. rank as best in the world, I think that is at least not organizationally tautol...

The high success rate almost makes me think CE should be incubating even more ambitious, riskier projects, with the expectation of a lower success rate but higher overall EV. Very uncertain about this intuition though, would be interested to hear what CE thinks.

what qualifies as a 'task'

Basically anything that involves actually delivering the organization's goal. It's probably easier to define as everything that's not governance or advice.

why does it reduce independence for the board to do it

If board members are doing it, then board members become part of the organization rather than separate from it.

why does it impede governance

The most important function of a board is to provide accountability for the CEO (and by extension the team below them). If they are involved in something, they cannot also provide externa...

a system of governance that has been shown repeatedly to lead to better organizational performance.

This is a pretty strong empirical claim, and I don't see documentation for it either in your comment or the original post. Can you share what evidence you're basing this on?

Arepo
1y
Thanks Grayden. Very useful answers! Though re this: can you cite some evidence for this?

In the early days, it was hard to get people who were sufficiently “value aligned” with experience. I don’t think that’s the case anymore. For example, there are many value-aligned, highly experienced people on the EA Good Governance Project’s Trustee Directory. The issue is that many people don’t want to give up partial control of something to somebody outside their clique, and/or, prior to October, didn’t have an easy way to find them.

I think there’s no substitute for role models and experience. Whenever I advise people in EA on careers, I always suggest spending some time in ‘normal’ organisations first.

Jona
1y

Hmm. Obviously, career advice depends a lot on the individual and the specific context, but all things equal, I tentatively agree that there is some value in having seen a large "functioning" org. I think many of these orgs also have dysfunctional aspects (e.g., I think most orgs are struggling with sexual harassment and concentration of formal and informal power) and that working at normal orgs has quite high opportunity costs. I also think that many of my former employers were net negative in some silly ways which I think are highly relevant, e.g., high-quality decision making.

Thanks for sharing

EA is too insular and needs to learn from other fields

Definitely!

business can be thought of as the study of how to accomplish goals as an organization - how to get things done in the real world. EA needs the right mix of theory and real world execution

Well put! I used to pitch EA as the "business approach to charity", but that view has fallen out of favour with the rise of the philosophers.

If only there were some kind of measure of an individual's contribution. Maybe we could call it something like PELTIV.

Saying they have been recused since November implies that they weren't recused from decision-making regarding FTX prior to November. If this is true (and I'm hesitant because I don't know all the facts), they were likely not following proper process prior to November.

I cannot comment on how FTX issues were handled prior to November. It's entirely possible that Will and Nick recused themselves too. I'm also not sure what kind of FTX concerns were discussed.

Correct, but if he intends to give away 100% of the proceeds (and presumably considers EVF effective), it has the same effect.

My comment was not to say “he should do x” but instead say “if he intends to do x and remain a trustee, this is a good way to structure it”.

In the UK, non-profits can be either member-based or not. I believe EVF is not member-based. That means the board is accountable to itself. Only the board can appoint and remove Trustees.

This is one of the reasons why term limits are considered good practice. Based on the tenure of some EVF Trustees, it looks like term limits have not been implemented.

Aleks_K
1y
A UK charity can put rules in its governing documents on how to remove trustees. I don't think EVF's governing documents are public, so we probably don't know.

My read of the article is that it is alleging incompetence and/or lack of regard for laws rather than alleging wrongdoing. I'm a trustee of a number of UK charities myself, and the Charity Commission sends all trustees basic information on managing conflicts of interest and data protection. They are by no means "obscure and arbitrary", and I think we as a community need to be extra careful to comply with the letter and spirit of every law given the recent FTX events.

On the book deal, a good way to structure this (i.e. achieving the same objective, but in a legally compliant way) would perhaps be for CEA / EVF to own the copyright. That way, they can receive all the proceeds or structure the deal such that all proceeds get diverted elsewhere. This might well have been what they did.

If it was indeed structured with Will receiving money and then donating it onwards, then at a minimum he should have recused himself from the decision-making on that issue (which he might well have done) but probably should have st...

Ben Millwood
1y
I imagine EVF owning the copyright would prevent Will from benefiting from any sales of the book, including ones by unrelated third parties.

I think it's important to distinguish between morality and legal compliance. I don't think anybody involved here was immoral, but it sounds like there are questions to answer about whether CEA / EVF acted illegally (whether through deliberate decision or lack of competence). Hopefully they will be answered quickly and conclusively, so we can all move on.

If someone is just graduating and interested in entrepreneurship, what do you think is the probability that they will be accepted into YC? The original article mentions a 2.5% acceptance rate at YC of those who apply. Do you need prior success in order to apply?

Ben_West
1y
No, you don't need prior success to apply (it's specifically targeted at people without prior startups), but it is highly selective. A suggestion that I've heard, and that seems reasonable, is to spend 6-12 months trying to build up enough of a company to be accepted to YC (or another top incubator), and then shut down the business if you don't get in. Note that I'm using YCombinator here as a convenience sample - it's not the right choice for everyone (I didn't go through it with either of my companies, and think that was the correct decision in both cases).