Preventing catastrophic risks, improving global health and improving animal welfare are goals in themselves. At best, forecasting is a meta topic that supports other goals.
Thanks for sharing, but nobody on that thread seems to be able to explain it! Most people there, like here, seem very sceptical
You might be right but just to add a datapoint: I was featured in an article in 2016. I don’t regret it but I was careful about (1) the journalist and (2) what I said on the record.
I think forecasting is attractive to many people in EA like myself because EA skews towards curious people from STEM backgrounds who like games. However, I have yet to see a robust case for it being an effective use of charitable funds (if there is one, please point me to it). I’m worried we are not being objective enough and are trying to find the facts that support the conclusion rather than the other way round.
The interest within the EA community in forecasting long predates the existence of any gamified forecasting platforms, so it seems pretty unlikely that, at a high level, the EA community is primarily interested because it's a fun game. (This doesn't prove more recent interest isn't driven by the gamified platforms, though my sense is that the current level of relative interest is similar to where it was a decade ago, so it doesn't feel like they made a huge shift.)
Also, AI timelines forecasting work has been highly decision-relevant to a large number of peop...
I'm considering elaborating on this in a full post, but I'll make the point quickly here as well: it appears to me that there's potentially a misunderstanding here, leading to unnecessary disagreement.
I think that the nature of forecasting in the context of decision-making within governments and other large institutions is very different from what is typically seen on platforms like Manifold, PolyMarket, or even Metaculus. I agree that these platforms often treat forecasting more as a game or hobby, which is fine, but very different from the kind of questions pol...
I think the fact that forecasting is a popular hobby is probably pretty distorting of priorities.
There are now thousands of EAs whose experience of forecasting is participating in fun competitions which have been optimised for their enjoyment. This mass of opinion and consequent discourse has very little connection to what should be the ultimate end goal of forecasting: providing useful information to decision makers.
For example, I’d love to know how INFER is going. Are the forecasts relevant to decision makers? Who reads their reports? How well do people ...
Insolvency happens on an entity-by-entity level. I don’t know which FTX entity gave money to EA orgs (if anyone knows, please say), and whether it went first via the founders personally. I would have thought it’s possible that FTX fully repays its creditors, so there is value in the shares, but then FTX’s investors go after the founders personally and they are declared bankrupt.
I’m hugely in favour of principles first as I think it builds a more healthy community. However, my concern is that if you try too hard to be cause neutral, you end up artificially constrained. For example, Global Health and Wellbeing is often a good introduction point to the concept of effectiveness. Then once people are focused on maximisation, it’s easier to introduce Animal Welfare and X-Risk.
I agree that GHW is an excellent introduction to effectiveness and we should watch out for the practical limitations of going too meta. However, I want to flag that seeing GHW as a pipeline to animal welfare and longtermism is problematic, both from a common-sense / moral uncertainty view (it feels deceitful, and that’s something to avoid for its own sake) and from a long-run strategic consequentialist view (I think the EA community would last longer and look better if it focused on being transparent, honest, and upfront about what most members care about, and it’s really important for the long-term future of society that the core EA principles don’t die).
When you are a start-up non-profit, it can be hard to find competent people outside your social circle, which is why I created the EA Good Governance Project to make life easier for people.
I think it's important:
My two cents:
Funding to EA orgs has roughly halved in the last year, so a recession would barely be noticed! More broadly, the point you make is valid. One of the reasons I’ve stayed earning to give is that I’ve never been confident in the stability of EA funding over my future career.
“Donating a kidney results in an over 1300% increase in the risk of kidney disease. A risk-averse interpretation of the data puts the increase in year-to-year mortality after donation upwards of 240%.”
Could you provide these in absolute terms? Relative terms on their own are pretty meaningless and rhetorical.
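To make the comparison concrete, here is a minimal sketch of the conversion, using a purely hypothetical baseline risk (the real baseline would need to come from the underlying studies):

```python
def absolute_from_relative(baseline_risk, relative_increase_pct):
    """Turn a relative-increase figure (e.g. '1300% increase') into an absolute risk."""
    return baseline_risk * (1 + relative_increase_pct / 100)

# Hypothetical 0.1% baseline lifetime risk of kidney disease (illustrative only).
baseline = 0.001
print(absolute_from_relative(baseline, 1300))  # ~0.014, i.e. roughly 1.4% absolute
```

A 1300% relative increase on a small baseline can still be a small absolute risk, which is why the absolute numbers matter for the argument.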
Each one of us only has a single perspective and it’s human nature to assume other people have similar perspectives. EA is a bubble and there are certainly bubbles within the bubble, e.g. I understand the Bay Area is very AI-focused while London is more plural.
Articles like this that attempt to replace one person’s perspective with hard data are really useful. Thank you.
At EA for Christians, we often interact with people who are altruistic and focused on impact but do not want to associate with EA because of its perceived anti-religion ethos.
On the flip side, since becoming involved with EA for Christians, a number of people have told me they are Christian but keep it quiet for fear it will damage their career prospects.
And to add another way the anti-religion ethos is harmful, people may not be comfortable talking to their Christian friends about EA (or even about topics considered aligned with EA) in the first place.
We should all try to maximise our impact and there’s a good argument for specialisation.
However, I’m concerned by a few things:
Highly engaged EAs were much more likely to select research (25.0% vs 15.1%) and much less likely to select earning to give (5.7% vs 15.7%)
are you sure this isn’t just a function of the definition of highly engaged?
No, I think it probably is partly explained by that.
For context for other readers: the highest level of engagement on the engagement scale is defined as "I am heavily involved in the effective altruism community, perhaps helping to lead an EA group or working at an EA-aligned organization. I make heavy use of the principles of effective altruism when I make decisions about my career or charitable donations." The next highest category of engagement ("I’ve engaged exte...
The Parable of the Good Samaritan seems to lean towards impartiality. Although the injured man was lying in front of the Samaritan (geographic proximity), the Samaritan was considered a foreigner / enemy (no proximity of relationship).
Did the EV US Board consider running an open recruitment process and inviting applications from people outside of their immediate circle? If so, why did it decide against?
The EV US board was (in my opinion) significantly undersized to handle a major operational crisis. I suspect it knew at some point that Rebecca Kagan might be stepping down soon and that existing members might have to recuse from important decisions for various reasons. Thus, it would have been reasonable in my eyes to prioritize getting two new people on ASAP and to defer a broader recruitment effort until further expansion.
Thanks, Ben. This is a really thoughtful post.
I wondered if you had any update on the blurring between EA and longtermism. I’ve seen a lot of criticism of EA that is really just low-quality criticism of longtermism because the conclusions can be weird.
Sorry if I wasn’t clear. My claim was not “Every organisation has a COO”; it was “If an organisation has a COO, the department they manage is typically front-office rather than back-office and often the largest department”.
For Apple, they do indeed manage front-office operations: “Jeff Williams is Apple’s chief operating officer reporting to CEO Tim Cook. He oversees Apple’s entire worldwide operations, as well as customer service and support. He leads Apple’s renowned design team and the software and hardware engineering for Apple Watch. Jeff a...
I also found these charts a little confusing. A single value for each, or a clustered column chart, might be clearer.
Two quick points:
Thanks, Ben. I agree with what you are saying. However, I think that on a practical level, what you are arguing for is not what happens. EA boards tend to be filled with people who work full-time in EA roles, not by fully-aligned, talented individuals from the private sector (e.g. lawyers, corporate managers) who might be earning to give, having followed 80k’s advice 10 years ago.
Thanks for this! You might be right about the non-profit vs. for-profit distinction in 'operations' and your point about the COO being 'Operating' rather than 'Operations' is a good one.
Re avoiding managers doing paperwork, I agree with that way of putting it. However, I think EA needs to recognise that management is an entirely different skill. The best researcher at a research organization should definitely not have to handle lots of paperwork, but I'd argue they probably shouldn't be the manager in the first place! Management is a very different skillset involving people management, financial planning, etc., and those tasks are often pushed onto operations teams by people who shouldn't be managers.
Most organizations do not divide tasks between core and non-core. The ones that do (and which are probably most similar to a lot of EA orgs) are professional services firms.
Administration definitely sounds less appealing, but maybe it would be more honest and reduce churn?
I don’t work in ops or within an EA org, but my observation from the outside is that the way EA does ops is very weird. Note these are my impressions from the outside so may not be reflective of the truth:
I agree with several of your points here, especially the reinventing the wheel one, but I think the first and last miss something. But, I'll caveat this by saying I work in operations for a large (by EA standards) organization that might have more "normal" operations due to its size.
...The term “Operations” is not used in the same way outside EA. In EA, it normally seems to mean “everything back office that the CEO doesn’t care about as long as it’s done”. Outside of EA, it normally means the main function of the organisation (the COO normally has the highest
There are some very competent leaders within EA so I don’t think we should make sweeping assumptions. I think we need to make EA a meritocracy.
@Jack Lewars is spot on. If you don’t believe him, take a look at the list of ~70 individuals on the EA Good Governance Project’s trustee directory. In order to govern effectively you need competence and no collective blind spots, not just value alignment.
Thanks, Joey. Really appreciate you taking the time to engage on these questions.
To be clear, I’m not seriously suggesting ignoring all research from before the decision. I’m just saying that mathematically, an independent test needs its backtest data to exclude all calibration data.
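A toy simulation (with entirely made-up numbers) of the point: if you select opportunities on noisy estimates and then score yourself with those same estimates, you overstate impact even when the estimates are unbiased:

```python
import random

random.seed(0)

# 10,000 hypothetical opportunities with unknown true impact.
n = 10_000
true_impact = [random.gauss(0, 1) for _ in range(n)]
# Pre-decision ("calibration") estimates: unbiased but noisy.
estimates = [t + random.gauss(0, 1) for t in true_impact]

# Select the top 10% by estimate, like an incubator choosing ideas to pursue.
selected = sorted(range(n), key=lambda i: estimates[i], reverse=True)[: n // 10]

mean_estimated = sum(estimates[i] for i in selected) / len(selected)
mean_true = sum(true_impact[i] for i in selected) / len(selected)
# The selected portfolio's estimated impact exceeds its true impact:
# evaluating against the calibration data bakes in the selection effect.
print(mean_estimated > mean_true)
```

Scoring the selected portfolio against fresh, post-decision data removes this bias, which is what I mean by excluding the calibration data.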
It strikes me that there are broadly 3 buckets of risk / potential failure:
Thank you for writing this and for all the work you (and others) have put in over the years.
My question is to what extent you think CE’s impact measurement is tautological. If you determine something to be a high impact opportunity and then go and do it, aren’t you by definition doing things you estimate to be high impact (as long as you don’t screw up the execution or realise you made an error)? To fully adjust for the selection effect, you would have to ignore all research conducted before the decision was made and rely solely on new data, which is probably q...
I think tautological measurement is a real concern for basically every meta charity, although I'm not sure I agree with your solution. I think the better solution is external evaluation, someone like GiveWell or Founders Pledge who does not have any reason to value CE charities. Typically, these organizations do their own independent research and compare it across their current portfolio of projects. If CE can, for example, fairly consistently incubate charities that GW/FP/etc. rank as best in the world, I think that is at least not organizationally tautol...
The high success rate almost makes me think CE should be incubating even more ambitious, riskier projects, with the expectation of a lower success rate but higher overall EV. Very uncertain about this intuition though, would be interested to hear what CE thinks.
what qualifies as a 'task'
Basically anything that involves actually delivering the organization's goal. It's probably easier to define as everything that's not governance or advice.
why does it reduce independence for the board to do it
If board members are doing it, then board members become part of the organization rather than separate from it.
why does it impede governance
The most important function of a board is to provide accountability for the CEO (and by extension the team below them). If they are involved in something, they cannot also provide externa...
a system of governance that has been shown repeatedly to lead to better organizational performance.
This is a pretty strong empirical claim, and I don't see documentation for it either in your comment or the original post. Can you share what evidence you're basing this on?
In the early days, it was hard to get people who were sufficiently “value aligned” with experience. I don’t think that’s the case anymore. For example, there are many value-aligned highly-experienced people on the EA Good Governance Project’s Trustee Directory. The issue is that many people don’t want to give up partial control of something to somebody outside their clique and/or prior to October didn’t have an easy way to find them
I think there’s no substitute for role models and experience. Whenever I advise people in EA on careers, I always suggest spending some time in ‘normal’ organisations first
Hmm. Obviously, career advice depends a lot on the individual and the specific context, but all things equal, I tentatively agree that there is some value in having seen a large "functioning" org. I think many of these orgs also have dysfunctional aspects (e.g., I think most orgs are struggling with sexual harassment and concentration of formal and informal power) and that working at normal orgs has quite high opportunity costs. I also think that many of my former employers were net negative for some skills which I think are highly relevant, e.g., high-quality decision making.
Thanks for sharing
EA is too insular and needs to learn from other fields
Definitely!
business can be thought of as the study of how to accomplish goals as an organization - how to get things done in the real world. EA needs the right mix of theory and real world execution
Well put! I used to pitch EA as the "business approach to charity", but that view has fallen out of favour with the rise of the philosophers
If only there were some kind of measure of an individuals contribution. Maybe we could call it something like PELTIV
Thanks for the nudge. I've just posted an update here: https://forum.effectivealtruism.org/posts/nEnDvu2Ha9HLguvK8/update-from-the-ea-good-governance-project
Saying they have been recused since November implies that they weren't recused from decision-making regarding FTX prior to November. If this is true (and I'm hesitant because I don't know all the facts), they were likely not following proper process prior to November.
I cannot comment on how FTX issues were handled prior to November. It's entirely possible that Will and Nick recused themselves too. I'm also not sure what kind of FTX concerns were discussed.
Correct, but if he intends to give away 100% of the proceeds (and presumably considers EVF effective), it has the same effect.
My comment was not to say “he should do x” but instead say “if he intends to do x and remain a trustee, this is a good way to structure it”.
In the UK, non-profits can be either member-based or not. I believe EVF is not member-based. That means the board is accountable to itself. Only the board can appoint and remove Trustees.
This is one of the reasons why term limits are considered good practice. Based on the tenure of some EVF Trustees, it looks like term limits have not been implemented.
My read of the article is that it is alleging incompetence and/or lack of regard for laws rather than alleging wrongdoing. I'm a trustee of a number of UK charities myself and the Charity Commission sends all trustees basic information on managing conflicts of interest and data protection. They are by no means "obscure and arbitrary" and I think we as a community need to be extra careful to comply with the letter and spirit of every law given the recent FTX events.
On the book deal, a good way to structure this (i.e. achieving the same objective, but in a legally compliant way) would perhaps be for CEA / EVF to own the copyright. That way, they can receive all the proceeds or structure the deal such that all proceeds get diverted elsewhere. This might well have been what they did.
If it was indeed structured with Will receiving money and then donating it onwards, then at a minimum he should have recused himself from the decision-making on that issue (which he might well have done) but probably should have st...
I think it's important to distinguish between morality and legal compliance. I don't think anybody involved here was immoral, but it sounds like there are questions to answer about whether CEA / EVF acted illegally (whether through deliberate decision or lack of competence). Hopefully they will be answered quickly and conclusively, so we can all move on.
If someone is just graduating and interested in entrepreneurship, what do you think is the probability that they will be accepted into YC? The original article mentions a 2.5% acceptance rate at YC of those who apply. Do you need prior success in order to apply?
Thanks for sharing. It’s a start, but it’s certainly not a proven Theory of Change. For example, Tetlock himself said that nebulous long-term forecasts are hard to do because there’s no feedback loop. Hence, a prediction market on an existential risk will be inherently flawed.