I agree and wrote about it here: https://forum.effectivealtruism.org/posts/nnf2GsSq9fhdRCvZj/keynesian-altruism
Your narrative talks about the movement switching from earning to give to being career-focused. I think that has huge survivorship bias in it. There are now many GWWC pledgers who would not call themselves EA. As the movement became bigger, the career-focused side began to dominate discourse because there’s a lot more to say if you are career-focused and trying to coordinate things than if you are head down earning money.
Interest rates are much higher, which is partially offset by inflation (it’s real rates, not nominal, that matter) but not entirely. Today, US Treasuries have a +1.79% real yield over 5 years, higher than the -1.28% I mention in the article but still within the long-term range of -1% to +2% that I also mention there. Importantly, that’s still below real GDP growth expectations, so over time the amount you can buy as a proportion of global wealth declines.
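To make the mechanics concrete, here is a minimal sketch (the GDP growth figure is an assumption chosen for illustration, not a forecast from the article) of how funds invested at a real yield below real GDP growth shrink as a share of global wealth:

```python
# Rough sketch with assumed numbers: funds invested at a real yield below
# real GDP growth shrink as a proportion of global wealth over time.
real_yield = 0.0179       # the 5-year US Treasury real yield mentioned above
real_gdp_growth = 0.025   # assumed long-run real GDP growth (illustrative only)

capital = 1.0             # invested donations, normalised to 1
global_wealth = 1.0       # global wealth, assumed to grow with real GDP

for year in range(1, 31):
    capital *= 1 + real_yield
    global_wealth *= 1 + real_gdp_growth
    if year in (10, 20, 30):
        relative_share = capital / global_wealth
        print(f"Year {year}: {relative_share:.0%} of the original share of global wealth")
```

With these assumed numbers the invested funds fall to roughly 93%, 87%, and 81% of their original share of global wealth after 10, 20, and 30 years respectively.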
Surely it’s not a case of either-or. EA exists because we all found that existing charity was not up to scratch, hence we do want EA to take different approaches. However, I think it’s important to also have people from outside EA (but with good value alignment) to provide diversity of thought and make sure there are no blindspots.
I don't have the specific grant agreement in front of me and feel somewhat uncomfortable disclosing more information about this application before running the request by the grantees. I'm happy to share the following thoughts, which I believe address most of your questions but I'm sorry if you are mostly interested in this specific case as opposed to the more general situation.
For all grants, we have grantees sign a grant agreement that outlines the purpose and use of the grant, record-keeping requirements, monitoring, prohibited use, situations where w
Thanks for the great analysis!
The lack of interest in GHD by the Leaders Forum is often communicated as if GHD should be deprioritised, but I think a fair amount of causation goes the other way. Historically, people promoting GHD have not been invited to the Leaders Forum.
I think it’s similar with engagement. Highly engaged EAs are less likely to support GHD, but that ignores the fact that engagement is defined primarily based on direct work not E2G or careers outside EA, hence people interested in GHD are naturally classified as less engaged even if they are just as committed.
Thanks Grayden!
Sure, the claim hides a lot of uncertainties. At a high level the article says “A implies X, Y and Z”, but you can’t possibly derive all of that information from the single number A. Really, what the article should say is “X, Y and Z are consistent with the value of A”, which is a very different claim.
I don’t specifically disagree with X, Y and Z.
I do think you should hedge more given the tower of assumptions underneath.
The title of the post is simultaneously very confident ("the market implies" and "but not more"), but also somewhat imprecise ("trillions" and "value"). It was not clear to me that the point you were trying to make was that the number was high.
Your use of "but not more" implies you were also trying to assert the point that it was not that high, but I agree with your point above that the market could be even bigger. If you believe it could be much bigger, that seems inconsistent with...
Your claim is very strong that “the market implies X”, when I think what you mean is that “the share price is consistent with X”.
There are a lot of assumptions stacked up:
Your objections seem reasonable but I do not understand their implications due to a lack of finance background. Would you mind helping me understand how your points affect the takeaway? Specifically, do you think that the estimates presented here are biased, much more uncertain than the post implies, or something else?
Can you elaborate? The stock price tells us about the NPV of future profits, not revenue. However, if we make an assumption about margin, that tells us something about future expected revenues.
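As a minimal sketch of what I mean (all inputs here are made up for illustration, not figures from the post): treat the market cap as the NPV of expected profits, back out a steady-state profit with a growing-perpetuity approximation, then divide by an assumed margin to get implied revenue.

```python
# Back-of-envelope sketch, all inputs hypothetical: from market cap (an NPV of
# expected future profits) to implied revenue via an assumed profit margin.
market_cap = 3.0e12      # illustrative valuation, e.g. $3tn
discount_rate = 0.08     # assumed cost of capital
growth_rate = 0.03       # assumed long-run profit growth
net_margin = 0.25        # assumed profit as a share of revenue

# Growing-perpetuity approximation: market_cap = profit / (discount_rate - growth_rate)
implied_profit = market_cap * (discount_rate - growth_rate)
implied_revenue = implied_profit / net_margin

print(f"Implied steady-state profit:  ${implied_profit / 1e9:,.0f}bn per year")
print(f"Implied steady-state revenue: ${implied_revenue / 1e9:,.0f}bn per year")
```

Under these assumptions a $3tn valuation implies roughly $150bn of annual profit and $600bn of annual revenue; the point is only that every step leans on an assumption (discount rate, growth, margin), which is why “consistent with” is the most the price can support.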
I'm also not claiming to prove the claim. More that current market prices seem consistent with a scenario like this, and this scenario seems plausible for other reasons (though they could also be consistent with other scenarios).
I basically say this in the first sentence of the original post. I've edited the intro on the forum to make it clearer.
Perhaps you could say which additional assumptions in the original post you disagree with?
I think forecasting is attractive to many people in EA like myself because EA skews towards curious people from STEM backgrounds who like games. However, I’m yet to see a robust case for it being an effective use of charitable funds (if there is, please point me to it). I’m worried we are not being objective enough and trying to find the facts that support the conclusion rather than the other way round.
The interest within the EA community in forecasting long predates the existence of any gamified forecasting platforms, so it seems pretty unlikely that at a high level the EA community is primarily interested because it's a fun game (this doesn't prove more recent interest isn't driven by the gamified platforms, though my sense is that the current level of relative interest seems similar to where it was a decade ago, so it doesn't feel like it made a huge shift).
Also, AI timelines forecasting work has been highly decision-relevant to a large number of peop...
I'm considering elaborating on this in a full post, but I will do so quickly here as well: It appears to me that there's potentially a misunderstanding here, leading to unnecessary disagreement.
I think that the nature of forecasting in the context of decision-making within governments and other large institutions is very different from what is typically seen on platforms like Manifold, PolyMarket, or even Metaculus. I agree that these platforms often treat forecasting more as a game or hobby, which is fine, but very different from the kind of questions pol...
I think the fact that forecasting is a popular hobby is probably pretty distorting of priorities.
There are now thousands of EAs whose experience of forecasting is participating in fun competitions which have been optimised for their enjoyment. This mass of opinion and consequent discourse has very little connection to what should be the ultimate end goal of forecasting: providing useful information to decision makers.
For example, I’d love to know how INFER is going. Are the forecasts relevant to decision makers? Who reads their reports? How well do people ...
Insolvency happens on an entity-by-entity basis. I don’t know which FTX entity gave money to EA orgs (if anyone knows, please say), nor whether it went first via the founders personally. I would have thought it’s possible that FTX fully repays its creditors, so there is value in the shares, but then FTX’s investors go after the founders personally and they are declared bankrupt.
I’m hugely in favour of principles first as I think it builds a more healthy community. However, my concern is that if you try too hard to be cause neutral, you end up artificially constrained. For example, Global Health and Wellbeing is often a good introduction point to the concept of effectiveness. Then once people are focused on maximisation, it’s easier to introduce Animal Welfare and X-Risk.
I agree that GHW is an excellent introduction to effectiveness and we should watch out for the practical limitations of going too meta, but I want to flag that seeing GHW as a pipeline to animal welfare and longtermism is problematic, both from a common-sense / moral uncertainty view (it feels deceitful and that’s something to avoid for its own sake) and a long-run strategic consequentialist view (I think the EA community would last longer and look better if it focused on being transparent, honest, and upfront about what most members care about, and it’s really important for the long term future of society that the core EA principles don’t die).
When you are a start-up non-profit, it can be hard to find competent people outside your social circle, which is why I created the EA Good Governance Project to make life easier for people.
My two cents:
Each one of us only has a single perspective and it’s human nature to assume other people have similar perspectives. EA is a bubble and there are certainly bubbles within the bubble, e.g. I understand the Bay Area is very AI-focused while London is more plural.
Articles like this that attempt to replace one person’s perspective with hard data are really useful. Thank you.
At EA for Christians, we often interact with people who are altruistic and focused on impact but do not want to associate with EA because of its perceived anti-religion ethos.
On the flip side, since becoming involved with EA for Christians, a number of people have told me they are Christian but keep it quiet for fear it will damage their career prospects.
We should all try to maximise our impact and there’s a good argument for specialisation.
However, I’m concerned by a few things:
Are you sure this isn’t just a function of the definition of highly engaged?
No, I think it probably is partly explained by that.
For context for other readers: the highest level of engagement on the engagement scale is defined as "I am heavily involved in the effective altruism community, perhaps helping to lead an EA group or working at an EA-aligned organization. I make heavy use of the principles of effective altruism when I make decisions about my career or charitable donations." The next highest category of engagement ("I’ve engaged exte...
The EV US board was (in my opinion) significantly undersized to handle a major operational crisis. I suspect it knew at some point that Rebecca Kagan might be stepping down soon and that existing members might have to recuse from important decisions for various reasons. Thus, it would have been reasonable in my eyes to prioritize getting two new people on ASAP and to defer a broader recruitment effort until further expansion.
Sorry if I wasn’t clear. My claim was not “Every organisation has a COO”; it was “If an organisation has a COO, the department they manage is typically front-office rather than back-office and often the largest department”.
For Apple, they do indeed manage front-office operations: “Jeff Williams is Apple’s chief operating officer reporting to CEO Tim Cook. He oversees Apple’s entire worldwide operations, as well as customer service and support. He leads Apple’s renowned design team and the software and hardware engineering for Apple Watch. Jeff a...
Two quick points:
Thanks, Ben. I agree with what you are saying. However, I think that on a practical level, what you are arguing for is not what happens. EA boards tend to be filled with people who work full-time in EA roles, not with fully-aligned, talented individuals from the private sector (e.g. lawyers, corporate managers) who might be earning to give, having followed 80k’s advice 10 years ago.
Thanks for this! You might be right about the non-profit vs. for-profit distinction in 'operations' and your point about the COO being 'Operating' rather than 'Operations' is a good one.
Re avoiding managers doing paperwork, I agree with that way of putting it. However, I think EA needs to recognise that management is an entirely different skill. The best researcher at a research organization should definitely not have to handle lots of paperwork, but I'd argue they probably shouldn't be the manager in the first place! Management is a very different skillset involving people management, financial planning, etc., and those tasks are often pushed onto operations teams by people who shouldn't be managers.
Yeah, I definitely agree with that - I think a pretty common issue is people entering into people management on the basis of their skills at research, and the two don't seem particularly likely to be correlated. I also think organizations sometimes struggle to provide pathways to more senior roles outside of management too, and that seems like an issue when you have ambitious people who want to grow professionally, but no options except people management.
I don’t work in ops or within an EA org, but my observation from the outside is that the way EA does ops is very weird. Note these are my impressions from the outside so may not be reflective of the truth:
I agree with several of your points here, especially the reinventing the wheel one, but I think the first and last miss something. But, I'll caveat this by saying I work in operations for a large (by EA standards) organization that might have more "normal" operations due to its size.
...The term “Operations” is not used in the same way outside EA. In EA, it normally seems to mean “everything back office that the CEO doesn’t care about as long as it’s done”. Outside of EA, it normally means the main function of the organisation (the COO normally has the highest
I’m not commenting on this change specifically, but as someone who is a long-term EA but not working in EA full-time, I find there are way too many name changes in EA. Name changes are hugely expensive (in terms of costs, lost brand equity, and confusion among ‘customers’) so should not be taken lightly.