All of Grayden 🔸's Comments + Replies

I’m not commenting on this change specifically, but as someone who is a long-term EA but not working in EA full-time, I find there are way too many name changes in EA. Name changes are hugely expensive (both in terms of costs, lost brand equity and confusion among ‘customers’) so should not be taken lightly.

2
NickLaing
I agree, although I thought Ambitious Impact was an example of a good one: a great name, better than the original, and it made a lot of sense.

I agree except that suitability is only part of the equation. Luck plays a very important role.

Your narrative talks about the movement switching from earn-to-give to career-focused. I think that has huge survivorship bias in it. There are now many GWWC pledgers who would not call themselves EA. As the movement became bigger, the career-focused side began to dominate discourse because there’s a lot more to say if you are career-focused and trying to coordinate things than if you are head-down earning money.

3
JP Addison🔸
I think this is a good and useful point. And one that's underappreciated in general.

This article has a lot of downvoting (net karma of 39 from 28 votes and 3 disagree votes). Could some of the people who downvoted or disagreed explain their rationale?

1
harfe
This does not seem to be an unusual amount of downvoting to me. The net karma is even higher than the number of votes! As a more general point, I think people should worry less about downvotes on posts with a high net karma.

You could try putting cash into a separate savings account earmarked for donation. When you are happy that you don’t need it, donate it. (But maybe over a few years for tax efficiency)

Interest rates are much higher, which is partially offset by inflation (it’s real, not nominal, rates that matter) but not entirely. Today, US Treasuries have a +1.79% yield over 5 years in real terms, so higher than the -1.28% I mention in the article but still within the long-term range of -1% to +2% that I also mention there. Importantly, that’s still below real GDP growth expectations, so over time the amount you can buy as a proportion of global wealth declines.
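To make that last point concrete, a minimal sketch. The 1.79% real yield is the figure above; the real GDP growth rate is an illustrative assumption, not a forecast:

```python
# A donation pot compounding at the real Treasury yield, compared with
# global wealth growing at real GDP growth. If yield < growth, the pot
# shrinks as a share of global wealth even while growing in real terms.
real_yield = 0.0179      # 5-year real US Treasury yield (from the comment above)
real_gdp_growth = 0.025  # assumed long-run real global GDP growth (illustrative)

fund, wealth = 1.0, 1.0  # both normalised to 1 today
for year in range(1, 31):
    fund *= 1 + real_yield
    wealth *= 1 + real_gdp_growth
    if year in (5, 10, 30):
        print(f"year {year}: fund as a share of global wealth = {fund / wealth:.3f}")
```

Under these assumptions the pot falls to roughly 81% of its initial share of global wealth after 30 years, despite growing in real terms throughout.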

I think all the points still stand albeit the numbers in the example look dated now! Anything you think should be changed?

2
Lorenzo Buonanno🔸
Sorry for the very late reply, basically my understanding is that interest rates now are high, so this post implies that we should consider investing now and donating later. Is that a correct interpretation? Are you following that strategy yourself?

Surely it’s not a case of either-or. EA exists because we all found that existing charity was not up to scratch, hence we do want EA to take different approaches. However, I think it’s important to also have people from outside EA (but with good value alignment) to provide diversity of thought and make sure there are no blind spots.

2
Toby Tremlett🔹
Didn't realise I might be implying we didn't by posting this - makes sense! Collectively, CEA does have a great network - writing a quick take was a way to cast the net a bit wider and gauge enthusiasm.

Do you get 1 karma just for posting this comment? 😂

4
Arepo
It gets an auto-upvote from me, but I'm pretty sure that it doesn't affect my overall karma score (though entertainingly, given its non-negative karma after six votes, a few people must have upvoted it :)

Estonia doesn’t surprise me. It’s very tech-heavy and EA skews heavily to tech people.

What were the conditions of the grant? What follow-up was there after the grant was made? Was there a staged payment schedule based on intermediate outputs? If this grant went to a for-profit and no output was produced, can the money be clawed back?

I don't have the specific grant agreement in front of me and feel somewhat uncomfortable disclosing more information about this application before running the request by the grantees. I'm happy to share the following thoughts, which I believe address most of your questions, but I'm sorry if you are mostly interested in this specific case as opposed to the more general situation.

  • For all grants, we have grantees sign a grant agreement that outlines the purpose and use of the grant, record-keeping requirements, monitoring, prohibited use, situations where w

... (read more)

Thanks for the great analysis!

The lack of interest in GHD by the Leaders Forum is often communicated as if GHD should be deprioritised, but I think a fair amount of causation goes the other way. Historically, people promoting GHD have not been invited to the Leaders Forum.

I think it’s similar with engagement. Highly engaged EAs are less likely to support GHD, but that ignores the fact that engagement is defined primarily based on direct work, not E2G or careers outside EA, hence people interested in GHD are naturally classified as less engaged even if they are just as committed.

9
Vaipan
I agree. We have to take into account that 80k strongly pushed for careers in AI safety, encouraged field building specifically for AI safety, and the job board has become increasingly dominated by AI safety job offers. And the trend is not likely to be reversed soon. However, that does not prevent people outside of EA from obtaining jobs in the GHD field (which is not just development economics, as someone wrote one day); they are just not accounted for. And if the movement keeps giving opportunities and funding specifically towards AI safety, sure, we'll get fewer and fewer GHD people. So it's still impressive, given all this funding concentration, that so many EAs still consider GHD the most pressing cause area.

Thanks Grayden!

  • I strongly agree that engagement =/= commitment or impact. 
  • That said, I'd note that the trend for higher engagement to be associated with stronger support for longtermist over neartermist causes is also observed across other proxies for engagement. For example, perhaps most surprisingly, having taken the GWWC pledge is (within our sample) significantly associated with stronger support for LT over NT.

Sure, the claim hides a lot of uncertainties. At a high level the article says “A implies X, Y and Z”, but you can’t possibly derive all of that information from the single number A. Really, what the article should say is “X, Y and Z are consistent with the value of A”, which is a very different claim.

I don’t specifically disagree with X, Y and Z.

I do think you should hedge more given the tower of assumptions underneath.

The title of the post is simultaneously very confident ("the market implies" and "but not more"), but also somewhat imprecise ("trillions" and "value"). It was not clear to me that the point you were trying to make was that the number was high.

Your use of "but not more" implies you were also trying to assert the point that it was not that high, but I agree with your point above that the market could be even bigger. If you believe it could be much bigger, that seems inconsistent with... (read more)

FWIW this might not be true of the average reader but I felt like I understood all the implicit assumptions Ben was making and I think it's fine that he didn't add more caveats/hedging. His argument improved my model of the world.

3
Benjamin_Todd
It's fair that I only added "(but not more)" to the forum version – it's not in the original article, which was framed more like a lower bound. Though I stand by "not more" in the sense that the market isn't expecting it to be *way* more, as you'd get in an intelligence explosion or automation of most of the economy. Anyway, I edited it a bit. I'm not taking revenue to be equivalent to value. I define value as max consumer willingness to pay, which is closely related to consumer surplus.

Your claim is very strong that “the market implies X”, when I think what you mean is that “the share price is consistent with X”.

There are a lot of assumptions stacked up:

  • The share price represents the point at which the marginal buyer and marginal seller transact. If you assume both are rational, fundamentals-driven investors, then this represents the NPV of future cash flows for the marginal buyer / seller. Note this is not the same as the median / mean expectation.
  • You can use some other market expectations for discount rates etc. to translate that into some possible f
... (read more)
3
Benjamin_Todd
I agree all these factors go into it (e.g. I discuss how it's not the same as the mean expectation in the appendix of the main post, and also the point about AI changing interest rates). It's possible I should hedge more in the title of the post. That said, I think the broad conclusion actually holds up to plausible variation in many of these parameters. For instance, margin is definitely a huge variable, but Nvidia's margin is already very high. More likely the margin falls, and that means the size of the chip market needs to be even bigger than the estimate.

You cannot derive revenue, or the shape of revenue growth, from a stock price. I think what you mean is consensus forecasts that support the current share price. The title of the article is provably incorrect.

Your objections seem reasonable but I do not understand their implications due to a lack of finance background. Would you mind helping me understand how your points affect the takeaway? Specifically, do you think that the estimates presented here are biased, much more uncertain than the post implies, or something else?

Can you elaborate? The stock price tells us about the NPV of future profits, not revenue. However, if we make an assumption about margin, that tells us something about future expected revenues.
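As a minimal sketch of that chain (every number below is an illustrative assumption, not a figure from the post), treating the market cap as a growing perpetuity of next year's profit:

```python
# Back out the revenue implied by a market cap, given assumed discount
# rate, terminal growth, and net margin. Illustrative numbers only.
market_cap = 3_000e9   # assumed market cap, in dollars
discount_rate = 0.08   # assumed real discount rate
growth = 0.02          # assumed terminal growth rate
margin = 0.5           # assumed net margin

# Growing perpetuity: market_cap = profit / (discount_rate - growth)
implied_profit = market_cap * (discount_rate - growth)
implied_revenue = implied_profit / margin
print(f"implied steady-state revenue: ${implied_revenue / 1e9:.0f}B per year")
```

Each step adds an assumption, so the output is a scenario consistent with the price rather than something the price uniquely determines.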

I'm also not claiming to prove the claim. More that current market prices seem consistent with a scenario like this, and this scenario seems plausible for other reasons (though they could also be consistent with other scenarios).

I basically say this in the first sentence of the original post. I've edited the intro on the forum to make it clearer.

Perhaps you could say which additional assumptions in the original post you disagree with?

Thanks for sharing. It’s a start, but it’s certainly not a proven Theory of Change. For example, Tetlock himself said that nebulous long-term forecasts are hard to do because there’s no feedback loop. Hence, a prediction market on an existential risk will be inherently flawed.

2
Nathan Young
I don't think that really works. You can get feedback on 5-year forecasts in 5 years. Metaculus already has some suggestions as to people who are good 5-year forecasters. None of the above are prediction markets.

Preventing catastrophic risks, improving global health and improving animal welfare are goals in themselves. At best, forecasting is a meta topic that supports other goals.

2
Austin
Yes, it's a meta topic; I'm commenting less on the importance of forecasting in an ITN framework and more on its neglectedness. This stuff basically doesn't get funding outside of EA, and even inside EA it has had no institutional commitment; outside of occasional one-off grants, the largest forecasting funding program I'm aware of over the last 2 years was $30k in "minigrants" funded by Scott Alexander out of pocket.

But on the importance of it: insofar as you think future people matter and that we have the ability and responsibility to help them, forecasting the future is paramount. Steering today's world without understanding the future would be like trying to help people in Africa, but without overseas reporting to guide you - you'll obviously do worse if you can't see the outcomes of your actions.

You can make a reasonable argument (as some other commenters do!) that the tractability of forecasting to date hasn't been great; I agree that the most common approaches of "tournament setting forecasting" or "superforecaster consulting" haven't produced much of decision-relevance. But there are many other possible approaches (eg FutureSearch.ai is doing interesting things using an LLM to forecast), and I'm again excited to see what Ben and Javier do here.

Thanks for sharing, but nobody on that thread seems to be able to explain it! Most people there, like here, seem very sceptical.

You might be right but just to add a datapoint: I was featured in an article in 2016. I don’t regret it but I was careful about (1) the journalist and (2) what I said on the record.

I think forecasting is attractive to many people in EA like myself because EA skews towards curious people from STEM backgrounds who like games. However, I’m yet to see a robust case for it being an effective use of charitable funds (if there is, please point me to it). I’m worried we are not being objective enough and trying to find the facts that support the conclusion rather than the other way round.

4
Nathan Young
COI - I work in forecasting. Whether or not forecasting is a good use of funds, good decision-making is probably correlated with impact. So I'm open to the idea that forecasting hasn't been a good use of funds, but it seems it should be a priori. Forecasting in one sense is predicting how decisions will go. How could that not be a good idea in theory? More robust cases in practice:

  1. Forecasters have good track records and are provably good thinkers. They can red-team institutional decisions ("what will be the impacts of this?"). In some sense this is similar to research.
  2. Forecasting is becoming a larger part of the discourse, and this is probably good. It is much more common to see the Economist, the FT, Matt Yglesias, and Twitter discourse referencing specific testable predictions.
  3. In making AI policy specifically, it seems very valuable to guess progress and guess the impact of changes. To me it looks like Epoch and Metaculus do useful work here that people find valuable.

The interest within the EA community in forecasting long predates the existence of any gamified forecasting platforms, so it seems pretty unlikely that at a high level the EA community is primarily interested because it's a fun game (this doesn't prove more recent interest isn't driven by the gamified platforms, though my sense is that the current level of relative interest seems similar to where it was a decade ago, so it doesn't feel like it made a huge shift).

Also, AI timelines forecasting work has been highly decision-relevant to a large number of peop... (read more)

5
Mo Putera
Would you count Holden's take here as a robust case for funding forecasting as an effective use of charitable funds? This is my own (possibly very naive) interpretation of one motivation behind some of Open Phil's forecasting-related grants. Actually, maybe it's also useful to just look at the biggest grants from that list.
5
joshcmorrison
Personally, I think specifically forecasting for drug development could be very impactful: both in the general sense of aligning fields around the probability of success of different approaches (at a range of scales - very relevant both for scientists and funders) and the more specific regulatory use case (public predictions of safety/efficacy of medications as part of approvals by FDA/EMA etc.).

More broadly, predicting the future is hugely valuable. Insofar as effective altruism aims to achieve consequentialist goals, the greatest weakness of consequentialism is uncertainty about the effects of our actions. Forecasting targets that problem directly. The financial system creates a robust set of incentives to predict future financial outcomes - trying to use forecasting to build a tool with broader purpose than finance seems like it could be extremely valuable.

I don't really do forecasting myself so I can't speak to the field's practical ability to achieve its goals (though as an outsider I feel optimistic), so perhaps there are practical reasons it might not be a good investment. But overall to me it definitely feels like the right thing to be aiming at.

I'm considering elaborating on this in a full post, but I will do so quickly here as well: It appears to me that there's potentially a misunderstanding here, leading to unnecessary disagreement.

I think that the nature of forecasting in the context of decision-making within governments and other large institutions is very different from what is typically seen on platforms like Manifold, PolyMarket, or even Metaculus. I agree that these platforms often treat forecasting more as a game or hobby, which is fine, but very different from the kind of questions pol... (read more)

I think the fact that forecasting is a popular hobby is probably pretty distorting of priorities.

There are now thousands of EAs whose experience of forecasting is participating in fun competitions which have been optimised for their enjoyment. This mass of opinion and consequent discourse has very little connection to what should be the ultimate end goal of forecasting: providing useful information to decision makers.

For example, I’d love to know how INFER is going. Are the forecasts relevant to decision makers? Who reads their reports? How well do people ... (read more)

5
Vasco Grilo🔸
Thanks for the comment, Grayden. For context, readers may want to check the question post Why is EA so enthusiastic about forecasting?.

Insolvency happens on an entity-by-entity level. I don’t know which FTX entity gave money to EA orgs (if anyone knows, please say), and whether it went first via the founders personally. I would have thought it’s possible that FTX fully repays its creditors, so there is value in the shares, but then FTX’s investors go after the founders personally and they are declared bankrupt.

2
Jason
If I remember a report from Ray et al. correctly, there were a bunch of intertwined bank accounts. I believe some transactions were made from Alameda-owned accounts, some from North Dimension-owned accounts, etc. without much rhyme or reason.

I’m hugely in favour of principles first as I think it builds a more healthy community. However, my concern is that if you try too hard to be cause neutral, you end up artificially constrained. For example, Global Health and Wellbeing is often a good introduction point to the concept of effectiveness. Then once people are focused on maximisation, it’s easier to introduce Animal Welfare and X-Risk.

I agree that GHW is an excellent introduction to effectiveness and we should watch out for the practical limitations of going too meta, but I want to flag that seeing GHW as a pipeline to animal welfare and longtermism is problematic, both from a common-sense / moral uncertainty view (it feels deceitful, and that’s something to avoid for its own sake) and from a long-run strategic consequentialist view (I think the EA community would last longer and look better if it focused on being transparent, honest, and upfront about what most members care about, and it’s really important for the long-term future of society that the core EA principles don’t die).

6
calebp
I agree with the overall point, though I am not sure I've seen much empirical evidence for the claim that GHD is a good starting point (or at least I think it's often overstated). I got into EA stuff through GHD, but this may have just been because there were a lot more GHD/EA intro materials at the time. I think that the ecosystem is now a lot more developed, and I wouldn't be surprised if GHD didn't have much of an edge over cause-first outreach (for AW or x-risk). Maybe our analysis should be focussed on EA principles, but the interventions themselves can be branded however they like? E.g. we're happy to fund GHD giving games because we believe that they contribute to promoting caring about impartiality and cost-effectiveness in doing good - but they don't get much of a boost or penalty from being GHD giving games (as opposed to some other suitable cause area).

When you are a start-up non-profit, it can be hard to find competent people outside your social circle, which is why I created the EA Good Governance Project to make life easier for people.

I think it's important:

  1. To put in place good practices (e.g. regular board meetings without the CEO) BEFORE they are needed.
  2. For FUNDERS to ask questions about effective governance and bear responsibility when they get it wrong.

My two cents:

  • Most governments heavily subsidise R&D (which is equivalent to a deliberate negative externality), often through tax credits
  • The patent system allows companies to extract abnormal profits for 20 years and incentivises a race (even if somebody independently develops the technology, they can’t use it if somebody else has patented it). This system is a deliberately inefficient market
  • Corporate R&D tends to be much more short-term and customer-focused. If you come from an academic background, you will be shocked by what is counted as R&
... (read more)

Funding to EA orgs has roughly halved in the last year, so a recession would barely be noticed! More broadly, the point you make is valid. One of the reasons I’ve stayed earning to give is that I’ve never been confident in the stability of EA funding over my future career.

Donating a kidney results in an over 1300% increase in the risk of kidney disease. A risk-averse interpretation of the data puts the increase in year-to-year mortality after donation upwards of 240%.
Could you provide these in absolute terms? Relative terms on their own are pretty meaningless and rhetorical.
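To illustrate why absolute figures matter, a quick sketch. Only the 1300% figure comes from the quote above; the baseline rates are made-up placeholders, since the true baseline is exactly what the relative framing hides:

```python
# The same relative increase implies a tiny or a large absolute change
# depending on the baseline rate. Baselines below are hypothetical.
def risk_after(baseline: float, relative_increase_pct: float) -> float:
    """Absolute risk after applying a relative increase."""
    return baseline * (1 + relative_increase_pct / 100)

for baseline in (0.0003, 0.03):  # hypothetical baselines: 0.03% and 3%
    after = risk_after(baseline, 1300)
    print(f"baseline {baseline:.2%} -> {after:.2%} "
          f"(absolute increase of {after - baseline:.2%})")
```

A 1300% relative increase on a 0.03% baseline is an absolute change of under half a percentage point; on a 3% baseline it is nearly 40 points. Without the baseline, the headline number tells us very little.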

Great article. Very concise, clear and actionable!

You raise some good points, so I have removed that point from the main article.

Each one of us only has a single perspective and it’s human nature to assume other people have similar perspectives. EA is a bubble and there are certainly bubbles within the bubble, e.g. I understand the Bay Area is very AI-focused while London is more plural.

Articles like this that attempt to replace one person’s perspective with hard data are really useful. Thank you.

At EA for Christians, we often interact with people who are altruistic and focused on impact but do not want to associate with EA because of its perceived anti-religion ethos.

On the flip side, since becoming involved with EA for Christians, a number of people have told me they are Christian but keep it quiet for fear it will damage their career prospects.

And to add another way the anti-religion ethos is harmful, people may not be comfortable talking to their Christian friends about EA (or even about topics considered aligned with EA) in the first place.

We should all try to maximise our impact and there’s a good argument for specialisation.

However, I’m concerned by a few things:

  • It’s not obvious to me that spending more money on yourself will make you better at your job
  • There’s a danger of arrogance clouding our judgment, e.g. I don’t think 99% of people in EA should be flying business class
  • Donating has value for many people due to the “skin in the game” effect
2
carter allen🔸
Agree with everything here. But the argument isn’t that people should spend the money they’d otherwise donate to charity on themselves/flying business class, it’s that they should use it to further whatever their particular path to impact is.

Highly engaged EAs were much more likely to select research (25.0% vs 15.1%) and much less likely to select earning to give (5.7% vs 15.7%)

are you sure this isn’t just a function of the definition of highly engaged?

are you sure this isn’t just a function of the definition of highly engaged?

 

No, I think it probably is partly explained by that.

For context for other readers: the highest level of engagement on the engagement scale is defined as "I am heavily involved in the effective altruism community, perhaps helping to lead an EA group or working at an EA-aligned organization. I make heavy use of the principles of effective altruism when I make decisions about my career or charitable donations." The next highest category of engagement ("I’ve engaged exte... (read more)

1
Larks
Should redefine engagement in terms of total $ donated to charity in the last year and see how the stats look.

Yes! A rather important typo! I’ve now fixed it.

The Parable of the Good Samaritan seems to lean towards impartiality. Although the injured man was lying in front of the Samaritan (geographic proximity), the Samaritan was considered a foreigner / enemy (no proximity of relationship).

7
Luke Eure
It's the geographic proximity that I get hung up on though. He is right in front of the Samaritan. I can't think of any parables that involve someone showing mercy to a person who is not right in front of them. Every time Jesus performs a miracle, it is for someone right in front of him. I am strongly in favor of more impartiality, but think most Christians find it a stretch to say that the Good Samaritan parable is meant to imply we should care for future people and people on the other side of the world who they will never meet.
1
dominicroser
Is there a typo in the first sentence - should it say impartiality rather than partiality?

Did the EV US Board consider running an open recruitment process and inviting applications from people outside of their immediate circle? If so, why did it decide against?

The EV US board was (in my opinion) significantly undersized to handle a major operational crisis. I suspect it knew at some point that Rebecca Kagan might be stepping down soon and that existing members might have to recuse themselves from important decisions for various reasons. Thus, it would have been reasonable in my eyes to prioritize getting two new people on ASAP and to defer a broader recruitment effort until further expansion.

Thanks, Ben. This is a really thoughtful post.

I wondered if you had any update on the blurring between EA and longtermism. I‘ve seen a lot of criticism of EA that is really just low quality criticism of longtermism because the conclusions can be weird.

Sorry if I wasn’t clear. My claim was not “Every organisation has a COO”; it was “If an organisation has a COO, the department they manage is typically front-office rather than back-office and often the largest department”.

For Apple, they do indeed manage front-office operations:  “Jeff Williams is Apple’s chief operating officer reporting to CEO Tim Cook. He oversees Apple’s entire worldwide operations, as well as customer service and support. He leads Apple’s renowned design team and the software and hardware engineering for Apple Watch. Jeff a... (read more)

I also found these charts a little confusing. A single value for each or a clustered column chart might be clearer

2
David_Moss
Thanks for the additional comment. If it helps, you can simply look at the relationship between longtermism (left) and neartermism (right) separately, and ignore the other. I think it is relatively safe in this instance to look at the relationship with mean longtermism-minus-neartermism scores (shown below), but only because we first examined the individual relationships (shown above), since - as I noted in my reply to Nathan - support for longtermist causes and neartermist causes don't reflect a single dimension.

Two quick points:

  1. Yes, legal control is the first consideration, but governance requires skill not just value-alignment
  2. I think in 2023 the skills you want largely exist within the community; it’s just that (a) people can’t find them easily (hence I founded the EA Good Governance Project) and (b) people need to be willing to appoint outside their clique

Thanks, Ben. I agree with what you are saying. However, I think that on a practical level, what you are arguing for is not what happens. EA boards tend to be filled with people who work full-time in EA roles, not by fully aligned, talented individuals from the private sector (e.g. lawyers, corporate managers) who might be earning to give having followed 80k’s advice 10 years ago.

Thanks for this! You might be right about the non-profit vs. for-profit distinction in 'operations' and your point about the COO being 'Operating' rather than 'Operations' is a good one.

Re avoiding managers doing paperwork, I agree with that way of putting it. However, I think EA needs to recognise that management is an entirely different skill. The best researcher at a research organization should definitely not have to handle lots of paperwork, but I'd argue they probably shouldn't be the manager in the first place! Management is a very different skillset involving people management, financial planning, etc. - skills that are often pushed onto operations teams by people who shouldn't be managers.

Yeah, I definitely agree with that - I think a pretty common issue is people entering into people management on the basis of their skills at research, and the two don't seem particularly likely to be correlated. I also think organizations sometimes struggle to provide pathways to more senior roles outside of management, and that seems like an issue when you have ambitious people who want to grow professionally but no options except people management.

Most organizations do not divide tasks between core and non-core. The ones that do (and are probably most similar to a lot of EA orgs) are professional services ones

Administration definitely sounds less appealing, but maybe it would be more honest and reduce churn?

I don’t work in ops or within an EA org, but my observation from the outside is that the way EA does ops is very weird. Note these are my impressions from the outside so may not be reflective of the truth:

  • The term “Operations” is not used in the same way outside EA. In EA, it normally seems to mean “everything back office that the CEO doesn’t care about as long as it’s done”. Outside of EA, it normally means the main function of the organisation (the COO normally has the highest number of people reporting to them after the CEO)
  • EA takes highly talented peopl
... (read more)
4
Linch
I was surprised by this claim, so I checked every (of the 3) non-EA orgs I've worked at. Not only is it not true that "the COO normally has the highest number of people reporting to them after the CEO," literally none of them even have a COO for the whole org. To check whether my experiences were representative, I went through this list of the largest companies. It looks like of the 5 largest companies by market cap, 2 of them have COOs (Apple, Amazon). Microsoft doesn't have a designated COO, but they had a Chief Human Resources Officer and a Chief Financial Officer, which in smaller orgs would probably be a COO job[1]. So maybe an appropriate prior is 50%? This is a very quick spot check, however; I would be interested in more representative data.

[1] Notably, they didn't have a CTO, which surprised me.

I agree with several of your points here, especially the reinventing the wheel one, but I think the first and last miss something. But, I'll caveat this by saying I work in operations for a large (by EA standards) organization that might have more "normal" operations due to its size.

The term “Operations” is not used in the same way outside EA. In EA, it normally seems to mean “everything back office that the CEO doesn’t care about as long as it’s done. Outside of EA, it normally means the main function of the organisation (the COO normally has the highest

... (read more)
5
Joseph
I agree that this is weird. In EA, operations is something like "everything that supports the core work and allows other people to focus on the core work," while outside of EA operations is the core work of a company. Although I wish that EA hadn't invented its own definition for operations, at this point I don't see any realistic options for it changing.

There are some very competent leaders within EA so I don’t think we should make sweeping assumptions. I think we need to make EA a meritocracy.

8
Ben Stewart
Sure, but my impression of the number of them and their competence has decreased. It’s still moderately high. And meritocracy cuts both ways - meritocracy would push harder on judging current leaders by their past success, i.e. harshly, and not be as beholden to contingent or social reasons for believing they’re competent.