All of Benjamin_Todd's Comments + Replies

This is great. I'm so glad this analysis has finally been done!

One quick idea: should 'speed-ups' be renamed 'accelerations'? I think I'd personally find that clearer, and it would help to disambiguate it from earlier uses of 'speed-up' (e.g. in Nick's thesis).

Toby_Ord (9mo):
I've thought about this a lot and strongly think it should be the way I did it in this chapter. Otherwise all the names are off by one derivative. e.g. it is true that for one of my speed-ups, one has to temporarily accelerate, but you also have to temporarily change every higher derivative too, and we don't name it after those. The key thing that changes permanently and by a fixed amount is the speed.
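
To make the "off by one derivative" point concrete, here is a toy numerical sketch (all numbers made up): a permanent step in speed only requires a temporary change in acceleration.

```python
import math

# Toy sketch: speed steps smoothly from 1.0 to 1.5 around t = 1.5 (a "speed-up").
# Illustrative numbers only.
def speed(t):
    return 1.0 + 0.5 / (1 + math.exp(-20 * (t - 1.5)))

dt = 0.01
accel = [(speed(i * dt + dt) - speed(i * dt)) / dt for i in range(300)]

# The change in speed is permanent; the acceleration is only temporarily nonzero.
print(f"speed change: {speed(3) - speed(0):+.2f}")
print(f"peak acceleration: {max(accel):.2f}, acceleration at the end: {accel[-1]:.4f}")
```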

As one practical upshot of this, I helped 80k make a round of updates to their online articles in light of FTX. See more.

More on what moderation might mean in practice here: 

https://80000hours.org/2023/05/moderation-in-doing-good/

I really liked this post. I find the undergrad degree metaphor useful in some ways (focus on succeeding in your studies over 4 years, but give a bit of thought to how it sets you up for the next stage), but since the end game is only 3 years (rather than a normal 40-year career), overall it seems like your pacing and attitude could end up pretty different.

Maybe the analogy could be an undergrad where their only goal is to get the "best" graduate degree possible. Then high school = early game, undergrad = midgame, graduate degree = end game. Maybe you could... (read more)

Yes I agree that could be a good scenario to emerge from this – a very salient example of this kind of thinking going wrong is one of the most helpful things to convince people to stop doing it.

The 80k team are still discussing it internally and hope to say more at a later date.

 
Speaking personally, Holden's comments (e.g. in Vox) resonated with me. I wish I'd done more to investigate what happened at Alameda.

If you're tracking the annual change in wealth between two periods, you should try to make sure the start and end points are either both market peaks or both market lows.

e.g. from 2017 to 2021, or 2019 to Nov 2022 would be valid periods for tracking crypto.

If you instead track from e.g. 2019 to 2021, then you're probably going to overestimate the trend.

Another option would be to average over periods significantly longer than a typical market cycle (e.g. 10yr).
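
As a minimal sketch of why the choice of endpoints matters (toy prices, not real crypto data):

```python
# Toy year-end prices for a cyclical asset (illustrative, not real data).
prices = {2017: 100, 2019: 40, 2021: 220}

def annualized_growth(start, end):
    """Compound annual growth rate between two observation years."""
    return (prices[end] / prices[start]) ** (1 / (end - start)) - 1

# Trough-to-peak badly overstates the underlying trend; peak-to-peak doesn't.
print(f"2019 -> 2021 (trough to peak): {annualized_growth(2019, 2021):.0%}")  # ~134%
print(f"2017 -> 2021 (peak to peak):   {annualized_growth(2017, 2021):.0%}")  # ~22%
```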

mikbp (1y):
Ah, I see. Thanks. It makes total sense.

Thanks!

At this point, my social life is pretty much only with people who aren't in the EA community.

Small comment on this:

actively fighting against EA communities to become silos and for EA enterprises to have workers outside EA communities would be of great value 

It depends on the org, but for smaller orgs that are focused on EA community building, I still think it could make sense for them to hire pretty much only people who are very interested in EA. I wouldn't say the same about e.g. most biorisk orgs though.

mikbp (1y):
Super! :-D I think I would also agree regarding the community building organisations. I haven't really thought about that case, but it intuitively makes sense.

Yes, I'd basically agree – he didn't influence the thinking that much, but he did impact what you could get paid to do (and that could also have long-term impacts on the structure of the community).

Though, given income inequality, the latter problem seems very hard to solve. 

That's useful - my 'naive optimizing' thing isn't supposed to be the same thing as naive utilitarianism, but I do find it hard to pin down the exact trait that's the issue here, and those are interesting points about confidence maybe not being the key thing.

Just a small clarification, I'm not saying we should abandon the practical project, but it could make sense to (relatively speaking) focus on more tractable areas / dial down ambitions / tilt more towards the intellectual project until we've established more operational competence. 

I also agree dissociating has significant costs that need to be weighed against the other reasons.

I'd agree a high degree of confidence + strong willingness to act combined with many other ideologies leads to bad stuff.

Though I still think some ideologies encourage maximisation more than others.

Utilitarianism is much more explicit in its maximisation than most ideologies, plus it (at least superficially) actively undermines the normal safeguards against dangerous maximisation (virtues, the law, and moral rules) by pointing out these can be overridden for the greater good.

Like yes there are extreme environmentalists and that's bad, but normally when som... (read more)

Cullen (1y):
I think it's true that utilitarianism is more maximizing than the median ideology. But I think a lot of other ideologies are minimizing in a way that creates equal pathologies in practice. E.g., deontological philosophies are often about minimizing rights violations, which can be used to justify pretty extreme (and bad) measures.
MaxRa (1y):
I moderately confidently expect there to be a higher proportion of extreme environmentalists than extreme utilitarians. I think utilitarians will be generally more intelligent / more interested in discussion / more desiring to be "correct" and "rational", and that the correct and predominant reply to things like the "Utilitarianism implies killing healthy patients!" critique is "Yeah, that's naive Utilitarianism, I'm a Sophisticated Utilitarian who realizes the value of norms, laws, virtues and intuitions for cooperation".

I basically agree and try to emphasize personality much more than ideology in the post.

That said, it doesn't seem like a big leap to think that confidence in an ideology that says you need to maximise a single value to the exclusion of all else could lead to dangerously optimizing behaviour...

Having more concern for the wellbeing of others is not the problematic part. But utilitarianism is more than that.

Moreover it could still be true that confidence in utilitarianism is in practice correlated with these dangerous traits.

I expect it's the negative compone... (read more)

it doesn't seem like a big leap to think that confidence in an ideology that says you need to maximise a single value to the exclusion of all else could lead to dangerously optimizing behaviour.

I don't find this a persuasive reason to think that utilitarianism is more likely to lead to this sort of behavior than pretty much any other ideology. I think a huge number of (maybe all?) ideologies imply that maximizing the good as defined by that ideology is the best thing to do, and that considerations outside of that ideology have very little weight. You se... (read more)

Yes, something like that: he of course had an influence on what you could get paid for (which seems hard to avoid given some people have more money than others), but I don't think he had a big influence on people's thinking about cause prioritisation.

I don’t think SBF impacted cause prioritization by promoting pet causes that weren’t already favored by parts of the EA community. But I do think SBF, through the FTX Future Fund, likely shifted how people prioritized across different EA causes. 

My sense is that the easy availability of longtermist funding made people more likely to work in that space, as people were well aware of a dynamic where (in Peter Wildeford’s words): “it's clear that global poverty does get the most overall EA funding right now, but it's also clear that it's more easy for me... (read more)

Hmm that does seem worse than I expected.

I wonder if it's because GWWC has cut back outreach or is getting less promotion by other groups (whereas 80k continued its marketing as before, plus a lot of 80k's reach is passive), or whether it points to outreach actually being harder now.

AnonymousEAForumAccount (6mo):
FYI I’ve just released a post which offers significantly more empirical data on how FTX has impacted EA. FTX’s collapse seems to mark a clear and sizable deterioration across a variety of different EA metrics. I included your comment about 80k's metrics being largely unaffected, but if there's some updated data on if/how 80k's metrics have changed post-FTX that would be very interesting to see.

I had you in mind as a good utilitarian when writing :)

Good point that just saying 'naively optimizing' utilitarians is probably clearest most of the time. I was looking for other words that would denote high confidence and willingness to act without qualms.

[anonymous] (1y):
minor nitpick - this doesn't seem to capture naive utilitarianism as I understand it. I always thought naive utilitarianism was about going against common sense norms on the basis of your own personal fragile calculations. eg lying is prone to being rumbled and one's reputation is very fragile, so it makes sense to follow the norm of not lying even if your own calculations seem to suggest that it is good because the calculations will tend to miss longer term indirect and subtle effects. But this is neither about (1) high confidence nor (2) acting without qualms. Indeed one might decide not to lie with high confidence and without qualms. Equally, one might choose to 'lie for the greater good' with low confidence and with lots of qualms. This would still be naive utilitarian behaviour 

Thank you!

Yes, I think if you make update A due to a single data point, and then realise you shouldn't have updated on a single data point, you should undo update A. Your original reasoning was wrong.

That aside, in the general case I think it can sometimes be justified to update a lot on a single data point. E.g. if you think an event was very unlikely, and then that event happens, your new probability estimate for the event will normally go up a lot.

In other cases, if you already have lots of relevant points, then adding a single extra one won't have m... (read more)
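
As a toy Bayesian illustration of the "unlikely event happens" case (the numbers here are purely illustrative):

```python
# Two hypotheses about an event's underlying annual probability.
p_high_rate = 0.01        # prior that the event is actually fairly likely
p_event_if_high = 0.9     # P(observe event | high-rate hypothesis)
p_event_if_low = 0.01     # P(observe event | low-rate hypothesis)

# Observing the event once moves the posterior a lot (Bayes' rule).
posterior = p_high_rate * p_event_if_high / (
    p_high_rate * p_event_if_high + (1 - p_high_rate) * p_event_if_low
)
print(f"P(high-rate hypothesis): {p_high_rate:.0%} -> {posterior:.0%}")  # 1% -> ~48%
```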

Some of the ones I've seen:

80k's metrics seem unaffected so far, and 80k is one of the biggest drivers of community growth.

I've also heard that EAG(x) applications didn't seem affected.

GWWC pledgers were down, though a lot of that is due to them not doing a pledge drive in Dec. My guess is that if they do a pledge drive next Dec similar to previous ones, the results will be similar. The baseline of monthly pledges seems ~similar.

I would be surprised if the effect from the lack of a pledge drive would run on into February and March 2023 though. Comparison YoY here is 12 months before, Jan 2023 to 2022 etc.

Good point that there are reasons why work could get more valuable the closer you are – I should have mentioned that.

Also interesting points about option value.

I agree with many of the points, especially that personal fit is a big deal, that doing a PhD is also in part useful research (rather than pure career capital), and that what matters is time until the x-risk rather than random definitions of AGI, but I'm worried this bit understates the reasons for urgency quite a bit:

you might then conclude that delaying your career by 6 years would cause it to have 41/91 = 45% of the value. If that’s the case, if the delay increased the impact you could have by a bit more than a factor of 2, the delay would be worth it.

... (read more)
alex lawsen, previously alexrjl (1y):
This is a useful consideration to point out, thanks. I push back a bit below on some specifics, but this effect is definitely one I'd want to include if I do end up carving out time to add a bunch more factors to the model. I don't think having skipped the neglectedness considerations you mention is enough to call the specific example you quote misleading though, as it's very far from the only thing I skipped, and many of the other things point the other way. Some other things that were skipped:

  • Work after AGI likely isn't worth 0, especially with e.g. Metaculus definitions.
  • While in the community building examples you're talking about, shifting work later doesn't change the quality of that work, this is not true wrt PhDs (doing a PhD looks more like truncating the most junior n years of work than shifting all years of work n years later).
  • Work that happens just before AGI can be done with a much better picture of what AGI will look like, which pushes against the neglectedness effect.
  • Work from research leads may actually increase in effectiveness as the field grows, if the growth is mostly coming from junior people who need direction and/or mentorship, as has historically been the case.

And then there's something about changing your mind, but it's unclear to me which direction this shifts things:

  • It's easier to drop out of a PhD than it is to drop into one, if e.g. your timelines suddenly shorten.
  • If your timelines shorten because AGI arrives, though, it's too late to switch, while big updates towards timelines being longer are things you can act on, pushing towards acting as if timelines are short.

First to clarify, yes most of the general public haven't heard of EA, and many haven't made the connection with FTX.

I think EA's brand has mainly been damaged among what you could call the chattering classes. But I think that is a significant cost. 

It's also damaged in the sense that if you search for it online you find a lot more negative stuff now.

On the question about comparisons, unfortunately I don't have a comprehensive answer.

Part of my thinking is that early on I thought EA wasn't a good public facing brand. Then things went better than I expe... (read more)

From copying suggested changes :(

I was trying to do that :) That's why I opened with naive optimizing as the problem. The point about gung-ho utilitarians was supposed to be an example of a potential implication.

Yeah, I think "proudly self-identified utilitarians" is not the same as "naively optimizing utilitarians", so would encourage you to still be welcoming to those in the former group who are not in the latter :-)

ETA: I did appreciate your emphasizing that "it’s quite possible to have radical inside views while being cautious in your actions."

Appendix: some other comments that didn't make it into the main post

I made some mistakes in tracking EA money

These aren’t updates but rather things I should have been able to figure out ex ante:

  • Mark down private valuations of startups by a factor of ~2 (especially if the founders hold most of the equity) 
  • Measure growth from peak-to-peak or trough-to-trough rather than trough-to-peak – I knew crypto was in a huge bull market
  • I maybe should have expected more correlation between Alameda and FTX, and a higher chance of going to zero, based on the outside
... (read more)
mikbp (1y):
What does this mean?

I think I made a mistake to publicly affiliate 80,000 Hours with SBF as much as we did


But 80k didn't just "affiliate" with SBF - they promoted him with untrue information. And I don't see this addressed in the above or anywhere else. In particular, his podcast interview made a thing of him driving a Corolla and sleeping on a beanbag as part of promoting the frugal Messiah image, when it seems likely that at least some people high up in EA knew that this characterisation was false. Plus no mention of the stories of how he treated his Alameda co-founders. ... (read more)

[anonymous] (1y):
Interesting! Can you share details? (Or point to where others have?)
pseudonym (1y):
Is this an incomplete sentence?

It's a big topic, but the justification is supposed to be the part just before. I think we should be more worried about attracting naive optimizers, and I think people who are gung-ho utilitarians are one example of a group who are more likely to have this trait.

I think it's notable that SBF was a gung-ho utilitarian before he got into EA.

It's maybe worth clarifying that I'm most concerned about people who have a combination of high confidence in utilitarianism and a lack of qualms about putting it into practice.

There are lots of people who see utilitarianism a... (read more)

It's maybe worth clarifying that I'm most concerned about people who have a combination of high confidence in utilitarianism and a lack of qualms about putting it into practice.

Thank you, that makes more sense + I largely agree.

However, I also wonder if all this could be better gauged by watching out for key psychological traits/features instead of probing someone's ethical view. For instance, a person low in openness showing high-risk behavior who happens to be a deontologist could cause as much trouble as a naive utilitarian optimizer. In either case, it would be the high-risk behavior that would potentially cause problems rather than how they ethically make decisions. 

Thank you - great to see work vetting & deepening the understanding of fundamental EA ideas like this.

Isn't the point of EA that we are responsive to new arguments? So, unlike Extinction Rebellion where belief that climate change is a real and imminent risk is essential, our "belief system" is rather more about openness and willingness to update in response to 1) evidence, and 2) reasonable arguments about other world views?

 

Yes - but the issue plays itself out one level up.

For instance, most people aren't very scope sensitive – firstly in their intuitions, and especially when it comes to acting on them.

I think scope sensitivity is a key part of effec... (read more)

howdoyousay? (1y):
Now having read your reply, I think we're likely closer together than apart on views. But... I don't think this is how I see the question of board choice in practice. In theory yes, for the specific legal, hard mechanisms you mention. But in practice in my experience boards significantly check and challenge direction of the organisation, so the collective ability of board members to do this should be factored in appointment decisions which may trade off against legal control being put in the 'safest pair of hands'. That said, I feel back and forth responses on the EA forum may be exhausting their value here; I feel I'd have more to say in a brainstorm about potential trade-offs between legal control and ability to check and challenge, and open to discussing further if helpful to some concrete issue at hand :)

A third question is:

3. What values should one morally advocate for?

The answers to this could be different yet again.

Much past longtermist advocacy arguably fits into the framework of trying to get people to increase their altruistic willingness to pay to help future generations, and I think it makes sense on those grounds. Though again it could perhaps be clearer that the question of what governments should do all-considered, given people's current willingness to pay, is a different question.

The message I take is that there's potentially a big difference between these two questions:

  1. Which government policies should one advocate for?
  2. For an impartial individual, what are the best causes and interventions to work on?

Most of effective altruism, including 80,000 Hours, has focused on the second question.

This paper makes a good case for an answer to the first, but doesn't tell us much about the second.

If you only value the lives of the present generation, it's not-at-all obvious that marginal investment in reducing catastrophic risk beats funding Giv... (read more)

It's things like a house in a slum made from corrugated iron or mud, with no electricity and a shared latrine pit instead of plumbing, which gets periodically flooded. The rent for something like this in the US would be very cheap, but this option doesn't exist since there are so few people who are poor enough to pay for it (and it's illegal too).

Likewise cheaper foods like cassava dough that you couldn't find in stores in the US, dentistry from a random person without training, water that isn't clean, etc.

I agree that there's a broader circle of people who get the ideas but aren't "card carrying" community members, and having some of those on the board is good. A board definitely doesn't need to be 100% self-identified EAs.

Another clarification is that what I care about is whether they deeply grok and are willing to act on the principles – not that they're part of the community, or self-identify as EA. Those things are at best rough heuristics for the first thing. 

This said, I think there are surprisingly few people out there like that. And due to the ... (read more)

howdoyousay? (1y):
Isn't the point of EA that we are responsive to new arguments? So, unlike Extinction Rebellion where belief that climate change is a real and imminent risk is essential, our "belief system" is rather more about openness and willingness to update in response to 1) evidence, and 2) reasonable arguments about other world views?

Also I think a lot of the time when people say "value alignment", they are in fact looking for signals like self-identification as EAs, or who they're friends with or have collaborated / worked with. I also notice we conflate our aesthetic preferences for communication with good reasoning or value alignment; for example, someone who knows in-group terminology or uses non-emotive language is seen as aligned with EA values / reasoning (and by me as well often). But within social-justice circles, emotive language can be seen as a signal of value alignment. Basically, there's a lot more to unpack with "value alignment" and what it means in reality vs. what we say it ostensibly means.

Also to tackle your response, and maybe I'm reading between the lines too hard here / being too harsh on you here, but I feel there's goalpost shifting in your original post about EA value alignment and you now stating that people who understand broader principles are also "value aligned".

Another reflection: the more we speak about "value alignment" being important, the more it incentivises people to signal "value alignment" even if they have good arguments to the contrary. If we speak about valuing different perspectives, we give permission and incentivise people to bring those.

Sorry that's what I meant. I was saying there are 5,000 community members. If you want the board to be controlled by people who are actually into EA, then you need 2/3 to come from something like that pool. Another 1/3 could come from outside (though not without risk). I wasn't talking about what fraction of the board should have specific expertise.

Another clarification, what I care about is whether they deeply grok and are willing to act on the principles – not that they're part of the community, or self-identify as EA. Those things are at best rough heur... (read more)

Grayden (1y):
Thanks, Ben. I agree with what you are saying. However, I think that on a practical level, what you are arguing for is not what happens. EA boards tend to be filled with people who work full-time in EA roles, not by fully-aligned, talented individuals from the private sector (e.g. lawyers, corporate managers) who might be earning to give having followed 80k’s advice 10 years ago.

I agree Will's made a bunch of mistakes (like yes CEA was messed up), but I find it hard to sign up to a narrative where status seeking is the key reason.

My impression is that Will often finds it stressful and unpleasant to do community leadership stuff, media, talk to VIPs etc. He often seems to do it out of a sense of duty (i.e. belief that it's the most impactful thing). His ideal lifestyle would be more like being an academic.

Maybe there's some kind of internal conflict going on, but it seems more complicated than this makes out.

My hot take is that a b... (read more)

I agree Will's made a bunch of mistakes (like yes CEA was messed up), but I find it hard to sign up to a narrative where status seeking is the key reason.

My guess is you know Will better, so I would trust your judgement here a decent amount, though I have talked to other people who have worked with Will a decent amount who thought that status-seeking was pretty core to what was going on (for the sake of EA, of course, though it's hard to disentangle these kinds of things).

My impression is that Will often finds it stressful and unpleasant to do community le

... (read more)
JWS (1y):
Edit: So this has got a very negative reaction, including (I think) multiple strong disagreevotes. I notice I'm a bit confused why – I don't recognise anything in the post that is beyond the pale? Maybe people think I'm piling on or trying to persuade rather than inform, though I may well have got the balance wrong. Minds are changed through discussion, disagreement, and debate - so I'd like to encourage the downvoters to reply (or DM me privately, if you prefer), as I'm not sure why people disagree, it's not clear where I made a mistake (if any) and how much I ought to update my beliefs.

This makes a lot of sense to me intuitively, and I'd be pretty confident that Will would probably be most effective while being happy, unstressed, and doing what he likes and is good at - academic philosophy! It seems very reminiscent to me of stories of rank-and-file EAs who end up doing things that they aren't especially motivated by, or especially exceptional at, because of a sense of duty that seems counterproductive.

I guess the update I think ought to happen is that Will trading off academic work to do community building / organisational leadership may not have been correct? Of course, hindsight is 20-20 and all that. But it seems plausible, and I'd be interested to hear the community's opinion.

In any case, it seems that a good next step would be to find people in the community who are good at running organisations and willing to do the community-leadership/public facing stuff, so we can remove the stress from Will and let him contribute in the academic sphere? The EA Good Governance Project seems like a promising thing to track in this area.

Non-profit boards have 100% legal control of the organisation – they can do anything they want with it.

If you give people who aren't very dedicated to EA values legal control over EA organisations, they won't be EA organisations for very long.

There are under 5,000 EA community members in the world – most of them have no management experience.

Sure, you could give up 1/3 of the control to people outside of the community, but this doesn't solve the problem  (it only reduces the need for board members by 1/3).

Ultimately, if you think there is enough value within EA arguments about how to do good, you should be able to find smart people from other walks of life who have: 1) enough overlap with EA thinking (because EA isn't 100% original after all) to have a reasonable starting point, along with 2) more relevant leadership experience and demonstrably good judgement, and, linked to the previous two, 3) enough maturity in their opinions and/or achievements to be less susceptible to herding.

If you think that EA orgs won't remain EA orgs if you don't appoint "value alig... (read more)

Jason (1y):

The assumption that this 1/3 would come from outside the community seems to rely on an assumption that there are no lawyers/accountants/governance experts/etc. in the community. It would be more accurate, I think, to say that the 1/3 would come from outside what Jack called "high status core EAs."

I think it's mostly because these estimates aren't properly adjusted for regression to the mean – there's a ton of sources of model error, and properly factoring these in will greatly reduce the estimates for the top interventions. There are also other factors, like the top interventions quickly running out of capacity. I discuss this in the article. I put a lot more trust in GiveWell's figures as an estimate of the real marginal cost-effectiveness. Though I agree there could be some interventions accessible to policy-makers that aren't accessible to GiveWell.
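
A minimal sketch of the kind of adjustment being described – standard normal-normal shrinkage on log cost-effectiveness, with illustrative numbers rather than GiveWell's actual model:

```python
import math

# Prior over log cost-effectiveness and a noisy naive estimate (toy numbers).
prior_mean, prior_sd = math.log(10), 1.0       # prior centred on ~10x
estimate, estimate_sd = math.log(1000), 1.5    # naive estimate of 1000x, very noisy

# Posterior mean is a precision-weighted average, pulling the estimate
# back toward the prior ("regression to the mean").
w = (1 / estimate_sd**2) / (1 / estimate_sd**2 + 1 / prior_sd**2)
posterior = w * estimate + (1 - w) * prior_mean
print(f"naive 1000x shrinks to ~{math.exp(posterior):.0f}x")  # ~41x
```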

EdoArad (1y):
Yea, I agree with your analyses in the article, though I'd be interested in understanding the relative effects.

Thanks, that's very helpful!

(I'm not surprised satisfaction was higher in 2022 than 2020 before FTX.)

Is it possible to compare overall satisfaction to previous EA surveys? As you say, all the methods for extracting the impact of FTX from this data seem a bit suspect. Satisfaction now vs. satisfaction 1-2 years ago is simpler and arguably more decision-relevant.

Is it possible to compare overall satisfaction to previous EA surveys?

 

It certainly is! (That said, I don't think comparing 2020 to 2022 is necessarily simpler. If we want to know where we are now, we presumably want to distinguish pre-FTX and post-FTX responses. And, if we want to know the effect of FTX, we may want to control for engagement, like in our other analyses, to account for differences between groups).

This plot shows the raw mean satisfaction for EAS 2020, EAS 2022, and for EAS 2022 (but only post-FTX). 

However, if we control for enga... (read more)

I'd expect this to look significantly worse if done in March rather than Dec :(

Might it be possible to re-survey a subset of people just about overall satisfaction, to see if it's moved?

Yes, we are planning a followup survey (most likely in 2-3 months). We have consent to send approximately 1600 respondents a followup survey (and may do additional parallel recruiting).

5
Habryka
1y
Yeah, I also feel this way. Doing a bit of resurveying seems like it could be quite valuable.

I agree different comparisons are relevant in different situations.

A comparison with the median is also helpful, since it e.g. tells us the gain that the people currently doing the bottom 50% of interventions could get if they switched.

Though I think the comparison to the mean is very relevant (and hasn't had enough attention) since it's the effectiveness of what the average person donates to, supposing we don't know anything about them.  Or alternatively it's the effectiveness you end up with if you pick without using data.

I think you'd need to show

... (read more)
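
A quick sketch of why the two comparisons come apart, assuming for illustration that effectiveness is lognormally distributed (the parameters are mine, not from the article):

```python
import random

random.seed(0)
# Toy lognormal distribution of intervention effectiveness (illustrative parameters).
effects = sorted(random.lognormvariate(0, 1.5) for _ in range(10_000))

mean = sum(effects) / len(effects)
median = effects[len(effects) // 2]
top = effects[int(0.99 * len(effects))]  # roughly a top-1% intervention

# In a heavy-tailed distribution the mean sits well above the median,
# so "top vs mean" is a much smaller multiple than "top vs median".
print(f"top vs median: {top / median:.0f}x, top vs mean: {top / mean:.0f}x")
```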

One small extra data point that might be useful: I made a rough estimate for smallpox eradication in the post, finding it fell in the top 0.1% of the distribution for global health, so it seemed consistent.

I'd also add it would be great if there was more work to empirically analyse ex ante and ex post spread among hits based interventions with multiple outcomes. I could imagine it leading to a somewhat different picture, though I think the general thrust will still hold, and I still think looking at spread among measurable interventions can help to inform intuitions about the hits based case.

One example of work in this area is this piece by OP, where they say they believe they found some 100x and a few 1000x multipliers on cash transfers to US citizens by... (read more)

jackva (1y):
I agree that this would be great to exist, though it is likely very hard, and the examples that will exist soon will not be the strongest ones (given how effects can become visible over longer time-frames, e.g. how OP discusses the green revolution and other interventions that took many years to have the large effects we can now observe).

Hey, thanks for the comments. Here are some points that might help us get on the same page:

1) I agree this data is missing difficult-to-measure hits based interventions, like research and advocacy, which means it'll understate the degree of spread.

I discuss that along with other ways it could understate the differences here:

https://80000hours.org/2023/02/how-much-do-solutions-differ-in-effectiveness/#ways-the-data-could-understate-differences-between-the-best-and-typical-interventions

 

2) Aside: I'm not sure conjunction of multipliers is the best... (read more)

Hey Ben, thanks for the replies -- adding some more to get closer to the same page 🙂

Re your 1), my criticism here is more one of emphasis and of the top-line messaging, as you indeed mention these cases of advocacy and research.

I just think that these cases are rather fundamental and affect the conclusions very significantly -- because we are almost never in the situation that all we can choose from are direct interventions, so the solution space (and with it, the likely variance) will almost always look quite different than what is discussed as primary... (read more)

Benjamin_Todd (1y):
I'd also add it would be great if there was more work to empirically analyse ex ante and ex post spread among hits based interventions with multiple outcomes. I could imagine it leading to a somewhat different picture, though I think the general thrust will still hold, and I still think looking at spread among measurable interventions can help to inform intuitions about the hits based case. One example of work in this area is this piece by OP, where they say they believe they found some 100x and a few 1000x multipliers on cash transfers to US citizens by e.g. supporting advocacy into land use reform. But this involves an element of cause selection as well as solution selection, cash transfers seem likely below the mean, and this was based on BOTECs that will contain a lot of model error and so should be further regressed. Overall I'd say this is consistent with within-cause differences of ~10x from top to mean, and doesn't support > 100x differences.

A couple of comments that might help readers of the thread separate problems and solutions:

1) If you're aiming to do good in the short-term, I think this framework is useful:

expected impact = problem effectiveness x solution effectiveness x personal fit

I think problem effectiveness varies more than solution effectiveness, and is also far less commonly discussed in normal doing good discourse, so it makes sense for EA to emphasise it a lot.

However, solution effectiveness matters a lot too. It seems plausible that EAs neglect it too much.

80k covers both... (read more)
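
As a trivially small sketch of how the multiplicative framework plays out (the spreads below are made up for illustration, not estimates from 80k):

```python
# Suppose, for illustration, problems span ~100x in effectiveness and
# solutions within a problem span ~10x.
def expected_impact(problem, solution, fit):
    return problem * solution * fit

baseline = expected_impact(problem=1, solution=1, fit=1)
best_problem = expected_impact(problem=100, solution=1, fit=1)
best_solution = expected_impact(problem=1, solution=10, fit=1)
both = expected_impact(problem=100, solution=10, fit=1)
print(baseline, best_problem, best_solution, both)  # 1 100 10 1000
```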

This is very helpful.

Might you have a rough estimate for how much the bar has gone up in expected value?

E.g. is the marginal grant now 2x, 3x etc. higher impact than before?

Holden Karnofsky (1y):
I don’t have a good answer, sorry. The difficulty of getting cardinal estimates for longtermist grants is a lot of what drove our decision to go with an ordinal approach instead.

Hey, I missed the lottery this year. Do you know when the next one will be?

Is this also the only one running in EA right now? Does it replace the one run by the EA Funds in the past?

Giving What We Can (1y):
Hi Ben,

The donor lottery should be run again at the end of 2023. As far as we're aware, this is the only donor lottery occurring. And yes, the GWWC team took over the donor lottery from EA Funds at the end of 2021.

Thanks,
Grace

That makes sense. It just means you should decrease your exposure to bonds, and not necessarily buy more equities.

I'm skeptical you'd end up with a big bond short though - due to my other comment. (Unless you think timelines are significantly shorter or the market will re-rate very soon.)

I think the standard asset pricing logic would be: there is one optimal portfolio, and you want to lever that up or down depending on your risk tolerance and how risky that portfolio is.

In the Merton share, your exposure depends on (i) expected returns of the optimal portfolio ... (read more)
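
For reference, a minimal sketch of the Merton share formula being referred to (a standard result; the example inputs are mine):

```python
def merton_share(excess_return, volatility, risk_aversion):
    """Optimal fraction of wealth in the risky portfolio: (mu - r) / (gamma * sigma^2)."""
    return excess_return / (risk_aversion * volatility**2)

# Illustrative inputs: 5% expected excess return, 16% volatility,
# relative risk aversion of 2 -> hold ~98% in the risky portfolio.
print(f"{merton_share(0.05, 0.16, 2):.0%}")
```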
