All of Devon Fritz's Comments + Replies

I think one of the major problems I see in EA as a whole is a fairly loose definition of 'impact'.

I find the statement is more precise if you put "longtermism" where "EA" is. Is that your sense as well?

4
CAISID
2mo
I think that's a good modification of my initial point, you may well be right.

I continue to be impressed by the Effektiv Spenden team. Well done everyone!

To me, based on what you said, you have provided a lot of value to many people at relatively low cost to yourself. I have the impression that the time had a low counterfactual cost, given you didn't seem to have many burning projects at the time. So it seems pretty good to me on the face of it, although on every given detail you know way more than I do!

I continue to be impressed by the breadth and depth of stuff you and the rest of the CE team pump out, seemingly on a monthly basis!

1
Tee
9mo
I was just thinking this

Really excited by this and all of the work the CE team has been pumping out!

 

Quick question regarding programming: You say the first month is about learning and the second about applying, but that the course is 11 weeks. What is the extra time dedicated to?

5
Leonie Falk
10mo
The split between learning and applying is a little bit less distinct than the post might have suggested; I edited it for clarity. Generally, we are trying to emphasize learning through doing projects, which means that after an initial overview week (week 1) we break learning goals down into smaller sections for participants to accomplish as they walk through the different steps of the research process (weeks 2-10). The additional 11th week is for conducting more research and producing reports that will be disseminated among organizations that could use them for their decision-making.

Strong agree with this. Most EAs would probably agree with these points abstractly, but there is likely a gap in that (I believe) most EAs have not e.g. taken the GWWC pledge.

Hey David, thanks for the question. I just want to chime in from High Impact Professionals' standpoint.

High Impact Professionals has two main products, both of which are geared at getting professionals into more impactful roles. These are:
- Talent Directory - We have a talent directory with over 750 impact-oriented professionals looking for work at high-impact organizations and over 100 organizations sourcing from it. We encourage both professionals and organizations to sign up. I think this doesn't have much overlap with SuccessIf, except with thei... (read more)

I just want to say thank you for being courageous and sharing this and also sorry that you had to/have to go through this.

This is really great to hear!

Can I ask roughly how much it costs to commission such a report? I'd love to see more investigations into different organizations to get more robust data on how impactful they are.

Thanks for asking Devon! 

On the margin, our commissioned reports cost ~$7,000-$30,000 depending on the scope (this particular report being on the lower end of that spectrum). That said, we don't take on every commissioned project that comes our way and only have the capacity to take on a few projects at this price because: (1) we have longer-term agreements with a couple of organizations for which we do a lot of work, and (2) we want our team to have the capacity to pursue independent projects we think are promising.

That being said, you're always welcome to reach out if there are specific projects you are interested in. We always welcome new ideas and look forward to connecting with new individuals/organizations :)

Great to hear! It was on my agenda to reach out to you today and I appreciate the proactivity!

I agree with you and also want to push back on the meme that “all the good stuff gets funded”.

Cool initiative! 

I am not very involved in AI Safety but have heard multiple times from big funders in the ecosystem something like "everything in AI Safety that SHOULD be funded IS being funded; what we really need are good people working on it." I'd expect e.g. OP to be excited to fund great people working in the space, so I'm curious why you think the people who will apply to your network aren't getting funded otherwise.

Just for context: I am very FOR a more diverse funding ecosystem, so I think getting more people - with different funding strategies and risk tolerances - to fund more projects is going in the right direction.

Good question! Here are a few thoughts on that:

  • Evaluating charities is more like evaluating startups than evaluating bridge-builders

You can tell if somebody is a good bridge builder. We have good feedback loops on bridges and we know why bridges work. For bridges, you can have a small number of experts making the decisions and it will work out great.

However, with startups, nobody really knows what works or why. Even with Y Combinator, potentially the best startup evaluator in the world, the vast majority of their bets don’t work out. We don’t know... (read more)

There are many factors going into that issue, but I think the biggest are the bottlenecks within the pipeline that brings money from OP to individual donation opportunities. Most directly, OP has a limited staff and a lot of large, important grants to manage. They often don't have the spare attention, time, or energy to solicit, vet, and manage funding to the many individuals and small organizations that need funding.

LTFF and other grantmakers have similar issues. The general idea is just that there are many inefficiencies in the grantmaker -> ??? ->... (read more)

Very excited for you to take on this project as you already know Fernando!

Excited to see another impactful set of charities get founded!

I agree that is the other way out of the puzzle. I wonder whom to even trust if everyone is susceptible to this problem...

Yeah, I suppose we just disagree then. I think such a big error and hit to the community should downgrade any rational person's belief in the output of what EA has to offer, and also downgrade their trust that EA is getting it right.

Another side point: many EAs like Cowen and think he is right most of the time. I think it is suspicious that when Cowen says something negative about EA, he is labeled things like "daft".

6
Cornelis Dirk Haupt
1y
I disagreed with the Scott analogy, but thinking it through made me change my mind. Simply make the following modification: "Leading UN climatologists are in serious condition after all being wounded in the hurricane Smithfield that further killed as many people as were harmed by the FTX scandal. These climatologists claim that their models can predict the temperature of the Earth from now until 2200 - but they couldn’t even predict a hurricane in their own neighborhood. Why should we trust climatologists to protect us from some future catastrophe, when they can’t even protect themselves or those nearby in the present?"

Now we are talking about a group rather than one person, and what they missed is much more directly within their domain expertise. I.e. it feels, like the FTX Future Fund team's domain expertise on EA money, like something they shouldn't be able to miss. Would you say any rational person should downgrade their opinion of the climatology community and any output they have to offer, and downgrade their trust that they are getting their 2200 climate change models right?

I shared the modification with an EA that - like me - at first agreed with Cowen. Their response was something like "OK, so the climatologists not seeing the existential neartermist threat to themselves appears to still be a serious failure (people they know died!) on their part that needs to be addressed - but I agree it would be a mistake on my part to downgrade my confidence in their 2100 climate change model because if it"

However, we conceded that there is a catch: if the climatology community persistently finds their top UN climatologists wounded in hurricanes to the point that they can't work on their models, then rationally we ought to update that their productive output should be lower than expected, because they seem to have this neartermist blindspot to their own wellbeing and those nearby. This concession comes with asterisks, though. If we, for sake of argument, as

Hi Devon, FWIW I agree with John Halstead and Michael PJ re John's point 1.

If you're open to considering this question further, you may be interested in knowing my reasoning (note that I arrived at this opinion independently of John and Michael), which I share below.

Last November I commented on Tyler Cowen's post to explain why I disagreed with his point:

I don't find Tyler's point very persuasive: Despite the fact that the common sense interpretation of the phrase "existential risk" makes it applicable to the sudden downfall of FTX, in actuality I think fo

... (read more)
-4
Anthony Repetto
1y
Thank you! I remember hearing about Bayesian updates, but rationalizations can wipe those away quickly. From the perspective of Popper, EAs should try "taking the hypothesis that EA..." and then try proving themselves wrong, instead of using a handful of data points to reach their preferred, statistically irrelevant conclusion, all the while feeling confident.
2
Michael_PJ
1y
Tbh I took the Gell-Mann amnesia interpretation and just concluded that he's probably being daft more often in areas I don't know so much about.

The difference in that example is that Scholtz is one person, so the analogy doesn't hold. EA is a movement composed of many, many people with different strengths, roles, motives, etc., and CERTAINLY there are some people in the movement whose job it was (or at a minimum there are some people who thought long and hard about how) to mitigate PR/long-term risks to the movement.

I picture the criticism more like EA being a pyramid set in the ground, but upside down. At the top of the upside-down pyramid, where things are wide, there are people working to ensure the l... (read more)

9
Robi Rahman
9mo
Scott's analogy is correct, in that the problem with the criticism is that the thing someone failed to predict was on a different topic. It's not reasonable to conclude that a climate scientist is bad at predicting the climate because they are bad at predicting mass shootings. If it were a thousand climate scientists predicting the climate a hundred years from now, and they all died in an earthquake yesterday, it's not reasonable to conclude that their climate models were wrong because they failed to predict something outside the scope of their models.

I think I just don't agree with your charitable reading. The very next paragraph makes it very clear that Cowen means this to suggest that we should think less well of actual existential risk research:

Hardly anyone associated with Future Fund saw the existential risk to…Future Fund, even though they were as close to it as one could possibly be.

I am thus skeptical about their ability to predict existential risk more generally, and for systems that are far more complex and also far more distant.

I think that's plain wrong, and Cowen actually is doing the ... (read more)

I agree with most of your points, but strongly disagree with number 1 and am surprised to have heard over time that so many people thought this point was daft.

I don't disagree that "existential risk" is being employed in very different senses in the two instances, so we agree there, but the broader point, which I think is valid, is this:

There is a certain hubris in claiming you are going to "build a flourishing future" and "support ambitious projects to improve humanity's long-term prospects" (as the FFF did on its website) only to not exist 6 months later ... (read more)

5
Greg_Colbourn
1y
This. We can taboo the words "existential risk" and focus instead on Longtermism. It's damning that the largest philanthropy focused on Longtermism -- the very long term future of humanity -- didn't even last a year. A necessary part of any organisation focused on the long term is a security mindset. It seems that this was lacking in the Future Fund. In particular, nothing was done to secure funding.

Scott Alexander, from "If The Media Reported On Other Things Like It Does Effective Altruism":

Leading UN climatologist Dr. John Scholtz is in serious condition after being wounded in the mass shooting at Smithfield Park. Scholtz claims that his models can predict the temperature of the Earth from now until 2200 - but he couldn’t even predict a mass shooting in his own neighborhood. Why should we trust climatologists to protect us from some future catastrophe, when they can’t even protect themselves in the present?

I am having a hard time, here and speckled throughout the rest of this post, with people writing that we are doing the "good thing" and should do that rather than just what looks good, when the "good thing" in question is buying a castle and not, say, caring about wild animal suffering.

I guess I've gone off into the abstract argument about whether we should care about optics or not. I don't mean to assert that buying Wytham Abbey was a good thing to do, I just think that we should argue about whether it was a good thing to do, not whether it looks like a good thing to do.

I am for more debate, but don't like this suggestion due to specific names, like Timnit, who just seems so hostile I can't imagine a fruitful conversation. 

I think what we really need are more funding pillars in addition to EA Funds and Open Phil. And continue to let EA Funds deploy as they see fit, but have other streams that do the same and maybe take a different position on risk appetite, methodology, etc.

I'm curious to hear the perceived downsides about this. All I can think of is logistical overhead, which doesn't seem like that much.

If something is called the "Coordination Forum" and was previously called the "Leaders Forum", and there is a "leaders" Slack where further coordination that affects the community takes place, it seems fair that people should at least know who is attending them. The community prides itself on transparency, and knowing the leaders of your movement seems like one of the obvious first steps of transparency.

2
Nathan Young
1y
I think that if you have an intense focus on leaders you end up with leaders who can bear that intense focus. I guess I sense that there is something good in this area, but that this specific suggestion would be -EV.

Hey Vasco, thanks for the well-thought-out response. I love the compromise - I think having more grantmakers (with different, but still strong, EA perspectives) is a great way to go.

4
Vasco Grilo
1y
Thanks for sharing! On the one hand, I think that bar may still be higher than that of neartermist interventions. I have estimated here that the marginal cost-effectiveness of longtermism and catastrophic risk prevention is 9 (= 3.95/0.431) times as high as that of global health and development. On the other hand, I think one can still say the bar of longtermist interventions is more constrained by labour than that of neartermist ones. Fields like global health and development have been around for a while, so it makes sense that funding is more likely to be undersupplied relative to ideas. Whereas fields like AI safety have only very few people involved[1], so the ideas space is still very nascent, and more people are needed to explore it. Benjamin Todd's post Let's stop saying 'funding overhang' clarifies this matter. If funding is currently oversupplied relative to labour in the longtermist space, one can prioritise interventions which focus on ensuring more people will be able to contribute to the area in the future. The LTFF makes many (most?) grants with this goal. Some examples from this quarter (taken from here):
* "The Alignable Structures workshop in Philadelphia".
* "Financial support for: Finishing Master's (AI upskilling), independent research and career exploration".
* "6-month budget to self-study ML and research the possible applications of a Neuro/CogScience perspective for AGI Safety".
* "Funding from 15th September 2022 until 31st January 2023 for doing alignment research and upskilling".
1. ^ According to 80,000 Hours' page:

I actually strongly disagree with this - I think too many people defer to the EA Funds on the margin. Don't get me wrong, I think they do great work, but if FTX showed us anything it is that EA needs MORE funding diversity, and I think this often shakes out as lots of people making many different decisions on where to give, so that, e.g., an org doesn't need to rely for its existence on whether or not it gets a grant from EA Funds/OP.

6
Vasco Grilo
1y
Hi Devon,

Thanks for engaging!

A priori, I think it makes sense to assume grantmakers from the EA Funds are better than me. I am open to the possibility of finding better opportunities than EA Funds, but guess it would take me too much time to be more effective than deferring to EA Funds. I also believe exceptions exist, as Joel pointed to above. However, I do agree there should be more efforts to assess past grants from EA Funds, as Nuño Sempere did here.

I agree that, for the same amount of non-risk-adjusted funding, more funders will tend to increase risk-adjusted funding, which is good. However, it is arguably easier to increase non-risk-adjusted funding via large funders, since wealth is heavy-tailed. I do not know which consideration (and there are more) is stronger.

"Lots of people making many different decisions on where to give" does not seem to have worked out perfectly in the outer world. I expect the median person aligned with effective altruism would make better decisions than the median citizen, but specialisation to still be good. So the median grantmaker of EA Funds is arguably better at grantmaking than the median person aligned with effective altruism. One compromise is having more grantmakers (currently one of 80,000 Hours' top career paths).

In light of FTX, I am updating a bit away from giving to meta stuff, as some media made clear that a (legitimate) concern is EA orgs donating to each other and keeping the money internal to them. I don't think EAs do this on purpose for any bad reason; in fact, I think meta is high leverage, but the concern does give one pause to think about why we are doing this and also how it is perceived from the outside.

Answer by Devon Fritz · Nov 29, 2022

This year, I am giving $10K to Charity Entrepreneurship's incubated charities at their discretion as they know where it will best be placed after all counterfactuals have been calculated. I am giving here for a lot of reasons (CoI: I like them so much I am on the board):

  • I think there is a lot of counterfactual value in supporting new EA startups with higher risk profiles, especially within CE, where there is a good rate of growth to GW Top Charity status.
  • I like to fund stuff that isn't getting funded through the normal means to create more diversified fund
... (read more)

Hi Simon, love the initiative and have been working on an illustrated philosophy book for kids as well (and by 'working on' I mean 'made it 5 years ago and have to get back to it when I find the time with an outstanding promise to finish it before my daughter turns 6').

Will definitely sign up for the beta and provide feedback. Looking forward to reading it! 

2
Simon Newstead
1y
Thanks Devon, looking forward to the feedback. The illustrated philosophy book for older kids sounds like a great idea, and could be helpful for parents as well who can learn new things while reading with the child. Nice you have a "deadline" for it :)

There is a big difference between not wanting to work with someone because you don't like their ethics (looking at you Kerry) and thinking they are going to commit the century's worst fraud.

I don't think anyone asking for more information about what people knew believes that central actors knew anything about fraud. If that is what you think, then maybe therein lies the rub. It is more that strong signs of bad ethics are important signals and your example of Kerry is perhaps a good one. Imagine if people had concerns with someone on the level of what they ... (read more)

Strongly agree with these points and think the first is what makes the overwhelming difference on why EA should have done better. Multiple people (both publicly on the forum and in confidence to me) allege that they told EA leadership, since the Alameda situation of 2018, that SBF was doing things that strongly break with EA values.

 This doesn't imply we should know about any particular illegal activity SBF might have been undertaking, but I would expect SBF to not have been so promoted throughout the past couple of years. This is ... (read more)

I respect that people who aren't saying what they know have carefully considered reasons for doing so. 

I am not confident it will come from the other side as it hasn't to date and there is no incentive to do so. 

May I ask why you believe it will be made public eventually? I truly hope that is the case.

The incentives for them to do so include 1) modelling healthy transparency norms, 2) avoiding looking bad when it comes out anyway, 3) just generally doing the right thing.

I personally commit to making my knowledge about it public within a year. (I could probably commit to a shorter time frame than that, that's just what I'm sure I'm happy to commit to having given it only a moment's thought.)

I agree - I don't think that the FTX debacle should define cause areas per se, and I think all of the cause areas on the EA menu are important and meaningful. I meant more to say that EA has played more fast and loose over the years and gotten a bit too big for its britches, and I think that is somehow correlated with what happened, although like everyone else I am working all of this out in my head. Just imagine that Will was connecting SBF with Elon Musk, as one example that was made public only because it had to be, so we can assume ... (read more)

That could very well be, and there are a lot of moving parts. That is why I think it is important for people who supposedly warned leadership to say who was told and what they were told. If we are going to unravel this, this all feels like necessary information.

The people who are staying quiet about who they told have carefully considered reasons for doing so, and I'd encourage people to try to respect that, even if it's hard to understand from outside.

My hope is that the information will be made public from the other side. EA leaders who were told details about the events at early Alameda know exactly who they are, and they can volunteer that information at any time. It will be made public eventually one way or another.

My decision procedure does allow for that, and I have lots of uncertainties, but it feels off given that many insiders claim to have warned people in positions of power about this and Sam got actively promoted anyway. If multiple people with intimate knowledge of someone came to you and told you that they thought person X was of bad character, you wouldn't have to believe them hook, line, and sinker to be judicious about promoting that person.

 

Maybe this is the most plausible of the 3 and I shouldn't have called it super implausible, but it doesn't seem very plausible to me, especially from people in a movement that takes risks more seriously than any other that I know.

I don't mean to claim that you think it should replace discussion. I think just reading your text I felt that you bracketed off the discussion/reflection quite quickly and moved on to what things will be like after, which feels very quick to me. I think the discussions in the next few months will (I hope) change so many aspects of EA that I don't feel like we can make any meaningful statements about the shape of what comes next.

I see from your comment here that you also want to be motivational in the face of people being disheartened by the movement. I can see that now and think that that aspect is great.

I strongly disagree with this. While there is some trivial sense of the word "ambition" I'd endorse, I think it is time for the EA community to slow down, forget mega projects and reflect. And maybe do some boring, old school stuff, like buying malaria nets.

3
Chris Leong
1y
I don’t know, I wouldn’t suggest choosing cause areas based on FTX collapsing, but I’d think more carefully about mega-projects given the potential of the funding situation to substantially change.

Mind expanding on this?

[anonymous]
1y

We absolutely think (and stated this clearly) that this outlook shouldn't replace the discussion and reflection on the current situation. 
What I've noticed when discussing with EAs during the past few days, is that many feel like this is a disaster we wouldn't recover from. I do think it's important to emphasize that we're not back to ground zero.

Truly sorry to hear about your experience and thank you for sharing it.

If insiders were making serious accusations about his character to EA leadership and they went on to promote him, that would be weird to me, especially if many people did it, which is what has been claimed. Of course, I have no idea who “leadership” is because no one is being specific.

To be fair, sometimes people make accusations that are incorrect? Your decision procedure does need to allow for the possibility of not taking a given accusation seriously. I don't know who knew what and how reasonable a conclusion this was for any given person given their state of knowledge in this case, but people do get this wrong sometimes; that doesn't seem implausible to me.

So massive is too big of a word, but the qualifier in some sense lets everything in and isn’t powerful.

3
RobBensinger
1y
Seems to me that you have it exactly backwards? Everyone agrees that the ends usually justify the means -- e.g., it's a good idea to go grocery shopping because this results in getting food. "Are there exceptions?" is exactly the claim that naive consequentialists are getting wrong.

I think it is morally correct and that people would agree with it, but I don’t think of it as strong evidence for the claim “we are against this type of behavior.”

2
Devon Fritz
1y
So massive is too big of a word, but the qualifier in some sense lets everything in and isn’t powerful.

I don't think leadership needed to know how the sausage was made to be culpable to some degree. Many people are claiming that they warned leadership that SBF was not doing things above board, and if that is true, it has serious implications, even if they didn't know exactly what SBF was up to.

Note: I am not claiming that anyone, specific or otherwise, knew anything.

That doesn't strike me as a massive qualifier, it strikes me as something that's straightforwardly true (or as true as a moral claim can be). For example, if you're in a situation where you can lie to save 1B people from terrible suffering, then I bet most people think it's not only acceptable, but obligatory to lie. If so, the ends clearly do sometimes justify the means.

I'll go with you part of the way, but I also think that experts, traders, and even investors were further from SBF than at least some of the people in the equation here, which seems more and more true the more accounts I hear about people from Alameda saying they warned top brass. I wouldn't expect an investor to have that kind of insight.

How can both of these be true:

  1. You (and others, if all of the accounts I've been reading about are true) told EA leadership about a deep mistrust of SBF.
  2. EA decided to hold up and promote SBF as a paragon of EA values and one of the few prominent faces in the EA community.

If both of those are true, how many logical possibilities are there?

  1. The accounts that people told EA leadership are false.
  2. The accounts are true and EA leadership didn't take these accounts seriously.
  3. EA leadership took the accounts seriously, but still proceeded to market SBF.

     

I find them all super implausible so I don't know what to think!

My guess is different parts of leadership. I don't think many of the people I talked to promoted SBF a lot. E.g. see my earlier paragraph on a lot of this promotion being done by the more UK focused branches that I talk much less to.

What do you find super implausible about 2?

My understanding is that the answer is basically 2.

I'd love to share more details but I haven't gotten consent from the person who told me about those conversations yet, and even if I were willing to share without consent I'm not confident enough of my recollection of the details I was told about those conversations when they happened to pass that recollection along. I hope to be able to say more soon.

EDIT: I've gotten a response and that person would prefer me not to go into more specifics currently, so I'm going to respect that. I do understand the frust... (read more)

I strongly believe it is hyper-relevant to know who knew what and when, so that these people are held to account. I don't think this is too much to ask, nor does it have to be arduous in the way you described, of getting every name with max fidelity. I see so many claims that "key EA members knew what was going on" and never any sort of name associated with it.

8
Ozzie Gooen
1y
I agree this is really important and would really, really want it to be figured out, and key actions taken. I think I'm less focused on all of the information of such a discovery being public, as opposed to much of it being summarized a bit.

Bless your heart, GiveWell. Keep chugging along doing object-level work amidst the chaos.

What is the main bottleneck to CE scaling up faster?

We have the ideas (and can find more); we have the program (and can train more). What we need are more applicants: more leaders to pick up the ideas and make them a reality.
