MathiasKB

Director @ Center for Effective Aid Policy
4667 karma · Joined Jul 2018 · aidpolicy.org


I'm awestruck, that is an incredible track record. Thanks for taking the time to write this out.

These are concepts and ideas I regularly use throughout my week and which have significantly shaped my thinking. A deep thanks to everyone who has contributed to FHI, your work certainly had an influence on me.

I think I'm sympathetic to Oxford's decision.

By the end, the line between genuine scientific inquiry and activist 'research' got quite blurry at FHI. I don't think papers such as 'Proposal for a New UK National Institute for Biological Security' belong in an academic institution, even if I agree with the conclusion.

One thing that stood out to me reading the comments on Reddit was how much of the poor reception could have been avoided with slightly clearer communication.

For people such as MacAskill, who are deeply familiar with effective altruism, the question "Why would SBF pretend to be an Effective Altruist if he was just looking to do fraud?" is quite the conundrum. Of all the types of altruism, why specifically pick EA as the vehicle to launder your reputation? EA was already unlikeable and elitist before the scandal. Why not donate to puppies and Harvard like everyone else?

I actually admire MacAskill for asking that question. The easy out would be to say: "How could we have been so foolish? SBF was clearly never a real EA." But he instead grapples with the fact that SBF seems to have been genuinely motivated by effective altruism, and that these ideals must have played some part in SBF's decision to commit fraud.

But for any listener who is not as deeply familiar with the effective altruism movement, and who doesn't know its reputation, the question comes off as hopelessly naive. The emphasis they hear is: "Why would SBF, a fraudulent billionaire, pretend to be an Effective Altruist?" The answer to that is obvious: malicious actors pretend to be altruistic all the time!

I see EA communication make this mistake all the time. A question or idea whose merit is obvious to you might not be obvious to everyone else if you don't spell out the assumptions it rests on.

I think I am misunderstanding the original question then?

I mean, if you ask: "what you all think about the series as an entry point for talking about some of these EA issues with friends, family, colleagues, and students",

then the reach is not the 10 million people watching the show, it's the people you get a chance to speak to.

Wasn't the Future Fund quite explicitly about longtermist projects?

I mean, if you worked for an animal foundation and were in a call about GiveDirectly, I can understand that somebody might say: "Look, we are an animal fund; global poverty is outside our scope."

Obviously, saying "I don't care about poverty", or something sufficiently close that your counterpart remembers it as that, is not ideal, especially when you're speaking to an ex-minister of the United Kingdom.

But before we get mad at those who ran the Future Fund, please consider that there's much context we don't have. Why did this call get set up in the first place? I would expect there to be screening mechanisms in place to prevent this kind of mismatch. What Rory remembers might not be what the Future Fund grantmaker remembers, and there might have been a mismatch between the very blunt 'SF culture' the Future Fund operated by and what an ex-minister expects.

That said, I have a very positive impression of Rory Stewart, and it saddens me to hear our community gave him this perception. Had I been in his shoes, I'm not sure I would have thought any differently.

I'm working on an article about gene drives to eradicate malaria, and am looking for biology experts who can help me understand certain areas I'm finding confusing and fact check claims I feel unsure about.

If you are a master's or grad student in biology and would be interested in helping, I would be incredibly grateful.

 

An example of a question I've been trying to answer today:

How likely is successful crossbreeding between members of the Anopheles gambiae species complex (such as Anopheles gambiae s.s. and Anopheles arabiensis), and how likely is successful crossbreeding between Anopheles gambiae and species outside the complex?

 

If you know the answer to questions like these, or would have an easy time finding it out, send me a DM! Happy to pay for your time.

A devastating argument; years of work wasted. Why, oh why, did I insist that the book's front cover had to be a snowman?

Answer by MathiasKB, Mar 31, 2024

I think it's a travesty that so many valuable analyses are never publicly shared, but due to unreasonable external expectations it's currently hard for any single organization to become more transparent without incurring enormous costs.

If Open Phil actually were to start publishing their internal analyses behind each grant, I will bet you at good odds that the following scenario would play out on the EA Forum:

  1. Somebody digs deep into a specific analysis. It turns out Open Phil’s analysis has several factual errors that any domain expert could have alerted them to; additionally, they entirely failed to consider some important aspect which may change the conclusion.
  2. Somebody in the comments accuses Open Phil of shoddy and irresponsible work. That they are making such large donation decisions based on work filled with errors proves their irresponsibility. Moreover, why have they still not responded to the criticism?
  3. A new meta-post argues that the EA movement needs reform, and uses the above as one of several examples showing the incompetence of ‘EA leadership’.

Several things would be true about the above hypothetical example:

  1. Open Phil’s analysis did, in fact, have errors.
  2. It would have been better for Open Phil’s work not to have those errors.
  3. The errors were only found because they chose to make the analysis public.
  4. The costs for Open Phil to reduce the error rate of its analyses would not be worth the benefits.
  5. The mistakes were found at no cost (outside of reputation) to the organization.

Criticism shouldn’t have to warrant a response if responding takes time away from more important work. The internal analyses from Open Phil I’ve been privileged to see were pretty good. They were also made by humans, who make errors all the time.

In my ideal world, every one of these analyses would be open to the public. As with open-source programming, people would be able to contribute to every analysis: fixing bugs, adding new insights, and updating old analyses as new evidence comes out.

But like an open-source programming project, there has to be an understanding that no repository is ever going to be bug-free or have every feature.

If Open Phil shared all their analyses and nobody was able to discover important omissions or errors, my main conclusion would be that they are spending far too much time on each analysis.

Some EA organizations are held to impossibly high standards. Whenever somebody points this out, a common response is: “But the EA community should be held to a higher standard!” I’m not so sure! The bar is where it’s at because it takes significant effort to raise it. EA organizations are subject to the same constraints as the rest of the world.

More openness requires a lowering of expectations. We should strive for a culture that is high in criticism, but low in judgement.

Agreed. I suspect most people downvoted it because they inferred it was a leading question.

I haven't seen the series, but am currently halfway through the second book.

I think it really depends on the person. The person I imagine would watch The Three-Body Problem, get hooked, and subsequently ponder how it relates to the real world seems like someone who would also get hooked by just being sent a good LessWrong post?

But sure, if someone mentioned to me that they watched and liked the series and they didn't already know about EA, I think it could be a great way to start a conversation about EA and longtermism.
