
[I wrote this over a year ago - posting it now for Draft Amnesty. I don't necessarily still endorse my conclusions here, and I'm not planning to respond to comments, sorry!]

 

One tiny, tiny upside of the FTX crisis: I had interesting new thoughts!

A problem with EA having a few big funders:

  • The common argument I’d heard against billionaires funding EA was that these billionaires’ biases would affect how funding was spent. E.g. Dustin Moskovitz would push OpenPhil to spend money to prevent Big Tech regulation.
  • I’m not very convinced here: EAs as a group aspire to cause neutrality and finding the most effective ways of doing good, and seem to have a strong awareness of how personal biases can affect this.
  • Anecdotally, Dustin doesn’t have much of a role in deciding what OpenPhil spends its money on. See this tweet showing he didn’t know it was OpenPhil who paid for the £10 million Wytham Abbey. [Edit: can't find the tweet link, sorry!]
  • However, the FTX crisis convinced me of a related argument.
  • EA funding coming from a specific billionaire gives the EA community a stake in whatever organisation, industry or system that billionaire’s wealth is tied up in.
  • For example, I’m somewhat invested in the success of Asana because a significant portion of Dustin Moskovitz’s wealth is tied up in it.
  • Extrapolating a little: a wealthy EA donor means the community has more to lose, and our dreams of what we could do with that wealth, plus plain old loss aversion, can subtly push every EA towards supporting whatever that predicted wealth depends upon. This can be anything from a specific product, to Silicon Valley, to the capitalist economic system.
  • The severity of this depends on how the billionaire’s wealth is structured. Bill Gates’ wealth is mainly in the stock market, not in Microsoft, so the effect is weaker (though it could still tie people to wanting the stock market to do well).
  • Sometimes supporting a specific product might be worth it, because EA causes would get much more funding if it prospers. Sometimes, though, it definitely isn’t.
  • Be explicit with yourself about why you have a positive opinion of an organisation, or of Silicon Valley. Don’t let your positive feelings about the money something brings to EA cloud your opinion of the thing as a whole.
  • Potential EAs or EA collaborators could legitimately be put off EA if they experience it as full of Silicon Valley devotees who are blind to that place’s faults.
  • To the extent you value truth-seeking as a core value of EA, this also damages that.
  • My overall takeaway boils down to the common rationalist message: know why you believe what you believe.
  • I’m not arguing against EA allying with billionaire donors in this post: in my opinion, the amount of money they could contribute to high-impact causes outweighs this concern.
  • However, I think we as a community should be more aware of the subtle impacts of billionaire funding. Don’t blind yourself to the flaws of a company, culture or economic system because it currently benefits EA. Make clear to yourself what you think about EA funding sources, and, most importantly, remember that positive impact (and whatever intangible values you also believe in!) is what ultimately matters.

Smaller thoughts:

Posts about EA funding should be clearer that projected funding is only a prediction, and should lay out other possible scenarios, to reduce the chance that people grow too attached to the big number.

  • I think Ben Todd’s post about EA having 50 billion made people grow attached to EA having lots of money - when it was nothing but pledged donations that didn’t really exist!
  • I know EAs who made decisions based on the large amount of money pledged to EA and I suspect some of them didn’t give much thought to alternative worlds.
  • My suggestions for framing future EA funding situations:  
    • Explain various other scenarios, as they might not be apparent to casual readers. (People also vary in how they think. I’m someone for whom writing really helps me think, so just reading about something doesn’t mean I critique it enough…)
    • Frame it as ‘this is one possible world’, rather than ‘we have this money!’, because losing money feels very bad.

EA needs more people with experience; this is a strike against the ‘talented young people can do anything’ mindset

  • Newspaper articles have drawn attention to the sheer… incompetence of Alameda and FTX. No accounting department? To say nothing of ‘losing’ eight billion dollars.
  • A meme I’ve seen in EA is that young people can do anything if we’re talented and ambitious enough. I’m all for aiming high, but the FTX crisis has made me update towards the need for more professional experience in EA.
  • If an EA organisation runs like a company, then it should learn from the business world, which has had decades to refine marketing, operations, recruiting and so on.
  • This might look like young EAs going into the business world to learn these skills, or EA organisations hiring aligned business professionals. I also think there should be more thought about how important EA-alignment is for each role in an EA org.
  • This might already exist in word of mouth form that hasn’t reached me.

Something about the reaction and its effects on me:

  • The FTX crisis produced emotional effects that scared me.
  • It cost me focus and sleep, almost like a tiny breakup.
  • It took more than a week for me to feel okay again.
  • I read everything I could find on the EA forum and in my Twitter circles.
  • This was really scary. I’d never experienced anything like this before.
  • But hearing from other people experiencing similar effects made me feel much less scared. I hadn’t known this was a thing that could happen, and had felt like something was wrong with me.
  • I now understand why my university sends out support messages when bad things happen in the world.
  • Overall, the crisis opened me to a new part of human experience.

80,000 Hours portrays interviewees in a positive light:

  • One strong feeling I had after the FTX crisis was ‘But the 80k podcast gave a really positive view of him!’
  • In hindsight, this is something 80k does to all its guests.
  • Anecdotally, I’ve heard lots of people strongly disagree with Ian Morris’ ideas in this episode, but the impression it gave me was that ‘this is true and really cool’.
  • 80k could challenge guests a little on the show, ask for the strongest counterarguments to some of their ideas, or issue a disclaimer saying they don’t necessarily wholeheartedly support whoever they host on their podcast.
  • However, challenging people might make guests more hesitant to come on the show.
  • I think there should be more community awareness that Rob Wiblin portraying someone as super cool on 80k doesn’t mean they are fully vetted and supported by the EA community.
    • I think a blog post making clear 80k’s general view towards guests would help, and I also think this should be passed along by word of mouth.