Habryka

Project lead of LessWrong 2.0, often helping the EA Forum with various issues. If something is broken on the site, there's a good chance it's my fault (Sorry!).

Comments

The biggest risk of free-spending EA is not optics or motivated cognition, but grift

I am confused why the title of this post is: "The biggest risk of free-spending EA is not optics or *motivated cognition*, but grift" (emphasis added). As Zvi talks about extensively in his moral mazes sequence, the biggest problem with moral mazes and grifters is that many of their incentives actively point away from truth-seeking behavior and towards creating confusing environments in which it is hard to tell who is doing real work and who is not. If it were just the case that a population of 50% grifters and 50% non-grifters would be half as efficient as a population of 0% grifters and 100% non-grifters, that wouldn't be much of an issue. The problem is that a population of 50% grifters and 50% non-grifters probably has approximately zero ability to get anything done or react to crises, and practically everyone within that group (including the non-grifters) will have terrible models of the world.

I don't think it would be that bad if we just ended up wasting a lot of resources. The more likely outcome, I think, is worse: the presence of grifters will erode our ability to get accurate information about the world and to build accurate shared models of it. The key problem is epistemics, and I feel like your post makes that point pretty well, but then it has a title that actively contradicts that point, which feels confusing to me.

Against “longtermist” as an identity

> (1) Calling yourself “longtermist” bakes empirical or refutable claims into an identity, making it harder to course-correct if you later find out you’re wrong.

Isn't this also true of "Effective Altruist"? From my epistemic vantage point, "longtermist" bakes in many fewer assumptions than "Effective Altruist". I feel like there are just a lot of convergent reasons to care about the future, and the case for doing so seems more robust to me than the case for "you have to try to do the most good" and a lot of the other hidden assumptions in EA.

I think a position of "yeah, agree, I also think people shouldn't call themselves EAs or rationalists, etc." is pretty reasonable and quite defensible, but I feel a bit confused about what your actual stance here is, given the things you write in this post.

EA and the current funding situation

Almost all nonprofit grants require grantees to take very low salaries; there are very few well-paying nonprofit projects. My guess is that EA is the most widely-known community that might pay high salaries for relatively illegible nonprofit projects (and maybe the only widely-known funder/community that pays high salaries for nonprofit projects in general).

EA and the current funding situation

Reading this, I guess I'll just post the second half of my memo here as well, since it has some additional points that seem valuable to the discussion:

When I play forward the future, I can imagine a few different outcomes, assuming that my basic hunches about the dynamics here are correct at all:

  1. I think it would not surprise me that much if many of us fall prey to the temptation to use the wealth and resources around us for personal gain, or as a tool for building our own empires, or come to equate "big" with "good". I think the world's smartest people will generally pick up on us not really aiming for the common good, but I do think we have a lot of trust to spend down, and could potentially keep this up for a few years. I expect this will eventually cause the decline of our reputation and our ability to attract resources and talent, and hopefully something new and good will form from our ashes before the story of humanity ends.
  2. But I think in many, possibly most, of the worlds where we start spending resources aggressively, whether for personal gain or because we really do have a bold vision for how to change the future, the relationships of the central benefactors to the community will change. It's easy to forget that for most of us, the reputation and wealth of the community is ultimately borrowed, and when Dustin, Cari, Sam, Jaan, Eliezer, or Nick Bostrom see how their reputations or resources get used, they will already be on high alert for people trying to take their names and their resources, and will be ready to take them away when it no longer seems like they are obviously being used for public benefit. I think in many of those worlds we will be forced to run projects in a legible way; or we will choose to run them illegibly, and be surprised by how few of the "pledged" resources were ultimately available for them.
  3. And of course in many other worlds, we learn to handle the pressures of an ecosystem where trust is harder to come by, and we scale, and find new ways of building trust, and take advantage of the resources at our fingertips.
  4. Or maybe we split up into different factions and groups, and let many of the resources that we could reach go to waste, as they ultimately get used by people who don't seem very aligned to us, but some of us think this loss is worth it to maintain an environment where we can think more freely and with less pressure.

Of course, all of this is likely to be far too detailed to be an accurate prediction of what will happen. I expect reality will successfully surprise me, and I am not at all confident I am reading the dynamics of the situation correctly. But the above is where my current thinking is at, and is the closest to a single expectation I can form, at least when trying to forecast what will happen to people currently in EA leadership.

To also take a bit more of an object-level stance: I currently, very tentatively, believe this shift is not worth it. I don't really have any plans that seem hopeful or exciting to me that scale with a lot more money or resources, and I would really prefer to spend more time without needing to worry about full-time people scheming about how to get specifically me to like them.

However, I do see the hope and potential in actually going out and spending the money and reputation we have to maybe get much larger fractions of the world's talent to dedicate themselves to ensuring a flourishing future and preventing humanity's extinction. I have inklings and plans that could maybe scale. But I am worried that I've already started primarily trying to answer the question "but what plans can meaningfully absorb all this money?" instead of "but what plans actually have the highest chance of success?", and that this substitution has made me worse, not better, at actually solving the problem.

I think historically we've lacked important forms of ambition. And I am excited about us actually thinking big. But I currently don't know how to do it well. Hopefully this memo will make the conversations about this better, and maybe will help us orient towards this situation more healthily.

EA and the current funding situation

I feel like this post mostly doesn't talk about what feels to me like the most substantial downside of trying to scale up spending in EA and of the increased availability of funding.

I think the biggest risk of the increased availability of funding, and of the general increase in scale, is that it will create a culture in which people are incentivized to act more deceptively towards others, and that it will attract many people who are much more open to deceptive action as a way to capture the resources we currently have.

Here are some paragraphs from an internal memo I wrote a while ago that tried to capture this: 

========

I think it was Marc Andreessen who first hypothesized that startups usually go through two very different phases:

  1. Pre product-market fit: At this stage, you have some inkling of an idea, or some broad domain that seems promising, but you don't yet really have anything that solves a really crucial problem. This period is characterized by small teams working from their inside view, and by a shared, tentative, malleable vision that is often hard to explain to outsiders.
  2. Post product-market fit: At some point you find a product that works for people. The transition here can take a while, but by the end of it, you have customers and users banging on your door relentlessly to get more of what you have. This is the time of scaling. You don't need to hold a tentative vision anymore, and your value proposition is clear to both you and your customers. Now is the time to hire people, scale up, and make sure you don't let the product-market fit you've discovered go to waste.

I think it was Paul Graham or someone else close to YC (or maybe Ray Dalio) who said something like the following (NOT A QUOTE, since I currently can't find the direct source):

> The early stages of an organization are characterized by building trust. If your company is successful, and reaches product-market fit, these early founders and employees usually go on to lead whole departments. Use these early years to build trust and stay in sync, because when you are a thousand-person company, you won't have the time for long 10-hour conversations when you hang out in the evening.

> As you scale, you spend down that trust that you built in the early days. As you succeed, it's hard to know who is here because they really believe in your vision, and who just wants to make sure they get a big enough cut of the pie. That early trust is what keeps you agile and capable, and frequently, when we see founders leave an organization, and with them those crucial trust relationships, we see the organization ossify, internal tensions increase, and the ability to respond effectively to crises and changing environments get worse.

It's hard to say how well this model actually applies to startups or young organizations (it matches some of my observations, though definitely far from perfectly), and it's even more dubious how well it applies to systems like our community, but my current model is that it captures something pretty important.

Whether we want it or not, I think we are now likely in the post-product-market-fit part of the lifecycle of our community, at least when it comes to building trust relationships and onboarding new people. We have become high-profile enough, have enough visible resources (especially with FTX's latest funding announcements), and have gotten involved in enough high-stakes politics, that if someone shows up next year at EA Global, you can no longer confidently know whether they are there because they have a deeply shared vision of the future with you, or because they want to get a big share of the pie that seems to be up for grabs around here.

I think in some sense that is good. When I see all the talk about megaprojects, increasing people's salaries, and government interventions, I feel excited and hopeful that maybe, if we play our cards right, we could actually bring some measurable fraction of humanity's ingenuity and energy to bear on preventing humanity's extinction and steering us towards a flourishing future; and most of those people will, of course, be more motivated by their own self-interest than by altruism.

But I am also afraid that with all of these resources around, we are transforming our ecosystem into a market for lemons. That we will see a rush of ever greater numbers of people into our community, far beyond our ability to culturally onboard them, and that nuance and complexity will have to fall by the wayside in order to maintain any sense of order and coherence.

I think it is not implausible that, for a substantial fraction of the leadership of EA, within 5 years there will be someone in the world whose full-time job and top priority it is to figure out how to write a proposal, or give you a pitch at a party, or write a blogpost, or strike up a conversation, that will cause you to give them money, power, or status. For many months, they will sit down several days a week and ask themselves "how can I write this grant proposal in a way that person X will approve of?" or "how can I impress these people at organization Y so that I can get a job there?", and they will write long Google Docs to their colleagues about their models and theories of you, and spend dozens of hours thinking specifically about how to get you to do what they want, while drawing up flowcharts that include your name, your preferences, and your interests.

I think almost every publicly visible billionaire has whole ecosystems springing up around them that try to do this. I know some of the details here for Peter Thiel and the "Thielosphere", which seems to have a lot of these dynamics. Almost any academic at a big lab will openly tell you that among the most crucial things a new student learns when they join is how to write grant proposals that actually get accepted. When I ask academics in competitive fields about the content of the lunch conversations in their labs, the fraction of their cognition and conversation that goes specifically to "how do I impress tenure review committees and grant committees?" and "how do I network my way into an academic position that allows me to do what I want?" ranges from 25% to 75% (with the median around 50%).

I think there will still be real opportunities to build new and flourishing trust relationships, and I don't think it will be impossible to really come to trust someone who joins our efforts after we have become 'cool', but I do think it will be harder. I also think we should cherish and value the trust relationships we do have between the people who got involved earlier, because I do think that lack of doubt about why someone is here is a really valuable resource, and one I expect is more and more likely to be a bottleneck in the coming years.

EA is more than longtermism

Yeah, the Charity Entrepreneurship grant is what I was talking about. But yeah, classifying that one as meta isn't crazy to me, though I think I would classify it more as Global Poverty (since I don't think it involved any general EA community infrastructure).

EA is more than longtermism

Oh, I get it now. That seems like a misleading summary, given that that program was primarily aimed at EA community infrastructure (which received 66% of the funding), the statistic cited here is only for a single grants round, and one of the five concrete examples listed seems to be a relatively big global poverty grant. 

I still expect there to be some skew here, but I would take bets that the actual numbers for EA Grants look substantially less skewed than 1:16.

EA is more than longtermism
  • The EA Grants program granted ~16x more money to longtermist projects than to global poverty and animal welfare projects combined

This seems wrong to me. The LTFF and the EAIF don't get 16x the money that the Animal Welfare and Global Health and Development funds get. Maybe you meant to say that the EAIF has granted 16x more money to longtermist projects?

There are currently more than 100 open EA-aligned tech jobs

I don't think this is true for the safety teams at DeepMind, but I think it was true for some of the safety team at OpenAI, though not all of it (I don't know what the current safety team at OpenAI is like, since most of it left for Anthropic).
