The following is a memo I wrote for the private "PalmCone" conference that Lightcone ran a few months ago, which involved many people active in AI Alignment and EA community building (and which I was sadly unable to attend myself for visa reasons). I previously posted it as two comments on another post, but people kept asking me for a more easily linkable version, so here it is as a top-level post.


Epistemic status: Vague models that try to point at something. Many details are likely wrong. If I had more time I would say more accurate things, but I wanted to get this out before you all get started with the retreat. 

I think it was Marc Andreessen who first hypothesized that startups usually go through two very different phases:

  1. Pre Product-market fit: At this stage, you have some inkling of an idea, or some broad domain that seems promising, but you don't yet have anything that solves a truly crucial problem. This period is characterized by small teams working on their inside-view, and a shared, tentative, malleable vision that is often hard to explain to outsiders.
  2. Post Product-market fit: At some point you find a product that works for people. The transition here can take a while, but by the end of it, you have customers and users banging on your door relentlessly to get more of what you have. This is the time of scaling. You don't need to hold a tentative vision anymore, and your value proposition is clear to both you and your customers. Now is the time to hire people, scale up, and make sure you don't let the product-market fit you've discovered go to waste.

I think it was Paul Graham or someone else close to YC (or maybe Ray Dalio) who said something like the following (NOT A QUOTE, since I currently can't find the direct source):

The early stages of an organization are characterized by building trust. If your company is successful, and reaches product-market fit, these early founders and employees usually go on to lead whole departments. Use these early years to build trust and stay in sync, because when you are a thousand-person company, you won't have the time for long 10-hour conversations when you hang out in the evening.

As you scale, you spend down that trust that you built in the early days. As you succeed, it's hard to know who is here because they really believe in your vision, and who just wants to make sure they get a big enough cut of the pie. That early trust is what keeps you agile and capable, and frequently, as founders leave an organization and take those crucial trust relationships with them, we see the organization ossify, internal tensions increase, and the ability to respond effectively to crises and changing environments get worse.

It's hard to say how well this model actually applies to startups or young organizations (it matches some of my observations, though definitely far from perfectly), and even more dubious how well it applies to systems like our community, but my current model is that it captures something pretty important.

Whether we want it or not, I think we are now likely in the post-product-market-fit part of the lifecycle of our community, at least when it comes to building trust relationships and onboarding new people. I think we have become high-profile enough, have enough visible resources (especially with FTX's latest funding announcements), and have gotten involved in enough high-stakes politics, that if someone shows up next year at EA Global, you can no longer confidently know whether they are there because they have a deeply shared vision of the future with you, or because they want to get a big share of the pie that seems to be up for the taking around here.

I think in some sense that is good. When I see all the talk about megaprojects and increasing people's salaries and government interventions, I feel excited and hopeful that maybe, if we play our cards right, we could actually bring any measurable fraction of humanity's ingenuity and energy to bear on preventing humanity's extinction and steering us towards a flourishing future, and most of those people will of course be more motivated by their own self-interest than by altruistic motivation.

But I am also afraid that with all of these resources around, we are transforming our ecosystem into a market for lemons. That we will see a rush of ever greater numbers of people into our community, far beyond our ability to culturally onboard them, and that nuance and complexity will have to be left by the wayside in order to successfully maintain any sense of order and coherence.

I think it is not implausible that for a substantial fraction of the leadership of EA, within 5 years, there will be someone in the world whose full-time job and top priority it is to figure out how to write a proposal, or give you a pitch at a party, or write a blogpost, or strike up a conversation, that will cause you to give them money, or power, or status. For many months, they will sit down many days a week and ask themselves the question "how can I write this grant proposal in a way that person X will approve of" or "how can I impress these people at organization Y so that I can get a job there?", and they will write long Google Docs to their colleagues about their models and theories of you, and spend dozens of hours thinking specifically about how to get you to do what they want, while drawing up flowcharts that will include your name, your preferences, and your interests.

I think almost every publicly visible billionaire has whole ecosystems spring up around them that try to do this. I know some of the details here for Peter Thiel, and the "Thielosphere", which seems to have a lot of these dynamics. Almost any academic at a big lab will openly tell you that among the most crucial pieces of knowledge any new student learns when they join is how to write grant proposals that actually get accepted. When I ask academics in competitive fields about the content of their lunch conversations in their labs, the fraction of their cognition and conversations that goes specifically to "how do I impress tenure review committees and grant committees" and "how do I network myself into an academic position that allows me to do what I want" ranges from 25% to 75% (with the median around 50%).

I think there will still be real opportunities to build new and flourishing trust relationships, and I don't think that it will be impossible for us to really come to trust someone who joins our efforts after we have become 'cool,' but I do think it will be harder. I also think we should cherish and value the trust relationships we do have between the people who got involved with things earlier, because I do think that lack of doubt about why someone's here is a really valuable resource, and one that I expect is more and more likely to be a bottleneck in the coming years.

When I play forward the future, I can imagine a few different outcomes, assuming that my basic hunches about the dynamics here are correct at all:

  1. I think it would not surprise me that much if many of us do fall prey to the temptation to use the wealth and resources around us for personal gain, or as a tool towards building our own empire, or come to equate "big" with "good". I think the world's smartest people will generally pick up on us not really aiming for the common good, but I do think we have a lot of trust to spend down, and could potentially keep this up for a few years. I expect eventually this will cause the decline of our reputation and ability to really attract resources and talent, and hopefully something new and good will form from our ashes before the story of humanity ends.
  2. But I think in many, possibly most, of the worlds where we start spending resources aggressively, whether for personal gain, or because we do really have a bold vision for how to change the future, the relationships of the central benefactors to the community will change. I think it's easy to forget that for most of us, the reputation and wealth of the community is ultimately borrowed, and when Dustin, Cari, Sam, Jaan, Eliezer, or Nick Bostrom see how their reputation or resources get used, they will already be on high alert for people trying to take their name and their resources, and be ready to take them away when it seems like they are no longer obviously being used for public benefit. I think in many of those worlds we will be forced to run projects in a legible way; or we will choose to run them illegibly, and be surprised by how few of the "pledged" resources were ultimately available for them.
  3. And of course in many other worlds, we learn to handle the pressures of an ecosystem where trust is harder to come by, and we scale, and find new ways of building trust, and take advantage of the resources at our fingertips.
  4. Or maybe we split up into different factions and groups, and let many of the resources that we could reach go to waste, as they ultimately get used by people who don't seem very aligned with us, but some of us think this loss is worth it to maintain an environment where we can think more freely and with less pressure.

Of course, all of this is likely to be far too detailed to be an accurate prediction of what will happen. I expect reality will successfully surprise me, and I am not at all confident I am reading the dynamics of the situation correctly. But the above is where my current thinking is at, and is the closest to a single expectation I can form, at least when trying to forecast what will happen to people currently in EA leadership.

To also take a bit more of an object-level stance: I currently, very tentatively, believe this shift is not worth it. I don't actually have any plans that seem hopeful or exciting to me that genuinely scale with a lot more money or resources, and I would really prefer to spend more time without needing to worry about full-time people scheming about how to get specifically me to like them.

However, I do see the hope and potential in actually going out and spending the money and reputation we have to maybe get much larger fractions of the world's talent to dedicate themselves to ensuring a flourishing future and preventing humanity's extinction. I have inklings and plans that could maybe scale. But I am worried that I've already started trying to primarily answer the question "but what plans can meaningfully absorb all this money?" instead of the question of "but what plans actually have the highest chance of success?", and that this substitution has made me worse, not better, at actually solving the problem.

I think historically we've lacked important forms of ambition. And I am excited about us actually thinking big. But I currently don't know how to do it well. Hopefully this memo will make the conversations about this better, and maybe will help us orient towards this situation more healthily.

Comments

But I am also afraid that ... we will see a rush of ever greater numbers of people into our community, far beyond our ability to culturally onboard them

I've had a model of community building at the back of my mind for a while that's something like this:

"New folks come in, and pick up knowledge/epistemics/heuristics/culture/aesthetics from the existing group, for as long as their "state" (wrapping all these things up in one number for simplicity) is "less than the community average". But this is essentially a one way diffusion sort of dynamic, which means that the rate at which newcomers pick stuff up from the community is about proportional to the gap between their state and the community state, and proportional to the size of community vs number of relative newcomers at any given time."

The picture this leads to is kind of a blackjack situation. We want to grow as fast as we can, for impact reasons. But if we grow too fast, we can't onboard people fast enough, the community average starts dropping, and seems unlikely to recover (we go bust). On this view, figuring out how to "teach EA culture" is extremely important - it's a limiting factor for growth, and failure due to going bust is catastrophic while failure from insufficient speed is gradual.
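To make this dynamic a bit more concrete, here is a minimal simulation sketch of the one-way diffusion picture described above. The functional form, the growth rates, and the `mixing` parameter are all illustrative assumptions of mine, not anything specified in the comment:

```python
# A minimal, illustrative sketch of the one-way diffusion model above.
# All parameter names and values are assumptions for illustration only.

def simulate(years=10, members=100, growth_rate=0.3,
             community_avg=1.0, newcomer_state=0.0, mixing=0.8):
    """Track the average cultural 'state' of a community that grows by
    `growth_rate` per year. Newcomers arrive at `newcomer_state` and close
    part of the gap to the community average, at a rate that shrinks as
    newcomers start to outnumber existing members."""
    for year in range(1, years + 1):
        newcomers = int(members * growth_rate)
        dilution = members / (members + newcomers)  # existing members vs. newcomers
        onboarded = newcomer_state + mixing * dilution * (community_avg - newcomer_state)
        # The new community average is the member-weighted mean.
        community_avg = (members * community_avg + newcomers * onboarded) / (members + newcomers)
        members += newcomers
        print(f"year {year}: members={members}, average state={community_avg:.2f}")
    return community_avg

simulate(growth_rate=0.3)  # modest growth: the average erodes slowly
simulate(growth_rate=1.0)  # doubling every year: the average drops fast ("going bust")
```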

Currently prototyping something at the Claremont uni group to try and accelerate this. Seems like you've thought about this sort of thing a lot - if you've got time to give feedback on a draft, that would be much appreciated.

Related to that is "eternal September" https://en.wikipedia.org/wiki/Eternal_September. Each September, when new students joined there was a period where the new users has not learnt the culture and norms, but new users being the minority they did learn the norms and integrate.

Around 1993, a flood of new users overwhelmed the existing culture of online forums and the ability to enforce existing norms, and because of the massive and constant influx, the norms and culture were permanently changed.

Why would the community average dropping mean we go bust? I'd think our success is more related to the community total. Yes, there are some costs to having more people around who don't know as much, but it's a further claim that these would outweigh the benefits.

Yup, existing EAs do not disappear if we go bust in this way. But I'm pretty convinced that it would still be very bad. Roughly, the community dies, even if the people making it up don't vanish. Trust/discussion/reputation dry up, the cluster of people who consider themselves "EA" is now very different from the current thing, and that cluster kinda starts doing different stuff on its own. Further community-building efforts just grow the new thing, not "real" EA.

I think in this scenario the best thing to do is for the core of old-fashioned EAs to basically disassociate from this new thing, come up with a different name/brand, and start the community-building project over again.

Are there responses to this memo that people feel comfortable sharing? (I'm particularly interested in responses from the retreat itself, but would also like to see other comments here.)

I (and perhaps other readers) am grappling, or starting to grapple, with the felt realization of this quite incredible responsibility. And I, at least, am somewhat uncertain about whether I'm strong enough to bear this responsibility well.

I think this post is pointing to some true dynamics. I also worry that posts like these act as a self-fulfilling prophecy: degrading trust in advance because there is concern that trust will decrease in the future (similar to how expectations that a currency will be worth less in the future cause that to become more true).

I'm not sure what the balance is between flagging these concerns and exacerbating the issues around trust in the future (this post seems net positive to me, but it still felt like something worth saying here anyway).

Yeah, I am also worried about this. I don't have a great solution at the moment.

I have written up a draft template post on the importance of trust within the community (and trust with others we might want to cooperate with in the future, e.g. the people who made that UN report on future generations mattering a tonne happen).

Let me know if you would like a link; anyone reading this is also very welcome to reach out!

Feedback on the draft content/points and also social accountability are very welcome.

A quick disclaimer: I don't have a perfect historical track record of always doing the things I believe are important, so there is some chance I won't finish fleshing the post out or actually post it (though I've been pretty good at doing my very high priority things for the last couple of years, and this seems reasonably likely to remain pretty high on my priority list until I post it).

I will write a couple more paragraphs on why I think this post might help as a reply to this comment. 

I think a necessary condition for us keeping a lot of the amazing trust we have in this community is that we believe that trust is valuable. I get that grifters are going to be an issue. I also think that grifters are going to have a much easier time if there isn't a lot of openness and transparency within the movement.

Openness and transparency, as we've seen historically, seem only possible with high degrees of trust.

Posting about the importance of trust seems like a good starting point for getting people on board with the idea that the things that foster trust are worth doing (I think the things that foster trust tend to do so because they are good signals/can help us tell grifters and trustworthy people apart, so this sort of thing kills two birds with one stone).

Do I upvote because I found this post valuable or do I abstain or downvote because I think too much content like this could be bad in the future so I don’t want to incentivise more of it? 🧐🤔😅

A solution that doesn't actually work but might be slightly useful: slow the lemons by making EA-related funding less appealing than the alternative.

One specific way to do this is to pay less than industry pays for similar positions: altruistic pay cut. Lightcone, the org Habryka runs, does this: “Our current salary policy is to pay rates competitive with industry salary minus 30%.” At a full-time employment level, this seems like one way to dissuade people who are interested in money, at least assuming they are qualified and hard working enough to get a job in industry with similar ease.

Additionally, it might help to frame university group organizing grants in the big scheme of the world. For instance, as I was talking to somebody about group organizing grants, I reminded them that the amount of money they would be making (which I probably estimated at a couple thousand dollars per month) is peanuts compared to what they'll be earning in a year or two when they graduate from a top university with a median salary of ~$80k. It also seems relevant to emphasize that you actually have to put time and effort into organizing a group for a grant like this; it's not free money – it's money in exchange for time/labor. Technically it's possible to do nothing and pretty much be a scam artist, but I didn't want to say that.

This solution doesn't work for a few reasons. One is that it only focuses on one issue – the people who are actually in it for themselves. I expect we will also have problems of well-intentioned people who just aren't very good at stuff. Unfortunately, this seems really hard to evaluate, and many of us deal with imposter syndrome, so self-evaluation/selection seems bad.

This solution also doesn't work because it's hard to assess somebody's fit for a grant, meaning it might remain easier to get EA-related money than other money. I claim that it is hard to evaluate somebody's fit for a grant in large part because feedback loops are terrible. Say you give somebody some money to do some project. Many grants have some product or deliverable that you can judge for its output quality, like a research paper. Some EA-related grants have this, but many don't (e.g., paying somebody to skill up might have deliverables like a test score, but might not). Without some form of deliverable, how do you know if your grant was any good? Idk, maybe somebody who does grantmaking has an idea on this. More importantly, a lot of the bets people in this community are taking are low chance of success, high EV. If you expect projects to fail a lot, then failure on past projects is not necessarily a good indicator of somebody's fit for new grants (in fact it's likely good to keep funding high-EV, low-P(success) projects, depending on your risk tolerance). So this makes it difficult to actually make EA-related money harder to get than other money.
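As a rough illustration of why a single failed project is such a weak signal in this low-P(success), high-EV regime, here is a toy Bayesian calculation. All the probabilities and the payoff multiple below are numbers made up for illustration, not anything taken from the comment above:

```python
# Toy numbers, purely illustrative: if even strong grantees usually fail,
# one observed failure barely updates your estimate of grantee quality.
p_success_strong = 0.15  # assumed success rate for a strong grantee
p_success_weak = 0.05    # assumed success rate for a weak grantee
prior_strong = 0.50      # prior belief that a given grantee is strong

# Bayes' rule: P(strong | one failed project)
p_fail = prior_strong * (1 - p_success_strong) + (1 - prior_strong) * (1 - p_success_weak)
posterior_strong = prior_strong * (1 - p_success_strong) / p_fail
print(f"P(strong | one failure) = {posterior_strong:.2f}")  # ~0.47, barely moved from 0.50

# And the bet can still be worth taking: a 15% chance of a 100x payoff
# has an expected value of 15x the grant's cost.
print(f"EV multiple for the strong grantee: {p_success_strong * 100:.0f}x")
```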

Thanks for this post! I've been wondering about how to think about this too.

Some burgeoning ideas: 

  • Maybe try to understand new people's moral priorities, e.g. understand if they 'score' high on "Expansive Altruism" and "Effectiveness-focused" scales. If they actually genuinely 'score' [1] high on those moral inclinations, I would tend to trust them more.
  • Maybe start a clearance process, in the sense of checking people's backgrounds, etc. National security organizations also have to deal with this type of alignment problem.
  • Teach people how to be intense, ambitious, and actually optimize for the right thing. I think that people may be really interested in doing the highest-impact thing, but they don't have the "thinking methods" or the general "super ambitious social environment" around those. People who push for high intensity and know what to prioritize are extremely rare and precious. Having workshops or online classes about successful (large) project prioritization, calculating the EV of a project, increasing its ambition, and calculating and reducing risks may be useful.
[1] Noting that it would be easy to Goodhart those existing scales. So this would be mostly through conversations and in-depth interactions.

It's unclear to me whether you are saying that the potentially huge number of new people in EA will try to take advantage of EA resources for personal gain or that WE, who are currently in EA for altruistic reasons, will do so. The former sounds likely to me, the latter doesn't.

I might be missing crucial context here since I'm not familiar with the Thielosphere and all that, but overall I also don't think a huge number of new, unaligned people will be the downfall of EA. As long as leadership, thought leaders, and grantmakers in EA stay aligned, it may be harder for them to determine whom to give a grant (or a stamp of approval) to, but wouldn't that just lead to fewer grants? Which seems bad, but not like the end?

Or are you imagining highly intelligent people with impressive resumes who strategically aim to hijack EA resources for their aims and get into important positions in EA?

I think cooperative equilibria are fragile. For example, as salaries have increased in EA, I've seen many people who previously took very low salaries now feel much worse about making that additional sacrifice, because their less-aligned colleagues are paid a lot more than them.

Similarly, I've seen many people who really cared about honesty, who ended up being in environments where honesty was less valued, and then quickly also adopted less honest norms. 

I think EA leadership has a lot of people with strong moral character, but I think assuming that all of these people will completely ignore the incentives around them is overoptimistic. I think we are very likely going to see a shift in the norms among people who have historically acted very selflessly. (I don't feel super happy about using the word "selflessly" here, but explaining why would take a long time, so I'll leave this parenthetical as a bookmark.)

Separately, I also think that yes, we are going to get very highly intelligent people with impressive resumes who will strategically aim to hijack EA resources for their own aims and get into important positions in EA. I think studying any other social movement, religion, or large social organization will reveal many of those people, and the assumption that we will not be under similar pressures strikes me as quite unlikely to hold.

Ah, the thing about fragile cooperative equilibria makes sense to me.

I'm not as sure as you that this shift would happen to core EA though. I could also imagine that current EAs will have a very allergic reaction to new, unaligned people coming in and trying to take advantage of EA resources. I imagine something like a counterculture forming where aligned EAs start purposefully setting themselves apart from people who're only in it for a piece of the pie, by putting even more emphasis on high EA alignment. I believe I've already seen small versions of this happening in response to non-altruistic incentives appearing in EA.

The faster the flood of new people and change of incentives happens, the more confident I am in this view. Overall, I'm not extremely confident at all though.

On your last point, if I understand this right, this is not the thing you're most worried about though? Like, these people hijacking EA are not the mechanism by which EA may collapse, in your view?
