Leopold Aschenbrenner is starting a cross between a hedge fund and a think tank for AGI. I have read only the sections of Situational Awareness most relevant to this project, and I don't feel like I fully understand all the implications, so I could end up being quite wrong. Indeed, I've already updated towards a better and more nuanced understanding of Aschenbrenner's points, in ways that have made me less concerned than I was to begin with. But I want to say publicly that the hedge fund idea makes me nervous.
Before I give my reasons, I want to say that it seems likely most of the relevant impact comes not from the hedge fund but from the influence the ideas from Situational Awareness have on policymakers and various governments, as well as the influence and power Aschenbrenner and any cohort he builds wield. This influence may come from this hedge fund or be entirely incidental to it. I mostly do not address this here, but it does make all of the below less important.
I also believe that some (though not all) of my concerns about the hedge fund are based on specific disagreements with Aschenbrenner's views. I discuss some of those below, but this is not a full rebuttal (and on many of the points of disagreement I don't yet feel confident in my view). There is still plenty to do to hash out the actual empirical questions at hand.
Why I am nervous
A hedge fund making AI-related investments means Aschenbrenner and his investors will gain financially from more and accelerated AGI progress. This seems to me to be one of the most important dynamics (excluding the points about influence above). It creates an incentive to push for more AGI progress, even at the cost of safety, which seems quite concerning. I will say that Aschenbrenner has a good track record here of turning down money: he declined to sign an NDA at OpenAI despite the loss of equity.
Aschenbrenner expresses strong support for the liberal democratic world to maintain a lead on AI advancement, and ensure that China does not reach an AI-based decisive military advantage over the United States[1]. The hedge fund, then, presumably aims to both support the goal of maintaining an AI lead over China and profit off of it. In my current view, this approach increases race dynamics and increases the risks of the worst outcomes (though my view on this has softened somewhat since my first draft, for reasons similar to what Zvi clarifies here[2]).
I especially think that it risks unnecessary competition when cooperation - the best outcome - could still be possible. It seems notable, for example, that no Chinese version of the Situational Awareness piece has come to my attention; going first in such a game both ensures you are first and that the game is played at all.
It’s also important that the investors (e.g. Patrick Collison) appear to be more focused on economic and technological development, and less concerned about risks from AI. The incentives of this hedge fund are therefore likely to point towards progress and away from slowing down for safety reasons.
There are other potential lines of thought here that I have not yet fleshed out, including:
The value of aiming to orient the US government and military attention to AGI (seems like a huge move with unclear sign)
The degree to which this move is unilateralist on Aschenbrenner’s part
How much money could be made and how much power the relevant people (e.g. Aschenbrenner and his investors) will have through investment and being connected to important decisions.
If a lot of money and/or power could be acquired, especially over AGI development, then there’s a healthy default skepticism I think should be applied to their actions and decision-making.
Specifics about Aschenbrenner himself. Different people in the same role would take very different actions, so specifics about his views, ways of thinking, and profile of strengths and weaknesses may be relevant.
Ways that the hedge fund could in fact be a good idea:
EA and AI causes could really use funder diversification. If Aschenbrenner intends to use the money he makes to support these issues, that could be very valuable (though I’ve certainly become somewhat more concerned with moonshot “become a billionaire to save the world” plans than I used to be).
The hedge fund could position Aschenbrenner to have a deep understanding of and connections within the AI landscape, making the think tank outputs very good, and causing important future decisions to be made better.
I’m interested in hearing takes, ways I could be wrong, fleshing out of my arguments, or any other thoughts people have relevant to this. Happy to have private chats in DMs to discuss as well.
To be clear, Aschenbrenner wants that lead to exist to avoid a tight race in which safety and caution are thrown to the winds. If we can achieve that lead primarily through infosecurity (something he emphasizes), then added risks are low; but I think the views expressed in Situational Awareness also imply the importance of staying technologically ahead of China as their AI research improves. This comes with precisely the risks of creating and accelerating a race of this nature.
Additionally, when I read his description of the importance of even a two month lead, it implied to me that if the longer, more comfortable lead is lost, there will be strong reasons for the US to advance quickly so as to avoid China reaching superintelligence and subsequent military dominance first (which doesn’t mean he thinks we should actually do this if the time came). This seems to fairly explicitly describe the tight race scenario. I don’t think Aschenbrenner believes this would be a good situation to be in, but nonetheless thinks that’s what the true picture is.
From Zvi’s post: “He confirms he very much is NOT saying this: The race to ASI is all that matters. The race is inevitable. We might lose. We have to win. Trying to win won’t mean all of humanity loses. Therefore, we should do everything in our power to win.
I strongly disagree with this first argument. But so does Leopold. Instead, he is saying something more like this:
ASI, how it is built and what we do with it, will be all that matters. ASI is inevitable. A close race to ASI between nations or labs almost certainly ends badly. Our rivals getting to ASI first would also be very bad. Along the way we by default face proliferation and WMDs, potential descent into chaos. The only way to avoid a race is (at least soft) nationalization of the ASI effort. With proper USG-level cybersecurity we can then maintain our lead. We can then use that lead to ensure a margin of safety during the super risky and scary transition to superintelligence, and to negotiate from a position of strength.”
Aschenbrenner and his investors will gain financially from more and accelerated AGI progress.
Not necessarily - they could just invest in publicly traded companies where the counterfactual impact is not very large (even a large hedge fund buying, say, some Google stock wouldn't much move the market cap). They could also be shorting certain companies, which might reduce economically inefficient overinvestment into AI, which might also have x-risk externalities. It would be different if he ran a VC fund and invested in getting the next, say, Anthropic off the ground. Especially if the profits are donated and used for mission hedging, this might be good.
The hedge fund could position Aschenbrenner to have a deep understanding of and connections within the AI landscape, making the think tank outputs very good, and causing important future decisions to be made better.
Yes, the outputs might be better as the incentives are aligned: the hedge fund / think tank has 'skin in the game' to get the correct answers on the future of AI progress (though maybe some big banks are also trying to move markets with their publications).
For your second point, you should skeptically expect their publications/influence to be corrupted easily, as the information they'd put out would be very connected to their investing alpha. The corruption could take the form of omission of key details, under-hyping things in which they were unable to get exposure (/investment), and biases like that.
Thanks for writing this. I agree that this makes me nervous. Various thoughts:
I think I've slowly come to believe something like, 'sufficiently smart people can convince themselves that arbitrary morally bad things are actually good'. See, e.g., the gymnastics meme, but also there's something deeper, like 'many of the evil people throughout history have believed that what they're doing is good, actually'. I think the response to this should be deep humility and moral risk aversion. Having a big-brain argument that sounds good to you about why what you're doing is good is actually extremely weak evidence about the goodness of the thing. I think it would probably be better if EAs took this more seriously and didn't do things like starting an AGI company or starting an AGI hedge fund. An AGI hedge fund seems even worse than Anthropic (where I think the argument for cutting-edge research is medium-brained and at least somewhat true empirically). The reasons Chana lists for why the hedge fund could be a good idea all seem fairly weak — they would be stronger if Leopold was saying these were part of the plan.
The unilateralist nature and relationship to race dynamics also worries me. Maybe there would have been AGI hedge funds anyway, and maybe there would have been lengthy blog posts that tell the USG and China that they should be in a massive race on AI — but those things sure weren’t being done before Leopold did it.
I don't think I have strong reasons to actively trust Leopold. I don't know him, and I think my baseline trust isn't super high nowadays. By "trust" I mean some combination of being of good character, having correct judgment, and having good epistemic practices to make up for poor judgment. Choosing to lose OpenAI equity is a positive sign, but I'm not sure how big. So this cashes out in not making much of an update on the value of an AGI hedge fund — something that seems initially medium bad.
I think it’s sus to write up a blog post telling people AGI is coming soon while starting an investment firm that will benefit from people thinking AGI is coming soon. This is clearly a case of conflicting interests. It’s not necessarily a bad thing — there are good arguments around putting your money where your mouth is and taking actions based on big if true ideas, but it is a warning flag.
I could imagine a normal person reading Situational Awareness, including the part about Superalignment, and then hearing that the author is starting an AGI hedge fund, and their response being “WTF?! You believe all this about the intelligence explosion and how there are critical safety problems we’re not on track to solve, and you’re starting a hedge fund?” This response makes a lot of sense to me (and I do think I’ve heard it somewhere, though I’m not sure where). I think ‘starting an AGI hedge fund’ is really low on the list of things somebody who cares a lot about superintelligence safety should be doing. So either I’m misunderstanding something, or this is an update that Leopold isn’t as serious about ASI safety as I thought.
I have yet to see any replies from Leopold to people commenting on or responding to Situational Awareness. This seems like bad form for truth-seeking and getting buy-in from EAs, but it may be the norm for general intellectual content.
This is quite half-baked because I think my social circle contains not very many E2G folks, but I have a feeling that when EA suddenly came into a lot more funding and the word on the street was that we were “talent constrained, not funding constrained”, some people earning to give ended up pretty jerked around, or at least feeling that way. They may have picked jobs and life plans based on the earn to give model, where it would be years before the plans came to fruition, and in the middle, they lost status and attention from their community. There might have been an additional dynamic where people who took the advice the most seriously ended up deeply embedded in other professional communities, so heard about the switch later or found it harder to reconnect with the community and the new priorities.
I really don’t have an overall view on how bad all of this was, or if anyone should have done anything differently, but I do have a sense that EA has a bit of a feature of jerking people around like this, where priorities and advice change faster than the advice can be fully acted on. The world and the right priorities really do change, though; I’m not sure what should be done except to be clearer about all this, but I suspect it’s hard to properly convey “this seems like the absolute best thing in the world to do, also next year my view could be that it’s basically useless” even if you use those exact words. And maybe people have done this, or maybe it’s worth trying harder. Another approach would be something like insurance.
A frame I’ve been more interested in lately (definitely not original to me) is that earning to give is a kind of resilience / robustness-add for EA, where more donors just means better ability to withstand crazy events, even if in most worlds the small donors aren’t adding much in the way of impact. Not clear that that nets out, but “good in case of tail risk” seems like an important aspect.
A more out-there idea, sort of cobbled together from a me-idea and a Ben West-understanding, is that, among the many thinking and working styles of EAs, one axis of difference might be "willing to pivot quickly, change their mind and their life plan intensely and often" vs "not as subject to changing EA winds" (not necessarily in tension, but illustrative). Staying with E2G over many years might be related to being closer to the latter; this might be an under-rated virtue and worth leveraging.
I think another example of the jerking people around thing could be the vibes from summer 2021 to summer 2022 that if you weren't exceptionally technically competent with the skills to work on object-level stuff, you should do full-time community building like helping run university EA groups. And then that idea lost steam this year.
Yeah, I think EA just neglects the downside of career whiplash a bit. Another instance is how EA orgs sometimes offer internships where only a tiny fraction of interns will get a job, or hire and then quickly fire staff. In a more ideal world, EA orgs would value rejected & fired applicants much more highly than non-EA orgs, and so low-hit-rate internships and rapid firing would be much less common in EA than outside.
Hmm, this doesn't seem obvious to me – if you care more about people's success then you are more willing to give offers to people who don't have a robust resume etc., which is going to lead to a lower hit rate than usual.
It's an interesting point about the potential for jerking people around and alienating them from the movement and ideals. It could also (maybe) have something to do with having a lot of philosophers leading the movement. It's easier to change from writing philosophically about short-termism ("Doing Good Better") to longtermism ("What We Owe the Future"), or to writing essays about talent constraints over money constraints, but harder to psychologically and practically (although still very possible) switch from being a mid-career global health worker or earning-to-giver to working on AI alignment.
This isn't a criticism, of course it makes sense for the philosophy driving the movement to develop, just highlighting the difference in "pivotability" between leaders and some practitioners and the obvious potential for "jerking people around" collateral as the philosophy evolves.
Also, having lots of young people in the movement who haven't committed years of their life to things can make changing tacks more viable for many and seem more normal, while perhaps it is harder for those who have committed a few years to something. This "willingness to pivot quickly, change their mind and their life plan intensely and often" could be as much about stage of career as it is about personality.
Besides earning-to-give people being potentially "jerked around", there are some other categories worth considering too.
Global health people, as that area's relative importance within the movement seems to have slowly faded.
If (just possibilities) AI becomes far less neglected in general in the next 3 to 5 years, or it becomes apparent that policy work is far more important/tractable than technical alignment, then a lot of people who have devoted their careers to these may be left out in the cold.
Makes sense that there would be some jerk-around in a movement that focuses a lot on prioritization and re-prioritization, with folks who are invested in finding the highest priority thing to do. Career capital takes time to build and can't be re-prioritized at the same speed. Hopefully as EA matures, there can be some recognition that diversification is also important, because our information and processes are imperfect, and so there should be a few viable strategies going at the same time about how to do the most good. This is like your tail-risk point. And some diversity in thought will benefit the whole movement, and thoughtful people pursuing those strategies with many years of experience will result in better thinking, mentorship, and advice to share.
I don't really see a world in which earning to give can't do a whole lot of good, even if it isn't the highest priority at the moment... unless perhaps the negative impacts of the high-earning career in question haven't been thought through or weighed highly enough.
Perhaps making a stronger effort to acknowledge and appreciate people who acted altruistically based on our guesses at the time, before explaining why our guesses are different now, would help? (And for this particular case, even apologizing to EtG people who may have felt scorned?)
I think there's a natural tendency to compete to be "fashion-forward", but that seems harmful for EA. Competing to be fashion-forward means targeting what others will approve of (or what others think others will approve of), as opposed to the object-level question of what actually works.
Maybe the sign of true altruism in an EA is willingness to argue for boring conventional wisdom, or willingness to defy a shift in conventional wisdom if you don't think the shift makes sense for your particular career situation. 😛 (In particular, we shouldn't discount switching costs and comparative advantage. I can make a radical change to the advice I give an aimless 20-year-old, while still believing that a mid-career professional should stay on their current path, e.g. due to hedging/diminishing marginal returns to the new hot thing.)
Burnout is extremely expensive, because it does not just cost time in and of itself but can move your entire future trajectory. If I were writing practical career tips for young EAs, my first headline would be "Whatever you do, don't burn out."
Plenty of people in the EA community have burned out. A small number of us talk about it. Most people, understandably, prefer to forget and move on. Beware this and other selection effects in (a) who is successful enough that you are listening to them in the first place and (b) what those people choose to talk about.
IMO, acknowledging and appreciating the effort people put in is the best way to prevent burnout. Implying that "your career path is boring now" is the opposite. Almost everyone in EA is making some level of sacrifice to do good for others; let's thank them for that!
I was thinking about to what extent NDAs (either non-disclosure or non-disparagement agreements) played a role in the 2018 blowup at Alameda Research (since if there were a lot, that could be a throughline between messiness at Alameda and messiness at OpenAI recently).
Here's what I've collected from public records:
Not mentioned as far as I can tell in Going Infinite
Ben West: "I don’t want to speak for this person, but my own experience was pretty different. For example: Sam was fine with me telling prospective AR employees why I thought they shouldn’t join (and in fact I did do this),[4] and my severance agreement didn’t have any sort of non-disparagement clause. This comment says that none of the people who left had a non-disparagement clause, which seems like an obvious thing a person would do if they wanted to use force to prevent disparagement.[5]" From here
Kerry Vaughan: "Information about pre-2018 Alameda is difficult to obtain because the majority of those directly involved signed NDAs before their departure in exchange for severance payments. I am aware of only one employee who did not. The other people who can speak freely on the topic are early investors in Alameda and members of the EA community who heard about Alameda from those directly involved before they signed their NDAs". From here.
ftxthrowaway: "Lastly, my severance agreement didn't have a non-disparagement clause, and I'm pretty sure no one's did. I assume that you are not hearing from staff because they are worried about the looming shitstorm over FTX now, not some agreement from four years ago." From here (it's a response to the previous)
nbouscal: “I'm the person that Kerry was quoting here, and am at least one of the reasons he believed the others had signed agreements with non-disparagement clauses. I didn't sign a severance agreement for a few reasons: I wanted to retain the ability to sue, I believed there was a non-disparagement clause, and I didn't want to sign away rights to the ownership stake that I had been verbally told I would receive. Given that I didn't actually sign it, I could believe that the non-disparagement clauses were removed and I didn't know about it, and people have just been quiet for other reasons (of which there are certainly plenty).” From here (it's a response to the previous)
Later says "I do think I was probably just remembering incorrectly about this to be honest, I looked back through things from then and it looks like there was a lot of back-and-forth about the inclusion of an NDA (among other clauses), so it seems very plausible that it was just removed entirely during that negotiation (aside from the one in the IP agreement)." Link here.
arthrowaway: "Also no non-disparagement clause in my agreement. FWIW I was one of the people who negotiated the severance stuff after the 2018 blowup, and I feel fairly confident that that holds for everyone. (But my memory is crappy, so that's mostly because I trust the FB post about what was negotiated more than you do.)" From here (it's in the same thread as the above)
Overall this tells a story where NDAs weren't a big part of the Alameda story (since I think Ben West and nbouscal at least left during the 2018 blowup, but folks should correct me if I'm wrong). This is a bit interesting to me.
Just saying what everyone knows out loud (copied over with some edits from a Twitter thread)
Maybe it's worth saying aloud the thing people probably know but that isn't always salient, which is that orgs (and people) who describe themselves as "EA" vary a lot in effectiveness, competence, and values, and using the branding alone will probably lead you astray.
Especially for newer or less connected people, I think it's important to make salient that there are a lot of takes (pos and neg) on the quality of thought and output of different people and orgs, which from afar might blur into "they have the EA stamp of approval"
Probably a lot of thoughtful people think whatever seems shiny in an "everyone supports this" kind of way is bad in a bunch of ways (though possibly net good!), and that granularity is valuable.
I think you should feel very free to ask around to get these takes and see what you find - it's been a learning experience for me, for sure. Lots of this is "common knowledge" to people who spend a lot of their time around professional EAs, so it doesn't even occur to people to say it, plus it's sensitive to talk about publicly. But I think "some smart people in EA think this is totally wrongheaded" is a good prior for basically anything going on in EA.
Maybe at some point we should move to more explicit and legible conversations about each others' strengths and weaknesses, but I haven't thought through all the costs there, and there are many. Curious for thoughts on whether this would be good! (e.g. Oli Habryka talking about people with integrity here)
I think the wiki entry is a pretty good place for this. It's "the canonical place" as it were. I would think it's important to do this rather fairly. I wouldn't want someone to edit a short CEA article with a "list of criticisms", that (believe you me) could go on for days. And then maybe, just because nobody has a personal motivation to, nobody ends up doing this for Giving What We Can. Or whatever. Seems like the whole thing could quickly prove to be a mess that I would personally judge to be not worth it (unsure). I'd rather see someone own editing a class of orgs and adding in substantial content, including a criticism section that seeks to focus on the highest impact concerns.
Features that contribute to heated discussion on the forum
From my observations. I recognize many of these in myself. Definitely not a complete list, and possibly some of these things are not very relevant, please feel free to comment to add your own.
Interpersonal and Emotional
Fear, on all sides (according to me, lots of debates are bravery debates: people on "both sides" feel they are in the minority, fighting against a more powerful majority (and often both are true, just in different ways), and this is really important for understanding the dynamics)
Political backlash
What other EAs will think of you
Just sometimes the experience of being on the forum
Trying to protect colleagues or friends
Speed as a reaction to having strong opinions, or worrying that others will jump on you
Frustration at having to rehash arguments / protect things that should go without saying
Desire to gain approval / goodwill from people you’d like to have hire/fund/etc. you in the future
Desire to sound smart
Desire to gain approval / goodwill from your friends, or people you respect
Pattern matching (correctly or not) to conversations you’ve had before and porting over the emotional baggage from them
Sometimes it helps to assume the people you’re talking to are still trying to win their last argument with someone else
People don't communicate openly their takes on things.
This leads to significant misunderstanding.
This leads to distrust of each other and assumptions of poor intent.
This leads to parties doing more zero-sum or adversarial actions to each other.
When any communication does happen, it's inspected with a magnifying glass (because of how rare it is). It's misunderstood (because of how little communication there has been).
The communicators then think, "What's the point? My communication is misunderstood and treated with hostility." So they communicate less.
Not tracking whether you're being scrupulously truthful, out of a desire to get less criticism
This is so perceptive, relevant and respectfully written, thank you.
people on "both sides" feel in the minority and fighting against a more powerful majority
I've noticed this too and I think another common dynamic is where "both sides" feel like the other side obviously "started it" and so feel justified in responding in kind.
I've also noticed in myself recently this additional layer of upset that sounds something like, "We're supposed to be allies!" I think I need to keep reminding myself that this is just what people do, namely fight with people very much like them but a little bit different*. I think EA's been remarkably good at avoiding much of this over the years and obviously I wish we weren't falling prey to it quite so much right now, but I don't think it's a reason to feel extra upset.
*Here's my favourite dramatisation of this phenomenon.
Thanks for sharing, I think this is a very useful overview of important factors, and I encourage you to share it as a normal post (I mostly miss shortforms like this).
Post: OAI NDA drama. Do we know that Anthropic does not do similar things? I heard something that reassured me, but I no longer know what it was. Interested in what people have heard or know (feel free to DM me), or general takes on the situation.
I want to draw attention to the distinction between current and former employees, since OpenAI was deploying their leverage on ex-staff and keeping that quiet amidst current staff. And the confirmation we have is about current Anthropic staff; we haven't heard from ex-staff yet.
A dynamic I keep seeing is that it feels hard to whistleblow or report concerns or make a bid for more EA attention on things that "everyone knows", because it feels like there's no one to tell who doesn't already know. It’s easy to think that surely this is priced in to everyone's decision making. Some reasons to do it anyway:
You might be wrong about what “everyone” knows - maybe everyone in your social circle does, but not outside. I see this a lot in Bay gossip vs. London gossip - what "everyone knows" is very different in those two places
You might be wrong about what "everyone knows" - sometimes people use a vague shorthand, like "the FTX stuff" and it could mean a million different things, and either double illusion of transparency (you both think you know what the other person is talking about but don’t) or the pressure to nod along in social situations means that it seems like you're all talking about the same thing but you're actually not
Just because people know doesn't mean it's as salient as it should be - people forget, are busy with other things, and so on.
Bystander effect: People might all be looking around assuming someone else has the concern covered because surely everyone knows and is taking the right amount of action on it.
In short, if you're acting based on the belief that there’s a thing “everyone knows”, check that that’s true.
[Caveat: There's an important balance to strike here between the value of public conversation about concerns and the energy that gets put into those public community conversations. There are reasons to take action on the above non-publicly, and not every concern will make it above people’s bar for spending the time and effort to get more engagement with it. Just wanted to point to some lenses that might get missed.]
About going to a hub
A response to: https://forum.effectivealtruism.org/posts/M5GoKkWtBKEGMCFHn/what-s-the-theory-of-change-of-come-to-the-bay-over-the
For people who consider taking or end up taking this advice, some things I'd say if we were having a 1:1 coffee about it:
Being away from home is by its nature intense, this community and its philosophy are intense, and some social dynamics here are unusual. I want you to go in with some sense of the landscape so you can make informed decisions about how to engage.
The culture here is full of energy and ambition and truth telling. That's really awesome, but it can be a tricky adjustment. In some spaces, you'll hear a lot of frank discussion of talent and fit (e.g. people might dissuade you from starting a project not because the project is a bad idea but because they don't think you're a good fit for it). Grounding in your own self worth (and your own inside views) will probably be really important.
People both are and seem really smart. It's easy to just believe them when they say things. Remember to flag for yourself things you've just heard versus things you've discussed at length vs things you've really thought about yourself. Try to ask questions about the gears of people's models, ask for credences and cruxes. Remember that people disagree, including about very big questions. Notice the difference between people's offhand hot takes and their areas of expertise. We want you to be someone who can disagree with high status people, who can think for themselves, who is in touch with reality.
I'd recommend staying grounded with friends/connections/family outside the EA space. Making friends over the summer is great, and some of them may be deep connections you can rely on, but as with all new friends and people, you don't have as much evidence about how those connections will develop over time or with any shifts in your relationships or situations. It's easy to get really attached and connected to people in the new space, and that might be great, but I'd keep track of your level of emotional dependency on them.
We use the word "community", but I wouldn't go in assuming that if you come on your own you'll find a waiting, welcoming, pre-made social scene, or that people will have the capacity to proactively take you under their wing and look out for you and your wellbeing, especially if there are lots of people in a similar boat. I don't want you to feel like you've been promised anything in particular here. That might be up to you to make for yourself.
One thing that's intense is the way that the personal and professional networks overlap, so keep that in mind as you think about how you might keep your head on straight and what support you might need if your job situation changes, you have a bad roommate experience, you date and break up with someone (maybe get a friend's take on the EV of casual hookups or dating during this intense time, given that the emotional effects might last a while and play out in your professional life - you know yourself best and how that might play out for you).
This might be a good place to flag that just because people are EAs doesn't mean they're automatically nice or trustworthy, pay attention to your own sense of how to interact with strangers.
Feeling lonely or ungrounded or uncertain is normal. There is lots of discussion on the forum about people feeling this way and what they've done about it. There is an EA peer support Facebook group where you can post anonymously if you want. If you're in more need than that, you can contact Julia Wise or Catherine Low on the community health team.
As per my other comment, some of this networking is constrained by capacity. Similarly, I wouldn't go in assuming you'll find a mentor or office space or all the networking you want. By all means ask, but also give affordance for people to say no, and respect their time and professional spaces and norms. Given the capacity constraints, I wouldn't be surprised if weird status or competitive dynamics formed, even among people in a similar cohort. That can be hard.
Status stuff in general is likely to come up; there's just a ton of the ingredients for feeling like you need to be in the room with the shiniest people and impress them. That seems really hard; be gentle with yourself if it comes up. On the other hand, that would be great to avoid, which I think happens via emotional grounding, cultivating the ability to figure out what you believe even if high status people disagree and keeping your eye on the ball.
This comment and this post and even many other things you can read are not all the possible information, this is a community with illegibility like any other, people all theoretically interacting with the same space might have really different experiences. See what ways of navigating it work for you, if you're unsure, treat it as an experiment.
Keep your eye on the ball. Remember that the goal is to make incredible things happen and help save the world. Keep in touch with your actual goals, maybe by making a plan in advance of what a great time in the Bay would look like, what would count as a success and what wouldn't. Maybe ask friends to check in with you about how that's going.
My guess is that having or finding projects and working hard on them or on developing skills will be a better bet for happiness and impact than a more "just hang around and network" approach (unless you approach that as a project - trying to create and develop models of community building, testing hypotheses empirically, etc). If you find that you're not skilling up as much as you'd like, or not getting out of the Bay what you'd hoped, figure out where your impact lies and do that. If you find that the Bay has social dynamics and norms that are making you unhappy and it's limiting your ability to work, take care of yourself and safeguard the impact you'll have over the course of your life.
We all want (I claim) EA to be a high trust, truth-seeking, impact-oriented professional community and social space. Help it be those things. Blurt truth (but be mostly nice), have integrity, try to avoid status and social games, make shit happen.
Now, you could argue that either your expectations about this volatility should be compatible with the basic Bayesianism above (such that, e.g., if you think it reasonably likely that you'll have lots of >50% days in the future, you should be pretty wary of saying 1% now), or you're probably messing up. And maybe so. But I wonder about alternative models, too. For example, Katja Grace suggested to me a model where you're only able to hold some subset of the evidence in your mind at once, to produce your number-noise, and different considerations are salient at different times. And if we use this model, I wonder whether how we think about volatility should change.
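For what it's worth, here is a minimal sketch of the consistency constraint the "basic Bayesianism" above points at, under the assumption (mine, not the quote's) that your credence behaves like a martingale under your own expectations, i.e. perfect conservation of expected evidence; the 1% and 50% figures are just the ones from the quote:

```latex
% p_t = your credence in the claim on day t, with p_0 = 0.01 today (the "1% now" from the quote).
% Assumption: ideal Bayesian updating makes (p_t) a martingale under your own forecast
% ("conservation of expected evidence"):
\mathbb{E}\left[ p_{t+1} \mid p_0, \dots, p_t \right] = p_t
% Credences are nonnegative, so Ville's (Doob's) maximal inequality bounds the chance
% of *ever* having a day at or above 50%:
\Pr\left( \sup_t \, p_t \ge 0.5 \right) \le \frac{p_0}{0.5} = \frac{0.01}{0.5} = 2\%
```

So on this idealized model, saying 1% today while expecting lots of future >50% days is inconsistent; Katja Grace's salience model is one way to explain that kind of volatility without attributing it to a straightforward Bayesian error.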
I'm sure this must have been said before, but I couldn't find it on the forum, LW or google
I'd like to talk more about trusting X in domain Y or on Z metric rather than trusting them in general. People/orgs/etc have strengths and weaknesses, virtues and vices, and I think this granularity is more precise and is a helpful reminder to avoid the halo and horn effects, and calibrates us better on trust.
A commonly used model in the trust literature (Mayer et al., 1995) is that trustworthiness can be broken down into three factors: ability, benevolence, and integrity.
RE: domain specific, the paper incorporates this under 'ability':
The domain of the ability is specific because the trustee may be highly competent in some technical area, affording that person trust on tasks related to that area. However, the trustee may have little aptitude, training, or experience in another area, for instance, in interpersonal communication. Although such an individual may be trusted to do analytic tasks related to his or her technical area, the individual may not be trusted to initiate contact with an important customer. Thus, trust is domain specific.
There are other conceptions but many of them describe something closer to trust that is domain specific rather than generalised.
...All of these are similar to ability in the current conceptualization. Whereas such terms as expertise and competence connote a set of skills applicable to a single, fixed domain (e.g., Gabarro's interpersonal competence), ability highlights the task- and situation-specific nature of the construct in the current model.
I do want to say something stronger here, where "competence" sounds like technical ability or something, but I also mean a broader conception of competence that includes "is especially clear thinking here / has fewer biases here / etc"
I hope to flesh this out at some point, but I just want to put somewhere that by default (from personal experience and experience as an instructor and teacher) I think sleepaway experiences (retreats, workshops, camps) are potentially emotionally intense for at least 20% of participants, even entirely setting aside content (CFAR has noted this as well): being away from your normal environment, a new social scene with all kinds of status stuff to figure out, less sleep, lots of late-night conversations that can be very powerful, romantic / sexual stuff in a charged environment, a lot of closeness happening very quickly because of being around each other 24/7, and less time and space to deal with anything stressful going on outside the environment. This can be valuable in the sense of giving people a chance to fully immerse themselves, but it's a lot, especially for younger people, and it is worth organizers explicitly noting this when organizing, talking about it to participants, providing time for chillness / regrounding and being off the clock, and having people around who are easy to talk to if you're going through a hard time.
For collecting thoughts on the concept of "epistemic hazards" - contexts in which you should expect your epistemics to be worse. Not fleshed out yet. Interested in whether this has already been written about; I assume so, maybe in a different framing.
From Habryka: "Confidentiality and obscurity feel like they worsen the relevant dynamics a lot, since they prevent other people from sanity-checking your takes (though this is also much more broadly applicable). For example, being involved in crimes makes it much harder to get outside feedback on your decisions, since telling people what decisions you are facing now exposes you to the risk of them outing you. Or working on dangerous technologies that you can't tell anyone about makes it harder to get feedback on whether you are making the right tradeoffs (since doing so would usually involve leaking some of the details behind the dangerous technology). "
There's a paradox I'm confused about. If someone from a group I'm not in - let's say Christians - came to me on a college campus and smiled at me and asked about my interests and connected all of them to Jesus, and then I found out I'd been logged in a spreadsheet as "potential convert" or something, and then found the questions they'd asked me in a blog post of "Christian evangelist top questions", I might very well feel extremely weird about that (though I think less so than others would; I kind of respect the hustle).
BUT, when I think about how one gets there, I think, ok:
You're a Christian, and you care about saving other people from hell
You want to talk to people about this and get a community together + persuade people via arguments you think are in fact persuasive
Other people want to do the same, you discuss approaches
Other people have framings and types of questions that seem better to you than yours, so you switch
You're talking to a lot of people and it's hard to keep track of what each of them said and what they wanted out of a community or worldview, so you start writing it down
You don't want people to get approached for the same conversations over and over again, so you share what you've written with your fellow Christian evangelists
It doesn't seem useful to anyone to keep talking to people who don't seem interested in Christianity, so you let your fellow evangelists know which folks are in that category
People who seem excited about Christianity would probably get a lot out of going to conferences or reading more about it, so you recommend conferences and books and try to make it as easy as possible for them to access those, without having annoying atheists who just want to cause trouble showing up.
This is probably too charitable, there is definitely a thing where you actively want to persuade people because you think your thing is important, and you might lose interest in people who aren't excited about what you're excited about, but those things also seem reasonable to me.
A process that seems bad:
Want to maximize number of EAs
Use framings, arguments and examples that you don't think hold water but work at getting people to join your group [I don't think EAs do this, I'm gesturing at the extreme other end]
Make people feel weird and bad for disagreeing with you, whether on purpose or not
Encourage people to repress their disagreements
Get energy and labor from people that they won't endorse having given in a few years, or if they knew things you knew
3-5 seem like the worst parts here. 1 seems like a reasonable implication of their beliefs, though I do think we all have to cooperate to not destroy the commons.
2 is complicated - when people have different cruxes than you is it dishonest to talk about what should convince them based on their cruxes?
3 and 4 are bad, also hard to avoid.
5 seems really bad, and something I'd like to strongly avoid via things like transparency and some other percolating advice I might end up endorsing for people new to EA, like not letting your feet go faster than your brain, figuring out how much deference you endorse, seeing avoiding resentment as a crucial consideration in your life choices, staying grounded, etc.
I also think the processes can feel pretty similar from the inside (therefore danger alert!) but also look similar from the outside when they aren't. I certainly have systematically underestimated the moral seriousness and earnestness of many an EA.
What's the difference?
I think people are going to want to say something like "treating people as ends", but I don't know where that obligation stops. I think I want to say something like "are you acting in the interests of the people you're talking to", but that doesn't work either - I'm not! Being an EA has a decent chance of being less pleasant than the other thing they were doing, and either way it's not a crux. Ex: I endorse protecting the time and energy of other people by not telling everyone who I would talk to if I had a certain question or needed help in a certain way.
I do think it's more about whether you're doing things in such a way that if they knew why you were doing them, they'd mostly not be bothered (ie passing the red face test). But that doesn't really solve the problem that digital sentience is a weird reason to do a lot of things, and there are lots of things I endorse it being inappropriate to be too explicit about.
[This is separate from the instrumental reasons to act differently because it weirds people out etc.]
Later musings:
Presumably the strongest argument is that these feelings are tracking a bunch of the bad stuff that's hard to point at:
people not actually understanding the arguments they're making
people not having your best interests in mind
people being overconfident their thing is correct
people not being able to address your ideas / cruxes
I do think it's more about whether you're doing things in such a way that if they knew why you were doing them, they'd mostly not be bothered (ie passing the red face test). But that doesn't really solve the problem that digital sentience is a weird reason to do a lot of things, and there are lots of things I endorse it being inappropriate to be too explicit about.
Of course this is a spectrum, and we shouldn't put up a public website listing all our beliefs including the most controversial ones or something like that (no one in EA is very close to this extreme). But the implicit jump from "some things shouldn't be explicit" to "digital sentience might weird some people out so there's a decent chance we shouldn't be that explicit about it" seems very non-obvious to me, given how central it is to a lot of longtermists' worldviews, and honestly I think it wouldn't turn off many of the most promising people (in the long run; in the short run, it might get an initial "huh??" reaction).
Oh, sorry, those were two different thoughts. "digital sentience is a weird reason to do a lot of things" is one thing, where it's not most people's crux and so maybe not the first thing you say, but agree, should definitely come up, and separately, "there are lots of things I endorse it being inappropriate to be too explicit about", like the granularity of assessment you might be making of a person at any given time (though possibly more transparency about the fact that you're being assessed in a bunch of contexts would be very good!)
I think steps 1 and 2 in your chain are also questionable, not just 3-5.
Want to maximize number of EAs
Why do we want to maximize the number of EAs? This seems very non-obvious to me. Some people would add much more to the community than others via epistemics, culture, direct talent, etc. If we added enough of certain types of people to the community, especially too quickly, it could easily be net negative.
2. Use framings, arguments and examples that you don't think hold water but work at getting people to join your group [I don't think EAs do this, I'm gesturing at the extreme other end]
[...]
2 is complicated - when people have different cruxes than you is it dishonest to talk about what should convince them based on their cruxes?
I think sometimes/often talking about people's cruxes rather than your own is good and fine. The issue is Goodharting via an optimal message to convert as many people to EA as quickly as possible, rather than messages that will lead to a healthy community over the long run.
I think there are two separate processes going on when you think about systematizing and outreach and one of them is acceptable to systematize and the other is not.
The first process is deciding where to put your energy. This could be deciding whether to set up a booth at a college's involvement fair, buying ads, door-to-door canvassing, etc. It could also be deciding who to follow up with after these interactions, from the email list collected, to whose door to go to a second time, to which places to spend money on in your second round of ad buys. These things all lend themselves to systematization. They can be data-driven, and you can make forecasts on how likely each person was to respond positively and join an event, then revisit those forecasts and update them over time.
The second process is the actual interaction/conversation with people. I think this should not be systematized and should be as authentic as possible. Some of this is a focus on treating people as individuals. Even if there are certain techniques/arguments/framings that you find work better than others, I'd expect there to be significant variation among people where some work better than others. A skilled recruiter would be able to figure out what the person they are talking to cares about and focus on that more, but I think this is just good social skills. They shouldn't be focusing on optimizing for recruitment. They should try to be a likeable person that others will want to be around and that goes a long way to recruitment in and of itself.
I see what you're pointing at, I think, but I don't know that this resolves all my edge cases. For instance, where does "I know this person is especially interested in animal welfare, so talk about that" fall?
I separately don't want to optimize for recruitment in the metric of number of people, because of my model of what good additions to the community look like (e.g. I want especially thoughtful people who have a good sense of the relevant ideas and arguments, what they buy, and what their uncertainties are) - maybe your approach comes from that? Or are you saying even if one were trying to maximize numbers, they shouldn't systematize?
Thanks so much for writing this! I think it could be a top-level post, I'm sure many others would find it very helpful.
My 2 cents:
2 is complicated - when people have different cruxes than you is it dishonest to talk about what should convince them based on their cruxes?
I think it's definitely bad to "Use framings, arguments and examples that you don't think hold water but work at getting people to join your group". If I understand correctly it can cause point 5. Also "getting people to join your group" is rarely an instrumental goal, and "getting people to join your group for the wrong reasons" is probably not that useful in the long term.
Something that I think is very important that seems missing from this is that there's a significant probability that we're wrong about important things (i.e. EA as a question). We could be wrong about the impact of bednets, wrong about AI being the most important thing, wrong about population ethics, etc. I think it's a huge difference from the "cult" mindset.
I think I want to say something like "are you acting in the interests of the people you're talking to", but that doesn't work either - I'm not! Being an EA has a decent chance of being less pleasant than the other thing they were doing, and either way it's not a crux.
The way I think about this, on first approximation, is that I want people to work on maximising their values (and not their wellbeing). If they think altruism is not important and are solipsistic egoists and only value their own wellbeing, I don't think EA can help them. If they value the wellbeing of others then EA can help them achieve their values better. From my personal perspective this is strongly related to the point on uncertainty: I don't want to push other people to work on my values because from an outside view I don't think my values are more important than their values, or more likely to be "correct". I don't know if it makes any sense, really curious to hear your thoughts, you have certainly thought about this more than I.
I think it's definitely bad to "Use framings, arguments and examples that you don't think hold water but work at getting people to join your group". If I understand correctly it can cause point 5. Also "getting people to join your group" is rarely an instrumental goal, and "getting people to join your group for the wrong reasons" is probably not that useful in the long term.
Agree about the "not holding water", I was trying to say that "addresses cruxes you don't have" might look similar to this bad thing, but I'm not totally sure that's true.
I disagree about getting people to join your group - that definitely seems like an instrumental goal, though definitely "get the relevant people to join your group" is more the thing - but different people might have different views on how relevant they need to be, or what their goal with the group is.
Something that I think is very important that seems missing from this is that there's a significant probability that we're wrong about important things (i.e. EA as a question).
I kind of agree here; I think there are things in EA I'm not particularly uncertain of, and while I'm open to being shown I'm wrong, I don't want to pretend more uncertainty than I have.
The way I think about this, on first approximation, is that I want people to work on maximising their values (and not their wellbeing). If they think altruism is not important and are solipsistic egoists and only value their own wellbeing, I don't think EA can help them. If they value the wellbeing of others then EA can help them achieve their values better.
I've definitely heard that frame, but it honestly doesn't resonate for me. I think some people are wrong about what values are right and arguing with me sometimes convinces them of that. I've definitely had my values changed by argumentation! Or at least values on some level of abstraction - not on the level of solipsism vs altruism, but there are many layers between that and "just an empirical question".
I don't want to push other people to work on my values because from an outside view I don't think my values are more important than their values, or more likely to be "correct"
I incorporate an inside view on my values - if I didn't think they were right, I'd do something else with my time!
Transparency for undermining the weird feelings around systematizing community building
There's a lot of potential ick as things in EA formalize and professionalize, especially in community building. People might reasonably feel uncomfortable realizing that the intro talk they heard is entirely scripted, or that interactions with them have been logged in a spreadsheet or that the events they've been to are taking them through the ideas on a path from least to most weird (all things I've heard of happening, with a range of how confident I am in them actually happening as described here). I think there's a lot to say here about how to productively engage with this feeling (and things community builders should do to mitigate it), but I also think there's a quick trick that will markedly improve things (though do not fix all problems): transparency.
(This is an outside take from someone who doesn't do community building on college campuses or elsewhere, I think that work is hard and filled with paradoxes, and it's also possible that this is already done by default, but in the spirit of stating the obvious)
I've been updating over and over again over the last few years that earnestness is just very powerful, and I think there are ways (though maybe they require some social / communication skills that aren't universal) to say things like (conditional on them being true):
NB: I don't think these are the best versions of these scripts, this was a first pass to point at the thing I mean
"This EA group is one of many around the country and the world. There is a standard intro talk that contains framings we think are exceptionally useful and helps us make sure we don't miss any of the important ideas or caveats, so we are giving it here today. I am excited to convey these core concepts, and then for the group of people who come in subsequent weeks to figure out which aspects of these they're most interested in pursuing and customizing the group to our needs."
"Hey, I'm excited to talk to you about EA stuff. The organizers of this group are hoping to chat with people who seem interested and not be repetitive or annoying to you, would it be ok with you if I took some notes on our conversation that other organizers can see?"
"EA ideas span a huge gamut from really straightforward to high-context / less conventional. These early dinners start with the less weird ones because we think the core ideas are really valuable to the world whether or not people buy some of the other potential implications. Later on, with more context, we'll explore a wider range."
"I get that the perception that people only get funding or help if they seem interested in EA is uncomfortable / seems bad. From my perspective, I'm engaged in a particular project with my EA time / volunteer time / career / donations / life, and I'm excited to find people who are enthused by that same project and want to work together on it. If people find this is not the project for them, that's a great thing to have learned, and I'm excited for them to find people to work with on the things they care about most"
Not everything needs to be explicit, but this at least tracks whether you're passing the red face test.
I think that being transparent in this way requires:
Some communication skills to convey things like the above with nuance and grace
Being able to track when explicitness is bad or unhelpful
Some social skills in tracking what the other person cares about and is looking for in conversations
Non-self-hatingness: Thinking that you are doing something valuable, that matters to you, that you don't have to apologize for caring about, along with its implications
A willingness to be honest and earnest about the above.
When I was doing a bunch of explaining of EA and my potential jobs during my most recent job search to friends, family and anyone else, one framing I landed on and found helpful was "ambitious altruism." It let me explain why just helping one person didn't feel like enough without coming off as a jerk (i.e. "I want to be more ambitious than that" rather than "that's not effective").
It doesn't have the maximizing quality, but it doesn't not have it either, since if there's something more you can do with the same resources, there's room to be more ambitious.
Things I'd like to try more to make conversations better
Running gather towns / town halls when there's a big conversation happening in EA
I thought the Less Wrong gather town on FTX was good and I'm glad it happened
I've run a few internal-to-CEA gather towns on tricky issues so far and I'm glad I did, though they didn't tend to be super well attended.
Offering calls / conversations when a specific conversation is getting more heated or difficult
I haven't done this yet myself, but it's been on my mind for a while, and I was just given this advice in a work context, which reminded me.
If a back and forth on the forum isn't going well, it seems really plausible that having a face to face call, or offering to mediate one for other people (this one I'm more skeptical of, especially without having any experience in mediation, though I do think there can be value add here), will make that conversation go better, give space for people to be better understood, and so on.
An off-beat but related idea
Using podcasts where people can explain their thinking and decision making, including in situations where they wouldn't do the same thing again, since podcasts allow for longer, more natural explanations
Some experimental thoughts on how to moderate / facilitate sensitive conversations
Drawn from talking to a few people who I think do this well. Written in a personal capacity.
Go meta - talk about where you’re at and what you’re grappling with. Do circling-y things. Talk about your goals for the discussion.
Be super clear about the way in which your thing is or isn’t a safe space and for what
Be super clear about what bayesian updates people might make
Consider starting with polls to get calibrated on where people are
Go meta on what the things people are trying to protect are, or what the confusion you think is at play is
Aim first to create common knowledge
Distinguish between what’s thinkable and what’s sayable in this space and why that distinction matters
Reference relevant norms around cooperative spaces or whatever space you’ve set up here
If you didn’t set up specific norms but want to now, apologize for not doing so until that point in a “no fault up to this point but no more” way
If someone says something you wish they hadn’t:
Do many of the above
Figure out what your goals are - who are you trying to protect / make sure you have their back
If possible, strategize with the person/people who are hurt, get their feedback (though don’t precommit to doing what they say)
Have 1:1s with people where you're able to
If you want to dissociate from someone or criticize them, explain the history and your connection to them; don't memoryhole stuff; give people context for understanding
Display courage.
Be specific about what you're criticizing
Cheerlead and remind yourself and others of the values you're trying to hold to
People are mad for reasonable and unreasonable reasons; you can speak with strength to the reasonable things you overlap on
In my experience, the most important part of a sensitive discussion is to display kindness, empathy, and common ground.
It's disheartening to write something on a sensitive topic based on upsetting personal experiences, only to be met with seemingly stonehearted critique or dismissal. Small displays of empathy and gratitude can go a long way towards countering this, making people feel like their honesty and vulnerability have been rewarded rather than punished.
I think your points are good, but if deployed wrongly could make things worse. For example, if a non-rationalist friend of yours tells you about their experiences with harassment, immediately jumping into a bayesian analysis of the situation is ill-advised and may lose you a friend.
(Written in a personal capacity) Yeah, agree, and your comment made me realize that some of these are actually my experimental thoughts on something like "facilitating / moderating" sensitive conversations. I don't know if what you're pointing at is common knowledge, but I'd hope it is, and in my head it's firmly in "nonexperimental", standard and important wisdom (as contained, I believe, in some other written advice on this for EA group leaders and others who might be in this position).
From my perspective, a hard thing is how much work is done by tone and presence - I know people who can do the "talk about a bayesian analysis of harassment" with non-rationalists with sensitivity, warmth, care, and people who do "displaying kindness, empathy and common ground" in a way that leaves people more tense than before. But that doesn't mean the latter isn't generally better advice, I think it probably is for most people - and I hope it's in people's standard toolkits.
Been flagging more often lately that decision-relevant conversations work poorly if only A is sayable (including "yes we should have this meeting") and not-A isn't.
At the same time, I've been noticing the skill of saying not-A with grace and consideration, breezily and not with "I know this is going to be unpopular, but..." energy, and it's an extremely useful skill.
Seems like there's room in the ecosystem for a weekly update on AI that does a lot of contextualization / here's where we are on ongoing benchmarks. I'm familiar with:
a weekly newsletter on AI media (that has a section on important developments that I like)
Jack Clark's substack, which I haven't read much of but seems more about going in depth on new developments (though it does have a "Why this matters" section). Also I love this post in particular for the way it talks about humility and confusion.
Doing Westminster Better on UK politics and AI / EA, which seems really good but again I think goes in depth on new stuff
I could imagine spending time on aggregation of prediction markets for specific topics, which Metaculus and Manifold are doing better and better over time (a toy sketch of one simple aggregation approach is below).
I'm interested in something that says "we're moving faster / less fast than we thought we would 6 months ago" or "this event is surprising because" and kind of gives a "you are here" pointer on the map. This Planned Obsolescence post called "Language models surprised us" I think is the closest I've seen.
Seems hard, and maybe not worth it enough to do; also maybe it's happening and I'm not familiar with it (would love to hear), but it's what I'd personally find most useful and I suspect I'm not alone.
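(Tangentially, if someone did spend time on the prediction market aggregation idea above, a minimal starting point, purely as a sketch and not how Metaculus or Manifold combine forecasts internally, would be averaging the probabilities different markets give the same question in log-odds space. The numbers below are made up for illustration.)

```python
import math

# Toy sketch: combine probabilities for the same question from several markets
# by averaging in log-odds space (one common default, not any platform's method).

def to_logodds(p: float) -> float:
    return math.log(p / (1 - p))

def from_logodds(x: float) -> float:
    return 1 / (1 + math.exp(-x))

def aggregate(probs: list[float]) -> float:
    """Mean of log-odds, mapped back to a probability."""
    return from_logodds(sum(to_logodds(p) for p in probs) / len(probs))

# Hypothetical probabilities for one question pulled from three different markets.
print(round(aggregate([0.30, 0.45, 0.38]), 2))  # ~0.37
```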
Wout Schellart, Jose Hernandez-Orallo, and Lexin Zhou have started an AI evaluation digest, which includes relevant benchmark papers etc. It's pretty brief, but they're looking for more contributors, so if you want to join in and help make it more comprehensive/contextualised, you should reach out! https://groups.google.com/g/ai-eval/c/YBLo0fTLvUk
Engaging seriously with the (nontechnical) arguments for AI Risk: One person's core recommended reading list (I saw this list in a private message from a more well-read EA than me and wanted to write it up; it's not my list since I haven't read most of these, but I thought it was better to have it be public than not):
Intro to ML Safety (lots of examples of AI safety work being done in a modern ML paradigm. There's debate about exactly how much is relevant to existential safety)
Over the last six months, I've been having more and more calls with people interested in EA and EA careers. Sometimes I'm one of their first calls because they know me from social things, and sometimes I'm an introduction someone else (eg at 80k) has made. I've often found that an hour, my standard length for a call, feels very short. Sometimes I just chat, sometimes I try to have more of a plan. Of course a lot depends on context, but I'm interested in having a bit of a template so that I can be maximally helpful to them in limited time (I can't be a career coach for everyone I'm introduced to) and with the specifics that I can give (not trying to replicate / replace eg 80k advising).
Posting so that people can give advice / help me with it and/or use it if it seems helpful.
Template
What's your relationship to EA?
I think I currently either spend no time on this or way too much time. I'm hoping this question (rather than "how did you get involved with EA?" or "what do you know about EA") will keep it short but useful. I'm also considering asking this more as a matter of course before the call.
What are your current options / thinking?
This is a place where, for people early in their thinking (which is most of the people I talk to), I tend to recommend a 5 minute timer to generate more options and advise taking a more exploratory attitude
Frequently recommend looking for small experiments to find out what they might like or are good at
I tend to recommend developing a view on which of the options are best by the metrics they care about, including impact
When relevant, I want to make a habit of recommending useful reading / podcasts
Sometimes trying to raise people's ambitions (https://forum.effectivealtruism.org/posts/dMNFCv7YpSXjsg8e6/how-to-raise-others-aspirations-in-17-easy-steps)
I want to find out what sets them apart / their skillset, but if I don't already know, I don't currently have a really good way of doing this that doesn't feel interview-y
The way I think I can often be most helpful, especially for people really new to institutional EA, is to tell them about the landscape:
giving an overview of orgs, foundations, and types of work
tell them what I know about who else is working on things they're excited about
sometimes noting that some of their interests aren't a focus of most EA work / money
asking about their views on longtermism
asking what people think the main bottlenecks are and whether they have an interest in developing those skills:
management
ops
vetting / grantmaking
If they're talking to me specifically about community building / outreach, I give my view on the landscape there: what's happening, what people are excited about, etc.
I also have given my thoughts on how to make 80k advising most helpful
Be honest about your biggest uncertainties and what you want help from them on
Really try to generate options
I wonder what else I can say here
It's possible I should ask more about the cause areas they care about - that feels like it's such a big conversation that it doesn't fit in an hour, but maybe it's really crucial. Don't know! Still figuring it out.
Scattered Takes and Unsolicited Advice (new ones added to the top)
If you care about being able to do EA work longterm, it's worth pretty significant costs to avoid resenting EA. Take that into account when you think about what decisions you're making and with what kind of sacrifice.
"Say more?" and "If your thoughts are in a pile, what's on top?" are pretty powerful conversational moves, in my experience
A lot of our feelings and reactions come reactively / contextually / on the margins - people feel a certain way e.g. when they are immersed in EA spaces and sometimes have critiques, and when they are in non-EA spaces, they miss the good things about EA spaces. This seems normal and healthy and a good way to get multiple frames on something, but also good to keep in mind.
People who you think of as touchstones of thinking a particular thing may change their minds or not be as bought in as you'd expect
The world has so much detail
One of the most valuable things more senior EAs can do for junior EAs is contextualize: EA has had these conversations before, the thing you experienced was a 20th/50th/90th percentile experience, other communities do/don't go through similar things etc.
One of the best things we can all do for each other is push on expanding option sets, and ask questions that get us to think more about what we think and what we should do.
When you're new to EA, it's very exciting: Don't let your feet go faster than your brain - know what you're doing and why. It's not good for you or the world if in two years you look around and don't believe any of it and don't know how you got there and feel tricked or disoriented.
You're not alone in feeling overwhelmed or like an imposter
If you're young in EA: Don't go into community building just because the object level feels scarier and you don't have the skills yet
Networking is great, but it's not the only form of agency / initiative taking
Lots of ick feelings about persuasion and outreach get better if you're honest and transparent
Lots of ick feelings about all kinds of things are tracking a lot of different things at once: people's vibes, a sense of honesty or dishonesty, motivated reasoning, underlying empirical disagreements - it's good to track those things separately
Ask for a reasonable salary for your work; it's not as virtuous as you think to work for nothing
Sets bad norms for other people who can't afford to do that
Makes it more like volunteering so you might not take the work as seriously
Don't be self-hating about EA; figure out what you believe and don't feel bad about believing it and its implications and acting in the world in accordance with it
Earnestness is shockingly effective - if you say what you think and why you think it (including "I read the title of a youtube video"), if you say when you don't know what to do and what you're confused about, if you say what you're confident in and why, if you say how you feel and why, I find things (at least in this social space) go pretty damn well, way better than I would have expected.
Answer questions specifically as asked, looping back into my models of the world
I sometimes have a habit of modelling questions more as moves in a game, and I play the move that supports the overall outcome of the conversation I'm going for, which doesn't support truth-seeking
I also sometimes say things using some heuristics, and answer other questions with other heuristics and it takes work to notice that they're not consistent
When I hear a claim, think about whether I've observed it in my life
Notice what "fads" I'm getting caught up in
Trying to be more gearsy, less heuristics-y. What's actually good or bad about this, what do they actually think (not just what general direction they are pulling the rope), etc.
Noticing when we're arguing about the wrong thing, when we e.g. should be arguing about the breakdown of what percent one thing versus another
I've been thinking about and promulgating EA as nerdsniping (https://chanamessinger.com/blog/ea-as-nerdsniping) as a good intro to bring in curious people interested intellectually in the questions who can come up with their own ideas, often in contrast to EA as an amazing moral approach. But EAGxOxford pushed me to update towards thinking that a huge part of the appeal is that EA / rationality gives seeking people a worldview that makes a lot of sense and is more consistent and honest than many others they encounter. That's good to know from an outreach perspective, and it has implications for how much people might get really into EA / rationality if they're looking for that in particular, which might point to being wary if you don't want to be in some sense "too convincing".
A point I haven't seen is that the "Different Worlds" hypothesis implies that if you consistently have an experience, especially interpersonally, you should on the margin expect it to happen more often relative to what conventional wisdom says.
Example: If your reports or partners consistently get angry when you do X, then there's a decent chance that even if that isn't all that common, you're inadvertently selecting for people for whom it is, so don't update as far down on the likelihood of it happening again as you otherwise might (a toy numerical sketch of this is below).
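(To make the arithmetic concrete, here's a toy Beta-Binomial sketch of that selection-effect point; all the numbers are made up for illustration and aren't from anywhere in particular.)

```python
# Toy sketch of the "Different Worlds" selection-effect point. Numbers are invented.

# Conventional wisdom: ~10% of people react angrily to X.
# Encode that as a Beta(2, 18) prior (mean 0.10) over the rate at which
# the people *you* end up around react angrily.
prior_alpha, prior_beta = 2.0, 18.0

# Your own track record: 4 of your last 5 reports/partners reacted angrily.
angry, calm = 4, 1

# Conjugate update: posterior is Beta(alpha + angry, beta + calm).
post_alpha, post_beta = prior_alpha + angry, prior_beta + calm
posterior_mean = post_alpha / (post_alpha + post_beta)

print(f"Conventional-wisdom base rate: {prior_alpha / (prior_alpha + prior_beta):.2f}")  # 0.10
print(f"Estimated rate for your circle: {posterior_mean:.2f}")                           # 0.24
# The estimate lands well above the base rate: your repeated experience is evidence
# about the (selected) population you actually interact with, so you shouldn't
# update all the way back down to what conventional wisdom says.
```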
About going to a hub to do networking: A response to: https://forum.effectivealtruism.org/posts/M5GoKkWtBKEGMCFHn/what-s-the-theory-of-change-of-come-to-the-bay-over-the
I think there's a lot of truth to the points made in this post.
I also think it's worth flagging that several of them: networking with a certain subset of EAs, asking for 1:1 meetings with them, being in certain office spaces - are at least somewhat zero sum, such that the more people take this advice, the less available these things will actually be to each person, and possibly less available on net if it starts to overwhelm. (I can also imagine increasingly unhealthy or competitive dynamics forming, but I'm hoping that doesn't happen!)
Second flag is that I don't know how many people reading this can expect to have an experience similar to yours. They may, but they may not end up being connected in all the same ways, and I want people to go in knowing that they're taking that risk and to decide whether it's worth it for them.
On the other side, people taking this advice can do a lot of great networking and creating a common culture of ambition and taking ideas seriously with each other, without the same set of expectations around what connections they'll end up making.
Third flag is I have an un-fleshed-out worry that this advice funges against doing things outside Berkeley/SF that build more valuable career capital for the future, whether for doing EA things outside of EA or for bringing valuable skills and knowledge to EA (like, will we wish in 5 years that EAs had more outside professional experience to bring domain knowledge and legitimacy to EA projects, rather than a resume full of EA things?). This concern will need to be fleshed out empirically and will vary a lot in applicability by person.
(I work on CEA's community health team but am not making this post on behalf of that team)
Leopold Aschenbrenner is starting a cross between a hedge fund and a think tank for AGI. I have read only the sections of Situational Awareness most relevant to this project, and I don't feel nearly like I understand all the implications, so I could end up being quite wrong. Indeed, I’ve already updated towards a better and more nuanced understanding of Aschenbrenner's points, in ways that have made me less concerned than I was to begin with. But I want to say publicly that the hedge fund idea makes me nervous.
Before I give my reasons, I want to say that it seems likely most of the relevant impact comes not from the hedge fund but from the influence the ideas from Situational Awareness have on policymakers and various governments, as well as the influence and power Aschenbrenner and any cohort he builds wield. This influence may come from this hedge fund or be entirely incidental to it. I mostly do not address this here, but it does make all of the below less important.
I also believe that some (though not all) of my concerns about the hedge fund are based on specific disagreements with Aschenbrenner’s views. I discuss some of those below, but a full rebuttal this is not (and many of the points of disagreement I don’t yet feel confident in my view on). There is still plenty to do to hash out the actual empirical questions at hand.
Why I am nervous
A hedge fund investing in AI related investments means Aschenbrenner and his investors will gain financially from more and accelerated AGI progress. This seems to me to be one of the most important dynamics (excluding the points about influence above). That creates an incentive to create more AGI progress, even at the cost of safety, which seems quite concerning. I will say that Leopold has a good track record here around turning down money in not signing an NDA at Open AI despite loss of equity.
Aschenbrenner expresses strong support for the liberal democratic world to maintain a lead on AI advancement, and ensure that China does not reach an AI-based decisive military advantage over the United States[1]. The hedge fund, then, presumably aims to both support the goal of maintaining an AI lead over China and profit off of it. In my current view, this approach increases race dynamics and increases the risks of the worst outcomes (though my view on this has softened somewhat since my first draft, for reasons similar to what Zvi clarifies here[2]).
I especially think that it risks unnecessary competition when cooperation - the best outcome - could still be possible. It seems notable, for example, that no Chinese version of the Situational Awareness piece has come to my attention; going first in such a game both ensures you are first and that the game is played at all.
It’s also important that the investors (e.g. Patrick Collison) appear to be more focused on economic and technological development, and less concerned about risks from AI. The incentives of this hedge fund are therefore likely to point towards progress and away from slowing down for safety reasons.
There are other potential lines of thought here I have not yet fleshed out including:
Ways that the hedge fund could in fact be a good idea:
EA and AI causes could really use funder diversification. If Aschenbrenner intends to use the money he makes to support these issues, that could be very valuable (though I’ve certainly become somewhat more concerned with moonshot “become a billionaire to save the world” plans than I used to be).
The hedge fund could position Aschenbrenner to have a deep understanding of and connections within the AI landscape, making the think tank outputs very good, and causing important future decisions to be made better.
Aschenbrenner of course could be right about the value of the US government’s involvement, maintaining a US lead, and the importance of avoiding Chinese military supremacy over the US. In that case, him achieving his goals would of course be good. Cruxes include the likelihood of international cooperation, the possibility of international bans, probability of catastrophic outcomes from AI and the likelihood of “muddling through” on alignment.
I’m interested in hearing takes, ways I could be wrong, fleshing out of my arguments, or any other thoughts people have relevant to this. Happy to have private chats in DMs to discuss as well.
To be clear, Aschenbrenner wants that lead to exist to avoid a tight race in which safety and caution are thrown to the winds. If we can achieve that lead primarily through infosecurity (something he emphasizes), then added risks are low; but I think the views expressed in Situational Awareness also imply the importance of staying technologically ahead of China as their AI research improves. This comes with precisely the risks of creating and accelerating a race of this nature.
Additionally, when I read his description of the importance of even a two month lead, it implied to me that if the longer, more comfortable lead is lost, there will be strong reasons for the US to advance quickly so as to avoid China reaching superintelligence and subsequent military dominance first (which doesn’t mean he thinks we should actually do this if the time came). This seems to fairly explicitly describe the tight race scenario. I don’t think Aschenbrenner believes this would be a good situation to be in, but nonetheless thinks that’s what the true picture is.
From Zvi’s post: “He confirms he very much is NOT saying this:
The race to ASI is all that matters.
The race is inevitable.
We might lose.
We have to win.
Trying to win won’t mean all of humanity loses.
Therefore, we should do everything in our power to win.
I strongly disagree with this first argument. But so does Leopold.
Instead, he is saying something more like this:
ASI, how it is built and what we do with it, will be all that matters.
ASI is inevitable.
A close race to ASI between nations or labs almost certainly ends badly.
Our rivals getting to ASI first would also be very bad.
Along the way we by default face proliferation and WMDs, potential descent into chaos.
The only way to avoid a race is (at least soft) nationalization of the ASI effort.
With proper USG-level cybersecurity we can then maintain our lead.
We can then use that lead to ensure a margin of safety during the super risky and scary transition to superintelligence, and to negotiate from a position of strength.”
Not necessarily - they could just invest in publicly traded companies where the counterfactual impact is not very large (even a large hedge fund buying some, say, Google stock wouldn't much move the market cap). They could also be shorting certain companies, which might reduce economically inefficient overinvestment into AI, which might also have x-risk externalities. It would be different if he ran a VC fund and invested in getting the next, say, Anthropic off the ground. Especially if the profits are donated and used for mission hedging, this might be good.
Yes, the outputs might be better as the incentives are aligned: the hedge fund / think tank has 'skin in the game' to get the correct answers on the future of AI progress (though maybe some big banks are also trying to move markets with their publications).
For your second point, you should skeptically expect their publications/influence to be easily corrupted, as the information they'd put out would be very connected to their investing alpha.
The corruption could take the form of omission of key details, under-hyping stuff in which they were unable to get exposure (/investment), and biases like that.
Thanks for writing this. I agree that this makes me nervous. Various thoughts:
I think I’ve slowly come to believe something like, ‘sufficiently smart people can convince themselves that arbitrary morally bad things are actually good’. See e.g. the gymnastics meme, but also there’s something deeper, like ‘many of the evil people throughout history have believed that what they’re doing is good actually’. I think the response to this should be deep humility and moral risk aversion. Having a big brain argument that sounds good to you about why what you’re doing is good is actually extremely weak evidence about the goodness of the thing. I think it would probably be better if EAs took this more seriously and didn’t do things like starting an AGI company or starting an AGI hedge fund. An AGI hedge fund seems even worse than Anthropic (where I think the argument for cutting edge research is medium brained and at least somewhat true empirically). The reasons Chana lists for why the hedge fund could be a good idea all seem fairly weak — they would be stronger if Leopold was saying these were part of the plan.
The unilateralist nature and relationship to race dynamics also worries me. Maybe there would have been AGI hedge funds anyway, and maybe there would have been lengthy blog posts that tell the USG and China that they should be in a massive race on AI — but those things sure weren’t being done before Leopold did it.
I don’t think I have strong reasons to actively trust Leopold. I don’t know him and I think my baseline trust isn’t super high nowadays. By “trust” I mean some combination of being of good character, having correct judgment, and good epistemic practices to make up for poor judgment. Choosing to lose OpenAI equity is a positive sign, but I’m not sure how big. So this cashes out in not making much of an update on the value of an AGI hedge fund — something that seems initially medium bad.
I think it’s sus to write up a blog post telling people AGI is coming soon while starting an investment firm that will benefit from people thinking AGI is coming soon. This is clearly a case of conflicting interests. It’s not necessarily a bad thing — there are good arguments around putting your money where your mouth is and taking actions based on big if true ideas, but it is a warning flag.
I could imagine a normal person reading Situational Awareness, including the part about Superalignment, and then hearing that the author is starting an AGI hedge fund, and their response being “WTF?! You believe all this about the intelligence explosion and how there are critical safety problems we’re not on track to solve, and you’re starting a hedge fund?” This response makes a lot of sense to me (and I do think I’ve heard it somewhere, though I’m not sure where). I think ‘starting an AGI hedge fund’ is really low on the list of things somebody who cares a lot about superintelligence safety should be doing. So either I’m misunderstanding something, or this is an update that Leopold isn’t as serious about ASI safety as I thought.
I have yet to see any replies from Leopold to people commenting on or responding to Situational Awareness. This seems like bad form for truth seeking and getting buy-in from EAs, but it may be the norm for general intellectual content.
He seems to believe:
I also find this weird.
The traditional EA / AI funding strategy seems to be: "Take money from EAs only, and keep things on the low."
I assume we generally don't want to convince or actively encourage many others to invest more in AI capabilities.
But Situational Awareness and the fund's openness to many investors seem very much not like that.
I'm really curious what Leopold's thinking on this is.
Effective giving quick take for giving season
This is quite half-baked because I think my social circle contains not very many E2G folks, but I have a feeling that when EA suddenly came into a lot more funding and the word on the street was that we were “talent constrained, not funding constrained”, some people earning to give ended up pretty jerked around, or at least feeling that way. They may have picked jobs and life plans based on the earn to give model, where it would be years before the plans came to fruition, and in the middle, they lost status and attention from their community. There might have been an additional dynamic where people who took the advice the most seriously ended up deeply embedded in other professional communities, so heard about the switch later or found it harder to reconnect with the community and the new priorities.
I really don’t have an overall view on how bad all of this was, or if anyone should have done anything differently, but I do have a sense that EA has a bit of a feature of jerking people around like this, where priorities and advice change faster than the advice can be fully acted on. The world and the right priorities really do change, though; I’m not sure what should be done except to be clearer about all this, but I suspect it’s hard to properly convey “this seems like the absolute best thing in the world to do, also next year my view could be that it’s basically useless” even if you use those exact words. And maybe people have done this, or maybe it’s worth trying harder. Another approach would be something like insurance.
A frame I’ve been more interested in lately (definitely not original to me) is that earning to give is a kind of resilience / robustness-add for EA, where more donors just means better ability to withstand crazy events, even if in most worlds the small donors aren’t adding much in the way of impact. Not clear that that nets out, but “good in case of tail risk” seems like an important aspect.
A more out-there idea, sort of cobbled together from a me-idea and Ben West-understanding is that, among the many thinking and working styles of EAs, one axis of difference might be “willing to pivot quickly, change their mind and their life plan intensely and often” vs “not as subject to changing EA winds” (not necessarily in tension, but illustrative). Staying with E2G over many years might be related to being closer to the latter; this might be an under-rated virtue and worth leveraging.
I think another example of the jerking people around thing could be the vibes from summer 2021 to summer 2022 that if you weren't exceptionally technically competent and had the skills to work on object-level stuff, you should do full-time community building like helping run university EA groups. And then that idea lost steam this year.
Yeah I think EA just neglects the downside of career whiplash a bit. Another instance is how EA orgs sometimes offer internships where only a tiny fraction of interns will get a job, or hire and then quickly fire staff. In a more ideal world, EA orgs would value rejected & fired applicants much more highly than non-EA orgs, and so low-hit-rate internships, and rapid firing would be much less common in EA than outside.
Hmm, this doesn't seem obvious to me – if you care more about people's success then you are more willing to give offers to people who don't have a robust resume etc., which is going to lead to a lower hit rate than usual.
It's an interesting point about the potential for jerking people around and alienating them from the movement and ideals. It could also (maybe) have something to do with having a lot of philosophers leading the movement. It's easier to change from writing philosophically about short-termism ("Doing Good Better") to longtermism ("What We Owe the Future"), to writing essays about talent constraint over money constraint, but harder to psychologically and practically (although still very possible) switch from being a mid-career global health worker or earning-to-giver to working on AI alignment.
This isn't a criticism, of course it makes sense for the philosophy driving the movement to develop, just highlighting the difference in "pivotability" between leaders and some practitioners and the obvious potential for "jerking people around" collateral as the philosophy evolves.
Also, having lots of young people in the movement who haven't committed years of their life to things can make changing tacks more viable for many and seem more normal, while perhaps it is harder for those who have committed a few years to something. This "willingness to pivot quickly, change their mind and their life plan intensely and often" could be as much about stage of career as it is about personality.
Besides earning to give people being potentially "jerked around", there are some other categories worth considering too.
Global health people, as the relative importance of global health within the movement seems to have slowly faded.
If (just possibilities) AI becomes far less neglected in general in the next 3 to 5 years, or it becomes apparent that policy work is far more important/tractable than technical alignment, then a lot of people who have devoted their careers to these may be left out in the cold.
Just some very low confidence musings!
Makes sense that there would be some jerk-around in a movement that focuses a lot on prioritization and re-prioritization, with folks who are invested in finding the highest priority thing to do. Career capital takes time to build and can't be re-prioritized at the same speed. Hopefully as EA matures, there can be some recognition that diversification is also important, because our information and processes are imperfect, and so there should be a few viable strategies going at the same time about how to do the most good. This is like your tail-risk point. And some diversity in thought will benefit the whole movement, and thoughtful people pursuing those strategies with many years of experience will result in better thinking, mentorship, and advice to share.
I don't really see a world in which earning to give can't do a whole lot of good, even if it isn't the highest priority at the moment... unless perhaps the negative impacts of the high-earning career in question haven't been thought through or weighed highly enough.
Perhaps making a stronger effort to acknowledge and appreciate people who acted altruistically based on our guesses at the time, before explaining why our guesses are different now, would help? (And for this particular case, even apologizing to EtG people who may have felt scorned?)
I think there's a natural tendency to compete to be "fashion-forward", but that seems harmful for EA. Competing to be fashion-forward means targeting what others will approve of (or what others think others will approve of), as opposed to the object-level question of what actually works.
Maybe the sign of true altruism in an EA is willingness to argue for boring conventional wisdom, or willingness to defy a shift in conventional wisdom if you don't think the shift makes sense for your particular career situation. 😛 (In particular, we shouldn't discount switching costs and comparative advantage. I can make a radical change to the advice I give an aimless 20-year-old, while still believing that a mid-career professional should stay on their current path, e.g. due to hedging/diminishing marginal returns to the new hot thing.)
BTW this recent post made a point that seems important:
IMO, acknowledging and appreciating the effort people put in is the best way to prevent burnout. Implying that "your career path is boring now" is the opposite. Almost everyone in EA is making some level of sacrifice to do good for others; let's thank them for that!
Thank you, whoever's reading this!
I was thinking about to what extent NDAs (either non-disclosure or non-disparagement agreements) played a role in the 2018 blowup at Alameda Research (since if there were a lot, that could be a throughline between messiness at Alameda and messiness at OpenAI recently).
Here's what I've collected from public records:
Overall this tells a story where NDAs weren't a big part of the Alameda story (since I think Ben West and nbouscal at least left during the 2018 blowup, but folks should correct me if I'm wrong). This is a bit interesting to me.
Interested in if others have different takeaways.
Not all "EA" things are good
just saying what everyone knows out loud (copied over with some edits from a twitter thread)
Maybe it's worth just saying the thing people probably know but isn't always salient aloud, which is that orgs (and people) who describe themselves as "EA" vary a lot in effectiveness, competence, and values, and using the branding alone will probably lead you astray.
Especially for newer or less connected people, I think it's important to make salient that there are a lot of takes (pos and neg) on the quality of thought and output of different people and orgs, which from afar might blur into "they have the EA stamp of approval"
Probably a lot of thoughtful people think whatever seems shiny in an "everyone supports this" kind of way is bad in a bunch of ways (though possibly net good!), and that granularity is valuable.
I think you should feel very free to ask around to get these takes and see what you find - it's been a learning experience for me, for sure. Lots of this is "common knowledge" to people who spend a lot of their time around professional EAs, so it doesn't even occur to people to say, plus it's sensitive to talk about publicly. But I think "some smart people in EA think this is totally wrongheaded" is a good prior for basically anything going on in EA.
Maybe at some point we should move to more explicit and legible conversations about each others' strengths and weaknesses, but I haven't thought through all the costs there, and there are many. Curious for thoughts on whether this would be good! (e.g. Oli Habryka talking about people with integrity here)
I would like a norm of writing some criticisms on wiki entries.
I think the wiki entry is a pretty good place for this. It's "the canonical place" as it were. I would think it's important to do this rather fairly. I wouldn't want someone to edit a short CEA article with a "list of criticisms" that (believe you me) could go on for days. And then maybe, just because nobody has a personal motivation to, nobody ends up doing this for Giving What We Can. Or whatever. Seems like the whole thing could quickly prove to be a mess that I would personally judge to be not worth it (unsure). I'd rather see someone own editing a class of orgs and adding in substantial content, including a criticism section that seeks to focus on the highest impact concerns.
Features that contribute to heated discussion on the forum
From my observations. I recognize many of these in myself. Definitely not a complete list, and possibly some of these things are not very relevant, please feel free to comment to add your own.
Interpersonal and Emotional
Low trust environment
Something to protect / Politics
Organizational politics
This is so perceptive, relevant and respectfully written, thank you.
I've noticed this too and I think another common dynamic is where "both sides" feel like the other side obviously "started it" and so feel justified in responding in kind.
I've also noticed in myself recently this additional layer of upset that sounds something like, "We're supposed to be allies!" I think I need to keep reminding myself that this is just what people do, namely fight with people very much like them but a little bit different*. I think EA's been remarkably good at avoiding much of this over the years and obviously I wish we weren't falling prey to it quite so much right now, but I don't think it's a reason to feel extra upset.
*Here's my favourite dramatisation of this phenomenon.
Thanks for sharing, I think this is a very useful overview of important factors, and I encourage you to share it as a normal post (I mostly miss shortforms like this).
Post: OAI NDA drama - do we know that Anthropic does not do similar things? I heard something that reassured me but I no longer know what it was. Interested in what people have heard or know (feel free to DM me) or general takes on the situation.
You might also be interested in discussion here.
You might be interested in discussion here.
Good info there re current employees!
I want to draw attention to the distinction between current and former employees, since OpenAI was deploying their leverage on ex-staff and keeping that quiet amidst current staff. And what confirmation we have is about current Anthropic staff; we haven't heard from ex-staff yet.
But "everyone knows"!
A dynamic I keep seeing is that it feels hard to whistleblow or report concerns or make a bid for more EA attention on things that "everyone knows", because it feels like there's no one to tell who doesn't already know. It’s easy to think that surely this is priced in to everyone's decision making. Some reasons to do it anyway:
In short, if you're acting based on the belief that there’s a thing “everyone knows”, check that that’s true.
Relatedly: Everybody Knows, by Zvi Mowshowitz
[Caveat: There's an important balance to strike here between the value of public conversation about concerns and the energy that gets put into those public community conversations. There are reasons to take action on the above non-publicly, and not every concern will make it above people’s bar for spending the time and effort to get more engagement with it. Just wanted to point to some lenses that might get missed.]
About going to a hub
A response to: https://forum.effectivealtruism.org/posts/M5GoKkWtBKEGMCFHn/what-s-the-theory-of-change-of-come-to-the-bay-over-the
For people who consider taking or end up taking this advice, some things I'd say if we were having a 1:1 coffee about it:
We all want (I claim) EA to be a high trust, truth-seeking, impact-oriented professional community and social space. Help it be those things. Blurt truth (but be mostly nice), have integrity, try to avoid status and social games, make shit happen.
Really intrigued by this model of thinking from Predictable Updating about AI Risk.
Trust is a two-argument function
I'm sure this must have been said before, but I couldn't find it on the forum, LW or google
I'd like to talk more about trusting X in domain Y or on Z metric rather than trusting them in general. People/orgs/etc have strengths and weaknesses, virtues and vices, and I think this granularity is more precise and is a helpful reminder to avoid the halo and horn effects, and calibrates us better on trust.
A commonly used model in the trust literature (Mayer et al., 1995) is that trustworthiness can be broken down into three factors: ability, benevolence, and integrity.
RE: domain specific, the paper incorporates this under 'ability':
There are other conceptions but many of them describe something closer to trust that is domain specific rather than generalised.
Thanks for this! Very interesting.
I do want to say something stronger here, where "competence" sounds like technical ability or something, but I also mean a broader conception of competence that includes "is especially clear thinking here / has fewer biases here / etc"
Strongly agree. I'm surprised I haven't seen this articulated somewhere else previously.
I hope to flesh this out at some point, but I just want to put somewhere that by default (from personal experience and experience as an instructor and teacher) I think sleepaway experiences (retreats, workshops, camps) are potentially emotionally intense for at least 20% of participants, even entirely setting aside content (CFAR has noted this as well): away from normal environment, new social scene with all kinds of status-stuff to figure out, less sleep, lots of late night conversations that can be very powerful, romantic / sexual stuff in a charged environment, a lot of closeness happening very quickly because of being around each other 24/7, and less time and space to deal with anything stressful going on outside the environment. This can be valuable in the sense of giving people a chance to fully immerse themselves, but it's a lot, especially for younger people. It is worth organizers explicitly noting this when organizing, talking about it to participants, providing time for chillness / regrounding and being off the clock, and having people around who it's easy to talk to if you're going through a hard time.
For collecting thoughts on the concept of "epistemic hazards" - contexts in which you should expect your epistemics to be worse. Not fleshed out yet. Interested in whether this has already been written about; I assume so, maybe in a different framing.
From Habryka: "Confidentiality and obscurity feel like they worsen the relevant dynamics a lot, since they prevent other people from sanity-checking your takes (though this is also much more broadly applicable). For example, being involved in crimes makes it much harder to get outside feedback on your decisions, since telling people what decisions you are facing now exposes you to the risk of them outing you. Or working on dangerous technologies that you can't tell anyone about makes it harder to get feedback on whether you are making the right tradeoffs (since doing so would usually involve leaking some of the details behind the dangerous technology). "
Poke holes in my systematizing outreach apologism
Re: Ick at systematizing outreach and human interactions
There's a paradox I'm confused about: if someone from a group I'm not in - let's say Christians - came to me on a college campus and smiled at me and asked about my interests and connected all of them to Jesus, and then I found out I'd been logged in a spreadsheet as "potential convert" or something, and then found the questions they'd asked me in a blog post or "Christian evangelist top questions" list, I might very well feel extremely weird about that (though I think less than others; I kind of respect the hustle).
BUT, when I think about how one gets there, I think, ok:
This is probably too charitable; there is definitely a thing where you actively want to persuade people because you think your thing is important, and you might lose interest in people who aren't excited about what you're excited about, but those things also seem reasonable to me.
A process that seems bad:
3-5 seem like the worst parts here. 1 seems like a reasonable implication of their beliefs, though I do think we all have to cooperate to not destroy the commons.
2 is complicated - when people have different cruxes than you, is it dishonest to talk about what should convince them based on their cruxes?
3 and 4 are bad, also hard to avoid.
5 seems really bad, and something I'd like to strongly avoid via things like transparency and some other percolating advice I might end up endorsing for people new to EA, like not letting your feet go faster than your brain, figuring out how much deference you endorse, seeing avoiding resentment as a crucial consideration in your life choices, staying grounded, etc.
I also think the processes can feel pretty similar from the inside (therefore danger alert!) but also look similar from the outside when they aren't. I certainly have systematically underestimated the moral seriousness and earnestness of many an EA.
What's the difference?
I think people are going to want to say something like "treating people as ends" but I don't know where that obligation stops. I think I want to say something like "are you acting in the interests of the people you're talking to", but that doesn't work either - I'm not! Being an EA has a decent chance of being less pleasant than the other thing they were doing, and either way it's not a crux. Ex: I endorse protecting the time and energy of other people by not telling everyone who I would talk to if I had a certain question or needed help in a certain way.
I do think it's more about whether you're doing things in such a way that if they knew why you were doing them, they'd mostly not be bothered (ie passing the red face test). But that doesn't really solve the problem that digital sentience is a weird reason to do a lot of things, and there are lots of things I endorse it being inappropriate to be too explicit about.
[This is separate from the instrumental reasons to act differently because it weirds people out etc.]
Later musings:
Presumably the strongest argument is that these feelings are tracking a bunch of the bad stuff that's hard to point at:
Of course this is a spectrum, and we shouldn't put up a public website listing all our beliefs including the most controversial ones or something like that (no one in EA is very close to this extreme). But the implicit jump from "some things shouldn't be explicit" to "digital sentience might weird some people out so there's a decent chance we shouldn't be that explicit about it" seems very non-obvious to me, given how central it is to a lot of longtermists' worldviews, and honestly I think it wouldn't turn off many of the most promising people (in the long run; in the short run, it might get an initial "huh??" reaction).
Oh, sorry, those were two different thoughts. "digital sentience is a weird reason to do a lot of things" is one thing, where it's not most people's crux and so maybe not the first thing you say, but agree, should definitely come up, and separately, "there are lots of things I endorse it being inappropriate to be too explicit about", like the granularity of assessment you might be making of a person at any given time (though possibly more transparency about the fact that you're being assessed in a bunch of contexts would be very good!)
I think steps 1 and 2 in your chain are also questionable, not just 3-5.
Why do we want to maximize the number of EAs? This seems very non-obvious to me. Some people would add much more to the community than others via epistemics, culture, direct talent, etc. If we added enough of certain types of people to the community, especially too quickly, it could easily be net negative.
I think sometimes/often talking about people's cruxes rather than your own is good and fine. The issue is Goodharting via an optimal message to convert as many people to EA as quickly as possible, rather than messages that will lead to a healthy community over the long run.
I think there are two separate processes going on when you think about systematizing and outreach and one of them is acceptable to systematize and the other is not.
The first process is deciding where to put your energy. This could be deciding whether to set up a booth at a college's involvement fair, buying ads, door-to-door canvassing, etc. It could also be deciding who to follow up with after these interactions, from the email list collected, to whose door to go to a second time, to which places to spend money on in your second round of ad buys. These things all lend themselves to systematization. They can be data driven and you can make forecasts on how likely each person was to respond positively and join an event, revisit those forecasts and update them over time.
The second process is the actual interaction/conversation with people. I think this should not be systematized and should be as authentic as possible. Some of this is a focus on treating people as individuals. Even if there are certain techniques/arguments/framings that you find work better than others, I'd expect there to be significant variation among people where some work better than others. A skilled recruiter would be able to figure out what the person they are talking to cares about and focus on that more, but I think this is just good social skills. They shouldn't be focusing on optimizing for recruitment. They should try to be a likeable person that others will want to be around and that goes a long way to recruitment in and of itself.
I see what you're pointing at, I think, but I don't know that this resolves all my edge cases. For instance, where does "I know this person is especially interested in animal welfare, so talk about that" fall?
I separately don't want to optimize for recruitment in the metric of number of people because of my model of what good additions to the community look like (e.g. I want especially thoughtful people who have a good sense of the relevant ideas and arguments, and of what they buy and what their uncertainties are) - maybe your approach comes from that? Or are you saying even if one were trying to maximize numbers, they shouldn't systematize?
Thanks so much for writing this! I think it could be a top-level post, I'm sure many others would find it very helpful.
My 2 cents:
I think it's definitely bad to "Use framings, arguments and examples that you don't think hold water but work at getting people to join your group". If I understand correctly it can cause point 5. Also "getting people to join your group" is rarely an instrumental goal, and "getting people to join your group for the wrong reasons" is probably not that useful in the long term.
Something that I think is very important that seems missing from this is that there's a significant probability that we're wrong about important things (i.e. EA as a question).
We could be wrong about the impact of bednets, wrong about AI being the most important thing, wrong about population ethics, etc. I think it's a huge difference from the "cult" mindset.
The way I think about this, to a first approximation, is that I want people to work on maximising their values (and not their wellbeing). If they think altruism is not important and are solipsistic egoists who only value their own wellbeing, I don't think EA can help them. If they value the wellbeing of others, then EA can help them achieve their values better.
From my personal perspective this is strongly related to the point on uncertainty: I don't want to push other people to work on my values because from an outside view I don't think my values are more important than their values, or more likely to be "correct".
I don't know if this makes any sense; I'm really curious to hear your thoughts, since you have certainly thought about this more than I have.
Thanks, Lorenzo!
Agree about the "not holding water" point; I was trying to say that "addresses cruxes you don't have" might look similar to this bad thing, but I'm not totally sure that's true.
I disagree about getting people to join your group - that definitely seems like an instrumental goal, though "get the relevant people to join your group" is closer to the thing - but different people might have different views on how relevant they need to be, or on what their goal with the group is.
I kind of agree here; I think there are things in EA I'm not particularly uncertain of, and while I'm open to being shown I'm wrong, I don't want to pretend more uncertainty than I have.
I've definitely heard that frame, but it honestly doesn't resonate with me. I think some people are wrong about which values are right, and arguing with me sometimes convinces them of that. I've definitely had my values changed by argumentation! Or at least values at some level of abstraction - not at the level of solipsism vs altruism, but there are many layers between that and "just an empirical question".
I incorporate an inside view on my values - if I didn't think they were right, I'd do something else with my time!
Transparency for undermining the weird feelings around systematizing community building
There's a lot of potential ick as things in EA formalize and professionalize, especially in community building. People might reasonably feel uncomfortable realizing that the intro talk they heard is entirely scripted, that interactions with them have been logged in a spreadsheet, or that the events they've been to are taking them through the ideas on a path from least to most weird (all things I've heard of happening, with a range of how confident I am in them actually happening as described here). I think there's a lot to say about how to productively engage with this feeling (and things community builders should do to mitigate it), but I also think there's a quick trick that will markedly improve things (though it won't fix all problems): transparency.
(This is an outside take from someone who doesn't do community building on college campuses or elsewhere. I think that work is hard and filled with paradoxes, and it's also possible this is already done by default, but I'm writing in the spirit of stating the obvious.)
I've been updating over and over again over the last few years that earnestness is just very powerful, and I think there are ways (though maybe they require some social / communication skills that aren't universal) to say things like (conditional on them being true):
NB: I don't think these are the best versions of these scripts; this was a first pass to point at the thing I mean.
Not everything needs to be explicit, but this at least tracks whether you're passing the red face test.
I think that being transparent in this way requires:
Ambitious Altruism
When I was explaining EA and my potential jobs to friends, family, and anyone else during my most recent job search, one framing I landed on and found helpful was "ambitious altruism." It let me explain why just helping one person didn't feel like enough, without coming off as a jerk (i.e. "I want to be more ambitious than that" rather than "that's not effective").
It doesn't have the maximizing quality, but it doesn't not have it either, since if there's something more you can do with the same resources, there's room to be more ambitious.
Things I'd like to try more to make conversations better
Some experimental thoughts on how to moderate / facilitate sensitive conversations
Drawn from talking to a few people who I think do this well. Written in a personal capacity.
In my experience, the most important part of a sensitive discussion is to display kindness, empathy, and common ground.
It's disheartening to write something on a sensitive topic based on upsetting personal experiences, only to be met with seemingly stonehearted critique or dismissal. Small displays of empathy and gratitude can go a long way toward making people feel that their honesty and vulnerability have been rewarded rather than punished.
I think your points are good, but if deployed wrongly they could make things worse. For example, if a non-rationalist friend of yours tells you about their experiences with harassment, immediately jumping into a Bayesian analysis of the situation is ill-advised and may lose you a friend.
(Written in a personal capacity) Yeah, agree, and your comment made me realize that some of these are actually my experimental thoughts on something like "facilitating / moderating" sensitive conversations. I don't know if what you're pointing at is common knowledge, but I'd hope it is, and in my head it's firmly in "nonexperimental", standard and important wisdom (as contained, I believe, in some other written advice on this for EA group leaders and others who might be in this position).
From my perspective, a hard thing is how much work is done by tone and presence - I know people who can do the "talk about a Bayesian analysis of harassment" thing with non-rationalists with sensitivity, warmth, and care, and people who do "displaying kindness, empathy and common ground" in a way that leaves people more tense than before. But that doesn't mean the latter isn't generally better advice; I think it probably is for most people - and I hope it's in people's standard toolkits.
Been flagging more often lately that decision-relevant conversations work poorly if only A is sayable (including "yes, we should have this meeting") and not-A isn't.
At the same time, I've been noticing the skill of saying not-A with grace and consideration - breezily, and not with "I know this is going to be unpopular, but..." energy - and it's an extremely useful skill.
Seems like there's room in the ecosystem for a weekly update on AI that does a lot of contextualization / here's where we are on ongoing benchmarks. I'm familiar with:
I'm interested in something that says "we're moving faster / less fast than we thought we would 6 months ago" or "this event is surprising because...", and gives a kind of "you are here" pointer on the map. The Planned Obsolescence post "Language models surprised us" is the closest I've seen.
Seems hard, and maybe not worth it enough to do; maybe it's also already happening and I'm just not familiar with it (would love to hear). But it's what I'd personally find most useful, and I suspect I'm not alone.
I think I agree, but also want to flag this list in case you (or others) haven't seen it: List of AI safety newsletters and other resources
Another newsletter(?) that I quite like is Zvi's
Wout Schellart, Jose Hernandez-Orallo, and Lexin Zhou have started an AI evaluation digest, which includes relevant benchmark papers etc. It's pretty brief, but they're looking for more contributors, so if you want to join in and help make it more comprehensive/contextualised, you should reach out!
https://groups.google.com/g/ai-eval/c/YBLo0fTLvUk
Less directly relevant, but Harry Law also has a new newsletter in the Jack Clark style, but more focused on governance/history/lessons for AI:
https://learningfromexamples.substack.com/p/the-week-in-examples-3-2-september
Thanks!
Engaging seriously with the (nontechnical) arguments for AI Risk: One person's core recommended reading list
(I saw this list in a private message from a more well-read EA than me and wanted to write it up. It's not my list, since I haven't read most of these, but I thought it was better to have it be public than not):
If still unconvinced, I might recommend (as examples of arguments least correlated with the above):
For going deeper:
Might as well put a list of skilling up possibilities (probably this has been done before)
Correct me if there are mistakes here
Template for EA Calls
Over the last six months, I've been having more and more calls with people interested in EA and EA careers. Sometimes I'm one of their first calls because they know me from social things, and sometimes I'm an introduction someone else (e.g. at 80k) has made. I've often found that an hour, my standard length for a call, feels very short. Sometimes I just chat; sometimes I try to have more of a plan. Of course a lot depends on context, but I'm interested in having a bit of a template so that I can be maximally helpful to them in limited time (I can't be a career coach for everyone I'm introduced to) and with the specifics I can offer (not trying to replicate / replace e.g. 80k advising).
Posting so that people can give advice / help me with it and/or use it if it seems helpful.
Template
It's possible I should ask more about the cause areas they care about - that feels like it's such a big conversation that it doesn't fit in an hour, but maybe it's really crucial. Don't know! Still figuring it out.
Scattered Takes and Unsolicited Advice (new ones added to the top)
Habits of thought I'm working on
Habits of thought I might work on someday
More I like: https://twitter.com/ChanaMessinger/status/1287737689849176065
Conversational moves in EA / Rationality that I like for epistemics
Everyone who works with young people should have two pieces of paper in their pockets:
In one: "they look up to you and remember things you say years later"
In the other: "have you ever tried to convince a teenager of anything?"
Take out as needed.
Reference: https://www.gesher-jds.org/2016/04/15/a-coat-with-two-pockets/
I've been thinking about and promulgating EA as nerdsniping (https://chanamessinger.com/blog/ea-as-nerdsniping) as a good intro for bringing in curious people who are intellectually interested in the questions and can come up with their own ideas, often in contrast to EA as an amazing moral approach. But EAGx Oxford pushed me to update towards thinking that a huge part of the appeal is that EA / rationality give seeking people a worldview that makes a lot of sense and is more consistent and honest than many others they encounter. That's good to know from an outreach perspective, and it has implications for how deeply people might get into EA / rationality if they're looking for that in particular - which might point to being wary if you don't want to be, in some sense, "too convincing".
My Recommended Reading About Epistemics
For the content, but also for the vibe it immerses me in, which I think makes me better.
A point I haven't seen made is that the "Different Worlds" hypothesis implies that if you consistently have an experience, especially interpersonally, you should on the margin expect it to happen more often than conventional wisdom says.
Example: If your reports or partners consistently get angry when you do X, then there's a decent chance that, even if that reaction isn't all that common in general, you're inadvertently selecting for people for whom it is - so don't update as far down on the likelihood of it happening again as you otherwise might.
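As a toy illustration of that selection effect (with entirely made-up numbers), here's a sketch of a simple Beta-Binomial update: starting from a prior centred on a low population base rate, a short run of personal experiences pulls your estimate for your own future interactions well above that base rate.

```python
# Toy Beta-Binomial update (made-up numbers) for the selection-effect point above:
# your personal history can justify an estimate well above the population base rate.

def posterior_mean(prior_alpha: float, prior_beta: float, hits: int, misses: int) -> float:
    """Mean of a Beta(prior_alpha, prior_beta) posterior after observing hits/misses."""
    return (prior_alpha + hits) / (prior_alpha + prior_beta + hits + misses)

# Prior roughly centred on a 5% base rate: Beta(1, 19) has mean 0.05.
prior_alpha, prior_beta = 1.0, 19.0

# Suppose 3 of your last 4 relevant interactions ended with the other person angry.
estimate = posterior_mean(prior_alpha, prior_beta, hits=3, misses=1)
print(f"Updated personal estimate: {estimate:.2f}")  # ~0.17, well above the 0.05 base rate
```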
About going to a hub to do networking:
A response to: https://forum.effectivealtruism.org/posts/M5GoKkWtBKEGMCFHn/what-s-the-theory-of-change-of-come-to-the-bay-over-the
I think there's a lot of truth to the points made in this post.
I also think it's worth flagging that several of them - networking with a certain subset of EAs, asking for 1:1 meetings with them, being in certain office spaces - are at least somewhat zero-sum, such that the more people take this advice, the less available these things will actually be to each person, and possibly less valuable on net if demand starts to overwhelm them. (I can also imagine increasingly unhealthy or competitive dynamics forming, but I'm hoping that doesn't happen!)
A second flag is that I don't know how many people reading this can expect to have an experience similar to yours. They may, but they may not end up connected in all the same ways, and I want people to go in knowing that they're taking that risk, and to decide whether it's worth it for them.
On the other side, people taking this advice can do a lot of great networking and create a common culture of ambition and of taking ideas seriously with each other, without the same set of expectations around what connections they'll end up making.
A third flag is an un-fleshed-out worry that this advice funges against doing things outside Berkeley/SF that would build more valuable career capital for the future - for doing EA-relevant things outside of EA, or for bringing valuable skills and knowledge into EA (like, will we wish in 5 years that EAs had more outside professional experience to bring domain knowledge and legitimacy to EA projects, rather than resumes full of EA things?). This concern needs to be fleshed out empirically and will vary a lot in applicability by person.
(I work on CEA's community health team but am not making this post on behalf of that team)