I don't think there's a consensus on how the average young person should navigate the field
Yeah, that sounds right. I agree that people should have a "here is a take, it may work for some and not others - we're still figuring this out" vibe when they are giving career advice (if they aren't already). Though I think I'd give that advice for most fieldbuilding, including AI safety, so maybe that's too low a bar.
I'm curious about whether other people who would consider themselves particularly well-informed on AI (or an "AI expert") found these results surprising. I only skimmed the post, but I asked Claude to generate some questions based on the post for me to predict answers to, and I got a Brier score of 0.073, so I did pretty well (or at least don't feel worried about being wildly out of touch). I'd guess that most people I work with would also do pretty well.
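For readers unfamiliar with the metric: a Brier score is just the mean squared error between stated probabilities and binary outcomes, so lower is better, and always answering 50% scores 0.25, which makes 0.073 solidly better than chance. A minimal sketch of the calculation, using made-up forecasts rather than the actual questions Claude generated:

```python
# Minimal sketch of a Brier score calculation: the mean squared error between
# predicted probabilities and binary (0/1) outcomes. Lower is better; always
# answering 0.5 scores 0.25. The forecasts below are made-up placeholders,
# not the actual questions from the exercise described above.

def brier_score(forecasts, outcomes):
    """Average of (probability - outcome)^2 over all questions."""
    assert len(forecasts) == len(outcomes) and forecasts
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical predictions vs. how the questions actually resolved.
predictions = [0.9, 0.8, 0.3, 0.95, 0.15]
resolutions = [1, 1, 0, 1, 0]

print(round(brier_score(predictions, resolutions), 3))  # 0.033
```

(For rough intuition, a score of 0.073 corresponds to being off by about sqrt(0.073) ≈ 0.27 per question on average.)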
...I'm not an expert in this space; @Grace B, who I've spoken to a bit about this, runs the AIxBio fellowship and probably has much better takes than I do. Fwiw, I think I have a different perspective to the post.
My rough view is:
1. Historically, we have done a bad job at fieldbuilding in biosecurity (for nuanced reasons, but I guess that we made some bad calls).
2. As of a few months ago, we have started to do a much better job at fieldbuilding, e.g. the AIxBio fellowship that you mentioned is ~the first of its kind. The other fellowships you ...
In general, I think many people who have the option to join Anthropic could do more altruistically ambitious things, but career decisions should factor in a bunch of information that observers have little access to (e.g. team fit, internal excitement/motivation, exit opportunities from new role ...).[1] Joe seems exceptionally thoughtful, altruistic, and earnest, and that makes me feel good about Joe's move.
I am very excited about posts grappling with career decisions involving AI companies, and would love to see more people write them. Thank you very...
I have lots of disagreements with the substance of this post, but at a more meta level, I think your post will be better received (and be a more wholesome intellectual contribution) if you change the title to "reasons against donating to Lightcone Infrastructure", which doesn't imply that you are trying to give both sides a fair shot (though, to be clear, I think posts representing just one side are still valuable).
Quick, non-exhaustive list of places where a few strategic, dedicated, and ambitious altruists could make a significant dent within a year (because, rn, EA is significantly dropping the ball).
Improving the media, China stuff, increasing altruism, moral circle expansion, AI mass movement stuff, frontier AI lab insider coordination (within and among labs), politics in and outside the US, building up compute infrastructure outside the US, security stuff, EA/longtermist/School for Moral Ambition/other field building, getting more HNW people into EA, etc.
(List originally shared with me by a friend)
I suggested the following question for Carl Shulman a few years ago:
I'd like to hear his advice for smart undergrads who want to build their own similarly deep models in important areas which haven't been thought about very much, e.g. take-off speeds, the influence of pre-AGI systems on the economy, the moral value of insects, preparing for digital minds (ideally including specific exercises/topics/reading/etc.).
I'm particularly interested in how he formed good economic intuitions, as they seem to come up a lot in his thinking/writing.
https://forum.effective...
Yeah, I think we have a substantive disagreement. My impression before and after reading your list above is that you think that being convinced of longtermism is not very important for doing work that is stellar according to "longtermism", and that it's relatively easy to convince people that x-risk/AIS/whatever is important.
I agree with the literal claim, but think that empirically longtermists represent the bulk of people who concern themselves with thinking clearly about how wild the future could be. I don't think all longtermists do this, but longtermism e...
A few scattered points that make me think this post is directionally wrong, whilst also feeling meh about the forum competition and essays:
Yeah, I also think hanging out in a no-1:1s area is weirdly low-status/unexciting. I’d be a bit more excited about cause- or interest-specific areas like “talk about ambitious project ideas”.
I just returned from EAG NYC, which exceeded my expectations - it might have been the most useful and enjoyable EAG for me so far.
Ofc, it wouldn’t be an EAG without inexperienced event organisers complaining about features of the conference (without mentioning it in the feedback form), so, to continue that long tradition, here is an anti-1:1s take.
EAGs are focused on 1:1s to a pretty extreme degree. It’s common for my friends to have 10-15 30-minute 1:1s per day; at other conferences I’ve been to it’s generally more like 0-5. I woul...
My impression is EAGx Prague 22 managed to balance 1:1s with other content simply by not offering SwapCard 1:1 slots for part of the time, having a lot of spaces for small-group conversations, and suggesting to attendees that they should aim for something like a balanced diet. (Turning off SwapCard slots does not prevent people from scheduling 1:1s; it just adds a little friction. Empirically, it seems enough to prevent the mode where people just fill their time with 1:1s.)
As far as I understand this will most likely not happen, because weight given to / goodharting on met...
(written v quickly, sorry for informal tone/etc)
i think that a happy medium is small-group conversations (that are useful, effective, etc) of 3–4 people. this includes 1-1s, but the vibe of a Formal, Thirty Minute One on One is very different from floating through 10–15 3–4-person conversations in a day, each lasting varying amounts of time.
I’m not sure I understood the last sentence. I personally think that a bunch of areas Will mentioned (democracy, persuasion, human + AI coups) are extremely important, and likely more useful on the margin than additional alignment/control/safety work for navigating the intelligence explosion. I’m probably a bit less “aligned ASI is literally all that matters for making the future go well” pilled than you, but it’s definitely a big part of it.
I also don’t think that having higher odds of AI x-risk is a crux, though different “shapes” of intelligence ...
> I’m probably a bit less “aligned ASI is literally all that matters for making the future go well” pilled than you, but it’s definitely a big part of it.
Sure, but the vibe I get from this post is that Will believes in that a lot less than me, and the reasons he cares about those things don't primarily route through the totalizing view of ASI's future impact. Again, I could be wrong or confused about Will's beliefs here, but I have a hard time squaring the way this post is written with the idea that he intended to communicate that people should w...
I’m sure it was a misunderstanding, but fwiw, in the first paragraph, I do say “positive contributors” by which I meant people having a positive impact.
I agree with some parts of your comment, though it’s not particularly relevant to the thesis that most people with significant responsibility for most of the top-tier areas (according to my view on the top-tier areas for making AGI go well) have values that are much more EA-like than would naively be expected.
I don’t think the opposite of (i) is true.
Imagine a strong fruit loopist, who believes there’s an imperative to maximise total fruit loops.
If you are not a strong fruit loopist, there’s no need to minimise total fruit loops; you can just have preferences that don’t have much of an opinion on how many fruit loops should exist (i.e. everyone’s position).
Maybe this is working for them, but I can’t help feeling icked by it, and it makes me lose a bit of faith in the project.
Plausibly useful feedback, but I think this is ~0 evidence for how much faith you should have in Blue Dot relative to factors like reach, content, funding, materials, testimonials, reputation, public writing, past work of team members... If I were doing a grant evaluation of Blue Dot, it seems highly unlikely that this would make it into the eval.
There's definitely some selection bias (I know a lot of EAs), but anecdotally, I feel that almost all the people who, in my view, are "top-tier positive contributors" to shaping AGI seem to exemplify EA-type values (though it's not necessarily their primary affinity group).
Some "make AGI go well influencers" who have commented or posted on the EA Forum and, in my view, are at the very least EA-adjacent include Rohin Shah, Neel Nanda, Buck Shlegeris, Ryan Greenblatt, Evan Hubinger, Oliver Habryka, Beth Barnes, Jaime Sevilla, Adam Gleave, Eliezer Yudkowsky, ...
I would say the main people "shaping AGI" are the people actually building models at frontier AI companies. It doesn't matter how aligned "AI safety" people are if they don't have a significant say on how AI gets built.
I would not say that "almost all" of the people at top AI companies exemplify EA-style values. The most influential person in AI is Sam Altman, who has publicly split with EA after EA board members tried to fire him for being a serial liar.
On a related note, I happened to be thinking about this a little today as I took a quick look at what ~18 past LTFF grantees who were given early-career grants are doing now, and at least 14 of them are doing imo clearly relevant things for AIS/EA/GCR etc. I couldn't quickly work out what the other four were doing (though I could have just emailed them or spent more than 20 minutes total on this exercise).
For me, it was a moderate update against "bycatch" amongst LTFF grantees (an audience which, in principle, should be especially vulnerable to bycatch), though I don't think this should be much of an update for others, especially when thinking about the EA community more comprehensively.
> For me, it was a moderate update against "bycatch" amongst LTFF grantees (an audience which, in principle, should be especially vulnerable to bycatch)
Really? I think it would be the opposite: LTFF grantees are the most persistent and accomplished applicants and are therefore the least likely to end up as bycatch.
We evaluate grants in other longtermist areas but you’re correct that it’s rare for us to fund things that aren’t AI or bio (and biosecurity grants more recently have been relatively rare). We occasionally fund work in forecasting, macrostrategy, and fieldbuilding.
It’s possible that we’ll support a broader array of causes in the future, but until we make an announcement, I think the status quo of investigating a range of areas in longtermism and then funding the things that seem most promising to us (as represented by our public reporting) will persist.
I think I follow and agree with the "spirit" of the reasoning, but don't think it's very cruxy. I don't have cached takes on what it implies for the people replying to the EA survey.
Some general confusions I have that make this exercise hard:
* not sure how predictive choice of org to work at is of choice of org to donate to; lots of people I know donate to the org they work at because they think it's the best, some donate to things they think are less impactful (at least on utilitarian grounds) than the place they work (e.g. see CEA giving season charity recs) ...
But if they're really sort of at all different, then you should really want quite different people to work on quite different things.
I agree, but I don't know why you think people should move from direct work (or skill building) to e2g. Is the argument that the best things require very specialised labour, so on priors, more people should e2g (or raise capital in other ways) than do direct work?
I don’t understand why this is relevant to the question of whether there are enough people doing e2g. Clearly there are many useful direct impact or skill building jobs that aren’t at ea orgs. E.g. working as a congressional staffer.
I wouldn’t find it surprising at all if most EAs are a good fit for good non e2g roles. In fact, earning a lot of money is quite hard, I expect most people won’t be a very good fit for it.
I think we’re talking past each other when we say “ea job”, but if you mean job at an ea org I’d agree there aren’t enough roles for everyone...
> This is because I think that we are not able to evaluate what replacement candidate would fill the role if the employed EA had done e2g.
Idk I feel like you can get a decent sense of this from running hiring rounds with lots of work tests. I think many talented EAs are looking for EA jobs, but often it's a question of "fit" over just raw competence.
> My understanding is that many non-EA jobs provide useful knowledge and skills that are underrepresented in current EA organizations, albeit my impression is that this is improving as EA organizations profess...
The percentage of EAs earning to give is too low
(I wasn't going to comment, but rn I'm the only person who disagrees)
Some reasons against the claim that the current proportion of e2g'ers is too low:
* There aren't many salient examples of people doing direct work that I want to switch to e2g.
* Doing direct work gives you a lot more exposure to great giving opportunities.
* Many people doing direct work I know wouldn't earn dramatically more if they switched to e2g.
* Most people doing e2g aren't doing super ambitious e2g (e.g. earning putting themselves in a position to ...
This is a cool list. I am unsure if this one is very useful:
* There aren't many salient examples of people doing direct work that I want to switch to e2g.
This is because I think that we are not able to evaluate what replacement candidate would fill the role if the employed EA had done e2g. My understanding is that many extremely talented EAs are having trouble finding jobs within EA, and that many of them are capable of working at the quality that current EA employees do.
This reason I think bites both ways:
* E2g is often less well optimised for learning...
Matching campaigns get a bad rep in EA circles*, but it’s totally reasonable for a donor to be concerned that if they put lots of money into an area, other people won’t donate; matching campaigns preserve the incentive for others to donate, crowding in funding.
* I agree that campaigns claiming you’ll have twice the impact because your donation will be matched are misleading.
Thanks, this is a great response. I appreciate the time and effort you put into this.
I'm not sure it makes sense to isolate 2b and 3b here - 1a can also play a role in mitigating failure (and some combination of all three might be optimal).
I just isolated these because I thought that you were most interested in EA orgs improving on 2b/3b, but noted.
I'd be curious to see a specific fictional story of failure that you think is:
* realistic (e.g. you'd be willing to bet at unfavourable odds that something similar has happened in the last year)
* seems very bad (e.g. worth say 25%+ of the org's budget to fix)
* is handled well at more mature charities with better governance
* stems from things like 2b and 3b
I'm struggling to come up with examples that I find compelling, but I'm sure you've thought about this a lot more than I have.
In my opinion, one of the main things that EA / rationality / AI safety communities have going for them is that they’re extremely non-elitist about ideas. If you have a “good idea” and you write about it on one of the many public forums, it’s extremely likely to be read by someone very influential. And insofar as it’s actually a good idea, I think it’s quite likely to be taken up and implemented, without all the usual status games that might get in the way in other fields.
While I agree that it's not "elitist" in the sense that anyone can put forward ideas and be considered by significant people in the community (which I think is great!), I would say there are still some expectations that need to be met, in that the "good idea" generally must accept several commonly agreed-upon premises that represent what I'd call the "orthodoxy" of EA / rationality / AI safety.
For instance, I noticed way back when I first joined Less Wrong that the Orthogonality Thesis and Instrumental Convergence are more or less doctrines, and challenging the...
This is true, and I've appreciated it personally. I've been pleasantly surprised by how people have responded to a couple of things I've written, even when they didn't know me from a bar of soap. I think this was unlikely to happen in academia or in the high-brow public health world, where status games often prevail, like you said.
There is still, though, an element of being "known" which helps your ideas get traction. This does make sense, as if someone has written something decent in the past, there's a higher chance that other things they write may also be decen...
Every now and then I'm reminded of this comment from a few years ago: "One person's Value Drift is another person's Bayesian Updating"
I probably agree with this idea, but I wouldn't label it "value drift" myself.
From my perspective, what you're describing is more like keeping a scout mindset around our values and continually trying to improve.
"Value drift" for me signals the negative process of switching off our moral radar and almost unconscious drifting towards the worlds norms of selfishness, status, blissful ignorance etc. Reversion towards the mean. Hence the the "drift". I'm not sure I've ever seen someone drift their way to better values. Within the church I have seen big d...
Unfortunately I feel that culturally these spaces (EEng/CE) are not very transmissible to EA-ideas and the boom in ML/AI has caused significant self-selection of people towards hotter topics.
Fwiw, I have some EEE background from undergrad and spend some time doing fieldbuilding with this crowd, and I think a lack of effort on outreach is more predictive of the lack of relevant people at, say, EAGs than AI risk messaging not landing well with this crowd.
I have updated upwards a bit on whistleblowers being able to make credible claims on IE. I do think that people in positions with whistleblowing potential should probably try and think concretely about what they should do, what they'd need to see to do it, who specifically they'd get in contact with, and what evidence might be compelling to them (and have a bunch of backup plans).
a. An intelligence explosion like you're describing doesn't seem very likely to me. It seems to imply a discontinuous jump (as opposed to regular acceleration), and also implies that this resulting intelligence would have profound market value, such that the investments would have some steeply increased ROI at this point.
I'm not exactly sure what you mean by discontinuous jump. I expect the usefulness of AI systems to be pretty "continuous" inside AI companies and "discontinuous" outside AI companies. If you think that:
1. model release cadence will s...
I found this text particularly useful for working out what the program is.
> When
> Where
> - Remote, with all chats and content located on the Supercyc