I like the core point and think it's very important — though I don't really vibe with statements about calibration being actively dangerous.
I think EA culture can make it seem like being calibrated is the most important thing ever. But on the question "will my ambitious projects succeed?", being calibrated seems very difficult and fairly cursed overall, and trying super hard at it may be less helpful than just executing.
For example, I'm guessing that Norman Borlaug didn't feed a billion people primarily by being extremely well-calibrated. I think he did it by being a good scientist, dedicating himself fully to something impactful-in-principle even when the way forward was unclear, and being willing to do things outside his normal wheelhouse, like navigating bureaucracy or engaging with Indian government officials. I'd guess he was well-calibrated about micro-aspects of his wheat breeding work, such as which experiments were likely to work out, or perhaps about which politicians would listen to him (though he could also simply have been uncalibrated and very persistent). I wouldn't expect he was well-calibrated about the overall shape of his career early on, and it doesn't seem very important that he was calibrated about that.
One often hears that successful political candidates always had unwarranted-seeming confidence in themselves and always believed they'd win office. I've noticed that the most successful researchers tend to seem a bit 'crazy' and to have unwarranted confidence in their own work. Successful startup founders, too, are not exactly known for realistic ex-ante estimates of their own success. (Of course, all of this applies to many unsuccessful political candidates, researchers, and founders as well.)
I think something psychologically important is going on here; my guess is that "part of you" really needs to believe in outsized success in order to have a chance of achieving it. This old Nate Soares post is relevant.
Yeah, it's totally a contextual call how to make this point in any given conversation; it can be easy to get bogged down in irrelevant context.
I do think it's true that utilitarian thought tends to push one towards centralization and central planning, despite the latter's poor track record. It's worth engaging with thoughtful critiques of EA vibes on this front.
Salaries are the most basic way our economy does allocation, and one possible "EA government utopia" scenario is one where the government corrects market inefficiencies such that salaries perfectly track "value added to the world." This is deeply sci-fi, of course, but hey, why not dream. In such a utopia, if we really did reach the point where marginal safety researchers were adding no more value than marginal post office workers, salaries would reflect that.
I like the main point you're making.
However, I think "the government's version of 80,000 Hours" is a very command-economy vision. Command economies have a terrible track record, and if there were such a thing as an "EA world government" (about which I would have many questions regardless), I strongly think it shouldn't try to plan and direct everyone's individual careers; instead it should leverage market forces, like ~all successful large economies do.
Thanks for writing this!!
This risk seems as large as or larger than AI takeover risk to me. Historically, the EA & AIS communities focused more on misalignment, but I'm not sure that choice has held up.
Come 2027, I'd love to see an order of magnitude more people usefully working on this risk. I think it will be rough going for the first 50 people in this area, and I expect there's a bunch more clarificatory and scoping work to do. This is virgin territory; we need some pioneers.
People with plans in this area should feel free to apply for career transition funding from my team at Coefficient (fka Open Phil) if they think that would be helpful to them.
Wow, that boggles my mind, especially as someone who attended a similar school for undergrad. Is there anywhere we can read about this? Presumably this happened after the initial post, which reported 80 pledges.
(Middlebury has about 2,800 undergrads, so this would be 280 students taking the trial or full GWWC pledge.)