Former CTO and co-founder of earn-to-give fintech Mast.
This is wonderful – thank you so much for writing it.
Mutual dedication to one another's ends seems common in religious and ethnic communities, but quite uncommon among secular idealists like me. Such idealists tend to form and join single-focus communities like effective altruism, which serve only a subset of our eudaemonic needs.
Agree about secular, single-purpose communities – but I'm not sure EA is quite the same.
I've found my relationships with other EAs tend to blossom into more than just EA; those principles provide a good set of shared values on which to build other things: a sense of community, shared houses, group meals, playing music together, and just supporting each other generally. Then again, I don't consider EA to be the core of my identity, so YMMV.
Broadly agree but:
The current problem is the lack of good training programs in impact-focused thinking, so when people with tons of experience and great credentials join EA, it's hard for them to reach the required level of 'EA-ness' (an impact-focused mindset, familiarity with the landscape) quickly enough to land the positions on offer.
Aren't we mitigating this with things like MATS, BlueDot et al.? These should be producing useful hires at a high rate, so training isn't the bottleneck it seems.
Let me write something up and come back to you.
Broadly, in order of safety, it's probably caffeine > modafinil > prescription stimulants, i.e. amphetamines (Vyvanse, dexamfetamine) and methylphenidate (Ritalin, Concerta). But these are very commonly prescribed for ADHD/narcolepsy (usually with an ECG and occasional blood pressure checks). I think the risk-reward works out very much positively, but obviously I'm eliding a lot of detail.
Great post. Two things come to mind:
One way to just be able to do more stuff is to take stimulants. I think there are cases where being on them can dent your intelligence in subtle ways, but broadly they can drastically increase your ability to do more, work through fatigue, etc. Maybe it's still a sufficiently edgy position that you didn't mention it here, but the absence was interesting. People at college are all taking modafinil for a reason.
I worry that some incredibly ambitious people in the EA world have gone on to pursue paths that have actually been harmful. Early employees at the frontier AI labs seem like the obvious example - Anthropic was founded as an "AI safety lab" with commitments not to push the frontier, but they obviously forgot about that along the way, and it seems hard to justify continuing to work there on capabilities imo. I suspect there's a lot of motivated reasoning going on among this group. Perhaps it's a cautionary tale about ambition unmoored from reflection, as other people point out here - or that if your ambition leads to filthy lucre, it's very hard to course-correct later on.
(Agree with the other commenters here that maybe the rate-limiting step isn't just pushing harder but co-ordination, taking more individual risks, etc)
(Reposted from my comment on the original Substack article.)
Is there a risk of boiling the ocean here?
The 'community notes everywhere' proposal seems easy enough to build (I've been hacking away at a Chrome extension version of it - rough sketch below). I'm not sure it makes sense to wait for personal computing to change fundamentally before attempting this.
I agree that distribution is an issue, and I'm not sure how to solve it. One approach might be to onboard a core group of users who annotate a specific subset of pages - like the top 20 posts on Hacker News - so that there's some chance of your notes being seen if you're a contributor. But I suppose this relies on getting that rather large core group of users (e.g. HN readers) to start using the product.
Alternatively, you build the thing and hope it gets adopted in some larger way - say, acquired by X if they want to roll out community notes across the whole web.
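For what it's worth, here's roughly what the client side of that extension looks like: a content script that looks up the current URL against a notes backend and injects any match into the page. This is a minimal sketch - NOTES_API and the Note shape are placeholders I've made up, not a real service:

```typescript
// content-script.ts - runs on every page the extension is allowed to match.
// NOTES_API and the Note shape are hypothetical placeholders, not a real
// service - swap in whatever backend actually stores the annotations.
const NOTES_API = "https://notes.example.com/api/notes";

interface Note {
  author: string;
  body: string;
}

// Look up any notes attached to the current page's URL.
async function fetchNotes(pageUrl: string): Promise<Note[]> {
  const res = await fetch(`${NOTES_API}?url=${encodeURIComponent(pageUrl)}`);
  if (!res.ok) return [];
  return res.json();
}

// Inject a fixed banner showing the top note, Community Notes-style.
function renderBanner(notes: Note[]): void {
  if (notes.length === 0) return;
  const banner = document.createElement("div");
  banner.style.cssText =
    "position:fixed;bottom:0;left:0;right:0;z-index:2147483647;" +
    "background:#fffbe6;border-top:1px solid #ccc;padding:8px;font:14px sans-serif;";
  banner.textContent = `Community note: ${notes[0].body} (${notes[0].author})`;
  document.body.appendChild(banner);
}

// Fail silently on network errors - a notes extension should never break the host page.
fetchNotes(window.location.href).then(renderBanner).catch(() => {});
```

The rest is a Manifest V3 manifest.json declaring this as a content script (plus host permissions for the fetch); the 'top 20 HN posts' idea above then just becomes a server-side allowlist of URLs.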
You do address the FTX comparison (by pointing out that it won't make funding dry up), that's fair. My bad.
But I do think you're making an accusation of epistemic impropriety that seems very different from FTX - getting FTX wrong (by not predicting its collapse) was a catastrophe, and I don't think the same is true for AI timelines. Am I missing the point?
I might be missing the point, but I'm not sure I see the parallels with FTX.
With FTX, EA orgs and the movement more generally relied on the huge amount of funding that was coming down the pipe from FTX Foundation and SBF. When all that money suddenly vanished, a lot of orgs and orgs-to-be were left in the lurch, and the whole thing caused a huge amount of reputational damage.
With the AI bubble popping... I guess some money that would have been donated by e.g. Anthropic early employees disappears? But it's not clear that that money has been 'earmarked' in the same way the FTX money was; it's much more speculative and I don't think there are orgs relying on receiving it.
OpenPhil presumably will continue to exist, although it might have less money to disburse if a lot of it is tied up in Meta stock (though I don't know that it is). Life will go on. If anything, slowing down AI timelines will probably be a good thing.
I guess I don't see how EA's future success is contingent on whether AI is a bubble or not. If it turns out to be a bubble, maybe that's good. If it turns out not to be, we sure as hell will have wanted to be in the vanguard of figuring out what a post-AGI world looks like and how to make it as good for humanity as possible.
For effect, I would have pulled in a quote from the Reddit thread on akathisia rather than just linking to it.
Akathisia is an inner restlessness that is as far as I know the most extreme form of mental agitation known to man. This can drive the sufferer to suicide [...] My day today consisted of waking up and feeling like I was exploding from my skin, I had an urge that I needed to die to escape. [...] I screamed, hit myself, threw a few things and sobbed. I can’t get away from it. My family is the only reason why I’m alive. [...] My CNS is literally on fire and food is the last thing I want. My skin burns, my brain on fire. It’s all out survival.
Ferrous sulphate is also common, but it's a bit nauseating and poorly absorbed in any case. Ferrous bisglycinate is sometimes branded as “gentle iron”.
For those very deficient in iron, an iron infusion will give you ~two years’ worth of iron in one go - and skips all the issues with oral bioavailability of iron. You will need to test your iron levels first to avoid iron overload.
I write a bit about iron supplementation in my guide to treating restless leg syndrome (RLS) for which iron deficiency is a common cause: https://henryaj.substack.com/p/how-to-treat-restless-legs-syndrome