I might be missing the point, but I'm not sure I see the parallels with FTX.
With FTX, EA orgs and the movement more generally relied on the huge amount of funding that was coming down the pipe from FTX Foundation and SBF. When all that money suddenly vanished, a lot of orgs and orgs-to-be were left in the lurch, and the whole thing caused a huge amount of reputational damage.
With the AI bubble popping... I guess some money that would have been donated by e.g. Anthropic early employees disappears? But it's not clear that that money has been 'earmarked' in t...
For effect, I would have pulled in a quote from the Reddit thread on akathisia rather than just linking to it.
Akathisia is a inner restlessness that is as far as I know the most extreme form of mental agitation known to man. This can drive the sufferer to suicide [...] My day today consisted of waking up and feeling like I was exploding from my skin, I had a urge that I needed to die to escape. [...] I screamed, hit myself, threw a few things and sobbed. I can’t get away from it. My family is the only reason why I’m alive. [...] My CNS is literally on fire and food is the last thing I want. My skin burns, my brain on fire. It’s all out survival.
if I had kept at it and pushed harder, maybe the project would have got further... but I don't think I actually wanted to be in that position either!
I think this is a problem with for-profit startups as well. Most of the time they fail. But sometimes they succeed (in the sense of “not failing” rather than breakout success which is far rarer), and in that case you’re stuck with the thing to see it through to an exit.
people who have strong conviction in EA start with a radical critique of the status quo (e.g. a lot of things like cancer research or art or politics or volunteering with lonely seniors seem a lot less effective than GiveWell charities or the like, so we should scorn them), then see the rationales for the status quo (e.g. ultimately, society would start to fall apart if it tried to divert too many resources to GiveWell charities and the like by taking them away from everything else), and then come full circle back around to some less radical position
I agre...
One thing that occurs to me (as someone considering a career pivot) is the case where someone isn't committed to a specific cause area. Here you talk about someone who is essentially choosing between EtG for AI safety and doing AI safety work directly.
In my case, though, I'm considering a pivot from EtG to AI safety - but currently I exclusively support animal welfare causes when I donate. Perhaps this is just irrational on my part. My thinking is that I'm unlikely, given my skillset, to be any good at doing direct work in the animal welfare space, but conside...
I wonder why this hasn't attracted more upvotes - seems like a very interesting and high-effort post!
Spitballing - I guess there's such a lot of math here that many people (including me) won't be able to fully engage with the key claims of the post, which limits the surface area of people who are likely to find it interesting.
I note that when I play with the app, the headline numbers don't change for me when I change the parameters of the model. May be a bug?
I can start by giving my own answer to this (things I might do with my time):
IIRC this was basically the thesis behind the EA Hotel (now CEEALAR) - a low-cost space for nascent EAs to do a bunch of thinking without having to worry too much about the basics.
More broadly this is also a benefit of academic tenure - being able to do your research without having to worry about finding a job (although of course getting funding is still the bottleneck, and a big force in shaping where research effort is directed).
What do we think about maintenance-free masks (they're like the half-face respirators but have single-use filters)? They seem better than using N95s/having nothing, but worse than having swappable filters.
(The cost of mask + swappable filters seems much higher than the maintenance-free mask, maybe 2x judging by the cost on Amazon UK)
Looks like the amendment passed, sadly:
Lawmakers at the European Parliament voted by 355 to 247 in favour of an amendment to a regulation designed to give farmers a stronger negotiating position so that powerful companies in the food supply chain do not impose unfavourable conditions.
The text of the final regulation will follow negotiations between representatives of the Parliament, EU governments and the Commission, with the Parliament backing a ban of terms such as "veggie-burger" or "vegan sausage".
I guess it's an interesting position you're in - you might personally want to be strictly vegan, but also in some ways the whole point of FarmKind is that you don't need to do that/doing that doesn't have all that large an impact.
Which also puts you in a bit of a bind bc as you say there are animal advocates who will see not being vegan as a mark of unseriousness.
Getting FarmKind featured by Sam Harris would be a real coup.
Ferrous sulphate is also common but a bit nauseating and poorly absorbed in any case. Ferrous bisglycinate is also found branded as “gentle iron”.
For those very deficient in iron, an iron infusion will give you ~two years’ worth of iron in one go - and skips all the issues with oral bioavailability of iron. You will need to test your iron levels first to avoid iron overload.
I write a bit about iron supplementation in my guide to treating restless leg syndrome (RLS) for which iron deficiency is a common cause: https://henryaj.substack.com/p/how-to-treat-restless-legs-syndrome
I agree that removing the 10% of animal products from your diet that causes the least suffering is not that important, and otherwise clear-headed EAs treat it like a very big deal in a way they don’t treat giving 10% less to charity as a very big deal. ... it’s not clear to me whether the signaling value of being 100% vegan and strict about it ... is positive
Given this, why are you vegan?
(I'm also ~vegan but wrestle with the relative importance of it given how difficult it can be; the signalling value to others is one of the reasons I think it's good.)
Good question! I think (a) having to think about which is the 10% and “should I eat this” every meal uses too much bandwidth. I find a simple rule easier overall. It’s kind of like how I don’t calculate the consequences of my actions at every decision even though I’m consequentialist. I rely on heuristics instead. (b) I found it really hard to get to my current diet. It took me many years. And I think that personally I’ll find it hard to re-introduce 10% of the animal products without being tempted and it becoming 50%. (c) I think the things I say about veganism to other vegans / animal people are more credible when I’m vegan [as I’m clearly committed to the cause and not making excuses for myself].
your future self is a separate person. If we imagine the argument targeting any other person, it's horrible - it states that you should lock them into some state that ensures they're forced to serve your (current) interests
Surely this proves too much? Any decision with long-term consequences is going to bind your future self. Having kids forces your future self onto a specific path (parenthood) just as much as relocating to an EA hub does.
(A potential lesson from Wytham Abbey: You don't need to buy a fancy church building, you just need to befriend the people who currently own it!)
On that note - seems like an enormous waste that Wytham Abbey sits empty when it could be used (presumably at very little marginal cost?) while EVF works on selling it.
I'm still skeptical of using 'obviousness'/'plausibility' as evidence of a theory being correct - as a mental move it risks proving too much. Multiple theories might have equally obvious implications. Plenty of previously-unthinkable views would have been seen as deeply un-obvious at the time.
You have your intuitions and I have mine - we can each say they're obvious to us and it gets us no further, surely? Perhaps I'm being dense.
In Don't Valorize The Void you say:
...Omelas is a very good place, and it's deeply irrational to condemn it. We can demonstrate this by noti
This got added as a comment on the original Substack article; think it's worth reading in this context:
https://expandingcircle.substack.com/p/the-dark-side-of-pet-ownership
Not what I was saying. More like, it’s a weak argument to merely say “my position generates a sensible-sounding conclusion and thus is more likely to be true”, and it would surprise me if eg a highly-upvoted EA Forum post used this kind of circular reasoning. Or is that what you’re defending?
I suppose I agree that we’re not obliged to give every crackpot view equal airtime - I just disagree that “pets have net negative lives” is such a view.
I think the “pathology” comment is probably a norm violation. The “sensible” comment feels more like circular reasoning I guess? (Or maybe it doesn’t feel obvious to me, and perhaps therefore it irks me more than it does others.)
Although they're presented in adjacent sentences, this:
I’d say that domesticated life seems both (i) clearly good overall, and (ii) the best form of life that’s realistically available for many non-human animals.
seems distinct from:
(I know I’d much rather be reincarnated as a well-cared-for companion animal than as a starving, parasite-ridden stray. Yeah, even at the cost of a minute spent tied to a lamppost!)
I would also prefer to be a companion animal over being a stray – but I would probably prefer not to exist than exist as a companion animal.
N...
I tend to agree; better to be explicit especially as the information is public knowledge anyway.
It refers to this: https://forum.effectivealtruism.org/posts/HqKnreqC3EFF9YcEs/
most people's coworkers aren't trying to reshape the lightcone without public consent so idk, maybe different standards should apply here
Exactly. Daniela and the senior leadership at one of the frontier AI labs are not the same as someone's random office colleague. There's a clear public interest angle here in terms of understanding the political and social affiliations of powerful and influential people - which is simply absent in the case you describe.
Oddly I used to work at Pivotal[1] - have very fond memories of the London office with its full breakfast every morning...
It has since been acquired by VMware and gradually killed, and then VMware was acquired in turn by Broadcom, who have really, really killed it
There's almost no mention of it online now as the brand has been killed off
To what extent do you think this fits the typical EA brief of 'important, neglected, tractable'? Even if we think that supporting sturgeon populations is intrinsically valuable - given that the fall in sturgeon populations is caused by overfishing for caviar, isn't it more obvious to just... not consume it?
I sometimes think about whether we have, or should have, language for a mental health equivalent of second impact syndrome. At the time I burned out I would say I was dealing with four ~independent situations or circumstances that most people would recognise as challenging, but my attitude to each one was 'this is fine, I can handle this'. Taken one at a time that was probably true; all at once it was demonstrably false.
Somehow I needed to notice that I was already dealing with one or two challenging situations and strongly pivot to a defensive posture to...
Hooray - this is awesome work. Fight the good fight.
I donated to THL last year because of this case; I was advised by Founders Pledge that without funding the appeal might fall through and was keen for that not to happen. I wonder what other time-sensitive efforts in this space have room for more funding?
Time for the Shrimp Welfare Project to do a Taylor Swift crossover?
https://www.instagram.com/p/C59D5p1PgNm/?igsh=MXZ5d3pjeHAxeHR2dw==
You do address the FTX comparison (by pointing out that it won't make funding dry up), that's fair. My bad.
But I do think you're making an accusation of some epistemic impropriety that seems very different from FTX - getting FTX wrong (by not predicting its collapse) was a catastrophe, and I don't think it's the same for AI timelines. Am I missing the point?