I think typical financial advice is that emergency funds should be kept in very low-risk assets, like cash, money market funds, or short-term bonds. This makes sense because the probability that you need to draw on emergency funds is negatively correlated with equity returns: market downturns make it more likely that you will lose your job, and some sorts of disasters could cause both a market downturn and personal loss. You really don't want your emergency fund to lose value at the same time that you're most likely to need it.
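As a toy illustration of that correlation point (a minimal sketch; every number below is an assumption made up for the example, not drawn from any data), conditioning on "I lost my job" raises the chance that equities are simultaneously down:

```python
import random

# Toy model with assumed, made-up parameters: job loss is more likely in
# crash years, so an equity-held emergency fund tends to be down exactly
# when it is most likely to be needed.
rng = random.Random(0)

def simulate_year():
    crash = rng.random() < 0.15                   # assumed: 15% of years see a crash
    equity_return = -0.30 if crash else 0.08      # assumed equity returns
    job_loss = rng.random() < (0.10 if crash else 0.03)  # assumed job-loss risk
    return equity_return, job_loss

years = [simulate_year() for _ in range(100_000)]
loss_years = [ret for ret, lost in years if lost]
p_down_given_loss = sum(ret < 0 for ret in loss_years) / len(loss_years)
print("P(crash year) = 15% by assumption")
print(f"P(crash year | lost job) ~ {p_down_given_loss:.0%}")  # ~37% under these assumptions
```

Under these made-up assumptions, losing your job more than doubles the chance that an equity-held emergency fund is underwater at the moment you need it.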
One dynamic worth considering here is that a person with near-typical longtermist views likely also believes that the future holds a large number of salient risks, including sub-extinction AI catastrophes, pandemics, war with China, authoritarian takeover, a "white collar bloodbath," etc.
It can be very psychologically hard to spend all day thinking about these risks without also internalizing that they may very well affect oneself and one's family, which in turn implies that typical financial advice and financial lifecycle plann...
There used to be a website to try to coordinate this; not sure what ever happened to it.
I'm not going to defend my whole view here, but I want to give a thought experiment as to why I don't think that "shadow donations"—the delta between what you could earn if you were income-maximizing and what you're actually earning in your direct-work job—are a great measure for the purposes of practical philosophy (though I agree they're both a relevant consideration and a genuine sacrifice).
Imagine two twins, Anna and Belinda. Both have just graduated with identical grades, skills, degrees, etc. Anna goes directly from college to work on AI safety at Safet...
It's not clear to me whether you're talking about (a) people who take a voluntary salary sacrifice while working at an EA org, or (b) people who could have earned much more in industry but moved to a nonprofit and so now earn much less than their hypothetical maximum earning potential.
In case (a), yes, their salary sacrifice should count towards their real donations.
But I think a practical moral philosophy wherein donation expectations are based on your actual material resources (and constraints), not your theoretical maximum earning potential, seems more justif...
I disagree and think that (b) is actually totally sufficient justification. I'm taking as an assumption that we're using an ethical theory on which people do not have an unbounded obligation to give everything up to subsistence, and on which it is fine to set some boundary: a fraction of your total budget of resources that you spend on altruistic purposes. Many people in well-paying altruistic careers (e.g., technical AI safety) could earn dramatically more money, e.g., at least twice as much, if they were optimising for the highest paying...
Thanks for this very thoughtful reply!
I have a lot to say about this, much of which boils down to two points:
The r...
Ah, interesting; that's not exactly the case I thought you were making.
I more or less agree with the claim that "Elon changing the Twitter censorship policies was a big driver of a chunk of Silicon Valley getting behind Trump," but probably assign it lower explanatory power than you do (especially compared to nearby explanatory factors, like Elon crushing internal resistance and employee power at Twitter). But I disagree with the claim that anyone who bought Twitter could have done that, because I think that Elon's preexisting sources of power and influence ...
I will say that failing to appreciate the arguments of open-source advocates, who are themselves very concerned about the concentration of power from powerful AI, has led to a completely unnecessary polarisation of that community against the AI Safety community.
I think if you read the FAIR paper to which Jeremy is responding (of which I am a lead author), it's very hard to defend the proposition that we did not acknowledge and appreciate his arguments. There is an acknowledgment of each of the major points he raises on page 31 of FAIR. If you then compare the tone of the FAIR pap...
(Elon's takeover of Twitter was probably the second—it's crazy that you can get that much power for $44 billion.)
I think this is pretty significantly understating the true cost. Or put differently, I don't think it's good to model this as an easily replicable type of transaction.
I don't think that if, say, some more boring multibillionaire did the same thing, they could achieve anywhere close to the same effect. It seems like the Twitter deal mainly worked for him, as a political figure, because it leveraged existing idiosyncratic strengths that he had,...
A warm welcome to the forum!
I don't claim to speak authoritatively, or to answer all of your questions, but perhaps this will help continue your exploration.
There's an "old" (by EA standards) saying that EA is a Question, Not an Ideology. Most of what connects the people on this forum is not necessarily that they all work in the same cause area, or share the same underlying philosophy, or have the same priorities. Rather, what connects us is rigorous inquiry into the question of how we can do the most good for others with our spare resources. Becaus...
I upvoted and didn't disagree-vote, because I generally agree that using AI to nudge online discourse in more productive directions seems good. But if I had to guess where disagree votes come from, it might be a combination of:
Both Sam and Dario saying that they now believe they know how to build AGI seems like an underrated development to me. To my knowledge, they only started saying this recently. I suspect they are overconfident, but it still seems like a more significant indicator than many people seem to be tracking.
I also have very wide error bars on my $1B estimate; I have no idea how much equity early employees would normally retain in a startup like Anthropic. That number is also probably dominated by the particular compensation arrangements and donation plans of ~5–10 key people and so very sensitive to assumptions about them individually.
One factor this post might be failing to account for: the wealth of Anthropic founders and early-stage employees, many of whom are EAs, EA-adjacent, or at minimum very interested in supporting existential AI safety. I don't know how much equity they have collectively, how liquid it is, how much they plan to donate, etc. But if I had to guess, there's probably at least $1B earmarked for EA projects there, at least in NPV terms?
(In general, this topic seems under-discussed.)
...On the allocative efficiency front, the Harris campaign has pledged to impose nationwide rent controls, an idea first floated by President Biden. Under the proposal, “corporate landlords” with 50+ units would have to “either cap rent increases on existing units to no more than 5% or lose valuable federal tax breaks,” referring to depreciation write-offs. This would be a disastrously bad policy for the supply side of housing, and an example of the sort of destructive economic populism normally ascribed to Trump.
Harris’s terrible housing policy can be disco
I agree he shouldn’t have his past donations held against him, and that his past generosity should be praised.
At the same time, he’s not simply “stopping giving.” His prior plan was for his estate to go to BMGF. Let’s assume that plan was reflected in his estate-planning documents. He would have had to make an affirmative change to effect this new plan. So with this specific action he is not “stopping giving”; he is actively altering his plan to be much worse.
I think many people are tricking themselves into being more intellectually charitable to Hanania than warranted.
I know relatively little about Hanania other than what has been brought to my attention through EA drama and some basic “know thy enemy” reading I did on my own initiative. I feel pretty comfortable in my current judgment that his statements on race are not entitled to charitable readings in cases of ambiguity.
Hanania by his own admission was deeply involved in some of the most vilely racist corners of the internet. He knows what sorts of mess...
Pretty wild discussion in this podcast about how aggressively the USSR cut corners on safety in its space program in order to stay ahead of the US. In the author's telling, this was in large part because Khrushchev wanted to rack up as many "firsts" (e.g., first satellite, first woman in space) as possible. This seems to have been most proximately for prestige and propaganda rather than for any immediate strategic or technological benefit (though of course the space program did eventually produce such benefits).
Evidence of the following ...
It could be the case that the board would reliably fail in all nearby fact patterns but that market participants simply did not know this, because there were important and durable but unknown facts about e.g. the strength of the MSFT relationship or players' BATNAs.
I agree this is an alternative explanation. But my personal view is also that the common wisdom that it was destined to fail ab initio is incorrect. I don't have much more knowledge than other people do on this point, though.
...I think it would be fair to describe some Presidents as being effe
I agree this would be appealing to intellectually consistent conservatives, but this seems like a bad meme to be spreading/strengthening for animal welfare. Maybe local activists should feel free to deploy it if they think they can flip some conservative's position, but they will be setting themselves up for charges of hypocrisy if they later want to e.g. ban eggs from caged chickens.
How are you defining "powerless"? See my previous comment: I think the common meaning of "powerless" implies not just significant constraints on power but rather the complete absence thereof.
I would say that the LTBT is powerless iff it can be trivially prevented from accomplishing its primary function—overriding the financial interests of the for-profit Anthropic investors—by those investors, such as with a simple majority (which is the normal standard of corporate control). I think this is very unlikely to be true, p<5%.
I definitely would not say that the OpenAI Board was powerless to remove Sam in general, for the exact reason you say: they had the formal power to do so, but it was politically constrained. That formal power is real, and unless it can be trivially overruled in any instance in which it is exercised for its intended purpose, it is sufficient for the Board not to be "powerless."
It turns out that they were maybe powerless to remove him in that instance and in that way, but I think there are many nearby fact patterns on which the Sam firing could have worked. This is e...
I think "powerless" is a huge overstatement of the claims you make in this piece (many of which I agree with). Having powers that are legally and politically constrained is not the same thing as the nonexistence of those powers.
I agree though that additional information about the Trust and its relationship to Anthropic would be very valuable.
Quote from VC Josh Wolfe:
...Biology. We will see an AWS moment where, instead of you having to be a biotech firm that opens your own wet lab or moves into Alexandria Real Estate (which, you know, specializes in hosting biotech companies in all these different regions proximate to academic research centers), you will be able to just take your experiment and upload it to the cloud, where there are cloud-based robotic labs. We funded some of these. There's one company called Stratios.
There's a ton that are gonna come on wave, and this is exciting because
OP gave some reasoning for their views in their recent blog post:
...Another place where I have changed my mind over time is the grant we gave for the purchase of Wytham Abbey, an event space in Oxford.
We initially agreed to help fund that purchase as part of our effort to support the growth of the community working to reduce global catastrophic risks (GCRs). The original idea presented to us was that the space could serve as a hub for workshops, retreats, and conferences, to cut down on the financial and logistical costs of hosting large events at private f
According to the book Bullies and Saints: An Honest Look at the Good and Evil of Christian History, some early Christians sold themselves into slavery so they could donate the proceeds to the poor. Super interesting example of extreme and early ETG.
(I'm listening on audiobook so I don't have the precise page for this claim.)
(To avoid bad-faith misinterpretation: I obviously think that nobody should do the same.)
Longtermist shower thought: what if we had a campaign to install Far-UVC in poultry farms? Seems like it could:
Insofar as one of the main obstacles is humans' concerns for health effects, this would at least only raise these for a small group of workers.
I had a similar thought a (few) year(s) ago and emailed a couple of people to sanity-check the idea; all the experts I asked seemed to think this wouldn't be an effective thing to do (which is why I didn't do any more work on it). I think Alex's points are right (mostly the cost part; I think you could get high enough intensity for it to be effective).
Hi Chelsea. You should probably hire a trusts & estates lawyer to help you better understand your rights with respect to the trust.