I am an attorney in a public-sector position not associated with EA, although I cannot provide legal advice to anyone. My involvement with EA has so far been mostly limited to writing checks to GiveWell and other effective charities in the Global Health space, as well as some independent reading. I have occasionally read the forum and was looking for ideas for year-end giving when the whole FTX business exploded . . .
As someone who isn't deep in EA culture (at least at the time of writing), I may be able to offer a perspective on how the broader group of people with sympathies toward EA ideas might react to certain things. I'll probably make some errors that would be obvious to other people, but sometimes a fresh set of eyes can help bring a different perspective.
I don't understand what a "request to check on deadlines" is. Were they requesting an extension of time to respond to the clarifying questions?
Based on the limited information provided, I would not publish tomorrow. Twenty-four hours is not much time to turn around a response after first seeing a draft article, especially when the particular 24-hour period was not selected in consultation with the charity. Unless there is a breaking-news element to the draft article, I would routinely grant more than 24 hours to comment on it. To me, that is enough to forbear from publishing tomorrow without getting into the lost e-mail issue at all. Opining on how long an extension would be appropriate would require knowing a lot more about the organization, the article, and the clarifying questions.
What do you think would happen at the frontier labs if EAs left their jobs en masse? I understand the view that the newly-departed would be more able to "speak[] out and contribut[e] to real external pressure and regulation." And I understand the view that the leadership isn't listening to safety-minded EAs anyway.
But there are potential negative effects from the non-EAs who would presumably be hired as replacements. On your view, could replacement hiring make things worse at the labs? If so, how do you balance that downside risk against the advantages of departing the labs?
I don't have an opinion on whether Holly is correct that no one should work for the labs. But even for those who disagree, there are some implied hypotheses here that are worth pondering:
If people decide to work in a frontier lab anyway, to what extent can they detect and mitigate the risk of being "frogboiled"?
(I'm open to the response that there are no meaningful detection and/or mitigation techniques.)
I think I want to give (b) partial credit here in general. There may not be much practical difference between partial and full credit where the financial delta between a more altruistic job and a higher-salary job is high enough. But there are circumstances in which it might make a difference.
Without commenting on any specific person's job or counterfactuals, I think it is often true that the person working a lower-paid but more meaningful job secures non-financial benefits not available from the maximum-salary job and/or avoids non-financial sacrifices associated with the maximum-salary job. Depending on the field, these could include lower stress, more free time, more pleasant colleagues, more warm fuzzies / psychological satisfaction, and so on. If Worker A earns 100 currency units doing psychologically meaningful, low-to-optimal-stress work while similarly situated Worker B earns 200 units doing unpleasant work with little in the way of non-monetary benefits, treating the entire 100 units Worker A forwent as spent out of their resource budget on altruistic purposes does not strike a fair balance between Worker A and Worker B.
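To make the partial-versus-full-credit distinction concrete, here is a toy sketch. All the numbers, including the 10% giving norm, are invented for illustration, and this is just one possible way to formalize "credit" for foregone salary, not a standard model:

```python
def expected_donation(salary, max_salary, credit):
    """Expected donation if a fraction `credit` of foregone salary
    counts as altruistic spending. Hypothetical model: the giving
    norm is assessed against maximum earning potential."""
    NORM = 0.10  # assumed 10% giving norm, purely illustrative
    foregone = max_salary - salary
    return max(0.0, NORM * max_salary - credit * foregone)

# Large financial delta (the Worker A/B case): full and partial credit converge.
print(expected_donation(100, 200, credit=1.0))  # 0.0
print(expected_donation(100, 200, credit=0.5))  # 0.0

# Small delta: the choice of credit fraction starts to matter.
print(expected_donation(190, 200, credit=1.0))  # 10.0
print(expected_donation(190, 200, credit=0.5))  # 15.0
```

On this toy model, when the financial delta is high enough, partial and full credit give the same answer; when it is small, they diverge, which is where the distinction might make a practical difference.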
I understand your frustration here, but EAs may have decided it was better to engage in pro-democracy activity in a non-EA capacity. One data point: the pre-eminent EA funder was one of the top ten donors in the 2024 US election cycle.
Or they may have decided that EA wasn't a good fit for this kind of work for any combination of a half-dozen reasons, such as:
I don't think it is necessary to rule out all possible alternative explanations before writing a critical comment. However, if you're going to diagnose what you perceive as the root cause -- "You wanted to settle for the ease of linear thinking" -- I think it's fair for us to ask for either clear evidence or a rule-out of alternative explanations.
As David points out, there have been a number of posts about democracy and elections, such as this analysis of the probability that a flipped vote in a US swing state would flip the presidential election outcome. I recall some discussion of the cost-per-vote as well. There are a lot of potential cause areas out there, and limited evaluative capacity, so I don't think the evaluations being relatively shallow is problematic, and it is inevitable that they did not consider all possible approaches to protecting/improving democracy. I think it's generally okay to place the burden of showing that a cause area warrants further investigation on proponents, rather than expecting the community to do a deep dive for them.
But a practical moral philosophy wherein donation expectations are based on your actual material resources (and constraints), not your theoretical maximum earning potential, seems more justifiable to me.
It's complicated, I think. Based on your distinguishing (a) and (b), I am reading "salary sacrifice" as voluntarily taking less salary than was offered for the position you encumber (as discussed in, e.g., this post). While I agree that should count, I'm not convinced (b) is irrelevant.
The fundamental question to me is about the appropriate distribution of the fruits of one's labors ("fruits") between altruism and non-altruism. (Fruits is an imperfect metaphor, because I mean to include (e.g.) passive income from inherited wealth, but I'll stick with it.)
We generally seem to accept that the more fruit one produces, the more (in absolute terms) it is okay to keep for oneself. Stated differently -- at least for those who are not super-wealthy -- we seem to accept that the marginal altruism expectation for additional fruits one produces is less than 100%. I'll call this the "non-100 principle." I'm not specifically defending that principle in this comment, but it seems to be assumed in EA discourse.
If we accept this principle, then consider someone who was working full-time in a "normal" job earning a salary of 200 apples per year. They decide to go down to half-time (a 100-apple salary) and spend the other half of their working hours producing 100 charitable pears for which they receive no financial benefit.[1] The non-100 principle suggests that it's appropriate for this person to keep more of their apples than a person who works full-time to produce 100 apples (and zero pears). Their total production is twice as high, so they aren't similarly situated to the full-time worker who produces the same number of apples. The decision to take a significantly less well-paid job seems analogous to splitting one's time between remunerative and non-remunerative work. One gives up the opportunity to earn more salary in exchange for greater benefits that flow to others by non-donation means.
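As a rough numerical sketch of the thought experiment -- the brackets and rates below are invented, and any schedule with marginal rates below 100% would make the same point, with pears valued at the for-profit market rate per the note further down:

```python
def expected_giving(total_production):
    """One hypothetical giving schedule consistent with the non-100
    principle: marginal rates below 100%, so the absolute amount one
    may keep rises with total output. Brackets/rates are invented."""
    rate_low, rate_high, bracket = 0.10, 0.30, 100
    if total_production <= bracket:
        return rate_low * total_production
    return rate_low * bracket + rate_high * (total_production - bracket)

# Full-time apple grower: 100 apples, 0 pears.
print(expected_giving(100))                    # 10.0 apples expected

# Half-time split: 100 apples + 100 pears (the pears all flow to charity).
total, already_given = 100 + 100, 100
print(expected_giving(total))                  # 40.0 units expected in total
print(max(0, expected_giving(total) - already_given))  # 0 -> the apples are theirs
```

On this toy schedule, the apple-and-pear producer has already contributed more than the expectation for someone with their total output, which is the intuition behind counting the foregone salary for something.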
I am not putting too much weight on this thought experiment, but it does make me think either that the non-100 principle is wrong or that the foregone salary counts for something in many circumstances, even when it is not a salary sacrifice in the narrower sense.
How to measure pear output is tricky. The market rate for similar work in the for-profit sector may be the least bad estimate here.
That analysis would be more compelling if the focus of the question were on a specific individual or small group. But, at least as I read it, the question is about the giving patterns of a moderately numerous subclass of EAs (working in AI + "earning really well") relative to the larger group of EAs.
I'm not aware of any reason the dynamics you describe would be more present in this subclass than in the broader population. So a question asking about subgroup differences seems appropriate to me.
No -- that something is unsurprising, even readily predictable, does not imply anything about whether it is OK.
The fact that people seem surprised by the presence of corpspeak here does make me concerned that they may have been looking at the world with an assumption that "aligned" people are particularly resistant to the corrosive effects of money and power. That, in my opinion, is a dangerous assumption to make -- and is not one I would find well-supported by the available evidence. Our models of the world should assume that at least the significant majority of people will be adversely and materially influenced by exposure to high concentrations of money and power, and we need to plan accordingly.
It's a DAF, which makes grants based on donor recommendations to other tax-exempt charities. Since the charities to which it funnels money are themselves eligible to receive tax-deductible donations, shutting down the DAFs wouldn't prevent donors from directly funding the organizations you find problematic. And the First Amendment prevents us from denying tax-exempt status to charities because we find their message odious or threatening.