Jason

17,618 karma · Working (15+ years)

Bio

I am an attorney in a public-sector position not associated with EA, although I cannot provide legal advice to anyone. My involvement with EA has so far been mostly limited to writing checks to GiveWell and other effective charities in the Global Health space, as well as some independent reading. I had occasionally read the Forum and was looking for ideas for year-end giving when the whole FTX business exploded . . .

How I can help others

As someone who isn't deep in EA culture (at least at the time of writing), I may be able to offer a perspective on how the broader group of people with sympathies toward EA ideas might react to certain things. I'll probably make some errors that would be obvious to other people, but sometimes a fresh set of eyes can help bring a different perspective.

Posts: 2 · Topic contributions: 2

Comments (2072)

It's a DAF, which makes grants based on donor recommendations to other tax-deductible charities. Since the charities to which it funnels money are themselves tax-deductible, shutting down the DAFs wouldn't prevent donors from funding the organizations you find problematic directly. And the First Amendment prevents us from denying tax-exempt status to charities because we find their message odious or threatening.

I don't understand what a "request to check on deadlines" is. Were they requesting an extension of time to respond to the clarifying questions?

Based on the limited information provided, I would not publish tomorrow. Twenty-four hours is not much time to turn around a response after first seeing the draft article, especially when the particular 24-hour period was not selected in consultation with the charity. Unless there is a breaking-news element to the draft article, I would routinely grant more than 24 hours to comment on it. To me, that is enough reason to forbear from publishing tomorrow without getting into the lost e-mail issue at all. Opining on how long an extension would be appropriate would require knowing a lot more about the organization, the article, and the clarifying questions.

What do you think would happen at the frontier labs if EAs left their jobs en masse? I understand the view that the newly-departed would be more able to "speak[] out and contribut[e] to real external pressure and regulation." And I understand the view that the leadership isn't listening to safety-minded EAs anyway. 

But there are potential negative effects from the non-EAs who would presumably be hired as replacements. On your view, could replacement hiring make things worse at the labs? If so, how do you balance that downside risk against the advantages of departing the labs?

I don't have an opinion on whether Holly is correct that no one should work for the labs. But even for those who disagree, there are some implied hypotheses here that are worth pondering:

  • People systematically underestimate how much money, power, and prestige will influence their beliefs and their judgment.
  • People systematically overestimate how much influence they have on others and underestimate how much influence others have on them. Editorializing on my own, I suspect that almost everyone thinks of themselves as a net influencer, but the net amount of influence (influence on others minus influence by others) summed across a system seemingly has to be zero.

If people decide to work in a frontier lab anyway, to what extent can they mitigate the risk of being "frogboiled" by

  • having a plan to evaluate -- as objectively as possible -- whether they are being influenced in the ways Holly describes (what would this look like?);
  • living well beneath the AI-lab salary and chipmunking away most of the excess, reducing the risk that they will feel psychological pressure to continue with a lab to maintain their standard of living;
  • going out of their way to ensure enough of their social lives / support is independent of the lab, so that their desire to maintain that support will not lead them to stay with the lab if that no longer seems wise;
  • publicly committing to yellow lines that would trigger serious consideration of reversing course, and red lines at which they pre-commit to doing so;
  • or something else?

(I'm open to the response that there are no meaningful detection and/or mitigation techniques.)

I think I want to give (b) partial credit here in general. There may not be much practical difference between partial and full credit where the financial delta between a more altruistic job and a higher-salary job is high enough. But there are circumstances in which it might make a difference.

Without commenting on any specific person's job or counterfactuals, I think it is often true that the person working a lower-paid but more meaningful job secures non-financial benefits not available from the maximum-salary job and/or avoids non-financial sacrifices associated with the maximum-salary job. Depending on the field, these could include lower stress, more free time, more pleasant colleagues, more warm fuzzies / psychological satisfaction, and so on. If Worker A earns 100 currency units doing psychologically meaningful, low-to-optimal-stress work while similarly situated Worker B earns 200 units doing unpleasant work with little in the way of non-monetary benefits, treating the entire 100 units Worker A forwent as spent out of their resource budget on altruistic purposes does not strike a fair balance between Worker A and Worker B.

I understand your frustration here, but EAs may have decided it was better to engage in pro-democracy activity in a non-EA capacity. One data point: the pre-eminent EA funder was one of the top ten donors in the 2024 US election cycle.

Or they may have decided that EA wasn't a good fit for this kind of work for any combination of a half-dozen reasons, such as:

  • the EA brand could be ill-suited or even detrimental to this kind of work, either because of FTX or because of its association with a tech billionaire who made a lot of money on a platform that many believe to be corrosive of democracy;
  • the EA "workforce" isn't well suited to this kind of work;
  • there are plenty of actors working in these spaces already, and there was no great reason to think that EAs would be more effective than those actors;
  • being seen as too political would impose heavy costs on other EA cause areas, especially AI policy -- and "anti-authoritarian" is not non-partisan in 21st century America.

I don't think it is necessary to rule out all possible alternative explanations before writing a critical comment. However, if you're going to diagnose what you perceive as the root cause -- "You wanted to settle for the ease of linear thinking" -- I think it's fair for us to ask for either clear evidence or a rule-out of alternative explanations.

As David points out, there have been a number of posts about democracy and elections, such as this analysis of the probability that a flipped vote in a US swing state would flip the presidential election outcome. I recall some discussion of the cost-per-vote as well. There are a lot of potential cause areas out there and limited evaluative capacity, so I don't think it's problematic that the evaluations were relatively shallow. It was inevitable that they did not consider every possible approach to protecting or improving democracy. I think it's generally okay to place the burden of showing that a cause area warrants further investigation on proponents, rather than those proponents expecting the community to do a deep dive for them.

But a practical moral philosophy wherein donation expectations are based on your actual material resources (and constraints), not your theoretical maximum earning potential, seems more justifiable to me.

It's complicated, I think. Based on your distinguishing (a) and (b), I am reading "salary sacrifice" as voluntarily taking less salary than was offered for the position you encumber (as discussed in, e.g., this post). While I agree that should count, I'm not convinced (b) is irrelevant.

The fundamental question to me is about the appropriate distribution of the fruits of one's labors ("fruits") between altruism and non-altruism. (Fruits is an imperfect metaphor, because I mean to include (e.g.) passive income from inherited wealth, but I'll stick with it.) 

We generally seem to accept that the more fruit one produces, the more (in absolute terms) it is okay to keep for oneself. Stated differently -- at least for those who are not super-wealthy -- we seem to accept that the marginal altruism expectation for additional fruits one produces is less than 100%. I'll call this the "non-100 principle." I'm not specifically defending that principle in this comment, but it seems to be assumed in EA discourse.

If we accept this principle, then consider someone who was working full-time in a "normal" job and earning a salary of 200 apples per year. They decide to go down to half-time (100-apple salary) and spend the other half of their working hours producing 100 charitable pears for which they receive no financial benefit.[1] The non-100 principle suggests that it's appropriate for this person to keep more of their apples than a person who works full-time to produce 100 apples (and zero pears). Their total production is twice as high, so they aren't similarly situated to the full-time worker who produces the same number of apples. The decision to take a significantly less well-paid job seems analogous to splitting one's time between remunerative and non-remunerative work. One gives up the opportunity to earn more salary in exchange for greater benefits that flow to others by non-donation means.

I am not putting too much weight on this thought experiment, but it does make me think that either the non-100 principle is wrong, or that the forgone salary counts for something in many circumstances even when it is not a salary sacrifice in the narrower sense.

  1. ^ How to measure pear output is tricky. The market rate for similar work in the for-profit sector may be the least bad estimate here.
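
To make the comparison concrete, here is a rough sketch in Python. The marginal keep-rate schedule (keep 70% of the first 100 fruits, 50% of anything above that) is made up purely for illustration -- nothing in the argument depends on those particular numbers, only on the marginal rate being positive.

```python
# Illustrative only: a made-up marginal "keep rate" schedule consistent with
# the non-100 principle (the marginal altruism expectation on additional
# fruits is less than 100%, so the keep rate on extra production is positive).
KEEP_SCHEDULE = [(100, 0.70), (float("inf"), 0.50)]  # (bracket size, keep rate)

def kept_fruits(total_production: float) -> float:
    """How many fruits it seems 'okay' to keep, given total production
    (apples + pears), under the assumed schedule."""
    kept, remaining = 0.0, total_production
    for bracket, rate in KEEP_SCHEDULE:
        take = min(remaining, bracket)
        kept += take * rate
        remaining -= take
        if remaining <= 0:
            break
    return kept

workers = {
    "full-time, 100 apples, 0 pears": (100, 0),
    "half-time, 100 apples, 100 pears": (100, 100),
    "full-time, 200 apples, 0 pears": (200, 0),
}

for label, (apples, pears) in workers.items():
    total = apples + pears
    kept = kept_fruits(total)
    # Pears can't be kept as cash, so the expected cash donation is the
    # apple income minus the keep allowance (floored at zero).
    expected_donation = max(apples - kept, 0)
    print(f"{label}: total={total}, keep allowance={kept:.0f}, "
          f"expected cash donation={expected_donation:.0f}")
```

Under this made-up schedule, the full-time 100-apple worker would still be expected to donate 30 apples, while the half-time worker's keep allowance (120) already exceeds their cash income -- which is the intuition that the forgone salary should count for something.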

Tagging @Austin since his comments are the main focus of this quick take.

That analysis would be more compelling if the focus of the question were on a specific individual or small group. But, at least as I read it, the question is about the giving patterns of a moderately numerous subclass of EAs (working in AI + "earning really well") relative to the larger group of EAs. 

I'm not aware of any reason the dynamics you describe would be more present in this subclass than in the broader population. So a question asking about subgroup differences seems appropriate to me.

No -- that something is unsurprising, even readily predictable, does not imply anything about whether it is OK.

The fact that people seem surprised by the presence of corpspeak here does make me concerned that they may have been looking at the world with an assumption that "aligned" people are particularly resistant to the corrosive effects of money and power. That, in my opinion, is a dangerous assumption to make -- and not one I would find well-supported by the available evidence. Our models of the world should assume that at least a significant majority of people will be adversely and materially influenced by exposure to high concentrations of money and power, and we need to plan accordingly.
