Thomas Kwa

Student at Caltech. I help run Caltech EA.

Comments

The EA movement’s values are drifting. You’re allowed to stay put.

Sure, here's the ELI12:

Suppose that there are two billionaires, April and Autumn. Originally they were funding AMF because they thought working on AI alignment would be 0.01% likely to work and solving alignment would be as good as saving 10 billion lives, which is an expected value of 1 million lives, lower than you could get by funding AMF.

After being in the EA community a while they switched to funding alignment research for different reasons.

  • April updated upwards on tractability. She thinks research on AI alignment is 10% likely to work, and solving alignment is as good as saving 10 billion lives.
  • Autumn now buys longtermist moral arguments. Autumn thinks research on AI alignment is 0.01% likely to work, and solving alignment is as good as saving 10 trillion lives.

Both of them assign the same expected utility to alignment -- 1 billion lives -- so they will make the same decisions. Even though April made an epistemic update and Autumn a moral update, we cannot distinguish them from behavior alone.

This extends to a general principle: actions are driven by a combination of your values and subjective probabilities, and any given action is consistent with many different combinations of utility function and probability distribution.
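A toy calculation (a sketch using the numbers above, not anything from the original post) makes this concrete: two very different probability/utility pairs yield the same expected value, and therefore the same decision.

```python
# Expected value of funding alignment research, in lives, for each
# (probability research succeeds, value of success) pair described above.
scenarios = {
    "Original view": (0.0001, 10e9),   # 0.01% x 10 billion lives  -> 1 million
    "April":         (0.10,   10e9),   # 10%    x 10 billion lives -> 1 billion
    "Autumn":        (0.0001, 10e12),  # 0.01%  x 10 trillion lives -> 1 billion
}

for name, (p_success, value_if_solved) in scenarios.items():
    print(f"{name}: EV = {p_success * value_if_solved:,.0f} lives")

# April and Autumn compute the same EV (1,000,000,000 lives), so an outside
# observer can't tell the epistemic update apart from the moral one.
```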

As a second example, suppose Bart is an investor who makes risk-averse decisions (say, he invests in bonds rather than stocks). He might do this for either of two reasons:

  1. He would get a lot of disutility from losing money (maybe it's his retirement fund)
  2. He irrationally believes the probability of losing money is higher than it actually is (maybe he is biased because he grew up during a financial crash).

These different combinations of probability and utility produce the same risk-averse behavior. In fact, probability and utility are so interchangeable that professional traders -- just about the most calibrated, rational people with regard to the probability of losing money, and who are risk-averse only for reason (1) -- often model financial products as if losing money is more likely than it actually is, because it makes the math easier.
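Here's a minimal sketch with made-up numbers (the stock/bond payoffs are mine, not from any source) showing how reason (1) and reason (2) lead to the same choice:

```python
# Stock: gains $50 or loses $40, each with true probability 0.5 (EV = +$5).
# Bond: pays $2 for certain.
stock = [(0.5, 50), (0.5, -40)]
bond = [(1.0, 2)]

def expected_utility(lottery, utility=lambda x: x):
    return sum(p * utility(x) for p, x in lottery)

# Risk-neutral, true probabilities: the stock wins (5 > 2).
print(expected_utility(stock), expected_utility(bond))

# Reason (1): losses hurt 2.5x as much as equal gains feel good -> the bond wins (-25 < 2).
loss_averse = lambda x: x if x >= 0 else 2.5 * x
print(expected_utility(stock, loss_averse), expected_utility(bond, loss_averse))

# Reason (2): linear utility, but the loss probability is (irrationally) believed to be 0.8 -> the bond wins (-22 < 2).
pessimistic_stock = [(0.2, 50), (0.8, -40)]
print(expected_utility(pessimistic_stock), expected_utility(bond))
```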

The EA movement’s values are drifting. You’re allowed to stay put.

Maybe related: even for ideal expected utility maximizers, values and subjective probabilities are impossible to disentangle by observing behavior. So it's not always easy to tell which changes are value drift and which are epistemic updates.

St. Petersburg Demon – a thought experiment that makes me doubt Longtermism

But if a random variable is 0 with probability measure 1 and is undefined with probability measure 0, we can't just say it's identical to the zero random variable or that it has expected value zero (I think, happy to be corrected with a link to a math source).

The definition of expected value is E[X] = ∫ X dP, which here can be computed as an ordinary integral. If the set of discontinuities of a function has measure zero, then it is still Riemann integrable. So the expected value exists even though the variable is not identical to the zero random variable, and it equals zero. In the general case you have to use measure theory, but I don't think it's needed here.
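For completeness, the measure-theoretic version of the same step is also short: if X = 0 everywhere outside a null set N (and is undefined on N), then

$$E[X] = \int_{\Omega \setminus N} X \, dP = \int_{\Omega \setminus N} 0 \, dP = 0.$$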

Also, there's no reason our intuitions about the goodness of the infinite sequence of bets have to match the expected value.

St. Petersburg Demon – a thought experiment that makes me doubt Longtermism

I don't have a confident opinion about the implications for longtermism, but from a purely mathematical perspective, this is an example of the following fact: the EV of the limit of an infinite sequence of policies (say yes to all bets; EV = 0) doesn't necessarily equal the limit of the EVs of each policy (no; yes, no; yes, yes, no; ...; EV goes to infinity).

In fact, either or both quantities need not converge. Suppose that bet 1 is worth -$1, bet 2 is worth +$2, and in general bet k is worth (-1)^k · $k, and you must either accept all bets or reject all bets. The EV of rejecting all bets is zero. The limit of the EV of accepting the first k bets is undefined. The EV of accepting all bets depends on the distribution of outcomes of each bet and might also diverge.
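A quick sketch (using the (-1)^k · $k values above) shows the partial-policy EVs oscillating with growing amplitude, so they have no limit:

```python
# EV of the policy "accept bets 1..k", where bet i is worth (-1)^i * i dollars in expectation.
def partial_ev(k: int) -> int:
    return sum((-1) ** i * i for i in range(1, k + 1))

print([partial_ev(k) for k in range(1, 11)])
# [-1, 1, -2, 2, -3, 3, -4, 4, -5, 5] -- no limit as k grows.
```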

The intuition I get from this is that infinity is actually pretty weird. The idea that if you accept 1 bet, you should accept infinitely many identical bets should not necessarily be taken as an axiom.

You should join an EA organization with too many employees

You can get it from log returns to labor. If impact is k*log(labor) for for-profit firms and 50*k*log(labor) for altruistic firms, the altruistic firms will buy 50x the labor before marginal returns diminish to the same level. I'm not sure this is the right model for companies though.
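To spell out the marginal-returns step (assuming, as a simple framing of my own, that each firm hires until the marginal impact of labor falls to a common wage w):

$$\frac{d}{dL}\,k\log L = \frac{k}{L} = w \;\Rightarrow\; L_{\text{for-profit}} = \frac{k}{w}, \qquad \frac{d}{dL}\,50k\log L = \frac{50k}{L} = w \;\Rightarrow\; L_{\text{altruistic}} = \frac{50k}{w} = 50\,L_{\text{for-profit}}.$$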

You should join an EA organization with too many employees

Yeah, I think you also have to assume that charities want something other than to create surplus value in the way for-profit companies do. Suppose not -- say there are altruistic and for-profit firms, and altruistic and selfish labor, and the altruists want to maximize total surplus created. I'm not an economist, but I think the equilibrium looks like:

  • altruistic firms are better buyers of selfish labor; therefore the price of labor goes up
  • altruistic employees work for the most altruistically efficient firms, whether they're altruistic or for-profit. For-profit firms are less altruistically efficient because they're optimizing their profit, not total surplus. So they shift to partially optimizing for altruistic value to attract better talent in the face of higher labor prices.

Some potential lessons from Carrick’s Congressional bid

What's the proportion of Hispanic people in OR-6? Based on the county data I'd guess it's close to the national average of 18.7%. Someone should probably compute this.

"Big tent" effective altruism is very important (particularly right now)

big tent doesn’t mean actively increasing reach. Big tent means encouraging and showcasing the diversity that exists within the community so that people can see that we’re committed to the question of “how can we do the most good” not a specific set of answers.

Thanks, this clears up a lot for me.

"Big tent" effective altruism is very important (particularly right now)

Thanks, I made an edit to weaken the wording.

I mostly wanted to point out a few characteristics of applause lights that I thought matched:

  • the proposed actions are easier to cheer for on a superficial level
  • arguing for the opposite is difficult, even if it might be correct: "Avoid coming across as dogmatic, elitist, or out-of-touch" inverts to "be okay with coming across as dogmatic, elitist, or out-of-touch"
  • when you try to put them into practice, the easy changes you can make don't address fundamental difficulties, and making sweeping changes has high cost

Looking over it again, saying they are applause lights is saying that the recommendations are entirely vacuous, which is a pretty serious claim I didn't mean to make.

"Big tent" effective altruism is very important (particularly right now)

First off, note that my comment was based on a misunderstanding of "big tent" as "big movement", not "broad spectrum of views".

Correct me if I'm wrong in my interpretation here, but it seems like you are modelling impact on a unidimensional scale, as though there is always an objective answer that we know with certainty when asked 'is X or Y more impactful'? 

As Linch pointed out, there are three different questions here (and there's a 4th important one):

  1. Whether impact can be collapsed to a single dimension when doing moral calculus.
  2. Whether morality is objective
  3. Whether we have the predictive prowess to know with certainty ahead of time which actions are more impactful
  4. Whether we can identify groups of people to invest in, given the uncertainty we have

Under my moral views, (1) is basically true. (2) is false: I don't think morality is objective. (3) is clearly false. But the important point is that (3) is not necessary to put actions on a unidimensional scale, because we should be maximizing expected utility with respect to our current best guess. This is consistent with worldview diversification, which can be justified by unidimensional consequentialism in two ways: maximizing EV under high uncertainty and diminishing returns, and acausal trade / veil-of-ignorance arguments. Of course, we should be calibrated as to how confident we are in our current best guess about cause areas and approaches.

There is a tail of talented people who will make the most impact, and any diversion of resource towards less talented people will be lower expected value.

I would state my main point as something like "Many of the points in the OP are easy to cheer for, but do not contain the necessary arguments for why they're good, given that they have large costs." I do believe that there's a tail of talented and dedicated people who will make much more impact than others, but I don't think the second half follows; any reallocation of resources requires weighing costs against benefits.

Here are some things I think we agree on:

  • Money has low opportunity cost, so funding community-building at a sufficiently EA-aligned synagogue seems great if we can find one.
  • Before deciding that top community-builders should work at a synagogue, we should make sure it's the highest EV thing they could be doing (taking into account uncertainty and VOI). Note there are other high-VOI things to do, like trying to go viral on TikTok or starting EA groups at top universities in India and Brazil.
  • We can identify certain groups of people who will pretty robustly have higher expected impact (again where "expected" takes into account our uncertainty over what paths are best): people with higher engagement (able to make career changes) and higher intelligence and conscientiousness.
  • Putting some resources towards less talented/committed people is good given some combination of uncertainty and neglectedness/VOI, and it's unclear where to put the marginal resource.