Owen_Cotton-Barratt's Comments

Do impact certificates help if you're not sure your work is effective?

This use-case for impact certificates isn't predicated on trusting the market more than yourself (although that might be a nice upside). It's more like a facilitated form of moral trade: people with different preferences about what altruistic work happens all end up happier, because each can work on the things they can make most progress on rather than the things they personally want to bet on. (There are some reasons to be sceptical about how often this will actually be a good trade, since there can be significant comparative advantage in working on a project you believe in, from both motivation and having a clear sense of the goals; however, I expect that at least some of the time there would be good trades.)

On your second concern, I think that working in this way should basically be seen as a special case of earning to give. You're working for an employer whose goals you don't directly believe in because they will pay you a lot (in this case in impact certificates), which you can use to further things you do believe in. Sure, there's a small degree to which people might interpret your place of work as an endorsement, but I don't think this is one of the principal factors feeding into our collective epistemic processes (particularly since you can explicitly disavow it; and in a world where this happens often, others may be aware of the possibility even before disavowal), so I wouldn't give it too much weight in the decision.

Are we living at the most influential time in history?

I appreciate your explicitly laying out issues with the Laplace prior! I found this helpful.

The approach to picking a prior here which I feel least uneasy about is something like: "take a simplicity-weighted average over different generating processes for distributions of hinginess over time". This gives a mixture with some weight on uniform (very simple), some weight on monotonically-increasing and monotonically-decreasing functions (also quite simple), some weight on single-peaked and single-troughed functions (disproportionately with the peak or trough close to one end), and so on…
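
A minimal sketch of what such a mixture could look like (all of the families, weights, and parameters below are invented placeholders for illustration, not a claim about the right ones):

```python
import math
import random

T = 1000  # centuries under consideration

def uniform_curve():
    # Flat hinginess; tiny jitter so the argmax is uniformly random.
    return [1.0 + random.random() * 1e-9 for _ in range(T)]

def monotone_curve():
    # Monotonically increasing or decreasing, with equal chance of each.
    rate = random.uniform(0.0, 0.01)
    curve = [(1.0 + rate) ** t for t in range(T)]
    return curve if random.random() < 0.5 else curve[::-1]

def single_peaked_curve():
    # One peak, placed disproportionately close to one end.
    u = random.random() ** 3  # concentrates near 0
    peak = int(u * (T - 1)) if random.random() < 0.5 else int((1 - u) * (T - 1))
    width = random.uniform(T / 20, T / 2)
    return [math.exp(-abs(t - peak) / width) for t in range(T)]

# Simpler generating processes get more of the prior weight.
FAMILIES = [(0.40, uniform_curve), (0.35, monotone_curve), (0.25, single_peaked_curve)]

def sample_curve():
    r, acc = random.random(), 0.0
    for weight, family in FAMILIES:
        acc += weight
        if r < acc:
            return family()
    return FAMILIES[-1][1]()

def prob_hingiest_in_first(k, n_samples=5000):
    """Estimate P(the hingiest century is among the first k of T)."""
    hits = 0
    for _ in range(n_samples):
        curve = sample_curve()
        if curve.index(max(curve)) < k:
            hits += 1
    return hits / n_samples

print(prob_hingiest_in_first(10))
```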

If we assume a big future, and you just told me the number of people in each generation, I think my prior might be something like: 20% that the most hingey moment was in the past, 1% that it is in the next 10 centuries, and the rest after that. After I notice that hinginess is about influence, and that causality gives a time asymmetry favouring early times, I think I might update to >50% that it was in the past, and 2% that it will be in the next 10 centuries.

(I might start with some similar prior about when the strongest person lives, but once I begin to understand something about strength, the generating mechanisms which suggest that the strongest people would come early, with everything diminishing thereafter, seem very implausible, so I would update down a lot on that.)

'Longtermism'

I think some of the differences in opinion about what the definition should be may be arising because there are several useful but distinct concepts:
A) an axiological position about the value of future people (as in Hilary's suggested minimal definition)
B) a decision-guiding principle for personal action (as I proposed in this comment)
C) a political position about what society should do (as in your suggested minimal definition)

I think it's useful to have terms for each of these. There is a question about which, if any, should get to claim "longtermism".

I think that for use A), precision matters more than catchiness. I like Holly's proposal of "temporal cosmopolitanism" for this.

To my mind B) is the meaning that aligns most closely with the natural-language use of "longtermism". So my starting position is that it should get use of the term. If there were a strong reason for it not to, I suppose you could call it e.g. "being guided by long-term consequences".

I think there is a case to be made that C) is the one that operates in the political sphere, and which would therefore make best use of the catchy term. I do think that if "longtermism" is to refer to the political position, it would be helpful to make it as unambiguous as possible that it is a political position. This could perhaps be achieved by making "longtermist" an informal short form of the more proper "member of the longtermism movement". Overall, though, I feel uncompelled by this case, and like just using "longtermist" for the thing it sounds most like — which in my mind is B).

'Longtermism'

I'm uneasy about loading empirical claims about how society is doing into the definition of longtermism (mostly part (ii) of your definition). This is mostly from wanting conceptual clarity: wanting to be able to talk about what's good for society to do separately from what it's already doing.

An example where I'm noticing the language might be funny: I want to be able to talk about a hypothetical longtermist society, perhaps one we aspire to, where essentially everyone is on board with longtermism. But if the definition is society-relative this is really hard to do. I might say that I think longtermism is true, but that we should try to get more and more people to adopt longtermism; then longtermism will become false, and we won't actually want people to be longtermists any more — though we would still want them to be longtermist about 2019.

I think this happens because "longtermism" doesn't really sound like it's about a problem, so our brains don't want to parse it that way.

How about a minimal definition which tries to dodge this issue:
> Longtermism is the view that the very long-term effects of our actions should be a significant part of our decisions about what to do today
?

(You gesture at this in the post, saying "I think the distinctive idea that we should be trying to capture is the idea of trying to promote good long-term outcomes"; I agree, and prefer to just build the definition around that.)

'Longtermism'

I agree with the sentiment that clause (i) is stronger than it needs to be. I don't really think this is because it would be good to include other well-specified positions like exponential discounting, though. It's more that it's taking a strong position, and that position isn't necessary for the work we want the term to do. On the other hand I also agree that "nonzero" is too weak. Maybe there's a middle ground using something like the word "significant"?

[For my own part intellectual honesty might make me hesitate before saying "I agree with longtermism" with the given definition — I think it may well be correct, but I'm noticeably less confident than I am in some related claims.]

“Just take the expected value” – a possible reply to concerns about cluelessness

> By now it should be clear that simply following the expected value is not a sufficient response to concerns of cluelessness.

I was pretty surprised by this sentence. Maybe you could say more precisely what you mean?

I take the core concern of cluelessness to be that perhaps we have no information about which options are best. Expected value gives a theoretical out to that (with some unresolved issues around infinite expectations for actors with unbounded utility functions). The approximations to expected value that humans can actually implement are, as you point out, kind of messy and opaque; but that's a feature of human reasoning in general, and doesn't seem particularly tied to expected value. Is that what you're pointing at?
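
(To make that parenthetical concrete, a standard St. Petersburg-style illustration; the gamble and the particular bounded utility function below are generic textbook choices, not anything from the post:)

```python
import math

# Payoff 2**n with probability 2**(-n), for n = 1, 2, 3, ...
# With unbounded (linear) utility the partial expectations grow without
# bound; with a bounded utility function they converge.

def partial_expectation(utility, n_terms):
    return sum(2 ** -n * utility(2 ** n) for n in range(1, n_terms + 1))

linear = lambda x: x                        # unbounded utility
bounded = lambda x: 1 - math.exp(-x / 100)  # bounded utility, saturates at 1

for n in (10, 20, 40):
    print(n, partial_expectation(linear, n), partial_expectation(bounded, n))
```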

Announcing the 2017 donor lottery

I don't quite have an algorithm in mind for this. I think in practice it would likely be easy to find workable ways of dividing tickets, but perhaps one would want something more fully specified first.

With a well-specified algorithm and an understanding that it was well-behaved, one could imagine shrinking the block size right down to give people flexibility over their lottery size and reduce the liability of the guarantor. There is perhaps an advantage to having a canonical size for developing buy-in to the idea, though.

Announcing the 2017 donor lottery

A simple variation on the current system would allow people to opt into lottery-ing up further (to the scale of the total donor lottery pot):

Ask people what scale they would like to lottery to. If $100k, allocate them a range of tickets in one block as in the current system. If (say) $300k, split their tickets between three blocks, giving them the same range in each block. If their preferred scale exceeds the total pot, just give them correlated tickets on all available blocks.

If there's a conflict of preferences between people wanting small and large lotteries, such that they're not simultaneously satisfiable (I think this is somewhat unlikely in practice, unless someone comes in with $90k hoping to lottery up to $100k), first satisfy those who want smaller totals, then divide the rest as fairly as possible.
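
A hypothetical sketch of that allocation, assuming a single winning number shared by every block (that shared draw is what makes "the same range in each block" produce correlated wins); the names and amounts are invented:

```python
import random

BLOCK = 100_000  # canonical block size in dollars, as in the current system

def allocate(entries, n_blocks):
    """entries: list of (name, contribution, desired_scale) tuples.
    Returns {name: (start, end, k)}: a half-open ticket range held at the
    same offset in each of the first k blocks. Smaller desired scales are
    satisfied first, per the conflict rule above."""
    allocations = {}
    cursor = 0.0
    for name, contribution, scale in sorted(entries, key=lambda e: e[2]):
        k = min(n_blocks, max(1, round(scale / BLOCK)))  # blocks to span
        size = contribution / k  # range size per block, preserving expected value
        allocations[name] = (cursor, cursor + size, k)
        cursor += size
    return allocations

def draw(allocations):
    winning = random.uniform(0, BLOCK)  # one number applied to all blocks
    for name, (start, end, k) in allocations.items():
        if start <= winning < end:
            print(f"{name} wins ${k * BLOCK:,}")

# E.g. A wants the standard $100k lottery; B wants to lottery up to $300k.
alloc = allocate([("A", 20_000, 100_000), ("B", 30_000, 300_000)], n_blocks=3)
draw(alloc)
```

In this example B holds a $10k range in each of three blocks, so wins $300k with probability 10%, preserving the $30k expected value.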

Announcing the 2017 donor lottery

> We think that it's in the spirit of the lottery that someone who does useful research that would be of interest to other donors should publish it (or give permission for CEA to publish their grant recommendation). Also, if they convince others to donate then they'll be causing additional grants to go to their preferred organization(s). We'll strongly encourage winners to do so; however, in the interests of keeping the barriers to entry low, we haven't made it a hard requirement.

It seems like even strong social pressure, short of a hard requirement, might constitute a significant barrier to entry. I feel excited about entering a donor lottery, and would feel less excited if I thought I'd feel accountable if I won (I might still enter, but it seems like a significant cost).

Would an attitude of "we think it's great if you want to share (and we could help you with communication), but there's no social obligation" capture the benefits? That's pretty close to what you were saying already, but the different tone might be helpful for some people.

Causal Network Model III: Findings

Thanks for the write-up!

I found the figures for existential-risk-reduced-per-$ with your default values a bit suspiciously high. I wonder if the reason for this is in endnote [2], where you say:

> say one researcher year costs $50,000

I think this is too low as the figure to use in this calculation, perhaps by around an order of magnitude.

Firstly, that is a very cheap researcher-year even just counting direct costs: many researcher salaries are straight-up higher, and the full cost should include overheads.

A second factor is that having twice as much money doesn't come close to buying you twice as much (quality-adjusted) research: in general it is hard to simply pay money to produce more of these specialised forms of labour. For instance, see the recent 80,000 Hours survey of EA orgs' willingness to pay to bring hires forward, where the average willingness to forgo donations to move a senior hire forward by three years was around $4 million.
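
As a rough back-of-the-envelope (every figure below is an invented placeholder except the endnote's $50k and the ~$4M survey number cited above):

```python
naive_cost = 50_000          # the endnote's cost per researcher-year

salary = 80_000              # a more typical researcher salary (assumption)
overhead_multiplier = 1.6    # office, ops, employer costs, etc. (assumption)
direct_cost = salary * overhead_multiplier  # ~$128k per researcher-year

# The survey figure: ~$4M of forgone donations to move a senior hire
# forward by three years, i.e. roughly $1.3M per senior-researcher-year.
survey_implied = 4_000_000 / 3

print(direct_cost)                  # ~2-3x the endnote's figure
print(survey_implied / naive_cost)  # ~25x for senior hires
```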
