AMA: Owen Cotton-Barratt, RSP Director

by Owen_Cotton-Barratt · 1 min read · 28th Aug 2020 · 80 comments


Ask Me Anything · Research methods · Org Strategy · Research Training Programs

I'm planning to spend time on the afternoon (UK time) of Wednesday 2nd September answering questions here (though I may get to some sooner). Ask me anything!

A little about me:


Does FHI or the RSP have a relatively explicit, shared theory of change? Do different people have different theories of change, but these are still relatively explicit and communicated between people? Is it less explicit than that? 

Whichever is the case, could you say a bit about why you think that's the case?

For RSP, I think that:

  • In starting RSP, I had an implicit theory of change in my head
    • There are quite a few facets of this (mechanisms for value produced, a continuum of hypotheses, etc.)
    • One important facet (particularly for early-RSP) was a sense of "pretty sure there's significant value available via something in this vicinity, let's try it and see if we can home in"
  • I explicitly share and communicate parts of this model to the extent that it's accessible for me to do so
    • This involved some conversations with people before RSP started, presenting some thoughts to the research scholars as the programme started, and periodically returning to it
  • As RSP has developed and other people have become major stakeholders, they've developed their own implicit theories of change
    • We make some space to discuss these / exchange models
  • As RSP matures, it will make more sense to pin down a theory of change and have it explicit and shared
    • The facet of "let's work out what here is good" will naturally diminish, and we'll work out which other facets are best to lean on

Some general thoughts:

  • Advantages of having an explicit theory of change:
    • Makes it easier to sync up about direction/priorities/reasons for d
... (read more)
MichaelA (8mo): Thanks for that detailed answer! I've quoted the part from "Some general thoughts:" to the second-last paragraph in a new comment on my earlier question post Do research organisations make theory of change diagrams? Should they? [https://forum.effectivealtruism.org/posts/LgLYCGCs8Nji3oEWj/do-research-organisations-make-theory-of-change-diagrams] (I flagged that you weren't talking about ToC diagrams in particular.) Hope that's ok.
Misha_Yagudin (8mo): A related question: what fraction of your and RSP's impact do you expect to come from direct work and what fraction from community/field-building? E.g.:

  • When working on a paper, does the value come from field-building or from a small personal chance of, say, coming up with a crucial consideration?
  • Will most of RSP's value come from direct work done by scholars, or from scholars [and the programme] indirectly influencing other people/organizations? [I would count consulting policy-makers as direct work.]
Misha_Yagudin (8mo): Oh, even better! In your What Does (and Doesn’t) AI Mean for Effective Altruism? [https://www.effectivealtruism.org/articles/what-does-and-doesnt-ai-mean-for-effective-altruism-owen-cotton-barratt/], slide four speaks about different timelines: immediate (~5 years), this generation (~15), next-generation (~40), distant (~100). Which timelines are you optimizing RSP for?
Owen_Cotton-Barratt (8mo): Of these, I think RSP is most aiming at "next-generation", with "this generation" a significant secondary target.
Owen_Cotton-Barratt (8mo): This question doesn't quite feel right to me. I think that when working on a paper I normally have an idea of what insights I want it to convey. The value might be in field-building, or the direct value of disseminating that insight (not counting its spillover to field-building). Work that might find crucial insights feels like it happens before the paper-writing stage. I try to spend some time in that mode.
Misha_Yagudin (8mo): Yeah, on reflection the framing of "working on a paper" is not quite right. So let me be more specific:

  • Prospecting for Gold's impact comes from promoting a certain established way of thinking [≈ econ 101 and ITN] within the EA community and, unclear if intended or not, also providing local communities with an excellent discussion topic.
  • The expected value of cost-effectiveness of research seems to be dominated by chances of stumbling on considerations for the EA researchers, GiveWell, 80K's career recommendations, etc.
  • The impact of work on moral uncertainty seems to primarily come from field-building. Doing EA-relevant research within a prestigious branch of philosophy increases the odds that more pressing EA questions will be addressed by the next generation of academics.

There are other potential reasons to do research; say, one might prefer to fully concentrate on mentoring but need to do research for the second-order effects: having prestige for hiring; having scholars' respect for better mentorship; having fresh meta-cognitive observations to empathize with mentees for better advising.

I am curious about which impact pathways you prioritize. I feel the most confused about moral uncertainty because it doesn't resonate with my taste, and my knowledge of the subject and of field politics is very limited. I hope my oversimplification doesn't diminish/misrepresent your work too much.
Owen_Cotton-Barratt (8mo): I want to say "yes, by indirect influence", but I expect that this will be true also of most cases of consulting policy-makers (this would remain true even if you got to set policies directly, as I think that most things we do have value filtered through what future people do). This makes me think I'm somehow using a different lens on the world which makes it hard to answer this question directly.

I've heard many people express the view that in EA, and perhaps especially in longtermism:

  • There are a lot of people who could potentially be good/great researchers, but have limited experience thus far
  • There is too little capacity to mentor and manage these people
    • This is partly because the best candidates for doing that are also able to do very valuable research themselves, or other things like outreach, so the opportunity costs for them are very high
  • This results in an untapped pool of potential talent, and also makes it harder to fix this problem itself, because it limits the pipeline of new mentors and managers as well
  • So it'd be highly valuable for more people to build skills in research as well as in mentorship/management, to address this problem
  • And maybe this pushes in favour of starting one's research career outside of explicitly EA orgs, e.g. in academia, to draw on the mentorship capacity there

1. Do all of those claims seem true to you?

2. If so, do you expect this to remain true for a long time, or do you think we're already moving rapidly towards fixing it? (E.g., maybe there are a lot of people already "in the pipeline", reducing the need for new people to enter it.)

3. Do you think there are other ways to potentially address this problem (if it exists) that deserve more attention or that I didn't mention above?

4. Do you think RSP, or things like it, are especially good ways to address this problem (if it exists)?

1. Do all of those claims seem true to you?

Yes, with some important comments:

  • I don't think this is centrally about "researchers", but about "people-who-are-decent-at-working-out-what-to-do-amongst-the-innumerable-possibilities"
    • This is a class we need more of in EA (and particularly longtermist EA); research is one of the (major) applications of such people, but far from the only one
  • Mentorship/management is more like a thousand small things than two big things
    • Often people will be better off learning from multiple strong mentors than one, because they'll be good at different subcomponents
  • There are very substantial reasons beyond this to spend part of one's (research) career outside of explicitly EA orgs, particularly if you get an opportunity to work with outstanding people
    • Such as:
      • You can better learn the specialist knowledge belonging to the relevant domain by spending time working with top experts
        • Or idiosyncratic-but-excellent pieces of mentorship
      • To the extent that EA has important insights that are relevant in many domains, working closely with smart people is a good opportunity to share those insights
      • It's a powerful way to develop a network
    • I gave the reasons
... (read more)

Hmm, I think that I'm less conceiving of this as a problem-to-be-fixed than you are.

I think my second question was broad and vague. 

I could operationalise part of it as: "Do you expect there's still high expected value in more people starting now at trying to get good at 'research mentorship/management'? Do you expect the same would be true if they started on that in, e.g., 2 years? Or do you think that, by the time people got good at this if they start now, the 'gap' will have been largely filled?"

It sounds like you think the answer is essentially "Yes, there's still high expected value in this"?

I'd agree that there are other strong arguments for many people working outside of explicitly EA orgs. And I think many EAs - myself included - are biased towards and often overemphasise working at explicitly EA orgs. 

But "jobs/projects that are unusually good for getting better at 'research mentorship/management'" includes various jobs both within and outside of EA, as well as excluding various jobs both within and outside of EA. So I think the questions in this comment are distinct from - though somewhat related to - the question "Should more people work outside of EA orgs?"

Ahh, I think I was interpreting your general line of questioning as being:

A) Absent ability to get sufficient mentorship within EA circles, should people go outside to get mentorship?

... whereas this comment makes me think you were more asking:

B) Since research mentorship/management is such a bottleneck, should we get people trying to skill up a lot in that?

I think that some of the most important skills for research mentorship from an EA perspective include transferring intuitions about what is important to work on, and that this will be hard to learn properly outside an EA context (although there are probably some complementary skills one can effectively learn).

I do think that if the questions were in the vein of B) I'm more wary in my agreement: I kind of think that research mentorship is a valuable skill to look for opportunities to practice, but a little hard to be >50% of what someone focuses on? So I'm closer to encouraging people doing research that seems valuable to look for opportunities to do this as well. I guess I am positive on people practising mentorship generally, or e.g. reading a lot of different pieces of research and forming inside views on what makes some pieces seem more valuable. I think the demand for these skills will become slightly less acute but remain fairly high for at least a decade.

MichaelA (8mo): I think I had both of those lines of questioning in mind, but didn't make this explicit. Thanks for your responses :)
MichaelA (8mo): Thanks for that interesting response! Other than research, what do you see as fitting in this category? I'd guess it includes grantmaking, and making high-level/strategic organisation decisions. And I'd guess it wouldn't include working out which accounting firm an organisation should use. But I'm unsure about both of those guesses, and especially about things "in between" them. Perhaps you mean something like "people who are decent at working out what strategies and interventions we should pursue amongst the innumerable possibilities"? (As opposed to what fine-grained decisions individual people/orgs should make on a day to day level.) I'm not sure I know what you mean by this (as you anticipate!). Is it about the research scholars themselves spending part of their career before or after RSP outside of EA orgs? Or about the research scholars complementing other people working elsewhere?
Owen_Cotton-Barratt (8mo): Roughly, yes. E.g. I think several people currently at RSP have had some career outside first, and I think that they are typically deriving some real benefit from that (i.e. RSP is providing a complement rather than a substitute for the experience they have already). (Not claiming that RSP is only for people with such experience!)
Owen_Cotton-Barratt (8mo): Yes, I think that's mostly a better characterisation. (There's definitely some grey area, as e.g. I think that people who are good at the thing I'm pointing to are in touch with the reasons behind a choice of intervention, in a way that feeds into some of the decisions about how to implement it on a day-to-day level.)
John_Maxwell (8mo): Why not just have the people who need mentorship serve as "research personal assistants" to improve the productivity of people who are qualified to provide mentorship? (This describes something which occurs between professors and graduate students, right?)
MichaelA (8mo): I've heard the view that more EAs should consider being research assistants to seemingly highly skilled EA researchers[1], both for their own learning and to improve those researchers' productivity. Is this what you mean? I didn't deliberately exclude mention of this from my above comment; I just didn't think to include it. And now that you mention it (or something similar), I'd be interested in Owen's take on this as well :)

[1] One could of course also do this for highly skilled non-EA researchers working in relevant areas. I just haven't heard that suggested as often; I'm not sure if there are good reasons for that.

Suppose, in 10 years, that the Research Scholars Programme has succeeded way beyond what you currently expect. What happened?

Interesting question!

Related: Suppose that, in 10 years, the RSP seems to have had no impact.* What signs would reveal this? And what seem the most likely explanations for the lack of impact?

*There already seem to be some indicators of impact, so feel free to interpret this as "seems to have had no impact after 2020", or as "seems to have had no impact after 2020, plus the apparent impacts by 2020 all ended up washing out over time".

Owen_Cotton-Barratt (8mo): Something like: it seems like the people we're taking on the programme are doing kind of good things, but when we dig into counterfactual analysis it seems like they might on average have done more if they hadn't joined the programme (perhaps because e.g. normal academic pressures are surprisingly helpful motivationally, or because we're fostering a community which is too inward-looking).
Owen_Cotton-Barratt (8mo): Something like: it catalysed the creation of a whole stream of major new projects (led by scholars who used the space afforded by the programme to think seriously about possibilities, and who are well-networked with the broader x-risk ecosystem which makes coordination and recruitment easier).

Is there any impact measurement of RSP currently? I appreciate it is unusually hard, but have you had any thoughts on good ways to go about this?

Owen_Cotton-Barratt (8mo): We're doing a combination of:

  • Looking at what people go on from RSP to do
    • For now this is just "where they're going next" for people leaving [https://www.fhi.ox.ac.uk/next-steps-for-departing-research-scholars/], but in the future I expect us to check back in a few years later
  • Surveys (& conversations) asking research scholars how useful they have found RSP (and in what that value consists), and what they guess they would have done otherwise
  • Comparison of the above with some people who narrowly didn't join RSP (for one reason or another)
  • Looking at the extent to which work done by research scholars while on the programme is directly useful
  • Our (=RSP management's) independent impressions of whether / how much we've helped people

(I think we're still finding our feet with this.) There are a relatively small number of individuals who have gone through the programme, and it's important to us to protect their privacy, so at the moment we don't have plans to publish any of this. When we have slightly more data I kind of like the idea of publishing some aggregate summaries, but I haven't thought seriously about whether this will be possible to do in a way which is properly privacy-preserving while also actually useful to readers.

What common belief in EA do you most strongly disagree with?

That personal dietary choices are important on consequentialist effectiveness grounds. 

I actually think there are lots of legitimate and powerful reasons for EAs to consider veg*nism, such as:

  • A ~deontological belief that it's wrong to eat animals
  • A desire for lifestyle choices that help you connect with what you care about
  • Signalling caring
  • A desire for shared culture with people you share values with

... but it feels to me almost intellectually dishonest to have it be part of an answer to someone saying "OK, I'm bought into the idea that I should really go after what's important, what do I do now?"

(I'm not vegetarian, although I do try to only consume animals I think have had reasonable welfare levels, for reasons in the vicinity of the first two listed above. I still have some visceral unease about the idea of becoming vegetarian that is like "but this might be mistaken for being taken in by intellectually dishonest arguments".)

I almost feel cheeky responding to this as you've essentially been baited into providing a controversial view, which I am now choosing to argue against. Sorry!

I'd say that something doesn't have to be the most effective thing to do for it to be worth doing, even if you're an EA. If something is a good thing, and provided it doesn't really have an opportunity cost, then it seems to me that a consequentialist EA should do it regardless of how good it is.

To illustrate my point, one can say it's a good thing to donate to a seeing eye dog charity. In a way it is, but an EA would say it isn't because there is an opportunity cost as you could instead donate to the Against Malaria Foundation for example which is more effective. So donating to a seeing eye dog charity isn't really a good thing to do.

Choosing to follow a veg*n diet doesn't have an opportunity cost (usually). You have to eat, and you're just choosing to eat something different. It doesn't stop you doing something else. Therefore even if it realises a small benefit it seems worth it (and for the record I don't think the benefit is small).

Or perhaps you just think the... (read more)

I almost feel cheeky responding to this as you've essentially been baited into providing a controversial view, which I am now choosing to argue against. Sorry!

That's fine! :)

In turn, an apology: my controversial view has baited you into response, and I'm now going to take your response as kind-of-volunteering for me to be critical. So I'm going to try and exhibit how it seems mistaken to me, and I'm going (in part) to use mockery as a rhetorical means to achieve this. I think this would usually be a violation of discourse norms, but here: the meta-level point is to try and exhibit more clearly what this controversial view I hold is and why; the thing I object to is a style of argument more than a conclusion; I think it's helpful for the exhibition to be able to draw attention to features of a specific instance, and you're providing what-seems-like-implicit-permission for me to do that. Sorry!

I'd say that something doesn't have to be the most effective thing to do for it to be worth doing, even if you're an EA.

To be clear: I strongly agree with this, and this was a big part of what I was trying say above.

So donating to a seeing eye dog charity isn't really a good thing to do.

This is... (read more)

I'm not 100% sure but we may be defining opportunity cost differently. I'm drawing a distinction between opportunity cost and personal cost. Opportunity cost relates to the fact that doing something may inhibit you from doing something else that is more effective. Even if going vegan didn't have any opportunity cost (which is what I'm arguing in most cases), people may still not want to do it due to high perceived personal cost (e.g. thinking vegan food isn't tasty). I'm not claiming there is no personal cost and that is indeed why people don't go / stay vegan - although I do think personal costs are unfortunately overblown.

Without addressing all of your points in detail I think a useful thought experiment might be to imagine a world where we are eating humans not animals. E.g. say there are mentally-challenged humans of a comparable intelligence/capacity to suffer to non-human animals and we farm them in poor conditions and eat them causing their suffering. I'd imagine most people would judge this as morally unacceptable and go vegan on consequentialist grounds (although perhaps not and it would actually be deontological grounds?). If you ... (read more)

Bella_Forristal (8mo): Thanks for this interesting discussion; for others who read this and were interested, I thought I'd link some previous EA discussions on this topic in case it's helpful :)

  • https://forum.effectivealtruism.org/posts/YkHR4qYXkQvjvTi5v/four-practices-where-eas-ought-to-course-correct#Over_emphasis_on_diet_change
  • https://forum.effectivealtruism.org/posts/YuFD4v7DFBcM57eSA/consequences-of-animal-product-consumption-combined-model
  • https://forum.effectivealtruism.org/posts/Nxmshrz3EeJb7Ng3w/don-t-sweat-die
  • https://slatestarcodex.com/2015/09/23/vegetarianism-for-meat-eaters/
  • https://meteuphoric.com/2014/11/21/when-should-an-effective-altruist-be-vegetarian/
  • https://nothingismere.com/2015/09/10/revenge-of-the-meat-people/

One brief addition: I think the kind of conscientious omnivorism you describe ('I do try to only consume animals I think have had reasonable welfare levels') might have similar opportunity costs to veg*nism, and there's some not very conclusive psychological literature [https://www.researchgate.net/publication/268693729_Can_you_have_your_meat_and_eat_it_too_Conscientious_omnivores_vegetarians_and_adherence_to_diet] to suggest that, since it is a finer-grained rule than 'eat no animals', it might even be harder to follow. Obviously, this depends very much on what we mean by opportunity cost, and it also depends on how one goes about only trying to eat happy animals. I'm not sure what the best answer to either of those questions is.
Denis Drescher (8mo): I’ve thought a bit about this for personal reasons, and I found Scott Alexander’s take on it [https://slatestarcodex.com/2017/08/28/contra-askell-on-moral-offsets/] to be enlightening. I see a tension between the following two arguments that I find plausible:

1. Some people run into health issues due to a vegan diet despite correct supplementation. In most cases it’s probably because of incorrect or absent supplementation, but probably not in all. This could mean that a highly productive EA doing highly important work may cease to be as productive with a small probability. Since they’ve probably been doing extremely valuable work, this decrease in output may be worse than the suffering they would’ve inflicted if they had eaten some beef and had some milk [https://impartial-priorities.org/direct-suffering-caused-by-various-animal-foods.html]. So they should at least eat a bit of beef and drink a bit of milk to reduce that risk. (These foods may increase other risks – but let’s assume for the moment that the person can make that tradeoff correctly for themselves.)

2. There is currently in our society a strong moral norm against stealing. We want to live in a society that has a strong norm against stealing. So whenever we steal – be it to donate the money to a place where it has much greater marginal utility than with its owner – we erode, in expectation, the norm against stealing a bit. People have to invest more into locks, safes, guards, and fences. People can’t just offer couchsurfing anymore. This increase in anomie (roughly, lack of trust and cohesion) may be small in expectation but has a vast expected societal effect. Hence we should be very careful about eroding valuable societal norms, and, conversely, we should also take care to foster new valuable societal norms or at least not stand in the way of them emerging.

I see a bit of a Laffer curve here (like an upside-down U) wh

What do you think is the most valuable research you've produced so far? Did you think it would be so valuable at the time?

Estimating the value of research seems really hard to me (and this is significantly true even in retrospect).

That said, some candidates are:

  • Work making the point that we should give outsized attention to mitigating risks that might manifest unexpectedly soon, since we're the only ones who can
    • At the time it didn't seem unusually valuable, but I think it was relatively soon after (a few months) that I saw some people changing behaviour in light of the point, which increased my sense of its importance
  • Work on cost-effectiveness of research of unknown difficulty, particularly the principle of using log returns when you don't know where to start
    • Felt sort-of important at the time, although I think the kind of value I anticipated hasn't really manifested
    • I have felt like it's been useful for my thinking in a variety of domains, thinking about pragmatic prioritisation (and I've seen some others get some value from that); however logarithm is an obvious-enough functional form that maybe it didn't really add much
  • Maybe something where it was more about dissemination of ideas than finding deep novel insights (I think it's very hard to draw a line between what counts as "research" and what doesn't
... (read more)
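The "log returns" principle for research of unknown difficulty can be made concrete with a small sketch (my own illustrative numbers, not taken from the paper): if a problem's difficulty is log-uniformly distributed over a wide range, the probability of success grows logarithmically with effort, so each doubling of effort buys a constant increment of success probability.

```python
import math

def success_probability(effort, d_min, d_max):
    """P(success) when the problem's (unknown) difficulty is log-uniform
    on [d_min, d_max] and we invest `effort` units of work; we succeed
    iff the difficulty turns out to be <= our effort."""
    if effort <= d_min:
        return 0.0
    if effort >= d_max:
        return 1.0
    return math.log(effort / d_min) / math.log(d_max / d_min)

# Each doubling of effort adds the same increment of success probability:
p1 = success_probability(10, 1, 1000)   # ~0.33
p2 = success_probability(20, 1, 1000)   # ~0.43
p3 = success_probability(40, 1, 1000)   # ~0.53
```

This is just the log-uniform prior made explicit; the linked work considers the question more generally, but it shows why a logarithmic functional form is the natural starting point "when you don't know where to start".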

You have a pure maths research background. What areas/problems do you think this background and way of thinking give you the strongest comparative advantage at?

Can you give any examples of times your background has felt like it helped you come to valuable insights?

Owen_Cotton-Barratt (8mo): There's a class of things which feel majorly helpful, but it's hard to distinguish whether I was helped by the background in pure mathematics, or whether I have some characteristics which both helped me in mathematics and help me now (I suspect it's some of both):

  • Being good at framing things
    • Turning things over in my head, looking for the angle which makes them most parsimonious, and easiest to comprehend clearly
    • Relatedly, feeling happy to dive in and try to make up theory, but keeping it grounded by "this has to actually explain the things we want to know about"
    • These are useful skills when faced with domains where we haven't yet settled on paradigms which we're satisfied capture the important parts of what we care about
  • Generally keeping track of precisely what the epistemic statuses of different claims are, and how they interact
    • This is a useful skill for domains where we're projecting out beyond things we can easily check empirically

Then there are some cases where I was more directly applying some mathematical thinking, e.g.:

  • Work on normative uncertainty (chiefly: variance normalization [http://users.ox.ac.uk/~ball1714/Variance%20normalisation.pdf]; bargaining [https://globalprioritiesinstitute.org/a-bargaining-theoretic-approach-to-moral-uncertainty/])
  • Theory of logarithmic returns [http://www.fhi.ox.ac.uk/theory-of-log-returns/]
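For readers unfamiliar with the variance normalization approach linked above, here is a minimal sketch of the core idea, with made-up numbers: each moral theory's utilities over the available options are rescaled to mean 0 and variance 1 before being aggregated, so no theory dominates merely because it happens to use larger numbers.

```python
import statistics

def variance_normalize(utilities):
    """Rescale one theory's utilities over the option set to mean 0 and
    (population) variance 1, putting theories on a common scale."""
    mean = statistics.mean(utilities)
    sd = statistics.pstdev(utilities)
    return [(u - mean) / sd for u in utilities]

# Two theories scoring three options on wildly different scales
# (hypothetical numbers):
theory_a = [0, 1, 2]
theory_b = [0, 1000, -1000]

combined = [a + b for a, b in zip(variance_normalize(theory_a),
                                  variance_normalize(theory_b))]
best = combined.index(max(combined))  # option 1 wins after normalization

# The rescaling is scale-invariant: multiplying one theory's utilities
# by a constant leaves its normalized scores unchanged.
```

The design choice this illustrates is intertheoretic comparison by statistical normalization rather than by taking the raw numbers at face value; the linked paper argues for variance (rather than, say, range) as the normalizing statistic.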

Would you currently prefer a marginal resource to be used by an impatient longtermist (i.e. to reduce existential risk) or by a patient longtermist (i.e. to invest for the future)? Assume both would spend their resource as effectively as possible.

Where do you think the impatient longtermist would spend their resource and where do you think the patient longtermist would spend their resource?

Finally, how do you best think we should proceed to answer these questions with more certainty?

P.S. there may well have been a much simpler way to formulate these questions, feel free to reformulate if you want to!

I'm not sure I really believe that "patient vs impatient longtermists" cleaves the world at its joints. I'll use the terms to mean something like resources aimed at reducing existential risk over the next fifty years or so, versus aiming to be helpful on a timescale of over a century?

In either case I think it depends a lot on the resource in question. Many resources (e.g. people's labour) are not fully fungible with one another, so it can depend quite a bit on comparative advantage.

If we're talking about financial resources, these are fairly fungible. There I tend to think (still applies to both "patient" and "impatient" flavours of longtermism):

  • It doesn't make so much sense to analyse at the level of the individual donor
  • Instead we should think about the portfolio we want longtermist capital as a whole to be spread across, and what are good ways to contribute to that portfolio at the margin
    • Sometimes particular donors will have comparative advantage in giving to certain places (e.g. they have high visibility on a giving opportunity so it's less overhead for them to assess it, and it makes sense for them to fill it)
    • Sometimes it's more about coordinating to have roughly the right amoun
... (read more)
jackmalde (8mo): Thanks for this detailed reply! I appreciate these aren't questions with simple answers. Do you mind elaborating slightly on what you mean here? To me this just reads as finding out the best activities to do if you're a longtermist, but given that you say it's a "small slice of our portfolio" I suspect this is more specific.
Owen_Cotton-Barratt (8mo): Sorry, that was poorly worded. I mean for various activities X, estimating how many resources end up devoted to longtermist ends as a result of X (and what the lags are). E.g. some Xs = writing articles about longtermism; giving talks in schools; talking about EA but not explicitly longtermism; outreach to foundations; consultancy to help people give better according to their values (and clarify those values); ...
jackmalde (8mo): Ah OK, thanks, that makes sense. Certainly seems worthwhile to have more research into this.

What do you believe* that seems important and that you think most EAs/longtermists/people at FHI would disagree with you about? 

*Perhaps in terms of your independent impression, before updating on others' views.

That in thinking about community/movement building, it's more important to consider something like how people should be -- e.g. what virtues should be cultivated/celebrated -- rather than just what people should do (although of course both matter). 

(That's in impression space. I have various drafts related to this, and I hope to get something public up in the next few months, so I'll leave it brief for now.)

Do you think Ellsberg preferences and/or uncertainty/ambiguity aversion are irrational? 

Do you think it's a requirement of rationality to commit to a single joint probability distribution, rather than use multiple distributions or ranges of probabilities?

Related papers:

I think the debate about ambiguity aversion mostly comes down to a bucket error about the meaning of "rational":

  • I think that a fully rational actor would:
    • not exhibit ambiguity aversion
    • commit to a single joint probability distribution
  • I think for boundedly rational actors:
    • ambiguity aversion is a (very) useful heuristic
      • particularly if you're in an environment which is or might be partially designed by other agents who could stand to benefit from your loss
    • it can make sense to hold onto ranges of probabilities
      • e.g. maybe you think event X has probability between 10% and 20%, then that's enough to determine what to do for lots of policy decisions; in cases where it doesn't determine what to do you can consider whether it's worth time investment to sharpen your probability estimate
  • I think it's a bad (but frequently made, at least implicitly) assumption that boundedly rational actors should mimic the behaviour of fully rational actors in cases where they can work out what that is
    • For a particularly vivid example of (something at least strongly analogous to) this assumption breaking, see the theorem in the optimal taxation literature that the top marginal tax rate should be zero
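The range-of-probabilities heuristic above can be sketched in code. This is a minimal illustration rather than anything from the discussion: the actions, payoffs, and probability bounds are all hypothetical.

```python
def expected_value(p, payoff_if_x, payoff_otherwise):
    """Expected payoff of an action, given P(X) = p."""
    return p * payoff_if_x + (1 - p) * payoff_otherwise

def robust_choice(actions, p_low, p_high, n_grid=101):
    """Return the best action if it is best across the whole probability
    range [p_low, p_high]; return None if the choice flips somewhere in
    the range, signalling that sharpening the estimate may be worth it."""
    best = None
    for i in range(n_grid):
        p = p_low + (p_high - p_low) * i / (n_grid - 1)
        winner = max(actions, key=lambda a: expected_value(p, *actions[a]))
        if best is None:
            best = winner
        elif winner != best:
            return None  # the decision depends on where in the range p falls
    return best

# Hypothetical payoffs: (payoff if X happens, payoff if it doesn't)
actions = {"insure": (90, 95), "don't insure": (0, 100)}
print(robust_choice(actions, 0.10, 0.20))  # → insure
```

Here "insure" wins at every probability in [10%, 20%], so the range alone settles the decision; when `robust_choice` instead returns `None`, that is exactly the case where investing in a sharper estimate pays off.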
8Owen_Cotton-Barratt8moMeta: I really appreciated being asked this question! It made me realise I no longer felt confused about ambiguity aversion. (I think the last time I thought explicitly about it, I'd have said "seems like ambiguity aversion is a good heuristic in some circumstances and that generates the intuitions in favour of it, but it's irrational", and the time before I'd have said "I think ambiguity aversion is irrational".)
7Owen_Cotton-Barratt8moMeta: the last time I looked into any literature around this was about 5-6 years ago (and I wasn't thorough then), so I really don't know if this perspective is represented somewhere in the debate. In case it isn't, and if any reader feels like they would like to take on the hard work of fleshing out details and seeing what problems it does/doesn't address, and writing it up for a paper, I'd be really happy to hear that that had been done. (Also feel free to reach out if that might be you and you'd want to discuss.)
2MichaelStJules8moSeparating fully and boundedly rational actors is very helpful. Would a fully rational actor need to have a universal prior? Wouldn't they need to have justified one choice of a universal prior over all others? It seems like there might be a hard first step here that could prevent them from committing to a single joint probability distribution. Maybe you'd want a prior over universal priors, but then where would that come from? Maybe this is the only place where multiple distributions can creep in for a fully rational actor, and all other probabilities would be based on your universal prior and observations. Do you mean that they will fail to approximate the fully rational behaviour and sometimes be more biased when they try to approximate it? My instinct in response to the optimal top marginal tax rate being zero is that their model is probably missing very important features (which might be hard to measure or quantify).
6Owen_Cotton-Barratt8moRoughly yes. They might even exactly match the fully rational behaviour on some dimension under consideration, but in so doing be a worse approximation overall to full rationality. I think a proper study of full rationality and boundedly rational actors would look at limits of behaviour as you impose weaker and weaker computational constraints. I think that it could be really useful to understand which properties of the fully rational actor are converged upon in a reasonable time and basically hold for powerful-enough boundedly rational actors, and which e.g. only hold in the very limit when the actor's comprehension ability is large compared to the world. Yes, I think it is missing imperfect information and bounded rationality. (TBC, I don't think that anyone working in optimal tax theory thinks that top marginal rates should actually be zero.) I think the theorem is pretty clear that in the perfect-information case with all actors rational the top rate should be zero (it basically needs an additional assumption about smoothness of preferences, but that's pretty reasonable). And although this sounds surprising, it is just correct! To set up an example that's about bounded rationality in particular, suppose:

  • The taxpayers are fully rational
  • You, the tax-setter, have a lot of giant spreadsheets which express all of the taxpayer preferences for different levels of work/consumption, marginal value of public funds etc. (so theoretically full information)
  • You now get to set all the tax rates (which could be quite complicated)
  • If you were fully rational and could calculate everything out, you would be able to set optimal tax policy
  • But calculating everything out is too much of a mess, and you can't do it
  • You know for certain that the optimal solution would have a marginal top rate of zero somewhere
  • But as you can't work out where that is, and as having a marginal top rate of zero is not that important, you'll probably decide on a set of
4Owen_Cotton-Barratt8moI'd usually think of being fully rational as giving constraints after your choice of prior; there are questions about whether some priors are better than others, but you can treat that separately.

Hey Owen, you have a background in mathematics. What is your favorite theorem/proof/object/definition/algorithm/conjecture/...?

One that comes to mind:

Theorem: Every finitely presented group is the fundamental group of some compact 4-manifold.

I like it because:

  • It's a universal claim relating two very broad classes of objects, such that when I look at the statement I think "wow, how would you even start thinking about how to prove that?"
  • There's a proof which is geometric, elegant, and short 
    • In fact there are multiple quite different geometric proofs!

[With apologies for the fact that this likely makes no sense to most readers.]

7Max_Daniel8mo(FWIW, I hadn't heard of that theorem before but don't feel that surprised by the statement. But I'm quite curious if the proofs provide an intuitive understanding for why we need 4 dimensions. Maybe this is hindsight bias, but I feel like if you had asked me "Can we get any member of [broad but appropriately restricted class of groups] as fundamental group of a [sufficiently general class of manifolds]?" my immediate reply would have been "uh, I'd need to think about this, but it's at least plausible that the answer is yes", whereas there is no way I'd have intuitively said "yes, but we need at least four dimensions".)
4Owen_Cotton-Barratt8moI think that often the topology of things in low dimensions ends up interestingly different from that in high dimensions -- roughly, when your dimensionality gets big enough (often 3, 4, or 5 is "big enough") there's enough space to do the things you want without things getting in the way. One of the proofs I know takes advantage of the fact that D³ × S¹ (which is not simply connected) has boundary S² × S¹, which is also the boundary of D² × S² (which is simply connected); there isn't room for the analogous trick a dimension down.

My impression is that, of FHI's focus areas, biotechnology is substantially more credentialist than the others. I've been hesitant to recommend RSP to life scientists who are considering a PhD because I'm worried that not having a "traditional" degree is harmful to their job prospects.

Do you think that's an accurate concern? (I mostly speak with US-based people, if that's relevant.)

6Gregory_Lewis8moFWIW I agree with Owen. I agree the direction of effect supplies a pro tanto consideration which will typically lean in favour of other options, but it is not decisive (in addition to the scenarios he notes, some people have pursued higher degrees concurrently with RSP). So I don't think you need to worry about potentially leading folks astray by suggesting this as an option for them to consider - although, naturally, they should carefully weigh their options up (including considerations around which sorts of career capital are most valuable for their longer term career planning).
6Owen_Cotton-Barratt8moI don't feel like I'm at all an expert in biosecurity careers, but I agree that directionally they seem more credentialist. I think this is a consideration against RSP, although it doesn't feel like an overwhelming one, since:

  • It could be a reasonable option before a PhD
    • This is particularly relevant if taking the time to think about what you want to work on allows you to do a PhD in which your work is much closer to things you eventually care about
    • (similarly it could be a good option for some people after a PhD)
  • There may well be some roles (now or in the future) which are less credential-locked

How do you think the EA community can improve its interactions and cooperation with the broader global community, especially those who might not be completely comfortable with the underlying philosophy? Do you think it's more of a priority to spread those underlying arguments, or to simply grow the network of people sympathetic to EA causes, even if they disagree with the principles of EA?

4Owen_Cotton-Barratt8moGood question. Of the two options I'd be tempted to say it's more of a priority to spread the underlying arguments, but actually I think something more nuanced: it's a priority to keep engaging with people about the underlying arguments, finding where there seems to be the greatest discomfort and turning a critical eye on the arguments there, looking to see if we can develop stronger versions of them. I think that talking about the tentative conclusions along with this is important both for growing the network of people sympathetic to those, and for providing concrete instantiation of what is meant by the underlying philosophy (too much risk of talking past each other or getting lost in abstraction-land without this).

You've done research that seems to me very valuable, and now (I imagine) spend a lot of your time on something more like "facilitating and mentoring other researchers", in your role running the RSP.

1. Did you make an active decision to shift your priorities somewhat from doing to facilitating research? If so, what factors drove that decision? What would've made you not make that decision, or what would lead you to switch back to a larger focus on doing your own research?

2. What do you think makes running RSP your comparative advantage (assuming you think that)? More generally, what do you think makes that sort of "research facilitation/mentorship" someone's comparative advantage?

3. Any thoughts on how to test or build one's skills for that sort of role/pathway? (I guess I currently consider things like research management, project management at a research org, and coordinating fellowships to be in the same broad category. This may not be the best way of grouping things.)

(Feel free to just pick one question, or just say related things!)

1. Did you make an active decision to shift your priorities somewhat from doing to facilitating research? If so, what factors drove that decision?

There was something of an active decision here. It was partly based on a sense that the returns had been good when I'd previously invested attention in mentoring junior researchers, and partly on a sense that there was a significant bottleneck here for the research community.

2. What do you think makes running RSP your comparative advantage (assuming you think that)? 

Overall I'm not sure what my comparative advantage is! (At least in the long term.) 

I think:

  • Some things which make me good at research mentoring are:
    • being able to get up to speed on different projects quickly
    • holding onto a sense of why we're doing things, and connecting to larger purposes
    •  finding that I'm often effective in 'reactive' mode rather than 'proactive' mode 
      • (e.g. I suspect this AMA has the highest ratio of public-written-words / time-invested of anything substantive I've ever done)
    • being able to also connect to where the researcher in front of me is, and what their challenges are
  • There are definitely parts of running RSP which seem not my comparat
... (read more)

Thanks for doing this AMA!

Recently I've been thinking around the themes of how we try to avoid catastrophic behaviour from humans (and how that might relate to efforts with AI).

Do you think "malevolence" (essentially, high levels of traits like Machiavellianism, narcissism,  psychopathy, and/or sadism) may play an important role here? Or do other psychological traits, biases, and limitations seem far more important? Or values? Or things like game-theoretic dynamics, how groups interact, institutional structures,  etc.? 

(Feel free to just talk about this area in the terms that make sense to you, rather than answering that particular framing of the question.)

4Owen_Cotton-Barratt8moMalevolence seems potentially important to me, although I mostly haven't been thinking about it (except a bit about psychopathy and its absence). Things more like game-theoretic dynamics are where a good portion of my attention has been ... but I don't want to claim this means they're more important. [meta: this is a short answer because while I might have things to say about crisper questions within this space, for saying things-in-general I think it makes more sense to wait until I have coherent enough ideas to publish something.]

Which approaches and directions for decision-making under deep uncertainty seem most promising? Are there any that seem likely to be rational but not (apparently?) too permissive like Mogensen's maximality rule?

Which approaches do you see people using or endorsing that you think are bad (e.g. irrational)?

4Owen_Cotton-Barratt8moI guess I think that "decision-making under deep uncertainty" is mostly too broad a category to be able to say useful things about (although maybe we can draw together useful lessons that seem to hold in a variety of more specialised contexts), and we're better trying to look at more particular setups and reason about those.
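Mogensen's maximality rule mentioned above can be sketched in code: an option is permissible unless some alternative has a higher expected value under every probability distribution in the set. The options, states, payoffs, and two-distribution representor below are purely hypothetical, chosen to show why the rule can feel permissive (two of the three options end up permissible).

```python
def expected_value(payoffs, dist):
    """Expected payoff, with payoffs and dist both keyed by state."""
    return sum(dist[s] * payoffs[s] for s in dist)

def maximal_options(options, distributions):
    """Maximality rule: keep an option unless some alternative beats it
    under *every* distribution in the set (strict dominance)."""
    def dominated(a):
        return any(
            all(expected_value(options[b], d) > expected_value(options[a], d)
                for d in distributions)
            for b in options if b != a
        )
    return [a for a in options if not dominated(a)]

# Two candidate distributions over the states "X" / "notX":
distributions = [{"X": 0.10, "notX": 0.90}, {"X": 0.20, "notX": 0.80}]
# Hypothetical payoffs per state for three options:
options = {
    "safe":  {"X": 50, "notX": 50},
    "risky": {"X": 0,  "notX": 60},
    "bad":   {"X": 0,  "notX": 40},
}
print(maximal_options(options, distributions))  # → ['safe', 'risky']
```

"bad" is ruled out (it loses to "safe" under both distributions), but "safe" and "risky" each win under one distribution and lose under the other, so the rule permits both and stays silent on how to choose between them.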

What intellectual progress did you make in the 2010s? (See SSC and Gwern's essays on the question.)

4Owen_Cotton-Barratt8moThis is an interesting question, but I don't think there's a decent short-answer version; it's more like investing several hours or not at all. So I'll take this as a prompt to consider the several-hour version, but won't answer for now.

What percentage of "EA intellectual work" is done as part of the standard academic process? From your perspective, how far away is it from the optimal distribution?

8Owen_Cotton-Barratt8moGee, this is really hard to measure. I'd guess that somewhere between 10% and 30% is done as part of something that we'd naturally call the "standard academic process"? I think that there are some good reasons for deviation, and some things that academic norms provide that we may be missing out on. I think academia is significantly set up as a competitive process, where part of the game is to polish your idea and present it in the best light. This means:

  • It encourages you to care about getting credit, and people are discouraged from freely sharing early-stage ideas that they might turn into papers, for fear of being scooped
    • This seems broadly bad
  • It encourages people to put in the time to properly investigate the ins and outs of an idea, and find the clearest framing of it, making it more efficient for later readers
    • This seems broadly good

I'd like it if we could work out how to get more of the good here with less of the bad. That could mean doing a larger proportion of things within some version of the academic process, or could mean working out other ways to get the benefits. There's also a credentialing benefit to doing things within the academic process. I think this is non-negligible, but also that if you do really high-quality work anywhere, people will observe this and come, so I don't think it's necessary to rest on that credentialing.

What's the difference between deep uncertainty and (complex) cluelessness?

8Owen_Cotton-Barratt8moI'm just using "deep uncertainty" to refer to a theme of situations where there are challenges about how you get going. I'm not thinking of it as a crisp referent. I guess that complex cluelessness would be a subclass of cases of deep uncertainty in my ontology, but I also mean to include e.g. normative uncertainty; Knightian uncertainty; heuristics for estimating probabilities when you don't really know where to start.

I could probably figure this out online so don't answer if you don't have a quick answer cached, but is it difficult for RSP scholars (who are not admitted through other channels) to take other classes or do other studies at Oxford, either at the philosophy department or elsewhere? For example if someone's interested in classes in philosophy, public health, ML, or statistical methods.

5Linch8mo(I'm amused at the distribution of votes on this question).
5Owen_Cotton-Barratt8moGenerally Oxford lectures are open to any university members, although:

  • They wouldn't generally get "academic credit" for this
  • They wouldn't necessarily be able to join accompanying classes (although we might be able to arrange this)
  • I've no idea what the situation is now so many things are remote because of COVID-19
2Linch8moThanks a lot!