Max_Daniel

Project Manager for the Research Scholars Programme (RSP) at FHI and Fund Manager at the EA Infrastructure Fund.

Previously part of the 2018-2020 cohort of RSP and Executive Director of the Foundational Research Institute (now Center on Long-Term Risk), a project by the Effective Altruism Foundation (but I don't endorse that organization's 'suffering-focused' view on ethics).

Unless stated otherwise, I post on the Forum in a personal capacity and don't speak for any organization I'm affiliated with.

Comments

Should you do a PhD in science?

Interesting, thank you for sharing.

Do you have a take on how accurate the national average estimates are? In particular, I'd be interested in whether they were determined using a different methodology, and so perhaps one that is biased toward "underreporting". Whereas at first glance your methodology might seem to be biased toward "overreporting" (though I don't know to what extent you may have "corrected" for non-response bias, which would be one source of "overreporting").

Should you do a PhD in science?

That's what I thought. I also have a vague sense that, depending on one's goals, a majority of US tenure-track positions may not be great because they are at colleges that do little research and where one predominantly has to teach? Or are these not included in the numbers the OP gives / aren't called 'tenure-track positions'? (As is obvious by now, I don't understand the US higher education system very well.)

How much does performance differ between people?

[The following is a lightly edited response I gave in an email conversation.]

What is your overall intuition about positive feedback loops and the importance of personal fit? Do they make it (i) more important, since a small difference in ability will compound over time or (ii) less important since they effectively amplify luck / make the whole process more noisy?

My overall intuition is that the full picture we paint suggests that personal fit, and especially being in the tail of personal fit, is more important than one might naively think (at least in domains where ex-post output really is very heavy-tailed). But how to "allocate"/combine different resources is also very important. This is less b/c of feedback loops specifically and more b/c of an implication of multiplicative models:

[So one big caveat is that the following only applies to situations that are in fact well described by a multiplicative model. It's somewhat unclear which these are.]

If ex-post output is very heavy-tailed, total ex-post output will be disproportionately determined by outliers. If ex-post output is multiplicative, then these outliers are precisely those cases where all of the inputs/factors have very high values. 

So this could mean: total impact will be disproportionately due to people who are highly talented, have had lots of practice at a number of relevant skills, are highly motivated, work in a great & supportive environment, can focus on their job rather than having to worry about their personal or financial security, etc., and got lucky.

If this is right, then I think it adds interesting nuance to discussions around general mental ability (GMA). Yes, there is substantial evidence indicating that we can measure a 'general ability factor' that is a valid predictor for ~all more specific cognitive abilities. And it's useful to be aware of that, e.g. b/c almost all extreme outlier performers in jobs that rely crucially on cognitive abilities will be people with high GMA. (This is consistent with many/most people having the potential to be "pretty good" at these jobs.) However, conversely, it does NOT follow that GMA is the only thing worth paying attention to. Yes, a high-GMA person will likely be "good at everything" (because everything is at least moderately correlated). But there are differences in how good, and these really matter. To get to the global optimum, you really need to allocate the high-GMA person to a job that relies on whatever specific cognitive ability they're best at, supply all other 'factors of production' at maximal quality, etc. You have to optimize everything.

I.e. this is the toy model: Say we have ten different jobs, $J_1$ to $J_{10}$. Output in all of them is multiplicative. They all rely on different specific cognitive abilities $A_1$ to $A_{10}$ (i.e. each $A_i$ is a factor in the 'production function' for $J_i$ but not the others). We know that there is a "general ability factor" $g$ that correlates decently well with all of the $A_i$. So if you want to hire for any $J_i$ it can make sense to select on $g$, especially if you can't measure the relevant $A_i$ directly. However, after you selected on $g$ you might end up with, say, 10 candidates who are all high on $g$. It does NOT follow that you should allocate them at random between the jobs because "$g$ is the only thing that matters". Instead you should try hard to identify for any person which $A_i$ they're best at, and then allocate them to $J_i$. For any given person, the difference between their $A_i$ and $A_j$ might look small b/c they're all correlated, but because output is multiplicative this "small" difference will get amplified.

(If true, this might rationalize the common practice/advice of: "Hire great people, and then find the niche they do best in.")
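To make this a bit more concrete, here is a minimal simulation sketch of the toy model. All numbers, the correlation structure, and the use of an exact matching are illustrative assumptions on my part; the point is only that, with strongly correlated abilities and multiplicative output, allocating already-selected candidates by personal fit can noticeably beat allocating them at random:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n = 10  # ten candidates and ten jobs J_1..J_10; job i relies mainly on ability A_i

# Illustrative assumption: everyone was already selected on a high general factor g,
# and each specific ability A_i is g times a small person-specific deviation, so the
# A_i are strongly correlated within a person and their differences look "small".
g = rng.normal(loc=1.0, scale=0.05, size=n)
abilities = g[:, None] * rng.lognormal(mean=0.0, sigma=0.15, size=(n, n))  # abilities[p, i]

# Illustrative assumption: all other multiplicative factors (practice, motivation,
# environment, luck) are supplied at the same fixed level, so any difference below
# comes purely from how people are matched to jobs.
other_factors = 2.0

def total_output(person_for_job):
    """person_for_job[i] = index of the person doing job i; outputs are summed across jobs."""
    return sum(abilities[person_for_job[i], i] * other_factors for i in range(n))

# Random allocation of the ten pre-selected candidates to the ten jobs.
random_alloc = list(rng.permutation(n))

# "Personal fit" allocation: the matching of people to jobs that maximizes the sum of
# relevant abilities, computed exactly as an assignment problem.
rows, cols = linear_sum_assignment(abilities, maximize=True)
fit_alloc = [0] * n
for p, j in zip(rows, cols):
    fit_alloc[j] = p

print("total output, random allocation:   ", round(total_output(random_alloc), 2))
print("total output, fit-based allocation:", round(total_output(fit_alloc), 2))
```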

I think Kremer discusses related observations quite explicitly in his o-ring model. In particular, he makes the basic observation that if output is multiplicative you maximize total output by "assortative matching". This is basically just the observation that if $f(x, y) = x \cdot y$ and you have four inputs $x_1, x_2, x_3, x_4$ with $x_1 \geq x_2 \geq x_3 \geq x_4$ etc., then

$$x_1 x_2 + x_3 x_4 \;\geq\; x_1 x_3 + x_2 x_4 \;\geq\; x_1 x_4 + x_2 x_3$$

- i.e. you maximize total output by matching inputs by quality/value rather than by "mixing" high- and low-quality inputs. It's his explanation for why we're seeing "elite firms", why some countries are doing better than others, etc. In a multiplicative world, the global optimum is a mix of 'high-ability' people working in high-stakes environments with other 'high-ability' people on one hand, and on the other hand 'low-ability' people working in low-stakes environments with other 'low-ability' people. Rather than a uniform setup with mixed-ability teams everywhere, "balancing out" worse environments with better people, etc.
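(A quick way to see the two inequalities, assuming the ordering $x_1 \geq x_2 \geq x_3 \geq x_4$ as above:

$$x_1 x_2 + x_3 x_4 - (x_1 x_3 + x_2 x_4) = (x_1 - x_4)(x_2 - x_3) \geq 0,$$

$$x_1 x_3 + x_2 x_4 - (x_1 x_4 + x_2 x_3) = (x_1 - x_2)(x_3 - x_4) \geq 0.$$

So pairing the two best and the two worst inputs together gives weakly higher total output than any mixed pairing.)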

(Ofc as with all simple models, even if we think the multiplicative story does a good job at roughly capturing reality, there will be a bunch of other considerations that are relevant in practice. E.g. if unchecked this dynamic might lead to undesirable inequality, and the model might ignore dynamic effects such as people improving their skills by working with higher-skilled people, etc.)

Possible misconceptions about (strong) longtermism

Hi Sam, thank you for your thoughtful reply.

Here are some things we seem to agree on:

  • The cases for specific priorities or interventions that are commonly advocated based on a longtermist perspective (e.g. "work on technical AI safety") are usually far from watertight. It could be valuable to improve them, by making them more "robust" or otherwise.
  • Expected-value calculations that are based on a single quantitative model have significant limitations. They can be useful as one of many inputs to a decision, but it would usually be bad to use them as one's sole decision tool.
    • (I am actually a big fan of the GiveWell/Holden Karnofsky posts you link to. When I disagree with other people it often comes down to me favoring more "cluster thinking". For instance, these days this happens a lot to me when talking to people about AI timelines, or other aspects of AI risk.)

However, I think I disagree with your characterization of the case for CL more broadly, at least for certain uses/meanings of CL.

Here is one version of CL which I believe is based on much more than just expected-value calculations within a single model: This is roughly the claim that (i) in our project of doing as much good as possible we should at the highest level be mostly guided by very long-run effects and (ii) this makes an actual difference for how we plan and prioritize at intermediate levels.

Here I have in mind a picture that is roughly as follows:

  • Lowest level: Which among several available actions should I take right now?
  • Intermediate levels: 
    • What are the "methods" and inputs (quantitative models, heuristics, intuitions, etc.) I should use when thinking about the lowest level?
    • What systems, structures, and incentives should we put in place to "optimize" which lowest-level decision situations I and other agents find ourselves in in the first place?
    • How do I in turn best think about which methods, systems, structures, etc. to use for answering these intermediate-level questions?
    • Etc.
  • Highest level: How should I ultimately evaluate the intermediate levels?

So the following would be one instance of part (i) of my favored CL claim: When deciding whether to use cluster thinking or sequence thinking for a decision, we should aim to choose whichever type of thinking best helps us find the option with most valuable long-run effects. For this it is not required that I make the choice between sequence thinking or cluster thinking by an expected-value calculation, or indeed any direct appeal to any long-run effects. But, ultimately, if I think that, say, cluster thinking is superior to sequence thinking for the matter at hand, then I do so because I think this will lead to the best long-run consequences.

And these would be instances of part (ii): that often we should decide primarily based on the proxy of "what does most reduce existential risk?"; that it seems good to increase the "representation" of future generations in various political contexts; etc.

Regarding what the case for this version of CL rests on:

  • For part (i), I think it's largely a matter of ethics/philosophy, plus some high-level empirical claims about the world (the future being big etc.). Overall very similar to the case for AL. I think the ethics part is less in need of "cluster thinking", "robustness" etc. And that the empirical part is, in fact, quite "robustly" supported.
  • [This point made me most want to push back against your initial claim about CL:] For part (ii), I think there are several examples of proxy goals, methods, interventions, etc., that are commonly pursued by longtermists which have a somewhat robust case behind them that does not just rely on an expected value estimate based on a single quantitative model. For instance, avoiding extinction seems very important from a variety of moral perspectives as well as common sense, there are historical precedents of research and advocacy at least partly motivated by this goal (e.g. nuclear winter, asteroid detection, perhaps even significant parts of environmentalism), there is a robust case for several risks longtermists commonly worry about (including AI), etc. More broadly, conversations involving explicit expected value estimates, quantitative models, etc. are only a fraction of the longtermist conversations I'm seeing. (If anything I might think that longtermists, at least in some contexts, make too little use of these tools.) E.g. look at the frontpage of LessWrong, or their curated content. I'm certainly not among the biggest fans of LessWrong or the rationality community, but I think it would be fairly inaccurate to say that a lot of what is happening there is people making explicit expected value estimates. Ditto for longtermist content featured in the EA Newsletter, etc. etc. I struggle to think of any example I've seen where a longtermist has made an important decision based just on a single EV estimate.

 

Rereading your initial comment introducing AL and CL, I'm less sure whether by CL you had in mind something similar to what I'm defending above. There certainly are other readings that seem to hinge more on explicit EV reasoning or that are just absurd, e.g. "CL = never explicitly reason about anything happening in the next 100 years". However, I'm less interested in these versions since they seem to me to be a poor description of how longtermists actually reason and act in practice.

Notes on 'Atomic Obsession' (2009)

Interesting, thank you! I hadn't been aware of this case.

[Linkpost] Rethink Priorities is hiring a Research Project and Hiring Manager to accelerate its research

I've been a big fan of Rethink's work, and am excited to hear that you're planning to grow further. I hope you'll find a great candidate for this role!

Draft report on existential risk from power-seeking AI

I think there is a tenable view that considers an AI catastrophe less likely than what AI safety researchers think but is not committed to anything nearly as strong as the field being "crazy" or people in it being "very wrong":

We might simply think that people are more likely to work on AI safety if they consider an AI catastrophe more likely. When considering their beliefs as evidence we'd then need to correct for that selection effect.

[ETA: I thought I should maybe add that even the direction of the update doesn't seem fully clear. It depends on assumptions about the underlying population. E.g. if we think that everyone's credence is determined by an unbiased but noisy process, then people with high credences will self-select into AI safety because of noise, and we should think the 'correct' credence is lower than what they say. On the other hand, if we think that there are differences in how people form their beliefs, then it at least could be the case that some people are simply better at predicting AI catastrophes, or are fast at picking up 'warning signs', and if AI risk is in fact high then we would see a 'vanguard' of people self-selecting into AI safety early who also will have systematically more accurate beliefs about AI risk than the general population.]
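Here is a tiny simulation sketch of the first scenario (unbiased but noisy credences plus self-selection). The 'true' risk level, the noise scale, and the selection threshold are made-up numbers, chosen only to show the direction of the effect:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up numbers for illustration only.
true_risk = 0.30            # the 'correct' credence in this toy world
noise_sd = 0.10             # individual credences are unbiased but noisy
selection_threshold = 0.45  # people only enter the field if they consider the risk substantial

credences = np.clip(true_risk + rng.normal(0.0, noise_sd, size=100_000), 0.0, 1.0)
in_field = credences[credences >= selection_threshold]

print("true risk:                         ", true_risk)
print("mean credence, whole population:   ", round(float(credences.mean()), 3))
print("mean credence, people in the field:", round(float(in_field.mean()), 3))
```

Under these assumptions the average credence within the field overstates the true risk, which is the "correct downward for the selection effect" case; the second scenario (some people simply being better predictors) would require a different population model and could flip the direction of the update.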

(I am sympathetic to "I'd still want 80K to explicitly argue that point, and note the disagreement.", though haven't checked to what extent they might do that elsewhere.)

[Linkpost]: German judges order government to strengthen legislation before end of year to protect future generations

I also thought this was pretty interesting. Here is what I wrote on some Slack workspace:

 

I think this news from Germany is kind of wild (though probably not unprecedented internationally), and quite possibly interesting for 'longtermist policy', rights of future generations, etc., more broadly:

Germany must update its climate law by the end of next year to set out how it will bring carbon emissions down to almost zero by 2050, its top court ruled on Thursday, siding with a young woman who argued rising sea levels would engulf her family farm.

The court concluded that a law passed in 2019 had failed to make sufficient provision for cuts beyond 2030, casting a shadow over a signature achievement of Chancellor Angela Merkel's final term in office.

https://www.reuters.com/business/environment/germany-must-further-tighten-climate-change-law-top-court-rules-2021-04-29/

Note that this is from Germany's constitutional court. This means both that there is no appeal against this ruling, and that in a sense the statement is quite strong: according to the court, failure to properly specify climate policy for the 2030-2050 period is unconstitutional.

Specifically, I think it's interesting that:
 

  • A court rules that a government must now change its law pertaining to issues ~10 years ahead.
  • The legal basis for this is the basic rights of currently young people.
  • Essentially the government is required to act in a certain way: to come up with a plan for how to prevent future harms. It's not just about not making things worse (as in e.g. striking down fossil fuel subsidies).

The court's reasoning is also interesting:

  • There is a clause in Germany's constitution, added in 1994, that essentially requires government to protect the environment to such an extent that future generations will be able to thrive.
  • However, the court's decision is not based on an outright violation of that requirement. Basically b/c the climate law's stated emission reductions by 2050 are considered sufficient for meeting that requirement.
  • Instead, the court argued that the current climate law leaves too much room for a disproportionate curtailment of freedom for people living beyond 2030. The court essentially says: the required total emission reductions by 2050 will be costly, and at least some ways for how to achieve them may significantly curtail people's freedom; you are therefore obligated to minimize these 'costs' to people's freedom, and at the very least this requires you to give people sufficient "advance warning" by stating what the emission reduction schedule 2030-2050 will be. And deciding this in 2025 is not enough "advance warning", you have to do it now.

So basically the government is violating young people's basic rights today by leaving too much room for potential future curtailments of their freedom.

These future obligations to reduce emissions have an impact on practically every type of freedom because virtually all aspects of human life still involve the emission of greenhouse gases and are thus potentially threatened by drastic restrictions after 2030. Therefore, the legislator should have taken precautionary steps to mitigate these major burdens in order to safeguard the freedom guaranteed by fundamental rights.

(From the court's English press release.) 

It's also interesting that some complainants "live in Bangladesh and Nepal".

(I haven't read the Reuters article. In my experience, German media at least are notoriously bad at accurately reporting the constitutional court's rulings. I recommend reading the court's English press release, or machine-translating the full ruling -- currently only available in German -- if you want to know more.)

Launching a new resource: 'Effective Altruism: An Introduction'

FWIW these sound like fairly good changes to me.  :)

(Also for reasons unrelated to the "Was the original selection 'too longtermist'?" issue, on which I don't mean to take a stand here.)

What previous work has been done on factors that affect the pace of technological development?

I would guess that there is a lot of relevant work in economics, in particular work that asks what kind of incentives (e.g. in terms of patents, intellectual property regulation, etc.) a social planner would need to set to achieve a "socially optimal" level of R&D expenses by private actors. This isn't exactly what you'd want to know, but I expect that one could sometimes extract from it possible "levers" that apply at the level of specific sectors and that could also be pulled by actors other than government.

I recommend asking someone with an econ background, perhaps at GPI.
