Gregory_Lewis

Researcher (on bio) at FHI

Comments

Draft report on existential risk from power-seeking AI

Maybe not 'insight', but re. 'accuracy' this sort of decomposition is often in the toolbox of better forecasters. I think the longest path I evaluated in a question had 4 steps rather than 6, and I think I've seen other forecasters do similar things on occasion. (The general practice of 'breaking down problems' to evaluate sub-issues is recommended in Superforecasting, IIRC.)

I guess the story for why this works in geopolitical forecasting is that folks tend to overestimate the chance 'something happens', and tend to be underdamped in increasing the likelihood of something based on suggestive antecedents (e.g. the chance of a war given an altercation, etc.). So attending to "Even if A, for it to lead to D one should attend to P(B|A), P(C|B), etc.", tends to lead to downwards corrections.

Naturally, you can mess this up, although it's not obvious whether you are at greater risk arranging your decomposed considerations conjunctively or disjunctively: "All of A-E must be true for P to be true" ~also means "if any of ¬A-¬E are true, then ¬P". In natural language and heuristics, I can imagine "Here are several different paths to P, and each of these seems not-too-improbable, so P must be highly likely" also leading one astray.
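To make both failure modes concrete, here is a minimal sketch with entirely made-up numbers (none drawn from the report or any real forecast) of how chaining conditionals along a conjunctive path compares with a direct guess, and why 'several not-too-improbable paths' doesn't make the outcome highly likely either:

```python
# Toy illustration (hypothetical numbers): chaining conditionals along a
# conjunctive path versus directly estimating the endpoint.
from math import prod

# P(step_i | all previous steps hold) for a multi-step path to the outcome.
conditionals = [0.8, 0.7, 0.6, 0.7, 0.5, 0.6]

path_estimate = prod(conditionals)   # ~0.07 once the steps are multiplied out
gut_estimate = 0.3                   # a direct, undamped guess at the endpoint

print(f"decomposed estimate: {path_estimate:.2f}")
print(f"direct estimate:     {gut_estimate:.2f}")

# The disjunctive failure mode from the comment: several 'not-too-improbable'
# independent paths still do not make the outcome highly likely.
paths = [0.07, 0.05, 0.04]
p_any = 1 - prod(1 - p for p in paths)   # ~0.15, not 'highly likely'
print(f"P(at least one path): {p_any:.2f}")
```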

Thoughts on being overqualified for EA positions

Similar to Ozzie, I would guess the 'over-qualified' hesitation often has less to do with "I fear I would be under-utilised and lose interest if I took a more junior role, and thus do less than the most good I could", and more to do with the straightforward "Roles which are junior, have unclear progression, and don't look amazing on my CV if I move on aren't as good for my career as other opportunities available to me."

This opportunity cost (as the OP notes) is not always huge, and it can be outweighed by other considerations. But my guess is it is often a substantial disincentive:

  • In terms of traditional/typical kudos/cred/whatever, getting in early on something which is going up like a rocket offers a great return on invested reputational or human capital. It is a riskier return, though: by analogy, I'd guess being "employee #10" for some start-ups is much better than working at Google, but for the median start-up it is worse.
  • Many EA orgs have been around for a few years now, and their track record so far might incline one against expecting rocketing success by conventional and legible metrics. (Not least, many of them are targeting a very different sort of success than a tech enterprise, consulting firm, hedge fund, etc. etc.)
  • Junior positions at conventionally shiny high-status things have good career capital. I'd guess my stint as a junior doctor 'looks good' on my CV even when applying to roles with ~nothing to do with clinical practice. Ditto stuff like ex-googler, ex-management consultant, ?ex-military officer, etc. "Ex-junior-staffer-at-smallish-nonprofit" usually won't carry the same cachet. 
  • As careers have a lot of cumulative/progressive characteristics, 'sideways' moves earlier on may have a disproportionate impact on one's trajectory. E.g. 'longtermist careerists' might want very outsized compensation for such a 'tour' to make up for the compounded loss of earnings (in expectation) from pausing their climb up various ladders.

None of this means 'EA jobs' are only for suckers. There are a lot of upsides even from a 'pure careerism' perspective (especially for particular career plans), and obvious pluses for folks who value the mission/impact too. But insofar as folks aren't perfectly noble, and care somewhat about the former as well as the latter (ditto other things like lifestyle, pay, etc. etc.) these disincentives are likely to be stronger pushes for more 'overqualified' folks. 

And insofar as EA orgs would like to recruit more 'overqualified' folks for their positions (despite, as I understand it, their job openings being broadly oversubscribed with willing and able - but perhaps not 'overqualified' - applicants), I'd guess it's fairly heavy-going as these disincentives are hard to 'fix'.

Launching a new resource: 'Effective Altruism: An Introduction'

Although I understand the nationalism example isn't meant to be analogous, my impression is this structural objection only really applies when our situation is analogous.

If historically EA paid a lot of attention to nationalism (or trans-humanism, the scepticism community, or whatever else) but had by-and-large collectively 'moved on' from these, contemporary introductions to the field shouldn't feel obliged to cover them extensively, nor treat the relative merits of what the field focuses on now versus then as an open question.

Yet, however you slice it, EA as it stands now hasn't by-and-large 'moved on' to being 'basically longtermism', where its interest in (e.g.) global health is clearly atavistic. I'd be willing to go to bat for substantial slants towards longtermism, as (I aver) its over-representation amongst the more highly engaged, and the disproportionate migration of folks to longtermism from other areas, warrant the claim that an epistocratic weighting of the consensus would favour longtermism over anything else. But even this has limits, which 'greatly favouring longtermism over everything else' exceeds.

How you choose to frame an introduction is up for grabs, and I don't think 'the big three' is the only appropriate game in town. Yet if your alternative way of framing an introduction to X ends up strongly favouring one aspect (further, the one you are sympathetic to) disproportionate to any reasonable account of its prominence within X, something has gone wrong.

Launching a new resource: 'Effective Altruism: An Introduction'

Per others: this selection doesn't really 'lean towards a focus on longtermism' so much as 'almost exclusively focus on longtermism': roughly, any 'object level' cause which isn't longtermism gets only a passing mention, whilst longtermism is the subject of 3/10 of the selection. Even some not-explicitly-longtermist inclusions (e.g. Tetlock, MacAskill, Greaves) 'lean towards' longtermism either in terms of subject matter or affinity.

Despite being a longtermist myself, I think this is dubious for a purported 'introduction to EA as a whole': EA isn't all-but-exclusively longtermist in either corporate thought or deed.

Were I a more suspicious sort, I'd also find the 'impartial' rationales offered for why non-longtermist things keep getting the short (if not pointy) end of the stick scarcely credible:

i) we decided to focus on our overall worldview and way of thinking rather than specific cause areas (we also didn’t include a dedicated episode on biosecurity, one of our 'top problems'), and ii) both are covered in the first episode with Holden Karnofsky, and we prominently refer people to the Bollard and Glennerster interviews in our 'episode 0', as well as the outro to Holden's episode.

The first episode with Karnofsky also covers longtermism and AI - at least as much as global health and animals. Yet this didn't stop episodes on the specific cause areas of longtermism (Ord) and AI (Christiano) being included. Ditto how the instance of "entrepreneurship, independent thinking, and general creativity" one wanted to highlight just so happens to be a longtermist intervention (versus, e.g. this).

Proposed Longtermist Flag

I also thought along similar lines, although (lacking subtlety) I thought you could shove in a light cone from the dot, which can serve double duty as the expanding future. Another thing you could do is play with a gradient so this curve/the future gets brighter as well as bigger, but perhaps someone who can at least successfully colour in has a comparative advantage here.


Progress Open Thread: March 2021

A less important motivation/mechanism is that probabilities/ratios (unlike odds) are bounded above by one. For rare events, 'doubling the probability' versus 'doubling the odds' gives basically the same answer, but not so for more common events. Loosely, flipping a coin three times 'trebles' my risk of observing it land tails, but the probability isn't 1.5. (cf).

E.g.

Sibling abuse rates are something like 20% (or 80% depending on your definition). And is the most frequent form of household abuse. This means by adopting a child you are adding something like an additional 60% chance of your other child going through at least some level of abuse (and I would estimate something like a 15% chance of serious abuse). [my emphasis]

If you used the 80% definition instead of the 20% one, then the '4x' risk factor implied by a 60% additional chance (on a 20% base rate) would instead give an additional 240% chance.
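For concreteness, a small sketch (my own, using the 20%/80% base rates above and an illustrative 4x factor) of why scaling odds stays bounded while scaling probabilities does not:

```python
# Sketch: applying a ~4x factor to odds versus to probabilities,
# using the 20% and 80% base rates from the quoted passage.
def prob_to_odds(p):
    return p / (1 - p)

def odds_to_prob(o):
    return o / (1 + o)

factor = 4
for base in (0.20, 0.80):
    naive = base * factor                                  # scaling the probability
    via_odds = odds_to_prob(prob_to_odds(base) * factor)   # scaling the odds
    print(f"base {base:.0%}: 4x probability -> {naive:.0%}, 4x odds -> {via_odds:.0%}")

# base 20%: 4x probability -> 80% (the '+60%' above), 4x odds -> 50%
# base 80%: 4x probability -> 320% (impossible),      4x odds -> 94%
```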

[(Of interest, 20% to 38% absolute likelihood would correspond to an odds ratio of ~2.5, in the ballpark of the 3-4x risk factors discussed before. So maybe extrapolating extreme-event ratios to less-extreme-event ratios can do okay if you keep them in odds form. The underlying story might have something to do with logistic distributions closely resembling normal distributions (save at the tails): shifting a normal distribution along the x-axis so that (non-linearly) more or less of it lies over a threshold loosely resembles adding increments to log-odds (equivalent to multiplying the odds by a constant multiple), giving (non-linear) changes when traversing a logistic CDF.

But it still breaks down when extrapolating very large ORs from very rare events. Perhaps the underlying story here has something to do with higher kurtosis: '>2SD events' are only (I think) ~5X more likely than >3SD events for logistic distributions, versus ~20X in normal-distribution land. So large shifts in the likelihood of rare(r) events would imply large logistic-land shifts (which dramatically change the whole distribution, e.g. an OR of 10 takes evens --> >90%), but much more modest shifts in normal-land (e.g. moving up an SD gives an OR > 10 for previously 3SD events, but ~2 for previously 'above average' ones).]
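The tail-ratio comparison is easy to check numerically. A quick sketch (my own, assuming both distributions are rescaled to unit standard deviation); the ratios it prints land in the same ballpark as the ~5X and ~20X figures above:

```python
# Quick check of the tail ratios mentioned above: how much more likely a
# '>2 SD' event is than a '>3 SD' event under a normal versus a logistic
# distribution, with both scaled to unit standard deviation.
import math
from scipy.stats import logistic, norm

unit_sd_scale = math.sqrt(3) / math.pi   # logistic scale parameter giving SD = 1

normal_ratio = norm.sf(2) / norm.sf(3)
logistic_ratio = logistic.sf(2, scale=unit_sd_scale) / logistic.sf(3, scale=unit_sd_scale)

print(f"normal:   P(>2SD)/P(>3SD) ≈ {normal_ratio:.1f}")    # ≈ 17
print(f"logistic: P(>2SD)/P(>3SD) ≈ {logistic_ratio:.1f}")  # ≈ 6
```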

Tristan Cook's Shortform

Most views in population ethics can entail weird/intuitively toxic conclusions (cf. the large number of 'X conclusions' out there). Trying to weigh these up comparatively is fraught.

In your comparison, it seems there's a straightforward dominance argument if the 'OC' and 'RC' are the things we should be paying attention to. Your archetypal classical utilitarian is also committed to the OC, as a 'large increase in suffering for one individual' can be outweighed by a large enough number of smaller decreases in suffering for others - aggregation still applies to negative numbers for classical utilitarians. So the negative view fares better, as the classical one has to bite one extra bullet.

There's also the worry that in a pairwise comparison one might inadvertently pick a counterexample for one 'side' that turns the screws less than the counterexample for the other. Most people find the 'very repugnant conclusion' (where not only Z > A, but 'a large enough Z plus some arbitrary number of people with awful lives > A') even more costly than the 'standard' RC. So using the more or the less costly variant on one side of the scales may alter intuitive responses.

By my lights, it seems better to have some procedure for picking and comparing cases which isolates the principle being evaluated. Ideally, the putative counterexamples share counterintuitive features both theories endorse, but differ in that one explores the worst case that can be constructed which the principle would avoid, whilst the other explores the worst case that can be constructed with the principle included.

It seems the main engine of RC-like examples is the aggregation - it feels like one is being nickel-and-dimed when a lot of very small things are taken to outweigh one very large thing, even though the aggregate is much higher. The typical worry a negative view avoids is trading major suffering for sufficient amounts of minor happiness - most people think this is priced too cheaply, particularly at extremes. The typical worry about the (absolute) negative view itself is that it fails to price happiness at all - yet we're often inclined to say enduring some suffering (or accepting some risk of suffering) is a good deal, at least for some extreme of 'upside'.

So with this procedure the putative counterexample to the classical view would be the vRC. Although negative views may not give crisp recommendations against the RC (e.g. if we stipulate no one ever suffers in any of the worlds, but people are more or less happy), they clearly recommend against the vRC: the great suffering isn't outweighed by the large amounts of relatively trivial happiness (but it would be on the classical view).

Yet with this procedure, we can construct a much worse counterexample to the negative view than the OC - by my lights, far more intuitively toxic than the already costly vRC (owed to Carl Shulman). Suppose A is a vast but trivially-imperfect utopia - trillions (or googolplexes, or TREE(TREE(3))) of lives of all-but-perfect bliss, but with each enduring an episode of trivial discomfort or suffering (e.g. a pin-prick, waiting in a queue for an hour). Suppose Z is a world with a (relatively) much smaller number of people (e.g. a billion) living like the child in Omelas. The negative view ranks Z > A: it only considers the pinpricks in the utopia, and sufficiently huge aggregates of these can be worse than the awful lives (the classical view, which wouldn't discount all the upside in A, would not rank them this way). In general, this negative view can countenance any amount of awful suffering if this is the price to pay to abolish a near-utopia of sufficient size.
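To see the aggregation mechanics with entirely made-up numbers (mine, purely illustrative), a minimal sketch of how a suffering-only tally ranks Z above A whilst a classical tally ranks A above Z:

```python
# Toy numbers (hypothetical, purely illustrative) for the A-vs-Z comparison above.
A_population = 10**24          # vast, trivially-imperfect utopia
A_bliss_per_life = 100.0       # ignored by the suffering-only view
A_pinprick_per_life = -1e-6    # a moment of trivial suffering in each life

Z_population = 10**9           # much smaller, Omelas-like world
Z_suffering_per_life = -1e6    # awful lives

def suffering_only(world):
    """Aggregate only the negative welfare terms."""
    return sum(n * w for n, w in world if w < 0)

def classical(world):
    """Aggregate all welfare terms, positive and negative."""
    return sum(n * w for n, w in world)

A = [(A_population, A_bliss_per_life), (A_population, A_pinprick_per_life)]
Z = [(Z_population, Z_suffering_per_life)]

# Suffering-only totals: A ~ -1e18, Z ~ -1e15, so the negative view ranks Z > A.
# Classical totals:      A ~ +1e26, Z ~ -1e15, so the classical view ranks A > Z.
print(suffering_only(A) < suffering_only(Z))   # True
print(classical(A) > classical(Z))             # True
```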

(This axiology is also anti-egalitarian (consider replacing half the people in A with half the people in Z) and - depending on how you litigate it - susceptible to a sadistic conclusion. If the axiology claims welfare is capped above by 0, then there's never an option of adding positive-welfare lives, so nothing can be sadistic. If instead it discounts positive welfare, then it prefers (given half of A) adding half of Z (very negative-welfare lives) to adding the other half of A (very positive lives).)

I take this to make absolute negative utilitarianism (similar to average utilitarianism) a non-starter. In the same way folks look for a better articulation of the egalitarian-esque commitments that make one (at least initially) sympathetic to average utilitarianism, so folks with negative-esque sympathies may look for better articulations of this commitment. One candidate could be that what one is really interested in is cases of severe rather than trivial suffering, so this, rather than suffering in general, should be the object of sole/lexically prior concern. (Obviously there are many other lines, and corresponding objections to each.)

But note this is an anti-aggregation move. Analogous ones are available for classical utilitarians to avoid the (v/)RC (e.g. a critical-level view which discounts positive welfare below some threshold). So if one is trying to evaluate a particular principle out of a set, it would be wise to aim for 'like-for-like': e.g. perhaps a 'negative plus a lexical threshold' view is more palatable than classical util, yet CLU would fare even better than either.

Complex cluelessness as credal fragility

[Mea culpa re. messing up the formatting again]

1) I don't closely follow the current state of play in terms of 'shorttermist' evaluation. The reply I hope (e.g.) a GiveWell analyst would make to (e.g.) "Why aren't you factoring in impacts on climate change for these interventions?" would be some mix of:

a) "We have looked at this, and we're confident we can bound the magnitude of this effect to pretty negligible values, so we neglect them in our write-ups etc."

b) "We tried looking into this, but our uncertainty is highly resilient (and our best guess doesn't vary appreciably between interventions) so we get higher yield investigating other things."

c) "We are explicit our analysis is predicated on moral (e.g. "human lives are so much more important than animals lives any impact on the latter is ~moot") or epistemic (e.g. some 'common sense anti-cluelessness' position) claims which either we corporately endorse and/or our audience typically endorses." 

Perhaps such hopes would be generally disappointed.

2) Similar to above, I don't object to (re. animals) positions like "Our view is this consideration isn't a concern as X" or "Given this consideration, we target Y rather than Z", or "Although we aim for A, B is a very good proxy indicator for A which we use in comparative evaluation."

But I at least used to see folks appeal to motivations which obviate (inverse/) logic of the larder issues, particularly re. diet change ("Sure, it's actually really unclear whether becoming vegan reduces or increases animal suffering overall, but the reason to be vegan is to signal concern for animals and so influence broader societal attitudes, and this effect is much more important and what we're aiming for"). Yet this overriding motivation typically only 'came up' in the context of this discussion, and corollary questions like:

*  "Is maximizing short term farmed animal welfare the best way of furthering this crucial goal of attitude change?"

* "Is encouraging carnivores to adopt a vegan diet the best way to influence attitudes?"

* "Shouldn't we try and avoid an intervention like v*ganism which credibly harms those we are urging concern for, as this might look bad/be bad by the lights of many/most non-consequentialist views?" 

seemed seldom asked. 

Naturally I hope this is a relic of my perhaps jaundiced memory.

Complex cluelessness as credal fragility

FWIW, I don't think 'risks' is quite the right word: sure, if we discover a risk which was so powerful and so tractable that we end up overwhelming the good done by our original intervention, that obviously matters. But the really important thing there, for me at least, is the fact that we apparently have a new and very powerful lever for impacting the world. As a result, I would care just as much about a benefit which in the medium term would end up being worth >>1x the original target good (e.g. "Give Directly reduces extinction risk by reducing poverty, a known cause of conflict"); the surprisingly-high magnitude of an incidental impact is what is really catching my attention, because it suggests there are much better ways to do good.

(Apologies in advance if I'm rehashing unhelpfully)

The usual cluelessness scenarios are more about the possibility that there may be a powerful lever for impacting the future, and that your intended intervention may be pulling it in the wrong direction (rather than a 'confirmed discovery'). Say your estimate of the EV of GiveDirectly's effect on conflict has a distribution with a mean of zero but an SD of 10x the magnitude of the benefits you had previously estimated. If it were (e.g.) +10, there's a natural response of 'shouldn't we try something which targets this on purpose?'; if it were 0, we wouldn't attend to it further; if it were -10, you wouldn't give to (now net EV = "-9") GiveDirectly.

The right response where all three scenarios are credible (plus all the intermediates), but you're unsure which one you're in, isn't intuitively obvious (at least to me). Even if (like me) you're sympathetic to pretty doctrinaire standard EV accounts (i.e. you quantify this uncertainty + all the others, just 'run the numbers' and take the best EV), this approach seems to ignore the wide variance, which looks worthy of further attention.

The OP tries to reconcile this with the standard approach by saying this indeed often should be attended to, but under the guise of value of information rather than something 'extra' to orthodoxy. Even though we should still go with our best guess if we had to decide (so expectation-neutral but high-variance terms 'cancel out'), we might have the option to postpone our decision and improve our guesswork. Whether to take that option should be governed by how resilient our uncertainty is. If your central estimate of GiveDirectly's effect on conflict would move on average by 2 units if you spent an hour thinking about it, that seems an hour well spent; if you thought you could spend a decade on it and remain where you are, going with the current best guess looks better.
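As a toy illustration of this value-of-information framing (numbers entirely made up, and only one way to model it): the direct benefit is one unit, the incidental effect has a prior mean of zero, and investigating is only worth anything if it might move the central estimate enough to change the decision.

```python
# Toy value-of-information sketch (made-up numbers): direct benefit of the
# donation is +1 unit; the incidental effect on conflict has prior mean 0.
# Investigating reveals a better central estimate m of that incidental effect;
# we only gain if m turns out low enough that we'd switch to 'don't give'.
import numpy as np

rng = np.random.default_rng(0)
direct_benefit = 1.0

def value_of_investigating(update_sd, n=100_000):
    """Expected gain from learning m before deciding, if investigation
    moves our central estimate by roughly update_sd on average."""
    m = rng.normal(0.0, update_sd, n)
    decide_now = direct_benefit + m                      # committed to giving
    decide_later = np.maximum(direct_benefit + m, 0.0)   # can switch to 'don't give'
    return (decide_later - decide_now).mean()

for sd in (0.1, 2.0, 10.0):
    print(f"expected movement {sd:>4}: VoI ≈ {value_of_investigating(sd):.2f}")

# Resilient uncertainty (tiny expected movement) makes investigation nearly
# worthless; large expected movement makes it well worth an hour's thought.
```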

This can be put in plain(er) English (although familiar-to-EA jargon like 'EV' may remain). Yet there are reasons to be hesitant about the orthodox approach (even though I think the case in favour is ultimately stronger): besides the usual bullets, we would be kidding ourselves if we ever really had in our heads an uncertainty distribution to arbitrary precision, and maybe our uncertainty isn't even remotely well-approximated by the objects we manipulate in standard models of it. Or (owed to Andreas), even if so, similar to how rule-consequentialism may be better than act-consequentialism, some other epistemic policy might get better results than applying the orthodox approach in these cases of deep uncertainty.

Insofar as folks are more sympathetic to this, they would not want to be deflationary, and might urge investment in new techniques/vocab to grapple with the problem. They may also think we don't have a good 'answer' yet for what to do in these situations, so may hesitate to give the 'accept there's uncertainty but don't be paralysed by it' advice that you and I would. Maybe these issues are an open problem we should try to figure out better before pressing on.

Complex cluelessness as credal fragility

Belatedly:

I read the stakes here differently to you. I don't think folks thinking about cluelessness see it as substantially an exercise in developing a defeater to 'everything which isn't longtermism'. At least, that isn't my interest, and I think the literature has focused on AMF etc. more as a salient example to explore the concepts with, rather than an important subject to apply them to.

The AMF discussions around cluelessness in the OP are intended as a toy example - if you like, deliberating purely on "is it good or bad to give to AMF versus this particular alternative?" instead of "Out of all options, should it be AMF?" Parallel to you: although I do think (per the OP) AMF donations are net good, I also think (per the contours of your reply) it should be excluded as a promising candidate for the best thing to donate to. If what really matters is how the deep future goes, and the levers on this accessible at present are things like x-risk, then interventions which are only tangentially related to these are so unlikely to be best that they can be ruled out ~immediately.

So if that isn't a main motivation, what is? Perhaps something like this:

1) How to manage deep uncertainty over the long-run ramifications of one's decisions is a challenge across EA-land - particularly acute for longtermists, but also elsewhere: most would care about risks that a charitable intervention could prove counter-productive in the medium term. In most cases, these mechanisms for something to 'backfire' are fairly trivial, but how seriously the credible ones should be investigated is up for grabs.

Although "just be indifferent if it is hard to figure out" is a bad technique which finds little favour, I see a variety of mistakes in and around here. E.g.:

a) People not tracking when the ground of appeal for an intervention has changed. Although I don't see this with AMF, I do see this in and around animal advocacy. One crucial consideration around here is wild animal suffering (WAS), particularly an 'inverse logic of the larder' (see), such as "per area, a factory farm has a lower intensity of animal suffering than the environment it replaced".

Even if so, it wouldn't follow that the best thing to do would be to be as carnivorous as possible; there are various lines of response. One is to say that the key objective of animal advocacy is to encourage greater concern for animal welfare, so that this can ramify through to benefits in the medium term. However, if this is the rationale, metrics of 'animal suffering averted per $' remain prominent despite having minimal relevance. If the aim of the game is attitude change, things like shelters and companion animals (over changes in factory-farmed welfare) start looking a lot more credible again, in virtue of their greater salience.

b) Early (or motivated) stopping across crucial considerations. There are a host of ramifications to population growth which point in both directions (e.g. climate change, economic output, increased meat consumption, larger aggregate welfare, etc.). Although very few folks rely on these when considering interventions like AMF (but cf.), they are often relied upon by those suggesting interventions specifically targeted at fertility: enabling contraceptive access (e.g. more contraceptive access --> fewer births --> less of a poor meat eater problem), or reducing rates of abortion (e.g. less abortion --> more people with worthwhile lives --> greater total utility).

Discussions here are typically marred by proponents either completely ignoring considerations on the 'other side' of the population growth question, or giving very unequal time to them/sheltering behind uncertainty (e.g. "Considerations X, Y, and Z all tentatively support more population growth, admittedly there's A, B, C, but we do not cover those in the interests of time - yet, if we had, they probably would tentatively oppose more population growth"). 

2) Given my fairly deflationary OP, I don't think these problems are best described as cluelessness (versus attending to resilient uncertainty and VoI in fairly orthodox evaluation procedures). But although I think I'm right, I don't think I'm obviously right: if orthodox approaches struggle here, less orthodox ones with representors, incomparability, or other features may be what should be used in decision-making (including for when we should make decisions versus investigate further). If so, then this reasoning looks like a fairly distinct species which could warrant its own label.
