Gregory_Lewis

Researcher (on bio) at FHI


Comments

Help me find the crux between EA/XR and Progress Studies

I'd guess the story might be a) 'XR primacy' (roughly, that x-risk reduction has far bigger bang for one's buck than anything else re. impact) and b) conditional on a), an equivocal view on the value of technological progress: some elements are likely good, others likely bad, so the value of generally 'buying the index' of technological development (as I take Progress Studies to be keen on) is uncertain.

"XR primacy"

Other comments have already illustrated the main points here, sparing readers another belaboured rehearsal from me. The rough story, borrowing from the initial car analogy, is that you have piles of open road/runway available if you need to use it, so velocity and acceleration are in themselves much less important than direction - you can cover much more ground in expectation if you make sure you're not headed into a crash first.

This typically (but not necessarily, cf.) implies longtermism. 'Global catastrophic risk', as a longtermist term of art, plausibly excludes the vast majority of things common sense would call 'global catastrophes'. E.g.:

[W]e use the term “global catastrophic risks” to refer to risks that could be globally destabilizing enough to permanently worsen humanity’s future or lead to human extinction. (Open Phil)

My impression is a 'century more poverty' probably isn't a GCR in this sense. As the (pre-industrial) normal, the track record suggests it wasn't globally destabilising to humanity or human civilisation. Even more so if the question is one of a somewhat-greater versus somewhat-lower rate of its elimination.

This makes its continued existence no less an outrage to the human condition. But, weighed across the scales against threats to humankind's entire future, it becomes a lower priority. Insofar as these things are traded off (which seems implicit in any prioritisation, given both compete for resources, whether or not there are any direct cross-purposes in activity), the currency of XR reduction has much greater value.

Per discussion, there are a variety of ways the story sketched above could be wrong:

  • Longtermist consequentialism (the typical, if not uniquely necessary, motivation for the above) is false, so our exchange rate for common sense global catastrophes (inter alia) versus XR should be higher.
  • XR is either very low, or intractable, so XR reduction isn't a good buy even on the exchange rate XR views endorse. 
  • Perhaps the promise of the future could be lost not with a bang but with a whimper. Perhaps prolonged periods of economic stagnation should be substantial subjects of XR concern in their own right, so PS-land and XR-land converge on PS-y aspirations.

I don't see Pascalian worries as looming particularly large apart from these. XR-land typically takes the disjunction of risks and the envelope of mitigation to have substantial, non-Pascalian values. Although costly activity that buys an absolute risk reduction of one-in-trillions looks dubious to common sense, one-in-thousands (or more) is commonplace (and commonsensical) when the stakes are high enough.

It's also not clear how much of a strike against a view it is that Pascalian counter-examples can be constructed from its resources, even if the view wouldn't endorse them and lacks a crisp story of decision-theoretic arcana for why not. Facially, PS seems susceptible to the same (e.g. a PS-er's work is worth billions per year, given the yield if you compound an (in expectation) 0.0000001% marginal increase in world GDP growth for centuries).
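To see how that kind of valuation arises, here is a minimal sketch of the compounding arithmetic, with entirely illustrative figures for world GDP, baseline growth, and the marginal effect (none of these are claims made above):

```python
# Illustrative only: how a tiny (expected) bump to the growth rate compounds over centuries.
WORLD_GDP = 100e12   # assumed ~$100 trillion of current world output
BASE_GROWTH = 0.03   # assumed 3% baseline annual growth
DELTA = 1e-9         # an (in expectation) 0.0000001 percentage point increase in the growth rate
YEARS = 200

baseline = WORLD_GDP * (1 + BASE_GROWTH) ** YEARS
boosted = WORLD_GDP * (1 + BASE_GROWTH + DELTA) ** YEARS

# With these placeholder inputs the gap is on the order of billions of dollars of extra
# output in year 200 alone - which is how 'worth billions per year' valuations of
# minuscule marginal effects get generated.
print(f"Extra output in year {YEARS}: ${boosted - baseline:,.0f}")
```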


Buying the technological progress index?

Granting the story sketched above, there's not a straightforward upshot on whether this makes technological progress generally good or bad. The ramifications of any given technological advance for XR are hard to forecast; aggregating over all of them to get a moving average is harder still. Yet there seems a lot to temper the fairly unalloyed enthusiasm around technological progress I take to be the typical attitude in PS-land.

  • There's obviously the appeal to the above sense of uncertainty: if at least significant bits of the technological progress portfolio credibly have very bad dividends for XR, you probably hope humanity is pretty selective and cautious in its corporate investments. It would also be generally surprising if what is best for XR were also best for 'progress' (cf.)
  • The recent track record doesn't seem greatly reassuring. The dual-use worries around nuclear technology remain profound 70+ years after its initial development, and the prospects of 'derisking' these downsides remain remote. It's hard to assess the true ex ante probability of a strategic nuclear exchange during the Cold War, or exactly how disastrous it would have been, but pricing in reasonable estimates of both probably takes a large chunk out of the generally sunny story of progress we observe ex post over the last century.
  • Insofar as folks consider disasters arising from emerging technologies (like AI) to represent the bulk of XR, this supplies concern against their rapid development in particular, and against exuberant technological development which may generate further dangers in general.

Some of this may just be a confusion of messaging (e.g. even though PS folks portray themselves as more enthusiastic and XR folks less so, both would actually be similarly un/enthusiastic for each particular case). I'd guess more of it is a substantive disagreement about the balance of promise and danger posed by given technologies (and the prospects for, and best means of, mitigating the latter), which then feeds into more or less 'generalized techno-optimism'.

But I'd guess the majority of the action is around the 'modal XR account' of XR being a great moral priority, which can be significantly reduced, and is substantially composed of risks from emerging technology. "Technocircumspection" seems a fairly sound corollary from this set of controversial conjuncts.   

[Link] 80,000 Hours Nov 2020 annual review

[Own views etc.]

I'm unsure why this got downvoted, but I strongly agree with the sentiment in the parent. Although I understand the impulse of "We're all roughly on the same team here, so we can try and sculpt something better than the typically competitive/adversarial relationships between firms, or employers and employees", I think this is apt to mislead one into ideas which are typically economically short-sighted, often morally objectionable, and occasionally legally dubious. 

In the extreme case, it's obviously unacceptable for Org X not to hire candidate A (their best applicant) because they believe it's better for A to stay at Org Y. Not only (per the parent) is A probably a better judge of where they are best placed,[1] but Org X screws over both itself (it now appoints someone it thinks is not quite as good) and A (who doesn't get the job they want), for the benefit of Org Y.

These sorts of oligopsonistic machinations are at best a breach of various fiduciary duties (e.g. Org X's duty to its donors to use their money to get the best staff, rather than making opaque de facto transfers of labour to another organisation), and at least colourably illegal in many jurisdictions due to labour law around anti-trust, non-discrimination, etc. (see)

Similar sentiments apply to less extreme examples, such as not proactively 'poaching' (the linked case above was about alleged "no cold call" agreements). The typical story for why these practices are disliked is a mix of economic efficiency arguments (e.g. labour market liquidity; competition over conditions as a mechanism for higher-performing staff to match into higher-performing orgs) and worker welfare ones (e.g. the net result typically disadvantages workers by suppressing their pay and conditions, and reducing their ability to move to roles they prefer).

I think these rationales apply roughly as well to EA-land as to anywhere-else-land. Orgs should accept that staff may occasionally leave for other orgs for a variety of reasons. If they find they consistently lose out for familiar reasons, they should either get better or accept the consequences of remaining worse.


[1]: Although, for the avoidance of doubt, I think it is wholly acceptable for people to switch EA jobs for wholly 'non-EA' reasons - e.g. "Yeah, I expect I'd do less good at Org X than Org Y, but Org X will pay me 20% more and I want a higher standard of living." Moral sainthood is scarce as well as precious. It is unrealistic to expect all candidates to be saintly in this sense, and mutual pretence to the contrary is unhelpful.

If anything, 'no poaching' (etc.) practices are even worse in these cases than the more saintly 'moving so I can do even more good!' rationale. In the latter case, Orgs are merely being immodest in presuming to know better than applicants what their best opportunity to contribute is; in the former, Orgs conspire to make their employees' lives worse than they could otherwise be.

Draft report on existential risk from power-seeking AI

Maybe not 'insight', but re. 'accuracy' this sort of decomposition is often in the tool box of better forecasters. I think the longest path I evaluated in a question had 4 steps rather than 6, and I think I've seen other forecasters do similar things on occasion. (The general practice of 'breaking down problems' to evaluate sub-issues is recommended in Superforecasting IIRC).

I guess the story for why this works in geopolitical forecasting is that folks tend to overestimate the chance 'something happens', and tend to be underdamped in increasing the likelihood of something based on suggestive antecedents (e.g. the chance of a war given an altercation, etc.). So attending to "Even if A, for it to lead to D one should attend to P(B|A), P(C|B), etc." tends to lead to downwards corrections.

Naturally, you can mess this up. Although it's not obvious whether you are at greater risk arranging your decomposed considerations conjunctively or disjunctively: "All of A-E must be true for P to be true" ~also means "if any of ¬A-¬E are true, then ¬P". In natural language and heuristics, I can imagine "Here are several different paths to P, and each of these seems not-too-improbable, so P must be highly likely" could also lead one astray.
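As a minimal sketch of the two framings (the step and path probabilities below are made up purely for illustration):

```python
from math import prod

# Conjunctive framing: a 4-step path A -> B -> C -> D, with hypothetical conditionals.
steps = [0.5, 0.4, 0.4, 0.3]            # P(A), P(B|A), P(C|B), P(D|C) - illustrative values only
p_chain = prod(steps)
print(f"P(D) via the chain: {p_chain:.3f}")    # 0.024 - each individually plausible step deflates the total

# Disjunctive framing: several 'not-too-improbable' (and here assumed independent) paths to P.
paths = [0.2, 0.15, 0.1]                # illustrative path probabilities
p_any = 1 - prod(1 - p for p in paths)
print(f"P(P) via any path: {p_any:.3f}")       # ~0.39 - well short of 'highly likely'
```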

Thoughts on being overqualified for EA positions

Similar to Ozzie, I would guess the 'over-qualified' hesitation often has less to do with "I fear I would be under-utilised and become uninterested if I took a more junior role, and thus do less than the most good I could", and more to do with a more straightforward "Roles which are junior, have unclear progression, and don't look amazing on my CV if I move on aren't as good for my career as other opportunities available to me."

This opportunity cost (as the OP notes) is not always huge, and it can be outweighed by other considerations. But my guess is it is often a substantial disincentive:

  • In terms of traditional/typical kudos/cred/whatever, getting in early on something which is going up like a rocket offers a great return on invested reputational or human capital. It is a riskier return, though: by analogy, I'd guess being "employee #10" at some start-ups is much better than working at Google, but for the median start-up it is worse.
  • Many EA orgs have been around for a few years now, and their track record so far might incline one against expecting rocketing success by conventional and legible metrics. (Not least, many of them are targeting a very different sort of success than a tech enterprise, consulting firm, hedge fund, etc.)
  • Junior positions at conventionally shiny high-status things have good career capital. I'd guess my stint as a junior doctor 'looks good' on my CV even when applying to roles with ~nothing to do with clinical practice. Ditto stuff like ex-googler, ex-management consultant, ?ex-military officer, etc. "Ex-junior-staffer-at-smallish-nonprofit" usually won't carry the same cachet. 
  • As careers have a lot of cumulative/progressive characteristics, 'sideways' moves earlier on may have a disproportionate impact on one's trajectory. E.g. 'longtermist careerists' might want very outsized compensation for such a 'tour' to make up for the compounded loss of earnings (in expectation) from pausing their climb up various ladders.

None of this means 'EA jobs' are only for suckers. There are a lot of upsides even from a 'pure careerism' perspective (especially for particular career plans), and obvious pluses for folks who value the mission/impact too. But insofar as folks aren't perfectly noble, and care somewhat about the former as well as the latter (ditto other things like lifestyle, pay, etc. etc.) these disincentives are likely to be stronger pushes for more 'overqualified' folks. 

And insofar as EA orgs would like to recruit more 'overqualified' folks for their positions (despite, as I understand it, their job openings being broadly oversubscribed with willing and able - but perhaps not 'overqualified' - applicants), I'd guess it's fairly heavy-going as these disincentives are hard to 'fix'.

Launching a new resource: 'Effective Altruism: An Introduction'

Although I understand the nationalism example isn't meant to be analogous, my impression is this structural objection only really applies when our situation is analogous.

If historically EA had paid a lot of attention to nationalism (or trans-humanism, the scepticism community, or whatever else) but had by-and-large collectively 'moved on' from these, contemporary introductions to the field shouldn't feel obliged to cover them extensively, nor to treat the relative merits of what it focuses on now versus then as an open question.

Yet, however you slice it, EA as it stands now hasn't by-and-large 'moved on' to be 'basically longtermism', such that its interest in (e.g.) global health is clearly atavistic. I'd be willing to go to bat for substantial slants to longtermism, as (I aver) its over-representation amongst the more highly engaged, and the disproportionate migration of folks to longtermism from other areas, warrant claims that an epistocratic weighting of the consensus would favour longtermism over anything else. But even this has limits, which 'greatly favouring longtermism over everything else' exceeds.

How you choose to frame an introduction is up for grabs, and I don't think 'the big three' is the only appropriate game in town. Yet if your alternative way of framing an introduction to X ends up strongly favouring one aspect (further, the one you are sympathetic to) disproportionate to any reasonable account of its prominence within X, something has gone wrong.

Launching a new resource: 'Effective Altruism: An Introduction'

Per others: this selection doesn't really 'lean towards a focus on longtermism' so much as 'almost exclusively focus on longtermism': roughly, any 'object level' cause which isn't longtermism gets at most a passing mention, whilst longtermism is the subject of 3/10 of the selection. Even some not-explicitly-longtermist inclusions (e.g. Tetlock, MacAskill, Greaves) 'lean towards' longtermism either in subject matter or affinity.

Despite being a longtermist myself, I think this is dubious for a purported 'introduction to EA as a whole': EA isn't all-but-exclusively longtermist in either corporate thought or deed.

Were I a more suspicious sort, I'd also find the 'impartial' rationales offered for why non-longtermist things keep getting the short (if not pointy) end of the stick scarcely credible:

i) we decided to focus on our overall worldview and way of thinking rather than specific cause areas (we also didn’t include a dedicated episode on biosecurity, one of our 'top problems'), and ii) both are covered in the first episode with Holden Karnofsky, and we prominently refer people to the Bollard and Glennerster interviews in our 'episode 0', as well as the outro to Holden's episode.

The first episode with Karnofsky also covers longtermism and AI - at least as much as global health and animals. Yet this didn't stop episodes on the specific cause areas of longtermism (Ord) and AI (Christiano) being included. Ditto that the instance of "entrepreneurship, independent thinking, and general creativity" one wanted to highlight just so happens to be a longtermist intervention (versus, e.g., this).

Proposed Longtermist Flag

I also thought along similar lines, although (lacking subtlety) I thought you could shove in a light cone from the dot, which can serve double duty as the expanding future. Another thing you could do is play with a gradient so this curve/the future gets brighter as well as bigger, but perhaps someone who can at least successfully colour in has a comparative advantage here.


Progress Open Thread: March 2021

A less important motivation/mechanism is that probabilities/ratios (unlike odds) are bounded above by one. For rare events, 'doubling the probability' versus 'doubling the odds' gives basically the same answer, but not so for more common events. Loosely, flipping a coin three times 'trebles' my risk of observing it land tails, but the probability isn't 1.5. (cf.)

E.g.

Sibling abuse rates are something like 20% (or 80% depending on your definition). And is the most frequent form of household abuse. This means by adopting a child you are adding something like an additional 60% chance of your other child going through at least some level of abuse (and I would estimate something like a 15% chance of serious abuse). [my emphasis]

If you used the 80% definition instead of 20%, then the '4x' risk factor implied by a 60% additional chance (with a 20% base rate) would instead give an additional 240% chance - an impossibility, as the total probability would exceed 100%.
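A minimal sketch of the difference, using the figures from the quoted comment purely for illustration:

```python
def scale_probability(p, factor):
    """Naively multiply a probability by a risk factor (can exceed 1 for common events)."""
    return p * factor

def scale_odds(p, factor):
    """Multiply the odds by the same factor, then convert back to a probability."""
    new_odds = (p / (1 - p)) * factor
    return new_odds / (1 + new_odds)

for base in (0.20, 0.80):                  # the two definitions of the base rate
    naive = scale_probability(base, 4)     # a '4x risk factor' applied to the probability itself
    via_odds = scale_odds(base, 4)         # the same factor applied to the odds
    print(f"base {base:.0%}: naive {naive:.0%} (additional {naive - base:.0%}), via odds {via_odds:.0%}")
# base 20%: naive 80% (additional 60%); via odds 50%
# base 80%: naive 320% (additional 240%) - nonsense; via odds 94%
```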

[Of interest, 20% to 38% absolute likelihood would correspond to an odds ratio of ~2.5, in the ballpark of the 3-4x risk factors discussed before. So maybe extrapolating extreme-event ratios to less-extreme-event ratios can do okay if you keep them in odds form. The underlying story might have something to do with logistic distributions closely resembling normal distributions (save at the tails): thinking about shifting a normal distribution along the x axis so that (non-linearly) more or less of it lies over a threshold loosely resembles adding increments to log-odds (equivalent to multiplying the odds by a constant), giving (non-linear) changes when traversing a logistic CDF.

But it still breaks down when extrapolating very large ORs from very rare events. Perhaps the underlying story here has something to do with the logistic's higher kurtosis: '>2SD events' are only ~6X more likely than '>3SD events' for logistic distributions, versus ~17X in normal-distribution land. So large shifts in the likelihood of rare(r) events would imply large logistic-land shifts (which dramatically change the whole distribution, e.g. an OR of 10 takes evens to >90%), but much more modest normal-land shifts (e.g. moving up one SD gives an OR > 10 for previously-3SD events, but only ~5 for previously 'above average' ones).]
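A quick numerical check of those figures (a sketch using scipy's standard normal and logistic distributions; the thresholds are just the ones mentioned above):

```python
from scipy.stats import norm, logistic

def odds(p):
    return p / (1 - p)

# Odds ratio corresponding to moving from 20% to 38% absolute likelihood.
print(f"OR for 20% -> 38%: {odds(0.38) / odds(0.20):.2f}")                  # ~2.45

# Tail ratios: how much more likely are >2SD events than >3SD events?
sd = logistic.std()                                                          # standard logistic SD ~1.81
print(f"logistic: {logistic.sf(2 * sd) / logistic.sf(3 * sd):.1f}x")         # ~6x
print(f"normal:   {norm.sf(2) / norm.sf(3):.1f}x")                           # ~17x

# Odds ratios from shifting a normal distribution up by one SD, at two thresholds.
for z in (3, 0):   # a '3SD event' versus an 'above average' one
    print(f"threshold {z} SD: OR = {odds(norm.sf(z - 1)) / odds(norm.sf(z)):.1f}")
# ~17 for the 3SD threshold, ~5 for the above-average one.
```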

Tristan Cook's Shortform

Most views in population ethics can entail weird/intuitively toxic conclusions (cf. the large number of 'X conclusion's out there). Trying to weigh these up comparatively is fraught.

In your comparison, it seems there's a straightforward dominance argument if the 'OC' and 'RC' are the things we should be paying attention to. Your archetypal classical utilitarian is also committed to the OC, as a 'large increase in suffering for one individual' can be outweighed by a large enough number of smaller decreases in suffering for others - aggregation still applies to negative numbers for classical utilitarians. So the negative view fares better, as the classical one has to bite one extra bullet.

There's also the worry that in a pairwise comparison one might inadvertently pick a counterexample for one 'side' that turns the screws less than the counterexample for the other. Most people find the 'very repugnant conclusion' (where not only Z > A, but 'large enough Z plus some arbitrary number of people with awful lives > A') even more costly than the 'standard' RC. So using the more or less costly variant on one side of the scales may alter intuitive responses.

By my lights, it seems better to have some procedure for picking and comparing cases which isolates the principle being evaluated. Ideally, the putative counterexamples share the counterintuitive features both theories endorse, but differ in that one explores the worst case that can be constructed which the principle would avoid, whilst the other explores the worst case that can be constructed with its inclusion.

It seems the main engine of RC-like examples is the aggregation - it feels like one is being nickel-and-dimed in taking a lot of very small things to outweigh one very large thing, even though the aggregate is much higher. The typical worry a negative view avoids is trading major suffering for sufficient amounts of minor happiness - most people think this is priced too cheaply, particularly at extremes. The typical worry about the (absolute) negative view itself is that it fails to price happiness at all - yet we're often inclined to say enduring some suffering (or accepting some risk of suffering) is a good deal, at least at some extreme of 'upside'.

So with this procedure the putative counter-example to the classical view would be the vRC. Although negative views may not give crisp recommendations against the RC (e.g. if we stipulate no one ever suffers in any of the worlds, but people are more or less happy), adding the negative view clearly recommends against the vRC: the great suffering isn't outweighed by the large amounts of relatively trivial happiness (though it would be on the classical view).

Yet with this procedure we can construct a much worse counterexample to the negative view than the OC - by my lights, far more intuitively toxic than the already costly vRC. (Owed to Carl Shulman.) Suppose A is a vast but trivially-imperfect utopia - trillions (or googolplexes, or TREE(TREE(3))) of lives of all-but-perfect bliss, but for each an episode of trivial discomfort or suffering (e.g. a pin-prick, waiting in a queue for an hour). Suppose Z is a world with a (relatively) much smaller number of people (e.g. a billion) living like the child in Omelas. The negative view ranks Z > A: it only considers the pin-pricks in the utopia, and sufficiently huge aggregates of these can be worse than the awful lives (the classical view, which wouldn't discount all the upside in A, would not rank Z > A). In general, this negative view can countenance any amount of awful suffering if that is the price of forgoing a near-utopia of sufficient size (and the aggregated pin-pricks it contains).

(This axiology is also anti-egalitarian (consider replacing half the people in A with half the people in Z) and - depending on how you litigate it - susceptible to a sadistic conclusion. If the axiology claims welfare is capped above by 0, then there's never an option of adding positive-welfare lives, so nothing can be sadistic. If instead it discounts positive welfare, then it prefers (given half of A) adding half of Z (very negative welfare lives) to adding the other half of A (very positive lives).)
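For concreteness, a minimal sketch of the two rankings with stand-in magnitudes (far smaller than the trillions/TREE(TREE(3)) of the example, but chosen so A's aggregated pin-pricks exceed Z's aggregated awful suffering, as in the construction above):

```python
# Each world is a list of (count, happiness per person, suffering per person) - illustrative numbers only.
world_A = [(10**12, 100.0, -0.01)]   # vast near-utopia: blissful lives, each with a pin-prick
world_Z = [(10**6, 0.0, -1000.0)]    # far fewer people, each living an awful (Omelas-child) life

def classical(world):
    """Classical (total) utilitarianism: sum happiness and suffering alike."""
    return sum(n * (h + s) for n, h, s in world)

def strict_negative(world):
    """Absolute negative utilitarianism: only suffering counts; happiness is discounted entirely."""
    return sum(n * s for n, h, s in world)

for name, value in (("classical", classical), ("strict negative", strict_negative)):
    better = "A" if value(world_A) > value(world_Z) else "Z"
    print(f"{name:>15} view prefers world {better}")
# The classical view prefers A; the strict negative view prefers Z, because 10^12 pin-pricks
# of -0.01 aggregate to more suffering than 10^6 awful lives of -1000.
```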

I take this to make absolute negative utilitarianism (like average utilitarianism) a non-starter. In the same way folks look for a better articulation of the egalitarian-esque commitments that make one (at least initially) sympathetic to average utilitarianism, folks with negative-esque sympathies may look for better articulations of this commitment. One candidate could be that what one really cares about is severe rather than trivial suffering, so this, rather than suffering in general, should be the object of sole/lexically prior concern. (Obviously there are many other lines, and corresponding objections to each.)

But note this is an anti-aggregation move. Analogous ones are available for classical utilitarians to avoid the (v/)RC (e.g. a critical-level view which discounts positive welfare below some threshold). So if one is trying to evaluate a particular principle out of a set, it would be wise to aim for 'like-for-like': e.g. perhaps a 'negative plus a lexical threshold' view is more palatable than classical util, yet CLU would fare even better than either.

Complex cluelessness as credal fragility

[Mea culpa re. messing up the formatting again]

1) I don't closely follow the current state of play in terms of 'shorttermist' evaluation. The reply I hope (e.g.) a GiveWell analyst would make to (e.g.) "Why aren't you factoring in impacts on climate change for these interventions?" would be some mix of:

a) "We have looked at this, and we're confident we can bound the magnitude of this effect to pretty negligible values, so we neglect them in our write-ups etc."

b) "We tried looking into this, but our uncertainty is highly resilient (and our best guess doesn't vary appreciably between interventions) so we get higher yield investigating other things."

c) "We are explicit our analysis is predicated on moral (e.g. "human lives are so much more important than animals lives any impact on the latter is ~moot") or epistemic (e.g. some 'common sense anti-cluelessness' position) claims which either we corporately endorse and/or our audience typically endorses." 

Perhaps such hopes would be generally disappointed.

2) Similar to above, I don't object to (re. animals) positions like "Our view is this consideration isn't a concern as X" or "Given this consideration, we target Y rather than Z", or "Although we aim for A, B is a very good proxy indicator for A which we use in comparative evaluation."

But I at least used to see folks appeal to motivations which obviate (inverse/) logic-of-the-larder issues, particularly re. diet change ("Sure, it's actually really unclear whether becoming vegan reduces or increases animal suffering overall, but the reason to be vegan is to signal concern for animals and so influence broader societal attitudes, and this effect is much more important and what we're aiming for"). Yet this overriding motivation typically only 'came up' in the context of this discussion, and corollary questions like:

*  "Is maximizing short term farmed animal welfare the best way of furthering this crucial goal of attitude change?"

* "Is encouraging carnivores to adopt a vegan diet the best way to influence attitudes?"

* "Shouldn't we try and avoid an intervention like v*ganism which credibly harms those we are urging concern for, as this might look bad/be bad by the lights of many/most non-consequentialist views?" 

seemed seldom asked. 

Naturally I hope this is a relic of my perhaps jaundiced memory.
