Gregory_Lewis

Researcher (on bio) at FHI


My bargain with the EA machine

Bravo. I think diagrams are underused as crisp explanations, and this post gives an excellent demonstration of their value (among many other merits).

A minor point (cf. ThomasWoodside's remarks): I'd be surprised if one really does (or really should) accept no trade-offs between "career quality" and "career impact". The 'isoquoise' may not slant all the way down from the status quo to impactful toil, but I think it should slant down at least a little (contrariwise, you might also be willing to trade less impact for higher QoL, etc.).

Should have erased the left bit of the red line, sorry.

One motivation for having a flat line is to avoid (if the opportunity is available) feeling obliged to trade away all the quality for (in)sufficiently large increases in impact. But maybe you can capture similar intuitions by using curved lines: at low levels of quality the line is flat/almost flat, meaning you are unwilling to trade down further no matter the potential impact on the table, but at higher levels you are willing, so the slope gets steeper at higher qualities. Maybe the 'isoquoises' would look something like this:

[figure: sketch of curved isoquoises - flat at low quality, steeper at high quality]
 

You could get similar bottom lines by playing with a non-linear Y axis instead.

I appreciate 'just set a trade-off function' might be the first step down the totalising path you want to avoid, but one (more wonky than practical) dividend of such a thing is it would tell you where to go on the bargaining frontier (graphically, pick the point on the ellipse which touches the biggest isoquoise line). With the curved line story above, if your available options all lie below the floor (~horizontal line) you basically pick the best quality option, whereas if the option frontier only has really high quality options (so the slope is very steep), you end up close to the best impact option.
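For the terminally wonky, here is a minimal sketch of that tangency procedure. The quarter-ellipse frontier, the quality floor, and the log-shaped trade-off function are all invented here for illustration (nothing from the original post):

    import numpy as np

    # Hypothetical trade-off function over (quality, impact) pairs. The log term
    # makes the isoquoises nearly flat near the quality floor (unwilling to trade
    # quality down further) and steep at high quality (happy to trade).
    def tradeoff_value(quality, impact, floor=0.2):
        return np.log(np.maximum(quality - floor, 1e-9)) + impact

    # Made-up bargaining frontier: a quarter-ellipse of feasible options.
    theta = np.linspace(0.0, np.pi / 2, 1000)
    quality, impact = np.cos(theta), np.sin(theta)

    # 'Pick the point on the ellipse which touches the biggest isoquoise':
    # i.e. maximise the trade-off function over the frontier.
    best = np.argmax(tradeoff_value(quality, impact))
    print(f"best option: quality={quality[best]:.2f}, impact={impact[best]:.2f}")

(The log shape only approximates the 'hard floor' story - for a genuinely flat region you would want the value to depend on quality alone below the floor - but it keeps the sketch short.)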

 

I feel anxious that there is all this money around. Let's talk about it

[own views etc]

I think the 'econ analysis of the EA labour market' angle has been explored fairly well - I highly recommend this treatment by Jon Behar. I also find myself (and others) commonly in the comment threads banging the drum for why it is beneficial to pay more, or why particular proposals not to do so (or to pay EA employees less) are not good ones.

Notably, 'standard economic worries' point in the opposite direction here.  On the standard econ-101 view, "Org X struggles as competitor Org Y can pay higher salaries", or "Cause ~neutral people migrate to 'hot' cause area C, attracted by higher pay" are desirable features, rather than bugs, of competition.  Donors/'consumers' demand more of Y's product than X's (or more of C generally), and the price signal of higher salaries acts to attract labour to better satisfy this demand (both from reallocation within the 'field', and by incentivizing outsiders to join in). In aggregate, both workers and donors expect to benefit from the new status quo.

In contrast, trying to intervene in the market to make life easier for those losing out in this competition is archetypally (and undesirably) anti-competitive. The usual suggestion (implied here, but expressly stated elsewhere) is unilateral or mutual agreement between orgs to pay their employees less - or refrain from paying them more. The usual econ-101 story is that this is a bad idea: although it can anoint a beneficiary (i.e. those who run and donate to Org X, who feel less heat from Org Y potentially poaching their staff), it makes the market more inefficient overall, and harms/exploits employees (such activity often draws the ire of antitrust regulators). To cash out explicitly who can expect to lose out:

  • Employees at Org X, who lose the option of migrating to more lucrative employment.
  • Employees at Org Y, who lose out by being paid less than they otherwise would.
  • (probably) Org Y, who by artificially suppressing salary can expect a supply shortfall versus the preferable market equilibrium, as they value marginal labour more than the artificially suppressed price (a toy numerical sketch of this follows the list).
  • Donors to Org Y, who (typically) prefer their donations lead to more Org Y activity, rather than being partially siphoned off in an opaque transfer subsidy to Org X. Even donors who would want to 'buy' more of both Org Y and Org X could do so more efficiently with donation splitting.
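To make the shortfall point concrete, here is a toy linear labour market; all numbers are invented, and it is a sketch of the econ-101 claim rather than a calibrated model:

    # Toy labour market for Org Y: invented linear supply and demand curves.
    supply = lambda w: 2.0 * w - 20    # candidates willing to work at wage w
    demand = lambda w: 100 - 1.0 * w   # posts donors would fund at wage w

    # Equilibrium: solve 2w - 20 = 100 - w  ->  w* = 40, with 60 posts filled.
    w_eq = (100 + 20) / (2.0 + 1.0)
    hires_eq = supply(w_eq)

    # A pay-restraint agreement caps wages below equilibrium: supply now binds.
    w_cap = 30
    hires_capped = min(supply(w_cap), demand(w_cap))   # 40 filled, 70 funded

    print(f"equilibrium: wage {w_eq:.0f}, {hires_eq:.0f} hires")
    print(f"with cap: wage {w_cap}, {hires_capped:.0f} hires; "
          f"{demand(w_cap) - hires_capped:.0f} posts Org Y would fund go unfilled")

Org Y values those unfilled marginal hires at more than the capped wage, which is the sense in which it (and not just its employees) loses from the agreement.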

Also, on the econ-101 story, orgs can't unfairly screw each other by outbidding one another on salaries. If a challenger can't compete with an incumbent on salary, its problem really is that it can't convince donors to give it more money (despite its relatively discounted labour) - which implies donors agree with the incumbent, not the challenger, that the former is the better use of marginal scarce resources.

Naturally, there are corner cases where this breaks down - e.g. if labour supply were basically inelastic, upping pay would just waste money - but none of these seem likely. Likewise, how efficient the 'EA labour market' is remains unclear - but if it is inefficient and distorted, the standard econ reflex would be scepticism that it could be improved by adding in more distortions and inefficiencies. Also, as being rich does not mean being right, economic competition could distort competition in the marketplace of ideas. But even if the market results are not synonymous with the balance of reason, they are probably not orthogonal to it either. If animal-welfare-leaning Alice goes to Redwood over ACE, it also implies she's not persuaded the intrinsic merit of ACE is great enough to warrant the large altruistic donation, in terms of salary sacrifice, that working there would amount to; if Mike the mega-donor splashes the cash on AI policy but is miserly about mental health, this suggests he thinks the former is more promising than the latter. Even if the economic weighting (wealth) were completely random, this would noisily approximate equal-weight voting on the merits - and I'd guess wealth weakly correlates with epistemic accuracy.

So I think Org (or cause) X, if it is on the wrong side of these dynamics, should basically either get better or accept the consequences of remaining worse, e.g.:

  • Try and persuade donors it is underappreciated on the direct merits.
  • Or try and persuade them they should have a hypothecated exploratory budget for areas which do not currently, but might in future, have attractive direct merits (and Org X would satisfy these criteria).
  • Alternatively, accept their budget constraints mean they hire fewer staff at market rates.
  • Or accept they can't compete on salary, try to get more staff on lower salaries, but accept this strategy will result in a more constrained recruitment pool (e.g. only staff highly committed to the cause, or those without the skill sets to be hired by Org Y and company).

Appealing for subsidies of various types seems unlikely to work (as although they are in Org X's interest, they aren't really in anyone else's), and is probably -EV from most idealized 'ecosystem-wide' perspectives.

Minimalist axiologies and positive lives

Thanks for the reply, and with apologies for brevity.

Re. 1 (i.e. "The primary issue with the VRC is aggregation rather than trade-off"): I take it we should care about the plausibility of axiological views with respect to something like 'commonsense' intuitions, rather than the intuitions a given axiology urges us to adopt. It's at least opaque to me whether commonsense intuitions are more offended by 'trade-offy/CU' or 'no-trade-offy/NU' claims. On the one hand:

  • "Any arbitrarily awful thing can be better than nothing providing it is counterbalanced by k good things (for some value of k)"
  • (a fortiori) "N awful things can be better than nothing providing they are counterbalanced by k*N good things (and N can be arbitrarily large, say a trillion awful lives)."

But on the other:

  • "No amount of good things (no matter how great their magnitude) can compensate for a single awful thing, no matter how astronomical the ratio (e.g. trillions to 1, TREE(3) to 1, whatever)."
  • (a fortiori) "No amount of great things can compensate for a single bad thing, no matter how small it is (e.g. pinpricks, a minute risk of an awful thing)"
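In symbols (my own schematic rendering, not anything from the post): write $g > 0$ for one unit of good stuff and $b < 0$ for the awful thing. The two families disagree on whether the goods can ever pull the total above nothing:

$$V_{\text{trade-offy}}(k \text{ goods} + 1 \text{ bad}) = kg + b > 0 \iff k > -b/g$$

$$V_{\text{minimalist}}(k \text{ goods} + 1 \text{ bad}) = b < 0 \quad \text{for every } k$$

The first family always has some finite k which suffices; the second has none - which is just the two sets of bullets above restated.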

However, I am confident the aggregation issues - basically orthogonal to this question - are indeed the main driver of folks finding the V/RC particularly repugnant. Compare:

  1. 1 million great lives vs. 1 million terrible lives and a quadrillion great lives.
  2. 1 thousand great lives vs. 1 thousand terrible lives and TREE(3) marginally good lives.

A minimalist view may well be concerned with the increase in aggregate harm in both 1 and 2, and so worry that (re. 2) if CU is willing to accept this, it would accept a lot more aggregate harm if we increased the upside to more than compensate (e.g. TREE(3) great lives). Yet I aver commonsense intuitions favour 1 over 2, and would find variants of 2 where the downside is increased but the upside is reduced yet concentrated (e.g. a trillion great lives) more palatable.

So appeals along the lines of "CU accepts the VRC, and - even worse - would accept even larger downsides if the compensating upside was composed of very- rather than marginally-happy lives" seem misguided, as this adaptation of the VRC aligns it better, not worse, with commonsense (if not minimalist) intuitions.

 

Re. 3: I've read Budolfson & Spears, and as you note (*) it seems we can construct xVRCs which minimalist views (inc. those which introduce lexical thresholds) are susceptible to. (I also note they agree with me re. 1 - e.g. s8: "Whenever aggregation is done over an unbounded space, repugnant outcomes inevitably occur" - and their identification of the underlying mechanism for repugnance as the ability to aggregate ε-changes.)

The replies minimalists can make here seem very 'as good for the goose as the gander' to me:

  1. One could deny minimalism is susceptible to even xVRCs because one should drop aggregation/continuity/etc. Yet symmetric views can avail themselves of the same move, so one should explore whether, conditional on this atypical account of aggregation, minimalist axiologies are a net plus or minus to overall plausibility.
  2. One could urge we shouldn't dock points from a theory for counter-examples which are impractical/unrealistic, and that by this standard the x/VRCs for minimalism fare much better than the x/VRCs for totalism. This would be quite a departure from my understanding of how the discussion proceeds in the literature, where the main concern is the 'in principle' determination for scenarios (I don't ever recall - e.g. - replies for averagism along the lines of "But there'd never be a realistic scenario where we'd actually find ourselves minded to add net-negative lives to improve average utility"). In any case, a lot of the xVRCs applicable to CU-variants require precisely stipulated 'base populations', so they're presumably also 'in the clear' by this criterion.
  3. One could accept minimalism entails an xVRC, but argue this bullet is easier to bite than the x/VRCs against symmetric views. Perhaps, but in that case we should probably pick the closest symmetric comparator (e.g. if they can't play with thresholds, you should deal with Shulman-esque pinprick scenarios). I also note the appeals to plausibility made (here and in the comments you link) seem to be mostly re-statements of minimalism itself (e.g. that epsilon changes in misery count but epsilon changes in happiness don't, 'subjective perfection' equated to neutrality, etc.). "Conditional on minimalist intuitions, minimalism has no truly counter-intuitive results" is surely true, but also question-begging to folks who don't share them (compare a totalist asserting the VRC is much less counter-intuitive than minimalist-xVRCs as - 'obviously' - wellbeing can be greater than zero, and axiology shouldn't completely discount unbounded amounts of it in evaluation).

[Finally, I'm afraid I can't really see much substantive merit in the 'relational goods' approach. Minimalism (like SFE and NU) straightforwardly offends the naive intuition that happiness is indeed 'better than nothing', and I don't find relational attempts to undercut this - by offering an account of such goods as roundabout ways/policies of reducing problems - either emotionally satisfying (e.g. all the rich relationships between members of a community may make everyone have 'lives worth living' in the sense that 'without me these other people would be worse off', but minimalism appears still committed to the dispiriting claim that this rich tapestry of relationships is still worse than nothing) or intellectually credible (cf. virtually everyone's expressed and implied preferences suggest non-assent to 'no-trade-off' views).

Similarly, I think assessing 'isolated' goods as typical population cases do is a good way to dissect out the de/merits of different theories, and noting our evaluation changes as we add in a lot of 'practical' considerations seems apt to muddy the issue again (for example, I'd guess various 'practical elaborations' of the V/RC would make it appear more palatable, but I don't think this is a persuasive reply). 

I focus on the 'pure' population ethics as "I don't buy it" is barren ground for discussion.]

Minimalist axiologies and positive lives

Do you mean trivial pains adding up to severe suffering? I can see how if you would accept lexicality or thresholds to prevent this, you could do the same to prevent trivial pleasures outweighing severe suffering or greater joys.

Yeah, that's it. As you note, these sorts of moves have costs elsewhere, but if one thinks on balance they should nonetheless be accepted, then the V/RC isn't really a strike against 'symmetric axiology' simpliciter, but merely against 'symmetric axiologies with a mistaken account of aggregation'. If instead 'straightforward/unadorned' aggregation is the right way to go, then the V/RC is a strike against symmetric views and a strike in favour of minimalist ones; but 'straightforward' aggregation can also produce highly counter-intuitive results for minimalist views which symmetric axiologies avoid (e.g. "better N awful lives than TREE(N+3) lives of perfect bliss and a pin-prick each").

Hence (per 3) I feel the OP would be trying to have it both ways if they don't discuss argumentative resources which could defend a rival theory from objections they mount against it, yet subsequently rely upon those same resources to respond to objections to their preferred theory.

(Re. 2, perhaps it depends on the value of "tiny" - my intuition is the dynamic range of (e.g.) human happiness is much smaller than that for future beings, so 'very small' on this scale would still typically be greatly above the 'marginally good' range by the lights of classical util. If (e.g.) commonsensically happy human lives/experiences are 10, joyful future beings could go up to 1000, and 'marginally good' is anything <1, we'd be surprised to find the optimal average for the maximal aggregate is in the marginally good range. Adding in the 'V' bit to this RC adds a further penalty.)

Minimalist axiologies and positive lives

Tradeoffs like the Very Repugnant Conclusion (VRC) are not only theoretical, because arguments like that of Bostrom (2003) imply that the stakes may be astronomically high in practice. When non-minimalist axiologies find the VRC a worthwhile tradeoff, they would presumably also have similar implications on an arbitrarily large scale. Therefore, we need to have an inclusive discussion about the extent to which the subjective problems (e.g. extreme suffering) of some can be “counterbalanced” by the “greater (intrinsic) good” for others, because this has direct implications for what kind of large-scale space colonization could be called “net positive”.

 

This seems wrong to me, and confusing 'finding the VRC counter-intuitive' with 'counterbalancing (/extreme) bad with good in any circumstance is counterintuitive' (e.g. the linked article on Omelas) is unfortunate - especially as this error has been repeated a few times in and around SFE-land.

 

First, what is turning the screws in the VRC is primarily the aggregation, not the (severe/) suffering. If the block of 'positive lives/stuff' in the VRC were high magnitude - say about as much (or even more) above neutral as the block of 'negative lives/stuff' lies below it - there is little about this more Omelas-type scenario a classical utilitarian would find troubling. "N terrible lives and k*N wonderful lives is better than N wonderful lives alone" seems plausible for sufficiently high values of k. (Notably, 'minimalist' views seem to fare worse, as they hold that no value of k - googolplexes, TREE(TREE(3)), 1/P(randomly picking the single 'wrong' photon from our light cone a million times consecutively), etc. - would be high enough.)

The challenge of the V/RC is the counter-intuitive 'nickel and diming' where a great good or bad is outweighed by a vast multitude of small/trivial things. "N terrible lives and c*k*N barely-better-than-nothing lives is better than N wonderful lives alone" remains counter-intuitive to many who accept the first scenario (for some value of k) basically regardless of how large you make c. The natural impulse (at least for me) is to wish to discount trivially positive wellbeing rather than saying it can outweigh severe suffering if provided in sufficiently vast quantity. 
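To make the arithmetic explicit (magnitudes invented purely for illustration): put terrible lives at $-100$, wonderful lives at $+100$, and barely-better-than-nothing lives at $+\varepsilon$ each. Classical utilitarianism endorses both trades:

$$\underbrace{-100N + 100kN}_{\text{Omelas-type}} > 100N \iff k > 2$$

$$\underbrace{-100N + \varepsilon c k N}_{\text{VRC-type}} > 100N \iff ck > 200/\varepsilon$$

Both inequalities are satisfiable, but only the second forces the compensating block to be a vast multitude of trivially good lives ($ck$ blows up as $\varepsilon \to 0$) - which is, I claim, where the intuitions actually rebel.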

If it were just 'the VRC says you can counterbalance severe suffering with happiness' simpliciter that was generally counterintuitive, we could skip the rigmarole of A, A+, B, etc. and just offer Omelas-type scenarios (as Tomasik does in the linked piece) without stipulating that the supposedly outweighing good stuff comprises a lot of trivial well-being.

 

Second, although scenarios where one may consider counterbalancing (/severe) suffering with happiness in general may not be purely theoretical (either now or in the future), the likelihood of something closely analogous to the VRC in particular looks very remote. In terms of 'process', the engine of the counter-intuitiveness relies on being able to parcel out good stuff in arbitrarily many arbitrarily small increments rather than in fewer, more substantial portions; in terms of 'outcome', one needs a much smaller set of terrible lives outweighed by a truly vast multitude of just-about-better-than-nothing ones. I don't see how either arises on credible stories of the future.

 

Third, there are other lines classical utilitarians or similar can take in response to the VRC besides biting the bullet (or attempting to undercut our intuitive responses): critical-level views, playing with continuity, and other anti-aggregation devices which try to preserve trading-off in general but avoid the nickel-and-diming issues of the VRC in particular. Obviously, these themselves introduce other challenges (so much so that I'm more inclined to accept the costly counter-examples than the costs of (e.g.) non-continuity), and surveying all this terrain would be a gargantuan task far beyond the remit of work introducing a related but distinct issue.

But I bring this up because I anticipate the likely moves you will make to avoid the counter-examples Shulman and I have brought up will be along the lines of anti-aggregationist moves around lexicality, thresholds, and whatnot. If so, what is good for the goose is good for the gander: it seems better to use similarly adapted versions of total utilitarianism as a 'like for like' comparison. 'Lexical threshold total utilitarianism', which lexically de-prioritises dis/value below some magnitude, can accept mere addition, accept trading off suffering for sufficient (non-trivial) happiness, but avoid both the RC and VRC. This seems a better point of departure for weighing up minimalism or not, rather than discussing counter-examples to one or the other view which only apply given an (ex hypothesi) mistaken account of how to aggregate harms and benefits.
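One toy way of cashing such a view out (my own schematic construction for illustration, not a worked-out position from the literature): split each world's welfare into 'significant' and 'trivial' components relative to a threshold $t > 0$, and rank worlds lexically on the former:

$$S(X) = \sum_{i : |w_i| \ge t} w_i, \qquad T(X) = \sum_{i : |w_i| < t} w_i$$

$$X \succ Y \iff S(X) > S(Y), \text{ or } \big(S(X) = S(Y) \text{ and } T(X) > T(Y)\big)$$

Mere addition of positive lives weakly improves one of the two sums; Omelas-type trades among above-threshold lives go through as before; but no quantity of sub-threshold lives can outweigh above-threshold suffering, since $T$ is only consulted when $S$ is tied - so both the RC and VRC are blocked. (Whether the discontinuity at $t$ is a price worth paying is, per the above, the real argument.)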

Clarifying the Petrov Day Exercise

I've been accused of many things in my time, but inarticulate is a new one. ;)

Clarifying the Petrov Day Exercise

I strongly agree with all this. Another downside I've felt from this exercise is that it feels like I've been dragged into a community ritual I'm not really a fan of, where my options are either a) tacit support (even if it is just deleting the email where I got the codes with a flicker of irritation), or b) an ostentatious and disproportionate show of disapproval.

I generally think EA- and longtermist- land could benefit from more 'professional distance': that folks can contribute to these things without having to adopt an identity or community that steadily metastasises over the rest of their life - with at-best-murky EV to both themselves and the 'cause'.  I also think particular attempts at ritual often feel kitsch and prone to bathos: I imagine my feelings towards the 'big red button' at the top of the site might be similar to how many Christians react to some of their brethren 'reenacting' the crucifixion themselves.

But hey, I'm (thankfully) not the one carrying down the stone tablets of community norms from the point of view of the universe here - to each their own. Alas, this restraint is not universal, as this is becoming a (capital-C) Community ritual, where 'success' or 'failure' is taken to be important (at least by some) not only for those who do or don't hit the button, but for corporate praxis generally.

As someone who is already ambivalent, it rankles that my inaction will be taken as tacit support for some after-action paean to some sticky-back-plastic icon of 'who we are as a Community'. Yet although 'protesting' by '''''nuking''''' [sic] ([sic]) LW has some benefits - a) I probably won't get opted in again, and b) it might make this less likely to be an ongoing 'thing' - it has some downsides. I'm less worried about 'losing rep' (I have more than enough of both e-clout and ego to make counter-signalling an attractive proposition; '''''nuking''''' LW in a fit of 'take this and shove it' pique is pretty on-brand for me), but more that some people take this (very) seriously and would be sad if this self-imposed risk were realised. Even though I disagree (and think this is borderline infantile), protesting in this way feels a bit like trying to refute a child's belief that their beloved toy is sapient by destroying it in front of them.

I guess we can all be thankful 'writing asperous forum comments' provides a means of de-escalation.

A Primer on the Symmetry Theory of Valence

Thanks, but I've already seen them. Presuming the implication here is something like "Given these developments, don't you think you should walk back what you originally said?", the answer is "Not really, no": subsequent responses may be better, but that is irrelevant to whether earlier ones were objectionable; one may be making good points, but one can still behave badly whilst making them.

(Apologies if I mistake what you are trying to say here. If it helps generally, I expect - per my parent comment - to continue to affirm what I've said before however the morass of commentary elsewhere on this post shakes out.)

A Primer on the Symmetry Theory of Valence

[Own views]

I'm not sure 'enjoy' is the right word, but I also noticed the various attempts to patronize Hoskin. 

This ranges from the straightforward "I'm sure once you know more about your own subject you'll discover I am right":

I would say I expect you to be surprised by certain realities of neuroscience as you complete your PhD

'Well-meaning suggestions' alongside the implication that her criticism arises from some emotional reaction rather than her strong and adverse judgement of its merit:

I’m a little baffled by the emotional intensity here but I’d suggest approaching this as an opportunity to learn about a new neuroimaging method, literally pioneered by your alma mater. :) 

[Adding a smiley after something insulting or patronizing doesn't magically make you the 'nice guy' in the conversation, but makes you read like a passive-aggressive ass who is nonetheless too craven for candid confrontation. I'm sure once you reflect on what I said and grow up a bit you'll improve so your writing inflicts less of a tax on our collective intelligence and good taste. I know you'll make us proud! :)]

Or just straight-up belittling her knowledge and expertise with varying degrees of passive-aggressiveness.

I understand it may feel significant that you have published work using fMRI, and that you hold a master’s degree in neuroscience.

 

I’m glad to hear you feel good about your background and are filled with confidence in yourself and your field.

I think this sort of smug and catty talking down would be odious even if the OP really did have much more expertise than their critic: I hope I wouldn't write similarly in response to criticism (however strident) from someone more junior in my own field. 

What makes this kinda amusing, though, is that although the OP is trying to set himself up as some guru, dismissing his critic with the textual equivalent of patting her on the head, virtually any reasonable third party would judge the balance of expertise to weigh in the other direction. Typically we'd take "Post-graduate degree, current doctoral student, and relevant publication record" over "Basically nothing I could put on an academic CV, but I've written loads of stuff about my grand theory of neuroscience."

In that context (plus the genders of the participants) I guess you could call it 'mansplaining'. 
