Gregory_Lewis

Researcher (on bio) at FHI


Comments

Clarifying the Petrov Day Exercise

I've been accused of many things in my time, but inarticulate is a new one. ;)

Clarifying the Petrov Day Exercise

I strongly agree with all this. Another downside I've felt from this exercise is that it feels like I've been dragged into a community ritual I'm not really a fan of, where my options are a) tacit support (even if it is just deleting the email where I got the codes with a flicker of irritation) or b) an ostentatious and disproportionate show of disapproval.

I generally think EA- and longtermist- land could benefit from more 'professional distance': that folks can contribute to these things without having to adopt an identity or community that steadily metastasises over the rest of their life - with at-best-murky EV to both themselves and the 'cause'.  I also think particular attempts at ritual often feel kitsch and prone to bathos: I imagine my feelings towards the 'big red button' at the top of the site might be similar to how many Christians react to some of their brethren 'reenacting' the crucifixion themselves.

But hey, I'm (thankfully) not the one carrying down the stone tablets of community norms from the point of view of the universe here - to each their own. Alas this restraint is not universal, as this is becoming a (capital C) Community ritual, where 'success' or 'failure' is taken to be important (at least by some) not only for those who do or don't hit the button, but for corporate praxis generally.

As someone who is already ambivalent, it rankles that my inaction will be taken as tacit support for some after-action paean to some sticky-back-plastic icon of 'who we are as a Community'. Yet although 'protesting' by 'nuking' [sic] LW has the benefits that a) I probably won't get opted in again and b) it might make this less likely to be an ongoing 'thing', it has some downsides. I'm less worried about 'losing rep' (I have more than enough of both e-clout and ego to make counter-signalling an attractive proposition; 'nuking' LW in a fit of 'take this and shove it' pique is pretty on-brand for me), but more that some people take this (very) seriously and would be sad if this self-imposed risk were realised. Even though I disagree (and think this is borderline infantile), protesting in this way feels a bit like trying to refute a child's belief that their beloved toy is sapient by destroying it in front of them.

I guess we can all be thankful 'writing asperous forum comments' provides a means of de-escalation.

A Primer on the Symmetry Theory of Valence

Thanks, but I've already seen them. Presuming the implication here is something like "Given these developments, don't you think you should walk back what you originally said?", the answer is "Not really, no": subsequent responses may be better, but that is irrelevant to whether earlier ones were objectionable; one may be making good points, but one can still behave badly whilst making them.

(Apologies if I mistake what you are trying to say here. If it helps generally, I expect - per my parent comment - to continue to affirm what I've said before however the morass of commentary elsewhere on this post shakes out.)

A Primer on the Symmetry Theory of Valence

[Own views]

I'm not sure 'enjoy' is the right word, but I also noticed the various attempts to patronize Hoskin. 

This ranges from the straightforward "I'm sure once you know more about your own subject you'll discover I am right":

I would say I expect you to be surprised by certain realities of neuroscience as you complete your PhD

To 'well-meaning suggestions' alongside the implication that her criticism arises from some emotional reaction rather than from her strong and adverse judgement of its merit.

I’m a little baffled by the emotional intensity here but I’d suggest approaching this as an opportunity to learn about a new neuroimaging method, literally pioneered by your alma mater. :) 

[Adding a smiley after something insulting or patronizing doesn't magically make you the 'nice guy' in the conversation, but makes you read like a passive-aggressive ass who is nonetheless too craven for candid confrontation. I'm sure once you reflect on what I said and grow up a bit you'll improve so your writing inflicts less of a tax on our collective intelligence and good taste. I know you'll make us proud! :)]

Or just straight-up belittling her knowledge and expertise with varying degrees of passive-aggressiveness.

I understand it may feel significant that you have published work using fMRI, and that you hold a master’s degree in neuroscience.

 

I’m glad to hear you feel good about your background and are filled with confidence in yourself and your field.

I think this sort of smug and catty talking down would be odious even if the OP really did have much more expertise than their critic: I hope I wouldn't write similarly in response to criticism (however strident) from someone more junior in my own field. 

What makes this kinda amusing, though, is that although the OP is trying to set himself up as some guru dismissing his critic with the textual equivalent of patting her on the head, virtually any reasonable third party would judge the balance of expertise to weigh in the other direction. Typically we'd take "Post-graduate degree, current doctoral student, and relevant publication record" over "Basically nothing I could put on an academic CV, but I've written loads of stuff about my grand theory of neuroscience."

In that context (plus the genders of the participants) I guess you could call it 'mansplaining'. 

Is effective altruism growing? An update on the stock of funding vs. people

[Predictable disclaimers, although in my defence, I've been banging this drum since long before I had (or anticipated having) a conflict of interest.]

I also find the reluctance to wholeheartedly endorse the 'econ-101' story (i.e. if you want more labour, try offering more money for people to sell labour to you) perplexing:

  • EA-land tends to be sympathetic to using 'econ-101' accounts reflexively on basically everything else in creation. I thought the received wisdom was that these approaches are reasonable at least for a first-pass analysis, and that we'd need persuading to depart greatly from them.
  • Considerations for why 'econ-101' won't (significantly) apply here don't seem to extend to closely analogous cases: we don't fret (and typically argue against others fretting) about other charities paying their staff too much; we don't think (cf. reversal test) that Google could improve its human capital by cutting pay and keeping the 'truly committed Googlers'; we are generally sympathetic to public servants getting paid more if they add much more social value (and don't presume these people are insensitive to compensation beyond some limit); we prefer simple market mechanisms over more elaborate tacit transfer systems (e.g. just give people money); etc. etc.
  • The precise situation makes the 'econ-101' intervention particularly appetising: if you value labour much more than the current price, and you are sitting atop an ungodly pile of lucre so vast you earnestly worry about how you can spend big enough chunks of it at once, 'try throwing money at your long-standing labour shortages' seems all the more promising.
  • Insofar as it goes, the observed track record looks pretty supportive of the econ-101 story - besides all the points Ryan mentions, compare "price suppression results in shortages" to the years-long (and still going strong) record of orgs lamenting they can't get the staff.

Perhaps the underlying story is that, as EA-land is generally on the same team, one might hope to do better than taking one's cue from 'econ-101', given the typically adversarial/competitive dynamics it presumes between firms, and between employee and employer. I think this hope is forlorn: EA-land might be full of aspiring moral saints, but aspiring moral saints remain approximate to homo economicus. So the usual stories about the general benefits of economic efficiency prove hard to better - and (play-pumps style) attempts to try feel apt to backfire (1, 2, 3, 4 - ad nauseam).
 
However, although I don't think 'PR concerns' should guide behaviour (if X really is better than ¬X, bearing the costs of people reasonably - if mistakenly - thinking less of you for doing X is typically better than strategising to hide this disagreement), many things look bad because they are bad.

In the good old days, I realised I was behind on my GWWC pledge so used some of my holiday to volunteer for a week of night-shifts as a junior doctor on a cancer ward. If in the future my 'EA praxis' is tantamount to splashing billionaire largess on a lifestyle for myself of comfort and affluence scarcely conceivable to my erstwhile beneficiaries, spending my days on intangible labour in well-appointed offices located among the richest places heretofore observed in human history, an outside observer may wonder what went wrong. 

I doubt they would find my defence any better than obscene: "Not all heroes wear capes; some nobly spend thousands on yuppie accoutrements they deem strictly necessary for them to do the most good!" Nor would they be moved by my remorse: self-effacing acknowledgement is not expiation, nor complaisance to my own vices atonement. I still think jacking up pay may be good policy, but personally, perhaps I should doubt myself too.

Denise_Melchin's Shortform

If anything, income seems to be unusually heavy-tailed compared to direct work (the top two donors in EA account for the majority of the capital, but I don't think the top 2 direct workers account for the majority of the value of the labour).

Although I think this stylized fact remains interesting, I wonder if there's an ex-ante/ex-post issue lurking here. You get to see the endpoint with money a lot earlier than with direct work contributions, and there are probably a lot of lottery-esque dynamics. I'd guess at these corollaries:

First, the ex ante 'expected $ raised' from folks aiming at E2G (e.g. at a similar early career stage) is much more even than the ex post distribution. Algo-trader Alice and entrepreneur Edward may have similar expected lifetime income, but Edward has much higher variance - ditto, one of entrepreneurs Edward and Edith may swamp the other if one (but not the other) hits the jackpot.
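A minimal Monte Carlo sketch of this point, with entirely invented numbers (the career labels, payoff sizes, and jackpot odds are my illustrative assumptions, not figures from the post): two career types with similar expected lifetime donations, one predictable and one lottery-like, produce very different ex post distributions.

import random

random.seed(0)
N = 10_000  # simulated careers of each type (purely illustrative)

# Hypothetical 'algo-trader' careers: fairly predictable lifetime donations (~$5M each).
traders = [random.gauss(5e6, 1e6) for _ in range(N)]

# Hypothetical 'entrepreneur' careers: similar ~$5M expectation, but lottery-like -
# a 1-in-1,000 chance of a ~$5B exit, otherwise a modest ~$0.5M outcome.
entrepreneurs = [5e9 if random.random() < 0.001 else 5e5 for _ in range(N)]

def summarise(name, xs):
    xs = sorted(xs, reverse=True)
    mean = sum(xs) / len(xs)
    top10_share = sum(xs[:10]) / sum(xs)  # share of the total from the 10 biggest outcomes
    print(f"{name}: mean ≈ ${mean:,.0f}; top-10 share of total ≈ {top10_share:.1%}")

summarise("Traders", traders)
summarise("Entrepreneurs", entrepreneurs)
# Ex ante the two careers look comparable; ex post the entrepreneurs' total is
# dominated by a handful of jackpots - much like the observed donor distribution.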

Second, part of the reason direct work contributions look more even is that this is largely an ex ante estimate - a clairvoyant ex post assessment would likely be much more starkly skewed. E.g. if work on AI paradigm X alone was sufficient to avert existential catastrophe (which turned out to be the only such danger), the impact of the lead researcher(s) re. X is astronomically larger than everything everyone else is doing.

Third, I also wonder whether raw $ value may mislead in credit assignment for donation impact. The entrepreneur who makes a billion-$ company hasn't done all the work themselves, and it's facially plausible that some Shapley/whatever credit sharing between these founders and (e.g.) current junior staff would not be as disproportionate as the money which ends up in their respective bank accounts.

Maybe not: perhaps the rewards for 'getting things off the ground', taking lots of risk, etc. do mean the tech-founder megadonor bucks should be attributed ~wholly to them. But similar reasoning could be applied to direct work as well. Perhaps the lion's share of all contributions to global health work up to now should be accorded to (e.g.) Peter Singer, as all subsequent work is essentially 'footnotes to Famine, Affluence, and Morality'; or AI work to those who toiled in the vineyards over a decade ago, even if their work is now a much smaller proportion of the contemporary aggregate contribution.
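To gesture at the kind of credit sharing meant here, a toy Shapley calculation (the production function and the $M/year figures are invented for illustration, not a claim about any actual org): a 'firm' where the founder is necessary, but each junior still adds a lot on the margin.

from itertools import permutations

players = ["founder", "junior_1", "junior_2"]

def value(coalition):
    """Hypothetical production function ($M/year): nothing happens without the
    founder; the founder alone produces 10, and each junior adds a further 10."""
    c = set(coalition)
    if "founder" not in c:
        return 0.0
    return 10.0 + 10.0 * (len(c) - 1)

def shapley(players, value):
    """Average each player's marginal contribution over all join orders."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = []
        for p in order:
            before = value(coalition)
            coalition.append(p)
            totals[p] += value(coalition) - before
    return {p: t / len(orders) for p, t in totals.items()}

print(shapley(players, value))
# -> founder 20.0, junior_1 5.0, junior_2 5.0: the founder gets 2/3 of the credit,
#    not the ~100% suggested by where the equity (and hence the donations) ends up.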

Help me find the crux between EA/XR and Progress Studies

I'd guess the story might be a) 'XR primacy' (roughly, that x-risk reduction has far bigger bang for one's buck than anything else re. impact) and b) conditional on a), an equivocal view on the value of technological progress: although some elements are likely good, others are likely bad, so the value of generally 'buying the index' of technological development (as I take Progress Studies to be keen on) is uncertain.

"XR primacy"

Other comments have already illustrated the main points here, sparing readers another belaboured rehearsal from me. The rough story, borrowing from the initial car analogy, is that you have piles of open road/runway available if you need to use it, so velocity and acceleration are in themselves much less important than direction - you can cover much more ground in expectation if you make sure you're not headed into a crash first.

This typically (but not necessarily, cf.) implies longtermism. 'Global catastrophic risk', as a longtermist term of art, plausibly excludes the vast majority of things common sense would call 'global catastrophes'. E.g.:

[W]e use the term “global catastrophic risks” to refer to risks that could be globally destabilizing enough to permanently worsen humanity’s future or lead to human extinction. (Open Phil)

My impression is that a 'century more poverty' probably isn't a GCR in this sense. As the (pre-industrial) normal, the track record suggests it wasn't globally destabilising to humanity or human civilisation. Even more so if the matter is one of a somewhat greater versus somewhat lower rate of its elimination.

This makes its continued existence no less an outrage to the human condition. But, weighed in the scales against threats to humankind's entire future, it becomes a lower priority. Insofar as these things are traded off (which seems implicit in any prioritisation given both compete for resources, whether or not there's any direct cross-purposes in activity), the currency of XR reduction has much greater value.

Per discussion, there are a variety of ways the story sketched above could be wrong:

  • Longtermist consequentialism (the typical, if not uniquely necessary, motivation for the above) is false, so our exchange rate for common-sense global catastrophes (inter alia) versus XR should be higher.
  • XR is either very low, or intractable, so XR reduction isn't a good buy even on the exchange rate XR views endorse. 
  • Perhaps the promise of the future could be lost not with a bang but a whimper. Perhaps prolonged periods of economic or technological stagnation should be substantial subjects of XR concern in their own right, so PS-land and XR-land converge on PS-y aspirations.

I don't see Pascalian worries as looming particularly large apart from these. XR-land typically takes the disjunction of risks and the envelope of mitigation to have substantial, non-Pascalian values. Although costly activity that buys an absolute risk reduction of 1/trillions looks dubious to common sense, 1/thousands+ (e.g.) is commonplace (and commonsensical) when the stakes are high enough.

It's not clear how much of a strike it is against a given view that Pascalian counter-examples can be constructed from its resources, and that, although the view wouldn't endorse them, it lacks a crisp story of decision-theoretic arcana for why not. Facially, PS seems susceptible to the same (e.g. a PS-er's work is worth billions per year, given the yield if you compound an (in expectation) 0.0000001% marginal increase in world GDP growth for centuries).
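To make the parenthetical arithmetic concrete - with illustrative numbers I'm supplying (the ~$100 trillion baseline, 2% growth rate, and the horizons are my assumptions, not figures from the post) - even a minuscule permanent bump to the growth rate compounds into large absolute differences in annual output over centuries, which is the engine of the PS-flavoured Pascalian counter-example.

# Sketch of the compounding arithmetic; all inputs are illustrative assumptions.
gdp_0 = 100e12          # current gross world product, ~$100 trillion
baseline_growth = 0.02  # assumed 2%/year baseline growth
bump = 1e-9             # a 0.0000001% (i.e. 1e-9 as a fraction) addition to the growth rate

for years in (100, 300, 500):
    baseline = gdp_0 * (1 + baseline_growth) ** years
    boosted = gdp_0 * (1 + baseline_growth + bump) ** years
    extra = boosted - baseline  # extra annual output in that year
    print(f"After {years} years: extra annual output ≈ ${extra:,.0f}")

# The bump is invisible at first, but after a few centuries the *absolute*
# difference in annual output runs into the billions - exactly the sort of
# expected-value arithmetic a Pascalian-style critique can exploit.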


Buying the technological progress index?

Granting the story sketched above, there's no straightforward upshot on whether this makes technological progress generally good or bad. The ramifications of any given technological advance for XR are hard to forecast; aggregating over all of them to get a moving average is harder still. Yet there seems a lot to temper the fairly unalloyed enthusiasm around technological progress I take to be the typical attitude in PS-land.

  • There's obviously the appeal to the above sense of uncertainty: if at least significant bits of the technological progress portfolio credibly have very bad dividends for XR, you probably hope humanity is pretty selective and cautious in its corporate investments. It'd also be generally surprising if what is best for XR were also best for 'progress' (cf.)
  • The recent track record doesn't seem greatly reassuring. The dual-use worries around nuclear technology remain profound 70+ years after its initial development, and 'derisking' these downsides remains remote. It's hard to assess the true ex ante probability of a strategic nuclear exchange during the Cold War, or exactly how disastrous it would have been, but pricing in reasonable estimates of both probably takes a large chunk out of the generally sunny story of progress we observe ex post over the last century.
  • Insofar as folks consider disasters arising from emerging technologies (like AI) to represent the bulk of XR, this supplies concern against their rapid development in particular, and against exuberant technological development which may generate further dangers in general.

Some of this may just be a confusion of messaging (e.g. even though PS folks portray themselves as more enthusiastic and XR folks less so, both would actually be similarly un/enthusiastic for each particular case). I'd guess more of it is more substantive around the balance of promise and danger posed by given technologies (and the prospects/best means to mitigate the latter), which then feeds into more or less 'generalized techno-optimism'.

But I'd guess the majority of the action is around the 'modal XR account' of XR being a great moral priority, which can be significantly reduced, and is substantially composed of risks from emerging technology. "Technocircumspection" seems a fairly sound corollary from this set of controversial conjuncts.   

[Link] 80,000 Hours Nov 2020 annual review

[Own views etc.]

I'm unsure why this got downvoted, but I strongly agree with the sentiment in the parent. Although I understand the impulse of "We're all roughly on the same team here, so we can try and sculpt something better than the typically competitive/adversarial relationships between firms, or employers and employees", I think this is apt to mislead one into ideas which are typically economically short-sighted, often morally objectionable, and occasionally legally dubious. 

In the extreme case, it's obviously unacceptable for Org X not to hire candidate A (their best applicant) because they believe it's better they stay at Org Y. Not only (per the parent) is A probably a better judge of where they are best placed,[1] but Org X screws over both itself (it now appoints someone it thinks is not quite as good) and A themselves (who doesn't get the job they want), for the benefit of Org Y.

These sorts of oligopsonistic machinations are at best a breach of various fiduciary duties (e.g. Org X's duty to their donors to use their money to get the best staff, rather than making opaque de facto transfers of labour to another organisation), and at least colourably illegal in many jurisdictions due to labour law around anti-trust, non-discrimination, etc. (see)

Similar sentiments apply to less extreme examples, such as 'not proactively 'poaching'' (the linked case above was about alleged "no cold call" agreements). The typical story for why these practices are disliked is a mix of econ efficiency arguments (e.g. labour market liquidity, competition over conditions is a mechanism for higher performing staff to match into higher performing orgs) and worker welfare ones (e.g. the net result typically disadvantages workers by suppressing their pay, conditions, and reducing their ability to change to roles they prefer).

I think these rationales apply roughly as well to EA-land as to anywhere-else-land. Orgs should accept that staff may occasionally leave for other orgs for a variety of reasons. If they find that they consistently lose out for familiar reasons, they should either get better or accept the consequences of remaining worse.


[1]: Although, for the avoidance of doubt, I think it is wholly acceptable for people to switch EA jobs for wholly 'non-EA' reasons - e.g. "Yeah, I expect I'd do less good at Org X than Org Y, but Org X will pay me 20% more and I want a higher standard of living." Moral sainthood is scarce as well as precious. It is unrealistic to expect all candidates to be saintly in this sense, and mutual pretence to the contrary is unhelpful.

If anything, 'no poaching' (etc.) practices are even worse in these cases than the more saintly 'moving so I can do even more good!' rationale. In the latter case, Orgs are merely being immodest in presuming to know better than applicants what their best opportunity to contribute is; in the former, Orgs conspire to make their employees' lives worse than they could otherwise be.

Draft report on existential risk from power-seeking AI

Maybe not 'insight', but re. 'accuracy' this sort of decomposition is often in the toolbox of better forecasters. I think the longest path I evaluated in a question had 4 steps rather than 6, and I think I've seen other forecasters do similar things on occasion. (The general practice of 'breaking down problems' to evaluate sub-issues is recommended in Superforecasting, IIRC.)

I guess the story for why this works in geopolitical forecasting is that folks tend to overestimate the chance 'something happens' and tend to be underdamped in increasing the likelihood of something based on suggestive antecedents (e.g. the chance of a war given an altercation, etc.). So attending to "even if A, for it to lead to D one should attend to P(B|A), P(C|B), etc." tends to lead to downward corrections.

Naturally, you can mess this up. Although it's not obvious you are at greater risk arranging your decomposed considerations conjunctively rather than disjunctively: "all of A-E must be true for P to be true" ~also means "if any of ¬A-¬E are true, then ¬P". In natural language and heuristics, I can imagine "here are several different paths to P, and each of these seems not-too-improbable, so P must be highly likely" could also lead one astray.
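A toy numerical sketch of both failure modes (the probabilities are invented for illustration): multiplying out a conjunctive chain of individually plausible steps gives a much lower answer than gut feel suggests, while naively adding up several 'not-too-improbable' disjunctive paths overstates the chance that at least one occurs.

from math import prod

# Conjunctive chain: the event requires every step, so its probability is the
# product of the (conditional) step probabilities - here each set to 0.7.
step_probs = [0.7, 0.7, 0.7, 0.7]
print(f"Chain of four 70% steps: {prod(step_probs):.2f}")  # ≈ 0.24

# Disjunctive paths: naive addition overstates P(at least one path) when the
# paths are not mutually exclusive. Treating them as independent (a toy assumption):
path_probs = [0.3, 0.3, 0.3]
naive_sum = sum(path_probs)                         # 0.90
at_least_one = 1 - prod(1 - p for p in path_probs)  # ≈ 0.66
print(f"Naive sum: {naive_sum:.2f}; P(at least one, if independent): {at_least_one:.2f}")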
