All of Gregory_Lewis's Comments + Replies

High Impact Medicine, 6 months later - Update & Key Lessons

Howdy, and belatedly:

0) I am confident I understand; I just think it's wrong. My impression is that HIM's activity is less 'using reason and evidence to work out what does the most good' and more 'using reason and evidence to best reconcile prior career commitments with EA principles'.

By analogy, if I were passionate about (e.g.) HIV/AIDS, education, or cancer treatment in LICs, the EA recommendation would not (/should not) be that I presume I maintain this commitment, but rather that I soberly evaluate how interventions within these areas stack up versus all othe... (read more)

1 · [comment deleted] · 19d

I think your two comments here are well-argued, internally consistent, and strong. However, I think I disagree with

As, to a first approximation, reality works in first-order terms

in the context of EA career choice writ large, which I think may be enough to flip the bottom-line conclusion.

I think the crux for me is that if the differences in object-level impact across people/projects are high enough, then for anybody whose career or project is not in the small subset of the most impactful careers/projects, their object-level impacts will ... (read more)

It seems bizarre that, without my strong upvote, this comment is at minus 3 karma.

Karma polarization seems to have become much worse recently. I think a revision of the karma system is urgently needed.

Terminate deliberation based on resilience, not certainty

Thanks. With the benefit of hindsight, the blue envelopes probably should have been dropped from the graph, leaving the trace alone:

• As you and Kwa note, having a 'static' envelope you are bumbling between looks like a violation of the martingale property - the envelope should be tracking the current value more (but I was too lazy to draw that).
• I agree that, all else equal, you should expect resilience to increase with more deliberation - as you say, you are moving towards the limit of perfect knowledge with more work. Perhaps graphs 3 and 4 [I've adde
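The martingale point in the first bullet can be illustrated with a toy simulation (my own sketch, assuming a simple Beta-Bernoulli updating model that is not in the original post): the deliberator's running estimate tracks the current best guess, while the uncertainty envelope around it narrows as evidence accumulates.

```python
import random

random.seed(0)

# Toy sketch (assumed setup, not from the post): estimating an unknown coin bias.
# The posterior mean is a martingale - it follows the current best guess - while
# the credible 'envelope' around it shrinks as deliberation accumulates evidence.
true_p = 0.7
a, b = 1, 1  # Beta(1,1) prior on the bias
means, widths = [], []
for _ in range(200):
    if random.random() < true_p:
        a += 1  # observed heads
    else:
        b += 1  # observed tails
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    means.append(mean)
    widths.append(2 * var ** 0.5)  # roughly one sd either side

print(round(means[-1], 2))     # final estimate, close to the truth
print(widths[0] > widths[-1])  # the envelope narrows with more deliberation
```

Unlike the static envelopes in the original graph, the simulated envelope re-centres on the current estimate at every step, which is what the martingale property demands.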
High Impact Medicine, 6 months later - Update & Key Lessons

I worry a lot of these efforts are strategically misguided. I don't think noting 'EA should be a question', 'it's better to be inclusive', 'positive approach and framing' (etc.) are adequate justifications for soft-pedalling uncomfortable facts which are nonetheless important for your audience to make wise career decisions. The two most important here are:

1. 'High-impact medicine' is, to a first approximation, about as much of a misnomer as 'High-impact architecture' (or 'high-impact [profession X]'). Barring rare edge cases, the opportunities to have the great

Hi Greg,

Big picture, I wanted to clarify two specific points where you have misunderstood the aims of the organisation (we take full responsibility for this, however: if you have got this impression, it is possible others have too).

1. We do not necessarily encourage people to apply for and study medicine. We are not giving any advice to high school level students about degree choices and paths to impact. To quote what you wrote, "medicine often selects for able, conscientious, and altruistic people, who can do a lot of go... (read more)

My bargain with the EA machine

Bravo. I think diagrams are underused as crisp explanations, and this post gives an excellent demonstration of their value (among many other merits).

A minor point (cf. ThomasWoodside's remarks): I'd be surprised if one really does (or really should) accept no trade-offs of "career quality" for "career impact". The 'isoquoise' may not slant all the way down from status quo to impactful toil, but I think it should slant down at least a little (contrariwise, you might also be willing to trade away some impact for higher QoL etc.).

I feel anxious that there is all this money around. Let's talk about it

[own views etc]

I think the 'econ analysis of the EA labour market' has been explored fairly well - I highly recommend this treatment by Jon Behar. I also find myself (and others) commonly in the comment threads banging the drum for it being beneficial to pay more, or for why particular proposals not to do so (or to pay EA employees less) are not good ones.

Notably, 'standard economic worries' point in the opposite direction here.  On the standard econ-101 view, "Org X struggles as competitor Org Y can pay higher salaries", or "Cause ~neutral people migrate ... (read more)

8 · MichaelPlant · 3mo
[also speaking in a personal capacity, etc.] Hello Greg. I suspect we're speaking at cross-purposes and doing different 'econ 101' analyses. If the EA world were one of perfect competition (lots of buyers and sellers, competition of products, ease of entry and exit, buyers have full information, equal market share) I'd be inclined to agree with you. In that case, I would effectively be arguing for less competitive organisations to get subsidies. That is not, however, the world I observe.

Suppose I describe a market along the following lines. One or two firms consume over 90% of the goods whilst also being sellers of goods. There are only a handful of other sellers. The existing firms coordinate their activities with each other, including the sellers mostly agreeing not to directly compete over products. Access to the market and to information about the available goods is controlled by the existing players. Some participants fear (rightly or wrongly) that criticising the existing players or the structure of the market will result in them being blacklisted.

Does such a market seem problematically uncompetitive? Would we expect there to be non-trivial barriers to entry for new firms seeking to compete on particular goods? Does this description bear any similarity to the EA world? Unfortunately, I fear the answer to all three questions is yes.

So, to draw it back to the original point, for the market incumbents to offer very high salaries to staff is one way in which such firms might use their market power to 'price out' the competition. Of course, if one happened to think that it would be bad, all things considered, for that competition to succeed, then one might not mind this state of affairs.
Minimalist axiologies and positive lives

Thanks for the reply, and with apologies for brevity.

Re. 1 (i.e. "The primary issue with the VRC is aggregation rather than trade-off"): I take it we should care about the plausibility of axiological views with respect to something like 'commonsense' intuitions, rather than those a given axiology urges us to adopt. It's at least opaque to me whether commonsense intuitions are more offended by 'trade-offy/CU' or 'no-trade-offy/NU' intuitions. On the one hand:

• "Any arbitrarily awful thing can be better than nothing providing it is counterbalanced by k good things
5 · Teo Ajantaival · 7mo

Minimalist axiologies and positive lives

Do you mean trivial pains adding up to severe suffering? I can see how if you would accept lexicality or thresholds to prevent this, you could do the same to prevent trivial pleasures outweighing severe suffering or greater joys.

Yeah, that's it. As you note these sorts of moves seem to have costs elsewhere, but if one thinks on balance they nonetheless should be accepted, then the V/RC isn't really a strike against 'symmetric axiology' simpliciter, but merely 'symmetric axiologies with a mistaken account of aggregation'. If instead 'straightforward/unadorn... (read more)

8 · MichaelStJules · 8mo
That all seems fair to me. With respect to 2, I'm thinking something on the order of insect brains. There are reasons to expect pleasure to scale sublinearly with brain size even in artificial brains optimized for pleasure, e.g. a lot of unnecessary connections that don't produce additional value, greater complexity in building larger brains without getting things wrong, or even giving weight to the belief that integrating minds actually reduces value, say because of bottlenecks in some of the relevant circuits/functions. Smaller brains are easier/faster to run in parallel. This is assuming the probability of consciousness doesn't dominate. There may also be scale efficiencies, since the brains need containers and to be connected to things (even digitally?) or there may be some other overhead. So, I don't think it would be too surprising to find the optimal average in the marginally good range.
Minimalist axiologies and positive lives

Tradeoffs like the Very Repugnant Conclusion (VRC) are not only theoretical, because arguments like that of Bostrom (2003) imply that the stakes may be astronomically high in practice. When non-minimalist axiologies find the VRC a worthwhile tradeoff, they would presumably also have similar implications on an arbitrarily large scale. Therefore, we need to have an inclusive discussion about the extent to which the subjective problems (e.g. extreme suffering) of some can be “counterbalanced” by the “greater (intrinsic) good” for others, because this has dire

6 · antimonyanthony · 7mo
I think it's useful to have a thought experiment to refer to other than Omelas to capture the intuition of "a perfect, arbitrarily large utopia is better than a world with arbitrarily many miserable lives supposedly counterbalanced by sufficiently many good lives." Because:

• The "arbitrarily many" quantifiers show just how extreme this can get, and indeed the sort of axiology that endorses the VRC is committed to judging the VRC as better the more you multiply the scale, which seems backwards to my intuitions.
• The first option is a utopia, whereas the Omelas story doesn't say that there's some other civilization that is smaller yet still awesome and has no suffering.
• Omelas as such is confounded by deontological intuitions, and the alternative postulated in the story is "walking away," not preventing the existence of such a world in the first place. I've frequently found that people get hung up on the counterproductiveness of walking away, which is true, but irrelevant to the axiological point I want to make. The VRC is purely axiological, so more effective at conveying this.

So while I agree that aggregation is an important part of the VRC, I also disagree that the "nickel and diming" is at the heart of this. To my intuitions, the VRC is still horrible and borderline unacceptable if we replace the just-barely-worth-living lives with lives that have sufficiently intense happiness, intense enough to cross any positive lexical threshold you want to stipulate. In fact, muzak and potatoes lives as Parfit originally formulated them (i.e., with no suffering) seem much better than lots of lives with both lexically negative and lexically "positive" experiences. I'll eagerly accept Parfit's version of the RC [https://forum.effectivealtruism.org/posts/AwAJmJexceHa6QRDL/antimonyanthony-s-shortform?commentId=cAGLmEMbEE9WoiqXQ]. (If you want to say this is contrary to common sense intuitions, that's fine, since I don't put much stock in c

(Edit: Added a note(*) on minimalist views and the extended VRC of Budolfson & Spears.)

Thanks for highlighting an important section for discussion. Let me try to respond to your points. (I added the underline in them just to unburden the reader’s working memory.)

This seems wrong to me,

The quoted passage contained many claims; which one(s) seemed wrong to you?

and confusing 'finding the VRC counter-intuitive' with 'counterbalancing (/extreme) bad with good in any circumstance is counterintuitive' (e.g. the linked article to Omelas) is unfortunate -
8 · MichaelStJules · 8mo

Clarifying the Petrov Day Exercise

I've been accused of many things in my time, but inarticulate is a new one. ;)

You do use lots of big words

Clarifying the Petrov Day Exercise

I strongly agree with all this. Another downside I've felt from this exercise is that it feels like I've been dragged into a community ritual I'm not really a fan of, where my options are a) tacit support (even if it is just deleting the email where I got the codes with a flicker of irritation) or b) an ostentatious and disproportionate show of disapproval.

I generally think EA- and longtermist- land could benefit from more 'professional distance': that folks can contribute to these things without having to adopt an identity or community that steadily metast... (read more)

-2 · ryan_b · 9mo
I am curious: if you believe professional distance is a good thing for EA, then what is your explanation for EA not being a solved problem already? The existing networks of professionals had all the analytical knowledge required; save for the people who joined up straight out of college, pretty much everyone currently working in EA was already in those networks.

So to summarise, you seem not to be a fan of the ritual? :P

A Primer on the Symmetry Theory of Valence

Thanks, but I've already seen them. Presuming the implication here is something like "Given these developments, don't you think you should walk back what you originally said?", the answer is "Not really, no": subsequent responses may be better, but that is irrelevant to whether earlier ones were objectionable; one may be making good points, but one can still behave badly whilst making them.

(Apologies if I mistake what you are trying to say here. If it helps generally, I expect - per my parent comment - to continue to affirm what I've said before however the morass of commentary elsewhere on this post shakes out.)

5 · Abby Hoskin · 10mo
Just want to be clear, the main post isn't about analyzing eigenmodes with EEG data. It's very funny that when I am intellectually honest enough to say I don't know about one specific EEG analysis that doesn't exist and is not referenced in the main text, people conclude that I don't have expertise to comment on fMRI data analysis or the nature of neural representations. Meanwhile QRI does not have expertise to comment on many of the things they discuss, but they are super confident about everything and in the original posts especially did not clearly indicate what is speculation versus what is supported by research. I continue to be unconvinced with the arguments laid out, but I do think both the tone of the conversation and Mike Johnson's answers improved after he was criticized. (Correlation? Causation?)
9 · MikeJohnson · 10mo
Gregory, I'll invite you to join the object-level discussion between Abby and me.

Greg, I want to bring two comments that have been posted since your comment above to your attention:

1. Abby said the following to Mike:

Your responses here are much more satisfying and comprehensible than your previous statements, it's a bit of a shame we can't reset the conversation.

2. Another anonymous commentator (thanks to Linch for posting) highlights that Abby's line of questioning regarding EEGs ultimately resulted in a response satisfactory to her and which she didn't have the expertise to further evaluate:

if they had given the response that they

A Primer on the Symmetry Theory of Valence

[Own views]

I'm not sure 'enjoy' is the right word, but I also noticed the various attempts to patronize Hoskin.

This ranges from the straightforward "I'm sure once you know more about your own subject you'll discover I am right":

I would say I expect you to be surprised by certain realities of neuroscience as you complete your PhD

To 'well-meaning suggestions' alongside the implication that her criticism arises from some emotional reaction rather than from her strong and adverse judgement of its merit:

I’m a little baffled by the emotional intensity here but I’d su

Greg, I have incredible respect for you as a thinker, and I don't have a particularly high opinion of the Qualia Research Institute. However, I find your comment unnecessarily mean: every substantive point you raise could have been made more nicely and less personally, in a way more conducive to mutual understanding and more focused on an evaluation of QRI's research program. Even if you think that Michael was condescending or disrespectful to Abby, I don't think he deserves to be treated like this.

2 · MikeJohnson · 10mo
Hi Gregory, I'll own that emoticon. My intent was not to belittle, but to show I'm not upset and I'm actually enjoying the interaction. To be crystal clear, I have no doubt Hoskin is a sharp scientist and cast no aspersions on her work. Text can be a pretty difficult medium for conveying emotions (things can easily come across as either flat or aggressive).
Is effective altruism growing? An update on the stock of funding vs. people

[Predictable disclaimers, although in my defence, I've been banging this drum long before I had (or anticipated to have) a conflict of interest.]

I also find the reluctance to wholeheartedly endorse the 'econ-101' story (i.e. if you want more labour, try offering more money for people to sell labour to you) perplexing:

• EA-land tends to be sympathetic to using 'econ-101' accounts reflexively on basically everything else in creation. I thought the received wisdom was that these approaches are reasonable at least for first-pass analysis, and we'd need persuading to depart greatl
2 · Benjamin_Todd · 10mo
I'm just saying that when we think offering more salary will help us secure someone, we generally do it. This means that further salary raises seem to offer low benefit:cost. This seems consistent with econ 101.

Likewise, it's possible to have a lot of capital, but for the cost-benefit of raising salaries to be below the community bar (which is something like invest the money for 20yr and spend on OP's last dollar - which is a pretty high bar). Having more capital increases the willingness to pay for labour now to some extent, but tops out after a point.

To be clear, I'm sympathetic to the idea that salaries should be even higher (or we should have impact certificates or something). My position is more that (i) it's not an obvious win (ii) it's possible for the value of a key person to be a lot higher than their salary, without something going obviously wrong.
Denise_Melchin's Shortform

If anything, income seems to be unusually heavy-tailed compared to direct work (the top two donors in EA account for the majority of the capital, but I don't think the top two direct workers account for the majority of the value of the labour).

Although I think this stylized fact remains interesting, I wonder if there's an ex-ante/ ex-post issue lurking here. You get to see the endpoint with money a lot earlier than direct work contributions, and there's probably a lot of lottery-esque dynamics. I'd guess these as corollaries:

First, the ex ante 'expected $ra... (read more)

2 · Linch · 1y
Re your third point: I find it plausible that both startup earnings and explicit allocation of research insight can to at least some degree be modeled as a tournament for "being first/best," which means you have a pretty extreme distribution if you are trying to win resources (hopefully for altruism) like $s or prestige, but a much less extreme distribution if we're trying to estimate actual good done while trying to spend down such resources. Put another way, I find it farcical to think that Newton should get >20% of the credit for inventing calculus (given both the example of Leibniz and that many of the ideas were floating around at the time), probably not even >5%, yet I get the distinct impression (never checked with polling or anything) that many people would attribute the invention of calculus solely or mostly to Newton. Similarly, there are two importantly different applied ethics questions: whether it's correct to give billionaires billions of dollars for their work vs whether individuals should try to make billions of dollars to donate.
2 · Benjamin_Todd · 1y
That makes sense, thanks for the comment. I think you're right that looking at ex post outcomes doesn't tell us that much. If I try to make ex ante estimates, then I'd put someone pledging 10% at a couple of thousand dollars per year to the EA Funds or equivalent. But I'd probably also put similar (or higher) figures on the value of the other ways of contributing above.
Help me find the crux between EA/XR and Progress Studies

I'd guess the story might be a) 'XR primacy' (~~ that x-risk reduction has far bigger bang for one's buck than anything else re. impact) and b) conditional on a), an equivocal view on the value of technological progress: although some elements likely good, others likely bad, so the value of generally 'buying the index' of technological development (as I take Progress Studies to be keen on) to be uncertain.

"XR primacy"

Other comments have already illustrated the main points here, sparing readers another belaboured rehearsal from me. The rough story, bor... (read more)

[Link] 80,000 Hours Nov 2020 annual review

[Own views etc.]

I'm unsure why this got downvoted, but I strongly agree with the sentiment in the parent. Although I understand the impulse of "We're all roughly on the same team here, so we can try and sculpt something better than the typically competitive/adversarial relationships between firms, or employers and employees", I think this is apt to mislead one into ideas which are typically economically short-sighted, often morally objectionable, and occasionally legally dubious.

In the extreme case, it's obviously unacceptable for Org X to not hire c... (read more)

Draft report on existential risk from power-seeking AI

Maybe not 'insight', but re. 'accuracy' this sort of decomposition is often in the toolbox of better forecasters. I think the longest path I evaluated in a question had 4 steps rather than 6, and I think I've seen other forecasters do similar things on occasion. (The general practice of 'breaking down problems' to evaluate sub-issues is recommended in Superforecasting, IIRC.)

I guess the story why this works in geopolitical forecasting is folks tend to overestimate the chance 'something happens' and tend to be underdamped in increasing the likelihood of som... (read more)

Thoughts on being overqualified for EA positions

Similar to Ozzie, I would guess the 'over-qualified' hesitation often has less to do with, "I fear I would be under-utilised and become disinterested if I took a more junior role, and thus do less than the most good I could", but a more straightforward, "Roles which are junior, have unclear progression and don't look amazing on my CV if I move on aren't as good for my career as other opportunities available to me."

This opportunity cost (as the OP notes) is not always huge, and it can be outweighed by other considerations. But my guess is it is often ... (read more)

The disincentives listed here make sense to me. I would just add that people's motivations are highly individual, and so people will differ in how much weight they put on any of these points or on how good their CV looks.

Personally, I've moved from Google to AMF and have never looked back. The summary: I'm much more motivated now; the work is actually more varied and technically challenging than before, even though the tech stack is not as close to the state of the art. People are (as far as I can tell) super qualified in both organizations. I'm happy to chat personally about my individual motivations if anyone who reads this feels that it would benefit them.

Launching a new resource: 'Effective Altruism: An Introduction'

Although I understand the nationalism example isn't meant to be analogous, my impression is this structural objection only really applies when our situation is analogous.

If historically EA paid a lot of attention to nationalism (or trans-humanism, the scepticism community, or whatever else) but had by-and-large collectively 'moved on' from these, contemporary introductions to the field shouldn't feel obliged to cover them extensively, nor treat the relative merits of what they focus on now versus then as an open question.

Yet, however you slice... (read more)

Launching a new resource: 'Effective Altruism: An Introduction'

Per others: this selection doesn't really 'lean towards a focus on longtermism'; rather, it almost exclusively focuses on longtermism: roughly, any 'object level' cause which isn't longtermism gets a passing mention, whilst longtermism itself is the subject of 3/10 of the selection. Even some not-explicitly-longtermist inclusions (e.g. Tetlock, MacAskill, Greaves) 'lean towards' longtermism either in terms of subject matter or affinity.

Despite being a longtermist myself, I think this is dubious for a purported 'introduction to EA as a whole': EA isn't all-bu... (read more)

What Would A Longtermist Flag Look Like?

I also thought along similar lines, although (lacking subtlety) I thought you could shove in a light cone from the dot, which can serve double duty as the expanding future. Another thing you could do is play with a gradient so this curve/the future gets brighter as well as bigger, but perhaps someone who can at least successfully colour in has a comparative advantage here.

2 · Raemon · 1y
Oh man, this is pretty cool. I actually like the fact that it's sort of jagged and crazy.
2 · Ben Pace · 1y
Appreciate you drawing this, I like the idea.

I agree with others that this concept is great, but that the gradient probably isn't a great idea.

Here's a very quick inkscape version without the dot. (Any final version would want a smoother curve but I wanted to get this done quickly)

While I personally like monochrome a lot (the Cornish flag is one of my favourites), I worry that it will be a bit too stark for most people. Changing the colour could also help reduce the association with space a bit. Here's a couple of quick versions using Cullen's colour scheme from the hourglass concept below.

7 · RyanCarey · 1y
Yeah, this is cool! Although maybe too expansionist - it suggests that we plan to conquer our light cone, which might mean defending it against non-Earth-originating life. Separately, I guess adding a colour gradient is bad, since that's harder to draw, and flags usually don't have them.
6 · Jack Malde · 1y
I like this. Ryan's original example, whilst a pretty good suggestion overall, gives the impression of insignificance, whereas this one gives the impression of insignificance mixed with vast potential and hope for something more. The only reservation I have is that this flag might imply that longtermism is only valid if we can spread to the stars. I think the jury is still out on whether or not this is actually the case? It has been suggested that existential security may only be possible if we spread out in the universe, but I'm not sure if this is generally accepted? Perhaps I'm being overly nitpicky though.

A less important motivation/mechanism is that probabilities/ratios (unlike odds) are bounded above by one. For rare events, 'doubling the probability' and 'doubling the odds' give basically the same answer, but not so for more common events. Loosely, flipping a coin three times 'trebles' my risk of observing it land tails, but the resulting probability isn't 1.5. (cf).

E.g.

Sibling abuse rates are something like 20% (or 80% depending on your definition). And is the most frequent form of household abuse. This means by adopting a child you are adding something like a
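The odds-versus-probability point above can be made concrete with a quick numerical sketch (my own illustration, not part of the original comment):

```python
# 'Trebling' the chance of seeing at least one tail across three coin flips:
# probabilities are capped at 1, so they cannot scale freely, but odds can.
def p_at_least_one_tail(n_flips: int) -> float:
    """Probability of at least one tail in n fair flips."""
    return 1 - 0.5 ** n_flips

def odds(p: float) -> float:
    """Convert a probability to odds in favour."""
    return p / (1 - p)

print(p_at_least_one_tail(1), p_at_least_one_tail(3))  # 0.5 0.875 - not 1.5!
print(odds(0.5), odds(p_at_least_one_tail(3)))         # 1.0 7.0 - odds are unbounded
```

For a rare event (say p = 0.01), the odds are ≈ 0.0101, so scaling either quantity gives nearly the same answer; it is only for common events like the 20-80% rates quoted above that the two come apart.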

2 · Habryka · 1y
Yep, I should have definitely kept the probabilities in log-form, just to be less confusing. It wouldn't have made a huge difference to the outcome, but it seems better practice than the thing that I did.
Tristan Cook's Shortform

Most views in population ethics can entail weird/intuitively toxic conclusions (cf. the large number of 'X conclusions' out there). Trying to weigh these up comparatively is fraught.

In your comparison, it seems there's a straightforward dominance argument if the 'OC' and 'RC' are the things we should be paying attention to. Your archetypal classical utilitarian is also committed to the OC as 'large increase in suffering for one individual' can be outweighed by a large enough number of smaller decreases in suffering for others - aggregation still applies to... (read more)

8 · Tristan Cook · 1y
Thanks for such a detailed and insightful response Gregory.

Thanks for pointing this out. I think I realised this extra bullet-biting only after making the post.

This makes a lot of sense, and it's not something I'd considered at all; it seems pretty important when playing counterexample-intuition-tennis.

Again, this feels really useful and something I want to think about further. I think my slight negative intuition comes from the fact that although I may be willing to endure some suffering for some upside, I wouldn't endorse inflicting suffering (or risk of suffering) on person A for some upside for person B. I don't know how much work the differences of fairness and personal identity (i.e. the being that suffered gets the upside) between the examples are doing, and in what direction my intuition is 'less' biased.

I like this example a lot! and definitely lean A > Z. Reframing the situation, my intuition becomes less clear: consider A', in which TREE(TREE(3)) lives are in perfect bliss, but there are also TREE(TREE(3)) beings that momentarily experience a single pinprick before ceasing to exist. This is clearly equivalent to A in the axiology, but my intuition is less clear (if at all) that A' > Z. As above, I'm unsure how much work personal identity is doing. In my mind, I find population ethics easier to think about by considering 'experienced moments' rather than individuals.

Thanks for pointing out the error. I think I'm right in saying that the 'welfare capped by 0' axiology is non-anti-egalitarian, which I conflated with absolute NU in my post (which is anti-egalitarian as you say). The axiologies are much more distinct than I originally thought.
Complex cluelessness as credal fragility

[Mea culpa re. messing up the formatting again]

1) I don't closely follow the current state of play in terms of 'shorttermist' evaluation. The reply I hope (e.g.) a Givewell Analyst would make to (e.g.) "Why aren't you factoring in impacts on climate change for these interventions?" would be some mix of:

a) "We have looked at this, and we're confident we can bound the magnitude of this effect to pretty negligible values, so we neglect them in our write-ups etc."

b) "We tried looking into this, but our uncertainty is highly resilient (and our best guess doesn'... (read more)

2 · MichaelStJules · 1y
80,000 Hours [https://80000hours.org/problem-profiles/climate-change/#is-the-possibility-of-extreme-climate-change-an-existential-risk] and Toby Ord [https://80000hours.org/podcast/episodes/toby-ord-the-precipice-existential-risk-future-humanity/#climate-change-005108] at least think that climate change could be an existential risk, and 80,000 Hours ranks it as a higher priority than global health and poverty, so I think it's not obvious that the effects would be negligible (assuming total utilitarianism, say) if they tried to work through it, although they might still think so.

Other responses they might give:

1. GiveWell-recommended charities mitigate x-risk more than they worsen it, or do more good for the far future in other ways. Maybe there's a longtermist case for growth. It doesn't seem 80,000 Hours really believes this, though, or else health in poor countries would be higher up. Also, this seems like suspicious convergence, but they could still think the charities are justified primarily by short-term effects, if they think the long-term ones are plausibly close to 0 in expectation. Or,
2. GiveWell discounts the lives of future people (e.g. with person-affecting views, possibly asymmetric ones, although climate change could still be important on some person-affecting views), which falls under your point c). I think this is a plausible explanation for GiveWell's views based on what I've seen.

I think another good response (although not the one I'd expect) is that they don't need to be confident the charities do more good than harm in expectation, since it's actually very cheap to mitigate any possible risks from climate change from them by also donating to effective climate change charities, even if you're deeply uncertain about how important climate change is. I discuss this approach more here [https://forum.effectivealtruism.org/posts/Mig4y9Duu6pzuw3H4/hedging-against-deep-and-moral-uncertainty]. The resul
Complex cluelessness as credal fragility

FWIW, I don't think 'risks' is quite the right word: sure, if we discover a risk which was so powerful and so tractable that we end up overwhelming the good done by our original intervention, that obviously matters. But the really important thing there, for me at least, is the fact that we apparently have a new and very powerful lever for impacting the world. As a result, I would care just as much about a benefit which in the medium term would end up being worth >>1x the original target good (e.g. "Give Directly reduces extinction risk by reducing po

Complex cluelessness as credal fragility

Belatedly:

I read the stakes here differently to you. I don't think folks thinking about cluelessness see it as substantially an exercise in developing a defeater to 'everything which isn't longtermism'. At least, that isn't my interest, and I think the literature has focused on AMF etc. more as a salient example to explore the concepts than as an important subject to apply them to.

The AMF discussions around cluelessness in the OP are intended as toy example - if you like, deliberating purely on "is it good or bad to give to AMF versus this particu... (read more)

6AGB1y
This makes some sense to me, although if that's all we're talking about I'd prefer to use plain English, since the concept is fairly common. I think this is not all other people are talking about, though; see my discussion [https://forum.effectivealtruism.org/posts/Q3ZBt3X8aeLaWjbhK/complex-cluelessness-as-credal-fragility?commentId=Q3LjoHMpjSfNYNcZC] with MichaelStJules.

FWIW, I don't think 'risks' is quite the right word: sure, if we discover a risk which was so powerful and so tractable that we end up overwhelming the good done by our original intervention, that obviously matters. But the really important thing there, for me at least, is the fact that we apparently have a new and very powerful lever for impacting the world. As a result, I would care just as much about a benefit which in the medium term would end up being worth >>1x the original target good (e.g. "Give Directly reduces extinction risk by reducing poverty, a known cause of conflict"); the surprisingly-high magnitude of an incidental impact is what is really catching my attention, because it suggests there are much better ways to do good.

I think you were trying to draw a distinction, but FWIW this feels structurally similar to the 'AMF impacts population growth/economic growth' argument to me, and I would give structurally the same response: once you truly believe a factory farm net reduced animal suffering via the wild environment it incidentally destroyed, there are presumably much more efficient ways to destroy wild environments. As a result, it appears we can benefit wild animals much more than farmed animals, and ending factory farming should disappear from your priority list, at least as an end goal (it may come back as an itself-incidental consequence of e.g. 'promoting as much concern for animal welfare as possible').

Is your point just that it does not in fact disappear from people's priority lists in this case? That I'm not well-placed to observe or comment on either way. This I agree is
6MichaelStJules1y
(I suppose I should mention I'm an intern at ACE now, although I'm not speaking for them in this comment.)

These are important points, although I'm not sure I agree with your object-level judgements about how EAs are acting. Also, it seems like you intended to include some links in this comment, but they're missing.

Little favour where? I think this is common in shorttermist EA. As you mention, effects on wild animals are rarely included. GiveWell's analyses ignore most of the potentially important consequences of population growth, and the metrics they present to donors on their top charity page are very narrow, too. GiveWell did look at how (child) mortality rates affect population sizes, and I think there are some discussions of some effects scattered around (although not necessarily tied to population size), e.g., I think they believe economic growth is good. Have they written anything about the effects of population growth on climate change and nonhuman animals? What indirect, plausibly negative effects of the interventions they support have they written about? I think it's plausible they just don't find these effects important or bad, although I wouldn't be confident in such a judgement without looking further into it myself.

Even if you thought the population effects from AMF were plausibly bad and you had deep uncertainty about them, you could target those consequences better with different interventions (e.g. donating to a climate change charity or animal charity), or also support family planning to avoid affecting the population size much in expectation.

I'd be willing to bet factory farming is worse if you include only mammals and birds (and probably fishes), and while including amphibians or invertebrates might lead to deep and moral uncertainty for me (although potentially resolvable with not too much research), it doesn't for some others, and in this case, 'animal suffering averted per $' may not be that far off.
Furthermore, I think it's reasonable

Complex cluelessness as credal fragility

I may be missing the thread, but the 'ignoring' I'd have in mind for resilient cluelessness would be straight-ticket precision, which shouldn't be intransitive (or have issues with the principle of indifference). E.g. say I'm sure I can make no progress on (e.g.) the moral weight of chickens versus humans in moral calculation - maybe I'm confident there's no fact of the matter, or interpretation of the empirical basis is beyond our capabilities forevermore, or whatever else. Yet (I urge) I should still make a precise assignment (which is not obliged to be indifferent/symmetrical), and I can still be in reflective equilibrium between these assignments even if I'm resiliently uncertain.

2MichaelStJules1y
My impression is that assigning precise credences may often just assume away the issue without addressing it, since the assignment can definitely seem more or less arbitrary. The larger your range would be if you entertained multiple distributions, the more arbitrary just picking one is (although using this to argue for multiple distributions seems circular). Or, just compare your choice of precise distribution with your peers', maybe those with similar information specifically; the more variance or the wider the range, the more arbitrary just picking one, and the more what you do depends on the particulars of your priors, which you never chose for rational reasons over others. Maybe this arbitrariness doesn't actually matter, but I think that deserves a separate argument before we settle forever on a decision procedure that is not at all sensitive to it. (Of course, we can settle tentatively on one, and be willing to adopt one that is sensitive to it later if it seems better.)

Complex cluelessness as credal fragility

Mea culpa. I've belatedly 'fixed' it by putting it into text.
Complex cluelessness as credal fragility

The issue is being stuck, more than the range: say it is (0.4, 0.6) rather than (0, 1) - you'd still be inert. Vallinder (2018) discusses this extensively, including issues around infectiousness and generality.

2MichaelStJules1y
You can entertain both a limited range for your prior probability and a limited range of likelihood functions, and use closed (compact) sets if you're away from 0 and 1 anyway. Surely you can update down from 0.6 if you had only one prior and likelihood, and if you can do so with your hardest-to-update distribution at 0.6, then this will reduce the right boundary.

Thoughts on whether we're living at the most influential time in history

For my part, I'm more partial to 'blaming the reader', but (evidently) better people mete out better measure than I in turn. Insofar as it goes, I think the challenge (at least for me) is that qualitative terms can cover multitudes (or orders of magnitude) of precision. I'd take ~0.3% to be 'significant' credence for some values of significant. 'Strong', 'compelling', or 'good' arguments could be an LR of 2 (after all, RCT confirmation can be ~3) or 200. I also think quantitative articulation would help the reader (or at least this reader) better benchmark t... (read more)

4William_MacAskill2y
Thanks Greg - I asked and it turned out I had one remaining day to make edits to the paper, so I've made some minor ones in a direction you'd like, though I'm sure they won't be sufficient to satisfy you. Going to have to get back on with other work at this point, but I think your arguments are important, though the 'bait and switch' doesn't seem totally fair - e.g. the update towards living in a simulation only works when you appreciate the improbability of living on a single planet.

Thoughts on whether we're living at the most influential time in history

But what is your posterior? Like Buck, I'm unclear whether your view is that the central estimate should be (e.g.) 0.1% or 1/1 million. I want to push on this because if your own credences are inconsistent with your argument, the reasons why seem both important to explore and to make clear to readers, who may be misled into taking this at 'face value'. From this on page 13 I guess a generous estimate (/upper bound) is something like 1/1 million for the 'among most important million people':

[W]e can assess the quality of the arguments given in favour of ... (read more)

Thanks for this, Greg. "But what is your posterior? Like Buck, I'm unclear whether your view is that the central estimate should be (e.g.) 0.1% or 1/1 million." I'm surprised this wasn't clear to you, which has made me think I've done a bad job of expressing myself. It's the former, and for the reason of your explanation (2): us being early, being on a single planet, being at such a high rate of economic growth, should collectively give us an enormous update. In the blog post I describe what I call the outside-view arguments, including t... (read more)

Thoughts on whether we're living at the most influential time in history

“It’s not clear why you’d think that the evidence for x-risk is strong enough to think we’re one-in-a-million, but not stronger than that.”

This seems pretty strange as an argument to me. Being one-in-a-thousand is a thousand times less likely than being one-in-a-million, so of course if you think the evidence pushes you to thinking that you’re one-in-a-million, it needn’t push you all the way to thinking that you’re one-in-a-thousand. This seems important to me. Yes, you can give me arguments for thinking that we’re (in expectation at least) at an enormou ... (read more)

8William_MacAskill2y
Thanks, Greg. I really wasn't meaning to come across as super confident in a particular posterior (rather than giving an indicative number for a central estimate), so I'm sorry if I did.
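The size of update being debated here can be made concrete on the odds scale: moving from a 'one in a million' prior to a ~0.1% posterior requires a combined likelihood ratio of about 1,000. A toy sketch (the numbers are illustrative stand-ins, not anyone's considered estimates):

```python
def posterior(prior: float, likelihood_ratio: float) -> float:
    """Bayes update on the odds scale: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Illustrative: a uniform 'one in a million' prior of being among the most
# important million people, with the outside-view evidence (earliness, single
# planet, high growth) compressed into a single likelihood ratio.
print(posterior(1e-6, 1))      # no evidence: posterior stays ~1e-6
print(posterior(1e-6, 1_000))  # an LR of ~1,000 is needed to reach ~0.1%
```

This is why the disagreement over priors versus updates matters so much: with a prior this low, only extremely strong evidence moves the posterior into the range the qualitative language ('significant credence') suggests.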
"It seems more reasonable to say 'our' prior is rather some mixed gestalt on considering the issue as a whole, and the concern about base-rates etc. should be seen as an argument for updating this downwards, rather than a bid to set the terms of the discussion."

I agree with this (though see the discussion with Lukas [https://forum.effectivealtruism.org/posts/j8afBEAa7Xb2R9AZN/thoughts-on-whether-we-re-living-at-the-most-influential?commentId=tQjXiWFiMhht2kuvZ] for some clarification about what we're talking about when we say 'priors', i.e. are we building the fact that we're early into our priors or not).

Avoiding Munich's Mistakes: Advice for CEA and Local Groups

I agree with this in the abstract, but for the specifics of this particular case, do you in fact think that online mobs / cancel culture / groups who show up to protest your event without warning should be engaged with on a good-faith assumption? I struggle to imagine any of these groups accepting anything other than full concession to their demands, such that you're stuck with the BATNA regardless.

I think so. In the abstract, 'negotiating via ultimatum' (e.g. "you must cancel the talk, or I will do this") does not mean one is acting in bad fait... (read more)

Yeah, I think I agree with everything you're saying. I think we were probably thinking of different aspects of the situation -- I'm imagining the sorts of crusades that were given as examples in the OP (for which a good faith assumption seems straightforwardly wrong, and a bad faith assumption seems straightforwardly correct), whereas you're imagining other situations like a university withdrawing affiliation (where it seems far more murky and hard to label as good or bad faith). Also, I realize this wasn't clear before, but I emphatically don't think that making threats is necessarily immoral or even bad; it depends on the context (as you've been elucidating).
Avoiding Munich's Mistakes: Advice for CEA and Local Groups

Another case where 'precommitment to refute all threats' is an unwise strategy (and a case more relevant to the discussion, as I don't think all opponents to hosting a speaker like Hanson either see themselves or should be seen as bullies attempting coercion) is where your opponent is trying to warn you rather than trying to blackmail you. (cf. 1, 2)

Suppose Alice sincerely believes some of Bob's writing is unapologetically misogynistic. She believes it is important one does not give misogynists a platform and implicit approbation. Thus she finds host... (read more)

I agree with parts of this and disagree with other parts. First off:

First, if she is acting in good faith, pre-committing to refuse any compromise for 'do not give in to bullying' reasons means one always ends up at one's respective BATNA even if there were mutually beneficial compromises to be struck.

Definitely agree that pre-committing seems like a bad idea (as you could probably guess from my previous comment).

Second, wrongly presuming bad faith for Alice seems apt to induce her to make a symmetrical mistake, presuming bad faith for you. To Alice ... (read more)

What actually is the argument for effective altruism?

This isn't much more than a rotation (or maybe just a rephrasing), but: when I offer a 10-second-or-less description of Effective Altruism, it is hard to avoid making it sound platitudinous. Things like "using evidence and reason to do the most good", or "trying to find the best things to do, then doing them", are things I can imagine the typical person nodding along with, but then wondering what the fuss is about ("Sure, I'm also a fan of doing more good rather than less good - aren't we all?"). I feel I need to elaborate with a distinctive example (e.g. "I lef... (read more)

4Benjamin_Todd2y
Hi Greg, I agree that when introducing EA to someone for the first time, it's often better to lead with a "thick" version, and then bring in thin later. (I should maybe have better clarified that my aim wasn't to provide a new popular introduction, but rather to better clarify what "thin" EA actually is. I hope this will inform future popular intros to EA, but that involves a lot of extra steps.) I also agree that many objections are about EA in practice rather than the 'thin' core ideas, that it can be annoying to retreat back to thin EA, and that it's often better to start by responding to the objections to thick. Still, I think it would be ideal if more people understood the thin/thick distinction (I could imagine more objections starting with "I agree we should try to find the highest-impact actions, but I disagree with the current priorities of the community because..."), so I think it's worth making some efforts in that direction. Thanks for the other thoughts!

5Aaron Gertler2y
On "large returns to reason": my favorite general-purpose example of this is to talk about looking for a good charity, and then realizing how much better the really good charities were than others I had supported. I bring up real examples of where I donated before and after discovering EA, with a few rough numbers to show how much better I think I'm now doing on the metric I care about ("amount that people are helped"). I like this approach because it frames EA as something that can help a person make a common decision -- "which charity to support?" or "should I support charity X?" -- but without painting them as ignorant or preferring less good (in these conversations, I acknowledge that most people don't think much about decisions like this, and that not thinking much is reasonable given that they don't know how huge the differences in effectiveness can be).

Challenges in evaluating forecaster performance

I'm afraid I'm also not following.
Take an extreme case (which is not that extreme, given I think the average number of forecasts per forecaster per question on GJO is 1.something). Alice predicts a year out P(X) = 0.2 and never touches her forecast again, whilst Bob predicts P(X) = 0.3, but decrements proportionately as time elapses. Say X doesn't happen (and say the right ex ante probability a year out was indeed 0.2). Although Alice > Bob on the initial forecast (and so if we just scored that day she would be better), if we carry forw... (read more)

1Misha_Yagudin2y
This example [https://forum.effectivealtruism.org/posts/JsTpuMecjtaG5KHbb/challenges-in-evaluating-forecaster-performance?commentId=SNDWmBwEDocLWWqj7] is somewhat flawed (because forecasting only once breaks the assumption I am making) but might challenge your intuitions a bit :)

2Misha_Yagudin2y
Thanks, everyone, for engaging with me. I will summarize my thoughts and would likely not actively comment here anymore:

* I think the argument holds given the assumptions I made [(a) the probabilities of forecasting on each day are proportional for the forecasters (previously we assumed uniformity) + (b) equal expected number of active days].
  * > I think the intuition to use here is that the sample mean is an unbiased estimator of the expectation (this doesn't depend on the frequency/number of samples). One complication here is that we are weighing samples potentially unequally, but if we expect each forecast to be active for an equal number of days this doesn't matter.
* The second assumption seems to be approximately correct assuming uniformity, but stops working at the edge [around the resolution date], which impacts the average score on the order of (expected number of active days) / (total number of days).
  * This effect could be noticeable; this is an update.
* Overall, given the setup, I think that forecasting weekly vs. daily shouldn't differ much for forecasts with a resolution date in 1y.
* I intended to use this toy model to emphasize that the important difference between the active and semi-active forecasters is the distribution of days they forecast on.
  * This difference, in my opinion, is mostly driven by 'information gain' (e.g. breaking news, a poll is published, etc).
  * This makes me skeptical about features such as automatic decay and so on.
  * This makes me curious about ways to integrate information sources automatically.
  * And less so about notifications that community/follower forecasts have significantly changed. [It is already possible to sort by the magnitude of crowd update since your last forecast on GJO.]

On a meta-level, I am:
* Glad I had the discussion and wrote this comment :)
* Confused about people's intuitions about the linearity of EV.
* I would

AMA: Owen Cotton-Barratt, RSP Director

FWIW I agree with Owen. I agree the direction of effect supplies a pro tanto consideration which will typically lean in favour of other options, but it is not decisive (in addition to the scenarios he notes, some people have pursued higher degrees concurrently with RSP). So I don't think you need to worry about potentially leading folks astray by suggesting this as an option for them to consider - although, naturally, they should carefully weigh their options up (including considerations around which sorts of career capital are most valuable for their longer-term career planning).

Some thoughts on the EA Munich // Robin Hanson incident

As such, blackmail feels like a totally fair characterization [of a substantial part of the reason for disinviting Hanson (though definitely not 100% of it).]

As your subsequent caveat implies, whether blackmail is a fair characterisation turns on exactly how substantial this part was. If in fact the decision was driven by non-blackmail considerations, the (great-)grandparent's remarks about it being bad to submit to blackmail are inapposite. Crucially, (q.v.
Daniel's comment), not all instances where someone says (or implies), "If you do X (w... (read more)

I agree that the right strategy to deal with threats is substantially different than the right strategy to deal with warnings. I think it's a fair and important point. I am not claiming that it is obvious that absolutely clear-cut blackmail occurred, though I think overall, aggregating over all the evidence I have, it seems very likely (~85%-90%) to me that a situation game-theoretically similar enough to a classical blackmail scenario has played out. I do think your point about it being really important to get the assessment of whether we are dealing with a ... (read more)

Some thoughts on the EA Munich // Robin Hanson incident

I'm fairly sure the real story is much better than that, although still bad in objective terms: in culture war threads, the typical norms re karma roughly morph into 'barely restricted tribal warfare'. So people have much lower thresholds both to slavishly upvote their 'team', and to downvote the opposing one.

I downvoted the above comment by Khorton (not the one asking for explanations, but the one complaining about the comparison of trolleys and rape), and think Larks explained part of the reason pretty well. I read it in substantial part as an implicit accusation that Robin is in support of rape, and it also seemed to misunderstand Vaniver's comment, which wasn't at all emphasizing a dimension of trolley problems that made a comparison with rape unfitting, and doing so in a pretty accusatory way (which meerpirat clarified below). I agree that voting qua... (read more)

2Linch2y
I am reasonably confident that this is the best first-order explanation. EDIT: Habryka's comment makes me less sure that this is true.

Some thoughts on the EA Munich // Robin Hanson incident

Talk of 'blackmail' (here and elsethread) is substantially missing the mark. To my understanding, there were no 'threats' being acquiesced to here.
If some party external to the Munich group pressured them into cancelling the event with Hanson (and without this, they would want to hold the event), then the standard story of 'if you give in to the bullies you encourage them to bully you more' applies. Yet unless I'm missing something, the Munich group changed their minds of their own accord, and not in response to pressure ... (read more)

Having participated in a debrief meeting for EA Munich, my assessment is indeed that one of the primary reasons the event was cancelled was fear of disruptors showing up at the event, similar to how they have done for some events of Peter Singer's. Indeed, almost all concerns that were brought up during that meeting were concerns of external parties threatening EA Munich, or EA at large, in response to inviting Hanson. There were some minor concerns about Hanson's views qua his views alone, but basically all organizers who spoke at the debrief I was pa... (read more)

What is the increase in expected value of effective altruist Wayne Hsiung being mayor of Berkeley instead of its current incumbent?

I recall Hsiung being in favour of conducting disruptive protests against EAG 2015:

I honestly think this is an opportunity. "EAs get into fight with Elon Musk over eating animals" is a great story line that would travel well on both social and possibly mainstream media. ... Organize a group. Come forward with an initially private demand (and threaten to escalate, maybe even with a press release). Then start a big fight if they don't comply. Even if you lose, you still win because you'll generate massive dialogue!

It is unclear whether the... (read more)

I think he wouldn't have thought of this as "throwing the community under the bus".
I'm also pretty skeptical that this consideration is strong enough to be the main consideration here (as opposed to e.g. the consideration that Wayne seems way more interested in making the world better from a cosmopolitan perspective than other candidates for mayor).

Where does this quote come from?

Use resilience, instead of imprecision, to communicate uncertainty

My reply is a mix of the considerations you anticipate. With apologies for brevity:

• It's not clear to me whether avoiding anchoring favours (e.g.) round numbers or not. If my listener, in virtue of being human, is going to anchor on whatever number I provide them, I might as well anchor them on a number I believe to be more accurate.
• I expect there are better forms of words for my examples which can better avoid the downsides you note (e.g. maybe saying 'roughly 12%' instead of '12%' still helps, even if you give a later articulation ... (read more)

3MichaelA2y
That all makes sense to me - thanks for the answer! And interesting point regarding the way anchoring may also boost the value of precision - I hadn't considered that previously.

Use resilience, instead of imprecision, to communicate uncertainty

I had in mind the information-theoretic sense (per Nix). I agree the 'first half' is more valuable than the second half, but I think this is better parsed as diminishing marginal returns to information.

Very minor, re. child thread: you don't need to calculate numerically, as log(x²) = 2 log(x), and log(1000) = (3/2) log(100). Admittedly the numbers (or maybe the remark in the OP generally) weren't chosen well, given 'number of decimal places' seems the more salient difference than the squaring (e.g. per-thousandths does not have double t... (read more)

1FCCC2y
How does this account for the leftmost digit giving the most information, rather than the rightmost digit (or indeed any digit between them)? Let's say I give you $1 + $Y, where Y is either $0, $0.1, $0.2 ... or $0.9. (Note $1 is analogous to 1%, and Y is equivalent to adding a decimal place, i.e. per-thousandths vs per-cents.) The average value of Y, given a uniform distribution, is $0.45. Thus, against $1, Y adds almost half the original value, i.e. $0.45/$1 (45%). But what if I instead gave you $99 + $Y? $0.45 is less than 1% of the value of $99. The leftmost digit is more valuable because it corresponds to a greater place value (so the magnitude of the value difference between places is going to depend on the numeric base you use). I don't know information theory, so I'm not sure how to calculate the value of the first two digits compared to the third, but I don't think per-thousandths has 50% more information than per-cents.
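The information-theoretic claim in the thread above can be checked directly: a uniform choice among n distinguishable values carries log2(n) bits, so each extra decimal digit adds a constant ~3.32 bits, and 'double the information' corresponds to squaring the number of values, not to one extra place. A quick sketch (assuming, as in the comment, a uniform distribution over reported values):

```python
from math import log2

def bits(n: int) -> float:
    """Bits needed to specify one of n equally likely values."""
    return log2(n)

print(bits(100))     # per-cents: ~6.64 bits
print(bits(1000))    # per-thousandths: ~9.97 bits, i.e. 3/2 the information, not double
print(bits(100**2))  # per-ten-thousandths (squaring 100): exactly double the bits of per-cents
```

On this measure every digit carries the same ~3.32 bits; the leftmost digit only dominates the *value* of the quantity, which is why the disagreement is better parsed as diminishing marginal value per (equally sized) increment of information.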
Use resilience, instead of imprecision, to communicate uncertainty

It's fairly context dependent, but I generally remain a fan.

There's a mix of ancillary issues:

• There could be a 'why should we care what you think?' if EA estimates diverge from consensus estimates, although I imagine folks tend to gravitate to neglected topics etc.
• There might be less value in 'relative to self-ish' accounts of resilience: major estimates in a front facing report I'd expect to be fairly resilient, and so less "might shift significantly if we spent another hour on it".
• Relative to some quasi-ideal
I tend to agree. This feels a bit like a "be the change you want to see in the world" thing. Ordinary communication norms would push us towards just using verbal claims like 'likely' but for the reasons you mention, I pretty strongly think we should quantify and accept any short-term weirdness hit.
Evidence on good forecasting practices from the Good Judgment Project: an accompanying blog post

It is true that, given the primary source (presumably this), the implication is that rounding superforecasters to the nearest 0.1 hurt them, but rounding to the nearest 0.05 didn't:

To explore this relationship, we rounded forecasts to the nearest 0.05, 0.10, or 0.33 to see whether Brier scores became less accurate on the basis of rounded forecasts rather than unrounded forecasts. [...]
For superforecasters, rounding to the nearest 0.10 produced significantly worse Brier scores [by implication, rounding to the nearest 0.05 did not]. However, for the other two groups, rounding to the nearest 0.10 ha
3Linch2y
I think I broadly agree with what you say and will not bet against your last paragraph, except for the trivial sense that I expect most studies to be too underpowered to detect those differences.

It always seemed strange to me that the idea was expressed as 'rounding'. Replacing 50.4% with 50% seems relatively innocuous to me; replacing 0.6% with 1% - or worse, 0.4% with 0% - seems like a very different thing altogether!
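The asymmetry noted above is easiest to see with the logarithmic scoring rule (the underlying study used Brier scores, which are much less sensitive in the tails). A rough sketch with illustrative numbers:

```python
from math import log, inf

def expected_log_loss(true_p: float, forecast: float) -> float:
    """Expected log loss of a forecast when the event truly has probability true_p.

    Lower is better; a 0% forecast for an event that can happen scores infinitely badly.
    """
    loss_if_yes = -log(forecast) if forecast > 0 else inf
    loss_if_no = -log(1 - forecast) if forecast < 1 else inf
    return true_p * loss_if_yes + (1 - true_p) * loss_if_no

# Mid-range: rounding 50.4% to 50% barely moves the expected score.
print(expected_log_loss(0.504, 0.504), expected_log_loss(0.504, 0.50))  # ~0.693 vs ~0.693

# Tail: rounding 0.6% up to 1% (or 0.4% up to 1%) costs a little;
# rounding 0.4% down to 0% is catastrophic.
print(expected_log_loss(0.004, 0.004))  # ~0.026
print(expected_log_loss(0.004, 0.01))   # ~0.028
print(expected_log_loss(0.004, 0.0))    # inf
```

On a log-score view, 'rounding' mid-range forecasts is a rounding error, while rounding tail forecasts changes the implied odds by large multiples - which is exactly the 0.4%-to-0% worry.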

On-site image hosting for posts/comments?

This is mostly a minor QoL benefit, and maybe there would be challenges with storage. Another benefit would be that images would not vanish if their original source does.

2Aaron Gertler2y
I'm stopping by to mention that this is now live: https://forum.effectivealtruism.org/posts/CMy2ueJ9WhZFNyBGs/ea-forum-update-new-editor-and-more [https://forum.effectivealtruism.org/posts/CMy2ueJ9WhZFNyBGs/ea-forum-update-new-editor-and-more]
6Ben Pace2y
The new editor has this! :)

Import from HTML/gdoc/word/whatever: One feature I miss from the old forum was the ability to submit HTML directly. This allowed one to write the post in Google Docs or similar (with tables, footnotes, sub/superscript, special characters, etc.), export it as HTML, paste it into the old editor, and it was (with some tweaks) good to go.

This is how I posted my epistemic modesty piece (which has a table which survived the migration, although the footnote links no longer work). In contrast, when cross-posting it to LW2, I needed the kind help of a moderator - and... (read more)

Alas, I don’t think this is possible in the way you are suggesting it here. We can allow submission of a narrow subset of HTML, but indeed one of the single most common complaints that we got on the old forum was many posts having totally inconsistent formatting because people were submitting all kinds of weird HTML+CSS with differing font-sizes for each post, broken formatting on smaller devices, inconsistent text colors, garish formatting, floating images that broke text layout, etc.

Indeed just a week ago I got a bug report about the formatting o... (read more)

Footnote support in the 'standard' editor: For folks who aren't fluent in markdown (like me), the current process is switching the editor back and forth to 'markdown mode' to add these footnotes, which I find pretty cumbersome.[1]

[1] So much so I lazily default to doing it with plain text.

2Habryka2y
Yeah, this is the current top priority with the new editor rework, and the inability to make this happen was one of the big reasons for why we decided to switch editors. I expect this will happen sometime in the next month or two.
Examples of people who didn't get into EA in the past but made it after a few years

I applied for a research role at GWWC a few years ago (?2015 or so), and wasn't selected. I now do research at FHI.

In the interim I worked as a public health doctor. Although I think this helped me 'improve' in a variety of respects, 'levelling up for an EA research role' wasn't the purpose I had in mind: I was expecting to continue as a PH doctor rather than 'switching across' to EA research in the future; if I had been offered the role at GWWC, I'm not sure whether I would have taken it.

There's a couple of points I... (read more)

1agent182y
Thanks for the suggestions. Do you have an example? Agreed that I should have a backup. But why does it seem unwise? Based on what? Have you looked at the possible impact based on replaceability and displacement chains? What else is there to do, I don't know, other than working in some form (researcher, program manager) in "orgs that do good"? I think I can ETG (in the US), but owing to my lack of citizenship there is a 50% chance (H1B and RFE issues) that I make it, just considering random factors. This is still my backup. I am never going to be able to find the best way to "EA CC" for EA orgs. The alternative being that I look at examples. What do you suggest doing then, and why?