Thanks. With the benefit of hindsight, the blue envelopes probably should have been dropped from the graph, leaving the trace alone:
I worry a lot of these efforts are strategically misguided. I don't think noting 'EA should be a question', 'it's better to be inclusive', 'positive approach and framing', etc. are adequate justifications for soft-pedalling uncomfortable facts which nonetheless are important for your audience to make wise career decisions. The most important two here are:
Hi Greg,
Thank you for your comment.
Big picture, I wanted to clarify two specific points where you have misunderstood the aims of the organisation (we take full responsibility for these issues, however, as if you have got this impression it is possible others have too).
1. We do not necessarily encourage people to apply for and study medicine. We are not giving any advice to high school level students about degree choices and paths to impact. To quote what you wrote, "medicine often selects for able, conscientious, and altruistic people, who can do a lot of go...
Bravo. I think diagrams are underused as crisp explanations, and this post gives an excellent demonstration of their value (among many other merits).
A minor point (cf. ThomasWoodside's remarks): I'd be surprised if one really does (or really should) accept no trade-offs of "career quality" for "career impact". The 'isoquoise' may not slant all the way down from status quo to impactful toil, but I think it should slant down at least a little (contrariwise, you might also be willing to trade less impact for higher QoL, etc.).
[own views etc]
I think the 'econ analysis of the EA labour market' has been explored fairly well - I highly recommend this treatment by Jon Behar. I also find myself (and others) commonly in the comment threads banging the drum for why it would be beneficial to pay more, or why particular proposals not to do so (or to pay EA employees less) are not good ones.
Notably, 'standard economic worries' point in the opposite direction here. On the standard econ-101 view, "Org X struggles as competitor Org Y can pay higher salaries", or "Cause ~neutral people migrate ...
Thanks for the reply, and with apologies for brevity.
Re. 1 (i.e. "The primary issue with the VRC is aggregation rather than trade-off"). I take it we should care about the plausibility of axiological views with respect to something like 'commonsense' intuitions, rather than those a given axiology urges us to adopt. It's at least opaque to me whether commonsense intuitions are more offended by 'trade-offy/CU' or 'no-trade-offy/NU' intuitions. On the one hand:
Do you mean trivial pains adding up to severe suffering? I can see how if you would accept lexicality or thresholds to prevent this, you could do the same to prevent trivial pleasures outweighing severe suffering or greater joys.
Yeah, that's it. As you note these sorts of moves seem to have costs elsewhere, but if one thinks on balance they nonetheless should be accepted, then the V/RC isn't really a strike against 'symmetric axiology' simpliciter, but merely 'symmetric axiologies with a mistaken account of aggregation'. If instead 'straightforward/unadorn...
Tradeoffs like the Very Repugnant Conclusion (VRC) are not only theoretical, because arguments like that of Bostrom (2003) imply that the stakes may be astronomically high in practice. When non-minimalist axiologies find the VRC a worthwhile tradeoff, they would presumably also have similar implications on an arbitrarily large scale. Therefore, we need to have an inclusive discussion about the extent to which the subjective problems (e.g. extreme suffering) of some can be “counterbalanced” by the “greater (intrinsic) good” for others, because this has dire...
(Edit: Added a note(*) on minimalist views and the extended VRC of Budolfson & Spears.)
Thanks for highlighting an important section for discussion. Let me try to respond to your points. (I added the underline in them just to unburden the reader’s working memory.)
This seems wrong to me,
The quoted passage contained many claims; which one(s) seemed wrong to you?
and confusing 'finding the VRC counter-intuitive' with 'counterbalancing (/extreme) bad with good in any circumstance is counterintuitive' (e.g. the linked article to Omelas) is unfortunate -...
I've been accused of many things in my time, but inarticulate is a new one. ;)
You do use lots of big words
I strongly agree with all this. Another downside I've felt from this exercise is that it feels like I've been dragged into a community ritual I'm not really a fan of, where my options are: a) tacit support (even if it is just deleting the email where I got the codes with a flicker of irritation), or b) an ostentatious and disproportionate show of disapproval.
I generally think EA- and longtermist- land could benefit from more 'professional distance': that folks can contribute to these things without having to adopt an identity or community that steadily metast...
So to summarise, you seem not to be a fan of the ritual? :P
Thanks, but I've already seen them. Presuming the implication here is something like "Given these developments, don't you think you should walk back what you originally said?", the answer is "Not really, no": subsequent responses may be better, but that is irrelevant to whether earlier ones were objectionable; one may be making good points, but one can still behave badly whilst making them.
(Apologies if I mistake what you are trying to say here. If it helps generally, I expect - per my parent comment - to continue to affirm what I've said before however the morass of commentary elsewhere on this post shakes out.)
Greg, I want to bring two comments that have been posted since your comment above to your attention:
Your responses here are much more satisfying and comprehensible than your previous statements, it's a bit of a shame we can't reset the conversation.
2. Another anonymous commentator (thanks to Linch for posting) highlights that Abby's line of questioning regarding EEGs ultimately resulted in a response that satisfied her, and which she didn't have the expertise to evaluate further:
...if they had given the response that they...
[Own views]
I'm not sure 'enjoy' is the right word, but I also noticed the various attempts to patronize Hoskin.
This ranges from the straightforward "I'm sure once you know more about your own subject you'll discover I am right":
I would say I expect you to be surprised by certain realities of neuroscience as you complete your PhD
'Well-meaning suggestions' alongside the implication that her criticism arises from some emotional reaction rather than from her strong and adverse judgement of its merit.
...I’m a little baffled by the emotional intensity here but I’d su...
Greg, I have incredible respect for you as a thinker, and I don't have a particularly high opinion of the Qualia Research Institute. However, I find your comment unnecessarily mean: every substantive point you raise could have been made more nicely and less personally, in a way more conducive to mutual understanding and more focused on an evaluation of QRI's research program. Even if you think that Michael was condescending or disrespectful to Abby, I don't think he deserves to be treated like this.
[Predictable disclaimers, although in my defence, I've been banging this drum long before I had (or anticipated to have) a conflict of interest.]
I also find the reluctance to wholeheartedly endorse the 'econ-101' story (i.e. if you want more labour, try offering more money for people to sell labour to you) perplexing:
If anything, income seems to be unusually heavy-tailed compared to direct work (the top two donors in EA account for the majority of the capital, but I don't think the top two direct workers account for the majority of the value of the labour).
Although I think this stylized fact remains interesting, I wonder if there's an ex-ante/ ex-post issue lurking here. You get to see the endpoint with money a lot earlier than direct work contributions, and there's probably a lot of lottery-esque dynamics. I'd guess these as corollaries:
First, the ex ante 'expected $ ra...
I'd guess the story might be a) 'XR primacy' (~~ that x-risk reduction has far bigger bang for one's buck than anything else re. impact) and b) conditional on a), an equivocal view on the value of technological progress: although some elements likely good, others likely bad, so the value of generally 'buying the index' of technological development (as I take Progress Studies to be keen on) to be uncertain.
"XR primacy"
Other comments have already illustrated the main points here, sparing readers from another belaboured rehearsal from me. The rough story, bor...
[Own views etc.]
I'm unsure why this got downvoted, but I strongly agree with the sentiment in the parent. Although I understand the impulse of "We're all roughly on the same team here, so we can try and sculpt something better than the typically competitive/adversarial relationships between firms, or employers and employees", I think this is apt to mislead one into ideas which are typically economically short-sighted, often morally objectionable, and occasionally legally dubious.
In the extreme case, it's obviously unacceptable for Org X to not hire c...
Maybe not 'insight', but re. 'accuracy', this sort of decomposition is often in the toolbox of better forecasters. I think the longest path I evaluated in a question had 4 steps rather than 6, and I think I've seen other forecasters do similar things on occasion. (The general practice of 'breaking down problems' to evaluate sub-issues is recommended in Superforecasting, IIRC.)
I guess the story for why this works in geopolitical forecasting is that folks tend to overestimate the chance 'something happens' and tend to be underdamped in increasing the likelihood of som...
Similar to Ozzie, I would guess the 'over-qualified' hesitation often has less to do with, "I fear I would be under-utilised and become disinterested if I took a more junior role, and thus do less than the most good I could", but a more straightforward, "Roles which are junior, have unclear progression and don't look amazing on my CV if I move on aren't as good for my career as other opportunities available to me."
This opportunity cost (as the OP notes) is not always huge, and it can be outweighed by other considerations. But my guess is it is often ...
The disincentives listed here make sense to me. I would just add that people's motivations are highly individual, and so people will differ in how much weight they put on any of these points or on how well their CV looks.
Personally, I've moved from Google to AMF and have never looked back. The summary: I'm much more motivated now; the work is actually more varied and technically challenging than before, even though the tech stack is not as close to the state of the art. People are (as far as I can tell) super qualified in both organizations. I'm happy to chat personally about my individual motivations if anyone who reads this feels that it would benefit them.
I understand the nationalism example isn't meant to be analogous, but my impression is this structural objection only really applies when our situation is analogous.
If historically EA paid a lot of attention to nationalism (or trans-humanism, the scepticism community, or whatever else) but had by-and-large collectively 'moved on' from these, contemporary introductions to the field shouldn't feel obliged to cover them extensively, nor treat the relative merits of what they focus on now versus then as an open question.
Yet, however you slice...
Per others: This selection isn't really 'leans towards a focus on longtermism', but rather 'almost exclusively focuses on longtermism': roughly any 'object level' cause which isn't longtermism gets a passing mention, whilst longtermism is the subject of 3/10 of the selection. Even some not-explicitly-longtermist inclusions (e.g. Tetlock, MacAskill, Greaves) 'lean towards' longtermism either in terms of subject matter or affinity.
Despite being a longtermist myself, I think this is dubious for a purported 'introduction to EA as a whole': EA isn't all-bu...
I also thought along similar lines, although (lacking subtlety) I thought you could shove in a light cone from the dot, which can serve double duty as the expanding future. Another thing you could do is play with a gradient so this curve/the future gets brighter as well as bigger, but perhaps someone who can at least successfully colour in has a comparative advantage here.
I agree with others that this concept is great, but that the gradient probably isn't a great idea.
Here's a very quick inkscape version without the dot. (Any final version would want a smoother curve but I wanted to get this done quickly)
While I personally like monochrome a lot (the Cornish flag is one of my favourites), I worry that it will be a bit too stark for most people. Changing the colour could also help reduce the association with space a bit. Here's a couple of quick versions using Cullen's colour scheme from the hourglass concept below.
I'm not su...
A less important motivation/mechanism is that probabilities/ratios (instead of odds) are bounded above by one. For rare events 'doubling the probability' versus 'doubling the odds' give basically the same answer, but not so for more common events. Loosely, flipping a coin three times 'trebles' my risk of observing it landing tails, but the probability isn't 1.5. (cf.)
E.g.
Sibling abuse rates are something like 20% (or 80% depending on your definition), and it is the most frequent form of household abuse. This means by adopting a child you are adding something like a...
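To make the odds-versus-probability contrast concrete, a minimal sketch (the probabilities below are illustrative only, not the quoted statistics):

```python
def double_probability(p):
    """Naively double a probability (exceeds 1 for p > 0.5)."""
    return 2 * p

def double_odds(p):
    """Double the odds p/(1-p), then convert back to a probability."""
    doubled = 2 * p / (1 - p)
    return doubled / (1 + doubled)

for p in (0.001, 0.2, 0.5):
    print(f"p={p}: 2*p={double_probability(p):.4f}, doubled odds={double_odds(p):.4f}")
# p=0.001: 0.0020 vs 0.0020  (rare events: near-identical)
# p=0.2:   0.4000 vs 0.3333
# p=0.5:   1.0000 vs 0.6667  (doubling the probability hits the bound)
```

Doubling odds always yields a valid probability; naively doubling probabilities breaks down exactly for the common events at issue.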
Most views in population ethics can entail weird/intuitively toxic conclusions (cf. the large number of 'X conclusions' out there). Trying to weigh these up comparatively is fraught.
In your comparison, it seems there's a straightforward dominance argument if the 'OC' and 'RC' are the things we should be paying attention to. Your archetypal classical utilitarian is also committed to the OC, as a 'large increase in suffering for one individual' can be outweighed by a large enough number of smaller decreases in suffering for others - aggregation still applies to...
[Mea culpa re. messing up the formatting again]
1) I don't closely follow the current state of play in terms of 'shorttermist' evaluation. The reply I hope (e.g.) a GiveWell analyst would make to (e.g.) "Why aren't you factoring in impacts on climate change for these interventions?" would be some mix of:
a) "We have looked at this, and we're confident we can bound the magnitude of this effect to pretty negligible values, so we neglect them in our write-ups etc."
b) "We tried looking into this, but our uncertainty is highly resilient (and our best guess doesn'... (read more)
FWIW, I don't think 'risks' is quite the right word: sure, if we discover a risk which was so powerful and so tractable that we end up overwhelming the good done by our original intervention, that obviously matters. But the really important thing there, for me at least, is the fact that we apparently have a new and very powerful lever for impacting the world. As a result, I would care just as much about a benefit which in the medium term would end up being worth >>1x the original target good (e.g. "Give Directly reduces extinction risk by reducing po...
Belatedly:
I read the stakes here differently to you. I don't think folks thinking about cluelessness see it as substantially an exercise in developing a defeater to 'everything which isn't longtermism'. At least, that isn't my interest, and I think the literature has focused on AMF etc. more as a salient example to explore the concepts, rather than an important subject to apply them to.
The AMF discussions around cluelessness in the OP are intended as a toy example - if you like, deliberating purely on "is it good or bad to give to AMF versus this particu...
I may be missing the thread, but the 'ignoring' I'd have in mind for resilient cluelessness would be straight-ticket precision, which shouldn't be intransitive (or have issues with the principle of indifference).
E.g. Say I'm sure I can make no progress on (e.g.) the moral weight of chickens versus humans in moral calculation - maybe I'm confident there's no fact of the matter, or interpretation of the empirical basis is beyond our capabilities forevermore, or whatever else.
Yet (I urge) I should still make a precise assignment (which is not obliged to be indifferent/symmetrical), and I can still be in reflective equilibrium between these assignments even if I'm resiliently uncertain.
Mea culpa. I've belatedly 'fixed' it by putting it into text.
The issue is more being stuck than the range: say it is (0.4, 0.6) rather than (0, 1); you'd still be inert. Vallinder (2018) discusses this extensively, including issues around infectiousness and generality.
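A toy illustration of the 'stuck' point (the payoffs are assumed for the example):

```latex
% Act A: +1 if H, -1 if not-H. With precise credence p, EV(A) = 2p - 1,
% so any p != 0.5 yields a verdict. With interval credence P(H) in (0.4, 0.6):
\[
  EV(A) \in \bigl(2 \cdot 0.4 - 1,\; 2 \cdot 0.6 - 1\bigr) = (-0.2,\; 0.2)
\]
% The interval straddles zero, so the view stays silent: narrowing (0, 1)
% to (0.4, 0.6) restores no recommendation while the interval brackets
% the critical value.
```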
For my part, I'm more partial to 'blaming the reader', but (evidently) better people mete out better measure than I in turn.
Insofar as it goes, I think the challenge (at least for me) is that qualitative terms can cover multitudes (or orders of magnitude) of precision. I'd take ~0.3% to be 'significant' credence for some values of significant. 'Strong', 'compelling', or 'good' arguments could be an LR of 2 (after all, an RCT confirmation can be ~3) or 200.
I also think quantitative articulation would help the reader (or at least this reader) better benchmark t...
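To make vivid how much turns on the quantity, a worked example (taking the ~0.3% figure above as a prior, and the LRs of 2 and 200 as the endpoints mentioned):

```latex
% Posterior odds = prior odds x likelihood ratio:
\[
  \underbrace{\tfrac{0.003}{0.997}}_{\text{prior odds}} \times 2 \approx 0.0060
  \;\Rightarrow\; P \approx 0.6\%,
  \qquad
  \tfrac{0.003}{0.997} \times 200 \approx 0.60
  \;\Rightarrow\; P \approx 37.6\%.
\]
% The same qualitative label ('a strong argument') can thus move a credence
% by a factor of two or by two orders of magnitude.
```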
But what is your posterior? Like Buck, I'm unclear whether your view is that the central estimate should be (e.g.) 0.1% or 1/1 million. I want to push on this because if your own credences are inconsistent with your argument, the reasons why seem both important to explore and to make clear to readers, who may be misled into taking this at 'face value'.
From this on page 13, I guess a generous estimate (/upper bound) is something like 1/1 million for the 'among most important million people':
[W]e can assess the quality of the arguments given in favour of...
Thanks for this, Greg.
"But what is your posterior? Like Buck, I'm unclear whether your view is the central estimate should be (e.g.) 0.1% or 1 / 1 million."
I'm surprised this wasn't clear to you, which has made me think I've done a bad job of expressing myself.
It's the former, and for the reason of your explanation (2): us being early, being on a single planet, being at such a high rate of economic growth, should collectively give us an enormous update. In the blog post I describe what I call the outside-view arguments, including t...
“It’s not clear why you’d think that the evidence for x-risk is strong enough to think we’re one-in-a-million, but not stronger than that.” This seems pretty strange as an argument to me. Being one-in-a-thousand is a thousand times less likely than being one-in-a-million, so of course if you think the evidence pushes you to thinking that you’re one-in-a-million, it needn’t push you all the way to thinking that you’re one-in-a-thousand. This seems important to me. Yes, you can give me arguments for thinking that we’re (in expectation at least) at an enormou...
I agree with this in the abstract, but for the specifics of this particular case, do you in fact think that online mobs / cancel culture / groups who show up to protest your event without warning should be engaged with on a good faith assumption? I struggle to imagine any of these groups accepting anything other than full concession to their demands, such that you're stuck with the BATNA regardless.
I think so.
In the abstract, 'negotiating via ultimatum' (e.g. "you must cancel the talk, or I will do this") does not mean one is acting in bad fait...
Yeah, I think I agree with everything you're saying. I think we were probably thinking of different aspects of the situation -- I'm imagining the sorts of crusades that were given as examples in the OP (for which a good faith assumption seems straightforwardly wrong, and a bad faith assumption seems straightforwardly correct), whereas you're imagining other situations like a university withdrawing affiliation (where it seems far more murky and hard to label as good or bad faith).
Also, I realize this wasn't clear before, but I emphatically don't think that making threats is necessarily immoral or even bad; it depends on the context (as you've been elucidating).
Another case where 'precommitment to rebuff all threats' is an unwise strategy (and a case more relevant to the discussion, as I don't think all opponents of hosting a speaker like Hanson either see themselves or should be seen as bullies attempting coercion) is where your opponent is trying to warn you rather than trying to blackmail you. (cf. 1, 2)
Suppose Alice sincerely believes some of Bob's writing is unapologetically misogynistic. She believes it is important one does not give misogynists a platform and implicit approbation. Thus she finds host...
I agree with parts of this and disagree with other parts.
First off:
First, if she is acting in good faith, pre-committing to refuse any compromise for 'do not give in to bullying' reasons means both parties always end up at their respective BATNAs, even if there were mutually beneficial compromises to be struck.
Definitely agree that pre-committing seems like a bad idea (as you could probably guess from my previous comment).
Second, wrongly presuming bad faith for Alice seems apt to induce her to make a symmetrical mistake presuming bad faith for you. To Alice...
This isn't much more than a rotation (or maybe just a rephrasing), but:
When I offer a 10 second or less description of Effective Altruism, it is hard to avoid making it sound platitudinous. Things like "using evidence and reason to do the most good", or "trying to find the best things to do, then doing them" are things I can imagine the typical person nodding along with, but then wondering what the fuss is about ("Sure, I'm also a fan of doing more good rather than less good - aren't we all?") I feel I need to elaborate with a distinctive example (e.g. "I lef...
I'm afraid I'm also not following. Take an extreme case (which is not that extreme, given I think the average number of forecasts per forecaster per question on GJO is 1.something). Alice predicts a year out P(X) = 0.2 and never touches her forecast again, whilst Bob predicts P(X) = 0.3, but decrements proportionately as time elapses. Say X doesn't happen (and say the right ex ante probability a year out was indeed 0.2). Although Alice > Bob on the initial forecast (and so if we just scored that day she would be better), if we carry forw...
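A quick sketch of how the carried-forward scoring plays out, reading 'decrements proportionately' as linear decay to zero at resolution (an assumption for illustration):

```python
import numpy as np

days = 365
# Alice: P(X) = 0.2 on day 0, never updated; carried forward daily.
alice = np.full(days, 0.2)
# Bob: starts at P(X) = 0.3 and decays linearly towards 0 at resolution.
bob = 0.3 * (1 - np.arange(days) / days)

# X does not happen, so the daily (half-)Brier score is p**2.
print(f"Day-0 Brier - Alice: {alice[0]**2:.4f}, Bob: {bob[0]**2:.4f}")
print(f"Mean daily  - Alice: {np.mean(alice**2):.4f}, Bob: {np.mean(bob**2):.4f}")
# Day-0: Alice 0.0400 beats Bob 0.0900.
# Mean daily: Bob (~0.0301) beats Alice (0.0400).
```

So on time-averaged scoring Bob overtakes Alice, even though Alice's day-one forecast was the right ex ante probability.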
FWIW I agree with Owen. I agree the direction of effect supplies a pro tanto consideration which will typically lean in favour of other options, but it is not decisive (in addition to the scenarios he notes, some people have pursued higher degrees concurrently with RSP).
So I don't think you need to worry about potentially leading folks astray by suggesting this as an option for them to consider - although, naturally, they should carefully weigh their options up (including considerations around which sorts of career capital are most valuable for their longer term career planning).
As such, blackmail feels like a totally fair characterization [of a substantial part of the reason for disinviting Hanson (though definitely not 100% of it).]
As your subsequent caveat implies, whether blackmail is a fair characterisation turns on exactly how substantial this part was. If in fact the decision was driven by non-blackmail considerations, the (great-)grandparent's remarks about it being bad to submit to blackmail are inapposite.
Crucially, (q.v. Daniel's comment), not all instances where someone says (or implies), "If you do X (w...
I agree that the right strategy to deal with threats is substantially different than the right strategy to deal with warnings. I think it's a fair and important point. I am not claiming that it is obvious that absolutely clear-cut blackmail occurred, though I think overall, aggregating over all the evidence I have, it seems very likely (~85%-90%) to me that a situation game-theoretically similar enough to a classical blackmail scenario has played out. I do think your point about it being really important to get the assessment of whether we are dealing with a ...
I'm fairly sure the real story is much better than that, although still bad in objective terms: in culture war threads, the typical norms re karma roughly morph into 'barely restricted tribal warfare'. So people have much lower thresholds both to slavishly upvote their 'team', and to downvote the opposing one.
I downvoted the above comment by Khorton (not the one asking for explanations, but the one complaining about the comparison of trolley problems and rape), and think Larks explained part of the reason pretty well. I read it in substantial parts as an implicit accusation that Robin supports rape, and it also seemed to itself misunderstand Vaniver's comment, which wasn't at all emphasizing a dimension of trolley problems that made a comparison with rape unfitting, and it did so in a pretty accusatory way (which meerpirat clarified below).
I agree that voting qua...
Talk of 'blackmail' (here and elsethread) is substantially missing the mark. To my understanding, there were no 'threats' being acquiesced to here.
If some party external to the Munich group pressured them into cancelling the event with Hanson (and without this, they would want to hold the event), then the standard story of 'if you give in to the bullies you encourage them to bully you more' applies.
Yet unless I'm missing something, the Munich group changed their minds of their own accord, and not in response to pressure ...
Having participated in a debrief meeting for EA Munich, my assessment is indeed that one of the primary reasons the event was cancelled was fear of disruptors showing up at the event, similar to how they have done for some events of Peter Singer. Indeed almost all concerns that were brought up during that meeting were concerns of external parties threatening EA Munich, or EA at large, in response to inviting Hanson. There were some minor concerns about Hanson's views qua his views alone, but basically all organizers who spoke at the debrief I was pa...
I recall Hsiung being in favour of conducting disruptive protests against EAG 2015:
I honestly think this is an opportunity. "EAs get into fight with Elon Musk over eating animals" is a great story line that would travel well on both social and possibly mainstream media.
...
Organize a group. Come forward with an initially private demand (and threaten to escalate, maybe even with a press release). Then start a big fight if they don't comply.
Even if you lose, you still win because you'll generate massive dialogue!
It is unclear whether the...
I think he wouldn't have thought of this as "throwing the community under the bus". I'm also pretty skeptical that this consideration is strong enough to be the main consideration here (as opposed to eg the consideration that Wayne seems way more interested in making the world better from a cosmopolitan perspective than other candidates for mayor).
Where does this quote come from?
My reply is a mix of the considerations you anticipate. With apologies for brevity:
I had in mind the information-theoretic sense (per Nix). I agree the 'first half' is more valuable than the second half, but I think this is better parsed as diminishing marginal returns to information.
Very minor, re. child thread: You don't need to calculate numerically, as $0.1^2 = 0.01$ and $0.05^2 = 0.0025$. Admittedly the numbers (or maybe the remark in the OP generally) weren't chosen well, given 'number of decimal places' seems the more salient difference than the squaring (e.g. per-thousandths does not have double t...
It's fairly context dependent, but I generally remain a fan.
There's a mix of ancillary issues:
It is true that, given the primary source (presumably this), the implication is that rounding supers to 0.1 hurt them, but rounding to 0.05 didn't:
To explore this relationship, we rounded forecasts to the nearest 0.05, 0.10, or 0.33 to see whether Brier scores became less accurate on the basis of rounded forecasts rather than unrounded forecasts. [...]
For superforecasters, rounding to the nearest 0.10 produced significantly worse Brier scores [by implication, rounding to the nearest 0.05 did not]. However, for the other two groups, rounding to the nearest 0.10 ha...
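For readers wanting to replicate the flavour of this, a minimal sketch on synthetic data (the beta-distributed forecasts, sample size, and 'perfectly calibrated forecaster' are assumptions, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibrated forecaster: forecasts equal the true event
# probabilities, with many forecasts near the extremes, where
# rounding bites hardest.
p = rng.beta(0.5, 5, size=200_000)
outcomes = rng.random(p.size) < p

def brier(forecast, outcome):
    return np.mean((forecast - outcome) ** 2)

def round_to(forecast, step):
    return np.round(forecast / step) * step

base = brier(p, outcomes)
for step in (0.05, 0.10, 0.33):
    penalty = brier(round_to(p, step), outcomes) - base
    print(f"rounding to nearest {step:.2f}: Brier penalty {penalty:+.5f}")
```

The coarser the rounding, the larger the Brier penalty, with the damage concentrated in forecasts near 0 or 1 - consistent with only the most granular forecasters being measurably hurt by 0.1 rounding.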
It always seemed strange to me that the idea was expressed as 'rounding'. Replacing a 50.4% with 50% seems relatively innocuous to me; replacing 0.6% with 1% - or worse, 0.4% with 0% - seems like a very different thing altogether!
On-site image hosting for posts/comments? This is mostly a minor QoL benefit, and maybe there would be challenges with storage. Another benefit would be that images would not vanish if their original source does.
Import from HTML/gdoc/word/whatever: One feature I miss from the old forum was the ability to submit HTML directly. This allowed one to write the post in google docs or similar (with tables, footnotes, sub/superscript, special characters, etc.), export it as HTML, paste into the old editor, and it was (with some tweaks) good to go.
This is how I posted my epistemic modesty piece (which has a table which survived the migration, although the footnote links no longer work). In contrast, when cross-posting it to LW2, I needed the kind help of a moderator - and...
Alas, I don’t think this is possible in the way you are suggesting it here. We can allow submission of a narrow subset of HTML, but indeed one of the single most common complaints that we got on the old forum was many posts having totally inconsistent formatting because people were submitting all kinds of weird HTML+CSS with differing font-sizes for each post, broken formatting on smaller devices, inconsistent text colors, garish formatting, floating images that broke text layout, etc.
Indeed just a week ago I got a bug report about the formatting o...
Footnote support in the 'standard' editor: For folks who aren't fluent in markdown (like me), the current process is switching the editor back and forth to 'markdown mode' to add these footnotes, which I find pretty cumbersome.[1]
[1] So much so I lazily default to doing it with plain text.
I applied for a research role at GWWC a few years ago (2015 or so), and wasn't selected. I now do research at FHI.
In the interim I worked as a public health doctor. Although I think this helped me 'improve' in a variety of respects, 'levelling up for an EA research role' wasn't the purpose in mind: I was expecting to continue as a PH doctor rather than 'switching across' to EA research in the future; if I had been offered the role at GWWC, I'm not sure whether I would have taken it.
There's a couple of points I...
Howdy, and belatedly:
0) I am confident I understand; I just think it's wrong. My impression is HIM's activity is less 'using reason and evidence to work out what does the most good', but rather 'using reason and evidence to best reconcile prior career commitments with EA principles'.
By analogy, if I was passionate about (e.g.) HIV/AIDS, education, or cancer treatment in LICs, the EA recommendation would not (/should not) be that I presume to maintain this commitment, but rather that I soberly evaluate how interventions within these areas stack up versus all othe...
I think your two comments here are well-argued, internally consistent, and strong. However, I think I disagree with
in the context of EA career choice writ large, which I think may be enough to flip the bottom-line conclusion.
I think the crux for me is that if the differences in object-level impact across people/projects are high enough, then for anybody whose career or project is not in the small subset of the most impactful careers/projects, their object-level impacts will ...
It seems bizarre that, without my strong upvote, this comment is at minus 3 karma.
Karma polarization seems to have become much worse recently. I think a revision of the karma system is urgently needed.