All of Arden Koehler's Comments + Replies

I like this post and also worry about this phenomenon.

When I talk about personal fit (and when we do so at 80k) it's basically about how good you are at a thing/the chance that you can excel.

Being intuitively motivated by the issue an option focuses on does increase your personal fit for it, but I agree that it seems way too quick to conclude from that alone that your personal fit with that option is higher than with other things (since there are tons of factors, and there are also lots of different jobs for each problem area), let alone that you should work on that issue all things considered (since personal fit is not the only factor).

I think it would be especially valuable to see to which degree they reflect the individual judgment of decision-makers.

The comment above hopefully helps address this.

I would also be interested in whether they take into account recent discussions/criticisms of model choices in longtermist math that strike me as especially important for the kind of advising 80,000 Hours does (tldr: I take one crux of that article to be that longtermist benefits by individual action are often overstated, because the great benefits longtermism advertises require both redu

... (read more)
9
mhendric
4mo
Hey there, thank you both for the helpful comments. I agree the shorttermist/longtermist framing shouldn't be understood as too deep a divide or too reductive a category, but I think it serves a decent purpose for making clear a distinction between different foci in EA (e.g. Global Health/Factory Farming vs AI-Risk/Biosecurity etc). The comment above really helped me in seeing how prioritization decisions are made. Thank you for that, Ardenlk!

I'm a bit less bullish than Vasco on it being good that 80k does their own prioritization work. I don't think it is bad per se, but I am not sure what is gained by 80k research on the topic vis-a-vis other EA people trying to figure out prioritization. I do worry that what is lost are advocates/recommendations for causes that are not currently well-represented in the opinion of the research team, but that are well-represented among other EAs more broadly. This makes people like me have a harder time funneling folks to EA-principles-based career advising, as I'd be worried the advice they receive would not be representative of the considerations of EA folks, broadly construed. Again, I realize I may be overly worried here, and I'd be happy to be corrected!

I read the Thorstad critique as somewhat stronger than the summary you give - certainly, just invoking x-risk should not per default justify assuming astronomical value. But my sense from the two examples (one from Bostrom, one on cost-effectiveness in biorisk) was that more plausible modeling assumptions seriously undercut at least some current cost-effectiveness models in that space, particularly for individual interventions (as opposed to e.g. systemic interventions that plausibly reduce risk long-term). I did not take it to imply that risk-reduction is not a worthwhile cause, but that current models seem to arrive at the dominance of it as a cause based on implausible assumptions (e.g. about background risk). I think my perception of 80k as "partisan" stems from

I think it would be valuable to include all the additional notes which are not on your website. As a minimum viable product, you may want to link to your comment.

Thanks for your feedback here!

Your previous quantitative framework was equivalent to a weighted-factor model (WFM) with the logarithms of importance, tractability and neglectedness as factors with the same weight, such that the sum respects the logarithm of the cost-effectiveness. Have you considered trying a WFM with the factors that actually drive your views?

I feel unsure about whether we sho... (read more)

I agree that it might be worthwhile to try to become the president of the US - but that wouldn't mean it's best for us to have an article on it, especially highly ranked. That takes real estate on our site, attention from readers, and time. This specific path is a sub-category of political careers, which we have several articles on. In the end, it is not possible for us to have profiles on every path that is potentially worthwhile for someone. My take is that it's better for us to prioritise options where the described endpoint is achievable for at least a healthy handful of readers.

No, we have lots of external advisors that aren't listed on our site. There are a few reasons we might not list people, including:

  • We might not want to be committed to asking for someone's advice for a long time or need to remove them at some point.

  • The person might be happy to help us and give input but not want to be featured on our site.

  • It's work to add people, and we often will reach out to someone in our network fairly quickly and informally, and it would feel like overkill / too much friction to get a bio, and get permission from them for it,

... (read more)

This is a good question -- we don't have a formal approach here, and I personally think that in general, it's quite a hard problem who to ask for advice.

A few things to say:

  • the ideal is often to have both.

  • the bottleneck on getting more people with domain expertise is more often that we don't have people in our network with sufficient expertise, whom we know about and believe are highly credible, and who are willing to give us their time; it's less often about their values. People who share our values tend to be more excited to work with us.

  • it depends a lot on th

... (read more)

Hey Vasco —

Thanks for your interest and also for raising this with us before you posted so I could post this response quickly!

I think you are asking about the first of these, but I'm going to include a few notes on the 2nd and 3rd as well, just in case, as there's a way of hearing your question as being about them.

  1. What is the internal process by which these rankings are produced and where do you describe it? 
  2. What are problems and paths being ranked by? What does the ranking mean?
  3. Where is our reasoning for why we rank each problem or path the way we
... (read more)
6
Guy Raveh
4mo
Hi Arden, thanks for engaging like this on the forum! Re: "the general type of person we tend to ask for input" - how do you treat the tradeoff between your advisors holding the values of longtermist effective altruism, and them being domain experts in the areas you recommend? (Of course, some people are both - but there are many insightful experts outside EA).
3
Ulrik Horn
4mo
What is your thinking for not including this? I am asking as there might be people (you know better than me!) who might think it worthwhile to pursue this career even if, to them, it has a 0.01% chance of success. There is existing EA advice about being ambitious, but is there advice that I have not seen about not being too ambitious? I feel like many people might "qualify" for becoming a president even if the chance of "making it" is low, so in one way it is perhaps not that narrow (even if there is only one 1st place). And on the way to this goal, people are likely to be managing large pots of money and/or making impactful policy more likely to happen.
3
Vasco Grilo
4mo
Thanks for the comprehensive reply, Arden! Thanks for sharing the 1st version of your answer too, which prompted me to add a little more detail about what I was asking in the post.

I think it would be valuable to include all the additional notes which are not on your website. As a minimum viable product, you may want to link to your comment.

Thanks for sharing! The approach you are following seems to be analogous to what happens in broader society, where there is often one single person responsible for informally aggregating various views. Using a formal aggregation method is the norm in forecasting circles. However, there are often many forecasts to be aggregated, so informal aggregation would hardly be feasible in most cases. On the other hand, Samotsvety, "a group of forecasters with a great track record", also uses formal aggregation methods. I am not aware of research comparing informal to formal aggregation of a few forecasts, so there might not be a strong case either way. In any case, I encourage you to try formal aggregation to see if you arrive at meaningfully different results.

Makes sense. Your previous quantitative framework was equivalent to a weighted-factor model (WFM) with the logarithms of importance, tractability and neglectedness as factors with the same weight, such that the sum respects the logarithm of the cost-effectiveness. Have you considered trying a WFM with the factors that actually drive your views?
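To make the two suggestions in Vasco's comment concrete, here is a minimal, hypothetical sketch (made-up numbers and weights, not 80k's actual figures or method). The first part applies one common formal aggregation method, the geometric mean of odds, to a few probability forecasts; the second shows a weighted-factor model over the logarithms of importance, tractability and neglectedness, where equal weights recover the log of the usual ITN cost-effectiveness and unequal weights let whichever factors actually drive one's views count for more.

```python
import math

# --- Formal aggregation of a few probability forecasts ---
# Geometric mean of odds is one common formal aggregation method.
def geo_mean_of_odds(probs):
    odds = [p / (1 - p) for p in probs]
    agg_odds = math.prod(odds) ** (1 / len(odds))
    return agg_odds / (1 + agg_odds)

forecasts = [0.03, 0.10, 0.20]  # hypothetical individual estimates
print(f"Aggregated forecast: {geo_mean_of_odds(forecasts):.3f}")

# --- Weighted-factor model over log-ITN factors ---
# With equal weights the score is log(I * T * N), i.e. the log cost-effectiveness
# of the classic ITN framework; other weights change the ranking.
def wfm_score(importance, tractability, neglectedness, weights=(1.0, 1.0, 1.0)):
    factors = (importance, tractability, neglectedness)
    return sum(w * math.log(f) for w, f in zip(weights, factors))

problems = {  # hypothetical problem areas with made-up factor values
    "Problem A": (100, 3, 10),
    "Problem B": (30, 8, 40),
}
for name, (i, t, n) in problems.items():
    equal = wfm_score(i, t, n)
    custom = wfm_score(i, t, n, weights=(1.5, 1.0, 0.5))
    print(f"{name}: equal-weight score {equal:.2f}, custom-weight score {custom:.2f}")
```

Whether some set of unequal weights (or entirely different factors) actually tracks one's views better is of course the open question; the sketch is only meant to show that trying it would be cheap.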

Hi Nick —

Thanks for the thoughtful post! As you said, we’ve thought about these kinds of questions a lot at 80k. Striking the right balance of content on our site, and prioritising what kinds of content we should work on next, are really tricky tasks, and there’s certainly reasonable disagreement to be had about the trade-offs.

We’re not currently planning to focus on neartermist content for the website, but:

  • We just released a giant update to our career guide and re-centered it on our site. It is targeted at a broad audience, not just those interested in
... (read more)

I think I've become substantially more hardworking!

I think I started from a middle-to-high baseline but I think I am now "pretty hard working" at least (I say as I write this at 8 am on a Tuesday, demonstrating viscerally my not-perfect work ethic).

The big thing for me was going from academic philosophy to working at 80k. Active ingredients, in order of importance:

  1. Sense of importance of the work getting done and that if I don't do it, just less stuff I think is good will happen.
  2. Sense of competence and being valued.
  3. teammates to provide mix of accountabil
... (read more)

Copying from my comment above:

Update: we've now added some copy on this to our 'about us' page, the front page where we talk about our 'list of the world's most pressing problems', our 'start here' page, and the introduction to our career guide.

That said, I basically agree we could make these views more obvious! E.g. we don't talk much on the front page of the site or in our 'start here' essay or much at the beginning of the career guide. I'm open to thinking we should.

Update: we added some copy on this to our 'about us' page, the front page where we talk about our 'list of the world's most pressing problems', our 'start here' page, and the introduction to our career guide.

Love this, thanks Catherine! Great way of structuring a career story for being useful to the audience btw, might copy it at some point.

Arden here - I lead on the 80k website and am not on the one-on-one team, but thought I could field this one. This is a big question!

We have several different programmes, which face different bottlenecks. I'll just list a few here, but it might be helpful to check out our most recent two-year review for more thoughts – especially the "current challenges" sections for each programme (though that's from some months ago). 

Some current bottlenecks:

  • More writing and research capacity to further improve our online career advice and keep it up to date.
  • Be
... (read more)

Thanks : ) we might workshop a few ways of getting something about this earlier in the user experience.

Hey, I wasn’t a part of these discussions, but from my perspective (web director at 80k), I think we are transparent about the fact that our work comes from a longtermist perspective that suggests that existential risks are the most pressing issues. The reason we try to present, which is also the true reason, is that we think these are the areas where many of our readers, and therefore we, can make the biggest positive impact.

Here are some of the places we talk about this:

1. Our problem profiles page (one of our most popular pages) explicitly say... (read more)

2
Arden Koehler
7mo
Update: we added some copy on this to our 'about us' page, the front page where we talk about our 'list of the world's most pressing problems', our 'start here' page, and the introduction to our career guide.
3
NickLaing
8mo
"E.g. we don't talk much on the front page of the site or in our 'start here' essay or much at the beginning of the career guide. I'm open to thinking we should. " I agree with this, and feel like the best transparent approach might be to put your headline findings on the front page and more clearly, because like you say you do have to dig a surprising amount to find your headline findings. Something like (forgive the average wording) "We think that working on longtermists causes is the best way to do good, so check these out here..." Then maybe even as a caveat somewhere (blatant near termist plug) "some people believe near termist causes are the most important, and others due to their skills or life stage may be in a better position to work on near term causes. If you're interested in learning more about high impact near termist causes check these out here .." Obviously as a web manager you could do far better with the wording but you get my drift!

Love this post -- thanks Rocky! I feel like 5-7 are especially well explained // I haven't seen them explained that way before.

However, many elements of the philosophical case for longtermism are independent of contingent facts about what is going to happen with AI in the coming decades.

Agree, though there are arguments from one to the other! In particular:

  1. As I understand it, longtermism requires it to be tractable to, in expectation, affect the long-term future ("ltf").[1]
  2. Some people might think that the only or most tractable way of affecting the ltf is to reduce extinction[2] risk in the coming decades or century (as you might think we can have no idea about the expected e
... (read more)

Thanks for this post! One thought on what you wrote here:

"My broad view is that EA as a whole is currently in the worst of both worlds with respect to centralisation. We get the downsides of appearing (to some) like a single entity without the benefits of tight coordination and clear decision-making structures that centralised entities have."

I feel unsure about this. Or like, I think it's true we have those downsides, but we also probably get upsides from being in the middle here, so I'm unsure we're in the worst of both worlds rather than e.g. the bes... (read more)

This seems true to me, although I don't have great confidence here.

For some years at times I had thought to myself "Damn, EA is pulling off something interesting - not being an organization, but at the same time being way more harmonious and organized than a movement. Maybe this is why it's so effective and at the same time feels so inclusive." Not much changed recently that would make me update in a different direction. This always stood out to me in EA, so maybe this is one of its core competencies[1] that made it so successful in comparison to so m... (read more)

I don't know the answer to this, because I've only been working at 80k since 2019 - but my impression is this isn't radically different from what might have been written in those years.

Hey Joey, Arden from 80k here. I just wanted to say that I don't think 80k has "the answers" to how to do the most good.

But we do try to form views on the relative impact of different things, so we do try to reach working answers, and then act on our views (e.g. by communicating them and investing more where we think we can have more impact).

So e.g. we prioritise the cause areas we work most on based on our take on their relative pressingness, i.e. how much expected good we think people can do by trying to solve them, and we also communicate these views to our reade... (read more)

This feels fairly tricky to me actually -- I think between the two options presented I'd go with (1) (except I'm not sure what you mean by "If we'd focus specifically on EAs it would be even better" -- I do overall endorse our current choice of not focusing specifically on EAs).

However, some aspects of (2) seem right too. For example, I do think that we talk about a lot of things EAs already know about in much of our content (though not all of it). And I think some of the "here's why it makes sense to focus on impact" - type content does fall into that cat... (read more)

2
Yonatan Cale
10mo
Thanks! I was specifically thinking about career guides (and I'm most interested in software, personally). (I'm embarrassed to say I forgot 80k has lots of other material too, especially since I keep sharing that other material with my friends and referencing it as a trusted source. For example, you're my go-to source about climate. So totally oops for forgetting all that, and +1 for writing it and having it relevant for me too.)

I'm grateful to the people who start new orgs to fill the gaps they see, knowing that's a path with a high chance of not working. I like how dynamic EA is (and think we could stand to be even more dynamic!) and this is largely because new projects keep coming on the scene.

Thanks for this post! I'm curious - can you explain this more?

the AGI doom memeplex has, to some extent, a symbiotic relationship with the race toward AGI memeplex

2
Jan_Kulveit
1y
Sorry for the delay in response. Here I look at it from a purely memetic perspective - you can imagine thinking of it as a self-interested memeplex. Note I'm not claiming this is the main useful perspective, or that this should be the main perspective to take. Basically, from this perspective:

  • The more people think about the AI race, the easier it is to imagine AI doom. Also, the specific artifacts produced by the AI race make people more worried - ChatGPT and GPT-4 likely did more for normalizing and spreading worry about AI doom than all the previous AI safety outreach together. The more the AI race is a clear reality people agree on, the more attentional power and brainpower you will get.

  • But also from the opposite direction: one of the central claims of the doom memeplex is that AI systems will be incredibly powerful in our lifetimes - powerful enough to commit omnicide, take over the world, etc. - and their construction is highly convergent. If you buy into this, and you are a certain type of person, you are pulled toward "being in this game". Subjectively, it's much better if you - the risk-aware, pro-humanity player - are at the front. The safety concerns of Elon Musk leading to the founding of OpenAI likely did more to advance AGI than all the advocacy of Kurzweil-type accelerationists until that point... Empirically, the more people buy into "single powerful AI systems are incredibly dangerous", the more attention goes toward work on such systems.

Both memeplexes share a decent amount of maps, which tend to work as blueprints or self-fulfilling prophecies for what to aim for.
3
David Johnston
1y
AFAIK the official MIRI solution to AI risk is to win the race to AGI but do it aligned. Part of the MIRI theory is that winning the AGI race will give you the power to stop anyone else from building AGI. If you believe that, then it’s easy to believe that there is a race, and that you sure don’t want to lose.
[anonymous] 1y
Maybe something like this: https://www.lesswrong.com/posts/KYzHzqtfnTKmJXNXg/the-toxoplasma-of-agi-doom-and-capabilities 

My interpretation would be that they both tend to buy into the same premises that AGI will occur soon and that it will be godlike in power. Depending on how hard you believe alignment is, this would lead you to believe that we should build AGI as fast as possible (so that someone else doesn't build it first), or that we should shut it all down entirely. 

By spreading and arguing for their shared premises, both the doomers and the AGI racers get boosted by the publicity given to the other, leading to growth for them both. 

As someone who does not accept these premises, this is somewhat frustrating to watch. 

I'm trying out iteratively updating some 80,000 Hours pages that we don't have time to do big research projects on right now. To this end, I've just released an update to https://80000hours.org/problem-profiles/improving-institutional-decision-making/ — our problem profile on improving epistemics and institutional decision making.

This is sort of a tricky page because there is a lot of reasonable-seeming disagreement about what the most important interventions are to highlight in this area.

I think the previous version had some issues: It was confusing, a... (read more)

Hey Holden,

Thanks for these reflections!

Could you maybe elaborate on what you mean by a 'bad actor'? There's some part of me that feels nervous about this as a framing, at least without further specification -- like maybe the concept could either be applied too widely (e.g. to anyone who expresses sympathy with "hard-core utilitarianism", which I'd think wouldn't be right), or be given a really strict definition (like only people with dark tetrad traits) in a way that leaves out people who might be likely to (or: have the capacity to?) take really harmful actions.

2
Holden Karnofsky
1y
To give a rough idea, I basically mean anyone who is likely to harm those around them (using a common-sense idea of doing harm) and/or "pollute the commons" by having an outsized and non-consultative negative impact on community dynamics. It's debatable what the best warning signs are and how reliable they are.

Thank you for doing this work and for the easy-to-read visualisations!

3
Willem Sleegers
1y
Thanks!

Thanks Vaidehi -- agree! I think another key part of why it's been useful is that it's just really readable/interesting -- even for people who aren't already invested in the ideas.

Hey! Arden here, also from 80,000 Hours. I think I can add a few things here on top of what Bella said, speaking more to the web content side of the question:

(These are additional to the 'there's a headwind on engagement time' part of Bella's answer above – though I think they're less important compared to the points Bella already mentioned about a 'covid spike' in engagement time in 2020 and marketing not getting going strongly until the latter half of 2022.)

  1. The career guide (https://80000hours.org/career-guide/) was very popular. In 2019 we deprioritised

... (read more)

Thank you Max for all your hard work and all the good you've done in your role. Your colleagues' testimonials here are lovely to see. I think it's really cool you're taking care of yourself and thinking ahead in this way about handing off responsibility - even though I'm sure it's hard.

Good luck with the transition <3

Nice post. One thought on this - you wrote:

"I’d be especially excited for people to spread messages that help others understand - at a mechanistic level - how and why AI systems could end up with dangerous goals of their own, deceptive behavior, etc. I worry that by default, the concern sounds like lazy anthropomorphism (thinking of AIs just like humans)."

I agree that this seems good for avoiding the anthropomorphism (in perception and in one's own thought!) but I think it'll be important to emphasise when doing this that these are conceivable ways and... (read more)

2
Holden Karnofsky
1y
Agreed!

[writing in my personal capacity, but asked an 80k colleague if it seemed fine for me to post this]

Thanks a lot for writing this - I agree with a lot of (most of?) what's here.

One thing I'm a bit unsure of is the extent to which these worries have implications for the beliefs of those of us who are hovering more around 5% x-risk this century from AI, and who are one step removed from the bay area epistemic and social environment you write about. My guess is that they don't have much implication for most of us, because (though what you say is way bette... (read more)

The 5% figure seems pretty common, and I think this might also be a symptom of risk inflation. 

There is a huge degree of uncertainty around this topic. The factors involved in any prediction vary by many orders of magnitude, so it seems like we should expect the estimates to vary by orders of magnitude as well. So you might get some people saying the odds are 1 in 20, or 1 in 1000, or 1 in a million, and I don't see how any of those estimates can be ruled out as unreasonable. Yet I hardly see anyone giving estimates of 0.1% or 0.001%.

I think people are using 5% as a stand in for "can't rule it out". Like why did you settle at 1 in 20 instead of 1 in a thousand? 

6
NunoSempere
1y
Hey,
  • Your last point about exaggeration incentives seems like an incentive that could exist, but I don't see it playing out.
  • For 80kh itself, considerations such as in this post might apply to career advisors, who have the tricky job of balancing charismatic persuasion with just providing evidence and stepping back when they try to help people make better career decisions.

Hi vmasarik,

Arden from 80k here. Yes https://80000hours.org/career-reviews/founder-impactful-organisations/ is the general and up to date write-up covering founding a charity. I'm not sure exactly what previous page you are referring to, but it sounds to me like it will be represented by https://80000hours.org/career-reviews/founder-impactful-organisations/ (Though https://80000hours.org/career-reviews/founding-effective-global-poverty-non-profits/ might be helpful as well if you are thinking specifically of global health and development charities -- thoug... (read more)

1
vmasarik
1y
I edited the post; previously the page had a similar list with "Founding charities" in 1st place.
Answer by Arden Koehler, Nov 29, 2022

I fulfil my GWWC pledge by donating each month to the EA Funds Animal Welfare Fund for the fund managers to distribute as they see fit. I trust them to make a better decision than I would on the effectiveness of the individual charities, since I don't have that much time/expertise to look into it.

I think the long-run future is incredibly important, and I spend my labour mostly on that. But my guess (though I'm pretty unsure) is that my donations do more good in animal welfare than in longtermism-focused things. Perhaps the new landscape should change that but I ha... (read more)

Thank you for writing this - strong +1. At 80k we are going to be thinking carefully about what this means for our career advice and our ways of communicating - how this should change things and what we should do going forward. But there’s a decent amount we still don’t know and it will also just take time to figure that all out.

It feels like we've just gotten a load of new information, and there’s probably more coming, and I am in favour of updating on things carefully.

Hey, Arden from 80k here -

It'd take more looking into stuff/thinking to talk about the other points, but I wanted to comment on something quickly: thank you for pointing out that the philosophy PhD career profile and the competitiveness of the field weren’t sufficiently highlighted on the GPR problem profile. We’ve now added a note about it in the "How to enter" section.

I wrote the career review when I'd first started at 80k, and for me it was just an oversight not to link to it and its points more prominently on the GPR problem profile.

Nice! I should have mentioned somewhere: the 80K website is huge and has tons of articles on partly-overlapping topics, written over many years by a bunch of different people. If there's an inconsistency, my first guess would have been that one of the articles is out-of-date or they're just different perspectives at 80K that no one noticed need to be brought into contact to hash out who's right.

5
tcelferact
1y
Thanks Arden! I should probably have said it explicitly in the post, but I have benefited a huge amount from the work you folks do, and although I obviously have criticisms, I think 80K's impact is highly net-positive.

One reason might be that this framework seems to bake totalist utilitarianism into longtermism (by considering expansion/contraction and average wellbeing increase/decrease as the two types of longtermist progress/regress), whereas longtermism is compatible with many ethical theories?

2
Arepo
1y
It's phrased in broadly utilitarian terms (though 'wellbeing' is a broad enough concept to potentially encompass concerns that go well beyond normal utilitarian axiologies), but you could easily rephrase using the same structure to encompass any set of concerns that would be consistent with longtermism, which is still basically consequentialist. I think the only thing you'd need to change to have the generality of longtermism is to call 'average wellbeing increase/decrease' something more general like 'average value increase/decrease' - which I would have liked to do, but I couldn't think of a phrase succinct enough to fit on the diagram that didn't sound confusingly like it meant 'increase/decrease to the average person's values'.

Again there doesn’t seem to be a strong reason to think there’s an upper bound to the amount of people that could be killed in a war featuring widespread deployment of AI commanders or lethal autonomous weapons systems.[17]

So on technological grounds, at least, there seem to be no strong reasons to think that the distribution of war outcomes continues all the way to the level of human extinction.

Sounds right!

This made me realise that my post is confusing/misleading in a particular way -- because of the context of the 80,000 Hours problem profiles pag... (read more)

4
Stephen Clare
1y
Thanks Arden, that makes sense. I think it will be hard to separate "x-risk from conventional war" from "x-risk from war fought with WMDs and autonomous weapons" because pacifying interventions like improving US-China relations would seem to reduce both those risks simultaneously.

Thanks for this post!

I strongly agree with this:

This seems odd to consider an ‘existential’ risk - there are many ways in which we can imagine positive or negative changes to expected future quality of life (see for example Beckstead’s idea of trajectory change). Classing low-value-but-interstellar outcomes as existential catastrophes seems unhelpful both since it introduces definitional ambiguity over how much net welfare must be lost for them to qualify, and since questions of expected future quality of life are very distinct from questions of future q

... (read more)
4
Arepo
1y
I certainly don't think we should keep using the old terms with different meanings. I suggest using some new, cleaner terms that are more conducive to probabilistic thinking. In practice I'm sure people will still talk about existential risk for the reason you give,  but perhaps less so, or perhaps specifically when talking about less probabilistic concepts such as population ethics discussions.

Yay! Glad you're doing this.

Whenish might the results be available? (e.g. by the new year, or considerably after?)

Thanks!

I'd say it's pretty uncertain when we'll start publishing the main series of posts. For example, we might work on completing a large part of the series, before we start releasing individual posts, and we may use a format this year where we put more results on a general dashboard, and then include a smaller set of analyses in the main series of posts. But best guess is early in the new year.

That said, we'll be able to provide results/analyses for specific questions you might want to ask about essentially immediately after the survey closes. 

Great post.

One disagreement:

Principle 3: Our explicit, subjective credences are approximately accurate enough, most of the time, even in crazy domains, for it to be worth treating those credences as a salient input into action.

I think for me at least, and I'd guess for other people, the thing that makes the explicit subjective credences worth using is that since we have to make prioritisation decisions//decisions about how to act anyway, and we're going to make them using some kind of fuzzy approximated expected value reasoning, making our probabiliti...

Thanks Rebecca, I see how that's a confusing way to organise things -- will pass on this feedback.

3
IanDavidMoss
1y
I was also going to say that it's pretty confusing that this list is not the same as either the top problem areas listed elsewhere on the site or the top-priority career paths, although it seems derived from the latter. Maybe there are some version control issues here?

Thanks David - it seems like an important harm to consider if we've caused people who'd otherwise be doing valuable work in global health / animal welfare / other issues to leave the EA community // not do as valuable work.

Thanks! Agree about there being tradeoffs here. Curious if you have more to say on this:

Mainly, I worry that (mainly through social dynamics) some people are pushed out of careers where they would actually have more impact, by moving into careers where they can't thrive as well

Am I right in thinking that the worry is that, by raising the status of some careers, 80k creates social pressure to do those rather than the one you have greater personal fit for?

(Do you think there’s a (reasonable) amount of emphasis on personal fit we could present which would mostly ameliorate your worries on this?)

1
pete
1y
One element of personal fit that’s not mentioned is the choice to have kids / become a primary caregiver for someone — see bessieodell’s great post. Current impact calculations don’t include this by default, which I think creates a cultural undercurrent of “real EAs don’t factor caregiving into their careers.” Post here: https://forum.effectivealtruism.org/posts/ahne8S7JdmjmjHieu/does-effective-altruism-cater-to-women
3
Pseudaemonia
1y
I think more emphasis on what makes a fulfilling career, as distinct from personal fit, which I take to mean ‘chance of being excellent at this’, would help ameliorate this and similar worries. This could just mean signal boosting more of your research on what makes a fulfilling career
8
lincolnq
1y
I think 80k has tried to emphasize personal fit in the content, but something about the presentation seems to dominate the content, and I think that is somehow related to social dynamics. Something seems to get in the way of the "personal fit" message coming through; I think it is related to having "top recommended career paths". I don't know how to ameliorate this, or I would suggest it directly. I'm sure this is frustrating to you too, since like 90% of the guide is dedicated to making the point that personal fit is important; and people seem to gloss over that.

One thing that could help would be eliminating the "top recommended career paths" part of the website entirely. That will be very unsatisfying to some readers, and possibly reduce the 'virality' of the entire project, so may be a net bad idea; but it would help with this particular problem. I am afraid I don't have any better ideas.

Arden here from 80k -- just wanted to note the figures you cite are from a survey and were not 80k's overall views.

Our articles put AI risk closer to 10% (https://80000hours.org/problem-profiles/artificial-intelligence/) and the risk from nanotechnology much lower, though we don't try to estimate it numerically (we have a mini write-up here: https://80000hours.org/problem-profiles/atomically-precise-manufacturing/)

Seems like we should update that article anyway though. Thanks for drawing my attention to it.

1
Prometheus
1y
Thanks! I'll update to correct this.

https://www.xriskology.com/books

I guess if that were the reason, it'd probably be because people worry it implies Rees might agree with a bunch of Torres' views they think are very bad. Though I think that forwarding someone's book or blurbing someone's book is pretty consistent with disagreeing strongly with a bunch of their stuff (if you even know about it).

Rees has also written multiple blurbs for Will MacAskill, Nick Bostrom et al.

Unsure why this was downvoted. Upvoted for giving a possible reason he might not be mentioned more (not saying it's a good reason).

Sidenote: if we're so parochial that Cambridge is too far for Oxford-dominated EA to take notice of what goes on there... that seems like pretty bad news.

[anonymous] 2y

Oxford vs Cambridge seems more likely to me than the blurb explanation because Torres' book was published in 2017 and would only explain changes after that time, but I don't have any particular reason to think anything changed at that time. Happy to be corrected though. 
