All of Arden Koehler's Comments + Replies

Is it because EAs feel helpless in addressing this problem? Do they think it’s simply not neglected enough to be worth the impact?

I think this is part of why EA doesn't invest much here, along with what Ollie said.

I'm pretty excited about EAs doing good work in politics, but (1) it's a hard sell from a tractability / neglectedness perspective, & (2) it's easy to do bad work, so it's kind of hard to boot up much effort.

Thank you for doing this analysis!

Would you say this analysis is limited to safety from misalignment-related risks, or any (potentially catastrophic) risks from AI, including misuse, gradual disempowerment, etc.?

Stephen McAleese
The technical AI safety organizations cover a variety of areas including AI alignment, AI security, interpretability, and evals, with the most FTEs working on empirical AI safety topics like LLM alignment, jailbreaks, and robustness, which cover a variety of risks including misalignment and misuse.
Arden Koehler
70% agree

Far-future effects are the most important determinant of what we ought to do


I agree it's insanely hard to know what will affect the far future, and how. But I think we should still try, often by using heuristics (one I'm currently fond of is "what kinds of actions seem to put us on a good trajectory, e.g. to be doing well in 100 years?")

I think that in cases where we do have reason to think an action will affect the long run future broadly and positively in expectation (i.e. even if we're uncertain) that's an extremely strong reason -- and usually an overr... (read more)

Arden Koehler
90% ➔ 60% agree

I feel unsure I'd be trying hard to do good at all, let alone actually doing things I think have a lot of ex ante value. I wasn't on track, when I heard of EA, to dedicate much of my resources to positive impact. But hard to be certain ofc! + not sure I'm doing good now, since what I work on has a lot of uncertainty on the impacts (& even the sign). 

(trying to hit like 80% agree but seem to be missing it) 

Hey Matt,

  1. I share several of the worries articulated in this post.
  2. I think you're wrong about how you characterise 80k's strategic shift here, and want to try to correct the record on that point. I'm also going to give some concrete examples of things I'm currently doing, to illustrate a bit what I mean, & also include a few more personal comments.

(Context: I run the 80k web programme.)

if you glorify some relatively-value-neutral conception of AI safety as the summum bonum of what is or used to be EA, there is just a good chance that you will lose the pl

... (read more)

My view is that it's worth it, because there is a danger of people just jumping into jobs that have "AI" or even "AI security/safety" in the name, without grappling with tough questions around what it actually means to help AGI go well or prioritising between options based on expected impact.

 

I appreciate the dilemma and don't want to imply this is an easy call. 

For me the central question in all of this is whether you foreground process (EA) or conclusion (AGI going well). It seems like the whole space is uniformly rushing to foreground the conclus... (read more)

Anecdote: I'm one of those people -- would say I'd barely heard of EA / basically didn't know what it was, before a friend who already knew of it suggested I come to an EA Global (I think at the time one got a free t-shirt for referring friends). We were both philosophy students & I studied ethics, so I think he thought I might be interested even though we'd never talked about EA.

Thanks as always for this valuable data! 

Given 80k is a large and growing source of people hearing about and getting involved in EA, some people reading this might be worried that 80k will stop contributing to EA's growth, given our new strategic focus on helping people work on safely navigating the transition to a world with AGI. 

tl;dr: I don't think it will stop, and it might continue as before, though it's possible it will be reduced somewhat.

More:

I am not sure whether 80k's contribution to building EA in terms of sheer numbers of people get... (read more)

NickLaing
Thanks, that's a useful reply, with your points 1 and 2 being quite reassuring. Your no. 4, though, seems very optimistic. A more narrow focus seems unlikely to increase interest over the whole spectrum of seekers coming to the site, when the default is 80k being the front page of the EA Internet for all comers. The number of AI-interested people getting hooked increasing more than the fallout for all other areas seems pretty unlikely. And I can't really see a world where older people would be more attracted to a site which focuses on an emerging and largely young person's issue.
David_Moss
Thanks Arden! I also agree that prima facie this strategic shift might seem worrying given that 80K has been the powerhouse of EA movement growth for many years. That said, I share your view that growth via 80K might reduce less than one would naively expect. In addition to the reasons you give above, another consideration is our finding that a large percentage of people get into EA via 'passive' outreach (e.g. someone googles "ethical career" and finds the 80K website; for 80K specifically, about 50% of recruitment was 'passive'), rather than active outreach, and it seems plausible that much of that could continue even after 80K's strategic shift. As noted elsewhere, we plan to research this empirically. Fwiw, my guess is that broader EA messaging would be better (on average and when comparing the best messaging from each) at recruiting people to high levels of engagement in EA (this might differ when looking to recruit people directly into AI related roles), though with a lot of variance within both classes of message.

Hey Geoffrey,

Niel gave a response to a similar comment below -- I'll just add a few things from my POV:

  • I'd guess that pausing (incl. for a long time) or slowing down AGI development would be good for helping AGI go well if it could be done by everyone / enforced / etc. - so figuring out how to do that would be in scope re this more narrow focus. So e.g. figuring out how an indefinite pause could work (maybe in a COVID-crisis-like world where the Overton window shifts?) seems helpful
  • I (& others at 80k) are just a lot less pessimistic vis a vis the prospect
... (read more)

I don’t think we have anything written/official on this particular issue (though we have covered other mental health topics here). But this is one reason why we don’t think everyone should work on AIS/trying to help things go well with AGI: even though we want to encourage more people to consider it, we don’t blanket recommend it to everyone. We wrote a little bit here about an issue that seems related - what to do if you find the case for an issue intellectually compelling but don't feel motivated by it.

Hi Romain,

Thanks for raising these points (and also for your translation!)

We are currently planning to retain our cause-neutral (& cause-opinionated), impactful careers branding, though we do want to update the site to communicate much more clearly and urgently our new focus on helping things go well with AGI, which will affect our brand.

How to navigate the kinds of tradeoffs you are pointing to is something we will be thinking about more as we propagate this shift in focus through to our most public-facing programmes. We don't have answers jus... (read more)

david_reinstein
I would lean the other way, at least in some comms. You wouldn’t want people to think that (e.g.) “the career guidance space in high impact global health and wellbeing is being handled by 80k”. Changing branding could more clearly open opportunities for other orgs to enter spaces like that.

Thanks for the feedback here. I mostly want to just echo Niel's reply, which basically says what I would have wanted to. But I also want to add for transparency/accountability's sake that I reviewed this post before we published it with the aim of helping it communicate the shift well – I focused mostly on helping it communicate clearly and succinctly, which I do think is really important, but I think your feedback makes sense, and I wish that I'd also done more to help it demonstrate the thought we've put into the tradeoffs involved and awareness of the c... (read more)

Adding a bit more to my other comment:

For what it’s worth, I think it makes sense to see this as something of a continuation of a previous trend – 80k has for a long time prioritised existential risks more than the EA community as a whole. This has influenced EA (in my view, in a good way), and at the same time EA as a whole has continued to support work on other issues. My best guess is that that is good (though I'm not totally sure - EA as a whole mobilising to help things go better with AI also sounds like it could be really positively impactful).

From a

... (read more)
zdgroff
  I think the argument you linked to is reasonable. I disagree, but not strongly. But I think it's plausible enough that AGI concerns (from an impartial cause prioritization perspective) require fanaticism that there should still be significant worry about it. My take would be that this worry means an initially general EA org should not overwhelmingly prioritize AGI.

Hi Håkon, Arden from 80k here.

Great questions.

On org structure:

One question for us is whether we want to create a separate website ("10,000 Hours?"), that we cross-promote from the 80k website, or to change the 80k website a bunch to front the new AI content. That's something we're still thinking about, though I am currently weakly leaning toward the latter (more on why below). But we're not currently thinking about making an entire new organisation.

Why not?

For one thing, it'd be a lot of work and time, and we feel this shift is urgent.

Primarily, though, 8... (read more)

Hey Zach. I'm about to get on a plane so won't have time to write a full response, sorry! But wanted to say a few quick things before I do.

Agree that it's not certain or obvious that AI risk is the most pressing issue (though it is 80k's best guess & my personal best guess, and I don't personally have the view that it requires fanaticism.) And I also hope the EA community continues to be a place where people work on a variety of issues -- wherever they think they can have the biggest positive impact.

However, our top commitment at 80k is to do our best ... (read more)

zdgroff
  Yeah, FWIW, it's mine too. Time will tell how I feel about the change in the end. That EA Forum post on the 80K-EA community relationship feels very appropriate to me, so I think my disagreement is about the application.

Arden from 80k here -- just flagging that most of 80k is currently asleep (it's midnight in the UK), so we'll be coming back to respond to comments tomorrow! I might start a few replies, but will be getting on a plane soon so will also be circling back.

I agree with this - 80,000 Hours made this change about a year ago.

Carl Shulman questioned the tension between AI welfare & AI safety on the 80k podcast recently -- I thought this was interesting! He basically argues that AI takeover could be even worse for AI welfare. From the end of the section:

Rob Wiblin: Maybe a final question is it feels like we have to thread a needle between, on the one hand, AI takeover and domination of our trajectory against our consent — or indeed potentially against our existence — and this other reverse failure mode, where humans have all of the power and AI interests are simply ignored. Is there

... (read more)
Lucius Caviola
Thanks, I also found this interesting. I wonder if this provides some reason for prioritizing AI safety/alignment over AI welfare.

Thanks for this valuable reminder!

btw, the link on "more about legal risks" at the top goes to the wrong place.

Cool project - I tried to subscribe to the podcast, to check it out. But I couldn't find it on PocketCasts, so I didn't (didn't seem worth me using a 2nd platform).

I wanted to subscribe because I've wanted an audio feed to listen to while I commute that keeps me in touch with events outside the more specific areas of interest I hear about through niche channels, while not going quite as broad / un-curated as the BBC news (which I currently use for this) -- and this seemed like potentially a good middle ground.

Tiny other feedback: the title feels aggressive ... (read more)

OdinMB 🔸
Thanks so much for the feedback! I submitted the podcast to PocketCasts. It should be available within a few days. (Also Podchaser and a few others I had missed.) I agree that "Actually Relevant" can come off as dismissive or confrontational. I'll check how big of an issue this is for other people. Maybe we can balance this impression somehow with other aspects of the service. I'm hesitant about changing the name as that would be quite a hassle.

The project aligns closely with the fund's vision of a "principles-first EA" community, we’d be excited for the EA community’s outputs to look more like Richard’s.

Is this saying that the move to principles-first EA as a strategic perspective for EAF goes with a belief that more EA work should be "principles first" & not cause-specific (so that more of the community's outputs look like Richard's)? I wouldn't have necessarily inferred that just from the fact that you're making this strategic shift (could be more of a comp advantage / focus thing) so wanted to clarify.

Tom Barnes🔸
Hi Arden, thanks for the comment. I think this was something that got lost in translation during the grant writeup process. In the grant evaluation doc this was written as:

This is a fairly fuzzy view, but my impression is Richard's outputs will align with the takes in this post both by "fighting for EA to thrive long term" (increasing the quality of discussion around EA in the public domain), and also by increasing the number of "thoughtful, sincere, selfless" individuals in the community (via his substack, which has a decently sized readership), who may become more deeply involved in EA as a result.

On the broader question about "principles first" vs "cause specific" EA work:

  • I think EAIF will ceteris paribus fund more "principles-first" projects than cause-specific meta projects compared to previously.
  • However, I think this counterbalances other grantmaking changes which focus on cause-specific meta directly (e.g. OP GCR capacity building / GHW funding).
  • I'd guess this nets out such that the fraction of funding towards "principles-first" EA decreases, rather than increases (due to OP's significantly larger assets).
  • As such, the decision to focus on "principles-first" is more of a comp advantage / focus for EAIF specifically, rather than a belief about what the community should do more broadly.
  • (That said, on the margin I think a push in this direction is probably helpful / healthy for the community more broadly, but this is pretty lightly held and other fund managers might disagree.)

Speaking in a personal capacity here --

We do try to be open to changing our minds so that we can be cause neutral in the relevant sense, and we do change our cause rankings periodically and spend time and resources thinking about them (in fact we’re in the middle of thinking through some changes now). But how well set up are we, institutionally, to be able to in practice make changes as big as deprioritising risks from AI if we get good reasons to? I think this is a good question, and want to think about it more. So thanks!

Just want to say here (since I work at 80k & commented abt our impact metrics & other concerns below) that I think it's totally reasonable to:

  1. Disagree with 80,000 Hours's views on AI safety being so high priority, in which case you'll disagree with a big chunk of the organisation's strategy.
  2. Disagree with 80k's views on working in AI companies (which, tl;dr, is that it's complicated and depends on the role and your own situation but is sometimes a good idea). I personally worry about this one a lot and think it really is possible we could be wrong h
... (read more)

Thanks Arden. I suspect you don't disagree with the people interviewed for this report all that much then, though ultimately I can only speak for myself. 

One possible disagreement that you and other commenters brought up, which I meant to respond to in my first comment but forgot: I would not describe 80,000 Hours as cause-neutral, as you try to do here and here. This seems to be an empirical disagreement; quoting from the second link:

We are cause neutral[1] – we prioritise x-risk reduction because we think it's most pressing, but it’s possible we co

... (read more)

Hey, Arden from 80,000 Hours here – 

I haven't read the full report, but given the time sensitivity with commenting on forum posts, I wanted to quickly provide some information relevant to some of the 80k mentions in the qualitative comments, which were flagged to me.

Regarding whether we have public measures of our impact & what they show

It is indeed hard to measure how much our programmes counterfactually help move talent to high impact causes in a way that increases global welfare, but we do try to do this.

From the 2022 report the relevant sectio... (read more)

Hi Arden,

Thanks for engaging. 

(1) Impact measures: I'm very appreciative of the amount of thought that went into developing the DIPY measure. The main concern (from the outside) with respect to DIPY is that it is critically dependent on the impact-adjustment variable - it's probably the single biggest driver of uncertainty (since causes can vary by many magnitudes). Depending on whether you think the work is impactful (or if you're sceptical, e.g. because you're an AGI sceptic or because you're convinced of the importance of preventing AGI risk but wo... (read more)

The 2020 EA survey link says "More than half (50.7%) of respondents cited 80,000 Hours as important for them getting involved in EA". (2022 says something similar.)

 

I would also add these results, which I think are, if anything, even more relevant to assessing impact:

... (read more)

I like this post and also worry about this phenomenon.

When I talk about personal fit (and when we do so at 80k) it's basically about how good you are at a thing/the chance that you can excel.

It does increase your personal fit for something to be intuitively motivated by the issue it focuses on, but I agree that it seems way too quick to conclude then that your personal fit with that is higher than other things (since there are tons of factors and there are also lots of different jobs for each problem area), let alone that that means you should work on that issue all things considered (since personal fit is not the only factor).

I think it would be especially valuable to see to which degree they reflect the individual judgment of decision-makers.

The comment above hopefully helps address this.

I would also be interested in whether they take into account recent discussions/criticisms of model choices in longtermist math that strike me as especially important for the kind of advising 80.000 hours does (tldr: I take one crux of that article to be that longtermist benefits by individual action are often overstated, because the great benefits longtermism advertises require both redu

... (read more)
mhendric🔸
Hey there, thank you both for the helpful comments. I agree the shorttermist/longtermist framing shouldn't be understood as too deep a divide or too reductive a category, but I think it serves a decent purpose for making clear a distinction between different foci in EA (e.g. Global Health/Factory Farming vs AI-Risk/Biosecurity etc). The comment above really helped me in seeing how prioritization decisions are made. Thank you for that, Ardenlk!

I'm a bit less bullish than Vasco on it being good that 80k does their own prioritization work. I don't think it is bad per se, but I am not sure what is gained by 80k research on the topic vis a vis other EA people trying to figure out prioritization. I do worry that what is lost are advocates/recommendations for causes that are not currently well-represented in the opinion of the research team, but that are well-represented among other EAs more broadly. This makes people like me have a harder time funneling folks to EA-principles-based career advising, as I'd be worried the advice they receive would not be representative of the considerations of EA folks, broadly construed. Again, I realize I may be overly worried here, and I'd be happy to be corrected!

I read the Thorstadt critique as somewhat stronger than the summary you give - certainly, just invoking X-risk should not per default justify assuming astronomical value. But my sense from the two examples (one from Bostrom, one on cost-effectiveness of biorisk) was that more plausible modeling assumptions seriously undercut at least some current cost-effectiveness models in that space, particularly for individual interventions (as opposed to e.g. systemic interventions that plausibly reduce risk long-term). I did not take it to imply that risk-reduction is not a worthwhile cause, but that current models seem to arrive at the dominance of it as a cause based on implausible assumptions (e.g. about background risk). I think my perception of 80k as "partisan" stems from

I think it would be valuable to include all the additional notes which are not on your website. As a minimum viable product, you may want to link to your comment.

Thanks for your feedback here!

Your previous quantitative framework was equivalent to a weighted-factor model (WFM) with the logarithms of importance, tractability and neglectedness as factors with the same weight, such that the sum respects the logarithm of the cost-effectiveness. Have you considered trying a WFM with the factors that actually drive your views?

I feel unsure about whether we sho... (read more)
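(For concreteness, a minimal sketch of the equivalence being described here, assuming the standard decomposition where cost-effectiveness is the product of importance (I), tractability (T) and neglectedness (N):

\[
\text{cost-effectiveness} \;=\; I \times T \times N
\quad\Longrightarrow\quad
\log(\text{cost-effectiveness}) \;=\; \log I + \log T + \log N
\]

Taking logarithms turns the product into an equally weighted sum of the three log-factors, i.e. a weighted-factor model with weight 1 on each.)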

I agree that it might be worthwhile to try to become the president of the US - but that wouldn't mean it's best for us to have an article on it, especially highly ranked. That takes real estate on our site, attention from readers, and time. This specific path is a sub-category of political careers, which we have several articles on. In the end, it is not possible for us to have profiles on every path that is potentially worthwhile for someone. My take is that it's better for us to prioritise options where the described endpoint is achievable for at least a healthy handful of readers.

No, we have lots of external advisors that aren't listed on our site. There are a few reasons we might not list people, including:

  • We might not want to commit to asking for someone's advice for a long time, or we might need to remove them at some point.

  • The person might be happy to help us and give input but not want to be featured on our site.

  • It's work to add people, and we often will reach out to someone in our network fairly quickly and informally, and it would feel like overkill / too much friction to get a bio, and get permission from them for it,

... (read more)

This is a good question -- we don't have a formal approach here, and I personally think that in general, it's quite a hard problem who to ask for advice.

A few things to say:

  • the ideal is often to have both.

  • the bottleneck on getting more people with domain expertise is more often that we don't have people in our network with sufficient expertise, whom we know about and believe are highly credible, and who are willing to give us their time - rather than their values. People who share our values tend to be more excited to work with us.

  • it depends a lot on th

... (read more)

Hey Vasco —

Thanks for your interest and also for raising this with us before you posted so I could post this response quickly!

I think you are asking about the first of these, but I'm going to include a few notes on the 2nd and 3rd as well, just in case, as there's a way of hearing your question as being about them. 

  1. What is the internal process by which these rankings are produced and where do you describe it? 
  2. What are problems and paths being ranked by? What does the ranking mean?
  3. Where is our reasoning for why we rank each problem or path the way we
... (read more)
Guy Raveh
Hi Arden, thanks for engaging like this on the forum! Re: "the general type of person we tend to ask for input" - how do you treat the tradeoff between your advisors holding the values of longtermist effective altruism, and them being domain experts in the areas you recommend? (Of course, some people are both - but there are many insightful experts outside EA).
Benevolent_Rain
What is your thinking for not including this? I am asking as there might be people (you know better than me!) who might think it worthwhile to pursue this career even if it has, for them, a 0.01% chance of success. I am asking as there is existing EA advice about being ambitious, but is there advice that I have not seen about not being too ambitious? I feel like many people might "qualify" for becoming a president even if the chance of "making it" is low, so in one way it is perhaps not that narrow (even if there is only one 1st place). And on the way to this goal, people are likely to be managing large pots of money and/or making impactful policy more likely to happen.
Vasco Grilo🔸
Thanks for the comprehensive reply, Arden! Thanks for sharing the 1st version of your answer too, which prompted me to add a little more detail about what I was asking in the post. I think it would be valuable to include all the additional notes which are not on your website. As a minimum viable product, you may want to link to your comment.

Thanks for sharing! The approach you are following seems to be analogous to what happens in broader society, where there is often one single person responsible for informally aggregating various views. Using a formal aggregation method is the norm in forecasting circles. However, there are often many forecasts to be aggregated, so informal aggregation would hardly be feasible for most cases. On the other hand, Samotsvety, "a group of forecasters with a great track record", also uses formal aggregation methods. I am not aware of research comparing informal to formal aggregation of a few forecasts, so there might not be a strong case either way. In any case, I encourage you to try formal aggregation to see if you arrive at meaningfully different results.

Makes sense. Your previous quantitative framework was equivalent to a weighted-factor model (WFM) with the logarithms of importance, tractability and neglectedness as factors with the same weight, such that the sum respects the logarithm of the cost-effectiveness. Have you considered trying a WFM with the factors that actually drive your views?

Hi Nick —

Thanks for the thoughtful post! As you said, we’ve thought about these kinds of questions a lot at 80k. Striking the right balance of content on our site, and prioritising what kinds of content we should work on next, are really tricky tasks, and there’s certainly reasonable disagreement to be had about the trade-offs.

We’re not currently planning to focus on neartermist content for the website, but:

  • We just released a giant update to our career guide and re-centered it on our site. It is targeted at a broad audience, not just those interested in
... (read more)

I think I've become substantially more hardworking!

I think I started from a middle-to-high baseline but I think I am now "pretty hard working" at least (I say as I write this at 8 am on a Tuesday, demonstrating viscerally my not-perfect work ethic).

The big thing for me was going from academic philosophy to working at 80k. Active ingredients, in order of importance:

  1. A sense of the importance of the work getting done, and that if I don't do it, just less stuff I think is good will happen.
  2. Sense of competence and being valued.
  3. teammates to provide mix of accountabil
... (read more)

Copying from my comment above:

Update: we've now added some copy on this to our 'about us' page, the front page where we talk about 'list of the world' most pressing problems', our 'start here' page, and the introduction to our career guide.

That said, I basically agree we could make these views more obvious! E.g. we don't talk much on the front page of the site or in our 'start here' essay or much at the beginning of the career guide. I'm open to thinking we should.

Update: we added some copy on this to our 'about us' page, the front page where we talk about 'list of the world' most pressing problems', our 'start here' page, and the introduction to our career guide.

Love this, thanks Catherine! Great way of structuring a career story for being useful to the audience btw, might copy it at some point.

Arden here - I lead on the 80k website and am not on the one-on-one team, but thought I could field this one. This is a big question!

We have several different programmes, which face different bottlenecks. I'll just list a few here, but it might be helpful to check out our most recent two-year review for more thoughts – especially the "current challenges" sections for each programme (though that's from some months ago). 

Some current bottlenecks:

  • More writing and research capacity to further improve our online career advice and keep it up to date.
  • Be
... (read more)

Thanks : ) we might workshop a few ways of getting something about this earlier in the user experience.

Hey, I wasn’t a part of these discussions, but from my perspective (web director at 80k), I think we are transparent about the fact that our work comes from a longtermist perspective that suggests that existential risks are the most pressing issues. The reason we try to present, which is also the true reason, is that we think these are the areas where many of our readers, and therefore we, can make the biggest positive impact.

Here are some of the places we talk about this:

1. Our problem profiles page (one of our most popular pages) explicitly say... (read more)

Arden Koehler
Update: we added some copy on this to our 'about us' page, the front page where we talk about 'list of the world' most pressing problems', our 'start here' page, and the introduction to our career guide.
NickLaing
"E.g. we don't talk much on the front page of the site or in our 'start here' essay or much at the beginning of the career guide. I'm open to thinking we should. " I agree with this, and feel like the best transparent approach might be to put your headline findings on the front page and more clearly, because like you say you do have to dig a surprising amount to find your headline findings. Something like (forgive the average wording) "We think that working on longtermists causes is the best way to do good, so check these out here..." Then maybe even as a caveat somewhere (blatant near termist plug) "some people believe near termist causes are the most important, and others due to their skills or life stage may be in a better position to work on near term causes. If you're interested in learning more about high impact near termist causes check these out here .." Obviously as a web manager you could do far better with the wording but you get my drift!

Love this post -- thanks Rocky! I feel like 5-7 are especially well explained // I haven't seen them explained that way before.

However, many elements of the philosophical case for longtermism are independent of contingent facts about what is going to happen with AI in the coming decades.

Agree, though there are arguments from one to the other! In particular:

  1. As I understand it, longtermism requires it to be tractable to, in expectation, affect the long-term future ("ltf").[1]
  2. Some people might think that the only or most tractable way of affecting the ltf is to reduce extinction[2] risk in the coming decades or century (as you might think we can have no idea about the expected e
... (read more)

Thanks for this post! One thought on what you wrote here:

"My broad view is that EA as a whole is currently in the worst of both worlds with respect to centralisation. We get the downsides of appearing (to some) like a single entity without the benefits of tight coordination and clear decision-making structures that centralised entities have."

I feel unsure about this. Or like, I think it's true we have those downsides, but we also probably get upsides from being in the middle here, so I'm unsure we're in the worst of both worlds rather than e.g. the bes... (read more)

This seems true to me, although I don't have great confidence here.

For some years at times I had thought to myself "Damn, EA is pulling off something interesting - not being an organization, but at the same time being way more harmonious and organized than a movement. Maybe this is why it's so effective and at the same time feels so inclusive." Not much changed recently that would make me update in a different direction. This always stood out to me in EA, so maybe this is one of its core competencies[1] that made it so successful in comparison to so m... (read more)

I don't know the answer to this, because I've only been working at 80k since 2019 - but my impression is this isn't radically different from what might have been written in those years.

Hey Joey, Arden from 80k here. I just wanted to say that I don't think 80k has "the answers" to how to do the most good.

But we do try to form views on the relative impact of different things, so we do try to reach working answers, and then act on our views (e.g. by communicating them and investing more where we think we can have more impact).

So e.g. we prioritise cause areas we work most on by our take at their relative pressingness, i.e. how much expected good we think people can do by trying to solve them, and we also communicate these views to our reade... (read more)

This feels fairly tricky to me actually -- I think between the two options presented I'd go with (1) (except I'm not sure what you mean by "If we'd focus specifically on EAs it would be even better" -- I do overall endorse our current choice of not focusing specifically on EAs).

However, some aspects of (2) seem right too. For example, I do think that we talk about a lot of things EAs already know about in much of our content (though not all of it). And I think some of the "here's why it makes sense to focus on impact" - type content does fall into that cat... (read more)

Yonatan Cale
Thanks I was specifically thinking about career guides (and I'm most interested in software, personally).   (I'm embarrassed to say I forgot 80k has lots of other material too, especially since I keep sharing that other-material with my friends and referencing it as a trusted source. For example, you're my go-to source about climate. So totally oops for forgetting all that, and +1 for writing it and having it relevant for me too)

I'm grateful to the people who start new orgs to fill the gaps they see, knowing that's a path with a high chance of not working. I like how dynamic EA is (and think we could stand to be even more dynamic!) and this is largely because new projects keep coming on the scene.

Thanks for this post! I'm curious - can you explain this more?

the AGI doom memeplex has, to some extent, a symbiotic relationship with the race toward AGI memeplex

Jan_Kulveit
Sorry for the delay in response. Here I look at it from a purely memetic perspective - you can imagine the thinking as a self-interested memeplex. Note I'm not claiming this is the main useful perspective, or that this should be the main perspective to take. Basically, from this perspective:

  • The more people think about the AI race, the easier it is to imagine AI doom. Also, the specific artifacts produced by the AI race make people more worried - ChatGPT and GPT-4 likely did more for normalizing and spreading worry about AI doom than all the previous AI safety outreach together. The more the AI race is a clear reality people agree on, the more attentional power and brainpower you will get.
  • But also from the opposite direction: one of the central claims of the doom memeplex is that AI systems will be incredibly powerful in our lifetimes - powerful enough to commit omnicide, take over the world, etc. - and that their construction is highly convergent. If you buy into this, and you are a certain type of person, you are pulled toward "being in this game". Subjectively, it's much better if you - the risk-aware, pro-humanity player - are at the front. Safety concerns of Elon Musk leading to the founding of OpenAI likely did more to advance AGI than all the advocacy of Kurzweil-type accelerationists until that point... Empirically, the more people buy into "single powerful AI systems are incredibly dangerous", the more attention goes toward work on such systems. Both memeplexes share a decent amount of maps, which tend to work as blueprints or self-fulfilling prophecies for what to aim for.
David Johnston
AFAIK the official MIRI solution to AI risk is to win the race to AGI but do it aligned. Part of the MIRI theory is that winning the AGI race will give you the power to stop anyone else from building AGI. If you believe that, then it’s easy to believe that there is a race, and that you sure don’t want to lose.
[anonymous]
Maybe something like this: https://www.lesswrong.com/posts/KYzHzqtfnTKmJXNXg/the-toxoplasma-of-agi-doom-and-capabilities 

My interpretation would be that they both tend to buy into the same premises that AGI will occur soon and that it will be godlike in power. Depending on how hard you believe alignment is, this would lead you to believe that we should build AGI as fast as possible (so that someone else doesn't build it first), or that we should shut it all down entirely. 

By spreading and arguing for their shared premises, both the doomers and the AGI racers get boosted by the publicity given to the other, leading to growth for them both. 

As someone who does not accept these premises, this is somewhat frustrating to watch. 

I'm trying out iteratively updating some 80,000 Hours pages that we don't have time to do big research projects on right now. To this end, I've just released an update to https://80000hours.org/problem-profiles/improving-institutional-decision-making/ — our problem profile on improving epistemics and institutional decision making.

This is sort of a tricky page because there is a lot of reasonable-seeming disagreement about what the most important interventions are to highlight in this area.

I think the previous version had some issues: It was confusing, a... (read more)

Hey Holden,

Thanks for these reflections!

Could you maybe elaborate on what you mean by a 'bad actor'? There's some part of me that feels nervous about this as a framing, at least without further specification -- like maybe the concept could be either applied too widely (e.g. to anyone who expresses sympathy with "hard-core utilitarianism", which I'd think wouldn't be right), or have a really strict definition (like only people with dark tetrad traits) in a way that leaves out people who might be likely to (or: have the capacity to?) take really harmful actions.

Holden Karnofsky
To give a rough idea, I basically mean anyone who is likely to harm those around them (using a common-sense idea of doing harm) and/or "pollute the commons" by having an outsized and non-consultative negative impact on community dynamics. It's debatable what the best warning signs are and how reliable they are.

Thank you for doing this work and for the easy-to-read visualisations!

Willem Sleegers
Thanks!

Thanks Vaidehi -- agree! I think another key part of why it's been useful is that it's just really readable/interesting -- even for people who aren't already invested in the ideas.
