1. By recommending people work at the big AI labs (whose explicit aim is to create AGI), do you think 80k creates a positive Halo Effect for the labs' brands? 80k is known as an organisation whose mission is to make the world a better place, so by recommending people invest their careers at a lab, those positive brand associations get passed on to the lab. (This is how most brand partnerships work. This point shouldn't be a crux, since 80k has run partnerships in the past for its own marketing purposes.)

Put concretely, the impact is that people (job seekers, investors, users of LLMs) can look at the lab in question and assume that the lab is not doing a bad thing by trying to quickly create AGI.

2. If you think the answer to #1 is yes (it does create a positive Halo Effect), do you believe that this Halo Effect has a cost? I.e. is it bad that you're improving the labs' brand perception among job seekers, investors, and users of LLMs? (TBH, I don't think I've ever actually seen or heard anyone at 80k point at a big lab and say "um, I don't think you should make that thing that might kill everyone", so maybe this is a non-starter?)

3. If you think there is a cost, do you believe it is outweighed by the benefit of having safety-minded EA / Rationalist folk inside big labs? This is a crux I find hard to wrap my head around, but it's possible that everything boils down to this question. My personal take is that if you're unsure about this, then you shouldn't be creating the Halo Effect in the first place.





I'm not affiliated with 80k, but I would be surprised if the average reader who encounters their work comes away from it with higher regard for AI labs than they came in with — and certainly not that there is something like a brand partnership going on. Most of the content I've seen from them has (in my reading) dealt pretty frankly with the massive negative externalities that AI labs could be generating. In fact, my reading of their article "Should you work at a leading AI lab?" is that they don't broadly recommend it at all. Here's their 1-sentence summary verbatim:

 Recommendation: it's complicated

We think there are people in our audience for whom this is their highest impact option — but some of these roles might also be very harmful for some people. This means it's important to take real care figuring out whether you're in a harmful role, and, if not, whether the role is a good fit for you.

 Hopefully this is helpful. It also sounds like these questions could be rhetorical / you have suspicions about their recommendation, so it could be worth writing up the affirmative case against working at labs if you have ideas about that. I know there was a post last week about this, so that thread could be a good place for this.

Hey Tyler - I agree that this addresses the case where somebody engages deeply with 80k's content (System 2 thinking). But unfortunately that is not how most people make decisions and form opinions (System 1 thinking).

The bias here is something like: "I am an effective altruist who thinks long and hard about all the opinions I form and decisions I make, and therefore that is what everyone else does."

I think 80k needs to either deny or acknowledge that this is the reality of the situation.

Hey yanni,

I just wanted to return to this and say that I think you were directionally correct here and, in light of recent news, recommending jobs at OpenAI in particular was probably a worse mistake than I realized when I wrote my original comment.

Reading the recent discussion about this reminded me of your post, and it's good to see that 80k has updated somewhat. I still don't know quite how to feel about the recommendations they've left up in infosec and safety, but I think I'm coming around to your POV here.

Hey mate! Lovely to hear from you :)

Yeah I just think that most EAs assume that the message does most of the work in marketing when it is actually the medium: https://en.wikipedia.org/wiki/The_medium_is_the_message

I think this is a fair assumption to make if you believe people make decisions extremely rationally. 

I basically don't (i.e. I think the 80k brand is powering the OpenAI brand through a Halo Effect).

Unfortunately this is really hard to avoid!

I'm very confused about what you mean by "brand partnerships" in this context.

Organisations often partner because they have overlapping audiences but separate products. Here is a dumb example: https://www.woolworths.com.au/shop/productdetails/299503/nescafe-original-choc-mocha-tim-tam-coffee-sachets

Lol, I know what a brand partnership is in general, but I'm not aware of 80k doing anything like that.

They're not quite doing a brand partnership.

But 80k has featured various safety researchers working at AGI labs over the years - e.g. see OpenAI.

So it's more like 80k has created free promotional content and given its stamp of approval to working at AGI labs (of course, only "if you weigh up your options and think it through rationally").

I generally think people who listen to detail-focused 3-hour podcasts are the sorts of people who weigh up options.

I agree that this implies those people are more inclined to spend the time to consider options. At the very least, they like listening to other people give interesting opinions about the topic.

But we’re all just humans, interacting socially in a community. I think it’s good to stay humble about that.

If we're not, then we make ourselves unable to identify and deal with the information cascades, peer proof, and peer group pressures that tend to form in communities.

If everyone I targeted with marketing initiatives listened to an entire 3-hour podcast, my job (as a marketer) would be a lot easier.

Of 80k's entire reach, I'd be surprised if 1% had listened to an entire 3-hour podcast episode with a lab in the last 6 months.

Most people will glance at their content and see that they're "working together" (you can replace "working together" with "partnership" or whatever phrase you think is more accurate).

Most people will glance at their content and see that they're "working together"

I still don't see how that would be the conclusion people would draw.

They do it with influencers, but not with labs. With labs, the "partnering" takes the form of promoting their initiatives, producing thought leaders within the labs, and listing their jobs.

This seems like intentionally inflammatory concept creep to me.

I don't think it is fair to assume anything about my intentions without first asking. Maybe you have a question about my intentions? I'm happy to answer any.

Am I right in saying that you think my use of the word "partnering" is inaccurate when describing 80k listing jobs at labs, having lab people on their podcast to promote their initiatives and their people, etc.?

I'd be happy to use another word, as it doesn't really change the substance of my claims.

I do think it’s inaccurate to say that 80k listing a job at an organisation indicates a partnership with them. Otherwise you’d have to say that 80k is partnering with e.g. the US, UK, Singapore and EU governments and the UN.

Re the podcast, I don’t think that’s the central purpose or effect. On the podcast homepage, the only lab employee in the highlighted episode section works on information security, and that is pitched as the focus of the episode.

I am disappointed at how softball some of the podcast episodes have been, and I agree it's plausible that for some guests it would be better if they weren't interviewed, if that's the trade-off. However, I also think that overstating the case by describing it in a way that would give a mistaken impression to onlookers is unlikely to do anything to persuade 80k about it.

I'm not assuming anything - I'm stating how it appears to me (i.e. I said "this seems like X to me", not "this is X").
