Tyler Johnston

Executive Director @ The Midas Project
1389 karma · Working (0-5 years) · Tulsa, OK, USA

Bio


Book a 1:1 with me: https://cal.com/tylerjohnston/book

Share anonymous feedback with me: https://www.admonymous.co/tylerjohnston

Comments (79)

Ooh interesting. Thanks for pointing this out, I'm revising my ballot now.

(Edited at 19:35 UTC-5 as I misunderstood how the voting system works)

My top 10 right now look something like:

1. The Midas Project
2. EA Animal Welfare Fund
3. Rethink Priorities
4. MATS Research
5. Shrimp Welfare Project
6. Apart Research
7. Legal Impact for Chickens
8. PauseAI
9. Wild Animal Initiative
10. High Impact Professionals

I ranked my organization, The Midas Project, first on my ballot. I don't think we have a stronger track record than many of the organizations in this election (and I expect the winners will be a few familiar top contenders like Rethink Priorities, who certainly deserve to be there), but I do think the election will undervalue our project due to general information asymmetries and most of our value being speculative/heavy-tailed. This seems in line with the tactical voting suggestion, but it does feel a bit icky/full of hubris.

Also, in making this list, I realized that I favored large orgs whose work I'm familiar with, and mostly skipped over small orgs that I know little about (including ones that made posts for marginal funding week that I just haven't read). This was a funny feeling because (as mentioned) I run a small org that I expect many people don't know about and will skip over.

One way to counteract this would be, in making your selection, to choose 1-2 orgs you've never heard of at random, do a deep dive on them, and place them somewhere in your rankings (even at the bottom if you aren't excited about them). With enough people doing this, there should be enough coverage of small orgs for the results of the election to be a bit more informative, at least in terms of how smaller orgs compare to each other.

But the EATS Act would basically nullify the value of any popular incrementalist state laws, no? That's what has me worried, I think. Otherwise I'd be excited about seeing Prop 12-like citizens' initiatives across the country.

Good point — in retrospect that was hyperbole on my part, and I should have just said "signals."

I suppose I see banning any industry, especially for politicians who tend to favor free markets, as essentially trading off GDP for whatever cultural/electoral benefits are gained by the ban. But you're right that the cost to the local economy is virtually zero, at least right now. I suppose that will change if cultivated meat can one day be produced affordably at scale.

This is interesting and exactly the sort of consideration I was worried my anecdote-based feelings could miss. A bit of googling suggests there is some evidence that increased spending correlates with significant changes in ballot measure outcomes (I've heard it's more uncertain in electoral politics).

If it's true that the ballot initiative failures were just a funding issue rather than a broader reflection of the electorate's unwillingness to support these measures, I think that'd be a big deal, and maybe an argument in favor of investing more in this work.

Also, side note — I'm really surprised that there was such weak opposition to Prop 12, especially given the costs to industry and the fight it's put up since then. It makes me wonder if Ballotpedia missed anything here.

Agreed! I tried to mention this in the last paragraph but probably should have emphasized it elsewhere. Thanks for pointing it out.

Thank you for writing this! Was really interesting to read. I'd love to see more posts of this nature. And it seems like you've done a lot for the world — thank you.

I have a couple questions, if you don't mind:

You write

I still generally suspect corporate campaigns are no longer particularly effective, especially those run by the largest groups (e.g. Mercy For Animals or The Humane League), and don’t think these meet the bar for being an EA giving area anymore, and haven’t in the US since around 2017, and outside the US since around 2021.

I would love to hear your reasoning (pessimism about fulfillment? WAW looking better?) and what sort of evidence has convinced you. I think this is really important, and I haven't seen an argument for this publicly anywhere. Ditto about your skepticism of the organizations leading this work.

We can make meaningful progress on abolishing factory farming or improving farmed animal welfare by 2050

Did you mean to change one of the years in the two statements of this form?

Most people interested in EA should earn to give

I'd love to hear more about this. How much value do you think e.g. the median EA doing direct work is creating? Or, put another way, how significant an annual donation would exceed the value of a talented EA doing direct work instead?

Thank you for writing this! It's probably the most clear and rigorous way I've seen these arguments presented, and I think a lot of the specific claims here are true and important to notice.

That being said, I want to offer some counterarguments, both for their own sake and to prompt discussion in case I'm missing something. I should probably add the disclaimer that I'm currently working at an organization advocating for stronger self-governance among AI companies, so I may have some pre-existing biases toward defending this strategy. But it also makes this question very relevant to me and I hope to learn something here.

Addressing particular sections:

Only Profit-Maximizers Stay At The Frontier

This section is interesting and reminds me of some metaphors I've heard comparing the mechanism of free markets to Darwinism... i.e. you have to profit-maximize, and if you don't, someone else will and they'll take your place. It's survival of the fittest, like it or not. Take this naïve metaphor seriously enough and you would expect most market ecosystems to be "red in tooth and claw," with bare-minimum wages, rampant corner-cutting, nothing remotely resembling CSR/ESG, etc.

One problem is: I'm not sure how true this is to begin with. Plenty of large companies act in non-profit-maximizing ways simply out of human error, or passivity, or because the market isn't perfectly competitive (maybe they and their nearest rivals are benefitting from entrenchment and economies of scale that mean they no longer have to), or perhaps most importantly, because they are all responding to non-financial incentives (such as the personal values of the people at the company) that their competitors are equally subject to.

But more convincingly, I think social good / avoiding dangerous accidents really are just more aligned with profit incentives than the metaphor would naively suggest. I know your piece acknowledges this, but you also write it off as having limitations, especially under race conditions aiming toward a particular capabilities threshold. 

But that doesn't totally follow to me — under such conditions, while you might be more open to high-variance, high-risk strategies to reach that threshold, you might also be more averse to those strategies since the costs (direct or reputational or otherwise) imposed by accidents before that threshold is reached become so much more salient. In the case of AI, the costs of a major misuse incident from an AI product (threatening investment/employee retention/regulatory scrutiny/etc.) might outweigh the benefits of moving quickly or without regard to safety — even when racing to a critical threshold. A lot of this probably depends on how far off you think such a capability threshold is, and where relative to the frontier you currently are. This is all to say that race dynamics might make high-variance high-risk strategies more attractive, but they also might make them less attractive, and the devil is probably in the details. I haven't heard a good argument for how the AI case shakes out (and I've been thinking about it for a while).

Also, correct me if I'm wrong, but one thing the worldview you write about here would suggest is that we shouldn't trust companies to fulfill their commitments to carbon neutrality, or that if they do, they will soon no longer be at the forefront of their industry — doing so is expensive, nobody is requiring it of them (at least not on the timeline they are committing to), the commitment is easy to abandon, and even if they do it, someone who chooses not to will outcompete them and take their place at the forefront of the market. But I just don't really expect that to happen. I think in 2030 there's a good chance Apple's supply chain will be carbon-neutral, and that they'll still be in the lead for consumer electronics (either because the reputational benefits of the choice, and the downstream effects it has on revenue and employee retention and whatnot, made it the profit-maximizing thing to do, and/or because they were sufficiently large/entrenched that they could make choices like that, out of non-financial personal/corporate values, without damaging their competitive position, even when doing so isn't maximally efficient).

Early in the piece, you write: 

A profit-driven tech corporation seems exceedingly unlikely to hinge astronomical capex on an AI corporation that does not give off the unmistakable impression of pursuing maximal profits.

But we can already see this isn't true, given that OpenAI has a profit cap, their deal with Microsoft had a built-in expiration, and Anthropic is a B-corp. Even if you don't trust that some of these measures will be adhered to (e.g. I believe the details of OpenAI's profit cap quietly changed over time), they certainly do not give off the unmistakable impression of maximal profit-seeking. But I think these facts exist because either (1) many of the people at these companies are thinking about social impact in addition to profit, (2) social responsibility is an important intermediate step to being profitable, or (3) the companies are so entrenched that there simply are no alternative, more profit-maximizing firms who can compete, i.e. they have the headroom to make concessions like this, much as Apple can make climate commitments. I'm not sure what the balance between these three explanations is, but #1 and #3 challenge the strong view that only seemingly hard-nosed profit-maximizers are going to win here, and #2 challenges the view that profit-maximizing is mutually exclusive with long-term safety efforts.

All this considered, my take here is instead something like "We should expect frontier AI companies to generally act in profit-maximizing ways, but we shouldn't expect them to always be perfectly profit-maximizing across all dimensions, nor should we expect that profit-maximizing is always opposed to safety."

Constraints from Corporate Structure Are Dangerously Ineffective

I don't have a major counterargument here, aside from noting that well-documented and legally recognized corporate structures can often be pretty effective, thanks in part to judges/regulators getting input on when and how they can be changed, and while I'm no expert, my understanding is that there are ways to optimize for this.

But your idea that companies are exchangeable shells for what really matters under the hood — compute, data, algorithms, employees — seems very true and very underrated to me. I think of this as something like "realpolitik" for AI safety. What really matters, above ideology and figureheads and voluntary commitments, is where the actual power lies (which is also where the actual bottlenecks for developing AI are) and where that power wants to go.

Hope In RSPs Is Misguided

The claim that "RSPs on their own can and will easily be discarded once they become inconvenient" seems far too strong to me — and again, if it were true, we should expect to see this with all costly voluntary safety/CSR measures that are made in other industries (which often isn't the case).

A few things that may make non-binding voluntary commitments like RSPs hard to discard:

  • It's really hard to abandon them without looking hypocritical and untrustworthy (to the public, to regulators, to employees, to corporate partners, etc.)
  • In large bureaucracies, lock-in effects make it easy to create new teams/procedures/practices/cultures and much harder to change them.
  • Abandoning these commitments can open companies up to liability for deceptive advertising, misleading shareholders, or even fraud if the safety practices were used to promote e.g. an AI product, convince investors that they are a good company to support, convince regulators or the public that they can be trusted, etc. I'm not an expert on this by any means, nor do I have specific examples to point to right now, so take this one with a grain of salt.

There's also the fact that RSPs aren't strictly an invention of the AI labs. Plenty of independent experts have been involved in developing and advocating for either RSPs or risk evaluation procedures that look like them.

Here, I think a more defensible claim would be: "The fact that RSPs may be easily discarded when inconvenient should be a point in favor of binding solutions like legislation, or at least indicate that they should be considered one of many potentially fallible safeguards in a defense-in-depth strategy."

An optimistic view of RSPs might be that they are a good way to hold AI corporations accountable - that public and political attention would be able to somehow sanction labs once they did diverge from their RSPs. Not only is this a fairly convoluted mechanism of efficacy, it also seems empirically shaky: Meta is a leading AI corporation with industry-topping amounts of compute and talent and does not publish RSPs. This seems to have garnered neither impactful public and political scrutiny nor hurt the Meta AI business.

Minor factual point: it's probably worth noting that Meta, along with most other leading AI labs, has now committed to publishing an RSP. Time will tell what their policy ends up looking like.

It's true that the presence of, and quality of, RSPs at individual companies doesn't seem to have translated into any public/political scrutiny yet. I'm optimistic this can change (it's what I'm working on), or perhaps even will change by default once models reach a new level of capabilities that makes catastrophic risks from AI an ever-more-salient issue among the public.

The downside of choosing an RSP-based legislative process should be obvious - it limits, or at least frames, the option space to the concepts and mechanisms provided by the AI corporations themselves. But this might be a harmful limitation: As we have argued above, these companies are incentivized to mainly provide mechanisms they might be able to evade, that might fit their idiosyncratic technical advantages, that might strengthen their market position, etc. RSP codification hence seems like a worse way to safe AI legislation than standard regulatory and legislative processes.

This is a question: my understanding is that the RSP model was specifically inspired by regulatory pathways from other industries, where voluntary measures like this got codified into what is now seen (in retrospect) as sensible policy. Is this true? I can't remember where I heard it, and can't find mention of it now, but if so, it seems like those past cases might be informative in terms of how successful we can expect the RSP codification strategy to be today.

That actually brings me to one last meta point that I want to make, which is that I am tempted to think that we are just in a weird situation where there are psychological facts about the people at leading profit-driven AI labs that make the heuristic of profit maximization a poor predictor of their behavior, and a lot of this comes down to genuine, non-financial concern about long-term safety. 

Earlier I mentioned how even in a competitive market, you might see multiple corporations collectively acting in non-profit-maximizing ways due to non-financial incentives acting on the decision-makers at each of those companies. Companies are full of humans who make choices for non-financial reasons, like wanting to feel like a good person, wanting to have a peaceful home life where their loved ones accept and admire them, and genuinely wanting to fix problems in the world. I think the current psychological profile of AI lab leaders (and, indeed, of the AI lab employees who hold the "real power" under the hood) is surprisingly biased toward genuine concern about the risks of AI. Many of them correctly recognized, way before anyone else, how important this technology would be.

Sorry for the long comment. I do think AI labs need fierce scrutiny and binding constraints, and their incentives are largely not pointing in the right place and might bias them toward putting profit over safety — again, this is my main focus right now — but I'm also not ready to totally write off their ability to adopt genuinely valuable and productive voluntary measures to reduce AI risk.

Hey yanni,

I just wanted to return to this and say that I think you were directionally correct here and, in light of recent news, recommending jobs at OpenAI in particular was probably a worse mistake than I realized when I wrote my original comment.

Reading the recent discussion about this reminded me of your post, and it's good to see that 80k has updated somewhat. I still don't know quite how to feel about the recommendations they've left up in infosec and safety, but I think I'm coming around to your POV here.
