Interim head of the CEA Online Team, which runs this Forum.
A bit about me, to help you get to know me: Prior to CEA, I was a data engineer at an aerospace startup. I got into EA through reading the entire archive of Slate Star Codex in 2015. I found EA naturally compelling, and donated to AMF, then GFI, before settling on my current cause prioritization of meta-EA, with AI x-risk as my object-level preference. I try to have a wholehearted approach to morality, rather than thinking of it as an obligation or opportunity. You can see my LessWrong profile here.
I love this Forum a bunch. I've been working on it for 5 years as of this writing, and founded the EA Forum 2.0. (Remember 1.0?) I have an intellectual belief in it as an impactful project, but also a deep love for it as an open platform where anyone can come participate in the project of effective altruism. We're open 24/7, anywhere there is an internet connection.
In my personal life, I hang out in the Boston EA and Gaymer communities, enjoy houseplants, table tennis, and playing coop games with my partner, who has more karma than me.
A tax, not a ban
In which JP tries to learn about AI governance by writing up a take. Take tl;dr: Overhang concerns and desire to avoid catchup effects seem super real. But that need not imply speeding ahead towards our doom. Why not try to slow everything down uniformly? — Please tell me why I’m wrong.
After the FLI letter, the debate in EA has coalesced into “6 month pause” vs “shut it all down” vs some complicated shrug. I’m broadly sympathetic to concerns (1, 2) that a moratorium might make the hardware overhang worse, or make competitive dynamics worse.
To put it provocatively (at least around these parts), it seems like there’s something to the OpenAI “Planning for AGI and beyond” justification for their behavior. I think that sudden discontinuous jumps are bad.
Ok, but it remains true that OpenAI has burned a whole bunch of timelines, and that’s bad. It seems to me that speeding up algorithmic improvements is incredibly dubious. And the large economic incentives they’ve created for AI chips seem really bad.[1]
So, how do we balance these things? Proposal: “we” reduce the economic incentive to speed ahead with AI. If we’re successful, we could slow down both hardware and algorithmic progress, for OpenAI and all its competitors.
How would this work? This could be the weak point of my analysis, but you could put a tax on “AI products”. This would be terrible and distortionary, but it would probably be effective at cutting into the companies most centrally pursuing AGI. You could also put a tax on GPUs.
Note: one way to think about this is as a Pigouvian tax.
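(For anyone who hasn’t met the term: the textbook Pigouvian prescription, hugely simplified, is to set the tax equal to the marginal external harm of the activity, so that the actor’s private incentive lines up with the social one. The “harm” term below is my gloss; nobody has a credible estimate of it for AI.)

$$\text{tax per unit} \;=\; \text{marginal external harm per unit} \quad\Rightarrow\quad \text{private cost} + \text{tax} \;=\; \text{social cost}$$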
Counterarguments
China. Yep. This is a counterargument against a moratorium as well. I think I’m willing to bite this bullet.
Something like: we really need the cooperation of chip companies. If we tax their chips they’ll sell to China immediately. This is the major reason why I’m more optimistic about taxing “AI products” than GPUs, even though GPUs would be easier to tax.
IDK, JP, a moratorium would be easier to coordinate around, is more likely to actually happen, and isn’t that bad. We should put our wood behind that arrow. Seems plausible, but I really don’t think this is a long-term solution, and I tentatively think the tax thing is.
***
Again, I will be very appreciative of different takes, etc., here.
See also: Notes on Potential Future AI Tax Policy. I wrote this post before reading that one. Sadly that post spends too much time in the weeds arguing against a specific implementation which Zvi apparently really doesn’t like, and not enough time discussing overall dynamics, IMO.
I will personally venmo[1] anyone $10 per good link they put in to supply background reading for those examples.
Please try to put effort into your links; I reserve the right to read your link and capriciously decide that I don't like it enough to pay out. Offer valid for one link per historical example, with more available at my option.
[1]: Or TransferWise, I guess.
Here's an interesting tweet from a thread by Ajeya Cotra:
But I'm not aware of anyone who successfully took complex deliberate *non-obvious* action many years or decades ahead of time on the basis of speculation about how some technology would change the future.
I'm curious to see this particular line get more discussion, and would be interested in takes here.
Yeah. Note that, in my culture, people can write fanfiction for media that they're not the biggest fans of. Like, they might see a core of a thing they like, and hate a lot of the rest, and still write a fic because they really want to explore that thing they liked more. Or they might really like it, and be adapting it because they like it so much they want to play with it!
I would be interested in a good explainer here! I just wrote a post that probably could have done with me reflecting on what I recommend doing.
I disagree with this take, and fortunately now have a post to link to. I think steelmanning is a fine response to this situation.
I think your (3) is the one I spend the most time digging into in the post, and I feel quite confident it is not a good reason not to steelman.
Re: 1&2, I agree I'm, like, not that bullish on getting a bunch of value from this book, but it looks like a bunch of people have already gotten value from the theme of excessive focus on measurability. And generally I want to see more constructive engagement with criticism, and don't think "eh, low prior on it working" is a good critique of a good mental move.
Thanks for the suggestion!
I have also wanted this. My suggested bad workaround is to subscribe to posts by the author; often, for the duration of a sequence, the author is only posting to that sequence.