
James Lester

10 karma · Joined Apr 2022

Bio

I'm an economics undergrad at the University of Cambridge, currently acting as president of the EA student group.

Comments (2)

tl;dr [edited mainly for tone]: although buying out lobbyists can work in general, the proposal doesn't use the right numbers when calculating its costs.


Hi, in your first paragraph you implicitly identify that there are two important and distinct numbers at play here:
1) the maximum amount the "sugar lobby" is willing to pay to stop sugar laws being introduced - this is the amount you call $X, and is (as you say) equal to the total discounted profits that would be lost if a tax were introduced.
2) the amount the lobby actually pays. Assuming I read you correctly, this stood at $50m in 2009.

As you say, (1) is higher than (2), but not just because of externalities and uncertainty: (1) is determined by total costs and benefits, while (2) is determined at the margin, i.e. by how much each additional dollar of lobbying reduces the probability of a tax being introduced. (This is a generally important distinction, but I'll set it aside for now.)

What we primarily care about is a third number: the minimum cost of buying out the sugar lobby. In a "certain" world (where lobbyists can knowingly guarantee that a policy does or doesn't pass by spending certain amounts), I think this corresponds to the difference between the two numbers, i.e. $(X-50m). That is, a philanthropist could go to the sugar lobby and say: we'll give you $(X-49.9999m); you save the $50m you would have spent on lobbying, and you lose $X in profits because of the tax. You therefore come out very slightly ahead by taking the deal. Note that, for a given X, the higher the spending on lobbying, the cheaper it is to pay off the lobbyist.
[In reality, the sugar lobby could play hardball and bargain up the buy-out price - hence "minimum" cost - but that's not really the point.]
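To make that arithmetic concrete, here's a tiny Python sketch. All the numbers are invented for illustration - in particular, X (the lobby's total discounted lost profits under the tax) is unknown in practice:

```python
def min_buyout_price(lost_profits_X: float, lobbying_spend: float) -> float:
    """Smallest payment that leaves the lobby no worse off accepting,
    in the "certain" world described above.

    If it accepts: it receives the payment, saves its lobbying spend,
    and loses X in profits once the tax passes.
    Net change = payment + lobbying_spend - lost_profits_X >= 0
    => payment >= lost_profits_X - lobbying_spend.
    """
    return lost_profits_X - lobbying_spend

# Hypothetical: suppose X = $500m while lobbying spend is the observed $50m.
X = 500e6
spend = 50e6
print(min_buyout_price(X, spend))  # 450000000.0 - far above the $50m headline
```

Note also that the function is decreasing in `lobbying_spend`, which is the "the more they spend on lobbying, the cheaper the buy-out" point above.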

The point is we don't know what $X is - you'd probably need to look at total sugar consumption multiplied by the proposed per-unit tax, minus the deadweight loss due to falling consumption. This in turn requires you to know the shape of the beverage industry's marginal cost curve.
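As a sketch of that back-of-envelope formula (again with invented numbers, and treating the deadweight loss as the standard triangle, 0.5 × tax × drop in quantity):

```python
def x_estimate(q0: float, tax_per_unit: float, delta_q: float) -> float:
    """Rough estimate of X: tax applied to pre-tax consumption q0,
    minus the deadweight-loss triangle from the fall in quantity.

    delta_q (how much consumption falls) is the hard part - it depends
    on the shape of the industry's marginal cost curve, which we don't know.
    """
    return q0 * tax_per_unit - 0.5 * tax_per_unit * delta_q

# Hypothetical: 10bn units consumed, a $0.02/unit tax, consumption falls by 1bn units.
print(x_estimate(10e9, 0.02, 1e9))  # 190000000.0, i.e. ~$190m
```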

I think you've conflated all three numbers at various points: having seen that the lobby spends $50m, you've taken that as the cost to a philanthropist of getting a tax introduced.

I haven't checked the rest of your numbers, or thought about practical/political challenges, but I think the main stumbling block is the fact that it'll cost way more than $50m to buy out the sugar lobby.

The alternative that neither of us considers is the cost of counter-lobbying lawmakers to ignore the sugar lobby and introduce the tax anyway - that's a whole different question and may well be cost-effective.

If I'm misrepresenting or misunderstanding what you're saying, I do apologise! I also wouldn't be surprised if a political economist spotted some errors in my analysis, but I'd still expect the $50m figure you quote to be misleading and a significant underestimate.

Interesting idea – thanks for sharing and would be cool to see some further development.

I think there’s a crucial disanalogy between your proposal and the carbon emissions case, assuming your primary concern is x-risk. Pigouvian carbon taxes make sense because you have a huge number of emitters whose negative externality is each roughly proportional to the amount they emit: 1,000 motorists collectively cause 1,000 times the marginal harm, and thus collectively pay 1,000 times as much, as 1 motorist. However, the first company to train GPT-n imposes a significant x-risk externality on the world by advancing the capabilities frontier, while each subsequent company that develops a similar or less powerful model imposes a somewhat lower externality. Once GPT-5 comes out, I don’t think charging $1bn (or whatever) to train models as powerful as, say, ChatGPT affects x-risk either way. Would be interested to hear whether you have a significantly different perspective, or whether I've misunderstood your proposal.

I’m wondering whether it makes more sense to base a tax on some kind of dynamic measure of the “state of the art” – e.g. any new model with at least 30% of the parameter count of some SOTA model (currently, say, GPT-4) must pay a levy proportional to how far over the threshold it is (these details are purely illustrative).
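To spell out the illustrative rule above - the 30% threshold and the rate are made up, as is the choice of parameter count as the capability measure:

```python
def levy(new_params: float, sota_params: float,
         threshold: float = 0.30, rate_usd: float = 1e9) -> float:
    """Hypothetical levy on a new model, proportional to how far its
    parameter count exceeds a fraction (threshold) of the current SOTA.

    Models below the threshold pay nothing; the SOTA benchmark is
    dynamic, so the same model gets cheaper to train as the frontier moves.
    """
    ratio = new_params / sota_params
    if ratio < threshold:
        return 0.0
    return rate_usd * (ratio - threshold)

# Hypothetical: against a 1tn-parameter SOTA model,
# a 200bn-parameter model pays nothing; a 500bn-parameter model pays ~$200m.
print(levy(200e9, 1e12))  # 0.0
print(levy(500e9, 1e12))  # ~200000000.0
```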

Moreover, especially if you have shorter timelines, the number of firms at any given time with a realistic chance of winning the AGI race is probably fewer than five. Even if you widen this to "somehow meaningfully advance broad AI capabilities", I don't think it's more than 20(?).
A Pigouvian tax is very appealing when you have billions of decentralised actors each pursuing their own self-interest with negative externalities - in most reasonable models, carbon taxation gets you a (much) more socially efficient outcome than direct regulation. For AGI, though, I honestly think it’s far more feasible and better targeted to introduce international legislation along the lines of “if you want to train a model more powerful than [some measure of SOTA], you need formal permission from this international body” than to tax it. Apart from the revenue argument, I don’t think you’ve made the case for why taxes are better.

That being said, as a redistributive proposal your tax makes a lot of sense - although a lot depends on whether economic impact scales roughly linearly with model size. My guess, again, is that one pioneering firm advancing capabilities a little has a more significant economic effect than 100 laggards building more GPT-3s, certainly in terms of the profits its developers would expect, because of returns to scale.
Also, my whole argument relies on the intuition that the cost to society (in terms of x-risk) of a given model is primarily a function of its size relative to the state of the art (and hence its propensity to advance capabilities), rather than its absolute size, at least until AGI arrives. Maybe on further thought I’d change my mind on this.