ben.smith

Downtown, Eugene, OR, USA · Joined Jul 2019

Comments (47)
Are there similar surveys being run on (1) AI researchers in general, (2) technology experts in general, and (3) voters?

The first group is relevant because they are the ones currently building AGI; the second group is relevant because they are the ones who will implement it; the third group is relevant because they set up incentives for policymakers.

I am a social science researcher, and if there is no work being done on this, I would consider researching it myself.

To add to my comments yesterday: while GDP growth doesn't seem to drive subjective well-being (SWB) in Western countries, recessions certainly harm it. You acknowledged unemployment could be an issue. I don't know whether shifting to a four-day workweek is a good way to generate extra jobs, because:

1. It's authoritarian to stop people from working five days a week if they want to.
2. People will suffer a real loss of 20% of their income, and they may suffer in SWB terms as a result.
3. Many jobs are not very fungible: four workers doing five days a week can't simply be replaced by five workers doing four days a week without a lot of retraining.
4. Slowing down parts of the economy unconnected to carbon emissions will lower output unnecessarily. For example, if you cut teacher hours by 20%, it isn't clear how that meaningfully reduces carbon emissions, because teaching isn't a high-carbon occupation, but you will suddenly have to hire 25% more teachers (1/0.8 = 1.25) if you want to keep total teaching hours the same.
5. While GDP growth doesn't raise SWB, GDP declines could still lower it. Humans are loss averse, and tend to want to avoid losses more than they want to approach equivalent gains, by a ratio of about 2:1.

It does seem like good public policy partially protects against the SWB impacts of unemployment and recessions (https://link.springer.com/article/10.1007/s42413-019-00022-0). However, (1) that doesn't mean SWB is entirely protected, and (2) we can't snap our fingers to get better public policy; that will take work.

A discussion about GDP growth and climate change (1) doesn't really add anything to the known economic strategies for ending emissions (sinking-lid carbon caps), and (2) doesn't engage with the key pressures preventing us from implementing them. So I agree with you that we shouldn't be too concerned about the impact of carbon mitigation on GDP, but I also think arguments for or against growth aren't really engaging with the critical solutions or challenges.

On the solutions side: at the level of economic design, a sinking-lid carbon cap makes it very simple. We set a carbon budget from now to 2050 based on the level of temperature increase and other climate impacts we deem tolerable. Then anyone emitting carbon has to buy emissions credits out of the limited budget available. I'm not saying governments should leave market forces to sort it all out entirely; there's a strong argument for helping them along with, for example, tailpipe emissions regulations. But economically that would be sufficient. Under this approach you don't have to worry about whether you get growth, degrowth, or something in between: just set your target and the economy will adjust to achieve it.
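To make the mechanics concrete, here's a minimal sketch in Python of how a sinking-lid cap turns a cumulative carbon budget into a declining annual supply of credits. The budget, dates, and linear decline schedule are all illustrative assumptions, not real policy figures:

```python
# Minimal sketch of a sinking-lid carbon cap. All numbers are
# illustrative assumptions, not real policy figures.

START_YEAR = 2025
END_YEAR = 2050          # year annual allowances reach (roughly) zero
TOTAL_BUDGET_GT = 250.0  # hypothetical cumulative budget, GtCO2

def annual_allowances(start=START_YEAR, end=END_YEAR, budget=TOTAL_BUDGET_GT):
    """Spread a cumulative budget over a linearly declining annual schedule.

    Allowances fall linearly from a0 in the first year; summing the
    discrete schedule gives a0 * (n + 1) / 2, so a0 = 2 * budget / (n + 1).
    """
    n = end - start
    a0 = 2 * budget / (n + 1)
    return {start + i: a0 * (1 - i / n) for i in range(n)}

if __name__ == "__main__":
    schedule = annual_allowances()
    print(f"2025 allowance: {schedule[2025]:.1f} GtCO2")            # ~19.2
    print(f"2040 allowance: {schedule[2040]:.1f} GtCO2")            # ~7.7
    print(f"Total issued:   {sum(schedule.values()):.1f} GtCO2")    # 250.0
```

The point is that the schedule, not a growth target, is the policy lever: whatever the economy does, total credits issued never exceed the budget.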

I suppose I must concede that some people, in trying to decide exactly what level of emissions or temperature increase we should tolerate, treat it as a GDP optimisation problem: what level of temperature increase maximises GDP, given the costs of avoiding climate change weighed against the costs of experiencing it?
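Stated formally, as a minimal sketch (the cost functions A and D are schematic assumptions, not estimates from any particular integrated assessment model):

```latex
% Schematic statement of the GDP-optimisation framing (assumes amsmath).
% A(T): GDP cost of abatement needed to hold warming to T (decreasing in T)
% D(T): GDP cost of damages from experiencing warming of T (increasing in T)
\[
  T^{*} \;=\; \operatorname*{arg\,min}_{T} \; \bigl[\, A(T) + D(T) \,\bigr]
\]
% i.e. the tolerated temperature increase that minimises the combined
% cost of avoiding climate change and of experiencing it.
```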

But there are also key political pressures that keep us from implementing carbon budgets. Oil prices rose sharply this year, which some experts say has driven high inflation across the economy, and which consumers have experienced directly as higher costs to fuel combustion-engine vehicles. Political leaders of all stripes, including Biden, who ran partly on a platform of lowering emissions, have responded by trying to get oil prices down. That's the exact opposite of what you do to avoid climate change, but it may be the only choice democratically elected leaders have if they want to respond to public concerns and be re-elected. And voters aren't primarily concerned with maintaining topline growth; they want to know: will I have a job, will I keep my job, can I afford to drive my car? GDP is a contributor to all of those things, and SWB measures definitely decline during recessions. But a debate over GDP targets and subjective well-being won't help you understand how to win voters' support for a tractable climate change response.

So I am left wondering about the value of framing the question in terms of GDP at all, except perhaps for setting an economically optimal temperature-increase target for carbon credit schemes. Maybe that puts me in partial agreement with OP? But I disagree that the inevitable conclusion is that we need to "shrink material throughput". What needs to be shrunk is greenhouse gas emissions, and targeting growth in a positive or negative sense from the outset seems misplaced.

Hey there

I had a go at something very similar here:

https://forum.effectivealtruism.org/posts/9XgLq4eQHMWybDsrv/how-to-dissolve-moral-cluelessness-about-donating-mosquito-1

I tried a kind of Drake-equation analysis and describe how it worked out in the post.
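(In case it's useful, this is the basic shape of a Drake-equation-style analysis: factor a hard-to-estimate quantity into a chain of conditional probabilities you can bound individually. A minimal sketch; the factor names and values below are made up for illustration and are not the figures from the linked post:)

```python
# A Drake-equation-style decomposition: estimate a hard-to-pin-down
# probability as a product of factors you can bound individually.
# The factor names and values are hypothetical placeholders, not
# figures from the linked post.

factors = {
    "p_pathway_exists":  0.5,  # chance the causal pathway operates at all
    "p_effect_material": 0.2,  # chance the effect is big enough to matter
    "p_net_negative":    0.3,  # chance the net effect is negative
}

def chained_probability(fs: dict) -> float:
    """Multiply the conditional factors together."""
    result = 1.0
    for value in fs.values():
        result *= value
    return result

print(f"Combined probability: {chained_probability(factors):.3f}")  # 0.030
```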

The post didn't take off in a huge way, but I still believe in the idea. It just needs someone to present it from the right angle!

Good post!

I think in an ideal world you could confidently say that "there is value in trying to interact with AI researchers outside of the AI Alignment bubble", not only so you can figure out cruxes and better convince them, but because you might learn that they are right and we are wrong. I don't know whether you believe that, but it seems not only true but also to follow directly from our movement's epistemic ideals about being open-minded and following evidence and reason where they lead.

If you felt you would get pushback for suggesting there's an outside view on which sceptics of the AGI alignment cause area might be right, I hope you are wrong; but if many other people feel that way, it indicates some kind of epistemic problem in our movement.

Any time someone feels there's something critical they can't say, even when speaking in good faith in order to best use evidence and reason to do the most good, that's a potential epistemic failure mode we need to guard against.

A key characteristic of a cult is a single leader who accrues a large amount of trust and is held by themselves and others to be singularly insightful. The LW space gets like that sometimes, less so EA, but they are adjacent communities. 

Recently, Eliezer wrote:

The ability to do new basic work noticing and fixing those flaws is the same ability as the ability to write this document before I published it, which nobody apparently did, despite my having had other things to do than write this up for the last five years or so.  Some of that silence may, possibly, optimistically, be due to nobody else in this field having the ability to write things comprehensibly - such that somebody out there had the knowledge to write all of this themselves, if they could only have written it up, but they couldn't write, so didn't try.  I'm not particularly hopeful of this turning out to be true in real life, but I suppose it's one possible place for a "positive model violation" (miracle).  The fact that, twenty-one years into my entering this death game, seven years into other EAs noticing the death game, and two years into even normies starting to notice the death game, it is still Eliezer Yudkowsky writing up this list, says that humanity still has only one gamepiece that can do that.  I knew I did not actually have the physical stamina to be a star researcher, I tried really really hard to replace myself before my health deteriorated further, and yet here I am writing this.  That's not what surviving worlds look like.

I don't necessarily disagree with this analysis; in fact, I have made similar observations myself. But the social dynamic of it all pattern-matches to something cult-like, and I think that's a warning sign we should be wary of as we move forward. In fact, I think we should probably have an ongoing community health initiative targeted specifically at monitoring for signs of groupthink and other forms of epistemic failure in the movement.

Ord (2020) listed climate change as an x-risk, though on reflection he may have given 1/1000 as an absolute upper bound and said he thought the actual risk was lower than that.

I have a hard time understanding stories about how population growth in Africa could lead to higher existential risk that aren't mediated through climate change or resource shortage (which seems closely linked to climate change, in that many resource limits boil down to carbon emissions), particularly in a context where global population seems likely to peak and then decline sometime in the second half of the 21st century. Most of the pathways I can imagine would point to lower existential risk. If the starting point is that bednet distribution leads to lower existential risk, there isn't really a dilemma, so that case seemed less interesting to analyse. That's probably one reason I saw more value in starting my analysis with the climate change angle.

However, there are probably causal possibilities I've missed, and I'd be interested to hear what you think they might be. I do think someone should examine them more closely and try to put reasonable probabilistic bounds around them.

I certainly don't think the analysis above is complete. As I said in the post, the intent was to demonstrate how we could "dissolve" or reduce some moral cluelessness to ordinary probabilistic uncertainty by using careful reason and evidence to evaluate possible causal pathways. The analysis above is a start, but we'd definitely need a more extended analysis before acting on it. Then we can take an expected-value approach to work out the likely benefit of our actions.

You're right that I didn't discuss it much. Perhaps I should have.

My mental model is that world per capita net GHG emissions will begin to decline at some point before 2050 and reach net zero some time between 2050 and 2100. The main relevance of population here is that higher population would increase emissions. But once the world reaches net zero per capita emissions, additional people might not produce any more emissions.

I think it's quite plausible that population decline due to economic growth induced in 2022 won't show up for a couple of generations, potentially until after we reach net zero, so I didn't include it in the model. If I had, the result would have come out more in favour of donating bednets. A sketch of the model follows.
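For what it's worth, here's a minimal sketch of that mental model. The dates, per-capita figure, and linear decline are illustrative assumptions, not estimates from the post; the point is just that a marginal person's cumulative emissions are capped by the net-zero date:

```python
# Sketch of the mental model: an extra person adds emissions only in
# the years before the world reaches net zero. All parameters are
# illustrative assumptions, not estimates from the post.

DECLINE_START = 2040      # per-capita emissions assumed flat until here
NET_ZERO_YEAR = 2075      # assumed net-zero date (between 2050 and 2100)
CURRENT_PER_CAPITA = 4.5  # tCO2e/person/year, hypothetical world average

def per_capita_emissions(year: int) -> float:
    """Flat until DECLINE_START, then linear decline to zero at NET_ZERO_YEAR."""
    if year < DECLINE_START:
        return CURRENT_PER_CAPITA
    if year >= NET_ZERO_YEAR:
        return 0.0
    frac_remaining = (NET_ZERO_YEAR - year) / (NET_ZERO_YEAR - DECLINE_START)
    return CURRENT_PER_CAPITA * frac_remaining

def marginal_lifetime_emissions(birth_year: int, lifespan: int = 70) -> float:
    """Cumulative emissions attributable to one extra person."""
    return sum(per_capita_emissions(y)
               for y in range(birth_year, birth_year + lifespan))

# A person born in 2025 emits during 2025-2074; one born after
# net zero adds nothing.
print(marginal_lifetime_emissions(2025))  # ~148.5 tCO2e
print(marginal_lifetime_emissions(2080))  # 0.0
```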

Good post, well done! The cost-effectiveness analysis was done well. There are a couple of things I'd like to see that could further strengthen the argument. First, I don't have a good handle on the cost per DALY of well-known GiveWell interventions like AMF, or the equivalent for direct giving, and it would be good to see that comparison (comparison on other health metrics might also be helpful).

Second, if the sources strictly measure the medical and health outcomes of reduced violence, the true magnitude of the benefit could be quite a bit larger, because there are plausibly additional well-being benefits not captured by a purely medical analysis.

You mentioned economic benefits in other parts of the report, so it would be helpful to capture those in the analysis of specific cause areas too.

That said, a cost per DALY of $52-78 sounds reasonably good, at least? The comparison I have in mind is sketched below.
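(A minimal sketch of that comparison; the benchmark figure is a placeholder I've made up for illustration, not a real GiveWell estimate:)

```python
# Sketch of the comparison I'd like to see. The benchmark value is a
# hypothetical placeholder, NOT a real GiveWell estimate.

post_cost_per_daly = (52, 78)    # range reported in the post, USD per DALY
benchmark_cost_per_daly = 100.0  # hypothetical benchmark intervention

for cost in post_cost_per_daly:
    ratio = benchmark_cost_per_daly / cost
    print(f"${cost}/DALY is {ratio:.1f}x the cost-effectiveness of a "
          f"${benchmark_cost_per_daly:.0f}/DALY benchmark")
```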
