ben.smith

Friends or relatives in Oregon? Please let us know! Updates & actions to help Carrick win

I volunteered over the weekend door-knocking for the campaign. The team is hardworking and largely made up of EAs. If anyone can help on the phone banks before the day ends, I expect it would have a substantial EV impact.

Announcing the Future Fund

I put in an application on 21st March, but haven't yet heard back. Are some applications still being processed, or should I assume this is either a negative response or that I must have made some mistake in submitting?

We're announcing a $100,000 blog prize

Perhaps an enterprising blogger could start an interview-format blog, where they interview EA authors of those "internal discussions and private documents" and ask them to elucidate their ideas in a way suitable for a general audience. I think that would make for a pretty neat and high-value blog!

How to become an AI safety researcher

Interesting post, Peter. I really appreciate it and got a lot of useful ideas from it. While trying to assign appropriate weight to the perspectives here, it was useful for me to see where I've been consistent with these success stories and where I might have ground to make up.

I wonder if it's worth following up this very useful qualitative work with a quantitative survey?

"Long-Termism" vs. "Existential Risk"

Speaking about AI risk particularly, I haven't bought into the idea that there's a "cognitively substantial" chance AI could kill us all by 2050. And even if I had, many of my interlocutors haven't. There are two key points to get across to bring the average interlocutor on the street or at a party to an Eliezer Yudkowsky level of worry:

  • Transformative AI will likely happen within 10 years, or 30
  • There's a significant chance it will kill us all, or at least a catastrophic number of people (e.g. >100m)

It's not trivial to convince people of either of these points without sounding a little nuts. So I understand why some people prefer the longtermist framing. Then it doesn't matter whether transformative AI happens in 10 years, 30, or 100, and you only have to make the argument for why you should care about the magnitude of the problem.

If I think AI has maybe a 1% chance of being a catastrophic disaster, rather than, say, the 1-in-10 that Toby Ord gives it over the next 100 years or the higher risk Yudkowsky gives it (>50%? I haven't seen him put a number to it), then I have to go through the additional step of explaining why anyone should care about a 1% risk. During the pandemic, when the statistically average person had a ~1% chance of dying from covid, it was difficult to convince something like a third of the population to give a shit about it. The problem with small numbers like 1%, or even 10%, is that a lot of people just shrug and dismiss them; cognitively, they round to zero. But the conversation "convince me 1% matters" can look a lot like just explaining longtermism to someone.
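To make the "small numbers still matter" point concrete, here's a rough expected-value sketch. The probabilities are illustrative stand-ins for the estimates mentioned above, not claims:

```python
# Illustrative expected-value sketch: even a "small" 1% probability of a
# global catastrophe corresponds to an enormous expected death toll.
world_population = 8_000_000_000

# My 1%, Ord's ~1/10, and a higher Yudkowsky-style estimate.
for p in (0.01, 0.10, 0.50):
    expected_deaths = p * world_population
    print(f"p = {p:.0%}: expected deaths = {expected_deaths:,.0f}")
```

Even at the 1% figure, the expected toll is 80 million people, which is the intuition the "convince me 1% matters" conversation has to deliver.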

Against anti-natalism; or: why climate change should not be a significant factor in your decision to have children

Some great points, and you've got me thinking again, honestly. I'll concede that if the GDP impact or human life impact were quite a bit different, and they absolutely could be, I'd be...at least thinking a lot harder about this.

I guess my central point was that you cannot argue that CC should not be a significant factor in deciding whether to have children (if you care about total happiness) without also arguing about whether having children will effectively exacerbate CC in the long run. And I think you were trying to do that.

That's a fair criticism. Trying to sum up, the point I was trying to get across (poorly expressed in my OP, I have to say) is that

(1) one should (under a total view of happiness) include the enjoyment one's potential child will get out of life in the calculations

(2) the enjoyment one's potential child will get out of life is almost certainly still positive, and 

(3) to make a new person's existence net-negative, the marginal climate impact of an extra person would have to outweigh the total utility of that person living, say, 40-80 well-being-adjusted life-years (WALYs). While the impact of climate change as a whole is clearly large, that is the combined impact of 8 billion people; the individual impact of each marginal person is much smaller than the WALYs they experience by existing.
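A back-of-the-envelope version of point (3), with entirely made-up numbers just to show the shape of the comparison:

```python
# Break-even check for point (3): an extra person's existence is net-negative
# only if their marginal climate harm exceeds the welfare of their own life.
walys_per_life = 60  # assumed midpoint of the 40-80 WALY range above

# Suppose (purely for illustration) total climate harm were equivalent to
# 1 billion WALYs lost, spread across the ~8 billion people causing it:
total_harm_walys = 1_000_000_000
per_person_marginal_harm = total_harm_walys / 8_000_000_000

print(per_person_marginal_harm)                   # 0.125 WALYs of harm
print(per_person_marginal_harm > walys_per_life)  # False: far below break-even
```

On these (hypothetical) figures the marginal harm would have to be hundreds of times larger before an extra life became net-negative, which is why I treated (3) as uncontroversial.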

On my understanding of the impacts, I had thought (2) and (3) would be uncontroversial given the evidence. Thus, I mainly wanted to point out the analytical argument outlined in the previous paragraph, and that would be enough. But now that you've told me the true GDP impact could be much greater than 10%, I'm much less certain! I guess you are right, at least, that the debate is "messy".

Do you have any sources you can recommend that contain more reliable estimates of (a) GDP impact, (b) human life impact, or (c) long-run exacerbation where things become "overwhelmingly negative"? All of that would concern me, particularly the long-run overwhelmingly-negative scenario.

I understand this is getting into an entirely new argument I didn't make originally, so I appreciate it if you don't want to stray. But at some point, I think the "climate cost" of growing the population by some amount is the lesser of (a) the cost of mitigating their carbon footprint by other means, or (b) the actual effects of their carbon footprint. That assumes "we" (whoever the imagined "we" is) will choose the lesser-cost option, which is problematic; on the other hand, I'm not sure how much moral responsibility you can build into the choice to have a child when a less costly mitigation alternative exists that society as a whole chooses not to pursue.

Against anti-natalism; or: why climate change should not be a significant factor in your decision to have children

Interesting, appreciate your reply! I think you raised a couple of concerns:

  1. Bringing an additional child into the world results in them essentially taking from a limited resource (the finite share of carbon emissions that can be captured, mitigated, or tolerated), reducing the resource available for everyone else.
  2. It's plainly wrong to argue life won't be substantially worse than today

Have I understood your argument right?

I think (1) is complicated. Even if it's true that bringing an additional child into the world leaves less for everyone else, the primary beneficiary isn't the child's parents but the child themselves. (This depends on whether you take a "total view" or a "person-affecting" view of population ethics: it holds under the total view, which is my own perspective, while on the person-affecting view you could disagree.) The key point I was trying to make in my post is that the benefits accruing to that one child are greater than the total harm that additional child does by existing and producing a carbon footprint. Other commenters were right to say I haven't made a strong affirmative case, but at a minimum, I'd appeal to you to consider whether the calculations need to be done.

I'll attempt a brief calculation, though. I don't necessarily stand by these figures, but my point is that (from a consequentialist point-of-view) doing a calculation like this is important for understanding whether anti-natalism is a good response to climate change.

The largest impact of climate change on human beings, in expectation, seems to be forcing people out of their homes and communities to migrate, possibly across thousands of miles to different countries. Many will die of famine, thirst, or other acute problems, but all will have their lives uprooted. Estimating the number of people affected is difficult, but the best estimate I can find is roughly 200 million. If this scales linearly with the world's population of roughly 8 billion, then for every 40 new people in the world, we'll have 1 new climate refugee. Is it worth coming into the world if you have a one-in-forty chance of being a climate refugee, or of causing someone else to be? Of course no one can actively make that choice, but we can make it for someone "in expectation" when deciding whether to bring them into the world. To me, a 1-in-40 chance of a bad outcome is worth a 39-in-40 chance of a good one.
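The 1-in-40 figure above is just simple division; spelled out, under the linear-scaling assumption:

```python
# The refugee ratio, under the (pessimistic) linear-scaling assumption above.
projected_refugees = 200_000_000   # best estimate cited above
world_population = 8_000_000_000

people_per_refugee = world_population / projected_refugees
print(people_per_refugee)      # 40.0 -> 1 new refugee per 40 new people
print(1 / people_per_refugee)  # 0.025 -> a 1-in-40 (2.5%) chance
```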

But even though that still seems a worthwhile gamble, in reality I think the situation is much, much less dire. The impact of climate change won't scale linearly, because as the population grows we'll spend more resources on carbon capture and on transitioning to a zero-emission economy. This does impose costs on people, but the sacrifice of driving a bit less, or spending a bit more on solar panels, or other steps toward carbon zero, seems far smaller than the sacrifice of not existing at all. This isn't completely obvious, because the burden falls across the whole of society, but I'll have to leave that exercise for the future.

For the second point (2): people have done their best to estimate the economic impact of climate change, and the best indications are in the range of 2-10% of world GDP. On average, the US and other developed economies grow about 1-2% a year, or 10-20% a decade. So climate change, and responding to it, will cost us roughly a decade of growth in living standards. But overall, it seems living standards will still be higher in the future than they are now, even accounting for the impact of climate change.
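As a sanity check on the "decade of growth" comparison, compounding the growth rate makes the arithmetic explicit. The rates here are assumed midpoints of the ranges above, for illustration only:

```python
# Compare a decade of compound growth at ~1.5%/year against a one-off
# climate cost of 2-10% of GDP. Rates are assumed midpoints, illustrative only.
annual_growth = 0.015
decade_growth = (1 + annual_growth) ** 10 - 1  # ~16% over a decade

for climate_cost in (0.02, 0.10):
    net = (1 + decade_growth) * (1 - climate_cost) - 1
    print(f"{climate_cost:.0%} climate cost -> net decade change {net:+.1%}")
```

Even at the pessimistic 10% end, the net change over a decade stays positive, which is the basis for saying living standards should still be higher in the future.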

The Future Fund’s Project Ideas Competition

Building on the above idea...

Research the technology required to restart modern civilization and ensure the technology is understood and accessible in safe havens throughout the world

A project could ensure that not only the know-how but also the technology exists, dispersed in various parts of the world, to enable a restart. For instance, New Zealand is often considered a relatively safe haven, but New Zealand's economy is highly specialized and, for many technologies, relies on importing rather than producing them indigenously. Kick-starting civilization from Wikipedia could prove very slow. Physical equipment and training for technologies strategically important to a restart could be planted in relatively safe locations like New Zealand. At the extreme, industries that localize restart-critical technology could be subsidized. This would not necessarily mean the most advanced technology; rather, it means the technologies that were important stepping stones to the point we are at now.


The Future Fund’s Project Ideas Competition

Group psychology in space

Space governance

When human colonies are established in outer space, their relationship with Earth will be very important for their well-being. Initially, they're likely to be dependent on Earth. Like settler colonies on Earth, they may grow to desire independence over time. Drawing on history and on research into social group identities from social psychology, researchers should attempt to understand the kinds of group identities likely to arise in independent colonies. As colonies grow they'll inevitably form independent group identities, but depending on their relationships with social groups back home, these identities could sustain links with Earth or create antagonism. Attitudes on Earth might likewise range from supportive to exclusionary or even prejudiced. Better understanding intergroup relations between Earth powers and their off-world settler colonies could help us develop equitable governance structures that promote peace and cooperation between groups.

The Future Fund’s Project Ideas Competition

Fund publicization of scientific datasets

Epistemic institutions

Scientific research has made huge strides in the last 10 years towards more openness and data sharing. But it is still common for scientists to keep some data proprietary for some length of time, particularly large datasets that cost millions of dollars to collect, such as fMRI datasets in neuroscience. More open-science funding could pay scientists when their data is actually used by third parties, further incentivizing them to make data not only accessible but usable. It could also support the development of existing open-science resources like osf.io and other repositories of scientific data. Alternatively, a project to systematically catalogue scientific data available online (a "library of raw scientific data") could greatly expand access to and use of existing datasets.
