William_S


Comments

Why I prioritize moral circle expansion over artificial intelligence alignment

I don't think CEV or similar reflection processes reliably lead to wide moral circles. I think they can still be heavily influenced by their initial set-up (e.g. what humanity's values are when reflection begins).

Why do you think this is the case? Do you think there is an alternative reflection process (either implemented by an AI, by a human society, or a combination of both) that could be defined that would reliably lead to wide moral circles? Do you have any thoughts on what it would look like?

If we go through some kind of reflection process to determine our values, I would much rather have a reflection process that wasn't dependent on whether or not MCE occurred beforehand, and I think failing to lead to a wide moral circle should be considered a serious bug in any definition of a reflection process. It seems to me that working on producing such a process would be a plausible alternative, or at least parallel, path to directly performing MCE.

Introducing Canada’s first political advocacy group on AI Safety and Technological Unemployment

I've talked to Wyatt and David, and afterwards I am more optimistic that they'll think about downside risks and be responsive to feedback on their plans. I wasn't convinced that the plan laid out here is a useful direction, but we didn't dig into it in enough depth for me to be certain.

Introducing Canada’s first political advocacy group on AI Safety and Technological Unemployment

Seems like the main argument here is: "The general public will eventually clue in to the stakes around ASI and AI safety and the best we can do is get in early in the debate, frame it as constructively as possible, and provide people with tools (petitions, campaigns) that will be an effective outlet for their concerns."

One concern about this is that "getting in early in the debate" might move up the time that the debate happens or becomes serious, which could be harmful.

An alternative approach would be to simply build latent capacity: work on issues that are already in the political domain (I think basic income as a solution for technological unemployment is something that is already out there in Canada), but avoid raising new issues until other groups move into that space too. While doing that, you could build skills and networks and learn how to advocate effectively in spaces that don't carry the same risks of prematurely politicizing AI-related issues. Then, when something related to AI becomes a clear goal for policy advocacy, you could move onto it at the right time.

Open Thread #38

Thanks for the Nicky Case links

Open Thread #38

Any thoughts on individual-level political de-polarization in the United States as a cause area? It seems important, because a functional US government helps with a lot of things, including x-risk. I don't know whether there are tractable/neglected approaches in the space. It seems possible that interventions on individuals that are intended to reduce polarization and promote understanding of other perspectives, as opposed to pushing a particular viewpoint or trying to lobby politicians, could be neglected. http://web.stanford.edu/~dbroock/published%20paper%20PDFs/broockman_kalla_transphobia_canvassing_experiment.pdf seems like a useful study in this area (it seems possible that this approach could be used for issues on the other side of the political spectrum).

The Map of Global Warming Prevention

I'm not saying that these mean we shouldn't do geoengineering, that they can't be solved, or that they will happen by default, just that these are additional risks (possibly unlikely but high impact) that you ought to include in your assessment and that we ought to make sure we avoid.

Re coordination problems not being bad: It's true that they might work out, but there's significant tail risk. Just imagine that, say, the US unilaterally decides to do geoengineering, but it screws up food production and the economy in China. This probably increases the chances of nuclear war (even more so than if climate change does it indirectly, as there will be a more specific, attributable event). It's worth thinking about how to prevent this scenario.

The Map of Global Warming Prevention

Extra risks from geoengineering:

Cause additional climate problems (i.e. it doesn't just uniformly cool the planet; I recall seeing a simulation somewhere where climate change + geoengineering did not equal no change, but instead significantly changed rainfall patterns).

Global coordination problems (who decides how much geoengineering to do, compensation for downsides, etc.). This could cause a significant increase in international tensions, plausibly war.

Climate Wars by Gwynne Dyer has some specific negative scenarios (for climate change + geoengineering): https://www.amazon.com/Climate-Wars-Fight-Survival-Overheats/dp/1851688145

Announcing the Good Technology Project

It might be useful to suggest Technology for Good as, e.g., a place where companies with that focus could send job postings and have them seen by people who are interested in working on such projects.

Announcing the Good Technology Project

This is probably not answerable until you've made some significant progress in your current focus, but it would be nice to get a sense of how well the pool of people available to work on technology for good projects lines up with the skills required for those problems (for example, are there a lot of machine learning experts who are willing to work on these problems, but not many projects where that is the right solution? Is there a shortage of, say, front-end web developers who are willing to work on these kinds of projects?).

EA risks falling into a "meta trap". But we can avoid it.

Another way of thinking about this is that in an overdetermined environment there seems to be a point at which the impact of EA movement building becomes "causing a person to join EA sooner" rather than "adding another person to EA" (which is the current basis for evaluating EA movement-building impact), and the former would be much less valuable.
