Kerry_Vaughan

Comments

Lessons for AI governance from the Biological Weapons Convention

I think I still don't quite get why this seems implausible. (For what it's worth, I think your view is pretty mainstream, so I'm asking about it more to understand how people are thinking about AI and not as any kind of criticism of the post or the parenthetical.)

It seems clear to me that an AI weapon could exist. AI systems designed to autonomously identify and destroy targets seem like a particularly clear example. A ban which distinguishes that technology from nearby civilian technology doesn't seem much more difficult than distinguishing biological weapons from civilian uses of biological technology.

Of course we're mostly interested in AGI, not narrower AI technology. I agree that society doesn't think of AGI development as a weapons technology, so banning "AGI weapons" seems strange to contemplate, but it's not too difficult to imagine that changing! After all, many of the proponents of the technology are clear that they think it will be the most powerful technology ever invented, granting its creators unprecedented strength. Various components of the US military and intelligence services certainly seem to think AGI development has military implications, so the shift to seeing it as a dual-use weapons technology doesn't seem too big a leap to imagine.

Lessons for AI governance from the Biological Weapons Convention

This isn't central to the post, but I'm interested in this parenthetical:

(To clarify - the BWC is an arms control treaty that prohibits bioweapons; it is unlikely that we’ll see anything similar with AI (i.e. a complete ban of any “AI weapons”, whatever this means).)

At first glance, a ban on AI weapons research or AI research with military uses seems pretty plausible to me. For example, one could ban research on lethal autonomous weapons systems and research devoted to creating an AGI without banning, e.g., the use of machine learning for image classification or text generation.

Can you say more about why this seems implausible from your point of view?

Should Grants Fund EA Projects Retrospectively?

I think the consensus around impact certificates was that they seemed like a good idea and yet the idea never really took off.

Should Grants Fund EA Projects Retrospectively?

Lots of funding is implicitly retrospective in the sense that what you've done historically is a big input into whether individuals and groups get funding. Yet, because most funding decisions mix several factors, including past work, anticipated future work, and reputation, I think there may be an open opportunity here.

I'd be particularly excited to see funding for projects that have already concluded, where it is clear that the past project's success or failure is the only factor being considered. This might encourage more unconventional or initially hard-to-assess projects and would provide a more concrete signal about which projects actually succeeded.

EA Survey 2020: Demographics

In the world where changes to the survey explain the drop, I'd expect to see a similar number of people click through to the survey (especially in 2019) but a lower completion rate. Do you happen to have data on the completion rate by year?

If the number of people visiting the survey has dropped, that seems consistent with the hypothesis that the drop is explained by the movement shrinking, unless the increased time cost of completing the survey was made very clear upfront in 2019 and 2020.

EA Survey 2020: Demographics

From context, that appears to be an incomplete list of metrics selected as positive counterexamples. I assume there are others as well.

EA Survey 2020: Demographics

You probably already agree with this, but I think lower survey participation should make it seem more likely that the effective altruism community is shrinking than it did before you saw that evidence.

If you as an individual or CEA as an institution have any metrics you track to determine whether effective altruism is growing or shrinking, I'd find it interesting to know more about what they are.

Introducing Ayuda Efectiva

I am well aware of the general reticence about mass media and the preference for a high fidelity model of spreading the ideas of effective altruism. However, I think that (1) the misrepresentation risks are less acute in the narrower effective-giving space and (2) some coverage, even if it is a bit off-target, can often be better than no coverage when you are launching a new organization.

I want to express some general support for being less concerned about fidelity when spreading ideas like effective giving.

Something that I didn't discuss in the article on fidelity is risk assessment. While all ideas are susceptible to misunderstandings as they spread, not all misunderstandings are equally harmful. Effective giving appears to be a relatively low-risk idea to spread, both because the idea seems close to society's existing concepts and because there have been a number of past attempts at spreading it without any particularly problematic results (I'd be interested in counterexamples if anyone knows of any).

How have you become more (or less) engaged with EA in the last year?

I work at Leverage Research as the Program Manager for our Early Stage Science research.

How have you become more (or less) engaged with EA in the last year?

I'm much less involved now than I was 12 months ago. 

There are a few reasons for this. The largest factor is that my engagement has steadily decreased since I left an EA job, where engagement with EA was a job requirement, and took a non-EA job instead. My intellectual interests have also shifted to the history of science, which is mostly outside the EA purview.

More generally, from the outside, EA feels stagnant both intellectually and socially. The intellectual advances that I'm aware of seem concentrated in working out the details of longtermism using the tools of philosophy and economics: important work, to be sure, but not work that is likely to substantially influence my worldview or plans.

Socially, many of the close friends I met in EA are drifting away from EA involvement. The newer people I've met also tend to have a notably different vibe from EAs in the past. Newer EAs seem to look to the older EA intellectuals to tell them what they should do with their lives and how they should think about the world. Something I liked about the vibe of the EA community in the past was the sense of possibility; the sense that there were many unanswered questions and that everyone had to work together to figure things out.

As the EA community has matured, it seems to have narrowed its focus and reined in its level of ambition. That's probably for the best, but I suspect it means that the intellectual explorers of the future are probably going to be located elsewhere.
