Kerry_Vaughan

Comments

It's ok to leave EA

This post is great and I really admire you for posting it.

How Life Sciences Actually Work: Findings of a Year-Long Investigation

A very enlightening and useful post for understanding not only life sciences but other areas of science funding as well.

My current thoughts on MIRI's "highly reliable agent design" work

One of the most straightforward and useful introductions to MIRI's work that I've read.

The EA Community and Long-Term Future Funds Lack Transparency and Accountability

This post highlighted an important problem that would have taken much longer to address otherwise. I would point to this post as an example of how to hold powerful people accountable in a way that is fair and reasonable.

(Disclosure: I worked for CEA when this post was published)

Sentience Institute 2021 End of Year Summary

I've read some of the work from the historical case studies project, and it seems like a project with the potential to be extremely useful for anyone interested in movement building. I did a comparatively shallow dive into the Neoliberal movement a while ago and found it very useful for my own thinking about movement building, and this project seems to be of substantially better quality.

In fact, I'm surprised no one started a project of reviewing historical movement-building cases until now.

Despite billions of extra funding, small donors can still have a significant impact

If I imagine being someone who is new-ish to EA, wants to do good in the world, and is considering making donations my plan for impact, I imagine that I really have two questions here:

  1. Is donating an effective way to do good in the world given the amount of money committed to EA causes?
  2. Will other people in the EA community like and respect me if I focus on donating money?

I think question 2) understandably matters to people, but it's a bit uncouth to say it out loud (which is why I'm trying to state it explicitly).

In the earliest days of EA, the answer to 2) was "yeah, definitely, especially if you're thoughtful about where you donate." Over time, I think the honest answer shifted to "not really, they'll tell you to do direct work." I don't know what the answer is currently, but reading between the lines of the article, I'd guess that it's probably closer to "not really" than "yeah definitely."

Assuming that earning to give is in fact quite useful, this seems like a big problem to me! It's also a very difficult problem to solve even for high-status community members.

I'd be interested in thoughts on whether this problem exists today and if so, what individual members of the community can do to fix it.

I think I still don't quite get why this seems implausible. (For what it's worth, I think your view is pretty mainstream, so I'm asking about it more to understand how people are thinking about AI and not as any kind of criticism of the post or the parenthetical.)

It seems clear to me that an AI weapon could exist. AI systems designed to autonomously identify and destroy targets seem like a particularly clear example. A ban that distinguishes that technology from nearby civilian technology doesn't seem much harder to draft than one that distinguishes biological weapons from civilian uses of biological technology.

Of course, we're mostly interested in AGI, not narrower AI technology. I agree that society doesn't think of AGI development as a weapons technology, and so banning "AGI weapons" seems strange to contemplate, but it's not too difficult to imagine that changing! After all, many of the proponents of the technology are clear that they think it will be the most powerful technology ever invented, granting its creators unprecedented strength. Various components of the US military and intelligence services certainly seem to think AGI development has military implications, so the shift to seeing it as a dual-use weapons technology doesn't seem too big a leap to imagine.

This isn't central to the post, but I'm interested in this parenthetical:

(To clarify - the BWC is an arms control treaty that prohibits bioweapons; it is unlikely that we’ll see anything similar with AI (i.e., a complete ban of any “AI weapons”, whatever this means).)

At first glance, a ban on AI weapons research or AI research with military uses seems pretty plausible to me. For example, one could ban research on lethal autonomous weapons systems and research devoted to creating an AGI without banning, e.g., the use of machine learning for image classification or text generation.

Can you say more about why this seems implausible from your point of view?

Should Grants Fund EA Projects Retrospectively?

I think the consensus around impact certificates was that they seemed like a good idea, yet the idea never really took off.

Should Grants Fund EA Projects Retrospectively?

Lots of funding is implicitly retrospective in the sense that what you've done historically is a big input into whether individuals and groups get funding. Yet because most funding mixes several factors (past work, anticipated future work, reputation, etc.), I think there may be an open opportunity here.

I'd be particularly excited to see funding for projects that have already occurred where it is clear that the success or failure of the past project is all that is being considered. This might encourage more unconventional or initially hard-to-assess projects and would provide a more concrete signal about which projects actually succeeded historically.
