Kerry_Vaughan

I wanted to add a brief comment about EA Ventures.

I think this piece does a fair job of presenting the relevant facts about the project and why it did not ultimately succeed. However, the tone of the piece suggests that something untoward was happening with the project, which seems quite unfair to me.

For example, you say:

Personally, I (and others) suspect the main reason EAV failed is that it did not actually have committed funding in place.

It's correct that this was a big part of the issue with the project, but the lack of committed funding was no secret!

The launch announcement included this line about the role of funders:

For funders Effective Altruism Ventures is a risk-free way of gaining access to higher quality projects. We will learn about your funding priorities and then introduce you to vetted projects that meet your priorities. If you don’t like a project you are free to decline to fund it. We simply ask that you provide us with your reasons so we can improve our evaluation procedure.

Additionally, the tagline under "funders" on the website included the following:

Impact-focused backers who review proposals vetted by our partners and experts

Similarly, you attempt to show an inconsistency in the evaluation of EA Ventures by contrasting the following paragraphs:

When piecemeal evaluations have surfaced, they’ve offered conflicting evidence as to why EAV failed. In a 2017 comment thread, EAV co-founder Kerry Vaughn wrote: “We shut down EA Ventures because 1) the number of exciting new projects was smaller than we expected; 2) funder interest in new projects was smaller than expected and 3) opportunity cost increased significantly as other projects at CEA started to show stronger results.”

Vaughan has also suggested in 2017 that “Part of the problem is that the best projects are often able to raise money on their own without an intermediary to help them. So, even if there are exciting projects in EA, they might not need our help.” That explanation seems quite different from the original three reasons he supplied; it also seems easy to prove by listing specific high quality projects that applied to EAV but were instead funded by others.

But you fail to note that the comment cited in the second paragraph was a reply to the comment quoted in the first paragraph!

I was merely responding to a question about how the project could have received fewer exciting projects than expected while also having a harder time funding those projects than expected. There's nothing inconsistent about holding that view while also holding that the three reasons I cited are why the project did not succeed.

Overall, I think EA Ventures was probably a worthwhile experiment (although it's hard to be certain), but it was certainly a failure. I think I erred in not shutting down the project more cleanly, with a write-up explaining why. Thanks for your assistance in making the relevant facts of the situation clear.

This post is great and I really admire you for posting it.

Very enlightening and useful post for understanding not only life sciences, but other areas of science funding as well.

One of the most straightforward and useful introductions to MIRI's work that I've read.

This post highlighted an important problem that would have taken much longer to address otherwise. I would point to this post as an example of how to hold powerful people accountable in a way that is fair and reasonable.

(Disclosure: I worked for CEA when this post was published)

I've read some of the work from the historical case studies project and it seems like a project that has the potential to be extremely useful for anyone interested in movement building. I did a comparatively shallow dive into the Neoliberal movement a while ago and found it very useful for my own thinking about movement building and this project seems like it is of substantially better quality. 

In fact, I'm surprised no one started a project of reviewing historical movement-building cases until now.

If I imagine being someone new-ish to EA who wants to do good in the world and is considering making donations my plan for impact, I really have two questions here:

  1. Is donating an effective way to do good in the world given the amount of money committed to EA causes?
  2. Will other people in the EA community like and respect me if I focus on donating money?

I think question 2) understandably matters to people, but it's a bit uncouth to say it out loud (which is why I'm trying to state it explicitly).

In the earliest days of EA, the answer to 2) was "yeah, definitely, especially if you're thoughtful about where you donate." Over time, I think the honest answer shifted to "not really, they'll tell you to do direct work." I don't know what the answer is currently, but reading between the lines of the article I'd guess that it's probably closer to "not really" than "yeah definitely."

Assuming that earning to give is in fact quite useful, this seems like a big problem to me! It's also a very difficult problem to solve even for high-status community members.

I'd be interested in thoughts on whether this problem exists today and if so, what individual members of the community can do to fix it.

I think I still don't quite get why this seems implausible. (For what it's worth, I think your view is pretty mainstream, so I'm asking about it more to understand how people are thinking about AI and not as any kind of criticism of the post or the parenthetical.)

It seems clear to me that an AI weapon could exist. AI systems designed to autonomously identify and destroy targets seem like a particularly clear example. A ban that distinguishes that technology from nearby civilian technology doesn't seem much harder to craft than one that distinguishes biological weapons from civilian uses of biological technology.

Of course we're mostly interested in AGI, not narrower AI technology. I agree that society doesn't think of AGI development as a weapons technology, and so banning "AGI weapons" seems strange to contemplate, but it's not too difficult to imagine that changing! After all, many of the proponents of the technology are clear that they think it will be the most powerful technology ever invented, granting its creators unprecedented strength. Various components of the US military and intelligence services certainly seem to think AGI development has military implications, so the shift to seeing it as a dual-use weapons technology doesn't seem too big a leap to imagine.

This isn't central to the post, but I'm interested in this parenthetical:

(To clarify - the BWC is an arms control treaty that prohibits bioweapons; it is unlikely that we’ll see anything similar with AI (i.e. a complete ban of any “AI weapons”, whatever this means.)

At first glance, a ban on AI weapons research or AI research with military uses seems pretty plausible to me. For example, one could ban research on lethal autonomous weapons systems and research devoted to creating an AGI without banning, e.g., the use of machine learning for image classification or text generation.

Can you say more about why this seems implausible from your point of view?

I think the consensus around impact certificates was that they seemed like a good idea and yet the idea never really took off.
