Kat Woods

2475 · Joined Sep 2014

Bio

Think it would be high-impact or fun to meet? Book a 20-minute slot here https://calendly.com/katwoods/location-independent-eag

Comments (117)

I don't know why sphor's comment was downvoted (I'm also confused by that), but for Ryan's, I can at least speak for myself about why I downvoted it:

  1. I strongly disagree with the comment and think that
    1. This sort of thinking is paralyzing for the EA movement and leads to way more potential founders giving up on ideas, bouncing from the EA movement, not posting on the Forum, or moving so slowly that a lot of impact is lost. (I might write a post about this, because I think it's important and neglected in the movement.)
    2. It derails the conversation onto something I consider to be a small detail about an improbable, small-downside outcome, and I wanted more people focusing on more fruitful potential criticisms or points about the prize.
  2. While a lot of the comment was polite and constructive, it also said that we were being "shifty", which felt unnecessarily accusatory. I think if that word were changed, I would change my strong downvote to just a downvote.

Of note, I just strongly disagree with this comment/idea. In general, I think Ryan is great and consider him a friend. 

Large companies are usually much less innovative than small companies

I think this is still in the framework of thinking that large groups of people having to coordinate leads to stagnation. To change my mind, you'd have to make the case that having a larger number of startups leads to less innovation, which seems like a hard case to make. 

the larger EA gets, the more people are concerned about someone "destroying the reputation of the community"

I think this is a separate issue that might be caused by the size of the movement, but a different hypothesis is that it's simply an idea that has traction in the movement, one which has been around for a long time, even while we were a lot smaller. Considerations like spending your "weirdness points" have been around since the very beginning.

(On a side note, I think we're overly concerned about this, but that's a whole other post. Suffice it to say here that a lot of the probability mass is on this not being caused by the size of the movement, but rather by a particularly sticky idea.)

I think there exist potential configurations of a research field that can scale substantially better, but I don't think we are currently configured that way

🎯 I 100% agree. I'm thinking of spending some more time thinking about and writing up ways we could make it so the movement could usefully take on more researchers. I also encourage others to think about this, because it could unlock a lot of potential.

I expect by default exploration to go down as scale goes up

I think this is where we disagree. It'd be very surprising if ~150 researchers were the optimal number, or if having fewer would lead to more innovation and more/better research agendas.

in general, the number of promising new research agendas and direction seems to me to have gone down a lot during the last 5 years as EA has grown a lot, and this is a sentiment I've heard mirrored from most people who have been engaged for that long

An alternative hypothesis is that the people you've been talking to have become more pessimistic about there being hope at all (if you hang out with MIRI folk a lot, I'd expect this to be more acute). It might not be that more people are having bad ideas, or that a bigger movement leads to a decline in quality, but rather that a certain contingent thinks alignment is impossible or deeply improbable, so that all ideas seem bad. From that paradigm/POV, the default is that every new research agenda seems bad. It's not that the agendas got worse; it's that people think the problem is even harder than they originally thought.

Another hypothesis is that the idea of epistemic humility has been spreading, combined with the idea that you need intensive mentorship. This makes new people coming in less likely to actually come up with new research agendas and more likely to defer to authority. (A whole other post there!)

Anyway, just some alternatives to consider :) It's hard to convey tone over text, but I'm enjoying this discussion a lot, and you should read all my writing assuming a lot of warmth and engagement. :)

Also, I'm surprised at the claim that more people don't lead to more progress. I've heard that one major cause of progress so far has simply been that there's a much larger population of people to try things (of course, progress also causes there to be more people, so the causal chain goes both ways). Similarly, the reason cities tend to have more innovation than small towns is that they have a higher density of people interacting with each other.

You can also think of it from the perspective of adding more exploration. Right now there are surprisingly few research agendas. Having more people would lead to more of them, and that would increase the odds that one of them is correct.

Of note, I do share your concerns about making sure the field doesn't just end up maximizing proxy metrics. I think that will be tricky and will require a lot of work (as it already does right now!).

I agree that 10k people working in the same org would be unwieldy. I'm thinking more of having 10k people working across hundreds of orgs, and sometimes independently, etc. Each of these people would be in their own little microcosm, dealing with the same normal amount of interaction. That should address the concern about the social environment getting worse. It might even make it better, because people could more easily find their "tribe".

And I agree right now we wouldn't be able to absorb that number usefully. That's currently an unsolved problem that would be good to make progress on.

Thanks! 

Good idea about linking to the hiring post. I wrote that after this one, but I've gone back and added it. Thanks for the suggestion! 

Couldn't agree more! 

Some others to add to the list:

Thank you for making this! This looks great. I've added it to the list of AI safety courses.

It's not just on technical AI safety, but I feel it's related enough that anybody looking at the list will also be interested in this resource.

I'd love to add that, but unfortunately it would be really difficult technically speaking, so we probably won't make it happen.

Great point! I didn't give it much thought, honestly. I think you're right that saying 5% each time is better. Gonna update it now.

Thanks for the suggestion!
