rachelAF

Joined Aug 2019

Bio

Rachel Freedman, AIS researcher at CHAI. London/Berkeley. Cats are not model-based reinforcement learners.

Comments (6)

Thanks for the nuanced response. FWIW, this seems reasonable to me as well:

I agree that it's important to separate out all of these factors, but I think it's totally reasonable for your assessment of some of these factors to update your assessment of others.

Separately, I think that people are sometimes overconfident in their assessment of some of these factors (e.g. intelligence), because they over-update on signals that seem particularly legible to them (e.g. math accolades), and that this can cause cascading issues with this line of reasoning. But that's a distinct concern from the one I quoted from the post.

In my experience, smart people have a pretty high rate of failing to do useful research (by researching in an IMO useless direction, or being unproductive), so I'd never be that confident in someone's research direction just based on them seeming really smart, even if they were famously smart.

I've personally observed this as well; I'm glad to hear that other people have also come to this conclusion.

I think the key distinction here is between necessity and sufficiency. Intelligence is (at least above a certain threshold) necessary to do good technical research, but it isn't sufficient. Impressive quantitative achievements, like competing in the International Mathematical Olympiad, are sufficient to demonstrate intelligence (again, above a certain threshold), but not necessary (most smart people never compete in the IMO and, outside of a few prestigious academic institutions, haven't even heard of it). But mixing these up can lead to poor conclusions, like one I heard the other night: "Doing better technical research is easy; we just have to recruit the IMO winners!"

I strongly agree with this particular statement from the post, but have refrained from stating it publicly before out of concern that it would reduce my access to EA funding and spaces.

EAs should consciously separate:

  • An individual’s suitability for a particular project, job, or role
  • Their expertise and skill in the relevant area(s)
  • The degree to which they are perceived to be “highly intelligent”
  • Their perceived level of value-alignment with EA orthodoxy
  • Their seniority within the EA community
  • Their personal wealth and/or power

I've been surprised by how many researchers, grant-makers, and community organizers around me do seem to interchange these things. For example, I recently heard someone who controls relevant funding and access to community spaces remark to a group: "I rank [Researcher X] as an A-Tier researcher. I don't actually know what they work on, but they just seem really smart." I found this very epistemically concerning, but other people didn't seem to.

I'd like to understand this reasoning better. Is there anyone who disagrees with the statement (i.e., who thinks these factors should not be consciously separated) and could help me understand their position?

This is a great idea! I don't currently have capacity for one-to-one calls, but I do hold monthly small group calls in an "office hours" format.

I'm a technical AI safety researcher at CHAI and PhD student at UC Berkeley, and I'm happy to talk about my research, others' research, graduate school, careers in AI safety, and other related topics. If you're interested, you can find out more about my research here, and sign up to join an upcoming call here.

Thank you for explaining more. In that case, I can understand why you'd want to spend more time thinking about AI safety.

I suspect that much of the reason that "understanding the argument is so hard" is because there isn't a definitive argument -- just a collection of fuzzy arguments and intuitions. The intuitions seem very, well, intuitive to many people, and so they become convinced. But if you don't share these intuitions, then hearing about them doesn't convince you. I also have an (academic) ML background, and I personally find some topics (like mesa-optimization) to be incredibly difficult to reason about.

I think that generating more concrete arguments and objections would be very useful for the field, and I encourage you to write up any thoughts that you have in that direction!

(Also, a minor disclaimer that I suppose I should have included earlier: I provided technical feedback on a draft of TAP, and much of the "AGI safety" section focuses on my team's work. I still think that it's a good concrete introduction to the field, because of how specific and well-cited it is, but I also am probably somewhat biased.)

Thank you for writing this! I particularly appreciated hearing your responses to Superintelligence and Human Compatible, and would be very interested to hear how you would respond to The Alignment Problem. TAP is more grounded in modern ML and current research than either of the other books, and I suspect that this might help you form more concrete objections (and/or convince you of some points). If you do read it, please consider sharing your responses.

That said, I don’t think that you have any obligation to read TAP, or to consider thinking about AI safety at all. It sounds like you aren’t drawn to a career in the field, and that’s fine. There are plenty of other ways to do good with an ML skill set. But if you don’t need to weigh working in AI safety against other career options, and you don’t find it interesting or enjoyable to consider, then why focus on forming personal views about AI safety at all?

Edited to add a disclaimer: I provided technical feedback on a draft of TAP, and much of the "AGI safety" section focuses on my team's work. I still think that it's a good concrete introduction to the field, because of how specific and well-cited it is, but I also am probably somewhat biased.

This closely matches my personal experience of EAG. I typically have back-to-back meetings throughout the entire conference, including during all of the talks. At the most recent EAG London, a more senior person in my field and I both wanted to meet, and we exchanged many messages like the one in the screenshot above -- "I just had a spot open up in 15 minutes if you're free?", "Are you taking a lunch break tomorrow?", etc. (We ultimately were not able to find mutual availability, and met on Zoom a couple of weeks later.)

Like Charles, I don't necessarily think that this is a bad thing. However, if this is the primary intent of the conference, the format could be improved somewhat to make small meetings easier (and could possibly include more events like the speaker reception, where people who spend the rest of the conference in prearranged 1:1s can chat casually).

I personally would be very excited about a conference app that allowed people to book small-group (1:2 or 1:3) meetings. I find that many of the people I speak to ask the same questions, and that I am frustratingly unable to accommodate everyone who wants a 1:1. I sometimes hold group Zoom calls (1:3 or 1:5) afterward for people I wasn't able to meet during the conference, and this format seems to work well.