ElizabethBarnes

Comments

This is a really great write-up, thanks for doing this so conscientiously and thoroughly. It's good to hear that Surge is mostly meeting researchers' needs.

Re whether higher-quality human data is just patching current alignment problems - the way I think about it is more like: there's a minimum level of quality you need to set up various enhanced human feedback schemes. You need people to actually read and follow the instructions, and if they don't do this reliably you really won't be able to set up something like amplification or other schemes that need your humans to interact with models in non-trivial ways. It seems good to get human data quality to the point where it's easy for alignment researchers to implement different schemes that involve complex interactions (like the humans using an adversarial example finder tool or looking at the output of an interpretability tool). This is different from the case where we have an alignment problem because, e.g., MTurkers mark common misconceptions as truthful whereas more educated workers correctly mark them as false - switching to better-educated workers fixes that particular problem, but I don't think of it as a scalable sort of improvement.

The evaluations project at the Alignment Research Center is looking to hire a generalist technical researcher and a webdev-focused engineer. We're a new team at ARC building capability evaluations (and in the future, alignment evaluations) for advanced ML models. The goals of the project are to improve our understanding of what alignment danger is going to look like, understand how far away we are from dangerous AI, and create metrics that labs can make commitments around (e.g. 'If you hit capability threshold X, don't train a larger model until you've hit alignment threshold Y'). We're also still hiring for model interaction contractors, and we may be taking SERI MATS fellows.

I think DM clearly restricts REs more than OpenAI does (and I assume Anthropic). I know of REs at DM who have found it annoying/difficult to lead projects because of being REs; I know of someone without a PhD who left Brain (not DeepMind, but still Google, so probably a similar culture) partly because it was restrictive and who now leads a team at OAI/Anthropic; and I know of people without an undergrad degree who have been hired by OAI/Anthropic. At OpenAI I'm not aware of it being more difficult for people to lead projects etc. because of being 'officially an RE'. I had bad experiences at DM that were ostensibly related to not having a PhD (but could also have been explained by lack of research ability).

High-quality human data

Artificial Intelligence

Most proposals for aligning advanced AI require collecting high-quality human data on complex tasks such as evaluating whether a critique of an argument was good, breaking a difficult question into easier subquestions, or examining the outputs of interpretability tools. Collecting high-quality human data is also necessary for many current alignment research projects. 

We’d like to see a human data startup that prioritizes data quality over financial cost. It would follow complex instructions, ensure high data quality and reliability, and operate with a fast feedback loop that’s optimized for researchers’ workflow. Having access to this service would make it quicker and easier for safety teams to iterate on different alignment approaches.

Some alignment research teams currently manage their own contractors because existing services (such as surgehq.ai and scale.ai) don’t fully address their needs; a competent human data startup could free up considerable amounts of time for top researchers.

Such an organization could also practice and build capacity for things that might be needed at ‘crunch time’ – e.g., rapidly producing moderately large amounts of human data, or checking a large volume of output from interpretability tools or adversarial probes with very high reliability.

The market for high-quality data will likely grow – as AI labs train increasingly large models at a high compute cost, they will become more willing to pay for data. As models become more competent, data needs to be more sophisticated or higher-quality to actually improve model performance. 

Making it less annoying for researchers to gather high-quality human data relative to using more compute would incentivize the entire field towards doing work that’s more helpful for alignment, e.g., improving products by making them more aligned rather than by using more compute.


[Thanks to Jonas V for writing a bunch of this comment for me]
[Views are my own and do not represent those of my employer]

Although I believe all the deaths were at a nursing home, where you'd expect a much higher death rate.

A big source of uncertainty is how long the fatigue persists - it wasn't entirely clear from the SARS paper whether that was the fraction of people who still had fatigue at 4 years, or the fraction who'd had it at some point. The numbers are very different if it's a few months of fatigue vs the rest of your life. I'm also not sure I've split up persistent chronic fatigue vs temporary post-viral fatigue properly.

A friend pointed me to a study showing a high rate of chronic fatigue in SARS survivors (40%). I did a quick analysis of the risk of chronic fatigue from getting COVID-19 (my best guess for young healthy people is ~2 weeks lost in expectation, but it could be less than a day or more like 100 days on what seem like reasonable assumptions): https://docs.google.com/spreadsheets/d/1z2HTn72fM6saFH42VKs6lEdvooLJ6qaXwCrQ5YZ33Fk/edit?usp=sharing
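
For intuition, the calculation is basically an expected-value product over a few uncertain inputs. Here is a minimal sketch in Python with placeholder numbers - these are illustrative assumptions, not the inputs actually used in the spreadsheet:

```python
# Rough expected-value sketch for days lost to post-viral chronic fatigue.
# Every number below is an illustrative placeholder, not the spreadsheet's input.

p_infection = 0.2                  # chance of catching COVID-19 over the period considered
p_fatigue_given_infection = 0.05   # chance of significant post-viral fatigue if infected
duration_days = 180                # how long the fatigue lasts (the key uncertainty noted above)
severity = 0.5                     # fraction of each day's value lost while fatigued

expected_days_lost = p_infection * p_fatigue_given_infection * duration_days * severity
print(f"Expected days lost: {expected_days_lost:.1f}")  # 0.9 days with these placeholders
```

The headline estimate can swing by orders of magnitude mainly because inputs like the fatigue duration (a few months vs the rest of your life, as discussed above) are themselves uncertain by orders of magnitude.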

Thanks for doing this! Some nitpicking on this graph: https://i.ibb.co/wLd1vSg/donations-income-scatter.png (donations and income)

1) The trendline looks a bit weird. Did you force it to go through (0,0)? (A quick sketch of why that matters follows after point 2.)

2) Your axis labels initially go up by factors of 100, then the last one only a factor of 10.
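
To illustrate point 1: an ordinary least-squares fit and a fit forced through the origin only agree when the true intercept is near zero, so the constraint can visibly tilt a trendline. A minimal sketch on synthetic, made-up data (not the actual survey numbers):

```python
# Compare an ordinary least-squares trendline with one forced through (0, 0).
# The data below is synthetic, purely to show how the two fits can diverge.
import numpy as np

rng = np.random.default_rng(0)
income = rng.uniform(20_000, 200_000, size=100)
donations = 500 + 0.03 * income + rng.normal(0, 300, size=100)

# Free-intercept fit: donations = a * income + b
a_free, b_free = np.polyfit(income, donations, 1)

# Fit constrained through the origin: donations = a * income
a_origin = (income @ donations) / (income @ income)

print(f"free-intercept fit:  slope={a_free:.4f}, intercept={b_free:.1f}")
print(f"through-origin fit:  slope={a_origin:.4f}")
```

With a genuinely non-zero intercept in the data, the through-origin slope comes out too high, which produces the kind of odd-looking trendline described above.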

Thanks for the post! I am generally pretty worried that I and many people I know are all deluding ourselves about AI safety - it has a lot of red flags from the outside (although these are lessening as more experts come on board, more progress is made in AI capabilities, and more concrete work is done on safety). I think it's more likely than not that we've got things completely wrong, but that it's still worth working on. If that's not the case, I'd like to know!

I like your points about language. I think there's a closely related problem where it's very hard to talk or think about anything that's between human level at some task and omnipotent. Once you try to imagine something that can do things humans can't, it becomes hard to argue that the system wouldn't be able to do any particular thing - there is always the retort that just because you, a human, think it's impossible doesn't mean a more intelligent system couldn't achieve it.

On the other hand, I think there are some good examples of couching safety concerns in non-anthropomorphic language. I like Dr Krakovna's list of specification gaming examples: https://vkrakovna.wordpress.com/2018/04/02/specification-gaming-examples-in-ai/

I also think Iterated Distillation and Amplification is a good example of a discussion of AI safety and potential mitigation strategies that's couched in ideas of training distributions and gradient descent rather than desires and omnipotence.

Re the sense of meaning point, I don't think that's been my personal experience - I switched into CS from biology partly because of concern about x-risk, and know various other people who switched fields from physics, music, maths and medicine. As far as I can tell, the arguments for AI safety still mostly hold up now that I know more about the relevant fields, and I don't think I've noticed egregious errors in major papers. I've definitely noticed some people who advocate for the importance of AI safety making mistakes and being confused about CS/ML fundamentals, but I don't think I've seen this from serious AI safety researchers.

Re anchoring, this seems like a very strong claim. I think a sensible baseline to take here would be expert surveys, which usually put several percent probability on HLMI being catastrophically bad. (e.g. https://aiimpacts.org/2016-expert-survey-on-progress-in-ai/#Chance_that_the_intelligence_explosion_argument_is_about_right)

I'd be curious whether you have an explanation for why your numbers are so far away from the expert estimates. I don't think these expert surveys are a reliable source of truth, just a good ballpark for what sort of orders of magnitude we should be considering.

You say:

I think a given amount of dolorium/dystopia (say, the amount that can be created with 100 joules of energy) is far larger in absolute moral expected value than hedonium/utopia made with the same resources

Could you elaborate on why this is the case? I would tend to think that the prior should be that they're equal, and then you update on the fact that they seem to be asymmetrical, try to work out why that is, and ask whether those factors will apply in the future. They could be fundamentally asymmetrical, or evolutionary pressures may tend to create minds with these asymmetries. The arguments I've heard for the asymmetry are:

  • The worst thing that can happen to an animal, in terms of genetic success, is much worse than the best thing.

This isn't entirely clear to me: I can imagine that a large genetic win, such as securing a large harem, could be comparable to the genetic loss of dying, and many animals will in fact risk death for this. This seems particularly true considering that dying without offspring doesn't make your contribution to the gene pool zero; it just means your contribution comes only via your relatives.

  • There is selection against strong positive experiences in a way that there isn't against strong negative experiences.

The argument here is, I think, that strong positive experiences would likely result in the animal sticking in the blissful state and neglecting to feed, sleep, etc., whereas strong negative experiences just result in the animal avoiding a particular state, which is less maladaptive. This argument seems stronger to me but still not entirely satisfying - it seems quite sensitive to how you define states.
