Buck

Chief Technology Officer @ Redwood Research
5035 · Berkeley, CA, USA · Joined Sep 2014

Bio

I'm Buck Shlegeris. I am the CTO of Redwood Research, a nonprofit focused on applied alignment research. Read more about us here: https://www.redwoodresearch.org/

Comments (258)

Obviously it would be more convenient if EA orgs accepted interns earlier, and I totally agree that it destroys value when we don't :(

Moving EA program deadlines up to compete with industry might be challenging for orgs that aren't 100% sure what kind of funding they'll have for summer programs. In that case, consider what kind of guidelines you can provide to prospective candidates to give them a better sense of how likely they are to be accepted for a summer program at your org. Also, consider doing early acceptances if an extremely talented student contacts you with an exploding offer from another organization, and publicizing this policy if you decide to enact it.

FWIW in Redwood Research's case, our main bottleneck isn't funding, it's that we aren't sure what our org is going to look like in eight months' time--we might be feeling like we're doing a great job and have lots of management capacity and space for interns, or we might be feeling like we're wandering in the desert and don't know what research we should be doing, in which case interns will be a dangerous distraction.

I think the right comparison to draw is between EA orgs and similarly small startups; my rough experience is that small startups are similarly uncomfortable making internship offers far in advance.

I’m not sure that you’re making the wrong call, but I think it’s sort of weird/hypocritical to advertise EA by making donation choices that sacrifice altruistic impact in order to seem more normal.

Another effect is that I’d much rather evangelize EA to the kind of people who understand donor lotteries quickly.

I am glad we did not have 50 interns in July. But I’m 75% that we’ll run a giant event like this with at least 25 participants by the end of January. I’ll publish something about this in maybe a month.

I think that microCOVID probably looks pretty good on EA grounds just via saving a bunch of EAs a bunch of time worrying about what their COVID policies should be. But I like your point.

But it seems much more difficult to justify for the evolution anchor, which Ajeya admits would be far more computationally intensive than storing text or simulating a deterministic Atari game.


The evolution anchor involves more compute than the other anchors (because you need to get so many more data points and train the AI on them), but it's not obvious to me that it requires a larger proportion of compute spent on the environment than the other anchors. Like, it seems plausible to me that the evolution anchor looks more like having the AI play pretty simple games for an enormously long time, rather than having a complicated physically simulated environment.

Ajeya's report addresses this in the "What if training data or environments will be a bottleneck?" section, in particular in the "Having computation to run training environments" subsection:


An implicit assumption made by all the biological anchor hypotheses is that the overwhelming majority of the computational cost of training will come from running the model that is being trained, rather than from running its training environment. 

This is clearly the case for a transformative model which only operates on text, code, images, audio, and video since in that case the “environment” (the strings of tokens or pixels being processed) requires a negligible amount of computation and memory compared to what is required for a large model. Additionally, as I mentioned above, it seems possible that some highly abstract mathematical environments which are very cheap to run could nonetheless be very rich and support extremely intelligent agents. I think this is likely to be sufficient for training a transformative model, although I am not confident.  

If reinforcement learning in a rich simulated world (e.g. complex physics or other creatures) is required to train a transformative model, it is less clear whether model computation will dominate the computation of the environment. Nonetheless, I still believe this is likely. My understanding is that the computation used to run video game-playing agents is currently in the same ballpark as the computation used to run the game engine. Given these models are far from perfect play, there is likely still substantial room to improve on those same environments with a larger model. It doesn’t seem likely that the computational cost of environments will need to grow faster than the computational cost of agents going forward.  (If several intelligent agents must interact with one another in the environment, it seems likely that all agents can be copies of the same model.) 

In the main report, I assume that the computation required to train a transformative model under this path can be well-approximated by FHKP^α, where F is the model's FLOP / subj sec, H is the model's horizon length in subjective seconds, P is the parameter count of the model, and α and K describe scaling behavior. I do not add an additional term for the computational cost of running the environment.
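(To make the approximation concrete, here is a minimal Python sketch of the FHKP^α estimate, and of why an environment whose cost stays in the same ballpark as the agent's only changes the estimate by a constant factor. All numeric values below are placeholder assumptions of mine, not figures from the report.)

```python
# Minimal illustrative sketch of the training-compute approximation F * H * K * P**alpha
# from the quoted passage. Every numeric value is a hypothetical placeholder.

def training_flop(F, H, K, P, alpha):
    """Approximate training compute as F * H * K * P**alpha."""
    return F * H * K * P ** alpha

F = 1e14      # model FLOP per subjective second (assumed)
H = 1.0       # horizon length in subjective seconds (assumed)
K = 1e3       # constant in the data-requirement scaling law (assumed)
alpha = 0.8   # exponent in the data-requirement scaling law (assumed)
P = 1e12      # model parameter count (assumed)

agent_flop = training_flop(F, H, K, P, alpha)
print(f"Agent-only estimate: {agent_flop:.2e} FLOP")

# The passage argues environment compute stays in the same ballpark as agent
# compute; if so, including it only multiplies the estimate by a constant
# (here, 2x), which is why the report omits a separate environment term.
env_ratio = 1.0   # environment FLOP per agent FLOP (assumed)
total_flop = agent_flop * (1 + env_ratio)
print(f"With an environment of equal cost: {total_flop:.2e} FLOP")
```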

As long as you imitate someone aligned, it doesn't pose much safety risk.

Also, this kind of imitation doesn't result in the model taking superhumanly clever actions, even if you imitate someone unaligned.

I don't normally think you should select for speaking fluent LessWrong jargon, and I have advocated for hiring senior ops staff who have read relatively little LessWrong.

I think we might have fundamental disagreements about 'the value of outside perspectives' vs. 'the need for context to add value'; or put another way, 'the risk of an echo chamber from too-like-minded people' vs. 'the risk of fracture and bad decision-making from not-like-minded-enough people'.

I agree that this is probably the crux.

(I'm flattered by the inclusion in the list but would fwiw describe myself as "hoping to accomplish great things eventually after much more hard work", rather than "accomplished".)

FWIW I went to the Australian National University, which is about as good as universities in Australia get. In Australia there's way less stratification of students into different qualities of universities--university admissions are determined almost entirely by high school grades, and if you graduate in the top 10% of high school graduates (which I barely did) you can attend basically any university you want to. So it's pretty different from eg America, where you have to do pretty well in high school to get into top universities. I believe that Europe is more like Australia in this regard.
