jacobpfau

How to apply for a PhD

A few thoughts on ML/AI safety which may or may not generalize:

You should read successful candidates' SOPs to get a sense of style, level of detail, and content; cf. 1, 2, 3. Ask current EA PhDs for feedback on your statement. Probably avoid writing a statement focused on an AI safety/EA idea which is not in the ML mainstream, e.g. IDA, mesa-optimization, etc. If you have multiple research ideas, consider writing more than one (i.e. tailored) SOP and submitting whichever SOP is most relevant to the faculty at each university.

Look at groups' pages to get a sense of the qualification distribution for successful applicants; this is a better way to calibrate where to apply than looking at rankings, IMO. It's also a good way to calibrate how much experience you're expected to have pre-PhD. My impression is that in many ML programs it is very difficult to get in directly out of undergrad without an exceptional track record, e.g. top publications or high Putnam scores.

For interviews, bringing up concrete ideas on next steps for a professor's paper is probably very helpful.

My vague impression is that financial security and depression are less relevant here than in other fields, as you can probably find job opportunities partway through if either becomes problematic. I'd be interested to hear disagreement.

The Future Fund’s Project Ideas Competition

On-demand Software Engineering Support for Academic AI Safety Labs

AI safety work, e.g. in RL and NLP, involves both theoretical and engineering work, but academic training and infrastructure do not optimize for engineering. An independent non-profit could cover this shortcoming by providing software engineers (SWEs) to academics working on AI safety as contractors, code reviewers, and mentors. AI safety research is often well funded, but even grant-rich professors are bottlenecked by university salary rules and limited professor hours, which makes hiring competent SWEs at market rates challenging. An FTX Foundation-funded organization could get around these bottlenecks by independently vetting SWEs, offering industry-competitive salaries, and then having the hired SWEs collaborate with academic safety researchers at no cost to the lab. If successful, academic AI safety work would become faster in terms of researcher hours and higher impact, because papers would be accompanied by more legible and standardized code bases -- i.e. AI safety work would end up looking more like Distill. The potential impact of this proposal could be estimated by soliciting input from researchers who moved from academic labs to private AI safety organizations.

EDIT: This seems to already exist at https://alignmentfund.org/

Important, actionable research questions for the most important century

Re: feasibility of AI alignment research, Metaculus already has "Control Problem solved before AGI invented". Do you have a sense of what further questions would be valuable?

A mesa-optimization perspective on AI valence and moral patienthood

Ok, seems like this might have been more a terminological misunderstanding on my end. I think I agree with what you say here, 'What if the “Inner As AGI” criterion does not apply? Then the outer algorithm is an essential part of the AGI’s operating algorithm'.

A mesa-optimization perspective on AI valence and moral patienthood

Ok, interesting. I suspect the programmers will not be able to easily inspect the inner algorithm, because the inner/outer distinction will not be as clear cut as in the human case. The programmers may avoid sitting around by fiddling with more observable inefficiencies e.g. coming up with batch-norm v10.

A mesa-optimization perspective on AI valence and moral patienthood

Good clarification. Determining which kinds of factoring are the ones which reduce valence is more subtle than I had thought. I agree with you that the DeepMind set-up seems more analogous to neural nociception (e.g. high heat detection). My proposed set-up (Figure 5) seems significantly different from the DM/nociception case, because it factors the step where nociceptive signals affect decision making and motivation. I'll edit my post to clarify.

A mesa-optimization perspective on AI valence and moral patienthood

Your new setup seems less likely to have morally relevant valence. Essentially the more the setup factors out valence-relevant computation (e.g. by separating out a module, or by accessing an oracle as in your example) the less likely it is for valenced processing to happen within the agent.

Just to be explicit here, I'm assuming estimates of goal achievement are valence-relevant. How generally this is true is not clear to me.

A mesa-optimization perspective on AI valence and moral patienthood

Thanks for the link. I'll have to do a thorough read through your post in the future. From scanning it, I do disagree with much of it; many of those points of disagreement were laid out by previous commenters. One point I didn't see brought up: IIRC the biological anchors paper suggests we will have enough compute to do evolution-type optimization before the end of the century. So even if we grant your claim that learning to learn is much harder to directly optimize for, I think it's still a feasible path to AGI. Or perhaps you think evolution-like optimization takes more compute than the biological anchors paper claims?

A mesa-optimization perspective on AI valence and moral patienthood

Certainly valenced processing could emerge outside of this mesa-optimization context. I agree that for "hand-crafted" (i.e. no base-optimizer) systems this terminology isn't helpful. To try to make sure I understand your point, let me try to describe such a scenario in more detail: Imagine a human programmer who is working with a bunch of DL modules and interpretability tools and programming heuristics which feed into these modules in different ways -- in a sense the opposite end of the spectrum from monolithic language models. This person might program some noxiousness heuristics that input into a language module. Those might correspond to a Phenumb-like phenomenology. This person might program some other noxiousness heuristics that input into all modules as scalars. Those might end up being valenced or might not, hard to say. Without having thought about this in detail, my mesa-optimization framing doesn't seem very helpful for understanding this scenario.

Ideally we'd want a method for identifying valence which is more mechanistic than mine, in the sense that it lets you identify valence in a system just by looking inside the system, without looking at how it was made. All that said, most contemporary progress on AI happens by running base-optimizers which could support mesa-optimization, so I think it's quite useful to develop criteria which apply to this context.

Hopefully this answers your question and the broader concern, but if I'm misunderstanding let me know.

A mesa-optimization perspective on AI valence and moral patienthood

Your interpretation is a good summary!

Re comment 1: Yes, sorry, this was just meant to point at a potential parallel, not to work out the parallel in detail. I think it'd be valuable to work out the potential parallel between the DM agent's predicate predictor module (Fig. 12, p. 14) and my factored-noxiousness-object-detector idea. I just took a brief look at the paper to refresh my memory, but if I'm understanding this correctly, it seems to me that this module predicts which parts of the state prevent goal realization.

Re comment 2: Yes, this should read "(positively/negatively)". Thanks for pointing this out.

Re EDIT: Mesa-optimizers may or may not represent a reward signal -- perhaps there's a connection here with Demski's distinction between search and control. But for the purposes of my point in the text, I don't think this much matters. All I'm trying to say is that VPG-type-optimizers have external reward signals, whereas mesa-optimizers can have internal reward signals.
