I am currently facing the situation of having to choose between good options: either making a career change to do a postdoc in AI Safety, or staying in my current field of expertise (quantum algorithms) at one of the best quantum computing startups, which pays really well, allowing me to earn to give, and also allowing me to work remotely near my girlfriend (who brings me a lot of joy). I'm honestly quite confused about what would be best for me to do.

As part of the decision process, I have talked with my family, to whom I have carefully explained the problem of AI Safety and why experts believe it is important. They have, however, raised a question that seems valid:

If the community has so much money, and we believe this is such an important problem, why can't we just hire/fund world experts in AI/ML to work on it?

These are some of the answers I have heard in my local community as well as the AGI Safety Fundamentals slack channel:

  • Most experts are not aligned, in that they do not understand the AI Safety problem well enough. But this does not square well with the fact that they recognize this is an important problem. After all, it would be odd to claim they believe this is an important problem without really understanding it.
  • We want to grow the field slowly enough that we can control the quality of the research and ensure we do not end up with a reputation crisis. Perhaps, but then shouldn't we still focus on hiring/funding AI experts rather than on career changes from undergraduates or graduate students?
  • This is not a well-known enough problem. Same counter-answer as the previous one.
  • Most experts prefer working on topics where the problem is concrete enough that they can play with toy problems, and somewhat gamify the scientific process. I have found some evidence for this (https://scottaaronson.blog/?p=6288#comment-1928022 and https://twitter.com/CraigGidney/status/1489803239956508672?t=JCmx7SC4PhxIXs8om2js5g&s=19), but it is unclear. I like this argument.
  • Researchers do not find this problem interesting enough or may think the problem is not so important or very far away in time, and therefore they are not willing to accept the money to work on a different topic.
  • They have other responsibilities, with the people they work with and their area of expertise. They believe they are making a greater contribution by staying in their subfield. Money is also not an issue for them.
  • The field is so young that young people without a lot of expertise are almost as effective as seasoned AI professors. In other words, the field is preparadigmatic. But even if we still have to create the tools, I think people with AI expertise are more likely to do better.
  • We need more time to organize ourselves or convince people because we are just getting started.
  • Maybe we've not done things right?
  • (Ryan Carey's opinion): Senior researchers are less prone to changing their research field.

All of this suggests to me that the number one priority for solving AI Safety is making it concrete enough that researchers can easily get absorbed by small subproblems. For example, we could define a few concrete approaches that allow people to make progress at a concrete level, even if we don't solve AI Safety once and for all, as perhaps Yudkowsky would hope.

In any case, my friend Jaime Sevilla argues that at a community level it is probably better to leave earning-to-give to people who can earn more than $1M. But I would like to better understand your thoughts on this decision and what I should do, as well as get a better understanding of why we can't just "buy more experts to work on this problem". Note that with the provided funding, experts could themselves hire their own postdocs, Ph.D. students... This may lead to fewer career changes, though; for example, I have found it difficult to get a postdoc in AI because of my different background, which makes me a prima facie less attractive candidate.

Thanks!


Some thoughts:
1) Most importantly: In your planning, I would explicitly include the variable of how happy you are. In particular, if the AI Safety option would result in a break-up of a long-term & happy relationship, or cause you to be otherwise miserable, it is totally legitimate not to do the AI Safety option. Even if it were higher "direct" impact. (If you need an impact-motivated excuse - which might even be true - then think about the indirect impact of avoiding signalling "we only want people who are so hardcore that they will be miserable just to do this job".)

2) My guess: Given that you think your QC work is unlikely to be relevant to AI Safety, I personally believe that (ignoring the effect on you), the AI Safety job is higher impact.

3) Why is it hard to hire world experts to work on this? (Some thoughts, possibly overlapping with what other people wrote.)

  • "world experts in AI/ML" are - kinda tautologically - experts in AI/ML, not in AI Safety. (EG, "even" you and me have more "AI Safety" expertise than most AI/ML experts.)
  • Most problems around AI Safety seem vague, and thus hard to delegate to people who don't have their own models of the topic. Such models take time to develop. So these people might not be productive for a year (or two? or more? I am not sure) even if they are genuine about AI Safety work.
  • Top people might be more motivated by prestige than money. (And being "bought off" seems bad from this point of view, I guess.)
  • Top people might be more motivated by personal beliefs than money. (So the bottleneck is convincing them, not money.)

4) I am tempted to say that all the people who could be effectively bought with money are already being bought with money, so you donating doesn't help here. But I think a more careful phrasing is "recruiting existing experts is bottlenecked on other things than money (including people coming up with good recruiting strategies)".

5) Phrased differently: In our quest for developing the AI Safety field, there is basically no tradeoff between "hiring 'more junior' people (like you)" and "recruiting senior people", even if those more junior people would go earning to give otherwise.

Agreed. The AIS job will have higher direct impact, but career transitions and relocating are both difficult. Before taking the plunge, I'd suggest people consider whether they would be happy with the move. And whether they have thought through some of the sacrifices involved, for instance, if the transition to AIS research is only partially successful, would they be happy spending time on non-research activities like directing funds or advising talent?

Thanks for your comments Ryan :) I think I would be OK if I try and fail; of course I would much prefer succeeding, but I think I am happier knowing I'm doing the best I can than comparing myself to some unattainable level. That being said, there is some sacrifice, as you mention, particularly in having to learn a new research area and in spending time away, both of which you understand :)

+1 to all of this. Sounds like a very tough decision. If it were me, I would probably choose quality of life and stick with the startup. (Might also donate to areas that are more funding constrained like global development and animal welfare.)

Thanks for making concrete bets @aogara :)

If the community has so much money, and we believe this is such an important problem, why can't we just hire/fund world experts in AI/ML to work on it?


Food for thought: LeCun and Hinton both hold academic positions in addition to their industry positions at Meta and Google, respectively. Yoshua Bengio is still in academia entirely. Do you think that tech companies haven't tried to buy every minute of their attention? Why are the three pioneers of deep learning not all in the highest-paying industry job? Clearly, they care about something more than this.

One thing you should consider is that most of the impact is likely to be at the tails. For instance, the distribution of impact for people is probably power-law distributed (this is true in ML in terms of first author citations; I suspect it could be true for safety specifically). From your description, it seems like you might be more likely to end up in the tail of ability for quantum computing, if one of the best quantum computing startups is trying to hire you. You don't say that some of the top AI safety orgs are trying to hire you.
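To illustrate how heavy-tailed this can be, here is a toy simulation (my own sketch, assuming a Pareto distribution with shape parameter α ≈ 1.16, the classic "80/20" value; the real distribution of citations or impact may look quite different):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: per-person "impact" drawn from a Pareto (power-law) distribution.
# alpha = 1.16 roughly reproduces the 80/20 split; this is an assumption for
# illustration, not an empirical fit to citation data.
alpha = 1.16
impact = rng.pareto(alpha, size=100_000) + 1  # classical Pareto with x_min = 1

top_1_percent = np.sort(impact)[-1_000:]      # top 1% of 100,000 people
share = top_1_percent.sum() / impact.sum()
print(f"Share of total impact held by the top 1%: {share:.0%}")
# Prints a share far above 1%: most of the total impact sits in the tail.
```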

Then you have to consider how useful quantum algorithms are to existential risk. Just because people don't talk about that subject doesn't mean it's useless. How many quantum computing PhDs have you seen on the EA forum or met at an EA conference? You are the only one I've met. As somebody with unique knowledge, it's probably worth spending a pretty significant chunk of time thinking about how it could possibly fit in, getting feedback on your ideas, sharing thoughts with the community, etc.

Then you have to think about how likely quantum computing is to make you really rich (probably through equity, not salary) in a period of time where it will matter (e.g. being rich in 5 years is very different from being rich in 50 years).

I think if it's completely useless for existential risk and is extremely unlikely to make you rich, it's probably worth pivoting. But consider those questions first, before you give up the chance to be one of the (presumably) very few professional quantum computing researchers in the world.

 

Also, have you considered 80k advising?

From your description, it seems like you might be more likely to end up in the tail of ability for quantum computing, if one of the best quantum computing startups is trying to hire you.

I think this is right.

You don't say that some of the top AI safety orgs are trying to hire you.

I was thinking of trying an academic career. So yeah, not really anyone seeking me out; it was more me trying to go to Chicago to learn from Victor Veitch and change careers.

Then you have to consider how useful quantum algorithms are to existential risk.

I think it is quite unlikely that this will be the case. I'm 95% sure that QC will not be used in advanced AI, and even if it were, it is quite unlikely to matter for AIS: https://www.alignmentforum.org/posts/ZkgqsyWgyDx4ZssqJ/implications-of-quantum-computing-for-artificial Perhaps I could be surprised, but do we really need someone watching out in case this turns out to be valuable? My intuition is that if that were to happen, I could learn whatever development has occurred quite quickly with my current background. I could spend, say, 1-3 hours a month, and that would probably be enough to keep watch.

One thing you should consider is that most of the impact is likely to be at the tails. For instance, the distribution of impact for people is probably power-law distributed (this is true in ML in terms of first author citations; I suspect it could be true for safety specifically).

In fact, the reason why I wanted to go for academia, apart from my personal fit, is that the AI Safety community is right now very tilted towards industry. I think there is a real risk that between blog posts and high-level ideas we could end up with a reputation crisis. We need to be seen as a serious scientific research area, and for that we need more academic research and much better definitions of the concrete problems we are trying to solve. In other words, if we don't get over the current 'preparadigmaticity' of the field, we risk reputation damage.

Then you have to think about how likely quantum computing is to make you really rich (probably through equity, not salary).

Good question. I have been offered 10k stock options with a value of around $5 to $10 each. Right now the valuation of this startup is around $3B. What do you think?
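For what it's worth, here is a minimal back-of-envelope sketch of what that offer might be worth (the exit probability and dilution factor below are my own assumptions for illustration, not terms from the offer, and strike price is ignored):

```python
# Rough value of the equity offer. Exit probability and dilution are assumed,
# purely illustrative numbers; they are not part of the actual offer.
n_options = 10_000
value_per_option = (5, 10)  # $ per option, as quoted in the offer

paper_value = tuple(n_options * v for v in value_per_option)
print(f"Paper value today: ${paper_value[0]:,} - ${paper_value[1]:,}")

# Haircuts: chance of a meaningful liquidity event and future dilution.
p_exit = 0.3    # assumed probability the equity ever becomes liquid
dilution = 0.7  # assumed fraction of value retained after future rounds
expected = tuple(v * p_exit * dilution for v in paper_value)
print(f"Risk-adjusted value: ${expected[0]:,.0f} - ${expected[1]:,.0f}")
# -> roughly $10k-$21k risk-adjusted, vs $50k-$100k on paper.
```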

Also, have you considered 80k advising?

I want to talk to Habiba before making a decision, but she was busy this week with EAGx Oxford. Let's see what she thinks.

Thanks Thomas!

Related: https://80000hours.org/articles/applying-an-unusual-skill-to-a-needed-niche/

Given your background, you can probably contribute a lot to AI safety efforts by continuing in quantum computing.

Photonics and analog neural net hardware will probably have enormous impacts on capabilities (qualitatively similar to the initial impacts of GPUs in 2012-2019). Quantum computing is basically another fundamental hardware advance that may be a bit further out.

The community needs people thinking about the impacts of quantum computing on advanced AI. What sorts of capabilities will quantum computing grant AI? How will this play into x-risk? I haven't heard any good answers to these questions.

Hey Mantas! So while I think there is a chance that photonics will play a role in future AI hardware, unfortunately my expertise is quite far from the hardware itself. Up to now, I have been doing quantum algorithms.

The problem, though, is that I think quantum computing will not play an important role in AI development. It may seem that the quadratic speedup that quantum computing provides on a range of problems is good enough to justify using it. However, if one takes into account hardware requirements such as error correction, you lose some 10 orders of magnitude of speed, which makes QC unlikely to help with generic problems.
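To make that arithmetic concrete, here is a minimal back-of-envelope sketch (my own illustration, assuming a Grover-style quadratic speedup and the roughly 10-orders-of-magnitude per-operation overhead mentioned above):

```python
# Assumed per-operation slowdown of error-corrected quantum hardware relative
# to classical hardware (illustrative figure, ~10 orders of magnitude).
slowdown = 1e10

# With a quadratic speedup, quantum needs ~sqrt(N) steps vs N classical steps,
# so quantum wins only when slowdown * sqrt(N) < N, i.e. when N > slowdown**2.
crossover = slowdown ** 2
print(f"Quantum advantage only for problem sizes N > {crossover:.0e}")
# -> N > 1e+20 elementary steps, far beyond typical AI workloads per query.
```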

Where QC shines is in analyzing and predicting the properties of quantum systems, as in chemistry and materials science. This is very useful by itself, and it may bring about new batteries, new drugs... but it is different from AI.

Also, there might be some applications in cryptography, but one can already use quantum-resistant classical cryptography, so I'm not very excited about cryptography as an application.

My question is more about what the capabilities of a superintelligence would be once equipped with a quantum computer, not whether quantum computing will play into the development of AGI. This question is important for AI safety concerns, and few people are talking about it / qualified to tackle it.

Quantum algorithms seem highly relevant to this question. At the risk of revealing my total lack of expertise in quantum computing, one might even wonder what learnable quantum circuits / neural networks would entail. Idk. It just seems wide open.

Some questions:

  • Forecasting is highly information-limited. A superintelligence that can't see half the chessboard can still lose. Does quantum computing provide a differential advancement here?
  • Does AlphaFold et al. render the hopes that quantum computing will supercharge simulation of chemical/physical systems irrelevant? Or would a 'quantum version of AlphaFold' trounce the original? (again, I am no expert here)
  • Where will exponential speedups play a role in practical problems? Simulation? Of just quantum systems, or does it help with simulating complex systems more generally? Any case where the answer is "yes" is worth thinking about with respect to its implications for AI safety.

My question is more about what the capabilities of a superintelligence would be once equipped with a quantum computer

I think it would be an AGI very capable of chemistry :-)

one might even wonder what learnable quantum circuits / neural networks would entail.

Right now they just mean lots of problems :P More concretely, there are some results indicating that quantum NNs (or variational circuits, as they are called) are not likely to be more efficient at learning classical data than classical NNs. Although I agree this is still somewhat up in the air.

Does AlphaFold et al. render the hopes that quantum computing will supercharge simulation of chemical/physical systems irrelevant?

By chemistry I mean electronic structure simulation. Other than that, proteins are quite classical, which is why AlphaFold works well, and why it is highly unlikely that neurons have any quantum effects involved in their functioning.

Or would a 'quantum version of AlphaFold' trounce the original?

For this I even have a published article showing that the answer is (probably) no: https://arxiv.org/pdf/2101.10279.pdf (published in https://iopscience.iop.org/article/10.1088/2058-9565/ac4f2f/meta)

Where will exponential speedups play a role in practical problems? Simulation? Of just quantum systems, or does it help with simulating complex systems more generally? Any case where the answer is "yes" is worth thinking about with respect to its implications for AI safety.

My intuition is that it won't, but even if it did, it is unlikely to be an issue for AI Safety: https://www.alignmentforum.org/posts/ZkgqsyWgyDx4ZssqJ/implications-of-quantum-computing-for-artificial

Thanks in any case, Mantas :)

All of this suggests to me that the number one priority for solving AI Safety is making it concrete enough that researchers can easily get absorbed by small subproblems. For example, we could define a few concrete approaches that allow people to make progress at a concrete level, even if we don't solve AI Safety once and for all, as perhaps Yudkowsky would hope.

 

I'm very sympathetic to the general idea that building the AI safety field is currently more important than making direct progress (though continued progress of course helps with field building). Have you considered doing this for a while if you think it's possibly the most important problem, i.e., for example, trying to develop concrete problems that can then be raised to the fields of ML and AI?

Sidenote 1: Another option that I'm even more excited about is getting promising CS students engaged with AI safety. This would avoid things like "senior researchers are kinda stuck in their particular interests" and "senior people don't care so much about money". Michael Chen's comment about his experience with AI Safety university groups made it sound quite tractable and possibly highly underrated.

Sidenote 2: I would be quite surprised if AI safety orgs would not allow you to work remotely for at least a significant fraction of your time? E.g. even if some aspects of the work need to be in person, I know quite a few researchers who manage to do this by travelling there a few times per year for a couple of weeks.

Have you considered doing this for a while if you think it's possibly the most important problem, i.e., for example, trying to develop concrete problems that can then be raised to the fields of ML and AI?

Indeed, I think that would be a good objective for the postdoc. It's also true that I think this is the kind of thing we need to do to make progress in the field, and my intuition is that aiming for academic papers is probably necessary to increase quality.

Cool, I'd personally be very glad if you would contribute to this. Hmm, I wonder whether a plausible next step could be to work on this independently for a couple of months to see how much you like doing the work. Maybe you could do this part-time while staying at your current job?

Unfortunately, this is not feasible: I am finishing my Ph.D. and have to decide what I am doing next within the next couple of weeks. In any case, my impression is that to pose good questions I need a couple of years of understanding the field, so that the problems are tractable, state of the art, concretely defined...

Ah, dang. And how difficult would it be to reject the startup offer, independently and remotely work on concretizing AI safety problems full-time for a couple of months to test your fit, and then, if you don't feel like this is clearly the best use of your time, you can (I imagine) very easily get another job offer in the quantum computing field?

(Btw I'm still somewhat confused why AI safety research is supposed to be in much friction with working remotely at least most of the time.)

Ah, dang. And how difficult would it be to reject the startup offer, independently and remotely work on concretizing AI safety problems full-time for a couple of months to test your fit, and then, if you don't feel like this is clearly the best use of your time, you can (I imagine) very easily get another job offer in the quantum computing field?

The thing that worries me is working on some specific technical problem, not being able to make sufficient progress, and feeling stuck. But I think that would happen after more than 2 months, perhaps after a year. I'm thinking of it more in academic terms; I would like to target academic-quality papers. But perhaps if that happens I could go back to quantum computing or some other boring computer science job.

(Btw I'm still somewhat confused why AI safety research is supposed to be in much friction with working remotely at least most of the time.)

The main reason is that if I go to a place where people are working on technical AI Safety, I will get up to speed with the AI/ML part faster. So it'd be for learning purposes.

Regarding the very specific question of whether it will help more to earn to give or to work directly: you can ask the AI Safety company that will potentially hire you whether they prefer hiring you or getting [however much you'd donate]. They're in a perfect position to make that decision.

 

I predict they'd prefer you over 10x whatever you'd donate, but why guess something that is easily testable?

Regarding your specific situation, my prior is that there may be a creative 3rd option that will be win-win.

As a first suggestion, how about sharing this challenge with the AI Safety company? 

Maybe they'll let you work remotely, or pay for a limousine to take you and/or your girlfriend back and forth every day? Or maybe there's an AI Safety place that will fit you that is close to home? Or maybe something in the new area would attract your girlfriend (and family?) to move? 

These are all extreme long shots, and probably all of them are wrong; I'm just trying to explain what I'm pointing at. These kinds of solutions often come from having at least one person with all the information, so I'd start by opening this up for conversation.

Yeah, I'll try to be creative. It is not a company though, it's academia. But that gives you flexibility too, so that's good, even for doing partially remote work.

Though I am not sure I can ask her to move from Europe to America. She values stability quite a lot, and she wants to get a permanent position as a civil servant in Spain, where we are from.

Thanks Yonathan!

What exactly would the postdoc be about? Are you and others reasonably confident your research agenda would contribute to the field?

I submitted an application about using causality as a means for improved value learning and interpretability of NNs: https://www.lesswrong.com/posts/5BkEoJFEqQEWy9GcL/an-open-philanthropy-grant-proposal-causal-representation My main reason for putting forward this proposal is that I believe the models of the world humans operate with are somewhat similar to causal models, with some high-level variables that AI systems might be able to learn. So using causal models might be useful for AI Safety.

I think there are also some external reasons why it makes sense as a proposal:

  • It is connected to the work of https://causalincentives.com/
  • Most of the negative feedback I have received is that the proposal is still a bit too high-level; most people believe this is something worth trying out (even if I am not the right person).
  • I got approval from LTFF, and got to the second round of both FLI and OpenPhil (still undecided in both cases, so no rejections).

I think the risk of me not being the right person to carry out research on this topic is greater than the risk of this not being a useful research agenda. On the other hand, so far I have been able to do research well even when working independently, so perhaps the change of topic will turn out ok.

What's the difference between being funded by LTFF vs. one of the other two?

Thanks Chris! Not much: duration and amount of funding. But the projects I applied with were similar, so in a sense I was arguing that independent evaluations of the proposal might provide more signal of its perceived usefulness.

Indeed! My plans were to move back to Spain after the postdoc, because there is already one professor interested in AI Safety and I could build a small hub here.

Thanks acylhalide! My impression was that I should work in person more at the beginning; once I know the tools and the intuitions, this can be done remotely. In fact, I am pretty much doing my Ph.D. remotely at this point. But since it's a postdoc, I think the speed of learning matters.

In any case, let me say that I appreciate you poking into assumptions, it is good and may help me find acceptable solutions :)

Sure, acylhalide! Thanks for proposing ideas. I've done a couple of AI Safety camps and one summer internship. I think the issue is that to make progress I need to become an expert in ML as well, beyond my current understanding. That was my main motivation for this. That's perhaps the reason why I think it is beneficial to do some kind of in-person postdoc, even if I could work part of the time from home. But it's also that long-distance relationships are costly, so that's the issue.
