
I totally see where you're coming from. I would tend to agree that Bayesian inference doesn't seem to be that useful right now. Visible and exciting leaps are being made by large neural networks, like you say. Bayesian inference, on the other hand, just works OK-ish, and faces stiff competition on benchmarks from non-Bayesian methods.

In my opinion, the biggest reasons why Bayesian inference isn't making much of an impact at the moment are:

  • Bayes is being applied in simple settings where deep learning works well and uncertainty is not really needed: large-data, batch-training, prediction-only tasks. Bayesian inference becomes more important when you have small data, need to update your beliefs (from heterogeneous sources), or need to make decisions about how to act (see the sketch after this list).
  • Since we have large datasets, the biggest bottleneck in AI/ML is setting up the problem correctly (i.e. asking the right question). OpenAI models like CLIP and GPT-3, and research directions like multitask/meta-learning, illustrate this nicely: by setting the problem up in a different way, they can leverage huge amounts of new data. Once you introduce a new problem, you reach for the easiest tool to start solving it. Bayesian inference is not the easiest tool, so it doesn't contribute much here.
  • Current approximate Bayesian training tools don't work very well. (Some disagree with me on this, but I do believe it.)
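
To make the belief-updating point in the first bullet concrete, here is a minimal sketch of a conjugate Bayesian update applied to small, sequentially arriving batches. The coin-flip-style data and all the numbers are invented for illustration; only NumPy and SciPy are assumed.

```python
import numpy as np
from scipy import stats

# Hypothetical small-data setting: estimating a success probability from two
# small batches of observations that arrive at different times.
rng = np.random.default_rng(0)
batches = [rng.binomial(1, 0.7, size=5), rng.binomial(1, 0.7, size=8)]

alpha, beta = 1.0, 1.0  # Beta(1, 1) prior over the unknown success probability

for batch in batches:
    # Conjugate update: the Beta posterior is available in closed form, so the
    # belief can be revised as each small batch arrives.
    alpha += batch.sum()
    beta += len(batch) - batch.sum()
    posterior = stats.beta(alpha, beta)
    lo, hi = posterior.ppf([0.05, 0.95])
    print(f"posterior mean {posterior.mean():.2f}, 90% interval ({lo:.2f}, {hi:.2f})")
```

With only a handful of observations the posterior interval stays wide, which is exactly the information a point estimate from a large-data method would throw away.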

I also think that in AI alignment, the biggest problem is figuring out the right question to ask. Observable failure cases in the current simple test settings are rare*, which makes it hard to do concrete technical work regardless of whether you choose to use the Bayesian paradigm.

The thought experiments that motivate AI/ML safety are often longer-term, and embedded in larger systems (like society). One place where I do think people have a concrete idea of problems that need solving in AI/ML is social science! I have seen interesting points being made, e.g. at FAccT [1], about how you should set up your AI/ML system if you want to avoid certain consequences when deploying it in society.

This is where I would focus if I wanted to work on good outcomes of AI/ML that have impact right now.

Now, I still work on Bayesian ML (although not directly on AI safety). Why do I do this when I agree that Bayesian inference doesn't have much to offer right now? Well, because I think there are reasons to believe that Bayesian inference will have new abilities to offer deep learning (not just uncertainty) in the long term. We just need to work on them! I may turn out to be wrong, but I do still believe that my research direction should be explored in case it turns out to be right. Big companies seem to have large model design under control anyway.

What should you work on? This is a difficult question. Large-model research is hard to do outside of a few organisations, and requires a lot of engineering effort. You could try to join such an organisation, investigate properties of trained models, make these models more widely available, or find smaller-scale test set-ups where you can probe their properties (this is difficult if scale is intrinsically necessary).

I do believe that a great way to get impact right now is to work on questions about deploying ML, and where we want to apply it in society.

You could also remember that your career is longer than a PhD, and work on technical problems now, even if they're not related to AI safety. If you want to work on Bayesian ML, or just ML that quantifies uncertainty in a way that matters (right now), I would work on problems with a decision-making element in them: places where the data-gathering <-> decision loop is closed.

Self-driving cars are a cool example, but again difficult to work on outside specific organisations. Bayesian optimisation or experimental design in industrial settings is a cool small-scale example of where uncertainty modelling really matters. I think smaller-scale, lower-dimensional applied ML problems are undervalued in the ML community. They're hard to find (it may require talking to people in industry), but I think they are numerous. And what's interesting is that research can make the difference between a poor solution and a great solution.
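
To make the Bayesian optimisation example a bit more concrete, here is a minimal sketch of the uncertainty-driven loop. The toy objective, search range, and kernel choice are all invented for illustration; scikit-learn and SciPy are assumed.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Hypothetical expensive objective (e.g. an industrial process to be tuned).
def objective(x):
    return -np.sin(3 * x) - x**2 + 0.7 * x

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 2.0, size=(4, 1))  # a handful of initial experiments
y = objective(X).ravel()

candidates = np.linspace(-1.0, 2.0, 500).reshape(-1, 1)

for _ in range(10):
    # Fit a GP surrogate: its predictive uncertainty is what drives exploration.
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X, y)
    mu, std = gp.predict(candidates, return_std=True)

    # Expected improvement over the best observation so far.
    best = y.max()
    z = (mu - best) / np.maximum(std, 1e-9)
    ei = (mu - best) * norm.cdf(z) + std * norm.pdf(z)

    # Closed loop: the model's uncertainty decides the next experiment to run.
    x_next = candidates[np.argmax(ei)].reshape(1, -1)
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next).ravel())

print(f"best input {X[np.argmax(y)].item():.3f}, best value {y.max():.3f}")
```

The point is the last two steps of the loop: the surrogate's predictive uncertainty, not just its mean prediction, decides which experiment to run next, which is the closed data-gathering <-> decision loop mentioned above.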

I'm curious to hear other people's thoughts on this as well.

 

[1] https://facctconference.org/

 

* Correct me if I'm wrong on this. If the failure cases are not rare, then my advice would be to pick one and try to solve it in a tool-agnostic way.