It seems generally useful for people aiming for a career in AI Safety to have some capability in ML software engineering. Some of the most straightforward ways to build it that I've come across are:
- fast.ai-like or Coursera courses.
- Bootcamps, such as the one recently organized by Redwood Research (https://forum.effectivealtruism.org/posts/iwTr8S8QkutyYroGy/apply-to-the-ml-for-alignment-bootcamp-mlab-in-berkeley-jan).
The second is probably one of the best options, but it requires a degree of career flexibility that not everyone has. On the other hand, after doing a few of the most popular ML courses, I still struggle quite a lot to code working solutions in either ML or RL, even though I understand all the math underneath.
Would an ML engineering fellowship be something useful for the community?
I think that working with a small group of colleagues to implement a specially chosen problem or paper, with some supervision available, would be a really good way to learn quickly without getting stuck for long stretches. Of course, I know there are incredible GitHub cowboys who manage to do this entirely alone, but I would personally find such a fellowship very valuable.