All of Frederik's Comments + Replies

I don't have a great model of the constraints, but my guess is that we're mostly talent- and mentoring-constrained: we need to make more research progress, but we also don't have enough mentoring to upskill new researchers (programs like SERI MATS are trying to change this, though). We also need to be able to translate that progress into actual systems, so buy-in from the biggest players seems crucial.

I agree that most safety work isn't monetizable. Some things are, e.g. in order to make a nicer chatbot, but it's questionable whether that actually reduces X... (read more)

(context: I've recently started as a research engineer (RE) on DeepMind's alignment team. All opinions are my own)

Hi, first off, it's really amazing that you are looking into changing your career to help reduce x-risks from AI.

I'll give my perspective on your questions.

1.

a. All of this is on a spectrum, but: there is front-end engineering, which to my knowledge is mostly used to build human-feedback interfaces or general dialogue chats like ChatGPT.

Then there's research engineering, which I'd roughly sort into two categories. One is more low-level machine ... (read more)

2
mmKALLL
1y
Thank you for taking the time to provide your thoughts in detail; it was extremely helpful for understanding the field better. It also helped me pinpoint some options for the next steps. For now, I'm relearning ML/DL and have decided to sign up for the Introduction to ML Safety course. I had a few follow-up questions, if you don't mind:

That seems like a reasonable explanation. My impression was that the field was very talent-constrained, but you make it seem like neither talent nor funding is the bottleneck. What do you think are the current constraints for AI safety work?

My confusion about the funding/private-company situation stems from my (likely incorrect) assumption that AI safety solutions are not very monetizable. How would a private company focusing primarily on AI safety make a steady profit?

I currently view OpenAI and DeepMind more like AI product companies, with "ethics and safety as considerations but not as the focus." Does this seem accurate to you? Do engineers (in general) have the option to focus mainly on safety-related projects at these companies?

Separately, I'd also like to wish the DeepMind alignment team the best of luck! I was surprised to find that it has grown much more than I thought in recent years.

Is there a way to do this post hoc, while keeping the comments intact? Otherwise, I'd just leave it as is, now that it has already received answers.

Oops! I was under the impression that I had done that when clicking on "New question". But maybe something somewhere went wrong on my end when switching interfaces :D

Nothing to add -- just wanted to say explicitly that I really appreciate you taking the time to write the comment I was too lazy to.

I can confirm the last point for Germany at least. There's relatively little stratification among universities. It's mostly about which subject you want to study, with popular subjects like medicine requiring straight A's at basically every university. However, you can get into a STEM program at the top universities without being in the top 10% at the high-school level.

Oh yes, I agree. I think that'd be a wonderful addition and would lower the barrier to entry!

My intuition is that this is quite relevant if the goal is to appeal to a wider audience. Not everyone, or even most people, is drawn in by purely written fiction.

I think it is quite relevant.

However, if FLI is publishing a group of these possible worlds, they may want to consider outsourcing the media piece to someone else. Doing so:

a) links/brands the stories together (like how in a book of related short stories, a single illustrator is likely used)

b) makes it easier for lone EAs to contribute using the skill that is more common in the community.

There is much that could be said in response to this.

  1. The tone of your comment is not very constructive. I get that you're upset, but I would love it if we could aim for a higher standard on this platform.

  2. The EA community is not a monolithic super-agent with perfect control over what all its parts do -- far from it. That is actually one of the strengths of the community (and some might even say that we give too much credit to orthodoxy). So even if everyone on this forum or in the wider community did agree that this was a stupid idea, we could still

... (read more)

Should I reapply if I already filled in the interest form earlier? I notice that the application form is slightly updated.

5
Buck
2y
No, the previous application will work fine. Thanks for applying :)

Could you expand on the planned GCR institute in India? How certain is it that it will be established? What exactly will be its focus or research agenda?

6
AronM
4y
We can't give a public statement yet; we are expecting one on December 13. The intention is for the institute to cover GCRs, x-risks, and futurology/foresight. As soon as we have something to publish, I will update this comment and report accordingly.

Thanks for this post!

If any reader of this post would like to ask Aron or David questions about ALLFED's work personally, we still have some spots left in our webinar next Tuesday. Simply apply via the form in the linked post.

There are still spots left for the upcoming webinar with Aron and David.

If you are considering joining the presentation, we encourage you to sign up.

If you can't make it to the live event but have questions for David or Aron, please consider filling in the application form with a comment to that effect. After the webinar, we will publicly post curated answers to all the questions asked.