(context: I've recently started as a research engineer (RE) on DeepMind's alignment team. All opinions are my own)
Hi, first off, it's really amazing that you are looking into changing your career to help reduce x-risks from AI.
I'll give my perspective on your questions.
1.
a. All of this is on a spectrum, but: there is front-end engineering, which to my knowledge is mostly used to build human-feedback interfaces or general dialogue chats like ChatGPT.
Then there's research engineering, which I'd roughly sort into two categories. One is more low-level machine ...
Is there a way to do this post-hoc, while keeping the comments intact? Otherwise, I'd just leave it as is, now that it has already received answers.
Oops! I was under the impression that I had done that when clicking on "New question". But maybe something somewhere went wrong on my end when switching interfaces :D
Nothing to add -- just wanted to explicitly say I appreciate a lot that you took the time to write the comment I was too lazy to.
I can support the last point for Germany at least. There's relatively little stratification among universities. It's mostly about which subject you want to study, with popular subjects like medicine requiring straight A's at basically every university. However, you can get into a STEM program at the top universities without being in the top 10% at the high-school level.
My intuition is that this is quite relevant if the goal is to appeal to a wider audience. Not everyone, and not even most people, are drawn in by purely written fiction.
I think it is quite relevant.
However, if FLI is publishing a group of these possible worlds, they may want to consider outsourcing the media piece to someone else. It:
a) links/brands the stories together (like how in a book of related short stories, a single illustrator is likely used)
b) makes it easier for lone EAs to contribute using a skill that is more common in the community.
There is much that could be said in response to this.
The tone of your comment is not very constructive. I get that you're upset, but I would love it if we could aim for a higher standard on this platform.
The EA community is not a monolithic super-agent that has perfect control over what all its parts do -- far from it. That is actually one of the strengths of the community (and some might even say that we give too much credit to orthodoxy). So even if everyone on this forum or in the wider community agreed that this was a stupid idea, we could still
Should I reapply if I already filled in the interest form earlier? I notice that the application form has been slightly updated.
Could you expand on the planned GCR institute in India? How certain is it that it will be established? What exactly will be its focus or research agenda?
There are still spots left for the upcoming webinar with Aron and David.
If you're considering joining the presentation, we encourage you to sign up.
If you can't make it for the live event but have questions for David or Aron, please consider filling in the application form with a comment to that effect. We will publicly post the curated answers to all the questions asked after the webinar.
I don't have a great model of the constraints, but my guess is that we're mostly talent- and mentoring-constrained: we need to make more research progress, but we also don't have enough mentoring to upskill new researchers (programs like SERI MATS are trying to change this, though). We also need to be able to translate that progress into actual systems, so buy-in from the biggest players seems crucial.
I agree that most safety work isn't monetizable. Some things are, e.g. making a nicer chat bot, but it's questionable whether that actually reduces X...