Thank you for the examples! Could you elaborate on the technical example of breaking down a large model into sub-components, training each sub-component individually, and finally assembling them into a large model? Will such a method realistically be used to train AGI-level systems? I would think that the model needs to be sufficiently large during training to learn highly complex functions. Do you have any resources you could share that indicate that large models can be successfully trained this way?
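To make the decompose-train-assemble idea concrete, here is a minimal, hypothetical sketch. It assumes the overall mapping factors through an intermediate representation (y = g(f(x))) and that we have supervision for that intermediate, which lets each sub-component be fit independently before composition. The linear setup and least-squares "training" are stand-ins for illustration only, not a claim about how any real system is trained:

```python
import numpy as np

# Hypothetical toy: the overall mapping x -> y factors as y = g(f(x)).
# Given targets for the intermediate z = f(x), each sub-component can
# be trained separately and then assembled into one pipeline.

rng = np.random.default_rng(0)

# Ground-truth linear sub-components (unknown to the "trainer").
W_f_true = rng.normal(size=(4, 3))   # f: R^4 -> R^3
W_g_true = rng.normal(size=(3, 2))   # g: R^3 -> R^2

X = rng.normal(size=(200, 4))
Z = X @ W_f_true                     # intermediate targets for f
Y = Z @ W_g_true                     # final targets for g

# Train each sub-component independently (least squares here stands in
# for whatever local training procedure one would actually use).
W_f, *_ = np.linalg.lstsq(X, Z, rcond=None)
W_g, *_ = np.linalg.lstsq(Z, Y, rcond=None)

# Assemble: compose the independently trained pieces.
Y_hat = (X @ W_f) @ W_g
print(np.allclose(Y_hat, Y, atol=1e-6))  # True in this linear toy case
```

Note that the toy sidesteps what is arguably the crux of the question above: where the intermediate targets come from. In nonlinear, end-to-end-trained systems the sub-components typically need a joint signal, which is exactly why it is unclear whether purely modular training scales to very large models.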
Thank you for this feedback, and well put! I've been having somewhat similar thoughts in the back of my mind, and this clarifies many of them.
The Whole Brain Emulation Workshop link takes me nowhere:
https://foresight.org/foresight-neurotech-workshop-2023?utm_source=Foresight+Newsletter+Subscribers&utm_campaign=9477bf4b79-EMAIL_CAMPAIGN_2022_11_11_06_12_COPY_01&utm_medium=email&utm_term=0_7c1b7f710b-9477bf4b79-
It says "Page not found".
Seems like the correct link is: https://foresight.org/whole-brain-emulation-workshop-2023/
Personal progress update regarding the cause priority of alternative proteins (resulted from GHI talk):
Question: Is it worth the EA community trying to accelerate the growth of alt protein production, or should we just allow market forces to move it forward? What are the neglected areas of the alt protein landscape that an EA should get involved in, rather than leaving them to purely profit-motivated agents?
Answer: GFI thinks the market will not solve these problems on its own. A particular case of this seems to be fundamental scientific research, where markets need better pro...
Jonas Hallgren shared this Distillation Alignment Practicum with me, which answers all my questions and much more.
What are some bottlenecks in AI safety?
I'm looking for ways to utilize my combined aptitude for community building, research, operations, and entrepreneurship to contribute to AI safety.
Ahh yes, this is also a good question which I don't have a good answer to, so I support your approach of revisiting this question over time with new information. With very low confidence, I would expect that more ways to aid AGI alignment indirectly will emerge as the space grows. A broader variety of ways to contribute to AGI alignment would then make it more likely for you to find something within that space that matches your personal fit. Generally speaking, examples of ways to indirectly contribute to a cause could be something like operatio...
Humans seem to be notoriously bad at predicting what will make us most happy, and we don’t realize how bad we are at it. The typical advice "Pursue your passion" seems like bad advice, since our passion often develops in parallel with other more tangible factors being fulfilled. I think 80,000 Hours' literature review on "What makes for a dream job" will help you tremendously in better assessing whether you would enjoy a career in AI alignment.
Great question! While expected tangible rewards (e.g. prizes) undermine autonomous motivation, unexpected rewards don't undermine autonomous motivation, and verbal rewards generally enhance autonomous motivation (Deci et al., 2001). Let's break it down into its components:
Our behavior is often controlled by the rewards we expect to obtain if we behave in certain desirable ways, such as engaging with work, performing well on a task, or completing an assignment. Conversely, we do not experience unexpected rewards as controlling, since we cannot foresee what behavior wil...
Thanks for the post, Jonathan! I think this can be a good starting point for discussions around spreading longtermism. Personally, I like the use of "low-key longtermism" for internal use between people who are already familiar with longtermism, but I wouldn't use it for mass outreach purposes. This is because the mentioned info-hazard risk seems to outweigh the potential benefits of using the term longtermism. Also, since the term doesn't add any information value for people who don't already know what it is, I am even more certain that it's b...
This made me incredibly excited about distilling research! However, I don't really know where to find research that would be valuable to distill. Could you give me some general pointers to help me get started? Also, do you have examples of great distillations that I can use as my benchmark? I'm fairly new to technical AI since I've been majoring in Chemistry for the last three years; however, I'm determined to upskill in AI quickly, and distilling seems like a great challenge to boost my learning process while being impactful.
Thanks for sharing Akash! This will be helpful when I start getting in touch with AI safety researchers after upskilling in basic ML and neural networks.
Thank you so much for taking the time to write this! As someone who's seriously considering leaving their unfinished major in Chemistry behind to pursue AI alignment work, I can't emphasize enough how much I appreciate this guide.
While I'm at it, I might as well share with you a suggestion I have made to Lizka about "[...] making a library of student projects at the EA Forum". This suggestion resulted from the post-EAG London 2022 GCP group organizers summit, where a bunch of group organizers expressed interest in making a library of student projects. The rationale behind this is that more and more student groups are transitioning to an engagement-driven model using project work as a funnel for engagement with EA. The success of engagement-driven groups is dependent on having promising...
I'm glad to see that more people raise these points, and thank you for writing about them! I've been thinking about these things for over a year now, and I am in the process of writing two forum posts that will cover most of these points. The first post is about engaging students through projects focused on developing competence and planning their career. This post will likely be published within a week from now. The second post is about a model of engagement-driven student groups especially tailored towards giving students opportunities to do good during ...
I want to add that I think AI safety research with the intention of mitigating existential risk has been severely neglected. This suggests that the space of ideas for how to solve the problem remains vastly unexplored, and I don't think you need to be a genius to have a chance of coming up with a smart, low-hanging-fruit solution to the problem.