PIF

Per Ivar Friborg

179 karma · Joined Apr 2022

Bio

My gratitude for the many wonders of life makes me highly engaged in preserving a prosperous long-term future for all sentient life.

Comments: 18
Topic contributions: 1

I want to add that I think AI safety research aimed at mitigating existential risk has been severely neglected. This suggests that the space of ideas for how to solve the problem remains vastly unexplored, and I don't think you need to be a genius to have a chance of coming up with a smart, low-hanging-fruit solution.

Thanks for the feedback and for sharing Yonadav Shavit's paper!

Thank you for the examples! Could you elaborate on the technical example of breaking down a large model into sub-components, training each sub-component individually, and finally assembling them into a large model? Will such a method realistically be used to train AGI-level systems? I would think the model needs to be sufficiently large during training to learn highly complex functions. Do you have any resources you could share that indicate large models can be successfully trained this way?
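To make sure I'm picturing the setup correctly, here is a minimal sketch of that idea in PyTorch. Everything in it is a hypothetical stand-in: the module shapes, the random data, and the placeholder objective only illustrate "train each piece separately, then assemble", not any method known to be used for frontier-scale training.

```python
import torch
import torch.nn as nn

# Two hypothetical sub-components, each to be trained in isolation.
encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
head = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))

def train_alone(module, in_dim, steps=100):
    """Train one sub-component on its own proxy objective (a placeholder here)."""
    opt = torch.optim.Adam(module.parameters(), lr=1e-3)
    for _ in range(steps):
        x = torch.randn(16, in_dim)      # stand-in data
        loss = module(x).pow(2).mean()   # stand-in objective
        opt.zero_grad()
        loss.backward()
        opt.step()

train_alone(encoder, 32)
train_alone(head, 64)

# Assemble the separately trained pieces into one larger model.
assembled = nn.Sequential(encoder, head)
```

My question above amounts to: can `assembled` match a model trained end-to-end, where gradients shape all components jointly against the final objective?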

Thank you for this feedback, and well put! I've been having somewhat similar thoughts in the back of my mind, and this clarifies many of them.

Personal progress update regarding the cause priority of alternative proteins (resulting from a GFI talk):

Question: Is it worth the EA community trying to accelerate the growth of alt protein production, or should we just allow market forces to move it forward? What are the neglected areas of the alt protein landscape that an EA should get involved in, rather than leaving them to purely profit-motivated agents?

Answer: GFI thinks the market will not solve these problems on its own. A particular case seems to be fundamental scientific research, where the market needs better products but individual actors are not willing to invest in the research themselves.

Update: I initially thought profit-motivated agents would be sufficient to accelerate the growth of alt protein production, but I now doubt that stance and realize there are likely neglected areas within alt proteins where EAs can have a high marginal impact.

Jonas Hallgren shared this Distillation Alignment Practicum with me, which answers all my questions and much more.

What are some bottlenecks in AI safety?

I'm looking for ways to utilize my combined aptitude for community building, research, operations, and entrepreneurship to contribute to AI safety.

Ahh yes, this is also a good question that I don't have a good answer to, so I support your approach of revisiting it over time with new information. With very low confidence, I would expect that as the space grows, there will be more ways to aid AGI alignment indirectly. A broader variety of ways to contribute to AGI alignment would then make it more likely that you find something within that space matching your personal fit. Generally speaking, indirect ways of contributing to a cause include operations, graphic design, project management, software development, and community building. My point is that there are likely many different ways to aid in solving AGI alignment, which increases the chances of finding something you have the proper skills for. Again, I place very low confidence on this since I don't think I have an accurate understanding of the work needed within AGI alignment; this is more meant as an alternative way of thinking about your question.
