Per Ivar Friborg

125 · Trondheim, Norway · Joined Apr 2022

Bio

My gratitude for the many wonders of life makes me highly engaged in preserving a prosperous long-term future for all sentient life. My biggest concern is the complete elimination of humanity's future potential by misaligned superintelligence. I've been studying Chemistry for the past three years, and I now intend to pivot my career towards mitigating AI risk, either directly or indirectly. Among other things, I have aptitudes suited to research, community building, entrepreneurship, and operations. I am currently the community manager of EA NTNU, the local university group at the Norwegian University of Science and Technology. I have been part of this group for three years and have spent much time thinking about the best strategy for EA community building at universities.

Comments
13

Topic Contributions
1

Personal progress update regarding the cause priority of alternative proteins (resulting from a GFI talk):

Question: Is it worth the EA community trying to accelerate the growth of alt-protein production, or should we just let market forces move it forward? What are the neglected areas of the alt-protein landscape where EAs should get involved instead of purely profit-motivated agents?

Answer: GFI thinks the market will not solve these problems on its own. A particular case of this seems to be fundamental scientific research, where markets need better products but are not willing to invest in the research themselves.

Update: I initially thought profit-motivated agents would be sufficient to accelerate the growth of alt-protein production, but I now doubt that stance and realize that there are likely neglected areas within alt proteins where EAs can have a high marginal impact.
 

Jonas Hallgren shared this Distillation Alignment Practicum with me, which answers all my questions and much more.

What are some bottlenecks in AI safety?

I'm looking for ways to use my combined aptitude for community building, research, operations, and entrepreneurship to contribute to AI safety.

Ahh yes, this is also a good question that I don't have a good answer to, so I support your approach of revisiting it over time with new information. With very low confidence, I would expect there to be more ways to aid AGI alignment indirectly as the field grows. A broader variety of ways to contribute to AGI alignment would then make it more likely that you find something within that space matching your personal fit. Generally speaking, indirect ways to contribute to a cause include operations, graphic design, project management, software development, and community building. My point is that there are likely many different ways to aid in solving AGI alignment, which increases the chances of finding something you have the right skills for. Again, I place very low confidence on this since I don't think I have an accurate understanding of the work needed within AGI alignment at all. This is more meant as an alternative way of thinking about your question.

Humans seem to be notoriously bad at predicting what will make us most happy, and we don't realize how bad we are at it. The typical advice to "pursue your passion" seems like bad advice, since our passion often develops in parallel with other, more tangible factors being fulfilled. I think 80,000 Hours' literature review on "What makes for a dream job" will help you tremendously in assessing whether you would enjoy a career in AI alignment.

Great question! While expected tangible rewards (e.g. prizes) undermine autonomous motivation, unexpected rewards don't, and verbal rewards generally enhance autonomous motivation (Deci et al., 2001). Let's break it down into its components:

Our behavior is often controlled by the rewards we expect to obtain if we behave in certain desirable ways, such as engaging with work, performing well on a task, or completing an assignment. Conversely, we do not experience unexpected rewards as controlling, since we cannot foresee what behavior will lead to the unexpected outcome. Verbal rewards are often experienced as unexpected and may enhance perceived competence, which in turn enhances autonomous motivation. That being said, if a verbal reward is given in a context where people feel pressured by it to think, feel, or behave in particular ways (e.g. controlling praise), it will typically undermine autonomous motivation.

I therefore think that thanking volunteers for the work they are doing is unproblematic, and if some informational value is included, it will enhance autonomous motivation via competence support (e.g. at an EAG event: "Thank you for doing a good job welcoming the event speakers. We received feedback that they felt relaxed during their stay in the green room and that they were impressed by the punctuality of you volunteers.").

Assuming that engagement in writing competitions with financial incentives is driven by the expectation of a tangible external reward, I would expect such competitions to undermine autonomous motivation unless the rewards are well internalized. The same applies to gift cards and job certificates. Whether we need financial rewards or not is a tough question I do not have a good answer to. I believe it is a trade-off between short-term and long-term impact, where financial rewards may improve the outcome of a specific activity, such as a writing contest, but lead to lower-quality outcomes in the long run because people no longer engage in those activities voluntarily due to low autonomous motivation.

Thanks for the post, Jonathan! I think this can be a good starting point for discussions around spreading longtermism. Personally, I like the use of "low-key longtermism" internally, between people who are already familiar with longtermism, but I wouldn't use it for mass outreach. This is because the mentioned risk posed by info-hazards seems to outweigh the potential benefits of using the term "longtermism". Also, since the term doesn't add any informational value for people who don't already know what it means, I am even more certain that it's best to leave it behind when doing mass outreach. This post also shows some great examples of how the message of longtermism can be warped and misunderstood as a secular cult, adding another element of concern for longtermism outreach: How EA is perceived is crucial to its future trajectory - EA Forum (effectivealtruism.org).
 

In short, I favor low-key longtermism outreach as long as the term "longtermism" itself is excluded.

This made me incredibly excited about distilling research! However, I don't really know where to find research that would be valuable to distill. Could you give me some general pointers to help me get started? Also, do you have examples of great distillations that I can use as a benchmark? I'm fairly new to technical AI since I've been majoring in Chemistry for the last three years, but I'm determined to upskill in AI quickly, and distilling seems like a great challenge to boost my learning process while being impactful.

Thanks for sharing, Akash! This will be helpful when I start getting in touch with AI safety researchers after upskilling in basic ML and neural networks.
