Working (0-5 years experience)

I'm an Atlas Fellow '22. I have an interest in large language models.

How others can help me

I'm looking for grants, opportunities, and learning related to:

  • improving humanity's long-term future through enhancing human cognition and communication (communication is just collective cognition)
  • improving global mental and socioemotional health
  • grants and fellowships eligible for those under 20 that don't require a PhD
  • empirical alignment of transformative AI

I'm open to pursuing an AI degree at a school that allows me to develop my own curriculum, test out of some classes, and attend part time.

Topic Contributions


How to become more agentic, by GPT-EA-Forum-v1

One goal is to make it easier to understand Effective Altruism through an interactive model.

I'm sick with COVID right now. I might respond in greater depth when I'm not sick.

Digital people could make AI safer

Digital humans would be much cheaper to query than biological humans. One reason:

An efficient general intelligence on a biological substrate uses a brain structure. It's unclear whether that same structure would be efficient on silicon or photonic processors.

Training a GPT model on EA texts: what data?
  • A book on ethics seems worth considering. Can you tell me more about how the ideas relate to EA? Nonetheless, these are useful sources for future projects regarding AI alignment.
  • Is it only about utilitarianism? If so, the rest of the training set should already have a sufficient degree of utilitarian bias.
  • How influential are FHI's texts in the EA community?
  • This seems like a good text to make the model more generally coherent.
Training a GPT model on EA texts: what data?

The goal is not to create a model that does the most good. While aligning an AI with values and principles could be an interesting project, the goal here is to create a descriptive model of the EA community, not a normative model of an idealized EA community.

I believe GPT-3 can do more than memorize specific objectives like malaria nets. Infusing principles deeply would require more sophisticated techniques, probably applied after fine-tuning.

upbias ∈ (-1, 1) is the Forum editors' or users' estimate of the fraction of a post's upvotes that were motivated by fear, other negative emotions, or limited critical thinking, rather than by the post's merits.

How do I calculate upbias?
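One possible operationalization, purely as a sketch: elicit per-rater estimates in (-1, 1) for a post and average them. The function name, aggregation rule, and sign convention (positive = upvotes inflated by bias, negative = suppressed) are assumptions of mine, not a settled definition.

```python
def upbias(rater_estimates):
    """Average per-rater upbias estimates for one post.

    Each estimate is in (-1, 1): positive means the rater believes that
    fraction of upvotes came from fear, other negative emotions, or
    limited critical thinking; negative means the post was under-upvoted
    for those reasons. Averaging is one assumed aggregation rule.
    """
    if not rater_estimates:
        return 0.0  # no raters: assume no measurable bias
    return sum(rater_estimates) / len(rater_estimates)

# Three hypothetical raters' estimates for a single post:
print(upbias([0.2, 0.4, -0.1]))
```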

Thank you for the books to use in the dataset. I will review each of them.

The original GPT-3 was trained largely on a web crawl known as Common Crawl. Users on the internet tend to optimize for attention. Unlike GPT-3, GPT-J's training set is around one-third academic sources.

The SSC blog includes posts like Meditations on Moloch or the review of Seeing Like a State. These seem like perspectives important to the EA community. Are you suggesting I include posts based on whether they're linked from the EA Forum frequently?

I'll try to crawl the EA Funds' grant program as well.

Training a GPT model on EA texts: what data?

What percentage of the training mix should come from the GiveWell blog, and what percentage from the 80,000 Hours blog? In other words, how many bytes of blog posts should be used from each, relative to the entire dataset?

What kinds of posts are on each blog? Which best reflects the wider EA community, and which reflects the professional EA community? How can this be used to construct a dataset?

I also checked, and neither blog exposes a direct view-count measure, so some other proxy metric would need to be used.
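The bytes-per-source arithmetic above can be sketched directly: given a total dataset size in bytes and a target fraction per source, each source's byte budget is the product. The source names, total size, and fractions below are illustrative placeholders, not decisions from this project.

```python
def byte_budgets(total_bytes, fractions):
    """Split a total byte budget across sources by target mix fraction.

    `fractions` maps source name -> fraction of the mix; the fractions
    are expected to sum to 1. Returns whole-byte budgets per source.
    """
    assert abs(sum(fractions.values()) - 1.0) < 1e-9, "fractions must sum to 1"
    return {name: int(total_bytes * frac) for name, frac in fractions.items()}

# Hypothetical 10 MB blog slice of the dataset, split 40/60:
mix = byte_budgets(10_000_000, {"givewell_blog": 0.4, "80k_blog": 0.6})
print(mix)  # {'givewell_blog': 4000000, '80k_blog': 6000000}
```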

Training a GPT model on EA texts: what data?

Thanks for these sources.

How should the GiveWell blog and the 80,000 Hours blog be weighted against each other? My instinct is to weight by the number of views.
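Weighting by views reduces to normalizing whatever proxy counts are available into mix fractions. A minimal sketch, assuming per-blog view counts from some proxy metric (the numbers here are made up):

```python
def view_weights(view_counts):
    """Normalize raw per-source view counts into dataset mix fractions."""
    total = sum(view_counts.values())
    return {name: count / total for name, count in view_counts.items()}

# Hypothetical proxy view counts:
print(view_weights({"givewell_blog": 250_000, "80k_blog": 750_000}))
# {'givewell_blog': 0.25, '80k_blog': 0.75}
```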

Posts/comments in Facebook groups, Slack groups, and Discord groups?

Does the EA community have the norm that these comments are public? I want to make sure the consent of participants is obtained.

Unflattering reasons why I'm attracted to EA

This is a list of EA biases to be aware of and account for.
