Calm down. It's a complex situation developing rapidly; let's wait and see what the final outcome is.
I used a model I fine-tuned to generate takes on Effective Altruism.
was unclear. It should be:
I used a model that I fine-tuned, in order to generate takes on Effective Altruism.
This model was not fine-tuned specifically on Effective Altruism content. It was developed to explore the effects of training language models on a Twitter account. I was surprised and concerned when I noticed it could generate remarkable takes on effective altruism, even though the topic was not present in the original dataset. Furthermore, these takes are always critical.
This p...
I used a model I fine-tuned to generate takes on Effective Altruism. The prompt is "effective altruism is." Here are its first three:
...effective altruism is vampirism, except instead of sucking blood you suck hours and happiness from helping people who would otherwise have spent the time improving their lives.
effective altruism is parasitic. it latches onto the success of actual altruism, which is genuine and humanizing, to justify its cold calculations and make them feel virtuous too.
effective altruism is rich kid hobbyism pretending to be a moral imperativ
What roles do different people play in reviewing applications for the fellowship, and who fills those roles?
This is a call for test prompts for GPT-EA (announcement post: https://forum.effectivealtruism.org/posts/AqfWhMvfiakEcpwfv/training-a-gpt-model-on-ea-texts-what-data). I want test cases and interesting prompts you would like to see tried. This helps track and guide the development of GPT-EA versions. The first version, GPT-EA-Forum-v1, has been developed. GPT-EA-Forum-v2 will include more posts and also comments.
One goal is to make it easier to understand Effective Altruism through an interactive model.
I'm sick with COVID right now. I might respond in greater depth when I'm not sick.
Digital humans would be much cheaper to query than biological humans. This is because:
An efficient general intelligence on a biological substrate uses the structure of a brain. It's unclear whether that same structure would be efficient on silicon or photonic processors.
The goal is not to create a model that creates the most good. While aligning an AI with values and principles could be an interesting project, the goal of this project is to create a descriptive model of the EA community, not a normative model of an idealized EA community.
I believe GPT-3 can do more than memorize specific objectives like malaria nets. Infusing principles deeply would need to happen through more sophisticated techniques, probably after fine-tuning.
...upbias (-1, 1) is the Forum editors' or users' perspective on the fraction of upvot
What percentage of the training mix should be the GiveWell blog, and what percentage the 80,000 Hours blog? In other words, how many bytes of blog posts should be used from each, relative to the entire dataset?
What kinds of posts are on each blog? Which best reflects the wider EA community, and which the professional EA community? How can this inform the construction of a dataset?
I also checked, and neither blog exposes a direct view-count measure, so some other proxy metric would need to be used.
Thanks for these sources.
How should the GiveWell blog and the 80,000 Hours blog be weighted against each other? My instinct is to weight by the number of views.
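To make the view-based weighting concrete, here is a minimal sketch. All byte counts and view numbers below are hypothetical placeholders, not real measurements, and the function names are mine, not part of any existing pipeline:

```python
# Sketch: weight each source's share of the training mix by a view proxy.
# Numbers below are hypothetical placeholders, not real measurements.

def mixture_weights(sources):
    """Map {name: (bytes_available, view_proxy)} to {name: fraction of mix}."""
    total_views = sum(views for _, views in sources.values())
    return {name: views / total_views for name, (_, views) in sources.items()}

def bytes_to_sample(sources, dataset_bytes):
    """Bytes to draw from each source for a dataset of a given size,
    capped at the bytes each source actually has available."""
    weights = mixture_weights(sources)
    return {
        name: min(sources[name][0], int(weights[name] * dataset_bytes))
        for name in sources
    }

sources = {
    "givewell_blog": (40_000_000, 120_000),      # (bytes, proxy views)
    "80000_hours_blog": (60_000_000, 180_000),   # hypothetical values
}
print(bytes_to_sample(sources, dataset_bytes=10_000_000))
# → {'givewell_blog': 4000000, '80000_hours_blog': 6000000}
```

If a source has fewer bytes than its weighted share, the cap kicks in and the mix would need renormalizing or upsampling; that choice is left open here.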
Posts/comments in Facebook groups, Slack groups, and Discord groups?
Does the EA community have the norm that these comments are public? I want to make sure the consent of participants is obtained.
The definition of health here should include mental and socioemotional health, since they affect how people reason and relate to each other, respectively.
While they are insolvent, FTX and SBF have not declared bankruptcy. In this developing situation, information is unclear and comes from unverified sources. (Alameda's balance sheet may prove incomplete.)