Hide table of contents

Happy to write my first post on the EA Forum and to share with you the EA Explorer GPT. This tool is made to explore the nuances of effective altruism philosophy and the ecosystem. 

While it may NOT be accurate in its inference, it still may be a useful tool to start your research in specific cause areas, papers, or even moral dilemmas. 

I believe the development of AI systems and alignment research (EAs in particular) will become more interconnected. This GPT may be one tiny step toward that vision. 

In particular, the bot is instructed to: 

  • Promote respectful and constructive dialogue. I tried to make it less radical and aware of the complexity of the EA. 
  • Be relevant to EA. I asked it to consider papers related to moral uncertainty, QALY, longtermism and make distinctions between existential/suffering/catastrophe risks. 
  • Browse the Internet. I asked it to browse the web when the most recent information is requested. 

Also, I am a part of a small local EA community, where we formed our position on one of the most pressing world problems. Added our position to instructions.

The chatbot is made with the OpenAI GPT feature. Please note that you need a ChatGPT subscription. The data is processed and accessed only by OpenAI (I won’t see your prompts).

Link to the EA Explorer - https://chat.openai.com/g/g-SqNYVhz3b-ea-explorer

Your Participation and Feedback:

I would love for you to interact with EA Explorer and share your experiences. Your feedback (and criticism) will be invaluable in assessing its utility and guiding its future development! 


The post picture was made with Dall-E.




Sorted by Click to highlight new comments since:

I haven't tried this, but I'm excited about the idea! Effective Altruism as an idea seems unusually difficult to communicate faithfully, and creating a GPT that can be probed on various details and correct misconceptions seems like a great way to increase communication fidelity.

Curated and popular this week
Relevant opportunities