Elisabeth Rieger

12 karma · Joined Nov 2022



I am currently co-organiser of EA St Andrews and am studying Philosophy and Economics.

Otherwise, I am very much focused on personal development, object-level knowledge, and testing personal fit. Right now, I am volunteering for the Shrimp Welfare Project and am part of a research project within the Oxford Biosecurity Group.

In my free time, I really enjoy sports, going out, and music :))

How others can help me

  • internship/volunteering/shorter research opportunities at an entry level for numerous career paths

How I can help others

  • community building


Hey, thank you for this post!

It seems very plausible that avoiding speciesism (on the grounds of intelligence and other factors) in AI, and reducing it in humans, should be a priority.
Out of curiosity, it seems important to identify what our current species bias on the grounds of intelligence looks like:

a) 'any species with an average intelligence that is lower than average human intelligence is viewed to be morally less significant.'

b) 'any species that is (on average) less intelligent than one's own species, is morally less significant.'

If (a), would this imply that AI would not harm us based on speciesism on the grounds of inferior intelligence? 
Would love to hear people's thoughts on this (although I realise that such a discussion, in a general context, might be a distraction from the most important aspect to focus on: avoiding speciesism).

Yes, it's super interesting to see the transition in @tobyj's thoughts on AI!

I wonder how much time it takes for the average EA without a technical background or an AI-related job to fully wrap their mind around the complexities of AI (given that there are now many more resources and discussions on this topic).

Obviously, there are many factors playing into this, but I would love to hear some rough estimates about this :))

This is super insightful and definitely sounds highly valuable to do in order to make decisions with higher credence. Thank you!
I am wondering what people's thoughts on the following are:

  • Is there a rough estimate of how much time should be spent on such an evaluation, and of how many topics of exploration to choose (without being overly ambitious or losing track of the ultimate goal)? I assume that, given the ‘Lessons learned’, there is a chance that the ~40 hours @Evan LaForge spent on 12 individual points/questions might not be what could be recommended…
    • note: I appreciate that this is highly dependent on the individual and understand if it is hard to give a specific answer or one at all :)
  • Could it be the case that such a moral evaluation process is only useful once one has sufficient object-level knowledge of a wider range of topics relevant to EA and high-impact choices? Maybe there is even a minimum set of things/courses/readings one should have completed before such a project can be done successfully and effectively.

TLDR: current undergraduate student looking for work experience in EA (-related) jobs; operations, communications, research

Skills & background: experience in EA community building and operations; volunteering for the Shrimp Welfare Project; participant in an Oxford Biosecurity Group research project; helping to organise EAGxLondon (admissions, marketing, production); interested in research and/or operations and open to any new experience; excellent academic background; stronger involvement with EA since summer 2023

Location/remote: flexible/no strong preference; if in person then preferably in Germany, the UK, or neighbouring countries

Availability & type of work: full-time internship (or volunteering) between mid-May and beginning of September 2024


Email/contact: Slack; DM on the Forum

Other notes: I would like to upskill and gain valuable experience in order to make considered, high-impact career choices, and I am simultaneously looking to work for a high-impact org/employer