Welcome!
If you're new to the EA Forum:
- Consider using this thread to introduce yourself!
- You could talk about how you found effective altruism, what causes you work on and care about, or personal details that aren't EA-related at all.
- (You can also put this info into your Forum bio.)
Everyone:
- If you have something to share that doesn't feel like a full post, add it here! (You can also create a Shortform post.)
- You might also share good news, big or small. (See this post for ideas.)
- You can also ask questions about anything that confuses you (and you can answer them, or discuss the answers).
For inspiration, you can see the last open thread here.

Hi everyone,
In this recent critique of EA (~35 mins), Erik Hoel claims that EA is sympathetic towards letting AGI develop because of the potential for billions of happy AIs. He claims this influences EA funding to favor alignment work over efforts to prevent or delay AGI (such as through regulation).
Is this true, or is it a misrepresentation of why EA funding goes towards alignment? For example, perhaps it is instead because EAs think AGI is inevitable, or too difficult to delay or prevent?
Thanks very much!
Lucas
Interesting, thanks both!