Welcome!
If you're new to the EA Forum:
- Consider using this thread to introduce yourself!
- You could talk about how you found effective altruism, what causes you work on and care about, or personal details that aren't EA-related at all.
- (You can also put this info into your Forum bio.)
Everyone:
- If you have something to share that doesn't feel like a full post, add it here! (You can also create a Shortform post.)
- You might also share good news, big or small. (See this post for ideas.)
- You can also ask questions about anything that confuses you (and you can answer them, or discuss the answers).
For inspiration, you can see the last open thread here.
Other Forum resources

I feel confused about how dangerous/costly it is to use LLMs on private documents or thoughts to assist longtermist research, given that what I send may wind up in the training data for future iterations of LLMs. Some sample use cases I'd be worried about:
I'm worried about using LLMs for the following reasons:
I'm confused about whether these are actually significant concerns or pretty minor in the grand scheme of things. Advice, guidance, or further considerations would be highly appreciated!
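For concreteness, here is a minimal sketch (in Python) of one mitigation I have in mind for the privacy side: scrubbing obvious identifiers from a document locally before any of it is sent to a hosted model. The patterns and names below are purely illustrative assumptions, not a vetted PII filter:

```python
import re

# A minimal, illustrative sketch (not a vetted PII filter): scrub obvious
# identifiers from text locally before it is ever sent to a hosted LLM.
# All patterns and the example names below are assumptions for illustration.

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),   # email-like strings
    (re.compile(r"\+?\d[\d\s().-]{7,}\d"), "[PHONE]"),      # phone-like numbers
    (re.compile(r"\b(?:Alice|Bob) [A-Z]\w+"), "[NAME]"),    # toy name list
]

def redact(text: str) -> str:
    """Replace matched identifiers with neutral tags before any API call."""
    for pattern, tag in REDACTIONS:
        text = pattern.sub(tag, text)
    return text

print(redact("Contact Alice Smith at alice@example.org or +1 (555) 010-7788."))
# -> Contact [NAME] at [EMAIL] or [PHONE].
```

Of course, this only addresses leakage of identifiers, not the broader question of whether the substance of the research itself should be shared with a model at all.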
The privacy concerns seem more realistic. A rogue superintelligence will have no shortage of ideas, so 2 does not seem very important. As for biasing the motivations of the AI, well, ideally mechanistic interpretability will get to the point where we can know for a fact what any given AI's motivations are, so maybe that is not a concern either. As for 2a, why are you worried about a pre-superintelligence going rogue? That would be a hell of a fire alarm, since a pre-superintelligence is beatable.
Something you didn't mention, though: how will you be sure the L...