Welcome!
If you're new to the EA Forum:
- Consider using this thread to introduce yourself!
- You could talk about how you found effective altruism, what causes you work on and care about, or personal details that aren't EA-related at all.
- (You can also put this info into your Forum bio.)
Everyone:
- If you have something to share that doesn't feel like a full post, add it here! (You can also create a Shortform post.)
- You might also share good news, big or small. (See this post for ideas.)
- You can also ask questions about anything that confuses you (and you can answer them, or discuss the answers).
For inspiration, you can see the last open thread here.
Other Forum resources

The privacy concerns seem more realistic. A rogue superintelligence will have no shortage of ideas, so 2 does not seem very important. As for biasing the motivations of the AI: ideally, mechanistic interpretability should get to the point where we can know for a fact what the motivations of any given AI are, so maybe this is not a concern. As for 2a, why worry about a pre-superintelligence going rogue? That would be a hell of a fire alarm, since a pre-superintelligence is beatable.
Something you didn't mention, though: how will you be sure the LLM actually did the task you gave it? These things are not that reliable; you will have to double-check everything for all your use cases, which makes using it somewhat moot.