Toby Tremlett🔹

Content Strategist @ CEA
8465 karma · Joined · Working (0-5 years) · Oxford, UK

Bio


Hello! I'm Toby. I'm Content Strategist at CEA. I work with the Online Team to make sure the Forum is a great place to discuss doing the most good we can. You'll see me posting a lot, authoring the EA Newsletter and curating Forum Digests, making moderator comments and decisions, and more. 

Before working at CEA, I studied Philosophy at the University of Warwick, and worked for a couple of years on a range of writing and editing projects within the EA space. Recently, I helped run the Amplify Creative Grants program, which encourages more impactful podcasting and YouTube projects. You can find a bit of my own creative output on my blog and my podcast feed.

How I can help others

Reach out to me if you're worried about your first post, want to double check Forum norms, or are confused or curious about anything relating to the EA Forum.

Sequences
4

Your most valuable posts of 2025
Best of: Career Conversations Week 2025
Best of: Existential Choices Week
Existential Choices: Reading List

Comments
668

Topic contributions
121

Thanks for the comments @Clara Torres Latorre 🔸 @NickLaing @Aaron Gertler 🔸 @Ben Stevenson. This is all useful to hear. I should have an update later this month.

Cheekily butting in here to +1 David's point - I don't currently think it's reasonable to assume that there is a relationship between the inner workings of an AI system that might lead to valenced experience and its textual output.

For me, this is based on the idea that when you ask an LLM a question, there isn't a sense in which it 'introspects'. I don't subscribe to the reductive view that LLMs are merely souped-up autocorrect, but the two do have something in common: an LLM role-plays whatever conversation it finds itself in. LLMs have long been capable of role-playing 'I'm conscious, help' conversations, as well as 'I'm just a tool built by OpenAI' conversations. I can't imagine any evidence coming from LLM self-reports which isn't undermined by this fact.

Thanks to everyone who voted for our next debate week topic! Final votes were locked in at 9am this morning. 

We can’t announce a winner immediately, because the highest-karma topic (and perhaps some of the others) touches on issues related to our ‘politics on the EA Forum’ policy. Once we’ve clarified which topics we can run, we’ll announce a winner.

Once we have a winner, I’ll work on honing the exact wording. I’ll write a post with a few options, so that you can have input into the exact version we end up discussing.

PS: Apologies for the delay here — in retrospect, I should have checked on adherence to our policy before allowing voting. In the now very likely event that we cannot have the highest karma discussion on the EA Forum, I’d remind you that this is not the only place for EA-related discussions on the internet — Substack and Twitter do not have our politics policy. 

I curated this because it's a really useful document and a fantastic community service. Thanks, Tristan!

My plan is to make the redrafting public as well - the current idea is that I'll write a post with some options, get comments, and then decide how to phrase the statement based on input.

Maybe - how would that work in a full sentence?
General point - we'd probably also want to change 'urgent' (a bit ambiguous) and 'more traditional longtermist concerns' (which ones?)

Hey Shivani! Welcome to the EA Forum!
I'm Toby, from the EA Forum team. Let me know if you have any questions about the Forum (or EA), are looking for something to read, or would like feedback on a draft.
Cheers, 
Toby

Thanks, great to hear Andrew - welcome to the EA Forum!

Hey Karen Maria, welcome to the EA Forum! Thanks for joining us. 
I'm Toby, from the EA Forum team. If you have any questions, I'm happy to answer them. If you haven't already listened to it, I'd recommend the 80,000 Hours podcast. 
Let me know if you'd like a specific kind of recommendation - I have extensive knowledge of EA Forum posts :)
Cheers, 
Toby

Unlike an asteroid impact, which leaves behind a lifeless or barely habitable world, the AI systems that destroyed humanity would presumably continue to exist and function. These AI systems would likely go on to build their own civilization, and that AI civilization could itself eventually expand outward and colonize the cosmos.

This is by no means certain. We should still be worried about extinction via misuse, which could kill off humans before AI is developed enough to be self-replicating or autonomous - for example, through the development of bioweapons. Yes, it is unlikely that bioweapons would cause extinction, but if they did, no humans would mean no AI (after all the power plants fail). This seems to imply we should move forward with a lot of caution.
