Toby Tremlett🔹

Senior Content Strategist @ CEA
9316 karma · Joined · Working (0-5 years) · Oxford, UK

Bio

Participation
2

Hello! I'm Toby. I'm the Senior Content Strategist for CEA's Online Team. I work with the team to make sure the Forum is a great place to discuss doing the most good we can. You'll see me posting a lot, authoring the EA Newsletter and curating Forum Digests, making moderator comments and decisions, and more. 

Before working at CEA, I studied Philosophy at the University of Warwick, and worked for a couple of years on a range of writing and editing projects within the EA space. Recently I helped run the Amplify Creative Grants program, to encourage more impactful podcasting and YouTube projects. You can find a bit of my own creative output on my blog, and my podcast feed.

How others can help me

Reach out to me if you're worried about your first post, want to double check Forum norms, or are confused or curious about anything relating to the EA Forum.

How I can help others

Reach out to me if you're worried about your first post, want to double check Forum norms, or are confused or curious about anything relating to the EA Forum.

Sequences
7

In Development Highlight
Better Futures
Best of: AGI & Animals Debate Week
Your most valuable posts of 2025
Best of: Career Conversations Week 2025
Best of: Existential Choices Week
Existential Choices: Reading List

Comments
755

Topic contributions
130

If you avert an extinction-level disaster in 2030 that allows future people in 2100 to live and flourish, but a second disaster (of the same or a similar type) needs to be averted in 2050, how do you avoid double-counting the lives saved?

The short answer is that you discount the value of the future you save based on the expected number of lives in it. 

I.e. if, in a simple case, you know the Sun will turn into a red giant and kill us all in 2100, then averting extinction this year would save tens of billions of lives, but no more. This gets more complicated when we have an X% chance of going extinct by 2100, Y% by 2200, etc...

I'm less sure what to say about Paul's point that saving a life today = many more lives that exist in the future. I'd guess that demographic projections lower the impact of this on the calculation (i.e. we expect a lower population anyway), but I'm not sure. A more general response is that basically everything except preventing extinction washes out in the long-term, so increasing population over the next 100 years would be no exception. 
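The discounting idea above can be sketched as a simple expected-value calculation. This is a minimal illustration, not anyone's actual model: the hazard rates and population figures are made-up assumptions, and the point is only that crediting each intervention with the *marginal* change in expected lives avoids double counting, because the remaining 2050 hazard still discounts the value of averting the 2030 one.

```python
def expected_future_lives(hazards, lives_per_period):
    """Expected lives lived, given per-period extinction probabilities.

    Each period's lives only count if humanity survives every hazard
    up to and including that period.
    """
    survival = 1.0
    total = 0.0
    for hazard, lives in zip(hazards, lives_per_period):
        survival *= (1.0 - hazard)
        total += survival * lives
    return total

# Illustrative assumptions: two hazards (2030 and 2050), each with a 50%
# chance of causing extinction, and 10 billion lives lived per period.
hazards = [0.5, 0.5]
lives = [10e9, 10e9]

baseline = expected_future_lives(hazards, lives)          # 7.5 billion
# Credit for averting the 2030 hazard = marginal change in expected lives,
# holding the 2050 hazard fixed -- so the 2050 risk still discounts the future.
averted_2030 = expected_future_lives([0.0, 0.5], lives)   # 15 billion
credit_2030 = averted_2030 - baseline                     # 7.5 billion, not 20
```

Because the 2050 hazard stays in the calculation, the 2030 intervention is never credited with lives that still depend on the second disaster being averted; summing the two marginal credits recovers the total, with no life counted twice.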

Hmm yes - would it also work if it was a coloured callout you could get used to and ignore? I explicitly want newer users to know what the disclosures mean - i.e. a colour code without any text would be too esoteric. 

And also, AFAIK if you volunteer your ticket is free :)

scaring people into disclosing all forms of LLM usage at the top of essays, which I argue is a bad norm

Yep, that's different. I've only seen one example of this so far, but if it continues it's probably just a design issue we can tweak (i.e. maybe the copy isn't clear enough on the post-page). 

Thanks for writing this Daniel!

It's super interesting - and I'd never heard this take before. 

One question I had while reading was: why is this advice to westerners? I.e. if there is money to be made here, why wouldn't people in the countries we are discussing start their own export businesses?

And if the answer is something like 'people in the west can raise more funding' or 'people in the west have more western connections' would the answer then be to start an organisation to provide these things to entrepreneurs in target countries rather than to start the business yourself? 

You write a sentence or two on this but I'd love to hear more. 

I disagree - disclosure is for the benefit of the reader, not the author[1]. If the reader had to read half a post, or even an entire post, before they were told they were reading LLM-generated text, they might be wasting quite a lot of time and attention. 

We'll see how this shakes out in practice though. If it proves too costly for authors of good quality posts which are LLM-assisted, we can always reconsider. 

  1. ^

    Though we don't want disclosure to be too onerous, which is why it is currently just text rather than the callout boxes LessWrong is using.

Worth mentioning because the policy is so new: your disclosure was interesting but isn't required. Disclosure is only required when your post contains significant LLM-generated text, so you're all good, and can cut it unless you included it by choice. 

"LLM disclosure: I wrote this post myself, then asked an LLM to copy-edit it before posting. I manually made any edits I liked and copy-pasted no text from the LLM (my current practice for using LLMs in writing that I care about)."

Question for @Lauren Gilbert - who are your favourite global health and development authors who you have not yet published? 

Hey Nick, the policy actually wasn't in place when this post was posted, so this would only apply to Arthur's next post. 

FYI, next week we will be highlighting the first batch of articles from In Development, @Lauren Gilbert's new global development magazine

Lauren and most of the authors will be on the Forum to answer your questions throughout the week. More info to come on Monday, but I figured I'd mention in case anyone wanted to read the articles in advance (they are here, and all authors apart from Paul Niehaus will be around to answer questions). 

I'm looking forward to the discussion. 
