Toby Tremlett🔹

Content Manager @ CEA
5246 karma · Working (0-5 years) · Oxford, UK

Bio


Hello! I'm Toby. I'm Content Manager at CEA. I work with the Online Team to make sure the Forum is a great place to discuss doing the most good we can. You'll see me posting a lot, authoring the EA Newsletter and curating Forum Digests, making moderator comments and decisions, and more. 

Before working at CEA, I studied Philosophy at the University of Warwick and worked for a couple of years on a range of writing and editing projects within the EA space. Recently I helped run the Amplify Creative Grants program, which encourages more impactful podcasting and YouTube projects. You can find a bit of my own creative output on my blog and my podcast feed.

How I can help others

Reach out to me if you're worried about your first post, want to double check Forum norms, or are confused or curious about anything relating to the EA Forum.

Sequences (1)

Existential Choices: Reading List

Comments (380)

Topic contributions (72)

I love point 3 "to the extent that I see something as core infrastructure that is a natural near-monopoly (e.g., the Forum, university groups) [...] I think there's an enhanced obligation to share the commons" - that's a good articulation of something I feel about Forum stewardship. 

Thanks for engaging on this as well! I do feel the responsibility involved in setting event topics, and it's great to get constructive criticism like this. 

To respond to the points a bit (this is just my view, written quickly because I've got a busy day today, and I'm happy to come back and clarify or change my mind in another reply):

(a) - Maybe, but the actual content of the events almost always contains some scepticism of the question itself, discussion of adjacent debates, etc. The topic of an event doesn't seem like a useful place to look for evidence about the community's priorities. Also, I generally run events about topics I think people aren't prioritising. That said, this is the point I disagree with the least - I can see that if you're looking at the Forum in a pretty low-res way, or hearing about the event from a friend, you might come away with the impression that 'EA cares about X now'.

(b) - The Forum does appear in EA-critical pieces, but I personally don't think those pieces distinguish much between what one post on the Forum says and what the Forum team puts in a banner (and I don't think readers who lack context would distinguish between those things either). So I don't worry too much about how what I'm saying would look to a very adversarial journalist - there are enough words on the Forum that they can probably find whatever they'd like to find anyway.

To clarify - for readers and adversarial journalists alike - I still follow the rule of "I don't post anything I wouldn't want to see my name attached to in public" (and think others should too), but that's a more general rule, not one just for the Forum.

(c) - I'm sure it isn't the optimal Forum week. However, (1) I do think this topic is important and potentially action-relevant: there is increasing focus on 'AI Safety', but AI Safety is a potentially vast field with a range of challenges that a career or funding could address, and the question this debate asks is an important one to have a take on when you're making those decisions. And (2) I'm pretty bullish on Forum events - I'd like to run more of them and get the community more involved, so suggestions for future events are always welcome.

I think yes, and for all the reasons. I'm a bit sceptical that we can change the values ASIs will have - we don't understand present models that well, and there are good reasons not to treat a model's text outputs as representative of its goals (it could be hallucinating, it could be deceptive, or its outputs might just not be isomorphic to a reward structure).

And even if we could, I don't know of any non-controversial value to instill in an ASI that isn't already included in basic attempts to control the ASI (which I'd be doing mostly for extinction-related reasons).

I've had drafts of this take lying around for years - really glad to see it out in the open! 
I'd love to hear pushback from anyone who thinks it is still valuable. 

I think this is a fair point - but it's not the frame I've been using to consider debate week topics.

My aim has been to generate useful discussion within the effective altruism community. I'd like to choose topics which nudge people to examine assumptions they've been making - topics that might lead them to change their minds, and perhaps their priorities or the focus of their work. I haven't been thinking about debate weeks as a piece of communications work, or as a way of reaching a broader audience. This question in particular was chosen because the Forum audience wouldn't necessarily have cached takes on it - an audience outside the Forum would need a lot of context to get what we're talking about.

Perhaps I'm missing something though - do you think this is more public-facing than I'm assuming? To be clear, I know that it is public, but it isn't directed at an outside audience in the way a book, podcast, or op-ed might be.

Edit: I'm also uncertain about the claim that "there are few interventions that are predictably differentiated along those lines" - I think Forethought would disagree, and though I'm not sure I agree with them, they've thought about it more than I have.

Yeah, I've always been a bit sceptical of this as well. Surely it's just a yardstick a department uses to decide which investments to make, rather than a considered (or even descriptive) "value of a life" for the US government.
Descriptively, the US government would spend far more per life to rescue a few hostages of a foreign adversary, and probably has far lower willingness to pay for some cheap ways it could save lives (I don't know what these are off-hand, but there are probably examples in public health).
Basically, I don't think it's a number that can be meaningfully extrapolated to the value of avoiding extinction or catastrophe: it was designed with far smaller trade-offs in mind and doesn't really make sense outside its intended purpose.

Thanks - yep, I think this is becoming a bit of an issue (it came up a couple of times in the symposium as well). I might edit the footnote to clarify: worlds with morally valuable digital minds should count as non-extinction scenarios, but worlds where humans go extinct and an AI that could be called "intelligent life" but isn't conscious or morally valuable takes over should count as extinction scenarios.

I think the "earth-originating intelligent life" term should probably include something that indicates sentience/ moral value. Perhaps you could read that into "intelligent" but that feels like a stretch. But I didn't want to imply that a world with no humans but many non-conscious AI systems would count as anything but an extinction scenario - that's one of the key extinction scenarios. 

Maybe another way to think about this (dropping the religion stuff - I don't want to cast aspersions on any particular religion) is in terms of black-ball and white-ball ideologies (as in the Bostrom thought experiment, where black balls are technologies that can cause extinction). Perhaps certain ideologies are just much more exclusive and expansion-focused than others - black balls. You can pick out as many white balls as you like, but picking out a black ball means you have to get rid of your white balls. So even if there are few black balls in the bag, given enough draws you'd eventually end up holding one.
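(A toy way to see that last claim: if each draw is independent and a black ball is "sticky" - once you pick one, the white balls stop mattering - then however rare black balls are, repeated draws make holding one close to inevitable. The sketch below is just my own illustration in Python; the ball counts and the draw-with-replacement setup are assumptions for the example, not anything from Bostrom's paper.)

```python
import random

def draws_until_black(num_white: int, num_black: int) -> int:
    """Draw from the bag (with replacement) until a black ball comes up.

    Drawing a black ball is treated as absorbing: once you hold one,
    your white balls no longer matter, so we only count how many
    draws it took to get there.
    """
    p_black = num_black / (num_white + num_black)
    draws = 0
    while True:
        draws += 1
        if random.random() < p_black:
            return draws

# Even with only 1 black ball per 99 white ones, you end up holding
# a black ball after roughly 100 draws on average.
trials = 10_000
average = sum(draws_until_black(99, 1) for _ in range(trials)) / trials
print(f"Average draws before holding a black ball: {average:.1f}")
```

The point is just that rarity doesn't buy much safety once the bad draw is one you can't put back.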
