Toby Tremlett

Content Manager @ CEA
2394 karma · Joined · Working (0-5 years) · Oxford, UK

Bio

Participation
2

Hello! I'm Toby. I'm Content Manager at CEA. I work with the Online Team to make sure the Forum is a great place to discuss doing the most good we can. You'll see me posting a lot, authoring the EA Newsletter and curating Forum Digests, making moderator comments and decisions, and more. 

Before working at CEA, I studied Philosophy at the University of Warwick, and worked for a couple of years on a range of writing and editing projects within the EA space. Recently I helped run the Amplify Creative Grants program, to encourage more impactful podcasting and YouTube projects. You can find a bit of my own creative output on my blog, and my podcast feed.

How I can help others

Reach out to me if you're worried about your first post, want to double check Forum norms, or are confused or curious about anything relating to the EA Forum.

Comments
178

Topic contributions
44

Hi Ulf! Welcome to the Forum, and thanks for the anecdote :) If you have any questions about using the Forum or customising it to suit your interests, feel free to message me (click on my profile, then click "Message").
Cheers, 
Toby (Content Manager for the EA Forum)

I'd love to hear more from the disagree reactors. They should feel very free to dm. 
I'm excited to experiment more with interactive features in the future, so critiques are especially useful now!

Interesting. Certainty could also be a Y-axis, but I think that trades off against simplicity for a banner. 

Thanks Brad, I didn't foresee that! (Agree react Brad's comment if you experienced the same thing).
Would it have helped if we had marked increments along the slider? Like the below but prettier? (our designer is on holiday)

You could say something about memetics: that it's the most understandable memes that get passed down, rather than the truth. That's fair, to some extent. But I guess I'm a believer that the world can be updated based on expert opinion. 

I think this is a good description of the kind of scepticism I'm attracted to, perhaps to an irrational degree. Thanks for describing it!

I like your point about AI Safety. It seems at least a bit true. 

I'll update my vote on the banner to be a bit less sceptical. My scepticism about whether we can ever know if an AI is conscious is a major part of my disagreement with the debate statement, and I don't endorse the level of scepticism I hold. Thanks!

Thanks Nathan! People seem to like it so we might use it again in the future. If you or anyone else has feedback that might improve the next iteration of it, please let us know! You can comment here or just dm. 

I don't have an example in mind exactly, but I'd expect you could find one in animal welfare. Where there are agricultural interests pushing against a decision, you need a public campaign to counter them. We don't live in technocracies: representatives need to be shown that there is a commensurate interest in favour of the animals. On less important issues, or legislation which can be symbolic but isn't expected to be used, experts can have more of a role. I'd expect that the former category is the more important one for digital minds. Does that make sense? I'm aware it's a bit too stark a dichotomy to be true. 

I'm quite excited for this week: it's a topic I'm very interested in, but also one I feel I can't really talk about much, or take seriously, because it's a bit fringe. So thank you for having it!

Thanks! I'm also excited about this week. It's really cool to see how many people have already voted; it goes well beyond my expectations. 

You don't have to convince the general public; you have to convince the major stakeholders of tests that check for AI consciousness. It honestly seems kind of similar to what we have done for the safety of AI models but instead for the consciousness of them?

I think this is a great point, and it might change my mind. However, if these consciousness evals become burdensome for AI companies, I'd imagine we would need a public push in support of them for them to be enforced, especially through legislation. Then we get back to my dichotomy: if people think AI is obviously conscious (whether or not it is), we might get legislation; if they don't, I can only imagine some companies running evals half-heartedly or voluntarily until it becomes too costly (as is, arguably, the current state of safety evals). 

Hi Will, thanks for joining the Forum! Glad to have you here. Let me know if you'd like any tips setting up the Forum to see more of the kind of content you're interested in (for example, you can set filters on the homepage to filter out content you aren't interested in seeing). 
Cheers, 

Toby (Content Manager for the EA Forum)
