Hello! I'm Toby. I'm a Content Strategist at CEA. I work with the Online Team to make sure the Forum is a great place to discuss doing the most good we can. You'll see me posting a lot, authoring the EA Newsletter and curating Forum Digests, making moderator comments and decisions, and more.
Before working at CEA, I studied Philosophy at the University of Warwick and worked for a couple of years on a range of writing and editing projects within the EA space. Recently I helped run the Amplify Creative Grants program, which encourages more impactful podcasting and YouTube projects. You can find a bit of my own creative output on my blog and my podcast feed.
Reach out to me if you're worried about your first post, want to double check Forum norms, or are confused or curious about anything relating to the EA Forum.
Cheekily butting in here to +1 David's point - I don't currently think it's reasonable to assume that there is a relationship between the inner workings of an AI system that might give rise to valenced experience and its textual output.
For me, this is based on the idea that when you ask an LLM a question, there isn't a sense in which it 'introspects'. I don't subscribe to the reductive view that LLMs are merely souped-up autocorrect, but the two do have something in common. An LLM role-plays whatever conversation it finds itself in. LLMs have long been capable of role-playing 'I'm conscious, help' conversations, as well as 'I'm just a tool built by OpenAI' conversations. I can't imagine any evidence coming from LLM self-reports which isn't undermined by this fact.
Thanks to everyone who voted for our next debate week topic! Final votes were locked in at 9am this morning.
We can't announce a winner immediately, because the highest-karma topic (and perhaps some of the others) touches on issues related to our policy on politics on the EA Forum. Once we've clarified which topics we would be able to run, we'll be able to announce a winner.
Once we have, I'll work on honing the exact wording. I'll write a post with a few options, so that you can have input into the exact version we end up discussing.
PS: Apologies for the delay here. In retrospect, I should have checked on adherence to our policy before allowing voting. In the now very likely event that we cannot have the highest-karma discussion on the EA Forum, I'd remind you that this is not the only place for EA-related discussions on the internet: Substack and Twitter do not have our politics policy.
Hey Karen Maria, welcome to the EA Forum! Thanks for joining us.
I'm Toby, from the EA Forum team. If you have any questions, I'm happy to answer them. If you haven't already listened to it, I'd recommend the 80,000 Hours podcast.
Let me know if you'd like a specific kind of recommendation; I have extensive knowledge of EA Forum posts :)
Cheers,
Toby
Unlike an asteroid impact, which leaves behind a lifeless or barely habitable world, the AI systems that destroyed humanity would presumably continue to exist and function. These AI systems would likely go on to build their own civilization, and that AI civilization could itself eventually expand outward and colonize the cosmos.
This is by no means certain. We should still be worried about extinction via misuse, such as the development of bioweapons, which could kill off humans before AI is developed enough to be self-replicating and autonomous. Yes, it is unlikely that bioweapons cause extinction, but if they do, no humans means no AI (after all, the power plants fail). This seems to imply moving forward with a lot of caution.
Thanks for the comments @Clara Torres Latorre 🔸 @NickLaing @Aaron Gertler 🔸 @Ben Stevenson. This is all useful to hear. I should have an update later this month.