I strongly agree that current LLMs don't seem to pose a risk of global catastrophe, but I'm worried about what might happen when LLMs are combined with things like digital virtual assistants that have outputs other than generating text. Even if such an assistant can only make bookings, send emails, etc., I feel like things could get concerning very fast.
Is there an argument for having AI fail spectacularly in a small way, raising enough global concern to slow progress and increase safety work? I'm envisioning something like an LLM virtual assistant that causes a lot of lost productivity and some security breaches but nothing too catastrophic, which makes people take AI safety seriously and perhaps slows progress on more advanced AI.
A complete spitball.
This is cool! I came across EA in early 2015, and I've sometimes been curious about what happened in the years before then. Books like The Most Good You Can Do sometimes incidentally give anecdotes, but I haven't seen a complete picture in one public place. Not to toot our own horn too much, but I wonder if there will one day be a documentary about the movement itself.
Thanks for the great question. I'd like to see more attempts to get legislation passed to lock in small victories. The fact that the Sioux Falls slaughterhouse ban almost passed gives me optimism here. Although the support seemed to be driven more by NIMBY concerns than by animal rights, in some ways that doesn't matter.
I'm also interested in efforts to carry the lower levels of speciesism we see in children through into adulthood, and to understand what exactly drives it so we can incorporate that into outreach aimed at adults. Our recent interview with Matti Wilks touches on this a little, if you're interested.
Thank you for the feedback! I just wanted to let you know that while I haven't had time to write a proper response, I've read your feedback and will try to take it on board in my future work.
People more involved with x-risk modelling (and better at math) than I am would be better placed to say whether this improves on existing tools, but I like it! I hadn't come across the absorbing-state terminology before; that was interesting. Reading about it, my mind goes to option value, or the lack thereof, though that might not be a perfect analogy.
Regarding x-risks requiring a memory component, can you design Markov chains so that the memory is incorporated into the states themselves? (A rough sketch of what I mean is below.)
Some possible cases where memory could be useful (without thinking about it too much) might be:
Maybe this information can just be captured without memory anyway?
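To illustrate the kind of thing I mean (a toy sketch only, not anything from the post; the state names and all the numbers are made up): memory can be folded into a Markov chain by expanding the state space, e.g. tracking how many near-miss catastrophes have happened so far and letting that change the transition probabilities into the absorbing "extinct" state.

```python
import numpy as np

# Toy illustration: "memory" folded into the state space.
# States: safe with 0, 1, or 2+ near-misses so far, plus an absorbing
# "extinct" state. Each near-miss (hypothetically) raises annual risk.
# All probabilities are made up for illustration.
P = np.array([
    #  s0     s1     s2    extinct
    [0.945, 0.050, 0.000, 0.005],  # safe, 0 near-misses so far
    [0.000, 0.940, 0.050, 0.010],  # safe, 1 near-miss (higher risk)
    [0.000, 0.000, 0.980, 0.020],  # safe, 2+ near-misses (higher still)
    [0.000, 0.000, 0.000, 1.000],  # extinct (absorbing state)
])

def extinction_prob(P, start=0, years=100):
    """Probability of having entered the absorbing state within `years` steps."""
    dist = np.zeros(P.shape[0])
    dist[start] = 1.0
    dist = dist @ np.linalg.matrix_power(P, years)
    return dist[-1]

print(extinction_prob(P, years=100))
```

The point is just that "memory" becomes extra dimensions of the state, at the cost of a bigger transition matrix; maybe that's what you meant by capturing the information without memory anyway.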
Thanks for sharing, I'm looking forward to this! I'm particularly excited about the sections on measuring suffering and artificial suffering.
Thanks for sharing! I love seeing concrete roadmaps/plans for things like this, and think we should do it more.
Fair enough! I probably wasn't clear - what I had in mind was one country detecting an asteroid first, then deflecting it into Earth before any other country (or 'the global community') detects it. Just recently we detected a 1.5 km near-Earth object whose orbit intersects Earth's. The scenario I was imagining is that one country detects an object like this (though more likely a smaller one, ~50 m) first, then deflects it.
We detect ~50 m asteroids all the time as they make their final approach to Earth, so being the first to detect one by chance could be a strategic advantage.
I take your other points, though.
I supplement iron and vitamin C, as my iron is currently on the lower end of normal (after a few years of being vegan it was too high, go figure).
I tried creatine for a few months but didn't notice much difference in the gym or while rock climbing.
I drink a lot of B12 fortified soy milk which seems to cover that.
I have about 30 g of protein powder with a good range of amino acids to help hit 140 g of protein a day.
I have a multivitamin every few days.
I have iodine fortified salt that I cook with sometimes.
I've thought about supplementing omega-3 or eating more omega-3-rich foods but never got around to it.
8 years vegan for reference.