yanni kyriacos

Director & Movement Builder @ AI Safety ANZ, GWWC Advisory Board Member (Growth)
714 karma · Joined Dec 2020 · Working (15+ years)

Bio

Creating superintelligent artificial agents without a worldwide referendum is ethically unjustifiable. Until a consensus is reached on whether to bring into existence such technology, a global moratorium is required (n.b. we already have AGI).

Posts: 19 (sorted by new)

Comments: 133

I think it would be good if lots of EAs answered this Twitter poll, so we could get a better sense of the community's views on the topic of Enlightenment / Awakening: https://twitter.com/SpencrGreenberg/status/1782525718586413085?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Etweet

I think Peter might be hoping people read this as "a rich and influential guy might be persuadable!" rather than "let's discuss the minutiae of what constitutes an EA". I've watched quite a few of Bryan's videos and honestly I could see this guy swinging either way (could be an SBF, could be a Dustin; I honestly can't tell how this shakes out).

Has anyone seen an analysis that takes seriously the idea that people should eat some fruits, vegetables and legumes over others based on how much animal suffering they each cause?

I.e. don't eat fruit X, eat fruit Y instead, because fruit X is [e.g.] harvested in Z way, which kills more [insert plausibly sentient creature].

The catchphrase I walk around with in my head regarding the optimal strategy for AI Safety is something like: Creating Superintelligent Artificial Agents* (SAA) without a worldwide referendum is ethically unjustifiable. Until a consensus is reached on whether to bring into existence such technology, a global moratorium is required (*we already have AGI).

I thought it might be useful to spell that out.

I recently discovered the idea of driving all blames into oneself, which immediately resonated with me. It is relatively hardcore; the kind of thing that would turn David Goggins into a Buddhist.

Gemini did a good job of summarising it:

This quote by Pema Chödrön, a renowned Buddhist teacher, represents a core principle in some Buddhist traditions, particularly within Tibetan Buddhism. It's called "taking full responsibility" or "taking self-blame" and can be a bit challenging to understand at first. Here's a breakdown:

What it Doesn't Mean:

  • Self-Flagellation: This practice isn't about beating yourself up or dwelling on guilt.
  • Ignoring External Factors: It doesn't deny the role of external circumstances in a situation.

What it Does Mean:

  • Owning Your Reaction: It's about acknowledging how a situation makes you feel and taking responsibility for your own emotional response.
  • Shifting Focus: Instead of blaming others or dwelling on what you can't control, you direct your attention to your own thoughts and reactions.
  • Breaking Negative Cycles: By understanding your own reactions, you can break free from negative thought patterns and choose a more skillful response.

Analogy:

Imagine a pebble thrown into a still pond. The pebble represents the external situation, and the ripples represent your emotional response. While you can't control the pebble (the external situation), you can control the ripples (your reaction).

Benefits:

  • Reduced Suffering: By taking responsibility for your own reactions, you become less dependent on external circumstances for your happiness.
  • Increased Self-Awareness: It helps you understand your triggers and cultivate a more mindful response to situations.
  • Greater Personal Growth: By taking responsibility, you empower yourself to learn and grow from experiences.

Here are some additional points to consider:

  • This practice doesn't mean excusing bad behavior. You can still hold others accountable while taking responsibility for your own reactions.
  • It's a gradual process. Be patient with yourself as you learn to practice this approach.

Be the meme you want to see in the world (screenshot).


Yeah, case studies as research need to be treated very carefully (i.e. they can still be valuable exercises, but the analyst needs to be aware of their weaknesses).

I hope you're right. Thanks for the example, it seems like a good one.

What are some historical examples of a group (like AI Safety folk) getting something incredibly wrong about an incoming technology? Bonus question: what led to that group getting it so wrong? Maybe there is something to learn here.
