yanni kyriacos

Director & Movement Builder @ AI Safety ANZ, GWWC Advisory Board Member (Growth)
745 karma · Joined Dec 2020 · Working (15+ years)

Bio

Creating superintelligent artificial agents without a worldwide referendum is ethically unjustifiable. Until a consensus is reached on whether to bring such technology into existence, a global moratorium is required (n.b. we already have AGI).

Posts (20)

Comments (141)

Hi Guy! Thanks for commenting :) I am a bit confused by the analogy. Would you mind explaining it further?

I think this is a great idea! Is there a way to have two versions:

  1. The detailed version (with percentages, etc.)
  2. The meme-able version (which links to the detailed version)

Content like this is only as good as the number of people who see it, and while the meme-able version would necessarily lose some detail, I think it is still worth doing.

The Alliance for Animals does this in the lead up to elections and it gets spread widely: https://www.allianceforanimals.org.au/nsw-election-2023

 

Thanks for taking the time to comment Michael! I appreciate it :)

I probably should have mentioned in my post that I've spent over 1,000 hours consuming Buddhist-related content and/or meditating, which gives me a narrow and deep "inside view" on the topic. My views (and comments below) are heavily informed by Tibetan Buddhism in particular. Regarding your points:


"As I understand, enlightenment doesn't free you from all suffering. Enlightenment is better described as "ego death"

  • My 2 cents is that the path to Enlightenment can be started (but not fully realised) by glimpsing the illusory nature of subject/object duality. The self is the ultimate "subject", so I agree that "ego death" is a viable path!
  • I think full Enlightenment frees someone from basically all unnecessary suffering (which in Buddhism is distinguished from pain). A simple formula is something like "discomfort x resistance = suffering". An enlightened person, in my view, wouldn't be attached to a particular moment or its content, and therefore wouldn't "cling" to or "resist" it.

"Enlightenment is extremely hard to achieve (it requires spending >10% of your waking life meditating for many years) and doesn't appear to make you particularly better at anything. Like if I could become enlightened and then successfully work 80 hours a week because I stop caring about things like motivation and tiredness, that would be great, but I don't think that's possible."

  • I think full Enlightenment is extremely hard to achieve, like you said, but getting 10% of the way there is totally within a normal person's grasp. I think it is plausible this could boost the average person's wellbeing as much as a good diet and exercise combined. Maybe more.
  • I think becoming partly Enlightened could make a person more altruistic but less driven. Hard to say!

If you're interested in exploring this further from a personal perspective, I recommend checking out Loch Kelly :)

Hi Matthew! I'd be curious to hear your thoughts on a couple of questions (happy for you to link if you've posted elsewhere): 

1/ What is the risk level above which you'd be OK with pausing AI?

2/ Under what conditions would you be happy to attend a protest? (LMK if you have already attended one!)

I'd like to make clear to anyone reading that you can support the PauseAI movement right now simply because you think it is useful right now. Then, in the future, when conditions change, you can choose to stop supporting it.

AI is changing extremely fast (e.g. technical work was probably our best bet a year ago; I'm less sure now). Supporting a particular tactic/intervention does not commit you to an ideology or team forever!

This seems close enough that I might co-opt it :)

https://en.wikipedia.org/wiki/Freeganism

Yeah, this is a good point, and one I've considered, which is why I basically only do it at home.

This is an extremely "EA" request from me, but I feel like we need a word for people (i.e. me) who are vegans but will eat animal products if they're about to be thrown out. OpportuVegan? UtilaVegan?

I think it would be good if lots of EAs answered this Twitter poll, so we could get a better sense of the community's views on the topic of Enlightenment / Awakening: https://twitter.com/SpencrGreenberg/status/1782525718586413085?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Etweet

I think Peter might be hoping people read this as "a rich and influential guy might be persuadable!" rather than "let's discuss the minutiae of what constitutes an EA". I've watched quite a few of Bryan's videos, and honestly I could see this guy swinging either way on this (could be SBF, could be Dustin; I honestly can't tell how this shakes out).
