yanni kyriacos

Co-Founder & Director @ AI Safety ANZ
1405 karma · Joined · Working (15+ years)
www.aisafetyanz.com.au/

Bio

Creating superintelligent artificial agents without a worldwide referendum is ethically unjustifiable. Until a consensus is reached on whether to bring such technology into existence, a global moratorium is required (n.b. we already have AGI).

Posts (25)


Comments (304)

Good to know:

  1. Can you share more about these efforts?

  2. What makes you think it isn't neglected? That is, why would the existence of two efforts mean it isn't neglected? Part of me wonders whether many national governments should consider such exercises (though I wouldn't want to take it to the military, only to have them become excited by capabilities).

NotebookLM is basically magic. Just take whatever Forum post you can't be bothered to read but know you should, and use NotebookLM to convert it into a podcast.

It seems reasonable that within 6–12 months there will be a button inside each Forum post that converts the post into a podcast (i.e. you won't need to visit NotebookLM to do it).

If I found a place that raised cows with predictably net-positive lives, what would be the harm in eating beef from that farm?

I've been ostrovegan for ~7 years, but I'm open to changing my mind with new information.

I’m going to leave the most excruciatingly annoying comment, but in doing so, prove my point: it is possible to take positive and negative feedback without it affecting you much, if at all.

If you view yourself as unconditionally lovable (as I do), then one of two things happens:

  1. someone gives me a compliment, I absorb it like “duh, I know I’m extremely lovable”

  2. someone gives me criticism, I’m like “yeah that’s a point, also I’m extremely lovable”

I think the reason public criticism can feel painful is that, from an evo-psych perspective, what our minds hear is:

‘this community hates me’ → ‘I might get kicked out of this community’ → ‘when I get kicked out of the community, I die’

And I think self-love/esteem is a buttress against the fear of death.

The reason this is an annoying comment is that I'm not pointing at a problem the community has (which could also be true!), but suggesting that the information an individual receives passes through an interpretative matrix in their mind before landing as "harmful", and that this need not be the case.

As the Buddhists like to say: the reality we experience is the one our minds construct.

This can be an extremely hard path, but it is transformational if successful.

Shantideva: "You can't cover the whole world with leather to make it smooth, but you can wear sandals."

Shunryu Suzuki: “Each of you is perfect the way you are ... and you can use a little improvement.”

I expect that over the next couple of years GenAI products will continue to improve, but public concern about AI risk won't grow with them. For some reason we can't "feel" the improvements. Then (e.g. around 2026) we will have pretty capable digital agents and there will be a surge of concern (similar to when ChatGPT was released, maybe bigger), and then possibly another perceived (but not real) plateau.

I am 90% sure that most AI Safety talent aren't thinking hard enough about neglectedness. The industry is so nascent that you could look at 10 analogous industries, see what processes or institutions are valuable and missing, and build an organisation around the highest-impact one.

The highest-impact job ≠ the highest-impact opportunity for you!


AI Safety (in the broadest possible sense, i.e. including ethics & bias) is going to be taken very seriously soon by government decision-makers in many countries. But without high-quality talent staying in their home countries (i.e. not moving to the UK or US), there is a reasonable chance that x/c-risk won't be considered a problem worth trying to solve. X/c-risk sympathisers need seats at the table. IMO local AIS movement builders should be thinking hard about how to either keep talent local (if they're experiencing brain drain) OR increase the amount of local talent coming into x/c-risk work, such that outgoing talent leakage isn't a problem.

I find it weird that anyone is disagreeing with Peter's comment. I'd be interested to hear someone who disagrees explain their position.
