Clara Torres Latorre 🔶️

Postdoc @ CSIC
8 karma · Joined · Working (6–15 years)

Comments (5)

My uninformed guess is that an automatic system doesn't need to be superintelligent to create trouble; it only needs some specific abilities (depending on the kind of trouble).

For example, the machine doesn't need to be agentic if there is a human agent deciding to make bad things happen.

So I think this would be an important point to discuss, and maybe someone has done so already.

Thank you for your comment. I edited my post for clarity. I was already thinking of x-risk or s-risk (both in AGI risk and in narrow AI risk).

(just speculating, would like to have other inputs)

I get the impression that sexy ideas get disproportionate attention, and that this may be contributing to the focus on AGI risk at the expense of risks from narrow AI. Here I mean AGI x-risk/s-risk versus narrow-AI x-risk/s-risk (possibly combined with malevolent actors or coordination failures).

I worry that prioritising AGI in outreach may lead the public to dismiss the whole thing as a pipe dream. This happened to me a while ago.

Thank you Toby.

I agree that, to observe macroeconomic effects, something has to happen at broad scale, and that my question was quite speculative.

On the other hand, about the Forum: I see that posts read like essays and are informative. I wonder what the right place is for things that might be interesting or valuable but don't fit the general vibe, such as a standalone question. Do they belong here? As quick takes?

Hi,

I recently took the Giving What We Can 10% pledge (to give 10% of my income to effective charities). Part of the appeal of the pledge is making effective giving a cultural norm.

I wonder about its effects on the global economy, and on the economies of rich countries, where I hope the pledge takes root more strongly and quickly.

Has anyone looked into that?