
Neil Natarajan

25 karma · Joined Jan 2022

Posts
1


Comments
3

I confess I'm not entirely sure how you got there after reading the link post. Not that I disagree (I'm personally fine with being called a neartermist; I think it sounds good, but I'm open to pretty much anything).

“It seems to me a generally bad practice to take the positive part of the phrase a movement or philosophy uses to describe itself, and then negate that to describe people outside the movement” seems to imply that we shouldn't be “not longtermists”.

Hi Hazelfire,

Thanks for pointing them out; we'll definitely have a chat with them!

It looks to me like they're mostly focused on volunteering opportunities at pre-existing projects, whereas our main focus is going to be on helping people start / join something new, not necessarily volunteering. Our aim is to break down the barriers that would keep people from going into value-aligned jobs / full-time roles, whereas CoLabs appears to be mostly matching helping hands to projects that need help.

For what it's worth, I'm optimistic about the EU AI regulation! I think, at least insofar as it is a smoke test, it marks the beginning of much-needed AI regulation. I am also optimistic about transferability – perhaps not in exact wording or policy, but I like the risk-based approach. I think it gives companies the ability to replace the riskiest AIs while keeping the less risky ones, and if the approach is shown to be feasible in the EU, it will likely be adopted elsewhere – if only because companies will already have found replacement solutions for unacceptable-risk AIs.

If there is one complaint I'd level at it, it's that it has been stripped back a little too far. EU laws tend to be vague by design, but even by those standards, this new AI regulation leaves quite a few holes. I am also worried that the initial proposal was much stronger than the current EU AI Act, and that this trend may continue: the scaling back of the regulation might even foreshadow hesitant enforcement by the Court of Justice. It isn't impossible that this ends up something like the cookie banners you now see on websites in the EU. If that happens, I fear that less regulation-savvy bodies (like the US government) might adopt a similar regulation to satisfy public pressure while doing as little as possible to actually regulate AI. This might hinder efforts at AI regulation, as it would allow the EU and other regulatory bodies to put up a hollow regulation for good press while leaving the problem unsolved.

I really hope this doesn't happen, or at least that, if it does, the regulation serves as a stepping stone to stronger future regulations :)