
Darren McKee

384 karma · Joined Jun 2022

Comments (24)

Sometimes it is not enough to make a point theoretically; it has to be made in practice. Otherwise, the full depth of the point may not be appreciated. In this case, I believe the point is that, as a community, we should have consistent, high-quality standards for investigations and character assessments.

This is why I think it is reasonable to have the section "Sharing Information on Ben Pace".  It is also why I don't see it as retaliatory. 

Some have responded negatively to that section, even though Kat specifically pointed out all of its flaws, said that people shouldn't update on it, and said that Ben shouldn't have to respond to such things. Why? I believe she is illustrating the exact problem with saying such things, even when one tries to soften them. The emotional and intellectual displeasure you feel is correct, and it should apply to anyone being assessed in such a way.

I fear there are those who don't see the parallel between Ben's original one-sided post (by his own statements) and Kat's one-sided example (also by her own statements), which is clearly for educational purposes only.

Although apparently problematic to some, I hope the section has been useful in highlighting the larger point: assessments of character should be more comprehensive, more evidence-based, and (broadly) more just (e.g., allowing those discussed time to respond).

If what was at issue was the 'overall character of Nonlinear staff', is it fair to assume you fully disagreed with Ben's one-sided approach?

Thanks!
There might be. If you're interested in pursuing that in Australia, send me a DM and we'll explore what's possible. 

It's a tricky balance and I don't think there is a perfect solution. The issue is that both the title and the cover have to be intriguing and compelling (and, ideally, short and immediately understandable). What will intrigue some will be less appealing to others.
So, I could have used a question mark, or some other less dramatic image... but when not only safety researchers but the CEOs of the leading AI companies believe the product they are developing could lead to extinction, that is alarming. It is an alarming fact about the world, and that drove the cover.
The inside is more nuanced and cautious. 

Sure do! As I said in the second-to-last bullet, it is in progress :)
(hopefully within the next two weeks) 

Great post. I can't help but agree with the broad idea, given that I'm just finishing up a book whose main goal is raising awareness of AI safety among a broader audience: non-technical readers, average citizens, policymakers, etc. Hopefully out in November.

I'm happy your post exists even if I have (minor?) differences on strategy. Currently, I believe the US government sees AI as a consumer item, so it links it to innovation, economic benefit, and other important priorities. (Of course, given recent activity, there is some concern about the risks.) As such, I'm advocating for safe innovation with firm rules and regulations that enable it. If those bars can't be met, then we obviously shouldn't have unsafe innovation. I sincerely want good things from advanced AI, but not if it will likely harm everyone.

Thank you. 
I quite like the "we don't have a lot of time" part, both because we'd need to prepare in advance and because making decisions under time pressure almost always goes worse.

Noted. I find many are stuck on the 'how'. That said, some polls find that two-thirds to three-quarters of people think AI might harm humanity, so it isn't entirely clear who needs to hear which arguments/analysis.

Great post!

Points A and B about 30 years are useful ideas/talking points. Thanks for the reminder/articulation!
