All of Darren McKee's Comments + Replies

Sometimes it is not enough to make a point theoretically; it has to be made in practice. Otherwise, the full depth of the point may not be appreciated. In this case, I believe the point is that, as a community, we should have consistent, high-quality standards for investigations or character assessments.

This is why I think it is reasonable to have the section "Sharing Information on Ben Pace".  It is also why I don't see it as retaliatory. 

Some have responded to that section negatively, even though Kat specifically pointed out all the fl... (read more)

I agree, and I find the ratio of agreement to disagreement on your comment really disheartening in terms of what lesson this community has learned from all this.

I get that people find it too "retaliatory" and bad-faith. Maybe it would have been cleaner if it weren't about Ben, though I don't think a hypothetical person would have made the lesson as clear, and if Ben wasn't fair game after having written that article, I don't know who would be. Unless people believe Kat is just making up accusations entirely, they must believe those accusations deserve just as mu... (read more)

If what was at issue was the 'overall character of Nonlinear staff', is it fair to assume you fully disagreed with Ben's one-sided approach?

-3
David M
3mo
I phrased that poorly, please see my reply to Vlad's reply for an explanation. I weakly think Ben's decision to search for bad information rather than good was a good policy, but that the investigation was lacking in some other aspects.
0
Robi Rahman
4mo
David probably meant "overall character of Nonlinear management" there. And in that case you might not interview the managers themselves, although you'd probably want to interview other employees to see if they were treated like Alice and Chloe.

Thanks!
There might be. If you're interested in pursuing that in Australia, send me a DM and we'll explore what's possible. 

It's a tricky balance, and I don't think there is a perfect solution. The issue is that both the Title and the Cover have to be intriguing and compelling (and also, ideally, short and immediately understandable). What will intrigue some will be less appealing to others.
So, I could have had a question mark, or some other less dramatic image... but when not only safety researchers but also the CEOs of the leading AI companies believe the product they are developing could lead to extinction, I find that alarming. It is an alarming fact about the world, and that is what drove the cover.
The inside is more nuanced and cautious. 

Sure do! As I said in the second-to-last bullet, it is in progress :)
(hopefully within the next two weeks) 

1
DaneelO
4mo
Great! I don’t know how I missed that 😅

Great post. I can't help but agree with the broad idea, given that I'm just finishing up a book whose main goal is raising awareness of AI safety among a broader audience: non-technical, average citizens, policymakers, etc. Hopefully out in November.

I'm happy your post exists even if I have (minor?) differences on strategy. Currently, I believe the US government sees AI as a consumer item, so they link it to innovation, the economy, and other important things. (Of course, given recent activity, there is some concern about the risks.) As such, I'm advocat... (read more)

Some parts of the US Government are waking up to the extinction threat. By November - following the UK AI Safety Summit and Google's release of Gemini(?) - they might've fully woken up (we can hope).

9
Holly_Elmore
6mo
I consider the consumer regulation route complementary to what I’m doing and I think a diversity of approaches is more robust, as well.
8
Holly_Elmore
6mo
I didn’t know about your book! Happy to hear it :)

Thank you. 
I quite like the "we don't have a lot of time" part, both because we'd need to prepare in advance and because making decisions under time pressure is almost always worse.

Noted. I find many are stuck on the 'how'. That said, some polls find that two-thirds to three-quarters of people consider that AI might harm humanity, so it isn't entirely clear who needs to hear which arguments/analysis.

Great post!

Points A and B, about 30 years, are useful ideas/talking points. Thanks for the reminder/articulation!

I'm definitely aware of that complication, but I don't think that is the best path to broader impact. Uncertainty abounds. If I can get it out in 3 months, I will.

Thanks for sharing this and the others. I read that one, and it was a bit more about the rationality community than the risks. (It's in the list under a different title.)

1
JakubK
1y
The AI Does Not Hate You is the same book as The Rationalist's Guide to the Galaxy? I didn't realize that. Why do they have different titles?

FYI, I'm working on a book about the risks of AGI/ASI for a general audience, and I hope to get it out within 6 months. It likely won't be as alarmist as your post but will try to communicate the key messages, the importance, the risks, and the urgency. Happy to have more help.

2
Greg_Colbourn
1y
Cool, but a lot is likely to happen in the next 6 months! Maybe consider putting it online and updating it as you go? I feel like this post I wrote is already in need of updating (with mention of the H100 "summoning portals" already in the pipeline, CthuluGPT, Stability.ai, discussion at Zuzalu last week, Pause AI.)

Thank you for a great post and the outreach you are doing.  We need more posts and discussions about optimal framing. 

I was referring to external credibility, as in looking for a scientific paper with the key ideas. Secondarily, an online, modular guide is not quite the frame of the book either (although it could possibly be adapted into such a thing in the future).

Interesting points. I'm working on a book which is not quite a solution to your issue but hopefully goes in the same direction. 
And I'm now curious to see that memo :)

3
Harrison Durland
1y
Which issue are you referring to? (External credibility?)  I don’t see a reason to not share the paper, although I will caveat that it definitely was a rushed job. https://docs.google.com/document/d/1ctTGcmbmjJlsTQHWXxQmhMNqtnVFRPz10rfCGTore7g/edit

Thanks for the compilation! This might be helpful for the book I'm writing.  
One of my aspirations was to throw a brick through the Overton window regarding AI safety, but things have already changed, with more and more stuff coming out like what you've listed.

I am fully supportive of more books coming out on EA related topics. I've also always enjoyed your writings. 

As someone trying to write a book about the threat of AI for a broader audience, I've learned that you should have a good idea of your goal for the book's distribution.  Meaning, is your goal to get this published by a publisher?
Or self-publish?  An eCopy or audiobook?

To get something published, you typically need an agent. To get an agent you usually need a one-page pitch, a writing sample, and perhaps an outline. 

If no agent is interested, writing the book is a risk if you want a third party to publish it.

Thought that this was filled with interesting ideas. Thank you.

If you're open to constructive feedback: I think there is an opportunity (mainly for the host, but perhaps also for you as the guest) to reduce the number of 'likes' that serve as the equivalent of 'ums' in your speech.

This may be cultural/generational, and perhaps few others care, but personally I found parts hard to listen to because there were so many 'likes'. 

I couldn't help but be curious, so I searched the transcript: 'like' appears 577 times (of course, a decent chunk of those are not filler but part of normal speech).

Oops, looks like I read that last July (and didn't agree with the general thesis). Thanks for the comment. 

Thanks for sharing. Do you happen to have the number of books you sent out (as you've already given the cost)? Just wondering if my current estimate of around 30,000 is close.

5
Bella
1y
Edit: I'm sorry, I made a spreadsheet error in the place where I sourced the figure for my previous answer — the real answer is 46,000!
Sure thing! Yep, your estimate is really good — it's about 36,000 :)

"Meanwhile, at a meeting with Alameda employees on Wednesday, Ms. Ellison explained what had caused the collapse, according to a person familiar with the matter. Her voice shaking, she apologized, saying she had let the group down. Over recent months, she said, Alameda had taken out loans and used the money to make venture capital investments, among other expenditures.

Around the time the crypto market crashed this spring, Ms. Ellison explained, lenders moved to recall those loans, the person familiar with the meeting said. But the funds that Alameda had sp... (read more)

7
Nathan Ashby
1y
Yikes. As an outside observer I appreciate getting this insight, but boy oh boy, I would have told her not to say it if I were her lawyer.
2
Lauren Maria
1y
I'll add this to the post :)