
Caruso

Author and researcher - Cyber Warfare
17 karma · Joined · Working (15+ years)
www.insidecyberwarfare.com

Bio

I'm Jeff Caruso, an author and researcher focusing on Cyber Warfare and AI. The third edition of my book "Inside Cyber Warfare" (O'Reilly, 2009, 2011, 2024) will be out this fall. I was a Russia Subject Matter Expert contracted with the CIA's Open Source Center, have provided numerous cyber briefings to U.S. government agencies, and have been a frequent lecturer at the U.S. Air Force Institute of Technology and the U.S. Army War College.

Posts
2

Sorted by New

Comments
11

Today's Bulletin of the Atomic Scientists carries this headline: "Trump has a strategic plan for the country: Gearing up for nuclear war"

https://thebulletin.org/2024/07/trump-has-a-strategic-plan-for-the-country-gearing-up-for-nuclear-war/

Does EA have a plan to address this? If not, now would be a good time to make one.

Thank you.

Separately, I just read your executive summary on the nuclear threat, something I think is particularly serious and worthy of effort. It read to me as if the report suggests there is such a thing as a limited nuclear exchange. If that's correct, I would offer that you're doing more harm than good by promoting that view, which unfortunately some politicians and military officers share.

If you have not yet read, or listened to, Nuclear War: A Scenario by Annie Jacobsen, I highly encourage you to do so. Your budget for finding ways to prevent that outcome would, in my opinion, be well spent creating condensed versions of what Jacobsen accomplished and making them go viral. You'll understand what I mean once you've consumed her book. It completely changed how I think about the subject.

Once the genie is out of the bottle, it doesn't matter, does it? Many of China's current tech achievements began with industrial espionage. You can't constrain a game-changing technology while excluding espionage as a factor.

It's exactly the same issue with AI. 

While you have an interesting theoretical concept, I can't see any way to derive from it a strategy that would lead to AI safety.

A theory-of-victory approach won't work for AI. Theories of victory are born of studying what hasn't worked in warfare, and you have nothing to draw from in order to create an actual theory of victory for AI. Instead, you appear to be proposing a few different strategies, none of which seem very well thought out.

You argue that the U.S. could have established a monopoly on nuclear weapons development. How? The U.S. lost its monopoly to Russia through acts of espionage at Los Alamos. How do you imagine that could have been prevented?

AI is software, and in software security, offense always has the advantage over defense. There is no network that cannot be breached with sufficient time and resources, because software is inherently insecure.
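
To make that asymmetry concrete, here's a toy Python sketch (my own illustration, not drawn from any particular system): the defender has to anticipate every bad input, while the attacker only needs the one check that was missed.

```python
# Toy illustration of the offense/defense asymmetry: one missed check is
# enough for an attacker; the defender has to get every check right.
import subprocess

def ping_host(hostname: str) -> str:
    # The hostname is interpolated into a shell command without validation,
    # so an input like "example.com; cat /etc/passwd" runs an attacker-chosen
    # command. This is one bug; large codebases offer thousands of chances
    # for a single check like this to be missed.
    return subprocess.run(
        f"ping -c 1 {hostname}",
        shell=True, capture_output=True, text=True
    ).stdout

def ping_host_safer(hostname: str) -> str:
    # Passing an argument list avoids shell parsing entirely, but analogous
    # gaps recur elsewhere, which is the asymmetry described above.
    return subprocess.run(
        ["ping", "-c", "1", hostname],
        capture_output=True, text=True
    ).stdout
```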

I haven't seen the phrase "Advanced Artificial Intelligence" in use before. How does AAI differ from Frontier AI, AGI, and Artificial Superintelligence? 

This was good news delivered in a poor way. GiveDirectly.org isn't an EA organization and, in my opinion, that's to their credit. EA could learn what "effective altruism" really means by studying what GiveDirectly is doing and moving to their model.

Fired from OpenAI's Superalignment team, Aschenbrenner now runs an investment fund dedicated to AGI-focused startups, according to The Information.

"Former OpenAI super-alignment researcher Leopold Aschenbrenner, who was fired from the company for allegedly leaking information, has started an investment firm to back startups with capital from former Github CEO Nat Friedman, investor Daniel Gross, Stripe CEO Patrick Collision and Stripe president John Collision, according to his personal website.

In a recent podcast interview, Aschenbrenner spoke about the new firm as a cross between a hedge fund and a think tank, focused largely on AGI, or artificial general intelligence. “There’s a lot of money to be made. If AGI were priced in tomorrow, you could maybe make 100x. Probably you can make even way more than that,” he said. “Capital matters.”

“We’re going to be betting on AGI and superintelligence before the decade is out, taking that seriously, making the bets you would make if you took that seriously. If that’s wrong, the firm is not going to do that well,” he said."

What happened to his concerns over safety, I wonder? 

I published a short piece on Yann LeCun's post about Jan Leike's exit from OpenAI over perceived safety issues, and wrote a bit about the difference between Low Probability - High Impact events and Zero Probability - High Impact events.

https://www.insideaiwarfare.com/yann-versus/
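
To illustrate the distinction with toy numbers (mine, purely for arithmetic): a low-probability, high-impact event still carries enormous expected harm, while a genuinely zero-probability event contributes nothing no matter how large the impact.

```python
# Toy expected-harm arithmetic with made-up numbers, purely to show why
# "low probability" and "zero probability" are categorically different.
impact = 1_000_000_000            # hypothetical units of harm

low_probability = 1e-4            # unlikely, but possible
zero_probability = 0.0            # cannot happen

print(impact * low_probability)   # 100000.0 -> still worth planning against
print(impact * zero_probability)  # 0.0      -> no amount of impact matters
```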

Thanks for the link to Open Asteroid Impact. That's some really funny satire. :-D

This is an interesting #OpenPhil grant. $230K for a cyber threat intelligence researcher to create a database that tracks instances of users attempting to misuse large language models.

https://www.openphilanthropy.org/grants/lee-foster-llm-misuse-database/

Will user data be shared with the user's permission? How will an LLM determine a user's intent when differentiating between purposely harmful entries, user error, safety testing, independent red-teaming, playful entries, etc.? If a user is placed in the database, is she notified? How long do you stay in LLM prison?
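
To make those questions concrete, here is a purely hypothetical sketch of what one record in such a database might need to capture; the field names and policies are my assumptions, not anything stated in the grant.

```python
# Hypothetical sketch only: none of these fields or policies come from the
# OpenPhil grant description; they exist to make the questions above concrete.
from dataclasses import dataclass
from enum import Enum

class InferredIntent(Enum):
    MALICIOUS = "malicious"            # deliberate misuse
    USER_ERROR = "user_error"          # accidental
    SAFETY_TESTING = "safety_testing"  # vendor or lab testing
    RED_TEAMING = "red_teaming"        # independent researchers
    PLAYFUL = "playful"                # jokes, curiosity

@dataclass
class MisuseRecord:
    user_id: str              # pseudonymous? shared with third parties, with consent?
    prompt_excerpt: str       # how much raw user data is retained?
    intent: InferredIntent    # who or what makes this call, and how reliably?
    confidence: float         # misclassification risk across the categories above
    user_notified: bool       # is the user told they've been added?
    retention_days: int       # how long do you stay in "LLM prison"?
```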

I did send an email to OpenPhil asking about this grant, but so far I haven't heard anything back.
