
Tobias Häberli

1151 karma · Joined Dec 2018 · Bern, Switzerland

Comments (78)

"Profits for investors in this venture [ETA: OpenAI] were capped at 100 times their investment (though thanks to a rule change this cap will rise by 20% a year starting in 2025)."


I stumbled upon this quote in this recent Economist article [archived] about OpenAI. I couldn't find any other good source supporting the claim, so it might not be accurate. The earliest mention of the claim I could find is from January 17th 2023, and it only talks about OpenAI "proposing" the rule change.

If true, this would make the profit cap much less meaningful, especially under longer AI timelines. For example, with the cap growing 20% per year from 2025 to 2040 (15 years), a $1 billion investment made in 2023 would be capped at roughly 1,540 times the investment by 2040 (100 × 1.2^15 ≈ 1,540).
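A rough sketch of that compounding, assuming the 100x cap simply grows by 20% per year starting in 2025 (my reading of the Economist claim; the exact mechanics aren't public):

```python
def cap_multiple(year: int, base_cap: float = 100.0,
                 growth: float = 0.20, start_year: int = 2025) -> float:
    """Profit-cap multiple in a given year, assuming the cap compounds
    at `growth` per year from `start_year` (assumption, not an official figure)."""
    years_of_growth = max(0, year - start_year)
    return base_cap * (1 + growth) ** years_of_growth

for y in (2025, 2030, 2040):
    print(y, round(cap_multiple(y)))  # 100, 249, 1541
```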

  • Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government. In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests. 

Would the information in this quote fall under any of the Freedom of Information Act (FOIA) exemptions, particularly those concerning national security or confidential commercial information/trade secrets? Or would there be other reasons why it wouldn't become public knowledge through FOIA requests?

As far as I understand, the plan is for it to be a (sort of?) national/governmental institute.[1] The UK government has quite a few scientific institutes. It might be the first of its kind in the world.

  1. ^

    In this article from early October, the phrasing implies that it would be tied to the UK government:

    Sunak will use the second day of Britain's upcoming two-day AI summit to gather “like-minded countries” and executives from the leading AI companies to set out a roadmap for an AI Safety Institute, according to five people familiar with the government’s plans.

    The body would assist governments in evaluating national security risks associated with frontier models, which are the most advanced forms of the technology.

    The idea is that the institute could emerge from what is now the United Kingdom government’s Frontier AI Taskforce[...].

Thanks for the context, didn't know that!

SBF was additionally charged with bribing Chinese officials with $40 million. Caroline Ellison testified in court that they sent a $150 million bribe.

My hope and expectation is that neither will be focused on EA

I'd be surprised [p<0.1] if EA was not a significant focus of the Michael Lewis book – but I agree that it's unlikely to be the major topic. Many leaders at FTX and Alameda Research are closely linked to EA. SBF often, and publicly, said that effective altruism was a big reason for his actions. His connection to EA is interesting both for understanding his motivation and as a storytelling element. There are Manifold prediction markets on whether the book will mention 80,000 Hours (74%), Open Philanthropy (74%), and GiveWell (80%), but these markets are thinly traded and not very informative.[1]

This video, titled The Fake Genius: A $30 BILLION Fraud (2.8 million views, posted 3 weeks ago), might give a glimpse of how EA could be handled. The video touches on EA but isn't centred on it. It discusses the role EAs played in motivating SBF to pursue earning to give, and in starting Alameda Research and FTX. It also points out that, after the fallout at Alameda Research, 'higher-ups' at CEA were warned about SBF but supposedly ignored the warnings. Overall, the video is mainly interested in the mechanics of how the suspected fraud happened, with EA as only one piece of the puzzle. One can come away thinking "EA led SBF to commit fraud" just as easily as "SBF used EA as a front for fraud".

ETA:
The book description[2] mentions "philanthropy", makes it clear that the book is mainly about SBF rather than FTX as a firm, and describes it as partly a psychological portrait.

  1. ^

    I also created a similar market for CEA, but with two mentions as the resolution criterion. One mention is very likely, as SBF briefly worked for them.

  2. ^

    "In Going Infinite Lewis sets out to answer this question, taking readers into the mind of Bankman-Fried, whose rise and fall offers an education in high-frequency trading, cryptocurrencies, philanthropy, bankruptcy, and the justice system. Both psychological portrait and financial roller-coaster ride, Going Infinite is Michael Lewis at the top of his game, tracing the mind-bending trajectory of a character who never liked the rules and was allowed to live by his own—until it all came undone."

(Not sure if this is within the scope of what you're looking for.)
I'd be excited about something like a roundtable with people who have been through 80,000 Hours advising – talking about how their thinking about their careers has changed, advice for people in similar situations, etc. I'd imagine this could be a good fit for 80k After Hours?

Microsoft Edge (the browser) has a "read aloud" option that offers a range of natural voices for websites and PDFs. It's free and only slightly worse than Speechify – and can give you a sense of whether $139/year would be worth it for you.

I think that a very simplified ordering for how to impress/gain status within EA is:

Disagreement well-justified ≈ Agreement well-justified >>> Agreement sloppily justified > Disagreement sloppily justified

Looking back on my early days interacting with EAs, I generally couldn't present well-justified arguments, and I did feel pressure to agree on shaky epistemic grounds. Because I sometimes disagreed nevertheless, I suspect that some parts of the community were less accessible to me back then.

I'm not sure what hurdles would need to be overcome to get EA communities to treat 'Agreement sloppily justified' and 'Disagreement sloppily justified' similarly.
