Ozzie Gooen

8522 karma · Berkeley, CA, USA


I'm currently researching forecasting and epistemics as part of the Quantified Uncertainty Research Institute.


Ambitious Altruistic Software Efforts


Topic contributions

> Yes, I could express that as a fairly involved function, but isn't the sentence I wrote above a better description of my view? 

That sentence isn't easily scorable, because it's not very precise. It doesn't describe the uncertainty involved or specify exactly what the jumps would be. It's also hard to feed into an aggregator or similar, or to modify directly in other ways.

But say that these attributes were added. Then, we'd just want some way to formally specify this. We don't have many options here, as few programming languages / formal specifications are made for this sort of thing. We've tried to make Squiggle as a decent fit between "simple to express views with uncertainty" and "runs in code", but it's not perfect, and people will want different things. 
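For illustration, a "view with uncertainty" can be expressed as a function that returns samples from a distribution rather than a point estimate. This is a hypothetical Python analogue, not Squiggle itself, and every name and number in it is invented:

```python
import random

def forecast(year: int, n: int = 10_000) -> list[float]:
    """Hypothetical view: a quantity starting at 100 grows ~10%/year,
    with uncertainty about the rate. Returning Monte Carlo samples
    (rather than one number) keeps the uncertainty inside the forecast."""
    samples = []
    for _ in range(n):
        growth = random.lognormvariate(0.10, 0.05)  # uncertain annual growth factor
        samples.append(100 * growth ** year)
    return samples

samples = sorted(forecast(5))
median = samples[len(samples) // 2]  # roughly 165
```

A formal object like this can be scored against outcomes, aggregated with other forecasts, or modified directly - exactly the properties the plain-English sentence lacks.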

The easiest thing now is that someone would be in charge of converting this sentence into a formal specification or algorithm. This could be done with an LLM or similar. 

This setup feels very similar to other prediction platforms. There, you could imagine some people feeling, "Do I really need to say I'm 85% sure, instead of 'I'm really sure'?"

Thanks! Some very quick points:
1. I think that discontinuities like that are rare, and I'd question this one. Basically, I think that you can get ~90% of the benefit here with just a linear or exponential model, with the right uncertainty. 
2. When writing a function that effectively expresses 100 forecasts, but in only 3x the time, you shouldn't be expected to forecast those things as well as if you had spent 100x the time. In other words, I'd expect that algo forecasters would begin with a lot of shortcuts and approximations. Their weaker forecasts can still be calibrated, just not as high-resolution as we'd expect from point forecasts made with a similar amount of effort. I think a lot of people get caught up here by thinking, "I have this specific model in my head, and unless I can model every part of it, the entire thing is useless" - but this really isn't the case!
3. I fed a modified version of your question straight to our GPT-Squiggle tool, and it came up with this, (basically) no modification needed. Not perfect, but not terrible!  

Squiggle Link
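To make point 1 concrete, here's a hypothetical sketch (all numbers invented) comparing an explicit-discontinuity view against a single wide lognormal. The simple model's quantile range can span both regimes without modeling the jump at all:

```python
import random

def jump_model(n: int = 20_000) -> list[float]:
    """Hypothetical 'discontinuity' view: the value stays near 1 unless a
    jump happens (30% chance here), in which case it lands near 10."""
    return [10.0 if random.random() < 0.3 else 1.0 for _ in range(n)]

def smooth_model(n: int = 20_000) -> list[float]:
    """The simpler alternative: one lognormal with uncertainty set wide
    enough (hypothetically) that its range covers both regimes."""
    return [random.lognormvariate(0.7, 1.1) for _ in range(n)]

def quantiles(xs: list[float], qs=(0.1, 0.5, 0.9)) -> list[float]:
    xs = sorted(xs)
    return [xs[int(q * len(xs))] for q in qs]

jump_q = quantiles(jump_model())      # roughly [1, 1, 10]
smooth_q = quantiles(smooth_model())  # roughly [0.5, 2, 8]
```

The smooth model loses the sharp bimodal shape, but with the right spread it still assigns real probability to both the no-jump and post-jump outcomes - which is most of what matters for calibration.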

Thank you! I'm intending to run some (starting small) algo forecasting competitions soonish! 

Happy to see conversation and excitement on this!

Some quick points:
- Eli Lifland and I had a podcast episode about this topic a few weeks back. It goes into some detail on the specifics and viability of forecasting+AI as a cost-effective EA intervention.
- We at QURI have been investigating a certain thread of ambitious forecasting (which would require a lot of AI) for the last few years. We're a small group, but I think our writing and work would be interesting for people in this area.
- Our post Prioritization Research for Advancing Wisdom and Intelligence from 2021 described much of this area as "Wisdom and Intelligence" interventions, and there I similarly came to the conclusion that AI+epistemics was likely the most exciting generic area there. I'm still excited for more prioritization work and direct work in this area.
- The FTX Future Fund made epistemics and AI+epistemics a priority. I'd be curious to see other funders research this area more. (Hat tip to the new OP forecasting team)
- "A forecasting bot made by the AI company FutureSearch is making profit on the forecasting platform Manifold. The y-axis shows profit. This suggests it’s better even than collective prediction of the existing human forecasters." -> I want to flag here that it's not too hard for a smart human to do as well or better. Strong human forecasters are expected to make a substantial profit. A more accurate statement here is, "This suggests that it's possible for automation to add value to a forecasting platform, and to outperform some human forecasters", which is a lower bar. I expect it will be a long time until AIs beat humans+AIs in forecasting, but I agree AIs will add value.


I've known Marisa for a few years and had the privilege of briefly working with her. I was really impressed by her drive and excitement. She seemed deeply driven and was incredibly friendly to be around. 

This will take me some time to process. I'm so sorry it ended like this. 

She will be remembered.

Sorry - it was automatically sent out to multiple platforms, but I don't think our system can publish to Spotify. I recommend trying another podcasting platform.

(This is a draft I wrote in December 2021. I didn't finish+publish it then, in part because I was nervous it could be too spicy. At this point, with the discussion post-ChatGPT, it seems far more boring, and someone recommended I post it somewhere.)

Thoughts on the OpenAI Strategy

OpenAI has one of the most audacious plans out there and I'm surprised at how little attention it's gotten.

First, they say flat out that they're going for AGI.

Then, when they raised money in 2019, they included a clause stating that investor returns would be capped at 100x their investment.

"Economic returns for investors and employees are capped... Any excess returns go to OpenAI Nonprofit... Returns for our first round of investors are capped at 100x their investment (commensurate with the risks in front of us), and we expect this multiple to be lower for future rounds as we make further progress."[1]
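The cap mechanics are simple to sketch. This is a hypothetical Python illustration - the function name and the dollar figures are mine, not OpenAI's:

```python
def capped_return(investment: float, gross_return: float,
                  cap_multiple: float = 100.0) -> tuple[float, float]:
    """Split a gross return between investors and the nonprofit under a
    return cap (OpenAI's first round used 100x). Illustrative only."""
    investor_share = min(gross_return, investment * cap_multiple)
    nonprofit_share = max(0.0, gross_return - investor_share)
    return investor_share, nonprofit_share

# A hypothetical $10M first-round investment that somehow returns $5B:
inv, npf = capped_return(10e6, 5e9)
# inv == 1e9 (capped at 100x), npf == 4e9 (excess goes to the nonprofit)
```

So below 100x the cap is irrelevant; it only bites in the extreme upside scenarios, where the vast majority of the value would route to the nonprofit.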

On Hacker News, one of their employees says,

"We believe that if we do create AGI, we'll create orders of magnitude more value than any existing company." [2]

You can read more about this mission on the charter:

"We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all, and to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power.

Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit."[3]

This is my [incredibly rough and speculative, based on the above posts] impression of the plan they are proposing:

  1. Make AGI
  2. Turn AGI into huge profits
  3. Give 100x returns to investors
  4. Dominate much (most?) of the economy, have all profits go to the OpenAI Nonprofit
  5. Use AGI for "the benefit of all"?

I'm really curious what step 5 is supposed to look like exactly. I’m also very curious, of course, what they expect step 4 to look like.

Keep in mind that making AGI is a really big deal. If you're the one company that has an AGI, and if you have a significant lead over anyone else that does, the world is sort of your oyster.[4] If you have a massive lead, you could outwit legal systems, governments, militaries.

I imagine that the 100x return cap means that the excess earnings would go into the hands of the nonprofit, which essentially means Sam Altman, senior leadership at OpenAI, and perhaps the board of directors (if legal authorities have any influence post-AGI).

This would be a massive power gain for a small subset of people.

If DeepMind makes AGI, I assume the money would go to investors, meaning it would be distributed among all of Google's shareholders. But if OpenAI makes AGI, the money will go to the leadership of OpenAI, on paper to fulfill OpenAI's mission.

On the plus side, I expect that this subset is much more like the people reading this post than most other AGI competitors would be (the Chinese government, for example). I know some people at OpenAI, and my hunch is that the people there are very smart and pretty altruistic. It might well be about the best we could expect from a tech company.

And, to be clear, it’s probably incredibly unlikely that OpenAI will actually create AGI, and even more unlikely they will do so with a decisive edge over competitors.

But, I'm sort of surprised so few other people seem at least a bit concerned and curious about the proposal. My impression is that most press outlets haven't thought much at all about what AGI would actually mean, and most companies and governments just assume that OpenAI is dramatically overconfident.

(Aside on the details of Step 5)
I would love more information on Step 5, but I don’t blame OpenAI for not providing it.

  • Any precise description of how a nonprofit would spend “a large portion of the entire economy” would upset a bunch of powerful people.
  • Arguably, OpenAI doesn’t really need to figure out Step 5 until their odds of actually having a decisive AGI advantage become more plausible.
  • I assume it’s really hard to actually put together any reasonable plan now for Step 5. 

My guess is that we really could use some great nonprofit and academic work to help outline what a positive and globally acceptable (i.e., wouldn’t upset any group too much if they understood it) Step 5 would look like. There’s been previous academic work on a “windfall clause”[5] (their 100x cap would basically count); having better work on Step 5 seems like an obvious next step.

[1] https://openai.com/blog/openai-lp/

[2] https://news.ycombinator.com/item?id=19360709

[3] https://openai.com/charter/

[4] This was called a “decisive strategic advantage” in Nick Bostrom's book Superintelligence.

[5] https://www.effectivealtruism.org/articles/cullen-okeefe-the-windfall-clause-sharing-the-benefits-of-advanced-ai/

Also, see:

> Artificial intelligence will create so much wealth that every adult in the United States could be paid $13,500 per year from its windfall as soon as 10 years from now.




Yea, this is what I was assuming the action/alternative would be. This strategy is very tried-and-true. 

Of course! In general I'm happy for people to make quick best-guess evaluations openly - in part, that helps others here correct things when there might be some obvious mistakes. :)
