Ozzie Gooen

9901 karma · Berkeley, CA, USA

Bio

I'm currently researching forecasting and epistemics as part of the Quantified Uncertainty Research Institute.

Sequences
1

Ambitious Altruistic Software Efforts

Comments
894

Topic contributions
4

Thanks for the clarification! 

Yeah, I'm not very sure what messaging to use. It's definitely true that there's a risk we won't be able to maintain our current team for another year. At the same time, if we could get more than our baseline of funding, I think we could make good use of it (up to another 1-2 FTE for 2025).

I'm definitely still hoping that we can eventually (in the next 1-5 years) either significantly grow (this could mean up to 5-7 FTE) or scale in other ways. Our current situation seems pretty minimal to me, but I still strongly prefer it to nothing.

I'd flag that the funding ecosystem feels fairly limited for our sort of work. The main options are really the SFF and the new Open Philanthropy forecasting team. I've heard that some related groups have also been having challenges with funding. 

If the confusion is that you expected us to have more runway, I'm not very sure what to say; I think this sector can be pretty difficult. We're in talks with one donor for funding that would help cover this gap, but I'd prefer not to depend too heavily on them.

We also do have a few months of reserves that we could spend in 2025 if really needed. 

So far we've raised $62,000 for 2025, from the Survival and Flourishing Fund.

Slava and I are both senior software engineers; I'm in Berkeley (to be close to the EA scene here). The total is roughly $200k for the two of us (including taxes and health care).

In addition, we have server and software costs, plus other miscellaneous expenses.

We then have a 14% overhead from our fiscal sponsorship with Rethink Priorities.

I said around $200k more, so this assumes roughly a $262k total budget. This is on the low end of what I'd really prefer, but given the current EA funding situation, it's what I'll aim for now.
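To make the arithmetic explicit, here's a rough Squiggle sketch of that budget. The ~$30k non-salary figure is my own back-solved assumption (chosen so the totals line up), not a number stated above:

```
// Rough 2025 budget reconstruction; the $30k split is assumed
salaries = 200k       // two engineers, incl. taxes and health care
otherCosts = 30k      // assumed: servers, software, misc expenses
overheadRate = 0.14   // Rethink Priorities fiscal sponsorship
total = (salaries + otherCosts) * (1 + overheadRate)  // ~262k
total
```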

If we had more money, we could bring in contractors for things like research and support.

Answering on behalf of the Quantified Uncertainty Research Institute!

We're looking to raise another ~$200k for 2025, to cover our current two-person team plus expenses. We'd also be enthusiastic about expanding our efforts if there is donor interest.

We at QURI have been busy with software infrastructure and epistemics investigations this last year. We currently have two full-time employees: myself and Slava Matyuhin. Slava focuses on engineering; I do a mix of engineering, writing, and admin.

Our main work this year has been improving Squiggle and Squiggle Hub.

In the last few months we've also built Squiggle AI, which we've started getting feedback on and will write more about here shortly. Basically, we believe that BOTECs and cost-benefit models are good fits for automation. So far, with some tooling, we think we've created a system that produces decent first passes on many simple models. Ideally this will be something EAs benefit from directly, and something that could help inspire other epistemic AI improvements.
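For a sense of what such a first pass looks like, here's a minimal cost-benefit BOTEC in Squiggle. Every quantity and name below is made up for illustration; it's not output from Squiggle AI or an actual QURI model:

```
// Toy cost-benefit BOTEC; each input is an illustrative 90% CI
programCost = 50k to 200k      // total spend, USD
peopleReached = 1k to 10k
benefitPerPerson = 10 to 100   // USD-equivalent value per person
totalBenefit = peopleReached * benefitPerPerson
benefitCostRatio = totalBenefit / programCost
benefitCostRatio
```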

Alongside software development, we've posted a series of articles about forecasting, epistemics, and effective altruism. Recently these have focused on the intersection of AI and epistemics.

For 2025, we're looking to expand more throughout the EA and AI safety ecosystems. We have a backlog of Squiggle updates to tell people about, and a long list of new features we expect people to like. So far we've focused on product experimentation and development; we'd like to spend more time on education and outreach. In addition, we'll probably continue focusing heavily on AI: both on improving AI systems that write and audit cost-effectiveness models and similar artifacts, and on building cost-effectiveness models to help guide AI safety.

If you support this sort of work and are interested in chatting or donating, please reach out! You can reach me at ozzie@quantifieduncertainty.org. We're very focused on helping the EA ecosystem, and would really like to diversify our base of close contacts and donors. 

QURI is fiscally sponsored by Rethink Priorities. We have a simple donation page here. 


I still think that EA Reform is pretty important. I believe that there's been very little work so far on any of the initiatives we discussed here.

My impression is that the vast majority of the money CEA gets is from OP. In practice, I think this means that CEA represents OP's interests significantly more than I feel comfortable with. While I generally like OP a lot, I think OP's priorities are fairly distinct from those of the regular EA community.

Some things I'd be eager to see funded:
- Work with CEA to find specific pockets of work that the EA community might prioritize but OP wouldn't, and help fund those things.
- Fund other parties to help represent, engage, and oversee the EA community.
- Audit/oversee key EA funders (OP, SFF, etc.), as these often aren't reviewed by third parties.
- Make sure that the management of key EA orgs, including their boards, is strong.
- Make sure that many key EA employees and small donors are properly taken care of and provided with support. (I think that OP has reason to neglect this area, as it can be difficult to square with naive cost-effectiveness calculations.)
- Identify voices that want to tackle some of these issues head-on, and give them a space to do so. This could mean bloggers, key journalists, or potential future community leaders.
- Help encourage or set up new EA organizations that sit apart from CEA but help oversee/manage the movement.
- Help out the Community Health team at CEA. This seems like a very tough job that could arguably use more support, some of which might be best done outside of CEA.

Generally, I feel like there's a very significant vacuum of leadership and managerial visibility in the EA community. I think that this is a difficult area to make progress on, but also consider it much more important than other EA donation targets. 

Thanks for bringing this up. I was unsure what terminology would be best here.

I mainly have in mind Fermi models and more complex but theoretically similar estimations. But I believe this could extend gracefully to more complex models. I don't know of many great "ontologies of types of mathematical models," so I'm not sure how best to draw the line.

Here's a larger list that I think could work.

- Fermi estimates
- Cost-benefit models
- Simple agent-based models
- Bayesian models
- Physical or social simulations
- Risk assessment models
- Portfolio optimization models

I think this framework is probably more relevant for models that estimate an existing or future parameter than for models that optimize some process, if that helps at all.
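To illustrate the first item on that list, here's the classic "piano tuners in Chicago" Fermi estimate sketched in Squiggle. All inputs are rough guesses of mine, just to show the shape of such a model:

```
// Classic Fermi estimate: piano tuners in Chicago.
// Every input below is a guessed 90% confidence interval.
population = 2M to 3M
peoplePerHousehold = 2 to 3
pianoOwnershipRate = 0.03 to 0.1       // fraction of households with a piano
tuningsPerPianoPerYear = 0.5 to 1
tuningsPerTunerPerYear = 500 to 1500   // roughly 2-5 tunings per working day
pianos = (population / peoplePerHousehold) * pianoOwnershipRate
tuners = pianos * tuningsPerPianoPerYear / tuningsPerTunerPerYear
tuners
```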

Ah, I didn't notice that at the time; it wasn't obvious from the UI (you need to hover over the date to see the time it was posted).

Anyway, I'm happy this was resolved! Also, separately, kudos for writing this up; I'm looking forward to seeing where Metaculus goes this next year and beyond.

I feel like the bulk of this is interesting, but the title and opening come off as more grandiose than necessary. 

[This comment is no longer endorsed by its author]

This is neat to see!

Obviously, some of these items are much more likely than others to kill 100M+ people.

WW3 seems like a big wild card to me. I'd be curious whether there are any/many existing attempts to estimate what it would look like and how bad it would be.
