Ozzie Gooen

10304 karma · Berkeley, CA, USA

Bio

I'm currently researching forecasting and epistemics as part of the Quantified Uncertainty Research Institute.

Sequences (1)
Ambitious Altruistic Software Efforts

Comments: 944

Topic contributions: 4

Thanks for reaching out about this; it seems like a question that others likely have too.

I know a handful of people who could retire soon, but instead stay active in the space.

At a high level, I really don't think that [being able to retire] should change your plans that much. The vast majority of recommendations from 80,000 Hours, and work done by Effective Altruists, wouldn't be impacted by this. For instance, for most of the important positions, money to hire a specific candidate isn't a major bottleneck - if you're good enough to provide a lot of value, then a livable/basic salary really shouldn't be a deal-breaker.

There are some situations where it can be very valuable to do independent projects for a few years without needing to raise funding. But these are pretty niche, and require a lot of knowledge about what to do.

From what I've seen, most people who can retire and want to help out typically don't really want to do the work, or don't want to accept positions that aren't very high status (as is typically needed to at least get started in a new position). These people seem to have a habit of trying a little at something they would enjoy or identify with, finding that it doesn't work great, then completely giving up.

So while having the extra money can be useful, it can just as easily be long-term damaging for making an impact. I think it can be very tempting to just enjoy the retirement life. 

All this to say: if you think that might be a risk for you, I'd recommend thinking long and hard about it, considering how much you care about making an impact with the rest of your life, and then coming up with strategies to make sure you actually do that.

Personally, I think the easy thing to advise is something like, "keep as much money as you basically need to not worry too much about your future", generally donate everything above that threshold, then think of yourself as a regular person attempting a career in charity/altruism. The good organizations will still pay you a salary, and you can donate (basically) everything you make.

There was discussion on LessWrong:
https://www.lesswrong.com/posts/YqrAoCzNytYWtnsAx/the-failed-strategy-of-artificial-intelligence-doomers

Obvious point, but I assume that [having a bunch of resources, mainly money] is a pretty safe bet for these worlds. 

AI progress could/should bring much better ideas of what to do with said resources/money as it happens. 

It looks like Concept2, a popular sports equipment company, just put its ownership into a Purpose Trust:

As we look toward stepping back from day-to-day operations, we have researched options to preserve the company’s long-held values and mission into the future, including Purpose Trust ownership. With a Purpose Trust as owner, profits are reinvested in the business and also used to fulfill the designated purpose of the Trust. Company profits do not flow to individual shareholders or beneficiaries. And a Purpose Trust can endure in perpetuity.

We are excited to announce we have transferred 100% of Concept2’s ownership to the Concept2 Perpetual Purpose Trust as of January 1, 2025. The Concept2 Perpetual Purpose Trust will direct the management and operations of Concept2 in a manner that maintains continuity. The value we create through our business will be utilized for a greater purpose in serving the Concept2 community. Our vision and mission will carry on in the hands of our talented employee base, and Concept2 will remain the gold standard for providing best in-class products and unmatched customer service. We hope you share in our enthusiasm and will join us on this next phase of our journey as a company.

I asked Perplexity for other Purpose Trusts; it mentioned that Patagonia is one, plus a few other companies I don't know of.

My impression is that B-Corps have almost no legal guarantees of public good, and that 501(c)(3)s also have minimal guarantees (if a 501(c)(3) fails to live up to its mission, the worst that happens is that it loses its charitable status and thus tax-deductibility, which isn't that bad otherwise).

I imagine that Trusts could be far more restrictive (in a good way). I've worked with a company that set up Irrevocable Trusts before; I think these might be the structure that provides the best assurances we currently have.

I find that a lot of the challenge of making Fermi estimates is in creating early models and coming up with various ways to parameterize things. LLMs have been very good at this, in my opinion.
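
To give a sense of what I mean by parameterization, here's a minimal sketch of the kind of early Squiggle model an LLM can help draft. The scenario, variable names, and numbers are all hypothetical; the point is just that most of the work is deciding which quantities to estimate and how they combine.

```
// Hypothetical Fermi estimate: rough benefit/cost of a small workshop series.
// Each "a to b" is a 90% credible interval, not a point estimate.
workshopsPerYear = 5 to 15
attendeesPerWorkshop = 8 to 25
valuePerAttendee = 30 to 400 // dollars of value created per attendee; very uncertain
costPerWorkshop = 200 to 1200 // venue, time, and materials

totalValue = workshopsPerYear * attendeesPerWorkshop * valuePerAttendee
totalCost = workshopsPerYear * costPerWorkshop
benefitCostRatio = totalValue / totalCost
benefitCostRatio
```

Most of the value of an LLM here is in proposing the decomposition itself (which variables to include, what ranges are plausible); the arithmetic is trivial once the structure exists.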

I wrote more in the "How good is it?" section of the Squiggle AI blog post.

https://forum.effectivealtruism.org/posts/jJ4pn3qvBopkEvGXb/introducing-squiggle-ai#How_Good_Is_It_
 

We don't yet have quantitative measures of output quality, partly due to the challenge of establishing ground truth for cost-effectiveness estimates. However, we do have a variety of qualitative results.

Early Use

As the primary user, I (Ozzie) have seen dramatic improvements in efficiency - model creation time has dropped from 2-3 hours to 10-30 minutes. For quick gut-checks, I often find the raw AI outputs informative enough to use without editing.

Our three Squiggle workshops (around 20 total attendees) have shown encouraging results, with participants strongly preferring Squiggle AI over manual code writing. Early adoption has been modest but promising - in recent months, 30 users outside our team have run 168 workflows total.

Accuracy Considerations

As with most LLM systems, Squiggle AI tends toward overconfidence and may miss crucial factors. We recommend treating its outputs as starting points rather than definitive analyses. The tool works best for quick sanity checks and initial model drafts.

Current Limitations

Several technical constraints affect usage:

  • Code length soft-caps at 200 lines
  • Frequent workflow stalls from rate limits or API balance issues
  • Auto-generated documentation is decent but has gaps, particularly in outputting plots and diagrams

While slower and more expensive than single LLM queries, Squiggle AI provides more comprehensive and structured output, making it valuable for users who want detailed, adjustable, and documentable reasoning behind their estimates.
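
To make "adjustable" concrete: because the assumptions in a generated model live in named variables, a reader who disagrees with one input can override it and re-run the model. Continuing the hypothetical sketch above (again, all names and numbers are made up):

```
// Same hypothetical model, with one assumption overridden.
// Suppose a reviewer thinks the value per attendee is lower and less uncertain:
workshopsPerYear = 5 to 15
attendeesPerWorkshop = 8 to 25
valuePerAttendee = 20 to 100 // overridden from the original 30 to 400
costPerWorkshop = 200 to 1200

totalValue = workshopsPerYear * attendeesPerWorkshop * valuePerAttendee
totalCost = workshopsPerYear * costPerWorkshop
benefitCostRatio = totalValue / totalCost
benefitCostRatio
```

That kind of cheap, legible disagreement is harder to get from a single free-form LLM answer.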

I've heard from friends outside the EA scene that they think most AI-risk workers have severe mental health issues like depression and burnout.

I don't mean to downplay this issue, but I think a lot of people get the wrong idea.

My hunch is that many of the people actively employed and working on AI safety are fairly healthy and stable. Many are very well paid and have surprisingly nice jobs/offices.

I think there is this surrounding ring of people who try to enter this field, who have a lot of problems. It can be very difficult to get the better positions, and if you're stubborn enough, this could lead to long periods of mediocre management and poor earnings.

I think most of the fully employed people are either very busy or keep to a limited social circle, so few outsiders will meet them. Instead, outsiders meet people who are AI-safety-adjacent or trying to enter the field, and those people can have a much tougher time.

So to me, most people who are succeeding in the field come across a lot like typical high-achievers, with the profiles to match. And people not succeeding in the field come across like people trying and not succeeding in other competitive fields. I'd expect that statistics/polls would broadly reflect this.

All that to say, if you think that people shouldn't care about AI safety / x-risks because then they'll go through intense depression and anxiety, I think you might be missing some of the important demographic details.

In some ways, Prohibition didn't seem that bad to me?

There are two clear arguments you bring up:
1. If the government could effectively ban alcohol, it shouldn't, because doing so is anti-liberty.
2. The government won't be able to effectively ban alcohol.

It seems like (2) is largely an empirical question about policy. I think that today, most liberals are on the side of drug legalization, especially because of considerations like (2).

Personally, I don't have massive problems with (1). There's a concrete question of whether alcohol is net-harmful, and if so, whether this is something the government should prioritize. There are a lot of empirical questions to ask here. That said, if alcohol were net-harmful enough, and I thought the government could effectively ban it on net, that would seem good to me. This is the kind of common question where utilitarians and libertarians would often clash.

All that said, as pointed out in other comments, a "total ban" is often not the ideal policy (taxes seem better), but sometimes the other options are just too complex or unpopular.

Lastly, note that Prohibition ended, and now we have more information. It lasted from 1920 to 1933, a fairly short time for a major policy. I'm a big fan of trying out certain policies, then canceling them if they are clear failures. (That said, I could certainly imagine cheaper experiments than a national-level 13-year ban)

You can also bet on how many participants this will get, here:
https://manifold.markets/OzzieGooen/number-of-applicants-for-the-300-fe

Great work with this! I particularly enjoyed the photos - it's great to see community members across the globe. 

I've heard some really positive things about Cape Town before, and generally am excited for more activity in Africa, so I'm quite happy to hear of events like this. 

"just on a longer timeline than the unrealistic ones that were once touted"

I spent a few minutes digging into prediction markets to see if sentiment there has changed. I couldn't find good questions from 2023 or before that are still open. But here are two that have been open for about a year - and in both cases, it doesn't seem like people have become more cynical over that period.

So it roughly seems like the forecasting community hasn't really updated downwards in 2024, at least.
