
Update: We've announced a $1,000 prize for early experimentation with Squiggle.

Introduction

Squiggle is a special-purpose programming language for probabilistic estimation[1]. Think: "Guesstimate as a programming language." Squiggle is free and open-source.

Our team has been using Squiggle for QURI estimations for the last few months and found it very helpful. The Future Fund recently began experimenting with Squiggle for estimating the value of different grants.

Now we're ready for others to begin experimenting with it publicly. The core API should be fairly stable; we plan to add functionality but intend to limit breaking changes[2].

We'll do our best to summarize Squiggle for a diverse audience. If any of this seems intimidating, note that Squiggle can be used in ways not much more advanced than Guesstimate. If it looks too simple, feel free to skim or read the docs directly.

Work on Squiggle!

We're looking to hire people to work on Squiggle's main tooling. We're also interested in volunteers or collaborators for the ecosystem (get in touch!).

Public Website, Github Repo, Previous LessWrong Sequence 

What Squiggle is and is not

What Squiggle Is

  • A simple programming language for doing math with probability distributions.
  • An embeddable language that can be used in Javascript applications. This means you can use Squiggle directly in other websites.
  • A tool to encode functions as forecasts that can be embedded in other applications.

What Squiggle Is Not

  • A complete replacement for enterprise Risk Analysis tools. (See Crystal Ball, @Risk, Lumina Analytica)
  • A probabilistic programming language. Squiggle does not support Bayesian inference. (Confusingly, "Probabilistic Programming Language" really refers to this specific class of language and is distinct from "languages that allow for using probability.")
  • A tool for substantial data analysis. (See programming languages like Python or Julia)
  • A programming language for anything other than estimation.
  • A visually-driven tool. (See Guesstimate and Causal)

Strengths

  • Simple and readable syntax, especially for dealing with probabilistic math.
  • Fast for relatively small models. Useful for rapid prototyping.
  • Optimized for using some numeric and symbolic approaches, not just Monte Carlo.
  • Embeddable in Javascript.
  • Free and open-source (MIT license).

Weaknesses

  • Limited scientific capabilities.
  • Much slower than serious probabilistic programming languages on sizeable models.
  • Can't do backward Bayesian inference.
  • Essentially no support for libraries or modules (yet).
  • Still very new, so a tiny ecosystem.
  • Still very new, so there are likely math bugs.
  • Generally not as easy to use as Guesstimate or Causal, especially for non-programmers.

Example: Piano Tuners

Note: Feel free to skim this section, it's just to give a quick sense of what the language is.

Say you're estimating the number of piano tuners in New York City. You can build a simple model of this, like so.

// Piano tuners in NYC over the next 5 years
populationOfNewYork2022 = 8.1M to 8.4M // This means that you're 90% confident the value is between 8.1 and 8.4 Million.

proportionOfPopulationWithPianos = (0.002 to 0.01) // We assume there are almost no people with multiple pianos

pianoTunersPerPiano = {
    pianosPerPianoTuner = 2k to 50k // This is artificially narrow, to help graphics later
    1 / pianosPerPianoTuner
} // This {} syntax is a block. Only the last line of it, "1 / pianosPerPianoTuner", is returned.

totalTunersIn2022 = (populationOfNewYork2022 * proportionOfPopulationWithPianos * pianoTunersPerPiano)

totalTunersIn2022
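For readers more familiar with general-purpose languages, here is a rough Monte Carlo equivalent in Python. It assumes that Squiggle's `low to high` syntax (for positive bounds) corresponds to a lognormal whose 5th and 95th percentiles are `low` and `high`; the `ci90_lognormal` helper name is ours, introduced for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

def ci90_lognormal(low, high, size):
    """Sample a lognormal whose 5th/95th percentiles are (low, high) --
    an assumed stand-in for Squiggle's `low to high` syntax."""
    z = 1.6448536269514722  # 95th percentile of the standard normal
    mu = (np.log(low) + np.log(high)) / 2
    sigma = (np.log(high) - np.log(low)) / (2 * z)
    return rng.lognormal(mu, sigma, size)

population = ci90_lognormal(8.1e6, 8.4e6, N)
proportion_with_pianos = ci90_lognormal(0.002, 0.01, N)
pianos_per_tuner = ci90_lognormal(2e3, 50e3, N)

total_tuners = population * proportion_with_pianos / pianos_per_tuner
print(np.quantile(total_tuners, [0.05, 0.5, 0.95]))
```

The Squiggle version is notably shorter, which is much of the point: the `to` syntax and implicit distribution arithmetic do the work the helper function does here.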


Now let's take this a bit further. Let's imagine that you think that NYC will rapidly grow over time, and you'd like to estimate the number of piano tuners for every point in time for the next few years.

// ...previous code
//Time in years after 2022
populationAtTime(t) = {
    averageYearlyPercentageChange = -0.01 to 0.05 // We're expecting NYC to continuously and rapidly grow. We model this as having a constant growth of between -1% and +5% per year.
    populationOfNewYork2022 * ((averageYearlyPercentageChange + 1) ^ t)
}
median(v) = quantile(v, 0.5)
totalTunersAtTime(t) = (populationAtTime(t) * proportionOfPopulationWithPianos * pianoTunersPerPiano)
{
    populationAtTime: populationAtTime,
    totalTunersAtTimeMedian: {|t| median(totalTunersAtTime(t))}
}
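Here is a hedged Python sketch of the same growth model, under the same assumed mapping of ranges to distributions: a lognormal for the positive population range, and a normal 90% CI for the growth range, since it crosses zero.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
z = 1.6448536269514722  # 95th percentile of the standard normal

# `8.1M to 8.4M` as a lognormal with those 5th/95th percentiles (assumed mapping)
mu = (np.log(8.1e6) + np.log(8.4e6)) / 2
sigma = (np.log(8.4e6) - np.log(8.1e6)) / (2 * z)
population_2022 = rng.lognormal(mu, sigma, N)

# `-0.01 to 0.05` as a normal with that 90% CI, since the range crosses zero (assumed)
growth = rng.normal(0.02, 0.06 / (2 * z), N)

def population_at_time(t):
    """Population t years after 2022 under constant compounding growth."""
    return population_2022 * (1 + growth) ** t

# Median population (in millions) at a few horizons
print([round(float(np.median(population_at_time(t))) / 1e6, 2) for t in (0, 20, 40)])
```

Note that the uncertainty compounds: the distribution at t=40 is far wider than at t=0, which is what the playground's fan-shaped plot of `populationAtTime` is showing.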

Using settings in the playground, we can show this over a 40-year period. 

You can play with this directly at the playground here.

Some Playground details

  1. If you hover over the populationAtTime variable, you can see the distribution at any point.
  2. You can change “sample count” to change the simulation size. It starts at 1,000, which is good for experimentation, but you’ll likely want to increase this number for final results. The above graphs used sampleCount=1,000,000.
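The reason larger sample counts help is standard Monte Carlo behavior: estimation noise shrinks roughly as 1/sqrt(sample count). A small Python illustration (not Squiggle itself) of how much a median estimate jitters at two sample sizes:

```python
import numpy as np

rng = np.random.default_rng(0)

def median_jitter(sample_count, runs=100):
    """Std. dev. of repeated median estimates of a lognormal(0, 1)
    at a given Monte Carlo sample count."""
    estimates = [np.median(rng.lognormal(0, 1, sample_count)) for _ in range(runs)]
    return float(np.std(estimates))

small, large = median_jitter(1_000), median_jitter(100_000)
print(small, large)
```

Going from 1,000 to 1,000,000 samples cuts the noise by roughly a factor of 30, at the cost of longer run times.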

If you want to get ambitious, you can. Consider changes like:

  • Instead of doing the estimate for New York City, parameterize it to work for any global city.
  • Instead of just estimating the number of piano tuners, add a parameter to estimate most professions.
  • Instead of time being "years from 2022", change it to be a Time value. Allow times from 0AD to 3000AD.
  • Add inputs for future conditionals. For example, job numbers might look different in worlds with intense automation.
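As a sketch of the first extension, here is a hypothetical parameterized version in Python (a Monte Carlo stand-in for the Squiggle model; `ci90_lognormal` and reusing the NYC piano constants for other cities are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def ci90_lognormal(low, high, size):
    """Lognormal with 5th/95th percentiles at (low, high) -- an assumed
    stand-in for Squiggle's `low to high` syntax."""
    z = 1.6448536269514722
    mu = (np.log(low) + np.log(high)) / 2
    sigma = (np.log(high) - np.log(low)) / (2 * z)
    return rng.lognormal(mu, sigma, size)

def tuners_for_city(pop_low, pop_high, n=100_000):
    """Median piano-tuner estimate given a 90% CI on a city's population."""
    population = ci90_lognormal(pop_low, pop_high, n)
    proportion_with_pianos = ci90_lognormal(0.002, 0.01, n)
    pianos_per_tuner = ci90_lognormal(2e3, 50e3, n)
    return float(np.median(population * proportion_with_pianos / pianos_per_tuner))

print(tuners_for_city(8.1e6, 8.4e6))  # New York City
print(tuners_for_city(0.9e6, 1.0e6))  # a hypothetical smaller city
```

In Squiggle, the same parameterization is just a function of the population range; the per-city piano ownership and tuner-productivity inputs would ideally become parameters too.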

Using Squiggle

You can currently interact with Squiggle in a few ways:

Playground

The Squiggle Playground is a friendly tool for working with small models and making prototypes. You can make simple, shareable links.

Visual Studio Code Extension

There's a simple Visual Studio Code extension for running and visualizing Squiggle code. We find that VS Code is a helpful editor for managing larger Squiggle setups.

(This example is a playful, rough experiment of an estimate of Ozzie's life output.)

Typescript Library

Squiggle is built using ReScript and is accessible via a simple Typescript library. You can use this library either to run Squiggle code in full or to call specific functions within Squiggle (though this latter functionality is very minimal).

React Components Library

All of the components used in the playground and documentation are available in a separate component NPM repo. You can see the full Storybook of components here.

Observable

You can use Squiggle Components in Observable notebooks. Sam Nolan put together an exportable Observable Notebook of the main components. 

Public Squiggle Models

Early Access

We're calling this version "Early Access." It's more stable than some alphas, but we'd like it to be more robust before an official public beta. Squiggle is experimental, and we plan on trying out exploratory effective altruist use cases for at least the next several months. You can think of it like some video games in "early access": they often have lots of exciting functionality but deprioritize the polish that published games typically have. Great for a smaller group of engaged users, bad for larger organizations that expect robust stability or accuracy.

Should effective altruists really fund and develop their own programming language?

It might seem absurd to create an "entire programming language" that's targeted at one community. However, after some experimentation and investigation, we concluded that it seems worth trying. Some points:

  • Domain-specific languages are already fairly common for sizeable companies with specific needs. They can be easier to build than one would think.
  • We don't need a large library ecosystem. Much of the difficulty for programming languages comes from their library support, but the needs for most Squiggle programs are straightforward.
  • The effective altruist community looks pretty unusual in its interest in probabilistic estimation, so it's not surprising that few products are a good fit.
  • The main alternative options are proprietary and expensive risk analysis tools. It seems critical to have our community's models be public and easy to audit. Transparency doesn't work well if models require costly software to run.
  • Much of the necessary work is in making decent probability distribution tooling for Javascript. This work could itself benefit forecasting and estimation websites. The "programming language" aspects might only represent one half of the work of the project.

Questions & Answers

What makes Squiggle better than Guesstimate?

Guesstimate is great for small models but can break down for larger or more demanding models. Squiggle scales much better.

Plain text formats also allow for many other advantages. For example, you can use Github workflows with versioning, pull requests, and custom team access. 

Should I trust Squiggle for my nuclear power plant calculations?

No! Squiggle is still early and meant much more for approximation than precision. If the numbers are critical, we recommend checking them with other software. See this list of key known bugs and this list of gotchas.

Can other communities use Squiggle?

Absolutely! We imagine it could be a good fit for many business applications. However, our primary focus will be on philanthropic use cases for the foreseeable future. 

Is there an online Squiggle community?

There's some public discussion on Github. There's also a private EA/epistemics Slack with a few channels for Squiggle; message Ozzie to join. 

Future Work

Again, we're looking to hire a small team to advance Squiggle and its ecosystem. Some upcoming items:

  • Improve the core of the language. Add error types, improve performance, etc.
  • Import functionality to allow Squiggle models to call other datasets or other Squiggle models.
  • Command line tooling for working with large Squiggle codebases.
  • Public web applications to share and discuss Squiggle models and values.
  • Integration with forecasting platforms so that people can forecast using Squiggle models. (Note: If you have a forecasting platform and are interested in integrating Squiggle, please reach out!)
  • Integration with more tools. Google Docs, Google Sheets, Airtable, Roam Research, Obsidian, Python, etc. (Note: This is an excellent area for external help!)
  • Make web components for visualization and analysis of probability distributions in different circumstances. Ideally, most of these could be used without Squiggle.

Contribute to Squiggle

If you'd like to contribute to Squiggle or the greater ecosystem, there's a lot of neat stuff to be done. 

  • Build integrations with other tools (see "Future Work" section above).
  • Improve the documentation. It's currently fairly basic.
  • Give feedback on the key features and the tools. (Particularly useful if you have experience with programming language design.)
  • Suggest new features and report bugs.
  • Audit and/or improve the internal numeric libraries.
  • Make neat models and post them to the EA Forum or other blogs.
  • Experiment with Squiggle in diverse settings. Be creative. There must be many very clever ways to use it with other Javascript ecosystems in ways we haven't thought of yet.
  • Build tools on top of Squiggle. For example, Squiggle could help power a node-based graphical editor or a calendar-based time estimation tool. Users might not need to know anything about Squiggle; it would just be used for internal math.
  • Implement Metalog distributions in Javascript. We really could use someone to figure this one out.
  • Make open-source React components around probability distributions. These could likely be integrated fairly easily with Squiggle.

If funding is particularly useful or necessary to any such endeavor, let us know.

Organization and Funding

Squiggle is one of the main projects of The Quantified Uncertainty Research Institute. QURI is a 501(c)(3) primarily funded by the LTFF, SFF, and Future Fund.

Contributors

Squiggle has very much been a collaborative effort. You can see an updating list of contributors here.


Many thanks to Nuño Sempere, Quinn Dougherty, and Leopold Aschenbrenner for feedback on this post.

 

  1. ^

Risk Analysis is a good term for what Squiggle does, but that term can be off-putting to those not used to it.

  2. ^

    Right now Squiggle is in version 0.3.0. When it hits major version numbers (1.0, 2.0), we might introduce breaking changes, but we’ll really try to limit things until then. It will likely be three to twenty months until we get to version 1.0. 

Comments (11)



This looks awesome and I'm looking forward to playing with it!

One minor point of feedback: I think the main web page at https://www.squiggle-language.com/, as well as the github repo readme, should have 1-3 tiny, easy-to-comprehend examples of what Squiggle is great at.

Good point! I'll see about adding more strong examples. I think we could add a lot to improve documentation & education.

One example you might find useful was generated here, where someone compared Squiggle to a Numpy implementation. It's much simpler.

https://forum.effectivealtruism.org/posts/ycLhq4Bmep8ssr4wR/quantifying-uncertainty-in-givewell-s-givedirectly-cost?commentId=i9xfwkxfimEALFGcq

I played around with Squiggle for hours after reading the GiveDirectly uncertainty quantification, and I enjoyed it a lot. That said, what is the value add of Squiggle over probabilistic programming libraries like Stan or PyMC3? For less coding work and less complicated models, Guesstimate does fine. For more coding work and more complicated models, Stan or PyMC3 work well. Is Squiggle meant to capture the in between case of people who need to make complicated models but don't want to do a lot of coding work? Is that too niche?

Thanks for the question! 

I think the big-picture difference is something like, "What is the language being optimized for?" When designing a language, there are a ton of different trade-offs. PPLs are typically heavily optimized for data analysis on large datasets and fancy MCMC algorithms. Squiggle is more for intuitive personal estimation.

Some specific differences:

  • Squiggle is optimized much more for learnability and readability (for the models that it works with). This is good for cases where we want to get people up to speed quickly, or for having others audit models.
  • Squiggle is in Javascript, so it can run directly in the browser and other Javascript apps. (Note: WebPPL is also in JS, but is very different).
  • The boot-up time for Squiggle, for small models, is trivial (<100ms). For many PPLs it can be much longer (10s+).

If you're happy with using a PPL for a project, it's probably a good bet! If it really doesn't seem like a fit, consider Squiggle.

Do you have any plans for interoperability with other PPLs or languages for statistical computing? It would be pretty useful to be able to, e.g. write a model in Squiggle and port it easily to R or to PyMC3, particularly if Bayesian updating is not currently supported in Squiggle. I can easily imagine a workflow where we use Squiggle to develop a prior, which we'd then want to update using microdata in, say, Stan (via R).

I think that for the particular case where Squiggle produces a distribution (as opposed to a function that produces a distribution), this is/should be possible.

No current plans. I think it would be tricky, because Squiggle supports some features that other PPLs don't, and because some of them require stating information about variables upfront, which Squiggle doesn't. Maybe it's possible for subsets of Squiggle code.

Would be happy to see experimentation here.

I think one good workflow would be to go the other way; use a PPL to generate certain outcomes, then cache/encode these in Squiggle for interactive use. I described this a bit in this sequence. https://www.lesswrong.com/s/rDe8QE5NvXcZYzgZ3
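To illustrate the "develop a prior in Squiggle, update it elsewhere" workflow concretely, here is a minimal Python sketch with made-up numbers. It fits a normal to prior samples and applies a conjugate normal-normal update, standing in for what one might do with microdata in Stan; the prior samples and data values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for samples exported from a Squiggle model (hypothetical values)
prior_samples = rng.normal(10, 2, 100_000)
mu0, sigma0 = float(prior_samples.mean()), float(prior_samples.std())

data = np.array([12.1, 11.8, 12.5, 12.0])  # hypothetical microdata
sigma_lik = 1.0                            # assumed known observation noise

# Conjugate normal-normal posterior update
precision = 1 / sigma0**2 + len(data) / sigma_lik**2
post_mu = (mu0 / sigma0**2 + data.sum() / sigma_lik**2) / precision
post_sigma = precision ** -0.5
print(post_mu, post_sigma)
```

Even four observations pull the posterior mean most of the way toward the data and shrink the uncertainty well below the prior's, which is the kind of backward inference Squiggle itself doesn't do.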

Just want to say this looks super cool and I can't wait to try it out. Congratulations QURI team!

Thanks! It's appreciated. 

This looks great! I'm just starting a project that I'll definitely try it out for.

When you say it doesn't have 'robust stability and accuracy', can you be more specific? How likely is it to return bad values? And how fast is it progressing in this regard?
