dschwarz

CEO @ FutureSearch
166 karma · Joined Nov 2022
futuresearch.ai

Bio

Interested in forecasting, epistemology, and AI. Long-time LW lurker, https://www.lesswrong.com/users/dschwarz

CEO of FutureSearch. Previously CTO of Metaculus and creator of Google's current internal prediction market.

How others can help me

Please reach out at dan at futuresearch dot ai if you're interested in getting involved in AI forecasting.

Comments (10)

FutureSearch is hiring! We're seeking Full-Stack Software Engineers, Research Engineers, and Research Scientists to help us build an AI system that can answer hard questions, including the forecasting and research questions critical for effective allocation of resources. You can read more about our motivations, and how it works.

Salary and benefits: $70k - $120k, depending on location and seniority. We aim to offer higher equity than startups of our size (6 people) typically do.

Location: Fully remote. We pay for travel every few months to work together around the US and Europe. We are primarily based in SF and London.

Apply on our careers page.

Hiring Process: 

  1. Answer some questions about your understanding of our domain (1 hour)
  2. Offline technical screen (2 hours)
  3. Online non-technical founder interview (45 minutes)
  4. Offer

This seems plausible, though it was perhaps more plausible 3 years ago. AGI is so mainstream now that I imagine there are many people who are motivated to advance the conversation but have no horse in the race.

If only the top cadre of AI experts are capable of producing the models, then yes, we might have a problem of making such knowledge a public good.

Perhaps philanthropists can provide bigger incentives to share than their incentives not to share.

Yeah, I do like your four examples of "just the numbers" forecasts that are valuable: weather, elections, what people believe, and "where is there lots of disagreement?" I'm more skeptical that these are useful, rather than curiosity-satisfying.

Election forecasts are a case in point. People will usually prepare for all outcomes regardless of the odds. And if you work in politics, deciding whom to choose for VP or where to spend your marginal ad dollar, you need models of voter behavior, not just a probability.

Probably the best case for just-the-numbers is your point (b), shift-detection. I echo your point that many people seem struck by the shift in AGI risk on the Metaculus question.

I’m worried that in the context of getting high-stakes decision makers to use forecasts, some of the demand for rationales is due to lack of trust in the forecasts.

Undoubtedly some of it is. Anecdotally, though, high-level folks frequently take one (or zero) glances at the calibration chart, nod, and then say "but how am I supposed to use this?", even on questions I pick to be highly relevant to them, echoing the paper I cited that found "decision-makers lacking interest in probability estimates."

Even if you're (rightly) skeptical about AI-generated rationales, I think the point holds for human rationales. One example: Why did DeepMind hire Swift Centre forecasters when they already had Metaculus forecasts on the same topics, as well as access to a large internal prediction market?

I suppose I left it intentionally vague :-). We're early, and are interested in talking to research partners, critics, customers, job applicants, funders, forecaster copilots, writers.

We'll list specific opportunities soon, consider this to be our big hello.

Agreed, Eli. I'm still working to understand where the forecasting ends and the research begins. You're right that the distinction is not whether you put a number at the end of your research project.

In AGI (or other hard sciences) the work may be very different, and done by different people. But in other fields, like geopolitics, I see Tetlock-style forecasting as central, even necessary, for research.

At the margin, I think forecasting should be more research-y in every domain, including AGI. Otherwise I expect AGI forecasts will continue to be used, while not being very useful.

Nice post. I also have been exploring reasoning by analogy. I like some of the ones in your sheet, like "An international AI regulatory agency" --> "The International Atomic Energy Agency".

I think this effort could be a lot more concrete. The "AGI" section could have a lot more about specific AI capabilities (math, coding, writing) and compare them to recent technological capabilities (e.g. Google Maps, Microsoft Office) or human professions (accountant, analyst).

The more concrete it is, the more inferential power. I think the super abstract ones like "AGI" --> "Harnessing fire" don't give much more than a poetic flair to the nature of AGI.

Nicely done! The college campus forecasting clubs and competition model feels extremely promising to me. Really great to see a dedicated effort start to take off.

I'm especially happy to see an ACX Manifund mini-grant realized so quickly. I admit I was skeptical of these grants.

Excited to see the next iteration of this, and hopefully many more to come on college campuses all over!

Metaculus is getting better at writing quickly-resolving questions, and we can probably help write some good ones for the next iteration of OPTIC.

One develops a certain eye for news that is interesting, forecastable, and short-term. Our Beginner tournaments (current, 2023 Q1, 2022 Q4) explicitly only have questions that resolve within 1 week, so you can find some inspiration there.

 

If you find me wandering around the Gather Town tomorrow at the event, be warned that I may talk to whoever will listen about the Tyler Cowen vs. Scott Alexander kerfuffle on this topic going on right now :-).

[I wrote this comment on LW, copying to this post. Shouldn't that happen automatically?]

Nice post! I'll throw another signal boost for the Metaculus hackathon that OP links, since this is the first time Metaculus is sharing its full database of 1M individual forecasts (not just the database of questions & resolutions, which is already available). You have to apply to get access, though. I'll link it again even though OP already did: https://metaculus.medium.com/announcing-metaculuss-million-predictions-hackathon-91c2dfa3f39

There are nice cash prizes too.

As the OP writes, I think most of the ideas here would be valid entries in the hackathon, though the emphasis is on forecast aggregation & methods for scoring individuals. I'm particularly interested in the decay-of-predictions idea. I don't think we know how well predictions age, or what the right strategy for updating your predictions should be on long-running questions.
