Saul Munn

@ OPTIC; Manifest; Manifund
415 karma · Joined Jul 2022 · Pursuing an undergraduate degree
saulmunn.com

Comments (48)

Meta: Thanks for your response! I recognize that you are under no obligation to comment here, which makes me all the more appreciative that you're continuing the conversation. <3

***

I've engaged with the Collinses' content for about a minute or two in total, and with them personally for the equivalent of half an email chain and a tenth of a conversation. Interpersonally, I've found them quite friendly/reasonable people. Their shared panel at the last Manifest was one of the highest rated of the conference; multiple people came up to me to tell me that they really enjoyed it. On their actual content, I think Austin and/or Rachel have much more knowledge/takes/context — I deferred to them re: "does their content check out." Those were my reasons for inviting them back.

I'll add that there is a class of people who have strongly-worded, warped, and especially inflammatory headlines (or tweets, etc), but whose underlying perspectives/object-level views can often be much more reasonable — or at least I strongly respect the methods by which they go about their thoughts. There's a mental wince to reading one of their headlines, where in my head I go "...oh, god. Man, I know what you're trying to say, but couldn't you... I dunno, say it nicely? in a less inflammatory way, or something?" And I often find that these people are actually quite kind/nice IRL — but you read their Twitter, or something, and you think "...oh man, these are some pretty wild takes."

I'm not too sure how to act in these scenarios/how to react to these types of people. Still, the combination of [nice/bright IRL] + [high respect for Rachel & Austin's perspective on object-level things] = the Collinses probably fall into the category of "I really dislike the fact that they use clickbaity, inflammatory titles to farm engagement, but they (probably) have high-quality object-level takes and I know that they're reasonable people IRL."

I appreciate you bringing their YouTube channel to my attention; I hadn't seen it before. I'm not heartened by the titles, though I haven't reviewed the content.

***

Again — thanks for your comments. I'm going to continue copying the note below in this and following comments, both for you & for posterity.

(To Ben & anyone else who’s reading this: I’d be happy to hop on a call with anyone who’d like to talk more about any of the decisions we’ve made, take notes on/recording of the call, then post the notes/recording publicly here. https://savvycal.com/saulmunn/manifest )

Hi Ben! Thanks for your comment.

I'm curious: what do you think the upsides and the downsides are?

I'll also add to what Austin said — in general, I think the strategy of [inviting highly accomplished person in field X to a conference about field Y] is an underrated way to cross-pollinate between fields. I think this is especially true of something like prediction markets, which by necessity are applicable across disciplines; prediction markets are useless absent something on which to predict. This is the main reason I'm in favor of inviting e.g. Rob Miles, Patrick McKenzie, Evan Conrad, Xander Balwit & Nico McCarty, Dwarkesh Patel, etc — many of whom don't actively directly straightforwardly obviously clearly work in prediction markets/forecasting (the way that e.g. Robin Hanson, Nate Silver, or Allison Duettmann do). It's pretty valuable to import intellectual diversity into the prediction market/forecasting community, as well as to export the insights of prediction markets/forecasting to other fields.

(And also, a note to both Ben & anyone else who’s reading this: I’d be happy to hop on a call with anyone who’d like to talk more about any of the decisions we’ve made, take notes on/recording of the call, then post the notes/recording publicly here. https://savvycal.com/saulmunn/manifest )

Hi Rockwell, thanks for voicing your concern — I appreciate the time/effort you took to write out & post the comment.

To clarify: is the thing you’re worried about those “whose public reputation is seen as pro-eugenics,” or those who are in fact pro-eugenics in a harmful/negative way?

I can understand why you might dislike platforming people who inhabit either/both of those categories. I’d like to clarify exactly what you mean before responding.

(And also, a note to both Rockwell & anyone else who’s reading this: I’d be happy to hop on a call with anyone who’d like to talk more about any of the decisions we’ve made, take notes on/recording of the call, then post the notes/recording publicly here. https://savvycal.com/saulmunn/manifest )

hey jason, thanks for leaving your thoughts! i wrote a lot below — if you'd prefer to talk about this over a call, i'd be down! (i'd also be happy to take notes on the call and post them in this thread, for anyone who comes across it later. i'll update this comment if/when that happens.)

***

it looks like you had some uncertainty about how we're concretely planning to review projects that will still be uncertain ~12 months from now. the way the philanthropies will review projects a year from now is described here:

Final oracular funders will operate on a model where they treat retrospective awards the same as prospective awards, multiplied by a probability of success. For example, suppose LTFF would give a $20,000 grant to a proposal for an AI safety conference, which they think has a 50% chance of going well. Instead, an investor buys the impact certificate for that proposal, waits until it goes well, and then sells it back to LTFF. They will pay $40,000 for the certificate, since it’s twice as valuable as it was back when it was just a proposal with a 50% success chance.
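to make the arithmetic in that pricing rule concrete, here's a tiny sketch (purely illustrative; not manifund's actual code, and the function name is mine):

```python
# illustrative sketch of the pricing rule quoted above (not manifund's real code).
def retrospective_price(prospective_grant: float, p_success: float) -> float:
    """price of a completed, successful project's certificate: the funder treats
    a retrospective award like the prospective award divided by the probability
    of success it assigned at proposal time."""
    return prospective_grant / p_success

# the quoted LTFF example: a $20,000 prospective grant at 50% -> $40,000 retrospective.
print(retrospective_price(20_000, 0.5))  # 40000.0
```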

however, you also have a more substantive point:

I'm having a hard time understanding the value proposition of devoting resources to predict how grantmakers will view the projected impact of these projects in ~12 months' time on ~the same information.

if that ends up being the case — that, in ~12 months, grantmakers are acting on ~the same information about the projects — i think we will have provided little value ex post. however, i think the key here is the extent to which new information is revealed in the next ~12 months. here's how i'm thinking about it:

  • there will be less uncertainty about the impactfulness of most projects in ~12 months compared to now. that delta in uncertainty is really important, and the more of it there is, the better. (i'll touch on the examples you listed in a moment.)
  • however, the delta in uncertainty changes from one project to another:
    • for some projects, you get a lot of new information from the point in time they first received investment to the point in time the grantmakers review them.
    • ...but for others, you get little new information; in your words, "grantmakers will [be] view[ing] the projected impact of these projects in ~12 months' time on ~the same information."
  • impact markets work best when they list projects in the first category — ones that have a big delta in uncertainty from now to ~12 months from now; ones for which you get a lot of new information from the point in time they first received investment to the point in time the grantmakers review them. ideally, we want the projects on an impact market to maximize that delta in uncertainty.

my understanding of your claim is: you agree with the above three bullet points, and are concerned that a number of projects that we're listing will have a really small delta in uncertainty from now to ~12 months from now.

i'm not too worried about this concern, for two overlapping reasons:

  1. a lot happens in a year! for most of the projects, i'd be shocked if, ~12 months on, they were in roughly the same position as they are immediately after getting funding — even if the position they're in is "we straightforwardly did what we said we would do," you've already reduced a lot of uncertainty. i agree that we aren't truly measuring impact in some platonic sense of impact measurement, but if i were a grantmaker, i would much much much rather be evaluating a project in the "~12 months from now" category than the "now" category. for the two examples you listed, i think it's quite possible that significant new information will come out in the ~12 months after a project gets investment:[1]
    1. Someone would like funding for an MPhil or MPhil/PhD.
      • ~12 months from now, they didn't get into any grad schools/they decided not to go to grad school/etc.
      • ~12 months from now, they got into a great grad school, and in their first few semesters they got great grades/published great research/etc.
    2. Someone would like funding to distribute books.
      • ~12 months from now, they took the money, then disappeared/totally failed/the project didn't materialize/etc.
      • ~12 months from now, they successfully distributed the books, in exactly the way they described.
  2. if we're running the impact market correctly, the above point (1) should be baked into investment decisions — investors want returns, which incentivizes them to pick the projects that will have the highest delta in uncertainty from now to ~12 months' time. after all, if a philanthropy reviews a project in a year's time and sees exactly the same information as an investor does today... then that investor won't make any returns. (see the sketch below.)
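to spell out that incentive with toy numbers (purely illustrative; the function and the numbers are mine, not an official model):

```python
# toy illustration of why investor returns depend on new information arriving
# between investment and grantmaker review (my framing, not an official model).
def investor_return(retro_value: float, p_now: float, p_at_review: float) -> float:
    """profit for an investor who buys at today's fair price and sells at review.
    today's fair price is retro_value * p_now; at review, the funder pays
    retro_value * p_at_review (1.0 if the project clearly succeeded, 0.0 if it
    clearly failed, something in between otherwise)."""
    return retro_value * p_at_review - retro_value * p_now

# no new information: the review-time probability equals today's -> zero return.
print(investor_return(40_000, 0.5, 0.5))  # 0.0
# the project clearly succeeded: the investor captures the resolved uncertainty.
print(investor_return(40_000, 0.5, 1.0))  # 20000.0
```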

i think you picked up on one potential failure mode, and i'd be interested to see if we end up failing in this way. right now, i'm not too concerned that will happen, though.

also, thanks for your detailed comment — i really appreciate the feedback. if you think i've missed something or that i went wrong somewhere, i'd really love to hear your thoughts. again, feel free to leave a comment in response, or if you'd prefer to talk about this over a call, i'd be down! :)

  1. ^

    obviously, all of the ones i'm listing are just examples, and i'm making no forecast about the probability of any of them actually happening

ahh, sorry — i meant that there are a bunch of things on the map that you might consider adding, particularly in the "forecasting tools" section (e.g. manifold, metaculus, squiggle, guesstimate, metaforecast, etc). i didn't necessarily mean to imply that you should also add the map, though i could be persuaded either way.

also re: manifund, this is sorta hard to convey concisely, but we do both of:

  1. fund impactful projects (e.g. you can submit an application and get funded)
  2. provide infrastructure to fund projects (e.g. we're hosting the ACX Grants on manifund)

not sure exactly how to describe this, and i think you did a pretty good job in your description!

(edit: added the last sentence of the first paragraph)

you might find a number of good resources — specifically within forecasting — here: predictionmarketmap.com. i would particularly highlight Manifund as a way for EAs to get funding~

 

coi: i built the aforementioned map, and i currently work at manifund.

writing here to add a signal: i know less about the first two (LW and the codebase behind it), but Lighthaven is a godsend. i've run two EA-aligned events at lighthaven that either would've been infeasible to run elsewhere due to cost constraints, or would've been significantly worse at other venues.

this is really cool! i'm excited to watch the forecasting community grow, and for a greater number of impactful forecasting projects to be built.

We are as of yet uncertain about the most promising type of project in the forecasting focus area, and we will likely fund a variety of different approaches ... we plan to continue exploring the most plausible theories of change for forecasting.

i'm curious what you're currently excited about (specific projects, broad topic areas, etc). what is OP's theory of change for how forecasting can be most impactful? what sorts of things would you be most excited to see happen?

on the flipside, if — 1/5/20 years from now — we look back and realize that forecasting wasn't so impactful, why do you think that would be the case?

left some comments on the doc — i overall agree with this critique, but would like to see a bit more about the thinking driving the research you've already done.

wonderfully welcoming comment, @Jay Bailey! :)
