Saul Munn

@ Manifest, Manifund, OPTIC
448 karma · Joined Jul 2022 · Pursuing an undergraduate degree · Working (0-5 years)
saulmunn.com

Comments

Commenting to signal-boost that @frances_lorenz, who works on the EAG team, has responded here.

You can do this in parallel with keeping this post up publicly; in fact, you can even email them to let them know that this post exists! However, I wouldn't expect them to see this question on the Forum by default. There's a lot of content on the Forum, and the EAG team is extremely busy.

Answer by Saul Munn · Apr 16, 2024

Hi! I'd recommend reaching out directly to the organizing team. You can reach them here: hello@eaglobal.org.

I think it's quite possible that OP has built quantitative models which estimate GCR, but that they haven't published them (e.g. they use them internally).

i've been working at manifund for the last couple months, figured i'd respond where austin hasn't (yet)

here's a grant application for the meta charity funders circle that we submitted a few weeks ago, which i think is broadly representative of who we are & what we're raising for.

tldr of that application:

  • core ops
    • staff salaries
    • misc things (software, etc)
  • programs like regranting, impact certificates, etc, for us to run how we think is best[1]

additionally, if a funder was particularly interested in a specific funding program, we're also happy to provide them with infrastructure. e.g. we're currently facilitating the ACX grants, we're probably (70%) going to run a prize round for dwarkesh patel, and we'd be excited about building/hosting the infrastructure for similar funding/prize/impact cert/etc programs. this wouldn't really look like [funding manifund core ops, where the money goes to manifund], but rather [running a funding round on manifund, where the funding mostly[2] goes to object-level projects that aren't manifund].

i'll also add that we're less funding-crunched than when austin first commented; we'll be running another regranting round, for which we'll be paid another $75k in commission. this was new info between his comment and this comment. (details of this are very rough/subject to change/not firm.)

  1. ^

    i'm keeping this section intentionally vague. what we want is [sufficient funding to be able to run the programs we think are best, iterate & adjust quickly, etc] not [this specific particular program in this specific particular way that we're tying ourselves down to]. we have experimentation built into our bones, and having strings attached breaks our ability to experiment fast.

  2. ^

    we often charge a fee of 5% of the total funding; e.g., we were paid $75k in commission to run the $1.5mm regranting round last year.

I thought this was great. Thank you for writing this, Eli!

Meta: Thanks for your response! I recognize that you are under no obligation to comment here, which makes me all the more appreciative that you're continuing the conversation. <3

***

I've engaged with the Collins' content for about a minute or two in total, and with them personally for the equivalent of half an email chain and a tenth of a conversation. Interpersonally, I've found them quite friendly/reasonable people. Their shared panel at the last Manifest was one of the highest rated of the conference; multiple people came up to me to tell me that they really enjoyed it. On their actual content, I think Austin and/or Rachel have much more knowledge/takes/context — I deferred to them re: "does their content check out." Those were my reasons for inviting them back.

I'll add that there is a class of people who have strongly-worded, warped, and especially inflammatory headlines (or tweets, etc), but whose underlying perspectives/object-level views can often be much more reasonable — or at least I strongly respect the methods by which they go about their thoughts. There's a mental wince to reading one of their headlines, where in my head I go "...oh, god. Man, I know what you're trying to say, but couldn't you... I dunno, say it nicely? in a less inflammatory way, or something?" And I often find that these people are actually quite kind/nice IRL — but you read their Twitter, or something, and you think "...oh man, these are some pretty wild takes."

I'm not too sure how to act in these scenarios/how to react to these types of people. Still, the combination of [nice/bright IRL] + [high respect for Rachel & Austin's perspective on object-level things] = the Collinses probably fall into the category of "I really dislike the fact that they use clickbaity, inflammatory titles to farm engagement, but they (probably) have high-quality object-level takes and I know that they're reasonable people IRL."

I appreciate you bringing their YouTube channel to my attention; I hadn't seen it before. I'm not heartened by the titles, though I haven't reviewed the content.

***

Again — thanks for your comments. I'm going to continue copying the note below in this and following comments, both for you & for posterity.

(To Ben & anyone else who’s reading this: I’d be happy to hop on a call with anyone who’d like to talk more about any of the decisions we’ve made, take notes on/recording of the call, then post the notes/recording publicly here. https://savvycal.com/saulmunn/manifest )

Hi Ben! Thanks for your comment.

I'm curious what you think the upsides and the downsides are.

I'll also add to what Austin said — in general, I think the strategy of [inviting a highly accomplished person in field X to a conference about field Y] is an underrated way to cross-pollinate among and between fields. I think this is especially true of something like prediction markets, where by necessity they're applicable across disciplines; prediction markets are useless absent something on which to predict. This is the main reason I'm in favor of inviting e.g. Rob Miles, Patrick McKenzie, Evan Conrad, Xander Balwit & Nico McCarty, Dwarkesh Patel, etc — many of whom don't actively directly straightforwardly obviously clearly work in prediction markets/forecasting (the way that e.g. Robin Hanson, Nate Silver, or Allison Duettmann do). It's pretty valuable to import intellectual diversity into the prediction market/forecasting community, as well as to export the insights of prediction markets/forecasting to other fields.

(And also, a note to both Ben & anyone else who’s reading this: I’d be happy to hop on a call with anyone who’d like to talk more about any of the decisions we’ve made, take notes on/recording of the call, then post the notes/recording publicly here. https://savvycal.com/saulmunn/manifest )

Hi Rockwell, thanks for voicing your concern — I appreciate the time/effort you took to write out & post the comment.

To clarify: is the thing you’re worried about those “whose public reputation is seen as pro-eugenics,” or those who are in-fact pro-eugenics in a harmful/negative way?

I can understand why you might dislike platforming people who inhabit either/both of those categories. I’d like to clarify exactly what you mean before responding.

(And also, a note to both Rockwell & anyone else who’s reading this: I’d be happy to hop on a call with anyone who’d like to talk more about any of the decisions we’ve made, take notes on/recording of the call, then post the notes/recording publicly here. https://savvycal.com/saulmunn/manifest )

hey jason, thanks for leaving your thoughts! i wrote a lot below — if you'd prefer to talk about this over a call, i'd be down! (i'd also be happy to take notes on the call and post them in this thread, for anyone who comes across it later. i'll update this comment if/when that happens.)

***

it looks like you had some uncertainty about how we're planning to go about reviewing projects that will still be uncertain ~12 months from now. concretely, the way the philanthropies will review projects a year from now is described here:

Final oracular funders will operate on a model where they treat retrospective awards the same as prospective awards, multiplied by a probability of success. For example, suppose LTFF would give a $20,000 grant to a proposal for an AI safety conference, which they think has a 50% chance of going well. Instead, an investor buys the impact certificate for that proposal, waits until it goes well, and then sells it back to LTFF. They will pay $40,000 for the certificate, since it’s twice as valuable as it was back when it was just a proposal with a 50% success chance.
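to make the arithmetic in that example explicit, here's a tiny sketch in python (the function name and structure are mine, just for illustration; they're not anything from the actual platform):

```python
def retro_price(prospective_grant: float, p_success: float) -> float:
    """price an impact certificate retrospectively: the final funder treats a
    retrospective award as a prospective award divided by the probability of
    success it would have assigned up front."""
    return prospective_grant / p_success

# the LTFF example from the quote: a $20,000 prospective grant with a 50%
# chance of going well is worth $40,000 once it has actually gone well.
print(retro_price(20_000, 0.50))  # -> 40000.0
```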

however, you also have a more substantive point:

I'm having a hard time understanding the value proposition of devoting resources to predict how grantmakers will view the projected impact of these projects in ~12 months' time on ~the same information.

if that ends up being the case — that, in ~12 months, grantmakers are acting on ~the same information about the projects — i think that we will have ex-post provided little value. however, i think the key here is the extent to which new information is revealed in the next ~12 months. here's how i'm thinking about it:

  • there will be less uncertainty about the impactfulness of most projects in ~12 months compared to now. that delta in uncertainty is really important, and the more of it there is, the better. (i'll touch on the examples you listed in a moment.)
  • however, the delta in uncertainty changes from one project to another:
    • for some projects, you get a lot of new information from the point in time they first received investment to the point in time the grantmakers review them.
    • ...but for others, you get little new information; in your words, "grantmakers will [be] view[ing] the projected impact of these projects in ~12 months' time on ~the same information."
  • impact markets work best when they list projects in the first category — ones that have a big delta in uncertainty from now to ~12 months from now; ones for which you get a lot of new information from the point in time they first received investment to the point in time the grantmakers review them. ideally, we want the projects on an impact market to maximize that delta in uncertainty.

my understanding of your claim is: you agree with the above three bullet points, and are concerned that a number of projects that we're listing will have a really small delta in uncertainty from now to ~12 months from now.

i'm not too worried about this concern, for two overlapping reasons:

  1. a lot happens in a year! for most of the projects, i'd be shocked if, ~12 months from now, they were in a roughly similar position to the one they're in immediately after getting funding — even if the position they're in is "we straightforwardly did what we said we would do," you've already reduced a lot of uncertainty. i agree that we aren't truly measuring impact in some platonic sense of impact measurement, but if i were a grantmaker, i would much much much rather be in the position of evaluating a project in the "~12 months from now" category than the "now" category. for the two examples that you happened to list, i think it's actually quite possible that there will be significant new information that comes out in the ~12 months after a project gets investment:[1]
    1. Someone would like funding for an MPhil or MPhil/PhD.
      • ~12 months from now, they didn't get into any grad schools/they decided not to go to grad school/etc.
      • ~12 months from now, they got into a great grad school, and in their first few semesters they got great grades/published great research/etc.
    2. Someone would like funding to distribute books.
      • ~12 months from now, they took the money, then disappeared/totally failed/the project didn't materialize/etc.
      • ~12 months from now, they successfully distributed the books, in exactly the way they described.
  2. if we're running the impact market correctly, the above point (1) should be baked into investment decisions — investors want returns, which incentivizes them to pick the projects that will have the highest delta in uncertainty from now to ~12 months' time. after all, if a philanthropy reviews a project in a year's time and sees exactly the same information as an investor does today... then that investor won't make any returns. (there's a toy numerical sketch of this below.)
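as a toy illustration of that incentive (made-up numbers, plus a simplifying assumption that the investor buys at the funder's prospective valuation; this isn't a claim about how any particular investor actually prices things):

```python
def expected_investor_return(prospective_grant: float,
                             funder_p: float,
                             investor_p: float) -> float:
    """toy model: the investor buys the certificate at the funder's prospective
    valuation and, if the project succeeds, sells it back at the retrospective
    price implied by the funder's original probability (grant / funder_p).
    expected profit only appears when the investor's information beats the funder's."""
    buy_price = prospective_grant                  # assumed purchase price today
    sell_price = prospective_grant / funder_p      # retrospective price if it succeeds
    return investor_p * sell_price - buy_price     # investor's expected profit

# no new information: the investor's odds match the funder's -> zero expected return
print(expected_investor_return(20_000, funder_p=0.50, investor_p=0.50))  # -> 0.0

# the investor (correctly) thinks the odds are 70% where the funder priced in 50%
print(expected_investor_return(20_000, funder_p=0.50, investor_p=0.70))  # -> 8000.0
```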

i think you picked up on one potential failure mode, and i'd be interested to see if we end up failing in this way. right now, i'm not too concerned that will happen, though.

also, thanks for your detailed comment — i really appreciate the feedback. if you think i've missed something or that i went wrong somewhere, i'd really love to hear your thoughts. again, feel free to leave a comment in response, or if you'd prefer to talk about this over a call, i'd be down! :)

  1. ^

    obviously, all of the ones i'm listing are just examples, and i'm making no forecast about the probability of any of them actually happening.
