Yeah - though in practice the charity payouts are transferred once a quarter anyway, so a month or two of delay in rolling out payouts wouldn't change the results much.
In any case, I definitely think now is as good a time as any to do your charity allocations, given our general uncertainty on how all of this will look!
(I'm pretty bullish on sweepstakes payouts actually happening, I think like 80% chance this year. If they don't, then probably something like the charity program would make sense again)
Thanks for posting this, Henri! I'm happy to answer any questions you might have regarding the changes here, the donation program, the future of Manifold or anything else like that.
Very briefly:
The move to 1000:1 is prompted by the fact that we currently have roughly $1.2m of mana issued against $1.5m cash in the bank. As we move to sweepstakes, we want to make sure we can fully back this and still have a healthy runway. (fwiw, I think a currency rate change is a terrible solution to this and think there's a small chance, 15%?, that we can avoid it)
Speaker there was me - I think there's like a ~70% chance we decide to end the charity program after this round of payments, tentatively as of May 15 or end of May.
The primary reason is that the real money cash outs should supersede it, and running the charity program is operationally kind of annoying. The charity program is a core focus for neither Manifold nor Manifund, so we might not want to keep it up. Will make a broader announcement if this ends up being the case.
For sure, I think a slightly more comprehensive comparison of grantmakers would include the stats for the number of grants, median check size, and amount of public info for each grant made.
Also, perhaps # of employees, or ratio of grants per employee? Like, OpenPhil is ~120 FTE, Manifund/EA Funds are ~2, this naturally leads to differences in writeup-producing capabilities.
So, as a self-professed mechanism geek, I feel like the Shapley Value stuff should be my cup of tea, but I must confess I've never wrapped my head around it. I've read Nuno's post and played with the calculator, but still have little intuitive sense of how these things work even with toy examples, and definitely no idea how they can be applied in real-world settings.
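For anyone else in the same boat, the toy version that finally clicked for me: a Shapley value is just each contributor's marginal contribution, averaged over every order in which the coalition could have assembled. A minimal sketch (the "funder"/"founder" game and its payoffs are entirely made up for illustration):

```python
from itertools import permutations

def shapley_values(players, value):
    """Average each player's marginal contribution over all join orders."""
    totals = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            totals[p] += value(frozenset(coalition)) - before
    return {p: t / len(perms) for p, t in totals.items()}

# Toy game: funder + founder together produce 100 units of impact;
# the founder alone produces 20, the funder alone produces 0.
v = {frozenset(): 0,
     frozenset({"funder"}): 0,
     frozenset({"founder"}): 20,
     frozenset({"funder", "founder"}): 100}

print(shapley_values(["funder", "founder"], lambda s: v[s]))
# funder: (0 + 80)/2 = 40; founder: (100 + 20)/2 = 60
```

Note the values always sum to the total impact (100), which is the "full attribution" property that makes Shapley values appealing for shared projects.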
I think delineating impact assignment for shared projects is important, though I generally look to the business world for inspiration on the most battle-tested versions of impact assig...
Thanks for updating your post and for the endorsement! (FWIW, I think the LTFF remains an excellent giving opportunity, especially if you're in less of a position to evaluate specific regrantors or projects.)
Manifund is pretty small in comparison to these other grantmakers (we've moved ~$3m to date), but we do try to encourage transparency for all of our grant decisions; see for example here and here.
A lot of our transparency just comes from the fact that we have our applicants post their application in public -- the applications have like 70% of the context that the grantmaker has. This is a pretty cheap win; I think many other grantmakers could do this if they just got permission from the grantees. (Obviously, not all applications are suited for public posting, b...
This is awesome! I've been a fan of Timothy's since his Full Stack Economics days, and it's great to see more collaborations between the forecasting world and journalism. AI journalism is an especially pivotal area, and so I'm glad for the additional rigor in the form of Metaculus question operationalizations.
Hey Ben! I'm guessing you're asking because the Collinses don't seem particularly on-topic for the conference? For Manifest, we'll typically invite a range of speakers & guests, some of whom don't have strong pre-existing connections to forecasting; perhaps they have interesting things to share from outside the realm of forecasting, or are otherwise thinkers we respect who are curious to learn more about prediction markets.
(Though in this specific case, Simone and Malcolm have published a great book covering different forms of governance, which ...
In principle we'd be happy to forward donations to RP, CLTR or other charities (in principle any 501c3, doesn't have to be EA); in practice the operational costs of tracking these things mean that we don't really want to be doing this except for larger donation sizes.
Although since EA Philippines has set its minimum project threshold at a fairly low $500, I'd 95% expect them to succeed and that this wouldn't come up.
Thanks for the feedback!
(2) hm, we could pay $10/mo for the professional tier to change the supabase URL address, the Scrooge in me didn't think it was worth it but perhaps...
(3) interesting -- I don't think we've considered an option to let people pledge without funds added yet; will see if that makes sense.
Hey Dawn! At Manifund we support crypto-based donations for adding to your donation balance; USDC over Eth or Solana is preferred but we could potentially process other crypto depending on the size you have in mind. We generally prefer to do this for larger donation sizes (eg $5k+) because of the operational overhead, but I'd be willing to make an exception in this case to help support the EA Philippines folks. More details here.
Hi there, Austin from Manifund here! I can't speak for the EA Philippines team, but some reasons we think our platform is a good way for raising donations:
Very grateful for the kind words, Elizabeth! Manifund is facing a funding shortfall at the moment, and will be looking for donors soon (once we get the ACX Grants Impact Market out the door), so I really appreciate the endorsement here.
(Fun fact: Manifund has never actually raised donations for our own core operations/salary; we've been paid ~$75k in commission to run the regrantor program, and otherwise have been just moving money on behalf of others.)
Yes, it's a meta topic; I'm commenting less on the importance of forecasting in an ITN framework and more on its neglectedness. This stuff basically doesn't get funding outside of EA, and even inside EA had no institutional commitment; outside of random one-off grants, the largest forecasting funding program I'm aware of over the last 2 years was $30k in "minigrants" funded by Scott Alexander out of pocket.
But on the importance of it: insofar as you think future people matter and that we have the ability and responsibility to help them, forecasting the fut...
Awesome to hear! I'm happy that OpenPhil has promoted forecasting to its own dedicated cause area with its own team; I'm hoping this provides more predictable funding for EA forecasting work, which otherwise has felt a bit like a neglected stepchild compared to GCR/GHD/AW. I've spoken with both Ben and Javier, who are both very dedicated to the cause of forecasting, and am excited to see what their team does this year!
Preventing catastrophic risks, improving global health and improving animal welfare are goals in themselves. At best, forecasting is a meta topic that supports other goals
It really was a time-suck, and I really have experienced the related point in the past! But I loved putting time into Manifund instead of reading yet another decision-irrelevant post.
Happy to hear you enjoyed your time regranting! I'd love to get a quick estimate on how much time you spent as a regrantor, just for the purposes of our calibration. My napkin math: (8 grants made * 6h) + (16 grants investigated * 1h) = 64h?
I expect more quickly diminishing returns within the grantmaking of a given regrantor than I would for a more centralized operation. This is principally because independent regrantors have more limited deal flow, making their early grants look unusually strong.
I think this could become true eventually; but imo currently, most of our small ($50k) budget regrantors could effectively allocate $200-$500k/year budgets. Eg you mentioned earlier that many opportunities of the form "start this great org" require >$50k; also, many regrants on Manifund incl...
Like @MarcusAbramovitch , I'd feel pretty comfortable allocating ~$1m part-time. I mean just on my existing grants I would've been happy to donate another ~$150k without thinking more about it! Concrete >$50k grants I had to pass up but would otherwise have wanted to fund total >$200k (extremely rough). So I'm already at >$400k (EDIT: per 5 months!) without even thinking about how my behavior or prospective grantee behavior might have changed if I had a larger pot.
That said, I think there's a sense in which I hit strongly diminishing returns at ~$...
At best, low-responsibility, low-social-downside giving now feels not as effective as it could be. At worst, this giving behavior makes me feel like a self-inhibited, intentionless, incomplete person.
Concretely, I think I will halt recurring donations. I want to give in bulk, less frequently, more thoughtfully, and perhaps not to recognisable charities. If this feels like it goes against the spirit of the Giving What We Can Pledge, then I will exit the pledge.
Thanks for writing this bit; it mirrors my own thinking on my personal donation allocation a...
Ah, sorry you got that impression from my question! I mostly meant Harvard in terms of "desirability among applicants" as opposed to "established bureaucracy". My outside impression is that a lot of people I respect a lot (like you!) made the decision to go work at OP instead of one of their many other options. And that I've heard informal complaints from leaders of other EA orgs, roughly "it's hard to find and keep good people, because our best candidates keep joining OP instead". So I was curious to learn more about OP's internal thinking about this effect.
I have this impression of OpenPhil as being the Harvard of EA orgs -- that is, it's the premier choice of workplace for many highly-engaged EAs, drawing in lots of talent, with distortionary effects on other orgs trying to hire 😅
When should someone who cares a lot about GCRs decide not to work at OP?
I agree that there are several advantages of working at Open Phil, but I also think there are some good answers to "why wouldn't someone want to work at OP?"
Culture, worldview, and relationship with labs
Many people have an (IMO fairly accurate) impression that OpenPhil is conservative, biased toward inaction, generally prefers maintaining the status quo, and is generally in favor of maintaining positive relationships with labs.
As I've gotten more involved in AI policy, I've updated mor...
Thanks, really appreciated this post.
In case anyone is looking for a bank recommendation, I would recommend Mercury, for their excellent UX and good pricing model. We use them for both Manifold the for-profit, and Manifold for Charity. They do provide ~5% yield to for-profits through Mercury Treasury (we use a different interest provider but if we could do it over again, we would definitely choose Mercury Treasury instead). Unfortunately, they don't provide Treasury to nonprofits. Mercury can also do payments to intl accounts with a 1% FX fee (wo...
I'm grateful for the CEA Community Health team -- interpersonal issues can be tricky to navigate, but the Health team is consistently nice, responsive, helpful and has many useful resources compiled for making good decisions, whether it be about running an event or managing grant dynamics.
How is the search going for the new LTFF chair? What kind of background and qualities would the ideal candidate have?
Here are my guesses for the most valuable qualities:
I've heard this argument a lot (eg in the context of impact markets) and I agree that this consideration is real, but I'm not sure that it should be weighted heavily. I think it depends a lot on what the distribution of impact looks like: the size of the best positive outcomes vs the worst negative ones, their relative frequency, how different interventions (eg adding screening steps) reduces negative projects but also discourages positive ones.
For example, if in 100 projects, you have [1x +1000, 4x -100, 95x ~0], then I think black swan farming still doe...
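To make the arithmetic in that hypothetical portfolio explicit (the payoff distribution is the invented one from the comment above, not real grant data):

```python
# Hypothetical 100-project portfolio: 1 big win of +1000,
# 4 losses of -100 each, and 95 projects that round to ~0.
outcomes = [1000] * 1 + [-100] * 4 + [0] * 95

total = sum(outcomes)                  # 1000 - 400 = 600
ev_per_project = total / len(outcomes)
print(total, ev_per_project)           # 600 total, +6 expected value per project
```

So even with four visible failures for every hit, the portfolio is solidly positive in expectation; the question is how fat the negative tail can get before that flips.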
I really appreciated this list of examples and it's updated me a bit towards checking in with LTFF & others a bit more. That said, I'm not sure adverse selection is a problem that Manifund would want to dedicate significant resources towards solving.
One frame: is longtermist funding more like "admitting a Harvard class/YC batch" or more like "pre-seed/seed-stage funding"? In the former case, it's more important for funders to avoid bad grants; the prestige of the program and its peer effects are based on high average quality in each cohort. In the latt...
In the latter case, you are "black swan farming"; the important thing is to not miss out on the one Facebook that 1000xs, and you're happy to fund 99 duds in the meantime.
One risk of this framing is that as a seed funder your downside is pretty much capped at "you don't get any money" while with longtermist grantmaking your downside could be much larger. For example, you could fund someone to do outreach who is combative and unconvincing or someone who will use poor and unilateral judgement around information hazards. The article has an example of avoid...
I'm not sure I would be comfortable with the idea of grant makers sharing information with each other that they weren't also willing to share with the applicants
One of my pet ideas is to set up a grantmaker coordination channel (eg Discord) where only grantmakers may post, but anyone may read. I think siloed communication channels are important for keeping the signal to noise ratio high, but 97% of the time I'd be happy to share whatever thoughts we have with the applicant & the rest of the world too.
I think this is worth doing for large grants (eg >$50k); for smaller grants, coordination can get to be costly in terms of grantmaker time. Each additional step of the review process adds to the time until the applicant gets their response and their money.
Background checks with grantmakers are relatively easier with an application system that works in rounds (eg SFF is twice a year, Lightspeed and ACX also do open/closed rounds) -- you can batch them up, "here's 40 potential grantees, let us know if you have red flags on any". But if you have a continuo...
I agree that default employment seems preferred by most fulltime workers, and that's why I'm interested in the concept of "default-recurring monthly grants".
I will note that this employment structure is not the typical arrangement among founders trying to launch a startup, though. A broad class of grants in EA are "work on this thing and maybe turn it into a new research org", and the equivalent funding norms in the tech sector at least are not "employment" but "apply for incubators, try to raise funding".
For EAs trying to do research... academia is the ty...
I'm sympathetic to treating good altruistic workers well; I generally advocate for a norm of much higher salaries than is typically provided. I don't think job insecurity per se is what I'm after, but rather allowing funders to fund the best altruistic workers next year, rather than being locked into their current allocations for 3 years.
The default in the for profit sector in the US is not multi-year guaranteed contracts but rather at will employment, where workers may leave a job or be fired for basically any reason. It may seem harsh in compariso...
I do think freelancers spend significant amounts of time on job searching, but I'm not sure that's evidence for "low productivity". Productivity isn't a function of total hours worked but rather of output delivered. The one time I engaged a designer on a freelancer marketplace, I got decent results exceptionally quickly. Another "freelancer marketplace" I make heavy use of is Uber, which provides good customer experiences.
Of course, there's a question of whether such marketplaces are good for the freelancers themselves - I tend to think so (eg that the existence of Uber is better for drivers than the previous taxi medallion system) but freelancing is not a good fit for many folks.
Lots of my favorite EA people seem to think this is a good idea, so I'll provide a dissenting view: job security can be costly in hard-to-spot ways.
I would be surprised if people on freelancer marketplaces are exceptionally productive - I would guess they end up spending a lot more of their time trying to get jobs than actually doing the jobs.
A few possibilities from startup land:
Thanks for building this! The Kelly criterion is one of those super neat concepts that has had a lot of analysis, but not much "here's a thing you can play with". I love that Manifolio lets you play with different users and markets, to give a more intuitive sense of what the Kelly criterion means. The UI is simple and communicates key info quickly, and I like that there's a Chrome extension for tighter integration!
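For readers who haven't seen it: the textbook version of Kelly for a binary prediction market (ignoring the price impact and balance details that Manifolio actually handles) reduces to a one-line formula. This is a simplified sketch, not how Manifolio itself computes bets:

```python
def kelly_fraction(p, q):
    """Fraction of bankroll to bet on YES in a binary market priced at q,
    given your own probability p. Simple Kelly: f* = (p - q) / (1 - q).
    Ignores price impact, fees, and multi-market portfolio effects."""
    if p <= q:
        return 0.0  # no edge on YES at this price
    return (p - q) / (1 - q)

# If the market says 50% but you believe 70%, simple Kelly
# suggests staking 40% of your bankroll on YES.
print(kelly_fraction(0.7, 0.5))  # 0.4
```

In a real market your own bet moves the price, which shrinks the optimal stake -- that's exactly the part where a tool like Manifolio earns its keep over the napkin formula.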
The main reason we'd prefer to use Wise is that they advertise much lower currency exchange fees; my guess is something like 0.5% compared to 3-4% on PayPal, which really adds up on large foreign grants.
One possible solution is to have applicants create a prediction market on their chance of getting a job/grant, before applying -- this helps grant applicants get a sense of how good their prospects are. (example 1, 2) Of course, there's a cost to setting up a market and making the relevant info legible to traders, but it should be a lot less than the cost of writing the actual application.
Another solution I've been entertaining is to have grantmakers/companies screen applications in rounds, or collaboratively, such that the first phase of application is ve...
I really appreciated your assessments of the alignment space, and would be open to paying out a retroactive bounty and/or commissioning reports for 2022 and 2023! Happy to chat via DM or email (austin@manifund.org)
Hi Omega, I'd be especially interested to hear your thoughts on Apollo Research, as we (Manifund) are currently deciding how to move forward with a funding request from them. Unlike the other orgs you've critiqued, Apollo is very new and hasn't received the requisite >$10m, but it's easy to imagine them becoming a major TAIS lab over the next years!
Yeah idk, this just seems like a really weird nitpick, given that you both like Holly's work...? I'm presenting a subjective claim to begin with: "Holly's track record is stellar", as based on my evaluation of what's written in the application plus external context.
If you think this shouldn't be funded, I'd really appreciate the reasoning; but I otherwise don't see anything I would change about my summary.
Thanks for the feedback. I'm not sure what our disagreement cashes out to - roughly, I would expect "if funded, Holly would do a good job such that 1 year later, we were happy to have funded her for this"?
3b. As a clarification, for a period of time we auto-enrolled people in a subset of groups we considered to be broadly appealing (Econ/Tech/Science/Politics/World/Culture/Sports), so those group size metrics are not super indicative of user preferences. We aren't doing this at this point in time, but did not unenroll those users.
One theory is that EA places unusual weight on issues in the long-term future, compared to existing actors (companies, governments) who are more focused on eg quarterly profits or election cycles. If you care more about the future, you should be differentially excited about techniques to see what the future will hold.
(A less-flattering theory is that forecasting just seems like a cool mechanism, and people who like EA also like cool mechanisms.)
I have not read much of Tetlock's research, so I could be mistaken, but isn't the evidence for Tetlock-style forecasting only for (at best) short-to-medium-term forecasts? Over this timescale, I would've expected forecasting to be very useful for non-EA actors, so the central puzzle remains. Indeed, if there is no evidence for long-term forecasting, then wouldn't one expect non-EA actors (who place less importance on the long term) to be at least as likely as EAs to use this style of forecasting?
Of course, it would be hard to gather evidence f...
Thanks for the thoughts (and your posts on Futarchy years ago, I found them to be a helpful review of the literature!)
I'm a bit suspicious of metrics that depend on a vote 5 years from now.
I am too, though perhaps for different reasons. Long-term forecasting has slow feedback loops, and fast feedback loops are important for designing good mechanisms. Getting futarchy to be useful probably involves a lot of trial-and-error, which is hard when it takes you 5 years to assess "was this thing any good?"
Thanks for the writeup, Nathan; I am indeed excited about the possibility of making better grants through forecasting/futarchic mechanisms. So I'll start from the other direction: instead of reaching for futarchy as a hammer, start with, what are current major problems grantmakers face?
The problem that seems most important to solve: "finding projects that turn out to be orders of magnitude more successful/impactful than the rest". Paul Graham describes funding seed-stage startups as "farming black swans", which rings true to me. To look at two example roun...
Thanks, we hope so too! (To be clear, we also have a lot of respect for centralized grantmaking orgs and the work they do; and have received funding through some in EA such as LTFF and SFF.)
- (Briefly: we got into this via a loose monetary policy involving lots of printing mana for bonuses and subsidies, in order to encourage engagement. But there's historical precedent for this - eg Paypal famously gave away $10 to every user to get their network effects started)
I think our monetary situation is actually fine. It's tempting to look at things from a cash balance perspective because it's simple, but that's pretty naive. This post from CommonCog has informed my thinking on these kinds of things:
"People with limited understanding of business think