All of Austin's Comments + Replies

  1. (Briefly: we got into this via a loose monetary policy involving lots of printing mana for bonuses and subsidies, in order to encourage engagement. But there's historical precedent for this - eg Paypal famously gave away $10 to every user to get their network effects started)

    I think our monetary situation is actually fine. It's tempting to look at things from a cash balance perspective because it's simple, but that's pretty naive. This post from CommonCog has informed my thinking on these kinds of things:

    "People with limited understanding of business think
…
Pat Myron
2d
This omits why Manifold users didn't cash out much: return rates were unsustainably high. Ponzi schemes manage cash flow at the expense of profit
Jason
2d
I tend to agree with your co-founders on this one. I am not sure that the behavior of past Manifold users in a play-money economy where the only cash-out was to charity is a reliable guide to how future users will react in a ~real-money environment.

When we're talking about play money potentially redeemable for charitable donations, that is one thing, especially where the vast majority was ~freely obtained (as opposed to being purchased with cash). If people can't donate play money they were largely given for free, that doesn't keep me up at night too much. It's something different where the quasi-cash was largely purchased with real cash (or obtained in wagers of quasi-cash that was largely purchased with real cash). In the latter case, I think you have to be prepared for the risk of a bank run.

Maybe, but conditioned on there being a run on the bank, Manifold equity would not provide a solid backstop for customer claims. If you are in a bank-run situation, there is a pretty decent possibility that Manifold equity is either illiquid or ~worthless. You might be hard-pressed to find buyers, either because of the underlying facts that led to the bank run or due to skepticism about the value of a business whose customers are in a panicked rush for the door. Moreover, the base rate of young startup failure is pretty high, so there could be a number of scenarios in which a run makes sense. If I thought Manifold might be going under soon, and my quasi-cash was backed only to a limited extent, I think I'd rather exchange my quasi-cash for real cash ASAP.

Perhaps you could get an irrevocable line of credit for the next ~2 years backed by a certain amount of equity? If you can, then that could back the quasi-cash liabilities. If you can't, is evidence that you can't get a sophisticated lender to accept the equity as collateral also evidence that Manifold users shouldn't accept it as backing?

I guess another way of saying this is that I think Manifold should tre…

Yeah - though in practice the charity payouts are transferred once a quarter anyways, so a month or two delay in rolling out payouts wouldn't change the results much.

In any case, now is definitely as good a time as any to do your charity allocations, given our general uncertainty on how all of this will look!

(I'm pretty bullish on sweepstakes payouts actually happening, I think like 80% chance this year. If they don't, then probably something like the charity program would make sense again)

Austin
2d

Thanks for posting this, Henri! I'm happy to answer any questions you might have regarding the changes here, the donation program, the future of Manifold or anything else like that.

Very briefly:

  • The move to 1000:1 is prompted by the fact that we currently have roughly $1.2m of mana issued against $1.5m cash in bank. As we move to sweepstakes, we want to make sure we can fully back this and still have a healthy runway. (fwiw, I think a currency rate change is a terrible solution to this and think there's a small chance, 15%?, that we can avoid this)
  • Our dona…
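For intuition, the backing arithmetic can be sketched roughly as follows. This is a toy sketch, not Manifold's actual accounting: it assumes the prior redemption rate was 100 mana per dollar, which the comment doesn't state, so treat every number as illustrative.

```python
# Rough sketch of the mana-backing arithmetic. ASSUMPTION: the prior
# redemption rate was 100 mana per dollar; only the new 1000:1 rate and
# the dollar figures appear in the comment.
cash_in_bank = 1.5e6               # ~$1.5m cash in bank
mana_outstanding = 1.2e6 * 100     # $1.2m of liabilities, valued at 100:1

liability_at_100 = mana_outstanding / 100    # dollars owed at the old rate
liability_at_1000 = mana_outstanding / 1000  # dollars owed after devaluation

print(f"runway left at 100:1:  ${cash_in_bank - liability_at_100:,.0f}")
print(f"runway left at 1000:1: ${cash_in_bank - liability_at_1000:,.0f}")
```

Under these assumed numbers, devaluing to 1000:1 shrinks the dollar liability tenfold (from $1.2m to $120k), which is what turns a thin cushion into a healthy runway.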

The move to 1000:1 is prompted by the fact that we currently have roughly $1.2m of mana issued against $1.5m cash in bank. As we move to sweepstakes, we want to make sure we can fully back this and still have a healthy runway. (fwiw, I think a currency rate change is a terrible solution to this and think there's a small chance, 15%?, that we can avoid this)

  1. This seems like a bad situation to have gotten into. Did this happen because Manifold didn't plan well, or was it Future Fund related? If my mana is gonna take a haircut, the difference seems pretty importan…

The speaker there was me - I think there's like a ~70% chance we decide to end the charity program after this round of payments, tentatively as of May 15 or end of May.

The primary reason is that the real-money cash outs should supersede it, and running the charity program is operationally kind of annoying. The charity program is neither a core focus for Manifold nor Manifund, so we might not want to keep it up. Will make a broader announcement if this ends up being the case.

Jason
2d
Agree that real money cash outs would largely supersede this, but that's conditional on them actually happening and sticking around. It doesn't sound to me like real money is likely to roll out next month, though.

For sure, I think a slightly more comprehensive comparison of grantmakers would include the stats for the number of grants, median check size, and amount of public info for each grant made.

Also, perhaps # of employees, or ratio of grants per employee? Like, OpenPhil is ~120 FTE, Manifund/EA Funds are ~2, this naturally leads to differences in writeup-producing capabilities.

Vasco Grilo
20d
Thanks, Austin. @Joey did an analysis 2 years ago (published on 21 June 2022) where he estimated the ratio between total hours of vetting and dollars granted for various organisations. Here is the table with the results:

I am a little confused by the colour coding. In the last column, I think "1:5000" and "1:3600" should be in green given "1:7000" is in green.

It would be nice to have an updated table for 2023 with total amount granted, total words in public write-ups, total cost (excluding grants), ratio between total amount granted and cost, and ratio between total amount granted and words in public write-ups. Maybe @Sjir Hoeijmakers and @Michael Townsend could do this as part of Giving What We Can's project to evaluate the evaluators.

So, as a self-professed mechanism geek, I feel like the Shapley Value stuff should be my cup of tea, but I must confess I've never wrapped my head around it. I've read Nuno's post and played with the calculator, but still have little intuitive sense of how these things work even with toy examples, and definitely no idea on how they can be applied in real-world settings.

I think delineating impact assignment for shared projects is important, though I generally look to the business world for inspiration on the most battle-tested versions of impact assig…
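For the toy-example intuition mentioned above, here's a minimal sketch (the players and the value function are made up for illustration): the Shapley value just averages each player's marginal contribution over every order in which the coalition could have formed.

```python
from itertools import permutations

def shapley_values(players, value):
    """Shapley value: average each player's marginal contribution to
    `value` over all join orders of the coalition."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            # marginal contribution of p given who has already joined
            totals[p] += value(coalition | {p}) - value(coalition)
            coalition = coalition | {p}
    return {p: t / len(orders) for p, t in totals.items()}

# Toy example: a funder and a founder jointly produce a project worth 100;
# neither produces anything alone, so the credit splits evenly.
v = lambda c: 100.0 if c == {"funder", "founder"} else 0.0
print(shapley_values(["funder", "founder"], v))  # {'funder': 50.0, 'founder': 50.0}
```

Changing `v` so that, say, the founder alone produces 40 shifts credit toward the founder, which is the whole appeal: the split responds to counterfactual contributions rather than to negotiation.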

Thanks for updating your post and for the endorsement! (FWIW, I think the LTFF remains an excellent giving opportunity, especially if you're in less of a position to evaluate specific regrantors or projects.)

Answer by Austin, Apr 01, 2024

Manifund is pretty small in comparison to these other grantmakers (we've moved ~$3m to date), but we do try to encourage transparency for all of our grant decisions; see for example here and here.

A lot of our transparency just comes from the fact that we have our applicants post their application in public -- the applications have like 70% of the context that the grantmaker has. This is a pretty cheap win; I think many other grantmakers could do this if they just got permission from the grantees. (Obviously, not all applications are suited for public posting, b…

[comment deleted] 9d
Vasco Grilo
24d
Thanks for commenting, Austin! I think it is great that Manifund shares more information about its grants than all grantmakers I mentioned except for CE (which is incubating organisations, so it makes sense they have more to share). Sorry for not having mentioned Manifund. I have now added:

"I have been donating to the Long-Term Future Fund (LTFF), but, if I was going to donate now[1], I think I would either pick specific organisations, or Manifund's regrantors or projects."

1. ^ I usually make my annual donations late in the year.

This is awesome! I've been a fan of Timothy's since his Full Stack Economics days, and it's great to see more collaborations between the forecasting world and journalism. AI journalism is an especially pivotal area, and so I'm glad for the additional rigor in the form of Metaculus question operationalizations.

christian
25d
Great to see this! Absolutely, we're looking forward to sharing more Metaculus collaborations with interesting public thinkers in the near future.

Hey Ben! I'm guessing you're asking because the Collinses don't seem particularly on-topic for the conference? For Manifest, we'll typically invite a range of speakers & guests, some of whom don't have strong pre-existing connections to forecasting; perhaps they have interesting things to share from outside the realm of forecasting, or are otherwise thinkers we respect who are curious to learn more about prediction markets.

(Though in this specific case, Simone and Malcolm have published a great book covering different forms of governance, which …

Ben Stewart
1mo
Thanks, yeah I'm surprised the upsides outweigh the downsides but not my conference [own views]

In principle we'd be happy to forward donations to RP, CLTR or other charities (in principle any 501c3, doesn't have to be EA); in practice the operational costs of tracking these things mean that we don't really want to be doing this except for larger donation sizes.

Although since EA Philippines has set its minimum project threshold at a fairly low $500, I'd 95% expect them to succeed and that this wouldn't come up.

Dawn Drescher
1mo
Yep, that makes a lot of sense. I've done donation forwarding for < 10 projects once, and it was already quite time-consuming!

Thanks for the feedback!

(2) hm, we could pay $10/mo for the professional tier to change the supabase URL address, the Scrooge in me didn't think it was worth it but perhaps...

(3) interesting -- I don't think we've considered an option to let people pledge without funds added yet; will see if that makes sense.

Hey Dawn! At Manifund we support crypto-based donations for adding to your donation balance; USDC over Eth or Solana is preferred but we could potentially process other crypto depending on the size you have in mind. We generally prefer to do this for larger donation sizes (eg $5k+) because of the operational overhead, but I'd be willing to make an exception in this case to help support the EA Philippines folks. More details here.

Dawn Drescher
1mo
Oh, brilliant! USDC would also be my top choice. But I'm basically paying into a DAF, and so can't get a refund if this project doesn't succeed, right? That would have a high cost in option value since I don't know whether my second-best donation opportunity will be on Manifund. Is there a way to donate to Rethink Priorities or the Center on Long-Term Risk through Manifund? That would lower-bound the cost in option value.

Hi there, Austin from Manifund here! I can't speak for the EA Philippines team, but some reasons we think our platform is a good way for raising donations:

  • For users like you, registering should be pretty fast, <2min (you can sign up with any email or Google account). And you can easily add money via credit card; we also support bank transfers, DAF, and crypto for larger donation sizes.
  • As we're set up as a 501c3, US-based donors can get a tax deduction for donating to projects that we host.
  • On Manifund, we have a network of donors who already have their b…
mhendric
1mo
Thanks for this response, Austin. For me, three things that made me hesitant to use Manifund:

(1) requires an account
(2) when linking via a google account, the supabase address looks scammy as it is just 15 random characters.
(3) I have to pay money into an account before pledging. Given not all projects may end up taking place, this makes me nervous about wasting money (i.e. if the project does not take place). Compare this to, e.g., Kickstarter, where you only need to pay if the project takes place, yet you can pledge without loading money into Kickstarter.

I do think of Manifund as a good fundraiser option; I do think that it is good to have multiple options listed for the reasons explained above.

Very grateful for the kind words, Elizabeth! Manifund is facing a funding shortfall at the moment, and will be looking for donors soon (once we get the ACX Grants Impact Market out the door), so I really appreciate the endorsement here.

(Fun fact: Manifund has never actually raised donations for our own core operations/salary; we've been paid ~$75k in commission to run the regrantor program, and otherwise have been just moving money on behalf of others.)

Elizabeth
2mo
what would fundraising mean here? is it for staffing, or donations to programs, or to your grantmakers to distribute as they seem fit?
  1. Yeah, I agree neglectedness is less important but it does capture something important; I think eg climate change is both important and tractable but not neglected. In my head, "importance" is about "how much would a perfectly rational world direct at this?" while "neglected" is "how far are we from that world?".
  2. Also agreed that the lack of external funding is an update that forecasting (as currently conceived) has more hype than real utility. I tend to think this is because of the narrowness of how forecasting is currently framed, though (see my comments o…
MarcusAbramovitch
2mo
1. I think your point 1 is a good starting point but I would add "in percentage terms compared to all other potential causes" and you have to be in the top 1% of that for EA to consider the cause neglected.
3. I didn't make it. It is great though. I was talking about on a yearly basis in the last couple years. That said, I made the comment off memory so I could be wrong.

Yes, it's a meta topic; I'm commenting less on the importance of forecasting in an ITN framework and more on its neglectedness. This stuff basically doesn't get funding outside of EA, and even inside EA it has had no institutional commitment; outside of random one-off grants, the largest forecasting funding program I'm aware of over the last 2 years was $30k in "minigrants" funded by Scott Alexander out of pocket.

But on the importance of it: insofar as you think future people matter and that we have the ability and responsibility to help them, forecasting the fut…

MarcusAbramovitch
2mo
  1. I don't think it's necessary to talk in terms of an ITN framework, but something being neglected isn't nearly reason enough to fund it. Neglectedness is perhaps the least important part of the framework. Getting 6-year-olds in race cars, for example, seems like a neglected cause but one that isn't worth pursuing.
  2. I think something not getting funding outside of EA is probably a medium-sized update toward the thing not being important enough to work on. Things start to get EA funding once a sufficient number of the community finds the arguments for working on a problem sufficiently convincing. But many many many problems have come across EA's eyes and very few of them have stuck. For something to not get funding from others suggests that very few others found it to be important.
  3. Forecasting still seems to get a fair amount of dollars, probably about half as much as animal welfare. https://docs.google.com/spreadsheets/d/1ip7nXs7l-8sahT6ehvk2pBrlQ6Umy5IMPYStO3taaoc/edit?usp=sharing

Your points on helping future people (and non-human animals) are well taken.

Awesome to hear! I'm happy that OpenPhil has promoted forecasting to its own dedicated cause area with its own team; I'm hoping this provides more predictable funding for EA forecasting work, which otherwise has felt a bit like a neglected stepchild compared to GCR/GHD/AW. I've spoken with both Ben and Javier, who are both very dedicated to the cause of forecasting, and am excited to see what their team does this year!

Preventing catastrophic risks, improving global health and improving animal welfare are goals in themselves. At best, forecasting is a meta topic that supports other goals.

It really was a time-suck, and I really have experienced the related point in the past! But I loved putting time into Manifund instead of reading yet another decision-irrelevant post.

 

Happy to hear you enjoyed your time regranting! I'd love to get a quick estimate on how much time you spent as a regrantor, just for the purposes of our calibration. My napkin math: (8 grants made * 6h) + (16 grants investigated * 1h) = 64h?

Joel Becker
4mo
I think my estimate isn't going to be very informative -- I intentionally spent more time than I might otherwise endorse working on Manifund stuff, because it was fun and seemed like good skills-building. My best guess as to how much time I would have spent on an otherwise similar process in the absence of this factor is (EDIT: there was a mistake in my BOTEC) 59 (42 to 85) hours.

I expect more quickly diminishing returns within the grantmaking of a given regrantor than I would for a more centralized operation. This is principally because independent regrantors have more limited deal flow, making their early grants look unusually strong.

 

I think this could become true eventually; but imo currently, most of our small ($50k) budget regrantors could effectively allocate $200-$500k/year budgets. Eg you mentioned earlier that many opportunities of the form "start this great org" require >$50k; also, many regrants on Manifund incl…

Like @MarcusAbramovitch , I'd feel pretty comfortable allocating ~$1m part-time. I mean just on my existing grants I would've been happy to donate another ~$150k without thinking more about it! Concrete >$50k grants I had to pass up but would otherwise have wanted to fund total >$200k (extremely rough). So I'm already at >$400k (EDIT: per 5 months!) without even thinking about how my behavior or prospective grantee behavior might have changed if I had a larger pot.

That said, I think there's a sense in which I hit strongly diminishing returns at ~$…

MarcusAbramovitch
4mo
I feel quite able to give >$500k/year. I also think more money would lead to a lot more "just completely fund the thing" instead of people throwing $1000-2000 for "signal boosting" or hoping others come along to fund the thing. I feel I could do similar and even larger amounts for the animal welfare space.

At best, low-responsibility, low-social-downside giving now feels not as effective as it could be. At worst, this giving behavior makes me feel like a self-inhibited, intentionless, incomplete person.

Concretely, I think I will halt recurring donations. I want to give in bulk, less frequently, more thoughtfully, and perhaps not to recognisable charities. If this feels like it goes against the spirit of the Giving What We Can Pledge, then I will exit the pledge.

 

Thanks for writing this bit; it mirrors my own thinking on my personal donation allocation a…

Ah, sorry you got that impression from my question! I mostly meant Harvard in terms of "desirability among applicants" as opposed to "established bureaucracy". My outside impression is that a lot of people I respect a lot (like you!) made the decision to go work at OP instead of one of their many other options. And that I've heard informal complaints from leaders of other EA orgs, roughly "it's hard to find and keep good people, because our best candidates keep joining OP instead". So I was curious to learn more about OP's internal thinking about this effect.

Did you ever consider starting your own company (software or otherwise) for earning to give?

Jeff Kaufman
6mo
Yes. In Fall 2011 I was thinking pretty hard about founding a startup as a way to maximize my income, and was considering the Summer 2012 YC batch. With what I know about myself now I think I would have been something like 80% likely to get in if I'd gone this route, and the Hall and Woodward (2009) estimate I was using (expected value of $5.8M conditional on getting in) was too low. But overall this ended up not being a direction I wanted to go: I wasn't willing to give my life over to my work to the extent it would have required. (My dad founded several small businesses growing up, and while these days we get to see him a lot—which is great!—as a kid I saw less of him than I wish I had.)
Austin
6mo

I have this impression of OpenPhil as being the Harvard of EA orgs -- that is, it's the premier choice of workplace for many highly-engaged EAs, drawing in lots of talent, with distortionary effects on other orgs trying to hire 😅

When should someone who cares a lot about GCRs decide not to work at OP?

PhilZ
6mo
This is a hard question to answer, because there are so many different jobs someone could take in the GCR space that might be really impactful. And while we have a good sense of what someone can achieve by working at OP, we can't easily compare that to all the other options someone might have. A comparison like "OP vs. grad school" or "OP vs. pursuing a government career" comes with dozens of different considerations that would play out differently for any specific person. Ultimately, we hope people will consider jobs we've posted (if they seem like a good fit), and also consider anything else that looks promising to them.

When should someone who cares a lot about GCRs decide not to work at OP?

I agree that there are several advantages of working at Open Phil, but I also think there are some good answers to "why wouldn't someone want to work at OP?"

Culture, worldview, and relationship with labs

Many people have an (IMO fairly accurate) impression that OpenPhil is conservative, biased toward inaction, generally prefers maintaining the status quo, and is generally in favor of maintaining positive relationships with labs.

As I've gotten more involved in AI policy, I've updated mor…

Thanks, really appreciated this post.

In case anyone is looking for a bank recommendation, I would recommend Mercury for their excellent UX and good pricing model. We use them for both Manifold the for-profit and Manifold for Charity. They do provide ~5% yield to for-profits through Mercury Treasury (we use a different interest provider, but if we could do it over again, we would definitely choose Mercury Treasury instead). Unfortunately, they don't provide Treasury to nonprofits. Mercury can also do payments to intl accounts with a 1% FX exchange rate (wo…

JueYan
6mo
I’ve also heard that Mercury has a great user experience, but as you mentioned, sadly, they’re not available for nonprofits. For a for-profit, your money market sweep goes to the Vanguard Treasury Money Market Fund, which is awesome: a reputable provider, $59bn under management, and 5.24% yield*. Mercury also offers a multi-bank sweep option, where you can put your balance across say 20 banks, so you get 20x the $250k FDIC limit in government protection. If it weren’t for these “invincibility”** features, Mercury may well have failed when Silicon Valley Bank and First Republic failed.

Mercury serves mostly startups, who tend to have large balances (over the $250k FDIC limit) and who know each other (and can spark bank runs through their dense networks). Worse, it’s not a bank, and there’s no stock prices or bond prices*** to watch to see when they’re in trouble.

* note that if you got the same yield via a certificate of deposit instead of a money market account, that’s materially worse, since you’re not insulated from bank failures, and may not be able to redeem if the bank becomes distressed
** not actually invincible
*** observable measures of distress are a double-edged sword: you know when the bank is in trouble, but everyone does, so small concerns can snowball into a real large concern
Answer by Austin, Sep 08, 2023

I'm grateful for the CEA Community Health team -- interpersonal issues can be tricky to navigate, but the Health team is consistently nice, responsive, helpful and has many useful resources compiled for making good decisions, whether it be about running an event or managing grant dynamics.

How is the search going for the new LTFF chair? What kind of background and qualities would the ideal candidate have?

Here are my guesses for the most valuable qualities:

  1. Deep technical background and knowledge in longtermist topics, particularly in alignment. 
    1. Though I haven't studied this area myself, my understanding of the history of good funding for new scientific fields (and other forms of research "leadership"/setting strategic direction in highly innovative domains) is that usually you want people who are quite good at the field you want to advance or fund, even if they aren't the very top scientists. 
      1. Basically you might not want the best scientists at the…

I've heard this argument a lot (eg in the context of impact markets) and I agree that this consideration is real, but I'm not sure that it should be weighted heavily. I think it depends a lot on what the distribution of impact looks like: the size of the best positive outcomes vs the worst negative ones, their relative frequency, how different interventions (eg adding screening steps) reduces negative projects but also discourages positive ones.

For example, if in 100 projects, you have [1x +1000, 4x -100, 95x ~0], then I think black swan farming still doe…

"Focused Research Org"

I really appreciated this list of examples and it's updated me a bit towards checking in with LTFF & others a bit more. That said, I'm not sure adverse selection is a problem that Manifund would want to dedicate significant resources towards solving.

One frame: is longtermist funding more like "admitting a Harvard class/YC batch" or more like "pre-seed/seed-stage funding"? In the former case, it's more important for funders to avoid bad grants; the prestige of the program and its peer effects are based on high average quality in each cohort. In the latt…

In the latter case, you are "black swan farming"; the important thing is to not miss out on the one Facebook that 1000xs, and you're happy to fund 99 duds in the meantime.

One risk of this framing is that as a seed funder your downside is pretty much capped at "you don't get any money" while with longtermist grantmaking your downside could be much larger. For example, you could fund someone to do outreach who is combative and unconvincing or someone who will use poor and unilateral judgement around information hazards. The article has an example of avoid…

I'm not sure I would be comfortable with the idea of grant makers sharing information with each other that they weren't also willing to share with the applicants


One of my pet ideas is to set up a grantmaker coordination channel (eg Discord) where only grantmakers may post, but anyone may read. I think siloed communication channels are important for keeping the signal to noise ratio high, but 97% of the time I'd be happy to share whatever thoughts we have with the applicant & the rest of the world too.

I think this is worth doing for large grants (eg >$50k); for smaller grants, coordination can get to be costly in terms of grantmaker time. Each additional step of the review process adds to the time until the applicant gets their response and their money.

Background checks with grantmakers are relatively easier with an application system that works in rounds (eg SFF is twice a year, Lightspeed and ACX also do open/closed rounds) -- you can batch them up, "here's 40 potential grantees, let us know if you have red flags on any". But if you have a continuo…

I agree that default employment seems preferred by most fulltime workers, and that's why I'm interested in the concept of "default-recurring monthly grants".

I will note that this employment structure is not the typical arrangement among founders trying to launch a startup, though. A broad class of grants in EA are "work on this thing and maybe turn it into a new research org", and the equivalent funding norms in the tech sector at least are not "employment" but "apply for incubators, try to raise funding".

For EAs trying to do research... academia is the ty…

Aaron_Scher
8mo
What does FRO stand for?

I'm sympathetic to treating good altruistic workers well; I generally advocate for a norm of much higher salaries than is typically provided. I don't think job insecurity per se is what I'm after, but rather allowing funders to fund the best altruistic workers next year, rather than being locked into their current allocations for 3 years.

The default in the for-profit sector in the US is not multi-year guaranteed contracts but rather at-will employment, where workers may leave a job or be fired for basically any reason. It may seem harsh in compariso…

Larks
8mo
I think there's a big difference between "you are an at will employee, and we can fire you on two weeks notice, but the default is you will stay with us indefinitely" and "you have a one year contract and can re-apply at the end". Legally the latter gives the worker 50 extra weeks of security, but in practice the former seems to be preferable to many people.

I do think freelancers spend significant amounts of time on job searching, but I'm not sure that's evidence for "low productivity". Productivity isn't a function of total hours worked but rather of output delivered. The one time I engaged a designer on a freelancer marketplace, I got decent results exceptionally quickly. Another "freelancer marketplace" I make heavy use of is Uber, which provides good customer experiences.

Of course, there's a question of whether such marketplaces are good for the freelancers themselves - I tend to think so (eg that the existence of Uber is better for drivers than the previous taxi medallion system) but freelancing is not a good fit for many folks.

Lots of my favorite EA people seem to think this is a good idea, so I'll provide a dissenting view: job security can be costly in hard-to-spot ways.

  • I notice that the places that provide the most job security are also the least productive per-person (think govt jobs, tenured professors, big tech companies). The typical explanation goes like "a competitive ecosystem, including the ability for upstarts to come in and senior folks to get fired, leads to better services provided by the competitors"
  • I think respondents on the EA Forum may think "oh of course I'd…
RyanCarey
8mo
I'm focused on how the best altruistic workers should be treated, and if you think that giving them job insecurity would create good incentives, I don't agree. We need the best altruistic workers to be rewarded not just better than the less productive altruists, but also better than those pursuing non-altruistic endeavours. It would be hard to achieve this if they do not have job security.
S.E. Montgomery
8mo
Do you have evidence for this? Because there is lots of evidence to the contrary, suggesting that job insecurity negatively impacts people's productivity as well as their physical and mental health.[1][2][3]

This goes both ways - yes, there is a chance to fund other potentially better upstarts, but by only offering short-term grants, funders also miss out on applicants who want/need more security (eg. competitive candidates who prefer more secure options, parents, people supporting family members, people with big mortgages, etc).

I think there are options here that would help both funders and individuals. For example, longer grants could be given with a condition that either party can give a certain amount of notice to end the agreement (typical in many US jobs), and many funders could re-structure to allow for longer grants/a different structure for grants if they wanted to. As long as these changes were well-communicated with donors, I don't see why we would be stuck to a 1-year cycle.

My experience: As someone who has been funded by grants in the past, job security was a huge reason for me transitioning away from this. It's also a complaint I've heard frequently from other grantees, and something that not everyone can even afford to do in the first place. I'm not implying that donors need to hire people or keep them on indefinitely, but even providing grants for 2 or more years at a time would be a huge improvement to the 1-year status quo.
7
Elizabeth
8mo
  FWIW I can imagine being really happy under this system. Contingent on grantmaker/supervisor quality of course, and since those already seem to be seriously bottlenecked this doesn't feel like an easy solution to me. But I'd love to see it work out.  

I would be surprised if people on freelancer marketplaces are exceptionally productive - I would guess they end up spending a lot more of their time trying to get jobs than actually doing the jobs.

A few possibilities from startup land:

  • derive worth from how helpful your users find your product
  • chase numbers! usage, revenue, funding, impact, etc. Sam Altman has a line like "focus on adding another 0 to your success metric"
  • the intrinsic sense of having built something cool
9
Patrick Gruban
8mo
After transitioning from for-profit entrepreneurship to co-leading a non-profit in the effective altruism space, I struggle to identify clear metrics to optimize for. Funding is a potential metric, but it is unreliable due to fluctuations in donors' interests. The success of individual programs, such as user engagement with free products or services, may not accurately reflect their impact compared to other potential initiatives. Furthermore, creating something impressive doesn't necessarily mean it's useful.  Lacking a solid impact evaluation model, I find myself defaulting to measuring success by hours worked, despite recognizing the diminishing returns and increased burnout risk this approach entails.

Thanks for building this! The Kelly criterion is one of those super neat concepts that has had a lot of analysis, but not much "here's a thing you can play with". I love that Manifolio lets you play with different users and markets, to give a more intuitive sense of what the Kelly criterion means. The UI is simple and communicates key info quickly, and I like that there's a Chrome extension for tighter integration!
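For readers unfamiliar with it, here's a minimal sketch of the basic binary-outcome Kelly formula — this is just the textbook version for illustration, not Manifolio's actual implementation (which accounts for things like market liquidity):

```python
def kelly_fraction(p: float, b: float) -> float:
    """Fraction of bankroll to wager per the Kelly criterion.

    p: your estimated probability that the bet wins
    b: net odds on a win (profit per unit wagered)
    Clamped at 0 - never bet when you have no edge.
    """
    q = 1.0 - p  # probability of losing
    return max(0.0, (b * p - q) / b)

# Example: a market is priced at 50%, but you believe the true
# probability is 60%. Buying YES at 0.50 yields net odds b = 1.
print(kelly_fraction(0.60, 1.0))  # → 0.2, i.e. wager 20% of bankroll
```

The clamp at zero reflects that Kelly never recommends betting against your own edge; with p = 0.5 and b = 1 the formula returns exactly 0.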

The main reason we'd prefer to use Wise is that they advertise much lower currency exchange fees; my guess is something like 0.5% compared to 3-4% on Paypal, which really adds up on large foreign grants.
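To make the "really adds up" concrete, here's a rough back-of-the-envelope calculation — the 0.5% and 3.5% rates are just the approximate figures mentioned above (taking the midpoint of 3-4%), not exact quotes from either provider:

```python
# Hypothetical $100k foreign grant, comparing approximate FX fees
grant_usd = 100_000
wise_fee = grant_usd * 0.005    # ~0.5% currency exchange fee
paypal_fee = grant_usd * 0.035  # ~3-4% fee; 3.5% as a midpoint

print(f"Wise: ${wise_fee:,.0f}, PayPal: ${paypal_fee:,.0f}, "
      f"difference: ${paypal_fee - wise_fee:,.0f}")
```

On a single $100k grant that's roughly a $3,000 difference, which compounds quickly across many grants.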

1
Constance Li
9mo
That's a very good point! I've just had a lot of bad experiences dealing with Wise, but I also haven't had to deal with very large grants, so the tradeoff wasn't as large.

One possible solution is to have applicants create a prediction market on their chance of getting a job/grant, before applying -- this helps grant applicants get a sense of how good their prospects are. (example 1, 2) Of course, there's a cost to setting up a market and making the relevant info legible to traders, but it should be a lot less than the cost of writing the actual application.

Another solution I've been entertaining is to have grantmakers/companies screen applications in rounds, or collaboratively, such that the first phase of application is ve... (read more)

4
Joseph Lemien
9mo
I'd be interested in seeing some organizations try out the very very quick method. Heck, I'd be willing to help set it up and trial run it. My rough/vague perception is that a lot of the information in a job application is superfluous. I also remember Ben West posting some data about how a variety of "how EA is this person" metrics held very little predictive value in his own hiring rounds.

I really appreciated your assessments of the alignment space, and would be open to paying out a retroactive bounty and/or commissioning reports for 2022 and 2023! Happy to chat via DM or email (austin@manifund.org)

Hi Omega, I'd be especially interested to hear your thoughts on Apollo Research, as we (Manifund) are currently deciding how to move forward with a funding request from them. Unlike the other orgs you've critiqued, Apollo is very new and hasn't received the requisite >$10m, but it's easy to imagine them becoming a major TAIS lab over the next years!

Yeah idk, this just seems like a really weird nitpick, given that you both like Holly's work...? I'm presenting a subjective claim to begin with: "Holly's track record is stellar", as based on my evaluation of what's written in the application plus external context.

If you think this shouldn't be funded, I'd really appreciate the reasoning; but I otherwise don't see anything I would change about my summary.

Thanks for the feedback. I'm not sure what our disagreement cashes out to - roughly, I would expect "if funded, Holly would do a good job such that 1 year later, we were happy to have funded her for this"?

9
Zach Stein-Perlman
9mo
I wasn't commenting on expectations, just your framing of the evidence. (Conditional or counterfactually-conditional on Holly being funded, I expect her to mostly fail because I think most advocacy mostly fails, and 1 year later I agree you will probably still think it was a reasonable grant at the time.)

3b. As a clarification, for a period of time we auto-enrolled people in a subset of groups we considered to be broadly appealing (Econ/Tech/Science/Politics/World/Culture/Sports), so those group size metrics are not super indicative of user preferences. We aren't doing this at this point in time, but did not unenroll those users.

1
Lizka
10mo
Thanks! This is really useful to know. Edited my comment. 
Answer by Austin · Jul 09, 2023
4
2
2

One theory is that EA places unusual weight on issues in the long-term future, compared to existing actors (companies, governments) who are more focused on eg quarterly profits or election cycles. If you care more about the future, you should be differentially excited about techniques to see what the future will hold.

(A less-flattering theory is that forecasting just seems like a cool mechanism, and people who like EA also like cool mechanisms.)

I have not read much of Tetlock's research, so I could be mistaken, but isn't the evidence for Tetlock-style forecasting only for (at best) short-to-medium-term forecasts? Over this timescale, I would've expected forecasting to be very useful for non-EA actors, so the central puzzle remains. Indeed, if there is no evidence for long-term forecasting, then wouldn't one expect non-EA actors (who place less importance on the long-term) to be at least as likely as EAs to use this style of forecasting?

Of course, it would be hard to gather evidence f... (read more)

Thanks for the thoughts (and your posts on Futarchy years ago, I found them to be a helpful review of the literature!)

I'm a bit suspicious of metrics that depend on a vote 5 years from now.

I am too, though perhaps for different reasons. Long-term forecasting has slow feedback loops, and fast feedback loops are important for designing good mechanisms. Getting futarchy to be useful probably involves a lot of trial-and-error, which is hard when it takes you 5 years to assess "was this thing any good?"

2
Nathan Young
10mo
Fwiw I think this is an issue with grantmaking too. 

Thanks for the writeup, Nathan; I am indeed excited about the possibility of making better grants through forecasting/futarchic mechanisms. So I'll start from the other direction: instead of reaching for futarchy as a hammer, start with, what are current major problems grantmakers face?

The problem that seems most important to solve: "finding projects that turn out to be orders of magnitude more successful/impactful than the rest". Paul Graham describes funding seed-stage startups as "farming black swans", which rings true to me. To look at two example roun... (read more)

2
Nathan Young
10mo
Do you think there was a sense that this might be the case? I guess you could encourage anyone to make markets, not just the funders. Then have some way to select the 10 most interesting markets. If you wanted you could try and run an LLM to generate text for some kind of premortem. Seems a bit galaxy brained though.

Thanks, we hope so too! (To be clear, we also have a lot of respect for centralized grantmaking orgs and the work they do; and have received funding through some in EA such as LTFF and SFF.)

Haha, I think you meant this sarcastically but I would actually love to find Republican, or non-college-educated, or otherwise non-"traditional EA" regrantors. (If this describes you or someone you know, encourage them to apply!)

Load more