I am an earlyish crypto investor who has accumulated enough to be a mid-sized grantmaker, and I intend to donate most of my money over the next 5-10 years to try and increase the chances that humanity has a wonderful future. My best guess is that this is mostly decided by whether we pass the test of AI alignment, so that’s my primary focus.
AI alignment has lots of money flowing into it, with some major organizations not running fundraisers, Zvi characterizing SFF as having “too much money”, OpenPhil expanding its grantmaking for the cause, FTX setting themselves up as another major grantmaker, and ACX reporting the LTFF’s position as:
what actually happened was that the Long Term Future Fund approached me and said “we will fund every single good AI-related proposal you get, just hand them to us, you don’t have to worry about it”
So the challenge is to find high-value funding opportunities in a crowded space.
One option would be to trust that the LTFF or whichever organization I pick will do something useful with the money, and I think this is a perfectly valid default choice. However, because the major grantmakers are well-funded, I suspect I have a specific comparative advantage over them in allocating my funds: I have much more time per unit of money to assess, advise, and mentor my grantees. It helps that I have enough of an inside view of what kinds of things might be valuable that I have some hope of noticing gold when I strike it. Additionally, I can approach people who would not normally apply to a fund.
What is my grantmaking strategy?
First, I decided what parts of the cause to focus on. I’m most interested in supporting alignment infrastructure, because I feel relatively more qualified to judge the effectiveness of interventions to improve the funnel which takes in people who don’t know about alignment at one end, takes them through increasing levels of involvement, and (when successful) ends with people who make notable contributions. I’m also excited about funding frugal people to study or do research which seems potentially promising to my inside view.
Next, I increased my surface area with places which might have good giving opportunities by involving myself with many parts of the movement. This includes Rob Miles’s Discord, AI Safety Support’s Slack, in-person communities, EleutherAI, and the LW/EA investing Discord, where there are high concentrations of relevant people, and exploring my non-EA social networks for promising people. I also fund myself to spend most of my time helping out with projects, advising people, and learning about what it takes to build things.
Then, I put out feelers towards people who are either already doing valuable work unfunded or appear to have the potential and drive to do so if they were freed of financial constraints. This generally involves getting to know them well enough that I have a decent picture of their skills, motivation structure, and life circumstances. I put some thought into the kind of work I would be most excited to see them do, then discuss this with them and offer them a ~1 year grant (usually $14k-20k, so far) as a trial. I also keep an eye open for larger projects which I might be able to kickstart.
When an impact certificate market comes into being (some promising signs on the horizon!), I intend to sell the impact of funding the successful projects and use the proceeds to continue grantmaking for longer.
Alongside sharing my models of how to grantmake in this area and getting advice on it, the secondary purpose of this post is to pre-register my intent to sell impact in order to strengthen the connection between future people buying my impact and my current decisions. I’ll likely make another post in two or three years with a menu of impact purchases for both donations and volunteer work I do, once it’s more clear which ones produced something of value.
I have donated about $40,000 in the past year, and committed around $200,000 over the next two years using this strategy. I welcome comments, questions, and advice on improving it.
I’m on board with that, and the section that you’re quoting seems to express that. Or am I misunderstanding what you’re referring to? (The quoted section basically says that, e.g., +100 utility with 50% probability and -100 utility with 50% probability cancel out to 0 utility in expectation. So the positive and the negative side are weighed equally and the units are the same.)
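The expected-utility arithmetic in the parenthetical can be made concrete with a minimal sketch (the outcome values and probabilities here are just the illustrative numbers from the comment, not anything from a real model):

```python
# Expected utility of a gamble: sum of probability-weighted outcomes.
# Using the comment's example: +100 utility at 50%, -100 utility at 50%.
outcomes = [(0.5, 100), (0.5, -100)]

expected_utility = sum(p * u for p, u in outcomes)
print(expected_utility)  # prints 0.0 — the positive and negative sides cancel
```

The point being illustrated: both sides are weighed by the same probability measure and expressed in the same utility units, so symmetric gains and losses net out to zero in expectation.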
Generally, your critique here is also my critique of the conflict between prioritarianism and classic utilitarianism (or some formulations of those).
Yeah, that’s how I imagine it. You mean it would just have a limited life expectancy like any company or charity? That makes sense. Maybe we could push to automate it and create several alternative implementations of it. Being able to pay people would also be great. Any profit that it used to pay staff would detract from its influence, but that’s also a tradeoff one could make.
Oh, another idea of mine was to use Augur markets. But I don’t know enough about Augur markets yet to tell if there are difficulties there.
I still need to read it, but it’s on my reading list! Getting investments from selfish investors is a large part of my motivation. I’m happy to delay that to test all the mechanisms in a safe environment, but I’d like it to be the goal eventually when we deem it to be safe.
Yeah, it would be interesting to get opinions of anyone else who is reading this.
So the way I understand this question is that there may be retro funders who reward free and open source software projects that have been useful. Lots of investors will be very quick and smart about ferreting out the long tail of the tens of thousands of tiny libraries that hold all the big systems like GPT-3 together. Say, maybe the training data for GPT-3 is extracted by custom software that relies on cchardet to detect the encoding of the websites it downloads when the encoding is undeclared, misdeclared, or ambiguously declared. That influx of funding to these tiny projects will speed up their development and supercharge them to the point where they can do their job a lot better and speed up development processes by 2–10x or so.
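The cchardet example above is about guessing a page’s encoding when its declaration can’t be trusted. cchardet itself is a third-party detector; as a hedged, stdlib-only stand-in, a pipeline can at least try a few common encodings in order (the function name and the encoding list here are illustrative assumptions, not from the original):

```python
# Illustrative stand-in for charset detection: try common encodings in order.
# Real pipelines would use a statistical detector like cchardet instead.
def guess_decode(raw: bytes) -> str:
    for encoding in ("utf-8", "windows-1252", "latin-1"):
        try:
            return raw.decode(encoding)
        except UnicodeDecodeError:
            continue
    # latin-1 maps every byte, so this fallback cannot fail.
    return raw.decode("latin-1", errors="replace")

print(guess_decode("café".encode("utf-8")))         # prints "café"
print(guess_decode("café".encode("windows-1252")))  # prints "café"
```

This crude fallback misclassifies plenty of real-world text, which is exactly why a dedicated detection library sits quietly in so many scraping stacks.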
Attributed impact, the pot, and aligned retro funders would first need to become aware of this (or similar hidden risks), and would then decide that software projects like that are risky, and that they need to make a strong case for why they’re differentially more useful for safety or other net-positive work than for enhancing AI capabilities. But the risk is sufficiently hidden that this is the sort of thing where another unaligned funder with a lot of money might come in and skew the market in the direction of their goals.
The assumptions, as I see them, are:
Yeah, that sounds sensible. Or make it impossible to display them anywhere in the first place without audit?