I realized that the concept of utility as a uniform, singular value is pretty off-putting to me. I consider myself someone who is inherently aesthetic and needs to place myself in a broader context of society, style, and so on. I require a lot of different experiences; in some way, I need more than just happiness to reach a state of fulfillment. I need the aesthetic experience of beauty, the experience of calmness, the anxiety of looking for answers, the joy of building and designing.
The richness of everyday experience might be reducible to two dimensions, positive and negative feelings, but that really doesn't capture what a fulfilling human life is.
You might appreciate Ozy Brennan's writeup on capabilitarianism. Contrasting it with most flavors of utilitarianism, they write:
Utilitarians maximize “utility,” which is pleasure or happiness or preference satisfaction or some more complicated thing. But all our ways of measuring utility are really quite bad. Some people use self-reported life satisfaction or happiness, but these metrics often fail to match up with common-sense notions about what makes people better off. GiveWell tends to use lives saved and increased consumption, which are fine as far as they go, but everyone agrees that that’s only a small fraction of what we care about. A lot of people wind up relying basically on intuition, or on heuristics like “I would not like it if I went hungry” or “probably if you give people more money they’ll be happier.”
In my experience, a lot of utilitarians tend to stuff how hard it is to measure utility up into the attic like the first wife in a gothic novel. It is rare to find a work of utilitarian philosophy that comes up with any sort of well-thought-out principled system for determining what people prefer or what brings them pleasure.
The thing I like about capabilitarianism is that it puts its arbitrariness up front. “There are the things we care about!” it says. “These are the things we’re going to be trying to measure! You can argue with us about them if you want.” Nothing is being smuggled in through the back door.
So what is it?
Capabilitarianism is based on the philosophy of Amartya Sen and Martha Nussbaum. It is consequentialist, but heavily influenced by deontology (especially Kantianism) and virtue ethics (especially Aristotleanism). (If that doesn’t mean anything to you, don’t worry about it.) Capabilitarianism is about making sure people have certain central capabilities. ... Society should make sure that everyone has the central capabilities.
When I say “society should make sure,” I don’t mean “the government should make sure.” While the government has an appropriate role in making sure people can exercise the central capabilities, so do markets, civil society, charities, families, and individuals. Many central capabilities are best met by a combination: for example, the best way to make sure everyone has the “enough food” central capability is a free market in groceries, combined with a robust welfare state to take care of those who can’t afford to buy food on their own.
Finally, what matters is that you have the capability, not that you choose to exercise the capability. If you can’t leave the house, that’s bad. If you legally and socially and physically can leave your house, and freely choose to live the Emily Dickinson lifestyle, that is fine, and capabilitarians have no problem with this.
Ozy reproduces Martha Nussbaum's first-draft list of the central capabilities in their essay; in short: life; bodily health; bodily integrity; senses, imagination, and thought; emotions; practical reason; affiliation; other species; play; and control over one's environment (political and material).
I’m working on a project to estimate the cost-effectiveness of AIS orgs, something like Animal Charity Evaluators does. This involves gathering data on metrics such as:
People impacted (e.g., scholars trained).
Research output (papers, citations).
Funding received and allocated.
Some organizations (e.g., MATS, AISC) share impact analyses, but there's no broad comparison across orgs. AI safety orgs operate on diverse theories of change, making standardized evaluation tricky, but I think rough estimates could help with prioritization.
I’m looking for:
Previous work
Collaborators
Feedback on the idea
If you have ideas for useful metrics or feedback on the approach, let me know!
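To make this concrete, here's a rough sketch of the kind of comparison I have in mind: dollars of funding per unit of output, broken out by metric. All organization names and numbers below are made up for illustration.

```python
# Rough illustration: cost per unit of output, per metric.
# All org names and figures are hypothetical.
ORGS = {
    "OrgA": {"funding_usd": 2_000_000, "scholars_trained": 60, "papers": 10, "citations": 400},
    "OrgB": {"funding_usd": 500_000,   "scholars_trained": 0,  "papers": 8,  "citations": 1200},
}

def cost_per(org: dict, metric: str) -> float | None:
    """Dollars of funding per unit of the given output metric (None if no output)."""
    return org["funding_usd"] / org[metric] if org[metric] else None

for name, org in ORGS.items():
    print(
        name,
        "cost/scholar:", cost_per(org, "scholars_trained"),
        "cost/citation:", round(cost_per(org, "citations"), 1),
    )
```

The point isn't that a single cost-per-output number settles anything, just that putting orgs side by side on a few proxies makes the comparison explicit and arguable.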
For previous work, I point you to @NunoSempere’s ‘Shallow evaluations of longtermist organizations,’ if you haven’t seen it already. (While Nuño didn’t focus on AI safety orgs specifically, I thought the post was excellent, and I imagine that the evaluation methods/approaches used can be learned from and applied to AI safety orgs.)
Thanks! I saw that post. It's an excellent approach. I'm planning to do something similar, but less time-consuming and more limited in scope. The range of theories of change pursued in AIS is limited and can be broken down into:
Evals
Field-building
Governance
Research
Evals can be measured by the quality and number of evals and by their relevance to existential risk. It seems pretty straightforward to tell a good eval org from a bad one: engagement with major labs, a substantial number of evals, and a clear connection to existential risk.
Field-building: having a lot of participants who go on to do awesome things after the program.
Research: I argue that the number of citations is a good proxy for the impact of a paper. It's easy to measure and closely tracks how much engagement a paper received; in the absence of any deliberate work to bring a paper to the attention of key decision makers, that engagement is most of what its impact comes down to.
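As a sketch of how citation data could be gathered, here's one way to pull citation counts automatically. I'm assuming the public Semantic Scholar Graph API here; the endpoint and field names should be checked against its documentation, and matching papers by title search is only a rough heuristic.

```python
# Sketch: look up citation counts for a list of paper titles via the
# Semantic Scholar Graph API (assumed endpoint and fields; verify against docs).
import requests

API = "https://api.semanticscholar.org/graph/v1/paper/search"

def citation_counts(paper_titles: list[str]) -> dict[str, int | None]:
    """Return {title: citation count of the best search match} (None if no match)."""
    counts = {}
    for title in paper_titles:
        resp = requests.get(
            API,
            params={"query": title, "fields": "title,citationCount", "limit": 1},
            timeout=30,
        )
        resp.raise_for_status()
        hits = resp.json().get("data", [])
        counts[title] = hits[0]["citationCount"] if hits else None
    return counts

# Hypothetical usage with an org's published papers:
# print(citation_counts(["Sleeper Agents", "Towards Monosemanticity"]))
```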
I'm not sure how to think about governance.
Take this with a grain of salt.
EDIT: I also think that engaging the broader ML community with AI safety is extremely valuable, and citations tell us whether an organization is good at that. Another thing worth reviewing is transparency: how organizations estimate their own impact, and so on. This space is really unexplored, which seems crazy to me. The amount of money that goes into AI safety is gigantic, and it would be worth exploring what happens with it.
Meta: I'm requesting feedback and gauging interest. I'm not a grantmaker.
You can use prediction markets to improve grantmaking. The assumption is that having accurate predictions about project outcomes benefits the grantmaking process.
Here’s how I imagine the protocol could work:
Someone proposes an idea for a project.
They apply for a grant and make specific, measurable predictions about the outcomes they aim to achieve.
Examples of grant proposals and predictions (taken from here):
Project: Funding a well-executed podcast featuring innovative thinking from a range of cause areas in effective altruism.
Prediction: The podcast will reach 10,000 unique listeners in its first 12 months and score an average rating of 4.5/5 across major platforms.
Project: Funding a very promising biology PhD student to attend a one-month program run by a prestigious US think tank.
Prediction: The student will publish two policy-relevant research briefs within 12 months of attending the program.
Project: A 12-month stipend and budget for an EA to develop programs increasing the positive impact of biomedical engineers and scientists.
Prediction: Three biomedical researchers involved in the program will identify or implement career changes aimed at improving global health outcomes.
Project: Stipends for 4 full-time-equivalent (FTE) employees and operational expenses for an independent research organization conducting EA cause prioritization research.
Prediction: Two new donors with a combined giving potential of $5M+ will use this organization’s recommendations to allocate funds.
A prediction market is created for each of these proposed outcomes, conditional on the project receiving funding. Some of the potential grant money is staked to incentivize trading.
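Here's a minimal sketch of how such a conditional market could be structured. The LMSR (logarithmic market scoring rule) pricing and all class and field names are my own illustrative choices, not part of the proposal itself.

```python
# Sketch of a binary market on one predicted outcome, conditional on funding.
# Pricing uses an LMSR automated market maker; liquidity is staked grant money.
from dataclasses import dataclass
import math

@dataclass
class ConditionalMarket:
    """'Will <prediction> come true?', conditional on the project being funded."""
    prediction: str
    liquidity: float = 100.0   # LMSR parameter b, staked from the potential grant
    shares_yes: float = 0.0
    shares_no: float = 0.0
    funded: bool = False       # set once the funding decision is made

    def price_yes(self) -> float:
        # LMSR price: exp(q_yes/b) / (exp(q_yes/b) + exp(q_no/b))
        b = self.liquidity
        ey, en = math.exp(self.shares_yes / b), math.exp(self.shares_no / b)
        return ey / (ey + en)

    def cost_to_buy_yes(self, amount: float) -> float:
        # Cost is the difference of the LMSR cost function C(q) = b*ln(sum exp(q_i/b)).
        b = self.liquidity
        def c(qy: float, qn: float) -> float:
            return b * math.log(math.exp(qy / b) + math.exp(qn / b))
        return c(self.shares_yes + amount, self.shares_no) - c(self.shares_yes, self.shares_no)

    def resolve(self, outcome_happened: bool) -> str:
        # If the grant was never funded, the conditional market voids.
        if not self.funded:
            return "void: project not funded, traders refunded"
        return "YES shares pay out" if outcome_happened else "NO shares pay out"

market = ConditionalMarket("Podcast reaches 10,000 unique listeners in 12 months")
print(round(market.price_yes(), 2))          # 0.5 before any trades
print(round(market.cost_to_buy_yes(50), 2))  # cost of buying 50 YES shares
```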
Obvious criticisms:
Markets can be gamed, so the potential grantee shouldn't be allowed to bet.
Exploratory projects and research can't make predictions like this.
A lot of people need to participate in the market.
I'm also a broad fan of this sort of direction, but have come to prefer some alternatives. Some points:
1. I believe some of this is being done at OP. Some grantmakers make specific predictions, and some of those might later be evaluated. I think that these are mostly private. My impression is that people at OP believe that they have critical information that can't be made public, and I also assume it might be awkward to make any of this public.
2. Personally, I'd flag that making and resolving custom questions for each specific grant can be a lot of work. In comparison, it can be great when you can have general-purpose questions, like "how much will this organization grow over time?" or "based on a public ranking of the value of each org, where will this org be?"
3. While OP doesn't seem to make public prediction market questions on specific grants, they do sponsor Metaculus questions and similar on key strategic questions. For example, there are tournaments on AI risk, bio, etc. I'm overall a fan of this.
4. In the future, AI forecasters could do interesting things. OP could take the best ones, then have them make private forecasts of many elements of any program.
Re 2: I agree that this is a lot of work, but it's little compared to how much money goes into grants. Some of the predictions would also be quite straightforward to resolve.
Well, glad to hear that they are using it.
I believe that an alternative could be funding a general direction, e.g., funding everything in AIS, but I don't think these approaches are mutually exclusive.