Thanks for your comment, Hendrik!

To address this, I think it's important to look at the value each additional layer of evaluation provides. It seems (with the multitude of evaluators and fundraisers) we are now at a point where at least some work in the second layer is necessary/useful, but I don't think a third layer would currently be justified (with 0-1 organisations active in the second layer).

Another way to see this: the "turtles all the way down" concern already applies to the first layer of evaluators (why do we need one if charities are already evaluating themselves and reporting on their impact? and who is evaluating these evaluators?). The relevant question is whether a layer adds enough value. The first layer clearly does (given how many charities and donors there are, and the lack of public, independent information on how they compare), and I argue above that the second does as well.

FWIW I don't think this second layer should be fully or forever centralised in GWWC, and I see some value in more fundraising organisations having at least some research capacity to determine their recommendations, but we need to start somewhere and there are diminishing returns to adding more. Relatedly, I should say that I don't expect fundraising organisations to just "listen to whatever GWWC says": we provide recommendations and guidance, and these organisations may use that to inform their choices (which is a significant improvement over having no guidance at all when choosing among evaluators).

Thank you both for offering to help! I'm not yet clear on whether it'll make sense to work with volunteers on this, but it is certainly something we'll consider. Could you please indicate your interest by filling out this form? (select "skilled volunteering" → "impact analysis and evaluation")

Conditional on fundraising for GWWC's 2023 budget, we'll very likely hire an extra researcher to work on this early next year. If this is something you'd be interested in as well, please do feel free to reach out at sjir@givingwhatwecan.org and I'll let you know once the position opens up for applications.

I also think it's worth stressing that the best alternative to finding a great (above-bar) option to spend money on now is not to spend on options below the bar, but to wait / keep looking and spend it on an above-bar opportunity later (and ideally invest to give while you're at it).

In your example, this cashes out (roughly) as us using Research multiple times to find as many Alpha-like projects as possible and fund those, and only starting to look for and fund Beta-like projects when there are no more Alpha-like projects to find. Even if there is only one extra Alpha and one extra Beta to find, it's better (with the parameters as provided in your example) to find and fund that Alpha and find and fund Beta than to find and fund only one of the two.
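To illustrate the arithmetic, here's a rough sketch with purely hypothetical numbers (the actual parameters from your example aren't reproduced here, so the names and figures below are placeholders); value is counted relative to spending the same money at the cost-effectiveness bar:

```python
# Purely hypothetical illustration: "Alpha" and "Beta" stand in for above-bar
# projects found via Research; all numbers are placeholders, measured in units
# of "impact from spending $1 at the current cost-effectiveness bar".
RESEARCH_COST = 1                        # cost of one use of Research

alpha = {"cost": 10, "impact": 30}       # more cost-effective project
beta = {"cost": 10, "impact": 15}        # less cost-effective, but still above bar

def net_value(*projects):
    """Impact gained over the counterfactual of spending everything
    (funding plus research costs) at the bar instead."""
    impact = sum(p["impact"] for p in projects)
    spent = sum(p["cost"] for p in projects) + RESEARCH_COST * len(projects)
    return impact - spent

print(net_value(alpha))        # fund only Alpha -> 19
print(net_value(beta))         # fund only Beta  ->  4
print(net_value(alpha, beta))  # fund both       -> 23 (better than either alone)
```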

Cases somewhat akin to "you can only use Research for either Alpha or Beta" can occur, but only under very specific conditions, e.g. when opportunities are time-sensitive and/or when there is a very tight bottleneck on research resources (i.e. strongly increasing marginal costs of doing research), which might in fact be the case currently.

(As a side point: given the option of investing to give, it's important to "set" the bar taking into account our expectations of how cost-effective future opportunities will be, the investment returns one can achieve in the meantime, value drift and expropriation risks, etc.)

I would like to push back a bit, as I don't think it's true that scalability per se matters more now than it did in the past.

Instead, I think the availability of more funding has pushed down the cost-effectiveness bar for funding opportunities, thereby "unlocking" some new worthy funding opportunities, including some very scalable ones.

To see this, consider that the added value of discovering/creating any new funding opportunity for the community is roughly given by (not accounting for diminishing returns when spending at bar level):

"value created by adding a new funding opportunity" = ("average cost-effectiveness of the opportunity" - "current cost-effectiveness bar") * "room for funding of the opportunity"

I.e. what you're effectively doing by adding a new opportunity is improving the cost-effectiveness of money that would have otherwise been spent at the bar level.

This implies that any opportunity that is above the current bar in terms of its cost-effectiveness can be worth discovering if it's scalable enough. But that is nothing new: it was true as much in 2010-2014 as it is now. It's just that the bar was higher then, so some very scalable opportunities that fell below that higher bar weren't worth discovering back then but are now.
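For concreteness, here's a minimal sketch of the formula above with made-up numbers (the cost-effectiveness figures and room-for-funding amount are purely illustrative, not estimates of any real opportunity):

```python
# Minimal sketch of the formula above; all figures are made up for illustration.
# cost_effectiveness and bar are in impact per dollar; room_for_funding in dollars.
def value_of_new_opportunity(cost_effectiveness, bar, room_for_funding):
    """Value created by adding a new funding opportunity, ignoring
    diminishing returns from spending at bar level."""
    return (cost_effectiveness - bar) * room_for_funding

# A very scalable opportunity slightly above today's (lower) bar is worth adding...
print(value_of_new_opportunity(cost_effectiveness=1.2, bar=1.0,
                               room_for_funding=100_000_000))   # 20,000,000

# ...but measured against a higher bar (as in 2010-2014) it would not have been.
print(value_of_new_opportunity(cost_effectiveness=1.2, bar=2.0,
                               room_for_funding=100_000_000))   # -80,000,000
```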

A social media platform with better incentives

Epistemic Institutions, Values and Reflective Processes

Social media has arguably become a major way in which people consume information and develop their values, and the most popular platforms are far from optimally set up to bring people closer to truthfulness or altruistic ends. We’d love to see experiments with social media platforms that provide more pro-social incentives and yet have the potential to reach a large audience.

Institutions as coordination mechanisms

Artificial Intelligence, Biorisk and Recovery from Catastrophe, Great Power Relations, Space Governance, Values and Reflective Processes

A lot of major problems - such as biorisk, AI governance risk and the risks of great power war - can be modeled as coordination problems, and may be at least partially solved via better coordination among the relevant actors. We’d love to see experiments with institutions that use mechanism design to allow actors to coordinate better. One current example of such an institution is NATO: Article 5 is a coordination mechanism that aligns the interests of NATO member states. But we could create similar institutions for e.g. biorisk, where countries commit to a matching mechanism - where “everyone acts in a certain way if everyone else does” - with costs imposed on defectors to solve a tragedy-of-the-commons dynamic.

Experiments with and within video games

Values and Reflective Processes, Empowering Exceptional People

Video games are a powerful tool to reach hundreds of millions of people, an engine of creativity and innovation, and a fertile ground for experimentation. We’d love to see experiments with and within video games that help create new tools to address major issues. For instance, we’d love experiments with new governance and incentive systems and institutions, new ways to educate people about pressing problems, games that simulate actual problems and allow players to brainstorm solutions, and games that help identify and recruit exceptional people.

Representation of future generations within major institutions

Values and Reflective Processes, Epistemic Institutions

We think at least some of the issues facing us today would be better handled if there were less political short-termism, and if there were more incentives for major political and non-political institutions to take the interests of future generations into account. One way to address this is to establish explicit representation of future generations in these institutions through strategic advocacy, which can be done in many ways and has been piloted in the past few decades.

Scaling successful policies

Biorisk and Recovery from Catastrophe, Economic Growth

Information flow across institutions (including national governments) is far from optimal, and there could be large gains from simply scaling what already works in some places. We’d love to see an organization that takes a prioritized approach to researching which policies are currently in place to address major global issues, identifying which of these are most promising to bring to other institutions and geographies, and then bringing them to where they are most needed.
