Possibly, yes. It could be split into separate mechanisms: 1) a public budgeting tool using quadratic voting for what I want governments to fund now, and 2) a forecasting tournament/prediction market for what the data/consensus about national priorities will be three years later (since forecasters' prior performance isn't known, a multiple-choice Surprising Popularity approach could also be very relevant here). I see benefits in trying to merge these and wanted to put it out here, but yes, I'm totally in favor of more experimenting with these ideas separately; that's what we hope to do in our Megatrends project :)
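(For anyone unfamiliar with quadratic voting, here is a minimal sketch of the cost rule it relies on; the credit budget, cause names, and vote counts below are made up purely for illustration and are not part of our design.)

```python
# Minimal illustration of quadratic voting: casting v votes for a cause
# costs v**2 voice credits, so expressing a strong preference gets
# progressively more expensive. All names and numbers are hypothetical.

CREDIT_BUDGET = 100  # voice credits per citizen

def cost(votes: int) -> int:
    """Quadratic cost of casting `votes` votes for a single cause."""
    return votes ** 2

# One citizen's ballot: cause -> number of votes cast in support
ballot = {"education": 6, "healthcare": 5, "defence": 3, "transport": 4}

spent = sum(cost(v) for v in ballot.values())
assert spent <= CREDIT_BUDGET, "ballot exceeds the citizen's credit budget"

# The final allocation could then weight each cause by the sum of votes
# (not credits) across all citizens.
print(f"credits spent: {spent} / {CREDIT_BUDGET}")
```

The point is just that piling extra votes on one cause gets quadratically more expensive, which pushes people to spread their credits in proportion to how strongly they actually care.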
Citizens are incentivized to predict what experts will say? This seems a little bit weak, because experts can be arbitrarily removed from reality. You might think that, no, our experts have a great grasp of reality, but I'd intuitively be skeptical. As in, I don't really know that many people who have a good grasp of what the most pressing problems of the world are.
Yes, there are not many experts with this kind of grasp, but a DELPHI done by a diverse group of experts from various fields currently seems to be the best method for identifying megatrends (while some methods of text analysis, technological forecasting, or serious games can help). Only the expertise represented in the group will be known in advance, not the identity of the experts.
So in effect, if that's the case, then the key feedback loops of your system are the ones between experts using the Delphi system <> reality, and the loop between experts <> forecasters seems secondary.
"What are the top national/world priorities" is usually so complex, that it will remain to be a mostly subjective judgment. Then, how else would you resolve it than by looking for some kind of future consensus?
But I agree that even if the individual experts are not known, their biases could be predictable, especially if the pool of relevant local experts is small or there is a lot of academic inbreeding. This could be solved by lowering the bar for expertise (e.g. involving junior experts - Ph.D. students/postdocs in the same fields) so that each year, different experts participate in the resolution-DELPHI.
If the high cost and length of a resolution-DELPHI turn out to be a problem (I suppose they will), those junior experts could instead participate in a quick forecasting tournament on "what would senior experts say if we ran a DELPHI next month?". One out of four of these tournaments would be randomly followed by an actual DELPHI, and the rewards in those tournaments would be 4x higher. But this adds a lot of complexity.
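(To sanity-check the incentive arithmetic behind that setup, with purely illustrative numbers: a 1-in-4 chance of resolution combined with 4x rewards keeps the expected reward per tournament unchanged.)

```python
# Expected reward per tournament under random resolution (illustrative numbers).
base_reward = 1000    # hypothetical reward if every tournament were DELPHI-resolved
p_resolution = 1 / 4  # only 1 in 4 tournaments is followed by a DELPHI
multiplier = 4        # rewards are 4x higher in the resolved tournaments

expected_reward = p_resolution * multiplier * base_reward
assert expected_reward == base_reward  # incentives are preserved in expectation
```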
Perhaps CSET has something to say here. In particular, they have a neat method of taking big picture questions and decomposing them into scenarios and then into smaller, more forecastable questions.
Thanks! We are in touch with CSET and I think their approach is super useful. Hopefully, we'll be able to specify some more research questions together before we start the trials.
This may have the problem that once the public identifies a "leader", either a very good forecaster or a persuasive pundit, they can just copy their forecasts.
Yeah, that's a great point. If the leader is consistently a good forecaster and a lot of people (though probably not more than a couple of percent of participants in the case of widespread adoption) copy them, there are fewer information inputs, but it has other benefits (a lot of people now feel ownership of the right causes, those causes gain traction, etc.). There will also be influential "activists" who get copied a lot (it's probably unrealistic to prevent everyone from revealing their real-life identity if they want to), but since there is cash at stake and no direct social incentive (unlike with e.g. retweeting), I think most people will be more cautious about the priorities of the person they want to copy.
This depends on how much of the budget is chosen this way. In the worst case scenario, this gives a veneer of respectability to a process which only lets citizens decide over a very small portion of the budget.
A small portion of the budget (e.g. 1%) would still be an improvement. Most citizens would not think about how little of the budget they allocate, but rather that they are allocating a non-negligible $200, and they would feel like they actually participated in the whole political process, not only in 1% of it.
Don't they have plenty of that already, and aren't further pressures actually negative if they think they know best?
Yes, but a lot of it seems to be input from lobbyists, interest groups, or people who are mostly virtue signaling to their peers; honest citizen participation (citizen assemblies etc.) is not that common... In this case, the government pre-commits to allocating only a small part of the budget accordingly; apart from that, politicians can still do what they think is best.
Who are the experts? I expect this to cause controversy. ... Maybe this could be circumvented by letting the population decide it? Or at least their elected representatives? I've stumbled upon the "Bayesian Truth Serum" mechanism ...
Thanks for sharing! The Nature study that Robin Hanson talks about is pretty relevant. But in our mechanism, the participants are predicting expert consensus (not their own consensus), so we don't need to make it harder for them to coordinate their answers; we just have to make sure they don't know who the experts in the DELPHI 3-5 years later will be, so that they can't influence them.
Also, unlike in the Surprising Popularity mechanism, if you are confident that only you and a few others know the truth, your incentive is not to keep going with the contemporary consensus but to actually go with your contrarian opinion, especially when it is likely to become accepted 3-5 years later (and experts should be more likely to accept it earlier than the majority of the public).
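(For context, here is a minimal sketch of the selection rule behind the Surprising Popularity mechanism I keep referring to; the vote shares below are made up for illustration.)

```python
# Minimal sketch of the Surprising Popularity rule: choose the answer whose
# actual share of votes most exceeds the share respondents *predicted* it
# would get. All numbers are made up for illustration.

actual_votes = {"yes": 0.35, "no": 0.65}          # fraction answering each option
predicted_popularity = {"yes": 0.20, "no": 0.80}  # average prediction of others' answers

surprisingly_popular = max(
    actual_votes,
    key=lambda a: actual_votes[a] - predicted_popularity[a],
)
print(surprisingly_popular)  # -> "yes": more popular than respondents expected
```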
I'd expect even more "activists", more like 95% maybe?
If "what do I support" becomes a socially useful topic to mention to your friends, this social incentive might be more important than the financial incentive for choosing the forecaster strategy. But you're probably right, there would be fewer than 30% forecasters.
I assume they'd get filtered beforehand to be kind of common-sensical and maybe even boring to think about. Speaking of granularity, I wonder if this would even be enough to distinguish good from lucky forecasters, especially when there are so many participants.
Right, we need to find the level of granularity between "boring to most" and "too difficult for most". I think there are already pretty good setups and scoring mechanisms to eliminate luck, like forecasting a probability distribution and being rewarded based on how much you have improved the current aggregate. But yes, this needs more research.
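(As a rough illustration of the kind of relative scoring I mean, here is a minimal sketch assuming a log score compared against the pre-submission aggregate on the realized outcome; the probabilities and reward scale are made up.)

```python
import math

# Illustrative relative scoring: reward a forecaster by how much their
# probability on the realized outcome improves on the current aggregate,
# measured with a log score. All numbers are hypothetical.

def log_score(prob_on_outcome: float) -> float:
    return math.log(prob_on_outcome)

def relative_reward(forecast: float, aggregate: float, scale: float = 100.0) -> float:
    """Positive if the forecast beat the aggregate on the realized outcome."""
    return scale * (log_score(forecast) - log_score(aggregate))

# Example: the aggregate gave the realized priority a 40% chance,
# while the forecaster gave it 55%.
print(round(relative_reward(0.55, 0.40), 2))  # positive -> reward
```

Scoring against the aggregate rather than in absolute terms also means lucky copiers of the consensus earn roughly nothing, which is part of how luck gets filtered out over repeated rounds.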
Do the grants come from the Czech government?
Yes, the Technology Agency of the Czech Republic.
And I would be happy if our prioritization research discussed other heuristics, such as those that Denis (http://effective-altruism.com/ea/1lu/current_thinking_on_prioritization_2018/#interventions_1) or Michael (http://effective-altruism.com/ea/yp/evaluation_frameworks_or_when_importance/) propose.
Thanks for the feedback, this is very helpful!
EA vs. CCC values: I think about prioritization as a 3-step process of choosing a cause, an intervention, and then a specific organization. EA, 80,000 Hours, or Global Priorities research are focused especially on choosing causes (the most "meta" activity), while GiveWell and other charity evaluators focus on the third step: recommending organizations. Copenhagen Consensus' approach can be seen as a compatible middle step in this process: prioritizing between possible interventions and solutions (hopefully more and more in the most high-impact areas, making it increasingly compatible with EA).
Discount rates: Yes, Copenhagen Consensus uses discount rates (3% and 5% in previous projects), I would argue especially because of the uncertainty about the future and our comparative advantage in solving current issues. We are always open to discussing this with EA, especially for projects in more developed countries.
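(As a quick illustration of how much those rates matter for long-horizon benefits; the benefit size and time horizon below are hypothetical.)

```python
# How much a benefit realized 30 years from now is worth today under the
# 3% and 5% discount rates mentioned above. Benefit size and horizon are
# hypothetical, chosen only to illustrate the effect.

future_benefit = 1_000_000  # hypothetical benefit in 30 years
years = 30

for rate in (0.03, 0.05):
    present_value = future_benefit / (1 + rate) ** years
    print(f"rate {rate:.0%}: present value ≈ {present_value:,.0f}")
# rate 3%: present value ≈ 412,000
# rate 5%: present value ≈ 231,000
```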
X-risks: Our projects are done in three steps that we value more or less equally: 1) stakeholder research (gathering 1000+ policy ideas and choosing the top 60-80 interventions to analyse), 2) cost-benefit analyses of those interventions, and 3) media dissemination and public advocacy for the top priorities. I would expect x-risks to be considered in research in a developed country rather than in Bangladesh, Haiti, or India. Interventions reducing x-risks will definitely be among the 1000+ policy ideas, and even if they don't make it into the 60-80 interventions analysed, thinking about low-probability, extreme-impact effects is certainly something that should be included in all relevant cost-benefit analyses.
Meta: It is substantially difficult to reasonably calculate very broad long-term benefits. The cost-benefit ratio of "improving institutional decision-making" would be almost impossible to calculate, but we will assess our own impact, and since this is exactly our goal, some interesting data might come up. It would also be helpful to analyse partial interventions such as anti-corruption or transparency measures, which should lead to better institutional decision-making as a result. There are other interventions with long-term effects that might make it to the final round and that EA would probably agree on, such as programs in mental health, giving homes, reducing consumption of animal products (e.g. removing subsidies for factory farms), antibiotic resistance, etc.
Advocacy challenges: The project, of course, intends to say the most true things. If some of the top priorities are difficult to implement, politicians will simply choose not to pay attention to them, but at least public awareness will be created. I don't think there will be any extremely controversial interventions (in AI safety, for example) that would make us consider not publishing them to protect the whole project from being ridiculed and discredited.
Public sentiment and preventing authoritarianism: Yes, we expect public sentiment to be a key driving factor for change (along with roundtables and presentations to politicians, political parties, and members of the budget committee), more so than in third-world countries. We are in touch with local media that are influential in shaping public opinion. Implementing the best interventions would have great effects, but even if they are not implemented, we hope to move public discussion (which is always a conflict between different world-views) to a somewhat more rational level, and to open as many eyes and educate as many minds as possible to think in the bigger picture. That seems to be a good way to fight irrational populism, which has all sorts of bad impacts on society.
Importance of robust policies vs. acceptable policies: This possible trade-off should be considered in each analysis: the researcher should think about whether the specific intervention would make the most impact by making the connected policies more robust, or whether the most impact could be achieved by, for example, increasing the funding for the intervention slowly, so that it's acceptable to all sides and works best in the long run. This should ideally be reflected in each cost-benefit ratio.
Preventing bad policies vs. improving good ones: We will look for policies that can have any of these effects, but we are not specifically looking for existing bad policies. Improving good ones is not the goal either; we want to find policies that have great effects per koruna spent but occupy an unfairly bad position in the current equilibrium: they might be unpopular, underfunded, not yet well understood, or not known by the public.
Sure, you can follow our website at www.ceskepriority.cz or hit me up via email at jan@copenhagenconsensus.com
Sure, I'm here particularly looking for 1) general arguments for or against national policy prioritization; 2) references to suitable international sources of funding; and 3) mapping the possibility of community engagement in choosing which policies to run cost-benefit analyses on, once the project starts.
Hi, I'd like to share a full post about the potential of cost-benefit prioritization projects in developed countries. I'm heavily involved in EA, but a newbie to this forum, so I need karma points. Here is a link to the planned post; if you think it's relevant, please like this comment :)
https://www.scribd.com/document/373093413/Prioritization-in-a-Developed-Country
Last year, we (Czech Priorities) did a foresight study on global megatrends for the Czech government, working with a 2040-50 horizon. The outcomes are meant to provide a broad framework for national R&I funding. We applied some innovations, such as a forecasting tournament as a wider participatory input. Here is the website and the English version of the results and methodology. Happy to chat if it'd be useful.