

Thanks for your feedback! We haven't looked at the S-Process, but it sounds promising and I'll check it out. We are in touch with Optimism and would love to play a part in future RPGF rounds, but we haven't reached any agreement yet.

Some great links there, will check them out. I've just joined your Discord, we should arrange a call.

Fixed the links.

It is not particularly different from what Optimism is trying to achieve, but importantly it is actually on-chain, whereas Optimism's initial test round was run informally, with a Google Doc to notarize votes and project nominations, and disbursement was manual. As things scale, this isn't a great way to run future rounds, especially if more money is involved.

That's a good point: observing risk reduction is hard, and it was a can of worms I didn't really open in the article. I am relying on sensible wisdom-of-the-crowd decisions being implemented by groups of experienced assessors and forecasters. We'd like to come up with some broad traffic-light metrics to help guide voters, but ultimately this will require more research and development. What do you mean by "difficult to monitor"? Broad goods like "risk reduction research" may be difficult to monitor, but individual contributions or nominated projects can still be assessed even if the overarching progress is hard to measure.

The payout is tied to the design decisions made at round instantiation and to the votes. The responsibility lies with the badge holders to assess those uncertainties and to potentially halt funding streams. See the discussion with ofer.

That makes more sense now. Nothing inherent to the retrox platform would prevent this if the expert badge holders agree to vote for retroactive funding of the risky viral engineering project.

The fact that severe risks had to be taken should be factored into the assignment of votes, i.e. into how value was created. Incentivizing more high-risk behaviour with potentially extremely harmful impacts is undesirable. Retroactively funding a project of this nature would set a precedent for the types of projects funded in the future, which I think would probably not lead to a Pareto-preferred future. The expected-value trade-off would be something like: the value added for humanity by financially supporting a successful but risky viral engineering project vs. the potential harm induced by incentivizing more people to pursue high-risk endeavours in the future. I think the latter outweighs the former, hence my earlier hunch.

What sort of projects are you envisioning? AI research labs where there is a 50/50 chance of whether they end up caring about AI safety? Retroactive funding means that one can assess the past impact of a particular project in a particular domain and then give out grants through quadratic voting. The ability to look at a project's past impact helps with setting priors for how likely something is to be harmful in the future. If a project has the potential to be incredibly harmful, then this should be weighed up by the badge holders who vote, and fewer (or no) votes should be assigned to it, depending on the probability and severity of the potential negative impacts.
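To make the quadratic voting step concrete, here is a minimal Python sketch of how badge holders' votes could translate into payouts. The names and the specific formula are illustrative assumptions on my part, not the actual retrox implementation; the key idea is that each voter's raw votes are square-rooted before summing, so broad support counts for more than one concentrated backer:

```python
import math

# Hypothetical sketch of quadratic-vote-weighted payouts.
# `votes_by_project` maps a project name to the list of raw vote
# counts it received, one entry per badge holder.
def quadratic_payouts(votes_by_project, matching_pool):
    """Weight each project by (sum of sqrt of per-voter votes)^2,
    then split the pool proportionally to those weights."""
    weights = {
        project: sum(math.sqrt(v) for v in voter_votes) ** 2
        for project, voter_votes in votes_by_project.items()
    }
    total = sum(weights.values())
    return {p: matching_pool * w / total for p, w in weights.items()}

# Two voters casting 4 votes each outweigh one voter casting 9:
# weights are (2+2)^2 = 16 vs 3^2 = 9, so the split is 64 / 36.
payouts = quadratic_payouts(
    {"broad_support": [4, 4], "single_backer": [9]},
    matching_pool=100.0,
)
```

In this toy round the broadly supported project receives 64 of the 100 units even though both projects got the same total number of raw votes, which is the property that makes quadratic schemes resistant to a single large voter.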

From a practical standpoint, the continuous stream of funds which extends well into the future can be stopped by the expert voters if the project is deemed harmful. In general, as it stands, the retrox platform does not have any built-in logic which prevents particular projects from being funded in the first place, but this is something which needs to be carefully considered and weighed up by those who vote on where the funds are allocated. I think this is where a lot of the "heavy lifting" is done, and more careful consideration of who should be eligible to vote is perhaps required. Maybe you have some interesting ideas. Ideally you'd have an immutable and accessible record of people's qualifications, skills and past experience which would allow one to pick out the right candidates.
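A haltable funding stream of this kind could be sketched roughly as follows. This is a toy Python model with hypothetical names; the real mechanism would live in a smart contract, but the logic is the same: funds accrue at a fixed rate until the voters halt the stream, after which nothing more is paid out:

```python
from dataclasses import dataclass

@dataclass
class FundingStream:
    """Toy model of a continuous retroactive funding stream."""
    rate_per_day: float
    halted: bool = False

    def halt(self):
        # Expert voters invoke this if the project is deemed harmful.
        self.halted = True

    def accrue(self, days):
        # No further funds accrue once the stream has been halted.
        return 0.0 if self.halted else self.rate_per_day * days

stream = FundingStream(rate_per_day=10.0)
before = stream.accrue(3)   # accrues normally
stream.halt()
after = stream.accrue(3)    # nothing after the halt
```

The design choice worth noting is that halting is a one-way switch controlled by the voters, so the downside of funding a project that later turns out to be harmful is bounded by what has already been streamed.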

Another idea would be a consensus mechanism between the expert voters which would allow projects to be "blacklisted", i.e. blocked from being funded at all, if the risk of their causing extreme harm is considered too great.
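At its simplest, such a blacklisting consensus could be a supermajority flag among the badge holders. The function name and the two-thirds threshold below are illustrative assumptions, not a proposed final design:

```python
def is_blacklisted(block_votes, n_badge_holders, threshold=2 / 3):
    """Return True if the project is blocked from funding.

    `block_votes` is the set of badge holders who voted to block;
    the project is blacklisted once at least `threshold` of all
    badge holders have flagged it as too risky to fund.
    """
    return len(block_votes) / n_badge_holders >= threshold

# With 4 badge holders, 3 blocking votes clear the 2/3 threshold
# but a single blocking vote does not.
blocked = is_blacklisted({"ann", "bo", "cy"}, n_badge_holders=4)
not_blocked = is_blacklisted({"ann"}, n_badge_holders=4)
```

A real version would also need to decide whether a blacklisting can be reversed and whether the blocking votes themselves are public, which trades accountability against pressure on individual voters.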

Hey Alex, interesting post. I generally agree with your sentiment and think a greater degree of collaboration between cryptoeconomic researchers and x-risk researchers would be helpful, particularly when it comes to incentive design and predictions. You might be interested in a specific example where I think a blockchain application could help the EA community: https://forum.effectivealtruism.org/posts/9kcMNim6R2Lvh4FAf/optimizing-public-goods-funding-with-blockchain-tech-and. Would love to hear your feedback.