Denis Drescher
Cofounder at Good Exchange
Working (6-15 years of experience)

I’m working on Impact Markets – markets to trade nonexcludable goods.

If you’re also interested in less directly optimific things – such as climbing around and on top of boulders or amateurish musings on psychology – then you may enjoy some of the posts I don’t cross-post from my blog, Impartial Priorities.

Pronouns: Ideally they or she, but the others are fine too. I also go by Dawn now.

How others can help me

Good Exchange needs: advisors, collaborators, and funding. The funding can be for our operation or for retro funding of other impactful projects on our impact markets.

How I can help others

I’m happy to do calls, give feedback, or go bouldering together. You can book me on Calendly.

Sequences

Impact Markets
Researchers Answering Questions

Topic Contributions

Comments

Impact markets may incentivize predictably net-negative projects

Shutting down an impact market, if successful, functionally means burning all the certificates owned by the market participants, who may have already spent a lot of resources and time in the hope of profiting from selling their certificates in the future.

It could be done a bit more smoothly by (1) accepting no new issues, (2) completing all running prize rounds, and (3) declaring the impact certificates not burned and allowing people some time to export their data. (I don’t think it would be credible for the marketplace to declare the certs burned since it doesn’t own them.)

Also, my understanding is that there was (and perhaps still is) an intention to launch a decentralized impact market (i.e. Web3 based), which can be impossible to shut down.

My original idea from summer 2021 was to use blockchain technology simply for technical ease of implementation (I wouldn’t have had to write any code). That would’ve made the certs random tokens among millions of others on the blockchain. The idea was then to set up a centralized, curated marketplace for them, run by a smart, EA-aligned curation team.

We’ve moved away from that idea. Our current market is fully web2 with no bit of blockchain anywhere. Safety was a core reason for the update. (But the ease-of-implementation reasons to prefer blockchain also didn’t apply so much anymore. We have a doc somewhere with all the pros and cons.)

For our favored auction mechanisms, it would be handy to be able to split transactions easily, so we have thought about (maybe, at some point) allowing users to connect a wallet to improve the user experience, but that would be only for sending and receiving payments. The certs would still be rows in a Postgres database in this hypothetical model. Sort of like how Rethink Priorities accepts crypto donations or a bit like a centralized crypto exchange (but that sounds a bit pompous).
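To make that hypothetical model a bit more concrete, here is a minimal sketch of what “certs as rows in a Postgres database” could look like, using sqlite3 as a stand-in for Postgres. All table and column names are made up for illustration; this is not our actual schema.

```python
# Minimal sketch of the "certs as database rows" model, using sqlite3 as a
# stand-in for Postgres. Table and column names are made up for illustration.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE certificates (id INTEGER PRIMARY KEY, title TEXT, issuer TEXT);
CREATE TABLE holdings (
    certificate_id INTEGER REFERENCES certificates(id),
    owner TEXT,
    shares REAL            -- fractional ownership, so transactions can be split
);
""")
db.execute("INSERT INTO certificates VALUES (1, 'Example research post', 'alice')")
db.execute("INSERT INTO holdings VALUES (1, 'alice', 1.0)")

# Splitting a transaction: alice sells 25% of her holding to bob. Payment
# would happen off-platform (bank transfer or, hypothetically, a wallet);
# only the ownership rows change here.
db.execute("UPDATE holdings SET shares = shares - 0.25 WHERE certificate_id = 1 AND owner = 'alice'")
db.execute("INSERT INTO holdings VALUES (1, 'bob', 0.25)")

print(db.execute("SELECT owner, shares FROM holdings").fetchall())
# [('alice', 0.75), ('bob', 0.25)]
```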

But what do you think about the original idea? I don’t think it's so different from a fully centralized solution where you allow people to export their data or at least not prevent them from copy-pasting their certs and ledgers to back them up.

My greatest worries about crypto stem less from the technology itself (which, for all I know, could be made safe) and more from the general spirit in the community that decentralization, democratization, ungatedness, etc. are highly desirable values to strive for. I don’t want to have to fight against the dominant paradigms, so doing it on my own server was more convenient. But then again, big players in the Ethereum space have implemented very much expert-run systems without permissionless governance tokens and the like. So I hope (and think) that there are groups that can be convinced that an impact market should be gated and curated by trusted experts only.

But even so, a solution that uses crypto for anything beyond making payments easier is something I consider more in the context of joining existing efforts to make them safer, rather than as an action that would influence whether those efforts exist at all.

Impact markets may incentivize predictably net-negative projects

I love the insurance idea: compared to our previous ideas around shorting with hedge tokens that compound automatically to maintain -1x leverage, collateral, etc. (see Toward Impact Markets), it also has the potential to solve the incentive problems that we face around setting up our network of certificate auditors! (Strong upvotes to both of you!)

(The insurance would function a bit like the insurance in Robin Hanson’s idea for tort law reform.)

Impact markets may incentivize predictably net-negative projects

Dawn’s (Denis’s) Intellectual Turing Test Red-Teaming Impact Markets

I want to check how well I understand Ofer’s position against impact markets. The “Imagined Ofer” below is how I imagine Ofer to respond (minus language – I’m not trying to imitate his writing style though our styles seem similar to me). I would like to ask the real Ofer to correct me wherever I’m misunderstanding his true position.

I currently favor using the language of prize contests to explain impact markets unless I talk to someone intimately familiar with for-profit startups. People seem to understand it more easily that way.

My model of Ofer is informed by (at least) these posts/comment threads.


Dawn: I’m doing these prize contests now where I encourage people to help each other (monetarily and otherwise) to produce awesome work to reduce x-risks, and finally I reward everyone who contributed to the best of the projects. I’m writing software to facilitate this. I will only reward them in proportion to the gains from moral trade that they’ve generated, and I’ll use my estimate of their ex ante EV as a ceiling for my overall evaluation of a project.

This has all sorts of benefits! It’s basically a wide-open regrantor program where the quasi-regrantors (the investors) absorb most of the risk. It scales grantmaking up and down – grantmakers have ~10x less work and can thus scale their operation up by 10x, and the investors can be anyone around the world, drawing on their existing networks, so they can consider many more, much smaller investments or investments that require niche knowledge or access. Many more ideas will get tried, and it’ll be easier for people to start projects even when they still lack personal contact with the right grantmakers.


Imagined Ofer: That seems very dangerous to me. What if someone else also offers a reward and also encourages people to help each other with the projects but does not apply your complicated ex ante EV ceiling? Someone may create a flashy but extremely risky project and attract a lot of investors for it.

Dawn: But they can do that already? All sorts of science prizes, all the other EA-related prizes, Bountied Rationality, new prizes they promote on Twitter, etc.

Imagined Ofer: Okay, but you’re building software to make it easier, so presumably you’ll thereby increase the number of people who will offer such prizes and the number of people who will attract investments in advance, because the user experience and the networking with investors are smoother and because they’re encouraged to do so.

Dawn: That’s true. We should make our software relatively unattractive to such prize offerers and their audiences, for example by curating the projects on it such that only the ones that are deemed to be robustly positive in impact are displayed (something I proposed from the start, in Aug. 2021). I could put together a team of experts for this.


Imagined Ofer: That’s not enough. What if you or your panel of experts overlook that a project was actually ex ante net-negative in EV, for example because it has already matured and so happened to turn out good? You’d be biased in a predictably upward direction in your assessment of the ex ante EV. In fact, people could do a lot of risky projects and then only ever submit the ones that worked out fine.

Dawn: Well, we can try really hard… Pay bounties for spotting projects that were negative in ex ante EV but slipped through; set up a network of auditors; make it really easy and effortless to hold compounding short positions on projects that manage their -1x leverage automatically; recruit firms like Hindenburg Research (or individuals with similar missions) to short projects and publish exposés on them; require issuers to post collateral; set up mechanisms whereby it becomes unlikely that there’ll be other prizes with any but a small market share (such as the “pot”); maybe even require preregistration of projects to avoid the tricks you mention; etc. (All the various fixes I propose in Toward Impact Markets.)

Imagined Ofer: Those are only unreliable patches for a big fundamental problem. None of them is going to be enough, not even in combination. They are slow and incomplete. Ex ante negative projects can slip through the cracks or remain undetected for long enough to cause harm in this world or a likely counterfactual world.

Dawn: Okay, so one slips through, attracts a lot of investment, gets big, maybe even manages to fool us into awarding it prize money.  It or new projects in the same reference class have some positive per-year probability of being found out due to all the safety mechanisms. Eventually a short-seller or an exposé-bounty poster will spot them and make a lot of money for doing so. We will react and make it super-duper clear going forward that we will not reward projects in that reference class ever again. Anyone who wants to get investments will need to make the case that their project is not in that reference class.

Imagined Ofer: But by that time the harm is done, even if only to a counterfactual world. Next time the harm will be done to the factual world. Besides, regardless of how safe you actually make the system, what’s important is that there can always be issuers and investors who believe (even if wrongly) that they can get their risky project retro-funded. You can’t prevent that no matter how safe you make the system.


Dawn: But that seems overly risk averse to me because prospective funders can also make mistakes, and current prizes – including prizes in EA – are nowhere near as safe. Once our system is safer than any other existing methods, the bad actors will prefer the existing methods.

Imagined Ofer: The existing methods are much safer. Prospective funding is as safe as it gets, and current prizes have a time window of months or so, so by the time the prizes are awarded, the projects that they are awarded to are still very young, so the prizes are awarded on the basis of something that is still very close to ex ante EV.

Dawn: But retroactive funders can decide when to award prizes. In fact, we have gone with a month in our experiment. But admittedly, in the end I imagine that cycles of a year or two are more realistic. That is still not that much more. (See this draft FAQ for some calculations. Retro funders will pay out prizes of up to 1000% in the success case, but outside the success case investors will lose all or most of their principal. They are hits-based investors, so their riskless benchmark profit is probably much higher than 5% per year. They’ll probably not want to stay in certificates for more than a few years even at 1000% return in the success case.)
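(A toy version of the kind of calculation I mean – all numbers below are illustrative assumptions of mine, not figures from the draft FAQ:)

```python
# Toy expected-value calculation for a hits-based seed investor.
# All numbers are illustrative assumptions, not figures from the draft FAQ.

def annualized_return(p_success, payout_multiple, residual_value, years):
    """Expected annualized return of a seed investment in one certificate.

    p_success:       probability that a retro funder buys the certificate
    payout_multiple: sale price as a multiple of the principal (10.0 = 1000%)
    residual_value:  fraction of the principal recovered otherwise
    years:           expected holding period
    """
    expected_multiple = p_success * payout_multiple + (1 - p_success) * residual_value
    return expected_multiple ** (1 / years) - 1

# 10% hit rate, 1000% payout, total loss otherwise, held for two years:
print(f"{annualized_return(0.10, 10.0, 0.0, 2):.0%}")  # 0% - merely break-even
# The same bet held for one year with a 20% hit rate:
print(f"{annualized_return(0.20, 10.0, 0.0, 1):.0%}")  # 100%
```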

Imagined Ofer: A lot more can happen in a year or two than in a month. EA, for example, looked very different in 2013 compared to 2015, but it looked about the same in January vs. February 2015. But more importantly, you write about tying the windfall clauses of AGI companies to retro funding with enormous budgets, budgets that surely offset even the 20 years that it may take to get to that point and the very low probability.

Dawn: The plan I wrote about has these windfalls reward projects that were previously already rewarded by our regular retro funders, no more.

Imagined Ofer: But what keeps a random, unaligned AGI company from just using the mechanism to reward anyone they like?

Dawn: True. Nothing. Let’s keep this idea private. I can unpublish my EA Forum post too, but maybe that’s the audience that should know about it if anyone should. As an additional safeguard against uncontrolled speculation, how about we require people to always select one or several actual present prize rounds when they submit a project?


Imagined Ofer: That might help, but people could just churn out small projects and select whatever prize happens to be offered at the time, when in actuality they’re hoping that one of these prizes will eventually be mentioned in a windfall clause or that their project will otherwise be retro funded through a windfall clause or some other future funder who ignores the setting.

Dawn: Okay, but consider how far down the rabbit hole we’ve gone now: We have a platform that is moderated; we have relatively short cycles for the prize contest (currently just one month); we explicitly offer prizes for exposés; we limit our prizes to stuff that is, by dint of its format, unlikely to be very harmful; we even started with EA Forum posts, a forum that has another highly qualified moderation team. Further, we want to institute more mechanisms – besides exposés – that make it easy to short certificates to encourage people to red-team them; mechanisms to retain control of the market norms even if many new retro funders enter; even stricter moderation; etc. We’re even considering requiring preregistration, mandatory selection of present prize rounds (even though it runs counter to how I feel impact markets should work), and very narrow targets set by retro funders (like my list of research questions in our present contest). Compare that to other EA prize contests. Meanwhile, the status quo is that anyone with some money and Twitter following can do a prize contest, and anyone can make a contract with a rich friend to secure a seed investment that they’ll repay if they win. All of our countless safeguards should make it vastly easier for unaligned retro funders and unaligned project founders to do anything other than use our platform. All that remains is that maybe we’re spreading the meme that you can seed-invest into potential prize winners, but that’s also something that is already happening around the world with countless science prizes. What more can we do!

Imagined Ofer: This is not an accusation – we’re all human – but money and sunk time-cost fallacy corrupt. For all I know this could be a motte-and-bailey type of situation: The moment a big crypto funder offers you a $1m grant, you might throw caution to the wind and write a wide-open ungated blockchain implementation of an impact market.

Dawn: I hope I’ve made clear in my 20,000+ words of writing on impact market safety that were unprompted by your comments (other than the first one in 2021) that my personal prioritization has long rested on robustness over mere positive EV. I’ve just quit my well-paid ETG job as a software engineer in Switzerland to work on this. If I were in it for the money, I wouldn’t be. (More than what I need for my financial safety.) Our organization is also set up with a very general purview so that we can pivot easily. So if I should start work on a more open version of the currently fully moderated, centralized implementation, it’ll be because I’ve come to believe that it’s more robustly positive than I currently think it is. (Or it may well be possible to find a synthesis of permissionlessness and curation.) The only things that can convince me otherwise are evidence and arguments.

Imagined Ofer: I think that most interventions that have a substantial chance to prevent an existential catastrophe also have a substantial chance to cause an existential catastrophe, such that it’s very hard to judge whether they are net-positive or net-negative (due to complex cluelessness dynamics that are caused by many known and unknown crucial considerations). So the typical EA Forum post with sufficient leverage over our future to make a difference at all is about equally likely to increase or to decrease x-risk.

Dawn: I find that to be an unusual opinion. CEA and others try to encourage people to post on the EA Forum rather than discourage them. That was also the point of the CEA-run EA Forum contest. Personally, I also find it unintuitive that that should be the case: For any given post, I try to think of pathways along which it could be beneficial and detrimental. Usually there are few detrimental pathways, and if there are any, strong social norms against malice and government institutions such as the police stand in the way of pursuing them. A few posts come to mind that are rare, unusual exceptions to this theme, but it’s been several years since I read one of those. Complex cluelessness also doesn’t seem to make a difference here because it applies equally to any prospective funding, to prizes after one month, and to prizes after one year. Do you think that writing on high-leverage topics such as x-risks should generally be discouraged rather than encouraged on the EA Forum?

Imagined Ofer: Even if you create a very controlled impact market that is safer than the average EA prize contest, you are still creating a culture and a meme regarding retroactive funding. You could inspire someone to post on Twitter “The current impact markets are too curated. I’m offering a $10m retro prize for dumping 500 tons of iron sulfate into the ocean to solve climate change.” If someone posted this now, no one would take them seriously. If you create an impact market with tens of millions of dollars flowing through it and many market actors, it will become believable to some rogue players that this payout is likely real.

Impact markets may incentivize predictably net-negative projects

We’ve considered a wide range of mechanisms and ended up most optimistic about this one.

When it comes to prediction markets on funding decisions, I’ve thought about this in two contexts in the past:

  1. During the ideation phase, I found that it was already being done (by Metaculus?) and that it was not as helpful because it doesn’t provide seed funding.
  2. In Toward Impact Markets, I describe the “pot” safety mechanism that, I surmised, could be implemented with a set of prediction markets. The implementation that I have in mind has important gaps, and I don’t think it’s the right time to set up the pot yet. But the basic idea was to have prediction markets whose payouts are tied to decisions of retro funders to buy a particular certificate. That action resolves the respective market. But the yes votes on the market can only be bought with shares in the respective cert or by people who also hold shares in the respective cert and in proportion to them. (In Toward Impact Markets I favor the product of the value they hold in either as the determinant of the payout.)
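To illustrate how I read that product rule, here is a minimal sketch under strong simplifying assumptions: a single certificate, a fixed pot, and a payout simply proportional to the product of each participant’s cert holdings and yes-vote holdings. The function and variable names are made up.

```python
# Minimal sketch of the "product" payout rule from Toward Impact Markets,
# under simplifying assumptions: one certificate, a fixed pot, and a payout
# proportional to (value held in the cert x value held in yes votes).

def pot_payouts(holdings: dict[str, tuple[float, float]], pot: float) -> dict[str, float]:
    """holdings maps participant -> (value held in the cert, value held in yes votes)."""
    weights = {name: cert * yes for name, (cert, yes) in holdings.items()}
    total = sum(weights.values())
    if total == 0:
        return {name: 0.0 for name in holdings}
    return {name: pot * w / total for name, w in weights.items()}

# Example: Alice holds a lot of the cert but few yes votes; Bob holds both.
example = {
    "Alice": (80.0, 10.0),   # 80 in the cert, 10 in yes votes -> weight 800
    "Bob":   (20.0, 40.0),   # 20 in the cert, 40 in yes votes -> weight 800
}
print(pot_payouts(example, pot=1000.0))  # {'Alice': 500.0, 'Bob': 500.0}
```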

But maybe you’re thinking of yet another setup: Investors buy yes votes on a prediction market (e.g. Polymarket, with real money) about whether a particular project will be funded. Funders watch those prediction markets, and participants are encouraged to pitch their purchases to funders. Funders then resolve the markets with their actual grants, doing minimal research and mostly trusting the markets. Is that what you envisioned?

I see some weaknesses in that model. I feel like it’s only a bit over 10x as good as the status quo, whereas I think our model is over 100x as good. But it is an interesting mechanism that I’ll bear in mind as a fallback!

Impact markets may incentivize predictably net-negative projects

Going Forward

  1. We will convene a regular working group to more proactively iterate and improve the mechanism design focused on risk mitigation. We intend for this group to function for the foreseeable future. Anyone is welcome to join this group via our Discord.
  2. We will attempt to consult community figures who have expressed interest in impact markets (Paul Christiano, Robin Hanson, Scott Alexander, Eliezer Yudkowsky, Vitalik Buterin). This should move the needle towards more community consensus.
  3. We will continue our current EA Forum contest. We will not run another contest in July.
  4. We will do more outreach to other projects interested in this space (Gitcoin, Protocol Labs, Optimism, etc.) to make sure they are aware of these issues as well, so we can come up with solutions together.

Do we think that impact markets are net-negative?

We – the Impact Markets team of Denis, Dony, and Matt – have been active EAs for almost a combined 20 years. In the past years we’ve individually gone through a prioritization process in which we’ve weighed importance, tractability, neglectedness, and personal fit for various projects that are close to the work of QURI, CLR, ACE, REG, CE, and others. (The examples are mostly taken from my, Denis’s, life because I’m drafting this.) We not only found that impact markets were net-positive but have become increasingly convinced (before we started working on them) that they are the most (positively!) impactful thing in expectation that we can do. 

We have started our work on impact markets because we found that it was the best thing that we could do. We’ve more or less dedicated our lives to maximizing our altruistic impact – already a decade ago. We were not nerd-sniped into it, and we did not adjust our prioritization to fit after the fact.

We’re not launching impact certificates to make ourselves personally wealthy. We want to be able to pay the rent, but once we’re financially safe, that’s enough. Some of us have previously moved countries for earning to give.

Why do we think impact markets are so good?

Impact markets reduce the work of funders – if a (hits-based) funder expects 10% of their grantees to succeed, then retroactive funding cuts the funder’s evaluation work by 10x, because they only evaluate the successes (a toy illustration of the arithmetic follows below). The funders pay out correspondingly higher rewards, which incentivize seed investors to pick up the slack. This pool of seed investors can be orders of magnitude larger than current grant evaluators and would be made up of individuals from different cultures, with different backgrounds and different networks. They have access to funding opportunities that the funders would not have learned of, they can be confident in these opportunities because they come out of their existing networks, and they can make use of economies of scale if the projects they fund have similar needs. These opportunities can also be more numerous and smaller than opportunities that it would’ve been cost-effective for a generalist funder to evaluate.
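The numbers in the sketch below are made up and only meant to show the arithmetic of the scaling claim.

```python
# Toy illustration of the scaling claim, with made-up numbers.
projects = 1000          # projects seeking seed funding in a funding round
hit_rate = 0.10          # fraction the funder expects to succeed
grant_size = 50_000      # prospective grant per project

# Prospective funding: the funder evaluates every applicant.
prospective_evaluations = projects
prospective_budget = projects * grant_size

# Retroactive funding: the funder only evaluates the successes and pays a
# correspondingly larger prize so that seed investors are compensated for
# the ~90% of projects that failed.
retro_evaluations = int(projects * hit_rate)     # 100 evaluations, ~10x less work
retro_prize = grant_size / hit_rate              # 500,000 per success
retro_budget = retro_evaluations * retro_prize   # same total budget

print(prospective_evaluations, retro_evaluations)  # 1000 100
print(prospective_budget, retro_budget)            # 50000000 50000000.0
```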

Thus impact markets solve the scaling problem of grantmaking. We envision that the result will be an even more vibrant and entrepreneurial EA space that makes maximal use of the available talent and attracts more talent as EA expands.

What do we think about the risks?

The risks are real – we’ve spent June 2021 to March 2022 almost exclusively thinking about the downsides, however remote, to position us well to prevent them. But abandoning the project of impact markets because of the downsides seems about as misguided to us as abandoning self-driving cars because of adversarial-example attacks on street signs.

A wide range of distribution mismatches can already happen through the classic financial markets. Where an activity is not currently profitable, those markets don’t work, but prize contests for otherwise unprofitable outcomes have existed for a long time. We see an impact market as a type of prize contest.

Other things being equal, simpler approaches are easier to communicate …

Attributed Impact may look complicated but we’ve just operationalized something that is intuitively obvious to most EAs – expectational consequentialism. (And moral trade and something broadly akin to UDT.) We may sometimes have to explain why it sets bad incentives to fund projects that were net-negative in ex ante expectation to start, but the more sophisticated the funder is, the less likely it is that we need to expound on this. There’s also probably a simple version of the definition that can be easily understood. Something like: “Your impact must be seen as morally good, positive-sum, and non-risky before the action takes place.”

If there is no way to prevent anyone from becoming a retro funder …

We already can’t prevent anyone from becoming a retro funder. Anyone with money and a sizable Twitter following can reward people for any contributions that they so happen to want to reward them for – be it AI safety papers or how-tos for growing viruses.

Even if we hone Attributed Impact to be perfectly smooth to communicate and improve it to the point where it is very hard to misapply it, that hypothetical person on Twitter can just ignore it. Chances are they’ll never hear of it in the first place.

The price of a certificate tracks the maximum amount of money that any future retro funder will be willing to pay for it …

The previous point applies here too. Anyone on Twitter with some money can already outbid others when it comes to rewarding actions.

An additional observation is that the threshold for people to seed-invest into projects seems to be high. We think that very few investors will put significant money into a project that is not clearly in line with what major retro funders already explicitly profess to want to retro-fund, merely because there may later be someone who does.

Suppose that a risky project that is ex-ante net-negative ends up being beneficial …

There are already long-running prize contests where the ex ante and the ex post evaluations of the expected impact can deviate. These don’t routinely seem to cause catastrophes. If they are research prizes outside EA, it’s also unlikely that the prize committees will always be sophisticated enough that contenders will trust them to evaluate their projects according to their ex ante impact. Even the misperception that a prize committee would reward a risky project is enough to create an incentive to start the project.

And yet we very much do not want our marketplace to be used for ex ante net-negative activities. We are eager to put safeguards in place above and beyond what any other prize contest in EA has done. As soon as any risks appear to emerge, we are ready to curate the marketplace with an iron fist, to limit the length of resell chains, to cap the value of certificates, to consume the impact we’re buying, and much more.

What are we actually doing?

  1. We are not currently working on a decentralized impact marketplace. (Though various groups in the Ethereum space are, and there is sporadic interest in the EA community as well.)
    1. This is our marketplace. It is a React app hosted on an Afterburst server with a Postgres database. We can pull the plug at any time.
    2. We can hide or delete individual certificates. We’re ready to make certificates hidden by default until we approve them.
    3. You can review the actual submissions that we’ve received to decide how risky the average actual submission is.
    4. We would be happy to form a curation committee and include Ofer and Owen now or when the market grows past the toy EA Forum experiment we have launched so far.
  2. This is our current prize round.
    1. We have allowed submissions that are directly related to impact markets (and have received some, so we don’t want to back down from our commitment now), but we’re ready to exclude them in future prize rounds.
    2. We would never submit our own certificates to a prize contest that we are judging, but we’d also be open to not submitting any of our impact market–related work to any other prize contests if that’s what consensus comes to.
    3. An important safety mechanism that we have already started implementing is to reward solutions to problems with impact markets. A general ban on using such rewards would remove this promising mechanism.
    4. We don’t know how weak consensus should be operationalized. Since we’ve already launched the marketplace, it seems to us that we’ve violated this requirement before it was put into place. We would welcome a process by which we can obtain a weak consensus, however measured, before our next prize round.

Miscellaneous notes

  1. Attributed Impact also addresses moral trade.
  2. “A naive implementation of this idea would incentivize people to launch a safe project and later expand it to include high-risk high-reward interventions” – That would have to be a very naive implementation because if the actual project is different from the project certified in the certificate, then the certificate does not describe it. It’s a certificate for a different project that failed to happen.
On Deference and Yudkowsky's AI Risk Estimates

Yeah, that sounds perfectly plausible to me.

“A bit confused” wasn’t meant to be any sort of rhetorical pretend understatement or something. I really just felt a slight surprise that caused me to check whether the forum rules contain something about ad hom, and found that they don’t. It may well be the right call on balance. I trust the forum team on that.

On Deference and Yudkowsky's AI Risk Estimates

Maybe, but I find it important to maintain the sort of culture where one can be confidently wrong about something without fear that it’ll cause people to interpret all future arguments only in light of that mistake instead of taking them at face value and evaluating them for their own merit.

The sort of entrepreneurialness that I still feel is somewhat lacking in EA requires committing a lot of time to a speculative idea on the off-chance that it is correct. If it is not, the entrepreneur has wasted a lot of time and usually money. If it additionally has the social cost that they can’t try again because people will dismiss them for that past failure, it becomes that much less likely that anyone will try in the first place.

Of course that’s not the status quo. I just really don’t want EA to move in that direction.

On Deference and Yudkowsky's AI Risk Estimates

I've nevertheless downvoted this post because it seems like it's making claims that are significantly too strong, based on a methodology that I strongly disendorse.

 

I agree, and I’m a bit confused that the top-level post does not violate forum rules in its current form. There is a version of the post – rephrased and reframed – that I think would be perfectly fine even though I would still disagree with it.

And I say that as someone who loved Paul’s response to Eliezer’s list!

Separately, my takeaway from Ben’s 80k interview has been that I think that Eliezer’s take on AI risk is much more truth-tracking than Ben’s. To improve my understanding, I would turn to Paul and ARC’s writings rather than Eliezer and MIRI’s, but Eliezer’s takes are still up there among the most plausible ones in my mind.

I suspect that the motivation for this post comes from a place that I would find epistemically untenable and that bears little resemblance to the sophisticated disagreement between Eliezer and Paul. But I’m worried that a reader may come away with the impression that Ben and Paul fall into one camp and Eliezer into another on AI risk when really Paul agrees with Eliezer on many points when it comes to the importance and urgency of AI safety (see the list of agreements at the top of Paul’s post).

Critiques of EA that I want to read

A bit of a tangent, but:

Sometimes funders try to play 5d chess with each other to avoid funging each other’s donations, and this results in the charity not getting enough funding.

That seems like it could be a defection in a moral trade, which is likely to burn gains of trade. Often you can just talk to the other funder and split 50:50 or use something awesome like the S-Process.

But I’ve been in the situation where I wanted to make a grant/donation (I was doing ETG), knew of the other donor, but couldn’t communicate with them because they were anonymous to me. Hence I resorted to a bit of proto-ECL: There are two obvious Schelling points, (1) both parties each fill half of the funding gap, or (2) both parties each put half of their pre-update budget into the funding gap. Point 1 is inferior because the other party knows, without even knowing me, that more likely than not my donation budget is much smaller than half the funding gap, and because the concept of the funding gap is subjective and unhelpful anyway. Point 2 should thus be the compromise point of which it is relatively obvious to both parties that it should be obvious to both parties. Hence I donated half my pre-update budget.
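As a toy illustration of why Schelling point 1 breaks down, here is the arithmetic with made-up numbers:

```python
# Toy comparison of the two coordination rules, with made-up numbers.
funding_gap = 100_000   # the charity's (subjective) remaining funding gap
my_budget = 10_000      # my pre-update donation budget
other_budget = 60_000   # the anonymous donor's budget, unknown to me

# Rule 1: each party fills half of the funding gap.
rule1_my_donation = funding_gap / 2       # 50,000 - far exceeds my budget
# Rule 2: each party donates half of their own pre-update budget.
rule2_my_donation = my_budget / 2         # 5,000 - always feasible
rule2_other_donation = other_budget / 2   # 30,000

print(rule1_my_donation, rule2_my_donation, rule2_other_donation)
```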

There’s probably a lot more game theory that can be done on refining this acausal moral trade strategy, but I think it’s pretty good already, probably better than the status quo without communication.

Critiques of EA that I want to read

Maybe something along the lines of: Thinking in terms of individual geniuses, heroes, Leviathans, top charities implementing vertical health interventions, central charity evaluators, etc. might go well for a while but is a ticking time bomb, because these powerful positions will attract newcomers with narcissistic traits who will usurp power over the whole system that the previous well-intentioned generation has built up.

The only remedy is to radically democratize any sort of power, make sure that the demos in question is as close as possible to everyone who is affected by the system, and build in structural and cultural safeguards against any later attempts of individuals to try to usurp absolute power over the systems.

But I think that’s better characterized as a libertarian critique, left or right. I can’t think of an authoritarian-left critique. I wouldn’t pass an authoritarian-left intellectual Turing test, but I have thought of myself as a libertarian socialist at one point in my life.


impactmarkets.io