RoboTeddy

31 karma · Joined Mar 2020

Posts: 1 · Comments: 8 · Sorted by: New

First I've heard of utilitarian-leaning meta-pluralism! Sounds interesting — have any links?

Agree with this post! It's nice to see these concerns written down.

Deeply agree that "the thing we're trying to maximize" is itself confusing/mysterious/partially unknown, and there is something slightly ridiculous and worrying about running around trying really hard to maximize it without knowing much about what it is. (Like, we hardly even have the beginnings of a science of conscious experience, yet conscious experience is the thing we're trying to affect.)

And I don't think this is just a vague philosophical concern — I really do think we're pretty bad at understanding the experiences of many different people across time, which combinations of experiences are good or bad, and how to actually facilitate various experiences.

People seem to have really overconfident views about what counts as improvement in value, i.e., many people by default seem to think that GDP going up and Our World in Data numbers going up semi-automatically means that things are largely improving. The real picture might be much more mixed — I think it'd be possible to have those numbers go way up while things simultaneously get worse for large swaths of (or even a majority of) people. People are complicated, value is really complex and hard to understand, but often people act as if these things are mostly understood for all intents and purposes. I think they're mostly not understood.


More charitably, the situation could be described as, "we don't know exactly what we're trying to maximize, but it's something in the vicinity of 'over there', and it seems like it would be bad if e.g. AI ran amok, since that would be quite likely to destroy whatever it is that actually is important to maximize". I think this is a fine line of reasoning, but I think it's really critical to be very consciously alert to the fact that we only have a vague idea of what we're trying to maximize.


One potential approach could be to choose a maximand which is pluralistic. In other words:

  • Seek a large vector of seemingly-important things (e.g. include many, many detailed aspects of human experience; this could even include things you might not care about fundamentally but that are important instrumental proxies for things you do care about fundamentally, e.g. civic engagement, the strength of various prosocial norms, ...)
  • Choose a value function over that vector which has a particular kind of shape: it goes way, way down if even one or two elements of the vector end up close to zero. I.e., don't treat things in the vector as substitutable with one another; having 1000x of item A isn't necessarily enough to make up for having none of item B. To give a general idea: something like the product of the square roots of the items in the vector (see the sketch after this list).
  • Maintain a ton of uncertainty about what elements should be included in the vector, generally seek to be adding things, and try to run processes that pull in deep information from a TON of different people/perspectives about what should be in the vector. (Tools like Polis can be one way to do this; it could likely be taken much further, and ML can probably help. Consider applying techniques from next-gen governance — https://forum.effectivealtruism.org/posts/ue9qrxXPLfGxNssvX/cause-exploration-governance-design-and-formation)
  • Treat the questions of "which items should be in the vector?" and "what value function should we run over the vector?" as open questions that need to be continually revisited. Answering them is a whole-of-society project across all time!
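A minimal sketch of the shape described in the second bullet, assuming the vector is just a list of nonnegative scores (the function and numbers are purely illustrative):

```python
import math

def pluralistic_value(vector):
    """Product of the square roots of the vector's elements.

    Because the terms multiply, the total collapses toward zero if any
    single element approaches zero; no amount of item A fully
    substitutes for losing item B entirely.
    """
    value = 1.0
    for x in vector:
        value *= math.sqrt(x)
    return value

print(pluralistic_value([10, 10, 10]))          # ~31.6: a balanced vector
print(pluralistic_value([10_000, 0.0001, 10]))  # ~3.2: 1000x of item A doesn't rescue a near-zero item B
print(pluralistic_value([10_000, 0, 10]))       # 0.0: one element at zero kills the whole thing
```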

I like your post and ideas!

> Create your own karma: Choose a set of accounts, only see karma generated by their upvotes and downvotes.

Would love to have a version of this that works transitively, i.e., I choose a few accounts, and I see karma generated by them and by accounts they give karma to, recursively, with a decay factor. (Think of it like Google PageRank, except over accounts.)
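A minimal sketch of the idea, with made-up data shapes (`seed_accounts` and the `karma_given` mapping are hypothetical); real PageRank adds damping/normalization details this skips:

```python
def transitive_karma(seed_accounts, karma_given, decay=0.5, iterations=20):
    """Trust flows from my chosen seed accounts along karma edges, with decay.

    karma_given: {giver: {receiver: karma points given}}.
    An account's weight = 1.0 if it's a seed, plus decayed weight flowing
    in from every account that gave it karma, recursively
    (a personalized-PageRank-style fixed point).
    """
    trust = {a: 1.0 for a in seed_accounts}
    for _ in range(iterations):
        new_trust = {a: 1.0 for a in seed_accounts}
        for giver, grants in karma_given.items():
            g = trust.get(giver, 0.0)
            if g == 0.0 or not grants:
                continue
            total = sum(grants.values())
            for receiver, points in grants.items():
                new_trust[receiver] = new_trust.get(receiver, 0.0) + decay * g * points / total
        trust = new_trust
    return trust  # then weight each post's karma by its voters' trust scores
```

With decay < 1 the fixed-point iteration converges, and the decay factor controls how fast trust attenuates with each hop away from my chosen accounts.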


Another version could work similarly to Polis:

  1. Cluster accounts by how they vote on articles (i.e., accounts that tend to upvote the same articles would end up in the same cluster)
  2. Let me view the clusters and pick which clusters to give weight to (or, alternatively, automatically give weight to the clusters whose members are in my chosen set of accounts); a rough sketch follows below
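Roughly what I'm picturing, as a sketch; the vote matrix is invented, and I'm using scikit-learn's k-means as a stand-in for whatever dimensionality reduction and clustering Polis actually runs:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical vote matrix: rows = accounts, columns = articles,
# entries +1 (upvote), -1 (downvote), 0 (no vote).
accounts = ["alice", "bob", "carol", "dan"]
votes = np.array([
    [ 1,  1,  0, -1],
    [ 1,  1, -1, -1],
    [-1,  0,  1,  1],
    [-1, -1,  1,  1],
])

# Step 1: cluster accounts that vote alike.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(votes)

# Step 2: give weight only to the clusters containing my chosen accounts.
chosen = {"alice"}
trusted_clusters = {clusters[accounts.index(a)] for a in chosen}
weights = {a: 1.0 if clusters[i] in trusted_clusters else 0.0
           for i, a in enumerate(accounts)}
print(weights)  # alice and bob's cluster gets weight; carol and dan's doesn't
```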

Thanks for the thoughts!

> I think a truly next-gen democracy might not necessarily take as its premise (as many people do) that citizens have independent views that just need to be accurately detected, aggregated, and translated into policy -- but rather it should take greater account of the ways in which opinion-formation probably flows the other way -- and should be designed to "nudge" both mass publics and elites against tribalism, against short-termism, and towards evidence and reason.

Yup agree with this. Ideally information flows in from constituents, and then there's some synthesis with expert views, with information/influence flowing both directions. Agree that this didn't come through in the essay.

I also think there are lots of different kinds of information that could flow in from citizens, beyond just their views as we traditionally think of them. For example, how constituents are feeling (lonely, disenfranchised, purposeless, etc.) seems like really useful information that could help steer decisions. (It might be that constituents have instinctive first-blush ideas about what changes would help with those things, and those ideas might often not be very good. But they would contain information!)

I think of it kind of like product design. Generally, in product design, it's a mistake to give people exactly what they ask for. Usually the game is to figure out what they really want, and why, and then figure out a way to give it to them. The answer might look like something they never would've thought of. But, critically, after you show them a draft of the answer, they should hopefully go, "Yes! That would do what I want!" — i.e., it's important that they participate the whole way along. (That way, you're not imposing unwanted stuff on them.)

> Relatedly, the crux of our governance/democracy problems are informational and epistemic.

Agree that this is a crux.

> I think there's a fair amount of experience that's not included here... Things like participatory budgeting, etc.

Undoubtedly! If a list of things happens to jump to mind, would love to see it. The more lego blocks in the set, the better.

Thanks for your comment!

> crypto infrastructure

Agree that ultimately on-chain execution would be useful. For practical reasons (e.g. currently-limited scaling, UX problems, and development speed), it may be wise to run lots of off-chain experiments — perhaps until it becomes a real issue that people don't trust whoever is running the server. (There are also hybrid approaches that use a centralized server which could censor, but which cannot manipulate beyond that; a sketch follows below.)
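A toy sketch of that hybrid (the names and data shapes are invented; a real system would likely use a proper Merkle tree so individual voters could check inclusion cheaply):

```python
import hashlib, json

def ballot_commitment(ballots):
    """Hash-chain digest over all ballots received so far.

    The off-chain server runs the vote, but periodically posts this
    digest on-chain. It can still censor (quietly drop a ballot), but it
    can't rewrite history: altering any recorded ballot changes the
    digest, so voters holding receipts can detect manipulation.
    """
    digest = b""
    for ballot in ballots:
        leaf = hashlib.sha256(json.dumps(ballot, sort_keys=True).encode()).digest()
        digest = hashlib.sha256(digest + leaf).digest()
    return digest.hex()

ballots = [{"voter": "0xabc...", "choice": "A"},
           {"voter": "0xdef...", "choice": "B"}]
print(ballot_commitment(ballots))  # this hex string is what gets posted on-chain
```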

> identity

Agree that proof-of-humanity / identity systems themselves require high-quality governance! I and another person actually started working on such a system (https://hackmd.io/@zorro-project/zorro-whitepaper) before deciding that it wouldn't be possible to defeat bribery attacks without killing most of the upsides of the system. Good governance could help fix that, though. It's a bit of a chicken-and-egg problem, I guess!

> liquid democracy

I like liquid democracy as a lego block, but I also expect that it wouldn't stand well on its own. E.g. Alex Jones would end up with a ton of votes delegated to him...
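To make that failure mode concrete, a toy tally with hypothetical usernames; it guards against cycles but otherwise ignores real-world details:

```python
def tally_weights(delegations, direct_voters):
    """Resolve liquid-democracy delegation chains into final vote weights.

    delegations: {voter: delegate} for everyone who delegates rather
    than voting directly. Each person contributes weight 1, which flows
    along the chain to whoever ultimately votes.
    """
    weights = {}
    for voter in set(delegations) | set(direct_voters):
        seen = set()
        v = voter
        while v in delegations and v not in seen:  # follow the chain, guarding against cycles
            seen.add(v)
            v = delegations[v]
        weights[v] = weights.get(v, 0) + 1
    return weights

# A few delegations toward one celebrity, and the votes pile up fast:
print(tally_weights({"u1": "jones", "u2": "jones", "u3": "u1"}, {"jones"}))
# -> {'jones': 4}
```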

Yep, I think almost entirely overlapping!

RE: the name: I like "Governance Experiments and Scaling" but just asked around and some other people said they liked "Governance Design & Formation" better 🤷 I don't have any strong feelings about the names.

Great post — definitely aligns with my world model and experience!

One small thing to add: when adding someone to an org, it's really expensive to transfer context to them. In CEA's case, I imagine this cost would be even higher if the person were unfamiliar with EA.

Btw, one possible exception to this rule might be a really good product/ui designer, because they can be enough of an empath to quickly pick up the critical considerations in conversation. (But people at this level are really rare.)

Software engineers could help conduct real-time outbreak response in Seattle: https://twitter.com/trvrb/status/1234931579702538240

[This comment is no longer endorsed by its author]