I think you missed a disadvantage: there's a free-rider problem where everyone reaps the benefits of the research, so it's too easy for a given org to decline to fund it.
Overall I like the idea a lot and
Some mechanism may be required to ensure that multiple organisations do not fund the same work.
I hope to find time for this exercise later today.
We need a name for the following heuristic. I think of it as one of those "tribal knowledge" things that gets passed on like an oral tradition without being citeable, in the sense of being part of a literature. If you come up with a name, I'll certainly credit you in a top-level post!
I heard it from Abram Demski at AISU'21.
Suppose you're going to end up in either world A or world B, and you're uncertain which one it will be. You can pull lever LA, which is worth 100 if you end up in world A, or lever LB, which is worth 100 if you end up in world B. The heuristic is that if you pull LA but end up in world B, you do not want to have created disvalue. In other words, an intervention premised on the belief that you'll end up in world A should not screw you over in timelines where you end up in world B.
This can be fully mathematized: if most of your probability mass is on ending up in world A, then obviously you'd pick a lever L such that V(L|A) is very high; just also make sure that V(L|B) >= 0, or at worst creates an acceptably small amount of disvalue. Here V(L|A) is read "the value of pulling lever L if you end up in world A".
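The selection rule can be sketched in a few lines of Python. The payoff numbers and lever names here are hypothetical, just to make the constraint concrete:

```python
# Minimal sketch of the heuristic, with made-up payoffs.
# V[lever][world] is the value of pulling `lever` if you end up in `world`.
V = {
    "LA": {"A": 100, "B": -50},   # great in A, but creates disvalue in B
    "LB": {"A": 0,   "B": 100},   # neutral in A, great in B
    "LC": {"A": 80,  "B": 5},     # nearly as good in A, and safe in B
}

def pick_lever(V, likely_world="A", other_world="B", floor=0):
    """Maximize value in the likely world, restricted to levers that
    don't fall below `floor` (i.e. don't create disvalue) in the other."""
    safe = {lever: v for lever, v in V.items() if v[other_world] >= floor}
    return max(safe, key=lambda lever: V[lever][likely_world])

print(pick_lever(V))  # "LC": LA is excluded because V(LA|B) < 0
```

Note that the naive expected-value maximizer might still pick LA under high enough P(A); the heuristic adds the V(L|B) >= 0 constraint as a hard filter before maximizing.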
One downside of decentralization you missed: protocols are slower to update than any other kind of software, which in some scenarios leads to a lock-in risk.
To be more specific, suppose mechanism designer A encodes beliefs/values/aesthetics X into a mechanism M, which gets deployed in a robustly decentralized fashion. Then, upon philosophical breakthroughs that totally update X into X', A encodes X' into a new mechanism M'. The troubling idea I'm pointing to is that coordinating the pivot from M to M' seems exceedingly difficult, likely much more difficult than it would be absent robust decentralization. And this is only in a world with one mechanism designer: things get much more troubling in the real world of many competing mechanism designers A and many competing beliefs/values/aesthetics X.
Is there an econ major or geek out there who would like to
something like 5 hours/week, something like $20-40/hr
(EA Forum DMs / email@example.com / disc @quinn#9100)
I'm aware that there are contractor-coordinating services for each of these asks; I just think it'd be really awesome to have one person do both, keep the money in the community, and maybe meet a future collaborator!
This is odd. I audited/freeloaded at a perfectly mediocre university math department, and they seemed careful to assign the prof whose dissertation was in functional analysis to teach real analysis, and the prof whose dissertation was in algebraic geometry to teach group theory. I guess I only observed the 3rd/4th-year courses. For 1st/2nd-year courses, intuitively you'd want the analysts teaching calculus and the logicians teaching discrete, or something like this, but I don't expect a disaster if they crossed the streams, in the way that I sort of think learning the basic deontology vs. utilitarianism distinction from a Nietzsche expert, or a Deleuze or Derrida expert, etc., is a disaster.
(Thankful I learned both calculus and discrete from a professor who dropped out of a high-energy particle physics PhD to do a topos theory PhD in the math department. Maybe the optimal teachers fit a description like that: interdisciplinarity and so on.)
post idea: based on interviews, profile scenarios from software (exploit discovery, responsible disclosure, coordination of patching, etc.) and try to analyze them with an aim toward understanding what good infohazard protocols would look like.
(I have a contact who was involved with a big patch, if someone else wants to tackle this reach out for a warm intro!)
What if pedant was a sort of "backend" to a sheet UX? A compiler that takes sheet formulae and generates pedant code?
The central claim is that sheet UX is error-prone, so why not keep the UX and add verification behind it?
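A toy sketch of what the front half of such a compiler might do, making no assumptions about pedant's actual syntax: translate a cell formula into a named declaration (using a hypothetical `name = expr` form) that a checker could then verify. The cell names and mapping here are invented for illustration:

```python
import re

def compile_formula(cell, formula, names):
    """Toy translation of a sheet formula like '=A1*B1' into a
    named declaration a checker could verify. `names` maps cell
    references to meaningful variable names (hypothetical example)."""
    assert formula.startswith("="), "sheet formulas start with '='"
    # Replace each cell reference (e.g. A1) with its variable name.
    body = re.sub(r"[A-Z]+[0-9]+", lambda m: names[m.group(0)], formula[1:])
    return f"{names[cell]} = {body}"

decl = compile_formula(
    "C1", "=A1*B1",
    {"A1": "price", "B1": "quantity", "C1": "total"},
)
print(decl)  # total = price*quantity
```

Real sheet formulas (ranges, functions, cross-sheet references) would need a proper parser, but the shape is the same: name the cells, then hand the resulting declarations to the verifier.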
to partially rehash what was on discord and partially add more: