MakoYass

Bio

Longtermist writer, principled interactive system designer. https://aboutmako.makopool.com

Consider browsing my LessWrong profile for interesting frontier (fringe) stuff: https://www.lesswrong.com/users/makoyass

Comments

We tech/EA/AI people are overly biased about the actual relevance of our own field (I'm a CS student)?

You can just as easily say that global institutions are biased about the relevance of their own fields, and I think that is a good enough explanation: traditional elite fields (press, actors, lawyers, autocrats) don't teach AI, and so can't influence the development of AGI. To perform the feats of confidence that gain or defend career capital in those fields, or to win the agreement and flattery of their peers, they have to avoid acknowledging that AGI is important, because if it's important, then none of them are important.

But I think this dam will start to break, generally. Economists know better than the other specializations: they have the background in decision theory to know what superintelligence will mean, and they see what's happening in industry. The military is also capable of sometimes recognizing and responding to emerging risks. They're going to start to speak up, and then maybe the rest of the elite will have to face it.

Theory: seeing other people happily doing something is a stronger signal to the body that it's okay than proprioception error is. Consider: seeing other people puke makes you want to puke. Maybe seeing people not puke at all does the opposite.

I'm pretty sure rationality and rationalization read the same, though? That's sort of the point of rationalization. The distinction, whether the evidence is being sampled in a biased way, is often outside of the text.

I actually think EA is extremely well positioned to eat that take, digest it, and become immensely stronger as a result: remaining standing as the only great scourge-pilled moral community, immune to preference falsification cascades, its members henceforth unrelentingly straightforward and authentic about what their values are, and so more likely to effectively pursue them, instead of fake values they don't hold.

Because:

  1. Most movements operate through voting. We instead tend to operate through philanthropy (often anonymized philanthropy) and career change, which each require an easily quantifiable sacrifice, so they're much closer to being unfakable signals of revealed preference.
  2. EA is sort of built on the foundation of rationalism where eating nasty truths and accepting nasty truthtellers is a norm.
  3. A lot of that theory also makes negotiating peace between conflicting factions easier. Statistics and decision theory form a basis for, and an introduction to, economic theory and cooperative bargaining theory, for instance. And the orthogonality thesis, the claim that an intelligent thing can also have values that conflict with ours, is also the claim that a person with values that conflict with ours can be intelligent (and so worthy of respect)!

Oh thank you, I might. Initially I Had Criticisms, but as with the FLI worldbuilding contest, my criticisms turned into outlines of solutions and now I have ideas.

I was more interested in the obesity analogy and where that might lead, but I think you only ended up doing a less productive recapitulation of Bostrom's vulnerable world hypothesis.

I think "knowledge explosion" might be a more descriptive name for that, I'm not sure it's better strategically (do you really want your theory to be knee-jerk opposed by people who think that you want to generally halt the production or propagation of knowledge?)

Knowledge Obesity, though... I'd be interested in talking about that. It's a very good analogy. Look at Twitter: it's so easy to digest, extremely presentist, full of hot takes and conspiracy theories. It sounds a lot like the highly processed salt, fat, and sugar of information consumption to me.
The places where the analogy breaks are interesting. I suspect it's going to be very hard to develop standards and consensus about healthy information diets, because modernity relies on specialization, so we all have to read very different things. Some people probably should drink from the firehose of digestible news. Most of us shouldn't, but figuring out who should and shouldn't, and how they should all fit together, is about the biggest design problem in the world, and I've never seen anyone aspire to it. The people who should be doing it, recruiters or knowledge institutions, are all reprehensibly shirking their duty in one way or another.

I'm interested in the claim that the networks (and so, the ideas) of textual venues are going to stay the same as the networks of voice venues. It's possible: there's a large overlap between oral and textual conversation, but there are also divergences, and I don't know if it's clear yet whether those will grow over time or not.
Voice dialog can traverse topics that are really frustrating and aversive in text. And I find that the people I enjoy hanging out with in VR are a bit different from the ones I enjoy hanging out with in text, and very different in terms of who I'd introduce to which communities. The social structures haven't had time to diverge yet, and most of us are most oriented in text and don't even know the merits of voice or how to use them.
But yeah, I think it's pretty likely that text and voice systems are never going to come far apart. And I'm planning on trying to hold them together, because I think text (or at least, the text venues that are coming) is generally more wholesome than voice, and voice could get really bad if it splits off.

The claim that people aren't going to change... I don't think that's true. VR makes it easy to contact and immerse oneself in transformative communities. Oddly, I have experienced doing something like that in a textual online community (we were really great at making people feel like they were now a different kind of person, part of a different social network, on a different path), but I think VR will tend to make that a lot more intense, because there's a limit to how socially satisfying text relationships can be, and with VR that limit kind of isn't there.

Understandable. I wish I'd put more thought into the title before posting, but I vividly remember having hit some sort of nontrivial stamina limit.

I think this could benefit from being expanded. I can only assume you're referring to the democratization of access to knowledge. It's not at all obvious why this is something we need to prepare for or why it would introduce any non-obvious qualitative changes in the world rather than just generally making it go a bit faster.

I believe I could do this. My background is just writing, argument, and constitution of community, I guess.

An idea that was floated recently was an interactive site that asks the user a few questions about themselves and their worldview, then serves them an introduction targeted to their answers.
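A minimal sketch of how that targeting step might work, assuming hypothetical names (`Answers`, `pickIntro`) and made-up intro variants; the real site would obviously need far more nuance in both the questions and the routing:

```typescript
// Hypothetical questionnaire answers; field names and options are illustrative only.
interface Answers {
  caresMostAbout: "global health" | "animals" | "long-term future" | "unsure";
  skepticalOf: "quantification" | "institutions" | "nothing in particular";
}

// Pick an introduction variant based on the user's stated worldview.
function pickIntro(a: Answers): string {
  if (a.skepticalOf === "quantification") {
    return "intro-emphasizing-moral-reasoning-over-metrics";
  }
  switch (a.caresMostAbout) {
    case "global health":
      return "intro-via-global-health-and-development";
    case "animals":
      return "intro-via-animal-welfare";
    case "long-term future":
      return "intro-via-longtermism";
    default:
      return "intro-general-overview";
  }
}

// Example: a reader wary of institutions who cares most about animals.
console.log(pickIntro({ caresMostAbout: "animals", skepticalOf: "institutions" }));
// -> "intro-via-animal-welfare"
```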

I'm not sure how strong the need actually is, though. I get the impression that EA is such a simple concept (reasoned, evidenced moral dialog; earnest consequentialist optimization of our shared values) that most misunderstandings of what EA is are the result of deliberate misunderstanding, and having better explanations won't actually help much. It's as if people don't want to believe that EA is what it claims to be.
It's been a long time since I was outside of the rationality community, but I definitely remember having some sort of negative feeling about the suggestion that I could be better at foundational capacities like reasoning, or, in EA's case, knowing right from wrong.

I guess a solution there is to convince the reader that rationality/practical ethics isn't just a tool for showing off for others (which is zero-sum, and so we wouldn't collectively benefit from improvements in the state of the art), and that being trained in it would make their life better in some way. I don't think LW actually developed the ability to sell itself as self-help (I think it just became a very good analytic philosophy school). I think that's where the work needs to be done.
What bad things will happen to you if you reject one of the VNM expected utility axioms or tell yourself pleasant lies? What choking cloud of regret will descend around you if you aren't doing good effectively?
