Bio

Longtermist writer, principled interactive system designer. https://aboutmako.makopool.com

Consider browsing my Lesswrong profile for interesting frontier (fringe) stuff https://www.lesswrong.com/users/makoyass

Comments

The media is an extremely different discursive environment from the EA Forum and should have different guidelines.

I don't want to assume that the public sphere cannot become earnestly truthseeking, but right now it isn't at all, and bad things happen if you treat it like it is.

(This is partially echoing/paraphrasing lukeprog.) I want to emphasize the observer-count angle (what I'd otherwise call anthropic measure/phenomenology, but it can be put much more straightforwardly), which to me seems like the simplest way neuron count would lead to increased moral valence. You kind of mention it, and it's discussed more in the full document, but for most of the post it's ignored.

Imagine a room where a pair of robots are interviewed. The robot interviewer is about to leave and go home for the day; they're going to have to decide whether to leave the light on or off. They know that one of the robots hates the dark, but the other strongly prefers it.
The robot who prefers the dark also happens to be running on 1000 redundant server instances whose outputs are majority-voted together, to maximize determinism and repeatability of experiments or something. The robot who prefers the light happens to be running on just one server.

The dark-preferring robot doesn't even know about its redundancy; the redundancy doesn't lead it to report any greater intensity of experience. There is no such report, but it's obvious that the dark-preferring robot is having its experience magnified a thousandfold, because it is exactly as if there are a thousand of them, each having that same experience of being in a lit room, even though they don't know about each other.

You turn the light off before you go.
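(As an aside, here's a minimal sketch of the redundancy setup in the thought experiment; run_instance and everything about it are hypothetical stand-ins, not anything from the post.)

```python
# N identical instances run the same policy and their outputs are
# majority-voted. The voted answer is just what any single instance
# would say; an instance has no way to tell, from the inside, how many
# copies of it are running.
from collections import Counter

def run_instance(observation: str) -> str:
    # Stand-in for the robot's policy; identical on every server.
    return "please turn the light off" if observation == "lit room" else "thanks"

def majority_vote(observation: str, n_replicas: int = 1000) -> str:
    outputs = [run_instance(observation) for _ in range(n_replicas)]
    answer, _count = Counter(outputs).most_common(1)[0]
    return answer

# 1000 replicas and 1 replica produce the same report, so the
# redundancy is invisible in the robot's behavior.
assert majority_vote("lit room", 1000) == majority_vote("lit room", 1)
```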

Given some assumptions about how the brain distributes the processing of suffering (assumptions we're not completely sure of, but which seem more likely than not), we should have some expectation that neuron count has the same anthropic boosting effect.

it's not clear to me that that is the assumption of most

Thinking that much about anthropics will be common within the movement, at least.

Since we're already in existential danger due to AI risk, it's not obvious that we shouldn't read a message that has only a 10% chance of being unfriendly; a friendly message could pretty reliably save us from other risks. Additionally, I can make an argument for friendly messages potentially being quite common:

If we could pre-commit now to never doing a SETI attack ourselves, or if we could commit to only sending friendly messages, then we'd know that many other civs, having at some point stood in the same place as us, will have also made the same commitment, and our risk would decrease.
But I'm not sure; it's a nontrivial question whether that would be a good deal for us to make. Would the reduction in risk of being subjected to a SETI attack be greater than the expected losses from no longer being allowed to do SETI attacks ourselves?
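To put rough numbers on the first point, here's a toy expected-value comparison; every number except the 10% is invented purely for illustration:

```python
# Toy comparison: read the alien message vs. ignore it, assuming a 10%
# chance the message is an attack. All other probabilities are made up.

p_unfriendly = 0.10      # assumed: chance the message is a SETI attack
p_doom_ignore = 0.30     # assumed: existing x-risk (e.g. AI) if we ignore it
p_doom_friendly = 0.05   # assumed: residual risk if a friendly message helps us

# If we read it: unfriendly -> doom; friendly -> it mitigates our other risks.
p_doom_read = p_unfriendly * 1.0 + (1 - p_unfriendly) * p_doom_friendly

print(f"P(doom | ignore) = {p_doom_ignore:.3f}")  # 0.300
print(f"P(doom | read)   = {p_doom_read:.3f}")    # 0.145, so reading wins here
```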

I believe the forum allows commenting anonymously, though I wouldn't know how to access that feature.

Pseudonyms would be a bit better, but it'll do.

I'm excited by the prospect of Polis, but it's frustratingly limited. The system has no notion of whether people are agreeing with a statement because it's convincing and bridges the divide, or just because it's banal.

In this case... I don't think we're really undergoing any factionalization about this? If that's right, should we not just try talking more... that usually works pretty well for us.

I guess prediction markets will help.

Prediction markets about the judgements of readers are another thing I keep thinking about: systems where people can make themselves accountable to Courts of Opinion by betting on their prospective judgements. A court occasionally grabs a comment, investigates it more deeply than usual, and enacts punishment or reward depending on its findings.

I've raised these sorts of concepts with Lightcone as a way of improving the vote sorting (we'd sort according to a prediction market's expectation of the eventual ratio between positive and negative reports from readers). They say they've thought about it.
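For concreteness, here's a toy sketch of that sorting rule; the data structure, field names, and numbers are all hypothetical, not anything Lightcone has implemented:

```python
# Toy sketch: order comments by a prediction market's current expectation
# of the eventual share of positive reader reports.
from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    market_p_positive: float  # market-implied P(eventual reader report is positive)

def sort_by_market(comments: list[Comment]) -> list[Comment]:
    # Higher expected share of positive reports floats to the top.
    return sorted(comments, key=lambda c: c.market_p_positive, reverse=True)

feed = [
    Comment("hot take", market_p_positive=0.35),
    Comment("careful analysis", market_p_positive=0.80),
    Comment("banal agreement", market_p_positive=0.55),
]
for c in sort_by_market(feed):
    print(f"{c.market_p_positive:.2f}  {c.text}")
```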

Although I cheer for this,

What makes EA, EA, what makes EA antifragile, is its ruthless transparency

and although I really want to move to a world where radical transparency wins, I currently don't believe that we're in a world like that right now (I wish I could explain why I think that without immediately being punished for excess transparency, but for obvious reasons that seems impossible).

How do we get to that world? Or, if you see this world in a better light than I do, if you believe the world is already mostly managing to avoid punishing important true ideas, what are the dynamics that preserve and promote that?

You might try to explain it away

I wouldn't; I didn't realize they were recognizing new saints! That's quite surprising, and I can't see why they'd do it unless they believed it was correct.

Trying to rationalise Christian belief as 'well I guess they must be compatibilist deists'

I will persist with this a bit, though: there must be some degree of compatibilist deism, given the extent to which the world was obviously and visibly set up to plausibly work in an autonomous way, and given that most of the Catholics I know are deeply interested in science, believe in evolution, etc. They know how many of these machines drive themselves (although they might draw the line at the brain). They may believe in ongoing miracles, but they know that the miracles are not the norm, and they must wonder why.

which is unhelpful because nobody (not even the Calvinists!) thinks providence is incompatible with agency

Mostly I was just trying to derive, in my odd way, that they wouldn't. But if that's common knowledge, then yeah, it might not have been helpful.

And divine providence cannot just mean that the deist god set everything up just right in the beginning such that everything just worked out as planned ... Your model, I think, is incompatible with Christian dogma

Mm, that is my relationship with nature. I'd heard that there were deists in the Christian world (I think there still are?), so I didn't realize it was incompatible with Christian dogma as it's actually held.

And I guess... personally... I don't understand how very many people could sustain a perception of the world as a place subject to ongoing divine intervention, so I'm surprised if deism isn't common. If there are and were interventions, a lot of them must consist of measures to keep people like me from getting to see any sign of them (and I think about that a lot).

Could you unpack "Compatibilists all deny that impersonal determinism is at all analogous to some agent intervening in the causal structure (this is part of what it means to be a compatibilist)" a bit?

If you want to call this position a 'pre-compatibilist confusion' -

I... think I probably don't
