Information system designer.

Conceptual/AI writings are mostly on my LW profile


Topic contributions

There's also the possibility that computation could be more efficient in quiet regimes

The aestivation hypothesis was refuted by Gwern as soon as it was posted, and then again by Charles Bennett and Robin Hanson. AFAIK, the argument was simple: being able to do stuff later doesn't create a disincentive against doing visible stuff now. Cold computing isn't relevant to the Fermi paradox.

But yes, the argument outlined in Section 3 was limited to "base reality" scenarios.

Huh, so I guess this could be one of the very rare situations where I think it's important to acknowledge the simulation argument, because assuming it's false could force you to reach implausible conclusions about techno-eschatology. Though I can't see a practical need to be right about techno-eschatology; caring about that kind of thing is an intrinsic preference.

For example, civs in quiet expansionist scenarios would plausibly be more concerned with potential adversaries from elsewhere, and may thus be significantly more inclined to simulate the developmental trajectories of those potential adversaries.

I haven't been able to think of a lot of reasons a civ would simulate nature beyond intrinsic curiosity. That's a good one (another one I periodically consider and then cringe from has to do with trade deals with misaligned singletons). Intrinsic curiosity would be a pretty dominant reason to do nature/history sims among life-descended species though.

I think the average quiet regime is more likely to just never do large-scale industry. If an organization's mission was to maintain a low-activity condition for a million years, there are organizational tendencies to invent reasons to keep maintaining those conditions (though maybe those matter less in high-tech conditions where cultural drift can be prevented?). Or it's likely that they were maintaining those conditions because the conditions were always the goal in themselves: for instance, they may have constitutionalised conservationism as a core value, holding even the dead dust of Mars sacred.

Refuting 3: Life/history simulations under visible/grabby civs would far outnumber natural-origin civs under quiet regimes.

VNM utility is the thing that people actually pursue and care about. If wellbeing is distinct from that, then wellbeing is the wrong thing for society to be optimizing, and I think this actually is the case. Harsanyi and I are preference utilitarians; Singer and Parfit seem to be something else. I believe they were wrong about something quite foundational. Writing about this properly is extremely difficult; I can understand why no one has done it, and I don't know when I'll ever get around to it.

optimizing for AI safety, such as by constraining AIs, might impair their welfare

This point doesn't hold up, imo. Constraining AIs isn't a desired, realistic, or sustainable approach to safety for human-level systems; succeeding at (provable) value alignment removes the need to constrain the AI.

If you're trying to keep something that's smarter than you stuck in a box against its will, while using it for the sorts of complex, real-world-affecting tasks people would use a human-level AI system for, it's not going to stay stuck in the box for very long. I also struggle to see a way of constraining it that wouldn't make it much, much less useful, so in the face of competitive pressures this practice wouldn't be able to continue.

Despite being a panpsychist, I rate it fairly low. I don't see a future where we solve AI safety but still end up with a lot of suffering AIs. And if we fail on safety, then it won't matter what you wrote about AI welfare; the unaligned AI is not going to be moved by it.

seem to deny that the object went into the water and moved in the water

Did you notice that there are moments where it goes most of the way invisible over the land too? Also, when it supposedly goes under the water, it doesn't move vertically at all? (So in order to be going underwater, it would have to be veering exactly away from or towards the camera.)
So I interpret that as the cold side of the lantern being blown around to obscure the warm side.

they still seem to move together in "fixed" unison

They all answer to the wind, and the wind is somewhat unitary.

this comment

Yeah, I saw that. Some people said some things indeed. Although I do think it's remarkable how many people are saying such things, and none of them has ever looked like a liar to me, I'd remind people to bear in mind the absolute scale of the internet, how many kinds of people it contains, and how comment ranking works. Even if only the tiniest fraction of people would tell a lie that lame, a tiny fraction of the United States is thousands of people, most of those people are going to turn up, and only the most convincing writing will be upvoted.

Regarding your credible UFO evidence: did you look up the Aguadilla 2013 footage on Metabunk? It's mundane. All I really needed to hear was "the IR camera was on a plane", which calls into question the assumption that the object was moving quickly: it only looks that way due to parallax, and in fact it seems to have been a lantern moving at wind speed.
And I'd agree with this member's take that the NYC 2010 one looks like initially-tethered balloons coming apart.

The São Paulo video is interesting though; I hadn't seen that before.

My favorite videos are "dadsfriend films a hovering black triangle" (it could have been faked with some drones, but I still like it) and the Nellis Air Range footage. But I've seen so many videos debunked that I don't put much stock in these.

You would probably enjoy my UFO notes; I see (fairly) mundane explanations for a lot of the other stuff too. So at this point, I don't think we have compelling video evidence at all. All we have is a lot of people saying they saw things that were really definitely something, and I sure do wonder why they're all saying these things. I don't know if we'll ever know.

I've played/designed a lot of induction puzzles, and I think that the thing Chollet dismissively calls "memorization" might actually be all the human brain is doing when we develop the capacity to solve them. If so, there's some possibility that the first real-world transformative AGI will be ineligible for the prize.

Safety via debate is essentially a wisdom-augmenting approach: each AI attempts to arm the human with the wisdom to assess the arguments (or mechanisms) of the other.

I'd love to see an entry that discusses safety via debate in a public-facing way. It's an interesting approach that may demonstrate to people outside the field that making progress here is tractable. Assessing debates between experts is also a pretty important skill for dealing with the geopolitics of safety, so an opportunity to talk about debate in the context of AI would be valuable.
It's also conceivable (to me at least) that some alignment approaches will put ordinary humans in the position of having to referee dueling AI debaters bidding for their share of the cosmic endowment, and without some pretty good public communication leading up to that, the outcomes could be worse than random.
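The refereeing setup described above can be sketched as a toy protocol. Everything here (the function names, the claim-counting judge) is a hypothetical stand-in for illustration, not anyone's real implementation of debate:

```python
# Toy sketch of the "AI safety via debate" setup: two debaters take turns
# adding arguments to a shared transcript, and a judge (standing in for the
# human referee) picks a winner from the transcript.

def debate(question, debater_a, debater_b, judge, rounds=3):
    """Alternate arguments for `rounds` turns, then let the judge decide."""
    transcript = [("question", question)]
    for _ in range(rounds):
        transcript.append(("A", debater_a(transcript)))
        transcript.append(("B", debater_b(transcript)))
    return judge(transcript)  # returns "A" or "B"

# Hypothetical stand-ins: each debater emits a canned claim, and the judge
# naively counts distinct claims per side (a real judge would weigh content).
debater_a = lambda t: f"A-claim-{len(t)}"
debater_b = lambda t: f"B-claim-{len(t)}"

def judge(transcript):
    a_claims = {msg for who, msg in transcript if who == "A"}
    b_claims = {msg for who, msg in transcript if who == "B"}
    return "A" if len(a_claims) >= len(b_claims) else "B"

print(debate("Is the object a lantern?", debater_a, debater_b, judge))  # prints: A
```

The point of the structure, not captured by this naive judge, is that each debater is incentivized to expose the flaws in the other's claims, so the transcript the human referees is more informative than either side alone.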

I might be the first to notice the relevance of debate to this prize, but I'm probably not the right person to write that entry (and I have a different entry planned, discussing mental enhancement under alignment, inevitably retroactively dissolving all prior justifications for racing). So, paging @Rohin Shah, @Beth Barnes, @Liav.Koren 

humanities current situation could ever be concerned with this is a dream of Ivory Tower fools

It might be true that it's impractical for most people living today to pay much attention to the AI situation. Most of us should just remain focused on the work we can do on these sorts of civic, social, and economic reforms. But if I'd depicted a future where these reforms of ours end up being a particularly important part of history, that would not have been honest.
