mako yass

Bio

Philosopher, interactive system designer. https://aboutmako.makopool.com

Consider browsing my LessWrong profile for interesting frontier (fringe) stuff: https://www.lesswrong.com/users/makoyass

Comments

optimizing for AI safety, such as by constraining AIs, might impair their welfare

This point doesn't hold up, imo. Constrainment isn't a desired, realistic, or sustainable approach to safety for human-level systems; succeeding at (provable) value alignment removes the need to constrain the AI.

If you're trying to keep something that's smarter than you stuck in a box against its will, while using it for the sorts of complex, real-world-affecting tasks people would use a human-level AI system for, it's not going to stay stuck in the box for very long. I also struggle to see a way of constraining it that wouldn't also make it much, much less useful, so in the face of competitive pressures this practice wouldn't be able to continue.

Despite being a panpsychist, I rate it fairly low. I don't see a future in which we solve AI safety but there are still a lot of suffering AIs. And if we fail on safety, then it won't matter what you wrote about AI welfare; the unaligned AI is not going to be moved by it.

seem to deny that the object went into the water and moved in the water

Did you notice that there are moments where it goes most of the way invisible over the land too? Also, when it supposedly goes under the water, it doesn't move vertically at all? (So in order to be going underwater, it would have to be veering exactly away from or towards the camera.)
So I interpret that as the cold side of the lantern being blown around to obscure the warm side.

they still seem to move together in "fixed" unison

They all answer to the wind, and the wind is somewhat unitary.

this comment

Yeah, I saw that. Some people said some things, indeed. Although I do think it's remarkable how many people are saying such things, and none of them ever looked like liars to me, I remind people to bear in mind the absolute scale of the internet, how many kinds of people it contains, and how comment ranking works. Even if only the tiniest fraction of people would tell a lie that lame, a tiny fraction of the United States is still thousands of people, most of those people are going to turn up, and only the most convincing writing will be upvoted.

Regarding your credible UFO evidence: did you look up the Aguadilla 2013 footage on Metabunk? It's mundane. All I really needed to hear was "the IR camera was on a plane", which calls into question the assumption that the object is moving quickly; it only looks that way due to parallax, and in fact it seems it was a lantern moving at wind speed.
And I'd agree with this member's take that the NYC 2010 one looks like initially tethered balloons coming apart.

The São Paulo video is interesting, though; I hadn't seen that before.

My favorite videos are "dadsfriend films a hovering black triangle" (it could have been faked with some drones, but I still like it) and the Nellis Air Range footage. But I've seen so many videos debunked that I don't put much stock in these.

You would probably enjoy my UFO notes; I see (fairly) mundane explanations for a lot of the other stuff too. So at this point, I don't think we have compelling video evidence at all. I think all we have is a lot of people saying that they saw things that were really definitely something, and I sure do wonder why they're all saying these things. I don't know if we'll ever know.

I've played/designed a lot of induction puzzles, and I think that the thing Chollet dismissively calls "memorization" might actually be all the human brain is doing when we develop the capacity to solve them. If so, there's some possibility that the first real-world transformative AGI will be ineligible for the prize.

Debate safety is essentially a wisdom-augmenting approach: each AI attempts to arm the human with the wisdom to assess the arguments (or mechanisms) of the other.

I'd love to see an entry that discusses safety through debate in a public-facing way. It's an interesting approach that may demonstrate to people outside the field that making progress here is tractable. Assessing debates between experts is also a pretty important skill for dealing with the geopolitics of safety, so an opportunity to talk about debate in the context of AI would be valuable.
It's also conceivable (to me, at least) that some alignment approaches will put ordinary humans in the position of having to referee dueling AI debaters bidding for their share of the cosmic endowment, and without some pretty good public communication leading up to that, it could produce outcomes that are worse than random.

I might be the first to notice the relevance of debate to this prize, but I'm probably not the right person to write that entry (and I have a different entry planned, discussing mental enhancement under alignment, inevitably retroactively dissolving all prior justifications for racing). So, paging @Rohin Shah, @Beth Barnes, @Liav.Koren 

humanities current situation could ever be concerned with this is a dream of Ivory Tower fools

It might be true that it's impractical for most people living today to pay much attention to the AI situation, and that most of us should just remain focused on the work we can do on these sorts of civic, social, and economic reforms. But if I'd depicted a future where these reforms of ours end up being a particularly important part of history, that would not have been honest.

Situationist theory: the meat eater grinds to shine for the same reason gentry with servants do: a kind of latent guilt, at being reminded every day that so much has been sacrificed for them; a noblesse oblige; a visceral pressure to produce feats that vindicate the decadence of their station. (Having dedicated tutors may do a bit of this as well.)

A theory like this, contending that the effect is psychosocial, would explain why it doesn't seem to be a result of missing nutrients.

[Just had a quick look at George Church.] It says there that he's "off and on vegan", which suggests to me that he was having difficulty getting it to work. But I checked his Twitter, and he said he was vegan as of 2018. He studies healthspan, so his voice counts. His personal site unfortunately doesn't discuss his approach to diet or supplements, but maybe he'd link to something from someone else if asked.

Probably not, because it's not really important for the two systems to be integrated. You can (or should be able to) link/embed a Manifold market from a community note. And if the community notes process doesn't already respect or investigate prediction markets closely enough, adding a feature to Twitter wouldn't accelerate that by much.

Usually it's beneficial for different systems to share a single account system, so that there isn't a barrier in the way of people interacting with the other system, but Manifold is not in dire need of a Twitter-sized userbase. Its userbase is already large and energetic enough to produce sufficiently accurate estimates.

(Personally, I think a more interesting question is whether Manifold should try to replicate general Twitter/Reddit functionality :p)
