harfe

Comments

Just speculating here, but if you want to capture most of the energy of a star (e.g. with a Dyson swarm), the structure will be visible to distant observers. And if you can only use a fraction of the available energy, that might reduce your expansion speed.
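
For a rough sense of why a full swarm would be visible: all the captured starlight has to be re-radiated as waste heat, and for a shell near 1 AU that puts the swarm's temperature in the mid-infrared. Here is a minimal back-of-the-envelope sketch in Python; the Sun-like luminosity, 1 AU radius, full coverage, and blackbody re-radiation are all illustrative assumptions, not anything claimed above.

```python
import math

# Waste-heat temperature of a hypothetical full Dyson swarm.
# Assumptions (illustrative): Sun-like star, complete shell at 1 AU,
# blackbody re-radiating the whole stellar luminosity outward, i.e.
#   sigma * T^4 * 4*pi*R^2 = L
L_STAR = 3.828e26   # stellar luminosity, W (solar value)
SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
R = 1.496e11        # shell radius = 1 AU, m

T = (L_STAR / (4 * math.pi * SIGMA * R**2)) ** 0.25
print(f"waste-heat temperature: {T:.0f} K")  # ~390 K, peaking in the mid-infrared
```

A star that dims in visible light while showing a matching infrared excess is exactly the kind of signature searches for Dyson swarms look for.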

> If you take a bunch of random samples of a normal distribution, and only look at subsamples with median 2 sds out, in approximately ~0 subsamples will you find it equally likely to see +0 sds and +4 sds.

Wait, are you claiming +0 SD is significantly more likely than +4 SD in a subsample with median +2 SD, or are you claiming that +4 SD is more likely than +0 SD? And what makes you think so?
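
For intuition, ignoring the extra conditioning introduced by selecting subsamples on their median: +0 SD and +4 SD are equidistant from a +2 SD median, but the parent N(0, 1) density differs between them by a factor of e^8 ≈ 2981, so draws near +0 SD should vastly outnumber draws near +4 SD. A minimal check (plain Python, no assumptions beyond the standard normal):

```python
import math

# Standard normal density at +0 SD and +4 SD, the two points
# equidistant from a subsample median of +2 SD.
def std_normal_pdf(x: float) -> float:
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

p0 = std_normal_pdf(0.0)  # ~0.3989
p4 = std_normal_pdf(4.0)  # ~0.000134
print(p0 / p4)            # ~2981, i.e. exp(8)
```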

This looks like a good argument for proportional representation. It might be worth bringing up the next time FPTP vs approval vs ranked choice is discussed.

What is missing (imo) is what EAs can do about it. Just having the "correct" opinion is not enough, and most countries are probably reluctant to change their constitutions.
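
For readers who have not seen how proportional representation differs mechanically from winner-take-all rules, here is a minimal sketch of one common PR seat-allocation method (D'Hondt) in Python; the parties and vote counts are made up for illustration.

```python
# D'Hondt seat allocation: each round, the party with the highest
# quotient votes / (seats_won + 1) receives the next seat.
def dhondt(votes: dict[str, int], seats: int) -> dict[str, int]:
    won = {party: 0 for party in votes}
    for _ in range(seats):
        winner = max(votes, key=lambda p: votes[p] / (won[p] + 1))
        won[winner] += 1
    return won

print(dhondt({"A": 48_000, "B": 29_000, "C": 23_000}, seats=10))
# {'A': 5, 'B': 3, 'C': 2} -- under FPTP in a single district, A would take the seat outright
```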

FYI: Emile Torres is using they/them pronouns. I think you should edit your comment to use their preferred pronouns.

> In fact, I'd go further and suggest that it would be great if they were to set up their own forum.

Manifold already has a highly active Discord, where they can discuss all the Manifold-specific issues. This did not prevent the EA Forum from discussing the topic, and I doubt it would be much different if Manifold had a proper forum instead of a Discord.

> This is annoying because many of these discussions rate high on controversy but low on importance for EA.

It might seem low on importance for EA to you, but I suspect some people who are upset about Manifest inviting right-wing people do not consider it low-importance.

I strongly disagree. I think human extinction would be bad.

Not every utility function is equally desirable. For example, an ASI that maximizes the number of paperclips in the universe would be a bad outcome.

> Thus, unless one adopts anthropocentric values, the utilitarian philosophy common in this forum (whether you approve of additivity or not) implies that it would be desirable for humans to develop ASI to exterminate humans as quickly and with as high a probability as possible, as opposed to the exact opposite goal that many people pursue.

Most people here do adopt anthropocentric values, in that they think human flourishing would be more desirable than a vast amount of paperclips.

> I am not sure if he actually took part in the event, but there were people involved with him that were present who said he might be dropping by and that he had bought a ticket

Note that at this point we only have indirect word that he bought a ticket. Also note that anyone can buy a ticket, and if his ticket was cancelled by Manifold (which is probably what you want), we would not hear about that directly. Of course, information may still emerge that he actually did attend.

Thanks for linking it! I recommend watching the 5-minute video.

Your title makes it sound as if Trump thinks there is a risk that AI will take over the human race (maybe consider changing the title).

The actual text from Trump in the video is:

> you know there are those people that say it takes over the human race

Given the way Trump talks, it can sometimes be difficult to assess what he actually believes. In general, Trump has expressed a mix of concern about and support for advanced AI. My impression is that he is more interested in advancing AI than in opposing it out of concern for the human race.

> But if we get to GPT-7, I assume we could sort of ask it, “Would taking this next step have a large chance of failing?”

How do you know it is telling the truth, or its best knowledge of the truth, without solving the "eliciting latent knowledge" problem?
