

This looks like a good argument for proportional representation. This might be worth bringing up when the next discussion of FPTP vs approval vs ranked choice takes place.

What is missing (imo) is what EAs can do about it. Just having the "correct" opinion is not enough, and most countries are probably reluctant to change their constitutions.

FYI: Emile Torres is using they/them pronouns. I think you should edit your comment to use their preferred pronouns.

In fact, I'd go further and suggest that it would be great if they were to set up their own forum.

Manifold already has a highly active Discord, where they can discuss all the Manifold-specific issues. This did not prevent the EA Forum from discussing the topic, and I doubt it would be much different if Manifold had a proper forum instead of a Discord.

This is annoying because many of these discussions rate high on controversy but low on importance for EA.

It might seem low on importance for EA to you, but I suspect some people who are upset about Manifest inviting right-wing people do not consider it low-importance.

I strongly disagree. I think human extinction would be bad.

Not every utility function is equally desirable. For example, an ASI that maximizes the number of paperclips in the universe would be a bad outcome.

Thus, unless one adopts anthropocentric values, the utilitarian philosophy common on this forum (whether or not you approve of additivity) implies that it would be desirable for humans to develop an ASI that exterminates humanity as quickly and with as high a probability as possible, the exact opposite of the goal that many people pursue.

Most people here do adopt anthropocentric values, in that they think human flourishing would be more desirable than a vast amount of paperclips.

I am not sure whether he actually took part in the event, but people associated with him who were present said he might drop by and that he had bought a ticket.

Note that at this point we only have indirect word that he bought a ticket. Also note that anyone can buy a ticket, and if his ticket was cancelled by Manifold (which is probably what you want), we would not hear about that directly. Of course, information may still emerge that he actually did attend.

Thanks for linking it! I recommend watching the 5-minute video.

Your title sounds like Trump thinks there is a risk that AI takes over the human race (maybe consider changing the title).

The actual text from Trump in the video is:

you know there are those people that say it takes over the human race

Given the way Trump talks, it can sometimes be difficult to assess what he actually believes. In general, Trump has expressed a mix of concern about and support for advanced AI. My impression is that Trump was more interested in advancing AI than in opposing it out of concern for the human race.

But if we get to GPT-7, I assume we could sort of ask it, “Would taking this next step have a large chance of failing?”

How do you know it tells the truth or its best knowledge of the truth without solving the "eliciting latent knowledge" problem?


I am far more pessimistic than he is about extinction from misaligned AI systems, but I think it's quite sensible to try to make money from AI even in worlds with a high probability of extinction, since the market signal provided counterfactually moves the market far less than the realizable benefit of being richer at such a crucial time.

I am sympathetic to this position when it comes to your own money. Like, if regular AI safety people put a large fraction of their savings into NVIDIA stock, that is understandable to me.

But the situation with Aschenbrenner starting an AGI investment firm is different. He is not directing (just) his own money, but the much larger capital of his investors, into AGI companies. So the majority of the wealth gain will not end up in Aschenbrenner's hands but will belong to the investors. This is different from a small-scale shareholder, who keeps all the gains (minus some tax) from his stock ownership.

But even if Aschenbrenner's plan is to invest in the world-destroying technology in order to be richer later when it matters, it would be nice for him to say so and explain how he intends to use the money later. My guess, however, is that this is not what Aschenbrenner actually believes. He might simply be in favour of accelerating these technologies.
