titotal

Computational Physicist
8710 karma

Bio

I'm a computational physicist, and I generally donate to global health. I am skeptical of AI x-risk and of big-R Rationalism, and I intend to explain why in great detail.

Comments
728

I think you might be engaging in a bit of Motte-and-Baileying here. Throughout this comment, you state MIRI's position as things like "it will be hard to make ASI safe", that AI will "win", and that it will be hard for an AI to be perfectly aligned with "human flourishing". Those statements seem pretty reasonable.

But the actual stance of MIRI, which you just released a book about, is that there is an extremely high chance that building powerful AI will result in everybody on planet Earth being killed. That's a much narrower and more specific claim. You can imagine a lot of scenarios where AI is unsafe, but not in a way that kills everyone. You can imagine cases where AI "wins" but decides to cut a deal with us. You can imagine cases where an AI doesn't care about human flourishing because it doesn't care about anything; it ends up acting like a tool that we can direct as we please.

I'm aware that you have counterarguments for all of these cases (which I will probably disagree with). But those counterarguments will have to be rooted in the actual nuts-and-bolts details of how actual, physical AI works. And if you are trying to reason about future machines, you want to be able to make good predictions about their actual characteristics.

I think in this context, it's totally reasonable for people to look at your (in my opinion poor) track record of prediction and adjust their credence in your effectiveness as an institution. 

I'm a huge fan of epistemological humility, but it seems odd to invoke it for a topic where the societal effects have been exhaustively studied for decades. The measurable harms and comparatively small benefits are as well known as you could reasonably expect for a medical subject. 

Your counterargument seems to be that there are unmeasured benefits, as revealed by the fact that people choose to smoke despite knowing the harm it does. But I don't think these are an epistemological mystery either: you can just ask people why they smoke and they'll tell you. 

It seems like this is more a difference in values than a question of epistemics: one might regard the freedom to choose self-destructive habits as an important principle worth defending.

I don't think this sort of anchoring is a useful thing to do. There is no logical reason for third-party presidency success and AGI success to be linked mathematically, and the third-party question rests on much firmer empirical grounding anyway.

You linked them because your vague impression of the likelihood of one was roughly equal to your vague impression of the likelihood of the other. But if your vague impression of the third-party question changes, that shouldn't change your opinion of the other thing. Do you really think AGI is five times less likely than you previously thought because you got more precise odds about one guy winning the presidency ten years ago?

My (perhaps controversial) view is that forecasting AGI is in the realm of speculation where quantification like this is more likely to obscure understanding than to help it. 

I believe you are correct, and will probably write up a post explaining why in detail at some point. 

Maybe the name doesn't matter that much, but it will still have some effect. If we're still early in the name-change process, then the cost of switching to a better name is basically nothing, so the cost-effectiveness of getting it right is actually extremely high.

The broad appeal applies to multi-millionaires as well. Most multi-millionaires are not into clunky nerd stuff. 

I believe Rice's theorem applies to a programmable calculator. Do you think it is impossible to prove that a programmable handheld calculator is "safe"? Do you think it is impossible to make a programmable calculator safe? 

My point is that just because you can't formally, mathematically prove something doesn't mean it's not true.
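To make the calculator point concrete, here's a minimal sketch (a toy model of my own, not anything from the original exchange): Rice's theorem rules out a *general* decider for non-trivial semantic properties of arbitrary programs, but a specific finite-state device can still be checked exhaustively. The toy "calculator" below has a bounded accumulator, so we can enumerate every reachable state and verify that none of them is an error state.

```python
# Toy illustration (hypothetical model, assumed semantics): safety of a
# finite-state machine is decidable by exhaustive search, even though
# Rice's theorem forbids a general decider over all programs.

from collections import deque

def reachable_states(initial, step, inputs):
    """Breadth-first enumeration of every state the machine can reach."""
    seen = {initial}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        for key in inputs:
            nxt = step(state, key)
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

ERROR = "ERROR"  # the "unsafe" state we want to prove unreachable

def step(state, key):
    # Pressing a digit adds it to the accumulator, wrapping modulo 100,
    # so no input sequence can ever overflow into ERROR.
    if state == ERROR:
        return ERROR
    return (state + key) % 100

states = reachable_states(0, step, inputs=range(10))
print("provably safe" if ERROR not in states else "unsafe")
```

The enumeration visits at most 101 states and terminates, so "safe" here is a genuine proof for this specific device, not a heuristic. The undecidability result only bites when you demand one procedure that works for every possible program.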

50% agree

What is the probability that the U.S. AI industry (including OpenAI, Anthropic, Microsoft, Google, and others) is in a financial bubble — as determined by multiple reliable sources such as The Wall Street Journal, the Financial Times, or The Economist — that will pop before January 1, 2031?

 

My leading view is that there will be some sort of bubble pop, but with people still using genAI tools to some degree afterwards (much as people kept using the internet after the dot-com bubble burst).

There's still major uncertainty on my part, because I don't know much about financial markets and am still highly uncertain about the level at which AI progress fully stalls.

I think if someone is running a blog, it should be socially acceptable to ban people from commenting for almost any reason, including just finding someone annoying. According to the definition used in this article, this counts as "suppression of speech". Maybe it is in the literal sense, but I don't think smuggling in the bad feelings associated with government censorship is fair. 

Or say you run a fish-and-chips shop, and it turns out the person you hired to work the front is an open racist who drives customers away by telling them how much he despises Albanian people. Are you meant to sacrifice your own money and livelihood for the sake of "protecting the man's speech"?

People have a right to curate their spaces for their actual needs. The questions become thornier in cases like college campuses, because academic debate and discussion are part of the needs of such an institution. Organisations have to weigh the pros and cons of what they allow people to say on their platforms.

I don't see anything in the OP about asking for disproportionate representation of minorities. They seem to be advocating for proportionate representation, and noticing that EA fails to live up to this expectation. 

I also don't think that EA truly is only a "sacrifice". For one thing, plenty of EA jobs pay quite well. EA is also an opportunity to do good. EA also has a lot of influence, and directs substantial amounts of money. It's totally reasonable to be concerned that the people making these decisions are not representative of the people that they affect, and may lack useful insight as a result. 
