All of Squark's Comments + Replies

Nice review! Two comments so far:

  • Re Critch's paper, the result is actually very intuitive once you understand the underlying mechanism. Critch considers a situation of, so to speak, Aumannian disagreement. That is, two agents hold different beliefs, despite being aware of each other's beliefs, because some assumption of Aumann's theorem is false: e.g. each agent considers emself smarter than the other. For example, imagine that Alice believes the Alpha Centauri system has more than 10 planets (call it "proposition P"), Bob believes it has less
... (read more)

I'm not sure what point you could see in continuing this conversation either way, since you clearly aren't armed with any claims which haven't already been repeated and answered over and over in the most basic of arguments over socialism and Marxist philosophy...

Indeed I no longer see any point, given that you have now reduced yourself to insults. Adieu.

-3
kbog
7y
It's not an insult if it's true. You clearly have no background knowledge or expertise in any of this, but you decided to argue with me, despite me repeatedly telling you that there are better places for you to learn. Don't get offended when you get called out on it.

You still haven't provided a reference for "materialist epistemology".

I would say this is a good essay: https://www.marxists.org/archive/fromm/works/1961/man/ch02.htm

So, "historical materialism" is some collection of vague philosophical ideas by Marx. Previously, you replied to my claim that "to the extent they [utopian socialism and Marxism] are based on any evidence at all, this evidence is highly subjective interpretation of history" by saying that "Marxism was derived from materialist epistemology". This ... (read more)

0
kbog
7y
I'm not sure how to respond to a statement this dismissive, but for what it's worth, effective altruism is based on 'vague philosophical ideas', as are neoliberalism and all sorts of other ideologies, and if you want to be rational about the matter then you might want to start by taking ideas in philosophy seriously.

I don't understand what you are complaining about. Suppose I asked "what objective evidence is there that Givewell recommends good charities?" And you replied, "well, they recommend the ones that are best rated by their analysis method." And I said, "This is extremely misleading to say that charities' ratings are derived from something when that something is itself an invention of Givewell!" Clearly, such complaints are silly. Except I didn't say that the mere existence of historical materialism was evidence for Marxism. I said that analysis conducted through the lens of historical materialism provided evidence for Marx's theories.

I'm not sure what point you could see in continuing this conversation either way, since you clearly aren't armed with any claims which haven't already been repeated and answered over and over in the most basic of arguments over socialism and Marxist philosophy, so you can't possibly be trying to convince me, and because I've already stated that I'm far from the most helpful or informed authority on socialism and Marxist philosophy, so you can't possibly be trying to learn or have your arguments answered.

Sure, that was a snide remark.

I didn't say we should.

There is strong evidence for it. It's just not quantitative.

No, the 'catastrophes' of communist revolutions were due to totalitarian governments and food shortages. We might think that placing the means of production in public control would not result in a totalitarian government nor would it result in a food shortage, primarily because both of those things are generally implausible in modern contexts and secondarily because the only evidence you have provided for

What is the right way to approach things?

By combining insights from sociology, history, economics, and other domains. For instance, materialist epistemology is a method of analysis that draws upon sociology and history to understand economic developments.

You still haven't provided a reference for "materialist epistemology".

Anyone can claim to "combine insights" from anything. In fact, most political ideologies claim such insights, nevertheless reaching different, sometimes diametrically opposite, conclusions.

Sure, but not ever

... (read more)
0
kbog
7y
I would say this is a good essay: https://www.marxists.org/archive/fromm/works/1961/man/ch02.htm

You might expect me to provide reasons, or arguments, or historical examples, or other kinds of objective evidence. Those are very valuable, and if you're looking for them I would recommend you consult the relevant literature and communities which specialize in providing them. What you certainly shouldn't expect is that everything be quantitative, or that everything be condensed into a meta-analysis that you can look at really quickly to save you the trouble of engaging with complicated and complex social and political issues. Sociopolitical systems are too complicated for that, which is why people who study political science and international relations do not condense everything into quantitative evidence. Quantitative evidence can certainly enter into a broader debate, and socialists and marxists cite all kinds of quantitative evidence in various contexts, but the discussion we're having is too vague and ambiguous for any particular statistic to be appropriate to bring up.

A singleminded emphasis on statistics is absolutely not what effective altruism is about. There are no meta-analyses citing data about the frequency of above-human-intelligence machines being badly aligned with values; there are no studies which quantify the sentience of cows and chickens; there are no regression tables showing whether the value of creating an active social movement is worth the expense. And yet we concern ourselves with those things anyway.

If you intend to go by quantitative data then I would suggest avoiding cases with a <10 sample size and I would also suggest correcting for significant confounding variables such as "dictatorship".

Not entirely - the USSR's economy was complicated and changed significantly throughout the decades. The more general point of course is that the USSR did not succeed in abolishing political class.

We might, but as I said above, many of the people

...asking for an empirical meta study of complex social ideologies is not the right way to approach things.

What is the right way to approach things? In order to claim that certain policies will have certain consequences, you need some kind of model. In order to know that a model is useful, you need to test it against empirical data. The broader, more unusual and more complex the policy changes you are proposing, the more stringent the standard of evidence you need to meet.

I have seen several empirical analyses by economists showing positive economic and welfare data

... (read more)
-1
kbog
7y
By combining insights from sociology, history, economics, and other domains. For instance, materialist epistemology is a method of analysis that draws upon sociology and history to understand economic developments.

Sure, but not everything that counts as empirical data can be fit into a regression table and subjected to meta-analysis.

And mine lived in Romania, but I'm not sure that this is the most reliable form of empirical data.

Yes. They also killed a quarter of their population. So whether or not their economy succeeded seems to be more strongly governed by other factors.

Yes, well class in the Marxist definition is about the distinction between capital owners and laborers, which is a bit different from how it's used in other contexts. We might think that inequality of wealth is bad as it allocates goods to those who can afford them rather than those who need them; we might think that capitalist markets lead to tragedies of the commons which exacerbate resource shortages and existential risks; we might think that unequal distribution of power in society corrupts politics.

This is not really a good comparison, given many cases of success in socialist and communist economies (such as Cuba, which roundly beats other Latin American countries in human development standards) and many failures in capitalist economies (such as the widespread economic disaster which followed the end of the Soviet system). But again, I am not an expert here, so if you want to learn more then I'd suggest looking elsewhere.

I'm not sure what you're trying to say. My justification for saying "public ownership of the means of production" was that many of the people who give serious thought and attention to the idea of public ownership of the means of production are in favor of it. Some of those people are Marxists.

That sounds great too.

All economic systems make certain assumptions about the way wealth and society are organized, different perspectives make different assumptions and operate on different levels of analysis, so e.g. Marxists aren't concerned with computing DSGE.

This is a generic statement that conveys little information about Marxism in particular.

I don't know what that would even look like. Can you recommend me a good survey of studies (preferably meta analyses) supporting libertarian ideas? There is no such thing.

I never claimed that implementing libertarian ideas i... (read more)

1
kbog
7y
The relevant information about Marxism is that it views the world on different terms than through the mathematical market framework of neoclassical economics. Like I said, since I'm not an expert, I don't have that much to say and you're better off looking elsewhere if you want answers to questions like this.

I did not mean that libertarianism had anything to do with EA. Just that asking for an empirical meta study of complex social ideologies is not the right way to approach things.

I don't claim that you should trust intellectual elites, but just that you should see what they have to say, evaluate their arguments, etc.

I have seen several empirical analyses by economists showing positive economic and welfare data from Soviet countries. It's a bit contentious.

Many types of socialism and communism have not been implemented. For instance, Marxism advocates a classless and moneyless society. The USSR was not classless and was not moneyless.

"More harmful/immoral than socialism", or something like that.

I don't see how any of this takes away from the point it started from, namely that capitalism as an economic system has its own record of brutality as well as communism.

I did not mean it to be. I said "public ownership of the means of production", and Marxism is just one of several frameworks for doing this. More importantly, I did not suggest that the EA community embrace it. I suggested that people look into it, see if it was desirable, etc. Doing so requires serious engagement with the relevant literature and discussing it with people who can answer your questions better. If I was trying to argue for socialism or communism, of course I would be speaking much differently and with much more extensive sources and evidence.

Mainstream economics doesn't seek to answer the same questions that Marxian economics does...

I'm not so sure, can you spell it out in more detail? Maybe you're saying that Marxian economics is mostly prescriptivist while mainstream economics is mostly descriptivist. But then, we have welfare economics and mechanism design, which are more or less mainstream and have a prescriptivist bent.

...much of the 19th century socialist work was very mainstream and derived from the ordinary economic thought of the time.

I suspect it depends on the socialist work. ... (read more)

0
kbog
7y
All economic systems make certain assumptions about the way wealth and society are organized, different perspectives make different assumptions and operate on different levels of analysis, so e.g. Marxists aren't concerned with computing DSGE.

Yup.

I don't know what that would even look like. Can you recommend me a good survey of studies (preferably meta analyses) supporting libertarian ideas? There is no such thing.

Materialism has many different meanings and you are referring to something completely different. I am referring not to materialism as a theory of mind but to materialist epistemology, a method of social analysis.

I think most of the elites who supported Stalin are now dead. In any case this seems like a pretty strange thing to worry about, like saying we should disbelieve in evolution because of social darwinists and eugenics.

Not really true, and we might think that various directions in Marxism and socialism can be implemented without following the same policies that they did.

Harmful, immoral, etc.

Instead of arguing with me you would probably learn more by going to serious readings such as Marx, postwar socialist theory or to communities which are specifically oriented to discuss this sort of thing, such as reddit.com/r/asksocialscience.

Well the original strands of thought mostly came from early 19th century utopian socialists and were updated by Marx and Engels. There has been a lot of post-Marxian analysis as well.

AFAICT, the strands of thought you are talking about are poorly correlated with reality. Marxist thought is largely outside of mainstream economics. They use neither studies nor mathematical models (at least they didn't in the 19th century). To the extent they are based on any evidence at all, this evidence is highly subjective interpretation of history. Finally, Marxist re... (read more)

0
kbog
7y
Mainstream economics doesn't seek to answer the same questions that Marxian economics does, while much of the 19th century socialist work was very mainstream and derived from the ordinary economic thought of the time. Needless to say, modern heterodox economists use studies quite frequently.

No, utopian socialism was supported by standard economic and utilitarian thought, whereas Marxism was derived from materialist epistemology. Later directions in socialist and communist analysis have taken various different directions.

Well this doesn't look very likely, because most of the intellectual elites who seriously support and engage with Marxism have engaged with the relevant literature and find it compelling in its own right for various reasons. Conversely, we might think that people who have never seriously studied socialist or Marxist thought are likely to dismiss it for purely political reasons since they have not analyzed its objective intellectual merit. We might think that intellectual elites who engage with socialist thought or with Marxist thought differentiate between the various doctrines and directions within this ideological space and accept some ideas while rejecting others. Or we might think that the behavior of a state doesn't make all of its policies wrong: for instance, we might dispute the idea that capitalist states' rampant imperialism demonstrates that capitalism is always wrong.

Of the examples you give here, I think #1 is the best by far.

Regarding #2, I think that world government is a great idea (assuming it's a liberal, democratic world government!) but it's highly unobvious how to get there. In particular, I am very skeptical about giving more power to the UN. The UN is a fundamentally undemocratic institution, both because each country has 1 vote regardless of size and because (more importantly) many countries are represented by undemocratic governments. I am not at all convinced removing the security council veto power would h... (read more)

-2
kbog
7y
Yes I think education is a big deal. Particularly early education. If you can raise people to think differently and more ethically then you are changing society at the root of its problems. Schools are considered a powerful agent of political socialization, so the way that they educate and reform children affects the values and behaviors they will have later in life. Our schools as of now preach obedience alongside individualism; I'd suggest that they should be doing the opposite of both.

Well the original strands of thought mostly came from early 19th century utopian socialists and were updated by Marx and Engels. There has been a lot of post-Marxian analysis as well. Many of their ideas are considered insightful and useful in social science.

So you claim that you have values related to animals that most people don't have and you want your eccentric values to be overrepresented in the AI?

I'm asking unironically (personally I also care about wild animal suffering but I also suspect that most people would care about it if they spent sufficient time thinking about it and looking at the evidence).

Who said we will preserve wild nature in its present form? We will re-engineer it to eliminate animal suffering while enhancing positive animal experience and wild nature's aesthetic appeal.

6
[anonymous]
8y
The number of people who want to re-engineer nature is currently much, much smaller than the number of dedicated conservationists. It is a fringe view that basically only effective altruists support, and not even all EAs. I see no reason to believe that humans will ever modify wild animals to be more happy. Humans might eventually destroy most habitats, however.

I completely fail to understand how your WPW example addresses my point. It is absolutely irrelevant what most humans are comfortable in saying. Truth is not a democracy, and in this case the claim is not even wrong (it is ill defined since there is no such thing as "bad" without specifying the agent from whose point of view it is bad). It is true that some preferences are nearly universal for humans but other preferences are less so.

How is the fluidity of human values a point in your favor? If anything it only makes them more subjective.

Yes, medical science has no normative force. The fact that smoking leads to cancer is a claim about a causal relationship between phenomena in the physical world. The fact that cancer causes suffering and death is also such a relationship. The idea that suffering and death are evil is already a subjective preference (subjective not in the sense that it is undefined but in the sense that different people might have different preferences; almost all people prefer avoiding suffering and death but other preferences might have more variance).

I completely don't understand what you mean by "killing people is incorrect." I understand that "2+2=5" is "incorrect" in the sense that there is a formally verifiable proof of "not 2+2=5" from the axioms of Peano arithmetic. I understand that general relativity is "correct" in the sense that we can use it to predict results of experiments and verify our predictions (on a more fundamental level, it is "correct" in the sense that it is the simplest model that produces all previous observations; the... (read more)

0
MichaelDello
8y
As we discuss in our post, imagine the worst possible world. Most humans are comfortable in saying that this would be very bad, and any steps towards it would be bad, and if you disagree and think that steps towards the WPW are good, then you're wrong. In the same vein, if you hold a 'version of ethics' that claims that moving towards the WPW is good, you're wrong. To address your second point, humans are not AGIs, our values are fluid.

"I'm not entirely sure what you mean here. We don't argue that it's wrong to interfere with other cultures."

I was refuting what appeared to me as a strawman of ethical subjectivism.

"If someone claims they kill other humans because it's their moral code and it's the most good thing to do, that doesn't matter. We can rightfully say that they are wrong."

What is "wrong"? The only meaningful thing we can say is "we prefer people not the die therefore we will try to stop this person." We can find other people who share thi... (read more)

0
MichaelDello
8y
By 'wrong' I don't mean the opposite of morally just, I mean the opposite of correct. That is to say, we could rightfully say they are incorrect. I fundamentally disagree with your final point. I used to be a meat-eater, and did not care one bit about the welfare of animals. To use your wording, I honestly didn't mind killing animals. Through careful argument over a year from a friend of mine, I was finally convinced that was a morally incorrect point of view. To say that it would be impossible to convince a rational murderer who doesn't mind killing people that murder is wrong is ludicrous.

Thanks for replying!

"There are no moral qualities over and above the ones we can measure, either a) in the consequences of an act, or b) in the behavioural profiles or personality traits in people that reliably lead to certain acts. Both these things are physical (or, at least, material in the latter case), and therefore measurable."

The parameters you measure are physical properties to which you assign moral significance. The parameters themselves are science, the assignment of moral significance is "not science" in the sense that it de... (read more)

0
RobertFarq
8y
Then by extension you have to say that medical science has no normative force. If it's just subjective, then when medicine says you ought not to smoke if you want to avoid lung cancer, it is completely unjustified in saying you ought not to.

This essay comes across as confused about the is-ought problem. Science in the classical sense studies facts about physical reality, not moral qualities. Once you have decided that something is valuable, you can use science to maximize it (e.g. using medicine to maximize health). Similarly, if you have decided that hedonistic utilitarianism is correct, you can use science to find the best strategy for maximizing hedonistic utility.

I am convinced that ethics is subjective, not in the sense that any claim about ethics is as good as any other claim, but in the sens... (read more)

0
MichaelDello
8y
"The step from ethical subjectivism to the claim it's wrong to interfere with other cultures seems to me completely misguided, even backwards." I'm not entirely sure what you mean here. We don't argue that it's wrong to interfere with other cultures. It is our view that there is a general attitude that each person can have whatever moral code they like and be justified, and we believe that attitude is wrong. If someone claims they kill other humans because it's their moral code and it's the most good thing to do, that doesn't matter. We can rightfully say that they are wrong. So why should there be some cut off point somewhere that we suddenly can't say someone else's moral code is wrong? To quote Sam Harris, we shouldn't be afraid to criticise bad ideas. "I am convinced that ethics is subjective, not in the sense that any claim about ethics is as good as any other claim, but in the sense that different people and different cultures can possess different ethics (although perhaps the differences are not very substantial) and there is no objective measure by which one is better than the other. " I agree with the first part in that different people and cultures can possess different ethics, but I reject your notion that there is no objective measure by which one is better than the other. If a culture's ethical code was to brutally maim innocent humans, we don't say 'We disagree with that but it's happening in another culture so it's ok, who are we to say that our version of morality is better?' We would just say that they are wrong. "If according to my ethics your culture is doing something bad then it is completely rational for me to stop your culture from doing it" when you say this it sounds like you are trending to our view anyway.
1
RobertFarq
8y
Thanks for your remarks. The is-ought distinction wasn't discussed explicitly to help include those unfamiliar with Hume. However, the opening section of the essay attempts to establish morality as just another domain of the physical world. There are no moral qualities over and above the ones we can measure, either a) in the consequences of an act, or b) in the behavioural profiles or personality traits in people that reliably lead to certain acts. Both these things are physical (or, at least, material in the latter case), and therefore measurable. Science studies physical reality, and the ambit of morality is a subset of physical reality. Therefore, science studies morality too. The essay is silent on 'hedonistic' utilitarianism (we do not endorse it, either), as again, a) we think these aren't useful terms with which to structure the debate with as wide an audience as possible, and b) because they are concerns outside the present scope. This essay focuses on establishing the moral domain as just a subset of the physical, and therefore, that there will be moral facts to be obtained scientifically - even if we don't know how to obtain them just yet. How to perfectly balance competing interests, for example, is for a later discussion. First, convincing people that you actually can do that with any semblance of objectivity is required. The baby needs to walk before it can run. We discuss cross-cultural claims in the section on everyday empiricism.

I don't think one should agonize over offsets. I think offsets are not a satisfactory solution to the problem of balancing resource spending on charitable vs. personal ends, since they don't reflect the correct considerations. If you admit X leads to mental breakdowns then you should admit X is ruled out by purely consequentialist reasoning, without the need to bring in extra rules such as offsetting.

In the preferences page there is a box for "EA Profile Link." How does it work? That is, how do other users get from my username to the profile? I linked my LessWrong profile but it doesn't seem to have any effect...

1
Tom_Ash
8y
Hey Squark, there's a guide to using it (and other features of the forum) at http://bit.ly/1OHRd1X . As it explains, the effect is that you get a little link to it next to your username when you comment or post. When you tried entering your LessWrong profile you should have got the warning below about other links not working. If that didn't happen, can you tell me what browser and OS (Mac, Linux, etc.) you're using? Thanks!
1
number42
8y
As the help text below it says, that's specifically for EA Profiles (which are the profiles at that link). It'll only accept a link to one of those; if you don't already have one, you should create one!

Your reply seems to be based on the premise that EA is some sort of a deontological duty to donate 10% of your income towards buying bednets. My interpretation of EA is very different. My perspective is that EA is about investing significant effort into optimizing the positive impact of your life on the world at large, roughly in the same sense that a startup founder invests significant effort into optimizing the future worth of their company (at least if they are a founder that stands a chance).

The deviation from imaginary “perfect altruism” is either du... (read more)

3
ScottA
8y
I gave the example of giving 10% to bed nets because that's an especially clear example of a division between charitable and non-charitable money - eg I have pledged to give 10% to charity, but the other 90% of my money goes to expenses and luxuries and there's no cost to EA to giving that to offsets instead. I know many other EAs work this way too. If you believe this isn't enough, I think the best way to take it up with me is to suggest I raise it above 10%, say 20% or even 90%, rather than to deny that there's such a thing as charitable/non-charitable division at all. That way lies madness and mental breakdowns as you agonize over every purchase taking away money that you "should have" given to charity. But if you're not working off a model where you have to agonize over everything, I'm not sure why you should agonize over offsets.

I think downvoting as disagreement is terrible.

First, promoting content based on majority agreement is a great way to build an echo chamber. We should promote content which is high-quality (well written, well argued, thought-provoking, contains novel insights, provides a balanced perspective, etc.). Hearing repetitions of what you already believe just amplifies your confirmation bias. I want to learn something new.

Second, downvoting creates a strong negative incentive against posting. Silencing people you disagree with is also a great way to build an e... (read more)

Some simple observations.

To perform such a QALY estimate you need

  1. A credible model for predicting the consequences of possible responses
  2. An estimate of how likely your advocacy is to affect policy

1 is something you need even to know what the best response is (and I'm currently not sure whether you have it).

2 sounds like something that should have been researched by many people by now, but I'm far from an expert, so I have no specific suggestions.
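
To make the shape of such an estimate concrete, here is a minimal sketch (my own illustration, not something from the original exchange) of the expected-value calculation that points 1 and 2 feed into; all numbers are purely hypothetical placeholders:

```python
# Minimal expected-value sketch for a QALY estimate of advocacy.
# Every number here is a hypothetical placeholder, not a real estimate.

p_policy_shift = 1e-4     # point 2: probability the advocacy shifts policy (hypothetical)
qalys_if_shift = 50_000   # point 1: QALY gain the consequence model predicts if policy shifts (hypothetical)

expected_qalys = p_policy_shift * qalys_if_shift
print(f"Expected QALYs from the advocacy: {expected_qalys:.1f}")
```

The only point of writing it out is that both inputs are needed: a consequence model to supply the QALY figure, and an influence estimate to supply the probability.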

0
Gleb_T
8y
Good ideas! 1) Hard to predict the consequences, of course - so many possible ones. Lots of noise and variability. Probably best to focus on the most likely ones, namely a smallish shift toward a pro-peace stance and a small increase in the rationality of political decision-making by readers. 2) I think this is the tougher question. How to estimate this is really hard! That's why I was suggesting thinking in terms of intuitive gut reactions might be helpful.

I think that most people here will tell you that we already know specific examples of such wrongdoing e.g. factory farming.

2
Linch
8y
The author addresses this: "The reader may be an activist, already convinced that some specific moral catastrophe is taking place, and doing everything he can to put an end to it. However, so as not to obscure my main point about unidentified catastrophes, I ask the reader to set known catastrophes aside; let him imagine that all of his favorite political causes triumph, and society becomes organized exactly as he thinks best. I hope to convince him that even in such a scenario, a moral catastrophe would still probably be taking place. My reason is this: there are so many different ways in which a society—whether our actual one or the one of the reader’s dreams—could be catastrophically wrong that it is almost impossible to get everything right."

I have some evidence that there are many software engineers who would gladly volunteer to code for EA causes (and some access to such engineers). What volunteering opportunities like that are available? EA organizations that need coders? Open source projects that can be classified as EA causes? Anything else?

What do you mean by "accept what it really means to be Human"? To what end is it "more productive"?

Not any "human" thing is a good thing. Being susceptible to disease, old age and death is part of "being human" but it is a part I would rather part with. On the other hand, being Human also means having the ability to use reason to find a better strategy than the one suggested by the initial emotional response.

A rational thinker should factor the limitations of their own brain into their decision making. Also, sometim... (read more)

This strikes me as a strange choice of words since e.g. I think it is good to occasionally experience sadness. But arguing over words is not very fruitful.

I'm not sure this interpretation is consistent with "filling the universe with tiny beings whose minds are specifically geared toward feeling as much pure happiness as possible."

First, "pure happiness" sounds like a raw pleasure signal rather than "things conscious beings experience that are good" but ok, maybe it's just about wording.

Second, "specifically geared"... (read more)

Upvoted, because although I disagree with much of this on object level, I think the post is totally legit and I think we should encourage original thinking.

Perhaps we need to find a time and place to start a serious discussion of ethics. I think hedonistic utilitarianism is wrong already on the level of meta-ethics. It seems to assume the existence of universal morals which from my point of view is a non-sequitur. Basically all discussions of universal morals are games with meaningless words, maps of non-existing territories. The only sensible meta-ethics ... (read more)

0
MichaelDickens
9y
Happiness and suffering in the utilitarian sense are both extraordinarily complicated concepts and encompass a lot of different experiences. They're shorthand for "things conscious beings experience that are good/bad." Meta-ethically I don't disagree with you that much.

I agree with Tom. I think the core values of EA have to include:

  1. Always keep looking for new creative ways to do better.
  2. Maintain an open, honest and respectful discussion with your peers.

In particular exploring new interventions and causes should always be in the EA spotlight. When you think something is an effective charity but most EAs wouldn't agree with you, in my book it's a reason to state your case loud and clear rather than self-censor.

"Hold until future orders" is one approach but it might turn out to be much more difficult than actually creating an AI with correct values. This is because the formal specification of metaethics (that is a mathematical procedure that takes humans as input and produces a utility function as output) should be of much lower complexity than specifying what it means to "protect from other AI but do nothing else."

I completely agree that many conceivable post-human futures have low value. See also the "unhumanity" scenario in my analysis. I think that the term "existential risk" might be somewhat misleading, since what we're really aiming at is "existence of beings and experiences that we value" rather than just existence of "something." That is, I view your reasoning not as an argument for caring less about existential risk but as an argument for working towards a valuable far future.

Regarding MIRI, I think their position is completely... (read more)

1
Lila
9y
I worry that the values that people want to put into a singleton are badly wrong, e.g. creating hedonium. I want a singleton that will protect us from other AI. Other than that, I'd be wary of trying to maximize a value right now. At most I'd tell the AI "hold until future orders".

Interesting. One way to solve the replaceability problem is to force the government to announce a preliminary budget before the ear-marking "bids" and pledge to treat the bids as differentials with respect to the preliminary budget.

The question is what is the mechanism of value spreading.

If the mechanism is having rational discussions then it is not necessarily urgent to have these discussions right now. Once we create a future in which there is no death and no economic pressures to self-modify in ways that are value destructive, we'll have plenty of time for rational discussions. Things like "experience machine" also fit into this framework, as long as the experiences are in some sense non-destructive (this rules out experiences that create addiction, for example).

If the ... (read more)

3
MichaelDickens
9y
I mostly agree with you. I am less confident than you are that a solution to the FAI problem will be on the meta-level. I think you're probably right, but I have enough uncertainty about it that I much prefer someone who's doing AI safety research to share my values so I can be more confident that they will do research that's useful for all sentient beings and not just humans.

I think the main comparative advantage (= irreplaceability) of the typical EA comes not from superior technical skill but from motivation to improve the world (rather than make money, advance one's career or feel happy). This means researching questions which are ethically important but not grant-sexy, donating to charities which are high-impact but don't yield a lot of warm-fuzzies, promoting policy which goes against tribal canon etc.

1
Evan_Gaensbauer
9y
Sometimes it's not about promoting policy which goes against tribal canon. It can also be about promoting policy so technical and obtuse that virtually everyone else's eyes glaze over when thinking about it, so they never pay it any mind.

In the Arab Spring many of the revolutionary groups were radical Islamists rather than champions of liberal democracy. Also, I didn't say anything about revolution: in some cases a gradual transition is more likely to work.

Infiltrating an organization you hate while preserving sanity and your true values is a task few people are capable of. I'm quite certain I wouldn't make it.

I think that we need serious research + talking to people from the relevant countries to devise realistic strategies.

0
Germantia
9y
Hm, really? I don't think it'd be a problem for me. Could look into the research on counterintelligence and double agents. Of course, I'm just spitballing.

Do you think Soviet attempts to foster communism in the US during the cold war were a stabilising influence?

Well, they might have been stabilizing if they worked :) Although I think war between communist countries is much more likely than war between liberal democracies.

Countries generally and rightfully take affront at foreigners trying to meddle with their internal affairs.

I mostly agree with the descriptive claim but not with the normative claim. Why "rightfully"?

For a more recent example, look at the aftermath of the western coup in

... (read more)

I'm not sure what the EA movement can do that will have significant effect in the short term. In the long term we should be looking into establishing liberal democracy in countries which either possess nuclear weapons or have the capacity to develop them in the near future (Russia, China, North Korea, Pakistan, Iran...). For example we can support the pro-liberalisation groups which already exist in these countries.

1
Germantia
9y
I'm skeptical of this approach given how poorly the Arab Spring ended up working out. I'm skeptical of whether revolutions are a wise idea in general. I think it may be wiser to try to nudge their existing governments towards being more liberal. This approach could include, for example, encouraging EAs in China to join the party there and try to rise through the ranks.
0
Larks
9y
I think this is the opposite of true. Do you think Soviet attempts to foster communism in the US during the cold war were a stabilising influence? Countries generally and rightfully take affront at foreigners trying to meddle with their internal affairs. For a more recent example, look at the aftermath of the western coup in Ukraine.

Hi Tom, thx for commenting!

For me, the meta-point that we should focus on steering into better scenarios was a more important goal of the post than explaining the actual scenarios. The latter serve more as examples / food for thought.

Regarding objections to Utopian scenarios, I can try to address them if you state the objections you have in mind. :)

Regarding dictatorships, I indeed focused on situations that are long-term stable since I'm discussing long-term scenarios. A global dictatorship with existing technology might be possible but I find it hard to believe it can survive for more than a couple of thousand years.

0
tomstocker
9y
Good points. Thanks. I was actually looking for objections rather than having them. Will illustrate my personal responses if I get time.

If your only requirement is for all sentient beings to be happy, you should be satisfied with a universe completely devoid of sentient beings. However, I suspect you wouldn't be (?)

Regarding definition of good, it's pointless to argue about definitions. We should only make sure both of us know what each word we use means. So, let's define "koodness(X)" to mean "the extent to which things X wants to happen actually happen" and "gudness" to mean "the extent to which what is happening to all beings is what they want to happe... (read more)

My distribution isn't tight, I'm just saying there is a significant probability of large serial depth. You are right that much of the benefit of current work is "instrumental": interesting results will convince other people to join the effort.

Hi Uri, thanks for the thoughtful reply!

It is not necessarily bad for future sentients to be different. However, it is bad for them to be devoid of properties that make humans morally valuable (love, friendship, compassion, humor, curiosity, appreciation of beauty...). The only definition of "good" that makes sense to me is "things I want to happen" and I definitely don't want a universe empty of love. A random UFAI is likely to have none of the above properties.

0
UriKatz
9y
For the sake of argument I will start with your definition of good and add that what I want to happen is for all sentient beings to be free from suffering, or for all sentient beings to be happy (personally I don't see a distinction between these two propositions, but that is a topic for another discussion). Being general in this way allows me to let go of my attachment to specific human qualities I think are valuable. Considering how different most people's values are from my own, and how different my needs are from Julie's (my canine companion), I think our rationality and imagination are too limited for us to know what will be good for more evolved beings in the far future. A slightly better, though still far from complete, definition of "good" (in my opinion) would run along the line of: "what is happening is what those beings it is happening to want to happen". A future world may be one that is completely devoid of all human value and still be better (morally and in many other ways) than the current world. At least better for the beings living in it. In this way even happiness, or lack of suffering, can be tossed aside as mere human endeavors. John Stuart Mill famously wrote: "It is better to be a human being dissatisfied than a pig satisfied; better to be Socrates dissatisfied than a fool satisfied. And if the fool, or the pig, is of a different opinion, it is only because they only know their own side of the question." And compared with the Super-Droids of tomorrow, we are the pigs...

I think the distance between our current understanding of AI safety and the required one is of a similar order of magnitude to the distance between the invention of the Dirac sea in 1930 and the discovery of asymptotic freedom in non-Abelian gauge theory in 1973. This is 43 years of well-funded research by the top minds of mankind. And that is without taking into account the engineering part of the project.

If the remaining time frame for solving FAI is 25 years then:

  1. We're probably screwed anyway
  2. We need to invest all possible effort into FAI since the tail of the probability distr
... (read more)
0
Owen Cotton-Barratt
9y
Could you say something about why your subjective probability distribution for the difficulty is so tight? I think it is very hard to predict in advance how difficult these problems are; witness the distribution of solution times for Hilbert's problems. Even if you're right, I think that says that we should try to quickly get to the point with a serious large programme. It's not clear that the route to that means focusing on direct work at the margin now. It will involve some, but mostly because of the instrumental benefits in helping increase the growth of people working on it, and because it's hard to scale up later overnight.

Thx for the feedback and the references!

I think Ord's "coarse setting" is very close to my type II. The activities you mentioned belong to type II inasmuch as they consider specific scenarios or to type I inasmuch as they raise general awareness of the subject.

Regarding relative value vs. time: I absolutely agree! This is part of the point I was trying to make.

Btw, I was somewhat surprised by Ord's assessment of the value of current type III interventions in AI. I have a very different view. In particular, the 25-35 years time window he mentions... (read more)

0
Owen Cotton-Barratt
9y
I agree that AI safety has some similarities to those fields, but:

  • I guess you may be overestimating the effect of serial depth in those fields. While there is quite a lot of material that builds on other material, there are a lot of different directions that get pushed on simultaneously, too.
  • AI safety as a field is currently tiny. It could absorb many more (extremely skilled) researchers before they started seriously treading on each others' toes by researching the same stuff at the same time.

I think some type III interventions are valuable now, but mostly for their instrumental effects in helping type I and type II, or for helping with scenarios where AI comes surprisingly soon.

In a way, the two are interchangeable: if we define "steps" as changes of given magnitude then faster change means more densely spaced steps.

There is another effect that has to be taken into account. Namely, some progress in understanding how to adapt to automation might be happening without the actual adoption of automation, that is, progress that occurs because of theoretical deliberation and broader publicity for the relevant insights. This sort of progress creates an incentive to move all adoption later in time.

Your toy model makes sense. However, if instead of considering the future automation technology X we consider some past (already adopted) automation technology Y, the conclusion would be the opposite. Therefore, to complete your argument you need to show that in some sense the next significant step in automation after self-driving cars is closer in time than the previous significant step in automation.

0
Owen Cotton-Barratt
9y
I see what you're thinking. We break the symmetry not by thinking that the next step is going to be closer in time, but that the next step(s) are going to be more important to get right than either self-driving cars or earlier automation.

Thx for replying!

I'm still not sure I follow your argument in full. Consider two scenarios:

  1. Self-driving cars are adopted soon. Progress in automation continues. Automation is eventually adopted in other areas as well.

  2. Self-driving cars are adopted later. Progress in automation still continues, in particular through advances in other field such as computer game AI. Eventually, self-driving cars and automation in other areas are adopted.

In each of these scenarios, we can consider the time at which a given type/level of automation was adopted. You claim... (read more)

1
Owen Cotton-Barratt
9y
I agree that it's possible that your scenario 2 just shifts everything back uniformly in time, but think in expectation the spacing will be denser. Toy model: looking at the spacing between self-driving cars and some future automation technology X. A major driver of the time X is adopted is technological sophistication. Whether or not we adopt self-driving cars now won't have too much effect on the point when we reach technological sophistication level for technology X. If we had the same social position either way, this would mean that we would adopt X at roughly the same time regardless of when we adopt self-driving cars. Of course social views might be different depending on what happened with self-driving cars. If we want to maximise the time between self-driving cars and X, we'd be best adopting the cars as soon as possible (given technological constraints), and pushing back adoption of X as long as possible.

Hi Owen and Sebastian,

The assumption behind your argument seems to be that slowing (resp. accelerating) progress in automation will result in faster (resp. slower) changes in the future rather than e.g. uniform time translation. Can you explain the reasoning behind this assumption in more detail?

0
Owen Cotton-Barratt
9y
That isn't the assumption that's meant to be driving the argument. I think there are two main factors:

(i) Pushing self-driving cars relative to other automation is likely to increase societal wisdom regarding automation faster. They are very visible and have macro-level effects, and will require us to develop new frameworks for dealing with them. In contrast, better AI in computer games has to a first approximation none of these effects, but could also feed into long-term automation capabilities.

(ii) Pushing for adoption of self-driving cars is useful relative to pushing for improvements in the underlying automation technology, because it will give us longer to deal with these issues for a given automation level (because we can assume that improvements in automation will continue regardless of adoption; although note that adoption may well speed up automation a bit too).

I actually think the assumption you mention is probably true too -- because the rest of the economy is likely to continue to grow it will be cheaper relative to wealth to improve automation later, so it could go faster. But this effect seems rather smaller to me, and as increasing automation isn't the only driver of increasing societal wisdom, I'm much more sceptical about whether it's good to speed automation as a whole.

Hi Nate, nice post!

I think you're describing the difference between instrumental value and terminal value. The market price of something is its instrumental value. A dollar is valuable because of the things you can buy with it, not because of intrinsic worth. On the other hand, human lives, happiness etc. have intrinsic worth. I think that the distinction will persist in almost any imaginable universe although the price ratios can be vastly different.

Hi Julia, thx for replying!

I don't know enough about the vegetarian community but I think that it grew so much recently that it might be considered a young movement, like EA (it is also a related movement, obviously). Political opinions definitely seem to be transmitted from parent to child, at least from my experience. It is true that there are "teenage rebellions" but I think that the opposite is more common. Academic ideas are often very narrow-field and of little interest to the wide public so a different approach is natural.

I'm not planning ... (read more)

Hi Tom,

Thx for starting a discussion on moral philosophy: I find it interesting and important!

It seems to me that you're wrong when you say that assigning special importance to people closer to oneself makes one a non-consequentialist. One can measure actions by their consequences and measure the consequences in ways that are asymmetric with respect to different people.

Personally I believe that ethics is a property of the human brain and as such it

  1. Has high Kolmogorov complexity (complexity of value). In particular it is not just "maximize pleasure -
... (read more)

Well, the problem with optimizing for a specific target audience is the risk of putting off other audiences. I would say something like:

Being born with advantages isn't something to feel guilty about. Being born with advantages is something to be glad about: it gives you that much more power to improve life for everyone.
