I'm not sure what point you could see in continuing this conversation either way, since you clearly aren't armed with any claims which haven't already been repeated and answered over and over in the most basic of arguments over socialism and Marxist philosophy...
Indeed I no longer see any point, given that you have now reduced yourself to insults. Adieu.
You still haven't provided a reference for "materialist epistemology".
I would say this is a good essay: https://www.marxists.org/archive/fromm/works/1961/man/ch02.htm
So, "historical materialism" is some collection of vague philosophical ideas by Marx. Previously, you replied to my claim that "to the extent they [utopian socialism and Marxism] are based on any evidence at all, this evidence is highly subjective interpretation of history" by saying that "Marxism was derived from materialist epistemology". This ...
What is the right way to approach things?
By combining insights from sociology, history, economics, and other domains. For instance, materialist epistemology is a method of analysis that draws upon sociology and history to understand economic developments.
You still haven't provided a reference for "materialist epistemology".
Anyone can claim to "combine insights" from anything. In fact, most political ideologies claim such insights, nevertheless reaching different, sometimes diametrically opposite, conclusions.
...Sure, but not ever
...asking for an empirical meta study of complex social ideologies is not the right way to approach things.
What is the right way to approach things? In order to claim that certain policies will have certain consequences, you need some kind of model. In order to know that a model is useful, you need to test it against empirical data. The broader, more unusual, and more complex the policy changes you are proposing, the more stringent a standard of evidence you need to meet.
...I have seen several empirical analyses by economists showing positive economic and welfare data
All economic systems make certain assumptions about the way wealth and society are organized; different perspectives make different assumptions and operate on different levels of analysis, so e.g. Marxists aren't concerned with computing DSGE models.
This is a generic statement that conveys little information about Marxism in particular.
I don't know what that would even look like. Can you recommend me a good survey of studies (preferably meta analyses) supporting libertarian ideas? There is no such thing.
I never claimed that implementing libertarian ideas i...
Mainstream economics doesn't seek to answer the same questions that Marxian economics does...
I'm not so sure; can you spell it out in more detail? Maybe you're saying that Marxian economics is mostly prescriptivist while mainstream economics is mostly descriptivist. But then we have welfare economics and mechanism design, which are more or less mainstream and have a prescriptivist bent.
...much of the 19th century socialist work was very mainstream and derived from the ordinary economic thought of the time.
I suspect it depends on the socialist work. ...
Well the original strands of thought mostly came from early 19th century utopian socialists and were updated by Marx and Engels. There has been a lot of post-Marxian analysis as well.
AFAICT, the strands of thought you are talking about are poorly correlated with reality. Marxist thought is largely outside of mainstream economics. They use neither studies nor mathematical models (at least they didn't in the 19th century). To the extent they are based on any evidence at all, this evidence is highly subjective interpretation of history. Finally, Marxist re...
Of the examples you give here, I think #1 is the best by far.
Regarding #2, I think that world government is a great idea (assuming it's a liberal, democratic world government!) but it's highly unobvious how to get there. In particular, I am very skeptical about giving more power to the UN. The UN is a fundamentally undemocratic institution, both because each country has 1 vote regardless of size and because (more importantly) many countries are represented by undemocratic governments. I am not at all convinced removing the security council veto power would h...
So you claim that you have values related to animals that most people don't have and you want your eccentric values to be overrepresented in the AI?
I'm asking unironically (personally I also care about wild animal suffering, but I also suspect that most people would care about it if they spent sufficient time thinking about it and looking at the evidence).
Who said we will preserve wild nature in its present form? We will re-engineer it to eliminate animal suffering while enhancing positive animal experience and wild nature's aesthetic appeal.
I completely fail to understand how your WPW example addresses my point. It is absolutely irrelevant what most humans are comfortable saying. Truth is not a democracy, and in this case the claim is not even wrong (it is ill defined, since there is no such thing as "bad" without specifying the agent from whose point of view it is bad). It is true that some preferences are nearly universal for humans, but other preferences are less so.
How is the fluidity of human values a point in your favor? If anything it only makes them more subjective.
Yes, medical science has no normative force. The fact that smoking leads to cancer is a claim about a causal relationship between phenomena in the physical world. The fact that cancer causes suffering and death is also such a relationship. The idea that suffering and death are evil is already a subjective preference (subjective not in the sense that it is undefined, but in the sense that different people might have different preferences; almost all people prefer avoiding suffering and death, but other preferences might have more variance).
I completely don't understand what you mean by "killing people is incorrect." I understand that "2+2=5" is "incorrect" in the sense that there is a formally verifiable proof of "not 2+2=5" from the axioms of Peano arithmetic. I understand that general relativity is "correct" in the sense that we can use it to predict results of experiments and verify our predictions (on a more fundamental level, it is "correct" in the sense that it is the simplest model that produces all previous observations; the...
"I'm not entirely sure what you mean here. We don't argue that it's wrong to interfere with other cultures."
I was refuting what appeared to me as a strawman of ethical subjectivism.
"If someone claims they kill other humans because it's their moral code and it's the most good thing to do, that doesn't matter. We can rightfully say that they are wrong."
What is "wrong"? The only meaningful thing we can say is "we prefer people not to die, therefore we will try to stop this person." We can find other people who share thi...
Thanks for replying!
"There are no moral qualities over and above the ones we can measure, either a) in the consequences of an act, or b) in the behavioural profiles or personality traits in people that reliably lead to certain acts. Both these things are physical (or, at least, material in the latter case), and therefore measurable."
The parameters you measure are physical properties to which you assign moral significance. The parameters themselves are science, the assignment of moral significance is "not science" in the sense that it de...
This essay comes across as confused about the is-ought problem. Science in the classical sense studies facts about physical reality, not moral qualities. Once you have already decided something is valuable, you can use science to maximize it (e.g. using medicine to maximize health). Similarly, if you have already decided that hedonistic utilitarianism is correct, you can use science to find the best strategy for maximizing hedonistic utility.
I am convinced that ethics is subjective, not in the sense that any claim about ethics is as good as any other claim, but in the sens...
I don't think one should agonize over offsets. I think offsets are not a satisfactory solution to the problem of balancing resource spending on charitable vs. personal ends, since they don't reflect the correct considerations. If you admit X leads to mental breakdowns, then you should admit X is ruled out by purely consequentialist reasoning, without the need to bring in extra rules such as offsetting.
In the preferences page there is a box for "EA Profile Link." How does it work? That is, how do other users get from my username to the profile? I linked my LessWrong profile but it doesn't seem to have any effect...
Your reply seems to be based on the premise that EA is some sort of a deontological duty to donate 10% of your income towards buying bednets. My interpretation of EA is very different. My perspective is that EA is about investing significant effort into optimizing the positive impact of your life on the world at large, roughly in the same sense that a startup founder invests significant effort into optimizing the future worth of their company (at least if they are a founder that stands a chance).
The deviation from imaginary “perfect altruisim” is either du...
I think downvoting as disagreement is terrible.
First, promoting content based on majority agreement is a great way to build an echo chamber. We should promote content which is high-quality (well written, well argued, thought-provoking, contains novel insights, provides a balanced perspective, etc.). Hearing repetitions of what you already believe just amplifies your confirmation bias. I want to learn something new.
Second, downvoting creates a strong negative incentive against posting. Silencing people you disagree with is also a great way to build an e...
Some simple observations.
To perform such a QALY estimate you need
1 is something you need in order to even know what the best response is (and I'm currently not sure whether you have it).
2 sounds like something that should have been researched by many people by now, but I'm far from an expert so no specific suggestions.
I think that most people here will tell you that we already know specific examples of such wrongdoing e.g. factory farming.
I have some evidence that there are many software engineers who would gladly volunteer to code for EA causes (and some access to such engineers). What volunteering opportunities like that are available? EA organizations that need coders? Open source projects that can be classified as EA causes? Anything else?
What do you mean by "accept what it really means to be Human"? To what end is it "more productive"?
Not every "human" thing is a good thing. Being susceptible to disease, old age and death is part of "being human," but it is a part I would rather part with. On the other hand, being Human also means having the ability to use reason to find a better strategy than the one suggested by the initial emotional response.
A rational thinker should factor the limitations of their own brain into their decision making. Also, sometim...
This strikes me as a strange choice of words since e.g. I think it is good to occasionally experience sadness. But arguing over words is not very fruitful.
I'm not sure this interpretation is consistent with "filling the universe with tiny beings whose minds are specifically geared toward feeling as much pure happiness as possible."
First, "pure happiness" sounds like a raw pleasure signal rather than "things conscious beings experience that are good" but ok, maybe it's just about wording.
Second, "specifically geared"...
Upvoted, because although I disagree with much of this on object level, I think the post is totally legit and I think we should encourage original thinking.
Perhaps we need to find a time and place to start a serious discussion of ethics. I think hedonistic utilitarianism is wrong already on the level of meta-ethics. It seems to assume the existence of universal morals which from my point of view is a non-sequitur. Basically all discussions of universal morals are games with meaningless words, maps of non-existing territories. The only sensible meta-ethics ...
I agree with Tom. I think the core values of EA have to include:
In particular exploring new interventions and causes should always be in the EA spotlight. When you think something is an effective charity but most EAs wouldn't agree with you, in my book it's a reason to state your case loud and clear rather than self-censor.
"Hold until future orders" is one approach, but it might turn out to be much more difficult than actually creating an AI with correct values. This is because the formal specification of metaethics (that is, a mathematical procedure that takes humans as input and produces a utility function as output) should be of much lower complexity than specifying what it means to "protect from other AI but do nothing else."
I completely agree that many conceivable post-human futures have low value. See also the "unhumanity" scenario in my analysis. I think the term "existential risk" might be somewhat misleading, since what we're really aiming at is "existence of beings and experiences that we value" rather than just existence of "something." That is, I view your reasoning not as an argument for caring less about existential risk but as an argument for working towards a valuable far future.
Regarding MIRI, I think their position is completely...
Interesting. One way to solve the replaceability problem is to force the government to announce a preliminary budget before the ear-marking "bids" and pledge to treat the bids as differentials with respect to the preliminary budget.
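The "differentials" idea could be sketched roughly as follows (a purely hypothetical illustration: the function name, the data layout, and the rule of clamping over-subtracted lines to zero are my own assumptions, not part of any worked-out proposal):

```python
# Sketch of treating ear-marking bids as differentials against a
# preliminary budget announced in advance by the government.

def settle_budget(preliminary, bids):
    """Apply ear-marking bids as signed differentials on top of a
    preliminary budget.

    preliminary: dict mapping budget line -> preliminary allocation
    bids: list of dicts, each mapping budget line -> signed amount
          (positive = add funds to this line, negative = shift away)
    """
    final = dict(preliminary)
    for bid in bids:
        for line, delta in bid.items():
            final[line] = final.get(line, 0) + delta
    # Clamp negative totals to zero; a real mechanism would need its
    # own rule for lines that get over-subtracted.
    return {line: max(0, amount) for line, amount in final.items()}

prelim = {"health": 100, "defense": 80, "education": 60}
bids = [{"health": 10, "defense": -5}, {"education": 7}]
print(settle_budget(prelim, bids))
# {'health': 110, 'defense': 75, 'education': 67}
```

The point of the pledge is that bidders can reason about their marginal effect: each bid moves the outcome relative to a known baseline rather than into an unknown pool.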
The question is what is the mechanism of value spreading.
If the mechanism is having rational discussions then it is not necessarily urgent to have these discussions right now. Once we create a future in which there is no death and no economic pressures to self-modify in ways that are value destructive, we'll have plenty of time for rational discussions. Things like "experience machine" also fit into this framework, as long as the experiences are in some sense non-destructive (this rules out experiences that create addiction, for example).
If the ...
I think the main comparative advantage (= irreplaceability) of the typical EA comes not from superior technical skill but from motivation to improve the world (rather than make money, advance one's career or feel happy). This means researching questions which are ethically important but not grant-sexy, donating to charities which are high-impact but don't yield a lot of warm-fuzzies, promoting policy which goes against tribal canon etc.
In the Arab Spring many of the revolutionary groups were radical Islamists rather than champions of liberal democracy. Also, I didn't say anything about revolution: in some cases a gradual transition is more likely to work.
Infiltrating an organization you hate while preserving sanity and your true values is a task few people are capable of. I'm quite certain I wouldn't make it.
I think that we need serious research + talking to people from the relevant countries to devise realistic strategies.
Do you think Soviet attempts to foster communism in the US during the cold war were a stabilising influence?
Well, they might have been stabilizing if they had worked :) Although I think war between communist countries is much more likely than war between liberal democracies.
Countries generally and rightfully take affront at foreigners trying to meddle with their internal affairs.
I mostly agree with the descriptive claim but not with the normative claim. Why "rightfully"?
...For a more recent example, look at the aftermath of the western coup in
I'm not sure what the EA movement can do that will have a significant effect in the short term. In the long term we should be looking into establishing liberal democracy in countries which either possess nuclear weapons or have the capacity to develop them in the near future (Russia, China, North Korea, Pakistan, Iran...). For example, we can support the pro-liberalisation groups which already exist in these countries.
Hi Tom, thx for commenting!
For me, the meta-point that we should focus on steering into better scenarios was a more important goal of the post than explaining the actual scenarios. The latter serve more as examples / food for thought.
Regarding objections to Utopian scenarios, I can try to address them if you state the objections you have in mind. :)
Regarding dictatorships, I indeed focused on situations that are long-term stable since I'm discussing long-term scenarios. A global dictatorship with existing technology might be possible but I find it hard to believe it can survive for more than a couple of thousand years.
If your only requirement is for all sentient beings to be happy, you should be satisfied with a universe completely devoid of sentient beings. However, I suspect you wouldn't be (?)
Regarding definition of good, it's pointless to argue about definitions. We should only make sure both of us know what each word we use means. So, let's define "koodness(X)" to mean "the extent to which things X wants to happen actually happen" and "gudness" to mean "the extent to which what is happening to all beings is what they want to happe...
My distribution isn't tight, I'm just saying there is a significant probability of large serial depth. You are right that much of the benefit of current work is "instrumental": interesting results will convince other people to join the effort.
Hi Uri, thanks for the thoughtful reply!
It is not necessarily bad for future sentients to be different. However, it is bad for them to be devoid of properties that make humans morally valuable (love, friendship, compassion, humor, curiosity, appreciation of beauty...). The only definition of "good" that makes sense to me is "things I want to happen" and I definitely don't want a universe empty of love. A random UFAI is likely to have none of the above properties.
I think the distance between our current understanding of AI safety and the required one is of a similar order of magnitude to the distance between the invention of the Dirac sea in 1930 and the discovery of asymptotic freedom in non-Abelian gauge theory in 1973. That is 43 years of well-funded research by the top minds of mankind. And that is without taking into account the engineering part of the project.
If the remaining time frame for solving FAI is 25 years, then:
Thx for the feedback and the references!
I think Ord's "coarse setting" is very close to my type II. The activities you mentioned belong to type II inasmuch as they consider specific scenarios or to type I inasmuch as they raise general awareness of the subject.
Regarding relative value vs. time: I absolutely agree! This is part of the point I was trying to make.
Btw, I was somewhat surprised by Ord's assessment of the value of current type III interventions in AI. I have a very different view. In particular, the 25-35 years time window he mentions...
In a way, the two are interchangeable: if we define "steps" as changes of given magnitude then faster change means more densely spaced steps.
There is another effect that has to be taken into account. Namely, some progress in understanding how to adapt to automation might be happening without the actual adoption of automation, that is, progress that occurs because of theoretical deliberation and broader publicity for the relevant insights. This sort of progress creates an incentive to move all adoption later in time.
Your toy model makes sense. However, if instead of considering the future automation technology X we consider some past (already adopted) automation technology Y, the conclusion would be the opposite. Therefore, to complete your argument you need to show that, in some sense, the next significant step in automation after self-driving cars is closer in time than the previous significant step in automation.
Thx for replying!
I'm still not sure I follow your argument in full. Consider two scenarios:
Self-driving cars are adopted soon. Progress in automation continues. Automation is eventually adopted in other areas as well.
Self-driving cars are adopted later. Progress in automation still continues, in particular through advances in other fields such as computer game AI. Eventually, self-driving cars and automation in other areas are adopted.
In each of these scenarios, we can consider the time at which a given type/level of automation was adopted. You claim...
Hi Owen and Sebastian,
The assumption behind your argument seems to be that slowing (resp. accelerating) progress in automation will result in faster (resp. slower) changes in the future rather than e.g. uniform time translation. Can you explain the reasoning behind this assumption in more detail?
Hi Nate, nice post!
I think you're describing the difference between instrumental value and terminal value. The market price of something is its instrumental value. A dollar is valuable because of the things you can buy with it, not because of intrinsic worth. On the other hand, human lives, happiness etc. have intrinsic worth. I think that the distinction will persist in almost any imaginable universe although the price ratios can be vastly different.
Hi Julia, thx for replying!
I don't know enough about the vegetarian community but I think that it grew so much recently that it might be considered a young movement, like EA (it is also a related movement, obviously). Political opinions definitely seem to be transmitted from parent to child, at least from my experience. It is true that there are "teenage rebellions" but I think that the opposite is more common. Academic ideas are often very narrow-field and of little interest to the wide public so a different approach is natural.
I'm not planning ...
Hi Tom,
Thx for starting a discussion on moral philosophy: I find it interesting and important!
It seems to me that you're wrong when you say that assigning special importance to people closer to oneself makes one a non-consequentialist. One can measure actions by their consequences and measure the consequences in ways that are asymmetric with respect to different people.
Personally I believe that ethics is a property of the human brain and as such it
Well, the problem with optimizing for a specific target audience is the risk of putting off other audiences. I would say something like:
Being born with advantages isn't something to feel guilty about. Being born with advantages is something to be glad about: it gives you that much more power to improve life for everyone.
Nice review! Two comments so far: