All of itaibn's Comments + Replies

I reached this article through a link that had already revealed it was about self-care, but I didn't notice the "self-care" in the title, so I expected the rhetoric to be a bait-and-switch: starting by talking about how aiming for the minimum in directly impact-related things is bad, and then switching to arguing that the same reasoning applies to self-care.

Answer by itaibn · Jan 08, 2021 · 18

Gwern argues here against supporting the American Revolution.

1 · quinn · 3y
So I read Gwern and I also read this Dylan Matthews piece, and I'm fairly convinced the revolution did not lead to the best outcomes for slaves and for indigenous people. I think there are two cruxes for believing it would have been possible to make this determination in real time:

  1. As Matthews points out, follow the preferences of slaves.
  2. Notice that one complaint in the Declaration of Independence was that the British wanted to citizenize indigenous people.

One of my core assumptions, which is up for debate, is that EAs ought to focus on outcomes for slaves and indigenous people more than on outcomes in the general case.
5 · Josh Jacobson · 3y
It's worth noting that a large part of the argument there (but far from all of it) would not apply to this question unless you were in such an influential position that you could have a meaningful effect on whether or not the war took place at all.

I'm glad you agree! For the sake of controversy, I'll add that I'm not entirely sure that scenario is out of consideration from an EV point of view, firstly because the exhaust will have a lot of energy and I'm not sure what will happen to it, and secondly because I'm open to a "diminishing returns" model of population ethics where the computational capacity furloughed does not have an overwhelmingly higher value.

On singletons, I think the distinction between "single agent" and "multiple agents" is more of a difference in how we imagine a system than an ac... (read more)

2 · kokotajlod · 3y
Mmm, good point. Perhaps the way to salvage the concept of a singleton is to define it as the opposite of Moloch, i.e. a future is ruled by a singleton to the extent that it doesn't have Moloch-like forces causing drift towards outcomes that nobody wants, money being left on the table, etc. Or maybe we could just say a singleton is where outcomes are on or close to the Pareto frontier. Idk.

I guess. I don't like the concept of a singleton. I prefer to think that describing a specific failure mode gives a more precise model of exactly what kind of coordination is needed to prevent it. Also, we definitely shouldn't assume a coordinated colonization will follow the Armstrong-Sandberg method. I'm also motivated by a "lamppost approach" to prediction: this model of the future has a lot of details that I think could be worked out to a great deal of mathematical precision, which I think makes it a good case study. Finally, if the necessary ... (read more)

2 · kokotajlod · 3y
Agreed on all counts except that I like the concept of a singleton. I'd be interested to hear why you don't, if you wish to discuss it.
  1. It's true that making use of resources while matching the probe's speed requires a huge expenditure of energy, by the transformation law of energy-momentum if for no other reason. If the remaining energy is insufficient then the probe won't be able to go any faster. Even if there's no more efficient way to extract resources than full deceleration/re-acceleration I expect this could be done infrequently enough that the probe still maintains an average speed of >0.9c. In that case the main competitive pressure among probes would be minimizing the number o
... (read more)
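A rough illustration of the energy cost mentioned in point 1 (my own back-of-the-envelope sketch, not from the original thread; standard special relativity):

```latex
% Lorentz factor at v = 0.9c:
\[ \gamma = \frac{1}{\sqrt{1 - v^2/c^2}} = \frac{1}{\sqrt{1 - 0.81}} \approx 2.29 \]
% Kinetic energy of a probe of rest mass m at that speed:
\[ E_k = (\gamma - 1)\,m c^2 \approx 1.29\,m c^2 \]
% Each full deceleration and re-acceleration therefore costs more than the
% probe's entire rest-mass energy, so stops must be infrequent for the
% average speed to stay above 0.9c.
```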
historical cases are earlier than would be relevant directly

Practically all previous pandemics were far enough back in history that their applicability is unclear. I think it's unfair to discount your example on that basis, because every other positive or negative example can be discounted in the same way.

I've just examined the two Wikipedia articles you link to, and I don't think this was an independent discovery. The race between Einstein and Hilbert was to find the Einstein field equations, which put general relativity in its final form. However, the original impetus for developing general relativity was Einstein's proposed equivalence principle in 1907, and in 1913 he and Grossmann published the proposal that it would involve spacetime being curved (with a pseudo-Riemannian metric). Certainly after 1913 general relativity was inevitable... (read more)

Answer by itaibn · May 16, 2019 · 8

I don't recall the source, but I remember hearing from a physicist that if Einstein hadn't discovered the theory of special relativity it would surely have been discovered by other scientists at the time, but if he hadn't discovered the theory of general relativity it wouldn't have been discovered until the 1970s. More specifically, general relativity has an approximation known as linearized gravity which suffices to explain most of the experimental anomalies of Newtonian gravity but doesn't contain the concept that spacetime is curved, and that could have been discovered instead.
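For reference, a sketch of the weak-field approximation meant here (my own addition; this is the standard textbook formulation of linearized gravity):

```latex
% Weak-field (linearized) approximation: the metric is flat spacetime
% plus a small perturbation,
\[ g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}, \qquad |h_{\mu\nu}| \ll 1, \]
% and the field equations are kept only to first order in h. In this form
% the theory can be read as a field on flat spacetime, which is why it
% could in principle have been found without the curved-spacetime picture.
```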

8 · [anonymous] · 5y
While I am certainly not an expert on this topic, the claim that general relativity wouldn't have been discovered until the 1970s without Einstein seems false to me. David Hilbert was doing similar work at the same time, and as far as I'm aware there was something of a race between Einstein and Hilbert to finish the work first, with Einstein winning narrowly (on the order of days). More information can be found on the Wikipedia pages History of General Relativity and Relativity Priority Dispute.

I'm puzzled by Mallatt's response to the last question about consciousness in computer systems. It appears to me that he and Feinberg are applying a double standard when judging the consciousness of computer programs. I don't know what he has in mind when he talks about the enormous complexity of consciousness, but based on other parts of the interview we can see some of the diagnostic criteria Mallatt uses to judge consciousness in practice. These include behavioral tests such as going back to places an animal saw food before, tending wounds, a... (read more)

1 · Max_Carpendale · 5y
Yeah, I think this is a worry for his view. I do also personally assign a somewhat higher likelihood to invertebrate consciousness than modern AI consciousness because of evolutionary relatedness, greater structural homology, and because they probably satisfy more of the criteria for consciousness that I would use. You might be interested in my next interview on this subject which will be with someone who discusses modern AI and robotics findings in the context of invertebrate consciousness, and comes to a more sceptical conclusion based on that.

On the second paragraph, making your point succinctly is a valuable skill that is also important for anti-debates. A key part of this skill is understanding which parts of your argument are crucial for your conclusion and which merit less attention. The bias towards quick arguments and the bandwagon effect also exist in natural conversation, and I'm not sure they're any worse in competitive debating. I have little experience with competitive debating, so I cannot make the comparison and am just arguing from how this should work in principle.

On the ... (read more)

You should consider whether something has gone terribly wrong if your method for preventing s-risks is to simulate individuals suffering intensely in huge quantities.

-1 · turchin · 6y
See patches in the comments below: there are ways to do the trick without increasing the total number of suffering observer-moments.
0 · turchin · 6y
I could see three possible problems:

  1. The method will create new suffering moments, and maybe even suffering moments that would not otherwise exist. But there is a patch for this: see my comment above replying to Lukas.
  2. The universe will be tiled with past simulations trying to resurrect every ant that ever lived on Earth, and thus there will be an opportunity cost, as many other good things could be done instead. This could be patched by what might be called a "cheating death in Damascus" approach, where some timelines choose not to play this game by using a random generator, or by capping the amount of resources they may spend on preventing past suffering.
  3. The problem could be ontological, like a wrong theory of personal identity. But if a (pseudo-)benevolent AI has a wrong understanding of human identity, we will have many other problems, e.g. during uploading.

A particular word choice that made me uneasy is calling "dating a non-EA" "dangerous" without qualifying this word properly. It is more precise to say that something is "good" or "bad" for a particular purpose than to just call it "good" or "bad"; the same goes for "dangerous". If you call something "dangerous" without qualification or other context, this leaves an implicit assumption that the underlying purpose is universal and unquestioned, or almost so, in the community y... (read more)

It seems to me that you're in favor of unilateral talent trading: that someone should work on a cause they don't think is critical because they have a comparative advantage there, in the belief that this will induce other people to work on their preferred causes. I disagree with this. When someone works on a cause, this also increases the amount of attention and perceived value it is given in the EA community as a whole. As such, I expect the primary effect of unilateral talent trading would be to increase the cliquishness of the EA community -- people wo... (read more)

On this very website, clicking the link "New to Effective Altruism?" and a little browsing quickly leads to recommendations to give to EA Funds. If EA Funds is really intended to be a high-trust option, CEA should change that recommendation.

1 · Ben Pace · 6y
Yup. I suppose I wrote down my assessment of the information available about the funds and the sort of things that would cause me to donate to them, not the marketing used to advertise them, which does indeed feel disconnected.

It seems that there's a confusing attempt to make this seem reasonable to everyone whilst in fact not offering the sort of evidence that should make it so. The evidence about it is not the 'evidence-backed charities' that made GiveWell famous/trustworthy, but "here is a high-status person in a related field who has a strong connection to EA", which is not that different from the way other communities ask their members for funding: it's based on trust in the community's leaders, not on metrics objectively verifiable by outsiders.

So you should ask yourself what causes you to trust CEA and then use that, as opposed to the objective metrics associated with the EA Funds (of which there are far fewer than with GiveWell). For example, if CEA has generally made good philosophical progress in this area and also made good hiring decisions, that would make you trust the grant managers more.

I haven't responded to you for so long firstly because I felt we had reached the point in the discussion where it's difficult to get across anything new and I wanted to be careful about what I say, and then because, after a while without writing anything, I became disinclined to continue. The conversation may close soon.

Some quick points:

  • My whole point in my previous comment is that the conceptual structure of physics is not what you make it out to be, and so your analogy to physics is invalid. If you want to say that my arguments against consciousness

... (read more)

Do you think we should move the conversation to private messages? I don't want to clutter a discussion thread that's mostly on a different topic, and I'm not sure whether the average reader of the comments benefits or is distracted by long conversations on a narrow subtopic.

Your comment appears to just reframe the point I made in your own words, and then affirm that you believe the notion of qualia generalizes to all possible arrangements of matter. This doesn't answer the question: why do you believe this?

By the way, although there is no... (read more)

0 · MikeJohnson · 6y
EA forum threads auto-hide so I'm not too worried about clutter.

I don't think you're fully accounting for the difference in my two models of meaning. And I think the objections you raise to consciousness being well-defined would also apply to physics being well-defined, so your arguments seem to prove too much.

To attempt to address your specific question, I find the hypothesis that 'qualia (and emotional valence) are well-defined across all arrangements of matter' convincing because (1) it seems to me the alternative is not coherent (as I noted in the piece on computationalism I linked for you) and (2) it seems generative and to lead to novel and plausible predictions I think will be proven true (as noted in the linked piece on quantifying bliss and also in Principia Qualia). All the details and subarguments can be found in those links.

Will be traveling until Tuesday; probably with spotty internet access until then.

It wasn't clear to me from your comment, but based on your link I am presuming that by "crisp" you mean "amenable to generalizable scientific theories" (rather than "ontologically basic"). I was using "pleasure/pain" as a catch-all term and would not mind substituting "emotional valence".

It's worth emphasizing that the fact that a particular feature is crisp does not imply that it generalizes to any particular domain in any particular way. For example, a single ice crystal has a set of directions in whic... (read more)

0 · MikeJohnson · 6y
This is an important point and seems to hinge on the notion of reference, or the question of how language works in different contexts. The following may or may not be new to you, but trying to be explicit here helps me think through the argument.

Mostly, words gain meaning from contextual embedding, i.e. they're meaningful as nodes in a larger network. Wittgenstein observed that philosophical confusion often stems from taking a perfectly good word and trying to use it outside its natural remit. His famous example is the question, "what time is it on the sun?". As you note, maybe notions about emotional valence are similar: trying to 'universalize' valence may be like trying to universalize time zones, an improper move.

But there's another notable theory of meaning, where parts of language gain meaning through deep structural correspondence with reality. Much of physics fits this description, for instance, and it's not a type error to universalize the notion of the electromagnetic force (or the electroweak force, or whatever the fundamental unification turns out to be). I am essentially asserting that qualia is like this: that we can find universal principles for qualia that are equally and exactly true in humans, dogs, dinosaurs, aliens, conscious AIs, etc. When I note I'm a physicalist, I intend to inherit many of the semantic properties of physics, of how meaning in physics 'works'. I suspect all conscious experiences have an emotional valence, in much the same way all particles have a charge or spin, i.e. it's well-defined across all physical possibilities.

Thanks for the link. I didn't think to look at what other posts you have published and now I understand your position better.

As I now see it, there are two critical questions for distinguishing the different positions on the table:

  1. Does our intuitive notion of pleasure/suffering have an objective, precisely defined fundamental concept underlying it?
  2. In practice, is it a useful approach to look for computational structures exhibiting pleasure/suffering in the distant future as a means to judge possible outcomes?

Brian Tomasik answers these questions "No/Yes"... (read more)

1 · MikeJohnson · 6y
Thanks, this is helpful. My general position on your two questions is indeed "Yes/No". The question of 'what are reality's natural kinds?' is admittedly complex and there's always room for skepticism. That said, I'd suggest the following alternatives to your framing:

  • Whether the existence of qualia itself is 'crisp' seems prior to whether pain/pleasure are. I call this the 'real problem' of consciousness.
  • I'm generally a little uneasy with discussing pain/pleasure in technically precise contexts; I prefer 'emotional valence'.
  • Another reframe to consider is to disregard talk about pain/pleasure, and instead focus on whether value is well-defined on physical systems (i.e. the subject of Tegmark's worry here). Conflation of emotional valence and moral value can then be split off as a subargument.

Generally speaking, I think if one accepts that it's possible in principle to talk about qualia in a way that 'carves reality at the joints', it's not much of a stretch to assume that emotional valence is one such natural kind (arguably the 'C. elegans of qualia'). I don't think we're logically forced to assume this, but I think it's prima facie plausible, and paired with some of our other work it gives us a handhold for approaching qualia in a scientific/predictive/falsifiable way.

Essentially, QRI has used this approach to bootstrap the world's first method for quantifying emotional valence in humans from first principles, based on fMRI scans. (It should also work for most non-human animals; it's just harder to validate in that case.) We haven't yet done the legwork on connecting future empirical results here back to the computationalism vs physicalism debate, but it's on our list.

TL;DR: If consciousness is a 'crisp' thing with discoverable structure, we should be able to build/predict useful things with it that cannot be built/predicted otherwise, similar to how discovering the structure of electromagnetism let us build/predict useful things we could not have otherwise.

Thanks for reminding me that I was implicitly assuming computationalism. Nonetheless, I don't think physicalism substantially affects the situation. My arguments #2 and #4 stand unaffected; you have not backed up your claim that qualia is a natural kind under physicalism. While it's true that physicalism gives clear answers for the value of two identical systems or a system simulated with homomorphic encryption, it may still be possible to have quantum computations involving physically instantiated conscious beings, by isolating the physical environment of... (read more)

0 · MikeJohnson · 6y
It seems to me your #2 and #4 still imply computationalism and/or are speaking about a straw-man version of physicalism. Different physical theories will address your CPT-reversal objection differently, but it seems pretty trivial to me. I would generally agree, but would personally phrase it differently: rather, as noted here, there is no objective fact of the matter as to what the 'computational behavior' of a system is, i.e. no way to objectively derive what computations a physical system is performing. In terms of a positive statement about physicalism and qualia, I'm assuming something on the order of dual-aspect monism / neutral monism. And yes, insofar as a formal theory of consciousness with broad predictive power would depart from folk intuition, I'd definitely go with the formal theory.

My current position is that the amount of pleasure/suffering that conscious entities will experience in a far-future technological civilization will not be well-defined. Some arguments for this:

  1. Generally, utility functions or reward functions are invariant under affine transformations (with suitable rescaling of the learning rate for reward functions). Therefore they cannot be compared between different intelligent agents as a measure of pleasure (see the sketch after this list).

  2. The clean separation of our civilization into many different individuals is an artifact of how evolution op

... (read more)
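A worked statement of the affine-invariance point in item 1 (my own formalization; this is standard von Neumann-Morgenstern utility theory, not from the original comment):

```latex
% A positive affine transform of a utility function
\[ u'(x) = a\,u(x) + b, \qquad a > 0, \]
% represents exactly the same preferences, because expected-utility
% comparisons are unchanged for all lotteries p, q:
\[ \mathbb{E}_p[u] \ge \mathbb{E}_q[u] \iff \mathbb{E}_p[u'] \ge \mathbb{E}_q[u'] \]
% The scale a and offset b are therefore arbitrary, so utility levels
% cannot be compared across agents as a measure of pleasure.
```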
1 · MikeJohnson · 6y
Possibly the biggest unknown in ethics is whether bits matter or whether atoms matter. If you assume bits matter, then I think this naturally leads into a concept cluster where speaking about utility functions, preference satisfaction, complexity of value, etc., makes sense. You also get a lot of weird unresolved thought experiments, like homomorphic encryption. If you assume atoms matter, I think this subtly but unavoidably leads to a very different concept cluster: qualia turns out to be a natural kind instead of a leaky reification, for instance, and talking about the 'unity of value thesis' makes more sense than talking about the 'complexity of value thesis'.

TL;DR: I think you're right that if we assume computationalism/functionalism is true, then pleasure and suffering are inherently ill-defined, not crisp. They do seem well-definable if we assume physicalism is true, though.

Indeed, maybe I should have made the point more harshly. To be clear, that comment is not about something people might do; it's about what's already present in the top post, which I see as breaking Reddit's rules.

I used soft language because I was worried about EA discussions breaking into arguments whenever someone suggests a good thing to do, and was worried that I might have erred too much in the other direction in other contexts. I still don't feel I have a good intuition on how confrontational I should be.

9 · CarlShulman · 6y
I think it was an understandable first thought for someone who didn't know those rules, and Dony shouldn't be castigated for not knowing about them in a useful post about an important topic. But I think we should be definite about not violating the rules (e.g. by editing the post) now that everyone involved knows about them, while pursuing Dony's other good ideas.

I've spent some time thinking and investigating what the current state of affairs is, and here are my conclusions:

I've been reading through PineappleFund's comments. Many are responses to solicitations for specific charities with him endorsing them as possibilities. One of these was for SENS foundation. Matthew_Barnett suggested that this is evidence that he particularly cares about long-term future causes, but given the diversity of other causes he endorsed I think it is pretty weak evidence.

They haven't yet commented on any of the subthreads specifically d... (read more)

6 · DC · 6y
Oh dear! No, I didn't explicitly realize this beyond passing thoughts. In retrospect, I'm confused why this wasn't cached in my mind as being against reddiquette. I should eat my own dogfood regarding brigading. I edited it so it's not soliciting. Let me know here or privately if there are any further fixes I should make to the post (i.e. if I should just remove the links to the known EA comments).

Keep in mind that soliciting upvotes for a comment is explicitly against Reddit rules. I understand if you think that the stakes of this situation are more important than these rules, but be sure you are consciously aware of the judgment you have made.

I'd say our policy should be 'just don't do that.' EA has learned its lesson on this from GiveWell.

Also:

Integrity:

Because we believe that trust, cooperation, and accurate information are essential to doing good, we strive to be honest and trustworthy. More broadly, we strive to follow those rules of

... (read more)

First, I consider our knowledge of psychology today to be roughly equivalent to that of alchemists when alchemy was popular. As with alchemy, our main advantage over previous generations is that we're doing lots of experiments and starting to notice vague patterns, but we still don't have any systematic or reliable knowledge of what is actually going on. It is premature to seriously expect to change human nature.

Improving our knowledge of psychology to the point where we can actually figure things out could have a major positive effect on society. The sa... (read more)

I don't see any high-value interventions here. Simply pointing out a problem people have been aware of for millennia will not help anyone.

3 · Kaj_Sotala · 7y
There seem to be a lot of leads that could help us figure out the high-value interventions, though:

  i) knowledge about what causes it and what has contributed to changes in it over time;
  ii) research directions that could help further improve our understanding of what does and doesn't cause it;
  iii) various interventions which already seem to work in a small-scale setting, though it's still unclear how they might be scaled up (e.g. something like Crucial Conversations is basically about increasing trust and safety in one-to-one and small-group conversations);
  iv) and of course psychology in general is full of interesting ideas for improving mental health and well-being that haven't been rigorously tested, which also suggests that
  v) any meta-work that would improve psychology's research practices would be even more valuable than we previously thought.

As for "pointing out a problem people have been aware of for millennia": well, people have been aware of global poverty for millennia too. Then we got science and randomized controlled trials and all the stuff that EAs like, and got better at fixing the problem. Time to start looking at how we could apply our improved understanding of this old problem to fixing it.

I don't think the people of this forum are qualified to discuss this. Nobody in the post or comments (as of the time I posted my comment, and I am including myself) gives me the impression that they have detailed knowledge of the process and trade-offs involved in creating a new government agency or taking any other type of major governmental action on x-risk. As laymen, I believe we should not be proposing or judging any particular policy, but recognizing and supporting people with genuine expertise who are interested in existential-risk policy.

Before you get too excited about this idea, I want you to recall your days at school and how well it turned out when the last generation of thinkers tried this.

While I couldn't quickly find the source for this, I'm pretty sure Eliezer read the Lectures on Physics as well. Again, I think Surely You're Joking is good, I just think the Lectures on Physics is better. Both are reasonable candidates for the list.

The article on machine learning doesn't discuss the possibility that more people pursuing machine learning jobs could have a net negative effect. It's true your venue will generally encourage people who will be more considerate of the long-term and altruistic effects of their research, and so will likely have a more positive effect than the average entrant to the field; but if accelerating the development of strong AI is a net negative, that could outweigh the benefit of the average researcher being more altruistic.

0 · kbog · 7y
Accelerating the development of machine intelligence is not a net negative since it can make the world better and safer at least as much as it is a risk. The longer it takes for AGI algorithms to be developed, the more advanced hardware and datasets there will be to support an uncontrolled takeoff. Also, the longer it takes for AI leaders to develop AGI then the more time there is for other nations and organizations to catch up, sparking more dangerous competitive dynamics. Finally, even if it were a net negative, the marginal impact of one additional AI researcher is tiny whereas the marginal impact of one additional AI safety researcher is large, due to the latter community being much smaller.

What do you mean by Feynman? I endorse his Lectures on Physics as something that had a big effect on my own intellectual development, but I worry many people won't be able to get that much out of it. While his more accessible works are good, I don't rate them as highly.

1 · Ben Pace · 7y
"Surely You're Joking Mr Feynman" still shows genuine curiosity, which is rare and valuable. But as I say, it's less about whether I can argue for it, and more about whether the top intellectual contributors in our community found it transformative in their youth. I think many may have read Feynman when young (e.g. it had a big impact on Eliezer).

This post is a bait-and-switch: It starts off with a discussion of the Good Judgement Project and what lessons it teaches us about forecasting superintelligence. However, starting with the section "What lessons should we learn?", you switch from a general discussion of these techniques towards making a narrow point about which areas of expertise forecasters should rely on, an opinion which I suspect the author arrived at through means not strongly motivated by the Good Judgement Project.

While I also suspect the Good Judgement Project could have... (read more)

0 · WillPearson · 7y
Sorry if you felt I was being deceptive. The list of areas of expertise I mentioned in the 80K Hours section was relatively broad and not meant to be exhaustive; I could add physics and economics off the top of my head, and I'm sure there are many more.

I was considering each AGI team as having to do small amounts of forecasting about the likely success and usefulness of their projects. I think building the superforecasting mindset into all levels of endeavours could be valuable, without having to rely on explicit superforecasters for every decision. It would be great to have a full team of forecasters working on intelligence in general (so they would have something to correlate their answers on Superintelligence with).

I was being moderate in my demands about how much the Open Philanthropy Project should change how they make forecasts about what is good to do; I just wanted it to be directionally correct. There was a simple thing people could do to improve their predictions: the book's ten-commandments appendix is where I got the list of things to do. I figure if I managed to get the Open Philanthropy Project to try and follow them, things would improve. But I agree that them getting good forecasters somehow would be a lot better.

Does that clear up where I was coming from?

Suggestion: The author should have omitted the "Thoughts" section of this post and put the same content in a comment, and, in general, news posts should avoid subjective commentary in the main post.

Reasoning: The main content of this post is its report of EA-related news. This by itself is enough to make it worth posting. Discussion and opinions of this news can be done in the comments. By adding commentary you are effectively "bundling" a high-quality post with additional content, which grants this extra content undue attention.

No... (read more)

1 · the_jaded_one · 7y
A post which simply quotes a news source could be criticized as not containing anything original and therefore not worth posting. Someone has already complained that this post is superfluous since a discussion already exists on Facebook. Actually if I had to criticize my own post I would say its weakness is that it lacks in-depth analysis and research. Unfortunately, in-depth analysis takes a lot of time...

The following is entirely a "local" criticism: It responds only to a single statement you made, and has essentially no effect on the validity of the rest of what you say.

I always run content by (a sample of) the people whose views I am addressing and the people I am directly naming/commenting on... I see essentially no case against this practice.

I found this statement surprising, because it seems to me that this practice has a high cost. It increases the amount of effort it takes to make a criticism. Increasing the cost of making criticisms c... (read more)