itaibn

itaibn's Comments

Are there historical examples of excess panic during pandemics killing a lot of people?
"historical cases are earlier than would be relevant directly"

Practically all previous pandemics were far enough back in history that their applicability is unclear. I think it's unfair to discount your example on those grounds, since every other positive or negative example can be discounted the same way.

Which scientific discovery was most ahead of its time?

I've just examined the two Wikipedia articles you link to, and I don't think this was an independent discovery. The race between Einstein and Hilbert was to find the Einstein field equations, which put general relativity in its final form. However, the original impetus for developing general relativity was the Equivalence Principle Einstein proposed in 1907, and in 1913 he and Grossmann published the proposal that the theory would involve curved spacetime (with a pseudo-Riemannian metric). Certainly after 1913 general relativity was inevitable, and perhaps it was inevitable after 1907, but it all still depended on Einstein's initial ideas.

That's a far cry from saying that these ideas wouldn't have been discovered until the 1970s, a claim I'm basing mainly on hearsay and which, I confess, is much more dubious.

Which scientific discovery was most ahead of its time?

I don't recall the source, but I remember hearing from a physicist that if Einstein hadn't discovered the theory of special relativity, it would surely have been discovered by other scientists at the time, but if he hadn't discovered the theory of general relativity, it wouldn't have been discovered until the 1970s. More specifically, general relativity has an approximation known as linearized gravity, which suffices to explain most of the experimental anomalies of Newtonian gravity but doesn't involve the concept of curved spacetime; that approximation could have been discovered instead.
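
To make the distinction concrete, the standard textbook formulation of linearized gravity writes the metric as a small perturbation of the flat Minkowski metric,

    g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}, \qquad |h_{\mu\nu}| \ll 1,

and in the Lorenz gauge the field equations reduce to a wave equation on that flat background,

    \Box \bar{h}_{\mu\nu} = -\frac{16\pi G}{c^4} T_{\mu\nu}, \qquad \bar{h}_{\mu\nu} \equiv h_{\mu\nu} - \tfrac{1}{2} \eta_{\mu\nu} h.

In this form h_{\mu\nu} is just a field on a fixed flat spacetime, so the theory can be used without ever treating spacetime as curved.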

Interview with Jon Mallatt about invertebrate consciousness

I'm puzzled by Mallatt's response to the last question, about consciousness in computer systems. It appears to me that he and Feinberg are applying a double standard when judging the consciousness of computer programs. I don't know what he has in mind when he talks about the enormous complexity of consciousness, but based on other parts of the interview we can see some of the diagnostic criteria Mallatt uses to judge consciousness in practice. These include behavioral tests, such as going back to places where an animal saw food before, tending wounds, and hiding when injured, as well as structural tests, such as multiple levels of intermediate processing between sensory input and motor output. Existing AIs already pass the structural test I listed, and I believe they could pass the behavioral tests with a simple virtual environment and reward function (see the sketch below). I don't see a principled way of including the simplest types of animal consciousness while excluding any form of computer consciousness.
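
As a minimal sketch of what I mean by a simple virtual environment and reward function (the gridworld, food location, and reward here are invented purely for illustration), a tabular Q-learning agent readily learns to go back to the place where it found food before:

    import random

    SIZE = 5                                       # 5x5 gridworld
    FOOD = (4, 4)                                  # fixed cell where food appears
    ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]   # four movement directions

    def step(state, action):
        # Clamp movement to the grid; reward only for reaching the food cell.
        x = min(max(state[0] + action[0], 0), SIZE - 1)
        y = min(max(state[1] + action[1], 0), SIZE - 1)
        new_state = (x, y)
        return new_state, (1.0 if new_state == FOOD else 0.0), new_state == FOOD

    Q = {}                        # (state, action index) -> estimated value
    alpha, gamma, epsilon = 0.1, 0.9, 0.1

    for episode in range(2000):
        state = (0, 0)
        for _ in range(100):      # cap episode length
            if random.random() < epsilon:
                a = random.randrange(len(ACTIONS))  # explore
            else:                                   # exploit best known action
                a = max(range(len(ACTIONS)), key=lambda i: Q.get((state, i), 0.0))
            new_state, reward, done = step(state, ACTIONS[a])
            best_next = max(Q.get((new_state, i), 0.0) for i in range(len(ACTIONS)))
            old = Q.get((state, a), 0.0)
            Q[(state, a)] = old + alpha * (reward + gamma * best_next - old)
            state = new_state
            if done:
                break

    # After training, a greedy rollout "goes back to where the food was".
    state, path = (0, 0), [(0, 0)]
    while state != FOOD and len(path) < 20:
        a = max(range(len(ACTIONS)), key=lambda i: Q.get((state, i), 0.0))
        state, _, _ = step(state, ACTIONS[a])
        path.append(state)
    print(path)   # e.g. a direct route from (0, 0) to (4, 4)

Nothing about this toy agent is specific to computers: the behavioral criterion it satisfies is the same one used to credit animals with consciousness.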

Debate and Effective Altruism: Friends or Foes?

On the second paragraph: making your point succinctly is a valuable skill that is also important for anti-debates. A key part of this skill is understanding which parts of your argument are crucial to your conclusion and which merit less attention. The bias towards quick arguments and the bandwagon effect also exist in natural conversation, and I'm not sure they're any worse in competitive debating. I have little experience with competitive debating, so I cannot make the comparison and am just arguing from how this should work in principle.

On the other hand, in natural conversation you want to economize on both the audience's time and its cognitive resources, whereas competitive debate weighs minimizing time more heavily, which distorts how people learn succinctness from it. Also, the time constraint in competitive debate might be much more severe than the mental-resource constraint in the most productive natural conversations, so many important skills that are only exercised in long-form conversation are not practiced at all.

Curing past sufferings and preventing s-risks via indexical uncertainty

You should consider whether something has gone terribly wrong if your method for preventing s-risks is to simulate individuals suffering intensely in huge quantities.

Empirical data on value drift

A particular word choice that made me uneasy is calling "dating a non-EA" "dangerous" without qualifying the word properly. It is more precise to say that something is "good" or "bad" for a particular purpose than to just call it "good" or "bad", and the same goes for "dangerous". Calling something "dangerous" without qualification or other context carries the implicit assumption that the underlying purpose is universal and unquestioned, or nearly so, in the community you're speaking to. In many cases it's fine to assume EA values in these sorts of statements -- this is an EA forum, after all. But doing so for statements about value drift appears to support the norm that people here should want to stay with EA values forever, a norm which I oppose.

Comparative advantage in the talent market

It seems to me that you're in favor of unilateral talent trading: that someone should work on a cause he doesn't consider critical but where he has a comparative advantage, because he believes this will induce other people to work on his preferred causes. I disagree. When someone works on a cause, this also increases the attention and perceived value it is given in the EA community as a whole. As such, I expect the primary effect of unilateral talent trading would be to increase the cliquishness of the EA community -- people working on what's popular in the EA community rather than on what's right. Also, what's commonly considered an EA priority can differ significantly from the actual average opinion, and unilateral trading would wrongly shift the latter in the direction of the former, especially since the former is more easily gamed by advertising and the like. On the whole, I discourage working on a cause you don't think is important unless you are confident this won't decrease the total amount of attention given to your preferred cause. That is, only accept explicit bilateral trades with favorable terms.

How to improve EA Funds

On this very website, clicking the "New to Effective Altruism?" link and a little browsing quickly lead to recommendations to give to EA Funds. If EA Funds really is intended to be a high-trust option, CEA should change that recommendation.

Why I prioritize moral circle expansion over artificial intelligence alignment

I haven't responded to you for so long firstly because I felt we had reached the point in the discussion where it's difficult to get anything new across, and I wanted to be careful about what I say, and then because after a while without writing anything I became disinclined to continue. The conversation may close soon.

Some quick points:

  • My whole point in my previous comment was that the conceptual structure of physics is not what you make it out to be, and so your analogy to physics is invalid. If you want to say that my arguments against consciousness apply equally well to physics, you will need to explain the analogy.

  • My views on consciousness that I mentioned earlier but did not elaborate on are becoming more relevant. It would be a good idea for me to explain them in more detail.

  • I read your linked piece on quantifying bliss and I am unimpressed. I concur with the last paragraph of this comment.
