Devin Kalish

1212 · New York, NY, USA · Joined Jan 2022

Bio

Hello, I'm Devin. I blog here along with Nicholas Kross. Currently working on a bioethics MA at NYU.

Comments
160

I don't agree with MIRI on everything, but yes, this is one of the things I like most about it.

For what it’s worth, speaking as a non-comms person, I’m a big fan of Rob Bensinger-style comms people. I like seeing him get into random twitter scraps with e/acc weirdos, or turning obnoxious memes into FAQs, or doing informal abstract-level research on the state of bioethics writing. I may be biased specifically because I like Rob’s contributions, and would miss them if he turned himself into a vessel of perfect public emptiness into which the disembodied spirit of MIRI’s preferred public image was poured, but, look, I also just find that type of job description obviously off-putting. In general I liked getting to know the EAs I’ve gotten to know, and I don’t know Shakeel that well, but I would like to get to know him better. I certainly am averse to the idea of wrist-slapping him back into this empty vessel to the extent that we are blaming him for carelessness even when he specifies very clearly that he isn’t speaking for his organization. I do think that his statement was hasty, but I also think we need to be forgiving of EAs whose emotions are running a bit hot right now, especially when they circle back to self-correct afterwards.

Equality is always “equality with respect to what”. In one sense giving a beggar a hundred dollars and giving a billionaire a hundred dollars is treating them equally, but only with respect to money. With respect to the important, fundamental things (improvement in wellbeing), the two are very unequal. I take it that the natural reading of “equal” is “equal with respect to what matters”, as otherwise it is trivial to point out some way in which any possible treatment of beings that differ in some respect must be unequal (either you treat the two unequally with respect to money, or with respect to welfare, for instance).

The most radical view of equality of this sort is that for any being to whom what matters can to some extent matter, one ought to treat them equally with respect to it. This is, for instance, the view of people like Singer, Bentham, and Sidgwick (yes, including non-human animals, which is my view as well). It is also, if not universally then at least to a greater degree than average, one of the cornerstones of the philosophy and culture of Effective Altruism, and it is the reading implied by the post linked in that part of the statement.

Even if you disagree with some of the extreme applications of the principle, race is easy mode for this. Virtually everyone today agrees with equality in this case, so given what a unique cornerstone of EA philosophy this type of equality is in general, it makes sense to reiterate it in cases where it seems that people are being treated with callousness and disrespect based on their race; such cases are an especially worrying sign for us. Again, you might disagree that Bostrom is failing to apply equal respect of this sort, or feel that this use of the word equality is not how you usually think of it, but I find it suspicious that so many people are boosting your comment given how common, even mundane, statements like this are in EA philosophy, and given that the statement links directly to a page explaining it on the main EA website.

At the risk of running afoul of the moderation guidelines, this comment reads to me as very obtuse. The sort of equality you are responding to is one that I think almost nobody endorses. The natural reading of “equality” in this piece is the one very typical of, even to an extent uniquely radical about, EA: Bentham saying “each to count for one and none for more than one”, Sidgwick talking about the point of view of the universe, Singer discussing equal consideration of equal interests. I would chalk this up to an isolated failure to read the statement charitably, but it is incredibly implausible to me that this becoming the top-voted comment can be accounted for by mass reading comprehension problems. If this were not a statement critical of an EA darling, but rather a more mundane statement of EA values that said something about how people count equally regardless of where in space and time they are, or how sentient beings count equally regardless of their species, I would be extremely surprised to see a comment like this make it to the top of the post. I get that taking this much scandal in a row hurts, but guys, for the love of god, just take the L; this behavior is very uncharming.

For what it’s worth, I think that you are a well-liked and respected critic not just outside of EA, but also within it. You have three posts and 28 comments but a total karma of 1203! Compare this to Emile Torres or Eric Hoel or basically any other external critic with a forum account. I’m not saying this to deny that you have been treated unfairly by EAs; I remember one memorable event when someone else was accused by a prominent EA of being your sock-puppet on basically no evidence. This is all just to say: I hope you don’t get too discouraged by this. Overall I think there’s good reason to believe that you are having some impact, slowly but persistently, and many of us would welcome you continuing to push, even if we have various specific disagreements with you (as I do). This comment reads to me as very exhausted, and I understand if you feel you don’t have the energy to keep it up, but I also don’t think it’s a wasted effort.

Personally I think the Most Important Century series is closest to my own thinking, though there isn't any single source that would completely account for my views. Then again, my timelines are longer than those of some of the other people in the comments, and I'm not aware of a good comprehensive write-up of the case for much shorter timelines.

The impact for me was pretty terrible. There were two main components to the devastating part of my timeline changes, which probably had a similar amount of effect on me:

-my median estimated year moved significantly closer, with the time remaining cut down by more than half

-my probability mass on AGI arriving significantly sooner than even that bulked up

The latter gives me a nearish-term estimated prognosis of death somewhere between being diagnosed with prostate cancer and colorectal cancer, something probably survivable but hardly ignorable. Also, everyone else in the world has it. Also, it will be hard for you to get almost anyone else to take you seriously if you tell them the diagnosis.

The former change puts my best guess arrival for very advanced AI well within my life expectancy, indeed when I’m middle aged. I’ve seen people argue that it is actually in one’s self interest to hope that AGI arrives during their lifetimes, but as I’ve written a bit about before this doesn’t really comfort me at all. The overwhelming driver of my reaction is more that, if things go poorly and everything and everyone I ever loved is entirely erased, I will be there to see it (well, see it in a metaphorical sense at least).

There were a few months, between around April and July of this year, when this caused me some serious mental health problems; in particular, it worsened my insomnia and some other things I was already dealing with. At this point I am doing a bit better, and I can sort of put the idea back in the abstract-idea box AI risk used to occupy for me, where it feels like it can’t hurt me. Sometimes I still get flashes of dread, but mostly I think I’m past the worst of it for now.

In terms of donation plans, I donated to AI-specific work for the first time this year (MIRI and Epoch; the process of deciding which places to pick was long, frustrating, and convoluted, but probably the biggest filter was that I ruled out anyone doing significant capabilities work). More broadly, I became much more interested in governance work, and in work to slow down AI development generally, than I was before.

I’m not planning to change career paths, mostly because I don’t think there is anything very useful I can do, but if there’s something related to AI governance that comes up that I think I would be a fit for, I’m more open to it than I was before.

I think the overall balance of positive and negative sources is fair when viewed only from a "positive versus negative" standpoint. As I think Habiba Islam pointed out somewhere, much of the positive reading is much, much longer. Where I think this will wind up running into trouble is something like this:

-While there is some primary reading in this list, most of the articles, figures, events, ideas, etc. that are discussed across these readings appear in the secondary sources.

-This is pretty much inevitable; the list would multiply out far too much if she added all of the primary sources needed to evaluate the secondary sources from scratch

-Most of the secondary sources are negative, and often misleading in some significant way

-The standard way to try to check these problems without multiplying out primary sources too much is to read other pieces arguing with the original ones

-The trouble is, there are very few of those outside of blogs and the EA Forum on these topics, something I've been hand-wringing about for a while, and Thorn seems only to be looking at more official sources like academic/magazine/newspaper publications

-I think Thorn will try to be balanced and thoughtful, but I think this disparity will almost ensure that the video will inherit many of the flaws of its sources

Endorsed. A bunch of my friends were recommending that I read the sequences for a while, and honestly I was skeptical it would be worth it, but I was actually quite impressed. There aren’t a ton of totally new ideas in it, but where it excels is honing in on specific, obvious-in-retrospect points about thinking well and thinking poorly, being clear, engaging, and catchy in describing them, and going through a bit of the relevant research. In short, you come out intellectually with much of what you went in with, but with reinforcements and tags put in some especially useful places.

As a caveat, I take issue with a good deal of the substantive material as well. Most notably, I don’t think he always describes those he disagrees with fairly, for instance David Chalmers, and I think “Purchase Fuzzies and Utilons Separately” injected a basically wrong and harmful meme into the EA community (I plan to write a post on this at some point when I get the chance). That said, if you go into them with some skepticism of the substance, you will come out satisfied. You can also audiobook it here, which is how I read it.

Interesting, I’ll have to think about this one a bit, but I tend to think that something like Shiffrin’s gold bricks argument is the stronger antinatalist argument anyway.
