All of jai's Comments + Replies

I was pretty happy to see the animated-birds-explaining-things crossover with the animated-dogs-explaining-things, especially as I'm now working with the animated explainer dogs.

We could combine them as "Eradication Day" on May 8, like the US does for "President's Day" in February.

Replying a year later - I think you have a point about the soldier mindset. The reason I wrote it this way originally was because I was writing it for myself first. I felt afraid to involve myself in things for fear I would feel responsible ever after, I was often afraid of doing a little because I could not commit myself to doing a lot, and I felt an aversion to a lot of positive-sum interactions. I've gotten better at those things, but it's still hard.

I think a lot has been written to this effect since by other people, but if I were to write a follow-up ... (read more)

jai · 1y

Apologies for making the coordination problem worse. I actually picked December 9th before I knew about the two dates, inspired by this comment from B_For_Bandana on LessWrong in 2013 - in particular the last paragraph:

Because Smallpox Eradication Day marks one of the most heroic events in the history of the human species, it is not surprising that it has become a major global holiday in the past few decades, instead of inexplicably being an obscure piece of trivia I had to look up on Wikipedia. I'm just worried that as time goes on it's going to get too c

... (read more)
jai · 1y

Thank you for the praise. I just want more people to know this story. 

In addition to being just a really impressive person, I can't get over how cinematic Dr. Sapara's life is.

This is profoundly silly, but it's a thought I can't get out of my head: "The doctor, in the course of his travels, investigates a mysterious force that's killing people, discovers a cult whose priests are using the living weapon to kill and keep the people in fear. He confronts them with the truth and defeats them without firing a shot" sounds like the synopsis of an episode of... (read more)

ClaireZabel · 1y
Seriously. Someone should make a movie!

Today, May 7, 2023, I am planning to make some updates to this piece for historical accuracy. As part of this update, I'm including some citations and commentary in this comment.

I feel torn writing about some of these, because it's attempting to summarize and quantify an incomprehensible quantity of suffering and death. The numbers are important, to recognize what was lost, to honor the fallen as best we can, and to emphasize the importance of killing Smallpox. But quantifying that loss, for me at least, requires temporarily embracing a detached attitude, e... (read more)

aderonke · 7mo
Thank you.
jai · 1y

The Rational Animations team has animated this, with narration by Robert Miles. It's great, as is the rest of their growing body of work.
 

Jelle Donders · 8mo
And now even Kurzgesagt, albeit indirectly!

The current form appears to only allow uploading image files; I can upload a PNG, but not an SVG. This is probably just as well in my case, as the SVG only makes it more painfully obvious that I have no idea how to use Inkscape, but it seems like unintended behavior you might want to change.

Adrian Cipriani · 1y
Changed it, thanks for noting. You can add it now :)

I'd like to apologize for "What Almost Was" being a dead link now - I've been trying to retire that blog for a while now while preserving the content elsewhere. "What Almost Was" can now be found here.

Here's a quick attempt at a subset of conjunctive assumptions in Nate's framing:

- The functional ceiling for AGI is sufficiently far above the current level of human civilization that it could eliminate it
- There is a sharp cutoff between non-AGI AI and AGI, such that early kind-of-AGI doesn't send up enough warning signals to cause a drastic change in trajectory.
- Early AGIs don't result in a multi-polar world where superhuman-but-not-godlike agents can't actually quickly and recursively self-improve, in part because none of them wants any of the others to take over - and without being able to grow stronger, humanity remains a viable player.

Greg_Colbourn · 1y
Thanks! I don't think anyone is seriously arguing this? (Links please if they are.) We are getting the warning signals now. People (including me) are raising the alarm. Hoping for a drastic change of trajectory, but people actually have to put the work in for that to happen!

But your point here isn't really related to P(doom|AGI) - i.e. the conditional is on getting AGI. Of course there won't be doom if we don't get AGI! That's what we should be aiming for right now (not getting AGI).

Nate may focus on singleton scenarios, but that is not a pre-requisite for doom. To me Robin Hanson's (multipolar) Age of Em is also a kind of doom (most humans don't exist; only a few highly productive ones are copied many times and only activated to work; a fully Malthusian economy). I don't see how "humanity remains a viable player" in a world full of superhuman agents.
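To make the conditional explicit, the unconditional risk decomposes by the law of total probability (a standard identity, not a claim made by either commenter):

$$P(\text{doom}) = P(\text{doom}\mid\text{AGI})\,P(\text{AGI}) + P(\text{doom}\mid\neg\text{AGI})\,P(\neg\text{AGI})$$

so arguments for not building AGI bear on P(AGI), while P(doom|AGI) itself is evaluated assuming AGI arrives.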

It doesn't seem to me that titotal is assuming MI is solved; having direct access to the brain doesn't give you full insight into someone's thoughts either, because neuroscience is basically a pile of unsolved problems, with a growing but still very incomplete picture of both low-level and high-level details. We don't even have a consensus on how memory is physically implemented.

Nonetheless, if you had a bunch of invasive probes feeding you gigabytes/sec of live data from the brain of the genius general of the opposing army, it would be extremely likely to be usef... (read more)

Greg_Colbourn · 1y
Ok, so the "brain" is fully accessible, but that is near useless with the level of interpretability we have. We know far more about human neuroscience by comparison. It's hard to grasp just how large these AI models are: they have on the order of a trillion dimensions. Try plotting that out in Wolfram Alpha or Matlab. It should be scary in itself that we don't even know what these models can do ahead of time. It is an active area of scientific investigation to discover their true capabilities, after the fact of their creation.

Interesting perspective, although I'm not sure how much we actually disagree. "Complicated and open" reads to me as "difficult."

 

Is there a rephrasing of the initial statement you would endorse that makes this clearer? I'd suggest "If you apply a security mindset (Murphy’s Law) to the problem of AI alignment, it should quickly become apparent that we do not currently possess the means to ensure that any given AI is safe."

Greg_Colbourn · 1y
Yes, I would endorse that phrasing (maybe s/"safe"/"100% safe"). Overall I think I need to rewrite and extend the post to spell things out in more detail. Also change the title to something less provocative[1] because I get the feeling that people are knee-jerk downvoting without even reading it, judging by some of the comments (i.e. I'm having to repeat things I refer to in the OP).

[1] Perhaps "Why the most likely outcome of AGI is doom"?
jai · 1y

But that's not a function of whether "a lot of the fixed costs (like transaction costs) have been paid". That is very specifically referring to break-even relative to costs already expended, not to counterfactual spending. It's okay if you've changed your mind, but I don't think that what you said originally is consistent with this comment.

Habryka · 1y
I don't understand this response? You asked "why not sell it now?", and I answered that exact question. I also covered a slightly broader case of "committing to sell", but that just totally covers the "sell now" case.

Nit: I was very explicitly asking why not sell, not suggesting a commitment to sell; I don't appreciate the rhetorical pivot to argue against a point I was not making.

I don't get this nit. Wasn't Oliver's comment straightforwardly answering your question, "Why not sell it now?" by giving an argument against selling it now?

How is that a pivot? He added the word "committing", but I don't see how that changes the substance. I think he was just emphasizing what would be lost if we sold now without waiting for more info. Which seems like a perfectly valid answer to the question you asked!

Habryka · 1y
Sorry, yes, by break-even I meant "a better use than the last dollar that Open Phil expects to spend".
jai · 1y

Given the massive decline in expected EA liquidity since the purchase, and the fact that the purchase was largely justified on the grounds that as a durable asset it could be converted back into liquid funds with minimal loss, why not sell it now?

Seems worth it to check whether it breaks even given a lot of the fixed costs (like transaction costs) have been paid. If it breaks even, we can keep it. If it doesn't, we can sell it, but committing to selling it right now seems like it would waste a bunch of valuable information that can be (relatively) cheaply obtained right now.

You can click on your username in the upper-right corner. Under "My Drafts", there will be an option for "New Post".

You can also hover over your username, and the drop-down menu that appears should include an option to create a new post.

Hello and welcome Sammy! Excited about Future Perfect and looking forward to what Vox does with it. It looks like your comment may have gotten cut off, to the detriment of anyone who wishes to stay informed.

SammyF · 6y
Thanks! Kelsey flagged this and I was able to fix it. I really appreciate you letting me know Jal :)
jai · 6y

I've been asked to post "500 Million But Not a Single One More" on the EA Forum so that it's easier to include in a sequence of EA-related content. I need 5 karma to post - and the most straightforward way to get that seems like straight-up begging.

Just so this comment isn't entirely devoid of content: This Week In Polio is a great way to track humanity's (hopefully) final battles against Polio and one of my go-to pages when I'm looking to feel good about my species.

Peter Wildeford · 6y
You made it!

You're definitely not alone - I've felt a lot of these things. Thank you for writing this. I think a lot of people are going to get a lot of good out of it.

[anonymous] · 6y
Thank you also for what you write. We've included 500 Million at most of our London Secular Solstices and I loved Only Human...I'm tearing up just rereading it now ("The rebels who defy the world they were made for, who never stop dreaming and working for a better tomorrow." <3 <3 <3)