WilliamKiely

Comments

An animated introduction to longtermism (feat. Robert Miles)

Gotcha. I don't actually have a strong opinion on the net negative question. I worded my comment poorly.

An animated introduction to longtermism (feat. Robert Miles)

Some of your criticism is actually about Bostrom's paper

Assuming this is what your comment is in reference to: I looked at Bostrom's paper afterward, and I think his sentence about a 1% reduction in x-risk being like a 10M+ year delay before growth is actually intuitive in his context (he mentions just beforehand that galaxies exist for billions of years), so I think the version of this you put in the script is significantly less intuitive. The video viewer also only has the context of the video up to that point, whereas the paper reader has much more context from the paper. Videos should also be a lot more comprehensible to laypeople than Bostrom's papers.

I think the question of whether the video will be net negative on the margin is complicated. A more relevant question that is easier to answer is: "Is it reasonable to think that a higher-quality video could be made with a reasonable amount of additional effort, and would that video clearly be better on net given the costs and benefits?"

I think the answer is "yes," and if we use that higher-quality video as the counterfactual rather than no video at all, it seems clear that you should aim to produce it, even if your existing video is positive on net relative to no video.

Introducing Rational Animations

In case you're not already familiar, it may be useful for you to talk with and collaborate with the people behind the A Happier World YouTube channel: https://forum.effectivealtruism.org/posts/MtbXzAh5SZxRJiHnH/i-made-a-video-on-engineered-pandemics

An animated introduction to longtermism (feat. Robert Miles)

I left two other comments with some feedback, but want to note:

  • I strong-upvoted this EA Forum post because I really like that you are sharing the script and video with the community to gather feedback.

  • I refrained from liking the video on YouTube, and I don't expect to share it with people unfamiliar with longtermism as a means of introducing them to it, because I don't think the video is high enough quality that it would be a good thing for more people to see it. I'd like to see you gather more feedback on the scripts of future EA-related videos before creating the videos.

An animated introduction to longtermism (feat. Robert Miles)

Thanks for posting here for feedback. In general I think the video introduced too many ideas and didn't explain them enough.

Some point-by-point feedback:

  • It seems inappropriate to include "And remember that one trillion is one thousand times a billion" so much later, when the very first sentence already says "humanity has a potentially vast future ahead in which we might inhabit countless star systems and create trillions upon trillions of worthwhile lives". If viewers don't know what "trillion" means by that later point, then you already lost them at the first sentence.

  • Additionally, since essentially all the viewers you care about already know what "trillion" means, including "remember that one trillion is one thousand times a billion" will likely make some of them think the video was created for a much less educated audience than they are.

  • "I guess you might be skeptical that humanity has the potential to reach this level of expansion, but that’s a topic for another video." Isn't this super relevant to the topic of this video? The video is essentially saying that future civilization can be huge, but if you're skeptical of that we'll address that in another video. Shouldn't you be making the case now? If not, then why not just start the video with "Civilization can be astronomically large in the future. We'll address this claim in a future video, but in this video let's talk about the implications of that claim if true. [Proceed to talk about the question of whether we can tractably affect the size.]

  • "But there is another ethical point of view taken into consideration, which is called the “person affecting view”." I don't think you should have included this, at least not without saying more about it. I don't think someone who isn't already familiar with person affecting views would gain anything from this, but it could very plausibly just be more noise to distract them from the core message of the video.

  • "If you don’t care about giving life to future humans who wouldn’t have existed otherwise, but you only care about present humans, and humans that will come to exist, then preventing existential risk and advancing technological progress have a similar impact." I think I disagree with this and the reasoning you provide in support of it doesn't at all seem to justify it. For example, you write "If we increase this chance, either by reducing existential risk or by hastening technological progress, our impact will be more or less the same" but don't consider the tractability or neglectedness of advancing technological progress or reducing existential risk.

  • For the video animation, when large numbers are written out on screen, include commas after every three digits so viewers can tell at a glance what number is written. Rob narrates "ten to the twenty-three humans" and we see 100000000000000000000000 appear on screen with 0's continuously being added, which gives me the impression that the numbers are just made up (even though I know they're not).

  • "100 billion trillion lives" is a rather precise number. I'd like for you to use more careful language to communicate if things like this are upper or lower bounds (e.g. "at least" 100 billion trillion lives) or the outputs of specific numerical estimates (in which case, show us what numbers lead to that output.

An animated introduction to longtermism (feat. Robert Miles)

Bostrom estimates that a single percentage point of existential risk reduction yields the same expected value as advancing technological development by 10 million years

I didn't recall this from Bostrom's work and had to pause the video after hearing it to try to understand it better. I wish you had either explained the estimate or provided a footnote citation to the exact place where Bostrom explains it.

After thinking about it for a minute, my guess is that Bostrom's estimate comes from two assumptions: (a) civilization reaches its full scale (value realized per year) in a very short amount of time relative to 10 million years, and (b) civilization lasts a billion years. Neither of these assumptions seems like a given to me, so if this is indeed where Bostrom's (rough) estimate came from, you really ought to have explained it in the video. I imagine a lot of viewers will just gloss over it and not accept it.
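To spell out that guess, here's a minimal sketch of the arithmetic under assumptions (a) and (b); the specific numbers are my guesses, not figures from Bostrom's paper:

```python
# Minimal sketch of the guessed reasoning above. The numbers encode
# assumptions (a) and (b); they are my guesses, not figures from Bostrom.

civilization_lifespan_years = 1_000_000_000  # assumption (b): ~1 billion years
delay_years = 10_000_000                     # the 10-million-year delay

# Under assumption (a), civilization ramps up to full value almost instantly
# and then produces roughly constant value per year, so a delay simply removes
# this fraction of the total expected value:
fraction_lost_to_delay = delay_years / civilization_lifespan_years
print(fraction_lost_to_delay)  # 0.01

# A one-percentage-point reduction in existential risk likewise adds ~1% to
# the expected value, so under these assumptions the two are comparable.
```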

What are the 'PlayPumps' of cause prioritisation?

Cause Area A: Blindness in the United States

Cause Area B: Blindness in the developing world

From Peter Singer's TED Talk:

Take, for example, providing a guide dog for a blind person. That's a good thing to do, right? Well, right, it is a good thing to do, but you have to think what else you could do with the resources. It costs about 40,000 dollars to train a guide dog and train the recipient so that the guide dog can be an effective help to a blind person. It costs somewhere between 20 and 50 dollars to cure a blind person in a developing country if they have trachoma. So you do the sums, and you get something like that. You could provide one guide dog for one blind American, or you could cure between 400 and 2,000 people of blindness. I think it's clear what's the better thing to do.
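As a quick sketch of the "do the sums" step, using only the dollar figures from the quote (the exact range obviously depends on the per-cure cost assumed):

```python
# How many trachoma cures the cost of one guide dog could fund,
# under the cost figures quoted above.
guide_dog_cost = 40_000                  # dollars, per the quote
cure_cost_low, cure_cost_high = 20, 50   # dollars per cure, per the quote

print(guide_dog_cost / cure_cost_high)  # 800.0 cures at $50 per cure
print(guide_dog_cost / cure_cost_low)   # 2000.0 cures at $20 per cure
# Singer's quoted lower bound of 400 cures corresponds to a per-cure cost of
# roughly $100, so the exact range depends on the cost estimate used.
```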

What are some key numbers that (almost) every EA should know?

exponential growth numbers (e.g. rule of 72)

My favorite exponential growth numbers:

1.01^20,000 ≈ 10^86

1.03^7,000 ≈ 10^89

1.05^4,000 ≈ 10^84

If the economy were to grow by 1% annually for a mere 20,000 years (a blink of an eye on a geologic timescale), then the economy would grow by a factor of 10^86, which is more than the number of atoms in the observable universe.
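A quick sketch checking those figures (and illustrating the rule of 72 mentioned in the question):

```python
import math

# Order-of-magnitude check of the three figures above: x^n = 10^(n*log10(x)).
for rate, years in [(1.01, 20_000), (1.03, 7_000), (1.05, 4_000)]:
    exponent = years * math.log10(rate)
    print(f"{rate}^{years:,} ~= 10^{exponent:.1f}")
# Prints ~10^86.4, ~10^89.9, ~10^84.8, i.e. 10^86, 10^89, 10^84 as above.

# Rule of 72: doubling time in years is roughly 72 / (annual growth rate in %).
# At 1% growth that's ~72 years; the exact value is ln(2)/ln(1.01) ~= 69.7.
print(math.log(2) / math.log(1.01))  # 69.66...
```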

Of course this won't happen, but when talking with people outside of EA about how soon we might create AGI, reach technological maturity, or create a civilization with more value in it each year than the value of all life that has ever lived on Earth so far, I sometimes find that people's intuition is that the answer to each question is a very long time, e.g. millions of years.

However, when I give these exponential growth numbers in this context, they often act as an intuition pump: whoever I'm talking to immediately sees that "millions of years" is too long, "thousands of years" seems a lot more reasonable than it did moments before, and "a few centuries or less" suddenly seems plausible.

"Existential risk from AI" survey results

The wide spread in responses is surprising to me. Perhaps future surveys like this should ask people both for their inside view and for their all-things-considered view. My suspicion/prediction is that doing so would yield all-things-considered views that are closer together.

"Existential risk from AI" survey results

In retrospect, my forecast that the median response to the first question would be as low as 10% was too ambitious. That would have been surprisingly low for a median.

I think my other forecasts were good. My 18% mean on Q1 was so low only because my median was low. Interestingly, my own answer for Q1 was 20%, which was exactly the median response. I forget why I thought the mean and median answers would be lower than mine.
