All of Isaac King's Comments + Replies

What’s alive right now?

What's the significance of the two different columns under the heading "Billion tonnes of carbon" in the first table? What does it mean for a number to appear in one column rather than the other?

1Elias Au-Yeung16h
I think the numbers in the right column are supposed to be the 'totals' of the biomass of the domains/kingdoms/viruses as entire groups. The original table is on page 89 of the supplementary materials document of the Bar-On, Phillips, and Milo paper.
Stuff I buy and use: a listicle to boost your consumer surplus and productivity

I have the opposite issue with my Macbook: The screen brightness settings range only from "bright" to "extremely bright". When I'm using it in a dark room I'd like to be able to dim the screen down to a reasonable level, but that's simply not possible.

3Charles He2mo
Maybe privacy filters and related accessories can make screens darker?
2Aaron Bergman2mo
Wish I could buy some nits from you lol
What are some high-EV but failed EA projects?

Carrick Flynn's congressional campaign just failed.

https://forum.effectivealtruism.org/posts/Qi9nnrmjwNbBqWbNT/the-best-usd5-800-i-ve-ever-donated-to-pandemic-prevention

What examples are there of (science) fiction predicting something strange/bad, which then happened?

This appears to be a list of all science-fiction technologies, including ones that don't exist in real life. For example, I see "antigravity" on this list.

Twitter-length responses to 24 AI alignment arguments

Just pick a human to upload and let them recursively improve themselves into an SAI. If they're smart enough to start out with, they might be able to keep their goals intact throughout the process.

(This isn't a strategy I'd choose given any decent alternative, but it's better than nothing. Likely to be irrelevant though, since it looks like we're going to get GAI before we're even close to being able to upload a human.)

Twitter-length responses to 24 AI alignment arguments

Any atom that isn't being used in service of the AI's goal could instead be used in service of the AI's goal. Which particular atoms are easiest to access isn't relevant; it will just use all of them.

6Jonas Vollmer5mo
My point is that the immediate cause of death for humans will most likely not be that the AI wants to use human atoms in service of its goals, but that the AI wants to use the atoms that make up survival-relevant infrastructure to build something, and humans die as a result of that (and their atoms may later be used for something else). Perhaps a practically irrelevant nitpick, but I think this mistake can make AI risk worries less credible among some people (including myself).
Nuclear Preparedness Guide

For comparison, this analysis finds a 0.4% yearly risk, which is in line with the EA survey and other estimates I've seen, so I'm strongly inclined to think the correct figure is somewhere in the 0.1%-1% range.
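As a rough sanity check on that figure (my own back-of-the-envelope arithmetic, not from the linked analysis, with the 10/50/100-year horizons chosen just for illustration), a constant 0.4% annual risk compounds to roughly a one-in-three chance over a century if each year is treated as independent:

```python
# Back-of-the-envelope sketch: cumulative risk implied by a constant
# annual risk, assuming independence between years.
annual_risk = 0.004  # the 0.4% yearly estimate cited above

for years in (10, 50, 100):
    cumulative = 1 - (1 - annual_risk) ** years
    print(f"{years:>3} years: {cumulative:.1%}")

# Roughly 3.9% over 10 years, 18.2% over 50, and 33.0% over 100.
```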

Mosul Dam Could Kill 1 Million Iraqis.

> A visual depiction of what it could potentially look like from the ground if the Mosul Dam were to collapse.

This link appears to be broken; it just links back to this page.

1DonyChristie5mo
Link removed! (Maybe I'll find the video and add it in, but not that important.)
Risks from Asteroids

we know that the chance of an Earth-impact for asteroids 1-10km in diameter is about 1 in 6,000, and about 1 in 1.5 million for asteroids larger than 10km across

I don't know how I'm supposed to interpret this statistic without a time frame. Is this supposed to be per century?

2finm6mo
Thanks for the pointer, fixed now. I meant for an average century.
1ggilgallon7mo
Thank you!
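For intuition (my own conversion, not part of the thread above, and assuming a uniform rate across the century), the per-average-century figures quoted in the comment work out to roughly 1-in-600,000 and 1-in-150,000,000 per year:

```python
# Rough per-year conversion of the per-century impact probabilities
# quoted above (a uniform rate is assumed, so the figure is simply
# divided by 100).
per_century = {"1-10 km asteroids": 1 / 6_000, ">10 km asteroids": 1 / 1_500_000}

for size, p_century in per_century.items():
    p_year = p_century / 100
    print(f"{size}: about 1 in {round(1 / p_year):,} per year")

# Prints roughly "1 in 600,000" and "1 in 150,000,000" respectively.
```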
[Feature Announcement] Rich Text Editor Footnotes

This is great! One minor flaw I noticed is that clicking the "^" to take me back to the footnote reference puts that reference at the top of the page, which means it's hidden behind the header. I have to scroll up a few lines before I can continue where I left off.

2Jonathan Mustin7mo
Thanks Isaac! Right, I should've listed this under known shortcomings: I worked on a fix for this not long before releasing the feature, but the canonical solutions I found for this problem either a) weren't usable in this case or b) interfered with text selection in the lines preceding the footnote reference. I'll take another stab at it this coming week.
[Creative Writing Contest] Noumenon

Out of curiosity, were the lumiqs inspired by Dust in His Dark Materials?

1Ben Stewart7mo
No, I haven't read His Dark Materials but I'm planning on it! But maybe I had some subliminal awareness of it?
The Unweaving of a Beautiful Thing

They did not enjoying doing so

Typo here.

1Tobias Dänzer7mo
Also: -> would not miss
1atb7mo
Thanks. Fixed (and I took the chance to also fix a few other minor bits and pieces).
Flimsy Pet Theories, Enormous Initiatives

I'm pretty sure Mark Zuckerberg still thinks Facebook is a boon to humanity, based on his speculation on the value of "connecting the planet".

This seems a bit naive to me. Most big companies come up with some generic nice-sounding reason why they're helping people. That doesn't mean the people in charge honestly believe that; it could easily just be marketing.

5Ozzie Gooen8mo
My read is that many bullshitters fairly deeply believe their BS. They often get to be pretty good at absorbing whatever position is maximally advantageous to them. Things change if they have to put real money down on it (asking to bet, even a small amount, could help a lot), but these sorts of people are good at putting themselves into positions where they don't need to make those bets. There's a lot of work on motivated reasoning out there. I liked Why Everyone (else) is a Hypocrite.
How do EAs deal with having a "weird" appearance?

I try to keep my weirdness to a level that's greater than 0 (in order to push back against stupid norms) but still low enough that I don't incur significant costs.

How can we make Our World in Data more useful to the EA community?

Better tools for making simple comparisons between datasets and for generating custom charts. For example, there have been a number of times when I wanted per-capita data but could only find charts for totals, or vice versa. (This should be a low-priority request since it's primarily a convenience issue.)

Is working on AI safety as dangerous as ignoring it?

 If everyone who wants to make sure GAI is safe abstains from working on it, that guarantees that one of the following will happen:

  • GAI is invented by people who were not thinking about its safety.
  • GAI is never invented at all.

In order for the second possibility to be true, there must be something fundamental to GAI that safety researchers could discover but the thousands of other researchers with billions of dollars in funding will never discover on their own.

3calebp1y
I'd like to add that I think there are ways in which safety work gets done without people working on 'AI safety'. This isn't in conflict with what you said, but it does mean that people who want to work on safety could choose not to work on it, and there would still be people doing the jobs of AI safety researchers. It seems plausible to me that a person could end up working on AI and economic incentives push them to work on a topic related to safety (e.g. Google wants to build TAI -> they want to understand what is going on in their deep neural nets better -> they get some AI researcher to work on interpretability -> [AI becomes a bit more interpretable and possibly safer]). I guess that in this case the people may not be thinking about safety, but if they are doing the jobs of the people who would be, then I don't think it really matters. I do think that people should work on AI safety on net, but this seems like a reasonable counterargument.
2jkmh1y
This answer clarified in my mind what I was poorly trying to grasp at with my analogy. Thank you. I think the answer to my original question is a certain "no" at this point.
1Harrison Durland1y
Yeah, I haven't thought about this question previously and am not very familiar with AI safety research/debates (even though I occasionally skim stuff), but one objection that came to mind when reading the original post/question was: "If you aren't working on it, does that actually mean there will be one whole less person working on it?" Of course, I suppose it's possible that AI safety is weird/niche enough (in comparison to e.g. nursing or teaching) that the person-replacement ratio is moderate or low and/or the relative marginal returns of an additional worker are still fairly high; e.g., your individual choice to get a job in AI safety may have the expected average effect of increasing the total number of people working on the project by, say, 0.75. I don't have the field knowledge to answer that question, and it's only one of many factors to consider, but if the replaceability ratio is relatively high (e.g., your net average effect is <0.25) then that immediately takes a big chunk out of the potential for "I am increasing the number of people working on AI, which increases the likelihood of bad AI occurring." That being said, I'm confident there are much better counterarguments that draw on more knowledge of how working on AI safety can reduce the risk you are talking about without also contributing to the blob concept of "more people working on AI", which you worry could increase the likelihood of AGI, which increases the likelihood of bad AGI.
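To make the replaceability point in the comment above concrete (a toy illustration; the harm unit is made up, and only the 0.75 and 0.25 figures come from the comment): if taking an AI job adds only a fraction of one net worker to the field, any harm attributed to "one more person working on AI" scales down by that same fraction.

```python
# Toy illustration of the replaceability argument, using a hypothetical
# harm unit purely for the sake of the arithmetic.
harm_per_extra_worker = 1.0  # normalise "one whole extra AI researcher" to 1 unit

for net_added_workers in (1.0, 0.75, 0.25):
    # If you would largely be replaced anyway, your choice adds fewer than
    # one net worker, and the attributable harm shrinks proportionally.
    attributable_harm = harm_per_extra_worker * net_added_workers
    print(f"net workers added: {net_added_workers:.2f} -> "
          f"attributable harm: {attributable_harm:.2f}")
```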