All of Isaac King's Comments + Replies

Yeah, I don't do it on any non-LW/EAF post.

Yeah, most of the p(doom) discussions I see taking place seem to focus on the nearer term of 10 years or less. I believe there are quite a few people (e.g. Gary Marcus, maybe?) who operate under a framework like "current LLMs will not get to AGI, but actual AGI will probably be hard to align", so they may give a high p(doom before 2100) and a low p(doom before 2030).

Oh, I agree. Arguments of the form "bad things are theoretically possible, therefore we should worry" are bad and shouldn't be used. But "bad things are likely" is fine, and seems more likely to reach an average person than "bad things are 50% likely".

Isn't that what the strong upvote is for?

I can tell you why I downvoted it.

Cryptocurrency doesn't actually work

False, it works just fine. It's a token that can't be duplicated and that people can send to each other without any centralized authority.

and only is there for scams and fraud.

There are indeed a lot of those, but scams and fraud were very clearly not the intention of its creators. Realistically they were cryptography nerds who wanted to make something cool, or libertarians with overly-idealistic visions of the future.

Not surprising that FTX collapsed.

Clear hindsight bias. This person should have made some money betting against FTX before it collapsed and then I'd take them more seriously. …

16
Sabs
1y

"This person should have made some money betting against FTX before it collapsed and then I'd take them more seriously."

this is naive EMH fundamentalism

not everything can be shorted, not everything can be shorted easily, not everything should be shorted, markets can be manipulated. Especially the crypto market. It can both be the case that people 100% think X is a fraud, that X collapses, and that shorting X would have been a losing trade over most timeframes. "Never short" is an oversimplification but honestly not a bad one.


 

Very reasonable! I understand you feel like you have to walk a fine line in order to not trigger social disapproval of your words; I think that's bad, and to be clear, I did not mean to make it seem like I disapproved of your comment. I wish EA could be a place where everyone felt comfortable speaking naturally without having to add a bunch of disclaimers.

7
Nick Kautz
4mo
People speaking up at risk to themselves do come with increased credibility and/or deserve attention, especially when it's in opposition to a dogpiley bandwagon narrative that many people may feel obligated to be on the safe side of without actually knowing or caring much about the matter... or because they might feel slightly less insignificant watching the downfall of someone who had accomplished (and given) more in a couple of years than they will in a lifetime.

I just wanted to mention that this comment tripped my "bravery debate" detector. I still upvoted it because honestly the bravery debate framing seems correct here, and I said something similar in my own comments earlier. But then again, everyone who engages in bravery debates thinks their framing is accurate. So let's be careful not to give posts additional weight just because they're speaking against majority EA opinion.

No, I'm just used to, as a woman, buttering most comments up (irl and online) in unnatural ways to not be seen as a bitch or low-intelligence or a clueless outsider. Right now I'm tired, so maybe I over-corrected here, but living life in that way does cause anxiety, so that's also a genuine anxious tone you're catching. I read the other comments and they are getting upvotes when they clarify that they don't really agree with the post or like it. I think I agree with and like the post more than the other commenters and have been considering writing similar…

A summary of sorts is being compiled here:

And on a personal note, I aspire to create a lot of value for the world, and direct it towards doing lots of good. Call me overconfident, but I expect to be a billionaire someday. The way EA treats SBF here sets a precedent: if the EA community is happy to accept money when the going is good, but then is ready to cut ties once the money dries up… you can guess how excited I would be to contribute in the first place.

 

This is a weird paragraph. If your goal were doing the most good, why would it matter how you expect EA to treat you in the case of failure? …

If your goal were doing the most good, why would it matter how you expect EA to treat you in the case of failure?

Because he's a human being and human beings need social support to thrive. I think it's false to equate this perfectly fine human need with a lower motive like status-seeking. If we want people to try hard to do good we as a community should still be there for them when they fall.

7
Austin
1y
Yeah, idk, it's actually less of a personal note than a comment on decision theory among future and current billionaires. I guess the "personal" side is where I can confidently say "this set of actions feels very distasteful to me", because I get to make claims about my own sense of taste; and I'm trying to extrapolate that to other people who might become meaningful in the future.

Or maybe: this is a specific rant to the "EA community", separate from "EA principles". I hold my association with the "EA community" quite loosely; I only actually met people in this space like this year as a result of Manifold, whereas I've been donating/reading EA for 6ish years. The EA principles broadly make sense to me either way; and I guess I'm trying to figure out whether the EA community is composed of people I'm happy to associate with.
12
[anonymous]
1y

I don't think it's either/or. I think it's consistent for Austin's philanthropy to be primarily motivated by altruism and for him to also feel scared of the prospect of his community turning on him when he makes a mistake, perhaps to the point of putting him off the whole idea completely. And I'd expect most EAs to have a similar mix of motivations.

Yeah, reading further, I definitely don't agree with a lot of these claims. But the fact that I feel like I have to post this clarification in order to avoid getting downvoted myself is something I think needs to be talked about. The original post is now down to -15, and I haven't even finished reading it.

-8
Isaac King
1y

Thank you for posting this. I haven't read through the whole thing yet, and I don't necessarily agree with it, but I think it's important that people feel comfortable expressing their opinions here. The fact that this has gotten -8 votes within minutes of posting is something I find concerning, as I doubt those people have even had time to read and process what you said before voting, and I suspect they're voting based on anger and groupthink. I hope the community will be able to have a productive conversation in these comments.


3
Isaac King
1y

I'll just note that I have a prediction market on this here, which is currently at a 7% chance of some prominent event causing mainstream AI capabilities researchers to start taking the risk more seriously by 2028.

But anecdotally, many EAs still feel uncomfortable quantifying their intuitions and continue to prefer using words like “likely” and “plausible” which could be interpreted in many ways.

This issue is likely to get worse as the EA movement attempts to grow quickly, with many new members joining who are coming in with various backgrounds and perspectives on the value of subjective credences

 

Don't take this as a serious criticism; I just found it funny.

3
elifland
2y
Yeah, I realized this when proofreading and left it, as I thought it drove home my point well :p

Hugh Thompson Jr. ended the Mỹ Lai massacre by instructing his helicopter crew to fire on their own military's soldiers if they continued to kill innocent civilians, then informed command of what was going on and got them to order the company committing the massacre to stop.

 

All sorts of people helped Jews escape the Holocaust at their own risk. Oskar Schindler, for example, was originally a member of the Nazi party, then saw what was going on and spent his entire fortune on bribes to keep his Jewish employees from being sent to concentration camps…

What's the significance of the two different columns under the heading "Billion tonnes of carbon" in the first table?  What does it mean for the number to be in one or the other?

2
Elias Au-Yeung
2y
I think the numbers in the right column are supposed to be the 'totals' of the biomass of the domains/kingdoms/viruses as entire groups. The original table is on page 89 of the supplementary materials document of the Bar-On, Phillips, and Milo paper.

I have the opposite issue with my Macbook: The screen brightness settings range only from "bright" to "extremely bright". When I'm using it in a dark room I'd like to be able to dim the screen down to a reasonable level, but that's simply not possible.

3
Charles He
2y
Maybe privacy filters and related accessories can make screens darker?
2
Aaron Bergman
2y
Wish I could buy some nits from you lol

Carrick Flynn's congressional campaign just failed.

https://forum.effectivealtruism.org/posts/Qi9nnrmjwNbBqWbNT/the-best-usd5-800-i-ve-ever-donated-to-pandemic-prevention

This appears to be a list of all science fiction technology, even if it doesn't exist in real life. For example I see "antigravity" on this list.

Just pick a human to upload and let them recursively improve themselves into an SAI. If they're smart enough to start out with, they might be able to keep their goals intact throughout the process.

 

(This isn't a strategy I'd choose given any decent alternative, but it's better than nothing. Likely to be irrelevant though, since it looks like we're going to get GAI before we're even close to being able to upload a human.)

Any atom that isn't being used in service of the AI's goal could instead be used in service of the AI's goal. Which particular atoms are easiest to access isn't relevant; it will just use all of them.

6
Jonas V
2y
My point is that the immediate cause of death for humans will most likely not be that the AI wants to use human atoms in service of its goals, but that the AI wants to use the atoms that make up survival-relevant infrastructure to build something, and humans die as a result of that (and their atoms may later be used for something else). Perhaps a practically irrelevant nitpick, but I think this mistake can make AI risk worries less credible among some people (including myself).

For comparison, this analysis finds a 0.4% yearly risk, which is in line with the EA survey and other estimates I've seen, so I'm strongly inclined to think that the 0.1%-1% order of magnitude is the correct place to be.
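
(For anyone sanity-checking how a yearly figure compares to the longer-horizon numbers people usually quote: a minimal back-of-the-envelope sketch, assuming a constant annual risk and independence across years, which is my simplification rather than anything claimed in the linked analysis.)

```typescript
// Compound a constant annual risk into a cumulative risk over a longer horizon,
// assuming each year is independent (an illustrative simplification).
function cumulativeRisk(annualRisk: number, years: number): number {
  return 1 - Math.pow(1 - annualRisk, years);
}

console.log(cumulativeRisk(0.004, 100).toFixed(3)); // ~0.330: a 0.4%/year risk compounds to roughly 33% per century
console.log(cumulativeRisk(0.001, 100).toFixed(3)); // ~0.095: the 0.1%/year end of the range
console.log(cumulativeRisk(0.01, 100).toFixed(3));  // ~0.634: the 1%/year end of the range
```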

A visual depiction of what it could potentially look like from the ground if the Mosul Dam were to collapse.

This link appears to be broken; it just links back to this page.

1
DC
2y
Link removed! (Maybe I'll find the video and add it in, but not that important.)

we know that the chance of an Earth-impact for asteroids 1-10km in diameter is about 1 in 6,000, and about 1 in 1.5 million for asteroids larger than 10km across

I don't know how I'm supposed to interpret this statistic without a time frame. Is this supposed to be per century?

2
finm
2y
Thanks for the pointer, fixed now. I meant for an average century.
1
ggilgallon
2y
Thank you! 
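
(Side note on the "average century" framing above: a minimal sketch of converting those per-century figures into approximate annual rates, assuming the risk is spread evenly and independently across years; the simplification is mine, not the post's.)

```typescript
// Convert a per-century impact probability into an approximate annual probability,
// assuming a uniform, independent risk in each year (an illustrative simplification).
function annualFromCentury(perCentury: number): number {
  return 1 - Math.pow(1 - perCentury, 1 / 100);
}

console.log(annualFromCentury(1 / 6000));  // ~1.7e-6, roughly 1 in 600,000 per year (1-10 km asteroids)
console.log(annualFromCentury(1 / 1.5e6)); // ~6.7e-9, roughly 1 in 150 million per year (>10 km asteroids)
```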

This is great! One minor flaw I noticed is that clicking the "^" to take me back to the footnote reference puts that reference at the top of the page, which means it's hidden behind the header. I have to scroll up a few lines before I can continue where I left off.

2
Jonathan Mustin
2y
Thanks Isaac! Right, I should've listed this under known shortcomings: I worked on a fix for this not long before releasing the feature, but the canonical solutions I found for this problem either a) weren't usable in this case or b) interfered with text selection in the lines preceding the footnote reference. I'll take another stab at it this coming week.
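
(For what it's worth, one approach that usually sidesteps the text-selection problem is the standard CSS scroll-margin-top property on the scroll target itself, rather than padding or negative-margin hacks. A minimal sketch follows; the .site-header and .footnote-ref selectors and the extra offset are hypothetical placeholders, not the Forum's actual markup.)

```typescript
// Offset anchor scrolling so footnote targets aren't hidden behind a fixed header.
// scroll-margin-top adds no padding or negative margins, so it doesn't interfere
// with text selection. Class names below are assumed for illustration only.
document.addEventListener("DOMContentLoaded", () => {
  const header = document.querySelector<HTMLElement>(".site-header");
  const offset = (header?.offsetHeight ?? 0) + 8; // a little breathing room below the header

  document.querySelectorAll<HTMLElement>(".footnote-ref").forEach((el) => {
    el.style.scrollMarginTop = `${offset}px`;
  });
});
```

(If the header height is fixed and known in advance, the same effect needs no script at all: a single scroll-margin-top rule in the stylesheet does it.)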

Out of curiosity, were the lumiqs inspired by Dust in His Dark Materials?

1
Ben Stewart
2y
No, I haven't read His Dark Materials but I'm planning on it! But maybe I had some subliminal awareness of it?

They did not enjoying doing so

Typo here.

1
Tobias Dänzer
2y
Also: -> would not miss
1
atb
2y
Thanks. Fixed (and I took the chance to also fix a few other minor bits and pieces). 

I'm pretty sure Mark Zuckerberg still thinks Facebook is a boon to humanity, based on his speculation on the value of "connecting the planet".

This seems a bit naive to me. Most big companies come up with some generic nice-sounding reason why they're helping people. That doesn't mean the people in charge honestly believe that; it could easily just be marketing.

5
Ozzie Gooen
2y
My read is that many bullshitters fairly deeply believe their BS. They often get to be pretty good at absorbing whatever position is maximally advantageous to them. Things change if they have to put real money down on it (asking to bet, even a small amount, could help a lot), but these sorts of people are good at putting themselves into positions where they don't need to make those bets. There's a lot of work on motivated reasoning out there. I liked Why Everyone (else) Is a Hypocrite.

I try to keep my weirdness to a level that's greater than 0 (in order to push back against stupid norms) but still low enough that I don't incur significant costs.

Better tools for simple comparisons of different datasets and generating custom charts. For example, there have been a number of times when I wanted per-capita data but could only find charts for total, or vice versa. (This should be a low-priority request since it's primarily a convenience issue.)

 If everyone who wants to make sure GAI is safe abstains from working on it, that guarantees that one of the following will happen:

  • GAI is invented by people who were not thinking about its safety.
  • GAI is never invented at all.

In order for the second possibility to be true, there must be something fundamental to GAI that safety researchers could discover but the thousands of other researchers with billions of dollars in funding will never discover on their own.

4
calebp
3y
I'd like to add that I think there are ways in which safety work gets done without people working on 'AI safety'. This isn't in conflict with what you said, but it does mean that people who want to work on safety could choose not to, and there would still be people doing the jobs of AI safety researchers. It seems plausible to me that a person could end up working on AI and have economic incentives push them to work on a topic related to safety (e.g. Google wants to build TAI -> they want to understand what is going on in their deep neural nets better -> they get some AI researcher to work on interpretability -> [AI becomes a bit more interpretable and possibly more safe]). I guess that in this case the people may not be thinking about safety, but if they are doing the jobs of the people who would be, then I don't think it really matters. I do think that people should work on AI safety on net, but this seems like a reasonable counterargument.
2
jkmh
3y
This answer clarified in my mind what I was poorly trying to grasp at with my analogy. Thank you. I think the answer to my original question is a certain "no" at this point.
1
Harrison Durland
3y
Yeah, I haven't thought about this question previously and am not very familiar with AI safety research/debates (even though I occasionally skim stuff), but one objection that came to my mind when reading the original post/question was: "If you aren't working on it, does that actually mean there will be one whole less person working on it?"

Of course, I suppose it's possible that AI safety is somewhat weird/niche enough (in comparison to e.g. nursing or teaching) that the person-replacement ratio is moderate or low and/or the relative marginal returns of an additional worker are still fairly high; e.g., your individual choice to get a job in AI safety may have the expected average effect of increasing the total number of people working on the project by, say, 0.75. I don't have the field knowledge to answer that question, and it's only one of many factors to consider, but if it is the case that the replaceability ratio is relatively high (e.g., your net average effect is <0.25), then that alone greatly reduces the force of "I am increasing the number of people working on AI, which increases the likelihood of bad AI occurring."

That being said, I'm confident there are much better counterarguments that draw on more knowledge of how working on AI safety can reduce the risk you are talking about without also contributing to this blob concept of "more people working on AI", which you worry could increase the likelihood of AGI, which increases the likelihood of bad AGI.