Yeah, most of the p(doom) discussions I see seem to focus on the nearer term of 10 years or less. I believe there are quite a few people (e.g. Gary Marcus, maybe?) who operate under a framework like "current LLMs will not get to AGI, but actual AGI will probably be hard to align", so they may give a high p(doom before 2100) and a low p(doom before 2030).
Oh, I agree. Arguments of the form "bad things are theoretically possible, therefore we should worry" are bad and shouldn't be used. But "bad things are likely" is fine, and seems more likely to reach an average person than "bad things are 50% likely".
I can tell you why I downvoted it.
> Cryptocurrency doesn't actually work
False, it works just fine. It's a token that can't be duplicated and people can send to each other without any centralized authority.
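As a toy illustration of the tamper-evidence part of that claim (a deliberately minimal sketch, not how any real cryptocurrency is implemented; real systems also need signatures and consensus):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append(chain: list, payload: str) -> None:
    """Append a new block that commits to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "payload": payload})

def valid(chain: list) -> bool:
    """Check that every block still points at the hash of its predecessor."""
    return all(
        chain[i]["prev"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain: list = []
append(chain, "Alice pays Bob 1 token")
append(chain, "Bob pays Carol 1 token")
print(valid(chain))   # True

chain[0]["payload"] = "Alice pays Bob 100 tokens"  # tamper with history
print(valid(chain))   # False: the tampering is detectable
```

Tampering with any historical entry changes its hash, which breaks every later link in the chain, so rewriting history is detectable without any central authority.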
> and is only there for scams and fraud.
There are indeed a lot of those, but scams and fraud were very clearly not the intention of its creators. Realistically they were cryptography nerds who wanted to make something cool, or libertarians with overly-idealistic visions of the future.
> Not surprising that FTX collapsed.
Clear hindsight bias. This person should have made some money betting against FTX before it collapsed and then I'd take them more seriously.
"This person should have made some money betting against FTX before it collapsed and then I'd take them more seriously."
This is naive EMH fundamentalism.
Not everything can be shorted, not everything can be shorted easily, not everything should be shorted, and markets can be manipulated. Especially the crypto market. It can simultaneously be the case that people are 100% sure X is a fraud, that X collapses, and that shorting X would have been a losing trade over most timeframes. "Never short" is an oversimplification, but honestly not a bad one.
Very reasonable! I understand you feel like you have to walk a fine line in order to not trigger social disapproval of your words; I think that's bad, and to be clear, I did not mean to make it seem like I disapproved of your comment. I wish EA could be a place where everyone felt comfortable speaking naturally without having to add a bunch of disclaimers.
I just wanted to mention that this comment tripped my "bravery debate" detector. I still upvoted it because honestly the bravery debate framing seems correct here, and I said something similar in my own comments earlier. But then again, everyone who engages in bravery debates thinks their framing is accurate. So let's be careful not to give posts additional weight just because they're speaking against majority EA opinion.
No, I'm just used to, as a woman, buttering most comments up (irl and online) in unnatural ways to not be seen as a bitch or low-intelligence or a clueless outsider. Right now I'm tired, so maybe I over-corrected here, but living life in that way does cause anxiety, so that's also a genuine anxious tone you're catching. I read the other comments and they are getting upvotes when they clarify that they don't really agree with the post or like it. I think I agree with and like the post more than the other commenters and have been considering writing similar. ...
And on a personal note, I aspire to create a lot of value for the world, and direct it towards doing lots of good. Call me overconfident, but I expect to be a billionaire someday. The way EA treats SBF here sets a precedent: if the EA community is happy to accept money when the going is good, but then is ready to cut ties once the money dries up… you can guess how excited I would be to contribute in the first place.
This is a weird paragraph. If your goal were doing the most good, why would it matter how you expect EA to treat you in the case of failure?
> If your goal were doing the most good, why would it matter how you expect EA to treat you in the case of failure?
Because he's a human being and human beings need social support to thrive. I think it's false to equate this perfectly fine human need with a lower motive like status-seeking. If we want people to try hard to do good we as a community should still be there for them when they fall.
I don't think it's either/or. I think it's consistent for Austin's philanthropy to be primarily motivated by altruism and for him to also feel scared of the prospect of his community turning on him when he makes a mistake, perhaps to the point of putting him off the whole idea completely. And I'd expect most EAs to have a similar mix of motivations.
Thank you for posting this. I haven't read through the whole thing yet, and I don't necessarily agree with it, but I think it's important that people feel comfortable expressing their opinions here. The fact that within minutes of posting, this has gotten -8 votes is something I find concerning, as I doubt those people have even had time to read and process what you said before voting, and I suspect they're voting based on anger and groupthink. I hope the community will be able to have a productive conversation in these comments.
Yeah, reading further, I definitely don't agree with a lot of these claims. But the fact that I feel like I have to post this clarification in order to avoid getting downvoted myself is something I think needs to be talked about. The original post is now down to -15, and I haven't even finished reading it.
But anecdotally, many EAs still feel uncomfortable quantifying their intuitions and continue to prefer using words like “likely” and “plausible” which could be interpreted in many ways.
This issue is likely to get worse as the EA movement attempts to grow quickly, with many new members joining who come in with various backgrounds and perspectives on the value of subjective credences.
Don't take this as a serious criticism; I just found it funny.
Hugh Thompson Jr. ended the Mỹ Lai massacre by instructing his helicopter crew to fire on their own military's soldiers if they continued to kill innocent civilians, then informed command of what was going on and got them to order the company committing the massacre to stop.
All sorts of people helped Jews escape the Holocaust at their own risk. Oskar Schindler, for example, was originally a member of the Nazi party, then saw what was going on and spent his entire fortune on bribes to keep his Jewish employees from being sent to concentration camps.
What's the significance of the two different columns under the heading "Billion tonnes of carbon" in the first table? What does it mean for the number to be in one or the other?
I have the opposite issue with my MacBook: the screen brightness settings range only from "bright" to "extremely bright". When I'm using it in a dark room I'd like to be able to dim the screen down to a reasonable level, but that's simply not possible.
Carrick Flynn's congressional campaign just failed.
https://forum.effectivealtruism.org/posts/Qi9nnrmjwNbBqWbNT/the-best-usd5-800-i-ve-ever-donated-to-pandemic-prevention
This appears to be a list of all science fiction technology, including technology that doesn't exist in real life. For example, I see "antigravity" on this list.
Just pick a human to upload and let them recursively improve themselves into an SAI. If they're smart enough to start out with, they might be able to keep their goals intact throughout the process.
(This isn't a strategy I'd choose given any decent alternative, but it's better than nothing. Likely to be irrelevant though, since it looks like we're going to get GAI before we're even close to being able to upload a human.)
Any atom that isn't being used in service of the AI's goal could instead be used in service of the AI's goal. Which particular atoms are easiest to access isn't relevant; it will just use all of them.
> A visual depiction of what it could potentially look like from the ground if the Mosul Dam were to collapse.
This link appears to be broken; it just links back to this page.
> we know that the chance of an Earth-impact for asteroids 1-10km in diameter is about 1 in 6,000, and about 1 in 1.5 million for asteroids larger than 10km across
I don't know how I'm supposed to interpret this statistic without a time frame. Is this supposed to be per century?
This is great! One minor flaw I noticed is that clicking the "^" to take me back to the footnote reference puts that reference at the top of the page, which means it's hidden behind the header. I have to scroll up a few lines before I can continue where I left off.
I'm pretty sure Mark Zuckerberg still thinks Facebook is a boon to humanity, based on his speculation on the value of "connecting the planet".
This seems a bit naive to me. Most big companies come up with some generic nice-sounding reason why they're helping people. That doesn't mean the people in charge honestly believe that; it could easily just be marketing.
I try to keep my weirdness to a level that's greater than 0 (in order to push back against stupid norms) but still low enough that I don't incur significant costs.
Better tools for simple comparisons of different datasets and generating custom charts. For example, there have been a number of times when I wanted per-capita data but could only find charts for total, or vice versa. (This should be a low-priority request since it's primarily a convenience issue.)
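To make the request concrete, here's a minimal sketch of the conversion I keep wanting the site to do for me, assuming a hypothetical pandas DataFrame with made-up column names:

```python
import pandas as pd

# Hypothetical data: totals and populations for a few countries.
df = pd.DataFrame({
    "country": ["A", "B", "C"],
    "total_emissions": [500.0, 120.0, 80.0],   # e.g. million tonnes
    "population": [50_000_000, 10_000_000, 4_000_000],
})

# The conversion itself is a one-liner; the problem is that
# sites often publish a chart for only one of the two views.
df["per_capita"] = df["total_emissions"] / df["population"]
print(df)
```

The arithmetic is trivial, which is exactly why this is just a convenience request.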
If everyone who wants to make sure GAI is safe abstains from working on it, that guarantees that one of the following will happen:

1. GAI is developed by people who don't care about making it safe.
2. GAI is never developed at all.
In order for the second possibility to be true, there must be something fundamental to GAI that safety researchers could discover but the thousands of other researchers with billions of dollars in funding will never discover on their own.
Yeah, I don't do it on any non-LW/EAF post.