
WilliamKiely

2421 karma · Joined Nov 2014 · Austin, TX, USA

Bio

Participation: 3



You can send me a message anonymously here: https://www.admonymous.co/will

Comments (416)

Thanks for sharing about your experience.

I see 4 people said they agreed with the post and 3 disagreed, so I thought I'd share my thoughts on this. (I was the 5th person to give the post Agreement Karma, which I endorse with some nuance added below.)

I've considered going on a long hike before, and, like you, I believed the main consideration against doing so was the opportunity cost to my career and to my pursuit of altruistic impact.

It seemed to me that clearly there was something else I could do that would be better for my career and altruistic impact than, e.g., taking 6 months to hike the Appalachian Trail, so I dismissed the possibility rather than considering it more seriously, as tempting as it was. (Bill Bryson's book A Walk in the Woods tempted me when I read it in 2012.)

I still think that most young people who actually do decide to go on such a long hike could have done something else that would have been better for their career and pursuit of the most good, and I think the same would have been true of my former self had I decided to actually spend 6 months going for such a long walk.

That said, what my life experience thus far (a very lackluster career) makes obvious to me now is that deciding against the 6-month hike on the basis that it was almost certainly suboptimal was a mistake. After all, almost every potential path is suboptimal, whether it's a 6-month hike, Job A, Job B, or almost any other concrete option.

A more reasonable way to think about the question is whether the long hike seems better or worse than the other options one is considering. And on that note I'd opine that there are many unideal jobs that one could work for 6 months that'd be worse than spending those 6 months on a long hike that one is really motivated to do.

And I don't just mean trash jobs one isn't considering. Rather, I think going on a 6-month hike can actually often be better than the job-path one would have taken otherwise.

Reflecting on my own past, it's not clear to me that spending 6 months on a long hike when I was younger would have been worse than what I actually did. I've spent a lot of time in mediocre jobs and also a lot of time not working, without doing any intentional career-break project like a long hike. So going for a long hike would have been quite a reasonable decision had I chosen to do so. It very likely wouldn't have been the optimal path, but it may well have been a good decision, better than the likely counterfactuals.

I'll also add that I didn't like the subtitle of the video: "A case for optimism".

A lot of popular takes on futurism topics seem to me to focus on being optimistic or pessimistic, but whether one is optimistic or pessimistic about something doesn't seem like the sort of thing one should argue for. It seems a little like writing the bottom line first.

Rather, people should attempt to figure out the actual probabilities of different futures and how we can influence the future to make certain futures more or less probable. From there it's just a semantic question whether having a certain credence in a certain kind of future makes one an optimist or a pessimist.

If one sets out to argue for being an optimist or a pessimist, that seems like it would just introduce a bias into one's thinking: once someone identifies as, e.g., an optimist, they'll have trouble updating their beliefs about the probability that the future will be good or bad to various degrees. Paul Graham's advice to Keep Your Identity Small seems very relevant here.

I've been a fan of melodysheep since discovering his Symphony of Science series about 12 years ago.

Some thoughts as I watch:

- Toby Ord's The Precipice and his 16 percent estimate of existential catastrophe (in the next century) are cited directly

- The first part of the script seems heavily inspired by Will MacAskill's What We Owe the Future
- In particular there is a strong focus on civilizational collapse that falls short of extinction or existential catastrophe, just like in WWOTF

- 12:40 "But extinction in the long-term is nothing to fear. No species survives forever. Time will shape us into something new. The noble way to go extinct will be to evolve naturally to a higher species." -- This is kind of ambiguous. I'm not clear what message melodysheep is trying to get across, but it's also vague enough that I don't have a specific critique of it.

- 14:12 "But the best way to secure our long-term survival is to take the leap that no other lifeform has ever taken, to become a multi-planetary species." "Once a self-sustaining civilization is established on another planet, the chances of our extinction will plummet." -- No argument is made for either of these claims in the video, and since I think colonizing another planet is quite an overrated strategy for reducing existential risk, I'm disappointed by that.

- As usual, melodysheep's music and visuals are stunning, and I can't help but feel that the weakest part of the video is the script.

- Melodysheep's top Patreon tier is $100 per video, and includes a one-on-one hangout with him (John Boswell). Given his videos get millions of views and are on important future-oriented topics, this seems like a cost-effective way to get in touch and potentially positively influence the direction of his videos.

- I skimmed his list of $10+ Patreon supporters and didn't see any names I recognized, so I think it may be worthwhile for some EAs/longtermists who can provide useful feedback on his scripts to become supporters or otherwise get in touch in order to do that. I'm not sure how open to feedback he is, but it seems worth trying. Anyone potentially interested?

That is, I wasn’t viscerally worried. I had the concepts. But I didn’t have the “actually” part.

For me, having a concrete picture of the mechanism by which AI could actually kill everyone never felt necessary for viscerally believing that AI could kill everyone.

And I think this is because, ever since I was a kid, long before hearing about AI risk or EA, the long-term future that seemed most intuitive to me was a future without humans (or post-humans).

The idea that humanity would go on to live forever and colonize the galaxy and the universe and live a sci-fi future has always seemed too fantastical to me to assume as the default scenario. Sure it's conceivable--I've never assumed it's extremely unlikely--but I have always assumed that in the median scenario humanity somehow goes extinct before ever getting to make civilizations in hundreds of billions of star systems. What would make us go extinct? I don't know. But to think otherwise would be to think that all of us today are super special (by being among the first 0.000...001% (a significant number of 0s) of humans to ever live). And that has always felt like an extraordinary thing to just assume, so my intuitive, gut, visceral belief has always been that we'll probably go extinct somehow before achieving all that.

So when I learned about AI risk I intellectually thought, "Ah, okay, I can see how something smarter than us that doesn't share our goals could cause our extinction; so maybe AI is the thing that will prevent us from making civilizations on hundreds of billions of stars."

I don't know when I first formulated a credence that AI would cause doom, but I'm pretty sure that I always viscerally felt that AI could cause human extinction ever since first hearing an argument that it could.

(The first time I heard an argument for AI risk was probably in 2015, when I read HPMOR and Superintelligence; I don't recall knowing much at all about EY's views on AI until Jan-Mar 2015, when I read /r/HPMOR and people mentioned AI.) I think reading Superintelligence the same year I read HPMOR (both in 2015) was roughly the first time I thought about AI risk. I just looked it up: from my Goodreads I see that I finished reading HPMOR on March 4th, 2015, 10 days before HPMOR finished coming out. I read it in a span of a couple of weeks and no doubt learned about Superintelligence via a recommendation that stemmed from my reading of HPMOR. So Superintelligence was my first exposure to AI risk arguments. I didn't read a lot of stuff online at that time; e.g. I don't recall reading anything on LW.

Thinking out loud about credences and PDFs for credences (is there a name for these?):

I don't think "highly confident people bare the burden of proof" is a correct way of saying my thought necessarily, but I'm trying to point at this idea that when two people disagree on X (e.g. 0.3% vs 30% credences), there's an asymmetry in which the person who is more confident (i.e. 0.3% in this case) is necessarily highly confident that the person they disagree with is wrong, whereas the the person who is less confident (30% credence person) is not necessarily highly confident that the person they disagree with is wrong. So maybe this is another way of saying that "high confidence requires strong evidence", but I think I'm saying more than that.

I'm observing that the high-confidence person needs an account of why the low-confidence person is wrong, whereas the opposite isn't true.

Some math to help communicate my thoughts: the 0.3%-credence person is necessarily at least 99% confident that a 30% credence is too high. (If they put even 1% weight on 30% being the right credence, that alone would contribute about 1% × 30% = 0.3% to their overall credence, leaving essentially no room for anything else.) Whereas a 30% credence is compatible with thinking there's, say, a 50% chance that a 0.3% credence is the best credence one could have with the information available.

So a person who is 30% confident X is true may or may not think that a person with a 0.3% credence in X is likely reasonable in their belief. They may think that that person is likely correct, or they may think that they are very likely wrong. Both possibilities are coherent.

Whereas the person whose credence in X is 0.3% necessarily believes the person whose credence is 30% is >99% likely wrong.

Maybe another good way to think about this:

If my point estimate is X%, I can restate that by giving a PDF that assigns a weight to every possible estimate/forecast from 0% to 100%.

E.g. "I'm not sure if the odds of winning this poker hand are 45% or 55% or somewhere in between; my point-credence is about 50% but I think the true odds may be a few percentage points different, though I'm quite confident that the odds are not <30% or >70%. (We could draw a PDF)."

Or "If I researched this for an hour I think I'd probably conclude that it's very likely false, or at least <1%, but on the surface it seems plausible that I might instead discover that it's probably true, though it'd be hard to verify for sure, so my point-credence is ~15%, but after an hour of research I'd expect (>80%) my credence to be either less than 3% or >50%.

Is there a name for the uncertainty (PDF) about one's credence?
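To make the idea concrete, here's a minimal sketch in Python (assuming scipy is available; the specific distributions and parameters are illustrative assumptions I'm choosing, not anything implied by the examples above) of one way to represent a "PDF over a credence": a distribution over where a better-informed credence might land, whose mean is the point credence.

```python
# Sketch: a "PDF over a credence" is a distribution describing where I think
# my credence would end up if I investigated further; its mean is my point credence.
# (Distributions and parameters below are illustrative assumptions.)
from scipy import stats

# Poker-hand example: point credence ~50%, quite confident the true odds
# aren't below 30% or above 70%. Beta(20, 20) is one shape with that property.
poker = stats.beta(20, 20)
print("point credence:", poker.mean())                        # 0.5
print("P(odds in [0.3, 0.7]):", poker.cdf(0.7) - poker.cdf(0.3))

# Research example: point credence around 15%, but after an hour of research
# I'd expect to end up either well below 3% or above 50%. A two-component
# mixture captures that bimodal expectation better than a single Beta can.
low, high, w_low = stats.beta(1, 60), stats.beta(30, 20), 0.8
point = w_low * low.mean() + (1 - w_low) * high.mean()
p_extreme = w_low * low.cdf(0.03) + (1 - w_low) * (1 - high.cdf(0.5))
print("point credence:", round(point, 3))
print("P(credence ends up <3% or >50%):", round(p_extreme, 3))
```

The mixture in the second example is just one way to capture the bimodal "after research I'd land either very low or fairly high" shape, which a single unimodal distribution can't express.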

I just got notified that my December 7th test donation was matched. This is extremely unexpected to me, and leads me to believe I got my forecast wrong and that the EA community actually could have gotten ~$1M matched this year with the donation trade scheme I had in mind.



By "messaged" do you mean you got an email, Facebook notification, or something else?

I'm not sure. I think you're the first person I've heard of who got matched. When I asked in the EA Facebook group for this on December 15th whether anyone had been matched, all three people who responded (including myself) reported that they were double-charged for their December 15th donations. Initially we assumed the second receipt was a match, but then we saw that Facebook had actually just charged us twice. I haven't heard anything else about the match since then and just assumed I didn't get matched.
