Oh, I wasn't implying any link between worlds 335 and 281, I was just riffing off the idea of sentient and/or symbiotic fungi. I actually think tying them together in the main body of the post confuses things.
I am the symbiotic sentient lichen responsible for https://worldbuild.ai/W-0000000335/.
Please DM if you'd like to discuss the possibility of having one of my moieties colonize your lungs or other moist crevasses.
Location: Toronto, Canada
Remote: Yep
Willing to relocate: Maybe!
Skills: web programming (9 years), agile development, writing, illustration, research
Résumé/CV/LinkedIn: https://www.linkedin.com/in/l-koren-25893152/ (LinkedIn is out of date; résumé available on request)
Email: liav dot koren gmail
I feel like this post makes concrete some of the tensions I was more abstractly pointing at in A Keynesian/Hayekian model of community building.
I can see "Republican" having become its own cluster in the last couple of decades, but what cleanly distinguishes small-c conservative from libertarian? E.g., I definitely would not call Cowen a Republican, but I get the sense he might be somewhat conservative in how he thinks about development, economics, and institutions.
Don't know if you want to include "podcast conversations" in your set here, but if you do:
Russ Roberts is fairly conservative, and also seemed quite thoughtful and to have good epistemology when I was listening to EconTalk regularly (which I haven't done in a few years). He had a conversation with Bostrom, about AGI, which I thought went terribly (no good, bad bad bad). He also had a conversation with MacAskill, which I don't remember as well, but I have the general sense that it also didn't go super well. Maybe worth a re-listen. He's probably talked with some other major figures, if you go digging in the archives -- there have been a lot of development economists, some of whom are probably important to EA research.
>There aren't any prominent conservative EAs (or at least none that I've heard of).
I feel like Tyler Cowen is reasonably libertarian/right of centre. I don't know if he would call himself an EA, but he has an account on the forum, under his full name. I feel like he's pretty well known, at least in these circles.
Thank you for the snippets.
EAG was, by the end, very emotional for me. I found some of my personal failures being juxtaposed with some of my civilization's failings. I was put in very direct touch with the yearning at my core. I talked with people who I like and respect and feel wary around. Some of them are spooked and worried about the shape of things to come. I felt my own anxieties about my place in the world and my value rear up. It was fun and challenging and exhausting.
In 2017 I quit my job and spent a significant amount of time self-studying ML, roughly following a curriculum that Dario Amodei laid out in an 80k podcast. I ran this plan past a few different people, including in an 80k career advising session, but after a year I hadn't gotten a job offer from any of the AI Safety orgs I'd applied to (Ought, OpenAI, maybe a couple of others) and was quite burned out and demotivated. I didn't even feel up to interviewing for an ML-focused job. Instead I went back to web development (albeit with a startup that suggested I'd eventually be able to do some ML work; that job ultimately wasn't a great fit, and I moved on to my current role... as a senior web dev).
I think there are a bunch of lessons I learned from this exercise, but overall I consider it one of my failures.
I think this is one of the things that distinguishes EAs and rationalists from randomly selected smart people. I like to say that EAs have a taste for biting bullets.