All posts


Week of Saturday, 25 June 2022


Quick takes

About going to a hub

A response to: https://forum.effectivealtruism.org/posts/M5GoKkWtBKEGMCFHn/what-s-the-theory-of-change-of-come-to-the-bay-over-the

For people who consider taking or end up taking this advice, some things I'd say if we were having a 1:1 coffee about it:

* Being away from home is by its nature intense, this community and its philosophy are intense, and some social dynamics here are unusual. I want you to go in with some sense of the landscape so you can make informed decisions about how to engage.
* The culture here is full of energy, ambition, and truth-telling. That's really awesome, but it can be a tricky adjustment. In some spaces you'll hear a lot of frank discussion of talent and fit (e.g. people might dissuade you from starting a project not because the project is a bad idea but because they don't think you're a good fit for it). Grounding in your own self-worth (and your own inside views) will probably be really important.
* People both are and seem really smart. It's easy to just believe them when they say things. Remember to flag for yourself things you've just heard vs. things you've discussed at length vs. things you've really thought about yourself. Try to ask questions about the gears of people's models, and ask for credences and cruxes. Remember that people disagree, including about very big questions. Notice the difference between people's offhand hot takes and their areas of expertise. We want you to be someone who can disagree with high-status people, think for themselves, and stay in touch with reality.
* I'd recommend staying grounded with friends/connections/family outside the EA space. Making friends over the summer is great, and some of them may be deep connections you can rely on, but as with all new friends and people, you don't have as much evidence about how those connections will develop over time or with any shifts in your relationships or situations. It's easy to get really attached and connected to peop
Maybe people are overoptimistic about independent/grant-funded work as an option or something?

EA seems unusually big on funding people independently, e.g. people working via grants rather than via employment through some sort of organisation or institution. (Why is that? Well, EAs want to do EA work. And there are more EAs who want to do EA work than there are EA jobs in organisations. Also, EA has won the lottery again... so EAs get funded outside the scope of organisations.)

When I was working at an organisation (or basically any time I've been in an institution) I was like 'I can't wait to get out of this organisation with all its meetings and Slack notifications... once I'm out I'll be independent and free and I'll finally make my own decisions about how to spend my time and realise my true potential'. But as inspirational speaker Dylan Moran warns: 'Stay away from your potential. You'll mess it up, it's potential, leave it. Anyway, it's like your bank balance - you always have a lot less than you think.'

After leaving an organisation and beginning to work on grant funding, I found it a lot more difficult than expected, and missed the structure that came with working in an organisation. Some more good things about organisations: mentorship, colleagues, training, plausibly free stationery, a clear distinction between work time and non-work time, defined roles and responsibilities, feedback, a sense of identity, and something to blame if things don't go to plan.

Speculatively, EA is quite big on self-belief / believing in one's own potential, and on encouraging people to take risks. And I worry that all this means that more people end up doing independent work than is a good idea.
My attempt to help with AI Safety

Meta: This feels like something emotional where, if somebody looked at my plan from the outside, they'd have obvious and good feedback, but my own social circle is not worried or knowledgeable about AGI, and so I hope someone will read this.

Best bet: Meta software projects. It would be my best personal fit: running one or multiple software projects that require product work, such as understanding what the users actually want. My bottleneck: talking to actual users with pain points (researchers? meta orgs with software problems? funders? I don't know).

Plan B: Advocacy. I think I have potential to grow into a role where I explain complicated things in a simple way, without annoying people. Advocacy seems scary, but I think my experience strongly suggests I should try.

Plan C: Research? Usually when I look closely at a field, I have new stuff to contribute. I do have impostor syndrome around AGI Safety research, but again, probably people like me should try (?). [I am not a mathematician at all. Am I just wrong here?]

Bottleneck for Plans B+C: getting a better model. What model specifically: if you'd erase all the information I've heard of experts speculating "when will we have AGI?" and "what's the chance it will kill us all?", could I re-invent it? Could I figure out which expert is right? This seems like the first layer, and an important one.

My actionable items:

1. Talk to friends about AGI. They ask questions, like "can't the AGI simply ADVISE us on what to do?", and I answer.
   1. We both improve our models (specifically, if what I say doesn't seem convincing, then maybe it's wrong?)
   2. I slowly exit my comfort zone of "being the weird person talking about AGI"
2. Write up my own model and post it for comments.
   1. Maybe my agreements/disagreements with this?
   2. Seems hard and tiring

What am I missing? Give me the obvious stuff.
quinn · 2y
We need an in-depth post on moral circle expansion (MCE), minoritarianism, and winning.

I expect EA's MCE projects to be less popular than anti-abortion advocacy in the US (37% say abortion ought to be illegal in all or most cases, while veganism, for one example, is at 6%). I guess the specifics of how the anti-abortion movement operated may be too in the weeds of contingent and peculiar pseudodemocracy, winning elections with less than half of the votes and securing judges and so on, but it seems like we don't want to miss out on studying this. There may be insights.

While many EAs would (I think rightly) consider the anti-abortion people colleagues as MCE activists, some EAs may also (I think debatably) admire Republicans for their ruthless, shrewd, occasionally thuggish commitment to winning. Regarding the latter, I would hope to hear out a case for principles over policy preference: keeping our hands clean, refusing to compromise our integrity, and so on. I'm about 50:50 on where I'd expect to fall personally on the playing-fair-and-nice stuff. I guess it's a question of how much Republicans expect to suffer from the externalities of thuggishness, if we want to use them to reason about the price we're willing to put on our integrity.

Moreover, I think this "colleagues as MCE activists" stuff is under-discussed. When you steelman the anti-abortion movement, you assume that they understand multiplication as well as we do, and are making a difficult and unhappy tradeoff about the QALYs lost to abortions needed when pregnancies go wrong, or to unclean black-market abortions, or what have you. I may feel like I oppose the anti-abortion people on multiplicationist/consequentialist grounds (I also just don't think reducing the incidence of disvaluable things by outlawing them is a reasonable lever), but things get interesting when I model them as understanding the tradeoffs they're making.

(To be clear, this isn't "EA writer, culturally coded as a democrat for whatever college/lgbt/atheist r
Is it all a bit too convenient?

There's been lots of discussion about EA having so much money, particularly longtermist EA, and worries that this means we are losing the 'altruist' side of EA as people get more comfortable and work on more speculative cause areas. This post isn't about what's right or wrong or what "we should do"; it's about reconciling the inner tension this creates.

Many of us now have very well-paid jobs in nice offices with perks like table tennis. And many people are working on things which often yield no benefit to humans and animals in the near term but might in the future; or indeed the first-order effect of the jobs is growing the EA community, and the 2nd- and 3rd-order effects are speculative benefit to humans, animals, or sentient beings in the future. These jobs are often high status.

Though not in an EA org, I feel my job fits this bill as well. I get a bit pissed with myself sometimes, feeling I've sold out, because it just seems a bit too convenient that the most important thing I could do gets me high-profile speaking events, a nice salary, an impressive title, access to important people, etc. And the potential impact from my job, which is in AI regulation, is still largely speculative.

I feel longtermish, in that I aim to make the largest and most sustainable change for all sentient minds to be blissful, not suffer, and enjoy endless pain au raisin. But that doesn't mean ignoring humans and animals today. To blatantly misquote Peter Singer: the opportunity cost of not saving a drowning child today is still real, even if it means showing up 5 minutes late to work every day and compromising on your productivity, which you believe is so important because you have a 1/10^7* chance of saving 10^700** children.

For me to believe I'm living my values, I think I need to still try to make an impact today. I try to donate a good chunk to global health and wellbeing initiatives, lean harder into animal rights, and (am now starting to) suppo