
Quick takes

116 · Linch · 3y · 20
Red teaming papers as an EA training exercise?

I think a plausibly good training exercise for EAs wanting to be better at empirical/conceptual research is to deep-dive into seminal papers/blog posts and attempt to identify all the empirical and conceptual errors in past work, especially writings by either a) other respected EAs or b) other work that we otherwise think of as especially important. I'm not sure how knowledgeable you have to be to do this well, but I suspect it's approachable for smart people who finish high school, and certainly by the time they finish undergrad^ with a decent science or social science degree.

I think this is good career building for various reasons:

* You can develop a healthy skepticism of the existing EA orthodoxy.
  * I mean skepticism that's grounded in specific beliefs about why things ought to be different, rather than just vague "weirdness heuristics" or feeling like the goals of EA conflict with other tribal goals.
  * (I personally have not found high-level critiques of EA, and I have read many, to be particularly interesting or insightful, but this is just a personal take.)
* You actually deeply understand at least one topic well enough to point out errors.
* For many people and personalities, critiquing a specific paper/blog post may be a less hairy "entry point" into doing EA-research-adjacent work than plausible alternatives like trying to form your own deep inside views on really open-ended, ambiguous questions like "come up with a novel solution in AI alignment" or "identify a new cause X".
* It creates legible career capital (at least within EA).
* It requires relatively little training/guidance from external mentors, meaning:
  * our movement devotes less scarce mentorship resources to this;
  * people with worse social skills, networks, or geographical situations don't feel (as much) at a disadvantage in getting the relevant training.
* You can start forming your own opinions/intuitions about both object-level and meta-level heuristics for which things are likely to be correct vs. wrong.
* In some cases, the errors are actually quite big, and worth correcting (relevant parts of) the entire EA movement on.

Main "cons" I can think of:

* I'm not aware of anybody successfully doing a really good critique for the sake of doing a really good critique. The most exciting examples I'm aware of publicly (zdgroff's critique of Ng's original paper on wild animal suffering, and alexrjl's critique of Giving Green; I also have private examples) mostly come from people trying to deeply understand a thing for themselves, and then along the way spotting errors in existing work.
* It's possible that doing deliberate "red-teaming" would predispose one to spot trivial issues rather than serious ones, or to falsely identify issues where there aren't any.
* Maybe critique is a less important skill to develop than forming your own vision/research direction and executing on it, and telling people to train for this skill might actively hinder their ability to be bold and imaginative.

^ Of course, this depends on the field. I think even relatively technical papers within EA are readable to a recent undergrad who cares enough, but this will not be true for e.g. (most) papers in physics or math.
101 · Buck · 3y · 19
Here's a crazy idea. I haven't run it by any EAIF people yet.

I want to have a program to fund people to write book reviews and post them to the EA Forum or LessWrong. (This idea came out of a conversation with a bunch of people at a retreat; I can't remember exactly whose idea it was.)

Basic structure:

* Someone picks a book they want to review.
* Optionally, they email me asking how on-topic I think the book is (to reduce the probability of not getting the prize later).
* They write a review and send it to me.
* If it's the kind of review I want, I give them $500 in return for them posting the review to the EA Forum or LW with a "This post sponsored by the EAIF" banner at the top. (I'd also love to set up an impact purchase thing, but that's probably too complicated.)
* If I don't want to give them the money, they can do whatever with the review.

What books are on topic: anything of interest to people who want to have a massive altruistic impact on the world. More specifically:

* Things directly related to traditional EA topics.
* Things about the world more generally, e.g. macrohistory, how governments work, The Doomsday Machine, history of science (e.g. Asimov's "A Short History of Chemistry").
* I think that books about self-help, productivity, or skill-building (e.g. management) are dubiously on topic.

Goals:

* I think these book reviews might be directly useful. There are many topics where I'd love to know the basic EA-relevant takeaways, especially when combined with basic fact-checking.
* It might encourage people to practice useful skills, like writing, quickly learning about new topics, and thinking through which topics would be useful to know more about.
* I think it would be healthy for EA's culture. I worry sometimes that EAs aren't sufficiently interested in learning facts about the world that aren't directly related to EA stuff. I think this might be improved both by people writing these reviews and by people reading them.
  * Conversely, I sometimes worry that rationalists are too interested in thinking about the world via introspection or weird analogies, relative to learning many facts about different aspects of the world; I think book reviews would maybe be a healthier way to direct energy toward intellectual development.
* It might surface some talented writers and thinkers who weren't otherwise known to EA.
* It might produce good content on the EA Forum and LW that engages intellectually curious people.

Suggested elements of a book review:

* A one-paragraph summary of the book.
* How compelling you found the book's thesis, and why.
* The main takeaways that relate to vastly improving the world, with emphasis on the surprising ones.
* Optionally, epistemic spot checks.
* Optionally, "book adversarial collaborations", where you actually review two different books on the same topic.
Reflection on my time as a Visiting Fellow at Rethink Priorities this summer

I was a Visiting Fellow at Rethink Priorities this summer. They're hiring right now, and I have lots of thoughts on my time there, so I figured I'd share some. I had some misconceptions coming in, and I think I would have benefited from a post like this, so I'm guessing other people might, too. Unfortunately, I don't have time to write anything in depth for now, so a shortform will have to do.

Fair warning: this shortform is quite personal and one-sided. In particular, when I tried to think of downsides to highlight to make this post fair, few came to mind, so the post is very upsides-heavy. (Linch's recent post has a lot more on possible negatives about working at RP.) Another disclaimer: I changed in various ways during the summer, including in terms of my preferences and priorities. I think this is good, but there's also a good chance of some bias (I'm happy with how working at RP went because working at RP transformed me into the kind of person who's happy with that sort of work, etc.). (See additional disclaimer at the bottom.)

First, some vague background on me, in case it's relevant:

* I finished my BA this May with a double major in mathematics and comparative literature.
* I had done some undergraduate math research, had taught in a variety of contexts, and had worked at Canada/USA Mathcamp, but did not have a lot of proper non-academia work experience.
* I was introduced to EA in 2019.

Working at RP was not what I had expected (it seems likely that my expectations were skewed). One example of this was how my supervisor (Linch) held me accountable. Accountability existed in a way that helped me focus on goals ("milestones") rather than making me feel guilty about falling behind. (Perhaps I had read too much about bad workplaces and poor incentive structures, but I was quite surprised and extremely happy about this.)

This was a really helpful transition for me from the university context, where I often had to complete large projects with less built-in support. For instance, I would have big papers due as midterms (or final exams that accounted for 40% of a course grade), and I would often procrastinate on these because they were big, hard to break down, and potentially unpleasant to work on. (I got really good at writing a 15-page draft overnight.) In contrast, at Rethink, Linch would help me break down a project into steps ("do 3 hours of reading on X subject," "reach out to X person," "write a rough draft of brainstormed ideas in a long list and share it for feedback," etc.), and we would set deadlines for those. Accomplishing each milestone felt really good and kept me motivated to continue with the project. If I fell behind schedule, he would help me reprioritize and think through the bottlenecks, and I would move forward. (Unless I'm mistaken, managers at RP had taken a management course to make sure these structures worked well. I don't know how much that helped because I can't guess at the counterfactual, but from my point of view, they did seem quite prepared to manage us.)

Another surprise: Rethink actively helped me meet many (really cool) people, both when they did things like give feedback, and through socials or 1-1s. I went from ~10 university EA friends to ~25 people I knew I could go to for resources or help. I had not done much EA-related work before the internship (e.g. my first EA Forum post was due to RP), but I never felt judged or less respected for that. Everyone I interacted with seemed genuinely invested in helping me grow. They sent me relevant links, introduced me to cool new people, and celebrated my successes.

I also learned a lot and developed entirely new interests. My supervisor was Linch, so it might be unsurprising that I became quite interested in forecasting and related topics. Beyond this, however, I found the work really exciting, and explored a variety of topics. I read a bunch of economics papers and discovered that the field was actually really interesting (this might not be a surprise to others, but it was to me!). I also got to fine-tune my understanding of, and opinions on, a number of questions in EA and longtermism. I developed better work (or research) habits, gained some confidence, and began to understand myself better.

Here's what I come up with when I try to think of negatives:

* I struggled to some extent with the virtual setting (e.g. due to tech or internet issues). Pro tip: if you find yourself with a slow computer, fix that situation ASAP.
* There might have been too much freedom for me; I probably spent too long choosing and narrowing my next project topics. Still, this wasn't purely negative; I think I ended up learning a lot during the exploratory interludes (where I went on deep dives into things like x-risks from great power conflict), but they did not help me produce outputs. As far as I know, this issue is less relevant for more senior positions, and a number of more concrete projects are more straightforwardly available now. (It also seems likely that I could have mitigated this by realizing right away that it would be an issue.)
* I would occasionally fall behind and become stressed about that. A few tasks became ugh fields. As the summer progressed, I think I got better about immediately telling Linch when I noticed myself feeling guilty or unhappy about a project, and this helped a lot.
* Opportunity cost. I don't know exactly what I would have done during the summer if not RP, but it's always possible it would have been better.

Obviously, if I were restarting the summer, I would do some things differently. I might focus on producing outputs faster. I might be more active in trying to meet people. I would probably organize my daily routine differently. But some of the things I list here are precisely changes in my preferences or priorities that resulted from working at RP. :)

I don't know if anyone will have questions, but feel free to ask if you do. I should note that I might not be able to answer many, as I'm quite low on free time (I just started a new job).

Note: nobody pressured me to write this shortform, although Linch and some other people at RP did know I was writing it and were happy about it.

For convenience, here's a link to RP's hiring page.
67 · Buck · 2y · 10
I think it's bad when people who've been around EA for less than a year sign the GWWC pledge. I care a lot about this. I would prefer groups to strongly discourage new people from signing it. I can imagine boycotting groups that encouraged signing the GWWC pledge (though I'd probably first want to post about why I feel so strongly about this, and warn them that I was going to do so).

I regret taking the pledge, and the fact that the EA community didn't discourage me from taking it is by far my biggest complaint about how the EA movement has treated me. (EDIT: To be clear, I don't think anyone senior in the movement actively encouraged me to do it, but I am annoyed at them for not actively discouraging it.)

(I'm writing this short post now because I don't have time to write the full post right now.)
A case of precocious policy influence, and my pitch for more research on how to get a top policy job.

Last week Lina Khan was appointed as Chair of the FTC, at age 32! How did she get such an elite role? At age 11, she moved to the US from London. In 2014, she studied antitrust topics at the New America Foundation (a centre-left think tank). She got a JD from Yale in 2017, and published work relevant to the emerging Hipster Antitrust movement at the same time. In 2018, she worked as a legal fellow at the FTC. In 2020, she became an associate professor of law at Columbia. This year, 2021, she was appointed by Biden.

The FTC chair role is an extraordinary level of success to reach at such a young age. But it kind of makes sense that she was able to get such a role: she has elite academic credentials that are highly relevant for the role, has ridden the hipster antitrust wave, and has experience of, and willingness to work in, government. I think biosecurity and AI policy EAs could try to emulate this. Specifically, they could try to gather some elite academic credentials, while also engaging with regulatory issues and working for regulators, or more broadly, in the executive branch of government. Jason Matheny's success is arguably a related example.

This also suggests a possible research agenda on how people get influential jobs in general. For many talented young EAs, it would be very useful to know. Similar to how Wiblin ran some numbers in 2015 on the chances of winning a seat in Congress given a background at Yale Law, we could ask about the White House, external political appointments (such as FTC commissioner), and the judiciary. This also ought to be quite tractable: all the names are public, e.g. here [Trump years] and here [Obama years], and most of the CVs are in the public domain. It just needs doing.