Here are some things I've learned from spending the better part of the last 6 months either forecasting or thinking about forecasting, with an eye towards beliefs that I expect to be fairly generalizable to other endeavors.
Note that I assume that anybody reading this is already familiar with Philip Tetlock's work on (super)forecasting, particularly Tetlock's ten commandments for aspiring superforecasters.
1. Forming (good) outside views is often hard but not impossible. I think there is a common belief/framing in EA and rationalist circles that coming up with outside views is easy, and that the real difficulty is a) originality in inside views, and b) the debate over how much to trust outside views vs. inside views.
I think this is directionally true (original thought is harder than synthesizing existing views), but it hides a lot of the details. It's often quite difficult to come up with, and balance between, good outside views that are applicable to a situation. See Manheim and Muehlhauser for some discussions of this.
2. For novel out-of-distribution situations, "normal" people often trust centralized data/ontologies more than is warranted. See here for a discu...
Consider making this a top-level post! That way, I can give it the "Forecasting" tag so that people will find it more often later, which would make me happy, because I like this post.
Red teaming papers as an EA training exercise?
I think a plausibly good training exercise for EAs wanting to be better at empirical/conceptual research is to deep dive into seminal papers/blog posts and attempt to identify all the empirical and conceptual errors in past work, especially writings by either a) other respected EAs or b) other stuff that we otherwise think of as especially important.
I'm not sure how knowledgeable you have to be to do this well, but I suspect it's approachable for smart people who have finished high school, and certainly for those who have finished undergrad with a decent science or social science degree.
I think this is good career building for various reasons:
cross-posted from Facebook.
Sometimes I hear people who counsel humility say something like "this question has stumped the best philosophers for centuries/millennia. How could you possibly hope to make any progress on it?" While I concur that humility is frequently warranted, and that in many specific cases that injunction is reasonable [1], I think the framing is broadly wrong.
In particular, using geologic time rather than anthropological time hides the fact that there probably weren't that many people actively thinking about these issues, especially carefully, in a sustained way, while making sure to build on the work of the past. For background, 7% of all humans who have ever lived are alive today, and living people account for 15% of total human experience [2] so far!!!
It will not surprise me if there are about as many living philosophers today as there were dead philosophers in all of written history.
For some specific questions that particularly interest me (eg. population ethics, moral uncertainty), the total research work done on these questions is generously less than five philosopher-lifetimes. Even for classical age-old philosophical dilemmas/"grand projects...
cross-posted from Facebook.
Catalyst (biosecurity conference funded by the Long-Term Future Fund) was incredibly educational and fun.
Random scattered takeaways:
1. I knew going in that everybody there would be much more knowledgeable about bio than I was. I was right. (Maybe more than half the people there had PhDs?)
2. Nonetheless, I felt like most conversations were very approachable and informative for me, from Chris Bakerlee explaining the very basics of genetics to me, to asking Anders Sandberg about some research he did that was relevant to my interests, to Tara Kirk Sell detailing recent advances in technological solutions in biosecurity, to random workshops where novel ideas were proposed...
3. There was a strong sense of energy and excitement from everybody at the conference, much more than at other conferences I've been to (including EA Global).
4. From casual conversations in EA-land, I get the general sense that work in biosecurity was fraught with landmines and information hazards, so it was oddly refreshing to hear so many people talk openly about exciting new possibilities to de-risk biological threats and promote a healthier future, while still being fully cognizant ...
Publication bias alert: Not everybody liked the conference as much as I did. Someone I know and respect thought some of the talks weren't very good (I agreed with them about the specific examples, but didn't think it mattered because really good ideas/conversations/networking at an event + gestalt feel is much more important for whether an event is worthwhile to me than a few duds).
That said, on a meta level, you might expect that people who really liked (or hated, I suppose) a conference/event/book are more likely to write detailed notes about it than people who were lukewarm about it.
While talking to my manager (Peter Hurford), I realized that by default, when "life" gets in the way, research (i.e. the most important thing on my agenda, and what I'm explicitly being paid to do) is the first to go. (Concretely, last week a fair number of hours were taken up by management training seminars I wanted to attend before I get my first interns; this week I'm losing ~3 hours to a covid vaccination appointment and in expectation will lose ~5 more to side effects.) This seems like a bad state of affairs.
I suspect that this is more prominent in me than in most people, but also suspect it's normal for others as well. More explicitly, I don't have much "normal" busywork like paperwork or grant-writing, and I try to limit my life-maintenance tasks (of course I don't commute, and there's other stuff in that general direction). So all the things I do are either at least moderately useful or entertaining. E.g., EA/work stuff like reviewing/commenting on others' papers, meetings, mentorship, Slack messages, reading research and doing research, as well as personal entertainment stuff like social media, memes, videogames, etc. (which I do much more than I'm willing to admi...
Over a year ago, someone asked the EA community whether it’s valuable to become world-class at an unspecified non-EA niche or field. Our Forum’s own Aaron Gertler responded in a post, saying basically that there’s a bunch of intangible advantages for our community to have many world-class people, even if it’s in fields/niches that are extremely unlikely to be directly EA-relevant.
Since then, Aaron became (entirely in his spare time, while working 1.5 jobs) a world-class Magic: The Gathering player, recently winning the DreamHack MtGA tournament and getting $30,000 in prize money, half of which he donated to GiveWell.
I didn’t find his arguments overwhelmingly persuasive at the time, and I still don’t. But it’s exciting to see other EAs come up with unusual theories of change, actually execute on them, and then be wildly successful.
Recently I was asked for tips on how to be less captured by motivated reasoning and related biases, a goal/quest I've slowly made progress on for the last 6+ years. I don't think I'm very good at this, but I do think I'm likely above average, and it's also something I aspire to be better at. So here is a non-exhaustive and somewhat overlapping list of things that I think are helpful:
Something that came up in a discussion with a coworker recently is that often internet writers want some (thoughtful) comments, but not too many, since too many comments can be overwhelming. Or at the very least, the marginal value of additional comments is usually lower for authors when there are already many comments.
However, the incentives for commenters are very different: by default, people want to comment on the most exciting/cool/wrong thing, so internet posts easily attract either many comments or none. (I think) very little self-policing is done; if anything, a post with many comments becomes a more attractive target for secondary or tertiary comments, rather than less.
Meanwhile, internet writers who do great work often do not get the desired feedback. As evidence: For ~ a month, I was the only person who commented on What Helped the Voiceless? Historical Case Studies (which later won the EA Forum Prize).
This would be less of a problem if internet communication were primarily about idle speculation and cat pictures. But of course this is not the primary way I and many others on the Forum engage with the internet. Frequently, the primary publication v...
cross-posted from Facebook.
Reading Bryan Caplan and Zach Weinersmith's new book has made me somewhat more skeptical about Open Borders (from a high prior belief in its value).
Before reading the book, I was already aware of the core arguments (eg, Michael Huemer's right to immigrate, basic cosmopolitanism, some vague economic stuff about doubling GDP).
I was hoping the book would have more arguments, or stronger versions of the arguments I'm familiar with.
It mostly did not.
The book did convince me that the prima facie case for open borders was stronger than I thought. In particular, the section where he argues that a bunch of different normative ethical theories should, all else equal, lead to open borders was moderately convincing. It would have updated me towards open borders if I believed in a stronger "weight all mainstream ethical theories equally" form of moral uncertainty, or if I'd previously had a strong belief in a moral theory that I believed was against open borders.
However, I already fairly strongly subscribe to cosmopolitan utilitarianism and see no problem with aggregating utility across borders. Most of my concerns with open borders are rel...
Do people have advice on how to be more emotionally resilient in the face of disaster?
I spent some time this year thinking about things that are likely to be personally bad in the near-future (most salient to me right now is the possibility of a contested election + riots, but this is also applicable to the ongoing Bay Area fires/smoke and to a lesser extent the ongoing pandemic right now, as well as future events like climate disasters and wars). My guess is that, after a modicum of precaution, the direct objective risk isn't very high, but it'll *feel* like a really big deal all the time.
In other words, being perfectly honest about my own personality/emotional capacity, there's a high chance that if the street outside my house is rioting, I just won't be productive at all (even if I did the calculations and the objective risk is relatively low).
So I'm interested in anticipating this phenomenon and building emotional resilience ahead of time so such issues won't affect me as much.
I'm most interested in advice for building emotional resilience for disaster/macro-level setbacks. I think it'd also be useful to build resilience for more personal setbacks (eg career/relationship/impact), but I naively suspect that this is less tractable.
Thoughts?
I think it might be interesting/valuable for someone to create "list of numbers every EA should know", in a similar vein to Latency Numbers Every Programmer Should Know and Key Numbers for Cell Biologists.
One obvious argument against this is that maybe EA is too broad, and the numbers we actually care about are too domain-specific to particular queries/interests; nonetheless, I still think it's worth investigating.
I find the unilateralist’s curse a particularly valuable concept to think about. However, I now worry that “unilateralist” is an easy label to tack on, and whether a particular action is unilateralist or not is susceptible to small changes in framing.
Consider the following hypothetical situations:
Edit: By figuring out ethics I mean both right and wrong in the abstract but also what the world empirically looks like so you know what is right and wrong in the particulars of a situation, with an emphasis on the latter.
I think a lot about ethics. Specifically, I think a lot about "how do I take the best action (morally), given the set of resources (including information) and constraints (including motivation) that I have." I understand that in philosophical terminology this is only a small subsection of applied ethics, and yet I spend a lot of time thinking about it.
One thing I learned from my years of involvement in EA is that ethics is hard. Specifically, I think ethics is hard in the way that researching a difficult question, maintaining a complicated relationship, or raising a child well is hard, rather than hard in the way that regularly going to the gym is hard.
When I first got introduced to EA, I believed almost the opposite (this article presents something close to my past views well): that the hardness of living ethically is a matter of execution and will, rather than that of constantly making tradeoffs in a difficult-to-navigate domain.
I still ...
I've started trying my best to consistently address people on the EA Forum by username whenever I remember to, even when the username clearly reflects their real name (e.g. Habryka). I'm not sure this is the right move, but overall I think it creates slightly better cultural norms: it pushes us (slightly) towards pseudonymous commenting/"Old Internet" norms, which I think is slightly better for truth-seeking and for judging arguments by their quality, rather than being too conscious of status-y/social-monkey effects.
(It's possible I'm more sensitive to this than most people).
I think some years ago there was a belief that people would be less vicious (in the mean/dunking way) and more welcoming if we used real-name policies, but I think reality has mostly falsified this hypothesis.
I'm worried about a potential future dynamic where an emphasis on forecasting/quantification in EA (especially if it has significant social or career implications) will bias people towards silence/vagueness in areas where they don't feel ready to commit to a probability forecast.
I think it's good that we appear to be moving in the direction of greater quantification and accountability for probability estimates, but there's a very real risk that people see this and become scared of putting their loose thoughts/intuitive probability estimates on record. This may result in overall worse group epistemics, because people hedge too much and are unwilling to commit to public probabilities.
See analogy to Jeff Kaufman's arguments on responsible transparency consumption:
https://www.jefftk.com/p/responsible-transparency-consumption
Malaria kills a lot more people over age 5 than I would have guessed (still more deaths at ages <=5 than >5, but a much smaller ratio than I intuitively believed). See C70-C72 of GiveWell's cost-effectiveness estimates for AMF, which themselves come from the Global Burden of Disease Study.
I've previously cached the thought that malaria primarily kills people who are very young, but this is wrong.
I think the intuition slip here is that malaria is a lot more fatal for young people; however, there are many more older people than younger people.
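A toy calculation makes the base-rate point concrete. The numbers below are invented for illustration only (not taken from the Global Burden of Disease Study or GiveWell's spreadsheet):

```python
# Hypothetical illustration -- all numbers invented, NOT real GBD data.
# The point is the base-rate arithmetic: even if malaria is far more fatal
# per person for under-5s, the much larger over-5 population can still
# account for a sizable share of total deaths.

under5_pop = 100e6        # people at risk under age 5 (hypothetical)
over5_pop = 500e6         # people at risk over age 5 (hypothetical)
under5_death_rate = 4e-3  # annual deaths per person at risk (hypothetical)
over5_death_rate = 5e-4   # 8x lower per-person fatality (hypothetical)

under5_deaths = under5_pop * under5_death_rate  # ~400,000
over5_deaths = over5_pop * over5_death_rate     # ~250,000

share_over5 = over5_deaths / (under5_deaths + over5_deaths)
print(f"Share of malaria deaths over age 5: {share_over5:.0%}")
```

With these made-up inputs, the over-5 group has a much lower death rate but still accounts for roughly 38% of deaths, which is the shape of the correction to my cached belief.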
Should there be a new EA book, written by somebody both trusted by the community and (less importantly) potentially externally respected/camera-friendly?
Kind of a shower thought, based on the thinking that Doing Good Better is maybe a bit old now for the intended use-case of conveying EA ideas to newcomers.
I think the 80,000 Hours and EA handbooks were maybe trying to do this, but for whatever reason didn't get a lot of traction?
I suspect that the issue is something like not having a sufficiently strong "voice"/editorial line, and that what you want for a book that's a) bestselling and b) does not sacrifice nuance too much is one final author plus 1-3 RAs/ghostwriters.
In the Precipice, Toby Ord very roughly estimates that the risk of extinction from supervolcanoes this century is 1/10,000 (as opposed to 1/10,000 from natural pandemics, 1/1,000 from nuclear war, 1/30 from engineered pandemics and 1/10 from AGI). Should more longtermist resources be put into measuring and averting the worst consequences of supervolcanic eruption?
More concretely, I know a PhD geologist who's interested in an EA/longtermist career and is currently thinking of re-skilling for AI policy. Given that (AFAICT) literally zero people in our community currently work on supervolcanoes, should I instead convince him to investigate supervolcanoes at least for a few weeks/months?
If he hasn't seriously considered working on supervolcanoes before, then it definitely seems worth raising the idea with him.
I know almost nothing about supervolcanoes, but, assuming Toby's estimate is reasonable, I wouldn't be too surprised if going from zero to one longtermist researcher in this area is more valuable than adding an additional AI policy researcher.
What would a company/organization with a really important secondary mandate to focus on the general career development of employees actually look like? How would trainings be structured, what would growth trajectories look like, etc.?
When I was at Google, I got the distinct impression that while "career development" and "growth" were common buzzwords, most of the actual programs on offer were more focused on employee satisfaction/retention than growth. (For example, I've essentially never gotten any feedback on my selection of training courses or books that I bought with company money, which at the time I thought was awesome flexibility, but in retrospect was not a great sign of caring about growth on the part of the company).
Edit: Upon a reread I should mention that there are other ways for employees to grow within the company, eg by having some degree of autonomy over what projects they want to work on.
I think there are theoretical reasons why employee career growth is underinvested in by default. Namely, that the costs of career growth are borne approximately equally by the employer and the employee (obviously this varies from case to case), whil...
I'm interested in a collection of backchaining posts by EA organizations and individuals that trace back from what we want -- an optimal, safe world -- to specific actions that individuals and groups can take.
Can be any level of granularity, though the more precise, the better.
Interested in this for any of the following categories:
I'd appreciate a 128kb square version of the lightbulb/heart EA icon with a transparent background, as a Slack emoji.
I continue to be fairly skeptical that the all-things-considered impact of EA altruistic interventions differs by multiple (say >2) orders of magnitude ex ante (though I think it's plausible ex post). My main crux here is that I believe general meta concerns start dominating once the object-level impacts are small enough.
This is all in terms of absolute value of impact. I think it's quite possible that some interventions have large (or moderately sized) negative impact, and I don't know how the language of impact in terms of multiplication best deals with this.
Minor UI note: I missed the EAIF AMA multiple times (even after people told me it existed) because my eyes automatically glaze over pinned tweets. I may be unusual in this regard, but thought it worth flagging anyway.
Do people have thoughts on what the policy should be on upvoting posts by coworkers?
Obviously, telling coworkers (or worse, employees!) to upvote your posts should be verboten, and an EA Forum policy that you can't upvote posts by coworkers would be too draconian (and also hard to enforce).
But I think there's a lot of room in between to arrive at a situation where, on average, posts by people who work at EA orgs have more karma than posts of equivalent semi-objective quality. Concretely, 2 mechanisms by which this could happen (and almost c...
crossposted from LessWrong
There should maybe be an introductory guide for new LessWrong users coming in from the EA Forum, and vice versa.
I feel like my writing style (designed for EAF) is almost the same as that of LW-style rationalists, but not quite identical, and this is enough to be substantially less useful for the average audience member there.
For example, this identical question is a lot less popular on LessWrong than on the EA Forum, despite naively appearing to appeal to both audiences (and indeed if I were to guess at the purview of LW, to be cl...
Are there any EAA researchers carefully tracking the potential of huge cost-effectiveness gains in the ag industry from genetic engineering advances in factory-farmed animals? Or (less plausibly) advances from better knowledge/practice/lore in classical artificial selection? As someone pretty far away from the field, a priori the massive gains made in biology/genetics in the last few decades seem like something that we plausibly have not priced in. So it'd be sad if EAAs got blindsided by animal meat becoming a lot cheaper in the next few decades (if this is indeed viable, which it may not be).
I'm now pretty confused about whether normative claims can be used as evidence in empirical disputes. I generally believed no, with the caveat that, for humans, moral beliefs are built on a scaffolding of facts, and sometimes it's easier to respond to an absurd empirical claim with a moral claim that carries the gestalt sense of one's empirical beliefs, if there isn't an immediately accessible empirical claim.
I talked to a philosopher who disagreed, and roughly believed that strong normative claims can be used as evidence against more confused/less c...
Updated version on https://docs.google.com/document/d/1BDm_fcxzmdwuGK4NQw0L3fzYLGGJH19ksUZrRloOzt8/edit?usp=sharing
Cute theoretical argument for #flattenthecurve at any point in the distribution
I think it's really easy to get into heated philosophical discussions about whether EAs overall use too much or too little jargon. Rather than try to answer this broadly for EA as a whole, it might be helpful for individuals to conduct a few quick polls to decide for themselves whether they ought to change their lexicon.
Here's my Twitter poll as one example.
Economic benefits of mediocre local human preferences modeling.
Epistemic status: Half-baked, probably dumb.
Note: writing is mediocre because it's half-baked.
Some vague brainstorming of economic benefits from mediocre human preferences models.
Many AI safety proposals include understanding human preferences as one of their subcomponents [1]. While this is not obviously good [2], human modeling seems at least plausibly relevant and good.
Short-term economic benefits often spur additional funding and research interest [citation not given]. So a possible quest...
I find it quite hard to do multiple quote-blocks in the same comment on the forum. For example, this comment took me 5-10 tries to get right.
On the forum, it appears to have gotten harder for me to do multiple quote blocks in the same comment. I now often have to edit a post multiple times so that quoted sentences are correctly in quote blocks and unquoted sections are not, whereas in the past I don't recall having this problem.
On the meta-level, I want to think hard about the level of rigor I want to have in research or research-adjacent projects.
I want to say that the target level of rigor I should have is substantially higher than for typical FB or Twitter posts, and way lower than research papers.
But there's a very wide gulf! I'm not sure exactly what I want to do, but here are some gestures at the thing:
- More rigor/thought/data collection should be put into it than the 5-10 minutes typical of a FB/Twitter post, but much less than a hundred or...
I do agree that there are notable differences in what writing styles are often used and appreciated on the two sites.
Could this also be simply because of a difference in the extent to which people already know your username and expect to find posts from it interesting on the two sites? Or, relatedly, a difference in how many active users on each site you know personally?
I'm not sure how much those factors affect karma and comment numbers on either site, but it seems plausible that they have a substantial effect (especially given how an early karma/comment boost can set off a positive feedback loop).
Also, have you crossposted many things and noticed this pattern, or was it just a handful? I think there's a lot of "randomness" in karma and comment numbers on both sites, so if it's just been a couple crossposts it seems hard to be confident that any patterns would hold in future.
Personally, when I've crossposted something to the EA Forum and to LessWrong, those posts have decently often gotten more karma on the Forum and decently often the opposite, and (from memory) I don't think there's been a strong tendency in one direction or the other.