Here are some things I've learned from spending the better part of the last 6 months either forecasting or thinking about forecasting, with an eye towards beliefs that I expect to be fairly generalizable to other endeavors.
Note that I assume that anybody reading this is already familiar with Philip Tetlock's work on (super)forecasting, particularly Tetlock's Ten Commandments for aspiring superforecasters.
1. Forming (good) outside views is often hard but not impossible. I think there is a common belief/framing in EA and rationalist circles that coming up with outside views is easy, and that the real difficulties are a) originality in inside views, and b) the debate over how much to trust outside views vs. inside views.
I think this is directionally true (original thought is harder than synthesizing existing views), but it hides a lot of detail. It's often quite difficult to come up with, and balance, good outside views that are applicable to a situation. See Manheim and Muehlhauser for some discussions of this.
2. For novel out-of-distribution situations, "normal" people often trust centralized data/ontologies more than is warranted. See here for a discussion. I believe something similar is true for trust of domain experts, though this is more debatable.
3. The EA community overrates the predictive validity and epistemic superiority of forecasters/forecasting.
(Note that I think this is an improvement over the status quo in the broader society, where by default approximately nobody trusts generalist forecasters at all)
I've had several conversations where EAs will ask me to make a prediction, I'll think about it a bit and say something like "I dunno, 10%?" and people will treat it like a fully informed prediction to make decisions with, rather than just one source of information among many.
I think this is clearly wrong. In any situation where you are a reasonable person and you have spent 10x (sometimes 100x or more!) the time thinking about a question than I have, you should just trust your own judgment on the question much more than mine.
To a first approximation, good forecasters have three things: 1) They're fairly smart. 2) They're willing to actually do the homework. 3) They have an intuitive sense of probability.
This is not nothing, but it's also pretty far from everything you want in an epistemic source.
4. The EA community overrates Superforecasters and Superforecasting techniques. I think the types of questions and responses Good Judgment .* is interested in reflect one particular way of looking at the world. I don't think it is always applicable (easy EA-relevant example: your Brier score is basically the same if you give 0% for 1% probabilities, and vice versa), and it's bad epistemics to collapse all of "figuring out the future in a quantifiable manner" into a single paradigm.
Likewise, I don't think there's a clear dividing line between good forecasters and GJP-certified Superforecasters, so many of the issues I mentioned in #3 are just as applicable here.
I'm not sure how to collapse all the things I've learned on this topic into a few short paragraphs, but the tl;dr is that before I started forecasting, I trusted superforecasters much more than I trusted other EAs, and now I consider their opinions and forecasts "just" one important component of my overall thinking, rather than a clear epistemic superior to defer to.
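The Brier-score claim in point 4 can be checked in a few lines. This is a minimal sketch with made-up numbers, not anything GJP-specific:

```python
# Expected Brier score for forecasting probability p on a binary event
# whose true probability is q: q*(1-p)**2 + (1-q)*p**2.
# Illustrative numbers only.

def expected_brier(forecast: float, true_prob: float) -> float:
    """Expected Brier score of a probability forecast on a binary event."""
    return true_prob * (1 - forecast) ** 2 + (1 - true_prob) * forecast ** 2

q = 0.01  # an event that actually happens 1% of the time

honest = expected_brier(0.01, q)   # forecasting the true 1%
rounded = expected_brier(0.00, q)  # rounding all the way down to 0%

print(f"forecast 1%: {honest:.6f}")   # 0.009900
print(f"forecast 0%: {rounded:.6f}")  # 0.010000
# The gap is ~0.0001 on the Brier scale -- negligible for scoring,
# even though 0% vs. 1% can be a decision-relevant difference.
```

So a scoring rule that is fine for tournament purposes can be nearly blind to exactly the small-probability distinctions that matter most for, say, existential risk.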
5. Good intuitions are really important. I think there's a Straw Vulcan approach to rationality where people think "good" rationality is about suppressing your System 1 in favor of clear thinking and logical propositions from your System 2. I think there's plenty of evidence that this is wrong*. For example, the cognitive reflection test was originally supposed to test how well people suppress their "intuitive" answers in order to think through the question and arrive at the right "unintuitive" answers. However, we've since learned (from one fairly good psych study, which may not replicate but accords with my intuitions and recent experiences) that more "cognitively reflective" people also gave more accurate initial answers when they didn't have time to think through the question.
On a more practical level, I think a fair amount of good thinking is using your System 2 to train your intuitions, so you have better and better first impressions and taste for how to improve your understanding of the world in the future.
*I think my claim so far is fairly uncontroversial, for example I expect CFAR to agree with a lot of what I say.
6. Relatedly, most of my forecasting mistakes are due to emotional rather than technical reasons. Here's a Twitter thread from May exploring why; I think I mostly still stand by this.
Consider making this a top-level post! That way, I can give it the "Forecasting" tag so that people will find it more often later, which would make me happy, because I like this post.
Thanks for the encouragement and suggestion! Do you have recommendations for a really good title?
Titles aren't my forte. I'd keep it simple. "Lessons learned from six months of forecasting" or "What I learned after X hours of forecasting" (where "X" is an estimate of how much time you spent over six months).
I second this.
cross-posted from Facebook.
Sometimes I hear people who caution humility say something like "this question has stumped the best philosophers for centuries/millennia. How could you possibly hope to make any progress on it?" While I concur that humility is frequently warranted, and that in many specific cases that injunction is reasonable, I think the framing is broadly wrong.

In particular, using geologic time rather than anthropological time hides the fact that there probably weren't that many people actively thinking about these issues, especially carefully, in a sustained way, while making sure to build on the work of the past. For background, 7% of all humans who have ever lived are alive today, and living people compose 15% of total human experience so far! It would not surprise me if there are about as many living philosophers today as there were dead philosophers in all of written history.

For some specific questions that particularly interest me (eg, population ethics, moral uncertainty), the total research work done on these questions is generously less than five philosopher-lifetimes. Even for classic age-old philosophical dilemmas/"grand projects" (like the hard problem of consciousness), total work spent on them is probably less than 500 philosopher-lifetimes, and quite possibly less than 100.

There are also solid outside-view reasons to believe that the best philosophers today are just much more competent than the best philosophers in history, and have access to many more resources.

Finally, philosophy can build on progress in the natural and social sciences (eg, computers, game theory).

Speculating further, it would not surprise me if, say, a particularly thorny and deeply important philosophical problem could effectively be solved in 100 more philosopher-lifetimes. Assuming 40 years of work and $200,000/year per philosopher, including overhead, this is ~$800 million, or in the same ballpark as the cost of developing a single drug. Is this worth it?
Hard to say (especially with such made-up numbers), but the feasibility of solving seemingly intractable problems no longer seems crazy to me. For example, intro philosophy classes will often ask students to take a strong position on questions like deontology vs. consequentialism, or determinism vs. compatibilism. Basic epistemic humility says it's unlikely that college undergrads can get those questions right in such a short time.

Footnotes:
- https://eukaryotewritesblog.com/2018/10/09/the-funnel-of-human-experience/
- Flynn effect, education, and education of women, among others. Also, just https://en.wikipedia.org/wiki/Athenian_democracy#Size_and_make-up_of_the_Athenian_population (roughly as many educated people in all of Athens at any given time as at a fairly large state university). Modern people (or at least peak performers) being more competent than past ones is blatantly obvious in other fields where priority is less important (eg, marathon running, chess).
- Eg, the internet, cheap books, widespread literacy, and the fact that the current intellectual world is practically monolingual.
- https://en.wikipedia.org/wiki/Cost_of_drug_development
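The cost estimate above is simple enough to check explicitly; all inputs are the made-up numbers from the post:

```python
# Back-of-envelope check of the philosopher-lifetime cost estimate.
# All inputs are the (admittedly made-up) numbers from the text.

lifetimes = 100          # philosopher-lifetimes of work
years_per_lifetime = 40  # working years per philosopher
cost_per_year = 200_000  # dollars/year per philosopher, including overhead

total = lifetimes * years_per_lifetime * cost_per_year
print(f"${total:,}")  # $800,000,000
```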
If a problem is very famous and unsolved, don't those who have tried to solve it include many of the much more competent philosophers alive today? The fact that none of them has solved it either would suggest to me that it's a hard problem.
Honest question: are there examples of philosophical problems that were solved in the last 50 years? And I mean solved by doing philosophy, not by doing mostly unrelated experiments (like this one). I imagine that even if some philosophers felt they had answered a question, others would dispute it. More importantly, the solution would likely be difficult to understand, and hence of limited value. I'm not sure I'm right here.
After a bit more googling I found this, which maybe shows that there have been philosophical problems solved recently. I haven't read about that specific problem. It's difficult to imagine a short paper solving the hard problem of consciousness, though.
I enjoyed this list of philosophy's successes, but none of them happened in the last 50 years.
You might be interested in the following posts on the subject from Daily Nous, an excellent philosophy blog:
"Why Progress Is Slower In Philosophy Than In Science"
"How Philosophy Makes Progress (guest post by Daniel Stoljar)"
"How Philosophy Makes Progress (guest post by Agnes Callard)"
"Whether Philosophical Questions Can Be Answered"
"Convergence as Progress in Philosophy"
I'd be interested in having someone with a history-of-philosophy background weigh in on the Gettier question specifically. I thought Gettier problems were really interesting when I first heard about them, but I've also heard that "knowledge as justified true belief" wasn't actually all that dominant a position before Gettier came along.
Catalyst (biosecurity conference funded by the Long-Term Future Fund) was incredibly educational and fun.
Random scattered takeaways:
1. I knew going in that everybody there would be much more knowledgeable about bio than I was. I was right. (Maybe more than half the people there had PhDs?)
2. Nonetheless, I felt like most conversations were very approachable and informative for me, from Chris Bakerlee explaining the very basics of genetics to me, to asking Anders Sandberg about some research he did that was relevant to my interests, to Tara Kirk Sell detailing recent advances in technological solutions in biosecurity, to random workshops where novel ideas were proposed...
3. There was a strong sense of energy and excitement from everybody at the conference, much more than at other conferences I've been to (including EA Global).
4. From casual conversations in EA-land, I get the general sense that work in biosecurity was fraught with landmines and information hazards, so it was oddly refreshing to hear so many people talk openly about exciting new possibilities to de-risk biological threats and promote a healthier future, while still being fully cognizant of the scary challenges ahead. I guess I didn't imagine there were so many interesting and "safe" topics in biosecurity!
5. I got a lot more personally worried about coronavirus than I was before the conference, to the point where I think it makes sense to start making some initial preparations and anticipate lifestyle changes.
6. There was a lot more DIY/Community Bio representation at the conference than I would have expected. I suspect this had to do with the organizers' backgrounds; I imagine that if most other people were to organize biosecurity conferences, it'd be skewed academic a lot more.
7. I didn't meet many (any?) people with a public health or epidemiology background.
8. The Stanford representation was really high, including many people who have never been to the local Stanford EA club.
9. A reasonable number of people at the conference were a) reasonably interested in effective altruism b) live in the general SF area and c) excited to meet/network with EAs in the area. This made me slightly more optimistic (from a high prior) about the value of doing good community building work in EA SF.
10. Man, the organizers of Catalyst are really competent. I'm jealous.
11. I gave significant amounts of money to the Long-Term Future Fund (which funded Catalyst), so I'm glad Catalyst turned out well. It's really hard to forecast the counterfactual success of long-reach plans like this one, but naively it looks like this seems like the right approach to help build out the pipeline for biosecurity.
12. Wow, evolution is really cool.
13. Talking to Anders Sandberg made me slightly more optimistic about the value of a few weird ideas in philosophy I had recently, and that maybe I can make progress on them (since they seem unusually neglected).
14. Catalyst had this cool thing where they had public "long conversations" where instead of a panel discussion, they'd have two people on stage at a time, and after a few minutes one of the two people get rotated out. I'm personally not totally sold on the format but I'd be excited to see more experiments like that.
15. Usually, conferences or other conversational groups I'm in have one of two failure modes: 1) there's an obvious hierarchy (based on credentials, social signaling, or just a few people having way more domain knowledge than others), or 2) people are overly egalitarian and let useless digressions/opinions clog up the conversational space. Surprisingly, neither happened much here, despite an incredibly heterogeneous group (from college sophomores to lead PIs of academic biology labs to biotech CEOs to DIY enthusiasts to health security experts to randos like me).
16. Man, it seems really good to have more conferences like this, where there's a shared interest but everybody comes from different fields, so it's less obviously hierarchical/status-jockeying.
17. I should probably attend more conferences/network more in general.
18. Being the "dumbest person in the room" gave me a lot more affordance to ask silly questions and understand new stuff from experts. I actually don't think I was that annoying, surprisingly enough (people seemed happy enough to chat with me).
19. Partially because of the energy in the conference, the few times where I had to present EA, I mostly focused on the "hinge of history/weird futuristic ideas are important and we're a group of people who take ideas seriously and try our best despite a lot of confusion" angle of EA, rather than the "serious people who do the important, neglected and obviously good things" angle that I usually go for. I think it went well with my audience today, though I still don't have a solid policy of navigating this in general.
20. Man, I need something more impressive on my bio than "unusually good at memes."
Publication bias alert: Not everybody liked the conference as much as I did. Someone I know and respect thought some of the talks weren't very good (I agreed with them about the specific examples, but didn't think it mattered because really good ideas/conversations/networking at an event + gestalt feel is much more important for whether an event is worthwhile to me than a few duds).
That said, on a meta level, you might expect that people who really liked (or hated, I suppose) a conference/event/book are more likely to write detailed notes about it than people who were lukewarm about it.
I am glad to hear that! I sadly didn't end up having the time to go, but I've been excited about the project for a while.
Thanks for your report! I was interested but couldn't manage the cross country trip and definitely curious to hear what it was like.
I'd really appreciate ideas for how to convey some of what it was like to people who couldn't make it. We recorded some of the talks and intend to edit + upload them, we're writing a "how to organize a conference" postmortem / report, and one attendee is planning to write a magazine article, but I'm not sure what else would be useful. Would another post like this be helpful?
We recorded some of the talks and intend to edit + upload them, we're writing a "how to organize a conference" postmortem / report, and one attendee is planning to write a magazine article
That all sounds useful and interesting to me!
Would another post like this be helpful?
I think multiple posts following events on the personal experiences from multiple people (organizers and attendees) can be useful simply for the diversity of their perspectives. Regarding Catalyst in particular I'm curious about the variety of backgrounds of the attendees and how their backgrounds shaped their goals and experiences during the meeting.
Over a year ago, someone asked the EA community whether it’s valuable to become world-class at an unspecified non-EA niche or field. Our Forum’s own Aaron Gertler responded in a post, saying basically that there’s a bunch of intangible advantages for our community to have many world-class people, even if it’s in fields/niches that are extremely unlikely to be directly EA-relevant.
Since then, Aaron became (entirely in his spare time, while working 1.5 jobs) a world-class Magic: The Gathering player, recently winning the DreamHack MtGA tournament and getting $30,000 in prize money, half of which he donated to GiveWell.
I didn’t find his arguments overwhelmingly persuasive at the time, and I still don’t. But it’s exciting to see other EAs come up with unusual theories of change, actually executing on them, and then being wildly successful.
Reading Bryan Caplan and Zach Weinersmith's new book has made me somewhat more skeptical about Open Borders (from a high prior belief in its value).
Before reading the book, I was already aware of the core arguments (eg, Michael Huemer's right to immigrate, basic cosmopolitanism, some vague economic stuff about doubling GDP).
I was hoping the book would have more arguments, or stronger versions of the arguments I'm familiar with.
It mostly did not.
The book did convince me that the prima facie case for open borders was stronger than I thought. In particular, the section where he argued that a bunch of different normative ethical theories should, all else equal, lead to open borders was moderately convincing. It would have updated me towards open borders if I believed in a stronger "weight all mainstream ethical theories equally" form of moral uncertainty, or if I had previously held a strong belief in a moral theory that I believed was against open borders.
However, I already fairly strongly subscribe to cosmopolitan utilitarianism and see no problem with aggregating utility across borders. Most of my concerns with open borders are related to Chesterton's fence, and Caplan's counterarguments were in three forms:
1. Doubling GDP is so massive that it should override any conservatism prior.
2. The US historically had open borders (pre-1900) and did fine.
3. On the margin, increasing immigration in all the American data Caplan looked at didn't seem to have the catastrophic cultural/institutional effects that naysayers claim.
I find this insufficiently persuasive.

Let me outline the strongest case I'm aware of against open borders: Countries are mostly not rich and stable because of their physical resources, or because of the arbitrary nature of national boundaries. They're rich because of institutions and good governance. (I think this is a fairly mainstream belief among political economists.) These institutions are evolved, living things. You can't just copy the US constitution and expect to get a good government (IIRC, quite a few Latin American countries literally tried and failed).

We don't actually understand what makes institutions good. Open borders means the US population would ~double fairly quickly, and this is so "out of distribution" that we should be suspicious of the generalizability of studies that look at small marginal changes.

I think Caplan's case is insufficiently persuasive because 1) it's not hard for me to imagine situations bad enough to be worse than doubling GDP is good, 2) pre-1900 US was a very different country/world, and 3) this "out of distribution" concern is significant.
I would find Caplan's book more persuasive if he used non-US datasets more, especially data from places where immigration is much higher than in the US (maybe within the EU or ASEAN?).
I'm still strongly in favor of much greater labor mobility on the margin for both high-skill and low-skill workers. Only 14.4% of the American population are immigrants right now, and I suspect the institutions are strong enough that changing the number to 30-35% is net positive. [EDIT: Note that this is intuition rather than something backed by empirical data or explicit models]
I'm also personally in favor (even if it's negative expected value for the individual country) of a single country (or a few) trying out open borders for a few decades and for the rest of us to learn from their successes and failures. But that's because of an experimentalist social scientist mindset where I'm perfectly comfortable with "burning" a few countries for the greater good (countries aren't real, people are), and I suspect the government of most countries aren't thrilled about this.
Overall, 4/5 stars. Would highly recommend to EAs, especially people who haven't thought much about the economics and ethics of immigration.
If you email this to him, maybe adding a bit more polish, I'd give ~40% odds he'll reply on his blog, given how much he loves to respond to critics who take his work seriously.
It's not hard for me to imagine situations bad enough to be worse than doubling GDP is good
I actually find this very difficult without envisioning extreme scenarios (e.g. a dark-Hansonian world of productive-but-dissatisfied ems). Almost any situation with enough disutility to counter GDP doubling seems like it would, paradoxically, involve conditions that would reduce GDP (war, large-scale civil unrest, huge tax increases to support a bigger welfare state).
Could you give an example or two of situations that would fit your statement here?
Almost any situation with enough disutility to counter GDP doubling seems like it would, paradoxically, involve conditions that would reduce GDP (war, large-scale civil unrest, huge tax increases to support a bigger welfare state).
I think there was substantial ambiguity in my original phrasing, thanks for catching that!
I think there are at least four ways to interpret the statement.
1. Interpreting it literally: I am physically capable (without much difficulty) of imagining situations that are bad to a degree worse than doubling GDP is good.
2. Caplan gives some argument for a doubling of GDP that seems persuasive and claims this is enough to override a conservatism prior, but I'm not confident the argument is true/robust, and I think it's reasonable to believe there are possible consequences bad enough that even if I give the argument >50% probability (or >80%), this is not automatically enough to override a conservatism prior, at least not without thinking about it a lot more.
3. Assume by construction that world GDP will double in the short term. I still think there's a significant chance that the world will be worse off.
4. Assume by construction that world GDP will double, and stay 2x baseline until the end of time. I still think there's a significant chance that the world will be worse off.
To be clear, when writing the phrasing, I meant it in terms of #2. I strongly endorse #1 and tentatively endorse #3, but I agree that if you interpreted what I meant as #4, what I said was a really strong claim and I need to back it up more carefully.
Makes sense, thanks! The use of "doubling GDP is so massive that..." made me think that you were taking that as given in this example, but worrying that bad things could result from GDP-doubling that justified conservatism. That was certainly only one of a few possible interpretations; I jumped too easily to conclusions.
That was not my intent, and it was not the way I parsed Caplan's argument.
Do people have advice on how to be more emotionally resilient in the face of disaster?
I spent some time this year thinking about things that are likely to be personally bad in the near-future (most salient to me right now is the possibility of a contested election + riots, but this is also applicable to the ongoing Bay Area fires/smoke and to a lesser extent the ongoing pandemic right now, as well as future events like climate disasters and wars). My guess is that, after a modicum of precaution, the direct objective risk isn't very high, but it'll *feel* like a really big deal all the time.
In other words, being perfectly honest about my own personality/emotional capacity, there's a high chance that if the street outside my house is rioting, I just won't be productive at all (even if I did the calculations and the objective risk is relatively low).
So I'm interested in anticipating this phenomenon and building emotional resilience ahead of time so such issues won't affect me as much.
I'm most interested in advice for building emotional resilience for disaster/macro-level setbacks. I think it'd also be useful to build resilience for more personal setbacks (eg career/relationship/impact), but I naively suspect that this is less tractable.
The last newsletter from Spencer Greenberg/Clearer Thinking might be helpful:
Wow, reading this was actually surprisingly helpful for some other things I'm going through. Thanks for the link!
I think it is useful to separately deal with the parts of a disturbing event over which you have an internal or external locus of control. Let's take a look at riots:
I'm worried about a potential future dynamic where an emphasis on forecasting/quantification in EA (especially if it has significant social or career implications) will push people towards silence/vagueness in areas where they don't feel ready to commit to a probability forecast.
I think it's good that we appear to be moving in the direction of greater quantification and accountability for probability estimates, but there's a very real risk that people see this and then become scared of putting their loose thoughts/intuitive probability estimates on record. This may result in overall worse group epistemics because people hedge too much and are unwilling to commit to public probabilities.
See analogy to Jeff Kaufman's arguments on responsible transparency consumption:
Malaria kills a lot more people over age 5 than I would have guessed (still more deaths at ages 5 and under than over 5, but a much smaller ratio than I intuitively believed). See C70-C72 of GiveWell's cost-effectiveness estimates for AMF, which themselves come from the Global Burden of Disease study.
I've previously cached the thought that malaria primarily kills people who are very young, but this is wrong.
I think the intuition slip here is that malaria is a lot more fatal for young people. However, there are more older people than younger people.
In the Precipice, Toby Ord very roughly estimates that the risk of extinction from supervolcanoes this century is 1/10,000 (as opposed to 1/10,000 from natural pandemics, 1/1,000 from nuclear war, 1/30 from engineered pandemics and 1/10 from AGI). Should more longtermist resources be put into measuring and averting the worst consequences of supervolcanic eruption?
More concretely, I know a PhD geologist who's interested in doing an EA/longtermist career and is currently thinking of re-skilling for AI policy. Given that (AFAICT) literally zero people in our community currently works on supervolcanoes, should I instead convince him to investigate supervolcanoes at least for a few weeks/months?
If he hasn't seriously considered working on supervolcanoes before, then it definitely seems worth raising the idea with him.
I know almost nothing about supervolcanoes, but, assuming Toby's estimate is reasonable, I wouldn't be too surprised if going from zero to one longtermist researcher in this area is more valuable than adding an additional AI policy researcher.
The biggest risk here I believe is anthropogenic; supervolcanoes could theoretically be weaponized.
What will a company/organization that has a really important secondary mandate to focus on general career development of employees actually look like? How would trainings be structured, what would growth trajectories look like, etc?
When I was at Google, I got the distinct impression that while "career development" and "growth" were common buzzwords, most of the actual programs on offer were more focused on employee satisfaction/retention than growth. (For example, I've essentially never gotten any feedback on my selection of training courses or books that I bought with company money, which at the time I thought was awesome flexibility, but in retrospect was not a great sign of caring about growth on the part of the company).
Edit: Upon a reread I should mention that there are other ways for employees to grow within the company, eg by having some degree of autonomy over what projects they want to work on.
I think there are theoretical reasons for employee career growth being underinvested in by default. Namely, the costs of career growth are borne approximately equally by the employer and the employee (obviously this varies from case to case), while the benefits of career growth accrue mostly to the employee and their future employers.
This view will predict that companies will mostly only invest in general career development/growth of employees if one of a number of conditions are met:
I suppose that, in contrast to companies, academia is at least incentivized to focus on general career development (since professors are judged at least somewhat on the quality of their graduate students' outputs/career trajectories). I don't know how much better academia is than industry in practice, however. (It is at least suggestive that people often take very large pay cuts to attend graduate school.)
I think the question of how to do employee career development well is particularly interesting/relevant to EA organizations, since there's a sense in which developing better employees is a net benefit to "team EA" even if your own org doesn't benefit, or might die in a year or three. A (simplified) formal view of this is that effective altruism captures the value of career development over the expected span of someone continuing to do EA activities.*
*eg, doing EA-relevant research or policy work, donating, working at an EA org, etc.
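The "expected span" framing above can be made concrete with a toy model (the function name and all numbers here are hypothetical, purely for illustration): the community captures the stream of annual skill-gain benefits, weighted each year by the probability the person is still doing EA-relevant work.

```python
# Toy model of how much of an employee's career-development value accrues
# to "team EA": sum the annual benefit over each future year, weighted by
# the probability they are still doing EA-relevant work that year.
# All numbers are hypothetical.

def ea_captured_value(annual_benefit: float, retention: float, years: int) -> float:
    """Expected value captured if the person stays each year w.p. `retention`."""
    return sum(annual_benefit * retention ** t for t in range(1, years + 1))

# E.g. $10k/year of extra productivity from training, 85% year-on-year
# retention in EA-relevant work, 30-year horizon:
print(round(ea_captured_value(10_000, 0.85, 30)))
```

On these made-up inputs, the community captures on the order of $50-60k per trained person, which is the sense in which development is a net benefit to "team EA" even if the training org itself dies in a year or three.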
Definitely agreed. That said, I think some of this should probably be looked at through the lens of whether EA as a whole, rather than specific organizations, should help people with personal/career development, since the benefits will accrue to the larger community (especially if people only stay at orgs for a few years).

I'm personally in favor of expensive resources being granted to help people early in their careers. You can also see some of this in what OpenPhil/FHI funds; there's a big focus on helping people get useful PhDs (though this helps a small minority of the entire EA movement).
I find the unilateralist’s curse a particularly valuable concept to think about. However, I now worry that “unilateralist” is an easy label to tack on, and whether a particular action is unilateralist or not is susceptible to small changes in framing.
Consider the following hypothetical situations:
Putting aside whether the above actions were correct or not, in each of the above cases, have the protagonists acted unilaterally?
I think this is a hard question to answer. My personal answer is “yes,” but I think another reasonable person can easily believe that the above protagonists were fully cooperative. Further, I don’t think the hypothetical scenarios above were particularly convoluted edge cases. I suspect that in real life, figuring out whether the unilateralist’s curse applies to your actions will hinge on subtle choices of reference classes. I don’t have a good solution to this.
I really like this (I think you could make it top level if you wanted). I think these are cases of multiple levels of cooperation: if you're part of an organization that wants to be uncooperative (and you can't leave cooperatively), then you're going to be uncooperative with one of them.
Good point. Now that you bring this up, I vaguely remember a Reddit AMA where an evolutionary biologist made the (obvious in hindsight, but it never occurred to me at the time) claim that with multilevel selection, altruism on one level often means defecting on a higher (or lower) level. Which probably unconsciously inspired this post!
As for making it top level, I originally wanted to include a bunch of thoughts on the unilateralist's curse as a post, but then I realized that I'm a one-trick pony in this domain... hard to think of novel/useful things that Bostrom et al. haven't already covered!
I'm interested in a collection of backchaining posts by EA organizations and individuals that traces back from what we want -- an optimal, safe world -- to specific actions that individuals and groups can take. These can be any level of granularity, though the more precise, the better.
Interested in this for any of the following categories:
I think a sort-of relevant collection can be found in the answers to this question about theory of change diagrams. And those answers also include other relevant discussion, like the pros and cons of trying to create and diagrammatically represent explicit theories of change. (A theory of change diagram won't necessarily exactly meet your criteria, in the sense that it may backchain from an instrumental rather than intrinsic goal, but it's sort-of close.)
The answers in that post include links to theory of change diagrams from Animal Charity Evaluators (p.15), Convergence Analysis, Happier Lives Institute, Leverage Research, MIRI, and Rethink Priorities. Those are the only 6 research orgs I know of which have theory of change diagrams. (But that question was just about research orgs, and having such diagrams might be somewhat more common among non-research organisations.)
I think Leverage's diagram might be the closest thing I know of to a fairly granular backchaining from one's ultimate goals. It also seems to me quite unwieldy - I spent a while trying to read it once, but it felt annoying to navigate and hard to really get the overall gist of. (That was just my personal take, though.)
One could also argue that Toby Ord's "grand strategy for humanity" is a very low-granularity instance of backchaining from one's ultimate goals. And it becomes more granular once one connects the first step of the grand strategy to other specific recommendations Ord makes in The Precipice.
(I know you and I have already discussed some of this; this comment was partly for other potential readers' sake.)
 For readers who haven't read The Precipice, Ord's quick summary of the grand strategy is as follows:
I think that at the highest level we should adopt a strategy proceeding in three phases:

1. Reaching Existential Security
2. The Long Reflection
3. Achieving Our Potential
The book contains many more details on these terms and this strategy, of course.
It has occurred to me that very few such documents exist.
I'm curious what it looks like to backchain from something so complex. I've tried it repeatedly in the past and feel like I failed.
crossposted from LessWrong
There should maybe be an introductory guide for new LessWrong users coming in from the EA Forum, and vice versa.
I feel like my writing style (designed for EAF) is almost the same as that of LW-style rationalists, but not quite identical, and this is enough to make it substantially less useful for the average audience member there.
For example, this identical question is a lot less popular on LessWrong than on the EA Forum, despite naively appearing to appeal to both audiences (and indeed, if I were to guess at the purview of LW, being closer to the mission of that site than to that of the EA Forum).
I continue to be fairly skeptical that the all-things-considered impact of EA altruistic interventions differs by multiple (say >2) orders of magnitude ex ante (though I think it's plausible ex post). My main crux here is that I believe general meta concerns start dominating once the object-level impacts are small enough. This is all in terms of absolute value of impact: I think it's quite possible that some interventions have large (or moderately sized) negative impact, and I don't know how the language of impact in terms of multiplication best deals with this.
By "meta concerns", do you mean stuff like base rate of interventions, risk of being wildly wrong, methodological errors/biases, etc.? I'd love it if you could expand a bit.
Also, did you mean that these dominate when object-level impacts are big enough?
By "meta concerns", do you mean stuff like base rate of interventions, risk of being wildly wrong, methodological errors/biases, etc.?
Hmm I think those are concerns too, but I guess I was primarily thinking about meta-EA concerns like whether an intervention increases or decreases EA prestige, willingness of new talent to work on EA organizations, etc.
No. Sorry I was maybe being a bit confusing with my language. I mean to say that when comparing two interventions, the meta-level impacts of the less effective intervention will dominate if you believe the object-level impact of the less effective intervention is sufficiently small. Consider two altruistic interventions, direct AI Safety research and forecasting. Suppose that you did the analysis and think the object-level impact of AI Safety research is X (very high) and the impact of forecasting is only 0.0001X.
(This is just an example. I do not believe that the value of forecasting is 10,000 times lower than AI Safety research). I think it will then be wrong to think that the all-things-considered value of an EA doing forecasting is 10,000 times lower than the value of an EA doing direct AI Safety research, if for no other reason than because EAs doing forecasting has knock-on effects on EAs doing AI Safety. If the object-level impacts of the less effective intervention are big enough, then it's less obvious that the meta-level impacts will dominate. If your analysis instead gave a value of forecasting as 3x less impactful than AIS research, then I have to actually present a fairly strong argument for why the meta-level impacts may still dominate, whereas I think it's much more self-evident at the 10,000x difference level. Let me know if this is still unclear, happy to expand. Oh, also a lot of my concerns (in this particular regard) mirror Brian Tomasik's, so maybe it'd be easier to just read his post.
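The arithmetic behind this argument can be sketched in a few lines. This is a toy model with invented numbers (the 1% meta-effect size is an assumption for illustration, not a claim from the discussion above):

```python
# Toy model: all-things-considered impact = object-level impact + meta-level
# (knock-on) impact. All numbers are invented for illustration; "X" is an
# arbitrary unit of impact.

def total_impact(object_level, meta_level):
    """All-things-considered impact of an intervention."""
    return object_level + meta_level

X = 1.0          # object-level impact of the more effective intervention
meta = 0.01 * X  # assume knock-on effects are ~1% of the top intervention's impact

ais = total_impact(X, meta)                    # 1.01 * X
forecasting = total_impact(0.0001 * X, meta)   # 0.0101 * X

# The naive object-level ratio is 10,000x, but once the meta-level effects
# are included, the all-things-considered ratio collapses to ~100x.
ratio = ais / forecasting
```

The point of the sketch: when the object-level impact (0.0001X) is tiny relative to the meta-level term (0.01X), the meta term dominates the comparison; at a 3x object-level difference, it wouldn't.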
Thanks, much clearer! I'll paraphrase the crux to see if I understand you correctly:
If the EA community is advocating for interventions X and Y, then more resources R going into Y leads to more resources going into X (within about R/10^2).
Is this what you have in mind?
Yes, though I'm strictly more confident about the absolute value than about the change being positive (so more resources R going into Y can also eventually lead to fewer resources going into X, within about R/10^2).
And the model is that increased resources into main EA cause areas generally affects the EA movement by increasing its visibility, diverting resources from that cause area to others, and bringing in more people in professional contact with EA orgs/people - those general effects trickle down to other cause areas?
Yes, that sounds right. There are also internal effects on framing/thinking/composition that by themselves have flow-through effects that are plausibly >1% in expectation. For example, more resources going into forecasting may cause other EAs to be more inclined to quantify uncertainty and focus on the quantifiable, with both potentially positive and negative flow-through effects; more resources going into medicine- or animal-welfare-heavy causes will change the gender composition of EA; and so forth.
Thanks again for the clarification!
I think that these flow-through effects mostly apply to specific targets for resources that are more involved with the EA-community. For example, I wouldn't expect more resources going into efforts by Tetlock to improve the use of forecasting in the US government to have visible flow-through effects on the community. Or more resources going into AMF are not going to affect the community.
I think that this might apply particularly well to career choices.
Also, if these effects are as large as you think, it would be good to more clearly articulate what the most important flow-through effects are, and how we can improve the positives and mitigate the negatives.
I'm now pretty confused about whether normative claims can be used as evidence in empirical disputes. I generally believed no, with the caveat that for humans, moral beliefs are built on a scaffolding of facts, so when there isn't an immediately accessible empirical claim, it can sometimes be easier to respond to an absurd empirical claim with a moral claim that carries the gestalt sense of the underlying empirical beliefs.
I talked to a philosopher who disagreed, and roughly believed that strong normative claims can be used as evidence against more confused/less certain empirical claims, and I got a sense from the conversation that his view is much more common in academic philosophy than mine.
Would like to investigate further.
I haven't really thought about it, but it seems to me that if an empirical claim implies an implausible normative claim, that lowers my subjective probability of the empirical claim.
Updated version on https://docs.google.com/document/d/1BDm_fcxzmdwuGK4NQw0L3fzYLGGJH19ksUZrRloOzt8/edit?usp=sharing
Cute theoretical argument for #flattenthecurve at any point in the distribution
I think it's really easy to get into heated philosophical discussions about whether EAs overall use too much or too little jargon. Rather than try to answer this broadly for EA as a whole, it might be helpful for individuals to conduct a few quick polls to decide for themselves whether they ought to change their lexicon. Here's my Twitter poll as one example.
Economic benefits of mediocre local human preference modeling.
Epistemic status: Half-baked, probably dumb.
Note: writing is mediocre because it's half-baked.
Some vague brainstorming of economic benefits from mediocre human preferences models.
Many AI Safety proposals include understanding human preferences as one of their subcomponents. While this is not obviously good, human modeling seems at least plausibly relevant and good.
Short-term economic benefits often spur additional funding and research interest [citation not given]. So a possible question to ask is whether we can get large economic benefits from a system with the following properties (each assumption can later be relaxed):
1. Can run on a smartphone in my pocket
2. Can approximate simple preference elicitations at many times a second
3. Low fidelity, has both high false-positive and false-negative rates
4. Does better on preferences with lots of training data ("in-distribution")
5. Initially works better on simple preferences (preference elicitations that take me, say, 15 seconds to think about), but has continuous economic benefits from better and better models.
An *okay* answer to this question is recommender systems (ads, entertainment). But I assume those are optimized to heck already so it's hard for an MVP to win.
I think a plausibly better answer to this is market-creation/bidding. The canonical example is ridesharing like Uber/Lyft, which sells a heterogeneous good to both drivers and riders. Right now they have a centralized system that tries to estimate market-clearing prices, but imagine instead if riders and drivers bid on how much they're willing to pay/take for a ride from X to Y with Z other riders?
Right now, this is absurd because human preference elicitations take up time/attention for humans. If a driver has to scroll through 100 possible rides in her vicinity, the experience will be strictly worse.
But if a bot could report your preferences for you? I think this could make markets a lot more efficient, and also gives a way to price in increasingly heterogeneous preferences. Some examples:
1. I care approximately zero about cleanliness or make of a car, but I'm fairly sensitive to tobacco or marijuana smell. If you had toggles for all of these things in the app, it'd be really annoying.
2. A lot of my friends don't like/find it stressful to make small talk on a trip, but I've talked to drivers who chose this job primarily because they want to talk on the job. It'd be nice if both preferences are priced in.
3. Some riders like drivers who speak their native language, and vice versa.
A huge advantage of these markets is that "mistakes" are pricey but not incredibly so. That is, I'd rather not overbid for a trip that isn't worth it, but the consumer/driver surplus from pricing in heterogeneous preferences at all can easily make up for the occasional (or even frequent) mispricing.
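To make the bot-bidding idea concrete, here is a minimal sketch. Everything here is hypothetical (the attribute names, dollar amounts, and `adjusted_bid` helper are all invented for illustration): each user's "preference bot" adjusts a base willingness-to-pay for ride attributes, and a trade clears whenever the adjusted bid meets the driver's ask.

```python
# Minimal sketch of bot-mediated bidding for heterogeneous rides.
# All names and numbers are hypothetical, for illustration only.

def adjusted_bid(base, attributes, preferences):
    """Rider's willingness-to-pay after the bot prices in ride attributes.

    `preferences` maps an attribute to a dollar adjustment
    (negative = aversion, positive = willing to pay extra).
    """
    return base + sum(preferences.get(attr, 0.0) for attr in attributes)

# Rider: indifferent to car make/cleanliness, averse to smoke smell,
# willing to pay a bit extra for a quiet ride.
rider_prefs = {"smoke_smell": -5.0, "quiet_ride": 2.0}

drivers = [
    {"ask": 12.0, "attributes": ["smoke_smell"]},
    {"ask": 13.0, "attributes": ["quiet_ride"]},
]

base_bid = 15.0
matches = []
for d in drivers:
    bid = adjusted_bid(base_bid, d["attributes"], rider_prefs)
    if bid >= d["ask"]:  # a trade is feasible at any price in [ask, bid]
        matches.append((d["ask"], bid))

# The smoky car is filtered out (effective bid 10.0 < ask 12.0);
# the quiet ride clears (effective bid 17.0 >= ask 13.0).
```

The key design point is that no human scrolls through 100 options: the bot evaluates every candidate ride per second, and mispricing only costs a bounded amount per trip.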
There's probably a continuous extension of this idea to matching markets with increasingly sparse data (eg, hiring, dating).
One question you can ask is why it is advantageous to have this run on a client machine at all, instead of the aggregate human preference modeling that lots of large companies (including Uber) already do.
The honest high-level answer is that I guess this is a solution in search of a problem, which is rarely a good sign...
A potential advantage of running it on your smartphone (imagine a plug-in app that runs "Linch's Preferences" with an API other people can connect to) is that it legally makes the "Marketplace" idea for Uber and companies like Uber more plausible? Like right now a lot of them claim to have a marketplace except they look a lot like command-and-control economies; if you have a personalized bot on your client machine bidding on prices, then I think the case would be easier to sell.
On the forum, it appears to have gotten harder for me to do multiple quote blocks in the same comment. I now often have to edit a post multiple times so that quoted sentences are correctly in quote blocks and unquoted sections are not, whereas I don't recall having this problem in the past.
I'm going to guess that the new editor is the difference between now and previously. What's the issue you're seeing? Is there a difference between the previewed and rendered text? Ideally you could get this to repro on LessWrong's development server, which would be useful for bug reports, but no worries if not.
Cross-posted from Facebook
On the meta-level, I want to think hard about the level of rigor I want to have in research or research-adjacent projects.
I want to say that the target level of rigor I should have is substantially higher than for typical FB or Twitter posts, and way lower than research papers.
But there's a very wide gulf! I'm not sure exactly what I want to do, but here are some gestures at the thing:
- More rigor/thought/data collection should be put into it than the 5-10 minutes typical of a FB/Twitter post, but much less than the hundred or few hundred hours spent on papers.
- I feel like there are a lot of things that are worth somebody looking into for a few hours (more rarely, a few dozen), but nowhere near the level of a typical academic paper.
- Examples that I think are reflective of what I'm interested in are some of Georgia Ray's and Nuno Sempere's lighter posts, as well as Brian Bi's older Quora answers on physics (back when Quora was good).
- "Research" has the connotation of pushing the boundaries of human knowledge, but by default I'm more interested in pushing the boundaries of my own knowledge, or at least the boundaries of my "group's" knowledge.
- If the search for truthful things shakes out to have some minor implications for something no other human currently knows, that's great, but by default I feel like aiming for novelty is too constraining for my purposes.
- Forecasting (the type done on Metaculus or Good Judgment Open) feels like a fair bit of this. Rarely do forecasters (even/especially really good ones) discover something nobody already knows; rather, the difficulty comes almost entirely from finding evidence that's already "out there" somewhere in the world and then weighing the evidence and probabilities accordingly.
- I do think more forecasting should be done. But forecasting itself provides very few bits of information (just the final probability distribution on a well-specified problem). Often, people are also interested in your implicit model, the most salient bits of evidence you discovered, etc. This seems like a good thing to communicate.
- It's not clear what the path to impact here is.
Probably what I'm interested in is what Stefan Schubert calls "improving elite epistemics," but I'm really confused about whether/why that's valuable.

- Not everything I or anybody does has to be valuable, but I think I'd be less excited to do medium-rigor stuff if there's no or minimal impact on the world.
- It's also not clear to me how much I should trust my own judgment (especially on out-of-distribution questions, or questions much harder to numerically specify than forecasting).
- How do I get feedback? The obvious answer is from other EAs, but I take seriously worries that our community is overly insular.
- Academia, in contrast, has a very natural expert feedback mechanism in peer review. But as mentioned earlier, peer review presupposes a very high initial level of rigor that I'm usually not excited about achieving for almost all of my ideas.
- Also, on a more practical level, it might just be very hard for me to achieve paper-worthy novelty and rigor in all but a handful of ideas.
- The few times in the past I reached out to experts (outside the EA community) for feedback, I managed to get fairly useful stuff, but I strongly suspect this is easier for precise, well-targeted questions than for some of the other things I'm interested in.
- This also varies from field to field: for example, a while back I was able to get some feedback on questions like water rights, but I couldn't find public contact information for a climate modeling scientist after a modest search (presumably because the latter field is much more politicized these days).
- If not for pre-existing connections and connections-of-connections, I also suspect it'd be basically impossible to get ahold of infectious disease or biosecurity people to talk to in 2020.
- In terms of format, "blog posts" seems the most natural. But I think blog posts could mean anything from "Twitter post with slightly more characters" to "stuff Gwern writes 10,000 words on."
So this doesn't really answer the question of what to do about the time/rigor tradeoff.
Another question that is downstream of what I want to do is branding. Eg, some people have said that I should call myself an "independent researcher," but this feels kinda pretentious to me? Like when I think "independent research" I think "work of a level of rigor and detail that could be publishable if the authors wanted to conform to publication standards," but mostly what I'm interested in is lower quality than that? Examples of what I think of as "independent research" are stuff that Elizabeth van Nostrand, Dan Luu, Gwern, and Alexey Guzey sometimes work on (examples below).
Stefan Schubert on elite epistemics: https://twitter.com/StefanFSchubert/status/1248930229755707394
Negative examples (too little rigor):
- Pretty much all my FB posts?
Negative examples (too much rigor/effort):
- almost all academic papers
- many of Gwern's posts
- eg https://www.gwern.net/Melatonin
(To be clear, by "negative examples" I don't mean to associate them with negative valence. I think a lot of that work is extremely valuable to have; it's just that I don't think most of the things I want to do are sufficiently interesting/important to spend as much time on. Also, on a practical level, I'm not yet strong enough to replicate most work at that level.)