Basically, predictions about the future are fine as long as they include the caveat "unless we figure out something else." That caveat can't be assigned a meaningful probability because we can't know discoveries before we discover them; we can't know things before we know them.
Beautiful! We can't compute "something we haven't thought of" as simply "1 - all the things we've thought of".
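To put the point in probability terms: subtracting the known hypotheses from 1 only yields "everything else" if the listed hypotheses are exhaustive, and for genuinely unconceived discoveries we can never certify that. A minimal sketch, with hypothetical numbers:

```python
# Probabilities we've assigned to the outcomes we have actually thought of
# (hypothetical numbers for illustration).
known_outcomes = {"outcome_a": 0.5, "outcome_b": 0.3}

# The complement rule: residual mass left over after the known outcomes.
residual = 1 - sum(known_outcomes.values())  # roughly 0.2

# This residual is only "the probability of something we haven't thought of"
# if known_outcomes exhaustively partitions the possibilities. For unconceived
# discoveries, exhaustiveness is exactly what we cannot establish, so the
# number is an artifact of our bookkeeping, not a measured quantity.
```

The arithmetic is trivial; the objection in the comment above is that the precondition for the arithmetic (an exhaustive partition) is unavailable.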
I mistakenly included my response to another comment, I'm pasting it below.
I would guess the scientific breakthrough that led to nuclear weapons would have been almost impossible to predict unless you were Einstein or Einstein-adjacent.
Great point - Leo Szilard foresaw nuclear weapons and collaborated with Einstein to persuade FDR to start the Manhattan Project. Szilard would have done extremely well in a Tetlock scenario. However, this also supports my point: Szilard was able to predict successfully because he was privy to the relevant discoveries. The remainder of the task was largely engineering (again, not to belittle t...
I have to look at Tetlock again: there's a difference between predicting what will be determined to be the cause of Arafat's death (historical fact-collecting) and predicting how new discoveries will affect future politics. Nonetheless, I wouldn't be surprised if some people are better than others at predicting future events in human affairs. An example would be predicting that Moore's Law holds next year. In such a case, one could understand the engineering necessary to improve computer chips, perhaps understanding that productio...
An update that came from the discussion:
How about this: let's split future events into two groups. 1) Events that are not influenced by people and 2) Events that are influenced by people.
In 1, we can create predictive models, use probability, even calculate uncertainty. All the standard rules apply, Bayesian and otherwise.
In 2, we can still create predictive models, but they'll be nonsensical, because we cannot know how knowledge creation will affect them. We don't even need any fancy reasoning; it's already implied in the definition of terms like knowledge creation and discovery. You can't ...
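The group-1 case, where the standard rules do apply, can be illustrated with an ordinary Bayesian update. A minimal sketch with made-up numbers (estimating a coin's bias from flips):

```python
# Bayesian update for a group-1 event: estimating a coin's heads-probability.
# Hypotheses and uniform priors are made-up for illustration.
priors = {0.4: 1/3, 0.5: 1/3, 0.6: 1/3}

# Observe three heads in a row; likelihood of that data under each hypothesis.
likelihoods = {p: p**3 for p in priors}

# Bayes' rule: posterior is proportional to prior times likelihood, normalized.
unnormalized = {p: priors[p] * likelihoods[p] for p in priors}
total = sum(unnormalized.values())
posteriors = {p: w / total for p, w in unnormalized.items()}
# The posterior now favors the 0.6 hypothesis, as expected after three heads.
```

The machinery is well defined here because the hypothesis space is fixed in advance; the comment's claim is that for group 2, knowledge creation changes the hypothesis space itself, which is what the machinery can't accommodate.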
A Thanksgiving turkey has an excellent model predicting that the farmer wants it to be safe and happy. But an explanation of Thanksgiving traditions tells us far more about the risk of slaughter than the number of days the turkey has been fed and protected.
With nuclear war, we have explanations for why nuclear exchange is possible, including as an outcome of a conflict.
Just like with the turkey, we should pay attention to the explanation, not just try to make predictions based on past data.
With all of this, probability terminology is baked into the language, and it is hard to speak without incorporating it. The previous post was co-authored; I wanted to remove that phrase, but concessions were made.
Making an estimate about something you're unaware of is like guessing the likelihood of the discovery of nuclear energy in 1850.
I can put a number on the likelihood of discovering something totally novel, but attaching a number doesn't make it meaningful. A psychic could make quantified guesses and tell us about the factors behind that assessment, but that wouldn't make it meaningful either.
I'm saying the opposite: you can't rank the difficulty of unsolved problems if you don't know what's required to solve them. That's what yet-to-be-discovered means: you don't know the missing piece, so you can't compare.
It's not that "it happened this one time with Wiles, where he really knew a topic and was still way off in his estimate, and that's how it goes." The Wiles example shows that we are always in his shoes when contemplating the yet-to-be-discovered: we are completely in the dark. It's not that he didn't know; it's that he COULDN'T know, and neither could anyone else who hadn't made the discovery.
But such work would undoubtedly produce unanticipated and destabilizing discoveries. You can't grow knowledge in foreseeable ways, with only foreseeable consequences.
I'd take the bet, but the feeling that inclines me toward the affirmative says nothing about the actual state of the science/engineering. Even if I spend many hours researching the current state of the field, this will only change the feeling in my mind. I can assign that feeling a probability, tell others that it is "roughly informed," and enroll in Phil Tetlock's forecasting challenge. But none of this tells us anything about the currently unknown discoveries that need to be made in order to bring about col...
He was incentivized to decide whether to quit or to persevere (at the cost of other opportunities). For accuracy, all he needed was "likely enough to be worth it." And yet, at the moment when it should have been most evident what this likelihood was, he was so far off in his estimate that he almost quit.
Imagine if a good EA had stopped him in his moment of despair and encouraged him to use all the tools available to create the most accurate estimate; I bet he'd still have considered quitting. He might even have been more convinced that it was hopeless.
This seems like it's pretty weak evidence given that he did in fact continue.
Yes, but what I’m getting at is: how do we know there’s a limited amount of low-hanging fruit? As we make progress, doesn’t previously high-hanging fruit come within reach? AND, more progress opens more markets/fields.
It seems to me low-hanging fruit is a bad analogy because there’s no way to know the number of undiscovered fruit out there. Perhaps it’s infinite. Or perhaps it INCREASES the more we figure out.
My two cents: stagnation isn’t due to a dwindling supply of good ideas waiting to be discovered; it’s due to the stifling of free and open exploration by norms that promote the institutionalization of discovery.
How could it be that ideas are progressively harder to find AND we waited so long for the bicycle? How can we know how many undiscovered bicycles, i.e., low-hanging fruit, are out there?
It seems that as progress progresses and the adjacent possible expands, the number of undiscovered bicycles within easy reach expands too.
I think the idea of effective mask use has withstood sufficient criticism to warrant aggressive promotion, both to the public and to experts in the field. It may be a mistake, but compared to no mask at all (risk of infection, barriers to reentering society), it is hard to see it being a significant one. The potential upside is large: we may have a relatively cheap and safe countermeasure within reach.
I agree with the approach of individuals controlling spread through cheap, effective masks. If the portals of entry and exit for this virus are the mouth, nose, and eyes (fecal transmission is debatable), then if everyone contained transmission through these openings, the pandemic would be over.
There is a lot of talk about vaccines and treatments and seclusion, but these are complex, prone to failure, and have very clear negative/unintended consequences.
Effective masks are simple, can be implemented rapidly, confer benefits at the margin, and the negative consequences are ...
I had this same problem and finally cracked it (navigating the iOS podcast world stumps me).
Step 1: In the iOS Podcasts app, tap "Search" in the lower right and enter "econtalk."
Step 2: The app populates the archives going back to 2006. Tap the year you're looking for, such as 2007, and scroll to "Weingast on Violence, Power and a Theory of Everything."
Step 3: Tap the three dots to the upper right of the episode and choose "Download Episode."
Step 4: Repeat for all the other archived episodes you want.
Step 5: Now for the trick- ...
This is fantastically helpful, thank you so much for taking the time.
Makes me ponder the value of an "EA Curator." There's such an overwhelming amount of mind-bending content in the EA universe and its adjacent possible. This list of podcasts clearly only scratches the surface, yet I find myself wondering how I'm going to fit this in with the dozens of other podcast episodes, audiobooks, and print books I have on my plate, let alone other modes of discovery (and worse, how this at some point impinges on the time I have to do actual work on ideas...
Interesting. It sounds like you're possibly suggesting there's a taxonomy of ideas. Some ideas warrant simple experiments (in this case, a simple experiment would be to review the various EA threads and simply enter proposed ideas in a table online), others warrant further research (like some of the questions raised by your global warming example), etc. Am I describing this right? I'm guessing this must have been done; any ideas on where to look?
Perhaps it's worthwhile to review existing analyses of the question "What makes a productive idea?" Ultimately, this could result in a one-pager about what a good idea is, how to develop it, and how (when, and to whom) to pitch it.
While I completely see what you're saying, at the risk of sounding obtuse, I think the opposite of your opener may be true.
"People who do things are not, in general, idea constrained"
The contrary of this statement may be the fundamental point of EA (or at least a variant of it): People who do things in general (outside of EA) tend to act on bad ideas. In fact, EA is more about the ideas underlying what we do than it is about the doing itself. Millions of affluent people are doing things (going to school, work, upgrading their cars and homes, givi...
Thanks for the thoughtful comments, agree almost completely, particularly your closing points.
My main quibble is the comparison of talent vs. ideas as a bottleneck, where you say talent is 80% of the problem compared to ideas at 20%. I certainly agree that lots of weak ideas pose problems, but the trouble with this comparison is that the first step to recruiting more talent will be an idea. So, in a sense, the talent gap IS an idea gap. In fact, aside from blind luck, every improvement on what we have will first be an idea. Perhaps we shouldn't think o...
Like me, I suspect many EAs do a lot of "micro-advising" of friends and younger colleagues. (In medicine, this happens almost daily.) I know I'm an amateur, and I do my best to direct people to the available resources, but it seems like creating some basic pointers on how to give casual advice could be helpful.
Alternatively, I see the value in a higher activation energy for potentially reachable advisees: if they truly are considering adjusting their careers, they'll take the time to look at the official EA material.
Nonetheless, ...
1- The Singularity Is Near changed everything for me; it made me quit my job and go to med school. I've since purchased it for many people, but I no longer do. Instead, I have been sending people copies of Homo Deus by Yuval Noah Harari: broader scope, more sociology, psychology, and ethics.
2- The Selfish Gene (I think this moored me to reality more closely than Steven Pinker's work)
3- The Black Swan (Thinking, Fast and Slow, Freakonomics, Predictably Irrational, etc. are probably better explications of irrationality, while Taleb is a pretty clear victim of his own c...
I bet a more neglected aspect of polarization is the degree to which the left (which I identify with) literally hates the right for being bigots, or seeming bigots (agree with Christian Kleineidam below). This is the same mechanism of prejudice and hatred, with the same damaging polarization, just for different reasons.
There's much more energy devoted to addressing alt-right polarization than the not-even-radical left (many of my friends profess hatred of Trump voters qua Trump voters; it gives me the same pit-of-the-stomach feeling I get when I see blatant r...
Hello,
I would love some feedback on what I'm calling an "EA Idea Sounding Board"
I'm thinking of a call-in show and/or a message board where EAs suggest ideas to someone with experience in the EA landscape, perhaps an advisor at 80,000 Hours. It might go something like this:
An 80,000 Hours advisor takes calls from EAs who pitch their ideas for anything EA-related: an idea for a donation drive, for a new cause area, for a startup. The advisor hears out the idea and reframes and refines it to show both how it is promising and in what ...
I love this idea; so many spin-offs come to mind, though, as you describe, reaching the scale needed to reliably quantify the impact appears difficult.
I wonder if a way to boost follow-up and engagement could be to ask recipients to donate the value of the book itself to an effective charity: "This book cost $15; if you find it interesting, can you give $15 to AMF?"
It's still a bit tricky to track actual donations... maybe set up a simple webpage for book recipients to donate to AMF. You could create two groups, one that gets the book and the websi...
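The two-group design described above amounts to comparing a donation conversion rate between arms. A minimal sketch, with entirely hypothetical counts:

```python
# Hypothetical experiment: group A gets the book plus the donation webpage,
# group B gets the book alone. Counts are made up for illustration.
group_a = {"recipients": 100, "donated": 12}
group_b = {"recipients": 100, "donated": 5}

def conversion_rate(group):
    """Fraction of book recipients who went on to donate."""
    return group["donated"] / group["recipients"]

# The effect of the webpage is estimated as the difference in conversion rates.
lift = conversion_rate(group_a) - conversion_rate(group_b)
```

With counts this small the difference could easily be noise, so in practice the comparison would need a significance test or, as the comment notes, a larger scale than is probably feasible here.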
I love it. Creating lists of plausible outcomes is very valuable; we can leave aside the idea of assigning probabilities.