All of astupple's Comments + Replies

I love it. Creating lists of plausible outcomes is very valuable; we can leave alone the idea of assigning probabilities.

Basically, predictions about the future are fine as long as they include the caveat "unless we figure out something else." That caveat can't be ascribed a meaningful probability because we can't know discoveries before we discover them; we can't know things before we know them.

3
Noah Scales
2y
Well, my basic opinion about forecasting is that probabilities don't inform the person receiving the forecast. Before you commit to weighting possible outcomes, you commit to at least two mutually exclusive futures, X and not X. So what you supply is a limitation on possible outcomes: either X or not X. At best, you're aware of mutually exclusive, specific alternative futures. Then you can limit what not X means to something specific, for example, Y. So now you can say, "The future will contain X or Y." That sort of analysis is enabled by your causal model. As your causal model improves, it becomes easier to supply a list of alternative future outcomes.

However, the future is not a game of chance, and there's no useful interpretation that lets you supply meaningful weights to the future prediction of any specific outcome, unless the outcomes belong to a game of chance, where you're predicting rolls of a fair die, the choice of a hand from a deck of cards, etc. What's worse, that does not limit your feelings about what probabilities apply. Those feelings can seem real and meaningful because they let you talk about lists of outcomes and which you think are more credible.

As a forecaster, I might supply outcomes in a forecast that I consider less credible along with those that I consider more credible. But if you ask me which options I consider credible, I might offer a subset of the list. So in that way weights can seem valuable, because they let you distinguish which you think are more credible and which you can rule out. But the weights also obscure that information because they can scale that credibility in confusing ways.

For example, I believe in outcomes A or B, but I offer A at 30%, B at 30%, C at 20%, D at 10%, and E at 10%. Have I communicated what I intended with my weights, namely, that A and B are credible, that C is somewhat credible, but D and E are not? Maybe I could adjust A and B to 40% and 40%, but now I'm fiddling with the likelihoods of C, D, and E, wh
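A minimal numerical sketch of the 40%/40% adjustment mentioned at the end of this comment (illustrative only; the rescaling rule here, keeping C, D, and E in their original ratio, is one assumption among several possible): bumping A and B to 40% each leaves only 20% of probability mass for the rest, so their weights must shrink to fit.

```python
# Minimal sketch (illustrative only): reallocating weight to the "credible"
# outcomes A and B forces a rescaling of C, D, and E, even though nothing
# about my judgment of C, D, and E has changed.

original = {"A": 0.30, "B": 0.30, "C": 0.20, "D": 0.10, "E": 0.10}

revised = {"A": 0.40, "B": 0.40}
remaining = 1.0 - revised["A"] - revised["B"]           # mass left for C, D, E
rest_total = sum(original[k] for k in ("C", "D", "E"))  # 0.40 originally

for k in ("C", "D", "E"):
    # Keep C, D, E in their original 2:1:1 ratio, shrunk to fit the remainder.
    revised[k] = round(original[k] / rest_total * remaining, 2)

print(revised)  # {'A': 0.4, 'B': 0.4, 'C': 0.1, 'D': 0.05, 'E': 0.05}
```

The rescaled values for C, D, and E are now an artifact of how much mass went to A and B rather than an independent judgment about them, which is the kind of confusion described above.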

Beautiful! We can't determine "something we haven't thought of" as simply "1 - all the things we've thought of".
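A minimal sketch of the arithmetic being criticized here (the outcomes and numbers are made up for illustration): you can always compute a catch-all as 1 minus the listed weights, but that number carries no information about the discoveries it is supposed to cover.

```python
# Minimal sketch (outcomes and numbers are made up for illustration):
# a catch-all "something we haven't thought of" can always be computed as
# whatever probability mass the listed outcomes don't claim.

listed = {"A": 0.5, "B": 0.3, "C": 0.1}

catch_all = 1.0 - sum(listed.values())
print(round(catch_all, 2))  # 0.1, but this number says nothing about which
                            # unlisted outcome (a new discovery, say) it covers
```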

1
astupple
2y
Basically, predictions about the future are fine as long as they include the caveat "unless we figure out something else." That caveat can't be ascribed a meaningful probability because we can't know discoveries before we discover them; we can't know things before we know them.

I mistakenly included my response to another comment, I'm pasting it below.

I would guess the scientific breakthrough that led to nuclear weapons would have been almost impossible to predict unless you were Einstein or Einstein-adjacent.

Great point - Leo Szilard foresaw nuclear weapons and collaborated with Einstein to persuade FDR to start the Manhattan Project. Szilard would have done extremely well in a Tetlock scenario. However, this also conforms with my point - Szilard was able to successfully predict because he was privy to the relevant discoveries. The remainder of the task was largely engineering (again, not to belittle t... (read more)

I have to look at Tetlock again - there's a difference between predicting what will be determined to be the cause of Arafat's death (historical, fact-collecting) and predicting how new discoveries in the future will affect future politics. Nonetheless, I wouldn't be surprised if some people are better than others at predicting future events in human affairs. An example would be predicting that Moore's Law holds next year. In such a case, one could understand the engineering that is necessary to improve computer chips, perhaps understanding that productio... (read more)

1
tcelferact
2y
Yep, I think this is my difficulty with your viewpoint. You argue that there's no way to predict future human discoveries, and if I give you counterexamples your response seems to be 'that's not what I mean by discovery'. I'm not convinced the 'discovery-like' concept you're trying to identify and make claims about is coherent. Maybe a better example here would be the theory of relativity and the subsequent invention of nuclear weapons. I'm not a physicist, but I would guess the scientific breakthrough that led to nuclear weapons would have been almost impossible to predict unless you were Einstein or Einstein-adjacent. I agree we should be very scared of these sorts of breakthroughs, and the good news is many EAs agree with you! See Nick Bostrom's Vulnerable World Hypothesis for example. You don't need to argue against our ability to predict if/when all future discoveries will occur to make this case.

An update that came from the discussion:

How about this: let's split future events into two groups. 1) Events that are not influenced by people and 2) Events that are influenced by people.

In 1, we can create predictive models, use probability, even calculate uncertainty. All the standard rules apply, Bayesian and otherwise.

In 2, we can still create predictive models, but they'll be nonsensical. That's because we cannot know how knowledge creation will affect 2. We don't even need any fancy reasoning, it's already implied in the definition of terms like knowledge creation and discovery. You can't ... (read more)

3
Noah Scales
2y
There are a few distinctions that might help with your update:

* determinism: knowledge of some system of causes now allows prediction of their outcomes until the end of time
* closed world: we know all there is to know about the topic. Any search through our knowledge that fails to prove some hypothesis means that the hypothesis is false.
* defeasibility: new observations can contradict earlier beliefs and result in withdrawal of earlier beliefs from one's knowledge.

It seems like your use of the solar system example allows you to assume the first two distinctions apply to knowledge of the solar system. I'm not sure a physicist would agree with your choice of example, but I'm OK with it. Human reasoning is defeasible, but until an observation provides an update, we do not necessarily consider the unknown beyond making passive observations of the real world.

From my limited understanding of the philosophy behind classic EA epistemics, believing what you know leads to refusing new observations that update your closed world. Thus the emphasis on incomplete epistemic confidence most of the time. So the thinking goes, it ensures that you're not close-minded to always hold out that you think you might be wrong.

When running predictions, until someone provides a specific new item for a list of alternative outcomes (e.g., a new s-risk), the given list is all that is considered. Probabilities are divided among its alternatives when those alternatives are outcomes. The only exhaustive list of alternatives is one that includes a contradictory option, such as:

* A
* B
* C
* not A and not B and not C

and that covers all the possibilities. The interesting options are implicit in that last "not A and not B and not C". This is not a big deal, since it's usually the positive statements of options (A, B, or C) that are of interest.

So what's a discovery? It seems like, in your model, it's an alternative that is not listed directly. For example, given:

1. future

A Thanksgiving turkey has an excellent model that predicts the farmer wants him to be safe and happy. But an explanation of Thanksgiving traditions tells us a lot more about the risks of slaughter than the number of days the turkey has been fed and protected does.

With nuclear war, we have explanations for why nuclear exchange is possible, including as an outcome of a conflict.

Just like with the turkey, we should pay attention to the explanation, not just try to make predictions based on past data.

With all of this, probability terminology is baked into the language, and it is hard to speak without incorporating it. As for the previous post, it was co-authored; I wanted to remove that phrase, but concessions were made.

2
Jackson Wagner
2y
I agree with you, but once again I don't see the difference between the case of the turkey and nuclear war, versus the case of longtermism or AGI.

"With nuclear war, we have explanations for why nuclear exchange is possible, including as an outcome of a conflict."

Just the same with AGI -- we have explanations for why AGI seems possible, we have some evidence from scaling laws that describe how AI systems get better when given more resources, and ideas about what might motivate people to create more and more powerful AI systems, and why that might be dangerous, etc.

I am not an academically trained philosopher (rather, an engineer!), so I'm not sure what's the best way to talk about probability and make it clear what kind of uncertainty we're talking about. But in all cases, it seems that we should basically use a mixture of empirical evidence based on past experience (where available), and first-principles reasoning about what might be possible in the future. With some things -- mathematical theorems are a great example -- evidence might be hard to come by, so it might be very difficult to predict with precision.

But it doesn't seem like we are in fundamentally different, "unknowable" terrain -- it's more uncertain than nuclear war risk, which in turn is more uncertain than forecasting things like housing prices or wheat harvests, which in turn is more uncertain than forecasting that the sun will rise tomorrow. They all seem like part of the same spectrum, and the long-term future of civilization seems important enough that it's worth thinking about even amid high uncertainty.

Making an estimate about something you're unaware of is like guessing the likelihood of the discovery of nuclear energy in 1850.

I can put a number on the likelihood of discovering something totally novel, but applying a number doesn't mean it's meaningful. A psychic could make quantified guesses and tell us about the factors involved in that assessment, but that doesn't make it meaningful.

I'm saying the opposite - you can't rank the difficulty of unsolved problems if you don't know what's required to solve them. That's what yet-to-be-discovered means, you don't know the missing bit, so you can't compare.

It's not that "it happened this one time with Wiles, where he really knew a topic and was also way off in his estimate, and so that's how it goes." It's that the Wiles example shows us that we are always in his shoes when contemplating the yet-to-be-discovered, we are completely in the dark. It's not that he didn't know, it's that he COULDN'T know, and neither could anyone else who hadn't made the discovery.

But such work would undoubtedly produce unanticipated and destabilizing discoveries. You can't grow knowledge in foreseeable ways, with only foreseeable consequences.

I'd take the bet, but the feeling I have that inclines me toward choosing the affirmative says nothing about the actual state of the science/engineering. Even if I spend many hours researching the current state of the field, this will only affect the feeling I have in my mind. I can assign that feeling a probability, and tell others that the feeling I have is "roughly informed," and I can enroll in Phil Tetlock's forecasting challenge. But none of this tells us anything about the currently unknown discoveries that need to be made in order to bring about col... (read more)

9
Robert_Wiblin
2y
It's hard to follow your argument, but how is any of this different from "someone thought X was very unlikely but then X happened, so this shows estimating the likelihood of future events is fundamentally impossible and pointless." That line of reasoning clearly doesn't work. Things we assign low probability to in highly uncertain areas happen all the time — but that is exactly what we should expect and is consistent with our credences in many areas being informative and useful.

He was incentivized to decide whether to quit or to persevere (at the cost of other opportunities). For accuracy, all he needed was "likely enough to be worth it." And yet, at the moment when it should have been most evident what this likelihood was, he was so far off in his estimate that he almost quit.

Imagine if a good EA stopped him in his moment of despair and encouraged him, with all the tools available, to create the most accurate estimate: I bet he'd still consider quitting. He might even be more convinced that it's hopeless.

He was incentivized to decide whether to quit or to persevere (at the cost of other opportunities). For accuracy, all he needed was "likely enough to be worth it." And yet, at the moment when it should have been most evident what this likelihood was, he was so far off in his estimate that he almost quit.

This seems like it's pretty weak evidence given that he did in fact continue.

Yes, but what I'm getting at is: how do we know there's a limited number of low-hanging fruit? Or, as we make progress, doesn't previously high-hanging fruit come into reach? AND, more progress opens more markets/fields.

It seems to me low-hanging fruit is a bad analogy because there's no way to know the number of undiscovered fruit out there. And perhaps it's infinite. Or it INCREASES the more we figure out.

My two cents - stagnation isn't due to the supply of good ideas waiting to be discovered; it's due to the stifling of free and open exploration by our norms that promote the institutionalization of discovery.

1
jasoncrawford
3y
Maybe there's just a confusion with the metaphor here? I generally agree that there is a practically infinite amount of progress to be made.

How could it be that ideas are progressively harder to find AND we waited so long for the bicycle? How can we know how many undiscovered bicycles, i.e. low-hanging fruit, are out there?

It seems that as progress proceeds and the adjacent possible expands, the number of undiscovered bicycles within easy reach expands too.

2
jasoncrawford
3y
I think there are a couple things with the bicycle. One is that it depended on materials and manufacturing techniques much more than is obvious (and more than I even brought out in that post): bearings, hollow metal tubes, gears and chains, rubber, etc.

The other is that it's really just the overall story of progress: in a sense there was lots of low-hanging fruit for thousands of years before the Industrial Revolution. But if you want to understand progress now, 300 years in, when the markets are much more efficient, so to speak, the analysis is different. Now there are lots of fruit-pickers everywhere looking for fruit. So there's less obvious stuff lying around. Which is why we need to open up new technical fields, to discover whole new orchards of fruit (some of which will be low-hanging).

I think the idea of effective mask use has withstood sufficient criticism to warrant spreading aggressively, both to the public as well as experts in the field. It may be a mistake, but compared to no mask at all (risk of infection, barriers to reentering society) it is hard to see it being a significant mistake. The potential upside is significant. We may have a relatively cheap and safe countermeasure within reach.

I agree with the approach of individuals controlling spread through cheap, effective masks. If the portals of entry and exit of this virus are the mouth, nose, and eyes (fecal is debatable), then if everyone contained transmission through these openings, the pandemic would be over.

There is a lot of talk about vaccines and treatments and seclusion, but these are complex, prone to failure, and have very clear negative/unintended consequences.

Effective masks are simple, can be implemented rapidly, confer benefits at the margin, and the negative consequences are ... (read more)

I had this same problem and finally cracked it (navigating the iOS podcast world stumps me).

Step 1: In the iOS podcast app, tap "search" in the lower right and enter "econtalk".
Step 2: The app populates the archives going back to 2006; tap on the year you're looking for, such as 2007, and scroll for "Weingast on Violence, Power and a Theory of Everything."
Step 3: Tap the three dots to the upper right of the episode and choose "Download Episode."
Step 5: Repeat for all the other archived episodes you want.
Step 6: Now for the trick- ... (read more)

This is fantastically helpful, thank you so much for taking the time.

Makes me ponder the value of an "EA Curator." There's such an overwhelming amount of mind-bending content in the EA universe and its adjacent possible. This list of podcasts clearly only scratches the surface, yet I find myself wondering how I'm going to fit this in with the dozens of other podcast episodes, audiobooks, and print books I have on my plate, let alone other modes of discovery (and worse, how this at some point impinges on the time I have to do actual work on ideas... (read more)

3
MaxDalton
7y
Watch this space. CEA is working on putting together a set of ~20 interesting articles and talks that have come out of EA in recent years. Speaking for myself, not CEA, I'd also encourage you and others to use the EA forum as a place for linking to great EA content. I don't think we should just flood the forum with content - one of the great things about the forum is that it tends to have higher quality posts than e.g. Facebook. But linking to good content allows both for curation and discussion.

Interesting. It sounds like you're possibly suggesting there's a taxonomy of ideas. Some ideas warrant simple experiments (in this case, a simple experiment would be to review the various EA threads and simply enter proposed ideas in a table online), others warrant further research (like some of the questions raised by your global warming example), etc. Am I describing this right? I'm guessing this must have been done - any ideas on where to look?

Perhaps it's worthwhile to review the analysis of "What are productive ideas?" Ultimately, this could result in a one-pager about what a good idea is, how to develop it, and how (when, and to whom) to pitch it.

While I completely see what you're saying, at the risk of sounding obtuse, I think the opposite of your opener may be true.

"People who do things are not, in general, idea constrained"

The contrary of this statement may be the fundamental point of EA (or at least a variant of it): People who do things in general (outside of EA) tend to act on bad ideas. In fact, EA is more about the ideas underlying what we do than it is about the doing itself. Millions of affluent people are doing things (going to school, work, upgrading their cars and homes, givi... (read more)

3
MichaelPlant
7y
You don't need to have argued yourself out of the position. Here's the thought: ideas are important. Evidence in this direction is EA coming along and showing people their previous ideas were bad. Continuing in the same line, unless we think we have all the best ideas already - which would be frighteningly arrogant - that suggests continuing to develop our ideas would be very useful. Hence working on ideas is still very important for those who, as you said, already "get it".

Gworly is right that people aren't lacking ideas. You (astupple) were right that they are often lacking good ideas. Further, on this: This is a statement lots of philosophers, including those within EA, would disagree with. Indeed, the whole point of 80k is that your life is a long time and it's fitting to spend a non-trivial period reflecting on how to do good.
1
Gordon Seidoh Worley
7y
Yep :-) I don't but I suspect some folks around here do. Talk to Malcolm Ocean maybe?

Thanks for the thoughtful comments, agree almost completely, particularly your closing points.

My main quibble is the comparison of talent vs ideas as a bottleneck, where you say talent is 80% of the problem compared to ideas at 20%. I certainly agree that lots of weak ideas pose problems, but the trouble with this comparison is that the first step to recruiting more talent will be an idea. So, in a sense, the talent gap IS an idea gap. In fact, aside from blind luck, every improvement on what we have will first be an idea. Perhaps we shouldn't think o... (read more)

I suspect many EAs, like me, do a lot of "micro advising" to friends and younger colleagues. (In medicine, this happens almost on a daily basis). I know I'm an amateur, and I do my best to direct people to the available resources, but it seems like creating some basic pointers on how to give casual advice may be helpful.

Alternatively, I see the value in a higher activation energy for potentially reachable advisees: if they truly are considering adjusting their careers, then they'll take the time to look at the official EA material.

Nonetheless, ... (read more)

1- The Singularity is Near changed everything for me, made me quit my job and go to med school. I've since purchased it for many people, but I no longer do. Instead, I have been sending people copies of Homo Deus by Yuval Noah Harari. Broader scope, more sociology, psychology and ethics.
2- The Selfish Gene (I think this moored me to reality closer than Steven Pinker's work)
3- The Black Swan (Thinking Fast and Slow, Freakonomics, Predictably Irrational etc are probably better explications of irrationality, while Taleb is a pretty clear victim of his own c... (read more)

I bet a more neglected aspect of polarization is the degree to which the left (which I identify with) literally hates the right for being bigots, or seeming bigots (agree with Christian Kleineidam below). This is literally the same mechanism of prejudice and hatred, with the same damaging polarization, but for different reasons.

There's much more energy to address the alt-right polarization than the not-even-radical left (many of my friends profess hatred of Trump voters qua Trump voters; it gives me the same pit-of-the-stomach feeling when I see blatant r... (read more)

Hello,

I would love some feedback on what I'm calling an "EA Idea Sounding Board"

I'm thinking of a call-in show and/or a message board, where EAs suggest ideas to someone with experience in the EA landscape, perhaps an advisor at 80,000 Hours. It might go something like this:

An 80,000 Hours advisor takes calls from EAs who essentially pitch their ideas for anything EA-related: an idea for a donation drive, for a new cause area, for a startup. The advisor hears out the idea and reframes and refines it to show both how it is promising and in what ... (read more)

I love this idea, so many spin-offs come to mind, though as you describe, reaching the scale to reliably quantify the impact appears difficult.

I wonder if a way to boost followup and engagement could be to ask the recipients to donate the value of the book itself to an effective charity? "This book cost $15, if you find it interesting, can you give $15 to AMF?"

It's still a bit tricky to track actual donations... maybe setting up a simple webpage for book recipients to donate to AMF. You could create two groups, one that gets the book and the websi... (read more)