
There’s no doubting the intellectual horsepower EAs bring to the table.

However, I’m confused by some of the cost-benefit decisions EA makes about which topics deserve investigation.

Specifically, it seems like being 0.1% more "weird" could make EA 100x more effective.

For example, consider the X-risk of climate change.

One would expect that, since the now-openly-admitted existence of UFOs proves the possibility of paradigm-level breakthroughs in energy production, the logical next steps would be to review the information the US government has already disclosed, assess whether it points to breakthroughs in physics, energy, or propulsion, and fund follow-up research accordingly.

But I’ve never seen steps like these mentioned by EAs, even though they’re an obvious course of action, depend exclusively on information freely disclosed by the US government itself, are indisputably high-impact (they could revolutionize clean energy), and are near-zero-cost to investigate given the wealth of free information available.

The possibility of developing breakthrough energy tech could also be interpreted negatively, as a new X-risk. Such technology might be so powerful that it would be impossible to handle responsibly, producing the same quandaries as the proliferation of nuclear weapons (or worse). In that case, EA might consider petitioning governments to refuse to investigate UFO technology.

Even if there’s only a 0.1% chance that aircraft no earthly government can explain might yield some breakthrough in physics, energy, or technology, the potential benefits (or costs, if interpreted negatively) make the topic worthy of investigation by EA.

Decades of previous research by UFO scientists bring the cost of an initial investigation to near zero. It therefore seems impossible to justify neglecting the UFO topic on the grounds that UFO technology is so unlikely to matter that not even a free investigation is merited.
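To make that cost-benefit claim concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is a hypothetical placeholder rather than an estimate I'm defending; the point is only structural: as investigation cost approaches zero, even a 0.1% chance of an enormous payoff leaves the expected value of investigating positive.

```python
# Back-of-the-envelope expected-value sketch for investigating a "weird" topic.
# Every figure below is a hypothetical placeholder chosen to illustrate the
# structure of the argument, not a defended estimate.

p_breakthrough = 0.001        # the post's "0.1% chance" of a breakthrough
value_if_breakthrough = 1e12  # placeholder: value of revolutionized clean energy, in $
cost_of_investigation = 1e5   # placeholder: cost of sifting existing free research, in $

expected_value = p_breakthrough * value_if_breakthrough - cost_of_investigation
print(f"Expected value of investigating: ${expected_value:,.0f}")
# With these placeholders: 0.001 * 1e12 - 1e5 = $999,900,000
```

The output is only as good as the placeholder inputs, of course, which is precisely why the inputs deserve investigation rather than dismissal.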

EA’s neglect of high-impact “weird” topics is not an isolated incident.

Others include:

  1. Claims of suppressed cures for cancer, free energy, water-powered cars, etc. As with UFOs, the “highly unlikely” argument seems moot, given that the wealth of public information on sites like RexResearch gives these topics an extremely low cost of investigation. A higher-cost version might look like Venture Science. The Institute for Venture Science “simultaneously funds multiple research groups worldwide for each selected proposal,” based on their “potential for instigating dramatic beneficial change.” Sounds like EA's mission! The premise is the same as in venture capital: invest in many 0.1%-chance-of-paradigm-shift ideas, with the hope that one or two will succeed (see the sketch after this list). Why shouldn’t this model work in science? Why aren’t EAs using it?
  2. Gandhian nonviolence. Studies have shown that America’s population has virtually zero influence over the actions of its government. Gandhi demonstrated that “it is possible for a single individual to defy the whole might of an unjust empire.” Teaching his tactics could empower populations around the world to persuade their governments to reduce X-risks like climate change and World War III. No “0.1% chance” argument is needed here: we already know it works.
  3. Spiritual reality. EA seems to assume spiritual reality is nonexistent or irrelevant. But consider the Manifesto for Post-Materialist Science, signed by over 400 scientists and professors, which states: “We believe that the sciences are being constricted by dogmatism, and in particular by a subservience to the philosophy of materialism, the doctrine that matter is the only reality and that the mind is nothing but the physical activity of the brain.” Even if there’s only a 0.1% chance these scientists are correct, the possibility would change the entire question of what constitutes “effectiveness,” and likely “altruism” as well. For example, it seems self-evident, even now, that in a world of growing technological power, evil (however defined) creates a growing X-risk. If “goodness” or “holiness” also meaningfully exist, that could redefine effectiveness as a battle for holiness and against evil.
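Since the venture-style premise in point 1 is quantitative, here is a small sketch of how a portfolio of long shots behaves. The 0.1% per-idea probability is the post's own figure; the portfolio size of 1,000 is a hypothetical placeholder.

```python
# Portfolio-of-long-shots sketch: if each funded idea independently has a
# 0.1% chance of a paradigm shift, what does funding many of them buy?
# The per-idea probability is the post's own figure; the portfolio size
# is a hypothetical placeholder.

p_success = 0.001  # assumed per-idea chance of a paradigm shift
n_ideas = 1000     # placeholder number of ideas funded

expected_successes = p_success * n_ideas
p_at_least_one = 1 - (1 - p_success) ** n_ideas

print(f"Expected breakthroughs: {expected_successes:.1f}")
print(f"P(at least one breakthrough): {p_at_least_one:.1%}")
# With these placeholders: 1.0 expected breakthroughs and a ~63% chance of at least one.
```

With those placeholders, funding 1,000 such ideas yields one expected breakthrough and roughly a 63% chance of at least one, which is the venture capital logic in miniature.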

Even one breakthrough in any of these areas could be epochal. At the same time, so much research has already been done on them that sifting through it for overlooked gems would be an extremely low-cost endeavor. (And of course, the Gandhi breakthrough has already been made.)

Therefore, I’m confused by EA’s cost-benefit analysis on topics like these. 

However, these topics do share a common thread: divergence from the prevailing worldview of dominant scientific and media institutions. None of those voices are making the obvious call for UFO-based breakthrough energy and propulsion research, Gandhian nonviolence, or spiritual science, either. But if EA worldviews are so unanimously aligned with dominant and pervasive institutions like these, then what are EAs for?

It seems logical that only breakthrough divergences from establishment thinking could hope to yield breakthrough effectiveness by comparison.

Thank you for reading!

(Special thanks to Sam H. Barton for help with revisions, proofreading, steelmanning, etc.)

Comments

I just wanted to say I appreciate you writing this, and I agree the world ought to tolerate and celebrate weirdness more than we currently do. Break-out thinking is inconvenient and useless most of the time, but extremely beneficial some of the time. Obviously, weird beliefs ought to be put to the test like any others, but we should celebrate them too, and especially safeguard their generation. I have heard we have become less welcoming to unorthodox worldviews, though that may be inaccurate.

A nitpick I have is with your particular examples: spiritual reality and UFOs are both famous and have lots of people researching them. I presume all the good evidence would already have been found, and there isn't much proof. (But for that very reason they are illustrative examples, easily parsed.) I'm more positive towards totally unheard-of things.

In principle, at least. But I'm not terribly good at judging against the grain, even though I wish I were, and I try to be.

So thank you, all you high-openness, low-agreeableness, prospecting personalities.

Thank you so much for this comment! I really appreciate your appreciation.

I like your point about the topics I chose being famous. If I were to rewrite this article, I might center it more around "the X-risk of our institutions being drastically wrong or intellectually dishonest." I suspect there's too much trust among EAs in the conclusions of the institutions that tell us "there's not much proof" of UFOs or spiritual reality. These conclusions are simply not true, according to the experts in the (exiled and disrespected) academic fields devoted to them.

Consider "The x-risk of exiling Galileo."

The question isn't only "is there proof?" It's also "if there's proof, would they tell you?"

EAs seem to think "yes, 100%, end of story." I think no, or at least that the "no" option is an X-risk worth investigating and mitigating.

For example: How would we falsify the hypothesis that "scientific institutions are intellectually honest"?

If they were NOT honest, you might find:
- Reputation-based culture (as though social status had anything to do with epistemic status)
- Entire fields ostracized on the basis of their conclusions, instead of on the basis of their rigor
- Significant numbers of well-credentialed experts speaking out against intellectual bias

As you can tell, I haven't given these falsifiability criteria a ton of thought yet, but all three are easy to demonstrate. See the Manifesto for Post-Materialist Science link in the "Spiritual reality" point above.

Cheers again my friend, your comment made my day. :)

I think we'll have to agree to disagree a little bit here, but we agree on the central bit: new evidence must be considered on its own merits, and scientific conclusions must be accepted, however strange and distasteful they are.

But let me share my favorite example of this problem in science:

"For no bias can be more constricting than invisibility--and stasis, inevitably read as absence of evolution, had always been treated as a non-subject. How odd, though, to define the most common of all palaeontological phenomena as beyond interest or notice! Yet paleontologists never wrote papers on the absence of change in lineages before punctuated equilibrium granted the subject some theoretical space. And, even worse, as paleontologists didn't discuss stasis, most evolutionary biologists assumed continual change as a norm, and didn't even know that stability dominates the fossil record."

  • Stephen Jay Gould & Niles Eldredge, "Punctuated Equilibrium Comes of Age" (1993)