

One can value research and find it informative or worth doing without being convinced of every view of a given researcher or team. Open Philanthropy also sponsored a contest to surface novel considerations that could affect its views on AI timelines and risk. The winners mostly presented conclusions or considerations on which AI would be a lower priority, but that doesn't imply that the judges or the institution changed their views very much.

At large scale, information can be valuable enough to buy even if it only modestly adjusts proportional allocations of effort. The minimum bar for funding a research project with hundreds of thousands or millions of dollars presumably isn't that one pivots billions of dollars on the results with near-certainty.

> 1. there not being enough practically accessible matter available (even if we only ever need a finite amount), and

This is what I was thinking about. If I need a supply of matter set aside in advance to be able to record/receive an answer, no finite supply suffices. Only an infinite brain/tape, or an infinite pile of tape-making resources, would suffice.

If the resources are created on demand ex nihilo, and in such a way that the expansion processes can't just be 'left on', you could try to jury-rig around it.

I personally think unbounded utility functions don't work, and I'm not claiming otherwise here; the comment above is about the thought experiment.

> Now, there’s an honest and accurate genie — or God or whoever’s simulating our world or an AI with extremely advanced predictive capabilities — that offers to tell you exactly how  will turn out.[9] Talking to them and finding out won’t affect  or its utility, they’ll just tell you what you’ll get.

This seems impossible for the possibilities that account for ~all the expected utility (without which it's finite). You can't fit enough bits in a human brain or lifetime (or all accessible galaxies, or whatever). Your brain would have to be expanded infinitely (any finite size wouldn't be enough). And if we're giving you an actually infinite brain, the argument that infinite expectations over finite outcomes are more conservative than actual infinities goes away.

I do want to point out that the results here don't depend on actual infinities (infinite universe, infinitely long lives, infinite value), which is the domain of infinite ethics. We only need infinitely many possible outcomes and unbounded but finite value. My impression is that this is a less exotic/controversial domain (although I think an infinite universe shouldn't be controversial, and I'd guess our universe is infinite with probability >80%).

Alone and directly (not as a contributing factor to something else later), enough below 0.1% that I evaluate nuclear interventions based mainly on their casualties and disruption, not extinction. I would (and have) supported them on the same kind of metric as GiveWell uses, not on extinction risk.

In the event of an all-out WMD war (including with rogue AGI as a belligerent) that leads to extinction, nukes could be a contributing factor combined with bioweapons and AI (strategic WMD war raises the likelihood of multiple WMDs being used together).

>It's plausible humans will go extinct from AI. It's also plausible humans will go extinct from supervolcanoes. 

Our primitive and nontechnological ancestors survived tens of millions of years of supervolcano eruptions (not to mention mass extinctions from asteroid/comet impacts), and our civilization's ability to withstand them is unprecedentedly high and rapidly increasing. That's not plausible; it's enormously remote, well under 1/10,000 this century.

I think there are whole categories of activity that are not being tried by the broader world but that people focused on the problem attend to, with big impacts in both bio and AI. This kind of work has its own diminishing-returns curve.

The thing to watch is whether the media attention translates into action, with more than a few hundred people working on the problem as such rather than getting distracted, and with governments prioritizing it even when it conflicts with competing goals (like racing to the precipice). One might have thought Covid-19 meant that GCBR pandemics would stop being neglected, but that doesn't seem right. The Biden administration asked for Congressional approval of a pretty good pandemic prevention bill (very similar to what EAs have suggested), but it was rejected because pandemic prevention is still seen as a low priority. And engineered pandemics remain off the radar, with not much improvement even after a recent massive pandemic.

AIS has always had outsized media coverage relative to the number of people actually doing something about it, and that may continue.

I actually do, every so often, go over the talks from the past several EAGs on YouTube, and find that it works better. Some important additional benefits are turning on speedup and subtitles, being able to skip forward or bail more easily if a talk turns out to be bad, and not being blocked from watching two good simultaneous talks.

In contrast, a lot of people really love in-person meetings compared to online video or phone.
