A complication: whole-brain emulation seeks to instantiate human minds, which are conscious by default, in virtual worlds. If I go by what Robin Hanson wrote in Age of Em, any suffering involved in that can presumably be edited away. Hanson also thinks whole-brain emulation may be the more likely first route to HLAI, which suggests it could be the "lazy solution" compared to mathematically-based AGI. However, in the s-risks talk at EAG Boston, a scenario much like this was given as an example of an s-risk.
Analogizing like this isn't my idea of a first-principles argument, so what I'm saying isn't airtight either, given the uncertainty over paths to AGI.
Quoting Nate's supplement from OpenPhil's review of "Proof-producing reflection for HOL" (PPRHOL):
there are basic gaps in our models of what it means to do good reasoning (especially when it comes to things like long-running computations, and doubly so when those computations are the reasoner’s source code)
How far along are you in narrowing these gaps, now that "Logical Induction" is a thing people can talk about? Are there variants of it that narrow these gaps, or are there planned follow-ups to PPRHOL that might improve our models? What kinds of experiments seem valuable for this subgoal?
I like both of them, but I'm wondering: why wait so long? Isn't there a way some group (maybe us) could build 10% of the kind of prediction market that gets us 90% of what we actually need? I need to think about this more, but waiting for Gnosis and Augur to mature seems risky, unless de-risking that bet means joining both projects to accelerate their development.