Greg_Colbourn

2086 karma · Joined Sep 2014

Bio

Founder of CEEALAR (née the EA Hotel; ceealar.org)

Comments (392)

I guess a failure mode of this is just going all out when exploring as well as exploiting, and optimising your pauses for reflection to maximise both! Perhaps ultimately there is no substitute for a well-cultivated garden of ends.

As mentioned in a footnote of the OP, an exception to this might be if your AGI timelines are short. Even then, though (as is the case for me), there is still uncertainty over what the right action is. I guess there is always more reading to do to figure things out better, but that requires time to digest and think, so it's still better to work smarter rather than harder in this case. And Tyler makes a point here about more well-adjusted and happy people potentially being better at coordination.

Great post, and sorry to hear about your dark night, Tyler. I think one thing that has given me pause in EA has been the explore-exploit tradeoff. Crucial Considerations, and the idea of working smarter rather than harder, mean that it's unlikely to be optimal to go all out on exploiting any given opportunity. Related is the idea of giving now vs giving later, which should probably apply to time as well as money - i.e. it could be better to grow your skills before spending them down in a sprint/burn-out, and perhaps better not to sprint/burn out at all, given that new Crucial Considerations may always be forthcoming.

On the other hand, I think that the real probabilities are higher, and am confused as to why the Future Fund haven't already updated upwards, given some of the writing already out there. I give a speculative reason here.

I agree that finding the cruxes of disagreement is important, but I don't think any of the critical quotes you present above are that strong. The reviews of semi-informative priors talk about error bars and precision (i.e. they critique the model), but don't actually give different answers. On explosive growth, Jones talks about the conclusion being contrary to his "intuitions", and acknowledges that "[his] views may prove wrong". Vollrath mentions "output and demand", but then talks about human productivity when discussing outputs, and admits that AI could create new in-demand products. If these are the best existing sources for lowering the Future Fund's probabilities, then I think someone should be able to do better.

I think 37% is pretty encouraging. Perhaps if it's run again in 5 years it could pass? There are signs we could be close to a tipping point, such as 100% vegan Burger Kings.

Great summary post re QRI's work as it relates to EA. I think EA should be paying more attention to this stuff as it seems ripe for generating (and hopefully resolving!) crucial considerations.

Re the 2 disagreement votes on the parent comment: is this disagreement with my asking the question(s) (/drawing attention to the fact that they could be true)? Or with answering the question(s) in the negative? If the latter, please link to bigger writing prizes.
