Ozzie Gooen

I'm currently researching forecasting and epistemics as part of the Quantified Uncertainty Research Institute.

Comments

Prioritization Research for Advancing Wisdom and Intelligence

I agree there are ways for it to go wrong. There's clearly a lot of poorly-thought-out stuff out there. Arguably, the motivations to create ML come from desires to accelerate "wisdom and intelligence", and… I don't really want to accelerate ML right now.

All that said, the risks of ignoring the area also seem substantial.

The clear solution is to give it a go, but to proceed somewhat slowly, and with extra deliberation.

In fairness, AI safety and bio risk research also have severe potential harms if done poorly (and sometimes even when done well). Now that I think about it, bio at least seems worse in this direction than "wisdom and intelligence"; it's possible that AI is too.

Prioritization Research for Advancing Wisdom and Intelligence

> One adjacent category which I think is helpful to consider explicitly (I think you have it implicit here) is 'well-informedness', which I motion is distinct from 'intelligence' or 'wisdom'.

That’s an interesting take.

When I was thinking about “wisdom”, I was assuming it would include the useful parts of “well-informedness”, or maybe, “knowledge”. I considered using other terms, like “wisdom and intelligence and knowledge”, but that got to be a bit much.

I agree it's still useful to explicitly flag narrower notions like "well-informedness".

Prioritization Research for Advancing Wisdom and Intelligence

> My guess is counterintuitive, but it is that these existing institutions, that are shown to have good leaders, should be increased in quality, using large amounts of funding if necessary.

I think I agree, though I can’t tell how much funding you have in mind.

Right now we have relatively few strong and trusted people, but lots of cash. Figuring out ways, even unusually extreme ways, of converting cash into either augmenting these people or getting more of them seems fairly straightforward to justify.

Prioritization Research for Advancing Wisdom and Intelligence

> EAs have less of an advantage in this domain.

I wasn’t actually thinking that the result of prioritization would always be that EAs end up working in the field. I would expect that in many of these intervention areas, it would be more pragmatic to just fund existing organizations.

My guess is that prioritization could be more valuable for directing money than for directing EA talent right now, because we just have so much money (in theory).

Prioritization Research for Advancing Wisdom and Intelligence

> It's not clear anyone should care about my opinion in "Wisdom and Intelligence"

I just want to flag that I very much appreciate comments, as long as they don’t use dark arts or aggressive techniques.

Even if you aren’t an expert here, your questions can act as valuable data as to what others care about and think. Gauging the audience, so to speak.

At this point I feel like I have a very uncertain stance on what people think about this topic. Comments help here a whole lot.

Prioritization Research for Advancing Wisdom and Intelligence

> Less directly, I think caution is good for other interventions, e.g. "Epistemic Security", "Cognitive bias research", "Research management and research environments (for example, understanding what made Bell Labs work)".

I'd also agree that caution is good for many of the listed interventions. To me, that seems to be even more of a case for more prioritization-style research though, which is the main thing I'm arguing for.

Prioritization Research for Advancing Wisdom and Intelligence

I agree that the existing community (and the EA community) represent much, if not the vast majority, of the value we have now. 

I'm also not particularly excited about lifehacking as a source for serious EA funding. I wrote the list to be somewhat comprehensive, and to encourage discussion (like this!), not because I think each area deserves a lot of attention.

I did think about "recruiting" as a wisdom/intelligence intervention. This seems more sensitive to the definition of "wisdom/intelligence" than other things, so I left it out here.

I'm not sure how extreme you're meaning to be here. Are you claiming something like,
> "All that matters is getting good people. We should only be focused on recruiting. We shouldn't fund any augmentation, like LessWrong / the EA Forum, coaching, or other sorts of tools. We also shouldn't expect further returns to things like these."

Prioritization Research for Advancing Wisdom and Intelligence

This tension is one reason why I called this "wisdom and intelligence", and tried to focus on that of "humanity", as opposed to just "intelligence", and in particular "individual intelligence".

I think that "the wisdom and intelligence of humanity" is much safer to optimize than "the intelligence of a bunch of individuals in isolation". 

If it were the case that "people all know what to do, they just won't do it", then I would agree that wisdom and intelligence aren't that important. However, I think these cases are highly unusual. From what I've seen, in most cases of "big coordination problems", there are considerable amounts of confusion, deception, and stupidity. 

Prioritization Research for Advancing Wisdom and Intelligence

Thanks for the link; I wasn't familiar with them.

For one, I'm happy for people to have a very low bar to post links to things that might or might not be relevant. 

Prioritization Research for Advancing Wisdom and Intelligence

+1 for Stefan's point. 

On "don't have much time left", this is a very specific and precise question. If you think that AGI will happen in 5 years, I'd agree that advancing wisdom and intelligence probably isn't particularly useful. However, if AGI happens to be 30-100+ years away, then it really gets to be. Even if there's a <30% chance that AGI is 30+ years away, that's considerable. 

In the very short time frames, "education about AI safety" seems urgent, though it's more tenuously "wisdom and intelligence".
