Ozzie Gooen

I'm currently working as a Research Scholar at the Future of Humanity Institute. I've previously co-created the application Guesstimate. Opinions are typically my own.


What are words, phrases, or topics that you think most EAs don't know about but should?

A very simple example might be someone saying, "What's up?" and the other person answering, "The sky." "What's up?" assumes a shared amount of context. To be relevant, it would make much more sense to interpret it as asking how the other person is doing.

There are a bunch of YouTube videos on the topic; I recall that some go into examples.

Thomas Kwa's Shortform

First, neat idea, and thanks for suggesting it!

Is there a reason this isn't being done? Is it just too expensive?

From where I'm sitting, there are a whole bunch of potentially highly useful things that aren't being done. After several years around the EA community, I've gotten a better model of why that is:

1) There's a very limited set of EAs who are entrepreneurial, trusted by funders, and have the necessary specific skills and interests to do many specific things. (Which respected EAs want to take a 5-to-20-year bet on field anthropology?)
2) It often takes a fair amount of funder buy-in to do new projects. This can take several years to develop, especially for a research area that's new.
3) Outside of OpenPhil, funding is quite limited. It's pretty scary and risky to start something new and go for it. You might get funding from EA Funds this year, but who's to say whether you'll have to fire your staff in 3 years?

On doing anthropology, I personally think there might be lower-hanging fruit in first engaging with other written moral systems we haven't engaged with. I'd be curious to get an EA interpretation of parts of Continental Philosophy, Conservative Philosophy, and the philosophies and writings of many of the great international traditions. That said, doing more traditional anthropology could also be pretty interesting.

Ozzie Gooen's Shortform

EA seems to have been doing a pretty great job attracting top talent from the most prestigious universities. While we attract a minority of the total pool, I imagine we get some of the most altruistic+rational+agentic individuals. 

If this continues, it could be worth noting that this could have significant repercussions for the areas outside of EA that we divert this talent from. We may be diverting a significant fraction of the future "best and brightest" in non-EA fields.

If this seems possible, it's especially important that we do a really, really good job making sure that we are giving them good advice. 

Solander's Shortform

I think this is one of the principles of GiveDirectly. I imagine that more complicated attempts at this could get pretty hairy (trying to get the local population to come up with large coordinated proposals like education reform), but it could be interesting.

Expansive translations: considerations and possibilities

Thanks! Some responses:

I'm not sure I understand exactly why you think of this as being of perhaps similar epistemic importance as forecasting.

I plan to get to this more in future posts. The TLDR is something like:
"Judgmental forecasting has a lot of room to grow in both research and technology. If it gets really great, that could be really useful for our shared epistemics. It would help us be more accurate about the world. Expansive translations have similar properties."

by "futuristic translation" did you mean any form of expansive translation as is written in your post

Correct. I think that these definitions will require a lot of technology and research to do well, so I'm labeling them as "futuristic".

The case I see for its importance is basically that it increases our capacity for sharing ideas more efficiently, which can improve general reasoning about complex issues and hasten progress. Is this mostly how you think of it?

Yep, that's a good way of putting it. 

One interesting point regarding how promising this is, is that either there would be an economic incentive for someone to create such an innovation or that there won't be enough public interest.

It's a common point around EA circles, but I think things are more complicated. Having worked in the tech sector for a while, and read a fair bit around the edges, I think the idea that "technological progress that's useful for industry is an efficient market" has large gaps in it. A lot of really ambitious technological development takes decades and begins in academic institutions long before corporate ones. I think doing great work in this area could require long-term systematic efforts, and the way things are right now, those seem very haphazard and spotty to me.

I think it's possible that much of "effective general scientific, academic, and technological progress" is a highly neglected area, even though it seems on the surface that things possibly can't be that bad. 

Judgement as a key need in EA

I'm doing research around forecasting, and I'd just note:
1) Forecasting seems useful for judgement, but it is very narrow (as currently discussed).
2) It seems quite damning that no other field currently stands out as an obvious way to improve judgement; right now not much else comes to mind. There's a lot of academia that seems like it could be good, but it's a whole lot of work to learn and the expected benefits aren't particularly clear.

If anyone else reading this has suggestions, please leave them in the comments.
