Ozzie Gooen

9876 karma · Berkeley, CA, USA

Bio

I'm currently researching forecasting and epistemics as part of the Quantified Uncertainty Research Institute.

Sequences (1)

Ambitious Altruistic Software Efforts

Comments (890)

Topic contributions (4)

I still think that EA Reform is pretty important. I believe that there's been very little work so far on any of the initiatives we discussed here.

My impression is that the vast majority of money that CEA gets is from OP. In practice, I think this means that CEA represents OP's interests significantly more than I'm comfortable with. While I generally like OP a lot, I think OP's priorities are fairly distinct from those of the regular EA community.

Some things I'd be eager to see funded:
- Work with CEA to find specific pockets of work that the EA community might prioritize, but OP wouldn't. Help fund these things.
- Fund other parties to help represent / engage / oversee the EA community.
- Audit/oversee key EA funders (OP, SFF, etc.), as these often aren't reviewed by third parties.
- Make sure that the management of key EA orgs, including the boards, is strong.
- Make sure that many key EA employees and small donors are properly taken care of and provided with support. (I think OP has reason to neglect this area, as it can be difficult to square with naive cost-effectiveness calculations.)
- Identify voices that want to tackle some of these issues head-on, and give them a space to do so. This could mean bloggers / key journalists / potential community leaders in the future.
- Help encourage or set up new EA organizations that sit apart from CEA but help oversee/manage the movement.
- Help out the Community Health team at CEA. This seems like a very tough job that could arguably use more support, some of which might be best done outside of CEA.

Generally, I feel like there's a very significant vacuum of leadership and managerial visibility in the EA community. I think that this is a difficult area to make progress on, but also consider it much more important than other EA donation targets. 

Thanks for bringing this up. I was unsure what terminology would be best here.

I mainly have in mind Fermi models and more complex but similar-in-theory estimations. But I believe this could extend gracefully to more complex models. I don't know of many great "ontologies of types of mathematical models," so I'm not sure how best to draw the line.

Here's a larger list that I think could work.

- Fermi estimates
- Cost-benefit models
- Simple agent-based models
- Bayesian models
- Physical or social simulations
- Risk assessment models
- Portfolio optimization models

I think this framework is probably more relevant for models estimating an existing or future parameter than for models optimizing some process, if that helps at all.
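For concreteness, here's a minimal sketch of the kind of Fermi / cost-benefit model I have in mind, written in Python. All of the parameter names, numbers, and distributions are made-up placeholders purely for illustration, not real estimates of anything.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 100_000  # Monte Carlo samples

# Illustrative placeholder inputs, each as a lognormal around a guessed median.
people_reached = rng.lognormal(mean=np.log(10_000), sigma=0.5, size=n_samples)
qalys_per_person = rng.lognormal(mean=np.log(0.02), sigma=0.8, size=n_samples)
program_cost = rng.lognormal(mean=np.log(250_000), sigma=0.3, size=n_samples)  # dollars

# Combine the inputs the same way a back-of-the-envelope estimate would.
total_qalys = people_reached * qalys_per_person
cost_per_qaly = program_cost / total_qalys

# Report an uncertainty interval rather than a single point estimate.
p5, p50, p95 = np.percentile(cost_per_qaly, [5, 50, 95])
print(f"Cost per QALY: ${p50:,.0f} (90% interval: ${p5:,.0f} to ${p95:,.0f})")
```

The point isn't the specific numbers; it's that even a tiny model like this forces the inputs, the combination step, and the resulting uncertainty to be explicit and inspectable.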

Ah, I didn't notice that at the time; it wasn't obvious from the UI (you need to hover over the date to see when it was posted).

Anyway, I'm happy this was resolved! Also, separately, kudos for writing this up; I'm looking forward to seeing where Metaculus goes over the next year and beyond.

I feel like the bulk of this is interesting, but the title and opening come off as more grandiose than necessary. 

[This comment is no longer endorsed by its author]

This is neat to see!

Obviously, some of these items are much more likely than others to kill 100M+ people.

WW3 seems like a big wild card to me. I'd be curious if there are any/many existing attempts to estimate what it would look like and how bad it would be.

I think animal welfare interventions are generally a more efficient/effective way of converting money into short-term (the next 50 years) well-being.

My impression is that the mean global health intervention does not significantly improve the long-term future. However, I could definitely be convinced otherwise, and that would get me to change my answer.

All that said, if one is focused on improving the long-term future, it seems suspicious to focus on global health, as opposed to other interventions that are clearly more focused on that. 

Around discussions of AI & Forecasting, there seems to be some assumption like:

1. Right now, humans are better than AIs at judgemental forecasting.
2. When humans are better than AIs at forecasting, AIs are useless.
3. At some point, AIs will be better than humans at forecasting.
4. At that point, when it comes to forecasting, humans will be useless.

This comes from a lot of discussion and some research comparing "humans" to "AIs" in forecasting tournaments.

As you might expect, I think this model is incredibly naive. To me, it's akin to asking questions like:
"Are AIs better than humans at writing code?"
"Are AIs better than humans at trading stocks?"
"Are AIs better than humans at doing operations work?"

I think it should be very clear that there's a huge period, in each cluster, where it makes sense for humans and AIs to overlap. "Forecasting" is not one homogeneous and singular activity, and neither is programming, stock trading, or doing ops. There's no clear line for automating "forecasting"; there is instead a very long list of different skills one could automate, with a long tail of tasks that would get increasingly expensive to automate.

Autonomous driving is a similar example. There's a very long road between "helping drivers with driver-assist features" and "complete level-5 automation, to the extent that almost no humans drive for work purposes anymore."

A much better model is a more nuanced one: break things down into smaller chunks, and figure out where and how AIs could best augment or replace humans at each of those. Or just spend a lot of time working with human forecasting teams to augment parts of their workflows.
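To make the decomposition idea concrete, here's a rough sketch in Python of what breaking "forecasting" into sub-tasks might look like. The task names and the guesses about where AI helps today are purely illustrative assumptions on my part, not a claim about the current state of the art.

```python
# Hypothetical decomposition of a forecasting workflow into sub-tasks,
# paired with an illustrative guess at the human/AI division of labor.
forecasting_subtasks = [
    ("question generation",    "AI drafts candidate questions; humans curate"),
    ("background research",    "AI summarizes sources and surfaces base rates"),
    ("question decomposition", "AI suggests sub-questions; humans pick the framing"),
    ("point estimation",       "AI gives first-pass numbers; humans adjust and calibrate"),
    ("aggregation",            "largely automatable with standard statistical methods"),
    ("rationale write-ups",    "AI drafts; humans edit and take responsibility"),
]

for task, division_of_labor in forecasting_subtasks:
    print(f"{task:>22}: {division_of_labor}")
```

Each row could be automated (or not) on its own timeline, which is the point: there's no single moment when "forecasting" flips from human to AI.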

Some ideas:
1. What are the biggest mistakes that EAs are making? Maybe have a few people give 30-minute talks or something.
2. A summary of the funding ecosystem and key strategic considerations around EA. Who are the most powerful actors, how competent are they, what are our main bottlenecks at the moment?
3. I'd like frank discussions about how to grow funding in the EA ecosystem, outside of the current donors. I think this is pretty key.
4. It would be neat to have a debate or similar on AI policy legislation. We're facing a lot of resistance here, and much of the situation is uncertain.
5. Is there any decent 5-10 year plan of what EA itself should be? Right now most of the funding ultimately comes from OP, and there's very little non-OP community funding or power. Are there ideas/plans to change this?

I generally think that EA Globals have had far too little disagreeable content. It feels like they've been very focused on making things seem positive for new people, rather than on candid, raw disagreements and improvement ideas.

Answer by Ozzie Gooen

I really would like to see more communication with the Global Catastrophic Risks Capacity Building Team at Open Philanthropy, given that they're the ones in charge of funding much of the EA space. Ideally there would be a lot of capacity for Q&A here. 
