Ozzie Gooen

I'm currently working as a Research Scholar at the Future of Humanity Institute. I've previously co-created the application Guesstimate. Opinions are typically my own.

Ozzie Gooen's Comments

Announcing the EA Virtual Group

Sure thing. I'm less concerned with the name than with the possibly-too-varied collection of people it brings in, for the sake of the project. I imagine you'll get a better sense as you start it, though.

Announcing the EA Virtual Group

Kudos for taking on an initiative like this!

I think trying to have this be the one EA virtual group will be difficult. If all EAs were in one city, there couldn't be a single EA meetup for that city; it would be too large.

There is also a really tricky issue of who will attend. EAs aren't one unit; there are a bunch of pockets with various relationships to and opinions of each other. I think aiming for "absolutely everyone" will prove challenging, if that is the goal. For instance, experienced researchers typically don't enjoy spending a lot of time with very new people, as most of the questions are very basic.

I'd probably encourage you to think more about either specializing in some subtopic, or identifying a few key members who represent what you want the social group to grow toward.

I've found that a lot of these social/reading groups amount to something like: "These 3 people really like talking to each other and are similar in a few key ways, and the group can grow to include others who share some of those similarities."

I've done something similar (.impact, a while back), and also hosted a few EA groups in the past; these ideas come from those experiences.

Good luck!

Against opposing SJ activism/cancellations

I really don't like this about the voting system. My read is that you (Chichiko) provided some points on one side of an uncomfortable discussion, and most readers seem to overall agree with the other side. My impression is that they used their downvotes to voice their high-level opinion, rather than because they found your specific points to be bad.

I feel quite strange about this, but it seems we're in some kind of meta-level argument about censorship: any points in favor of occasional censorship quickly get censored. Downvoting this piece so heavily is kind of doing exactly that.

External evaluation of GiveWell's research

Oh man, happy to have come across this. I'm a bit surprised people remember that article. I was one of the main people who set up the system; that was a while back.

I don't know specifically why it was changed. I left 80k in 2014 or so and haven't discussed this with them since. I could imagine some reasons why they stopped it though. I recommend reaching out to them if you want a better sense.

This was done when the site was a custom Ruby on Rails setup, and the feature required a fair bit of custom coding. Writing quality was more variable then than it is now: there were several newish authors, and it was much earlier in the research process. I also remember that the scores originally disagreed a lot between evaluators, but over time (the first few weeks of use) they converged a fair bit.

After I left they migrated to WordPress, which I assume would have made it a fair bit of effort to set up a similar system. The blog posts also seem to have become less important than they used to be, in favor of the career guide, coaching, the podcast, and other things. And the quality has become a fair bit more consistent, from what I can tell as an onlooker.

The ongoing costs of such a system are considerable. First, it just takes a fair bit of time from the reviewers. Second, unfortunately, the internet can be a hostile place for transparency; there are trolls and angry people who will actively search through details and then point them out without the proper context. I think this review system was kind of radical, and I can imagine it not being very comfortable to maintain unless it clearly justified the effort.

I'm of course sad it's no longer in place, but can't really blame them.

CEA's Plans for 2020

I'm pretty torn about this. I agree that this was a failure, but going too far in the other direction seems like a loss of opportunity. My ideal would be something like a very competent and large CEA, or another competent and large organization, spearheading a bunch of new EA initiatives; I think there's enough potential work to absorb an additional 30–1,000 full-time people. I'd prefer several small groups to one poorly managed big group, but in general I don't trust small groups all that much for this kind of work in the long run. Major strategic action requires a lot of coordination, and that is really difficult with many small groups.

My take is that the failures mentioned were mostly failures of expectation-setting, rather than decisions that were bad in the ideal. If CEA could have done all these things well, that would have been the ideal scenario to me. The projects often seemed quite reasonable; it just seemed like CEA didn't quite have the necessary abilities at those points to deliver on them.

Referencing the comments above: I think "let's make sure our organization runs well before thinking too much about expanding dramatically" is a very legitimate strategy, and my guess is that, given the circumstances, it's a very reasonable one as well. But some part of me is still screaming, "How can we get EA infrastructure to grow much faster?"

Perhaps more intense growth, or at least bringing in several strong new product managers, could become more of a plan in 1–2 years or so.

Any response from OpenAI (or EA in general) about the Technology Review feature on OpenAI?

I think these comments could look like an attack on the author here. That may not be the intention, but I imagine many readers will take it that way.

Online discussions are really tricky. For every 1,000 reasonable people, there could be 1 who's not reasonable, and whose definition of "holding them accountable" is much more intense than the rest of ours.

In the case of journalists this is particularly bad even on selfish grounds; it would be quite costly for any of our communities to get them upset.

I also think this is very standard stuff for journalists, so I really don't think the specific author here is particularly relevant to this difficulty.

I'm all for discussing the strengths and weaknesses of content, and for broad understanding of how toxic the current media landscape can be. I'd just encourage us to stay very much on the civil side when discussing particular individuals.

Any response from OpenAI (or EA in general) about the Technology Review feature on OpenAI?

I feel like it's quite possible that the headline and tone were changed a bit by an editor; it's quite hard to tell with articles like this.

I wouldn't single out the author of this specific article. I think similar issues happen all the time; it's a highly common risk of media exposure, and a reason to often be hesitant about it (though there are significant benefits as well).

How to estimate the EV of general intellectual progress

Agreed, though the suggestions are appreciated!

VOI calculations in general seem like a good approach, but figuring out how to best apply them seems pretty tough.
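
To make that concrete, here's a minimal sketch of an expected-value-of-perfect-information (EVPI) calculation. The states, projects, and payoffs are purely illustrative assumptions I made up for the example, not figures from anywhere:

```python
# Toy expected-value-of-perfect-information (EVPI) calculation.
# All states, projects, and payoffs are made-up illustrative numbers.

# P(state): our uncertainty over which world-state we're in.
p_states = {"optimistic": 0.5, "pessimistic": 0.5}

# Payoff of each project under each state.
payoffs = {
    "project_a": {"optimistic": 100, "pessimistic": 0},
    "project_b": {"optimistic": 40, "pessimistic": 40},
}

def expected_value(project):
    """Expected payoff of a project under current uncertainty."""
    return sum(p * payoffs[project][s] for s, p in p_states.items())

# Without new information: commit now to the project with the best EV.
ev_without_info = max(expected_value(proj) for proj in payoffs)

# With perfect information: learn the state first, then pick the best
# project for that state; average over states.
ev_with_info = sum(
    p * max(payoffs[proj][s] for proj in payoffs)
    for s, p in p_states.items()
)

evpi = ev_with_info - ev_without_info
print(f"EV without info: {ev_without_info}")  # 50.0
print(f"EV with info:    {ev_with_info}")     # 70.0
print(f"EVPI:            {evpi}")             # 20.0
```

On these toy numbers, learning the true state before committing is worth 20 units, so research costing less than that would be worth funding. The hard part in practice is exactly what's tough above: estimating the state probabilities and payoffs in the first place.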

Request for Feedback: Draft of a COI policy for the Long Term Future Fund

I'm a bit surprised that recusal seems to be treated as a last resort in this document. Intuitively, I would have expected that because there are multiple members of the committee, many in very different locations, it wouldn't be that hard to have the "point of contact" be different from the "one who makes the decision". It's similar to how, when one person recommends a candidate for employment, it can be easy enough to have different people run the interviews.

Recusal seems really nice in many ways. For one, it would make some things less awkward for the grantors, as their friends wouldn't need to worry as much about being judged by them.

Any chance you could explain a bit how the recusal process works, and why it's preferred not to use it here? Do other team members often feel unable to make decisions about these candidates without knowing them? Is it common that candidates are known closely by many of the committee members, such that collective recusal would be infeasible?

Request for Feedback: Draft of a COI policy for the Long Term Future Fund

Kudos for writing up a proposal here and asking for feedback publicly!

Companies and nonprofits obviously have boards for similar situations; giving these funds boards that function in similar ways would seem pretty reasonable to me. I imagine it may be tricky to find people who are both really good and really willing, though. Having a board defers some amount of responsibility to its members, and I imagine a lot of people wouldn't be excited to take on that responsibility.

One quick take: I think the currently proposed COI policy seems quite lax, and I imagine potential respected board members may be somewhat uncomfortable if they were expected to "make it respectable". So I think a board may help, but I wouldn't expect it to help that much, unless perhaps it did something much more dramatic, like working with the team to come up with much larger changes.

I would personally be more excited about eventually having the resources needed to support a less lax policy without it being too costly; for instance, by taking actions to grow the resources dedicated to funding allocations. I realize this is a longer-term endeavor, though.
