Ozzie Gooen

I'm currently working as a Research Scholar at the Future of Humanity Institute. I've previously co-created the application Guesstimate. Opinions are typically my own.

Ozzie Gooen's Comments

Any response from OpenAI (or EA in general) about the Technology Review feature on OpenAI?

I think these comments could look like an attack on the author here. That may not be the intention, but I imagine many readers will take it that way.

Online discussions are really tricky. For every 1,000 reasonable people, there may be one who isn't, and whose definition of "holding them accountable" is much more intense than the rest of ours.

In the case of journalists, this is particularly bad even from a purely self-interested standpoint; it would be quite costly for any of our communities to get them upset.

I also think this is very standard stuff for journalists, so I really don't think this difficulty is specific to the particular author here.

I'm all for discussing the strengths and weaknesses of content, and for a broad understanding of how toxic the current media landscape can be. I'd just like to encourage us to stay very much on the civil side when discussing particular individuals.

Any response from OpenAI (or EA in general) about the Technology Review feature on OpenAI?

I feel like it's quite possible the headline and tone were changed a bit by an editor; it's quite hard to tell with articles like this.

I wouldn't single out the author of this specific article. I think similar issues happen all the time. It's a highly common risk of allowing media exposure, and a reason to often be hesitant about it (though there are significant benefits as well).

How to estimate the EV of general intellectual progress

Agreed, though the suggestions are appreciated!

Value-of-information (VOI) calculations in general seem like a good approach, but figuring out how best to apply them seems pretty tough.
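To make that concrete, here's a minimal sketch of the kind of calculation I mean: an expected-value-of-perfect-information (EVPI) estimate in Python. All the states, actions, probabilities, and payoffs below are made-up assumptions for illustration, not figures from the original post.

```python
# Minimal expected-value-of-perfect-information (EVPI) sketch.
# All probabilities and payoffs are illustrative assumptions.

p_state = {"intervention_works": 0.4, "intervention_fails": 0.6}

payoffs = {
    "fund":      {"intervention_works": 100, "intervention_fails": -20},
    "dont_fund": {"intervention_works": 0,   "intervention_fails": 0},
}

def expected_value(action):
    """Expected payoff of an action under current beliefs."""
    return sum(p * payoffs[action][s] for s, p in p_state.items())

# Best we can do acting now, without further research.
ev_now = max(expected_value(a) for a in payoffs)

# If a perfect study revealed the true state first, we'd pick the
# best action separately in each state.
ev_with_info = sum(
    p * max(payoffs[a][s] for a in payoffs) for s, p in p_state.items()
)

evpi = ev_with_info - ev_now
print(f"EV now: {ev_now}, EV with info: {ev_with_info}, EVPI: {evpi}")
# -> EV now: 28.0, EV with info: 40.0, EVPI: 12.0
```

The arithmetic itself is trivial; the hard part is the modeling step of choosing states, actions, and payoffs that actually reflect the question about intellectual progress.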

Request for Feedback: Draft of a COI policy for the Long Term Future Fund

I'm a bit surprised that recusal seems to be treated as a last resort in this document. Intuitively, I would have expected that because there are multiple members of the committee, many in very different locations, it wouldn't be that hard to have the "point of contact" be different from the "one who makes the decision." Similar to how, when one person recommends a candidate for employment, it can be easy enough to have other people conduct the interviews.

Recusal seems really nice in many ways. For instance, it would also make some things less awkward for the grantors, as their friends wouldn't need to worry as much about being judged by them.

Any chance you could explain a bit how the recusal process works, and why it's preferred not to use it more? Do other team members often feel unable to make decisions about these applicants without knowing them? Is it common for candidates to be known closely by many of the committee members, such that collective recusal would be infeasible?

Request for Feedback: Draft of a COI policy for the Long Term Future Fund

Kudos for writing up a proposal here and asking for feedback publicly!

Companies and nonprofits obviously have boards for situations like this; it would seem pretty reasonable to me for these funds to have boards that function in similar ways. I imagine it may be tricky to find people who are both really good and really willing. Having a board defers some amount of responsibility to it, and I imagine a lot of people wouldn't be excited to take on that responsibility.

I guess one quick take would be that the current proposed COI policy seems quite lax, and I imagine potential respected board members may be kind of uncomfortable if they were expected to "make it respectable". So I think a board may help, but I wouldn't expect it to help that much, unless perhaps it did something much more dramatic, like working with the team to come up with much larger changes.

I would personally be more excited about eventually having the resources to support a less lax policy without it being too costly; for instance, by taking actions to grow the resources dedicated to funding allocation. I realize this is a longer-term endeavor, though.

EA Forum Prize: Winners for December 2019

Would it have been reasonable for you to have secretly remained part of the process, or something along those lines?

Some options:

  1. You write in that if you win, you just don't accept the cash prize.
  2. You write in that if you win, they tell you, but don't tell anyone else, and select the next best person for the official prize.

I'd be curious how the signaling or public value of the explanation "Person X would have won 1st place, but removed themselves from the running" would compare to that of "Person X won 1st place, but gave up the cash prize."

Is learning about EA concepts in detail useful to the typical EA?

Quick take:

I think that in theory, if things were being done quite well and we had a lot of resources, we'd be in a situation where most EAs really don't need much beyond maybe 20-200 hours of EA-specific information, after which focusing more on productivity and career-specific skills would result in greater gains.

Right now things are messier. There's no single great textbook, and the theory is still very much in development. As such, getting up to speed probably does require spending more time, but I'm not sure how much more.

I don't know if you consider these "EA" concepts, but I do have a soft spot for many things that have somewhat come out of this community but aren't specific to EA. These are more things I really wish everyone knew, and they could take some time to learn. Some ideas here include:

  • "Good" epistemics (This is vague, but the area is complicated)
  • Bayesian reasoning (see the quick sketch after this list)
  • Emotional maturity
  • Applied Stoicism (very similar to managing one's own emotions well)
  • Cost-benefit analyses and related thinking
  • Pragmatic online etiquette
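As a quick sketch of the Bayesian-reasoning item above (all numbers are illustrative assumptions of mine, not anything from a particular source), the core move is updating a prior probability on evidence via Bayes' rule:

```python
# Minimal Bayesian update sketch. The numbers are illustrative
# assumptions, chosen only to show the mechanics.

prior = 0.01            # P(hypothesis) before seeing evidence
p_e_given_h = 0.9       # P(evidence | hypothesis)
p_e_given_not_h = 0.05  # P(evidence | not hypothesis)

# Total probability of seeing the evidence.
p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h

# Bayes' rule.
posterior = prior * p_e_given_h / p_e

print(f"P(hypothesis | evidence) = {posterior:.3f}")  # ~0.154
```

The habit matters more than the formula: a low base rate can dominate even fairly reliable evidence, which is easy to miss without doing the update explicitly.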

If we were in a culture firmly attached to beliefs around the human-sacrificing god Zordotron, I would think that education to carefully remove both the belief and many of the practices it causes would be quite useful, but also quite difficult. Doing so may be fairly orthogonal to learning about EA, but it would seem like a generally good thing.

I believe the common culture taught in schools and the media is probably not quite that bizarre, but it's definitely substantially incorrect in ways that are incredibly difficult to rectify.

Seeking Advice: Arab EA

Sounds good, best of luck with that! Writing posts on the EA Forum or LessWrong about things you find interesting, and taking part in the conversation, can be a good way of getting up to speed and getting comfortable with ongoing research efforts.

Seeking Advice: Arab EA

I just want to point out that this seems very, very difficult to me, and I would not recommend trusting that you'll "be safe" this way unless you really have no other choice.

I know of multiple very smart people who tried to stay anonymous, got caught, and had bad things happen to them. (For examples, see many of the books about "top hackers.")

Defining Effective Altruism

After more thought about definitions, I've come to believe that the presumption of authority here can be a bit misleading.

I'm all for proposing and encouraging definitions of Effective Altruism and other important topics, but the phrase "the definition of effective altruism" can be seen to presuppose authority and unanimity.

I'm sure that even now that this definition has been proposed, alternative definitions will continue to be used.

Of course, if there were to be one authority on the topic, it would be William MacAskill. But even with one main authority, the use of pragmatic alternative definitions could only be discouraged, not prevented; it would be difficult to call them incorrect or invalid. Dictionaries typically follow usage, not create it.

Also, to be clear, I have this general issue with a great deal of literature, so I'm not pointing this out because this piece is particularly bad, but rather because it's particularly important.

Maybe there could be a name like "the academic definition," "the technical definition," or "the definition according to the official CEA ontology." Sadly, these still use "the," which I'm hesitant about, but they are at least narrower.
