Ozzie Gooen

I'm currently working as a Research Scholar at the Future of Humanity Institute. I've previously co-created the application Guesstimate. Opinions are typically my own.

Comments

If I pay my taxes, why should I also give to charity?

Like Larks, I'm happy that work is being put into this. That said, I find this issue quite frustrating to discuss, because I think a fully honest discussion would take a lot more words than most people would have time for.

“Since I already pay my fair share in taxes, I don’t need to give to charity”

This is the sort of statement that has multiple presuppositions that I wouldn't agree with.

  • I pay my "fair share" in taxes
  • There's such thing as a "fair share"
  • There is some fairly objective and relevant notion of what one "needs to do"

The phrase is about as alien to me, and as far from my belief system, as an argument saying,

The alien Zordon transmits that Xzenefile means no charity.

One method of dealing with the argument above would be something like,

"Well, we know that Zordon previously transmitted Zerketeviz, which implies that signature Y12 might be relevant, so actually charity is valid."

But my preferred answer would be,
"First, I need you to end your belief in this Zordon figure".

The obvious problem is that this latter approach would take a good amount of convincing, but I wanted to put it out there.

As an EA, Should I renounce my US citizenship?

I'm not very familiar with investment options in the UK, but there are of course many investment options in the US. I believe that being a citizen of the US helps a fair bit for some of these options. 

My impression is that getting full citizenship in both the US and the UK is generally extremely difficult, so if you ever changed your mind, reversing the decision would be quite a challenge.

One really nice benefit of having both citizenships is that it gives you a lot of flexibility. If either country suddenly becomes much more preferable for some reason or another (imagine some tail risk, like a political disaster of some sort), you have the option of easily going to the other.

You also need to account for how the US might treat you if you do renounce citizenship. My impression is that they can be quite unfavorable to those who do this (particularly if they think it's for tax reasons), whether by pursuing their assets, making it difficult to return to the US for any reason, or other measures.

I would be very hesitant to renounce citizenship of either country until you've done a fair amount of research on the downsides.

https://foreignpolicy.com/2012/05/17/could-eduardo-saverin-be-barred-from-the-u-s-for-life/

"Good judgement" and its components

I've been thinking about this topic recently. One question that comes to mind: How much of "good judgement" do you think is explained by g/IQ? My quick guess is that they are heavily correlated.

My impression is that people with "good judgement" match closely with the people that hedge funds really want to hire as analysts, or who make strong executives or product managers.

Is there evidence that recommender systems are changing users' preferences?

(1) The difference between preferences and information seems like a thin line to me. When groups are divided about abortion, for example, which cluster would that fall into? 

It feels fairly clear to me that the media facilitates political differences, as I'm not sure how else these could be relayed to the extent they are (direct friends/family is another option, but wouldn't explain quick and correlated changes in political parties). 

(2) The specific issue of prolonged involvement doesn't seem hard to believe. People spend lots of time on YouTube. I've definitely gotten lots of recommendations to the same clusters of videos. There are only so many clusters out there.

All that said, my story above is fairly different from Stuart's. I think his is more of "these algorithms are a fundamentally new force with novel mechanisms of preference change". My claim is that media sources naturally change the preferences of individuals, so of course if algorithms have control in directing people to media sources, this will be influential in preference modification. Here "preference modification" basically means, "I didn't used to be an intense anarcho-capitalist, but then I watched a bunch of videos, and now I identify strongly with the movement."

However, the issue of "how much do news organizations actively optimize for preference modification to increase engagement, whether intentionally or unintentionally?" is more vague.

Is there evidence that recommender systems are changing users' preferences?

There's a lot of anecdotal evidence that news organizations essentially change users' preferences. The fundamental story is quite similar. It's not clear how intentional this is, but there seem to be many cases of people becoming extremized after watching/reading the news (now that I think about it, this seems like a major factor in most of these situations).

I vaguely recall Matt Taibbi complaining about this in the book Hate Inc. 

https://www.amazon.com/Hate-Inc-Todays-Despise-Another/dp/B0854P6WHH/ref=sr_1_3?dchild=1&keywords=Matt+Taibbi&qid=1618282776&sr=8-3

Here are a few related links:

https://nymag.com/intelligencer/2019/04/i-gathered-stories-of-people-transformed-by-fox-news.html
https://www.salon.com/2018/11/23/can-we-save-loved-ones-from-fox-news-i-dont-know-if-its-too-late-or-not/

If it turns out that news channels change preferences, it seems like a small leap to suggest that recommender algorithms that get people onto news programs lead to changing their preferences. Of course, one should have evidence of the magnitude and so on.

What are the highest impact questions in the behavioral sciences?

I've done a bit of thinking on this topic, main post here:
https://www.lesswrong.com/posts/vCQpJLNFpDdHyikFy/are-the-social-sciences-challenging-because-of-fundamental

I'm most excited about fundamental research in the behavioral sciences, just ideally done much better. I think the work of people like Joseph Henrich, David Graeber, and Robin Hanson has been useful and revealing. It seems to me like right now our general state of understanding is quite poor, so what I imagine as minor improvements in particular areas feel less impactful than just better overall understanding.

A Comparison of Donor-Advised Fund Providers

This looks really useful, many thanks for the writeup. I'd note that I've been using Vanguard for regular investments and found the website annoying and the customer support quite bad; there would be long periods where they wouldn't offer any support because things were "too crowded". I think most people underestimate the value of customer support, in part because it is most valuable in tail-end situations.

Some quick questions:
- Are there any simple ways of making investments in these accounts that offer 2x leverage or more? Are there things here that you'd recommend?
- Do you have an intuition around when one should set up a Donor-Advised Fund? If there are no minimums, should you set one up once you hit, say, $5K in donations that won't be spent in a given tax year?
- How easy is it for others to invest in one's Donor-Advised Fund? Like, would it be really easy to set up your own version of EA Funds?

Announcing "Naming What We Can"!

I think the phrases "Research Institute", and particularly "...Existential Risk Institute", are a best practice and should be used much more frequently.

Centre for Effective Altruism -> Effective Altruism Research Institute (EARI)
Open Philanthropy -> Funding Effective Research Institute (FERI)
GiveWell -> Short-termist Effective Funding Research Institute (SEFRI)
80,000 Hours -> Careers that are Effective Research Institute (CERI)
Charity Entrepreneurship -> Charity Entrepreneurship Research Institute (CERI 2)
Rethink Priorities -> General Effective Research Institute (GERI)
Center for Human-Compatible Artificial Intelligence -> Berkeley University AI Research Institute (BUARI)
CSER -> Cambridge Existential Risk Institute (CERI 3)
LessWrong -> Blogging for Existential Risk Institute (BERI 2)
Alignment Forum -> Blogging for AI Risk Institute (BARI)
SSC -> Scott Alexander's Research Institute (SARI)

New Top EA Causes for 2021?

Maybe, Probabilistically Good?

Some quick notes on "effective altruism"

I think this is a good point. That said, I imagine it's quite hard to really tell.

Empirical data could be really useful here. Online experimentation in simple cases, or maybe we could even have some university chapters try out different names and see if we can infer any substantial differences.
