G Gordon Worley III

Director of Research at PAISRI

Doing good easier: how to have passive impact

A couple of comments.

First, I think there's something akin to creating a pyramid scheme for EA by leaning too heavily on this idea, e.g. "earn to give, or better yet get 3 friends to earn to give, and then you don't need to donate yourself because you had so much indirect impact!". I think david_reinstein's comment is in the same vein and makes the point well.

Second, this is a general complaint about the active/passive distinction that is not specific to your proposal but since your proposal relies on it I have to complain about it. :-)

I don't think the active/passive distinction is real (or at least not real enough to be useful). I think it just looks that way to people who only earn money by directly trading their labor for it. So-called passive income still requires work (otherwise money would earn more money with zero effort), just less of it. And that's the key. Thus I think it's better to talk about leverage rather than active/passive.

To say a bit more: trading labor for money/impact has 1:1 leverage by default, i.e. you get a linear return on your labor. For example, literally handing out malaria nets, literally serving food to the destitute, etc. Then you can do work that gets a bit of leverage but is still linear: maybe you can leverage your knowledge, network, etc. to get 1:n leverage. This might be working as a researcher, doing work for an EA meta-org, and so on. Then there are opportunities for non-linear leverage, where each unit of work gets quadratic or exponential returns. In the realm of money and "passive" income this is stuff like investing in or starting a company (I know, not what people usually think of as "passive" income). In EA this might be defining a new field, starting a new EA org, etc.
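
To make that concrete, here's a rough formalization (my notation is purely illustrative, not a claim about the exact shape of real returns). If $w$ is units of work and $R(w)$ is the return in money or impact, the three regimes look like:

$R(w) = c\,w$ (1:1 leverage: linear return with unit slope)
$R(w) = n\,c\,w$, with $n > 1$ (1:n leverage: still linear, but each unit of work is multiplied)
$R(w) \propto w^2$ or $e^{w}$ (non-linear leverage: returns compound)

Note that even in the high-leverage regimes $w$ never reaches zero; "passive" just means you've moved up the curve.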

Note, though, that we rely on people having impact in all these different ways for the economy/ecosystem to function. Yes, 1:1 leverage work would best be automated, but sometimes it can't be, and then it's a bottleneck and we need someone to do it. If you squeeze out too much of this type of work you get something like a high-income/impact trap: no one can be bothered to do important work because it isn't high-leverage enough!

So, I think people should try to have as much leverage as they can, but we also need to be careful about how we promote leverage, especially in EA, where there are fewer of the feedback systems the broader economy provides to help the ecosystem self-regulate, so that we don't end up with no one to do the essential, low-leverage work.

Free-spending EA might be a big problem for optics and epistemics

Maybe I can help Chris explain his point here, because I came to the comments to say something similar.

The way I see it, neartermists and longtermists are doing different calculations and so value money and optics differently.

Neartermists are right to be worried about spending money on things that aren't clearly improving measures of global health, animal welfare, etc., because they could in theory take that money and funnel it directly into work on that stuff, even if it had low marginal returns. They should probably feel bad if they wasted money on a big party, because that big party could have saved some kids from dying.

Longtermists are right not to be too worried about spending money. There are astronomical amounts of value at stake, so even millions or billions of dollars wasted don't matter if the spending ends up saving humanity from extinction. There may be near-term reasons related to the funding pipeline for them to care (hence optics), but in the long run it doesn't matter. Thus longtermists will want to be freer with money in the hopes of, for example, hitting on something that solves AI alignment.
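
To illustrate with a toy expected-value calculation (every number here is made up, just to show the shape of the reasoning): suppose a $1B spend raises the probability of averting extinction by $10^{-6}$, and the future holds on the order of $10^{16}$ lives. The expected lives saved are $10^{-6} \times 10^{16} = 10^{10}$, i.e. roughly $0.10 per expected life, which is why "wasted" spending barely registers in this frame.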

That both of these approaches try to exist under EA creates tension, since the different ways of valuing outcomes result in different recommended behaviors.

This is probably the best case for splitting EA in two: PR problems for one half stop the other half from executing.

Go Republican, Young EA!

Two thoughts:

  1. We should be careful about claiming the GOP is the "worse party". Worse for whom? Maybe they are doing things you don't like, but half the country thinks the Democrats are the worse party. We should be mindful of the state of normative uncertainty we are in. Neither party is really worse except by some measure, and because of how they are structured against each other, one party being worse means the other is better by that measure. If you wanted to make the case that one party or the other is better for EA, and framed the claim that way, I think it'd be fine.
  2. Yes, causing a party to lose its base is a great way to force the party to change, though note that this isn't an isolated system: changing the GOP will also change the Democratic Party, and that might not actually be for the better. Some might argue we were better off before Southern white voters were "betrayed" by the Democratic Party on civil rights legislation and abortion, since my understanding is that this caused the shift to the current party alignment and ended a long era of bipartisanship. Looking back, many have said they would have moved slower to avoid the long-term negative consequences of moving fast and then not really getting the desired outcome due to reactionary pushback. This suggests we might be better off trying for slow change, given the uncertain effects of intervening in a dynamic system.
Go Republican, Young EA!

"…to the fall of US democracy and a party that has much worse views on almost every subject under most moral frameworks."

This seems like a pretty partisan take that fails to adequately consider metaethical uncertainty. There's nothing in this statement that I couldn't imagine a sincere, well-intentioned Republican saying about Democrats, and being basically right (and wrong!) for the same reasons: right assuming their normative framework, wrong once we allow for normative uncertainty.

Go Republican, Young EA!

While I don't want to suggest that you, or any other person the GOP is hostile toward, has an obligation to work for them, part of the reason they are able to be hostile to various groups is that those groups are not part of how they get elected. If tomorrow the GOP were dependent on LGBTQ votes to win elections, they'd transform into a different party.

So while I'm not expert enough here to see how to change the current situation, I think there is something interesting about changing the incentive gradients for both parties to make them more inclusive (both construct an outgroup: for the GOP, minorities and foreigners; for the Democrats, rural and working-class white people), and I expect that to have positive outcomes.

How to Choose the Optimal Meditation Practice

The more I practice, the more I've come to believe that the only thing that really matters is that you do it. Not that you do it well by whatever standard one might judge, but just that you do it. 30 minutes of quiet time is a foundation on which more can be explored and discovered. You don't have to sit a special way, do a special thing with your mind, or do anything else in particular for it to be worth the effort, although all those things can help and are worth doing if you're called to them!

You should totally learn a bunch of techniques or practice a certain way if you feel called to it, but also I think there's a lot to be said for simply spending 30 minutes with the intention to be present with what is, even if that means 30 minutes spent with your mind racing or fidgeting. The time itself will work on you to allow you to find your own way.

.01% Fund - Ideation and Proposal

What does this funding source do that existing LT sources don’t?

Natural follow-up: why a new fund rather than convincing an existing fund to adopt and emphasize the >0.01% xrisk-reduction criterion?

Nuclear attack risk? Implications for personal decision-making

Even if he wants to do that, his power is not absolute. I'd expect (or at least hope) his generals would step in if he tried something like that, perhaps using it as grounds for a coup.

Nuclear attack risk? Implications for personal decision-making

I'm not super worried. Maybe this is because I'm old enough to have grown up with the perception that nuclear war could happen at any time and unexpectedly kill us all. The current threat level feels like a return to the Cold War: something could happen, but MAD still works, and Putin, like everyone else, has nothing to gain from all-out nuclear war but something to gain from playing chicken. So we should expect a lot of posturing but probably no real action, except by accident.

I think the largest nuclear risk comes from tactical nukes being used in the conflict zone. I would expect Putin to use them if he felt desperate enough, especially since he would be using them on Ukrainian soil. But presumably no nukes would be deployed against NATO countries or Russia itself, since that would trigger all-out nuclear retaliation. So most of the nuclear risk probably falls on people literally within Ukraine.

What psychological traits predict interest in effective altruism?

Yes, I suppose I left out non-English-speaking countries. I should have made my claim more precise: growth has slowed in English-speaking countries, where the ideas have already had time to saturate and reach most of the people receptive to them.

I forget where I got this from. I'm sure I can dig something up, but I seem to recall other posts on this forum showing that the growth of EA in places where it was already established had slowed.
