G Gordon Worley III

Director of Research at PAISRI

G Gordon Worley III's Comments

What values would EA want to promote?

At its heart, EA seems to naturally tend to promote a few things:

  • a larger moral circle is better than a smaller one
  • considered reasoning ("rationality") is better than doing things for other reasons alone
  • efficient generation of outcomes is better than inefficient generation, even when the efficient option is less appealing at an emotional level

I don't know that any of these are what EA should promote, and I'm not sure anyone can unilaterally decide what is normative for EA. So instead I offer these as the norms I think EA is currently promoting in fact, regardless of what anyone thinks EA should be promoting.

Ramiro's Shortform

One challenge will be that any attempt to time donations based on economic conditions risks becoming a backdoor attempt to time the market, which is notoriously hard.

Democracy Promotion as an EA Cause Area
EA organizations are also less likely to be perceived as biased or self-interested actors.

I think this is unlikely. EAs disproportionately come from wealthy democratic nations and those who have reason to resist democratic reform will have an easy time painting EA participation in democracy promotion as a slightly more covert version of foreign-state-sponsored attempts at political reform. Further, EAs are also disproportionately from former colonizing states that have historically dominated other states, and I don't think that correlation will be ignored.

This is not to say I necessarily think it is the case that EA attempts at democracy promotion would in fact be covert extensions of existing efforts that have negative connotations, only that I think it will be possible to argue and convince people that they are, making this not an actual advantage.

Slate Star Codex, EA, and self-reflection

The downvotes are probably because, indeed, the claims only make sense if you look at the level of something like "has Scott ever said anything that could be construed as X". I think a complete engagement with SSC doesn't support the argument, and it's precisely SSC's willingness to address issues in their entirety, without flinching away from topics that might make a person "guilty by association", that makes it a compelling blog.

Dignity as alternative EA priority - request for feedback

I think there is a case that QALY/DALY/etc. calculations should factor in dignity in some way, and that mismatches between, say, a QALY calculation and what feels "right" in terms of dignity should be treated as a sign that the calculation may be leaving something important out. For example, if intervention X produces 10 QALYs but makes someone feel 10% less dignified, then either the 10 QALY figure should already incorporate that cost to dignity or it should be adjusted to account for it. There seems to be a strong case for more nuanced calculation of such metrics, especially so we don't miss cases where ignoring something like dignity makes an intervention look good when it is overall bad once dignity is factored in. That this has come up and seems to be an issue suggests some calculations people are doing today fail to factor it in.
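To make the example concrete, here is a minimal sketch of what a dignity-adjusted metric might look like. The linear discount and the function name are my own illustrative assumptions, not an established methodology; a real metric would need an empirically grounded weighting between QALYs and dignity.

```python
# Illustrative sketch only: discount a raw QALY estimate by a
# hypothetical "dignity cost" so both enter a single comparison.

def dignity_adjusted_qalys(raw_qalys: float, dignity_loss: float) -> float:
    """Discount raw QALYs by the fraction of dignity lost.

    dignity_loss is in [0, 1], e.g. 0.10 for "feels 10% less
    dignified". The linear discount is an assumption made for
    illustration, not a claim about the correct weighting.
    """
    if not 0.0 <= dignity_loss <= 1.0:
        raise ValueError("dignity_loss must be between 0 and 1")
    return raw_qalys * (1.0 - dignity_loss)

# The example from the text: 10 QALYs with a 10% dignity loss.
print(dignity_adjusted_qalys(10.0, 0.10))  # 9.0
```

On this toy accounting, an intervention scoring 10 raw QALYs would be credited with only 9 once the dignity cost is included, which is the kind of adjustment the mismatch intuition points at.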

Is it suffering or involuntary suffering that's bad, and when is it (involuntary) suffering?

I think we don't quite have the words to distinguish between all these things in English, but in my mind there's something like

  • pain - the experience of negative valence
  • suffering - the experience of pain (i.e. the experience of the experience of negative valence)
  • expected suffering - the experience of pain that was anticipated, so you suffer only from the pain itself
  • unexpected suffering - the experience of pain that was not anticipated, so you suffer both the pain itself and a second-order pain from the surprise, which itself carries negative valence

Of them all, unexpected suffering is the worst because it involves both pain and meta-pain.

What are good software tools? What about general productivity/wellbeing tips?

I live by the advice that the best tools are the ones that are available, so I love to use Google products with few modifications, keeping the same tools and data accessible across multiple platforms.

I regularly use only a few other things that are either specific to my job or fill gaps in Google's product line for my core use cases, like Pocket and Feedly, and even for those I'm constantly checking whether I could get away with not using them.

Thus my task list, documents, calendar, etc. are all in Google.

How to make the most impactful donation, in terms of taxes?

In some states and municipalities the combined rate is higher due to local taxes. For example, in California the maximum combined marginal rate is 37% + 13.3% = 50.3%.
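As a back-of-the-envelope sketch, the combined marginal rate determines the net out-of-pocket cost of a fully deductible donation. The rates below are illustrative assumptions (a 37% top federal rate plus a 13.3% top California rate), and the sketch ignores real-world complications like the interaction between state and federal deductions, phase-outs, and deduction caps.

```python
# Back-of-the-envelope sketch: net cost of a fully deductible
# donation at a combined marginal tax rate. Rates are illustrative
# assumptions; real deductions interact in more complicated ways.

FEDERAL_TOP_RATE = 0.37      # assumed top federal marginal rate
CALIFORNIA_TOP_RATE = 0.133  # assumed top California marginal rate

def out_of_pocket_cost(donation: float, marginal_rate: float) -> float:
    """Net cost of a deductible donation at a given marginal rate."""
    return donation * (1.0 - marginal_rate)

combined = FEDERAL_TOP_RATE + CALIFORNIA_TOP_RATE
print(f"combined marginal rate: {combined:.1%}")
print(f"net cost of a $10,000 donation: "
      f"${out_of_pocket_cost(10_000, combined):,.2f}")
```

Under these assumptions a $10,000 donation costs about $4,970 out of pocket, which is why donors in high-tax jurisdictions can give more at the same net cost.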

Is the Buy-One-Give-One model an effective form of altruism?

I think whether or not B1G1 is effective depends on what you care about. It's clearly not the most effective way to, say, give shoes to people without shoes, since it creates an inefficiency by tying the supply of free shoes to the demand for shoes from wealthy people. And this is to say nothing of whether giving shoes to people without shoes is an effective use of money relative to the alternatives.

But maybe B1G1 is effective at making people more altruistic, serving as an intervention that creates the conditions under which people give more effectively. Intuitively I doubt it: I expect it fails on this measure because of effects like people feeling their do-gooding responsibility has been discharged by the purchase, so that, having already bought $X of "good" goods, they feel they owe others less altruism, decreasing their giving on the margin. But I'm not an expert here, so I could very well be wrong.

The difficulty is that B1G1 potentially has many effects to consider beyond the direct good done via giving. That we even need to consider those effects is itself evidence, in my eyes, that it's not effective: we don't, for example, think much about how giving money to AMF will influence a donor's charitable thoughts, since we already feel good about the outcomes of those donations.

Trade Heroism For Grit.

I think this is a great point.

In the startup world there's a similar notion. Starting a successful business can seem impossible, and many of the big successes depend to some extent on luck. Having the right idea at the right time is hard to control for; the people who manage it are usually lucky rather than skilled at it, and believe otherwise only due to survivorship bias.

But the part of making a business succeed that you can actually control is not having the right idea; it's what we might call the right effort: putting in the work, having the grit to keep going, and building the skills that improve your baseline chances of success.

Put another way, you can't make yourself lucky, but you can make yourself prepared to take advantage of luck when it appears so as not to fail to take advantage of an opportunity presented to you.

I think this same idea translates back into EA. A lot of unseen work goes into improving the world. It's easy to look at someone who is already having impact, see all the things they did and the conditions they found themselves in that made that impact possible, and feel like it would be impossible for you to do the same. But they did it, and so can you; it just takes a lot of work and a willingness to spend years making yourself ready to achieve something.

I think a useful framing is to see the grit and determination to keep going as the real heroism, not the highly visible stuff that gains you accolades or is causally near impact.
