huw

Co-Founder & CTO @ Kaya Guides
1666 karma · Joined · Working (6–15 years) · Sydney NSW, Australia
huw.cool

Bio

I live for a high disagree-to-upvote ratio

Comments (248)

Heya Vasco, I think I might be missing something here. I’m struggling to see the connection between this post and your recommendation to donate to WAI.

In the past, I’ve heard that wild animal suffering is probably not very tractable. Is that true for both insects and vertebrates? What about WAI sets them up for success here? (You mention they support research into pesticides, but not direct work?)


Richest 1% wealth share, US (admittedly, this has been flat for the last 20 years, but you can see the trend since 1980):

Pre-tax income shares, US:

A 3–4% change for most income categories isn’t anything to sneeze at (even if this is pre-tax).

You can explore the WID data through OWID to see the effect for other countries; it’s less pronounced for many, but the broad trend in high-income, neoliberalised countries is similar (as you’d expect with lower taxation).
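
If you want to poke at the data yourself, here’s a rough sketch of pulling one of the OWID grapher series with pandas. The chart slug and column names below are my guesses (not verified), so check them against the chart’s Download tab before relying on this.

```python
# Rough sketch (unverified): pull an OWID grapher CSV and compare top-1%
# pre-tax income shares since 1980. The slug below is a guess; OWID charts
# expose a CSV export at https://ourworldindata.org/grapher/<slug>.csv.
import pandas as pd

url = "https://ourworldindata.org/grapher/income-share-top-1-before-tax-wid.csv"
df = pd.read_csv(url)

# OWID exports are usually Entity / Code / Year / <value>; take the last
# column as the value column.
value_col = df.columns[-1]
countries = ["United States", "Australia", "France"]
recent = df[df["Entity"].isin(countries) & (df["Year"] >= 1980)]

print(recent.pivot(index="Year", columns="Entity", values=value_col))
```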

I think it’s tractable, right? The rich had a far greater hold over American politics in the early 1900s, and after financial devastation coupled with the threat of communism, the U.S. got the New Deal and a 90% marginal tax rate for 20 years following the war (well after the war effort had been fully paid off), during the most prosperous period in U.S. history. My sense of these changes is that widespread labour & political organisation threatened the government into a compromise in order to protect liberalism & capitalism from a near-total overthrow. It can be done.

But equally, that story suggests that things will probably have to get much worse before the political will is there to be activated. And there’s no guarantee that any money raised from taxation will be spent on the global poor!

My honest, loosely held opinion here is that EA/adjacent money could be used to build research & lobbying groups (rather than grassroots organising or direct political donations—too controversial and not EA’s strong suit) that would be ready for such a moment if/when it comes. They should be producing policy briefs, papers, and possibly public-facing outputs on the same level as the current YIMBY/abundance movement, who are far more developed than the redistributionists on these capabilities. When the backlash hits and taxes get raised, we should already have people well-placed to push for high redistribution on an international and non-speciesist level.

Surely it would be easier to just take the money from them, with taxes

No, but seriously—the U.S. presently has an extremely clear example of the excesses of oligarchy and low taxation. The idea that billionaires need less tax in order to invest more in the economy is laughable when Elon has used his excess money to essentially just enrich himself. I think it would be pretty high leverage to put money, time, and connections into this movement (if you can legally do so); and if the enemy is properly demarcated as oligarchy, it should result in reducing wealth inequality once its proponents take power.


Excuse the naïve question, but could far-UVC also reduce the cost of running high-level labs? If so, this could also have transformational effects on medical development and cultured meat.


Perhaps this is a bit tangential, but I wanted to ask since the 80k team seem to be reading this post. How have 80k historically approached the mental health effects of exposing younger (i.e. likely to be a bit more neurotic) people to existential risks? I’m thinking in the vein of ‘Here’s the exit’. Do you/could you recommend alternate paths or career advice sites for people who might not be able to contribute to existential risk reduction due to, for lack of a better word, their temperament? (Perhaps a similar thing for factory farming, too?)

For example, I think I might make a decent enough AI Safety person and generally agree it could be a good idea, but I’ve explicitly chosen not to pursue it because (among other reasons) I’m pretty sure it would totally fry my nerves. The popularity of that LessWrong post suggests that I’m not alone, and also raises the interesting possibility that such people might end up actively detracting from the efforts of others, rather than just neutrally crashing out.

Here’s a much less intellectual podcast on the Rationalists, Zizians, and EA from TrueAnon, more on the dirtbag left side of things (for those who’re interested in how others see EA)

Would also be interested to hear from the realists: Do they believe they have discovered any of these moral truths themselves, or just that these truths are out there somewhere?


Here is their plot over time, from the Chapter 2 Appendix. I think these are the raw per-year scores, not the averages.

I find this really baffling. It’s probably not political; the Modi government took power in 2014 and only lost its absolute majority in 2024. The effects of COVID seem to have been mixed; India did relatively well in 2020 but got obliterated by the Delta variant in 2021. Equally, GDP per capita steadily increased over this time, barring a dip in 2020. Population has steadily increased, and its growth rate has steadily decreased.

India has long had a larger residual than other countries in the WHR’s happiness model; it’s much less happy than the model would predict.
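
(To spell out what I mean by “residual”: as I understand it, the WHR regresses each country’s Cantril-ladder score on log GDP per capita, social support, healthy life expectancy, freedom, generosity, and perceived corruption, so the residual is just the gap between the observed score and the fitted one, roughly:

$$ r_{\text{India}} = y_{\text{India}} - \hat{y}_{\text{India}}, \qquad \hat{y} = \beta_0 + \beta_1 \ln(\text{GDP per capita}) + \beta_2\,\text{social support} + \dots $$

A large negative residual means India reports much lower life satisfaction than those covariates alone would suggest.)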

Without access to the raw data, it’s hard to say if Gallup’s methodology has changed over this time; India is a huge and varied country, and it’s hard to tell if Gallup maintained a similar sample over time.

AIM seems to be doing this quite well in the GHW/AW spaces, but lacks the literal openness of the EA community-as-idea (for better or worse)
