Yarrow🔸

889 karma · Joined · Canada

Bio

Pronouns: she/her or they/them. 

I got interested in effective altruism back before it was called effective altruism, back before Giving What We Can had a website. Later on, I got involved in my university EA group and helped run it for a few years. Now I’m trying to figure out where effective altruism can fit into my life these days and what it means to me.

Comments: 209

Topic contributions: 1

If AI is having an economic impact by automating software engineers' labour or augmenting their productivity, I'd like to see some economic data, firm-level financial data, or a scientific study that shows this.

Your anecdotal experience is interesting, for sure, but the other people who write code for a living whom I've heard from have said, more or less, that AI tools save them the time it would take to copy and paste code from Stack Exchange, and that's about it.

I think AI's achievements on narrow tests are amazing. I think AlphaStar's success at competitive StarCraft II was amazing. But six years after AlphaStar and ten years after AlphaGo, have we seen any big real-world applications of deep reinforcement learning or imitation learning that produce economic value? Or do something else practically useful in a way we can measure? Not that I'm aware of.

Instead, we've seen companies working on real-world applications of AI, such as Cruise, shut down. The current hype about AGI reminds me a lot of the hype about self-driving cars that I heard over the last ten years, from around 2015 to 2025. In the five-year period from 2017 to 2022, the rhetoric about solving Level 4/5 autonomy was extremely aggressive and optimistic. In the last few years, there have been signs that some people in the industry are giving up, with Cruise's closure being one example.

Similarly, several companies, including Tesla, Vicarious, and Rethink Robotics, have tried to automate factory work and failed.

Other companies, like Covariant, have had modest success on relatively narrow robotics problems, like sorting objects into boxes in a warehouse, but nothing revolutionary.

The situation is complicated and the truth is not obvious, but it's too simple to say that predictions about AI progress have overall been too pessimistic or too conservative. (I'm only thinking about recent predictions, but one of the first predictions about AI progress, made in 1956, was wildly overoptimistic.[1])

I wrote a post here and a quick take here where I give my other reasons for skepticism about near-term AGI. That might help fill in more information about where I'm coming from, if you're curious.

  1. ^

    Quote:

    An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.

This is a Forum Team crosspost from Substack

What does this mean? Is the author of this post, Matt Reardon, on the EA Forum team? Or did a moderator/admin of the EA Forum crosspost this from Matt Reardon's Substack, under Matt's EA Forum profile? 

I feel exactly the same way: racism and sympathy toward far-right and authoritarian views within effective altruism are a reason for me to want to distance myself from the movement. So is the way some people who don't actually agree with these views basically shrug and act like they're fine.

Here's a point I haven't seen many people discuss:

...many people could have felt betrayed by the fact that EA leadership was well aware of FTX sketchiness and didn't say anything (or weren't aware, but then maybe you'd be betrayed by their incompetence). 

What did the EA leadership know, and when did they know it? About a year ago, I asked in a comment here about a Time article that claims Will MacAskill, Holden Karnofsky, Nick Beckstead, and maybe some others were warned about FTX and/or Sam Bankman-Fried. I might have missed some responses, but I don't remember ever getting a clear answer.

If EA leaders heard credible warnings and ignored them, then maybe that shows poor judgment. It's hard to say without more information.

Most people who make linkposts to their own writing include the link and also the full text.

More people will read this if you put the full text of the post here. 

Thanks for this post! Your forum bio says you're a professional economist at the Bank of Canada, which makes me trust your analysis more than if you were just a random layperson.

I don't know if you're interested in creating a blog or a newsletter, but it seems like this analysis should be shared more widely!

It seems in a lot of cases you have disagreed with concepts before understanding them fully. Would you agree? And if so, why do you think this happened here, where I'm sure that you are great at making evidence-based judgements in other areas?

This comes across as passive-aggressive. Neel's patient response below is right on the money. 

If I recommend a book to someone on the EA Forum (or any forum), there's a slim chance they're going to read that book. The only way there's a realistic chance they'll read it is either if I said something so interesting about it that it got them curious or if they were already curious about that topic area and decided the book was up their alley.

The same idea applies, to varying extents, to any other kind of media — blog posts, papers, videos, podcasts, etc. 

A few of your other comments also contain stuff that comes across as passive-aggressive. (Particularly the ones that have zero or negative karma.)

I can empathize with your position in that I can understand what it's like to try to engage with people who have really different perspectives on a topic that is important to me, and that this often feels frustrating. 

All I can say is that if your goal is persuasion or to have some kind of meeting of the minds, then saying stuff like this just pushes people further away.

The types of radical feminism you are mentioning in your first three bullet points are not types that I or the people or organisations I am mentioning would associate with. These groups are often labelled as Trans- or Sex worker- exclusionary radical feminists. It is a shame that they use this label too. They are generally funded by far right groups and instrumentalised to make it seem that they represent the feminist movement as a whole, or women's interests more broadly.

I think you're whitewashing the history of radical feminism a bit here. I think the radical feminist movement has to own these mistakes in order to move on from them. To say something like "that's not real radical feminism" or "that's a false flag operation" is not to acknowledge the reality of what happened and the harm that was done. For example, the pornography ban I mentioned was supported by key figures in radical feminism.

The fourth bullet point I hadn't heard about, and that Contrapoints video has been on my watch list for a long time now- I should really watch it!

If you're a fan of ContraPoints too, then that's one thing we can agree on! Her videos are wise, perspicacious, funny, and visually beautiful. I think they should win awards. I'm a huge fan. 

Could you give a little more context on what you don't understand? I'm not sure I can see the same issues, at least at the moment

I tried to read Pleasure Activism in part because the idea of "pleasure activism" sounded interesting to me. I wondered, is the idea to make activism more fun? More guided toward things that are emotionally rewarding, rather than all about pain and discomfort and altruistic self-sacrifice? Or, alternatively, is it about fighting for things that bring us pleasure and joy?

The book does not really explain this. It does not really explain what pleasure activism is, at least not in a way I could make any sense of. I'm not alone in this, since I asked my friend who recommended the book to me if he understood what adrienne maree brown was trying to say, and he basically said no. 

When I tried to read Pleasure Activism, I wanted to see if anyone could make more sense of it than me. One of the reviews I found, from a sympathetic reviewer,[1] was generally positive, but also called out how confusing the book is. (It also mentions adrienne maree brown's claim that she was bitten by a vampire.)

To me, if you write a book about a new idea and you don't explain what that idea is in a way that's easy to understand, your book has failed as a piece of scholarship. If I can't understand what you're trying to say, and especially if you don't even try particularly hard to explain it, then there's nothing I can do with your work. It can't affect me. It can't cause me to think or act differently. I can't engage with it. I can't even disagree with it, because I don't know what I would be disagreeing with. 

One thing I'll say for now is that there are certainly parts of the feminist movements that you will strongly disagree with, and that disagreement is welcome.

I am a feminist and I have a good grasp of feminist theory, partly because I took courses on feminist theory when I was in university. I already articulated four key points I disagree with many radical feminists about: trying to harm trans people, banning pornography, opposing decriminalization or legalization of sex work, and narrow views on what kinds of sex are ethical. Especially nowadays, I would guess there are some people who call themselves radical feminists who have different views on these topics (and this seems to be what you're saying). But I probably disagree with those people as well, for example on economic issues.

Some parts of the radical feminists' critiques were correct. They were correct to focus on many of the social and cultural phenomena that contribute to women's oppression (although they made some serious mistakes here, too, as I mentioned), going beyond a focus just on formal, legal equality (which is important, of course, but too limited). For example, the radical feminist critique of rape culture was hugely important. 

It seems to me a lot of radical feminists' critiques have been absorbed into the mainstream in a way that didn't feel true (or nearly as true) 15 years ago. 

This is a good thing for the mainstream, since the critiques that were absorbed are correct. But it also makes self-identified "radical feminists" today less relevant: their good ideas are now part of mainstream feminism (and feminism, in general, is more a part of mainstream culture), so what radical feminists have to offer is less differentiated from mainstream feminism.

I agree that authoritarian communism is bad, but I have a lot more belief in degrowth. Could you give some more specifics on what your issues are with it?

Economic degrowth is the idea that we should make the world significantly poorer (i.e. significantly decrease the world's total income) for environmental reasons. I think this would be a humanitarian catastrophe on a scale that's hard to fathom. I also don't think it would be particularly helpful for achieving environmental goals; it might ultimately even do harm.

For example, if we could snap our fingers and cut the world's consumption of fossil fuels in half, millions of people would probably starve or die from otherwise preventable causes. And our progress on climate change might end up getting set back: with the world's economy so crippled, it would be hard to fund R&D into wind, solar, geothermal, nuclear, energy storage, and other sustainable energy technologies, or to make the long-term capital investments needed to deploy them.

If you want to read a more in-depth critique of the idea of economic degrowth, Kelsey Piper at Vox wrote one that's clear and accessible.

One of the most succinct and eloquent critiques of degrowth I have read comes from a review of Naomi Klein's book This Changes Everything:

The second, incredibly risky response to the climate crisis that she [Naomi Klein] recommends is a policy of “degrowth” (88). This is sort of a euphemism for reducing the size of GDP, which in practice means creating a policy-induced, long-term recession, followed (presumably) by measures designed to restrict the economy to a zero-growth equilibrium. Now because she plans to shift millions of workers into low-productivity sectors of the economy (126-7), and perhaps reduce work hours (93), she imagines that this degrowth can happen without creating any unemployment. So the picture presumably is one in which individuals experience a slow, steady decline in real income, of perhaps 2% per year over a period of 10 years (none of the people recommending this seem to give specific numbers, so I’m just guessing what they have in mind), followed by permanent income stagnation. (There would, presumably, still be technological change, so a degrowth policy would have to be accompanied by some mechanism to ensure that work hours were cut back in response to any increase in productive efficiency, in order to ensure that production as a whole did not increase.)

At the same time that incomes are either shrinking or remaining stagnant, Klein also proposes an enormous shift from private-sector to public-sector consumption, presumably financed by significant increases in personal income tax. Again, she doesn’t give any specific numbers, but from the way she talks it sounds like she wants to shift around a quarter of the remaining GDP. Plus she wants to see a huge amount of redistribution to the poor. So again, just ballparking, but it sounds as though she wants the average person to accept a pay cut of around 20%, followed by the promise of no pay increase ever again, combined with an increase in average income tax rates of around 25% (so in Canada, from around 30% to 55%). And don’t forget, this is all supposed to be achieved democratically. As in, people are going to vote for this, not just once, but repeatedly.

What I find astonishing about proponents of “degrowth” – not just Klein, but Peter Victor as well – is that they don’t see the tension between this desire to reduce average income and the desire to reduce economic inequality. They expect people to support increased redistribution at the same time that their own incomes are declining. This leaves me at something of a loss – I struggle to find words to express the depth of my incredulity at this proposition. In what world has this, or could this, ever occur?

In the real world, economic recessions are rather strongly associated with a significant increase in the nastiness of politics. Economic growth, on the other hand, makes redistribution much easier, simply because the transfers do not show up as absolute losses to individuals who are financing them, but rather as foregone gains, which are much more abstract. It’s not an accident that the welfare state was created in the context of a growing economy. (See Benjamin Friedman, The Moral Consequences of Economic Growth, for a general discussion of the effect of growth on politics.) It seems to me obvious that a degrowth strategy – by making the economy negative-sum – would massively increase resistance to both taxation and redistribution. At the limit, it could generate dangerous blow-back, in the form of increased support for radical right-wing parties.

As a result, I just don’t see any moral difference between what Klein is doing in this book and what the geoengineering enthusiasts are doing. The latter are techno-utopians, while Klein is a socialist-utopian. But both are trying to pin our hopes for resolving the climate crisis on a risky, untested, and potentially dangerous policy. Furthermore, the idea that Klein’s agenda could be achieved democratically strikes me as being otherworldly, in a country where the left can’t even figure out how to get the Conservative party out of power.
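As a quick sanity check on the reviewer's ballpark numbers (this is just my own back-of-the-envelope arithmetic, using the 2%-per-year and 10-year guesses from the quote above): a 2% annual decline compounded over 10 years leaves about 82% of income, i.e. a cumulative cut of roughly 18%, which lines up with the "pay cut of around 20%" he describes.

```python
# Back-of-the-envelope check of the reviewer's ballpark degrowth figures
# (my own illustrative arithmetic; the 2%/year and 10-year numbers are
# the guesses given in the quoted review, nothing more rigorous).

annual_decline = 0.02   # assumed 2% drop in real income per year
years = 10              # assumed length of the degrowth period

remaining_share = (1 - annual_decline) ** years
cumulative_cut = 1 - remaining_share

print(f"Income remaining after {years} years: {remaining_share:.1%}")  # ~81.7%
print(f"Cumulative pay cut: {cumulative_cut:.1%}")                     # ~18.3%
```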

The "shadow" of degrowth is environmental authoritarianism, in which "the hardest choices require the strongest wills" (to quote a villain), and so, the ability of people to resist unpopular policies that make them poorer needs to be quashed with force. 

Some people go in the opposite direction and, rather than "biting the bullet" and endorsing an ugly conclusion, lean into cognitive dissonance and try to say that degrowth is not really about negative GDP growth, after all, but about... something they either have a hard time making clear, or that just doesn't make sense, or ends up amounting to green growth (the opposite of degrowth), or ends up undermining their claim that degrowth is not about negative GDP growth.

Your concluding comments seem like rage-bait and might be an unnecessary addition to your otherwise very thoughtful reply.

It's not rage bait, it's just rage. I have deep exposure to radical leftist ideas, spanning about 15 years, and I'm just fed up with so much of it. I think so much of radical leftist discourse is incoherent (like Pleasure Activism), insane (like degrowth), or evil (like the level of praise or apologetics for authoritarian communism you see in radical leftist communities). And the way that radical leftists try to advance their ideas is often cruel and sadistic, for example, by harassing or bullying people who express disagreement (and sometimes by endorsing physical violence).[2] I am angry at the radical left for being this way. 

I have been as much of an insider to radical leftism as it's possible to be. I know the ins and outs. My perspective does not come from a shallow gloss of radical leftism, but from a deep familiarity. 

I think probably one of the most effective ways to limit the harm caused by the radical left as it currently exists is to try to fill the vacuum of liberal, progressive, centre-left, or leftist ideas for structural reform. 

One of the most encouraging examples I've seen is the economist Thomas Piketty's short political manifesto at the end of his book Capital and Ideology. The manifesto is the final chapter of the book, titled "Elements for a Participatory Socialism for the Twenty-First Century". This is the most coherent, most sane, and most constructive version of radical leftist economic thought (if it is accurate to call it radical leftist) I have ever seen. More of this, please!

I am also reading (almost finished) Ezra Klein and Derek Thompson's book Abundance, which just came out this year. It's awesome. I am sold on the idea of "abundance liberalism", which started out being called "supply-side progressivism", but now has a much better name and has probably also expanded a bit in terms of the ideas it encompasses.

As much as I'm fed up with so much about the radical left as it exists today, just complaining about the radical left probably isn't a good strategy for changing things for the better. We should come up with good, constructive ideas to draw people away from bad, destructive ideas and to take the energy away from bad, destructive political discourse.

The importance of any of my criticisms (of radical feminism, of radical leftism, of degrowth) pales in comparison to the importance of coming up with and advocating for good ideas that can offer an alternative. This is hard work and it's where I want to put more of my focus going forward.

I don't know how much energy I have to continue this thread of conversation, so if you decide to reply, please do so knowing that I may not read your reply or respond to it. I can get really into writing stuff on the EA Forum, but it takes up a lot of my time and energy, and I have to prioritize.

  1. ^

    I'm not clear on this, but I think the person who wrote the review even works at AK Press, the radical leftist publishing company that published the book.

  2. ^

    The phrase "the cruelty is the point" has been used as a criticism of Donald Trump and the Republican Party under his leadership, but it would also apply aptly to a lot of radical leftists' behaviour.

I can't shake off the feeling that this type of argument has often aged poorly when it comes to AI. I've certainly been baffled many times by AI solving tasks that I predicted to be very hard.

This may be true for games like chess, Go, and StarCraft, or for other narrow tests of AI. But for claims that AI will do something useful, practical, and economically valuable — like driving cars or replacing humans on assembly lines — the opposite is true. Predictions of rapid AI progress have been dead wrong, and the AI skeptics have been right.
