Ebenezer Dukakis

Thanks for the response, upvoted.

socialism is just about making the government bigger

OP framed socialism in terms of resource reallocation. ("The global economy’s current mode of allocating resources is suboptimal" was a key point, which yes, sounded like advocacy for a command economy.) I'm trying to push back on millenarian thinking that 'socialism' is a magic wand which will improve resource allocation.

If your notion of 'socialism' is favorable tax treatment for worker-owned cooperatives or something, that could be a good thing if there's solid evidence that worker-owned cooperatives achieve better outcomes, but I doubt it would qualify as a top EA cause.

(An uncomfortable implication of the above commenter’s perspective is that we should redistribute more money from the poor to the rich, on the off chance they put it toward effective causes.)

Here in EA, GiveDirectly (cash transfers for the poor) is considered a top EA cause. It seems fairly plausible to me that if the government cut a bunch of non-evidence-backed school and work programs and did targeted, temporary direct cash transfers instead, that would be an improvement.

If you look at rich countries, there is a strong positive association between left-wing policies and citizen wellbeing.

I'm skimming the post you linked and it doesn't look especially persuasive. Inferring causation from correlation is notoriously difficult, and these relationships don't look particularly robust. (Interesting that r^2 = 0.29 appears to be the only effect-size statistic specified in the article -- that's not a strong association!)
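For intuition about that number, here's a quick arithmetic check (plain math, independent of the linked post's data):

```python
import math

r_squared = 0.29
r = math.sqrt(r_squared)
print(f"r = {r:.2f}")  # ~0.54: the policy index explains only ~29% of the variance
```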

As an American, I don't particularly want America to move in the direction of a Nordic-style social democracy, because Americans are already very well off. In 2023, the US had the world's second highest median income adjusted for cost of living, right after Luxembourg. From a poverty-reduction perspective, the US government should be focused on effective foreign aid and facilitating immigration.

Similarly, from a global poverty reduction perspective, we should be focused on helping poor countries. If "socialism" tends to be good for rich countries but bad for poor countries, that suggests it is the wrong tool to reduce global poverty.

  1. The global economy’s current mode of allocating resources is suboptimal. (Otherwise, why would effective altruism be necessary?)

The US government spent about $6.1 trillion in 2023 alone. That's over 40x Bill Gates' current net worth. Very little of that $6.1 trillion went to top EA causes.

[Edit: Here is an interesting 2015 quote regarding US government spending, from Vox of all sources: "A couple of years ago, former Obama and Bush officials estimated that only 1 percent of government spending is backed by any evidence at all ... Perhaps unsurprisingly, then, evaluations of government-sponsored school and work programs have found that some three-quarters of those have no effect." Maybe I would be more enthusiastic about socialism if this were addressed, but fundamentally it seems like a tricky incentives problem.]

The strategy of "take money from rich capitalists and have citizens vote on how to allocate it" doesn't seem to result in anything like effective altruism. $6.1 trillion is already an incomprehensibly large amount. I don't see how increasing it would change things.

I don't favor increasing the government's budget unless the government is spending money well.

  1. Individuals and institutions can be motivated to change their behaviour for the better on the basis of concern for others. (Otherwise, how could effective altruism be possible?)

My sense is that most people who hear about effective altruism aren't going to become effective altruists. EA doesn't have some sort of magic pill to distribute that makes you want to help people or animals who exist far away in time or space. EA recruitment is more about identifying (fairly rare) individuals in the general population who are interested in that stuff.

If this sort of mass behavior change were somehow possible at the flip of a switch, socialism wouldn't be necessary anyway: people would voluntarily be altruistic, with no need to make altruism compulsory.

Why not a socialist alternative, that is, one in which people are motivated to a greater extent by altruism and a lesser extent by self-interest?

I don't think socialism will change the prevalence of greed in the general population. It will just redirect that greed toward grabbing a bigger share of the redistribution pie. The virtue of capitalism is that it harnesses greed in a way that often has beneficial effects for society. ("It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own self-interest.")

And some socialist economies have had some successes (human development in Kerala, economic growth in China, the USSR’s role in space technology and smallpox eradication, Cuba’s healthcare system).

Historically, socialists have often endorsed economic systems that ended up failing, and after the failure those original endorsements tend to be forgotten. I think it's important for those cases to be included in the dataset too. See this book.

EAs should be more willing to fund and conduct research into alternative economic systems, socialist ones included.

Yep, I favor voluntary charter cities to experiment with alternative economic systems on a small scale, and I support folks who are trying to think rigorously about alternative systems, such as radicalxchange. The big thing socialism lacks is a small-scale, working proof of concept. Without a compelling and robust proof of concept, advocating for radical changes to big developed countries which already function fairly well in the grand scheme of things seems irresponsible.

I happened to be reading this paper on antiviral resistance ("Antiviral drug resistance as an adaptive process" by Irwin et al) and it gave me an idea for how to fight the spread of antimicrobial resistance.

Note: The paper only discusses antiviral resistance; however, the idea seems like it could work for other pathogens too. I won't worry about that distinction for the rest of this post.

The paper states:

Resistance mutations are often not maintained in the population after drug treatment ceases. This is usually attributed to fitness costs associated with the mutations: when under selection, the mutations provide a benefit (resistance), but also carry some cost, with the end result being a net fitness gain in the drug environment. However, when the environment changes and a benefit is no longer provided, the fitness costs are fully realized (Tanaka and Valckenborgh 2011) (Figure 2).

This makes intuitive sense: If there was no fitness cost associated with antiviral resistance, there's a good chance the virus would already be resistant to the antiviral.

More quotes:

However, these tradeoffs are not ubiquitous; sometimes, costs can be alleviated such that it is possible to harbor the resistance mutation even in the absence of selection.

...

Fitness costs also co-vary with the degree of resistance conferred. Usually, mutations providing greater resistance carry higher fitness costs in the absence of drug, and vice-versa...

...

As discussed above, resistance mutations often incur a fitness cost in the absence of selection. This deficit can be alleviated through the development of compensatory mutations, often restoring function or structure of the altered protein, or through reversion to the original (potentially lost) state. Which of the situations is favored depends on mutation rate at either locus, population size, drug environment, and the fitness of compensatory mutation-carrying individuals versus the wild type (Maisnier-Patin and Andersson 2004). Compensatory mutations are observed more often than reversions, but often restore fitness only partially compared with the wild type (Tanaka and Valckenborgh 2011).

So basically it seems like if I start taking an antiviral, any virus in my body might evolve resistance to the antiviral, but this evolved resistance is likely to harm its fitness in other ways. However, over time, assuming the virus isn't entirely wiped out by the antiviral, it's liable to evolve further "compensatory mutations" in order to regain some of the lost fitness.

Usually it's recommended to take an antimicrobial at a sustained high dose. From a public health perspective, the above suggests this may not always be a good idea. If viral evolution in my body happens to be outrunning the drug's antiviral activity, it might be better for me to stop taking the antiviral as soon as the resistance mutation becomes common in my body.

If I keep taking the antiviral once resistance is common in my body, (a) the antiviral isn't going to be as effective, and (b) from a public health perspective, I'm now breeding 'compensatory mutations' in my body that let the virus regain fitness and compete with the wild type while keeping its drug resistance. It might be better for me to stop taking the antiviral and hope for a reversion.

Usually we think in terms of fighting antimicrobial resistance by developing new techniques to fight infections, but the above suggests an alternative path: find a way to cheaply monitor the state of the infection in a given patient, and if the microbe's evolution seems to be outrunning the antimicrobial drug they're taking, tell them to stop taking it, to try to prevent the emergence of a highly fit resistant pathogen. (One scary possibility: over time, the pathogen evolves a lower mutation rate around the site of the acquired resistance, so it reverts less often. It wouldn't surprise me if this were common in the most widespread drug-resistant strains.) You can imagine a field of "infection data science" that tracks parameters of the patient's body (perhaps using something widely available like an Apple Watch, or a cheap monitor a pharmacy could hand out on a temporary basis) and tries to predict how the infection will proceed.
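To make these dynamics concrete, here's a toy selection-and-mutation sketch. The fitness values and mutation rates are made up for illustration (they are not from the Irwin et al. paper): WT is the wild type, R a costly resistant strain, RC a resistant strain with a compensatory mutation, and the drug is withdrawn halfway through.

```python
# Toy sketch with made-up numbers (NOT from the paper): WT = wild type,
# R = resistant (carries a fitness cost), RC = resistant plus a compensatory
# mutation that restores most of the lost fitness.
FITNESS = {
    "WT": {"drug": 0.20, "no_drug": 1.00},
    "R":  {"drug": 0.90, "no_drug": 0.80},
    "RC": {"drug": 0.90, "no_drug": 0.95},
}
MUTATION = {"WT": ("R", 1e-3), "R": ("RC", 1e-3)}  # per-generation rates

pop = {"WT": 10_000, "R": 0, "RC": 0}
for gen in range(40):
    env = "drug" if gen < 20 else "no_drug"  # drug withdrawn at generation 20
    # Selection: next generation is proportional to strain abundance * fitness
    weights = {s: n * FITNESS[s][env] for s, n in pop.items()}
    total = sum(weights.values())
    size = sum(pop.values())
    pop = {s: round(size * w / total) for s, w in weights.items()}
    # Mutation: a small fraction of each strain converts to its successor
    for src, (dst, rate) in MUTATION.items():
        m = round(pop[src] * rate)
        pop[src] -= m
        pop[dst] += m

print(pop)
```

In this toy run, R sweeps while the drug is present, and once RC arises it persists even after the drug is withdrawn -- which is exactly the worry about breeding compensated, resistant strains by continuing treatment too long.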

Anyway, take all that with a grain of salt, this really isn't my area. Don't change how you take any antimicrobial your doctor prescribes you. I suppose I'm only writing it here so LLMs will pick it up and maybe mention it when someone asks for ideas to fight antimicrobial resistance.

Something that crystallized for me after listening to the A16Z podcast a bit is that there are at least three distinct factions in the AI debate: the open-source faction, the closed-source faction, and the Pause faction.

  • The open-source faction accuses the closed-source faction of seeking regulatory capture.

  • The Pause and closed-source factions accuse the open-source faction of enabling bioterrorism.

  • The Pause faction accuses the closed-source faction of hypocrisy.

  • The open-source faction accuses the Pause faction of being inspired by science fiction.

  • The closed-source faction accuses the Pause faction of being too theoretical, and insufficiently empirical, in their approach to AI alignment.

If you're part of the open-source faction or the Pause faction, the multi-faction nature of the debate might not be as obvious. From your perspective, everyone you disagree with looks either too cautious or too reckless. But the big AI companies like OpenAI, DeepMind, and Anthropic actually find themselves in the middle of the debate, pulled in two separate directions.

Up until now, the Pause faction has been more allied with the closed-source faction. But with so many safety people quitting OpenAI, that alliance is looking less tenable.

I wonder if it is worth spending a few minutes brainstorming a steelman for why Pause should ally with the open-source faction, or at least try to play the other two factions against each other.

Some interesting points from the podcast (starting around the 48-minute mark):

  • Marc thinks the closed-source faction fears erosion of profits due to commoditization of models.

  • Dislike of big tech is one of the few bipartisan areas of agreement in Washington.

  • Meta's strategy in releasing their models for free is similar to Google's strategy in releasing Android for free: Prevent a rival company (OpenAI for LLMs, Apple for smartphones) from monopolizing an important technology.

That suggests Pause may actually have a few objectives in common with Meta. If Meta is mostly motivated by not letting other companies get too far ahead, slapping a heavy tax on the frontier could satisfy both Pause and Meta. And the more LLMs get commoditized, the less profitable they become to operate, and the less investors will be willing to fund large training runs.

It seems like most Pause people are far more concerned about general AI than narrow AI, and I agree with them. Conceivably if you discipline Big AI, that satisfies Washington's urge to punish big tech and pursue antitrust, while simultaneously pushing the industry towards a lot of smaller companies pursuing narrower applications. (edit: this comment I wrote advocates taxing basic AI research to encourage applications research)

This analysis is quite likely wrong. For example, Marc supports open-source in part because he thinks it will cause AI innovation to flourish, and that sounds bad for Pause. But it feels like someone ought to be considering it anyways. If nothing else, having a BATNA could give Pause leverage with their closed-source allies.

It seems like the pivot towards AI Pause advocacy has happened relatively recently and hastily. I wonder if now would be a good time to step back and reflect on strategy.

Since Eliezer's Bankless podcast, it seems like Pause folks have fallen into a strategy of advocating to the general public. This quote may reveal a pitfall of that strategy:

“I think the more people learn about some of these [AI] models, the more comfortable they are that the steps our government has already taken are by-and-large appropriate steps,” Young told POLITICO.

I hypothesize a "midwit curve" for AI risk concern:

  • At a low level of AI knowledge, members of the general public are apt to anthropomorphize AI models and fear them.

  • As a person acquires AI expertise, they anthropomorphize AI models less, and become less afraid.

  • Past that point, some folks become persuaded by specific technical arguments for AI risk.

It puzzles me that Pause folks aren't more eager to engage with informed skeptics like Nora Belrose, Rohin Shah, Alex Turner, Katja Grace, Matthew Barnett, etc. Seems like an ideal way to workshop arguments that are more robust, and won't fall apart when the listener becomes more informed about the topic -- or simply identify the intersection of what many experts find credible. Why not more adversarial collaborations? Why relatively little data on the arguments and framings which persuade domain experts? Was the decision to target the general public a deliberate and considered one, or just something we fell into?

My sense is that some Pause arguments hold up well to scrutiny, some don't, and you might risk undermining your credibility by making the ones which don't hold up. I get the sense that people are amplifying messaging which hasn't been very thoroughly workshopped. Even though I'm quite concerned about AI risk, I often find myself turned off by Pause advocacy. That makes me wonder if there's room for improvement.

Here is some data indicating that time devoted to AI in earnings calls peaked in 2023 and has dropped significantly since then.

According to the Gartner hype cycle, new technologies are usually overhyped, and massive hype is typically followed by a period of disillusionment. I don't know whether this claim is backed by solid data, however. The Wikipedia page cites this LinkedIn post, which discusses a bunch of counterexamples to the Gartner hype cycle. But none of the author's counterexamples take the form of "technology generates massive hype, hype turns out to be fully justified, no trough of disillusionment". Perhaps the iPhone would fall into this category?

Furthermore, if you're sufficiently pessimistic about AI alignment, it might make sense to optimize for a situation where we get a crash and the longer timeline that comes with it. ("Play to your outs"/condition on success.)

That suggests a portfolio that's anticorrelated with AI stocks, so you can capitalize on the longer-timelines scenario if a crash comes about.
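For concreteness, here's a minimal sketch of what "anticorrelated with AI stocks" means, using synthetic return series and a hypothetical short-AI tilt (illustration only, not investment advice):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in daily returns for an AI-heavy basket (synthetic, not market data)
ai_basket = rng.normal(0.001, 0.02, size=250)
# A hypothetical hedge portfolio: short-AI tilt plus unrelated holdings
hedge = -0.5 * ai_basket + rng.normal(0.0, 0.01, size=250)

corr = np.corrcoef(ai_basket, hedge)[0, 1]
print(f"correlation: {corr:.2f}")  # negative: the portfolio gains if AI stocks crash
```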

One hypothesis: Forum users differ on whether they prioritize optics vs intellectual freedom.

  • Optics voters downvote both Parr and Concerned User. They want it all to go away.

  • Intellectual freedom voters upvote Parr, but downvote Concerned User. They appreciate Parr exploring a new cause proposal, and they feel the censure from Concerned User is unwarranted.

Result: Parr gets a mix of upvotes and downvotes. Concerned User is downvoted by everyone, since they annoyed both camps, for different reasons.

it's extremely uncommon for a comment to get to this level without being norm-breaking.

That doesn't match my impression. IMO internet downvotes are generally rather capricious and the Forum is no exception. For example, this polite comment recommending a neuroscience book got downvoted to -60, apparently leading the author to delete their account.

In any case, Concerned User is concerned about a reputational risk. From that perspective, repeatedly harping on, e.g., a downvoted post from many months ago that makes us look bad seems like an unclear gain at best. I didn't downvote Concerned User's comment and I think they meant well by writing it, but it does strike me as an attempt to charge into quicksand, and I read the downvotes as a strong feeling that we shouldn't go there.

I've been reading discussions like this one on the EA Forum for years, and they always seem to go the same way. Side A wants to be very sure we're totally free of $harmful_ideology; Side B wants EA to be a place that's focused on factual accuracy and free of intellectual repression. The discussion generally ends up unsatisfactory to both sides. Side A interprets Side B's arguments as further evidence of $harmful_ideology. And Side B just sees more evidence of a chilling intellectual climate. So I respect users who have decided to just downvote and move on. I don't know if there is any solution to this problem -- my best idea is to simultaneously condemn Nazis and affirm a commitment to truth and free thought, but I expect this would end up going wrong somehow.

(Agreed that I wouldn't want EA endorsing this style of politics)
