All of AllAmericanBreakfast's Comments + Replies

A Quick Qualitative Analysis of Laypeople’s Critiques of Longtermism

"The USSR used to plead the future when doing something  nasty in the present..."

Here longtermists can respond that, for the work they currently prioritize, Pascal’s mugging is not occurring, as the probability of existential risk is nontrivial.

I'd note that for Marx and Engels, communism was not merely a "nontrivial probability," but a historical inevitability. "But these ends really do justify the means" doesn't sound very reassuring.

More importantly, however, this quote is pointing out how common it is for political movements to use violence and oppr... (read more)

Why aren't EAs talking about the COVID lab leak hypothesis more?

I follow updates and arguments about the lab leak as a consequence of my EA-driven interest in biorisk. I don’t think that EA has a comparative advantage in shedding light on this hypothesis relative to those more directly involved in the investigation. I also think that it wouldn’t be that surprising from the perspective of common sense EA wisdom if the pandemic had zoonotic or lab leak origins. So not a ton of updating to do either way. This is why I don’t look to EA for information and analysis on this subject. Can’t speak for others.

Most* small probabilities aren't pascalian

Thank you for bringing the data!

I'm a little skeptical about this survey due to its 17% response rate. I also worry about conflict of interest: AI Impacts is led by people associated with the rationalist community, and the rationalist community had its inception in trying to figure out ways to convince people of the threat of AGI.

However, I think it's great that these surveys are being created and support further efforts to make the state of expert opinion on this subject more legible.

Most* small probabilities aren't pascalian

I think the impulse to call AGI safety a Pascal's Mugging does not stem from extremely low probabilities. In fact, I don't think extremely low probabilities are necessary or sufficient for a Pascal's Mugging.

Instead, I think Pascal's Mugging is about epistemic helplessness in evaluating reasonably low priors. Even if I have no hope of evaluating the mugger's claim, at least until it's too late, I'm mathematically prohibited from assigning his promises a probability of zero. This bug lets the mugger increase the size of his promises or threats until I... (read more)

I'm not sure what the threshold is for consensus, but a new survey of ML researchers finds: "Support for AI safety research is up: 69% of respondents believe society should prioritize AI safety research “more” or “much more” than it is currently prioritized, up from 49% in 2016. ... The median respondent believes the probability that the long-run effect of advanced AI on humanity will be “extremely bad (e.g., human extinction)” is 5%. ... Many respondents put the chance substantially higher: 48% of respondents gave at least 10% chance of an extremely bad outcome. Though another 25% put it at 0%."
The Possibility of Microorganism Suffering

Bear with me, I can only reply piecemeal.

I think even if we have a robust scientific understanding of, e.g., the human brain, we would still think that human suffering exists. I don't think understanding the physical mechanisms behind a particular system means that it can't be associated with a first-person experience.

On further thought, I think that the most tractable way to pursue your research agenda is to try and discover the mechanistic link between biochemistry and consciousness. It seems to me that this missing link is the main factor that leaves ro... (read more)

Elias Au-Yeung (3 points, 8d):
Sure, that's alright :) One of the things I mention in the post is that whenever we're looking at scientific findings, we're imposing certain standards on what counts as evidence. But it's actually not all that clear how we're supposed to construct these standards in the case of first-person experiences we don't have access to.

Brains/neural architectures are categories we invent to put particular instances of "brains" and "neural architectures" in. They're useful in science and medicine, but that doesn't mean referring to those categories with those boundaries automatically tells us everything there is to know about conscious experience/suffering. What we're really interested in is the category containing systems capable of suffering, and there are a number of different views on what sort of criteria identify elements in this category: some people follow criteria that suggest only other humans are similar enough to be capable of suffering[1], some people follow criteria that suggest mammals are also similar enough, some people follow criteria that suggest insects are also similar enough. These views have us decrease our threshold for acceptable similarity.[2]

One next step might be to extend criteria from the cell-to-cell signaling of nervous systems to the intracellular signaling of microorganisms. If we're confident in/accept some of the other criteria, can we really rule out similar adjacent criteria? This seems difficult given how much uncertainty we have about our reasoning regarding consciousness and suffering[3]. We only need to assign some credence to the views that count some things we know of microbes as evidence of suffering (e.g., chemical-reactive movement, cellular stress responses, how microbes react to predators and associated mechanisms) in order to think that microorganism suffering is at least a possibility.

There's a lot of subjective judgment in this, and scientists can't escape it either.
The Possibility of Microorganism Suffering

We already have a robust scientific understanding of the biochemical causes of bacterial behavior. Why would we posit that some form of cognitive processing of suffering is also involved in controlling their actions? Endorsing a program of study purely on the basis of "important, if true" seems like it would also lead you to endorse studying things like astrology. Since you're mainly taking the fact that bacteria "look like they might be suffering" as your evidence, it seems like you should also be concerned that nonliving structures are potentially suffering. Wouldn't it be painful to be as hot as a star for up to trillions of years?

We already have a robust scientific understanding of the biochemical causes of bacterial behavior. Why would we posit that some form of cognitive processing of suffering is also involved in controlling their actions?

I think even if we have a robust scientific understanding of, e.g., the human brain, we would still think that human suffering exists. I don't think understanding the physical mechanisms behind a particular system means that it can't be associated with a first-person experience.

Endorsing a program of study purely on the basis of "important, if

... (read more)
What reason is there NOT to accept Pascal's Wager?

"My view makes perfect sense, contemporary culture is crazy, and history will bear me out when my perspective becomes a durable new form of common sense" is a statement that, while it scans as arrogant, could easily be true - and has been many times in the past. It at least explains why a person who ascribes to "social intelligence" as a guide might still hold many counterintuitive opinions. I agree with you though that it's not useful for settling disputes when people disagree in their predictions about "universal common sense."

If you believe that current... (read more)

I would say if we use other people's judgment as a guide for our own, it's an argument for belief in the divine/God/the supernatural, and it becomes hard to say Christianity and Islam have negligible probability. So rules like "ignore tiny probabilities" don't work. Your idea of discounting probability as utility rises still works, but we've talked about why I don't think that's compelling enough. I don't have good survey evidence on Pascal's Wager, but I think a lot of religious believers would agree with the general concept: don't risk your soul, life is short and eternity is long, and other phrases like that seem to reference the basic idea. This guy converted on his deathbed because of the wager (John von Neumann).
What reason is there NOT to accept Pascal's Wager?

If for instance, demographers put together an amazing case that most future humans would be Mormon, would you change your mind? If you became convinced that AI would kill humanity next decade and we're in the last generation so there are no future humans, would you change your mind?

I've had a little more chance to flesh out this idea of "universal common sense." I'm now thinking of it as "the wisdom of the best parts of the past, present, and future."

Let's say we could identify exemplary societies across the past, present, and future. Furthermore, assume t... (read more)

I'm not against it: I think it's an okay way of framing something real. Your phrasing here is pretty sensible to me. "Let's say we could identify exemplary societies across the past, present, and future. Furthermore, assume that, on some questions, these societies had a consensus common sense view. Finally, assume that, in some cases, we can predict what that intertemporal consensus common sense view would be. Given all three of these assumptions, then I think we should consider adopting that point of view."

But I have concerns about the future perspective, in theory and practice. I think people will just assert future people will agree with them. You think future people will agree with you, I think future people will agree with me. There's no way to settle that dispute conclusively (maybe expert predictions or a prediction market can point to some answer), so I think imagining the future perspective is basically worthless. In contrast, we can look at people today or in the past (contingent on historical records).

The widespread belief in the divine is, I think, at least another piece of (weak?) evidence that points to taking the wager. This could be weakened if secular societies or institutions were much more successful than their contemporaries.
What reason is there NOT to accept Pascal's Wager?

How many hours do you think a reasonable person is obligated to spend investigating religions before rejecting the wager?


Great question.

Let me offer the idea of "universal common sense."

"Common sense" is "the way most people look at things." The way people commonly use this phrase today is what we might call "local common sense." It is the common sense of the people who are currently alive and part of our culture.

Local common sense is useful for local questions. Universal common sense is useful for universal questions.

Since religion, as well as scien... (read more)

I think imagining that current view X is justified because one imagines that future generations will also believe in X is really unconvincing. I think most people think their views will be more popular in the future. Liberal democrats and Communists have both argued that their view would dominate the world. I don't think it adds anything other than illustrating that the speaker is very confident of the merits of their worldview. If, for instance, demographers put together an amazing case that most future humans would be Mormon, would you change your mind? If you became convinced that AI would kill humanity next decade and we're in the last generation, so there are no future humans, would you change your mind?
What reason is there NOT to accept Pascal's Wager?

I suspect that the answer to some of these questions lies at an intersection between psychology and mathematics.

Our understanding of physics is empirical. Before making observations of the universe, we'd have no reason to entertain the hypothesis that "light exists." There would be infinite possibilities, each infinitely unlikely.

Yet somehow, based on our observations, we find it wise to believe that our current understanding of how physics works is true. How did we go from a particular physics model being infinitely unlikely to it being considered almost certa... (read more)

I would put it a different way. If we use the normal decision-making rules that many people use, especially consequentialists, we find that Pascal's wager is a pretty strong argument. There are many weak objections and some more promising ones. But unless we're certain of those objections, it seems difficult to escape the weight of infinity. If we look to other, more informal ways to make decisions (favoring ideas that are popular, beneficial, and intuitive), then major religions that claim to offer a route to infinity are pretty popular, arguably beneficial, and theism in general seems more intuitive to most people than atheism.

I think that, given we have no strong reason to reject Pascal's wager, people in general should do "due diligence" by investigating the claims and evidence for at least the major religions. If someone says, "hey, I've spent 500 hours investigating Christianity and 500 hours investigating Islam and glanced at these other things, and they all seem implausible"... that's one thing. But I think it's hard (probably impossible) to justify not taking Pascal's wager without substantially investigating religious claims. If, for instance, you end up thinking there's a 0.5% chance that Jesus was God or Mohammed was the messenger of God, that's pretty substantial.

How many hours do you think a reasonable person is obligated to spend investigating religions before rejecting the wager?
What reason is there NOT to accept Pascal's Wager?

I guess it’s useful then to clarify which point we’re interested in.

I personally am interested in the question “given free will and personal control over the outcome, should we choose a strategy of pursuing infinite utility?”

I am less interested in “if you did not have control over the outcome, would you say it’s better if the universe was deterministically set up such that we are pursuing infinite utility?”

Are you interested in the second question?

I'm mostly interested in the first. I think people should take Pascal's wager!
What reason is there NOT to accept Pascal's Wager?

I agree. My approach is to carve out locally valid arguments, then see if they can be connected together. I doubt this problem can be solved all at once :)

What reason is there NOT to accept Pascal's Wager?

Outside religion, an infinite universe or multiverse may exist, so if our actions are correlated with other people, all our actions might produce an infinite payoff.

The word "produce" is causal language. It seems to me that even if our actions are correlated with other people, there's no reason to think that we in particular are the ones controlling that correlated action. Do you think we can be said to "produce" utility if we're not causally in control of that production?

Yes, I feel comfortable saying if the EV changes based on our action, we are responsible in some sense or produced it. In Newcomb's paradox, I think you can "produce" additional dollars.
What reason is there NOT to accept Pascal's Wager?

I’m not sure about #1 or #3. I do think that #2 is false, again on mechanistic grounds. It’s harder to get a billion dollars than a million dollars, and that continues to apply as the sums of money offered grow larger.

Another way of putting it - the question isn’t “how likely is this to be a scam,” but “how likely is this to be a real offer.” Would you agree that an offer of a million dollars is more likely to be real than an offer of a billion dollars?

"Another way of putting it - the question isn’t “how likely is this to be a scam,” but “how likely is this to be a real offer.” Would you agree that an offer of a million dollars is more likely to be real than an offer of a billion dollars?" Thanks for the example. Yes, I think you've convinced me on this point. I think I want to say something like "when we have a good sense of the distribution of events, we know the bigger the departure from typical events, the less likely it is." But I still think (and maybe this is going back to #1 a little) that this still has some issues. We don't know how likely infinite payoffs are- some theist can say literally every human has achieved an infinite payoff- so I don't think we can say infinite payoffs don't happen. Outside religion, an infinite universe or multiverse maybe exists so if our actions are correlated with other people all our actions might produce an infinite payoff []. And if I did accept that we should discount infinite payoffs, I'm not sure the probability would fall fast enough to still get a finite payoff in expectation.
What reason is there NOT to accept Pascal's Wager?

This is a good response!

It's common sense that our prior for whether or not a technology will work for a given purpose depends on empiricism. This accounts for why we'd reject the million-dollar post office run: we have abundant empirical and mechanistic evidence that offers of ~free money are typically lies or scams. Utility can be an inverse proxy for mechanistic plausibility, but only because of efficient market hypothesis-like considerations. If there were a $20 bill on the sidewalk, somebody would have already picked it up.

Right, the distinction between expected value from tech and expected utility from offers from people makes sense. But I think your axiom still doesn't provide enough reason to reject Pascal's Wager.

  1. I'm not sure if we can say we have good grounds to apply this discounting to God or the divine in general. Can we put that in the same bucket as human offers? I guess you could say yes by arguing that God is just a human invention, but isn't that like assuming the conclusion or something?
  2. I don't think probability declines as fast as promised value rises: a guy on the street offering me $1 billion versus $100 million is about equally likely to be a scam, but the $$$ is different.
  3. Because of how infinity works, wouldn't I have to think there is a 100% chance that your axiom holds? Otherwise, even if there's only a 1% chance X God is real and a 1% chance that the expected value is infinite, it still dominates everything.
What reason is there NOT to accept Pascal's Wager?

Let's entertain as an axiom the claim that, in the absence of evidence, promises of utility/disutility become less likely the more is promised.

If I promise you $1 to drop off a letter at the post office for me, you'd believe me. If I promise you $1,000,000, you'd think I was joking.

More specifically, let's make our axiom the claim that, if we integrate the payoff times its likelihood over the range of utilities promised, that integral converges.

No matter how much utility is promised, the amount of utility received in expectation is finite.

In other words, th... (read more)
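The axiom above can be sketched numerically. Assuming, purely for illustration, a hypothetical credence function that decays exponentially in the size of the promise (the rate 1e-6 is invented, not a claim about real credences), expected utility peaks and then falls back toward zero as the promise grows:

```python
import math

def expected_payoff(promised, decay=1e-6):
    # Hypothetical credence function: p(u) = exp(-decay * u).
    # Any decay fast enough to make the integral of u * p(u) converge
    # will do; exponential decay is just the simplest example.
    credence = math.exp(-decay * promised)
    return promised * credence

# Expected utility u * e^(-decay * u) is maximized at u = 1/decay and
# shrinks afterward: promising ever more eventually buys less in expectation.
```

Under this assumption, an offer of a billion utils carries less expected utility than an offer of a million, matching the post-office intuition above.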

I see what you're saying, but you'd need to provide me with a reason to accept your axiom. Since I'm a moral realist, you'd have to convince me that it is likely to be true, rather than simply that it is convenient.

This is an interesting response, but doesn't it run into a problem where you could have large amounts of evidence that Action X provides infinite payoff but have to ignore it? Imagine really credible scientists/theologians discover there's a 90% chance that X gives you infinite payoff and a 90% chance Y gives you $5, but you feel obligated to grab the $5 just because you're an infinity skeptic? I also think this isn't consistent with how people decide things in general: we didn't need more evidence that COVID vaccines worked than that flu vaccines worked, even though the expected utility from COVID vaccines was much higher.
If we ever want to start an “Amounts matter” movement, covid response might be a good flag/example

Here's how I'd put your suggestion in my own words:

Currently, political debate weighs policy options by whether they "help" or "harm" certain reference classes. A reference class is any symbolic, social, or physical thing that we care about.

Examples of a reference class could include "patriotism," "community health," "jobs," or even something as concrete and small-scale as the view from an individual person's deck.

Hence, political debate is currently about defining which reference classes we should care about, and how important they are, and t... (read more)

Punching Utilitarians in the Face

naive utilitarianism implies things like lying a bunch or killing people in contrived situations

I don't know what "naive" utilitarianism is. Some possibilities include:

  1. Making incorrect predictions about the net effects of your behavior on future world states, due to the ways that utilitarian concepts might misguide your epistemics.
  2. Interpreting the same outcomes differently than a more "sophisticated" moral thinker would.

I would argue that (1) is basically an epistemic problem, not a moral one. If the major concern with utilitarian concepts is that it ... (read more)

Punching Utilitarians in the Face

You don't need explicit infinities to get weird things out of utilitarianism.

I agree with you. Weirdness, though, is a far softer "critique" than the clear paradoxes that result from explicit infinities. And high-value low-probability moral tradeoffs aren't even all that weird.

We need information in order to have an expected value. We can be utilitarians who deny that sufficient information is available to justify a given high-value low-probability tradeoff. Some of the critiques of "weird" longtermism lose their force once we clarify either a) that ... (read more)

Punching Utilitarians in the Face

I'm not well versed enough in higher mathematics to be confident in this, but it seems to me like these objections to utilitarianism are attacking it by insisting it solve problems it's not designed to handle. We can define a "finite utilitarianism," for example, where only finite quantities of utility are considered. In these cases, the St. Petersburg Paradox has a straightforward answer, which is that we are happy to take the positive expected value gamble, because occasionally it will pay off.

This brings utilitarianism closer to engineering than to math... (read more)
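The "finite utilitarianism" move above can be made concrete. In a St. Petersburg game capped at a finite number of coin flips (the cap of 30 below is an arbitrary illustration), the expected value is finite and easy to price:

```python
def capped_ev(n_flips):
    # St. Petersburg: first tails on flip k pays 2**k, with probability 2**-k.
    # Capping at n_flips: any longer run is stopped and pays 2**n_flips.
    ev = sum((2 ** k) * (2.0 ** -k) for k in range(1, n_flips + 1))
    ev += (2 ** n_flips) * (2.0 ** -n_flips)  # probability mass at the cap
    return ev

# Each allowed doubling contributes exactly 1 to the expected value, so the
# capped game is worth n_flips + 1: a gamble one can happily take at the
# right price, even though it usually pays little.
```

This illustrates why the paradox only bites when unbounded payoffs are admitted: with any finite cap, the positive-expected-value gamble is unproblematic.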

Neel Nanda (3 points, 1mo):
OK, that seems like a pretty reasonable position. Though if we're restricting ourselves to everyday situations it feels a bit messy: naive utilitarianism implies things like lying a bunch or killing people in contrived situations, and I think the utility maximising decision is actually to be somewhat deontologist. More importantly though, people do use utilitarianism in contexts with very large amounts of utility and small probabilities; see strong longtermism and the astronomical waste arguments. I think this is an important and action-relevant thing, influencing a bunch of people in EA, and that criticising this is a meaningful critique of utilitarianism, not a weird contrived thought experiment.
Ben Stewart (1 point, 1mo):
To a certain extent, it's utilitarianism that invites these potential critiques. If a theory says that probabilities/expected value are integral to figuring out what to do, then questions looking at very large or very small probabilities/expected values are fair game. And looking at extreme and near-extreme cases is a legitimate philosophical heuristic.
Guy Raveh (3 points, 1mo):
You don't need explicit infinities to get weird things out of utilitarianism. Strong Longtermism is already an example of how the tiny probability that your action affects a huge number of (people?) dominates the expected value of your actions in the eyes of some prominent EAs.
New ideas for mitigating biotechnology misuse

The barriers a “DNA registry” would impose on a terrorist (the only bad actor who’d be inconvenienced by it) would be trivial if they had the capability to do the other things necessary to produce a bioweapon. In fact, DNA synthesis and sequencing wouldn’t even be a necessary part of such an endeavor. I won’t describe the technological reasons why, but a basic familiarity with these technologies will make the reasons why clear. On the other hand, depending on execution, it could be rather annoying for legitimate researchers.

The idea of sealing off biologic... (read more)

Thank you for your thoughts. I agree that this is tricky, but I believe we should at the very least have some discussions on this. The scenario I think about is based on the following reasoning (and targets not-yet-known pathogens):

a) we are conducting research to identify new potential pandemic pathogens;

b) DNA synthesis capabilities and the other molecular biology capabilities required to synthesise viruses are becoming more accessible, and we cannot count on all orders being properly screened;

c) only a small number of labs (~20?) actually work on a given potential pandemic pathogen, plus some public health folks, definitely not more than 1000s of people, therefore at least 1 to 2 orders of magnitude fewer individuals than all those capable of synthesizing the potential pandemic pathogen (this obviously changes once a potential pandemic pathogen enters humans and becomes a pandemic pathogen; then the genome definitely needs to be public);

d) can we have those few people apply to access the genomes from established databases, similar to how people apply to access patient data?
In terms of needing such a system to be lightweight and specific: this also implies needing what is sometimes called "adaptive governance" (i.e., you have to be able to rapidly change your rules when new issues emerge). For example, there were ambiguities about whether SARS-CoV-2 fell under Australia Group export controls on "SARS-like-coronaviruses" (related journal article)... a more functional system would include triggers for removing export controls (e.g., at a threshold of global transmission, public health needs will likely outweigh biosecurity concerns about pathogen access).
Announcing: EA Engineers

Do you do HVAC at all? There’s a big need for people with experience in this space for biorisk mitigation.

HVAC is mechanical engineering. ASHRAE is the main professional body of mechanical engineers that works on ventilation standards and guidance for reducing aerosol transmission.
Jessica Wen (1 point, 1mo):
Awesome! There are a lot of high-impact projects that require civil engineering expertise in biorisk mitigation (as AllAmericanBreakfast has said), but also general civilisational resilience. There's a SHELTER weekend happening in August where you can help to explore concrete steps (pun not intended) for the implementation of civilisational shelters. If you're looking to do something more long-term, Ulrik Horn (who also commented on this post) is looking for talent to join his bioweapons shelter project, which you can read more about here: Fønix: Bioweapons shelter project launch. We will keep the Discord and newsletter updated with any other relevant projects/initiatives that we hear about!
Wikipedia editing is important, tractable, and neglected

Wow, I hadn't thought to check! Thanks for pointing that out, and for writing this post!

Wikipedia editing is important, tractable, and neglected

This post inspired me to rewrite the Wikipedia article for my MS thesis research topic, on aptamers. It's been very helpful to be able to link it to people when I'm trying to explain my research. Good post!

Well done! The article receives about 50,000 page views each year, so there are a lot of people out there who benefit from your contribution.
Announcing: EA Engineers

If possible, I’d suggest adding an “already on the discord server” option :)

Jessica Wen (1 point, 1mo):
Done! :)
Will faster economic growth make us happier? The relevance of the Easterlin Paradox to Progress Studies

Reactions to the "cardinal comparability" objection (4.2):

Unlike validity, this is not a well-studied topic. When I came to look at it for my PhD, I struggled to find much of a literature on it. There were bits and pieces, but nothing that seemed to convincingly offer an overall assessment of the issue (see Plant 2020 where I try to offer one).

Understudied topics are where non-expert input is more likely to be useful. However, in this case, we do have a literature on the topic. The term is "scope insensitivity." It's one of the key cognitive biases Kahnema... (read more)

On Deference and Yudkowsky's AI Risk Estimates

I'm going to break a sentence from your comment here into bits for inspection. Also, emphasis and elisions mine.

I would also say that to the extent that Yudkowsky-style research has enjoyed any popularity of late, it's because people have been looking at the old debate and realizing that

  • extremely simple generic architectures written down in a few dozen lines of code
  • with large capability differences between very similar lines of code
  • solving many problems in many fields and subsuming entire subfields as simply another minor variant
  • with large generalizing mod
... (read more)
Charles He (4 points, 2mo):
This sounds like straightforward transfer learning (TL) or fine-tuning, common in 2017. So you could just write 15 lines of Python which shops between some set of pretrained weights and sees how they perform. Often TL is many times (1000x) faster than training from random weights and only needs a few examples.

As speculation: it seems like in one of the agent simulations you could just have agents grab other agents' weights or layers and try them out in a strategic way (when they detect an impasse or a new environment or something). There is an analogy to biology, where species alternate between asexual and sexual reproduction, and trading of genetic material occurs during periods of adversity. (This is trivial; I'm sure a second-year student has written a lot more.) This doesn't seem to fit any sort of agent framework or improve agency, though. It just makes you train faster.
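The transfer-learning claim above can be illustrated with a toy example (all numbers invented; the "pretrained" weights are hypothetically assumed to come from a similar prior task): with the same small training budget, a pretrained initialization ends at a lower loss than a random one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: recover w_true from linear data via plain gradient descent.
w_true = np.array([2.0, -1.0])
X = rng.normal(size=(100, 2))
y = X @ w_true

def final_loss(w_init, steps=20, lr=0.05):
    # Run a fixed budget of gradient steps and report the ending MSE.
    w = w_init.astype(float).copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return float(np.mean((X @ w - y) ** 2))

# "Transfer": initialize near weights hypothetically learned on a similar
# task (w_true plus a small perturbation) vs. a random initialization.
loss_transfer = final_loss(w_true + 0.1)
loss_random = final_loss(rng.normal(size=2))
# With the same step budget, the pretrained start ends at a lower loss.
```

This is only the "train faster" half of the comment; nothing here adds agency, which matches the comment's own caveat.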
Re gradations of agency: Level 3 and level 4 seem within reach IMO. IIRC there are already some examples of neural nets being trained to watch other actors in some simulated environment and then imitate them. Also, model-based planning (i.e. level 4) is very much a thing, albeit something that human programmers seem to have to hard-code. I predict that within 5 years there will be systems which are unambiguously in level 3 and level 4, even if they aren't perfect at it (hey, we humans aren't perfect at it either).
New cause area: bivalve aquaculture

Non-EAs are receptive to a proposal to substitute bivalves for other meat. They are not receptive to proposals to go vegetarian/vegan. Bivalves are also healthier than plant-based meat. Therefore, bivalves are the most effective way to reduce overall animal suffering.

I interpret the linked post about receptivity to proposals to go vegetarian/vegan as providing evidence that people are receptive to these proposals. It states:

However, polls suggest that the percentage of the population that’s vegetarian has stayed basically flat since 1999. In short, we’re b

... (read more)
UVC air purifier design and testing strategy

This is very helpful, thank you! I've been mainly looking into design projects for the summer, and the impression I picked up at EAGxBoston was that just having low-cost UVC devices available was a key bottleneck. Working on a design sounded like it might fit the bill. Based on what you've said, it sounds like this is more of a logistics and social coordination problem than a money problem. I'll keep this in mind for the future, though.

Some potential lessons from Carrick’s Congressional bid

So this is a definitional issue: is it accurate to call the most Hispanic district in the 14th most Hispanic state (per Wikipedia) "not a heavily Hispanic area or anything?"

We can answer this quantitatively.

17.4% of the citizen voting age population of OR-6 is Hispanic. Of 9 candidates who ran in OR-6, two, Salinas and Leon, are Hispanic, making Hispanics 22.2% of the candidate pool. So they were not particularly over- or under-represented in this race. It is slightly surprising that the strongest candidate in this race happened to be Hispanic, but 22.2% c... (read more)
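The arithmetic above can be checked directly (figures copied from the comment):

```python
# Hispanic share of the OR-6 candidate pool vs. the electorate,
# using the figures quoted in the comment above.
hispanic_candidates = 2          # Salinas and Leon
total_candidates = 9
candidate_share = hispanic_candidates / total_candidates

cvap_hispanic_share = 0.174      # Hispanic share of citizen voting age population

print(round(candidate_share * 100, 1))  # 22.2
```

So the candidate pool (22.2% Hispanic) tracks the electorate (17.4%) fairly closely, supporting the "not particularly over- or under-represented" reading.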

Some potential lessons from Carrick’s Congressional bid

I strongly upvoted your post, and thanks for taking the time to write it.

I note that you’re effectively recommending a strategy of lobbying instead of electioneering in order to advance the cause of pandemic preparedness. Do you have data or personal experience to support the idea that lobbying is a more effective method than campaign sponsorship of aligned candidates to build political support for an issue?

Matt Lerner spent some time looking into lobbying for altruistic causes and posted about it on the EA forum. I appreciate his research, and would like ... (read more)

As someone who has both worked to elect candidates and who has lobbied at many levels, my experience is that lobbying can be quite effective if it is done with a candidate who shares your values and goals. I have done this mostly at the state level and find that, until they rise to a position of some power, candidates may not be able to achieve what they wish. In contrast to this, spending time with committee chairs who have much power over the agenda is quite effective, especially if you can establish yourself as a source of reliable information and policy directions. Both are valuable. Thanks for the article referral. I look forward to reading it. 

Personally (though obviously Carol may disagree), I don't think that's necessarily the strategic takeaway from Carol's post. The value of electioneering vs. lobbying probably depends on the specifics of the districts and candidates. 

When an EA-oriented candidate has stronger ties to the district, a more robust political history, deeper local political connections, etc.? Sure, the monetary value of donating to that candidate probably exceeds lobbying.

But at the end of the day, none of those factors were remotely there for Flynn. 

As an aside, I gr... (read more)

Bad Omens in Current Community Building

But to me the thrust of this post (and the phenomenon I was commenting on) was: there are many people with the ability to solve the worlds biggest problems. It would be a shame to lose their inclination purely due to our CB strategies. If our strategy could be nudged to achieve better impressions at people's first encounter with EA, we could capture more of this talent and direct them to the world's biggest problems.

Another way of stating this is that we want to avoid misdirecting talent away  from the world's biggest problems. This might occur if EA ... (read more)

Bad Omens in Current Community Building

The criticisms of EA movement building tactics that we hear are not necessarily the ones that are most relevant to our movement goals. Specifically, I'm hesitant to update much on a few 18-year-olds who decide we're a "cult" after a few minutes of casual observation at a freshers' fair. I wouldn't want to be part of a movement that eschewed useful tools for better integrating its community because it's afraid of the perception of a few sarcastic teenagers.

Instead, I’m interested in learning about the critiques of EA put forth by highly-engaged EAs, non-EAs... (read more)

I made this comment with the assumption that some of these people could have extremely valuable skills to offer to the problems this community cares about. These are students at a top UK university for the sciences, and many of them go on to be significantly influential in politics and business, at rates much higher than the base rate at other unis or in the average population.

I agree not every student fits this category, or is someone who will ever be inclined towards EA ideas. However I don't know if we are claiming that being in this category (e.g. being in the top N... (read more)

Effective altruism’s odd attitude to mental health

What follows is mere speculation on my part.

I tend to assume that the physical health interventions we promote via global health initiatives are also the most tractable ways to improve mental health. Losing a child to malaria, or suffering anemia due to a worm infection, or being extremely poor, or living and dying through wars and plagues, seem like they’d have a devastating impact on people’s mental health.

Because EAs don’t typically suffer from these problems, and because we allow for a lot of self-care, it does not surprise me that EAs focus on specifi... (read more)

Against immortality?

I think Matt’s on the right track here. Treating “immortal dictators” as a separate scenario from “billions of lives lost to an immortal dictator” smacks of double-counting.

Really, we’re asking if immortality will tend to save or lose lives on net, or to improve or worsen QoL on net.

We can then compare the possible causes of lives lost/worsened vs gained/bettered: immortal dictators, or perhaps immortal saints; saved lives from life extension; lives less tainted by fear of death and mourning; lives more free to pursue many paths; alignment of individual se... (read more)

Against immortality?

Hi Owen! The advantages and limitations of immortality need more thought as our society starts to invest more seriously in anti-aging.

One of my challenges with this post is that it claims to provide an "anti-immortality case," but then proceeds to simply list some problems that might arise if people were immortal.

To make an anti-X case, you need to do more than list some problems with X. You need to make a case that the problems are insurmountably bad or risky, even after a consideration of possible solutions. Alternatively, you can make a case that ... (read more)

Owen Cotton-Barratt:
Good questions! I could give answers but my error bars on what's good are enormous. (I do think my post is mostly not responding to whether longevity research is good, but to what the appropriate attitudes/rhetoric towards death/immortality are.)
Free-spending EA might be a big problem for optics and epistemics

Acknowledging that important caveat, I am very pleased to have this counterbalancing data available. I hope that we can continue to gather more of it and get a better sense of how the EA movement and its social surroundings think about these questions over time. Thank you for collecting it.

Free-spending EA might be a big problem for optics and epistemics

Consider the analogy with food production and food waste in relation to global hunger. We can grow enough food to feed the planet. Our ability to solve world hunger is not constrained by food production, but, in my understanding, by logistical issues involving waste, transportation, warfare, and governance problems.

Likewise, in EA, our ability to address the problems with which we are concerned may be increasingly unconstrained by funding. Instead, it's bottlenecked by similar logistics problems: waste, governance, coordination within and between organizat... (read more)

With the caveat that this is obviously flawed data because the sample is "people who came to an all-expenses-paid retreat," I think it's useful to provide some actual data Harvard EA collected at our spring retreat. I was slightly concerned that the spending would rub people the wrong way, so I included as one of our anonymous feedback questions, "How much did the spending of money at this retreat make you feel uncomfortable [on a scale of 1 to 10]?" All 18 survey answerers provided an answer. Mean: 3.1. Median: 3. Mode: 1. High: 9.

I think it's also worth ... (read more)

6 Year Decrease of Metaculus AGI Prediction

I note that in November 2020, the Metaculus community's prediction was that AGI would arrive even sooner (2032, versus the current 2036 prediction). So if we're taking the Metaculus prediction seriously, we also want to understand why the forecasters on Metaculus have longer timelines now than they did a year and a half ago.

I note that 60 extra forecasters joined in forecasting over the last few days, representing about a 20% increase in the forecaster population for this question.

This makes me hypothesize that the recent drop in forecast... (read more)
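For concreteness, the implied pool sizes can be backed out from those two figures (assuming the "20% increase" is measured against the pre-spike population):

```python
new_forecasters = 60      # "60 extra forecasters joined ... over the last few days"
increase_fraction = 0.20  # "about a 20% increase"

# Implied forecaster pool for this question before and after the recent influx
prior_pool = new_forecasters / increase_fraction
current_pool = prior_pool + new_forecasters

print(int(prior_pool), int(current_pool))  # roughly 300 and 360 forecasters
```

So the new arrivals make up roughly a sixth of the current forecaster population for this question.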

I feel anxious that there is all this money around. Let's talk about it

Two factual nitpicks:

1. The fellowship gives $50k to 100 fellows, a total of $5.5mil.

2. The money's not described by AF as "no strings attached." From their FAQ:

Scholarship money should be treated as “professional development funding” for award winners. This means the funds could be spent on things like professional travel, textbooks, technology, college tuition, supplementing unpaid internships, and more.

Students will receive ongoing guidance to manage and effectively spend their scholarship funds.

For Fellows ($50,000), a (taxed) amount is placed in a trust fund

... (read more)
Thanks for the corrections, fixed. I agree that the hits-based justification could work out, just would like to see more public analysis of this and other FTX initiatives.
I feel anxious that there is all this money around. Let's talk about it

I've spent time in the non-EA nonprofit sector, and the "standard critical story" there is one of suppressed anger among the workers. To be clear, this "standard critical story" is not always fair, accurate, or applicable. By and large, I also think that, when it is applicable, most of the people involved are not deliberately trying to play into this dynamic. It's just that, when people are making criticisms, this is often the story I've heard them tell, or seen for myself.

It goes something like this:

[Non-EA] charities are also primarily funded by milliona

... (read more)
Overall I like your post and think there's something to be said for reminding people that they have power; in this case, the power to probe at the sources of their anxiety and reveal ground truth. But there is something unrealistic, I think, about placing the burden on the individual with such anxiety, particularly because answering questions about whether Funder X is lowering or raising the bar too much requires in-depth insider knowledge which, understandably, people working for Funder X might not want to reveal for a number of reasons, such as:

1. they're too busy, and just want to get on with grant-making;
2. with distributed responsibility for making grants in an organisation, there will be a distribution of happiness across staff with the process, and airing such tensions in public can be awkward and uncomfortable;
3. they've done a lot of the internal auditing / assessment they thought was proportional;
4. they're seeing this work as inherently experimental / learning-by-doing, and therefore plan more post-hoc reviews than prior process-crafting.

I'm also just a bit averse, from experience, to replying to people's anxieties with "solve it yourself". I was on a graduate scheme where pretty much every response to an issue raised (often really systemic, challenging issues that people hadn't been able to solve for years, or that could be close to whistle-blowing issues) was pretty much "well, how can you tackle this?"* The takeaway message then feels something like "I'm a failure if I can't see the way out of this, even if this is really hard, because this smart, more experienced person has told me it's on me". But lots of these systemic issues do not have an easy solution, and taking steps towards action can be emotionally or intellectually hard, or frankly personally costly. From experience, this kind of response can be empowering, but it can also inculcate a feeling of desperation in clever and can-do-attitude people (like most
Moved this comment to a shortform post here.
Unsurprising things about the EA movement that surprised me

When you first get to EA, it feels like there is an EA text about everything.

EA also stands for Endless Articles!

Really? I thought it stood for Easy Answers
Milan Griffes on EA blindspots

One constructive project might be to outline a sort of "pipeline"-like framework for how an idea becomes an EA cause area. What is the "epistemic bar" for:

  • Thinking about an EA cause area for more than 10 minutes?
  • Broaching a topic in informal conversation?
  • Investing 10 hours researching it in depth?
  • Posting about it on the EA forum?
  • Seeking grant funding?

Right now, I think that we have a bifurcation caused by feed-forward loops. A popular EA cause area (say, AI risk or global health) becomes an attractor in a way that goes beyond the depth of the argument in f... (read more)

Milan Griffes on EA blindspots

One estimate from 2019 is that EA has 2315 "highly-engaged" EAs and 6500 "active EAs in the community."

So a way of making your claims more precise is to estimate how many of these people should drop some or all of what they're doing now to focus on these cause areas. It would also be helpful to specify what sorts of projects you think they'd be stopping in order to do that. If you think it would cause an influx of new members, they could be included in the analysis as well. Finally, I know that some of these issues do already receive attention from w... (read more)

Agree with almost all of this except: the bar for proposing candidates should be way, way lower than the bar for getting them funded and staffed and esteemed. I feel you are applying the latter bar to the former purpose.

Legibility is great! The reason I promoted Griffes' list of terse/illegible claims is because I know they're made in good faith and because they make the disturbing claim that our legibility / plausibility sensor is broken. In fact, if you look at his past Forum posts you'll see that a couple of them are expanded already. I don't know what mix of "x was investigated silently and discarded" and "movement has a blindspot for x" explains the reception, but hey, nor does anyone.

Current vs. claimed optimal person allocation is a good idea, but I think I know why we don't do 'em: because almost no one has a good idea of how large efforts are currently, once we go any more granular than "big 20 cause area". Very sketchy BOTEC for the ideas I liked:

#5: Currently >= 2 people working on this? Plus lots of outsiders who want to use it as a weapon against longtermism. Seems worth a dozen people thinking out loud and another dozen thinking quietly.

#10: Currently >= 3 people thinking about it, which I only know because of this post. Seems worth dozens of extra nuke people, which might come from the recent Longview push anyway.

#13: Currently around 30? people, including my own minor effort. I think this could boost the movement's effects by 10%, so 250 people would be fine.

#20: Currently I guess >30 people are thinking about it, going to India to recruit, etc. Counting student groups in non-focus places, maybe 300. But this one is more about redirecting some of the thousands in movement building, I guess.

That was hard and probably off by an order of magnitude, because most people's work is quiet and unindexed, if not actively private.
We're announcing a $100,000 blog prize

Thanks very much for the response!

It sounds like you're interested both in the quality of the content and in its convenience, visibility, and readership. But it doesn't necessarily need to have the journal-like structure of a blog. The RSS/newsletter would be a way to keep regular readers apprised of new content. But it doesn't necessarily have to be primarily meant to be ingested in chronological (or reverse-chronological) order.

We're announcing a $100,000 blog prize

Does this prize draw a distinction between a website and a blog? As an example of something that's more "website" than "blog", think of Gwern's website or imagine a scaled-down, individually-run version of 80,000 Hours.

Why websites?

I. Emphasis on structure

The reason I ask is that it seems to me that there are many benefits to creating content that's a little more formal and a little more data-oriented. Blogs tend to treat individual posts as one-offs, even if they're arranged into a sequence. By contrast, websites can break local arguments off into chun... (read more)

I strongly agree with you, and would add that long content like Gwern's (or Essays on Reducing Suffering or PredictionBook or Wikipedia etc.) are important as epistemic infrastructure: they have the added value of constant maintenance, which allows them to achieve depth and scope that is usually not found in blogs. I think this kind of maintenance is really really important, especially when considering long-term content. I mourn the times when people would put a serious effort into putting together an FAQ for things—truly weapons from a more civilized age... (read more)

Nick Whitaker:
We strongly recommend that your blog has some form of RSS/newsletter. This makes it easier for people to find and read (and much easier for us to judge). At the same time, I love and generally encourage the idea of building a website around the content along the lines you describe, for the reasons you enumerate. This is the big downside of Substack.
AI Risk is like Terminator; Stop Saying it's Not

Part of the "it's not like Terminator" line is a response to people misremembering the plot of the movie. From the Dylan Matthews VOX article you linked:

What did these folks think would happen — was some company going to build Skynet and manufacture Terminator robots to slaughter anyone who stood in their way? It felt like a sci-fi fantasy, not a real problem.

Here's the description of Skynet from the wikipedia article:

Defense network computers. New... powerful... hooked into everything, trusted to run it all. They say it got smart, a new order of intellige

... (read more)
Prediction Markets For Credit?

That’s why I suggested the prediction market would be based on a curve relative to one’s classmates :) I may go back and emphasize that point.

What (standalone) LessWrong posts would you recommend to most EA community members?

John had several posts highly ranked in the 2020 LessWrong review, and one in the 2019 LessWrong review, so there's a community consensus that they're good. There was also a 2018 LessWrong review, though John didn't place there.

In general, the review is a great resource for navigating more recent LW content. Although old posts are a community touchstone, the review includes posts that reflect the live interests of community members that have also been extensively vetted not only for being exciting, but for maintaining their value a year later.

Thank you!
Load More