Can you say something about what N-D lasers are and why they present such a strong threat? A Google search for "N-D laser" just turns up neodymium lasers, and it isn't clear why those would be as threatening as you present. Even in the worst case, where you build a probe with a very powerful fusion energy source that can fire a laser powerful enough to kill people, you could probably also build a laser or defense system to strike and destroy the probe before it causes existential loss.
Interstellar travel will probably doom the long-term future
My intuition is that most of the galactic existential risks listed are highly unlikely, and it is possible that the likely ones (self-replicating machines and ASI) may be defense-dominant. An advanced civilization capable of creating self-replicating machines to destroy life in other systems could well be capable of building defense systems against a threat like that.
You could substantially increase your weekly active users, converting monthly active users (MAU) into weekly and even daily users, and increasing MAU as well, by using push notifications to inform users of replies to their posts and comments and other events that are currently only sent as in-forum notifications to most users. Many, many times, I have posted on the forum, sent a comment or reply, and only weeks later seen that there was a response. On the other hand, I will get an email from twitter or bluesky if one person likes my post, and I immediately...
Fair enough.
My central expectation is that the value of one more human life created is roughly even with the amount of nonhuman suffering that life would cause (based on this: https://forum.effectivealtruism.org/posts/eomJTLnuhHAJ2KcjW/comparison-between-the-hedonic-utility-of-human-life-and#Poultry_living_time_per_capita). I'm also willing to assume cultured meat is not too far away. Then the childhood delay until contribution makes only a fractional difference, and I tip very slightly back into the pro-natalist camp, while still accepting that the meat ea...
I think no one here is trying to use pronatalism to improve animal welfare. The crux for me is more whether pronatalism is net-negative, neutral, or net-positive, and its marginal impact on animal welfare seems to matter in that case. But the total impact of animal suffering dwarfs whatever positive or negative impact pronatalism might have.
I think Richard is right about the general case. It was a bit unintuitive to me until I ran the numbers in a spreadsheet, which you can see here:
Basically, yes, assume that meat eating increases with the size of the human population. But the scientific effort toward ending the need to eat meat also increases with the size of the human population, assuming marginal extra people are as likely to go into researching the problem as the average person. Unde...
Right--in that simple model, each marginal average person decreases the time taken to invent cultured meat at the same rate as they contribute to the problem, and there's an exact identity between those rates. But there are complicating factors that I think work against assuring us there's no meat-eater problem:
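That identity can be sketched in a toy model (all numbers are illustrative, not empirical estimates): if research progress scales with population and so does yearly meat consumption, the total meat consumed before cultured meat arrives cancels out the population term.

```python
# Toy model of the meat-eater identity discussed above.
# All parameters are illustrative placeholders, not empirical estimates.

def total_meat_before_cultured(population: float,
                               research_needed: float = 1000.0,
                               meat_per_person_year: float = 1.0) -> float:
    """Total meat consumed before cultured meat is invented.

    Assumes research progress is proportional to population, so
    time-to-invention = research_needed / population, while yearly
    consumption = population * meat_per_person_year.
    """
    years_until_cultured = research_needed / population
    yearly_consumption = population * meat_per_person_year
    return years_until_cultured * yearly_consumption

# The product is independent of population: extra people eat more meat
# per year but shorten the wait by exactly the offsetting factor.
for pop in (1e6, 2e6, 10e6):
    print(pop, total_meat_before_cultured(pop))
```

Under these assumptions every population size yields the same cumulative total, which is the "exact identity" in the simple model; the complicating factors below are ways the real world departs from it.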
Ok, I missed the citation to your source initially because it wasn't in your comment when you first posted it. The source does say there is less insect abundance in land converted from natural space to agricultural use. So what I said about increased agricultural use supports your point rather than mine.
Great point! Though I think it's unclear what the impact of more humans on wild terrestrial invertebrate populations is. Developed countries have mostly stopped clearing land for human living spaces. I could imagine that a higher human population could induce demand for agriculture and increased trash output, which could increase terrestrial invertebrate populations.
Reviving this old thread to discuss the animal welfare objection to pro-natalism that I think is changing my mind on pro-natalism. I'm a regular listener to Simone and Malcolm Collins's podcast. Since maybe 2021 I've gone through an arc: first fairly neutral, then strongly pro-natalist, then pro-natalist but not rating it as an effective cause area, and now entering a fourth phase where I might reject pro-natalism altogether.
I value animal welfare and at least on an intellectual level I care equally about their welfare and humanity's. For every...
You can just widen the variance of your prior until it is appropriately imprecise, so that the variance of your prior reflects the amount of uncertainty you have.
For instance, perhaps a particular disagreement comes down to the increase in p(doom) deriving from an extra 0.1 C in global warming.
We might have no idea whether 0.1 C of warming causes an increase of 0.1% or 0.01% in p(doom), but be confident it isn't 10% or more.
You could model the distribution of your uncertainty with, say, a beta distribution of .
You might wonder, why b=...
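A minimal sketch of the kind of prior described above (the shape parameters here are purely illustrative, not the elided values):

```python
import math

def beta_pdf(x: float, a: float, b: float) -> float:
    """Density of a Beta(a, b) distribution at x in (0, 1).

    Computed in log-space via lgamma to avoid overflow
    for large shape parameters.
    """
    log_pdf = (math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
               + (a - 1) * math.log(x) + (b - 1) * math.log(1 - x))
    return math.exp(log_pdf)

# Hypothetical prior over the increase in p(doom) from +0.1 C of warming:
# most mass well below 1%, negligible mass at 10% or more.
a, b = 1.1, 2000.0            # illustrative shape parameters
mean = a / (a + b)            # prior mean, roughly 0.055%
print(f"prior mean: {mean:.5f}")
```

Widening or narrowing the variance (e.g. by scaling both parameters down or up) is the knob the comment above describes: the prior stays a single distribution, just an appropriately imprecise one.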
This leaves me deeply confused, because I would have thought a single (if complicated) probability function is better than a set of functions because a set of functions doesn't (by default) include a weighting amongst the set.
It seems to me that you need to weight the probability functions in your set according to some intuitive measure of their plausibility, according to your own priors.
If you do that, then you can combine them into a joint probability distribution, and then make a decision based on what that distribution says about the outcomes. You could...
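The weighting-then-combining step can be sketched as a simple mixture (the component densities and weights here are made-up examples, not anyone's actual estimates):

```python
import math

def normal_pdf(x: float, mu: float, sigma: float) -> float:
    """Density of a Normal(mu, sigma) distribution at x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Two candidate probability functions for the same uncertain quantity,
# each paired with an intuitive plausibility weight (weights sum to 1).
candidates = [
    (0.7, lambda x: normal_pdf(x, mu=0.001, sigma=0.0005)),
    (0.3, lambda x: normal_pdf(x, mu=0.01, sigma=0.005)),
]

def mixture_pdf(x: float) -> float:
    """Joint distribution: the plausibility-weighted mixture of candidates."""
    return sum(w * f(x) for w, f in candidates)
```

Because the weights sum to 1, the mixture is itself a single well-formed probability distribution, which is the point of the comment above: once you attach weights, a set of functions collapses into one distribution you can make decisions with.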
As Yann LeCun recently said, “If you do research and don't publish, it's not science.”
With all due respect to Yann LeCun, in my view he is as wrong here as he is dismissive about the risks from AGI.
Publishing is not an intrinsic and definitional part of science. Peer-reviewed publishing definitely isn't--it has only been the default for the past several decades to half a century or so. It may not be the default in another half century.
If Trump still thinks AI is "maybe the most dangerous thing" I would be wary of giving up on chances to leverage his support on AI safety.
In 2022, individual EAs stood for elected positions within each major party. I understand there are Horizon fellows with both Democrat and Republican affiliations.
If EAs can engage with both parties in those ways, added to the fact the presumptive Republican nominee may be sympathetic, I wouldn't give up on Republican support for AI safety yet.
Ha, I see. Your advice might be right, but I don't think "consciousness is quantum". I wonder if you could say what you mean by that?
Of course I've heard that before. In the past, when I have heard people say it, it's been advocates of free-will theories of consciousness trying to propose a physical basis for consciousness that preserves the indeterminacy of decision-making. Some objections I have to this view:
TLDR: I'm looking for researcher roles in AI Alignment, ideally translating technical findings into actionable policy research
Skills & background: I have been a local EA community builder since 2019. I have a PhD in social psychology and wrote my dissertation on social/motivational neuroscience. I also have a BS in computer science and spent two years in industry as a data scientist building predictive models. I'm an experienced data scientist, social scientist, and human behavioral scientist.
Location/remote: Currently located on the West Coast of the...
But I would guess that pleasure and unpleasantness aren't always caused by the conscious sensations; rather, both can have the same unconscious perceptions as a common cause.
This sounds right. My claim is that there are all sorts of unconscious perceptions and valenced processing going on in the brain, but all of that is only experienced consciously once there's a certain kind of recurrent cortical processing of the signal, which can loosely be described as "sensation". I mean that very loosely; it can even include memories of physical events or semantic though...
I would say thinking of something funny is often pleasurable. Similarly, thinking of something sad can be unpleasant. And this thinking can just be inner speech (rather than visual imagination)....Also, people can just be in good or bad moods, which could be pleasant and unpleasant, respectively, but not really consistently simultaneous with any particular sensations.
I think most of those things actually can be reduced to sensations; moods can't be, but then, are moods consciously experienced, or do they only predispose us to interpret consciou...
To give a concrete example, my infant daughter can spend hours bashing her toy keyboard with 5 keys. It makes a sound every time. She knows she isn't getting any food, sleep, or any other primary reinforcer to do this. But she gets the sensations of seeing the keys light up and a cheerful voice sounding from the keyboard's speaker each time she hits it. I suppose the primary reinforcer just is the cheery voice and the keys lighting up (she seems to be drawn to light--light bulbs, screens, etc).
During this activity, she's playing, but also learning ab...
Yes I see that is a reasonable thing to not be convinced about and I am not sure I can do justice to the full argument here. I don't have the book with me, so anything else I tell you is pulling from memory and strongly prone to error. Elsewhere in this comments section I said
...When you have sensations, play can teach you a lot about your own sensory processes, and you can subsequently use what you've learned to leverage your visual sensations to accomplish objectives. It seems odd that an organism that can learn (as almost all can) would evolve visual sensations b
To me "conscious pleasure" without conscious sensation almost sounds like "the sound of one hand clapping". Can you have pure joy unconnected to a particular sensation? Maybe, but I'm sceptical. First, the closest I can imagine is calm joyful moments during meditation, or drug-induced euphoria, but in both cases I think it's at least plausible there are associated sensations. Second, to me, even the purest moments of simple joy seem to be sensations in themselves, and I don't know if there's any conscious experience without sensations.
Humphrey theorises th...
Humphrey's argument that fish aren't conscious doesn't rest only on their not having the requisite brain structures, because, as you say, it is possible consciousness could have developed in their own structures in ways that are simply distinct from our own. But then, Humphrey would ask, if they have visual sensations, why are they uninterested in play? When you have sensations, play can teach you a lot about your own sensory processes, and you can subsequently use what you've learned to leverage your visual sensations to accomplish objectives. It seems odd that an organ...
I tend to think that questions about which organisms or systems are conscious mostly depend on identifying the physical correlates of consciousness, and understanding how they work as a system, and that questions about panpsychism, illusionism, eliminativism, or even Chalmers's Hard Problem don't bear on this question very much. I think there's probably still a place for that philosophical debate because (1) there might be implications about where to look for the physical systems and (2) as I said to Michael earlier, illusionism might change our perspective ...
say that pain is bad (even if it is not phenomenal) because it constitutively includes the frustration of a desire, or the having of a certain negative attitude of dislike
I'm curious how, excluding phenomenal definitions, he defines "frustration of a desire" or "negative attitude of dislike", because I wonder whether these would include extremely simple frustrations, like preventing a computer-generated character in a computer game from reaching its goal. We could program an algorithm to try to solve for a desire ("navigate through a maze t...
Not absolutely sure I'm afraid. I lent my copy of the book out to a colleague so I can't check.
Humphrey mentioned illusionism (page 80 acc to Google books) but iirc he doesn't actually say his view is an illusionist one.
Personally I can't stand the label "illusionism" because to me the label suggests we falsely believe we have qualia, and actually have no such thing at all! But your definition is maybe much more mundane--there, the illusion is merely that consciousness is mysterious or important or matters. I wish the literature could use labels that are m...
Actually, I have to correct my earlier reply. Iirc the argument is that all conscious animals engage in physical play, not necessarily that all playful animals are conscious. On the other hand, Humphrey does say that all animals engaging in pure sensation-seeking type play are conscious, so that's probably the sort of play he'd need to bring him around on octopuses.
Humphrey spent a lot of time saying that authors like Peter Godfrey-Smith (whose book on octopus sociality and consciousness I have read, and also recommend) are wrong or not particularly serious when they argue that octopus behavior is play, because there are more mundane explanations for play-like behavior. I can't recall too much detail here because I no longer have Humphrey's book in my possession. In any case I think if you convinced him octopuses do play he would probably change his mind on octopuses without needing to modify any aspects of the overall theory. He'd just need to concede that the way consciousness developed in warm blooded creatures is not the only way it has developed in evolutionary history.
No, the author is ultimately unclear about why qualia in itself is useful, but by reasoning about the case studies I listed, his argument that qualia is in fact related to recursive internal feedback loops is ultimately a bit stronger than just "these things all feel like the same things, so they must be related".
Humphrey first argues through his case studies that activity in the neocortex seems to generate conscious experience, while activity in the midbrain does not. Further, midbrain activity is sophisticated and can do a lot of visual and other perceptual pro...
I've tried to condense a book-length presentation into a 10 minute read and I probably have made some bad choices about which parts to leave out.
It's not that sensory play is necessary for producing sentience. The claim is that any animal that is sentient would be motivated to play. There might be motivations for play other than sentience, but all sentient creatures (so the argument goes) would want to play in order to explore and learn about the properties of their own sensory world.
For the limbless species you mentioned, if we imagine a radical scen...
Early (and peak) Quakers went down some weird ineffective paths. It's cool that they were into nonviolence and class equality, but they were also really into renaming the days of the week and months of the year to avoid pagan names.
This sounds like hits-based cause selection. The median early Quaker cause area wasn't particularly effective, but their best cause area was probably worth all of the wasted time in all of the others.
I am visiting a Quaker church for the first time ever tomorrow. I've been out of religious community for 15 years or so and I'd like to explore one that is compatible with my current views.
I'm trying to think about what "EA but more religious" might look like. Could you form a religious community holding weekly assemblies to celebrate our aspirations to fill the lightcone with happy person moments? I think that is a profoundly spiritual and emotional activity and I think we can do it.
I'll post a longer post to this effect soon.
I conditionally disagree with the "Work trials" point. I think work trials are a pretty positive innovation that EA organizations in particular use, and they enable hiring for a better fit than not doing work trials would. This is good in the long run for potential employees and for the employer.
This is conditional on
I can anticipate some reasonable disa...
It seems like a safer bet that AI will have some kind of effect on lightening the labor load than it will solve either of those particular problems.
I've spent hours going over your arguments and this is a real crux for me. AI is likely to lessen the need for human workers, at least for maintaining our existing levels of wealth.
I've stumbled here after getting more interested in the object-level debate around pronatalism. I am glad you posted this because, in the abstract, I think it's worthwhile to point out where someone may not be engaging in good faith within our community.
Having said that, I wish you had framed the Collinses' actions in a little more good faith yourself. I do not consider that one quoted tweet to be evidence of an "opportunistic power grab". I think it's probably a bit unhealthy to see our movement in terms of competing factions, and to seek wins for one'...
Have you looked at the fertility rates underlying the UN projections? They're projecting fertility rates across China, Japan, Europe, and the United States to arrest their yearly decline and begin to slowly climb back to somewhere in the 1.5 to 1.6 range.
That seems way too high, because it assumes not just that current trends stop but that they reverse into the opposite of the observed direction. Even their "low" scenario has fertility rebounding from a low in ~2030.
This is despite all those countries still having a way to go before they get to the low ...
I enjoyed this post. I think it is worth thinking about whether the problem is unsolvable! One takeaway I had from Tegmark's Life 3.0 was that we will almost certainly not get exactly what we want from AGI. It seems intuitive that any possible specification will have downsides, including the specification not to build AGI at all.
But asking for a perfect utopia seems a high bar for "Alignment"; on the other hand, "just avoid literal human extinction" would be far too low a bar and include the possibility for all sorts of dystopias.
So I think it's...
I have a very uninformed view on the relative alignment and capabilities contributions of things like RLHF. My intuition is that RLHF is positive for alignment, but I'm almost entirely uninformed on that. If anyone's written a summary of where they think these grey-area research areas lie, I'd be interested to read it. Scott's recent post was not a bad entry into the genre, but obviously it just worked at a very high level.
Can you describe exactly how much you think the average person, or average AI researcher, is willing to sacrifice on a personal level for a small chance at saving humanity? Are they willing to halve their income for the next ten years? Reduce by 90%?
I think in a world where there was a top down societal effort to try to reduce alignment risk, you might see different behavior. In the current world, I think the "personal choice" framework really is how it works because (for better or worse) there is not (yet) strong moral or social values attached to capability vs safety work.
Here's something else I'd like to know on that survey:
Surveys of these types are often anonymous, because
I am now also very curious about what value the community gets from various kinds of experiences in EA spaces.
For example, I'm curious how most women would weigh being in a community that lets them access healthy professional networks free from the tensions of inappropriate* sexual/romantic advances against being in a community where they are able to find EA partners. (I am implying that there is a tradeoff here.)
I am also curious if the men in the community have an opposing view - if so, it might be important to think about how the existing sta...
Ben -- good idea. I think the crucial thing would be to phrase the questions about these issues as neutrally and factually as possible, to avoid responses biases in either direction.
Ideally EA would ask just about actual first-hand experiences of the individual, rather than general perceptions, impressions based on rumors and media coverage, or second/third-hand reports.
EA has copped a lot of media criticism lately. Some of it (especially the stuff more directly associated with FTX) is well-deserved. There are some other loud critics who seem to be motivated by personal vendettas and/or seem to fundamentally object to the movement's core aims and values, but rather than tackling those head-on, seem to be simply throwing everything to see what sticks, no matter how flimsy.
None of that excuses dismissal of the concerning patterns of abuse you've raised, but I think it explains some of the defensiveness around here right now.
When I pit depopulation against causes that capture the popular imagination and that take up the most time in contemporary political discourse, I think depopulation scores pretty high as a cause and I am glad it is getting more attention.
When I pit it against causes that the EA movement spends the most time on, including AI x-risk, farmed animal welfare, perhaps even wild animal welfare, and global poverty, I find it hard to justify giving it my considered attention because of the outsized importance of the other problems.
AI x-risk is im...