All of ben.smith's Comments + Replies

ben.smith
50% disagree

Depopulation is Bad

When I pit depopulation against causes that capture the popular imagination and that take up the most time in contemporary political discourse, I think depopulation scores pretty high as a cause and I am glad it is getting more attention.

When I pit it against causes that the EA movement spends the most time on, including AI x-risk, farmed animal welfare, perhaps even wild animal welfare, and global poverty, I find it hard to justify giving it my considered attention because of the outsized importance of the other problems.

AI x-risk is im... (read more)

Can you say something about what N-D lasers are and why they present such a strong threat? A Google search for "N-D laser" just turns up neodymium lasers, and it isn't clear why those would be as threatening as you suggest. Even in the worst case, where someone builds a probe with a very powerful fusion energy source that can fire a laser powerful enough to kill people, you could probably also build a laser or defense system to strike and destroy the probe before it caused existential loss.

1
JordanStone
I'm borrowing the term "N-D lasers" from Charlie Stross in this post: https://www.antipope.org/charlie/blog-static/2015/04/on-the-great-filter-existentia.html  N-dimensional is just referring to an arbitrarily powerful laser, potentially beyond our current understanding of physics. These lasers might be so powerful they could travel vast distances through interstellar space and destroy a star system. They would travel at the speed of light so it'd be impossible to see them coming. Kurzgesagt made a great video on this.
ben.smith
50% disagree

Interstellar travel will probably doom the long-term future

My intuition is that most of the galactic existential risks listed are highly unlikely, and it is possible that the likely ones (self-replicating machines and ASI) may be defense-dominant. An advanced civilization capable of creating self-replicating machines to destroy life in other systems could well be capable of building defense systems against a threat like that.

You could substantially increase your weekly active users, converting monthly active users (MAU) into weekly and even daily users, and increasing MAU as well, by using push notifications to inform users of replies to their posts and comments and other events that are currently only sent as in-forum notifications to most users. Many, many times, I have posted on the forum, sent a comment or reply, and only weeks later seen that there was a response. On the other hand, I will get an email from twitter or bluesky if one person likes my post, and I immediately... (read more)

2
Sarah Cheng 🔸
Thanks for the suggestions!
  • I agree that emailing users more often will probably get them to return to the site more often.
  • I'm less confident than you [sound] that this will have a major effect.
  • Since our team has been focused on software/product for a while and hasn't noticeably increased MAUs, I am skeptical that further work in this space will be the magic bullet. For example, we made significant improvements in site speed and didn't see metrics improve as much as we expected.
  • Our team has been more willing to email users recently (for example, about Forum events) and I want to be careful about going too far and annoying users/causing unsubscribes.
  • Honestly, I'm not totally sure why basically none of the default notifications include an email, which makes me somewhat nervous to significantly change this. My guess is that you are a bit unusual in finding lots of email notifications fun, and probably more people would find that overwhelming or annoying.
  • That said, we do plan to test out making our default notification settings more in line with other sites (for example, making karma notifications realtime by default instead of batched daily) and sending a delayed email to new users explaining how they can customize their notification settings.
  • We'll certainly consider changing other notification default settings, but again I want to be careful with this, not just because some people would dislike it, but also because ultimately our goal is not to increase usage. I want people to have a healthy relationship with the Forum, and only use it to the extent that they think is worthwhile.
  • I feel like changing the notification settings for existing users is probably crossing a line.

Fair enough.

My central expectation is that the value of one more human life created is roughly even with the amount of nonhuman suffering that life would cause (based on https://forum.effectivealtruism.org/posts/eomJTLnuhHAJ2KcjW/comparison-between-the-hedonic-utility-of-human-life-and#Poultry_living_time_per_capita). I'm also willing to assume cultured meat is not too long away. Then the childhood delay till contribution only makes a fractional difference and I tip very slightly back into the pro-natalist camp, while still accepting that the meat ea... (read more)

I think no one here is trying to use pronatalism to improve animal welfare. The crux for me is more whether pronatalism is net-negative, neutral, or net-positive, and its marginal impact on animal welfare seems to matter in that case. But the total impact of animal suffering dwarfs whatever positive or negative impact pronatalism might have.

I think Richard is right about the general case. It was a bit unintuitive to me until I ran the numbers in a spreadsheet, which you can see here:

https://docs.google.com/spreadsheets/d/1pRW3WinG1gzJM3RER2Q4Tl5kscJRESuG8qupHGN1Wnw/edit?usp=drivesdk

Basically, yes, assume that meat eating increases with the size of the human population. But the scientific effort towards ending the need to eat meat also increases with the size of the human population, assuming marginal extra people are equally likely to go into researching the problem as the average person. Unde... (read more)
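To see why that kind of model can come out as a wash, here is a toy sketch (my own illustrative parameters, not the spreadsheet's): each marginal person adds meat-eating years until cultured meat arrives, but also contributes expected research effort that pulls the arrival date forward for everyone.

```python
# Toy version of the model (illustrative parameters, not the spreadsheet's).
YEARS_TO_CULTURED_MEAT = 30.0     # assumed baseline arrival time
SUFFERING_PER_PERSON_YEAR = 1.0   # suffering units caused per person-year
SPEEDUP_PER_PERSON_YEARS = 30.0   # population-wide person-years of arrival
                                  # brought forward per marginal person

def net_suffering(extra_people: float, population: float) -> float:
    """Net change in animal suffering from adding `extra_people`."""
    # Cost: the extra people eat meat until cultured meat arrives.
    cost = extra_people * YEARS_TO_CULTURED_MEAT * SUFFERING_PER_PERSON_YEAR
    # Benefit: arrival is pulled forward, sparing the whole population.
    speedup_years = extra_people * SPEEDUP_PER_PERSON_YEARS / population
    benefit = speedup_years * population * SUFFERING_PER_PERSON_YEAR
    return cost - benefit

# With the speedup set equal to the arrival time, the two effects cancel.
print(abs(net_suffering(1, 8e9)) < 1e-6)  # True
```

The exact cancellation holds only under the assumed identity between the two rates; delaying the start of a person's research contribution (while their meat eating starts at birth) tips the balance negative.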

5
Erich_Grunewald 🔸
Yeah, but as you point out below, that simple model makes some unrealistic assumptions (e.g., that a solution will definitely be found that fully eliminates farmed animal suffering, and that a person starts contributing, in expectation, to solving meat eating at age 0). So it still seems to me that a better argument is needed to shift the prior.

Right--in that simple model, each extra marginal average person decreases the time taken to invent cultured meat at the same rate as they contribute to the problem, and there's an exact identity between those rates. But there are complicating factors that I think work against assuring us there's no meat-eater problem:

  • An extra person starts eating animals from a very young age, but won't start contributing to solving the meat-eater problem until they're intellectually developed enough to make a contribution (21 years to finish an undergraduate degree, 25-30 to get a PhD).
  • Th
... (read more)

Ok, I missed the citation to your source initially because the citation wasn't in your comment when you first posted it. The source does say insect abundance is lower in land converted from natural space to agricultural use. So then what I said about increased agricultural use supports your point rather than mine.

Great point! Though I think it's unclear what the impact of more humans on wild terrestrial invertebrate populations is. Developed countries have mostly stopped clearing land for human living spaces. I could imagine that a higher human population could induce demand for agriculture and increased trash output, which could increase terrestrial invertebrate populations.

3
Michael St Jules 🔸
So this would be more food/net primary productivity available for terrestrial invertebrates to eat, and agriculture would have to increase net primary productivity overall (EDIT: or transform it into a more useful form for invertebrates), right?

Reviving this old thread to discuss the animal welfare objection to pro-natalism that I think is changing my mind on pro-natalism. I'm a regular listener to Simone and Malcolm Collins's podcast. Since maybe 2021 I've gone through an arc: first fairly neutral, then strongly pro-natalist, then pro-natalist but not rating it as an effective cause area, and now entering a fourth phase where I might reject pro-natalism altogether.

I value animal welfare and at least on an intellectual level I care equally about their welfare and humanity's. For every... (read more)

You can just widen the variance in your prior until it is appropriately imprecise, so that the variance on your prior reflects the amount of uncertainty you have.

For instance, perhaps a particular disagreement comes down to the increase in p(doom) deriving from an extra 0.1 C in global warming.

We might have no idea whether 0.1 C of warming causes an increase of 0.1% or 0.01% in p(doom), but be confident it isn't 10% or more.

You could model the distribution of your uncertainty with, say, a beta distribution.

You might wonder, why b=... (read more)
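The beta-distribution idea can be made concrete with a short sketch. The parameters in the original comment were cut off, so the values below are my own hypothetical choice, picked so the central estimate of the p(doom) increase is ~0.1% while a 10%+ increase is effectively ruled out:

```python
# Model the increase in p(doom) from +0.1 C of warming as Beta(a, b).
# a and b are illustrative, not the comment's (truncated) values.
a, b = 1, 999

mean = a / (a + b)        # central estimate of the increase: 0.001 (0.1%)
tail = (1 - 0.10) ** b    # P(increase >= 10%); closed form when a = 1

print(round(mean, 4))     # 0.001
print(tail < 1e-40)       # True: a 10%+ increase is effectively ruled out
```

Widening the uncertainty then just means choosing smaller a and b (higher variance) while keeping the mean where your intuition puts it.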

This leaves me deeply confused, because I would have thought a single (if complicated) probability function is better than a set of functions because a set of functions doesn't (by default) include a weighting amongst the set.

It seems to me that you need to weight the probability functions in your set according to some intuitive measure of their plausibility, according to your own priors.

If you do that, then you can combine them into a joint probability distribution, and then make a decision based on what that distribution says about the outcomes. You could... (read more)
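A minimal sketch of that combination step (my own illustration; the weights and per-model values are hypothetical): each candidate probability function gets an intuitive plausibility weight, and the weighted mixture yields a single estimate you can act on.

```python
# Hypothetical set of candidate models, each giving a different p(doom),
# weighted by how plausible each model intuitively seems.
candidates = [
    (0.5, 0.01),   # (plausibility weight, p(doom) under that model)
    (0.3, 0.001),
    (0.2, 0.10),
]

# Combine the set into one joint estimate: a plausibility-weighted mixture.
total_weight = sum(w for w, _ in candidates)
p_doom = sum(w * p for w, p in candidates) / total_weight

print(round(p_doom, 4))  # 0.0253
```

The imprecise-probability objection in the reply below this comment is precisely that the weights in such a mixture can be an arbitrary choice.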

3
Anthony DiGiovanni
The concern motivating the use of imprecise probabilities is that you don't always have a unique prior you're justified in using to compare the plausibility of these distributions. In some cases you'll find that any choice of unique prior, or unique higher-order distribution for aggregating priors, involves an arbitrary choice. (E.g., arbitrary weights assigned to conflicting intuitions about plausibility.)

As Yann LeCun recently said, “If you do research and don't publish, it's not science.”

With all due respect to Yann LeCun, in my view he is as wrong here as he is dismissive about the risks from AGI.

Publishing is not an intrinsic and definitional part of science. Peer-reviewed publishing definitely isn't--it has only been the default for somewhere between several decades and half a century. It may not be the default in another half century.

If Trump still thinks AI is "maybe the most dangerous thing" I would be wary of giving up on chances to leverage his support on AI safety.

In 2022, individual EAs stood for elected positions within each major party. I understand there are Horizon fellows with both Democrat and Republican affiliations.

If EAs can engage with both parties in those ways, added to the fact that the presumptive Republican nominee may be sympathetic, I wouldn't give up on Republican support for AI safety yet.

5
NickLaing
100%. I'd like to see the stats on what politicians say they will repeal pre-election versus what they actually end up repealing once they are in power. Here in New Zealand, at least, I can think of multiple anecdotal examples where there is a lot of bluster before an election but then the law either doesn't get changed or is only modified in a minor way. Perhaps Obamacare might be one example of this in America? I think Trump had a decent amount of rhetoric saying he would repeal it, then didn't do anything to repeal it when he reached power.

Ha, I see. Your advice might be right, but I don't think "consciousness is quantum". I wonder if you could say what you mean by that?

Of course I've heard that before. In the past, when I have heard people say that, it's been advocates of free-will theories of consciousness trying to propose a physical basis for consciousness that preserves the indeterminacy of decision-making. Some objections I have to this view:

  1. Most importantly, as I pointed out here: consciousness is roughly orthogonal to intelligence. So your view shouldn't give you reassurance about AGI.
... (read more)

Elliot has a phenomenally magnetic personality and is consistently positive and uplifting. He's generally a great person to be around. His emotional stamina gives him the ability to uplift the people around him and I think he is a big asset to this community.

1
Elliot Billingsley 🔸
awww shucks

TLDR: I'm looking for researcher roles in AI Alignment, ideally translating technical findings into actionable policy research


Skills & background: I have been a local EA community builder since 2019. I have a PhD in social psychology and wrote my dissertation on social/motivational neuroscience. I also have a BS in computer science and spent two years in industry as a data scientist building predictive models. I'm an experienced data scientist, social scientist, and human behavioral scientist.

Location/remote: Currently located on the West Coast of the... (read more)

1
Elliot Billingsley 🔸
I can attest that Ben is an awesome community builder and communicator about EA!

But I would guess that pleasure and unpleasantness aren't always because of the conscious sensations, but these can have the same unconscious perceptions as a common cause.

This sounds right. My claim is that there are all sorts of unconscious perceptions and valenced processing going on in the brain, but all of that is only experienced consciously once there's a certain kind of recurrent cortical processing of the signal, which can loosely be described as "sensation". I mean that very loosely; it can even include memories of physical events or semantic though... (read more)

 

I would say thinking of something funny is often pleasurable. Similarly, thinking of something sad can be unpleasant. And this thinking can just be inner speech (rather than visual imagination)....Also, people can just be in good or bad moods, which could be pleasant and unpleasant, respectively, but not really consistently simultaneous with any particular sensations.

 

I think most of those things actually can be reduced to sensations; moods can't be, but then, are moods consciously experienced, or do they only predispose us to interpret consciou... (read more)

4
Michael St Jules 🔸
What do you mean by "reduced to"? It's tricky to avoid confounding here, because we're constantly aware of sensations and our experiences of pleasure and unpleasantness seem typically associated with sensations. But I would guess that pleasure and unpleasantness aren't always because of the conscious sensations, but these can have the same unconscious perceptions as a common cause. Apparently even conscious physical pain affect (unpleasantness) can occur without pain sensation, but this is not normal and recorded cases seem to be the result of brain damage (Ploner et al., 1999; Uhelski et al., 2012).

I'm not sure, and that's a great question! Seems pretty likely these are just dispositions. I was also thinking of separation anxiety as an unpleasant experience with no specific sensations in other animals (assuming they can't imagine their parents when they are away), but this could just be more like a mood that disposes them to interpret their perceptions or sensations as more negative/threatening.

Thanks for pushing on this. There are multiple standards at which I could answer this, and it would depend on what I (or we) want "conscious" to mean. With relatively high standards for consciousness like Humphrey seems to be using, or something else at least as strict as having a robust global workspace (with some standard executive functions, like working memory or voluntary attention control), I'd assign maybe 70%-95% probability to the in-principle possibility based on introspection, studies of pain affect without pain sensation, and imagining direct stimulation of pleasure systems, or with drugs or meditation. However, I'd be very surprised (<15%) if there's any species with conscious pleasure or unpleasantness without the species generally also having conscious sensations. It doesn't seem useful for an animal to be conscious of pleasure or unpleasantness without also being conscious of their causes, which seems to require conscious sensation. Plus, whatever me

To give a concrete example, my infant daughter can spend hours bashing her toy keyboard with 5 keys. It makes a sound every time. She knows she isn't getting any food, sleep, or any other primary reinforcer to do this. But she gets the sensations of seeing the keys light up and a cheerful voice sounding from the keyboard's speaker each time she hits it. I suppose the primary reinforcer just is the cheery voice and the keys lighting up (she seems to be drawn to light--light bulbs, screens, etc). 

During this activity, she's playing, but also learning ab... (read more)

Yes, I see that's a reasonable thing not to be convinced about, and I'm not sure I can do justice to the full argument here. I don't have the book with me, so anything else I tell you is pulling from memory and strongly prone to error. Elsewhere in this comments section I said:

When you have sensations, play can teach you a lot about your own sensory processes, and you can subsequently use what you've learned to leverage your visual sensations to accomplish objectives. It seems odd that an organism that can learn (as almost all can) would evolve visual sensations b

... (read more)

To me "conscious pleasure" without conscious sensation almost sounds like "the sound of one hand clapping". Can you have pure joy unconnected to a particular sensation? Maybe, but I'm sceptical. First, the closest I can imagine is calm joyful moments during meditation, or drug-induced euphoria, but in both cases I think it's at least plausible there are associated sensations. Second, to me, even the purest moments of simple joy seem to be sensations in themselves, and I don't know if there's any conscious experience without sensations.

Humphrey theorises th... (read more)

3
Michael St Jules 🔸
Thanks, this is helpful!

I would say thinking of something funny is often pleasurable. Similarly, thinking of something sad can be unpleasant. And this thinking can just be inner speech (rather than visual imagination). Inner speech is of course sensory, but it's not the sensations of the inner speech but rather your high-level interpretation of the meaning that causes the pleasure. (There might still be other subtle sensations associated with pleasure, e.g. from changes to your heart rate, body temperature, facial muscles, or even simulated smiling.) Also, people can just be in good or bad moods, which could be pleasant and unpleasant, respectively, but not really consistently simultaneous with any particular sensations.

Maybe some other potential capacities that seem widespread among mammals and birds (and not really investigated much in others?) could make use of conscious sensation (and conscious pleasure and unpleasantness):
  1. episodic(-like) memory (although it's not clear this is consciously experienced in other animals)
  2. working memory
  3. voluntary attention control
  4. short-term planning (which benefits from the above)

FWIW, mammals seem able to discriminate anxiety-like states from other states.[1] I don't think they are motivated to explore things they find unpleasant or aversive, or unpleasantness or aversion themselves. Rather, it just happens sometimes when they're engaging in the things they are motivated to do for other reasons.

Ya, this seems plausible to me. But this also seems like the thing that's more morally important to look into directly. Maybe frogs' vision is blindsight, their touch and hearing are unconscious, etc., so they aren't motivated to engage in sensory play, but they might still benefit from conscious unpleasantness and aversion for more sophisticated strategies to avoid them. And they might still benefit from conscious pleasure for more sophisticated strategies to pursue pleasure. The conscious pleasure, u

Humphrey's argument that fish aren't conscious doesn't rest only on their not having the requisite brain structures, because as you say, it is possible consciousness could have developed in their own structures in ways that are simply distinct from our own. But then, Humphrey would ask, if they have visual sensations, why are they uninterested in play? When you have sensations, play can teach you a lot about your own sensory processes, and you can subsequently use what you've learned to leverage your visual sensations to accomplish objectives. It seems odd that an organ... (read more)

3
Michael St Jules 🔸
Does he spell out more why it's useful to learn more about your own sensations? Also, couldn't this apply to any perception that feeds into executive functions/cognitive control, conscious or not?

What if sensory play is just very species-specific? Do the juveniles of every mammal and bird species play? Would he think the species without play aren't conscious, even if they have basically the same sensory neural structures? A motivation to engage in (sensory) play has resource costs. Playing uses energy and time, and it takes energy to build the structures responsible for the motivation to play. And the motivation could be risky without a safe environment, e.g. away from predators or protected by parents and with enough food. Fish larvae don't seem to get such safe environments.

I guess a thesis he's stated elsewhere is that it's the function of consciousness to matter. This is the adaptive belief it causes. So, conscious sensations should just be interesting to animals with them, and maybe without that interest, there's no benefit to conscious sensation. This doesn't seem crazy to me, and it seems pretty plausible with my sympathies to illusionism. Consciousness illusions should be adaptive in some way.

But this only tells me about conscious sensation. Animals without conscious sensation could still have conscious pleasure, unpleasantness and desires, which realize the mattering and interest. And animals don't engage in play to explore unpleasantness and aversive desire. So what are the benefits of unpleasantness and aversive desire being conscious as opposed to unconscious? And could there be similar benefits for conscious sensation? If there are, then sensory play may not be (evolutionarily) necessary for consciousness in general or conscious sensation in particular after all.
1
kewlcats
I specialize in AI, and I respectfully disagree. I think there's much more low-hanging fruit available when studying consciousness. Interpretability research is a thriving subfield best left to PhD students and I don't really think that bloggers add much value here.  I personally am also not concerned about AGI because I think consciousness is quantum. 

I tend to think that questions about which organisms or systems are conscious mostly depend on identifying the physical correlates of consciousness and understanding how they work as a system, and that questions about panpsychism, illusionism, eliminativism, or even Chalmers's Hard Problem don't bear on this question very much. I think there's probably still a place for that philosophical debate because (1) there might be implications about where to look for the physical systems and (2) as I said to Michael earlier, illusionism might change our perspective ... (read more)

say that pain is bad (even if it is not phenomenal) because it constitutively includes the frustration of a desire, or the having of a certain negative attitude of dislike

I'm curious how, excluding phenomenal definitions, he defines "frustration of a desire" or "negative attitude of dislike", because I wonder whether these would include extremely simple frustrations, like preventing a computer-generated character in a computer game from reaching its goal. We could program an algorithm to try to solve for a desire ("navigate through a maze t... (read more)

4
Michael St Jules 🔸
I think illusionists haven't worked out the precise details, and that's more the domain of cognitive neuroscience. I think most illusionists take a gradualist approach,[1] and would say it can be more or less the case that a system experiences states worth describing like "frustration of a desire" or "negative attitude of a dislike". And we can assign more moral weight the more true it seems.[2]

We can ask about:
  1. how the states affect them in lowish-order ways, e.g. negative valence changes our motivations (motivational anhedonia), biases our interpretations of stimuli and attention, and has various physiological effects that we experience (or at least the specific negative emotional states do; they may differ by emotional state),
  2. what kinds of beliefs they have about these states (or the objects of the states, e.g. the things they desire), to what extent they're worth describing as beliefs, and the effects of these beliefs,
  3. how else they're aware of these states and in what relation to other concepts (e.g. a self-narrative), to what extent that's worth describing as (that type of) awareness, and the effects of this awareness.

[1] Tomasik (2014-2017, various other writings here), Muehlhauser, 2017 (sections 2.3.2 and 6.7), Frankish (2023, 51:00-1:02:25), Dennett (Rothman, 2017; 2018, p. 168-169; 2019; 2021, 1:16:30-1:18:00), Dung (2022), and Wilterson and Graziano, 2021.
[2] This is separate from their intensity or strength.

Not absolutely sure, I'm afraid. I lent my copy of the book out to a colleague so I can't check.

Humphrey mentioned illusionism (page 80, according to Google Books) but iirc he doesn't actually say his view is an illusionist one.

Personally I can't stand the label "illusionism" because to me the label suggests we falsely believe we have qualia, and actually have no such thing at all! But your definition is maybe much more mundane--there, the illusion is merely that consciousness is mysterious or important or matters. I wish the literature could use labels that are m... (read more)

3
Michael St Jules 🔸
I think this is technically accurate, but illusionists don't deny the existence of consciousness or claim that consciousness is an illusion; they deny the existence of phenomenal consciousness and qualia as typically characterized,[1] and claim their appearances are illusions. Even Frankish, an illusionist, uses "what-it-is-likeness" in describing consciousness (e.g. "Why We Can Know What It's Like To Be a Bat and Bats Can't"), but thinks that should be formalized and understood in non-phenomenal (and instead physical-functional) terms, not as standard qualia. The problem is that (classic) qualia and phenomenality have become understood as synonymous with consciousness, so denying them sounds like denying consciousness, which seems crazy.

Kammerer, 2019 might be of interest. On accounting for the badness of pain, he writes: […] This approach is also roughly what I'd go with. That being said, I'm a moral antirealist, and I think you can't actually ground value stance-independently.

Makes sense.

[1] "Classic qualia: Introspectable qualitative properties of experience that are intrinsic, ineffable, and subjective." (Frankish (video)) I think this is basically the standard definition of 'qualia', but Frankish adds 'classic' to distinguish it from Nagel's 'what-it-is-likeness'.

Thanks Michael. For readers who are confused by my post but still want to know more, consider just reading (2), which is a very good précis by Nick Humphrey of his book which I tried to summarize. It might be better for readers, rather than reading my essay, to just read that. 

Actually, I have to correct my earlier reply. Iirc the argument is that all conscious animals engage in physical play, not necessarily that all playful animals are conscious. On the other hand, Humphrey does say that all animals engaging in pure sensation-seeking play are conscious, so that's probably the sort of play he'd need to see to bring him around on octopuses.

Humphrey spent a lot of time saying that authors like Peter Godfrey-Smith (whose book on octopus sociality and consciousness I have read, and also recommend) are wrong or not particularly serious when they argue that octopus behavior is play, because there are more mundane explanations for play-like behavior. I can't recall too much detail here because I no longer have Humphrey's book in my possession. In any case I think if you convinced him octopuses do play he would probably change his mind on octopuses without needing to modify any aspects of the overall theory. He'd just need to concede that the way consciousness developed in warm blooded creatures is not the only way it has developed in evolutionary history.

No, the author is ultimately unclear on why qualia in itself is useful, but by reasoning about the case studies I listed, his argument that qualia is in fact related to recursive internal feedback loops is ultimately a bit stronger than just "these things all feel like the same things so they must be related".

Humphrey first argues through his case studies that activity in the neocortex seems to generate conscious experience, while activity in the midbrain does not. Further, midbrain activity is sophisticated and can do a lot of visual and other perceptual pro... (read more)

I've tried to condense a book-length presentation into a 10 minute read and I probably have made some bad choices about which parts to leave out.

It's not that sensory play is necessary for producing sentience. The claim is that any animal that is sentient would be motivated to play. There might be other motivations for play besides sentience, but all sentient creatures (so the argument goes) would want to play in order to explore and learn about the properties of their own sensory world.

For the limbless species you mentioned, if we imagine a radical scen... (read more)

4
Bella
That makes sense — I appreciate you doing that work & making calls about what to include; I bet there's a lot I'm missing!!

Ah, I wrote & meant 'a necessary condition for' — I hadn't misunderstood the argument in the way you're worried about in your second paragraph (but perhaps a useful clarification for anyone reading!)

My problem is I don't buy that 'any animal that is sentient would be motivated to play' — and ultimately I think the additional explanation you've provided here, about shared ancestry and neurophysiology, is interesting & relevant to think about re: which if any animals are sentient, but I think it just boils down to: This argument, while IMO important/pretty compelling as a reason to start off with some moderate credence on animal sentience, doesn't do that much, and certainly couldn't, on its own, convince me of any necessary conditions for sentience — certainly not sensory play. It also doesn't do anything to convince me that non-bird non-mammals are sufficiently different (in terms of shared ancestry and neurophysiology) from humans, such that we should think they're not sentient.[fn]

[fn] I'm unsure from your summary if Humphrey means to claim this or not, sorry!

Early (and peak) Quakers went down some weird ineffective paths. It's cool that they were into nonviolence and class equality, but they were also really into renaming the days of the week and months of the year to avoid pagan names.

This sounds like hits-based cause selection. The median early Quaker cause area wasn't particularly effective, but their best cause area was probably worth all of the wasted time in all of the others.

I am visiting a Quaker church for the first time ever tomorrow. I've been out of religious community for 15 years or so and I'd like to explore one that is compatible with my current views.

I'm trying to think about what "EA but more religious" might look like. Could you form a religious community holding weekly assemblies to celebrate our aspirations to fill the lightcone with happy person moments? I think that is a profoundly spiritual and emotional activity and I think we can do it.

I'll post a longer post to this effect soon.

Linkpost: Sheelah Kolhatkar at The New Yorker writes "Inside Sam Bankman-Fried’s Family Bubble" https://www.newyorker.com/magazine/2023/10/02/inside-sam-bankman-frieds-family-bubble

I conditionally disagree with the point about work trials. I think work trials are a pretty positive innovation that EA companies in particular use, enabling them to hire for a better fit than they could without work trials. This is good in the long run both for potential employees and for the employer.

This is conditional on 

  • work trials being paid at a reasonable rate, where "reasonable" is probably within ~20% of the expected compensation paid out by the job, on a pro rata hourly rate.
  • the trial probably being 40 hours or less

I can anticipate some reasonable disa... (read more)

4
keller_scholl 🔸
I think that distinguishing between 1-8 hours (preferably paid), up to 40 hours, and 1-6 months is very important here. I am happiest about the shortest ones, particularly for people who have to leave a job (part of why I think that OP is talking about the latter sort).

JJ--thanks for all your words of support in the last few years. I appreciate your attitude, care, and your hard work. I'm sorry to hear about this. Hope you are well!

It seems like a safer bet that AI will have some kind of effect on lightening the labor load than that it will solve either of those particular problems.

I've spent hours going over your arguments and this is a real crux for me. AI is likely to lessen the need for human workers, at least for maintaining our existing levels of wealth.

I've stumbled here after getting more interested in the object-level debate around pronatalism. I am glad you posted this because, in the abstract, I think it's worthwhile to point out where someone may not be engaging in good faith within our community.

Having said that, I wish you had framed the Collins' actions with a little more good faith yourself. I do not consider that one quoted tweet to be evidence of an "opportunistic power grab". I think it's probably a bit unhealthy to see our movement in terms of competing factions, and to seek wins for one'... (read more)

Have you looked at the fertility rates underlying the UN projections? They project fertility rates across China, Japan, Europe, and the United States to arrest their yearly decline and begin slowly climbing back to somewhere in the 1.5 to 1.6 range.
 

That seems way too high, because it assumes not just that current trends stop but that they reverse, moving in the opposite direction to what has been observed. Even their "low" scenario has fertility rebounding from a low in ~2030.

This is despite all those countries still having a way to go before they get to the low ... (read more)

I enjoyed this post. I think it is worth thinking about whether the problem is unsolvable! One takeaway I had from Tegmark's Life 3.0 was that we will almost certainly not get exactly what we want from AGI. It seems intuitive that any possible specification will have downsides, including the specification to not build AGI at all.

But asking for a perfect utopia seems a high bar for "Alignment"; on the other hand, "just avoid literal human extinction" would be far too low a bar and include the possibility for all sorts of dystopias.

So I think it's... (read more)

3
Remmelt
Glad to read your thoughts, Ben. You're right about this:

  • Even if long-term AGI safety was possible, you would still have to deal with limits on modelling, and consistently acting on, preferences expressed by humans from their (perceived) context. https://twitter.com/RemmeltE/status/1620762170819764229
  • And not consistently represent the preferences of malevolent, parasitic or short-term human actors who want to misuse/co-opt the system through any attack vectors they can find.
  • And deal with the fact that the preferences of many possible future humans, and of non-human living beings, will not get automatically represented in a system that AI corporations by default have built to represent currently living humans only (preferably, those who pay).

A humble response to layers on layers of fundamental limits on the possibility of aligning AGI, even in principle, is to ask how we got so stuck on this project in the first place.

I have a very uninformed view on the relative alignment and capabilities contributions of things like RLHF. My intuition is that RLHF is positive for alignment, but I'm almost entirely uninformed on that. If anyone's written a summary on where they think these grey-area research areas lie, I'd be interested to read it. Scott's recent post was not a bad entry into the genre, but obviously just worked at a very high level.

Can you describe exactly how much you think the average person, or average AI researcher, is willing to sacrifice on a personal level for a small chance at saving humanity? Are they willing to halve their income for the next ten years? Reduce by 90%?

I think in a world where there was a top down societal effort to try to reduce alignment risk, you might see different behavior. In the current world, I think the "personal choice" framework really is how it works because (for better or worse) there is not (yet) strong moral or social values attached to capability vs safety work.

Here's something else I'd like to know on that survey:

  • what proportion of respondents wants to post on EAF or engage in other discussions they think are important for EA's goals, but don't, or will only do so anonymously because they are worried about the consequences?
  • how does that compare to the proportion who feel free to contribute without fear of retribution?
  • what proportion thinks they have been in fact passed over for an opportunity because they have criticized EA or said something else "politically incorrect" here?

Surveys of this type are often anonymous, because

  • while it is possible for people to make false responses, that doesn't happen very much, because it is time consuming and unethical, and there just aren't that many people out there who are simultaneously unethical, have lots of time on their hands, and want to manipulate our survey. Manipulated responses are generally more of a danger for short polls (e.g., "which political party would you vote for"), but less of an issue for surveys of 10 minutes or more.
  • there are means of probabilistically filtering false responses
... (read more)
8
Arturo Macias
In my view you underestimate the degree of intentionality and coordination of the offensive against EA.

This is a great idea. EA already runs an annual community survey, so it wouldn't be necessary to create a whole new survey to get this data -- just add some questions to the existing community survey. If they aren't already on there, it would be great to see them on the next survey.

I am now also very curious about what value the community gets from various kinds of experiences in EA spaces.

For example, I'm curious how most women would weigh being in a community that lets them access healthy professional networks free from the tensions of inappropriate* sexual/romantic advances against being in a community where they are able to find EA partners. (I am implying that there is a tradeoff here.)

I am also curious if the men in the community have an opposing view - if so, it might be important to think about how the existing sta... (read more)

Ben -- good idea. I think the crucial thing would be to phrase the questions about these issues as neutrally and factually as possible, to avoid response biases in either direction.

Ideally EA would ask just about actual first-hand experiences of the individual, rather than general perceptions, impressions based on rumors and media coverage, or second/third-hand reports.

1
Arturo Macias
If you do not have a census of EA, you cannot do this kind of survey. The EA Survey is done on a voluntary basis on the Forum, and false identities can be used to manipulate results. Any EA survey should be based on anonymous answers but verified identity.

EA has copped a lot of media criticism lately. Some of it (especially the stuff more directly associated with FTX) is well-deserved. There are some other loud critics who seem to be motivated by personal vendettas and/or seem to fundamentally object to the movement's core aims and values, but rather than tackling those head-on, seem to be simply throwing everything to see what sticks, no matter how flimsy.

None of that excuses dismissal of the concerning patterns of abuse you've raised, but I think it explains some of the defensiveness around here right now.

[comment deleted]