All of Geoffrey Miller's Comments + Replies

Yarrow - I'm curious which bits of what I wrote you found 'psychologically implausible'?

Beautiful and inspiring. Thanks for sharing this.

I hope more EAs think about turning abstract longtermist ideas into more emotionally compelling media!

mikbp: good question. 

Finding meaningful roles for ordinary folks ('mediocrities') is a big challenge for almost every human organization, movement, and subculture. It's not unique to EA -- although EA does tend to be quite elitist (which is reasonable, given that many of its core insights and values require a very high level of intelligence and openness to understand).

The usual strategy for finding roles for ordinary people in organizations is to create hierarchical structures in which the ordinary people are bossed around/influenced/deployed b... (read more)

2
Jason
6d
Would be bold to assume the leaders are "more capable" in hierarchical structures! Maybe it's more true in the private sector than in (say) government, though.
4
Joseph Pusey
7d
Which parts of EA would you say require "a very high level of intelligence...to understand"? :)

Counterpoints:

  1. Humans are about as good and virtuous as we could reasonably expect from a social primate that has evolved through natural selection, sexual selection, and social selection (I've written extensively on this in my 5 books).
  2. Human life has been getting better, consistently, for hundreds of years. See, e.g. Steven Pinker (2018) 'Enlightenment Now'.
  3. Factory farming would be ludicrously inefficient for the first several decades, at least, of any Moon or Mars colonies, so would simply not happen.

My more general worry is that this kind of narrative th... (read more)

3
BrianK
7d
Thanks for your comment.

  1. I don’t think this is a compelling argument. Being less immoral than the worst doesn’t lead me to conclude we should increase the immorality further. I do think it should lead us to have compassion in so far as humanity makes it very difficult not to be immoral — it’s an evolutionary problem.
  2. That’s true! But still very bad for many. And of course, I’m concerned about all sentient beings, not just humans — the math looks truly horrible when non-humans are included. I do credit humans for unintentionally reducing wild animal suffering by being so drawn to destroying the planet, but I expect the opposite will happen in space colonization situations (i.e. we will seed wildlife or create more digital minds, etc.)
  3. I’m a longtermist in this sense. I’m concerned about us torturing non-humans not just in the next several decades, but eons after. This could look like factory farming animals, seeding wild animals, creating digital minds, bringing pets with us, and so on. Is that transhumanism to the max? I need to learn more about those who endorse this philosophy—I imagine there is some diversity. Would the immorality in us be eradicated under the ideal circumstances, in their minds (s-risks and x-risks aside from AI acceleration)? Sounds like they are a different kind of utopian.

A brief meta-comment on critics of EAs, and how to react to them:

We're so used to interacting with each other in good faith, rationally and empirically, constructively and sympathetically, according to high ethical and epistemic standards, that we EAs have real trouble remembering some crucial facts of life:

  • Some people, including many prominent academics, are bad actors, vicious ideologues, and/or Machiavellian activists who do not share our world-view, and never will
  • Many people engaged in the public sphere are playing games of persuasion, influence, and manip
... (read more)

This seems to me to be a self-serving, Manichean, and psychologically implausible account of why people write criticisms of EA.

I think there's a huge difference in potential reach between a major TV series and a LessWrong post.

According to this summary from the Financial Times, as of March 27, '3 Body Problem' had received about 82 million view-hours, equivalent to about 10 million people worldwide watching the whole 8-part series. It was a top 10 Netflix series in over 90 countries.

Whereas a good LessWrong post might get 100 likes. 

We should be more scope-sensitive about public impact!
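To make that comparison concrete, here's a rough back-of-envelope sketch in Python. The view-hours figure and 8-hour runtime come from the FT summary above; the LessWrong readership number is just an assumed placeholder for illustration:

```python
# Rough back-of-envelope reach comparison (illustrative only).

view_hours = 82_000_000        # total Netflix view-hours, per the FT summary
series_length_hours = 8        # 8 episodes of roughly an hour each

viewer_equivalents = view_hours / series_length_hours
print(f"Full-series viewer-equivalents: ~{viewer_equivalents:,.0f}")    # ~10 million

# Assumed readership for a well-received LessWrong post (hypothetical figure).
lesswrong_readers = 5_000
print(f"Reach ratio: ~{viewer_equivalents / lesswrong_readers:,.0f}x")  # ~2,000x
```

Whatever readership number you plug in for a typical post, the gap is several orders of magnitude.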

2
MathiasKB
12d
I think I am misunderstanding the original question then? I mean if you ask: "what you all think about the series as an entry point for talking about some of these EA issues with friends, family, colleagues, and students" then the reach is not the 10 million people watching the show, it's the people you get a chance to speak to.

PS: Fun fact: after my coauthor Peter Todd (Indiana U.) and I read the '3 Body Problem' novel in 2015, we were invited to a conference on 'active Messaging to Extraterrestrial Intelligence' ('active METI') at the Arecibo radio telescope in Puerto Rico. Inspired by Liu Cixin's book, we gave a talk about the extreme risks of active METI, which we then wrote up as this journal paper, published in 2017:

PDF here

Journal link here

Title: The Evolutionary Psychology of Extraterrestrial Intelligence: Are There Universal Adaptations in Search, Aversion, and Signaling?

Ab... (read more)

'3 Body Problem' is a new 8-episode Netflix TV series that's extremely popular, highly rated (7.8/10 on IMDB), and based on the bestselling 2008 science fiction book by Chinese author Liu Cixin. 

It raises a lot of EA themes, e.g. extinction risk (for both humans & the San-Ti aliens), longtermism (planning 400 years ahead against alien invasion), utilitarianism (e.g. sacrificing a few innocents to save many), cross-species empathy (e.g. between humans & aliens), global governance to coordinate against threats (e.g. Thomas Wade, the UN, the Wallfacers), etc.

Curious what you all think about the series as an entry point for talking about some of these EA issues with friends, family, colleagues, and students?

4
Geoffrey Miller
17d
PS: Fun fact: after my coauthor Peter Todd (Indiana U.) and I read the '3 Body Problem' novel in 2015, we were invited to a conference on 'active Messaging to Extraterrestrial Intelligence' ('active METI') at the Arecibo radio telescope in Puerto Rico. Inspired by Liu Cixin's book, we gave a talk about the extreme risks of active METI, which we then wrote up as this journal paper, published in 2017:

PDF here

Journal link here

Title: The Evolutionary Psychology of Extraterrestrial Intelligence: Are There Universal Adaptations in Search, Aversion, and Signaling?

Abstract: To understand the possible forms of extraterrestrial intelligence (ETI), we need not only astrobiology theories about how life evolves given habitable planets, but also evolutionary psychology theories about how intelligence emerges given life. Wherever intelligent organisms evolve, they are likely to face similar behavioral challenges in their physical and social worlds. The cognitive mechanisms that arise to meet these challenges may then be copied, repurposed, and shaped by further evolutionary selection to deal with more abstract, higher-level cognitive tasks such as conceptual reasoning, symbolic communication, and technological innovation, while retaining traces of the earlier adaptations for solving physical and social problems. These traces of evolutionary pathways may be leveraged to gain insight into the likely cognitive processes of ETIs. We demonstrate such analysis in the domain of search strategies and show its application in the domains of emotional aversions and social/sexual signaling. Knowing the likely evolutionary pathways to intelligence will help us to better search for and process any alien signals from the search for ETIs (SETI) and to assess the likely benefits, costs, and risks of humans actively messaging ETIs (METI).
4
JWS
18d
I completely agree Geoffrey! I originally read Liu Cixin's series before I became involved in EA, and would highly recommend it to anyone who's reading this comment. I think the series very much touches on themes common in EA thought, such as existential risk, speciesism, and what it means to be moral.[1] I think what makes Cixin's work seem like it's got EA themes is that a lot of the series challenges how humanity views its place in the universe, and it challenges many assumptions about both what the universe is and our moral obligations to others in that universe, which is quite similar to how EA challenges 'common-sense' views of the world and moral obligation. 1. ^ (I also referenced it in this reply to Matthew Barnett)
2
wes R
18d
I didn't know about this. Now I think I have a new Netflix show to watch! Thanks! On the topic, I hear season 7, episode 5 of Young Sheldon is about a dangerous AI. Edit: I watched the episode; it's not.
3
Neil Warren
19d
The book in my opinion is better, and relies so much on vast realizations and plot twists that it's better to read it blind—before the series and before even the blurb at the back of the book! So for those who didn't know it was a book, here it is: https://www.amazon.fr/Three-Body-Problem-Cixin-Liu/dp/0765377063 
4
MathiasKB
19d
I haven't seen the series, but am currently halfway through the second book. I think it really depends on the person. The person I imagine watching Three-Body Problem, getting hooked, and subsequently pondering how it relates to the real world seems like someone who would also get hooked by just being sent a good LessWrong post? But sure, if someone mentioned to me that they watched and liked the series and they don't know about EA already, I think it could be a great way to start a conversation about EA and longtermism.

Well Leif Wenar seems to have written a hatchet job that's deliberately misleading about EA values, priorities, and culture. 

The usual anti-EA ideologues are celebrating about Wired magazine taking such a negative view of EA.

For example, leader of the 'effective accelerationist' movement 'Beff Jezos' (aka Guillaume Verdon) wrote this post on X, linking to the Wenar piece, saying simply 'It's over. We won'. Which is presumably a reference to EA people working on AI safety being a bunch of Luddite 'decels' who want to stop the glorious progress towards ... (read more)

David - this is a helpful and reasonable comment.

I suspect that many EAs tactically and temporarily suppressed their use of EA language after the FTX debacle, when they knew that EA had suffered a (hopefully transient) setback.

This may actually be quite analogous to the cyclical patterns of outreach and enthusiasm that we see in crypto investing itself. The post-FTX 2022-2023 bear market in crypto was reflected in a lot of 'crypto influencers' just not talking very much about crypto for a year or two, when investor sentiment was very low. Then, as the pric... (read more)

Nicholas - thanks for posting this helpful summary of these empirical studies.

I do find it somewhat sad and alarming that so many EAs seem to be delaying or avoiding having kids, out of fear that this will 'impair productivity'. 

Productivity-maxxing can be a false god - and this is something that's hard to understand until one becomes a parent.

Just as money sent to charities can vary 100x in terms of actual effectiveness, 'productivity' can vary hugely in terms of actual impact in the world. 

Lots of academic parents I know (including me) realized... (read more)

Jason - fair point. 

Except that all psychological traits are heritable, so offspring of smart, conscientious, virtuous EAs are likely to be somewhat smarter, more conscientious, and more virtuous than average offspring.

I think it's important for EA to avoid partisan political fights like this - they're not neglected cause areas, and they're often not tractable.

It's easy for the Left to portray the 'far right' as a 'threat to democracy', in the form of 'fascist authoritarians'.

It's also easy for the Right to portray the 'far left' as a 'threat to democracy' in the form of 'socialist authoritarians'.

The issue of immigration (e.g. as considered by AfD) is especially tricky and controversial, in terms of whether increased immigration into Western democracies of people w... (read more)

I think it's important that EA analysis not start with its bottom line already written. In some situations the most effective altruistic interventions (with a given set of resources) will have partisan political valence and we need to remain open to those possibilities; they're usually not particularly neglected or tractable but occasional high-leverage opportunities can arise. I'm very skeptical of Effektiv-Spenden's new fund because it arbitrarily limits its possible conclusions to such a narrow space, but limiting one's conclusions to exclude that space would be the same sort of mistake.

Agreed. Being able to identify effective interventions that support or protect democracy in certain contexts doesn't necessarily seem like a bad idea. 

The challenge with the AfD is that they seem to be the victims of behaviour that could be considered antidemocratic: lawmakers are considering banning the party, and the state has put the party under surveillance. This would be unconstitutional in many countries. I think there could be legitimate arguments that "protecting democracy" could sometimes involve defending groups like the AfD, as well as defe... (read more)

Kyle - I just completed the survey yesterday. I did find it very long and grueling. I worry that you might get lower quality data in the last 1/2 of the survey, due to participant fatigue and frustration.

My suggestion -- speaking as a psych professor who's run many surveys over the last three decades -- is to develop a shorter survey (no more than 25 minutes) that focuses on your key empirical questions, and try to get a good large sample for that. 

1
Kyle Fiore Law
3mo
Thank you, Geoffrey! I really appreciate your time and candid feedback. I will take this into careful consideration going forward. 

I just reposted your X/Twitter recruitment message, FWIW:

https://twitter.com/law_fiore/status/1706806416931987758 

Good luck! I might suggest doing a shorter follow-up survey in due course -- 90 minutes is a big time commitment for a $15 payment!

Johanna -  thanks very much for sharing this fascinating, important, and useful research! Hope lots of EAs pay attention to it.

Hayven - there's a huge, huge middle ground between reckless e/acc ASI accelerationism on the one hand, and stagnation on the other hand.

I can imagine a moratorium on further AGI research that still allows awesome progress on all kinds of wonderful technologies such as longevity, (local) space colonization, geoengineering, etc -- none of which require AGI. 

1
Hayven Frienby
3mo
We can certainly research those things, but using purely human efforts (no AI) progress will likely take many decades to see even modest gains. From a longtermist perspective that's not a problem of course, but it's a difficult thing to sell to someone not excited about living what is essentially a 20th century life so we can make progress long after they are gone. A ban on AI should come with a cultural shift toward a much less individualistic, less present-oriented value set.

Isaac -- good, persuasive post. 

I agree that p(doom) is rhetorically ineffective -- to normal people, it just looks weird, off-putting, pretentious, and depressing. Most folks out there have never taken a probability and statistics course, and don't know what p(X) means in general, much less p(doom). 

I also agree that p(doom) is way too ambiguous, in all the ways you mentioned, plus another crucial way: it isn't conditioned on anything we actually do about AI risk. Our p(doom) given an effective global AI regulation regime might be a lot lower th... (read more)

Caleb - thanks for this helpful introduction to Zach's talents, qualifications, and background -- very useful for those of us who don't know him!

I agree that EA organizations should try very hard to avoid entanglements with AI companies such as Anthropic - however well-intentioned they seem. We need to be able to raise genuine concerns about AI risks without feeling beholden to AI corporate interests.

Malo - bravo on this pivot in MIRI's strategy and priorities. Honestly it's what I've hoped MIRI would do for a while. It seems rational, timely, humble, and very useful! I'm excited about this.

I agree that we're very unlikely to solve 'technical alignment' challenges fast enough to keep AI safe, given the breakneck rate of progress in AI capabilities. If we can't speed up alignment work, we have to slow down capabilities work. 

I guess the big organizational challenge for MIRI will be whether its current staff, who may have been recruited largely for ... (read more)

Will - we seem to be many decades away from being able to do 'mind uploading' or serious levels of cognitive enhancement, but we're probably only a few years away from extremely dangerous AI. 

I don't think that betting on mind uploading or cognitive enhancement is a winning strategy, compared to pausing, heavily regulating, and morally stigmatizing AI development.

(Yes, given a few generations of iterated embryo selection for cognitive ability, we could probably breed much smarter people within a century or two. But they'd still run a million times slo... (read more)

0
Hayven Frienby
3mo
Agreed, but as I said earlier, acceptance seems to be the answer. We are limited, biological beings, who aren't capable of understanding everything about ourselves or the universe. We're animals. I understand this leads to anxiety and disquiet for a lot of people. Recognizing the danger of AI and the impossibility of transhumanism and mind uploading, I think the best possible path forward is to just accept our limited state, rationally stagnate our technology, and focus on social harmony and environmental protection as the way forward.  As for the despair this could cause to some, I'm not sure what the answer is. EA has taken a lot of its organizational structure and methods of moral encouragement from philosophies like Confucianism, religions, universities, etc. Maybe an EA-led philosophical research project into human ultimate hope (in the absence of techno-salvation) would be fruitful. 

Remmelt - I agree. I think EA funders have been way too naive in thinking that, if they just support the right sort of AI development, with due concern for 'alignment' issues, they could steer the AI industry away from catastrophe. 

In hindsight, this seems to have been a huge strategic blunder -- and the big mistake was under-estimating the corporate incentives and individual hubris that drives unsafe AI development despite any good intentions of funders and founders.

2
Remmelt
3mo
This is an incisive description, Geoff. I couldn't put it better. I'm confused what the two crosses are doing on your comment.  Maybe the people who disagreed can clarify.

Glad you mentioned 'moral licensing' -- which is something EAs really need to be aware of!

Chris - this is all quite reasonable.

However, one could dispute 'Premise 2: AGI has a reasonable chance of arriving in the next 30 or 40 years.'

Yes, without any organized resistance to the AI industry, the AI industry will develop AGI (if AGI is possible) -- probably fairly quickly.

But, if enough people accept Premise 5 (likely catastrophe) and Premise 6 (we can make a difference), then we can prevent AGI from arriving. 

In other words, the best way to make 'AI go well' may be to prevent AGI (or ASI) from happening at all. 

2
Chris Leong
4mo
Also, would be keen to hear if you think I should have restructured this argument in any other way?
4
Chris Leong
4mo
Good point. I added in “by default”.

Thanks for this provocative and timely post. 

I agree that EAs have been far too friendly to AI companies, too eager to get hired within these companies as internal AI safety experts, too willing to give money to support their in-house safety work, and too wary about upsetting AI leaders and developers. 

This has diluted our warnings about extinction risks from AI. I've noticed that on social media like X, ordinary folks get very confused about EA attitudes towards AI. If we really think AI is extraordinarily dangerous, why would we be working with... (read more)

Alix - Thanks for writing this. I think it is a serious issue in terms of spreading EA from being a mostly Anglosphere movement (in the UK, US, Australia, etc.) to becoming a global movement.

There seem to be about 400 million native English speakers in the world, plus around another 1.5 billion people who have English as their second language (e.g. many in India and China), with varying degrees of fluency. From my experience of teaching college classes in China, often people there have much higher English fluency in writing and reading than in speaking.

So, roug... (read more)

I think the concern about jargon is misplaced in this context. Jargon is learned by native and non-native speakers alike as they engage with the community: it's specifically the stuff that already knowing the language doesn't help you with, which means not knowing the language doesn't disadvantage you. That's not to say jargon doesn't have its own problems, but I think that someone who attempts to reduce jargon specifically as a way to reach non-native speakers better has probably misdirected their focus.

Jeff - thanks very much for sharing the link to that post. I encourage others to read it - it's fairly short. It nicely sets out some of the difficulties around anonymity, doxxing, accusations, counter-accusations, etc.

I can't offer any brilliant solutions to these issues, but I am glad to see that the risks of false or exaggerated allegations are getting some serious attention.

Will - thanks very much for sharing your views, and some of the discussion amongst the EA Forum moderators.

These are tricky issues, and I'm glad to see that they're getting some serious attention, in terms of the relative costs, benefits, and risks of different possible policies.

I'm also concerned about 'setting a precedent of first-mover advantage'. A blanket policy of first-mover (or first-accuser) anonymity would incentivize EAs to make lots of allegations before the people they're accusing could make counter-allegations. That seems likely to create massive problems, conflicts, toxicity, and schisms within EA. 

Victor - thanks for elaborating on your views, and developing this sort of 'career longtermist' thought experiment. I did it, and did take it seriously.

However.

I've known many, many academics, researchers, writers, etc. who have been 'cancelled' by online mobs that have made mountains out of molehills. In many cases, the reputations, careers, and prospects of the cancelled people are ruined. Which is, of course, the whole point of cancelling them -- to silence them, to ostracize them, and to keep them from having any public influence.

In some cases, the canc... (read more)

8
VictorW
4mo
Thanks for entertaining my thought experiment, and I'm glad, because I now better understand your perspective too, and I think I'm in full agreement with your response.

A shift of topic here (feel free to not engage if this doesn't interest you), to share some vague thoughts about how things could be different.

I think that posts which are structurally equivalent to a hit piece can be considered against the forum rules, either implicitly already or explicitly. Moderators could intervene before most of the damage is done. I think that policing this isn't as subjective as one might fear, and that certain criteria can be checked even without any assumptions about truthfulness or intentions. Maybe an LLM could work for flagging high-risk posts for moderators to review.

Another angle would be to try and shape discussion norms or attitudes. There might not be a reliable way to influence this space, but one could try, for example, by providing the right material that would better equip readers to have better online discussions in general, as well as recognize unhelpful/manipulative writing. It could become a popular staple, much like I think "Replacing Guilt" is very well regarded. Funnily enough, I have been collating a list of green/orange/red flags in online discussions for other educational reasons. "Attitudes" might be way too subjective/varied to shape, whereas I believe "good discussion norms" can be presented in a concrete way that isn't inflexibly limiting. NVC comes to mind as a concrete framework, and I am of the opinion that the original "sharing information" post can be considered violent communication.

Jeff -- actual 'whistleblowers' make true and important allegations that withstand scrutiny and fact-checking. I agree that legit whistleblowers need the protection of anonymity. 

But not all disgruntled ex-employees with a beef against their former bosses are whistleblowers in this sense. Many are pursuing their own retaliation strategies, often turning trivial or imagined slights into huge subjective moral outrages -- and often getting credulous friends, family, journalists, or activists to support their cause and amplify their narrative.

It's true th... (read more)

So, arguably, we have a case here of two disgruntled ex-employees retaliating against a former employer. Why should their retaliation be protected by anonymity?

Highlighting that is an important crux (and one on which I have mixed feelings). Not all allegations of incorrect conduct rise to the level of "whistleblowing." A whistleblower brings alleged misconduct on a matter of public importance to light. We grant lots of protections in furtherance of that public interest, not out of regard for the whistleblower's private interests.

Is this a garden-variety di... (read more)

9
Jeff Kaufman
4mo
I think that's too strong? For example, under my amateur understanding of MA law, I don't see anything about the anti-retaliation provisions being conditional on a complaint withstanding scrutiny and fact-checking. And if this were changed to allow employers to retaliate in cases where employees' claims were not sustained, then I think we'd see, as a chilling effect, a decrease in employees raising true claims.

Victor - this is total victim-blaming. Good people trying to hire good workers for their organizations can be exploited and ruined by bad employees, just as much as good employees can be exploited and ruined by bad employers. 

You said 'If making a bad hire doesn't get in the way of success and doing good, does it even make sense to fixate on it?'

Well, we've just seen an example of two very bad hires ('Alice' and 'Chloe') almost ruin an organization permanently. They very much got in the way of success and doing good. I would not wish their personaliti... (read more)

3
VictorW
4mo
What I think I'm hearing from you (and please correct me if I'm not hearing you) is that you feel conflicted by the thought that the efforts of good people with good intentions can so easily be undone, and that you wish there were some concrete ways to prevent this happening to organizations, both individually and systemically. I hear you on thinking about how things could work better as a system/process/community in this context. (My response won't go into this systems level, not because it's not important, but because I don't have anything useful to offer you right now.)

I acknowledge your two examples ("Alice and Chloe almost ruined an organization" and "keeping bad workers anonymous has negative consequences"). I'm not trying to dispute these or convince you that you're wrong. What I am trying to highlight is that there is a way to think about these that doesn't involve requiring us to never make small mistakes with big consequences. I'm talking about a mindset, which isn't a matter of right or wrong, but simply a mental model that one can choose to apply.

I'm asking you to stash away your being right and whatever perspective you think I hold for a moment and do a thought experiment for 60 seconds. At t=0, it looks like ex-employee A, with some influential help, managed to inspire significant online backlash against organization X led by well-intentioned employer Z. It could easily look like Z's project is done, their reputation is forever tarnished, their options have been severely constrained. Z might well feel that way themselves. Z is a person with good intentions, conviction, strong ambitions, interpersonal skills, and a good work ethic. Suppose that organization X got dismantled at t=1 year. Imagine Z's "default trajectory" extending into t=2 years. What is Z up to now? Do you think they still feel exactly the way they did at t=0? At t=10, is Z successful? Did the events of t=0 really ruin their potential at the time? At t=40, what might Z say reca

Timon - the whole point of EA was to get away from the kind of vacuous, feel-good empathy-signaling that animated most charitable giving before EA.

EA focuses on causes that have large scope, but that are tractable and neglected. These three criteria are the exact opposite of what one would focus on if one simply wanted to signal being 'warm' and 'empathic' -- which works best when focusing on specific identifiable lives (small scope), facing problems that are commonly talked about (not neglected), and that are intractable (so the charity can keep run... (read more)

4
Timon Renzelmann
4mo
Thanks Geoffrey for raising this point. I agree that emotional empathy as defined by Paul Bloom can lead to bias and poor moral judgement, and I also appreciate the usefulness of the rational EA ideas you describe. I don't want to throw them out the window and agree with Sam Harris when he says "Reason is nothing less than the guardian of love". I agree that it is important to focus on effectiveness when judging where to give your money.

I was trying to make a very different point. I was trying to make the point that we should not dismiss the caring part that might still be involved in well-intentioned but poorly executed interventions. And I have tried to make the case for being kind and not dismissing human qualities that do not appear to be efficient. I have tried to show how following these ideas too much, or in the wrong way, can lead to negative social consequences, and that it is important to keep a balance.

In the context of the less effective charities you describe, the problem I see is not warmth or caring, but bias and naivety. To care is to understand. To understand the cause of suffering and the best way to alleviate it.

I would also like to point out that while Paul Bloom makes a clear case for the problems with emotional empathy and moral judgement, at the end of the book he emphasises its value in social contexts. Also, I was not trying to argue for this kind of empathy, but basically talking about emotional maturity, compassion and kindness. I think you can make kindness impartial, so that it is consistent with moral values, but also so that other people feel that they are dealing with a human being, not a robot. I'm not advocating going back to being naive and prejudiced, but rather being careful not to exclude human traits like empathy in everyday social interactions just because they might lead to bias when thinking about charity. Wisdom requires emotional as well as rational maturity.

I agree with all this, and I also think the OP might be speaking to some experiences in EA you might not have had which could result in you talking past each other.

Well all three key figures at Nonlinear are also real people, and they got deanonymized by Ben Pace's highly critical post, which had the likely effect (unless challenged) of stopping Nonlinear from doing its work, and of stigmatizing its leaders.

So, I don't understand the double standard, where those subject to false allegations don't enjoy anonymity, and those making the false allegations do get to enjoy anonymity.

Chi
4mo

So, I don't understand the double standard, where those subject to false allegations don't enjoy anonymity, and those making the false allegations do get to enjoy anonymity.

I don't think all people in the replies were arguing that Ben's initial post was okay and deanonymizing Alice and or Chloe would be bad (which I think you would call a double standard, which I'm not commenting on right now). Some probably do but some probably think that Ben's initial post was bad and that deanonymizing Alice and or Chloe would also be bad and that we shouldn't try to correct one bad with another bad, which doesn't look like a double standard to me.

Lorenzo - yes, I'm complying with that request. 
 

I'm just puzzled about the apparent double standard where the first people to make allegations enjoy privacy & anonymity (even if their allegations seem to be largely false or exaggerated), but the people they're accusing don't enjoy the same privilege.

Writing in a personal capacity.

Hi Geoffrey, I think you raise a very reasonable point.

There’s some unfortunate timing at play here: 3/7 of the active mod team—Lizka, Toby, and JP—have been away at a CEA retreat for the past ~week, and have thus mostly been offline. In my view, we would have ideally issued a proper update by now on the earlier notice: “For the time being, please do not post personal information that would deanonymize Alice or Chloe.”

In lieu of that, I’ll instead publish one of my comments from the moderators’ Slack thread, along w... (read more)

Jason
4mo

I agree that the Forum's rules and norms on privacy protection are confused. A few observations:

(1) Suppose a universe in which the first post on this topic had been from Nonlinear, and had accused Alice and Chloe (by their real names) of a pattern of mendaciously spreading lies about Nonlinear. Would that post have been allowed to stay up? If yes, it is hard to come up with a principled reason why Alice and Chloe can't be named now.

If no, we would need to think about why this hypothetical post would have been disallowed. The best argument I cam... (read more)

PS For the people downvoting and disagree-voting on my comment here:

I raised some awkward questions, without offering any answers, conclusions, or recommendations.

Are you disagreeing that it's even legitimate to raise any issues about the ethics of 'whistleblower' anonymity in cases of potential false allegations?

I'd really like to understand what you're disagreeing about here.

I raised some awkward questions, without offering any answers, conclusions, or recommendations.

I don't feel like you raised this discussion with no preference for what the community decided. When I gave my answer, which many people seem to agree with, your response was to question whether that's REALLY what the EA community wants. I think it's a bit disingenuous to suggest that you're just asking a question when you clearly have a preference for how people answer!

3
Rafael Harth
4mo
I disagree-voted because your first paragraph praised the OP.

I think the questions you're raising are important. I got kind of triggered by the issue I pointed out (and the fact that it's something that has already been discussed in the comments of the other post), so I downvoted the comment overall. (Also, just because Chloe is currently anonymous doesn't mean it's risk-free to imply misleading and damaging things about her – anonymity can be fragile.)

There were many parts of your comment that I agree with. I agree that we probably shouldn't have a norm that guarantees anonymity unconditionally. (But the anonymity ... (read more)

Ivy - I really appreciate your long, thoughtful comment here. It's exactly the sort of discussion I was hoping to spark. 

I resonate with many of your conflicted feelings about these ethically complicated situations, given the many 'stakeholders' involved, and the many ways we can get our policies wrong.

6
Ivy Mazzola
4mo
Thanks for your kind comment :)

Lukas - I guess one disadvantage of pseudonyms like 'Alice' and 'Chloe' is that it's quite difficult for outsiders who don't know their real identities to distinguish between them very clearly -- especially if their stories get very intertwined.

If we can't attach real faces and names to the allegations, and we can't connect their pseudonyms to any other real-world information about them, such as LinkedIn profiles, web pages, EA Forum posts, etc., then it's much harder to remember who's who, and to assess their relative degrees of reliability or culpabili... (read more)

5
Kirsten
4mo
You're right about the effort involved, but when these are real people who you are discussing deanonymizing in order to try to stop them from getting jobs, you should make the effort.

Even if the whistleblowers seem to be making serial false allegations against former employers?

Does EA really want to be a community where people can make false allegations with total impunity and no accountability? 

Doesn't that incentivize false allegations?

7
Jason
4mo
There's a unilateralist's curse issue here -- if there are (say) 100 people who know the identities of Alice and Chloe, does it take only one of them to decide that breaching the pseudonyms would be justified? [Edit to add: I think the questions Geoffrey is asking are worthwhile ones to ask. I am just struggling to see how an appropriate decision to unmask could be made given the community's structure without creating this problem. I don't see a principled basis for declaring that, e.g., CHSP can legitimately decide to unmask but everyone else had better not.]

Has there been a suggestion that Chloe has made serial false allegations against former employers? I thought that was only Alice.

TracingWoodgrains - thanks for an excellent post. I think it should lead many EAs to develop a new and more balanced perspective on this controversy. 

And thanks for mentioning my EA Forum comments about Ben Pace doing amateur investigative reporting -- reporting that doesn't seem, arguably, to have lived up to the standards of basic journalistic integrity (regardless of how much time he and the Lightcone team may have put into it.)

This leaves us with a very awkward question about the ongoing anonymity of 'Alice' and 'Chloe', and I don't know what the ... (read more)

0
VictorW
4mo
I'll respond to one aspect you raised that I think might be more significant than you realize. I'll paint a black and white picture just for brevity.

If you're running organizations and do so for several years with dozens of employees across time, you will make poor hiring decisions at one time or another. While making a bad hire seems bad, avoiding this risk at all costs is probably a far inferior strategy. If making a bad hire doesn't get in the way of success and doing good, does it even make sense to fixate on it? Also, if you're blind to the signs before it happens, then you reap the consequences, learn an expensive lesson, and are less likely to make it in future, at least for that type of deficit in judgment. Sometimes the signs are obvious after having made an error, though occasionally the signs are so well hidden that anyone with better judgment than you could still have made the same mistake.

The underlying theme I'm getting at is that embracing mistakes and imperfection is instrumental. Although many EAs might wish that we could all just get hard things right the first time all the time, that's not realistic. We're flawed human beings, and respecting the fact of our limitations is far more practical than giving in to fear and anxiety about not having ultimate control and predictability. If anything, being willing to make mistakes is both rational and productive compared to other alternatives.

A quick reminder that moderators have asked, at least for the time being, to please not post personal information that would deanonymize Alice or Chloe.

PS For the people downvoting and disagree-voting on my comment here:

I raised some awkward questions, without offering any answers, conclusions, or recommendations.

Are you disagreeing that it's even legitimate to raise any issues about the ethics of 'whistleblower' anonymity in cases of potential false allegations?

I'd really like to understand what you're disagreeing about here.

So, what do you all think?

I continue to think that something went wrong for people to come away with takes that lump together Alice and Chloe in these ways. 

Not because I'm convinced that Alice is as bad as Nonlinear makes it sound, but because, even based on Nonlinear's portrayal, Chloe is portrayed as having had a poor reaction to the specific employment situation, and (unlike Alice) not as having a general pattern/history of making false/misleading claims. That difference matters immensely regarding whether it's appropriate to warn future potential... (read more)

Short answer: I think Ben should defer to the community health team as to whether to reveal identities to them or not (I'm guessing they know). And probably the community health team should take their names and add them to their list where orgs can ask CH about any potential hires and learn of red flags in their past. I think Alice should def be included on that list, and Chloe should maybe be included (that's the part I'd let the CH team decide, if it was bad enough). It's possible Alice should be revealed publicly, or maybe just revealed to community org... (read more)

Whistleblower anonymity should remain protected in the vast majority of situations, including this one, imo

Jason - yes, fair points. 

Hopefully any donations from individual donors who have benefitted from actually taking profits (into fiat currency) from volatile assets (such as crypto) would be less subject to corporate collapses, scandals, and clawbacks than donations from crypto companies such as FTX. But it's well worth thinking about these kinds of financial and legal risks. 

People who are disagree-voting with me on this: 

Please explain how a 120-fold difference in population sizes between groups wouldn't yield any bias in the global influence those groups would tend to have at the United Nations?

5
freedomandutility
3mo
Your first comment claims that the 120 fold difference in population makes Israel's enemies more influential than its allies at the UN (which I disagree with), which is different to claiming that the disproportionate populations have "some" effect over the UN (which I agree with). Religions are not represented at the UN; countries are. The major forces influencing the UN in favour of Israel are the US and the UK, which are mostly not made up of Jews, and the main force influencing the UN against Israel is China, which is largely not made up of Muslims. In other words, power struggles at the UN on Israel-Palestine are not really a power struggle between Jews and Muslims, and, like lots of other geopolitical issues, are more of a power struggle between the USA and China.

I didn't vote on your post, but I could imagine disagree voting to indicate disagreement with the implication that Muslims are fundamentally 'enemies of Israel'.

Jason - this is a reasonable concern. The 4-year crypto asset cycle could indeed lead to cycles of windfalls and dry spells for donations. But I guess the burden would be on EA organizations to smooth this out by saving up some of the windfall money to cover the dry spells -- rather than the donors trying to avoid high-volatility assets that show such cycles?
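As a rough illustration of that smoothing idea, here's a minimal sketch of a reserve rule in Python. The income series and the 50% payout fraction are made-up assumptions for illustration, not a recommendation for any actual org:

```python
# Toy reserve-smoothing rule for volatile donation income (illustrative only).
# Each year the org pays out a fixed fraction of (reserve + income) and banks
# the rest, so windfall years help cover the dry spells.

def smoothed_payouts(annual_income, payout_fraction=0.5):
    reserve = 0.0
    payouts = []
    for income in annual_income:
        available = reserve + income
        payout = payout_fraction * available
        reserve = available - payout
        payouts.append(round(payout, 1))
    return payouts

# Hypothetical boom/bust donation income, in $M, over eight years.
income = [10, 8, 1, 1, 12, 9, 1, 1]
print("raw income:", income)
print("smoothed:  ", smoothed_payouts(income))
```

With these made-up numbers, the smoothed payouts swing far less from year to year than the raw income does, at the cost of holding a reserve during the boom years.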

4
Jason
4mo
That makes sense in many contexts. I can think of some in which it might not work as well:
  • It is plausible that orgs may have planned for the crypto cycle, but not for the FTX collapse and probable clawbacks, and that the assets that would otherwise be used for smoothing had to be diverted. That goes double if the org was affected by a non-crypto financial issue (e.g., grant reduction/non-renewal from another source). As a practical matter, an org can only prepare for so many contingencies at once . . . even the US military with all its massive spending is designed to maintain a two-front war, IIRC.
  • I think it probably relies on an assumption that the org was old enough / established enough to have received a windfall during the high-water point of the previous crypto boom cycle. Without a windfall, there would be no windfall income to devote to smoothing.

David - I mention the gender bias in moral typecasting in this context because (1) moral typecasting seems especially relevant in these kinds of organizational disputes, (2) I've noticed some moral typecasting in this specific discussion on EA Forum, and (3) many EAs are already familiar with the classical cognitive biases, many of which have been studied since the early 1970s, but may not be familiar with this newly researched bias.

To be honest, I'm facing a difficult trade-off between whether I should donate more money now to the organizations I traditionally support (e.g. Vegan Outreach), versus investing in the crypto market before the next expected bull run in 2024-2025, after the bitcoin ETF approvals, the bitcoin halving, the next hype cycle, etc -- in hopes that an investment now could yield 10x more money to give later.

I'm curious if any other EAs are thinking about this tradeoff. I know a lot of us got stung, both financially and emotionally, by the FTX disaster. But IMHO, crypto is here to stay -- at least as a hyper-volatile risk asset that can be used to leverage wealth, by those with the knowledge and risk-tolerance to buy low and sell high.
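One way to frame that trade-off is a toy expected-value comparison, sketched below. All the probabilities, multiples, and the discount on later giving are assumed illustrative numbers, not forecasts or investment advice:

```python
# Toy expected-value comparison: donate now vs. invest and donate later.
# Every number below is an assumption made up for illustration.

amount = 10_000            # dollars available to give today
p_bull = 0.4               # assumed probability the hoped-for bull run happens
gain_if_bull = 10.0        # the "10x" scenario
gain_if_not = 0.5          # assumed drawdown if the bull run doesn't materialize
later_discount = 0.9       # assumed discount on later giving (urgency, value drift, etc.)

ev_give_now = amount
ev_invest_then_give = later_discount * amount * (p_bull * gain_if_bull
                                                 + (1 - p_bull) * gain_if_not)

print(f"Give now:           ${ev_give_now:,.0f}")
print(f"Invest, give later: ${ev_invest_then_give:,.0f} (expected value)")
```

Of course the real decision hinges entirely on the assumed numbers; the sketch just makes the trade-off explicit.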

Copying from the Facebook Group

Fellow crypto investor here (since 2015/16); I now run a crypto fund. A few points to make.

1. In the short term, way more money is made on getting in on trends before they are big than on fundamentals, at least historically. You wanted to get in on NFTs early. You wanted to get in on "AI tokens" early. You wanted to get in on yield farming early. You wanted to get in on ICOs early. You wanted to get in on dog tokens early. You wanted to get in on anything trending early. Etc. It didn't matter what project fundamenta... (read more)

4
Jason
4mo
I haven't, but is it still true that the EA donor base's assets are fairly heavily in crypto? So one potential downside would be reinforcing a relative lack of diversification, which could lead to both periods of really bountiful funding for orgs and periods of drought. Though perhaps at the small/midsize donor level, that isn't as much of a concern and one should go for the best expected return on a risk-neutral basis.
5
James Özden
4mo
You might be interested to ask in this Facebook group (I would love to help and thinking similar things but know approximately nothing)

Zachary - thanks very much for this update. I imagine a lot of us were pretty worried about how the FTX debacle would affect the core EA organizations and their budgets.

Is it fair to say that the organizations under the EV umbrella are still doing OK in terms of their budgets and funding situation, at least in the short term (e.g. being able to pay salaries, rents, and grants they're committed to)? Or is there any urgent need for fund-raising that EAs could help with?

Back in the 1990s, some of us were working on using genetic algorithms (simulated evolutionary methods) to evolve neural network architectures. This was during one of the AI winters, between the late 1980s flurry of neural network research based on back-propagation, and the early 2000s rise of deep learning in much larger networks. 

Some examples of this work are here (designing neural networks with genetic algorithms, 1989), here (genetic algorithms for autonomous robot control systems, 1994), here (artificial evolution as a path towards AI, 1997), an... (read more)
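For readers who haven't seen this kind of work, here is a minimal toy sketch of the basic neuroevolution idea -- evolving the weights of a tiny network by mutation and selection instead of back-propagation. This is an illustrative example only, not the methods from the papers linked above (which also evolved network architectures, not just weights):

```python
import math
import random

# Toy neuroevolution: evolve the 9 weights of a tiny 2-2-1 tanh network to
# approximate XOR, using only mutation and selection (no back-propagation).
CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

def fitness(w):
    # Higher is better: negative summed squared error over the four cases.
    return -sum((forward(w, x) - y) ** 2 for x, y in CASES)

def mutate(w, sigma=0.3):
    return [wi + random.gauss(0, sigma) for wi in w]

population = [[random.uniform(-1, 1) for _ in range(9)] for _ in range(50)]
for _ in range(300):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                                  # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(40)]

best = max(population, key=fitness)
print("best fitness (0 is perfect):", round(fitness(best), 4))
```

The selection loop above is the core of the approach; the 1990s work extended it to search over network topologies and control architectures as well.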

3
WillPearson
4mo
Thanks, I did an MSc in this area back in the early 2000s; my system was similar to Tierra, so I'm familiar with evolutionary computation history. Definitely useful context. Learning classifier systems are also interesting to check out for aligning multi-agent evolutionary systems. It definitely informs where I am coming from. Do you know anyone with this kind of background who might be interested in writing something long-form on this? I'm happy to collaborate, but my mental health has not been the best. I might be able to fund this a small bit, if the right person needs it.

There's a human cognitive bias that may be relevant to this whole discussion, but that may not be widely appreciated in EA yet: gender bias in 'moral typecasting'.

In a 2020 paper, my UNM colleague Tania Reynolds and coauthors found a systematic bias for women to be more easily categorized as victims and men as perpetrators, in situations where harm seems to have been done. They ran six studies in four countries (total N=3,317).

(Ever since a seminal paper by Gray & Wegner (2009), there's been a fast-growing literature on moral typecasting. Beyond t... (read more)

Where is the evidence people are seeing this as primarily E vs A&C rather than K vs A&C? The post is written by Kat, and the comments on this and other recent posts are from Kat…

I don't think it's productive to name just one or two of the very many biases one could bring up. I would need some reason to think this bias is more worth mentioning than other biases (such as Ben's payment to Alice and Chloe, or commenters' friendships, etc.).

My key point about investigative journalist expertise is that amateurs can invest a huge amount of time, money, and effort into investigations that are not actually very effective, fair, constructive, or epistemically sound.

As EAs know, charities vary hugely in their cost effectiveness. Investigative activities can also vary hugely in their time-effectiveness.

8
Habryka
4mo
Yeah, I can totally imagine there are skills here that make someone substantially more effective at this (I think I have gotten vastly better at this skillset over the last 10 years, for example). As I said, I think criticizing the process seems pretty reasonable; I highly doubt that we went about this in the most optimal way.