I'm a computational physicist, and I generally donate to global health. I am skeptical of AI x-risk and of big-R Rationalism, and I intend to explain why in great detail.
Model 7: We colonize one or two systems as a vanity project, realise that it's a giant pain in the arse and that the benefits inherently don't outweigh the costs of interstellar travel, and space colonization ends with a whimper.
As precedent, see human moon landings: the US did them a couple of times half a century ago and has never done them since, even though it would presumably be way easier to do now, because the people of Earth don't really see a benefit to doing so.
I know exactly who you mean, and they have been doing their best to create a culture where any accusation of sexism, racism, or sexual harassment, no matter how mild, must be proven three steps beyond reasonable doubt before it is accepted as valid.
Fran has put a frankly absurd level of care and detail into her account of events, and into responding to every little possible concern in the comments of her post. She has an airtight case backed up by independent investigators and a lawsuit settlement. And yet there is still one person in the comments who refuses to believe that there was a major problem (and probably more who are keeping quiet for now). I am heartened that basically everyone else has expressed their support so far, but I don't think you should have to go to that level of effort to get taken seriously on these matters.
I believe that the assertion that "anti-fascists" are "often just as fascist" as the right and will engage in the same behaviour if given power is factually untrue. While there are loud groups of authoritarian communists (tankies) on the left which could arguably be described as fascist, these are a fringe group that is unlikely to get anywhere near the levers of power. Anti-fascists are a broad coalition encompassing a wide array of political views.
I do not think that if the right loses the next election, the left will be equally fascist. The current administration flooded Minneapolis with poorly trained thugs who made it unsafe to go outside as a non-white person. I do not believe that a President AOC or whoever will take actions of equivalent damage.
We have an obligation as an employer to treat such complaints confidentially, evaluate them seriously, and avoid retaliatory action against the person raising the concerns. These obligations exist in part to avoid creating a chilling effect where employees feel uncomfortable raising HR concerns for fear of negative consequences for themselves.
To be clear, your organisation also had obligations not to spread around documents describing an employee's experience of rape. A quick clauding points to GDPR protections against sharing "data concerning a natural person’s sex life". I'm not a lawyer, but it seems like HR had a clear obligation to redact those parts of the complaint before sending it to the COO and other people, which didn't happen. And to state the obvious, concerns of a "chilling effect" were unwarranted here: a standard of "you can complain about your colleague as long as you don't sexually harass someone" is pretty understandable to everyone.
I'm glad that you have gained understanding about the serious mistakes that your organisation made. I remain horrified that it took so long for you to reach this understanding.
I would express a strong preference for the "AGI going well" framing over something like "aligned superintelligence", as the latter presupposes a particular view of how AI is going to go that not everyone agrees with. I think the question is still worth discussing if you believe that AI progress is much more gradual or will stall out at humanish levels of intelligence. And then there's the typical question of what "aligned" means: aligned to whom, or to what?
"AGI goes well" is better because it doesn't presuppose as much: just that we have AGI and humans are doing fine.
I believe that you want to deploy this technology in a way that avoids coercion and avoids racism. The problem is that you aren't in charge of society: once the tech is out there, you don't get a large say in how it gets used. Those decisions go to the public in the case of democracies, and to a handful of scumbags in the case of dictatorships and oligarchies.
A quick look through history will show that basically anytime one group of people sees another group as genetically or racially inferior, discrimination and atrocities result. I see no reason to think that this trend will not continue if we create new groups of people. If Bulgarians embrace genetic "amplification", to improve their "intelligence" and "morals", but Romanians ban it, human history indicates that Bulgarians will look at Romanians as their inferiors, and treat them accordingly.
Effectiveness requires being able to tell the truth, even if it is unpalatable to certain political factions. The actions of Hegseth are unreasonable and set a horrible precedent for companies looking to maintain even the barest of moral principles. Anthropic tried to engage in good faith with the administration and were stabbed in the back over it.
If you want to get people to boycott GPT, tying it to MAGA is probably a good tactic. The US populace views Trump unfavourably by a wide margin, and the rest of the western world hates him even more.
Remember, people need some level of active motivation to maintain a boycott, and we have seen how motivating the dislike of MAGA is. If you try to make an appeal that will not offend anybody, it will be a wishy-washy thing that motivates nobody.
You're not losing it: it is obviously indefensible. I think you've provided more than enough information to make this clear, and anybody who doesn't get it at this point is probably not worth your time engaging with.
You can ask the following question to any chatbot and you will get the same answer:
I tested this on ChatGPT, Claude, Gemini, and Grok, and every single one urged me to separate the complaint from the sexual content and redact the sensitive information. And this is a much tamer situation than the one that actually happened!
They could have literally just asked a chatbot what to do, and it would have done a better job than their professional HR department.