When I say that there's a seventy percent chance of something, that number carries a very specific meaning: there is a 67% chance that it is the case.
(I checked my calibration online just now.)
Getting decently calibrated is not some impossible skill.
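Checking your own calibration is just bookkeeping: bucket your past stated probabilities and compare each bucket's stated odds to how often the claims actually came true. A minimal sketch - the prediction log below is hypothetical, not my real record:

```python
# Minimal calibration check over a log of past predictions.
# Each entry: (stated probability, whether the claim turned out true).
# These numbers are invented for illustration.
from collections import defaultdict

log = [
    (0.7, True), (0.7, True), (0.7, False), (0.7, True),
    (0.7, False), (0.7, True), (0.7, True), (0.7, False),
    (0.7, True), (0.7, True),
    (0.9, True), (0.9, True), (0.9, True), (0.9, False),
]

# Group outcomes by the probability that was stated for them.
buckets = defaultdict(list)
for p, outcome in log:
    buckets[p].append(outcome)

# Well-calibrated means each bucket's hit rate tracks its stated odds.
for p in sorted(buckets):
    outcomes = buckets[p]
    rate = sum(outcomes) / len(outcomes)
    print(f"said {p:.0%} -> happened {rate:.0%} ({len(outcomes)} predictions)")
```

With enough logged predictions, a persistent gap between a bucket's stated odds and its hit rate is exactly the kind of miscalibration the online tests measure.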
Your post begins with,
I do not believe this interpretation is correct.
And ends with,
To be fair, upon reading it again
If in the writing of a comment you realize that you were wrong, you can just say that.
The EA Forum has recently had some very painful experiences in which members of the community jumped to conclusions and tried to oust people on very flimsy evidence, and now we're seeing upvotes from people who are sick of that dynamic.
LessWrong commenters did a better job of navigating accusations, waiting for evidence, and downvoting low-quality combativeness. Running off half-cocked hasn't had such disastrous effects there, so fewer people on that site are currently sick of it.
Upvoted.
I'm in strong agreement with point two and in agreement with point four. I think these are things that more people should keep in mind while putting together microcultures and they are things I worry about frequently.
I'm also in favor of point one for... basically all social groups and microcultures which aren't EA. But it wouldn't work for EA. EA is more public than a board game club, and many load-bearing people in EA are also public figures. Public figures are falsely accused of assault constantly.
None of this was news to the people who use LessWrong.
The time to have a conversation about what went wrong and what a community can do better is immediately after you learn that the thing happened. If you search for the names of the people involved, you'll see that LessWrong did exactly that, at length.
The worst possible time to bring the topic up again is when someone writes a misleading article for the express purpose of hurting you - an article that was not written to be helpful and that purposefully lacks the context it would need in order to be helpful. Why...
I'm worried about this a non-zero amount.
But in the longer run I'm relatively optimistic about most futures where humans survive and continue making decisions. The future will last a very long time, and it's not uncommon for totalitarian governments to liberalize as decades or centuries wear on. Where there is life, there is hope.
To anyone who had been paying attention, the Bloomberg piece was not an update on how misconduct has happened in EA.
I'm strongly downvoting the parent comment for now, since I don't think it should be particularly visible. I'll reverse the downvote if you release the rejection letter and it is as you've represented.
One of the comments Ivy was responding to there began "I am encouraging you to try to exercise your empathetic muscles and understand..."
And the comment thread we are in by someone who named this burner account of theirs "Eugenics-Adjacent" began "Sadly I fear stories like this are lost on the devoted EA crowd here..."
I agree that posts on the EA forum should be kind and assume good faith.
I agree that we should be aiming for excellence.
If having many examples of behavior X within a group doesn't show that the group is worse or better at it than average - if you expect to see the same list in either case - then being presented with such a list has given you zero evidence on which to update.
They would have written the same article whether behavior X was half as common or twice as common or vanishingly rare. They would have written the same article whether things were handled well or poorly, as shown by their framing things mi...
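The underlying arithmetic is ordinary Bayes with a likelihood ratio of one: if the article was equally likely to be written whichever world you're in, the posterior equals the prior. A toy sketch, every number invented for illustration:

```python
# Toy Bayesian update on "this group is worse than average at X"
# after seeing a motivated hit piece. All numbers are made up.
prior_worse = 0.5

# A writer set on publishing would produce the list either way,
# so the likelihoods match and their ratio is 1.
p_article_if_worse = 0.9
p_article_if_not_worse = 0.9

posterior = (p_article_if_worse * prior_worse) / (
    p_article_if_worse * prior_worse
    + p_article_if_not_worse * (1 - prior_worse)
)
# The posterior equals the prior: the article moved us nowhere.
print(posterior)
```

Change either likelihood so the article is genuinely more probable in one world than the other and the posterior moves - which is precisely the evidence such pieces don't provide.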
In the absence of evidence that rationalism is uniquely good at dealing with sexual harassment (it isn't), the prior assumption about the level of misconduct should be "average", not "excellent". Which means that there is room for improvement.
Even if these stories do not update your beliefs about the level of misconduct in the communities, they do give you information about how misconduct is happening, and point to areas that can be improved. I must admit I am baffled as to why the immediate response seems to be mostly about attacking the media, instead of trying to use this new information to figure out how to protect your community.
Mentioning that in the article would have defeated the purpose of writing it, for the person who wrote it.
Someone on the LessWrong crosspost linked this relevant thing: https://slatestarcodex.com/2015/09/16/cardiologists-and-chinese-robbers/
The "Chinese Robber fallacy" is being overstretched, in my opinion. All it says is that having many examples of X behaviour within a group doesn't necessarily prove that X is worse than average within that group. But that doesn't mean it isn't worse than average. I could easily imagine the Catholic Church throwing this type of link out in response to the first bombshell articles about abuse.
Most importantly, we shouldn't be aiming for average, we should be aiming for excellence. And I think the poor response to a lot of the incidents described is pretty strong evidence that excellence is not being achieved on this matter.
Provocation can shock people out of their normal way of seeing the world into looking at some fact in a different light. This seems to be roughly what Bostrom was saying in the first paragraph of his 1996 email. However, in the case of that email, it's unclear what socially valuable fact he was trying to shock people into seeing in a new way.
Bostrom's email was in response to someone who made the point you do here about provocation sometimes making people view things in a new light. The person who Bostrom was responding to advocated saying things in a blun...
Interesting! I admit I didn't go and read the original discussion thread, so thanks for that context. To the extent that Bostrom was arguing against being needlessly shocking, he was kind of already making the same point that his critics have been making: don't say needlessly shocking things. He didn't show enough sensitivity/empathy in the process of presenting the example and explaining why it was bad, but he was writing a quick email to friends, not a carefully crafted political announcement intended to be read by thousands of people.
Here are the last four things I remember seeing linked as supporting evidence in casual conversation on the EA forum, in no particular order:
https://forum.effectivealtruism.org/posts/LvwihGYgFEzjGDhBt/?commentId=HebnLpj2pqyctd72F - link to Scott Alexander, "We have to stop it with the pointless infighting or it's all we will end up doing," is 'do x'-y if anything is. (It also sounds like a perfectly reasonable thing to say and a perfectly reasonable way to say it.)
https://forum.effectivealtruism.org/posts/LvwihGYgFEzjGDhBt/?commentId=SCfBodrdQYZBA6RBy - se...
Gains from trade, and agglomeration effects, and economies of scale. Being effective is useful for doing good, having a lot of close friends and allies is useful for being effective.
I think it's pretty obvious at this point that Tegmark and FLI was seriously wronged, but I barely care about any wrong done to them and am largely uninterested in the question of whether it was wildly disproportionate or merely sickeningly disproportionate.
I care about the consequences of what we've done to them.
I care about how, in order to protect themselves from this community, the FLI is
...working hard to continue improving the structure and process of our grantmaking processes, including more internal and (in appropriate cases) external review.
I barely give a gosh-guldarn about FLI or Tegmark outside of their (now reduced) capacity to reduce existential risk.
Obviously I'd rather bad things not happen to people and not happen to good people in particular, but I don't specifically know anyone from FLI and they are a feather on the scales next to the full set of strangers who I care about.
Eliezer is an incredible case of hero-worship - it's become the norm to just link to jargon he created as though it's enough to settle an argument.
I think that you misunderstand why people link to things.
If someone didn't get why I feel morally obligated to help people who live in distant countries, I would likely link them to Singer's drowning child thought experiment. Either during my explanation of how I feel, or in lieu of one if I were busy.
This is not because I hero-worship Singer. This is not because I think his posts are scripture. This is be...
There's an angry top-level post about evaporative cooling of group beliefs in EA that I haven't written yet, and won't until it would no longer be an angry one. That might mean that the best moment has passed, which will make me sad for not being strong enough to have competently written it earlier. You could describe this as my having been chilled out of the discourse, but I would instead describe it as my politely waiting until I am able and ready to explain my concerns in a collected and rational manner.
I am doing this because I care about carefully art...
For better or worse, most of the world runs on persuasion, and PR matters. The nuanced truth doesn't matter that much for social reality, and EA should ideally be persuasive and control social reality.
I think the extent to which nuanced truth does not matter to "most of the world" is overstated.
I additionally think that EA should not be optimizing for deceiving people who belong to the class "most of the world".
Both because it wouldn't be useful if it worked (realistically, most of the world is offering very little) and because it woul...
I'd like to ask people not to downvote titotal's comment below zero, because that also hides RobBensinger's timeline. I had to strong upvote the parent comment to make the timeline visible again.
At the time of my writing this comment, the parent was at 25 karma and -31 agreement karma.
Seeing as Jim was absolutely correct, I think that the people who dismissed them out of hand should reflect on what manner of reasoning led them to do so.
EDIT: posted this before I saw that Ic had already made the same point.
I had to draft and re-draft the parent comment to write it without cursing. I am crying angry tears right now. Both are deeply out of character for me.
I have been worn down.
...8) What have we learned from this and how can we improve our grantmaking process?
The way we see it, we rejected a grant proposal that deserved to be rejected, and challenging, reasonable questions have been asked as to why we initially considered it and didn’t reject it earlier. We deeply regret that we may have inadvertently compromised the confidence of our community and constituents. This causes us huge distress, as does the idea that FLI or its personnel would somehow align with ideologies to which we are fundamentally opposed. We are working hard
The FLI did nothing wrong.
I don't completely agree: grantmaking organizations shouldn't issue grant intent letters which imply this level of certainty before completing their evaluation. I expect one outcome here will be that FLI changes how they phrase letters they send at this stage to be clearer about what they actually represent, and this will be a good thing on its own where it helps grantees better understand where they are in the process and how confident to be about incoming funds.
I'm also not convinced that the stage at which this was caught i...
I had to draft and re-draft the parent comment to write it without cursing. I am crying angry tears right now. Both are deeply out of character for me.
I have been worn down.
Uncontroversial take: EA wouldn't exist without the blithely curious and alien-brained.
More controversially: I've been increasingly feeling like I'm on a forum where people think the autistic/decoupler/rationalist cluster did their part and now should just... go away. Like, 'thanks for pointing us at the moral horrors and the world-ending catastrophe, I'll bear them in mind, now please stop annoying me.'
But it is not obvious to me that the alien-brained have noticed everything useful that they are going to notice, and done all the work that they will do, such that it is safe to discard them.
Let me say this: autism runs in my family, including two of my first cousins. I think that neurodivergence is not only nothing to be ashamed of, and not an "illness" to be "cured", but in fact a profound gift, and one which allows neurodivergent individuals to see what many of us do not. (Another example: listen to Vikingur Olafsson play the piano! Nobody else hears Mozart like that.)
Neurodivergent individuals and high decouplers should not be chased out of effective altruism or any other movement. Doing this would not only be intrinsically wrong, but wou...
Noting that I strongly disagreed with this, rather than it being someone with weighty karma casting a normal disagree vote.
Sometimes it's more important to convey something with high fidelity to few people than it'd be to convey an oversimplified version to many.
That's the reason why we bother having a forum at all - despite the average American reading at an eighth-grade level - rather than standing on street corners shouting at the passers-by.
I think that having to actively filter out controversy is the sort of trivial inconvenience that would lead to many people just not using the forum while there's a controversy on (or not using it ever, if this is the new normal).
My initial reaction to the mod comment was confusion, as it is not threaded beneath wachichornia's comment for me:
I'm going to push back against this a very slight amount. It is good to write a thing as simply as possible while saying exactly what it's meant to say in exactly the way it's meant to be said - but not to write a thing more simply than that.
Noting for the record that I read this post after these comments were written, and other people will as well.
Many people stand by The Scout Mindset by Julia Galef (though I haven't myself read it) (here's a book review of it that you can read to decide whether you want to buy or borrow the book). I don't know how many pages long it is exactly but am 85% sure it falls in your range.
On the nightstand next to me is Replacing Guilt by Nate Soares - it's 202 pages long and they are all of them great. You can find much of the material online here, you could give the first few chapters a glance-through to see if you like them.
I'm interested to see which books other people recommend!
Hello! Welcome to the forum, I hope you make yourself at home.
...you would be justified in requiring first some short and convincing expository work with the core arguments and ideas to see if they look sufficiently appealing and worth engaging in. Is there something of the kind for Rationalism?
In this comment Hauke Hillebrandt linked this essay of Holden Karnofsky's: The Bayesian Mindset. It's about a half-hour read and I think it's a really good explainer.
Putanumonit has their own introduction to rationality - it's less explicitly Bayesian, and som...
I got LG for my forum alignment - I'm guessing that that's the most common one?
Comment if you got a different one (unless you'd rather not (I guess you could make a throwaway account so that no one judges you for being CE)).
I disagree pretty strongly with the headline claim about extreme overconfidence, having found rationalist stuff singularly useful for reducing overconfidence with its major emphases on falsifiable predictions, calibration, bowing quickly to the weight of the evidence, thinking through failure-states in detail, and planning for being wrong.
I could defend this at length, but it's hard to find the heart to dig up a million links and write a long explanation when it seems unlikely that this is actually important to you or the people who strong-agreed with you.
A lot of the people who built effective altruism see it as an extension of the LessWrong worldview, and think that that's the reason why EA is useful to people where so many well-meaning projects are not.
Some random LessWrong things which I think are important (chosen because they come to mind, not because they're the most important things):
The many people in EA who have read and understand Death Spirals (especially Affective Death Spirals and Evaporative Cooling of Group Beliefs) make EA feel safe and like a community I can trust (instead of feeling like ...
I've read a decent chunk of the sequences, and there are plenty of things to like about them, like the norms of friendliness and openness to new ideas you mention.
But I cannot say that I subscribe to the LessWrong worldview, because there are too many things I dislike that come along for the ride. Chiefly, it seems to foster a sense of extreme overconfidence in beliefs about fields people lack domain-specific knowledge of. As a physicist, I find the writings about science to be shallow, overconfident, and often straight up wrong, and this has been the reactio...
Eliezer isn't (to my knowledge) an expert on, say, evolutionary biology. Reading the sequences will not make you an expert on evolutionary biology either.
They will, however, show you how to make a layman's understanding of evolutionary biology relevant to your life.
If I had to guess, I'd point at having a long bulleted list of different specific predictions about the future as a risk factor for someone registering disagreement.
No reason to feel dumb - I didn't immediately get the reference either. I could tell it was a reference to a legend about a golden apple, since it was the caption to a painting of a legendary-looking person holding a golden apple, so to answer your question I googled "golden apple legend", found the Wikipedia disambiguation page, and searched that for the legend that fit.
It's a joking reference to the Apple of Discord story, wherein the goddess of discord Eris crashed a party and started the Trojan War.
I think "does EA provide what is wanted or needed by women?" is a pretty serviceable title; two nations divided by a common language and such.
There was a prominent debate between Eliezer Yudkowsky and Robin Hanson back in 2008 which is a part of the EA/rationalist communities' origin story, link here: https://wiki.lesswrong.com/index.php?title=The_Hanson-Yudkowsky_AI-Foom_Debate
Prediction is hard, and reading the debate from the vantage point of 14 years in the future it's clear that in many ways the science and the argument have moved on - but it's also clear that Eliezer made better predictions than Robin Hanson did, in a way that inclines me to try to learn as much of his worldview as possible so I can analyze other arguments through that frame.
...The "alignment problem for advanced agents" or "AI alignment" is the overarching research topic of how to develop sufficiently advanced machine intelligences such that running them produces good outcomes in the real world.
Both 'advanced agent' and 'good' should be understood as metasyntactic placeholders for complicated ideas still under debate. The term 'alignment' is intended to convey the idea of pointing an AI in a direction--just like, once you build a rocket, it has to be pointed in a particular direction.
"AI alignment theory" is meant as an overarch
It's from the paper "Some Limits to Global Ecophagy" (which he's cited in this context before): https://lifeboat.com/ex/global.ecophagy