“All that was great in the past was ridiculed, condemned, combated, suppressed — only to emerge all the more powerfully, all the more triumphantly from the struggle.” 

―Nikola Tesla

“They laughed at Columbus, they laughed at Fulton, they laughed at the Wright brothers. But they also laughed at Bozo the Clown.”

―Carl Sagan

There is a modest tension between showing appropriate respect for the expert consensus on a topic (or the mainstream view, or majority opinion, or culturally evolved social norms, or whatever) and wanting to have creative and innovative thoughts about that topic. It's possible to be too deferential to expert consensus (etc.), but it's also possible to be too iconoclastic. Where people in effective altruism err, they tend to err in being too iconoclastic.

The stock market is a good analogy. The market is wrong all the time, in the sense that, for example, companies rocket to large valuations and then come crashing down, not necessarily because the fundamentals changed, but because the market comes to believe the company was a bad bet in the first place. (The work of equity research analysts and investigative reporters with non-consensus/counter-consensus hunches is valuable here.) So, the market is wrong all the time, but beating the market is famously hard. The question is not whether the market often makes mistakes that could be capitalized on (it certainly does); the question is whether you, specifically, can spot the mistakes before the market does, without making mistakes of your own that outweigh everything else. I think a similar thing is true with ideas in general. 

The consensus view is wrong all the time; society is wrong all the time, about all sorts of things. Like the stock market, expert communities and society at large have ways of integrating criticism and new thinking. There is some sort of error-correcting process. The question for anyone pursuing an iconoclastic hunch is not whether society or a particular community of experts has shown it is fallible (it has, of course); the question is whether you can correct an error over and above what the error-correcting process is already doing, without introducing even more error yourself. 

The danger doesn't lie simply in having non-consensus/counter-consensus views — everyone should probably have at least a few — the danger is in having too many about too many things, in being far too confident in them, and in rejecting mainstream institutions and error-correcting processes. With the stock market, it takes a lot of research to support one non-consensus/counter-consensus bet. At a certain point, if someone tries to take too many bets, they're going to be stretched too thin and will not be able to do adequate research on each one. I think something similar is true with ideas in general. You need to work to engage with the consensus view properly and make a strong criticism of it. People who reject too many consensuses too flippantly are bound to be wrong about a lot. I think maybe people mistake the process of research, which takes time and hard work and which has perhaps led them to be right once or twice about something most people disagreed with, for having some sort of superpower that allows them to see mistakes everywhere without putting in due diligence. 

There's also an important difference between working to improve or contribute to institutions by participating in their error-correction processes versus rejecting them and being an outsider, or even calling for them to be torn down. I think there's something quietly heroic but typically unglamorous about participating in the error-correcting process. The people who have the endurance for it usually seem to be motivated by something deeper than getting to be right or settling scores. It comes from something more selfless and spiritual than that. By contrast, when I see people eagerly take on the role of outsider, or of someone who wants to smash institutions, it often seems rooted in some kind of fantasy of will to power.

One potential route for people in effective altruism who want to reform the world's ideas is publishing papers in academic journals and participating in academic conferences. I believe it was in reference to ideas about artificial general intelligence (AGI) and AI safety and alignment that the economist Tyler Cowen once gave the advice, "Publish, publish, publish." The philosopher David Thorstad, who writes the blog Reflective Altruism about effective altruism, has also talked about the merits of academic publishing in terms of applying rigour and scrutiny to ideas, in contrast to overreliance on blogs and forums. I do wish we lived in a world where academia was less expensive and more accessible, but that's a complaint about how widely the academic process can be applied, not about how important it is. 

Anything that can get a person outside the filter bubble/echo chamber/ideological bunker of effective altruism will surely help. Don't forget the basics of good epistemic practice: talking to people who disagree with you, who think differently than you, and putting yourself in an emotional state where you can really consider what they're saying and potentially change your mind. Unfortunately, the Internet has almost no venues for this sort of thing, because the norm is aggressive clashes and brutal humiliation contests. Real life offers much better opportunities, but I'm worried that too much time spent on the Internet (and really Twitter deserves an outsized share of the blame) is making people more dogmatic, combative, and dismissive of others' opinions in real life, too. I don't find this to be an easy problem myself and I don't have clean, simple advice or solutions. (If you do, let me know!)

A related topic is systemic or structural critiques of institutions, including fields in academia. This is not exactly the same idea as everything I've discussed so far, although it is relevant. I think you can make good points about systemic or structural critiques of many institutions, including government (e.g. is it properly representational, do all votes have equal power?), journalism (e.g. does widespread negativity bias have harmful effects such as making people feel more hopeless or more anxious?), and certain academic fields. I have an undergraduate degree in philosophy, so philosophy is the field I'm most familiar with. I think the practice of philosophy could be improved in some ways, such as leaning toward plain English, showing less deference to or reverence for big names in the history of philosophy (such as Kant, Hegel, and Heidegger), and being more mindful of which questions in philosophy are important and why, rather than getting drawn into "How many angels can dance on the head of a pin?"-type debates. 

You can make legitimate critiques of institutions or academic fields, like philosophy, and in some way, the force of those critiques makes those institutions or fields less credible. For example, I take academic philosophy somewhat less seriously than I would if it didn't have the problems I experienced with unclear language, undue deference to or reverence for big names, or distraction by unimportant questions. But if I decide academic philosophy is a temple that needs to be torn down and I try to create my own version of philosophy outside of academia from the ground up, on balance, what I come up with is going to be far worse than the thing I'm trying to replace. Trying to reinvent everything from scratch is a forlorn project. 

So, what applies to ideas also applies to institutions that handle ideas. Systemic or structural reform of institutions is good and probably needed in many cases, but standing outside of institutions and trying to create something better is going to fail 99.9%+ of the time. I detect in effective altruism too much of an impulse, too often and from too many people, to want to reinvent everything from scratch. Why not apply effective altruism's deeply seeing Eye of Sauron to corporations, to governments, to nations, to world history, and, in roughly the words said to me by an employee of the Centre for Effective Altruism many years ago, solve all the world's problems in priority sequence? There is a nobility in getting one to three things right that are non-consensus/counter-consensus and giving that contribution to the world. There is also an ignobility in becoming overconfident from a few wins and, rather than being realistic about the limitations of human beings, spreading yourself far too thin, making too many contrarian bets on too many things, and ending up surely wrong about most of them.

Getting something right that most people get wrong, something the world needs to know about, takes love, care, attention, and hard work. Doing even one thing like that is an achievement and something to be proud of. That's disciplined iconoclasm and it's beautiful. It moves the world forward. The other thing, the undisciplined thing, is in an important sense the opposite of that and its mortal enemy, although the two are easily confused and conflated. The undisciplined thing is to think that because you were contrarian and right once or a few times, or just because, for whatever reason, you feel incredibly confident in your opinions, everyone else must be wrong about everything all the time. That is actually a tragic perspective from which to inhabit life. It is often a refuge of the wounded. It's an impoverished perspective, mean and meagre. 

The heroism of the disciplined iconoclast contrasted with the tragedy of the omni-contrarian evokes for me what the Franciscan mystic Richard Rohr calls the distinction between the soul and the ego, or the true self and the false self. Ironically enough, great deeds are better accomplished by people not concerned with their greatness, and the highest forms of well-being in life, such as love, come from setting aside, at least partially, one's self-interest. Will to power is not the way. The way is love for the thing itself, regardless of external reward or recognition. That kind of selfless love is often unglamorous, sometimes boring, and definitely hard to do, but it's undoubtedly the best thing on Earth.

I think many people like myself once detected something nearly divine in effective altruism's emphasis on sacrificing personal consumption to help the world's poorest people, not for any kind of recognition or external reward, but just to do it. That is an act of basically selfless love. Given that point of comparison, you can understand why many people feel unsatisfied and uneasy with a transition toward discussing what is essentially science fiction at expensive conference venues. Where is the love or the selflessness in that? 

I think excessive contrarianism is best understood as ego or as stemming from an ego problem. It's about not accepting one's fundamental human limitations, which everyone has, and it's about being overly attached to winning, being right, gaining status, gaining recognition, and so on, rather than letting those things go as much as humanly possible in favour of focusing on love for the thing itself. I think this is always a tragic story because people are missing the point of what life is about. It's also a tragic story because every investigation of why someone is unable to let go of ego concerns seems to ultimately trace back to someone, typically a child, who deserved love and didn't get it, and found the best way they could to cope with that unbearable suffering.

When I ask myself why some people seem to need effective altruism to be something more than it could possibly realistically be, or why they seem to want to tear down all human intellectual achievement and rebuild it from scratch, I have to wonder if they would feel the same way if they felt they could be loved exactly as they are. Could it be, at bottom, that it's been about love all along?

Comments

One potential route for people in effective altruism who want to reform the world's ideas is publishing papers in academic journals and participating in academic conferences.

 

I do think EA is a bit too critical of academia and peer review. But despite this, most of the top 10 most highly published authors in peer-reviewed journals in the global catastrophic risk field have at least some connection with EA.

I think where academic publishing would be most beneficial for increasing the rigour of EA’s thinking would be AGI. That’s the area where Tyler Cowen said people should “publish, publish, publish”, if I’m correctly remembering whatever interview or podcast he said that on. 

I think academic publishing has been great for the quality of EA’s thinking about existential risk in general. If I imagine a counterfactual scenario where that scholarship never happened and everything was just published on forums and blogs, it seems like it would be much worse by comparison. 

Part of what is important about academic publishing is exposure to diverse viewpoints in a setting where the standards for rigour are high. If some effective altruists started a Journal of Effective Altruism and only accepted papers from people with some prior affiliation with the community, then that would probably just be an echo chamber, which would be kind of pointless. 

I liked the Essays on Longtermism anthology because it included critics of longtermism as well as proponents. I think that’s an example of academic publishing successfully increasing the quality of discourse on a topic. 

When it comes to AGI, I think it would be helpful to see some response to the ideas about AGI you tend to see in EA from AI researchers, cognitive scientists, and philosophers who are not already affiliated with EA or sympathetic to its views on AGI. There is widespread disagreement with EA’s views on AGI from AI researchers, for example. It could be useful to read detailed explanations of why they disagree. 

Part of why academic publishing could be helpful here is that it’s a commitment to serious engagement with experts who disagree in a long-form format where you’re held to a high standard, rather than ignoring these disagreements or dismissing them with a meme or with handwavy reasoning or an appeal to the EA community’s opinion — which is what tends to happen on forums and blogs. 

EA really exists in a strange bubble on this topic; its epistemic practices are unacceptably bad, scandalously bad — if it's a letter grade, it's an F in bright red ink — and people in EA could really improve their reasoning in this area by engaging with experts who disagree, not with the intent to dismiss or humiliate them, but to actually try to understand why they think what they do and seriously consider whether they're right. (Examples of scandalously bad epistemic practices include: many people in EA apparently never having heard that an opposing point of view on LLMs scaling to AGI even exists, despite it being the majority view among AI experts, let alone understanding the reasons behind that view; some people in EA openly mocking people who disagree with them, including world-class AI experts; and, in at least one instance, someone with a prominent role responding to an essay on AI safety/alignment that expressed an opposing opinion without reading it, just based on guessing what it might have said. These are the sort of easily avoidable mistakes that predictably lead to poorly informed and poorly thought-out opinions, which, of course, are more likely to be wrong as a result. Obviously these are worrying signs for the state of the discourse, so what's going on here?)

Only weird masochists who dubiously prioritize their time will come to forums and blogs to argue with people in EA about AGI. The only real place where different ideas clash online — Twitter — is completely useless for serious discourse, and, in fact, much worse than useless, since it always seems to end up causing polarization, people digging in on opinions, crude oversimplification, and in-group/out-group thinking. Humiliation contests and personal insults are the norm on Twitter, which means people are forming their opinions not based on considering the reasons for holding those opinions, but based on needing to “win”. Obviously that’s not how good thinking gets done.

Academic publishing — or, failing that, something that tries to approximate it in terms of the long-form format, the formality, the high standards for quality and rigour, the qualifications required to participate, and the norms of civility and respect — seems the best path forward to get that F up to a passing grade. 

I think where academic publishing would be most beneficial for increasing the rigour of EA’s thinking would be AGI.

AGI is a subset of global catastrophic risks, so EA-associated people have published extensively on AGI in academic venues - I personally have about 10 publications related to AI.

 

Examples of scandalously bad epistemic practices include: many people in EA apparently never having heard that an opposing point of view on LLMs scaling to AGI even exists, despite it being the majority view among AI experts, let alone understanding the reasons behind that view; some people in EA openly mocking people who disagree with them, including world-class AI experts; and, in at least one instance, someone with a prominent role responding to an essay on AI safety/alignment that expressed an opposing opinion without reading it, just based on guessing what it might have said. These are the sort of easily avoidable mistakes that predictably lead to poorly informed and poorly thought-out opinions, which, of course, are more likely to be wrong as a result. Obviously these are worrying signs for the state of the discourse, so what's going on here?

I agree that those links are examples of poor epistemics. But regarding the example of not being aware that the current paradigm may not scale to AGI: this is commonly discussed in EA, such as here and by Carl Shulman (I think here or here). I would be interested in your overall letter grades for epistemics. My quick take would be:
Ideal: A+
Less Wrong: A
EA Forum: A- (not rigorously referenced, but overall better calibrated to reality and what is most important than academia, more open to updating)
Academia: A- (rigorously referenced, but a bias towards being precisely wrong rather than approximately correct, which actually is related to the rigorously referenced part. Also a big bias towards conventional topics.)
In-person dialog outside these spaces: C
Online dialog outside these spaces: D

I think maybe people mistake the process of research, which takes time and hard work

 

I think you're confusing "hard work" with the disclosure of wisdom.

Take a look at the history of philosophy and you'll find plenty of hard work in medieval scholasticism, or in Marxist dialectical materialism. Heidegger was one of the greatest philosophers... and he was a Nazi and organized book burnings. Sartre supported Stalinism. They were true scholars who worked very hard, with extraordinary intellectual capacity and commendable academic careers.

Wisdom is something else entirely. It stems from an unbiased perspective and risks breaking with paradigms. "Effective Altruism" might be close to this. For the first time, there's a movement for social change centered on a behavioral trait, detached from old traditions and political constraints.

This is a very strange critique. The claim that research takes hard work does not logically imply a claim that hard work is all you need for research. In other words, to say hard work is necessary for research (or for good research) does not imply it is sufficient. I certainly would never say that it is sufficient, although it is necessary.

Indeed, I explicitly discuss other considerations in this post, such as the "rigour and scrutiny" of the academic process and what I see as "the basics of good epistemic practice", e.g. open-minded discussion with people who disagree with you. I talk about specific problems I see in academic philosophy research that have nothing to do with whether people are working hard enough or not. I also discuss how, from my point of view, ego concerns can get in the way, and love for research itself — and maybe I should have added curiosity — seems to be behind most great research. But, in any case, this post is not intended to give an exhaustive, rigorous account of what constitutes good research. 

If picking examples of academic philosophers who did bad research or came to bad conclusions is intended to discredit the whole academic enterprise, I discussed that form of argument at length in the post and gave my response to it. (Incidentally, some members of the Bay Area rationalist community might see Heidegger's participation in the Nazi Party and his involvement in book burnings as evidence that he was a good decoupler, although I would disagree with that as strongly as I could ever disagree about anything.) 

I think accounting for bias is an important part of thinking and research, but I see no evidence that effective altruism is any better at being unbiased than anyone else. Indeed, I see many troubling signs of bias in effective altruist discourse, such as disproportionately valuing the opinion of other effective altruists and not doing much to engage seriously and substantively with the opinions of experts who are not affiliated with effective altruism.

I think effective altruism is as much attached to intellectual tradition and as much constrained by political considerations as pretty much anything else. No one can transcend the world with an act of will. We are all a part of history and culture. 

I think accounting for bias is an important part of thinking and research, but I see no evidence that effective altruism is any better at being unbiased than anyone else. Indeed, I see many troubling signs of bias in effective altruist discourse, such as disproportionately valuing the opinion of other effective altruists and not doing much to engage seriously and substantively with the opinions of experts who are not affiliated with effective altruism.

 

Though I don't claim that EAs are without bias, I think there's lots of evidence that they have less bias than non-EAs. For instance, most EAs genuinely ask for feedback, many times specifically asking for critical feedback, and typically update their opinions when they think it is justified. Compare the EA Forum with typical chat spaces. EAs also more often admit mistakes, such as on many EA-aligned organization websites. When making a case for funding something, EAs often will include reasons for not funding something. EAs often include credences, which I think helps clarify things and promotes more productive feedback. EAs tend to have a higher standard of transparency. EAs take counterfactuals seriously, rather than just trying to claim the highest number for impact.

Another example of how EA is less biased is the EA-associated news sources. Improve the News is explicitly about separating fact from opinion, and Future Perfect and Sentinel news focus on more important issues, e.g. malaria and nuclear war, rather than plane crashes.

Of course, I agree that EA contains extravagant, Byzantine, and biased approaches, influenced by all sorts of traditions. But there is one approach that is original, unique, and that opens a window for social change. In the world of conventional academic knowledge, there is nothing but highly intelligent people trying to build successful careers.

The critique of "undisciplined iconoclasm" is welcome. There is never enough improvement when there is so much to gain.

I think many people like myself once detected something nearly divine in effective altruism's emphasis on sacrificing personal consumption to help the world's poorest people, not for any kind of recognition or external reward, but just to do it. That is an act of basically selfless love.

And "love" is a real phenomenon, a part of human behavior, that deserves analysis and understanding. It is not ornamental, nor a vague idealistic reference, nor a "reductio ad absurdum".

In the world of conventional academic knowledge, there is nothing but highly intelligent people trying to build successful careers.

This is a pretty weird thing to say. You understand that "academic knowledge" encompasses basically all of science, right? I know plenty of academics, and I can't think of anyone I know IRL who is not committed to truthseeking, often with significantly more rigor than is found in effective altruism. 

You understand that "academic knowledge" encompasses basically all of science, right? 

Obviously, I was not referring to the empirical sciences, but, as is clear from the context, to the social sciences, which have a certain capacity to influence moral culture.

You have the impression that the work of academic professionals is rigorously focused on the truth. I think that there are some self-evident truths about social progress that are not currently being addressed in academia. 

I don't think that EA is a complete ideology today, but its foundation is based on a great novelty: conceiving social change from a trait of human behavior (altruism).

I'm not sure I'm able to follow anything you're trying to say. I find your comments quite confusing.

I don't agree with your opinion that academia is nothing but careerism and, presumably, that effective altruism is something more than that. I would say effective altruism and academia are roughly equally careerist and roughly equally idealistic. I also don't agree that effective altruism is more epistemically virtuous than academia, or more capable of promoting social change, or anything like that.
