Epistemic status: around that of Descartes' (low)

I am not a native English speaker. Despite that, I've held my English skills in high regard for most of my life. English was the language of my university studies. Although I still make plenty of mistakes, I want to assure you I am capable of reading academic texts.

That being said: a whole lot of posts and comments here read like academic texts. As the most basic, heuristic check, I found a tool that measures linguistic complexity, here: https://textinspector.com/ - so you can play with it yourself if you'd like. Now, I realize that AI Safety is a complicated, professional topic with a lot of jargon. So let's take a discussion that, I believe, should be especially welcoming to non-professionals: https://forum.effectivealtruism.org/posts/kuqgJDPF6nfscSZsZ/thread-for-discussing-bostrom-s-email-and-apology

I could build some Python project, analyse the linguistic complexity of a whole range of posts, and produce graphs - it sure would be fun and much more rigorous - but I am a lazy person and I just want to show you the idea. I mean to sound extremely simple when I say the following.

There's a whole lot of syllables right there.
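For anyone who wants the lazy version of that Python project, here's a minimal sketch of the kind of check such tools run. It uses the standard Flesch Reading Ease formula; the syllable counter is a crude heuristic of my own, not what Text Inspector actually does.

```python
import re

def count_syllables(word: str) -> int:
    """Approximate syllables as runs of vowels (crude but serviceable)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Higher = easier. 60-70 is plain English; below 30 reads as academic."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    n_words = max(1, len(words))
    return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)

print(flesch_reading_ease("The cat sat on the mat."))  # short and easy: high score
print(flesch_reading_ease(
    "Linguistic preferences seem to be impacted by a tendency "
    "to overly intellectualize."))  # a whole lot of syllables: much lower score
```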

Most of the comments here do feel like academic papers. Reading them is a really taxing exercise. In fact, I usually just stray from it. Whether it's my shit attention span or the fact that most people worldwide are not proficient English speakers, it is my firm belief that ideas should be communicated in an understandable manner when possible. That is, most people should be able to understand them. If you want to increase diversity and be more inclusive, well, I think that's one really good way of attempting it.

This is also the reason for the exact title of the post, rather than "Linguistic preferences of some effective altruists seem to be impacted by a tendency to overly intellectualize."

Comments (28)

I wanted to push back on this because most commenters seem to agree with you. I disagree that the writing style on the EA forum, on the whole, is bad. Of course, some people here are not the best writers and their writing isn't always that easy to parse. Some would definitely benefit from trying to make their writing easier to understand.

For context, I'm also a non-native English speaker and during high school, my performance in English (and other languages) was fairly mediocre.

But as a whole, I think there are few posts and comments that are overly complex. In fact, I personally really like the nuanced writing style of most content on the EA forum. Also, criticizing the tendency to "overly intellectualize" seems a bit dangerous to me. I'm afraid that if you go down this route you shut down discussions on complex issues and risk creating a more Twitter-like culture of shoehorning complex topics into simplistic tidbits. I'm sure this is not what you want but I worry that this will be an unintended side effect. (FWIW, in the example thread you give, no comment seemed overly complex to me.)

Of course, in the end, this is just my impression and different people have different preferences. It's probably not possible to satisfy everyone. 

I'm going to push back against this a very slight amount. It is good to write a thing as simply as possible while saying exactly what it's meant to say in exactly the way it's meant to be said - but not to write a thing more simply than that. 

I agree and will use this opportunity to re-share some tips for increasing readability. I used to manage teams of writers/editors and here are some ideas we found useful:

To remove fluff, imagine someone is paying you $1,000 for every word you remove. Our writers typically could cut 20-50% with minimal loss of information.

Long sentences are hard to read, so try to change your commas into periods. 

Long paragraphs are hard to read, so try to break each paragraph into 2-3 sentences.

Most people just skim, and some of your ideas are much more important than others, so bold/italicize your important points.

This post has some additional helpful tips, in particular having a summary/putting key points up front.

This doesn't solve the problem OP complained of - that writers use unnecessarily complicated phrases and long jargon words to describe simple ideas.

Agreed that it doesn't solve that specific problem, but it serves the same end goal: making things easier for the reader.

I agree that academic language should be avoided in both forums and research papers.

It might be a good idea for forum writers to use a tool like ChatGPT to make their posts more readable before posting them. For example, they can ask ChatGPT to "improve the readability" of their text. This way, writers don't have to change their writing style too much and can avoid feeling uncomfortable while writing. Plus, it saves time by not having to go back and edit clunky sentences. Additionally, by asking ChatGPT to include more slang or colloquial language, the tool can better match the writer's preferred style. (Written with the aid of ChatGPT in exactly the way I proposed. :p)
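In case anyone wants to script this instead of pasting into the chat window, here's a minimal sketch using OpenAI's Python client. The model name and prompt wording are my own assumptions, not a recommendation:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def improve_readability(draft: str) -> str:
    """Ask the model to simplify a draft before posting it."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder: any chat model should work
        messages=[
            {"role": "system",
             "content": ("Improve the readability of the user's text. "
                         "Keep the meaning; prefer short sentences and plain words.")},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

print(improve_readability(
    "Linguistic preferences of some effective altruists seem to be "
    "impacted by a tendency to overly intellectualize."))
```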

From my playing with it, ChatGPT uses complex language even when told not to. In Notion, there's an AI assistant (GPT-3 based) with a "simplify writing" feature. The outputs were still pretty verbose, with overly long sentences. Soon though, sure!

Most output I've seen from ChatGPT has been horrendously verbose.

As far as I can recall, my paragraphs are usually about half as long when I ask ChatGPT to simplify.

That said, I tend to write in an academic style.

+1 for using ChatGPT. I've also been using this. 

Similarly, I hope that GPT could later be used to customize text to whatever background the reader has, on demand.

Jargon is great for some people but terrible for others.

I dunno, encouraging people to use an AI tool rather than improve their writing seems a bit like a parent encouraging their child to just keep using training wheels, because it's easier.

Sure, if your goal is to be a good writer! But, I'm not worried about that. I just want people to understand me.

  1. I don't see how encouraging people to use AI tools really means discouraging them from trying to improve their writing.
  2. There are many cases where I find AI tools help me become a better writer. It can be like having a personalized tutor.

I disagree about 1. About 2, I agree but that doesn't seem to me to be what Jonas is aiming for.

I agree that (2) wasn't Jonas's aim. 

 Michał -- thanks for this reality check. 

If EA wants to be genuinely, globally inclusive, we need to remember that many of our members learned English as a second language, and that it's important for us all to write as clearly as possible. 

According to sources like this, about 400 million people worldwide are native English speakers, but over 1.2 billion have learned to read English as a second language.  So that's about a 3:1 ratio of non-native to native speakers. This is worth bearing in mind when native-speaking people (like me) are writing on EA forum, and potentially being read by many non-native speakers.

It's also important to rein in our natural tendency to IQ-signal by displaying our vocabulary size, capacity for complex grammar, and subtlety of verbal reasoning. These can make us sound smart to people with similar levels of English fluency and domain expertise, but they inhibit our ability to communicate with wider audiences.

Well said, though I think your comment could use that advice :) Specific phrases/words I noticed: rein in, tendency, bearing in mind, inhibit, subtlety, IQ-signal (?).

I'm non-native and I do know these words, but I'm mostly at native level at this point (I've spent half my life in an English-speaking country). I think many non-native speakers won't be as familiar.

Ariel -- Fair point! I agree. My post was intended to be subtly self-satirizing, but I should have made that clearer.

Ah right, I had that thought but wasn't sure, makes sense!

All of the following are virtues in writing:

  1. Clarity
  2. Precision
  3. Accessibility

I think the EA forum writing tends to do okay on 1, well on 2, and okay-to-bad on 3. 

Obviously being better at all of them simultaneously is the best outcome, but sometimes there's a tradeoff. Personally, I think clarity and precision are more important than accessibility. That doesn't mean we shouldn't try to make our writing more accessible (I endorse Emerson Spartz's list of tips), but I think it is just more important to be clear and precise, and we should be clear about that and happy that we're doing well at those things. And therefore I don't think the writing style here is bad, although it could be improved.

(Or, in the maxim I was taught: "When looking at your writing, ask: 'Is it clear? Is it true? Is it necessary?'")

I feel like, if we write here to communicate, accessibility is pretty important, maybe more important than the other two (or at least, not clearly less important than them). Why do you think otherwise?

Sometimes it's more important to convey something with high fidelity to few people than it'd be to convey an oversimplified version to many. 

That's the reason why we bother having a forum at all - despite the average American reading at an eighth-grade level - rather than standing on street corners shouting at the passers-by.

Generally disagree with this. Overall, I think the EA forum norms are fairly good in terms of writing style and quality, but I might even be inclined to push in the other direction. 

After being bombarded with modern American writing advice since university, I've recently become disillusioned with the simplifying, homogenising trend of internationalised English, in favour of a language that borrows from the best of our linguistic traditions.

I find that the short-sentence, short-word, bullet point style of writing encourages you to skim, while more flowing and elegant language forces the reader to read aloud, and to follow the cadences of the speaker, which promotes a very different state of mind for reading and absorbing information. 

To quote from the opening passage of Chapter 2 of Utilitarianism by JS Mill:

“A being of higher faculties requires more to make him happy, is capable probably of more acute suffering, and certainly accessible to it at more points, than one of an inferior type; but in spite of these liabilities, he can never really wish to sink into what he feels to be a lower grade of existence. We may give what explanation we please of this unwillingness; we may attribute it to pride, a name which is given indiscriminately to some of the most and to some of the least estimable feelings of which mankind are capable; we may refer it to the love of liberty and personal independence, as appeal to which was with the Stoics one of the most effective means for the inculcation of it; to the love of power or to the love of excitement, both of which do really enter into and contribute to it; but its most appropriate appellation is a sense of dignity, which all human beings possess in one form or other, and in some, though by no means in exact, proportion to their higher faculties, and which is so essential a part of the happiness of those in whom it is strong that nothing which conflicts with it could be otherwise than momentarily an object of desire to them.”

Utterly impossible to skim, and what a joy to read! 

Just to give you a data point from a non-native speaker who likes literature and languages: this quote wasn't a joy to read for me, since it would have taken me a very long time to understand what it is about if I had not known the context. So I am not sure what you mean by the best linguistic traditions – I think simple language can be elegant too.

It is a more joyful sentence in the context, admittedly.

Simple language can be elegant, of course, and there are excellent writers with a range of different styles and levels of simplicity. I wouldn't dream of saying that everyone should be striving for 200-word sentences, nor that we should be imitating Victorian-era philosophy, but I do think that the trends of relentless simplifying and trimming that editors and style guides foist upon budding writers have diminished the English language.

> I find that the short-sentence, short-word, bullet point style of writing encourages you to skim, while more flowing and elegant language forces the reader to read aloud, and to follow the cadences of the speaker, which promotes a very different state of mind for reading and absorbing information.

But... the most common and advocated style here is exactly skimmable bullet points, while prose is often frowned upon. And the only richness of language in use is jargon. This is the opposite of what you say you want.

Also, like Ada-Maaria, I found the long quote hard to read as a non-native, and I skipped it. That's not to say that I think communication should be confined to short sentences and simplified language. Just that thought has to be put into clarity and accessibility as well.

Strongly upvoted
