I sometimes hear that someone doesn’t want to post an idea because they feel like they would have to write a whole post, and it would need to be long, complex, and fully fleshed out. 

I think short and simple Forum posts are fine — in fact, it’s often better for a post to be significantly shorter. (You could also consider making it a Quick take.) 

Comments (27)



This might not be a valid concern, but I wonder whether, as the number of Forum users grows, there will be so many posts that most of them can only stay on the front page for a very short time. Most posts would then slip under the radar and get very little attention (at least compared to now). This may put people off engaging, although I guess you'd then settle at some sort of equilibrium.

Lots of very short posts could exacerbate this concern. Maybe the Forum has to adapt as it grows. Various sub-groups, like Reddit has, could help more posts get attention from the people who are interested in them.

I agree. While I appreciate the push to lower the barriers to posting for those who feel intimidated, the flipside of this is that it's pretty demotivating when a post that reflects five months and hundreds of hours of work is on the front page for less than a day. I feel like there's something wrong with the system when I can spend five minutes putting together a linkpost instead and earn a greater level of engagement.

Yeah, I feel the same way, and I wonder if there's a good fix for that. Given the current setup, long effortposts are usually only of interest to a small % of people, so they don't get as many upvotes.

But as long as a large fraction of that small % of people sees the post, this isn't a big problem, no? I imagine this is true, for example, for EAs interested in improving institutions and the landscape analysis of institutional improvements.

I agree with this concern, and that splitting is a possibility. But in the meantime, given current traffic, it could be worth considering making the frontpage a little denser, to fit like 50% more posts...

The tags feature can be good for this. I've negatively weighted some tags so that I only see the very top posts on those topics, and positively weighted other tags so I'll see posts on those topics for longer.

I think this discussion will become important in the future. On the one hand, I struggle a little to notice every post that is interesting to me. On the other hand, there is the danger that the EA movement starts to fragment if the Forum is split: longtermists would read only longtermist material, people interested in animal suffering would read only posts on animal advocacy, and so on.

Shortforms may sometimes be a good idea, but it's important that people recognize that shortforms get far fewer views than normal posts (or at least that's what I've heard and sensed, and I can say that my own engagement with shortforms is far lower).

Shortforms are useful for when you don't want the large audience. You might be writing especially quickly and might be debating whether to post at all.

Something I do sometimes is write a shortform and then link it to people I know. That way I've written something publicly, but I still get the feedback I'm interested in.

Yes, I think it can be good for people to comment on shortforms encouraging people to make a top level post if they think it's worth it (as with this one of mine). But obviously this does require people seeing the shortform first.

(You and others can also add nuance in the comment section!)

For instance, I can note that this post was largely prompted by a conversation with Mojmir.  (Thank you!)

I like when writing advice is self-demonstrating.

Yep, it would have been even funnier if the post content were just ".", but perhaps that wouldn't have done much to convince people that short posts are ok. xD

I suppose Shortform posts could be treated like EA Twitter?

I like this idea.

But if I don't bury people with words, how will they know I'm smarter than them?
;-)

I know it's a joke, but if you want to build status, short posts are much better than long posts.

Which is more impressive: the millionth 200-page dissertation published this year, or John Nash's 10-page dissertation?

Which is more impressive: the latest complicated math paper, or Conway & Soifer's two-word paper?

Would it help if there were some kind of commonly understood shorthand for saying "I am writing this post in a shortened format and thus recognize there are many missing caveats and examples, but I may continue to expand on it in future updates, and if you would like me to address or clarify anything, feel free to leave a comment… [etc. etc.]"? At the very least, there have been times when I have wished I could give a disclaimer like that. Of course, someone can just say all of that, or they might just say something like "take this with a grain of salt" (although that phrase doesn't convey the full message/meaning).

I have considered just writing such a caveat list as a shortform and linking to it, although part of me would like it to be easily and widely (within the community) understandable in a few words, similar to saying "Epistemic status: speculative." (Then again, I think that in many cases my communicative discomfort has been unjustified/irrational, so the main value of such a disclaimer might be setting my mind at ease and providing a CYA in the slight chance it becomes relevant. In that case, simply linking to such a shortform would probably be fine.)

I like this! Sharing things that are in "working draft" form or something. I like the idea of someone having a half-baked theory, sharing it, and then developing it as the comments evolve.

It doesn't seem like the standard blog format is suited to this, though.

I've just cross-posted Elizabeth's post on "Butterfly Ideas," which I really like and which I think discusses related topics: 

"Sometimes talking with my friends is like intellectual combat, which is great. I am glad I have such strong cognitive warriors on my side. But not all ideas are ready for intellectual combat. If I don’t get my friend on board with this, some of them will crush an idea before it gets a chance to develop, which feels awful and can kill off promising avenues of investigation."

I like that post a lot! The people I tend to share early-stage ideas with are the ones who try to make them better / understand them more, or something.

Written hastily; please comment if you'd like further elaboration

Comment be short = good too
