It is standard form in EA to state one's welcomingness of feedback, in both a personal and professional capacity. Individuals and organisations alike offer many channels through which one can deliver feedback, whether anonymous forms or direct communication, and forum posts will often begin or end with:

"I'm open to feedback..." 
"I'm looking for feedback of the following nature..." 
"I'm very full because I ate feedback for breakfast, but there's always room for more..." 
And so on.

I'm now wondering: what happens if you write, "I am not open to feedback"? Literally, is that even allowed? I've never seen it done. I'm concerned to see such homogeneous thinking on the topic, and I find it alarming that a community which espouses openness would be so closed off to non-openness.

How is it that not a single person in this intellectual, professional, personal community, or rather, in the sphere of this idea or philosophy, or whatever EA is... Sorry, that sentence got too long, let me try again:

How is it possible that not a single person in EA holds a feedback-resistant worldview?

I fear – and now highly suspect – that stating a refusal to receive feedback would lead to an instant forum ban and possibly further ostracisation. I am not curious to hear from the forum team or the moderators; I intend to hold this suspicion closely and indefinitely. I do not have an anonymous feedback form, and I will be employing strong downvotes if I even catch a whiff of something that vaguely gestures in the direction of feedback, based solely on my personal conception of what feedback is.

I encourage downvotes of this post and disagree reacts, as I would then feel more confident that everyone is similarly closed off to alternate views. Although, those could also be interpreted as disagreement with my very premise, which feels a lot like feedback. Understandably, I'm still working through the details (and I do not welcome outside perspective).

In any case, I would like to formally state my categorical refusal to ever again receive feedback of any form. I expect this to extend to my professional work (I've gone ahead and deleted the 1-on-1 document between myself and my manager. I'm sure he'll understand. And even if he doesn't, I won't know).

Please confirm whether you can see this post.

I didn't read the post, so this isn't feedback. I just wanted to share my related take that I only want feedback if it's positive, and otherwise people should keep their moronic opinions to themselves. 

I think this is very brave. 

I didn't read your comment either; it just randomly occurred to me that I should change my "anonymous feedback form" to "positive feedback form" and maybe add an extra "negative feedback form" that won't forward submissions to my email.

This got me thinking:

|             | no name        | name   |
|-------------|----------------|--------|
| feedback    | anonymous form | normal |
| no feedback | shut up        | ???    |

Have you considered making a form where people can submit their names and nothing else?

This is a really good idea actually, but I have to be fundamentally opposed to this comment, sorry :( 

I will not be upvoting, downvoting, agree-voting, disagree-voting, or reacting to this piece, and I will not be leaving any comment except to say that I have no comment.

My lack of feedback should NOT be construed as an endorsement of your anti-feedback position.

Thank you, I have no reply. 

Sad to see such a cult-like homogeneity of views. I blame Eliezer. 

Typical anti-feedback-doomers making everyone scared to plug their ears, where does it end?

No, I can't see it. Do better.

Okay, Claude says, "telling someone 'Do better' could technically be considered feedback, but it's extremely limited and not very constructive," which makes it feel like not-quite-feedback. To your first point, I fear I've been shadow-banned by the forum for speaking out :(

Don't do better. Is that better?

Positive feedback: Great post!

Negative feedback: By taking any public actions you make it easier for people to give you feedback, a major tactical error (case in point)

Hey Neel! This reply upset me so much that I'm now planning to make AGI and actively oppose AI safety :) Hope it was worth it!
