
tl;dr: It actually seems pretty rare for people to care about the general good as such (i.e., optimizing cause-agnostic impartial well-being), as we can see by prejudged dismissals of EA concern for non-standard beneficiaries and for doing good via indirect means.

Introduction

Moral truisms may still be widely ignored. The moral truism underlying Effective Altruism is that we have strong reasons to do more good, and it’s worth adopting the efficient promotion of the impartial good among one’s life projects. (One can do this in a “non-totalizing” way, i.e. without it being one’s only project.) Anyone who personally adopts that project (to any non-trivial extent) counts, in my book, as an effective altruist (whatever their opinion of the EA movement and its institutions).

Many people don’t adopt this explicit goal as a personal priority to any degree, but still do significant good via more particular commitments (to more specific communities, causes, or individuals). That’s fine by me, but I do think that even people who aren’t themselves effective altruists should recognize the EA project as a good one. We should all generally want people to be more motivated by efficient impartial beneficence (on the margins), even if you don’t think it’s the only thing that matters.

A popular (but silly) criticism of effective altruism is that it is entirely vacuous. As Freddie deBoer writes:

[T]his sounds like so obvious and general a project that it can hardly denote a specific philosophy or project at all… [T]his is an utterly banal set of goals that are shared by literally everyone who sincerely tries to act charitably.

This is clearly false. As Bentham’s Bulldog replies, most people pay lip service to doing good effectively. But then they go and donate to local children’s hospitals and puppy shelters, while showing no interest in learning about neglected tropical diseases or improving factory-farmed animal welfare. DeBoer himself dismisses without argument “weird” concerns about shrimp welfare and existential risk reduction, which one very clearly cannot just dismiss as a priori irrelevant if one actually cares about promoting the impartial good. Actually caring about promoting the impartial good entails a very unusual degree of open-mindedness.

The fact is: open-minded, cause-agnostic concern for promoting the impartial good is vanishingly rare. As a result, the few people who sincerely have and act upon this concern end up striking everyone else as extremely weird. We all know that the way you’re supposed to behave is to be a good ally to your social group, do normal socially-approved things that signal conformity and loyalty (and perhaps a non-threatening degree of generosity towards socially-approved recipients). “Literally everyone” does this much, I guess. But what sort of weirdo starts looking into numbers, and argues on that basis that chickens are a higher priority than puppies? Horrible utilitarian nerds, that’s who! Or so the normie social defense mechanism would have it (never mind that efficient impartial beneficence is not exclusively utilitarian, and ought rather to be a significant component of any reasonable moral view).

Let’s be honest

Everyone is motivated to rationalize what they’re antecedently inclined to do. I know I do plenty of suboptimal things, due to both (i) failing to care as much as would be objectively warranted about many things (from non-cute animals to distant people), and (ii) being akratic and failing to be sufficiently moved even by things I value, like my own health and well-being. But I try to be honest about it, and recognize that (like everyone) I’m just irrational in a lot of ways, and that’s OK, even if it isn’t ideal.

Vegans care more about animals than I do, and that’s clearly to their credit. I try to compensate through some high-impact donations, and I think that’s also good (and better than going vegan without the donations). I encourage others to do likewise.

We all have various “rooted” concerns, linked to particular communities, individuals, or causes to which we have a social or emotional connection. That’s all good. Those motivations are an appropriate response to real goods in the world. But we all know there are lots of other goods in the world that we don’t so easily or naturally perceive, and that could plausibly outweigh the goods that are more personally salient to us. The really distinctive thing about effective altruism is that it seriously attempts to take all those neglected interests into account. As I wrote in level-up impartiality:

[Imagine] taking all the warmth and wonder and richness that you’re aware of in your personal life, and imaginatively projecting it into the shadows of strangers.

We glimpse but a glimmer of the world’s true value. It’s enough to turn our heads, and rightly so. If we could but see all that’s glimpsed by various others, in all its richness, depth, and importance, we would better understand what’s truly warranted. But even from our limited personal perspectives, we may at least come to understand that there is such value in everyone, even if we cannot always grasp it directly. And if we strive to let that knowledge guide our most important choices, our actions will be more in line with the reasons that exist—reasons we know we would endorse, if only we could see them as clearly as we do the ones in our more personal vicinity.

And yes, from the outside this may look like being moved by drab shadows rather than the vibrant values we grasp closer to home. But of course it isn’t really the shadows that move us, but the promise of the person beneath: a person every bit as complex, vulnerable, and vibrant as those you know and love.

Such impartiality involves a very distinctive—you might even say weird—moral perspective. I think it should be generally recognized as a good and admirable one, but I don’t see how anyone could honestly think that it’s commonplace. Few people who give to charity make any serious effort to do the most good they can with the donation. Few people who engage in political activism are seriously trying to do the most good they can with their activism. Few people pursuing an “ethical career” are trying to do the most good they can with their career. And that’s all fine—plenty of good can still be done from more partial and less optimizing motives (and even EAs only pursue the EA project in part of their life). But the claim that the moral perspective underlying EA is “trivial” or already “shared by literally everyone” is clearly false.[1]

I wonder if part of the resistance to EA may stem from people not wanting to admit that they actually aren’t much motivated by a cause-agnostic concern for the general good. Maybe it sounds like an embarrassing thing to admit, because surely the general good is a worthy thing to aim at!

Maybe it would help to make the implications more explicit. To have a cause-agnostic concern for the impartial good, you have to be open to the possibility that shrimp welfare might matter more than saving a human life. Most people probably don’t want to be open to that possibility. Maybe what they really want is more speciesist, like helping humans effectively. Further, maybe they don’t want to be open to the possibility that a 10% chance of saving a million lives is better than saving 1000 for certain (or that even the tiniest probabilities, if sufficiently well-grounded, could take priority over sure things). So maybe they really just want to do something like make a near-certain positive difference to human well-being. That’s a fine goal too, and maybe a sufficient basis to make use of some effective altruist resources like GiveWell. But again, it’s very different from caring about impartial value as such.[2]
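To spell out the expected-value arithmetic behind that example, here is a minimal sketch using only the numbers from the paragraph above (it deliberately sets aside risk-aversion, which footnote 2 returns to):

```python
# Expected lives saved, comparing the risky option to the sure thing.
p_success = 0.10              # 10% chance the risky intervention works
lives_if_success = 1_000_000  # lives saved if it works
lives_for_certain = 1_000     # lives saved by the safe option

ev_risky = p_success * lives_if_success  # ~100,000 expected lives
ev_sure = 1.0 * lives_for_certain        # 1,000 expected lives

# The gamble has roughly 100x the expected value of the sure thing,
# which is why a cause-agnostic reasoner can't dismiss it out of hand.
print(ev_risky, ev_sure)
```

On straight expected-value terms the gamble wins by a factor of about a hundred, so rejecting it requires an explicit commitment to risk-aversion, not just a vibe.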

I don’t think anyone needs to be embarrassed about having narrower concerns along these lines. Lots of them are really good! But I do think a broader concern is better—just like saving distant kids from dying of malaria is better than providing local kids with more cultural opportunities (even though the latter is also good!). So I always like to encourage folks to question whether their current moral efforts are well-focused, and consider shifting their attention if they realize that they could do more good elsewhere. I think it’s especially worth noting that our collective moral attention often gets caught on relatively less-important (but highly divisive) issues, so it’s worth trying not to get sucked into that.

These recommendations flow from taking seriously the ideas of effective altruism, regardless of what you think of the actually-existing EA movement and institutions. If you aren’t regularly prompted to think about whether your moral efforts are optimally allocated, then you should probably admit that the effective altruist project is actually pretty distinctive.

OK, but what about the actual movement/institutions?

I’m also a fan of those: I know the GWWC 10% pledge has helped motivate me to do far more good than I otherwise would have.

See Scott Alexander’s two-part defense of the EA movement as (i) having done a lot of actual good, and (ii) providing the social scaffolding to encourage people to actually put their beneficent motivations into practice.

For the best critical piece that I’m aware of, see Benjamin Ross Hoffman’s Effective Altruism is Self-Recommending. It’s a totally fair worry that it’s hard to assess whether the more speculative branches of EA are actually effective at achieving their goals or not. I’m generally pretty trusting of these folks, but have no beef with those who judge things differently.[3] I’d welcome a more diverse ecosystem of people seriously trying to optimize promotion of the impartial good via a range of different (but reasonable and rights-respecting) approaches.

Serious Evaluation Goes Beyond Vibes

Still, one thing I really want to stress is that any kind of serious concern for the impartial good is going to look very different from most people’s default moral behaviour.

It should make you think that donating openly is better than doing so quietly and anonymously. It should make you think that earning to give could very well be an excellent idea (and discouraging it is morally very risky). It should make you open to all sorts of possibilities for doing good via unconventional means—maybe even buying castles!—so long as it can be supported via a reasonable “pathway to impact”.

Most people don’t do this. They just assess things based on vibes. “Earning to give → bankers → bad.” If pressed, they can offer a post hoc rationalization to support their verdict. But their verdict wasn’t really based on the reasoning, since the reasoning is clearly contestable whereas their confidence in their verdict is absolute. Or they’ll complain that pandemic prevention funding is going to European scientists (in contrast to global health funding going directly to Africa), as though the scientists, rather than the pandemic-vulnerable global population, were the intended beneficiaries of the funding. Just a total failure to even consider indirect benefits.

I find this very frustrating. It would be great to have more people seriously engage with the project of effective altruism, and critically assess where current EAs might be going wrong. But most of the actually-existing critics don’t seem to be thinking seriously about these issues at all.

As a rule of thumb, I’d encourage the vibes-based critics to consider the possibility that people who have spent a significant chunk of their professional lives thinking carefully about how to do the most good might have some insights that a total neophyte who hasn’t even thought about the matter for five minutes is missing. So many Twitter critics confidently repeat some utterly conventional thought as though the mere fact of going against the conventional wisdom is evidence that EAs are nuts. But society’s conventional wisdom is often wrong (or at least not aligned with truly beneficent goals), so you need to actually look more closely into the issue to work out whether a criticism is warranted or not.

By all means, you absolutely should share your thoughts if you think you’ve hit upon an important insight that might have been missed by people who have spent a significant chunk of their professional lives thinking carefully about how to do the most good. Even professionals make mistakes! But it’s worth bearing in mind that neophytes are even more likely to make mistakes, so temper your confidence accordingly. Most of the people “dunking” on EA as “obviously” failing to do good effectively are being ridiculously overconfident in their parroting of conventional wisdom. By and large, conventional wisdom around altruism doesn’t reliably indicate how to do good effectively, but just how to signal generosity in a socially-approved way, and these are very different things. We should fully expect them to come apart, and for truly effective altruism to seem “weird” and “unappealing” to many.

  1. ^

    Indeed, it’s so extremely false that many people apparently don’t even realize that it is false at all, because they seemingly cannot even imagine what it would be like to care about the impartial good per se, and instead assimilate this to ordinary concern for any particular goods whatsoever. If so many people struggle to even conceive of the EA mindset, it clearly isn’t trivial!

  2. ^

    That said, there are probably many departures from strict “maximizing impartial expected value” that could still count as close enough for practical purposes. One could add prioritarian weighting for the worst-off, or some modest degree of risk-aversion or ambiguity-aversion, etc. So I don’t mean to be making any strict proclamations here about precisely where to draw the line to qualify as having some concern for “cause-agnostic impartial value” per se. My point is just that anything in this remote vicinity is pretty radically different from what most people are actually concerned with.
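    As a purely illustrative sketch of such departures (the square-root weighting below is one conventional concave choice, not anything this footnote commits to, and "lives saved" stands in crudely for well-being):

```python
import math

# Prospects as (probability, value) pairs.
gamble = [(0.10, 1_000_000), (0.90, 0)]
sure_thing = [(1.0, 1_000)]

def expected_value(prospect):
    # Strict "maximizing impartial expected value".
    return sum(p * v for p, v in prospect)

def concave_weighted_value(prospect):
    # A modest risk-averse / prioritarian-flavoured variant:
    # a concave transform gives diminishing weight to huge payoffs.
    return sum(p * math.sqrt(v) for p, v in prospect)

print(expected_value(gamble), expected_value(sure_thing))                  # ~100000 vs 1000
print(concave_weighted_value(gamble), concave_weighted_value(sure_thing))  # ~100 vs ~31.6
```

    Note that even this risk-averse variant still favours the gamble in the example from the main text, which illustrates the point: many rules in this remote vicinity remain radically different from what most people are actually concerned with.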

  3. ^

    That said, anyone who thinks it’s obvious that any of the actually-existing branches of EA is “terrible” probably isn’t open to cause-agnostic value-promotion, since there’s a strong prima facie case to be made for all the main EA cause areas. You can certainly come to different verdicts at the end of the day, but if you think mainstream EAs are obviously mistaken in their priorities then I think there’s a good chance that you’re just being closed-minded in pre-judging the matter.

Comments

From an evolution / selfish gene's perspective, the reason I or any human has morality is so we can win (or at least not lose) our local virtue/status game. Given this, it actually seems pretty wild that anyone (or more than a handful of outliers) tries to be impartial. (I don't have a good explanation of how this came about. I guess it has something to do with philosophy, which I also don't understand the nature of.)

BTW, I wonder if EAs should take the status game view of morality more seriously, e.g., when thinking about how to expand the social movement, and predicting the future course of EA itself.

What might EAs taking the status game view more seriously look like, more concretely? I'm a bit confused since from my outside-ish perspective it seems the usual markers of high status are already all there (e.g. institutional affiliation, large funding, [speculatively] OP's CJR work, etc), so I'm not sure what doing more on the margin might look like. Alternatively I may just be misunderstanding what you have in mind. 

One, be more skeptical when someone says they are committed to impartially do the most good, and keep in mind that even if they're totally sincere, that commitment may well not hold when their local status game changes, or if their status gradient starts diverging from actual effective altruism. Two, form a more explicit and detailed model of how status considerations + philosophy + other relevant factors drive the course of EA and other social/ethical movements, test this model empirically, basically do science on this and use it to make predictions and inform decisions in the future. (Maybe one or both of these could have helped avoid some of the mistakes/backlashes EA has suffered.)

One tricky consideration here is that people don't like to explicitly think about status, because it's generally better for one's status to appear to do everything for its own sake, and any explicit talk about status kind of ruins that appearance. Maybe this can be mitigated somehow, for example by keeping some distance between the people thinking explicitly about status and EA in general. Or maybe, for the long term epistemic health of the planet, we can somehow make it generally high status to reason explicitly about status?

JWS

Hey Wei, I appreciate you responding to Mo, but I found myself still confused after reading this reply. This isn't purely down to you - a lot of LessWrong writing refers to 'status', but they never clearly define what it is or where the evidence and literature for it is.[1] To me, it seems to function as this magic word that can explain anything and everything. The whole concept of 'status' as I've seen it used on LW seems incredibly susceptible to being part of 'just-so' stories.

I'm highly sceptical of this though, like I don't know what a 'status gradient' is and I don't think it exists in the world? Maybe you mean an abstract description of behaviour? But then a 'status gradient' is just describing what happened in a social setting, rather than making scientific predictions. Maybe it's instead a kind of non-reductionist sense of existing and having impact, which I do buy, but then things like 'ideas', 'values', and 'beliefs' should also exist in this non-reductionist way and be as important for considering human action as 'status' is.

It also tends to lead to using explanations like this:

One tricky consideration here is that people don't like to explicitly think about status, because it's generally better for one's status to appear to do everything for its own sake

Which to me is dangerously close to saying "if someone talks about status, it's evidence it's real. If they don't talk about it, then they're self-deceiving in a Hansonian sense, and this is evidence for status", which sets off a lot of epistemological red flags for me.

  1. ^

    In fact, one of the most cited works about it isn't a piece of anthropology or sociology, but a book about Improv acting???

a lot of LessWrong writing refers to 'status', but they never clearly define what it is or where the evidence and literature for it is

Two citations that come to mind are Geoffrey Miller's Virtue Signaling and Will Storr's The Status Game (maybe also Robin Hanson's book although its contents are not as fresh in my mind), but I agree that it's not very scientific or well studied (unless there's a body of literature on it that I'm unfamiliar with), which is something I'd like to see change.

Maybe it's instead a kind of non-reductionist sense of existing and having impact, which I do buy, but then things like 'ideas', 'values', and 'beliefs' should also exist in this non-reductionist way and be as important for considering human action as 'status' is.

Well sure, I agree with this. I probably wouldn't have made my suggestion if EAs talked about status roughly as much as ideas, values, or beliefs.

Which to me is dangerously close to saying "if someone talks about status, it's evidence it's real. If they don't talk about it, then they're self-deceiving in a Hansonian sense, and this is evidence for status", which sets off a lot of epistemological red flags for me

It seems right that you're wary about this, but on reflection I think the main reason I think status is real is not because people talk or don't talk about it, but because I see human behavior that seems hard to explain without invoking such a concept. For example, why are humans moral but our moralities vary so much across different communities? Why do people sometimes abandon or fail to act according to their beliefs/values without epistemic or philosophical reasons to do so? Why do communities sometimes collectively become very extreme in their beliefs/values, again without apparent epistemic or philosophical justification?

why are humans moral but our moralities vary so much across different communities? Why do people sometimes abandon or fail to act according to their beliefs/values without epistemic or philosophical reasons to do so? Why do communities sometimes collectively become very extreme in their beliefs/values, again without apparent epistemic or philosophical justification?

I think "status" plays some part in the answers to these, but only a fairly small one. 

Why do moralities vary across different communities? Primarily because people are raised in different cultures with different prevalent beliefs. We then modify those beliefs from the baseline as we encounter new ideas and new events, and often end up seeking out other people with shared values to be friends with. But the majority of people aren't just pretending to hold those beliefs to fit in (although that does happen); the majority legitimately believe what they say.

Why do communities get extreme? Well, consult the literature on radicalisation: there are a ton of factors. A vivid or horrible event or ongoing trauma sometimes triggers an extreme response. Less radical members of groups might leave, making the average more radical, so even more moderates leave or split, until the group is just radicals.

As to why we fail to act according to our values: people generally have competing values, including self-preservation and instincts, and are not perfectly rational. Sometimes the primal urge to eat a juicy burger overcomes the calculated belief that eating meat is wrong.

These are all amateur takes; a sociologist could probably answer better.

From an evolution / selfish gene's perspective, the reason I or any human has morality is so we can win (or at least not lose) our local virtue/status game

If you're talking about status games at all, then not only have you mostly rounded the full selective landscape off to the organism level, you've also taken a fairly low resolution model of human sociality and held it fixed (when it's properly another part of the phenotype). Approximations like this, if not necessarily these ones in particular, are of course necessary to get anywhere in biology - but that doesn't make them any less approximate.

If you want to talk about the evolution of some complex psychological trait, you need to provide a very clear account of how you're operationalizing it and explain why your model's errors (which definitely exist) aren't large enough to matter in its domain of applicability (which is definitely not everything). I don't think rationalist-folk-evopsych has done this anywhere near thoroughly enough to justify strong claims about "the" reason moral beliefs exist.

I agree that was too strong or oversimplified. Do you think there are other evolutionary perspectives from which impartiality is less surprising?

I don't think it's possible to give an evolutionary account of impartiality in isolation, any more than you can give one for algebraic geometry or christology or writing or common-practice tonality. The underlying capabilities (e.g. intelligence, behavioral plasticity, language) are biological, but the particular way in which they end up expressed is not. We might find a thermodynamic explanation of the origin of self-replicating molecules, but a thermodynamic explanation of the reproductive cycle of ferns isn't going to fit in a human brain. You have to move to a higher level of organization to say anything intelligible. Reason, similarly, is likely the sort of thing that admits a good evolutionary explanation, but individual instances of reasoning can only really be explained in psychological terms.

It seems like you're basically saying "evolution gave us reason, which some of us used to arrive at impartiality", which doesn't seem very different from my thinking which I alluded to in my opening comment (except that I used "philosophy" instead of "reason"). Does that seem fair, or am I rounding you off too much, or otherwise missing your point?

Yes and no: "evolution gave us reason" is the same sort of coarse approximation as "evolution gave us the ability and desire to compete in status games". What we really have is a sui generis thing which can, in the right environment, approximate ideal reasoning or Machiavellian status-seeking or coalition-building or utility maximization or whatever social theory of everything you want to posit, but which most of the time is trying to split the difference. 

People support impartial benevolence because they think they have good pragmatic reasons to do so and they think it's correct and it has an acceptable level of status in their cultural environment and it makes them feel good and it serves as a signal of their willingness to cooperate and and and and. Of course the exact weights vary, and it's pretty rare that every relevant reason for belief is pointing exactly the same way simultaneously, but we're all responding to a complex mix of reasons. Trying to figure out exactly what that mix is for one person in one situation is difficult. Trying to do the same thing for everyone all at once in general is impossible. 

Do you have research underpinning these statements? You are an expert in the field of behavior, so I would be interested in anything that can back this up. I would also be interested to know whether anything like this is echoed in various EA-related surveys.

Few people who give to charity make any serious effort to do the most good they can with the donation. Few people who engage in political activism are seriously trying to do the most good they can with their activism. Few people pursuing an “ethical career” are trying to do the most good they can with their career.

The reason I am asking is that this is counter to my own experience with non-EA altruists. And I think if you are wrong, there might be hope for growing the EA movement much larger as people already agree with us - we then "just" need to show them that we have also been thinking about this and might have a few research outputs they might want to look at before making a donation or career move.

A resource I keep coming back to is the 2010 Money for Good study from Hope Consulting. They found that only about 3% of people donate based on organizations' relative performance (see slide 41).

At the time that study came out, I figured the best thing for EA was to lean into that 3%. Is that still true? As the movement has grown, I'm not really sure.

See also 'How Donors Choose Charities' (Breeze, 2013), where even unusually engaged donors are explicit about basing their donations on personal preference and often donating quite haphazardly, with little deliberation.

See also 'Impediments to Effective Altruism' (Berman et al, 2018 [full paper]), where people endorsed making charitable decisions based on subjective preferences and often did not elect to donate to the most effective charities, even when this information was available.

See also this review by Caviola et al (2021).

I've only skimmed this article, but Coupet and Schehl (2021) also claim that "Much of the nonprofit performance theory suggests that donors are unlikely to base donation decisions on nonprofit production".

Oh, I'm not a social scientist. It's just an inference to the best explanation in response to commonly observed behaviour, e.g. all those who "go and donate to local children’s hospitals and puppy shelters, while showing no interest in learning about neglected tropical diseases or improving factory-farmed animal welfare."

That said, just because the EA project is (currently) unusual doesn't mean that we can't hope that that might change!  Sometimes people initially fail to pursue a goal simply because it hasn't even occurred to them, or they haven't thought about it in the right way to see why it's actually pretty appealing.  So introducing the ideas, and making clear their intrinsic appeal, could still potentially sway many people who didn't previously have the EA project among their goals.

I found your TLDR very confusing. I am pretty confident you turned off a substantial number of people (maybe up to 20% of those who clicked through?) with the phrasing:

"tl;dr: It actually seems pretty rare for people to care about the general good as such (i.e., optimizing cause-agnostic impartial well-being), as we can see by prejudged dismissals of EA concern for non-standard beneficiaries and for doing good via indirect means."

Thanks, that's helpful feedback. I guess I was too focused on making it concise, rather than easily understood.

no problemo!

Do you think you could linkpost your article to LessWrong too?

I know this article mainly focuses on EA values, but it also overlaps with a bunch of stuff that LW users like to research and think about (e.g. in order to better understand the current socio-political and geopolitical situation with AI safety).

There's a lot of people on LW who mainly spend their days deep into quantitative technical alignment research, but are surprisingly insightful and helpful when given a fair chance to weigh in on the sociological and geopolitical environment that EA and AI safety take place in, e.g. johnswentworth's participation in this dialogue

Normally the barriers to entry are quite high, which discourages involvement from AI safety's most insightful and quantitative thinkers. Non-experts typically start out, by default, with really bad takes on US politics or China (e.g. believing that the US military just hands over the entire nuclear arsenal to a new president every 4-8 years), and people have to call them out on that in order to preserve community epistemics. 

But it also keeps alignment researchers and other quant people separated from the people thinking about the global and societal environment that EA and AI safety take place in, which currently needs as many people as possible understanding the problems and thinking through viable solutions.

You're welcome to re-post it there, if you think it might be of interest to the LW crowd! :-)

The Nuclear football is a lie?!! TIL

This post seems to mistake Effective Altruism, which is a methodology, for a value system. Valuing the 'impartial good' or 'general good' is entirely independent of wanting to do 'good' effectively, whatever you may find to be good.

You articulate this confusion most clearly in the paragraph starting "Maybe it would help to make the implications more explicit." You make two comparisons of goals that one can choose between (shrimp or human; a 10% chance of a million lives, or 1000 lives for sure). But the value of the options is not dictated by effective altruism; this depends on one's valuation of shrimp vs human life in the first case, and one's risk profile in the second.

You're welcome to disagree with me about whether what's most distinctive about EA is its values or its methodology, but it's gratuitous to claim that I am "confusing" the two just because you disagree. (One might say that you are confusing disagreement with confusion.)

A simple reason why EA can't just be a value-neutral methodology: that leaves out the "altruism" part. Effective Nazism is not a possible sub-category of EA, even if they follow an evidence-based methodology for optimizing their Nazi goals.

A second reason, more directly connected to the argument of this post: there's nothing especially distinctive about "trying to achieve your goals effectively". Cause-agnostic beneficentrism, by contrast, is a very distinctive value system that can help distinguish the principled "core" of EA from more ordinary sorts of (cause-specific) do-gooding.

But the value of the options is not dictated by effective altruism; this depends on one's valuation of shrimp vs human life in the first case, and one's risk profile in the second.

This is a misunderstanding of my view. I never suggested that EA "dictates" how to resolve disputes about the impartial good. I merely suggested that it (at core; one might participate in some sub-projects without endorsing the core principles) involves a commitment to being guided by considerations of the impartial good. The idea that value "depends on one's valuation" is a fairly crude and contestable form of anti-realism. Obviously, if it's possible for one's valuations to be mistaken, then one should instead be guided by the correct way to balance these competing interests.

Have you ever done something good, helped someone, or taken part in an organised event, and afterwards wanted to tell your friends about it, but in the end decided there was nothing to tell, or that others would see it as bragging? What did you decide to do then?

As a matter of principle, all charitable activities should stem from an inner need to help poor and needy people, and should be selfless. We do not seek profit in them, and that is precisely why they are so noble.

Hence, there is sometimes a conviction that such activities should not be spoken of loudly, or boasted about, because the search for applause in a way cancels out the idea of selfless help. Quiet and even anonymous help is perceived much better; it would best meet the above standards.

On the other hand, those who flaunt their helping of others are sometimes seen as self-interested, seeking fame and recognition, as if taking advantage of others' suffering. At the very least, such accusations are easy to encounter, especially online. How justified are they? Let's take a closer look.

When I recently decided to take in war refugees under my roof, and later became involved in organising aid to the wider community, exactly this issue came to mind. Is it appropriate for me to speak out about it? I'm sure someone will seize on it as applause-seeking. Do I want to deal with that? Maybe it is better to do everything quietly, among close friends from whom I can expect understanding?

However, I quickly came to the conclusion that if I had the opportunity to reach a wider audience, I could help more than if I didn't take that opportunity. I decided that it was better to endure criticism but ultimately do more good than to chicken out and achieve far less.

Having said that, I fully understand the dilemma of "tell the story of the good I've done, or keep it to myself?". After all, I have said more than once that, for example, we should not call ourselves gentlemen or ladies, because it is for others to judge by our behaviour and character; it is our surroundings that give us that designation. It is the same with being a hero who helps those in need.

And it may arguably seem nobler for a person to act charitably without receiving anything in return than for one who gains something in the process. But even if that were the case, I emphasise that it is a difference of degree, not a division between good and bad! Someone who derives something for himself from helping those in need is also acting nobly. In the end, the most important thing is that the person in need has gained something. That is what counts most here. By focusing on the people helping, we unfortunately lose the right perspective.

Moreover, there is no denying that, de facto, almost all of us gain from charity. Isn't it the case that when you donate money to an important cause, you feel better at heart? When you help carry out renovations in the home of a person with a disability, don't you feel the satisfaction of energy and time well spent? When you take in a refugee under your roof, don't you feel the joy of making the world a little better? The only one who does not gain is someone who is completely insensitive, and perhaps the one who donates money to a random institution without even knowing what purpose it will be used for.

We all also gain a little peace of mind, enjoy the gratitude shown to us, and find it nice when those around us appreciate us for it. Is this unethical? Not at all!

So why are we afraid to praise a good deed? I feel that the problem may grow out of the fact that we increasingly treat popularity like currency. We live in a world where fame can be a value in itself. And if we perceive it that way, people who 'boast about charity' gain something tangible in our eyes, as if someone were paying them to do good.

We are used to seeing the empty famous person, contemptuously called a celebrity, who represents nothing but gathers attention and gains wealth. So we begin to abhor all popularity and accidentally throw the baby out with the bathwater.

Because, in fact, popularity, fame, and the ability to reach a wide audience can be an invaluable tool for promoting good attitudes. This is all the more important when we consider that the entire media landscape is overrun by consumerism and entertainment. This is the only way to convince others that people are still good, willing to help, and mindful of those in need; that doing good is important and worth promoting, even advertising. And if someone can show this as a fashionable and cool thing to do, so much the better! After all, this is the main way of educating the younger generation and instilling the right role models.

If the only people who could be loud were those who have nothing of value to contribute, where would we end up?

So remember that each of us can add our brick to the building of this house. If you think yours doesn't matter, you should know that grassroots initiatives have enormous power. So what if some foundation hires a popular actor to promote some charity? For most of us, those are institutions from another reality. We will perceive quite differently a colleague from the school bench or the office next door who helps, even though it is not his job at all.

Someone who does not do charity work will then think "what for?", "why?", "who even does that?", "what is there to gain?" and so on. This is how people begin to break out of their passivity.

You don't have to immediately become an advocate for a cause. You can simply bear witness to your commitment, so that the trail is not left only by critics and passive people. Help others build a real picture of the world around you, because it's actually better than you might expect from information gleaned from social media. It's worth redressing the balance.

Every good gesture deserves praise, but it can be presented in a variety of ways, including some that are questionable to say the least. While I encourage praising the good that is done, let's do it constructively and consider how to do it sensitively.

Don't exploit others - that is, don't use the image of the people you are helping, or private information about them, if they clearly do not wish it. Better still, don't even ask for such permission yourself. A very good example is the First Job Programme Foundation's initiative called the Clothes Bank. In the autumn, as part of this project, I had the pleasure of conducting a training session for young men from children's homes. At the time, we talked with the organisers about how to promote the action, and everyone fully agreed that photos showing the metamorphoses of these young people would be great for advertising it, but no one was going to do that, even with their permission. We simply felt that it would be unfair to the people involved. After all, such initiatives can be promoted in many different ways, and they don't have to be the simplest ones.

Don't make a hero of yourself - the fact that you are helping makes you a more noble person, but you don't actually have to say it outright. That would not be the best testimony; everyone will know how to judge you themselves. Instead, focus on your feelings, for there is a huge difference between 'I am a hero' and 'I feel like a hero'. Pay attention to the subtleties of language when you talk about such things. By the way, it is the feeling of fulfilment that accompanies us when we do something good that is the greatest reward, and it is worth highlighting this when promoting similar deeds.

Don't criticise others - if you feel that society is not involved enough in a cause you have just contributed to, try not to jump on the ignorance of those around you. Negative emotions are not going to convince anyone, although I understand that they may accompany you. If that is the case, try to wait a bit until you've cooled down, and then talk about the whole thing in a spirit that can positively and constructively encourage others to participate.

Charitable or philanthropic actions make you a good person. Remember that no one can tell you how specifically to help, because only you know how much you can afford, in terms of energy, finances, and time, as well as mental strength. And you don't have to tell anyone about your commitment. It is your free choice.

But don't let anyone tell you that you shouldn't help, or talk about it in public, if you want to do it and you think it's the right thing to do. Because, in fact, by skilfully 'bragging', you may inadvertently multiply your good deed, encouraging many others to do the same.
