
Tom Gardiner

Warfare officer @ Royal Navy
384 karma · Joined Sep 2021 · Working (0-5 years)

Bio


Tom is a junior officer in the UK's Royal Navy. He has been interested in EA and Rationality since 2017, was on the committee for the EA society at the University of St Andrews, and can intermittently be found in Trajan House, Oxford.

Note: Evidence suggests there is another Tom Gardiner in the EA community, which may lead to reputational confusion.

Comments (23)

Further to this, if the primary goal is to learn how the general public thinks about charitable giving, you could probably achieve the same result for far less than 100k. The remainder could be held in reserve and given to that cause if you really do think it's the best use of the money, or to your current best guess if you do not. It seems like there's an insight you wish to have, and you've set a needlessly large price tag on obtaining it.

I must offer my strongest possible recommendation for Speedy BOSH! - it has genuinely changed my relationship with food. None of the recipes I have tried are bad; some are fairly average, but many are truly glorious. Obviously, as an EA, I have been keeping notes on each dish I try from it in a Google Doc, and I'd be happy to suggest my favourites to anyone who buys/has the book.

Lots of good points here. One slight critique and one suggestion to build on the above. If I seem at all confrontational in tone, please note that this is not my aim - I think you made a solid comment.

Critique: I feel great caution around the belief that "smart, young EAs", given grants to think about a problem, are the best solution to it, no matter how well they understand the community. In my mind, one of the most powerful messages of the OP is the one regarding a preference for orthodox yet inexperienced people over those with demonstrable experience but little value alignment. Youth breaking from tradition doesn't seem a promising hope when a very large portion of this community is, and always has been, young. Indeed, EA was built from the ground up by almost exactly the sort of people in your proposed teams. I'm sure smart, young EAs are readily available in our labour force to accept these grants, far more readily than people who deeply understand the community but do not consider themselves EAs (whose takes should be the most challenging), or who have substantial experience in setting good norms and cultural traits (whose insights will surely be wiser than ours). I worry the availability and/or orthodoxy of the former is making them seem more ideal than the latter.

Suggestion: I absolutely share your concerns about how the EA electorate would be decided. As a starting point, I would suggest that voting power be given to people who take the Giving What We Can pledge and uphold it for a stated minimum time. It serves the costly-signalling function without expecting people to simply buy "membership". My suggestion has very significant problems that many will see at first glance, but I share it in case others can find a way to make it work. Edit: It seems others have thought about this a lot more than I have, and it appears intractable.

The first point here seems very likely true. As for the second, I suspect you're mostly right, but there's a little more to it. The first of the people I quote in my comment was eventually persuaded to respect my views on altruism, after discussing the philosophy surrounding it almost every night for about three months. I don't think a shorter timespan could have succeeded in this regard. He has not joined the EA community in any way, but kind of gets what it's about and thinks it's basically a good thing. If his first contact with the community had been hearing someone say that they donate 10% of their income or try to do as much good as possible, his response in NATO phonetics could be abbreviated to Foxtrot-Oscar.

In the slow, personal, deliberate induction format, my friend ended up with a respectful stance. Through any less personal or nuanced medium, I'm confident he would have thought of the community only with resentment. Of course, there was never a counterfactual in which he donated or did EA-aligned work, so nothing has been lost there. The harm I see from this is a general souring of how Joe and Jane Public respond to someone identifying as an EA. Thus far, most people's experience will be that their friends and family haven't heard of it, don't have a strong opinion and, if they're not interested, live and let live. I caveat the next sentence as a System 1 intuition, but I fear there's only so much of the general public who can hear about EA and react negatively before admitting to being in the community becomes an outright uncool thing that many would be reluctant to voice. Putting aside the number-crunching for how that would affect total impact, it would be a horrible experience for all of us. I don't think you need a population that's proactively anti-EA for this to happen; a mere passive dislike is likely sufficient.

Thank you for writing this. I'm not sure whether I agree or disagree, but it seems like a case well made. 

While I do not mean to patronise, since many others will have found the same, the one contribution I feel I can make is to emphasise how very differently people in the wider public may react to ideas/arguments that seem entirely reasonable to the typical EA. Close friends of mine, bright and educated people, have passionately defended the following positions to me in the past:
-They would rather millions die from preventable diseases than have Jeff Bezos donate his entire wealth to curing those diseases, if such a donation were driven by obnoxious virtue-signalling. The difference made to real people didn't register in their judgements at all, only the motivations. To them, charitable donation can only be good if done privately, without telling anyone.

-It is more important that money be spent on the people it is most costly and difficult to help than on those whose problems can be cured cheaply, because otherwise the people with expensive problems will never be helped.

-Charity should be something that everyone can agree on, and thus any charity dedicated to farmed animal welfare is not a valid donation opportunity.

-The Future of Humanity Institute shouldn't exist, and the people there don't have real jobs. I didn't even get to explain what FHI is trying to do or what their research covers; from the name alone, they concluded that discussion of how humanity's future might go should be considered an intellectual interest for some people, but not a career. They would not be swayed.

Primarily, I think the "so what?" of this is that trying to communicate EA ideas, nuanced or not, to the wider public is almost certainly going to be met with backlash. The first two anecdotes I list imply that even "It is better to help more people than fewer people" is contentious. Sadly, I don't think most of what this community supports fits into the "selfless person deserving praise" category many people have, and calling ourselves Effective Altruists sounds like we've ascribed ourselves virtues without justification that a person on the street would acknowledge.

Accepting that some people will react negatively and this is beyond our control, my humble recommendation would be that any more direct attempt to communicate ideas to the public get substantial feedback beforehand from people in walks of life very different from the EA norm. People are really surprising.

Agreed - Scott Alexander does this very well, as does Yudkowsky in Rationality: A-Z. Both also benefit from being blogs of their own creation, where they can dictate a lot of the norms, and so I expect them to have a fair bit more slack in how high the ceiling is.

As a teenager, I came up with a set of four rules that I resolved ought to be guiding and unbreakable in going through life. They were, somewhat dizzyingly in hindsight, the product of a deeply sad personal event, an interest in Norse mythology, and Captain America: Civil War. Many years later, I can't remember what Rules 3 and 4 were; the Rules were officially removed from my ethical code at age 21, and by that point I'd stopped being so ragingly deontological anyway. But I recall the first two clearly.

Rule 1 - Do not give in to suffering. Rule 2 - Ease the suffering of others where possible. 

The first Rule was readily applicable to daily life. The second seemed noble and mightily important, but rarely worth enacting: in middle-class, rural England, with no family drama and generally contented friends, there wasn't much suffering around me. When I moved out to university, one of my flatmates was close friends with the man who had set up the EA group there, and on learning more about it I was struck by the opportunity that GiveWell and 80k represented for fulfilling my Rules.

This story does not account for my day-to-day motivation to uphold a Giving What We Can pledge or fumble through longtermist career planning. I've been persuaded by the flavour of consequentialism used here, think that improving the experience of sentient life is wonderful and, quite frankly, have no other strong career ambitions offering competition. Generally buying in to the values and aims of this community is my day-to-day motivation. Nevertheless, on taking a step back and thinking about my life and what I wish to do with it, I still feel about the abstract concept of suffering the way Bucky Barnes feels about Iron Man at the end of that film. The Rules don't matter to me anymore, but their origin grants my EA values the emotional authority to set out a mission statement for what I should be doing.

As both a member of the EA community and a retired, mediocre stand-up act, I appreciate that you took the time to write this. You rightly highlight that some light-heartedness has benefited writers within the EA community and outside it. My intuition is that the level of humour we currently see used is, give or take, the right level given the community's goals. A lot of effort and money has been spent on making the community, along with many job opportunities within it, seem professional, in the hope that capable individuals will infer that we mean business and consider EA on those terms.

A concept I referred to a lot when planning comedic performances, and public speaking in general, is that an audience (depending on the context and their reasons for being there) will have a given threshold for the humour they expect to find in your communication. To be funny, you must go beyond this threshold. Some way above it is another boundary, a humour ceiling, defined by the social norms of the setting, beyond which you no longer seem funny. Instead, you signal that you don't understand the norms of communication in that context. In stand-up, the humour threshold is really high, so it's hard to qualify as funny at all, but nigh on impossible to be too funny. In presenting a dry subject to your boss and colleagues, the humour threshold is low and anyone could exceed it with a bit of practice, but landing safely between this threshold and the marginally greater humour ceiling is genuinely hard: you will too easily be too funny and seem a liability. When delivering a eulogy at a funeral, the threshold is set at essentially zero and the ceiling is coincident with it, with the sole exemption of jokes told to highlight cherished memories of the deceased.

I explain this because it seems to me untenably hard to commit to using humour all-out, or anywhere close to that, as a communicative and persuasive aid for EA without signalling that we do not "mean business". Stick-man illustrations and starchy acronyms, used sparingly, fall within the threshold-ceiling window for the work MacAskill and Karnofsky are trying to publicise, so these gags play out well. But I don't think they have much overhead clearance before readers would infer a lack of appreciation for the aesthetics of academic writing, and thus that they shouldn't be taken seriously.

Since the advent of democracy, when ancient Greek plays used jokes to point out the mistakes of the politicians of the day, comedy has proven a very effective method for poking holes in bad ideas and forcing people to change them, lest they be laughed at further. This seems to be the running theme of the cases you mention from John Oliver's career. It is much harder, I think, to propose an idea of your own that you wish people to believe is good and use humour to enhance that perception.

I'd be interested to know, if any of the powers that be are reading, to what extent the Long Term Future Fund could step in to take up the slack left by FTX for the most promising projects now lacking funding. This would seem a centralised way for smaller donors to play their part, without being blighted by ignorance as to what all the other small donors are funding.

Thanks! I'll definitely check those out.
