The setting? Madrid. The food? Vegan paella. The quality? Not great. And it really doesn't take much to satisfy me when it comes to vegan food; I'm really happy just to have options. So I'm going to leave this place a review, because I consider reviews a kind of common good that I make use of a lot and want to contribute to in turn. But as I'm about to post it, I'm having second thoughts.
The consequentialist: "What if this lowers the restaurant 0.1 stars and then causes people to choose a non-vegan option 1% of the time?"
The deontologist: "Well, you can't leave a fake review; if it wasn't good, it wasn't good."
The consequentialist: "Well what do you want me to do, not leave a review? Or well, maybe this can work out. Because what if people come here, and use this as an exemplar for vegan food, then getting the paella and deciding that vegan food is nasty and not for them?"
The rationalist: "Sure, but do you really think that's more likely than the 1% reduction?"
The consequentialist: "Possibly. I mean it would be one thing if we were in a vegan restaurant desert, but we're not, so the group you're really concerned with is a subset of the original, those who are geographically constrained to a tiny area in which there would not be other vegan options, which then sounds a lot less like someone who would meticulously plan their eating by researching online and more of a go-with-the-flow sort of person who will likely eat at whatever looks best in the real world."
The deontologist: "How much are you willing to compromise on things you enjoy because of a potential, hard to quantify, negative effect? A lot of what you're saying feels kind of manipulatish to me, like you're trying to think about how you can change other people's actions not exactly by lying but by hiding the truth from view. Sure this is small, but how many times have you thought about other small actions like this? How much of how you interact with the real world has been distorted by visions of what you think it should be ?
The consequentialist: "Jesus dude it really is just a review, you really think I'm out of touch if I don't toss another opinion into the ring? You're not concerned with the truth, just a facade for a basic desire to do something that makes us happy, to leave a review. But why does it make us happy? Sure, in part because we now have a trove of memories, a piece of an externalized self to come back to, but it's really to give back and help others as we have been helped so why can't we just be sure we are actually helping others, and the world, and not blindly following rules that we have never been sure of in the first place?"
The debate goes on without a winner, so I come to consult the masses, to see what input you might have for this internal debate. Comments welcome at both the specific level (should I leave the review?) and the more general level (how do others who feel they contain both deontology and consequentialism within them adjudicate hard decisions?).
Update: The review has been left (in Spanish).
Ah yes, I had thought of things like your first point, but there are good, longer-term considerations in your other two points that I was not thinking about enough.
But I suppose one thing I didn't make clear here is how tailored this was to just representing, without distortion, how I thought about this scenario. I've engaged with the different forms of utilitarianism, and with other schools of thought as well, and when doing this in an academic setting I generally come away unconvinced by many of them (rule utilitarianism and related approaches included). So, absent the sort of framework you mention in the first part of your second paragraph, it's hard for me to choose any one thing and stick with it.
But perhaps you would reply: "Sure, you might not find rule utilitarianism totally convincing when you sit down and look at the arguments, but it seems like you don't find anything totally convincing, and you are still an actor making decisions out in the world. Further, as this post evidences, you're using frameworks I'd argue are worse, like some flavor of classical utilitarianism here, which shows that despite whatever intellectual commitments you may have, you're still endorsing an approach. So what I'm saying is: maybe try to employ rule utilitarianism the way you deployed classical utilitarianism here, as the temporary voice of the consequentialist-amenable side of yourself, because it might help you avoid some of these tricky knots, and some bad longer-term decisions (your current framework biases against noticing these). And who knows, maybe with this change you'll find a bit less tension between the deontologist and the consequentialist inside you."
Does that seem like the sort of principle you would endorse?