I don't think your argument against risk aversion fully addresses the issue. You give one argument for diversification that is based on diminishing marginal utilities, and then show that this plausibly doesn't apply to global charities. However, there's a separate argument for diversification that is actually about risk itself, not diminishing marginal utility. You should look at Lara Buchak's book, "Risk and Rationality", which argues that there is a distinct form of rational risk aversion (or risk seeking). On a risk-neutral approach, each outcome counts in exact proportion to its probability, regardless of whether it's the best outcome, the worst, or somewhere in between. On a risk-averse approach, the top ten percentiles of outcomes get less relative weight than the bottom ten percentiles, and vice versa for risk-seeking approaches.
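To make the percentile-weighting idea concrete, here is a minimal sketch of a Buchak-style risk-weighted expected utility calculation, where a risk function is applied to the probability of doing at least that well. The function names and the example gamble are my own illustration, not anything from Buchak's book:

```python
def reu(outcomes, r=lambda p: p):
    """Risk-weighted expected utility, sketched.

    outcomes: list of (probability, utility) pairs whose probabilities sum to 1.
    r: a risk function on [0, 1], increasing with r(0)=0 and r(1)=1.
       r(p) = p is risk-neutral; a convex r (e.g. p**2) is risk-averse,
       because it shrinks the weight given to the good upper tail.
    """
    outs = sorted(outcomes, key=lambda pu: pu[1])  # worst utility first
    value = outs[0][1]   # everyone gets at least the worst outcome's utility
    tail = 1.0           # probability of doing at least this well
    for i in range(1, len(outs)):
        tail -= outs[i - 1][0]
        # each increment of utility is weighted by r(prob. of reaching it)
        value += r(tail) * (outs[i][1] - outs[i - 1][1])
    return value

gamble = [(0.5, 0.0), (0.5, 100.0)]
print(reu(gamble))                    # risk-neutral: 50.0
print(reu(gamble, r=lambda p: p**2))  # risk-averse: 25.0
```

With r(p) = p this reduces to ordinary expected utility; a convex r values the same 50/50 gamble below its mean, which is the formal sense in which bad outcomes "count for more."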
This turns out to correspond precisely to one way of making sense of some kinds of inequality aversion - making things better for a worse-off person improves the world more than making things equally much better for a better-off person.
None of the arguments you give tell against this approach rather than the risk-neutral one.
One important challenge to the risk-sensitive approach is that, if you make large numbers of uncorrelated decisions, then the law of large numbers kicks in and it ends up behaving just like risk-neutral decision theory. But these cases of making a single large global-scale intervention are precisely the ones in which you aren't making a large number of uncorrelated decisions, and so considerations of risk sensitivity can become relevant.
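The convergence point can be checked numerically. Under a convex (risk-averse) risk function, the per-decision risk-weighted value of n independent 50/50 gambles climbs back toward the risk-neutral value of 50 as n grows - a small sketch, with the risk function and the numbers chosen purely for illustration:

```python
from math import comb

def reu(outcomes, r):
    # Risk-weighted expected utility: sort outcomes from worst to best, then
    # weight each increment of utility by r(prob. of doing at least that well).
    outs = sorted(outcomes, key=lambda pu: pu[1])
    value, tail = outs[0][1], 1.0
    for i in range(1, len(outs)):
        tail -= outs[i - 1][0]
        value += r(tail) * (outs[i][1] - outs[i - 1][1])
    return value

risk_averse = lambda p: p ** 2  # convex: underweights the good upper tail

# n independent 50/50 gambles between 0 and 100 utility each;
# total utility is 100 * Binomial(n, 0.5).
for n in (1, 10, 100):
    dist = [(comb(n, k) / 2 ** n, 100.0 * k) for k in range(n + 1)]
    print(n, reu(dist, risk_averse) / n)  # per-gamble value rises toward 50
```

A single gamble is valued at 25, but as the uncorrelated gambles accumulate, the distribution of the total concentrates around its mean and the risk-averse per-gamble valuation approaches the risk-neutral 50 - which is exactly why risk sensitivity only matters for the one-shot, correlated cases.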
I haven't gone through this whole post, but I generally like what I have seen.
I do want to advertise a recent paper I published on infinite ethics, suggesting that there are useful aggregative rules that can't be represented by an overall numerical value, and yet take into account both the quantity of persons experiencing some good or bad and the probability of such outcomes: https://academic.oup.com/aristotelian/article-abstract/121/3/299/6367834
The resulting value scale is only a partial ordering, but I think it gets intuitive cases right, and is at least provably consistent, even if not complete. (I suspect that for infinite situations, we can't get completeness in any interesting way without using the Axiom of Choice, and I think anything that needs the Axiom of Choice can't give us any reason for why it rather than some alternative is the right one.)
A few comments:
Although doing something because it is the intuitive, traditional, habitual, or whatever way of doing things doesn't necessarily have a great record of getting good results, many philosophers (particularly those in the virtue ethics tradition, but also "virtue consequentialists" and the like) argue that cultivating good intuitions, traditions, habits, and so on is probably more effective at actually having good consequences on the world than evaluating each act individually. This is probably partly due to quirks of human psychology, but partly due to the general limitations of finite beings of any sort - we need to operate under heuristics rather than unboundedly complex rules or calculations. (You're probably getting at something like this point towards the end.)
On the Harsanyi results - I think there's a bit more flexibility than your discussion suggests. I don't think there's any solid argument that rules out non-Archimedean value scales, where some things count infinitely more than others. I'm not convinced that there are such things, but I don't think they cause all the problems for utilitarianism and related views that they are sometimes said to. Also, I don't think the arguments for expected-value reasoning and equal-weight consideration for all individuals are quite as knock-down as is sometimes suggested - Lara Buchak's work on risk aversion is very interesting to me, and it is formally analogous (through the same Harsanyi/Rawls veil-of-ignorance thought experiment) to one standard form of inequality aversion. (I always forget whether it's "prioritarianism" or "egalitarianism" - one says that value counts for more at lower points on the value scale, and is formally like "diminishing marginal utility of utility", if that weren't a contradiction; the other says that improvements for people who are relatively low in the social ordering count for more than improvements for people who are relatively high, and this one is analogous to Buchak's risk aversion, where improvements in the worst outcomes matter more than improvements in the best outcomes, regardless of the absolute level at which those improvements occur.)
You endorse sentientism on the grounds that "the key question is the extent to which they’re sentient: capable of experiencing pleasure and suffering." It seems like it might be a friendly amendment to this to define "sentient" as "capable of preferring some states to others" - that seems to get away from some of the deeper metaphysical questions of consciousness, and allows us to consider pleasure and pain as preference-like states, but not the only ones.
The keywords in the academic discussion of this issue are the "Archimedean principle" (I forget whether Archimedes was applying it to weight or distance or something else, but it's the general term for the assumption that, for any two quantities you're interested in, a finite number of copies of one is sufficient to exceed the other - there are also various non-Archimedean number systems, non-Archimedean measurement systems, and non-Archimedean value theories) and "lexicographic" preference (the idea is that when you are alphabetizing things, as in a dictionary/lexicon, any word that begins with an M comes before any word that begins with an N, no matter how many Y's and Z's the M word has later and how many A's and B's the N word has later - similarly, some people argue that when you are comparing two states of affairs, any state of affairs where there are 1,000,001 living people is better than any state of affairs where there are 1,000,000 living people, no matter how impoverished the people in the first situation are and how wealthy the people in the second situation are). I'm very interested in non-Archimedean measurement systems formally, though I'm skeptical that they are relevant for value theory, and skeptical of the arguments for any lexicographic preference for one value over another; but if you're interested in these questions, those are the terms you should search for. (And you might check out PhilPapers.org for these searches - it indexes all of the philosophy journals that I'm aware of, and many publications that aren't primarily philosophy.)
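The dictionary analogy can be made precise with a few lines of code: represent each state of affairs as a tuple of values in priority order, and compare coordinate by coordinate, so that any difference in an earlier coordinate dominates everything that comes later. The state representation here (population first, then wealth) is just my illustration of the lexicographic view described above:

```python
def lex_compare(a, b):
    """Lexicographic comparison of two equal-length states.

    A state is a tuple of values in strict priority order, e.g.
    (number_of_living_people, average_wealth).  Returns -1 if a < b,
    1 if a > b, and 0 if they are exactly tied - later coordinates
    are consulted only to break exact ties in earlier ones.
    """
    for x, y in zip(a, b):
        if x != y:
            return -1 if x < y else 1
    return 0

# 1,000,001 impoverished people beats 1,000,000 wealthy people,
# because population is compared before wealth ever gets a look:
print(lex_compare((1_000_001, 1), (1_000_000, 1_000_000)))  # 1
```

This is also why lexicographic value scales are non-Archimedean: no finite amount of the second coordinate can ever outweigh a one-unit difference in the first.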
I recently rewatched the movie Her (https://www.imdb.com/title/tt1798709/) which is one of the few examples of unironically utopian fiction I can find. The total extent of conflict and suffering in the movie is typical of a standard romantic comedy - the main character is going through a bad breakup with an ex, and dealing with a new relationship (which happens to be with an artificially intelligent phone operating system). It's got its own amounts of heartache and loss, but it's utopian in that all the bigger problems of the world seem to be gone. The main character lives in Los Angeles, but the city is full of skyscrapers, and it seems to be easy for people to afford a spacious apartment (and it's decorated in warm woods and gets lots of natural light, rather than being the sort of cold glass and steel thing people imagine in a skyscraper city). All the outdoor scenes are in beautiful pedestrian-oriented spaces, full of clean air and happy people of all races and genders, interacting in a friendly way. He can take the subway to the beach and the high speed rail up to Lake Tahoe. He has a fulfilling job helping clients compose thoughtful handwritten letters to their loved ones. He's worried about being judged for dating an operating system, but his best friend down the hall stays up late sharing videos with her new operating system friend, and his work friend suggests they go on a double date to Catalina island - it's only the ex who reacts poorly to his relationship with a computer. Other than the computer relationship, the thing I've heard the most negative reactions to about the movie is that it's a future where men wear high-waisted pants in 1970s colors. It might be worth studying that movie to see how to depict a utopia in a realistic way that people can like.
I think that part of the issue is that people are sometimes mistaking a comparative claim for an absolute claim. Researchers claiming that hunter-gatherer societies had better gender relations than early agricultural ones aren't thereby claiming that hunter-gatherer societies are anywhere near equal - just less unequal than the agricultural societies that followed them.
Searching a bit (using "origin of patriarchy" as the search term) I found two relevant books that seem to be the sources of a lot of claims: The Creation of Patriarchy, by Gerda Lerner, from 1986, and The Civilization of the Goddess: The World of Old Europe, by Marija Gimbutas, from 1991. Both seem to often be described as stating that there was once an equal society, and that a later society imposed patriarchy on it some time around 5,000 years ago. But the former seems to be more specifically claiming that early Mesopotamian civilization was less unequal than later Mesopotamian civilization, and the latter seems to be more specifically claiming that the Neolithic agricultural inhabitants of Europe had a matrilocal, goddess-oriented society that was disrupted by the patrilocal, god-oriented nomadic society of the Indo-Europeans, which gave rise to the later societies. Neither one particularly supports the claim that hunter-gatherer societies are egalitarian and agricultural societies are patriarchal (the latter even seems to reverse this!). But both do give some evidence for a more plausible claim: that there was a period shortly before recorded history in which gender relations were not as bad as they became by the early period of recorded history.
If true, this would be one more way in which one might expect pre-agricultural life to have been substantially worse than the present, but also better than much of agricultural history.