I've just spent about an hour on the survey, at which point I noticed the progress bar was at about 1/6. This was at the start of four timed questions, each requiring 2 minutes and each followed by a few follow-up questions.
By this point there had been 7 or 8 of these timed questions, as well as at least two dozen pages of multiple-choice questions and a few 'brain-teasers'. I do not see how it is possible to complete this first 1/6 in under 45 minutes, seeing that the timed questions alone (8 × 2 minutes) already take up 16 minutes.
This post seems to confuse Effective Altruism, which is a methodology, with a value system. Valuing the 'impartial good' or 'general good' is entirely independent of wanting to do good effectively, whatever you consider good to be.
You articulate this confusion most clearly in the paragraph starting "Maybe it would help to make the implications more explicit." You present two comparisons of goals one can choose between (shrimp vs. human; a 10% chance of a million lives vs. 1,000 lives for sure). But the value of these options is not dictated by effective altruism: it depends on one's valuation of shrimp versus human life in the first case, and on one's risk profile in the second.
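To make the second case concrete, here is a minimal expected-utility sketch; the logarithmic utility $u(n) = \log_{10} n$ is a hypothetical choice purely for illustration, not anything the post specifies. A risk-neutral agent prefers the gamble, while this risk-averse agent prefers the sure option:

$$\mathbb{E}[\text{gamble}] = 0.1 \times 10^6 = 10^5 \;>\; 10^3 = \mathbb{E}[\text{sure}]$$
$$\mathbb{E}[u(\text{gamble})] = 0.1 \times \log_{10} 10^6 = 0.6 \;<\; 3 = \log_{10} 10^3 = \mathbb{E}[u(\text{sure})]$$

Both agents can apply EA's methodology equally well; they are simply maximizing different objectives.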
"As a matter of principle, all charitable activities should stem from an inner need to help poor and needy people and should be selfless. We do not seek profit in them and that is precisely why they are so noble."
To the best of my understanding, such a principle has no relation to EA whatsoever. I have yet to find a report by EA, GiveWell, GiveDirectly, etc. that takes into account the donors' or participants' mental state or the nobility of their intentions, and I should hope I never will.
From an EA perspective it makes no difference whether you are trying to optimize global welfare, to feel good about helping someone in need, to fit in with some obscure community of nerds, or to feel nobler than others. The value of an action is determined purely by its outcomes.
This kind of moral posturing actively seeks to exclude people from the EA community. -1.