It's possible that OPS could be useful to EA, but, as stated in the post, its validity is not established. It's hard for me to see how OPS has more predictive ability for mental illness (and subsequent treatment) than any other model of personality. The key feature that makes OPS unique seems to be that it tracks changing personality throughout the day - but what is it about that feature that makes you believe it could be a better model with more predictive power? Just more granularity?
What are the key first steps that an EA could take? Are you looking for funding? Looking to connect with an established researcher in psychology, or an established institution?
"While they [Dave & Shannon] have taken steps to move to a more scientific approach, some have argued that they fall short of a truly scientific methodology. Nevertheless, that does not make their system invalid."
This is probably the biggest bottleneck to convincing an EA to get involved here. Have Dave & Shannon published peer-reviewed papers with results that can be replicated? Have they tried to make contact with established institutions? What if the best next step is for Dave & Shannon to go to graduate school and pursue a PhD with this as their research?
This is an interesting perspective. It makes me wonder if/how there could be decently defined sub-groups that EAs can easily identify, e.g. "long-termists interested in things like AI" vs. "short-termists who place significantly more weight on current living things" - OR - "human-centered" vs. "those who place significant weight on non-human lives."
Like within Christianity, specific values/interpretations can/should be diverse, which leads to sub-groups. But there is sort of a "meta-value" that all sub-groups hold, which is that we should use our resources to do the most good that we can. It is vague enough to be interpreted in many ways, but specific enough to keep the community organized.
I think the fact that I could come up with (vaguely-defined) examples of sub-groups indicates that, in some way, the EA community already has sub-communities. I agree with the original post that there is a risk of too much value-alignment, which could lead to stagnation or other negative consequences. However, in my 2 years of reading/learning about EA, I've never thought that EAs were unaware of or overly confident in their beliefs, i.e. it seems to me that EAs are self-critical and self-aware enough to consider many viewpoints.
I personally never felt that not wanting (nor being able to imagine) an AI singleton that brings stability to humanity meant that I wasn't an EA.
Reading and following through reference links in the Wikipedia for "Reciprocity" might be a good start: https://en.wikipedia.org/wiki/Reciprocity_(social_psychology)
I had trouble finding much else when Googling things like "science of guilt".
Are you wondering if the possible negative effects of shame/guilt could cause more harm than help in certain scenarios?
I also wonder whether help coming from "institutions" lowers any feeling of guilt for recipients, because it's less personal. Receiving help from "Organization X" seems easier to accept than receiving help from a face with a name who seems to be sacrificing time/resources for you.
I like how you've defined the terms and created sort of a scale. However, the difference between pain and suffering is somewhat unclear to me - is it that suffering is awareness of pain (which maybe makes it even more painful)? Or is the scale really just pain, expected pain, and unexpected pain?
While I originally agreed that unexpected suffering is the worst of the 4 (or 3), I ran across this study that found pain was worse when it was expected: https://www.colorado.edu/today/2018/11/14/more-pain-you-expect-more-you-feel-new-study-shows
Of course, it might be too small a sample size (and too limited an experimental design) to fully conclude anything.
What academic disciplines are being developed to make the career-switch less risky? I'm also interested in how insurance/pension funds could even begin to be developed.