Edit: I'm really happy about my involvement with EA, always have been, and plan to continue increasing my engagement.
I stopped applying for EA jobs because I wasn't even getting a foot in the door. Edit: This reads as sour; really, I just got tired of applying for jobs and didn't feel it made sense to keep aiming for reach scenarios during a pandemic.
I've become more involved on the Discord because I like it, and my involvement exists almost completely outside of cause area discussion. I just like the people. But it definitely does passively increase the attention I spend on EA ideas.
I moved to SF to meet EAs and moved out due to COVID.
I may have become a little less altruistic in general due to lower QoL causing a scarcity mindset, but I expect this to reverse.
It could be socially beneficial to start a project developing good online conferencing tech; the landscape is pretty limited at the moment.
Those stupid reasons: In my previous non-EA group living arrangement, I felt frustrated by the conflict between being locally helpful and globally effective. But when I got to the EA Hotel, I felt this conflict was resolved, yet I still wasn't very locally kind or helpful, so maybe the salience of this conflict only ever existed as a justification for being lazy.
I'm curious to know how other people have experienced the transition to and from EA bubbles with respect to this tension.
How much should conflicting desires to be locally kind and globally good affect our choices about living in EA bubbles, where our locally kind choices might multiply the effectiveness of effective people? I had previously felt this was a strong reason to live in an EA bubble, but perhaps that was for stupid reasons.
I have never experienced Imposter Syndrome and have a strong sense that I never would under any circumstances. I clearly have psychological characteristics that would prevent me from experiencing Imposter Syndrome; for example, I almost always seem to have low priors about other people's competence, for better or worse.
I also model myself as having philosophical antibodies against it. But I can't tell the extent to which these antibodies are actually impactful vs. my personality.
For example: I would argue that if I'm surprised at how competent people think I am, and I strongly think they are wrong, then this means I am good at seeming competent, which is valuable. So this should only boost my view of my capabilities.
Another example: If I'm trying to decide whether I belong in a set of people based on a competence threshold, I should always compare myself to the least competent person in the set. The most competent people aren't relevant at all, but people with Imposter Syndrome seem to focus on them to the exclusion of the least competent people.
Do people who experience Imposter Syndrome also possess these beliefs, and it just doesn't matter? Or is this stuff useful to reflect on?
You don’t have to have the same skills as them, and it’s very unlikely that you will. You’re probably better at some things than they are ... Even if part of what you learn during this experience is “Whoah, this particular type of work is not for me,” that’s a useful thing to learn and will help you move toward whatever your comparative advantage is.
I have never seen writing on Imposter Syndrome that acknowledges the possibility that you really are less competent and may have no comparative advantages at all.
Let's imagine this possibility is true... So what?
I have identified relevant factors (nature of work, competitiveness) that should attenuate distress due to Imposter Syndrome, but as far as I can tell, these factors don't attenuate the distress for people with Imposter Syndrome. Would it be useful for people to imagine their worst fears are true, and evaluate how bad that would really be?
I'm interested in feedback.
If I were in your position, I would probably give them a portfolio that included direct basic science funding as well as some things that may not be as direct or basic. I would suggest a distribution but make it clear that they might prefer a different one.
To me this seems like a responsible reaction to the fact that I would think their parameters are somewhat at odds with my own values, and it doesn't require trying to subvert their request. Their value function may even be more similar to mine than I realize; I'd want to give that prospect a chance to bear out.
All money given to GiveDirectly funds an RCT about direct cash transfers, so it is a science project. This might not be basic enough or align with your relative's politics, though.
If your relative would be interested in scholarships, a number of people come to the EA Hotel to self-study (usually in math/CS, which seems fairly basic to me). You could cheaply buy study hours by donating to the hotel and earmarking the money for funding self-study.
Anecdote re: ruthlessness:
During my recent undergrad, I was often openly critical of the cost effectiveness of various initiatives being pushed in my community. I think anyone who has been similarly ruthless is probably familiar with the surprising amount of pushback and alienation that comes from doing this. I think I may have convinced some small portion of people. I ended up deciding that I should focus on circumventing defensiveness by proactively promoting what I thought were good ideas and not criticizing other people's stupid ideas, which essentially amounts to being very nice.
I wonder how well a good ruthlessness strategy about public contexts generalizes to private contexts and vice versa.
A few years ago I had very different priorities, pursuing them was not making me happy, and I guess at some point I correctly realized that I'd be much happier focusing on altruism instead.
Edit: After reading some other comments, I'll add that I guess I do feel good about being nice to people close to me, and altruism does generate a similar feeling. I'm hesitant to call this empathy because it's not true that I feel bad about the suffering of distant people, I just feel good about helping.
I was aware of the possibility of relevant competition law, but didn't mention it because I'm just not that familiar with it. My assumption was that it would not be the same for non-profits, but that could be untrue. I'm not very excited about coordination between employers in any case.
Independent of legal worries, one probably doesn't need to look at resumes to gauge the applicant pool: most orgs have team pages, so one can look at bios.
This is a good point.
Thanks for Kelsey's post. My thought is that we shouldn't expect organizations to worry too much about whether the feedback is constructive or even easy to understand, which seems to be the bulk of the work Kelsey is describing. On the one hand, it's bad if EA orgs alienate applicants via the mechanisms Kelsey describes; on the other hand, I still think that something is better than nothing, given sufficient maturity. Nonetheless, I take your point seriously.
When people say all of the top orgs have enough money, my interpretation is that I can't really create any value at all by donating to them. That is, donor A creates 0 utils by donating $1 to Org Z, because doing so doesn't actually allow Org Z to scale in a meaningful way.
If I also can't work at Org Z, then donating to Org Y looks like my next best option.