On the single-sentence qualitative feedback, I do think this would be very helpful. Just a simple, direct statement such as "Not qualified due to lacking x/y/z core requirement," versus a rejection for simply being a weak but not fundamentally flawed candidate, would make a real difference.
Right now, everyone who isn't hired is passed over in favor of stronger applicants. Obviously. I want to know whether my application was even read, to be honest. As a mid-career person trying to transition, I have this growing cynicism that many EA orgs are simply going to filter me based on my age and the fact that I have not worked at some elite firm or gone to a prestigious university. And that's fine actually I guess, but it would be helpful to get direct feedback to let me know whether I'm wasting my time applying in the first place.
I can just earn-to-give and do my own thing, it won't hurt my feelings if I'm excluded from the clique.
IMO, you need to factor in the timeline in which you think AI safety is critical. While dentists may earn more, you are foregoing 4 years of income before you even start earning. You need to determine the rough break-even point at which you would cumulatively have earned more as a dentist, and therefore been able to give more.
If the critical phase of AI safety research falls before or near that break-even point, then you may, ironically, contribute less in terms of the marginal value of your giving by taking the higher-paying path.
Here's how I see things—
1. If AI advances so quickly that the earning power of CS collapses before you graduate, it is likely that the same will happen to dentistry before you would finish that program. But maybe the latter isn't true, in which case it could be reasonable to pursue dentistry.
2. If AI advances at a moderate pace, the breakeven logic I mentioned above probably means that you will have more impact by getting a moderately well-paying job sooner so that you can give during the critical period of AI development, since your giving would be largely deferred until after AGI if you went into dentistry.
3. If AI advances at a slow pace, then perhaps going into dentistry will ultimately allow you to contribute more.
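The break-even logic above can be sketched numerically. This is a minimal model, not a forecast: the salaries, the four-year training gap, and the twenty-year horizon are all hypothetical placeholders, and it ignores tuition debt, raises, taxes, and discounting.

```python
# Hedged sketch of the cumulative-earnings break-even argument.
# All figures are hypothetical placeholders, not real salary data.

def cumulative_earnings(years_training, annual_income, horizon):
    """Cumulative income at the end of each year up to `horizon`,
    assuming zero income during training and a flat salary after."""
    return [max(0, year - years_training) * annual_income
            for year in range(1, horizon + 1)]

def breakeven_year(path_a, path_b):
    """First year in which path_b's cumulative earnings exceed
    path_a's, or None if that never happens within the horizon."""
    for year, (a, b) in enumerate(zip(path_a, path_b), start=1):
        if b > a:
            return year
    return None

horizon = 20
software = cumulative_earnings(0, 100_000, horizon)  # starts earning immediately
dentist = cumulative_earnings(4, 180_000, horizon)   # four years of school first

print(breakeven_year(software, dentist))  # year the dentist pulls ahead
```

Under these made-up numbers the dentist only pulls ahead around year ten; the point is simply that if you think the critical window for AI safety funding closes before whatever break-even year your own numbers produce, deferring income for training costs you giving capacity when it matters most.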
One possibility you didn't mention, probably because it is unappealing to you—could you just major in dentistry? Then you would get the earning power and reduce the breakeven problem.
You are young. If I were you, out of these two options, I would just major in what I was interested in, and test out my talents. I would major in CS. If I performed exceptionally in AI safety stuff, I would try my hand at a direct career in it. If I didn't, I would focus on getting some other software job with high earning potential.
You are certainly correct that earning-to-give is the rational move when you consider that the constraints are often on resources to fund our goals, rather than on candidates willing to work on them professionally.
Orwell's great. Sometimes cryptic communication is a useful means to communicate to an in-group something that you want to hide from the wider audience. For example, a common interpretation of Jesus's parables is that they expressed political ideas cryptically which it would have been unacceptable for him to state outright. He always had plausible deniability as to their meaning, which was nonetheless obvious to his hearers. Not really sure what the context is on this board that would require something like that, though? Are the EAs liable to call together the council of moderators in the middle of the night and shadow-ban someone for wrongthink?
This particular metaphor really resonated with me for whatever reason.
I'm trying to career switch. I have small children in the family to care for. My current role is very demanding. I have pretty limited resources to put towards job hunting right now. I did not go to a top college. I'm not an elite applicant, though I've done well for myself in my circumstances, and a lot of my failure to do better is due to prioritizing volunteer and other work.
To put it crassly, if EA orgs can fully satisfy their staffing needs using recent, EA-aligned graduates of elite colleges, there is no point in me even applying.
The way it feels (when I'm feeling down) is that EA is not really intended for someone like me. The jobs are not there, and while I believe in and practice earning to give, you sometimes get the impression reading the boards that if you aren't a high enough earner, maybe even that isn't really worthwhile, since in an objective sense, it isn't high impact.
And that's fine. Maybe EA can get all it needs from those talent pools, and maybe the urgency of the moment is such that even the money I can give is not that important. Obviously, it's feasible that's the case. But then, I'd like to know that, you know?
I do think some sort of moral-weights quizlet thing could be helpful for people to get to know their own values a bit better. GiveWell's models already do this, but only for a narrow range of philanthropic endeavors relative to the OP (and they are actual weights for a model, not a pedagogical tool). To be clear, I do not think this would be very rigorous. As others have noted, the various areas differ in how speculative their proposed effects are and how complete their cost evaluations are. But it might help would-be donors to at least start thinking through their values and, based on their interests, it could then point them to the appropriate authorities.
As others have noted, I feel existing chatbots are adequate for simple search purposes (I found GiveWell through ChatGPT), while for anything deeper, the existing literature is probably better than any sort of fine-tuned LLM, IMO.
I have no idea what someone in this income group would do. If I were in that class, being the respecter of expertise that I am, I would not be looking for a chatbot or a quizlet; I would seek out expert advice. So perhaps it is better to focus on getting these hypothetical expert advisors more visibility?
I agree that thinking of these donations in terms of offsetting is not right. Your ability to donate to animal welfare is basically unrelated to your ability to stop consuming animal products, and doing one does not affect your ethical obligation to do the other, as you said.
What I would encourage, and do think is right, is to consider how you can do the most good, and donating to animal welfare is a highly effective way to do that. Therefore, it seems incumbent upon both vegans and non-vegans to donate. Being vegan does not free you from the obligation to donate any more than donating frees you from the obligation to be vegan.
I say this as a non-vegan. I am highly interested in veganism, but do not feel like I can really handle the transition in this current phase of life I'm in. But I resolved to donate, not as an offset, but because I care about animals, and I feel obligated to do the most good I can. I also strive to reduce the amount of animal products I consume, and I try to seek out more humane sources for those I do use.
Doubtless, a vegan looking at my life might question whether the complication is really worth it. I certainly have a guilty conscience and feel empathy for the animals whose suffering I am causing. Am I trying to 'offset' those feelings by doing what I can? Certainly I am, to some degree. Whether that is good or bad seems like a personal question, but I think all EAs would agree that, regardless of one's personal moral implication in another person's suffering, the goal should be to do as much good as possible.