I'm a software engineer at a Boston startup. We connect U.S. college students with mentors who support them and help keep them in school (universities are our customers). I took this job two years ago for learning, career capital, and the promise of positive impact on the world.
If asked, I could say with confidence that it has delivered on the first two, but on the last point I feel epistemically clueless. Many universities pay for our services, we collect data on what students and mentors say has happened in their relationships, and we have studies assessing impacts on retention rates. But in the for-profit world there is not much incentive (even from paying customers) to put resources behind studying counterfactuals.
I don't know how to approach such an analysis, or to what extent I should expect such an analysis to be possible. I don't expect full enlightenment -- any progress would be helpful and would help me decide what to do with my career in the future.
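To make it a bit more concrete, the sort of analysis I have in mind might look something like the rough sketch below. The column names and numbers are made up (not our actual data), and real work would obviously need a much more careful design, e.g. proper matching or a randomized pilot; this is just to illustrate the "counterfactual" question of what retention would have been without the mentoring.

```python
# Illustrative sketch only: invented column names and toy data, not our real dataset.
# Idea: compare retention among mentored students to non-mentored students with
# similar incoming characteristics, rather than to the overall average, since
# the students who take up mentoring may differ from those who don't.
import pandas as pd

# Hypothetical student-level data: prior GPA, first-generation status,
# whether the student had a mentor, and whether they were retained the next year.
df = pd.DataFrame({
    "prior_gpa": [3.1, 2.4, 3.6, 2.2, 3.0, 2.5, 3.5, 2.3],
    "first_gen": [1,   1,   0,   1,   0,   1,   0,   1],
    "mentored":  [1,   1,   1,   1,   0,   0,   0,   0],
    "retained":  [1,   0,   1,   1,   1,   0,   1,   0],
})

# Naive comparison: raw retention rate of mentored vs. non-mentored students.
naive = df.groupby("mentored")["retained"].mean()
print("Naive retention rates:\n", naive)

# Crude stratified comparison: look within bands of similar students, so that
# differences in who signs up for mentoring don't drive the whole result.
df["gpa_band"] = pd.cut(df["prior_gpa"], bins=[0, 2.5, 3.0, 4.0])
strata = df.groupby(["gpa_band", "first_gen", "mentored"], observed=True)["retained"].mean()
print("Retention within strata:\n", strata)
```

Even this toy version makes the gap obvious to me: we have the "mentored" column, but nothing that tells us what those same students would have done without a mentor.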
I would love to work with someone who has done this kind of thing before to attack this question. I would pay you for your time, at a rate you feel is reasonable.
My email is really.eli@gmail.com.
Quick 2c: I think it's typically assumed among many prominent EAs that global poverty / animal issues / long-term issues are all a lot more efficient than U.S. educational issues. As such, I'd personally expect the main benefits of you doing that work, assuming you will later work in one of the three areas I mentioned (or meta-work), to come from the first two things you mentioned (learning & career capital).
I think it's incredibly difficult to have much counterfactual impact in the for-profit world. You're right to have considerable epistemic uncertainty.
Thanks very much for the comment Ozzie.
I share the idea that U.S. educational issues are not the most efficient ones to be working on, all else equal. My question arises because it's not obvious to me that all else is equal in my case. (Though I think the burden of proof should be on me here.) For example, I have a pretty senior role in the organization, and therefore presumably have higher leverage. How should I factor considerations like that in? (Or is it misguided to do so?)
I'm curious also about your statement that it's hard to have much counterfactual impact in the for-profit world. I've been struggling with similar questions. Why do you think so?
[Comment removed]
This comment contained some honest estimates & thoughts, and then got decently downvoted. The back-and-forth doesn't seem highly productive to me.
I didn't see Ozzie's comment before it was removed, but I've occasionally seen posts/comments with curiously low scores and been unable to guess what it was people didn't like about them, which is frustrating.
I recommend that people who downvote posts, especially those written with positive intent, strongly consider leaving a comment explaining why they thought it had a bad effect on the discussion (or was otherwise unhelpful).
I understand the desire not to publicize that you did downvote someone, of course (it can feel awkward), but it's still worth considering the option! I think most Forum users will appreciate the honest feedback more than they feel annoyed by a downvote.
Your comment made me think of this essay by Zvi (a), especially his Part II.
By that do you mean that you feel like I am offering information that would critique people not maximizing victory points?
I felt like reallyeli was explicitly asking for an honest take on impact.
Do you have advice on how to give similar information without potential negatives that could come from it? Especially in a way that doesn't take significantly longer?
Milan: I've read that essay before, and I just tried skimming it again, but even if Ozzie's comment had remained up, I'm still not sure I'd have understood what you meant (Zvi's prose is very idea-dense). Now that Ozzie has removed his comment, it's likely too late for you to explain the link, but I'd recommend doing so in future comments like that one!
I think one assumption is that, compared to the main prestigious EA positions now, most jobs are orders of magnitude lower-impact per unit time. OpenPhil has spent a lot of time exploring options and only found a few possible areas, and even some of those (prison reform) don't seem as good as AI safety, from what I can tell, in many ways. Unless there's some clever EA analysis showing that a field is surprisingly good, I think the burden of proof is on that field (in this case, education) to produce some surprising insight. If you have a senior role you may be able to do 5x as much, or 15x, but I think the thinking is that the choice of industry could make a 50-200x difference.
Thanks Ozzie, this is helpful!
On that note, if you are an engineer, you may want to consider going the AI-safety route. I've written about this here:
https://www.lesswrong.com/posts/3u8oZEEayqqjjZ7Nw/current-ai-safety-roles-for-software-engineers
Please tell me you found someone - the small number of comments makes me sad!
Are you looking for people to evaluate your impact based on the data you have, or is your company genuinely willing to research its counterfactual impact if you push for it?
The former. To your other comment -- yes, I've gotten a number of emails! :)