Thanks for the interesting post! I just wanted to ask if there are any updates on these research projects? I think work along these lines could be quite promising. One potential partner for cooperation could be clearerthinking.org. They already have a survey tool for intrinsic values, which seems to point in a similar direction.
Also a big thank you from my side. It reads as an open and honest account, and to me it shines a light on some very important challenges the EA community faces in making the best use of available talent. I hope your story can inspire some voices in the community to become more self-reflective and critical about how some of these dynamics are playing out right under our noses. For a community dedicated to doing better, being able to learn from stories like yours seems like an important requirement.
In this light, I would love to see comments (or even better follow-up posts) address things like what their main takeaways are for the EA community. What can we do to help dedicated people who are starting out to make better decisions for themselves as well as the community?
Thanks for the thoughtful answer. I agree that it's not clear this is worse than the alternatives; after all, I didn't offer a reference solution to compare it against in my comment.
I just wanted to highlight potential problems that ought to be considered while designing such solutions. So, if you consider working more on this in the future, it might be fruitful to think about how it would influence such feedback loops.
In essence, I think the act of adding quantitative measures may lend a veil of "objectivity" to assessments of people's work, which is intrinsically vulnerable to the "success to the successful" feedback loop.
Based on your comment, I had another look at the specific criteria of the rubric, and I agree that it could plausibly help counteract the dynamic I outlined above. However, it would still have to be applied with care and with an awareness that such dynamics can occur.
The main problem I wanted to highlight is that something like this might obscure those dynamics and could be employed for political purposes, such as justifying existing status hierarchies that may be circumstantial rather than merit-based.
Thanks for the interesting post.
One consideration that comes to my mind is whether this type of evaluation further reinforces a "success to the successful" feedback loop, which is inherently sensitive to initial conditions. That is, some people might be able to produce great work given the right support and conditions but lack them at the start, while someone luckier gets picked up, receives more support, and that support then reinforces further success.
Thus, it seems quite hard to use this kind of system to achieve "optimal" outcomes; or rather, you have to be careful about how you implement such rating systems and stay aware of these feedback loops.
What do you think about this?
I just wanted to leave a quick endorsement of the concept of "local priorities research". One thing that is easy to forget is that at least some of the best opportunities for doing good don't simply "exist"; they are created, "made to be", through entrepreneurial effort. Simply directing people to whatever seems most impactful at the time is therefore likely not the best long-term strategy. Rather, it seems logical that we also invest part of our resources in developing our capacity to make the best use of what is available at a specific location and to create opportunities that didn't exist before. So thank you very much for putting this idea on the EA concept map; I hope it receives some of the attention it deserves!
At multiple points in the post, I found myself trying to understand your angle in writing it. While the post seems written with the goal of demarcating your brand of radical social justice from EA, you clearly agree with the core "EA assumption" (i.e., that it's good to use careful reasoning and evidence to try to make the world better), even though you disagree on certain aspects of how best to implement this in practice.
Thus, I would really encourage you to engage with the EA community in a collaborative and open spirit. As you can tell from the reactions here, criticism is well appreciated by the EA community when it is well reasoned and articulated. Of course, there are some rules to this game (e.g., as mentioned elsewhere, you should provide justification for your beliefs), but if you have good arguments for your position, you might even effect systemic change in EA ;)
Thanks for the quick reply!
Yeah, an article or podcast on the framework and its possible pitfalls would be great. I generally like ITN for broad cause assessments (i.e., "is this interesting to look at?"), but the quantitative version that 80k uses does seem to have some serious limitations once one digs more deeply into a topic. I would mostly be concerned about people new to EA either placing false confidence in the numbers or being turned off by an overly simplistic approach. But you obviously have much more insight into people's reactions, and I look forward to seeing how you develop and improve the content in the future!
Thanks for the post, a very interesting initiative! However, this investigation seems at least slightly in tension with other Founderspledge investigations into "giving later" options such as DAFs. Could you elaborate on how these projects relate and where Founderspledge's priorities are pointing?
I know this is a late reply to an old comment, but it would be great to know to what extent you think you have addressed the issues raised. Or, if you did not address them, what was your reason for setting them aside?
I am working through the cause prioritization literature at the moment, and I don't really feel that 80k addresses all (or most) of the substantial concerns raised. For instance, the assessments of climate change and AI safety are good examples where 80k's conclusions can be attacked fairly easily, given conceptual difficulties in the underlying cause prioritization framework and argument.