sudhanshu_kasewa

Request for comments: EA Projects evaluation platform

Reposting my message to Jan after reading a draft of this post. I feel like I’m *stepping into it* with this, but I’m keen to get some feedback of my own on the thoughts I’m expressing here.

Epistemic status: confident about some things, not about all of them.

Emotional state: Turbulent/conflicted, triggering what I label as fear-of-rejection, as I don’t think this conforms to many unstated standards; overruled, because I must try and be prepared to fail, for I cannot always learn in a vacuum.

Why piggy-back off this post: Because 1) it’s related, 2) my thoughts are only partially developed, 3) I did not spend longer than an hour on this, so it doesn’t really warrant its own post, but 4) I think it can add some more dimensions to the discussions at hand.

Please be kind in your feedback. I prefer directness to euphemism, and explanations in simpler words rather than in shibboleths.

Edited for typos and minor improvements.

-----------

Hey Jan!

Thanks for leading the charge on the EA project platform. I agree with your motivation: community members can benefit from frequent, high-resolution, nuanced feedback on their ideas (project or otherwise). I've left some comments / edits on your doc, but feel free to ignore them.

Also feel free to ignore the rest of this, but I thought I'd put down a few thoughts anyway:

  • I think separating project evaluation from team evaluation is a good idea; however, doing a project well runs deeper than just selecting the right team for the job. It's also about assigning/communicating the right responsibilities to the team members, along with how they can expect those to change as the project grows. Some people might be good at managing a project when it's a tiny team, but may not be cut out to be CEO when there are 20 (or maybe 50) people involved, and should be prepared to take on roles better suited to their gifts as a project grows. This, I feel, is a difficult message to communicate, especially to eager young effective altruists who can see that most of their community heroes/leaders are still in, or barely out of, their 20s. This seems like a form of survivorship bias coupled with EA being a genuinely young movement populated with generally young people; many who got into EA in their teens 5 or 10 years ago are now in positions of prestige, and this is (I feel) an aspirational but ultimately untenable dream for most of today's new EAs.
  • I think one of your reasons for having step 1b in place is to generate some substance around a project, to encourage community engagement once it is released on a platform / forum. Unfortunately, it is also the part that takes the most time from a very small pool of highly skilled volunteers, and is ultimately the same bottleneck faced by grant makers and analysts everywhere. If we found another way of promoting engagement, would this step be necessary?
  • Continuing from above, could the following system work? I know you said the design space of interventions is large, but here goes anyway:

1) On the EA Forum, set up a separate space where project ideas can be posted anonymously; this is phase 1. Basic forum moderation (for info hazards and extremely low-quality content) applies here.

2a) Each new post starts out with the following tags: Needs Critics, Needs Steelmen, Needs Engagement.

2b) Community members can then (anonymously) either critique, steelman, or otherwise review the proposal; karma scores for the reviewer can be used to signal the weight of the review (and the karma scores can be bucketed to protect their anonymity).
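To make the bucketing idea in 2b concrete: displaying a coarse karma band instead of an exact score means a review's weight is still legible without the precise number identifying the reviewer. A minimal sketch, where the bucket boundaries and labels are entirely made up for illustration:

```python
# Hypothetical karma bucketing from step 2b: show a coarse band
# rather than an exact score, to protect reviewer anonymity.
import bisect

BUCKET_EDGES = [10, 100, 1000]                    # illustrative boundaries
BUCKET_LABELS = ["<10", "10-99", "100-999", "1000+"]

def karma_bucket(karma: int) -> str:
    """Map an exact karma score to an anonymity-preserving band."""
    return BUCKET_LABELS[bisect.bisect_right(BUCKET_EDGES, karma)]
```

So a reviewer with 250 karma would simply display as "100-999". How coarse the bands should be depends on how many reviewers share each band; a band with only one member protects nothing.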

2c) The community can also engage with the proposal, critiques, and steelmen via up/down votes, karma, or whatever other mechanism makes sense.

2d) As certain thresholds of Σ (reviewer karma + multiplier × review net votes) are crossed, the proposal loses its Needs Critics / Needs Steelmen / Needs Engagement tags as appropriate. Losing all of those tags means that the proposal is now ready to be evaluated for progression to a real project. A net score, say Steelman Score − Critic Score, can signal its viability straight out of the gate.
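The tag-removal rule in 2d is essentially a weighted sum crossing a threshold. A rough sketch of how it might work, where the multiplier, thresholds, and data shapes are all hypothetical placeholders rather than anything the proposal commits to:

```python
# Hypothetical sketch of the tag-removal logic in step 2d.
# The multiplier and thresholds are illustrative only.

MULTIPLIER = 2.0
THRESHOLDS = {"Needs Critics": 50, "Needs Steelmen": 50, "Needs Engagement": 100}

def review_weight(reviewer_karma: float, review_net_votes: float) -> float:
    """Weight contributed by one review: reviewer karma + multiplier * net votes."""
    return reviewer_karma + MULTIPLIER * review_net_votes

def tag_score(proposal: dict, tag: str) -> float:
    """Summed weight of all reviews filed under a given tag.
    proposal["reviews"] maps tag name -> list of (reviewer_karma, net_votes)."""
    return sum(review_weight(k, v) for k, v in proposal["reviews"].get(tag, []))

def update_tags(proposal: dict) -> None:
    """Remove each 'Needs ...' tag once its summed review weight crosses a threshold."""
    for tag, threshold in THRESHOLDS.items():
        if tag in proposal["tags"] and tag_score(proposal, tag) >= threshold:
            proposal["tags"].remove(tag)

def viability(proposal: dict) -> float:
    """Net signal once the tags are gone: steelman score minus critic score."""
    return tag_score(proposal, "Needs Steelmen") - tag_score(proposal, "Needs Critics")
```

For example, a single critique from a 40-karma reviewer with 10 net votes contributes 40 + 2.0 × 10 = 60, enough (under these made-up numbers) to clear a 50-point threshold on its own; the real parameters would need tuning against actual forum activity.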

3a) Projects that successfully lose all three tags automatically progress to phase 2, where they solicit funding, mentorship, and team members. Once here, the proposer is deanonymised, and reviewers have the option to be deanonymised. The proposal is now tagged with Needs Revision, Needs Sponsors, Needs Mentorship, and Needs Team Members.

3b) The proposer revises the proposal in light of phase 1, and adds a budget and an estimate of initial team roles + specifics (skills / hours / location / rate). This can be commented on and revised, but can only be approved by (insert set of chosen experts here). Experts can also remove the proposal from this stage with a quick review, much like an Area Chair (e.g. by noting that the net score from phase 1 was too low, indicating the proposal requires significant rework). This is the only place where expert intervention is required, by which time the proposal has received a lot of feedback and gone through some iterations. Once an expert approves, the proposal loses its Needs Revision tag and is, for all intents and purposes, frozen.

3c) Members with karma above some threshold can volunteer (unpaid) to be mentors, in the expectation that they provide guidance to the project for up to some X hours a week; a project cannot lose this tag until it has acquired at least Y mentors.

3d) All members can choose to donate towards / sponsor a project via some charity vehicle (CEA?), under the condition that their money will be returned (or their pledge won't fall due) unless the required budget is fully raised... I'm unfamiliar with how something like this works in practice, though.
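The conditional-pledge idea in 3d resembles what's sometimes called an assurance contract: pledges only fall due if the funding goal is met, otherwise everyone is refunded. A purely illustrative sketch of the settlement rule (names and structure are my own, not anything the proposal specifies):

```python
# Hypothetical assurance-contract settlement for step 3d: pledges are
# collected only if, together, they cover the required budget; otherwise
# nothing falls due and everyone is (notionally) refunded.

def settle_pledges(pledges: dict, required_budget: float) -> dict:
    """pledges maps sponsor id -> pledged amount. Returns which pledges
    to collect, which is all of them or none of them."""
    total = sum(pledges.values())
    if total >= required_budget:
        return {"funded": True, "collect": dict(pledges)}
    return {"funded": False, "collect": {}}
```

The hard parts in practice (holding funds in escrow vs. collecting pledges later, deadlines, partial refunds) are exactly the "how this works" questions flagged above, and a charity vehicle would presumably have its own constraints here.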

3e) Members can (secretly) submit their candidature for the roles advertised in the proposal, along with CVs; the proposer, mentors, and high-value sponsors (for some specification of high-value) can vet, interview, select, and secure team members from this pool, after which a mentor can remove the Needs Team Members tag.

3f) Once a project loses all four tags in phase 2, it has everything it needs to launch. Team members are revealed, the proposer (with mentor / CEA / local EA chapter support) sets up the new organization, the charity vehicle transfers the money to the organization, which then officially hires the staff, and the party gets started.

4) Between them, the mentors, high-value sponsors, and proposer submit some manner of update on the progress of the project to the forum every Z months.

Essentially, the above looks like an almost entirely open, community-driven exercise, but it doesn't resolve how we get engagement (in steps 2b and 2c) in the first place. I don't have an answer to that question, but I think asking the community to act as Steelmen or Critics will signal that we need more than up/downvotes on this particular item. LessWrong has a very strong commenter base, so I suspect the denizens of the EA Forum could rise to the task.

Of course, having written this all up, I can see how implementing it on a platform might take a little more time. I'm not a web developer, so I'm not sure how easy it is to implement all the above logic flows (e.g. when is someone anonymous vs. when are they not?), but I estimate it might take a week (between process designers and coders) to draw out (and iterate on) all the logic and pathways, two weeks to implement in code, and another week to test.

  • Sorry for all this; I hope it wasn't a complete waste of your time. If I've overstayed my welcome in your mindspace, or overstepped in my role as 'someone random on the internet giving you feedback', I deeply apologise. Know that writing all this down was cathartic for me, so in the worst case, I did it selfishly, but not maliciously.