This is still a common practice. The point isn't to evaluate employees by number of hours worked; it's to give their manager a good understanding of how time is being used, so they can make suggestions about what to go deeper on, what to skip, how to reprioritize tasks, etc.
Several employees opt out of this simply because they prefer not to do it. It's an optional practice for the benefit of employees rather than a required practice used for performance assessment.
I'm referring to the possibility of supporting academics (e.g. philosophers) to propose and explore different approaches to moral uncertainty and their merits and drawbacks. (E.g., different approaches to operationalizing the considerations listed at https://www.openphilanthropy.org/blog/update-cause-prioritization-open-philanthropy#Allocating_capital_to_buckets_and_causes , which may have different consequences for how much ought to be allocated to each bucket)
Keep in mind that Milan worked for GiveWell, not OP, and that he was giving his own impressions rather than speaking for either organization in that post.
* His "Flexible working schedule" point sounds pretty consistent with how things are here.
* We continue to encourage time tracking (but we don't require it and not everybody does it).
* We do try to explicitly encourage self-care.
Does that respond to what you had in mind?
GiveWell's CEA was produced by multiple people over multiple years - we wouldn't expect a single person to generate the whole thing :)
I do think you should probably be able to imagine yourself engaging in a discussion over some particular parameter or aspect of GiveWell's CEA, and trying to improve that parameter or aspect to better capture what we care about (good accomplished per dollar). Quantitative aptitude is not a hard requirement for this position (there are some ways the role could evolve that would not require it), but it's a major plus.
The role does include all three of those things, and I think all three things are well served by the job qualifications listed in the posting. A common thread is that all involve trying to deliver an informative, well-calibrated answer to an action-relevant question, largely via discussion with knowledgeable parties and critical assessment of evidence and arguments.
In general, we have a list of the projects that we consider most important to complete, and we look for good matches between high-ranked projects and employees who seem well suited to them. I expect that most entry-level Research Analysts will try their hand at both cause prioritization and grant investigation work, and we'll develop a picture of what they're best at that we can then use to assign them more of one or the other (or something else, such as the work listed at https://www.openphilanthropy.org/get-involved/jobs/analyst-specializing-potential-risks-advanced-artificial-intelligence) over time.
We do formal performance reviews twice per year, and we ask managers to use their regular (~weekly) check-ins with reports to sync up on performance, so that nothing in these reviews should come as a surprise. There's no unified metric for an employee's output here; we set priorities for the organization, set assignments that serve those priorities, set case-by-case timelines and goals for the assignments (in collaboration with the people who will be working on them), and compare output to the goals we had set.
All bios here: https://www.openphilanthropy.org/about/team
Grants Associates and Operations Associates are likely to report to Derek or Morgan. Research Analysts are likely to report to people who have been in similar roles for a while, such as Ajeya, Claire, Luke and Nick. None of this is set in stone though.
A few things that come to mind:
* The work is challenging, and not everyone is able to perform at a high enough level to see the career progression they want.
* The culture tends toward direct communication. People are expected to be open with criticism, both of people they manage and of people who manage them. This can be uncomfortable for some people (though we try hard to create a supportive and constructive context).
* The work is often solitary, consisting of reading/writing/analysis and one-on-one check-ins rather than large-group collaboration. It's possible that this will change for some roles in the future (e.g., we might want more large-group collaboration as our cause prioritization team grows), but we're not sure of that.
We don't control the visa process and can't ensure that people will get sponsorship. We don't expect sponsorship requirements to be a major factor for us in deciding which applicants to move forward with.
There will probably be similar roles in the future, though I can't guarantee that. To become a better candidate, one can: accomplish objectively impressive things (especially if they're relevant to effective altruism); create public content that gives a sense of how one thinks (e.g., a blog); or get to know people in the effective altruism community, increasing the odds of a positive and meaningful referral.