For me personally, research and then grantmaking at Open Phil has been excellent for my career development, and it's pretty implausible that grad school in ML or CS, or an ML engineering role at an AI company, or any other path I can easily think of, would have been comparably useful.
If I had pursued an academic path, then assuming I was successful on that path, I would be in my first or maybe second year as an assistant professor right about now (or maybe I'd just be starting to apply for such a role). Instead, at Open Phil, I wrote less-academic reports and posts about less established topics in a more home-grown style, gave talks in a variety of venues, talked to podcasters and journalists, and built lots of relationships in industry, academia, and the policy world in the course of funding and advising people. I am likely more noteworthy among AI companies, policymakers, and even academic researchers than I would have been if I had spent that time doing technical research in grad school and then gone for a faculty role — and I additionally get to direct funding, an option which wouldn't have been easily available to me on that alternative path.
The obvious con of OP relative to a path like that is that you have to "roll your own" career path to a much greater degree. If you go to grad school, you will definitely write papers, and then be evaluated based on how many good papers you've written; there isn't something analogous you will definitely be made to do and evaluated on at OP (at least not something clearly publicly visible). But I think there are a lot of pros:
I'm very interested in these paths. In fact, I currently think that well over half the value created by the projects we have funded or will fund in 2023 will go through "providing evidence for dangerous capabilities" and "demonstrating emergent misalignment"; I wouldn't be surprised if that continues being the case.
The way I approach the role, it involves thinking deeply about what technical research we want to see in the world and why, and trying to articulate that to potential grantees (in one-on-one conversations, posts like this one, RFPs, talks at conferences, etc.) so that they can form a fine-grained understanding of how we're thinking about the core problems and where their research interests overlap with Open Phil's philanthropic goals in the space. To do this well, it's really valuable to have a good grip on the existing work in the relevant area(s).
I think this is definitely a real dynamic, but many EAs seem to greatly exaggerate it in their minds and inappropriately round the impact of external research down to 0. Here are a few scattered points on this topic:
Professors typically have their own salaries covered, but need to secure funding for each new student they take on, so providing funding to an academic lab allows them to take on more students and grow (it's not always the case that everyone is taking on as many students as they can manage). Additionally, it's often hard for professors to get funding for non-student expenses (compute, engineering help, data labeling contractors, etc) through NSF grants and similar, which are often restricted to students.
Yeah, I feel a lot of this stress as well, though FWIW for me personally research was more stressful. I don't think there's any crisp institutional advice or formula for dealing with this kind of thing, unfortunately. One disposition that I think makes it hard to be a grantmaker at OP (in addition to your list, which I think is largely overlapping) is being overly attached to perfection and to satisfyingly clean, beautifully justifiable answers and decisions.
It's hard to project forward of course, but currently there are ~50 applicants to the TAIS team and ~100 to the AI governance team (although I think a number of people are likely to apply close to the deadline).
There is certainly no defined age cutoff, and we are usually extra excited when we can hire candidates who bring many years of career experience to the table in addition to their other qualifications!
I'll just add that in a lot of cases, I fund technical research that I think is likely to help with policy goals (for example, work in the space of model organisms of misalignment can feed into policy goals).