I think I would describe him as generally supportive of global health and wellbeing focused EA / effective giving. As others note above, he's been aware of the community since roughly the time it started, and in many ways has been 'practicing' EA since long before then (e.g. he founded IPA in 2002 and wrote 'More Than Good Intentions'). He's also engaged directly with the community in various ways - e.g. he has been on the board of TLYCS for a long time (and advised them on charity selection for a while). He set up ImpactMatters, which evaluated and recommended a much broader range of charities than GiveWell. Overall, I think this is an extremely exciting appointment and I think he'll do a huge amount of good in the role!
P.S. I've also just seen Joan's write-up of the Focus University groups in the comments below, which suggests that some decent self-evaluation, experimentation and feedback loops are already happening as part of these programmes' designs. So it is very possible that there is a good amount of this going on that I (as a very casual observer) am just not aware of!
I completely agree that it is far easier to suggest an analysis than to execute one! I personally won't have the capacity to do this in the next 12-18 months, but would be happy to give feedback on a proposal and/or the research as it develops if someone else is willing and able to take up the mantle.
I do think that this analysis is more likely to be done (and done well) if it were either conducted by, commissioned by, or executed with significant buy-in from CEA and the other key stakeholders involved in community building and running local groups. This is partly a matter of helping to source data etc., but it also gives someone an important incentive to do this research. If I had lots of free time over the next 6 months, I would only take this on if I were fairly confident that the people in charge of making decisions would value the research. One model would be for someone to write up a short proposal for the analysis and take it to the decision-makers; another would be for the decision-makers to commission it (my guess is that this demand-driven approach is more likely to result in a well-funded, high-quality study).
To be clear, I massively appreciate the work that many, many people (at CEA and many other orgs) do and have done on community building and professionalising the running of groups (sorry if the tone of my original comment was implicitly critical). I think such work is very likely very valuable. I also think the hits-based model is the correct one as we ramp up spending, and that not all expenditure should be thoroughly evaluated. But in cases where it seems very likely that we'll keep doing the same type of activity for many years and spend comparatively large resources on it (e.g. support for groups), it makes sense to bake self-evaluation into programmes from the start, to help improve their design in the future.
It's bugged me for a while that EA has ~13 years of community building efforts but (AFAIK) not much by way of "strong" evidence of the impact of various types of community building / outreach, in particular local/student groups. I'd like to see more by way of baking self-evaluation into the design of community building efforts, and I think we'd be in a much better epistemic place if this had been at the forefront of efforts to professionalise community building 5+ years ago.
By "strong" I mean a serious attempt at causal evaluation using experimental or quasi-experimental methods - i.e. not necessarily RCTs where these aren't practical (though it would be great to see some of these where they are!), but some sort of "difference-in-differences" style analysis, or before-after comparisons. For example, how do groups' key performance stats (e.g. EAs 'produced', donors, money moved, people going on to EA jobs) compare in the year(s) before vs after getting a full-/part-time salaried group organiser? Possibly some of this already exists either privately or publicly and the relevant people know where to look (I haven't looked hard, sorry!). E.g. I remember GWWC putting together a fundraising prospectus in 2015 which estimated various counterfactual scenarios. Have there been serious self-evaluations since? (Sincere apologies if I've missed them or could have found them easily - this is a genuine question!)
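To make the "difference-in-differences" idea concrete, here is a minimal numerical sketch of what such an estimate would look like. All figures are invented purely for illustration - none come from actual group data:

```python
# Hypothetical mean "new engaged members per year" for two sets of groups,
# before vs. after the year the first set hired a paid organiser.
# All numbers are made up for illustration.
treated_pre, treated_post = 9.0, 20.5   # groups that hired an organiser
control_pre, control_post = 10.0, 12.5  # comparison groups (no organiser)

# Difference-in-differences: the treated groups' change minus the control
# groups' change. Under a "parallel trends" assumption (both sets of groups
# would have grown similarly absent the hire), this isolates the
# organiser's effect from background growth affecting all groups.
did_estimate = (treated_post - treated_pre) - (control_post - control_pre)
print(did_estimate)  # 9.0 extra members/year attributable to the organiser
```

The point of subtracting the control groups' change is that a naive before-after comparison for the treated groups alone (+11.5 here) would also pick up movement-wide growth; the control change (+2.5) is netted out.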
In terms of what I'd like to see more of with respect to self-evaluation (and where I tentatively think we could have done better over the last 5+ years):
Thanks a lot for sharing the syllabus, David, and for posting guidelines about using it. I think and hope this will serve as a really useful reference for people interested in pursuing (a career in) economics GPR. As you note, this is a bit broader than GPI's current research focus (which is fairly narrowly focused on longtermism and associated questions for the time being), but I think there is valuable GPR to be done in these other areas too. As you also note, GPI is currently refreshing its research agenda to account for some of the exploratory research we've done in economics over the last ~18 months - hopefully we'll have a new and improved version out in the next 2-3 months.
Thanks for all of the hard work on this, Howie (and presumably many others), over the last few months and (presumably) in the coming months!