This will be too esoteric for many but it could be argued that Daniel M. Ingram’s Emergent Phenomenology Research Consortium is trying to operationalise wisdom research. https://theeprc.org/
I would love to be able to listen to Open Philanthropy research reports in this way.
The claims about Europe in this article are completely absurd, but that's hardly surprising given the incredibly low standard of the sources cited. I'm in favour of engagement with diverse viewpoints and believe this should be viewed as a serious task. I think the best way to do this is to engage with the primary literature and with the output of well-regarded think tanks - see for example the Global Go To Think Tank Index Report - http://repository.upenn.edu/cgi/viewcontent.cgi?article=1009&context=think_tanks
Increasingly, MOOCs are also a great way of getting up to speed on the essentials of a discipline - https://www.class-central.com. FutureLearn's course on Crime, Justice and Society, for example, was a complete revelation to me, particularly the section on miscarriages of justice - https://www.futurelearn.com/courses/crime-justice-society
The Pareto Fellowship programme sounds like paradise on earth.
I think this is wise given the complexity of GPP's core research agenda, but I really like the branding and identity of the project and the prominence it gives to research effectiveness as a critically important idea. I see it as being potentially analogous to what the Fraunhofer-Gesellschaft does for innovation in Germany in that it could turn a vague concept into a strategic process.
Congratulations to all concerned. As a member of the EA community in West Yorkshire I am really pleased to see the collaboration with the University of York. I hope the bid to the John Templeton Foundation is successful.
I'd really like to see CFAR workshops available in the UK too. Is this something CEA/80,000 Hours might be able to facilitate?
Effective Altruism certainly has the conceptual richness to support a research institute, and I shall look forward to the development of the proposed Oxford Institute for Effective Altruism with considerable enthusiasm. In terms of supporting the future intellectual development of the field, I hope the Institute will deliver (or contribute to) taught programmes at Oxford and build up a significant postgraduate research community. A research focus on crucial considerations and cause prioritisation is also appealing because (a) these are extremely powerful but relatively neglected ideas, and (b) when they are linked to “cause neutrality” and “means neutrality” they can become the basis for effective practical action in many diverse domains that have no connection to global philanthropy. For example, I am interested in the application of EA principles in university administration and regional development. There must be many others who have fairly constrained responsibilities and hypothecated budgets but who nevertheless want to use the concepts and methods of EA to do the best they can. The new Institute should help them do that.
From time to time I worry that the ideas that make EA so interesting also constitute a barrier to effective outreach. When newcomers engage with the movement and its literature, they must often be surprised by how few steps it takes to get from a concern with global poverty to AI risks, Dyson spheres and von Neumann probes. This is exciting stuff, but many who want to do good better are never going to be interested in things like existential risks, Bayes’ Theorem or cognitive biases - as important and relevant as these things are. I think we have to accept that the intellectual appeal and the practical appeal of EA are never going to converge for many, and ensure that the way things are organised reflects this dichotomy.
80,000 Hours is probably the most accessible branch of the EA movement, and I hope that after its move to the Bay Area it will consider a partnership with CFAR to develop a programme delivering practical transferable skills based on EA principles. I think this would have enormous appeal to many of its clients.