PCO Moore

Director of Learning @ Didask
Working (6-15 years of experience)
Joined Jul 2022

Bio

Born to Franco-British parents, raised in Paris, and currently residing in Amman with my partner, who works for an international NGO.

Graduate of Sciences Po Paris (public affairs), LSE (European studies), and the Cambridge Judge Business School (entrepreneurship).  

After a failed attempt at building an e-democracy platform, I joined the ed tech sector to democratize access to quality education, first within the MOOC sphere, then as part of Didask, an evidence-based learning platform aiming to make it easy for anyone to create an effective, fully personalized course on any topic.

How others can help me

Help explore ideas for an EA-adjacent educational project

How I can help others

Anything to do with evidence-based training / the cognitive science of learning; my job requires me to keep up to date with the latest research

Comments (3)

The repugnant conclusion is not a problem for the total view

Perhaps I am thinking about this all wrong, but isn't it the case that, whether or not Z is better than A, most people would prefer a "ZA" world (Z's population AND A's happiness) to Z?
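
To make the comparison concrete, here is a minimal worked illustration under the total view. The population sizes and welfare levels are invented purely for the sake of the example, not taken from the original argument:

```latex
\documentclass{article}
\begin{document}
% Total view: the value of a world is the sum of individual welfare,
% i.e. (population size) times (average welfare).
% All numbers below are hypothetical, chosen only to illustrate the comparison.
\[
V(A)  = 10^{9}  \times 100 = 10^{11}
\]
\[
V(Z)  = 10^{12} \times 1   = 10^{12}
\]
\[
V(ZA) = 10^{12} \times 100 = 10^{14}
\]
% Z beats A on total welfare (the repugnant conclusion), but ZA beats both;
% ZA also matches A's average welfare, so it wins on either measure:
\[
V(ZA) > V(Z) > V(A)
\]
\end{document}
```
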

Therefore, the repugnant conclusion is only a problem if there is, in fact, a tradeoff between population size & happiness. However, this does not appear to be the case in a non-Malthusian world.

For instance, it seems pretty clear that we live in a "ZA" world compared to the world of only 300 years ago. Population, life expectancy and dignity all improved dramatically at the same time.

Nor is there any clear reason to believe that a ZA world, relative to today's, is impossible.

If, instead of the possible, we turn to the likely, the trend appears to be that population ultimately stabilizes. As such, the real-world task within our lifespan should be increasing the happiness of a mostly stable population, which is about as far as you can get from a repugnant-conclusion-style dilemma.

AGI Safety Needs People With All Skillsets!

Communication in particular would seem to be key to successful efforts in this area, given that both premises and conclusions tend to be counterintuitive to the uninitiated.

Ineffective communication on this topic is not merely neutral; it is actively harmful.

Whether it is:

  • Shades of fanaticism
  • Overconfidence & dismissiveness of alternatives
  • Unclear writing with heavy use of jargon
  • The lack of simple, widely appealing stories that explain why the problem is serious and how one might help fix it

Such issues of communication:

  • Put a hard ceiling on the sort of funding this cause might receive
  • Summon an entire community of detractors that might not otherwise have existed (with implications for EA as a whole)

I'd love to see more communicators and storytellers get a crack at this. In particular, having multiple "points of entry" to the issue that do not require a PhD to understand, and that people can easily be linked to, could be of immense value.

Open Thread: June — September 2022

Hi everyone!


I'm director of learning for a French ed tech startup focused on evidence-based learning.

I believe I found out about effective altruism proper from Vox around 2015. I had also gone down the existential risk rabbit hole around 2013-2014 while a student, but did not connect the two at the time.

I recall EA arguments influencing my career decision in 2016, as I had just left my previous company and was considering many options - to this day, I am thankful for being introduced to a framework that helped me reason through my choice at the right time.

I had never engaged with the EA intellectual community until now. What ultimately brought me back to EA was my news and social media consumption. The zeitgeist of the past five or so years appears to have been largely driven by negative passions - anger, terror and despair - passions I myself have indulged in far too often. Rather than rein in or redirect those energies towards the common good, many of the most influential (often highly educated) people in our societies ended up amplifying them.

With every fleeting sentiment turned into a wild conceptual exaggeration, I started wondering - is there anything better on the menu? Reading articles and listening to podcasts such as 80K Hours, I have come to realize that the EA community can help turn the tide.

As much as I agree with the basic principles of EA, I have found the most hope in its general disposition - purposeful, constructive, and prosocial; based in reality, curious, and open to new evidence. To strive to be of use; to engage with the world as it is to that end; whatever the affliction, this must be what the cure looks like.

General areas of interest include:

  • Scaling effective instruction - My main area of professional training. While there is much warranted skepticism within the EA community when it comes to spending resources on education, there is also growing evidence that some ways of teaching are more effective than others, and that scaling them works. The potential societal returns of such an approach (especially in lower-income countries) may therefore be underestimated. One often-neglected aspect of this debate, which I have observed in my work, is that the space not taken up by effective instruction tends to be filled by methods that are both inherently appealing to funders and counterproductive (anything from a focus on learning styles to pure constructivism, where children are expected to "create" knowledge when it would be much faster to simply share it with them).
     
  • Improving decision making - With a particular focus on key decision makers. I see much promise in improving elite education (turning credentialism into a quest for usefulness), optimizing democratic incentives, and stage-directing "the room where it happens" (from EU summits to White House "war rooms", it is striking how many influential decisions are made in bad meetings, with poor structure and poor access to relevant data).
     
  • Cause exploration - Not only on which areas to pursue but also on how to pursue them. For instance, we are far from being able to tell whether a given course of action is likely to increase or decrease the chance of great power conflict.

Hope I can help in some way!