Born to Franco-British parents, raised in Paris, currently residing in Amman with my partner, who works for an international NGO.
Graduate of Sciences Po Paris (public affairs), LSE (European studies), and the Cambridge Judge Business School (entrepreneurship).
After a failed attempt at building an e-democracy platform, I joined the ed tech sector to democratize access to quality education, first within the MOOC sphere, then as part of Didask, an evidence-based learning platform aiming to make it easy for anyone to create an effective, fully personalized course on any topic.
Help explore ideas for an EA-adjacent educational project
Anything to do with evidence-based training / the cognitive science of learning; my job requires me to keep up to date with the latest research
Excellent!
I was wondering, for cases like this, is there a sort of standard RCT-to-implementation pipeline/toolkit that can help?
Seems like there could be a real benefit in an organization that can accelerate these sorts of things, taking promising research-backed interventions and getting them on the right path to implementation at scale.
Activities could include providing a replicable framework for what the different steps in the process are, and helping at each stage, whether it's funding and logistics for the replication study, setting up the nonprofit and so on.
If something like this exists, I'd be glad to know. If not, could it be a promising area to explore?
Younger people (e.g., my undergraduate students) seem more willing to entertain scenarios of catastrophes and extinction compared to older people (e.g., academics). I find that strange and I don't have a good explanation as to why that is the case.
Some hypotheses to test:
- Younger people are more likely to hold and signal radical beliefs, and the possibility of extinction is seen as more radical and exciting compared to humanity muddling through as it has in the past
- Younger people are just beginning to grapple with their own mortality, which freaks them out, whereas older people are more likely to have made peace with it in some sense
- Older people have survived through many events (including often fairly traumatic ones), so are more likely to hold a view of a world that "gets through things", as this aligns with their personal experience
- Older people have been around for a number of past catastrophic predictions that turned out to be wrong?
While I do think there is some merit in the argument that the purchase ultimately yields a net positive, I'm a bit more skeptical than most about the benefits of specialist venues, if you consider all the possible alternatives:
With that being said, I have a sense that the same purchasing decision would have raised no eyebrows had it been made by the University of Oxford itself. Such decisions are made by the University of Oxford all the time. (Well, perhaps student activists would have found something to protest about it, but it would have been linked to some symbolic matter about the history of the place rather than the financials).
To the extent that the Centre for Effective Altruism is pursuing a long-term intellectual project that seeks to study how to do the most good with the same rigour the University of Oxford brings to other endeavours, either you should not be shocked by this, or you should be shocked by the fact that the University of Oxford owns so much real estate. (I am generally more puzzled than shocked about things, so I'll leave it up to you to determine which angry emotion this evokes.)
This, however, does not seem to be the case. I am not too familiar with EA institutions, but it seems to me that the Centre for Effective Altruism is more akin to the seat of a movement of people trying to be altruistic in a particular way. As a result, it becomes an organization which - whether inadvertently or not - advertises its own goodness: "We are good, and you should be good like us too". For a number of complex reasons, people have a mental model that members of a movement of this kind should live in relative frugality, and that if they don't, they are hypocrites. (Perhaps they should live like monks... say, in a religious building made of stone surrounded by lush greenery. But also... it shouldn't be called an abbey, and they should be poor. Funny creatures, aren't we?)
Not sure what to make of this, but it would seem to have implications for the proper delineation of the "intellectual" and "movement" wings of EA, as well as the consideration of optics whenever switching from "is" to "ought" modes.
I agree with the overall reasoning for why we need inflation hedges.
I also agree that US debt poses a risk, as all debts do, but I would view this risk slightly differently, using a historical rather than budgetary lens:
The reason this matters is that typical US debt hawks tend to advocate, as their main solution, reduced spending on long-term infrastructure, military technology, and so on.
However, if you're concerned with debt, you should be more concerned about GDP (what you get out of the dollar) than about government spending (what you put into it, so to speak); about military power than about military leanness; and about government stability than about government drama (including periodic debt ceiling freakouts). A strong government backed by a strong share of the world economy - that is what investors see in the US dollar. The minute they stop seeing that, the party is over and hard choices will need to be made. (Remember how the markets reacted to Liz Truss's budget a few months ago? That is what a former empire making economic decisions looks like.)
So if you keep GDP running, prevent China from overturning the world order, and avoid obvious own-goals such as January 6-style craziness, the US should be fine for a while longer.
Now, is this US-favourable outcome the most beneficial to the world? I don't know - it may be better to shift the equilibrium at some point (though on balance, I would tend to say, probably not now). But if it is the outcome you want, those would be the key items to safeguard.
Excellent post, learned many useful techniques I did not know about!
One question I have - it would seem that most of these decisions seek to reduce variance / promote convergence towards outcomes broadly seen as reasonable. Accordingly, they seem to be highly bureaucratic, often adding new layers of process before any decision is made. The overriding principle seems to be to avoid any major foreseeable mistake for which the institution could be blamed.
However, as evidenced by the success of startups in competitive markets dominated by large actors, there are also areas in which you instead want to increase variance, action, and initiative. Some best practices in this space include various forms of individual empowerment and accountability (putting a single individual in charge of a particular area and tracking their performance through OKRs) and a tolerance for experimentation.
The reason this works is that, in practice, the best action you could take may not make it past a committee - especially if it involves innovation - whereas an average, reasonable-sounding action will.
It is not just a question of missing out on positive opportunities, however. By only ever taking average, reasonable-sounding actions, bureaucracies atrophy and lose their ability to respond to changing threats quickly and decisively enough. For years, Putin - a destructive but nimble actor - was able to run circles around slow-moving institutions like the EU and its member states by taking advantage of this weakness. Slowness in approving effective vaccines against a raging pandemic killing millions is another recent example of this.
Meanwhile, we know a culture of experimentation can arise in the public sector. This gets very Progress Studies-y and I am sure many are familiar with works such as The Entrepreneurial State by Mariana Mazzucato and so on, but there is something to this literature. Some of the biggest successes of the United States government in modern times have been achieved through this - the New Deal ("it is common sense to take a method and try it; if it fails, try another"), DARPA, putting humans on the moon, Operation Warp Speed...
This is not to say all institutions should work like this all the time. There seem to be very good reasons to build consensus and raise the threshold of evidence before certain decisions are made - perhaps when you claim to be acting on behalf of the community and/or the risk of your own action (vs. the risk of inaction) is high.
My question, rather, is which types of institutions/decisions would benefit more from a culture of caution and consensus vs. one of experimentation and accountability. Perhaps there is literature on this topic I'm not aware of - if so, I would love to know about it; if not, it would be a promising area to explore.
Not sure I would characterize progressive liberalism as particularly localist in the narrow sense. If you polled the founders and executives of international NGOs (organizations whose very existence is a refutation of narrow localism) my guess is you would find a whole lot of liberalism in there.
My sense is that both liberalism and socialism are more internationalist, on average, than conservatism.
For example, speaking in a European context, socialism has a proud internationalist tradition (the anthem is not called L'Internationale for nothing), yet Third Way social democrats, as well as liberals (the more centre-right European brand Americans may not be used to), have tended to be the strongest supporters of European integration. Conservatives, meanwhile, have predictably pushed for policies such as reduced foreign aid, reduced immigration, and exiting the EU. This reflects both differing ideologies and differing electorates - conservative voters tend to be older and more rural.
Any set of ideas, especially one resembling a political ideology, is bound to enter into tension at some point. However, I believe it is also possible to fuse liberalism, socialism, and EA-inspired thinking in a fruitful way.
For example, you could take:
I suspect a number of EA-sympathetic left-wingers would sit fairly comfortably at this intersection.
Great post!
I might be wrong, but I don't think many EAs actually believe that, say, donating to GiveWell is the single most good they can do for the world. In the actual situation, given epistemic uncertainty, it happens to be a clear example of what you mentioned - "actions that they can be reasonably sure do a lot of good". So there is an implicit belief, revealed in actions, that merely doing a lot of good is not only an acceptable but a recommended behaviour.
However, I'm not sure it logically follows from this that seeking to do "the most" good should be abandoned as a goal. This is particularly the case if effective altruism is defined not as an imperative of any kind but as an overall approach that asks: "given that I've already decided on my own to be more altruistic, how can my time/money make the biggest difference?"
Despite being an unattainable ideal if you take it literally, the "most" framing is still fruitful - it gives altruistic, open-minded, but resource-constrained people (which describes a lot more people than we might think) a scope-sensitive framework for prioritizing resource allocations.
To see why, let's take an example. It could be argued that giving to the community theatre does not just a little good, but a lot of good. If you are a billionaire giving millions to community theatres all over the world, there is a reasonable chance that you are doing a lot of good. (And such altruism should be praised, compared to spending those same millions, say, lobbying for big tobacco.)
What effective altruism then brings to the table is to say "look, if you have a sentimental attachment to giving to the community theatre, that's fine. But if you're indifferent towards particular means and your goal is simply to be a good person and help the world, the same money could carry you much further towards your goal if you did X."
Of course, you can then say: sure, X sounds good, but what about Y? What about Z? And so on, ad infinitum. At some point, though, you have to make a decision. That decision will be far from perfect, since you lack perfect information. However, by using a scope-sensitive optimization framework, you will have been able to achieve a lot more good than you would have otherwise.
So while optimization has its flaws, I would characterize it on the whole as one of those "wrong, but useful" models.
Excellent post!
Regarding the political feasibility of nuclear disarmament, it is notable that political parties have in the past advocated for unilateral disarmament, in the wake of the Campaign for Nuclear Disarmament.
Perhaps the most prominent was the UK's Labour Party, from 1982 to 1990. Needless to say, the sample size is small and there were many factors at play, but to cut a long story short, they got hammered for this and lost both major elections they contested on this policy by a large margin. I am fairly confident Labour itself sees dropping the policy as one of the reasons it was eventually able to regain power. (Not even Corbyn dared bring it back.)
This failure was partly due to a more general perception of Labour as "soft" on defense and communism in those years.
This can be contrasted with Ronald Reagan, who was able to gain major traction for disarmament by coordinating with the Soviet Union in the 1980s. Yet he had campaigned as a hawk, and in fact was initially seen by disarmament advocates as a potential nuclear madman himself.
Possible takeaways would be:
1) Committing to disarmament during electoral campaigns is risky -> disarmament campaigns should focus on leaders who are already in power
2) Coordinated action by the biggest nuclear powers is likely to be more effective than unilateral action by the smallest
3) Political credibility creates room for negotiation. To be able to make compromises on nuclear weapons, political leaders first need to send the message that they definitely would press the button if the situation called for it.
I have no idea what to make of this interview, there's almost too much going on - is he lying, deluded, intoxicated, putting up a front of defiance to avoid facing the reality of his fall... All the flaws in the human psyche on display at once, in often contradictory ways.
The two most rational takeaways I would be comfortable staying with:
Ugh. Painful to read.
Here, as everywhere, the key to staying sane is to hold a number of thoughts simultaneously:
1) The original email is horribly racist and indefensible. Just awful.
2) The author seems to have evolved since and should be given the benefit of the doubt.
3) Sincerely apologizing for past misdeeds is good in and of itself.
4) Descending on anyone who attempts to apologize without crediting them for the apology only emboldens those who will see the outcome and conclude that it is better not to apologize at all.
5) Thinkers are not gods or saints and should not be treated as such.
6) There is no "original sin". It is good for you to hold ideas that are good for the world. If someone else who holds ideas that are good for the world also holds ideas that are bad for the world, you are not responsible and should not feel guilty about it.
7) It is unproductive to think of "intellectual leaders", their status, and their reputation in general. Intellectual leaders gain their status from pushing ideas that you find convincing. It is the ideas that you should focus on. Your duty is to keep and promote the ones that work, and get rid of the ones that don't.
8) Because people hold a mix of contradictory ideas, if you focus on the people, you will find yourself tied to ideas you disagree with and make yourself vulnerable to guilt-by-association.
9) Ideas held by the same people are not necessarily correlated in the realm of ideas. If they are, there must be a demonstrable rational mechanism by which they are. (If, for instance, this email had been written by a far right anti-immigration politician who spends his time writing about the "great replacement", you could find a clear intellectual connection.)
10) In fact, people often hold two ideas that seem negatively correlated in the rational realm. For example, many Christians are in favour of violent retaliation, even as a concept (not to mention in practice), despite their creed's explicit teaching to turn the other cheek.
11) Can you find a mechanism by which EA ideas correlate with racism? They would seem to be at complete odds. One of the central moral premises of EA is cosmopolitanism and the impartial good, the idea that human beings have inherent moral worth and dignity regardless of where they come from. Which is why, for example, EAs often get accused of trying to prioritize "the far" over "the near".
12) What about rationalism? In principle, rationalism has less of an inherent defense mechanism against racism than EA ideas. In particular, there is a form of adolescent rationalism which goes something like this: a) it is good to be intelligent and rational; b) not all people are similarly intelligent and rational; c) find a reason why the other people are the ones who are dumb (in fact, the more different, the dumber) and I am smart. There is an idea of ranking in b) which does have some idea-level correlation with elitism, solipsism, condescension, and various dehumanizing worldviews. However, rationalism also has a natural counter against racism, which is that c) in its adolescent version is pure irrational silliness - the idea collapses on itself. Therefore, embracing the adolescent version says more about your character than it does about rationalism itself.
TL;DR: That Bostrom used to be racist does not mean that it is a bad idea to give money to help the global poor in effective ways, or to try to prevent nuclear wars from happening