Charlotte
Working (0-5 years experience)

I am a Predoctoral Research Fellow in Economics at the Global Priorities Institute. I read Philosophy, Politics, and Economics at the University of Warwick, where I led the local EA group and co-moderated our first fellowship. Previously, I interned at the European Parliament and at the Future of Life Institute, where I worked on the EU AI White Paper consultation. I learned about outer space governance during a research project with the German foreign service. I am funded by BERI to collaborate with GovAI on research into the Brussels Effect of EU AI regulation.


Charlotte's Shortform

Is a marginal unit of politics really that cheap?

Crossposting a response I wrote in December 2020.

Half a year ago, Scott Alexander argued that surprisingly little money is spent in US politics. Stefan Torges responded, arguing that Germans spend less money on politics than on chocolate. Does this mean that an additional marginal unit of political influence is really that cheap?

This made me curious about the European institutions: how much is spent on politics in Brussels, and why is there so little money in politics overall? And isn't that a good thing?

Stefan found something between €1 and €3 billion for Germany. However, there is also Brussels, and the European Union partly creates policy for Germans and German business (especially internal market policy), so that spending should be added to the German numbers.

There is a dataset of all registered groups in Brussels that had at least one meeting with the European Commission. I took the smallest number of each organization's self-reported budget interval. This sums to 1,750,213,248 euros overall. One might argue that a few organizations added their complete budget instead of their policy budget. On the other hand, it seems plausible that most organizations drastically underestimate their numbers (I am pretty sure Google spends more than €8 million on EU politics). In this dataset, over 4,000 organizations self-report a budget smaller than €500 alongside 0.5 or more full-time positions. If we assume they all spend €50,000 per year and the others also slightly underestimate their numbers, we end up with approximately €3 billion. If you do not want to add everything, you could also take only the organizations headquartered in Germany (about €300 million). This would be an underestimate, as it excludes the European associations headquartered in Brussels that also represent German interests.
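The estimation procedure above can be sketched in a few lines. The organisations and interval bounds below are made up for illustration; the real figures would come from the EU Transparency Register extract.

```python
# Sketch of the lower-bound estimate: each organisation reports a budget
# interval, and we take the interval's lower end. Data are illustrative,
# not the real Transparency Register.
orgs = [
    {"name": "Org A", "budget_interval": (100_000, 199_999), "fte": 1.0},
    {"name": "Org B", "budget_interval": (0, 499), "fte": 0.5},  # implausibly low budget
    {"name": "Org C", "budget_interval": (1_000_000, 1_999_999), "fte": 4.0},
]

def lower_bound_total(orgs):
    """Naive lower bound: sum the smallest number of each budget interval."""
    return sum(o["budget_interval"][0] for o in orgs)

def adjusted_total(orgs, floor=50_000):
    """Adjusted estimate: organisations reporting under 500 euros but at
    least 0.5 FTE are assumed to spend `floor` euros per year instead."""
    total = 0
    for o in orgs:
        if o["budget_interval"][1] < 500 and o["fte"] >= 0.5:
            total += floor
        else:
            total += o["budget_interval"][0]
    return total

print(lower_bound_total(orgs))  # 1100000
print(adjusted_total(orgs))     # 1150000
```

The same two-step logic (naive lower bound, then a floor for implausible self-reports) is what produces the jump from roughly €1.75 billion to roughly €3 billion in the text.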

Why do ordinary people not give money?

All German parties combined spent only about €4.5 million on the EU elections. Almost no German gives money to election campaigns; we have strict institutional rules. But almost no one gives money to lobby groups either, e.g. environmental groups. As with the underfunding of charities in general, we face a classic tragedy of the commons. In addition, I believe the fact that you can buy (and might have to buy) political influence, even if you are on the good side, is just not widely known.

Should we count only the German numbers, or everything, to calculate the German budget?

I think it makes more sense to take the whole set of lobby groups in Brussels. Why?

My interest in the total political budget stems from the question of how crowded the political field is and why not more organizations or interest groups ‘buy’ political influence. 

Because of this intention, it makes more sense to count everyone who is fighting over the same regulation. I believe this gives us a better intuition about crowdedness.

Is there more left? Perhaps something else is the limiting factor

There is only a 0.45 correlation between the budgets of the lobby organizations and their European Parliament accreditations plus European Commission meetings.
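For reference, a Pearson correlation like the 0.45 figure can be computed with nothing but the standard library. The budget and meeting numbers below are made up; the actual register data would be needed to reproduce the figure.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical lobbying budgets (euros) and Commission-meeting counts:
budgets = [50_000, 200_000, 1_000_000, 3_000_000, 8_000_000]
meetings = [1, 2, 3, 2, 10]
print(round(pearson(budgets, meetings), 2))
```

A correlation of only 0.45 on the real data means budget explains roughly 20% of the variance (0.45² ≈ 0.2) in access, which is what motivates the "money is not the bottleneck" point below.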

Maybe this means that you cannot simply buy political influence but there are greater coordination issues and other bottlenecks. 

The bottleneck might be something like "knowing the right people in politics" rather than money, and that is a much more limited good.

And maybe this is a good thing, stabilizing the political system. Other things are also bottlenecked. I would bet that the Washington numbers would be somewhat higher if registration were truly compulsory and everything were checked.

Charlotte's Shortform

Against Difference-Making Risk Aversion

I wrote this shortform mostly to be able to refer people to these lines of argument.


Introduction

Most members of the EA community are much less risk-averse in their efforts to do good (i.e. "difference-making") than most of society. However, even within effective altruism, risk aversion in difference-making is sometimes cited as an argument for AMF over the Schistosomiasis Control Initiative, for funding The Humane League rather than the Good Food Institute, or for poverty alleviation over AI alignment research.

Distinguishing two ways you can be risk-averse: over the difference you make vs. over how good the world is

Notice that there is a (somewhat subtle) difference in risk aversion between the following two preferences:

  1. preferring to save 1 life for certain rather than 2 lives with a 50% probability
  2. preferring a world in which everyone has a life of value 1 over a world in which everyone has an equal chance of having a life of value 2 or 0

If you have the first preference, you are risk-averse about your own efforts to do good in the world. Call this risk aversion about difference-making. 

If you have the second preference, you are risk-averse about the overall state the world is in. Call this risk aversion over states of the world.

  • E.g. you prefer a world in which everyone has a life of value 2 over a world in which people have an equal chance of having a life of value 4 or 0.
  • If you have this form of risk aversion, you think it is particularly important to avoid very bad outcomes, e.g. a totalitarian regime in which most people are tortured, or something similar. This kind of risk aversion is akin to prioritarianism.
     

You can be risk-averse over states of the world and not about difference-making, or vice versa. Importantly, this post is arguing against risk aversion about difference-making.

I use "difference-making" for when someone aims to "make the biggest/a difference in the world", in contrast to "making the world as good as possible".

I show some problems with risk-averse difference-making and difference-making as a framework more generally.


[A bit more than half of this discussion is based on a draft paper by Greaves et al., "On the Desire to Make a Difference" (of which I have seen an early presentation). I got approval to write a more accessible version.]

Toy Scenario

Consider the following toy scenario:

You have five actions to choose from. There are four states of the world (A-D), each being 25% likely.

             A             B             C             D
Option 1:    Save 1 life   Save 1 life   Save 1 life   Save 1 life
Option 2:    Save 5 lives  0 lives       0 lives       0 lives
Option 3:    0 lives       Save 5 lives  0 lives       0 lives
Option 4:    0 lives       0 lives       Save 5 lives  0 lives
Option 5:    0 lives       0 lives       0 lives       Save 5 lives
In other words, if you take option/action 1, you save 1 life in all possible states of the world. If you take action 2, 3, 4 or 5, you save 5 lives in 25% of states, and 0 lives in 75% of states.

You can think about "world A" as for instance: "AGI comes before 2030" or "Deworming works", etc. Which action does a risk-averse agent take?

  1. For simplicity, we assume the following form of risk aversion (about difference-making): V = E[√(lives saved)]; the example will work for any sufficiently risk-averse agent:
    1. For option 1, V = √1 = 1
    2. For options 2 to 5, V = 0.25 · √5 ≈ 0.56

So option 1 is preferred.
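Assuming the square-root utility used above (just one example of a concave, i.e. risk-averse, value function), the comparison can be checked numerically:

```python
import math

p = 0.25  # four equally likely states A-D

# Lives saved per option per state, from the table above.
options = {
    1: [1, 1, 1, 1],
    2: [5, 0, 0, 0],
    3: [0, 5, 0, 0],
    4: [0, 0, 5, 0],
    5: [0, 0, 0, 5],
}

def risk_averse_value(lives_per_state):
    """Expected value of sqrt(lives saved): a concave, risk-averse utility."""
    return sum(p * math.sqrt(lives) for lives in lives_per_state)

for opt, lives in options.items():
    print(opt, round(risk_averse_value(lives), 2))
# Option 1 scores 1.0; options 2-5 each score 0.25 * sqrt(5) ≈ 0.56,
# so the risk-averse agent prefers option 1 despite its lower expected
# number of lives saved (1 vs 1.25).
```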

Consequences of Risk Aversion about Difference-Making:

I describe two problems with risk aversion about difference-making.

 

  1. Preferences of a risk-averse agent are time-inconsistent and depend on what you perceive to be the decision unit.
    1. Suppose you make the above decision each day. Then you want to commit to choosing among options 2-5 each day. However, on the last day of your life, you will want to choose option 1 instead.
      1. Why? Once you have reached the last day of your life, you can make a new decision, and here you know you are maximising only for this one day. Hence, you will choose option 1.
  2. Many risk-averse agents might choose Pareto-dominated outcomes.
    1. When we talk about risk-averse difference-making we normally refer to an individual.
      1. However, suppose you and three other people face the choice above. If you are risk-averse together, you can choose options 2, 3, 4, and 5 respectively. In this case, the group saves 5 people for certain, and each of us effectively saves 1.25 people with certainty. If we are each risk-averse alone, we all choose option 1.
      2. Here we showed that sometimes we want to be risk-averse among friends rather than as individuals. But why only in this friend group, and not also with strangers? If we were to include everyone, we would end up with risk aversion over states of the world rather than risk aversion about difference-making.
      3. Why should your life be the decision unit, rather than the impact of your family, or the impact made in this decade? You have to choose a decision unit, and there doesn't seem to be a way to choose one non-arbitrarily.
        1. Technically speaking, this is an argument against difference-making, not against risk aversion. However, risk aversion makes it clear that the unit you choose will lead to different choices, and it is hard to justify any particular unit.
        2. If difference-making (rather than making the world as good as possible) is weird, then in particular risk aversion about difference-making might also be weird.
    2. In addition, even if we all had the same level of risk aversion, advice and cooperation would be (actively) misleading: people can't effectively cooperate while respecting their values.
      1. Consider a risk-averse organisation which can advise many people but "wants to do at least some good". It provides advice that its risk-averse advisees actually don't want, because it includes too much risk. From the organisation's perspective, however, the uncertainty washes out if the advice to each individual leads to good outcomes in different states of the world, i.e. the outcomes are relatively uncorrelated across advisees.
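The coordination point can be made concrete with the payoffs from the toy scenario above: if four agents each pick a different one of options 2-5, the group saves 5 lives in every state, dominating the outcome where everyone individually picks the "safe" option 1.

```python
# Lives saved per option in states A-D, from the toy scenario above.
options = {
    1: [1, 1, 1, 1],
    2: [5, 0, 0, 0],
    3: [0, 5, 0, 0],
    4: [0, 0, 5, 0],
    5: [0, 0, 0, 5],
}

def group_lives_per_state(choices):
    """Total lives saved in each state when each agent picks one option."""
    return [sum(options[c][s] for c in choices) for s in range(4)]

solo = group_lives_per_state([1, 1, 1, 1])         # four risk-averse individuals
coordinated = group_lives_per_state([2, 3, 4, 5])  # one agent per risky option
print(solo)         # [4, 4, 4, 4]
print(coordinated)  # [5, 5, 5, 5]: more lives saved in every state
```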


 

One might think that all of this is not so problematic: risk-averse difference-making is hard to justify from an altruistic perspective only in certain situations. One could simply exercise caution as a risk-averse difference-maker, and in particular make sure one is not in one of the three traps (time inconsistency, decision-unit dependency, and choosing Pareto-dominated outcomes in multi-agent scenarios).

I think this is not the right takeaway. First, it is actually very hard, perhaps impossible, to avoid these weird conclusions. Second, I think it is also the wrong kind of response: if one thinks the repugnant conclusion is repugnant, it does not make sense to keep the total view and simply avoid creating many people who happen to have lives barely worth living.


 

In Practice

As a risk-neutral actor, you are allowed to behave risk-aversely to do more good

  1. Consider the following argument from Hayden Wilkinson.
    1. You are considering giving away either 10% or 90% of your income. If you give away 90%, it is very likely that you will stop giving within a few years. With 10%, you expect to continue for your whole life. If so, it is better to give away 10%.
  2. Similarly, suppose you think that if you don't have any ex-post impact for 5 years, you will stop trying to do good. Then choosing altruistic actions in a very risk-averse way is the right altruistic thing to do (even as an expected-value maximiser).


 

You don't have to be a pure altruist. I think that risk aversion about difference-making is a personal preference, like many others, and you are allowed to have selfish preferences. [You can have more than one goal]

Thanks to Sam Clarke and Hayden Wilkinson for feedback.

 

[1]  Risk aversion refers to the tendency of an agent to strictly prefer certainty to uncertainty, e.g. you strictly disprefer a mean-preserving spread of the value of outcomes. 
 

Contest: 250€ for translation of "longtermism" to German

"Wenn Longtermism eine gute Heuristik ist, dann sollten alle Akteure (Philanthropen, Politiker*innen, Individuen etc.) langfristig denken, wenn sie wichtige Entscheidungen treffen." ("If longtermism is a good heuristic, then all actors (philanthropists, politicians, individuals, etc.) should think long-term when making important decisions.")

This is my proposal for how "Langfristiges Denken" ("long-term thinking") should be used, and a consideration of why it cannot serve as a translation of longtermism.


Contest: 250€ for translation of "longtermism" to German

Another option would be to stick with "longtermism" in German, while saying that longtermism is basically the combination of the following claims:

  1. Zukunftsgerechtigkeit/Zukunftsethik/Zukunftsverantwortung ("future justice/future ethics/future responsibility"): future people are morally relevant
  2. Zukunftsdimension/Zukunftsausmass ("future dimension/future scale"): the future could be very large, and is very large in expectation
  3. Zukunftsausblick/Zukunftsvoraussicht ("future outlook/future foresight"): we can influence the future and make it better

I prefer the first terms respectively. The second terms are alternatives.

Why? Most proposals here likely capture only part of what longtermism actually refers to. Hence, we might want to find several terms which, combined, equal longtermism. (We keep "longtermism" itself untranslated, because people are likely to use the English term anyway.)

Contest: 250€ for translation of "longtermism" to German

More people should be aware of Hans Jonas. I read his book in high school and found it very useful.

However, I disagree that a reference to Hans Jonas is a useful translation of longtermism. Jonas defends a specific moral theory (the same goes for Birnbacher), and his ecological imperative is closely related to Kantian philosophy. Hence, Jonas' term does not, for instance, include the optimising mindset ("let's not only make sure future people can exist and have okay lives, but make sure they have lives which are as good as possible"). Birnbacher's term does not capture non-utilitarian values (and makes it more likely that people mistakenly think longtermism = utilitarianism). But maybe all of these considerations matter less, because almost no one will actually remember where these terms came from.

Charlotte's Shortform

Here is a collection of resources/readings about (constructing) theories of change. I provide a summary of each resource (except one) in the Google doc.

The overview of the collection/summary document is:

Theory of Change (Aaron Swartz's Raw Thought) 

"Backchaining" in Strategy - LessWrong 

Michael Aird: "theory of change in Research" workshop 

What is a Theory of Change? 

Hivos ToC Guidelines: Theory of Change Thinking in Practice 

Key Tools, Resources and Materials 

Charlotte’s Main Take-aways 

Other resources I did not read:

Motivation and Takeaways:

  • I looked into this today because I believe that (1) the ability to construct theories of change is potentially a key bottleneck of the EA community; e.g. if everyone were twice as good at it, the impact of the EA community would be much higher.
  • Given this, I aim to become better at constructing theories of change myself. Moreover, I am interested in how to make this teachable (shout out to Michael Aird's work) or to set up better deliberate practice exercises.
  • I was less excited about the existing/older theory of change literature than I expected to be. Probably the best way to get good at this is simply to try, and to get feedback from people who are really good at it.
  • It seems very important to construct and review ToCs in a way that efficiently improves one's ability to construct them, e.g. (1) set up mechanisms to review ToCs you wrote in the past, and (2) track not simply the outcomes most important for the impact of that project, but the ones you expect to reuse the most in future ToCs for other projects and project areas.

Open Questions: 

  • When should you go backwards in your theory of change? #backchaining
  • When is it okay/recommendable to go forward in your theory of change?


 
