Jobst Heitzig (vodle.it)

Senior Researcher / Lead, FutureLab on Game Theory and Networks of Interacting Agents @ Potsdam Institute for Climate Impact Research
Working (15+ years of experience)
Joined Oct 2022

Bio

I'm a mathematician working on collective decision making, game theory, formal ethics, international coalition formation, and a lot of stuff related to climate change. Here's my professional profile.

My definition of value:

  • I have a wide moral circle (including aliens as long as they can enjoy or suffer life)
  • I have a zero time discount rate, i.e., value the future as much as the present
  • I am (utility-) risk-averse: I prefer a sure 1 util to a coin toss between 0 and 2 utils
  • I am (ex post) inequality-averse: I prefer 2 people each getting 1 util for sure over one getting 0 and the other getting 2, both for sure
  • I am (ex ante) fairness-seeking: I prefer 2 people each getting an expected 1 util over one getting an expected 0 and the other an expected 2.
  • Despite all this, I am morally uncertain
  • Conditional on all of the above, I also value beauty, consistency, simplicity, complexity, and symmetry

How others can help me

I need help with various aspects of my main project, which is to develop an open-source collective decision app, http://www.vodle.it :

  • project and product management
  • communication, marketing, social media
  • quality control, testing
  • translations
  • funding

How I can help others

I can help by ...

  • providing feedback on ideas
  • proofreading and commenting on texts

Comments

Answer by Jobst Heitzig (vodle.it), Nov 23, 2022

I donate monthly to charities collectively chosen by my colleagues and friends, for four reasons:

  • I believe in epistemic democracy
  • I want to learn about others' priorities
  • I want to encourage them to think about donating themselves
  • I want to test my voting app (which I use for that collective decision)

So far, most of those donations went to public health charities like Doctors Without Borders or Malaria Consortium. If you want to have a say in where my November donation goes: demo.vodle.it

I forgot to add that there are of course also approaches based on regret.

  • Let's call each possible solution of ambiguity a scenario.
  • For each scenario Z and each possible strategy S, one can estimate the expected value of S in scenario Z, let's denote that by v(S|Z).
  • Let's call the difference in expected value between the chosen strategy S and the optimal one in Z the regret in Z, denoted r(S|Z) = max{v(S'|Z): strategies S'} – v(S|Z).
  • Let's denote the minimal and maximal regret when choosing S by minr(S) = min{r(S|Z): all scenarios Z} and maxr(S) = max{r(S|Z): all scenarios Z}.

Then Savage's minimax regret criterion demands that one choose the S which minimizes maxr(S). The advantage over the Hurwicz criterion is that the latter only looks at the two most extreme scenarios, which might not be representative at all of what will actually happen, while Savage's criterion takes the available information about all possible scenarios into account more comprehensively.

Obviously, one might combine the Hurwicz and Savage approaches into what one might call the regret-based Hurwicz or Savage–Hurwicz criterion that would demand choosing that S which minimizes h maxr(S) + (1–h) minr(S), where h is again some parameter aiming to represent one's degree of ambiguity aversion. (I haven't found this criterion in the literature but think it must be known since it is such an obvious combination.)
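To make this concrete, here is a minimal Python sketch of both criteria; the strategy and scenario names and the values v(S|Z) are made-up illustrations, not taken from any real decision problem.

```python
# Minimal sketch of the regret-based criteria above, using a hypothetical
# table of expected values v(S|Z); all names and numbers are illustrative.
values = {  # values[S][Z] = v(S|Z)
    "S1": {"Z1": 10, "Z2": 2, "Z3": 5},
    "S2": {"Z1": 7,  "Z2": 6, "Z3": 4},
    "S3": {"Z1": 3,  "Z2": 5, "Z3": 9},
}
scenarios = ["Z1", "Z2", "Z3"]

# r(S|Z) = max{v(S'|Z): strategies S'} - v(S|Z)
best = {Z: max(values[S][Z] for S in values) for Z in scenarios}
regret = {S: {Z: best[Z] - values[S][Z] for Z in scenarios} for S in values}

maxr = {S: max(regret[S].values()) for S in values}  # maximal regret of S
minr = {S: min(regret[S].values()) for S in values}  # minimal regret of S

# Savage's minimax regret criterion: minimize maxr(S)
savage = min(values, key=lambda S: maxr[S])

# Regret-based ("Savage-Hurwicz") criterion: minimize h*maxr(S) + (1-h)*minr(S)
h = 0.7  # ambiguity-aversion parameter
savage_hurwicz = min(values, key=lambda S: h * maxr[S] + (1 - h) * minr[S])

print("Savage choice:", savage, "| Savage-Hurwicz choice:", savage_hurwicz)
```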

This is tremendously helpful!

I personally sometimes have an anger problem. Curiously, it mostly happens when someone I love seems to be obviously wrong in a recurring way.

I believe part of the reason I then sometimes get angry is that it can seem as though the person I love might be less worthy of my love because of their seemingly silly opinion or behaviour. At the same time, I notice that such a thought of mine is itself silly, and that makes me angry at myself. But in that situation I can't admit that I'm angry at myself, so I end up acting as if I were angry at the other person.

What a mess...

One thing that seems to help me most of the time is the Buddhist "loving kindness" exercise, as explained, for example, here.

I think you are right; the distinction still makes sense, but only as a theoretical device to disentangle things in thought experiments, and less so in practice, unless one can argue that the correlations are weak.

I think you point to relevant tradeoffs here. I myself am currently testing a different scheme to determine my donation: let the public (or anyone interested in participating) decide collectively where to donate. I believe this might...

  • improve the decision quality due to crowd-sourcing information and reducing any bias (moral or otherwise) that I might have
  • encourage others to think about various causes and potentially make them donate more as well

The downside is probably that the total effort spent on this decision is larger than if I took it on my own...

My polls look like this, sometimes with a fixed total amount, sometimes with a total that increases with participation (to give an incentive). Typically around 100 people participate in it:

What do you think of this?

I like this type of model very much! As it happens, a few years ago I had a paper in Sustainability that used an even simpler model in a similar spirit, which I used to discuss and compare the implications of various normative systems.

Abstract: We introduce and analyze a simple formal thought experiment designed to reflect a qualitative decision dilemma humanity might currently face in view of anthropogenic climate change. In this exercise, each generation can choose between two options, either setting humanity on a pathway to certain high wellbeing after one generation of suffering, or leaving the next generation in the same state as the current one with the same options, but facing a continuous risk of permanent collapse. We analyze this abstract setup regarding the question of what the right choice would be both in a rationality-based framework including optimal control, welfare economics, and game theory, and by means of other approaches based on the notions of responsibility, safe operating spaces, and sustainability paradigms. Across these different approaches, we confirm the intuition that a focus on the long-term future makes the first option more attractive while a focus on equality across generations favors the second. Despite this, we generally find a large diversity and disagreement of assessments both between and within these different approaches, suggesting a strong dependence on the choice of the normative framework used. This implies that policy measures selected to achieve targets such as the United Nations Sustainable Development Goals can depend strongly on the normative framework applied and specific care needs to be taken with regard to the choice of such frameworks.

The model is this:

Here L is the current "lake" state, from which action A ("taking action") certainly leads to the absorbing "shelter" state S via the passing "valley of tears" state P, while action B ("business as usual") likely keeps us in L but might also lead to the absorbing "trench" state.
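For readers who cannot see the figure, here is a minimal sketch of that transition structure in Python; the probability numbers are illustrative placeholders, not the parameters used in the paper.

```python
# Sketch of the model's state-transition structure; the probability numbers
# are illustrative placeholders, not the parameters used in the paper.
# States: L = "lake", P = "valley of tears", S = "shelter", T = "trench".
transitions = {
    ("L", "A"): {"P": 1.0},              # taking action: certainly into the valley of tears
    ("L", "B"): {"L": 0.98, "T": 0.02},  # business as usual: likely stay in L, small collapse risk
    ("P", "-"): {"S": 1.0},              # after one generation of suffering, the shelter is reached
    ("S", "-"): {"S": 1.0},              # shelter is absorbing
    ("T", "-"): {"T": 1.0},              # trench is absorbing
}

# Sanity check: outgoing probabilities sum to 1 for every (state, action) pair.
for (state, action), probs in transitions.items():
    assert abs(sum(probs.values()) - 1.0) < 1e-9
```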

I had introduced the names "lake", "shelter", and "trench" earlier in another paper on sustainable management, which contained this figure to explain the terminology:

(Humanity sits in a boat and can row against slow currents but not up waterfalls. The dark half is ethically undesirable, the light half desirable. We are likely in a "lake" or "glade" state now, roughly comparable to your "age of perils". Your "interstellar" state would be a "shelter".)

Related to that:

Your figure says

Longtermist regress IS EITHER Contraction OR Average wellbeing decrease,

but consider a certain baseline trajectory A on which

  • longterm population = 3 gazillion person life years for sure
  • average wellbeing = 3 utils per person per life year for sure,

so that their expected product equals 9 gazillion utils, and an uncertain alternative trajectory B on which

  • if nature's coin lands heads, longterm population = 7 gazillion person life years but average wellbeing = 1 util per person per life year
  • if nature's coin lands tails, longterm population = 1 gazillion person life years but average wellbeing = 7 utils per person per life year,

so that their expected product equals (7 x 1 + 1 x 7) / 2 = 7 gazillion utils.

Then an event that changes the trajectory from A to B is a longtermist regress since it reduces the expected utility.

But it is NEITHER a contraction NOR an average wellbeing decrease. In fact, it is BOTH an Expansion, since the expected longterm population increases from 3 to 4 gazillion person life years, AND an average wellbeing increase, since that increases from 3 to 4 utils per person per life year.
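A quick numeric check of this example (populations in gazillion person life years, wellbeing in utils per person per life year):

```python
# Quick check of the trajectory example above.
ev_A = 3 * 3  # trajectory A: certain product = 9

outcomes_B = [(7, 1), (1, 7)]  # trajectory B: fair coin between the two branches
ev_B = sum(pop * wb for pop, wb in outcomes_B) / 2    # = 7  -> longtermist regress
exp_pop_B = sum(pop for pop, _ in outcomes_B) / 2     # = 4  -> an Expansion
exp_wb_B = sum(wb for _, wb in outcomes_B) / 2        # = 4  -> a wellbeing increase

assert ev_B < ev_A and exp_pop_B > 3 and exp_wb_B > 3
# Note that ev_B (7) differs from exp_pop_B * exp_wb_B (16): the expected
# product factorizes only when the two quantities are uncorrelated.
```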

You write

expected utility, and its main factors, expected number of people, and expected value per person

but that is only true if number of people and value per person are stochastically independent, which they probably aren't, right?
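(In general, E[N·W] = E[N]·E[W] + Cov(N, W), so the decomposition into those two "factors" is exact only when the number of people and the value per person are uncorrelated.)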

Thank you so much, this is especially helpful for a newbie like me who was a little confused about these things and now thinks it is mainly a matter of terminology.

Just to clarify: In this text, are "welfare" and "utility" referring to the same concept, or are they just proportional to each other because of the unitarian assumption?
