Assistant professor @ BI Norwegian Business School

Working (0-5 years experience)

Postdoc in statistics. Three kids, two cats, one wife. I write about statistics, EA, psychometrics, and other things at my blog.

I'm looking for collaborators in everything I do. If you (a) see anything I've written about that you'd like to work on, or (b) want me to work on something you're doing (paid or not, depending on the project), please contact me.

I'm actively seeking master's students who want to work on EA-adjacent statistics.

Statistics. Statistics is often harder than you think.


Thanks for writing this.

- I wrote about "decay of predictions" here. I would classify the problem as hard.
- Do you have a feeling for how suitable these are as academic projects, such as bachelor's or master's theses, perhaps? It would be great to show a list of projects to students!

Could you elaborate?

Sorry, but I don't understand what you mean.

Here's the context I'm thinking about. Say you have two options $a$ and $b$. They have different true expected values $\mu_a$ and $\mu_b$. The market estimates their expectations as $\hat{\mu}_a$ and $\hat{\mu}_b$. And you (or the decider) choose the option with the highest estimated expectation. (I was unclear about estimation vs. true values in my previous comment.)
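The setup above can be sketched in a few lines of code. All the numbers here, and the Gaussian noise standing in for market estimation error, are made-up assumptions for illustration:

```python
import random

# Hypothetical true expected values for options a and b.
mu = {"a": 0.40, "b": 0.55}

# The market's estimates: true values plus noise (a stand-in for
# market estimation error; the noise scale is arbitrary).
random.seed(0)
estimate = {k: v + random.gauss(0, 0.05) for k, v in mu.items()}

# The decider picks the option with the highest *estimated* expectation.
choice = max(estimate, key=estimate.get)
print(choice, estimate[choice], mu[choice])
```

Note that the decision rule only ever looks at the estimates, not the true values — which is exactly why estimation error matters here.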

Does this have something to do with your remarks here?

Also, there's always a way to implement "the market decides". Instead of asking P(Emissions | treaty), ask P(Emissions | market advises treaty), and define the market's advice as the closing prices. This obviously won't be very helpful if no one is likely to listen to the market, but again, the point is to think about markets that people are likely to listen to.

Potential outcomes are very clearly and rigorously defined as collections of separate random variables, there is no "I know it when I see it" involved. In this case you choose between two options, and there is no conditional probability involved unless you actually need it for estimation purposes.

Let's put it a different way. You have the option of flipping one of two coins, either a blue coin or a red coin. You estimate the probabilities of heads as $\hat{p}_{\text{blue}}$ and $\hat{p}_{\text{red}}$. You base your choice of which coin to toss on which estimated probability is larger. There is actually no need to use scary-sounding terms like counterfactuals or potential outcomes at all; you're just choosing between random outcomes.

We could create a separate market on how the decision market resolves, and it will resolve unambiguously.

That sounds like an unnecessarily convoluted solution to a question we do not need to solve!

However we deal with that, I expect the story ends up sounding quite similar to my original comment - the critical step is that the choice does not depend on anything but the closing price.

Yes, I agree. And that's why I believe we shouldn't use conditional probabilities at all, as they make confusion possible.

In this case it would be best to use the language of counterfactuals (aka potential outcomes) instead of conditional expectations. In practice, the market would estimate $E[Y(a)]$ and $E[Y(b)]$ for the two random outcomes $Y(a)$ and $Y(b)$, and you would choose the option with the highest estimated expected value. There is no need to put conditional probability into the mix at all, and it's probably best not to, as there is no obvious probability to assign to the "events" that $a$ is chosen and that $b$ is chosen.

Satan cuts an apple into a countable infinity of slices and offers it to Eve, one piece at a time. Each slice has positive utility for Eve. If Eve eats only finitely many pieces, there is no difficulty; she simply enjoys her snack. If she eats infinitely many pieces, however, she is banished from Paradise. To keep things simple, we may assume that the pieces are numbered: in each time interval, the choice is Take piece n or Don’t take piece n. Furthermore, Eve can reject piece n, but take later pieces. Taking any countably infinite set leads to the bad outcome (banishment). Finally, regardless of whether or not she is banished, Eve gets to keep (and eat) her pieces of apple. Call this the original version of Satan’s apple.

We shall sometimes discuss a simplified version of Satan’s apple, different from the original version in two respects. First, Eve is banished only if she takes all the pieces. Second, once Eve refuses a piece, she cannot take any more pieces. These restrictions make Satan’s apple a close analogue to the two earlier puzzles.

Problem: When should Eve stop taking pieces?
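One way to see the bite of the problem is to attach concrete numbers to it. The slice utilities and the banishment penalty below are my own assumptions, not part of the puzzle; I take slice $k$ to be worth $2^{-k}$ utils so that the total is bounded, and banishment to cost $B$ utils:

```python
# Hypothetical utilities: slice k is worth 2**-k utils; banishment costs B utils.
B = 10.0

def utility(n):
    """Total utility from taking the first n slices (finite n: no banishment)."""
    return sum(2.0 ** -k for k in range(1, n + 1))

# Each additional slice strictly increases utility...
print(utility(5), utility(6))
# ...but the finite totals converge to 1, while taking *every* slice
# triggers banishment and yields 1 - B instead.
print(utility(20), 1.0 - B)
```

Under these assumptions, every individual slice is worth taking, yet taking all of them is catastrophic — which is exactly what makes the stopping problem awkward: no particular finite $n$ is an obviously correct place to stop.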

I think the Stack Exchange sites have automatic reminders, or maybe even checks, for this kind of thing. My last post on Cross Validated (the Stack Exchange site for statistics) had hints about reproducible examples, I think.

Gwern has a writing checklist. Similar checklists could be forced on the author prior to submission.

Thanks for your suggestions! Big fan of yours for many years, by the way. Mating Intelligence was the article collection that made me want to become an evolutionary psychologist (I ended up a statistician though, mostly due to the much safer career path).

Now I notice that I didn't say in the post that these four points are just a summary; the meat is in the linked post. I *think* I have explained these terms there, at least graded pairwise comparisons and discrete choice models. But yeah... I will modify the summary to use less technical jargon and provide an introduction.

I think it's important to build more connections between EA approaches to value (e.g. in AI alignment) and existing behavioral sciences methods for studying values.

Yes, and also to academia in general. I honestly didn't think about AI alignment when writing this post, but that could be one of the applications.

Thomas Hurka’s St Petersburg Paradox: Suppose you are offered a deal—you can press a button that has a 51% chance of creating a new world and doubling the total amount of utility, but a 49% chance of destroying the world and all utility in existence. If you want to maximise total expected utility, you ought to press the button—pressing the button has positive expected value. But the problem comes when you are asked whether you want to press the button again and again and again—at each point, the person trying to maximise expected utility ought to agree to press the button, but of course, eventually they will destroy everything.[2]

I have two gripes with this thought experiment. First, time is not modelled. Second, it's left implicit why we should feel uneasy about the thought experiment, and that doesn't work when philosophical intuitions vary so much. I honestly don't feel uneasy about the thought experiment at all (only slightly annoyed). But maybe I would have, had it been completely specified.

I can see two ways to add a time dimension to the problem. First, you could let all the presses be predetermined and happen in one go, which gets us into Satan's apple territory. Second, you could have a 30-second pause between presses. But in that case, we would accumulate massive amounts of utility in a very short time - just the seconds between presses would be invaluable! And who cares if the world ends in five minutes with probability close to one when every second it survives is so sweet? :p
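The arithmetic behind the repeated-button story is worth making explicit. Starting from one unit of utility (an assumption for the sketch), each press multiplies expected utility by $0.51 \cdot 2 = 1.02 > 1$, so every press looks good in expectation, while the survival probability $0.51^n$ collapses toward zero:

```python
# Repeated-button arithmetic (assumes utility starts at 1 unit).
p_create, p_destroy = 0.51, 0.49

for n in (1, 10, 100):
    survival = p_create ** n        # chance the world still exists after n presses
    expected = (p_create * 2) ** n  # expected utility after n presses
    print(n, survival, expected)
```

This is the tension the thought experiment relies on: the expected value diverges while the probability of any world remaining goes to zero.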

If I understand you correctly, what you're proposing is essentially a subset of classical decision theory with bounded utility functions. Recall that, under classical decision theory, we choose our action according to $\max_{a \in A} E[u(a, X)]$, where $X$ is a random state of nature and $A$ an action space.

Suppose there are $N$ (infinitely many works too) moral theories $s_1, s_2, \ldots, s_N$, each with probability $p(s_i)$ and associated utility function $u_i$. Then we can define $u(a, X) = \sum_{i=1}^N p(s_i) u_i(a, X)$. This step gives us (moral) uncertainty in our utility function.

Then, as far as I understand you, you want to define the component utility functions as $u_i(a, X) = \begin{cases} 1, & \text{if } (a, X) \text{ is acceptable under theory } s_i, \\ 0, & \text{if } (a, X) \text{ is unacceptable under theory } s_i. \end{cases}$ Then $0 \le E[u_i(a, X)] \le 1$ is the probability of an acceptable outcome under $s_i$. And since we're taking the expected value of these bounded component utilities to construct $u$, we're in classical bounded utility function land.
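A minimal sketch of this construction, where the theories, their probabilities, and the acceptability judgments are all made-up assumptions:

```python
# Credences over three hypothetical moral theories.
p = {"s1": 0.5, "s2": 0.3, "s3": 0.2}

# Made-up acceptability judgments: which (action, state) pairs each
# theory deems acceptable.
acceptable = {
    "s1": {("a1", "x1"), ("a1", "x2")},
    "s2": {("a1", "x1")},
    "s3": {("a2", "x1")},
}

def u_i(theory, a, x):
    """0/1 component utility: is outcome (a, x) acceptable under the theory?"""
    return 1.0 if (a, x) in acceptable[theory] else 0.0

def u(a, x):
    """Credence-weighted utility; bounded between 0 and 1 by construction."""
    return sum(p[s] * u_i(s, a, x) for s in p)

print(u("a1", "x1"))  # weighted acceptability of (a1, x1) = p(s1) + p(s2)
```

Since every $u_i$ takes values in $\{0, 1\}$ and the credences sum to one, $u$ is automatically bounded in $[0, 1]$ - the "classical bounded utility" point above.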

That said, I believe that