Paal Fredrik Skjørten Kvarberg

Joined Jun 2021

Comments (13)

Thank you for this! I think there are several paths to impact for a scaled-up version of this, but I am not at all sure which path is most consequential. I am curious what you think is the most important way evaluations of this sort can have an impact.

Thank you for this! I think your framework for instructional design is likely to be very useful to several projects working to create educational content about EA. I happen to be working on one such project, and would love to get in touch. Here is a one-pager about the project I am currently pursuing. I shared your post with others who might find it interesting.

I look forward to seeing what you decide to do next!

I participated in an activity of this sort some years ago. I really enjoyed the structured conversation and working towards consensus in a group. The experience was far more intense than any presentation or debate format I have otherwise been part of. I don't know whether EA groups should use the technique, but I wanted to share from my own experience:)

Thanks for writing up this idea in such a succinct and forceful way. I think the idea is good, and I would like to help in any way I can. However, I would encourage thinking hard about the first part, "If we get the EA community to use a lot of these", which I think might be the hardest part.

I think that there are many ways to do something like this, and that it's worth thinking very carefully about details before starting to build. The idea is old, and there is a big graveyard of projects aiming for the same goal. That being said, I think a project of this sort has amazing upsides. There are many smart people working on this idea, or very similar ideas right now, and I am confident that something like this is going to happen at some point. 

Metaculus is also currently working on a similar idea (causal graphs). Here are some more people who are thinking about or working on related ideas, and who might also appreciate your post: Adam Binks, Harrison Durland, David Manheim, and Arieh Englander (see their MTAIR project).

Seems like I forgot to change "last updated 04. January 2021" to "last updated 04. January 2022" when I made changes in January, haha.

I am still working on this. I agree with Ozzie's comment below that doing a small part of this well is the best way to make progress. We are currently looking at the UX part of things. As I describe under this heading in the doc, I don't think it is feasible to expect many non-expert forecasters to enter a platform to give their credences on claims. And expert forecasters are, as Ian mentions below, in short supply. Therefore, we are trying to make it easier to give credences on issues in the same place you read about them. I tested this idea in a small experiment this fall (with Google Docs), and it does seem that motivated people who would not enter prediction platforms to forecast issues might give their takes if elicited this way. Right now we are investigating the idea further through an MVP of a browser extension that lets users give credences on claims found in texts on the web. We will experiment more with this during the fall. A more tractable version of the long doc is likely to appear on the Forum at some point.
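To give a rough sense of what I mean, here is a minimal sketch of what the content script of such an extension could look like (in TypeScript). The record shape, the selection-and-prompt flow, and the localStorage key are all just illustrative assumptions on my part, not how our actual MVP works:

```typescript
// Minimal content-script sketch for eliciting credences while reading.
// Illustrative only: the CredenceRecord shape, prompt flow, and storage
// key are assumptions, not the actual MVP described above.

interface CredenceRecord {
  claim: string;      // the text the reader selected
  credence: number;   // probability in [0, 1]
  url: string;        // page the claim was found on
  timestamp: number;  // when the credence was given
}

const STORAGE_KEY = "credences"; // hypothetical storage key

function saveCredence(record: CredenceRecord): void {
  const existing: CredenceRecord[] = JSON.parse(
    localStorage.getItem(STORAGE_KEY) ?? "[]"
  );
  existing.push(record);
  localStorage.setItem(STORAGE_KEY, JSON.stringify(existing));
}

// When the reader selects a claim on the page, ask for a credence in place.
document.addEventListener("mouseup", () => {
  const claim = window.getSelection()?.toString().trim();
  if (!claim) return;

  const answer = window.prompt(
    `How likely is this claim (0-100%)?\n\n"${claim}"`
  );
  if (answer === null) return;

  const percent = Number(answer);
  if (Number.isNaN(percent) || percent < 0 || percent > 100) return;

  saveCredence({
    claim,
    credence: percent / 100,
    url: window.location.href,
    timestamp: Date.now(),
  });
});
```

A real extension would of course use extension storage and a less intrusive UI than window.prompt, but the basic elicit-while-reading loop is the same.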

I'm not wedded to the concrete ideas presented in the doc, I just happen to think they are good ways to move closer to the grand vision. I'd be happy to help any project moving in that direction:)

Thank you for this. This is all very helpful, and I think your explanations of giving differential weights to factors for average orgs and EA orgs seem very sensible. The 25% for unknown unknowns is probably right too. It seems plausible to me that most folks at average orgs would fail to recognize the value of prediction markets even if they turned out to be valuable (since proving that value would require work).

It would really surprise me if the 'main reason' why there is a lack of prediction markets had nothing to do with anything mentioned in the post. I think all unknown unknowns might jointly explain 25% of why prediction markets aren't adopted, but the chance of any single unknown factor being the primary reason is, I think, quite slim. If that 25% were spread across, say, five unknown factors of similar size, each would explain only about 5% on its own, well below the weights given to the listed factors.

On 4, I very much agree that this section could be more nuanced by mentioning some positive side-effects as well. There might be many managers who fear being undermined by their employees, and surely many employees might feel ashamed if they are wrong all the time. However, I think the converse is also true: some managers are insecure and would love for the company to take decisions on complex, hard-to-determine issues collectively, and some employees would like an arena to express their thoughts on things (where their judgments are heard, and maybe even serve to influence company strategy). I think this is an important consideration that didn't come through very clearly. There are other plausible goods of prediction markets that aren't mentioned in the value prop but which might be relevant to their expected value.

Thank you all for posting this! I am one of the people confused by the puzzle this post makes serious inroads towards shedding light on. I really appreciate the way you break down the explanatory factors. To me, all four seem to be important pieces of the puzzle. Here they are:

  1. The markets must have a low enough cost to create and maintain.
  2. The markets must provide more value to decision-makers than the cost to create them and to subsidize predictions on them.
  3. The markets must be attractive enough to traders to elicit accurate predictions.
  4. The markets must not have large negative side-effects, such as costs to the company's dynamics and morale.

Although you explain the idea behind each of these, I have a hard time forming a mental model of their relative importance. Do you think such an exercise is feasible, and if so, do any of you have a sense of the explanatory strength of each factor relative to the others? Also, how likely do you think it is that the true explanation has nothing to do with any of these?

Strong upvoted! I think something like this would introduce the wiki to exactly the kinds of people we would like to use it. I like the first version best, as many writers might not be aware of how to link to tags, or of what tags exist. Also, this nudges writers to use the same concepts for their words (because it is embarrassing to link a word to a tag while using it in a different sense than the one explained in that tag).
