
Robin

241 karma

Bio

Robin studies climate change.

Posts
3


Comments
37

Lovely writeup! Just to flag that a handful of others refused to work on the Manhattan Project in the first place, including Lise Meitner and Franco Rasetti.

By my count, you have implicitly agreed that all of 1-6 used to be issues, that 2-4 are currently not issues, and that 5 now needs the phrase "negative equity" deleted. I'm still making mana by reading the news, so I don't see that you've halved that claim. You're right that whalebait is less profitable, and that I now need to actually search for free mana to find the free-mana markets. The fact that I can still do this and then throw all my savings into it means that we should expect exponential growth of mana at some risk-free rate (depending on how quickly these markets saturate), which is then the comparison point for determining investment skill. In practice there are most likely better things to do with it, and also I can't be bothered.
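As a rough illustration of that comparison point (the numbers here are made up for the sketch, not Manifold's actual rates):

```python
# Illustrative only: if "free mana" markets yield a roughly risk-free return r per period
# (limited by how quickly such markets saturate), a passive balance compounds as
#   M(t) = M(0) * (1 + r) ** t
# and that curve, not zero, is the baseline a trader has to beat to demonstrate skill.

def passive_benchmark(initial_mana: float, risk_free_rate: float, periods: int) -> float:
    """Mana you'd expect from only harvesting near-riskless opportunities."""
    return initial_mana * (1 + risk_free_rate) ** periods

# Hypothetical numbers: M1,000 starting balance, 2% per month, over a year.
print(passive_benchmark(1_000, 0.02, 12))  # ~1,268 mana with no real forecasting skill
```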

I recognise that inflation is a good thing in countering historic wealth inequality, and will remark that it's effectively a wealth tax. It unfortunately coincided with the other changes, which made it harder and less rewarding to donate and worsened the time-value problem, triggering my general disengagement with the site. I agree that loans never fixed this problem, but they mitigated it partially.

The difference between this and Metaculus sock puppets is that there's no reward for making them there. The virtual currencies can't be translated into real-world gain, and only one "reward" depends on other people, so making bad predictions with your sock puppets doesn't make you look that much better if people look at multiple metrics. Similarly, by requiring currency to express a belief, Manifold structurally limits engagement on questions that can never resolve positively - it's cost-free to predict extinction on Metaculus, but on Manifold, even with perfect foresight (or the guarantee that the market will be N/A'd later), you still sacrifice the time value of your mana to warn people of a risk. This problem is unique to prediction markets: they make it costly (but potentially remunerated) to express your beliefs.

The other problem unique to adversarial prediction grading is that collaboration is disincentivised. Currently, because mana isn't that valuable, the comments section is full of people exchanging information for social kudos. But when a market becomes financially lucrative, people stop doing this - the comments on Polymarket are basically pure spam. This is one of the reasons why I find the idea that Manifold should become more financialised very unwise. It's not clear that the collaborative factor matters less than the professionalisation factor for net predictive power (as indicated by the fact that Polymarket doesn't have that good a calibration). To make money on these things, you don't need to beat a superforecasting team (the thing that actually beats all of these statistical aggregation methods, lest we forget); you need to beat the individual whose salary the prize can support.

I don't believe the original donation has been redistributed and donations are now curtailed by the pivot, so I imagine it will last a while longer. I know the founders believe donations will eventually come from mana purchases (or more venture capital), I'm just skeptical.

Thanks for your considered comments! I agree that Metaculus should make its best prediction more available. I also attach low importance to the self-reported Brier scores, though Manifold already excludes a tail of low-traded questions when reporting, so that's not really a good explanation for the discrepancy.

To be clear, the paper specifies that *algorithmic adjustments* of polls outperform markets, not that the means of polls are better than the means of markets (in line with the differences between the two Metaculus predictions). If you don't adjust, they're worse, as expected and as seen in the Metaculus calibration data. This conclusion is clearly stated in the abstract, and they didn't try very complicated algorithms to combine estimates.
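For concreteness, one standard example of such an algorithmic adjustment (not necessarily the paper's exact method) is extremising the mean of the poll:

```python
# A common algorithmic adjustment to poll aggregates (not necessarily the one used in
# the paper discussed above) is extremisation: push the mean probability away from 0.5
# to correct for individual forecasters' under-confidence.

def extremize(mean_probability: float, alpha: float = 2.0) -> float:
    """Transform p -> p^a / (p^a + (1-p)^a); alpha > 1 pushes estimates toward 0 or 1."""
    p = mean_probability
    return p ** alpha / (p ** alpha + (1 - p) ** alpha)

print(extremize(0.7))   # ~0.84: the raw mean of the poll is sharpened
print(extremize(0.5))   # 0.5: a genuinely uncertain consensus is left alone
```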

I agree (and mentioned) that recent changes alleviate some of these points. I don't think they cure them as thoroughly as you indicate, though. Firstly, the pivot didn't retroactively apply these changes, so people who successfully asked engaging questions or caught whalebait still have huge mana supplies. If they're not limited by engagement time, people with any positive predictive power can exponentially grow the cash injection, and the profit will naturally then be laundered into conventional markets. In practice, I don't think top whales are exponentially growing their income most of the time - growth usually seems pretty linear, probably due to the difficulty of finding appropriate markets. But if you want to prove that good whalebait hunters are good predictors, you will need to demonstrate that they get a good rate of return on their investment, not merely that they have also derived M from other sources.

People can no longer go into negative equity, though they can still create accounts and transfer the M600 or make risky bets, which reduces but does not fix the issue.

I just went on the site and found free mana for day-old news within the top 10 links. Ironically, the pivot/transaction taxes mean that there's less incentive for people with limited M to pick up these pennies, so they're left out for longer and mainly benefit whales. There are mechanisms to stop news-based trading (e.g. you could retroactively reverse post-news transactions), but they would recreate the negative equity problems.

I am generally skeptical that some of the changes made during the pivot will remain in the long term, as the number of users seems to have been trending downwards since it happened, and the changes have broken some other things. Most notably, there is now no force mitigating the time-value-of-money effects, so we should not expect the price of a long-term market to equal the expectation of that market even under ideal circumstances. Also, the transaction taxes are large, which creates market inefficiencies, lowering the precision of the market (because it's not worth correcting a market error unless it's wrong by a larger margin now). These are problems that neoliberal economists ought to be aware of, though, so I imagine there are plans to mitigate them.
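A back-of-the-envelope sketch of why I expect this, with hypothetical numbers (the 10%/year opportunity cost of mana and the 5-year horizon are assumptions for illustration, not Manifold parameters):

```python
# Sketch of the time-value problem on a long-term binary market, with made-up numbers.
# A risk-neutral trader whose mana could otherwise compound at rate r will only buy YES
# if the implied return beats (1 + r) ** years, and similarly for NO, so nobody corrects
# the price anywhere inside a wide "no-trade" band around the believed probability.

def no_trade_band(p: float, r: float, years: float) -> tuple[float, float]:
    """Range of prices that attract neither YES nor NO buyers, if every trader
    believes the probability is p and could otherwise grow mana at rate r."""
    discount = (1 + r) ** years
    floor = p / discount              # YES buyers only step in at or below this price
    ceiling = 1 - (1 - p) / discount  # NO buyers only step in at or above this price
    return floor, ceiling

# Hypothetical: 5-year market, shared belief of 50%, 10%/year opportunity cost of mana.
print(no_trade_band(0.5, 0.10, 5))  # ~(0.31, 0.69): the price tells you very little
# Transaction taxes widen this band further: a mispricing smaller than the round-trip
# fee isn't worth correcting at any horizon.
```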

The idea that real money improves performance is another of these neoliberal assumptions with limited evidence. There are a range of papers on this issue that come to different answers as to what conditions, if any, make it true.
https://www.tandfonline.com/doi/abs/10.1080/1019678042000245254
https://www.electronicmarkets.org/fileadmin/user_upload/doc/Issues/Volume_16/Issue_01/V16I1_Statistical_Tests_of_Real-Money_versus_Play-Money_Prediction_Markets.pdf
https://ubplj.org/index.php/jpm/article/view/441
https://www.ubplj.org/index.php/jpm/article/view/479
It is almost certainly not true for extinction risk factors, which are a substantial EA interest for prediction-making. It could be true that there is some threshold beyond which money becomes strongly influential, but, for instance, Metaculus informally finds that running competitions for thousands of dollars harms engagement with questions.

I think you misunderstood the counterfactuality point. The counterfactuality issue with the charity program is that the EA orgs could just have given the money to the charities they normally do, without putting it into Manifold bank accounts in the meantime and waiting for people to choose which ones to give to. Allowing people to take the money out as dollars is irrelevant, and just delays things further.

I'm a bit confused by this discussion, since I haven't in any way suggested banning people from using the site. That's a completely separate issue from managing the balance of ideologies behind the site design. As it happens, Manifold bans people fairly liberally, but mostly because they manipulate markets via bots/puppets, troll, or are abusive: this is required for balanced markets and a good community spirit, and seems a reasonable policy.

I strongly encourage people to discuss Manifest elsewhere - as stated above, I didn't go, and I only comment on it to illustrate the lack of thought-diversity in the site design.

The problems I outline are all caused by the fact that Manifold requires that all value be denominated in a fungible and impersonal currency* that relates probabilities to rewards, and assumes that market forces will resolve irregularities in the distribution of this currency. This assumption is what I am criticising, and it is a reasonable definition of neoliberal. I neither assert nor believe that bigots participating in the market make it worse (as long as they are diverse bigots who aren't publicly abusive); I am criticising the lack of thought diversity in the design of the market.

*Yes, I know it has two currencies now, which are hard to convert in one direction, but they're not used in systematically different ways within the site. Some of these criticisms could be alleviated if, say, personal markets produced a currency that couldn't be spent on political markets.

Thanks for engaging positively! You're correct about the crux - if the resulting prediction market worked really well, the technical complaints wouldn't matter. But the number of predictions is much less important to me than their trustworthiness and the precision of specifying exactly what is being predicted. Being well-calibrated is good, but it does not necessarily indicate good precision (i.e. a good Brier score), and calibration.city is quite misleading in presenting the orders of magnitude more questions on Manifold as a larger dot, rather than using dot size to indicate the uncertainty bounds on the calibration.
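To illustrate what I'd rather dot size encode (a crude normal-approximation interval; calibration.city's actual methodology may differ):

```python
# The informative quantity for comparing calibration plots is the uncertainty on each
# bin, which shrinks with the number of resolved questions, not the raw count itself.
import math

def calibration_bin_interval(observed_frequency: float, n_questions: int, z: float = 1.96):
    """Approximate 95% interval for the true resolution rate in one probability bin."""
    se = math.sqrt(observed_frequency * (1 - observed_frequency) / n_questions)
    return observed_frequency - z * se, observed_frequency + z * se

# A bin with 10,000 resolved questions vs one with 100, both resolving 62% of the time:
print(calibration_bin_interval(0.62, 10_000))  # ~(0.61, 0.63): tightly pinned down
print(calibration_bin_interval(0.62, 100))     # ~(0.52, 0.72): consistent with 0.55 or 0.70
```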

It's not true that markets at any scale produce the most accurate forecasts. There's extensive literature showing that long-term prediction markets need to worry about the time value of money and risk aversion influencing the market valuation. Manifold's old loan system helped alleviate the time-value problem but gave you a negative equity problem. I don't see this time-value effect in your calibration data, but I suspect that's because the data are dominated by short-term markets. Because market participation is strongly affected by liquidity, smaller markets don't give people incentives to get involved unless they're very wrong. Thus getting markets to scale up when they're not intrinsically controversial, and therefore interesting, is a substantial problem. The incentive to make accurate predictions can just be prizes for accurate individual predictions, which can be aggregated into a site prediction by any other mechanism. The key features of a market mechanism for prediction aggregation are that the reward must be tied to the probability of the event, and that it must be blind to who is providing the money. There's no reason to believe either of these is a useful constraint, and I don't believe they're optimal.
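A minimal sketch of that alternative, using the log score as one example of a proper scoring rule (simplified for illustration; not any site's actual scoring implementation):

```python
# Reward each forecaster with a proper scoring rule, then aggregate however you like
# (mean, median, extremised mean, ...). The reward depends only on the forecast and the
# outcome, not on who else is participating or how much money they bring.
import math

def log_score_reward(forecast: float, outcome: bool, stake: float = 1.0) -> float:
    """Prize proportional to the log score; maximised in expectation by reporting
    your true belief, with no dependence on other participants' wealth."""
    p = forecast if outcome else 1 - forecast
    return stake * (1 + math.log(p))  # affine rescaling keeps it a proper score

def aggregate(forecasts: list[float]) -> float:
    """Any aggregation mechanism can sit on top; a plain median as a placeholder."""
    ordered = sorted(forecasts)
    return ordered[len(ordered) // 2]

print(log_score_reward(0.8, True))            # ~0.78
print(log_score_reward(0.8, False))           # ~-0.61: confident misses are penalised
print(aggregate([0.2, 0.35, 0.4, 0.6, 0.7]))  # 0.4
```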

I note that many accounts are still in negative equity, and that a few such accounts, which primarily generated their wealth by betting on weird metamarkets, substantially influence the price of AI extinction risk markets. The number and variety of markets is therefore potentially detrimental to the accuracy of predictions, particularly given the power-law rewards to market participation. While I refer to negative equity, the fact that we can still create puppets and transfer their $200 to another user (directly or via bad bets) means the problem persists to a smaller extent without anyone's account going negative.

This article isn't an exclusive list of the countries that celebrate it, merely a list of how it's celebrated in 11 noteworthy nations. It's also celebrated in Iran, China, Germany...

Good analogy. Note that environmental statements made by oil companies cannot be trusted to hold even for a few years once expected profits increase, even when costly actions and investment patterns appear to back them up temporarily. E.g.
https://www.ft.com/content/b5b21c66-92de-45c0-9621-152aa335d48c

'BP's chief executive Bernard Looney defended its latest reversal, stating that "The conversation three or four years ago was somewhat singular around cleaner energy, lower-carbon energy. Today, there is much more conversation about energy security, energy affordability."'

Do current person-affecting ethicists become longtermists if we achieve negligible senescence? Would virtue ethicists, too, if we could predict how their virtue will develop over time? Do development economists become longtermists if we develop Foundation-style Psychohistory? We don't have a singular term for "not a virtue ethicist" other than "non-virtue ethicist", and there's no commonality amongst non-longtermists other than being the out-group to longtermists.

"Neartermist" should mean someone who explicitly sets a high effective discount rate (either due to uncertainty or a pure rate of time preference); it should not include non-consequentialists or people whose person-affecting views result in a low concern for future generations.
