Prediction Markets Are Not Polls
Misconceptions about prediction markets are common: from the New York Times to prediction market platforms themselves, there's no shortage of people confusing them with polls.
Prediction markets and polls are similar in one way: decentralization. Traditional forecasting methods generally rely on the judgment of one individual, usually a subject matter expert (or, more recently, on computer models designed by a single expert or small group of experts). This is simple and reasonably effective, but as much psychological research has found, experts can be remarkably overconfident and inaccurate even within their own field.
The wisdom of crowds is the observation that aggregating the beliefs of a large number of people will often lead to a more accurate final judgment, even when the crowd includes many non-experts.
To understand why this works, think of people as all trying to have true beliefs but being led astray by incomplete information and cognitive biases. As a result, people's beliefs have a random component: they end up distributed around the truth, with a lot of variation. Any one person's belief is effectively a random point, but the center of the distribution is the truth, so averaging out all the points can reveal what it is.
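As a rough illustration of that idea (the numbers and the symmetric-noise assumption are invented for the example, not a claim about any real crowd), a minimal simulation might look like this:

```python
# Sketch of the wisdom-of-crowds intuition: each guess is the truth plus
# individual noise, and the average of many guesses lands near the truth.
import random

random.seed(0)

TRUTH = 1200       # hypothetical true value (say, jellybeans in a jar)
N_PEOPLE = 1000

# Each person's guess is the truth distorted by a random error of up to +/-40%.
guesses = [TRUTH * random.uniform(0.6, 1.4) for _ in range(N_PEOPLE)]

crowd_average = sum(guesses) / len(guesses)
print(f"One person's guess: {guesses[0]:.0f}")
print(f"Crowd average:      {crowd_average:.0f}")  # typically within a percent or two of 1200
```

The key assumption here is that errors are scattered evenly around the truth; the next section is about what happens when they aren't.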
Polls and prediction markets are both ways to do this. That's about where the similarities end.
The limits of polls
Polls work well when there's a broad spread of different opinions on the subject in question. But sometimes almost everyone is misinformed or biased in the same direction, and then the poll will just reflect the average bias, not the truth.
One way to fix this is to widen the range of people being polled. If you polled North Koreans on the relative standards of living in North Korea compared to the rest of the world, you probably wouldn't get a very accurate result. A poll that includes many different countries would do a lot better.
But what about when the majority of humanity is misinformed? Maybe North Korea actually does have amazing living standards that it's hiding from the rest of the world, and polling people from even more countries would just make your results worse. Or how about smoking in the 1940s, when most people thought it was good for your health, and only a few scientists (and tobacco company executives) had started to realize it caused cancer? What's needed is some method that gives more weight to dissenting beliefs backed by strong evidence.
An incentive to be right
The core difference between polls and prediction markets is that in prediction markets, participants have a concrete incentive to be correct.
For example, let's say the question is "Will NASA's next rocket launch successfully?", and by default the market starts at 50%. I can buy a share of YES for 50 cents, and it's worth a whole $1 if the rocket does in fact launch; I can double my money! But if the rocket doesn't launch, then my YES share is worth nothing at all, and I lose my investment. The best financial decision is to only buy shares if they're cheaper than the expected value of those shares.
That is, if I know that 95% of all the rockets NASA has ever launched have launched successfully, and nothing is different about this one, then each YES share is worth 95 cents to me. If I can buy them for only 50 cents, that's free money! Every YES share I buy pushes the market probability up slightly, so maybe I have to pay 51 cents for the second share, 52 cents for the next, and so on. Eventually the market price reaches 95 cents, and it's no longer in my favor to keep buying.
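Here's a minimal sketch of that reasoning. The one-cent-per-share price impact is a deliberate simplification for illustration, not how any particular exchange actually moves its prices:

```python
# Keep buying YES shares (which pay $1 if the event happens) while the price
# is below my own probability estimate; stop once the price catches up.

BELIEF_CENTS = 95   # I think there's a 95% chance the rocket launches
price_cents = 50    # the market currently sells YES shares for 50 cents

shares = 0
spent_cents = 0
while price_cents < BELIEF_CENTS:   # positive expected value per share
    spent_cents += price_cents
    shares += 1
    price_cents += 1                # simplified price impact: +1 cent per share

print(f"Bought {shares} shares for ${spent_cents / 100:.2f}; "
      f"market now at {price_cents} cents")
# The price ends up at 95 cents, which is my own estimate of the probability.
```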
The relevant principle here is expected value maximization. Sure, there's a small (5%) chance that I lose my money on the rocket market, but if I place positive-expected-value bets in a lot of different markets, the chance that I come out behind overall approaches 0, and the chance that I turn a profit approaches 100%.
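A quick Monte Carlo sketch of that claim, reusing the rocket numbers from above (everything else here is invented for illustration):

```python
# Each bet: buy one $1-payout YES share for 50 cents on an event with a 95%
# chance of happening. Any single bet can lose, but a portfolio of many
# independent positive-EV bets is almost guaranteed to profit overall.
import random

random.seed(1)

def profit_from_one_bet(p_win: float = 0.95, price: float = 0.50) -> float:
    return (1.00 - price) if random.random() < p_win else -price

def total_profit(n_bets: int) -> float:
    return sum(profit_from_one_bet() for _ in range(n_bets))

TRIALS = 10_000
for n_bets in (1, 10, 100):
    profitable = sum(total_profit(n_bets) > 0 for _ in range(TRIALS))
    print(f"{n_bets:>3} bets: profitable in {100 * profitable / TRIALS:.1f}% of trials")
# 1 bet is profitable ~95% of the time; 100 independent bets, essentially always.
```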
So unlike in a poll, where a voter is just going to place a quick vote based on their gut reaction and then move on, a prediction market encourages people to do deep research on the subject and seriously examine themselves for biases that may adversely affect their reasoning.
People frequently engage in self-deception, and we have neither the time nor the inclination to introspect on every decision. A vote in favor of whatever you'd most like to be true is costless, so most voters aren't going to stop and carefully consider their decision before voting. But once there's money on the line, people start being a little more serious.
Good predictors are weighted more heavily than bad ones
Everyone gets an equal say in a vote, which is good for determining values and goals that are fair to everyone (democracy!), but less good for determining what's true, since the average person is not deeply informed on most issues.
Prediction markets filter out people who aren't willing to "put their money where their mouth is", keeping participation to people whose beliefs rest on solid evidence, and allowing those with stronger evidence and more reliable world models to have a larger influence on the market. A trader who isn't really sure will only bet small amounts, while someone who's discovered groundbreaking new information can bet heavily in proportion to the weight of their evidence and have a larger impact on the market.
This effect compounds over time, as prediction markets reward traders with more money for being right, which gives them even more ability to influence future markets. Over time, predictive power shifts towards the people who have been most accurate in the past, which further improves the market's accuracy.
This means that a prediction market's probability can be interpreted as a weighted average of the beliefs of everyone who's aware of the market's existence.
Importantly, this is a weighted average of probabilistic beliefs. A large prediction market sitting at 73% does not mean that 73% of traders think the event is guaranteed to happen; it means the market's aggregate estimate is a 73% chance that the event happens. This could be because all traders agree that 73% is the right probability, or because traders disagree and assign different probabilities to it, and 73% is what the invisible hand of the market has determined is the correct aggregate.
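As a toy illustration of what that aggregate roughly represents (the traders, probabilities, and stakes below are invented, and real markets arrive at a price through trading rather than an explicit formula):

```python
# Stake-weighted average of traders' probability estimates.
traders = [
    # (probability the trader believes, dollars they're willing to stake)
    (0.50, 100),
    (0.80, 400),
    (0.72, 500),
]

total_stake = sum(stake for _, stake in traders)
market_estimate = sum(p * stake for p, stake in traders) / total_stake

print(f"Stake-weighted aggregate: {market_estimate:.0%}")  # 73%
```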
An incentive to participate reduces selection bias
Selection bias is the scourge of all polls. Any professional polling company has to expend huge effort getting a representative sample, unbiased by things like "people with lower net worth are more interested in getting paid $5 to answer some questions". And the best polling-based forecasting organizations, like 538, don't just go with the plain results of the poll; they build additional models and use the poll results as only one input.
Prediction markets reduce this problem by being open to everyone. A poll that's open to everyone would skew the results towards people who want to spend their time on a poll without getting anything in return, but a prediction market provides people with a financial incentive, which attracts everyone equally.
Prediction markets aren't perfect in this respect: some people may be more comfortable risking their money than others, and richer people will be more able to affect the probability. But the first problem will shrink as prediction markets become more mainstream and people see them as a normal job, like working on Wall Street, and the second is self-correcting, since any overconfident rich person using their wealth to swing prediction markets will rapidly cease being rich.
Play-money prediction markets especially struggle with selection bias. The Salem Center, a right/libertarian-leaning political organization, ran a play-money forecasting tournament that was significantly overconfident about Republican political victories. This wouldn't have happened in a real-money market, where more level-headed people could have joined to turn a profit, but in this tournament the main prize was a fellowship at the Salem Center, which most people had no interest in.
Real-money markets can also be affected by selection bias when the system prevents better-informed traders from correcting the price. PredictIt is a real-money market that caps each trader at $850 per contract, and as a result it tends to have a slight bias toward conservatives that the site's smaller share of liberal traders can't correct.
Prediction markets incorporate all available information
A simple "proof" that prediction markets must be at least as accurate as polls is that if the market isn't, someone could just run a poll, bet on its results, and turn a profit.
This is a general feature of prediction markets. If there's any source of information that's being neglected, there's a financial incentive to find it and incorporate it into the market probability.
For the same reason, prediction markets are also great at eliciting private information. If one person knows a secret that no one else does, a poll would end up ignoring their information, as it's drowned out by the majority. But in a prediction market, that person would know they have more information than the rest of the field, and would have an incentive to stake large amounts of money in the market to profit off their private information, correcting the market in the process.
Of course it's always possible some information is not being taken into account. If the total amount of potential profit in a prediction market is only $5, then it may not be very accurate, since nobody is going to bother doing a lot of research for just $5, nor would someone want to share insider information for such a paltry payout. But who's going to run a robust poll for only $5? Any time you compare a prediction market to a poll with the same amount of funding, the prediction market is likely going to be more accurate.
To put it another way, as the amount of money to be made in the market increases, the chance that someone has not bothered to go look for such information decreases and the reliability of the market probability increases.
You cannot disagree with prediction markets
If Alice holds some belief X and sees that a poll was done that had a different result, Alice can just say "well the people in the poll were wrong, I still believe X". While it'd probably be a good idea to reconsider her beliefs in light of the fact that she's in the minority, it's perfectly possible for the majority to be wrong, and this fact makes it easy to ignore polls that conflict with our preconceptions.
Prediction markets, by contrast, provide a clear course of action if you disagree with them: buy shares! If Alice believes X but the market says Y, it's positive expected value for Alice to buy shares in X, just as you'd accept a coin flip that pays you $10 on heads and costs you only $5 on tails. If she buys shares and corrects the market, then she no longer disagrees with it. And if she chooses not to buy, she's revealing that she isn't actually confident in X and that she believes the original market probability to be accurate.
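For concreteness, a small sketch of that expected-value check (Alice's 65% belief and the 40-cent market price are hypothetical numbers, not from any real market):

```python
def ev_of_buying_yes(my_probability: float, market_price: float) -> float:
    """Expected profit per $1-payout YES share bought at market_price."""
    gain_if_right = my_probability * (1.00 - market_price)
    loss_if_wrong = (1 - my_probability) * market_price
    return gain_if_right - loss_if_wrong

# Alice thinks X is 65% likely, but the market prices YES at 40 cents.
print(f"EV per share: ${ev_of_buying_yes(0.65, 0.40):.2f}")  # $0.25
# Positive EV: if she really holds that belief, buying is the rational move.
```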
This makes prediction markets a very useful tool for resolving disagreements, since the stated beliefs of anyone who claims to disagree with a prediction market's probability but doesn't bet on it can generally be discounted.
There are a few caveats:
A) Some events, like "the prediction market site goes bankrupt", introduce a systemic bias, since a YES share won't be worth anything if the market resolves YES: a bankrupt site can't pay out.
B) Time discounting in the value of money, combined with capital constraints and opportunity costs, means there's little incentive to push markets to very extreme probabilities like 1% or 99%, especially if the market won't resolve for several months or years.
C) Play-money markets don't provide the same incentives as real-money markets.
But these caveats are not relevant to the majority of things that people argue about on the internet.
If you disagree, here's some free charity money: