Market Mad House

In individuals, insanity is rare; but in groups, parties, nations and epochs, it is the rule. Friedrich Nietzsche

Opportunities

Can We Trust the Polls?

As the 2020 presidential election approaches, Americans are once again obsessing over our only, very inaccurate, estimate of the potential vote – the polls.

Historically, however, the polls have not been reliable. In 2012, for instance, RealClearPolitics estimates that President Barack Obama (D-Illinois) won the popular vote by 3.9%.

However, the RealClearPolitics average of polls gave Obama only a 0.7% margin of victory. Hence, the polls were off by 3.2 percentage points in 2012.
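The arithmetic behind that error figure is simply the gap between the actual margin and the poll-average margin. A minimal Python sketch, using only the RealClearPolitics figures quoted in this article:

```python
def polling_error(actual_margin: float, poll_margin: float) -> float:
    """Gap, in percentage points, between the real result and the poll average."""
    return round(abs(actual_margin - poll_margin), 1)

# 2012: Obama won by 3.9%, but the poll average gave him 0.7%.
print(polling_error(3.9, 0.7))  # → 3.2

# 2016: Clinton won the popular vote by 2.1%, but the polls predicted 3.2%.
print(polling_error(2.1, 3.2))  # → 1.1
```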

The Polls Failed Last Time, Sort of

The polls were a little better at prognostication in 2016. To explain, the polls got the popular vote wrong by 1.1 percentage points.

RealClearPolitics estimates Hillary R. Clinton (D-New York) won the popular vote by 2.1%. However, the RealClearPolitics average of polls predicted a 3.2% margin of victory for Clinton.

Dramatically, FiveThirtyEight number cruncher Nate Silver estimates that 71% of polls favored Clinton to win the 2016 election. Thus, 71% of the polls were wrong about the outcome. The polls did predict Clinton’s victory in the popular vote, however.

However, the polls could not predict the election outcome because they do not track Electoral College votes. To explain, in America the Electoral College is the unelected body that actually elects the President. Trump won the Electoral College by a margin of 306 to 232, 270toWin estimates.

Given recent history, I think the best the polls can do is offer a glimpse of how Americans could vote. Unfortunately, polls cannot survey the 538 members of the Electoral College.

Why the Polls Get it Wrong

Recent results show the polls could be even less reliable in 2020.

For instance, the September 21-23, 2019, Emerson Poll placed Andrew Yang (D-New York) fourth in the national Democratic presidential primary with 8%. In contrast, the October 18-21 Emerson Poll placed Yang in sixth place with 4%.

I think such variations show why we cannot trust the polls. The numbers differ so much because pollsters use a method called sampling.

To explain, in sampling pollsters contact a select number of people whom they hope represent the national population. For example, a typical national opinion poll contacts 1,004 people, Scientific American claimed in 2004.
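The familiar roughly ±3-point uncertainty of a 1,004-person sample follows from basic sampling arithmetic, assuming a truly random sample, which is exactly what pollsters cannot guarantee. A quick sketch:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Worst-case 95% margin of error for a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# The 1,004-person sample Scientific American describes:
print(round(100 * margin_of_error(1004), 1))  # → 3.1 (percentage points)
```

Note that this formula only captures random sampling error; a sample that is biased or full of dishonest answers can miss by far more.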

I know of no way to guarantee a polling sample represents the national population. A related problem is people lying to pollsters.

Notably, researchers for the Environmental Voter Project estimate 78.1% of poll respondents lied about their election participation, Campaigns & Elections reports. In addition, people who plan to vote for controversial candidates, such as Donald J. Trump (R-New York) or U.S. Senator Bernie Sanders (I-Vermont), are likely to lie about it, Aradhna Krishna speculates.

Krishna, a University of Michigan marketing professor, thinks “embarrassment and fear of social stigma” motivate people to lie to pollsters. All it takes is a few liars or jokers to skew a poll because most pollsters use small samples.

Therefore, recent news articles and commentaries mocking Yang and his Freedom Dividend basic-income scheme could scare people into hiding their support for that candidate. However, the 2016 outcome shows such liars still vote for their candidates. Remember, in America voting is private but polls are public.

Why We Cannot Trust the Polls

In addition, different polls use different criteria for sampling and polling. Therefore, there is wide variation in poll results.

For instance, the October 17-21 Quinnipiac Poll shows former Vice President Joe Biden (D-Delaware) leading the Democratic primary race with 21%, RealClearPolitics reports. However, a CNN poll for October 17-20 shows Biden winning with 35%. Meanwhile, Emerson gave Biden 27% over the same period.

Obviously all three polls cannot be right, but all three polls can be wrong. In fact, all the polls could be wrong.

Under these circumstances, the RealClearPolitics average of polls is both unscientific and unreliable. Therefore, all the media outlets that cite the RealClearPolitics average of polls could get it wrong.
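To see how little a simple average reveals, here is a toy calculation over the three Biden figures quoted above (an unweighted mean; RealClearPolitics’ actual method may differ):

```python
from statistics import mean

# Biden's share in the three October polls cited above.
polls = {"Quinnipiac": 21, "CNN": 35, "Emerson": 27}

avg = mean(polls.values())
spread = max(polls.values()) - min(polls.values())
print(f"average: {avg:.1f}%, spread: {spread} points")
```

A single averaged number hides a 14-point disagreement between pollsters, which is why the average can look precise while every poll feeding it is wrong.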

This is why I stopped relying on RealClearPolitics. There’s nothing wrong with that website’s correlation of data. However, the data it correlates is unreliable. I think we should use RealClearPolitics as a handy reference rather than an accurate source of data.

The Greatest Polling Failure in History

Polling failures are nothing new in American presidential history. In particular, one spectacular poll failure shows what’s wrong with polling.

The 1936 Literary Digest polling catastrophe was the greatest polling failure in history. Incredibly, the Literary Digest, then one of America’s most prestigious magazines, predicted Governor Alf Landon (R-Kansas) would win the presidential election with 57% of the popular vote.

Instead, President Franklin D. Roosevelt (D-New York), or FDR, won with 62% of the vote, the University of Pennsylvania reports. The Literary Digest poll was dead wrong because the pollsters only surveyed a select group of people.

Notably, the Digest’s pollsters only contacted people in telephone directories, club members, and magazine subscribers. Additionally, the pollsters only contacted respondents by telephone, at a time when a large percentage of the population did not own a phone.

For instance, just 35% of American homes had a telephone in 1920, Statista estimates. In addition, in the 1930s most American phones connected to “party lines,” or “party wires,” in which several households shared the same phone line. Most Americans used party lines in the 1930s because they were cheaper than private phone lines.

Hence, your neighbors could listen in on your conversation with a pollster and hear what candidate you supported. For example, your gossipy neighbor could hear you planned to vote for FDR, and tell your Republican boss. Party lines gave people an incentive not to talk to pollsters, or to lie to them.

How Polling Failure History Could Repeat Itself

Hence, the Literary Digest only surveyed successful and affluent people at the height of the Great Depression, a time of 14% to 24.99% unemployment.

Therefore, the pollsters were more likely to contact Republican voters. In the 1930s, the Republicans were the party of the upper class. For instance, Republicans were more likely to have a private phone line in the 1930s. Hence, Landon supporters were more likely to talk to pollsters than FDR supporters.

Interestingly, today’s pollsters could make a mistake similar to the Literary Digest’s. To explain, many pollsters still call mostly landline telephones.

Yet, Statista estimates only 41.7% of American households had a landline in 2018. Thus, pollsters could survey only older and poorer Americans who are not representative of the general population.

Meanwhile, the pollsters could ignore the 54.9% of Americans who only use a wireless phone. I think these figures partially explain the 2016 and 2012 polling failures. To explain, pollsters’ samples could represent only 41.7% of the population.
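One standard repair for this kind of coverage gap is to weight respondents so the sample’s phone-type mix matches the population’s. A toy sketch, using the Statista shares quoted above and invented respondent counts (the counts and the weighting scheme are illustrative assumptions, not any pollster’s actual method):

```python
# Toy post-stratification: reweight a landline-heavy sample so each
# phone-type group counts in proportion to its real population share.
population_share = {"landline": 0.417, "wireless_only": 0.549}  # Statista figures
sample_counts = {"landline": 800, "wireless_only": 204}  # invented, landline-heavy

total = sum(sample_counts.values())
weights = {
    group: population_share[group] / (sample_counts[group] / total)
    for group in sample_counts
}
print(weights)  # each wireless-only answer counts for ~2.7 landline answers
```

Weighting helps only if the under-sampled group’s few respondents resemble the millions they stand in for, which is itself an untestable assumption.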

Can Polling Be Fixed?

On the other hand, pollsters could use other survey methods to reach smart phone users. Pollsters could survey people on the streets, for instance, or conduct online polls.

One problem is that polls have to be random to work. Contacting hundreds or thousands of people at random via the phone is easy. Most people will pick up the phone.

A related problem is that retired senior citizens are more likely to be at home when pollsters call. Thus, the polling sample will be older, and not reflective of America’s population.

Notably, the median age of the US population was 38.2 years in 2018, Statista estimates. Hence, average Americans could be at work when pollsters call their homes.

However, the ordinary American probably uses social media and the internet. It is hard, though, to make people answer emails or social media messages. Moreover, millions of people use privacy masking solutions, such as the Brave browser, to cover their tracks online.

Could Cryptocurrency and AI Save Polling?

A possible solution is to pay people to answer polls in cryptocurrency.

Some companies want to use cryptocurrency for such efforts because crypto is theoretically more private than other payment methods. Several platforms, including the Basic Attention Token (BAT) and ClearPoll (POLL), plan to monetize polling with altcoins.

There is no evidence paid polls could deliver a better outcome. However, I think pollsters could use artificial intelligence (AI) to weed out false answers. To explain, you could theoretically build an AI that detects lies.

Consequently, a polling AI could contact you on Facebook, WhatsApp, or Telegram, and offer you 1,000 BAT tokens, or five Tether (USDT) stablecoins, for your answers. You might respond because five Tether are theoretically worth $5.

History shows that polls have failed before and will probably fail again. Only time will tell if the pollsters can get it right in 2020. However, I think circumstantial evidence shows the pollsters will get it wrong again next year.