Whether you are apoplectic or euphoric about the results of the election, there is a higher-level issue that the results bring into focus: the November 8 election shows that human bias and weak signals in complex data really, really matter. It also makes the best imaginable case for why machine intelligence is the only viable technology for addressing the shortcomings humans bring to traditional analytical techniques.

The fact that the entire world was surprised that Donald Trump won is at its core a failure born from human bias. The people analyzing poll results convinced themselves that Trump was too brash and too far from the mainstream to garner a significant share of votes. They felt this way because they believed it themselves and those around them reinforced it. I frankly thought this way as well.

Yet there were oddities in the sample data. The standard means of polling were not working as well as they once did. For example, who has a home phone anymore? Who answers blocked numbers on a cell phone? Did anyone factor in that, after the media branded Trump supporters as racists, some of them might not answer honestly when asked? The list goes on.

One source of the “miss” is the exact opposite of the “Dewey Defeats Truman” polling debacle of 1948. Then, pollsters surveyed by phone, and phones were a luxury at the time, so more affluent Republicans were more likely to own one. Unsurprisingly, the GOP’s Dewey emerged as the pollsters’ prediction. For 2016, though nothing has been published about it yet, I’d wager that a similar phenomenon was at play.
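The mechanics of this kind of sampling-frame bias are easy to demonstrate. Here is a minimal simulation, with entirely made-up numbers chosen only for illustration (not actual 1948 or 2016 data): in a dead-even race, if one candidate's supporters are half as likely to be reachable by phone, a raw phone poll badly misstates the result.

```python
import random

random.seed(42)

# Hypothetical electorate: a dead-even race, 50% support candidate A.
# Assume A's supporters are only half as reachable by phone as B's.
N = 100_000
reach_prob = {"A": 0.3, "B": 0.6}  # reachability differs by candidate

voters = ["A"] * (N // 2) + ["B"] * (N // 2)
sample = [v for v in voters if random.random() < reach_prob[v]]

a_share = sample.count("A") / len(sample)
print("True support for A: 50.0%")
print(f"Phone-poll estimate: {a_share:.1%}")  # roughly 33%
```

The poll lands near 33% for A, not 50%, because the sample reflects who picks up the phone, not who votes. No amount of increasing the sample size fixes this; only reweighting or a better sampling frame does.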

But that’s essentially a side point. Both the Truman and Trump projection errors were completely avoidable, if only the pollsters had suspended their biases and dug deeper. When you are biased, you fail to dig deeper. You see odd things happening and explain them away. You convince yourself that inconsistencies are not important.

Turns out they are.

I’ll wager that previously known sampling issues will emerge as a key reason for the scale of the miss.

Then there are the subtle shifts that happened in the upper Midwest (where this election was really decided). These were not tectonic shifts. They appeared in the form of weak signals, but they were statistically significant departures from prior voting behavior. They were, in any case, consequential once you understand the vagaries of how voting districts flow into the Electoral College system.
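To see why a "weak" signal can still be statistically significant: with enough ballots, even a two-point departure from historical behavior sits far outside sampling noise. A minimal sketch using a two-proportion z-test, with hypothetical counts invented for illustration:

```python
from math import sqrt, erf

def two_prop_z(x1, n1, x2, n2):
    """Two-proportion z-test: is the gap between two vote shares
    larger than sampling noise alone would explain?"""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value via the normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical county: 52% Democratic share historically vs. 50% now,
# out of ~40,000 ballots each time -- a "weak" two-point shift.
z, p = two_prop_z(20_800, 40_000, 20_000, 40_000)
print(f"z = {z:.2f}, p = {p:.6f}")
```

The z-score comes out above 5, meaning a shift this size is essentially impossible to attribute to chance. A two-point drift looks like noise to the gut, but at this sample size it is exactly the kind of departure that should have triggered deeper investigation.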

To round out the dynamic, the old polling techniques likely did not account for how many people remained undecided until very close to Election Day. This should have been factored in, but it wasn’t, which pushed the polls further from accuracy. Another clear example of bias.

In some of the news accounts I’ve heard today, the degree of support for Clinton in certain key districts had faded considerably relative to more than 20 years of voting behavior. This should have set off alarm bells for pollsters and campaign leaders. But it didn’t, to Clinton’s detriment. It turns out the same bias discussed above existed inside her campaign.

Two polls clearly showed that the race was not a Clinton landslide and that Trump was a much stronger candidate than reported: USC Dornsife’s, which the Los Angeles Times published, and Investor’s Business Daily’s. Then there was Nate Silver, who, while not signaling a win, consistently placed the odds of a Trump victory above 30% (which made him a Trump bull among forecasters). It was reported today that the people behind all of these polls thought Clinton would win, yet designed and executed their surveys in a bias-free way.

And that brings us back to the point of this discussion. It’s nearly impossible for humans to divorce themselves from their biases. Humans approach the world with a confirmation bias. Humans also suffer greatly from tribalism and groupthink. Those traits may have allowed our ancestors to suspend fear and survive in hostile conditions, but the game has changed. Knowledge, in the complex data world we’ve moved into, is no longer about “gut”, but about thoroughness and completeness.

And that’s the clearest case for our machine intelligence solution. The Ayasdi platform won’t make the mistakes described above; it isn’t capable of doing so. That’s meaningful because the same confirmation bias, tribalism, and groupthink exist in every company and complex organization.

The Clinton camp was hobbled by its failure to recognize that key elements of the electorate viewed this election as an economic referendum, not a social policy exercise. Tied down with bias and a misreading of the weak signals that were out there, they failed to adjust their strategy. Had they done so in time, and adjusted their message, I’ll wager they could have coaxed those swing voters to their cause. After all, a huge number of them in some of these states voted for Obama twice…

As witnessed on Tuesday, getting it right really matters. Our customers will never feel the out-of-the-blue gut punch of defeat that the Clinton campaign felt on Wednesday. Their competitors will. Surprise will increasingly become a thing others lament. Confidence in the thoroughness of unbiased analysis and efficient operational deployment will differentiate those who adopt machine intelligence from those who persist in the old ways. The same old ways that had Hillary Clinton winning by 12 points less than 14 days before November 8…