How conspiracy theory websites gave rise to a widely spread disinformation campaign apparently aimed at voter suppression.

By Andrew Somers

There is disinformation, and then there are flat-out lies. In the context of an election, these deceptions are used to sway voters and, at the extreme, to encourage apathy or discourage participation.

An Election III, The Polling, by William Hogarth, 1755 (PD)

Consider this: If you fully believed that the elections were fraudulent or that your vote did not count, would you stand in line for three hours? Taking time off work? In the rain? In these days of COVID?

If you believed the elections were rigged by vote-count fraud, to the point that you stopped caring and stayed home, would you call that the ultimate form of voter suppression?

We have been investigating voter suppression and disinformation campaigns. This article will discuss one issue that has gained a lot of traction lately through a series of viral memes.

Election Fraud & Exit Polls

There is a sentence being repeated over and over across the internet, through memes and by conspiracy theorists:

“According to the UN, exit poll discrepancies exceeding 4% signify election fraud.”

This statement may sound reasonable to those not familiar with exit polling or statistics, but in truth? The statement is false and very misleading.

Photo: frame from “Citizen Kane” (RKO Pictures/PD)

We have scoured the United Nations’ documents on elections and standards, and no statement like this exists at the UN. Nor would one expect it to; even a basic understanding of statistics and polling does not support such an out-of-context assertion.

We have not been able to find the original source of this “UN” claim, but it appears to be part of a larger disinformation campaign, one targeted at sowing discord and inflaming anger among the supporters of losing candidates. Judging by the viral popularity of these memes, it is succeeding.

The problem with such a simplified “standard” is that an arbitrary 4% discrepancy threshold ignores the complexities of poll analysis, including clustering, weighting, and multilevel regression with post-stratification (MRP), not to mention sample size, response rates, and demographics.

For instance, in a poll with a sample size of 500, a 4% difference is less than the margin of error at 95% confidence, which is hardly conclusive in itself. And once you add in the effects of clustering and the many demographic variables present in exit surveys, it isn’t even close.
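
To see why, here is a minimal sketch of the textbook margin-of-error calculation, in Python, assuming a simple random sample and the worst-case proportion p = 0.5; the sample size of 500 is the illustrative figure from the paragraph above, and real exit polls carry additional error sources on top of this.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case 95% margin of error for a simple random sample.
    Uses the plain binomial/normal approximation -- no clustering or
    weighting adjustments, which only make the true MoE larger."""
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative sample size from the text: 500 respondents.
print(round(100 * margin_of_error(500), 2))  # ~4.38 percent, already above 4%
```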

It is foolish to claim that an exit poll predicts fraud if the difference versus the actual vote exceeds some arbitrary value.

But moreover, no such standard exists in relation to exit polling. Some have cited a USAID document as supporting the use of exit polls for election-integrity checks, but even that comment is taken far out of context.

USAID Actually Says:

“Exit polls do not provide sufficient evidence to refute or challenge official results, either at the national level or for individual polling stations.”

To be fair, this is not to say that a properly designed exit poll is useless for checking election integrity, but if verification is the goal, the exit poll needs to be designed with that in mind. This raises the question: are the U.S. exit polls run for the mainstream media (CNN, NBC, ABC, etc.) intended to affirm fair elections?

The answer is simple: NO

The US National Election Pool (NEP) exit poll for the major media is conducted by Edison Research, and Edison itself says the US exit surveys are not designed for detecting fraud or ensuring election integrity. According to Edison’s co-founder Joe Lenski, the purpose of the Edison polls is “to project a winner” and make “…demographic results as accurate as possible.”

Mr. Lenski points out that the NEP exit surveys are “just not designed for that type of precision.” This is in contrast to the different design of exit polls for integrity & verification sometimes used in other nations, where Lenski says “[those] exit polls are designed specifically to catch any manipulations of the vote count.” So while it is true that Edison Research conducts exit polls in some foreign nations for integrity checks, those use a different design and methodology than the US exit demographics surveys.

US Capitol
Photo: Pixabay

But a Guy on the Internet Said…

Yes, a lot of people on the internet say a lot of things. Understanding their motivations is the first step to exposing the truth. In full disclosure, this author is a strong Bernie Sanders supporter, having worked on the Sanders campaign. And along with every other Bernie supporter, we want to understand what happened in 2020.

A widely distributed but completely false and misleading meme. While this meme does not appear to have been created by Mr. Soares, it uses the false and misleading numbers presented on the TDMS website.

This led to an investigation examining the money, power, and inside players behind the suppression of Bernie Sanders and the rise of Joe Biden. 

But while interviewing voters, and in particular Sanders supporters, we discovered these curious allegations that “everything was rigged by the DNC.” The source of this disinformation was a series of memes claiming massive election fraud and calling for United Nations intervention in US elections. The memes portrayed insanely out-of-whack differences between polling data and actual election results. Were they real? Where did they come from?

The search for the source of these fake numbers led to a WordPress website called “TDMS Research,” named for the initials of its owner Theodore de Macedo Soares.

Mr. Soares posts articles on his website regarding exit polls, along with screenshots of spreadsheets he created to further his narrative. The articles are rambling, pedantic rants about US elections, supporting his claim that some exit poll results “raise flags” for fraud and that US elections are not to be trusted.

But the methods and statements he makes have been debunked by independent fact checkers, including FactCheck, as well as by our lengthy investigation and analysis of the data & math used by Mr. Soares.

Mr. Soares may have honorable intentions, but the unfortunate truth is there are significant flaws in his approach and statements. Worse, his “work” is being parroted in the most divisive way possible. The false and misleading numbers Mr. Soares created have been formed into memes that decry “US election fraud!” even though no credible fraud has been found in the 2020 Democratic primaries.

Three Frauds Plus Bonus Lies

For a complete breakdown of how Ted Soares & TDMS manipulated numbers to create this false “election rigging” narrative, see the appendix at the end of this article, “TDMS EXPOSED,” which describes the TDMS fraud in detail, including forensic analysis of the math used.
In short, TDMS:

  • Used preliminary survey results released by CNN before final calibration, and claimed these initial results were somehow better, as if they were “raw” or unweighted, which is a patent lie.
  • Used incorrect math to calculate the Margin of Error, making incorrect assumptions about the data distribution and arriving at an MoE that is too low.
  • Fraudulently used incorrect math to artificially inflate the “discrepancies” to make them look enormous, when in fact they were all well within the margin of error.
  • TDMS invented numbers more than seven and a half times larger than the true discrepancies.

It is these false, exaggerated figures that are being used in the inflammatory memes, spewed forth by pundits like Lee Camp and Jimmy Dore, and distributed by the well-meaning but misguided Twitter and Facebook army of Sanders supporters.

Others, including FactCheck, have also debunked Soares’ counterfeit math.

But Voting Machines Tho

Nothing has terrified the public regarding election integrity as much as the use of computer voting machines. Decades ago there were some electronic machines in use with security weaknesses that raised concerns. Those machines are now obsolete and long since removed from service.

Over just the last few years, states have spent over 700 million dollars upgrading voting systems & equipment, and improving laws & procedures to restore confidence in our election process. 

At the federal level, the U.S. Election Assistance Commission (EAC) was established by the Help America Vote Act of 2002 (HAVA). The EAC has been proactive in promoting best practices for election security and administration, accrediting testing laboratories, and certifying voting systems.

Regardless, states have moved substantially back to systems using paper ballots. Now in 2020, thirty-seven states use paper ballots, and six states use Direct Recording Electronic (DRE) machines with a voter-verified paper trail. The few remaining states, like Texas, Kentucky, and Tennessee, have a mix of technologies, including paper ballots and some DRE with no paper trail. New Jersey and Louisiana are working to replace their paperless machines but have run into funding issues.

And in most US states, the official election integrity check is in the post-election validation phase. Thirty-eight states mandate post-election audits, and the remaining states have various thresholds at which an audit or recount is triggered or can be requested.

It is useful to point out that some of the states that TDMS and the memes claim suffered election fraud do not use DRE machines at all, such as Massachusetts, which mandates paper ballots only. Those ballots are used in publicly viewable post-election audits, recounts, and certifications, which are a far more accurate and reliable means of validating election integrity than a small-sample-size exit survey.

Nevertheless, some election conspiracy theorists make this misleading statement:
Screenshot segment from the TDMS site with the unsupported claim of “unobservable” counts, despite the many states that have open public viewing policies for counts, recounts, and audits.

TDMS is in error claiming “not observable by the public.” That statement is simply not true. While election laws vary from state to state, most states are open to public observers. In California, the public is allowed to remain at the precinct to observe the precinct count, and the public is invited to observe the audits and recounts as well. In fact, tens of thousands of civic-minded citizens volunteer to staff polling places and observe & protect the vote and election integrity.

Ghosts in the Machines

The second part of this spurious belief is Soares’ speculation that “because it happens in a machine” the count remains unobserved. This again demonstrates a lack of knowledge and understanding of the technologies involved. Counting machines are under observation and under lock and key. They undergo pre- and post-election testing and auditing. While it may be true that some states would be well advised to improve their audit programs, those states are a minority.

Hacking for Fear & Scoff-it

And the final dismissal of the conspiracy theorists’ claims relates to hacking. As mentioned above, there were indeed some obsolete voting machines running Windows XP that were insecure if connected to open networks or wifi. Modern machines are certified by the EAC and are robust; coupled with best practices for security (such as keeping counting machines air-gapped from open networks), claims of “hackable machines” are simply fear mongering.

The widely reported and inflammatory news stories about 11-year-old hackers breaking into election machines at DefCon are misleading. The 11-year-old hacked a simulated website that was set up specifically to be hacked.

The other story, with the dramatic demonstration of “hacking” a voting machine, was nothing more than accessing the admin page on an obsolete unit, which required physical access and disassembly with a screwdriver. In other words, it is something that could not happen in a real-world voting environment with machines under lock & key and voters under constant observation.

Electric Sheep: On Manual Counting

While it is true that some smaller nations use manual counting methods, the United States is one of the most populous nations, and we run complicated elections with dozens of choices. Besides our representatives, we vote on our judges, district attorneys, sheriffs, assessors, levies, bonds, propositions. We have multiple elections a year in some cases. That’s a lot of votes to tally, and hand-counts are not only slower and more expensive, they are less accurate and more prone to error.

Compare our elections to Germany’s ballot containing only two vote items: one for a candidate and one for a party. And to run their election and manually count the votes, Germany must recruit 650,000 people for election day(s), in a manner similar to jury duty.

In Los Angeles for the March 3rd, 2020 primary, we had a ballot that included:

    • Party presidential nominee,
    • Three nominees for party legislatures (congress, state assembly, state senate),
    • A county supervisor,
    • Twelve or so judges (188 judges total, split among areas), 
    • The district attorney, a couple school board members depending on the area,
    • Party committee members,
    • Then the state measure (prop 13),
    • Two county measures,
    • And one to four city measures depending on the area (48 local city measures total across the area).

That works out to about twenty-six measures and offices to vote for, depending on the district, though the local election board actually has to wrangle over 200 offices and 51 measures across all the districts. And this was just a small primary election.

In the last presidential election November 2016, Los Angeles voted on:

    • US President,
    • US Senator,
    • US Congressman,
    • State assembly member, 
    • State senator,
    • Water board member,
    • County supervisor,
    • Assorted judges,
    • Community college board member,
    • School board member,
    • City council member,
    • Seventeen state measures (props 51 thru 67),
    • Two county measures,
    • And depending on the area, one to six of the 58 local measures.

So for the LA general election, that works out to about forty-four seats and measures to vote on, compared to two in Germany. And let’s not forget that Los Angeles County has had thirty elections between the 2016 general election and the 2020 primary.

Thirty Elections in less than four years.

 

Do Americans like to vote? Does a lizard sell car insurance?

It should be obvious that one cannot compare American elections to other nations. We like to vote, in some localities we have half a dozen elections a year with dozens of choices to be made. None of that would be possible if we burdened the process with hand counting. It is technology like optical scanner counting that enables our diverse democracy.

Lies Begat Lies

The fact that some hobbyist created a website with bad math and fake news is certainly a common occurrence today, and typically it would not merit consideration, so why give this case any thought? Because in this case, those fake numbers have been co-opted by an appalling disinformation campaign that is creating anger, resentment, and voter apathy. To stop it, we must debunk the source of the false information.

Creating a narrative of outlandish and wanton election fraud, then aggressively distributing this fake news as multiple viral memes leads to the public accepting it as fact. This has the chilling effect of damaging public confidence in our election process.

When the public has no faith that their vote matters because they believe there is fraud or “rigging,” they are less likely to vote at all, and this applies especially to younger or inexperienced voters. Thus these demotivating attacks have the direct effect of stealth voter suppression. The end stage is the point where democracy of the many becomes tyranny of the few.

If you have to lie to make your point then you have no point to make.

The effect of these memes goes beyond the lies and the inept math. They create apathy and discord in the voting public, and they distract from the true reasons a particular candidate may have won or lost. And if you are distracted from the actual reasons for a candidate’s loss, you are less likely to correct your approach toward securing a win in a future contest.

Certainly we have serious issues in terms of campaign funding, SuperPACs, foreign interference, mainstream media bias, and other factors that led to Joe Biden edging ahead of Bernie Sanders. But it was neither “election fraud” nor “vote flipping at the polls”; those are myths that must end.

This is not to say we can’t improve our present election system—we certainly can, as discussed in this article on polarized politics and voting methods.

The demographic split in the 2020 Democratic primaries is plain as day. Older voters, who are more influenced by mainstream media, voted overwhelmingly for Joe Biden. Younger voters, who are more influenced by social media, voted overwhelmingly for Bernie Sanders. And despite a strong showing for the youth vote, there was an even stronger showing from the 55-and-up demographic.

Andrew Somers

APPENDIX—TDMS EXPOSED

How to be Wrong in Three Steps

There is a familiar saying about data: “garbage in, garbage out,” and it applies to the TDMS articles. Mr. Soares does not have access to the raw exit survey data. What he has access to are the preliminary weighted/estimated results as presented by CNN, which were created by Edison Research.

CNN presents “preliminary” exit survey results before the precincts close to give the eager public a general idea of the direction of the election. After the polls close and some actual results are reported by certain designated sample precincts, further weighting is applied to the survey data, including advanced techniques such as multilevel regression with post-stratification (MRP), which helps align the poll and demographic data with the actual election outcome.
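
To make the idea of calibration a little more concrete, here is a minimal, purely illustrative sketch of one ingredient, post-stratification reweighting. The age groups, shares, and counts below are hypothetical and are not Edison’s actual data or full method, which also involves clustering adjustments and alignment with reported precinct results.

```python
# Illustrative post-stratification: reweight poll respondents so the age mix
# of the sample matches an assumed "true" electorate mix. All numbers are
# hypothetical; they only show the mechanics of the adjustment.

poll_sample = {          # respondents interviewed, by age group (hypothetical)
    "18-29": 300,
    "30-44": 350,
    "45-64": 400,
    "65+":   450,
}
electorate_share = {     # assumed true share of the electorate (hypothetical)
    "18-29": 0.15,
    "30-44": 0.25,
    "45-64": 0.35,
    "65+":   0.25,
}

total = sum(poll_sample.values())
weights = {
    group: electorate_share[group] / (count / total)
    for group, count in poll_sample.items()
}
for group, w in weights.items():
    print(f"{group}: weight {w:.2f}")

# Groups over-represented in the sample get weights below 1, under-represented
# groups get weights above 1; candidate support is then tallied with these
# weights instead of raw respondent counts.
```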

CONTROVERSY #1:

And herein lies the first logical error in Mr. Soares’ assertions. The final results released by CNN a few hours after the polls close have a final weighting and calibration applied to them. But Mr. Soares opted to use the earlier, un-calibrated data. He asserts that this early data is somehow “more accurate” because there has been no regression or alignment with the actual election results.

A screenshot of a portion of the TDMS website, where Mr. Soares presents dishonest and misleading information, including out of context references to USAID documents, and incorrect margin of error references.

This simplistic view harbors multiple problems. The early poll results that TDMS uses do have estimated weighting applied by Edison, though the extent of the weighting and the number of clusters (and the standard error) is not publicly known because Edison does not release the raw data. To be absolutely clear, the preliminary CNN/Edison data that TDMS is using is not raw data; it is adjusted using estimates derived from past results, telephone exit surveys of some absentee voters, other recent polls, and estimated demographic weighting.

Further, these early aggregated results as presented on CNN provide absolutely no way to tell what weightings or other adjustments have been applied. Claiming it is “raw” data is also spurious. It is simply early aggregated data with estimated adjustments.

This fact alone throws deeply concerning questions over the entirety of TDMS’s work and articles. A principal assumption of Mr. Soares is that this early, uncalibrated data with estimated weightings is somehow “better,” but that claim is an unsupported fabrication of Theodore Soares’ own imagination.

CONTROVERSY #2:

Due to the lack of raw data and the unknown nature of the intermediate results, it is not possible to determine an accurate Margin of Error (MoE) using the simpler math you would use for a normal distribution.

The standard MoE equation, MoE = z × √(p(1 − p) / n), wants to see the data as a normal distribution. But an exit poll is not a normal distribution; such surveys are subject to clustering and respondent dynamics that skew results and increase the margin of error.

Yet that is part of what TDMS is doing wrong — using math that reports a Margin of Error with the assumption the data is a simple normal distribution.

Nothing could be further from the truth. Election polls have substantial factors that increase the error, as described in this paper on election poll variance. In that study, statisticians examined 4,221 election polls over a 16-year period and found a significant increase in error due to biases inherent in election polling.

Interviewers are sent to a relatively small number of precincts (about 35) to collect poll data from voters. While the precincts are semi-randomly selected, each precinct is not representative of the state as a whole; instead you have “clusters” of demographic groups, which skews results and is nowhere near a normal distribution.

Analyzing TDMS’s results shows that he derives the MoE using math that relies on the data having a simple normal distribution. Yet he is using data of unknown weighting and unknown cluster factors with a much higher margin of error. A crude rule of thumb sometimes used in these circumstances is to double the MoE; more ideally, one adds additional math to estimate the effect of clustering.
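
To show what such a cluster adjustment can look like, here is a minimal sketch using the standard design-effect formula, deff = 1 + (m − 1) × ICC, where m is the average number of respondents per precinct and ICC is the intra-cluster correlation. The sample size, precinct count, and ICC below are assumptions chosen for illustration, not figures from Edison or TDMS.

```python
import math

def simple_moe(n, p=0.5, z=1.96):
    """Naive 95% MoE assuming a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

def clustered_moe(n, clusters, icc, p=0.5, z=1.96):
    """MoE inflated by the design effect deff = 1 + (m - 1) * icc,
    where m is the average number of respondents per cluster (precinct)."""
    m = n / clusters
    deff = 1 + (m - 1) * icc
    return simple_moe(n, p, z) * math.sqrt(deff)

# Hypothetical values: 1,400 respondents spread over ~35 precincts, modest ICC.
n, clusters, icc = 1400, 35, 0.03
print(f"naive MoE:     ±{100 * simple_moe(n):.2f}%")                    # ~±2.62%
print(f"clustered MoE: ±{100 * clustered_moe(n, clusters, icc):.2f}%")  # ~±3.86%
# Even a modest ICC inflates the MoE by roughly 1.5x here; larger ICCs push it
# toward the "double it" rule of thumb mentioned above.
```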

An analysis of the MA poll and election data. The top section is using the official final poll results, the bottom section is using the identical data that TDMS used for their spurious exit survey analysis. However, this spreadsheet is using correct math methods to determine margin of error and actual error differences. (Src. MTI)

We have independently analyzed the same eight states TDMS did for 2020, but for the preliminary poll data Mr. Soares used, we added in a cluster adjustment per the methods described in this document.

The final exit survey results on CNN should work well with the simpler math described by ABC, as they have been weighted and calibrated to be closer to a normal distribution. Even so, when the Washington Post published the Edison final exit poll results, they indicated the final results had an expected error of ±4%. As you can see from the top portion of our Massachusetts spreadsheet, the MoE based on a normal distribution calculated out to ±2.58%. That the Post indicates ±4% is further evidence that an MoE based on a normal distribution is too small and will give a false positive if exceeded.
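
One way to read the Post’s ±4% against our ±2.58% normal-approximation figure is as an implied design effect. This is only a back-of-the-envelope sketch built from the two numbers quoted above, not a statement of Edison’s actual methodology.

```python
# Back-of-the-envelope: how much larger is the published error than the naive
# normal-approximation MoE? The squared ratio is the implied design effect.
published_moe = 4.00   # percent, reported alongside the final exit poll
naive_moe = 2.58       # percent, our simple normal-distribution figure for MA

ratio = published_moe / naive_moe
print(f"error inflation ~{ratio:.2f}x, implied design effect ~{ratio**2:.1f}")
# ~1.55x inflation (design effect ~2.4): clustering and weighting roughly
# double the variance relative to a simple random sample of the same size.
```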

As he is using the preliminary data, TDMS’s MoE is far too small because he does not take into account the small sample size, small sampling of precincts, and much higher error due to clustering effects, not to mention the unknown, estimated, uncalibrated weighting.

As noted in this RawStory article, Joe Lenski clarifies that people claiming preliminary exit poll data is unadjusted are wrong. Edison adjusts its data throughout the day to compensate for “non-response rates and other sampling issues that come up when we conduct the survey.”

This accounts, at least in part, for why some TDMS results are allegedly outside the margin of error. But there is still one very glaring and inescapable problem. The next controversy is the source of a great many of the false memes used in the disinformation campaigns.

CONTROVERSY #3:

Compounding the problems of bad data (#1) and incorrect math to determine the Margin of Error (#2), we found an even greater problem with the TDMS pages. Mr. Soares is using unethical math methods to contrive numbers that fit his narrative.

When we specify a Margin of Error, it is relative to the simple difference between the survey result and the final election tally. In other words, Sanders’ Massachusetts exit poll was 28.2% and his final election result was 26.58% of the vote. Thus:

28.20% − 26.58% = 1.62%

A difference of 1.62% is well within the stated ±4% MoE; it even fits within the stricter MoE of ±2.32% we derived using the final calibrated poll results and calculating for an assumed simple normal distribution.

Even using the preliminary poll data that Mr. Soares insists is “better,” the difference is still within the stated final poll ±4% MoE, not to mention the higher estimated MoE for the preliminary data of ±8% using the “double it” rule of thumb.

30.40% − 26.58% = 3.82%

Clearly these results do not support Mr. Soares’ fairy tale of statistically significant discrepancies. Mr. Soares then used his duplicitous math to artificially inflate the difference to 12.4%, a fraudulent figure more than 3.2 times higher than the preliminary-poll difference and 7.6 times higher than the official-poll difference.

TDMS invented numbers more than seven and a half times larger than the true discrepancies.
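
The arithmetic behind those figures is simple enough to check directly. The sketch below just reproduces the Massachusetts numbers quoted in this appendix; it is a verification of the stated figures, not an independent re-analysis.

```python
# Figures quoted above for Sanders in Massachusetts (percent of the vote).
prelim_poll   = 30.40   # preliminary CNN/Edison figure that TDMS used
final_poll    = 28.20   # final calibrated exit poll figure
official_vote = 26.58   # certified election result
tdms_claim    = 12.4    # "discrepancy" published by TDMS

final_gap  = final_poll - official_vote    # 1.62 points
prelim_gap = prelim_poll - official_vote   # 3.82 points

print(f"final-poll gap:  {final_gap:.2f} points (vs the stated ±4% MoE)")
print(f"prelim-poll gap: {prelim_gap:.2f} points (vs the ~±8% rule-of-thumb MoE)")
print(f"TDMS inflation:  {tdms_claim / prelim_gap:.2f}x the preliminary gap, "
      f"{tdms_claim / final_gap:.2f}x the final gap")
# Both real gaps sit comfortably inside their margins of error; the 12.4%
# figure is more than three times the preliminary gap and more than seven
# and a half times the final gap.
```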

And these false, exaggerated figures were used in the inflammatory memes spewed forth by pundits like Lee Camp and Jimmy Dore, and by the well-meaning but misdirected Twitter army of Sanders supporters.

The best way to describe TDMS’s artificial expansion of exit survey divergence is to call it a fraudulent misrepresentation. And to be absolutely clear, there is no correlation between the survey margin of error and these manipulated, fake, inflated figures.

TDMS’s Defense Against Criticism

Others, including FactCheck, have debunked Theodore Soares’ counterfeit math. Soares’ defensive response inadequately deflects the criticism. In an interview with Tina-Desiree Berg, Soares dismissed the indictment of his errors with “oh, they’re making a mountain out of a molehill.” But the mountain is of Soares’ own making, and it is a mountain of deception and falsely manipulated results presented on his TDMS website.

The fact that Mr. Soares has apparently done nothing to address the false memes that use his misrepresentations is telling. And he continues to publicize his numbers despite being informed that his methods are neither sound nor accepted practice in the field, and are a distortion of the truth, as is much of the disinformation on his site.

• • •