SOME RESPONSES to http://www.ecotalk.org/ExitPollMadness.htm - it's a mixed bag
----- Original Message -----
> Dear Lynn Landes,
>
> I just read your recent article in CounterPunch and wanted to send you
> a missive to thank you for your efforts. I stumbled upon your
> website about a year ago and have followed your articles ever since.
> It was not until the 2000 election that it became obvious to me that
> elections in this country and in many places around the world are
> little more than wars of perception. Paper ballots counted in public
> the day they are cast is so simple and logical that it troubles me how
> we ended up with the will of the people being translated by
proprietary technologies often behind closed doors. I agree that
> secrecy at any step in the voting process is a threat, including
> perception management via exit polls. Thank you again.
>
> Ever,
>
> William
> Decatur, GA
----- Original Message -----
From:
Sent: Friday, March 04, 2005 11:11 AM
Subject: Exit polls

Well, it depends on who does it and how it's done. Polls are very sensitive to how the question is worded, the format, etc. They can be useful, if done honestly, to help verify that there's been no cheating. Since the Bush leaguers lie with every breath they take, nothing they say or supervise can be trusted. However, the technique has its uses, as what technique does not.
Best, Morley
LA

Morley - It's all about transparency, oversight, and enforcement. We cannot substitute exit polls for free and fair elections.
Lynn Landes
From: David Dodge
Sent: Thursday, March 03, 2005 1:43 PM
Subject: Exit polls
Lynn,
So, if I get your drift, we can’t believe anything. Just give up? Or are you trying to say we should rebel against the whole system? That’s pretty radical. You shouldn’t be surprised if nobody agrees with you.
Steven Freeman has done an excellent job of proving that the election results and the exit polls can’t be made to agree with each other. That in itself is very useful information. You are right, and I think Freeman would agree, if you weren’t so disagreeable, that both being wrong is quite possible. The opportunity to manipulate both is certainly there. However, the motive to manipulate the poll is much lower and the difficulty of manipulating the poll is much higher. If the poll were manipulated it would be much easier to detect, because the magnitude and diversity of the information collected from each participant can be compared and cross-checked against known demographics. It would be extremely difficult to make up an exit poll data set that would make sense when compared with known and independently verifiable demographics. Any manipulation will always leave telltale signs that can reveal the truth. With the vote controlled by secret software and no way to audit the results, manipulation is … well, a cinch. There is no proof either way, but it is easy to conclude which is the most likely cause of the discrepancy Freeman and others have proven to exist.
Paper ballots can be manipulated too. It’s harder, but it’s not only possible it has been done many times.
I for one can learn a lot from Freeman’s (and others’) analysis. You should try it sometime, but then you’d need a scientific mind. I feel sorry for you.
Dave Dodge
Dodge of http://www.uscountvotes.org/index.php?option=com_content&task=view&id=18&Itemid=47, like Freeman, is making assumptions and racing to conclusions. The readers must make up their own minds as to who is being unscientific. L
----- Original Message -----
Lynn,
BTW, it looks like Edison and Mitofsky had a lot of integrity to release
enough data for us to determine that it is the election results that were
wrong and not the exit polls.
I don't know why you keep making a case against the strongest evidence
we have that the election results were tampered with and corrupted. You
want people to discount the strong evidence we have that the election
results were corrupted, without any proof or evidence on your part
to back up what you are saying. You make us all look like conspiracy
nuts who don't look at the evidence because you aren't looking at it.
At least before you criticize the very pollsters who are giving us the
evidence we need that our vote count was wrong, read their report and
our response to it.
http://exit-poll.net/election-night/EvaluationJan192005.pdf
http://uscountvotes.org/ucvAnalysis/US/USCountVotes_Re_Mitofsky-Edison.pdf
I don't know why you are so actively working against all the voting
rights activists who are trying to show that our election results were
corrupted and investigate them so that we can prove it to the American
people.
You are really a conspiracy nut who wants to hurt the very scientists
who have the most power to help restore democratic elections in America.
Please do not be so lazy as to refuse to examine the evidence that you
make so much effort to deny exists. You are unwilling to make any
effort to even read the evidence you claim doesn't exist that the vote
counts were corrupted.
You called me "closedminded" but I would never in my lifetime publicly
refute and discredit things which I have not bothered to even research
or read about like you do. It is called "projecting" when you assume
other people are like yourself. We are not. We are looking at the
evidence, not inventing untrue conspiracy theories like you are doing
that will only hurt everyone's attempts to restore democratic
elections. I cannot for the life of me fathom why you have become such
an enemy of restoring democratic elections in America as to invent such
stories as you do without bothering to even learn about it first.
I am not even slightly like you are, so please don't falsely assume that
I am like you. People who call names and invent stories without
evidence have names that apply very aptly to them and people like you
discredit the rest of us who are actively working to do practical
do-able scientific logical things to restore democracy in America and
who don't have to illogically invent untrue things and discredit the
very people who are most able to help like you do.
Kathy Dopp
I've read Freeman's stuff. He makes assumptions that lead others to believe that exit polls are infallible. That's utter nonsense. L
Lynn
AND MORE ABOUT & from KATHY
----- Original Message -----
To: Lynn Landes
Sent: Friday, March 04, 2005 5:54 PM
Subject: FYI - discussion about your views on exit poll reality

Lynn Landes,
I was involved in the planning of the Feb. 26 Oakland Teach-in, and I spoke with you toward the end of the after-party, about your ideas for building more audience questions into events like this.
I am passing along the following post sent out by Kathy Dopp to an e-list of election activists that she has.
I've included a reply I sent back to Kathy, quoting from one of your earlier articles on the suspect nature of the exit polls.
My intent is not to stir up trouble, but to just let you know that this letter is in circulation.
I wish there were clear, reliable answers to the questions you raise. Until there are, I think your questions are entirely valid and the prospect of wholesale electoral illusion that you raise needs to be regarded as a real possibility.
--DA
Kathy Dopp wrote:
Dear Voting Rights Leaders,
Please read this article by Lynn Landes.
http://www.ecotalk.org/ExitPollMadness.htm
Lynn is attacking the character of Steven Freeman and attacking the credibility of the exit pollsters without even bothering to read the Edison/Mitofsky report or any analysis of it, and without knowing anything about exit poll science.
(The E/M exit pollsters showed enough integrity to reveal enough data in their report to show that the election results are suspect, more than the exit polls, despite their verbal mea culpa. USCountVotes will soon release an amazing final revision of our statisticians' analysis of the Edison/Mitofsky report - within a week or so.) We hope it will garner more press than our first discussion of the exit polls, and it will be very fascinating: it will mathematically show how absurd the reluctant Bush responder hypothesis is as an explanation, in a way that a high school math student could easily understand.
Lynn's article is very destructive to our efforts to restore democratic elections by Nov 2006 because Lynn is debunking, without even reading it, the best most scientifically irrefutable (if one understands the science of exit polls) evidence that we have that the Nov 04 election results were corrupted.
Perhaps Lynn is just math-phobic and that is why she is refusing to even read the Edison/Mitofsky report before attacking our mathematical efforts to prove that the election was corrupted, but unfortunately all her readers who may also be math-phobic will not know that Lynn has not bothered to even read the report or research this issue before writing her articles.
USCountVotes' mathematical analysis of the Nov 04 and all prior and future elections is essential to providing the evidence and proof, not only that the Nov 04 election was corrupted, but also to setting up the systems to provide statistical court-worthy evidence to prevent the 2006 election from being likewise corrupted.
Does anyone have any advice on what to do about Lynn Landes efforts to debunk something wrongly like this that is critical to our effort? Lynn's article is very destructive to our effort because she apparently has wide readership among progressives.
Any advice? I've tried to speak with Lynn Landes but she has made it clear that her reputation is better than mine, and that she is not listening to me. I really think it is important to stop this attempt from the left to destroy our best scientific evidence (other than a mountain of anecdotal evidence of course).
http://www.ecotalk.org/ExitPollMadness.htm
Thank you very much,
Kathy Dopp
http://uscountvotes.org
DA replied:
I don't think Lynn is attacking the character of Freeman.
She is asking the exit pollsters to show us the proof that they conducted the polls in the manner they allege they did.
Whether any subsequent calculations or conclusions drawn from those assumed poll results are valid depends on the answers to Landes' questions.
So far, Edison/Mitofsky has refused to provide any proof about the basis for their exit poll reports.
As Zogby points out in the passage quoted from a Landes article (below) this withholding is counter to accepted professional practice among exit pollers.
I find this refusal in itself highly suspect, and I support Landes in asking the questions.
Landes is not questioning the calculations of Freeman, USCV, or anybody else.
She's asking the preceding question: is the claimed basis for the exit poll data truthful?
Seems reasonable to me.
(DA)
What follows is an excerpt from an earlier Landes article that explains why Landes questions the reality of the exit polls.
Maybe there are answers to these questions. But I haven't seen 'em-- have you? Don't you think these questions ought to have provable answers published in public?
For the full text that the following passage is drawn from, see original at: http://www.ecotalk.org/NEP.htm
Landes writes:
Nothing about the 2004 election makes sense. The numbers don't add up. The surveys don't match up. But, the networks have clammed up. Despite mounting questions and controversy, the networks continue to stonewall. Citing proprietary claims (something the voting machine companies like to do), the NEP won't release the raw exit poll data. Okay. Maybe they have a point. However, they also won't release any logistical information either, particularly where and when the exit polling was conducted. And that's definitely not cricket.

John Zogby, President of Zogby International, a well-known polling company, said that such complete non-transparency is a "violation of polling ethics". Under the American Association for Public Opinion Research code, Section III, Standard for Minimal Disclosure: "Good professional practice imposes the obligation upon all public opinion researchers to include, in any report of research results, or to make available when that report is released, certain essential information about how the research was conducted. At a minimum, the following items should be disclosed, Part 8 - Method, location, and dates of data collection."

When looking at the data that the networks do provide, things don't check out. According to the NEP website, 5,000 people were hired for Election Day and 69,731 interviews were conducted at 1,480 voting precincts. However, NEP's raw exit poll data has just been released on the Internet by the alternative news magazine, Scoop, http://www.scoop.co.nz/stories/pdfs/Mitofsky4zonedata/. It seems legit. It indicates that on November 2nd, the results of 16,085 exit poll interviews were published by 3:59 pm, 21,250 interviews by 7:33 pm, and 26,309 by 1:24 pm on Nov 3 (which doesn't make sense, maybe they meant 1:24 am). Anyway, that grand total comes to 63,664 interviews. But, that number may not be right, either. Edie Emery, spokesperson for the NEP, wrote an email to this journalist stating, "On Election Day, 113,885 voters filled out questionnaires as they left the polling places." Where did that number come from, I asked? No answer from Edie. She said that the networks would make more information available in their "archives" sometime in the first quarter of this year. That's not very timely. Perhaps, that's the idea.

At any rate, it appears that nearly a third of the results of the exit polls were not available until after midnight! Whoa, Nellie! What happened to the stampede to "project the winner" right after the polls closed, like the networks used to do? What went wrong this time?

And that's not the only mystery. It looks like Mitofsky/Edison used two very different forms for their exit poll surveys. One survey is about what you would expect - http://www.cbsnews.com/htdocs/pdf/natepoll.pdf - a double-sided single sheet of paper that the voter is supposed to fill out. However, the other form, which matches the Scoop data, is several pages long; it is huge - http://election.cbsnews.com/election2004/poll/poll_p____u_s__all_us0.shtml. It is impossible to believe that anyone would take the time or trouble to answer all those questions on Election Day.

And then there's the second half of NEP's role on Election Day 2004. The NEP website states that vote totals were "collected" from 2,995 "quick count precincts". I don't know what that means either, because the NEP spokesperson refused to answer my questions. So, I'll theorize. Does that mean that nearly 3,000 mainframe tabulating computers were accessed directly by the AP?

Although the AP admits it was the sole source of raw vote totals for the major news broadcasters on Election Night, AP spokesmen Jack Stokes and John Jones refused to explain to this journalist how the AP received that information. They refused to confirm or deny that the AP received direct feed from central vote tabulating computers across the country.

DA
Well, Ms. Landes, Right On All Counts, you are;
in my opinion, albeit
one which is also based on Realism and Sound common sense.
Please note that the rest of this is a little long, so it's suggested
that you first examine the length for scheduling purposes; and that's if
you choose to finish reading this email. Also, this is in first-draft
form, but care was taken during the writing and there'll, hopefully, be
either no typographical errors, or relatively very few of them; please
accept my apology for those, if there are any, anyway.
You're going to wonder at times what it is I'm writing about; however, I
believe that it's all quite related to what your article is about; and
if you read the whole of this, then I believe that you'll at the very
least understand that all of this is variously interrelated. There's
some socio-psychological or -intellectual matter, and then the rest is
all or at least about politics -- particularly in the context of
so-called democracies; and some of the latter includes tangential
reference to war and lack of adherence to sound, essential law(s). It's
all interrelated, in my opinion; because, and on the same basis, it's
all related to "power corrupts", "politics is full of hypocrisy", etc.;
and "there's a cause to everything that exists".
Many people moronically confuse "common sense" with what they really
mean to be "common way"; "way" bears no meaning in terms of sense, even
when qualified with "common"; whereas "sense" rather essentially implies
reason. Of course a far too common reality is that people too regularly
lack sound sense, reason, and when this population becomes large enough
to be considered common, then we have people mistaking "common way"
for "common sense".
Also of course though, we have far too many people in positions of
power who demonstrate -- certainly don't say or admit but nevertheless
demonstrate -- that they really want the common way to make absolutely
no sound sense, to be extremely senseless, and these people certainly
don't mind others, then, confounding common way for common sense. They
don't mind "leading" that way, because: they demonstrate that they want
to rule over a population of human lemmings, so that the latter will
support and promote that which is common sensically absurd,
hypocritical, etc.; and this is either to PROFIT -- like, for example,
PM Paul Martin has long been doing with his offshored oceanic shipping
company, for income tax evasion, to avoid paying taxes to his own
country, of which he's now PM, and formerly Finance Minister -- or
because they are truly [immature] and don't have a clue about what
they're doing, only knowing that they want to be [bosses].
Now, PM Paul Martin certainly is not anywhere near the worst criminal
politician in the world, but I will nevertheless continue on his case. I
knew that income tax evasion has long been a crime, but recently learned
that it's also an international crime according to international laws of
which Canada is a co-signatory and has been for several decades.
"Conflict of interest" controversy came, because he could not legally be
permitted to become PM, if he maintained ownership of this shipping
company; therefore he -- questionably -- transitioned at least "title
ownership" of the company, to his sons, who have since continued to
racketeeringly and criminally profit from income tax evasion. He removed
that controversy about himself and shifted "title ownership" of a
criminally operating company to his sons, who maintain "dad's way"; not
"dad's sense", for he demonstrates an awful absence of sense; better
than the Conservative party leaders who concretely have proven
themselves to be incompetent and warmongers, to a criminal extent, for
having tried to incite Canada into joining an entirely unjustifiable war
in Iraq; however, Paul Martin, while maybe a little better, nevertheless
has more crimes than this on his record; not a legally committed record,
not yet anyway, but nonetheless an actual record. He's now a war
criminal for partaking in the entirely unjustifiable and criminal coup
d'etat in Haiti last February, 2004; however, and while that is an AWFUL,
Horrible crime, one of huge scale, and one basically involving genocide,
I doubt that he really intended to be partaking in a coup d'etat or war
assault as criminal as the case in Haiti has turned out to be; thinking
that he's at least possibly just awfully moronic, moronically incompetent.
They're nevertheless all crimes that Paul Martin should be required to
stand trial for, however I think that the Conservative Party leaders are
more criminally inclined in terms of witting deliberation to incite
their country to war and only for the sake of profiteering; both, from
continued contracts with the USA and for Canadian defence industry
contractors, like CAE, as well as from the trade between the two
countries. I think that Paul Martin is concerned about the trade between
the two countries, but not with as much criminal intent as the
Conservative Party.
Former PM Jean Chretien is also a war criminal, as well as a criminal in
at least one other way, but I will only refer to the war crime here.
How's he a war criminal? From what I recall, he agreed to send
Canadian military forces to join in the Bill Clinton gangsterism war of
aggression in Kosovo; he committed Canadian troops to G.W. Bush's war of
criminal aggression against the Taliban regime in Afghanistan, 2001 and
onward; and he did not join in the war on and in Iraq, but made Canada the
greatest contributor to the USA's buildup for launching this war. The
latter is what I particularly wanted to refer to, but all three cases
are also criminal. None of these cases was justifiable, not even
remotely. Clinton et al. ensured that they would provide former
president Milosevic with an accord that would certainly and
pre-obviously be rejected by him, which in turn would give Clinton et
al. purportedly just -- but obviously unjust and thus criminal -- grounds
for launching that war of aggression. As for attacking the Taliban, they
had not been proven to have been at all culpable for or in the Sept. 11,
2001, attacks in the USA; actually, they had been rather proven to have
not been involved, and this prior to the assault launched against them.
Although they had ties with Usama Bin Ladin and some other Al-Qaeda,
this was only with respect to the context of Afghanistan. And not only
was participating in assaulting the Taliban criminal, Afghanistan is now
worse off than before; and none of the foreign forces and leaders
involved in causing this, in any and all aspects of this situation, are being
held to Account. And helping the USA to militarily prepare for
launching a war on and in -- already devastated -- Iraq which was [far]
from justifiable is criminal.
What does all of that have to do with your article? FRAUD, criminal
fraud and worse.
They, in all of these cases, [obviously] refuse to be held to Account.
And that is what your article is about, LACK OF ACCOUNTABILITY, and thus
criminality; rather extremely high crimes.
Obviously, those of us who think we live in real democracies have
adopted the common way of thinking, but not common sense. After all,
there's no reason for us to sensibly believe that we live in real
democracies; au contraire, we have a flood of evidence that indicates
that we do not live in authentic democracies, and that they're really and
very despotic, as well as corporatist fascist governments.
In that kind of context, it's only ever more crucial to count the actual
ballots that are cast and to not rely on highly questionable and
unverifiable exit polls and vote reports. The ballots need to be
counted, and exactly as you state.
And I wish to thank you for this article, for while it's been very
obvious that all of the ballots, nationwide, for the Nov. 2, 2004, U.S.
presidential election should be counted, or else the election should be
considered a non-election, and un- as well as anti-Constitutional, well,
you provided us with a refreshing, welcome and at least educational "take
on" exit polls.
I should have earlier realised what you say about exit polls, but have
been suffering from some information overload and had not thought of
what you say.
The world [needs] Accountability and it's very distressing that we are
not being granted this; only worse now that it's a legal obligation.
It's all the more distressing when people who are not in positions of
temporal power have and demonstrate far more sound sense than those
who are. Does a person who is not blind ask a blind person to lead?
Well, maybe in a philosophical context in which outlooks are being
discussed and the blind individual is a better thinker, or when a person
wants to gain some insight into what a blind individual perceives within
the context of optical blindness; however, not in any other context.
The whole cursable mess of affairs is only more sickening when we can
soundly perceive the problems and how they can be soundly addressed, but
we're overpowered by: morons who think that they're brilliant or at
least more than an average individual -- many of whom are more brilliant
than those who pretend that they're competent and honest enough to lead;
corporatist leaning "leaders" of so-called democracies; war criminals
and mongers; hypocrites, hegemons, etc. They can't even bring
themselves to abide by the sound laws we have and force unsound laws
upon peoples.
What a world; awfully sickening. It's greatly appreciated to be aware
that there are sound people in this world; only unfortunate that
they/we are not the ones leading.
Excellent argumentation, thoroughly, and thank you for providing the
education on exit polls really being unreliable and, so far anyway, at
least obviously unverifiable. Even if they're reliable, if they can't
be verified, then that is the critical matter; in terms of whether or
not we can soundly believe these poll results. I didn't learn from the
other arguments you present on the topic of ballots needing to be
counted, for that was realised long enough ago; however, you at least
rendered the article of greater, broader educational value by including
both of these election or electoral topics.
Sincerely,
MC
P.S. I don't desire this to be so, but realise that "all human
institutions will fail" -- Jesus Christ -- is indeed reality. It's so
bad, our circumstances, that "why on earth would Al-Qaeda treat the UNSC
as a legitimate entity of and over law?" is a [pertinent] question to
carefully understand the bases of and to realise is realistically
valid. I'm even a little upset with the EU and its present manner of
addressing Iran; it's of course better than idly sitting back and
letting the USA launch a war of unjustifiable and criminal aggression,
without making any attempt to prevent that; however, the Big Problem is
that the U.S. government is not being addressed for its far worse crimes
against the world, its [enormous] nuclear weapons arsenal, more than all
other countries combined, its refusal to allow the IAEA to inspect the
USA's nuclear armaments, etc., and that the U.S. government continues to
be granted its member status -- only worse, with also maintaining veto
power -- in the UNSC.
We cannot realistically expect to have Peace without Justice, and yet
all or many of the temporal, human legal authorities are doing the
contrary of what they're legally obliged to do.
They are the worst terrorists of all, and get away with, as well as are
supported in, their labeling of "freedom fighters", aggression resisters,
and rights defenders as the real terrorists. A new example that I just
learned about for the very first time, today, is the following article.
"US Bars Nicaragua Heroine as 'Terrorist'", Guardian/UK, March 4, 2005,
http://www.commondreams.org/headlines05/0304-02.htm
We're on a hell-bent course of madness; not only with respect to exit
polls. "No wonder" we have faulty exit polls; they're"just" a symptom
of the greater madness we have been dragged along into, and oppressed by
or with. If we can't justly address the worst aspects of this MADNESS,
then faulty exit polls are of course awful, but definitely to be
expected; and among the least of the related problems. Intentionally
skewed exit polls are LOUSY and reflect the maliciousness too many
humans like to exercise, to maliciously try to deceive others. Of
course it is to be contested behaviour; however, it happens because we
do not have honourable governments; if we did, then they'd honourably
see to their duty to make sure that lies, deceitful propaganda are or
would be legally addressed as matters of criminal conduct. To lie like
that is criminal and -- as far as I'm aware -- illegal. [Charlatanry]
is illegal, I believe anyway.
Without Soundly Accountable government, "The People" are "screwed"; and
that or this is, both, what we obviously have, as well as very distressing.
And this provides a closing for or on the early-on mention that all of
this is variously but certainly interrelated; very. False exit polls,
news media censorship and lies, these are all crimes; at least when done
or provided, committed by people who authoritatively, professionally
should or do know better and flagrantly pretend that they do, to others
and/or "The People". Someone who is not educated on the topic of how to
accurately poll, well, their false reporting could be justly addressed
with a mild reprimand and an instruction to take a competent course on
how to honestly perform polls; however, news media workers and the
professor you refer to in your article, Freeman I believe you said, have
NO excuse and are criminal charlatans; at least part-time anyway.
Like you, I also don't believe that G.W. Bush really won the election.
I wouldn't want John Kerry either, but nevertheless do not believe
Bush won; basically can't, with all of the information about [despotic]
and criminal voter suppression, disenfranchisement, the wicked long
waits in Democratic Party districts, and so on. It's a rather extremely
fraudulent election, in my opinion; G.W. Bush et al's second literal and
criminal hijacking of the presidency. 150 or 200 years ago, this
situation might have already evolved into a Civil War II.
Thursday, March 17, 2005
I really enjoyed your points. However, my experience with
Steve Freeman was very different. He was open and
collaborative, sharing his data. He was also curious and
insightful about my ongoing study.
Your points are well-taken. The pollsters are a business,
about as trustworthy as Enron. Exit polling is a job for
scientists with transparent and peer-reviewed methods.
Such would befit a true democracy.
Your article stands out. Thank you and keep up the
critical reasoning.
JQ
A day after Edison Media Research/Mitofsky International made available their much anticipated report (NEP Report)[1] on the 2004 Presidential election exit polls in January, the University of Pennsylvania (UPenn) issued a press release[2] announcing that Dr. Steven F. Freeman, an "expert" on the presidential election exit poll errors, has access to a satellite link and is available for interviews.
Although Dr. Freeman is highly credentialed,[3] his publication and presentation credits to date are devoid of any published research on exit polls. His website does, however, include a link to one working paper, The Unexplained Exit Poll Discrepancy,[4] several op-ed pieces, and indicates that two additional working papers and a book on the subject are forthcoming.
Does Dr. Freeman’s exit poll research qualify him as an “expert” in the subject field? I suggest not, but that in no way precludes him from conducting research on exit polls, and his findings should be judged solely on the logic of presentation and validity of scientific methods.
The UPenn press release quotes Dr. Freeman regarding the NEP Report:
“Although the authors of the report state that ‘the differences between the exit poll estimates and the actual vote [are] most likely due to Kerry voters participating in the exit polls at a higher rate than Bush voters,’ they provide little data or theory to support this thesis,” said Freeman. “Rather, the report only confirms the exit poll official count discrepancy that I documented in my Nov. 12 paper, corroborates the data I collected, and rules out most types of polling error.”[5]
I intend to demonstrate that Dr. Freeman’s only publicly available research on the 2004 NEP Presidential exit polls is seriously flawed. However, despite the flaws, Dr. Freeman is correct in concluding that statistical “explanations for the discrepancy thus far provided are inadequate.”[6]
The Unexplained Exit Poll Discrepancy
In this review of The Unexplained Exit Poll Discrepancy, I start with each of Dr. Freeman’s conclusions,[7] work through the methods he used to reach each of those conclusions, and present a counter-analysis to demonstrate that, in all but his final assertion, Dr. Freeman’s conclusions are wrong and improperly drawn.
Conclusion I: In General, Exit Poll Data Are Sound
Dr. Freeman provides data and largely qualitative descriptions of exit polls from Germany, Utah, Mexico, and the ex-Soviet bloc to make his point that, in general, exit poll data are sound. Given the lack of data to support his conclusions for exit polls in Mexico and ex-Soviet bloc nations, I have limited my scope here to the German and Utah exit polls, where Dr. Freeman failed to consider that disparate methodology could account for the greater accuracy of those exit polls when compared to media-funded US Presidential exit polls. It seems that Dr. Freeman thinks that all exit polls are created equal, which is far from true.
German Exits
Dr. Freeman analyzes data from several German exit polls of national elections where the predicted result closely matched the tallied result. Mystery Pollster (MP) Mark Blumenthal uncovered an opinion[8] prepared by the ACE project, which is funded by the United Nations and the United States Agency for International Development. As excerpted by MP, the opinion states:
[Exit poll] reliability can be questionable. One might think that there is no reason why voters in stable democracies should conceal or lie about how they have voted, especially because nobody is under any obligation to answer in an exit poll. But in practice they often do. The majority of exit polls carried out in European countries over the past years have been failures.[9] (Emphasis added)
In a telephone conversation with Dr. Freeman, one month prior to the publication of the final version of The Unexplained Exit Poll Discrepancy, MP urged the professor to check the German exit poll methodology before suggesting that the accuracy of the foreign polls has any relevance to the discussion of the 2004 NEP polls.[10] Following the conversation, MP contacted Dr. Dieter Roth of FG Wahlen, the organization that generated the German exit poll data used by Dr. Freeman. Dr. Roth provided some information about methods saying, “I know that Warren Mitofsky's job is much harder than ours, because of the electoral system and the more complicated structure in the states.”[11]
Put simply, the German exit polls are designed better than the NEP exit polls,[12] and the potential for both sampling and non-sampling error is reduced by their methods, which explains why exit polls in Germany have a greater chance of achieving accuracy than US Presidential exit polls. Dr. Roth’s data regarding methods and his statements were available to Dr. Freeman before he published his paper, but this information was not incorporated into the working paper.
BYU Exits
Dr. Freeman explains how the Brigham Young University (BYU) exit poll of Utah voters in the 2004 Presidential election came within 0.03 percent of predicting the tallied Bush proportion and 0.1 percent of the tallied Kerry proportion, but as with the treatment of the German exit poll data, Dr. Freeman does not provide information about the BYU poll methods.[13] MP queried the BYU exit poll website and found information on the poll’s methods.[14] As was the case with the German exits, when compared to the NEP methods, the BYU methods are far superior - hence one reason that the BYU exits have a greater chance of achieving accuracy than the NEP exits.
But there is something more peculiar about Dr. Freeman’s selection of the BYU poll as evidence that exit poll data, including the 2004 NEP Presidential exit poll data, are generally sound. Why did he not consider the NEP exit poll results for Utah, the state where the BYU exit poll nailed the election result? If he had, he would have realized that the NEP exit poll of Utah was off by 2.7 percent in Kerry’s favor.[15] Rather than demonstrating that the NEP exit poll data are “generally sound,” Dr. Freeman’s use of the BYU exit poll provides evidence to suggest that the inferiority of the NEP exit poll methods could explain the observed discrepancies.
No Apples-to-Apples Comparison?
Dr. Freeman’s review of exit poll data excluded an important piece of literature on media-funded US Presidential exit polls. The bibliography hit all the major exit poll literature sans one chapter by Warren Mitofsky and Murray Edelman written in 1995.[16] In that chapter on the 1992 VRS exit polls, the authors wrote:
The difference between the final margin and the VRS estimate (in 1992) was 1.6 percentage points. VRS consistently overstated Clinton’s lead all evening...Overstating the Democratic candidate was a problem that existed in the last two presidential elections.[17]
Certainly this year's NEP Presidential exit polls showed greater unidirectional bias than other years, but that is not the case that Dr. Freeman built. He chose to highlight data from exit polls that employed highly disparate methods when compared to the methods typically used for media-funded US Presidential exit polls and did so while ignoring pertinent literature on these exit polls that demonstrated chronic Democratic bias.
Conclusion II: Analysis of Exit Poll Data Reveals Statistically Significant Discrepancies in OH, PA, and FL
In short, Dr. Freeman concluded that John Kerry’s predicted exit poll proportion significantly exceeded the Senator’s tallied proportion. This section begins with a review of the problems with Dr. Freeman’s data and methods, followed by the implications of these problems for his conclusions.
Freeman's Data and Methods
Dr. Freeman's null hypothesis states that, assuming independent state polls with no systematic bias, Kerry's predicted proportion should not significantly exceed his tallied proportion.
To test his null, the professor compared data extrapolated from exit poll data posted on CNN's website shortly after midnight on election night to election tally data.[18] The CNN data were presented in tabular format and reported the predicted proportions for Bush, Kerry, and Other candidates by gender. From the Male/Female split, which was posted on the CNN website as whole numbers, Dr. Freeman extrapolated to achieve values significant to a tenth of a percentage point. Although Dr. Freeman reported both Bush's and Kerry's "predicted" (exit poll) proportion of the vote, his statistical analysis is based only on Kerry's proportion; therefore, I have only reproduced these data for Kerry’s proportion in Exhibit 1.[19]
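To make the extrapolation concrete, here is a minimal Python sketch, using the whole-number CNN figures that appear in Exhibit 1 below, of how the gender partials combine into Freeman's statewide estimates; the variable names are mine, not Freeman's.

```python
# Reconstruction of the gender-split extrapolation behind Exhibit 1.
# Each tuple: (male share of electorate, Kerry share among men, Kerry share among women);
# the female share of the electorate is 1 minus the male share.
cnn_whole_numbers = {
    "FL": (0.46, 0.47, 0.52),
    "OH": (0.47, 0.51, 0.53),
    "PA": (0.47, 0.52, 0.56),
}

for state, (male_share, kerry_men, kerry_women) in cnn_whole_numbers.items():
    male_partial = male_share * kerry_men            # e.g. FL: 0.46 * 0.47 = 0.2162
    female_partial = (1 - male_share) * kerry_women  # e.g. FL: 0.54 * 0.52 = 0.2808
    kerry_predicted = male_partial + female_partial  # FL: 0.4970, the 49.7% in Exhibit 1
    print(f"{state}: {male_partial:.1%} + {female_partial:.1%} = {kerry_predicted:.1%}")
```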
Dr. Freeman’s paper recognizes that exit polls are not simple random samples, but cluster samples, and therefore have higher standard errors than typical phone surveys of similar sample size.[20] The difference between these standard errors is known as the design effect. Dan Merkle and Murray Edelman calculated the design effect of the 1996 Presidential election exit polls to be 1.7, the square root of which is 1.3, leading the authors to state that the 1996 exit polls showed "a 30(percent) increase in the sampling error computed under the assumption of simple random sampling."[21] This 30 percent adjustment is referred to as the design effect square root (DESR). To account for the design effect associated with the 2004 exit polls, Dr. Freeman applied Merkle and Edelman’s DESR to the standard error of Florida, Ohio, and Pennsylvania.
Exhibit 1: Dr. Freeman’s Data - Kerry’s Predicted Proportion

State | Electorate (Male) | Electorate (Female) | CNN “Uncorrected” Predicted Proportions[22] (Male) | (Female) | Extrapolated Partials (Male) | (Female) | Freeman’s Sum of Partials
FL | 46% | 54% | 47% | 52% | 21.6% | 28.1% | 49.7%
OH | 47% | 53% | 51% | 53% | 24.0% | 28.1% | 52.1%
PA | 47% | 53% | 52% | 56% | 24.4% | 29.7% | 54.1%
Having estimated the standard error for each state, the professor performed a single-tailed test for comparing the results derived from a single sample to a mean of samples (or established standard) and determined that Kerry's proportion significantly exceeded the election result in all three battleground states at the 95 percent confidence level. If a finding is "significant" (p-value <.05), then one can reject the null hypothesis. If the result is "not significant" (>.05), then statisticians do not reject the null hypothesis; in fact a non-significant finding is just that – non-significant. Freeman rejected the null hypothesis by stating that the observed discrepancies are "impossible" to have been due to chance or random error (i.e., the discrepancies in each state were significant).[23]
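To illustrate the mechanics of that test (a sketch only, not a reproduction of Freeman's exact inputs: the tallied proportion and sample size below are placeholders), the Z-score compares the exit-poll proportion to the tallied proportion, with the simple-random-sample standard error inflated by the DESR:

```python
from math import sqrt
from scipy.stats import norm

# Placeholder inputs for illustration only; not Freeman's actual figures.
p_exit = 0.521   # exit-poll (predicted) Kerry proportion, e.g. Ohio in Exhibit 1
p_tally = 0.489  # tallied Kerry proportion (placeholder value)
n = 2000         # state exit-poll sample size (placeholder value)
desr = 1.3       # design effect square root Freeman took from Merkle & Edelman

# Standard error of a simple random sample, inflated by the DESR for clustering.
se = desr * sqrt(p_tally * (1 - p_tally) / n)

z = (p_exit - p_tally) / se
p_one_tail = norm.sf(z)           # P(Z >= z): the single-tail test Freeman used
p_two_tail = 2 * norm.sf(abs(z))  # the two-tail alternative discussed later

print(f"Z = {z:.2f}, one-tail p = {p_one_tail:.3f}, two-tail p = {p_two_tail:.3f}")
```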
Dr. Freeman's "statistical" analysis fails on three main points, which affects Freeman’s conclusions. The analysis: 1) violates the "rule of significant digits"; 2) improperly estimates the design effect; and 3) employs a single-tail test when his assumptions require a two-tail test.
Rule of Significant Digits
According to Dr. Freeman, the exit poll predicted that Kerry would win 49.7 percent of the vote in Florida, 52.1 percent in Ohio, and 54.1 percent in Pennsylvania. As explained in Exhibit 1 above, Dr. Freeman arrived at these proportions by building partials from an extrapolation of the CNN data by gender, which were posted on the website as whole proportions. In doing so, the professor violated the rule of significant digits, which states:
In a calculation involving multiplication, division, trigonometric functions, etc., the number of significant digits in an answer should equal the least number of significant digits in any one of the numbers being multiplied, divided etc.[24]
Acknowledging the limits of his data, Dr. Freeman makes the following statement in a footnote to his extrapolations:
Displaying these numbers out to one decimal point is not meant to imply that the numbers are precise to that level of significance, but rather to provide as much data as accurately as I can. Among the limitations of the CNN exit poll data are the lack of significant digits. I did not want to unnecessarily degrade the data further by rounding numbers derived from calculations.[25]
Essentially, the data used by Dr. Freeman are imprecise; some call a dataset like this “fuzzy,” while others describe it as “noisy.” While acknowledging the limitations of the data in his footnote, Dr. Freeman does not seem to understand the implications of these limitations for statistical analysis. In short, analysis of fuzzy data yields fuzzy results.
Exhibit 2 shows the range of possible values for Kerry's predicted proportion given the error bounds of the data when a significant digit is considered (10th).[26] If Dr. Freeman had analyzed the error bounds associated with his data, he would have realized that statistical analyses (calculated Z-scores and p-values in particular) are highly sensitive to the number of significant digits.
Exhibit 2: Upper and Lower Bound of Dr. Freeman’s Data - Kerry’s Proportion

State | Freeman’s 10th | Lower Bound | Upper Bound
FL | 49.7% | 49.5% | 50.4%
OH | 52.1% | 51.5% | 52.4%
PA | 54.1% | 53.5% | 54.4%
Now that I’ve established the fuzziness of the data, there is also a degree of fuzziness associated with the estimation of the design effect that should be considered.
What Design Effect? Why?
As mentioned, the 1.3 DESR was applied to the standard error of a poll of the same sample size assuming a simple random sample to account for the exit poll’s design effect.
Warren Mitofsky explained to me in an e-mail that the DESR calculated for each state in the 2004 exit polls ranged from 1.5 to 1.8 depending on the average number of samples per precinct. When asked whether it was appropriate to use the 1.3 DESR factor calculated by Merkle and Edelman for the 1996 election to estimate the 2004 exit poll design effects, Mr. Mitofsky replied that “[t]he Merkle/Edelman paper is not what we computed this year...both Merkle and Edelman participated in this latest calculation.”[27]
Dan Merkle of ABC News wrote the following regarding the use of this factor for analysis of the 2004 Presidential Election exit polls:
What was in the Merkle and Edelman chapter is only a general estimate based on work at VNS in the early 1990s.
The design effect will vary state by state based on the number of interviews per precinct and how clustered the variable is. More work was done on this in 2004 by Edison/Mitofsky. Edelman and I did participate in this. I would suggest using the design effects calculated by Edison/Mitofsky for their 2004 polls.[28]
Complicating the computation of the DESR is the fact that there are likely two different factors used for the intercept interviews and the telephone interviews. Dan Merkle wrote that Mitofsky’s DESRs “only appl[y] to the intercept interviews,” and that “there may be a separate (smaller) design effect for the telephone survey component.”[29]
I checked with Jennifer Agiesta of Edison Media Research whether there was a smaller DESR associated with the telephone survey component than that which was conveyed by Mitofsky. Ms. Agiesta replied:
According to Warren, we did a new study since the one that Dan Merkle and Murray Edelman did some years ago and the design effects Warren reported to you were the latest ones computed. The whole advisory council, including Dan Merkle and Murray Edelman, participated in it and agreed that the information on design effects that Warren sent you is correct.[30]
Although I'm not certain that Ms. Agiesta understood my question and I have a follow-up question pending with her, it should be clear that application of a 1.3 DESR is not appropriate; the DESR varies by state and is at least 1.6 in all three states, but could be as high as 1.8 in Florida.[31] This information was shared with Dr. Freeman before his paper was published.
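For readers unfamiliar with the term, a common textbook approximation (an assumption on my part; it is not necessarily the exact computation Edison/Mitofsky performed) ties the design effect of a cluster sample to the average number of interviews per cluster m and the intraclass correlation rho: deff = 1 + (m - 1) x rho, with DESR = sqrt(deff). A small sketch shows why the DESR grows with the average precinct sample size, consistent with Mitofsky's description:

```python
from math import sqrt

def desr(avg_interviews_per_precinct: float, intraclass_corr: float) -> float:
    """Design effect square root under the standard cluster-sampling
    approximation deff = 1 + (m - 1) * rho. Illustrative only; the rho
    value used below is hypothetical, not an Edison/Mitofsky figure."""
    deff = 1 + (avg_interviews_per_precinct - 1) * intraclass_corr
    return sqrt(deff)

# More interviews per precinct -> larger design effect, consistent with the
# point that the DESR varies by state with the average precinct sample size.
print(round(desr(40, 0.04), 2))  # 1.6
print(round(desr(60, 0.04), 2))  # 1.83
```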
How Many Tails?
Dr. Freeman's null hypothesis stated that, assuming independent state polls with no systematic bias, Kerry's predicted proportion should not significantly exceed his tallied proportion. Exhibit 3 is a reproduction of Dr. Freeman's Figure 1.2,[32] which depicts a normal distribution for Ohio given his calculated standard deviation for all three states.[33]
Exhibit 3: Normal Distribution of Kerry’s Tallied Distribution with Kerry’s Predicted Proportion
The normal distribution depicts the range of possible proportions that could occur if the exit poll were conducted 100 times. Kerry's "tallied percentage of the vote" is the established standard, or, in this case, the mean of samples. The 95 percent confidence interval shows the range of proportions that would result for 95 of 100 exit polls and is commonly referred to as the "margin of error." With this figure, Freeman attempts to show that the Ohio exit poll is outside the margin of error and therefore we can be 95 percent confident that the discrepancy is "significant" and cannot be explained by random error alone.
Notice though that there are two "tails" outside the 95 percent confidence interval: left and right. The right tail consists of the 2.5 exit poll results of 100 that could be expected to significantly exceed the tallied percentage, whereas the left tail represents the 2.5 exit poll results of 100 that could be expected to be significantly lower than the tallied percentage. Unless the professor sets aside his assumption of no bias in the exit poll and an accurate election tally, he must include the probability of a significant finding at BOTH ends of the normal distribution. By insisting on a single-tail test, he is hinting that either the exit poll is biased or the tally is wrong. This insinuation is inappropriate prior to, or in the process of, testing a null hypothesis that assumes no bias and an accurate tally. Dr. Freeman's failure to properly apply a two-tail test means that his p-values are ½ what they should be.
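The halving relationship is easy to check numerically; a minimal sketch:

```python
from scipy.stats import norm

z = 1.96  # any positive Z-score will do

p_one_tail = norm.sf(z)      # area in the right tail only
p_two_tail = 2 * norm.sf(z)  # area in both tails

# For the same Z, the two-tail p-value is exactly twice the one-tail value,
# which is why a one-tail test makes a borderline result look more significant.
print(round(p_one_tail, 3), round(p_two_tail, 3))  # 0.025 0.05
```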
Implications for Dr. Freeman's Analysis and Conclusions
Using Dr. Freeman's data, I calculated Z-scores and p-values under multiple scenarios to consider the range of possible p-values given the fuzziness of the dataset and to highlight the effect of using the 1.3 DESR calculated for the 1996 elections and a single-tail test in violation of the null hypothesis. The results of these tests are presented in Exhibit 4.
Exhibit 4: Results of Z-tests for Three Rounding Scenarios, Two DESR Assumptions, and Both 1-tail and 2-tail p-values

The first nine value columns use the 1.3 DESR (Merkle & Edelman, 2000); the last nine use the 1.6 DESR (Mitofsky, 2004). Within each DESR, the columns give the Z-score and the 1-tail and 2-tail p-values for, in order, the Freeman’s 10th, Lower Bound (LB), and Upper Bound (UB) scenarios.

State | Z | 1-tail | 2-tail | Z | 1-tail | 2-tail | Z | 1-tail | 2-tail | Z | 1-tail | 2-tail | Z | 1-tail | 2-tail | Z | 1-tail | 2-tail |
FL | 2.15 | .02 | .03 | 1.99 | .02 | .05 | 2.72 | .00 | .01 | 1.75 | .04 | .08 | 1.61 | .05 | .11 | 2.21 | .01 | .03 |
OH | 2.44 | .01 | .01 | 2.03 | .02 | .04 | 2.65 | .00 | .01 | 1.98 | .02 | .05 | 1.65 | .05 | .10 | 2.15 | .02 | .03 |
PA | 2.21 | .01 | .03 | 1.80 | .04 | .07 | 2.41 | .01 | .02 | 1.80 | .04 | .07 | 1.47 | .07 | .14 | 1.96 | .03 | .05 |
Notes: Freeman’s 10th = Freeman’s extrapolation from the whole proportions; LB = Lower Bound of the true proportion significant to a 10th; UB = Upper Bound of the true proportion significant to a 10th. p-values in red indicate a significant finding; p-values in green indicate a non-significant finding. p-values of .05 were determined to be significant or not significant by taking the digit out to a 100th.
To review, a p-value of <0.05 represents a significant finding. If the p-value is >0.05 then statistically, nothing can be said about the discrepancy - it is not significant. As shown in the exhibit, when the 1.3 DESR is assumed, all findings are significant with the exception of the 2-tail p-value for Pennsylvania when the lower bound of the exit poll data is considered. However, when the 1.6 DESR is applied, there are several scenarios where the p-values (1- or 2-tail) are not significant. When the inappropriate single-tail findings are removed, the majority of findings when rounding is considered are not significant.
Dr. Freeman’s analysis leads him to conclude the following about the data:
Assuming independent state polls with no systematic bias, the odds against any two of these statistical anomalies occurring together are more than 5,000:1...The odds against all three occurring together are 662,000-to-one. As much as we can say in social science that something is impossible, it is impossible that the discrepancies between predicted and actual vote counts in the three critical battleground states of the 2004 election could have been due to chance or random error.[34] (emphasis added)
When the data set is analyzed correctly, the possibility remains that all three states are not significant. In fact, what we learn from my analysis of the data is that it is impossible to determine the significance of the discrepancies in Florida, Ohio, and Pennsylvania.
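For reference, the combined odds Freeman quotes in the passage below follow from treating the state polls as independent, so the per-state probabilities simply multiply. Using the rounded one-tail p-values from the 1.3-DESR columns of Exhibit 4 (Freeman worked from less-rounded figures, so this only approximates his 662,000-to-one):

```python
# Under independence, the probability that all three discrepancies arise by
# chance is the product of the per-state p-values.
p_fl, p_oh, p_pa = 0.02, 0.01, 0.01  # rounded 1-tail p-values (1.3 DESR) from Exhibit 4

p_all_three = p_fl * p_oh * p_pa
print(f"about 1 in {1 / p_all_three:,.0f}")  # about 1 in 500,000 with these rounded inputs
```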
Conclusion III: Explanations for the Discrepancy Thus Far Provided are Inadequate.
Although it is technically possible that the observed discrepancy could be explained by random sampling error alone, that conclusion is not reasonable. Sampling error could account for some of the error, but it is highly improbable that it accounted for all of it in the three states analyzed by Dr. Freeman. In fact, this point was so obvious that Jim Rutenberg of the New York Times, after reading a report prepared by Edison/Mitofsky the day after the election, wrote that “the surveys had the biggest partisan skew since at least 1988, the earliest election the report tracked.”[35] So why did Dr. Freeman set out to statistically prove a point conceded by the survey designers the day after the election? I suggest that he did so in an attempt to lend himself credibility as an “expert”[36] on the matter in time for the pending release of the NEP Report.
Dr. Freeman published his paper in December 2004. At the time, there was little by way of explanation of what went wrong with the exit polls. The much anticipated NEP Report on the subject was not available and no one really had an idea when it would be released.[37] Nearly two pages of The Unexplained Exit Poll Discrepancy are dedicated to a largely dismissive discussion of anticipated non-fraud-related explanations for the discrepancies that had been suggested by talk radio show hosts and bloggers. Rather than nitpick the hasty generalizations and straw-man quality of this part of his paper, I will simply say that I believe his Conclusion III stands: the exit poll-election result discrepancy is still not fully explained, despite the NEP Report that was released on January 19, 2005.
NEP Report[38]
The 77-page report prepared by the designers of the 2004 Presidential exit polls for the NEP includes a ton of information. “Say what you will about its conclusions, this report is loaded with never before disclosed data,” wrote Mystery Pollster.[39] The following bullets include a sample of the summary findings from the report:[40]
· Sample selection of polling locations was not the problem.
· No systematic problem in how the exit poll data were collected or processed was discovered.
· Discrepancies do not support allegations of fraud due to rigging voting equipment.
· The observed higher than average Within Precinct Error (WPE) in many precincts was likely due to Kerry voters participating in the exit polls at a higher rate than Bush voters.
A group of academics that included Dr. Freeman circulated a paper[41] under the organization US Count Votes,[42] which is critical of the NEP Report. Unlike Dr. Freeman’s The Unexplained Exit Poll Discrepancy, this paper dealt quickly with the significance of the discrepancies, as the NEP Report readily admitted that the systematic Democratic bias in the 2004 NEP Presidential exit polls could not be explained by sampling error. The US Count Votes authors conclude that only two hypotheses are worthy of exploration: 1) the exit polls were subject to a consistent bias of unknown origin; or 2) the official vote count was corrupted.[43] The question then becomes: did the NEP Report prove the first hypothesis?
Differential Non-Response
Of primary concern to the US Count Votes authors is the NEP Report's conclusion that differential non-response is the likely explanation for the discrepancy. “No data in the report supports the hypothesis that Kerry voters were more likely than Bush voters to cooperate with pollsters, and the data suggests the opposite may have been true,” wrote the authors.[44] The paper included a chart of data collected from the NEP Report that was “not analyzed or mentioned in the text.”[45] The chart is reproduced in Exhibit 5.
Exhibit 5: Response to Exit Polls Slightly Higher in Republican Precincts
If these data were significantly correlated, the US Count Votes authors conclude, the finding would suggest that “in precincts with higher numbers of Bush voters, response rates were slightly higher than in precincts with higher number of Kerry voters.”[46] However, the NEP Report stated that “there was no significant difference between the completion rates and the precinct partisanship.”[47] In situations like these, I would prefer to see “completion rate” regressed against “precinct partisanship,” but I take the NEP Report authors’ word that the relationship is not significant. US Count Votes should have pushed for more information supporting this non-significant finding, but instead they ignore the statement of non-significance and graphically present the data to suggest an alternative hypothesis, which seems to me to be inappropriate unless they are alleging professional incompetence or outright deception on the part of the NEP Report authors.
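For what it is worth, the check I have in mind is straightforward; below is a sketch with invented category-level numbers (the real precinct-level completion rates are not published here, and in practice one would regress precinct-level data rather than category means):

```python
import numpy as np
from scipy import stats

# Hypothetical data: precinct partisanship coded from strongly Bush (-2) to
# strongly Kerry (+2), with an invented completion rate for each category.
partisanship = np.array([-2, -1, 0, 1, 2])
completion_rate = np.array([0.56, 0.55, 0.54, 0.53, 0.53])  # made-up values

# Simple linear regression of completion rate on partisanship; a significant
# negative slope would support the pattern US Count Votes reads into the chart.
result = stats.linregress(partisanship, completion_rate)
print(f"slope = {result.slope:.4f}, p-value = {result.pvalue:.3f}")
```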
Regarding the differential non-response explanation from the NEP Report, researchers Traugott, Highton, and Miller of the National Research Commission on Elections and Voting[48] write the following:
What must necessarily become more subjective, having eliminated the most directly testable explanations for the bias, is an attempt to explain what other sources there might be among many unobserved and unmeasured possible explanations. The NEP Report concludes that the most likely source of the errors is differential response patterns by Kerry and Bush voters leaving the polls – that is, Kerry voters were more likely to be interviewed while Bush voters were less likely. The authors include a simulation of the likely magnitude of the differential response rates, and then they speculate about contributing factors. These conclusions must be inferred (of necessity, since no information is available on the refusers) from the finding that the average WPE was greatest where younger and more highly educated people were interviewers, irrespective of gender. The analysis also suggests that interviewers hired later and who describe themselves as “somewhat” or “not very well” trained also were associated with data that produced higher average WPE’s. From these analyses, the NEP leaders conclude they must pay more careful attention to interviewer recruitment, including trying to hire older interviewers, and to their training.[49]
Given the frequency of NEP Report conclusions that included qualifiers such as “likely,” “may,” and “could,” I understand why the US Count Votes authors are concerned with the analysis. In effect, the null hypothesis that differential non-response was not a factor was never (from what I can tell) statistically rejected by the NEP Report. However, the contention that “[no] data in the report supports the hypothesis that Kerry voters were more likely than Bush voters to cooperate with pollsters” is not in the least bit accurate.[50] The NEP Report presented volumes of information that “suggests” support for its hypothesis. Nonetheless, the US Count Votes authors turned their attention to another set of data in the NEP Report that they suggest implies election fraud is a more plausible explanation of the exit poll discrepancy.
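For readers unfamiliar with the mechanism, the arithmetic of differential non-response is easy to illustrate. The sketch below (Python) uses a hypothetical vote split and hypothetical response rates of my own choosing; it is only meant to show how a modest gap in cooperation rates translates into a sizable exit poll error of the kind the NEP Report describes.

```python
# A minimal sketch of the differential non-response mechanism.
# The vote shares and response rates are hypothetical, not NEP figures.

def exit_poll_margin(kerry_share, bush_share, kerry_response, bush_response):
    """Exit-poll Kerry-minus-Bush margin (points) implied by true vote
    shares and each group's willingness to complete the questionnaire."""
    k = kerry_share * kerry_response   # expected Kerry respondents per voter
    b = bush_share * bush_response     # expected Bush respondents per voter
    return 100 * (k - b) / (k + b)

official_margin = 100 * (0.48 - 0.52)                    # Bush wins by 4 points (hypothetical)
poll_margin = exit_poll_margin(0.48, 0.52, 0.56, 0.50)   # Kerry voters respond at 56%, Bush voters at 50%

print(f"official margin: {official_margin:+.1f}")        # -4.0
print(f"simulated exit-poll margin: {poll_margin:+.1f}") # about +1.7

# A six-point gap in response rates turns a 4-point Bush win into a
# roughly 1.7-point Kerry lead in the simulated poll -- an error on the
# order of the discrepancies under discussion.
```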
WPE by Vote Equipment
The NEP Report included mean and median WPE by type of equipment used at the polling place. Categories of voting equipment included paper ballots, mechanical voting machines, touch screens, punch cards, and optical scan. The NEP Report rejects the fraud hypothesis because it found “no systematic differences for precincts using touch screen and optical scan voting equipment” and because “the differences are similar to the differences for punch card voting equipment, and less than the difference for mechanical voting equipment.”[51]
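Since the WPE statistic carries the weight of this comparison, a short sketch of how it is computed may help. The precinct records below are invented, and I assume the conventional sign definition (official Kerry-minus-Bush margin minus the exit-poll margin), under which negative values mean the exit poll overstated Kerry.

```python
# A minimal sketch of computing within-precinct error (WPE) and summarizing
# it by equipment type. All precinct records here are invented; the sign
# convention assumed is official margin minus exit-poll margin (Kerry - Bush),
# so negative WPE means the poll overstated Kerry.
from collections import defaultdict
from statistics import median

precincts = [
    # (equipment, official Kerry-Bush margin, exit-poll Kerry-Bush margin), in points
    ("paper",         2.0,  3.1),
    ("paper",        -5.0, -4.3),
    ("touch screen",  1.0,  7.2),
    ("touch screen", -8.0, -1.5),
    ("optical scan",  4.0,  9.8),
]

wpe_by_equipment = defaultdict(list)
for equipment, official, poll in precincts:
    wpe_by_equipment[equipment].append(official - poll)

for equipment, errors in sorted(wpe_by_equipment.items()):
    print(f"{equipment:12s} median WPE = {median(errors):+.1f}")
```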
US Count Votes reproduced a table from the NEP Report showing that the median WPE for precincts with paper ballots was -0.9, but anywhere from -5.5 to -10.3 for all other voting methods, including optical scan, punch cards, touch screen, and mechanical voting. The implied conclusion is that automated voting technology could have been tampered with, which would account for the substantially larger WPE magnitudes compared to precincts with paper ballots. Regarding the disparate WPE by vote equipment, US Count Votes states:
[The NEP Report authors] implicitly dismiss the possibility that errors for all four automated voting systems could derive from errors in the election results and their breakdown for voting equipment ignores whether results are tallied in the precinct or at a central location.[52]
But why would the average WPE be larger in magnitude for mechanical voting equipment (presumably the old-fashioned pull-lever machines) and punch cards than for electronic voting equipment? According to Dr. Freeman and Dr. Josh Mitteldorf:
[T]his fact merely suggests that all three of these systems may have been corrupted. Indeed, there is little question about problems associated with both punch card systems (recall the Florida debacle in 2000) and mechanical voting machines, which are generally unreliable, vulnerable to tinkering and leave no paper trail.[53]
Missing from this discussion are the NEP Report’s findings for these data when disaggregated into urban and rural precincts. The NEP Report explains that “[t]he low value of the WPE in paper ballot precincts may be due to the location of those precincts in rural areas, which had a lower WPE than other places.”[54] In other words, in rural areas, where paper ballots are most prevalent, the average WPE was lower than the aggregate average WPE; in the urban areas, only five sampled precincts included in the analysis used paper ballots.
What we have here is an apparent case of Simpson’s Paradox,[55] which seems ripe for more rigorous statistical analysis. US Count Votes makes the correct call:
The Edison/Mitofsky Report does not report having done an ANOVA (analysis of variance[56]) of voting machine type that might confirm their claim that there is no difference between precincts using different types of voting machines.[57]
ANOVA is an obvious test for these data, and it is baffling that the NEP Report does not include its results. If a significant difference by voting equipment is found in the aggregate, the data can be disaggregated by rural versus urban precincts and retested. If the disaggregated differences are not significant, then the NEP Report authors should convey this fact and explain in more detail what it is about the data that produces the discrepant significance findings. If the differences remain significant, then some other explanation of the discrepancy should be pursued, as US Count Votes suggests. A sketch of such an analysis follows.
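Here is what that two-step analysis might look like in Python, using entirely made-up precinct-level WPE values: a one-way ANOVA across equipment types in the aggregate, followed by a model that adds an urban/rural factor to see whether the apparent equipment effect survives disaggregation.

```python
# A minimal sketch of the suggested analysis on invented data; none of these
# WPE values, equipment assignments, or locations come from the NEP Report.
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "wpe":       [-0.5, -1.2, -0.9, -6.1, -5.0, -7.3, -5.8, -6.6, -4.9],
    "equipment": ["paper", "paper", "paper",
                  "touch screen", "touch screen", "touch screen",
                  "optical scan", "optical scan", "optical scan"],
    "location":  ["rural", "rural", "rural",
                  "urban", "rural", "urban",
                  "urban", "urban", "rural"],
})

# Step 1: aggregate one-way ANOVA -- does mean WPE differ by equipment type?
groups = [g["wpe"].values for _, g in df.groupby("equipment")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"one-way ANOVA by equipment: F = {f_stat:.2f}, p = {p_value:.3f}")

# Step 2: add the urban/rural factor. If the equipment effect weakens once
# location is in the model, that is the Simpson's Paradox pattern suggested
# by the rural concentration of paper-ballot precincts.
model = smf.ols("wpe ~ C(equipment) + C(location)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```

On the real data one would also want to respect the cluster design (precincts within states), but even this simple version would speak directly to the question US Count Votes raises.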
US Count Votes makes the following statement in the conclusion to their paper:
The Edison/Mitofsky report fails to substantiate their hypothesis that the difference between their exit polls and official election results should be explained by problems with the exit polls. They assert without supporting evidence that (p. 4), “Kerry voters were more likely to participate in the exit polls than Bush voters.” In fact, data included within the report suggest that the opposite might be true.
Their analysis of the potential correlation of exit poll errors with voting machine type is incomplete and inadequate, and their report ignores the alternative hypothesis that the official election results could have been corrupted.
The issue raised in the first paragraph quoted above could be addressed by releasing a regression analysis of completion rates against precinct partisanship. The issue raised in the second paragraph could be resolved with an ANOVA of WPE by vote equipment type and vote equipment location. The NEP Report authors acknowledged that they “need to do more investigation into the causes of the statistical skew in the exit poll data for the general election.”[58] For these reasons, I concur with Conclusion III of Dr. Freeman’s The Unexplained Exit Poll Discrepancy: Explanations for the discrepancy thus far provided are inadequate. I look forward to further analysis from Edison Media Research and Mitofsky International on this topic.
Concluding Remarks
This paper has demonstrated that Dr. Freeman is not an “expert” on exit polls or the 2004 Presidential exit poll discrepancies, as the UPenn press release suggested. In fact, his paper, The Unexplained Exit Poll Discrepancy, is highly flawed. His argument that “in general, exit poll data are sound”[59] fails because it suppresses evidence,[60] and his conclusion that “it is impossible that the discrepancies between predicted and actual vote counts in”[61] Ohio, Florida, and Pennsylvania arose by chance was not substantiated statistically. Nevertheless, Dr. Freeman is right in concluding that explanations of the discrepancy to date are inadequate, and Edison/Mitofsky should address the concerns of US Count Votes in subsequent analysis of their data.
Dr. Freeman has written a book based on this research, due out in a couple of months,[62] and has a couple of working papers in progress[63] that I am told will be published in an academic or professional journal. If The Unexplained Exit Poll Discrepancy is any indicator of the quality of research in these forthcoming works, I suggest that his publishers take a closer look at the manuscripts.
NOTES:
1 See http://www.exit-poll.net/election-night/EvaluationJan192005.pdf.
2 See http://www.appliedresearch.us/sf/UP-MitofskyComment.htm.
3 See http://www.appliedresearch.us/sf/.
4 See http://center.grad.upenn.edu/center/get.cgi?item=exitpollp.
5 See http://www.appliedresearch.us/sf/UP-MitofskyComment.htm.
6 Page 17 of http://center.grad.upenn.edu/center/get.cgi?item=exitpollp.
7 Ibid.
8 See http://www.mysterypollster.com/main/2004/12/what_about_thos.html.
9 See http://www.aceproject.org/main/english/lf/lfd08e.htm.
10 See http://www.mysterypollster.com/main/2004/12/what_about_thos.html.
11 Ibid.
12 See http://www.exit-poll.net/election-night/MethodsStatementStateGeneric.pdf.
13 See pg. 8 of http://center.grad.upenn.edu/center/get.cgi?item=exitpollp.
14 See http://exitpoll.byu.edu/about.asp.
15 Dr. Freeman’s dataset indicates that the exit poll predicted Kerry would win 29.1 percent of the vote in Utah, whereas the election tally showed that Kerry received 26.4 percent.
16 See pg. 18 of http://center.grad.upenn.edu/center/get.cgi?item=exitpollp.
17 Mitofsky, Warren J. and Murray Edelman. 1995. “A Review of the 1992 VRS Exit Polls.” In Presidential Polls and the News Media. Eds. Lavrakas, Traugott, and Miller. Boulder, CO: Westview Press. (pp. 81-100)
18 I could not locate the source of Freeman's election tally data.
19 Pg. 4 of http://center.grad.upenn.edu/center/get.cgi?item=exitpollp describes the extrapolation method used for Ohio.
20 For a more thorough discussion of standard errors associated with cluster samples, see: 1) Frankel, Martin. 1983. Sampling Theory. Handbook of Survey Research. Eds. P. Rossi, J. Wright, and A. Anderson. Orlando, FL: Academic Press. (pp. 47-62); 2) Kalton, Graham. 1983. Introduction to Survey Sampling. Beverly Hills, CA: Sage. (pp. 28-47); 3) Kish, L. 1965. Survey Sampling. New York: Wiley; 4) Mendenhall, William, Lyman Ott, and Richard Scheaffer. 1971. Elementary Survey Sampling. Belmont, CA: Duxbury Press. (pp. 121-141, 171-183); 5) Sudman, Seymour. 1976. Applied Sampling. New York: Academic Press. (pp. 69-84, 131-170); and 6) Williams, Bill. A Sampler on Sampling. New York: Wiley. (pp. 144-161, 239-241).
21 See pg. 72 of Merkle, Daniel M. and Murray Edelman. 2000. "A Review of the 1996 Voter News Service Exit Polls from a Total Survey Error Perspective." In Election Polls, the News Media and Democracy, eds. P.J. Lavrakas and M.W. Traugott, pp. 68-92. New York: Chatham House.
22 Freeman’s analysis relies on screenshots of the CNN Exit Poll webpage saved shortly after midnight on election eve. The data likely represent the final exit poll data posted on the website before they were weighted to conform to the election result. Weighting the exit poll data to conform to the election result is not unusual.
23 See pg. 13 of http://center.grad.upenn.edu/center/get.cgi?item=exitpollp.
24 See http://www.physics.uoguelph.ca/tutorials/sig_fig/SIG_dig.htm.
25 See footnote #12 on pp. 4-5 of http://center.grad.upenn.edu/center/get.cgi?item=exitpollp.
26 It would be preferable to include at least one more significant digit for more precise analysis, but to remain consistent with Freeman’s extrapolations I stick with the 10th.
27 Mitofsky, Warren J. 2004. Electronic communication to Rick Brady, December 7.
28 Merkle, Dan. 2004. Electronic communication to Rick Brady, December 15.
29 Merkle, Dan. 2004a. Electronic communication to Rick Brady, December 17.
30 Agiesta, Jennifer. 2004. Electronic communication to Rick Brady, December 23.
31 According to Mitofsky, for states where the average number of interviews per precinct (N) was 50, the DESR was calculated as 1.8; where N = 40, the DESR was 1.6; and where N = 30, the DESR was 1.5. The NEP State Methods statement (see: http://www.exit-poll.net/election-night/MethodsStatementStateGeneric.pdf) gives the number of precincts and both intercept and telephone interview totals for the final exit polls (the actual breakdown of intercept v. telephone interviews included in Freeman’s data is not known, but the NEP data provide a good estimate). In Florida, the average number of interviews per precinct (ANP) was 50.2 when all interviews are considered and 43.3 when only intercept interviews are considered, meaning the DESR could be either 1.6 or 1.8. No telephone interviews were conducted in either Ohio or Pennsylvania; therefore the ANP was 41.7 and 42.5 respectively, resulting in a DESR of 1.6 for those states.
32 See pg. 12 of http://center.grad.upenn.edu/center/get.cgi?item=exitpollp.
33 The 95 percent confidence interval for Ohio was constructed with the 1.3 DESR.
34 See pg. 13 of http://center.grad.upenn.edu/center/get.cgi?item=exitpollp.
36 See http://www.appliedresearch.us/sf/UP-MitofskyComment.htm.
37 On November 21, 2004, Washington Post Polling Director, Richard Morin, penned a piece that started, “It will be a few more weeks before we know exactly what went wrong with the 2004 exit polls.” It was actually two months before the NEP Report was published.
38 See http://www.exit-poll.net/election-night/EvaluationJan192005.pdf.
39 See http://www.mysterypollster.com/main/2005/01/impressions_on_.html.
40 See pgs. 3-4 of http://www.exit-poll.net/election-night/EvaluationJan192005.pdf.
42 See http://www.uscountvotes.org/.
43 See pg. 3 of http://www.uscountvotes.org/ucvAnalysis/US/USCountVotes_Re_Mitofsky-Edison.pdf#search='NEP%20Exit%20Poll%20Report%20US%20Count%20Votes
44 Ibid, pgs. 3-4.
45 Ibid, pg. 4.
46 Ibid.
47 See pg. 37 of http://www.exit-poll.net/election-night/EvaluationJan192005.pdf.
48 See http://elections.ssrc.org/.
49 See pgs. 11-12 of http://elections.ssrc.org/research/ExitPollReport031005.pdf.
50 See pgs. 3-4 of http://www.uscountvotes.org/ucvAnalysis/US/USCountVotes_Re_Mitofsky-Edison.pdf#search='NEP%20Exit%20Poll%20Report%20US%20Count%20Votes.
51 See pg. 4 of http://www.exit-poll.net/election-night/EvaluationJan192005.pdf.
52 See Pg. 4 of http://www.uscountvotes.org/ucvAnalysis/US/USCountVotes_Re_Mitofsky-Edison.pdf#search='NEP%20Exit%20Poll%20Report%20US%20Count%20Votes.
53 http://www.inthesetimes.com/site/main/article/1970/
54 See pg. 40 of http://www.exit-poll.net/election-night/EvaluationJan192005.pdf.
55 See http://plato.stanford.edu/entries/paradox-simpson/.
56 See http://www.psychstat.smsu.edu/introbook/sbk27.htm.
57 See pgs. 4-5 of http://www.uscountvotes.org/ucvAnalysis/US/USCountVotes_Re_Mitofsky-Edison.pdf#search='NEP%20Exit%20Poll%20Report%20US%20Count%20Votes.
58 See pg. 12 of http://www.exit-poll.net/election-night/EvaluationJan192005.pdf.
59 See pg. 17 of http://center.grad.upenn.edu/center/get.cgi?item=exitpollp.
60 See http://wiki.cotch.net/index.php/Suppressed_Evidence.
61 See pg. 13 of http://center.grad.upenn.edu/center/get.cgi?item=exitpollp.
62 See http://www.appliedresearch.us/sf/hypotheses.htm.
63 See http://www.appliedresearch.us/sf/epdiscrep.htm.
From: <rick@alohalee.com>
Got to love this... Feel free to post. Rick
FIRST E-MAIL:
Brady's work not strong and we will be posting a critique of some of his exit poll work in about one week.
What are your credentials and background in statistics? Your own papers do not seem to be worth responding to from what I've seen of them. Do you have credentials and a position at a university?
Kathy
_________________
I responded at length and said I was happy to field criticism from US Count Votes. To which she responded:
_________________
Rick, I am sorry but I can't send it USCV's group of statisticians due to your lack of credentials. If you had a PhD in something that required statistics or had to do with elections, I could. The same thing goes for me. I have a master's degree only, so no one would have to respond to anything I said in the academic world either - unless I sign with a group of PhDs.
Kathy
_________________
I was a bit puzzled at this new rule in academia, so I sent along the most recent version of the paper and she replied:
_________________
Rick, Get a statistician with a PhD to sign it with you and then I'll read it and send it on. I don't have time. I'm buried in work.
Kathy
_________________
Last I checked, Dr. Freeman's paper and the US Count Votes response to the NEP Report were NOT in the academic world (not published in a peer-reviewed journal or book).