"I've given you a decision to make,
Things to lose, things to take,
Just as she's about ready to cut it up, she says:
"Wait a minute, honey, I'm gonna add it up."
One of the more common ways people use PolitiFact's findings is to add up the total ratings and draw a conclusion from the tally. In its simplest form, someone looks at, say, 100 PolitiFact ratings, 50 from Republicans and 50 from Democrats, adds up who received more True ratings and who received more False ratings, and concludes from those totals which side is more credible. In reality, a collection of PolitiFact's ratings provides far more information about the ideological bias of PolitiFact's editors than it does about the people they check.
One of the reasons this flawed method is so popular is that PolitiFact frequently promotes it as part of its shtick. Whether it's the ubiquitous report cards or the iPhone app with its absurd Truth Index (described as a "Dow Jones Industrial Average of truth"), PolitiFact implicitly tells readers they can simply click a link to find out the credibility of a particular politician. Like most diet pills and get-rich-quick schemes, it's snake-oil science and complete junk.
The most obvious flaw with this method is selection bias. There's simply no way for PolitiFact, or anyone for that matter, to check every statement by every politician. That means PolitiFact needs some sort of random selection process to ensure its sample reflects the wide variety of political statements being made, as well as the politicians making them. Without a random process, editors and writers might investigate statements that pique their own ideological interests. And how does PolitiFact choose its subjects?
"We choose to check things we are curious about. If we look at something and we think that an elected official or talk show host is wrong, then we will fact-check it."
Ruh-roh.
This "things we're curious about" method may explain why Senate hopeful Christine O'Donnell (Rep-RI) garnered four PolitiFact ratings, while the no less comical Senate hopeful Alvin Greene (Dem-NC) received none.
Officially, PolitiFact checks only the claims that are the "most newsworthy and significant." (Unless it's about Obama getting his daughters a puppy. Or baseball.) PolitiFact also has a penchant for accepting reader suggestions. Anyone visiting PolitiFact's Facebook page is aware that its fans overwhelmingly reside on the left side of the political spectrum. If PolitiFact asks 50,000 liberals what statements to check, guess what? Requests to check claims about Fast and Furious won't be as popular as, say, requests to check Sarah Palin.
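To see how thoroughly non-random selection can poison a ratings tally, consider a rough back-of-the-envelope simulation. Everything in it is invented for illustration (the statement counts, the falsehood rates, the "suspicion" probabilities); it is not a model of PolitiFact's actual output. It simply shows what happens when checkers become more "curious" about one party than the other.

```python
import random

random.seed(0)

# Hypothetical numbers for illustration only -- not PolitiFact data.
STATEMENTS_PER_PARTY = 10_000
FALSE_RATE = 0.30  # both parties make false statements at the SAME rate


def make_statements(n, false_rate=FALSE_RATE):
    """Generate n statements, each marked 'true' or 'false'."""
    return ["false" if random.random() < false_rate else "true" for _ in range(n)]


def report_card(statements, suspicion, baseline=0.05):
    """Tally a 'report card' from a non-random sample.

    Editors pick up a false statement with probability `suspicion` and a
    true statement with probability `baseline`. A higher suspicion for one
    party models editors who are more 'curious' about that side.
    """
    sample = [s for s in statements
              if random.random() < (suspicion if s == "false" else baseline)]
    return sample.count("true"), sample.count("false")


party_a = make_statements(STATEMENTS_PER_PARTY)
party_b = make_statements(STATEMENTS_PER_PARTY)

# Editors scrutinize Party B twice as eagerly as Party A.
a_true, a_false = report_card(party_a, suspicion=0.10)
b_true, b_false = report_card(party_b, suspicion=0.20)

print(f"Party A report card: {a_true} True, {a_false} False")
print(f"Party B report card: {b_true} True, {b_false} False")
```

In that run, Party B racks up roughly twice as many False ratings as Party A despite identical underlying honesty. The tally faithfully records whom the editors scrutinized, not who told the truth.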
It's also important to consider the source of the statement being rated. For example, when Barack Obama claimed that preventive health care is an overall cost saver and Republican David Brooks wrote a column explaining why Obama was wrong, PolitiFact gave the True rating to Brooks. That spares Obama a demerit on his own report card* while granting Republicans an easy True. Another example of this source selection shows up in the rating about $16 muffins for a Department of Justice meeting. The claim appeared in an official Inspector General report and was repeated by several media outlets, including the New York Times and PolitiFact partners NPR and ABC News, yet PolitiFact hung a False rating around Bill O'Reilly's neck. PolitiFact refrained from judging the nominally liberal outlets (and the original source of the claim) while burdening O'Reilly with a negative mark in his file.
One of the most overlooked problems with analyzing a tally of the ratings is the inconsistent application of standards PolitiFact employs from one fact check to the next. Even if one were to assume PolitiFact used a random selection process and assigned its ratings to the appropriate sources, we still have a problem when subjects aren't checked according to the same set of standards. For example, PolitiFact rated Obama "Half True" on a claim about the rates certain taxpayers pay, and his claim earned that rating only when PolitiFact counted the amount employers contribute toward their employees' tax burden. Almost simultaneously, PolitiFact labeled a similar claim from Herman Cain "Mostly False" specifically because he used the same formula. A cursory analysis of total ratings fails to detect this disparate treatment. Given such flexible guidelines, the "report cards" hardly look like a credible evaluation.
Ultimately, the sum of PolitiFact's ratings tells us far more about what interests PolitiFact's editors and readers than it does about the credibility of any individual politician. With so many flaws in the process, and such a minute sample drawn from a vast ocean of statements, conclusions about a subject's overall honesty should be considered dubious. We recognize that this flawed process will undoubtedly affect liberal politicians as well. However, it's our contention that the personal bias of the editors and writers will harm those on the right side of the aisle more often and more dramatically than those on the left.
Adding up PolitiFact's ratings in an attempt to analyze a person's or party's credibility produces misleading results. Until PolitiFact includes a check for selection bias and establishes and adheres to transparent, objective standards, an analysis based on its cumulative work is ill-founded at best and grossly inaccurate at worst.
*PolitiFact did eventually give Obama a False after he repeatedly made the claim, but it still spared several high-profile Democrats who made the same statement. Unlike perennial fact-check favorites such as the jobs created by the stimulus or Obama's birth certificate, this issue apparently isn't one PolitiFact thinks is worth revisiting.