The 1980s were the decade of serial killers. Serial killers were in the news, Hollywood movies, bestselling novels like Thomas Harris's Red Dragon and The Silence of the Lambs, and true crime books. The serial killer craze overlapped with fears of missing children, hysteria about the Dungeons and Dragons role-playing game, and tales of Satanic Ritual Abuse, themes exploited in Netflix's hit 1980s nostalgia show Stranger Things.
The craze started in the 1970s with high-profile serial killer cases such as Ted Bundy and John Wayne Gacy and continued into the early 1990s with the hit movie The Silence of the Lambs starring Jodie Foster (the fifth-highest-grossing movie of 1991) and the notorious Jeffrey Dahmer cannibal serial killer case.
In the last three decades serial killers have waned in the news and public consciousness. The 1980s are now often remembered for the Satanic Panic, a supposedly entirely unfounded hysteria about Satanism, ritual abuse in day care centers, and Satanic serial killers. Henry Lee Lucas's once widely repeated claims to have killed six hundred people as a contract killer for a Satanic cult have been largely dismissed as fantasy.
In 1983 a US Senate committee and the FBI produced an estimate of about 3,600 Americans possibly murdered by serial killers in 1981, rounded up to 4,000-5,000 in some news media reports. In 1984 the FBI rolled the number back to about ten percent of all murders, or about 540 Americans possibly murdered by serial killers in a year. Scholars challenged the FBI and government figures as grossly exaggerating the number of murders by serial killers. (See Using Murder: The Social Construction of Serial Homicide by Philip Jenkins for an in-depth discussion of the statistics.)
Was the 1980s Serial Killer Wave Real?
Was there even a wave of serial killer cases, or was the serial killer wave the product of news coverage and clever marketing by the FBI's Behavioral Sciences Unit? Have serial killer cases declined in frequency and, if so, why?
Wikipedia’s List of serial killers in the United States purports to be a comprehensive list of known serial killers in the United States with names, dates, and numbers of proven and possible victims from the 1700s to the present (data table downloaded on August 21, 2022).
Indeed this Wikipedia database shows a dramatic surge in serial killers peaking in the early to mid 1980s and declining back to historically very low levels from the 1990s through the present. The surge is clear in the full list of cases, in serial killers with five or more known victims, and even in the rare serial killers with ten or more known victims such as Jeffrey Dahmer.
WHY?
The serial killer surge occurred during a period of almost no executions, with a complete cessation for four years after the 1972 Furman versus Georgia Supreme Court case. As executions ramped back up in the 1990s, serial killer cases dropped.
This plot shows executions of serial killers between 1950 and 2020, with about 115 executed out of a total of 553 from the Wikipedia list. These are particularly heinous crimes often involving sadistic torture of victims. Serial killers face a much higher probability of execution than the typical murderer.
One can see executions had dropped to almost zero by 1966, well before the 1972 Furman versus Georgia Supreme Court case that completely stopped executions until 1976. Many of the few executions during the 1970s were of murderers who wanted, or claimed to want, to be executed, such as Utah's Gary Gilmore. Executions began to rise slowly in 1982, reaching a post-1972 peak in 1998.
The green line shows the overall United States homicide rate in homicides per million Americans. The homicide rate is usually quoted as homicides per 100,000 (100K); per million is used here to make the scale comparable to the other key data series in the plot. The overall murder rate doubled during this period, dropping back to historical levels in the 1990s as executions rose back to late-1950s and early-1960s levels.
The light blue bars show the proven victims of named serial killers from the Wikipedia list. The shorter dark blue bars show the number of active named serial killers. The list often gives a range of years when the killer was active, e.g. 1982-1986. For simplicity, the proven victims and the serial killer are assigned to the midpoint of the range, e.g. 1984. The number of active serial killers by year, defined in this way, is inversely correlated with the execution rate: more executions, fewer serial killers.
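For readers who want to reproduce this binning, here is a minimal sketch in Python, assuming a hypothetical CSV export of the Wikipedia table with column names "Years active" and "Proven victims" (the actual download may differ):

```python
# Minimal sketch, assuming a CSV export of the Wikipedia table with
# hypothetical column names "Years active" and "Proven victims".
import pandas as pd

def midpoint_year(years_active) -> int:
    """Midpoint of a range like '1982-1986'; a single year maps to itself."""
    parts = str(years_active).split("-")
    start, end = int(parts[0]), int(parts[-1])
    return (start + end) // 2

killers = pd.read_csv("us_serial_killers.csv")  # hypothetical file name
killers["year"] = killers["Years active"].apply(midpoint_year)

# Number of active killers and proven victims assigned to each midpoint year.
active_per_year = killers.groupby("year").size()
victims_per_year = killers.groupby("year")["Proven victims"].sum()
```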
The yellow bars show the number of victims of mass shootings from Wikipedia’s List of mass shootings in the United States (data tables downloaded on August 24, 2022). The number of mass shooting victims has climbed as the number of executions has dropped since the 1998 peak while the number of serial killer victims has remained low.
The plot below shows the US named serial killer proven victims per year versus the execution rate, executions per 150 million Americans. We can see that the maximum number of proven victims drops exponentially with the execution rate. At low execution rates, there is significant unexplained variation in the number, suggesting other factors are at play when executions are rare or nonexistent.
The plot below shows the US murder victims per 100,000 Americans per year versus the execution rate, again executions per 150 million Americans. We can see the murder rate drops exponentially with the execution rate. Again, at low execution rates, there is significant unexplained variation, indicating factors other than the execution rate play a role when executions are rare or nonexistent. R**2, known as "R squared," is the coefficient of determination: roughly the fraction of variation in the data explained by the model.
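The exponential fits and the R squared statistic can be reproduced along these lines. This is only a sketch: the x and y arrays below are made-up placeholders, not the actual series behind the plots:

```python
# Sketch of the exponential fit and the R squared computation.
# The x and y arrays are placeholders, not the series behind the plots.
import numpy as np
from scipy.optimize import curve_fit

x = np.array([0.0, 5.0, 10.0, 20.0, 30.0, 40.0])   # executions per 150M
y = np.array([9.5, 8.0, 6.5, 5.5, 5.0, 4.8])       # murders per 100K per year

def model(x, a, b):
    """Murder rate decaying exponentially with the execution rate."""
    return a * np.exp(-b * x)

params, _ = curve_fit(model, x, y, p0=(9.5, 0.02))
y_hat = model(x, *params)

# R squared: one minus (residual variation / total variation).
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - np.mean(y)) ** 2)
print("R squared =", 1.0 - ss_res / ss_tot)
```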
There appears to be a strong relationship between the execution rate and the number of murders, both the general murder rate and the serial killer murder rate. Both popular news and academic articles often profess mystification as to the reasons for the sharp late-1960s rise in murders compared to historical 1950s and early-1960s levels, and similarly for the dramatic drop in murders in the 1990s. It is true that the execution rate is probably highly correlated with other "tough on crime" measures over time.
Alternative Theories
Nonetheless, there are several prominent attempts to explain the sharp drop in murders in the 1990s without crediting any “tough on crime” policies, let alone the execution rate.
One attempt to explain the drop in the general murder rate in the 1990s is the removal of lead paint and lead piping in poor, often Black, areas. Yet lead exposure would probably have been an even more serious problem in the 1950s and early 1960s, when murder rates and serial murder rates were quite low. Moreover, most named serial killers are white, not Black: contrary to old claims from the FBI, some serial killers are Black, but white serial killers still dominate the statistics. Yet serial killer murders declined as well in the 1990s as the execution rate climbed to its 1998 peak.
Here, for example, is a 2004 article by Steven D. Levitt ruling out increased use of capital punishment as the cause of the 1990s decline in murders. He claims:
"given the rarity with which executions are carried out in this country and the long delays in doing so, a rational criminal should not be deterred by the threat of execution" (emphasis added)
This is a rather odd statement for a scientific data analysis, where one ought to look at empirical data. Who said murderers are rational? Most serial killers are not rational as normally defined; they are often found legally sane, which is a narrower concept than common notions of rationality. Indeed, the historical data suggests relatively low levels of execution have deterred both highly irrational serial killers and presumably somewhat more rational ordinary murderers.
The Conspiracy Question
The stereotype of serial killers is that they are lone nuts. However, the Wikipedia list of US serial killers actually identifies, in its Notes section, at least 57 serial killers active between 1950 and 2020 with an accomplice or accomplices, out of 553 active serial killers, for a total of 89 killers once the accomplices are counted. That is about 10.3% (57/553) to 16.1% (89/553) of the serial killers, depending on how one counts accomplices. In common usage, these murders are the work of a conspiracy: two or more perpetrators.
The list appears to identify only accomplices convicted in court cases. For example, there was strong forensic and eyewitness evidence that Randy Kraft, one of the three seemingly independent California “Freeway Killers” of the 70s and 80s, had at least one accomplice. Police suspected one of his roommates but were unable to find enough evidence or secure a confession. The Wikipedia list does not mention any accomplices for Randy Kraft. Thus, the list probably understates the number of cases with actual accomplices.
It is true that many of these conspiracy cases are pairs, such as the so-called "Toolbox Killers" Lawrence Bittaker and Roy Norris. Nonetheless, there are several cases of three or more "serial killers" working together. Another California "Freeway Killer," William Bonin, had an astonishing four accomplices, all convicted or confessed. The Briley Brothers were three brothers plus an accomplice, Duncan Eric Meekins: another total of four. Dean Corll had at least two accomplices, both convicted: Elmer Wayne Henley and David Brooks. The so-called "Ripper Crew," a Satanic cult in Chicago, included at least four (4) convicted members: Robin Gecht, Andrew Kokoraleis, Thomas Kokoraleis, and Edward Spreitzer. Four people (Manuel Moore, Larry Green, Jessie Lee Cooks, and J. C. X. Simon) were convicted of the so-called Zebra Murders. That is five cases and twenty (20) killers out of 553 active identified serial killers between 1950 and 2020 with three or more clearly identified (convicted or confessed) conspirators: about one to four percent, depending on how one counts the cases, accomplices, and serial killers.
Only small serial murder conspiracies have been demonstrated in court or by rigorous forensic evidence. There is, however, a popular literature alleging that some, or even a large fraction, of serial killer cases are the work of a larger conspiracy or conspiracies, such as Dave McGowan's Programmed to Kill (no relation) and Maury Terry's The Ultimate Evil. These works blame the serial murders on Satanic cults, neo-Nazis, elite pedophile networks, and CIA MK-ULTRA-like mind control programs, often combined into a single super-conspiracy and often overlapping with the Satanic Ritual Abuse allegations of the 1980s, which in fact continue to the present.
It is more difficult to identify and convict murderers in larger conspiracies such as street gangs, the "Mafia," and other higher-level organized crime. Many unsolved murders, often in inner city Black neighborhoods, are attributed to street gang violence. Larger conspiracies are more effective at intimidating witnesses and corrupting investigations than lone killers or pairs of killers. Larger conspiracies can also be long-lived and technically sophisticated: better at destroying forensic evidence, disposing of bodies, and so on. Proving that a gang leader or organized crime boss ordered a murder can be difficult or impossible.
The serial killer super-conspiracy theories invoke confessions by some serial killers, such as "Son of Sam" David Berkowitz, claiming to have been part of a larger conspiracy, as well as various anomalies in some serial killer cases, some quite odd and suspicious. For example, the home of serial killer Bob Berdella, who owned Bob's Bizarre Bazaar, a boutique that sold occult artifacts, was purchased by Kansas City multi-millionaire Delbert Dunmire, a former bank robber, who eventually destroyed the home and presumably any remaining evidence.
Cary Stayner, convicted of killing four women near Yosemite National Park in California, is the older brother of Steven Stayner, a high-profile victim of abduction by child molester Kenneth Parnell and the subject of national news stories and later a TV mini-series.
Police largely failed to investigate a series of disappearances of teenage boys in Dean Corll's neighborhood in Houston, Texas, many from the same junior high school, despite pleas from parents, some of whom hired private investigators and posted flyers throughout the neighborhood. The disappearances were solved only when the boys were found buried in Corll's rented boathouse after accomplice Elmer Wayne Henley killed Corll and called the police.
It is unclear how to evaluate such anomalies. Serial killers are quite unusual, and the cases frequently attract unusually high levels of publicity. The cases often overlap with prostitution and other illegal activities, as well as legal but socially taboo activities such as homosexuality or occult practices. In some cases the police may have been paid off to "look the other way," or may even have been involved in these activities, which may explain some instances of remarkably inept policing.
Some of these theories emphasize the military background of some serial killers, suggesting they were specially trained or even brainwashed by MK-ULTRA-like mind control programs during military service. Reviewing the fifty-seven (57) entries with identified accomplices active from 1950 to 2020 in the Wikipedia list (a total of 89 killers including the accomplices), only nine (9, or 15.8%) appear to have served in the US military, including: Doug Clark, Gary Lewingdon, John Allen Muhammad, Leonard Lake, Manuel Pardo, Roy Norris, and William Bonin. The FiveThirtyEight statistics web site published an article in 2015 estimating that about 13.4% of US males have served in the US military.
Since most serial killers are men, there is little evidence that US military veterans are over- or under-represented among serial killers, at least among those with identified accomplices. One might expect serial killers in a super-conspiracy to be over-represented among serial killers with identified accomplices.
Relevant to the causation of the serial killer wave and the more recent mass shooting wave, some of these theories, notably Dave McGowan's Programmed to Kill, argue the serial killer cases were manufactured in part to frighten the US public into embracing oppressive "tough on crime" policies that increase the power of the CIA, FBI, and other police and security agencies. On this view, there is no deterrent effect from executions, but rather a wave of "false flag" operations intended to undercut more liberal policies such as reducing or eliminating use of the death penalty.
Conclusion
The Eighties serial killer wave was real. It was not wholly a product of media hype or manipulated statistics from the FBI Behavioral Sciences Unit or other official sources, despite both government and media exaggerations such as the Henry Lee Lucas case and the very high claimed numbers of Americans killed by serial killers during the early 1980s.
The major contributing factor to the wave was probably the dramatic drop in the execution rate and/or associated “tough on crime” measures in the mid 1960s.
The evidence for a super-conspiracy behind the serial killer wave, such as that proposed by the late Dave McGowan in Programmed to Kill, is quite weak but not non-existent. The anomalies cited in such theories can probably be explained by the unusual nature of the crimes and perpetrators, the extreme levels of publicity, unidentified accomplices (cases with identified accomplices, somewhat larger small conspiracies, already comprise 10-16% of the list), and overlaps with illegal activities such as prostitution and police corruption.
A similar rise in murders, though smaller (a factor of two), occurred in general US murders, which are generally less horrific and less likely to receive the death penalty than serial killings.
The significant variation in murder rates, both general US and serial killer, at lower execution rates indicates other factors come into play as well when executions are rare or do not occur.
We may now be seeing a surge of mass shootings as the execution rate has declined since 1998, much as the surge of serial killers in the 1960s and 1970s followed the near cessation of executions that culminated in a total cessation for four years after the 1972 Furman versus Georgia Supreme Court decision.
Note that these conclusions are not an endorsement of capital punishment, nor do they address a range of other issues regarding capital punishment such as wrongful convictions, racial and other discrimination in the application of capital punishment, and the relative effectiveness of life imprisonment without possibility of parole.
(C) 2022 by John F. McGowan, Ph.D.
About Me
John F. McGowan, Ph.D. solves problems using mathematics and mathematical software, including developing gesture recognition for touch devices, video compression and speech recognition technologies. He has extensive experience developing software in C, C++, MATLAB, Python, Visual Basic and many other programming languages. He has been a Visiting Scholar at HP Labs developing computer vision algorithms and software for mobile devices. He has worked as a contractor at NASA Ames Research Center involved in the research and development of image and video processing algorithms and technology. He has published articles on the origin and evolution of life, the exploration of Mars (anticipating the discovery of methane on Mars), and cheap access to space. He has a Ph.D. in physics from the University of Illinois at Urbana-Champaign and a B.S. in physics from the California Institute of Technology (Caltech).
Six-minute video on the false positive problems with medical diagnostic tests, especially when used alone without symptoms.
About Us:
Main Web Site: https://mathematical-software.com/
Censored Search: https://censored-search.com/ (a search engine for censored Internet content; find the answers to your problems censored by advertisers and other powerful interests!)
Subscribe to our free Weekly Newsletter for articles and videos on practical mathematics, Internet Censorship, ways to fight back against censorship, and other topics by sending an email to: subscribe [at] mathematical-software.com
Avoid Internet Censorship by Subscribing to Our RSS News Feed: http://wordpress.jmcgowan.com/wp/feed/
This article takes a first look at historical Presidential approval ratings (approval polls from Gallup and other polling services) from Harry Truman through Joe Biden using our math recognition and automated model fitting technology. Our Math Recognition (MathRec) engine has a large, expanding database of known mathematics and uses AI and pattern recognition technology to identify likely candidate mathematical models for data such as the Presidential Approval ratings data. It then automatically fits these models to the data and provides a ranked list of models ordered by goodness of fit, usually the coefficient of determination or “R Squared” metric. It automates, speeds up, and increases the accuracy of data analysis — finding actionable predictive models for data.
The plots show a model (the blue lines) which "predicts" the approval rating based on the unemployment rate (UNRATE), the real, inflation-adjusted value of gold, and the time after the first inauguration of a US President, the so-called honeymoon period. The model "explains" about forty-three percent (43%) of the variation in the approval ratings; this is the "R Squared" or coefficient of determination for the model. The model has a correlation of about sixty-six percent (0.66) with the actual Presidential approval ratings. Note that a model can have a high correlation with data and yet a small coefficient of determination.
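A small synthetic example illustrates this last point: a model's predictions can be perfectly correlated with the data yet explain little of its variance if they are biased or mis-scaled:

```python
# Synthetic illustration: perfectly correlated but mis-scaled predictions.
import numpy as np

rng = np.random.default_rng(0)
actual = rng.normal(50.0, 10.0, size=200)   # e.g., approval ratings
predicted = 0.3 * actual + 30.0             # a linear transform of the data

correlation = np.corrcoef(actual, predicted)[0, 1]   # exactly 1.0

ss_res = np.sum((actual - predicted) ** 2)
ss_tot = np.sum((actual - np.mean(actual)) ** 2)
r_squared = 1.0 - ss_res / ss_tot

# correlation is 1.0, while R squared is far below 1 for these numbers.
print(correlation, r_squared)
```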
One might expect US Presidential approval ratings to decline with increasing unemployment and/or an increase in the real value of gold reflecting uncertainty and anxiety over the economy. It is generally thought that new Presidents experience a honeymoon period after first taking office. This seems supported by the historical data, suggesting a honeymoon of about six months — with the possible exception of President Trump in 2017.
The model does not (yet) capture a number of notable historical events that appear to have significantly boosted or reduced the US Presidential approval ratings: the Cuban Missile crisis, the Iran Hostage Crisis, the September 11 attacks, the Watergate scandal, and several others. Public response to dramatic events such as these is variable and hard to predict or model. The public often seems to rally around the President at first and during the early stages of a war, but support may decline sharply as a war drags on and/or serious questions arise regarding the war.
There are, of course, a number of caveats on the data. Presidential approval polls empirically vary by several percentage points today between different polling services. There are several historical cases where pre-election polling predictions were grossly in error including the 2016 US Presidential election. A number of polls called the Dewey-Truman race in 1948 wrong, giving rise to the famous photo of President Truman holding up a copy of the Chicago Tribune announcing Dewey’s election victory.
The input data is from the Federal Reserve Economic Data (FRED) web site maintained by the Federal Reserve Bank of St. Louis, much of it originally from government agencies, such as the unemployment data from the Bureau of Labor Statistics. There is a history of criticism of these numbers: unemployment and inflation rate numbers often seem lower than my everyday experience, and a number of economists and others have questioned the validity of federal unemployment, inflation and price level, and other economic numbers.
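For reference, here is a minimal sketch of pulling one of the input series (UNRATE) from FRED, assuming the pandas-datareader Python package is installed:

```python
# Minimal sketch: fetch the unemployment rate (UNRATE) from FRED.
# Assumes the pandas-datareader package and an internet connection.
import datetime
from pandas_datareader import data as pdr

start = datetime.datetime(1948, 1, 1)
end = datetime.datetime(2022, 8, 1)

unrate = pdr.DataReader("UNRATE", "fred", start, end)  # monthly, percent
print(unrate.tail())
```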
(C) 2022 by John F. McGowan, Ph.D.
Short video discussing the results of analyzing President Biden's declining approval ratings and the possible effect of the COVID pandemic and the Ukraine crisis on the approval ratings.
A detailed longer explanation of the analysis discussed can be found in the previous video “How to Analyze Simple Data Using Python” available on all of our video channels.
(C) 2022 by John F. McGowan, Ph.D.
This is the legal disclaimer that appears when starting the US Centers for Disease Control and Prevention (CDC) FluView Interactive application, which purports to report the percentage of deaths per week "due to" pneumonia and influenza (P&I) prior to March 2020 and pneumonia, influenza, and COVID-19 (PIC) since March 2020. (URL: http://gis.cdc.gov/grasp/fluview/mortality.html)
Emphasis is added to key phrases. The NOTES explain the definition and meaning of several technical terms used in the disclaimer.
The disclaimer essentially says, in plain English, that the data (the COVID-19 death counts), which is presented with no estimates of statistical or systematic errors, is provisional and could be entirely wrong. Two sentences in one paragraph appear to contradict one another.
National Center for Health Statistics Mortality Surveillance System
NOTE: The National Center for Health Statistics (NCHS) is a division of the US Centers for Disease Control and Prevention (CDC).
The National Center for Health Statistics (NCHS) collects and disseminates the Nation’s official vital statistics. NCHS collects death certificate data from state vital statistics offices for all deaths occurring in the United States. Pneumonia and/or influenza (P&I) deaths and pneumonia, influenza and/or COVID-19 (PIC) deaths are identified based on ICD-10 multiple cause of death codes.
NOTE: ICD-10 is the International Classification of Diseases, 10th Edition, a medical classification list by the World Health Organization (WHO). "ICD-10 multiple cause of death codes" refers to the multiple "causes of death" listed on death certificates. Many death certificates list several causes of death, such as emphysema, a degenerative and eventually terminal condition, and pneumonia. One cause of death is singled out as the "underlying cause of death" or UCOD. Another is singled out as the "immediate cause of death," which is often not the underlying cause of death. For example, emphysema may be the underlying cause of death while pneumonia, the influenza virus, or the "common cold" is the immediate cause of death.
NCHS Mortality Surveillance System data are presented by the week the death occurred at the national, state, and HHS Region levels, based on the state of residence of the decedent. Data on the percentage of deaths due to P&I or PIC are released one week after the week of death to allow for collection of enough data to produce a stable percentage. States and HHS regions with less than 20% of the expected total deaths (average number of total deaths reported by week during 2008-2012) will be marked as having insufficient data. Not all deaths are reported within a week of death therefore data for earlier weeks are continually revised and the proportion of deaths due to P&I or PIC may increase or decrease as new and updated death certificate data are received by NCHS.
NOTE: Notice the conflict between "to allow for collection of enough data to produce a stable percentage" and "the proportion of deaths due to P&I or PIC may increase or decrease as new and updated death certificate data are received by NCHS." Percentage is a way of expressing a proportion: for example, fifty percent (a percentage) versus one half (the same proportion expressed differently). "Stable" usually means "not changing or fluctuating" (Merriam-Webster) when used in this way.
The COVID-19 death counts reported by NCHS and presented here are provisional and will not match counts in other sources, such as media reports or numbers from county health departments. COVID-19 deaths may be classified or defined differently in various reporting and surveillance systems. Death counts reported by NCHS include deaths that have COVID-19 listed as a cause of death and may include laboratory confirmed COVID-19 deaths and clinically confirmed COVID-19 deaths. Provisional death counts reported by NCHS track approximately 1-2 weeks behind other published data sources on the number of COVID-19 deaths in the U.S. These reasons may partly account for differences between NCHS reported death counts and death counts reported in other sources.
NOTE: The language "a cause of death" likely means that COVID-19 (or pneumonia or influenza in pre-2020 figures) is one of the causes of death listed on the death certificate, not necessarily the underlying cause of death (UCOD). Remember, many death certificates have multiple causes of death, one of which is identified as the underlying cause of death (UCOD). Note also that the disclaimer specifically states that NCHS numbers "will not match ... numbers from county health departments." County health departments are presumably official, primary sources of death data with qualified staff, including medical examiners.
In previous seasons, the NCHS surveillance data were used to calculate the percent of all deaths occurring each week that had pneumonia and/or influenza (P&I) listed as a cause of death. Because of the ongoing COVID-19 pandemic, COVID-19 coded deaths were added to P&I to create the PIC (pneumonia, influenza, and/or COVID-19) classification. PIC includes all deaths with pneumonia, influenza, and/or COVID-19 listed on the death certificate. Because many influenza deaths and many COVID-19 deaths have pneumonia included on the death certificate, P&I no longer measures the impact of influenza in the same way that it has in the past. This is because the proportion of pneumonia deaths associated with influenza is now influenced by COVID-19-related pneumonia. The PIC percentage and the number of influenza and number of COVID-19 deaths will be presented in order to help better understand the impact of these viruses on mortality and the relative contribution of each virus to PIC mortality.
The PIC percentages are compared to a seasonal baseline of P&I deaths that is calculated using a periodic regression model that incorporates a robust regression procedure applied to data from the previous five years. An increase of 1.645 standard deviations above the seasonal baseline of P&I deaths is considered the “epidemic threshold,” i.e., the point at which the observed proportion of deaths is significantly higher than would be expected at that time of the year in the absence of substantial influenza, and now COVID-related mortality. Baselines and thresholds are calculated at the national and regional level and by age groups.
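NOTE: The "epidemic threshold" construction can be illustrated with a simplified sketch. The CDC uses a robust periodic regression procedure; the ordinary least-squares one-harmonic fit below, on synthetic weekly percentages, is only meant to show the baseline-plus-1.645-standard-deviations idea:

```python
# Simplified sketch of a seasonal baseline and epidemic threshold.
# The CDC uses a robust periodic regression; this ordinary least-squares
# one-harmonic fit on synthetic weekly percentages only shows the idea.
import numpy as np

weeks = np.arange(5 * 52)  # five years of weekly data
rng = np.random.default_rng(1)
pct = 6.5 + np.cos(2 * np.pi * weeks / 52) + rng.normal(0.0, 0.3, weeks.size)

# Design matrix for a one-harmonic periodic regression.
X = np.column_stack([
    np.ones(weeks.size),
    np.cos(2 * np.pi * weeks / 52),
    np.sin(2 * np.pi * weeks / 52),
])
coef, *_ = np.linalg.lstsq(X, pct, rcond=None)
baseline = X @ coef

# Threshold: 1.645 standard deviations above the seasonal baseline
# (1.645 is the one-sided 95th percentile of a normal distribution).
sigma = np.std(pct - baseline)
threshold = baseline + 1.645 * sigma
```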
* The 10 U.S. Department of Health and Human Services regions include the following jurisdictions. Region 1: Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, and Vermont; Region 2: New Jersey, New York, and New York City; Region 3: Delaware, District of Columbia, Maryland, Pennsylvania, Virginia, and West Virginia; Region 4: Alabama, Florida, Georgia, Kentucky, Mississippi, North Carolina, South Carolina, and Tennessee; Region 5: Illinois, Indiana, Michigan, Minnesota, Ohio, and Wisconsin; Region 6: Arkansas, Louisiana, New Mexico, Oklahoma, and Texas; Region 7: Iowa, Kansas, Missouri, and Nebraska; Region 8: Colorado, Montana, North Dakota, South Dakota, Utah, and Wyoming; Region 9: Arizona, California, Hawaii, and Nevada; Region 10: Alaska, Idaho, Oregon, and Washington.
(C) 2021 by John F. McGowan, Ph.D.
This is a summary (below) of our lengthy (about 13,000-word) paper on the many issues with the CDC's pneumonia, influenza, and COVID-19 (PIC) death numbers. The summary is about 1,000 words long (a 5-10 minute read) and covers most of our key findings.
John F. McGowan, Ph.D., Tam Hunt, Josh Mitteldorf, PhD. Improving CDC Data Practices Recommendations for Improving the United States Centers for Disease Control (CDC) Data Practices for Pneumonia, Influenza, and COVID-19 (v 1.1). Authorea. November 29, 2021. DOI:10.22541/au.163822197.79126460/v1 (https://doi.org/10.22541/au.163822197.79126460/v1)
A number of CDC data presentation and statistical practices since the start of the COVID-19 pandemic in early 2020 have not followed common scientific and engineering practice. Several problems with data presentation and analyses for pneumonia and influenza death numbers, which have been merged with COVID-19 death numbers on the FluView web site, predate the pandemic.
Before the pandemic (March 2020), the non-standard data presentation and statistical practices appear to increase the number of deaths attributed to the influenza virus and to imply the death counts are certain, whereas substantial uncertainty exists due to uncertainty in the assignment of the cause of death and other reasons. Since the pandemic, these practices appear to do the same for SARS-CoV-2 and COVID-19.
Remarkably, the CDC had at least three (3) different numbers for deaths attributed to pneumonia and influenza before 2020: the leading causes of death report count with about two (2) percent of deaths (about 55,000) per year attributed to influenza and pneumonia, the influenza virus deaths model with about 55,000 deaths per year attributed specifically to the influenza virus, and the FluView web site count with about 6-8 percent of deaths (about 188,000) per year attributed to pneumonia and influenza.
The FluView number differs from the other two death numbers by a factor of OVER THREE (about 188,000 versus about 55,000, a ratio of roughly 3.4). The probable reason for this difference is that, according to the FluView technical notes, FluView counts deaths where pneumonia or influenza is listed as "a cause of death," whereas the leading causes of death report, according to its technical notes, counts only deaths where pneumonia or influenza is listed as "the underlying cause of death." This probably reflects a large uncertainty in the assignment of the cause of death in respiratory illness cases; indeed, the underlying cause of death may be ill-defined in many cases.
The CDC's excess deaths estimates on its excess deaths web site do not report any standard goodness of fit statistics, notably the coefficient of determination, often known as "R squared," or the "chi squared" goodness of fit statistic. Our analysis shows that different models with the same goodness of fit statistics give different estimates of the number of excess deaths, varying by up to 200,000 deaths in 2020. The CDC web site does not report this systematic modeling error.
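A toy example of this systematic modeling error: two baseline models can fit the pre-pandemic years about equally well yet imply quite different excess-death counts when extrapolated. All numbers below are synthetic, not CDC data:

```python
# Toy demonstration: two baselines fit the pre-pandemic years about
# equally well but imply different excess-death counts. Synthetic data.
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(10)                                   # years 2010..2019
deaths = 2_600_000 + 30_000 * t + rng.normal(0, 15_000, t.size)
observed_2020 = 3_350_000                           # made-up 2020 total

def r_squared(y, y_hat):
    return 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)

for name, degree in [("linear", 1), ("quadratic", 2)]:
    coefs = np.polyfit(t, deaths, degree)
    fit = np.polyval(coefs, t)                      # in-sample baseline
    excess = observed_2020 - np.polyval(coefs, 10)  # extrapolate to 2020
    print(name, round(r_squared(deaths, fit), 3), round(excess))
```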
The CDC appears to have chosen a set of parameters for the Noufaily/Farrington algorithm used to estimate excess deaths that gives a lower "R Squared" value for goodness of fit than other choices and a HIGHER ESTIMATE of excess deaths, whereas common scientific and engineering practice would be to use the models with the best goodness of fit statistics, i.e. the "R Squared" closest to 1.0.
The Noufaily/Farrington algorithm is an empirical trend detection and extrapolation model that is theoretically incapable of accurately modeling the aging "baby boom" population, which would be expected to produce "excess deaths" in recent years. Nor can it explain the puzzling near stop in the increase in deaths per year reported in the immediate pre-pandemic years 2017-2019, despite the aging population.
The CDC does not publish (as of December 2021) years of life lost (YLL) estimates that include increases in suicides, homicides, and other adverse effects of the lockdowns, nor systematic modeling errors on the YLL estimates. YLL can illustrate the difference between a disease that largely kills those nearing death anyway and a disease that easily kills the healthy.
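A toy illustration of why YLL matters; the remaining-expectancy rule below is a crude stand-in for a real actuarial life table:

```python
# Toy years-of-life-lost (YLL) comparison. Real YLL calculations use
# actuarial life tables; this crude rule only illustrates the idea.
def remaining_expectancy(age):
    """Rough years of life left at a given age (illustrative only)."""
    return max(2.0, 82.0 - age)

elderly_deaths = [85, 88, 90]   # three deaths near the end of life
young_deaths = [30, 35, 40]     # three deaths among the healthy young

yll_elderly = sum(remaining_expectancy(a) for a in elderly_deaths)
yll_young = sum(remaining_expectancy(a) for a in young_deaths)

# Same death count (3 each), very different YLL: 6 versus 141 years.
print(yll_elderly, yll_young)
```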
The CDC issued a COVID death certificate guidance document in April 2020 that appears to change the standards for assigning the underlying cause of death (UCOD) from pre-pandemic practice for pneumonia and influenza, making COVID-19 the underlying cause of death in the many cases where the person who died had serious pre-existing conditions such as chronic bronchitis, emphysema, or heart failure: the deaths counted in FluView but not in the leading causes of death report. There does not appear to have been any public comment on this guidance document to date.
In general the CDC does not report statistical errors, systematic errors, or estimates of biases in pneumonia, influenza, and COVID-19 death numbers. They do not report any monitoring of the effect of their guidance documents or other directives on the assignment of the cause of death by doctors, medical examiners, and others.
These issues are sometimes shared with other government agencies such as the US Social Security Administration (SSA) and US Census Bureau that work closely with the CDC.
Death counts for both individual causes and "all cause" deaths are frequently reported as precise to the last digit, without any statistical or systematic errors, despite both known and unknown uncertainties in counting deaths: missing persons, unreported deaths due to deceased-payee fraud, the roughly 1,000 living Americans incorrectly added to the government's Death Master File (DMF) each month for unknown reasons, and considerable uncertainties in the assignment of the underlying cause of death (UCOD) by coroners and doctors.
Similarly, raw counts, adjusted counts, and estimates, often based on incompletely documented computer mathematical models, are often not clearly identified as such. The Death Master File, with the names and dates of death of deceased persons, is exempt from the Freedom of Information Act (FOIA) and unavailable to the general public, independent researchers, and even other government agencies such as the IRS. This confidentiality makes independent verification of many CDC numbers, such as the excess deaths numbers tracked during the COVID-19 pandemic, all but impossible.
This omission of common scientific and engineering practices raises questions about the accuracy of the CDC’s data, conclusions, and public health policies in a number of important areas, including the COVID-19 pandemic.
In sum, these non-standard practices appear to increase the number of deaths attributed to the influenza virus before the pandemic, and to SARS-CoV-2 and COVID-19 since, while implying a certainty in the death counts that the substantial uncertainty in assigning causes of death does not support.
END OF SUMMARY
(C) 2021 by John F. McGowan, Ph.D.