[Video] Operation Warp Speed: The New Manhattan Project That Wasn’t

Uncensored Video Links: NewTube Odysee Rumble BitChute

A roughly twenty-minute (20) video on Operation Warp Speed and how the extremely and unusually successful World War II Manhattan Project, which produced the first atomic bombs, fostered grossly unrealistic expectations for the rapid development of COVID-19 vaccines. Discusses the lessons from the disappointing results of Operation Warp Speed and the frequent failure of other “New Manhattan Projects” (e.g. the War on Cancer) since World War II.

Related Article: https://mathblog.com/the-manhattan-project-considered-as-a-fluke/

About Us:

Main Web Site: https://mathematical-software.com/
Censored Search: https://censored-search.com/
A search engine for censored Internet content. Find the answers to your problems censored by advertisers and other powerful interests!

Subscribe to our free Weekly Newsletter for articles and videos on practical mathematics, Internet Censorship, ways to fight back against censorship, and other topics by sending an email to: subscribe [at] mathematical-software.com

Avoid Internet Censorship by Subscribing to Our RSS News Feed: http://wordpress.jmcgowan.com/wp/feed/

Legal Disclaimers: http://wordpress.jmcgowan.com/wp/legal/

Support Us:
PATREON: https://www.patreon.com/mathsoft
SubscribeStar: https://www.subscribestar.com/mathsoft

Rumble (Video): https://rumble.com/c/mathsoft

BitChute (Video): https://www.bitchute.com/channel/HGgoa2H3WDac/
Brighteon (Video): https://www.brighteon.com/channels/mathsoft
Odysee (Video): https://odysee.com/@MathematicalSoftware:5
NewTube (Video): https://newtube.app/user/mathsoft
Minds (Video): https://www.minds.com/math_methods/
Archive (Video): https://archive.org/details/@mathsoft

(C) 2022 by John F. McGowan, Ph.D.

About Me

John F. McGowan, Ph.D. solves problems using mathematics and mathematical software, including developing gesture recognition for touch devices, video compression and speech recognition technologies. He has extensive experience developing software in C, C++, MATLAB, Python, Visual Basic and many other programming languages. He has been a Visiting Scholar at HP Labs developing computer vision algorithms and software for mobile devices. He has worked as a contractor at NASA Ames Research Center involved in the research and development of image and video processing algorithms and technology. He has published articles on the origin and evolution of life, the exploration of Mars (anticipating the discovery of methane on Mars), and cheap access to space. He has a Ph.D. in physics from the University of Illinois at Urbana-Champaign and a B.S. in physics from the California Institute of Technology (Caltech).

[Article] A First Look at Presidential Approval Ratings with Math Recognition

This article takes a first look at historical Presidential approval ratings (approval polls from Gallup and other polling services) from Harry Truman through Joe Biden using our math recognition and automated model fitting technology. Our Math Recognition (MathRec) engine has a large, expanding database of known mathematics and uses AI and pattern recognition technology to identify likely candidate mathematical models for data such as the Presidential Approval ratings data. It then automatically fits these models to the data and provides a ranked list of models ordered by goodness of fit, usually the coefficient of determination or “R Squared” metric. It automates, speeds up, and increases the accuracy of data analysis — finding actionable predictive models for data.
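The MathRec engine itself is not shown here, but the general idea of fitting a set of candidate model forms and ranking them by goodness of fit can be sketched in a few lines of Python. The candidate functions, simulated data, and seed below are purely illustrative assumptions, not the actual engine or the approval-ratings data.

# Illustrative sketch only -- NOT the actual MathRec engine or data.
# Fit several candidate model forms to a data set and rank them by R**2.
import numpy as np
from scipy.optimize import curve_fit
from sklearn.metrics import r2_score


def line(x, a, b):
    return a*x + b


def quadratic(x, a, b, c):
    return a*x**2 + b*x + c


def cubic(x, a, b, c, d):
    return a*x**3 + b*x**2 + c*x + d


def rank_candidate_models(x, y, candidates):
    """Fit each candidate with least squares; return (name, R**2) pairs, best first."""
    scores = []
    for name, func in candidates.items():
        try:
            params, _ = curve_fit(func, x, y, maxfev=10000)
            scores.append((name, r2_score(y, func(x, *params))))
        except RuntimeError:
            scores.append((name, float("-inf")))  # fit failed to converge
    return sorted(scores, key=lambda pair: pair[1], reverse=True)


if __name__ == "__main__":
    rng = np.random.default_rng(20220101)  # reproducible simulated data
    x = np.linspace(0.0, 10.0, 200)
    y = 0.5*x**2 - 2.0*x + 1.0 + rng.normal(scale=1.0, size=x.shape)  # noisy quadratic
    ranked = rank_candidate_models(x, y, {"line": line,
                                          "quadratic": quadratic,
                                          "cubic": cubic})
    for name, r2 in ranked:
        print(f"{name:10s} R**2 = {r2:.3f}")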

The plots show a model — the blue lines — which “predicts” the approval rating based on the unemployment rate (UNRATE), the real inflation-adjusted value of gold, and the time after the first inauguration of a US President — the so-called honeymoon period. The model “explains” about forty-three percent (43%) of the variation in the approval ratings; this is the “R Squared” or coefficient of determination for the model. The model has a correlation of about sixty-six percent (0.66) with the actual Presidential approval ratings. Note that a model can have a high correlation with data even when the coefficient of determination is small.
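As a quick illustration of that last point, here is a minimal sketch with made-up numbers (not the approval-ratings model): a prediction that is perfectly correlated with the data but has the wrong scale and offset can still have a very poor, even negative, coefficient of determination.

# Minimal sketch showing that correlation and R**2 measure different things.
# All numbers are made up for illustration.
import numpy as np
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)
actual = 50.0 + 10.0*rng.normal(size=200)               # e.g. approval ratings (percent)
good_model = actual + rng.normal(scale=2.0, size=200)   # tracks level and scale
biased_model = 0.2*actual + 70.0                        # perfectly correlated, wrong scale/offset

for name, prediction in [("good model", good_model), ("biased model", biased_model)]:
    corr = np.corrcoef(actual, prediction)[0, 1]
    r2 = r2_score(actual, prediction)
    print(f"{name:12s}  correlation = {corr:.2f}   R**2 = {r2:.2f}")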

One might expect US Presidential approval ratings to decline with increasing unemployment and/or an increase in the real value of gold reflecting uncertainty and anxiety over the economy. It is generally thought that new Presidents experience a honeymoon period after first taking office. This seems supported by the historical data, suggesting a honeymoon of about six months — with the possible exception of President Trump in 2017.

The model does not (yet) capture a number of notable historical events that appear to have significantly boosted or reduced the US Presidential approval ratings: the Cuban Missile crisis, the Iran Hostage Crisis, the September 11 attacks, the Watergate scandal, and several others. Public response to dramatic events such as these is variable and hard to predict or model. The public often seems to rally around the President at first and during the early stages of a war, but support may decline sharply as a war drags on and/or serious questions arise regarding the war.

There are, of course, a number of caveats on the data. Presidential approval polls today empirically vary by several percentage points between different polling services. There are several historical cases where pre-election polling predictions were grossly in error, including the 2016 US Presidential election. A number of polls called the Dewey-Truman race in 1948 wrong, giving rise to the famous photo of President Truman holding up a copy of the Chicago Tribune announcing Dewey’s election victory.

The input data is from the Federal Reserve Economic Data (FRED) web site maintained by the Federal Reserve Bank of St. Louis, much of it originally produced by government agencies such as the Bureau of Labor Statistics, the source of the unemployment data. There is a history of criticism of these numbers. Unemployment and inflation rate numbers often seem lower than my everyday experience. A number of economists and others have questioned the validity of federal unemployment, inflation and price level, and other economic numbers.
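For readers who want to retrieve the same kind of input series themselves, one possible approach (an assumption about tooling, not necessarily how the analysis above was done) is the pandas-datareader add-on package, which can download FRED series such as UNRATE directly.

# Possible way to download the UNRATE series from FRED; not necessarily the
# pipeline used for the analysis above.  Requires: pip install pandas-datareader
import datetime

import pandas_datareader.data as web

start = datetime.datetime(1948, 1, 1)   # the UNRATE series begins in 1948
end = datetime.datetime(2022, 1, 1)

# Monthly civilian unemployment rate (Bureau of Labor Statistics) via FRED
unrate = web.DataReader("UNRATE", "fred", start, end)
print(unrate.head())
print(unrate.tail())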

(C) 2022 by John F. McGowan, Ph.D.


[Video] How to Analyze Data Using a Baseline Linear Model in Python

https://www.bitchute.com/video/b1D2KMk4kGKH/

Other Uncensored Video Links: NewTube Odysee

YouTube

Video on how to analyze data using a baseline linear model in the Python programming language. A baseline linear model is often a good starting point and reference for developing and evaluating more advanced, usually non-linear, models of data.

Article with source code: http://wordpress.jmcgowan.com/wp/article-how-to-analyze-data-with-a-baseline-linear-model-in-python/

(C) 2022 by John F. McGowan, Ph.D.


[Article] How to Analyze Data with a Baseline Linear Model in Python

This article shows Python programming language source code to perform a simple linear model analysis of time series data. Most real-world data is not linear, but a linear model provides a common baseline and starting point for comparison with more advanced, generally non-linear models.

Simulated Nearly Linear Data with Linear Model
"""
Standalone linear model example code.

Generate simulated data and fit model to this simulated data.

LINEAR MODEL FORMULA:

OUTPUT = MULT_T*DATE_TIME + MULT_1*INPUT_1 + MULT_2*INPUT_2 + CONSTANT + NOISE

set MULT_T to 0.0 for simulated data.  Asterisk * means MULTIPLY
from grade school arithmetic.  Python and most programming languages
use * to indicate ordinary multiplication.

(C) 2022 by Mathematical Software Inc.

Point of Contact (POC): John F. McGowan, Ph.D.
E-Mail: ceo@mathematical-software.com

"""

# Python Standard Library
import os
import sys
import time
import datetime
import traceback
import inspect
import glob
# Python add on modules
import numpy as np  # NumPy
import pandas as pd  # Python Data Analysis Library
import matplotlib.pyplot as plt  # MATLAB style plotting
from sklearn.metrics import r2_score  # scikit-learn
import statsmodels.api as sm  # OLS etc.

# STATSMODELS
#
# statsmodels is a Python module that provides classes and functions for
# the estimation of many different statistical models, as well as for
# conducting statistical tests, and statistical data exploration. An
# extensive list of result statistics are available for each
# estimator. The results are tested against existing statistical
# packages to ensure that they are correct. The package is released
# under the open source Modified BSD (3-clause) license.
# The online documentation is hosted at statsmodels.org.
#
# statsmodels supports specifying models using R-style formulas and pandas DataFrames. 


def debug_prefix(stack_index=0):
    """
    return <file_name>:<line_number> (<function_name>)

    REQUIRES: import inspect
    """
    the_stack = inspect.stack()
    lineno = the_stack[stack_index + 1].lineno
    filename = the_stack[stack_index + 1].filename
    function = the_stack[stack_index + 1].function
    return (str(filename) + ":"
            + str(lineno)
            + " (" + str(function) + ") ")  # debug_prefix()


def is_1d(array_np,
          b_trace=False):
    """
    check if array_np is 1-d array

    Such as array_np.shape:  (n,), (1,n), (n,1), (1,1,n) etc.

    RETURNS: True or False

    TESTING: Use DOS> python -c "from standalone_linear import *;test_is_1d()"
    to test this function.

    """
    if not isinstance(array_np, np.ndarray):
        raise TypeError(debug_prefix() + "argument is type "
                        + str(type(array_np))
                        + " Expected np.ndarray")

    if array_np.ndim == 1:
        # array_np.shape == (n,)
        return True
    elif array_np.ndim > 1:
        # (2,3,...)-d array
        # with only one axis with more than one element
        # such as array_np.shape == (n, 1) etc.
        #
        # NOTE: np.array.shape is a tuple (not a np.ndarray)
        # tuple does not have a shape
        #
        if b_trace:
            print("array_np.shape:", array_np.shape)
            print("type(array_np.shape:",
                  type(array_np.shape))
            
        temp = np.array(array_np.shape)  # convert tuple to np.array
        reference = np.ones(temp.shape, dtype=int)

        if b_trace:
            print("reference:", reference)

        mask = np.zeros(temp.shape, dtype=bool)
        for index, value in enumerate(temp):
            if value == 1:
                mask[index] = True

        if b_trace:
            print("mask:", mask)
        
        # number of axes with one element
        axes = temp[mask]
        if isinstance(axes, np.ndarray):
            n_ones = axes.size
        else:
            n_ones = axes
            
        if n_ones >= (array_np.ndim - 1):
            return True
        else:
            return False
    # END is_1d(array_np)


def test_is_1d():
    """
    test is_1d(array_np) function  works
    """

    assert is_1d(np.array([1, 2, 3]))
    assert is_1d(np.array([[10, 20, 33.3]]))
    assert is_1d(np.array([[1.0], [2.2], [3.34]]))
    assert is_1d(np.array([[[1.0], [2.2], [3.3]]]))
    
    assert not is_1d(np.array([[1.1, 2.2], [3.3, 4.4]]))

    print(debug_prefix(), "PASSED")
    # test_is_1d()


def is_time_column(column_np):
    """
    check if column_np is consistent with a time step sequence
    with uniform time steps. e.g. [0.0, 0.1, 0.2, 0.3,...]

    ARGUMENT: column_np -- np.ndarray with sequence

    RETURNS: True or False
    """
    if not isinstance(column_np, np.ndarray):
        raise TypeError(debug_prefix() + "argument is type "
                        + str(type(column_np))
                        + " Expected np.ndarray")

    if is_1d(column_np):
        # verify if time step sequence is nearly uniform
        # sequence of time steps such as (0.0, 0.1, 0.2, ...)
        #
        delta_t = np.zeros(column_np.size-1)
        for index, tval in enumerate(column_np.ravel()):
            if index > 0:
                previous_time = column_np[index-1]
                if tval > previous_time:
                    delta_t[index-1] = tval - previous_time
                else:
                    return False

        # now check that the time steps are almost all the same
        delta_median = np.median(delta_t)
        delta_range = np.max(delta_t) - np.min(delta_t)
        delta_pct = delta_range / delta_median
        
        print(debug_prefix(),
              "INFO: delta_pct is:", delta_pct, flush=True)
        
        if delta_pct > 1e-6:
            return False
        else:
            return True  # steps are almost the same
    else:
        raise ValueError(debug_prefix() + "argument has more"
                         + " than one (1) dimension.  Expected 1-d")
    # END is_time_column(array_np)


def validate_time_series(time_series):
    """
    validate a time series NumPy array

    Should be a 2-D NumPy array (np.ndarray) of float numbers

    REQUIRES: import numpy as np

    """
    if not isinstance(time_series, np.ndarray):
        raise TypeError(debug_prefix(stack_index=1)
                        + " time_series is type "
                        + str(type(time_series))
                        + " Expected np.ndarray")

    if not time_series.ndim == 2:
        raise TypeError(debug_prefix(stack_index=1)
                        + " time_series.ndim is "
                        + str(time_series.ndim)
                        + " Expected two (2).")

    for row in range(time_series.shape[0]):
        for col in range(time_series.shape[1]):
            value = time_series[row, col]
            if not isinstance(value, np.float64):
                raise TypeError(debug_prefix(stack_index=1)
                                + "time_series[" + str(row)
                                + ", " + str(col) + "] is type "
                                + str(type(value))
                                + " expected float.")

    # check if first column is a sequence of nearly uniform time steps
    #
    if not is_time_column(time_series[:, 0]):
        raise TypeError(debug_prefix(stack_index=1)
                        + "time_series[:, 0] is not a "
                        + "sequence of nearly uniform time steps.")

    return True  # validate_time_series(...)


def fit_linear_to_time_series(new_series):
    """
    Fit multivariate linear model to data.  A wrapper
    for ordinary least squares (OLS).  Include possibility
    of direct linear dependence of the output on the date/time.
    Mathematical formula:

    output = MULT_T*DATE_TIME + MULT_1*INPUT_1 + ... + CONSTANT

    ARGUMENTS: new_series -- np.ndarray with two dimensions
                             with multivariate time series.
                             Each column is a variable.  The
                             first column is the date/time
                             as a float value, usually a
                             fractional year.  Final column
                             is generally the suspected output
                             or dependent variable.

                             (time)(input_1)...(output)

    RETURNS: fitted_series -- np.ndarray with two dimensions
                              and two columns: (date/time) (output
                              of fitted model)

             results --
                 statsmodels.regression.linear_model.RegressionResults

    REQUIRES: import numpy as np
              import pandas as pd
              import statsmodels.api as sm  # OLS etc.

    (C) 2022 by Mathematical Software Inc.

    """
    validate_time_series(new_series)

    #
    # a data frame is a package for a set of numbers
    # that includes key information such as column names,
    # units etc.
    #
    input_data_df = pd.DataFrame(new_series[:, :-1])
    input_data_df = sm.add_constant(input_data_df)

    output_data_df = pd.DataFrame(new_series[:, -1])

    # statsmodels Ordinary Least Squares (OLS)
    model = sm.OLS(output_data_df, input_data_df)
    results = model.fit()  # fit linear model to the data
    print(results.summary())  # print summary of results
                              # with fit parameters, goodness
                              # of fit statistics etc.

    # compute fitted model values for comparison to data
    #
    fitted_values_df = results.predict(input_data_df)

    fitted_series = np.vstack((new_series[:, 0],
                               fitted_values_df.values)).transpose()

    assert fitted_series.shape[1] == 2, \
        str(fitted_series.shape[1]) + " columns, expected two(2)."

    validate_time_series(fitted_series)

    return fitted_series, results  # fit_linear_to_time_series(...)


def test_fit_linear_to_time_series():
    """
    simple test of fitting  a linear model to simple
    simulated data.

    ACTION: Displays plot comparing data to the linear model.

    REQUIRES: import numpy as np
              import matplotlib.pyplot as plt
              from sklearn.metrics import r2_score (scikit-learn)

    NOTE: In mathematics a function f(x) is linear if:

    f(x + y) = f(x) + f(y)  # function of sum of two inputs
                            # is sum of function of each input value

    f(a*x) = a*f(x)         # function of constant multiplied by
                            # an input is the same constant
                            # multiplied by the function of the
                            # input value

    (C) 2022 by Mathematical Software Inc.
    """

    # simulate roughly monthly data for the years 2010 to 2022
    time_steps = np.linspace(2010.0, 2022.0, 120)
    #
    # set random number generator "seed"
    #
    np.random.seed(375123)  # make test reproducible
    # make random walks for the input values
    input_1 = np.cumsum(np.random.normal(size=time_steps.shape))
    input_2 = np.cumsum(np.random.normal(size=time_steps.shape))

    # often awe inspiring Greek letters (alpha, beta,...)
    mult_1 = 1.0  # coefficient or multiplier for input_1
    mult_2 = 2.0   # coefficient or multiplier for input_2
    constant = 3.0  # constant value  (sometimes "pedestal" or "offset")

    # simple linear model
    output = mult_1*input_1 + mult_2*input_2 + constant
    # add some simulated noise
    noise = np.random.normal(loc=0.0,
                             scale=2.0,
                             size=time_steps.shape)

    output = output + noise

    # bundle the series into a single multivariate time series
    data_series = np.vstack((time_steps,
                             input_1,
                             input_2,
                             output)).transpose()

    #
    # np.vstack((array1, array2)) vertically stacks
    # array1 on top of array 2:
    #
    #  (array 1)
    #  (array 2)
    #
    # transpose() to convert rows to vertical columns
    #
    # data_series has rows:
    #    (date_time, input_1, input_2, output)
    #    ...
    #

    # the model fit will estimate the values for the
    # linear model parameters MULT_T, MULT_1, and MULT_2

    fitted_series, \
        fit_results = fit_linear_to_time_series(data_series)

    assert fitted_series.shape[1] == 2, "wrong number of columns"

    model_output = fitted_series[:, 1].flatten()

    #
    # Is the model "good enough" for practical use?
    #
    # Compute R-SQUARED also known as R**2
    # coefficient of determination, a goodness of fit measure
    # roughly percent agreement between data and model
    #
    r2 = r2_score(output,  # ground truth / data
                  model_output  # predicted values
                  )

    #
    # Plot data and model predictions
    #

    model_str = "OUTPUT = MULT_1*INPUT_1 + MULT_2*INPUT_2 + CONSTANT"

    f1 = plt.figure()
    # set light gray background for plot
    # must do this at start after plt.figure() call for some
    # reason
    #
    ax = plt.axes()  # get plot axes
    ax.set_facecolor("lightgray")  # confusingly use set_facecolor(...)
    # plt.ylim((ylow, yhi))  # debug code
    plt.plot(time_steps, output, 'g+', label='DATA')
    plt.plot(time_steps, model_output, 'b-', label='MODEL')
    plt.plot(time_steps, data_series[:, 1], 'cd', label='INPUT 1')
    plt.plot(time_steps, data_series[:, 2], 'md', label='INPUT 2')
    plt.suptitle(model_str)
    plt.title(f"Simple Linear Model (R**2={100*r2:.2f}%)")

    ax.text(1.05, 0.5,
            model_str,
            rotation=90, size=7, weight='bold',
            ha='left', va='center', transform=ax.transAxes)

    ax.text(0.01, 0.01,
            debug_prefix(),
            color='black',
            weight='bold',
            size=6,
            transform=ax.transAxes)

    ax.text(0.01, 0.03,
            time.ctime(),
            color='black',
            weight='bold',
            size=6,
            transform=ax.transAxes)

    plt.xlabel("YEAR FRACTION")
    plt.ylabel("OUTPUT")
    plt.legend(fontsize=8)
    # add major grid lines
    plt.grid()
    plt.show()

    image_file = "test_fit_linear_to_time_series.jpg"
    if os.path.isfile(image_file):
        print("WARNING: removing old image file:",
              image_file)
        os.remove(image_file)

    f1.savefig(image_file,
               dpi=150)

    if os.path.isfile(image_file):
        print("Wrote plot image to:",
              image_file)

    # END test_fit_linear_to_time_series()


if __name__ == "__main__":
    # MAIN PROGRAM

    test_fit_linear_to_time_series()  # test linear model fit

    print(debug_prefix(), time.ctime(), "ALL DONE!")

(C) 2022 by John F. McGowan, Ph.D.


[Video] How to Extract Data from Images of Plots

Free Speech Video Links: Odysee RUMBLE NewTube

Short video on how to extract data from images of plots using WebPlotDigitizer, a free, open-source program available for Windows, Mac OS X, and Linux platforms.

WebPlotDigitizer web site: https://automeris.io/WebPlotDigitizer/

(C) 2022 by John F. McGowan, Ph.D.


[Video] Ukraine COVID and Biden Approval Ratings Deeper Dive

Uncensored Video Links: BitChute Odysee Rumble

Short video discussing the results of analyzing President Biden’s declining approval ratings and the possible effects of the COVID pandemic and the Ukraine crisis on the approval ratings.

A detailed longer explanation of the analysis discussed can be found in the previous video “How to Analyze Simple Data Using Python” available on all of our video channels.

(C) 2022 by John F. McGowan, Ph.D.


[Video] How to Analyze Simple Data Using Python

Uncensored Video Links: BitChute NewTube ARCHIVE Brighteon Odysee

Video on how to analyze simple data using the Python programming language, with President Biden’s approval ratings as an example.

(C) 2022 by John F. McGowan, Ph.D.


[Video] How to Analyze Simple Data with Libre Office Calc

Uncensored Video Links: BitChute NewTube

Video on how to perform a simple analysis of simple data in LibreOffice Calc, a free open-source “clone” of Microsoft Excel. Demonstrates how to use the Trend Line feature in LibreOffice Calc Charts. Discusses how to use the R Squared goodness of fit statistic to evaluate the analysis.

(C) 2022 by John F. McGowan, Ph.D.


[Article] Ukraine and President Biden’s Approval Rating

Russia invaded Ukraine on February 24, 2022, temporarily pushing the COVID-19 pandemic, the pandemic response, and the huge number of COVID cases and deaths worldwide attributed to the Omicron variant of SARS-COV-2, despite high levels of vaccination and masking, out of the headlines. So far, however, President Biden does not appear to have gotten a boost from the rally-around-the-flag/leader effect that, for example, boosted President George W. Bush’s approval ratings dramatically after the September 11, 2001 mass murder incidents, usually described as attacks on the United States. To be sure, so far there has not been a “New Pearl Harbor” such as 9/11, nor a cyberattack or other direct attack on the United States blamed on Russia.

Biden’s approval rating continues to drop (March 27, 2022)

Polling data from Gallup, Rasmussen, and a broad sampling of popular polls all show no clear boost, even a small one, from the Ukraine-Russia crisis so far, given the probable few-percent error margin of the polls:

https://news.gallup.com/poll/329384/presidential-approval-ratings-joe-biden.aspx
https://www.rasmussenreports.com/public_content/politics/biden_administration/biden_approval_index_history
https://www.pollingreport.com/biden_job.htm

All of the polls show a marked drop in Biden’s approval ratings in July-August of 2021. One cannot be certain of the reasons, of course, but this is when it became clear that the COVID vaccines worked poorly at best and did not prevent infection or transmission in the vaccinated, contrary to prominent, super-confident statements by Biden and his administration that the vaccines would prevent infection in the vaccinated (something obviously untrue from the Pfizer and Moderna clinical trial reports, which reported some infections in vaccinated trial subjects).

The approval rating plots above were made with data copied from the referenced web sites on March 27, 2022 and plotted in LibreOffice Calc, a free open-source spreadsheet program similar to Excel, loosely an Excel “clone” although there are some differences.

(C) 2022 by John F. McGowan, Ph.D.


[Article] The Omicron Bind

The omicron variant of the SARS-COV-2 virus, widespread testing including the newly available (in the US) antigen tests, or some combination of these factors has resulted in a huge number of both cases and deaths attributed to SARS-COV-2 despite widespread vaccination. Cases and deaths have soared in groups and regions, such as Israel and the UK, that report very high vaccination rates, making it implausible to attribute the cases to the “unvaccinated.”

This has put public health authorities such as the US Centers for Disease Control and Prevention (CDC), WHO, and other agencies around the world in a bind. At best, the soaring cases and deaths reflect either extensive failure of the current vaccines or high false positive rates in the various tests resulting in other respiratory disease cases and other deaths being misidentified as COVID-19.

For the sake of argument, accept that the vaccines are quite safe despite the alarming VAERS data in the United States, and that, as widely claimed, they do confer some reduction in death and severity of illness for a brief duration of a few months. Nonetheless, the huge number of cases and deaths throughout the world suggests that the vaccines are largely ineffective in real-world conditions. This may be due to omicron mutating around vaccine-induced immunity based on earlier variants of SARS-COV-2, the frequent waning of the immunity conferred by inactivated vaccines, or other causes not yet identified.

This failure is, of course, an embarrassing debacle at best for public health authorities and agencies, political leaders, and various billionaire philanthropists, particularly because the costly and disruptive lockdowns were justified as protecting the vulnerable until a life-saving vaccine became available, despite the well-known high failure rate of research and development, often estimated at 80-90 percent.

The other explanation, in full or in part, for soaring COVID cases and deaths, one which preserves the putative vaccine “miracle,” is that the various tests for SARS-COV-2 and COVID-19 have high false positive rates, that a substantial number of the deaths are “with COVID” rather than “from COVID” (a simplistic binary interpretation of the causes of death), and other “bad counting” explanations. In the last few months, we have seen more and more publications and public announcements moving in this direction, such as: https://www.reuters.com/business/healthcare-pharmaceuticals/cdc-reports-fewer-covid-19-pediatric-deaths-after-data-correction-2022-03-18/ , https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2790263, and possibly https://www.clarkcountytoday.com/news/probe-finds-officials-miscalculated-covid-19-death-toll/.

However, if current omicron cases and deaths reflect high false positive rates, then the past case and death counts since March of 2020, often described by public health authorities (or by mainstream news reports citing unnamed public health authorities) as both “undercounts” and highly accurate, are even more suspect than the recent numbers. In the United States, the CDC has previously blamed the lack of real-time data reporting and other flaws in the COVID-19 case and death data, which forced officials to rely on data from the UK and Israel, on antiquated IT systems, lack of funding in previous budgets, alleged cuts by the Trump administration, and similar excuses.

Many of the early tests were produced in haste, rushed out under emergency use authorizations (EUA), including an embarrassing failure by the CDC early in the pandemic to produce a usable PCR test (URL: https://arstechnica.com/science/2020/04/cdcs-failed-coronavirus-tests-were-tainted-with-coronavirus-feds-confirm/ ). Tests, testing methods, and technologies should certainly have improved over the last two years of the pandemic, especially given the trillions of dollars spent on the pandemic response; if not, why not?

Some mainstream reporting on problems with the US CDC’s data and data handling:

https://www.politico.com/news/2021/08/15/inside-americas-covid-data-gap-502565

https://www.politico.com/news/2021/09/13/cdc-biden-health-team-vaccine-boosters-511529

https://www.politico.com/news/2021/08/25/cdc-pandemic-limited-data-breakthroughs-506823

https://www.politico.com/news/2022/03/21/cdc-email-data-walensky-00018614

https://www.theverge.com/2022/3/22/22990852/cdc-public-health-data-covid

The current crisis in Ukraine has undoubtedly distracted much of the public from the omicron bind. Nonetheless, the soaring cases and deaths attributed to the omicron and post-omicron variants of SARS-COV-2 appear to reveal gross contradictions in the claims by public health authorities about the COVID pandemic. While it is usually possible in practice to find some convoluted, acrobatic explanation for obviously contradictory data and/or logic, such explanations are rarely true.

Improper Scientific Practice

The public health authorities are portraying the flip-flops and contradictions in their assertions about COVID as brilliant scientific discoveries — new science or “the science has changed” — although that excuse is wearing thin. This is not how proper science functions, even during major breakthroughs. Proper science proceeds from tentative statements and numbers with large error bars and/or broad confidence intervals to smaller and smaller errors as more data, better measurements, and better models are developed.

Error bars are graphical representations of the variability of data and used on graphs to indicate the error or uncertainty in a reported measurement. They give a general idea of how precise a measurement is, or conversely, how far from the reported value the true (error free) value might be. Error bars often represent one standard deviation of uncertainty, one standard error, or a particular confidence interval (e.g., a 95% interval). These quantities are not the same and so the measure selected should be stated explicitly in the graph or supporting text. Error Bars, Wikipedia, March 25, 2022
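For instance, error bars of the sort described in the quoted definition can be drawn with matplotlib's errorbar function; the following is a minimal sketch with made-up values, each bar representing one standard error.

# Minimal matplotlib sketch of error bars; all numbers are made up for illustration.
import numpy as np
import matplotlib.pyplot as plt

years = np.arange(2010, 2021)                       # hypothetical measurement years
measured = 10.0 + 0.5*np.sin(0.8*(years - 2010))    # hypothetical measured values
std_error = np.full(years.shape, 0.4)               # one standard error per point

plt.errorbar(years, measured, yerr=std_error, fmt='o', capsize=4,
             label='measurement +/- 1 standard error')
plt.xlabel("YEAR")
plt.ylabel("MEASURED VALUE")
plt.title("Error Bars: One Standard Error per Measurement")
plt.legend()
plt.grid()
plt.show()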

Science rarely jumps from super-confident statements such as “masks don’t work” to grossly contradictory super-confident statements such as CDC Director Robert Redfield’s ludicrous claim in the summer of 2020 that masks would stop the pandemic in 8-12 weeks. (LINK: https://people.com/health/americans-wore-masks-drive-this-epidemic-to-the-ground-says-cdc-director/ ) That sort of jump or contradiction usually indicates bad science: gross underestimation of the errors before or after the jump (or both). In most cases, a scientific discovery is reflected in a sharp, discontinuous drop in the error bars due to a better theoretical and/or mathematical model, better measurements, or both.

For example, Johannes Kepler’s discovery of the elliptical orbits of the planets, built on Tycho Brahe’s unusually precise naked-eye measurements, resulted in a dramatic drop in the error bars on predictions of planetary motions, from about a one percent (1%) error with the Ptolemaic system to a tiny fraction of one percent. It did not result in a gross reversal of centuries of astronomical observations and predictions. Ptolemy and his successors knew their model was imperfect and said so. Mars did not suddenly stop backing up for two months every two years in 1605 when Kepler realized what was going on. The empirical phenomenon did not somehow reverse overnight; rather, our understanding leaped forward and the accuracy of the predictions increased dramatically.

(ABOVE) The red error bars and the dark blue data points show ideal, proper scientific practice, in which the reported red error bars include the actual value, which in the hypothetical example shown is largely pinned down during the 2015-2016 period when the science jumps forward. The green error bars and light cyan data points show improper scientific practice, in which the scientists are over-confident both before and after the “breakthrough.”

It is common for over-confident scientists to explain the contradiction by referring to the uncertainty of science, as if the poorly educated audience or critic were unaware of uncertainty, and as if the scientists had properly reported the large pre-2015 red error bars when they actually reported the incorrect, small green error bars. This switch is the scientific uncertainty excuse.

Indeed, there is a frequent, improper failure to report statistical and systematic errors throughout the public health “science,” both as presented to the lay public and on news shows, and in CDC and other web sites and publications. One of the most striking examples is the large difference between the number of deaths attributed to “pneumonia and influenza” on the US CDC FluView website (~188,000 per year) and in the US CDC leading causes of death report (~55,000 per year). These grossly contradictory numbers have been reported for years with no statistical or systematic errors and no clear explanation for the difference. The discrepancy between the FluView website and the leading causes of death report predates the COVID-19 pandemic by several years. It is likely extremely relevant to the question of whether a death is “with COVID,” “from COVID,” or some intermediate case.

The CDC FluView website shows that 6-10 percent of all deaths, varying seasonally, are due to “pneumonia and influenza (P&I)” (the precise language on the graphic), according to the vertical axis label on the FluView Pneumonia & Influenza Mortality plot. The underlying data files from the National Center for Health Statistics (NCHS) list, as mentioned, roughly 188,000 deaths per year attributed to pneumonia and influenza.

NOTE: https://www.cdc.gov/flu/weekly/fluviewinteractive.htm and click on P&I Mortality Tab


The CDC FluView graphic and underlying data files list no statistical or systematic errors. The counts of deaths in the data files give the numbers to the last significant digit, implying an error of less than one count, one death, based on common scientific and engineering practice.

In contrast, the CDC’s leading causes of death report, Table C, “Deaths and percentage of total deaths for the 10 leading causes of death: United States, 2016 and 2017,” on page nine (see Figure 3), attributes only 2 percent of annual deaths (about 55,000 in 2017) to “influenza and pneumonia.”

The difference between the CDC FluView and leading causes of death report numbers seems to be due to the requirement that pneumonia or influenza be listed as “the underlying cause of death” in the leading causes of death report and only as “a cause of death” in the FluView data. This is not, however, clear. Many deaths have multiple “causes of death,” and the assignment of an “underlying cause of death” may be quite arbitrary in some or even many cases. Despite this, none of these official numbers, either in the leading causes of death report or on the FluView website, are reported with error bars or error estimates, as is the common scientific and engineering practice when numbers are uncertain. The leading causes of death report for 2017 reports exactly 55,672 deaths from “influenza and pneumonia” in 2017 with no errors, as shown in Figure 2.
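A quick back-of-the-envelope check, assuming a total of roughly 2.8 million US deaths in 2017, shows how far apart the two CDC “pneumonia and influenza” figures are as percentages of all deaths.

# Back-of-the-envelope check of the two CDC "pneumonia and influenza" numbers.
# The total-deaths figure is an approximation (roughly 2.8 million US deaths in 2017).
total_deaths_2017 = 2.8e6      # approximate total US deaths in 2017
leading_causes_pi = 55_672     # "influenza and pneumonia", leading causes of death report
fluview_pi = 188_000           # approximate annual P&I deaths underlying FluView

print(f"Leading causes report: {100*leading_causes_pi/total_deaths_2017:.1f}% of all deaths")
print(f"FluView data files:    {100*fluview_pi/total_deaths_2017:.1f}% of all deaths")
print(f"Ratio of the two P&I counts: {fluview_pi/leading_causes_pi:.1f} to 1")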

It is impossible to perform an accurate cost-benefit analysis of any policy without honest reporting of the uncertainties/error bars. The overconfident statements will have serious real-world consequences in human lives unless they prove correct through luck.


Generally, statements with — in fact — large error bars should not override personal judgment (e.g. through mandates), especially in life-and-death situations. The government may be justified in preventing parents from treating an illness with a fatal dose of cyanide, where the lethality of the “treatment” is certain. The government is certainly not justified in compelling parents to treat an illness with an experimental treatment with large uncertainties and unknowns, even if that treatment might save the child’s life.

Scientists have an ethical obligation to honestly compute and report both statistical and systematic errors; this is common scientific and engineering practice taught by accredited universities and colleges throughout the world.

(C) 2022 by John F. McGowan, Ph.D.
