[Video/Article] Inconvenient Truths About Science

Why listen to this video? There are many heavily promoted, dangerous misconceptions about modern “science,” many of which I once shared. These misconceptions generally lead to an excessive and dangerous confidence in scientists and in claims labeled as science. They can even cost you your life, as happened to many arthritis sufferers who trusted scientific claims about the blockbuster painkiller Vioxx. Many other examples exist, some discussed briefly in the following video. I will discuss over a dozen common misconceptions. The discussion reflects my personal experience and research.

Why me? I have a B.S. in physics from Caltech and a Ph.D. in experimental particle physics from the University of Illinois at Urbana-Champaign, and I have worked for a successful video compression startup in Silicon Valley, NASA, HP Labs, and Apple.

TOPICS COVERED

  1. Scientists are people too, rarely the altruistic truth-seekers depicted in fiction and popular science writing. Egos, glory, greed. Comparable to less revered and even actively distrusted professions such as attorneys. There are many examples of error and gross misconduct up to the present day: the “Tuskegee Study of Untreated Syphilis in the Negro Male” by the US Public Health Service and US Centers for Disease Control (1932-1972), eugenics, the Vioxx scandal.

In her 2009 article “Drug Companies & Doctors: A Story of Corruption,” published in The New York Review of Books, former NEJM Editor-in-Chief Marcia Angell wrote:

…Similar conflicts of interest and biases exist in virtually every field of medicine, particularly those that rely heavily on drugs or devices. It is simply no longer possible to believe much of the clinical research that is published, or to rely on the judgment of trusted physicians or authoritative medical guidelines. I take no pleasure in this conclusion, which I reached slowly and reluctantly over my two decades as an editor of The New England Journal of Medicine.

Moral character and intelligence (IQ, general intelligence) are uncorrelated.

  2. Since World War II most modern science has been funded by the government, through giant bureaucratic funding agencies such as the National Institutes of Health, the National Science Foundation, the Department of Energy, and the Department of Defense in the USA. There was a large transformation of science during and after World War II from small-scale, often more independent research to huge government programs.

(Video segment from Eisenhower’s Farewell Address on the danger of the scientific technological elite)

  3. The success of the wartime Manhattan Project, which developed the first nuclear reactors and atomic bombs, appears to have been a fluke. Most “New Manhattan Projects” have largely or completely failed, including several in physics involving the same people or their students.

https://mathblog.com/the-manhattan-project-considered-as-a-fluke/
https://mathblog.com/the-mathematics-of-the-manhattan-project/

  4. There is an illusion of independence because so many scientists are directly employed by universities such as Harvard, Stanford, and Caltech, but those universities depend mostly on government funding. High-profile academic dissidents such as the linguist Noam Chomsky usually stay well away from truly taboo topics, often labeled “conspiracy theories” (e.g., the Kennedy assassination), “pseudoscience,” or both.
  5. The federally funded academic research system is a pyramid scheme, with many more Ph.D.s produced than long-term faculty or staff positions, typically 5-20 times more. Remarkably, leading scientists and scientific institutions continue to claim terrible shortages of scientists despite this. The result is a never-ending supply of young, cheap, often starry-eyed workers: graduate students and post-docs.

https://wordpress.jmcgowan.com/wp/category/stem-shortage-claims/

  6. A well-paid but precarious elite of tenured faculty, principal investigators, and senior scientists at government labs who can easily be replaced by a tiny fraction of the younger Ph.D.s if they rock the boat.
  7. Brilliant, well-educated, hard-working people sometimes do dumb things, both individually and collectively.
  8. Knowledge of cognitive biases such as “confirmation bias” or “cognitive dissonance” does not immunize people against those biases.
  9. Brilliant, well-educated, hard-working people are often better at rationalizing away obviously contradictory evidence or logic and at convincing others to accept their rationalizations. Paradoxically, knowledge of cognitive biases provides an arsenal of excuses to rationalize away the evidence or logic.
  10. The heavily promoted popular concept of “falsifiability,” usually attributed to Karl Popper, does not work in practice. Scientists can usually (not always) find technically plausible, sophisticated “explanations” for supposedly falsifying evidence. The result is a double standard that sets an impossible obstacle for deprecated views.
  11. The scientific uncertainty excuse. Scientists often make confident statements claiming or implying no or negligible uncertainty. When the statement proves wrong, they ridicule critics by claiming that science is tentative, an ever-evolving process, that there is an 80-90 percent failure rate in science, and that there is uncertainty they never mentioned but by implication everyone should have known about. Once the criticism is beaten back, often by this ridicule, they revert to more confident statements, sometimes grossly contradicting the previous ones.
  12. Modern scientists make heavy use of complex, error-prone, usually computerized mathematical models and advanced statistical methods that are difficult to reproduce or criticize. These methods are prone to finding small signals that rarely exceed the normal variation of the data when small mistakes are made, whether innocently, due to subconscious bias, or intentionally.
  13. The error rate of top science students in school, college, university, and academic settings is very low, possibly zero percent for some top students (an 800 on the SAT, a few top students at Caltech, MIT, etc.). But this does not translate to real-world R&D, where failure rates are clearly much higher. Scientists selectively cite a failure rate of 80-90 percent when confronted about obvious failures (cost and schedule overruns, failed cancer breakthroughs, etc.).
  14. Prodigies and highly successful scientists (tenured faculty, etc.) frequently have unusual family backgrounds: extremely wealthy, politically connected families or an often prominent academic family. Parents who know calculus remove a significant hurdle that most “nerds” face alone. This is not like Good Will Hunting or The Big Bang Theory, where prodigies are portrayed as working class or poor and a purely genetic fluke is implied.
  15. “Science” (in scare quotes) is promoted by scientists as a religion or substitute for religion, a comprehensive “rational” worldview demanding fealty and a paradoxically irrational obeisance. Extreme examples include the use of the term “God Particle” for the Higgs particle in particle physics, promoted by the late Nobel Laureate Leon Lederman and others, and Carl Sagan’s inaccurate account of the destruction of the Library of Alexandria and the murder of Hypatia in Cosmos. “Science” in this sense is often closely tied to militant atheism and materialism despite its heavy use of religious and mystical terms and ideas, and to organized skeptics such as CSI/CSICOP, Michael Shermer, and others. Dissenting or differing points of view are labeled anti-science, conspiracy theories, pseudoscience, denialism, and so on.

Carl Sagan, Neil deGrasse Tyson and Hypatia (Debunked):

Conclusion: I’ve discussed over a dozen major, heavily promoted, dangerous misconceptions about “science.” If you find some of these hard to accept, do your own research. I have numerous articles on the false scientist shortage claims, also known as STEM shortage claims, on my website, as well as articles on the Manhattan Project as a fluke and on the Myth of Falsifiability. I will likely post more supporting information on the other misconceptions in the future. Most importantly, true science requires thinking carefully and critically for yourself and not treating something labeled “science” as a religion or substitute for religion, either consciously or subconsciously.

References:

https://mathblog.com/the-manhattan-project-considered-as-a-fluke/
https://mathblog.com/the-mathematics-of-the-manhattan-project/

https://wordpress.jmcgowan.com/wp/category/stem-shortage-claims/

Subscribe to our free Weekly Newsletter for articles and videos on practical mathematics, Internet Censorship, ways to fight back against censorship, and other topics by sending an email to: subscribe [at] mathematical-software.com

Avoid Internet Censorship by Subscribing to Our RSS News Feed: http://wordpress.jmcgowan.com/wp/feed/

Legal Disclaimers: http://wordpress.jmcgowan.com/wp/legal/

Support Us:
PATREON: https://www.patreon.com/mathsoft
SubscribeStar: https://www.subscribestar.com/mathsoft

BitChute (Video): https://www.bitchute.com/channel/HGgoa2H3WDac/
NewTube (Video): https://newtube.app/user/mathsoft
Brighteon (Video): https://www.brighteon.com/channels/mathsoft
LBRY (Video): https://lbry.tv/@MathematicalSoftware:5

(C) 2021 by John F. McGowan, Ph.D.

About Me

John F. McGowan, Ph.D. solves problems using mathematics and mathematical software, including developing gesture recognition for touch devices, video compression and speech recognition technologies. He has extensive experience developing software in C, C++, MATLAB, Python, Visual Basic and many other programming languages. He has been a Visiting Scholar at HP Labs developing computer vision algorithms and software for mobile devices. He has worked as a contractor at NASA Ames Research Center involved in the research and development of image and video processing algorithms and technology. He has published articles on the origin and evolution of life, the exploration of Mars (anticipating the discovery of methane on Mars), and cheap access to space. He has a Ph.D. in physics from the University of Illinois at Urbana-Champaign and a B.S. in physics from the California Institute of Technology (Caltech).

Galileo’s Greatest Blunder

This is a short video about Galileo’s Greatest Blunder, which probably destroyed the famous astronomer’s career.

Video also available on BitChute: https://www.bitchute.com/video/aDJ51f4lvReB/

Avoid Internet Censorship by Subscribing to Our RSS News Feed: http://wordpress.jmcgowan.com/wp/feed/

Legal Disclaimers: http://wordpress.jmcgowan.com/wp/legal/

Support Us: PATREON: https://www.patreon.com/user?u=28764298

(C) 2020 by John F. McGowan, Ph.D.


The Myth of Falsifiability Article

NOTE: This is an updated version of my presentation “The Myth of Falsifiability.” I have added a few comments on the application of falsifiability and falsifiability metrics to models of the COVID-19 pandemic. The main focus is on the safety and effectiveness of drugs and medical treatments and financial models of investments, but the relevance to COVID-19 models should be obvious. A video version of this presentation is available at https://youtu.be/6y6_6x_kmlY

The article starts with a discussion of the myth of falsifiability, a commonly cited doctrine often used to exclude certain points of view and evidence from consideration as “not scientific”. It discusses the glaring problems with the popular versions of this doctrine and the lack of a rigorous quantitative formulation of a more nuanced concept of falsifiability as originally proposed, but not developed, by the philosopher Karl Popper. The article concludes with a brief accessible presentation of our work on a rigorous quantitative falsifiability metric useful in practical science and engineering.

The scientific doctrine of falsifiability is key in practical problems such as confirming the accuracy and reliability of epidemiological models of the COVID-19 pandemic, the safety and effectiveness of pharmaceuticals and the safety and the reliability of financial models. How confident can we be of unprecedented world-changing policies ostensibly based on a plethora of conflicting models of the COVID-19 pandemic combined with highly incomplete and rapidly changing data?

How confident can we be that FDA approved drugs are safe, let alone effective? How confident can we be of AAA ratings for securities based on mathematical models?

In practice falsifiability is commonly cited to exclude certain points of view and evidence from consideration as “not scientific”.

The Encyclopaedia Britannica gives a typical example of the popular version of falsifiability:

Criterion of falsifiability, in the philosophy of science, a standard of evaluation of putatively scientific theories, according to which a theory is genuinely scientific only if it is possible in principle to establish that it is false.

Encyclopaedia Britannica

In practice, this popular version of falsifiability gives little guidance in evaluating whether an epidemiological model is reliable, a drug is safe or effective, or a triple-A rated security is genuinely low risk. In actual scientific and engineering practice, we need a reliable estimate of how likely it is that the apparent agreement between model and data is due to flexibility in the model from adjustable parameters, ad hoc changes to the mathematical model, and other causes such as data selection procedures. I will discuss this in more detail later in this article.

Karl Popper and The Logic of Scientific Discovery

The Austrian philosopher Karl Popper developed and presented a theory of falsifiability in his book The Logic of Scientific Discovery. This book is often cited and rarely read. My copy is 480 pages of small type.

Popper was especially concerned with rebutting the ostensibly scientific claims of Marxism and other ideologies. Popper was a deep thinker and understood that there were problems with a simple concept of falsifiability as I discuss next.

Falsifiability is largely encountered in disputes about religion and so-called pseudo-science, for example parapsychology. It is notably common in disputes over teaching evolution and creationism in schools, creationism being the notion that God created the universe, life, and human beings in some way. In the United States, creationism often refers to a literal or nearly literal interpretation of the Book of Genesis in the Bible.

This is a typical example from the RationalWiki arguing that creationism is not falsifiable and therefore is not science.

RationalWiki Example of the Common Use of Falsifiability

Remarkably, the doctrine of falsifiability is very rarely invoked in the scholarly scientific peer-reviewed literature, almost never outside of rare articles specifically rebutting the legitimacy of topics such as creationism and alleged pseudo-science. For example, a search of the arxiv.org preprint archive (1.6 million articles) turned up only eight matches for falsifiability and Popper as shown here.

Scientific and Engineering Citation of Falsifiability is Extremely Rare

In fact, there are many historical examples of scientific theories that could not be falsified but have been confirmed.

Consider the existence of black swans, eventually discovered in Australia. No matter how long one fails to find a single black swan, this does not prove they do not exist.

Stones falling from the sky, meteorites, were rejected by science for over a century despite many historical and anecdotal accounts of these remarkable objects.

A Black Swan and a Meteorite

What experiment could we reasonably now perform that would falsify the existence of black swans and meteorites? Does this mean they are not scientific even though they exist?

The Hebrew Bible

Divine creation of the world and the existence of God are both examples of propositions that are impossible to falsify or disprove, yet they could be almost completely verified by evidence that nearly all people would accept as all but conclusive.

For example, if we were to discover the Hebrew text of the Bible encoded in a clear way in the DNA of human beings, this would be strong, though not absolutely conclusive, evidence for divine creation.

If the Sun were to stop in its course for an hour tomorrow and a voice boomed out from the Heavens, “This is God. I created the world and human beings. Make love not war,” this would reasonably be accepted as nearly conclusive evidence of God and creation.

The Matrix: The World is a Computer Simulation

Of course, any evidence for God or any other remarkable or unexpected phenomenon can be explained by invoking other extreme possibilities such as time travel, super-advanced space aliens or inter-dimensional visitors, or a computer simulation reality as in The Matrix movie.

I am not endorsing any religion or divine creation in making this point. I am simply pointing out the deep flaws in the doctrine of falsifiability as generally invoked.

Fritz Zwicky and the Velocity Curves for the Triangulum Galaxy (Messier 33 or M33)

Let’s leave the world of religion and theology behind and take a look at the problems with falsifiability in mainstream scientific cosmology including the scientific account of creation, the Big Bang Theory.

In the 1930s Fritz Zwicky (shown on the left), an astronomer at the California Institute of Technology (Caltech), noticed that the orbital velocities of stars in our galaxy, the Milky Way, around the Galactic Center failed to decline with distance from the Galactic Center as predicted by both Newton’s theory of gravity and Einstein’s more recent General Theory of Relativity.

The plot on the right shows a similar dramatic discrepancy in a nearby galaxy, the Triangulum Galaxy, also known as Messier 33 (M33).

These observations would appear to falsify both Newton’s and Einstein’s theories of gravity in a dramatic way. Did scientists forthrightly falsify these theories, as RationalWiki and other popular versions of falsifiability claim they would?

NO. They did not. Instead they postulated a mysterious “dark matter,” which could not be observed, to fix the gross discrepancy between theory and experiment.

In the last century, numerous additional discrepancies at the larger scales of clusters and super-clusters of galaxies have been observed, leading to the introduction of additional types of dark matter to get the theory to match the observations. None of these hypothetical dark matter candidates have ever been observed despite many searches.

Hubble Space Telescope

Einstein’s General Theory of Relativity originally included an additional term, usually known as the cosmological constant, to allow a static universe, preventing the expansion (or contraction) the equations otherwise predicted. Einstein is reported to have called this term his “greatest blunder” after observations by Edwin Hubble revealed otherwise unexplained extragalactic redshifts that could be explained as caused by the expansion of the universe, what is now called the Big Bang Theory.

The observation of the red shifts appeared to falsify Einstein’s theory. Einstein quickly dropped the cosmological constant term, achieving agreement with the data.

The Hubble Space Telescope discovered evidence that the expansion of the universe was accelerating, something the General Theory of Relativity failed to predict.

The Cosmological Term

Did scientists falsify the General Theory at this point? NO. Einstein had chosen the value of the cosmological constant to exactly balance the predicted expansion, which initially contradicted known observations and theoretical prejudices. By choosing a different value of the cosmological constant, modern scientists could reproduce the acceleration found by the Hubble.

Einstein, right even when he was wrong! Modern cosmologists attribute the non-zero cosmological constant to a mysterious dark energy permeating the universe. So far the dark energy, like the dark matter before it, has never been directly observed.

The modern Big Bang Theory incorporates other as yet unobserved entities such as “inflation” as well.

The Martian Epicycle

In practice, it is almost always possible to salvage a scientific theory by postulating undetected and perhaps unmeasurable entities such as dark matter, dark energy, inflation, and the original Ptolemaic epicycles.

In the Ptolemaic Earth-centered solar system Mars orbits the Earth. Mars is observed to back up in the Zodiac for about two months every two years. This clearly contradicted simple uniform circular motion around the Earth. The gross discrepancy was largely fixed by introducing an epicycle in which Mars orbits an invisible point that in turn orbits the Earth, as shown in the plot on the right. The ancients interpreted Mars as a god or angel and justified the epicycles as complex dance moves dictated by the king of the gods or a monotheistic God.

In mathematical terms, a rigorous quantitative theory such as the General Theory of Relativity or Newton’s Theory of Gravity is a mathematical formula or expression. Discrepancies between these theories and observation can be resolved by adding, subtracting, or modifying different terms in the formula, such as the cosmological constant term. These modified terms often correspond to hypothetical entities such as dark energy.
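To make the “additional term” idea concrete, the standard textbook form of Einstein’s field equations with the cosmological constant included is

$$ G_{\mu\nu} + \Lambda\, g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu} $$

Setting Λ to zero removes the term; choosing a small positive value yields the accelerating expansion now attributed to dark energy. Adding or re-tuning a term of this kind is exactly the sort of modification that can rescue a theory from apparently falsifying observations.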

MOND Alternative to General Relativity with Dark Matter

Many alternative theories to general relativity exist. MOND or Modified Newtonian Dynamics is the leading competitor at the moment. It can explain many (not all) observations without resorting to unobserved dark matter.

In fact, many complex mathematical theories such as those produced by modern machine learning and deep learning methods can “explain” the observations in scientific cosmology.

This is not surprising: complex theories with many adjustable parameters, like the cosmological constant, are plastic and can fit a wide range of data; in extreme cases they can fit almost any data set, much as saran wrap can fit almost any solid surface.

A simple example of this saran-wrap-like behavior of complex mathematical formulae is the Taylor polynomial. A Taylor polynomial with enough terms can approximate almost any function arbitrarily well.

The Fourth (4th) Degree Taylor Polynomial Fitted to Periodic Data

The plot here shows a Taylor polynomial approximating a periodic function, the trigonometric sine, better and better as the degree, number of terms, increases.

Sixth (6th) Degree Taylor Polynomial Fitted to the Same Periodic Data
Eighth (8th) Degree Taylor Polynomial Fitted to the Same Periodic Data
Tenth (10th) Degree Taylor Polynomial Fitted to the Same Periodic Data
All the Taylor Polynomial Models (Degrees 4,6,8, and 10) and Data in One Plot

The region of interest (ROI), containing the data used in the fit, is the region between the red triangle on the left and the red triangle on the right.

Notice that the agreement with the data in the Region of Interest improves as the degree, the number of terms, increases. R SQUARED is roughly the fraction of the variation in the data explained by the model. Notice also that the agreement of the Taylor Polynomial with the data actually worsens outside the Region of Interest as the number of terms increases.

In general the Taylor Polynomial will predict new data within the Region of Interest well but new data outside the ROI poorly.
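Here is a minimal sketch of this behavior in Python with NumPy (an illustration for this article, not the code used to produce the plots above): fit polynomials of increasing degree to noisy sine data inside a region of interest, then compare R SQUARED inside and outside that region.

```python
import numpy as np

# Noisy samples of sin(x) inside the region of interest (ROI), plus test points outside it.
rng = np.random.default_rng(0)
x_in = np.linspace(-np.pi, np.pi, 50)            # inside the ROI
y_in = np.sin(x_in) + 0.05 * rng.normal(size=x_in.size)
x_out = np.linspace(np.pi, 2.0 * np.pi, 50)      # outside the ROI
y_out = np.sin(x_out)

def r_squared(y, y_pred):
    """Coefficient of determination: roughly the fraction of variation explained."""
    ss_res = np.sum((y - y_pred) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

for degree in (4, 6, 8, 10):
    coeffs = np.polyfit(x_in, y_in, degree)      # least-squares fit on ROI data only
    r2_in = r_squared(y_in, np.polyval(coeffs, x_in))
    r2_out = r_squared(y_out, np.polyval(coeffs, x_out))
    print(f"degree {degree:2d}: R^2 inside ROI = {r2_in:.4f}, outside ROI = {r2_out:.2f}")
```

Typically R SQUARED inside the ROI climbs toward 1.0 as the degree increases, while R SQUARED outside the ROI collapses and can even go negative, meaning the fitted polynomial does worse than simply predicting the mean.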

If agreement is poor, simply add more terms – like the cosmological constant – until agreement is acceptable.

This is why the Ptolemaic theory of planetary motion with epicycles could not be falsified.

Falsifiability Metric Table for Cosmology

Is Scientific Cosmology Falsifiable?

In real scientific practice, falsifiability is too vaguely defined and is not quantitative.

Falsifiability is not a simple binary, yes-or-no criterion in actual practice. Rather, some theories are highly plastic and difficult to falsify, while others are less plastic, stiffer, and easier to falsify. Falsifiability, or plasticity, is a continuum, not a simple binary yes or no, 0 or 1.0.

Falsifiability in Drug Approvals

Nonetheless, the question of falsifiability is of great practical importance. For example, many drugs are advertised as scientifically proven or strongly implied to be scientifically proven to reduce the risk of heart attacks and extend life, to slow the progression of cancer and extend life for cancer patients, and to effectively treat a range of psychological disorders such as paranoid schizophrenia, clinical depression, and Attention Deficit Hyperactivity Disorder (ADHD).

All of these claims have been questioned by a minority of highly qualified medical doctors, scientists, and investigative reporters.

Are these claims falsifiable? If not, are they therefore not scientific? How sure can we be that these drugs work? Does the doctrine of falsifiability give any insight into these critical questions?

Somehow we need to adjust the seeming agreement of models with data for the plasticity of the models – their ability to fit a wide range of data sets due to complexity.

Falsifiability in Drug Approvals

In pharmaceutical drug approvals, the scientific theory being tested is that a drug is both safe and effective. Can erroneous claims of safety or effectiveness by pharmaceutical companies be falsified? Not always, it seems.

In the Vioxx scandal, which involved a new pain reliever marketed as a much more expensive “super-aspirin” that was supposedly safer than aspirin and other traditional pain relievers (these can cause sometimes fatal gastrointestinal bleeding after prolonged use), scientists omitted several heart attacks, strokes, and deaths from the reported tallies for the treatment group.

This omission is similar to omitting the cosmological constant term in General Relativity. Indeed the ad hoc assumptions used to omit the injuries and deaths could be expressed mathematically as additional terms in a mathematical model of mortality as a function of drug dose.

Surveys of patients treated with Vioxx after approval showed higher heart attack, stroke, and death rates than in patients treated with traditional pain relievers. Merck was nearly bankrupted by lawsuit settlements.

Vioxx: The Killer Pain Reliever Safer Than Aspirin
Merck Withdraws Vioxx from Market in 2004
Merck Stock Drops

Falsifiability in Financial Risk Models

Falsifiability of Financial Risk Models

Moving from the world of drug risks to finance: the 2008 housing and financial crash was caused in part by reliance on financial risk models that underestimated the risk of home price declines and mortgage defaults.

Many of these models roughly assumed the popular Bell Curve, also known as the Normal or Gaussian distribution. The Bell Curve is frequently used in grading school work. It also tends to underestimate the risk of financial investments.

Are financial models falsifiable? Not always it seems.

Falsifiability of Coronavirus COVID-19 Pandemic Models

The public response to the current (April 12, 2020) Coronavirus COVID-19 pandemic has been shaped by frequently complex, sometimes contradictory, and changing epidemiological models such as the widely cited Imperial College model from the group headed by Professor Neil Ferguson, a competing model from Oxford, and many other models as well. There has been considerable well-justified controversy and confusion over these models.

Can we “falsify” these models in the popular binary “yes” or “no” sense of falsifiability? They are certainly imperfect and have failed various predictions, hence various revisions. Many key parameters such as the actual mortality rate broken down by age, sex, race, pre-existing medical conditions, and other risk factors have not been measured. The Imperial College Model is reportedly quite complex and may well be very “plastic” (not very falsifiable).

In fact, all or most of the models have been “falsified” in the binary falsification sense in real time as they have made predictions that failed and have been revised in various ways. Obviously a more nuanced measure, such as the falsifiability metric discussed below, is needed to evaluate the reliability of the models and compare them.

Falsifiability in Math Recognition

This is an example of the falsifiability problem in our work at Mathematical Software. We have a large, growing database of known mathematics, functions such as the Bell Curve and the Cauchy-Lorentz function shown here. Our math recognition software identifies the best candidate mathematical models for the data from this database.

The math recognizer yields an ordered list of candidate models ranked by goodness of fit, in this example the coefficient of determination, loosely the percent of agreement with the data.

The plot is an analysis of some financial data. On the vertical axis we have the percent agreement of the model with the data; one hundred percent is perfect agreement. Technically the value on the vertical axis is the coefficient of determination, often referred to as R squared.

On the horizontal axis is the probability of getting a return on investment less than the risk free return, the return from investing in a Treasury bond, about two (2) percent per year. This probability varies dramatically from model to model. It is a key metric for investment decisions.

Our best model is the Cauchy-Lorentz model, beating out the popular Bell Curve. But what if the Cauchy-Lorentz is more plastic (less falsifiable) than the Bell Curve? The better agreement may be spurious, and the difference in risk is enormous: Cauchy-Lorentz means a high-risk investment and the Bell Curve means a low-risk investment.
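To see why the choice of model matters so much, here is a hedged sketch in Python with SciPy (not the math recognizer itself; the data, the 2 percent threshold, and the 5 percent loss level are invented for illustration) that fits both a Normal (Bell Curve) and a Cauchy-Lorentz distribution to the same synthetic daily returns and compares the downside probabilities each model implies.

```python
import numpy as np
from scipy import stats

# Hypothetical daily returns with occasional large moves (invented data, illustration only).
rng = np.random.default_rng(1)
returns = np.concatenate([rng.normal(0.0005, 0.01, 950),
                          0.02 * rng.standard_t(df=2, size=50)])

risk_free_daily = 0.02 / 252   # roughly a 2 percent annual risk-free return, per trading day

# Fit both candidate models to the same data.
mu, sigma = stats.norm.fit(returns)
loc, scale = stats.cauchy.fit(returns)

for label, dist, params in [("Bell Curve (Normal)", stats.norm, (mu, sigma)),
                            ("Cauchy-Lorentz", stats.cauchy, (loc, scale))]:
    p_below_rf = dist.cdf(risk_free_daily, *params)   # P(daily return < risk-free return)
    p_big_loss = dist.cdf(-0.05, *params)             # P(one-day loss worse than 5 percent)
    print(f"{label:20s}  P(below risk-free) = {p_below_rf:.3f}  P(loss > 5%) = {p_big_loss:.6f}")
```

The two fitted models can disagree by orders of magnitude on the probability of a large one-day loss, which is the point: which model “fits best” translates directly into very different risk estimates.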

This problem has been encountered many times in statistics, data analysis, artificial intelligence, and many other related fields. A wide variety of ad hoc attempts to solve it have been offered in the scientific and engineering literature. For example, there are many competing formulas to correct the coefficient of determination R**2 (R SQUARED), but there does not appear to be a rigorous and/or generally accepted solution or method. These adjusted R**2 formulas include Wherry’s formula, McNemar’s formula, Lord’s formula, and Stein’s formula (see graphic below).

Various Ad Hoc Adjustments for the Flexibility of Mathematical Models

The formulas do not, for example, take into account that different functions with the same number of adjustable parameters can have different degrees of plasticity/falsifiability.

In many fields, only the raw coefficient of determination R**2 is reported.
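For reference, here is a minimal sketch of the most familiar of these corrections, the standard adjusted R squared commonly attributed to Wherry; the other formulas mentioned above differ mainly in the correction factor applied to (1 - R**2).

```python
def wherry_adjusted_r2(r2, n, k):
    """Standard adjusted R squared, commonly attributed to Wherry.

    r2 : raw coefficient of determination
    n  : number of observations
    k  : number of adjustable parameters (predictors) in the model
    """
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)

# The same raw R squared of 0.90 looks less impressive as parameters are added (n = 30 observations).
for k in (1, 5, 10, 20):
    print(f"k = {k:2d} parameters: adjusted R^2 = {wherry_adjusted_r2(0.90, n=30, k=k):.3f}")
```

Note that the correction depends only on the number of observations and parameters, which is exactly the limitation discussed above: two functions with the same number of parameters receive the same adjustment even if one is far more plastic than the other.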

A Prototype Falsifiability Metric

This is an example of a prototype falsifiability metric illustrated with the Taylor Polynomials.

The metric consists of an overall falsifiability measure for the function, the value F in the title of each plot, and a function or curve adjusting the raw goodness of fit, the coefficient of determination or R SQUARED in this case, for each model.

The plots show the Taylor polynomial Ax + B in the upper left, the Taylor polynomial Ax² + Bx + C in the upper right, the 6th degree Taylor polynomial in the lower left, and the 10th degree Taylor polynomial in the lower right.

The red marker shows the adjusted value corresponding to a raw R SQUARED of 0.9, or ninety percent.

As terms are added to the model the falsifiability decreases. It is easier for the more complex models to fit data generated by other functions! The Taylor Polynomials of higher degree are more and more plastic. This is reflected in the decreasing value of the falsifiability metric F.

In addition, the goodness of fit metric, R SQUARED here, is adjusted to compensate for the higher raw values of R SQUARED that a less falsifiable, more plastic function yields. An unfalsifiable function will always give an R SQUARED of 1.0, the extreme case. The adjusted R**2 enables us to compare the goodness of fit for models with different numbers of terms and parameters, that is, different levels of falsifiability.
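The details of the falsifiability metric F are beyond the scope of this article, but purely to convey the general idea, here is one toy construction (a simplification for illustration, not the actual Mathematical Software metric): measure how well a model family can fit data that contains no real structure at all. The more variance a family can “explain” in pure noise, the more plastic, and the less falsifiable, it is.

```python
import numpy as np

def plasticity(degree, n_trials=200, n_points=30, seed=0):
    """Toy plasticity score: the median R squared a polynomial of the given degree
    achieves when fitted to pure noise. Higher plasticity = less falsifiable."""
    rng = np.random.default_rng(seed)
    x = np.linspace(-1.0, 1.0, n_points)
    scores = []
    for _ in range(n_trials):
        y = rng.normal(size=n_points)               # data with no real structure at all
        pred = np.polyval(np.polyfit(x, y, degree), x)
        ss_res = np.sum((y - pred) ** 2)
        ss_tot = np.sum((y - y.mean()) ** 2)
        scores.append(1.0 - ss_res / ss_tot)
    return float(np.median(scores))

for degree in (1, 2, 6, 10):
    p = plasticity(degree)
    print(f"degree {degree:2d}: plasticity ~ {p:.2f}, toy falsifiability score ~ {1.0 - p:.2f}")
```

This toy score falls as terms are added, mirroring the qualitative behavior of F in the plots above, but it is only meant to convey the flavor of a plasticity measurement.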

Conclusion

Conclusion Slide

In conclusion, a simple “yes” or “no” binary falsifiability as commonly defined (e.g. in the Encyclopaedia Britannica) does not hold up in real scientific and engineering practice. It is too vaguely defined and not quantitative. It also excludes scientific theories that can be verified but not ruled out. For example, in the present (April 12, 2020) crisis, it is clearly useless in evaluating the many competing COVID-19 pandemic models and their predictions.

Falsifiability does reflect an actual problem. Scientific and engineering models — whether verbal conceptual models or rigorous quantitative mathematical models — can be and often are flexible or plastic, able to match many different sets of data and, in the worst case such as the Taylor Polynomials, essentially any data set. Goodness of fit statistics such as R**2 are boosted by this plasticity/flexibility of the models, making evaluation of performance and comparison of models difficult or impossible at present.

A reliable quantitative measure is needed. What is the (presumably Bayesian) probability that the agreement between a model and data is due to this flexibility of the model as opposed to a genuine “understanding” of the data? We are developing such a practical falsifiability measure here at Mathematical Software.

(C) 2020 by John F. McGowan, Ph.D.


Was the Manhattan Project a Fluke?

This video argues that the Manhattan Project, which developed the first atomic bombs and nuclear reactors during World War II, was a fluke, not representative of what can be accomplished with Big Science programs. There have been many failed New Manhattan Projects since World War II.

Minor Correction: Trinity, the first atomic bomb test, took place on July 16, 1945 — not in May of 1945 as stated in the audio.

(C) 2019 by John F. McGowan, Ph.D.


Lost in Math: The New York Times Op-Ed

Lost in Math

In July of last year, I wrote a review, “The Perils of Particle Physics,” of Sabine Hossenfelder’s book Lost in Math: How Beauty Leads Physics Astray (Basic Books, June 2018). Lost in Math is a critical account of the disappointing progress in fundamental physics, primarily particle physics and cosmology, since the formulation of the “standard model” in the 1970’s.

Lost in Math

Dr. Hossenfelder has followed up her book with an editorial “The Uncertain Future of Particle Physics” in The New York Times (January 23, 2019) questioning the wisdom of funding CERN’s recent proposal to build a new particle accelerator, the Future Circular Collider (FCC), estimated to cost over $10 billion. The editorial has in turn produced the predictable howls of outrage from particle physicists and their allies:

Letters to the New York Times from theoretical physicist and science popularizer Jeremy Bernstein and Harvard Physics Professor Lisa Randall

The Worth of Physics Research

Physicists take issue with an Op-Ed article arguing against expensive upgrades to the super collider at CERN.

An article in Slate:

Particle Physics Is Doing Just Fine

In science, lack of discovery can be just as instructive as discovery.

By Chanda Prescod-Weinstein and Tim M.P. Tait

And apparently informal criticism of Dr. Hossenfelder during a recent colloquium and presumably on the physics “grapevine”:

“Maybe I’m crazy”, Blog Post, February 4, 2019

“Particle physicists surprised to find I am not their cheer-leader”, Blog Post, February 2, 2019

Probably there will be additional fireworks.

My original review of Lost in Math covers many points relevant to the editorial. A few additional comments related to particle accelerators:

Particle physics is heavily influenced by the ancient idea of atoms (found in Plato’s Timaeus, about 360 B.C., for example) — that matter is composed of tiny fundamental building blocks, also known as particles. The idea of atoms proved fruitful in understanding chemistry and other phenomena in the 19th century and early 20th century.

In due course, experiments with radioactive materials and early precursors of today’s particle accelerators were seemingly able to break the atoms of chemistry into smaller building blocks: electrons and the atomic nucleus comprised of protons and neutrons, presumably held together by exchanges of mesons such as the pion. The main flaw in the building block model of chemical atoms was the evident “quantum” behavior of electrons and photons (light), the mysterious wave-particle duality quite unlike the behavior of macroscopic particles like billiard balls.

Given this success, it was natural to try to break the protons, neutrons and electrons into even smaller building blocks. This required and justified much larger, more powerful, and increasingly more expensive particle accelerators.

The problem or potential problem is that this approach never actually broke the sub-atomic particles into smaller building blocks. The electron seems to be a point “particle” that clearly exhibits puzzling quantum behavior unlike any macroscopic particle from tiny grains of sand to giant planets.

The proton and neutron never shattered into constituents even though they are clearly not point particles. They seem more like small blobs or vibrating strings of fluid or elastic material. Pumping more energy into them in particle accelerators simply produced more exotic particles, a puzzling sub-atomic zoo. This led to theories like nuclear democracy and Regge poles that interpreted the strongly interacting particles (strong here referring to the strong nuclear force that binds the nucleus together and powers both the Sun and nuclear weapons) as vibrating strings of some sort. The plethora of mesons and baryons was explained as excited states of these strings — of low energy “particles” such as the neutron, the proton, and the pion.

However, some of the experiments observed electrons scattering off protons (the nucleus of the most common type of hydrogen atom is a single proton) at sharp angles as if the electron had hit a small “hard” charged particle, not unlike an electron. These partons were eventually interpreted as the quarks of the reigning ‘standard model’ of particle physics.

Unlike the proton, neutron, and electron in chemical atoms, the quarks have never been successfully isolated or extracted from the sub-nuclear particles such as the proton or neutron. This eventually led to theories that the force between the quarks grows stronger with increasing distance, mediated by some sort of string-like tube of field lines (for lack of better terminology) that never breaks however far it is stretched.

Particles All the Way Down

There is an old joke regarding the theory of a flat Earth. The Earth is supported on the back of a turtle. The turtle in turn is supported on the back of a bigger turtle. That turtle stands on the back of a third turtle and so on. It is “Turtles all the way down.” This phrase is shorthand for a problem of infinite regress.

For particle physicists, it is “particles all the way down”. Each new layer of particles is presumably composed of still smaller particles. Chemical atoms are composed of protons and neutrons in the nucleus and orbiting (sort of) electrons. Protons and neutrons are composed of quarks, although we can never isolate them. Arguably the quarks are constructed from something smaller, although the favored theories like supersymmetry have gone off in hard-to-understand multidimensional directions.

“Particles all the way down” provides an intuitive justification for building ever larger, more powerful, and more expensive particle accelerators and colliders to repeat the success of the atomic theory of matter and radioactive elements at finer and finer scales.

However, there are other ways to look at the data. Namely, the strongly interacting particles — the neutron, the proton, and the mesons like the pion — are some sort of vibrating quantum mechanical “strings” of a vaguely elastic material. Pumping more energy into them through particle collisions produces excitations — various sorts of vibrations, rotations, and kinks or turbulent eddies in the strings.

The kinks or turbulent eddies act as small localized scattering centers that can never be extracted independently from the strings — just like quarks.

In this interpretation, strongly interacting particles such as the proton, and possibly weakly interacting seeming point particles like the electron (weak referring to the weak nuclear force responsible for many radioactive decays, such as the carbon-14 decay used in radiocarbon dating), are composed of a primal material.

In this latter case, ever more powerful accelerators will only create ever more complex excitations — vibrations, rotations, kinks, turbulence, etc. — in the primal material.   These excitations are not building blocks of matter that give fundamental insight.

One needs rather to find the possible mathematics describing this primal material, perhaps a modified wave equation with non-linear terms for a viscous fluid or quasi-fluid. Einstein, de Broglie, and Schrödinger were looking at something like this to explain and derive quantum mechanics and put the pilot wave theory of quantum mechanics on a deeper basis.

A critical problem is that an infinity of possible modified wave equations exist. At present it remains a manual process to formulate such equations and test them against existing data — a lengthy trial and error process to find a specific modified wave equation that is correct.

This is a problem shared with mainstream approaches such as supersymmetry, hidden dimensions, and so forth. Even with thousands of theoretical physicists today, it is time consuming and perhaps intractable to search the infinite space of possible mathematics and find a good match to reality. This is the problem that we are addressing at Mathematical Software with our Math Recognition technology.

(C) 2019 by John F. McGowan, Ph.D.


The Mathematics Recognition Problem

 

A brief introduction to the math recognition problem and automatic math recognition using modern artificial intelligence and pattern recognition methods. Includes a call for data.  About 14 minutes.

(C) 2018 by John F. McGowan, Ph.D.


The Perils of Particle Physics

Lost in Math

Sabine Hossenfelder’s Lost in Math: How Beauty Leads Physics Astray (Basic Books, June 2018) is a critical account of the disappointing progress in fundamental physics, primarily particle physics and cosmology, since the formulation of the “standard model” in the 1970’s.  It focuses on the failure to find new physics at CERN’s $13.25 billion Large Hadron Collider (LHC) and many questionable predictions that super-symmetric particles, hidden dimensions, or other exotica beloved of theoretical particle physicists would be found at LHC when it finally turned on.  In many ways, this lack of progress in fundamental physics parallels and perhaps underlies the poor progress in power and propulsion technologies since the 1970s.

Lost in Math joins a small but growing collection of popular and semi-popular books and personal accounts critical of particle physics including David Lindley’s 1994 The End of Physics: The Myth of a Unified Theory, Lee Smolin’s The Trouble with Physics: The Rise of String Theory, the Fall of Science and What Comes Next, and Peter Woit’s Not Even Wrong: The Failure of String Theory and the Search for Unity in Physical Law.  It shares many points in common with these earlier books. Indeed, Peter Woit is quoted on the back cover and Lee Smolin is listed in the acknowledgements as a volunteer who read drafts of the manuscript.  Anyone considering prolonged involvement, e.g. graduate school, or a career in particle physics should read Lost in Math as well as these earlier books.

The main premise of Lost in Math is that theoretical particle physicists like the author have been led astray by an unscientific obsession with mathematical “beauty” in selecting, and also refusing to abandon, theories, notably super-symmetry (usually abbreviated as SUSY in popular physics writing), despite an embarrassing lack of evidence.  The author groups together several different issues under the rubric of “beauty,” including the use of the terms beauty and elegance by theoretical physicists, at least two kinds of “naturalness,” the “fine tuning” of the constants in a theory to make it consistent with life, the desire for simplicity, dissatisfaction with the complexity of the standard model (twenty-five “fundamental” particles and a complex Lagrangian that fills two pages of fine print in a physics textbook), doubts about renormalization — an ad hoc procedure for removing otherwise troubling infinities — in Quantum Field Theory (QFT), and questions about “measurement” in quantum mechanics.  Although I agree with many points in the book, I feel the blanket attack on “beauty” is too broad, conflating several different issues, and misses the mark.

In Defense of “Beauty”

As the saying goes, beauty is in the eye of the beholder.  The case for simplicity, or more accurately falsifiability, in mathematical models is on a sounder, more objective basis than beauty, however.  In many cases a complex model with many terms and adjustable parameters can fit many different data sets.  Some models are highly plastic.  They can fit almost any data set, not unlike the way saran wrap can fit almost any surface.  These models are wholly unfalsifiable.

A mathematical model which can match any data set cannot be disproven.  It is not falsifiable.  A theory that predicts everything, predicts nothing.

Some models are somewhat plastic, able to fit many but not all data sets, not unlike a rubber sheet.  They are hard to falsify — somewhat unfalsifiable.  Some models are quite rigid, like a solid piece of stone fitting into another surface.  These models are fully falsifiable.

A simple, well-known example of this problem is a polynomial with many terms.  A polynomial with enough terms can match any data set.  In general, the fitted model will fail to extrapolate, to predict data points outside the domain of the data set used in the model fitting (the training set in the terminology of neural networks, for example).  The fitted polynomial model will frequently interpolate correctly, predicting data points within the domain of the data set used in the model fitting — points near and in-between the training set data points.  Thus, we can say that a polynomial model with enough terms is not falsifiable in the sense of the philosopher of science Karl Popper because it can fit many data sets, not just the data set we actually have (real data).

This problem with complex mathematical models was probably first encountered with models of planetary motion in antiquity, the infamous epicycles of Ptolemy and his predecessors in ancient Greece and probably Babylonia/Sumeria (modern Iraq).  Pythagoras visited both Babylonia and Egypt.  The early Greek accounts of his life suggest he brought early mathematics and astronomy back to Greece from Babylonia and Egypt.

Early astronomers, probably first in Babylonia, attempted to model the motion of Mars and other planets through the Zodiac as uniform circular motion around a stationary Earth.  This was grossly incorrect in the case of Mars, which backs up for about two months about every two years.  Thus the early astronomers introduced an epicycle for Mars.  They speculated that Mars moved in uniform circular motion around a point that in turn moved in uniform circular motion around the Earth.  With a single epicycle they could reproduce the biannual backing up with some errors.  To achieve greater accuracy, they added more and more epicycles, producing an ever more complex model that had some predictive power.  Indeed the state-of-the-art Ptolemaic model in the sixteenth century was better than Copernicus’s new heliocentric model, which also relied on uniform circular motion and epicycles.
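A short sketch makes the geometry concrete.  With deliberately rough, Mars-like (but not historical Ptolemaic) radii and periods, a single deferent plus one epicycle already produces periodic retrograde motion as seen from a central Earth:

```python
import numpy as np

# Illustrative (not historical) parameters: Earth at the origin, an epicycle center
# circling it on the deferent, and the planet circling that moving center.
R_deferent, T_deferent = 1.5, 687.0     # deferent radius (arbitrary units) and period (days)
r_epicycle, T_epicycle = 1.0, 365.0     # epicycle radius and period (days)

t = np.arange(0.0, 800.0, 1.0)                      # daily samples over one full cycle
theta_d = 2.0 * np.pi * t / T_deferent
theta_e = 2.0 * np.pi * t / T_epicycle

# Geocentric position of the planet = deferent point + epicycle offset.
x = R_deferent * np.cos(theta_d) + r_epicycle * np.cos(theta_e)
y = R_deferent * np.sin(theta_d) + r_epicycle * np.sin(theta_e)

longitude = np.unwrap(np.arctan2(y, x))             # apparent longitude as seen from Earth
retrograde = np.diff(longitude) < 0.0               # days on which the planet "backs up"
print(f"Retrograde motion on {retrograde.sum()} of {retrograde.size} days in the sample")
```

Running this, the geocentric longitude of the “planet” backs up for roughly a couple of months out of each cycle, qualitatively matching the observed behavior of Mars.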

The Ptolemaic model of planetary motion is difficult to falsify because one can keep adding more epicycles to account for discrepancies between the theory and observation.  It also has some predictive power.  It is an example of a “rubber sheet” model, not a “saran wrap” model.

In the real world, falsifiability is not a simple binary criterion.  A mathematical model is not either falsifiable and therefore good or not falsifiable and therefore bad.  Rather falsifiability falls on a continuum.  In general, extremely complex theories are hard to falsify and not predictive outside of the domain of the data used to infer (fit) the complex theory.  Simpler theories tend to be easier to falsify and if correct are sometimes very predictive as with Kepler’s Laws of Planetary Motion and subsequently Newton’s Law of Gravitation, from which Kepler’s Laws can be derived.

Unfortunately, this experience with mathematical modeling is known but has not been quantified in a rigorous way by mathematicians and scientists.  Falsifiability remains a slogan primarily used against creationists, parapsychologists, and other groups rather than a rigorous criterion to evaluate theories like the standard model, supersymmetry, or superstrings.

A worrying concern with the standard model with its twenty-five fundamental particles, complex two-page Lagrangian (mathematical formula), and seemingly ad hoc elements such as the Higgs particle and Kobayashi-Maskawa matrix is that it is matching real data entirely or in part due to its complexity and inherent plasticity, much like the historical epicycles or a polynomial with many terms.   This concern is not just about subjective “beauty.”

Sheldon Glashow’s original formulation of what became the modern standard model was much simpler; it did not include the Higgs particle, the charm, top, or bottom quarks, and a number of other elements (S.L. Glashow, “Partial-symmetries of weak interactions,” Nuclear Physics 22 (4): 579-588, 1961).  Much as epicycles were added to the early theories of planetary motion, these elements were added during the 1960s and 1970s to achieve agreement with experimental results and theoretical prejudices.  In evaluating the seeming success and falsifiability of the standard model, we need to consider not only the terms that were added over the decades but also the terms that might have been added to salvage the theory.

Theories with symmetry have fewer adjustable parameters and are less plastic and flexible, less able to match the data regardless of what data are presented.  This forms an objective but poorly quantified basis for intuitive notions of the “mathematical beauty” of symmetry in physics and other fields.

The problem is that although we can express this known problem of poor falsifiability or plasticity in mathematical models (at the most extreme, an ability to fit any data set) qualitatively, with words such as “beauty,” “symmetry,” or “simplicity,” we cannot yet express it in rigorous quantitative terms.

Big Science and Big Bucks

Much of the book concerns the way the Large Hadron Collider and its huge budget warped the thinking and research results of theoretical physicists, rewarding some like Nima Arkani-Hamed who could produce catchy arguments that new physics would be found at the LHC and encouraging many more to produce questionable arguments that super-symmetry, hidden dimensions or other glamorous exotica would be discovered.   The author recounts how her Ph.D. thesis supervisor redirected her research to a topic “Black Holes in Large Extra Dimensions” (2003) that would support the LHC.

Particle accelerators and other particle physics experiments have a long history of huge cost and schedule overruns — which are generally omitted or glossed over in popular and semi-popular accounts.  The not-so-funny joke that I learned in graduate school was “multiply the schedule by pi (3.14)” to get the real schedule.  A variant was “multiply the schedule by pi for running around in a circle.”  Time is money and the huge delays usually mean huge cost overruns.  Often these have involved problems with the magnets in the accelerators.

The LHC was no exception to this historical pattern.  It went substantially over budget and schedule before its first turn on in 2008, when around a third of the magnets in the multi-billion accelerator exploded, forcing expensive and time consuming repairs (see CERN’s whitewash of the disaster here).  LHC faced significant criticism over the cost overruns in Europe even before the 2008 magnet explosion.  The reported discovery of the Higgs boson in 2012 has substantially blunted the criticism; one could argue LHC had to make a discovery.  🙂

The cost and schedule overruns have contributed to the cancellation of several accelerator projects including ISABELLE at the Brookhaven National Laboratory on Long Island and the Superconducting Super Collider (SSC) in Texas.  The particle physics projects must compete with much bigger, more politically connected, and more popular programs.

The frequent cost and schedule overruns mean that pursuing a Ph.D. in experimental particle physics often takes much longer than advertised and is often quite disappointing as happened to large numbers of LHC graduate students.  For theorists, the pressure to provide a justification for the multi-billion dollar projects is undoubtedly substantial.

While genuine advances in fundamental physics may ultimately produce new energy technologies or other advances that will benefit humanity greatly, the billions spent on particle accelerators and other big physics experiments are certain, here and now.  The aging faculty at universities and senior scientists at the few research labs like CERN who largely control the direction of particle physics cannot easily retrain for new fields, unlike disappointed graduate students or post-docs in their twenties and early thirties.  Hot new fields like computing and hot high-tech employers such as Google are noted for their preference for twenty-somethings and hostility toward employees even in their thirties.  The existing energy industry seems remarkably unconcerned about alleged “peak oil” or climate change and empirically invests little if anything in finding replacement technologies.

Is there a way forward?

Sabine, who writes on her blog that she is probably leaving particle physics soon, offers some suggestions to improve the field, primarily focusing on learning about and avoiding cognitive biases.  This reminds me a bit of the unconscious bias training that Google and other Silicon Valley companies have embraced in a purported attempt to fix their seeming avoidance of employees from certain groups — with dismal results so far.  Responding rationally if perhaps unethically to clear economic rewards is not a cognitive bias and almost certainly won’t respond to cognitive bias training.  If I learn that I am unconsciously doing something because it is in my economic interest to do so, will I stop?

Future progress in fundamental physics probably depends on finding new informative data that does not cost billions of dollars (for example, a renaissance of table top experiments), reanalysis of existing data, and improved methods of data analysis such as putting falsifiability on a rigorous quantitative basis.

(C) 2018 by John F. McGowan, Ph.D.
