
From Pasteur to Panic: Fear, Fraud, and the Fight for Historical Truth

Mike Stone
Published on March 26, 2025

A little over a year ago, the Substack Lies Are Unbekoming invited me to participate in a written interview. I was excited, as it was my first time doing a written interview, and I was curious to see how the process would unfold. While I was quite busy at the time, the format allowed me to respond at my own pace and provide relevant links to articles and sources, giving readers a deeper understanding of my answers.

I was thrilled with the thought-provoking questions and the response to my answers, which made the experience truly enjoyable. So, when I was invited to participate in another written interview this year, I was just as excited, despite an even busier schedule. It took some time, but I was able to answer all of the excellent questions, covering topics ranging from Louis Pasteur’s rabies work to measles, ‘antibodies,’ and ‘viral’ genomes. I hope you find this interview as engaging to read as it was for me to take part in.

Be sure to check out Lies Are Unbekoming for more insightful and engaging interviews on a variety of important topics!

It is with admiration and gratitude that I introduce my second interview with Mike Stone, a researcher whose groundbreaking work in medical history is peeling back the layers of deception woven by the oligarchy and its Cartel Medicine. Mike’s meticulous exploration of the past is not just an academic exercise—it’s a vital act of reclamation, uncovering truths that those in power would rather we forget. His research shines a spotlight on the corrupt foundations of modern medicine, exposing how historical figures like Louis Pasteur and Robert Koch shaped narratives that benefit entrenched interests rather than scientific rigor. In an era where these forgotten lessons are more relevant than ever, Mike’s work stands as a clarion call to question the dogma we’ve been fed and to seek a deeper understanding of health and disease.

This interview builds on themes I’ve explored in my own writing, particularly in Virus Isolation, where Mark Gober examined how virology’s inability to properly isolate and purify viruses undermines its scientific legitimacy—a critique Mike amplifies with historical evidence of Pasteur’s flawed rabies experiments. His insights also resonate with Settling the Virus Debate, where Dr. Samantha Bailey discussed the contention over whether viruses are truly pathogenic or merely scapegoats for other causes. Mike’s research provides critical context, revealing how germ theory gained traction not through proof, but through political and financial maneuvering—insights that echo his analysis of Dr. Dulles’ challenge to Pasteur and the questionable statistics behind modern outbreaks like measles. By connecting these dots, Mike helps us see how the past informs today’s scientific disputes.

Furthermore, Mike’s work ties into The Virus Cult: A Religion Built On Fear, Not Science, where it’s shown that virology has become a belief system rooted in faith rather than evidence. His examination of antibody theory’s shaky origins and the nocebo effect’s role in disease symptoms—like those attributed to rabies—lays bare the cult-like adherence to unproven ideas that began over a century ago. This connects seamlessly to Virus, where Dr. Thomas Cowan explored how fear of invisible threats has been weaponized to control populations. Mike’s historical examples, from the germ duel to manipulated outbreak narratives, illustrate how this fear has been exploited since Pasteur’s time. Together, our works challenge readers to rethink the medical establishment’s foundations—and Mike’s contributions, as you’ll see in this interview, are indispensable to that mission.

For reference, here’s a link to our first interview, where we began this journey into the hidden history of virology and germ theory.

With thanks to Mike Stone.



1. Mike, welcome back! Let’s dive straight in. In your research, you’ve examined the work of Louis Pasteur extensively. What do you consider to be the most significant problems with his rabies experiments that modern science has overlooked?

Hi, thanks for having me back. There are many problems with Pasteur’s rabies experiments, but the biggest one is simple: he never actually had a “rabies” microbe to work with.

To establish a scientific cause-and-effect relationship, your presumed cause—the independent variable—must exist and be present before any experiments take place. In other words, to prove that X causes Y, X must exist first and be properly identified.

But Pasteur merely assumed that a causal microbe existed in the saliva of “rabid” animals. When injecting this saliva into healthy animals failed to reproduce the disease, he switched to injecting nervous system tissue directly into the brains of test animals. Drilling holes into skulls and injecting brain matter is not a natural route of exposure, nor does it reflect anything observed in nature. The disease that resulted was clearly an effect of these torturous experimental conditions—not proof of a microbe that Pasteur had identified and manipulated.

Beyond this, Pasteur’s work was riddled with other issues, which I detail in my articles The Germ Hypothesis Part One: Pasteur’s Problems, Pasteur’s Method of Treating Hydrophobia, and Lost & Found. You’ll find extensive evidence of his pseudoscientific methods and outright fraud. Another excellent resource is Gerald Geison’s The Private Science of Louis Pasteur, which exposes many of these same flaws in detail.

2. You’ve written about Dr. Dulles’ critique of Pasteur’s work. What aspects of this historical critique do you find most relevant to today’s understanding of infectious disease?

Dr. Dulles’ excellent critique of Pasteur’s work remains highly relevant today. He not only challenged Pasteur’s claim of having created a successful rabies vaccine but also questioned whether “rabies” was even a distinct disease. After sixteen years of investigation, Dr. Dulles concluded that there was no such specific malady as hydrophobia—asserting that Pasteur, a chemist with no diagnostic expertise, had simply invented it. Pasteur relied on the presence of certain granules in brain tissue as a marker for rabies, despite the fact that these granules were already known to exist in non-rabid animals. Lacking the qualifications to properly diagnose rabies, Pasteur arbitrarily labeled his experimental disease as the real one.

A few years before his critique of Pasteur, Dr. Dulles wrote Disorders Mistaken for Hydrophobia, a 44-page booklet in which he identified at least thirty conditions that could mimic rabies. He argued that many rabies diagnoses were actually misdiagnoses of other disorders. Esteemed neurologist Dr. Edward Charles Spitzka supported this view, stating:

“The resemblance between the spurious hydrophobia and the so-called real affection is so great that I cannot criticise anyone for believing, with Dulles, that the existence of a genuine hydrophobia in man is not proven.”

Dr. Dulles also pointed out that Pasteur’s treatment may have been no more effective than unproven folk remedies like the “mad stone,” which at least had the benefit of being harmless. By contrast, Pasteur’s vaccine was linked to deaths—including cases where the vaccine itself seemed to create the very disease it was meant to prevent. Yet these vaccine-induced deaths were often attributed to rabies itself, further fueling public fear and reinforcing the perceived necessity of vaccination.

This cycle—where the supposed cure perpetuates the disease—remains a familiar pattern today. Pasteur, a chemist with no medical training, capitalized on folklore and fear to promote a toxic vaccine for a condition with non-specific symptoms and multiple possible causes. The result? A self-sustaining illusion, where the treatment generates the very illness it claims to eradicate.

3. Your article “Measles Magic” discusses how the CDC uses alerts to healthcare workers to identify measles cases. Could you explain how these alerts might influence disease statistics?

Mark Twain famously said, “There are lies, damned lies, and statistics.” Few organizations manipulate statistics to support a fear-based agenda better than the CDC.

What many fail to realize is that, since measles is considered eliminated in the U.S., doctors rarely suspect it when children present with the same nonspecific symptoms associated with the disease. There are several reasons for this:

  1. Measles is not classified as an endemic disease in the U.S.
  2. Vaccinated children are assumed to be “immune,” so measles is not considered.
  3. Many U.S. doctors have never seen a case diagnosed as measles and wouldn’t recognize it. Even if they suspect it, clinical diagnosis alone is deemed unreliable.

Instead, physicians diagnose the patient with other conditions associated with a fever and a maculopapular rash, such as:

  • Rubella, Scarlet fever, Roseola infantum, Kawasaki disease, Erythema infectiosum (Fifth Disease), “Coxsackievirus,” “Echovirus,” Epstein-Barr “virus,” HIV, Pharyngoconjunctival fever, Influenza
  • Dengue, Rocky Mountain spotted fever, Zika “virus”
  • Dermatologic manifestations of “Viral” hemorrhagic fevers
  • Toxic Shock Syndrome, cutaneous syphilis
  • Drug reactions (e.g., antibiotics, contact dermatitis)

However, when the CDC decides that measles cases need to be found, they send out alerts instructing doctors to be on the lookout. Measles is then suspected based on vague criteria:

  1. Fever and rash with symptoms like cough, runny nose, or red eyes
  2. Recent travel abroad or contact with a traveler
  3. Unvaccinated status

Only those who meet this pre-screening are tested, and unreliable lab tests are then used to confirm cases. Vaccinated individuals with identical symptoms are typically dismissed due to “presumed immunity” unless they are epidemiologically linked to a confirmed case. Even then, testing is often discouraged as results are deemed unreliable.

This system makes it easy for the CDC to manufacture and steer a measles outbreak narrative. By directing attention toward unvaccinated individuals while excluding vaccinated ones from diagnosis, they ensure that any outbreak appears to originate from the unvaccinated. In reality, the same symptoms in a vaccinated child might simply be called something else.

Thus, the so-called “measles resurgence” is less about an actual increase in disease and more about selective testing, shifting diagnostic labels, and statistical manipulation.
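
To make the selection-bias argument concrete, here is a minimal sketch in Python. Every number in it is hypothetical and assumed purely for illustration: both groups are given the same underlying rate of fever-and-rash illness, but only unvaccinated children who meet the alert criteria get tested, so every “confirmed” case lands in the unvaccinated column.

```python
import random

random.seed(0)

# Hypothetical population; all rates below are invented for illustration only.
POPULATION = 10_000
VACCINATION_RATE = 0.90      # assumed share of vaccinated children
RASH_FEVER_RATE = 0.02       # assumed, identical in both groups
TEST_POSITIVE_RATE = 0.30    # assumed share of tested children called "confirmed"

confirmed = {"vaccinated": 0, "unvaccinated": 0}
other_labels = {"vaccinated": 0, "unvaccinated": 0}

for _ in range(POPULATION):
    vaccinated = random.random() < VACCINATION_RATE
    group = "vaccinated" if vaccinated else "unvaccinated"

    # Does the child present with the nonspecific picture (fever plus maculopapular rash)?
    if random.random() >= RASH_FEVER_RATE:
        continue

    # Alert-driven pre-screening: only unvaccinated children are suspected and tested.
    if not vaccinated and random.random() < TEST_POSITIVE_RATE:
        confirmed[group] += 1
    else:
        # Same symptoms, but filed under another label (roseola, drug reaction, etc.).
        other_labels[group] += 1

print("Confirmed 'measles':", confirmed)
print("Same symptoms, other labels:", other_labels)
```

Under these made-up numbers, the confirmed column is 100% unvaccinated by construction; nothing about the children’s actual health differs between the two groups.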

4. You’ve challenged the idea that viral genomes provide proof of virus existence. Why do you believe genomic sequencing is an inadequate method for demonstrating the existence of viruses?

Setting aside the equally flawed foundations of the DNA paradigm for a moment, let’s address this hypothetically. If someone claims to have a “viral” genome, they must first have a “virus” in hand to obtain its genetic material before sequencing and assembling the genome. For example, if I claim to have sequenced the genome of a dog, I would first need to obtain a sample from an actual dog. I would also need live dogs available to validate whether the genome is accurate and has any real biological significance. Without first securing the entity that is being sequenced, the entire premise collapses.

Virologists have never obtained a genome from fully purified and isolated “viral” particles directly from a sick host—without culturing—confirmed via electron microscopy, characterized, and then proven “pathogenic” through scientific evidence that satisfies Koch’s Postulates. My discussion with ChatGPT revealed that the first so-called “viral” genome, from bacteriophage Φ-X174, was not derived from purified and isolated “viral” particles. This means the genetic material used for sequencing may not have belonged to the bacteriophage at all or could have been an amalgamation of genetic fragments from various sources in the unpurified sample.

(For the full exchange, see my article A Friendly Chat About “Viral” Genomes on the ViroLIEgy Newsletter.)

When asked how the first “viral” genome was validated, the AI initially suggested comparative genomics—where a new sequence is compared to existing phage genomes and genetic databases. However, this explanation fails for the first-ever “viral” genome, as there would have been no prior “viral” genomes available for comparison. The AI ultimately conceded that researchers simply made an educated guess that the genetic material was “viral” in origin, acknowledging significant uncertainty in this attribution.

This is a critical issue: if no reference genome was ever established from fully purified and isolated “viral” particles, then all subsequent “viral” genomes—built upon that original flawed reference—inherit the same uncertainties and inaccuracies. This problem is compounded by the technological limitations of the time, further undermining the validity of these genomic claims.

I previously analyzed the CDC’s protocol for constructing “viral” genomes, highlighting numerous ways contamination and other factors can affect the final product. Technological limitations further complicate the process. These so-called “viral” genomes are assembled from unpurified cell culture samples, which contain genetic material from multiple sources—including the host, the cultured cells, and fetal bovine serum. As a result, the true origin of the genetic material remains unknown.
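
As a rough illustration of why provenance matters, here is a toy sketch in Python. It is not a real assembler, and the fragments are invented: reads from two different made-up “sources” are pooled without labels, and a simple greedy overlap merge joins them into a single contig with nothing in the output recording where any stretch of it came from.

```python
def overlap(a, b, min_len=4):
    """Length of the longest suffix of `a` matching a prefix of `b` (at least min_len)."""
    for n in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:n]):
            return n
    return 0


def greedy_assemble(reads, min_len=4):
    """Repeatedly merge the pair of fragments with the largest overlap until none remain."""
    contigs = list(reads)
    while True:
        best_len, best_i, best_j = 0, None, None
        for i, a in enumerate(contigs):
            for j, b in enumerate(contigs):
                if i != j:
                    olen = overlap(a, b, min_len)
                    if olen > best_len:
                        best_len, best_i, best_j = olen, i, j
        if best_len == 0:
            return contigs
        merged = contigs[best_i] + contigs[best_j][best_len:]
        contigs = [c for k, c in enumerate(contigs) if k not in (best_i, best_j)] + [merged]


# Invented fragments: the first two come from one toy "source", the last two from another.
pooled_reads = ["AAAACCCC", "CCCCGGGG", "GGGGTTTT", "TTTTAAAA"]

print(greedy_assemble(pooled_reads))
# -> ['AAAACCCCGGGGTTTTAAAA']: one contig spanning both sources, with no record of origin.
```

The toy output is a single contig built from both pools of fragments, and nothing in it records which read came from where; real assemblies are vastly more complex, but the labeling problem described here is the same in kind.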

Even the WHO has warned against obtaining genomes from passaging in cell culture, acknowledging that it “can result in artificial mutations in the sequences, which were not present in the original clinical sample.” They further stated that this “can have major implications for subsequent analyses” and explicitly advised that using cell culture “solely for the purpose of amplifying virus genetic material for SARS-CoV-2 sequencing should therefore be avoided.”

Virologists have reached a point where their traditional methods of indirect evidence—cell cultures, electron microscopy images, “antibody” testing, and animal experiments—have failed to convincingly demonstrate the existence of “pathogenic viruses.” For decades, they have relied on these indirect methods to mislead the public. Now, “viral” genomes are simply the latest trick used to sell the invisible bogeyman that keeps people lining up for booster shots.

However, sequences of A, C, T, and G in a computer database will never replace the need for direct empirical evidence derived from the scientific method—evidence that satisfies Koch’s Postulates. As virologist Charles Calisher warned at the rise of molecular virology:

“Although all that is terrific, says Calisher, a string of DNA letters in a data bank tells little or nothing about how a virus multiplies, which animals carry it, how it makes people sick, or whether antibodies to other viruses might protect against it. Just studying sequences, Calisher says, is ‘like trying to say whether somebody has bad breath by looking at his fingerprints.’”

5. In your work, you’ve discussed experiments by Dr. John B. Fraser and Dr. Thomas Powell who exposed themselves to bacteria without developing illness. How do these historical experiments challenge current germ theory?

Experiments conducted by Drs. John B. Fraser and Thomas Powell, where they deliberately exposed themselves to pure cultures of bacteria, provide compelling evidence against the germ “theory” of disease as proposed by Louis Pasteur. By following the logical principles set out by Robert Koch, these scientists demonstrated that even direct exposure to substantial amounts of bacteria—often regarded as the “deadliest” strains—did not result in illness. These experiments, which should have been pivotal in disproving the germ hypothesis, were unfortunately dismissed and largely forgotten by the scientific community. Their results, which fail to support the idea that bacteria are the sole cause of disease, reveal significant flaws in the germ “theory” framework. This oversight has contributed to the persistence of a pseudoscientific narrative that has been built on a foundation of unchallenged assumptions rather than rigorous empirical evidence. Rediscovering and critically reassessing these experiments can help us confront the unfounded underpinnings of germ “theory,” opening the door to a more nuanced understanding of disease causation.

6. You’ve examined the concept of fear as a driver of disease symptoms. Could you elaborate on how the “nocebo effect” might explain some cases attributed to viral infections?

In my article Fear is the Real Virus, I explored how the emotional aspect of illness is often overlooked in favor of identifying a physical cause—typically “viral” or bacterial—even when no physical culprit may exist. Many symptoms people experience could very well be psychological in nature, driven by the often-overlooked Nocebo Effect. This phenomenon occurs when a person’s belief that they will become ill leads to the manifestation of the very symptoms they fear.

A striking example of this was demonstrated by Alphonse Raymond Dochez in his study Studies in the Common Cold. In one case, a patient involved in a transmission experiment remained healthy for two days after receiving an injection of sterile solution. However, after a nurse mistakenly informed him that he had actually received a cold filtrate and had “failed” to contract a cold, he developed severe symptoms that night—sneezing, coughing, a sore throat, and nasal congestion. The next morning, upon learning that he had not been exposed to a cold filtrate, his symptoms rapidly subsided.

When applied to so-called “viral” diseases, this effect becomes even more relevant. If people hear mainstream media warnings about a new, deadly “virus” that can cause anything from no symptoms at all to those resembling allergies, the common cold, influenza, or pneumonia, they may start fixating on harmless sensations they previously ignored, such as a loss of taste and/or smell. Fear and stress can amplify these symptoms, while a positive test result can further aggravate coughs, fever, aches, and breathlessness. Seeking treatment at a hospital may escalate the emotional response, and in severe cases, the shock of a dire diagnosis—combined with invasive treatments—could even precipitate death by exacerbating heart conditions or affecting the respiratory system.

The mix of fear and belief creates a vicious cycle that can bring about disease entirely on its own—yet instead of recognizing this, the invisible “virus” is blamed, while the mainstream media fuels irrational hysteria over a nonexistent threat.

7. Your research into monoclonal antibodies suggests they may not be as specific as commonly believed. What implications does this have for diagnostic testing and treatments based on antibody technology?

For an “antibody” test result to be meaningful, it must demonstrate high specificity—the ability of an “antibody” to recognize and bind exclusively to a single, distinct antigen (a substance classified as foreign, such as toxins, proteins, peptides, or polysaccharides). Many people take for granted that these tests are highly precise, but the reality is far more complex. While marketed as identifying particular “antibodies,” a well-documented issue known as cross-reactivity occurs when “antibodies” mistakenly bind to unintended antigens.

A clear example of this problem comes from the CDC, which acknowledged that no FDA EUA-approved “SARS-CoV-2 antibody” test has been definitively proven to detect only “antibodies” specific to “SARS-CoV-2” antigens. Studies have demonstrated that what are labeled as “SARS-CoV-2 antibodies” can bind to a diverse array of substances, including but not limited to:

  • “Viruses”: Various “coronaviruses,” Herpes, Influenza, Human “papillomavirus” (HPV), Respiratory syncytial “virus” (RSV), “Rhinoviruses,” “Adenoviruses,” “Poliovirus,” Mumps, Measles, Ebola, “HIV,” Epstein-Barr “virus,” “Cytomegalovirus” (CMV)
  • Bacteria: Pneumococcal bacteria, E. faecalis, E. coli, Borrelia burgdorferi (the bacterium linked to Lyme disease)
  • Parasites: Plasmodium species (Malaria), Schistosomes
  • Vaccines: DTaP, BCG, MMR
  • Foods: Milk, Peas, Soybeans, Lentils, Wheat, Roasted almonds, Cashews, Peanuts, Broccoli, Pork, Rice, Pineapple

This calls into question the reliability of these tests, as cross-reactivity with “antibodies” from other supposed “infections” or completely unrelated substances can lead to misleading results.

Test specificity is critical because, without it, an “antibody” test cannot determine with certainty whether a person was exposed to “SARS-CoV-2” or an entirely different antigen. If an “antibody” is capable of binding to multiple unrelated proteins, then a positive result does not confirm prior “infection” with “SARS-CoV-2”—or with any particular “virus.” Without strong specificity, these tests cannot be relied upon for either personal diagnosis or public health decision-making.
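
To put rough numbers on why specificity matters, here is a minimal worked example in Python using Bayes’ rule. The sensitivity, specificity, and prevalence figures are hypothetical, chosen only to show how quickly positive results lose meaning once specificity is imperfect and true prevalence is low.

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability that a positive result reflects the condition tested for (Bayes' rule)."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Hypothetical figures, for illustration only: 98% sensitivity, 2% prevalence.
for specificity in (0.99, 0.95, 0.90):
    ppv = positive_predictive_value(sensitivity=0.98, specificity=specificity, prevalence=0.02)
    print(f"specificity {specificity:.0%}: {ppv:.0%} of positives reflect the target, "
          f"{1 - ppv:.0%} do not")
```

In this example, at 95% specificity roughly seven in ten positives are false; and that arithmetic still takes for granted that the test targets something genuinely specific in the first place, which is the deeper issue raised above.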

Furthermore, this issue goes beyond diagnostics and calls into question the claim that vaccines generate highly specific “immune” responses. If so-called “SARS-CoV-2 antibodies” interact with numerous different antigens, how can it be asserted that vaccines induce “immunity” to a particular “virus?” If these same “antibodies” arise in response to a wide variety of antigens, then distinguishing a vaccine-induced response from one triggered by natural exposure to other substances or supposed “pathogens” becomes impossible.

This fundamentally undermines the claims surrounding these injections—raising doubts about whether any measured “antibody” response is truly protective or simply a generic reaction that has been misinterpreted. Since no genuinely specific “antibodies” exist, tests based on “antibody detection” and claims of “immunity” derived from them are inherently flawed. Consequently, these tests lack scientific reliability and should not be used for individual diagnosis, research, or public health policy. Given the inevitability of cross-reactivity and false results, they cannot serve as a valid method to “confirm” past “infection,” detect a supposed “virus,” or establish so-called “immunity” from vaccination.

8. You’ve challenged the historical development of antibody theory. What do you see as the key flaws in how this concept evolved from Behring and Ehrlich’s early work?

Just like virology, the flaws in “antibody theory” originated with the creation of an assumed entity that was never directly observed. In 1890, Emil von Behring hypothesized the existence of “antibodies” based on artificial, lab-induced effects rather than any verifiable, naturally occurring substance. His experiments involved injecting animals with various “disinfectant” chemicals, such as iodine trichloride and zinc chloride, which he used to neutralize toxins in bacterial cultures. By gradually increasing the doses of these substances, Behring created the illusion of “immunity,” yet his results merely reflected a process of habituation to toxins rather than the action of a distinct biological mechanism.

Habituation to toxins, a well-documented phenomenon, involves an organism’s ability to adapt to increasing levels of a harmful substance, reducing its physiological response over time. For example, animals exposed to low levels of a toxin, such as arsenic or nicotine, may develop a tolerance, allowing them to withstand higher doses without showing the same harmful effects. Similarly, Behring’s process involved animals becoming accustomed to the increasing doses of iodine trichloride and zinc chloride, which ultimately rendered the animals less sensitive to the toxins but did not suggest any “immune” protection. Even Behring admitted that this so-called “immunity” was not permanent and that unfavorable conditions could leave the animals just as susceptible to disease as if they had never been “immunized.”

Behring’s lab partner, Shibasaburo Kitasato, leaned toward the idea that this process was simply toxin habituation, but Behring rejected this interpretation. Instead, he insisted on the existence of an unseen, protective substance in the blood—what would later be called “antibodies.” Paul Ehrlich took this idea further, but even in 1898, he acknowledged that Behring initially conceptualized these substances as forces rather than actual chemical entities. Ehrlich, however, was determined to redefine them as discrete molecules, despite lacking direct evidence for their existence. He coined the term “antibody” in 1891 and proceeded to construct an elaborate chemical explanation for their supposed role in “immunity.” However, this reasoning begged the question—it presupposed that “antibodies” were real, chemically distinct substances before proving their existence. This assumption was heavily contested by some of his contemporaries.

In 1900, Ehrlich formalized his vision with a theoretical framework that included side-chains, haptophore and toxophore groups, “antigen-antibody” binding, the lock-and-key mechanism, and even tentacle-like structures that supposedly aided in digestion. Rather than adhering to the principle of parsimony, he expanded upon Behring’s vague concept and created an intricate, imaginative model for “immunity,” complete with illustrative diagrams. The law of parsimony, also known as Occam’s razor, dictates that when presented with competing hypotheses that explain the same phenomenon, the simplest one—requiring the fewest assumptions—is usually the best. In Ehrlich’s case, instead of adhering to this principle and proposing a simpler explanation for “immunity,” he chose to invent a complex, speculative model involving unobservable entities. These diagrams depicted invisible processes that had never been directly observed. Critics at the time argued that Ehrlich’s depictions were fictional and fundamentally misleading, warning that they should be discarded because they did not faithfully represent biological reality.

As noted by Cambrosio et al. in 1993, Ehrlich’s most controversial contribution to immunology was his establishment of a “domain of invisible specimen behavior.” He invented explanatory models that were based not on direct observation but on artificial laboratory manipulations and speculative reasoning. Despite these weaknesses, his ideas gained widespread acceptance, shaping the foundation of modern immunology. Today, “antibody theory” remains central to the vaccine industry, propping up the idea that artificial “immunization” confers protection by stimulating the production of these assumed entities.

The implications of this flawed foundation persist in contemporary research. Techniques such as “monoclonal antibody” production, ELISA assays, and X-ray diffraction studies are often presented as definitive proof of “antibodies” as discrete molecular entities. However, these methods still rely on indirect inference rather than direct, unambiguous isolation and characterization of “antibodies” in their natural state. As modern immunology is simply reinforcing Ehrlich’s unproven assumptions rather than providing independent verification, the entire paradigm remains built on shaky ground.

Much like Behring’s original toxin-neutralization experiments, today’s “immunization” models rely on artificially induced effects in lab settings. As the presence of “antibodies” is inferred from reactions that are themselves the result of manipulated conditions, modern immunology is still trapped in the same conceptual error established by Behring over a century ago.

9. You’ve discussed how Köhler and Milstein’s hybridoma technology shaped our understanding of antibodies. Why do you believe this technique is problematic for establishing antibody existence?

The hybridoma technology developed by Köhler and Milstein in 1975 perpetuates the problem of establishing the existence of “antibodies” as discrete, naturally occurring entities by relying on artificial, lab-created processes that are disconnected from nature. The technique fuses mouse cancer (myeloma) cells with spleen cells from mice that had been injected with sheep red blood cells. These fused cells are then grown in a HAT medium, which contains synthetic chemicals like hypoxanthine, aminopterin, and thymidine, along with fetal cow serum, antibiotics, and other chemical additives in a cell culture. From this, it is claimed that one can create specific “antibodies” of a single type. However, there are significant limitations in using such an artificial process to establish the natural existence of “antibodies” in biological systems.

First, hybridoma technology essentially creates “antibodies” artificially by selecting for cells that produce a specific response in an experimental setting. These “antibodies” are not isolated from the natural “immune” response in their original form; rather, they are products of laboratory manipulation. The hybridomas themselves are created under conditions that are far removed from any so-called naturally occurring “immune” response, meaning the “antibodies” they produce are, in essence, an artifact of the experimental process.

Second, the method assumes that these “monoclonal antibodies” are representative of the “antibodies” that might exist in vivo. However, in nature, “antibodies,” if they existed, would be part of a complex and dynamic “immune” system, influenced by numerous variables such as antigen exposure and environmental factors. The “monoclonal antibodies” produced by hybridomas would not capture the full diversity or functional complexity of naturally occurring “antibodies.” This leaves us with an incomplete understanding of what “antibodies” actually are and how they function in real biological contexts. In this sense, hybridoma technology doesn’t establish “antibody” existence in a natural, observable context—it instead reinforces the assumption that these “antibodies” exist in the way they are presented in the lab, without sufficient evidence that these entities are truly representative of naturally occurring biological processes.

Additionally, the hybridoma technology itself depends on an artificial selection mechanism that is disconnected from the complexities of the supposed “immune system’s” actual response. The “antibodies” produced are not “discovered” by observing any natural “immune” responses, but instead are manufactured by isolating a single “immune” cell line and cloning it, reinforcing an artificial set-up that does not reflect reality as observed in nature.

In short, hybridoma technology leads to the creation of what are claimed to be “antibodies” in an artificial, controlled setting, and these “antibodies” would not be faithful representations of those that would hypothetically be found in a real biological context. Relying on this method reinforces assumptions about “antibody” existence that have not been directly verified by natural observation or rigorous scientific validation. This adds another layer of uncertainty to the already fragile conceptualization of “antibodies” as distinct biological entities.

10. The germ duel between Dr. Fraser and Dr. Hill presents an interesting historical anecdote. What do you think this episode reveals about scientific debate in the early days of germ theory?

This debate revealed the stark contrast between those who clung to their beliefs without question and those willing to put their convictions to the test. Dr. John B. Fraser, a staunch critic of germ “theory,” took the bold step of experimenting on himself and his family to prove that bacteria do not cause disease. Across more than 150 experiments conducted over a five-year period, he deliberately exposed himself, volunteers, and his loved ones to pure cultures of bacteria considered the deadliest—diphtheria, pneumonia, meningitis, typhoid, and tuberculosis. Yet, none of them developed the diseases associated with these microbes.

In a May 1919 article published in Physical Culture, Fraser issued a public challenge to the State Board of Health in the United States and the Provincial Board of Health in Canada, calling for an open, controlled experiment in which germs would be introduced into air, food, water, or milk to test their supposed disease-causing ability. He put forward a $1,000 wager to any physician who could conclusively demonstrate that germs cause disease. While this challenge was ignored, Dr. H. W. Hill, the executive secretary of the Minnesota Public Health Association, took it upon himself to counter Fraser. He proposed that both men be exposed to bacterial cultures and test their respective approaches—Dr. Hill relying on anti-toxins, while Dr. Fraser placed his trust in fresh air, sunlight, exercise, and proper nutrition.

However, when legal authorities caught wind of this proposed “germ duel,” assistant prosecuting attorney Harry Peterson warned that if either doctor died as a result, the survivor would be charged with murder. This threat was an obvious attempt to prevent an event that could undermine the growing acceptance of germ “theory.” At the time of Hill’s challenge, Fraser was vacationing in the North Woods and had not yet responded. In his absence, Dr. H. A. Zettel, a fellow skeptic of germ “theory” from Minnesota, volunteered to take Fraser’s place, eager to uphold the challenge.

Hill initially insisted that the duel was not illegal, provided health authorities approved. He then demanded that Fraser or Zettel sign a legal waiver clearing him of responsibility and settle matters with their insurance companies. In response, Zettel proposed that, should one of them perish, the other would serve as a pallbearer at his funeral. The threat of murder charges briefly made Hill withdraw, but he later returned, modifying the challenge so that both men would inoculate themselves with bacteria, avoiding potential homicide accusations. Authorities, now satisfied with this arrangement, agreed not to interfere.

As the legality of the duel was settled, Hill began wavering on ethical grounds. While waiting for Fraser’s official response, he urged Zettel to step aside. Fraser, however, had already written to Zettel in July, affirming that he did not need a stand-in but welcomed Zettel’s involvement as his “second.” Meanwhile, Hill took to Minnesota newspapers, ridiculing Fraser and accusing him of bluffing.

A crucial disagreement emerged over the method of exposure. Hill insisted that the bacteria be injected directly into the body—an unnatural route of exposure that Fraser rejected outright. Fraser, holding firm to his original challenge, maintained that germs should be introduced naturally—through air, food, or water—as that was the basis of germ “theory.” Hill, however, refused this condition, dismissing it with the excuse that if every germ entering the body naturally caused disease, then everyone would constantly be ill. This fundamental impasse rendered the duel impossible. By September, Fraser officially called it off.

Despite his refusal to accept a truly scientific test of germ “theory,” Hill declared victory, leveraging the situation to claim that Fraser had backed down. Yet, in reality, it was Hill who had refused to engage in a valid experiment that mirrored real-world conditions. The debate, and its eventual collapse, underscored the unwillingness of germ “theory” proponents to subject their beliefs to rigorous scrutiny.

11. You’ve suggested that many disease outbreaks coincidentally appear when vaccine hesitancy increases. Could you share examples of this pattern and why you find it significant?

The mainstream media frequently preys on fear, often blaming “growing vaccine hesitancy” for supposed outbreaks of “vaccine-preventable” diseases like measles. This narrative fuels outrage against the unvaccinated and drives support for increased vaccine uptake.

For example, in 2014, a measles “outbreak” at Disneyland capped a year with the highest number of reported measles cases in two decades. This coincided with rising trends in nonmedical vaccine exemptions in California and beyond, leading to sensational headlines, scapegoating of the unvaccinated, and renewed pressure to increase vaccination rates.

That same year, an Amish community in Ohio experienced a reported measles outbreak. The story goes that a traveler returned ill, was misdiagnosed with dengue fever, and unknowingly spread measles among friends and neighbors—many of whom had declined the vaccine due to concerns over adverse effects. This event was then used to justify a vaccination campaign targeting the Amish.

A similar pattern emerged in 2017 when concerns arose over declining MMR vaccination rates in Minnesota’s Somali community. The rate had dropped from 92% in 2004 to 42% in 2014 due to fears of an unusually high prevalence of autism among Somali children. Unsurprisingly, a measles “outbreak” was then declared—primarily among unvaccinated Somali children—further reinforcing the media’s vaccine hesitancy narrative.

As vaccine skepticism grew following the “COVID-19” response, we’ve seen this script play out repeatedly. Most recently, a measles “outbreak” in Texas coincidentally emerged just as Robert F. Kennedy Jr., a well-known “anti-vaxxer,” was confirmed as President Trump’s health secretary.

These events follow a predictable pattern: declare an outbreak, blame vaccine hesitancy, and use it to push vaccination campaigns. But are these outbreaks occurring as reported, or are they manufactured through manipulated statistics, questionable diagnostic criteria, and unreliable testing?

From what I have uncovered, these outbreaks are often constructed through statistical manipulation and expansive case definitions. The WHO and CDC classify measles cases using PCR and “antibody” tests—methods that produce false results—along with non-specific symptoms that overlap with other common illnesses. Historically, measles outbreaks have followed cyclical patterns regardless of vaccination rates, yet the media selectively ignores this context. Additionally, shifting diagnostic criteria have made it easier to classify mild rashes and fevers as measles, inflating case counts to justify vaccine campaigns.

Beyond the media’s role, financial incentives drive this narrative. Public health agencies and pharmaceutical companies benefit from fear-based reporting, as it increases vaccine uptake and secures funding for expanded immunization programs. These manufactured crises serve their interests, ensuring that outbreaks—real or exaggerated—keep the public in a state of compliance.

This pattern is not limited to measles. Time and again, we’ve seen major outbreaks conveniently emerge when new vaccines are being tested or prepared for launch. Ebola outbreaks preceded vaccine trials and rollouts. Zika hysteria flared up just in time to justify rushed vaccine development. Dengue outbreaks have aligned suspiciously with the testing and deployment of new dengue vaccines. The same formula repeats itself: introduce fear, push for emergency measures, then roll out a vaccine as the solution.

Given the vague symptoms of these diseases, selective reporting by CDC-affiliated doctors, and reliance on flawed molecular testing, it’s easy to see how outbreaks can be conjured up on demand to generate headlines and enforce compliance.

12. In your research, you’ve discussed how rabies may be a “disease of imagination” rather than a viral infection. What evidence supports this perspective?

There is significant historical evidence that rabies, or “hydrophobia,” was widely regarded as a fear-induced condition rather than a “viral” disease. During Pasteur’s time, many physicians recognized it as a nervous disorder triggered by fright. In 1888, Dr. J.M. Crawford called hydrophobia a “mythical disease” and an “old established superstition,” noting that cases primarily arose in children traumatized by dog attacks, not from any “infectious” agent. He explained that frightened adults could even be “cured” through placebo treatments, such as the so-called Mad Stone, reinforcing the idea that the disease was psychological in nature.

Criticism of Pasteur’s rabies claims continued for decades. An 1890 article in The Fort Scott Lantern labeled his “hydrophobia fraud” as benefiting him more than humanity, with people being duped into his treatment as a “cure” to “relieve their mind of an absurd imagination.” In 1904, The Buffalo News quoted J. Otis Fellows asserting that there had never been a verified case of hydrophobia in America, and that the Philadelphia Academy of Sciences had an unclaimed $500 reward for proof of its existence. Even the American Kennel Club denied its occurrence. Mr. Fellows proclaimed that hydrophobia only existed in “the brain of callow reporters, the followers of Pasteur, a few kids and women.”

Pasteur himself even acknowledged that fear alone could produce rabies-like symptoms. In his 1995 book The Private Science of Louis Pasteur, historian Gerald Geison recounted the French chemist’s recognition of “false rabies” cases, where individuals developed symptoms merely from discussing the disease or recalling past encounters in which a dog had licked them. Similarly, Pasteur biographer Patrice Debré noted in his book Louis Pasteur that Pasteur “knew how to use popular fears and fantasies to impose a new medicine.”

Rather than an invisible “pathogen,” the driving force behind rabies has always been fear—fear that was manipulated and weaponized to justify Pasteur’s so-called “cure.”

13. You’ve written about the first viral genome ever sequenced (bacteriophage Φ-X174) and its limitations. How does this early genomic work influence our current approach to virology?

The fundamental issue with any “viral” genome is the absence of a verified reference genome from purified and isolated “viral” particles. This problem dates back to the first sequenced “viral” genome, bacteriophage Φ-X174. It was not derived from purified and isolated particles proven to be “pathogenic,” nor was there any previously proven “viral” genetic material against which to verify its accuracy. This lack of validation set a precedent, meaning any errors or uncertainties in the original reference genome carried forward, affecting all subsequent genomic work over the following decades.

Since no “virus” is ever directly isolated in pure form, “viral” genomes are assembled from mixed genetic material—including host RNA and contaminants—without certainty about their origin. Even scientists acknowledge that purified “viral” samples would provide more precise attribution, yet “viral” genomes are still accepted without them. This is circular reasoning at its core.

Without validated reference genomes, the entire field of virology rests on assumptions rather than scientific certainty. What is presented as definitive genomic evidence of “viruses” is, in reality, a product of flawed methodology and inference rather than direct, empirical validation.

14. You’ve examined how fear can “spread from person to person faster than a coronavirus.” How might this understanding change our approach to managing public health crises?

If people truly understood that fear spreads faster than any “virus,” we might see an end to the 24-hour mainstream media propaganda machine designed to fuel panic, drive testing, and push treatments. But fear is the foundation of their control. The news cycle exists to keep people in a constant state of alarm, making them easier to manipulate. Those in power know this—and they’ve weaponized it against us. Waking up the majority to these fear-based tactics would change everything and officially reset the board.

15. What are you currently focused on in your research, and how can readers stay connected with your work and future publications?

I’ve been writing a book for the past year. My focus over that time has been on the early years of germ “theory,” particularly the work of Louis Pasteur and Robert Koch. I’m currently wrapping up my research on Pasteur and will soon move on to Koch’s work. After that, I’ll shift my focus to the early years of virology, exploring the special interests that helped virology gain a foothold, as well as the indirect methods used to claim “pathogenic viruses” exist—such as electron microscopy (EM). As I tend to bounce around in my research, I’ve already covered some of these topics, including in my recent article “Virus-like” Particles.

Readers can stay connected to my work and research through my websites:

And through social media:

Thank you for your continued interest and support!

ViroLIEgy Newsletter | Mike Stone | Substack

I appreciate you being here.

If you’ve found the content interesting, useful, and maybe even helpful, please consider supporting it through a small paid subscription. While everything here is free, your paid subscription is important, as it helps cover some of the operational costs and supports the continuation of this independent research and journalism work. It also helps keep the content free for those who cannot afford to pay.

Please make full use of the Free Libraries.

Unbekoming Interview Library: Great interviews across a spectrum of important topics.

Unbekoming Book Summary Library: Concise summaries of important books.

Stories

I’m always in search of good stories, people with valuable expertise and helpful books. Please don’t hesitate to get in touch at unbekoming@outlook.com

For COVID vaccine injury

Consider the FLCCC Post-Vaccine Treatment as a resource.

Baseline Human Health

Watch and share this profound 21-minute video to understand and appreciate what health looks like without vaccination.

About the Author(s)


Mike Stone

1 Response


gf7777

Isolating Santa: Clausology Meets Virology

On a frosty December evening, Dr. Holly Jingle and her team of Santa Clausologists at the North Pole Research Institute embarked on a groundbreaking mission to resolve the mystery of Santa Claus’s existence. Inspired by methods used in virology, they began their investigation by examining indirect evidence of Santa’s presence. Much like virologists detect symptoms of a viral infection—such as sneezing, coughing, or fever—the Clausologists identified “environmental symptoms” linked to Santa’s activity. These included the distant sound of jingling bells, the unmistakable “ho ho ho” echoing through the night, and the faint rustling of a sleigh overhead. These sensory phenomena served as the first indicators of Clausological significance.

To deepen their investigation, the team decided to repurpose the guest house at the North Pole Research Institute as their “Santa simulation chamber,” creating a controlled environment to test their hypotheses. Stockings were hung by the chimney, a plate of freshly baked cookies was set out beside a glass of milk, and a decorated Christmas tree stood as the centerpiece. By morning, intriguing results had emerged—the milk and cookies had vanished, the stockings were filled with gifts, and traces such as cookie crumbs, soot, and even a strand of snowy white hair were found. The Clausologists treated these materials as analogous to patient-derived samples collected in virology—key evidence for further analysis.

Using advanced methods akin to genetic sequencing, the team pieced together a “Santa Profile” from the collected traces. By analyzing the cookie crumbs, soot particles, and white hair, they reconstructed a representation of Santa’s activity. In virology, this step mirrors the decoding of viral genetic material, where fragments are assembled into a cohesive genome to identify and characterize a virus.

Building on their Santa Profile, the Clausologists employed computational modeling to hypothesize Santa’s remarkable abilities—rapid gift distribution, sleigh propulsion, and communication with reindeer. This mirrored virological models, which predict viral behavior, transmission patterns, and potential mutations. Critics raised questions about the validity of such inferences without direct observation, but Clausologists emphasized the rigor of their methods and the parallels to virological research, where modeling often fills gaps in direct evidence.

Seeking additional confirmation, the team set up a high-resolution camera outside the guest house at the North Pole Research Institute. Overnight, the camera captured an image of a rotund figure clad in red, with a white beard and a jovial face. Though consistent with cultural depictions of Santa, the image lacked definitive proof of his activity, raising the possibility of an impersonator. Clausologists likened this to electron microscopy in virology, which provides visual evidence of viruses, though interpretation is often required to confirm their identity.

The culminating achievement was the development of a diagnostic tool: the “Santa Detection PCR Test.” This test identified markers derived from the Santa Profile in environmental samples. Households worldwide eagerly submitted cookie crumbs, soot, and other materials for testing. A surprising number of samples tested positive for Santa’s presence, much like how virologists use PCR tests to detect specific viral genetic sequences. Clausologists argued that the widespread detection of these markers strongly supported the evidence for Santa’s existence.

At the International Congress of Clausology, Dr. Jingle and her team presented their findings, sparking animated debate. Critics demanded direct observation of Santa Claus himself, while supporters lauded the Clausologists’ inventive application of scientific principles to a seemingly mythical question. Dr. Jingle maintained that their methodology—encompassing sensory observation, environmental sampling, profiling, modeling, imaging, and diagnostic testing—constituted a robust, reproducible framework for investigating elusive phenomena.

Though debates persisted, the Clausologists’ work captured global imagination, offering not only a playful exploration of Santa Claus but also a reflection on the creativity and ingenuity required in “scientific” inquiry.


Support ViroLIEgy

If you’d like to support ViroLIEgy.com, please use either the link or the QR code. Your donation is greatly appreciated! Every contribution helps keep the site running and allows us to continue questioning the narrative with logic and critical thinking. Thank you for your support!

Donate via PayPal
PayPal Donation QR Code