“The genome sequence of this virus, as well as its termini, were determined and confirmed by reverse-transcription PCR (RT–PCR)10 and 5′/3′ rapid amplification of cDNA ends (RACE), respectively.”
https://www.nature.com/articles/s41586-020-2008-3
The above section is taken from the original study claiming the creation of the “SARS-COV-2” genome. As can be seen, the sequence was determined and confirmed by reverse-transcription polymerase chain reaction (RT-PCR), while the termini were confirmed by 5′/3′ RACE. I have gone over PCR quite a few times throughout this Testing Pandemic, as it is the engine that is driving this fraud. Without PCR, there would be no genome. Without the genome, there would be no PCR diagnostic test. Without the test, there would be no cases and thus no “pandemic.” Still, people somehow fail to see the inherent problems surrounding this DNA Xerox machine turned faulty diagnostic test being used to label someone positive for non-existent “viruses.” They fail to understand what PCR is and why it is not suitable as a diagnostic test. They fail to understand that this technique has various limitations and is prone to contamination.
To clear things up a bit, let’s get some background information on what exactly PCR is first:
“Polymerase chain reaction, or PCR, is a laboratory technique used to make multiple copies of a segment of DNA. PCR is very precise and can be used to amplify, or copy, a specific DNA target from a mixture of DNA molecules. First, two short DNA sequences called primers are designed to bind to the start and end of the DNA target. Then, to perform PCR, the DNA template that contains the target is added to a tube that contains primers, free nucleotides, and an enzyme called DNA polymerase, and the mixture is placed in a PCR machine. The PCR machine increases and decreases the temperature of the sample in automatic, programmed steps. Initially, the mixture is heated to denature, or separate, the double-stranded DNA template into single strands. The mixture is then cooled so that the primers anneal, or bind, to the DNA template. At this point, the DNA polymerase begins to synthesize new strands of DNA starting from the primers. Following synthesis and at the end of the first cycle, each double-stranded DNA molecule consists of one new and one old DNA strand. PCR then continues with additional cycles that repeat the aforementioned steps. The newly synthesized DNA segments serve as templates in later cycles, which allow the DNA target to be exponentially amplified millions of times.”
https://www.nature.com/scitable/definition/polymerase-chain-reaction-pcr-110/
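To see why PCR is often described as a copy machine, the doubling process from the quote above can be sketched as a short calculation. This is an illustration only: the function name, the efficiency parameter, and the perfect-doubling assumption are mine, not from the source.

```python
# Minimal sketch of PCR's exponential amplification, assuming each
# cycle multiplies the product by (1 + efficiency). An efficiency of
# 1.0 means pure doubling; real reactions fall short of this.
def amplify(initial_copies: int, cycles: int, efficiency: float = 1.0) -> float:
    """Approximate number of DNA copies after a given number of PCR cycles."""
    return initial_copies * (1 + efficiency) ** cycles

# A single starting molecule, copied for 30 cycles:
print(amplify(1, 30))        # over a billion copies at perfect efficiency
print(amplify(1, 30, 0.9))   # fewer copies when efficiency drops
```

At perfect efficiency, 30 cycles turn one molecule into more than a billion copies, which is exactly why even trace contamination entering the tube can end up dominating the reaction.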

As I stated before and as the above section should make perfectly clear, PCR is just an expensive DNA Xerox machine. That’s all it is. Nothing more, nothing less. It makes synthetic copies of DNA through numerous cycles using various chemicals. There are plenty of issues with the (mis)use of a glorified copy machine as a diagnostic test for a non-existent “virus” such as the lack of standardized Ct values and the prevalence conundrum. There are multiple factors which can contribute to variability and unreliability in PCR Ct Values and thus any diagnostic test results such as:
- The use of different specimen collection devices
- Specimen types
- Nucleic acid extraction methods
- Genomic targets
- Real-time PCR chemistries
- The variety of testing methods
- The lack of universally applicable Ct Values
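The last point in the list above can be illustrated with a toy calculation. The starting copy number, detection threshold, and efficiency values below are hypothetical; the point is only that the very same sample crosses the detection threshold at a different cycle number depending on reaction efficiency, one of the protocol-dependent factors listed above.

```python
import math

def ct_value(initial_copies: float, threshold_copies: float, efficiency: float) -> float:
    """Cycles needed for initial_copies to reach threshold_copies when
    each cycle multiplies the product by (1 + efficiency)."""
    return math.log(threshold_copies / initial_copies) / math.log(1 + efficiency)

# Same sample (100 starting copies, hypothetical detection threshold of
# 1e10 copies), run at three different amplification efficiencies:
for eff in (1.0, 0.9, 0.8):
    print(f"efficiency {eff:.1f}: Ct ≈ {ct_value(100, 1e10, eff):.1f}")
```

In this sketch the same sample reads out roughly five cycles apart between the best and worst efficiency, and a few cycles of Ct difference corresponds to an order-of-magnitude difference in apparent starting material, which is why Ct values from different protocols are not directly comparable.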

Essentially, all of these factors make PCR diagnostic test results utterly meaningless. However, beyond its limitations as a diagnostic tool, PCR also has some very concerning drawbacks as a laboratory technique, primarily related to contamination, as you will see over the course of the next few sources.
To begin with, highlights from this first source explain that while researchers try to mitigate it, contamination in PCR is unavoidable. With rapidly advancing technology such as NGS, the contamination issues have become even more pronounced. The author shares examples of how contamination crept into studies of “viral” and ancient bacterial pathogens, as well as into the whole-genome sequence of a cow. This is primarily due to the amplification process of PCR, which creates copies of DNA exponentially. The author calls for an emphasis on quality control and validation, stating that there needs to be better detection, as well as the setting of thresholds for filtering out contamination, in fields such as metagenomics:
Editors’ Pick: Contamination has always been the issue!
“In the middle of the 1980’s, I heard a highly profiled professor making a comment after a lecture about a brand new technique, the polymerase chain reaction (PCR). His comment was full of doubt about this novel technology, and the message was something like: “it [PCR] can never become a widely used diagnostic tool due to the unavoidable contamination”. However, the PCR revolutionized life sciences from medicine to conservation genetics, the inventor was awarded with the Nobel Prize, and many of us made careers using the very technique. Scientists, clinical diagnostics and forensic laboratories and others using PCR quickly learned to deal with contamination and build mechanisms to monitor for it. The contamination was there, but, it could be managed with the right laboratory environment, sample flow, and careful experimental design including proper sample handling and a set of controls.
For the past seven to eight years another new technology, second generation sequencing (SGS), also known as next generation sequencing (NGS) or massively parallel sequencing (MPS), is gaining more space in laboratories and scientific journals. This new wave of technologies already has had profound effects on human genomics, cancer biology, microbiology, ancient DNA studies and forensic genetics. Big data are fantastically interesting, will probably fundamentally change current views in biology, and are conceptualizing the basis of human diseases in a new way.
During the last couple of years the contamination worry has appeared again as it has with any molecular biology technique employing amplification. The studies under scrutiny surround the use of the NGS technology in the studies of e.g. modern viral [1] and ancient bacterial [2] pathogens, and whole genome sequence data of the domestic cow [3].
In 2013, Xu et al. [1] published a study consisting of 92 seronegative, non-A – E, hepatitis patients from Chongqing, China. They used Solexa deep sequencing and found that all 10 sera pools had a 3.780 bp contig, which was located at the interface of Parvoviridae and Circoviridae. The authors designated the new virus provisionally as NIH-CQV. In the study, 63 of 90 patient samples (70%) were positive, but all those from 45 healthy controls were negative. The authors recommended further studies, but concluded that their “data indicate that a parvovirus-like virus is highly prevalent in a cohort of patients with non-A–E hepatitis.” Being so would have been of great medical importance, since non-A–E hepatitis is poorly understood and infected individuals have serious complications. Soon after, Naccache et al. [4], shed doubt on these findings. They discovered, using NGS, a highly divergent DNA virus, which also was at the interface between Parvoviridae and Circoviridae, and they tentatively called it a parvovirus-like hybrid virus (PHV). The authors detected the virus originally in various sets of clinical samples, and all strains were ~ 99% identical in nucleotide and amino acid sequences with each other and the NIH-CQV. Naccache et al. [4] then showed that the source of these viruses was contaminated commercial silica-binding spin-columns used in the sample preparation, and suggested that such contamination can be time dependent and geography specific. Smuts et al. [5] and Zhi et al. [6] also studied the silica-columns from the same company and confirmed the study by Naccache et al. [4], but they showed that silica-columns from some other companies were contamination free. Since silica in most commercial spin columns is derived from the cell walls of diatoms, the authors in Naccache et al. later postulated [7] that PHV/NIH-CQV could be a diatom virus, whereas Zhi et al. hypothesized that it originated from oomycetes [6].
Ancient DNA studies have always stressed rigid contamination control, and contextual interpretation of the results. Little consensus exists on sample collection and experimental study design of NGS-based studies which, may introduce another level of concern. Indeed, the study by Campana et al. [2] serves as an example. They tried to resolve the cause of huey cocoliztli (Great Pestilence in Nahautl), a hemorrhagic fever that killed almost half of the population in 1576 in Mexico. The authors used Helicos HeliScope and Illumina 2500 sequencing platforms for metagenomic sequencing to identify the pathogen in eight human remains from a known site of the huey cocoliztli outbreak from Spanish colonial times. They also took surrounding soil samples and four pre-colonial remains for comparative studies. Without the comparative sampling, the authors could have reported Yersinia pestis and rickettsiosis as causative pathogens, which now turned more likely to be false positive findings. Due to this observation, the authors suggested that target-enrichment methods should be used to confirm the presence of a pathogen.
Finally, mammalian genomes also have been studied for microbial contamination. Recently Merchant et al. [3] studied Bos Taurus, the domestic cow, whose genome was first assembled in 2009 from 35 million Sanger sequencing reads, and mapped into chromosomes. As common in such projects, small regions remained unmapped, and Merchant et al. [3] targeted those sequences. By use of Kraken system to classify the unmapped contigs, they surprisingly identified 173 small contigs that were of microbial origin. One of those was Bovine herpes virus 6, isolate Pennsylvania 47, which is a cattle-specific virus causing various diseases. This virus is a retrovirus, and the authors considered the possibility of viral insertion to the host genome, which they excluded during further investigation. The most common contaminants belonged to Acinetobacter (29 contigs), Pseudomonas (35 contigs) and Stenotrophomonas (27 contigs). Another unexpected contaminating contig of interest was 2.885 small contigs, earlier placed in chromosomes 1 to 10, which aligned to a human specific bacterium, Neisseria gonorrhoeae, strain TCDC-NG08107. Although this sequence is putatively a complete genome, it contained multiple sequences that seemed to derive from the cow and sheep genomes. These alarming findings caused GenBank temporarily to suppress the entry for this genome.
All these reports presented above suggest that when the scientific community is changing rapidly from Sanger sequencing to the next phase(s) in the sequencing technology, the importance of quality control and validation has to be emphasized. Microbial contamination is not yet fully understood, but not surprisingly it appears to be prevalent. There is a need for clear outline for detection and validation of new marker systems and setting thresholds for filtering out contamination in such studies as metagenomics [8]. Indeed, one such paper providing guidance towards this direction has been published recently in Investigative Genetics [9].”
https://www.ncbi.nlm.nih.gov/labs/pmc/articles/PMC4279886/

This next source details three main areas where PCR encounters limitations: amplicon size, amplification error rate, and contamination. It claims that PCR is prone to erroneous results due to contamination:
Polymerase Chain Reaction
Limitations of PCR
“Some of the limitations of PCR in gene synthesis are as follows:
Amplicon Size: PCR efficiency decreases with increased amplicon size. Amplicon refers to the DNA fragments or sequences that are amplified by the PCR. PCR has the ability to amplify DNA sequences of up to 3 kb, but ideally the length should be less than 1 kb. The reason is due to the low processivity of the Taq polymerase enzyme which lacks proof reading ability. Many genes, in particular human genes are longer than those that can be multiplied by PCR. Megabases long DNA sequences have to be isolated and then multiplied which is impossible to achieve by PCR.
Error Rate During Amplification: DNA polymerases have a proof reading mechanism by which they correct the errors committed during DNA replication. Taq polymerase lacks proof reading mechanism and is unable to correct the errors that happen during DNA amplification. The error rate of Taq polymerase is 1 error per 9000 nucleotides.
Contamination: PCR technique is extremely sensitive and prone to erroneous results due to contaminated DNA. Contaminated DNA may originate from contaminant organisms found in the biological source, airborne cellular debris, products of previous PCR reactions. Contamination can be prevented by using good laboratory technique and taking adequate control measures.”
https://www.google.com/amp/s/www.medindia.net/amp/patients/patientinfo/polymerase-chain-reaction.htm
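The quoted figure of 1 error per 9,000 nucleotides can be turned into a rough back-of-the-envelope estimate of how often a newly synthesized strand comes out error-free. This sketch is mine, and it assumes independent per-base errors, which is a simplification:

```python
# Rough illustration of the quoted Taq error rate (1 error per 9,000
# nucleotides): the chance that a single synthesis pass over an
# amplicon of a given length introduces no errors at all.
ERROR_RATE = 1 / 9000  # per-nucleotide misincorporation probability

def error_free_fraction(amplicon_length: int) -> float:
    """Probability one newly synthesized strand of the given length
    contains zero errors, assuming independent per-base errors."""
    return (1 - ERROR_RATE) ** amplicon_length

print(f"{error_free_fraction(1000):.3f}")  # 1 kb amplicon
print(f"{error_free_fraction(3000):.3f}")  # 3 kb amplicon
```

On this rough estimate, around one in ten newly synthesized 1 kb strands carries at least one error, and an error made in an early cycle is itself copied in every later cycle.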

This third source explains how even trace amounts of contamination can produce misleading results. In order for PCR to even be useful, prior sequence knowledge is required. This should be a red flag for anyone believing in a “novel virus” genome said to utilize PCR for its creation. The DNA polymerase used for PCR is prone to errors which leads to mutations (i.e. errors) in the genome:
Research Techniques Made Simple: Polymerase Chain Reaction (PCR)
“Although PCR is a valuable technique, it does have limitations. Because PCR is a highly sensitive technique, any form of contamination of the sample by even trace amounts of DNA can produce misleading results (Bolognia et al, 2008; Smith & Osborn, 2009). In addition, in order to design primers for PCR, some prior sequence data is needed. Therefore, PCR can only be used to identify the presence or absence of a known pathogen or gene. Another limitation is that the primers used for PCR can anneal non-specifically to sequences that are similar, but not completely identical to target DNA. In addition, incorrect nucleotides can be incorporated into the PCR sequence by the DNA polymerase, albeit at a very low rate.”
LIMITATIONS
- The DNA polymerase used in the PCR reaction is prone to errors and can lead to mutations in the fragment that is generated
- The specificity of the generated PCR product may be altered by nonspecific binding of the primers to other similar sequences on the template DNA
- In order to design primers to generate a PCR product, some prior sequence information is usually necessary.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4102308/

This final source deals with the clinical (mis)use of PCR and reiterates that prior sequence knowledge is required in order to amplify a sequence. PCR’s use as a clinical diagnostic tool is challenged by difficulties in interpreting the results, especially as even trace amounts of “viral” RNA can trigger positive results which are considered clinically irrelevant. It is admitted that it is not possible to distinguish a positive result reflecting a clinical infection from one reflecting contamination. Thus, the results should be interpreted cautiously due to the high sensitivity of PCR and its inability to differentiate between contamination and results of clinical relevance:
Polymerase chain reaction: advantages and drawbacks
PCR HAS SOME LIMITATIONS….
“PCR is not only a very sensitive technique but also a very specific one: The primers are usually directly complementary to the target sequence and DNA can only be amplified if the matching target DNA-primer is adequate. In fact, as primers are directly complementary to target DNA, amplification only occurs if the target is exactly or closely related to the DNA sequence of the expected causative agent. For example, primers designed to amplify canine oral papillomavirus DNA will probably not amplify the DNA of another canine papillomavirus. PCR demands that sequence information be available for at least a part of the DNA that is to be amplified. Interpreting the clinical relevance of a positive PCR amplification can also be challenging. In fact some extremely sensitive nested PCR detect even 0.05 viral copy per cell. The presence of such trace amounts of DNA does not indicate a productive infection but probably a latent or clinically non-relevant infection. For this reason, one must interpret cautiously some PCR results. Last but not least, PCR does not allow localization of the nucleic acids. It is consequently not possible to differentiate a clinical infection from a contamination. It has, for example been shown, that papillomavirus DNA can be amplified from virtually each sample of normal human skin. But the detection rate is much lower when the stratum corneum of these samples is removed. This indicates that most of these papillomavirus are probably contaminant.”
PCR FROM BENCH TO BEDSIDE: WONDERFUL BUT…. BE CAREFUL!
“As mentioned above, PCR is a wonderful tool for research but also for daily diagnosis. One should however be aware of one possible pitfall: the sensitivity of the method. As already said, some very sensitive PCR settings may be able to identify as few as one genome copy within 100 cells! It is very unlikely that these few microorganisms may cause any damage to the host but… the test is positive! One should consequently interpret PCR results cautiously. Exactly like the IgE serology tests that identify sensitization (and not allergy!), PCR identifies infection, but does not say anything on the actual link between this infection and the clinical signs. Let’s take the example of leishmaniasis!
Before 2000, most authors agreed that the prevalence of the disease was from 1 to 15% in endemic areas of Europe. But, as soon as PCR studies began, data changed. It was suddenly obvious that the great majority, if not all, dogs living in endemic areas were infected. But only some of them were seropositive and few presented with clinical signs! That is the reason why PCR is NOT a good test to assess whether leishmania infection explain the clinical signs of one specific dog.
The gold standard to answer this question remains the detection of leishmania-specific IgG!”
While the above article attempts to make excuses for why the following examples are not limitations, it does go on to list these additional issues:
- PCR cannot amplify RNA
- PCR only amplifies specific targets
- PCR amplifies only a very limited part of the genome
- PCR may sometimes not be specific enough
- PCR is not quantitative
- It is not possible to know where the DNA was in the sample

In Summary:
- Polymerase chain reaction, or PCR, is a laboratory technique used to make multiple copies of a segment of DNA
- The newly synthesized DNA segments serve as templates in later cycles, which allow the DNA target to be exponentially amplified millions of times
- In the mid-1980s, a high-profile professor stated “it [PCR] can never become a widely used diagnostic tool due to the unavoidable contamination”
- Since then, contamination in PCR has remained, but it can be managed (not eliminated) with the right laboratory environment, sample flow, and careful experimental design, including proper sample handling and a set of controls
- With the rise of next generation sequencing (NGS) during the last couple of years, the contamination worry has appeared again as it has with any molecular biology technique employing amplification
- The studies under scrutiny surround the use of the NGS technology in the studies of e.g. modern “viral” and ancient bacterial pathogens, and whole genome sequence data of the domestic cow
- In 2013, Xu et al. published a study consisting of 92 seronegative, non-A – E, hepatitis patients from Chongqing, China
- They used Solexa deep sequencing and found that all 10 sera pools had a 3,780 bp contig, which was located at the interface of Parvoviridae and Circoviridae
- The authors designated the new “virus” provisionally as NIH-CQV.
- In the study, 63 of 90 patient samples (70%) were positive, but all those from 45 healthy controls were negative
- The authors recommended further studies, but concluded that their “data indicate that a parvovirus-like “virus” is highly prevalent in a cohort of patients with non-A–E hepatitis.”
- Soon after, Naccache et al. shed doubt on these findings
- They discovered, using NGS, a highly divergent DNA “virus,” which also was at the interface between Parvoviridae and Circoviridae, and they tentatively called it a parvovirus-like hybrid “virus” (PHV)
- The authors detected the “virus” originally in various sets of clinical samples, and all strains were ~ 99% identical in nucleotide and amino acid sequences with each other and the NIH-CQV
- Naccache et al. then showed that the source of these “viruses” was contaminated commercial silica-binding spin-columns used in the sample preparation, and suggested that such contamination can be time dependent and geography specific
- Little consensus exists on sample collection and experimental study design of NGS-based studies which, may introduce another level of concern
- Campana et al. tried to resolve the cause of huey cocoliztli (Great Pestilence in Nahuatl), a hemorrhagic fever that killed almost half of the population in 1576 in Mexico
- The authors used Helicos HeliScope and Illumina 2500 sequencing platforms for metagenomic sequencing to identify the pathogen in eight human remains from a known site of the huey cocoliztli outbreak from Spanish colonial times
- They also took surrounding soil samples and four pre-colonial remains for comparative studies
- Without the comparative sampling, the authors could have reported Yersinia pestis and rickettsiosis as causative pathogens, which now turned more likely to be false positive findings
- Merchant et al. studied Bos taurus, the domestic cow, whose genome was first assembled in 2009 from 35 million Sanger sequencing reads, and mapped into chromosomes
- Small regions remained unmapped, and Merchant et al. targeted those sequences
- By use of Kraken system to classify the unmapped contigs, they surprisingly identified 173 small contigs that were of microbial origin
- One of those was Bovine herpes “virus” 6, isolate Pennsylvania 47, which is a cattle-specific “virus” causing various diseases
- This “virus” is a “retrovirus,” and the authors considered the possibility of “viral” insertion to the host genome, which they excluded during further investigation
- The most common contaminants belonged to Acinetobacter (29 contigs), Pseudomonas (35 contigs) and Stenotrophomonas (27 contigs)
- Another unexpected contamination finding consisted of 2,885 small contigs, earlier placed in chromosomes 1 to 10, which aligned to a human-specific bacterium, Neisseria gonorrhoeae, strain TCDC-NG08107
- Although this sequence is putatively a complete genome, it contained multiple sequences that seemed to derive from the cow and sheep genomes
- When the scientific community is changing rapidly from Sanger sequencing to the next phase(s) in the sequencing technology, the importance of quality control and validation has to be emphasized
- Microbial contamination is not yet fully understood, but not surprisingly it appears to be prevalent
- There is a need for a clear outline for the detection and validation of new marker systems and the setting of thresholds for filtering out contamination in fields such as metagenomics
- PCR efficiency decreases with increased amplicon size (DNA fragments or sequences that are amplified by the PCR)
- The reason is due to the low processivity of the Taq polymerase enzyme which lacks proof reading ability
- Megabase-long DNA sequences have to be isolated and then multiplied, which is impossible to achieve by PCR
- Taq polymerase lacks proof reading mechanism and is unable to correct the errors that happen during DNA amplification
- The error rate of Taq polymerase is 1 error per 9000 nucleotides
- PCR technique is extremely sensitive and prone to erroneous results due to contaminated DNA
- Contaminated DNA may originate from contaminant organisms found in the biological source, airborne cellular debris, products of previous PCR reactions
- Because PCR is a highly sensitive technique, any form of contamination of the sample by even trace amounts of DNA can produce misleading results
- In order to design primers for PCR, some prior sequence data is needed; therefore, PCR can only be used to identify the presence or absence of a known pathogen or gene
- The primers used for PCR can anneal non-specifically to sequences that are similar, but not completely identical to target DNA
- Incorrect nucleotides can be incorporated into the PCR sequence by the DNA polymerase
- The DNA polymerase used in the PCR reaction is prone to errors and can lead to mutations in the fragment that is generated
- As primers are directly complementary to target DNA, amplification only occurs if the target is exactly or closely related to the DNA sequence of the expected causative agent
- PCR demands that sequence information be available for at least a part of the DNA that is to be amplified
- Interpreting the clinical relevance of a positive PCR amplification can be challenging
- In fact some extremely sensitive nested PCR detect even 0.05 “viral” copy per cell
- The presence of such trace amounts of DNA does not indicate a productive infection but probably a latent or clinically non-relevant infection
- For this reason, one must interpret cautiously some PCR results
- PCR does not allow localization of the nucleic acids so it is consequently not possible to differentiate a clinical infection from a contamination
- It has, for example been shown, that “papillomavirus” DNA can be amplified from virtually each sample of normal human skin
- This indicates that most of these “papillomavirus” are probably contaminants
- One should be aware of one possible pitfall: the sensitivity of the method
- One should consequently interpret PCR results cautiously
- PCR identifies “infection,” but does not say anything on the actual link between this “infection” and the clinical signs
- An example was given with leishmaniasis, a disease in dogs said to be caused by leishmania parasites spread by sandflies
- Most authors agreed that the prevalence of the disease was from 1 to 15%
- As soon as PCR studies began, data changed and it was suddenly obvious that the great majority, if not all, dogs living in endemic areas were infected yet only some of them were seropositive and few presented with clinical signs
- That is the reason why PCR is NOT a good test to assess whether leishmania infection explains the clinical signs of one specific dog
- Other limitations include:
- PCR cannot amplify RNA
- PCR only amplifies specific targets
- PCR amplifies only a very limited part of the genome
- PCR may sometimes not be specific enough
- PCR is not quantitative
- It is not possible to know where the DNA was in the sample

Contamination is not the exception, it is the rule. Starting with the unpurified sample, which is immediately placed in a “viral” transport media and is regularly subjected to cell culture, contamination is unavoidable. This carries over to the extraction of the RNA used to generate the sequences, as well as to the methods used for fragmenting the DNA/RNA into small sizes for sequencing. PCR is then used to amplify the fragments many times over to create the DNA library. Seeing as PCR is very sensitive and prone to contamination, with results varying from machine to machine, lab to lab, and even with the same sample on the same machine in the same lab, the results obtained from this technique should be considered highly questionable at best. There is no step in this process where contamination is not felt. The sensitivity of PCR means that this contamination is amplified exponentially. As PCR cannot differentiate contaminant DNA from target DNA, the contaminants are sequenced into the genome, leading to one gigantic mess of erroneous, unreliable data and false results. While it is claimed PCR results should be interpreted cautiously, when it comes to a technique this prone to contamination, no genome or diagnostic test result should ever be trusted.
And to think an entire pandemic and hundreds of thousands of resulting deaths (perhaps millions) are pretty much based on a faulty testing system. Without standardization, PCR is useless for anything much more than guesswork. But guesswork and piecemeal methods have become the foundation of standard medical science.
I think the scientific medical community is very much lost in the wilderness when it comes to understanding the functional processes within a human. Otherwise, they would have created drugs with no side effects or long-term carnage by now.
Yes, they are just guessing and playing chemist, hoping the side effects don’t kill the patient too quickly while raking in the $$$. The PCR test is nothing but meaningless data used to deceive. It’s sad that people cannot, or are unwilling to, understand this.
Another good one Mike. I’ve seen some papers say qPCR – q for quantitative. Would you know how they do it? Perhaps it’s an estimate or a model. I’ve often wondered how they do this since the starting sample is not standardized (in terms of dilution), so the process has to do something quantitative, like actually counting, to give a value for the entire sample’s ‘viral load’
Thanks! It is nothing but estimates based off of the Ct value:
“PCR tests are exquisitely sensitive. The median test can detect one genome in a microliter of sample. The best are 100 times more sensitive; the worst 100 times less so, but even these are still able to detect the vast majority of infected individuals. Every test reports a “q:” the cycle threshold (Ct, aka Cq, or Cp). However, the specifics of the test protocol affect what this Ct is. The same sample tested with different protocols will likely not generate comparable Ct numbers. It will vary protocol-to-protocol based on differences in pre-PCR processes (sample collection; use of transport medium; cDNA generation; reagent selection and purity); locations and base content of genome regions selected; primer design to bridge those regions (e.g. off-target binding or primer-dimer formation); probe design to detect amplified product; efficiency of the PCR cycler instrument used; etc. Across all samples run on the same protocol, Ct will accurately reflect relative viral load differences sample to sample. Generating comparability beyond that requires each lab to publish what is called a “Standard Curve” for each test protocol it performs. This translates “apples and oranges” Ct counts to more comparable viral loads expressed in terms of number of viral copies per milliliter. This is done by taking a sample of known viral concentration (available commercially) and running a series of 10x dilutions on the same protocol and recording the resulted Ct with each known level of viral load for that specific test, run in that particular way, by that particular lab.”
https://chs.asu.edu/diagnostics-commons/blog/how-do-we-use-quantitative-tests-quantitatively
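The standard-curve procedure described in the quote can be sketched as follows. The dilution series numbers here are hypothetical, and the fit is an ordinary least-squares line of Ct against log10 concentration; a real lab would use its own measured dilution data for its own protocol.

```python
import math

# Hypothetical dilution series: known concentrations (copies/mL) and
# the Ct each produced on one specific protocol in one specific lab.
standard_curve = [(1e7, 18.0), (1e6, 21.3), (1e5, 24.6), (1e4, 27.9), (1e3, 31.2)]

# Fit Ct = slope * log10(concentration) + intercept by least squares.
xs = [math.log10(c) for c, _ in standard_curve]
ys = [ct for _, ct in standard_curve]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

def ct_to_copies_per_ml(ct: float) -> float:
    """Invert the fitted curve: translate a Ct into copies/mL."""
    return 10 ** ((ct - intercept) / slope)

print(f"Ct 24.6 -> {ct_to_copies_per_ml(24.6):.0f} copies/mL")
```

The fitted slope here (about -3.3 cycles per ten-fold dilution) and intercept are specific to one protocol run in one lab, which is the quote’s point: without each lab publishing its own standard curve, raw Ct values are not comparable across tests.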
This article I did has more info on why this is a problem:
https://viroliegy.com/2021/10/01/viral-load-of-crap/
Hi Mike,
I note how even when they admit the limitations of the PCR they are still over-reaching: “PCR identifies infection, but does not say anything on the actual link between this infection and the clinical signs.” The author has made the incorrect claim that the PCR result “identifies infection” when it does nothing of the sort. We could simply introduce into the sample a short synthetic nucleotide sequence that we have matched to the primers, which would cause the same PCR result, no “infection” required. Or find a PCR “positive” inanimate object – how is the object “infected”?
Also in the “Advantages & Disadvantages of PCR Approach”: “Cannot discriminate between viable and nonviable or infectious and noninfectious cells or viruses” – the PCR is even more limited than this. It cannot determine the provenance of the amplified nucleotide sequence or confirm the existence of a postulated organism. As you know, this is particularly so with “viruses” where we have millions of “genomes” on the databases but none of the sequences were shown to come from inside a “virus”.
The conflation of the analytical performance of the process with its use as a diagnostic “test” is behind the misinterpretation and deceptive use of the PCR. It is the pivotal issue in the COVID-19 fraud. At least more people seem to be aware of this now, and it will be important going forward for them to be aware of any future fake pandemics from the start.
Cheers,
Mark
Just like the “infected” paw-paw fruit, goat, motor oil, Coca-Cola, chicken wings, ice cream, salmon cutting board, water… 😉
They definitely love to assume something (i.e. infection) and assign meaning to results that are utterly meaningless. I have always viewed this as a testing pandemic. Without PCR, this whole scam goes away. I am definitely happy more are aware of the fraud. I hope we can continue waking people up.
For another take on contamination please take a look at the bioinformatician Miguel Romero’s student thesis. His findings reveal the length and breadth of “HIV” contamination of DNA databases. His list of contaminated taxa is stunningly impressive. Interestingly he left the door open as to whether his findings really were explained by contamination. In fact he later suggested that detection of “HIV” DNA might have clinical utility for the much needed early diagnosis of ovarian cancer. Unfortunately this fell on deaf ears. Apparently the survival of “HIV” is far more important than investigating the possibility such a “liquid biopsy” might save a significant number of lives.
http://openaccess.uoc.edu/webapps/o2/handle/10609/31361
Click to access Contamination_of_genomic_databases_by_HIV-1_Bioinformatic.pdf
https://www.bmj.com/content/363/bmj.k4419/rr
Awesome! Thanks for the information and links Dr. Turner! I will give them a look. I always love some good sources debunking the current dogma. 😉