These days, people everywhere want to demonstrate that their beliefs are based on scientific evidence. Indeed, basing one’s beliefs on scientific evidence has become trendy. Granted, it is important for one’s beliefs to be backed by evidence, but such evidence may be scientific evidence or other kinds of evidence, such as personal experience.
As our culture has placed increasing value on evidence-based thinking, some people are still finding ways to justify their erroneous beliefs. For example, proponents of the body-positive movement often have some numbers up their sleeves that they use to justify the belief that you can be obese and still enjoy optimal health. Proponents of the transsexual agenda also have numbers up their sleeves that they use to deny that the agenda is harming people. Meanwhile, stories are pouring out all over the internet in which people relay how their decision to change sexes was one of the worst decisions they ever made.
When I see these people use numbers to support their erroneous beliefs, I wonder whether those numbers even exist, and if they do, where they came from. Even if the numbers were accurate, do numbers alone justify these people’s beliefs?
Even scientific evidence-based thinking can be riddled with errors. In addition, our beliefs should not be based solely on scientific evidence. Below I will discuss why it is such a terrible idea to base one’s beliefs solely on scientific evidence.
Scientific evidence should not replace common sense
One thing that fills me with disgust is when people demand a series of rigorous scientific studies to tell them what color the grass is or which way is up. These people may think that they are being smart by demanding scientific evidence before determining whether something is true, but when that something is knowable by simple common sense, then these people are not being smart, they are being idiots.
For example, we should be able to figure by common sense that smoking tobacco is bad for health. Think about it: when there is a fire, smoke inhalation kills more people than the flames do. Yet when people light up a cigarette and inhale the smoke, they are essentially setting something on fire and inhaling the smoke. Nevertheless, in the early 20th century, people demanded a body of rigorous scientific research to tell them that smoking has bad health effects. The least they could have done was assume by default that it is bad for their health (since this is the more likely possibility), but many people could not even do that.
Now some people demand a bunch of rigorous scientific research to tell them that how they eat every day affects their health and/or affects their risk for serious diseases like cancer.
When you depend on scientific research to tell you something that you should know by common sense, you are not being smart. You are being an idiot. Your demands for such research would only waste valuable resources and divert these resources away from studies that could tell us what common sense alone cannot tell us.
Factors that pervert evidence-based thinking
Two factors that pervert evidence-based thinking are pride/ego and money interests.
Pride/ego has a mind-poisoning effect that can cloud a person’s thinking and give the person an over-inflated sense of his or her own abilities and knowledge. Additionally, such people tend to have a deflated view of others, perceiving others as less competent than they actually are, especially when those others think differently. People in this state do not always prioritize objective truth; they just want to assert their intellectual superiority over others. As such, they can be disruptive in circles of people who are trying to find answers.
One expression to watch out for is “there’s no evidence that…”. In certain contexts, this expression is appropriate: it can discourage people from jumping to a conclusion without adequate evidence. Nonetheless, when these words are uttered, they are often laced with either the burden of proof logical fallacy or what I call the unseen evidence logical fallacy.
The burden of proof logical fallacy is characterized by the assumption that a certain statement is true until or unless proven otherwise. For example, when someone says that “there is no evidence” that GMOs are bad for health, the person may not have considered whether there is evidence that GMOs are safe.
The unseen evidence logical fallacy is characterized by the belief that if one has not seen the evidence, then there must not be any. Arrogant people are especially prone to this fallacy. When people think they are smarter and more knowledgeable than they are, they tend to believe that whatever evidence they are aware of is all there is. This is the if-there-were-evidence-I-would-have-heard-about-it assumption. Keep in mind that if you have not read anything on a topic, then no matter how much evidence there is for something, you are never going to see it.
Another factor that can pervert evidence-based thinking is money interests. When there is a money-driven agenda, objective truth is no longer at the top of the priority list; nor is solving problems that are plaguing society.
What gets published in the scientific literature is not determined by scientific merit as much as the public is led to believe. Many medical journals receive substantial amounts of their funding from pharmaceutical companies. A study by Bhandari et al. (2004) reviewed 332 randomized controlled trials in 8 surgical journals and 5 medical journals, all of them high-impact journals. Of the 158 drug trials, industry-funded trials were about 1.8 times as likely to show results favorable to the new treatment as trials not funded by industry.
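To make a figure like “1.8 times as likely” concrete, here is a quick sketch of how such a relative-risk ratio is computed: the proportion of favorable trials in each funding group, then the ratio of the two proportions. The counts below are hypothetical, chosen only for illustration; they are not the actual numbers from Bhandari et al. (2004).

```python
# Hypothetical counts (NOT the actual Bhandari et al. data), used only to
# illustrate how a relative-risk ratio is computed.
industry_total = 100
industry_favorable = 72      # industry-funded trials with pro-industry results

independent_total = 58
independent_favorable = 23   # independently funded trials with favorable results

# Proportion of favorable results in each group
risk_industry = industry_favorable / industry_total          # 0.72
risk_independent = independent_favorable / independent_total # ~0.40

# Relative risk: how many times as likely a favorable result is
# under industry funding compared with independent funding
relative_risk = risk_industry / risk_independent
print(round(relative_risk, 1))  # → 1.8
```

Note that a relative risk of 1.8 says nothing by itself about which group’s results are correct; it only quantifies how much more often favorable results appear under one funding source than the other.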
Another study, by Stern and Simes (1997), found that studies with negative results (results not in favor of the new treatment being tested) are published later and less often than studies with positive results. The reason is that studies generating negative results can reduce revenue for the drug company trying to sell the drug. Sometimes the drug company uses intimidation to prevent clinical researchers and/or medical journals from publishing results that do not make the new drug look good.
Smith (2003) describes a case in which a study was published showing that a non-steroidal anti-inflammatory drug, benoxaprofen, may have serious side effects. The drug’s manufacturer, Eli Lilly, threatened legal action against the journal. Eli Lilly had made big claims that the drug reversed arthritis even though no scientific studies had confirmed this. In short, many results we see in the scientific literature are biased in favor of findings that make the new drug look good.
Before 1980, independent researchers designed the protocols of clinical trials, recruited the patients, analyzed the results, and wrote the articles for publication. Now, the vast majority of clinical studies have protocols designed by pharmaceutical companies; not all of the patients in the trials necessarily exist; results are analyzed by the drug companies or by communications agencies; and the names of big-name authors are attached afterwards. Clinical trial data are now considered the property of the pharmaceutical company that manufactured the drug. Meanwhile, big-name individuals are selected by pharmaceutical companies and paid to speak about new drugs at medical education events, regardless of whether they are even familiar with the drug they are speaking about. The information about new drugs presented at these events is susceptible to inaccuracies, leaving attending clinicians with insufficient knowledge of a drug’s true nature and side effects. Clinical studies are now more about making money than about actual medical breakthroughs (Healy, 2002).
According to Smith (2003), clinical trials of new drugs should be simple, medically important, properly randomized, and large scale. However, when drug companies determine the research procedures, they can misconduct research in various ways. Sometimes a drug company runs a study on a large number of people with no clear question and no control subjects, just for the sake of showing that a study was done. Sometimes, when a clinical study is run against a competitor drug, the competitor drug is used at a lower-than-optimal dosage to make it seem less effective than it actually is. Alternatively, the competitor drug may be tested at a higher-than-optimal dosage to make it appear to cause more side effects than it would when used in practice.
The bottom line is that some findings in the peer-reviewed science literature can be misleading. As a rule of thumb, one must consider the funding source.
Anecdotal evidence can carry weight
Anecdotal evidence is defined as information based on personal experience or observation, rather than systematic data collection.
You may have heard anecdotal evidence labeled as low-quality evidence. In this day and age, people are starting to value scientific evidence so much that they undervalue other kinds of evidence that can be right in front of them, staring them in the face.
One should know that not all anecdotal evidence is created equal. For example, some anecdotal evidence is just hearsay, and certainly an anecdote does not mean much of anything if it never happened, or if the events are being recalled inaccurately.
When it comes to anecdotal evidence concerning the effects of a behavioral change or intervention, it matters whether the individual presented in the anecdote can serve as his or her own control. In other words, do we know the state of the individual before the behavioral change or intervention? Alternatively, do we know what the state of the individual would have been without the behavioral change or intervention?
An example of an anecdote is the following: My grandpa smoked throughout his life and lived to be 90 years old; therefore, smoking is not bad for your health. The public knows by now that smoking is bad for health, so how did grandpa live to be 90? The issue with this anecdote is that the grandpa cannot serve as his own control, because we do not know what his lifespan or quality of life would have been had he never smoked. This anecdote is therefore low-quality anecdotal evidence.
Now let us say that Stacey is obese and has been obese her entire life. Then Stacey makes a specific dietary change and loses all of the extra weight. This anecdote serves as evidence that the dietary change can be effective for weight loss. Certainly this is not sufficient evidence that the diet would always be effective, but it is evidence that it can be. Now let us say that other obese people try the same thing and get the same results. At this point, the anecdotal evidence is adding up and becoming quite significant. In these anecdotes, the individuals can serve as their own controls because we know their state of health both before and after the behavioral change.
People like to think that the science literature presents higher quality evidence than anecdotes, and it usually does; but sometimes the anecdotal evidence points towards what turns out to be right while the science literature is wrong. As was discussed previously, the science literature is contaminated with publication bias due to money interests. Clinical researchers have reported attempts by the pharmaceutical sponsors to intimidate them into refraining from reporting certain serious side effects that they observed in the clinical trials. Furthermore, there are different ways that studies can be intentionally misconducted to make the treatment appear safer and/or more effective than it is.
In cases where the findings published in the science literature do not accurately convey what was observed in the clinical trials, anecdotes will emerge that may contradict what is published in the science literature. Let us say a physician regularly follows the science literature in order to know what the expected side effects are for a certain drug. What if the physician sees his patients experiencing side effects that were not reported in the science literature? Should the physician dismiss his patients’ experiences as “low-quality anecdotal evidence” and figure that the science literature is more credible? One can see here that in a medical context, labeling anecdotal evidence as low-quality evidence can lead to medical gaslighting.
It may sound reasonable to say that the scientific literature presents higher-quality evidence than anecdotes, but the implications of this philosophy are questionable. By labeling anecdotal evidence as low quality, especially in a medical context, you are saying that you will dismiss the evidence right in front of you, among people you know personally, in favor of researchers far away whom you have never met and whose study may carry a conflict of interest with its funding source.
In a broader context, people acquire wisdom with age largely because of experience, and the life experiences that confer that wisdom consist largely of anecdotes. If one truly assumed that anecdotes cannot tell us anything, one would have a harder time acquiring wisdom with age.
Do you want to solve problems and find truth, or just look smart?
When you depend on scientific studies to tell you everything under the sun, and when you only believe what those scientific studies are telling you, you may sound smart, but you can render yourself less able to solve certain problems. Meanwhile, the people who have common sense, and who pay attention to what is happening right in front of them could be the ones solving problems and finding truth. I believe that this already happens in the real world.
For example, every so often I hear about someone who has a cancer diagnosis, and is given a poor prognosis with only a short amount of time to live even with treatment. The patient rejects the conventional treatments to the surprise of the medical doctors and tries an alternative intervention that often involves a radical dietary change. Somehow the cancer goes away and ten years later still has not returned. Now, this is just an anecdote, but it is an anecdote that has surfaced with one patient after another. In these kinds of anecdotes, the patient, to some extent, can serve as his/her own control. When the doctors give a poor prognosis, this poor prognosis represents what most likely would have happened if the patient had chosen the conventional treatment route instead of the alternative route. Therefore, achieving cancer remission and still being in remission 10 years later does say something.
So why have we not heard very much about these alternative interventions in cancer medicine? The most likely reason is that these interventions do not make money. They in fact cause a loss of money because they can prevent patients from taking the expensive drugs that do make lots of money. When these alternative interventions are understudied, they also will be “underevidenced”—not because they are ineffective, but because they do not align with money interests. Therefore, if you were to choose the interventions that are the most heavily studied, you would not be as likely to choose the best treatment options as you may think.
Too many times, I hear a person claiming that there is “no evidence” for an intervention that is, in reality, saving lives. It is terrible.
“Conspiracy theorist” and other kinds of name-calling
You may have heard people being described by the following terms:
- narcissist
- conspiracy theorist
- anti-science
- pseudoscience
- thing-you-disagree-with-phobic
- Marxist/communist
- fascist
- legalist
These labels, when used, often do not accurately describe the person in question. For example, when person A calls person B a conspiracy theorist, all that may be happening is that person B is trying to acknowledge corruption taking place in some higher power. This corruption is not obvious to person A, and because person A has a hard time respecting someone with an opposing view, person A attaches to person B the derogatory label of “conspiracy theorist”.
A conspiracy theory is defined as a theory that explains an event or set of circumstances as the result of a secret plot by usually powerful conspirators. The common assumption is that conspiracy theories come from someone’s imagination, that they have little evidence to support them, and that they therefore never turn out to be true later on.
The thing is that powerful entities have existed throughout history, and where there is power, there tends to be some amount of corruption. After all, they say that power corrupts. One can figure that such corruption would happen behind closed doors, because there it can more easily go unpunished and unaccounted for. So it is foolish to call someone a conspiracy theorist merely for suspecting corruption in some higher power, even when it is not obvious that such corruption is taking place.
Now back to person A calling person B a conspiracy theorist… Rather than attaching a derogatory label to person B, person A could have said, “I disagree; I do not think this corruption you speak of is happening.” Respectful disagreement is the more appropriate reaction when someone suspects an evil plot in some higher power that you do not believe exists. There is no need for derogatory labels. They are judgmental, and worse, the “conspiracy theory” may be an actual ongoing conspiracy that is hurting millions of people, and showing disrespect for the people trying to acknowledge it would only make things worse.
People cannot always disagree with each other with grace. When arrogance creeps in, there is a temptation to attach derogatory labels to anyone that one disagrees with. As another example, people may be called anti-science because they disagree with some scientists on something, or because they disagree with a specific application of science, such as the use of GMOs in food production. However, you can disagree with the entire science community on something and still not be anti-science. Anti-science, in its true form, refers to a rejection of the scientific method as a way of acquiring knowledge and understanding the world around us. Even if one were to weigh anecdotal evidence above scientific evidence, one would still not necessarily be anti-science. One is anti-science when one places no weight on scientific evidence or scientific inquiry.
Sometimes people will state that a mode of thought is “pseudoscience”. Pseudoscience refers to something that has the look of science, but does not use the scientific method. Astrology is one example of pseudoscience. Sometimes, however, when someone disagrees with a piece of scientific work, he/she may label it as pseudoscience. Yet the work may be based on true science, but the person may just disagree with the methodology or with the conclusions. Again, people like to attach derogatory labels to other people who have opposing views, even when the derogatory labels do not accurately describe the other person.
References
Bhandari M, Busse JW, Jackowski D, Montori VM, Schünemann H, Sprague S, Mears D, Schemitsch EH, Heels-Ansdell D, Devereaux PJ. (2004) “Association between industry funding and statistically significant pro-industry findings in medical and surgical randomized trials.” Canadian Medical Association Journal 170(4): 477-480.
Healy D. (2002) “In the Grip of the Python: Conflicts at the University-Industry Interface.” Science and Engineering Ethics 9(1): 1-13.
Smith R. (2003) “Medical journals and pharmaceutical companies: uneasy bedfellows.” British Medical Journal 326: 1202-1205.
Stern JM, Simes RJ. (1997) “Publication bias: evidence of delayed publication in a cohort study of clinical research projects.” British Medical Journal 315: 640-645.