Pessimism is seeing the worst in the world and in life; scepticism is doubting the truth of something, questioning its validity. 'Scepticism' literally originates from the Greek for 'to examine'. Pessimism is a negative attitude; scepticism is a healthy, questioning one. I want to avoid this post turning into verbal diarrhoea, so I will do my best to structure it coherently.
Scepticism: becoming more important in modern living?
Believing everything one reads is unfortunately something many people do, and it becomes more of a problem because public health is always a hot topic in which most people take an interest. Healthcare and human science (my research area) is a very specialist field that takes a lot of understanding before an opinion can be formed on the facts. So when a non-expert reads an article on a 'new' discovery, they almost have to assume it is real, or that the person writing the article has employed scepticism on their behalf. Sadly this is often not the case: some articles are written by people with some understanding but without the critical-analysis skills to employ scepticism, or by someone with very little understanding of the subject at all. More dangerous reasons include the writer not being allowed the time to do the research needed to trust the information, the need to sell more copies, or, in the case of news sites and social media, the drive to get as many views as possible to earn more from ads through site traffic: the 'click bait' phenomenon. The 'click bait' approach seems to have shifted the balance away from delivering factually accurate news, with scepticism employed on the consumer's behalf, towards entertainment. This has led to the reporting of unverified information that may require far more knowledge of the subject than the lay person has to make an informed judgement. It essentially forces the lay person to believe what they are reading because they do not have all the information, so the need for the average person to be more sceptical is growing.
I have personally noticed the words 'allegedly', 'reportedly' and 'speculating' appearing more often in papers, on the news and in internet sources; these words suggest the report has not been verified as accurate. Combining this style of publicising and the public's interest in health with a lack of scepticism from the lay person has consequences, such as pressure on medical personnel when people request 'the new drug they have read about that cures this'. It then has to be explained that the drug is still in development, or does not yet have enough research behind it to be trusted as a viable option. Having spoken to doctors and medical students, I know that patients and family members often ask for 'the new treatment', which turns out to be nowhere near ready to come to market, and many report being accused of withholding treatment because they will not administer a drug that does not yet exist. This could be prevented if articles clarified the phrase 'potential/new therapy'.
Potential therapies
When it comes to healthcare, there are things to look out for in the articles and news you see. The biggest is 'potential therapies'. The phrase itself is not the problem: many papers with implications for healthcare will contain it or something similar, and it is generally a true statement if the research is credible (more on credibility later). I have used the phrase in my own work and will continue to use it where it is required. The issue is when the phrase is used in an article or news story, and especially in a tag line or title. When a researcher suggests their work could lead to 'a potential new therapy', that therapy is many years and many research advances away from the paper being written; when a person hears it as news, it is perceived as cutting-edge and available in the near future. It is this distinction, the time and the amount of research left to be done, that is either not stated in the article or buried at the end once a person has already clicked on or bought it (click bait).
To put this into an example: a study by Coste et al. in 2010 provided the first good evidence of mechanically activated cation channels, which they named Piezo1 and Piezo2, and suggested that potential treatments for touch disorders and mechanical nociception were implicated. This is indeed true: the same group later identified the channel in nociceptors and established its role in mechanical pain, again suggesting that targeting this channel could be a potential therapy.
This is all true, but seven years after the original study was published we are still a long way from a drug or therapy coming to market. That is not because research has stalled or was incorrect, but because research takes a long time, and once a potential drug is developed, that is just the beginning.
To put it into perspective, consider drug development alone (not the research before it, such as the discovery of Piezo2 and the work that followed). A target is identified for the drug, involving intricate chemistry to hit specific amino acids at specific positions in the protein; this then needs to be tested with an assay to confirm binding affinity (which does not prove efficacy). Based on this assay, plus functional assays for efficacy, the chemistry is adjusted and the tests repeated until the compound is optimal.
Then animal testing for efficacy is done in at least two species, usually rats and dogs (see later for the issues with this), after which the drug is chemically adjusted again. Side effects are tested in the same way as efficacy, then phase 1 clinical trials (side effects in humans) begin, followed by phase 2 and phase 3 trials. Phases 1-3 alone, 'clinical development', take around six years, so the 'potential therapy' mentioned in a news story is often anywhere from a few years to over two decades from being available (figure 1). In addition, a drug can fail for many reasons at any point in development, and the majority do.
Figure 1: typical drug development process.
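The compounding effect of stage-by-stage attrition can be sketched with a few lines of code. The stage names, durations and advancement rates below are hypothetical round numbers chosen purely for illustration, not real industry figures; the point is simply that multiplying several modest per-stage success rates together leaves only a small fraction of candidates standing after more than a decade.

```python
# Illustrative sketch only: why most 'potential therapies' never reach
# patients. All durations and advancement probabilities are made-up
# round numbers for demonstration, not real pipeline statistics.
stages = [
    # (stage name, years spent, probability of advancing to the next stage)
    ("target discovery & optimisation", 4, 0.50),
    ("preclinical / animal testing",    2, 0.60),
    ("phase 1 (safety in humans)",      1, 0.60),
    ("phase 2 (efficacy)",              2, 0.35),
    ("phase 3 (large-scale trials)",    3, 0.60),
]

def pipeline_outlook(stages):
    """Return, after each stage, the cumulative years elapsed and the
    probability that a candidate compound is still in the running."""
    years, p = 0, 1.0
    rows = []
    for name, duration, advance in stages:
        years += duration
        p *= advance
        rows.append((name, years, p))
    return rows

for name, years, p in pipeline_outlook(stages):
    print(f"after {name}: ~{years} years in, {p:.1%} of candidates remain")
```

With these (invented) rates, only about 4% of candidates survive the full twelve-year run, which is why a 'potential therapy' headline written at the discovery stage says very little about a treatment ever reaching the clinic.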
The issue I have is not with 'potential therapy' in itself; it is the accidental or deliberate omission of any clarification of how potential it really is, in the article or news story. How far along is it? Is it one of the 5,000 or 10,000 compounds in figure 1? Is it before the drug-discovery stage and still just a theory? Because I do not think this information will ever be reported, it is important for the people taking it in to be sceptical about how much of a potential therapy it really is. It may seem trivial to some people, but if you have a family member with an illness and you read 'potential new therapy', you can believe it could change that person's life; it is false hope if nobody clarifies that it is 20 years away, and it is unfair to trade on that hope just to get people to read your article. Some scepticism will help combat all of these consequences: find the original academic paper, read comments from other scientists, and find out whether it is a stand-alone study, as this will show the reality of the situation. That is what scepticism is: asking questions and doing your own research to find the truth.
Scepticism: the responsibility of everyone involved, and the consequences of not being sceptical
Now to the science. Just because an article is based on a real study that suggests a potential therapy does not mean the paper is flawless, or even credible. Let's start with the extreme and shocking example: 'a link between autism and vaccination'. This example matters because it still has continuing consequences for public health, despite coming from one study that was almost immediately discredited and that led to its author losing his licence over findings of data falsification, undisclosed payments and unethical behaviour. The study that sparked the whole anti-vaccination movement was a single paper by Wakefield et al., from the Walker-Smith lab, published in The Lancet in 1998, which claimed a link between the MMR vaccine and autism.
I will not go into great depth here, as the information is readily available, but I will summarise the key evidence refuting the claims. Firstly, the data set was very small, with no control groups, and relied almost entirely on parents' beliefs that a child had autism symptoms rather than on medical examination of the limited number of children. Secondly, in over a decade, none of hundreds of follow-up studies has reproduced results that are even close. Thirdly, Wakefield altered the family histories and accounts he was given to better fit his hypothesis, because claiming a link between autism and the MMR vaccine improved his personal financial standing and that of the institution he worked for (the Royal Free Hospital); this is fraud. After a long hearing he was found guilty of serious professional misconduct and stripped of his clinical and academic credentials. Part of the case against him was that he belonged to a group pursuing a lawsuit against the makers of the MMR vaccine and had accepted money to publish research that would help that lawsuit, none of which was disclosed when the research was published. Finally (because I think I have made my point, though there are many more reasons), there were 12 authors on the paper and 10 of them retracted their involvement, most stating that they had been mistaken or had misinterpreted the results. Wakefield would not say he was mistaken, yet also refused to commit to reproducing his data under controlled, independent conditions, despite being offered the funding and the opportunity to do the research himself to support his autism-MMR claim. You do not have to understand science or law to see that this is the mark of a guilty man who knew he could not reproduce data he had falsified. The consequence of this one incorrect study has been a resurgence of measles, mumps and rubella, three dangerous diseases, and the wider vaccine distrust it fuelled threatens protection against other fatal diseases such as TB and polio.
So how would this have been different with scepticism employed? Firstly, the final version of the paper was written by Wakefield alone, so most of the other authors were unaware of the false data and false conclusions. As an author on a paper you stake your reputation on being partly responsible for the research and the conclusions it draws; the authors should have read and critiqued the final copy and never let it be sent to journals. Secondly, when journals receive papers they are sent to independent researchers, who critically analyse them and give their opinion on the limitations and on whether the data are credible. The reviewers of this paper, for whatever reason, failed to notice the glaring errors, tenuous links and gaps in the science which, if flagged, would have prevented it ever being published and prevented the public health crisis it has led to; had the journal sent the paper back to Wakefield to amend, he would have failed to do so and been exposed. This, to me, is the first failure of scepticism: the problem could have been nipped in the bud either by the other authors or by the journal, but it was not. The next place scepticism failed (or was totally ignored) was in the media, which wrote articles on the 'potential links' between autism and MMR vaccination. The articles were shockingly unbalanced and ignored hundreds of testimonies from other scientists that the research was full of anomalies, inaccuracies and inconsistencies. Unfortunately, celebrity endorsement lent further credit to the lies and swayed public opinion. You could argue that the celebrities in question saw endorsing anti-vaccination as a way to gain extra publicity on a hot media topic and further their celebrity status; for now I will put it down to a lack of scepticism on their part.
Finally, if the lay person had been sceptical of the media reports and taken the time to research them, they would have seen that it was merely one paper, which should ring alarm bells, and reading the testimony of other scientists would have shown that it was not concrete evidence and, in this case, simply untrue. As I have said, though, the lay person is often let down by the media, who should be employing scepticism on their behalf, and in this case the paper should never have seen the printing press. Nonetheless, it is an example that shows scepticism matters for the lay person, researchers and publicists alike.
I would like to stress that even if the MMR-autism link had been true (it is not), I would still use this as an example: at the time it was just one obscure, poorly conducted paper, so scepticism should have meant Wakefield's claims were dismissed because of the poor methods employed. If subsequent, more rigorous studies had shown a significant link, then so be it; but I would still be using this as an example of why scepticism is important at every level, from researchers to the lay person.
The importance of employing scepticism as a scientific researcher!
I want to give one more example. I will not go into too much detail, because I am publishing another entry specifically on animal models of depression, but I want to explain why scepticism matters to those of you currently studying at undergraduate level, as postgraduate students and beyond.
As a bit of background, there are three classes of antidepressant: monoamine oxidase inhibitors, tricyclics and selective monoamine reuptake inhibitors, in order from oldest to newest, with no advance in efficacy since the mid-20th century (Table 1); the only improvement is that the selective reuptake inhibitors have fewer side effects. So what are the reasons? Flawed hypotheses (e.g. the monoamine hypothesis), a lack of understanding of the disorder, and poor animal models of it (the last two go hand in hand).
For now I will just say: employ scepticism when learning about a topic, and question why a lab has used the technique it has, because the paper you read gives the authors' interpretation, and for that interpretation to be correct there must be no other reasonable explanation available at the time. For example: removing a rat's olfactory bulbs results in aggression, a symptom of depression; therefore removing a rat's olfactory bulbs causes depression. That is one interpretation. Can you think of any others that are just as possible? (You can comment them if you like.) I will start you off with one: the rat has lost its principal sense, the one it uses for everything, including judging whether another animal is friend or foe, so to increase its chance of survival it attacks first, giving itself a chance to win. An equally possible (if not far more plausible) interpretation is therefore that the rat has an extensive stress response (activating fight or flight) because it cannot distinguish friend from foe without smell. So the 'removing a rat's olfactory bulbs causes depression' interpretation is not conclusive and would never be the conclusion of a group of expert researchers in the field, right? Wrong. There are hundreds of studies using exactly this method and claiming to have made the rat depressed, and that antidepressants helped.
The problem with explaining this is that there are so many examples similar to the one above, and to those in my other post, that I cannot list everything to watch out for. You will need your own critical analysis to decide whether the experiment, or indeed the animal being used, is appropriate, or whether the researcher is simply doing what the lab and other researchers have done before them. I will give one example specific to the use of mice and rats in depression and other social disorders.
There are studies and questionnaires in which researchers in depression, autism and so on have been asked about their animals (the majority use mice). Asked 'are mice social creatures?', almost all of the researchers (a cumulative total of over 200 across the independent questionnaires I could find) answered yes. Ask the same question of a zoologist, or anyone who studies mice, and the answer will be no: wild mice are very solitary once in adulthood and generally only come together to mate. So why do the overwhelming majority of researchers use mice to test depression and other social disorders? There are a few answers: the labs already used mice, so they carried on without questioning it (no scepticism), and because mice are housed together in labs, researchers assume they are social. They are not; they are housed together because you can fit five mice in a shoebox-sized enclosure. If you actually study the housing, the mice split into dominant and subordinate animals, where a few mice hold most of the territory and the rest huddle in a corner so they will not keep being attacked. (I have read papers in top journals, by groups with good reputations, stating that the mouse on its own is depressed because it has isolated itself from the group. No! That mouse is dominant, and it is alone because it attacks any mouse that comes near it [figure 2].) None of this behaviour exists in wild mice, because they are not social, and it shows that the researchers have a fundamental lack of understanding of mice as a genus (I won't even go into the species-difference issues here). I want to stress that all of this is supported by questionnaires and studies already conducted, not guesswork; show some scepticism, research it, and you will find what I am talking about.
So you can see how scepticism from the new guard of researchers, applied to the old methods, would greatly benefit science, compared with carrying on with the same poor methods and animals. Those methods have produced half a century of stagnation in depression (and other) research, no truly novel treatments succeeding since monoamine oxidase inhibitors, and some of the largest drug companies pulling out of depression and antidepressant research altogether, at a cost to jobs and public health. Perhaps a change of animal and better, more socially relevant tests are the answer? Without scepticism from those going on to study these disorders, though, research will continue to stagnate; it will remain trial and error, where miracle treatments may appear (unlikely) but we will understand the disorder no better than we do now.
With all of this in mind, 'mice' and 'social' do not go together very well. There is some credit if the mice are juvenile, as juveniles do engage in play, but far less for adult mice. So when you see mouse-social-autism, keep the fact that they are not very sociable creatures in mind as you decide whether you agree with the research. The same applies to anxiety and schizophrenia.
Finally, YOU CAN BE SCEPTICAL AND STILL, AFTER RESEARCHING, BELIEVE THE THING YOU WERE SCEPTICAL OF TO BE TRUE! BEING SCEPTICAL DOESN'T MEAN YOU DISAGREE, JUST THAT YOU WANT MORE INFORMATION BEFORE MAKING UP YOUR MIND.