Biases and bathing suits

It's summer in the northern hemisphere, and I put on a bathing suit today... 😒

How can we tell if we are measuring what we think we are measuring? 

From the 1970s onwards, the Body Mass Index (BMI) was adopted by medics as a measure of obesity, but several issues have since come to light regarding the use of this tool. As more knowledge has been amassed, the clinical definition of obesity has changed.
 
Nowadays, the WHO still grants the BMI some face validity, because it “looks like” or “appears to” measure what it is intended to measure, even though scientists are aware that body fat proportion is a better predictor of diabetes risk. 🍕🍎🍔🍓
 
[Figure: Correlation between BMI and Body Fat Percentage for Men (NHANES, 1994). For the original study where this graph came from, see here.]

So, construct validity asks whether a tool “really” measures the construct (in this case, obesity), while content validity asks whether it covers all the relevant aspects of that construct (in this case, what matters for diabetes screening): the appropriateness of different measurements of obesity has evolved.
 

All of this is a way of saying: we are aware we are biased.

But there is a lot we can do to minimise bias, mitigate its sources, and estimate its contribution to our observations.

 

Although many different types of bias have been identified, the ones that most often affect research design in the biomedical sciences are:


  1. Ascertainment bias is a form of systematic error that occurs during data collection. 

    It arises when the recruited participants are not representative of the target population.

    Research design solution: random sampling.

    In medicine, this means selecting participants according to diagnostic and demographic inclusion criteria, rather than by convenience sampling (see the sampling sketch after this list).

     

  2. Attrition bias is the selective dropout of some participants, who systematically differ from those who remain in the study.

    Research design solution: intention-to-treat analysis.

    In medicine, this means that investigators compare the baseline data of participants who leave the study with the data of those who remain until the end (see the intention-to-treat sketch after this list).

     

  3. The placebo effect is a phenomenon where people report real improvement after taking a fake or inactive treatment, called a placebo.

    Because the placebo can’t actually cure any condition, any beneficial effects reported may be due to a person’s belief or expectation that their condition is being treated.

    However, the placebo effect is not exclusively attributed to psychology. It can also reflect other biases, such as regression to the mean and confirmation bias (see the regression-to-the-mean sketch after this list).

    Research design solution: double-blinding.

    In medicine, this means that both the researchers performing the experiment and the study participants are unaware of each subject’s group assignment.


  4. Publication bias means that a study’s findings determine whether it will be published, rather than the study design, relevance of the research question, or overall quality. 

    Because the academic community tends to view positive studies more favorably than negative ones, these are more likely to be published, inflating the rate of type I errors (false positives) in the literature (see the publication-bias sketch after this list).

    Research design solution: clinical trial registries.

    In medicine, this means that a paper describing the hypothesis, clinical study protocols, and expected statistical power is published before data analysis is completed.

    For more information see here.
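
To make the random-sampling point concrete, here is a minimal Python sketch of the ascertainment-bias item above. It is not from any real study; the population, sample sizes, and ages are all invented for illustration.

```python
import random

random.seed(42)

# Hypothetical target population: 10,000 adults aged 18 to 90.
population = [random.randint(18, 90) for _ in range(10_000)]

# Convenience sample: whoever is easiest to reach. Here it is simulated
# as the 200 youngest people, standing in for any systematically
# unrepresentative recruitment channel.
convenience = sorted(population)[:200]

# Random sample: 200 participants drawn uniformly from the population.
random_sample = random.sample(population, 200)

def mean(xs):
    return sum(xs) / len(xs)

print(f"population mean age : {mean(population):.1f}")
print(f"convenience sample  : {mean(convenience):.1f}")   # clearly biased
print(f"random sample       : {mean(random_sample):.1f}")  # close to the truth
```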
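For attrition bias, the intention-to-treat sketch below simulates a trial of a treatment that in truth does nothing, but in which participants with the worst outcomes in the treated arm tend to drop out. Comparing only the completers manufactures an apparent benefit; keeping everyone in the arm they were assigned to (assuming their outcomes can still be collected at follow-up) does not. All numbers are made up.

```python
import random
import statistics

random.seed(0)
n = 1_000  # participants per arm

# Outcome is a symptom score (lower is better); the treatment has NO real effect.
treated = [random.gauss(50, 10) for _ in range(n)]
control = [random.gauss(50, 10) for _ in range(n)]

# Attrition: treated participants with high (bad) scores are more likely
# to drop out, e.g. because they feel no benefit and leave the study.
def stays(score):
    return random.random() < (0.9 if score < 55 else 0.4)

treated_completers = [s for s in treated if stays(s)]

# Completers-only comparison: looks like the treatment helps.
print("completers-only difference  :",
      round(statistics.mean(control) - statistics.mean(treated_completers), 2))

# Intention-to-treat comparison: everyone analysed as randomised; no effect.
print("intention-to-treat difference:",
      round(statistics.mean(control) - statistics.mean(treated), 2))
```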
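The regression-to-the-mean sketch for the placebo item: everyone's true blood pressure is stable, each measurement carries noise, and only the people with the highest baseline readings are enrolled. Their follow-up average then looks better even though nothing was done. Again, every value here is invented.

```python
import random
import statistics

random.seed(1)
n = 5_000

# Each person has a stable "true" blood pressure plus measurement noise.
true_bp  = [random.gauss(130, 10) for _ in range(n)]
baseline = [bp + random.gauss(0, 8) for bp in true_bp]

# Enrol only the 500 people with the highest baseline readings,
# as a hypertension trial might.
enrolled = sorted(range(n), key=lambda i: baseline[i], reverse=True)[:500]

# Re-measure the enrolled group with NO treatment at all.
followup = [true_bp[i] + random.gauss(0, 8) for i in enrolled]

print("baseline mean :", round(statistics.mean(baseline[i] for i in enrolled), 1))
print("follow-up mean:", round(statistics.mean(followup), 1))
# The follow-up mean is lower purely because the extreme baseline
# readings were partly measurement noise.
```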
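Finally, the publication-bias sketch: simulate many small trials of a treatment with exactly zero effect and "publish" only the ones that reach p < 0.05. The published record then consists entirely of type I errors, with effect sizes that look respectable. This sketch assumes numpy and scipy are available, and all parameters are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_studies, n_per_arm = 1_000, 30
published = []

for _ in range(n_studies):
    # Both arms come from the SAME distribution: the true effect is zero.
    treatment = rng.normal(0.0, 1.0, n_per_arm)
    control = rng.normal(0.0, 1.0, n_per_arm)
    _, p_value = stats.ttest_ind(treatment, control)
    if p_value < 0.05:  # only "positive" studies get written up
        published.append(treatment.mean() - control.mean())

print("studies run            :", n_studies)
print("studies 'published'    :", len(published))   # roughly 5% by chance alone
print("mean |published effect|:", round(float(np.mean(np.abs(published))), 2))
# Every published result here is a false positive, yet the published
# effect sizes look sizeable because only extreme results survived.
```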
 
Next, I'll talk about reliability in biomedical science testing.
