
Biomedical Sciences (KCOM)

Critical Reading for Biomedical Sciences Objectives

Objectives: 

1. Participants will demonstrate an introductory understanding of the required components of a scientific research paper.

2. Participants will identify the definition, purpose and process of critical appraisal.

3. Participants will demonstrate an introductory understanding of what elements to appraise within each section of a research paper.

4. Participants will describe the critical appraisal findings of their chosen research article.

Types of Bias

Bias can occur in a study's design and methodology, and it can distort the study's findings so that they no longer accurately reflect the true results. No study is completely free from bias. Through critical appraisal, the reviewer should systematically check that the researchers have minimized and acknowledged all forms of bias. 

Selection bias: Systematic differences in the characteristics of the intervention and comparator groups. Blinded, random allocation of subjects to the intervention and control groups will reduce the risk of selection bias. 
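As a purely illustrative aside (not part of the original guide), the sketch below shows one way simple random allocation can be generated so that group assignment does not depend on any participant characteristic; the participant IDs and function name are hypothetical.

```python
import random

def randomize(participant_ids, seed=None):
    """Shuffle participants and split them evenly into two study arms."""
    rng = random.Random(seed)          # seeded generator gives a reproducible allocation list
    ids = list(participant_ids)
    rng.shuffle(ids)                   # random order: no systematic link to participant characteristics
    half = len(ids) // 2
    return {pid: ("intervention" if i < half else "control")
            for i, pid in enumerate(ids)}

# Hypothetical example with six participant IDs
print(randomize(["P01", "P02", "P03", "P04", "P05", "P06"], seed=42))
```

In practice the allocation sequence is also concealed from the people enrolling participants, which is what makes the placement "blind."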

Performance bias: Differences between groups in the care that is provided, or in the exposure to factors other than the interventions of interest. Blinding of participants, researchers and outcome assessors will reduce the risk of performance bias. 

Attrition bias: Loss of participants from either the control or intervention group through withdrawal or dropout, which undermines the comparison between the intervention and control groups. Enrolling more participants than the minimum deemed necessary helps compensate for expected withdrawals and prevents the loss of needed data points. 
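A common rule of thumb for sizing that over-enrollment (not stated in this guide, but widely used in trial planning): if a study needs $n$ analyzable participants and expects a dropout proportion $d$, the enrollment target is inflated to

\[ n_{\text{enrolled}} = \frac{n}{1 - d} \]

For example, a trial that requires 100 completers and anticipates 20% attrition would enroll about 100 / (1 − 0.2) = 125 participants.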

Detection bias: Differences between groups in how outcomes are determined. To minimize detection bias, the methodology applied to the intervention and control groups must be identical except for the intervention being measured. Blinding of participants, researchers and assessors will also minimize detection bias. 

Reporting bias: Differences between reported and unreported findings. To minimize reporting bias, all data collected during a study must be presented objectively, regardless of the results. 

For more information: Cochrane Training Handbook for Assessing Risk of Bias

The What & Why of Critical Appraisal

Critical Appraisal

"Critical appraisal is the process of carefully and systematically examining research to judge its trustworthiness, and its value and relevance in particular context." Burls, A. (2009). What is critical appraisal? In What Is This Series: Evidence-based medicine. Available online at What is Critical Appraisal

It is an essential skill for ensuring that the evidence taken from a research study is valid and reliable. 

Why is critical appraisal important? 

  • To determine the quality of the research
  • To evaluate bias, methodology and applicability
  • Even peer-reviewed research can have questionable methodology or biases, or may simply not be relevant to your focus of study. 

The Critical Appraisal Process

Critical Appraisal Questions

Abstract: (Summary)

  • A brief summary of each section: introduction, objectives, scope, materials, methodology, results, and conclusion of the study findings.
    • Does the author establish a brief background of the issue to be examined?
    • Does the author provide rationale and identify a research gap? 
    • Does the author state the research question?
    • Does the author briefly describe the methodology of the study? 
    • Does the author briefly state the most important findings? 
    • Does the author describe the impact of the findings on the wider community/population? 

Introduction: (opinion/subjective)

  • Does the author clearly state the purpose of the study and/or the research question?
  • Does the author provide recent, credible sources to justify the purpose of the study? 

Methodology: (objective)

  • Will the chosen methodology answer the research question?
  • What variable is being measured?
  • Is there an appropriate comparator? 
  • Were the inclusion and exclusion criteria described and followed?
  • Were the intervention and comparator groups similar in size and characteristics?
  • What was the rate of attrition?
  • What statistical tests were used for the measurements? 
  • Was the protocol for performing the test described in detail? Was it cited?
  • Is the study reproducible based on the methodology described? 

Results: (objective)

  • What are the results? Do they answer the research question?
  • Are the results presented in the tables, charts, and/or graphs clear and accurate? Do the data visualizations match the described results?
  • Were the results presented in an objective manner, without comment, bias, or interpretation?
  • Were the results of the study consistent with other available evidence?

Discussion: (subjective)

  • Does the discussion describe the relationship between the results and the original hypothesis or research question? 
  • Are the results of the study discussed in relation to previous studies in order to arrive at an explanation of the observed phenomena?
  • Are possible explanations of unexpected results and observations discussed? These should be phrased as hypotheses that can be tested by realistic experimental procedures.
  • Are principal points discussed and summarized? 
  • Does the author acknowledge possible biases? 
  • Does the author list limitations?
  • Has the author suggested future research? 
  • Has the author stated their conclusions and contributions? 

Conclusion: (opinion/subjective)

  • The authors' inferences, opinions, and hypotheses about the results; the views the authors draw from the data.