Critical Evaluation of Quantitative Research


A guide to critical appraisal of evidence

Fineout-Overholt, Ellen PhD, RN, FNAP, FAAN

Ellen Fineout-Overholt is the Mary Coulter Dowdy Distinguished Professor of Nursing at the University of Texas at Tyler School of Nursing, Tyler, Tex.

The author has disclosed no financial relationships related to this article.

Critical appraisal is the assessment of research studies' worth to clinical practice. Critical appraisal—the heart of evidence-based practice—involves four phases: rapid critical appraisal, evaluation, synthesis, and recommendation. This article reviews each phase and provides examples, tips, and caveats to help evidence appraisers successfully determine what is known about a clinical issue. Patient outcomes are improved when clinicians apply a body of evidence to daily practice.

How do nurses assess the quality of clinical research? This article outlines a stepwise approach to critical appraisal of research studies' worth to clinical practice: rapid critical appraisal, evaluation, synthesis, and recommendation. When critical care nurses apply a body of valid, reliable, and applicable evidence to daily practice, patient outcomes are improved.


Critical care nurses can best explain the reasoning for their clinical actions when they understand the worth of the research supporting their practices. In critical appraisal, clinicians assess the worth of research studies to clinical practice. Given that improved patient outcomes are the reason patients enter the healthcare system, nurses must be confident their care techniques will reliably achieve the best outcomes.

Nurses must verify that the information supporting their clinical care is valid, reliable, and applicable. Validity of research refers to the quality of the research methods used, that is, how well the researchers conducted the study. Reliability of research means similar outcomes can be achieved when clinicians replicate the care techniques of a study. Applicability of research means it was conducted in a sample similar to the patients to whom the findings will be applied. These three criteria determine a study's worth in clinical practice.

Appraising the worth of research requires a standardized approach. This approach applies to both quantitative research (research that deals with counting things and comparing those counts) and qualitative research (research that describes experiences and perceptions). The word critique has a negative connotation, and in the past some clinicians were taught that studies with flaws should be discarded. Today, all valid and reliable research should be considered informative to our understanding of best practice. Therefore, the author developed a critical appraisal methodology that enables clinicians to determine quickly which evidence is worth keeping and which must be discarded because of poor validity, reliability, or applicability.

Evidence-based practice process

The evidence-based practice (EBP) process is a seven-step problem-solving approach that begins with data gathering (see Seven steps to EBP). During daily practice, clinicians gather data supporting inquiry into a particular clinical issue (Step 0). The issue is then framed as an answerable question (Step 1) using the PICOT question format (Population of interest; Issue of interest or intervention; Comparison to the intervention; desired Outcome; and Time for the outcome to be achieved).1 Consistently using the PICOT format helps ensure that all elements of the clinical issue are covered. Next, clinicians conduct a systematic search to gather data answering the PICOT question (Step 2). Using the PICOT framework, clinicians can systematically search multiple databases to find available studies to help determine the best practice to achieve the desired outcome for their patients. When the systematic search is completed, the work of critical appraisal begins (Step 3). The known group of valid and reliable studies that answers the PICOT question is called the body of evidence and is the foundation for best practice implementation (Step 4). Next, clinicians evaluate the integration of best evidence with clinical expertise and patient preferences and values to determine whether the outcomes in the studies are realized in practice (Step 5). Because healthcare is a community of practice, it is important that experiences with evidence implementation be shared, whether the outcome is what was expected or not; this enables critical care nurses concerned with similar care issues to better understand what has been successful and what has not (Step 6).
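To make the PICOT format concrete, here is a minimal sketch in Python that assembles a hypothetical PICOT question; the clinical scenario, class name, and wording are illustrative assumptions, not taken from the article.

```python
from dataclasses import dataclass

@dataclass
class PicotQuestion:
    """Holds the five PICOT elements (hypothetical helper for illustration)."""
    population: str
    intervention: str
    comparison: str
    outcome: str
    time: str

    def as_question(self) -> str:
        # Assemble the elements into a single answerable question.
        return (f"In {self.population}, how does {self.intervention} "
                f"compared with {self.comparison} affect {self.outcome} "
                f"within {self.time}?")

# A hypothetical clinical question a critical care nurse might frame:
q = PicotQuestion(
    population="mechanically ventilated adults in the ICU",
    intervention="daily sedation interruption",
    comparison="continuous sedation",
    outcome="duration of mechanical ventilation",
    time="the ICU stay",
)
print(q.as_question())
```

Framing the question this way makes the subsequent systematic search reproducible: each field becomes a set of search terms that can be combined consistently across databases.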

Critical appraisal of evidence

The first phase of critical appraisal, rapid critical appraisal, begins with determining which studies will be kept in the body of evidence. All valid, reliable, and applicable studies on the topic should be included. This is accomplished using design-specific checklists with key markers of good research. When clinicians determine a study is one they want to keep (a "keeper" study) and that it belongs in the body of evidence, they move on to phase 2, evaluation.2

In the evaluation phase, the keeper studies are put together in a table so that they can be compared as a body of evidence, rather than individual studies. This phase of critical appraisal helps clinicians identify what is already known about a clinical issue. In the third phase, synthesis, certain data that provide a snapshot of a particular aspect of the clinical issue are pulled out of the evaluation table to showcase what is known. These snapshots of information underpin clinicians' decision-making and lead to phase 4, recommendation. A recommendation is a specific statement based on the body of evidence indicating what should be done—best practice. Critical appraisal is not complete without a specific recommendation. Each of the phases is explained in more detail below.

Phase 1: Rapid critical appraisal. Rapid critical appraisal involves using two tools that help clinicians determine if a research study is worthy of keeping in the body of evidence. The first tool, the General Appraisal Overview for All Studies (GAO), covers the basics of all research studies (see Elements of the General Appraisal Overview for All Studies). Sometimes, clinicians find gaps in their knowledge about certain elements of research studies (for example, sampling or statistics) and need to review some content. Conducting an internet search for resources that explain how to read a research paper, such as an instructional video or step-by-step guide, can be helpful. Finding basic definitions of research methods often helps resolve identified gaps.

To accomplish the GAO, it is best to begin by finding out why the study was conducted and how it answers the PICOT question (for example, does it provide information critical care nurses want to know from the literature?). If the study purpose helps answer the PICOT question, then the type of study design is evaluated. The study design is compared with the hierarchy of evidence for the type of PICOT question; the higher the design falls within the hierarchy, or levels of evidence, the more confidence nurses can have in its findings, if the study was conducted well.3,4 Next, find out what the researchers wanted to learn from their study. These are called the research questions or hypotheses. Research questions are just what they imply: insufficient information from theories or the literature is available to guide an educated guess, so a question is asked. Hypotheses are reasonable expectations, guided by understanding from theory and other research, that predict what will be found when the research is conducted. The research questions or hypotheses provide the purpose of the study.

Next, the sample size is evaluated. Every study design carries expectations about sample size. As a rule, quantitative study designs require a sample large enough to establish that observed relationships are unlikely to have occurred by chance. In general, the more participants in a study, the more confidence in the findings. Qualitative designs operate best with fewer people in the sample because these designs represent a deeper dive into the understanding or experience of each person in the study.5 It is always important to describe the sample, as clinicians need to know if the study sample resembles their patients. It is equally important to identify the major variables in the study and how they are defined, because this helps clinicians best understand what the study is about.
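Because sample size expectations are design-specific, intervention studies are typically planned with a formal power calculation. The following sketch, assuming the statsmodels library is installed, estimates participants per group for a two-arm comparison; the effect size, alpha, and power values are hypothetical planning assumptions.

```python
from statsmodels.stats.power import TTestIndPower

# Hypothetical planning values: medium effect (Cohen's d = 0.5),
# two-sided alpha = 0.05, desired power = 0.80.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05,
                                   power=0.80, alternative='two-sided')
print(f"Participants needed per group: {n_per_group:.0f}")  # about 64
```

When appraising a study, comparing the reported sample size against such a calculation helps judge whether a null finding reflects a true absence of effect or simply an underpowered design.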

The final step in the GAO is to consider the analyses that answer the study research questions or confirm the study hypotheses. This is another opportunity for clinicians to learn, as statistics education in healthcare has traditionally focused on conducting statistical tests rather than interpreting them. Understanding what the statistics indicate about the study findings is an imperative of critical appraisal of quantitative evidence.
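As a small worked illustration of interpretation rather than computation, the sketch below simulates two groups and runs an independent-samples t-test with SciPy; the data, group labels, and outcome are invented for this example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Hypothetical outcome data: SaO2 (%) in an intervention and a control group.
intervention = rng.normal(96.5, 1.5, 40)
control = rng.normal(95.5, 1.5, 40)

t_stat, p_value = stats.ttest_ind(intervention, control)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")
# Interpretation: a P-value below the chosen alpha (commonly .05) suggests the
# group difference is unlikely to be due to chance alone; whether the size of
# that difference matters to patients remains a clinical judgment.
```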

The second tool is one of a variety of rapid critical appraisal checklists that address the validity, reliability, and applicability of specific study designs (see Critical appraisal resources). When choosing a checklist to use with a group of critical care nurses, it is important to verify that the checklist is complete and simple to use. Be sure the checklist addresses three key questions. The first question is: Are the results of the study valid? Related subquestions should help nurses discern if certain markers of good research design are present within the study. For example, identifying that study participants were randomly assigned to study groups is an essential marker of good research for a randomized controlled trial. Checking these essential markers helps clinicians quickly review a study against these important requirements. Clinical judgment is required when the study lacks any of the identified quality markers: clinicians must discern whether the absence of any of the essential markers negates the usefulness of the study findings.6-9


The second question is: What are the study results? This is answered by reviewing whether the study found what it was expecting to and if those findings were meaningful to clinical practice. Basic knowledge of how to interpret statistics is important for understanding quantitative studies, and basic knowledge of qualitative analysis greatly facilitates understanding those results.6-9

The third question is: Are the results applicable to my patients? Answering this question involves consideration of the feasibility of implementing the study findings into the clinicians' environment as well as any contraindication within the clinicians' patient populations. Consider issues such as organizational politics, financial feasibility, and patient preferences.6-9

When these questions have been answered, clinicians must decide whether to keep the particular study in the body of evidence. Once the final group of keeper studies is identified, clinicians are ready to move into the next phase of critical appraisal, evaluation.6-9

Phase 2: Evaluation. The goal of evaluation is to determine how studies within the body of evidence agree or disagree by identifying common patterns of information across studies. For example, an evaluator may compare whether the same intervention is used or whether the outcomes are measured in the same way across all studies. A useful tool to help clinicians accomplish this is an evaluation table. This table serves two purposes: first, it enables clinicians to extract data from the studies and place the information in one table for easy comparison with other studies; and second, it eliminates the need for further searching through piles of periodicals for the information. (See Bonus Content: Evaluation table headings.) Although the information in each column may not be part of clinicians' daily work, understanding it is important so they can explain the patterns of agreement or disagreement they identify across studies. Further, the in-depth understanding of the body of evidence gained from the evaluation table supports discussion of the relevant clinical issue from a place of knowledge and experience, which affords the most confidence and facilitates best practice. The patterns and in-depth understanding are what lead to the synthesis phase of critical appraisal.

The key to a successful evaluation table is simplicity. Entering data into the table in a simple, consistent manner offers more opportunity for comparing studies.6-9 For example, using abbreviations rather than complete sentences in all columns except the final one allows for ease of comparison. Consider the dependent variable of depression, defined as "feelings of severe despondency and dejection" in one study and as "feeling sad and lonely" in another.10 Because these are two different definitions, they must be treated as two different dependent variables, each with its own name and abbreviation, so that the comparison across studies remains accurate.


Sample and theoretical or conceptual underpinnings are important to understanding how studies compare. Similar samples and settings across studies increase agreement. Several studies with the same conceptual framework increase the likelihood of common independent and dependent variables. The findings of a study depend on the analyses conducted, which is why an analysis column is dedicated to recording the kind of analysis used (for example, the name of the statistical test for quantitative studies). Only statistics that help answer the clinical question belong in this column. The findings column must contain a result for each of the analyses listed, reported as actual results rather than in words. For example, if a clinician lists a t-test in the analysis column, the findings column should contain the t-value showing whether the groups differ, along with the probability (P-value or confidence interval) reflecting statistical significance. The explanation of these results goes in the last column, which describes the worth of the research to practice. This column is much more flexible and contains other information such as the level of evidence, the study's strengths and limitations, any caveats about the methodology, or other aspects of the study relevant to its use in practice. The final piece of information in this column is a recommendation for how the study would be used in practice. Each of the studies in the body of evidence that addresses the clinical question is placed in one evaluation table to facilitate comparison across the studies. This comparison sets the stage for synthesis.
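A minimal sketch of such an evaluation table, assuming the pandas library and two wholly hypothetical keeper studies, might look like the following; note the consistent abbreviations and the one-result-per-analysis findings column described above.

```python
import pandas as pd

# Hypothetical keeper studies entered with consistent abbreviations
# (MT = music therapy; SaO2 = oxygen saturation; RR = respiratory rate).
evaluation_table = pd.DataFrame([
    {"Study": "Smith 2019", "Design": "RCT", "Sample": "n=60 ICU infants",
     "IV": "MT", "DV": "SaO2, RR", "Analysis": "t-test",
     "Findings": "t=2.41, P=.02", "Worth to practice": "Level II; strong design"},
    {"Study": "Jones 2020", "Design": "quasi-exp", "Sample": "n=45 ICU infants",
     "IV": "MT", "DV": "SaO2", "Analysis": "t-test",
     "Findings": "t=1.10, P=.28 (NS)", "Worth to practice": "Level III; small sample"},
])
print(evaluation_table.to_string(index=False))
```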

Phase 3: Synthesis. In the synthesis phase, clinicians pull key information out of the evaluation table to produce a snapshot of the body of evidence. A table also is used here to feature what is known and to help all those viewing the synthesis table come to the same conclusion. A hypothetical example table included here demonstrates that a music therapy intervention is effective in improving the outcome of oxygen saturation (SaO2) in six of the eight studies in the body of evidence that evaluated that outcome (see Sample synthesis table: Impact on outcomes). Simply using arrows to indicate the direction of effect offers readers a collective view of the agreement across studies that prompts action. Action may be to change practice, affirm current practice, or conduct research to strengthen the body of evidence by collaborating with nurse scientists.

When synthesizing evidence, at least two synthesis tables are recommended: a level-of-evidence table, plus an impact-on-outcomes table for quantitative questions (such as therapy questions) or a relevant-themes table for "meaning" questions about human experience. (See Bonus Content: Level of evidence for intervention studies: Synthesis of type.) The sample synthesis table also demonstrates that a final column labeled synthesis indicates agreement across the studies. Of the three outcomes, the one most reliably improved by music therapy is SaO2, with positive results in six of eight studies. The second most reliable outcome is reduction of an increased respiratory rate (RR). Parental engagement has the least support as a reliable outcome, with only two of five studies showing positive results. Synthesis tables make the recommendation clear to all those involved in caring for that patient population. Although the two synthesis tables mentioned are a great start, the evidence may require more synthesis tables to adequately explain what is known. These tables are the foundation that supports clinically meaningful recommendations.
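The agreement that a synthesis table displays can also be tallied programmatically. The sketch below uses pandas with invented per-study results chosen to mirror the counts described above (six of eight for SaO2, five of six for RR, two of five for parental engagement); None marks an outcome a study did not measure.

```python
import pandas as pd

# Hypothetical impact-on-outcomes synthesis: entries give the direction of
# effect each study reported; None means the outcome was not measured.
synthesis = pd.DataFrame(
    {"SaO2": ["up", "up", "up", "no effect", "up", "up", "no effect", "up"],
     "RR": ["down", "down", "no effect", None, "down", None, "down", "down"],
     "Parental engagement": ["up", None, "no effect", "no effect",
                             "up", None, "no effect", None]},
    index=[f"Study {i}" for i in range(1, 9)],
)

# Count how many of the studies measuring each outcome showed the desired effect.
for outcome, desired in [("SaO2", "up"), ("RR", "down"),
                         ("Parental engagement", "up")]:
    measured = synthesis[outcome].dropna()
    print(f"{outcome}: {(measured == desired).sum()} of {len(measured)} studies")
```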

Phase 4: Recommendation. Recommendations are definitive statements based on what is known from the body of evidence. For example, with an intervention question, clinicians should be able to discern from the evidence whether they will reliably get the desired outcome when they deliver the intervention as it was delivered in the studies. In the sample synthesis table, the recommendation would be to implement the music therapy intervention across all settings with the population, and to measure SaO2 and RR, with the expectation that both would be optimally improved by the intervention. When the synthesis demonstrates that studies consistently verify that an intervention produces a desired outcome, yet that intervention is not currently practiced, current care is not best practice. Therefore, a firm recommendation to deliver the intervention and measure the appropriate outcomes must be made, which concludes critical appraisal of the evidence.

A recommendation that is off limits is to conduct more research, as this is not the focus of clinicians' critical appraisal. When the evidence is insufficient to recommend a practice change, the recommendation is instead to continue current practice and monitor outcomes and processes until more reliable studies can be added to the body of evidence. Researchers who use the critical appraisal process may indeed identify gaps in knowledge, research methods, or analyses, and can then recommend studies to fill those gaps. In this way, clinicians and nurse scientists work together to build relevant, efficient bodies of evidence that guide clinical practice.

Evidence into action

Critical appraisal helps clinicians understand the literature so they can implement it. Critical care nurses have a professional and ethical responsibility to make sure their care is based on a solid foundation of available evidence that is carefully appraised using the phases outlined here. Critical appraisal allows for decision-making based on evidence that demonstrates reliable outcomes. Any other approach to the literature is likely haphazard and may lead to misguided care and unreliable outcomes.11 Evidence translated into practice should have the desired outcomes and their measurement defined from the body of evidence. It is also imperative that all critical care nurses carefully monitor care delivery outcomes to establish that best outcomes are sustained. With the EBP paradigm as the basis for decision-making and the EBP process as the basis for addressing clinical issues, critical care nurses can improve patient, provider, and system outcomes by providing best care.

Seven steps to EBP

Step 0–A spirit of inquiry to notice internal data that indicate an opportunity for positive change.

Step 1–Ask a clinical question using the PICOT question format.

Step 2–Conduct a systematic search to find out what is already known about a clinical issue.

Step 3–Conduct a critical appraisal (rapid critical appraisal, evaluation, synthesis, and recommendation).

Step 4–Implement best practices by blending external evidence with clinician expertise and patient preferences and values.

Step 5–Evaluate evidence implementation to see if study outcomes happened in practice and if the implementation went well.

Step 6–Share project results, good or bad, with others in healthcare.

Adapted from: Steps of the evidence-based practice (EBP) process leading to high-quality healthcare and best patient outcomes. © Melnyk & Fineout-Overholt, 2017. Used with permission.

Critical appraisal resources

  • The Joanna Briggs Institute http://joannabriggs.org/research/critical-appraisal-tools.html
  • Critical Appraisal Skills Programme (CASP) www.casp-uk.net/casp-tools-checklists
  • Center for Evidence-Based Medicine www.cebm.net/critical-appraisal
  • Melnyk BM, Fineout-Overholt E. Evidence-Based Practice in Nursing and Healthcare: A Guide to Best Practice . 3rd ed. Philadelphia, PA: Wolters Kluwer; 2015.

A full set of critical appraisal checklists is available in the appendices.

Bonus content!

This article includes supplementary online-exclusive material. Visit the online version of this article at www.nursingcriticalcare.com to access this content.

Keywords: critical appraisal; decision-making; evaluation of research; evidence-based practice; synthesis


  • Teesside University Student & Library Services
  • Learning Hub Group

Critical Appraisal for Health Students


Appraisal of a Quantitative paper: Top tips


Critical appraisal of a quantitative paper (RCT)

This guide, aimed at health students, provides basic level support for appraising quantitative research papers. It's designed for students who have already attended lectures on critical appraisal. One framework for appraising quantitative research (based on reliability, internal and external validity) is provided and there is an opportunity to practise the technique on a sample article.

Please note this framework is for appraising one particular type of quantitative research, a Randomised Controlled Trial (RCT), which is defined as:

a trial in which participants are randomly assigned to one of two or more groups: the experimental group or groups receive the intervention or interventions being tested; the comparison group (control group) receive usual care or no treatment or a placebo. The groups are then followed up to see if there are any differences between the results. This helps in assessing the effectiveness of the intervention. (CASP, 2020)
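To illustrate the allocation step in this definition, here is a minimal sketch of simple randomisation for a hypothetical two-arm trial. Real trials add allocation concealment and often block or stratified randomisation, which this sketch omits.

```python
import random

# Twenty hypothetical participant IDs allocated by chance to two arms.
participants = [f"P{i:03d}" for i in range(1, 21)]
random.seed(7)  # fixed seed for a reproducible illustration only
random.shuffle(participants)

half = len(participants) // 2
allocation = {"intervention": participants[:half],
              "control": participants[half:]}
for arm, ids in allocation.items():
    print(arm, ids)
```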

Support materials

  • Framework for reading quantitative papers (RCTs)
  • Critical appraisal of a quantitative paper PowerPoint

To practise following this framework for critically appraising a quantitative article, please look at the following article:

Marrero, D.G. et al. (2016) 'Comparison of commercial and self-initiated weight loss programs in people with prediabetes: a randomized control trial', AJPH Research, 106(5), pp. 949-956.

Critical Appraisal of a quantitative paper (RCT): practical example

  • Internal Validity
  • External Validity
  • Reliability Measurement Tool

How to use this practical example 

Using the framework, you can have a go at appraising a quantitative paper - we are going to look at the following article:

Marrero, D.G. et al. (2016) 'Comparison of commercial and self-initiated weight loss programs in people with prediabetes: a randomized control trial', AJPH Research, 106(5), pp. 949-956.

Step 1. Take a quick look at the article.

Step 2. Click on the Internal Validity tab above - there are questions to help you appraise the article. Read the questions and look for the answers in the article.

Step 3. Click on each question and our answers will appear.

Step 4. Repeat with the other aspects of external validity and reliability.

Questioning the internal validity:

  • Randomisation: How were participants allocated to each group? Did a randomisation process take place?
  • Comparability of groups: How similar were the groups (e.g. age, sex, ethnicity)? Is this made clear?
  • Blinding (none, single, double or triple): Who was not aware of which group a patient was in (e.g. nobody; only the patient; patient and clinician; patient, clinician, and researcher)? Was it feasible for more blinding to have taken place?
  • Equal treatment of groups: Were both groups treated in the same way?
  • Attrition: What percentage of participants dropped out? Did this adversely affect one group? Has this been evaluated?
  • Overall internal validity: Does the research measure what it is supposed to be measuring?

Questioning the external validity:

  • Attrition: Was everyone accounted for at the end of the study? Was any attempt made to contact drop-outs?
  • Sampling approach: How was the sample selected? Was it based on probability or non-probability? What was the approach (e.g. simple random, convenience)? Was this an appropriate approach?
  • Sample size (power calculation): How many participants? Was a sample size calculation performed? Did the study pass it?
  • Exclusion/inclusion criteria: Were the criteria set out clearly? Were they based on recognised diagnostic criteria?
  • Overall external validity: Can the results be applied to the wider population?

Questioning the reliability (measurement tool):

  • Internal consistency reliability (Cronbach's alpha): Has a Cronbach's alpha score of 0.7 or above been included?
  • Test re-test reliability (correlation): Was the test repeated more than once? Were the same results received? Has a correlation coefficient been reported? Is it above 0.7?
  • Validity of measurement tool: Is it an established tool? If not, what has been done to check its reliability (pilot study, expert panel, literature review)? Criterion validity (testing against other tools): has a criterion validity comparison been carried out, and was the score above 0.7?
  • Overall reliability: How consistent are the measurements?

Overall validity and reliability: Overall, how valid and reliable is the paper?
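The internal-consistency question above refers to Cronbach's alpha. As a worked illustration, the sketch below computes alpha from a hypothetical respondents-by-items score matrix using the standard formula, alpha = k/(k-1) x (1 - sum of item variances / variance of total scores); the data are invented.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Hypothetical questionnaire scores: 5 respondents x 4 items.
scores = np.array([[4, 5, 4, 5],
                   [3, 3, 4, 3],
                   [5, 5, 5, 4],
                   [2, 3, 2, 3],
                   [4, 4, 5, 4]])
print(f"alpha = {cronbach_alpha(scores):.2f}")  # ~0.91; 0.7 is the usual threshold
```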


Research Evaluation


Carlo Ghezzi


This chapter is about research evaluation. Evaluation is quintessential to research. It is traditionally performed through qualitative expert judgement. The chapter presents the main evaluation activities in which researchers can be engaged. It also introduces the current efforts towards devising quantitative research evaluation based on bibliometric indicators and critically discusses their limitations, along with their possible (limited and careful) use.
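As a concrete example of the bibliometric indicators the chapter critiques, the h-index is among the most widely used: the largest h such that a researcher has h papers each cited at least h times. A minimal sketch with hypothetical citation counts:

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank  # this paper still has enough citations for its rank
        else:
            break
    return h

# Hypothetical citation counts for one researcher's papers:
print(h_index([25, 8, 5, 3, 3, 1]))  # -> 3
```

Note that a single highly cited paper barely moves the index, which is one instance of the limitations such indicators carry when used as the sole measure of research quality.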




Author information

Authors and affiliations

Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milano, Italy

Carlo Ghezzi


Corresponding author

Correspondence to Carlo Ghezzi.


Copyright information

© 2020 Springer Nature Switzerland AG

About this chapter

Ghezzi, C. (2020). Research Evaluation. In: Being a Researcher. Springer, Cham. https://doi.org/10.1007/978-3-030-45157-8_5


DOI : https://doi.org/10.1007/978-3-030-45157-8_5

Published : 23 June 2020

Publisher Name : Springer, Cham

Print ISBN : 978-3-030-45156-1

Online ISBN : 978-3-030-45157-8




Critical Appraisal of Studies

Critical appraisal is the process of carefully and systematically examining research to judge its trustworthiness, and its value/relevance in a particular context by providing a framework to evaluate the research. During the critical appraisal process, researchers can:

  • Decide whether studies have been undertaken in a way that makes their findings reliable as well as valid and unbiased
  • Make sense of the results
  • Know what these results mean in the context of the decision they are making
  • Determine if the results are relevant to their patients/schoolwork/research

Burls, A. (2009). What is critical appraisal? In What Is This Series: Evidence-based medicine. Available online.

Critical appraisal is part of writing high-quality reviews, such as systematic and integrative reviews, and of evaluating evidence from RCTs and other study designs. For more information on systematic reviews, check out our Systematic Review guide.


Review Article. Published: 20 January 2009.

How to critically appraise an article

Jane M Young & Michael J Solomon

Nature Clinical Practice Gastroenterology & Hepatology volume 6, pages 82–91 (2009)


Critical appraisal is a systematic process used to identify the strengths and weaknesses of a research article in order to assess the usefulness and validity of research findings. The most important components of a critical appraisal are an evaluation of the appropriateness of the study design for the research question and a careful assessment of the key methodological features of this design. Other factors that also should be considered include the suitability of the statistical methods used and their subsequent interpretation, potential conflicts of interest and the relevance of the research to one's own practice. This Review presents a 10-step guide to critical appraisal that aims to assist clinicians to identify the most relevant high-quality studies available to guide their clinical practice.

Critical appraisal is a systematic process used to identify the strengths and weaknesses of a research article

Critical appraisal provides a basis for decisions on whether to use the results of a study in clinical practice

Different study designs are prone to various sources of systematic bias

Design-specific, critical-appraisal checklists are useful tools to help assess study quality

Assessments of other factors, including the importance of the research question, the appropriateness of statistical analysis, the legitimacy of conclusions and potential conflicts of interest are an important part of the critical appraisal process




Author information

Authors and affiliations

JM Young is an Associate Professor of Public Health and the Executive Director of the Surgical Outcomes Research Centre at the University of Sydney and Sydney South-West Area Health Service, Sydney, Australia.

Jane M Young

MJ Solomon is Head of the Surgical Outcomes Research Centre and Director of Colorectal Research at the University of Sydney and Sydney South-West Area Health Service, Sydney, Australia.

Michael J Solomon


Corresponding author

Correspondence to Jane M Young.

Ethics declarations

Competing interests.

The authors declare no competing financial interests.


About this article

Cite this article.

Young, J., Solomon, M. How to critically appraise an article. Nat Rev Gastroenterol Hepatol 6 , 82–91 (2009). https://doi.org/10.1038/ncpgasthep1331


Received : 10 August 2008

Accepted : 03 November 2008

Published : 20 January 2009

Issue Date : February 2009

DOI : https://doi.org/10.1038/ncpgasthep1331




  • Applied Mathematics
  • Biomathematics and Statistics
  • History of Mathematics
  • Mathematical Education
  • Mathematical Finance
  • Mathematical Analysis
  • Numerical and Computational Mathematics
  • Probability and Statistics
  • Pure Mathematics
  • Browse content in Neuroscience
  • Cognition and Behavioural Neuroscience
  • Development of the Nervous System
  • Disorders of the Nervous System
  • History of Neuroscience
  • Invertebrate Neurobiology
  • Molecular and Cellular Systems
  • Neuroendocrinology and Autonomic Nervous System
  • Neuroscientific Techniques
  • Sensory and Motor Systems
  • Browse content in Physics
  • Astronomy and Astrophysics
  • Atomic, Molecular, and Optical Physics
  • Biological and Medical Physics
  • Classical Mechanics
  • Computational Physics
  • Condensed Matter Physics
  • Electromagnetism, Optics, and Acoustics
  • History of Physics
  • Mathematical and Statistical Physics
  • Measurement Science
  • Nuclear Physics
  • Particles and Fields
  • Plasma Physics
  • Quantum Physics
  • Relativity and Gravitation
  • Semiconductor and Mesoscopic Physics
  • Browse content in Psychology
  • Affective Sciences
  • Clinical Psychology
  • Cognitive Psychology
  • Cognitive Neuroscience
  • Criminal and Forensic Psychology
  • Developmental Psychology
  • Educational Psychology
  • Evolutionary Psychology
  • Health Psychology
  • History and Systems in Psychology
  • Music Psychology
  • Neuropsychology
  • Organizational Psychology
  • Psychological Assessment and Testing
  • Psychology of Human-Technology Interaction
  • Psychology Professional Development and Training
  • Research Methods in Psychology
  • Social Psychology
  • Browse content in Social Sciences
  • Browse content in Anthropology
  • Anthropology of Religion
  • Human Evolution
  • Medical Anthropology
  • Physical Anthropology
  • Regional Anthropology
  • Social and Cultural Anthropology
  • Theory and Practice of Anthropology
  • Browse content in Business and Management
  • Business Strategy
  • Business Ethics
  • Business History
  • Business and Government
  • Business and Technology
  • Business and the Environment
  • Comparative Management
  • Corporate Governance
  • Corporate Social Responsibility
  • Entrepreneurship
  • Health Management
  • Human Resource Management
  • Industrial and Employment Relations
  • Industry Studies
  • Information and Communication Technologies
  • International Business
  • Knowledge Management
  • Management and Management Techniques
  • Operations Management
  • Organizational Theory and Behaviour
  • Pensions and Pension Management
  • Public and Nonprofit Management
  • Social Issues in Business and Management
  • Strategic Management
  • Supply Chain Management
  • Browse content in Criminology and Criminal Justice
  • Criminal Justice
  • Criminology
  • Forms of Crime
  • International and Comparative Criminology
  • Youth Violence and Juvenile Justice
  • Development Studies
  • Browse content in Economics
  • Agricultural, Environmental, and Natural Resource Economics
  • Asian Economics
  • Behavioural Finance
  • Behavioural Economics and Neuroeconomics
  • Econometrics and Mathematical Economics
  • Economic Systems
  • Economic History
  • Economic Methodology
  • Economic Development and Growth
  • Financial Markets
  • Financial Institutions and Services
  • General Economics and Teaching
  • Health, Education, and Welfare
  • History of Economic Thought
  • International Economics
  • Labour and Demographic Economics
  • Law and Economics
  • Macroeconomics and Monetary Economics
  • Microeconomics
  • Public Economics
  • Urban, Rural, and Regional Economics
  • Welfare Economics
  • Browse content in Education
  • Adult Education and Continuous Learning
  • Care and Counselling of Students
  • Early Childhood and Elementary Education
  • Educational Equipment and Technology
  • Educational Strategies and Policy
  • Higher and Further Education
  • Organization and Management of Education
  • Philosophy and Theory of Education
  • Schools Studies
  • Secondary Education
  • Teaching of a Specific Subject
  • Teaching of Specific Groups and Special Educational Needs
  • Teaching Skills and Techniques
  • Browse content in Environment
  • Applied Ecology (Social Science)
  • Climate Change
  • Conservation of the Environment (Social Science)
  • Environmentalist Thought and Ideology (Social Science)
  • Management of Land and Natural Resources (Social Science)
  • Natural Disasters (Environment)
  • Pollution and Threats to the Environment (Social Science)
  • Social Impact of Environmental Issues (Social Science)
  • Sustainability
  • Browse content in Human Geography
  • Cultural Geography
  • Economic Geography
  • Political Geography
  • Browse content in Interdisciplinary Studies
  • Communication Studies
  • Museums, Libraries, and Information Sciences
  • Browse content in Politics
  • African Politics
  • Asian Politics
  • Chinese Politics
  • Comparative Politics
  • Conflict Politics
  • Elections and Electoral Studies
  • Environmental Politics
  • Ethnic Politics
  • European Union
  • Foreign Policy
  • Gender and Politics
  • Human Rights and Politics
  • Indian Politics
  • International Relations
  • International Organization (Politics)
  • Irish Politics
  • Latin American Politics
  • Middle Eastern Politics
  • Political Methodology
  • Political Communication
  • Political Philosophy
  • Political Sociology
  • Political Behaviour
  • Political Economy
  • Political Institutions
  • Political Theory
  • Politics and Law
  • Politics of Development
  • Public Administration
  • Public Policy
  • Qualitative Political Methodology
  • Quantitative Political Methodology
  • Regional Political Studies
  • Russian Politics
  • Security Studies
  • State and Local Government
  • UK Politics
  • US Politics
  • Browse content in Regional and Area Studies
  • African Studies
  • Asian Studies
  • East Asian Studies
  • Japanese Studies
  • Latin American Studies
  • Middle Eastern Studies
  • Native American Studies
  • Scottish Studies
  • Browse content in Research and Information
  • Research Methods
  • Browse content in Social Work
  • Addictions and Substance Misuse
  • Adoption and Fostering
  • Care of the Elderly
  • Child and Adolescent Social Work
  • Couple and Family Social Work
  • Direct Practice and Clinical Social Work
  • Emergency Services
  • Human Behaviour and the Social Environment
  • International and Global Issues in Social Work
  • Mental and Behavioural Health
  • Social Justice and Human Rights
  • Social Policy and Advocacy
  • Social Work and Crime and Justice
  • Social Work Macro Practice
  • Social Work Practice Settings
  • Social Work Research and Evidence-based Practice
  • Welfare and Benefit Systems
  • Browse content in Sociology
  • Childhood Studies
  • Community Development
  • Comparative and Historical Sociology
  • Disability Studies
  • Economic Sociology
  • Gender and Sexuality
  • Gerontology and Ageing
  • Health, Illness, and Medicine
  • Marriage and the Family
  • Migration Studies
  • Occupations, Professions, and Work
  • Organizations
  • Population and Demography
  • Race and Ethnicity
  • Social Theory
  • Social Movements and Social Change
  • Social Research and Statistics
  • Social Stratification, Inequality, and Mobility
  • Sociology of Religion
  • Sociology of Education
  • Sport and Leisure
  • Urban and Rural Studies
  • Browse content in Warfare and Defence
  • Defence Strategy, Planning, and Research
  • Land Forces and Warfare
  • Military Administration
  • Military Life and Institutions
  • Naval Forces and Warfare
  • Other Warfare and Defence Issues
  • Peace Studies and Conflict Resolution
  • Weapons and Equipment

Finding and Evaluating Evidence: Systematic Reviews and Evidence-Based Practice

3 Critically Appraising the Quality and Credibility of Quantitative Research for Systematic Reviews

  • Published: September 2011

This chapter looks at how to evaluate the quality and credibility of various types of quantitative research that might be included in a systematic review. Various factors that determine the quality and believability of a study will be presented, including:

  • assessing the study’s methods in terms of internal validity;
  • examining factors associated with external validity and relevance; and
  • evaluating the credibility of the research and researcher in terms of possible biases that might influence the research design, analysis, or conclusions.

The importance of transparency is highlighted.

Appraising Quantitative Research in Health Education: Guidelines for Public Health Educators

Leonard Jack, Jr.

Associate Dean for Research and Endowed Chair of Minority Health Disparities, College of Pharmacy, Xavier University of Louisiana, 1 Drexel Drive, New Orleans, Louisiana 70125; Telephone: 504-520-5345; Fax: 504-520-7971

Sandra C. Hayes

Central Mississippi Area Health Education Center, 350 West Woodrow Wilson, Suite 3320, Jackson, MS 39213; Telephone: 601-987-0272; Fax: 601-815-5388

Jeanfreau G. Scharalda

Louisiana State University Health Sciences Center School of Nursing, 1900 Gravier Street, New Orleans, Louisiana 70112; Telephone: 504-568-4140; Fax: 504-568-5853

Barbara Stetson

Department of Psychological and Brain Sciences, 317 Life Sciences Building, University of Louisville, Louisville, KY 40292; Telephone: 502-852-2540; Fax: 502-852-8904

Nkenge H. Jones-Jack

Epidemiologist & Evaluation Consultant, Metairie, Louisiana 70002. Telephone: 678-524-1147; Fax: 504-267-4080

Matthew Valliere

Chronic Disease Prevention and Control, Bureau of Primary Care and Rural Health, Office of the Secretary, 628 North 4th Street, Baton Rouge, LA 70821-3118; Telephone: 225-342-2655; Fax: 225-342-2652

William R. Kirchain

Division of Clinical and Administrative Sciences, College of Pharmacy, Xavier University of Louisiana, 1 Drexel Drive, Room 121, New Orleans, Louisiana 70125; Telephone: 504-520-5395; Fax: 504-520-7971

Michael Fagen

Co-Associate Editor for the Evaluation and Practice section of Health Promotion Practice , Department of Community Health Sciences, School of Public Health, University of Illinois at Chicago, 1603 W. Taylor St., M/C 923, Chicago, IL 60608-1260, Telephone: 312-355-0647; Fax: 312-996-3551

Cris LeBlanc

Centers of Excellence Scholar, College of Pharmacy, Xavier University of Louisiana, 1 Drexel Drive, New Orleans, Louisiana 70125; Telephone: 504-520-5345; Fax: 504-520-7971

Many practicing health educators do not feel they possess the skills necessary to critically appraise quantitative research. This article is designed to provide practicing health educators with basic tools that facilitate a better understanding of quantitative research. It describes the major components—title, introduction, methods, analyses, results and discussion sections—of quantitative research. Readers are introduced to the various types of study designs and to seven key questions health educators can use to facilitate the appraisal process. Upon reading, health educators will be in a better position to determine whether research studies are well designed and executed.

Appraising the Quality of Quantitative Research in Health Education

Practicing health educators often find themselves with little time to read published research in great detail. Health educators with limited time to read scientific papers may become frustrated as they get bogged down in research terminology, methods, and approaches. The purpose of appraising a scientific publication is to assess whether the study's research questions (hypotheses), methods, and results (findings) are sufficiently valid to produce useful information (Fowkes and Fulton, 1991; Donnelly, 2004; Greenhalgh and Taylor, 1997; Johnson and Onwuegbuzie, 2004; Greenhalgh, 1997; Yin, 2003; and Hennekens and Buring, 1987). The ability to deconstruct and reconstruct scientific publications is a critical skill in a results-oriented environment with increasing demands and expectations for improved program outcomes and strong justifications for program focus and direction. Health educators must not rely solely on the opinions of researchers but, rather, should build confidence in their own ability to discern the quality of published scientific research. Health educators with little experience reading and appraising scientific publications may find this task less difficult if they: 1) become more familiar with the key components of a research publication, and 2) use the questions presented in this article to critically appraise the strengths and weaknesses of published research.

Key Components of a Scientific Research Publication

The key components of a research publication should provide important information that is needed to assess the strengths and weaknesses of the research. Key components typically include the publication title, abstract, introduction, research methods used to address the research question(s) or hypothesis, statistical analysis used, results, and the researcher's interpretation and conclusion or recommended use of results to inform future research or practice. A brief description of these components follows:

Publication Title

A general heading or description should provide immediate insight into the intent of the research. Titles may include information regarding the focus of the research, population or target audience being studied, and study design.

Abstract

An abstract provides the reader with a brief description of the overall research, how it was done, the statistical techniques employed, key results, and relevant implications or recommendations.

Introduction

This section elaborates on the content mentioned in the abstract and provides a better idea of what to anticipate in the manuscript. The introduction provides a succinct presentation of previously published literature, thus offering a purpose (rationale) for the study.

Methods

This component of the publication provides critical information on the type of research methods used to conduct the study. Common examples of study designs used to conduct quantitative research include the cross-sectional study, cohort study, case-control study, and controlled trial. The methods section should contain information on the inclusion and exclusion criteria used to identify participants in the study.

Statistical Analysis

Quantitative data are quantifiable information, often gathered through surveys, that are analyzed using statistical tests to determine whether the results could have occurred by chance. Two types of statistical analyses are used: descriptive and inferential (Johnson and Onwuegbuzie, 2004). Descriptive statistics are used to describe the basic features of the study data and provide simple summaries about the sample and measures. With inferential statistics, researchers try to reach conclusions that extend beyond the immediate data alone; they use inferential statistics to make inferences from the data to more general conditions.

Results

This section presents the reader with the researcher's data and the results of the statistical analyses described in the methods section. Thus, this section must align closely with the methods section.

Discussion (Conclusion)

This section should explain what the data mean, thereby summarizing the main results and findings for the reader. Important limitations (such as the use of a non-random sample, the absence of a control group, and a short intervention duration) should be discussed. Researchers should discuss how each limitation can affect the applicability and use of study results. This section also presents recommendations on ways the study can help advance future health education research and practice.

Critically Appraising the Strengths and Weaknesses of Published Research

During careful reading of the analysis, results, and discussion (conclusion) sections, what key questions might you ask yourself in order to critically appraise the strengths and weaknesses of the research? Based on a careful review of the literature (Greenhalgh and Taylor, 1997; Greenhalgh, 1997; and Hennekens and Buring, 1987) and our research experiences, we have identified seven key questions to guide your assessment of quantitative research.

1) Is a study design identified and appropriately applied?

Study designs refer to the methodology used to investigate a particular health phenomenon. Becoming familiar with the various study designs will help prepare you to critically assess whether the chosen design was applied appropriately to answer the research questions (or hypotheses). As mentioned previously, common examples of study designs frequently used to conduct quantitative research include the cross-sectional study, cohort study, case-control study, and controlled trial. A brief description of each can be found in Table 1.

Definitions of Study Designs

A cross-sectional study is a descriptive study in which disease, risk factors, or other characteristics are measured simultaneously (at one particular point in time) in a given population.
A cohort study is an analytical study in which individuals with differing exposures to a suspected factor are identified and then observed for the occurrence of certain health effects over a period of time. Comparison may be made with a control group, but interventions are not normally applied in cohort studies.
A case-control study is an analytical study which compares individuals who have a specific condition ("cases") with a group of individuals without the condition ("controls"). A case-control study generally depends on the collection of retrospective data, thus introducing the possibility of recall bias. Recall bias is the tendency of subjects to report events in a manner that differs between the two groups studied.
A controlled trial is an experimental study in which an intervention is administered to one group of individuals (also referred to as the treatment, experimental, or study group) and the outcome is compared with that of a similar group (the control group) that did not receive the intervention. A controlled trial may or may not use randomization to assign individuals to groups, and it may or may not use blinding to prevent participants from knowing which treatment they receive. When study participants are randomly assigned (meaning everyone has an equal chance of being selected) to a treatment or control group, the design is referred to as a randomized controlled trial.

2) Is the study sample representative of the group from which it is drawn?

The study sample must be representative of the group from which it is drawn; that is, it must be typical of the wider target audience to whom the research might apply. Judging whether the study sample is representative requires taking into consideration both the sampling method and the sample size.

Sampling Method

Many sampling methods are used individually or in combination. Keep in mind that sampling methods are divided into two categories: probability sampling and non-probability sampling (Last, 2001). Probability sampling (also called random sampling) is any sampling scheme in which the probability of choosing each individual is the same (or at least known, so it can be readjusted mathematically to be equal). Non-probability sampling is any sampling scheme in which the probability of an individual being chosen is unknown. Researchers should offer a rationale for using non-probability sampling and, when it is used, acknowledge its limitations. For example, use of a convenience sample (choosing individuals in an unstructured manner) can be justified when collecting pilot data on which future studies employing more rigorous sampling methods will build.

Sample Size

Established statistical theories and formulas are used to generate sample size calculations—the recommended number of individuals necessary in order to have sufficient power to detect meaningful results at a certain level of statistical significance. In the methods section, look for a statement or two confirming whether steps were taken to obtain the appropriate sample size.

3) In research studies using a control group, is this group adequate for the purpose of the study?

Source of Controls

In case-control and cohort studies, the source of controls should be such that the distribution of characteristics not under investigation is similar to that in the cases or study cohort.

Matching

In case-control studies, both cases and controls are often matched on certain characteristics such as age, sex, income, and race. The criteria used for including and excluding study participants must be adequately described and examined carefully. Inclusion and exclusion criteria may include: ethnicity, age at diagnosis, length of time living with a health condition, geographic location, and presence or absence of complications. You should critically assess whether matching across these characteristics actually occurred.

4) What is the validity of measurements and outcomes identified in the study?

Validity is the extent to which a measurement captures what it claims to measure. This might take the form of questions contained in a survey, questionnaire, or instrument. Researchers should address one or more of the following types of validity: face, content, criterion-related, and construct (Last, 2001; Trochim and Donnelly, 2008). A minimal illustration of checking one of these, criterion-related validity, follows the definitions below.

Face validity

Face validity means that, on inspection, the measure appears to capture the variable it intends to measure. If the researcher has chosen to study a variable that has not been studied before, he/she usually will need to start with face validity.

Content validity

Content validity involves comparing the content of the measurement technique to the known literature on the topic and validating the fact that the tool (e.g., survey, questionnaire) does represent the literature accurately.

Criterion-related validity

Criterion-related validity involves making sure that the measures within a survey, when tested, prove effective in predicting a criterion or indicators of a construct.

Construct validity

Construct validity deals with the validation of the construct that underlies the research. Here, researchers test the theory that underlies the hypothesis or research question.

5) To what extent is blinding, and its absence as a source of bias, taken into account?

During data collection, a common source of bias is that subjects and/or those collecting the data are not blind to the purpose of the research. Bias can arise, for example, when researchers go the extra mile to make sure those in the experimental group benefit from the intervention (Fowkes and Fulton, 1991). Inadequate blinding can be a problem in studies using any type of study design. While total blinding is not always possible, it is essential to appraise whether steps were taken to ensure blinding where feasible.

6) To what extent is the study considered complete with regard to dropouts and missing data?

Regardless of the study design employed, one must assess not only the proportion of dropouts in each group but also why they dropped out. This may point to possible bias, as well as reveal what efforts were made to retain participants in the study.

Missing data

Missing data are a part of almost all research, but they should still be appraised. There are several reasons why data may be missing. The nature and extent of the missing data should be explained.

7) To what extent are study results influenced by factors that negatively impact their credibility?

Contamination

In research studies comparing the effectiveness of a structured intervention, contamination occurs when the control group makes changes based on learning what those participating in the intervention are doing. Although researchers typically do not report the extent to which contamination occurs, you should nevertheless try to assess whether contamination negatively affected the credibility of the study results.

Confounding factors

A confounding factor is a variable that is related to one or more of the measures defined in a study. A confounding factor may mask an actual association or falsely demonstrate an apparent association between the study variables where no real association between them exists. If confounding factors are not measured and considered, study results may be biased and compromised.

The guidelines and questions presented in this article are by no means exhaustive. However, when applied, they can help health education practitioners obtain a deeper understanding of the quality of published research. While no study is 100% perfect, we do encourage health education practitioners to pause before taking researchers at their word that study results are both accurate and impressive. If you find yourself answering ‘no’ to a majority of the key questions provided, then it is probably safe to say that, from your perspective, the quality of the research is questionable.

Over time, as you repeatedly apply the guidelines presented in this article, you will become more confident and interested in reading research publications from beginning to end. While this article is geared to health educators, it can help anyone interested in learning how to appraise published research. Table 2 lists additional reading resources that can help improve one’s understanding and knowledge of quantitative research. This article and the reading resources identified in Table 2 can serve as useful tools to frame informative conversations with your peers regarding the strengths and weaknesses of published quantitative research in health education.

Publications on How to Read, Write and Appraise Quantitative Research


  • Fowkes FG, Fulton PM. Critical appraisal of published research: introductory guidelines. British Medical Journal. 1991;302:1136–40.
  • Donnelly RA. The Complete Idiot's Guide to Statistics. New York, NY: Alpha Books; 2004. pp. 6–7.
  • Greenhalgh T, Taylor R. How to read a paper: Papers that go beyond numbers (qualitative research). British Medical Journal. 1997;315:740–743.
  • Greenhalgh T. How to read a paper: Assessing the methodological quality of published papers. British Medical Journal. 1997;315:305–308.
  • Johnson RB, Onwuegbuzie AJ. Mixed methods research: A research paradigm whose time has come. Educational Researcher. 2004;33:14–26.
  • Hennekens CH, Buring JE. Epidemiology in Medicine. Boston, MA: Little, Brown and Company; 1987. pp. 106–108.
  • Last JM. A Dictionary of Epidemiology. 4th ed. New York, NY: Oxford University Press; 2001.
  • Trochim WM, Donnelly J. Research Methods Knowledge Base. 3rd ed. Mason, OH: Atomic Dog; 2008. pp. 6–8.

How to appraise quantitative research

Evidence-Based Nursing, Volume 21, Issue 4


  • Xabi Cathala,1 Calvin Moorley2
  • 1 Institute of Vocational Learning, School of Health and Social Care, London South Bank University, London, UK
  • 2 Nursing Research and Diversity in Care, School of Health and Social Care, London South Bank University, London, UK
  • Correspondence to Mr Xabi Cathala, Institute of Vocational Learning, School of Health and Social Care, London South Bank University, London, UK; cathalax{at}lsbu.ac.uk and Dr Calvin Moorley, Nursing Research and Diversity in Care, School of Health and Social Care, London South Bank University, London SE1 0AA, UK; Moorleyc{at}lsbu.ac.uk

https://doi.org/10.1136/eb-2018-102996


Introduction

Some nurses feel that they lack the necessary skills to read a research paper and decide whether they should implement its findings into their practice. This is particularly the case with quantitative research, which often reports the results of statistical testing. However, nurses have a professional responsibility to critique research to improve their practice, care and patient safety. 1  This article provides a step-by-step guide on how to critically appraise a quantitative paper.

Title, keywords and the authors

The authors’ names may not mean much, but knowing the following will be helpful:

Their position, for example, academic, researcher or healthcare practitioner.

Their qualification, both professional, for example, a nurse or physiotherapist and academic (eg, degree, masters, doctorate).

This can indicate how the research has been conducted and the authors’ competence on the subject. Basically, do you want to read a paper on quantum physics written by a plumber?

The abstract is a résumé of the article and should contain:

Introduction.

Research question/hypothesis.

Methods including sample design, tests used and the statistical analysis (of course! Remember we love numbers).

Main findings.

Conclusion.

The subheadings in the abstract will vary depending on the journal. An abstract should not usually be more than 300 words but this varies depending on specific journal requirements. If the above information is contained in the abstract, it can give you an idea about whether the study is relevant to your area of practice. However, before deciding if the results of a research paper are relevant to your practice, it is important to review the overall quality of the article. This can only be done by reading and critically appraising the entire article.

The introduction

The introduction should state the research question and, in quantitative studies, the hypothesis and null hypothesis to be tested. Example: the effect of paracetamol on levels of pain.

My hypothesis is that A has an effect on B, for example, paracetamol has an effect on levels of pain.

My null hypothesis is that A has no effect on B, for example, paracetamol has no effect on pain.

The study tests the null hypothesis. If the null hypothesis is retained, the hypothesis is not supported (A has no effect on B): paracetamol has no effect on the level of pain. If the null hypothesis is rejected, the hypothesis is supported (A has an effect on B): paracetamol has an effect on the level of pain.

Background/literature review

The literature review should include reference to recent and relevant research in the area. It should summarise what is already known about the topic and why the research study is needed and state what the study will contribute to new knowledge. 5 The literature review should be up to date, usually 5–8 years, but it will depend on the topic and sometimes it is acceptable to include older (seminal) studies.

Methodology

In quantitative studies, the data analysis varies between studies depending on the type of design used; descriptive, correlational and experimental studies all differ. A descriptive study describes the pattern of a topic in relation to one or more variables. 6 A correlational study examines the link (correlation) between two variables 7  and focuses on how one variable reacts to a change in another. In experimental studies, the researchers manipulate variables and look at outcomes, 8  and the sample is commonly assigned into different groups (known as randomisation) to determine the effect (causal) of a condition (independent variable) on a certain outcome. This is a common method used in clinical trials.

There should be sufficient detail provided in the methods section for you to replicate the study (should you want to). To enable you to do this, the following sections are normally included:

Overview and rationale for the methodology.

Participants or sample.

Data collection tools.

Methods of data analysis.

Ethical issues.

Data collection should be clearly explained and the article should discuss how this process was undertaken. Data collection should be systematic, objective, precise, repeatable, valid and reliable. Any tool (eg, a questionnaire) used for data collection should have been piloted (or pretested and/or adjusted) to ensure the quality, validity and reliability of the tool. 9 The participants (the sample) and any randomisation technique used should be identified. The sample size is central in quantitative research, as the findings should be generalisable to the wider population. 10 The data analysis can be done manually, or more complex analyses can be performed using computer software, sometimes with the advice of a statistician. From this analysis, results such as the mode, mean, median, p value and CI are presented in a numerical format.

The author(s) should present the results clearly. These may be presented in graphs, charts or tables alongside some text. You should perform your own critique of the data analysis process; just because a paper has been published does not mean it is perfect. Your findings may differ from the authors'. Through critical analysis, the reader may find an error in the study process that the authors have not seen or highlighted. Such errors can alter the results, or turn a study you thought was strong into a weak one. To help you critique a quantitative research paper, some guidance on understanding statistical terminology is provided in  table 1 .


Some basic guidance for understanding statistics

Quantitative studies examine the relationship between variables, and the p value illustrates this objectively.  11  If the p value is less than 0.05, the null hypothesis is rejected, the hypothesis is supported, and the study will report a significant difference. If the p value is 0.05 or more, the null hypothesis is retained (strictly, we fail to reject it), the hypothesis is not supported, and the study will report no significant difference.

The CI is usually written as a per cent and indicates how much confidence the reader can place in the result. 12  The confidence level is chosen by the researchers in advance, most commonly 95%; it is not calculated from the p value. A 95% CI means that if the study were repeated many times, about 95% of the intervals calculated would contain the true value, and the narrower the interval, the more precise the estimate. For example, a mean pain reduction of 2.0 points with a 95% CI of 1.5 to 2.5 is a precise estimate, whereas an interval of −0.5 to 4.5 is not and, because it crosses zero, would not be statistically significant. Together, the p values and CIs highlight the confidence and robustness of a result.

Discussion, recommendations and conclusion

The final section of the paper is where the authors discuss their results and link them to other literature in the area (some of which may have been included in the literature review at the start of the paper). This reminds the reader of what is already known, what the study has found and what new information it adds. The discussion should demonstrate how the authors interpreted their results and how they contribute to new knowledge in the area. Implications for practice and future research should also be highlighted in this section of the paper.

A few other areas you may find helpful are:

Limitations of the study.

Conflicts of interest.

Table 2 provides a useful tool to help you apply the learning in this paper to the critiquing of quantitative research papers.

Quantitative paper appraisal checklist

  • 1. Nursing and Midwifery Council, 2015. The code: standard of conduct, performance and ethics for nurses and midwives. https://www.nmc.org.uk/globalassets/sitedocuments/nmc-publications/nmc-code.pdf (accessed 21.8.18).
Competing interests None declared.

Patient consent Not required.

Provenance and peer review Commissioned; internally peer reviewed.

Correction notice This article has been updated since its original publication to update p values from 0.5 to 0.05 throughout.


#QuantCrit: Integrating CRT With Quantitative Methods in Family Science

Curtis and Boe


  • Traditional quantitative methodologies are rooted in studying and norming the experiences of W.E.I.R.D. (Western, educated, industrialized, rich, and democratic) populations.
  • Quantitative Criticalism is a transdisciplinary approach to resist traditional quantitative methodologies
  • Quantitative Criticalists aim to produce socially just research informed by various critical theories (e.g., CRT, feminist, queer).

Race is frequently operationalized as an individual fixed trait that is used to explain individual differences in various outcomes (Zuberi, 2000). Family Scientists may use race to explain direct causal relationships (e.g., parenting styles) or as a control variable to account for the variation explained by race on a given outcome (e.g., communication styles; Zuberi & Bonilla-Silva, 2008). When using race as a variable, Family Scientists often unlink it from its sociocultural context. In effect, race is reduced to a phenotypic or genotypic marker for explaining research phenomena (Bonilla-Silva, 2009). This process is particularly pervasive in quantitative methods, which are frequently perceived as more empirically valuable than qualitative methods (Onwuegbuzie & Leech, 2005). Quantitative methods position the external world as independent of human perception and as subject to immutable scientific laws. Quantitative researchers utilize randomization, control, and manipulation to ensure that outside factors do not bias research findings. This emphasis on isolationism centers knowledge as objective. However, critical researchers argue that quantitative inquiry is no less socially constructed than any other form of research (Stage, 2007). For instance, sampling bias can greatly influence research findings and interpretations by privileging the lived experiences of certain groups of people (e.g., the high number of studies conducted on individuals from Western, educated, industrial, rich, and democratic backgrounds; Nielsen et al., 2017) or presenting results that are nonrepresentative of the internal diversity that exists within marginalized groups (e.g., the plethora of comparison studies that consolidate members of the African American diaspora to a single racial category; Jackson & Cothran, 2003).

To challenge the assumptions that shape quantitative inquiry’s emphasis on neutrality and objectivity, critical-race-conscious scholars have embraced critical race theory (CRT) as a mechanism for addressing the replication of racial stereotypes and White supremacy in empirical research. By failing to account for how racism and White supremacy shape family scholarship, Family Scientists are inadvertently perpetuating the position that change and betterment are the sole responsibility of the individual rather than challenging the systemic “creators” of inequality (Walsdorf et al., 2020). While conceptualizations of CRT tenets are evolving, a common thread among the elements is a commitment to identify, deconstruct, and remedy the oppressive realities of people of color, their families, and their communities (Bridges, 2019). This commitment has recently been extended to the critical evaluation of quantitative research via the development of “Quantitative Criticalism” (QuantCrit), which provides a framework for applying the principles and insights of CRT to quantitative data whenever it is used in research or encountered in policy and practice (Gillborn et al., 2018). In this article, we briefly summarize the tenets of QuantCrit and its connections to the principles of CRT and provide a brief case example of how Family Scientists can use QuantCrit.

Quantitative Criticalism

QuantCrit is an analytic framework that utilizes the tenets of CRT to challenge normative assumptions embedded in quantitative methodology (Covarrubias & Vélez, 2013; Gillborn et al., 2018; Sullivan et al., 2010). Despite its origin, QuantCrit has been used to critique traditional approaches to investigating various racial disparities, including breast cancer and genomic uncertainty (Gerido, 2020), teaching evaluations (Campbell, 2020), Asian American experiences in higher education (Teranishi, 2007), teacher prioritization of student achievement (Quinn et al., 2019), and student learning outcomes (Young & Cunningham, 2021). While heavily utilized by education scholars, QuantCrit is a transdisciplinary framework that applies the principles of socially constructed inequality and inherent inequality found within CRT to quantitative inquiry (insofar as socially constructed and inherent inequalities function to create and maintain social, economic, and political inequalities between dominant and marginalized groups). According to Gillborn et al. (2018), QuantCrit is not “an off-shoot movement of CRT” but “a kind of toolkit that embodies the need to apply CRT understandings and insights whenever quantitative data is used in research and/or encountered in policy and practice” (p. 169). Several central tenets guide this framework.

Acategorical Intersectionality

Identity is a complex, multidimensional aspect of individuals' lived experience. Intersectionality describes how systems of identity, discrimination, and disadvantage co-influence individuals, families, and communities (Collins, 2019). Intersectionality challenges the idea of a single social category as the primary dimension of inequity and asserts that complex social inequalities are firmly entrenched in all aspects of people's lived experiences. The gendered, racialized, and economic factors that shape an individual's lived experiences cannot be understood independently, as they are intertwined. For example, Suzuki et al. (2021) discuss this complexity by explaining how not including race in research may suggest that it is unimportant, while addressing race only via the inclusion of racial categories, without explicitly elaborating on how racism influenced the outcome, may indicate that racial inequities are natural. As such, QuantCrit researchers refute the idea of categorization as natural or inherent, critically evaluate the categories they construct for analysis, and provide a rationale for their use of categories.

Centrality of Counternarratives

QuantCrit places emphasis on reliably researching and centering individuals’ lived experience using counternarratives. Counternarratives represent the perspectives of minoritized groups that often contradict a culture’s dominant narrative. By centralizing minoritized voices and contextualizing privilege and power, QuantCrit researchers diversify research narratives. In doing so, QuantCrit researchers highlight the multidirectional effects of power, privilege, and oppression by disrupting narratives that frame minoritized group members as deficient. This disruption also includes critically evaluating and intervening in the oppressive systems that uphold the power and privilege of dominant groups.

Nonneutrality of Data

QuantCrit researchers heavily scrutinize the notion of objectivity and reject the idea that numbers “speak for themselves.” QuantCrit researchers acknowledge that all data and analytic methods have biases and strive to minimize and explicitly discuss these biases.

Bias in the Interpretation and Presentation of Research

Even when numbers are not explicitly used to advance oppressive notions, research findings are interpreted and presented through the cultural norms, values, and practices of the researcher (Gillborn et al., 2018). In presenting research results, QuantCrit researchers overtly discuss their positionality and how their lived experiences may have influenced their interpretation and presentation of their findings.

Social Justice Oriented

QuantCrit research is rooted in the goals of social justice; it rejects the notion that quantitative research is bias-free, identifying and acknowledging how prior and contemporary research is used as a tool of oppression, and disrupting systems of oppression by critically evaluating and changing oppressive aspects of the quantitative research process. In doing so, QuantCrit researchers commit themselves to capturing the nuances and depth of the lived experiences of marginalized groups while simultaneously challenging prevailing oppressive systems.

QuantCrit: A Brief Example

QuantCrit has far-reaching research implications for Family Science. Consider how Family Scientists construct variables to reflect aspects of social marginalization (e.g., neighborhood disadvantage). Neighborhood disadvantage refers to the "lack of economic and social resources that predisposes people to physical and social disorder" (Ross & Mirowsky, 2001, p. 258). These effects are often of interest to Family Scientists for their developmental and intergenerational consequences (e.g., social organization theory; Mancini & Bowen, 2013). However, this construct has issues regarding its operationalization and measurement. Prior studies have operationalized neighborhood disadvantage as an index of contextual elements of a participant's environment, including, but not limited to, a) the proportion of households with children headed by single mothers; b) the proportion of households living below the poverty line; c) the unemployment rate; and d) the proportion of African American households (Martin et al., 2019; Vazsonyi et al., 2006). These elements are frequently consolidated mathematically into a single variable based on their high degree of intercorrelation. An issue with this consolidation is the assumption that living near or around a higher proportion of African American households brings disadvantage. However, researchers rarely address how redlining has been used to systematically place African Americans in disenfranchised neighborhoods (Aaronson et al., 2021).

QuantCrit researchers would approach the assessment of neighborhood disadvantage much differently. They may use other factors such as access to resources (e.g., food, health care facilities, community resources) and physical signs of social disorder (e.g., graffiti, vandalism, abandoned buildings) as indicators of neighborhood disadvantage. In addition, QuantCrit researchers may collect data from residents on their perceptions of the neighborhood and how it affects their lives. These indicators respect the spirit of the unobserved construct without perpetuating harmful stereotypes about African Americans.

As Family Scientists seek to incorporate CRT into their praxis, there is a growing need to critically evaluate our approach to quantitative methodology and disrupt the perpetuation of racism and White supremacy within Family Science scholarship. QuantCrit provides researchers with fertile ground for such reflection as it challenges researchers to consider the historical, social, political, and economic power relations present within their research. While this article was a mere introduction to the QuantCrit framework, we hope that it inspires more Family Scientists to reflect upon its tenets and explore ways of dismantling racism and White supremacy within their quantitative research.

References

Bonilla-Silva, E. (2009). Racism without racists: Color-blind racism and the persistence of racial inequality in the United States (2nd ed.). Rowman & Littlefield.

Bridges, K. (2019). Critical race theory: A primer (3rd ed.). Foundation Press.

Campbell, S. L. (2020). Ratings in black and white: A quantcrit examination of race and gender in teacher evaluation reform. Race Ethnicity and Education. https://doi.org/10.1080/13613324.2020.1842345

Collins, P. H. (2019). Intersectionality as critical social theory. Duke University Press.

Covarrubias, A., & Vélez, V. (2013). Critical race quantitative intersectionality: An anti-racist research paradigm that refuses to “let the numbers speak for themselves.” In M. Lynn & A. D. Dixson (Eds.), Handbook of critical race theory in education (pp. 270–285). Routledge.

Gerido, L. H. (2020). Racial disparities in breast cancer and genomic uncertainty: A quantcrit mini-review. Open Information Science, 4(1), 39–57. https://doi.org/10.1515/opis-2020-0004

Gillborn, D., Warmington, P., & Demack, S. (2018). Quantcrit: Education, policy, “Big Data” and principles for a critical race theory of statistics. Race Ethnicity and Education, 21(2), 158–179. https://doi.org/10.1080/13613324.2017.1377417

Jackson, J. V., & Cothran, M. E. (2003). Black versus Black: The relationships among African, African American, and African Caribbean persons. Journal of Black Studies, 33(5), 576–604. https://doi.org/10.1177/0021934703033005003

Mancini, J. A., & Bowen, G. L. (2013). Families and communities: A social organization theory of action and change. In G. W. Peterson & K. R. Bush (Eds.), Handbook of marriage and the family (pp. 781–813). Springer.

Martin, C. L., Kane, J. B., Miles, G. L., Aiello, A. E., & Harris, K. M. (2019). Neighborhood disadvantage across the transition from adolescence to adulthood and risk of metabolic syndrome. Health & Place, 57, 131–138. https://doi.org/10.1016/j.healthplace.2019.03.002

Nielsen, M., Haun, D., Kärtner, J., & Legare, C. H. (2017). The persistent sampling bias in developmental psychology: A call to action. Journal of Experimental Child Psychology, 162, 31–38. https://doi.org/10.1016/j.jecp.2017.04.017

Onwuegbuzie, A. J., & Leech, N. L. (2005). Taking the “Q” out of research: Teaching research methodology courses without the divide between quantitative and qualitative paradigms. Quality & Quantity, 39, 267–296. https://doi.org/10.1007/s11135-004-1670-0

Quinn, D. M., Desruisseaux, T. M., & Nkansah-Amankra, A. (2019). “Achievement gap” language affects teachers’ issue prioritization. Educational Researcher, 48(7), 484–487. https://doi.org/10.3102/0013189X19863765

Ross, C. E., & Mirowsky, J. (2001). Neighborhood disadvantage, disorder, and health. Journal of Health & Social Behavior, 42(3), 258–276. https://doi.org/10.2307/3090214

Stage, F. K. (2007). Answering critical questions using quantitative data. New Directions for Institutional Research, 2007(133), 5–16. https://doi.org/10.1002/ir.200

Sullivan, E., Larke, P. J., & Webb-Hasan, G. (2010). Using critical policy and critical race theory to examine Texas’ school disciplinary policies. Race, Gender & Class, 17(1–2), 72–87. https://www.jstor.org/stable/41674726

Suzuki, S., Morris, S. L., & Johnson, S. K. (2021). Using quantcrit to advance an anti-racist developmental science: Applications to mixture modeling. Journal of Adolescent Research, 1–27. https://doi.org/10.1177/07435584211028229

Teranishi, R. T. (2007). Race, ethnicity, and higher education policy: The use of critical quantitative research. New Directions for Institutional Research, 2007(133), 37–49. https://doi.org/10.1002/ir.203

Vazsonyi, A. T., Cleveland, H. H., & Wiebe, R. P. (2006). Does the effect of impulsivity on delinquency vary by level of neighborhood disadvantage? Criminal Justice and Behavior, 33(4), 511–541. https://doi.org/10.1177/0093854806287318

Walsdorf, A. A., Jordan, L. S., McGeorge, C. R., & Caughy, M. O. (2020). White supremacy and the web of family science: Implications of the missing spider. Journal of Family Theory & Review, 12(1), 64–79. https://doi.org/10.1111/jftr.12364

Young, J., & Cunningham, J. A. (2021). Repositioning black girls in mathematics disposition research: New perspectives from quantcrit. Investigations in Mathematics Learning, 13(1), 29–42. https://doi.org/10.1080/19477503.2020.1827664

Zuberi, T. (2000). Deracializing social statistics: Problems in the quantification of race. Annals of the American Academy of Political and Social Science, 568(1), 172–185. https://doi.org/10.1177/000271620056800113

Zuberi, T., & Bonilla-Silva, E. (2008). White logic, white methods: Racism and methodology. Rowman & Littlefield.



Qualitative vs. quantitative data analysis: How do they differ?


Learning analytics have become the cornerstone for personalizing student experiences and enhancing learning outcomes. In this data-informed approach to education, there are two distinct methodologies: qualitative and quantitative analytics. These methods, which are typical of data analytics in general, are crucial to the interpretation of learning behaviors and outcomes. This blog will explore the nuances that distinguish qualitative and quantitative research while uncovering their shared roles in learning analytics, program design, and instruction.

What is qualitative data?

Qualitative data is descriptive and includes information that is non-numerical. Qualitative research is used to gather in-depth insights that can't be easily measured on a scale, such as opinions, anecdotes, and emotions. In learning analytics, qualitative data could include in-depth interviews, text responses to a prompt, or a video of a class period. 1

What is quantitative data?

Quantitative data is information that has a numerical value. Quantitative research is conducted to gather measurable data for use in statistical analysis. Researchers can use quantitative studies to identify patterns and trends. In learning analytics, quantitative data could include test scores, student demographics, or the amount of time spent in a lesson. 2

Key differences between qualitative and quantitative data

It's important to understand the differences between qualitative and quantitative data, both to determine the appropriate research methods for a study and to gain insights you can be confident in sharing.

Data Types and Nature

Examples of qualitative data types in learning analytics:

  • Observational data of human behavior from classroom settings such as student engagement, teacher-student interactions, and classroom dynamics
  • Textual data from open-ended survey responses, reflective journals, and written assignments
  • Feedback and discussions from focus groups or interviews
  • Content analysis from various media

Examples of quantitative data types in learning analytics:

  • Standardized test, assessment, and quiz scores
  • Grades and grade point averages
  • Attendance records
  • Time spent on learning tasks
  • Data gathered from learning management systems (LMS), including login frequency, online participation, and completion rates of assignments

Methods of Collection

Qualitative and quantitative research methods for data collection can occasionally seem similar, so it's important to note the differences to make sure you're creating a consistent data set and will be able to reliably draw conclusions from your data.

Qualitative research methods

Because of the nature of qualitative data (complex, detailed information), the research methods used to collect it are more involved. Qualitative researchers might do the following to collect data:

  • Conduct interviews to learn about subjective experiences
  • Host focus groups to gather feedback and personal accounts
  • Observe in-person or use audio or video recordings to record nuances of human behavior in a natural setting
  • Distribute surveys with open-ended questions

Quantitative research methods

Quantitative data collection methods are more diverse and more likely to be automated because of the objective nature of the data. A quantitative researcher could employ methods such as:

  • Surveys with closed-ended questions that gather numerical data like birthdates or preferences
  • Observational research that records measurable information, like the number of students in a classroom
  • Automated numerical data collection, such as information gathered on the back end of a computer system, like button clicks and page views

Analysis techniques

Qualitative and quantitative data can both be very informative. However, research studies require critical thinking for productive analysis.

Qualitative data analysis methods

Analyzing qualitative data takes a number of steps. When you first have all your data in one place, review it and note the trends you think you're seeing, along with your initial reactions. Next, organize the qualitative data you've collected by assigning it categories. Your central research question will guide your categorization, whether by date, location, type of collection method (interview vs. focus group, etc.), the specific question asked, or something else. Next, you'll code your data. Whereas categorizing is focused on the method of collection, coding is the process of identifying and labeling themes within the collected data to get closer to answering your research questions. Finally comes data interpretation: look at the information gathered, including your coding labels, and see which results occur frequently and what other conclusions you can draw. 3
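As a minimal illustration of that final step, the snippet below tallies how often each code appears across a set of coded excerpts; the excerpts and code labels are invented for the example.

```python
from collections import Counter

# Hypothetical (excerpt_id, code) pairs produced while coding transcripts
coded_excerpts = [
    (1, "peer support"), (2, "time pressure"), (3, "peer support"),
    (4, "technology barriers"), (5, "peer support"), (6, "time pressure"),
]

# Count how often each code occurs to see which themes recur most often
code_counts = Counter(code for _, code in coded_excerpts)
for code, count in code_counts.most_common():
    print(f"{code}: {count}")
```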

Quantitative analysis techniques

The process of analyzing quantitative data can be time-consuming due to the large volume of data it's possible to collect. When approaching a quantitative data set, start by focusing on the purpose of your evaluation. Without jumping to a conclusion, determine how you will use the information gained from the analysis; for example: the answers to this survey about study habits will help determine what type of exam review session will be most useful to a class. 4

Next, decide who is analyzing the data and set parameters for the analysis. For example, if two different researchers are evaluating survey responses that rank preferences on a scale from 1 to 5, they need to be operating with the same understanding of the rankings; you wouldn't want one researcher to classify a value of 3 as a positive preference while the other considers it negative. It's also ideal to have some type of data management system to store and organize your data, such as a spreadsheet or database. Within the database, or via an export to data analysis software, the collected data needs to be cleaned of things like responses left blank, duplicate answers from respondents, and questions that are no longer considered relevant. Finally, you can use statistical software to analyze the data (or complete a manual analysis) to find patterns and summarize your findings. 4
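A minimal sketch of that cleaning-and-summarizing sequence is shown below using Python's pandas library; the survey columns, respondent IDs, and 1-to-5 item are all hypothetical.

```python
import pandas as pd

# Hypothetical survey export: respondent 102 submitted twice, 103 left
# the item blank, and q9 is a question no longer considered relevant.
df = pd.DataFrame({
    "respondent_id":     [101, 102, 102, 103, 104],
    "q1_review_format":  [4, 3, 3, None, 5],  # 1-5 preference scale
    "q9_no_longer_used": ["a", "b", "b", "c", "d"],
})

# Clean: drop duplicate submissions, the irrelevant question, and blanks
df = df.drop_duplicates(subset="respondent_id")
df = df.drop(columns=["q9_no_longer_used"])
df = df.dropna(subset=["q1_review_format"])

# Summarize the remaining responses to look for patterns
print(df["q1_review_format"].describe())
```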

Qualitative and quantitative research tools

From the nuanced, thematic exploration enabled by tools like NVivo and ATLAS.ti, to the statistical precision of SPSS and R for quantitative analysis, each suite of data analysis tools offers tailored functionalities that cater to the distinct natures of different data types.

Qualitative research software:

NVivo: NVivo is qualitative data analysis software that can do everything from transcribing recordings to creating word clouds and evaluating uploads for different sentiments and themes. NVivo is just one tool from the company Lumivero, which offers whole suites of data processing software. 5

ATLAS.ti: Similar to NVivo, ATLAS.ti allows researchers to upload and import data from a variety of sources to be tagged and refined using machine learning, then presented with visualizations ready for insertion into reports. 6

SPSS: SPSS is a statistical analysis tool for quantitative research, appreciated for its user-friendly interface and comprehensive statistical tests, which make it well suited to educators and researchers. With SPSS, researchers can manage and analyze large quantitative data sets, use advanced statistical procedures and modeling techniques, predict customer behaviors, forecast market trends, and more. 7

R: R is a versatile and dynamic open-source tool for quantitative analysis. With a vast repository of packages tailored to specific statistical methods, researchers can perform anything from basic descriptive statistics to complex predictive modeling. R is especially useful for its ability to handle large datasets, making it ideal for educational institutions that generate substantial amounts of data. The programming language offers flexibility in customizing analysis and creating publication-quality visualizations to effectively communicate results. 8

Applications in Educational Research

Both quantitative and qualitative data can be employed in learning analytics to drive informed decision-making and pedagogical enhancements. In the classroom, quantitative data like standardized test scores and online course analytics create a foundation for assessing and benchmarking student performance and engagement. Qualitative insights gathered from surveys, focus group discussions, and reflective student journals offer a more nuanced understanding of learners' experiences and contextual factors influencing their education. Additionally, feedback and practical engagement metrics blend these data types, providing a holistic view that informs curriculum development, instructional strategies, and personalized learning pathways. Through these varied data sets and uses, educators can piece together a more complete narrative of student success and the impacts of educational interventions.

Master Data Analysis with an M.S. in Learning Sciences From SMU

Whether it is the detailed narratives unearthed through qualitative data or the informative patterns derived from quantitative analysis, both qualitative and quantitative data can provide crucial information for educators and researchers to better understand and improve learning. Dive deeper into the art and science of learning analytics with SMU's online Master of Science in the Learning Sciences program. At SMU, innovation and inquiry converge to empower the next generation of educators and researchers. Choose the Learning Analytics Specialization to learn how to harness the power of data science to illuminate learning trends, devise impactful strategies, and drive educational innovation. You could also find out how advanced technologies like augmented reality (AR), virtual reality (VR), and artificial intelligence (AI) can revolutionize education, and develop the insight to apply embodied cognition principles to enhance learning experiences in the Learning and Technology Design Specialization, or choose your own electives to build a specialization unique to your interests and career goals.

For more information on our curriculum and to become part of a community where data drives discovery, visit SMU's MSLS program website or schedule a call with our admissions outreach advisors for any queries or further discussion. Take the first step towards transforming education with data today.

1. Retrieved on August 8, 2024, from nnlm.gov/guides/data-glossary/qualitative-data
2. Retrieved on August 8, 2024, from nnlm.gov/guides/data-glossary/quantitative-data
3. Retrieved on August 8, 2024, from cdc.gov/healthyyouth/evaluation/pdf/brief19.pdf
4. Retrieved on August 8, 2024, from cdc.gov/healthyyouth/evaluation/pdf/brief20.pdf
5. Retrieved on August 8, 2024, from lumivero.com/solutions/
6. Retrieved on August 8, 2024, from atlasti.com/
7. Retrieved on August 8, 2024, from ibm.com/products/spss-statistics
8. Retrieved on August 8, 2024, from cran.r-project.org/doc/manuals/r-release/R-intro.html#Introduction-and-preliminaries



Published on 28.8.2024 in Vol 13 (2024)

Implementation and Impact of Intimate Partner Violence Screening Expansion in the Veterans Health Administration: Protocol for a Mixed Methods Evaluation

Authors of this article:


  • Galina A Portnoy1,2, PhD;
  • Mark R Relyea1,2, PhD;
  • Melissa E Dichter3,4, PhD;
  • Katherine M Iverson5,6, PhD;
  • Candice Presseau1,2, PhD;
  • Cynthia A Brandt1,2, MPH, MD;
  • Melissa Skanderson1,2, MSW;
  • LeAnn E Bruce7, LCSW, PhD;
  • Steve Martino1,2, PhD

1 VA Connecticut Healthcare System, West Haven, CT, United States

2 Yale School of Medicine, New Haven, CT, United States

3 VA Center for Health Equity Research and Promotion (CHERP), Corporal Michael J. Crescenz VA Medical Center, Philadelphia, PA, United States

4 School of Social Work, Temple University, Philadelphia, PA, United States

5 Women’s Health Sciences Division of the National Center for PTSD, VA Boston Healthcare System, Boston, MA, United States

6 Department of Psychiatry, Boston University School of Medicine, Boston, MA, United States

7 Intimate Partner Violence Assistance Program, Care Management and Social Work Service, Veterans Health Administration, Washington, DC, United States

Corresponding Author:

Galina A Portnoy, PhD

VA Connecticut Healthcare System

950 Campbell Ave

West Haven, CT, 06516

United States

Phone: 1 2039325711

Email: [email protected]

Background: Intimate partner violence (IPV) is a significant public health problem with far-reaching consequences. The health care system plays an integral role in the detection of and response to IPV. Historically, the majority of IPV screening initiatives have targeted women of reproductive age, with little known about men’s IPV screening experiences or the impact of screening on men’s health care. The Veterans Health Administration (VHA) has called for an expansion of IPV screening, providing a unique opportunity for a large-scale evaluation of IPV screening and response across all patient populations.

Objective: In this protocol paper, we describe the recently funded Partnered Evaluation of Relationship Health Innovations and Services through Mixed Methods (PRISM) initiative, aiming to evaluate the implementation and impact of the VHA’s IPV screening and response expansion, with a particular focus on identifying potential gender differences.

Methods: The PRISM Initiative is guided by the Reach, Effectiveness, Adoption, Implementation, and Maintenance (RE-AIM) and Consolidated Framework for Implementation Research (CFIR 2.0) frameworks. We will use mixed methods data from 139 VHA facilities to evaluate the IPV screening expansion, including electronic health record data and qualitative interviews with patients, clinicians, and national IPV program leadership. Quantitative data will be analyzed using a longitudinal observational design with repeated measurement periods at baseline (T0), year 1 (T1), and year 2 (T2). Qualitative interviews will focus on identifying multilevel factors, including potential implementation barriers and facilitators critical to IPV screening and response expansion, and examining the impact of screening on patients and clinicians.

Results: The PRISM initiative was funded in October 2023. We have developed the qualitative interview guides, obtained institutional review board approval, extracted quantitative data for baseline analyses, and begun recruitment for qualitative interviews. Reports of progress and results will be made available to evaluation partners and funders through quarterly and end-of-year reports. All data collection and analyses across time points are expected to be completed in June 2026.

Conclusions: Findings from this mixed methods evaluation will provide a comprehensive understanding of IPV screening expansion at the VHA, including the implementation and impact of screening and the scope of IPV detected in the VHA patient population. Moreover, data generated by this initiative have critical policy and clinical practice implications in a national health care system.

International Registered Report Identifier (IRRID): PRR1-10.2196/59918

Introduction

Intimate partner violence (IPV), including physical, sexual, and psychological aggression, is a significant public health problem with far-reaching consequences. Experiencing IPV is associated with serious negative physical and psychological outcomes among civilians and veterans alike [ 1 , 2 ]. Research shows that women veterans are at increased risk of experiencing violence in relationships compared to civilian women [ 1 ]. Moreover, IPV is common, yet critically understudied, among veteran men.

Reported rates of IPV experience vary widely across studies, largely due to methodological differences [ 3 , 4 ]. Although US women veterans are more likely to experience lifetime IPV than men (approximately 45% vs 36%), the prevalence of past-year IPV is similar, at approximately 30% [ 5 ]. Research has found that as many as 55% of women veterans experience IPV in their lifetimes [ 6 ]. Women’s experiences of IPV are associated with adverse physical and mental health, including cardiovascular and respiratory problems, chronic pain, reproductive health challenges, posttraumatic stress, anxiety, depression, substance use, and elevated risk for suicide [ 2 , 7 - 10 ]. For veteran men, IPV experience is associated with poorer overall mental health; greater occupational impairment; and higher rates of depression, smoking, and heavy and binge drinking [ 11 , 12 ]. Despite high rates of IPV experience and adverse outcomes for men, little research has examined IPV screening and referral outcomes for this population. Moreover, although evidence demonstrates that transgender and nonbinary patients are at increased risk of IPV compared to cisgender patients [ 13 , 14 ], very little research has examined IPV screening and referral outcomes within this population [ 15 ].

The health care system plays an integral role in the detection of and response to IPV [ 16 , 17 ]. As such, the significant impact that IPV has on veterans across genders underscores the critical need for a comprehensive and effective health care response for patients seeking services through the Veterans Health Administration (VHA). In 2014, the VHA developed the national IPV Assistance Program to oversee and implement integrated services aimed at reducing the risk for IPV, including establishing IPV Assistance Program Coordinators at each Department of Veterans Affairs (VA) medical center across the country and providing clinical services and resources for IPV-related concerns through prevention, detection, and treatment [ 18 ]. Since the IPV Assistance Program’s inception, the implementation of IPV screening and response among women veterans has been an important priority area for the program.

The VHA policy for IPV detection and response parallels and expands on recommendations put forth by the US Preventive Services Task Force [ 19 ]. VHA policy requires annual IPV screening for women of reproductive age, as well as other patients who belong to a recognized high-risk group (eg, patients who are homeless or underhoused, those with co-occurring disorders, and patients with disability [ 20 ]) and provision of support, resources, and referrals [ 20 ]. Accordingly, enhancing the implementation of IPV screening and response among the women veteran patient population has been a major priority of VHA clinical practice and research efforts over the last decade. A robust body of literature has shown that IPV screening of women veterans during health care visits is essential as patients are unlikely to spontaneously disclose IPV but will often report their IPV experiences when asked by a clinician in a sensitive manner [ 21 ]. Evidence also demonstrates that screening increases the identification of IPV and enhances women veterans’ connections to and satisfaction with care, and women veterans report perceiving IPV screening as supportive, validating, and helpful [ 21 - 23 ].

However, there remain many system-level and clinician-reported barriers to screening women veterans, such as limited time and resources, discomfort in addressing IPV, lack of training, and competing priorities during health care visits [ 24 , 25 ]. Moreover, because the majority of IPV research and health care screening initiatives to date have targeted women, little is known about screening men for IPV, including men’s perceptions and experiences of screening, their willingness to disclose IPV during screening, and clinicians’ experiences and attitudes about screening men. These limitations highlight the need for a large-scale evaluation of IPV screening reach and effectiveness with men and patients of all gender identities to inform strategies for optimizing screening implementation across patient populations. To date, IPV screening implementation across health care systems, including the VHA, has targeted women of reproductive age and largely has occurred in primary care, obstetrics or gynecology, and urgent care settings. The recent VHA IPV screening expansion provides a unique opportunity for evaluation of the IPV screening and response protocol across VHA patient populations.

Expansion of IPV Screening

In response to growing evidence demonstrating that all VHA patient populations are at risk for experiencing IPV and associated negative health consequences (eg, men [ 12 , 26 ], women above reproductive age [ 27 ], and transgender and nonbinary veterans [ 13 , 14 ]), the VA National IPV Assistance Program has called for expanded IPV screening through the implementation of a “no wrong door” approach using the Relationship Health and Safety Clinical Reminder version 3 (RHS 3.0). This approach enables a patient-centered solution for detecting IPV such that patients are screened for IPV regardless of where they receive care within the health care system. The RHS 3.0 is a 2-part IPV screener, including a 5-item primary screen for IPV [ 28 - 30 ] and, if triggered, a 3-item secondary screen for risk of severe and potentially lethal IPV [ 31 ].
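The two-part structure amounts to a simple branch: the secondary screen is administered only when the primary screen is positive. The sketch below is schematic only; the "any yes" decision rule is an assumption made for illustration, not the actual RHS 3.0 scoring logic.

```python
def two_part_screen(primary_responses, secondary_responses):
    """Schematic flow of a 2-part screen: a positive 5-item primary screen
    triggers a 3-item secondary screen for risk of severe IPV. The "any
    yes" rule here is an illustrative assumption, not the RHS 3.0 scoring.
    """
    assert len(primary_responses) == 5
    if not any(primary_responses):  # primary screen negative: stop here
        return {"primary_positive": False, "secondary_administered": False}
    assert len(secondary_responses) == 3
    return {
        "primary_positive": True,
        "secondary_administered": True,
        "high_risk": any(secondary_responses),  # severe/potentially lethal IPV
    }

# One "yes" on the primary screen triggers the secondary risk screen
print(two_part_screen([False, True, False, False, False],
                      [False, False, True]))
```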

The RHS 3.0 was developed as a clinical reminder with a note template and approved for VHA enterprise-wide installation in August 2023. The clinical reminder prompts clinicians to screen women of reproductive age at least annually and is available and recommended for use with all patients outside of this target demographic, as well. Veterans can be screened for IPV and connected with support services wherever they present for care in the health care system, resulting in a “no wrong door” approach. The RHS 3.0 is administered using a standardized template in the electronic health record (EHR). Population health implementation support tools, such as clinical reminders and note templates, leverage standardization of the EHR to help systematize screening administration and data collection across a large patient population [ 32 ]. As the largest integrated health care system in the United States, the VHA serves over 9 million patients across 172 health care facilities [ 33 ], underscoring the importance of leveraging tools available in the EHR for reaching all patients who come into contact with the health care system.

To conduct IPV screening and complete the necessary clinical reminder steps, clinicians require training through an internet-based training module available to all VHA staff or through receiving training conducted by an IPV Assistance Program Coordinator. To support the adoption of the RHS 3.0 screening expansion, the IPV Assistance Program is combining a top-down approach, which includes disseminating education and materials throughout the VHA (eg, national live and recorded trainings, an IPV screening and response toolkit, and internet-based training modules), with facility-level implementation strategies at the discretion of local IPV Assistance Program Coordinators at each medical center.

A national screening expansion of this scale requires rigorous, systematic evaluation. Although prior work exists evaluating IPV screening implementation for women veterans in primary care settings [ 23 - 25 , 34 - 37 ], no effort to date has systematically evaluated the implementation of IPV screening among all patients across the entire health care system. Data are needed to assess the implementation of the screening expansion, as well as the impact of screening new patient populations. In this protocol paper, we describe the recently funded Partnered Evaluation of Relationship Health Innovations and Services through Mixed Methods (PRISM) initiative. The aims of this work are to (1) evaluate the implementation of the national expansion of the RHS 3.0 IPV screening and response and (2) identify the impact of IPV screening and potential gender differences. Specifically, we will assess implementation outcomes across Reach, Effectiveness, Adoption, Implementation, and Maintenance (RE-AIM) domains and examine potential differences in outcomes by patient characteristics (eg, gender, age, race, ethnicity, sexual orientation, and marital status). To examine the “no wrong door” approach, we will also identify the clinical settings most and least likely to adopt IPV screening and to yield disclosures during screening. We will examine the impact and potential gender differences by assessing service use and clinical outcomes following positive screens and exploring experiences and perceptions of clinicians who screened and patients who disclosed IPV during screening encounters.

Conceptual Framework

The PRISM initiative is guided by 2 robust implementation science frameworks—the RE-AIM (outcomes framework [ 38 ]) and the updated Consolidated Framework for Implementation Research (CFIR 2.0; determinants framework [ 39 ]). Although RE-AIM helps guide the organization of evaluation outcomes, it does not necessarily explain the conditions that influence variation in outcomes across the system, including among patient subgroups and clinical settings. The CFIR 2.0 is particularly helpful for structuring the exploration of contextual factors essential for the implementation of innovative programs, including at the VHA [ 40 ], making it ideally suited to guide the evaluation of multilevel factors that impact the implementation success of the VHA IPV screening and response national expansion. Integrating these 2 frameworks will support the assessment of implementation outcomes alongside understanding potential barriers and enablers of screening implementation, clinicians’ experiences with IPV screening and patients’ experiences with being screened (particularly among newly targeted patient populations, like men), and the implementation process overall.

Data Sources

Evaluation of the RHS 3.0 IPV screening expansion will include mixed methods data across VA health care facilities nationally. We will integrate quantitative and qualitative data sources, including EHR data and qualitative interviews (see Table 1 for a summary of data sources and outcomes). Quantitative data will be extracted from the VA’s Corporate Data Warehouse (CDW), a centralized data repository that aggregates clinical, administrative, and financial data from the VA EHR across all 139 VA medical centers and satellite clinics [ 41 ]. Our evaluation sample will include all VHA veteran patients with at least 1 outpatient VHA health care encounter during the evaluation observation period. Qualitative data sources will include semistructured interviews with veterans, clinicians, and VA leadership from the national IPV Assistance Program. The development of the semistructured interviews was guided by the CFIR 2.0 domains. Specifically, clinician interviews will focus on CFIR 2.0 constructs related to the RHS 3.0 screening and response protocol itself, the outer and inner settings (ie, their clinics and facilities), responders’ experiences and perceptions of the screening and response (including perceived impacts for patients), and the implementation process. Veteran interviews will also focus on their perceptions of being screened, including experiences with the screening process itself and resulting outcomes (eg, services offered or received).

Table 1. Summary of data sources and outcomes by RE-AIM construct.

  • Reach: Is the IPV screening expansion reaching its intended target (all veterans)?
  • Effectiveness: Is the RHS 3.0 screening expansion effective?
  • Adoption: What is the uptake of screening across VHA facilities and clinical settings?
  • Implementation fidelity: To what extent is the RHS 3.0 being implemented as intended?
  • Maintenance: Is the RHS 3.0 expansion being sustained over time?
  • Impact: Connection to and use of health care, connection to resources, and other clinical and health outcomes.

Note: Outcomes are drawn from electronic health record data extracted from the Corporate Data Warehouse and from qualitative interviews. IPV: intimate partner violence; RHS 3.0: Relationship Health and Safety Clinical Reminder version 3.0; VHA: Veterans Health Administration; CFIR: Consolidated Framework for Implementation Research.

Ethical Considerations

This evaluation is a quality improvement (QI) initiative jointly supported by the VA Care Management and Social Work Service’s IPV Assistance Program; the VA Quality Enhancement Research Initiative (QUERI); and the IPV Center for Implementation, Research, and Evaluation (IPV-CIRE) at VA Connecticut Healthcare System. The PRISM Initiative was designed for internal purposes in support of VA QI as an internal operations evaluation designated as nonresearch by the VA, thus not requiring institutional review board approval [ 42 ]. Empirical research conducted with data collected from this QI initiative was approved by the VA Connecticut Healthcare System institutional review board (protocol #1792152).

Quantitative Procedures and Analyses

Quantitative data grounded in the RE-AIM outcomes will be extracted from the CDW and analyzed using a longitudinal observational design with repeated measurement periods at baseline (T0), year 1 (T1), and year 2 (T2) [ 38 ]. We operationalized reach as the proportion and representativeness of veteran patients who were administered IPV screening, and those administered the RHS 3.0 specifically, calculated as veterans screened out of those eligible for screening. Eligibility for screening was defined as having at least 1 VHA health care encounter during the observation period. To examine representativeness and potential disparities between those screened and not screened, we will use generalized linear mixed models (GLMM) to assess differences by patient characteristics (gender, age, race, ethnicity, sexual orientation, marital status, rurality, and housing instability status) over each of the 3 time points while specifying facilities as random effects to account for nesting [ 43 , 44 ]. We will also examine the change in the probability of being screened over time through GLMMs specifying facilities and patients as random effects. Additionally, we will examine whether the effect of time varies significantly across facilities. In post hoc analyses, we will investigate whether the change in the probability of being screened over time differs by patient characteristics.
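Before any modeling, reach reduces to simple proportions over the extract. The sketch below shows that descriptive step with hypothetical columns and values; the GLMMs described above would then model the probability of being screened with facility random effects in dedicated statistical software.

```python
import pandas as pd

# Hypothetical patient-level rows for one measurement period: one row per
# veteran with at least 1 VHA encounter. Columns are illustrative only.
df = pd.DataFrame({
    "facility": ["A", "A", "A", "B", "B", "B"],
    "gender":   ["W", "M", "M", "W", "M", "W"],
    "screened": [1, 0, 1, 1, 0, 1],  # 1 = IPV screening administered
})

# Reach: veterans screened out of those eligible, overall and by subgroup
print(f"Overall reach: {df['screened'].mean():.0%}")
print(df.groupby("gender")["screened"].mean())    # representativeness
print(df.groupby("facility")["screened"].mean())  # variation across sites
```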

Effectiveness, for this evaluation, focuses on determining the extent to which the RHS 3.0 expansion itself is effective. We defined the effectiveness of the expansion as whether there is an increase in the proportion of IPV cases detected, referrals offered, and universal education provided following the implementation of the RHS 3.0. Using the quantitative CDW data, we will calculate changes in the proportion of these factors following positive screens through the same GLMM process described above. Effectiveness will also be examined through the exploration of qualitative data regarding clinicians’ and veterans’ experiences with and perceptions of the RHS 3.0 and its potential unintended consequences. Adoption was operationalized as differences in screening uptake by facility and clinical setting [ 38 ]. We will combine stop codes, identifiers used by the VHA to track which clinic group and location provided a service, into meaningful categories of services (eg, primary care mental health, social work, etc) to examine descriptive statistics (frequencies and range) of screening uptake across medical center facilities and clinical settings. We will also assess adoption through an exploration of the barriers and facilitators related to IPV screening expansion via qualitative data.

Implementation fidelity was defined as the extent to which IPV screening and response procedures are implemented as intended (ie, per IPV Assistance Program guidelines) [ 20 , 38 ]. For example, we will identify the percentage of veterans screened out of those eligible, completion of the secondary screen based on positive primary screen responses, referrals offered, and universal IPV education provided. We will also explore contextual factors related to fidelity reported during qualitative interviews. Finally, to examine maintenance of the RHS 3.0 expansion, we will identify whether the proportion of veterans screened (and proportion screened across patient subgroups), percent of facilities using the RHS 3.0, and number of unique clinicians using the RHS 3.0 remains at, above, or below T0 levels during T1 and T2 [ 38 ].

In addition to examining the implementation of the IPV screening expansion, the PRISM initiative will evaluate the impact of expanded IPV screening on patients, clinicians, and the health care system as a whole. Unlike RE-AIM’s effectiveness domain, which targets whether the screening expansion itself is effective, determining the impact of expanded screening will involve examining linkages between screening and connection and engagement with VHA services and resources. Leveraging available CDW data, we will examine patients’ connection to, and use of, health care and social support services following IPV screening, including services for physical and mental health conditions and resources for social services or essential needs (see Table 1 for a detailed description). Examination of CDW data will also enable the identification of responses to IPV cases identified as high risk for lethality or serious injury, including the number of safety plans completed and same-day consults placed. Additional analyses will include calculation of the frequency of services received on the same day as, and in the 60 days following, a positive screen. We will determine associations between positive screens and health care or resource use by using GLMMs with facilities as random effects, controlling for service use in the 60 days prior to the screen. We will also examine rates of high-risk cases detected from secondary screener results and the proportion of safety plans completed among high-risk cases.

Qualitative Procedures and Analyses

Guiding Frameworks

We will use the CFIR 2.0 [ 39 ] as an organizational and explanatory determinants framework to (1) categorize multilevel factors critical to RHS 3.0 implementation across VHA facilities and clinical settings, (2) identify potential barriers and facilitators of implementation, and (3) examine the impact of screening on patients. The CFIR 2.0 includes 49 constructs across 5 overarching domains that have been shown to influence program implementation: (1) innovation, (2) inner setting, (3) outer setting, (4) individuals, and (5) implementation process [ 39 ]. Following guidance to use the constructs most salient for the particular initiative under study [ 45 ], we selected the most relevant constructs from the CFIR 2.0 and CFIR Outcomes Addendum [ 46 ] to guide the development of interview guides with veterans and clinicians and analyses of qualitative data for this evaluation. See Table 2 for a full list of CFIR constructs guiding qualitative data collection and analysis.

Table 2. CFIR constructs guiding qualitative data collection and analysis of the RHS 3.0 evaluation, with the interview guides in which each construct is reflected (C: clinicians; P: patients).

  • Evidence strength and quality: C
  • Adaptability: C or P
  • Complexity: C or P
  • External policies and incentives: C
  • Patient needs and resources: C or P
  • Culture and climate: C
  • Compatibility: C
  • Relative priority: C
  • Access to information: C
  • Self-efficacy: C
  • Need and appropriateness: C or P
  • Capability, opportunity, and motivation: C
  • Acceptability: C or P
  • Feasibility: C
  • Impact and outcomes: C or P
  • Tailoring strategies: C
  • Adapting: C

Note: CFIR: Consolidated Framework for Implementation Research; RHS 3.0: Relationship Health and Safety Clinical Reminder version 3.0.

Veteran Patients

Leveraging EHR data, we will use purposeful sampling [ 47 ] to identify veteran patients who screened positive for IPV and create random stratified samples by gender, ensuring patient representation across geographical locations and clinical settings. We will generate batches of eligible veterans monthly to ensure that veterans are interviewed within 2 to 4 months of their screening encounter and have sufficient memory of the experience. We will send recruitment letters to those potentially eligible informing them of the study and inviting them to contact the study team to opt in or out of participation. Veterans with cognitive, language, or other impairments that prevent full participation in the study will not be eligible to participate. To ensure diversity in perspectives across interviews, we aim to include veterans from varying demographic backgrounds and experiences. We will encourage this diverse makeup by ensuring that our sample includes at least 20 women and 20 men, 25% racial or ethnic minority veterans, and 25% veterans aged 55 or older; we will also seek representation from each region of the country [ 48 , 49 ]. Recruitment will continue until we reach thematic saturation [ 50 ]. Interviews will be approximately 45 minutes, and veterans will be compensated US $50 for their participation. Veterans’ participation will be voluntary, and their responses to interview questions will not be shared with any clinician providing them VHA services. Interview data will be presented using deidentified representative quotes and in aggregate through the development of qualitative themes.
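A gender-stratified monthly draw of this kind might look like the following sketch, built on a hypothetical roster; the actual sampling would also account for geography and clinical setting.

```python
import pandas as pd

# Hypothetical roster of veterans with a positive screen in the past month
roster = pd.DataFrame({
    "patient_id": list(range(101, 109)),
    "gender": ["W", "W", "W", "M", "M", "M", "M", "W"],
})

# Draw an equal number of veterans per gender for this month's batch of
# recruitment letters (n per stratum kept small for the example)
batch = roster.groupby("gender").sample(n=2, random_state=0)
print(batch)
```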

Clinicians

We will leverage EHR data to identify clinicians who used the RHS 3.0 to screen patients of all genders for IPV and create a roster of potentially eligible clinicians. Using this roster, we will ensure clinician representation across discipline, geographical location, and clinical setting. We will reach out to potentially eligible clinicians via email, informing them of the study, confirming that they have conducted at least 5 screenings with women and at least 5 with men, and inviting them to participate in a qualitative interview about their experiences and perceptions. We aim to include at least 25 clinicians, although our final sample will be determined by thematic saturation [ 50 ]. Interviews will be 45-60 minutes in length.

National IPV Assistance Program Leaders

Using an established ethnographically informed method of guided discussion [ 51 ], we will conduct biannual, 30- to 60-minute interviews with 3-4 key national IPV program leaders involved in supporting the RHS 3.0 expansion and, at the leaders’ request, program office partners. These discussions will enable us to systematically document and understand the implementation plan and process, as well as provide opportunities for the national implementers to engage in “periodic reflections” with our team [ 51 ]. Discussion notes will be coded to reflect key CFIR domains of interest and emergent themes, which will be analyzed in triangulation with our other qualitative data sources (ie, clinician interviews) and quantitative data (ie, CDW-based RE-AIM domains).

Qualitative Data Analysis

We will use a hybrid deductive-inductive thematic analysis approach [ 52 ] to analyze the qualitative data, with initial codes informed by specified CFIR constructs ( Table 2 ). Interview transcripts will be coded and summarized, then consolidated into matrices by CFIR constructs. Multiple team members will conduct data analyses. Using a rigorous, team-based approach, we will complete the following steps: (1) develop a start list of codes based on the CFIR constructs and interview guides (to which we will add emergent codes); (2) code the transcripts; (3) transpose and systematize data into summary templates; (4) organize the data into matrices to note trends, similarities, and differences; and (5) synthesize into findings. As new codes arise, earlier transcripts will be recoded [ 53 ]. This iterative process will continue until all themes have been identified. Discrepancies will be resolved through consensus discussions [ 54 ]. Qualitative findings will support the planned evaluation by (1) describing conditions necessary for clinicians to effectively screen all veterans across health care settings and barriers to doing so; (2) identifying contextual factors that will inform future implementation strategies needed to enhance the expansion of the RHS 3.0; and (3) revealing veterans’ experiences with, and outcomes related to, disclosing IPV during screening encounters, including the impact of screening on their health care and service use, satisfaction with the screening and response encounter, and sense of connection with VHA and clinicians.

Patient Engagement

This evaluation will include a Veteran Engagement Board (VEB) to accurately represent the complex and diverse experiences of veterans and ensure that our findings are meaningful and accessible to the patient population we are striving to serve [ 55 ]. Through the VEB, we will include veterans’ voices and perspectives in each phase of the evaluation and aim to increase shared decision-making across a diversity of perspectives [ 56 ]. Although we will gain important knowledge through qualitative interviews with veterans to understand the impact of the RHS 3.0 screening and response protocol expansion on patients, involving veterans in research to understand their experiences differs meaningfully from engaging them in the research process to ensure that all phases of the work are veteran-centric and guided and informed by veteran perspectives. The VEB will actively inform the PRISM initiative through regularly occurring meetings and discussions focused on activities across all stages of the project (ie, planning, data collection, interpretation of results, and dissemination of findings).

Results

The PRISM initiative was funded in October 2023. We have developed the qualitative interview guides, obtained institutional review board approval, extracted quantitative data for baseline analyses, and begun recruitment for qualitative interviews. Quantitative analyses will take place in 2024 (T0), 2025 (T1), and 2026 (T2). Qualitative interviews and analyses will take place between April 2024 and October 2025. Reports of progress and results will be made available to evaluation partners and funders through quarterly and end-of-year reports. Evaluation findings will also be intermittently disseminated through peer-reviewed journals and presentations at scientific meetings. All data collection and analyses across time points are expected to be completed in June 2026.

Discussion

This study protocol outlines a mixed methods evaluation of IPV screening expansion in the VHA, conducted in partnership with the VA National IPV Assistance Program. Findings generated from the evaluation of the IPV screening expansion will provide a comprehensive understanding of the reach, effectiveness, adoption, implementation, maintenance, and impact of the expanded IPV screening and response protocol, the RHS 3.0. Additionally, findings will determine the scope of IPV detected in the VA patient population through screening during routine clinical health encounters, knowledge essential to inform clinical practice and policy. These data will generate knowledge regarding IPV disclosures among subgroups of veterans previously not targeted for IPV screening (eg, men, women above reproductive age, and transgender and nonbinary veterans) and those from underserved or vulnerable populations, disparities in IPV screening and outcomes, and patient subgroups at heightened risk for IPV and IPV-related injury and lethality. Little research has examined IPV screening and referral outcomes within these populations, specifically among men and transgender or nonbinary patients (who are at increased risk of IPV compared to cisgender patients [ 13 - 15 ]). Because the majority of IPV screening initiatives to date have targeted women, this work will expand the field’s knowledge regarding other subgroups’ experiences and perceptions of screening and their willingness to disclose IPV during health care encounters.

One limitation of this national evaluation initiative concerns our ability to access data across all VA sites. The VA is currently undergoing an EHR modernization, including transitioning to a new EHR system [ 57 ]. These changes may impede our ability to identify and extract necessary data at VA sites that have transitioned to the new system. Additionally, we are limited by our evaluation method. Although a staggered implementation rollout or the use of control group sites might enhance the evaluation by allowing for comparisons of strategies, the national IPV program partners are prioritizing a full-scale national rollout, currently underway, limiting the possibility of a staggered approach.

This initiative also has critical policy and clinical practice implications. Through the course of this project, we will develop essential evaluation tools for monitoring and improving screening implementation in the VHA over time. The VA’s Office of the Inspector General identified the need for systematic and high-quality tracking of IPV-related programmatic outcomes data, particularly regarding IPV screening implementation outcomes [ 58 ], an area that this project will directly address. This evaluation will provide critically needed systems and clinical data to VHA policy leaders to inform national programming and enable tracking over time. Evaluation of the IPV screening expansion will result in recommendations for future IPV screening implementation initiatives and adaptations, including potential comparisons of implementation strategies for future studies leading to the optimization of IPV screening implementation across the health care system.

Acknowledgments

The authors are grateful to the coauthors for their collaboration and shared scholarship and to the Veterans Health Administration’s (VHA’s) Care Management and Social Work Service’s Intimate Partner Violence Assistance Program (IPVAP) for their support of this work. This research was supported by funding from the US Department of Veterans Affairs (VA) Quality Enhancement Research Initiative (QUERI) as a partnered evaluation initiative (PEC 23-182); Health Services Research and Development (HSR&D) as part of GAP’s HSR&D Career Development Award (CDA 19-234); and the VHA’s Care Management and Social Work Service through their support of the IPV Center for Implementation, Research, and Evaluation (IPV-CIRE). The views expressed in this paper are those of the authors and do not necessarily represent the views of the VA or the United States government.

Data Availability

The data sets generated and analyzed during this study are not publicly available due to requirements of compliance with government procedures. Those interested in deidentified data underlying this initiative can send an email request to the corresponding author.

Authors' Contributions

GAP, MED, and KMI conceptualized the study methodology and project administration. MRR, CP, MS, and CAB contributed to the statistical analysis plan, software, data curation, and visualization. LEB and SM provided supervision and mentorship. All authors contributed to the writing, review, and editing of the paper. All authors approved the paper.

Conflicts of Interest

None declared.

  • Dichter ME, Cerulli C, Bossarte RM. Intimate partner violence victimization among women veterans and associated heart health risks. Womens Health Issues. 2011;21(4 Suppl):S190-S194. [ CrossRef ] [ Medline ]
  • Intimate partner violence surveillance: uniform definitions and recommended data elements. Centers for Disease Control and Prevention. Atlanta, GA.; 2015. URL: https://www.cdc.gov/violenceprevention/pdf/ipv/intimatepartnerviolence.pdf [accessed 2019-04-09]
  • Portnoy GA, Rodriguez A, Kroll-Desrosiers A, Kessner MA, Walls S, Parkes DJ, et al. Prevalence of Intimate Partner Violence (IPV) among Veterans: A Secondary Analysis Study. Washington, DC. US Department of Veterans Affairs [unpublished report]; 2022.
  • Cowlishaw S, Freijah I, Kartal D, Sbisa A, Mulligan A, Notarianni M, et al. Intimate partner violence (IPV) in military and veteran populations: a systematic review of population-based surveys and population screening studies. Int J Environ Res Public Health. 2022;19(14):8853. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Iverson KM, Livingston WS, Vogt D, Smith BN, Kehle-Forbes SM, Mitchell KS. Prevalence of sexual violence and intimate partner violence among US military veterans: findings from surveys with two national samples. J Gen Intern Med. 2024;39(3):418-427. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Iverson KM, Stirman SW, Street AE, Gerber MR, Carpenter SL, Dichter ME, et al. Female veterans' preferences for counseling related to intimate partner violence: informing patient-centered interventions. Gen Hosp Psychiatry. 2016;40:33-38. [ CrossRef ] [ Medline ]
  • Campbell JC. Health consequences of intimate partner violence. Lancet. 2002;359(9314):1331-1336. [ CrossRef ] [ Medline ]
  • Humphreys J, Cooper BA, Miaskowski C. Occurrence, characteristics, and impact of chronic pain in formerly abused women. Violence Against Women. 2011;17(10):1327-1343. [ CrossRef ] [ Medline ]
  • Iverson KM, McLaughlin KA, Gerber MR, Dick A, Smith BN, Bell ME, et al. Exposure to interpersonal violence and its associations with psychiatric morbidity in a U.S. national sample: a gender comparison. Psychol Violence. 2013;3(3):273-287. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Responding to Intimate Partner Violence and Sexual Violence Against Women: WHO Clinical and Policy Guidelines. Geneva, Switzerland. World Health Organization; 2013.
  • Cerulli C, Bossarte RM, Dichter ME. Exploring intimate partner violence status among male veterans and associated health outcomes. Am J Mens Health. 2014;8(1):66-73. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Iverson KM, Vogt D, Maskin RM, Smith BN. Intimate partner violence victimization and associated implications for health and functioning among male and female post-9/11 veterans. Med Care. 2017;55(Suppl 9 Suppl 2):S78-S84. [ CrossRef ] [ Medline ]
  • Peitzmeier SM, Malik M, Kattari SK, Marrow E, Stephenson R, Agénor M, et al. Intimate partner violence in transgender populations: systematic review and meta-analysis of prevalence and correlates. Am J Public Health. 2020;110(9):e1-e14. [ CrossRef ] [ Medline ]
  • Langenderfer-Magruder L, Whitfield DL, Walls NE, Kattari SK, Ramos D. Experiences of intimate partner violence and subsequent police reporting among lesbian, gay, bisexual, transgender, and queer adults in colorado: comparing rates of cisgender and transgender victimization. J Interpers Violence. 2016;31(5):855-871. [ CrossRef ] [ Medline ]
  • Das KJH, Peitzmeier S, Berrahou IK, Potter J. Intimate partner violence (IPV) screening and referral outcomes among transgender patients in a primary care setting. J Interpers Violence. 2022;37(13-14):NP11720-NP11742. [ CrossRef ] [ Medline ]
  • Tarzia L, Bohren MA, Cameron J, Garcia-Moreno C, O'Doherty L, Fiolet R, et al. Women's experiences and expectations after disclosure of intimate partner abuse to a healthcare provider: a qualitative meta-synthesis. BMJ Open. 2020;10(11):e041339. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • García-Moreno C, Hegarty K, d'Oliveira AFL, Koziol-McLain J, Colombini M, Feder G. The health-systems response to violence against women. Lancet. 2015;385(9977):1567-1579. [ CrossRef ] [ Medline ]
  • U.S. Department of Veterans Affairs. Department of Veterans affairs FY 2018-2024 strategic plan. Washington, DC. Department of Defense; 2014. URL: https://www.va.gov/oei/docs/VA2018-2024strategicPlan.pdf [accessed 2024-06-12]
  • US Preventive Services Task Force, Curry SJ, Krist AH, Owens DK, Barry MJ, Caughey AB, et al. Screening for intimate partner violence, elder abuse, and abuse of vulnerable adults: US preventive services task force final recommendation statement. JAMA. 2018;320(16):1678-1687. [ CrossRef ] [ Medline ]
  • Intimate Partner Violence Assistance Program (VHA directive 1198). Washington, DC. Veteran's Health Administration; 2019.
  • Dichter ME, Wagner C, Goldberg EB, Iverson KM. Intimate partner violence detection and care in the veterans health administration: patient and provider perspectives. Womens Health Issues. 2015;25(5):555-560. [ CrossRef ] [ Medline ]
  • Iverson KM, Huang K, Wells SY, Wright JD, Gerber MR, Wiltsey-Stirman S. Women veterans' preferences for intimate partner violence screening and response procedures within the veterans health administration. Res Nurs Health. 2014;37(4):302-311. [ CrossRef ] [ Medline ]
  • Iverson KM, Stolzmann KL, Brady JE, Adjognon OL, Dichter ME, Lew RA, et al. Integrating intimate partner violence screening programs in primary care: results from a hybrid-II implementation-effectiveness RCT. Am J Prev Med. 2023;65(2):251-260. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Iverson KM, Adjognon O, Grillo AR, Dichter ME, Gutner CA, Hamilton AB, et al. Intimate partner violence screening programs in the veterans health administration: informing scale-up of successful practices. J Gen Intern Med. 2019;34(11):2435-2442. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Portnoy GA, Iverson KM, Haskell SG, Czarnogorski M, Gerber MR. A multisite quality improvement initiative to enhance the adoption of screening practices for intimate partner violence into routine primary care for women veterans. Public Health Rep. 2021;136(1):52-60. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Gaffey AE, Burg MM, Rosman L, Portnoy GA, Brandt CA, Cavanagh CE, et al. Baseline characteristics from the women veterans cohort study: gender differences and similarities in health and healthcare utilization. J Womens Health (Larchmt). 2021;30(7):944-955. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Makaroun LK, Brignone E, Rosland A, Dichter ME. Association of health conditions and health service utilization with intimate partner violence identified via routine screening among middle-aged and older women. JAMA Netw Open. 2020;3(4):e203138. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Sherin KM, Sinacore JM, Li XQ, Zitter RE, Shakil A. HITS: a short domestic violence screening tool for use in a family practice setting. Fam Med. 1998;30(7):508-512. [ Medline ]
  • Chan C, Chan Y, Au A, Cheung G. Reliability and validity of the “Extended ‐ Hurt, Insult, Threaten, Scream” (E‐Hits) screening tool in detecting intimate partner violence in hospital emergency departments in Hong Kong. Hong Kong J Emerg Med. 2010;17(2):109-117. [ CrossRef ]
  • Iverson KM, King MW, Gerber MR, Resick PA, Kimerling R, Street AE, et al. Accuracy of an intimate partner violence screening tool for female VHA patients: a replication and extension. J Trauma Stress. 2015;28(1):79-82. [ CrossRef ] [ Medline ]
  • Campbell JC, Webster DW, Glass N. The danger assessment: validation of a lethality risk assessment instrument for intimate partner femicide. J Interpers Violence. 2009;24(4):653-674. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Eggleston EM, Klompas M. Rational use of electronic health records for diabetes population management. Curr Diab Rep. 2014;14(4):479. [ CrossRef ] [ Medline ]
  • Veterans Health Administration. U.S. Department of Veterans Affairs. URL: https://www.va.gov/health/ [accessed 2024-02-14]
  • Dichter ME, Iverson KM, Montgomery AE, Sorrentino A. Clinical response to positive screens for intimate partner violence in the veterans health administration: findings from review of medical records. J Aggress Maltreatment Trauma. 2021;32(7-8):1005-1021. [ CrossRef ]
  • Ketchum K, Dichter ME. Evaluation of a pilot intimate partner violence screening program in a veterans health administration HIV clinic. J Aggress Maltreatment Trauma. 2022;32(7-8):979-988. [ CrossRef ]
  • Miller CJ, Stolzmann K, Dichter ME, Adjognon OL, Brady JE, Portnoy GA, et al. Intimate partner violence screening for women in the veterans health administration: temporal trends from the early years of implementation 2014-2020. J Aggress Maltreat Trauma. 2023;32(7-8):960-978. [ FREE Full text ] [ CrossRef ]
  • Adjognon OL, Brady JE, Iverson KM, Stolzmann K, Dichter ME, Lew RA, et al. Using the matrixed multiple case study approach to identify factors affecting the uptake of IPV screening programs following the use of implementation facilitation. Implement Sci Commun. 2023;4(1):145. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Glasgow RE, Vogt TM, Boles SM. Evaluating the public health impact of health promotion interventions: the RE-AIM framework. Am J Public Health. 1999;89(9):1322-1327. [ CrossRef ] [ Medline ]
  • Damschroder LJ, Reardon CM, Widerquist MAO, Lowery J. The updated consolidated framework for implementation research based on user feedback. Implement Sci. 2022;17(1):75. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Damschroder LJ, Lowery JC. Evaluation of a large-scale weight management program using the consolidated framework for implementation research (CFIR). Implement Sci. 2013;8:51. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Corporate data warehouse (CDW). Health Services Research & Development. URL: https://www.hsrd.research.va.gov/for_researchers/vinci/cdw.cfm [accessed 2024-02-14]
  • Administration VH. Program guide: 1200.21. VHA operations activities that may constitute research. VHA Handbook. 2019;13.
  • Stroup WW. Generalized Linear Mixed Models: Modern Concepts, Methods and Applications (Chapman & Hall/CRC texts in Statistical Science). Boca Raton, Florida. CRC press; 2012:1439815127.
  • Bates D, Mächler M, Bolker B, Walker S. Fitting linear mixed-effects models using lme4. J Stat Softw. 2015;67(1):1-48.
  • Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Damschroder LJ, Reardon CM, Opra Widerquist MA, Lowery J. Conceptualizing outcomes for use with the consolidated framework for implementation research (CFIR): the CFIR outcomes addendum. Implement Sci. 2022;17(1):7. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Palinkas LA, Horwitz SM, Green CA, Wisdom JP, Duan N, Hoagwood K. Purposeful sampling for qualitative data collection and analysis in mixed method implementation research. Adm Policy Ment Health. 2015;42(5):533-544. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Nathan S, Newman C, Lancaster K. Qualitative Interviewing. Singapore. Springer Nature; 2019:391-410.
  • Malterud K, Siersma VD, Guassora AD. Sample size in qualitative interview studies: guided by information power. Qual Health Res. 2016;26(13):1753-1760. [ CrossRef ] [ Medline ]
  • Braun V, Clarke V. To saturate or not to saturate? questioning data saturation as a useful concept for thematic analysis and sample-size rationales. Qual Res Sport Exerc Health. 2019;13(2):201-216. [ CrossRef ]
  • Finley EP, Huynh AK, Farmer MM, Bean-Mayberry B, Moin T, Oishi SM, et al. Periodic reflections: a method of guided discussions for documenting implementation phenomena. BMC Med Res Methodol. 2018;18(1):153. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Fereday J, Muir-Cochrane E. Demonstrating rigor using thematic analysis: a hybrid approach of inductive and deductive coding and theme development. Int J Qual Methods. 2016;5(1):80-92. [ CrossRef ]
  • Clarke V, Braun V. Successful Qualitative Research: A Practical Guide for Beginners. New Zealand. University of Auckland; 2016:1-400.
  • Gale RC, Wu J, Erhardt T, Bounthavong M, Reardon CM, Damschroder LJ, et al. Comparison of rapid vs in-depth qualitative analytic methods from a process evaluation of academic detailing in the veterans health administration. Implement Sci. 2019;14(1):11. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Franco Z, Hooyer K, Ruffalo L, Frey-Ho Fung RA. Veterans health and well-being—collaborative research approaches: toward veteran community engagement. J Humanist Psychol. 2020;61(3):287-312. [ CrossRef ]
  • Increasing the voice of the veteran in VA research: recommendations from the veteran engagement workgroup. U.S. Department of Veterans Affairs. 2016. URL: https:/​/www.​hsrd.research.va.gov/​for_researchers/​cyber_seminars/​archives/​video_archive.​cfm?SessionID=1125 [accessed 2023-05-03]
  • Frequently asked questions. VA| EHR Modernization. URL: https://digital.va.gov/ehr-modernization/frequently-asked-question/ [accessed 2024-06-26]
  • Intimate partner violence assistance program implementation status and barriers to compliance. U.S. Department of Veterans Affairs. 2022. URL: https://www.oversight.gov/sites/default/files/oig-reports/VA/VAOIG-21-00797-248.pdf [accessed 2023-05-03]

Abbreviations

CDW: corporate data warehouse
CFIR: Consolidated Framework for Implementation Research
EHR: electronic health record
GLMM: generalized linear mixed models
IPV: intimate partner violence
IPV-CIRE: Intimate Partner Violence Center for Implementation, Research, and Evaluation
Partnered Evaluation of Relationship Health Innovations and Services through Mixed Methods
QI: quality improvement
QUERI: Quality Enhancement Research Initiative
RE-AIM: Reach, Effectiveness, Adoption, Implementation, and Maintenance
Relationship Health and Safety Clinical Reminder version 3
VA: Department of Veterans Affairs
VEB: Veteran Engagement Board
VHA: Veterans Health Administration

Edited by T Leung. The proposal for this study was externally peer-reviewed by the Quality Enhancement Research Initiative (QUERI) Parent, Veterans Affairs (VA) Office of Research & Development (ORD) (USA). Submitted 25.04.24; accepted 13.07.24; published 28.08.24.

©Galina A Portnoy, Mark R Relyea, Melissa E Dichter, Katherine M Iverson, Candice Presseau, Cynthia A Brandt, Melissa Skanderson, LeAnn E Bruce, Steve Martino. Originally published in JMIR Research Protocols (https://www.researchprotocols.org), 28.08.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Research Protocols, is properly cited. The complete bibliographic information, a link to the original publication on https://www.researchprotocols.org, as well as this copyright and license information must be included.
