Methodological Approaches to Literature Review

  • Living reference work entry
  • First Online: 09 May 2023

  • Dennis Thomas,
  • Elida Zairina &
  • Johnson George

The literature review can serve various functions in the contexts of education and research. It aids in identifying knowledge gaps, informing research methodology, and developing a theoretical framework during the planning stages of a research study or project, as well as in reporting review findings in the context of the existing literature. This chapter discusses the methodological approaches to conducting a literature review and offers an overview of the different types of review, including narrative reviews, scoping reviews, and systematic reviews, which may incorporate synthesis strategies such as meta-analysis and meta-synthesis. Review authors should consider the scope of the literature review when selecting a type and method. Being focused is essential for a successful review; however, this must be balanced against the relevance of the review to a broad audience.

Author information

Authors and Affiliations

Centre of Excellence in Treatable Traits, College of Health, Medicine and Wellbeing, University of Newcastle, Hunter Medical Research Institute Asthma and Breathing Programme, Newcastle, NSW, Australia

Dennis Thomas

Department of Pharmacy Practice, Faculty of Pharmacy, Universitas Airlangga, Surabaya, Indonesia

Elida Zairina

Centre for Medicine Use and Safety, Monash Institute of Pharmaceutical Sciences, Faculty of Pharmacy and Pharmaceutical Sciences, Monash University, Parkville, VIC, Australia

Johnson George

Corresponding author

Correspondence to Johnson George.

Section Editor information

College of Pharmacy, Qatar University, Doha, Qatar

Derek Charles Stewart

Department of Pharmacy, University of Huddersfield, Huddersfield, United Kingdom

Zaheer-Ud-Din Babar

Copyright information

© 2023 Springer Nature Switzerland AG

About this entry

Cite this entry

Thomas, D., Zairina, E., George, J. (2023). Methodological Approaches to Literature Review. In: Encyclopedia of Evidence in Pharmaceutical Public Health and Health Services Research in Pharmacy. Springer, Cham. https://doi.org/10.1007/978-3-030-50247-8_57-1

DOI: https://doi.org/10.1007/978-3-030-50247-8_57-1

Received: 22 February 2023

Accepted: 22 February 2023

Published: 09 May 2023

Publisher Name: Springer, Cham

Print ISBN: 978-3-030-50247-8

Online ISBN: 978-3-030-50247-8

eBook Packages: Springer Reference Biomedicine and Life Sciences, Reference Module Biomedical and Life Sciences

How to Write a Literature Review | Guide, Examples, & Templates

Published on January 2, 2023 by Shona McCombes. Revised on September 11, 2023.

What is a literature review? A literature review is a survey of scholarly sources on a specific topic. It provides an overview of current knowledge, allowing you to identify relevant theories, methods, and gaps in the existing research that you can later apply to your paper, thesis, or dissertation topic.

There are five key steps to writing a literature review:

  • Search for relevant literature
  • Evaluate sources
  • Identify themes, debates, and gaps
  • Outline the structure
  • Write your literature review

A good literature review doesn’t just summarize sources: it analyzes, synthesizes, and critically evaluates to give a clear picture of the state of knowledge on the subject.

Table of contents

  • What is the purpose of a literature review?
  • Examples of literature reviews
  • Step 1 – Search for relevant literature
  • Step 2 – Evaluate and select sources
  • Step 3 – Identify themes, debates, and gaps
  • Step 4 – Outline your literature review’s structure
  • Step 5 – Write your literature review
  • Free lecture slides
  • Other interesting articles
  • Frequently asked questions

When you write a thesis, dissertation, or research paper, you will likely have to conduct a literature review to situate your research within existing knowledge. The literature review gives you a chance to:

  • Demonstrate your familiarity with the topic and its scholarly context
  • Develop a theoretical framework and methodology for your research
  • Position your work in relation to other researchers and theorists
  • Show how your research addresses a gap or contributes to a debate
  • Evaluate the current state of research and demonstrate your knowledge of the scholarly debates around your topic.

Writing literature reviews is a particularly important skill if you want to apply for graduate school or pursue a career in research. We’ve written a step-by-step guide that you can follow below.

Writing literature reviews can be quite challenging! A good starting point could be to look at some examples, depending on what kind of literature review you’d like to write.

  • Example literature review #1: “Why Do People Migrate? A Review of the Theoretical Literature” (Theoretical literature review about the development of economic migration theory from the 1950s to today.)
  • Example literature review #2: “Literature review as a research methodology: An overview and guidelines” (Methodological literature review about interdisciplinary knowledge acquisition and production.)
  • Example literature review #3: “The Use of Technology in English Language Learning: A Literature Review” (Thematic literature review about the effects of technology on language acquisition.)
  • Example literature review #4: “Learners’ Listening Comprehension Difficulties in English Language Learning: A Literature Review” (Chronological literature review about how the concept of listening skills has changed over time.)

You can also check out our templates with literature review examples and sample outlines.

Before you begin searching for literature, you need a clearly defined topic.

If you are writing the literature review section of a dissertation or research paper, you will search for literature related to your research problem and questions.

Make a list of keywords

Start by creating a list of keywords related to your research question. Include each of the key concepts or variables you’re interested in, and list any synonyms and related terms. You can add to this list as you discover new keywords in the process of your literature search.

For example, a research question about how social media affects body image in young people might generate keywords such as:

  • Social media, Facebook, Instagram, Twitter, Snapchat, TikTok
  • Body image, self-perception, self-esteem, mental health
  • Generation Z, teenagers, adolescents, youth

Search for relevant sources

Use your keywords to begin searching for sources. Some useful databases to search for journals and articles include:

  • Your university’s library catalogue
  • Google Scholar
  • Project Muse (humanities and social sciences)
  • Medline (life sciences and biomedicine)
  • EconLit (economics)
  • Inspec (physics, engineering and computer science)

You can also use Boolean operators (AND, OR, NOT) to help narrow down your search.
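
For example, a hypothetical query combining keyword groups with these operators (exact syntax and field tags vary by database) might look like this:

("social media" OR Instagram OR TikTok) AND ("body image" OR "self-esteem") AND (adolescen* OR teenager* OR "Generation Z")

Truncation (the asterisk) and quotation marks for exact phrases are supported by most databases, but check each database’s help pages for its specific syntax.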

Make sure to read the abstract to find out whether an article is relevant to your question. When you find a useful book or article, you can check the bibliography to find other relevant sources.

You likely won’t be able to read absolutely everything that has been written on your topic, so it will be necessary to evaluate which sources are most relevant to your research question.

For each publication, ask yourself:

  • What question or problem is the author addressing?
  • What are the key concepts and how are they defined?
  • What are the key theories, models, and methods?
  • Does the research use established frameworks or take an innovative approach?
  • What are the results and conclusions of the study?
  • How does the publication relate to other literature in the field? Does it confirm, add to, or challenge established knowledge?
  • What are the strengths and weaknesses of the research?

Make sure the sources you use are credible, and be sure to read any landmark studies and major theories in your field of research.

You can use our template to summarize and evaluate sources you’re thinking about using.

Take notes and cite your sources

As you read, you should also begin the writing process. Take notes that you can later incorporate into the text of your literature review.

It is important to keep track of your sources with citations to avoid plagiarism. It can be helpful to make an annotated bibliography, where you compile full citation information and write a paragraph of summary and analysis for each source. This helps you remember what you read and saves time later in the process.

To begin organizing your literature review’s argument and structure, be sure you understand the connections and relationships between the sources you’ve read. Based on your reading and notes, you can look for:

  • Trends and patterns (in theory, method or results): do certain approaches become more or less popular over time?
  • Themes: what questions or concepts recur across the literature?
  • Debates, conflicts and contradictions: where do sources disagree?
  • Pivotal publications: are there any influential theories or studies that changed the direction of the field?
  • Gaps: what is missing from the literature? Are there weaknesses that need to be addressed?

This step will help you work out the structure of your literature review and (if applicable) show how your own research will contribute to existing knowledge.

Continuing the social media example, you might note that:

  • Most research has focused on young women.
  • There is an increasing interest in the visual aspects of social media.
  • But there is still a lack of robust research on highly visual platforms like Instagram and Snapchat; this is a gap that you could address in your own research.

There are various approaches to organizing the body of a literature review. Depending on the length of your literature review, you can combine several of these strategies (for example, your overall structure might be thematic, but each theme is discussed chronologically).

Chronological

The simplest approach is to trace the development of the topic over time. However, if you choose this strategy, be careful to avoid simply listing and summarizing sources in order.

Try to analyze patterns, turning points and key debates that have shaped the direction of the field. Give your interpretation of how and why certain developments occurred.

Thematic

If you have found some recurring central themes, you can organize your literature review into subsections that address different aspects of the topic.

For example, if you are reviewing literature about inequalities in migrant health outcomes, key themes might include healthcare policy, language barriers, cultural attitudes, legal status, and economic access.

Methodological

If you draw your sources from different disciplines or fields that use a variety of research methods , you might want to compare the results and conclusions that emerge from different approaches. For example:

  • Look at what results have emerged in qualitative versus quantitative research
  • Discuss how the topic has been approached by empirical versus theoretical scholarship
  • Divide the literature into sociological, historical, and cultural sources

Theoretical

A literature review is often the foundation for a theoretical framework . You can use it to discuss various theories, models, and definitions of key concepts.

You might argue for the relevance of a specific theoretical approach, or combine various theoretical concepts to create a framework for your research.

Like any other academic text, your literature review should have an introduction, a main body, and a conclusion. What you include in each depends on the objective of your literature review.

The introduction should clearly establish the focus and purpose of the literature review.

Depending on the length of your literature review, you might want to divide the body into subsections. You can use a subheading for each theme, time period, or methodological approach.

As you write, you can follow these tips:

  • Summarize and synthesize: give an overview of the main points of each source and combine them into a coherent whole
  • Analyze and interpret: don’t just paraphrase other researchers — add your own interpretations where possible, discussing the significance of findings in relation to the literature as a whole
  • Critically evaluate: mention the strengths and weaknesses of your sources
  • Write in well-structured paragraphs: use transition words and topic sentences to draw connections, comparisons and contrasts

In the conclusion, you should summarize the key findings you have taken from the literature and emphasize their significance.

When you’ve finished writing and revising your literature review, don’t forget to proofread thoroughly before submitting.

This article has been adapted into lecture slides that you can use to teach your students about writing a literature review.

Scribbr slides are free to use, customize, and distribute for educational purposes.

If you want to know more about the research process , methodology , research bias , or statistics , make sure to check out some of our other articles with explanations and examples.

Methodology

  • Sampling methods
  • Simple random sampling
  • Stratified sampling
  • Cluster sampling
  • Likert scales
  • Reproducibility

Statistics

  • Null hypothesis
  • Statistical power
  • Probability distribution
  • Effect size
  • Poisson distribution

Research bias

  • Optimism bias
  • Cognitive bias
  • Implicit bias
  • Hawthorne effect
  • Anchoring bias
  • Explicit bias

A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question.

It is often written as part of a thesis, dissertation, or research paper, in order to situate your work in relation to existing knowledge.

There are several reasons to conduct a literature review at the beginning of a research project:

  • To familiarize yourself with the current state of knowledge on your topic
  • To ensure that you’re not just repeating what others have already done
  • To identify gaps in knowledge and unresolved problems that your research can address
  • To develop your theoretical framework and methodology
  • To provide an overview of the key findings and debates on the topic

Writing the literature review shows your reader how your work relates to existing research and what new insights it will contribute.

The literature review usually comes near the beginning of your thesis or dissertation. After the introduction, it grounds your research in a scholarly field and leads directly to your theoretical framework or methodology.

A literature review is a survey of credible sources on a topic, often used in dissertations, theses, and research papers. Literature reviews give an overview of knowledge on a subject, helping you identify relevant theories and methods, as well as gaps in existing research. Literature reviews are set up similarly to other academic texts, with an introduction, a main body, and a conclusion.

An annotated bibliography is a list of source references that has a short description (called an annotation) for each of the sources. It is often assigned as part of the research process for a paper.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation below.

McCombes, S. (2023, September 11). How to Write a Literature Review | Guide, Examples, & Templates. Scribbr. Retrieved September 3, 2024, from https://www.scribbr.com/dissertation/literature-review/

  • Open access
  • Published: 07 September 2020

A tutorial on methodological studies: the what, when, how and why

  • Lawrence Mbuagbaw ORCID: orcid.org/0000-0001-5855-5461,
  • Daeria O. Lawson,
  • Livia Puljak,
  • David B. Allison &
  • Lehana Thabane

BMC Medical Research Methodology volume 20, Article number: 226 (2020)

Methodological studies – studies that evaluate the design, analysis or reporting of other research-related reports – play an important role in health research. They help to highlight issues in the conduct of research with the aim of improving health research methodology, and ultimately reducing research waste.

We provide an overview of some of the key aspects of methodological studies such as what they are, and when, how and why they are done. We adopt a “frequently asked questions” format to facilitate reading this paper and provide multiple examples to help guide researchers interested in conducting methodological studies. Some of the topics addressed include: is it necessary to publish a study protocol? How to select relevant research reports and databases for a methodological study? What approaches to data extraction and statistical analysis should be considered when conducting a methodological study? What are potential threats to validity and is there a way to appraise the quality of methodological studies?

Appropriate reflection and application of basic principles of epidemiology and biostatistics are required in the design and analysis of methodological studies. This paper provides an introduction for further discussion about the conduct of methodological studies.

The field of meta-research (or research-on-research) has proliferated in recent years in response to issues with research quality and conduct [ 1 , 2 , 3 ]. As the name suggests, this field targets issues with research design, conduct, analysis and reporting. Various types of research reports are often examined as the unit of analysis in these studies (e.g. abstracts, full manuscripts, trial registry entries). Like many other novel fields of research, meta-research has seen a proliferation of use before the development of reporting guidance. For example, this was the case with randomized trials for which risk of bias tools and reporting guidelines were only developed much later – after many trials had been published and noted to have limitations [ 4 , 5 ]; and for systematic reviews as well [ 6 , 7 , 8 ]. However, in the absence of formal guidance, studies that report on research differ substantially in how they are named, conducted and reported [ 9 , 10 ]. This creates challenges in identifying, summarizing and comparing them. In this tutorial paper, we will use the term methodological study to refer to any study that reports on the design, conduct, analysis or reporting of primary or secondary research-related reports (such as trial registry entries and conference abstracts).

In the past 10 years, there has been an increase in the use of terms related to methodological studies (based on records retrieved with a keyword search [in the title and abstract] for “methodological review” and “meta-epidemiological study” in PubMed up to December 2019), suggesting that these studies may be appearing more frequently in the literature. See Fig.  1 .

Figure 1: Trends in the number of studies that mention “methodological review” or “meta-epidemiological study” in PubMed.
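
A count like the one behind Fig. 1 can be retrieved programmatically. The sketch below is a minimal illustration, assuming Biopython is installed and a contact email is supplied; the query string mirrors the keyword search described above but is not the authors' exact strategy.

from Bio import Entrez  # Biopython: pip install biopython

Entrez.email = "you@example.org"  # NCBI requires a contact address

QUERY = '"methodological review"[tiab] OR "meta-epidemiological study"[tiab]'

def pubmed_count(term, year):
    """Return the number of PubMed records matching `term` published in `year`."""
    handle = Entrez.esearch(
        db="pubmed",
        term=term,
        mindate=str(year),
        maxdate=str(year),
        datetype="pdat",  # restrict by publication date
        retmax=0,         # only the total count is needed
    )
    record = Entrez.read(handle)
    handle.close()
    return int(record["Count"])

for year in range(2010, 2020):
    print(year, pubmed_count(QUERY, year))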

The methods used in many methodological studies have been borrowed from systematic and scoping reviews. This practice has influenced the direction of the field, with many methodological studies including searches of electronic databases, screening of records, duplicate data extraction and assessments of risk of bias in the included studies. However, the research questions posed in methodological studies do not always require the approaches listed above, and guidance is needed on when and how to apply these methods to a methodological study. Even though methodological studies can be conducted on qualitative or mixed methods research, this paper focuses on and draws examples exclusively from quantitative research.

The objectives of this paper are to provide some insights on how to conduct methodological studies so that there is greater consistency between the research questions posed, and the design, analysis and reporting of findings. We provide multiple examples to illustrate concepts and a proposed framework for categorizing methodological studies in quantitative research.

What is a methodological study?

Any study that describes or analyzes methods (design, conduct, analysis or reporting) in published (or unpublished) literature is a methodological study. Consequently, the scope of methodological studies is quite extensive and includes, but is not limited to, topics as diverse as: research question formulation [ 11 ]; adherence to reporting guidelines [ 12 , 13 , 14 ] and consistency in reporting [ 15 ]; approaches to study analysis [ 16 ]; investigating the credibility of analyses [ 17 ]; and studies that synthesize these methodological studies [ 18 ]. While the nomenclature of methodological studies is not uniform, the intents and purposes of these studies remain fairly consistent – to describe or analyze methods in primary or secondary studies. As such, methodological studies may also be classified as a subtype of observational studies.

Parallel to this are experimental studies that compare different methods. Even though they play an important role in informing optimal research methods, experimental methodological studies are beyond the scope of this paper. Examples of such studies include the randomized trials by Buscemi et al., comparing single data extraction to double data extraction [ 19 ], and Carrasco-Labra et al., comparing approaches to presenting findings in Grading of Recommendations, Assessment, Development and Evaluations (GRADE) summary of findings tables [ 20 ]. In these studies, the unit of analysis is the person or groups of individuals applying the methods. We also direct readers to the Studies Within a Trial (SWAT) and Studies Within a Review (SWAR) programme operated through the Hub for Trials Methodology Research, for further reading as a potential useful resource for these types of experimental studies [ 21 ]. Lastly, this paper is not meant to inform the conduct of research using computational simulation and mathematical modeling for which some guidance already exists [ 22 ], or studies on the development of methods using consensus-based approaches.

When should we conduct a methodological study?

Methodological studies occupy a unique niche in health research that allows them to inform methodological advances. Methodological studies should also be conducted as precursors to reporting guideline development, as they provide an opportunity to understand current practices, and help to identify the need for guidance and gaps in methodological or reporting quality. For example, the development of the popular Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines was preceded by methodological studies identifying poor reporting practices [ 23 , 24 ]. In these instances, after the reporting guidelines are published, methodological studies can also be used to monitor uptake of the guidelines.

These studies can also be conducted to inform the state of the art for design, analysis and reporting practices across different types of health research fields, with the aim of improving research practices, and preventing or reducing research waste. For example, Samaan et al. conducted a scoping review of adherence to different reporting guidelines in health care literature [ 18 ]. Methodological studies can also be used to determine the factors associated with reporting practices. For example, Abbade et al. investigated journal characteristics associated with the use of the Participants, Intervention, Comparison, Outcome, Timeframe (PICOT) format in framing research questions in trials of venous ulcer disease [ 11 ].

How often are methodological studies conducted?

There is no clear answer to this question. Based on a search of PubMed, the use of related terms (“methodological review” and “meta-epidemiological study”) – and therefore, the number of methodological studies – is on the rise. However, many other terms are used to describe methodological studies. There are also many studies that explore design, conduct, analysis or reporting of research reports, but that do not use any specific terms to describe or label their study design in terms of “methodology”. This diversity in nomenclature makes a census of methodological studies elusive. Appropriate terminology and key words for methodological studies are needed to facilitate improved accessibility for end-users.

Why do we conduct methodological studies?

Methodological studies provide information on the design, conduct, analysis or reporting of primary and secondary research and can be used to appraise quality, quantity, completeness, accuracy and consistency of health research. These issues can be explored in specific fields, journals, databases, geographical regions and time periods. For example, Areia et al. explored the quality of reporting of endoscopic diagnostic studies in gastroenterology [ 25 ]; Knol et al. investigated the reporting of p-values in baseline tables in randomized trials published in high impact journals [ 26 ]; Chen et al. describe adherence to the Consolidated Standards of Reporting Trials (CONSORT) statement in Chinese journals [ 27 ]; and Hopewell et al. describe the effect of editors’ implementation of CONSORT guidelines on reporting of abstracts over time [ 28 ]. Methodological studies provide useful information to researchers, clinicians, editors, publishers and users of health literature. As a result, these studies have been a cornerstone of important methodological developments in the past two decades and have informed the development of many health research guidelines including the highly cited CONSORT statement [ 5 ].

Where can we find methodological studies?

Methodological studies can be found in most common biomedical bibliographic databases (e.g. Embase, MEDLINE, PubMed, Web of Science). However, the biggest caveat is that methodological studies are hard to identify in the literature due to the wide variety of names used and the lack of comprehensive databases dedicated to them. A handful can be found in the Cochrane Library as “Cochrane Methodology Reviews”, but these studies only cover methodological issues related to systematic reviews. Previous attempts to catalogue all empirical studies of methods used in reviews were abandoned 10 years ago [ 29 ]. In other databases, a variety of search terms may be applied with different levels of sensitivity and specificity.

Some frequently asked questions about methodological studies

In this section, we have outlined responses to questions that might help inform the conduct of methodological studies.

Q: How should I select research reports for my methodological study?

A: Selection of research reports for a methodological study depends on the research question and eligibility criteria. Once a clear research question is set and the nature of literature one desires to review is known, one can then begin the selection process. Selection may begin with a broad search, especially if the eligibility criteria are not apparent. For example, a methodological study of Cochrane Reviews of HIV would not require a complex search as all eligible studies can easily be retrieved from the Cochrane Library after checking a few boxes [ 30 ]. On the other hand, a methodological study of subgroup analyses in trials of gastrointestinal oncology would require a search to find such trials, and further screening to identify trials that conducted a subgroup analysis [ 31 ].

The strategies used for identifying participants in observational studies can apply here. One may use a systematic search to identify all eligible studies. If the number of eligible studies is unmanageable, a random sample of articles can be expected to provide comparable results if it is sufficiently large [ 32 ]. For example, Wilson et al. used a random sample of trials from the Cochrane Stroke Group’s Trial Register to investigate completeness of reporting [ 33 ]. It is possible that a simple random sample would lead to underrepresentation of units (i.e. research reports) that are smaller in number. This is relevant if the investigators wish to compare multiple groups but have too few units in one group. In this case a stratified sample would help to create equal groups. For example, in a methodological study comparing Cochrane and non-Cochrane reviews, Kahale et al. drew random samples from both groups [ 34 ]. Alternatively, systematic or purposeful sampling strategies can be used and we encourage researchers to justify their selected approaches based on the study objective.
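
As a rough sketch of these sampling strategies (not the cited authors' code), the snippet below draws a simple random sample and an equal-sized stratified sample from a hypothetical sampling frame of article records; the field names (pmid, review_type) and sample sizes are illustrative assumptions.

import random

# Hypothetical sampling frame: one record per research report retrieved by the search
frame = [
    {"pmid": "100001", "review_type": "Cochrane"},
    {"pmid": "100002", "review_type": "non-Cochrane"},
    # ... in practice, hundreds or thousands of records
]

random.seed(2020)  # fix and report the seed so the draw is reproducible

# Simple random sample of up to 150 reports
simple_sample = random.sample(frame, k=min(150, len(frame)))

# Stratified sample: the same number of reports drawn from each group
def stratified_sample(records, key, n_per_group):
    groups = {}
    for rec in records:
        groups.setdefault(rec[key], []).append(rec)
    sample = []
    for members in groups.values():
        sample.extend(random.sample(members, k=min(n_per_group, len(members))))
    return sample

stratified = stratified_sample(frame, key="review_type", n_per_group=75)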

Q: How many databases should I search?

A: The number of databases one should search would depend on the approach to sampling, which can include targeting the entire “population” of interest or a sample of that population. If you are interested in including the entire target population for your research question, or drawing a random or systematic sample from it, then a comprehensive and exhaustive search for relevant articles is required. In this case, we recommend using systematic approaches for searching electronic databases (i.e. at least 2 databases with a replicable and time stamped search strategy). The results of your search will constitute a sampling frame from which eligible studies can be drawn.

Alternatively, if your approach to sampling is purposeful, then we recommend targeting the database(s) or data sources (e.g. journals, registries) that include the information you need. For example, if you are conducting a methodological study of high impact journals in plastic surgery and they are all indexed in PubMed, you likely do not need to search any other databases. You may also have a comprehensive list of all journals of interest and can approach your search using the journal names in your database search (or by accessing the journal archives directly from the journal’s website). Even though one could also search journals’ web pages directly, using a database such as PubMed has multiple advantages, such as the use of filters, so the search can be narrowed down to a certain period, or study types of interest. Furthermore, individual journals’ web sites may have different search functionalities, which do not necessarily yield a consistent output.

Q: Should I publish a protocol for my methodological study?

A: A protocol is a description of intended research methods. Currently, only protocols for clinical trials require registration [ 35 ]. Protocols for systematic reviews are encouraged but no formal recommendation exists. The scientific community welcomes the publication of protocols because they help protect against selective outcome reporting, the use of post hoc methodologies to embellish results, and to help avoid duplication of efforts [ 36 ]. While the latter two risks exist in methodological research, the negative consequences may be substantially less than for clinical outcomes. In a sample of 31 methodological studies, 7 (22.6%) referenced a published protocol [ 9 ]. In the Cochrane Library, there are 15 protocols for methodological reviews (21 July 2020). This suggests that publishing protocols for methodological studies is not uncommon.

Authors can consider publishing their study protocol in a scholarly journal as a manuscript. Advantages of such publication include obtaining peer-review feedback about the planned study, and easy retrieval by searching databases such as PubMed. The disadvantages in trying to publish protocols includes delays associated with manuscript handling and peer review, as well as costs, as few journals publish study protocols, and those journals mostly charge article-processing fees [ 37 ]. Authors who would like to make their protocol publicly available without publishing it in scholarly journals, could deposit their study protocols in publicly available repositories, such as the Open Science Framework ( https://osf.io/ ).

Q: How to appraise the quality of a methodological study?

A: To date, there is no published tool for appraising the risk of bias in a methodological study, but in principle, a methodological study could be considered as a type of observational study. Therefore, during conduct or appraisal, care should be taken to avoid the biases common in observational studies [ 38 ]. These biases include selection bias, comparability of groups, and ascertainment of exposure or outcome. In other words, to generate a representative sample, a comprehensive reproducible search may be necessary to build a sampling frame. Additionally, random sampling may be necessary to ensure that all the included research reports have the same probability of being selected, and the screening and selection processes should be transparent and reproducible. To ensure that the groups compared are similar in all characteristics, matching, random sampling or stratified sampling can be used. Statistical adjustments for between-group differences can also be applied at the analysis stage. Finally, duplicate data extraction can reduce errors in assessment of exposures or outcomes.

Q: Should I justify a sample size?

A: In all instances where one is not using the target population (i.e. the group to which inferences from the research report are directed) [ 39 ], a sample size justification is good practice. The sample size justification may take the form of a description of what is expected to be achieved with the number of articles selected, or a formal sample size estimation that outlines the number of articles required to answer the research question with a certain precision and power. Sample size justifications in methodological studies are reasonable in the following instances:

  • Comparing two groups
  • Determining a proportion, mean or another quantifier
  • Determining factors associated with an outcome using regression-based analyses

For example, El Dib et al. computed a sample size requirement for a methodological study of diagnostic strategies in randomized trials, based on a confidence interval approach [ 40 ].
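
A minimal sketch of such a confidence-interval-based calculation for a single proportion (for example, the proportion of trials reporting a given item) is shown below; the anticipated proportion, margin of error and confidence level are illustrative assumptions rather than values from the cited study.

import math
from statistics import NormalDist

def n_for_proportion(p_expected, margin, confidence=0.95):
    """Number of research reports needed to estimate a proportion
    to within +/- `margin` at the given confidence level."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # about 1.96 for 95%
    return math.ceil((z ** 2) * p_expected * (1 - p_expected) / margin ** 2)

# Example: expect roughly 30% of trials to report the item, aim for a 95% CI of +/- 5 points
print(n_for_proportion(p_expected=0.30, margin=0.05))  # 323 reports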

Q: What should I call my study?

A: Other terms which have been used to describe/label methodological studies include “methodological review”, “methodological survey”, “meta-epidemiological study”, “systematic review”, “systematic survey”, “meta-research”, “research-on-research” and many others. We recommend that the study nomenclature be clear, unambiguous, informative and allow for appropriate indexing. Methodological study nomenclature that should be avoided includes “systematic review” – as this will likely be confused with a systematic review of a clinical question. “Systematic survey” may also lead to confusion about whether the survey was systematic (i.e. using a preplanned methodology) or a survey using “systematic” sampling (i.e. a sampling approach using specific intervals to determine who is selected) [ 32 ]. Any of the above meanings of the word “systematic” may be true for methodological studies and could be potentially misleading. “Meta-epidemiological study” is ideal for indexing, but not very informative as it describes an entire field. The term “review” may point towards an appraisal or “review” of the design, conduct, analysis or reporting (or methodological components) of the targeted research reports, yet it has also been used to describe narrative reviews [ 41 , 42 ]. The term “survey” is also in line with the approaches used in many methodological studies [ 9 ], and would be indicative of the sampling procedures of this study design. However, in the absence of guidelines on nomenclature, the term “methodological study” is broad enough to capture most of the scenarios of such studies.

Q: Should I account for clustering in my methodological study?

A: Data from methodological studies are often clustered. For example, articles coming from a specific source may have different reporting standards (e.g. the Cochrane Library). Articles within the same journal may be similar due to editorial practices and policies, reporting requirements and endorsement of guidelines. There is emerging evidence that these are real concerns that should be accounted for in analyses [ 43 ]. Some cluster variables are described in the section: “What variables are relevant to methodological studies?”

A variety of modelling approaches can be used to account for correlated data, including the use of marginal, fixed or mixed effects regression models with appropriate computation of standard errors [ 44 ]. For example, Kosa et al. used generalized estimation equations to account for correlation of articles within journals [ 15 ]. Not accounting for clustering could lead to incorrect p -values, unduly narrow confidence intervals, and biased estimates [ 45 ].
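
A minimal sketch of a GEE analysis along these lines (in the spirit of the Kosa et al. example, but with made-up variable and file names) is shown below; it assumes an extraction sheet with one row per article and journal as the cluster variable.

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical extraction sheet: one row per article
# Assumed columns: journal, adequate_reporting (0/1), industry_funded (0/1), pub_year
df = pd.read_csv("extraction_sheet.csv")

model = smf.gee(
    "adequate_reporting ~ industry_funded + pub_year",
    groups="journal",                        # articles are clustered within journals
    data=df,
    family=sm.families.Binomial(),           # binary outcome
    cov_struct=sm.cov_struct.Exchangeable(), # working correlation within journals
)
result = model.fit()
print(result.summary())  # robust (sandwich) standard errors account for the clustering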

Q: Should I extract data in duplicate?

A: Yes. Duplicate data extraction takes more time but results in fewer errors [ 19 ]. Data extraction errors in turn affect the effect estimate [ 46 ], and therefore should be mitigated. Duplicate data extraction should be considered in the absence of other approaches to minimize extraction errors. However, much like systematic reviews, this area will likely see rapid new advances with machine learning and natural language processing technologies to support researchers with screening and data extraction [ 47 , 48 ]. Nonetheless, experience plays an important role in the quality of extracted data, and inexperienced extractors should be paired with experienced extractors [ 46 , 49 ].

Q: Should I assess the risk of bias of research reports included in my methodological study?

A: Risk of bias is most useful in determining the certainty that can be placed in the effect measure from a study. In methodological studies, risk of bias may not serve the purpose of determining the trustworthiness of results, as effect measures are often not the primary goal of methodological studies. Determining risk of bias in methodological studies is likely a practice borrowed from systematic review methodology, but whose intrinsic value is not obvious in methodological studies. When it is part of the research question, investigators often focus on one aspect of risk of bias. For example, Speich investigated how blinding was reported in surgical trials [ 50 ], and Abraha et al. investigated the application of intention-to-treat analyses in systematic reviews and trials [ 51 ].

Q: What variables are relevant to methodological studies?

A: There is empirical evidence that certain variables may inform the findings in a methodological study. We outline some of these and provide a brief overview below:

Country: Countries and regions differ in their research cultures, and the resources available to conduct research. Therefore, it is reasonable to believe that there may be differences in methodological features across countries. Methodological studies have reported loco-regional differences in reporting quality [ 52 , 53 ]. This may also be related to challenges non-English speakers face in publishing papers in English.

Authors’ expertise: The inclusion of authors with expertise in research methodology, biostatistics, and scientific writing is likely to influence the end-product. Oltean et al. found that among randomized trials in orthopaedic surgery, the use of analyses that accounted for clustering was more likely when specialists (e.g. statistician, epidemiologist or clinical trials methodologist) were included on the study team [ 54 ]. Fleming et al. found that including methodologists in the review team was associated with appropriate use of reporting guidelines [ 55 ].

Source of funding and conflicts of interest: Some studies have found that funded studies report better [ 56 , 57 ], while others do not [ 53 , 58 ]. The presence of funding would indicate the availability of resources deployed to ensure optimal design, conduct, analysis and reporting. However, the source of funding may introduce conflicts of interest and warrant assessment. For example, Kaiser et al. investigated the effect of industry funding on obesity or nutrition randomized trials and found that reporting quality was similar [ 59 ]. Thomas et al. looked at reporting quality of long-term weight loss trials and found that industry funded studies were better [ 60 ]. Kan et al. examined the association between industry funding and “positive trials” (trials reporting a significant intervention effect) and found that industry funding was highly predictive of a positive trial [ 61 ]. This finding is similar to that of a recent Cochrane Methodology Review by Hansen et al. [ 62 ]

Journal characteristics: Certain journals’ characteristics may influence the study design, analysis or reporting. Characteristics such as journal endorsement of guidelines [ 63 , 64 ], and Journal Impact Factor (JIF) have been shown to be associated with reporting [ 63 , 65 , 66 , 67 ].

Study size (sample size/number of sites): Some studies have shown that reporting is better in larger studies [ 53 , 56 , 58 ].

Year of publication: It is reasonable to assume that design, conduct, analysis and reporting of research will change over time. Many studies have demonstrated improvements in reporting over time or after the publication of reporting guidelines [ 68 , 69 ].

Type of intervention: In a methodological study of reporting quality of weight loss intervention studies, Thabane et al. found that trials of pharmacologic interventions were reported better than trials of non-pharmacologic interventions [ 70 ].

Interactions between variables: Complex interactions between the previously listed variables are possible. High income countries with more resources may be more likely to conduct larger studies and incorporate a variety of experts. Authors in certain countries may prefer certain journals, and journal endorsement of guidelines and editorial policies may change over time.

Q: Should I focus only on high impact journals?

A: Investigators may choose to investigate only high impact journals because they are more likely to influence practice and policy, or because they assume that methodological standards would be higher. However, the JIF may severely limit the scope of articles included and may skew the sample towards articles with positive findings. The generalizability and applicability of findings from a handful of journals must be examined carefully, especially since the JIF varies over time. Even among journals that are all “high impact”, variations exist in methodological standards.

Q: Can I conduct a methodological study of qualitative research?

A: Yes. Even though a lot of methodological research has been conducted in the quantitative research field, methodological studies of qualitative studies are feasible. Certain databases that catalogue qualitative research including the Cumulative Index to Nursing & Allied Health Literature (CINAHL) have defined subject headings that are specific to methodological research (e.g. “research methodology”). Alternatively, one could also conduct a qualitative methodological review; that is, use qualitative approaches to synthesize methodological issues in qualitative studies.

Q: What reporting guidelines should I use for my methodological study?

A: There is no guideline that covers the entire scope of methodological studies. One adaptation of the PRISMA guidelines has been published, which works well for studies that aim to use the entire target population of research reports [ 71 ]. However, it is not widely used (40 citations in 2 years as of 09 December 2019), and methodological studies that are designed as cross-sectional or before-after studies require a more fit-for-purpose guideline. A more encompassing reporting guideline for a broad range of methodological studies is currently under development [ 72 ]. However, in the absence of formal guidance, the requirements for scientific reporting should be respected, and authors of methodological studies should focus on transparency and reproducibility.

Q: What are the potential threats to validity and how can I avoid them?

A: Methodological studies may be compromised by a lack of internal or external validity. The main threats to internal validity in methodological studies are selection and confounding bias. Investigators must ensure that the methods used to select articles do not make them differ systematically from the set of articles to which they would like to make inferences. For example, attempting to make extrapolations to all journals after analyzing high-impact journals would be misleading.

Many factors (confounders) may distort the association between the exposure and outcome if the included research reports differ with respect to these factors [ 73 ]. For example, when examining the association between source of funding and completeness of reporting, it may be necessary to account for journals that endorse the guidelines. Confounding bias can be addressed by restriction, matching and statistical adjustment [ 73 ]. Restriction appears to be the method of choice for many investigators who choose to include only high impact journals or articles in a specific field. For example, Knol et al. examined the reporting of p -values in baseline tables of high impact journals [ 26 ]. Matching is also sometimes used. In the methodological study of non-randomized interventional studies of elective ventral hernia repair, Parker et al. matched prospective studies with retrospective studies and compared reporting standards [ 74 ]. Some other methodological studies use statistical adjustments. For example, Zhang et al. used regression techniques to determine the factors associated with missing participant data in trials [ 16 ].
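
As a brief sketch of statistical adjustment for such a confounder (again with assumed, illustrative variable names rather than any cited study's data), a logistic regression could include the confounder alongside the exposure of interest:

import pandas as pd
import statsmodels.formula.api as smf

# Assumed columns: complete_reporting (0/1), industry_funded (0/1), journal_endorses_guideline (0/1)
df = pd.read_csv("extraction_sheet.csv")

# Association between funding and complete reporting, adjusted for guideline endorsement
adjusted = smf.logit("complete_reporting ~ industry_funded + journal_endorses_guideline", data=df).fit()
print(adjusted.summary())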

With regard to external validity, researchers interested in conducting methodological studies must consider how generalizable or applicable their findings are. This should tie in closely with the research question and should be made explicit. For example, findings from methodological studies on trials published in high-impact cardiology journals cannot be assumed to apply to trials in other fields. Investigators must also ensure that their sample truly represents the target population, either by (a) conducting a comprehensive and exhaustive search, or (b) using an appropriate, justified and randomly selected sample of research reports.

Even applicability to high impact journals may vary based on the investigators’ definition, and over time. For example, for high impact journals in the field of general medicine, Bouwmeester et al. included the Annals of Internal Medicine (AIM), BMJ, the Journal of the American Medical Association (JAMA), Lancet, the New England Journal of Medicine (NEJM), and PLoS Medicine ( n  = 6) [ 75 ]. In contrast, the high impact journals selected in the methodological study by Schiller et al. were BMJ, JAMA, Lancet, and NEJM ( n  = 4) [ 76 ]. Another methodological study by Kosa et al. included AIM, BMJ, JAMA, Lancet and NEJM ( n  = 5). In the methodological study by Thabut et al., journals with a JIF greater than 5 were considered to be high impact. Riado Minguez et al. used first quartile journals in the Journal Citation Reports (JCR) for a specific year to determine “high impact” [ 77 ]. Ultimately, the definition of high impact will be based on the number of journals the investigators are willing to include, the year of impact and the JIF cut-off [ 78 ]. We acknowledge that the term “generalizability” may apply differently for methodological studies, especially when in many instances it is possible to include the entire target population in the sample studied.
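
The following minimal sketch shows how two of these operational definitions of “high impact” (a fixed JIF cut-off versus the top quartile of the JIF distribution) can select different journal samples. The journal names and JIF values are invented for illustration.

```python
import pandas as pd

# Hypothetical journal list with invented impact factors
journals = pd.DataFrame({
    "journal": ["Journal A", "Journal B", "Journal C", "Journal D", "Journal E", "Journal F"],
    "jif": [72.4, 19.8, 6.1, 4.9, 3.2, 1.7],
})

# Definition 1: fixed JIF cut-off (e.g. JIF greater than 5)
high_impact_cutoff = journals[journals["jif"] > 5]

# Definition 2: top quartile of the JIF distribution for that year
q3 = journals["jif"].quantile(0.75)
high_impact_quartile = journals[journals["jif"] >= q3]

print(high_impact_cutoff["journal"].tolist())
print(high_impact_quartile["journal"].tolist())
```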

Finally, methodological studies are not exempt from information bias, which may stem from discrepancies in the included research reports [ 79 ], errors in data extraction, or inappropriate interpretation of the information extracted. Likewise, publication bias may also be a concern in methodological studies, but such concepts have not yet been explored.

A proposed framework

In order to inform discussions about methodological studies and the development of guidance for what should be reported, we have outlined some key features of methodological studies that can be used to classify them. For each of the categories outlined below, we provide an example. In our experience, the choice of approach to completing a methodological study can be informed by asking the following four questions:

What is the aim?

Methodological studies that investigate bias

A methodological study may focus on exploring sources of bias in primary or secondary studies (meta-bias), or on how bias is analyzed. We have taken care to distinguish bias (i.e. systematic deviations from the truth irrespective of the source) from reporting quality or completeness (i.e. not adhering to a specific reporting guideline or norm). An example of where this distinction matters is a randomized trial with no blinding. Such a study (depending on the nature of the intervention) would be at risk of performance bias; however, if the authors report that their study was not blinded, they have reported adequately. In fact, some methodological studies attempt to capture both “quality of conduct” and “quality of reporting”, such as Ritchie et al., who reported on the risk of bias in randomized trials of pharmacy practice interventions [ 80 ]. Babic et al. investigated how risk of bias was used to inform sensitivity analyses in Cochrane reviews [ 81 ]. Further, biases related to the choice of outcomes can also be explored. For example, Tan et al. investigated differences in treatment effect size based on the outcome reported [ 82 ].

Methodological studies that investigate quality (or completeness) of reporting

Methodological studies may report quality of reporting against a reporting checklist (i.e. adherence to guidelines) or against expected norms. For example, Croitoru et al. reported on the quality of reporting in systematic reviews published in dermatology journals based on their adherence to the PRISMA statement [ 83 ], and Khan et al. described the quality of reporting of harms in randomized controlled trials published in high-impact cardiovascular journals based on the CONSORT extension for harms [ 84 ]. Other methodological studies investigate reporting of certain features of interest that may not be part of formally published checklists or guidelines. For example, Mbuagbaw et al. described how often the implications for research are elaborated using the Evidence, Participants, Intervention, Comparison, Outcome, Timeframe (EPICOT) format [ 30 ].
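
In practice, this kind of assessment is often operationalized as an adherence matrix: one row per article, one column per checklist item, scored 1 if the item is adequately reported. A minimal sketch, using an invented three-item checklist, is shown below.

```python
import pandas as pd

# Hypothetical adherence matrix: one row per article, one column per checklist item
# (1 = item adequately reported, 0 = not reported)
adherence = pd.DataFrame(
    {
        "item_title_identifies_design": [1, 1, 0, 1],
        "item_harms_defined": [1, 0, 0, 1],
        "item_flow_diagram": [0, 1, 0, 1],
    },
    index=["article_1", "article_2", "article_3", "article_4"],
)

# Per-article adherence (% of items reported) and per-item adherence across articles
per_article = adherence.mean(axis=1) * 100
per_item = adherence.mean(axis=0) * 100

print(per_article.round(1))
print(per_item.round(1))
```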

Methodological studies that investigate the consistency of reporting

Sometimes investigators may be interested in how consistent reports of the same research are, as it is expected that there should be consistency between: conference abstracts and published manuscripts; manuscript abstracts and manuscript main text; and trial registration and published manuscript. For example, Rosmarakis et al. investigated consistency between conference abstracts and full text manuscripts [ 85 ].
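
Consistency can be quantified by pairing the classification of a result in one report (e.g. the conference abstract) with its classification in another (e.g. the full text) and summarizing percent agreement, sometimes with a chance-corrected statistic such as Cohen's kappa. The paired labels below are invented for illustration.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical paired classifications of the primary result ("positive"/"negative")
# as stated in the conference abstract versus the full-text manuscript
abstract_result = ["positive", "positive", "negative", "positive", "negative", "positive"]
fulltext_result = ["positive", "negative", "negative", "positive", "negative", "negative"]

agreement = sum(a == f for a, f in zip(abstract_result, fulltext_result)) / len(abstract_result)
kappa = cohen_kappa_score(abstract_result, fulltext_result)

print(f"percent agreement: {agreement:.0%}")
print(f"Cohen's kappa: {kappa:.2f}")
```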

Methodological studies that investigate factors associated with reporting

In addition to identifying issues with reporting in primary and secondary studies, authors of methodological studies may be interested in determining the factors that are associated with certain reporting practices. Many methodological studies incorporate this, albeit as a secondary outcome. For example, Farrokhyar et al. investigated the factors associated with reporting quality in randomized trials of coronary artery bypass grafting surgery [ 53 ].

Methodological studies that investigate methods

Methodological studies may also be used to describe methods or compare methods, and the factors associated with methods. For example, Mueller et al. described the methods used for systematic reviews and meta-analyses of observational studies [ 86 ].

Methodological studies that summarize other methodological studies

Some methodological studies synthesize results from other methodological studies. For example, Li et al. conducted a scoping review of methodological reviews that investigated consistency between full text and abstracts in primary biomedical research [ 87 ].

Methodological studies that investigate nomenclature and terminology

Some methodological studies may investigate the use of names and terms in health research. For example, Martinic et al. investigated the definitions of systematic reviews used in overviews of systematic reviews (OSRs), meta-epidemiological studies and epidemiology textbooks [ 88 ].

Other types of methodological studies

In addition to the previously mentioned experimental methodological studies, there may exist other types of methodological studies not captured here.

What is the design?

Methodological studies that are descriptive

Most methodological studies are purely descriptive and report their findings as counts (percent) and means (standard deviation) or medians (interquartile range). For example, Mbuagbaw et al. described the reporting of research recommendations in Cochrane HIV systematic reviews [ 30 ]. Gohari et al. described the quality of reporting of randomized trials in diabetes in Iran [ 12 ].
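
Such descriptive results are simple tabulations. The sketch below computes counts (percent) for a categorical reporting item and a median (interquartile range) for a continuous one, using an invented dataset of systematic reviews.

```python
import pandas as pd

# Hypothetical dataset of included systematic reviews
reviews = pd.DataFrame({
    "reports_research_recommendations": ["yes", "no", "yes", "yes", "no", "yes", "no"],
    "number_of_included_trials": [12, 5, 33, 8, 21, 4, 17],
})

# Counts (percent) for a categorical item
counts = reviews["reports_research_recommendations"].value_counts()
percents = reviews["reports_research_recommendations"].value_counts(normalize=True) * 100

# Median (interquartile range) for a continuous item
median = reviews["number_of_included_trials"].median()
q1, q3 = reviews["number_of_included_trials"].quantile([0.25, 0.75])

print(counts.to_dict(), percents.round(1).to_dict())
print(f"median (IQR): {median} ({q1}-{q3})")
```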

Methodological studies that are analytical

Some methodological studies are analytical wherein “analytical studies identify and quantify associations, test hypotheses, identify causes and determine whether an association exists between variables, such as between an exposure and a disease.” [ 89 ] In the case of methodological studies all these investigations are possible. For example, Kosa et al. investigated the association between agreement in primary outcome from trial registry to published manuscript and study covariates. They found that larger and more recent studies were more likely to have agreement [ 15 ]. Tricco et al. compared the conclusion statements from Cochrane and non-Cochrane systematic reviews with a meta-analysis of the primary outcome and found that non-Cochrane reviews were more likely to report positive findings. These results are a test of the null hypothesis that the proportions of Cochrane and non-Cochrane reviews that report positive results are equal [ 90 ].
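
As a sketch of the analytical case, the code below tests whether the proportion of reviews with positive conclusion statements differs between two groups of reviews using a chi-square test on a 2 × 2 table. The counts are invented and do not reproduce the cited study.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: rows = review type, columns = conclusion statement
#                      positive  not positive
table = [
    [45, 55],   # non-Cochrane reviews
    [25, 75],   # Cochrane reviews
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p_value:.4f}")
```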

What is the sampling strategy?

Methodological studies that include the target population

Methodological reviews with narrow research questions may be able to include the entire target population. For example, in the methodological study of Cochrane HIV systematic reviews, Mbuagbaw et al. included all of the available studies ( n  = 103) [ 30 ].

Methodological studies that include a sample of the target population

Many methodological studies use random samples of the target population [ 33 , 91 , 92 ]. Alternatively, purposeful sampling may be used, limiting the sample to a subset of research-related reports published within a certain time period, in journals with a certain ranking, or on a particular topic. Systematic sampling can also be used when random sampling is challenging to implement.
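
The sketch below illustrates the two probability-based options: a simple random sample and a systematic sample drawn from a hypothetical sampling frame of eligible research reports.

```python
import random

# Hypothetical sampling frame of eligible research reports (e.g. record identifiers)
records = [f"PMID_{i:05d}" for i in range(1, 501)]

# Simple random sample of 50 reports
rng = random.Random(42)
random_sample = rng.sample(records, k=50)

# Systematic sample: every k-th report after a random start
k = len(records) // 50
start = rng.randrange(k)
systematic_sample = records[start::k]

print(len(random_sample), random_sample[:3])
print(len(systematic_sample), systematic_sample[:3])
```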

What is the unit of analysis?

Methodological studies with a research report as the unit of analysis

Many methodological studies use a research report (e.g. full manuscript of study, abstract portion of the study) as the unit of analysis, and inferences can be made at the study-level. However, both published and unpublished research-related reports can be studied. These may include articles, conference abstracts, registry entries etc.

Methodological studies with a design, analysis or reporting item as the unit of analysis

Some methodological studies report on items which may occur more than once per article. For example, Paquette et al. report on subgroup analyses in Cochrane reviews of atrial fibrillation in which 17 systematic reviews planned 56 subgroup analyses [ 93 ].
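
When the unit of analysis can recur within an article, the data are naturally stored in long format, with one row per item, and can then be summarized at either level. The subgroup-analysis records below are invented for illustration.

```python
import pandas as pd

# Hypothetical long-format data: one row per planned subgroup analysis
subgroups = pd.DataFrame({
    "review_id": ["SR1", "SR1", "SR1", "SR2", "SR3", "SR3"],
    "subgroup_variable": ["age", "sex", "dose", "age", "sex", "comorbidity"],
    "reported_in_results": [True, True, False, True, False, True],
})

# Item-level summary: proportion of planned subgroup analyses actually reported
print(subgroups["reported_in_results"].mean())

# Article-level summary: number of planned subgroup analyses per review
print(subgroups.groupby("review_id").size())
```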

This framework is outlined in Fig.  2 .

Fig. 2 A proposed framework for methodological studies

Conclusions

Methodological studies have examined different aspects of reporting such as quality, completeness, consistency and adherence to reporting guidelines. As such, many of the methodological study examples cited in this tutorial are related to reporting. However, as an evolving field, the scope of research questions that can be addressed by methodological studies is expected to increase.

In this paper we have outlined the scope and purpose of methodological studies, along with examples of instances in which various approaches have been used. In the absence of formal guidance on the design, conduct, analysis and reporting of methodological studies, we have provided some advice to help make methodological studies consistent. This advice is grounded in good contemporary scientific practice. Generally, the research question should tie in with the sampling approach and planned analysis. We have also highlighted the variables that may inform findings from methodological studies. Lastly, we have provided suggestions for ways in which authors can categorize their methodological studies to inform their design and analysis.

Availability of data and materials

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

Abbreviations

CONSORT: Consolidated Standards of Reporting Trials

EPICOT: Evidence, Participants, Intervention, Comparison, Outcome, Timeframe

GRADE: Grading of Recommendations, Assessment, Development and Evaluations

PICOT: Participants, Intervention, Comparison, Outcome, Timeframe

PRISMA: Preferred Reporting Items for Systematic Reviews and Meta-Analyses

SWAR: Studies Within a Review

SWAT: Studies Within a Trial

References

Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet. 2009;374(9683):86–9.

Chan AW, Song F, Vickers A, Jefferson T, Dickersin K, Gotzsche PC, Krumholz HM, Ghersi D, van der Worp HB. Increasing value and reducing waste: addressing inaccessible research. Lancet. 2014;383(9913):257–66.

Ioannidis JP, Greenland S, Hlatky MA, Khoury MJ, Macleod MR, Moher D, Schulz KF, Tibshirani R. Increasing value and reducing waste in research design, conduct, and analysis. Lancet. 2014;383(9912):166–75.

Higgins JP, Altman DG, Gotzsche PC, Juni P, Moher D, Oxman AD, Savovic J, Schulz KF, Weeks L, Sterne JA. The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ. 2011;343:d5928.

Moher D, Schulz KF, Altman DG. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet. 2001;357.

Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, Ioannidis JP, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Med. 2009;6(7):e1000100.

Shea BJ, Hamel C, Wells GA, Bouter LM, Kristjansson E, Grimshaw J, Henry DA, Boers M. AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews. J Clin Epidemiol. 2009;62(10):1013–20.

Shea BJ, Reeves BC, Wells G, Thuku M, Hamel C, Moran J, Moher D, Tugwell P, Welch V, Kristjansson E, et al. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. Bmj. 2017;358:j4008.

Lawson DO, Leenus A, Mbuagbaw L. Mapping the nomenclature, methodology, and reporting of studies that review methods: a pilot methodological review. Pilot Feasibility Studies. 2020;6(1):13.

Puljak L, Makaric ZL, Buljan I, Pieper D. What is a meta-epidemiological study? Analysis of published literature indicated heterogeneous study designs and definitions. J Comp Eff Res. 2020.

Abbade LPF, Wang M, Sriganesh K, Jin Y, Mbuagbaw L, Thabane L. The framing of research questions using the PICOT format in randomized controlled trials of venous ulcer disease is suboptimal: a systematic survey. Wound Repair Regen. 2017;25(5):892–900.

Gohari F, Baradaran HR, Tabatabaee M, Anijidani S, Mohammadpour Touserkani F, Atlasi R, Razmgir M. Quality of reporting randomized controlled trials (RCTs) in diabetes in Iran; a systematic review. J Diabetes Metab Disord. 2015;15(1):36.

Wang M, Jin Y, Hu ZJ, Thabane A, Dennis B, Gajic-Veljanoski O, Paul J, Thabane L. The reporting quality of abstracts of stepped wedge randomized trials is suboptimal: a systematic survey of the literature. Contemp Clin Trials Commun. 2017;8:1–10.

Shanthanna H, Kaushal A, Mbuagbaw L, Couban R, Busse J, Thabane L. A cross-sectional study of the reporting quality of pilot or feasibility trials in high-impact anesthesia journals. Can J Anaesth. 2018;65(11):1180–1195.

Kosa SD, Mbuagbaw L, Borg Debono V, Bhandari M, Dennis BB, Ene G, Leenus A, Shi D, Thabane M, Valvasori S, et al. Agreement in reporting between trial publications and current clinical trial registry in high impact journals: a methodological review. Contemporary Clinical Trials. 2018;65:144–50.

Zhang Y, Florez ID, Colunga Lozano LE, Aloweni FAB, Kennedy SA, Li A, Craigie S, Zhang S, Agarwal A, Lopes LC, et al. A systematic survey on reporting and methods for handling missing participant data for continuous outcomes in randomized controlled trials. J Clin Epidemiol. 2017;88:57–66.

Hernández AV, Boersma E, Murray GD, Habbema JD, Steyerberg EW. Subgroup analyses in therapeutic cardiovascular clinical trials: are most of them misleading? Am Heart J. 2006;151(2):257–64.

Samaan Z, Mbuagbaw L, Kosa D, Borg Debono V, Dillenburg R, Zhang S, Fruci V, Dennis B, Bawor M, Thabane L. A systematic scoping review of adherence to reporting guidelines in health care literature. J Multidiscip Healthc. 2013;6:169–88.

Buscemi N, Hartling L, Vandermeer B, Tjosvold L, Klassen TP. Single data extraction generated more errors than double data extraction in systematic reviews. J Clin Epidemiol. 2006;59(7):697–703.

Carrasco-Labra A, Brignardello-Petersen R, Santesso N, Neumann I, Mustafa RA, Mbuagbaw L, Etxeandia Ikobaltzeta I, De Stio C, McCullagh LJ, Alonso-Coello P. Improving GRADE evidence tables part 1: a randomized trial shows improved understanding of content in summary-of-findings tables with a new format. J Clin Epidemiol. 2016;74:7–18.

The Northern Ireland Hub for Trials Methodology Research: SWAT/SWAR Information [ https://www.qub.ac.uk/sites/TheNorthernIrelandNetworkforTrialsMethodologyResearch/SWATSWARInformation/ ]. Accessed 31 Aug 2020.

Chick S, Sánchez P, Ferrin D, Morrice D. How to conduct a successful simulation study. In: Proceedings of the 2003 winter simulation conference: 2003; 2003. p. 66–70.

Mulrow CD. The medical review article: state of the science. Ann Intern Med. 1987;106(3):485–8.

Sacks HS, Reitman D, Pagano D, Kupelnick B. Meta-analysis: an update. Mount Sinai J Med New York. 1996;63(3–4):216–24.

Areia M, Soares M, Dinis-Ribeiro M. Quality reporting of endoscopic diagnostic studies in gastrointestinal journals: where do we stand on the use of the STARD and CONSORT statements? Endoscopy. 2010;42(2):138–47.

Knol M, Groenwold R, Grobbee D. P-values in baseline tables of randomised controlled trials are inappropriate but still common in high impact journals. Eur J Prev Cardiol. 2012;19(2):231–2.

Chen M, Cui J, Zhang AL, Sze DM, Xue CC, May BH. Adherence to CONSORT items in randomized controlled trials of integrative medicine for colorectal Cancer published in Chinese journals. J Altern Complement Med. 2018;24(2):115–24.

Hopewell S, Ravaud P, Baron G, Boutron I. Effect of editors' implementation of CONSORT guidelines on the reporting of abstracts in high impact medical journals: interrupted time series analysis. BMJ. 2012;344:e4178.

The Cochrane Methodology Register Issue 2 2009 [ https://cmr.cochrane.org/help.htm ]. Accessed 31 Aug 2020.

Mbuagbaw L, Kredo T, Welch V, Mursleen S, Ross S, Zani B, Motaze NV, Quinlan L. Critical EPICOT items were absent in Cochrane human immunodeficiency virus systematic reviews: a bibliometric analysis. J Clin Epidemiol. 2016;74:66–72.

Barton S, Peckitt C, Sclafani F, Cunningham D, Chau I. The influence of industry sponsorship on the reporting of subgroup analyses within phase III randomised controlled trials in gastrointestinal oncology. Eur J Cancer. 2015;51(18):2732–9.

Setia MS. Methodology series module 5: sampling strategies. Indian J Dermatol. 2016;61(5):505–9.

Wilson B, Burnett P, Moher D, Altman DG, Al-Shahi Salman R. Completeness of reporting of randomised controlled trials including people with transient ischaemic attack or stroke: a systematic review. Eur Stroke J. 2018;3(4):337–46.

Kahale LA, Diab B, Brignardello-Petersen R, Agarwal A, Mustafa RA, Kwong J, Neumann I, Li L, Lopes LC, Briel M, et al. Systematic reviews do not adequately report or address missing outcome data in their analyses: a methodological survey. J Clin Epidemiol. 2018;99:14–23.

De Angelis CD, Drazen JM, Frizelle FA, Haug C, Hoey J, Horton R, Kotzin S, Laine C, Marusic A, Overbeke AJPM, et al. Is this clinical trial fully registered?: a statement from the International Committee of Medical Journal Editors*. Ann Intern Med. 2005;143(2):146–8.

Ohtake PJ, Childs JD. Why publish study protocols? Phys Ther. 2014;94(9):1208–9.

Rombey T, Allers K, Mathes T, Hoffmann F, Pieper D. A descriptive analysis of the characteristics and the peer review process of systematic review protocols published in an open peer review journal from 2012 to 2017. BMC Med Res Methodol. 2019;19(1):57.

Grimes DA, Schulz KF. Bias and causal associations in observational research. Lancet. 2002;359(9302):248–52.

Porta M (ed.): A dictionary of epidemiology, 5th edn. Oxford: Oxford University Press, Inc.; 2008.

El Dib R, Tikkinen KAO, Akl EA, Gomaa HA, Mustafa RA, Agarwal A, Carpenter CR, Zhang Y, Jorge EC, Almeida R, et al. Systematic survey of randomized trials evaluating the impact of alternative diagnostic strategies on patient-important outcomes. J Clin Epidemiol. 2017;84:61–9.

Helzer JE, Robins LN, Taibleson M, Woodruff RA Jr, Reich T, Wish ED. Reliability of psychiatric diagnosis. I. a methodological review. Arch Gen Psychiatry. 1977;34(2):129–33.

Chung ST, Chacko SK, Sunehag AL, Haymond MW. Measurements of gluconeogenesis and Glycogenolysis: a methodological review. Diabetes. 2015;64(12):3996–4010.

Sterne JA, Juni P, Schulz KF, Altman DG, Bartlett C, Egger M. Statistical methods for assessing the influence of study characteristics on treatment effects in 'meta-epidemiological' research. Stat Med. 2002;21(11):1513–24.

Moen EL, Fricano-Kugler CJ, Luikart BW, O’Malley AJ. Analyzing clustered data: why and how to account for multiple observations nested within a study participant? PLoS One. 2016;11(1):e0146721.

Zyzanski SJ, Flocke SA, Dickinson LM. On the nature and analysis of clustered data. Ann Fam Med. 2004;2(3):199–200.

Mathes T, Klassen P, Pieper D. Frequency of data extraction errors and methods to increase data extraction quality: a methodological review. BMC Med Res Methodol. 2017;17(1):152.

Bui DDA, Del Fiol G, Hurdle JF, Jonnalagadda S. Extractive text summarization system to aid data extraction from full text in systematic review development. J Biomed Inform. 2016;64:265–72.

Bui DD, Del Fiol G, Jonnalagadda S. PDF text classification to leverage information extraction from publication reports. J Biomed Inform. 2016;61:141–8.

Maticic K, Krnic Martinic M, Puljak L. Assessment of reporting quality of abstracts of systematic reviews with meta-analysis using PRISMA-A and discordance in assessments between raters without prior experience. BMC Med Res Methodol. 2019;19(1):32.

Speich B. Blinding in surgical randomized clinical trials in 2015. Ann Surg. 2017;266(1):21–2.

Abraha I, Cozzolino F, Orso M, Marchesi M, Germani A, Lombardo G, Eusebi P, De Florio R, Luchetta ML, Iorio A, et al. A systematic review found that deviations from intention-to-treat are common in randomized trials and systematic reviews. J Clin Epidemiol. 2017;84:37–46.

Zhong Y, Zhou W, Jiang H, Fan T, Diao X, Yang H, Min J, Wang G, Fu J, Mao B. Quality of reporting of two-group parallel randomized controlled clinical trials of multi-herb formulae: A survey of reports indexed in the Science Citation Index Expanded. Eur J Integrative Med. 2011;3(4):e309–16.

Farrokhyar F, Chu R, Whitlock R, Thabane L. A systematic review of the quality of publications reporting coronary artery bypass grafting trials. Can J Surg. 2007;50(4):266–77.

Oltean H, Gagnier JJ. Use of clustering analysis in randomized controlled trials in orthopaedic surgery. BMC Med Res Methodol. 2015;15:17.

Fleming PS, Koletsi D, Pandis N. Blinded by PRISMA: are systematic reviewers focusing on PRISMA and ignoring other guidelines? PLoS One. 2014;9(5):e96407.

Balasubramanian SP, Wiener M, Alshameeri Z, Tiruvoipati R, Elbourne D, Reed MW. Standards of reporting of randomized controlled trials in general surgery: can we do better? Ann Surg. 2006;244(5):663–7.

de Vries TW, van Roon EN. Low quality of reporting adverse drug reactions in paediatric randomised controlled trials. Arch Dis Child. 2010;95(12):1023–6.

Borg Debono V, Zhang S, Ye C, Paul J, Arya A, Hurlburt L, Murthy Y, Thabane L. The quality of reporting of RCTs used within a postoperative pain management meta-analysis, using the CONSORT statement. BMC Anesthesiol. 2012;12:13.

Kaiser KA, Cofield SS, Fontaine KR, Glasser SP, Thabane L, Chu R, Ambrale S, Dwary AD, Kumar A, Nayyar G, et al. Is funding source related to study reporting quality in obesity or nutrition randomized control trials in top-tier medical journals? Int J Obes. 2012;36(7):977–81.

Thomas O, Thabane L, Douketis J, Chu R, Westfall AO, Allison DB. Industry funding and the reporting quality of large long-term weight loss trials. Int J Obes. 2008;32(10):1531–6.

Khan NR, Saad H, Oravec CS, Rossi N, Nguyen V, Venable GT, Lillard JC, Patel P, Taylor DR, Vaughn BN, et al. A review of industry funding in randomized controlled trials published in the neurosurgical literature-the elephant in the room. Neurosurgery. 2018;83(5):890–7.

Hansen C, Lundh A, Rasmussen K, Hrobjartsson A. Financial conflicts of interest in systematic reviews: associations with results, conclusions, and methodological quality. Cochrane Database Syst Rev. 2019;8:Mr000047.

Kiehna EN, Starke RM, Pouratian N, Dumont AS. Standards for reporting randomized controlled trials in neurosurgery. J Neurosurg. 2011;114(2):280–5.

Liu LQ, Morris PJ, Pengel LH. Compliance to the CONSORT statement of randomized controlled trials in solid organ transplantation: a 3-year overview. Transpl Int. 2013;26(3):300–6.

Bala MM, Akl EA, Sun X, Bassler D, Mertz D, Mejza F, Vandvik PO, Malaga G, Johnston BC, Dahm P, et al. Randomized trials published in higher vs. lower impact journals differ in design, conduct, and analysis. J Clin Epidemiol. 2013;66(3):286–95.

Lee SY, Teoh PJ, Camm CF, Agha RA. Compliance of randomized controlled trials in trauma surgery with the CONSORT statement. J Trauma Acute Care Surg. 2013;75(4):562–72.

Ziogas DC, Zintzaras E. Analysis of the quality of reporting of randomized controlled trials in acute and chronic myeloid leukemia, and myelodysplastic syndromes as governed by the CONSORT statement. Ann Epidemiol. 2009;19(7):494–500.

Alvarez F, Meyer N, Gourraud PA, Paul C. CONSORT adoption and quality of reporting of randomized controlled trials: a systematic analysis in two dermatology journals. Br J Dermatol. 2009;161(5):1159–65.

Mbuagbaw L, Thabane M, Vanniyasingam T, Borg Debono V, Kosa S, Zhang S, Ye C, Parpia S, Dennis BB, Thabane L. Improvement in the quality of abstracts in major clinical journals since CONSORT extension for abstracts: a systematic review. Contemporary Clin trials. 2014;38(2):245–50.

Thabane L, Chu R, Cuddy K, Douketis J. What is the quality of reporting in weight loss intervention studies? A systematic review of randomized controlled trials. Int J Obes. 2007;31(10):1554–9.

Murad MH, Wang Z. Guidelines for reporting meta-epidemiological methodology research. Evidence Based Med. 2017;22(4):139.

METRIC - MEthodological sTudy ReportIng Checklist: guidelines for reporting methodological studies in health research [ http://www.equator-network.org/library/reporting-guidelines-under-development/reporting-guidelines-under-development-for-other-study-designs/#METRIC ]. Accessed 31 Aug 2020.

Jager KJ, Zoccali C, MacLeod A, Dekker FW. Confounding: what it is and how to deal with it. Kidney Int. 2008;73(3):256–60.

Parker SG, Halligan S, Erotocritou M, Wood CPJ, Boulton RW, Plumb AAO, Windsor ACJ, Mallett S. A systematic methodological review of non-randomised interventional studies of elective ventral hernia repair: clear definitions and a standardised minimum dataset are needed. Hernia. 2019.

Bouwmeester W, Zuithoff NPA, Mallett S, Geerlings MI, Vergouwe Y, Steyerberg EW, Altman DG, Moons KGM. Reporting and methods in clinical prediction research: a systematic review. PLoS Med. 2012;9(5):1–12.

Schiller P, Burchardi N, Niestroj M, Kieser M. Quality of reporting of clinical non-inferiority and equivalence randomised trials--update and extension. Trials. 2012;13:214.

Riado Minguez D, Kowalski M, Vallve Odena M, Longin Pontzen D, Jelicic Kadic A, Jeric M, Dosenovic S, Jakus D, Vrdoljak M, Poklepovic Pericic T, et al. Methodological and reporting quality of systematic reviews published in the highest ranking journals in the field of pain. Anesth Analg. 2017;125(4):1348–54.

Thabut G, Estellat C, Boutron I, Samama CM, Ravaud P. Methodological issues in trials assessing primary prophylaxis of venous thrombo-embolism. Eur Heart J. 2005;27(2):227–36.

Puljak L, Riva N, Parmelli E, González-Lorenzo M, Moja L, Pieper D. Data extraction methods: an analysis of internal reporting discrepancies in single manuscripts and practical advice. J Clin Epidemiol. 2020;117:158–64.

Ritchie A, Seubert L, Clifford R, Perry D, Bond C. Do randomised controlled trials relevant to pharmacy meet best practice standards for quality conduct and reporting? A systematic review. Int J Pharm Pract. 2019.

Babic A, Vuka I, Saric F, Proloscic I, Slapnicar E, Cavar J, Pericic TP, Pieper D, Puljak L. Overall bias methods and their use in sensitivity analysis of Cochrane reviews were not consistent. J Clin Epidemiol. 2019.

Tan A, Porcher R, Crequit P, Ravaud P, Dechartres A. Differences in treatment effect size between overall survival and progression-free survival in immunotherapy trials: a Meta-epidemiologic study of trials with results posted at ClinicalTrials.gov. J Clin Oncol. 2017;35(15):1686–94.

Croitoru D, Huang Y, Kurdina A, Chan AW, Drucker AM. Quality of reporting in systematic reviews published in dermatology journals. Br J Dermatol. 2020;182(6):1469–76.

Khan MS, Ochani RK, Shaikh A, Vaduganathan M, Khan SU, Fatima K, Yamani N, Mandrola J, Doukky R, Krasuski RA. Assessing the quality of reporting of harms in randomized controlled trials published in high impact cardiovascular journals. Eur Heart J Qual Care Clin Outcomes. 2019.

Rosmarakis ES, Soteriades ES, Vergidis PI, Kasiakou SK, Falagas ME. From conference abstract to full paper: differences between data presented in conferences and journals. FASEB J. 2005;19(7):673–80.

Mueller M, D’Addario M, Egger M, Cevallos M, Dekkers O, Mugglin C, Scott P. Methods to systematically review and meta-analyse observational studies: a systematic scoping review of recommendations. BMC Med Res Methodol. 2018;18(1):44.

Li G, Abbade LPF, Nwosu I, Jin Y, Leenus A, Maaz M, Wang M, Bhatt M, Zielinski L, Sanger N, et al. A scoping review of comparisons between abstracts and full reports in primary biomedical research. BMC Med Res Methodol. 2017;17(1):181.

Krnic Martinic M, Pieper D, Glatt A, Puljak L. Definition of a systematic review used in overviews of systematic reviews, meta-epidemiological studies and textbooks. BMC Med Res Methodol. 2019;19(1):203.

Analytical study [ https://medical-dictionary.thefreedictionary.com/analytical+study ]. Accessed 31 Aug 2020.

Tricco AC, Tetzlaff J, Pham B, Brehaut J, Moher D. Non-Cochrane vs. Cochrane reviews were twice as likely to have positive conclusion statements: cross-sectional study. J Clin Epidemiol. 2009;62(4):380–6 e381.

Schalken N, Rietbergen C. The reporting quality of systematic reviews and Meta-analyses in industrial and organizational psychology: a systematic review. Front Psychol. 2017;8:1395.

Ranker LR, Petersen JM, Fox MP. Awareness of and potential for dependent error in the observational epidemiologic literature: A review. Ann Epidemiol. 2019;36:15–9 e12.

Paquette M, Alotaibi AM, Nieuwlaat R, Santesso N, Mbuagbaw L. A meta-epidemiological study of subgroup analyses in cochrane systematic reviews of atrial fibrillation. Syst Rev. 2019;8(1):241.

Acknowledgements

This work did not receive any dedicated funding.

Author information

Authors and Affiliations

Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, ON, Canada

Lawrence Mbuagbaw, Daeria O. Lawson & Lehana Thabane

Biostatistics Unit/FSORC, 50 Charlton Avenue East, St Joseph’s Healthcare—Hamilton, 3rd Floor Martha Wing, Room H321, Hamilton, Ontario, L8N 4A6, Canada

Lawrence Mbuagbaw & Lehana Thabane

Centre for the Development of Best Practices in Health, Yaoundé, Cameroon

Lawrence Mbuagbaw

Center for Evidence-Based Medicine and Health Care, Catholic University of Croatia, Ilica 242, 10000, Zagreb, Croatia

Livia Puljak

Department of Epidemiology and Biostatistics, School of Public Health – Bloomington, Indiana University, Bloomington, IN, 47405, USA

David B. Allison

Departments of Paediatrics and Anaesthesia, McMaster University, Hamilton, ON, Canada

Lehana Thabane

Centre for Evaluation of Medicine, St. Joseph’s Healthcare-Hamilton, Hamilton, ON, Canada

Population Health Research Institute, Hamilton Health Sciences, Hamilton, ON, Canada

Contributions

LM conceived the idea and drafted the outline and paper. DOL and LT commented on the idea and draft outline. LM, LP and DOL performed literature searches and data extraction. All authors (LM, DOL, LT, LP, DBA) reviewed several draft versions of the manuscript and approved the final manuscript.

Corresponding author

Correspondence to Lawrence Mbuagbaw.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

DOL, DBA, LM, LP and LT are involved in the development of a reporting guideline for methodological studies.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Cite this article

Mbuagbaw, L., Lawson, D.O., Puljak, L. et al. A tutorial on methodological studies: the what, when, how and why. BMC Med Res Methodol 20 , 226 (2020). https://doi.org/10.1186/s12874-020-01107-7

Received: 27 May 2020

Accepted: 27 August 2020

Published: 07 September 2020

DOI: https://doi.org/10.1186/s12874-020-01107-7

Keywords

  • Methodological study
  • Meta-epidemiology
  • Research methods
  • Research-on-research



Methodology of a systematic review

Affiliations

  • 1 Hospital Universitario La Paz, Madrid, España. Electronic address: [email protected].
  • 2 Hospital Universitario Fundación Alcorcón, Madrid, España.
  • 3 Instituto Valenciano de Oncología, Valencia, España.
  • 4 Hospital Universitario de Cabueñes, Gijón, Asturias, España.
  • 5 Hospital Universitario Ramón y Cajal, Madrid, España.
  • 6 Hospital Universitario Gregorio Marañón, Madrid, España.
  • 7 Hospital Universitario de Canarias, Tenerife, España.
  • 8 Hospital Clínic, Barcelona, España; EAU Guidelines Office Board Member.
  • PMID: 29731270
  • DOI: 10.1016/j.acuro.2018.01.010

Context: The objective of evidence-based medicine is to employ the best scientific information available to apply to clinical practice. Understanding and interpreting the scientific evidence involves understanding the available levels of evidence, where systematic reviews and meta-analyses of clinical trials are at the top of the levels-of-evidence pyramid.

Acquisition of evidence: The review process should be well developed and planned to reduce biases and eliminate irrelevant and low-quality studies. The steps for implementing a systematic review include (i) correctly formulating the clinical question to answer (PICO), (ii) developing a protocol (inclusion and exclusion criteria), (iii) performing a detailed and broad literature search and (iv) screening the abstracts of the studies identified in the search and subsequently of the selected complete texts (PRISMA).
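
As a minimal sketch of step (iii), the snippet below assembles a Boolean search string by combining synonyms for each PICO concept with OR and joining the concepts with AND. The terms and the generic syntax are illustrative assumptions and would need to be adapted to each database's own fields and operators.

```python
# Hypothetical synonym blocks for two PICO concepts (population and intervention)
population_terms = ["overactive bladder", "urinary incontinence"]
intervention_terms = ["mirabegron", "beta-3 agonist"]

def or_block(terms):
    """Combine synonyms for one concept with OR."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

# Concepts are joined with AND to narrow the search
query = " AND ".join([or_block(population_terms), or_block(intervention_terms)])
print(query)
```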

Synthesis of the evidence: Once the studies have been selected, we need to (v) extract the necessary data into a form designed in the protocol to summarise the included studies, (vi) assess the biases of each study, identifying the quality of the available evidence, and (vii) develop tables and text that synthesise the evidence.

Conclusions: A systematic review involves a critical and reproducible summary of the results of the available publications on a particular topic or clinical question. To improve scientific writing, the methodology is shown in a structured manner to implement a systematic review.

Keywords: Meta-analysis; Methodology; Systematic review.

Copyright © 2018 AEU. Published by Elsevier España, S.L.U. All rights reserved.


What is a review article?

Learn how to write a review article.

A review article can also be called a literature review, or a review of the literature. It is a survey of previously published research on a topic and should give an overview of current thinking on that topic. Unlike an original research article, it does not present new experimental results.

The purpose of writing a literature review is to provide a critical evaluation of the data available from existing studies. Review articles can identify potential research areas to explore next, and sometimes they draw new conclusions from the existing data.

Why write a review article?

  • To provide a comprehensive foundation on a topic.
  • To explain the current state of knowledge.
  • To identify gaps in existing studies for potential future research.
  • To highlight the main methodologies and research techniques.

Did you know? 

There are some journals that only publish review articles, and others that do not accept them.

Make sure you check the  aims and scope  of the journal you’d like to publish in to find out if it’s the right place for your review article.

How to write a review article

Below are 8 key items to consider when you begin writing your review article.

Check the journal’s aims and scope

Make sure you have read the aims and scope for the journal you are submitting to and follow them closely. Different journals accept different types of articles and not all will accept review articles, so it’s important to check this before you start writing.

Define your scope

Define the scope of your review article and the research question you’ll be answering, making sure your article contributes something new to the field. 

As award-winning author Angus Crake told us, you’ll also need to “define the scope of your review so that it is manageable, not too large or small; it may be necessary to focus on recent advances if the field is well established.” 

Finding sources to evaluate

When finding sources to evaluate, Angus Crake says it’s critical that you “use multiple search engines/databases so you don’t miss any important ones.” 

For finding studies for a systematic review in medical sciences,  read advice from NCBI . 

Writing your title, abstract and keywords

Spend time writing an effective title, abstract and keywords. This will help maximize the visibility of your article online, making sure the right readers find your research. Your title and abstract should be clear, concise, accurate, and informative. 

For more information and guidance on getting these right, read our guide to writing a good abstract and title  and our  researcher’s guide to search engine optimization . 

Introduce the topic

Does a literature review need an introduction? Yes, always start with an overview of the topic and give some context, explaining why a review of the topic is necessary. Gather research to inform your introduction and make it broad enough to reach out to a large audience of non-specialists. This will help maximize its wider relevance and impact. 

Don’t make your introduction too long. Divide the review into sections of a suitable length to allow key points to be identified more easily.

Include critical discussion

Make sure you present a critical discussion, not just a descriptive summary of the topic. If there is contradictory research in your area of focus, make sure to include an element of debate and present both sides of the argument. You can also use your review paper to resolve conflict between contradictory studies.

What researchers say

Angus Crake, researcher

As part of your conclusion, make suggestions for future research on the topic. Focus on communicating what you understood and what unknowns still remain.

Use a critical friend

Always perform a final spell and grammar check of your article before submission. 

You may want to ask a critical friend or colleague to give their feedback before you submit. If English is not your first language, think about using a language-polishing service.

Find out more about how  Taylor & Francis Editing Services can help improve your manuscript before you submit.

What is the difference between a research article and a review article?

  • Viewpoint: a research article presents the viewpoint of the author, while a review article critiques the viewpoints of other authors on a particular topic.
  • Content: a research article presents new content, while a review article assesses already published content.
  • Length: a research article depends on the word limit provided by the journal you submit to, while a review article tends to be shorter but still needs to adhere to word limits.

Before you submit your review article…

Complete this checklist before you submit your review article:

Have you checked the journal’s aims and scope?

Have you defined the scope of your article?

Did you use multiple search engines to find sources to evaluate?

Have you written a descriptive title and abstract using keywords?

Did you start with an overview of the topic?

Have you presented a critical discussion?

Have you included future suggestions for research in your conclusion?

Have you asked a friend to do a final spell and grammar check?



Scholarly Articles: How can I tell?


Methodology


The methodology section, or methods section, tells you how the author(s) went about doing their research. It should let you know (a) what method they used to gather data (survey, interviews, experiments, etc.), (b) why they chose this method, and (c) what the limitations of this method are.

The methodology section should be detailed enough that another researcher could replicate the study described. When you read the methodology or methods section:

  • What kind of research method did the authors use? Is it an appropriate method for the type of study they are conducting?
  • How did the authors recruit their test subjects? What criteria did they use?
  • What contexts of the study may have affected the results (e.g. environmental conditions, lab conditions, timing of questions, etc.)?
  • Is the sample size representative of the larger population (i.e., was it big enough)?
  • Are the data collection instruments and procedures likely to have measured all the important characteristics with reasonable accuracy?
  • Does the data analysis appear to have been done with care, and were appropriate analytical techniques used?

A good researcher will always let you know about the limitations of his or her research.



Simulating learning methodology (SLeM): an approach to machine learning automation


Zongben Xu, Jun Shu, Deyu Meng, Simulating learning methodology (SLeM): an approach to machine learning automation, National Science Review , Volume 11, Issue 8, August 2024, nwae277, https://doi.org/10.1093/nsr/nwae277

Machine learning (ML) is a fundamental technology of artificial intelligence (AI) that focuses on searching for a possibly existing mapping $f:\mathcal{X}\rightarrow\mathcal{Y}$ to fit a given dataset $\mathcal{D}=\{(x_i,y_i)\}_{i=1}^{N}$, where each $(x,y)\in\mathcal{X}\times\mathcal{Y}\subset\mathbb{R}^d\times\mathbb{R}$. The traditional learning paradigm of ML research is to find a mapping $f^{*}$ from a predefined hypothesis space $\mathcal{F}=\{f_{\theta}:\mathcal{X}\rightarrow\mathcal{Y},\ \theta\in\Theta\}$ by solving the following equation, based on a given optimality criterion (i.e. a loss function $\ell:\mathcal{Y}\times\mathcal{Y}\rightarrow\mathbb{R}^{+}$):

$$f^{*}=\arg\min_{f_{\theta}\in\mathcal{F}}\ \mathbb{E}_{\mathcal{D}}\big[\ell(f_{\theta}(x),y)\big]+\lambda R(f_{\theta}). \qquad (1)$$

Here $\mathbb{E}_{\mathcal{D}}$ denotes expectation with respect to $\mathcal{D}$, $R(\cdot)$ is a regularizer that controls the property of the solution, $\lambda$ is its hyperparameter, and $\Theta$ is the set of parameters $\theta\in\mathbb{R}^{p}$.
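
As a concrete, if simplified, instance of this paradigm, the sketch below minimizes an empirical loss plus a regularizer by gradient descent for a linear model with squared loss and an L2 regularizer. The data, model form, step size and regularization weight are illustrative assumptions rather than choices made in the article.

```python
import numpy as np

# Hypothetical data: N samples in R^d with real-valued targets (assumed for illustration)
rng = np.random.default_rng(0)
N, d = 200, 5
X = rng.normal(size=(N, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=N)

lam = 0.1            # hyperparameter lambda of the regularizer R
theta = np.zeros(d)  # parameters of the linear hypothesis f_theta(x) = x . theta

# Gradient descent on  (1/N) * sum_i (f_theta(x_i) - y_i)^2 + lam * ||theta||^2
for _ in range(500):
    residual = X @ theta - y                       # f_theta(x_i) - y_i
    grad = (2 / N) * X.T @ residual + 2 * lam * theta
    theta -= 0.05 * grad

print("learned parameters:", np.round(theta, 3))
```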

Under such a learning paradigm, ML techniques like deep learning have revolutionized various fields of AI, e.g. computer vision, natural language processing and speech recognition, by effectively addressing complex problems that were once considered intractable. However, the effectiveness of ML relies heavily on some prerequisites imposed on ML's fundamental components before solving the aforementioned formulation. Some examples are as follows.

The independence hypothesis on the loss function. The loss function $\ell$ is preset before implementation, independent of the data distribution and the application problem.

The large capacity hypothesis on the hypothesis space. The hypothesis space $\mathcal{F}$ should have large enough capacity to contain the optimal solution to be found. It is preset independently of the application problem.

The completeness hypothesis on training data. Samples $(x,y)$ in the training dataset should be well labeled, have low noise, be class balanced and be of sufficient number.

The prior determination hypothesis on the regularizer. The regularizer $R$ is fixed and preset according to a prior on the hypothesis space $\mathcal{F}$, while only the hyperparameter $\lambda$ is adjusted.

The Euclidean space hypothesis on the analysis tool. The performance of ML can be analyzed in Euclidean space, which means that the optimization algorithm (i.e. $\arg\min$) for solving the parameters $\theta$ can always be naturally embedded in $\mathbb{R}^{p}$ with the Euclidean norm.

All of these prerequisites are standard settings in ML research. They can be seen as both a driver of the rapid development of ML and a restraint on its progress. To improve the performance of existing AI technologies, it is necessary to break through these prior hypotheses of ML. However, these components can be set optimally if and only if the optimal solution to the problem is known in advance, which leads to a 'chicken or egg' dilemma. It is therefore fundamental to establish best-fitting strategies for setting up ML in applications. Recently, a series of strategies has been proposed for breaking through these hypotheses of ML with a best-fitting theory, e.g. model-driven deep learning for the large capacity/regularizer hypotheses, the noise modeling principle for the independence hypothesis, self-paced learning for the completeness hypothesis, and Banach space geometry for the Euclidean hypothesis (see [ 1 ] and the references therein).

Though these strategies have proven effective and powerful, they still rely heavily on manual presetting rather than automatic design purely from data. Specifically, at the data level, we still rely on human effort to collect, select and annotate data. Humans must determine which data should be used for training and testing purposes. At the model and algorithm level, we have to manually construct the fundamental structure of learning models (e.g. deep neural networks), predefine the basic forms of loss functions, determine the types of optimization algorithms and their hyperparameters, etc. Moreover, at the task and environment level, current techniques are good at solving single tasks in a closed environment, but are limited in handling complex and varying multi-tasks in a more realistic open and evolutionary environment. In a nutshell, the current learning paradigm, which relies on extensive manual intervention in ML's components, struggles to handle complex data and diverse tasks in the real world, resulting in degraded and unsatisfactory learning capability of current ML techniques.

A natural approach to address the aforementioned challenges is to reduce manual interventions in the ML process via learning strategies aimed at the automation of ML. In other words, we hope to design ML's fundamental components so as to enhance the adaptive learning capability of ML in an open and evolutionary environment with diverse tasks, thereby achieving the so-called machine learning automation (Auto$^6$ML) [ 1 ]. We can summarize Auto$^6$ML as the following six automation goals.

Data and sample level: automatically generate data and select samples.

Model and algorithm level: automatically construct models/losses and design algorithms.

Task and environment level: automatically transfer between varying tasks and adapt to dynamic environments.

Achieving Auto$^6$ML can be understood as the automatic regulation and design of ML's fundamental components, such as data, models, losses and algorithms, which intrinsically calls for determining a 'learning methodology' mapping. In the following, we propose a 'simulating learning methodology' (SLeM) approach for determining the learning methodology in general and for Auto$^6$ML in particular. The SLeM framework, approaches, algorithms and applications are summarized in Fig. 1.

Figure 1. Illustration of the SLeM framework, theories, algorithms and applications for machine learning automation.

In this section, we propose the SLeM framework by formalizing the learning task, the learning method and the learning methodology, and then present three possible computational realizations of SLeM.

Learning task

Machine learning summarizes observable laws in the real world. From the viewpoint of mathematics and statistics, a learning task can thus be defined as the work of inferring an underlying law (i.e. a probability density function) from observed data. In this sense it is equivalent to a statistical inference task, and its specific forms include classification, regression, clustering, dimensionality reduction, etc. A learning task can be represented in many different ways. (i) A task can be described by prompts via natural language instructions/demonstrations [2], i.e. $T=(t_1,\dots,t_n)$, where $t_i$ is a task demonstration; current popular large language models (LLMs) solve problems via text-prompt interaction with the model. Moreover, a task can be decomposed into a series of sub-tasks, i.e. $T = t_1 \circ t_{2|1} \circ t_{3|2,1} \circ \cdots$, and such a hierarchical prompt representation of a learning task can help an LLM solve complicated reasoning tasks [3]. (ii) A task can be characterized by a small amount of high-quality data, called meta-data [4], denoted $D^{(q)}= \{(x_i^{(q)}, y_i^{(q)})\}_{i=1}^m$, as popularly used in meta-learning. (iii) A task can also be defined by a set of logic rules or knowledge, called meta-knowledge [5], which can likewise be used to quantify the task representation. More forms of task representation are still needed, and research on the precise mathematical formulation of a learning task is ongoing.

Learning method

We define a learning method as a specification of all four elements of ML in equation (1). More precisely, we define the learning space $\mathcal{K}=(\mathcal{D},\mathcal{F},\mathcal{L},\mathcal{A})$, where $\mathcal{D},\mathcal{F},\mathcal{L},\mathcal{A}$ denote the data (distribution function), model (hypothesis), loss (loss function) and algorithm spaces, respectively, and we define a learning method as an element of $\mathcal{K}$ for a given learning task, i.e. a proper data scheme, a learner architecture, a specific loss function and an optimization algorithm. Determining the learning method can be viewed as designing ML's components for the task at hand, which potentially helps alleviate the ML prerequisites listed above. To make the computation tractable, we suppose that $\mathcal{K}$ is separable, that is, each element of the learning space $\mathcal{K}$ can be expanded in a countably infinite set of base functions; $\mathcal{K}$ can then be represented by the product of four infinite sequence spaces $\Psi = (\Psi_{\mathcal{D}}, \Psi_{\mathcal{F}}, \Psi_{\mathcal{L}}, \Psi_{\mathcal{A}})$. From this perspective, a learning method corresponds to a hyperparameter assignment in $\mathcal{K}$; in other words, an effective hyperparameter configuration of the ML process can be interpreted as a proper 'learning method' imposed on a learning task [6]. In practice, we employ finite hyperparameter assignment sequences to approximate $\Psi$.
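To make the 'hyperparameter assignment' reading concrete, the following minimal Python sketch (our illustration, not code from the paper; all field names and default values are hypothetical) spells out one finite learning-method configuration covering the four components.

```python
from dataclasses import dataclass, field

@dataclass
class LearningMethod:
    """A finite hyperparameter assignment psi = (psi_D, psi_F, psi_L, psi_A):
    one concrete choice of data scheme, model, loss and algorithm."""
    # psi_D: data scheme (e.g. sampling and augmentation choices)
    data_scheme: dict = field(default_factory=lambda: {"sample_weighting": "uniform",
                                                       "augmentation": ["flip", "crop"]})
    # psi_F: learner architecture
    model: dict = field(default_factory=lambda: {"arch": "resnet18", "width": 64})
    # psi_L: loss function and its hyperparameters
    loss: dict = field(default_factory=lambda: {"name": "cross_entropy", "label_smoothing": 0.0})
    # psi_A: optimization algorithm and its hyperparameters
    algorithm: dict = field(default_factory=lambda: {"name": "sgd", "lr": 0.1, "momentum": 0.9})

# A learning methodology h would map a task representation T to such an object: psi = h(T).
method = LearningMethod()
print(method.algorithm["lr"])
```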

Learning methodology

The learning methodology is a mapping from the task space $\mathcal{T}$ to the learning space $\mathcal{K}$ or $\Psi$, denoted $\mathcal{LM}:\mathcal{T}\rightarrow(\mathcal{K}\ \text{or}\ \Psi)$. Thus, a learning methodology can be understood as a hyperparameter assignment rule for the learning method. Determining the learning methodology is, however, an intrinsically infinite-dimensional ML problem.

SLeM aims to learn the learning methodology mapping $\mathcal{LM}$ or, in other words, to learn the hyperparameter assignment rule of ML. To this end, we employ an explicit hyperparameter setting mapping $h:\mathcal{T}\rightarrow\Psi$, conditioned on learning tasks, which maps from the learning task space $\mathcal{T}$ to the hyperparameter space $\Psi$ and covers the whole learning process so as to simulate the 'learning methodology'. Formally, we propose solving the following formulation to obtain the 'learning methodology' mapping $h$ shared among various learning tasks:
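(The displayed objective does not survive this extract; the following is a plausible reconstruction consistent with the definitions given below, offered only as a sketch and not as the source's exact equation (2).)

```latex
% Sketch of the SLeM objective: find the methodology mapping h whose predicted
% learning method performs best, in expectation over task--method pairs drawn from S.
% (Reconstruction; the exact use of the method component \psi in the source may differ.)
h^{*} \;=\; \arg\min_{h\in\mathcal{H}}\;
\mathbb{E}_{(T,\psi)\sim\mathcal{S}}
\Big[\, \boldsymbol{L}\big(h(T),\, T\big) \,\Big]
```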

Here $\boldsymbol{L}$ is a metric evaluating the learning method $\psi = (\psi_{\mathcal{D}}, \psi_{\mathcal{F}}, \psi_{\mathcal{L}}, \psi_{\mathcal{A}}) \in \Psi$ for learning task $T \in \mathcal{T}$, $\mathcal{S}$ is the joint probability distribution over $\mathcal{T}\times\Psi$ and $\mathcal{H}$ is the hypothesis space of $h$.

The learning methodology mapping obtained in this way promises to help ML models adapt finely to varying tasks from dynamic environments with fewer human interventions, thereby achieving Auto$^6$ML. Note that the formulation in equation (2) is computationally intractable; a natural way to solve it is to collect observations $\{(\mathcal{T}_i,\Psi_i)\}_{i=1}^t$ from $\mathcal{S}$. We propose three typical realization approaches for SLeM, according to different task representation forms, which have been verified to be effective for achieving Auto$^6$ML in practice.

Prompt-based SLeM

Suppose that we have access to observations $S = \{(T_i, \psi_i)\}_{i=1}^M$, given as task prompts and their corresponding learning methods; then we can rewrite equation (2) as
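(The rewritten objective, equation (3), is likewise missing from this extract; its empirical counterpart, again offered only as a hedged sketch, replaces the expectation in equation (2) with an average over the observed pairs.)

```latex
% Empirical, prompt-based counterpart of the SLeM objective (sketch only; the observed
% methods \psi_i may additionally be used to supervise h directly in the source).
\min_{h\in\mathcal{H}}\;
\frac{1}{M}\sum_{i=1}^{M}
\boldsymbol{L}\big(h(T_i),\, T_i\big)
```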

This approach is closely related to recent LLM techniques [2]. Given a task prompt, an LLM directly predicts the solution, whereas SLeM first predicts the learning method and then produces the solution based on that learning method. This understanding potentially reveals an insight into the task generalization ability of LLM techniques. However, such a 'brute-force' learning paradigm is cumbersome and labor intensive; how to develop lightweight implementations of this formulation is left for future study.

Meta-data-based SLeM

Suppose that we have enough meta-data $D_i^{(q)}$ to properly evaluate the learning methods adapted to learning task $T_i$; then we can rewrite equation (2) as
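(The displayed bilevel objective, equation (4), is not reproduced in this extract; the sketch below is consistent with the definitions that follow, but it is a reconstruction rather than the source's exact equation.)

```latex
% Meta-data-based SLeM: an outer (meta) problem over the methodology mapping h and
% an inner (task) problem over the learner f for each task. (Reconstruction.)
\min_{h\in\mathcal{H}}\;
\frac{1}{M}\sum_{i=1}^{M}
\ell^{meta}\!\big(f^{*}_{i}(h),\, D_i^{(q)}\big)
\quad\text{s.t.}\quad
f^{*}_{i}(h) \;=\; \arg\min_{f\in\mathcal{F}}\;
\ell^{task}\!\big(f,\, D_i^{(s)};\, h(T_i)\big)
```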

where $\ell^{meta}$ and $\ell^{task}$ are the meta and task losses, respectively, $\ell(f,D) = \frac{1}{|D|}\sum_{i=1}^{|D|} \ell(f(x_i), y_i)$ and $f^{*}_{i}(h)$ is the optimal learner for task $T_i$ given the hyperparameter configuration predicted by $h(T_i)$. To distinguish $f$ from $h$, we usually call $h$ a meta-learner. Here $D_i^{(s)}$ is the training set for task $T_i$, and we drop its explicit dependence on $h(T_i)$. Formulation (4) can easily be integrated into the traditional ML framework, providing a fresh understanding and extension of the original ML framework. In the next section, we further show that such a meta-data-based SLeM formulation can greatly enhance the adaptive learning capability of existing ML methods. We have provided a statistical learning guarantee for the task transfer generalization ability of the learning methodology obtained in this way [6], which makes Auto$^6$ML tractable and more solid.
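As a concrete illustration of this bilevel structure, the following self-contained Python sketch uses a toy regression task with corrupted labels: the meta-learner $h$ maps a per-sample training loss to a sample weight, the inner problem fits a weighted least-squares learner, and the outer step adjusts $h$'s two parameters by finite-difference gradients of the meta-loss on a small clean meta-data set. The names and the finite-difference outer update are our own simplifications (practical methods such as MW-Net instead backpropagate through the inner update); this is a sketch of the idea, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(n=100, noise_frac=0.3):
    """Toy regression task y = 2x + 1 with a fraction of corrupted labels,
    plus a small clean meta-data set D^(q)."""
    x = rng.normal(size=(n, 1))
    y = 2.0 * x[:, 0] + 1.0 + 0.1 * rng.normal(size=n)
    bad = rng.random(n) < noise_frac
    y[bad] += rng.normal(scale=5.0, size=bad.sum())   # heavy label noise
    x_q = rng.normal(size=(20, 1))
    y_q = 2.0 * x_q[:, 0] + 1.0
    return (x, y), (x_q, y_q)

def weight_net(per_sample_loss, theta):
    """Meta-learner h (hypothetical 2-parameter form): per-sample loss -> weight in [0, 1]."""
    a, b = theta
    return 1.0 / (1.0 + np.exp(a * per_sample_loss + b))

def fit_learner(x, y, theta, steps=200, lr=0.1):
    """Inner problem: weighted least squares with weights predicted by h."""
    X = np.hstack([x, np.ones((len(x), 1))])
    w = np.zeros(2)                                    # learner parameters (slope, bias)
    for _ in range(steps):
        residual = X @ w - y
        v = weight_net(residual ** 2, theta)           # sample weights from the meta-learner
        w -= lr * X.T @ (v * residual) / len(y)
    return w

def meta_loss(theta, task):
    """Outer objective: clean MSE of the inner solution on the meta-data."""
    (x, y), (x_q, y_q) = task
    w = fit_learner(x, y, theta)
    Xq = np.hstack([x_q, np.ones((len(x_q), 1))])
    return np.mean((Xq @ w - y_q) ** 2)

# Outer loop: crude finite-difference update of the meta-learner's parameters.
theta, eps, meta_lr = np.array([1.0, 0.0]), 1e-3, 0.5
tasks = [make_task() for _ in range(5)]
for _ in range(30):
    grad = np.zeros_like(theta)
    for task in tasks:
        base = meta_loss(theta, task)
        for j in range(len(theta)):
            shifted = theta.copy()
            shifted[j] += eps
            grad[j] += (meta_loss(shifted, task) - base) / eps
    theta -= meta_lr * grad / len(tasks)
print("meta-learned weighting parameters:", theta)
```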

Meta-knowledge-based SLeM

Collecting meta-data may be costly and difficult in some applications. Instead, we also suggest using meta-knowledge to evaluate the learning methodology [5]. Specifically, we propose the following meta-regularization (MR) approach for computing the learning methodology $h$:
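(Equation (5) is also missing from this extract; a sketch consistent with the description that follows simply augments, and trades off against, the meta-loss of the previous formulation with the meta-regularizer.)

```latex
% Meta-knowledge-based SLeM (DAC-MR) objective -- a reconstruction, not the exact source equation.
\min_{h\in\mathcal{H}}\;
\frac{1}{M}\sum_{i=1}^{M}
\Big[\, \lambda\, \ell^{meta}\!\big(f^{*}_{i}(h),\, D_i^{(q)}\big)
\;+\; \gamma\, \mathcal{MR}(h) \,\Big]
\quad\text{s.t.}\quad
f^{*}_{i}(h) \;=\; \arg\min_{f\in\mathcal{F}}\;
\ell^{task}\!\big(f,\, D_i^{(s)};\, h(T_i)\big)
```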

Here $\mathcal{MR}(h)$ is a meta-regularizer that constrains the meta-learner in terms of data augmentation consistency (DAC), regulated by meta-knowledge, and $\lambda, \gamma \ge 0$ are hyperparameters that trade off the meta-loss against the meta-regularizer. In [5], we showed theoretically that the DAC-MR approach can be treated as a proxy meta-objective for evaluating the meta-learner without high-quality meta-data (i.e. $\lambda=0, \gamma>0$). Moreover, the meta-loss combined with the DAC-MR approach is capable of achieving better meta-level generalization (i.e. $\lambda>0, \gamma>0$). We also demonstrated empirically that the DAC-MR approach can learn well-performing meta-learners from training tasks with noisy, sparse or even unavailable meta-data, in line with the theoretical insights.

The learning process of SLeM contains meta-training and meta-test stages. In the meta-training stage, we extract the learning methodology from the given meta-training tasks. This stage often still needs human intervention, for example collecting the meta-training tasks, designing the architecture of the learning methodology mapping and configuring the hyperparameters of the meta-training algorithms. We emphasize, however, that in the meta-test stage the meta-learned learning methodology is fixed and can be used to tune the hyperparameters of ML in a plug-and-play manner. In this sense, it is more accurate to say that the SLeM scheme alleviates the workload of tuning additional hyperparameters of machine learning at the meta-test stage, and thus potentially achieves Auto$^6$ML at the data and sample and the model and algorithm levels. It is essential to note that SLeM still requires a human to specify what problem or task ML should solve, and to provide the input task information for the learning methodology mapping. When the task information specified by users reflects the characteristics of the varying tasks, the learning methodology can adaptively predict the machine learning method for each task. In this sense, SLeM is potentially effective for addressing varying tasks from dynamic environments; in other words, SLeM can achieve Auto$^6$ML at the task and environment level given proper task information specified by a human.

Based on the proposed SLeM framework, we can readily develop a series of SLeM algorithms for Auto$^6$ML, as presented in the following. It is worth emphasizing that the realizations of Auto$^6$ML in this paper are mainly based on the meta-data-based SLeM approach.

Data auto-selection

We explore assigning a weight $v_i \in [0,1]$ to each candidate datum $x_i$, representing the probability of $x_i$ being selected. In contrast to conventional methods that use pre-defined weighting schemes to assign the values of $v_i$, we adopt an MLP network, called MW-Net [4], to learn an explicit weighting scheme. It has been substantiated that weighting functions automatically extracted from data comply with those proposed in hand-designed studies for class imbalance or noisy labels. We further extend MW-Net by introducing a task feature as supplementary input information, yielding CMW-Net [7], to address real-world heterogeneous data bias. CMW-Net has been shown to perform well in various complicated data bias settings, and it improves sample selection and label correction in a range of data bias problems, including datasets with class imbalance, different forms of synthetic label noise and real-life complicated biased datasets. In particular, the meta-learned weighting scheme can be used in a plug-and-play manner and deployed directly on unseen datasets, without having to tune extra hyperparameters of the CMW-Net algorithm.
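The following short Python fragment (our illustration; the functional form and the task feature are hypothetical stand-ins for CMW-Net's learned MLP) shows the interface that matters for plug-and-play reuse: a weight is predicted from the per-sample loss together with a task-level feature such as class size.

```python
import numpy as np

def cmw_style_weight(per_sample_loss, class_size, theta):
    """Toy stand-in for a CMW-Net-style weighting function: the weight depends on the
    per-sample loss *and* a task feature (here, the size of the sample's class), so the
    same function can treat large-loss samples differently in head and tail classes.
    theta = (a, b, c) would be meta-learned in practice."""
    a, b, c = theta
    score = a * per_sample_loss + b * np.log1p(class_size) + c
    return 1.0 / (1.0 + np.exp(score))          # weight in [0, 1]

# Example: the same loss value is trusted less in a 5000-sample class than in a 50-sample class.
theta = (1.0, 0.3, -2.0)
print(cmw_style_weight(2.0, class_size=5000, theta=theta))
print(cmw_style_weight(2.0, class_size=50, theta=theta))
```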

Model auto-adjustment

Existing backbone networks have limited ability to adapt to different distribution shifts. A noise transition matrix is commonly used to adjust the predictions of a deep classifier in order to cope with noisy labels. In contrast to previous methods designed around specific knowledge of the transition matrix, we use a transformer network, called IDCS-NTM [8], to automatically predict the noise transition used to adjust the classifier's predictions, adapting to various types of noisy labels. The meta-learned noise transition network can also help adjust the predictions of a deep classifier on unseen real noisy datasets, and it achieves better performance than manually designed noise transitions.
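To illustrate the mechanism being automated (not the IDCS-NTM architecture itself, and with a fixed transition matrix standing in for the one the transformer would predict), the sketch below shows the standard forward correction: the classifier's clean class posterior is mapped through a noise transition matrix before being compared with the noisy label.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Hypothetical 3-class example. T[i, j] = P(observed label j | true label i); in IDCS-NTM this
# matrix would be predicted by a meta-learned transformer, here it is fixed purely for illustration.
T = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])

logits = np.array([2.0, 0.5, -1.0])        # classifier output for one sample
p_clean = softmax(logits)                   # predicted clean class posterior
p_noisy = T.T @ p_clean                     # adjusted (noisy-label) posterior

noisy_label = 1
loss = -np.log(p_noisy[noisy_label])        # cross-entropy against the *noisy* label
print(p_clean, p_noisy, loss)
```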

Loss auto-setting

For a regression task, the form of the loss function corresponds to the distribution of the underlying noise. Setting the loss function can therefore be formulated as a weighted loss optimization problem. Conventional methods attempt to solve the weighted loss either by assigning the unknown noise distribution subjectively or by fixing the weight vector empirically, which makes it hard to handle complex scenarios adaptively and effectively. We use a hyper-weight network (HWnet) [9] to predict the weight vector. HWnet can automatically adjust the weights for different learning tasks, so as to auto-set a loss function that complies with the task at hand. The meta-learned HWnet can be plugged explicitly into other unseen tasks to adapt finely to various complex noise scenarios, and it helps improve their performance. For classification, we also explore a loss adjuster [10] that automatically sets a robust loss function for every instance across various noisy-label tasks. The meta-learned loss adjuster also transfers to unseen real-life noisy datasets, and it achieves better performance than hand-designed robust loss functions with carefully tuned hyperparameters.
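The correspondence between loss form and noise distribution invoked above is the usual maximum-likelihood one; the following is standard background recalled for clarity, not a formula taken from the HWnet paper.

```latex
% If y = f(x) + \varepsilon with noise density p_\varepsilon, maximum likelihood induces the loss
\ell\big(f(x), y\big) \;=\; -\log p_\varepsilon\big(y - f(x)\big),
% so Gaussian noise gives the squared loss and Laplacian noise the absolute loss; a mixture of
% noise components correspondingly yields a weighted combination of base losses, and it is such
% weights that a hyper-weight network is meant to predict.
```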

Algorithm auto-designing

The stochastic gradient descent algorithm requires a manually preset learning rate (LR) schedule (i.e. $\{\alpha_t\}_{t=1}^T$, with $T$ the total number of iteration steps) for the task at hand. We use a long short-term memory (LSTM)-based network, called MLR-SNet [11], to set the LR schedule adaptively. MLR-SNet can automatically learn a proper LR schedule that complies with the training dynamics of different deep neural network (DNN) training problems, and it is more flexible than hand-designed policies for specific learning tasks. The meta-learned LR schedule is plug-and-play and can readily be transferred to unseen heterogeneous tasks. MLR-SNet has been shown to transfer among DNN training tasks with different numbers of training epochs, datasets and network architectures, including large-scale ImageNet, and it achieves performance comparable with the best hand-designed LR schedules on the test data.
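The toy Python fragment below (entirely our illustration: the controller's functional form is arbitrary, whereas the real MLR-SNet is an LSTM whose parameters are meta-learned across tasks) shows only the plug-and-play interface implied above: at each step the current training loss is fed to a schedule module that returns the learning rate.

```python
import numpy as np

class LRController:
    """Toy stand-in for a meta-learned LR scheduler (e.g. MLR-SNet): maps the current
    training loss, through an internal state, to a learning rate. The functional form
    here is arbitrary and only illustrates the interface."""
    def __init__(self, base_lr=0.1):
        self.base_lr = base_lr
        self.avg_loss = None                       # running average of observed losses
    def __call__(self, loss):
        self.avg_loss = loss if self.avg_loss is None else 0.9 * self.avg_loss + 0.1 * loss
        return self.base_lr / (1.0 + self.avg_loss)

# Plug-and-play usage inside an ordinary SGD loop (toy quadratic objective).
w = np.array([5.0, -3.0])
controller = LRController()
for _ in range(100):
    loss = 0.5 * np.sum(w ** 2)
    grad = w
    w -= controller(loss) * grad                   # LR predicted from the training dynamics
print("final parameters:", w)
```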

SLeM applications

We have released the aforementioned SLeM algorithms on an open-source platform, https://github.com/xjtushujun/Auto-6ML, based on Jittor, aiming to provide a toolkit for users to handle real-life Auto$^6$ML problems. Recently, our CMW-Net algorithm [12] won the 2022 International Algorithm Case Competition, achieving competitive sample selection and label correction performance on real-life heterogeneous and diverse label-noise tasks, which shows its potential usefulness for more practical datasets and tasks. SLeM algorithms can also be applied to real application problems that feature varying multiple tasks from dynamic environments. For example, visual unmanned navigation calls for reliable feature extraction and matching techniques that generalize to different geophysical scenarios and multimodal data; smart education calls for effective visual recognition, detection and analysis techniques that generalize to diverse teaching scenarios and analysis tasks; and so on.

AutoML [13,14] encompasses a wide range of methods that aim to automate traditionally manual aspects of the machine learning process, such as data preparation, algorithm selection, hyperparameter tuning and architecture search, but it has paid limited attention to automatic transfer between varying tasks, which is emphasized by the aforementioned Auto$^6$ML. Existing AutoML methods are mostly heuristic, which makes it difficult to develop theoretical guarantees. In comparison, our SLeM framework establishes a unified mathematical formulation for Auto$^6$ML and provides theoretical insight into the task transfer generalization ability of SLeM [6].

Algorithm selection [15] learns a mapping from the problem space to the algorithm space by searching for the optimal algorithm in a finite pool of algorithms for the task at hand, which is usually inflexible in addressing varying tasks. SLeM instead adopts bilevel optimization tools to extract a learning methodology mapping that predicts the proper learning method for different tasks, with a sound theoretical guarantee, and can therefore fit query tasks more flexibly and adaptively.

Existing SLeM algorithms only realize automation for individual components of ML, which is still far from the goal of Auto$^6$ML. In particular, the learning process of SLeM still requires extensive human intervention and selection. Achieving SLeM algorithms with stronger automation capabilities, and handling more complex automation problems and scenarios, remains an important direction for future research. Moreover, developing a lightweight prompt-based SLeM approach deserves deeper and more comprehensive exploration for the reduction of LLMs. In addition, we aim to construct a novel learning theory on infinite-dimensional function spaces to reveal the insights of SLeM more precisely, and to develop a task-generalized transfer learning theory that provides a theoretical foundation for handling varying tasks and dynamic environments in real-world applications. Building connections between SLeM and other techniques that explore task-transferable generalization, such as meta-learning, in-context learning and large foundation models, is also valuable for future research.

This work was supported by the National Key Research and Development Program of China (2022YFA1004100) and in part by the National Natural Science Foundation of China (12326606 and 12226004).

Conflict of interest statement. None declared.

References

[1] Xu Z. Sci Sin Inform 2021; 51: 1967–78.
[2] Brown T, Mann B, Ryder N et al. Language models are few-shot learners. In: Proceedings of the 34th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates, 2020; 1877–901.
[3] Wei J, Wang X, Schuurmans D et al. Chain-of-thought prompting elicits reasoning in large language models. In: Proceedings of the 36th International Conference on Neural Information Processing Systems. Red Hook: Curran Associates, 2024; 24824–37.
[4] Shu J, Xie Q, Yi L et al. Meta-Weight-Net: learning an explicit mapping for sample weighting. In: Proceedings of the 33rd International Conference on Neural Information Processing Systems. Red Hook: Curran Associates, 2019; 1919–30.
[5] Shu J, Yuan X, Meng D et al. arXiv:2305.07892.
[6] Shu J, Meng D, Xu Z. J Mach Learn Res 2023; 24: 186.
[7] Shu J, Yuan X, Meng D et al. IEEE Trans Pattern Anal Mach Intell 2023; 45: 11521–39. doi:10.1109/TPAMI.2023.3271451.
[8] Shu J, Zhao Q, Xu Z et al. arXiv:2006.05697.
[9] Rui X, Cao X, Shu J et al. arXiv:2301.06081.
[10] Ding K, Shu J, Meng D et al. arXiv:2301.07306.
[11] Shu J, Zhu Y, Zhao Q et al. IEEE Trans Pattern Anal Mach Intell 2023; 45: 3505–21. doi:10.1109/TPAMI.2022.3184315.
[12] Shu J, Yuan X, Meng D. Natl Sci Rev 2023; 10: nwad084. doi:10.1093/nsr/nwad084.
[13] Hutter F, Kotthoff L, Vanschoren J. Automated Machine Learning: Methods, Systems, Challenges. Cham: Springer, 2019. doi:10.1007/978-3-030-05318-5.
[14] Baratchi M, Wang C, Limmer S et al. Artif Intell Rev 2024; 57: 122.
[15] Rice JR. Adv Comput 1976; 15: 65–118. doi:10.1016/S0065-2458(08)60520-3.


  • Review Article
  • Published: 06 September 2024

The renaissance of oral tolerance: merging tradition and new insights

  • Vuk Cerovic   ORCID: orcid.org/0000-0003-1392-0669 1 ,
  • Oliver Pabst   ORCID: orcid.org/0000-0002-5533-883X 1 &
  • Allan McI Mowat   ORCID: orcid.org/0000-0001-9389-3079 2  

Nature Reviews Immunology (2024)

  • Gastrointestinal system
  • Mucosal immunology

Oral tolerance is the process by which feeding of soluble proteins induces antigen-specific systemic immune unresponsiveness. Oral tolerance is thought to have a central role in suppressing immune responses to ‘harmless’ food antigens, and its failure can lead to development of pathologies such as food allergies or coeliac disease. However, on the basis of long-standing experimental observations, the relevance of oral tolerance in human health has achieved new prominence recently following the discovery that oral administration of peanut proteins prevents the development of peanut allergy in at-risk human infants. In this Review, we summarize the new mechanistic insights into three key processes necessary for the induction of tolerance to oral antigens: antigen uptake and transport across the small intestinal epithelial barrier to the underlying immune cells; the processing, transport and presentation of fed antigen by different populations of antigen-presenting cells; and the development of immunosuppressive T cell populations that mediate antigen-specific tolerance. In addition, we consider how related but distinct processes maintain tolerance to bacterial antigens in the large intestine. Finally, we outline the molecular mechanisms and functional consequences of failure of oral tolerance and how these may be modulated to enhance clinical outcomes and prevent disease.



Acknowledgements

The work was supported by the German research foundation (DFG) Project-ID 403224013 – SFB 1382 (B06 to O.P).

Author information

Authors and Affiliations

Institute of Molecular Medicine, RWTH Aachen University, Aachen, Germany

Vuk Cerovic & Oliver Pabst

School of Infection and Immunity, College of Medicine, Veterinary Medicine and Life Sciences, University of Glasgow, Glasgow, UK

Allan McI Mowat


Contributions

The authors contributed equally to all aspects of the article.

Corresponding authors

Correspondence to Vuk Cerovic or Allan McI Mowat.

Ethics declarations

Competing interests.

The authors declare no competing interests.

Peer review

Peer review information.

Nature Reviews Immunology thanks Petra Bachar, Katsuaki Sato and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Close CRM Review (2024): Features, Pros, Cons, and Pricing


Close CRM fast facts

  • Starting price: $49 per user per month (billed annually).
  • Overall rating: 4/5.


Close is a sales CRM solution with modern, simple features. It prioritizes helping marketing and sales teams increase selling efficiency and slim their business tech stacks by offering a multitude of built-in tools, including omnichannel communication, calendar syncs, video meetings, and reporting.

Close markets itself as a solution best suited to startup or small business teams looking to enhance their remote CRM strategy. While its premium tiers can be considered costly, it does offer onboarding assistance and guided platform setup. If your business can allocate a budget for Close, it can facilitate sales productivity improvements in agile-style teams.

Close CRM pricing

  • Free trial: 14 days.
  • Startup: $49 per user per month when billed annually. $59 per user per month when billed monthly. This plan offers built-in calling, texting, and email solutions plus pipeline reporting and account management.
  • Professional: $99 per user per month when billed annually. $109 per user per month when billed monthly. This plan includes all Startup features, plus custom activities, follow-up automations, and multiple pipeline views.
  • Enterprise: $139 per user per month when billed annually. $149 per user per month when billed monthly. This tier includes all Professional features, plus predictive dialer, call coaching, and enhanced customizable reports.

Close CRM key features

Workflow automations

Close allows users to implement automations throughout their sales CRM cycle. Businesses can start with multi-channel outreach by automating emails, calls, and SMS communication. From there, reps can identify and optimize outreach strategies with a KPI-based report dashboard. The best-performing automations can be cloned into templates for easy access.

Close CRM workflow automation feature.

AI call assistant

Close’s suite of AI tools includes a built-in call assistant. This feature automatically transcribes and summarizes all phone calls. Reps can focus on the conversation and rely on Close to produce an accurate and searchable summary with generated action items. The tool supports more than 20 languages and works on desktop or through the Close mobile app.

Close CRM call assistant feature.

Video meetings

Close’s video tool functions through an integration with Zoom. Businesses can connect their Zoom account, and previous cloud recordings are pulled into Close automatically. Close notifies sales reps five minutes before their next Zoom call, lets them join the call directly from the Lead view in Close, and then saves the call recording. This is best suited to remote sales teams that sell over video.

Close CRM Zoom video feature.

Search and Smart Views

The pairing of search with the Smart Views feature can help reps prioritize leads and begin lead nurturing from the same screen. The built-in search function helps reps find contact information by searching important phrases, contract mentions, past conversations, and more. The results can be saved as a Smart View of top lead data, such as recent email opens, renewal updates, or other firmographics. Outreach workflows can then be automated around those saved views.

Close CRM search and smart view feature.

Close CRM pros

  • Offers easy-to-adapt templates.
  • Real users report effective reminders and notifications.
  • SOC2 Type 2, GDPR, and CCPA compliant.

Close CRM cons

  • Real users report a learning curve with the interface.
  • Expensive compared to other similar solutions.
  • Users report minor bugs with the platform.

Alternatives to Close CRM

Pipedrive is an intuitive CRM provider with a strong focus on clean, simple pipelines for sales reps to follow. Compared to Close, Pipedrive’s reporting dashboards offer more collaboration and custom field reports. And while both Pipedrive and Close offer email syncs and outreach automations, Pipedrive’s marketing tools are more advanced, with email analytics and segmentation.

For more information, head over to this Pipedrive review .

HubSpot offers a suite of business tools, including a popular CRM software option. It can integrate with over 1,500 tools for extensive customization. HubSpot’s software itself isn’t open source, but its integrations are developer-friendly. Although it doesn’t have the option for a free trial like Close, HubSpot does offer a forever-free tier as well as an enterprise plan, which makes it a more scalable platform.

To learn more about this alternative, check out the full HubSpot review .

monday CRM is another flexible and customizable CRM platform with advanced project management functionality. It does offer a free version, but only to nonprofits or students, once approved. Compared to monday CRM, Close does offer more built-in communication features, such as its calling and SMS tools.

Read the monday CRM review for more details.

Review methodology

To review Close, I used our in-house rubric, which defines criteria around the evaluation points that matter most when considering the best CRM providers. I compared Close’s top features, pricing, and benefits against industry standards. All of this helps me identify standout features and ideal use cases.

Here’s the exact criteria I used to score Close’s CRM software:

  • Cost: Weighted 25% of the total score.
  • Core features: Weighted 25% of the total score.
  • Customizations: Weighted 15% of the total score.
  • Integrations: Weighted 15% of the total score.
  • Ease of use: Weighted 10% of the total score.
  • Customer support: Weighted 10% of the total score.

For a further breakdown of these criteria, read the TechRepublic review methodology page .
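
In practice, this rubric reduces to a weighted average of category scores. The short Python sketch below illustrates the arithmetic only; the per-category scores are hypothetical placeholders, and just the weights come from the list above.

```python
# Weights come from the rubric above; the per-category scores (out of 5)
# are invented placeholders, not TechRepublic's actual ratings.
weights = {
    "cost": 0.25,
    "core_features": 0.25,
    "customizations": 0.15,
    "integrations": 0.15,
    "ease_of_use": 0.10,
    "customer_support": 0.10,
}
scores = {
    "cost": 3.5,
    "core_features": 4.5,
    "customizations": 4.0,
    "integrations": 3.5,
    "ease_of_use": 4.0,
    "customer_support": 4.5,
}

overall = sum(weights[category] * scores[category] for category in weights)
print(f"Overall weighted score: {overall:.1f}/5")
```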




Lau F, Kuziemsky C, editors. Handbook of eHealth Evaluation: An Evidence-based Approach [Internet]. Victoria (BC): University of Victoria; 2017 Feb 27.


Chapter 9. Methods for Literature Reviews

Guy Paré and Spyros Kitsiou.

9.1. Introduction

Literature reviews play a critical role in scholarship because science remains, first and foremost, a cumulative endeavour ( vom Brocke et al., 2009 ). As in any academic discipline, rigorous knowledge syntheses are becoming indispensable in keeping up with an exponentially growing eHealth literature, assisting practitioners, academics, and graduate students in finding, evaluating, and synthesizing the contents of many empirical and conceptual papers. Among other methods, literature reviews are essential for: (a) identifying what has been written on a subject or topic; (b) determining the extent to which a specific research area reveals any interpretable trends or patterns; (c) aggregating empirical findings related to a narrow research question to support evidence-based practice; (d) generating new frameworks and theories; and (e) identifying topics or questions requiring more investigation ( Paré, Trudel, Jaana, & Kitsiou, 2015 ).

Literature reviews can take two major forms. The most prevalent one is the “literature review” or “background” section within a journal paper or a chapter in a graduate thesis. This section synthesizes the extant literature and usually identifies the gaps in knowledge that the empirical study addresses ( Sylvester, Tate, & Johnstone, 2013 ). It may also provide a theoretical foundation for the proposed study, substantiate the presence of the research problem, justify the research as one that contributes something new to the cumulated knowledge, or validate the methods and approaches for the proposed study ( Hart, 1998 ; Levy & Ellis, 2006 ).

The second form of literature review, which is the focus of this chapter, constitutes an original and valuable work of research in and of itself ( Paré et al., 2015 ). Rather than providing a base for a researcher’s own work, it creates a solid starting point for all members of the community interested in a particular area or topic ( Mulrow, 1987 ). The so-called “review article” is a journal-length paper which has an overarching purpose to synthesize the literature in a field, without collecting or analyzing any primary data ( Green, Johnson, & Adams, 2006 ).

When appropriately conducted, review articles represent powerful information sources for practitioners looking for state-of-the art evidence to guide their decision-making and work practices ( Paré et al., 2015 ). Further, high-quality reviews become frequently cited pieces of work which researchers seek out as a first clear outline of the literature when undertaking empirical studies ( Cooper, 1988 ; Rowe, 2014 ). Scholars who track and gauge the impact of articles have found that review papers are cited and downloaded more often than any other type of published article ( Cronin, Ryan, & Coughlan, 2008 ; Montori, Wilczynski, Morgan, Haynes, & Hedges, 2003 ; Patsopoulos, Analatos, & Ioannidis, 2005 ). The reason for their popularity may be the fact that reading the review enables one to have an overview, if not a detailed knowledge of the area in question, as well as references to the most useful primary sources ( Cronin et al., 2008 ). Although they are not easy to conduct, the commitment to complete a review article provides a tremendous service to one’s academic community ( Paré et al., 2015 ; Petticrew & Roberts, 2006 ). Most, if not all, peer-reviewed journals in the fields of medical informatics publish review articles of some type.

The main objectives of this chapter are fourfold: (a) to provide an overview of the major steps and activities involved in conducting a stand-alone literature review; (b) to describe and contrast the different types of review articles that can contribute to the eHealth knowledge base; (c) to illustrate each review type with one or two examples from the eHealth literature; and (d) to provide a series of recommendations for prospective authors of review articles in this domain.

9.2. Overview of the Literature Review Process and Steps

As explained in Templier and Paré (2015) , there are six generic steps involved in conducting a review article:

  • formulating the research question(s) and objective(s),
  • searching the extant literature,
  • screening for inclusion,
  • assessing the quality of primary studies,
  • extracting data, and
  • analyzing data.

Although these steps are presented here in sequential order, one must keep in mind that the review process can be iterative and that many activities can be initiated during the planning stage and later refined during subsequent phases ( Finfgeld-Connett & Johnson, 2013 ; Kitchenham & Charters, 2007 ).

Formulating the research question(s) and objective(s): As a first step, members of the review team must appropriately justify the need for the review itself ( Petticrew & Roberts, 2006 ), identify the review’s main objective(s) ( Okoli & Schabram, 2010 ), and define the concepts or variables at the heart of their synthesis ( Cooper & Hedges, 2009 ; Webster & Watson, 2002 ). Importantly, they also need to articulate the research question(s) they propose to investigate ( Kitchenham & Charters, 2007 ). In this regard, we concur with Jesson, Matheson, and Lacey (2011) that clearly articulated research questions are key ingredients that guide the entire review methodology; they underscore the type of information that is needed, inform the search for and selection of relevant literature, and guide or orient the subsequent analysis.

Searching the extant literature: The next step consists of searching the literature and making decisions about the suitability of material to be considered in the review ( Cooper, 1988 ). There exist three main coverage strategies. First, exhaustive coverage means an effort is made to be as comprehensive as possible in order to ensure that all relevant studies, published and unpublished, are included in the review and, thus, conclusions are based on this all-inclusive knowledge base. The second type of coverage consists of presenting materials that are representative of most other works in a given field or area. Often authors who adopt this strategy will search for relevant articles in a small number of top-tier journals in a field ( Paré et al., 2015 ). In the third strategy, the review team concentrates on prior works that have been central or pivotal to a particular topic. This may include empirical studies or conceptual papers that initiated a line of investigation, changed how problems or questions were framed, introduced new methods or concepts, or engendered important debate ( Cooper, 1988 ).

Screening for inclusion: The following step consists of evaluating the applicability of the material identified in the preceding step ( Levy & Ellis, 2006 ; vom Brocke et al., 2009 ). Once a group of potential studies has been identified, members of the review team must screen them to determine their relevance ( Petticrew & Roberts, 2006 ). A set of predetermined rules provides a basis for including or excluding certain studies. This exercise requires a significant investment on the part of researchers, who must ensure enhanced objectivity and avoid biases or mistakes. As discussed later in this chapter, for certain types of reviews there must be at least two independent reviewers involved in the screening process and a procedure to resolve disagreements must also be in place ( Liberati et al., 2009 ; Shea et al., 2009 ).

Assessing the quality of primary studies: In addition to screening material for inclusion, members of the review team may need to assess the scientific quality of the selected studies, that is, appraise the rigour of the research design and methods. Such formal assessment, which is usually conducted independently by at least two coders, helps members of the review team refine which studies to include in the final sample, determine whether or not the differences in quality may affect their conclusions, or guide how they analyze the data and interpret the findings ( Petticrew & Roberts, 2006 ). Ascribing quality scores to each primary study or considering through domain-based evaluations which study components have or have not been designed and executed appropriately makes it possible to reflect on the extent to which the selected study addresses possible biases and maximizes validity ( Shea et al., 2009 ).

Extracting data: The following step involves gathering or extracting applicable information from each primary study included in the sample and deciding what is relevant to the problem of interest ( Cooper & Hedges, 2009 ). Indeed, the type of data that should be recorded mainly depends on the initial research questions ( Okoli & Schabram, 2010 ). However, important information may also be gathered about how, when, where and by whom the primary study was conducted, the research design and methods, or qualitative/quantitative results ( Cooper & Hedges, 2009 ).

Analyzing and synthesizing data: As a final step, members of the review team must collate, summarize, aggregate, organize, and compare the evidence extracted from the included studies. The extracted data must be presented in a meaningful way that suggests a new contribution to the extant literature ( Jesson et al., 2011 ). Webster and Watson (2002) warn researchers that literature reviews should be much more than lists of papers and should provide a coherent lens to make sense of extant knowledge on a given topic. There exist several methods and techniques for synthesizing quantitative (e.g., frequency analysis, meta-analysis) and qualitative (e.g., grounded theory, narrative analysis, meta-ethnography) evidence ( Dixon-Woods, Agarwal, Jones, Young, & Sutton, 2005 ; Thomas & Harden, 2008 ).
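
Because dual, independent screening produces two parallel sets of include/exclude decisions, review teams often quantify how well the reviewers agree before reconciling conflicts. The short Python sketch below is a minimal illustration of that idea, computing raw agreement and Cohen's kappa for two hypothetical reviewers; the record decisions are invented for the example, and real reviews would typically export such data from their screening software.

```python
# Illustrative only: two hypothetical reviewers' include/exclude decisions
# for the same six records.

def cohens_kappa(decisions_a, decisions_b):
    """Return (observed agreement, Cohen's kappa) for two decision lists."""
    n = len(decisions_a)
    observed = sum(a == b for a, b in zip(decisions_a, decisions_b)) / n
    # Chance agreement, from each reviewer's marginal "include" rate.
    p_a = decisions_a.count("include") / n
    p_b = decisions_b.count("include") / n
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    kappa = 1.0 if expected == 1 else (observed - expected) / (1 - expected)
    return observed, kappa

reviewer_1 = ["include", "exclude", "include", "exclude", "exclude", "include"]
reviewer_2 = ["include", "exclude", "exclude", "exclude", "exclude", "include"]

agreement, kappa = cohens_kappa(reviewer_1, reviewer_2)
print(f"Observed agreement: {agreement:.2f}; Cohen's kappa: {kappa:.2f}")
# Disagreements (here, record 3) would go to discussion or a third reviewer.
```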

9.3. Types of Review Articles and Brief Illustrations

eHealth researchers have at their disposal a number of approaches and methods for making sense of the existing literature, all with the purpose of casting current research findings into historical contexts or explaining contradictions that might exist among a set of primary research studies conducted on a particular topic. Our classification scheme is largely inspired by Paré and colleagues’ (2015) typology. Below we present and illustrate those review types that we feel are central to the growth and development of the eHealth domain.

9.3.1. Narrative Reviews

The narrative review is the “traditional” way of reviewing the extant literature and is skewed towards a qualitative interpretation of prior knowledge ( Sylvester et al., 2013 ). Put simply, a narrative review attempts to summarize or synthesize what has been written on a particular topic but does not seek generalization or cumulative knowledge from what is reviewed ( Davies, 2000 ; Green et al., 2006 ). Instead, the review team often undertakes the task of accumulating and synthesizing the literature to demonstrate the value of a particular point of view ( Baumeister & Leary, 1997 ). As such, reviewers may selectively ignore or limit the attention paid to certain studies in order to make a point. In this rather unsystematic approach, the selection of information from primary articles is subjective, lacks explicit criteria for inclusion and can lead to biased interpretations or inferences ( Green et al., 2006 ). There are several narrative reviews in the particular eHealth domain, as in all fields, which follow such an unstructured approach ( Silva et al., 2015 ; Paul et al., 2015 ).

Despite these criticisms, this type of review can be very useful in gathering together a volume of literature in a specific subject area and synthesizing it. As mentioned above, its primary purpose is to provide the reader with a comprehensive background for understanding current knowledge and highlighting the significance of new research ( Cronin et al., 2008 ). Faculty like to use narrative reviews in the classroom because they are often more up to date than textbooks, provide a single source for students to reference, and expose students to peer-reviewed literature ( Green et al., 2006 ). For researchers, narrative reviews can inspire research ideas by identifying gaps or inconsistencies in a body of knowledge, thus helping researchers to determine research questions or formulate hypotheses. Importantly, narrative reviews can also be used as educational articles to bring practitioners up to date with certain topics or issues ( Green et al., 2006 ).

Recently, there have been several efforts to introduce more rigour in narrative reviews that will elucidate common pitfalls and bring changes into their publication standards. Information systems researchers, among others, have contributed to advancing knowledge on how to structure a “traditional” review. For instance, Levy and Ellis (2006) proposed a generic framework for conducting such reviews. Their model follows the systematic data processing approach comprised of three steps, namely: (a) literature search and screening; (b) data extraction and analysis; and (c) writing the literature review. They provide detailed and very helpful instructions on how to conduct each step of the review process. As another methodological contribution, vom Brocke et al. (2009) offered a series of guidelines for conducting literature reviews, with a particular focus on how to search and extract the relevant body of knowledge. Last, Bandara, Miskon, and Fielt (2011) proposed a structured, predefined and tool-supported method to identify primary studies within a feasible scope, extract relevant content from identified articles, synthesize and analyze the findings, and effectively write and present the results of the literature review. We highly recommend that prospective authors of narrative reviews consult these useful sources before embarking on their work.

Darlow and Wen (2015) provide a good example of a highly structured narrative review in the eHealth field. These authors synthesized published articles that describe the development process of mobile health (m-health) interventions for patients’ cancer care self-management. As in most narrative reviews, the scope of the research questions being investigated is broad: (a) how development of these systems is carried out; (b) which methods are used to investigate these systems; and (c) what conclusions can be drawn as a result of the development of these systems. To provide clear answers to these questions, a literature search was conducted on six electronic databases and Google Scholar . The search was performed using several terms and free text words, combining them in an appropriate manner. Four inclusion and three exclusion criteria were utilized during the screening process. Both authors independently reviewed each of the identified articles to determine eligibility and extract study information. A flow diagram shows the number of studies identified, screened, and included or excluded at each stage of study selection. In terms of contributions, this review provides a series of practical recommendations for m-health intervention development.
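
To give a flavour of how such a search strategy can be made explicit and reproducible, the Python sketch below assembles a Boolean query from concept blocks in the PICO spirit. The terms and field tags are hypothetical examples, not the actual strategy used by Darlow and Wen (2015).

```python
# Hypothetical concept blocks; an actual protocol would document the exact
# terms, field tags and databases that were searched.
population = ['"neoplasms"[MeSH Terms]', "cancer", "oncolog*"]
intervention = ['"telemedicine"[MeSH Terms]', "mhealth", '"mobile health"', "smartphone*"]
outcome = ['"self-management"', '"self care"']

def or_block(terms):
    # Synonyms for one concept are combined with OR and wrapped in parentheses.
    return "(" + " OR ".join(terms) + ")"

# Concept blocks are then combined with AND to narrow the result set.
query = " AND ".join(or_block(block) for block in (population, intervention, outcome))
print(query)
```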

9.3.2. Descriptive or Mapping Reviews

The primary goal of a descriptive review is to determine the extent to which a body of knowledge in a particular research topic reveals any interpretable pattern or trend with respect to pre-existing propositions, theories, methodologies or findings ( King & He, 2005 ; Paré et al., 2015 ). In contrast with narrative reviews, descriptive reviews follow a systematic and transparent procedure, including searching, screening and classifying studies ( Petersen, Vakkalanka, & Kuzniarz, 2015 ). Indeed, structured search methods are used to form a representative sample of a larger group of published works ( Paré et al., 2015 ). Further, authors of descriptive reviews extract from each study certain characteristics of interest, such as publication year, research methods, data collection techniques, and direction or strength of research outcomes (e.g., positive, negative, or non-significant) in the form of frequency analysis to produce quantitative results ( Sylvester et al., 2013 ). In essence, each study included in a descriptive review is treated as the unit of analysis and the published literature as a whole provides a database from which the authors attempt to identify any interpretable trends or draw overall conclusions about the merits of existing conceptualizations, propositions, methods or findings ( Paré et al., 2015 ). In doing so, a descriptive review may claim that its findings represent the state of the art in a particular domain ( King & He, 2005 ).

In the fields of health sciences and medical informatics, reviews that focus on examining the range, nature and evolution of a topic area are described by Anderson, Allen, Peckham, and Goodwin (2008) as mapping reviews . Like descriptive reviews, the research questions are generic and usually relate to publication patterns and trends. There is no preconceived plan to systematically review all of the literature although this can be done. Instead, researchers often present studies that are representative of most works published in a particular area and they consider a specific time frame to be mapped.

An example of this approach in the eHealth domain is offered by DeShazo, Lavallie, and Wolf (2009). The purpose of this descriptive or mapping review was to characterize publication trends in the medical informatics literature over a 20-year period (1987 to 2006). To achieve this ambitious objective, the authors performed a bibliometric analysis of medical informatics citations indexed in MEDLINE using publication trends, journal frequencies, impact factors, Medical Subject Headings (MeSH) term frequencies, and characteristics of citations. Findings revealed that there were over 77,000 medical informatics articles published during the covered period in numerous journals and that the average annual growth rate was 12%. The MeSH term analysis also suggested a strong interdisciplinary trend. Finally, average impact scores increased over time with two notable growth periods. Overall, patterns in research outputs that seem to characterize the historic trends and current components of the field of medical informatics suggest it may be a maturing discipline (DeShazo et al., 2009).
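
A bibliometric mapping exercise of this kind ultimately rests on simple frequency counts and growth-rate arithmetic. The sketch below shows the idea with invented yearly publication counts; it is not a reproduction of DeShazo and colleagues' data.

```python
# Invented yearly publication counts for illustration; a real mapping review
# would derive these from database exports (e.g., yearly MEDLINE hit counts).
counts_by_year = {1987: 900, 1992: 1500, 1997: 2400, 2002: 3900, 2006: 6100}

years = sorted(counts_by_year)
first, last = years[0], years[-1]
total = sum(counts_by_year.values())

# Compound annual growth rate between the first and last observed years.
cagr = (counts_by_year[last] / counts_by_year[first]) ** (1 / (last - first)) - 1

print(f"Articles counted across sampled years: {total}")
print(f"Compound annual growth rate {first}-{last}: {cagr:.1%}")  # ~10.6% here
```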

9.3.3. Scoping Reviews

Scoping reviews attempt to provide an initial indication of the potential size and nature of the extant literature on an emergent topic (Arksey & O’Malley, 2005; Daudt, van Mossel, & Scott, 2013 ; Levac, Colquhoun, & O’Brien, 2010). A scoping review may be conducted to examine the extent, range and nature of research activities in a particular area, determine the value of undertaking a full systematic review (discussed next), or identify research gaps in the extant literature ( Paré et al., 2015 ). In line with their main objective, scoping reviews usually conclude with the presentation of a detailed research agenda for future works along with potential implications for both practice and research.

Unlike narrative and descriptive reviews, the whole point of scoping the field is to be as comprehensive as possible, including grey literature (Arksey & O’Malley, 2005). Inclusion and exclusion criteria must be established to help researchers eliminate studies that are not aligned with the research questions. It is also recommended that at least two independent coders review abstracts yielded from the search strategy and then the full articles for study selection ( Daudt et al., 2013 ). The synthesized evidence from content or thematic analysis is relatively easy to present in tabular form (Arksey & O’Malley, 2005; Thomas & Harden, 2008 ).

One of the most highly cited scoping reviews in the eHealth domain was published by Archer, Fevrier-Thomas, Lokker, McKibbon, and Straus (2011). These authors reviewed the existing literature on personal health record (PHR) systems including design, functionality, implementation, applications, outcomes, and benefits. Seven databases were searched from 1985 to March 2010. Several search terms relating to PHRs were used during this process. Two authors independently screened titles and abstracts to determine inclusion status. A second screen of full-text articles, again by two independent members of the research team, ensured that the studies described PHRs. All in all, 130 articles met the criteria and their data were extracted manually into a database. The authors concluded that although there is a large amount of survey, observational, cohort/panel, and anecdotal evidence of PHR benefits and satisfaction for patients, more research is needed to evaluate the results of PHR implementations. Their in-depth analysis of the literature signalled that there is little solid evidence from randomized controlled trials or other studies through the use of PHRs. Hence, they suggested that more research is needed that addresses the current lack of understanding of optimal functionality and usability of these systems, and how they can play a beneficial role in supporting patient self-management ( Archer et al., 2011 ).

9.3.4. Forms of Aggregative Reviews

Healthcare providers, practitioners, and policy-makers are nowadays overwhelmed with large volumes of information, including research-based evidence from numerous clinical trials and evaluation studies, assessing the effectiveness of health information technologies and interventions ( Ammenwerth & de Keizer, 2004 ; Deshazo et al., 2009 ). It is unrealistic to expect that all these disparate actors will have the time, skills, and necessary resources to identify the available evidence in the area of their expertise and consider it when making decisions. Systematic reviews that involve the rigorous application of scientific strategies aimed at limiting subjectivity and bias (i.e., systematic and random errors) can respond to this challenge.

Systematic reviews attempt to aggregate, appraise, and synthesize in a single source all empirical evidence that meet a set of previously specified eligibility criteria in order to answer a clearly formulated and often narrow research question on a particular topic of interest to support evidence-based practice ( Liberati et al., 2009 ). They adhere closely to explicit scientific principles ( Liberati et al., 2009 ) and rigorous methodological guidelines (Higgins & Green, 2008) aimed at reducing random and systematic errors that can lead to deviations from the truth in results or inferences. The use of explicit methods allows systematic reviews to aggregate a large body of research evidence, assess whether effects or relationships are in the same direction and of the same general magnitude, explain possible inconsistencies between study results, and determine the strength of the overall evidence for every outcome of interest based on the quality of included studies and the general consistency among them ( Cook, Mulrow, & Haynes, 1997 ). The main procedures of a systematic review involve:

  • Formulating a review question and developing a search strategy based on explicit inclusion criteria for the identification of eligible studies (usually described in the context of a detailed review protocol).
  • Searching for eligible studies using multiple databases and information sources, including grey literature sources, without any language restrictions.
  • Selecting studies, extracting data, and assessing risk of bias in a duplicate manner using two independent reviewers to avoid random or systematic errors in the process.
  • Analyzing data using quantitative or qualitative methods.
  • Presenting results in summary of findings tables.
  • Interpreting results and drawing conclusions.
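
Keeping an explicit tally of records at each of these stages (identification, de-duplication, screening, full-text assessment, inclusion) is what later feeds a PRISMA-style flow diagram. The sketch below is a minimal illustration of that bookkeeping; the numbers and database names are invented and not drawn from any particular review.

```python
# Invented numbers and sources, purely to illustrate the record-flow
# bookkeeping behind a PRISMA-style flow diagram.
records_per_source = {"MEDLINE": 412, "Embase": 388, "CINAHL": 97, "grey literature": 23}

identified = sum(records_per_source.values())
duplicates_removed = 310
screened = identified - duplicates_removed              # titles/abstracts screened
excluded_on_title_abstract = 520
full_text_assessed = screened - excluded_on_title_abstract
excluded_at_full_text = 68
included = full_text_assessed - excluded_at_full_text

print(f"Records identified: {identified}")
print(f"Records screened after de-duplication: {screened}")
print(f"Full-text articles assessed: {full_text_assessed}")
print(f"Studies included in the synthesis: {included}")
```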

Many systematic reviews, but not all, use statistical methods to combine the results of independent studies into a single quantitative estimate or summary effect size. Known as meta-analyses , these reviews use specific data extraction and statistical techniques (e.g., network, frequentist, or Bayesian meta-analyses) to calculate from each study by outcome of interest an effect size along with a confidence interval that reflects the degree of uncertainty behind the point estimate of effect ( Borenstein, Hedges, Higgins, & Rothstein, 2009 ; Deeks, Higgins, & Altman, 2008 ). Subsequently, they use fixed or random-effects analysis models to combine the results of the included studies, assess statistical heterogeneity, and calculate a weighted average of the effect estimates from the different studies, taking into account their sample sizes. The summary effect size is a value that reflects the average magnitude of the intervention effect for a particular outcome of interest or, more generally, the strength of a relationship between two variables across all studies included in the systematic review. By statistically combining data from multiple studies, meta-analyses can create more precise and reliable estimates of intervention effects than those derived from individual studies alone, when these are examined independently as discrete sources of information.
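
As a minimal illustration of the pooling step, the sketch below computes a fixed-effect, inverse-variance weighted summary estimate and its 95% confidence interval from a handful of hypothetical study results. Real meta-analyses would normally rely on dedicated statistical packages and would also assess heterogeneity and consider random-effects models, as described above.

```python
import math

# Hypothetical per-study effect estimates (e.g., log odds ratios) and their
# standard errors; in a real review these come from the data extraction step.
studies = [
    ("Study A", 0.42, 0.18),
    ("Study B", 0.10, 0.25),
    ("Study C", 0.35, 0.15),
    ("Study D", 0.05, 0.30),
]

# Fixed-effect (inverse-variance) pooling: each study is weighted by 1 / SE^2.
weights = [1.0 / se ** 2 for _, _, se in studies]
pooled = sum(w * est for (_, est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

ci_low = pooled - 1.96 * pooled_se
ci_high = pooled + 1.96 * pooled_se
print(f"Pooled estimate: {pooled:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
# Dedicated tools (e.g., the metafor package in R) would also report
# heterogeneity statistics and random-effects estimates alongside this result.
```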

The review by Gurol-Urganci, de Jongh, Vodopivec-Jamsek, Atun, and Car (2013) on the effects of mobile phone messaging reminders for attendance at healthcare appointments is an illustrative example of a high-quality systematic review with meta-analysis. Missed appointments are a major cause of inefficiency in healthcare delivery with substantial monetary costs to health systems. These authors sought to assess whether mobile phone-based appointment reminders delivered through Short Message Service (SMS) or Multimedia Messaging Service (MMS) are effective in improving rates of patient attendance and reducing overall costs. To this end, they conducted a comprehensive search on multiple databases using highly sensitive search strategies without language or publication-type restrictions to identify all RCTs that are eligible for inclusion. In order to minimize the risk of omitting eligible studies not captured by the original search, they supplemented all electronic searches with manual screening of trial registers and references contained in the included studies. Study selection, data extraction, and risk of bias assessments were performed independently by two coders using standardized methods to ensure consistency and to eliminate potential errors. Findings from eight RCTs involving 6,615 participants were pooled into meta-analyses to calculate the magnitude of effects that mobile text message reminders have on the rate of attendance at healthcare appointments compared to no reminders and phone call reminders.

Meta-analyses are regarded as powerful tools for deriving meaningful conclusions. However, there are situations in which it is neither reasonable nor appropriate to pool studies together using meta-analytic methods simply because there is extensive clinical heterogeneity between the included studies or variation in measurement tools, comparisons, or outcomes of interest. In these cases, systematic reviews can use qualitative synthesis methods such as vote counting, content analysis, classification schemes and tabulations, as an alternative approach to narratively synthesize the results of the independent studies included in the review. This form of review is known as qualitative systematic review.

A rigorous example of one such review in the eHealth domain is presented by Mickan, Atherton, Roberts, Heneghan, and Tilson (2014) on the use of handheld computers by healthcare professionals and their impact on access to information and clinical decision-making. In line with the methodological guidelines for systematic reviews, these authors: (a) developed and registered with PROSPERO ( www.crd.york.ac.uk/prospero/ ) an a priori review protocol; (b) conducted comprehensive searches for eligible studies using multiple databases and other supplementary strategies (e.g., forward searches); and (c) subsequently carried out study selection, data extraction, and risk of bias assessments in a duplicate manner to eliminate potential errors in the review process. Heterogeneity between the included studies in terms of reported outcomes and measures precluded the use of meta-analytic methods. To this end, the authors resorted to using narrative analysis and synthesis to describe the effectiveness of handheld computers on accessing information for clinical knowledge, adherence to safety and clinical quality guidelines, and diagnostic decision-making.

In recent years, the number of systematic reviews in the field of health informatics has increased considerably. Systematic reviews with discordant findings can cause great confusion and make it difficult for decision-makers to interpret the review-level evidence ( Moher, 2013 ). Therefore, there is a growing need for appraisal and synthesis of prior systematic reviews to ensure that decision-making is constantly informed by the best available accumulated evidence. Umbrella reviews , also known as overviews of systematic reviews, are tertiary types of evidence synthesis that aim to accomplish this; that is, they aim to compare and contrast findings from multiple systematic reviews and meta-analyses ( Becker & Oxman, 2008 ). Umbrella reviews generally adhere to the same principles and rigorous methodological guidelines used in systematic reviews. However, the unit of analysis in umbrella reviews is the systematic review rather than the primary study ( Becker & Oxman, 2008 ). Unlike systematic reviews that have a narrow focus of inquiry, umbrella reviews focus on broader research topics for which there are several potential interventions ( Smith, Devane, Begley, & Clarke, 2011 ). A recent umbrella review on the effects of home telemonitoring interventions for patients with heart failure critically appraised, compared, and synthesized evidence from 15 systematic reviews to investigate which types of home telemonitoring technologies and forms of interventions are more effective in reducing mortality and hospital admissions ( Kitsiou, Paré, & Jaana, 2015 ).

9.3.5. Realist Reviews

Realist reviews are theory-driven interpretative reviews developed to inform, enhance, or supplement conventional systematic reviews by making sense of heterogeneous evidence about complex interventions applied in diverse contexts in a way that informs policy decision-making ( Greenhalgh, Wong, Westhorp, & Pawson, 2011 ). They originated from criticisms of positivist systematic reviews which centre on their “simplistic” underlying assumptions ( Oates, 2011 ). As explained above, systematic reviews seek to identify causation. Such logic is appropriate for fields like medicine and education where findings of randomized controlled trials can be aggregated to see whether a new treatment or intervention does improve outcomes. However, many argue that it is not possible to establish such direct causal links between interventions and outcomes in fields such as social policy, management, and information systems where for any intervention there is unlikely to be a regular or consistent outcome ( Oates, 2011 ; Pawson, 2006 ; Rousseau, Manning, & Denyer, 2008 ).

To circumvent these limitations, Pawson, Greenhalgh, Harvey, and Walshe (2005) have proposed a new approach for synthesizing knowledge that seeks to unpack the mechanism of how “complex interventions” work in particular contexts. The basic research question — what works? — which is usually associated with systematic reviews changes to: what is it about this intervention that works, for whom, in what circumstances, in what respects and why? Realist reviews have no particular preference for either quantitative or qualitative evidence. As a theory-building approach, a realist review usually starts by articulating likely underlying mechanisms and then scrutinizes available evidence to find out whether and where these mechanisms are applicable ( Shepperd et al., 2009 ). Primary studies found in the extant literature are viewed as case studies which can test and modify the initial theories ( Rousseau et al., 2008 ).

The main objective pursued in the realist review conducted by Otte-Trojel, de Bont, Rundall, and van de Klundert (2014) was to examine how patient portals contribute to health service delivery and patient outcomes. The specific goals were to investigate how outcomes are produced and, most importantly, how variations in outcomes can be explained. The research team started with an exploratory review of background documents and research studies to identify ways in which patient portals may contribute to health service delivery and patient outcomes. The authors identified six main ways which represent “educated guesses” to be tested against the data in the evaluation studies. These studies were identified through a formal and systematic search in four databases between 2003 and 2013. Two members of the research team selected the articles using a pre-established list of inclusion and exclusion criteria and following a two-step procedure. The authors then extracted data from the selected articles and created several tables, one for each outcome category. They organized information to bring forward those mechanisms where patient portals contribute to outcomes and the variation in outcomes across different contexts.

9.3.6. Critical Reviews

Lastly, critical reviews aim to provide a critical evaluation and interpretive analysis of existing literature on a particular topic of interest to reveal strengths, weaknesses, contradictions, controversies, inconsistencies, and/or other important issues with respect to theories, hypotheses, research methods or results ( Baumeister & Leary, 1997 ; Kirkevold, 1997 ). Unlike other review types, critical reviews attempt to take a reflective account of the research that has been done in a particular area of interest, and assess its credibility by using appraisal instruments or critical interpretive methods. In this way, critical reviews attempt to constructively inform other scholars about the weaknesses of prior research and strengthen knowledge development by giving focus and direction to studies for further improvement ( Kirkevold, 1997 ).

Kitsiou, Paré, and Jaana (2013) provide an example of a critical review that assessed the methodological quality of prior systematic reviews of home telemonitoring studies for chronic patients. The authors conducted a comprehensive search on multiple databases to identify eligible reviews and subsequently used a validated instrument to conduct an in-depth quality appraisal. Results indicate that the majority of systematic reviews in this particular area suffer from important methodological flaws and biases that impair their internal validity and limit their usefulness for clinical and decision-making purposes. To this end, they provide a number of recommendations to strengthen knowledge development towards improving the design and execution of future reviews on home telemonitoring.

9.4. Summary

Table 9.1 outlines the main types of literature reviews that were described in the previous sub-sections and summarizes the main characteristics that distinguish one review type from another. It also includes key references to methodological guidelines and useful sources that can be used by eHealth scholars and researchers for planning and developing reviews.

Table 9.1. Typology of Literature Reviews (adapted from Paré et al., 2015).

As shown in Table 9.1 , each review type addresses different kinds of research questions or objectives, which subsequently define and dictate the methods and approaches that need to be used to achieve the overarching goal(s) of the review. For example, in the case of narrative reviews, there is greater flexibility in searching and synthesizing articles ( Green et al., 2006 ). Researchers are often relatively free to use a diversity of approaches to search, identify, and select relevant scientific articles, describe their operational characteristics, present how the individual studies fit together, and formulate conclusions. On the other hand, systematic reviews are characterized by their high level of systematicity, rigour, and use of explicit methods, based on an “a priori” review plan that aims to minimize bias in the analysis and synthesis process (Higgins & Green, 2008). Some reviews are exploratory in nature (e.g., scoping/mapping reviews), whereas others may be conducted to discover patterns (e.g., descriptive reviews) or involve a synthesis approach that may include the critical analysis of prior research ( Paré et al., 2015 ). Hence, in order to select the most appropriate type of review, it is critical to know, before embarking on a review project, why the research synthesis is being conducted and what types of methods are best aligned with the pursued goals.

9.5. Concluding Remarks

In light of the increased use of evidence-based practice and research generating stronger evidence ( Grady et al., 2011 ; Lyden et al., 2013 ), review articles have become essential tools for summarizing, synthesizing, integrating or critically appraising prior knowledge in the eHealth field. As mentioned earlier, when rigorously conducted, review articles represent powerful information sources for eHealth scholars and practitioners looking for state-of-the-art evidence. The typology of literature reviews we used herein will allow eHealth researchers, graduate students and practitioners to gain a better understanding of the similarities and differences between review types.

We must stress that this classification scheme does not privilege any specific type of review as being of higher quality than another ( Paré et al., 2015 ). As explained above, each type of review has its own strengths and limitations. Having said that, we realize that the methodological rigour of any review — be it qualitative, quantitative or mixed — is a critical aspect that should be considered seriously by prospective authors. In the present context, the notion of rigour refers to the reliability and validity of the review process described in section 9.2. For one thing, reliability is related to the reproducibility of the review process and steps, which is facilitated by a comprehensive documentation of the literature search process, extraction, coding and analysis performed in the review. Whether the search is comprehensive or not, whether it involves a methodical approach for data extraction and synthesis or not, it is important that the review documents in an explicit and transparent manner the steps and approach that were used in the process of its development. Next, validity characterizes the degree to which the review process was conducted appropriately. It goes beyond documentation and reflects decisions related to the selection of the sources, the search terms used, the period of time covered, the articles selected in the search, and the application of backward and forward searches ( vom Brocke et al., 2009 ). In short, the rigour of any review article is reflected by the explicitness of its methods (i.e., transparency) and the soundness of the approach used. We refer those interested in the concepts of rigour and quality to the work of Templier and Paré (2015) which offers a detailed set of methodological guidelines for conducting and evaluating various types of review articles.

To conclude, our main objective in this chapter was to demystify the various types of literature reviews that are central to the continuous development of the eHealth field. It is our hope that our descriptive account will serve as a valuable source for those conducting, evaluating or using reviews in this important and growing domain.

  • Ammenwerth E., de Keizer N. An inventory of evaluation studies of information technology in health care. Trends in evaluation research, 1982-2002. International Journal of Medical Informatics. 2004; 44 (1):44–56. [ PubMed : 15778794 ]
  • Anderson S., Allen P., Peckham S., Goodwin N. Asking the right questions: scoping studies in the commissioning of research on the organisation and delivery of health services. Health Research Policy and Systems. 2008; 6 (7):1–12. [ PMC free article : PMC2500008 ] [ PubMed : 18613961 ] [ CrossRef ]
  • Archer N., Fevrier-Thomas U., Lokker C., McKibbon K. A., Straus S.E. Personal health records: a scoping review. Journal of American Medical Informatics Association. 2011; 18 (4):515–522. [ PMC free article : PMC3128401 ] [ PubMed : 21672914 ]
  • Arksey H., O’Malley L. Scoping studies: towards a methodological framework. International Journal of Social Research Methodology. 2005; 8 (1):19–32.
  • Bandara W., Miskon S., Fielt E. A systematic, tool-supported method for conducting literature reviews in information systems. Paper presented at the Proceedings of the 19th European Conference on Information Systems ( ECIS 2011); June 9 to 11; Helsinki, Finland. 2011.
  • Baumeister R. F., Leary M.R. Writing narrative literature reviews. Review of General Psychology. 1997; 1 (3):311–320.
  • Becker L. A., Oxman A.D. In: Cochrane handbook for systematic reviews of interventions. Higgins J. P. T., Green S., editors. Hoboken, NJ: John Wiley & Sons, Ltd; 2008. Overviews of reviews; pp. 607–631.
  • Borenstein M., Hedges L., Higgins J., Rothstein H. Introduction to meta-analysis. Hoboken, NJ: John Wiley & Sons Inc; 2009.
  • Cook D. J., Mulrow C. D., Haynes B. Systematic reviews: Synthesis of best evidence for clinical decisions. Annals of Internal Medicine. 1997; 126 (5):376–380. [ PubMed : 9054282 ]
  • Cooper H., Hedges L.V. In: The handbook of research synthesis and meta-analysis. 2nd ed. Cooper H., Hedges L. V., Valentine J. C., editors. New York: Russell Sage Foundation; 2009. Research synthesis as a scientific process; pp. 3–17.
  • Cooper H. M. Organizing knowledge syntheses: A taxonomy of literature reviews. Knowledge in Society. 1988; 1 (1):104–126.
  • Cronin P., Ryan F., Coughlan M. Undertaking a literature review: a step-by-step approach. British Journal of Nursing. 2008; 17 (1):38–43. [ PubMed : 18399395 ]
  • Darlow S., Wen K.Y. Development testing of mobile health interventions for cancer patient self-management: A review. Health Informatics Journal. 2015 (online before print). [ PubMed : 25916831 ] [ CrossRef ]
  • Daudt H. M., van Mossel C., Scott S.J. Enhancing the scoping study methodology: a large, inter-professional team’s experience with Arksey and O’Malley’s framework. BMC Medical Research Methodology. 2013; 13 :48. [ PMC free article : PMC3614526 ] [ PubMed : 23522333 ] [ CrossRef ]
  • Davies P. The relevance of systematic reviews to educational policy and practice. Oxford Review of Education. 2000; 26 (3-4):365–378.
  • Deeks J. J., Higgins J. P. T., Altman D.G. In: Cochrane handbook for systematic reviews of interventions. Higgins J. P. T., Green S., editors. Hoboken, NJ: John Wiley & Sons, Ltd; 2008. Analysing data and undertaking meta-analyses; pp. 243–296.
  • Deshazo J. P., Lavallie D. L., Wolf F.M. Publication trends in the medical informatics literature: 20 years of “Medical Informatics” in MeSH. BMC Medical Informatics and Decision Making. 2009; 9 :7. [ PMC free article : PMC2652453 ] [ PubMed : 19159472 ] [ CrossRef ]
  • Dixon-Woods M., Agarwal S., Jones D., Young B., Sutton A. Synthesising qualitative and quantitative evidence: a review of possible methods. Journal of Health Services Research and Policy. 2005; 10 (1):45–53. [ PubMed : 15667704 ]
  • Finfgeld-Connett D., Johnson E.D. Literature search strategies for conducting knowledge-building and theory-generating qualitative systematic reviews. Journal of Advanced Nursing. 2013; 69 (1):194–204. [ PMC free article : PMC3424349 ] [ PubMed : 22591030 ]
  • Grady B., Myers K. M., Nelson E. L., Belz N., Bennett L., Carnahan L. … Guidelines Working Group. Evidence-based practice for telemental health. Telemedicine Journal and E Health. 2011; 17 (2):131–148. [ PubMed : 21385026 ]
  • Green B. N., Johnson C. D., Adams A. Writing narrative literature reviews for peer-reviewed journals: secrets of the trade. Journal of Chiropractic Medicine. 2006; 5 (3):101–117. [ PMC free article : PMC2647067 ] [ PubMed : 19674681 ]
  • Greenhalgh T., Wong G., Westhorp G., Pawson R. Protocol–realist and meta-narrative evidence synthesis: evolving standards ( RAMESES ). BMC Medical Research Methodology. 2011; 11 :115. [ PMC free article : PMC3173389 ] [ PubMed : 21843376 ]
  • Gurol-Urganci I., de Jongh T., Vodopivec-Jamsek V., Atun R., Car J. Mobile phone messaging reminders for attendance at healthcare appointments. Cochrane Database of Systematic Reviews. 2013; (12): CD007458. [ PMC free article : PMC6485985 ] [ PubMed : 24310741 ] [ CrossRef ]
  • Hart C. Doing a literature review: Releasing the social science research imagination. London: SAGE Publications; 1998.
  • Higgins J. P. T., Green S., editors. Cochrane handbook for systematic reviews of interventions: Cochrane book series. Hoboken, NJ: Wiley-Blackwell; 2008.
  • Jesson J., Matheson L., Lacey F.M. Doing your literature review: traditional and systematic techniques. Los Angeles & London: SAGE Publications; 2011.
  • King W. R., He J. Understanding the role and methods of meta-analysis in IS research. Communications of the Association for Information Systems. 2005; 16 :1.
  • Kirkevold M. Integrative nursing research — an important strategy to further the development of nursing science and nursing practice. Journal of Advanced Nursing. 1997; 25 (5):977–984. [ PubMed : 9147203 ]
  • Kitchenham B., Charters S. EBSE Technical Report Version 2.3. Keele & Durham, UK: Keele University & University of Durham; 2007. Guidelines for performing systematic literature reviews in software engineering.
  • Kitsiou S., Paré G., Jaana M. Systematic reviews and meta-analyses of home telemonitoring interventions for patients with chronic diseases: a critical assessment of their methodological quality. Journal of Medical Internet Research. 2013; 15 (7):e150. [ PMC free article : PMC3785977 ] [ PubMed : 23880072 ]
  • Kitsiou S., Paré G., Jaana M. Effects of home telemonitoring interventions on patients with chronic heart failure: an overview of systematic reviews. Journal of Medical Internet Research. 2015; 17 (3):e63. [ PMC free article : PMC4376138 ] [ PubMed : 25768664 ]
  • Levac D., Colquhoun H., O’Brien K. K. Scoping studies: advancing the methodology. Implementation Science. 2010; 5 (1):69. [ PMC free article : PMC2954944 ] [ PubMed : 20854677 ]
  • Levy Y., Ellis T.J. A systems approach to conduct an effective literature review in support of information systems research. Informing Science. 2006; 9 :181–211.
  • Liberati A., Altman D. G., Tetzlaff J., Mulrow C., Gøtzsche P. C., Ioannidis J. P. A. et al. Moher D. The prisma statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: Explanation and elaboration. Annals of Internal Medicine. 2009; 151 (4):W-65. [ PubMed : 19622512 ]
  • Lyden J. R., Zickmund S. L., Bhargava T. D., Bryce C. L., Conroy M. B., Fischer G. S. et al. McTigue K. M. Implementing health information technology in a patient-centered manner: Patient experiences with an online evidence-based lifestyle intervention. Journal for Healthcare Quality. 2013; 35 (5):47–57. [ PubMed : 24004039 ]
  • Mickan S., Atherton H., Roberts N. W., Heneghan C., Tilson J.K. Use of handheld computers in clinical practice: a systematic review. bmc Medical Informatics and Decision Making. 2014; 14 :56. [ PMC free article : PMC4099138 ] [ PubMed : 24998515 ]
  • Moher D. The problem of duplicate systematic reviews. British Medical Journal. 2013; 347 (5040) [ PubMed : 23945367 ] [ CrossRef ]
  • Montori V. M., Wilczynski N. L., Morgan D., Haynes R. B., Hedges T. Systematic reviews: a cross-sectional study of location and citation counts. bmc Medicine. 2003; 1 :2. [ PMC free article : PMC281591 ] [ PubMed : 14633274 ]
  • Mulrow C. D. The medical review article: state of the science. Annals of Internal Medicine. 1987; 106 (3):485–488. [ PubMed : 3813259 ] [ CrossRef ]
  • Evidence-based information systems: A decade later. Proceedings of the European Conference on Information Systems ; 2011. Retrieved from http://aisel ​.aisnet.org/cgi/viewcontent ​.cgi?article ​=1221&context ​=ecis2011 .
  • Okoli C., Schabram K. A guide to conducting a systematic literature review of information systems research. ssrn Electronic Journal. 2010
  • Otte-Trojel T., de Bont A., Rundall T. G., van de Klundert J. How outcomes are achieved through patient portals: a realist review. Journal of American Medical Informatics Association. 2014; 21 (4):751–757. [ PMC free article : PMC4078283 ] [ PubMed : 24503882 ]
  • Paré G., Trudel M.-C., Jaana M., Kitsiou S. Synthesizing information systems knowledge: A typology of literature reviews. Information & Management. 2015; 52 (2):183–199.
  • Patsopoulos N. A., Analatos A. A., Ioannidis J.P. A. Relative citation impact of various study designs in the health sciences. Journal of the American Medical Association. 2005; 293 (19):2362–2366. [ PubMed : 15900006 ]
  • Paul M. M., Greene C. M., Newton-Dame R., Thorpe L. E., Perlman S. E., McVeigh K. H., Gourevitch M.N. The state of population health surveillance using electronic health records: A narrative review. Population Health Management. 2015; 18 (3):209–216. [ PubMed : 25608033 ]
  • Pawson R. Evidence-based policy: a realist perspective. London: SAGE Publications; 2006.
  • Pawson R., Greenhalgh T., Harvey G., Walshe K. Realist review—a new method of systematic review designed for complex policy interventions. Journal of Health Services Research & Policy. 2005; 10 (Suppl 1):21–34. [ PubMed : 16053581 ]
  • Petersen K., Vakkalanka S., Kuzniarz L. Guidelines for conducting systematic mapping studies in software engineering: An update. Information and Software Technology. 2015; 64 :1–18.
  • Petticrew M., Roberts H. Systematic reviews in the social sciences: A practical guide. Malden, ma : Blackwell Publishing Co; 2006.
  • Rousseau D. M., Manning J., Denyer D. Evidence in management and organizational science: Assembling the field’s full weight of scientific knowledge through syntheses. The Academy of Management Annals. 2008; 2 (1):475–515.
  • Rowe F. What literature review is not: diversity, boundaries and recommendations. European Journal of Information Systems. 2014; 23 (3):241–255.
  • Shea B. J., Hamel C., Wells G. A., Bouter L. M., Kristjansson E., Grimshaw J. et al. Boers M. amstar is a reliable and valid measurement tool to assess the methodological quality of systematic reviews. Journal of Clinical Epidemiology. 2009; 62 (10):1013–1020. [ PubMed : 19230606 ]
  • Shepperd S., Lewin S., Straus S., Clarke M., Eccles M. P., Fitzpatrick R. et al. Sheikh A. Can we systematically review studies that evaluate complex interventions? PLoS Medicine. 2009; 6 (8):e1000086. [ PMC free article : PMC2717209 ] [ PubMed : 19668360 ]
  • Silva B. M., Rodrigues J. J., de la Torre Díez I., López-Coronado M., Saleem K. Mobile-health: A review of current state in 2015. Journal of Biomedical Informatics. 2015; 56 :265–272. [ PubMed : 26071682 ]
  • Smith V., Devane D., Begley C., Clarke M. Methodology in conducting a systematic review of systematic reviews of healthcare interventions. bmc Medical Research Methodology. 2011; 11 (1):15. [ PMC free article : PMC3039637 ] [ PubMed : 21291558 ]
  • Sylvester A., Tate M., Johnstone D. Beyond synthesis: re-presenting heterogeneous research literature. Behaviour & Information Technology. 2013; 32 (12):1199–1215.
  • Templier M., Paré G. A framework for guiding and evaluating literature reviews. Communications of the Association for Information Systems. 2015; 37 (6):112–137.
  • Thomas J., Harden A. Methods for the thematic synthesis of qualitative research in systematic reviews. bmc Medical Research Methodology. 2008; 8 (1):45. [ PMC free article : PMC2478656 ] [ PubMed : 18616818 ]
  • Reconstructing the giant: on the importance of rigour in documenting the literature search process. Paper presented at the Proceedings of the 17th European Conference on Information Systems ( ecis 2009); Verona, Italy. 2009.
  • Webster J., Watson R.T. Analyzing the past to prepare for the future: Writing a literature review. Management Information Systems Quarterly. 2002; 26 (2):11.
  • Whitlock E. P., Lin J. S., Chou R., Shekelle P., Robinson K.A. Using existing systematic reviews in complex systematic reviews. Annals of Internal Medicine. 2008; 148 (10):776–782. [ PubMed : 18490690 ]