Appinio Research · 01.02.2024
Are you looking to harness the power of data and uncover meaningful insights from a multitude of research studies? In a world overflowing with information, meta-analysis emerges as a guiding light, offering a systematic and quantitative approach to distilling knowledge from a sea of research.
This guide will demystify the art and science of meta-analysis, walking you through the process, from defining your research question to interpreting the results. Whether you're an academic researcher, a policymaker, or a curious mind eager to explore the depths of data, this guide will equip you with the tools and understanding needed to undertake robust and impactful meta-analyses.
Meta-analysis is a quantitative research method that involves the systematic synthesis and statistical analysis of data from multiple individual studies on a particular topic or research question. It aims to provide a comprehensive and robust summary of existing evidence by pooling the results of these studies, often leading to more precise and generalizable conclusions.
The primary purpose of meta-analysis is to:
Meta-analysis plays a crucial role in scientific research and evidence-based decision-making. Here are key reasons why meta-analysis is highly valuable:
Meta-analysis can address a wide range of research questions across various disciplines. Some common types of research questions that meta-analysis can tackle include:
Meta-analysis is a versatile tool that can provide valuable insights into a wide array of research questions, making it an indispensable method in evidence synthesis and knowledge advancement.
In evidence synthesis and research aggregation, meta-analysis and systematic reviews are two commonly used methods, each serving distinct purposes while sharing some similarities. Let's explore the differences and similarities between these two approaches.
While meta-analysis and systematic reviews share the overarching goal of synthesizing research evidence, they differ in their approach and main outcomes. Meta-analysis is quantitative, focusing on effect sizes, while systematic reviews provide comprehensive overviews, utilizing both quantitative and qualitative data to summarize the literature. Depending on the research question and available data, one or both of these methods may be employed to provide valuable insights for evidence-based decision-making.
Planning a meta-analysis is a critical phase that lays the groundwork for a successful and meaningful study. We will explore each component of the planning process in more detail, ensuring you have a solid foundation before diving into data analysis.
Your research questions are the guiding compass of your meta-analysis. They should be precise and tailored to the topic you're investigating. To craft effective research questions:
For example, if you're studying the impact of a specific intervention on patient outcomes, your research question might be: "What is the effect of Intervention X on Patient Outcome Y in published clinical trials?"
Eligibility criteria define the boundaries of your meta-analysis. By establishing clear criteria, you ensure that the studies you include are relevant and contribute to your research objectives. Key considerations for eligibility criteria include:
Your eligibility criteria should strike a balance between inclusivity and relevance. Excluding certain studies based on valid criteria ensures the quality and relevance of the data you analyze.
A robust search strategy is fundamental to identifying all relevant studies. To create an effective search strategy:
Remember that the goal is to cast a wide net while maintaining precision to capture all relevant studies.
Data extraction is the process of systematically collecting information from each selected study. It involves retrieving key data points, including:
Creating a standardized data extraction form is essential to ensure consistency and accuracy throughout this phase. Spreadsheet software, such as Microsoft Excel, is commonly used for data extraction.
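If you prefer to script the extraction form rather than build it by hand, a spreadsheet-compatible template can also be generated programmatically. The sketch below is a minimal Python illustration; the field names are hypothetical examples rather than a prescribed standard, so adapt them to your own protocol.

```python
import csv

# Hypothetical extraction fields - adjust to match your own protocol and coding manual.
FIELDS = ["study_id", "authors", "year", "design", "sample_size",
          "intervention", "comparator", "outcome", "effect_size", "ci_lower", "ci_upper"]

# One illustrative (made-up) record; in practice each included study gets one row.
rows = [
    {"study_id": "S01", "authors": "Smith et al.", "year": 2019, "design": "RCT",
     "sample_size": 120, "intervention": "Intervention X", "comparator": "Placebo",
     "outcome": "Outcome Y", "effect_size": 0.32, "ci_lower": 0.10, "ci_upper": 0.54},
]

# Write a CSV that can be opened in Excel or any other spreadsheet tool.
with open("extraction_form.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
```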
Assessing the quality of included studies is crucial to determine their reliability and potential impact on your meta-analysis. Various quality assessment tools and checklists are available, depending on the study design. Some commonly used tools include:
Quality assessment typically involves evaluating aspects such as study design, sample size, data collection methods, and potential biases. This step helps you weigh the contribution of each study to the overall analysis.
Conducting a thorough literature review is a critical step in the meta-analysis process. We will explore the essential components of a literature review, from designing a comprehensive search strategy to establishing clear inclusion and exclusion criteria and, finally, the study selection process.
To ensure the success of your meta-analysis, it's imperative to cast a wide net when searching for relevant studies. A comprehensive search strategy involves:
Remember that the goal is to leave no relevant stone unturned, as missing key studies can introduce bias into your meta-analysis.
Clearly defined inclusion and exclusion criteria are the gatekeepers of your meta-analysis. These criteria ensure that the studies you include meet your research objectives and maintain the quality of your analysis. Consider the following factors when establishing criteria:
Your inclusion and exclusion criteria should strike a balance between inclusivity and relevance. Rigorous criteria help maintain the quality and applicability of the studies included in your meta-analysis.
The study selection process involves systematically screening and evaluating each potential study to determine whether it meets your predefined inclusion criteria. Here's a step-by-step guide:
Maintaining a clear and organized record of your study selection process is essential for transparency and reproducibility. Software tools like EndNote or Covidence can facilitate the screening and data extraction process.
By following these systematic steps in conducting a literature review, you ensure that your meta-analysis is built on a solid foundation of relevant and high-quality studies.
As you progress in your meta-analysis journey, the data extraction and management phase becomes paramount. We will delve deeper into the critical aspects of this phase, including the data collection process, data coding and transformation, and how to handle missing data effectively.
The data collection process is the heart of your meta-analysis, where you systematically extract essential information from each selected study. To ensure accuracy and consistency:
To optimize your data collection process and streamline the extraction and management of crucial information, consider leveraging innovative solutions like Appinio . With Appinio, you can effortlessly collect real-time consumer insights, ensuring your meta-analysis benefits from the latest data trends and user perspectives. Ready to learn more? Book a demo today and unlock a world of data-driven possibilities!
After data collection, you may need to code and transform the extracted data to ensure uniformity and compatibility across studies. This process involves:
The goal of data coding and transformation is to make sure that data from different studies are compatible and can be effectively synthesized during the analysis phase. Spreadsheet software like Excel or statistical software like R can be used for these tasks.
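As a concrete illustration of one common transformation step, the sketch below converts a reported odds ratio and its 95% confidence interval into a log odds ratio and standard error, so that studies reported on different scales can be pooled on a common (log) scale. It is a minimal Python example that assumes a standard 95% interval; the numbers are hypothetical.

```python
import math

def log_or_and_se(or_value, ci_lower, ci_upper):
    """Convert a reported odds ratio and 95% CI to a log odds ratio and its SE."""
    log_or = math.log(or_value)
    # The SE is recovered from the CI width on the log scale (a 95% CI spans +/- 1.96 SE).
    se = (math.log(ci_upper) - math.log(ci_lower)) / (2 * 1.96)
    return log_or, se

# Hypothetical study reporting OR = 1.45 (95% CI 1.10 to 1.91)
print(log_or_and_se(1.45, 1.10, 1.91))
```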
Missing data is a common challenge in meta-analysis, and how you handle it can impact the validity and precision of your results. Strategies for handling missing data include:
Remember that transparency in reporting how you handled missing data is crucial for the credibility of your meta-analysis.
By following these steps in data extraction and management, you will ensure the integrity and reliability of your meta-analysis dataset.
Meta-analysis is a versatile research method that can be applied to various fields and disciplines, providing valuable insights by synthesizing existing evidence.
Background: A market research agency is tasked with assessing the effectiveness of advertising campaigns on sales outcomes for a range of consumer products. They have access to multiple studies and reports conducted by different companies, each analyzing the impact of advertising on sales revenue.
Meta-Analysis Approach:
Findings: Through meta-analysis, the market research agency discovers that advertising campaigns have a statistically significant and positive impact on sales across various product categories. The findings provide evidence for the effectiveness of advertising efforts and assist companies in making data-driven decisions regarding their marketing strategies.
These examples illustrate how meta-analysis can be applied in diverse domains, from tech startups seeking to optimize user engagement to market research agencies evaluating the impact of advertising campaigns. By systematically synthesizing existing evidence, meta-analysis empowers decision-makers with valuable insights for informed choices and evidence-based strategies.
Ensuring the quality and reliability of the studies included in your meta-analysis is essential for drawing accurate conclusions. We'll show you how you can assess study quality using specific tools, evaluate potential bias, and address publication bias.
Quality assessment tools provide structured frameworks for evaluating the methodological rigor of each included study. The choice of tool depends on the study design. Here are some commonly used quality assessment tools:
Evaluating potential sources of bias is crucial to understanding the limitations of the included studies. Common sources of bias include:
To assess bias, reviewers often use the quality assessment tools mentioned earlier, which include domains related to bias, or they may specifically address bias concerns in the narrative synthesis.
We'll move on to the core of meta-analysis: data synthesis. We'll explore different effect size measures, fixed-effect versus random-effects models, and techniques for assessing and addressing heterogeneity among studies.
Now that you've gathered data from multiple studies and assessed their quality, it's time to synthesize this information effectively.
Effect size measures quantify the magnitude of the relationship or difference you're investigating in your meta-analysis. The choice of effect size measure depends on your research question and the type of data provided by the included studies. Here are some commonly used effect size measures:
Selecting the appropriate effect size measure depends on the nature of your data and the research question. When effect sizes are not directly reported in the studies, you may need to calculate them using available data, such as means, standard deviations, and sample sizes.
Formula for Cohen's d:
d = (Mean of Group A - Mean of Group B) / Pooled Standard Deviation
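The same calculation can be scripted when many studies need effect sizes computed from reported means, standard deviations, and sample sizes. The following is a minimal Python sketch of Cohen's d with a pooled standard deviation; the group summary figures are hypothetical.

```python
import math

def cohens_d(mean_a, sd_a, n_a, mean_b, sd_b, n_b):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n_a - 1) * sd_a ** 2 + (n_b - 1) * sd_b ** 2) / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

# Hypothetical summary statistics for one study's intervention and control arms
print(round(cohens_d(mean_a=24.1, sd_a=5.2, n_a=40, mean_b=21.3, sd_b=4.8, n_b=42), 3))
```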
In meta-analysis, you can choose between fixed-effect and random-effects models to combine the results of individual studies:
The choice between these models should be guided by the degree of heterogeneity observed among the included studies. If heterogeneity is significant, the random-effects model is often preferred, as it provides a more robust estimate of the overall effect.
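For readers who want to see the mechanics, the sketch below computes both a fixed-effect (inverse-variance) pooled estimate and a random-effects estimate using the DerSimonian-Laird estimate of between-study variance. It is a simplified illustration with hypothetical study-level effects and variances, not a substitute for a dedicated meta-analysis package.

```python
import numpy as np

def pool_effects(effects, variances):
    """Fixed-effect and DerSimonian-Laird random-effects pooled estimates."""
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances                                # inverse-variance weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)             # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                      # between-study variance estimate
    w_re = 1.0 / (variances + tau2)                    # random-effects weights
    random_eff = np.sum(w_re * effects) / np.sum(w_re)
    return fixed, random_eff, tau2

# Hypothetical study-level effects (e.g. log odds ratios) and their variances
print(pool_effects([0.20, 0.35, -0.05, 0.40], [0.04, 0.02, 0.09, 0.03]))
```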
Forest plots are graphical representations commonly used in meta-analysis to display the results of individual studies along with the combined summary estimate. Key components of a forest plot include:
Forest plots help visualize the distribution of effect sizes across studies and provide insights into the consistency and direction of the findings.
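A basic forest plot can be drawn with general-purpose plotting tools. The matplotlib sketch below shows the typical layout: one row per study with its point estimate and 95% confidence interval, a vertical line of no effect, and a pooled row at the bottom. Study names and values are hypothetical.

```python
import matplotlib.pyplot as plt

# Hypothetical per-study estimates, 95% confidence limits, and a pooled row
studies = ["Study A", "Study B", "Study C", "Pooled"]
effects = [0.20, 0.35, -0.05, 0.21]
ci_low = [-0.05, 0.15, -0.45, 0.05]
ci_high = [0.45, 0.55, 0.35, 0.37]

y = list(range(len(studies)))[::-1]                    # plot studies top to bottom
lower_err = [e - lo for e, lo in zip(effects, ci_low)]
upper_err = [hi - e for e, hi in zip(effects, ci_high)]

fig, ax = plt.subplots()
ax.errorbar(effects, y, xerr=[lower_err, upper_err], fmt="s", color="black", capsize=3)
ax.axvline(0, linestyle="--", color="grey")            # line of no effect
ax.set_yticks(y)
ax.set_yticklabels(studies)
ax.set_xlabel("Effect size (95% CI)")
plt.tight_layout()
plt.show()
```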
Heterogeneity refers to the variability in effect sizes among the included studies. It's important to assess and understand heterogeneity as it can impact the interpretation of your meta-analysis results. Standard methods for assessing heterogeneity include:
Assessing heterogeneity is crucial because it informs your choice of meta-analysis model (fixed-effect vs. random-effects) and whether subgroup analyses or sensitivity analyses are warranted to explore potential sources of heterogeneity.
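To make the standard heterogeneity statistics concrete, the following minimal Python sketch computes Cochran's Q and the I² percentage from study effect sizes and variances using inverse-variance weights; the inputs are hypothetical.

```python
import numpy as np

def heterogeneity(effects, variances):
    """Cochran's Q and I-squared (as a percentage) from inverse-variance weights."""
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances
    pooled = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - pooled) ** 2)            # Cochran's Q
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Hypothetical study effects and variances
q, i2 = heterogeneity([0.20, 0.35, -0.05, 0.40], [0.04, 0.02, 0.09, 0.03])
print(f"Q = {q:.2f}, I^2 = {i2:.1f}%")
```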
With the data synthesis complete, it's time to make sense of the results of your meta-analysis.
The meta-analytic summary is the culmination of your efforts in data synthesis. It provides a consolidated estimate of the effect size and its confidence interval, combining the results of all included studies. To interpret the meta-analytic summary effectively:
Subgroup analyses allow you to explore whether the effect size varies across different subgroups of studies or participants. This can help identify potential sources of heterogeneity or assess whether the intervention's effect differs based on specific characteristics. Steps for conducting subgroup analyses:
Subgroup analyses can provide valuable insights into the factors influencing the overall effect size and help tailor recommendations for specific populations or conditions.
Sensitivity analyses are conducted to assess the robustness of your meta-analysis results by exploring how different choices or assumptions might affect the findings. Common sensitivity analyses include:
Sensitivity analyses help assess the robustness and reliability of your meta-analysis results, providing a more comprehensive understanding of the potential influence of various factors.
The final stages of your meta-analysis involve preparing your findings for publication.
When preparing your meta-analysis manuscript, consider the following:
Your meta-analysis should follow established reporting guidelines. Some widely recognized guidelines include:
Adhering to these guidelines ensures that your meta-analysis is transparent, reproducible, and credible. It enhances the quality of your research and aids readers and reviewers in assessing the rigor of your study.
The PRISMA statement is a valuable resource for conducting and reporting systematic reviews and meta-analyses. Key elements of PRISMA include:
By adhering to the PRISMA statement, you enhance the transparency and credibility of your meta-analysis, facilitating its acceptance for publication and aiding readers in evaluating the quality of your research.
Meta-analysis is a powerful tool that allows you to combine and analyze data from multiple studies to find meaningful patterns and make informed decisions. It helps you see the bigger picture and draw more accurate conclusions than individual studies alone. Whether you're in healthcare, education, business, or any other field, the principles of meta-analysis can be applied to enhance your research and decision-making processes. Remember that conducting a successful meta-analysis requires careful planning, attention to detail, and transparency in reporting. By following the steps outlined in this guide, you can embark on your own meta-analysis journey with confidence, contributing to the advancement of knowledge and evidence-based practices in your area of interest.
Introducing Appinio , the real-time market research platform that brings a new level of excitement to your meta-analysis journey. With Appinio, you can seamlessly collect your own market research data in minutes, empowering your meta-analysis with fresh, real-time consumer insights.
Here's why Appinio is your ideal partner for efficient data collection:
Systematic reviews for health: 1. Formulate the research question
A systematic review is based on a pre-defined specific research question ( Cochrane Handbook, 1.1 ). The first step in a systematic review is to determine its focus - you should clearly frame the question(s) the review seeks to answer ( Cochrane Handbook, 2.1 ). It may take you a while to develop a good review question - it is an important step in your review. Well-formulated questions will guide many aspects of the review process, including determining eligibility criteria, searching for studies, collecting data from included studies, and presenting findings ( Cochrane Handbook, 2.1 ).
The research question should be clear and focused - not too vague, too specific or too broad.
You may like to consider some of the techniques mentioned below to help you with this process. They can be useful but are not necessary for a good search strategy.
| P | I | C | O |
|---|---|---|---|
| Most important characteristics of patient (e.g. age, disease/condition, gender) | Main intervention (e.g. drug treatment, diagnostic/screening test) | Main alternative, if appropriate (e.g. placebo, standard therapy, no treatment, gold standard) | What you are trying to accomplish, measure, improve, affect (e.g. reduced mortality or morbidity, improved memory) |
Richardson, WS, Wilson, MC, Nishikawa, J & Hayward, RS 1995, 'The well-built clinical question: A key to evidence-based decisions', ACP Journal Club , vol. 123, no. 3, pp. A12-A12 .
A variant of PICO is PICOS . S stands for Study designs . It establishes which study designs are appropriate for answering the question, e.g. randomised controlled trial (RCT). There are also PICO C (C for context) and PICO T (T for timeframe).
| S | PI | D | E | R |
|---|---|---|---|---|
| Sample | Phenomenon of Interest | Design | Evaluation | Research type |
Cooke, A, Smith, D & Booth, A 2012, 'Beyond pico the spider tool for qualitative evidence synthesis', Qualitative Health Research , vol. 22, no. 10, pp. 1435-1443.
| S | P | I | C | E |
|---|---|---|---|---|
| Setting (where?) | Perspective (for whom?) | Intervention (what?) | Comparison (compared with what?) | Evaluation (with what result?) |
Cleyle, S & Booth, A 2006, 'Clear and present questions: Formulating questions for evidence based practice', Library hi tech , vol. 24, no. 3, pp. 355-368.
| E | C | L | I | P | Se |
|---|---|---|---|---|---|
| Expectation (improvement or information or innovation) | Client group (at whom the service is aimed) | Location (where is the service located?) | Impact (outcomes) | Professionals (who is involved in providing/improving the service) | Service (for which service are you looking for information) |
Wildridge, V & Bell, L 2002, 'How clip became eclipse: A mnemonic to assist in searching for health policy/management information', Health Information & Libraries Journal , vol. 19, no. 2, pp. 113-115.
There are many more techniques available; the CQUniversity Library guide provides an extensive list.
This is the specific research question used in the example:
"Is animal-assisted therapy more effective than music therapy in managing aggressive behaviour in elderly people with dementia?"
Within this question are the four PICO concepts :
| P | elderly patients with dementia |
|---|---|
| I | animal-assisted therapy |
| C | music therapy |
| O | aggressive behaviour |
S - Study design
This is a therapy question. The best study design to answer a therapy question is a randomised controlled trial (RCT). You may decide to include only studies in the systematic review that used an RCT; see Step 8.
Reviewed by Psychology Today Staff
Meta-analysis is an objective examination of published data from many studies of the same research topic identified through a literature search. Through the use of rigorous statistical methods, it can reveal patterns hidden in individual studies and can yield conclusions that have a high degree of reliability. It is a method of analysis that is especially useful for gaining an understanding of complex phenomena when independent studies have produced conflicting findings.
Meta-analysis provides much of the underpinning for evidence-based medicine. It is particularly helpful in identifying risk factors for a disorder, diagnostic criteria, and the effects of treatments on specific populations of people, as well as quantifying the size of the effects. Meta-analysis is well-suited to understanding the complexities of human behavior.
There are well-established scientific criteria for selecting studies for meta-analysis. Usually, meta-analysis is conducted on the gold standard of scientific research—randomized, controlled, double-blind trials. In addition, published guidelines not only describe standards for the inclusion of studies to be analyzed but also rank the quality of different types of studies. For example, cohort studies are likely to provide more reliable information than case reports.
Through statistical methods applied to the original data collected in the included studies, meta-analysis can account for and overcome many differences in the way the studies were conducted, such as the populations studied, how interventions were administered, and what outcomes were assessed and how. Meta-analyses, and the questions they are attempting to answer, are typically specified and registered with a scientific organization, and, with the protocols and methods openly described and reviewed independently by outside investigators, the research process is highly transparent.
Meta-analysis is often used to validate observed phenomena, determine the conditions under which effects occur, and get enough clarity in clinical decision-making to indicate a course of therapeutic action when individual studies have produced disparate findings. In reviewing the aggregate results of well-controlled studies meeting criteria for inclusion, meta-analysis can also reveal which research questions, test conditions, and research methods yield the most reliable results, not only providing findings of immediate clinical utility but furthering science.
The technique can be used to answer social and behavioral questions large and small. For example, to clarify whether or not having more options makes it harder for people to settle on any one item, a meta-analysis of over 53 conflicting studies on the phenomenon was conducted. The meta-analysis revealed that choice overload exists—but only under certain conditions. You will have difficulty selecting a TV show to watch from the massive array of possibilities, for example, if the shows differ from each other in multiple ways or if you don’t have any strong preferences when you finally get to sit down in front of the TV.
A meta-analysis conducted in 2000, for example, answered the question of whether physically attractive people have “better” personalities. Among other traits, they prove to be more extroverted and have more social skills than others. Another meta-analysis, in 2014, showed strong ties between physical attractiveness as rated by others and having good mental and physical health. The effects on such personality factors as extraversion are too small to reliably show up in individual studies but real enough to be detected in the aggregate number of study participants. Together, the studies validate hypotheses put forth by evolutionary psychologists that physical attractiveness is important in mate selection because it is a reliable cue of health and, likely, fertility.
Xiao‐Meng Wang, Zhi‐Hao Li, Wen‐Fang Zhong

Department of Epidemiology, School of Public Health, Southern Medical University, Guangzhou, Guangdong, China

Associated Data
Data sharing is not applicable to this article because no datasets were generated or analyzed during the current study.
With the explosive growth of medical information, it is almost impossible for healthcare providers to review and evaluate all relevant evidence to make the best clinical decisions. Meta‐analyses, which summarize all existing evidence and quantitatively synthesize individual studies, have become the best available evidence for informing clinical practice. This article introduces the common methods, steps, principles, strengths and limitations of meta‐analyses and aims to help healthcare providers and researchers obtain a basic understanding of meta‐analyses in clinical practice and research.
With the explosive growth of medical information, it has become almost impossible for healthcare providers to review and evaluate all related evidence to inform their decision making. 1 , 2 Furthermore, the inconsistent and often even conflicting conclusions of different studies can confuse these individuals. Systematic reviews, which comprehensively and systematically summarize all relevant empirical evidence, were developed to resolve such situations. 3 Many systematic reviews contain meta‐analyses, which use statistical methods to combine the results of individual studies. 4 Through meta‐analyses, researchers can objectively and quantitatively synthesize results from different studies and increase the statistical strength and precision for estimating effects. 5 In the late 1970s, meta‐analysis began to appear regularly in the medical literature. 6 Subsequently, a plethora of meta‐analyses have emerged, and their growth has been exponential over time. 7 When conducted properly, a meta‐analysis of medical studies is considered decisive evidence because it occupies a top level in the hierarchy of evidence. 8
An understanding of the principles, performance, advantages and weaknesses of meta‐analyses is important. Therefore, we aim to provide a basic understanding of meta‐analyses for clinicians and researchers in the present article by introducing the common methods, principles, steps, strengths and limitations of meta‐analyses.
There are many types of meta‐analysis methods (Table 1 ). In this article, we mainly introduce five meta‐analysis methods commonly used in clinical practice.
Meta‐analysis methods

| Methods | Definitions |
|---|---|
| Aggregate data meta‐analysis | Extracting summary results of studies available in published accounts |
| Individual participant data meta‐analysis | Collecting individual participant‐level data from original studies |
| Cumulative meta‐analysis | Adding studies to a meta‐analysis based on a predetermined order |
| Network meta‐analysis | Combining direct and indirect evidence to compare the effectiveness between different interventions |
| Meta‐analysis of diagnostic test accuracy | Identifying and synthesizing evidence on the accuracy of tests |
| Prospective meta‐analysis | Conducting meta‐analysis for studies that specify research selection criteria, hypotheses and analysis, but for which the results are not yet known |
| Sequential meta‐analysis | Combining the methodology of cumulative meta‐analysis with the technique of formal sequential testing, which can sequentially evaluate the available evidence at consecutive interim steps during the data collection |
| Meta‐analysis of adverse events | Following the basic meta‐analysis principles to analyze the incidences of adverse events of studies |
Although more information can be obtained based on individual participant‐level data from original studies, it is usually impossible to obtain these data from all included studies in meta‐analysis because such data may have been corrupted, or the main investigator may no longer be contacted or refuse to release the data. Therefore, by extracting summary results of studies available in published accounts, an aggregate data meta‐analysis (AD‐MA) is the most commonly used of all the quantitative approaches. 9 A study has found that > 95% of published meta‐analyses were AD‐MA. 10 In addition, AD‐MA is the mainstay of systematic reviews conducted by the US Preventive Services Task Force, the Cochrane Collaboration and many professional societies. 9 Moreover, AD‐MA can be completed relatively quickly at a low cost, and the data are relatively easy to obtain. 11 , 12 However, AD‐MA has very limited control over the data. A challenge with AD‐MA is that the association between an individual participant‐level covariate and the effect of the interventions at the study level may not reflect the individual‐level effect modification of that covariate. 13 It is also difficult to extract sufficient compatible data to undertake meaningful subgroup analyses in AD‐MA. 14 Furthermore, AD‐MA is prone to ecological bias, as well as to confounding from variables not included in the model, and may have limited power. 15
An individual participant data meta‐analysis (IPD‐MA) is considered the “gold standard” for meta‐analysis; this type of analysis collects individual participant‐level data from original studies. 15 Compared with AD‐MA, IPD‐MA has many advantages, including improved data quality, a greater variety of analytical types that can be performed and the ability to obtain more reliable results. 16 , 17
It is crucial to maintain clusters of participants within studies in the statistical implementation of an IPD‐MA. Clusters can be retained during the analysis using a one‐step or two‐step approach. 18 In the one‐step approach, the individual participant data from all studies are modeled simultaneously, at the same time as accounting for the clustering of participants within studies. 19 This approach requires a model specific to the type of data being synthesized and an appropriate account of the meta‐analysis assumptions (e.g. fixed or random effects across studies). Cheng et al. 20 proposed using a one‐step IPD‐MA to handle binary rare events and found that this method was superior to traditional methods of inverse variance, the Mantel–Haenszel method and the Yusuf‐Peto method. In the two‐step approach, the individual participant data from each study are analyzed separately to produce aggregate data for each study (e.g. a mean treatment effect estimate and its standard error) using a statistical method appropriate for the type of data being analyzed (e.g. a linear regression model might be fitted for continuous responses, or a Cox regression might be applied for time‐to‐event data). The aggregate data are then combined to obtain a summary effect in the second step using a suitable model, such as weighting studies by the inverse of the variance. 21 For example, using a two‐step IPD‐MA, Grams et al. 22 found that apolipoprotein‐L1 kidney‐risk variants were not associated with incident cardiovascular disease or death independent of kidney measures.
Compared to the two‐step approach, the one‐step IPD‐MA is recommended for small meta‐analyses 23 and, conveniently, must only specify one model; however, this requires careful distinction of within‐study and between‐study variability. 24 The two‐step IPD‐MA is more laborious, although it allows the use of traditional, well‐known meta‐analysis techniques in the second step, such as those used by the Cochrane Collaboration (e.g. the Mantel–Haenszel method).
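To illustrate the two‐step approach described above, the sketch below analyzes each (simulated) study's participant‐level data with an ordinary least squares regression and then pools the per‐study treatment estimates by inverse‐variance weighting. It assumes a continuous outcome and uses the statsmodels package; the data are simulated for illustration only.

```python
import numpy as np
import statsmodels.api as sm

def study_estimate(outcome, treatment):
    """Step 1: per-study linear regression of the outcome on a treatment indicator."""
    X = sm.add_constant(np.asarray(treatment, dtype=float))
    fit = sm.OLS(np.asarray(outcome, dtype=float), X).fit()
    return fit.params[1], fit.bse[1] ** 2              # treatment effect and its variance

def pool(estimates):
    """Step 2: fixed-effect inverse-variance pooling of the per-study estimates."""
    effects, variances = zip(*estimates)
    w = 1.0 / np.asarray(variances)
    return float(np.sum(w * np.asarray(effects)) / np.sum(w))

rng = np.random.default_rng(0)
per_study = []
for _ in range(3):                                     # three simulated studies
    treat = rng.integers(0, 2, size=80)
    outcome = 0.5 * treat + rng.normal(size=80)        # true treatment effect of 0.5
    per_study.append(study_estimate(outcome, treat))

print(pool(per_study))
```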
Meta‐analyses are traditionally used retrospectively to review existing evidence. However, current evidence often undergoes several updates as new studies become available. Thus, updated data must be continuously obtained to simplify and digest the ever‐expanding literature. Therefore, cumulative meta‐analysis was developed, which adds studies to a meta‐analysis based on a predetermined order and then tracks the magnitude of the mean effect and its variance. 25 A cumulative meta‐analysis can be performed multiple times; not only can it obtain summary results and provide a comparison of the dynamic results, but also it can assess the impact of newly added studies on the overall conclusions. 26 For example, initial observational studies and systematic reviews and meta‐analyses suggested that frozen embryo transfer was better for mothers and babies; however, recent primary studies have begun to challenge these conclusions. 27 Maheshwari et al . 27 therefore conducted a cumulative meta‐analysis to investigate whether these conclusions have remained consistent over time and found that the decreased risks of harmful outcomes associated with pregnancies conceived from frozen embryos have been consistent in terms of direction and magnitude of effect over several years, with an increasing precision around the point estimates. Furthermore, continuously updated cumulative meta‐analyses may avoid unnecessary large‐scale randomized controlled trials (RCTs) and prevent wasted research efforts. 28
Although RCTs can directly compare the effectiveness of interventions, most of them compare the effectiveness of an intervention with a placebo, and there is almost no direct comparison between different interventions. 29 , 30 Network meta‐analyses comprise a relatively recent development that combines direct and indirect evidence to compare the effectiveness between different interventions. 31 Evidence obtained from RCTs is considered as direct evidence, whereas evidence obtained through one or more common comparators is considered as indirect evidence. For example, when comparing interventions A and C, direct evidence refers to the estimate of the relative effects between A and C. When no RCTs have directly compared interventions A and C, these interventions can be compared indirectly if both have been compared with B (placebo or some standard treatments) in other studies (forming an A–B–C “loop” of evidence). 32 , 33
A valid network meta‐analysis can correctly combine the relative effects of more than two studies and obtain a consistent estimate of the relative effectiveness of all interventions in one analysis. 34 This meta‐analysis may lead to a greater accuracy of estimating intervention effectiveness and the ability to compare all available interventions to calculate the rank of different interventions. 34 , 35 For example, phosphodiesterase type 5 inhibitors (PDE5‐Is) are the first‐line therapy for erectile dysfunction, although there are limited available studies on the comparative effects of different types of PDE5‐Is. 36 Using a network meta‐analysis, Yuan et al . 36 calculated the absolute effects and the relative rank of different PDE5‐Is to provide an overview of the effectiveness and safety of all PDE5‐Is.
Notably, a network meta‐analysis should satisfy the transitivity assumption, in which there are no systematic differences between the available comparisons other than the interventions being compared 37 ; in other words, the participants could be randomized to any of the interventions in a hypothetical RCT consisting of all the interventions included in the network meta‐analysis.
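The arithmetic behind an indirect comparison in such an A–B–C loop (often called the Bucher adjustment) is simple: the indirect A‐versus‐C effect is the difference between the A‐versus‐B and C‐versus‐B effects, and their variances add. The minimal sketch below uses hypothetical log odds ratios against a common comparator B.

```python
def indirect_comparison(d_ab, var_ab, d_cb, var_cb):
    """Indirect effect of A vs C via a common comparator B."""
    d_ac = d_ab - d_cb                 # difference of the two direct effects
    var_ac = var_ab + var_cb           # variances add for independent estimates
    return d_ac, var_ac

# Hypothetical log odds ratios for A vs B and C vs B, with their variances
print(indirect_comparison(d_ab=-0.40, var_ab=0.02, d_cb=-0.25, var_cb=0.03))
```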
Sensitivity and specificity are commonly used to assess diagnostic accuracy. However, diagnostic tests in clinical practice are rarely 100% specific or sensitive. 38 It is difficult to obtain accurate estimates of sensitivity and specificity in small diagnostic accuracy studies. 39 , 40 Even in a large sample size study, the number of cases may still be small as a result of the low prevalence. By identifying and synthesizing evidence on the accuracy of tests, the meta‐analysis of diagnostic test accuracy (DTA) provides insight into the ability of medical tests to detect the target diseases 41 ; it also can provide estimates of test performance, allow comparisons of the accuracy of different tests and facilitate the identification of sources of variability. 42 For example, the FilmArray® (Biomerieux, Marcy‐l'Étoile, France) meningitis/encephalitis (ME) panel can detect the most common pathogens in central nervous system infections, although reports of false positives and false negatives are confusing. 43 Based on meta‐analysis of DTA, Tansarli et al . 43 calculated that the sensitivity and specificity of the ME panel were both > 90%, indicating that the ME panel has high diagnostic accuracy.
3.1. Frame a question
Researchers must formulate an appropriate research question at the beginning. A well‐formulated question will guide many aspects of the review process, including determining eligibility criteria, searching for studies, collecting data from included studies, structuring the syntheses and presenting results. 44 There are some tools that may facilitate the construction of research questions, including PICO, as used in clinical practice 45 ; PEO and SPICE, as used for qualitative research questions 46 , 47 ; and SPIDER, as used for mixed‐methods research. 48
It is crucial for researchers to formulate a search strategy in advance that includes inclusion and exclusion criteria, as well as a standardized data extraction form. The definition of inclusion and exclusion criteria depends on established question elements, such as publication dates, research design, population and results. Reasonable inclusion and exclusion criteria will reduce the risk of bias, increase transparency and make the review systematic. Broad criteria may increase the heterogeneity between studies, and narrow criteria may make it difficult to find studies; therefore, a compromise should be found. 49
To minimize bias and reduce hampered interpretation of outcomes, the search strategy should be as comprehensive as possible, employing multiple databases, such as PubMed, Embase, Cochrane Central Registry of Controlled Trials, Scopus, Web of Science and Google Scholar. 50 , 51 Removing language restrictions and actively searching for non‐English bibliographic databases may also help researchers to perform a comprehensive meta‐analysis. 52
The selection or rejection of the included articles should be guided by the criteria. 53 Two independent reviewers may screen the included articles, and any disagreements should be resolved by consensus through discussion. First, the titles and abstracts of all retrieved papers should be read and the inclusion or exclusion criteria applied to determine whether each paper meets them. Then, the full texts of the provisionally included articles should be reviewed and the criteria applied again. Finally, the reference lists of these articles should be searched to widen the search as much as possible. 54
A pre‐formed standardized data extraction form should be used to extract data of included studies. All data should be carefully converted using uniform standards. Simultaneous extraction by multiple researchers might also make the extracted data more accurate.
Checklists and scales are often used to assess the quality of articles. For example, the Cochrane Collaboration's tool 55 is usually used to assess the quality of RCTs, whereas the Newcastle–Ottawa Scale 56 is one of the most common methods to assess the quality of non‐randomized trials. In addition, Quality Assessment of Diagnostic Accuracy Studies 2 57 is often used to evaluate the quality of diagnostic accuracy studies.
Several methods have been proposed to detect and quantify heterogeneity, such as Cochran's Q and I² values. Cochran's Q test is used to determine whether there is heterogeneity in primary studies or whether the variation observed is due to chance, 58 but it may be underpowered because of the inclusion of a small number of studies or low event rates. 59 Therefore, p < 0.10 (not 0.05) indicates the presence of heterogeneity given the low statistical strength and insensitivity of Cochran's Q test. 60 Another common method for testing heterogeneity is the I² value, which describes the percentage of variation across studies that is attributable to heterogeneity rather than chance; this value does not depend on the number of studies. 61 I² values of 25%, 50% and 75% are considered to indicate low, moderate and high heterogeneity, respectively. 60
Fixed effects and random effects models are commonly used to estimate the summary effect in a meta‐analysis. 62 Fixed effects models, which consider the variability of the results as “random variation”, simply weight individual studies by their precision (inverse of the variance). Conversely, random effects models assume a different underlying effect for each study and consider this an additional source of variation that is randomly distributed. A substantial difference in the summary effect calculated by fixed effects models and random effects models will be observed only if the studies are markedly heterogeneous (heterogeneity p < 0.10) and the random effects model typically provides wider confidence intervals than the fixed effect model. 63 , 64
Several methods have been proposed to explore the possible reasons for heterogeneity. According to factors such as ethnicity, the number of studies or clinical features, subgroup analyses can be performed that divide the total data into several groups to assess the impact of a potential source of heterogeneity. Sensitivity analysis is a common approach for examining the sources of heterogeneity on a case‐by‐case basis. 65 In sensitivity analysis, one or more studies are excluded at a time and the impact of removing each or several studies is evaluated on the summary results and the between‐study heterogeneity. Sequential and combinatorial algorithms are usually implemented to evaluate the change in between‐study heterogeneity as one or more studies are excluded from the calculations. 66 Moreover, a meta‐regression model can explain heterogeneity based on study‐level covariates. 67
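A common way to operationalize the case‐by‐case sensitivity analysis mentioned above is a leave‐one‐out analysis: each study is omitted in turn and the pooled estimate recomputed to see how strongly any single study drives the summary result. The minimal Python sketch below does this with a fixed‐effect pool and hypothetical inputs.

```python
import numpy as np

def pooled(effects, variances):
    """Fixed-effect inverse-variance pooled estimate."""
    w = 1.0 / np.asarray(variances, dtype=float)
    return float(np.sum(w * np.asarray(effects, dtype=float)) / np.sum(w))

def leave_one_out(effects, variances):
    """Recompute the pooled estimate with each study omitted in turn."""
    results = []
    for i in range(len(effects)):
        kept_e = [e for j, e in enumerate(effects) if j != i]
        kept_v = [v for j, v in enumerate(variances) if j != i]
        results.append((i, pooled(kept_e, kept_v)))
    return results

# Hypothetical study effects and variances
for omitted, estimate in leave_one_out([0.20, 0.35, -0.05, 0.40], [0.04, 0.02, 0.09, 0.03]):
    print(f"omitting study {omitted}: pooled effect = {estimate:.3f}")
```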
A funnel plot is a scatterplot that is commonly used to assess publication bias. In a funnel plot, the x‐axis indicates the study effect and the y‐axis indicates the study precision, such as the standard error or sample size. 68 , 69 If there is no publication bias, the plot will have a symmetrical inverted funnel; conversely, asymmetry indicates the possibility of publication bias.
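A funnel plot can likewise be drawn with general-purpose plotting tools. The matplotlib sketch below plots hypothetical effect sizes against their standard errors, inverts the y‐axis so more precise studies sit at the top of the funnel, and marks the pooled estimate as a reference line.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical effect sizes and standard errors for a handful of studies
effects = np.array([0.20, 0.35, -0.05, 0.40, 0.15, 0.28])
ses = np.array([0.20, 0.14, 0.30, 0.17, 0.25, 0.12])
pooled = np.sum(effects / ses ** 2) / np.sum(1.0 / ses ** 2)

fig, ax = plt.subplots()
ax.scatter(effects, ses)
ax.axvline(pooled, linestyle="--", color="grey")   # pooled effect as a reference line
ax.invert_yaxis()                                  # more precise studies appear higher
ax.set_xlabel("Effect size")
ax.set_ylabel("Standard error")
plt.tight_layout()
plt.show()
```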
A forest plot is a valid and useful tool for summarizing the results of a meta‐analysis. In a forest plot, the results from each individual study are shown as a blob or square; the confidence interval, usually representing 95% confidence, is shown as a horizontal line that passes through the square; and the summary effect is shown as a diamond. 70
There are five important principles of meta‐analysis performance that should be emphasized. First, the search scope of meta‐analysis should be expanded as much as possible to contain all relevant research, and it is important to remove language restrictions and actively search for non‐English bibliographic databases. Second, any meta‐analysis should include studies selected based on strict criteria established in advance. Third, appropriate tools must be selected to evaluate the quality of evidence according to different types of primary studies. Fourth, the most suitable statistical model should be chosen for the meta‐analysis and a weighted mean estimate of the effect size should be calculated. Finally, the possible causes of heterogeneity should be identified and publication bias in the meta‐analysis must be assessed.
Meta‐analyses have several strengths. First, a major advantage is their ability to improve the precision of effect estimates with considerably increased statistical power, which is particularly important when the power of the primary study is limited as a result of the small sample size. Second, a meta‐analysis has more power to detect small but clinically significant effects and to examine the effectiveness of interventions in demographic or clinical subgroups of participants, which can help researchers identify beneficial (or harmful) effects in specific groups of patients. 71 , 72 Third, meta‐analyses can be used to analyze rare outcomes and outcomes that individual studies were not designed to test (e.g. adverse events). Fourth, meta‐analyses can be used to examine heterogeneity in study results and explore possible sources in case this heterogeneity would lead to bias from “mixing apples and oranges”. 73 Furthermore, meta‐analyses can compare the effectiveness of various interventions, supplement the existing evidence, and then offer a rational and helpful way of addressing a series of practical difficulties that plague healthcare providers and researchers. Lastly, meta‐analyses may resolve disputes caused by apparently conflicting studies, determine whether new studies are necessary for further investigation and generate new hypotheses for future studies. 7 , 74
6.1. Missing related research
The primary limitation of a meta‐analysis is missing related research. Even in the ideal case in which all relevant studies are available, a faulty search strategy can miss some of these studies. Small differences in search strategies can produce large differences in the set of studies found. 75 When searching databases, relevant research can be missed as a result of the omission of keywords. The search engine (e.g. PubMed, Google) may also affect the type and number of studies that are found. 76 Moreover, it may be impossible to identify all relevant evidence if the search scope is limited to one or two databases. 51 , 77 Finally, language restrictions and the failure to search non‐English bibliographic databases may also lead to an incomplete meta‐analysis. 52 Comprehensive search strategies for different databases and languages might help solve this issue.
Publication bias means that positive findings are more likely to be published and then identified through literature searches rather than ambiguous or negative findings. 78 This is an important and key source of bias that is recognized as a potential threat to the validity of results. 79 The real research effect may be exaggerated or even falsely positive if only published articles are included. 80 For example, based on studies registered with the US Food and Drug Administration, Turner et al . 81 reviewed 74 trials of 12 antidepressants to assess publication bias and its influence on apparent efficacy. It was found that antidepressant studies with favorable outcomes were 16 times more likely to be published than those with unfavorable outcomes, and the apparent efficacy of antidepressants increased between 11% and 69% when the non‐published studies were not included in the analysis. 81 Moreover, failing to identify and include non‐English language studies may also increase publication bias. 82 Therefore, all relevant studies should be identified to reduce the impact of publication bias on meta‐analysis.
Because many of the studies identified are not directly related to the subject of the meta‐analysis, it is crucial for researchers to select which studies to include based on defined criteria. Failing to evaluate, select or reject relevant studies based on stricter criteria regarding the study quality may also increase the possibility of selection bias. Missing or inappropriate quality assessment tools may lead to the inclusion of low‐quality studies. If a meta‐analysis includes low‐quality studies, its results will be biased and incorrect, which is also called “garbage in, garbage out”. 83 Strictly defined criteria for included studies and scoring by at least two researchers might help reduce the possibility of selection bias. 84 , 85
The best‐case scenario for meta‐analyses is the availability of individual participant data. However, most individual research reports only contain summary results, such as the mean, standard deviation, proportions, relative risk and odds ratio. In addition to the possibility of reporting errors, the lack of information can severely limit the types of analyses and conclusions that can be achieved in a meta‐analysis. For example, the unavailability of information from individual studies may preclude the comparison of effects in predetermined subgroups of participants. Therefore, if feasible, the researchers could contact the author of the primary study for individual participant data.
Although the studies included in a meta‐analysis have the same research hypothesis, there is still the potential for several areas of heterogeneity. 86 Heterogeneity may exist in various parts of the studies’ design and conduct, including participant selection, interventions/exposures or outcomes studied, data collection, data analyses and selective reporting of results. 87 Although the difference of the results can be overcome by assessing the heterogeneity of the studies and performing subgroup analyses, 88 the results of the meta‐analysis may become meaningless and even may obscure the real effect if the selected studies are too heterogeneous to be comparable. For example, Nicolucci et al . 89 conducted a review of 150 published randomized trials on the treatment of lung cancer. Their review showed serious methodological drawbacks and concluded that heterogeneity made the meta‐analysis of existing trials unlikely to be constructive. 89 Therefore, combining the data in meta‐analysis for studies with large heterogeneity is not recommended.
Funnel plots are appealing because they are a simple technique used to investigate the possibility of publication bias. However, their objective is to detect a complex effect, which can be misleading. For example, the lack of symmetry in a funnel plot can also be caused by heterogeneity. 90 Another problem with funnel plots is the difficulty of interpreting them when few studies are included. Readers may also be misled by the choice of axes or the outcome measure. 91 Therefore, in the absence of a consensus on how the plot should be constructed, asymmetrical funnel plots should be interpreted cautiously. 91
Researchers must make numerous judgments when performing meta‐analyses, 92 which inevitably introduces considerable subjectivity into the meta‐analysis review process. For example, there is often a certain amount of subjectivity when deciding how similar studies should be before it is appropriate to combine them. To minimize subjectivity, at least two researchers should jointly conduct a meta‐analysis and reach a consensus.
The explosion of medical information and differences between individual studies make it almost impossible for healthcare providers to make the best clinical decisions. Meta‐analyses, which summarize all eligible evidence and quantitatively synthesize individual results on a specific clinical question, have become the best available evidence for informing clinical practice and are increasingly important in medical research. This article has described the basic concept, common methods, principles, steps, strengths and limitations of meta‐analyses to help clinicians and investigators better understand meta‐analyses and make clinical decisions based on the best evidence.
CM designed and directed the study. XMW and XRZ had primary responsibility for drafting the manuscript. CM, ZHL, WFZ and PY provided insightful discussions and suggestions. All authors critically reviewed the manuscript for important intellectual content.
The authors declare that they have no conflicts of interest.
This work was supported by the Project Supported by Guangdong Province Universities and Colleges Pearl River Scholar Funded Scheme (2019 to CM) and the Construction of High‐level University of Guangdong (G820332010, G618339167 and G618339164 to CM). The funders played no role in the study design or implementation; manuscript preparation, review or approval; or the decision to submit the manuscript for publication.
Wang X‐M, Zhang X‐R, Li Z‐H, Zhong W‐F, Yang P, Mao C. A brief introduction of meta‐analyses in clinical practice and research. J Gene Med. 2021;23:e3312. doi: 10.1002/jgm.3312
Xiao‐Meng Wang and Xi‐Ru Zhang contributed equally to this work.
BMC Primary Care volume 25, Article number: 309 (2024)
There is a considerable amount of research showing an association between continuity of care and improved health outcomes. However, the methods used in most studies examine only the pattern of interactions between patients and clinicians through administrative measures of continuity. The patient experience of continuity can also be measured by using patient reported experience measures. Unlike administrative measures, these can allow elements of continuity such as the presence of information or how joined up care is between providers to be measured. Patient experienced continuity is a marker of healthcare quality in its own right. However, it is unclear if, like administrative measures, patient reported continuity is also linked to positive health outcomes.
Cohort and interventional studies that examined the relationship between patient reported continuity of care and a health outcome were eligible for inclusion. Medline, EMBASE, CINAHL and the Cochrane Library were searched in April 2021. Citation searching of published continuity measures was also performed. QUIP and Cochrane risk of bias tools were used to assess study quality. A box-score method was used for study synthesis.
Nineteen studies were eligible for inclusion. Fifteen studies measured continuity using a validated, multifactorial questionnaire or the continuity/co-ordination subscale of another instrument. Two studies placed patients into discrete groups of continuity based on pre-defined questions, one used a bespoke questionnaire, and one calculated an administrative measure of continuity using patient reported data. Outcome measures examined were quality of life (n = 11), self-reported health status (n = 8), emergency department use or hospitalisation (n = 7), indicators of function or wellbeing (n = 6), mortality (n = 4) and physiological measures (n = 2). Analysis was limited by the relatively small number of heterogeneous studies. The majority of studies showed a link between at least one measure of continuity and one health outcome.
Whilst there is emerging evidence of a link between patient reported continuity and several outcomes, the evidence is not as strong as that for administrative measures of continuity. This may be because administrative measures record something different to patient reported measures, or that studies using patient reported measures are smaller and less able to detect smaller effects. Future research should use larger sample sizes to clarify if a link does exist and what the potential mechanisms underlying such a link could be. When measuring continuity, researchers and health system administrators should carefully consider what type of continuity measure is most appropriate.
Continuity of primary care is associated with multiple positive outcomes including reduced hospital admissions, lower costs and a reduction in mortality [ 1 , 2 , 3 ]. Providing continuity is often seen as being in tension with providing rapid access to appointments [ 4 ], and many health systems have chosen to focus primary care policy on access rather than continuity [ 5 , 6 , 7 ]. Continuity has fallen in several primary care systems, and this has led to calls to improve it [ 8 , 9 ]. However, it is sometimes unclear exactly what continuity is and what should be improved.
In its most basic form, continuity of care can be defined as a continuous relationship between a patient and a healthcare professional [ 10 ]. However, from the patient perspective, continuity of care can also be experienced as joined up seamless care from multiple providers [ 11 ].
One of the most commonly cited models of continuity by Haggerty et al. defines continuity as
“ …the degree to which a series of discrete healthcare events is experienced as coherent and connected and consistent with the patient’s medical needs and personal context. Continuity of care is distinguished from other attributes of care by two core elements—care over time and the focus on individual patients” [ 11 ].
The model then breaks continuity down into three parts (see Table 1 ) [ 11 ]. Other academic models of patient continuity exist, but they contain elements which are broadly analogous [ 10 , 12 , 13 , 14 ].
Continuity can be measured through administrative measures or by asking patients about their experience of continuity [ 16 ]. Administrative measures are commonly used as they allow continuity to be calculated easily for large numbers of patient consultations. Administrative measures capture one element of continuity – the frequency or pattern of professionals seen by a patient [ 16 , 17 ]. There are multiple studies and several systematic reviews showing that better health outcomes are associated with administrative measures of continuity of care [ 1 , 2 , 18 , 19 ]. One of the most recent of these reviews used a box-score method to assess the relationship between reduced mortality and continuity (i.e., counting the numbers of studies reporting significant and non-significant relationships) [ 18 ]. The review examined thirteen studies and found a positive association in nine. Administrative measures of continuity cannot capture aspects of continuity such as informational or management continuity or the nature of the relationship between the patient and clinicians. To address this, several patient-reported experience measures (PREMs) of continuity have been developed that attempt to capture the patient experience of continuity beyond the pattern in which they see particular clinicians [ 14 , 17 , 20 , 21 ]. Studies have shown a variable correlation between administrative and patient reported measures of continuity and their relationship to health outcomes [ 22 ]. Pearson correlation coefficients vary between 0.11 and 0.87 depending on what is measured and how [ 23 , 24 ]. This suggests that they are capturing different things and that both measures have their uses and drawbacks [ 23 , 25 ]. Patients may have good administrative measures of continuity but report a poor experience. Conversely, administrative measures of continuity may be poor, but a patient may report a high level of experienced continuity. Patient experienced continuity and patient satisfaction with healthcare are aims in their own right in many healthcare systems [ 26 ]. Whilst this is laudable, it may be unclear to policy makers if prioritising patient-experienced continuity will improve health outcomes.
This review seeks to answer two questions.
Is patient reported continuity of care associated with positive health outcomes?
Are particular types of patient reported continuity (relational, informational or management) associated with positive health outcomes?
A review protocol was registered with PROSPERO in June 2021 (ID: CRD42021246606).
A structured search was undertaken using appropriate search terms on Medline, EMBASE, CINAHL and the Cochrane Library in April 2021 (see Appendix ). The searches were limited to the last 20 years. This age limitation reflects the period in which the more holistic description of continuity (as exemplified by Haggerty et al. 2003) became more prominent. In addition to database searches, existing reviews of PREMs of continuity and co-ordination were searched for appropriate measures. Citation searching of these measures was then undertaken to locate studies that used these outcome measures.
Full text papers were reviewed if the title or abstract suggested that the paper measured (a) continuity through a PREM and (b) a health outcome. Health outcomes were defined as outcomes that measured a direct effect on patient health (e.g., health status) or patient use of emergency or inpatient care. Papers with outcomes relating to patient satisfaction or satisfaction with a particular service were excluded as were process measures (such as quality of documentation, cost to health care provider). Cohort and interventional studies were eligible for inclusion, if they reported data on the relationship between continuity and a relevant health outcome. Cross-sectional studies were excluded because of the risk of recall bias [ 27 ].
The majority of participants in a study had to be aged over 16, based in a healthcare setting and receiving healthcare from healthcare professionals (medical or non-medical). We felt that patients under 16 were unlikely to be asked to fill out continuity PREMs. Studies that used PREMs to quantitatively measure one or more elements of experienced continuity of care or coordination were eligible for inclusion [ 11 ]. Any PREMs that could map to one or more of the three key elements of Haggerty’s definition (Table 1 ) were eligible for inclusion. The types of continuity measured by each study were mapped to the Haggerty concepts of continuity by at least two reviewers independently. Our search also included patient reported measures of co-ordination, as a previous review of continuity PREMs highlighted the conceptual overlap between patient experienced continuity and some measures of patient experienced co-ordination [ 17 ]. Whilst there are different definitions of co-ordination, the concept of patient perceived co-ordination is arguably the same as management continuity [ 13 , 14 , 28 ]. Patient reported measures of care co-ordination were reviewed by two reviewers to see whether they measured the concept of management continuity. Because of the overlap between concepts of continuity and other theories (e.g., patient-centred care, quality of care), in studies where it was not clear that continuity was being measured, the decision about inclusion or exclusion was made, with documented reasons, after discussion between three of the reviewers (PB, SS and AW). Disagreements were resolved by documented group discussion. Some PREMs measured concepts of continuity alongside other concepts such as access. These studies were eligible for inclusion only if measurements of continuity were reported and analysed separately.
All titles/abstracts were initially screened by one reviewer (PB). Twenty percent of the abstracts were independently reviewed by two other reviewers (SS and AW), blinded to the results of the initial screening. All full text reviews were done by two blinded reviewers independently. Disagreements were resolved by group discussion between PB, SS, AW and PBo. Excel was used for collation of search results, titles, and abstracts. Rayyan was used in the full text review process.
Data extraction was performed independently by two reviewers. The following data were extracted to an Excel spreadsheet: study design, setting, participant inclusion criteria, method of measurement of continuity, type of continuity measured, outcomes analysed, temporal relationship of continuity to outcomes in the study, co-variates, and quantitative data for continuity measures and outcomes. Disagreements were resolved by documented discussion or involvement of a third reviewer.
Cohort studies were assessed for risk of bias at a study level using the QUIP tool by two reviewers acting independently [ 29 ]. Trials were assessed using the Cochrane risk of bias tool. The use of the QUIP tool was a deviation from the review protocol, as the Newcastle-Ottawa tool specified in the protocol was less suitable for use on the type of cohort studies returned in the search. Any disagreements in rating were resolved by documented discussion.
As outlined in our original protocol, our preferred analysis strategy was to perform meta-analysis. However, we were unable to do this as insufficient numbers of studies reported data amenable to the calculation of an effect size. Instead, we used a box-score method [ 30 ]. This involved assessing and tabulating the relationship between each continuity measure and each outcome in each study. These relationships were recorded as either positive, negative or non-significant (using a conventional p value of < 0.05 as our cut off for significance). Advantages and disadvantages of this method are explored in the discussion section. Where a study used both bivariate analysis and multivariate analysis, the results from the multivariate analysis were extracted. Results were marked as “mixed” where more than one measure for an outcome was used and the significance/direction differed between outcome measures. Sensitivity analysis of study quality and size was carried out.
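To make the box-score tabulation concrete, the sketch below is illustrative only: the records, direction coding and significance threshold are invented for the example and are not taken from the review's extraction sheet. It simply counts positive, negative and non-significant continuity-outcome relationships.

```python
# Minimal box-score sketch (illustrative; data structure and values are assumptions).
from collections import Counter

# Each record: (study, continuity_measure, outcome, p_value, direction)
# direction is +1 when better continuity is linked to a better outcome, -1 otherwise.
results = [
    ("Study 1", "relational", "quality of life", 0.03, +1),
    ("Study 2", "informational", "self-reported health", 0.21, +1),
    ("Study 3", "management", "ED use/hospitalisation", 0.01, -1),
]

ALPHA = 0.05  # conventional significance cut-off, as used in the review

def classify(p_value, direction):
    """Label one continuity-outcome relationship for the box score."""
    if p_value >= ALPHA:
        return "non-significant"
    return "positive" if direction > 0 else "negative"

box_score = Counter(classify(p, d) for _, _, _, p, d in results)
print(box_score)  # e.g. Counter({'positive': 1, 'non-significant': 1, 'negative': 1})
```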
Figure 1 shows the search results and number of inclusions/exclusions. Studies were excluded for a number of reasons, including having inappropriate outcome measures [ 31 ], focusing on non-adult patient populations [ 32 ] and reporting insufficient data to examine the relationship between continuity and outcomes [ 33 ]. All studies are described in Table 2 .
Fig. 1 Results of search strategy (NB: 18 studies provided 19 assessments)
Studies took place in 9 different, mostly economically developed, countries. Studies were set in primary care (n = 5), hospital/specialist outpatient (n = 7), hospital in-patient (n = 5), or the general population (n = 2).
All included studies, apart from one trial [ 34 ], were cohort studies. Study duration varied from 2 months to 5 years. Most studies were rated as being low-moderate or moderate risk of bias, due to outcomes being patient reported, issues with recruitment, inadequately describing cohort populations, significant rates of attrition and/or failure to account for patients lost to follow up.
The majority of the studies (15/19) measured continuity using a validated, multifactorial patient reported measure of continuity or using the continuity/co-ordination subscale of another validated instrument. Two studies placed patients into discrete groups of continuity based on answers to pre-defined questions (e.g., do you have a regular GP that you see? ) [ 35 , 36 ], one used a bespoke questionnaire [ 34 ], and one calculated an administrative measure of continuity (UPC – Usual Provider of Care index) using patient reported visit data collected from patient interviews [ 37 ]. Ten studies reported more than one type of patient reported continuity, four reported relational continuity, three reported overall continuity, one informational continuity and one management continuity.
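For readers unfamiliar with the administrative measure mentioned above, the Usual Provider of Care (UPC) index is the proportion of a patient's visits made to their most frequently seen provider. A minimal sketch follows; the visit list is invented for illustration and is not data from the cited study.

```python
# Illustrative Usual Provider of Care (UPC) index from patient-reported visit data.
from collections import Counter

def upc_index(providers_seen):
    """UPC = visits to the most frequently seen provider / total visits."""
    if not providers_seen:
        raise ValueError("at least one visit is required")
    counts = Counter(providers_seen)
    return counts.most_common(1)[0][1] / len(providers_seen)

visits = ["Dr A", "Dr A", "Dr B", "Dr A", "Dr C"]  # 5 reported visits (made up)
print(round(upc_index(visits), 2))  # 0.6 -> 3 of 5 visits with the usual provider
```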
Most of the studies reported more than one outcome measure. To enable comparison across studies we grouped the most common outcome measures together. These were quality of life ( n = 11), self-reported health status ( n = 8), emergency department use or hospitalisation ( n = 7), and mortality ( n = 4). Other outcomes reported included physiological parameters e.g., blood pressure or blood test parameters ( n = 2) [ 36 , 38 ] and other indicators of functioning or well-being ( n = 6).
Twelve of the nineteen studies demonstrated at least one statistically significant association between at least one patient reported measure of continuity and at least one outcome. However, ten of these studies examined more than one outcome measure. Two of these significant studies showed negative findings; better informational continuity was associated with worse self-reported disease status [ 35 ] and improved continuity was related to increased admissions and ED use [ 39 ]. Four studies demonstrated no association between measures of continuity and any health outcomes.
The four most commonly reported types of outcomes were analysed separately (Table 3 ). All the outcomes had a majority of studies showing no significant association with continuity or a mixed/unclear association. Sensitivity analysis of the results in Table 3 , excluding high and moderate-high risk studies, did not change this finding. Each of these outcomes was also examined in relation to the type of continuity that was measured (Table 4 ). Apart from the relationship between informational continuity and quality of life, all other combinations of continuity type/outcome had a majority of studies showing no significant association with continuity or a mixed/unclear association. However, the relationship between informational continuity and quality of life was only examined in two separate studies [ 40 , 41 ]. One of these studies contained fewer than 100 patients and was removed when sensitivity analysis of study size was carried out [ 40 ]. Sensitivity analysis of the results in Table 4 , excluding high and moderate-high risk studies, did not change the findings.
Two sensitivity analyses were carried out: (a) removing all studies with fewer than 100 participants and (b) removing those with fewer than 1000 participants. There were only five studies with at least 1000 participants. These all showed at least one positive association between continuity and a health outcome. Of note, three of these five studies examined emergency department use/readmissions and all three found a significant positive association.
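The size-based sensitivity analysis amounts to re-tabulating the box score after dropping studies below a participant threshold. The toy sketch below uses placeholder study records, not the review's data, purely to show the mechanics.

```python
# Sketch of a size-based sensitivity analysis (study records are placeholders).
studies = [
    {"name": "Study 1", "n": 486, "any_positive_association": True},
    {"name": "Study 2", "n": 85, "any_positive_association": False},
    {"name": "Study 3", "n": 2400, "any_positive_association": True},
]

def sensitivity(studies, min_n):
    """Keep only studies with at least min_n participants and recount positives."""
    kept = [s for s in studies if s["n"] >= min_n]
    positives = sum(s["any_positive_association"] for s in kept)
    return len(kept), positives

for threshold in (100, 1000):
    kept, positives = sensitivity(studies, threshold)
    print(f"n >= {threshold}: {kept} studies kept, {positives} with a positive association")
```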
Continuity of care is a multi-dimensional concept that is often linked to positive health outcomes. There is strong evidence that administrative measures of continuity are associated with improved health outcomes including a reduction in mortality, healthcare costs and utilisation of healthcare [ 3 , 18 , 19 ]. Our interpretation of the evidence in this review is that there is an emerging link between patient reported continuity and health outcomes. Most studies in the review contained at least one significant association between continuity and a health outcome. However, when outcome measures were examined individually, the findings were less consistent.
The evidence for a link between patient reported continuity and health outcomes is not as strong as that for administrative measures. There are several possible explanations for this. The review retrieved a relatively small number of studies that examined a range of different outcomes, in different patient populations and settings, using different measures of continuity. This resulted in small numbers of studies examining the relationship of a particular measure of continuity with a particular outcome (Table 4 ). The studies in the review took place in a wide variety of country and healthcare settings and it may be that the effects of continuity vary in different contexts. Finally, in comparison to studies of administrative measures of continuity, the studies in this review were small: the median number of participants in the studies was 486, compared to 39,249 in a recent systematic review examining administrative measures of continuity [ 18 ]. Smaller studies are less able to detect small effect sizes and this may be the principal reason for the difference between the results of this review and previous reviews of administrative measures of continuity. When studies with fewer than 1000 participants were excluded, all remaining studies showed at least one positive finding and there was a consistent association between reduction in emergency department use/re-admissions and continuity. This suggests that a modest association between certain outcomes and patient reported continuity may be present but, due to effect size, larger studies are needed to demonstrate it. The box score method does not take account of the differential size of studies.
Continuity is not a concept that is universally agreed upon. We mapped concepts of continuity onto the commonly used Haggerty framework [ 11 ]. Apart from the use of the Nijmegen Continuity of care questionnaire in three studies [ 42 ], all studies measured continuity using different methods and concepts of continuity. We could have used other theoretical constructs of continuity for the mapping of measures. It was not possible to find the exact questions asked of patients in every study. We therefore mapped several of the continuity measures based on higher level descriptions given by the authors. The diversity of patient measures may account for some of the variability in findings between studies. However, it may be that the nature of continuity captured by patient reported measures is less closely linked to health outcomes than that captured by administrative measures. Administrative measures capture the pattern of interactions between patients and clinicians. All studies in this review (apart from Study 18) use PREMs that attempt to capture something different to the pattern in which a patient sees a clinician. Depending on the specific measure used, this includes: aspects of information transfer between services, how joined up care was between different providers and the nature of the patient-clinician relationship. PREMs can only capture what the patient perceives and remembers. The experience of continuity for the patient is important in its own right. However, it may be that the aspects of continuity that are most linked to positive health outcomes are best reflected by administrative measures. Sidaway-Lee et al. have hypothesised why relational continuity may be linked to health outcomes [ 43 ]. This includes the ability for a clinician to think more holistically and the motivation to “go the extra mile” for a patient. Whilst these are difficult to measure directly, it may be that administrative measures are a better proxy marker than PREMs for these aspects of continuity.
This review shows a potential emerging relationship between patient reported continuity and health outcomes. However, the evidence for this association is currently weaker than that demonstrated in previous reviews of administrative measures of continuity.
If continuity is to be measured and improved, as is being proposed in some health systems [ 44 ], these findings have potential implications as to what type of measure we should use. Measurement of health system performance often drives change [ 45 ]. Health systems may respond to calls to improve continuity differently, depending on how continuity is measured. Continuity PREMs are important and patient experienced continuity should be a goal in its own right. However, it is the fact that continuity is linked to multiple positive health care and health system outcomes that is often given as the reason for pursuing it as a goal [ 8 , 44 , 46 ]. Whilst this review shows there is emerging evidence of a link, it is not as strong as that found in studies of administrative measures. If, as has been shown in other work, PREMs and administrative measures are looking at different things [ 23 , 24 ], we need to choose our measures of continuity carefully.
Larger studies are required to confirm the emerging link between patient experienced continuity and outcomes shown in this paper. Future studies, where possible, should collect both administrative and patient reported measures of continuity and seek to understand the relative importance of the three different aspects of continuity (relational, informational, managerial). The relationship between patient experienced continuity and outcomes is likely to vary between different groups, and future work should examine differential effects in different patient populations. There are now several validated measures of patient experienced continuity [ 17 , 20 , 21 , 42 ]. Whilst there may be an argument that more should be developed, the use of a standardised questionnaire (such as the Nijmegen questionnaire) where possible would enable closer comparison between patient experiences in different healthcare settings.
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
Gray DJP, Sidaway-Lee K, White E, Thorne A, Evans PH. Continuity of care with doctors - a matter of life and death? A systematic review of continuity of care and mortality. BMJ Open. 2018;8(6):1–12.
Barker I, Steventon A, Deeny SR. Association between continuity of care in general practice and hospital admissions for ambulatory care sensitive conditions: cross sectional study of routinely collected, person level data. BMJ Online. 2017;356.
Bazemore A, Merenstein Z, Handler L, Saultz JW. The impact of interpersonal continuity of primary care on health care costs and use: a critical review. Ann Fam Med. 2023;21(3):274–9.
Palmer W, Hemmings N, Rosen R, Keeble E, Williams S, Imison C. Improving access and continuity in general practice. The Nuffield Trust; 2018 [cited 2022 Jan 15]. https://www.nuffieldtrust.org.uk/research/improving-access-and-continuity-in-general-practice
Pettigrew LM, Kumpunen S, Rosen R, Posaner R, Mays N. Lessons for ‘large-scale’ general practice provider organisations in England from other inter-organisational healthcare collaborations. Health Policy. 2019;123(1):51–61.
Glenister KM, Guymer J, Bourke L, Simmons D. Characteristics of patients who access zero, one or multiple general practices and reasons for their choices: a study in regional Australia. BMC Fam Pract. 2021;22(1):2.
Kringos D, Boerma W, Bourgueil Y, Cartier T, Dedeu T, Hasvold T, et al. The strength of primary care in Europe: an international comparative study. Br J Gen Pract. 2013;63(616):e742–50.
Salisbury H. Helen Salisbury: everyone benefits from continuity of care. BMJ. 2023;382:p1870.
Gray DP, Sidaway-Lee K, Johns C, Rickenbach M, Evans PH. Can general practice still provide meaningful continuity of care? BMJ. 2023;383:e074584.
Ladds E, Greenhalgh T. Modernising continuity: a new conceptual framework. Br J Gen Pr. 2023;73(731):246–8.
Haggerty JL, Reid RJ, Freeman GK, Starfield BH, Adair CE, McKendry R. Continuity of care: a multidisciplinary review. BMJ. 2003;327(7425):1219–21.
Freeman G, Shepperd S, Robinson I, Ehrich K, Richards S, Pitman P, et al. Continuity of care: report of a scoping exercise for the National Co-ordinating Centre for NHS Service Delivery and Organisation R&D. 2001 [cited 2020 Oct 15]. https://njl-admin.nihr.ac.uk/document/download/2027166
Saultz JW. Defining and measuring interpersonal continuity of care. Ann Fam Med. 2003;1(3):134–43.
Uijen AA, Schers HJ, Schellevis FG, van den Bosch WJHM. How unique is continuity of care? A review of continuity and related concepts. Fam Pract. 2012;29(3):264–71.
Murphy M, Salisbury C. Relational continuity and patients’ perception of GP trust and respect: a qualitative study. Br J Gen Pr. 2020;70(698):e676–83.
Gray DP, Sidaway-Lee K, Whitaker P, Evans P. Which methods are most practicable for measuring continuity within general practices? Br J Gen Pract. 2023;73(731):279–82.
Uijen AA, Schers HJ. Which questionnaire to use when measuring continuity of care. J Clin Epidemiol. 2012;65(5):577–8.
Baker R, Bankart MJ, Freeman GK, Haggerty JL, Nockels KH. Primary medical care continuity and patient mortality. Br J Gen Pr. 2020;70(698):E600–11.
Van Walraven C, Oake N, Jennings A, Forster AJ. The association between continuity of care and outcomes: a systematic and critical review. J Eval Clin Pr. 2010;16(5):947–56.
Aller MB, Vargas I, Garcia-Subirats I, Coderch J, Colomés L, Llopart JR, et al. A tool for assessing continuity of care across care levels: an extended psychometric validation of the CCAENA questionnaire. Int J Integr Care. 2013;13(OCT/DEC):1–11.
Haggerty JL, Roberge D, Freeman GK, Beaulieu C, Bréton M. Validation of a generic measure of continuity of care: when patients encounter several clinicians. Ann Fam Med. 2012;10(5):443–51.
Bentler SE, Morgan RO, Virnig BA, Wolinsky FD, Hernandez-Boussard T. The association of longitudinal and interpersonal continuity of care with emergency department use, hospitalization, and mortality among medicare beneficiaries. PLoS ONE. 2014;9(12):1–18.
Bentler SE, Morgan RO, Virnig BA, Wolinsky FD. Do claims-based continuity of care measures reflect the patient perspective? Med Care Res Rev. 2014;71(2):156–73.
Rodriguez HP, Marshall RE, Rogers WH, Safran DG. Primary care physician visit continuity: a comparison of patient-reported and administratively derived measures. J Gen Intern Med. 2008;23(9):1499–502.
Adler R, Vasiliadis A, Bickell N. The relationship between continuity and patient satisfaction: a systematic review. Fam Pr. 2010;27(2):171–8.
Bodenheimer T, Sinsky C. From triple to quadruple aim: care of the patient requires care of the provider. Ann Fam Med. 2014;12(6):573–6.
Althubaiti A. Information bias in health research: definition, pitfalls, and adjustment methods. J Multidiscip Healthc. 2016;9:211–7.
Schultz EM, McDonald KM. What is care coordination? Int J Care Coord. 2014;17(1–2):5–24.
Hayden JA, van der Windt DA, Cartwright JL, Côté P, Bombardier C. Assessing bias in studies of prognostic factors. Ann Intern Med. 2013;158(4):280–6.
Green BF, Hall JA. Quantitative methods for literature reviews. Annu Rev Psychol. 1984;35(1):37–54.
Safran DG, Montgomery JE, Chang H, Murphy J, Rogers WH. Switching doctors: predictors of voluntary disenrollment from a primary physician’s practice. J Fam Pract. 2001;50(2):130–6.
Burns T, Catty J, Harvey K, White S, Jones IR, McLaren S, et al. Continuity of care for carers of people with severe mental illness: results of a longitudinal study. Int J Soc Psychiatry. 2013;59(7):663–70.
Engelhardt JB, Rizzo VM, Della Penna RD, Feigenbaum PA, Kirkland KA, Nicholson JS, et al. Effectiveness of care coordination and health counseling in advancing illness. Am J Manag Care. 2009;15(11):817–25.
Uijen AA, Bischoff EWMA, Schellevis FG, Bor HHJ, Van Den Bosch WJHM, Schers HJ. Continuity in different care modes and its relationship to quality of life: a randomised controlled trial in patients with COPD. Br J Gen Pr. 2012;62(599):422–8.
Humphries C, Jaganathan S, Panniyammakal J, Singh S, Dorairaj P, Price M, et al. Investigating discharge communication for chronic disease patients in three hospitals in India. PLoS ONE. 2020;15(4):1–20.
Konrad TR, Howard DL, Edwards LJ, Ivanova A, Carey TS. Physician-patient racial concordance, continuity of care, and patterns of care for hypertension. Am J Public Health. 2005;95(12):2186–90.
Van Walraven C, Taljaard M, Etchells E, Bell CM, Stiell IG, Zarnke K, et al. The independent association of provider and information continuity on outcomes after hospital discharge: implications for hospitalists. J Hosp Med. 2010;5(7):398–405.
Gulliford MC, Naithani S, Morgan M. Continuity of care and intermediate outcomes of type 2 diabetes mellitus. Fam Pr. 2007;24(3):245–51.
Kaneko M, Aoki T, Mori H, Ohta R, Matsuzawa H, Shimabukuro A, et al. Associations of patient experience in primary care with hospitalizations and emergency department visits on isolated islands: a prospective cohort study. J Rural Health. 2019;35(4):498–505.
Beesley VL, Janda M, Burmeister EA, Goldstein D, Gooden H, Merrett ND, et al. Association between pancreatic cancer patients’ perception of their care coordination and patient-reported and survival outcomes. Palliat Support Care. 2018;16(5):534–43.
Valaker I, Fridlund B, Wentzel-Larsen T, Nordrehaug JE, Rotevatn S, Råholm MB, et al. Continuity of care and its associations with self-reported health, clinical characteristics and follow-up services after percutaneous coronary intervention. BMC Health Serv Res. 2020;20(1):1–15.
Uijen AA, Schellevis FG, Van Den Bosch WJHM, Mokkink HGA, Van Weel C, Schers HJ. Nijmegen continuity questionnaire: development and testing of a questionnaire that measures continuity of care. J Clin Epidemiol. 2011;64(12):1391–9.
Sidaway-Lee K, Gray DP, Evans P, Harding A. What mechanisms could link GP relational continuity to patient outcomes? Br J Gen Pr. 2021;(June):278–81.
House of Commons Health and Social Care Committee. The future of general practice. 2022. https://publications.parliament.uk/pa/cm5803/cmselect/cmhealth/113/report.html
Close J, Byng R, Valderas JM, Britten N, Lloyd H. Quality after the QOF? Before dismantling it, we need a redefined measure of ‘quality’. Br J Gen Pract. 2018;68(672):314–5.
Gray DJP. Continuity of care in general practice. BMJ. 2017;356:j84.
Patrick Burch carried this work out as part of a PhD Fellowship funded by THIS Institute.
Authors and affiliations.
Centre for Primary Care and Health Services Research, Institute of Population Health, University of Manchester, Manchester, England
Patrick Burch, Alex Walter, Stuart Stewart & Peter Bower
PBu conceived the review and performed the searches. PBu, AW and SS performed the paper selections, reviews and data abstractions. PBo helped with the design of the review and was involved in resolving reviewer disputes. All authors contributed towards the drafting of the final manuscript.
Correspondence to Patrick Burch .
Ethics approval, consent for publication, competing interests.
The authors declare no competing interests.
Publisher’s note.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
Cite this article.
Burch, P., Walter, A., Stewart, S. et al. Patient reported measures of continuity of care and health outcomes: a systematic review. BMC Prim. Care 25 , 309 (2024). https://doi.org/10.1186/s12875-024-02545-8
Received : 27 March 2023
Accepted : 29 July 2024
Published : 19 August 2024
DOI : https://doi.org/10.1186/s12875-024-02545-8
ISSN: 2731-4553
Background Maternal and child health care, particularly antenatal care (ANC) and immunization services, are essential to improving health outcomes in rural Africa. Despite global efforts, access to high-quality health care services remains limited in rural areas, contributing to high maternal and childhood mortality rates. Community Health Workers (CHWs) have been recognized as a promising solution for bridging this gap by providing essential services directly to underserved populations. This systematic review and meta-analysis aims to evaluate the effectiveness and cost-effectiveness of CHWs in delivering antenatal care and immunization services to pregnant mothers and children under 5 in rural Africa.
Methods This review will include randomized controlled trials (RCTs), cohort studies, case-control studies, and observational studies published from 2014 onward. The search strategy will be implemented across multiple databases, including Google Scholar, Academic Info, Cochrane, Refseek, PubMed, and MEDLINE. The primary outcomes will focus on clinical and economic measures, including maternal and child health outcomes and cost-effectiveness of CHW interventions. Data extraction and quality assessment will be conducted independently by two reviewers, with discrepancies resolved through discussion or the involvement of a third reviewer.
Discussion The findings from this review will contribute to the understanding of the role CHWs play in improving maternal and child health outcomes in rural Africa. The results will provide valuable insights for policymakers, health care providers, and stakeholders to inform future interventions and resource allocation strategies.
Registration: This protocol is registered with PROSPERO (registration number CRD42024529963).
The authors have declared no competing interest.
This study did not receive any funding
I confirm all relevant ethical guidelines have been followed, and any necessary IRB and/or ethics committee approvals have been obtained.
I confirm that all necessary patient/participant consent has been obtained and the appropriate institutional forms have been archived, and that any patient/participant/sample identifiers included were not known to anyone (e.g., hospital staff, patients or participants themselves) outside the research group so cannot be used to identify individuals.
I understand that all clinical trials and any other prospective interventional studies must be registered with an ICMJE-approved registry, such as ClinicalTrials.gov. I confirm that any such study reported in the manuscript has been registered and the trial registration ID is provided (note: if posting a prospective study registered retrospectively, please provide a statement in the trial ID field explaining why the study was not registered in advance).
I have followed all appropriate research reporting guidelines, such as any relevant EQUATOR Network research reporting checklist(s) and other pertinent material, if applicable.
All data produced in the present study are available upon reasonable request to the authors
https://www.crd.york.ac.uk/PROSPERO/display_record.php?RecordID=529963
2.1 Step 1: defining the research question. The first step in conducting a meta-analysis, as with any other empirical study, is the definition of the research question. Most importantly, the research question determines the realm of constructs to be considered or the type of interventions whose effects shall be analyzed.
Tip 1: Know the type of outcome. There are differences in a forest plot depending on the type of outcomes. For a continuous outcome, the mean, standard deviation and number of patients are ...
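For a continuous outcome, those summary statistics (means, standard deviations and group sizes) are enough to derive a standardized mean difference and its variance for each study in a forest plot. The sketch below uses invented numbers and the usual large-sample variance approximation for Cohen's d; it is an illustration, not part of the cited tutorial.

```python
# Illustrative standardized mean difference (Cohen's d) from summary statistics.
import math

def cohens_d(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d with pooled SD, plus the standard large-sample variance approximation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / pooled_sd
    var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    return d, var_d

# Made-up group summaries for a single study
d, var_d = cohens_d(m1=24.3, sd1=5.1, n1=60, m2=21.8, sd2=4.8, n2=58)
print(f"d = {d:.2f}, SE = {math.sqrt(var_d):.2f}")
```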
The graphical output of meta-analysis is a forest plot which provides information on individual studies and the pooled effect. Systematic reviews of literature can be undertaken for all types of questions, and all types of study designs. This article highlights the key features of systematic reviews, and is designed to help readers understand ...
It is easy to confuse systematic reviews and meta-analyses. A systematic review is an objective, reproducible method to find answers to a certain research question, by collecting all available studies related to that question and reviewing and analyzing their results. A meta-analysis differs from a systematic review in that it uses statistical ...
Rule 1: Specify the topic and type of the meta-analysis. Considering that a systematic review [ 10] is fundamental for a meta-analysis, you can use the Population, Intervention, Comparison, Outcome (PICO) model to formulate the research question. It is important to verify that there are no published meta-analyses on the specific topic in order ...
Step 1: Defining a Research Question. A well-defined research question is a fundamental starting point for any research synthesis. The research question should guide decisions about which studies to include in the meta-analysis, and which statistical model is most appropriate.
Definition. "A meta-analysis is a formal, epidemiological, quantitative study design that uses statistical methods to generalise the findings of the selected independent studies. Meta-analysis and systematic review are the two most authentic strategies in research. When researchers start looking for the best available evidence concerning ...
Meta-analysis is a statistical procedure for analyzing the combined data from different studies, and can be a major source of concise up-to-date information. The overall conclusions of a meta-analysis, however, depend heavily on the quality of the meta-analytic process, and an appropriate evaluation of the quality of meta-analysis (meta-evaluation) can be challenging. We outline ten questions ...
To do the meta-analysis, we can use free software such as RevMan or the R package meta. In this example, we will use the R package meta. The tutorial for the meta package can be accessed through the "General Package for Meta-Analysis" tutorial PDF. The R code and guidance for the meta-analysis can be found in Additional file 5: File S3.
Define the Research Question. A meta-analysis begins with a question. Common questions addressed in meta-analyses are whether one treatment is more effective than another or if exposure to a certain agent will result in disease. Before beginning an analysis, the investigators need to define the problem or question of interest. ... For example ...
Similar to any research study, a meta-analysis begins with a research question. Meta-analysis can be used in any situation where the goal is to summarize quantitative findings from empirical studies. It can be used to examine different types of effects, including prevalence rates (e.g., percentage of rape survivors with depression), growth ...
Meta-analysis would be used for the following purposes: To establish statistical significance with studies that have conflicting results. To develop a more correct estimate of effect magnitude. To provide a more complex analysis of harms, safety data, and benefits. To examine subgroups with individual numbers that are not statistically significant.
Meta-analysis is the statistical combination of results from two or more separate studies. Potential advantages of meta-analyses include an improvement in precision, the ability to answer questions not posed by individual studies, and the opportunity to settle controversies arising from conflicting claims.
It may take several weeks to complete and run a search. Moreover, all guidelines for carrying out systematic reviews recommend that at least two subject experts screen the studies identified in the search. The first round of screening can consume 1 hour per screener for every 100-200 records. A systematic review is a labor-intensive team effort.
Purpose: Meta-analysis is a statistical technique used to combine and analyze quantitative data from multiple individual studies that address the same research question. The primary aim of meta-analysis is to provide a single summary effect size that quantifies the magnitude and direction of an effect or relationship across studies.
Step 1. Formulate the Research Question. A systematic review is based on a pre-defined specific research question (Cochrane Handbook, 1.1). The first step in a systematic review is to determine its focus - you should clearly frame the question(s) the review seeks to answer (Cochrane Handbook, 2.1). It may take you a while to develop a good review question - it is an important step in your review.
Meta-analysis refers to the statistical analysis of the data from independent primary studies focused on the same question, which aims to generate a quantitative estimate of the studied phenomenon, for example, the effectiveness of the intervention (Gopalakrishnan and Ganeshkumar, 2013). In clinical research, systematic reviews and meta ...
The research question for a meta-analysis could be formulated around specific theory (e.g., regulatory fit theory; Motyka et al., 2014) or model (e.g., technology acceptance model; King & He, 2006)). Defining a research question in meta-analysis requires a deep understanding of the topic and literature, and entails specifying a valuable ...
Meta-analysis is the statistical combination of the results of multiple studies addressing a similar research question. An important part of this method involves computing a combined effect size across all of the studies. As such, this statistical approach involves extracting effect sizes and variance measures from various studies. [ 1]
Meta-analysis is an objective examination of published data from many studies of the same research topic identified through a literature search. Through the use of rigorous statistical methods, it ...
Assessing heterogeneity in meta-analysis: the between-study variance is estimated as $\hat{\tau}^2 = \frac{Q - (k-1)}{c}$ with $c = \sum_i w_i - \frac{\sum_i w_i^2}{\sum_i w_i}$, where $w_i$ is the weighting factor for the ith study assuming a fixed-effects model ($w_i = 1/\hat{\sigma}_i^2$), $k$ is the number of studies, and $Q$ is the statistical test for heterogeneity proposed by Cochran (1954). To avoid negative values, $\hat{\tau}^2$ is set to 0 when $Q \le (k-1)$.
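A small NumPy implementation of this DerSimonian-Laird style estimator, applied to made-up effect sizes and variances, shows how $\hat{\tau}^2$ and the random-effects pooled estimate follow from the formulas above. This is a sketch only, not a replacement for dedicated software such as RevMan or the R meta package.

```python
# Illustrative DerSimonian-Laird tau^2 and random-effects pooling (invented data).
import numpy as np

def dersimonian_laird(effects, variances):
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances                               # fixed-effect weights w_i = 1/sigma_i^2
    k = len(effects)
    fixed = np.sum(w * effects) / np.sum(w)           # fixed-effect pooled estimate
    Q = np.sum(w * (effects - fixed) ** 2)            # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)                # truncated at 0 when Q <= k-1
    w_star = 1.0 / (variances + tau2)                 # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2, Q

pooled, se, tau2, Q = dersimonian_laird([0.31, 0.12, 0.45, 0.20], [0.02, 0.05, 0.04, 0.03])
print(f"pooled = {pooled:.3f} (SE {se:.3f}), tau^2 = {tau2:.3f}, Q = {Q:.2f}")
```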
Researchers must formulate an appropriate research question at the beginning. A well‐formulated question will guide many aspects of the review process, ... the lack of information can severely limit the types of analyses and conclusions that can be achieved in a meta‐analysis. For example, the unavailability of information from individual ...