Research Problem – Examples, Types and Guide

Research Problem

Definition:

A research problem is a specific and well-defined issue or question that a researcher seeks to investigate. It is the starting point of any research project, as it sets the direction, scope, and purpose of the study.

Types of Research Problems

Types of Research Problems are as follows:

Descriptive Problems

These problems involve describing or documenting a particular phenomenon, event, or situation. For example, a researcher might investigate the demographics of a particular population, such as their age, gender, income, and education.

Exploratory Problems

These problems are designed to explore a particular topic or issue in depth, often with the goal of generating new ideas or hypotheses. For example, a researcher might explore the factors that contribute to job satisfaction among employees in a particular industry.

Explanatory Problems

These problems seek to explain why a particular phenomenon or event occurs, and they typically involve testing hypotheses or theories. For example, a researcher might investigate the relationship between exercise and mental health, with the goal of determining whether exercise has a causal effect on mental health.

Predictive Problems

These problems involve making predictions or forecasts about future events or trends. For example, a researcher might investigate the factors that predict future success in a particular field or industry.

Evaluative Problems

These problems involve assessing the effectiveness of a particular intervention, program, or policy. For example, a researcher might evaluate the impact of a new teaching method on student learning outcomes.

How to Define a Research Problem

Defining a research problem involves identifying a specific question or issue that a researcher seeks to address through a research study. Here are the steps to follow when defining a research problem:

  • Identify a broad research topic: Start by identifying a broad topic that you are interested in researching. This could be based on your personal interests, observations, or gaps in the existing literature.
  • Conduct a literature review: Once you have identified a broad topic, conduct a thorough literature review to identify the current state of knowledge in the field. This will help you identify gaps or inconsistencies in the existing research that can be addressed through your study.
  • Refine the research question: Based on the gaps or inconsistencies identified in the literature review, refine your research question to a specific, clear, and well-defined problem statement. Your research question should be feasible, relevant, and important to the field of study.
  • Develop a hypothesis: Based on the research question, develop a hypothesis that states the expected relationship between variables.
  • Define the scope and limitations: Clearly define the scope and limitations of your research problem. This will help you focus your study and ensure that your research objectives are achievable.
  • Get feedback: Get feedback from your advisor or colleagues to ensure that your research problem is clear, feasible, and relevant to the field of study.

Components of a Research Problem

The components of a research problem typically include the following:

  • Topic: The general subject or area of interest that the research will explore.
  • Research Question: A clear and specific question that the research seeks to answer or investigate.
  • Objective: A statement that describes the purpose of the research, what it aims to achieve, and the expected outcomes.
  • Hypothesis: An educated guess or prediction about the relationship between variables, which is tested during the research.
  • Variables: The factors or elements that are being studied, measured, or manipulated in the research.
  • Methodology: The overall approach and methods that will be used to conduct the research.
  • Scope and Limitations: A description of the boundaries and parameters of the research, including what will be included and excluded, and any potential constraints or limitations.
  • Significance: A statement that explains the potential value or impact of the research, its contribution to the field of study, and how it will add to the existing knowledge.

Research Problem Examples

Following are some Research Problem Examples:

Research Problem Examples in Psychology are as follows:

  • Exploring the impact of social media on adolescent mental health.
  • Investigating the effectiveness of cognitive-behavioral therapy for treating anxiety disorders.
  • Studying the impact of prenatal stress on child development outcomes.
  • Analyzing the factors that contribute to addiction and relapse in substance abuse treatment.
  • Examining the impact of personality traits on romantic relationships.

Research Problem Examples in Sociology are as follows:

  • Investigating the relationship between social support and mental health outcomes in marginalized communities.
  • Studying the impact of globalization on labor markets and employment opportunities.
  • Analyzing the causes and consequences of gentrification in urban neighborhoods.
  • Investigating the impact of family structure on social mobility and economic outcomes.
  • Examining the effects of social capital on community development and resilience.

Research Problem Examples in Economics are as follows:

  • Studying the effects of trade policies on economic growth and development.
  • Analyzing the impact of automation and artificial intelligence on labor markets and employment opportunities.
  • Investigating the factors that contribute to economic inequality and poverty.
  • Examining the impact of fiscal and monetary policies on inflation and economic stability.
  • Studying the relationship between education and economic outcomes, such as income and employment.

Research Problem Examples in Political Science are as follows:

  • Analyzing the causes and consequences of political polarization and partisan behavior.
  • Investigating the impact of social movements on political change and policymaking.
  • Studying the role of media and communication in shaping public opinion and political discourse.
  • Examining the effectiveness of electoral systems in promoting democratic governance and representation.
  • Investigating the impact of international organizations and agreements on global governance and security.

Research Problem Examples in Environmental Science are as follows:

  • Studying the impact of air pollution on human health and well-being.
  • Investigating the effects of deforestation on climate change and biodiversity loss.
  • Analyzing the impact of ocean acidification on marine ecosystems and food webs.
  • Studying the relationship between urban development and ecological resilience.
  • Examining the effectiveness of environmental policies and regulations in promoting sustainability and conservation.

Research Problem Examples in Education are as follows:

  • Investigating the impact of teacher training and professional development on student learning outcomes.
  • Studying the effectiveness of technology-enhanced learning in promoting student engagement and achievement.
  • Analyzing the factors that contribute to achievement gaps and educational inequality.
  • Examining the impact of parental involvement on student motivation and achievement.
  • Studying the effectiveness of alternative educational models, such as homeschooling and online learning.

Research Problem Examples in History are as follows:

  • Analyzing the social and economic factors that contributed to the rise and fall of ancient civilizations.
  • Investigating the impact of colonialism on indigenous societies and cultures.
  • Studying the role of religion in shaping political and social movements throughout history.
  • Analyzing the impact of the Industrial Revolution on economic and social structures.
  • Examining the causes and consequences of global conflicts, such as World War I and II.

Research Problem Examples in Business are as follows:

  • Studying the impact of corporate social responsibility on brand reputation and consumer behavior.
  • Investigating the effectiveness of leadership development programs in improving organizational performance and employee satisfaction.
  • Analyzing the factors that contribute to successful entrepreneurship and small business development.
  • Examining the impact of mergers and acquisitions on market competition and consumer welfare.
  • Studying the effectiveness of marketing strategies and advertising campaigns in promoting brand awareness and sales.

Research Problem Example for Students

An Example of a Research Problem for Students could be:

“How does social media usage affect the academic performance of high school students?”

This research problem is specific, measurable, and relevant. It is specific because it focuses on a particular area of interest, which is the impact of social media on academic performance. It is measurable because the researcher can collect data on social media usage and academic performance to evaluate the relationship between the two variables. It is relevant because it addresses a current and important issue that affects high school students.

To conduct research on this problem, the researcher could use various methods, such as surveys, interviews, and statistical analysis of academic records. The results of the study could provide insights into the relationship between social media usage and academic performance, which could help educators and parents develop effective strategies for managing social media use among students.
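
To make the analysis step concrete, here is a minimal sketch of how the quantitative part of such a study might be examined, assuming survey responses on daily social media use have been matched to students' academic records. The library used (scipy) and all numbers are illustrative assumptions, not part of the original example.

```python
# Hedged sketch: invented survey data matched to invented GPA records.
from scipy import stats

social_media_hours = [1.0, 2.5, 4.0, 0.5, 3.0, 5.5, 2.0, 6.0, 1.5, 3.5]
gpa                = [3.8, 3.4, 2.9, 3.9, 3.2, 2.5, 3.5, 2.4, 3.7, 3.0]

# Pearson correlation between daily social media hours and GPA.
r, p_value = stats.pearsonr(social_media_hours, gpa)
print(f"Pearson r = {r:.2f}, p = {p_value:.4f}")
# A negative r would suggest that heavier social media use tends to accompany
# lower GPA in this sample -- a correlation, not proof that one causes the other.
```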

Another example of a research problem for students:

“Does participation in extracurricular activities impact the academic performance of middle school students?”

This research problem is also specific, measurable, and relevant. It is specific because it focuses on a particular type of activity, extracurricular activities, and its impact on academic performance. It is measurable because the researcher can collect data on students’ participation in extracurricular activities and their academic performance to evaluate the relationship between the two variables. It is relevant because extracurricular activities are an essential part of the middle school experience, and their impact on academic performance is a topic of interest to educators and parents.

To conduct research on this problem, the researcher could use surveys, interviews, and academic records analysis. The results of the study could provide insights into the relationship between extracurricular activities and academic performance, which could help educators and parents make informed decisions about the types of activities that are most beneficial for middle school students.

Applications of Research Problem

Applications of Research Problem are as follows:

  • Academic research: Research problems are used to guide academic research in various fields, including social sciences, natural sciences, humanities, and engineering. Researchers use research problems to identify gaps in knowledge, address theoretical or practical problems, and explore new areas of study.
  • Business research: Research problems are used to guide business research, including market research, consumer behavior research, and organizational research. Researchers use research problems to identify business challenges, explore opportunities, and develop strategies for business growth and success.
  • Healthcare research: Research problems are used to guide healthcare research, including medical research, clinical research, and health services research. Researchers use research problems to identify healthcare challenges, develop new treatments and interventions, and improve healthcare delivery and outcomes.
  • Public policy research: Research problems are used to guide public policy research, including policy analysis, program evaluation, and policy development. Researchers use research problems to identify social issues, assess the effectiveness of existing policies and programs, and develop new policies and programs to address societal challenges.
  • Environmental research: Research problems are used to guide environmental research, including environmental science, ecology, and environmental management. Researchers use research problems to identify environmental challenges, assess the impact of human activities on the environment, and develop sustainable solutions to protect the environment.

Purpose of Research Problems

The purpose of research problems is to identify an area of study that requires further investigation and to formulate a clear, concise and specific research question. A research problem defines the specific issue or problem that needs to be addressed and serves as the foundation for the research project.

Identifying a research problem is important because it helps to establish the direction of the research and sets the stage for the research design, methods, and analysis. It also ensures that the research is relevant and contributes to the existing body of knowledge in the field.

A well-formulated research problem should:

  • Clearly define the specific issue or problem that needs to be investigated
  • Be specific and narrow enough to be manageable in terms of time, resources, and scope
  • Be relevant to the field of study and contribute to the existing body of knowledge
  • Be feasible and realistic in terms of available data, resources, and research methods
  • Be interesting and intellectually stimulating for the researcher and potential readers or audiences.

Characteristics of Research Problem

The characteristics of a research problem refer to the specific features that a problem must possess to qualify as a suitable research topic. Some of the key characteristics of a research problem are:

  • Clarity: A research problem should be clearly defined and stated in a way that it is easily understood by the researcher and other readers. The problem should be specific, unambiguous, and easy to comprehend.
  • Relevance: A research problem should be relevant to the field of study, and it should contribute to the existing body of knowledge. The problem should address a gap in knowledge, a theoretical or practical problem, or a real-world issue that requires further investigation.
  • Feasibility: A research problem should be feasible in terms of the availability of data, resources, and research methods. It should be realistic and practical to conduct the study within the available time, budget, and resources.
  • Novelty: A research problem should be novel or original in some way. It should represent a new or innovative perspective on an existing problem, or it should explore a new area of study or apply an existing theory to a new context.
  • Importance: A research problem should be important or significant in terms of its potential impact on the field or society. It should have the potential to produce new knowledge, advance existing theories, or address a pressing societal issue.
  • Manageability: A research problem should be manageable in terms of its scope and complexity. It should be specific enough to be investigated within the available time and resources, and it should be broad enough to provide meaningful results.

Advantages of Research Problem

The advantages of a well-defined research problem are as follows:

  • Focus: A research problem provides a clear and focused direction for the research study. It ensures that the study stays on track and does not deviate from the research question.
  • Clarity: A research problem provides clarity and specificity to the research question. It ensures that the research is not too broad or too narrow and that the research objectives are clearly defined.
  • Relevance: A research problem ensures that the research study is relevant to the field of study and contributes to the existing body of knowledge. It addresses gaps in knowledge, theoretical or practical problems, or real-world issues that require further investigation.
  • Feasibility: A research problem ensures that the research study is feasible in terms of the availability of data, resources, and research methods. It ensures that the research is realistic and practical to conduct within the available time, budget, and resources.
  • Novelty: A research problem ensures that the research study is original and innovative. It represents a new or unique perspective on an existing problem, explores a new area of study, or applies an existing theory to a new context.
  • Importance: A research problem ensures that the research study is important and significant in terms of its potential impact on the field or society. It has the potential to produce new knowledge, advance existing theories, or address a pressing societal issue.
  • Rigor: A research problem ensures that the research study is rigorous and follows established research methods and practices. It ensures that the research is conducted in a systematic, objective, and unbiased manner.


Research Design 101

Everything You Need To Get Started (With Examples)

By: Derek Jansen (MBA) | Reviewers: Eunice Rautenbach (DTech) & Kerryn Warren (PhD) | April 2023

Research design for qualitative and quantitative studies

Navigating the world of research can be daunting, especially if you’re a first-time researcher. One concept you’re bound to run into fairly early in your research journey is that of “ research design ”. Here, we’ll guide you through the basics using practical examples , so that you can approach your research with confidence.

Overview: Research Design 101

  • What is research design?
  • Research design types for quantitative studies
  • Video explainer: quantitative research design
  • Research design types for qualitative studies
  • Video explainer: qualitative research design
  • How to choose a research design
  • Key takeaways

Research design refers to the overall plan, structure or strategy that guides a research project , from its conception to the final data analysis. A good research design serves as the blueprint for how you, as the researcher, will collect and analyse data while ensuring consistency, reliability and validity throughout your study.

Understanding different types of research designs is essential, as it helps ensure that your approach is suitable given your research aims, objectives and questions, as well as the resources you have available to you. Without a clear big-picture view of how you’ll design your research, you run the risk of making misaligned choices in terms of your methodology – especially your sampling, data collection and data analysis decisions.

The problem with defining research design…

One of the reasons students struggle with a clear definition of research design is because the term is used very loosely across the internet, and even within academia.

Some sources claim that the three research design types are qualitative, quantitative and mixed methods, which isn’t quite accurate (these just refer to the type of data that you’ll collect and analyse). Other sources state that research design refers to the sum of all your design choices, suggesting it’s more like a research methodology. Others run off on other less common tangents. No wonder there’s confusion!

In this article, we’ll clear up the confusion. We’ll explain the most common research design types for both qualitative and quantitative research projects, whether that is for a full dissertation or thesis, or a smaller research paper or article.


Research Design: Quantitative Studies

Quantitative research involves collecting and analysing data in a numerical form. Broadly speaking, there are four types of quantitative research designs: descriptive, correlational, experimental, and quasi-experimental.

Descriptive Research Design

As the name suggests, descriptive research design focuses on describing existing conditions, behaviours, or characteristics by systematically gathering information without manipulating any variables. In other words, there is no intervention on the researcher’s part – only data collection.

For example, if you’re studying smartphone addiction among adolescents in your community, you could deploy a survey to a sample of teens asking them to rate their agreement with certain statements that relate to smartphone addiction. The collected data would then provide insight regarding how widespread the issue may be – in other words, it would describe the situation.

The key defining attribute of this type of research design is that it purely describes the situation. In other words, descriptive research design does not explore potential relationships between different variables or the causes that may underlie those relationships. Therefore, descriptive research is useful for generating insight into a research problem by describing its characteristics. By doing so, it can provide valuable insights and is often used as a precursor to other research design types.
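
If you’re curious what “purely describing the situation” looks like in practice, here’s a rough sketch that summarises hypothetical Likert-scale responses from the smartphone-addiction survey described above. The data and the 1–5 agreement scale are invented for illustration, and only standard-library functions are used.

```python
# Hedged sketch: invented Likert responses (1 = strongly disagree, 5 = strongly agree).
import statistics
from collections import Counter

responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4, 1, 5, 4, 3, 5]  # one survey item

print("n               :", len(responses))
print("mean agreement  :", round(statistics.mean(responses), 2))
print("median          :", statistics.median(responses))
print("response counts :", dict(sorted(Counter(responses).items())))
print("% agree (4 or 5):", f"{sum(r >= 4 for r in responses) / len(responses):.0%}")
# Note: this only *describes* the sample -- no variables are manipulated and no
# relationships are tested, which is the defining feature of a descriptive design.
```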

Correlational Research Design

Correlational design is a popular choice for researchers aiming to identify and measure the relationship between two or more variables without manipulating them. In other words, this type of research design is useful when you want to know whether a change in one thing tends to be accompanied by a change in another thing.

For example, if you wanted to explore the relationship between exercise frequency and overall health, you could use a correlational design to help you achieve this. In this case, you might gather data on participants’ exercise habits, as well as records of their health indicators like blood pressure, heart rate, or body mass index. Thereafter, you’d use a statistical test to assess whether there’s a relationship between the two variables (exercise frequency and health).

As you can see, correlational research design is useful when you want to explore potential relationships between variables that cannot be manipulated or controlled for ethical, practical, or logistical reasons. It is particularly helpful in terms of developing predictions, and given that it doesn’t involve the manipulation of variables, it can be implemented at a large scale more easily than experimental designs (which we’ll look at next).

That said, it’s important to keep in mind that correlational research design has limitations – most notably that it cannot be used to establish causality. In other words, correlation does not equal causation. To establish causality, you’ll need to move into the realm of experimental design, coming up next…
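
For instance, the exercise-and-health example above could be analysed along these lines. This is a minimal sketch assuming pandas is available; the participant records are invented for illustration.

```python
# Hedged sketch: invented records of exercise frequency and two health indicators.
import pandas as pd

df = pd.DataFrame({
    "exercise_per_week":  [0, 1, 2, 3, 3, 4, 5, 5, 6, 7],
    "resting_heart_rate": [78, 76, 74, 70, 72, 68, 66, 64, 63, 60],
    "systolic_bp":        [138, 135, 130, 128, 126, 124, 122, 118, 117, 115],
})

# Pairwise correlations between exercise frequency and each health indicator.
print(df.corr(method="pearson").round(2)["exercise_per_week"])
# Strong negative correlations here would describe a relationship only --
# they would not show that exercise *causes* the better health indicators.
```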

Experimental Research Design

Experimental research design is used to determine if there is a causal relationship between two or more variables. With this type of research design, you, as the researcher, manipulate one variable (the independent variable) while controlling other extraneous variables, and then measure the effect on the outcome (the dependent variable). Doing so allows you to observe the effect of the former on the latter and draw conclusions about potential causality.

For example, if you wanted to measure if/how different types of fertiliser affect plant growth, you could set up several groups of plants, with each group receiving a different type of fertiliser, as well as one with no fertiliser at all. You could then measure how much each plant group grew (on average) over time and compare the results from the different groups to see which fertiliser was most effective.
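
Here’s a minimal sketch of how that fertiliser comparison could be analysed, assuming scipy is available. The growth measurements are invented purely for illustration.

```python
# Hedged sketch: invented plant growth (cm) after six weeks, per condition.
from scipy import stats

fertiliser_a = [12.1, 13.4, 12.8, 14.0, 13.1]
fertiliser_b = [15.2, 14.8, 16.1, 15.5, 14.9]
control      = [10.2,  9.8, 11.0, 10.5, 10.1]

# One-way ANOVA: do mean growth values differ across the three groups?
f_stat, p_value = stats.f_oneway(fertiliser_a, fertiliser_b, control)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests at least one group mean differs; post-hoc comparisons
# would be needed to identify which fertiliser drives the difference.
```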

Overall, experimental research design provides researchers with a powerful way to identify and measure causal relationships (and the direction of causality) between variables. However, developing a rigorous experimental design can be challenging as it’s not always easy to control all the variables in a study. This often results in smaller sample sizes , which can reduce the statistical power and generalisability of the results.

Moreover, experimental research design requires random assignment. This means that the researcher needs to assign participants to different groups or conditions in a way that each participant has an equal chance of being assigned to any group (note that this is not the same as random sampling). Doing so helps reduce the potential for bias and confounding variables. This need for random assignment can lead to ethics-related issues. For example, withholding a potentially beneficial medical treatment from a control group may be considered unethical in certain situations.
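
In code, simple random assignment might look something like the sketch below. The participant IDs are hypothetical; this illustrates the principle rather than prescribing a procedure.

```python
# Hedged sketch: randomly assigning 20 hypothetical participants to two conditions.
import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # invented participant IDs

random.seed(42)               # fixed seed so the allocation is reproducible/auditable
random.shuffle(participants)  # every participant has an equal chance of any position

# Split the shuffled list into two equal-sized conditions.
# This is random *assignment* of an existing sample, not random *sampling*.
treatment, control = participants[:10], participants[10:]
print("Treatment:", treatment)
print("Control:  ", control)
```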

Quasi-Experimental Research Design

Quasi-experimental research design is used when the research aims involve identifying causal relations , but one cannot (or doesn’t want to) randomly assign participants to different groups (for practical or ethical reasons). Instead, with a quasi-experimental research design, the researcher relies on existing groups or pre-existing conditions to form groups for comparison.

For example, if you were studying the effects of a new teaching method on student achievement in a particular school district, you may be unable to randomly assign students to either group and instead have to choose classes or schools that already use different teaching methods. This way, you still achieve separate groups, without having to assign participants to specific groups yourself.

Naturally, quasi-experimental research designs have limitations when compared to experimental designs. Given that participant assignment is not random, it’s more difficult to confidently establish causality between variables, and, as a researcher, you have less control over other variables that may impact findings.

All that said, quasi-experimental designs can still be valuable in research contexts where random assignment is not possible and can often be undertaken on a much larger scale than experimental research, thus increasing the statistical power of the results. What’s important is that you, as the researcher, understand the limitations of the design and conduct your quasi-experiment as rigorously as possible, paying careful attention to any potential confounding variables .
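
As an illustration (not a prescribed analysis), comparing the two intact classes from the teaching-method example might look something like this, with invented test scores; the caveat in the comments is the important part.

```python
# Hedged sketch: invented end-of-term scores for two pre-existing classes.
from scipy import stats

new_method_class   = [72, 75, 78, 74, 80, 77, 73, 79]
usual_method_class = [68, 71, 70, 66, 73, 69, 72, 67]

t_stat, p_value = stats.ttest_ind(new_method_class, usual_method_class)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# Because the classes were not randomly assigned, a significant difference could
# still reflect pre-existing differences (e.g., prior attainment), so any causal
# claim from a quasi-experiment must be heavily qualified.
```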

The four most common quantitative research design types are descriptive, correlational, experimental and quasi-experimental.

Research Design: Qualitative Studies

There are many different research design types when it comes to qualitative studies, but here we’ll narrow our focus to explore the “Big 4”. Specifically, we’ll look at phenomenological design, grounded theory design, ethnographic design, and case study design.

Phenomenological Research Design

Phenomenological design involves exploring the meaning of lived experiences and how they are perceived by individuals. This type of research design seeks to understand people’s perspectives , emotions, and behaviours in specific situations. Here, the aim for researchers is to uncover the essence of human experience without making any assumptions or imposing preconceived ideas on their subjects.

For example, you could adopt a phenomenological design to study why cancer survivors have such varied perceptions of their lives after overcoming their disease. This could be achieved by interviewing survivors and then analysing the data using a qualitative analysis method such as thematic analysis to identify commonalities and differences.

Phenomenological research design typically involves in-depth interviews or open-ended questionnaires to collect rich, detailed data about participants’ subjective experiences. This richness is one of the key strengths of phenomenological research design but, naturally, it also has limitations. These include potential biases in data collection and interpretation and the lack of generalisability of findings to broader populations.

Grounded Theory Research Design

Grounded theory (also referred to as “GT”) aims to develop theories by continuously and iteratively analysing and comparing data collected from a relatively large number of participants in a study. It takes an inductive (bottom-up) approach, with a focus on letting the data “speak for itself”, without being influenced by preexisting theories or the researcher’s preconceptions.

As an example, let’s assume your research aims involved understanding how people cope with chronic pain from a specific medical condition, with a view to developing a theory around this. In this case, grounded theory design would allow you to explore this concept thoroughly without preconceptions about what coping mechanisms might exist. You may find that some patients prefer cognitive-behavioural therapy (CBT) while others prefer to rely on herbal remedies. Based on multiple, iterative rounds of analysis, you could then develop a theory in this regard, derived directly from the data (as opposed to other preexisting theories and models).

Grounded theory typically involves collecting data through interviews or observations and then analysing it to identify patterns and themes that emerge from the data. These emerging ideas are then validated by collecting more data until a saturation point is reached (i.e., no new information can be squeezed from the data). From that base, a theory can then be developed .

As you can see, grounded theory is ideally suited to studies where the research aims involve theory generation, especially in under-researched areas. Keep in mind though that this type of research design can be quite time-intensive, given the need for multiple rounds of data collection and analysis.


Ethnographic Research Design

Ethnographic design involves observing and studying a culture-sharing group of people in their natural setting to gain insight into their behaviours, beliefs, and values. The focus here is on observing participants in their natural environment (as opposed to a controlled environment). This typically involves the researcher spending an extended period of time with the participants in their environment, carefully observing and taking field notes .

All of this is not to say that ethnographic research design relies purely on observation. On the contrary, this design typically also involves in-depth interviews to explore participants’ views, beliefs, etc. However, unobtrusive observation is a core component of the ethnographic approach.

As an example, an ethnographer may study how different communities celebrate traditional festivals or how individuals from different generations interact with technology differently. This may involve a lengthy period of observation, combined with in-depth interviews to further explore specific areas of interest that emerge as a result of the observations that the researcher has made.

As you can probably imagine, ethnographic research design has the ability to provide rich, contextually embedded insights into the socio-cultural dynamics of human behaviour within a natural, uncontrived setting. Naturally, however, it does come with its own set of challenges, including researcher bias (since the researcher can become quite immersed in the group), participant confidentiality and, predictably, ethical complexities . All of these need to be carefully managed if you choose to adopt this type of research design.

Case Study Design

With case study research design, you, as the researcher, investigate a single individual (or a single group of individuals) to gain an in-depth understanding of their experiences, behaviours or outcomes. Unlike other research designs that are aimed at larger sample sizes, case studies offer a deep dive into the specific circumstances surrounding a person, group of people, event or phenomenon, generally within a bounded setting or context .

As an example, a case study design could be used to explore the factors influencing the success of a specific small business. This would involve diving deeply into the organisation to explore and understand what makes it tick – from marketing to HR to finance. In terms of data collection, this could include interviews with staff and management, review of policy documents and financial statements, surveying customers, etc.

While the above example is focused squarely on one organisation, it’s worth noting that case study research designs can have different variations, including single-case, multiple-case and longitudinal designs. As you can see in the example, a single-case design involves intensely examining a single entity to understand its unique characteristics and complexities. Conversely, in a multiple-case design, multiple cases are compared and contrasted to identify patterns and commonalities. Lastly, in a longitudinal case design, a single case or multiple cases are studied over an extended period of time to understand how factors develop over time.

As you can see, a case study research design is particularly useful where a deep and contextualised understanding of a specific phenomenon or issue is desired. However, this strength is also its weakness. In other words, you can’t generalise the findings from a case study to the broader population. So, keep this in mind if you’re considering going the case study route.

Case study design often involves investigating an individual to gain an in-depth understanding of their experiences, behaviours or outcomes.

How To Choose A Research Design

Having worked through all of these potential research designs, you’d be forgiven for feeling a little overwhelmed and wondering, “But how do I decide which research design to use?”. While we could write an entire post covering that alone, here are a few factors to consider that will help you choose a suitable research design for your study.

Data type: The first determining factor is naturally the type of data you plan to be collecting – i.e., qualitative or quantitative. This may sound obvious, but we have to be clear about this – don’t try to use a quantitative research design on qualitative data (or vice versa)!

Research aim(s) and question(s): As with all methodological decisions, your research aim and research questions will heavily influence your research design. For example, if your research aims involve developing a theory from qualitative data, grounded theory would be a strong option. Similarly, if your research aims involve identifying and measuring causal relationships between variables, one of the experimental designs would likely be a better option.

Time: It’s essential that you consider any time constraints you have, as this will impact the type of research design you can choose. For example, if you’ve only got a month to complete your project, a lengthy design such as ethnography wouldn’t be a good fit.

Resources: Take into account the resources realistically available to you, as these need to factor into your research design choice. For example, if you require highly specialised lab equipment to execute an experimental design, you need to be sure that you’ll have access to that before you make a decision.

Keep in mind that when it comes to research, it’s important to manage your risks and play as conservatively as possible. If your entire project relies on you achieving a huge sample, having access to niche equipment or holding interviews with very difficult-to-reach participants, you’re creating risks that could kill your project. So, be sure to think through your choices carefully and make sure that you have backup plans for any existential risks. Remember that a relatively simple methodology executed well will typically earn better marks than a highly complex methodology executed poorly.


Recap: Key Takeaways

We’ve covered a lot of ground here. Let’s recap by looking at the key takeaways:

  • Research design refers to the overall plan, structure or strategy that guides a research project, from its conception to the final analysis of data.
  • Research designs for quantitative studies include descriptive, correlational, experimental and quasi-experimental designs.
  • Research designs for qualitative studies include phenomenological, grounded theory, ethnographic and case study designs.
  • When choosing a research design, you need to consider a variety of factors, including the type of data you’ll be working with, your research aims and questions, your time and the resources available to you.

If you need a helping hand with your research design (or any other aspect of your research), check out our private coaching services.



Organizing Your Social Sciences Research Paper

Types of Research Designs

Introduction

Before beginning your paper, you need to decide how you plan to design the study .

The research design refers to the overall strategy and analytical approach that you have chosen in order to integrate, in a coherent and logical way, the different components of the study, thus ensuring that the research problem will be thoroughly investigated. It constitutes the blueprint for the collection, measurement, and interpretation of information and data. Note that the research problem determines the type of design you choose, not the other way around!

De Vaus, D. A. Research Design in Social Research . London: SAGE, 2001; Trochim, William M.K. Research Methods Knowledge Base. 2006.

General Structure and Writing Style

The function of a research design is to ensure that the evidence obtained enables you to effectively address the research problem logically and as unambiguously as possible . In social sciences research, obtaining information relevant to the research problem generally entails specifying the type of evidence needed to test the underlying assumptions of a theory, to evaluate a program, or to accurately describe and assess meaning related to an observable phenomenon.

With this in mind, a common mistake made by researchers is that they begin their investigations before they have thought critically about what information is required to address the research problem. Without attending to these design issues beforehand, the overall research problem will not be adequately addressed and any conclusions drawn will run the risk of being weak and unconvincing. As a consequence, the overall validity of the study will be undermined.

The length and complexity of describing the research design in your paper can vary considerably, but any well-developed description will achieve the following :

  • Identify the research problem clearly and justify its selection, particularly in relation to any valid alternative designs that could have been used,
  • Review and synthesize previously published literature associated with the research problem,
  • Clearly and explicitly specify hypotheses [i.e., research questions] central to the problem,
  • Effectively describe the information and/or data which will be necessary for an adequate testing of the hypotheses and explain how such information and/or data will be obtained, and
  • Describe the methods of analysis to be applied to the data in determining whether or not the hypotheses are true or false.

The research design is usually incorporated into the introduction of your paper . You can obtain an overall sense of what to do by reviewing studies that have utilized the same research design [e.g., using a case study approach]. This can help you develop an outline to follow for your own paper.

NOTE: Use the SAGE Research Methods Online and Cases and the SAGE Research Methods Videos databases to search for scholarly resources on how to apply specific research designs and methods . The Research Methods Online database contains links to more than 175,000 pages of SAGE publisher's book, journal, and reference content on quantitative, qualitative, and mixed research methodologies. Also included is a collection of case studies of social research projects that can be used to help you better understand abstract or complex methodological concepts. The Research Methods Videos database contains hours of tutorials, interviews, video case studies, and mini-documentaries covering the entire research process.

Creswell, John W. and J. David Creswell. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches . 5th edition. Thousand Oaks, CA: Sage, 2018; De Vaus, D. A. Research Design in Social Research . London: SAGE, 2001; Gorard, Stephen. Research Design: Creating Robust Approaches for the Social Sciences . Thousand Oaks, CA: Sage, 2013; Leedy, Paul D. and Jeanne Ellis Ormrod. Practical Research: Planning and Design . Tenth edition. Boston, MA: Pearson, 2013; Vogt, W. Paul, Dianna C. Gardner, and Lynne M. Haeffele. When to Use What Research Design . New York: Guilford, 2012.

Action Research Design

Definition and Purpose

The essentials of action research design follow a characteristic cycle whereby initially an exploratory stance is adopted, where an understanding of a problem is developed and plans are made for some form of interventionary strategy. Then the intervention is carried out [the "action" in action research] during which time, pertinent observations are collected in various forms. The new interventional strategies are carried out, and this cyclic process repeats, continuing until a sufficient understanding of [or a valid implementation solution for] the problem is achieved. The protocol is iterative or cyclical in nature and is intended to foster deeper understanding of a given situation, starting with conceptualizing and particularizing the problem and moving through several interventions and evaluations.

What do these studies tell you?

  • This is a collaborative and adaptive research design that lends itself to use in work or community situations.
  • Design focuses on pragmatic and solution-driven research outcomes rather than testing theories.
  • When practitioners use action research, it has the potential to increase the amount they learn consciously from their experience; the action research cycle can be regarded as a learning cycle.
  • Action research studies often have direct and obvious relevance to improving practice and advocating for change.
  • There are no hidden controls or preemption of direction by the researcher.

What don't these studies tell you?

  • It is harder to do than conducting conventional research because the researcher takes on responsibilities of advocating for change as well as for researching the topic.
  • Action research is much harder to write up because it is less likely that you can use a standard format to report your findings effectively [i.e., data is often in the form of stories or observation].
  • Personal over-involvement of the researcher may bias research results.
  • The cyclic nature of action research to achieve its twin outcomes of action [e.g. change] and research [e.g. understanding] is time-consuming and complex to conduct.
  • Advocating for change usually requires buy-in from study participants.

Coghlan, David and Mary Brydon-Miller. The Sage Encyclopedia of Action Research . Thousand Oaks, CA:  Sage, 2014; Efron, Sara Efrat and Ruth Ravid. Action Research in Education: A Practical Guide . New York: Guilford, 2013; Gall, Meredith. Educational Research: An Introduction . Chapter 18, Action Research. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007; Gorard, Stephen. Research Design: Creating Robust Approaches for the Social Sciences . Thousand Oaks, CA: Sage, 2013; Kemmis, Stephen and Robin McTaggart. “Participatory Action Research.” In Handbook of Qualitative Research . Norman Denzin and Yvonna S. Lincoln, eds. 2nd ed. (Thousand Oaks, CA: SAGE, 2000), pp. 567-605; McNiff, Jean. Writing and Doing Action Research . London: Sage, 2014; Reason, Peter and Hilary Bradbury. Handbook of Action Research: Participative Inquiry and Practice . Thousand Oaks, CA: SAGE, 2001.

Case Study Design

A case study is an in-depth study of a particular research problem rather than a sweeping statistical survey or comprehensive comparative inquiry. It is often used to narrow down a very broad field of research into one or a few easily researchable examples. The case study research design is also useful for testing whether a specific theory and model actually applies to phenomena in the real world. It is a useful design when not much is known about an issue or phenomenon.

  • Approach excels at bringing us to an understanding of a complex issue through detailed contextual analysis of a limited number of events or conditions and their relationships.
  • A researcher using a case study design can apply a variety of methodologies and rely on a variety of sources to investigate a research problem.
  • Design can extend experience or add strength to what is already known through previous research.
  • Social scientists, in particular, make wide use of this research design to examine contemporary real-life situations and provide the basis for the application of concepts and theories and the extension of methodologies.
  • The design can provide detailed descriptions of specific and rare cases.
  • A single or small number of cases offers little basis for establishing reliability or to generalize the findings to a wider population of people, places, or things.
  • Intense exposure to the study of a case may bias a researcher's interpretation of the findings.
  • Design does not facilitate assessment of cause and effect relationships.
  • Vital information may be missing, making the case hard to interpret.
  • The case may not be representative or typical of the larger problem being investigated.
  • If the criterion for selecting a case is that it represents a very unusual or unique phenomenon or problem for study, then your interpretation of the findings can only apply to that particular case.

Case Studies. Writing@CSU. Colorado State University; Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 4, Flexible Methods: Case Study Design. 2nd ed. New York: Columbia University Press, 1999; Gerring, John. “What Is a Case Study and What Is It Good for?” American Political Science Review 98 (May 2004): 341-354; Greenhalgh, Trisha, editor. Case Study Evaluation: Past, Present and Future Challenges . Bingley, UK: Emerald Group Publishing, 2015; Mills, Albert J. , Gabrielle Durepos, and Eiden Wiebe, editors. Encyclopedia of Case Study Research . Thousand Oaks, CA: SAGE Publications, 2010; Stake, Robert E. The Art of Case Study Research . Thousand Oaks, CA: SAGE, 1995; Yin, Robert K. Case Study Research: Design and Theory . Applied Social Research Methods Series, no. 5. 3rd ed. Thousand Oaks, CA: SAGE, 2003.

Causal Design

Causality studies may be thought of as understanding a phenomenon in terms of conditional statements in the form, “If X, then Y.” This type of research is used to measure what impact a specific change will have on existing norms and assumptions. Most social scientists seek causal explanations that reflect tests of hypotheses. Causal effect (nomothetic perspective) occurs when variation in one phenomenon, an independent variable, leads to or results, on average, in variation in another phenomenon, the dependent variable.

Conditions necessary for determining causality:

  • Empirical association -- a valid conclusion is based on finding an association between the independent variable and the dependent variable.
  • Appropriate time order -- to conclude that causation was involved, one must see that cases were exposed to variation in the independent variable before variation in the dependent variable.
  • Nonspuriousness -- a relationship between two variables that is not due to variation in a third variable (a simple way to check this condition is sketched after this list).
  • Causality research designs assist researchers in understanding why the world works the way it does through the process of proving a causal link between variables and by the process of eliminating other possibilities.
  • Replication is possible.
  • There is greater confidence the study has internal validity due to the systematic subject selection and equity of groups being compared.
  • Not all relationships are causal! The possibility always exists that, by sheer coincidence, two unrelated events appear to be related [e.g., Punxsutawney Phil could accurately predict the duration of Winter for five consecutive years but, the fact remains, he's just a big, furry rodent].
  • Conclusions about causal relationships are difficult to determine due to a variety of extraneous and confounding variables that exist in a social environment. This means causality can only be inferred, never proven.
  • If two variables are correlated, the cause must come before the effect. However, even though two variables might be causally related, it can sometimes be difficult to determine which variable comes first and, therefore, to establish which variable is the actual cause and which is the actual effect.
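
As a simple illustration of the nonspuriousness condition noted in the list above, the sketch below simulates two variables that are both driven by a third and shows how controlling for that third variable (via a partial correlation) collapses the apparent association. All variables and numbers are invented for the example.

```python
# Hedged sketch: a spurious association that disappears once the confounder is controlled.
import numpy as np

rng = np.random.default_rng(0)

z = rng.normal(25, 5, 500)           # confounding variable (e.g., a shared driver)
x = 2.0 * z + rng.normal(0, 3, 500)  # x depends on z
y = 0.5 * z + rng.normal(0, 1, 500)  # y also depends on z, but not on x

def partial_corr(x, y, z):
    """Correlation between x and y after removing the linear effect of z."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)  # residuals of x ~ z
    ry = y - np.polyval(np.polyfit(z, y, 1), z)  # residuals of y ~ z
    return np.corrcoef(rx, ry)[0, 1]

print("raw correlation:    ", round(np.corrcoef(x, y)[0, 1], 2))  # large but spurious
print("partial correlation:", round(partial_corr(x, y, z), 2))    # close to zero
```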

Beach, Derek and Rasmus Brun Pedersen. Causal Case Study Methods: Foundations and Guidelines for Comparing, Matching, and Tracing . Ann Arbor, MI: University of Michigan Press, 2016; Bachman, Ronet. The Practice of Research in Criminology and Criminal Justice . Chapter 5, Causation and Research Designs. 3rd ed. Thousand Oaks, CA: Pine Forge Press, 2007; Brewer, Ernest W. and Jennifer Kubn. “Causal-Comparative Design.” In Encyclopedia of Research Design . Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 125-132; Causal Research Design: Experimentation. Anonymous SlideShare Presentation; Gall, Meredith. Educational Research: An Introduction . Chapter 11, Nonexperimental Research: Correlational Designs. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007; Trochim, William M.K. Research Methods Knowledge Base. 2006.

Cohort Design

Often used in the medical sciences, but also found in the applied social sciences, a cohort study generally refers to a study conducted over a period of time involving members of a population which the subject or representative member comes from, and who are united by some commonality or similarity. Using a quantitative framework, a cohort study makes note of statistical occurrence within a specialized subgroup, united by same or similar characteristics that are relevant to the research problem being investigated, rather than studying statistical occurrence within the general population. Using a qualitative framework, cohort studies generally gather data using methods of observation. Cohorts can be either "open" or "closed."

  • Open Cohort Studies [dynamic populations, such as the population of Los Angeles] involve a population that is defined just by the state of being a part of the study in question (and being monitored for the outcome). Date of entry and exit from the study is individually defined; therefore, the size of the study population is not constant. In open cohort studies, researchers can only calculate rate-based data, such as incidence rates and variants thereof (a small calculation example follows this list).
  • Closed Cohort Studies [static populations, such as patients entered into a clinical trial] involve participants who enter into the study at one defining point in time and where it is presumed that no new participants can enter the cohort. Given this, the number of study participants remains constant (or can only decrease).
  • The use of cohorts is often mandatory because a randomized control study may be unethical. For example, you cannot deliberately expose people to asbestos, you can only study its effects on those who have already been exposed. Research that measures risk factors often relies upon cohort designs.
  • Because cohort studies measure potential causes before the outcome has occurred, they can demonstrate that these “causes” preceded the outcome, thereby avoiding the debate as to which is the cause and which is the effect.
  • Cohort analysis is highly flexible and can provide insight into effects over time and related to a variety of different types of changes [e.g., social, cultural, political, economic, etc.].
  • Either original data or secondary data can be used in this design.
  • In cases where a comparative analysis of two cohorts is made [e.g., studying the effects of one group exposed to asbestos and one that has not], a researcher cannot control for all other factors that might differ between the two groups. These factors are known as confounding variables.
  • Cohort studies can end up taking a long time to complete if the researcher must wait for the conditions of interest to develop within the group. This also increases the chance that key variables change during the course of the study, potentially impacting the validity of the findings.
  • Due to the lack of randomization in the cohort design, its external validity is lower than that of study designs where the researcher randomly assigns participants.
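
As a small illustration of the rate-based calculation mentioned in the list above, the sketch below computes an incidence rate from hypothetical open-cohort follow-up data. The participants and follow-up times are invented, and only plain Python is used.

```python
# Hedged sketch: incidence rate = new cases per unit of person-time at risk.
from dataclasses import dataclass

@dataclass
class Participant:
    years_followed: float    # person-time contributed before exit or event
    developed_outcome: bool  # whether the outcome occurred during follow-up

cohort = [Participant(4.0, False), Participant(2.5, True),
          Participant(6.0, False), Participant(1.5, True), Participant(5.0, False)]

events = sum(p.developed_outcome for p in cohort)
person_years = sum(p.years_followed for p in cohort)

incidence_rate = events / person_years
print(f"{events} events over {person_years} person-years "
      f"= {incidence_rate:.3f} cases per person-year "
      f"({incidence_rate * 1000:.0f} per 1,000 person-years)")
```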

Healy P, Devane D. “Methodological Considerations in Cohort Study Designs.” Nurse Researcher 18 (2011): 32-36; Glenn, Norval D, editor. Cohort Analysis . 2nd edition. Thousand Oaks, CA: Sage, 2005; Levin, Kate Ann. Study Design IV: Cohort Studies. Evidence-Based Dentistry 7 (2003): 51–52; Payne, Geoff. “Cohort Study.” In The SAGE Dictionary of Social Research Methods . Victor Jupp, editor. (Thousand Oaks, CA: Sage, 2006), pp. 31-33; Study Design 101. Himmelfarb Health Sciences Library. George Washington University, November 2011; Cohort Study. Wikipedia.

Cross-Sectional Design

Cross-sectional research designs have three distinctive features: no time dimension; a reliance on existing differences rather than change following intervention; and groups selected based on existing differences rather than random allocation. The cross-sectional design can only measure differences among a variety of people, subjects, or phenomena rather than a process of change. As such, researchers using this design can only employ a relatively passive approach to making causal inferences based on findings.

  • Cross-sectional studies provide a clear 'snapshot' of the outcome and the characteristics associated with it, at a specific point in time.
  • Unlike an experimental design, where there is an active intervention by the researcher to produce and measure change or to create differences, cross-sectional designs focus on studying and drawing inferences from existing differences between people, subjects, or phenomena.
  • Entails collecting data at and concerning one point in time. While longitudinal studies involve taking multiple measures over an extended period of time, cross-sectional research is focused on finding relationships between variables at one moment in time.
  • Groups identified for study are purposely selected based upon existing differences in the sample rather than seeking random sampling.
  • Cross-sectional studies can use data from a large number of subjects and, unlike observational studies, are not geographically bound.
  • Can estimate the prevalence of an outcome of interest because the sample is usually taken from the whole population [a brief calculation sketch follows this list].
  • Because cross-sectional designs generally use survey techniques to gather data, they are relatively inexpensive and take up little time to conduct.
  • Finding people, subjects, or phenomena to study that are very similar except in one specific variable can be difficult.
  • Results are static and time bound and, therefore, give no indication of a sequence of events or reveal historical or temporal contexts.
  • Studies cannot be utilized to establish cause and effect relationships.
  • This design only provides a snapshot of analysis so there is always the possibility that a study could have differing results if another time-frame had been chosen.
  • There is no follow up to the findings.
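
As a minimal illustration of the prevalence estimate mentioned in the list above, the following sketch uses hypothetical counts from a single survey wave and a simple normal-approximation confidence interval; all numbers are invented for illustration.

```python
# Minimal sketch: prevalence from one cross-sectional sample (hypothetical data).
import math

n_surveyed = 1200        # hypothetical number of people surveyed at one point in time
n_with_outcome = 252     # hypothetical number observed with the outcome of interest

prevalence = n_with_outcome / n_surveyed
se = math.sqrt(prevalence * (1 - prevalence) / n_surveyed)  # normal approximation
ci_low, ci_high = prevalence - 1.96 * se, prevalence + 1.96 * se

print(f"Prevalence: {prevalence:.1%} (95% CI {ci_low:.1%} to {ci_high:.1%})")
```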

Bethlehem, Jelke. "7: Cross-sectional Research." In Research Methodology in the Social, Behavioural and Life Sciences . Herman J Adèr and Gideon J Mellenbergh, editors. (London, England: Sage, 1999), pp. 110-43; Bourque, Linda B. “Cross-Sectional Design.” In  The SAGE Encyclopedia of Social Science Research Methods . Michael S. Lewis-Beck, Alan Bryman, and Tim Futing Liao. (Thousand Oaks, CA: 2004), pp. 230-231; Hall, John. “Cross-Sectional Survey Design.” In Encyclopedia of Survey Research Methods . Paul J. Lavrakas, ed. (Thousand Oaks, CA: Sage, 2008), pp. 173-174; Helen Barratt, Maria Kirwan. Cross-Sectional Studies: Design Application, Strengths and Weaknesses of Cross-Sectional Studies. Healthknowledge, 2009. Cross-Sectional Study. Wikipedia.

Descriptive Design

Descriptive research designs help provide answers to the questions of who, what, when, where, and how associated with a particular research problem; a descriptive study cannot conclusively ascertain answers to why. Descriptive research is used to obtain information concerning the current status of the phenomena and to describe "what exists" with respect to variables or conditions in a situation.
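
To make the idea of describing "what exists" concrete, here is a minimal sketch that summarizes a hypothetical snapshot of survey responses with counts and central-tendency measures only, without testing any hypothesis; the data and variable names are invented.

```python
# Minimal sketch: descriptive summary of hypothetical survey responses.
from collections import Counter
from statistics import mean, median

respondents = [
    {"age": 34, "employment": "full-time"},
    {"age": 29, "employment": "part-time"},
    {"age": 41, "employment": "full-time"},
    {"age": 37, "employment": "unemployed"},
    {"age": 25, "employment": "full-time"},
]

ages = [r["age"] for r in respondents]
print(f"Age: mean = {mean(ages):.1f}, median = {median(ages)}")
print("Employment status:", Counter(r["employment"] for r in respondents))
```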

  • The subject is observed in a completely natural and unchanged environment. True experiments, whilst giving analyzable data, often adversely influence the normal behavior of the subject [a.k.a., the Heisenberg effect, whereby measurements of certain systems cannot be made without affecting the systems].
  • Descriptive research is often used as a precursor to more quantitative research designs, with the general overview giving some valuable pointers as to what variables are worth testing quantitatively.
  • If the limitations are understood, they can be a useful tool in developing a more focused study.
  • Descriptive studies can yield rich data that lead to important recommendations in practice.
  • Approach collects a large amount of data for detailed analysis.
  • The results from descriptive research cannot be used to discover a definitive answer or to disprove a hypothesis.
  • Because descriptive designs often utilize observational methods [as opposed to quantitative methods], the results cannot be replicated.
  • The descriptive function of research is heavily dependent on instrumentation for measurement and observation.

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 5, Flexible Methods: Descriptive Research. 2nd ed. New York: Columbia University Press, 1999; Given, Lisa M. "Descriptive Research." In Encyclopedia of Measurement and Statistics . Neil J. Salkind and Kristin Rasmussen, editors. (Thousand Oaks, CA: Sage, 2007), pp. 251-254; McNabb, Connie. Descriptive Research Methodologies. Powerpoint Presentation; Shuttleworth, Martyn. Descriptive Research Design, September 26, 2008; Erickson, G. Scott. "Descriptive Research Design." In New Methods of Market Research and Analysis . (Northampton, MA: Edward Elgar Publishing, 2017), pp. 51-77; Sahin, Sagufta, and Jayanta Mete. "A Brief Study on Descriptive Research: Its Nature and Application in Social Science." International Journal of Research and Analysis in Humanities 1 (2021): 11; K. Swatzell and P. Jennings. “Descriptive Research: The Nuts and Bolts.” Journal of the American Academy of Physician Assistants 20 (2007), pp. 55-56; Kane, E. Doing Your Own Research: Basic Descriptive Research in the Social Sciences and Humanities . London: Marion Boyars, 1985.

Experimental Design

A blueprint of the procedure that enables the researcher to maintain control over all factors that may affect the result of an experiment. In doing this, the researcher attempts to determine or predict what may occur. Experimental research is often used where there is time priority in a causal relationship (cause precedes effect), there is consistency in a causal relationship (a cause will always lead to the same effect), and the magnitude of the correlation is great. The classic experimental design specifies an experimental group and a control group. The independent variable is administered to the experimental group and not to the control group, and both groups are measured on the same dependent variable. Subsequent experimental designs have used more groups and more measurements over longer periods. True experiments must have control, randomization, and manipulation.
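
The classic design described above can be illustrated with a minimal randomization sketch. The participant labels and group sizes are hypothetical, and in a real study the assignment procedure would be pre-specified and concealed.

```python
# Minimal sketch: random assignment to experimental and control groups.
import random

participants = [f"P{i:02d}" for i in range(1, 21)]   # 20 hypothetical subjects
random.shuffle(participants)                          # randomization
experimental_group = participants[:10]                # receives the independent variable
control_group = participants[10:]                     # does not

print("Experimental:", experimental_group)
print("Control:     ", control_group)
# Both groups would then be measured on the same dependent variable and
# their outcomes compared to estimate the treatment effect.
```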

  • Experimental research allows the researcher to control the situation. In so doing, it allows researchers to answer the question, “What causes something to occur?”
  • Permits the researcher to identify cause and effect relationships between variables and to distinguish placebo effects from treatment effects.
  • Experimental research designs support the ability to limit alternative explanations and to infer direct causal relationships in the study.
  • Approach provides the highest level of evidence for single studies.
  • The design is artificial, and results may not generalize well to the real world.
  • The artificial settings of experiments may alter the behaviors or responses of participants.
  • Experimental designs can be costly if special equipment or facilities are needed.
  • Some research problems cannot be studied using an experiment because of ethical or technical reasons.
  • Difficult to apply ethnographic and other qualitative methods to experimentally designed studies.

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 7, Flexible Methods: Experimental Research. 2nd ed. New York: Columbia University Press, 1999; Chapter 2: Research Design, Experimental Designs. School of Psychology, University of New England, 2000; Chow, Siu L. "Experimental Design." In Encyclopedia of Research Design . Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 448-453; "Experimental Design." In Social Research Methods . Nicholas Walliman, editor. (London, England: Sage, 2006), pp, 101-110; Experimental Research. Research Methods by Dummies. Department of Psychology. California State University, Fresno, 2006; Kirk, Roger E. Experimental Design: Procedures for the Behavioral Sciences . 4th edition. Thousand Oaks, CA: Sage, 2013; Trochim, William M.K. Experimental Design. Research Methods Knowledge Base. 2006; Rasool, Shafqat. Experimental Research. Slideshare presentation.

Exploratory Design

An exploratory design is conducted about a research problem when there are few or no earlier studies to refer to or rely upon to predict an outcome. The focus is on gaining insights and familiarity for later investigation, or it is undertaken when research problems are in a preliminary stage of investigation. Exploratory designs are often used to establish an understanding of how best to proceed in studying an issue or what methodology would effectively apply to gathering information about the issue.

The goals of exploratory research are intended to produce the following possible insights:

  • Familiarity with basic details, settings, and concerns.
  • Well-grounded picture of the situation being developed.
  • Generation of new ideas and assumptions.
  • Development of tentative theories or hypotheses.
  • Determination about whether a study is feasible in the future.
  • Issues get refined for more systematic investigation and formulation of new research questions.
  • Direction for future research and techniques get developed.
  • Design is a useful approach for gaining background information on a particular topic.
  • Exploratory research is flexible and can address research questions of all types (what, why, how).
  • Provides an opportunity to define new terms and clarify existing concepts.
  • Exploratory research is often used to generate formal hypotheses and develop more precise research problems.
  • In the policy arena or applied to practice, exploratory studies help establish research priorities and where resources should be allocated.
  • Exploratory research generally utilizes small sample sizes and, thus, findings are typically not generalizable to the population at large.
  • The exploratory nature of the research inhibits the ability to make definitive conclusions about the findings; such studies provide insight but not definitive answers.
  • The research process underpinning exploratory studies is flexible but often unstructured, leading to only tentative results that have limited value to decision-makers.
  • Design lacks rigorous standards applied to methods of data gathering and analysis because one of the areas for exploration could be to determine what method or methodologies could best fit the research problem.

Cuthill, Michael. “Exploratory Research: Citizen Participation, Local Government, and Sustainable Development in Australia.” Sustainable Development 10 (2002): 79-89; Streb, Christoph K. "Exploratory Case Study." In Encyclopedia of Case Study Research . Albert J. Mills, Gabrielle Durepos and Eiden Wiebe, editors. (Thousand Oaks, CA: Sage, 2010), pp. 372-374; Taylor, P. J., G. Catalano, and D.R.F. Walker. “Exploratory Analysis of the World City Network.” Urban Studies 39 (December 2002): 2377-2394; Exploratory Research. Wikipedia.

Field Research Design

Sometimes referred to as ethnography or participant observation, designs around field research encompass a variety of interpretative procedures [e.g., observation and interviews] rooted in qualitative approaches to studying people individually or in groups while inhabiting their natural environment, as opposed to using survey instruments or other forms of impersonal methods of data gathering. Information acquired from observational research takes the form of "field notes" that involve documenting what the researcher actually sees and hears while in the field. Findings do not consist of conclusive statements derived from numbers and statistics because field research involves analysis of words and observations of behavior. Conclusions, therefore, are developed from an interpretation of findings that reveal overriding themes, concepts, and ideas.

  • Field research is often necessary to fill gaps in understanding the research problem applied to local conditions or to specific groups of people that cannot be ascertained from existing data.
  • The research helps contextualize already known information about a research problem, thereby facilitating ways to assess the origins, scope, and scale of a problem and to gauge the causes, consequences, and means to resolve an issue based on deliberate interaction with people in their natural inhabited spaces.
  • Enables the researcher to corroborate or confirm data by gathering additional information that supports or refutes findings reported in prior studies of the topic.
  • Because the researcher is embedded in the field, they are better able to make observations or ask questions that reflect the specific cultural context of the setting being investigated.
  • Observing the local reality offers the opportunity to gain new perspectives or obtain unique data that challenges existing theoretical propositions or long-standing assumptions found in the literature.

What these studies don't tell you

  • A field research study requires extensive time and resources to carry out the multiple steps involved with preparing for the gathering of information, including for example, examining background information about the study site, obtaining permission to access the study site, and building trust and rapport with subjects.
  • Requires a commitment to staying engaged in the field to ensure that you can adequately document events and behaviors as they unfold.
  • The unpredictable nature of fieldwork means that researchers can never fully control the process of data gathering. They must maintain a flexible approach to studying the setting because events and circumstances can change quickly or unexpectedly.
  • Findings can be difficult to interpret and verify without access to documents and other source materials that help to enhance the credibility of information obtained from the field [i.e., the act of triangulating the data].
  • Linking the research problem to the selection of study participants inhabiting their natural environment is critical. However, this specificity limits the ability to generalize findings to different situations or in other contexts or to infer courses of action applied to other settings or groups of people.
  • The reporting of findings must take into account how the researcher themselves may have inadvertently affected respondents and their behaviors.

Historical Design

The purpose of a historical research design is to collect, verify, and synthesize evidence from the past to establish facts that defend or refute a hypothesis. It uses secondary sources and a variety of primary documentary evidence, such as diaries, official records, reports, archives, and non-textual information [maps, pictures, audio and visual recordings]. The limitation is that the sources must be both authentic and valid.

  • The historical research design is unobtrusive; the act of research does not affect the results of the study.
  • The historical approach is well suited for trend analysis.
  • Historical records can add important contextual background required to more fully understand and interpret a research problem.
  • There is often no possibility of researcher-subject interaction that could affect the findings.
  • Historical sources can be used over and over to study different research problems or to replicate a previous study.
  • The ability to fulfill the aims of your research is directly related to the amount and quality of documentation available to understand the research problem.
  • Since historical research relies on data from the past, there is no way to manipulate it to control for contemporary contexts.
  • Interpreting historical sources can be very time consuming.
  • The sources of historical materials must be archived consistently to ensure access. This may be especially challenging for digital or online-only sources.
  • Original authors bring their own perspectives and biases to the interpretation of past events and these biases are more difficult to ascertain in historical resources.
  • Due to the lack of control over external variables, historical research is very weak with regard to the demands of internal validity.
  • It is rare that the entirety of historical documentation needed to fully address a research problem is available for interpretation, therefore, gaps need to be acknowledged.

Howell, Martha C. and Walter Prevenier. From Reliable Sources: An Introduction to Historical Methods . Ithaca, NY: Cornell University Press, 2001; Lundy, Karen Saucier. "Historical Research." In The Sage Encyclopedia of Qualitative Research Methods . Lisa M. Given, editor. (Thousand Oaks, CA: Sage, 2008), pp. 396-400; Marius, Richard. and Melvin E. Page. A Short Guide to Writing about History . 9th edition. Boston, MA: Pearson, 2015; Savitt, Ronald. “Historical Research in Marketing.” Journal of Marketing 44 (Autumn, 1980): 52-58;  Gall, Meredith. Educational Research: An Introduction . Chapter 16, Historical Research. 8th ed. Boston, MA: Pearson/Allyn and Bacon, 2007.

Longitudinal Design

A longitudinal study follows the same sample over time and makes repeated observations. For example, with longitudinal surveys, the same group of people is interviewed at regular intervals, enabling researchers to track changes over time and to relate them to variables that might explain why the changes occur. Longitudinal research designs describe patterns of change and help establish the direction and magnitude of causal relationships. Measurements are taken on each variable over two or more distinct time periods. This allows the researcher to measure change in variables over time. It is a type of observational study sometimes referred to as a panel study.
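
A minimal sketch of the repeated-measures logic described above: the same hypothetical panel is measured at three waves, and within-person change is computed between time points. All participant labels and scores are invented for illustration.

```python
# Minimal sketch: tracking within-person change across waves in a panel.
panel = {
    "P01": [3.1, 3.4, 3.9],   # hypothetical scores at waves 1, 2, 3
    "P02": [2.8, 2.7, 3.0],
    "P03": [4.0, 4.2, 4.1],
}

for person, scores in panel.items():
    changes = [round(later - earlier, 2) for earlier, later in zip(scores, scores[1:])]
    total = scores[-1] - scores[0]
    print(f"{person}: wave-to-wave change = {changes}, overall change = {total:+.1f}")
```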

  • Longitudinal data facilitate the analysis of the duration of a particular phenomenon.
  • Enables survey researchers to get close to the kinds of causal explanations usually attainable only with experiments.
  • The design permits the measurement of differences or change in a variable from one period to another [i.e., the description of patterns of change over time].
  • Longitudinal studies facilitate the prediction of future outcomes based upon earlier factors.
  • The data collection method may change over time.
  • Maintaining the integrity of the original sample can be difficult over an extended period of time.
  • It can be difficult to show more than one variable at a time.
  • This design often needs qualitative research data to explain fluctuations in the results.
  • A longitudinal research design assumes present trends will continue unchanged.
  • It can take a long period of time to gather results.
  • There is a need to have a large sample size and accurate sampling to reach representativeness.

Anastas, Jeane W. Research Design for Social Work and the Human Services . Chapter 6, Flexible Methods: Relational and Longitudinal Research. 2nd ed. New York: Columbia University Press, 1999; Forgues, Bernard, and Isabelle Vandangeon-Derumez. "Longitudinal Analyses." In Doing Management Research . Raymond-Alain Thiétart and Samantha Wauchope, editors. (London, England: Sage, 2001), pp. 332-351; Kalaian, Sema A. and Rafa M. Kasim. "Longitudinal Studies." In Encyclopedia of Survey Research Methods . Paul J. Lavrakas, ed. (Thousand Oaks, CA: Sage, 2008), pp. 440-441; Menard, Scott, editor. Longitudinal Research . Thousand Oaks, CA: Sage, 2002; Ployhart, Robert E. and Robert J. Vandenberg. "Longitudinal Research: The Theory, Design, and Analysis of Change.” Journal of Management 36 (January 2010): 94-120; Longitudinal Study. Wikipedia.

Meta-Analysis Design

Meta-analysis is an analytical methodology designed to systematically evaluate and summarize the results from a number of individual studies, thereby increasing the overall sample size and the ability of the researcher to study effects of interest. The purpose is not simply to summarize existing knowledge, but to develop a new understanding of a research problem using synoptic reasoning. The main objectives of meta-analysis include analyzing differences in the results among studies and increasing the precision by which effects are estimated. A well-designed meta-analysis depends upon strict adherence to the criteria used for selecting studies and the availability of information in each study to properly analyze their findings. Lack of information can severely limit the types of analyses and conclusions that can be reached. In addition, the more dissimilarity there is in the results among individual studies [heterogeneity], the more difficult it is to justify interpretations that govern a valid synopsis of results [a minimal pooling sketch appears after the list below]. A meta-analysis needs to fulfill the following requirements to ensure the validity of your findings:

  • Clearly defined description of objectives, including precise definitions of the variables and outcomes that are being evaluated;
  • A well-reasoned and well-documented justification for identification and selection of the studies;
  • Assessment and explicit acknowledgment of any researcher bias in the identification and selection of those studies;
  • Description and evaluation of the degree of heterogeneity among the sample size of studies reviewed; and,
  • Justification of the techniques used to evaluate the studies.
  • Can be an effective strategy for determining gaps in the literature.
  • Provides a means of reviewing research published about a particular topic over an extended period of time and from a variety of sources.
  • Is useful in clarifying what policy or programmatic actions can be justified on the basis of analyzing research results from multiple studies.
  • Provides a method for overcoming small sample sizes in individual studies that previously may have had little relationship to each other.
  • Can be used to generate new hypotheses or highlight research problems for future studies.
  • Small violations in defining the criteria used for content analysis can lead to difficult to interpret and/or meaningless findings.
  • A large sample size can yield reliable, but not necessarily valid, results.
  • A lack of uniformity regarding, for example, the type of literature reviewed, how methods are applied, and how findings are measured within the sample of studies you are analyzing, can make the process of synthesis difficult to perform.
  • Depending on the sample size, the process of reviewing and synthesizing multiple studies can be very time consuming.
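
The pooling sketch mentioned above: a fixed-effect (inverse-variance) combination of hypothetical per-study effect estimates, with Cochran's Q as a rough heterogeneity check. The study values are invented, and a real meta-analysis would typically rely on a dedicated package and also consider random-effects models.

```python
# Minimal sketch: fixed-effect (inverse-variance) pooling of study effects.
import math

# (effect estimate, standard error) for each hypothetical study
studies = [(0.30, 0.12), (0.45, 0.20), (0.22, 0.09), (0.38, 0.15)]

weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
q = sum(w * (est - pooled) ** 2 for (est, _), w in zip(studies, weights))  # heterogeneity

print(f"Pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
print(f"Cochran's Q: {q:.2f} on {len(studies) - 1} degrees of freedom")
```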

Beck, Lewis W. "The Synoptic Method." The Journal of Philosophy 36 (1939): 337-345; Cooper, Harris, Larry V. Hedges, and Jeffrey C. Valentine, eds. The Handbook of Research Synthesis and Meta-Analysis . 2nd edition. New York: Russell Sage Foundation, 2009; Guzzo, Richard A., Susan E. Jackson and Raymond A. Katzell. “Meta-Analysis Analysis.” In Research in Organizational Behavior , Volume 9. (Greenwich, CT: JAI Press, 1987), pp. 407-442; Lipsey, Mark W. and David B. Wilson. Practical Meta-Analysis . Thousand Oaks, CA: Sage Publications, 2001; Study Design 101. Meta-Analysis. The Himmelfarb Health Sciences Library, George Washington University; Timulak, Ladislav. “Qualitative Meta-Analysis.” In The SAGE Handbook of Qualitative Data Analysis . Uwe Flick, editor. (Los Angeles, CA: Sage, 2013), pp. 481-495; Walker, Esteban, Adrian V. Hernandez, and Michael W. Kattan. "Meta-Analysis: Its Strengths and Limitations." Cleveland Clinic Journal of Medicine 75 (June 2008): 431-439.

Mixed-Method Design

  • Narrative and non-textual information can add meaning to numeric data, while numeric data can add precision to narrative and non-textual information.
  • Can utilize existing data while at the same time generating and testing a grounded theory approach to describe and explain the phenomenon under study.
  • A broader, more complex research problem can be investigated because the researcher is not constrained by using only one method.
  • The strengths of one method can be used to overcome the inherent weaknesses of another method.
  • Can provide stronger, more robust evidence to support a conclusion or set of recommendations.
  • May generate new knowledge or uncover hidden insights, patterns, or relationships that a single methodological approach might not reveal.
  • Produces more complete knowledge and understanding of the research problem that can be used to increase the generalizability of findings applied to theory or practice.
  • A researcher must be proficient in understanding how to apply multiple methods to investigating a research problem as well as be proficient in optimizing how to design a study that coherently melds them together.
  • Can increase the likelihood of conflicting results or ambiguous findings that inhibit drawing a valid conclusion or setting forth a recommended course of action [e.g., sample interview responses do not support existing statistical data].
  • Because the research design can be very complex, reporting the findings requires a well-organized narrative, clear writing style, and precise word choice.
  • Design invites collaboration among experts. However, merging different investigative approaches and writing styles requires more attention to the overall research process than studies conducted using only one methodological paradigm.
  • Concurrent merging of quantitative and qualitative research requires greater attention to having adequate sample sizes, using comparable samples, and applying a consistent unit of analysis. For sequential designs where one phase of qualitative research builds on the quantitative phase or vice versa, decisions about what results from the first phase to use in the next phase, the choice of samples and estimating reasonable sample sizes for both phases, and the interpretation of results from both phases can be difficult.
  • Due to multiple forms of data being collected and analyzed, this design requires extensive time and resources to carry out the multiple steps involved in data gathering and interpretation.

Burch, Patricia and Carolyn J. Heinrich. Mixed Methods for Policy Research and Program Evaluation . Thousand Oaks, CA: Sage, 2016; Creswell, John W. et al. Best Practices for Mixed Methods Research in the Health Sciences . Bethesda, MD: Office of Behavioral and Social Sciences Research, National Institutes of Health, 2010; Creswell, John W. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches . 4th edition. Thousand Oaks, CA: Sage Publications, 2014; Domínguez, Silvia, editor. Mixed Methods Social Networks Research . Cambridge, UK: Cambridge University Press, 2014; Hesse-Biber, Sharlene Nagy. Mixed Methods Research: Merging Theory with Practice . New York: Guilford Press, 2010; Niglas, Katrin. “How the Novice Researcher Can Make Sense of Mixed Methods Designs.” International Journal of Multiple Research Approaches 3 (2009): 34-46; Onwuegbuzie, Anthony J. and Nancy L. Leech. “Linking Research Questions to Mixed Methods Data Analysis Procedures.” The Qualitative Report 11 (September 2006): 474-498; Tashakorri, Abbas and John W. Creswell. “The New Era of Mixed Methods.” Journal of Mixed Methods Research 1 (January 2007): 3-7; Zhanga, Wanqing. “Mixed Methods Application in Health Intervention Research: A Multiple Case Study.” International Journal of Multiple Research Approaches 8 (2014): 24-35.

Observational Design

This type of research design draws a conclusion by comparing subjects against a control group, in cases where the researcher has no control over the experiment. There are two general types of observational designs. In direct observations, people know that you are watching them. Unobtrusive measures involve any method for studying behavior where individuals do not know they are being observed. An observational study allows a useful insight into a phenomenon and avoids the ethical and practical difficulties of setting up a large and cumbersome research project.

  • Observational studies are usually flexible and do not necessarily need to be structured around a hypothesis about what you expect to observe [data is emergent rather than pre-existing].
  • The researcher is able to collect in-depth information about a particular behavior.
  • Can reveal interrelationships among multifaceted dimensions of group interactions.
  • You can generalize your results to real life situations.
  • Observational research is useful for discovering what variables may be important before applying other methods like experiments.
  • Observation research designs account for the complexity of group behaviors.
  • Reliability of data is low because observing behaviors over and over again is time consuming, and such observations are difficult to replicate.
  • In observational research, findings may only reflect a unique sample population and, thus, cannot be generalized to other groups.
  • There can be problems with bias as the researcher may only "see what they want to see."
  • There is no possibility to determine "cause and effect" relationships since nothing is manipulated.
  • Sources or subjects may not all be equally credible.
  • Any group that is knowingly studied is altered to some degree by the presence of the researcher, therefore, potentially skewing any data collected.

Atkinson, Paul and Martyn Hammersley. “Ethnography and Participant Observation.” In Handbook of Qualitative Research . Norman K. Denzin and Yvonna S. Lincoln, eds. (Thousand Oaks, CA: Sage, 1994), pp. 248-261; Observational Research. Research Methods by Dummies. Department of Psychology. California State University, Fresno, 2006; Patton, Michael Quinn. Qualitative Research and Evaluation Methods . Chapter 6, Fieldwork Strategies and Observational Methods. 3rd ed. Thousand Oaks, CA: Sage, 2002; Payne, Geoff and Judy Payne. "Observation." In Key Concepts in Social Research . The SAGE Key Concepts series. (London, England: Sage, 2004), pp. 158-162; Rosenbaum, Paul R. Design of Observational Studies . New York: Springer, 2010; Williams, J. Patrick. "Nonparticipant Observation." In The Sage Encyclopedia of Qualitative Research Methods . Lisa M. Given, editor. (Thousand Oaks, CA: Sage, 2008), pp. 562-563.

Philosophical Design

Understood more as a broad approach to examining a research problem than a methodological design, philosophical analysis and argumentation is intended to challenge deeply embedded, often intractable, assumptions underpinning an area of study. This approach uses the tools of argumentation derived from philosophical traditions, concepts, models, and theories to critically explore and challenge, for example, the relevance of logic and evidence in academic debates, to analyze arguments about fundamental issues, or to discuss the root of existing discourse about a research problem. These overarching tools of analysis can be framed in three ways:

  • Ontology -- the study that describes the nature of reality; for example, what is real and what is not, what is fundamental and what is derivative?
  • Epistemology -- the study that explores the nature of knowledge; for example, what do knowledge and understanding depend upon, and how can we be certain of what we know?
  • Axiology -- the study of values; for example, what values does an individual or group hold and why? How are values related to interest, desire, will, experience, and means-to-end? And, what is the difference between a matter of fact and a matter of value?
  • Can provide a basis for applying ethical decision-making to practice.
  • Functions as a means of gaining greater self-understanding and self-knowledge about the purposes of research.
  • Brings clarity to general guiding practices and principles of an individual or group.
  • Philosophy informs methodology.
  • Refines concepts and theories that are invoked in relatively unreflective modes of thought and discourse.
  • Beyond methodology, philosophy also informs critical thinking about epistemology and the structure of reality (metaphysics).
  • Offers clarity and definition to the practical and theoretical uses of terms, concepts, and ideas.
  • Limited application to specific research problems [answering the "So What?" question in social science research].
  • Analysis can be abstract, argumentative, and limited in its practical application to real-life issues.
  • While a philosophical analysis may render problematic that which was once simple or taken-for-granted, the writing can be dense and subject to unnecessary jargon, overstatement, and/or excessive quotation and documentation.
  • There are limitations in the use of metaphor as a vehicle of philosophical analysis.
  • There can be analytical difficulties in moving from philosophy to advocacy and between abstract thought and application to the phenomenal world.

Burton, Dawn. "Part I, Philosophy of the Social Sciences." In Research Training for Social Scientists . (London, England: Sage, 2000), pp. 1-5; Chapter 4, Research Methodology and Design. Unisa Institutional Repository (UnisaIR), University of South Africa; Jarvie, Ian C., and Jesús Zamora-Bonilla, editors. The SAGE Handbook of the Philosophy of Social Sciences . London: Sage, 2011; Labaree, Robert V. and Ross Scimeca. “The Philosophical Problem of Truth in Librarianship.” The Library Quarterly 78 (January 2008): 43-70; Maykut, Pamela S. Beginning Qualitative Research: A Philosophic and Practical Guide . Washington, DC: Falmer Press, 1994; McLaughlin, Hugh. "The Philosophy of Social Research." In Understanding Social Work Research . 2nd edition. (London: SAGE Publications Ltd., 2012), pp. 24-47; Stanford Encyclopedia of Philosophy . Metaphysics Research Lab, CSLI, Stanford University, 2013.

Sequential Design

  • The researcher has limitless options when it comes to sample size and the sampling schedule.
  • Due to the repetitive nature of this research design, minor changes and adjustments can be made during the initial parts of the study to correct and hone the research method.
  • This is a useful design for exploratory studies.
  • There is very little effort on the part of the researcher when performing this technique. It is generally not expensive, time consuming, or workforce intensive.
  • Because the study is conducted serially, the results of one sample are known before the next sample is taken and analyzed. This provides opportunities for continuous improvement of sampling and methods of analysis [a brief stopping-rule sketch follows this list].
  • The sampling method is not representative of the entire population. The only possibility of approaching representativeness is when the researcher chooses a sample large enough to represent a significant portion of the entire population. In this case, moving on to study a second or more specific sample can be difficult.
  • The design cannot be used to create conclusions and interpretations that pertain to an entire population because the sampling technique is not randomized. Generalizability from findings is, therefore, limited.
  • Difficult to account for and interpret variation from one sample to another over time, particularly when using qualitative methods of data collection.
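
The stopping-rule sketch referenced in the list above: batches are drawn one at a time from a hypothetical measurement process, and sampling stops once the running estimate is precise enough. The distribution, batch size, and threshold are all invented for illustration.

```python
# Minimal sketch: serial sampling with a simple precision-based stopping rule.
import random
import statistics

random.seed(1)

def draw_batch(size=10):
    """Hypothetical measurement process returning one batch of observations."""
    return [random.gauss(50, 8) for _ in range(size)]

observations = []
while True:
    observations.extend(draw_batch())
    se = statistics.stdev(observations) / len(observations) ** 0.5
    print(f"n = {len(observations):3d}  mean = {statistics.mean(observations):5.1f}  SE = {se:4.2f}")
    if se < 1.0 or len(observations) >= 200:   # stop when precise enough (or at a cap)
        break
```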

Betensky, Rebecca. Harvard University, Course Lecture Note slides; Bovaird, James A. and Kevin A. Kupzyk. "Sequential Design." In Encyclopedia of Research Design . Neil J. Salkind, editor. (Thousand Oaks, CA: Sage, 2010), pp. 1347-1352; Cresswell, John W. Et al. “Advanced Mixed-Methods Research Designs.” In Handbook of Mixed Methods in Social and Behavioral Research . Abbas Tashakkori and Charles Teddle, eds. (Thousand Oaks, CA: Sage, 2003), pp. 209-240; Henry, Gary T. "Sequential Sampling." In The SAGE Encyclopedia of Social Science Research Methods . Michael S. Lewis-Beck, Alan Bryman and Tim Futing Liao, editors. (Thousand Oaks, CA: Sage, 2004), pp. 1027-1028; Nataliya V. Ivankova. “Using Mixed-Methods Sequential Explanatory Design: From Theory to Practice.” Field Methods 18 (February 2006): 3-20; Bovaird, James A. and Kevin A. Kupzyk. “Sequential Design.” In Encyclopedia of Research Design . Neil J. Salkind, ed. Thousand Oaks, CA: Sage, 2010; Sequential Analysis. Wikipedia.

Systematic Review

  • A systematic review synthesizes the findings of multiple studies related to each other by incorporating strategies of analysis and interpretation intended to reduce biases and random errors.
  • The application of critical exploration, evaluation, and synthesis methods separates insignificant, unsound, or redundant research from the most salient and relevant studies worthy of reflection.
  • They can be used to identify, justify, and refine hypotheses, recognize and avoid hidden problems in prior studies, and explain inconsistencies and conflicts in the data.
  • Systematic reviews can be used to help policy makers formulate evidence-based guidelines and regulations.
  • The use of strict, explicit, and pre-determined methods of synthesis, when applied appropriately, provides reliable estimates about the effects of interventions, evaluations, and effects related to the overarching research problem investigated by each study under review.
  • Systematic reviews illuminate where knowledge or thorough understanding of a research problem is lacking and, therefore, can then be used to guide future research.
  • The accepted inclusion of unpublished studies [i.e., grey literature] ensures the broadest possible way to analyze and interpret research on a topic.
  • Results of the synthesis can be generalized and the findings extrapolated into the general population with more validity than most other types of studies.
  • Systematic reviews do not create new knowledge per se; they are a method for synthesizing existing studies about a research problem in order to gain new insights and determine gaps in the literature.
  • The way researchers have carried out their investigations [e.g., the period of time covered, number of participants, sources of data analyzed, etc.] can make it difficult to effectively synthesize studies.
  • The inclusion of unpublished studies can introduce bias into the review because they may not have undergone a rigorous peer-review process prior to publication. Examples include conference presentations and proceedings, publications from government agencies, white papers, working papers, internal documents from organizations, and doctoral dissertations and Master's theses.

Denyer, David and David Tranfield. "Producing a Systematic Review." In The Sage Handbook of Organizational Research Methods .  David A. Buchanan and Alan Bryman, editors. ( Thousand Oaks, CA: Sage Publications, 2009), pp. 671-689; Foster, Margaret J. and Sarah T. Jewell, editors. Assembling the Pieces of a Systematic Review: A Guide for Librarians . Lanham, MD: Rowman and Littlefield, 2017; Gough, David, Sandy Oliver, James Thomas, editors. Introduction to Systematic Reviews . 2nd edition. Los Angeles, CA: Sage Publications, 2017; Gopalakrishnan, S. and P. Ganeshkumar. “Systematic Reviews and Meta-analysis: Understanding the Best Evidence in Primary Healthcare.” Journal of Family Medicine and Primary Care 2 (2013): 9-14; Gough, David, James Thomas, and Sandy Oliver. "Clarifying Differences between Review Designs and Methods." Systematic Reviews 1 (2012): 1-9; Khan, Khalid S., Regina Kunz, Jos Kleijnen, and Gerd Antes. “Five Steps to Conducting a Systematic Review.” Journal of the Royal Society of Medicine 96 (2003): 118-121; Mulrow, C. D. “Systematic Reviews: Rationale for Systematic Reviews.” BMJ 309:597 (September 1994); O'Dwyer, Linda C., and Q. Eileen Wafford. "Addressing Challenges with Systematic Review Teams through Effective Communication: A Case Report." Journal of the Medical Library Association 109 (October 2021): 643-647; Okoli, Chitu, and Kira Schabram. "A Guide to Conducting a Systematic Literature Review of Information Systems Research."  Sprouts: Working Papers on Information Systems 10 (2010); Siddaway, Andy P., Alex M. Wood, and Larry V. Hedges. "How to Do a Systematic Review: A Best Practice Guide for Conducting and Reporting Narrative Reviews, Meta-analyses, and Meta-syntheses." Annual Review of Psychology 70 (2019): 747-770; Torgerson, Carole J. “Publication Bias: The Achilles’ Heel of Systematic Reviews?” British Journal of Educational Studies 54 (March 2006): 89-102; Torgerson, Carole. Systematic Reviews . New York: Continuum, 2003.

Writing about Design

Principles and tips for design-oriented research.

How to define a research question or a design problem

Introduction

Many texts state that identifying a good research question (or, equivalently, a design problem) is important for research. Wikipedia, for example, starts (as of writing this text, at least) with the following two sentences:

“A research question is ‘a question that a research project sets out to answer’. Choosing a research question is an essential element of both quantitative and qualitative research.” (Wikipedia, 2020)

However, finding a good research question (RQ) can be a painful experience. It may feel impossible to understand what the criteria for a good RQ are, how a good RQ can be found, and how to notice when there are problems with a candidate RQ.

In this text, I will address the pains described above. I start by presenting a scenario of a project that has problems with its RQ. The analysis of that scenario then allows me to describe how to turn the situation described in the scenario into a better research or design project.

Scenario of a problematic project

Let us consider a scenario in which you are starting a new research or design project. You already have an idea: your work will be related to communication with instant messaging (IM). Because you are a design-minded person, you are planning to design and develop a new IM feature: a possibility to send predefined replies on a mobile IM app. Your idea is that this feature will allow the user to communicate quickly with others in difficult situations where they can only connect with others through their mobile phone. Your plan is to supply the mobile IM app with messages like “I’m late by 10 minutes but see you soon”, “I can’t answer back now but will do that later today”, and so on.

Therefore, your plan involves designing such an app, maybe first by sketching it and then illustrating its interaction with a prototyping software like Figma or Adobe XD. You may also decide to make your design functional by programming it and letting a selected number of participants use it. These kinds of activities will let you demonstrate your skills as a designer-researcher.

Although predefined messages for a mobile IM app can be a topic of a great study, there are some problems with this project that require you to think more about it before you start. As the project is currently defined, it is difficult to provide convincing answers to these challenges:

  • Challenge 1: Why would this be a relevant topic for research or design? Good studies address topics that may interest other people than the author alone. The current research topic, however, does not do that self-evidently yet: it lacks an explanation of why it would make sense to equip mobile IM apps with predefined replies. There is only a guess that this could be useful in some situations, but this may not convince the reader of the merit of this project.
  • Challenge 2: How do you demonstrate that your solution is particularly good? For an outsider who will see the project’s outcome, it may not be clear why your final design would be the best one among the other possible designs. If you propose one interaction design for such a feature, what makes that a good one? In other words, the project lacks a yardstick by which its quality should be measured.
  • Challenge 3: How does this project lead to learning or new knowledge? Even if you can show that the topic is relevant (point 1) and that the solution works well (2), the solution may feel too “particularized” – not usable in any other design context. This is an important matter in applied research fields like design and human–computer interaction, because these fields require some form of generalizability from their studies. Findings of a study should result in some kind of knowledge, such as skills, sensitivity to important matters, design solutions or patterns, etc. that could be used also at a later time in other projects, preferably by other people too.

All these problems relate to the fact that this study does not yet have a RQ. Identifying a good research question will help clarify all the above matters, as we will see below.

Adding a research question / design problem

RQs are of many kinds, and they are closely tied to the intended finding of the study: what contribution the study should deliver. A contribution can be, for example, a solution to a problem or the creation of novel information or knowledge. Novel information, in turn, can be a new theory, model or hypothesis, an analysis that offers deeper understanding, identification of an unattended problem, a description of a poorly understood phenomenon, a new viewpoint, or many other things.

The researcher or thesis author usually has a lot of freedom in choosing the exact type of contribution that they want to make. This can feel difficult to the author: there may be no one telling them what they should study. In a way, in such a situation, the thesis/article author is the client of their own research: they both define what needs to be done and then accomplish that work. Some starting points for narrowing down the space of possibilities are offered here.

Most importantly, the RQ needs to be focused on a topic that the author genuinely does not know, and which is important to find out on the path to the intended contribution. In our scenario about a mobile IM app’s predefined replies, there are currently too many alternatives for an intended contribution, and an outsider would not be able to know which one of them to expect:

  • Demonstration that mobile IM apps will be better to use when they have this new feature.
  • Report on the ways in which people would use the new feature if their mobile IM apps had such a feature.
  • Requirements analysis for the specific design and the detailed features by which the predefined replies should be implemented.
  • Analysis of the situations where the feature would be most needed, and user groups who would most often be in such situations.

All of these are valid contributions, and the author can choose to focus on any one of them. The choice also depends on the author’s personal interests. This opens a possibility for formulating a RQ for the project. It is important to notice that each one of the possible contributions listed above calls for a different corresponding RQ:

RQ1: Do predefined replies in mobile IM apps improve their usability?

RQ2: How will users start using the predefined replies in mobile IM apps?

RQ3: How should the interaction in the IM app be designed, and what kind of predefined replies need to be offered to the users?

RQ4: When are predefined replies in IM apps most needed?

This list of four RQs, matched with the four possible contributions, shows why the scenario presented at the beginning of this text was problematic. Only after asking these kinds of questions is one able to answer the three challenges presented at the end of the previous section. Also, each of the RQs needs a different research or design method, and its own kind of background research.

The choice and fine-tuning of the research question / design problem

Which one of the above RQs should our hypothetical researcher/designer choose? Lists of basic requisites for good RQs have been presented on many websites. They can help identify RQs that still need refinement. Monash University offers the following kind of helpful list:

  • Clear and focused.  In other words, the question should clearly state what the writer needs to do.
  • Not too broad and not too narrow.  The question should have an appropriate scope. If the question is too broad it will not be possible to answer it thoroughly within the word limit. If it is too narrow you will not have enough to write about and you will struggle to develop a strong argument.
  • Not too easy to answer.  For example, the question should require more than a simple yes or no answer.
  • Not too difficult to answer.  You must be able to answer the question thoroughly within the given timeframe and word limit.
  • Researchable.  You must have access to a suitable amount of quality research materials, such as academic books and refereed journal articles.
  • Analytical rather than descriptive.  In other words, your research question should allow you to produce an analysis of an issue or problem rather than a simple description of it.

If a study meets the above criteria, it has a good chance of avoiding the problem of presenting a “non-contribution”: a laboriously produced finding that nonetheless does not provide new, interesting information. Points 3 and 6 above particularly guard against such studies: they warn readers against focusing their efforts on something that is already known (3) and against only describing what was done or what observations were made, instead of analysing them in more detail (6).

In fine-tuning a possible RQ, it is important to situate it at the right scope. The first possible RQ that comes to one’s mind is often too broad and needs to be narrowed. RQ4 above (“When are predefined replies in IM apps most needed?”), for example, is a very relevant question, but it is probably too broad.

Why is RQ4 too broad? The reason is that RQs are usually read very literally. If you leave an aspect of your RQ unspecified, it means that you intend your RQ and your findings to be generalisable (i.e., applicable) to all the possible contexts and cases that your RQ can be applied to.

With a question like “When are predefined replies in IM apps most needed?”, you are asking a question that covers both leisure-oriented and work-oriented IM apps, which can be of very different kinds. Some IM apps are mobile-oriented (such as WhatsApp) and others are desktop-oriented (such as Slack or Teams). Unless you specify your RQ more narrowly, your findings should be applicable to all these kinds of apps. RQ4 is also unspecific about the people that you are thinking of as communication partners. It may be impossible for you to make a study so broad that it applies to all of these cases.

Therefore, a more manageable-sized scoping could be something like this:

RQ4 (version 2): In which away-from-desktop leisure life situations are predefined replies in IM apps most needed?

Furthermore, you can also narrow down your focus theoretically. In our example scenario, the researcher/designer can decide, for example, that they will consider predefined IM replies from the viewpoint of “face-work” in social interaction. By adopting this viewpoint, the researcher/designer can decide that they will design the IM’s replies with the goal that they help the user maintain an active, positive image in the eyes of others. When they start designing the reply feature, they can now ask much more specific questions. For example: how could my design help a user do face-work in cases where they are in a hurry and can only send a short and blunt message to another person? How could the predefined replies help in situations where the users do not have time to answer but know they should? Ultimately, would the predefined replies make it easier for users to do face-work in computer-mediated communication (CMC)?

You can therefore further specify RQ4 into this:

RQ4 (version 3): In which away-from-desktop leisure life situations are predefined replies in IM apps most needed when it is important to react quickly to arriving messages?

As you may notice, it is possible to scope the RQ so narrowly that it becomes almost absurd. But if that does not become a problem, the choice of methods (i.e., the research design) becomes much easier.

Theoretically narrowed-down RQs (in this case, building on the concept of face-work in RQ4 version 3) have the benefit that they point you to useful background literature. Non-theoretical RQs (e.g., RQ4 version 2), in contrast, require that you identify the relevant literature more independently, relying on your own judgment. In the present case, you can base your thinking about IM apps’ predefined replies on sociological research on interpersonal interaction and self-presentation (e.g., Goffman 1967) and its earlier applications to CMC (Nardi et al., 2000; Salovaara et al., 2011). Such literature provides the starting points for deeper design considerations. Deeper considerations, in turn, increase the contribution of the research and make it interesting for the readers.

As said, the first RQ that one comes to think of is not necessarily the best and final one. The RQ may need to be adapted (and also can be adapted) over the course of the research. In qualitative research this is very typical, and the same applies to exploratory design projects that proceed through small design experiments (i.e., through their own smaller RQs).

This text promised to address the pains that definition of a RQ or a design problem may pose for a student or a researcher. The main points of the answer may be summarized as follows:

  • The search for a good RQ is a negotiation process between three objectives: what is personally motivating, what is realistically possible to do (e.g., that the work can build on some earlier literature and there is a method that can answer the RQ), and what motivates its relevance (i.e., whether it can lead to interesting findings).
  • The search for a RQ or a design problem is a process, not a task that must be settled immediately. It is, however, good to get started somewhere, since a RQ gives a lot of focus to future activities: what to read and what methods to choose, for example.

With the presentation of the scenario and its analysis, I sought to demonstrate why and how choosing an additional analytical viewpoint can be a useful strategy. With it, a project whose meaningfulness may be otherwise questionable for an outsider can become interesting when its underpinnings and assumptions are explicated. That helps ensure that the reader will appreciate the work that the author has done with their research.

In the problematization of the scenario, I presented the three challenges related to it. I can now offer possible answers to them, by highlighting how a RQ can serve as a tool for finding those answers:

  • Why would this be a relevant topic for research or design? Choice of a RQ often requires some amount of background research that helps the researcher/designer to understand how much about the problem has already been solved by others. This awareness helps shape the RQ to focus on a topic where information is not yet known and more information is needed for a high-quality outcome.
  • How do you demonstrate that your solution is particularly good? By having a question, it is possible to analyse what the right methods for answering it are. The quality of executing these then becomes evaluable. The focus on a particular question also permits the author to compromise on optimality in other, less central outcomes. For example, if smoothness of interaction is in focus, then it is easy to explain why the long-term robustness and durability of a prototype may not be critical.
  • How does this project lead to learning or new knowledge? Presentation of the results or findings allows the researcher/designer to devote their Discussion section (see the IMRaD article format) to topics that would have been impossible to predict before the study. That will demonstrate that the project has generated novel understanding: it has generated knowledge that can be considered insightful.

If and when the researcher/designer pursues design and research further, the experience of thinking about RQs and design problems accumulates. As one reads literature, the ability to consider different research questions improves too. Similarly, as one carries out projects with different RQs and problems, and notices how adjusting them along the way helps shape one’s work, the experience grows. Eventually, one may even learn to enjoy the analytical process of identifying a good research question.

As a suggestion for further reading, Carsten Sørensen’s text (2002) about planning and writing an article in the information systems research field is highly recommended. It combines the question of choosing the RQ with the question of how to write a paper about it.

Goffman, E. (1967). On face-work: An analysis of ritual elements in social interaction. Psychiatry , 18 (3), 213–231.  https://doi.org/10.1080/00332747.1955.11023008

Nardi, B. A., Whittaker, S., & Bradner, E. (2000). Interaction and outeraction: Instant messaging in action. In Proceedings of the 2000 ACM Conference on Computer Supported Cooperative Work (CSCW 2000) (pp. 79–88). New York, NY: ACM Press. https://doi.org/10.1145/358916.358975

Salovaara, A., Lindqvist, A., Hasu, T., & Häkkilä, J. (2011). The phone rings but the user doesn’t answer: unavailability in mobile communication. In Proceedings of the 13th International Conference on Human Computer Interaction with Mobile Devices and Services (MobileHCI 2011) (pp. 503–512). New York, NY: ACM Press. https://doi.org/10.1145/2037373.2037448

Sørensen, C. (2002). This is Not an Article — Just Some Food for Thoughts on How to Write One. Working Paper No. 121. Department of Information Systems, The London School of Economics and Political Science.

Wikipedia (2020). Research question. Retrieved 30 November 2020, from https://en.wikipedia.org/wiki/Research_question


Research Design: What it is, Elements & Types


Can you imagine doing research without a plan? Probably not. When we discuss a strategy to collect, study, and evaluate data, we talk about research design. This design addresses problems and creates a consistent and logical model for data analysis. Let’s learn more about it.

What is Research Design?

Research design is the framework of research methods and techniques chosen by a researcher to conduct a study. The design allows researchers to sharpen the research methods suitable for the subject matter and set up their studies for success.

The design of a research topic specifies the type of research (experimental, survey research, correlational, semi-experimental, review) and its sub-type (e.g., experimental design, research problem, descriptive case study).

A research design addresses three main areas:

  • Data collection
  • Measurement
  • Data Analysis

The research problem an organization faces will determine the design, not vice-versa. The design phase of a study determines which tools to use and how they are used.

The Process of Research Design

The research design process is a systematic and structured approach to conducting research. The process is essential to ensure that the study is valid, reliable, and produces meaningful results.

  • Consider your aims and approaches: Determine the research questions and objectives, and identify the theoretical framework and methodology for the study.
  • Choose a type of Research Design: Select the appropriate research design, such as experimental, correlational, survey, case study, or ethnographic, based on the research questions and objectives.
  • Identify your population and sampling method: Determine the target population and sample size, and choose the sampling method, such as simple random sampling, stratified random sampling, or convenience sampling (a minimal sampling sketch follows this list).
  • Choose your data collection methods: Decide on the data collection methods , such as surveys, interviews, observations, or experiments, and select the appropriate instruments or tools for collecting data.
  • Plan your data collection procedures: Develop a plan for data collection, including the timeframe, location, and personnel involved, and ensure ethical considerations.
  • Decide on your data analysis strategies: Select the appropriate data analysis techniques, such as statistical analysis , content analysis, or discourse analysis, and plan how to interpret the results.
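To make the sampling step above more concrete, here is a minimal sketch of stratified random sampling, assuming Python and an invented student population split into year-level strata; it draws the same proportion from each stratum instead of sampling the population as a single pool.

```python
# A minimal, hypothetical sketch of stratified random sampling.
# The strata (year levels), the IDs, and the 10% sampling fraction are invented.
import random

random.seed(1)  # fixed seed so the illustrative draw is reproducible

population = {
    "first_year": [f"FY{i:03d}" for i in range(120)],
    "second_year": [f"SY{i:03d}" for i in range(80)],
    "third_year": [f"TY{i:03d}" for i in range(50)],
}

sampling_fraction = 0.10  # draw the same proportion from every stratum

sample = {
    stratum: random.sample(members, max(1, round(len(members) * sampling_fraction)))
    for stratum, members in population.items()
}

for stratum, drawn in sample.items():
    print(f"{stratum}: {len(drawn)} of {len(population[stratum])} members sampled")
```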

The process of research design is a critical step in conducting research. By following the steps of research design, researchers can ensure that their study is well-planned, ethical, and rigorous.

Research Design Elements

Impactful research usually minimizes bias in the data and increases trust in the accuracy of the collected data. A design that produces the smallest margin of error in experimental research is generally considered the most desirable. The essential elements are:

  • Accurate purpose statement
  • Techniques to be implemented for collecting and analyzing data
  • The method applied for analyzing the collected details
  • Type of research methodology
  • Probable objections to research
  • Settings for the research study
  • Measurement of analysis

Characteristics of Research Design

A proper design sets your study up for success. Successful research studies provide insights that are accurate and unbiased. You’ll need to create a study that meets all of the main characteristics of a good design. There are four key characteristics:


  • Neutrality: When you set up your study, you may have to make assumptions about the data you expect to collect. The results projected in the research should be neutral and free from research bias. Have multiple individuals review the final evaluated scores and conclusions, and take into account the extent to which they agree with the results.
  • Reliability: With regularly conducted research, the researcher expects similar results every time. You’ll only be able to reach the desired results if your design is reliable. Your plan should indicate how to form research questions to ensure the standard of results.
  • Validity: There are multiple measuring tools available. However, the only correct measuring tools are those which help a researcher in gauging results according to the objective of the research. The  questionnaire  developed from this design will then be valid.
  • Generalization:  The outcome of your design should apply to a population and not just a restricted sample . A generalized method implies that your survey can be conducted on any part of a population with similar accuracy.

The above factors affect how respondents answer the research questions, so a good design should balance all of these characteristics.

Research Design Types

A researcher must clearly understand the various types to select which model to implement for a study. Like the research itself, the design of your analysis can be broadly classified into quantitative and qualitative.

Qualitative research

Qualitative research explores relationships between collected data and observations without relying on mathematical calculations or statistical proof. Instead of proving or disproving theories about a naturally existing phenomenon with statistics, researchers rely on qualitative observation methods that address “why” a particular theory exists and “what” respondents have to say about it.

Quantitative research

Quantitative research is appropriate when statistical conclusions are needed to derive actionable insights. Numbers provide a better perspective for making critical business decisions, and quantitative research methods are therefore important for the growth of any organization. Insights drawn from complex numerical data and analysis prove highly effective when making decisions about the business’s future.

Qualitative Research vs Quantitative Research

Here is a chart that highlights the major differences between qualitative and quantitative research:

| Qualitative Research | Quantitative Research |
| --- | --- |
| Focus on explaining and understanding experiences and perspectives. | Focus on quantifying and measuring phenomena. |
| Use of non-numerical data, such as words, images, and observations. | Use of numerical data, such as statistics and surveys. |
| Usually uses small sample sizes. | Usually uses larger sample sizes. |
| Typically emphasizes in-depth exploration and interpretation. | Typically emphasizes precision and objectivity. |
| Data analysis involves interpretation and narrative analysis. | Data analysis involves statistical analysis and hypothesis testing. |
| Results are presented descriptively. | Results are presented numerically and statistically. |

In summary, qualitative research is more exploratory and focuses on understanding the subjective experiences of individuals, while quantitative research is more focused on objective data and statistical analysis.

You can further break down the types of research design into five categories:


1. Descriptive: In a descriptive design, a researcher is solely interested in describing the situation or case under study. It is a theory-based design method created by gathering, analyzing, and presenting collected data. This allows a researcher to provide insights into the why and how of the research. Descriptive design helps others better understand the need for the research. If the problem statement is not clear, you can conduct exploratory research instead.

2. Experimental: Experimental research establishes a relationship between the cause and effect of a situation. It is a causal research design where one observes the impact caused by the independent variable on the dependent variable. For example, one monitors the influence of an independent variable such as a price on a dependent variable such as customer satisfaction or brand loyalty. It is an efficient research method as it contributes to solving a problem.

The independent variables are manipulated to monitor the change they produce in the dependent variable. The social sciences often use this design to observe human behavior by analyzing two groups. Researchers can have participants change their actions and study how the people around them react in order to better understand social psychology.

3. Correlational research: Correlational research is a non-experimental research technique. It helps researchers establish a relationship between two closely connected variables. No assumptions are made while evaluating the relationship between the two variables, and statistical analysis techniques are used to calculate it. This type of research requires data on two different variables.

A correlation coefficient quantifies the correlation between two variables, and its value ranges between -1 and +1. A coefficient close to +1 indicates a positive relationship between the variables, while a value close to -1 indicates a negative relationship.
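As a brief illustration, the sketch below (assuming Python with NumPy, and invented paired measurements of advertising spend and monthly sales) computes a correlation coefficient; a value near +1 indicates a positive relationship and a value near -1 a negative one.

```python
# Hypothetical sketch: computing a Pearson correlation coefficient with NumPy.
# The paired measurements (advertising spend vs. monthly sales) are invented.
import numpy as np

ad_spend = np.array([1.0, 2.5, 3.0, 4.5, 5.0, 6.5])  # e.g., thousands of dollars
monthly_sales = np.array([20, 28, 31, 40, 44, 52])    # e.g., units sold

# np.corrcoef returns a 2x2 correlation matrix; the off-diagonal entry is r.
r = np.corrcoef(ad_spend, monthly_sales)[0, 1]
print(f"Pearson r = {r:.2f}")  # close to +1 here, i.e., a strong positive relationship
```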

4. Diagnostic research: In diagnostic design, the researcher is looking to evaluate the underlying cause of a specific topic or phenomenon. This method helps one learn more about the factors that create troublesome situations. 

This design has three parts:

  • Inception of the issue
  • Diagnosis of the issue
  • Solution for the issue

5. Explanatory research: Explanatory design uses a researcher’s ideas and thoughts on a subject to further explore their theories. The study explains unexplored aspects of a subject and details the what, how, and why of the research questions.

Benefits of Research Design

There are several benefits to having a well-designed research plan, including:

  • Clarity of research objectives: Research design provides a clear understanding of the research objectives and the desired outcomes.
  • Increased validity and reliability: A research design helps to minimize the risk of bias and to control extraneous variables, ensuring the validity and reliability of the results.
  • Improved data collection: A research design helps to ensure that the proper data is collected, and that it is collected systematically and consistently.
  • Better data analysis: Research design helps ensure that the collected data can be analyzed effectively, providing meaningful insights and conclusions.
  • Improved communication: A well-designed study helps ensure the results are communicated clearly and convincingly to the research team and external stakeholders.
  • Efficient use of resources: By reducing the risk of waste and maximizing the impact of the research, a research design helps to ensure that resources are used efficiently.

A well-designed research plan is essential for successful research, providing clear and meaningful insights and ensuring that resources are used effectively.



What Is a Research Design? | Definition, Types & Guide


Table of Contents

  • Introduction
  • Parts of a research design
  • Types of research methodology in qualitative research
  • Narrative research designs
  • Phenomenological research designs
  • Grounded theory research designs
  • Ethnographic research designs
  • Case study research design
  • Important reminders when designing a research study

A research design in qualitative research is a critical framework that guides the methodological approach to studying complex social phenomena. Qualitative research designs determine how data is collected, analyzed, and interpreted, ensuring that the research captures participants' nuanced and subjective perspectives. Research designs also address ethical considerations, involving informed consent, ensuring confidentiality, and handling sensitive topics with the utmost respect and care. These considerations are crucial in qualitative research and other contexts where participants may share personal or sensitive information. A research design should convey coherence, which is essential for producing high-quality qualitative research, since the process often follows a recursive and evolving path.

Parts of a research design

Theoretical concepts and research question

The first step in creating a research design is identifying the main theoretical concepts. To identify these concepts, a researcher should ask which theoretical keywords are implicit in the investigation. The next step is to develop a research question using these theoretical concepts. This can be done by identifying the relationship of interest among the concepts that catch the focus of the investigation. The question should address aspects of the topic that need more knowledge, shed light on new information, and specify which aspects should be prioritized before others. This step is essential in identifying which participants to include or which data collection methods to use. Research questions also put into practice the conceptual framework and make the initial theoretical concepts more explicit. Once the research question has been established, the main objectives of the research can be specified. For example, these objectives may involve identifying shared experiences around a phenomenon or evaluating perceptions of a new treatment.

Methodology

After identifying the theoretical concepts, research question, and objectives, the next step is to determine the methodology that will be implemented. This is the lifeline of a research design and should be coherent with the objectives and questions of the study. The methodology will determine how data is collected, analyzed, and presented. Popular qualitative research methodologies include case studies, ethnography, grounded theory, phenomenology, and narrative research. Each methodology is tailored to specific research questions and facilitates the collection of rich, detailed data. For example, a narrative approach may focus on only one individual and their story, while phenomenology seeks to understand participants' common lived experiences. Qualitative research designs differ significantly from quantitative research, which often involves experimental research, correlational designs, or variance analysis to test hypotheses about relationships between two variables (a dependent variable and an independent variable) while controlling for confounding variables.


Literature review

After the methodology is identified, conducting a thorough literature review is integral to the research design. This review identifies gaps in knowledge, positioning the new study within the larger academic dialogue and underlining its contribution and relevance. Meta-analysis, a form of secondary research, can be particularly useful in synthesizing findings from multiple studies to provide a clear picture of the research landscape.

Data collection

The sampling method in qualitative research is designed to delve deeply into specific phenomena rather than to generalize findings across a broader population. The data collection methods—whether interviews, focus groups, observations, or document analysis—should align with the chosen methodology, ethical considerations, and other factors such as sample size. In some cases, repeated measures may be collected to observe changes over time.

Data analysis

Analysis in qualitative research typically involves methods such as coding and thematic analysis to distill patterns from the collected data. This process delineates how the research results will be systematically derived from the data. It is recommended that the researcher ensures that the final interpretations are coherent with the observations and analyses, making clear connections between the data and the conclusions drawn. Reporting should be narrative-rich, offering a comprehensive view of the context and findings.
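Purely as a toy illustration, and not a prescribed part of the workflow described here, coded segments are often kept as code/quotation pairs so that recurring themes can be tallied; the sketch below, with invented codes and quotations, shows one minimal way to do this in Python.

```python
# A toy illustration of organising coded interview segments during thematic
# analysis. The codes and quotations below are invented for demonstration only.
from collections import Counter

coded_segments = [
    ("work_life_balance", "I answer emails at dinner more than I'd like."),
    ("peer_support", "My teammates covered for me when I was overwhelmed."),
    ("work_life_balance", "Weekends don't feel like weekends anymore."),
    ("autonomy", "I can decide how to structure my own day."),
    ("peer_support", "We have an informal chat channel just for venting."),
]

# Tally how often each code appears across the coded data.
code_counts = Counter(code for code, _ in coded_segments)
for code, count in code_counts.most_common():
    print(f"{code}: {count} segment(s)")
```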

Overall, a coherent qualitative research design that incorporates these elements facilitates a study that not only adds theoretical and practical value to the field but also adheres to high standards of quality. This methodological thoroughness is essential for achieving significant, insightful findings. Examples of well-executed research designs can be valuable references for other researchers conducting qualitative or quantitative investigations. An effective research design is critical for producing robust and impactful research outcomes.

Each qualitative research design is unique, diverse, and meticulously tailored to answer specific research questions, meet distinct objectives, and explore the unique nature of the phenomenon under investigation. The methodology is the wider framework that a research design follows. Each methodology in a research design consists of methods, tools, or techniques that compile data and analyze it following a specific approach.

The methods enable researchers to collect data effectively across individuals, different groups, or observations, ensuring they are aligned with the research design. The following list includes the most commonly used methodologies employed in qualitative research designs, highlighting how they serve different purposes and utilize distinct methods to gather and analyze data.

Narrative research designs

The narrative approach in research focuses on the collection and detailed examination of life stories, personal experiences, or narratives to gain insights into individuals' lives as told from their perspectives. It involves constructing a cohesive story out of the diverse experiences shared by participants, often using chronological accounts. It seeks to understand human experience and social phenomena through the form and content of the stories. These can include spontaneous narrations such as memoirs or diaries from participants or diaries solicited by the researcher. Narration helps construct the identity of an individual or a group and can rationalize, persuade, argue, entertain, confront, or make sense of an event or tragedy. To conduct a narrative investigation, it is recommended that researchers follow these steps:

Identify if the research question fits the narrative approach. Its methods are best employed when a researcher wants to learn about the lifestyle and life experience of a single participant or a small number of individuals.

Select the best-suited participants for the research design and spend time compiling their stories using different methods such as observations, diaries, interviewing their family members, or compiling related secondary sources.

Compile the information related to the stories. Narrative researchers collect data based on participants' stories concerning their personal experiences, for example about their workplace or homes, their racial or ethnic culture, and the historical context in which the stories occur.

Analyze the participant stories and "restore" them within a coherent framework. This involves collecting the stories, analyzing them based on key elements such as time, place, plot, and scene, and then rewriting them in a chronological sequence (Ollerenshaw & Creswell, 2000). The framework may also include elements such as a predicament, conflict, or struggle; a protagonist; and a sequence with implicit causality, where the predicament is somehow resolved (Carter, 1993).

Collaborate with participants by actively involving them in the research. Both the researcher and the participant negotiate the meaning of their stories, adding a credibility check to the analysis (Creswell & Miller, 2000).

A narrative investigation involves collecting a large amount of data from the participants, and the researcher needs to understand the context of the individual's life. A keen eye is needed to collect the particular stories that capture the individual experiences. Active collaboration with the participant is necessary, and researchers need to discuss and reflect on their own beliefs and backgrounds. Multiple questions that need to be addressed can arise in the collection, analysis, and retelling of individual stories, such as: Whose story is it? Who can tell it? Who can change it? Which version is compelling? What happens when narratives compete? In a community, what do the stories do among them? (Pinnegar & Daynes, 2006).

Phenomenological research designs


A research design based on phenomenology aims to understand the essence of the lived experiences of a group of people regarding a particular concept or phenomenon. Researchers gather deep insights from individuals who have experienced the phenomenon, striving to describe "what" they experienced and "how" they experienced it. This approach typically involves detailed interviews and aims to reach a deep existential understanding. The purpose is to reduce individual experiences to a description of the universal essence of the phenomenon, that is, an understanding of its nature (van Manen, 1990). In phenomenology, the following steps are usually followed:

Identify a phenomenon of interest . For example, the phenomenon might be anger, professionalism in the workplace, or what it means to be a fighter.

Recognize and specify the philosophical assumptions of phenomenology , for example, one could reflect on the nature of objective reality and individual experiences.

Collect data from individuals who have experienced the phenomenon . This typically involves conducting in-depth interviews, including multiple sessions with each participant. Additionally, other forms of data may be collected using several methods, such as observations, diaries, art, poetry, music, recorded conversations, written responses, or other secondary sources.

Ask participants two general questions that encompass the phenomenon and how the participant experienced it (Moustakas, 1994). For example, what have you experienced in this phenomenon? And what contexts or situations have typically influenced your experiences within the phenomenon? Other open-ended questions may also be asked, but these two questions particularly focus on collecting research data that will lead to a textural description and a structural description of the experiences, and ultimately provide an understanding of the common experiences of the participants.

Review data from the questions posed to participants . It is recommended that researchers review the answers and highlight "significant statements," phrases, or quotes that explain how participants experienced the phenomenon. The researcher can then develop meaningful clusters from these significant statements into patterns or key elements shared across participants.

Write a textural description of what the participants experienced, based on the answers and themes of the two main questions. The answers are also used to write about the characteristics and the context that influenced the way the participants experienced the phenomenon, called imaginative variation or structural description. Researchers should also write about their own experiences and the contexts or situations that influenced them.

Write a composite description from the structural and textural description that presents the "essence" of the phenomenon, called the essential and invariant structure.

A phenomenological approach to a research design requires the strict and careful selection of participants, and bracketing the researcher's personal experiences can be difficult to implement. The researcher decides how and in what way their own knowledge will be introduced. It also involves some understanding and identification of the broader philosophical assumptions.

Grounded theory research designs

Grounded theory is used in a research design when the goal is to inductively develop a theory "grounded" in data that has been systematically gathered and analyzed. Starting from the data collection, researchers identify characteristics, patterns, themes, and relationships, gradually forming a theoretical framework that explains relevant processes, actions, or interactions grounded in the observed reality. A grounded theory study goes beyond description; its objective is to generate a theory, an abstract analytical scheme of a process. The theory does not come "out of nothing" but is constructed from and grounded in systematically collected data. We suggest the following steps for a grounded theory approach in a research design:

Determine if grounded theory is the best for your research problem . Grounded theory is a good design when a theory is not already available to explain a process.

Develop questions that aim to understand how individuals experienced or enacted the process (e.g., What was the process? How did it unfold?). Data collection and analysis occur in tandem, so that researchers can ask more detailed questions that shape further analysis, such as: What was the focal point of the process (central phenomenon)? What influenced or caused this phenomenon to occur (causal conditions)? What strategies were employed during the process? What effect did it have (consequences)?

Gather relevant data about the topic in question . Data gathering involves questions that are usually asked in interviews, although other forms of data can also be collected, such as observations, documents, and audio-visual materials from different groups.

Carry out the analysis in stages . Grounded theory analysis begins with open coding, where the researcher forms codes that inductively emerge from the data (rather than preconceived categories). Researchers can thus identify specific properties and dimensions relevant to their research question.

Assemble the data in new ways and proceed to axial coding . Axial coding involves using a coding paradigm or logic diagram, such as a visual model, to systematically analyze the data. Begin by identifying a central phenomenon, which is the main category or focus of the research problem. Next, explore the causal conditions, which are the categories of factors that influence the phenomenon. Specify the strategies, which are the actions or interactions associated with the phenomenon. Then, identify the context and intervening conditions—both narrow and broad factors that affect the strategies. Finally, delineate the consequences, which are the outcomes or results of employing the strategies.

Use selective coding to construct a "storyline" that links the categories together. Alternatively, the researcher may formulate propositions or theory-driven questions that specify predicted relationships among these categories.

Develop and visually present a matrix that clarifies the social, historical, and economic conditions influencing the central phenomenon. This optional step encourages viewing the model from the narrowest to the broadest perspective.

Write a substantive-level theory that is closely related to a specific problem or population. This step is optional but provides a focused theoretical framework that can later be tested with quantitative data to explore its generalizability to a broader sample.

Allow theory to emerge through the memo-writing process, where ideas about the theory evolve continuously throughout the stages of open, axial, and selective coding.

The researcher should initially set aside any preconceived theoretical ideas to allow for the emergence of analytical and substantive theories. This is a systematic research approach, particularly when following the methodological steps outlined by Strauss and Corbin (1990). For those seeking more flexibility in their research process, the approach suggested by Charmaz (2006) might be preferable.

One of the challenges when using this method in a research design is determining when categories are sufficiently saturated and when the theory is detailed enough. To achieve saturation, discriminant sampling may be employed, where additional information is gathered from individuals similar to those initially interviewed to verify the applicability of the theory to these new participants. Ultimately, its goal is to develop a theory that comprehensively describes the central phenomenon, causal conditions, strategies, context, and consequences.


Ethnographic research design

An ethnographic approach in research design involves the extended observation and data collection of a group or community. The researcher immerses themselves in the setting, often living within the community for long periods. During this time, they collect data by observing and recording behaviours, conversations, and rituals to understand the group's social dynamics and cultural norms. We suggest following these steps for ethnographic methods in a research design:

Assess whether ethnography is the best approach for the research design and questions. It's suitable if the goal is to describe how a cultural group functions and to delve into their beliefs, language, behaviours, and issues like power, resistance, and domination, particularly if there is limited literature due to the group’s marginal status or unfamiliarity to mainstream society.

Identify and select a cultural group for your research design. Choose one that has a long history together, forming distinct languages, behaviours, and attitudes. This group often might be marginalized within society.

Choose cultural themes or issues to examine within the group. Analyze interactions in everyday settings to identify pervasive patterns such as life cycles, events, and overarching cultural themes. Culture is inferred from the group members' words, actions, and the tension between their actual and expected behaviours, as well as the artifacts they use.

Conduct fieldwork to gather detailed information about the group’s living and working environments. Visit the site, respect the daily lives of the members, and collect a diverse range of materials, considering ethical aspects such as respect and reciprocity.

Compile and analyze cultural data to develop a set of descriptive and thematic insights. Begin with a detailed description of the group based on observations of specific events or activities over time. Then, conduct a thematic analysis to identify patterns or themes that illustrate how the group functions and lives. The final output should be a comprehensive cultural portrait that integrates both the participants (emic) and the researcher’s (etic) perspectives, potentially advocating for the group’s needs or suggesting societal changes to better accommodate them.

Researchers engaging in ethnography need a solid understanding of cultural anthropology and the dynamics of sociocultural systems, which are commonly explored in ethnographic research. The data collection phase is notably extensive, requiring prolonged periods in the field. Ethnographers often employ a literary, quasi-narrative style in their narratives, which can pose challenges for those accustomed to more conventional social science writing methods.

Another potential issue is the risk of researchers "going native," where they become overly assimilated into the community under study, potentially jeopardizing the objectivity and completion of their research. It's crucial for researchers to be aware of their impact on the communities and environments they are studying.

The case study approach in a research design focuses on a detailed examination of a single case or a small number of cases. Cases can be individuals, groups, organizations, or events. Case studies are particularly useful for research designs that aim to understand complex issues in real-life contexts. The aim is to provide a thorough description and contextual analysis of the cases under investigation. We suggest following these steps in a case study design:

Assess if a case study approach suits your research questions . This approach works well when you have distinct cases with defined boundaries and aim to deeply understand these cases or compare multiple cases.

Choose your case or cases. These could involve individuals, groups, programs, events, or activities. Decide whether an individual or collective, multi-site or single-site case study is most appropriate, focusing on specific cases or themes (Stake, 1995; Yin, 2003).

Gather data extensively from diverse sources . Collect information through archival records, interviews, direct and participant observations, and physical artifacts (Yin, 2003).

Analyze the data holistically or in focused segments. Provide a comprehensive overview of the entire case or concentrate on specific aspects. Start with a detailed description, including the history of the case and its chronological events, then narrow down to key themes. The aim is to delve into the case's complexity rather than generalize findings.

Interpret and report the significance of the case in the final phase . Explain what insights were gained, whether about the subject of the case in an instrumental study or an unusual situation in an intrinsic study (Lincoln & Guba, 1985).

The investigator must carefully select the case or cases to study, recognizing that multiple potential cases could illustrate a chosen topic or issue. This selection process involves deciding whether to focus on a single case for deeper analysis or multiple cases, which may provide broader insights but less depth per case. Each choice requires a well-justified rationale for the selected cases. Researchers face the challenge of defining the boundaries of a case, such as its temporal scope and the events and processes involved. This decision in a research design is crucial as it affects the depth and value of the information presented in the study, and therefore should be planned to ensure a comprehensive portrayal of the case.

Important reminders when designing a research study

Qualitative and quantitative research designs are distinct in their approach to data collection and data analysis. Unlike quantitative research, which focuses on numerical data and statistical analysis, qualitative research prioritizes understanding the depth and richness of human experiences, behaviours, and interactions.

Qualitative methods in a research design must have internal coherence, meaning that all elements of the research project—research question, data collection, data analysis, findings, and theory—are well-aligned and consistent with each other. This coherence is especially crucial in inductive qualitative research, where the research process often follows a recursive and evolving path. Ensuring that each component of the research design fits seamlessly with the others enhances the clarity and impact of the study, making the research findings more robust and compelling. Whether it is a descriptive, explanatory, diagnostic, or correlational research design, coherence is an important element in both qualitative and quantitative research.

Finally, a good research design ensures that the research is conducted ethically and considers the well-being and rights of participants when managing collected data. The research design guides researchers in providing a clear rationale for their methodologies, which is crucial for justifying the research objectives to the scientific community. A thorough research design also contributes to the body of knowledge, enabling researchers to build upon past research studies and explore new dimensions within their fields. At the core of the design, there is a clear articulation of the research objectives. These objectives should be aligned with the underlying concepts being investigated, offering a concise method to answer the research questions and guiding the direction of the study with proper qualitative methods.

Carter, K. (1993). The place of a story in the study of teaching and teacher education. Educational Researcher, 22(1), 5-12, 18.

Charmaz, K. (2006). Constructing grounded theory. London: Sage.

Creswell, J. W., & Miller, D. L. (2000). Determining validity in qualitative inquiry. Theory Into Practice, 39(3), 124-130.

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Newbury Park, CA: Sage.

Moustakas, C. (1994). Phenomenological research methods. Thousand Oaks, CA: Sage.

Ollerenshaw, J. A., & Creswell, J. W. (2000, April). Data analysis in narrative research: A comparison of two “restoring” approaches. Paper presented at the annual meeting of the American Educational Research Association, New Orleans, LA.

Stake, R. E. (1995). The art of case study research. Thousand Oaks, CA: Sage.

Strauss, A., & Corbin, J. (1990). Basics of qualitative research: Grounded theory procedures and techniques. Newbury Park, CA: Sage.

van Manen, M. (1990). Researching lived experience: Human science for an action sensitive pedagogy. Ontario, Canada: University of Western Ontario.

Yin, R. K. (2003). Case study research: Design and methods (3rd ed.). Thousand Oaks, CA: Sage.



What is a Research Problem? Characteristics, Types, and Examples

A research problem is a gap in existing knowledge, a contradiction in an established theory, or a real-world challenge that a researcher aims to address in their research. It is at the heart of any scientific inquiry, directing the trajectory of an investigation. The statement of a problem orients the reader to the importance of the topic, sets the problem into a particular context, and defines the relevant parameters, providing the framework for reporting the findings. Therein lies the importance of research problems.

The formulation of well-defined research questions is central to addressing a research problem. A research question is a statement made in the form of a question to provide focus, clarity, and structure to the research endeavor. This helps the researcher design methodologies, collect data, and analyze results in a systematic and coherent manner. A study may have one or more research questions depending on the nature of the study.


Identifying and addressing a research problem is very important. By starting with a pertinent problem, a scholar can contribute to the accumulation of evidence-based insights, solutions, and scientific progress, thereby advancing the frontier of research. Moreover, the process of formulating research problems and posing pertinent research questions cultivates critical thinking and hones problem-solving skills.


What is a Research Problem?

Before you conceive of your project, you need to ask yourself, “What is a research problem?” A research problem can be broadly defined as the primary statement of a knowledge gap or a fundamental challenge in a field, which forms the foundation for research. Conversely, the findings from a research investigation provide solutions to the problem.

A research problem guides the selection of approaches and methodologies, data collection, and interpretation of results to find answers or solutions. A well-defined problem determines the generation of valuable insights and contributions to the broader intellectual discourse.  

Characteristics of a Research Problem  

Knowing the characteristics of a research problem is instrumental in formulating a research inquiry; take a look at the five key characteristics below:  

Novel: An ideal research problem introduces a fresh perspective, offering something new to the existing body of knowledge. It should contribute original insights and address unresolved matters or essential gaps in knowledge.

Significant: A problem should hold significance in terms of its potential impact on theory, practice, policy, or the understanding of a particular phenomenon. It should be relevant to the field of study, addressing a gap in knowledge, a practical concern, or a theoretical dilemma.

Feasible: A practical research problem allows for the formulation of hypotheses and the design of research methodologies. A feasible research problem is one that can realistically be investigated given the available resources, time, and expertise. It should not be too broad or too narrow to explore effectively, and it should be measurable in terms of its variables and outcomes. It should be amenable to investigation through empirical research methods, such as data collection and analysis, to arrive at meaningful conclusions. A practical research problem also takes budgetary and time constraints into account, as well as the limitations of the problem; these limitations may arise from the methodology, the available resources, or the complexity of the problem.

Clear and specific: A well-defined research problem is clear and specific, leaving no room for ambiguity; it should be easily understandable and precisely articulated. Specificity keeps the problem focused on a distinct aspect of the broader topic rather than leaving it vague.

Rooted in evidence: A good research problem leans on trustworthy evidence and data, while dismissing unverifiable information. It must also consider ethical guidelines, ensuring the well-being and rights of any individuals or groups involved in the study.


Types of Research Problems  

Across fields and disciplines, there are different types of research problems. We can broadly categorize them into three types.

  • Theoretical research problems

Theoretical research problems deal with conceptual and intellectual inquiries that may not involve empirical data collection but instead seek to advance our understanding of complex concepts, theories, and phenomena within their respective disciplines. For example, in the social sciences, research problems may be casuist (relating to the determination of right and wrong in questions of conduct or conscience), difference (comparing or contrasting two or more phenomena), descriptive (aiming to describe a situation or state), or relational (investigating characteristics that are related in some way).

Here are some theoretical research problem examples:

  • Ethical frameworks that can provide coherent justifications for artificial intelligence and machine learning algorithms, especially in contexts involving autonomous decision-making and moral agency.  
  • Determining how mathematical models can elucidate the gradual development of complex traits, such as intricate anatomical structures or elaborate behaviors, through successive generations.  
  • Applied research problems

Applied or practical research problems focus on addressing real-world challenges and generating practical solutions to improve various aspects of society, technology, health, and the environment.  

Here are some applied research problem examples:

  • Studying the use of precision agriculture techniques to optimize crop yield and minimize resource waste.  
  • Designing a more energy-efficient and sustainable transportation system for a city to reduce carbon emissions.  
  • Action research problems

Action research problems aim to create positive change within specific contexts by involving stakeholders, implementing interventions, and evaluating outcomes in a collaborative manner.  

Here are some action research problem examples:

  • Partnering with healthcare professionals to identify barriers to patient adherence to medication regimens and devising interventions to address them.  
  • Collaborating with a nonprofit organization to evaluate the effectiveness of their programs aimed at providing job training for underserved populations.  

These different types of research problems may give you some ideas when you plan on developing your own.  

How to Define a Research Problem  

You might now ask “How to define a research problem?” These are the general steps to follow:

  • Look for a broad problem area: Identify under-explored aspects or areas of concern, or a controversy in your topic of interest. Evaluate the significance of addressing the problem in terms of its potential contribution to the field, practical applications, or theoretical insights.
  • Learn more about the problem: Read the literature, starting from historical aspects to the current status and latest updates. Rely on reputable evidence and data. Be sure to consult researchers who work in the relevant field, mentors, and peers. Do not ignore the gray literature on the subject.
  • Identify the relevant variables and how they are related: Consider which variables are most important to the study and will help answer the research question. Once this is done, you will need to determine the relationships between these variables and how these relationships affect the research problem.
  • Think of practical aspects: Deliberate on ways that your study can be practical and feasible in terms of time and resources. Discuss practical aspects with researchers in the field and be open to revising the problem based on feedback. Refine the scope of the research problem to make it manageable and specific; consider the resources available, time constraints, and feasibility.
  • Formulate the problem statement: Craft a concise problem statement that outlines the specific issue, its relevance, and why it needs further investigation.
  • Stick to plans, but be flexible: When defining the problem, plan ahead but adhere to your budget and timeline. At the same time, consider all possibilities and ensure that the problem and question can be modified if needed.


Key Takeaways  

  • A research problem concerns an area of interest, a situation necessitating improvement, an obstacle requiring eradication, or a challenge in theory or practical applications.   
  • The importance of a research problem is that it guides the research and helps advance human understanding and the development of practical solutions.
  • Research problem definition begins with identifying a broad problem area, followed by learning more about the problem, identifying the variables and how they are related, considering practical aspects, and finally developing the problem statement.  
  • Different types of research problems include theoretical, applied, and action research problems, and these depend on the discipline and nature of the study.
  • An ideal problem is original, important, feasible, specific, and based on evidence.  

Frequently Asked Questions  

Why is it important to define a research problem?  

Identifying potential issues and gaps as research problems is important for choosing a relevant topic and for determining a well-defined course of one’s research. Pinpointing a problem and formulating research questions can help researchers build their critical thinking, curiosity, and problem-solving abilities.   

How do I identify a research problem?  

Identifying a research problem involves recognizing gaps in existing knowledge, exploring areas of uncertainty, and assessing the significance of addressing these gaps within a specific field of study. This process often involves thorough literature review, discussions with experts, and considering practical implications.  

Can a research problem change during the research process?  

Yes, a research problem can change during the research process. During the course of an investigation a researcher might discover new perspectives, complexities, or insights that prompt a reevaluation of the initial problem. The scope of the problem, unforeseen or unexpected issues, or other limitations might prompt some tweaks. You should be able to adjust the problem to ensure that the study remains relevant and aligned with the evolving understanding of the subject matter.

How does a research problem relate to research questions or hypotheses?  

A research problem sets the stage for the study. Next, research questions refine the direction of investigation by breaking down the broader research problem into manageable components. Research questions are formulated based on the problem, guiding the investigation’s scope and objectives. The hypothesis provides a testable statement to validate or refute within the research process. All three elements are interconnected and work together to guide the research.



5 Research design

Research design is a comprehensive plan for data collection in an empirical research project. It is a ‘blueprint’ for empirical research aimed at answering specific research questions or testing specific hypotheses, and must specify at least three processes: the data collection process, the instrument development process, and the sampling process. The instrument development and sampling processes are described in the next two chapters, and the data collection process—which is often loosely called ‘research design’—is introduced in this chapter and is described in further detail in Chapters 9–12.

Broadly speaking, data collection methods can be grouped into two categories: positivist and interpretive. Positivist methods, such as laboratory experiments and survey research, are aimed at theory (or hypotheses) testing, while interpretive methods, such as action research and ethnography, are aimed at theory building. Positivist methods employ a deductive approach to research, starting with a theory and testing theoretical postulates using empirical data. In contrast, interpretive methods employ an inductive approach that starts with data and tries to derive a theory about the phenomenon of interest from the observed data. Oftentimes, these methods are incorrectly equated with quantitative and qualitative research. Quantitative and qualitative methods refer to the type of data being collected—quantitative data involve numeric scores, metrics, and so on, while qualitative data include interviews, observations, and so forth—and analysed (i.e., using quantitative techniques such as regression or qualitative techniques such as coding). Positivist research uses predominantly quantitative data, but can also use qualitative data. Interpretive research relies heavily on qualitative data, but can sometimes benefit from including quantitative data as well. Sometimes, joint use of qualitative and quantitative data may help generate unique insight into a complex social phenomenon that is not available from either type of data alone, and hence, mixed-mode designs that combine qualitative and quantitative data are often highly desirable.

Key attributes of a research design

The quality of research designs can be defined in terms of four key design attributes: internal validity, external validity, construct validity, and statistical conclusion validity.

Internal validity, also called causality, examines whether the observed change in a dependent variable is indeed caused by a corresponding change in a hypothesised independent variable, and not by variables extraneous to the research context. Causality requires three conditions: covariation of cause and effect (i.e., if the cause happens, then the effect also happens; if the cause does not happen, the effect does not happen), temporal precedence (the cause must precede the effect in time), and the absence of spurious correlation (i.e., there is no plausible alternative explanation for the change). Certain research designs, such as laboratory experiments, are strong in internal validity by virtue of their ability to manipulate the independent variable (cause) via a treatment and observe the effect (dependent variable) of that treatment after a certain point in time, while controlling for the effects of extraneous variables. Other designs, such as field surveys, are poor in internal validity because of their inability to manipulate the independent variable (cause), and because cause and effect are measured at the same point in time, which defeats temporal precedence and makes it equally likely that the expected effect might have influenced the expected cause rather than the reverse. Although higher in internal validity compared to other methods, laboratory experiments are by no means immune to threats of internal validity, and are susceptible to history, testing, instrumentation, regression, and other threats that are discussed later in the chapter on experimental designs. Nonetheless, different research designs vary considerably in their respective level of internal validity.

External validity or generalisability refers to whether the observed associations can be generalised from the sample to the population (population validity), or to other people, organisations, contexts, or times (ecological validity). For instance, can results drawn from a sample of financial firms in the United States be generalised to the population of financial firms (population validity), or to other types of firms, contexts, or time periods (ecological validity)? Survey research, where data is sourced from a wide variety of individuals, firms, or other units of analysis, tends to have broader generalisability than laboratory experiments, where treatments and extraneous variables are more tightly controlled. The variation in internal and external validity for a wide range of research designs is shown in Figure 5.1.

Internal and external validity

Some researchers claim that there is a trade-off between internal and external validity: higher external validity can come only at the cost of internal validity, and vice versa. But this is not always the case. Research designs such as field experiments, longitudinal field surveys, and multiple case studies have higher degrees of both internal and external validity. Personally, I prefer research designs that have reasonable degrees of both internal and external validity, i.e., those that fall within the cone of validity shown in Figure 5.1. But this should not suggest that designs outside this cone are any less useful or valuable. Researchers’ choice of design is ultimately a matter of their personal preference and competence, and of the level of internal and external validity they desire.

Construct validity examines how well a given measurement scale measures the theoretical construct that it is expected to measure. Many constructs used in social science research, such as empathy, resistance to change, and organisational learning, are difficult to define, much less measure. For instance, construct validity must ensure that a measure of empathy is indeed measuring empathy and not compassion, which may be difficult since these constructs are somewhat similar in meaning. Construct validity is assessed in positivist research based on correlational or factor analysis of pilot test data, as described in the next chapter.
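As a rough illustration of how pilot-test data can be factor-analysed, the sketch below uses simulated responses and invented item names (none of them from the text). Items written for the same construct should load on one factor, while items tapping a similar but distinct construct, such as compassion, should load on a different one.

```python
# A minimal sketch, assuming simulated pilot data and hypothetical item names,
# of a factor-analytic check on construct validity: do the empathy items and
# the compassion items load on separate factors?
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 200                                   # hypothetical pilot respondents
empathy = rng.normal(size=n)              # simulated latent trait 1
compassion = rng.normal(size=n)           # simulated latent trait 2

# Six Likert-style items: three written for each intended construct, plus noise.
items = np.column_stack([
    empathy + rng.normal(scale=0.5, size=n),      # emp1
    empathy + rng.normal(scale=0.5, size=n),      # emp2
    empathy + rng.normal(scale=0.5, size=n),      # emp3
    compassion + rng.normal(scale=0.5, size=n),   # comp1
    compassion + rng.normal(scale=0.5, size=n),   # comp2
    compassion + rng.normal(scale=0.5, size=n),   # comp3
])

fa = FactorAnalysis(n_components=2, rotation="varimax", random_state=0).fit(items)
print(np.round(fa.components_, 2))  # each row is a factor; items should load mainly on one
```

If the loading pattern were muddled, the scale developer would revise or drop the offending items before the main study.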

Statistical conclusion validity examines the extent to which conclusions derived using a statistical procedure are valid. For example, it examines whether the right statistical method was used for hypothesis testing, whether the variables used meet the assumptions of that statistical test (such as sample size or distributional requirements), and so forth. Because interpretive research designs do not employ statistical tests, statistical conclusion validity is not applicable to such analyses. The different kinds of validity and where they exist at the theoretical/empirical levels are illustrated in Figure 5.2.

Different types of validity in scientific research
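To make the idea of statistical conclusion validity concrete, here is a hedged sketch with simulated data: before trusting a two-sample t-test, the analyst checks the normality and equal-variance assumptions the test relies on. The group names and scores are invented for illustration.

```python
# A minimal sketch, using simulated scores, of assumption checks that support
# statistical conclusion validity before a two-sample t-test is interpreted.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.normal(loc=5.0, scale=1.0, size=40)   # simulated outcome scores, group A
group_b = rng.normal(loc=5.5, scale=1.0, size=40)   # simulated outcome scores, group B

print(stats.shapiro(group_a))           # normality check for group A
print(stats.shapiro(group_b))           # normality check for group B
print(stats.levene(group_a, group_b))   # homogeneity-of-variance check

# Only if the assumptions look tenable is the t-test conclusion trustworthy.
print(stats.ttest_ind(group_a, group_b))
```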

Improving internal and external validity

The best research designs are those that can ensure high levels of internal and external validity. Such designs would guard against spurious correlations, inspire greater faith in hypothesis testing, and ensure that the results drawn from a small sample are generalisable to the population at large. Controls are required to ensure the internal validity (causality) of research designs, and can be accomplished in five ways: manipulation, elimination, inclusion, statistical control, and randomisation.

In manipulation , the researcher manipulates the independent variables in one or more levels (called ‘treatments’), and compares the effects of the treatments against a control group where subjects do not receive the treatment. Treatments may include a new drug or different dosage of drug (for treating a medical condition), a teaching style (for students), and so forth. This type of control is achieved in experimental or quasi-experimental designs, but not in non-experimental designs such as surveys. Note that if subjects cannot distinguish adequately between different levels of treatment manipulations, their responses across treatments may not be different, and manipulation would fail.

The elimination technique relies on eliminating extraneous variables by holding them constant across treatments, such as by restricting the study to a single gender or a single socioeconomic status. In the inclusion technique, the role of extraneous variables is considered by including them in the research design and separately estimating their effects on the dependent variable, such as via factorial designs where one factor is gender (male versus female). This technique allows for greater generalisability, but also requires substantially larger samples. In statistical control, extraneous variables are measured and used as covariates during the statistical testing process.
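A hedged sketch of statistical control follows; the treatment indicator, the age covariate, and the simulated outcome are all invented for illustration. The covariate is entered into the regression so that the treatment effect is estimated net of its influence.

```python
# A minimal sketch, assuming hypothetical variables, of statistical control:
# the extraneous variable (age) is measured and included as a covariate.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 200
df = pd.DataFrame({
    "treatment": rng.integers(0, 2, size=n),   # 0 = control, 1 = treatment
    "age": rng.normal(40, 10, size=n),         # extraneous variable used as covariate
})
# Simulated outcome: a treatment effect, an age effect, and random noise.
df["outcome"] = 2.0 * df["treatment"] + 0.1 * df["age"] + rng.normal(size=n)

model = smf.ols("outcome ~ treatment + age", data=df).fit()
print(model.params)  # the treatment coefficient is now adjusted for age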

Finally, the randomisation technique is aimed at cancelling out the effects of extraneous variables through a process of random sampling, if it can be assured that these effects are of a random (non-systematic) nature. Two types of randomisation are: random selection , where a sample is selected randomly from a population, and random assignment , where subjects selected in a non-random manner are randomly assigned to treatment groups.
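The difference between the two forms of randomisation can be sketched in a few lines; the subject pool and group sizes below are invented purely for illustration.

```python
# A minimal sketch contrasting random selection (drawing a sample from a
# population) with random assignment (allocating recruited subjects to groups).
import random

random.seed(42)
population = [f"person_{i}" for i in range(1000)]   # hypothetical population

# Random selection: draw a probability sample from the population.
sample = random.sample(population, k=100)

# Random assignment: shuffle the recruited subjects and split them into groups.
random.shuffle(sample)
treatment_group = sample[:50]
control_group = sample[50:]
print(len(treatment_group), len(control_group))
```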

Randomisation also ensures external validity, allowing inferences drawn from the sample to be generalised to the population from which the sample is drawn. Note that when random selection is not possible because of resource or access constraints, random assignment becomes essential. However, generalisability across populations is harder to ascertain, since populations may differ on multiple dimensions and you can only control for a few of those dimensions.

Popular research designs

As noted earlier, research designs can be classified into two categories—positivist and interpretive—depending on the goal of the research. Positivist designs are meant for theory testing, while interpretive designs are meant for theory building. Positivist designs seek generalised patterns based on an objective view of reality, while interpretive designs seek subjective interpretations of social phenomena from the perspectives of the subjects involved. Some popular examples of positivist designs include laboratory experiments, field experiments, field surveys, secondary data analysis, and case research, while examples of interpretive designs include case research, phenomenology, and ethnography. Note that case research can be used for theory building or theory testing, though not at the same time. Not all techniques are suited for all kinds of scientific research. Some techniques such as focus groups are best suited for exploratory research, others such as ethnography are best for descriptive research, and still others such as laboratory experiments are ideal for explanatory research. Following are brief descriptions of some of these designs. Additional details are provided in Chapters 9–12.

Experimental studies are those that are intended to test cause-effect relationships (hypotheses) in a tightly controlled setting by separating the cause from the effect in time, administering the cause to one group of subjects (the ‘treatment group’) but not to another group (the ‘control group’), and observing how the mean effects vary between subjects in these two groups. For instance, if we design a laboratory experiment to test the efficacy of a new drug in treating a certain ailment, we can get a random sample of people afflicted with that ailment, randomly assign them to one of two groups (treatment and control groups), administer the drug to subjects in the treatment group, but only give a placebo (e.g., a sugar pill with no medicinal value) to subjects in the control group. More complex designs may include multiple treatment groups, such as low versus high dosage of the drug, or combining drug administration with dietary interventions. In a true experimental design, subjects must be randomly assigned to each group. If random assignment is not followed, then the design becomes quasi-experimental. Experiments can be conducted in an artificial or laboratory setting such as at a university (laboratory experiments) or in field settings such as an organisation where the phenomenon of interest is actually occurring (field experiments). Laboratory experiments allow the researcher to isolate the variables of interest and control for extraneous variables, which may not be possible in field experiments. Hence, inferences drawn from laboratory experiments tend to be stronger in internal validity, while those from field experiments tend to be stronger in external validity. Experimental data is analysed using quantitative statistical techniques. The primary strength of the experimental design is its strong internal validity, due to its ability to isolate, control, and intensively examine a small number of variables, while its primary weakness is limited external generalisability, since real life is often more complex (i.e., involving more extraneous variables) than contrived lab settings. Furthermore, if the researcher does not identify relevant extraneous variables ex ante and control for them, this lack of controls may hurt internal validity and lead to spurious correlations.

Field surveys are non-experimental designs that do not control for or manipulate independent variables or treatments, but measure these variables and test their effects using statistical methods. Field surveys capture snapshots of practices, beliefs, or situations from a random sample of subjects in field settings through a survey questionnaire or, less frequently, through a structured interview. In cross-sectional field surveys, independent and dependent variables are measured at the same point in time (e.g., using a single questionnaire), while in longitudinal field surveys, dependent variables are measured at a later point in time than the independent variables. The strengths of field surveys are their external validity (since data is collected in field settings), their ability to capture and control for a large number of variables, and their ability to study a problem from multiple perspectives or using multiple theories. However, because of their non-temporal nature, internal validity (cause-effect relationships) is difficult to infer, and surveys may be subject to respondent biases (e.g., subjects may provide a ‘socially desirable’ response rather than their true response), which further hurts internal validity.

Secondary data analysis is an analysis of data that has previously been collected and tabulated by other sources. Such data may include data from government agencies, such as employment statistics from the U.S. Bureau of Labor Statistics or development statistics by country from the United Nations Development Programme, data collected by other researchers (often used in meta-analytic studies), or publicly available third-party data, such as financial data from stock markets or real-time auction data from eBay. This is in contrast to most other research designs, where collecting primary data is part of the researcher’s job. Secondary data analysis may be an effective means of research where primary data collection is too costly or infeasible, and secondary data is available at a level of analysis suitable for answering the researcher’s questions. The limitations of this design are that the data might not have been collected in a systematic or scientific manner and may therefore be unsuitable for scientific research; that, because the data was collected for a presumably different purpose, it may not adequately address the research questions of interest to the researcher; and that internal validity is problematic if the temporal precedence between cause and effect is unclear.

Case research is an in-depth investigation of a problem in one or more real-life settings (case sites) over an extended period of time. Data may be collected using a combination of interviews, personal observations, and internal or external documents. Case studies can be positivist in nature (for hypotheses testing) or interpretive (for theory building). The strength of this research method is its ability to discover a wide variety of social, cultural, and political factors potentially related to the phenomenon of interest that may not be known in advance. Analysis tends to be qualitative in nature, but heavily contextualised and nuanced. However, interpretation of findings may depend on the observational and integrative ability of the researcher, lack of control may make it difficult to establish causality, and findings from a single case site may not be readily generalised to other case sites. Generalisability can be improved by replicating and comparing the analysis in other case sites in a multiple case design .

Focus group research is a type of research that involves bringing in a small group of subjects (typically six to ten people) at one location, and having them discuss a phenomenon of interest for a period of one and a half to two hours. The discussion is moderated and led by a trained facilitator, who sets the agenda and poses an initial set of questions for participants, makes sure that the ideas and experiences of all participants are represented, and attempts to build a holistic understanding of the problem situation based on participants’ comments and experiences. Internal validity cannot be established due to lack of controls and the findings may not be generalised to other settings because of the small sample size. Hence, focus groups are not generally used for explanatory or descriptive research, but are more suited for exploratory research.

Action research assumes that complex social phenomena are best understood by introducing interventions or ‘actions’ into those phenomena and observing the effects of those actions. In this method, the researcher is embedded within a social context such as an organisation and initiates an action—such as new organisational procedures or new technologies—in response to a real problem such as declining profitability or operational bottlenecks. The researcher’s choice of actions must be based on theory, which should explain why and how such actions may cause the desired change. The researcher then observes the results of that action, modifying it as necessary, while simultaneously learning from the action and generating theoretical insights about the target problem and interventions. The initial theory is validated by the extent to which the chosen action successfully solves the target problem. Simultaneous problem solving and insight generation is the central feature that distinguishes action research from all other research methods, and hence, action research is an excellent method for bridging research and practice. This method is also suited for studying unique social problems that cannot be replicated outside that context, but it is also subject to researcher bias and subjectivity, and the generalisability of findings is often restricted to the context where the study was conducted.

Ethnography is an interpretive research design, inspired by anthropology, that emphasises that a research phenomenon must be studied within the context of its culture. The researcher is deeply immersed in a certain culture over an extended period of time (eight months to two years) and, during that period, engages, observes, and records the daily life of the studied culture, and theorises about the evolution and behaviours in that culture. Data is collected primarily via observational techniques, formal and informal interaction with participants in that culture, and personal field notes, while data analysis involves ‘sense-making’. The researcher must narrate her experience in great detail so that readers may experience that same culture without necessarily being there. The advantages of this approach are its sensitivity to context, the rich and nuanced understanding it generates, and minimal respondent bias. However, it is also an extremely time- and resource-intensive approach, and findings are specific to a given culture and less generalisable to other cultures.

Selecting research designs

Given the above multitude of research designs, which design should researchers choose for their research? Generally speaking, researchers tend to select those research designs that they are most comfortable with and feel most competent to handle, but ideally, the choice should depend on the nature of the research phenomenon being studied. In the preliminary phases of research, when the research problem is unclear and the researcher wants to scope out the nature and extent of a certain research problem, a focus group (for an individual unit of analysis) or a case study (for an organisational unit of analysis) is an ideal strategy for exploratory research. As one delves further into the research domain and finds that there are no good theories to explain the phenomenon of interest, and wishes to build a theory to fill that gap, interpretive designs such as case research or ethnography may be useful. If competing theories exist and the researcher wishes to test these different theories or integrate them into a larger theory, positivist designs such as experimental design, survey research, or secondary data analysis are more appropriate.

Regardless of the specific research design chosen, the researcher should strive to collect both quantitative and qualitative data, using a combination of techniques such as questionnaires, interviews, observations, documents, or secondary data. For instance, even in a highly structured survey questionnaire intended to collect quantitative data, the researcher may leave some room for a few open-ended questions to collect qualitative data that may generate unexpected insights not otherwise available from structured quantitative data alone. Likewise, while case research employs mostly face-to-face interviews to collect qualitative data, the potential and value of collecting quantitative data should not be ignored. As an example, in a study of organisational decision-making processes, the case interviewer can record numeric quantities such as how many months it took to make certain organisational decisions, how many people were involved in that decision process, and how many decision alternatives were considered, which can provide valuable insights not otherwise available from interviewees’ narrative responses. Irrespective of the specific research design employed, the goal of the researcher should be to collect as much and as diverse data as possible to help generate the best possible insights about the phenomenon of interest.

Social Science Research: Principles, Methods and Practices (Revised edition) Copyright © 2019 by Anol Bhattacherjee is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Research Methods Guide: Research Design & Method


Tutorial Videos: Research Design & Method

  • Research Methods (sociology-focused)
  • Qualitative vs. Quantitative Methods (intro)
  • Qualitative vs. Quantitative Methods (advanced)

FAQ: Research Design & Method

What is the difference between Research Design and Research Method?

Research design is a plan to answer your research question.  A research method is a strategy used to implement that plan.  Research design and methods are different but closely related, because good research design ensures that the data you obtain will help you answer your research question more effectively.

Which research method should I choose?

It depends on your research goal, and on what subjects (and whom) you want to study. Let's say you are interested in studying what makes people happy, or why some students are more conscious about recycling on campus. To answer these questions, you need to decide how to collect your data. The most frequently used methods include:

  • Observation / Participant Observation
  • Focus Groups
  • Experiments
  • Secondary Data Analysis / Archival Study
  • Mixed Methods (combination of some of the above)

One particular method could be better suited to your research goal than others, because the data you collect from different methods will differ in quality and quantity. For instance, surveys are usually designed to produce relatively short answers, rather than the extensive responses expected in qualitative interviews.

What other factors should I consider when choosing one method over another?

Time for data collection and analysis is something you want to consider. An observation or interview method (a so-called qualitative approach) helps you collect richer information, but it takes time. Using a survey helps you collect more data quickly, yet it may lack detail. So you will need to consider the time you have for research and the balance between the strengths and weaknesses associated with each method (e.g., qualitative vs. quantitative).


The Four Types of Research Design — Everything You Need to Know

Jenny Romanchuk

Updated: July 23, 2024

Published: January 18, 2023

When you conduct research, you need to have a clear idea of what you want to achieve and how to accomplish it. A good research design enables you to collect accurate and reliable data to draw valid conclusions.


In this blog post, we'll outline the key features of the four common types of research design with real-life examples from UnderArmor, Carmex, and more. Then, you can easily choose the right approach for your project.

Table of Contents

  • What is research design?
  • The four types of research design
  • Research design examples

What is research design?

Research design is the process of planning and executing a study to answer specific questions. This process allows you to test hypotheses in the business or scientific fields.

Research design involves choosing the right methodology, selecting the most appropriate data collection methods, and devising a plan (or framework) for analyzing the data. In short, a good research design helps us to structure our research.

Marketers use different types of research design when conducting research.

The four types of research design

There are four common types of research design: descriptive, correlational, experimental, and diagnostic. Let’s take a look at each in more detail.

Researchers use different designs to accomplish different research objectives. Here, we'll discuss how to choose the right type, the benefits of each, and use cases.

Research can also be classified as quantitative or qualitative at a higher level. Some experiments exhibit both qualitative and quantitative characteristics.


Experimental

An experimental design is used when the researcher wants to examine how variables interact with each other. The researcher manipulates one variable (the independent variable) and observes the effect on another variable (the dependent variable).

In other words, the researcher wants to test a causal relationship between two or more variables.

In marketing, an example of experimental research would be comparing the effects of a television commercial versus an online advertisement conducted in a controlled environment (e.g. a lab). The objective of the research is to test which advertisement gets more attention among people of different age groups, gender, etc.

Another example is a study of the effect of music on productivity. A researcher assigns participants to one of two groups (those who listen to music while working and those who don't) and measures their productivity.

The main benefit of an experimental design is that it allows the researcher to draw causal relationships between variables.

One limitation: This research requires a great deal of control over the environment and participants, making it difficult to replicate in the real world. In addition, it’s quite costly.

Best for: Testing a cause-and-effect relationship (i.e., the effect of an independent variable on a dependent variable).

Correlational

A correlational design examines the relationship between two or more variables without intervening in the process.

Correlational design allows the analyst to observe natural relationships between variables. This results in data being more reflective of real-world situations.

For example, marketers can use correlational design to examine the relationship between brand loyalty and customer satisfaction. In particular, the researcher would look for patterns or trends in the data to see if there is a relationship between these two variables.

Similarly, you can study the relationship between physical activity and mental health. The analyst here would ask participants to complete surveys about their physical activity levels and mental health status. Data would show how the two variables are related.

Best for: Understanding the extent to which two or more variables are associated with each other in the real world.
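To illustrate the physical activity example above, here is a small sketch with simulated survey scores; nothing is manipulated, the analyst simply quantifies how strongly the two measured variables are associated.

```python
# A minimal sketch, using simulated survey scores, of a correlational analysis:
# compute the strength of association between two measured variables.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
activity = rng.normal(50, 10, size=150)                       # self-reported activity score
mental_health = 0.4 * activity + rng.normal(0, 8, size=150)   # simulated related outcome

r, p_value = stats.pearsonr(activity, mental_health)
print(f"Pearson r = {r:.2f}, p = {p_value:.4f}")
```

A strong correlation here would still say nothing about which variable, if either, causes the other.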

Descriptive

Descriptive research refers to a systematic process of observing and describing what a subject does without influencing them.

Methods include surveys, interviews, case studies, and observations. Descriptive research aims to gather an in-depth understanding of a phenomenon and answers the questions of what, when, and where.

SaaS companies use descriptive design to understand how customers interact with specific features. Findings can be used to spot patterns and roadblocks.

For instance, product managers can use screen recordings by Hotjar to observe in-app user behavior. This way, the team can precisely understand what is happening at a certain stage of the user journey and act accordingly.

Brand24, a social listening tool, nearly tripled its sign-up conversion rate, from 2.56% to 7.42% (roughly a 2.9× increase), thanks to locating friction points in the sign-up form through screen recordings.


Carma Laboratories worked with research company MMR to measure customers’ reactions to the lip-care company’s packaging and product. The goal was to find the cause of low sales for a recently launched line extension in Europe.

The team moderated a live, online focus group. Participants were shown product samples, while AI-driven natural language processing (NLP) identified key themes in customer feedback.

This helped uncover key reasons for poor performance and guided changes in packaging.




How to Define a Research Problem | Ideas & Examples

Published on 8 November 2022 by Shona McCombes and Tegan George.

A research problem is a specific issue or gap in existing knowledge that you aim to address in your research. You may choose to look for practical problems aimed at contributing to change, or theoretical problems aimed at expanding knowledge.

Some research will do both of these things, but usually the research problem focuses on one or the other. The type of research problem you choose depends on your broad topic of interest and the type of research you think will fit best.

This article helps you identify and refine a research problem. When writing your research proposal or introduction, formulate it as a problem statement and/or research questions.

Table of contents

  • Why is the research problem important?
  • Step 1: Identify a broad problem area
  • Step 2: Learn more about the problem
  • Frequently asked questions about research problems

Why is the research problem important?

Having an interesting topic isn’t a strong enough basis for academic research. Without a well-defined research problem, you are likely to end up with an unfocused and unmanageable project.

You might end up repeating what other people have already said, trying to say too much, or doing research without a clear purpose and justification. You need a clear problem in order to do research that contributes new and relevant insights.

Whether you’re planning your thesis , starting a research paper , or writing a research proposal , the research problem is the first step towards knowing exactly what you’ll do and why.


Step 1: Identify a broad problem area

As you read about your topic, look for under-explored aspects or areas of concern, conflict, or controversy. Your goal is to find a gap that your research project can fill.

Practical research problems

If you are doing practical research, you can identify a problem by reading reports, following up on previous research, or talking to people who work in the relevant field or organisation. You might look for:

  • Issues with performance or efficiency
  • Processes that could be improved
  • Areas of concern among practitioners
  • Difficulties faced by specific groups of people

Examples of practical research problems

Voter turnout in New England has been decreasing, in contrast to the rest of the country.

The HR department of a local chain of restaurants has a high staff turnover rate.

A non-profit organisation faces a funding gap that means some of its programs will have to be cut.

Theoretical research problems

If you are doing theoretical research, you can identify a research problem by reading existing research, theory, and debates on your topic to find a gap in what is currently known about it. You might look for:

  • A phenomenon or context that has not been closely studied
  • A contradiction between two or more perspectives
  • A situation or relationship that is not well understood
  • A troubling question that has yet to be resolved

Examples of theoretical research problems

The effects of long-term Vitamin D deficiency on cardiovascular health are not well understood.

The relationship between gender, race, and income inequality has yet to be closely studied in the context of the millennial gig economy.

Historians of Scottish nationalism disagree about the role of the British Empire in the development of Scotland’s national identity.

Step 2: Learn more about the problem

Next, you have to find out what is already known about the problem, and pinpoint the exact aspect that your research will address.

Context and background

  • Who does the problem affect?
  • Is it a newly-discovered problem, or a well-established one?
  • What research has already been done?
  • What, if any, solutions have been proposed?
  • What are the current debates about the problem? What is missing from these debates?

Specificity and relevance

  • What particular place, time, and/or group of people will you focus on?
  • What aspects will you not be able to tackle?
  • What will the consequences be if the problem is not resolved?

Example of a specific research problem

A local non-profit organisation focused on alleviating food insecurity has always fundraised from its existing support base. It lacks understanding of how best to target potential new donors. To be able to continue its work, the organisation requires research into more effective fundraising strategies.

Once you have narrowed down your research problem, the next step is to formulate a problem statement , as well as your research questions or hypotheses .

Frequently asked questions about research problems

Once you’ve decided on your research objectives, you need to explain them in your paper, at the end of your problem statement.

Keep your research objectives clear and concise, and use appropriate verbs to accurately convey the work that you will carry out for each one.

I will compare …

The way you present your research problem in your introduction varies depending on the nature of your research paper . A research paper that presents a sustained argument will usually encapsulate this argument in a thesis statement .

A research paper designed to present the results of empirical research tends to present a research question that it seeks to answer. It may also include a hypothesis – a prediction that will be confirmed or disproved by your research.

Research objectives describe what you intend your research project to accomplish.

They summarise the approach and purpose of the project and help to focus your research.

Your objectives should appear in the introduction of your research paper , at the end of your problem statement .

Cite this Scribbr article


McCombes, S. & George, T. (2022, November 08). How to Define a Research Problem | Ideas & Examples. Scribbr. Retrieved 3 September 2024, from https://www.scribbr.co.uk/the-research-process/define-research-problem/


Developing Surveys on Questionable Research Practices: Four Challenging Design Problems

  • Open access
  • Published: 02 September 2024


  • Christian Berggren   ORCID: orcid.org/0000-0002-4233-5138 1 ,
  • Bengt Gerdin   ORCID: orcid.org/0000-0001-8360-5387 2 &
  • Solmaz Filiz Karabag   ORCID: orcid.org/0000-0002-3863-1073 1 , 3  

Abstract

The exposure of scientific scandals and the increase of dubious research practices have generated a stream of studies on Questionable Research Practices (QRPs), such as failure to acknowledge co-authors, selective presentation of findings, or removal of data not supporting desired outcomes. In contrast to high-profile fraud cases, QRPs can be investigated using quantitative, survey-based methods. However, several design issues remain to be solved. This paper starts with a review of four problems in the QRP research: the problem of precision and prevalence, the problem of social desirability bias, the problem of incomplete coverage, and the problem of controversiality, sensitivity and missing responses. Various ways to handle these problems are discussed based on a case study of the design of a large, cross-field QRP survey in the social and medical sciences in Sweden. The paper describes the key steps in the design process, including technical and cognitive testing and repeated test versions to arrive at reliable survey items on the prevalence of QRPs and hypothesized associated factors in the organizational and normative environments. Partial solutions to the four problems are assessed, unresolved issues are discussed, and tradeoffs that resist simple solutions are articulated. The paper ends with a call for systematic comparisons of survey designs and item quality to build a much-needed cumulative knowledge trajectory in the field of integrity studies.


Introduction

The public revelations of research fraud and non-replicable findings (Berggren & Karabag, 2019 ; Levelt et al., 2012 ; Nosek et al., 2022 ) have created a lively interest in studying research integrity. Most studies in this field tend to focus on questionable research practices, QRPs, rather than blatant fraud, which is less common and hard to study with rigorous methods (Butler et al., 2017 ). Despite the significant contributions of this research about the incidence of QRPs in various countries and contexts, several issues still need to be addressed regarding the challenges of designing precise and valid survey instruments and achieving satisfactory response rates in this sensitive area. While studies in management (Hinkin, 1998 ; Lietz, 2010 ), behavioral sciences, psychology (Breakwell et al., 2020 ), sociology (Brenner, 2020 ), and education (Hill et al., 2022 ) have provided guidelines to design surveys, they rarely discuss how to develop, test, and use surveys targeting sensitive and controversial issues such as organizational or individual corruption (Lin & Yu, 2020 ), fraud (Lawlor et al., 2021 ), and misconduct. The aim of this study is to contribute to a systematic discussion of challenges facing survey designers in these areas and, by way of a detailed case study, highlight alternative ways to increase participation and reliability of surveys focusing on questionable research practices, scientific norms, and organizational climate.

The following section starts with a literature-based review of four important problems:

the lack of conceptual consensus and precise measurements,

the problem of social desirability bias.

the difficulty of covering both quantitative and qualitative research fields.

the problem of controversiality and sensitivity.

Section 3 presents an in-depth case study of developing and implementing a survey on QRPs in the social and medical sciences in Sweden in 2018–2021, designed to target these problems. Its first results were presented in this journal (Karabag et al., 2024). The section also describes the development process and the survey content, and highlights the general design challenges. Section 4 returns to the four problems by discussing partial solutions, difficult tradeoffs, and remaining issues.

Four Design Problems in the Study of Questionable Research Practices

Extant QRP studies have generated an impressive body of knowledge regarding the occurrence and complexities of questionable practices, their increasing trend in several academic fields, and the difficulty of mitigating them with conventional interventions such as ethics courses and espousal of integrity policies (Gopalakrishna et al., 2022 ; Karabag et al., 2024 ; Necker, 2014 ). However, investigations on the prevalence of QRPs have so far lacked systematic problem analysis. Below, four main problems are discussed.

The Problem of Conceptual Clarity and Measurement Precision

Studies of QRP prevalence in the literature exhibit high levels of questionable behaviors but also considerable variation in their estimates. This is illustrated in the examples below:

“42% had collected more data after inspecting whether results were statistically significant… and 51% had reported an unexpected finding as though it had been hypothesized from the start (HARKing)” (Fraser et al., 2018, p. 1); “51.3% of respondents engaging frequently in at least one QRP” (Gopalakrishna et al., 2022, p. 1); “…one third of the researchers stated that for the express purpose of supporting hypotheses with statistical significance they engaged in post hoc exclusion of data” (Banks et al., 2016, p. 10).

On a general level, QRPs constitute deviations from the responsible conduct of research, that are not severe enough to be defined as fraud and fabrication (Steneck, 2006 ). Within these borders, there is no conceptual consensus regarding specific forms of QRPs (Bruton et al., 2020 ; Xie et al., 2021 ). This has resulted in a considerable variation in prevalence estimates (Agnoli et al., 2017 ; Artino et al. Jr, 2019 ; Fiedler & Schwarz, 2016 ). Many studies emphasize the role of intentionality, implying a purpose to support a specific assertion with biased evidence (Banks et al., 2016 ). This tends to be backed by reports of malpractices in quantitative research, such as p-hacking or HARKing, where unexpected findings or results from an exploratory analysis are reported as having been predicted from the start (Andrade, 2021 ). Other QRP studies, however, build on another, often implicit conceptual definition and include practices that could instead be defined as sloppy or under-resourced research, e.g. insufficient attention to equipment, deficient supervision of junior co-workers, inadequate note-keeping of the research process, or use of inappropriate research designs (Gopalakrishna et al., 2022 ). Alternatively, those studies include behaviors such as “Fashion-determined choice of research topic”, “Instrumental and marketable approach”, and “Overselling methods, data or results” (Ravn & Sørensen, 2021 , p. 30; Vermeulen & Hartmann, 2015 ) which may be opportunistic or survivalist but not necessarily involve intentions to mislead.

To shed light on the prevalence of QRPs in different environments, the first step is to conceptualize and delimit the practices to be considered. The next step is to operationalize the conceptual approach into useful indicators and, if needed, to reformulate and reword the indicators into unambiguous, easily understood items (Hinkin, 1995, 1998). The importance of careful item design has been demonstrated by Fiedler and Schwarz (2016). They show how the perceived QRP prevalence changes by adding specifications to well-known QRP items. Such specifications include: “failing to report all dependent measures that are relevant for a finding”, “selectively reporting studies related to a specific finding that ‘worked’” (Fiedler & Schwarz, 2016, p. 46, italics in original), or “collecting more data after seeing whether results were significant in order to render non-significant results significant” (Fiedler & Schwarz, 2016, p. 49, italics in original). These specifications demonstrate the importance of precision in item design, the need for item tests before applications in a large-scale survey, and, as the case study in Sect. 3 indicates, the value of statistically analyzing the selected items post-implementation.

The Problem of Social Desirability

Case studies of publicly exposed scientific misconduct have the advantage of explicitness and possible triangulation of sources (Berggren & Karabag, 2019 ; Huistra & Paul, 2022 ). Opinions may be contradictory, but researchers/investigators may often approach a variety of stakeholders and compare oral statements with documents and other sources (Berggren & Karabag, 2019 ). By contrast, quantitative studies of QRPs need to rely on non-public sources in the form of statements and appraisals of survey respondents for the dependent variables and for potentially associated factors such as publication pressure, job insecurity, or competitive climate.

Many QRP surveys use items that target the respondents’ personal attitudes and preferences regarding the dependent variables, indicating QRP prevalence, as well as the explanatory variables. This has the advantage that the respondents presumably know their own preferences and practices. A significant disadvantage, however, concerns social desirability, which in this context means the tendency of respondents to portray themselves, sometimes inadvertently, in more positive ways than justified by their behavior. The extent of this problem was indicated in a meta-study by Fanelli ( 2009 ), which demonstrated major differences between answers to sensitive survey questions that targeted the respondents’ own behavior and questions that focused on the behavior of their colleagues. In the case study below, the pros and cons of the latter indirect approaches are analyzed.

The Problem of Covering Both Quantitative and Qualitative Research

Studies of QRP prevalence are dominated by quantitative research approaches, where there exists a common understanding of the meaning of facts, proper procedures and scientific evidence. Several research fields, also in the social and medical sciences, include qualitative approaches — case studies, interpretive inquiries, or discourse analysis — where assessments of ‘truth’ and ‘evidence’ may be different or more complex to evaluate.

This does not mean that all qualitative endeavors are equal or that deceit, such as presenting fabricated interview quotes or referring to non-existent protocols, is accepted. However, while there are defined criteria for reporting qualitative research, such as the Consolidated Criteria for Reporting Qualitative Research (COREQ) (Tong et al., 2007) or the Standards for Reporting Qualitative Research (SRQR checklist) (O’Brien et al., 2014), the field of qualitative research encompasses a wide range of different approaches. This includes comparative case studies that offer detailed evidence to support their claims, such as the differences between British and Japanese factories (Dore, 1973/2011), as well as discourse analyses and interpretive studies, where the concept of ‘evidence’ is more fluid and hard to apply. The generative richness of the analysis is a key component of their quality (Flick, 2013). This intra-field variation makes it hard to pin down and agree upon general QRP items to capture such behaviors in qualitative research. Some researchers have tried to interpret and report qualitative research by means of quantified methods (Ravn & Sørensen, 2021), but so far, these attempts constitute a marginal phenomenon. Consequently, the challenges of measuring the prevalence of QRPs (or similar issues) in the variegated field of qualitative research remain largely unexplored.

The Problem of Institutional Controversiality and Personal Sensitivity

Science and academia depend on public trust for funding and executing research. This makes investigations of questionable behaviors a controversial issue for universities and may lead to institutional refusal/non-response. This resistance was experienced by the designers of a large-scale survey of norms and practices in the Dutch academia when several universities decided not to take part, referring to the potential danger of negative publicity (de Vrieze, 2021 ). A Flemish survey on academic careers encountered similar participation problems (Aubert Bonn & Pinxten, 2019 ). Another study on universities’ willingness to solicit whistleblowers for participation revealed that university officers, managers, and lawyers tend to feel obligated to protect their institution’s reputation (Byrn et al., 2016 ). Such institutional actors may resist participation to avoid the exposure of potentially negative information about their institutions and management practices, which might damage the university’s brand (Byrn et al., 2016 ; Downes, 2017 ).

QRP surveys also involve questions that are sensitive and potentially intrusive from the respondent’s personal perspective, which can lead to reluctance to participate and to non-response behavior (Roberts & John, 2014; Tourangeau & Yan, 2007). Studies show that willingness to participate declines for surveys covering sensitive issues such as misconduct, crime, and corruption, compared to less sensitive ones like leisure activities (cf. Tourangeau et al., 2010). The method of survey administration (whether face-to-face, over the phone, via the web, or paper-based) can influence the perceived sensitivity and response rate (Siewert & Udani, 2016; Szolnoki & Hoffmann, 2013). In the case study below, the survey did not require any institutional support. Instead, the designers focused on minimizing the individual sensitivity problem by avoiding questions about the respondents’ personal practices and concentrating on colleagues’ behaviors (see Sect. 4.2). Even if respondents agree to participate, they may not answer the QRP items due to insufficient knowledge about their colleagues’ practices or a lack of motivation to answer critical questions about them (Beatty & Herrmann, 2002; Yan & Curtin, 2010). Additionally, a significant time gap between observing specific QRPs in the respondent’s research environment and receiving the survey may make it difficult to recall and accurately respond to the questions. Such issues may also result in non-response problems.

Addressing the Problems: Case Study of a Cross-Field QRP Survey – Design Process, Survey Content, Design Challenges

This section presents a case study of the way these four problems were addressed in a cross-field survey intended to capture QRP prevalence and associated factors across the social and medical sciences in Sweden. The account is based on the authors’ intensive involvement in the design and analysis of the survey, including the technical and cognitive testing and the post-implementation analysis of item quality, missing responses, and open respondent comments. The theoretical background and the substantive results of the study are presented in a separate paper (Karabag et al., 2024). Method and language experts at Statistics Sweden, a government agency responsible for public statistics in Sweden, supported the testing procedures and the stratified respondent sampling, and administered the survey roll-out.

The Survey Design Process – Repeated Testing and Prototyping

The design process included four steps of testing, revising, and prototyping, which allowed the researchers to iteratively improve the survey and plan the roll-out.

Step 1: Development of the Baseline Survey

This step involved searching the literature and creating a list of alternative constructs concerning the key concepts in the planned survey. Based on the study’s aim, the first and third authors compared these constructs and examined how they had been itemized in the literature. After two rounds of discussions, they agreed on construct formulations and relevant ways to measure them, rephrased items if deemed necessary, and designed new items in areas where the extant literature did not provide any guidance. In this way, Survey Version 1 was compiled.

Step 2: Pre-Testing by Means of a Large Convenience Sample

In the second step, this survey version was reviewed by two experts in organizational behavior at Linköping University. This review led to minor adjustments and the creation of Survey Version 2 , which was used for a major pretest. The aim was both to check the quality of individual items and to garner enough responses for a factor analysis that could be used to build a preliminary theoretical model. This dual aim required a larger sample than suggested in the literature on pretesting (Perneger et al., 2015 ). At the same time, it was essential to minimize the contamination of the planned target population in Sweden. To accomplish this, the authors used their access to a community of organization scholars to administer Survey Version 2 to 200 European management researchers.

This mass pre-testing yielded 163 responses. The data were used to form preliminary factor structures and test a structural equation model. Feedback from a few of the respondents highlighted conceptual issues and duplicated questions. Survey Version 3 was developed and prepared for detailed pretesting based on this feedback.

Step 3: Focused Pre-Testing and Technical Assessment

This step focused on the pre-testing and technical assessment. The participants in this step’s pretesting were ten researchers (six in the social sciences and four in the medical sciences) at five Swedish universities: Linköping, Uppsala, Gothenburg, Gävle, and Stockholm School of Economics. Five of those researchers mainly used qualitative research methods, two used both qualitative and quantitative methods, and three used quantitative methods. In addition, Statistics Sweden conducted a technical assessment of the survey items, focusing on wording, sequence, and response options. Based on feedback from the ten pretest participants and the Statistics Sweden assessment, Survey Version 4 was developed, translated into Swedish, and reviewed by two researchers with expertise in research ethics and scientific misconduct.

It should be highlighted that Swedish academia is predominantly bilingual. While most researchers have Swedish as their mother tongue, many are more proficient in English, and a minority have limited or no knowledge of Swedish. During the design process, the two language versions were compared item by item and slightly adjusted by skilled bilingual researchers. This task was relatively straightforward since most items and concepts were derived from previously published literature in English. Notably, the Swedish versions of key terms and concepts have long been utilized within Swedish academia (see for example Berggren, 2016 ; Hasselberg, 2012 ). To secure translation quality, the language was controlled by a language expert at Statistics Sweden.

Step 4: Cognitive Interviews by Survey and Measurement Experts

Next, cognitive interviews (Willis, 2004 ) were organized with eight researchers from the social and medical sciences and conducted by an expert from Statistics Sweden (Wallenborg Likidis, 2019 ). The participants included four women and four men, ranging in age from 30 to 60. They were two doctoral students, two lecturers, and four professors, representing five different universities and colleges. Additionally, two participants had a non-Nordic background. To ensure confidentiality, no connections are provided between these characteristics and the individual participants.

An effort was made to achieve a balanced distribution of gender, age, subject, employment, and institution. Four social science researchers primarily used qualitative research methods, while the remaining four employed both qualitative and quantitative methods. Additionally, four respondents completed the Swedish version of the survey, and four completed the English version.

The respondents completed the survey in the presence of a methods expert from Statistics Sweden, who observed their entire response process. The expert noted spontaneous reactions and recorded instances where respondents hesitated or struggled to understand an item. After the survey, the expert conducted a structured interview with all eight participants, addressing details in each section of the survey, including the missive for recruiting respondents. Some respondents provided oral feedback while reading the cover letter and answering the questions, while others offered feedback during the subsequent interview.

During the cognitive interview process, the methods expert continuously communicated suggestions for improvements to the design team. A detailed test protocol confirmed that most items were sufficiently strong, although a few required minor modifications. The research team then finalized Survey Version 5, which included both English and Swedish versions (for the complete survey, see Supplementary Material S1).

Although the test successfully captured a diverse range of participants, it would have been desirable to conduct additional tests of the English survey with more non-Nordic participants; as it stands, only one such test was conducted. Despite the participants’ different approaches to completing the survey, the estimated time to complete it was approximately 15–20 min. No significant time difference was observed between completing the survey in Swedish and English.

Design Challenges – the Dearth of an Item-Specific Public Quality Discussion

The design decision to employ survey items from the relevant literature as much as possible was motivated by a desire to increase comparability with previous studies of questionable research practices. However, this approach came with several challenges. Survey-based studies of QRPs rely on the respondents’ subjective assessments, with no possibility to compare the answers with other sources. Thus, an open discussion of survey problems would be highly valuable. However, although published studies usually present the items used in the surveys, there is seldom any analysis of the problems and tradeoffs involved in using a particular type of item or response format, and the studies provide meager information about item validity. Few studies, for example, contain any analysis clarifying which items measured the targeted variables with sufficient precision and which failed to do so.

Another challenge when using existing survey studies is the lack of information regarding the respondents’ free-text comments about the survey’s content and quality. This could be because the survey did not contain any open questions or because the authors of the report could not statistically analyze the answers. As seen below, however, open respondent feedback on a questionnaire involving sensitive or controversial aspects may reveal problems that did not surface during the pretest process, which by necessity targets much smaller samples.

Survey Content

The survey started with questions about the respondent’s current employment and research environment. It ended with background questions on the respondents’ positions and the extent of their research activity, plus space for open comments about the survey. The core content of the survey consisted of sections on the organizational climate (15 items), scientific norms (13 items), good and questionable research practices (16 items), perceptions of fairness in the academic system (4 items), motivation for conducting research (8 items), and ethics training and policies (5 items), as well as questions on the quality of the research environment and the respondent’s perceived job security.

Sample and Response Rate

All researchers, teachers, and Ph.D. students employed at Swedish universities are registered by Statistics Sweden. To ensure balanced representation and perspectives from both large universities and smaller university colleges, the institutions were divided into three strata based on the number of researchers, teachers, and Ph.D. students: more than 1,000 individuals (7 universities and university colleges), 500–999 individuals (3 institutions), and fewer than 500 individuals (29 institutions). From these strata, Statistics Sweden randomly sampled 35%, 45%, and 50% of the relevant employees, respectively, resulting in a sample of 10,047 individuals. After coverage analysis and the exclusion of wrongly included individuals, 9,626 remained.
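
The stratified design can be sketched as follows. The stratum thresholds and sampling fractions come from the text, but the stratum population counts are hypothetical placeholders (only the resulting sample of 10,047 individuals is reported), so this is an illustration of the procedure rather than a reconstruction of Statistics Sweden’s actual draw.

```python
# Illustrative sketch of stratified sampling with stratum-specific fractions.
# Population counts per stratum are hypothetical; thresholds and fractions
# follow the description in the text.
import random

strata = {
    "large (>1,000 employees, 7 institutions)":   (18_000, 0.35),  # hypothetical count
    "medium (500-999 employees, 3 institutions)": (2_400, 0.45),   # hypothetical count
    "small (<500 employees, 29 institutions)":    (5_000, 0.50),   # hypothetical count
}

def draw_stratified_sample(strata, seed=1):
    """Draw a simple random sample within each stratum and pool the results."""
    rng = random.Random(seed)
    sample = []
    for name, (population_size, fraction) in strata.items():
        ids = range(population_size)           # stand-ins for employee records
        n = round(population_size * fraction)  # stratum-specific sample size
        sample.extend((name, i) for i in rng.sample(ids, n))
    return sample

sample = draw_stratified_sample(strata)
print(len(sample))  # pooled sample size; roughly 10,000 with these placeholder counts
```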

The selected individuals received a personal postal letter with a missive in both English and Swedish informing them about the project and the survey and notifying them that they could respond on paper or online. The online version provided the option to answer in either English or Swedish. The paper version was available only in English to reduce the cost of production and posting. The missive provided the recipients with comprehensive information about the study and what their involvement would entail. It emphasized the voluntary character of participation and their right to withdraw from the survey at any time, adding: “If you do not want to answer the questions, we kindly ask you to contact us. Then you will not receive any reminders.” Sixty-three individuals used this decline option. In line with standard Statistics Sweden procedures, survey completion implied consent to participate and to the publication of anonymized results and indicated the participants’ understanding of the terms provided (Duncan & Cheng, 2021). An email address was provided for respondents to request study outputs or for any other reason. The survey was open for data collection for two months, during which two reminders were sent to non-responders who had not opted out.

Once Statistics Sweden had collected the answers, they were anonymized and used to generate data files delivered to the authors. Statistics Sweden also provided anonymized information about the age, gender, and type of employment of each respondent in the dataset delivered to the researchers. Of the targeted individuals, 3,295 responded, amounting to an overall response rate of 34.2%. An analysis of missing value patterns revealed that 290 of the respondents either lacked data for an entire factor or had too many missing values dispersed over several survey sections. After removing these 290 responses, we used SPSS algorithms (IBM-SPSS Statistics 27) to analyze the remaining missing values, which were randomly distributed and constituted less than 5% of the data. These values were replaced using the program’s imputation routine (Madley-Dowd et al., 2019). The final dataset consisted of 3,005 individuals, evenly distributed between female and male respondents (53.5% vs. 46.5%) and medical and social scientists (51.3% vs. 48.5%). An overview of the sample and the response rate is provided in Table 1, which can also be found in Karabag et al. (2024). As shown in Table 1, the proportions of male and female respondents and of respondents from the medical and social sciences, as well as the age distribution of the respondents, compared well with the original selection frame from Statistics Sweden.
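
As a rough illustration of this screening-and-imputation step, the sketch below assumes a pandas DataFrame with one row per respondent and one column per survey item. The actual analysis was carried out in SPSS; the 20% cut-off for “too many missing values”, the mean-imputation strategy, and all names are assumptions made for the example, not the authors’ documented settings.

```python
# Minimal sketch (not the authors' SPSS procedure): screen out respondents with
# excessive missing data, then impute the remaining sparse gaps.
import pandas as pd
from sklearn.impute import SimpleImputer

def clean_and_impute(responses: pd.DataFrame, max_missing_share: float = 0.2) -> pd.DataFrame:
    # Response rate against the adjusted frame of 9,626 individuals
    # (3,295 / 9,626 gives the reported 34.2%).
    print(f"Response rate: {len(responses) / 9_626:.1%}")

    # Drop respondents whose share of missing item answers exceeds the threshold.
    missing_share = responses.isna().mean(axis=1)
    kept = responses.loc[missing_share <= max_missing_share].copy()

    # Impute the remaining gaps (assumed sparse and random), e.g. with item means;
    # assumes every item column is numeric and has at least some observed values.
    imputer = SimpleImputer(strategy="mean")
    kept.iloc[:, :] = imputer.fit_transform(kept)
    return kept
```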

Revisiting the Four Problems: Partial Solutions and Remaining Issues

Managing the Precision Problem - the Value of Factor Analyses

As noted above, the lack of conceptual consensus and standard ways to measure QRPs has resulted in a huge variation in estimated prevalence. In the case studied here, the purpose was to investigate deviations from research integrity and not low-quality research in general. This conceptual focus implied that the selected survey items regarding QRPs should build on the core aspect of intention, as suggested by Banks et al. (2016, p. 323): “design, analytic, or reporting practices that have been questioned because of the potential for the practice to be employed with the purpose of presenting biased evidence in favor of an assertion”. After scrutinizing the literature, five items were selected as general indicators of QRP, irrespective of the research approach (see Table 2).

An analysis of the survey responses indicated that the general QRP indicators worked well in terms of understandability and precision. Considering the sensitive nature of the items, a feature that typically yields very high rates of missing data (Fanelli, 2009; Tourangeau & Yan, 2007), our missing rates of 11–21% must be considered modest. In addition, there were only a few critical comments on the item formulation in the open response section at the end of the survey (see below).

Regarding the explanatory (independent) variables, the survey was inspired by studies showing the importance of the organizational climate and the normative environment within academia (Anderson et al., 2010). Organizational climate can be measured in several ways; the studied survey focused on items related to a collegial versus a competitive climate. The analysis of the normative environment was inspired by the classical norms of science articulated by Robert Merton in his CUDOS framework: communism (communalism), universalism, disinterestedness, and organized skepticism (Merton, 1942/1973). This framework has been extensively discussed and challenged but remains a key reference (Anderson et al., 2010; Chalmers & Glasziou, 2009; Kim & Kim, 2018; Macfarlane & Cheng, 2008). Moreover, we were inspired by Merton’s later work on the ambivalence and ambiguities of scientists (Merton, 1942/1973) and by the counter norms suggested by Mitroff (1974). Thus, the survey involved a composite set of items to capture the contradictory normative environment in academia: the classical norms as well as their counter norms.

To reduce the problems of social desirability bias and personal sensitivity, the survey design avoided items about the respondent’s personal adherence to explicit ideals, which are common in many surveys (Gopalakrishna et al., 2022). Instead, the studied survey focused on the normative preferences and attitudes within the respondent’s environment. This necessitated the identification, selection, and refinement of 3–4 items for each potentially relevant norm/counter-norm. The selection process drew on previous studies of norm subscription in various research communities (Anderson et al., 2007; Braxton, 1993; Bray & von Storch, 2017). For the norm “skepticism”, we consulted studies in the accounting literature on the three key elements of professional skepticism: a questioning mind, suspension of judgment, and the search for knowledge (Hurtt, 2010).
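
The paper does not report reliability coefficients for these 3–4 item scales, but a common complementary check in scale construction is Cronbach’s alpha. The sketch below shows the standard computation for a hypothetical DataFrame holding the items of a single norm scale; it illustrates the technique and is not part of the authors’ reported procedure.

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / variance of summed scale).
import pandas as pd

def cronbach_alpha(scale_items: pd.DataFrame) -> float:
    k = scale_items.shape[1]                                # number of items in the scale
    item_variances = scale_items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_variance = scale_items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances / total_variance)
```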

The first analytical step after receiving the completed survey set from Statistics Sweden was to conduct a set of factor analyses to assess the quality and validity of the survey items related to the normative environment and the organizational climate. These analyses suggested three clearly identifiable factors related to the normative environment: (1) a counter norm factor combining Mitroff’s particularism and dogmatism (‘Biasedness’ in the further analysis), and two Mertonian factors: (2) Skepticism and (3) Openness, a variant of Merton’s Communalism (see Table  3 ). A fourth Merton factor, Disinterestedness, could not be identified in our analysis.
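
A minimal sketch of such an exploratory factor-analytic step is given below. The software actually used is not specified in this section, so scikit-learn serves as an illustrative stand-in; the DataFrame name, the assumption of complete numeric item columns, and the varimax rotation are choices made for the example.

```python
# Exploratory factor analysis of the norm items, extracting three factors as in the text
# (Biasedness, Skepticism, Openness) and inspecting the item loadings.
import pandas as pd
from sklearn.decomposition import FactorAnalysis

def norm_factor_loadings(norm_items: pd.DataFrame, n_factors: int = 3) -> pd.DataFrame:
    # Varimax rotation (available in scikit-learn >= 0.24) eases interpretation.
    fa = FactorAnalysis(n_components=n_factors, rotation="varimax", random_state=0)
    fa.fit(norm_items)
    return pd.DataFrame(
        fa.components_.T,  # items x factors loading matrix
        index=norm_items.columns,
        columns=[f"Factor {i + 1}" for i in range(n_factors)],
    )
```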

The analytical process for organizational climate involved reducing the number of items from 15 to 11 (see Table 4 ). Here, the factor analysis suggested two clearly identifiable factors, one related to collegiality and the other related to competition (see Table  4 ). Overall, the factor analyses suggested that the design efforts had paid off in terms of high item quality, robust factor loadings, and a very limited need to remove any items.

In a parallel step, the open comments were assessed as an indication of how the study was perceived by the respondents (see Table 5). Of the 3,005 respondents, 622 provided comprehensible comments, many of them extensive: 187 comments related to the respondents’ own employment or role, 120 to their working conditions and research environment, and 98 to the academic environment and atmosphere. Problems in knowing the details of collegial practices were mentioned in 82 comments.

Reducing Desirability Bias - the Challenge of Nonresponse

It is well established that studies on topics where the respondent has anything embarrassing or sensitive to report suffer from more missing responses than studies on neutral subjects, and that respondents may edit the information they provide on sensitive topics (Tourangeau & Yan, 2007). Such a social desirability bias applies to QRP studies that explicitly target the respondents’ personal attitudes and behaviors. To reduce this problem, the studied survey applied a non-self format focusing on the behaviors and preferences of the respondents’ colleagues. Relevant survey items from published studies were rephrased from self-format designs to non-self questions about practices in the respondent’s environment, using the format “In my research environment, colleagues…”, followed by a five-step incremental response scale from “(1) never” to “(5) always”. In a similar way, the survey avoided “should” statements about ideal normative values, such as “Scientists and scholars should critically examine…”. Instead, the survey used items intended to indicate the revealed preferences in the respondent’s normative environment regarding universalism versus particularism or openness versus secrecy.

As indicated by Fanelli (2009), these redesign efforts probably reduced the social desirability bias significantly. At the same time, however, the redesign seemed to aggravate a problem not discussed by Fanelli (2009): uncertainty related to the respondents’ difficulty in knowing their colleagues’ practices in questionable areas. This issue was indicated by the open comments at the end of the studied survey, where 13% of the 622 commenting respondents pointed out that they lacked sufficient knowledge about the behavior of their colleagues to answer the QRP questions (see Table 5). One respondent wrote:

“It’s difficult to answer questions about ‘colleagues in my research area’ because I don’t have an insight into their research practices; I can only make informed guesses and generalizations. Therefore, I am forced to answer ‘don’t know’ to a lot of questions”.

Regarding the questions on general QRPs, the rate of missing responses varied between 11% and 21%. As for the questions targeting specific QRP practices in quantitative and qualitative research, the rate of missing responses ranged from 38% to 49%. Unfortunately, the non-response alternative to these questions (“Don’t know/not relevant”) combined two issues: lack of knowledge and lack of relevance. Thus, we cannot tell what share of the missing responses reflected the absence of the specific research approach in the respondent’s environment and what share signaled a lack of knowledge about collegial practices in that environment.
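
For illustration, item-level non-response rates of this kind can be computed as in the sketch below, which assumes a DataFrame in which the combined “Don’t know/not relevant” option is stored under a single code; the code value and the DataFrame layout are assumptions, since the actual coding scheme is not reported here.

```python
# Per-item non-response rates, counting both blank answers and the combined
# "Don't know/not relevant" option; the two sources cannot be separated afterwards,
# which is exactly the limitation discussed in the text.
import pandas as pd

def item_nonresponse_rates(qrp_items: pd.DataFrame, dk_code: str = "DK") -> pd.Series:
    missing_or_dk = qrp_items.isna() | qrp_items.eq(dk_code)
    return missing_or_dk.mean().sort_values(ascending=False)  # rates between 0 and 1
```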

Measuring QRPs in Qualitative Research - the Limited Role of Pretests

Studies of QRP prevalence mainly focus on quantitative research approaches, where there is a common understanding of how scientific evidence is interpreted, clearly recommended procedures, and established QRP items related to compliance with these procedures. In the heterogeneous field of qualitative research, there are several established standards for reporting the research (O’Brien et al., 2014; Tong et al., 2007) but, as noted above, hardly any commonly accepted survey items that capture behaviors fulfilling the criteria for QRPs. As a result, the studied project had to design such items from scratch during the survey development process. After technical and cognitive tests, four items were selected (see Table 6).

Despite the series of pretests, however, the first two of these items drew severe criticism from a few respondents in the survey’s open commentary section. Here, qualitative researchers argued that the items were unduly influenced by the truth claims of quantitative studies, whereas their research dealt with interpretation and discourse analysis. Thus, they rejected the items regarding the selective use of respondents and of interview quotes as indicators of questionable practices:

“The alternative regarding using quotes is a bit misleading. Supporting your results by quotes is a way to strengthen credibility in a qualitative method…”

“The question about dubious practices is off target for us, who work with interpretation rather than solid truths. You can present new interpretations, but normally that does not imply that previous ‘findings’ should be considered incorrect.”

“The questions regarding qualitative research were somewhat irrelevant. Often this research is not guided by a given hypothesis, and researchers may use a convenient sample without this resulting in lower quality.”

One comment focused on other problems related to qualitative research:

“Several questions do not quite capture the ethical dilemmas we wrestle with. For example, is the issue of dishonesty and ‘inaccuracies’ a little misplaced for us who work with interpretation? …At the same time, we have a lot of ethical discussions, which, for example, deal with power relations between researchers and ‘researched’, participant observation/informal contacts and informed consent (rather than patients participating in a study)”.

Unfortunately, the survey received these comments and criticism only after the full-scale rollout and not during the pretest rounds. Thus, we had no chance to replace the contested items with other formulations or contemplate a differentiation of the subsection to target specific types of qualitative research with appropriate questions. Instead, we had to limit the post-roll-out survey analysis to the last two items in Table  6 , although they captured devious behaviors rather than gray zone practices.

Why, then, was this criticism of the QRP items related to qualitative research not exposed in the pretest phase? This question is also relevant for future survey designers. An intuitive answer could be that the research team only involved quantitative researchers. However, as highlighted above, the pretest participants varied in their research methods: some exclusively used qualitative methods, others employed mixed methods, and some utilized quantitative methods. This diversity suggests that the selection of test participants was appropriate. Moreover, all three members of the research team had experience of both quantitative and qualitative studies. However, as discussed above, the field of qualitative research involves several different types of research, with different goals and methods – from detailed case studies grounded in original empirical fieldwork to participant observations of complex organizational phenomena to discursive re-interpretations of previous studies.

Of the 3,005 respondents who answered the survey in a satisfactory way, only 16, or 0.5%, had any critical comments about the QRP items related to qualitative research. A failure to capture the objections of such a small proportion in a pretest phase is hardly surprising. The general problem could be compared with the challenge of detecting negative side effects in drug development. Although pharmaceutical firms conduct large-scale tests of candidate drugs before government approval, doctors nevertheless detect new side effects when the medicine is rolled out to significantly more people than the test populations – and report these less frequent problems in the additional drug information (Galeano et al., 2020; McNeil et al., 2010).

In the social sciences, the purpose of pre-testing is to identify problems related to ambiguities and bias in item formulation and survey format and initiate a search for relevant solutions. A pre-test on a small, selected subsample cannot guarantee that all respondent problems during the full-scale data collection will be detected. The pretest aims to reduce errors to acceptable levels and ensure that the respondents will understand the language and terminology chosen. Pretesting in survey development is also essential to help the researchers to assess the overall flow and structure of the survey, and to make necessary adjustments to enhance respondent engagement and data quality (Ikart, 2019 ; Presser & Blair, 1994 ).

In our view, more pretests would hardly solve the epistemological challenge of formulating generally acceptable QRP items for qualitative research. The open comments studied here suggest that there is no one-size-fits-all solution. If this is right, the problem should rather be reformulated as one of identifying different strands of qualitative research, with diverse views of integrity and evidence, that need to be captured with different sets of items. Addressing this challenge in a comprehensive way, however, goes far beyond the current study.

Controversiality and Collegial Sensitivity - the Challenge of Predicting Nonresponse

Studies of research integrity, questionable research practices, and misconduct in science tend to be organizationally controversial and personally sensitive. If university leaders are asked to support such studies, there is a considerable risk that the answer will be negative. In the case studied here, the survey roll-out was not dependent on any active organizational participation, since Statistics Sweden possessed all relevant respondent information in-house. This, we assumed, would take the controversiality problem off the agenda. Our belief was supported by the absence of complaints about a potential negativity bias among the pretest participants. Instead, the problem surfaced when the survey was rolled out and the full sample of respondents engaged with it. The open comment section at the end of the survey provided insights into this reception.

Many respondents provided positive feedback, reflected in 30 different comments such as:

“Thank you for doing this survey. I really hope it will lead to changes because it is needed”.

“This is an important survey. However, there are conflicting norms, such as those you cite in the survey, /concerning/ for example, data protection. How are researchers supposed to be open when we cannot share data for re-analysis?”

“I am glad that the problems with egoism and non-collegiality are addressed in this manner”.

Several of them asked for more critical questions regarding power, self-interest, and leadership:

“What I lack in the survey were items regarding academic leadership. Otherwise, I am happy that someone is doing research on these issues”.

“A good survey but needs to be complemented with questions regarding researchers who put their commercial interests above research and exploit academic grants for commercial purposes”.

A small minority criticized the survey for being overly negative towards academia:

“A major part of the survey feels very negative and /conveys/ the impression that you have a strong pre-understanding of academia as a horrible environment”.

“Some of the questions are uncomfortable and downright suggestive. Why such a negative attitude towards research?”

“The questions have a tendency to make us /the respondents/ informers. An unpleasant feeling when you are supposed to lay information against your university”.

“Many questions are hard to answer, and I feel that they measure my degree of suspicion against my closest colleagues and their motivation … Several questions I did not want to answer since they contain a negative interpretation of behaviors which I don’t consider as automatically negative”.

A few of these respondents stated that they abstained from answering some of the ‘negative questions’ since they did not want to report on or slander their colleagues. The general impact is hard to assess. Only 20% of the respondents offered open survey comments, and only seven argued that the questions were “negative”. This small number explains why the issue of negativity did not show up during the testing process. However, a perceived sense of negativity may have affected the willingness to answer among more respondents than those who provided free-text comments.

Conclusion - The Need for a Cumulative Knowledge Trajectory in Integrity Studies

In the broad field of research integrity studies, investigations of QRPs in different contexts and countries play an important role. The comparability of the results, however, depends on the conceptual focus of the survey design and the quality of the survey items. This paper starts with a discussion of four common problems in QRP research: the problems of precision, social desirability, incomplete coverage, and organizational controversiality and sensitivity. This is followed by a case study of how these problems were addressed in a detailed survey design process. An assessment of the solutions employed in the studied survey design reveals progress as well as unresolved issues.

Overall, the paper shows that the problem of precision could be effectively managed through explicit conceptual definitions and careful item design.

The problem of social desirability bias was probably reduced by means of a non-self response format referring to preferences and behaviors among colleagues instead of personal behaviors. However, an investigation of open respondent comments indicated that the reduced risk of social desirability bias came at the expense of higher uncertainty due to the respondents’ lack of insight into the concrete practices of their colleagues.

The authors initially linked the problem of incomplete coverage of QRPs in qualitative research to “the lack of standard items” for capturing QRPs in qualitative studies. Open comments at the end of the survey, however, suggested that the lack of such standards would not be easily remedied by the design of new items. Rather, it seems to be an epistemological challenge related to the multifarious nature of the qualitative research field, where the understanding of ‘evidence’ is unproblematic in some qualitative sub-fields but contested in others. This conjecture and other possible explanations will hopefully be addressed in forthcoming epistemological and empirical studies.

Regarding the problem of controversiality and sensitivity, previous studies show that QRP research is a controversial and sensitive area for academic executives and university brand managers. The case study discussed here indicates that this is a sensitive subject also for rank-and-file researchers who may hesitate to answer, even when the questions do not target the respondents’ own practices but the practices and preferences of their colleagues. Future survey designers may need to engage in framing, presenting, and balancing sensitive items to reduce respondent suspicions and minimize the rate of missing responses. Reflections on the case indicate that this is doable but requires thoughtful design, as well as repeated tests, including feedback from a broad selection of prospective participants.

In conclusion, the paper suggests that more resources should be spent on the systematic evaluation of different survey designs and item formulations. In the long term, such investments in method development will yield a higher proportion of robust and comparable studies. This would mitigate the problems discussed here and contribute to the creation of a much-needed cumulative knowledge trajectory in research integrity studies.

An issue not covered here is that surveys, however finely developed, only give quantitative information about patterns, behaviors, and structures. An understanding of underlying thoughts and perspectives requires other procedures. Thus, methods that integrate and triangulate qualitative and quantitative data, known as mixed methods (Karabag & Berggren, 2016; Ordu & Yılmaz, 2024; Smajic et al., 2022), may give a deeper and more complete picture of the phenomenon of QRP.

Data Availability

The data supporting the findings of this study are available from the corresponding author, upon reasonable request.


Agnoli, F., Wicherts, J. M., Veldkamp, C. L., Albiero, P., & Cubelli, R. (2017). Questionable research practices among Italian research psychologists. PLoS One , 12(3), e0172792.

Anderson, M. S., Ronning, E. A., De Vries, R., & Martinson, B. C. (2007). The perverse effects of competition on scientists’ work and relationships. Science and Engineering Ethics , 13 , 437–461.


Anderson, M. S., Ronning, E. A., Devries, R., & Martinson, B. C. (2010). Extending the Mertonian norms: Scientists’ subscription to norms of Research. The Journal of Higher Education , 81 (3), 366–393. https://doi.org/10.1353/jhe.0.0095

Andrade, C. (2021). HARKing, cherry-picking, p-hacking, fishing expeditions, and data dredging and mining as questionable research practices. The Journal of Clinical Psychiatry , 82 (1), 25941.

Artino Jr., A. R., Driessen, E. W., & Maggio, L. A. (2019). Ethical shades of gray: International frequency of scientific misconduct and questionable research practices in health professions education. Academic Medicine , 94 (1), 76–84.

Aubert Bonn, N., & Pinxten, W. (2019). A decade of empirical research on research integrity: What have we (not) looked at? Journal of Empirical Research on Human Research Ethics , 14 (4), 338–352.

Banks, G. C., O’Boyle Jr, E. H., Pollack, J. M., White, C. D., Batchelor, J. H., Whelpley, C. E., & Adkins, C. L. (2016). Questions about questionable research practices in the field of management: A guest commentary. Journal of Management , 42 (1), 5–20.

Beatty, P., & Herrmann, D. (2002). To answer or not to answer: Decision processes related to survey item nonresponse. Survey Nonresponse , 71 , 86.


Berggren, C. (2016). Scientific Publishing: History, practice, and ethics (in Swedish: Vetenskaplig Publicering: Historik, Praktik Och Etik) . Studentlitteratur AB.

Berggren, C., & Karabag, S. F. (2019). Scientific misconduct at an elite medical institute: The role of competing institutional logics and fragmented control. Research Policy , 48 (2), 428–443. https://doi.org/10.1016/j.respol.2018.03.020

Braxton, J. M. (1993). Deviancy from the norms of science: The effects of anomie and alienation in the academic profession. Research in Higher Education , 54 (2), 213–228. https://www.jstor.org/stable/40196105

Bray, D., & von Storch, H. (2017). The normative orientations of climate scientists. Science and Engineering Ethics , 23 (5), 1351–1367.

Breakwell, G. M., Wright, D. B., & Barnett, J. (2020). Research questions, design, strategy and choice of methods. Research Methods in Psychology , 1–30.

Brenner, P. S. (2020). Why survey methodology needs sociology and why sociology needs survey methodology: Introduction to understanding survey methodology: Sociological theory and applications. In Understanding survey methodology: Sociological theory and applications (pp. 1–11). https://doi.org/10.1007/978-3-030-47256-6_1

Bruton, S. V., Medlin, M., Brown, M., & Sacco, D. F. (2020). Personal motivations and systemic incentives: Scientists on questionable research practices. Science and Engineering Ethics , 26 (3), 1531–1547.

Butler, N., Delaney, H., & Spoelstra, S. (2017). The gray zone: Questionable research practices in the business school. Academy of Management Learning & Education , 16 (1), 94–109.

Byrn, M. J., Redman, B. K., & Merz, J. F. (2016). A pilot study of universities’ willingness to solicit whistleblowers for participation in a study. AJOB Empirical Bioethics , 7 (4), 260–264.

Chalmers, I., & Glasziou, P. (2009). Avoidable waste in the production and reporting of research evidence. The Lancet , 374 (9683), 86–89.

de Vrieze, J. (2021). Large survey finds questionable research practices are common. Science . https://doi.org/10.1126/science.373.6552.265

Dore, R. P. (1973/2011). British Factory Japanese Factory: The origins of National Diversity in Industrial Relations, with a New Afterword . University of California Press/Routledge.

Downes, M. (2017). University scandal, reputation and governance. International Journal for Educational Integrity , 13 , 1–20.

Duncan, L. J., & Cheng, K. F. (2021). Public perception of NHS general practice during the first six months of the COVID-19 pandemic in England. F1000Research , 10 .

Fanelli, D. (2009). How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS One , 4(5), e5738.

Fiedler, K., & Schwarz, N. (2016). Questionable research practices revisited. Social Psychological and Personality Science , 7 (1), 45–52.

Flick, U. (2013). The SAGE Handbook of Qualitative Data Analysis . Sage.

Fraser, H., Parker, T., Nakagawa, S., Barnett, A., & Fidler, F. (2018). Questionable research practices in ecology and evolution. PLoS One , 13(7), e0200303.

Galeano, D., Li, S., Gerstein, M., & Paccanaro, A. (2020). Predicting the frequencies of drug side effects. Nature Communications , 11 (1), 4575.

Gopalakrishna, G., Ter Riet, G., Vink, G., Stoop, I., Wicherts, J. M., & Bouter, L. M. (2022). Prevalence of questionable research practices, research misconduct and their potential explanatory factors: A survey among academic researchers in the Netherlands. PLoS One , 17 (2), e0263023.

Hasselberg, Y. (2012). Science as Work: Norms and Work Organization in Commodified Science (in Swedish: Vetenskap Som arbete: Normer och arbetsorganisation i den kommodifierade vetenskapen) . Gidlunds förlag.

Hill, J., Ogle, K., Gottlieb, M., Santen, S. A., & Artino Jr., A. R. (2022). Educator's blueprint: A how-to guide for collecting validity evidence in survey-based research. AEM Education and Training , 6(6), e10835.

Hinkin, T. R. (1995). A review of scale development practices in the study of organizations. Journal of Management , 21 (5), 967–988.

Hinkin, T. R. (1998). A brief tutorial on the development of measures for use in survey questionnaires. Organizational Research Methods , 1 (1), 104–121.

Huistra, P., & Paul, H. (2022). Systemic explanations of scientific misconduct: Provoked by spectacular cases of norm violation? Journal of Academic Ethics , 20 (1), 51–65.

Hurtt, R. K. (2010). Development of a scale to measure professional skepticism. Auditing: A Journal of Practice & Theory , 29 (1), 149–171.

Ikart, E. M. (2019). Survey questionnaire survey pretesting method: An evaluation of survey questionnaire via expert reviews technique. Asian Journal of Social Science Studies , 4 (2), 1.

Karabag, S. F., & Berggren, C. (2016). Misconduct, marginality and editorial practices in management, business and economics journals. PLoS One , 11 (7), e0159492. https://doi.org/10.1371/journal.pone.0159492

Karabag, S. F., Berggren, C., Pielaszkiewicz, J., & Gerdin, B. (2024). Minimizing questionable research practices–the role of norms, counter norms, and micro-organizational ethics discussion. Journal of Academic Ethics , 1–27. https://doi.org/10.1007/s10805-024-09520-z

Kim, S. Y., & Kim, Y. (2018). The ethos of Science and its correlates: An empirical analysis of scientists’ endorsement of Mertonian norms. Science Technology and Society , 23 (1), 1–24. https://doi.org/10.1177/0971721817744438

Lawlor, J., Thomas, C., Guhin, A. T., Kenyon, K., Lerner, M. D., Consortium, U., & Drahota, A. (2021). Suspicious and fraudulent online survey participation: Introducing the REAL framework. Methodological Innovations , 14 (3), 20597991211050467.

Levelt, W. J., Drenth, P., & Noort, E. (2012). Flawed science: The fraudulent research practices of social psychologist Diederik Stapel (in Dutch: Falende wetenschap: De frauduleuze onderzoekspraktijken van social-psycholoog Diederik Stapel) . Commissioned by Tilburg University, the University of Amsterdam, and the University of Groningen. http://hdl.handle.net/11858/00-001M-0000-0010-258A-9

Lietz, P. (2010). Research into questionnaire design: A summary of the literature. International Journal of Market Research , 52 (2), 249–272.

Lin, M. W., & Yu, C. (2020). Can corruption be measured? Comparing global versus local perceptions of corruption in East and Southeast Asia. In Regional comparisons in comparative policy analysis studies (pp. 90–107). Routledge.

Macfarlane, B., & Cheng, M. (2008). Communism, universalism and disinterestedness: Re-examining contemporary support among academics for Merton’s scientific norms. Journal of Academic Ethics , 6 , 67–78.

Madley-Dowd, P., Hughes, R., Tilling, K., & Heron, J. (2019). The proportion of missing data should not be used to guide decisions on multiple imputation. Journal of Clinical Epidemiology , 110 , 63–73.

McNeil, J. J., Piccenna, L., Ronaldson, K., & Ioannides-Demos, L. L. (2010). The value of patient-centred registries in phase IV drug surveillance. Pharmaceutical Medicine , 24 , 281–288.

Merton, R. K. (1942/1973). The normative structure of science. In The sociology of science: Theoretical and empirical investigations . The University of Chicago Press.

Mitroff, I. I. (1974). Norms and counter-norms in a select group of the Apollo Moon scientists: A case study of the ambivalence of scientists. American Sociological Review , 39 (4), 579–595. https://doi.org/10.2307/2094423

Necker, S. (2014). Scientific misbehavior in economics. Research Policy , 43 (10), 1747–1759. https://doi.org/10.1016/j.respol.2014.05.002

Nosek, B. A., Hardwicke, T. E., Moshontz, H., Allard, A., Corker, K. S., Dreber, A., & Nuijten, M. B. (2022). Replicability, robustness, and reproducibility in psychological science. Annual Review of Psychology , 73 (1), 719–748.

O’Brien, B. C., Harris, I. B., Beckman, T. J., Reed, D. A., & Cook, D. A. (2014). Standards for reporting qualitative research: A synthesis of recommendations. Academic Medicine , 89 (9). https://journals.lww.com/academicmedicine/fulltext/2014/09000/standards_for_reporting_qualitative_research__a.21.aspx

Ordu, Y., & Yılmaz, S. (2024). Examining the impact of dramatization simulation on nursing students’ ethical attitudes: A mixed-method study. Journal of Academic Ethics , 1–13.

Perneger, T. V., Courvoisier, D. S., Hudelson, P. M., & Gayet-Ageron, A. (2015). Sample size for pre-tests of questionnaires. Quality of life Research , 24 , 147–151.

Presser, S., & Blair, J. (1994). Survey pretesting: Do different methods produce different results? Sociological Methodology , 73–104.

Ravn, T., & Sørensen, M. P. (2021). Exploring the gray area: Similarities and differences in questionable research practices (QRPs) across main areas of research. Science and Engineering Ethics , 27 (4), 40.

Roberts, D. L., & John, F. A. S. (2014). Estimating the prevalence of researcher misconduct: a study of UK academics within biological sciences. PeerJ , 2 , e562.

Siewert, W., & Udani, A. (2016). Missouri municipal ethics survey: Do ethics measures work at the municipal level? Public Integrity , 18 (3), 269–289.

Smajic, E., Avdic, D., Pasic, A., Prcic, A., & Stancic, M. (2022). Mixed methodology of scientific research in healthcare. Acta Informatica Medica , 30 (1), 57–60. https://doi.org/10.5455/aim.2022.30.57-60

Steneck, N. H. (2006). Fostering integrity in research: Definitions, current knowledge, and future directions. Science and Engineering Ethics , 12 , 53–74.

Szolnoki, G., & Hoffmann, D. (2013). Online, face-to-face and telephone surveys—comparing different sampling methods in wine consumer research. Wine Economics and Policy , 2 (2), 57–66.

Tong, A., Sainsbury, P., & Craig, J. (2007). Consolidated criteria for reporting qualitative research (COREQ): A 32-item checklist for interviews and focus groups. International Journal for Quality in Health Care , 19 (6), 349–357. https://doi.org/10.1093/intqhc/mzm042

Tourangeau, R., & Yan, T. (2007). Sensitive questions in surveys. Psychological Bulletin , 133 (5), 859.

Tourangeau, R., Groves, R. M., & Redline, C. D. (2010). Sensitive topics and reluctant respondents: Demonstrating a link between nonresponse bias and measurement error. Public Opinion Quarterly , 74 (3), 413–432.

Vermeulen, I., & Hartmann, T. (2015). Questionable research and publication practices in communication science. Communication Methods and Measures , 9 (4), 189–192.

Wallenborg Likidis, J. (2019). Academic norms and scientific attitudes: Metrology review of a survey for doctoral students, researchers and academic teachers (In Swedish: Akademiska normer och vetenskapliga förhallningssätt. Mätteknisk granskning av en enkät till doktorander, forskare och akademiska lärare) . Prod.nr. 8942146, Statistics Sweden, Örebro.

Willis, G. B. (2004). Cognitive interviewing: A tool for improving questionnaire design . Sage Publications.

Xie, Y., Wang, K., & Kong, Y. (2021). Prevalence of research misconduct and questionable research practices: A systematic review and meta-analysis. Science and Engineering Ethics , 27 (4), 41.

Yan, T., & Curtin, R. (2010). The relation between unit nonresponse and item nonresponse: A response continuum perspective. International Journal of Public Opinion Research , 22 (4), 535–551.


Acknowledgements

We thank Jennica Wallenborg Likidis, Statistics Sweden, for providing expert support in the survey design. We are grateful to colleagues Ingrid Johansson Mignon, Cecilia Enberg, Anna Dreber Almenberg, Andrea Fried, Sara Liin, Mariano Salazar, Lars Bengtsson, Harriet Wallberg, Karl Wennberg, and Thomas Magnusson, who joined the pretest or cognitive tests. We also thank Ksenia Onufrey, Peter Hedström, Jan-Ingvar Jönsson, Richard Öhrvall, Kerstin Sahlin, and David Ludvigsson for constructive comments or suggestions.

Open access funding provided by Linköping University. The project was funded by Forte, the Swedish Research Council for Health, Working Life and Welfare (https://www.vr.se/swecris?#/project/2018-00321_Forte), Grant No. 2018-00321.

Author information

Authors and Affiliations

Department of Management and Engineering [IEI], Linköping University, Linköping, SE-581 83, Sweden

Christian Berggren & Solmaz Filiz Karabag

Department of Surgical Sciences, Uppsala University, Uppsala University Hospital, entrance 70, Uppsala, SE-751 85, Sweden

Bengt Gerdin

Department of Civil and Industrial Engineering, Uppsala University, Box 169, Uppsala, SE-751 04, Sweden

Solmaz Filiz Karabag


Contributions

Conceptualization: CB. Survey Design: SFK, CB, Methodology: SFK, BG, CB. Visualization: SFK, BG. Funding acquisition: SFK. Project administration and management: SFK. Writing – original draft: CB. Writing – review & editing: CB, BG, SFK. Approval of the final manuscript: SFK, BG, CB.

Corresponding author

Correspondence to Solmaz Filiz Karabag.

Ethics declarations

Ethics Approval and Consent to Participate

The Swedish Act concerning the Ethical Review of Research Involving Humans (2003:460) defines the types of studies that require ethics approval. In line with the General Data Protection Regulation (EU 2016/679), the act is applicable to studies that collect personal data revealing racial or ethnic origin, political opinions, trade union membership, religious or philosophical beliefs, or health and sexual orientation. The present study does not involve any of the above, which is why no formal ethical permit was required. The ethical aspects of the project and its compliance with the guidelines of the Swedish Research Council (2017) were also part of the review process at the project’s public funding agency, Forte.

Competing Interests

The authors declare that they have no competing interests.

Supporting Information

The complete case study survey of social and medical science researchers in Sweden 2020.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Berggren, C., Gerdin, B. & Karabag, S.F. Developing Surveys on Questionable Research Practices: Four Challenging Design Problems. J Acad Ethics (2024). https://doi.org/10.1007/s10805-024-09565-0


Accepted: 23 August 2024

Published: 02 September 2024

DOI: https://doi.org/10.1007/s10805-024-09565-0


Keywords

  • Questionable Research Practices
  • Normative Environment
  • Organizational Climate
  • Survey Development
  • Design Problems
  • Problem of Incomplete Coverage
  • Survey Design Process
  • Baseline Survey
  • Pre-testing
  • Technical Assessment
  • Cognitive Interviews
  • Social Desirability
  • Sensitivity
  • Organizational Controversiality
  • Challenge of Nonresponse
  • Qualitative Research
  • Quantitative Research