What Is a Research Design | Types, Guide & Examples

Published on June 7, 2021 by Shona McCombes. Revised on November 20, 2023 by Pritha Bhandari.

A research design is a strategy for answering your research question using empirical data. Creating a research design means making decisions about:

  • Your overall research objectives and approach
  • Whether you’ll rely on primary research or secondary research
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods
  • The procedures you’ll follow to collect data
  • Your data analysis methods

A well-planned research design helps ensure that your methods match your research objectives and that you use the right kind of analysis for your data.

Table of contents

  • Step 1: Consider your aims and approach
  • Step 2: Choose a type of research design
  • Step 3: Identify your population and sampling method
  • Step 4: Choose your data collection methods
  • Step 5: Plan your data collection procedures
  • Step 6: Decide on your data analysis strategies
  • Other interesting articles
  • Frequently asked questions about research design

Step 1: Consider your aims and approach

Before you can start designing your research, you should already have a clear idea of the research question you want to investigate.

There are many different ways you could go about answering this question. Your research design choices should be driven by your aims and priorities—start by thinking carefully about what you want to achieve.

The first choice you need to make is whether you’ll take a qualitative or quantitative approach.

  • Qualitative approach: focuses on words and meanings; used to explore ideas and experiences in depth.
  • Quantitative approach: focuses on numbers and statistics; used to measure variables and describe frequencies, averages, and correlations about relationships between variables.

Qualitative research designs tend to be more flexible and inductive, allowing you to adjust your approach based on what you find throughout the research process.

Quantitative research designs tend to be more fixed and deductive, with variables and hypotheses clearly defined in advance of data collection.

It’s also possible to use a mixed-methods design that integrates aspects of both approaches. By combining qualitative and quantitative insights, you can gain a more complete picture of the problem you’re studying and strengthen the credibility of your conclusions.

Practical and ethical considerations when designing research

As well as scientific considerations, you need to think practically when designing your research. If your research involves people or animals, you also need to consider research ethics.

  • How much time do you have to collect data and write up the research?
  • Will you be able to gain access to the data you need (e.g., by travelling to a specific location or contacting specific people)?
  • Do you have the necessary research skills (e.g., statistical analysis or interview techniques)?
  • Will you need ethical approval?

At each stage of the research design process, make sure that your choices are practically feasible.

Step 2: Choose a type of research design
Within both qualitative and quantitative approaches, there are several types of research design to choose from. Each type provides a framework for the overall shape of your research.

Types of quantitative research designs

Quantitative designs can be split into four main types.

  • Experimental and quasi-experimental designs allow you to test cause-and-effect relationships.
  • Descriptive and correlational designs allow you to measure variables and describe relationships between them.
  • Experimental: tests cause-and-effect relationships by manipulating an independent variable and measuring its effect on a dependent variable, with participants randomly assigned to groups.
  • Quasi-experimental: tests cause-and-effect relationships without full experimental control (e.g., without random assignment to groups).
  • Correlational: measures the relationship between two or more variables without manipulating them.
  • Descriptive: describes the characteristics of a population, situation, or phenomenon.

With descriptive and correlational designs, you can get a clear picture of characteristics, trends, and relationships as they exist in the real world. However, you can’t draw conclusions about cause and effect (because correlation doesn’t imply causation).

Experiments are the strongest way to test cause-and-effect relationships without the risk of other variables influencing the results. However, their controlled conditions may not always reflect how things work in the real world. They’re often also more difficult and expensive to implement.

Types of qualitative research designs

Qualitative designs are less strictly defined. This approach is about gaining a rich, detailed understanding of a specific context or phenomenon, and you can often be more creative and flexible in designing your research.

The table below shows some common types of qualitative design. They often have similar approaches in terms of data collection, but focus on different aspects when analyzing the data.

  • Grounded theory: aims to develop a theory inductively, by systematically collecting and analyzing data on a topic.
  • Phenomenology: aims to understand a phenomenon or event by describing and interpreting participants’ lived experiences.

Step 3: Identify your population and sampling method

Your research design should clearly define who or what your research will focus on, and how you’ll go about choosing your participants or subjects.

In research, a population is the entire group that you want to draw conclusions about, while a sample is the smaller group of individuals you’ll actually collect data from.

Defining the population

A population can be made up of anything you want to study—plants, animals, organizations, texts, countries, etc. In the social sciences, it most often refers to a group of people.

For example, will you focus on people from a specific demographic, region or background? Are you interested in people with a certain job or medical condition, or users of a particular product?

The more precisely you define your population, the easier it will be to gather a representative sample.

Sampling methods

Even with a narrowly defined population, it’s rarely possible to collect data from every individual. Instead, you’ll collect data from a sample.

To select a sample, there are two main approaches: probability sampling and non-probability sampling. The sampling method you use affects how confidently you can generalize your results to the population as a whole.

  • Probability sampling: every member of the population has a known chance of being selected, using random selection methods; this allows you to make strong statistical inferences about the population.
  • Non-probability sampling: individuals are selected based on non-random criteria such as convenience or availability; this is easier and cheaper, but more vulnerable to sampling bias.

Probability sampling is the most statistically valid option, but it’s often difficult to achieve unless you’re dealing with a very small and accessible population.

For practical reasons, many studies use non-probability sampling, but it’s important to be aware of the limitations and carefully consider potential biases. You should always make an effort to gather a sample that’s as representative as possible of the population.
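The contrast between the two approaches can be sketched in Python. This is only an illustration: the population of 1,000 student IDs is invented, and a real sampling frame would come from your own study.

```python
import random

# Hypothetical sampling frame: ID numbers for a population of 1,000
# students (invented for illustration).
population = list(range(1, 1001))

# Probability sampling: a simple random sample of 100 students.
# Every member of the population has an equal, known chance of selection.
random.seed(42)  # fixed seed so the draw is reproducible
probability_sample = random.sample(population, k=100)

# Non-probability (convenience) sampling: e.g., the first 100 students
# who happen to be available; the chance of selection is unknown.
convenience_sample = population[:100]

print(len(probability_sample), len(convenience_sample))  # 100 100
```

Note that only the first draw supports statistical generalization; the convenience sample systematically excludes everyone outside the first hundred IDs.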

Case selection in qualitative research

In some types of qualitative designs, sampling may not be relevant.

For example, in an ethnography or a case study, your aim is to deeply understand a specific context, not to generalize to a population. Instead of sampling, you may simply aim to collect as much data as possible about the context you are studying.

In these types of design, you still have to carefully consider your choice of case or community. You should have a clear rationale for why this particular case is suitable for answering your research question.

For example, you might choose a case study that reveals an unusual or neglected aspect of your research problem, or you might choose several very similar or very different cases in order to compare them.

Step 4: Choose your data collection methods

Data collection methods are ways of directly measuring variables and gathering information. They allow you to gain first-hand knowledge and original insights into your research problem.

You can choose just one data collection method, or use several methods in the same study.

Survey methods

Surveys allow you to collect data about opinions, behaviors, experiences, and characteristics by asking people directly. There are two main survey methods to choose from: questionnaires and interviews.

  • Questionnaires: respondents answer a written (paper or online) list of questions themselves; efficient for collecting data from large samples.
  • Interviews: a researcher asks questions in person, by phone, or online; these allow follow-up questions and more in-depth answers, but are more time-consuming.

Observation methods

Observational studies allow you to collect data unobtrusively, observing characteristics, behaviors or social interactions without relying on self-reporting.

Observations may be conducted in real time, taking notes as you observe, or you might make audiovisual recordings for later analysis. They can be qualitative or quantitative.


Other methods of data collection

There are many other ways you might collect data depending on your field and topic.

  • Media & communication: collecting a sample of texts (e.g., speeches, articles, or social media posts) for data on cultural norms and narratives
  • Psychology: using technologies like neuroimaging, eye-tracking, or computer-based tasks to collect data on things like attention, emotional response, or reaction time
  • Education: using tests or assignments to collect data on knowledge and skills
  • Physical sciences: using scientific instruments to collect data on things like weight, blood pressure, or chemical composition

If you’re not sure which methods will work best for your research design, try reading some papers in your field to see what kinds of data collection methods they used.

Secondary data

If you don’t have the time or resources to collect data from the population you’re interested in, you can also choose to use secondary data that other researchers already collected—for example, datasets from government surveys or previous studies on your topic.

With this raw data, you can do your own analysis to answer new research questions that weren’t addressed by the original study.

Using secondary data can expand the scope of your research, as you may be able to access much larger and more varied samples than you could collect yourself.

However, it also means you don’t have any control over which variables to measure or how to measure them, so the conclusions you can draw may be limited.


Step 5: Plan your data collection procedures

As well as deciding on your methods, you need to plan exactly how you’ll use these methods to collect data that’s consistent, accurate, and unbiased.

Planning systematic procedures is especially important in quantitative research, where you need to precisely define your variables and ensure your measurements are high in reliability and validity.

Operationalization

Some variables, like height or age, are easily measured. But often you’ll be dealing with more abstract concepts, like satisfaction, anxiety, or competence. Operationalization means turning these fuzzy ideas into measurable indicators.

If you’re using observations, which events or actions will you count?

If you’re using surveys, which questions will you ask and what range of responses will be offered?

You may also choose to use or adapt existing materials designed to measure the concept you’re interested in—for example, questionnaires or inventories whose reliability and validity have already been established.
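As a minimal sketch of operationalization, an abstract concept like "satisfaction" might be scored as the mean of several Likert items. The item names and responses below are invented for illustration:

```python
# Hypothetical operationalization of "satisfaction" as the mean of three
# Likert items (1 = strongly disagree ... 5 = strongly agree). The item
# names are invented for illustration.
def satisfaction_score(responses: dict) -> float:
    items = ["enjoys_course", "would_recommend", "meets_expectations"]
    return sum(responses[item] for item in items) / len(items)

respondent = {"enjoys_course": 4, "would_recommend": 5, "meets_expectations": 3}
print(satisfaction_score(respondent))  # -> 4.0
```

The point is that the fuzzy concept becomes a concrete, repeatable measurement rule that every respondent is scored by in the same way.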

Reliability and validity

Reliability means your results can be consistently reproduced, while validity means that you’re actually measuring the concept you’re interested in.


For valid and reliable results, your measurement materials should be thoroughly researched and carefully designed. Plan your procedures to make sure you carry out the same steps in the same way for each participant.

If you’re developing a new questionnaire or other instrument to measure a specific concept, running a pilot study allows you to check its validity and reliability in advance.
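One common pilot-study reliability check is Cronbach's alpha, a measure of a scale's internal consistency. A minimal sketch using Python's standard library, with invented pilot data:

```python
from statistics import variance

def cronbach_alpha(item_scores):
    """Cronbach's alpha: internal-consistency reliability of a scale.

    item_scores: one list per item, each holding that item's score
    for every respondent (same respondent order in every list).
    """
    k = len(item_scores)
    # Each respondent's total score across all items.
    totals = [sum(scores) for scores in zip(*item_scores)]
    item_var = sum(variance(scores) for scores in item_scores)
    return (k / (k - 1)) * (1 - item_var / variance(totals))

# Hypothetical pilot data: 3 questionnaire items, 5 respondents.
pilot = [
    [4, 5, 3, 4, 4],
    [4, 4, 3, 5, 4],
    [5, 5, 2, 4, 4],
]
print(round(cronbach_alpha(pilot), 2))  # -> 0.82
```

An alpha around 0.7 or higher is conventionally read as acceptable internal consistency, though the threshold depends on the field and the stakes of the measurement.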

Sampling procedures

As well as choosing an appropriate sampling method, you need a concrete plan for how you’ll actually contact and recruit your selected sample.

That means making decisions about things like:

  • How many participants do you need for an adequate sample size?
  • What inclusion and exclusion criteria will you use to identify eligible participants?
  • How will you contact your sample—by mail, online, by phone, or in person?

If you’re using a probability sampling method, it’s important that everyone who is randomly selected actually participates in the study. How will you ensure a high response rate?

If you’re using a non-probability method, how will you avoid research bias and ensure a representative sample?

Data management

It’s also important to create a data management plan for organizing and storing your data.

Will you need to transcribe interviews or perform data entry for observations? You should anonymize and safeguard any sensitive data, and make sure it’s backed up regularly.

Keeping your data well-organized will save time when it comes to analyzing it. It can also help other researchers validate and add to your findings (high replicability).

Step 6: Decide on your data analysis strategies

On its own, raw data can’t answer your research question. The last step of designing your research is planning how you’ll analyze the data.

Quantitative data analysis

In quantitative research, you’ll most likely use some form of statistical analysis. With statistics, you can summarize your sample data, make estimates, and test hypotheses.

Using descriptive statistics, you can summarize your sample data in terms of:

  • The distribution of the data (e.g., the frequency of each score on a test)
  • The central tendency of the data (e.g., the mean to describe the average score)
  • The variability of the data (e.g., the standard deviation to describe how spread out the scores are)

The specific calculations you can do depend on the level of measurement of your variables.
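The three descriptive summaries above can be sketched with Python's standard library. The test scores are invented for illustration:

```python
from collections import Counter
from statistics import mean, median, stdev

# Hypothetical test scores for a sample of ten students (invented data).
scores = [62, 70, 70, 75, 78, 80, 84, 85, 90, 96]

print(Counter(scores))          # distribution: frequency of each score
print(mean(scores))             # central tendency: the average score -> 79
print(median(scores))           # central tendency: the middle score -> 79.0
print(round(stdev(scores), 1))  # variability: standard deviation -> 10.2
```

Together these three numbers give a compact picture of the sample before any inferential work begins.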

Using inferential statistics, you can:

  • Make estimates about the population based on your sample data.
  • Test hypotheses about a relationship between variables.

Regression and correlation tests look for associations between two or more variables, while comparison tests (such as t tests and ANOVAs) look for differences in the outcomes of different groups.

Your choice of statistical test depends on various aspects of your research design, including the types of variables you’re dealing with and the distribution of your data.
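As a sketch of one such comparison test, the independent-samples t statistic can be computed from scratch with the pooled-variance formula. The group scores are invented, and a full analysis would also compute degrees of freedom and a p value:

```python
from math import sqrt
from statistics import mean, variance

def t_statistic(group_a, group_b):
    """Independent-samples t statistic (pooled variance). A sketch only:
    a real analysis would also report degrees of freedom and a p value."""
    na, nb = len(group_a), len(group_b)
    # Pooled estimate of the common variance across both groups.
    pooled = ((na - 1) * variance(group_a) + (nb - 1) * variance(group_b)) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / sqrt(pooled * (1 / na + 1 / nb))

# Hypothetical outcome scores for two groups (invented data).
control = [72, 75, 78, 71, 74]
treatment = [80, 83, 79, 85, 82]
print(round(t_statistic(treatment, control), 2))  # -> 4.8
```

A larger absolute t value indicates a bigger group difference relative to the variability within groups; whether it is statistically significant depends on the degrees of freedom and your chosen alpha level.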

Qualitative data analysis

In qualitative research, your data will usually be very dense with information and ideas. Instead of summing it up in numbers, you’ll need to comb through the data in detail, interpret its meanings, identify patterns, and extract the parts that are most relevant to your research question.

Two of the most common approaches to doing this are thematic analysis and discourse analysis.

  • Thematic analysis: identifies and interprets patterns of meaning (themes) across the data.
  • Discourse analysis: examines how language is used in context to study communication and its social effects.

There are many other ways of analyzing qualitative data depending on the aims of your research. To get a sense of potential approaches, try reading some qualitative research papers in your field.

Other interesting articles

If you want to know more about the research process, methodology, research bias, or statistics, make sure to check out some of our other articles with explanations and examples.

  • Simple random sampling
  • Stratified sampling
  • Cluster sampling
  • Likert scales
  • Reproducibility

Statistics

  • Null hypothesis
  • Statistical power
  • Probability distribution
  • Effect size
  • Poisson distribution

Research bias

  • Optimism bias
  • Cognitive bias
  • Implicit bias
  • Hawthorne effect
  • Anchoring bias
  • Explicit bias

Frequently asked questions about research design

A research design is a strategy for answering your research question. It defines your overall approach and determines how you will collect and analyze data.

A well-planned research design helps ensure that your methods match your research aims, that you collect high-quality data from credible sources, and that you use the right kind of analysis to answer your questions. This allows you to draw valid, trustworthy conclusions.

Quantitative research designs can be divided into two main categories:

  • Correlational and descriptive designs are used to investigate characteristics, averages, trends, and associations between variables.
  • Experimental and quasi-experimental designs are used to test causal relationships.

Qualitative research designs tend to be more flexible. Common types of qualitative design include case study, ethnography, and grounded theory designs.

The priorities of a research design can vary depending on the field, but you usually have to specify:

  • Your research questions and/or hypotheses
  • Your overall approach (e.g., qualitative or quantitative)
  • The type of design you’re using (e.g., a survey, experiment, or case study)
  • Your data collection methods (e.g., questionnaires, observations)
  • Your data collection procedures (e.g., operationalization, timing, and data management)
  • Your data analysis methods (e.g., statistical tests or thematic analysis)

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

In statistics, sampling allows you to test a hypothesis about the characteristics of a population.

Operationalization means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioral avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data, it’s important to consider how you will operationalize the variables that you want to measure.

A research project is an academic, scientific, or professional undertaking to answer a research question. Research projects can take many forms, such as qualitative or quantitative, descriptive, longitudinal, experimental, or correlational. What kind of research approach you choose will depend on your topic.

Cite this Scribbr article


McCombes, S. (2023, November 20). What Is a Research Design | Types, Guide & Examples. Scribbr. Retrieved August 24, 2024, from https://www.scribbr.com/methodology/research-design/


Research Design – Types, Methods and Examples


Definition:

Research design refers to the overall strategy or plan for conducting a research study. It outlines the methods and procedures that will be used to collect and analyze data, as well as the goals and objectives of the study. Research design is important because it guides the entire research process and ensures that the study is conducted in a systematic and rigorous manner.

Types of Research Design

Types of Research Design are as follows:

Descriptive Research Design

This type of research design is used to describe a phenomenon or situation. It involves collecting data through surveys, questionnaires, interviews, and observations. The aim of descriptive research is to provide an accurate and detailed portrayal of a particular group, event, or situation. It can be useful in identifying patterns, trends, and relationships in the data.

Correlational Research Design

Correlational research design is used to determine if there is a relationship between two or more variables. This type of research design involves collecting data from participants and analyzing the relationship between the variables using statistical methods. The aim of correlational research is to identify the strength and direction of the relationship between the variables.

Experimental Research Design

Experimental research design is used to investigate cause-and-effect relationships between variables. This type of research design involves manipulating one variable and measuring the effect on another variable. It usually involves randomly assigning participants to groups and manipulating an independent variable to determine its effect on a dependent variable. The aim of experimental research is to establish causality.
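The random-assignment step described above can be sketched in Python. The participant IDs are invented, and the seed is fixed only so the example is reproducible:

```python
import random

# Hypothetical random assignment of ten participants to two conditions
# (the participant IDs are invented for illustration).
participants = ["P01", "P02", "P03", "P04", "P05",
                "P06", "P07", "P08", "P09", "P10"]
random.seed(7)  # fixed seed so the assignment is reproducible
random.shuffle(participants)
treatment_group, control_group = participants[:5], participants[5:]
print(len(treatment_group), len(control_group))  # 5 5
```

Because assignment is random, pre-existing differences between participants are distributed across both groups by chance, which is what licenses causal conclusions.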

Quasi-experimental Research Design

Quasi-experimental research design is similar to experimental research design, but it lacks one or more of the features of a true experiment. For example, there may not be random assignment to groups or a control group. This type of research design is used when it is not feasible or ethical to conduct a true experiment.

Case Study Research Design

Case study research design is used to investigate a single case or a small number of cases in depth. It involves collecting data through various methods, such as interviews, observations, and document analysis. The aim of case study research is to provide an in-depth understanding of a particular case or situation.

Longitudinal Research Design

Longitudinal research design is used to study changes in a particular phenomenon over time. It involves collecting data at multiple time points and analyzing the changes that occur. The aim of longitudinal research is to provide insights into the development, growth, or decline of a particular phenomenon over time.

Structure of Research Design

The format of a research design typically includes the following sections:

  • Introduction : This section provides an overview of the research problem, the research questions, and the importance of the study. It also includes a brief literature review that summarizes previous research on the topic and identifies gaps in the existing knowledge.
  • Research Questions or Hypotheses: This section identifies the specific research questions or hypotheses that the study will address. These questions should be clear, specific, and testable.
  • Research Methods : This section describes the methods that will be used to collect and analyze data. It includes details about the study design, the sampling strategy, the data collection instruments, and the data analysis techniques.
  • Data Collection: This section describes how the data will be collected, including the sample size, data collection procedures, and any ethical considerations.
  • Data Analysis: This section describes how the data will be analyzed, including the statistical techniques that will be used to test the research questions or hypotheses.
  • Results : This section presents the findings of the study, including descriptive statistics and statistical tests.
  • Discussion and Conclusion : This section summarizes the key findings of the study, interprets the results, and discusses the implications of the findings. It also includes recommendations for future research.
  • References : This section lists the sources cited in the research design.

Example of Research Design

An Example of Research Design could be:

Research question: Does the use of social media affect the academic performance of high school students?

Research design:

  • Research approach : The research approach will be quantitative as it involves collecting numerical data to test the hypothesis.
  • Research design : The research design will be a quasi-experimental design, with a pretest-posttest control group design.
  • Sample : The sample will be 200 high school students from two schools, with 100 students in the experimental group and 100 students in the control group.
  • Data collection : The data will be collected through surveys administered to the students at the beginning and end of the academic year. The surveys will include questions about their social media usage and academic performance.
  • Data analysis : The data collected will be analyzed using statistical software. The mean scores of the experimental and control groups will be compared to determine whether there is a significant difference in academic performance between the two groups.
  • Limitations : The limitations of the study will be acknowledged, including the fact that social media usage can vary greatly among individuals, and the study only focuses on two schools, which may not be representative of the entire population.
  • Ethical considerations: Ethical considerations will be taken into account, such as obtaining informed consent from the participants and ensuring their anonymity and confidentiality.
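A minimal sketch of the planned pretest-posttest comparison, assuming invented survey scores. A real analysis would follow this with a significance test rather than a bare comparison of means:

```python
from statistics import mean

# Hypothetical pretest/posttest survey scores (invented data) for the
# design above: each pair is (pretest, posttest) for one student.
experimental = [(60, 55), (70, 62), (65, 60), (72, 66)]
control = [(61, 60), (68, 67), (66, 66), (70, 68)]

def mean_gain(group):
    """Average change in score from pretest to posttest."""
    return mean(post - pre for pre, post in group)

# Compare the average change in academic performance between groups.
print(mean_gain(experimental), mean_gain(control))  # -6 -1
```

Using each student's change score (posttest minus pretest) controls for baseline differences between the groups, which matters in a quasi-experimental design where assignment is not random.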

How to Write Research Design

Writing a research design involves planning and outlining the methodology and approach that will be used to answer a research question or hypothesis. Here are some steps to help you write a research design:

  • Define the research question or hypothesis : Before beginning your research design, you should clearly define your research question or hypothesis. This will guide your research design and help you select appropriate methods.
  • Select a research design: There are many different research designs to choose from, including experimental, survey, case study, and qualitative designs. Choose a design that best fits your research question and objectives.
  • Develop a sampling plan : If your research involves collecting data from a sample, you will need to develop a sampling plan. This should outline how you will select participants and how many participants you will include.
  • Define variables: Clearly define the variables you will be measuring or manipulating in your study. This will help ensure that your results are meaningful and relevant to your research question.
  • Choose data collection methods : Decide on the data collection methods you will use to gather information. This may include surveys, interviews, observations, experiments, or secondary data sources.
  • Create a data analysis plan: Develop a plan for analyzing your data, including the statistical or qualitative techniques you will use.
  • Consider ethical concerns : Finally, be sure to consider any ethical concerns related to your research, such as participant confidentiality or potential harm.

When to Write Research Design

Research design should be written before conducting any research study. It is an important planning phase that outlines the research methodology, data collection methods, and data analysis techniques that will be used to investigate a research question or problem. The research design helps to ensure that the research is conducted in a systematic and logical manner, and that the data collected is relevant and reliable.

Ideally, the research design should be developed as early as possible in the research process, before any data is collected. This allows the researcher to carefully consider the research question, identify the most appropriate research methodology, and plan the data collection and analysis procedures in advance. By doing so, the research can be conducted in a more efficient and effective manner, and the results are more likely to be valid and reliable.

Purpose of Research Design

The purpose of research design is to plan and structure a research study in a way that enables the researcher to achieve the desired research goals with accuracy, validity, and reliability. Research design is the blueprint or the framework for conducting a study that outlines the methods, procedures, techniques, and tools for data collection and analysis.

Some of the key purposes of research design include:

  • Providing a clear and concise plan of action for the research study.
  • Ensuring that the research is conducted ethically and with rigor.
  • Maximizing the accuracy and reliability of the research findings.
  • Minimizing the possibility of errors, biases, or confounding variables.
  • Ensuring that the research is feasible, practical, and cost-effective.
  • Determining the appropriate research methodology to answer the research question(s).
  • Identifying the sample size, sampling method, and data collection techniques.
  • Determining the data analysis method and statistical tests to be used.
  • Facilitating the replication of the study by other researchers.
  • Enhancing the validity and generalizability of the research findings.

Applications of Research Design

There are numerous applications of research design in various fields, some of which are:

  • Social sciences: In fields such as psychology, sociology, and anthropology, research design is used to investigate human behavior and social phenomena. Researchers use various research designs, such as experimental, quasi-experimental, and correlational designs, to study different aspects of social behavior.
  • Education : Research design is essential in the field of education to investigate the effectiveness of different teaching methods and learning strategies. Researchers use various designs such as experimental, quasi-experimental, and case study designs to understand how students learn and how to improve teaching practices.
  • Health sciences : In the health sciences, research design is used to investigate the causes, prevention, and treatment of diseases. Researchers use various designs, such as randomized controlled trials, cohort studies, and case-control studies, to study different aspects of health and healthcare.
  • Business : Research design is used in the field of business to investigate consumer behavior, marketing strategies, and the impact of different business practices. Researchers use various designs, such as survey research, experimental research, and case studies, to study different aspects of the business world.
  • Engineering : In the field of engineering, research design is used to investigate the development and implementation of new technologies. Researchers use various designs, such as experimental research and case studies, to study the effectiveness of new technologies and to identify areas for improvement.

Advantages of Research Design

Here are some advantages of research design:

  • Systematic and organized approach : A well-designed research plan ensures that the research is conducted in a systematic and organized manner, which makes it easier to manage and analyze the data.
  • Clear objectives: The research design helps to clarify the objectives of the study, which makes it easier to identify the variables that need to be measured, and the methods that need to be used to collect and analyze data.
  • Minimizes bias: A well-designed research plan minimizes the chances of bias, by ensuring that the data is collected and analyzed objectively, and that the results are not influenced by the researcher’s personal biases or preferences.
  • Efficient use of resources: A well-designed research plan helps to ensure that the resources (time, money, and personnel) are used efficiently and effectively, by focusing on the most important variables and methods.
  • Replicability: A well-designed research plan makes it easier for other researchers to replicate the study, which enhances the credibility and reliability of the findings.
  • Validity: A well-designed research plan helps to ensure that the findings are valid, by ensuring that the methods used to collect and analyze data are appropriate for the research question.
  • Generalizability : A well-designed research plan helps to ensure that the findings can be generalized to other populations, settings, or situations, which increases the external validity of the study.

Research Design vs. Research Methodology

Research design:

  • The plan and structure for conducting research that outlines the procedures to be followed to collect and analyze data.
  • Describes the overall approach and strategy used to conduct research, including the type of data to be collected, the sources of data, and the methods for collecting and analyzing data.
  • Helps to ensure that the research is conducted in a systematic, rigorous, and valid way, so that the results are reliable and can be used to make sound conclusions.
  • Common research designs include experimental, quasi-experimental, correlational, and descriptive studies.
  • Determines the overall structure of the research project and sets the stage for the selection of appropriate research methodologies.
  • Helps to ensure that the research project is feasible, relevant, and ethical.

Research methodology:

  • The set of principles, techniques, and tools used to carry out the research plan and achieve research objectives.
  • Refers to the techniques and methods used to gather, analyze, and interpret data, including sampling techniques, data collection methods, and data analysis techniques.
  • Includes a set of procedures and tools that enable researchers to collect and analyze data in a consistent and valid manner, regardless of the research design used.
  • Common research methodologies include qualitative, quantitative, and mixed-methods approaches.
  • Guides the researcher in selecting the most appropriate research methods based on the research question, research design, and other contextual factors.
  • Helps to ensure that the data collected is accurate, valid, and reliable, and that the research findings can be interpreted and generalized to the population of interest.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer


The Ohio State University

Basic Research Design

What is Research Design?

  • Definition of Research Design : A procedure for generating answers to questions, crucial in determining the reliability and relevance of research outcomes.
  • Importance of Strong Designs : Strong designs lead to answers that are accurate and close to their targets, while weak designs may result in misleading or irrelevant outcomes.
  • Criteria for Assessing Design Strength : Evaluating a design’s strength involves understanding the research question and how the design will yield reliable empirical information.

The Four Elements of Research Design (Blair et al., 2023)

(Figure: the MIDA framework, showing the relations among Model, Inquiry, Data strategy, and Answer strategy)

  • The MIDA Framework : Research designs consist of four interconnected elements – Model (M), Inquiry (I), Data strategy (D), and Answer strategy (A), collectively referred to as MIDA.
  • Theoretical Side (M and I): This encompasses the researcher’s beliefs about the world (Model) and the target of inference or the primary question to be answered (Inquiry).
  • Empirical Side (D and A): This includes the strategies for collecting (Data strategy) and analyzing or summarizing information (Answer strategy).
  • Interplay between Theoretical and Empirical Sides : The theoretical side sets the research challenges, while the empirical side represents the researcher’s responses to these challenges.
  • Relation among MIDA Components: The diagram above shows how the four elements of a design are interconnected and how they relate to both real-world and simulated quantities.
  • Parallelism in Design Representation: The illustration highlights two key parallelisms in research design – between actual and simulated processes, and between the theoretical (M, I) and empirical (D, A) sides.
  • Importance of Simulated Processes: The parallelism between actual and simulated processes is crucial for understanding and evaluating research designs.
  • Balancing Theoretical and Empirical Aspects : Effective research design requires a balance between theoretical considerations (models and inquiries) and empirical methodologies (data and answer strategies).
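The declaration, diagnosis, and redesign logic can be made concrete with a small simulation. The sketch below is a minimal, hypothetical Python illustration (it is not the authors' DeclareDesign software, which is an R package): it declares a simple model and data strategy, applies an answer strategy, then diagnoses the design by estimating the bias of the estimator over many simulated runs. All function names and parameter values here are invented for illustration.

```python
import random
import statistics

def model_and_data(n=100, true_effect=0.5):
    """M + D: simulate a world where treatment shifts the outcome by
    true_effect, with treatment assigned at random (the data strategy)."""
    treat = [random.random() < 0.5 for _ in range(n)]
    outcome = [random.gauss(0, 1) + (true_effect if t else 0.0) for t in treat]
    return treat, outcome

def answer_strategy(treat, outcome):
    """A: difference-in-means estimator for the inquiry I (the average effect)."""
    treated = [y for t, y in zip(treat, outcome) if t]
    control = [y for t, y in zip(treat, outcome) if not t]
    return statistics.mean(treated) - statistics.mean(control)

def diagnose(true_effect=0.5, sims=2000):
    """Diagnosis: run the declared design many times and estimate
    the bias of the answer strategy for the inquiry."""
    estimates = [answer_strategy(*model_and_data(true_effect=true_effect))
                 for _ in range(sims)]
    return statistics.mean(estimates) - true_effect

random.seed(1)
print(f"estimated bias: {diagnose():.3f}")  # near zero: the design recovers the effect
```

Because the diagnosis happens entirely in simulation, it can be run, criticised, and redesigned before any real data are collected, which is exactly the early-planning benefit the principles above describe.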

Research Design Principles (Blair et al., 2023)

  • Integration of Components: Designs are effective not merely due to their individual components but how these components work together.
  • Focus on Entire Design: Assessing a design requires examining how each part, such as the question, estimator, and sampling method, fits into the overall design.
  • Importance of Diagnosis: The evaluation of a design’s strength lies in diagnosing the whole design, not just its parts.
  • Strong Design Characteristics: Designs with parallel theoretical and empirical aspects tend to be stronger.
  • The M:I:D:A Analogy: Effective designs often align data strategies with models and answer strategies with inquiries.
  • Flexibility in Models: Good designs should perform well even under varying world scenarios, not just under expected conditions.
  • Broadening Model Scope: Designers should consider a wide range of models, assessing the design’s effectiveness across these.
  • Robustness of Inquiries and Strategies: Inquiries should yield answers and strategies should be applicable regardless of variations in real-world events.
  • Diagnosis Across Models: It’s important to understand for which models a design excels and for which it falters.
  • Specificity of Purpose: A design is deemed good when it aligns with a specific purpose or goal.
  • Balancing Multiple Criteria: Designs should balance scientific precision, logistical constraints, policy goals, and ethical considerations.
  • Diverse Goals and Assessments: Different designs may be optimal for different goals; the purpose dictates the design evaluation.
  • Early Planning Benefits: Designing early allows for learning and improving design properties before data collection.
  • Avoiding Post-Hoc Regrets: Early design helps avoid regrets related to data collection or question formulation.
  • Iterative Improvement: The process of declaration, diagnosis, and redesign improves designs, ideally done before data collection.
  • Adaptability to Changes: Designs should be flexible to adapt to unforeseen circumstances or new information.
  • Expanding or Contracting Feasibility: The scope of feasible designs may change due to various practical factors.
  • Continual Redesign: The principle advocates for ongoing design modification, even post research completion, for robustness and response to criticism.
  • Improvement Through Sharing: Sharing designs via a formalized declaration makes it easier for others to understand and critique.
  • Enhancing Scientific Communication: Well-documented designs facilitate better communication and justification of research decisions.
  • Building a Design Library: The idea is to contribute designs to a shared library, allowing others to learn from and build upon existing work.

The Basics of Social Science Research Designs (Panke, 2018)

Deductive and Inductive Research


Inductive research:

  • Starting Point: Begins with empirical observations or exploratory studies.
  • Development of Hypotheses: Hypotheses are formulated after initial empirical analysis.
  • Case Study Analysis: Involves conducting explorative case studies and analyzing dynamics at play.
  • Generalization of Findings: Insights are then generalized across multiple cases to verify their applicability.
  • Application: Suitable for novel phenomena or where existing theories are not easily applicable.
  • Example Cases: Exploring new events like Donald Trump’s 2016 nomination or Russia’s annexation of Crimea in 2014.

Deductive research:

  • Theory-Based: Starts with existing theories to develop scientific answers to research questions.
  • Hypothesis Development: Hypotheses are specified and then empirically examined.
  • Empirical Examination: Involves a thorough empirical analysis of hypotheses using sound methods.
  • Theory Refinement: Results can refine existing theories or contribute to new theoretical insights.
  • Application: Preferred when existing theories relate to the research question.
  • Example Projects: Usually explanatory projects asking ‘why’ questions to uncover relationships.

Explanatory and Interpretative Research Designs


  • Definition: Explanatory research aims to explain the relationships between variables, often addressing ‘why’ questions. It is primarily concerned with identifying cause-and-effect dynamics and is typically quantitative in nature. The goal is to test hypotheses derived from theories and to establish patterns that can predict future occurrences.
  • Definition: Interpretative research focuses on understanding the deeper meaning or underlying context of social phenomena. It often addresses ‘how is this possible’ questions, seeking to comprehend how certain outcomes or behaviors are produced within specific contexts. This type of research is usually qualitative and prioritizes individual experiences and perceptions.
  • Explanatory Research: Poses ‘why’ questions to explore causal relationships and understand what factors influence certain outcomes.
  • Interpretative Research: Asks ‘how is this possible’ questions to delve into the processes and meanings behind social phenomena.
  • Explanatory Research: Relies on established theories to form hypotheses about causal relationships between variables. These theories are then tested through empirical research.
  • Interpretative Research: Uses theories to provide a framework for understanding the social context and meanings. The focus is on constitutive relationships rather than causal ones.
  • Explanatory Research: Often involves studying multiple cases to allow for comparison and generalization. It seeks patterns across different scenarios.
  • Interpretative Research: Typically concentrates on single case studies, providing an in-depth understanding of that particular case without necessarily aiming for generalization.
  • Explanatory Research: Aims to produce findings that can be generalized to other similar cases or populations. It seeks universal or broad patterns.
  • Interpretative Research: Offers detailed insights specific to a single case or context. These findings are not necessarily intended to be generalized but to provide a deep understanding of the particular case.

Qualitative, Quantitative, and Mixed-method Projects

  • Definition: Qualitative research is exploratory and aims to understand human behavior, beliefs, feelings, and experiences. It involves collecting non-numerical data, often through interviews, focus groups, or textual analysis. This method is ideal for gaining in-depth insights into specific phenomena.
  • Example in Education: A qualitative study might involve conducting in-depth interviews with teachers to explore their experiences and challenges with remote teaching during the pandemic. This research would aim to understand the nuances of their experiences, challenges, and adaptations in a detailed and descriptive manner.
  • Definition: Quantitative research seeks to quantify data and generalize results from a sample to the population of interest. It involves measurable, numerical data and often uses statistical methods for analysis. This approach is suitable for testing hypotheses or examining relationships between variables.
  • Example in Education: A quantitative study could involve surveying a large number of students to determine the correlation between the amount of time spent on homework and their academic achievement. This would involve collecting numerical data (hours of homework, grades) and applying statistical analysis to examine relationships or differences.
  • Definition: Mixed-method research combines both qualitative and quantitative approaches, providing a more comprehensive understanding of the research problem. It allows for the exploration of complex research questions by integrating numerical data analysis with detailed narrative data.
  • Example in Education: A mixed-method study might investigate the impact of a new teaching method. The research could start with quantitative methods, like administering standardized tests to measure learning outcomes, followed by qualitative methods, such as conducting focus groups with students and teachers to understand their perceptions and experiences with the new teaching method. This combination provides both statistical results and in-depth understanding.
  • Research Questions: What kind of information is needed to answer the questions? Qualitative for “how” and “why”, quantitative for “how many” or “how much”, and mixed methods for a comprehensive understanding of both the breadth and depth of a phenomenon.
  • Nature of the Study: Is the study aiming to explore a new area (qualitative), confirm hypotheses (quantitative), or achieve both (mixed-method)?
  • Resources Available: Time, funding, and expertise available can influence the choice. Qualitative research can be more time-consuming, while quantitative research may require specific statistical skills.
  • Data Sources: Availability and type of data also guide the methodology. Existing numerical data might lean towards quantitative, while studies requiring personal experiences or opinions might be qualitative.

References:

Blair, G., Coppock, A., & Humphreys, M. (2023).  Research Design in the Social Sciences: Declaration, Diagnosis, and Redesign . Princeton University Press.

Panke, D. (2018). Research design & method selection: Making good choices in the social sciences.  Research Design & Method Selection , 1-368.


Research Design 101

Everything You Need To Get Started (With Examples)

By: Derek Jansen (MBA) | Reviewers: Eunice Rautenbach (DTech) & Kerryn Warren (PhD) | April 2023

Research design for qualitative and quantitative studies

Navigating the world of research can be daunting, especially if you’re a first-time researcher. One concept you’re bound to run into fairly early in your research journey is that of “ research design ”. Here, we’ll guide you through the basics using practical examples , so that you can approach your research with confidence.

Overview: Research Design 101

  • What is research design?

  • Research design types for quantitative studies
  • Video explainer : quantitative research design
  • Research design types for qualitative studies
  • Video explainer : qualitative research design
  • How to choose a research design
  • Key takeaways

Research design refers to the overall plan, structure or strategy that guides a research project , from its conception to the final data analysis. A good research design serves as the blueprint for how you, as the researcher, will collect and analyse data while ensuring consistency, reliability and validity throughout your study.

Understanding different types of research designs is essential, as it helps ensure that your approach is suitable given your research aims, objectives and questions, as well as the resources you have available to you. Without a clear big-picture view of how you’ll design your research, you run the risk of making misaligned choices in terms of your methodology – especially your sampling , data collection and data analysis decisions.

The problem with defining research design…

One of the reasons students struggle with a clear definition of research design is because the term is used very loosely across the internet, and even within academia.

Some sources claim that the three research design types are qualitative, quantitative and mixed methods , which isn’t quite accurate (these just refer to the type of data that you’ll collect and analyse). Other sources state that research design refers to the sum of all your design choices, suggesting it’s more like a research methodology . Others run off on other less common tangents. No wonder there’s confusion!

In this article, we’ll clear up the confusion. We’ll explain the most common research design types for both qualitative and quantitative research projects, whether that is for a full dissertation or thesis, or a smaller research paper or article.

Free Webinar: Research Methodology 101

Research Design: Quantitative Studies

Quantitative research involves collecting and analysing data in a numerical form. Broadly speaking, there are four types of quantitative research designs: descriptive , correlational , experimental , and quasi-experimental . 

Descriptive Research Design

As the name suggests, descriptive research design focuses on describing existing conditions, behaviours, or characteristics by systematically gathering information without manipulating any variables. In other words, there is no intervention on the researcher’s part – only data collection.

For example, if you’re studying smartphone addiction among adolescents in your community, you could deploy a survey to a sample of teens asking them to rate their agreement with certain statements that relate to smartphone addiction. The collected data would then provide insight regarding how widespread the issue may be – in other words, it would describe the situation.

The key defining attribute of this type of research design is that it purely describes the situation . In other words, descriptive research design does not explore potential relationships between different variables or the causes that may underlie those relationships. Therefore, descriptive research is useful for generating insight into a research problem by describing its characteristics . By doing so, it can provide valuable insights and is often used as a precursor to other research design types.
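To make "purely describing the situation" concrete, here is a minimal Python sketch using invented Likert-scale responses. It does exactly what a descriptive design does: tally frequencies and summarise, with no inference about causes or relationships.

```python
from collections import Counter

# Hypothetical Likert responses (1 = strongly disagree ... 5 = strongly agree)
# to the statement "I feel anxious when I'm away from my phone."
responses = [5, 4, 4, 2, 5, 3, 4, 5, 1, 4, 5, 3, 4, 4, 2]

counts = Counter(responses)
n = len(responses)

# Describe the sample: frequency and share of each rating, plus the mean
for rating in sorted(counts):
    share = counts[rating] / n
    print(f"rating {rating}: {counts[rating]:2d} responses ({share:.0%})")
print(f"mean rating: {sum(responses) / n:.2f}")
```

Note that nothing here explains *why* the ratings look the way they do; the output simply characterises the sample, which is the defining limit of a descriptive design.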

Correlational Research Design

Correlational design is a popular choice for researchers aiming to identify and measure the relationship between two or more variables without manipulating them . In other words, this type of research design is useful when you want to know whether a change in one thing tends to be accompanied by a change in another thing.

For example, if you wanted to explore the relationship between exercise frequency and overall health, you could use a correlational design to help you achieve this. In this case, you might gather data on participants’ exercise habits, as well as records of their health indicators like blood pressure, heart rate, or body mass index. Thereafter, you’d use a statistical test to assess whether there’s a relationship between the two variables (exercise frequency and health).
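As a concrete illustration of that analysis step, the following Python sketch computes Pearson's r for a small set of hypothetical records (the numbers are invented, and a real study would also test statistical significance):

```python
import math
import statistics

# Hypothetical participant records: weekly exercise sessions vs. resting heart rate (bpm)
exercise   = [0, 1, 2, 3, 3, 4, 5, 6]
heart_rate = [78, 76, 74, 71, 72, 68, 66, 63]

def pearson_r(xs, ys):
    """Pearson correlation coefficient for two equal-length samples."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(exercise, heart_rate)
print(f"r = {r:.2f}")  # strongly negative: more exercise accompanies lower resting heart rate
```

Even a near-perfect r like this only shows that the two variables move together; it says nothing about which (if either) causes the other, which is exactly the limitation discussed next.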

As you can see, correlational research design is useful when you want to explore potential relationships between variables that cannot be manipulated or controlled for ethical, practical, or logistical reasons. It is particularly helpful in terms of developing predictions, and given that it doesn’t involve the manipulation of variables, it can be implemented at a large scale more easily than experimental designs (which we’ll look at next).

That said, it’s important to keep in mind that correlational research design has limitations – most notably that it cannot be used to establish causality . In other words, correlation does not equal causation . To establish causality, you’ll need to move into the realm of experimental design, coming up next…


Experimental Research Design

Experimental research design is used to determine if there is a causal relationship between two or more variables . With this type of research design, you, as the researcher, manipulate one variable (the independent variable) and measure its effect on another (the dependent variable), while controlling for any other variables that could influence the outcome. Doing so allows you to observe the effect of the former on the latter and draw conclusions about potential causality.

For example, if you wanted to measure if/how different types of fertiliser affect plant growth, you could set up several groups of plants, with each group receiving a different type of fertiliser, as well as one with no fertiliser at all. You could then measure how much each plant group grew (on average) over time and compare the results from the different groups to see which fertiliser was most effective.
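The comparison step in that fertiliser example can be sketched in a few lines of Python. The growth figures below are invented, and a real analysis would follow the descriptive comparison with an inferential test (such as ANOVA) before claiming an effect.

```python
import statistics

# Hypothetical plant growth (cm) after six weeks, one list per group
growth = {
    "none":         [4.1, 3.8, 4.5, 4.0],
    "fertiliser_a": [6.2, 5.9, 6.5, 6.1],
    "fertiliser_b": [5.1, 5.4, 4.9, 5.3],
}

# Compare average growth across the treatment groups
for group, values in growth.items():
    print(f"{group}: mean growth = {statistics.mean(values):.2f} cm")

best = max(growth, key=lambda g: statistics.mean(growth[g]))
print("most effective:", best)
```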

Overall, experimental research design provides researchers with a powerful way to identify and measure causal relationships (and the direction of causality) between variables. However, developing a rigorous experimental design can be challenging as it’s not always easy to control all the variables in a study. This often results in smaller sample sizes , which can reduce the statistical power and generalisability of the results.

Moreover, experimental research design requires random assignment . This means that the researcher needs to assign participants to different groups or conditions in a way that each participant has an equal chance of being assigned to any group (note that this is not the same as random sampling ). Doing so helps reduce the potential for bias and confounding variables . This need for random assignment can lead to ethics-related issues . For example, withholding a potentially beneficial medical treatment from a control group may be considered unethical in certain situations.
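Random assignment itself is mechanically simple. The sketch below shows one common way to do it in Python: shuffle a (hypothetical) participant list and split it in half, so every participant has an equal chance of landing in either group.

```python
import random

# 20 hypothetical participant IDs
participants = [f"P{i:02d}" for i in range(1, 21)]

random.seed(42)               # fixed seed so the assignment is reproducible
random.shuffle(participants)  # every ordering is equally likely

half = len(participants) // 2
treatment, control = participants[:half], participants[half:]

print("treatment:", treatment)
print("control:  ", control)
```

In practice a pre-registered seed or an independent randomisation service is often used so the assignment can be audited; the key property is simply that group membership is decided by chance, not by the researcher or the participants.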

Quasi-Experimental Research Design

Quasi-experimental research design is used when the research aims involve identifying causal relations , but one cannot (or doesn’t want to) randomly assign participants to different groups (for practical or ethical reasons). Instead, with a quasi-experimental research design, the researcher relies on existing groups or pre-existing conditions to form groups for comparison.

For example, if you were studying the effects of a new teaching method on student achievement in a particular school district, you may be unable to randomly assign students to either group and instead have to choose classes or schools that already use different teaching methods. This way, you still achieve separate groups, without having to assign participants to specific groups yourself.

Naturally, quasi-experimental research designs have limitations when compared to experimental designs. Given that participant assignment is not random, it’s more difficult to confidently establish causality between variables, and, as a researcher, you have less control over other variables that may impact findings.

All that said, quasi-experimental designs can still be valuable in research contexts where random assignment is not possible and can often be undertaken on a much larger scale than experimental research, thus increasing the statistical power of the results. What’s important is that you, as the researcher, understand the limitations of the design and conduct your quasi-experiment as rigorously as possible, paying careful attention to any potential confounding variables .

The four most common quantitative research design types are descriptive, correlational, experimental and quasi-experimental.

Research Design: Qualitative Studies

There are many different research design types when it comes to qualitative studies, but here we’ll narrow our focus to explore the “Big 4”. Specifically, we’ll look at phenomenological design, grounded theory design, ethnographic design, and case study design.

Phenomenological Research Design

Phenomenological design involves exploring the meaning of lived experiences and how they are perceived by individuals. This type of research design seeks to understand people’s perspectives , emotions, and behaviours in specific situations. Here, the aim for researchers is to uncover the essence of human experience without making any assumptions or imposing preconceived ideas on their subjects.

For example, you could adopt a phenomenological design to study why cancer survivors have such varied perceptions of their lives after overcoming their disease. This could be achieved by interviewing survivors and then analysing the data using a qualitative analysis method such as thematic analysis to identify commonalities and differences.

Phenomenological research design typically involves in-depth interviews or open-ended questionnaires to collect rich, detailed data about participants’ subjective experiences. This richness is one of the key strengths of phenomenological research design but, naturally, it also has limitations. These include potential biases in data collection and interpretation and the lack of generalisability of findings to broader populations.

Grounded Theory Research Design

Grounded theory (also referred to as “GT”) aims to develop theories by continuously and iteratively analysing and comparing data collected from a relatively large number of participants in a study. It takes an inductive (bottom-up) approach, with a focus on letting the data “speak for itself”, without being influenced by preexisting theories or the researcher’s preconceptions.

As an example, let’s assume your research aims involved understanding how people cope with chronic pain from a specific medical condition, with a view to developing a theory around this. In this case, grounded theory design would allow you to explore this concept thoroughly without preconceptions about what coping mechanisms might exist. You may find that some patients prefer cognitive-behavioural therapy (CBT) while others prefer to rely on herbal remedies. Based on multiple, iterative rounds of analysis, you could then develop a theory in this regard, derived directly from the data (as opposed to other preexisting theories and models).

Grounded theory typically involves collecting data through interviews or observations and then analysing it to identify patterns and themes that emerge from the data. These emerging ideas are then validated by collecting more data until a saturation point is reached (i.e., no new information can be squeezed from the data). From that base, a theory can then be developed .
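The saturation logic can be illustrated with a toy Python sketch: each "interview" contributes a set of codes, and data collection stops being informative once a round adds nothing new. All codes and data here are invented for illustration.

```python
# Toy illustration: each interview yields a set of coping-strategy codes
interviews = [
    {"cbt", "medication"},
    {"medication", "herbal"},
    {"cbt", "exercise"},
    {"herbal", "exercise"},   # nothing new from here on: saturation
    {"cbt", "medication"},
]

codes = set()
saturated_after = None
for i, found in enumerate(interviews, start=1):
    new_codes = found - codes
    codes |= new_codes
    if new_codes:
        saturated_after = None   # still learning: reset the marker
    elif saturated_after is None:
        saturated_after = i      # first interview that added nothing new

print("codes found:", sorted(codes))
print("saturation reached at interview:", saturated_after)
```

Real grounded-theory coding is interpretive rather than mechanical, so this loop only captures the stopping rule: keep collecting and comparing until additional data stops producing new categories.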

As you can see, grounded theory is ideally suited to studies where the research aims involve theory generation , especially in under-researched areas. Keep in mind though that this type of research design can be quite time-intensive , given the need for multiple rounds of data collection and analysis.


Ethnographic Research Design

Ethnographic design involves observing and studying a culture-sharing group of people in their natural setting to gain insight into their behaviours, beliefs, and values. The focus here is on observing participants in their natural environment (as opposed to a controlled environment). This typically involves the researcher spending an extended period of time with the participants in their environment, carefully observing and taking field notes .

All of this is not to say that ethnographic research design relies purely on observation. On the contrary, this design typically also involves in-depth interviews to explore participants’ views, beliefs, etc. However, unobtrusive observation is a core component of the ethnographic approach.

As an example, an ethnographer may study how different communities celebrate traditional festivals or how individuals from different generations interact with technology differently. This may involve a lengthy period of observation, combined with in-depth interviews to further explore specific areas of interest that emerge as a result of the observations that the researcher has made.

As you can probably imagine, ethnographic research design has the ability to provide rich, contextually embedded insights into the socio-cultural dynamics of human behaviour within a natural, uncontrived setting. Naturally, however, it does come with its own set of challenges, including researcher bias (since the researcher can become quite immersed in the group), participant confidentiality and, predictably, ethical complexities . All of these need to be carefully managed if you choose to adopt this type of research design.

Case Study Design

With case study research design, you, as the researcher, investigate a single individual (or a single group of individuals) to gain an in-depth understanding of their experiences, behaviours or outcomes. Unlike other research designs that are aimed at larger sample sizes, case studies offer a deep dive into the specific circumstances surrounding a person, group of people, event or phenomenon, generally within a bounded setting or context .

As an example, a case study design could be used to explore the factors influencing the success of a specific small business. This would involve diving deeply into the organisation to explore and understand what makes it tick – from marketing to HR to finance. In terms of data collection, this could include interviews with staff and management, review of policy documents and financial statements, surveying customers, etc.

While the above example is focused squarely on one organisation, it’s worth noting that case study research designs can have different variations, including single-case, multiple-case and longitudinal designs. As you can see in the example, a single-case design involves intensely examining a single entity to understand its unique characteristics and complexities. Conversely, in a multiple-case design , multiple cases are compared and contrasted to identify patterns and commonalities. Lastly, in a longitudinal case design , a single case or multiple cases are studied over an extended period of time to understand how factors develop over time.

As you can see, a case study research design is particularly useful where a deep and contextualised understanding of a specific phenomenon or issue is desired. However, this strength is also its weakness. In other words, you can’t generalise the findings from a case study to the broader population. So, keep this in mind if you’re considering going the case study route.

Case study design often involves investigating an individual to gain an in-depth understanding of their experiences, behaviours or outcomes.

How To Choose A Research Design

Having worked through all of these potential research designs, you’d be forgiven for feeling a little overwhelmed and wondering, “ But how do I decide which research design to use? ”. While we could write an entire post covering that alone, here are a few factors to consider that will help you choose a suitable research design for your study.

Data type: The first determining factor is naturally the type of data you plan to be collecting – i.e., qualitative or quantitative. This may sound obvious, but we have to be clear about this – don’t try to use a quantitative research design on qualitative data (or vice versa)!

Research aim(s) and question(s): As with all methodological decisions, your research aim and research questions will heavily influence your research design. For example, if your research aims involve developing a theory from qualitative data, grounded theory would be a strong option. Similarly, if your research aims involve identifying and measuring relationships between variables, one of the experimental designs would likely be a better option.

Time: It’s essential that you consider any time constraints you have, as this will impact the type of research design you can choose. For example, if you’ve only got a month to complete your project, a lengthy design such as ethnography wouldn’t be a good fit.

Resources: Take into account the resources realistically available to you, as these need to factor into your research design choice. For example, if you require highly specialised lab equipment to execute an experimental design, you need to be sure that you’ll have access to that before you make a decision.

Keep in mind that when it comes to research, it’s important to manage your risks and play as conservatively as possible. If your entire project relies on you achieving a huge sample, having access to niche equipment or holding interviews with very difficult-to-reach participants, you’re creating risks that could kill your project. So, be sure to think through your choices carefully and make sure that you have backup plans for any existential risks. Remember that a relatively simple methodology executed well will typically earn better marks than a highly complex methodology executed poorly.


Recap: Key Takeaways

We’ve covered a lot of ground here. Let’s recap by looking at the key takeaways:

  • Research design refers to the overall plan, structure or strategy that guides a research project, from its conception to the final analysis of data.
  • Research designs for quantitative studies include descriptive, correlational, experimental and quasi-experimental designs.
  • Research designs for qualitative studies include phenomenological, grounded theory, ethnographic and case study designs.
  • When choosing a research design, you need to consider a variety of factors, including the type of data you’ll be working with, your research aims and questions, your time and the resources available to you.

If you need a helping hand with your research design (or any other aspect of your research), check out our private coaching services.




Research Design | Step-by-Step Guide with Examples

Published on 5 May 2022 by Shona McCombes . Revised on 20 March 2023.

A research design is a strategy for answering your research question  using empirical data. Creating a research design means making decisions about:

  • Your overall aims and approach
  • The type of research design you’ll use
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods
  • The procedures you’ll follow to collect data
  • Your data analysis methods

A well-planned research design helps ensure that your methods match your research aims and that you use the right kind of analysis for your data.

Table of contents

  • Step 1: Consider your aims and approach
  • Step 2: Choose a type of research design
  • Step 3: Identify your population and sampling method
  • Step 4: Choose your data collection methods
  • Step 5: Plan your data collection procedures
  • Step 6: Decide on your data analysis strategies
  • Frequently asked questions

  • Introduction

Before you can start designing your research, you should already have a clear idea of the research question you want to investigate.

There are many different ways you could go about answering this question. Your research design choices should be driven by your aims and priorities – start by thinking carefully about what you want to achieve.

The first choice you need to make is whether you’ll take a qualitative or quantitative approach.

Qualitative approach Quantitative approach

Qualitative research designs tend to be more flexible and inductive , allowing you to adjust your approach based on what you find throughout the research process.

Quantitative research designs tend to be more fixed and deductive , with variables and hypotheses clearly defined in advance of data collection.

It’s also possible to use a mixed methods design that integrates aspects of both approaches. By combining qualitative and quantitative insights, you can gain a more complete picture of the problem you’re studying and strengthen the credibility of your conclusions.

Practical and ethical considerations when designing research

As well as scientific considerations, you need to think practically when designing your research. If your research involves people or animals, you also need to consider research ethics .

  • How much time do you have to collect data and write up the research?
  • Will you be able to gain access to the data you need (e.g., by travelling to a specific location or contacting specific people)?
  • Do you have the necessary research skills (e.g., statistical analysis or interview techniques)?
  • Will you need ethical approval ?

At each stage of the research design process, make sure that your choices are practically feasible.


Within both qualitative and quantitative approaches, there are several types of research design to choose from. Each type provides a framework for the overall shape of your research.

Types of quantitative research designs

Quantitative designs can be split into four main types. Experimental and   quasi-experimental designs allow you to test cause-and-effect relationships, while descriptive and correlational designs allow you to measure variables and describe relationships between them.

Type of design Purpose and characteristics
Experimental
Quasi-experimental
Correlational
Descriptive

With descriptive and correlational designs, you can get a clear picture of characteristics, trends, and relationships as they exist in the real world. However, you can’t draw conclusions about cause and effect (because correlation doesn’t imply causation ).

Experiments are the strongest way to test cause-and-effect relationships without the risk of other variables influencing the results. However, their controlled conditions may not always reflect how things work in the real world. They’re often also more difficult and expensive to implement.

Types of qualitative research designs

Qualitative designs are less strictly defined. This approach is about gaining a rich, detailed understanding of a specific context or phenomenon, and you can often be more creative and flexible in designing your research.

The table below shows some common types of qualitative design. They often have similar approaches in terms of data collection, but focus on different aspects when analysing the data.

Type of design Purpose and characteristics
Grounded theory
Phenomenology

Your research design should clearly define who or what your research will focus on, and how you’ll go about choosing your participants or subjects.

In research, a population is the entire group that you want to draw conclusions about, while a sample is the smaller group of individuals you’ll actually collect data from.

Defining the population

A population can be made up of anything you want to study – plants, animals, organisations, texts, countries, etc. In the social sciences, it most often refers to a group of people.

For example, will you focus on people from a specific demographic, region, or background? Are you interested in people with a certain job or medical condition, or users of a particular product?

The more precisely you define your population, the easier it will be to gather a representative sample.

Sampling methods

Even with a narrowly defined population, it’s rarely possible to collect data from every individual. Instead, you’ll collect data from a sample.

To select a sample, there are two main approaches: probability sampling and non-probability sampling . The sampling method you use affects how confidently you can generalise your results to the population as a whole.

Probability sampling Non-probability sampling

Probability sampling is the most statistically valid option, but it’s often difficult to achieve unless you’re dealing with a very small and accessible population.

For practical reasons, many studies use non-probability sampling, but it’s important to be aware of the limitations and carefully consider potential biases. You should always make an effort to gather a sample that’s as representative as possible of the population.
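To make the distinction concrete, here is a minimal sketch of simple random sampling, the most basic probability method, in Python. The population of student IDs and the sample size are purely hypothetical:

```python
import random

# Hypothetical sampling frame: 500 student IDs (illustrative only)
population = [f"student_{i}" for i in range(500)]

random.seed(42)  # fix the seed so the draw is reproducible

# Simple random sampling: every member of the frame has an
# equal chance of selection, drawn without replacement
sample = random.sample(population, k=50)

print(len(sample))             # 50 participants drawn
print(len(set(sample)) == 50)  # no duplicates: sampled without replacement
```

The equal-probability draw is what lets you generalise from the sample to the sampling frame; with a convenience or snowball sample, no such guarantee exists.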

Case selection in qualitative research

In some types of qualitative designs, sampling may not be relevant.

For example, in an ethnography or a case study, your aim is to deeply understand a specific context, not to generalise to a population. Instead of sampling, you may simply aim to collect as much data as possible about the context you are studying.

In these types of design, you still have to carefully consider your choice of case or community. You should have a clear rationale for why this particular case is suitable for answering your research question.

For example, you might choose a case study that reveals an unusual or neglected aspect of your research problem, or you might choose several very similar or very different cases in order to compare them.

Data collection methods are ways of directly measuring variables and gathering information. They allow you to gain first-hand knowledge and original insights into your research problem.

You can choose just one data collection method, or use several methods in the same study.

Survey methods

Surveys allow you to collect data about opinions, behaviours, experiences, and characteristics by asking people directly. There are two main survey methods to choose from: questionnaires and interviews.

Questionnaires Interviews

Observation methods

Observations allow you to collect data unobtrusively, observing characteristics, behaviours, or social interactions without relying on self-reporting.

Observations may be conducted in real time, taking notes as you observe, or you might make audiovisual recordings for later analysis. They can be qualitative or quantitative.

Quantitative observation

Other methods of data collection

There are many other ways you might collect data depending on your field and topic.

Field Examples of data collection methods
Media & communication Collecting a sample of texts (e.g., speeches, articles, or social media posts) for data on cultural norms and narratives
Psychology Using technologies like neuroimaging, eye-tracking, or computer-based tasks to collect data on things like attention, emotional response, or reaction time
Education Using tests or assignments to collect data on knowledge and skills
Physical sciences Using scientific instruments to collect data on things like weight, blood pressure, or chemical composition

If you’re not sure which methods will work best for your research design, try reading some papers in your field to see what data collection methods they used.

Secondary data

If you don’t have the time or resources to collect data from the population you’re interested in, you can also choose to use secondary data that other researchers already collected – for example, datasets from government surveys or previous studies on your topic.

With this raw data, you can do your own analysis to answer new research questions that weren’t addressed by the original study.

Using secondary data can expand the scope of your research, as you may be able to access much larger and more varied samples than you could collect yourself.

However, it also means you don’t have any control over which variables to measure or how to measure them, so the conclusions you can draw may be limited.

As well as deciding on your methods, you need to plan exactly how you’ll use these methods to collect data that’s consistent, accurate, and unbiased.

Planning systematic procedures is especially important in quantitative research, where you need to precisely define your variables and ensure your measurements are reliable and valid.

Operationalisation

Some variables, like height or age, are easily measured. But often you’ll be dealing with more abstract concepts, like satisfaction, anxiety, or competence. Operationalisation means turning these fuzzy ideas into measurable indicators.

If you’re using observations , which events or actions will you count?

If you’re using surveys , which questions will you ask and what range of responses will be offered?

You may also choose to use or adapt existing materials designed to measure the concept you’re interested in – for example, questionnaires or inventories whose reliability and validity has already been established.

Reliability and validity

Reliability means your results can be consistently reproduced , while validity means that you’re actually measuring the concept you’re interested in.

Reliability Validity

For valid and reliable results, your measurement materials should be thoroughly researched and carefully designed. Plan your procedures to make sure you carry out the same steps in the same way for each participant.

If you’re developing a new questionnaire or other instrument to measure a specific concept, running a pilot study allows you to check its validity and reliability in advance.

Sampling procedures

As well as choosing an appropriate sampling method, you need a concrete plan for how you’ll actually contact and recruit your selected sample.

That means making decisions about things like:

  • How many participants do you need for an adequate sample size?
  • What inclusion and exclusion criteria will you use to identify eligible participants?
  • How will you contact your sample – by mail, online, by phone, or in person?

If you’re using a probability sampling method, it’s important that everyone who is randomly selected actually participates in the study. How will you ensure a high response rate?

If you’re using a non-probability method, how will you avoid bias and ensure a representative sample?

Data management

It’s also important to create a data management plan for organising and storing your data.

Will you need to transcribe interviews or perform data entry for observations? You should anonymise and safeguard any sensitive data, and make sure it’s backed up regularly.

Keeping your data well organised will save time when it comes to analysing them. It can also help other researchers validate and add to your findings.

On their own, raw data can’t answer your research question. The last step of designing your research is planning how you’ll analyse the data.

Quantitative data analysis

In quantitative research, you’ll most likely use some form of statistical analysis . With statistics, you can summarise your sample data, make estimates, and test hypotheses.

Using descriptive statistics , you can summarise your sample data in terms of:

  • The distribution of the data (e.g., the frequency of each score on a test)
  • The central tendency of the data (e.g., the mean to describe the average score)
  • The variability of the data (e.g., the standard deviation to describe how spread out the scores are)
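As a minimal illustration of these three summaries, the following sketch computes the distribution, mean, and sample standard deviation for a small set of hypothetical test scores using Python’s standard library:

```python
import statistics
from collections import Counter

# Hypothetical test scores from a sample of 10 participants
scores = [55, 60, 60, 65, 70, 70, 75, 80, 80, 85]

# Distribution: frequency of each score
distribution = Counter(scores)

# Central tendency: the mean (average) score
mean = statistics.mean(scores)

# Variability: the sample standard deviation
sd = statistics.stdev(scores)

print(distribution[70])  # 70 occurs twice
print(mean)              # 70
print(sd)                # 10.0
```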

The specific calculations you can do depend on the level of measurement of your variables.

Using inferential statistics , you can:

  • Make estimates about the population based on your sample data.
  • Test hypotheses about a relationship between variables.

Regression and correlation tests look for associations between two or more variables, while comparison tests (such as t tests and ANOVAs ) look for differences in the outcomes of different groups.

Your choice of statistical test depends on various aspects of your research design, including the types of variables you’re dealing with and the distribution of your data.
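As a rough sketch of what a comparison test computes, the snippet below calculates Welch’s t statistic by hand for two small, hypothetical groups. In practice you would use statistical software (e.g., SciPy or R) rather than the raw formula, and you would also compute a p-value from the statistic:

```python
import math
import statistics

# Hypothetical outcome scores for two independent groups
treatment = [5, 6, 7, 8, 9]
control = [1, 2, 3, 4, 5]

m1, m2 = statistics.mean(treatment), statistics.mean(control)
v1, v2 = statistics.variance(treatment), statistics.variance(control)
n1, n2 = len(treatment), len(control)

# Welch's t statistic: the difference between group means,
# standardised by the (unpooled) standard error of that difference
t = (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)
print(t)  # 4.0 for this data
```

A larger absolute t indicates a group difference that is large relative to the variability within the groups.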

Qualitative data analysis

In qualitative research, your data will usually be very dense with information and ideas. Instead of summing it up in numbers, you’ll need to comb through the data in detail, interpret its meanings, identify patterns, and extract the parts that are most relevant to your research question.

Two of the most common approaches to doing this are thematic analysis and discourse analysis .

Approach Characteristics
Thematic analysis
Discourse analysis

There are many other ways of analysing qualitative data depending on the aims of your research. To get a sense of potential approaches, try reading some qualitative research papers in your field.

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research.

For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

Statistical sampling allows you to test a hypothesis about the characteristics of a population. There are various sampling methods you can use to ensure that your sample is representative of the population as a whole.

Operationalisation means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioural avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data , it’s important to consider how you will operationalise the variables that you want to measure.

The research methods you use depend on the type of data you need to answer your research question .

  • If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts, and meanings, use qualitative methods .
  • If you want to analyse a large amount of readily available data, use secondary data. If you want data specific to your purposes with control over how they are generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the ‘Cite this Scribbr article’ button to automatically add the citation to our free Reference Generator.

McCombes, S. (2023, March 20). Research Design | Step-by-Step Guide with Examples. Scribbr. Retrieved 21 August 2024, from https://www.scribbr.co.uk/research-methods/research-design/


5 Research design

Research design is a comprehensive plan for data collection in an empirical research project. It is a ‘blueprint’ for empirical research aimed at answering specific research questions or testing specific hypotheses, and must specify at least three processes: the data collection process, the instrument development process, and the sampling process. The instrument development and sampling processes are described in the next two chapters, and the data collection process—which is often loosely called ‘research design’—is introduced in this chapter and is described in further detail in Chapters 9–12.

Broadly speaking, data collection methods can be grouped into two categories: positivist and interpretive. Positivist methods, such as laboratory experiments and survey research, are aimed at theory (or hypothesis) testing, while interpretive methods, such as action research and ethnography, are aimed at theory building. Positivist methods employ a deductive approach to research, starting with a theory and testing theoretical postulates using empirical data. In contrast, interpretive methods employ an inductive approach that starts with data and tries to derive a theory about the phenomenon of interest from the observed data. These methods are often incorrectly equated with quantitative and qualitative research. Quantitative and qualitative refer to the type of data being collected and analysed: quantitative data involve numeric scores, metrics, and so on, and are analysed using quantitative techniques such as regression, while qualitative data include interviews, observations, and so forth, and are analysed using qualitative techniques such as coding. Positivist research uses predominantly quantitative data, but can also use qualitative data; interpretive research relies heavily on qualitative data, but can sometimes benefit from including quantitative data as well. Sometimes the joint use of qualitative and quantitative data can generate unique insight into a complex social phenomenon that is not available from either type of data alone, and hence mixed-mode designs that combine qualitative and quantitative data are often highly desirable.

Key attributes of a research design

The quality of research designs can be defined in terms of four key design attributes: internal validity, external validity, construct validity, and statistical conclusion validity.

Internal validity , also called causality, examines whether the observed change in a dependent variable is indeed caused by a corresponding change in a hypothesised independent variable, and not by variables extraneous to the research context. Causality requires three conditions: covariation of cause and effect (i.e., if the cause happens, the effect also happens; if the cause does not happen, the effect does not happen), temporal precedence (the cause must precede the effect in time), and no spurious correlation (there is no plausible alternative explanation for the change). Certain research designs, such as laboratory experiments, are strong in internal validity by virtue of their ability to manipulate the independent variable (cause) via a treatment and observe the effect (dependent variable) of that treatment after a certain point in time, while controlling for the effects of extraneous variables. Other designs, such as field surveys, are poor in internal validity because of their inability to manipulate the independent variable (cause), and because cause and effect are measured at the same point in time, which defeats temporal precedence and makes it equally likely that the expected effect influenced the expected cause rather than the reverse. Although higher in internal validity than other methods, laboratory experiments are by no means immune to threats to internal validity: they are susceptible to history, testing, instrumentation, regression, and other threats that are discussed later in the chapter on experimental designs. Nonetheless, different research designs vary considerably in their respective levels of internal validity.

External validity or generalisability refers to whether the observed associations can be generalised from the sample to the population (population validity), or to other people, organisations, contexts, or time (ecological validity). For instance, can results drawn from a sample of financial firms in the United States be generalised to the population of financial firms (population validity) or to other firms within the United States (ecological validity)? Survey research, where data is sourced from a wide variety of individuals, firms, or other units of analysis, tends to have broader generalisability than laboratory experiments where treatments and extraneous variables are more controlled. The variation in internal and external validity for a wide range of research designs is shown in Figure 5.1.

Figure 5.1: Internal and external validity

Some researchers claim that there is a trade-off between internal and external validity: higher external validity can come only at the cost of internal validity, and vice versa. But this is not always the case. Research designs such as field experiments, longitudinal field surveys, and multiple case studies have higher degrees of both internal and external validity. Personally, I prefer research designs that have reasonable degrees of both internal and external validity, i.e., those that fall within the cone of validity shown in Figure 5.1. But this should not suggest that designs outside this cone are any less useful or valuable. Researchers’ choice of design is ultimately a matter of their personal preference and competence, and of the levels of internal and external validity they desire.

Construct validity examines how well a given measurement scale is measuring the theoretical construct that it is expected to measure. Many constructs used in social science research such as empathy, resistance to change, and organisational learning are difficult to define, much less measure. For instance, construct validity must ensure that a measure of empathy is indeed measuring empathy and not compassion, which may be difficult since these constructs are somewhat similar in meaning. Construct validity is assessed in positivist research based on correlational or factor analysis of pilot test data, as described in the next chapter.

Statistical conclusion validity examines the extent to which conclusions derived using a statistical procedure are valid. For example, it examines whether the right statistical method was used for hypotheses testing, whether the variables used meet the assumptions of that statistical test (such as sample size or distributional requirements), and so forth. Because interpretive research designs do not employ statistical tests, statistical conclusion validity is not applicable for such analysis. The different kinds of validity and where they exist at the theoretical/empirical levels are illustrated in Figure 5.2.

Figure 5.2: Different types of validity in scientific research

Improving internal and external validity

The best research designs are those that can ensure high levels of internal and external validity. Such designs guard against spurious correlations, inspire greater faith in hypothesis testing, and ensure that the results drawn from a small sample are generalisable to the population at large. Controls are required to ensure the internal validity (causality) of research designs, and can be accomplished in five ways: manipulation, elimination, inclusion, statistical control, and randomisation.

In manipulation , the researcher manipulates the independent variables in one or more levels (called ‘treatments’), and compares the effects of the treatments against a control group where subjects do not receive the treatment. Treatments may include a new drug or different dosage of drug (for treating a medical condition), a teaching style (for students), and so forth. This type of control is achieved in experimental or quasi-experimental designs, but not in non-experimental designs such as surveys. Note that if subjects cannot distinguish adequately between different levels of treatment manipulations, their responses across treatments may not be different, and manipulation would fail.

The elimination technique relies on eliminating extraneous variables by holding them constant across treatments, such as by restricting the study to a single gender or a single socioeconomic status. In the inclusion technique, the role of extraneous variables is considered by including them in the research design and separately estimating their effects on the dependent variable, such as via factorial designs where one factor is gender (male versus female). This technique allows for greater generalisability, but also requires substantially larger samples. In statistical control, extraneous variables are measured and used as covariates during the statistical testing process.

Finally, the randomisation technique is aimed at cancelling out the effects of extraneous variables through a process of random sampling, if it can be assured that these effects are of a random (non-systematic) nature. Two types of randomisation are: random selection , where a sample is selected randomly from a population, and random assignment , where subjects selected in a non-random manner are randomly assigned to treatment groups.
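A minimal sketch of random assignment, assuming a hypothetical pool of volunteers recruited through convenience sampling (i.e., selected non-randomly, then randomly assigned to groups):

```python
import random

random.seed(7)  # fixed seed so the illustration is reproducible

# Hypothetical pool of 20 volunteers recruited non-randomly
volunteers = [f"participant_{i}" for i in range(20)]

# Random assignment: shuffle the pool, then split it evenly
# into a treatment group and a control group
random.shuffle(volunteers)
treatment_group = volunteers[:10]
control_group = volunteers[10:]

print(len(treatment_group), len(control_group))        # 10 10
print(set(treatment_group).isdisjoint(control_group))  # True: no overlap
```

Because assignment is random, any extraneous participant characteristics should, on average, be distributed evenly across the two groups rather than systematically favouring one of them.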

Randomisation also ensures external validity, allowing inferences drawn from the sample to be generalised to the population from which the sample is drawn. Note that random assignment is mandatory when random selection is not possible because of resource or access constraints. However, generalisability across populations is harder to ascertain since populations may differ on multiple dimensions and you can only control for a few of those dimensions.

Popular research designs

As noted earlier, research designs can be classified into two categories—positivist and interpretive—depending on the goal of the research. Positivist designs are meant for theory testing, while interpretive designs are meant for theory building. Positivist designs seek generalised patterns based on an objective view of reality, while interpretive designs seek subjective interpretations of social phenomena from the perspectives of the subjects involved. Some popular examples of positivist designs include laboratory experiments, field experiments, field surveys, secondary data analysis, and case research, while examples of interpretive designs include case research, phenomenology, and ethnography. Note that case research can be used for theory building or theory testing, though not at the same time. Not all techniques are suited for all kinds of scientific research. Some techniques such as focus groups are best suited for exploratory research, others such as ethnography are best for descriptive research, and still others such as laboratory experiments are ideal for explanatory research. Following are brief descriptions of some of these designs. Additional details are provided in Chapters 9–12.

Experimental studies are those that are intended to test cause-effect relationships (hypotheses) in a tightly controlled setting by separating the cause from the effect in time, administering the cause to one group of subjects (the ‘treatment group’) but not to another group (‘control group’), and observing how the mean effects vary between subjects in these two groups. For instance, if we design a laboratory experiment to test the efficacy of a new drug in treating a certain ailment, we can get a random sample of people afflicted with that ailment, randomly assign them to one of two groups (treatment and control groups), administer the drug to subjects in the treatment group, but only give a placebo (e.g., a sugar pill with no medicinal value) to subjects in the control group. More complex designs may include multiple treatment groups, such as low versus high dosage of the drug or combining drug administration with dietary interventions. In a true experimental design , subjects must be randomly assigned to each group. If random assignment is not followed, then the design becomes quasi-experimental . Experiments can be conducted in an artificial or laboratory setting such as at a university (laboratory experiments) or in field settings such as in an organisation where the phenomenon of interest is actually occurring (field experiments). Laboratory experiments allow the researcher to isolate the variables of interest and control for extraneous variables, which may not be possible in field experiments. Hence, inferences drawn from laboratory experiments tend to be stronger in internal validity, but those from field experiments tend to be stronger in external validity. Experimental data is analysed using quantitative statistical techniques. 
The primary strength of the experimental design is its strong internal validity due to its ability to isolate, control, and intensively examine a small number of variables, while its primary weakness is limited external generalisability since real life is often more complex (i.e., involving more extraneous variables) than contrived lab settings. Furthermore, if the research does not identify ex ante relevant extraneous variables and control for such variables, such lack of controls may hurt internal validity and may lead to spurious correlations.
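The logic of random assignment and between-group comparison can be sketched in a few lines of Python; the subjects, group sizes, and severity scores below are simulated for illustration only, not drawn from any study:

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical pool of 20 subjects afflicted with the ailment.
subjects = list(range(20))

# True experimental design: subjects are randomly assigned to groups.
random.shuffle(subjects)
treatment, control = subjects[:10], subjects[10:]

# Simulated post-treatment symptom severity (lower is better); the drug
# is assumed, for illustration only, to reduce severity by ~2 points.
treated_scores = [random.gauss(3.0, 1.0) for _ in treatment]
control_scores = [random.gauss(5.0, 1.0) for _ in control]

# Compare the mean effects between the two groups.
effect = statistics.mean(control_scores) - statistics.mean(treated_scores)
print(f"estimated treatment effect: {effect:.2f}")
```

Because assignment is random, extraneous subject characteristics are expected to balance out across groups, which is why a difference in group means can be read causally.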

Field surveys are non-experimental designs that do not control for or manipulate independent variables or treatments, but measure these variables and test their effects using statistical methods. Field surveys capture snapshots of practices, beliefs, or situations from a random sample of subjects in field settings through a survey questionnaire or, less frequently, through a structured interview. In cross-sectional field surveys , independent and dependent variables are measured at the same point in time (e.g., using a single questionnaire), while in longitudinal field surveys , dependent variables are measured at a later point in time than the independent variables. The strengths of field surveys are their external validity (since data is collected in field settings), their ability to capture and control for a large number of variables, and their ability to study a problem from multiple perspectives or using multiple theories. However, because of their non-temporal nature, internal validity (cause-effect relationships) is difficult to infer, and surveys may be subject to respondent biases (e.g., subjects may provide a ‘socially desirable’ response rather than their true response), which further hurts internal validity.

Secondary data analysis is an analysis of data that has previously been collected and tabulated by other sources. Such data may include data from government agencies such as employment statistics from the U.S. Bureau of Labor Statistics or development statistics by countries from the United Nations Development Programme, data collected by other researchers (often used in meta-analytic studies), or publicly available third-party data, such as financial data from stock markets or real-time auction data from eBay. This is in contrast to most other research designs where collecting primary data for research is part of the researcher’s job. Secondary data analysis may be an effective means of research where primary data collection is too costly or infeasible, and secondary data is available at a level of analysis suitable for answering the researcher’s questions. The limitations of this design are that the data might not have been collected in a systematic or scientific manner and may hence be unsuitable for scientific research; that, since the data was collected for a presumably different purpose, it may not adequately address the research questions of interest to the researcher; and that internal validity is problematic if the temporal precedence between cause and effect is unclear.
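A toy sketch of this workflow, using a small invented CSV in place of a real tabulated source such as government statistics:

```python
import csv
import io
from statistics import mean

# Hypothetical secondary data: unemployment figures previously
# tabulated by another source, arriving as a CSV rather than
# being collected first-hand by the researcher.
raw = """region,year,unemployment_rate
north,2019,4.1
north,2020,6.8
south,2019,3.7
south,2020,5.9
"""

rows = list(csv.DictReader(io.StringIO(raw)))

# The analyst is limited to the level of analysis the source chose:
# here, region-year aggregates, not individual workers.
by_year = {}
for r in rows:
    by_year.setdefault(r["year"], []).append(float(r["unemployment_rate"]))

for year, rates in sorted(by_year.items()):
    print(year, round(mean(rates), 2))
```

Note how the research question must fit the granularity of the existing table, which is exactly the limitation discussed above.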

Case research is an in-depth investigation of a problem in one or more real-life settings (case sites) over an extended period of time. Data may be collected using a combination of interviews, personal observations, and internal or external documents. Case studies can be positivist in nature (for hypotheses testing) or interpretive (for theory building). The strength of this research method is its ability to discover a wide variety of social, cultural, and political factors potentially related to the phenomenon of interest that may not be known in advance. Analysis tends to be qualitative in nature, but heavily contextualised and nuanced. However, interpretation of findings may depend on the observational and integrative ability of the researcher, lack of control may make it difficult to establish causality, and findings from a single case site may not be readily generalised to other case sites. Generalisability can be improved by replicating and comparing the analysis in other case sites in a multiple case design .

Focus group research is a type of research that involves bringing in a small group of subjects (typically six to ten people) at one location, and having them discuss a phenomenon of interest for a period of one and a half to two hours. The discussion is moderated and led by a trained facilitator, who sets the agenda and poses an initial set of questions for participants, makes sure that the ideas and experiences of all participants are represented, and attempts to build a holistic understanding of the problem situation based on participants’ comments and experiences. Internal validity cannot be established due to lack of controls and the findings may not be generalised to other settings because of the small sample size. Hence, focus groups are not generally used for explanatory or descriptive research, but are more suited for exploratory research.

Action research assumes that complex social phenomena are best understood by introducing interventions or ‘actions’ into those phenomena and observing the effects of those actions. In this method, the researcher is embedded within a social context such as an organisation and initiates an action—such as new organisational procedures or new technologies—in response to a real problem such as declining profitability or operational bottlenecks. The researcher’s choice of actions must be based on theory, which should explain why and how such actions may cause the desired change. The researcher then observes the results of that action, modifying it as necessary, while simultaneously learning from the action and generating theoretical insights about the target problem and interventions. The initial theory is validated by the extent to which the chosen action successfully solves the target problem. Simultaneous problem solving and insight generation is the central feature that distinguishes action research from all other research methods, and hence, action research is an excellent method for bridging research and practice. This method is also suited for studying unique social problems that cannot be replicated outside that context, but it is also subject to researcher bias and subjectivity, and the generalisability of findings is often restricted to the context where the study was conducted.

Ethnography is an interpretive research design inspired by anthropology that emphasises that a research phenomenon must be studied within the context of its culture. The researcher is deeply immersed in a certain culture over an extended period of time—eight months to two years—and during that period, engages, observes, and records the daily life of the studied culture, and theorises about the evolution and behaviours in that culture. Data is collected primarily via observational techniques, formal and informal interaction with participants in that culture, and personal field notes, while data analysis involves ‘sense-making’. The researcher must narrate her experience in great detail so that readers may experience that same culture without necessarily being there. The advantages of this approach are its sensitivity to context, the rich and nuanced understanding it generates, and minimal respondent bias. However, this is also an extremely time- and resource-intensive approach, and findings are specific to a given culture and less generalisable to other cultures.

Selecting research designs

Given the above multitude of research designs, which design should researchers choose for their research? Generally speaking, researchers tend to select those research designs that they are most comfortable with and feel most competent to handle, but ideally, the choice should depend on the nature of the research phenomenon being studied. In the preliminary phases of research, when the research problem is unclear and the researcher wants to scope out the nature and extent of a certain research problem, a focus group (for an individual unit of analysis) or a case study (for an organisational unit of analysis) is an ideal strategy for exploratory research. As one delves further into the research domain, but finds that there are no good theories to explain the phenomenon of interest and wants to build a theory to fill the gap in that area, interpretive designs such as case research or ethnography may be useful. If competing theories exist and the researcher wishes to test these different theories or integrate them into a larger theory, positivist designs such as experimental design, survey research, or secondary data analysis are more appropriate.

Regardless of the specific research design chosen, the researcher should strive to collect quantitative and qualitative data using a combination of techniques such as questionnaires, interviews, observations, documents, or secondary data. For instance, even in a highly structured survey questionnaire, intended to collect quantitative data, the researcher may leave some room for a few open-ended questions to collect qualitative data that may generate unexpected insights not otherwise available from structured quantitative data alone. Likewise, while case research employs mostly face-to-face interviews to collect qualitative data, the potential and value of collecting quantitative data should not be ignored. As an example, in a study of organisational decision-making processes, the case interviewer can record numeric quantities such as how many months it took to make certain organisational decisions, how many people were involved in that decision process, and how many decision alternatives were considered, which can provide valuable insights not otherwise available from interviewees’ narrative responses. Irrespective of the specific research design employed, the goal of the researcher should be to collect as much and as diverse data as possible that can help generate the best possible insights about the phenomenon of interest.

Social Science Research: Principles, Methods and Practices (Revised edition) Copyright © 2019 by Anol Bhattacherjee is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


What Is the Purpose of Research Design?


Research design, like any research framework, has its own methods and purpose. The purpose of research design is to provide a clear plan for the research, organised around the independent and dependent variables, and to account for the cause-and-effect relationships between those variables.

Methods that constitute research design can include:

  • Observations
  • Experiments
  • Working with archives
  • Other methods

As we’ve explained, the methods of research always back up the purpose and the goals of the research.


Main Goals of Research Design

  • The main goal of research design is to make sure that the conclusions the researcher reaches are justified: the research must be able to confirm or refute the hypothesis.
  • Another purpose of research design is to broaden the researcher’s understanding of the topic, and to make them more aware of various places, groups, and settings.
  • Finally, research design allows the researcher to achieve an accurate understanding of the topic they are working on, and to be able to explain the topic to others.

Usually, you cannot accomplish all three of these goals at the same time, but you can come close by using a multi-stage design in your study, also known as multi-stage sampling .
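A minimal sketch of multi-stage sampling, with invented school and pupil identifiers: clusters are sampled first, then subjects within each sampled cluster:

```python
import random

random.seed(7)

# Hypothetical population: 20 schools of 50 pupils each.
population = {f"school_{i}": [f"s{i}_{j}" for j in range(50)]
              for i in range(20)}

# Stage 1: randomly sample clusters (schools).
stage1 = random.sample(sorted(population), k=5)

# Stage 2: randomly sample subjects (pupils) within each chosen school.
stage2 = {school: random.sample(population[school], k=10)
          for school in stage1}

sample = [pupil for pupils in stage2.values() for pupil in pupils]
print(len(sample))  # 5 schools x 10 pupils = 50 subjects
```

Each stage can serve a different goal: the cluster stage broadens the settings covered, while the within-cluster stage keeps data collection manageable.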


Examples of Research Design

One should choose the type of research design corresponding to the particular purpose of their research. Here are some examples to choose from when you’re doing research.

General Structure and Writing Style

The chosen type should help the researcher test the hypothesis accurately; while collecting data, you have to choose specific evidence that will show you the differences between the variables. Before conducting research, think about what information you will need and how you're going to use it; otherwise, you can get lost.

Action Research

When you’ve developed an understanding of the problem and have begun planning an intervention strategy, an action research design helps bring deviations to the surface. You make an observation and record the deviation. Then you carry out an action, and compare the outcome of having taken the action with the outcome of not having taken it.

Case Study

We would use a case study when studying a particular problem, taking broad samples and narrowing them down to subjects that are researchable. A case study is also useful when we need to check whether a certain theory applies to a real-world problem.

Causal Research

If A is X, then B is Y: that is causality at its simplest. By comparing two or more variables, a researcher can understand the effect they have on each other. This effect can then be examined in terms of causality, i.e., what constitutes the cause and what the effect.

Cohort Study

Cohort studies are more often used in medical research, but are now gaining popularity in social science. They are based on a quantitative framework: the researcher tracks a statistical occurrence within a subgroup whose members share features related to the study topic, rather than within an arbitrarily assembled group. Observation is the main method of data gathering in cohort studies.

How to Study Research Design

You can learn research design:

  • At universities
  • Through online tutorials
  • In online classes
  • By reading scientific papers

Research with FlowMapp’s Tools

FlowMapp specializes in strategic website planning and user experience analysis, and develops dedicated tools for this work. Although all of these products focus on web design, FlowMapp's analysis and planning tools can be used by a wide range of other professionals:

  • developers;
  • copywriters;
  • UX strategists;
  • researchers;
  • sales managers;
  • product managers;
  • project managers.

User Flow Tool

User flow is a visual representation of the sequence of actions that users perform to achieve their goal. In fact, the User Flow tool allows you to look at the interaction between the user and the web product through the user's eyes.

You can see what this looks like in the following example:


Wireframe tool

The Wireframe tool by FlowMapp is a powerful online tool that allows for the rapid creation of website prototypes. With its extensive library of templates for every website block, this tool enables designers to quickly build hi-fi prototypes. These wireframes can be easily shared via a link as a real webpage.

This tool is useful for quickly analyzing the labor required for website development, visualizing future project ideas, and effectively presenting them to stakeholders.

Notes tool

The Notes tool by FlowMapp is a content gathering system that allows you to store all types of data, files, and ideas in one place, and collaborate on them with others. This tool is particularly useful for creating briefs, reports, documentation, and for storing all project-related data.

During the research phase, the Notes tool is a valuable asset for gathering and organizing ideas, especially when working on UX/UI design, user behavior and user flow analysis, or website development projects.

Remember that the step-by-step approach is the one that will help you most. You can also borrow research formats from other fields and try mixed methods of research. Don’t be afraid to experiment – register at our website and get access to all tools we mentioned above!



Research Methods Guide: Research Design & Method


Tutorial Videos: Research Design & Method

Research Methods (sociology-focused)

Qualitative vs. Quantitative Methods (intro)

Qualitative vs. Quantitative Methods (advanced)


FAQ: Research Design & Method

What is the difference between Research Design and Research Method?

Research design is a plan to answer your research question.  A research method is a strategy used to implement that plan.  Research design and methods are different but closely related, because good research design ensures that the data you obtain will help you answer your research question more effectively.

Which research method should I choose?

It depends on your research goal and on what subjects (and whom) you want to study.  Let's say you are interested in studying what makes people happy, or why some students are more conscious about recycling on campus.  To answer these questions, you need to make a decision about how to collect your data.  Most frequently used methods include:

  • Observation / Participant Observation
  • Focus Groups
  • Experiments
  • Secondary Data Analysis / Archival Study
  • Mixed Methods (combination of some of the above)

One particular method could be better suited to your research goal than others, because the data you collect from different methods will be different in quality and quantity.   For instance, surveys are usually designed to produce relatively short answers, rather than the extensive responses expected in qualitative interviews.

What other factors should I consider when choosing one method over another?

Time for data collection and analysis is something you want to consider.  An observation or interview method (a so-called qualitative approach) helps you collect richer information, but it takes time.  Using a survey helps you collect more data quickly, yet it may lack detail.  So you will need to consider the time you have for research and the balance between the strengths and weaknesses associated with each method (e.g., qualitative vs. quantitative).

  • Last Updated: Aug 21, 2023 10:42 AM

Principles of Research Design



Copyright information

© 2002 Kluwer Academic Publishers

About this chapter

(2002). Principles of Research Design. In: Gender, Ethnicity, and Health Research. Springer, Boston, MA. https://doi.org/10.1007/0-306-47569-3_3


Planning a Research Project: Things You Need to Know

1. Planning your research

2. Correlational and experimental research

3. Types of variables

4. Manipulating independent variables

5. Confounding and control

6. Measuring dependent variables

7. Factors that determine power

8. Results: Main effects and interaction effects

9. Generalizability: Replication and interactions


These notes provide some basic principles you need to bear in mind when designing a research project. If you have already taken a research methods course, they will serve as a useful reminder. If you have never taken a course in research methods, they should point you in the right direction.

See also the notes on generating a research idea and on writing a research report.

The goal of research is to answer questions. Of course, the question itself must be an important one. We do not address importance in these notes, but see the notes on generating strong (and weak) research ideas.

Assuming your research question is worth answering, your success depends on meeting two other important criteria, power and control. A third concern, relevant if the first two criteria are satisfied, is generalizability. Power is the probability that your study will find an effect if it is present. Absence of power means that your research fails, not because you are asking the wrong question, but because you were unable to find what you were looking for. Control refers to the validity of any conclusion you reach. Control guarantees that your answer is not tainted because of errors in the design of your study. The results of your research are generalizable if they apply beyond the specific details of your study. Often referred to as external validity, generalizability determines how seriously your research conclusion can be taken in the world at large.

How you evaluate your study depends on the kind of research question you ask. Research generally addresses two kinds of questions, correlational and causal. A correlational question asks if there is a relationship between two or more aspects of behavior. It seeks a description rather than an explanation. A causal question seeks an explanation for behavior. It asks if the behavior is causally affected by some feature of the situation or the person. As we'll see, causal questions are much harder to answer than correlational questions.
Even if you have never had a course in research methods, you have presumably learned that correlational data cannot be used to answer causal questions. Unfortunately, many students (and some advanced researchers) fail to understand the full implications of this principle. Suppose we find that children who share their toys frequently with others also score higher on a test of empathy. What can we conclude from this finding? Here are four possible conclusions:

1. Requiring children to share their toys will enhance their sense of empathy.
2. Increasing a child's empathy for others will lead to their being more willing to share.
3. As children grow older, both empathy and willingness to share will increase.
4. Children who score higher on a test of empathy are more likely than others to share their toys.

Probably, only the last conclusion is justified.

Whenever we observe a correlation between two variables, X and Y, the result can be interpreted in three ways. Either X caused the changes in Y, or Y caused the changes in X, or some third variable caused both X and Y to change. These are, of course, conclusions 1 through 3 above. In the absence of further information, we cannot choose between them.

If all we want to do is predict behavior, then correlational data are enough. Conclusion 4 is a prediction, and is therefore justified.

If the hypothesis we wish to support is that X causes changes in Y, then we must show that deliberate changes in X are associated with predicted changes in Y, with all other factors controlled. To support a causal hypothesis we must conduct a true experiment , in which the presumed causal variable is deliberately manipulated, while all other variables are controlled.

Causal hypotheses are almost always of more interest than correlational hypotheses. They are necessary if we want to understand or modify a psychological process.

Unfortunately, many variables of interest cannot be manipulated, for practical or ethical reasons. For example, we cannot manipulate pathological conditions, which makes it hard or impossible to attribute changes in behavior to any pathology. In many cases, then, there is no alternative but to conduct correlational research. In these cases, it is still possible to exercise some control statistically. You must recognize, though, that the conclusions will never be as strong as you would like them to be.

All research involves the use of two or more variables. A variable is anything that varies during the research. You will be looking for relationships among variables, and in the case of causal hypotheses trying to show that a relationship is a causal one. You need to identify a number of different kinds of variables.

a. Dependent variables. In any research project you will measure one or more aspects of subjects' behavior. The variables that you measure are dependent variables. Presumably it is performance on these variables that you hope to predict or explain.

b. Independent variables . An independent variable is any characteristic of the research setting that you manipulate. You must be able to determine freely for any one subject what value the variable will take on. If the variable is not under your control in this way, it cannot be an independent variable.

Independent variables are the presumed causes of behavior. In a true experiment there must be at least one independent variable. Presumably you are interested in the effect of each independent variable on one or more dependent variables, and you would like to draw causal conclusions.

Note that it is very hard, sometimes impossible, to treat characteristics of a person as independent variables.

c. Classification variables . A classification variable is a characteristic of the subjects that is measured and used to sort them into groups; it cannot be freely manipulated. Typical classification variables are age, sex, psychopathological diagnosis, socio-economic status, educational level, etc. Many characteristics of a person are classification variables. Whenever you include a classification variable in your research question, you must be wary of drawing illegitimate causal conclusions.

d. Extraneous variables . Dependent, independent, and classification variables are the ones you pay attention to. But every research project includes a potentially infinite number of other variables that are neither manipulated nor measured. These are the extraneous variables.

Extraneous variables come in two forms, confounding and random . Confounding variables are a direct threat to the validity of your research, and must be controlled. We examine confounding variables in Section 5 . Random variables do not threaten the validity of your conclusions, but they have an impact on power . We examine power in Section 7 .

You should make sure you understand the difference between random and confounding variables. If an extraneous variable is not a confounding variable, it is a random variable. Random variables can be a nuisance, but they are not as serious a threat as confounding variables.

Typical extraneous variables include individual differences among the subjects, changing features of the research setting, and events occurring outside the research setting that might impact subjects' behavior.

e. Constants . Constants are features that might have been variable, but they have been fixed, so they do not vary. Classification variables are often fixed in this way. For example, your study might use only female subjects. Sex, then, is a constant. Sometimes a classification variable is partially constant. For example, all subjects might be between 18 and 24 months old. Age is fixed within that range. Many features of the research setting are also fixed.

As we'll see, holding variables constant is one way to prevent confounding, and thus enhance control . There are two other principles to keep in mind about constants, and they tend to be in conflict:

1. You can increase power by fixing variables that might otherwise have a large effect on your dependent variables. For example, if your dependent variable changes a great deal with time of day, then holding time of day constant or partially constant will increase power (see Section 7 ).

2. You can reduce the generalizability of your results by fixing variables that might change the way an independent variable affects dependent variables. For example, if your independent variable has different effects for young children and older children, your results may be misleading if you use only young subjects. If you suspect this could be the case you may need to conduct one or more replications and look for interaction effects (see Section 9 ).

By definition, independent variables are manipulated. This implies that for any subject in the experiment you have complete control over the independent variable, and can set its level to be anything you wish. You should think of independent variables as something you do to the experimental conditions, not something you do, at least not directly, to the subjects. For example, you cannot manipulate age by selecting, say, a five year old subject. You cannot do anything to people to change their age.

In designing an experiment, independent variables may be manipulated in one of two ways, between subjects or within subjects . In a between subjects design, separate groups of subjects are used for each possible value of the variable. In a within subjects design, each subject is tested under all different values of the variable.

If you use a between-subjects design it is essential that you use random assignment to allocate subjects to the various conditions. Any non-random assignment will introduce confounding into your experiment. If you use random assignment, all extraneous individual difference variables are random variables.
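As a minimal sketch of random assignment in a between-subjects design (the helper name and subject IDs here are hypothetical; only Python's standard library is used), shuffling the pooled subject list before splitting it guarantees that allocation is independent of any individual-difference variable:

```python
import random

def randomly_assign(subjects, conditions):
    """Randomly allocate subjects to conditions in (nearly) equal groups.

    Because the pool is shuffled before being split, every extraneous
    individual-difference variable becomes a random variable rather
    than a confounding one.
    """
    pool = list(subjects)
    random.shuffle(pool)
    groups = {c: [] for c in conditions}
    for i, subject in enumerate(pool):
        groups[conditions[i % len(conditions)]].append(subject)
    return groups

assignment = randomly_assign(range(1, 21), ["treatment", "control"])
print({c: len(g) for c, g in assignment.items()})  # 10 subjects per condition
```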

Within subjects designs are almost always more powerful than between subjects designs ( Section 7 ). You are controlling all the individual differences that might be extraneous variables, including intelligence and personality factors. We sometimes talk about using subjects as their own controls.

The drawback to within subjects designs is that you must be careful to avoid confounding caused by the order in which different values of the variable are tested (see Section 5 ).

In fact, whenever you manipulate an independent variable, you must exercise great caution to make sure that no other variable is systematically varied at the same time. This would lead to confounding , and it destroys the validity of your research. Sometimes it takes great vigilance to recognize that confounding has occurred, and it may require considerable ingenuity to overcome that confounding (see Section 5 ).

Confounding can arise in either correlational or in experimental research. It is a serious threat to the validity of your research, especially if you are asking a causal research question. To guard against confounding you can exercise a number of techniques that control the confounding variables.

a. Correlational research . In correlational research, a variable is a confounding variable if it is correlated with both variables involved in a correlation. Suppose, for example, you find that autistic children show less empathy than non-autistic children. Suppose further that intelligence differs between autistic and non-autistic children, and is related to empathy as well. Then intelligence is a confounding variable. Thus, you do not know if the differences in empathy were due to autism, to differences in intelligence, or perhaps to other confounding variables.

Confounding in correlational research is almost inevitable. Because nothing is manipulated, there are many variables that can potentially confound a correlation. You can control them one at a time by fixing them, and thus turning them into constants, but can never eliminate all possible confounding this way.

If you are using a classification variable you may be able to effect some control by matching subjects for important confounding variables. For example, you may be able to ensure that for every autistic child you have selected a non-autistic child of equal intelligence. This way you have controlled for intelligence. However, there will always be other potentially confounding variables, more than you can control by matching.
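A hedged sketch of how such matching might be implemented: a greedy nearest-neighbor pairing on one confounding variable. All names, IDs, and IQ values here are hypothetical, and real studies would typically match on several variables at once:

```python
def greedy_match(cases, controls):
    """Pair each case with the still-unused control closest in IQ.

    Controls for one confounding variable (intelligence) by matching;
    other potential confounders remain uncontrolled.
    """
    pairs, available = [], list(controls)
    for case in cases:
        best = min(available, key=lambda c: abs(c["iq"] - case["iq"]))
        available.remove(best)
        pairs.append((case, best))
    return pairs

cases = [{"id": "a1", "iq": 95}, {"id": "a2", "iq": 110}]
controls = [{"id": "c1", "iq": 96}, {"id": "c2", "iq": 108}, {"id": "c3", "iq": 130}]
for case, control in greedy_match(cases, controls):
    print(case["id"], "matched with", control["id"])
```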

There are also a number of advanced statistical procedures, including path analysis and structural equation modeling , that can sometimes substitute for direct experimental control, but they are beyond the scope of these notes.

Finally, note that if you are concerned only about predicting behavior, the existence of confounding in correlational research may be irrelevant. If your predictions are successful, you may not care why they are successful.

b. Experimental research . In experimental research a variable is a confounding variable if it is systematically related to both an independent variable and a dependent variable. Confounding will invalidate any causal conclusions you try to draw from your experiment.

Suppose, for example, you manipulate the abstractness of words (abstract versus concrete) in a memory experiment. Then you find out that the abstract words occur less frequently in the English language than the concrete words. You do not know if changes in performance were due to abstractness or to word frequency.

Incidental confounding that occurs when you manipulate an independent variable is called a manipulation artifact . The solution is to find some way of manipulating the variable while ruling out the incidental changes, or at least determining if and when they occur. In the example above, you may need to make sure that words used in all conditions are matched for word frequency.

When the independent variable is manipulated within subjects (all subjects are tested with all values of the variable) you need to take special care to avoid confounding due to practice effects, fatigue effects, or other order effects.

A standard control procedure for order effects is counterbalancing . Use separate groups of subjects that receive the treatments in different orders. If you have only two treatments, A and B, you will need two groups of subjects, one tested in the order AB, the other tested in the order BA. With more than two treatments, counterbalancing becomes more complicated. If you do not use counterbalancing, you must present the conditions in a random order, chosen separately for each subject.
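The AB/BA scheme above can be sketched in a few lines. `counterbalanced_orders` is a hypothetical helper that cycles subjects through every possible treatment order; with more than about three treatments the number of permutations grows quickly, and partial schemes such as Latin squares are used instead:

```python
from itertools import permutations

def counterbalanced_orders(treatments, n_subjects):
    """Assign each subject a treatment order, cycling through all
    permutations so every order is used (approximately) equally often.
    With two treatments A and B this yields the classic AB / BA split.
    """
    orders = list(permutations(treatments))
    return [orders[i % len(orders)] for i in range(n_subjects)]

for subject, order in enumerate(counterbalanced_orders(["A", "B"], 4), start=1):
    print(f"subject {subject}: {' then '.join(order)}")
```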

The most important concern with your dependent variables is the reliability of your measurements. If reliability is poor, it is impossible to obtain meaningful results. You can conduct pilot studies to assess reliability, but normally the best way to ensure reliability is to use measuring procedures that other investigators have used successfully.

Reliability is notoriously poor for variables that rely on ill-defined human judgments for their measurement. Ratings of children's behavior, for example, should be checked carefully for reliability. It is also wise to avoid open-ended questions unless a reliable coding scheme has been developed. Any other flaw in your measurement procedure that introduces error will reduce reliability. Reliability of your measurements is an important determinant of power in your research (see Section 7 ).

You may see reference in some textbooks to concerns about the validity of a measuring procedure. Validity of measurements is a complex subject. For most purposes, measurements can be said to be valid if they predict what you expect them to predict, or are changed in ways you expect them to be changed. Thus, validity of measurement in your research is closely tied to support for your hypotheses. If your hypotheses are supported, you can usually take measurement validity to be a given.

Power is the probability that your study will find an effect when the effect really does exist. A number of factors can affect the power of your research. Some of these factors have been discussed in previous sections.

a. Experimental design . If you use a within-subjects procedure to manipulate an independent variable, power will be greater than it would be if you use a between-subjects design ( Section 4 ). If you use a between-subjects design you can increase power by using matched groups of subjects, matching them on a variable that you know to be highly correlated with your dependent variable. The greater the correlation between matching variable and dependent variable, the greater the increase in power.

b. Reliability . Power is closely related to the reliability of your dependent variable ( Section 6 ). Power is reduced if you use a measurement procedure that has low reliability.

c. Extraneous variables . Random extraneous variables (those that are not confounding) will reduce power if they are correlated with your dependent variable ( Section 3d ). For example, some reaction times may vary with time of day. If reaction time is your dependent variable and your experiment is run at varying times of day, power may be reduced. You may be able to increase power by holding the variable constant, or partially constant ( Section 3e ).

d. Number of subjects . The most direct way to increase power is to use more subjects. There are statistical procedures for estimating the number of subjects you will need to find an effect of a given size.
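The link between number of subjects and power can be illustrated with a small Monte Carlo sketch. This is a simplified illustration, not a substitute for a proper power analysis: it assumes normally distributed scores, equal standard deviations, and uses a large-sample z-test in place of an exact t-test:

```python
import random
from statistics import NormalDist, mean, stdev

def simulated_power(effect=0.5, n=30, alpha=0.05, reps=2000, seed=1):
    """Estimate power by repeatedly simulating a two-group experiment
    and counting how often the group difference reaches significance."""
    rng = random.Random(seed)
    crit = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    hits = 0
    for _ in range(reps):
        control = [rng.gauss(0, 1) for _ in range(n)]
        treated = [rng.gauss(effect, 1) for _ in range(n)]
        se = (stdev(control) ** 2 / n + stdev(treated) ** 2 / n) ** 0.5
        if abs(mean(treated) - mean(control)) / se > crit:
            hits += 1
    return hits / reps

print(simulated_power(effect=0.5, n=30))   # moderate power with 30 per group
print(simulated_power(effect=0.5, n=100))  # power rises with more subjects
```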
When an independent variable (or classification variable) is shown to have an effect on a dependent variable, we call it a main effect . In the simplest study, we would have one independent or classification variable and one dependent variable, and we would look for a main effect. Interesting main effects are rare in studies of human behavior. Whenever one asks, "What effect does X have on Y?", the answer is almost always, "It depends". Descriptions of how and on what it depends are descriptions of interactions .

We define an interaction as a change in how an independent variable or a classification variable affects a dependent variable as a function of additional independent or classification variables. Here's a simple example: How does performing in front of an audience affect the quality of a person's performance? It is known that performance will generally improve if the performer is skilled, but it may deteriorate if the performer is a novice. That is, presence or absence of an audience interacts with skill level to affect performance.

Note that in describing an interaction we always talk about two independent (or classification) variables interacting with each other in their effect on a dependent variable. It is incorrect to say that an independent variable interacts with a dependent variable.

Interactions are used to clarify the psychological processes that determine behavior. For example, the interaction between skill level and audience tells us a lot about how a performer exercises her skill. Thus, if you are interested in exploring the processes that underlie behavior, you should include at least two independent variables in the design of your research. This is usually the best way to test theories of behavior. Use the theory to derive predicted interactions. "For a research psychologist, interactions are the spice of life".
There is an important connection between interactions and generalizability. If you have found a main effect for an independent variable, you probably want to know if the results are generalizable. In other words, you want to know what would happen if you changed one or more of the constant features in the experiment. Suppose you have tested the effects of an educational intervention in a particular class at a particular school. It is necessary to find out if the results will generalize to other classes at other schools. In effect, you are asking if the intervention variable interacts with other variables that might differentiate the tested class from others.

Of course, you could answer the question by repeating the experiment with other classes at other schools, i.e., by replicating the study with some variation in the constants. If you find the same main effect each time, there is no interaction, and the results will generalize. If there are interactions the results may not generalize. The degree of generalizability depends on the size of the interactions: the bigger the interactions, the poorer the generalizability.

It should be clear, then, that the only safe way to assess generalizability is to carry out a number of replications. If you are not able to do this, you should at least be aware of the concerns.


3   Research design principles

With the MIDA framework and the declare, diagnose, redesign algorithm in hand, we can articulate a set of six principles for research design.

This section offers succinct discussions of each principle. We will expand on the implications of these principles for specific design choices throughout the book.

Design principles

  • Design holistically
  • Design agnostically
  • Design for purpose
  • Design early
  • Design often
  • Design to share

Principle 3.1 Design holistically

This is perhaps the most important of our principles. Designs are good not because they have good components but because the components work together to get a good result. Too often, researchers develop and evaluate parts of their designs in isolation: Is this a good question? Is this a good estimator? What’s the best way to sample? But if you design with a view to diagnosis you are forced to focus on how each part of the design fits together. An estimator might be appropriate if you use one assignment scheme but not another. The evaluation of data and answer strategies depends on whether your model and inquiry call for descriptive inference, causal inference, or generalization inference (or perhaps, all three at once).

If we ask, “What’s your research design?” and you respond “It’s a regression discontinuity design,” we’ve learned something about what class your answer strategy might fall into, but we don’t have enough information to decide whether it’s a strong design until we learn about the model, inquiry, data strategy, and other parts of the answer strategy. Ultimately design evaluation comes not from assessment of the parts but from diagnosis of the full design.

When we consider whole designs rather than just thinking about one aspect at a time, we notice how designs that have “parallel” theoretical and empirical sides tend to be strong. We develop this idea in Section 9.3 . If you want your estimate \(a_{d^*} = A(d^*)\) to be close to the estimand \(a_{m^*} = I(m^*)\) , it’s often best to choose data strategies that parallel models and answer strategies that parallel inquiries, i.e., to make sure that this rough analogy holds: M : I :: D : A .
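As an illustration only, the declare-diagnose loop over M, I, D, and A can be sketched as four plain functions plus a simulation. The authors' own software for this, DeclareDesign, is an R package; the Python below is not its API, and every name here is hypothetical:

```python
import random
from statistics import mean

def model(rng, n=100):
    # M: potential outcomes (y0, y1) with a true average effect of 0.5
    return [(rng.gauss(0, 1), rng.gauss(0.5, 1)) for _ in range(n)]

def inquiry(units):
    # I: the estimand -- the true average treatment effect in this sample
    return mean(y1 - y0 for y0, y1 in units)

def data_strategy(rng, units):
    # D: assign each unit to treatment with probability 0.5; reveal one outcome
    data = []
    for y0, y1 in units:
        treated = rng.random() < 0.5
        data.append((treated, y1 if treated else y0))
    return data

def answer_strategy(data):
    # A: difference in means between treated and control units
    treated = [y for z, y in data if z]
    control = [y for z, y in data if not z]
    return mean(treated) - mean(control)

def diagnose(reps=1000, seed=0):
    # Simulate the whole design many times and summarize its performance.
    rng = random.Random(seed)
    errors = []
    for _ in range(reps):
        units = model(rng)
        errors.append(answer_strategy(data_strategy(rng, units)) - inquiry(units))
    return {"bias": mean(errors)}

print(diagnose())  # bias should be close to zero under random assignment
```

Redesign then amounts to editing one component (say, the data strategy) and rerunning `diagnose` to see how the diagnosands change.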

Principle 3.2 Design agnostically

When we design a research study, we have in mind a model of how the world works. But a good design should work, and work well, even when the world is different from what we expect. One implication is that we should entertain many models, seeking not just to ensure the design produces good results for models that we think likely but trying to expand the set of possible models for which the design delivers good results. A second implication is that inquiries and answer strategies should still work when the world looks different from what we expect. Inquiries should have answers even when event-generating processes are different from how you imagine them. In the same way, the ability to apply an answer strategy should depend as little as possible on strong expectations about how the data you will get will look.

A corollary to “Design agnostically” is that we should know for which models our design performs well and for which models it performs poorly. We want to diagnose over many models to find where designs break. All designs break under some models, so the fact that a design ever breaks is no criticism. As research designers, we just want to know which models pose problems and which do not.

Principle 3.3 Design for purpose

When we say a design is good we mean it is good for some specific purpose. That purpose should be captured by the diagnosands used to assess design quality, and design decisions should then be taken with respect to the specified purpose. Too often, researchers focus on a narrow set of diagnosands, and consider them in isolation. Is the estimator unbiased? Do I have statistical power? The evaluation of a design nearly always requires balancing multiple criteria: scientific precision, logistical constraints, policy goals, as well as ethical considerations. And oftentimes these might come into conflict with each other. Thus one design might be best if the goal is to assess whether a treatment has any effect, another if the goal is to assess the size of an effect. One design might be optimal if the goal is to contribute to general knowledge about how processes work, but another if the goal is to make a decision about whether to move forward with a policy in a given context.

In the MIDA framework, the goals of a design are not formally a part of a design. They enter at the diagnosis stage, and, of course, a single design might be assessed for performance for different purposes.

Principle 3.4 Design early

Designing an empirical project entails declaring, diagnosing, and redesigning the components of a research design: its model, inquiry, data strategy, and answer strategy. The design phase yields the biggest gains when we design early. By frontloading design decisions, we can learn about the properties of a design while there is still time to improve them. Once data strategies are implemented — units sampled, treatments assigned, and outcomes measured — there’s no going back. While applying the answer strategy to the revealed dataset, you might well wish you’d gathered data differently, or asked different questions. Post-hoc, we always wish our previous selves had planned ahead.

A reason deeper than regret for designing early is that the declaration, diagnosis, and redesign process inevitably changes designs, almost always for the better. Revealing how each of the four design elements are interconnected yields improvements to each. These choices are almost always better made before any data are collected or analyzed.

Principle 3.5 Design often

Designing early does not mean being inflexible. In practice, unforeseen circumstances may change the set of feasible data and answer strategies. Implementation failures due to nonresponse, noncompliance, spillovers, inability to link datasets, funding contractions, or logistical errors are common ways the set of feasible designs might contract. The set of feasible designs might expand if new data sources are discovered, additional funding is secured, or if you learn about a new piece of software. Whether the set expands or contracts, we benefit from declaring, diagnosing, and redesigning given the new realities.

In Part IV on the research design lifecycle, we push this principle to the limit, encouraging you to keep on designing even after research is completed, arguing that ex post design can help you assess the robustness of your claims and help decide how to respond to criticism of your work.

Principle 3.6 Design to share

The MIDA framework and the declaration, diagnosis, and redesign algorithm can improve the quality of your research designs. It can also help you communicate your work, justify your decisions, and contribute to the scientific enterprise. Formalizing design declaration makes this sharing easier. By coding up a design as an object that can be run, diagnosed, and redesigned, you help other researchers see, understand, and question the logic of your research.

We urge you to keep this sharing function in mind as you write code, explore alternatives, and optimize over designs. An answer strategy that is hard-coded to capture your final decisions might break when researchers try to modify parts. Alternatively, designs can be created specifically to make it easier to explore neighboring designs, let others see why you chose the design you chose, and give them a leg up in their own work. In our ideal world, when you create a design, you contribute it to a design library so others can check it out and build on your good work.

Am J Public Health, 100(10), October 2010

Some Principles of Research Design in Public Health

Research is a planful and systematic activity directed at the discovery of new facts or at the identification of relationships among facts. It attempts to achieve these objectives in a manner that will insure that the data obtained can be generalized, that is, can be applied to a wider range of situations than the one studied, and that the data are not influenced by the conscious or unconscious bias of the investigator or by any other extraneous factors …

In this article we present some principles of research design. While the discussion cannot be comprehensive, it may serve as a guide for the planning and design of administrative research in public health. To invest these principles with the greatest possible meaning for health workers, their use will be illustrated by a description of the life history of a particular research project which was actually carried out….

1. Formulating the Problem

The study grew out of Public Health Service experience with X-ray case-finding activities in the field of tuberculosis. In the late 1940s and early 1950s, the Public Health Service was intensively involved in demonstrations designed to improve case finding in tuberculosis…. The extent of public participation varied considerably from place to place, but, in general, did not achieve the high level desired by program planners….

The Tuberculosis Program officials were asked for their best hunches concerning the nature of the problem. A variety of reactions was obtained. We learned, for example, that every concerted effort had already been made to inform the population in surveyed localities. We further learned that in so far as possible, mobile X-ray units used for the programs had been placed in convenient neighborhood locations; that their hours of operation generally were set to fit in with the living habits of the local population; that cost was not a factor since the X-rays were provided free of charge; that for the individual, obtaining the X-ray was a quick procedure which did not require much of his time. Factors such as knowledge, cost, time, convenience, and the like had one important characteristic in common. They all dealt with potential barriers against participating in the screening program; failure to plan adequately for any of these factors might have prevented some people from coming in…. In short, they [tuberculosis program officials] were implicitly asking for a study around the question: “Why, despite our best efforts, are some people still not coming in for screening?” …

Hochbaum's approach was to turn the question around. Instead of asking why people are not coming in for screening, he chose to direct his attack to the question: Why are people coming in for screening? By asking the question in this way, he was insuring a concern with the positive forces that impel people to take action and thus avoided limiting the study to factors which might prevent it….

2. Review of Literature

After formulating the research problem, a rather comprehensive review of the literature was undertaken. The review failed to uncover any previous research which had approached the problem from the point of view of the present study….

3. Formulating Hypotheses and Making Assumptions Explicit

The principal investigator then formulated a series of hypotheses as explicitly as possible…. In general they concerned, first, the individual's psychological readiness to participate in the screening program (itself based on 3 separate psychological factors); second, the role of situational factors in facilitating or inhibiting the readiness; and, third, the role of certain cues or stimuli to action….

[T]he attempt was made to define the hypotheses operationally, which means defining them in terms that are measurable…. In the attempt to formulate hypotheses operationally … it was necessary to define certain terms that might not at first seem to need much further definition. Consider, for example, the phrase: the act of obtaining a chest X-ray….

Is the person who obtains X-ray only after the appearance of suspicious symptoms taking the same action as the person who obtains it in the absence of symptoms? Is the person who obtains the X-ray at a mobile unit behaving in a way similar to the person who turns to his physician or to the hospital? The action itself appears identical on the surface, but the motivation, the context, the very nature of the behavior can be regarded as very different. This consideration led to defining the phrase, “obtaining a chest x-ray,” in terms of 4 factors: (1) whether the X-rayed person had voluntarily obtained a chest X-ray without compulsion or pressure, (2) under what specific circumstances he had sought X-ray, (3) to what kind of X-ray facility he had turned, and (4) the time period in which he had obtained X-ray….

4. Specification of Needed Data

Following the definition of terms used and a formulation of explicit hypotheses, it was necessary to specify precisely the kinds of data that would have to be obtained to test the hypotheses…. [E]ach and every hypothesis and each and every part of the hypothesis had to be subjected to careful scrutiny for an analysis of the specific data that would have to be obtained to test it adequately.

5. Selection of Methods and Technics to Yield Needed Data

By this time in the history of the case-finding project, the ultimate purpose had been identified and the research problem had been defined. Relevant literature had been reviewed and assumptions, definitions, and hypotheses had already been made explicit and defined in measurable terms. Finally, the detailed data that would be needed to test the hypotheses were specified. At this point and not before, methods were selected to obtain the needed data; and at this point, it was determined what kinds of samples should be selected for the study…. [T]he research design as developed required that the sample studied encompass people who had never obtained chest X-rays as well as people who had obtained X-rays voluntarily and under certain other conditions….

[I]t was decided to select 3 specific large cities for study and to draw representative samples in each totaling about 1200 persons. Data were collected in this case through the use of a personal but standardized questionnaire consisting of about 90 principal questions.

… [A] finding obtained from a sample may be generalized to a population only when the individuals studied constitute a random or representative sample of the population. Briefly, the requirements for random (or representative) sampling are as follows. First, that every member of the population or group to be studied has an equal probability of being drawn in the sample, and second, that each member drawn in the sample actually be studied or in some way accounted for …
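The first requirement above, equal probability of selection, is what simple random sampling delivers. A minimal sketch using a hypothetical sampling frame (the frame size, sample size, and seed are all illustrative, not taken from the article):

```python
import random

# Hypothetical sampling frame: 1,000 residents identified by ID number.
population = list(range(1, 1001))

rng = random.Random(42)
# random.sample draws without replacement, giving every member of the
# frame an equal probability of inclusion. The second requirement --
# actually studying or accounting for everyone drawn -- is a fieldwork
# obligation that no sampling code can satisfy by itself.
sample = rng.sample(population, 120)

print(len(sample), "members drawn,", len(set(sample)), "of them distinct")
```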

A frequent question raised concerns the determination of minimum sample size needed. Briefly, one estimates needed sample size in terms of such considerations as the complexity of the analyses to be made, the heterogeneity of the population regarding the characteristics to be studied, the degree of precision of findings required, and the kinds of statistical measures to be used….
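One common normal-approximation formula for minimum sample size per group, comparing two means, is shown here as an illustration of the trade-offs the paragraph describes (precision, heterogeneity, planned test), not as a procedure taken from the article:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sided
    two-sample comparison of means.

    delta: smallest difference worth detecting
    sigma: assumed standard deviation of the outcome (heterogeneity)
    """
    z = NormalDist().inv_cdf
    z_alpha, z_beta = z(1 - alpha / 2), z(power)
    return ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

print(n_per_group(delta=0.5, sigma=1.0))   # about 63 per group
print(n_per_group(delta=0.25, sigma=1.0))  # halving delta quadruples n
```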

6. Planning the Analysis

It may be expected that with the development of a questionnaire the case finding study would have been ready to go into the field, but it was not. It was known that time would be saved in the long run and serious errors avoided if a few additional steps were undertaken before the field phase. One of these steps involved an attempt to plan the kind of analysis that the principal investigator intended to make later with the data he expected to obtain. The mere step of setting up blank or dummy tables, showing which of the data to be obtained will have to be related to which others and in what manner, occasionally will show gaps in the data that are being sought and may occasionally show that more data are being sought than are actually needed. Moreover, indicating the kinds of statistical procedures one intends to employ in the analysis helps to determine the minimum sample size needed. Finally, the analysis plans frequently show that the form in which the data are to be obtained will not render them susceptible to the specific kind of an analysis being designed….

7. Pretesting Study Design

Normally, in the development of a project, one would at this point pretest his instruments on a small but representative sample of the population with which he is concerned. Unfortunately, as is so often true, in the case of the present example inadequate time was available for thorough pretesting, although some was done. And there was reason to regret the inadequate pretesting, for a simple clerical error in the instructions to the interviewer later resulted in the failure to obtain certain important information from about 150 respondents….

8. Collection of Data

After all these steps had been accomplished, the study was taken into the field, the interviewers were trained intensively, and the data were collected. In all, approximately 7 months had been required for the detailed planning of the study and 3 weeks were required to collect all the data. Thus approximately 90% of the time spent on the study before analysis was devoted to planning….

9. Analysis of Data

The complex content analysis that had been developed was applied to the collected data, and the latter were summarized….

10. Interpretation and Reporting

In a study whose hypotheses have been stated clearly, the interpretation of data is always simpler than in one whose hypotheses are vaguely stated or not stated at all. In the latter case, one is required to rationalize the data after the fact, a process always fraught with some danger. As indicated, the study discussed here had a number of fairly well defined hypotheses with questions specifically designed to gather data relevant to the hypotheses. As a consequence, the interpretations of data were almost automatic in most cases….

One of the most important obligations of the research specialist is to communicate his methods and findings to his colleagues so that their research efforts may be built upon his. In the area of administrative research in public health this obligation is even more pressing, since soundly originated research should produce findings that can be applied to the improvement of public health programs even in localities outside the one that has been studied….

PREVALENT MISCONCEPTIONS ABOUT ADMINISTRATIVE RESEARCH

It is widely but incorrectly believed that a research design will be judged favorably if it contains certain “magic” words, phrases, or concepts. The foregoing presentation should demonstrate that the magic required is not one of words, but one of rational, systematic planning. The person who can develop a sound plan will rarely have trouble communicating its soundness, and conversely the person who has difficulty in communicating a good plan probably does not have one.

A second misconception is that inadequate research plans can be made acceptable by labeling them “exploratory.” True enough, research in uncharted areas often involves exploration of imperfectly defined hypotheses or incompletely developed methods. But, as indicated in this paper, the exploratory researcher is not freed from the obligation to observe the principles of sound research design to the maximum extent possible.

A third misconception is that basic research in which research is an end in itself is good, and that applied research in which research is a means to an end is poor. Nothing could be farther from the truth. It is both possible and imperative to develop sound research designs in applied research, and it is not unusual to see poor designs for basic research.

CONCLUSIONS

The major purpose of this paper was to shed a little light on the nature of the research process and on some of its complexities. The design of research is a technical matter involving technical skills. The enthusiastic but untrained researcher is no more likely to discover new facts systematically than an enthusiastic but untrained layperson is to perform successful appendectomies. In any attempt to do more or less definitive research, the skills of the trained researcher are essential. It may be added parenthetically that this restriction does not apply to all data-gathering activities of health agencies, but only to those whose aim is to discover or establish new facts in a way that is generalizable to other situations and free of certain biases.

Qualitative Principles in Action

This workshop delves into the real-world application of qualitative principles in research. Denise Lillvis, PhD, discusses her qualitative research on children in acute healthcare settings. The session includes applied examples of concepts discussed earlier in the series.

Participants are invited to bring questions and ideas to share with the group during the session, offering an opportunity for collaborative learning as well as questions about individual projects.

DATE: Wednesday, October 30, 2024
TIME: 9:00 a.m. – 10:30 a.m.
LOCATION: Room 7002, Clinical and Translational Research Center, 875 Ellicott St., Buffalo

*Attendance will be limited to the first 35 registrants (waitlist available).

Denise Lillvis, PhD, MPA, Assistant Professor, Department of Epidemiology and Environmental Health, School of Public Health and Health Professions, UB; CTSI K Scholar 2022–2024

For more information, contact [email protected] or 716-844-9282. 

COMMENTS

  1. What Is a Research Design

    A research design is a strategy for answering your research question using empirical data. Creating a research design means making decisions about: Your overall research objectives and approach. Whether you'll rely on primary research or secondary research. Your sampling methods or criteria for selecting subjects. Your data collection methods.

  2. Research Design

    The purpose of research design is to plan and structure a research study in a way that enables the researcher to achieve the desired research goals with accuracy, validity, and reliability. Research design is the blueprint or the framework for conducting a study that outlines the methods, procedures, techniques, and tools for data collection ...

  3. Basic Research Design

    Research Design Principles (Blair et al., 2023) ... Design for Purpose. Specificity of Purpose: A design is deemed good when it aligns with a specific purpose or goal. Balancing Multiple Criteria: Designs should balance scientific precision, logistical constraints, policy goals, and ethical considerations. ...

  4. What is a Research Design? Definition, Types, Methods and Examples

    Research design methods refer to the systematic approaches and techniques used to plan, structure, and conduct a research study. The choice of research design method depends on the research questions, objectives, and the nature of the study. Here are some key research design methods commonly used in various fields: 1.

  5. What Is Research Design? 8 Types + Examples

    Research design refers to the overall plan, structure or strategy that guides a research project, from its conception to the final analysis of data. Research designs for quantitative studies include descriptive, correlational, experimental and quasi-experimenta l designs. Research designs for qualitative studies include phenomenological ...

  6. PDF WHAT IS RESEARCH DESIGN?

    about the role and purpose of research design. We need to understand what research design is and what it is not. We need to know where design fits into the whole research process from framing a question to finally analysing and reporting data. This is the purpose of this chapter. Description and explanation Social researchers ask two ...

  7. Research Design

    Table of contents. Step 1: Consider your aims and approach. Step 2: Choose a type of research design. Step 3: Identify your population and sampling method. Step 4: Choose your data collection methods. Step 5: Plan your data collection procedures. Step 6: Decide on your data analysis strategies.

  8. Research design

    Research design is a comprehensive plan for data collection in an empirical research project. It is a 'blueprint' for empirical research aimed at answering specific research questions or testing specific hypotheses, and must specify at least three processes: the data collection process, the instrument development process, and the sampling process.

  9. Clinical research study designs: The essentials

    Introduction. In clinical research, our aim is to design a study, which would be able to derive a valid and meaningful scientific conclusion using appropriate statistical methods that can be translated to the "real world" setting. 1 Before choosing a study design, one must establish aims and objectives of the study, and choose an appropriate target population that is most representative of ...

  10. Research Design: What is Research Design, Types, Methods, and Examples

    There are various types of research design, each suited to different research questions and objectives: • Quantitative Research: Focuses on numerical data and statistical analysis to quantify relationships and patterns. Common methods include surveys, experiments, and observational studies. • Qualitative Research: Emphasizes understanding ...

  11. Research Design

    In a nutshell, the following is the procedure of research design: 1. Define the purpose of your project. Determine whether it will be exploratory, descriptive, or explanatory. 2. Specify the meanings of each concept you want to study. 3. Select a research method. 4.

  12. Research design

    Research design refers to the overall strategy utilized to answer research questions. A research design typically outlines the theories and models underlying a project; the research question(s) of a project; a strategy for gathering data and information; and a strategy for producing answers from the data. [1] A strong research design yields valid answers to research questions while weak ...

  13. PDF The Selection of a Research Design

    research involves philosophical assumptions as well as distinct methods or procedures. Research design, which I refer to as the plan or proposal to conduct research, involves the intersection of philosophy, strategies of inquiry, and specific methods. A framework that I use to explain the inter-action of these three components is seen in Figure ...

  14. (PDF) Basics of Research Design: A Guide to selecting appropriate

    for validity and reliability. Design is basically concerned with the aims, uses, purposes, intentions and plans within the. pr actical constraint of location, time, money and the researcher's ...

  15. What Is the Purpose of Research Design?

    Research design, like any framework of research, has its methods and purpose. The purpose of research design is to provide a clear plan of the research, based on independent and dependent variables, and to consider the cause and effect evoked by these variables. Methods that constitute research design can include: Observations. Surveys.

  16. Research Methods Guide: Research Design & Method

    Research design is a plan to answer your research question. A research method is a strategy used to implement that plan. Research design and methods are different but closely related, because good research design ensures that the data you obtain will help you answer your research question more effectively. Which research method should I choose?

  17. PDF Principles of Research Design

    Principles of Research Design Every discipline that concerns itself with health research—sociology, anthropology, epi demi-ology, psychology, and health promotion, for example—maintains a lexicon specific to that field that constitutes, in effect, a shorthand for describing the study designs, relevant measures

  18. Principles of Research Design

    9. Generalizability: Replication and interactions. "Ancient of Days" (William Blake) scanned by Mark Harden, at Artchive. These notes provide some basic principles you need to bear in mind when designing a research project. If you have already taken a research methods course, they will serve as a useful reminder.

  19. Research Design in the Social Sciences

    Design agnostically. Design for purpose. Design early. Design often. Design to share. Principle 3.1 Design holistically. This is perhaps the most important of our principles. Designs are good not because they have good components but because the components work together to get a good result. Too often, researchers develop and evaluate parts of ...

  20. Planning Qualitative Research: Design and Decision Making for New

    While many books and articles guide various qualitative research methods and analyses, there is currently no concise resource that explains and differentiates among the most common qualitative approaches. We believe novice qualitative researchers, students planning the design of a qualitative study or taking an introductory qualitative research course, and faculty teaching such courses can ...

  21. Some Principles of Research Design in Public Health

    In this article we present some principles of research design. While the discussion cannot be comprehensive, it may serve as a guide for the planning and design of administrative research in public health. ... By this time in the history of the case-finding project, the ultimate purpose had been identified and the research problem had been ...

  22. (PDF) Principles of Research Design

    the different components of the study in a coherent and logical way, thereby, ensuring. you will effectively address the research problem; it constitutes the blueprint for the. collection ...

  23. Research Design: Purpose and Principles

    1 Research Design: Purpose and Principles Chapter 18 Research Design: Purpose and Principles. 2 Research Design is the plan and structure of investigation, conceived so as to obtain answers to research questions. This plan is the overall scheme or program of the research. It includes an outline of what the investigator will do, from writing the ...

  24. Qualitative Principles in Action

    This workshop delves into the real-world application of qualitative principles in research. Denise Lillvis, PhD, discusses her qualitative research on children in acute healthcare settings. The session includes applicative examples to concepts discussed earlier in the series.&nbsp;

  25. Adobe Workfront

    ADOBE WORKFRONT Plan, assign, and execute work from one place. Build a marketing system of record by centralizing and integrating work across teams and applications with the industry-leading enterprise marketing work management application.