
What Is Qualitative Research? | Methods & Examples

Published on June 19, 2020 by Pritha Bhandari. Revised on September 5, 2024.

Qualitative research involves collecting and analyzing non-numerical data (e.g., text, video, or audio) to understand concepts, opinions, or experiences. It can be used to gather in-depth insights into a problem or generate new ideas for research.

Qualitative research is the opposite of quantitative research, which involves collecting and analyzing numerical data for statistical analysis.

Qualitative research is commonly used in the humanities and social sciences, in subjects such as anthropology, sociology, education, health sciences, history, etc.

Some examples of qualitative research questions:

  • How does social media shape body image in teenagers?
  • How do children and adults interpret healthy eating in the UK?
  • What factors influence employee retention in a large organization?
  • How is anxiety experienced around the world?
  • How can teachers integrate social issues into science curriculums?

Table of contents

  • Approaches to qualitative research
  • Qualitative research methods
  • Qualitative data analysis
  • Advantages of qualitative research
  • Disadvantages of qualitative research
  • Other interesting articles
  • Frequently asked questions about qualitative research

Qualitative research is used to understand how people experience the world. While there are many approaches to qualitative research, they tend to be flexible and focus on retaining rich meaning when interpreting data.

Common approaches include grounded theory, ethnography, action research, phenomenological research, and narrative research. They share some similarities, but emphasize different aims and perspectives.

Qualitative research approaches
Approach What does it involve?
Grounded theory Researchers collect rich data on a topic of interest and develop theories.
Ethnography Researchers immerse themselves in groups or organizations to understand their cultures.
Action research Researchers and participants collaboratively link theory to practice to drive social change.
Phenomenological research Researchers investigate a phenomenon or event by describing and interpreting participants’ lived experiences.
Narrative research Researchers examine how stories are told to understand how participants perceive and make sense of their experiences.

Note that qualitative research is at risk of certain research biases, including the Hawthorne effect, observer bias, recall bias, and social desirability bias. While these are not always fully avoidable, awareness of potential biases as you collect and analyze your data can prevent them from affecting your work too much.


Each of these research approaches involves using one or more data collection methods. These are some of the most common qualitative methods:

  • Observations: recording what you have seen, heard, or encountered in detailed field notes.
  • Interviews: personally asking people questions in one-on-one conversations.
  • Focus groups: asking questions and generating discussion among a group of people.
  • Surveys: distributing questionnaires with open-ended questions.
  • Secondary research: collecting existing data in the form of texts, images, audio or video recordings, etc.
For example, in a study of one company’s culture, you might combine several of these methods:

  • You take field notes with observations and reflect on your own experiences of the company culture.
  • You distribute open-ended surveys to employees across all the company’s offices by email to find out if the culture varies across locations.
  • You conduct in-depth interviews with employees in your office to learn about their experiences and perspectives in greater detail.

Qualitative researchers often consider themselves “instruments” in research because all observations, interpretations and analyses are filtered through their own personal lens.

For this reason, when writing up your methodology for qualitative research, it’s important to reflect on your approach and to thoroughly explain the choices you made in collecting and analyzing the data.

Qualitative data can take the form of texts, photos, videos and audio. For example, you might be working with interview transcripts, survey responses, fieldnotes, or recordings from natural settings.

Most types of qualitative data analysis share the same five steps:

  1. Prepare and organize your data. This may mean transcribing interviews or typing up fieldnotes.
  2. Review and explore your data. Examine the data for patterns or repeated ideas that emerge.
  3. Develop a data coding system. Based on your initial ideas, establish a set of codes that you can apply to categorize your data.
  4. Assign codes to the data. For example, in qualitative survey analysis, this may mean going through each participant’s responses and tagging them with codes in a spreadsheet. As you go through your data, you can create new codes to add to your system if necessary.
  5. Identify recurring themes. Link codes together into cohesive, overarching themes.
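A minimal sketch in code may make the coding and theming steps concrete. This example is not from the article: the survey responses, the keyword-based codebook, and the grouping of codes into themes are all invented for illustration.

```python
from collections import Counter

# Hypothetical open-ended survey responses (already transcribed and organized).
responses = [
    "I love the flexibility, but meetings run too long.",
    "Management listens, though meetings eat up my mornings.",
    "Flexible hours make it easy to balance family life.",
]

# A simple coding system: each code is signaled by a set of keywords.
codebook = {
    "flexibility": ["flexibility", "flexible"],
    "meetings": ["meetings"],
    "management": ["management", "listens"],
}

# Assign codes to each response by keyword matching.
coded = []
for text in responses:
    lower = text.lower()
    codes = [code for code, keywords in codebook.items()
             if any(k in lower for k in keywords)]
    coded.append({"text": text, "codes": codes})

# Link codes into overarching themes and count how often each theme recurs.
themes = {
    "work-life balance": {"flexibility"},
    "time pressure": {"meetings"},
    "leadership": {"management"},
}
theme_counts = Counter(
    theme
    for item in coded
    for theme, theme_codes in themes.items()
    if theme_codes & set(item["codes"])
)
print(theme_counts)
```

In practice, coding is an interpretive act performed by the researcher; keyword matching like this is only a crude stand-in, or at best a first pass to be reviewed manually.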

There are several specific approaches to analyzing qualitative data. Although these methods share similar processes, they emphasize different concepts.

Qualitative data analysis
Approach When to use Example
Content analysis To describe and categorize common words, phrases, and ideas in qualitative data. A market researcher could perform content analysis to find out what kind of language is used in descriptions of therapeutic apps.
Thematic analysis To identify and interpret patterns and themes in qualitative data. A psychologist could apply thematic analysis to travel blogs to explore how tourism shapes self-identity.
Textual analysis To examine the content, structure, and design of texts. A media researcher could use textual analysis to understand how news coverage of celebrities has changed in the past decade.
Discourse analysis To study communication and how language is used to achieve effects in specific contexts. A political scientist could use discourse analysis to study how politicians generate trust in election campaigns.
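As an illustration of content analysis, researchers often begin with simple frequency counts of words or phrases. The snippet below is a toy sketch, not taken from the article; the app descriptions and the stop-word list are invented.

```python
import re
from collections import Counter

# Hypothetical app-store descriptions of therapeutic apps.
descriptions = [
    "Calm your mind with guided meditation and mindful breathing.",
    "Track your mood and build mindful habits with daily check-ins.",
    "Guided exercises to calm anxiety and improve sleep.",
]

# Words too common to be informative in this toy example.
stop_words = {"your", "with", "and", "to", "the", "a"}

# Tokenize, lowercase, and drop stop words.
tokens = [
    word
    for text in descriptions
    for word in re.findall(r"[a-z']+", text.lower())
    if word not in stop_words
]

# The most frequent remaining words hint at the language used.
print(Counter(tokens).most_common(3))
# [('calm', 2), ('guided', 2), ('mindful', 2)]
```

Real content analysis goes well beyond raw counts, categorizing phrases and ideas in context, but frequency tables are a common starting point.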

Qualitative research often tries to preserve the voice and perspective of participants and can be adjusted as new research questions arise. Qualitative research is good for:

  • Flexibility

The data collection and analysis process can be adapted as new ideas or patterns emerge; it is not rigidly decided beforehand.

  • Natural settings

Data collection occurs in real-world contexts or in naturalistic ways.

  • Meaningful insights

Detailed descriptions of people’s experiences, feelings and perceptions can be used in designing, testing or improving systems or products.

  • Generation of new ideas

Open-ended responses mean that researchers can uncover novel problems or opportunities that they wouldn’t have thought of otherwise.


Researchers must consider practical and theoretical limitations in analyzing and interpreting their data. Qualitative research suffers from:

  • Unreliability

The real-world setting often makes qualitative research unreliable because of uncontrolled factors that affect the data.

  • Subjectivity

Due to the researcher’s primary role in analyzing and interpreting data, qualitative research cannot be replicated. The researcher decides what is important and what is irrelevant in data analysis, so interpretations of the same data can vary greatly.

  • Limited generalizability

Small samples are often used to gather detailed data about specific contexts. Despite rigorous analysis procedures, it is difficult to draw generalizable conclusions because the data may be biased and unrepresentative of the wider population.

  • Labor-intensive

Although software can be used to manage and record large amounts of text, data analysis often has to be checked or performed manually.

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Chi square goodness of fit test
  • Degrees of freedom
  • Null hypothesis
  • Discourse analysis
  • Control groups
  • Mixed methods research
  • Non-probability sampling
  • Quantitative research
  • Inclusion and exclusion criteria

Research bias

  • Rosenthal effect
  • Implicit bias
  • Cognitive bias
  • Selection bias
  • Negativity bias
  • Status quo bias

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses. Qualitative methods allow you to explore concepts and experiences in more detail.

There are five common approaches to qualitative research:

  • Grounded theory involves collecting data in order to develop new theories.
  • Ethnography involves immersing yourself in a group or organization to understand its culture.
  • Narrative research involves interpreting stories to understand how people make sense of their experiences and perceptions.
  • Phenomenological research involves investigating phenomena through people’s lived experiences.
  • Action research links theory and practice in several cycles to drive innovative changes.

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.

There are various approaches to qualitative data analysis, but they all share five steps in common:

  1. Prepare and organize your data.
  2. Review and explore your data.
  3. Develop a data coding system.
  4. Assign codes to the data.
  5. Identify recurring themes.

The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis, thematic analysis, and discourse analysis.

Cite this Scribbr article


Bhandari, P. (2024, September 05). What Is Qualitative Research? | Methods & Examples. Scribbr. Retrieved September 20, 2024, from https://www.scribbr.com/methodology/qualitative-research/


Critically appraising qualitative research

  • Ayelet Kuper, assistant professor 1
  • Lorelei Lingard, associate professor 2
  • Wendy Levinson, Sir John and Lady Eaton professor and chair 3
  • 1 Department of Medicine, Sunnybrook Health Sciences Centre, and Wilson Centre for Research in Education, University of Toronto, 2075 Bayview Avenue, Room HG 08, Toronto, ON, Canada M4N 3M5
  • 2 Department of Paediatrics and Wilson Centre for Research in Education, University of Toronto and SickKids Learning Institute; BMO Financial Group Professor in Health Professions Education Research, University Health Network, 200 Elizabeth Street, Eaton South 1-565, Toronto
  • 3 Department of Medicine, Sunnybrook Health Sciences Centre
  • Correspondence to: A Kuper ayelet94{at}post.harvard.edu

Six key questions will help readers to assess qualitative research

Summary points

Appraising qualitative research is different from appraising quantitative research

Qualitative research papers should show appropriate sampling, data collection, and data analysis

Transferability of qualitative research depends on context and may be enhanced by using theory

Ethics in qualitative research goes beyond review boards’ requirements to involve complex issues of confidentiality, reflexivity, and power

Over the past decade, readers of medical journals have gained skills in critically appraising studies to determine whether the results can be trusted and applied to their own practice settings. Criteria have been designed to assess studies that use quantitative methods, and these are now in common use.

In this article we offer guidance for readers on how to assess a study that uses qualitative research methods by providing six key questions to ask when reading qualitative research (box 1). However, the thorough assessment of qualitative research is an interpretive act and requires informed reflective thought rather than the simple application of a scoring system.

Box 1 Key questions to ask when reading qualitative research studies

Was the sample used in the study appropriate to its research question?

Were the data collected appropriately?

Were the data analysed appropriately?

Can I transfer the results of this study to my own setting?

Does the study adequately address potential ethical issues, including reflexivity?

Overall: is what the researchers did clear?

One of the critical decisions in a qualitative study is whom or what to include in the sample—whom to interview, whom to observe, what texts to analyse. An understanding that qualitative research is based in experience and in the construction of meaning, combined with the specific research question, should guide the sampling process. For example, a study of the experience of survivors of domestic violence that examined their reasons for not seeking help from healthcare providers might focus on interviewing a …


Qualitative research: its value and applicability

BJPsych Bulletin (The Psychiatrist), Volume 37, Issue 6

Published online by Cambridge University Press:  02 January 2018

Qualitative research has a rich tradition in the study of human social behaviour and cultures. Its general aim is to develop concepts which help us to understand social phenomena in, wherever possible, natural rather than experimental settings, to gain an understanding of the experiences, perceptions and/or behaviours of individuals, and the meanings attached to them. The effective application of qualitative methods to other disciplines, including clinical, health service and education research, has a rapidly expanding and robust evidence base. Qualitative approaches have particular potential in psychiatry research, singularly and in combination with quantitative methods. This article outlines the nature and potential application of qualitative research as well as attempting to counter a number of misconceptions.

Qualitative research has a rich tradition in the social sciences. Since the late 19th century, researchers interested in studying the social behaviour and cultures of humankind have perceived limitations in trying to explain the phenomena they encounter in purely quantifiable, measurable terms. Anthropology, in its social and cultural forms, was one of the foremost disciplines in developing what would later be termed a qualitative approach, founded as it was on ethnographic studies which sought an understanding of the culture of people from other societies, often hitherto unknown and far removed in geography [Bernard 1]. Early researchers would spend extended periods of time living in societies, observing, noting and photographing the minutiae of daily life, with the most committed often learning the language of peoples they observed, in the hope of gaining greater acceptance by them and a more detailed understanding of the cultural norms at play. All academic disciplines concerned with human and social behaviour, including anthropology, sociology and psychology, now make extensive use of qualitative research methods whose systematic application was first developed by these colonial-era social scientists.

Their methods, involving observation, participation and discussion of the individuals and groups being studied, as well as reading related textual and visual media and artefacts, form the bedrock of all qualitative social scientific inquiry. The general aim of qualitative research is thus to develop concepts which help us to understand social phenomena in, wherever possible, natural rather than experimental settings, to gain an understanding of the experiences, perceptions and/or behaviours of those studied, and the meanings attached to them [Bryman 2]. Researchers interested in finding out why people behave the way they do; how people are affected by events; how attitudes and opinions are formed; and how and why cultures and practices have developed in the way they have, might well consider qualitative methods to answer their questions.

It is fair to say that clinical and health-related research is still dominated by quantitative methods, of which the randomised controlled trial, focused on hypothesis-testing through experiment controlled by randomisation, is perhaps the quintessential method. Qualitative approaches may seem obscure to the uninitiated when directly compared with the experimental, quantitative methods used in clinical research. There is increasing recognition among researchers in these fields, however, that qualitative methods such as observation, in-depth interviews, focus groups, consensus methods, case studies and the interpretation of texts can be more effective than quantitative approaches in exploring complex phenomena and as such are valuable additions to the methodological armoury available to them [Denzin and Lincoln 3].

In considering what kind of research questions are best answered using a qualitative approach, it is important to remember that, first and foremost, unlike quantitative research, inquiry conducted in the qualitative tradition seeks to answer the question ‘What?’ as opposed to ‘How often?’. Qualitative methods are designed to reveal what is going on by describing and interpreting phenomena; they do not attempt to measure how often an event or association occurs. Research conducted using qualitative methods is normally done with an intent to preserve the inherent complexities of human behaviour as opposed to assuming a reductive view of the subject in order to count and measure the occurrence of phenomena. Qualitative research normally takes an inductive approach, moving from observation to hypothesis rather than hypothesis-testing or deduction, although the latter is perfectly possible.

When conducting research in this tradition, the researcher should, if possible, avoid separating the stages of study design, data collection and analysis, but instead weave backwards and forwards between the raw data and the process of conceptualisation, thereby making sense of the data throughout the period of data collection. Although there are inevitable tensions among methodologists concerned with qualitative practice, there is broad consensus that a priori categories and concepts reflecting a researcher's own preconceptions should not be imposed on the process of data collection and analysis. The emphasis should be on capturing and interpreting research participants' true perceptions and/or behaviours.

Using combined approaches

The polarity between qualitative and quantitative research has been largely assuaged, to the benefit of all disciplines which now recognise the value, and compatibility, of both approaches [Barbour 4]. Indeed, there can be particular value in using quantitative methods in combination with qualitative methods. In the exploratory stages of a research project, qualitative methodology can be used to clarify or refine the research question, to aid conceptualisation and to generate a hypothesis. It can also help to identify the correct variables to be measured, as researchers have been known to measure before they fully understand the underlying issues pertaining to a study and, as a consequence, may not always target the most appropriate factors. Qualitative work can be valuable in the interpretation, qualification or illumination of quantitative research findings. This is particularly helpful when focusing on anomalous results, as these test the main hypothesis that was formulated. Qualitative methods can also be used in combination with quantitative methods to triangulate findings and support the validation process, for example, where three or more methods are used and the results compared for similarity (e.g. a survey, interviews and a period of observation in situ).

‘There is little value in qualitative research findings because we cannot generalise from them’

Generalisability refers to the extent to which an account can be applied to people, times and settings other than those actually studied. A common criticism of qualitative research is that the results of a study are rarely, if ever, generalisable to a larger population because the sample groups are small and the participants are not chosen randomly. Such criticism fails to recognise the distinctiveness of qualitative research where sampling is concerned. In quantitative research, the intent is to secure a large random sample that is representative of the general population, with the purpose of eliminating individual variations, focusing on generalisations and thereby allowing for statistical inference of results that are applicable across an entire population. In qualitative research, generalisability is based on the assumption that it is valuable to begin to understand similar situations or people, rather than being representative of the target population. Qualitative research is rarely based on the use of random samples, so the kinds of reference to wider populations made on the basis of surveys cannot be used in qualitative analysis.

Qualitative researchers utilise purposive sampling, whereby research participants are selected deliberately to test a particular theoretical premise. The purpose of sampling here is not to identify a random subgroup of the general population from which statistically significant results can be extrapolated, but rather to identify, in a systematic way, individuals who possess relevant characteristics for the question being considered [Strauss and Corbin 5]. The researchers must instead ensure that any reference to people and settings beyond those in the study are justified, which is normally achieved by defining, in detail, the type of settings and people to whom the explanation or theory applies based on the identification of similar settings and people in the study. The intent is to permit a detailed examination of the phenomenon, resulting in a text-rich interpretation that can deepen our understanding and produce a plausible explanation of the phenomenon under study. The results are not intended to be statistically generalisable, although any theory they generate might well be.

‘Qualitative research cannot really claim reliability or validity’

In quantitative research, reliability is the extent to which different observers, or the same observers on different occasions, make the same observations or collect the same data about the same object of study. The changing nature of social phenomena scrutinised by qualitative researchers inevitably makes the possibility of the same kind of reliability problematic in their work. A number of alternative concepts to reliability have been developed by qualitative methodologists, however, known collectively as forms of trustworthiness [Guba 6].

One way to demonstrate trustworthiness is to present detailed evidence in the form of quotations from interviews and field notes, along with thick textual descriptions of episodes, events and settings. To be trustworthy, qualitative analysis should also be auditable, making it possible to retrace the steps leading to a certain interpretation or theory to check that no alternatives were left unexamined and that no researcher biases had any avoidable influence on the results. Usually, this involves the recording of information about who did what with the data and in what order so that the origin of interpretations can be retraced.

In general, within the research traditions of the natural sciences, findings are validated by their repeated replication, and if a second investigator cannot replicate the findings when they repeat the experiment then the original results are questioned. If no one else can replicate the original results then they are rejected as fatally flawed and therefore invalid. Natural scientists have developed a broad spectrum of procedures and study designs to ensure that experiments are dependable and that replication is possible. In the social sciences, particularly when using qualitative research methods, replication is rarely possible given that, when observed or questioned again, respondents will almost never say or do precisely the same things. Whether results have been successfully replicated is always a matter of interpretation. There are, however, procedures that, if followed, can significantly reduce the possibility of producing analyses that are partial or biased [Altheide, Johnson, Denzin and Lincoln 7].

Triangulation is one way of doing this. It essentially means combining multiple views, approaches or methods in an investigation to obtain a more accurate interpretation of the phenomena, thereby creating an analysis of greater depth and richness. As the process of analysing qualitative data normally involves some form of coding, whereby data are broken down into units of analysis, constant comparison can also be used. Constant comparison involves checking the consistency and accuracy of interpretations and especially the application of codes by constantly comparing one interpretation or code with others both of a similar sort and in other cases and settings. This in effect is a form of interrater reliability, involving multiple researchers or teams in the coding process so that it is possible to compare how they have coded the same passages and where there are areas of agreement and disagreement so that consensus can be reached about a code's definition, improving consistency and rigour. It is also good practice in qualitative analysis to look constantly for outliers – results that are out of line with your main findings or any which directly contradict what your explanations might predict, re-examining the data to try to find a way of explaining the atypical finding to produce a modified and more complex theory and explanation.
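The interrater comparison described above is often quantified with Cohen's kappa, which corrects raw agreement between two coders for the agreement they would reach by chance. The sketch below is illustrative only and not from the article: the two coders' code assignments are invented, and the formula is written out directly rather than taken from a statistics library.

```python
from collections import Counter

# Hypothetical codes assigned to the same ten passages by two coders.
coder_a = ["trust", "trust", "policy", "trust", "policy",
           "identity", "trust", "policy", "identity", "trust"]
coder_b = ["trust", "policy", "policy", "trust", "policy",
           "identity", "trust", "trust", "identity", "trust"]

def cohens_kappa(a, b):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(a)
    # Observed agreement: fraction of passages coded identically.
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement: probability both raters pick each code independently.
    counts_a, counts_b = Counter(a), Counter(b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

print(round(cohens_kappa(coder_a, coder_b), 3))
```

A kappa around 0.68, as in this invented example, is conventionally read as substantial but imperfect agreement, which would prompt the coders to discuss the disputed passages and refine the code definitions, exactly the consensus-building process described above.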

Qualitative research has been established for many decades in the social sciences and encompasses a valuable set of methodological tools for data collection, analysis and interpretation. Their effective application to other disciplines, including clinical, health service and education research, has a rapidly expanding and robust evidence base. The use of qualitative approaches to research in psychiatry has particular potential, singularly and in combination with quantitative methods [Crabb and Chur-Hansen 8]. When devising research questions in the specialty, careful thought should always be given to the most appropriate methodology, and consideration given to the great depth and richness of empirical evidence which a robust qualitative approach is able to provide.


Steven J. Agius. The Psychiatrist, Volume 37, Issue 6. DOI: https://doi.org/10.1192/pb.bp.113.042770


  • Open access
  • Published: 27 May 2020

How to use and assess qualitative research methods

  • Loraine Busetto (ORCID: orcid.org/0000-0002-9228-7875) 1
  • Wolfgang Wick 1, 2
  • Christoph Gumbinger 1

Neurological Research and Practice, volume 2, Article number: 14 (2020)


This paper aims to provide an overview of the use and assessment of qualitative research methods in the health sciences. Qualitative research can be defined as the study of the nature of phenomena and is especially appropriate for answering questions of why something is (not) observed, assessing complex multi-component interventions, and focussing on intervention improvement. The most common methods of data collection are document study, (non-) participant observations, semi-structured interviews and focus groups. For data analysis, field-notes and audio-recordings are transcribed into protocols and transcripts, and coded using qualitative data management software. Criteria such as checklists, reflexivity, sampling strategies, piloting, co-coding, member-checking and stakeholder involvement can be used to enhance and assess the quality of the research conducted. Using qualitative in addition to quantitative designs will equip us with better tools to address a greater range of research problems, and to fill in blind spots in current neurological research and practice.

The aim of this paper is to provide an overview of qualitative research methods, including hands-on information on how they can be used, reported and assessed. This article is intended for beginning qualitative researchers in the health sciences as well as experienced quantitative researchers who wish to broaden their understanding of qualitative research.

What is qualitative research?

Qualitative research is defined as “the study of the nature of phenomena”, including “their quality, different manifestations, the context in which they appear or the perspectives from which they can be perceived”, but excluding “their range, frequency and place in an objectively determined chain of cause and effect” [1]. This formal definition can be complemented with a more pragmatic rule of thumb: qualitative research generally includes data in the form of words rather than numbers [2].

Why conduct qualitative research?

Because some research questions cannot be answered using (only) quantitative methods. For example, one Australian study addressed the issue of why patients from Aboriginal communities often present late or not at all to specialist services offered by tertiary care hospitals. Using qualitative interviews with patients and staff, it found one of the most significant access barriers to be transportation problems, including some towns and communities simply not having a bus service to the hospital [ 3 ]. A quantitative study could have measured the number of patients over time or even looked at possible explanatory factors – but only those previously known or suspected to be of relevance. To discover reasons for observed patterns, especially the invisible or surprising ones, qualitative designs are needed.

While qualitative research is common in other fields, it is still relatively underrepresented in health services research. The latter field is more traditionally rooted in the evidence-based-medicine paradigm, as seen in “research that involves testing the effectiveness of various strategies to achieve changes in clinical practice, preferably applying randomised controlled trial study designs (...)” [4]. This focus on quantitative research and specifically randomised controlled trials (RCTs) is visible in the idea of a hierarchy of research evidence, which assumes that some research designs are objectively better than others and that choosing a “lesser” design is only acceptable when the better ones are not practically or ethically feasible [5, 6]. Others, however, argue that an objective hierarchy does not exist and that, instead, the research design and methods should be chosen to fit the specific research question at hand – “questions before methods” [2, 7, 8, 9]. This means that even when an RCT is possible, some research problems require a different design that is better suited to addressing them. Arguing in JAMA, Berwick uses the example of rapid response teams in hospitals, which he describes as “a complex, multicomponent intervention – essentially a process of social change” susceptible to a range of different context factors, including leadership or organisation history. According to him, “[in] such complex terrain, the RCT is an impoverished way to learn. Critics who use it as a truth standard in this context are incorrect” [8]. Instead of limiting oneself to RCTs, Berwick recommends embracing a wider range of methods, including qualitative ones, which for “these specific applications, (...) are not compromises in learning how to improve; they are superior” [8].

Research problems that can be approached particularly well using qualitative methods include assessing complex multi-component interventions or systems (of change), addressing questions beyond “what works”, towards “what works for whom when, how and why”, and focussing on intervention improvement rather than accreditation [ 7 , 9 , 10 , 11 , 12 ]. Using qualitative methods can also help shed light on the “softer” side of medical treatment. For example, while quantitative trials can measure the costs and benefits of neuro-oncological treatment in terms of survival rates or adverse effects, qualitative research can help provide a better understanding of patient or caregiver stress, visibility of illness or out-of-pocket expenses.

How to conduct qualitative research?

Given that qualitative research is characterised by flexibility, openness and responsivity to context, the steps of data collection and analysis are not as separate and consecutive as they tend to be in quantitative research [13, 14]. As Fossey puts it: “sampling, data collection, analysis and interpretation are related to each other in a cyclical (iterative) manner, rather than following one after another in a stepwise approach” [15]. The researcher can make educated decisions with regard to the choice of method, how they are implemented, and to which and how many units they are applied [13]. As shown in Fig. 1, this can involve several back-and-forth steps between data collection and analysis where new insights and experiences can lead to adaptation and expansion of the original plan. Some insights may also necessitate a revision of the research question and/or the research design as a whole. The process ends when saturation is achieved, i.e. when no relevant new information can be found (see also below: sampling and saturation). For reasons of transparency, it is essential for all decisions as well as the underlying reasoning to be well-documented.

figure 1

Iterative research process

While it is not always explicitly addressed, qualitative methods reflect a different underlying research paradigm than quantitative research (e.g. constructivism or interpretivism as opposed to positivism). The choice of methods can be based on the respective underlying substantive theory or theoretical framework used by the researcher [ 2 ].

Data collection

The methods of qualitative data collection most commonly used in health research are document study, observations, semi-structured interviews and focus groups [ 1 , 14 , 16 , 17 ].

Document study

Document study (also called document analysis) refers to the review by the researcher of written materials [ 14 ]. These can include personal and non-personal documents such as archives, annual reports, guidelines, policy documents, diaries or letters.

Observations

Observations are particularly useful to gain insights into a certain setting and actual behaviour – as opposed to reported behaviour or opinions [ 13 ]. Qualitative observations can be either participant or non-participant in nature. In participant observations, the observer is part of the observed setting, for example a nurse working in an intensive care unit [ 18 ]. In non-participant observations, the observer is “on the outside looking in”, i.e. present in but not part of the situation, trying not to influence the setting by their presence. Observations can be planned (e.g. for 3 h during the day or night shift) or ad hoc (e.g. as soon as a stroke patient arrives at the emergency room). During the observation, the observer takes notes on everything or certain pre-determined parts of what is happening around them, for example focusing on physician-patient interactions or communication between different professional groups. Written notes can be taken during or after the observations, depending on feasibility (which is usually lower during participant observations) and acceptability (e.g. when the observer is perceived to be judging the observed). Afterwards, these field notes are transcribed into observation protocols. If more than one observer was involved, field notes are taken independently, but notes can be consolidated into one protocol after discussions. Advantages of conducting observations include minimising the distance between the researcher and the researched, the potential discovery of topics that the researcher did not realise were relevant and gaining deeper insights into the real-world dimensions of the research problem at hand [ 18 ].

Semi-structured interviews

Hijmans & Kuyper describe qualitative interviews as “an exchange with an informal character, a conversation with a goal” [19]. Interviews are used to gain insights into a person’s subjective experiences, opinions and motivations – as opposed to facts or behaviours [13]. Interviews can be distinguished by the degree to which they are structured (i.e. a questionnaire), open (e.g. free conversation or autobiographical interviews) or semi-structured [2, 13]. Semi-structured interviews are characterized by open-ended questions and the use of an interview guide (or topic guide/list) in which the broad areas of interest, sometimes including sub-questions, are defined [19]. The pre-defined topics in the interview guide can be derived from the literature, previous research or a preliminary method of data collection, e.g. document study or observations. The topic list is usually adapted and improved at the start of the data collection process as the interviewer learns more about the field [20]. Across interviews, the focus on the different (blocks of) questions may differ, and some questions may be skipped altogether (e.g. if the interviewee is not able or willing to answer the questions or for concerns about the total length of the interview) [20]. Qualitative interviews are usually not conducted in written format, as this impedes the interactive component of the method [20]. In comparison to written surveys, qualitative interviews have the advantage of being interactive and allowing for unexpected topics to emerge and to be taken up by the researcher. This can also help overcome a provider- or researcher-centred bias often found in written surveys, which, by nature, can only measure what is already known or expected to be of relevance to the researcher. Interviews can be audio- or video-taped, but sometimes it is only feasible or acceptable for the interviewer to take written notes [14, 16, 20].

Focus groups

Focus groups are group interviews to explore participants’ expertise and experiences, including explorations of how and why people behave in certain ways [ 1 ]. Focus groups usually consist of 6–8 people and are led by an experienced moderator following a topic guide or “script” [ 21 ]. They can involve an observer who takes note of the non-verbal aspects of the situation, possibly using an observation guide [ 21 ]. Depending on researchers’ and participants’ preferences, the discussions can be audio- or video-taped and transcribed afterwards [ 21 ]. Focus groups are useful for bringing together homogeneous (to a lesser extent heterogeneous) groups of participants with relevant expertise and experience on a given topic on which they can share detailed information [ 21 ]. Focus groups are a relatively easy, fast and inexpensive method to gain access to information on interactions in a given group, i.e. “the sharing and comparing” among participants [ 21 ]. Disadvantages include less control over the process and a lesser extent to which each individual may participate. Moreover, focus group moderators need experience, as do those tasked with the analysis of the resulting data. Focus groups can be less appropriate for discussing sensitive topics that participants might be reluctant to disclose in a group setting [ 13 ]. Moreover, attention must be paid to the emergence of “groupthink” as well as possible power dynamics within the group, e.g. when patients are awed or intimidated by health professionals.

Choosing the “right” method

As explained above, the school of thought underlying qualitative research assumes no objective hierarchy of evidence and methods. This means that each choice of single or combined methods has to be based on the research question that needs to be answered and a critical assessment with regard to whether or to what extent the chosen method can accomplish this – i.e. the “fit” between question and method [ 14 ]. It is necessary for these decisions to be documented when they are being made, and to be critically discussed when reporting methods and results.

Let us assume that our research aim is to examine the (clinical) processes around acute endovascular treatment (EVT), from the patient’s arrival at the emergency room to recanalization, with the aim of identifying possible causes of delay and/or other causes of sub-optimal treatment outcomes. As a first step, we could conduct a document study of the relevant standard operating procedures (SOPs) for this phase of care – are they up-to-date and in line with current guidelines? Do they contain any mistakes, irregularities or uncertainties that could cause delays or other problems? Regardless of the answers to these questions, the results have to be interpreted based on what they are: a written outline of what care processes in this hospital should look like. If we want to know what they actually look like in practice, we can conduct observations of the processes described in the SOPs. These results can (and should) be analysed in themselves, but also in comparison to the results of the document analysis, especially as regards relevant discrepancies. Do the SOPs outline specific tests for which no equipment can be observed, or tasks to be performed by specialized nurses who are not present during the observation? It might also be possible that the written SOP is outdated, but the actual care provided is in line with current best practice. In order to find out why these discrepancies exist, it can be useful to conduct interviews. Are the physicians simply not aware of the SOPs (because their existence is limited to the hospital’s intranet), do they actively disagree with them, or does the infrastructure make it impossible to provide the care as described? Another rationale for adding interviews is that some situations (or all of their possible variations for different patient groups or the day, night or weekend shift) cannot practically or ethically be observed.
In this case, it is possible to ask those involved to report on their actions – being aware that this is not the same as the actual observation. A senior physician’s or hospital manager’s description of certain situations might differ from a nurse’s or junior physician’s, maybe because they intentionally misrepresent facts or maybe because different aspects of the process are visible or important to them. In some cases, it can also be relevant to consider to whom the interviewee is disclosing this information – someone they trust, someone they are otherwise not connected to, or someone they suspect or know to be in a potentially “dangerous” power relationship to them. Lastly, a focus group could be conducted with representatives of the relevant professional groups to explore how and why exactly they provide care around EVT. The discussion might reveal discrepancies (between SOPs and actual care or between different physicians) and motivations to the researchers as well as to the focus group members that they might not have been aware of themselves. For the focus group to deliver relevant information, attention has to be paid to its composition and conduct, for example, to make sure that all participants feel safe to disclose sensitive or potentially problematic information or that the discussion is not dominated by (senior) physicians only. The resulting combination of data collection methods is shown in Fig. 2.

figure 2

Possible combination of data collection methods

Attributions for icons: “Book” by Serhii Smirnov, “Interview” by Adrien Coquet, FR, “Magnifying Glass” by anggun, ID, “Business communication” by Vectors Market; all from the Noun Project

The combination of multiple data sources as described for this example can be referred to as “triangulation”, in which multiple measurements are carried out from different angles to achieve a more comprehensive understanding of the phenomenon under study [22, 23].

Data analysis

To analyse the data collected through observations, interviews and focus groups, these need to be transcribed into protocols and transcripts (see Fig. 3). Interviews and focus groups can be transcribed verbatim, with or without annotations for behaviour (e.g. laughing, crying, pausing) and with or without phonetic transcription of dialects and filler words, depending on what is expected or known to be relevant for the analysis. In the next step, the protocols and transcripts are coded, that is, marked (or tagged, labelled) with one or more short descriptors of the content of a sentence or paragraph [2, 15, 23]. Jansen describes coding as “connecting the raw data with ‘theoretical’ terms” [20]. In a more practical sense, coding makes raw data sortable. This makes it possible to extract and examine all segments describing, say, a tele-neurology consultation from multiple data sources (e.g. SOPs, emergency room observations, staff and patient interviews). In a process of synthesis and abstraction, the codes are then grouped, summarised and/or categorised [15, 20]. The end product of the coding or analysis process is a descriptive theory of the behavioural pattern under investigation [20]. The coding process is performed using qualitative data management software, the most common ones being NVivo, MAXQDA and ATLAS.ti. It should be noted that these are data management tools which support the analysis performed by the researcher(s) [14].

figure 3

From data collection to data analysis

Attributions for icons: see Fig. 2 , also “Speech to text” by Trevor Dsouza, “Field Notes” by Mike O’Brien, US, “Voice Record” by ProSymbols, US, “Inspection” by Made, AU, and “Cloud” by Graphic Tigers; all from the Noun Project
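To make the idea of “coding makes raw data sortable” concrete, the following is a minimal, illustrative Python sketch only. The segment texts, source names and code labels are all invented; real projects would use the dedicated software named above rather than hand-rolled scripts.

```python
# Illustrative sketch: each coded segment links a snippet of raw data
# from one data source to one or more codes (all examples invented).
segments = [
    {"source": "SOP", "text": "Tele-neurology consult is requested via phone.",
     "codes": ["tele-neurology", "communication"]},
    {"source": "observation", "text": "Resident waits 12 min for consult callback.",
     "codes": ["tele-neurology", "delay"]},
    {"source": "interview_staff", "text": "The consult software often logs us out.",
     "codes": ["tele-neurology", "infrastructure"]},
    {"source": "interview_patient", "text": "Nobody explained the next steps to me.",
     "codes": ["communication"]},
]

def extract(code, segments):
    """Return all segments tagged with a given code, across data sources."""
    return [s for s in segments if code in s["codes"]]

# Coding makes the raw data sortable: pull every segment about
# tele-neurology consultations, regardless of where it was collected.
for seg in extract("tele-neurology", segments):
    print(f'{seg["source"]}: {seg["text"]}')
```

The same sorting step is what the grouping and categorising described above builds on: once segments are tagged, they can be regrouped by code rather than by source.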

How to report qualitative research?

Protocols of qualitative research can be published separately and in advance of the study results. However, the aim is not the same as in RCT protocols, i.e. to pre-define and set in stone the research questions and primary or secondary endpoints. Rather, it is a way to describe the research methods in detail, which might not be possible in the results paper given journals’ word limits. Qualitative research papers are usually longer than their quantitative counterparts to allow for deep understanding and so-called “thick description”. In the methods section, the focus is on transparency of the methods used, including why, how and by whom they were implemented in the specific study setting, so as to enable a discussion of whether and how this may have influenced data collection, analysis and interpretation. The results section usually starts with a paragraph outlining the main findings, followed by more detailed descriptions of, for example, the commonalities, discrepancies or exceptions per category [20]. Here it is important to support the main findings with relevant quotations, which may add information, context, emphasis or real-life examples [20, 23]. It is subject to debate in the field whether it is relevant to state the exact number or percentage of respondents supporting a certain statement (e.g. “Five interviewees expressed negative feelings towards XYZ”) [21].

How to combine qualitative with quantitative research?

Qualitative methods can be combined with other methods in multi- or mixed methods designs, which “[employ] two or more different methods […] within the same study or research program rather than confining the research to one single method” [24]. Reasons for combining methods can be diverse, including triangulation for corroboration of findings, complementarity for illustration and clarification of results, expansion to extend the breadth and range of the study, explanation of (unexpected) results generated with one method with the help of another, or offsetting the weakness of one method with the strength of another [1, 17, 24, 25, 26]. The resulting designs can be classified according to when, why and how the different quantitative and/or qualitative data strands are combined. The three most common types of mixed methods designs are the convergent parallel design, the explanatory sequential design and the exploratory sequential design. The designs with examples are shown in Fig. 4.

figure 4

Three common mixed methods designs

In the convergent parallel design, a qualitative study is conducted in parallel to and independently of a quantitative study, and the results of both studies are compared and combined at the stage of interpretation of results. Using the above example of EVT provision, this could entail setting up a quantitative EVT registry to measure process times and patient outcomes in parallel to conducting the qualitative research outlined above, and then comparing results. Amongst other things, this would make it possible to assess whether interview respondents’ subjective impressions of patients receiving good care match modified Rankin Scores at follow-up, or whether observed delays in care provision are exceptions or the rule when compared to door-to-needle times as documented in the registry. In the explanatory sequential design, a quantitative study is carried out first, followed by a qualitative study to help explain the results from the quantitative study. This would be an appropriate design if the registry alone had revealed relevant delays in door-to-needle times and the qualitative study were used to understand where and why these occurred, and how they could be improved. In the exploratory design, the qualitative study is carried out first and its results help inform and build the quantitative study in the next step [26]. If the qualitative study around EVT provision had shown a high level of dissatisfaction among the staff members involved, a quantitative questionnaire investigating staff satisfaction could be set up in the next step, informed by the qualitative findings on the topics about which dissatisfaction had been expressed. Amongst other things, the questionnaire design would make it possible to widen the reach of the research to more respondents from different (types of) hospitals, regions, countries or settings, and to conduct sub-group analyses for different professional groups.

How to assess qualitative research?

A variety of assessment criteria and lists have been developed for qualitative research, ranging in their focus and comprehensiveness [ 14 , 17 , 27 ]. However, none of these has been elevated to the “gold standard” in the field. In the following, we therefore focus on a set of commonly used assessment criteria that, from a practical standpoint, a researcher can look for when assessing a qualitative research report or paper.

Checklists

Assessors should check the authors’ use of and adherence to the relevant reporting checklists (e.g. the Standards for Reporting Qualitative Research (SRQR)) to make sure all items that are relevant for this type of research are addressed [23, 28]. Discussions of quantitative measures in addition to or instead of these qualitative measures can be a sign of lower quality of the research (paper). Providing and adhering to a checklist for qualitative research contributes to an important quality criterion for qualitative research, namely transparency [15, 17, 23].

Reflexivity

While methodological transparency and complete reporting are relevant for all types of research, some additional criteria must be taken into account for qualitative research. This includes what is called reflexivity, i.e. sensitivity to the relationship between the researcher and the researched, including how contact was established and maintained, or the background and experience of the researcher(s) involved in data collection and analysis. Depending on the research question and population to be researched, this can be limited to professional experience, but it may also include gender, age or ethnicity [17, 27]. These details are relevant because in qualitative research, as opposed to quantitative research, the researcher as a person cannot be isolated from the research process [23]. It may influence the conversation when an interviewed patient speaks to an interviewer who is a physician, or when an interviewee is asked to discuss a gynaecological procedure with a male interviewer, and therefore the reader must be made aware of these details [19].

Sampling and saturation

The aim of qualitative sampling is for all variants of the objects of observation that are deemed relevant for the study to be present in the sample, “to see the issue and its meanings from as many angles as possible” [1, 16, 19, 20, 27], and to ensure “information-richness” [15]. An iterative sampling approach is advised, in which data collection (e.g. five interviews) is followed by data analysis, followed by more data collection to find variants that are lacking in the current sample. This process continues until no new (relevant) information can be found and further sampling becomes redundant – which is called saturation [1, 15]. In other words: qualitative data collection finds its end point not a priori, but when the research team determines that saturation has been reached [29, 30].

This is also the reason why most qualitative studies use deliberate instead of random sampling strategies. This is generally referred to as “purposive sampling”, in which researchers pre-define which types of participants or cases they need to include so as to cover all variations that are expected to be of relevance, based on the literature, previous experience or theory (i.e. theoretical sampling) [14, 20]. Other types of purposive sampling include (but are not limited to) maximum variation sampling, critical case sampling or extreme or deviant case sampling [2]. In the above EVT example, a purposive sample could include all relevant professional groups and/or all relevant stakeholders (patients, relatives) and/or all relevant times of observation (day, night and weekend shift).

Assessors of qualitative research should check whether the considerations underlying the sampling strategy were sound and whether or how researchers tried to adapt and improve their strategies in stepwise or cyclical approaches between data collection and analysis to achieve saturation [ 14 ].
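The iterative collect-analyse cycle ending in saturation can be sketched as a simple loop. This is an illustrative Python sketch only: the simulated “interviews” are reduced to the sets of codes found in each one, and the stopping rule (two consecutive interviews yielding no new codes) is an invented simplification of a judgement that, in a real study, the research team makes.

```python
# Illustrative sketch: iterative data collection until saturation.
# Each simulated "interview" is just the set of codes it yields (invented).
interview_stream = [
    {"transport", "bus-service"},       # interview 1: two new codes
    {"transport", "waiting-times"},     # interview 2: one new code
    {"family-support"},                 # interview 3: one new code
    {"transport"},                      # interview 4: nothing new
    {"waiting-times"},                  # interview 5: nothing new -> stop
    {"bus-service", "family-support"},  # never reached
]

def collect_until_saturated(stream, window=2):
    """Stop once `window` consecutive interviews yield no new codes."""
    known, no_new, used = set(), 0, 0
    for codes in stream:
        used += 1
        new = codes - known      # codes not seen in earlier interviews
        known |= codes
        no_new = 0 if new else no_new + 1
        if no_new >= window:     # simplified saturation criterion
            break
    return used, known

used, known = collect_until_saturated(interview_stream)
print(used, sorted(known))
```

The point of the sketch is the shape of the process, not the stopping rule itself: the sample size falls out of the data, rather than being fixed a priori.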

Piloting

Good qualitative research is iterative in nature, i.e. it goes back and forth between data collection and analysis, revising and improving the approach where necessary. One example of this is pilot interviews, where different aspects of the interview (especially the interview guide, but also, for example, the site of the interview or whether the interview can be audio-recorded) are tested with a small number of respondents, evaluated and revised [19]. In doing so, the interviewer learns which wording or types of questions work best, or what the best length of an interview is with patients who have trouble concentrating for an extended time. Of course, the same reasoning applies to observations or focus groups, which can also be piloted.

Co-coding

Ideally, coding should be performed by at least two researchers, especially at the beginning of the coding process, when a common approach must be defined, including the establishment of a useful coding list (or tree), and when a common meaning of individual codes must be established [23]. An initial sub-set or all transcripts can be coded independently by the coders and then compared and consolidated after regular discussions in the research team. This is to make sure that codes are applied consistently to the research data.

Member checking

Member checking, also called respondent validation , refers to the practice of checking back with study respondents to see if the research is in line with their views [ 14 , 27 ]. This can happen after data collection or analysis or when first results are available [ 23 ]. For example, interviewees can be provided with (summaries of) their transcripts and asked whether they believe this to be a complete representation of their views or whether they would like to clarify or elaborate on their responses [ 17 ]. Respondents’ feedback on these issues then becomes part of the data collection and analysis [ 27 ].

Stakeholder involvement

In those niches where qualitative approaches have been able to evolve and grow, a new trend has seen the inclusion of patients and their representatives not only as study participants (i.e. “members”, see above) but as consultants to and active participants in the broader research process [ 31 , 32 , 33 ]. The underlying assumption is that patients and other stakeholders hold unique perspectives and experiences that add value beyond their own single story, making the research more relevant and beneficial to researchers, study participants and (future) patients alike [ 34 , 35 ]. Using the example of patients on or nearing dialysis, a recent scoping review found that 80% of clinical research did not address the top 10 research priorities identified by patients and caregivers [ 32 , 36 ]. In this sense, the involvement of the relevant stakeholders, especially patients and relatives, is increasingly being seen as a quality indicator in and of itself.

How not to assess qualitative research

The above overview does not include certain items that are routine in assessments of quantitative research. What follows is a non-exhaustive, non-representative, experience-based list of the quantitative criteria often applied to the assessment of qualitative research, as well as an explanation of the limited usefulness of these endeavours.

Protocol adherence

Given the openness and flexibility of qualitative research, it should not be assessed by how well it adheres to pre-determined and fixed strategies – in other words: its rigidity. Instead, the assessor should look for signs of adaptation and refinement based on lessons learned from earlier steps in the research process.

Sample size

For the reasons explained above, qualitative research does not require specific sample sizes, nor does it require that the sample size be determined a priori [ 1 , 14 , 27 , 37 , 38 , 39 ]. Sample size can only be a useful quality indicator when related to the research purpose, the chosen methodology and the composition of the sample, i.e. who was included and why.

Randomisation

While some authors argue that randomisation can be used in qualitative research, this is not commonly the case, as neither its feasibility nor its necessity or usefulness has been convincingly established for qualitative research [13, 27]. Relevant disadvantages include the negative impact of an overly large sample size as well as the possibility (or probability) of selecting “quiet, uncooperative or inarticulate individuals” [17]. Qualitative studies do not use control groups, either.

Interrater reliability, variability and other “objectivity checks”

The concept of "interrater reliability" is sometimes used in qualitative research to assess the extent to which the coding approaches of two co-coders overlap. However, it is not clear what this measure tells us about the quality of the analysis [23]. Such scores can therefore be included in qualitative research reports, preferably with some additional information on what the score means for the analysis, but they are not a requirement. Relatedly, separating those who recruited the study participants from those who collected and analysed the data is not relevant for the quality or "objectivity" of qualitative research. Experience even shows that it can be better to have the same person or team perform all of these tasks [20]. First, when researchers introduce themselves during recruitment, this can enhance trust when the interview takes place days or weeks later with the same researcher. Second, when the audio recording is transcribed for analysis, the researcher who conducted the interviews will usually remember the interviewee and the specific interview situation during data analysis. This can provide additional context for the interpretation of the data, e.g. on whether something might have been meant as a joke [18].
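For illustration, the most commonly reported such score is Cohen's kappa, which corrects the raw percentage agreement between two coders for the agreement expected by chance alone. A minimal sketch (the theme labels are hypothetical, chosen only to show what the score does and does not capture):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' category assignments of the same items."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed proportion of agreement.
    po = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Agreement expected by chance, from each coder's marginal frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    pe = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (po - pe) / (1 - pe)

# Two coders labelling ten interview segments with themes:
a = ["trust", "trust", "cost", "cost", "trust", "access", "cost", "trust", "access", "cost"]
b = ["trust", "cost",  "cost", "cost", "trust", "access", "cost", "trust", "trust",  "cost"]
print(round(cohens_kappa(a, b), 2))  # → 0.68
```

Note what the number hides: it records only that the coders disagreed on two segments, not why, nor whether one coder's reading opened a more insightful interpretation, which is exactly the point made above about its limited value for judging analytic quality.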

Not being quantitative research

Being qualitative rather than quantitative research should not in itself be used as an assessment criterion, irrespective of the research problem at hand. Similarly, qualitative research should not be required to be combined with quantitative research per se – unless mixed methods research is judged as inherently better than single-method research, in which case the same criterion should be applied to quantitative studies without a qualitative component.

Conclusions

The main take-away points of this paper are summarised in Table 1. We aimed to show that, if conducted well, qualitative research can answer specific research questions that cannot be adequately answered using (only) quantitative designs. Seeing qualitative and quantitative methods as equal will help us become more aware and critical of the "fit" between the research problem and our chosen methods: I can conduct an RCT to determine the reasons for transportation delays of acute stroke patients – but should I? It also provides us with a greater range of tools to tackle a greater range of research problems more appropriately and successfully, filling in the blind spots on one half of the methodological spectrum to better address the whole complexity of neurological research and practice.

Availability of data and materials

Not applicable.

Abbreviations

EVT: Endovascular treatment

RCT: Randomised Controlled Trial

SOP: Standard Operating Procedure

SRQR: Standards for Reporting Qualitative Research

References

Philipsen, H., & Vernooij-Dassen, M. (2007). Kwalitatief onderzoek: nuttig, onmisbaar en uitdagend [Qualitative research: useful, indispensable and challenging]. In P. L. B. J. Lucassen & T. C. olde Hartman (Eds.), Kwalitatief onderzoek: Praktische methoden voor de medische praktijk [Qualitative research: Practical methods for medical practice] (pp. 5–12). Houten: Bohn Stafleu van Loghum.


Punch, K. F. (2013). Introduction to social research: Quantitative and qualitative approaches . London: Sage.

Kelly, J., Dwyer, J., Willis, E., & Pekarsky, B. (2014). Travelling to the city for hospital care: Access factors in country aboriginal patient journeys. Australian Journal of Rural Health, 22 (3), 109–113.


Nilsen, P., Ståhl, C., Roback, K., & Cairney, P. (2013). Never the twain shall meet? - a comparison of implementation science and policy implementation research. Implementation Science, 8 (1), 1–12.

Howick J, Chalmers I, Glasziou, P., Greenhalgh, T., Heneghan, C., Liberati, A., Moschetti, I., Phillips, B., & Thornton, H. (2011). The 2011 Oxford CEBM evidence levels of evidence (introductory document) . Oxford Center for Evidence Based Medicine. https://www.cebm.net/2011/06/2011-oxford-cebm-levels-evidence-introductory-document/ .

Eakin, J. M. (2016). Educating critical qualitative health researchers in the land of the randomized controlled trial. Qualitative Inquiry, 22 (2), 107–118.

May, A., & Mathijssen, J. (2015). Alternatieven voor RCT bij de evaluatie van effectiviteit van interventies!? Eindrapportage [Alternatives for RCTs in the evaluation of effectiveness of interventions!? Final report].


Berwick, D. M. (2008). The science of improvement. Journal of the American Medical Association, 299 (10), 1182–1184.


Christ, T. W. (2014). Scientific-based research and randomized controlled trials, the “gold” standard? Alternative paradigms and mixed methodologies. Qualitative Inquiry, 20 (1), 72–80.

Lamont, T., Barber, N., Jd, P., Fulop, N., Garfield-Birkbeck, S., Lilford, R., Mear, L., Raine, R., & Fitzpatrick, R. (2016). New approaches to evaluating complex health and care systems. BMJ, 352:i154.

Drabble, S. J., & O’Cathain, A. (2015). Moving from Randomized Controlled Trials to Mixed Methods Intervention Evaluation. In S. Hesse-Biber & R. B. Johnson (Eds.), The Oxford Handbook of Multimethod and Mixed Methods Research Inquiry (pp. 406–425). London: Oxford University Press.

Chambers, D. A., Glasgow, R. E., & Stange, K. C. (2013). The dynamic sustainability framework: Addressing the paradox of sustainment amid ongoing change. Implementation Science : IS, 8 , 117.

Hak, T. (2007). Waarnemingsmethoden in kwalitatief onderzoek [Observation methods in qualitative research]. In P. L. B. J. Lucassen & T. C. olde Hartman (Eds.), Kwalitatief onderzoek: Praktische methoden voor de medische praktijk [Qualitative research: Practical methods for medical practice] (pp. 13–25). Houten: Bohn Stafleu van Loghum.

Russell, C. K., & Gregory, D. M. (2003). Evaluation of qualitative research studies. Evidence Based Nursing, 6 (2), 36–40.

Fossey, E., Harvey, C., McDermott, F., & Davidson, L. (2002). Understanding and evaluating qualitative research. Australian and New Zealand Journal of Psychiatry, 36 , 717–732.

Yanow, D. (2000). Conducting interpretive policy analysis (Vol. 47). Thousand Oaks: Sage University Papers Series on Qualitative Research Methods.

Shenton, A. K. (2004). Strategies for ensuring trustworthiness in qualitative research projects. Education for Information, 22 , 63–75.

van der Geest, S. (2006). Participeren in ziekte en zorg: meer over kwalitatief onderzoek [Participating in illness and care: more about qualitative research]. Huisarts en Wetenschap, 49(4), 283–287.

Hijmans, E., & Kuyper, M. (2007). Het halfopen interview als onderzoeksmethode [The half-open interview as research method]. In P. L. B. J. Lucassen & T. C. olde Hartman (Eds.), Kwalitatief onderzoek: Praktische methoden voor de medische praktijk [Qualitative research: Practical methods for medical practice] (pp. 43–51). Houten: Bohn Stafleu van Loghum.

Jansen, H. (2007). Systematiek en toepassing van de kwalitatieve survey [Systematics and implementation of the qualitative survey]. In P. L. B. J. Lucassen & T. C. olde Hartman (Eds.), Kwalitatief onderzoek: Praktische methoden voor de medische praktijk [Qualitative research: Practical methods for medical practice] (pp. 27–41). Houten: Bohn Stafleu van Loghum.

van Royen, P., & Peremans, L. (2007). Exploreren met focusgroepgesprekken: de 'stem' van de groep onder de loep [Exploring with focus group conversations: the "voice" of the group under the magnifying glass]. In P. L. B. J. Lucassen & T. C. olde Hartman (Eds.), Kwalitatief onderzoek: Praktische methoden voor de medische praktijk [Qualitative research: Practical methods for medical practice] (pp. 53–64). Houten: Bohn Stafleu van Loghum.

Carter, N., Bryant-Lukosius, D., DiCenso, A., Blythe, J., & Neville, A. J. (2014). The use of triangulation in qualitative research. Oncology Nursing Forum, 41 (5), 545–547.

Boeije, H. (2012). Analyseren in kwalitatief onderzoek: Denken en doen [Analysis in qualitative research: Thinking and doing]. Den Haag: Boom Lemma uitgevers.

Hunter, A., & Brewer, J. (2015). Designing Multimethod Research. In S. Hesse-Biber & R. B. Johnson (Eds.), The Oxford Handbook of Multimethod and Mixed Methods Research Inquiry (pp. 185–205). London: Oxford University Press.

Archibald, M. M., Radil, A. I., Zhang, X., & Hanson, W. E. (2015). Current mixed methods practices in qualitative research: A content analysis of leading journals. International Journal of Qualitative Methods, 14 (2), 5–33.

Creswell, J. W., & Plano Clark, V. L. (2011). Choosing a Mixed Methods Design. In Designing and Conducting Mixed Methods Research . Thousand Oaks: SAGE Publications.

Mays, N., & Pope, C. (2000). Assessing quality in qualitative research. BMJ, 320 (7226), 50–52.

O'Brien, B. C., Harris, I. B., Beckman, T. J., Reed, D. A., & Cook, D. A. (2014). Standards for reporting qualitative research: A synthesis of recommendations. Academic Medicine : Journal of the Association of American Medical Colleges, 89 (9), 1245–1251.

Saunders, B., Sim, J., Kingstone, T., Baker, S., Waterfield, J., Bartlam, B., Burroughs, H., & Jinks, C. (2018). Saturation in qualitative research: Exploring its conceptualization and operationalization. Quality and Quantity, 52 (4), 1893–1907.

Moser, A., & Korstjens, I. (2018). Series: Practical guidance to qualitative research. Part 3: Sampling, data collection and analysis. European Journal of General Practice, 24 (1), 9–18.

Marlett, N., Shklarov, S., Marshall, D., Santana, M. J., & Wasylak, T. (2015). Building new roles and relationships in research: A model of patient engagement research. Quality of Life Research : an international journal of quality of life aspects of treatment, care and rehabilitation, 24 (5), 1057–1067.

Demian, M. N., Lam, N. N., Mac-Way, F., Sapir-Pichhadze, R., & Fernandez, N. (2017). Opportunities for engaging patients in kidney research. Canadian Journal of Kidney Health and Disease, 4, 2054358117703070.

Noyes, J., McLaughlin, L., Morgan, K., Roberts, A., Stephens, M., Bourne, J., Houlston, M., Houlston, J., Thomas, S., Rhys, R. G., et al. (2019). Designing a co-productive study to overcome known methodological challenges in organ donation research with bereaved family members. Health Expectations, 22(4), 824–835.

Piil, K., Jarden, M., & Pii, K. H. (2019). Research agenda for life-threatening cancer. European Journal of Cancer Care (Engl), 28(1), e12935.

Hofmann, D., Ibrahim, F., Rose, D., Scott, D. L., Cope, A., Wykes, T., & Lempp, H. (2015). Expectations of new treatment in rheumatoid arthritis: Developing a patient-generated questionnaire. Health Expectations : an international journal of public participation in health care and health policy, 18 (5), 995–1008.

Jun, M., Manns, B., Laupacis, A., Manns, L., Rehal, B., Crowe, S., & Hemmelgarn, B. R. (2015). Assessing the extent to which current clinical research is consistent with patient priorities: A scoping review using a case study in patients on or nearing dialysis. Canadian Journal of Kidney Health and Disease, 2 , 35.

Baker, S. E., & Edwards, R. (2012). How many qualitative interviews is enough? In National Centre for Research Methods Review Paper. National Centre for Research Methods. http://eprints.ncrm.ac.uk/2273/4/how_many_interviews.pdf .

Sandelowski, M. (1995). Sample size in qualitative research. Research in Nursing & Health, 18 (2), 179–183.

Sim, J., Saunders, B., Waterfield, J., & Kingstone, T. (2018). Can sample size in qualitative research be determined a priori? International Journal of Social Research Methodology, 21 (5), 619–634.


Acknowledgements

No external funding.

Author information

Authors and affiliations.

Department of Neurology, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120, Heidelberg, Germany

Loraine Busetto, Wolfgang Wick & Christoph Gumbinger

Clinical Cooperation Unit Neuro-Oncology, German Cancer Research Center, Heidelberg, Germany

Wolfgang Wick


Contributions

LB drafted the manuscript; WW and CG revised the manuscript; all authors approved the final version.

Corresponding author

Correspondence to Loraine Busetto .

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article.

Busetto, L., Wick, W. & Gumbinger, C. How to use and assess qualitative research methods. Neurol. Res. Pract. 2 , 14 (2020). https://doi.org/10.1186/s42466-020-00059-z


Received : 30 January 2020

Accepted : 22 April 2020

Published : 27 May 2020

DOI : https://doi.org/10.1186/s42466-020-00059-z


Keywords

  • Qualitative research
  • Mixed methods
  • Quality assessment



Qualitative research methods: when to use them and how to judge them

  • Article contents
  • Figures & tables
  • Supplementary Data

K. Hammarberg, M. Kirkman, S. de Lacey, Qualitative research methods: when to use them and how to judge them, Human Reproduction , Volume 31, Issue 3, March 2016, Pages 498–501, https://doi.org/10.1093/humrep/dev334


In March 2015, an impressive set of guidelines for best practice on how to incorporate psychosocial care in routine infertility care was published by the ESHRE Psychology and Counselling Guideline Development Group ( ESHRE Psychology and Counselling Guideline Development Group, 2015 ). The authors report that the guidelines are based on a comprehensive review of the literature and we congratulate them on their meticulous compilation of evidence into a clinically useful document. However, when we read the methodology section, we were baffled and disappointed to find that evidence from research using qualitative methods was not included in the formulation of the guidelines. Despite stating that ‘qualitative research has significant value to assess the lived experience of infertility and fertility treatment’, the group excluded this body of evidence because qualitative research is ‘not generally hypothesis-driven and not objective/neutral, as the researcher puts him/herself in the position of the participant to understand how the world is from the person's perspective’.

Qualitative and quantitative research methods are often juxtaposed as representing two different world views. In quantitative circles, qualitative research is commonly viewed with suspicion and considered lightweight because it involves small samples which may not be representative of the broader population, it is seen as not objective, and the results are assessed as biased by the researchers' own experiences or opinions. In qualitative circles, quantitative research can be dismissed as over-simplifying individual experience in the cause of generalisation, failing to acknowledge researcher biases and expectations in research design, and requiring guesswork to understand the human meaning of aggregate data.

As social scientists who investigate psychosocial aspects of human reproduction, we use qualitative and quantitative methods, separately or together, depending on the research question. The crucial part is to know when to use what method.

The peer-review process is a pillar of scientific publishing. One of the important roles of reviewers is to assess the scientific rigour of the studies from which authors draw their conclusions. If rigour is lacking, the paper should not be published. As with research using quantitative methods, research using qualitative methods is home to the good, the bad and the ugly. It is essential that reviewers know the difference. Rejection letters are hard to take but more often than not they are based on legitimate critique. However, from time to time it is obvious that the reviewer has little grasp of what constitutes rigour or quality in qualitative research. The first author (K.H.) recently submitted a paper that reported findings from a qualitative study about fertility-related knowledge and information-seeking behaviour among people of reproductive age. In the rejection letter one of the reviewers (not from Human Reproduction ) lamented, ‘Even for a qualitative study, I would expect that some form of confidence interval and paired t-tables analysis, etc. be used to analyse the significance of results'. This comment reveals the reviewer's inappropriate application to qualitative research of criteria relevant only to quantitative research.

In this commentary, we give illustrative examples of questions most appropriately answered using qualitative methods and provide general advice about how to appraise the scientific rigour of qualitative studies. We hope this will help the journal's reviewers and readers appreciate the legitimate place of qualitative research and ensure we do not throw the baby out with the bath water by excluding or rejecting papers simply because they report the results of qualitative studies.

When to use qualitative research

In psychosocial research, 'quantitative' research methods are appropriate when 'factual' data are required to answer the research question; when general or probability information is sought on opinions, attitudes, views, beliefs or preferences; when variables can be isolated and defined; when variables can be linked to form hypotheses before data collection; and when the question or problem is known, clear and unambiguous. Quantitative methods can reveal, for example, what percentage of the population supports assisted conception, their distribution by age, marital status, residential area and so on, as well as changes from one survey to the next (Kovacs et al., 2012); the number of donors and donor siblings located by parents of donor-conceived children (Freeman et al., 2009); and the relationship between the attitude of donor-conceived people to learning of their donor insemination conception and their family 'type' (one or two parents, lesbian or heterosexual parents; Beeson et al., 2011).

In contrast, ‘qualitative’ methods are used to answer questions about experience, meaning and perspective, most often from the standpoint of the participant. These data are usually not amenable to counting or measuring. Qualitative research techniques include ‘small-group discussions’ for investigating beliefs, attitudes and concepts of normative behaviour; ‘semi-structured interviews’, to seek views on a focused topic or, with key informants, for background information or an institutional perspective; ‘in-depth interviews’ to understand a condition, experience, or event from a personal perspective; and ‘analysis of texts and documents’, such as government reports, media articles, websites or diaries, to learn about distributed or private knowledge.

Qualitative methods have been used to reveal, for example, potential problems in implementing a proposed trial of elective single embryo transfer, where small-group discussions enabled staff to explain their own resistance, leading to an amended approach ( Porter and Bhattacharya, 2005 ). Small-group discussions among assisted reproductive technology (ART) counsellors were used to investigate how the welfare principle is interpreted and practised by health professionals who must apply it in ART ( de Lacey et al. , 2015 ). When legislative change meant that gamete donors could seek identifying details of people conceived from their gametes, parents needed advice on how best to tell their children. Small-group discussions were convened to ask adolescents (not known to be donor-conceived) to reflect on how they would prefer to be told ( Kirkman et al. , 2007 ).

When a population cannot be identified, such as anonymous sperm donors from the 1980s, a qualitative approach with wide publicity can reach people who do not usually volunteer for research and reveal (for example) their attitudes to proposed legislation to remove anonymity with retrospective effect ( Hammarberg et al. , 2014 ). When researchers invite people to talk about their reflections on experience, they can sometimes learn more than they set out to discover. In describing their responses to proposed legislative change, participants also talked about people conceived as a result of their donations, demonstrating various constructions and expectations of relationships ( Kirkman et al. , 2014 ).

Interviews with parents in lesbian-parented families generated insight into the diverse meanings of the sperm donor in the creation and life of the family ( Wyverkens et al. , 2014 ). Oral and written interviews also revealed the embarrassment and ambivalence surrounding sperm donors evident in participants in donor-assisted conception ( Kirkman, 2004 ). The way in which parents conceptualise unused embryos and why they discard rather than donate was explored and understood via in-depth interviews, showing how and why the meaning of those embryos changed with parenthood ( de Lacey, 2005 ). In-depth interviews were also used to establish the intricate understanding by embryo donors and recipients of the meaning of embryo donation and the families built as a result ( Goedeke et al. , 2015 ).

It is possible to combine quantitative and qualitative methods, although great care should be taken to ensure that the theory behind each method is compatible and that the methods are being used for appropriate reasons. The two methods can be used sequentially (first a quantitative then a qualitative study or vice versa), where the first approach is used to facilitate the design of the second; they can be used in parallel as different approaches to the same question; or a dominant method may be enriched with a small component of an alternative method (such as qualitative interviews ‘nested’ in a large survey). It is important to note that free text in surveys represents qualitative data but does not constitute qualitative research. Qualitative and quantitative methods may be used together for corroboration (hoping for similar outcomes from both methods), elaboration (using qualitative data to explain or interpret quantitative data, or to demonstrate how the quantitative findings apply in particular cases), complementarity (where the qualitative and quantitative results differ but generate complementary insights) or contradiction (where qualitative and quantitative data lead to different conclusions). Each has its advantages and challenges ( Brannen, 2005 ).

How to judge qualitative research

Qualitative research is gaining increased momentum in the clinical setting and carries different criteria for evaluating its rigour or quality. Quantitative studies generally involve the systematic collection of data about a phenomenon, using standardized measures and statistical analysis. In contrast, qualitative studies involve the systematic collection, organization, description and interpretation of textual, verbal or visual data. The particular approach taken determines to a certain extent the criteria used for judging the quality of the report. However, research using qualitative methods can be evaluated (Dixon-Woods et al., 2006; Young et al., 2014) and there are some generic guidelines for assessing qualitative research (Kitto et al., 2008).

Although the terms ‘reliability’ and ‘validity’ are contentious among qualitative researchers ( Lincoln and Guba, 1985 ) with some preferring ‘verification’, research integrity and robustness are as important in qualitative studies as they are in other forms of research. It is widely accepted that qualitative research should be ethical, important, intelligibly described, and use appropriate and rigorous methods ( Cohen and Crabtree, 2008 ). In research investigating data that can be counted or measured, replicability is essential. When other kinds of data are gathered in order to answer questions of personal or social meaning, we need to be able to capture real-life experiences, which cannot be identical from one person to the next. Furthermore, meaning is culturally determined and subject to evolutionary change. The way of explaining a phenomenon—such as what it means to use donated gametes—will vary, for example, according to the cultural significance of ‘blood’ or genes, interpretations of marital infidelity and religious constructs of sexual relationships and families. Culture may apply to a country, a community, or other actual or virtual group, and a person may be engaged at various levels of culture. In identifying meaning for members of a particular group, consistency may indeed be found from one research project to another. However, individuals within a cultural group may present different experiences and perceptions or transgress cultural expectations. That does not make them ‘wrong’ or invalidate the research. Rather, it offers insight into diversity and adds a piece to the puzzle to which other researchers also contribute.

In qualitative research the objective stance is obsolete, the researcher is the instrument, and ‘subjects’ become ‘participants’ who may contribute to data interpretation and analysis ( Denzin and Lincoln, 1998 ). Qualitative researchers defend the integrity of their work by different means: trustworthiness, credibility, applicability and consistency are the evaluative criteria ( Leininger, 1994 ).

Trustworthiness

A report of a qualitative study should contain the same robust procedural description as any other study. The purpose of the research, how it was conducted, procedural decisions, and details of data generation and management should be transparent and explicit. A reviewer should be able to follow the progression of events and decisions and understand their logic because there is adequate description, explanation and justification of the methodology and methods (Kitto et al., 2008).

Credibility

Credibility is the criterion for evaluating the truth value or internal validity of qualitative research. A qualitative study is credible when its results, presented with adequate descriptions of context, are recognizable to people who share the experience and those who care for or treat them. As the instrument in qualitative research, the researcher defends the study's credibility through practices such as reflexivity (reflection on the influence of the researcher on the research), triangulation (where appropriate, answering the research question in several ways, such as through interviews, observation and documentary analysis) and substantial description of the interpretation process; verbatim quotations from the data are supplied to illustrate and support the interpretations (Sandelowski, 1986). Where excerpts of data and interpretations are incongruent, the credibility of the study is in doubt.

Applicability

Applicability, or transferability of the research findings, is the criterion for evaluating external validity. A study is considered to meet the criterion of applicability when its findings can fit into contexts outside the study situation and when clinicians and researchers view the findings as meaningful and applicable in their own experiences.

Larger sample sizes do not produce greater applicability. Depth may be sacrificed to breadth or there may be too much data for adequate analysis. Sample sizes in qualitative research are typically small. The term ‘saturation’ is often used in reference to decisions about sample size in research using qualitative methods. Emerging from grounded theory, where filling theoretical categories is considered essential to the robustness of the developing theory, data saturation has been expanded to describe a situation where data tend towards repetition or where data cease to offer new directions and raise new questions ( Charmaz, 2005 ). However, the legitimacy of saturation as a generic marker of sampling adequacy has been questioned ( O'Reilly and Parker, 2013 ). Caution must be exercised to ensure that a commitment to saturation does not assume an ‘essence’ of an experience in which limited diversity is anticipated; each account is likely to be subtly different and each ‘sample’ will contribute to knowledge without telling the whole story. Increasingly, it is expected that researchers will report the kind of saturation they have applied and their criteria for recognising its achievement; an assessor will need to judge whether the choice is appropriate and consistent with the theoretical context within which the research has been conducted.

Sampling strategies are usually purposive, convenient, theoretical or snowballed. Maximum variation sampling may be used to seek representation of diverse perspectives on the topic. Homogeneous sampling may be used to recruit a group of participants with specified criteria. The threat of bias is irrelevant; participants are recruited and selected specifically because they can illuminate the phenomenon being studied. Rather than being predetermined by statistical power analysis, qualitative study samples are dependent on the nature of the data, the availability of participants and where those data take the investigator. Multiple data collections may also take place to obtain maximum insight into sensitive topics. For instance, the question of how decisions are made for embryo disposition may involve sampling within the patient group as well as from scientists, clinicians, counsellors and clinic administrators.

Consistency

Consistency, or dependability of the results, is the criterion for assessing reliability. This does not mean that the same result would necessarily be found in other contexts but that, given the same data, other researchers would find similar patterns. Researchers often seek maximum variation in the experience of a phenomenon, not only to illuminate it but also to discourage fulfilment of limited researcher expectations (for example, negative cases or instances that do not fit the emerging interpretation or theory should be actively sought and explored). Qualitative researchers sometimes describe the processes by which verification of the theoretical findings by another team member takes place ( Morse and Richards, 2002 ).

Research that uses qualitative methods is not, as it sometimes seems to be represented, the easy option, nor is it a collation of anecdotes. It usually involves a complex theoretical or philosophical framework, and rigorous analysis is conducted without the aid of straightforward mathematical rules. Researchers must demonstrate the validity of their analysis and conclusions, resulting in longer papers and occasional frustration with the word limits of appropriate journals. Nevertheless, we need the different kinds of evidence that are generated by qualitative methods. The experience of health, illness, and medical intervention cannot always be counted and measured; researchers need to understand what these mean to individuals and groups. Knowledge gained from qualitative research methods can inform clinical practice, indicate how to support people living with chronic conditions, and contribute to community education and awareness about people who are (for example) experiencing infertility or using assisted conception.

Each author drafted a section of the manuscript and the manuscript as a whole was reviewed and revised by all authors in consultation.

No external funding was either sought or obtained for this study.

The authors have no conflicts of interest to declare.

Beeson D, Jennings P, Kramer W. Offspring searching for their sperm donors: how family types shape the process. Hum Reprod 2011;26:2415–2424.

Brannen J. Mixing methods: the entry of qualitative and quantitative approaches into the research process. Int J Soc Res Methodol 2005;8:173–184.

Charmaz K. Grounded theory in the 21st century: applications for advancing social justice studies. In: Denzin NK, Lincoln YS (eds). The Sage Handbook of Qualitative Research. California: Sage Publications Inc., 2005.

Cohen D, Crabtree B. Evaluative criteria for qualitative research in health care: controversies and recommendations. Ann Fam Med 2008;6:331–339.

de Lacey S. Parent identity and ‘virtual’ children: why patients discard rather than donate unused embryos. Hum Reprod 2005;20:1661–1669.

de Lacey SL, Peterson K, McMillan J. Child interests in assisted reproductive technology: how is the welfare principle applied in practice? Hum Reprod 2015;30:616–624.

Denzin N, Lincoln Y. Entering the field of qualitative research. In: Denzin NK, Lincoln YS (eds). The Landscape of Qualitative Research: Theories and Issues. Thousand Oaks: Sage, 1998, 1–34.

Dixon-Woods M, Bonas S, Booth A, Jones DR, Miller T, Shaw RL, Smith JA, Young B. How can systematic reviews incorporate qualitative research? A critical perspective. Qual Res 2006;6:27–44.

ESHRE Psychology and Counselling Guideline Development Group. Routine Psychosocial Care in Infertility and Medically Assisted Reproduction: A Guide for Fertility Staff, 2015. http://www.eshre.eu/Guidelines-and-Legal/Guidelines/Psychosocial-care-guideline.aspx.

Freeman T, Jadva V, Kramer W, Golombok S. Gamete donation: parents' experiences of searching for their child's donor siblings or donor. Hum Reprod 2009;24:505–516.

Goedeke S, Daniels K, Thorpe M, Du Preez E. Building extended families through embryo donation: the experiences of donors and recipients. Hum Reprod 2015;30:2340–2350.

Hammarberg K, Johnson L, Bourne K, Fisher J, Kirkman M. Proposed legislative change mandating retrospective release of identifying information: consultation with donors and Government response. Hum Reprod 2014;29:286–292.

Kirkman M. Saviours and satyrs: ambivalence in narrative meanings of sperm provision. Cult Health Sex 2004;6:319–336.

Kirkman M, Rosenthal D, Johnson L. Families working it out: adolescents' views on communicating about donor-assisted conception. Hum Reprod 2007;22:2318–2324.

Kirkman M, Bourne K, Fisher J, Johnson L, Hammarberg K. Gamete donors' expectations and experiences of contact with their donor offspring. Hum Reprod 2014;29:731–738.

Kitto S, Chesters J, Grbich C. Quality in qualitative research. Med J Aust 2008;188:243–246.

Kovacs GT, Morgan G, Levine M, McCrann J. The Australian community overwhelmingly approves IVF to treat subfertility, with increasing support over three decades. Aust N Z J Obstet Gynaecol 2012;52:302–304.

Leininger M. Evaluation criteria and critique of qualitative research studies. In: Morse J (ed). Critical Issues in Qualitative Research Methods. Thousand Oaks: Sage, 1994, 95–115.

Lincoln YS, Guba EG. Naturalistic Inquiry. Newbury Park, CA: Sage Publications, 1985.

Morse J, Richards L. Readme First for a User's Guide to Qualitative Methods. Thousand Oaks: Sage, 2002.

O'Reilly M, Parker N. ‘Unsatisfactory saturation’: a critical exploration of the notion of saturated sample sizes in qualitative research. Qual Res 2013;13:190–197.

Porter M, Bhattacharya S. Investigation of staff and patients' opinions of a proposed trial of elective single embryo transfer. Hum Reprod 2005;20:2523–2530.

Sandelowski M. The problem of rigor in qualitative research. Adv Nurs Sci 1986;8:27–37.

Wyverkens E, Provoost V, Ravelingien A, De Sutter P, Pennings G, Buysse A. Beyond sperm cells: a qualitative study on constructed meanings of the sperm donor in lesbian families. Hum Reprod 2014;29:1248–1254.

Young K, Fisher J, Kirkman M. Women's experiences of endometriosis: a systematic review of qualitative research. J Fam Plann Reprod Health Care 2014;41:225–234.

Keywords: conflict of interest; credibility; qualitative research; quantitative methods

  • Online ISSN 1460-2350
  • Copyright © 2024 European Society of Human Reproduction and Embryology

Qualitative Research : Definition

Qualitative research is the naturalistic study of social meanings and processes, using interviews, observations, and the analysis of texts and images.  In contrast to quantitative researchers, whose statistical methods enable broad generalizations about populations (for example, comparisons of the percentages of U.S. demographic groups who vote in particular ways), qualitative researchers use in-depth studies of the social world to analyze how and why groups think and act in particular ways (for instance, case studies of the experiences that shape political views).   

  • Last Updated: Aug 9, 2024 2:09 PM
  • URL: https://guides.library.stanford.edu/qualitative_research


SOC 325: Qualitative Research


Find Journal Articles in Sociology Databases

Search SocIndex and Sociological Abstracts for scholarly journals in the field of Sociology.

  • SocINDEX with Full Text: coverage from all subdisciplines of sociology.
  • Sociological Abstracts: journals and other materials in sociology, social work, and other social sciences.

Finding Journal Articles in Other Databases

Databases from other disciplines may also have articles of interest to you, depending on the topic you are studying.  The following databases are also recommended for topics related to Sociology:

  • Social Services Abstracts: articles on social work, human services, and related areas, including social welfare, social policy, and community development.
  • Social Sciences Citation Index: articles in the sciences and social sciences.
  • National Criminal Justice Reference Service: international journals, books, reports, dissertations, and unpublished papers on criminology and related disciplines.
  • Criminal Justice Abstracts with Full Text: bibliographic records and full-text articles on criminal justice, criminology, corrections and prisons, criminal investigation, the forensic sciences, and substance abuse.
  • Education Source: coverage of all levels of education from early childhood to higher education, including scholarly, peer-reviewed journal articles, magazines, reviews, newspaper articles, trade publications, and conference papers.

Provides full-text and images of over 1,000 scholarly, peer-reviewed journals in a variety of disciplines, with best coverage in older volumes of journals.

  • ERIC: reports, academic journals, ERIC documents, educational reports, dissertations, and government documents covering all levels of education.
  • SPORTDiscus with Full Text: scholarly, peer-reviewed journals covering physical education, exercise physiology, and sports science.
  • Last Updated: Aug 1, 2024 3:09 PM
  • URL: https://libraries.coastal.edu/SOC325

  • Open access
  • Published: 16 September 2024

A qualitative exploration of disseminating research findings among public health researchers in China

  • Yiluan Hu 1 ,
  • Xuejun Yin 1 , 2 ,
  • Yachen Wang 1 ,
  • Enying Gong 1 ,
  • Xin Xin 3 ,
  • Jing Liu 4 ,
  • Xia Liu 4 ,
  • Ruitai Shao 1 ,
  • Juan Zhang 1 , 5 &
  • Ross C. Brownson 6 , 7  

BMC Public Health volume  24 , Article number:  2518 ( 2024 ) Cite this article


Research dissemination is essential to accelerate the translation of evidence into practice, yet little is known about dissemination among Chinese public health researchers. This study aimed to explore their understanding and practices of disseminating research findings and to identify barriers and facilitators that influence dissemination activities to non-research audiences.

This study deployed an exploratory qualitative design with purposive and snowball sampling. One focus group with 5 participants and 12 in-depth interviews were conducted from May to December 2021, until saturation was reached, with participants working in diverse fields at universities (n = 10), the Chinese Center for Disease Control and Prevention (n = 4), the Chinese National Cancer Center (n = 1), the Chinese National Center for Cardiovascular Disease (n = 1), and the China office of a global research institute (n = 1). Data were initially analyzed using inductive thematic analysis; the designing for dissemination (D4D) logic model was then used to organize themes and subthemes. Two coders independently coded all transcripts and discussed disparities to reach a consensus.

Out of 17 participants, 12 misunderstood the concept of dissemination; 14 had disseminated to non-research audiences: 10 to the public, 10 to practitioners, and 9 to policymakers. We identified multiple barriers to dissemination to non-research audiences across the four phases of the D4D logic model, including the low priority given to dissemination, limited application of D4D strategies, insufficient support from research organizations, practice settings, and health systems, and an overemphasis on academic publications.

Conclusions

There was a lack of understanding and experience of dissemination, indicating a lack of emphasis on active dissemination in China. We provide implications for raising awareness, building capacity, facilitating multidisciplinary collaboration, providing incentives and infrastructure, changing climate and culture, establishing communication and executive networks, and accelerating systematic shifts in impact focus.

Peer Review reports

Introduction

The gap between research and practice is well documented [1, 2, 3, 4]. Dissemination refers to the active approach of spreading evidence-based interventions to the target audience via predetermined channels using planned strategies [3, 5], and it is a prerequisite for bridging the gap between research and practice. The concept of dissemination overlaps with related concepts, including science popularization and knowledge translation. Although both use communication techniques, science popularization is mainly about propagating general knowledge to the public with the aim of improving citizens' science literacy [6], whereas dissemination involves wider audiences and aims to maximize the impact of research and promote the uptake of evidence. Knowledge translation, although it shares with dissemination the goal of bridging the research-practice gap, refers to the dynamic and iterative process of synthesis, dissemination, exchange, and ethically sound application of knowledge; it thus treats dissemination as one component of translation [7, 8].

Despite its importance, dissemination is often not a priority for researchers and their organizations [9] and is largely neglected. For example, in a study of US public health researchers, 78% reported dissemination as important to their research, yet only 27% spent more than 10% of their time on dissemination [3] and only 28% rated their dissemination efforts as excellent or good [10]. In addition, researchers and non-researchers prefer different sources of information: almost all researchers disseminated their research through academic publications [11, 12, 13, 14], yet practitioners and policymakers may find these inaccessible, difficult to understand, or time-consuming [11, 15, 16, 17].

To disseminate evidence effectively, dissemination and implementation (D&I) science has thrived, and designing for dissemination (D4D) has emerged as a promising direction within it. The D4D perspective highlights the responsibility of researchers to disseminate actively and the need to plan from the outset to fit adopters' needs, assets, and time frames [3]. Useful D4D strategies include stakeholder involvement; application of D&I science theories and frameworks; and incorporation of marketing, business, communication, systems approaches, and related disciplines and professionals [3, 18, 19]. Despite the availability of D4D, its application remains insufficient: in 2012, only 17% of US public health researchers used a framework or theory to plan their dissemination activities and only 34% typically involved stakeholders in the research process; in 2018, 55% of US and Canadian D&I scientists typically involved stakeholders in the research process. While there is a growing body of evidence on D4D in some regions of the world, there are limited data from China.

Evidence from high-income countries has revealed individual-level barriers, such as a lack of capacity and reluctance to disseminate findings of a single study, and organizational-level barriers, such as a lack of financial resources, staff time, and academic incentives [14, 20]. Yet little is known about dissemination in China, where D&I science is still in its infancy. With progress in China's health reform, science popularization and knowledge translation have received increasing attention, but dissemination has received little attention in the field of public health. In addition, the large population, high disease burden, shortage of healthcare providers, and relatively centralized health system further exacerbate the complexity of dissemination in China [16, 21]. A quantitative study conducted by the current team among Chinese public health researchers suggested that only 58.1% had disseminated their research findings, and that the main barriers included a lack of financial resources, platforms, and collaboration mechanisms at the organizational level, as well as a lack of time, knowledge, and skills at the individual level [22].

Hence, there is urgency to explore the factors underlying dissemination in China from the perspective of researchers. We aimed to explore researchers' understanding of the concept of dissemination and their current dissemination activities, and, guided by the D4D logic model, to identify barriers and facilitators that influence dissemination to non-research audiences.

A qualitative study design was deployed to explore public health researchers’ perspectives on contextual factors affecting the dissemination of research findings in China. The study was reported according to the Consolidated criteria for reporting qualitative research (COREQ) guidelines (see Additional file 1) [ 23 ].

Theoretical framework

With the aim to gain insight into the barriers and facilitators for researchers to design for dissemination, this study adopted the D4D logic model as an analytical framework. The D4D logic model was published by Kwan and colleagues [ 19 ] in 2022 and included four phases: (1) the initial conceptualization phase identifying need and demand, and establishing evidence base of health issues; (2) the design phase using multiple strategies to determine the design of dissemination product as well as the packaging, messaging, and distribution plan; (3) the subsequent dissemination phase based on the push-pull-capacity model and situating the push of research, pull of practice, and capacity of health systems to support dissemination; and (4) the impact phase ensuring adoption, sustainment, and equity benefits [ 19 ].

Participants and sampling

Study participants were public health researchers working in universities, the Chinese Center for Disease Control and Prevention (China CDC), the Chinese National Cancer Center, the Chinese National Center for Cardiovascular Disease, or the China offices of global research institutes. Universities are the most important producers of evidence in China, followed by healthcare institutions, research institutions, and companies [24]. Teaching and research are core activities for university researchers, and academic publication is one of the key tenure and promotion criteria. The China CDC is a governmental, national-level technical institution affiliated with the National Health Commission of China; it shoulders responsibility for the key tasks of national disease prevention and control and for instructing the provincial-, prefecture-, city-, and county-level CDCs. Also under the leadership of the National Health Commission and shouldering responsibilities for evidence generation and implementation, the Chinese National Cancer Center and the Chinese National Center for Cardiovascular Disease are based in two large specialized hospitals. Given that university researchers form the largest community for evidence generation in China, most of the participants were university researchers.

Purposive and snowball sampling methods were applied to reach less accessible target participants. First, participants were purposively selected on the basis that they had rich experience in public health research and took an active part in academia. Second, interviewees were asked to nominate other researchers who might be willing to provide information for in-depth interviews, particularly those with expertise in dissemination and implementation science. All potential participants were contacted directly by telephone by a senior member (JZ) of the research team to seek their participation. Participants were informed of the study’s purpose, process, confidentiality, and right to withdraw at any time. They were then asked to give informed oral consent to participate in the study and to be audio-recorded prior to the formal interview. In total, 18 researchers received the invitation; one declined due to unavailability during the time of this study.

Data collection

Data were collected from May 2021 to December 2021 through a focus group and in-depth interviews. Given that participants might be unfamiliar with the concept of dissemination and that their experience of dissemination might be limited, we initially conducted a focus group of five participants to stimulate discussion. Because participants were actively involved and contributed substantially to the topic, we later conducted individual interviews to gather a rich and detailed understanding of their perspectives. The focus group and the first two individual in-depth interviews were conducted face-to-face, while the later ten individual interviews were conducted via Tencent Meeting (Chinese online meeting software, similar to Zoom) because of COVID-19-related physical distancing restrictions. During the interviews, participants were alone in their office or a private space to ensure confidentiality so that they could share freely.

A multidisciplinary team of researchers and students in dissemination and implementation science, behavioral science, psychology, and qualitative methods contributed to developing the interview guide, which was pilot tested and refined prior to the formal interviews. As dissemination is a relatively new concept in China, interviews began with a discussion of participants' understanding of the concept. To ensure a consistent understanding, the interviewer then clarified the concept as the active approach of spreading evidence-based interventions to the target audience via predetermined channels using planned strategies [3, 5]. Participants were then encouraged to discuss in depth their dissemination experience and the barriers to and facilitators of dissemination to non-research audiences. Participants' demographic information, which was pre-collected, was confirmed with them at the end of the interview. The interview guide can be found in supplementary file 2.

All interviews were conducted in Mandarin Chinese by an interviewer experienced in qualitative research (JZ, professor, Ph.D., female) with a note-taker (YH, master’s student, female). No repeat interviews were conducted. The researchers collected participants’ demographic information, research interests, and research projects online before the formal interview to have a deep understanding of their perspectives. All interviews were audio-recorded and transcribed after obtaining oral consent from the interviewees. Transcripts were not returned to participants for comment or correction. Following qualitative research best practices [ 25 , 26 , 27 ], data collection ended when information saturation occurred and no new information was observed.

Data analysis

Data analysis occurred concurrently with data collection. Verbatim transcripts were coded using an inductive thematic analysis approach in NVivo 11 software. First, one coder (YH) reviewed transcripts to generate initial codes and aggregated them into categories to form early themes and subthemes. The D4D logic model [19] was then used to organize and map the relationships between themes and subthemes. Next, another coder (YW) independently applied codes to transcripts using the same coding framework. The codebook was repeatedly checked against the transcripts and finalized when comparison yielded no new information. All coding results were compared and discussed between the two coders to reach a consensus; unresolved discrepancies were settled through discussion with a senior researcher (JZ) and at research team meetings. Data analysis was conducted in Chinese; all themes, subthemes, and the verbatim quotes used to illustrate them were translated into English. Quotes are identified by participant ID to guarantee anonymity. Participants did not provide feedback on the findings.
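The study reconciled the two coders' work through discussion to consensus. Some teams additionally report a chance-corrected agreement statistic for dual coding, most commonly Cohen's kappa; the sketch below (with invented code labels and data, not taken from this study) shows the calculation for two coders applying the same codebook to the same text segments.

```python
from collections import Counter

def cohens_kappa(coder1, coder2):
    """Cohen's kappa for two equal-length lists of code labels applied
    independently to the same segments: observed agreement corrected
    for the agreement expected by chance from each coder's label frequencies."""
    assert len(coder1) == len(coder2)
    n = len(coder1)
    observed = sum(a == b for a, b in zip(coder1, coder2)) / n
    c1, c2 = Counter(coder1), Counter(coder2)
    expected = sum(c1[label] * c2[label]
                   for label in set(coder1) | set(coder2)) / n**2
    return (observed - expected) / (1 - expected)

# Invented coding of six segments by two coders.
a = ["barrier", "barrier", "facilitator", "barrier", "facilitator", "barrier"]
b = ["barrier", "facilitator", "facilitator", "barrier", "facilitator", "barrier"]
print(round(cohens_kappa(a, b), 3))  # → 0.667
```

Kappa is a complement to, not a substitute for, the consensus discussion described above: a high kappa shows the codebook is being applied consistently, while discussion resolves why the remaining disagreements occurred.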

Information saturation was reached after a focus group of 5 participants and 12 in-depth individual interviews with public health researchers in China. Interviews lasted 41.9 ± 10.9 min on average. Participants were aged between 32 and 65 years (mean 46.5 ± 8.3), were primarily female (70.6%), and mostly held a Ph.D. degree (88.2%). They worked in universities in the fields of health policy, behavioral science, global health, and implementation science (n = 10), the China CDC in the fields of tobacco control, AIDS/STD control, tuberculosis control, and environmental health (n = 4), the Chinese National Cancer Center (n = 1), the Chinese National Center for Cardiovascular Disease (n = 1), and the China office of a global research institute (n = 1).

Theme 1: understanding of the concept of dissemination

Five out of 17 participants had no difficulty understanding the concept of dissemination as the active approach of spreading evidence-based interventions to the target audience via predetermined channels using planned strategies, while 12 participants misunderstood dissemination to some extent. Eight participants did not differentiate dissemination of research findings from science popularization of general knowledge when discussing their dissemination activities.

Dissemination means that I share some knowledge with others… I have always paid close attention to new media, and I have written and posted some health science articles on Zhihu (Chinese online question-and-answer social media, similar to Quora)… Some online magazines often invite me and my colleagues to write science articles; for example, I recently wrote an article sharing some psychological and behavioral techniques for smoking cessation (Participant 01).

One participant viewed dissemination as knowledge translation, saying that dissemination referred to the process of translating and applying research, especially interventional research, into practice and policy.

I feel that dissemination in Chinese would be easily understood as science popularization, but it actually highlights the translation to practice and policy, so translating it as ‘knowledge translation’ in Chinese may be more appropriate (Participant 16).

Three participants argued that dissemination was similar to health communication, which refers to the communication and sharing of information.

The government is now promoting the awareness of knowledge translation, but I feel that knowledge translation in Chinese emphasizes the process of translating and applying our research, which is more about health technology, and sometimes there may be some commercial elements in knowledge translation. Dissemination is more similar to health communication (Participant 14).

Theme 2: experience of dissemination

Subtheme 2.1: dissemination within academia.

Three participants working in universities mainly published their research findings in peer-reviewed journals or presented them at academic conferences, for different reasons: one cited a lack of resources for reaching non-research audiences, while two lacked motivation, saying that dissemination to non-research audiences was not their priority.

I mainly published my research in peer-reviewed journals… for ordinary researchers like me, access and resources were limited (Participant 07). As a researcher, I am very competent when disseminating within academia. Even if I encounter difficulties, I will face them. But for dissemination to practitioners or policymakers, the main disseminator is not me and should not be me… I am a teacher, and my priorities for the next five to ten years include publishing textbooks, participating in academic activities, working with young students, and conducting research (Participant 17).

Subtheme 2.2: dissemination beyond academia

Fourteen participants described their experiences disseminating research findings to non-research audiences: 10 had disseminated to the public, 10 to practitioners, and 9 to policymakers. Participants disseminated to the public through social media and mass media. They cited social media as an accessible channel for every individual researcher. However, they felt their personal influence was limited in reaching a wide population, and they needed more resources to use mass media for dissemination. In addition, researchers were worried about possible misinformation and disinformation when disseminating on social media and mass media.

Our impact as a researcher to disseminate is so weak that our research findings posted on WeChat (Chinese social media, similar to WhatsApp and Snapchat) Moments can only be noticed by a few hundred people at most (Participant 02). We are not required to add references, and sometimes the already added ones may even be deleted… and because our target audience is the public, we need to translate academic language into plain language… sometimes I am afraid of making scientific mistakes or causing misinformation (Participant 01).

Dissemination to policymakers was considered impactful but subject to a high threshold. One participant indicated that, in such cases, dissemination to practitioners was an alternative strategy for influencing practice, since it was more accessible. Of the nine participants who had disseminated to policymakers, three worked in the China CDC and five engaged in health policy research.

My organization (China CDC) is a technical support organization for administrative decisions and policy-making, so a lot of our work is done for dissemination (participant 15). For researchers conducting health policy research like me, it is a must to disseminate to our government (participant 08).

Some participants regarded the issuance of standards and guidelines (n = 4) and the publication of patents (n = 5) as dissemination routes. In contrast, others considered standards, guidelines, and patents to be dissemination products that needed to be further disseminated, arguing that the issuance of these products did not in itself constitute successful dissemination.

The implementation of patents is limited… now patents are mainly used by my peer researchers. Publishing patents does not mean dissemination, and patents themselves actually need to be further disseminated and implemented (participant 15).

Theme 3: facilitators and barriers of dissemination based on the D4D logic model

Factors influencing dissemination to non-research audiences emerged across four phases of the D4D logic model [ 19 ], and seven subthemes were identified: (1) motivation; (2) design processes; (3) packaging and distribution design; (4) push of research; (5) pull of practice; (6) capacity of health systems; and (7) impact of research. The subthemes are discussed in detail below and in Table  1 .

Subtheme 3.1: motivation

Most participants expressed willingness to disseminate to non-research audiences out of a sense of social responsibility and a desire for social recognition, with the exception of two participants who did not consider dissemination a priority. The social climate was mentioned as another facilitator of dissemination.

The ultimate goal of scientific research is to change the public’s cognition and behavior, and the government’s decision-making process. If you do not consider dissemination, your research has no value, and it is hard to get recognition from our peers and the public (participant 12).

Subtheme 3.2: design processes

Subtheme 3.2.1: stakeholder involvement

Some participants indicated difficulties in building relationships and reaching consensus with stakeholders (e.g., the public, media, practitioners, and policymakers) because of potential conflicts of interest between stakeholders and researchers. Participants recommended involving stakeholders from the outset, building contacts on previous relationships, and matching stakeholders’ needs as helpful strategies for stakeholder involvement. In addition, involving stakeholders from all sectors of society, not only within the health system but also outside it (e.g., the education system, non-governmental organizations, non-profit organizations, and commercial organizations), was thought to have the potential for greater influence.

This was based on previous collaboration between their organization and ours, and we have a long-term collaboration with them, so it was quite natural and easy to involve them… We got in touch with them while the research was being formulated. The sooner you can get in touch with stakeholders and get their support, the better… and if we can connect with people and organizations outside the health system, our dissemination efforts may have a greater impact and be more sustainable (participant 13).

Subtheme 3.2.2: application of D&I methodologies

The application of D&I methodologies was stressed as a facilitator of dissemination. However, some participants indicated that D&I science was still an emerging field in China, and that limited understanding of D&I methodologies impeded the dissemination and implementation of research.

Currently, there is limited knowledge of methodologies including research design, theoretical frameworks, and qualitative methods for D&I science in China, which hinders the dissemination and implementation of research (participant 16).

Subtheme 3.2.3: marketing and business approaches

Some participants mentioned that the field of marketing was quite relevant to dissemination design and that marketing and communication approaches were promising for dissemination to non-research audiences, especially to the general public.

Take food marketing in food policy as an example: I feel that Coke’s advertising is so good that I also want to drink it; on the contrary, if you simply tell me not to eat food high in sugar and salt, then I will just not listen, let alone the ordinary consumers (participant 06).

Subtheme 3.2.4: context and situation analysis

Conducting context and situation analysis was cited as the foundation for understanding context and tailoring dissemination efforts.

Health communication always emphasizes needs assessment and audience segmentation, and it is important to understand the audiences’ needs. In many cases, what we were doing did not meet the needs of our audiences, and they did not accept it (participant 04).

Subtheme 3.2.5: complexity of social, health, organizational, and political systems

Participants perceived policy resistance and expressed low confidence in disseminating research with negative or politically or economically sensitive findings within complex social, health, organizational, and political systems. In addition, some participants noted that the COVID-19 pandemic increased the uncertainty of research findings and the vulnerability of collaboration networks.

For example, research involving the control of the tobacco industry, which is related to the economy, is very sensitive (participant 06). At first, everything went well, and they were very supportive. But because of the COVID-19 pandemic, the organization changed leadership, so we had to communicate with them again (participant 13).

Subtheme 3.3: packaging and distribution design

Subtheme 3.3.1: capability of packaging

Participants indicated that integrating and packaging research for non-research audiences was difficult and time-consuming and could result in irregular or misleading products, calling for special competencies that differ from those provided by usual academic training.

It is demanding, requiring a high level of processing, summarizing, writing, and packaging skills. These are huge challenges that our daily training does not teach us (participant 12).

Subtheme 3.3.2: availability of distribution channels and platforms

The availability of channels and platforms was highlighted as an important contextual factor affecting dissemination. Participants in the early stages of their careers, who had not yet established academic influence, reported lacking channels for interacting with policymakers, who were beyond the reach of individual researchers. Leveraging existing channels, platforms, and programs was recommended to facilitate dissemination to intended audiences.

Especially, we young researchers actually have many ideas and know a lot, but we do not have channels to share (participant 01). It is important to consider taking advantage of existing platforms or programs and hitching a ride whenever possible. Otherwise, dissemination involves a lot of financial and personnel input (participant 13).

Subtheme 3.4: push of research

Subtheme 3.4.1: incentives

Academic publications were cited as the chief yardstick for performance evaluation, promotion, and grant obligations. Some participants stated that the extent of dissemination to policymakers also influenced performance evaluation but was not given the same weight as academic publications; some attributed this to the difficulty of quantitatively evaluating dissemination activities. Although the China CDC participants expressed less pressure to publish than their university counterparts, they also complained about the academic incentive systems.

Dissemination to policymakers is now considered in performance evaluation, but still not as much as publishing papers in peer-reviewed journals… they may never regard dissemination as the most important criterion (participant 06). Currently, the value of science is still limited to publication and ‘Impact Factor’… Another problem is that it is difficult to define our dissemination efforts. For example, I cannot say how many people are using my app and how much impact it has had, but I can say how many papers I have published in top journals (participant 11).

Subtheme 3.4.2: infrastructure

Seven participants reported having a dedicated person or team responsible for dissemination-related activities in their organization. These persons or teams mainly handled patent applications, communication, and publicity.

We have a Development Office dedicated to knowledge translation. They would organize seminars on dissemination, such as how to apply for patents (participant 14). The attitude of the communication platform in our school is very clear, and its purpose is to build prestige for our school. If we have proper research to disseminate, they will help with publicity (participant 17).

Some participants mentioned that their organization would provide additional support, such as administrative facilitation, to help them disseminate more smoothly.

In addition to covering administrative costs, our university also provides intangible support for the development of D&I science and for the coordination of different departments (participant 16).

Subtheme 3.5: pull of practice

Participants noted a lack of climate or culture supporting dissemination, mainly because leaders and practitioners gave low priority both to certain health issues themselves and to dissemination activities.

The national government is advocating the dissemination and implementation of many innovations, but the local government may find it difficult to understand the value of (disseminating) these innovations and may be unwilling to provide financial or personnel support (participant 10). We introduced our research and why we wanted to work with them to disseminate it, but they said that was not their focus. Then what was their focus at that time? All they wanted to do was help village doctors pass a qualification exam and select the ‘most beautiful village doctor’. They were not interested in our dissemination of chronic disease research (participant 17).

Subtheme 3.6: capacity of health systems

Subtheme 3.6.1: communication networks

The lack of networks between researchers and non-research audiences was cited as a barrier. Some researchers expected health systems to build mechanisms for bidirectional communication between researchers and non-research audiences.

There is no mechanism to connect us with non-research audiences… some researchers may have such relationships with non-research audiences, but that comes from their personal influence and efforts rather than from mechanisms in the health system (participant 02). There is a gap between researchers and policymakers in the academic system… maybe our organization could help bridge the gap. For example, the organization could build a system to collect our research findings regularly and disseminate them to policymakers, because universities have this kind of relationship with the government (participant 07).

Subtheme 3.6.2: executive networks

Executive networks in the health system were considered necessary for dissemination on a large scale but difficult for ordinary university researchers to access. A participant in the China CDC pointed out that although the top-down CDC system in China, with CDCs at the national, provincial, city, and county levels, could facilitate wide dissemination, its dissemination impact was still limited by the lack of human resources for public health.

Our dissemination success has benefited greatly from the solid executive network built before. For example, under the Chinese National Cancer Center, we have Cancer Prevention Offices at the provincial level. They could help us disseminate our research findings, like our evidence and apps. However, most researchers, especially university researchers, do not have such a support network (participant 11). The lack of human resources in public health is one of the most common problems in our country. For example, we have 40 staff working on tuberculosis at the China CDC, but only 10 at each provincial CDC and 2 at each county CDC. In many cases, there is even only half a staff position in counties working on tuberculosis (participant 10).

Subtheme 3.7: impact of research

Participants noted a chasm between the current academic system’s overemphasis on academic publications and its neglect of long-term impact. Despite a series of national policies issued by the Chinese government to break the undesirable “academic publications only” orientation, participants were pessimistic about these policies, stating that their interpretation and implementation need to be further reviewed and improved.

Dissemination to non-research audiences is not expected by my organization, which does not care about these activities. However, it is the government that holds the baton, and there is nothing my organization can do about it (participant 09). At present, national policies are developing and changing fast, but how to interpret and implement these policies needs to be gradually improved… our government is paying more and more attention to dissemination, but when it comes to the implementation level, there are still many shortcomings (participant 14).

This qualitative study explored the understanding and practices of dissemination and identified barriers and facilitators of dissemination; it may be the first study of its kind in China. We found a limited understanding of the concept of dissemination and inadequate practices of dissemination to non-research audiences among Chinese public health researchers. We also identified barriers and facilitators in the conceptualization, design, dissemination, and impact phases of the D4D logic model [ 19 ], suggesting considerable room for improvement in the application of D4D strategies and the development of systematic resources. Our findings begin to provide a roadmap of ideas and actions to improve the active dissemination of research in China.

Dissemination was poorly understood by Chinese public health researchers, who confused it with related concepts such as communication, science popularization, and knowledge translation, indicating a lag in the development and advocacy of dissemination in China. This lag, together with the lack of understanding, may hinder dissemination practice and the uptake of evidence. Hence, dissemination, which emphasizes taking an active approach, identifying target audiences, selecting predetermined channels, and using planned strategies, should be deeply rooted in researchers’ minds to facilitate research uptake and understanding.

The public, practitioners, and policymakers were identified as three key non-research audiences for dissemination, yet most participants gave only brief descriptions when asked about their dissemination practices. While the internet and media are promising for large-scale dissemination, there is a need to strengthen researchers’ capacity to address misinformation and disinformation [ 28 , 29 ] and to facilitate collaboration between researchers and the media to achieve wide dissemination in China. Dissemination to the public and practitioners was considered feasible and direct, while dissemination to policymakers was considered crucial for long-term impact. Indeed, the Chinese government holds accountability for the health of its people, and proactively disseminating research findings to policymakers and government officials helps make a greater public health impact. Nevertheless, participants faced the dilemma of lacking personal relationships and access to channels for interacting with policymakers. Although some academic associations (e.g., the Chinese Preventive Medicine Association) bring together researchers and practitioners in China, their potential to connect researchers and policymakers needs to be further strengthened to enable dissemination success. Most participants with experience of policy dissemination worked in the China CDC or engaged in health policy research: the former stressed the China CDC’s mission to provide technical support for policy-making, and the latter stated that influencing policy was the fundamental goal of health policy research. This suggests that organizations and researchers with stronger missions and more resources to influence policy may have greater opportunities to disseminate to policymakers.

Although few participants in this study explicitly stated that dissemination to non-research audiences was not their priority, a lack of design capacity and distribution channels among researchers, insufficient support from organizations and health systems, and an overemphasis on academic publications hindered dissemination to non-research audiences. First, D4D strategies were applied only to a limited extent in the design of dissemination products and of packaging and distribution plans. This is consistent with other studies suggesting that a lack of capacity is a common barrier to dissemination practice in low- and middle-income countries [ 30 ]. Encouragingly, Chinese researchers actively involved diverse stakeholders at multiple stages of their research, in line with the international trend of increasing emphasis on stakeholder engagement [ 31 , 32 ]. A survey of US and Canadian researchers in 2018 also revealed increases in stakeholder involvement compared with a survey of US researchers in 2012 [ 3 , 33 ]. However, there remains a need to build multisectoral partnerships and to improve the depth and quality of stakeholder involvement [ 32 ]. In addition, some researchers were aware of the potential for leveraging methods and frameworks from D&I science, marketing and business, communications and the visual arts, and systems science to achieve dissemination success, yet their practical application needed improvement. These disciplines (e.g., D&I science, marketing, systems science, and complexity science) originated abroad and may be unfamiliar to Chinese public health researchers, so their adoption may require a lengthy learning and adaptation process; some simple tools and principles are available for guidance [ 34 ]. Notably, not all research findings should be disseminated to all audiences; the ability to decide what to disseminate and to whom should be strengthened at an early stage. Therefore, it is necessary to build capacity in D4D principles and skills and to promote teaming across disciplines, as it may be unrealistic for public health researchers to develop all the D4D skills themselves [ 13 ].

In addition to the need to improve researchers’ capacity and partnerships across disciplines, there remained substantial room for improvement in the resources and structures that support dissemination. Specifically, there was a lack of incentives and infrastructure in research organizations (the push), a lack of climate and culture in practice or policy settings (the pull), and a lack of dissemination networks in the health system (the capacity). A persistent push–pull disconnect between researchers and practitioners has been reported in other studies [ 35 , 36 ]. As might be expected, academic publications were the main criterion for performance evaluation, which may also be true in many other countries [ 10 , 14 , 33 , 37 , 38 , 39 ]. Furthermore, although some participants reported having a dedicated person or team for dissemination-related activities, the responsibilities of these persons or teams need to be further clarified and their capacity further enhanced. On the other hand, previous research points out that attention to dissemination tends to focus more on the push side than on the pull and capacity sides [ 11 , 19 ]. For example, studies in the US suggested that 53% of researchers reported having a designated individual or team for dissemination [ 3 ], while only 20% of practitioners did so [ 40 ]. Thus, changing the climate and culture in practice or policy settings to be receptive to and prepared for dissemination, providing infrastructure to enhance communication between researchers and non-research audiences, and building executive networks to support wide dissemination are all needed, as a lack of platforms and collaboration mechanisms is another common barrier to dissemination [ 30 ].

Problems with the lack of push, pull, and capacity for dissemination may be partly attributed to overemphasizing academic metrics rather than long-term health and equity impacts. Several government funding agencies in developed countries have adopted policies to support or even require dissemination efforts [ 12 , 19 , 41 , 42 , 43 ]. Yet most funding agencies in China still focus on academic impact, and existing funding for dissemination in China is small in scale and highly competitive. To address this issue, the Chinese government has adopted a series of national policies to reduce the overemphasis on academic publications and improve the evaluation system [ 44 , 45 , 46 , 47 ]. However, policy interpretation and grassroots implementation need to be further improved to accelerate the system’s shift toward the long-term impact of research. Frameworks such as the Research Excellence Framework (REF) [ 48 ] and the Translational Science Benefits Model (TSBM) [ 49 ] provide outlines and benchmarks by which researchers can measure the impact of scientific discoveries beyond traditional academic metrics.

This study revealed important aspects of research dissemination in China from the perspective of researchers, with some limitations. First, the 17 interview participants may not reflect the full spectrum of researchers in China, although data saturation was reached. Given that dissemination is in its infancy in China, this study serves as an initial exploration, and future studies may need to involve more, and more diverse, participants to characterize dissemination across the whole research system in China. Second, some interviews were conducted online due to the COVID-19 pandemic, which limited the ability to draw information from contextual details and nonverbal expressions during the interviews. Third, this is a qualitative exploratory study; additional large-scale quantitative studies are needed to triangulate the findings across the broader population. Indeed, the research team has conducted a large-scale survey to examine the attitudes and practices of Chinese public health researchers towards dissemination.

This study highlights a lack of emphasis on active dissemination in China and identifies multiple barriers to dissemination. There is a need to advance the field to promote understanding and raise awareness of dissemination—with the goal of ultimately more rapidly and equitably moving evidence to practice and policy. There is also a need to build capacity in D4D and to collaborate with experts from multiple disciplines (e.g., marketing, systems science, complexity science) to break down disciplinary silos. The findings also provide implications for promoting training programs, providing incentives and infrastructure for diverse dissemination activities, creating a climate and culture of readiness for dissemination, establishing bidirectional communication networks and efficient executive networks, and accelerating systematic shifts in policy orientation. Otherwise, dissemination is likely to sink to low priority in the already over-stretched system.

Data availability

All the data and materials of this qualitative study are available from the corresponding author on reasonable request.

Abbreviations

D4D: designing for dissemination

D&I: dissemination and implementation

China CDC: Chinese Center for Disease Control and Prevention

Balas EA, Boren SA. Managing clinical knowledge for Health Care Improvement. Yearb Med Inf. 2000;1:65–70.

Lenfant C. Shattuck lecture–clinical research to clinical practice–lost in translation? N Engl J Med. 2003;349(9):868–74. https://doi.org/10.1056/NEJMsa035507 .

Brownson RC, Jacobs JA, Tabak RG, et al. Designing for dissemination among public health researchers: findings from a national survey in the United States. Am J Public Health. 2013;103(9):1693–9. https://doi.org/10.2105/ajph.2012.301165 .

Brownson RC, Eyler AA, Harris JK, et al. Getting the Word Out: New approaches for disseminating Public Health Science. J Public Health Manag Pract. 2018;24(2):102–11. https://doi.org/10.1097/PHH.0000000000000673 .

Rabin BA, Brownson RC. Terminology for dissemination and implementation research. In: Brownson RC, Colditz GA, Proctor EK, editors. Dissemination and Implementation Research in Health: translating Research to Practice. New York: Oxford University Press; 2018. p. 22.

Qiu J. Science communication in China: a critical component of the global science powerhouse. Natl Sci Rev. 2020;7(4):824–9. https://doi.org/10.1093/nsr/nwaa035 .

Canadian Institutes of Health Research. Section 1.1 Knowledge to action: what it is and what it isn’t [online]. https://cihr-irsc.gc.ca/e/41928.html (accessed 22 Feb 2023).

McCormack L, Sheridan S, Lewis M, et al. Communication and dissemination strategies to facilitate the use of health-related evidence. Evid Rep Technol Assess (Full Rep). 2013;(213):1–520. https://doi.org/10.23970/ahrqepcerta213 .

National Cancer Institute. Designing for Dissemination: Conference Summary Report. Washington, DC: National Cancer Institute; 2002.

Tabak RG, Stamatakis KA, Jacobs JA, et al. What predicts dissemination efforts among public health researchers in the United States? Public Health Rep. 2014;129(4):361–8. https://doi.org/10.1177/003335491412900411 .

Brownson RC, Fielding JE, Green LW. Building Capacity for evidence-based Public Health: reconciling the pulls of Practice and the push of Research. Annu Rev Public Health. 2018;39:27–53. https://doi.org/10.1146/annurev-publhealth-040617-014746 .

Wilson PM, Petticrew M, Calnan MW, et al. Does dissemination extend beyond publication: a survey of a cross section of public funded research in the UK. Implement Sci. 2010;5:61. https://doi.org/10.1186/1748-5908-5-61 .

Tabak RG, Reis RS, Wilson P, et al. Dissemination of Health-Related Research among scientists in three countries: Access to resources and Current practices. Biomed Res Int. 2015;2015:179156. https://doi.org/10.1155/2015/179156 .

McVay AB, Stamatakis KA, Jacobs JA, et al. The role of researchers in disseminating evidence to public health practice settings: a cross-sectional study. Health Res Policy Syst. 2016;14(1):42. https://doi.org/10.1186/s12961-016-0113-4 .

Harris JK, Allen P, Jacob RR, et al. Information-seeking among chronic disease prevention staff in state health departments: use of academic journals. Prev Chronic Dis. 2014;11:E138. https://doi.org/10.5888/pcd11.140201 .

Budd EL, deRuyter AJ, Wang Z, et al. A qualitative exploration of contextual factors that influence dissemination and implementation of evidence-based chronic disease prevention across four countries. BMC Health Serv Res. 2018;18(1):233. https://doi.org/10.1186/s12913-018-3054-5 .

Jin Y, Li Z, Han F, et al. Barriers and enablers for the implementation of clinical practice guidelines in China: a mixed-method study. BMJ Open. 2019;9(9):e026328. https://doi.org/10.1136/bmjopen-2018-026328 .

Ashcraft LE, Quinn DA, Brownson RC. Strategies for effective dissemination of research to United States policymakers: a systematic review. Implement Sci. 2020;15(1):89. https://doi.org/10.1186/s13012-020-01046-3 .

Kwan BM, Brownson RC, Glasgow RE, et al. Designing for Dissemination and sustainability to Promote Equitable impacts on Health. Annu Rev Public Health. 2022;43:331–53. https://doi.org/10.1146/annurev-publhealth-052220-112457 .

Long CR, Purvis RS, Flood-Grady E, et al. Health researchers’ experiences, perceptions and barriers related to sharing study results with participants. Health Res Policy Syst. 2019;17(1):25. https://doi.org/10.1186/s12961-019-0422-5 .

Zhao J, Bai W, Zhang Q, et al. Evidence-based practice implementation in healthcare in China: a living scoping review. Lancet Reg Health West Pac. 2022;20:100355. https://doi.org/10.1016/j.lanwpc.2021.100355 .

Hu Y, Yin X, Gong E et al. Are public health researchers designing for dissemination? Findings from a national survey in China [in review]. Implement Sci Commun 2023.

Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349–57. https://doi.org/10.1093/intqhc/mzm042 .

Ministry of Science and Technology of China. Summary of academic publications of China in 2020 [online]. 2022. https://www.most.gov.cn/xxgk/xinxifenlei/fdzdgknr/kjtjbg/kjtj2022/202209/P020220920391756277580.pdf (accessed 22 Feb 2023).

Guest G, Bunce A, Johnson L. How many interviews are enough? An experiment with data saturation and variability. Field Methods. 2006;18(1):59–82. https://doi.org/10.1177/1525822x05279903 .

Saunders B, Sim J, Kingstone T, et al. Saturation in qualitative research: exploring its conceptualization and operationalization. Qual Quant. 2018;52(4):1893–907. https://doi.org/10.1007/s11135-017-0574-8 .

Vasileiou K, Barnett J, Thorpe S, et al. Characterising and justifying sample size sufficiency in interview-based studies: systematic analysis of qualitative health research over a 15-year period. BMC Med Res Methodol. 2018;18(1):148. https://doi.org/10.1186/s12874-018-0594-7 .

Green LW, Fielding JE, Brownson RC. More on fake news, disinformation, and countering these with Science. Annu Rev Public Health. 2021;42:v–vi. https://doi.org/10.1146/annurev-pu-42-012821-100001 .

Hagg E, Dahinten VS, Currie LM. The emerging use of social media for health-related purposes in low and middle-income countries: a scoping review. Int J Med Inf. 2018;115:92–105. https://doi.org/10.1016/j.ijmedinf.2018.04.010 .

Murunga VI, Oronje RN, Bates I, et al. Review of published evidence on knowledge translation capacity, practice and support among researchers and research institutions in low- and middle-income countries. Health Res Policy Syst. 2020;18(1):16. https://doi.org/10.1186/s12961-019-0524-0 .

Boaz A, Hanney S, Borst R, et al. How to engage stakeholders in research: design principles to support improvement. Health Res Policy Syst. 2018;16(1):60. https://doi.org/10.1186/s12961-018-0337-6 .

Triplett NS, Woodard GS, Johnson C, et al. Stakeholder engagement to inform evidence-based treatment implementation for children’s mental health: a scoping review. Implement Sci Commun. 2022;3(1):82. https://doi.org/10.1186/s43058-022-00327-w .

Knoepke CE, Ingle MP, Matlock DD, et al. Dissemination and stakeholder engagement practices among dissemination & implementation scientists: results from an online survey. PLoS ONE. 2019;14(11):e0216971. https://doi.org/10.1371/journal.pone.0216971 .

Ross-Hellauer T, Tennant JP, Banelytė V, et al. Ten simple rules for innovative dissemination of research. PLoS Comput Biol. 2020;16(4):e1007704. https://doi.org/10.1371/journal.pcbi.1007704 .

Brownson RC. Bridging Research and Practice to Implement Strategic Public Health Science. Am J Public Health. 2021;111(8):1389–91. https://doi.org/10.2105/ajph.2021.306393 .

Bero LA, Grilli R, Grimshaw JM, et al. Closing the gap between research and practice: an overview of systematic reviews of interventions to promote the implementation of research findings. Cochrane Effective Pract Organ Care Rev Group BMJ. 1998;317(7156):465–8. https://doi.org/10.1136/bmj.317.7156.465 .

Article   CAS   Google Scholar  

Moore JB, Maddock JE, Brownson RC. The role of dissemination in Promotion and Tenure for Public Health. J Public Health Manag Pract. 2018;24(1):1–3. https://doi.org/10.1097/phh.0000000000000691 .

Tabak RG, Padek MM, Kerner JF, et al. Dissemination and implementation Science Training needs: insights from practitioners and researchers. Am J Prev Med. 2017;52(3 Suppl 3):S322–9. https://doi.org/10.1016/j.amepre.2016.10.005 .

McNeal DM, Glasgow RE, Brownson RC, et al. Perspectives of scientists on disseminating research findings to non-research audiences. J Clin Transl Sci. 2020;5(1):e61. https://doi.org/10.1017/cts.2020.563 .

Shato T, Kepper MM, McLoughlin GM, et al. Designing for dissemination among public health and clinical practitioners in the USA. J Clin Transl Sci. 2024;8(1):e8. https://doi.org/10.1017/cts.2023.695 .

Smits PA, Denis J-L. How research funding agencies support science integration into policy and practice: an international overview. Implement Sci. 2014;9:28. https://doi.org/10.1186/1748-5908-9-28 .

Glasgow RE, Vinson C, Chambers D, et al. National Institutes of Health approaches to dissemination and implementation science: current and future directions. Am J Public Health. 2012;102(7):1274–81. https://doi.org/10.2105/ajph.2012.300755 .

McLean RK, Graham ID, Bosompra K, et al. Understanding the performance and impact of public knowledge translation funding interventions: protocol for an evaluation of Canadian Institutes of Health Research knowledge translation funding programs. Implement Sci. 2012;7:57. https://doi.org/10.1186/1748-5908-7-57 .

General Office of the Communist Party of China Central Committee, the State Council of China. Opinions on deepening the reform of project evaluation, talent evaluation and institutional assessment [online]. 2018. http://www.gov.cn/zhengce/2018-07/03/content_5303251.htm (accessed 22 Feb 2023).

Ministry of Science and Technology of China. Measures to Break the Undesirable Orientation of. Academic Publications Only in Scientific and Technological Apprasials (for trial implementation) [online]. 2020. http://www.most.gov.cn/xxgk/xinxifenlei/fdzdgknr/fgzc/gfxwj/gfxwj2020/202002/t20200223_151781.html (accessed 22 Feb 2023).

General Office of the State Council of China. Guiding opinions on improving the scientific, technological achievements evaluation system [online]. 2021. http://www.gov.cn/gongbao/content/2021/content_5631817.htm (accessed 22 Feb 2023).

Ministry of Science and Technology of China. Work plan on the piloting of the evaluation reform of scientific and technological talents [online]. 2022. http://www.gov.cn/zhengce/zhengceku/2022-11/10/content_5725957.htm (accessed 22 Feb 2023).

Jensen EA, Wong P, Reed MS. How research data deliver non-academic impacts: a secondary analysis of UK Research Excellence Framework impact case studies. PLoS ONE. 2022;17(3):e0264914. https://doi.org/10.1371/journal.pone.0264914 .

Luke DA, Sarli CC, Suiter AM, et al. The translational science benefits model: a New Framework for assessing the Health and Societal benefits of clinical and Translational sciences. Clin Transl Sci. 2018;11(1):77–84. https://doi.org/10.1111/cts.12495 .

Download references

Acknowledgements

We would like to acknowledge the support of all participants.

This work was supported in part by Disciplines Construction Project: Population Medicine (number WH10022022010) and Disciplines construction project: Multimorbidity (number WH10022022034). RCB is supported by the US National Cancer Institute (number P50CA244431), the National Institute of Diabetes and Digestive and Kidney Diseases (numbers P30DK092950, P30DK056341), and the Centers for Disease Control and Prevention (number U48DP006395), and the Foundation for Barnes-Jewish Hospital.

Author information

Authors and Affiliations

School of Population Medicine and Public Health, Chinese Academy of Medical Sciences, Peking Union Medical College, No.9 Dong Dan San Tiao, Dongcheng District, Beijing, 100730, China

Yiluan Hu, Xuejun Yin, Yachen Wang, Enying Gong, Ruitai Shao & Juan Zhang

The George Institute for Global Health, University of New South Wales, Newtown, NSW, Australia

Faculty of Psychology, Beijing Normal University, Beijing, China

Chinese Preventive Medicine Association, Beijing, 100021, China

Jing Liu & Xia Liu

Research Unit of Population Health, Faculty of Medicine, University of Oulu, Oulu, 5000, Finland

Prevention Research Center, Brown School, Washington University in St. Louis, One Brookings Drive, Campus, Box 1196, St. Louis, MO, 63130, USA

Ross C. Brownson

Department of Surgery, Division of Public Health Sciences, and Alvin J. Siteman Cancer Center, Washington University School of Medicine, Washington University in St. Louis, St. Louis, MO, 63130, USA


Contributions

JZ, RS, and RCB obtained funding. JZ, RS, RCB, and YH were responsible for the conceptualization and design of the study. JZ, RS, YH, XY, EG, and XX developed the interview guide. JZ, YH, JL, and XL collected data. YH and YW analyzed the data. YH wrote the first draft. JZ, RCB, RS, YH, and YW edited the manuscript. All authors approved the final version for submission.

Corresponding authors

Correspondence to Ruitai Shao or Juan Zhang .

Ethics declarations

Ethics approval and consent to participate

This study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Ethics Committee for Biomedical Research Projects involving Humans of the Chinese Academy of Medical Sciences and Peking Union Medical College (CAMS&PUMC-IEC-2021-12) on March 15, 2021. Informed consent was obtained from all participants involved in the study. Consent included permission to be audio-recorded.

The findings and conclusions in this paper are those of the authors and do not necessarily represent the official positions of the National Institutes of Health or the Centers for Disease Control and Prevention.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Supplementary Material 2

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .


About this article

Cite this article.

Hu, Y., Yin, X., Wang, Y. et al. A qualitative exploration of disseminating research findings among public health researchers in China. BMC Public Health 24 , 2518 (2024). https://doi.org/10.1186/s12889-024-19820-z


Received : 12 May 2023

Accepted : 16 August 2024

Published : 16 September 2024

DOI : https://doi.org/10.1186/s12889-024-19820-z


Keywords

  • Dissemination
  • Public health
  • Evidence-based

BMC Public Health

ISSN: 1471-2458


What is Qualitative in Qualitative Research

  • Open access
  • Published: 27 February 2019
  • Volume 42 , pages 139–160, ( 2019 )


  • Patrik Aspers 1 , 2 &
  • Ugo Corte 3  


What is qualitative research? If we look for a precise definition of qualitative research, and specifically for one that addresses its distinctive feature of being “qualitative,” the literature is meager. In this article we systematically search, identify and analyze a sample of 89 sources using or attempting to define the term “qualitative.” Then, drawing on ideas we find scattered across existing work, and based on Becker’s classic study of marijuana consumption, we formulate and illustrate a definition that tries to capture its core elements. We define qualitative research as an iterative process in which improved understanding to the scientific community is achieved by making new significant distinctions resulting from getting closer to the phenomenon studied. This formulation is developed as a tool to help improve research designs while stressing that a qualitative dimension is present in quantitative work as well. Additionally, it can facilitate teaching, communication between researchers, diminish the gap between qualitative and quantitative researchers, help to address critiques of qualitative methods, and be used as a standard of evaluation of qualitative research.


If we assume that there is something called qualitative research, what exactly is this qualitative feature? And how could we evaluate qualitative research as good or not? Is it fundamentally different from quantitative research? In practice, most active qualitative researchers working with empirical material intuitively know what is involved in doing qualitative research, yet perhaps surprisingly, a clear definition addressing its key feature is still missing.

To address the question of what is qualitative we turn to the accounts of “qualitative research” in textbooks and also in empirical work. In his classic, explorative, interview study of deviance Howard Becker ( 1963 ) asks ‘How does one become a marijuana user?’ In contrast to pre-dispositional and psychological-individualistic theories of deviant behavior, Becker’s inherently social explanation contends that becoming a user of this substance is the result of a three-phase sequential learning process. First, potential users need to learn how to smoke it properly to produce the “correct” effects. If not, they are likely to stop experimenting with it. Second, they need to discover the effects associated with it; in other words, to get “high,” individuals not only have to experience what the drug does, but also to become aware that those sensations are related to using it. Third, they require learning to savor the feelings related to its consumption – to develop an acquired taste. Becker, who played music himself, gets close to the phenomenon by observing, taking part, and by talking to people consuming the drug: “half of the fifty interviews were conducted with musicians, the other half covered a wide range of people, including laborers, machinists, and people in the professions” (Becker 1963 :56).

Another central aspect, derived through the common-to-all-research interplay between induction and deduction (Becker 2017), is that during the course of his research Becker adds scientifically meaningful new distinctions in the form of three phases—distinctions, or findings if you will, that strongly affect the course of his research: its focus, the material that he collects, and, eventually, his findings. Each phase typically unfolds through social interaction, and often with input from experienced users in “a sequence of social experiences during which the person acquires a conception of the meaning of the behavior, and perceptions and judgments of objects and situations, all of which make the activity possible and desirable” (Becker 1963:235). In this study the increased understanding of smoking dope is a result of a combination of the meaning of the actors, and the conceptual distinctions that Becker introduces based on the views expressed by his respondents. Understanding is the result of research and is due to an iterative process in which data, concepts and evidence are connected with one another (Becker 2017).

Indeed, there are many definitions of qualitative research, but if we look for a definition that addresses its distinctive feature of being “qualitative,” the literature across the broad field of social science is meager. The main reason behind this article lies in the paradox, which, to put it bluntly, is that researchers act as if they know what it is, but they cannot formulate a coherent definition. Sociologists and others will of course continue to conduct good studies that show the relevance and value of qualitative research addressing scientific and practical problems in society. However, our paper is grounded in the idea that providing a clear definition will help us improve the work that we do. Among researchers who practice qualitative research there is clearly much knowledge. We suggest that a definition makes this knowledge more explicit. If the first rationale for writing this paper refers to the “internal” aim of improving qualitative research, the second refers to the increased “external” pressure that especially many qualitative researchers feel; pressure that comes both from society as well as from other scientific approaches. There is a strong core in qualitative research, and leading researchers tend to agree on what it is and how it is done. Our critique is not directed at the practice of qualitative research, but we do claim that the type of systematic work we do has not yet been done, and that it is useful to improve the field and its status in relation to quantitative research.

The literature on the “internal” aim of improving, or at least clarifying qualitative research is large, and we do not claim to be the first to notice the vagueness of the term “qualitative” (Strauss and Corbin 1998 ). Also, others have noted that there is no single definition of it (Long and Godfrey 2004 :182), that there are many different views on qualitative research (Denzin and Lincoln 2003 :11; Jovanović 2011 :3), and that more generally, we need to define its meaning (Best 2004 :54). Strauss and Corbin ( 1998 ), for example, as well as Nelson et al. (1992:2 cited in Denzin and Lincoln 2003 :11), and Flick ( 2007 :ix–x), have recognized that the term is problematic: “Actually, the term ‘qualitative research’ is confusing because it can mean different things to different people” (Strauss and Corbin 1998 :10–11). Hammersley has discussed the possibility of addressing the problem, but states that “the task of providing an account of the distinctive features of qualitative research is far from straightforward” ( 2013 :2). This confusion, as he has recently further argued (Hammersley 2018 ), is also salient in relation to ethnography where different philosophical and methodological approaches lead to a lack of agreement about what it means.

Others (e.g. Hammersley 2018; Fine and Hancock 2017) have also identified the threat to qualitative research that comes from external forces, seen from the point of view of “qualitative research.” This threat can be further divided into that which comes from inside academia, such as the critique voiced by “quantitative research,” and that which comes from outside of academia, including, for example, New Public Management. Hammersley (2018), zooming in on one type of qualitative research, ethnography, has argued that it is under threat. Similarly to Fine (2003), and before him Gans (1999), he writes that ethnography has acquired a range of meanings and comes in many different versions, these often reflecting sharply divergent epistemological orientations. And already more than twenty years ago, while reviewing Denzin and Lincoln’s Handbook of Qualitative Methods, Fine argued:

While this increasing centrality [of qualitative research] might lead one to believe that consensual standards have developed, this belief would be misleading. As the methodology becomes more widely accepted, querulous challengers have raised fundamental questions that collectively have undercut the traditional models of how qualitative research is to be fashioned and presented (1995:417).

According to Hammersley, there are today “serious threats to the practice of ethnographic work, on almost any definition” (2018:1). He lists five external threats: (1) that social research must be accountable and able to show its impact on society; (2) the current emphasis on “big data” and the emphasis on quantitative data and evidence; (3) the labor market pressure in academia that leaves less time for fieldwork (see also Fine and Hancock 2017); (4) problems of access to fields; and (5) the increased ethical scrutiny of projects, to which ethnography is particularly exposed. Hammersley discusses some more or less insufficient existing definitions of ethnography.

The current situation, as Hammersley and others note—and in relation not only to ethnography but also qualitative research in general, and as our empirical study shows—is not just unsatisfactory, it may even be harmful for the entire field of qualitative research, and does not help social science at large. We suggest that the lack of clarity of qualitative research is a real problem that must be addressed.

Towards a Definition of Qualitative Research

Seen in an historical light, what is today called qualitative, or sometimes ethnographic, interpretative research – or a number of other terms – has more or less always existed. At the time the founders of sociology – Simmel, Weber, Durkheim and, before them, Marx – were writing, and during the era of the Methodenstreit (“dispute about methods”) in which the German historical school emphasized scientific methods (cf. Swedberg 1990 ), we can at least speak of qualitative forerunners.

Perhaps the most extended discussion of what later became known as qualitative methods in a classic work is Bronisław Malinowski’s (1922) Argonauts of the Western Pacific, although even this study does not explicitly address the meaning of “qualitative.” In Weber’s ([1921–22] 1978) work we find a tension between scientific explanations that are based on observation and quantification and interpretative research (see also Lazarsfeld and Barton 1982).

If we look through major sociology journals like the American Sociological Review, American Journal of Sociology, or Social Forces we will not find the term qualitative sociology before the 1970s. And certainly before then much of what we consider qualitative classics in sociology, like Becker’s study (1963), had already been produced. Indeed, the Chicago School often combined qualitative and quantitative data within the same study (Fine 1995). Our point is that before a disciplinary self-awareness the term quantitative preceded qualitative, and the articulation of the former was a political move to claim scientific status (Denzin and Lincoln 2005). In the US, World War II seems to have sparked a critique of sociological work, including “qualitative work,” that did not follow the scientific canon (Rawls 2018), which was underpinned by a scientifically oriented and value-free philosophy of science. As a result the attempts and practice of integrating qualitative and quantitative sociology at Chicago lost ground to sociology that was more oriented to surveys and quantitative work at Columbia under Merton and Lazarsfeld. The quantitative tradition was also able to present textbooks (Lundberg 1951) that facilitated the use of this approach and its “methods.” The practices of the qualitative tradition, by and large, remained tacit or were part of the mentoring transferred from the renowned masters to their students.

This glimpse into history leads us back to the lack of a coherent account condensed in a definition of qualitative research. Many of the attempts to define the term do not meet the requirements of a proper definition: A definition should be clear, avoid tautology, demarcate its domain in relation to the environment, and ideally only use words in its definiens that themselves are not in need of definition (Hempel 1966 ). A definition can enhance precision and thus clarity by identifying the core of the phenomenon. Preferably, a definition should be short. The typical definition we have found, however, is an ostensive definition, which indicates what qualitative research is about without informing us about what it actually is :

Qualitative research is multimethod in focus, involving an interpretative, naturalistic approach to its subject matter. This means that qualitative researchers study things in their natural settings, attempting to make sense of, or interpret, phenomena in terms of the meanings people bring to them. Qualitative research involves the studied use and collection of a variety of empirical materials – case study, personal experience, introspective, life story, interview, observational, historical, interactional, and visual texts – that describe routine and problematic moments and meanings in individuals’ lives. (Denzin and Lincoln 2005 :2)

Flick claims that the label “qualitative research” is indeed used as an umbrella for a number of approaches (2007:2–4; 2002:6), and it is not difficult to identify research fitting this designation. Moreover, whatever it is, it has grown dramatically over the past five decades. In addition, courses have been developed, methods have flourished, arguments about its future have been advanced (for example, Denzin and Lincoln 1994) and criticized (for example, Snow and Morrill 1995), and dedicated journals and books have mushroomed. Most social scientists have a clear idea of research and how it differs from journalism, politics and other activities. But the question of what is qualitative in qualitative research is either elided or eschewed.

We maintain that this lacuna hinders systematic knowledge production based on qualitative research. Paul Lazarsfeld noted the lack of “codification” as early as 1955 when he reviewed 100 qualitative studies in order to offer a codification of the practices (Lazarsfeld and Barton 1982 :239). Since then many texts on “qualitative research” and its methods have been published, including recent attempts (Goertz and Mahoney 2012 ) similar to Lazarsfeld’s. These studies have tried to extract what is qualitative by looking at the large number of empirical “qualitative” studies. Our novel strategy complements these endeavors by taking another approach and looking at the attempts to codify these practices in the form of a definition, as well as to a minor extent take Becker’s study as an exemplar of what qualitative researchers actually do, and what the characteristic of being ‘qualitative’ denotes and implies. We claim that qualitative researchers, if there is such a thing as “qualitative research,” should be able to codify their practices in a condensed, yet general way expressed in language.

Lingering problems of “generalizability” and “how many cases do I need” (Small 2009) are blocking advancement – in this line of work qualitative approaches are said to differ considerably from quantitative ones, while some of the former unsuccessfully mimic principles related to the latter (Small 2009). Additionally, quantitative researchers sometimes unfairly criticize the former based on their own quality criteria. Scholars like Goertz and Mahoney (2012) have successfully focused on the different norms and practices beyond what they argue are essentially two different cultures: those working with either qualitative or quantitative methods. Instead, similarly to Becker (2017), who has recently questioned the usefulness of the distinction between qualitative and quantitative research, we focus on similarities.

The current situation also impedes both students and researchers in focusing their studies and understanding each other’s work (Lazarsfeld and Barton 1982:239). A third consequence is that it provides an opening for critiques by scholars operating within different traditions (Valsiner 2000:101). A fourth issue is that the “implicit use of methods in qualitative research makes the field far less standardized than the quantitative paradigm” (Goertz and Mahoney 2012:9). Relatedly, the National Science Foundation in the US organized two workshops in 2004 and 2005 to address the scientific foundations of qualitative research, involving strategies to improve it and to develop standards of evaluation in qualitative research. However, a specific focus on its distinguishing feature of being “qualitative,” while implicitly acknowledged, was discussed only briefly (for example, Best 2004).

In 2014 a theme issue was published in this journal on “Methods, Materials, and Meanings: Designing Cultural Analysis,” discussing central issues in (cultural) qualitative research (Berezin 2014; Biernacki 2014; Glaeser 2014; Lamont and Swidler 2014; Spillman 2014). We agree with many of the arguments put forward, such as the risk of methodological tribalism, and that we should not waste energy on debating methods separated from research questions. Nonetheless, a clarification of the relation to what is called “quantitative research” is of utmost importance to avoid misunderstandings and misguided debates between “qualitative” and “quantitative” researchers. Our strategy means that researchers, whether “qualitative” or “quantitative,” may in their actual practice combine qualitative and quantitative work.

In this article we accomplish three tasks. First, we systematically survey the literature for meanings of qualitative research by looking at how researchers have defined it. Drawing upon existing knowledge we find that the different meanings and ideas of qualitative research are not yet coherently integrated into one satisfactory definition. Next, we advance our contribution by offering a definition of qualitative research and illustrate its meaning and use partially by expanding on the brief example introduced earlier related to Becker’s work ( 1963 ). We offer a systematic analysis of central themes of what researchers consider to be the core of “qualitative,” regardless of style of work. These themes – which we summarize in terms of four keywords: distinction, process, closeness, improved understanding – constitute part of our literature review, in which each one appears, sometimes with others, but never all in the same definition. They serve as the foundation of our contribution. Our categories are overlapping. Their use is primarily to organize the large amount of definitions we have identified and analyzed, and not necessarily to draw a clear distinction between them. Finally, we continue the elaboration discussed above on the advantages of a clear definition of qualitative research.

In a hermeneutic fashion we propose that there is something meaningful that deserves to be labelled “qualitative research” (Gadamer 1990 ). To approach the question “What is qualitative in qualitative research?” we have surveyed the literature. In conducting our survey we first traced the word’s etymology in dictionaries, encyclopedias, handbooks of the social sciences and of methods and textbooks, mainly in English, which is common to methodology courses. It should be noted that we have zoomed in on sociology and its literature. This discipline has been the site of the largest debate and development of methods that can be called “qualitative,” which suggests that this field should be examined in great detail.

In an ideal situation we should expect that one good definition, or at least some common ideas, would have emerged over the years. This common core of qualitative research should be so accepted that it would appear in at least some textbooks. Since this is not what we found, we decided to pursue an inductive approach to capture maximal variation in the field of qualitative research; we searched in a selection of handbooks, textbooks, book chapters, and books, to which we added the analysis of journal articles. Our sample comprises a total of 89 references.

In practice we focused on the discipline that has had a clear discussion of methods, namely sociology. We also conducted a broad search in the JSTOR database to identify scholarly sociology articles published between 1998 and 2017 in English with a focus on defining or explaining qualitative research. We specifically zoomed in on this time frame because we expected that this more mature period would have produced clear discussions on the meaning of qualitative research. To find these articles we combined a number of keywords to search the content and/or the title: qualitative (which was always included), definition, empirical, research, methodology, studies, fieldwork, interview and observation.
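As an illustration, the keyword combinations just described can be generated mechanically. This is a hedged sketch: the term list comes from the passage above, but the Boolean query format is generic and is not JSTOR's actual search syntax.

```python
# Pair the always-included term "qualitative" with each additional keyword,
# producing one Boolean query per combination. The AND syntax is illustrative,
# not a real JSTOR API call.
base = "qualitative"
others = ["definition", "empirical", "research", "methodology",
          "studies", "fieldwork", "interview", "observation"]

queries = [f'"{base}" AND "{kw}"' for kw in others]
for q in queries:
    print(q)  # e.g. "qualitative" AND "definition"
```

Enumerating the combinations this way makes the search strategy reproducible: anyone re-running the review can regenerate exactly the same set of queries.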

As a second phase of our research we searched within nine major sociological journals ( American Journal of Sociology , Sociological Theory , American Sociological Review , Contemporary Sociology , Sociological Forum , Sociological Theory , Qualitative Research , Qualitative Sociology and Qualitative Sociology Review ) for articles also published during the past 19 years (1998–2017) that had the term “qualitative” in the title and attempted to define qualitative research.

Lastly we picked two additional journals, Qualitative Research and Qualitative Sociology, in which we could expect to find texts addressing the notion of “qualitative.” From Qualitative Research we chose Volume 14, Issue 6, December 2014, and from Qualitative Sociology we chose Volume 36, Issue 2, June 2017. Within each of these we selected the first article; then we picked the second article of three prior issues. Again we went back another three issues and investigated article number three. Finally we went back another three issues and perused article number four. This selection procedure was used to get a manageable sample for the analysis.
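The staggered selection rule above can be written out explicitly. This is a sketch under assumptions: the function and data names are hypothetical, and issues are represented newest-first starting from the chosen issue, each as a list of articles in running order.

```python
def select_articles(issues):
    """Apply the staggered rule: article 1 of the starting issue, article 2
    of the issue three steps back, article 3 six steps back, and article 4
    nine steps back. `issues` is newest-first; each issue lists its articles."""
    selected = []
    for step, issue_index in enumerate(range(0, 10, 3)):  # issues 0, 3, 6, 9
        # Guard against journals with too few issues or short issues.
        if issue_index < len(issues) and step < len(issues[issue_index]):
            selected.append(issues[issue_index][step])
    return selected

# Dummy data: ten issues of four articles each.
issues = [[f"issue{n}-article{m}" for m in range(1, 5)] for n in range(10)]
print(select_articles(issues))
# → ['issue0-article1', 'issue3-article2', 'issue6-article3', 'issue9-article4']
```

Spacing the picks across issues and article positions like this spreads the sample over time and over placement within issues, rather than always sampling lead articles.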

The coding process of the 89 references we gathered in our selected review began soon after the first round of material was gathered, and we reduced the complexity created by our maximum variation sampling (Snow and Anderson 1993:22) to four different categories within which questions on the nature and properties of qualitative research were discussed. We call them: Qualitative and Quantitative Research, Qualitative Research, Fieldwork, and Grounded Theory. This – which may appear as an illogical grouping – merely reflects the “context” in which the matter of “qualitative” is discussed. While the selection process of the material – books and articles – was informed by pre-knowledge, we used an inductive strategy to code the material. When studying our material, we identified four central notions related to “qualitative” that appear in various combinations in the literature and that indicate what the core of qualitative research is. We have labeled them: “distinctions,” “process,” “closeness,” and “improved understanding.” During the research process the categories and notions were improved, refined, changed, and reordered. The coding ended when a sense of saturation in the material arose. In the presentation below all quotations and references come from our empirical material of texts on qualitative research.

Analysis – What is Qualitative Research?

In this section we describe the four categories we identified in the coding, how each of them discusses qualitative research, and their overall content. Some salient quotations are selected to represent the type of text sorted under each of the four categories. What we present are examples from the literature.

Qualitative and Quantitative

This analytic category comprises quotations comparing qualitative and quantitative research, a distinction that is frequently used (Brown 2010 :231); in effect this is a conceptual pair that structures the discussion and that may be associated with opposing interests. While the general goal of quantitative and qualitative research is the same – to understand the world better – their methodologies and focus in certain respects differ substantially (Becker 1966 :55). Quantity refers to that property of something that can be determined by measurement. In a dictionary of Statistics and Methodology we find that “(a) When referring to *variables, ‘qualitative’ is another term for *categorical or *nominal. (b) When speaking of kinds of research, ‘qualitative’ refers to studies of subjects that are hard to quantify, such as art history. Qualitative research tends to be a residual category for almost any kind of non-quantitative research” (Stiles 1998:183). But it should be obvious that one could employ a quantitative approach when studying, for example, art history.

The same dictionary states that quantitative is “said of variables or research that can be handled numerically, usually (too sharply) contrasted with *qualitative variables and research” (Stiles 1998:184). From a qualitative perspective “quantitative research” is about numbers and counting, and from a quantitative perspective qualitative research is everything that is not about numbers. But this does not say much about what is “qualitative.” If we turn to encyclopedias we find that in the 1932 edition of the Encyclopedia of the Social Sciences there is no mention of “qualitative.” In the Encyclopedia from 1968 we can read:

Qualitative Analysis. For methods of obtaining, analyzing, and describing data, see [the various entries:] CONTENT ANALYSIS; COUNTED DATA; EVALUATION RESEARCH, FIELD WORK; GRAPHIC PRESENTATION; HISTORIOGRAPHY, especially the article on THE RHETORIC OF HISTORY; INTERVIEWING; OBSERVATION; PERSONALITY MEASUREMENT; PROJECTIVE METHODS; PSYCHOANALYSIS, article on EXPERIMENTAL METHODS; SURVEY ANALYSIS, TABULAR PRESENTATION; TYPOLOGIES. (Vol. 13:225)

Some, like Alford, divide researchers into methodologists or, in his words, “quantitative and qualitative specialists” (Alford 1998 :12). Qualitative research uses a variety of methods, such as intensive interviews or in-depth analysis of historical materials, and it is concerned with a comprehensive account of some event or unit (King et al. 1994 :4). Like quantitative research it can be utilized to study a variety of issues, but it tends to focus on meanings and motivations that underlie cultural symbols, personal experiences, phenomena and detailed understanding of processes in the social world. In short, qualitative research centers on understanding processes, experiences, and the meanings people assign to things (Kalof et al. 2008 :79).

Others simply say that qualitative methods are inherently unscientific (Jovanović 2011 :19). Hood, for instance, argues that words are intrinsically less precise than numbers, and that they are therefore more prone to subjective analysis, leading to biased results (Hood 2006 :219). Qualitative methodologists, in turn, have raised concerns over the limitations of quantitative templates (Brady et al. 2004 :4). Scholars such as King et al. ( 1994 ), for instance, argue that non-statistical research can produce more reliable results if researchers pay attention to the rules of scientific inference commonly stated in quantitative research. Also, researchers such as Becker ( 1966 :59; 1970 :42–43) have asserted that, if conducted properly, qualitative research, and in particular ethnographic field methods, can lead to more accurate results than quantitative studies, in particular survey research and laboratory experiments.

Some researchers, such as Kalof, Dan, and Dietz ( 2008 :79), claim that the boundaries between the two approaches are becoming blurred, and Small ( 2009 ) argues that currently much qualitative research (especially in North America) tries unsuccessfully and unnecessarily to emulate quantitative standards. For others, qualitative research tends to be more humanistic and discursive (King et al. 1994 :4). Ragin ( 1994 ), and similarly Becker ( 1996 :53) and Marchel and Owens ( 2007 :303), think that the main distinction between the two styles is overstated and does not rest on the simple dichotomy of “numbers versus words” (Ragin 1994 :xii). Some claim that quantitative data can be utilized to discover associations, but that in order to unveil cause and effect a complex research design involving the use of qualitative approaches needs to be devised (Gilbert 2009 :35); on this view, qualitative data are useful for understanding the nuances lying beyond those processes as they unfold (Gilbert 2009 :35). Others contend that qualitative research is particularly well suited both to identify causality and to uncover fine descriptive distinctions (Fine and Hallett 2014 ; Lichterman and Isaac Reed 2014 ; Katz 2015 ).

There are other ways to separate these two traditions, including normative statements about what qualitative research should be (that is, better or worse than quantitative approaches, or concerned with scientific approaches to societal change or vice versa; Snow and Morrill 1995 ; Denzin and Lincoln 2005 ), or whether it should develop falsifiable statements (Best 2004 ).

We propose that quantitative research is largely concerned with pre-determined variables (Small 2008 ); the analysis concerns the relations between variables. These categories are primarily not questioned in the study, only their frequency or degree, or the correlations between them (cf. Franzosi 2016 ). If a researcher studies wage differences between women and men, he or she works with given categories: x number of men are compared with y number of women, with a certain wage attributed to each person. The idea is not to move beyond the given categories of wage, men and women; they are the starting point as well as the end point, and undergo no “qualitative change.” Qualitative research, in contrast, investigates relations between categories that are themselves subject to change in the research process. Returning to Becker’s study ( 1963 ), we see that he questioned pre-dispositional theories of deviant behavior that work with pre-determined variables such as an individual’s combination of personal qualities or emotional problems. His take, in contrast, was to understand marihuana consumption by developing “variables” as part of the investigation. He thereby presented new variables, or as we would say today, theoretical concepts, that are grounded in the empirical material.

Qualitative Research

This category contains quotations that refer to descriptions of qualitative research without making comparisons with quantitative research. Researchers such as Denzin and Lincoln, who have written a series of influential handbooks on qualitative methods (1994; Denzin and Lincoln 2003 ; 2005 ), citing Nelson et al. (1992:4), argue that because qualitative research is “interdisciplinary, transdisciplinary, and sometimes counterdisciplinary” it is difficult to derive one single definition of it (Jovanović 2011 :3). According to them, in fact, “the field” is “many things at the same time,” involving contradictions, tensions over its focus, methods, and how to derive interpretations and findings ( 2003 : 11). Similarly, others, such as Flick ( 2007 :ix–x), contend that agreeing on an accepted definition has become increasingly problematic, and that qualitative research has possibly matured into different identities. However, Best holds that “the proliferation of many sorts of activities under the label of qualitative sociology threatens to confuse our discussions” ( 2004 :54). Atkinson’s position is more definite: “the current state of qualitative research and research methods is confused” ( 2005 :3–4).

Qualitative research is about interpretation (Blumer 1969 ; Strauss and Corbin 1998 ; Denzin and Lincoln 2003 ), or Verstehen [understanding] (Frankfort-Nachmias and Nachmias 1996 ). It is “multi-method,” involving the collection and use of a variety of empirical materials (Denzin and Lincoln 1998; Silverman 2013 ) and approaches (Silverman 2005 ; Flick 2007 ). It focuses not only on the objective nature of behavior but also on its subjective meanings: individuals’ own accounts of their attitudes, motivations, behavior (McIntyre 2005 :127; Creswell 2009 ), events and situations (Bryman 1989) – what people say and do in specific places and institutions (Goodwin and Horowitz 2002 :35–36) in social and temporal contexts (Morrill and Fine 1997). For this reason, following Weber ([1921-22] 1978), it can be described as an interpretative science (McIntyre 2005 :127). But could quantitative research also be concerned with these questions? Also, as pointed out below, does all qualitative research focus on subjective meaning, as some scholars suggest?

Others also distinguish qualitative research by claiming that it collects data using a naturalistic approach (Denzin and Lincoln 2005 :2; Creswell 2009 ), focusing on the meaning actors ascribe to their actions. But again, does all qualitative research need to be collected in situ? And does qualitative research have to be inherently concerned with meaning? Flick ( 2007 ), referring to Denzin and Lincoln ( 2005 ), mentions conversation analysis as an example of qualitative research that is not concerned with the meanings people bring to a situation, but rather with the formal organization of talk. Still others, such as Ragin ( 1994 :85), note that qualitative research is often (especially early on in the project, we would add) less structured than other kinds of social research – a characteristic connected to its flexibility that can lead to potentially better, but also worse, results. But is this not a feature of this type of research, rather than a defining description of its essence? Wouldn’t this comment also apply, albeit to varying degrees, to quantitative research?

In addition, Strauss ( 2003 ), along with others such as Alvesson and Kärreman ( 2011 :10–76), argues that qualitative researchers struggle to capture and represent complex phenomena partly because they tend to collect a large amount of data. While his analysis is correct on some points – “It is necessary to do detailed, intensive, microscopic examination of the data in order to bring out the amazing complexity of what lies in, behind, and beyond those data” (Strauss 2003 :10) – much of it concerns the supposed focus of qualitative research and its challenges, rather than exactly what it is about. But even in this instance it would be a weak case to argue that these are strictly the defining features of qualitative research. Some researchers seem to focus on the approach or the methods used, or even on the way material is analyzed. Several researchers stress the naturalistic assumption of investigating the world, suggesting that meaning and interpretation appear to be a core matter of qualitative research.

We can also see that in this category there is no consensus about specific qualitative methods nor about qualitative data. Many emphasize interpretation, but quantitative research, too, involves interpretation; the results of a regression analysis, for example, certainly have to be interpreted, and the form of meta-analysis that factor analysis provides indeed requires interpretation. However, there is no interpretation of quantitative raw data, i.e., numbers in tables. One common thread is that qualitative researchers have to get to grips with their data in order to understand what is being studied in great detail, irrespective of the type of empirical material that is being analyzed. This observation is connected to the fact that qualitative researchers routinely make several adjustments of focus and research design as their studies progress, in many cases until the very end of the project (Kalof et al. 2008 ). If you, like Becker, do not start out with a detailed theory, adjustments such as the emergence and refinement of research questions will occur during the research process. We have thus found a number of useful reflections about qualitative research scattered across different sources, but none of them effectively describes the defining characteristics of this approach.

Although qualitative research does not appear to be defined in terms of a specific method, it is certainly common that fieldwork, i.e., research that entails that the researcher spends considerable time in the field that is studied and uses the knowledge gained as data, is seen as emblematic of or even identical to qualitative research. Because fieldwork tends to focus primarily on the collection and analysis of qualitative data, we expected to find within this literature discussions on the meaning of “qualitative.” But, again, this was not the case.

Instead, we found material on the history of this approach (for example, Frankfort-Nachmias and Nachmias 1996 ; Atkinson et al. 2001), including how it has changed; for example, by adopting a more self-reflexive practice (Heyl 2001), as well as the different nomenclature that has been adopted, such as fieldwork, ethnography, qualitative research, naturalistic research, participant observation and so on (for example, Lofland et al. 2006 ; Gans 1999 ).

We retrieved definitions of ethnography, such as “the study of people acting in the natural courses of their daily lives,” involving a “resocialization of the researcher” (Emerson 1988 :1) through intense immersion in others’ social worlds (see also examples in Hammersley 2018 ). This may be accomplished by direct observation and also participation (Neuman 2007 :276), although others, such as Denzin ( 1970 :185), have long recognized other types of observation, including non-participant (“fly on the wall”). In this category we have also isolated claims and opposing views arguing that this type of research is distinguished primarily by where it is conducted (natural settings) (Hughes 1971:496), by how it is carried out (a variety of methods are applied) or, for some most importantly, by involving an active, empathetic immersion in those being studied (Emerson 1988 :2). We also retrieved descriptions of the goals it attends to in relation to how it is taught (understanding subjective meanings of the people studied, primarily developing theory, or contributing to social change) (see for example, Corte and Irwin 2017 ; Frankfort-Nachmias and Nachmias 1996 :281; Trier-Bieniek 2012 :639) by collecting the richest possible data (Lofland et al. 2006 ) to derive “thick descriptions” (Geertz 1973 ), and/or to aim at theoretical statements of general scope and applicability (for example, Emerson 1988 ; Fine 2003 ). We have identified guidelines on how to evaluate it (for example Becker 1996 ; Lamont 2004 ) and have retrieved instructions on how it should be conducted (for example, Lofland et al. 2006 ). For instance, analysis should take place while the data gathering unfolds (Emerson 1988 ; Hammersley and Atkinson 2007 ; Lofland et al. 2006 ), observations should be of long duration (Becker 1970 :54; Goffman 1989 ), and data should be of high quantity (Becker 1970 :52–53). We also found other, questionable, distinctions between fieldwork and other methods:

Field studies differ from other methods of research in that the researcher performs the task of selecting topics, decides what questions to ask, and forges interest in the course of the research itself . This is in sharp contrast to many ‘theory-driven’ and ‘hypothesis-testing’ methods. (Lofland and Lofland 1995 :5)

But could not, for example, a strictly interview-based study be carried out with the same amount of flexibility, such as sequential interviewing (for example, Small 2009 )? Once again, are quantitative approaches really as inflexible as some qualitative researchers think? Moreover, this category stresses the role of the actors’ meaning, which requires knowledge and close interaction with people, their practices and their lifeworld.

It is clear that field studies – which are seen by some as the “gold standard” of qualitative research – are nonetheless only one way of doing qualitative research. There are other methods, but it is not clear why some are more qualitative than others, or why they are better or worse. Fieldwork is characterized by interaction with the field (the material) and understanding of the phenomenon that is being studied. Becker, for instance, had general experience of fields in which marihuana was used, and on that basis he conducted interviews with actual users in several fields.

Grounded Theory

Another major category we identified in our sample is Grounded Theory. We found descriptions of it most clearly in Glaser and Strauss’ ([1967] 2010 ) original articulation, Strauss and Corbin ( 1998 ) and Charmaz ( 2006 ), as well as many other accounts of what it is for: generating and testing theory (Strauss 2003 :xi). We identified explanations of how this task can be accomplished – such as through two main procedures: constant comparison and theoretical sampling (Emerson 1998:96), and how using it has helped researchers to “think differently” (for example, Strauss and Corbin 1998 :1). We also read descriptions of its main traits, what it entails and fosters – for instance, an exceptional flexibility, an inductive approach (Strauss and Corbin 1998 :31–33; 1990; Esterberg 2002 :7), an ability to step back and critically analyze situations, recognize tendencies towards bias, think abstractly and be open to criticism, enhance sensitivity towards the words and actions of respondents, and develop a sense of absorption and devotion to the research process (Strauss and Corbin 1998 :5–6). Accordingly, we identified discussions of the value of triangulating different methods (both using and not using grounded theory), including quantitative ones, and theories to achieve theoretical development (most comprehensively in Denzin 1970 ; Strauss and Corbin 1998 ; Timmermans and Tavory 2012 ). We have also located arguments about how its practice helps to systematize data collection, analysis and presentation of results (Glaser and Strauss [1967] 2010 :16).

Grounded theory offers a systematic approach which requires researchers to get close to the field; closeness is a requirement of identifying questions and developing new concepts or making further distinctions with regard to old concepts. In contrast to other qualitative approaches, grounded theory emphasizes the detailed coding process, and the numerous fine-tuned distinctions that the researcher makes during the process. Within this category, too, we could not find a satisfying discussion of the meaning of qualitative research.

Defining Qualitative Research

In sum, our analysis shows that some notions reappear in the discussion of qualitative research, such as understanding, interpretation, “getting close” and making distinctions. These notions capture aspects of what we think is “qualitative.” However, a comprehensive definition that is useful and that can further develop the field is lacking, and not even a clear picture of its essential elements emerges. In other words, no definition emerges from our data, and in our research process we have moved back and forth between our empirical data and the attempt to present a definition. Our concrete strategy, as stated above, is to relate qualitative and quantitative research, or more specifically, qualitative and quantitative work. We use an ideal-typical notion of quantitative research which relies on taken-for-granted, numbered variables. This means that the data consist of variables on different scales, such as ordinal, but frequently ratio and absolute scales, and that the assignment of numbers to objects or phenomena is not questioned, though its validity may be. In this section we return to the notion of quality and try to clarify it while presenting our contribution.

Broadly, research refers to the activity performed by people trained to obtain knowledge through systematic procedures. Notions such as “objectivity” and “reflexivity,” “systematic,” “theory,” “evidence” and “openness” are here taken for granted in any type of research. Next, building on our empirical analysis we explain the four notions that we have identified as central to qualitative work: distinctions, process, closeness, and improved understanding. In discussing them, ultimately in relation to one another, we make their meaning even more precise. Our idea, in short, is that only when these ideas that we present separately for analytic purposes are brought together can we speak of qualitative research.

Distinctions

We believe that the possibility of making new distinctions is one of the defining characteristics of qualitative research. It clearly sets it apart from quantitative analysis, which works with taken-for-granted variables, although, as mentioned, meta-analyses such as factor analysis may result in new variables. “Quality” refers essentially to distinctions, as already pointed out by Aristotle, who comments: “By a quality I mean that in virtue of which things are said to be qualified somehow” (Aristotle 1984:14). Quality is about what something is or has, which means that the distinction from its environment is crucial. We see qualitative research as a process in which significant new distinctions are made to the scholarly community; to make distinctions is a key aspect of obtaining new knowledge; a point, as we will see, that also has implications for “quantitative research.” The notion of being “significant” is paramount. New distinctions by themselves are not enough; just adding concepts only increases complexity without furthering our knowledge. The significance of new distinctions is judged against the communal knowledge of the research community. To enable this discussion and these judgements, central elements of rational discussion are required (cf. Habermas [1981] 1987 ; Davidsson [ 1988 ] 2001) to identify what is new and relevant scientific knowledge. Relatedly, Ragin alludes to the idea of new and useful knowledge at a more concrete level: “Qualitative methods are appropriate for in-depth examination of cases because they aid the identification of key features of cases. Most qualitative methods enhance data” (1994:79). When Becker ( 1963 ) studied deviant behavior and investigated how people became marihuana smokers, he made distinctions between the ways in which people learned how to smoke.
This is a classic example of how the strategy of “getting close” to the material, for example the text, people or pictures that are subject to analysis, may enable researchers to obtain deeper insight and new knowledge by making distinctions – in this instance on the initial notion of learning how to smoke. Others have stressed the making of distinctions in relation to coding or theorizing. Emerson et al. ( 1995 ), for example, hold that “qualitative coding is a way of opening up avenues of inquiry,” meaning that the researcher identifies and develops concepts and analytic insights through close examination of and reflection on data (Emerson et al. 1995 :151). Goodwin and Horowitz highlight making distinctions in relation to theory-building, writing: “Close engagement with their cases typically requires qualitative researchers to adapt existing theories or to make new conceptual distinctions or theoretical arguments to accommodate new data” ( 2002 : 37). In ideal-typical quantitative research only existing and, so to speak, given variables would be used. If this is the case, no new distinctions are made. But would not many “quantitative” researchers also make new distinctions?

Process

Process does not merely suggest that research takes time. It mainly implies that new qualitative knowledge results from a process that involves several phases, and above all iteration. Qualitative research is about oscillation between theory and evidence, analysis and generating material, between first- and second-order constructs (Schütz 1962 :59), between getting in contact with something, finding sources, becoming deeply familiar with a topic, and then distilling and communicating some of its essential features. The main point is that the categories that the researcher uses, and perhaps takes for granted at the beginning of the research process, usually undergo qualitative changes resulting from what is found. Becker describes how he tested hypotheses and let the jargon of the users develop into theoretical concepts. This happens over time while the study is being conducted, exemplifying what we mean by process.

In the research process, a pilot-study may be used to get a first glance of, for example, the field, how to approach it, and what methods can be used, after which the method and theory are chosen or refined before the main study begins. Thus, the empirical material is often central from the start of the project and frequently leads to adjustments by the researcher. Likewise, during the main study categories are not fixed; the empirical material is seen in light of the theory used, but it is also given the opportunity to kick back, thereby resisting attempts to apply theoretical straightjackets (Becker 1970 :43). In this process, coding and analysis are interwoven, and thus are often important steps for getting closer to the phenomenon and deciding what to focus on next. Becker began his research by interviewing musicians close to him, then asking them to refer him to other musicians, and later on doubling his original sample of about 25 to include individuals in other professions (Becker 1973:46). Additionally, he made use of some participant observation, documents, and interviews with opiate users made available to him by colleagues. As his inductive theory of deviance evolved, Becker expanded his sample in order to fine tune it, and test the accuracy and generality of his hypotheses. In addition, he introduced a negative case and discussed the null hypothesis ( 1963 :44). His phasic career model is thus based on a research design that embraces processual work. Typically, process means to move between “theory” and “material” but also to deal with negative cases, and Becker ( 1998 ) describes how discovering these negative cases impacted his research design and ultimately its findings.

Obviously, all research is process-oriented to some degree. The point is that the ideal-typical quantitative process does not imply change of the data, nor iteration between data, evidence, hypotheses, empirical work, and theory. The data, quantified variables, are in most cases fixed. Merging of data, which of course can be done in a quantitative research process, does not yield new data. New hypotheses are frequently tested, but the raw data often remain the same. Obviously, over time new datasets are made available and put into use.

Closeness

Another characteristic that is emphasized in our sample is that qualitative researchers – and in particular ethnographers – can, or as Goffman ( 1989 ) put it, ought to, get closer to the phenomenon being studied and their data than quantitative researchers (for example, Silverman 2009 :85). Put differently, essentially because of their methods qualitative researchers get into direct close contact with those being investigated and/or the material, such as texts, being analyzed. Becker started out his interview study, as we noted, by talking to those he knew in the field of music to get closer to the phenomenon he was studying. By conducting interviews he got even closer. Had he done more observations, he would undoubtedly have got even closer to the field.

Additionally, the ethnographic design enables researchers to follow the field over time, and the research they do is almost by definition longitudinal, though the time spent in the field obviously differs between studies. The general characteristic of closeness over time maximizes the chances of unexpected events, new data (related, for example, to archival research as additional sources, and for ethnography to situations not necessarily previously thought of as instrumental – what Mannay and Morgan ( 2015 ) term the “waiting field”), serendipity (Merton and Barber 2004 ; Åkerström 2013 ), and possibly reactivity, as well as the opportunity to observe disrupted patterns that translate into exemplars of negative cases. Two classic examples of this are Becker’s finding of what medical students call “crocks” (Becker et al. 1961 :317), and Geertz’s ( 1973 ) study of “deep play” in Balinese society.

By getting and staying so close to their data – be it pictures, text or humans interacting (Becker was himself a musician) – for a long time, as the research progressively focuses, qualitative researchers are prompted to continually test their hunches, presuppositions and hypotheses. They test them against a reality that often (but certainly not always), practically as well as metaphorically, talks back, whether by validating them or by disqualifying their premises – correctly, as well as incorrectly (Fine 2003 ; Becker 1970 ). This testing nonetheless often leads to new directions for the research. Becker, for example, says that he was initially reading psychological theories, but when facing the data he developed a theory that looks at, one might say, everything but psychological dispositions to explain the use of marihuana. Researchers involved with ethnographic methods especially have a fairly unique opportunity to dig up and then test (in a circular, continuous and temporal way) new research questions and findings as the research progresses, and thereby to derive previously unimagined and uncharted distinctions by getting closer to the phenomenon under study.

Let us stress that getting close is by no means restricted to ethnography. The notion of the hermeneutic circle, and hermeneutics as a general way of understanding, implies that we must get close to the details in order to get the big picture. This also means that qualitative researchers can, quite literally, make use of details of pictures as evidence (cf. Harper 2002). Thus, researchers may get closer both when generating the material and when analyzing it.

Quantitative research, we maintain, in the ideal-typical representation cannot get closer to the data. The data is essentially numbers in tables making up the variables (Franzosi 2016 :138). The data may originally have been “qualitative,” but once reduced to numbers there can only be a type of “hermeneutics” about what the number may stand for. The numbers themselves, however, are non-ambiguous. Thus, in quantitative research, interpretation, if done, is not about the data itself—the numbers—but what the numbers stand for. It follows that the interpretation is essentially done in a more “speculative” mode without direct empirical evidence (cf. Becker 2017 ).

Improved Understanding

While distinction, process and getting closer refer to the qualitative work of the researcher, improved understanding refers to its conditions and outcome of this work. Understanding cuts deeper than explanation, which to some may mean a causally verified correlation between variables. The notion of explanation presupposes the notion of understanding since explanation does not include an idea of how knowledge is gained (Manicas 2006 : 15). Understanding, we argue, is the core concept of what we call the outcome of the process when research has made use of all the other elements that were integrated in the research. Understanding, then, has a special status in qualitative research since it refers both to the conditions of knowledge and the outcome of the process. Understanding can to some extent be seen as the condition of explanation and occurs in a process of interpretation, which naturally refers to meaning (Gadamer 1990 ). It is fundamentally connected to knowing, and to the knowing of how to do things (Heidegger [1927] 2001 ). Conceptually the term hermeneutics is used to account for this process. Heidegger ties hermeneutics to human being and not possible to separate from the understanding of being ( 1988 ). Here we use it in a broader sense, and more connected to method in general (cf. Seiffert 1992 ). The abovementioned aspects – for example, “objectivity” and “reflexivity” – of the approach are conditions of scientific understanding. Understanding is the result of a circular process and means that the parts are understood in light of the whole, and vice versa. Understanding presupposes pre-understanding, or in other words, some knowledge of the phenomenon studied. The pre-understanding, even in the form of prejudices, are in qualitative research process, which we see as iterative, questioned, which gradually or suddenly change due to the iteration of data, evidence and concepts. 
Qualitative research generates understanding in this iterative process as the researcher gets closer to the data, e.g., by going back and forth between field and analysis in a process that generates new data, which changes the evidence and, ultimately, the findings. Questioning, that is, asking questions and putting what one assumes—prejudices and presumptions—into question, is central to understanding something (Heidegger [1927] 2001; Gadamer 1990:368–384). We propose that this iterative process, in which understanding occurs, is characteristic of qualitative research.

Improved understanding means that we obtain scientific knowledge of something that we as a scholarly community did not know before, or that we come to know something better. It means that we understand more about how parts are related to one another, and to other things we already understand (see also Fine and Hallett 2014). Understanding is an important condition for qualitative research. It is not enough to identify correlations, make distinctions, and work in a process in which one gets close to the field or phenomena. Understanding is accomplished when these elements are integrated in an iterative process.

It is, moreover, possible to understand many things, and researchers, just like children, may come to understand new things every day as they engage with the world. This subjective condition of understanding – namely, that a person gains a better understanding of something – is easily met. To qualify as “scientific,” the understanding must be general and useful to many; it must be public. But even this generally accessible understanding is not enough to speak of “scientific understanding.” Though we as a collective can increase our understanding of almost anything, in virtually any direction, also as a result of qualitative work, we refrain from this “objective” way of understanding, which has no means of discriminating among the things we come to understand. Scientific understanding means that it is deemed relevant from the scientific horizon (compare Schütz 1962: 35–38, 46, 63), and that it rests on the pre-understanding that scientists have and must have in order to understand. In other words, the understanding gained must be deemed useful by other researchers, so that they can build on it. We thus see understanding from a pragmatic, rather than a subjective or objective, perspective. Improved understanding is related to the question(s) at hand, and, to represent an improvement, it must be an improvement in relation to the existing body of knowledge of the scientific community (James [1907] 1955). Scientific understanding is, by definition, collective, as expressed in Weber’s famous note on objectivity, namely that scientific work aims at truths “which … can claim, even for a Chinese, the validity appropriate to an empirical analysis” ([1904] 1949:59). By qualifying “improved understanding” we argue that it is a general defining characteristic of qualitative research. Becker’s (1963) study and other research on deviant behavior increased our understanding of the social learning processes through which individuals take up a behavior, and it also added new knowledge about the labeling of deviant behavior as a social process. Few studies, of course, make as large a contribution as Becker’s, but they are nonetheless qualitative research.

Understanding in the phenomenological sense, which we argue is a hallmark of qualitative research, requires meaning, and this meaning is derived from the context, and above all from the data being analyzed. Ideal-typical quantitative research operates with given variables taking different numerical values. This type of material is not enough to establish meaning at the level that truly justifies understanding. In other words, many social science explanations offer ideas about correlations or even causal relations, but this does not mean that the meaning at the level of the data analyzed is understood. There are indeed explanations that meet the criterion of understanding, for example Becker’s explanation of how one becomes a marihuana smoker. However, we may also understand a phenomenon without explaining it, and we may have potential explanations, or rather correlations, that are not really understood.

We may speak more generally of quantitative research and its data to clarify what we see as an important distinction. The “raw data” that quantitative research, as an ideal-typical activity, refers to are not available for further analysis; the numbers, once created, are not to be questioned (Franzosi 2016: 138). If the researcher is to do “more” or to “change” something, this will be done by conjectures based on theoretical knowledge or on the researcher’s lifeworld. Both qualitative and quantitative research are based on the lifeworld, and all researchers use prejudices and pre-understanding in the research process. This idea is present in the works of Heidegger (2001) and Heisenberg (cited in Franzosi 2010:619). Qualitative research, as we have argued, involves the interaction and questioning of concepts (theory), data, and evidence.

Ragin (2004:22) points out that “a good definition of qualitative research should be inclusive and should emphasize its key strengths and features, not what it lacks (for example, the use of sophisticated quantitative techniques).” We define qualitative research as an iterative process in which improved understanding to the scientific community is achieved by making new significant distinctions resulting from getting closer to the phenomenon studied. Qualitative research, as defined here, thus combines two criteria: (i) how to do things, namely generating and analyzing empirical material in an iterative process in which one gets closer by making distinctions, and (ii) the outcome: improved understanding novel to the scholarly community. Is our definition applicable to our own study? In this study we have closely read the empirical material that we generated, and the novel distinction of the notion “qualitative research” is the outcome of an iterative process, involving both deduction and induction, in which we identified the categories that we analyzed. We thus claim to meet the first criterion, “how to do things.” The second criterion can be judged by us only partially, namely whether the outcome—in concrete form, the definition—improves the understanding of others in the scientific community.

We have defined qualitative research, or qualitative scientific work, in relation to quantitative scientific work. Given this definition, qualitative research is about questioning pre-given (taken-for-granted) variables, but it is also about making new distinctions in any type of phenomenon, for example by coining new concepts, including the identification of new variables. This process, as we have discussed, is carried out in relation to empirical material and previous research, and thus in relation to theory. Theory and previous research cannot be escaped or bracketed. According to hermeneutic principles, all scientific work is grounded in the lifeworld, and as social scientists we can thus never fully bracket our pre-understanding.

We have proposed that quantitative research, as an ideal type, is concerned with pre-determined variables (Small 2008). Variables are epistemically fixed, but can vary in terms of dimensions, such as frequency or number. Age is an example; as a variable it can take on different numerical values. In contrast to quantitative research, qualitative research does not reduce its material to numbers and variables. If this is done, the process of getting closer comes to a halt, the researcher becomes more distanced from her data, and it is no longer possible to make new distinctions that increase our understanding. We have discussed the components of our definition above in relation to quantitative research. Our conclusion is that the research that is called quantitative contains frequent and necessary qualitative elements.

Further, comparative empirical research on researchers primarily working with “quantitative” approaches and those working with “qualitative” approaches would, we propose, perhaps show that there are many similarities in the practices of these two approaches. This is not to deny dissimilarities, or the different epistemic and ontic presuppositions that may be more or less strongly associated with the two strands (see Goertz and Mahoney 2012). Our point is nonetheless that prejudices and preconceptions about researchers are unproductive, that, as other researchers have argued, differences may be exaggerated (e.g., Becker 1996: 53, 2017; Marchel and Owens 2007:303; Ragin 1994), and that a qualitative dimension is present in both kinds of work.

Several things follow from our findings. The most important result concerns the relation to quantitative research. In our analysis we have separated qualitative research from quantitative research. The point is not to label individual researchers, methods, projects, or works as either “quantitative” or “qualitative.” By analyzing, i.e., taking apart, the notions of quantitative and qualitative, we hope to have shown the elements of qualitative research. Our definition captures these elements and how, when combined in practice, they generate understanding. As many of the quotations we have used suggest, one conclusion of our study is that qualitative approaches are not inherently connected with a specific method. Put differently, none of the methods that are frequently labelled “qualitative,” such as interviews or participant observation, are inherently “qualitative.” What matters, given our definition, is whether one works qualitatively or quantitatively in the research process, until the results are produced. Consequently, our analysis also suggests that researchers working with what in the literature and in jargon is often called “quantitative research” are almost bound to make use of what we have identified as qualitative elements in any research project. Our findings also suggest that many “quantitative” researchers are, at least to some extent, engaged in qualitative work, such as when research questions are developed, variables are constructed and combined, and hypotheses are formulated. Furthermore, a research project may hover between “qualitative” and “quantitative,” or start out as “qualitative” and later move into a “quantitative” phase (a distinct strategy that is not the same as “mixed methods,” or simply combining induction and deduction). More generally, the categories of “qualitative” and “quantitative” unfortunately often cover up practices, and they may lead to “camps” of researchers opposing one another. For example, regardless of whether the researcher is primarily oriented to “quantitative” or “qualitative” research, the role of theory tends to be neglected (cf. Swedberg 2017). Our results open up for an interaction characterized not by differences, but by different emphases and by similarities.

Let us take two examples to briefly indicate how qualitative elements can fruitfully be combined with quantitative ones. Franzosi (2010) has discussed the relations between quantitative and qualitative approaches, and more specifically the relation between words and numbers. He analyzes texts and argues that scientific meaning cannot be reduced to numbers. Put differently, the meaning of the numbers is to be understood through what is taken for granted, what is part of the lifeworld (Schütz 1962). Franzosi shows how one can use qualitative and quantitative methods and data to address scientific questions, analyzing violence in Italy at the time when fascism was rising (1919–1922). Aspers (2006) studied meaning among fashion photographers. He uses an empirical phenomenological approach and establishes meaning at the level of actors. In a second step, this meaning, and the different ideal-typical photographers constructed as a result of participant observation and interviews, are tested using quantitative data from a database: first to verify the ideal types, and then to use these types to establish new knowledge about them. In both of these cases—and more examples can be found—the authors move from qualitative data and try to preserve the meaning established when using the quantitative data.

A second main result of our study is that a definition, and we have provided one, offers a way for researchers to clarify, and even evaluate, what is done. Hence, our definition can guide researchers and students, informing them how to think about the concrete research problems they face, and showing what it means to get closer in a process in which new distinctions are made. The definition can also be used to evaluate results, given that it provides a standard of evaluation (cf. Hammersley 2007): one can ask whether new distinctions are made and whether they improve our understanding of what is researched, in addition to evaluating how the research was conducted. By making explicit what qualitative research is, it becomes easier to communicate findings, and it becomes much harder to fly under the radar with substandard research, since there are standards of evaluation that make it easier to separate “good” from “not so good” qualitative research.

To conclude, our analysis, which ends with a definition of qualitative research, can thus address both the “internal” issue of what qualitative research is and the “external” critiques that make it harder to do qualitative research, critiques to which both pressure from quantitative methods and general changes in society contribute.

Åkerström, Malin. 2013. Curiosity and serendipity in qualitative research. Qualitative Sociology Review 9 (2): 10–18.


Alford, Robert R. 1998. The craft of inquiry. Theories, methods, evidence . Oxford: Oxford University Press.

Alvesson, Mats, and Dan Kärreman. 2011. Qualitative research and theory development. Mystery as method . London: SAGE Publications.


Aspers, Patrik. 2006. Markets in fashion. A phenomenological approach . London: Routledge.

Atkinson, Paul. 2005. Qualitative research. Unity and diversity. Forum: Qualitative Social Research 6 (3): 1–15.

Becker, Howard S. 1963. Outsiders. Studies in the sociology of deviance . New York: The Free Press.

Becker, Howard S. 1966. Whose side are we on? Social Problems 14 (3): 239–247.


Becker, Howard S. 1970. Sociological work. Method and substance . New Brunswick: Transaction Books.

Becker, Howard S. 1996. The epistemology of qualitative research. In Ethnography and human development. Context and meaning in social inquiry , ed. Jessor Richard, Colby Anne, and Richard A. Shweder, 53–71. Chicago: University of Chicago Press.

Becker, Howard S. 1998. Tricks of the trade. How to think about your research while you're doing it . Chicago: University of Chicago Press.

Becker, Howard S. 2017. Evidence . Chicago: University of Chicago Press.

Becker, Howard, Blanche Geer, Everett Hughes, and Anselm Strauss. 1961. Boys in White, student culture in medical school . New Brunswick: Transaction Publishers.

Berezin, Mabel. 2014. How do we know what we mean? Epistemological dilemmas in cultural sociology. Qualitative Sociology 37 (2): 141–151.

Best, Joel. 2004. Defining qualitative research. In Workshop on Scientific Foundations of Qualitative Research , ed. Charles Ragin, Joane Nagel, and Patricia White, 53–54. http://www.nsf.gov/pubs/2004/nsf04219/nsf04219.pdf .

Biernacki, Richard. 2014. Humanist interpretation versus coding text samples. Qualitative Sociology 37 (2): 173–188.

Blumer, Herbert. 1969. Symbolic interactionism: Perspective and method . Berkeley: University of California Press.

Brady, Henry, David Collier, and Jason Seawright. 2004. Refocusing the discussion of methodology. In Rethinking social inquiry. Diverse tools, shared standards , ed. Brady Henry and Collier David, 3–22. Lanham: Rowman and Littlefield.

Brown, Allison P. 2010. Qualitative method and compromise in applied social research. Qualitative Research 10 (2): 229–248.

Charmaz, Kathy. 2006. Constructing grounded theory . London: Sage.

Corte, Ugo, and Katherine Irwin. 2017. The form and flow of teaching ethnographic knowledge: Hands-on approaches for learning epistemology. Teaching Sociology 45 (3): 209–219.

Creswell, John W. 2009. Research design. Qualitative, quantitative, and mixed method approaches . 3rd ed. Thousand Oaks: SAGE Publications.

Davidson, Donald. [1988] 2001. The myth of the subjective. In Subjective, intersubjective, objective , ed. Donald Davidson, 39–52. Oxford: Oxford University Press.

Denzin, Norman K. 1970. The research act: A theoretical introduction to sociological methods . Chicago: Aldine Publishing Company.

Denzin, Norman K., and Yvonna S. Lincoln. 2003. Introduction. The discipline and practice of qualitative research. In Collecting and interpreting qualitative materials , ed. Norman K. Denzin and Yvonna S. Lincoln, 1–45. Thousand Oaks: SAGE Publications.

Denzin, Norman K., and Yvonna S. Lincoln. 2005. Introduction. The discipline and practice of qualitative research. In The Sage handbook of qualitative research , ed. Norman K. Denzin and Yvonna S. Lincoln, 1–32. Thousand Oaks: SAGE Publications.

Emerson, Robert M., ed. 1988. Contemporary field research. A collection of readings . Prospect Heights: Waveland Press.

Emerson, Robert M., Rachel I. Fretz, and Linda L. Shaw. 1995. Writing ethnographic fieldnotes . Chicago: University of Chicago Press.

Esterberg, Kristin G. 2002. Qualitative methods in social research . Boston: McGraw-Hill.

Fine, Gary Alan. 1995. Review of “handbook of qualitative research.” Contemporary Sociology 24 (3): 416–418.

Fine, Gary Alan. 2003. Toward a peopled ethnography: Developing theory from group life. Ethnography 4 (1): 41–60.

Fine, Gary Alan, and Black Hawk Hancock. 2017. The new ethnographer at work. Qualitative Research 17 (2): 260–268.

Fine, Gary Alan, and Timothy Hallett. 2014. Stranger and stranger: Creating theory through ethnographic distance and authority. Journal of Organizational Ethnography 3 (2): 188–203.

Flick, Uwe. 2002. Qualitative research. State of the art. Social Science Information 41 (1): 5–24.

Flick, Uwe. 2007. Designing qualitative research . London: SAGE Publications.

Frankfort-Nachmias, Chava, and David Nachmias. 1996. Research methods in the social sciences . 5th ed. London: Edward Arnold.

Franzosi, Roberto. 2010. Sociology, narrative, and the quality versus quantity debate (Goethe versus Newton): Can computer-assisted story grammars help us understand the rise of Italian fascism (1919- 1922)? Theory and Society 39 (6): 593–629.

Franzosi, Roberto. 2016. From method and measurement to narrative and number. International journal of social research methodology 19 (1): 137–141.

Gadamer, Hans-Georg. 1990. Wahrheit und Methode, Grundzüge einer philosophischen Hermeneutik . Band 1, Hermeneutik. Tübingen: J.C.B. Mohr.

Gans, Herbert. 1999. Participant Observation in an Age of “Ethnography”. Journal of Contemporary Ethnography 28 (5): 540–548.

Geertz, Clifford. 1973. The interpretation of cultures . New York: Basic Books.

Gilbert, Nigel. 2009. Researching social life . 3rd ed. London: SAGE Publications.

Glaeser, Andreas. 2014. Hermeneutic institutionalism: Towards a new synthesis. Qualitative Sociology 37: 207–241.

Glaser, Barney G., and Anselm L. Strauss. [1967] 2010. The discovery of grounded theory. Strategies for qualitative research. Hawthorne: Aldine.

Goertz, Gary, and James Mahoney. 2012. A tale of two cultures: Qualitative and quantitative research in the social sciences . Princeton: Princeton University Press.

Goffman, Erving. 1989. On fieldwork. Journal of Contemporary Ethnography 18 (2): 123–132.

Goodwin, Jeff, and Ruth Horowitz. 2002. Introduction. The methodological strengths and dilemmas of qualitative sociology. Qualitative Sociology 25 (1): 33–47.

Habermas, Jürgen. [1981] 1987. The theory of communicative action . Oxford: Polity Press.

Hammersley, Martyn. 2007. The issue of quality in qualitative research. International Journal of Research & Method in Education 30 (3): 287–305.

Hammersley, Martyn. 2013. What is qualitative research? Bloomsbury Publishing.

Hammersley, Martyn. 2018. What is ethnography? Can it survive? Should it? Ethnography and Education 13 (1): 1–17.

Hammersley, Martyn, and Paul Atkinson. 2007. Ethnography. Principles in practice . London: Tavistock Publications.

Heidegger, Martin. [1927] 2001. Sein und Zeit . Tübingen: Max Niemeyer Verlag.

Heidegger, Martin. [1923] 1988. Ontologie. Hermeneutik der Faktizität . Gesamtausgabe II. Abteilung: Vorlesungen 1919–1944, Band 63. Frankfurt am Main: Vittorio Klostermann.

Hempel, Carl G. 1966. Philosophy of the natural sciences . Upper Saddle River: Prentice Hall.

Hood, Jane C. 2006. Teaching against the text. The case of qualitative methods. Teaching Sociology 34 (3): 207–223.

James, William. [1907] 1955. Pragmatism . New York: Meridian Books.

Jovanović, Gordana. 2011. Toward a social history of qualitative research. History of the Human Sciences 24 (2): 1–27.

Kalof, Linda, Amy Dan, and Thomas Dietz. 2008. Essentials of social research . London: Open University Press.

Katz, Jack. 2015. Situational evidence: Strategies for causal reasoning from observational field notes. Sociological Methods & Research 44 (1): 108–144.

King, Gary, Robert O. Keohane, and Sidney Verba. 1994. Designing social inquiry. Scientific inference in qualitative research . Princeton: Princeton University Press.


Lamont, Michèle. 2004. Evaluating qualitative research: Some empirical findings and an agenda. In Report from workshop on interdisciplinary standards for systematic qualitative research , ed. M. Lamont and P. White, 91–95. Washington, DC: National Science Foundation.

Lamont, Michèle, and Ann Swidler. 2014. Methodological pluralism and the possibilities and limits of interviewing. Qualitative Sociology 37 (2): 153–171.

Lazarsfeld, Paul, and Alan Barton. 1982. Some functions of qualitative analysis in social research. In The varied sociology of Paul Lazarsfeld , ed. Patricia Kendall, 239–285. New York: Columbia University Press.

Lichterman, Paul, and Isaac Reed. 2014. Theory and contrastive explanation in ethnography. Sociological Methods & Research . Prepublished 27 October 2014. https://doi.org/10.1177/0049124114554458 .

Lofland, John, and Lyn Lofland. 1995. Analyzing social settings. A guide to qualitative observation and analysis . 3rd ed. Belmont: Wadsworth.

Lofland, John, David A. Snow, Leon Anderson, and Lyn H. Lofland. 2006. Analyzing social settings. A guide to qualitative observation and analysis . 4th ed. Belmont: Wadsworth/Thomson Learning.

Long, Andrew F., and Mary Godfrey. 2004. An evaluation tool to assess the quality of qualitative research studies. International Journal of Social Research Methodology 7 (2): 181–196.

Lundberg, George. 1951. Social research: A study in methods of gathering data . New York: Longmans, Green and Co.

Malinowski, Bronislaw. 1922. Argonauts of the Western Pacific: An account of native enterprise and adventure in the archipelagoes of Melanesian New Guinea . London: Routledge.

Manicas, Peter. 2006. A realist philosophy of science: Explanation and understanding . Cambridge: Cambridge University Press.

Marchel, Carol, and Stephanie Owens. 2007. Qualitative research in psychology. Could William James get a job? History of Psychology 10 (4): 301–324.

McIntyre, Lisa J. 2005. Need to know. Social science research methods . Boston: McGraw-Hill.

Merton, Robert K., and Elinor Barber. 2004. The travels and adventures of serendipity. A Study in Sociological Semantics and the Sociology of Science . Princeton: Princeton University Press.

Mannay, Dawn, and Melanie Morgan. 2015. Doing ethnography or applying a qualitative technique? Reflections from the ‘waiting field‘. Qualitative Research 15 (2): 166–182.

Neuman, Lawrence W. 2007. Basics of social research. Qualitative and quantitative approaches . 2nd ed. Boston: Pearson Education.

Ragin, Charles C. 1994. Constructing social research. The unity and diversity of method . Thousand Oaks: Pine Forge Press.

Ragin, Charles C. 2004. Introduction to session 1: Defining qualitative research. In Workshop on Scientific Foundations of Qualitative Research , ed. Charles C. Ragin, Joane Nagel, and Patricia White, 22. http://www.nsf.gov/pubs/2004/nsf04219/nsf04219.pdf .

Rawls, Anne. 2018. The wartime narrative in US sociology, 1940–7: Stigmatizing qualitative sociology in the name of ‘science’. European Journal of Social Theory (Online first).

Schütz, Alfred. 1962. Collected papers I: The problem of social reality . The Hague: Nijhoff.

Seiffert, Helmut. 1992. Einführung in die Hermeneutik . Tübingen: Franke.

Silverman, David. 2005. Doing qualitative research. A practical handbook . 2nd ed. London: SAGE Publications.

Silverman, David. 2009. A very short, fairly interesting and reasonably cheap book about qualitative research . London: SAGE Publications.

Silverman, David. 2013. What counts as qualitative research? Some cautionary comments. Qualitative Sociology Review 9 (2): 48–55.

Small, Mario L. 2009. “How many cases do I need?” on science and the logic of case selection in field-based research. Ethnography 10 (1): 5–38.

Small, Mario L. 2008. Lost in translation: How not to make qualitative research more scientific. In Workshop on interdisciplinary standards for systematic qualitative research , ed. Michèle Lamont and Patricia White, 165–171. Washington, DC: National Science Foundation.

Snow, David A., and Leon Anderson. 1993. Down on their luck: A study of homeless street people . Berkeley: University of California Press.

Snow, David A., and Calvin Morrill. 1995. New ethnographies: Review symposium: A revolutionary handbook or a handbook for revolution? Journal of Contemporary Ethnography 24 (3): 341–349.

Strauss, Anselm L. 2003. Qualitative analysis for social scientists . 14th ed. Cambridge: Cambridge University Press.

Strauss, Anselm L., and Juliette M. Corbin. 1998. Basics of qualitative research. Techniques and procedures for developing grounded theory . 2nd ed. Thousand Oaks: Sage Publications.

Swedberg, Richard. 2017. Theorizing in sociological research: A new perspective, a new departure? Annual Review of Sociology 43: 189–206.

Swedberg, Richard. 1990. The new 'Battle of Methods'. Challenge January–February 3 (1): 33–38.

Timmermans, Stefan, and Iddo Tavory. 2012. Theory construction in qualitative research: From grounded theory to abductive analysis. Sociological Theory 30 (3): 167–186.

Trier-Bieniek, Adrienne. 2012. Framing the telephone interview as a participant-centred tool for qualitative research. A methodological discussion. Qualitative Research 12 (6): 630–644.

Valsiner, Jaan. 2000. Data as representations. Contextualizing qualitative and quantitative research strategies. Social Science Information 39 (1): 99–113.

Weber, Max. [1904] 1949. ‘Objectivity’ in social science and social policy. In The methodology of the social sciences , ed. Edward A. Shils and Henry A. Finch, 49–112. New York: The Free Press.


Acknowledgements

Financial support for this research was given by the European Research Council, CEV (263699). The authors are grateful to Susann Krieglsteiner for assistance in collecting the data. The paper has benefitted from the many useful comments by the three reviewers and the editor, comments by members of the Uppsala Laboratory of Economic Sociology, as well as Jukka Gronow, Sebastian Kohl, Marcin Serafin, Richard Swedberg, Anders Vassenden and Turid Rødne.

Author information

Authors and affiliations.

Department of Sociology, Uppsala University, Uppsala, Sweden

Patrik Aspers

Seminar for Sociology, Universität St. Gallen, St. Gallen, Switzerland

Department of Media and Social Sciences, University of Stavanger, Stavanger, Norway

Ugo Corte


Corresponding author

Correspondence to Patrik Aspers .

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article

Aspers, P., Corte, U. What is Qualitative in Qualitative Research. Qual Sociol 42 , 139–160 (2019). https://doi.org/10.1007/s11133-019-9413-7


Published : 27 February 2019

Issue Date : 01 June 2019

DOI : https://doi.org/10.1007/s11133-019-9413-7


  • Qualitative research
  • Epistemology
  • Philosophy of science
  • Phenomenology

Information

  • Author Services

Initiatives

You are accessing a machine-readable page. In order to be human-readable, please install an RSS reader.

The Performance and Qualitative Evaluation of Scientific Work at Research Universities: A Focus on the Types of University and Research


1. Introduction

2. Materials and Methods
3. Literature Review
4.1. Description of the Research Object and University Research Data Analysis
4.2. Survey Result Analysis
5. Discussion
6. Conclusions
Author Contributions
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A. Survey Form for Indicators for Assessing the Quality of the Scientific Research
Appendix B. University Research Processes


  • Etzkowitz, H. The Triple Helix University—Industry—Government Innovation in Action ; Routledge: New York, NY, USA, 2008; p. 225. ISBN 978-0415964500. [ Google Scholar ]
  • Tang, H.H. The strategic role of world-class universities in regional innovation system: China’s Greater Bay Area and Hong Kong’s academic profession. Asian Educ. Dev. Stud. 2020 , 11 , 7–22. [ Google Scholar ] [ CrossRef ]
  • National Research Council, USA. Research Universities and the Future of America: Ten Breakthrough Actions Vital to Our Nation’s Prosperity and Security (Report) ; National Research Council: Washington, DC, USA, 2012; p. 225. ISBN 978-0-309-25639-1.
  • Powell, J.J.W.; Dusdal, J. The European Center of Science Productivity: Research Universities and Institutes in France, Germany, and the United Kingdom. Century Sci. Int. Perspect. Educ. Soc. 2017 , 33 , 55–83. [ Google Scholar ] [ CrossRef ]
  • Intarakumnerd, P.; Goto, A. Role of public research institutes in national innovation systems in industrialized countries: The cases of Fraunhofer, NIST, CSIRO, AIST, and ITRI. Res. Policy 2018 , 47 , 1309–1320. [ Google Scholar ] [ CrossRef ]
  • Vlasova, V.V.; Gokhberg, L.M.; Ditkovsky, K.A.; Kotsemir, M.N.; Kuznetsova, I.A.; Martynova, S.V.; Nesterenko, A.V.; Pakhomov, S.I.; Polyakova, V.V.; Ratay, T.V.; et al. Science Indicators: 2023: Statistical Collection ; National Research University Higher School of Economics: Moscow, Russia, 2023; p. 416. ISBN 978-5-7598-2765-8. (In Russian) [ Google Scholar ]
  • Textor, C. Breakdown of R&D spending in China 2017–2022, by Entity. Available online: https://www.statista.com/statistics/1465556/research-and-development-expenditure-in-china-distribution-by-entity/ (accessed on 12 August 2024).
  • Chen, K.; Zhang, C.; Feng, Z.; Zhang, Y.; Ning, L. Technology transfer systems and modes of national research institutes: Evidence from the Chinese Academy of Sciences. Res. Policy 2022 , 51 , 104471. [ Google Scholar ] [ CrossRef ]
  • Vasilev, Y.; Vasileva, P.; Batova, O.; Tsvetkova, A. Assessment of Factors Influencing Educational Effectiveness in Higher Educational Institutions. Sustainability 2024 , 16 , 4886. [ Google Scholar ] [ CrossRef ]
  • Ilyushin, Y.V.; Pervukhin, D.A.; Afanaseva, O.V. Application of the theory of systems with distributed parameters for mineral complex facilities management. ARPN J. Eng. Appl. Sci. 2019 , 14 , 3852–3864. [ Google Scholar ]
  • Raupov, I.; Burkhanov, R.; Lutfullin, A.; Maksyutin, A.; Lebedev, A.; Safiullina, E. Experience in the Application of Hydrocarbon Optical Studies in Oil Field Development. Energies 2022 , 15 , 3626. [ Google Scholar ] [ CrossRef ]
  • Rudnik, S.N.; Afanasev, V.G.; Samylovskaya, E.A. 250 years in the service of the Fatherland: Empress Catherine II Saint Petersburg Mining University in facts and figures. J. Min. Inst. 2023 , 263 , 810–830. [ Google Scholar ]
  • Olcay, G.F.; Bulu, M. Is measuring the knowledge creation of universities possible? A review of university rankings. Technol. Forecast. Soc. Chang. 2017 , 123 , 153–160. [ Google Scholar ] [ CrossRef ]
  • Lapinskas, A.; Makhova, L.; Zhidikov, V. Responsible resource wealth management in ensuring inclusive growth [Odpowiedzialne zarządzanie zasobami w zapewnieniu wzrostu włączającego]. Pol. J. Manag. Stud. 2021 , 23 , 288–304. [ Google Scholar ] [ CrossRef ]
  • Peris-Ortiz, M.; García-Hurtado, D.; Román, A.P. Measuring knowledge exploration and exploitation in universities and the relationship with global ranking indicators. Eur. Res. Manag. Bus. Econ. 2023 , 29 , 100212. [ Google Scholar ] [ CrossRef ]
  • Ponomariov, B.L.; Boardman, P.C. Influencing scientists’ collaboration and productivity patterns through new institutions: University research centers and scientific and technical human capital. Res. Policy 2010 , 39 , 613–624. [ Google Scholar ] [ CrossRef ]
  • Isaeva, N.V.; Borisova, L.V. Comparative analysis of national policies for developing research universities’ campuses. Univ. Manag. Pract. Anal. 2013 , 6 , 74–87. (In Russian) [ Google Scholar ]
  • Ponomarenko, T.V.; Marinina, O.A.; Nevskaya, M.A. Innovative learning methods in technical universities: The possibility of forming interdisciplinary competencies. Espacios 2019 , 40 , 1–10. Available online: http://www.revistaespacios.com/a19v40n41/19404116.html (accessed on 10 September 2024).
  • Zhang, H.; Patton, D.; Kenney, M. Building global-class universities: Assessing the impact of the 985 Project. Res. Policy 2013 , 42 , 765–775. [ Google Scholar ] [ CrossRef ]
  • Shang, J.; Zeng, M.; Zhang, G. Investigating the mentorship effect on the academic success of young scientists: An empirical study of the 985 project universities of China. J. Informetr. 2022 , 16 , 101285. [ Google Scholar ] [ CrossRef ]
  • Chen, Y.; Pan, Y.; Liu, H.; Wu, X.; Deng, G. Efficiency analysis of Chinese universities with shared inputs: An aggregated two-stage network DEA approach. Socio-Econ. Plan. Sci. 2023 , 90 , 101728. [ Google Scholar ] [ CrossRef ]
  • Mindeli, L.E. Financial Support for the Development of the Scientific and Technological Sphere ; Mindeli, L.E., Chernykh, S.I., Frolova, N.D., Todosiychuk, A.V., Fetisov, V.P., Eds.; Institute for Problems of Science Development of the Russian Academy of Sciences (IPRAN): Moscow, Russia, 2018; p. 215. ISBN 978-5-91294-125-2. (In Russian) [ Google Scholar ]
  • Guba, K.S.; Slovogorodsky, N.A. “Publish or Perish” in Russian social sciences: Patterns of co-authorship in “predatory” and “pure” journals. Issues Educ. Educ. Stud. Mosc. 2022 , 4 , 80–106. (In Russian) [ Google Scholar ] [ CrossRef ]
  • Slepykh, V.; Lovakov, A.; Yudkevich, M. Academic career after defending a PhD thesis on the example of four branches of Russian science. Issues Educ. Educ. Stud. Mosc. 2022 , 4 , 260–297. (In Russian) [ Google Scholar ] [ CrossRef ]
  • Mohrman, K.; Ma, W.; Baker, D. The Research University in Transition: The Emerging Global Model. High. Educ. Policy 2008 , 21 , 5–27. [ Google Scholar ] [ CrossRef ]
  • Altbach, P.G. The Road to Academic Excellence: The Making of World-Class Research Universities ; Altbach, P.G., Salmi, J., Eds.; World Bank: Washington, DC, USA, 2011; p. 381. ISBN 978-0-8213-9485-4. [ Google Scholar ] [ CrossRef ]
  • Powell, J.J.W.; Fernandez, F.; Crist, J.T.; Dusdal, J.; Zhang, L.; Baker, D.P. Introduction: The Worldwide Triumph of the Research University and Globalizing Science. Century Sci. Int. Perspect. Educ. Soc. 2017 , 33 , 1–36. [ Google Scholar ] [ CrossRef ]
  • Colina-Ysea, F.; Pantigoso-Leython, N.; Abad-Lezama, I.; Calla-Vásquez, K.; Chávez-Campó, S.; Sanabria-Boudri, F.M.; Soto-Rivera, C. Implementation of Hybrid Education in Peruvian Public Universities: The Challenges. Educ. Sci. 2024 , 14 , 419. [ Google Scholar ] [ CrossRef ]
  • Fernandez, F.; Baker, D.P. Science Production in the United States: An Unexpected Synergy between Mass Higher Education and the Super Research University. Century Sci. Int. Perspect. Educ. Soc. 2021 , 33 , 85–111. [ Google Scholar ] [ CrossRef ]
  • Jamil, S. The Challenge of Establishing World-Class Universities: Directions in Development ; World Bank: Washington, DC, USA, 2009; p. 115. ISBN 082-1378-767, 978-0821-3787-62. [ Google Scholar ] [ CrossRef ]
  • Mudzakkir, M.; Sukoco, B.; Suwignjo, P. World-class Universities: Past and Future. Int. J. Educ. Manag. 2022 , 36 , 277–295. [ Google Scholar ] [ CrossRef ]
  • Tian, L. Rethinking the global orientation of world-class universities from a comparative functional perspective. Int. J. Educ. Dev. 2023 , 96 , 102700. [ Google Scholar ] [ CrossRef ]
  • Clark, B.R. Creating Entrepreneurial Universities: Organizational Pathways of Transformation ; Emerald Group Publishing Limited: London, UK, 1998; p. 200. ISBN 978-0 0804-3342-4. [ Google Scholar ]
  • Etzkowitz, H. The Entrepreneurial University: Vision and Metrics. Ind. High. Educ. 2016 , 3 , 83–97. [ Google Scholar ] [ CrossRef ]
  • Kazin, F.A.; Kondratiev, A.V. Development of the concept of an entrepreneurial university in Russian universities: New assessment tools. Univ. Upr. Prakt. Anal. Univ. Manag. Pract. Anal. 2022 , 26 , 18–41. (In Russian) [ Google Scholar ] [ CrossRef ]
  • Berestov, A.V.; Guseva, A.I.; Kalashnik, V.M.; Kaminsky, V.I.; Kireev, S.V.; Sadchikov, S.M. The “national research university” project is a driver of Russian higher education. Vyss. Obraz. V Ross. High. Educ. Russ. 2020 , 29 , 22–34. (In Russian) [ Google Scholar ] [ CrossRef ]
  • Matveeva, N.; Ferligoj, A. Scientific Collaboration in Russian Universities before and after the Excellence Initiative Project “5–100”. Scientometrics 2020 , 124 , 2383–2407. [ Google Scholar ] [ CrossRef ]
  • Matveeva, N.; Sterligov, I.; Yudkevich, M. The Effect of Russian University Excellence Initiative on Publications and Collaboration Patterns. J. Informetr. 2021 , 15 , 101110. [ Google Scholar ] [ CrossRef ]
  • Semenov, V.P.; Mikhailov, Y.I. Challenges and trends of quality management in the context of industrial and mineral resources economy. J. Min. Inst. 2017 , 226 , 497. [ Google Scholar ] [ CrossRef ]
  • Litvinenko, V.S.; Bowbrick, I.; Naumov, I.A.; Zaitseva, Z. Global guidelines and requirements for professional competencies of natural resource extraction engineers: Implications for ESG principles and sustainable development goals. J. Clean. Prod. 2022 , 338 , 130530. [ Google Scholar ] [ CrossRef ]
  • Semenova, T.; Martínez Santoyo, J.Y. Economic Strategy for Developing the Oil Industry in Mexico by Incorporating Environmental Factors. Sustainability 2024 , 16 , 36. [ Google Scholar ] [ CrossRef ]
  • Rozhdestvensky, I.V.; Filimonov, A.V.; Khvorostyanaya, A.S. Methodology for assessing the readiness of higher educational institutions and scientific organizations for technology transfer. Innov. Innov. 2020 , 9 , 11–16. (In Russian) [ Google Scholar ] [ CrossRef ]
  • Bakthavatchaalam, V.; Miles, M.; de Lourdes, M.-T.; Jose, S. Research productivity and academic dishonesty in a changing higher education landscape. On the example of technical universities in India (translated from English). Educ. Issues Educ. Stud. Mosc. 2021 , 2 , 126–151. (In Russian) [ Google Scholar ] [ CrossRef ]
  • Aven, T. Perspectives on the nexus between good risk communication and high scientific risk analysis quality. Reliab. Eng. Syst. Saf. 2018 , 178 , 290–296. [ Google Scholar ] [ CrossRef ]
  • Marinina, O.A.; Kirsanova, N.Y.; Nevskaya, M.A. Circular economy models in industry: Developing a conceptual framework. Energies 2022 , 15 , 9376–9386. [ Google Scholar ] [ CrossRef ]
  • Siegel, D.; Bogers, M.; Jennings, D.; Xue, L. Technology transfer from national/federal labs and public research institutes: Managerial and policy implications. Res. Policy 2022 , 52 , 104646. [ Google Scholar ] [ CrossRef ]
  • Snegirev, S.D.; Savelyev, V.Y. Quality management for scientific activities. Stand. I Kachestvo Stand. Qual. 2014 , 3 , 54–57. (In Russian) [ Google Scholar ]
  • Leontyuk, S.M.; Vinogradova, A.A.; Silivanov, M.O. Fundamentals of ISO 9001:2015. J. Phys. Conf. Ser. 2019 , 1384 , 012068. [ Google Scholar ] [ CrossRef ]
  • Wieczorek, O.; Muench, R. Academic capitalism and market thinking in higher education. In International Encyclopedia of Education , 4th ed.; Elsevier: Amsterdam, The Netherlands, 2023; pp. 37–47. [ Google Scholar ] [ CrossRef ]
  • Overchenko, M.N.; Marinin, M.A.; Mozer, S.P. Quality improvement of mining specialists training on the basis of cooperation between Saint-Petersburg mining university and Orica company. J. Min. Inst. 2017 , 228 , 681. [ Google Scholar ] [ CrossRef ]
  • Hernandez-Diaz, P.M.; Polanco, J.-A.; Escobar-Sierra, M. Building a measurement system of higher education performance: Evidence from a Latin-American country. Int. J. Qual. Reliab. Manag. 2021 , 38 , 1278–1300. [ Google Scholar ] [ CrossRef ]
  • Zharova, A.; Karl, W.; Lessmann, H. Data-driven support for policy and decision-making in university research management: A case study from Germany. Eur. J. Oper. Res. 2023 , 308 , 353–368. [ Google Scholar ] [ CrossRef ]
  • Lubango, L.M.; Pouris, A. Is patenting activity impeding the academic performance of South African University researchers? Technol. Soc. 2009 , 31 , 315–324. [ Google Scholar ] [ CrossRef ]
  • de Jesus, C.S.; Cardoso, D.d.O.; de Souza, C.G. Motivational factors for patenting: A study of the Brazilian researchers profile. World Pat. Inf. 2023 , 75 , 102241. [ Google Scholar ] [ CrossRef ]
  • Kiseleva, M.A. Development of an organizational and economic mechanism for managing the research activities of national research universities. Vestn. YURGTU NPI Bull. SRSTU NPI 2021 , 3 , 182–190. (In Russian) [ Google Scholar ] [ CrossRef ]
  • Alakaleek, W.; Harb, Y.; Harb, A.A.; Shishany, A. The impact of entrepreneurship education: A study of entrepreneurial outcomes. Int. J. Manag. Educ. 2023 , 21 , 100800. [ Google Scholar ] [ CrossRef ]
  • Rudko, V.A.; Gabdulkhakov, R.R.; Pyagay, I.N. Scientific and technical substantiation of the possibility for the organization of needle coke production in Russia. J. Min. Inst. 2023 , 263 , 795–809. Available online: https://pmi.spmi.ru/index.php/pmi/article/view/16246?setLocale=en_US (accessed on 10 September 2024).
  • Gromyka, D.S.; Gogolinskii, K.V. Introduction of evaluation procedure of excavator bucket teeth into maintenance and repair: Prompts. MIAB. Mining Inf. Anal. Bull. 2023 , 8 , 94–111. (In Russian) [ Google Scholar ] [ CrossRef ]
  • Michaud, J.; Turri, J. Values and credibility in science communication. Logos Epistem. 2018 , 9 , 199–214. [ Google Scholar ] [ CrossRef ]
  • Oblova, I.S.; Gerasimova, I.G.; Goman, I.V. The scientific career through a gender lens: A contrastive analysis of the EU and Russia. Glob. J. Eng. Educ. 2022 , 24 , 21–27. [ Google Scholar ]
  • Chiware, E.R.T.; Becker, D.A. Research trends and collaborations by applied science researchers in South African universities of technology: 2007–2017. J. Acad. Librariansh. 2018 , 44 , 468–476. [ Google Scholar ] [ CrossRef ]
  • Palavesm, K.; Joorel, S. IRINS: Implementing a Research Information Management System in Indian Higher Education Institutions. Procedia Comput. Sci. 2022 , 211 , 238–245. [ Google Scholar ] [ CrossRef ]
  • Litvinenko, V.S.; Petrov, E.I.; Vasilevskaya, D.V.; Yakovenko, A.V.; Naumov, I.A.; Ratnikov, M.A. Assessment of the role of the state in the management of mineral resources. J. Min. Inst. 2023 , 259 , 95–111. [ Google Scholar ] [ CrossRef ]
  • Carillo, M.R.; Papagni, E.; Sapio, A. Do collaborations enhance the high-quality output of scientific institutions? Evidence from the Italian Research Assessment Exercise. J. Socio-Econ. 2013 , 47 , 25–36. [ Google Scholar ] [ CrossRef ]
  • Chen, S.; Ren, S.; Cao, X. A comparison study of educational scientific collaboration in China and the USA. Phys. A Stat. Mech. Its Appl. 2021 , 585 , 126330. [ Google Scholar ] [ CrossRef ]
  • Arpin, I.; Likhacheva, K.; Bretagnolle, V. Organising inter- and transdisciplinary research in practice. The case of the meta-organisation French LTSER platforms. Environ. Sci. Policy 2023 , 144 , 43–52. [ Google Scholar ] [ CrossRef ]
  • Liew, M.S.; Tengku Shahdan, T.N.; Lim, E.S. Enablers in Enhancing the Relevancy of University-industry Collaboration. Procedia—Soc. Behav. Sci. 2013 , 93 , 1889–1896. [ Google Scholar ] [ CrossRef ]
  • Tunca, F.; Kanat, Ö.N. Harmonization and Simplification Roles of Technology Transfer Offices for Effective University—Industry Collaboration Models. Procedia Comput. Sci. 2019 , 158 , 361–365. [ Google Scholar ] [ CrossRef ]
  • Sciabolazza, V.L.; Vacca, R.; McCarty, C. Connecting the dots: Implementing and evaluating a network intervention to foster scientific collaboration and productivity. Soc. Netw. 2020 , 61 , 181–195. [ Google Scholar ] [ CrossRef ]
  • Ovchinnikova, E.N.; Krotova, S.Y. Training mining engineers in the context of sustainable development: A moral and ethical aspect. Eur. J. Contemp. Educ. 2022 , 11 , 1192–1200. [ Google Scholar ] [ CrossRef ]
  • Duryagin, V.; Nguyen Van, T.; Onegov, N.; Shamsutdinova, G. Investigation of the Selectivity of the Water Shutoff Technology. Energies 2023 , 16 , 366. [ Google Scholar ] [ CrossRef ]
  • Mohamed, M.; Altinay, F.; Altinay, Z.; Dagli, G.; Altinay, M.; Soykurt, M. Validation of Instruments for the Improvement of Interprofessional Education through Educational Management: An Internet of Things (IoT)-Based Machine Learning Approach. Sustainability 2023 , 15 , 16577. [ Google Scholar ] [ CrossRef ]
  • Tuan, N.A.; Hue, T.T.; Lien, L.T.; Van, L.H.; Nhung, H.T.T.; Dat, L.Q. Management factors influencing lecturers’ research productivity in Vietnam National University, Hanoi, Vietnam: A structural equation modeling analysis. Heliyon 2022 , 8 , e10510. [ Google Scholar ] [ CrossRef ]
  • Akcay, B.; Benek, İ. Problem-Based Learning in Türkiye: A Systematic Literature Review of Research in Science Education. Educ. Sci. 2024 , 14 , 330. [ Google Scholar ] [ CrossRef ]
  • Cherepovitsyn, A.E.; Tretyakov, N.A. Development of New System for Assessing the Applicability of Digital Projects in the Oil and Gas Sector. J. Min. Inst. 2023 , 262 , 628–642. Available online: https://pmi.spmi.ru/pmi/article/view/15795?setLocale=en_US (accessed on 10 September 2024).
  • Murzo, Y.; Sveshnikova, S.; Chuvileva, N. Method of text content development in creation of professionally oriented online courses for oil and gas specialists. Int. J. Emerg. Technol. Learn. 2019 , 14 , 143–152. [ Google Scholar ] [ CrossRef ]
  • Sveshnikova, S.A.; Skornyakova, E.R.; Troitskaya, M.A.; Rogova, I.S. Development of engineering students’ motivation and independent learning skills. Eur. J. Contemp. Educ. 2022 , 11 , 555–569. [ Google Scholar ] [ CrossRef ]
  • Rijcke, S.D.; Wouters, P.F.; Rushforth, A.D.; Franssen, T.P.; Hammarfelt, B. Evaluation Practices and Effects of Indicator Use—A Literature Review. Res. Eval. 2016 , 25 , rvv038. [ Google Scholar ] [ CrossRef ]
  • Cappelletti-Montano, B.; Columbu, S.; Montaldo, S.; Musio, M. New perspectives in bibliometric indicators: Moving from citations to citing authors. J. Informetr. 2021 , 15 , 101164. [ Google Scholar ] [ CrossRef ]
  • García-Villar, C.; García-Santos, J.M. Bibliometric indicators to evaluate scientific activity. Radiología Engl. Ed. 2021 , 63 , 228–235. [ Google Scholar ] [ CrossRef ]
  • Guskov, A.E.; Kosyakov, D.V. National factional account and assessment of scientific performance of organizations. Nauchnyye Tekhnicheskiye Bibl. Sci. Tech. Libr. 2020 , 1 , 15–42. (In Russian) [ Google Scholar ] [ CrossRef ]
  • Khuram, S.; Rehman, C.A.; Nasir, N.; Elahi, N.S. A bibliometric analysis of quality assurance in higher education institutions: Implications for assessing university’s societal impact. Eval. Program Plan. 2023 , 99 , 102319. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Buehling, K. Changing research topic trends as an effect of publication rankings—The case of German economists and the Handelsblatt Ranking. J. Informetr. 2021 , 15 , 101199. [ Google Scholar ] [ CrossRef ]
  • Kremcheev, E.A.; Kremcheeva, D.A. The content of the operation quality concept of the scientific and technical organization. Opcion 2019 , 35 , 3052–3066. [ Google Scholar ]
  • Nyondo, D.W.; Langa, P.W. The development of research universities in Africa: Divergent views on relevance and experience. Issues Educ. Educ. Stud. Mosc. 2021 , 1 , 237–256. (In Russian) [ Google Scholar ] [ CrossRef ]
  • Marozau, R.; Guerrero, M. Impacts of Universities in Different Stages of Economic Development. J. Knowl. Econ. 2016 , 12 , 1–21. [ Google Scholar ] [ CrossRef ]
  • Fayomi, O.O.; Okokpujie, I.P.; Fayomi, O.S.I.; Udoye, N.E. An Overview of a Prolific University from Sustainable and Policy Perspective. Procedia Manuf. 2019 , 35 , 343–348. [ Google Scholar ] [ CrossRef ]
  • Do, T.H.; Krott, M.; Böcher, M. Multiple traps of scientific knowledge transfer: Comparative case studies based on the RIU model from Vietnam, Germany, Indonesia, Japan, and Sweden. For. Policy Econ. 2020 , 114 , 102134. [ Google Scholar ] [ CrossRef ]
  • See, K.F.; Ma, Z.; Tian, Y. Examining the efficiency of regional university technology transfer in China: A mixed-integer generalized data envelopment analysis framework. Technol. Forecast. Soc. Chang. 2023 , 197 , 122802. [ Google Scholar ] [ CrossRef ]
  • Dusdal, J.; Zapp, M.; Marques, M.; Powell, J.J.W. Higher Education Organizations as Strategic Actors in Networks: Institutional and Relational Perspectives Meet Social Network Analysis. In Theory and Method in Higher Education Research ; Huisman, J., Tight, M., Eds.; Emerald Publishing Limited: Bingley, UK, 2021; Volume 7, pp. 55–73. [ Google Scholar ] [ CrossRef ]
  • Silva, M.D.C.; de Mello, J.C.C.B.S.; Gomes, C.F.S.; Carlos, I.C. Efficiency analysis of scientific laboratories. Meta Aval. 2020 , 2 , 625–645. [ Google Scholar ] [ CrossRef ]
  • Vinogradova, A.; Gogolinskii, K.; Umanskii, A.; Alekhnovich, V.; Tarasova, A.; Melnikova, A. Method of the Mechanical Properties Evaluation of Polyethylene Gas Pipelines with Portable Hardness Testers. Inventions 2022 , 7 , 125. [ Google Scholar ] [ CrossRef ]
  • Chen, W.; Yan, Y. New components and combinations: The perspective of the internal collaboration networks of scientific teams. J. Informetr. 2023 , 17 , 101407. [ Google Scholar ] [ CrossRef ]
  • Corcoran, A.W.; Hohwy, J.; Friston, K.J. Accelerating scientific progress through Bayesian adversarial collaboration. Neuron 2023 , 111 , 3505–3516. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Wu, L.; Yi, F.; Huang, Y. Toward scientific collaboration: A cost-benefit perspective. Res. Policy 2024 , 53 , 104943. [ Google Scholar ] [ CrossRef ]
  • Ilyushin, Y.; Afanaseva, O. Spatial Distributed Control System Of Temperature Field: Synthesis And Modeling. ARPN J. Eng. Appl. Sci. 2021 , 16 , 1491–1506. [ Google Scholar ]
  • Cossani, G.; Codoceo, L.; Caceres, H.; Tabilo, J. Technical efficiency in Chile’s higher education system: A comparison of rankings and accreditation. Eval. Program Plan. 2022 , 92 , 102058. [ Google Scholar ] [ CrossRef ]
  • Marinin, M.A.; Marinina, O.A.; Rakhmanov, R.A. Methodological approach to assessing influence of blasted rock fragmentation on mining costs. Gorn. Zhurnal 2023 , 9 , 28–34. [ Google Scholar ] [ CrossRef ]
  • Sutton, E. The increasing significance of impact within the Research Excellence Framework (REF). Radiography 2020 , 26 (Suppl. S2), S17–S19. [ Google Scholar ] [ CrossRef ]
  • Basso, A.; di Tollo, G. Prediction of UK Research excellence framework assessment by the departmental h-index. Eur. J. Oper. Res. 2022 , 296 , 1036–1049. [ Google Scholar ] [ CrossRef ]
  • Groen-Xu, M.; Bös, G.; Teixeira, P.A.; Voigt, T.; Knapp, B. Short-term incentives of research evaluations: Evidence from the UK Research Excellence Framework. Res. Policy 2023 , 52 , 104729. [ Google Scholar ] [ CrossRef ]
  • Reddy, K.S.; Xie, E.; Tang, Q. Higher education, high-impact research, and world university rankings: A case of India and comparison with China. Pac. Sci. Rev. B Humanit. Soc. Sci. 2016 , 2 , 1–21. [ Google Scholar ] [ CrossRef ]
  • Shima, K. Changing Science Production in Japan: The Expansion of Competitive Funds, Reduction of Block Grants, and Unsung Heroe. Century Sci. Int. Perspect. Educ. Soc. 2017 , 33 , 113–140. [ Google Scholar ] [ CrossRef ]
  • Li, D. There is more than what meets the eye: University preparation for the socio-economic impact requirement in research assessment exercise 2020 in Hong Kong. Asian Educ. Dev. Stud. 2021 , 11 , 702–713. [ Google Scholar ] [ CrossRef ]
  • Radushinsky, D.A.; Kremcheeva, D.A.; Smirnova, E.E. Problems of service quality management in the field of higher education of the economy of the Russian Federation and directions for their solution. Relacoes Int. No Mundo Atual 2023 , 6 , 33–54. [ Google Scholar ]
  • Shi, Y.; Wang, D.; Zhang, Z. Categorical Evaluation of Scientific Research Efficiency in Chinese Universities: Basic and Applied Research. Sustainability 2022 , 14 , 4402. [ Google Scholar ] [ CrossRef ]
  • Cheng, Z.; Xiao, T.; Chen, C.; Xiong, X. Evaluation of Scientific Research in Universities Based on the Idea of Education for Sustainable Development. Sustainability 2022 , 14 , 2474. [ Google Scholar ] [ CrossRef ]
  • Hou, L.; Luo, J.; Pan, X. Research Topic Specialization of Universities in Information Science and Library Science and Its Impact on Inter-University Collaboration. Sustainability 2022 , 14 , 9000. [ Google Scholar ] [ CrossRef ]
  • Elbawab, R. University Rankings and Goals: A Cluster Analysis. Economies 2022 , 10 , 209. [ Google Scholar ] [ CrossRef ]
  • Kifor, C.V.; Olteanu, A.; Zerbes, M. Key Performance Indicators for Smart Energy Systems in Sustainable Universities. Energies 2023 , 16 , 1246. [ Google Scholar ] [ CrossRef ]
  • Guironnet, J.P.; Peypoch, N. The geographical efficiency of education and research: The ranking of U.S. universities. Socio-Econ. Plan. Sci. 2018 , 62 , 44–55. [ Google Scholar ] [ CrossRef ]
  • Ma, Z.; See, K.F.; Yu, M.M.; Zhao, C. Research efficiency analysis of China’s university faculty members: A modified meta-frontier DEA approach. Socio-Econ. Plan. Sci. 2021 , 76 , 100944. [ Google Scholar ] [ CrossRef ]
  • Tavares, R.S.; Angulo-Meza, L.; Sant’Anna, A.P. A proposed multistage evaluation approach for Higher Education Institutions based on network Data envelopment analysis: A Brazilian experience. Eval. Program Plan. 2021 , 89 , 101984. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Le, M.H.; Afsharian, M.; Ahn, H. Inverse Frontier-based Benchmarking for Investigating the Efficiency and Achieving the Targets in the Vietnamese Education System. Omega 2021 , 103 , 102427. [ Google Scholar ] [ CrossRef ]
  • Adot, E.; Akhmedova, A.; Alvelos, H.; Barbosa-Pereira, S.; Berbegal-Mirabent, J.; Cardoso, S.; Domingues, P.; Franceschini, F.; Gil-Doménech, D.; Machado, R.; et al. SMART-QUAL: A dashboard for quality measurement in higher education institutions. Qual. Reliab. Manag. 2023 , 40 , 1518–1539. [ Google Scholar ] [ CrossRef ]
Share of the total volume, % by country and year

| Country | 2011 | 2015 | 2018 | 2019 | 2020 | 2021 | 2022 | 2023 | Average |
| USA | 3.4% | 3.4% | 3.4% | 3.2% | 3.2% | 3.0% | 3.0% | 3.0% | 3.2% |
| China | 7.1% | 8.0% | 8.0% | 8.2% | 8.5% | 8.5% | 8.5% | 8.5% | 8.2% |
| Japan | 5.9% | 5.4% | 5.1% | 5.2% | 5.2% | 5.2% | 5.2% | 5.2% | 5.3% |
| Russia | 8.0% | 9.0% | 9.5% | 10.7% | 10.1% | 10.2% | 11.0% | 11.0% | 9.9% |
| Turkey | 20.4% | 19.2% | 18.9% | 18.4% | 18.6% | 16.4% | 15.7% | 15.5% | 17.9% |
| Serbia | 25.1% | 24.0% | 25.3% | 25.4% | 44.7% | 45.9% | 41.9% | 43.0% | 34.4% |
| Spain | 4.1% | 4.3% | 4.4% | 4.2% | 3.9% | 4.0% | 4.0% | 4.0% | 4.1% |
| France | 1.0% | 2.8% | 3.1% | 2.9% | 3.0% | 3.0% | 3.0% | 3.0% | 2.7% |
| EU | 0.8% | 0.8% | 1.2% | 1.2% | 1.2% | 1.2% | 1.2% | 1.2% | 1.1% |
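As a quick consistency check on the reconstructed figures, the Average column can be recomputed from the yearly shares. A minimal Python sketch, with the values transcribed from the table above (`average_share` is an illustrative helper, not from the paper):

```python
# Yearly shares (%) transcribed from the table: 2011, 2015, 2018-2023.
shares = {
    "USA":    [3.4, 3.4, 3.4, 3.2, 3.2, 3.0, 3.0, 3.0],
    "Serbia": [25.1, 24.0, 25.3, 25.4, 44.7, 45.9, 41.9, 43.0],
    "EU":     [0.8, 0.8, 1.2, 1.2, 1.2, 1.2, 1.2, 1.2],
}

def average_share(values):
    """Arithmetic mean of the yearly shares, rounded to one decimal place."""
    return round(sum(values) / len(values), 1)

for country, values in shares.items():
    print(country, average_share(values))
# -> USA 3.2, Serbia 34.4, EU 1.1 (matching the Average column)
```

The recomputed means reproduce the published Average column to one decimal place.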
Indicators for Assessing the Quality of Project Results and the Performances of Specialized SUs
Significance of Indicators, % (two columns: Fundamental | Engineering)
1. Basic scientific performance indicators:
1.1. the number of patents registered
1.2. the number of original computer programs registered
1.3. the number of defended dissertations (master/science candidates) by employees of SUs
1.4. the number of defended dissertations (Ph.D./doctoral) by employees of SUs
2. Student cooperation indicators:
(the statistics of the students attracted to the project teams/the work of the SUs during the reporting period—the number of persons and percentages of staff and of the total working time)
2.1. students
2.2. postgraduate students
2.3. young specialists (25–35 years)
2.4. foreign students and postgraduates
3. Quantitative economic indicators:
3.1. total number of researchers involved in the project
3.2. working time of researchers, hours
3.3. working time of researchers, costs (if available)
3.4. constantly used laboratory spaces, m²
3.5. constantly used office spaces, m²
3.6. costs for maintaining laboratory and office spaces
3.7. residual value of the laboratory equipment used, which belongs to SUs/STUs
3.8. cost of specially purchased equipment for the project
3.9. laboratory equipment use of other departments (SUs) and organizations (costs and hours)
3.10. costs of materials used for laboratory experiments
3.11. other costs
3.12. net profit or pure income (proceeds minus all the costs and taxes)
3.13. proceeds per researcher on a project or in a reporting period
3.14. net profit per researcher on a project or in a reporting period
4. Quantitative scientometric indicators:
4.1. the quantity of scientific publications indexed by Scopus/WoS 1–2 quartile
4.2. the quantity of scientific publications indexed by Scopus/WoS 3–4 quartile
4.3. the quantity of scientific publications indexed by Scopus/WoS, without quartile
4.4. the quantity of scientific publications indexed by national citation databases (for example, the Russian Science Citation Index, RSCI)
4.5. the quantity of citations in Scopus/WoS databases *
4.6. the quantity of citations in the national citation databases *
4.7. the quantity of reviews for Scopus/WoS performed
4.8. the quantity of reviews performed for publications, indexed in national citation databases
5. International cooperation indicators:
5.1. foreign researchers attracted to the project teams/the work of SUs during the reporting period (the number of persons and percentages of staff and of working hours)
5.2. researchers of SUs attracted to work with foreign partners during the reporting period (the number of persons and percentages of staff and of working hours)
6. Qualitative assessment (comprehensive multifactorial assessment)
6.1. possibilities for integration with the results of previous and related studies
6.2. maintaining existing achievements, general culture, and expanding the activities of the scientific school
6.3. the possibility for testing/the partial implementation of the results in practice in different industries—“knowledge transfer”—on a test or stream basis
6.4. the possibility for publishing results with inclusion in regional/national or sectoral research information systems
6.5. invitations to SU researchers to become constant members of national and international scientific associations
6.6. invitations to SU researchers to participate in national academic councils which are awarding the scientific degrees
6.7. other direct and indirect positive impacts in various areas
TOTAL | 100.0% | 100.0%
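The form above asks each expert to distribute 100% of significance across the indicator groups, separately for fundamental and engineering research. How such weights might later be combined with a project's normalized indicator scores can be sketched as follows; the group weights and scores here are invented for illustration, since the published form leaves them blank for respondents to fill in:

```python
# Hypothetical expert weights (%) per indicator group; must sum to 100.
weights = {
    "basic_scientific":    20.0,
    "student_cooperation": 15.0,
    "economic":            25.0,
    "scientometric":       25.0,
    "international":       10.0,
    "qualitative":          5.0,
}

# Hypothetical normalized scores (0..1) for one project on each group.
scores = {
    "basic_scientific":    0.8,
    "student_cooperation": 0.5,
    "economic":            0.6,
    "scientometric":       0.9,
    "international":       0.4,
    "qualitative":         0.7,
}

def weighted_score(weights, scores):
    """Weighted sum of normalized group scores; weights are percentages."""
    total = sum(weights.values())
    if abs(total - 100.0) > 1e-9:
        raise ValueError(f"weights must sum to 100%, got {total}")
    return sum(weights[g] / 100.0 * scores[g] for g in weights)

print(weighted_score(weights, scores))  # single score in 0..1
```

The guard on the weight total mirrors the TOTAL = 100.0% constraint of the survey form.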
| Characteristic | Mining University | St. Petersburg State University | SPb Polytechnical University | ITMO University | LETI University |
| Total number of researchers (employees of SUs/STUs) | 180 | 230 | 250 | 200 | 50 |
| Total number of researchers who took part in the survey (246) | 103 | 59 | 29 | 27 | 18 |
Of them:
| SU leaders | 5 | 4 | 1 | 1 | 1 |
| Middle managers and specialists | 79 | 40 | 19 | 15 | 10 |
| Post-graduate students | 12 | 7 | 5 | 5 | 3 |
| Students | 7 | 8 | 4 | 6 | 4 |
Aged:
| 20–25 | 20 | 19 | 9 | 11 | 7 |
| 25–35 | 31 | 12 | 6 | 3 | 3 |
| 35–55 | 40 | 18 | 11 | 9 | 3 |
| >55 | 12 | 10 | 3 | 4 | 5 |
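The role and age breakdowns in the respondent table above can be cross-checked against each other: both should sum, column by column, to the same per-university participant counts. A small sketch with the transcribed values:

```python
# Survey respondents per university, transcribed from the table.
universities = ["Mining", "SPbSU", "Polytech", "ITMO", "LETI"]

by_role = {
    "SU leaders":                      [5, 4, 1, 1, 1],
    "Middle managers and specialists": [79, 40, 19, 15, 10],
    "Post-graduate students":          [12, 7, 5, 5, 3],
    "Students":                        [7, 8, 4, 6, 4],
}

by_age = {
    "20-25": [20, 19, 9, 11, 7],
    "25-35": [31, 12, 6, 3, 3],
    "35-55": [40, 18, 11, 9, 3],
    ">55":   [12, 10, 3, 4, 5],
}

def column_totals(breakdown):
    """Sum each university's column across all rows of a breakdown."""
    return [sum(col) for col in zip(*breakdown.values())]

# Both breakdowns reproduce the same per-university totals.
print(column_totals(by_role))  # [103, 59, 29, 27, 18]
print(column_totals(by_age))   # [103, 59, 29, 27, 18]
```

Agreement between the two breakdowns is a useful sanity check when recovering a flattened table of this kind.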
Problem 1. The insufficient involvement of students, postgraduates, and young specialists in research, which complicates the transfer of innovations in the long term and threatens the sustainability of the developments of both the university and its macroenvironment (region, industry, and country) [ , , ].
Possible solution: the creation of conditions for the development of university science by the state: the construction of laboratory premises, the acquisition of equipment, and engineering school support [ , , ]; attracting students to research via the entrepreneurial activities of the university [ , ].
Problem 2. The risk of unjustified investment in university research: “the system for identifying promising developments at universities is retroactive, which leads to a low potential for their commercialization... and to unjustified investments” [ ]; “Falsification of research at technical universities can not only deprive the university of the trust of sponsoring companies but also leads to emergency situations when trying to implement it” [ ]; the publication of results in “predatory” journals is a research management risk [ , ].
Possible solution: correctly defining the task, drawing up detailed technical specifications, and bearing responsibility for the results of research [ ]; implementing the terms from the international quality standards of the ISO 9000 series and their analogs for science products in research contracts and technical specifications: “product” as “scientific result”, “requirement” as “scientific criteria”, and “quality” as “the degree of scientific validity of a research result” [ , ].
Problem 3. The separation of the functions of research contracting and contract execution: “the creation of scientific products and their successful sale as products or services on the market are different types of activities that require separate management and organizational efforts and structures” [ , ].
Possible solution: attracting managers from international companies to university science contract and sales divisions [ , , ] and implementing support schemes and promotional programs as incentives for key specialists who can present, sell, and execute research [ ].
Problem 4. The incomplete reflection of specialists’ competencies: shortcomings in realizing the potential of temporary and permanent scientific teams (SUs, engineering centers, etc.) in patent and grant activities [ , , ].
Possible solution: involving researchers, lecturers, and students in the work of “entrepreneurial university” small enterprises and encouraging them to register patents and IT-industry products and to apply for grants [ , , ].
Problem 5. Low levels of scientific collaboration and communication between researchers within and between universities and production companies: insufficient levels of trust and cooperation for joint scientific research between university units [ , ]; the absence or shortcomings of academic research communication and management systems (RCMSs), like the European “EuroCRIS”, complicates the exchange of experience within and between universities and production companies and the implementation of research results [ , , ].
Possible solution: stimulating scientific collaboration within and between universities and production companies by organizing inter- and trans-disciplinary research [ , , ]; organizing internships for employees of universities and production companies [ , , ]; creating personalized algorithms and systems of research communication and management with high-tech partner companies of universities [ , ]; introducing an internet-of-things (IoT)-based machine-learning approach [ ].
5. Low levels of scientific collaborations and communications between researchers within and between universities and production companies: insufficient levels of trust and cooperation for joint scientific research between university units [ , ]; the absence or shortcomings of academic research communication and management systems (RCMSs), like European “EuroCRIS”, complicates the exchange of experience within and between universities and production companies and research result implementation [ , , ]. Stimulating scientific collaboration within and between universities and production companies by organizing inter- and trans-disciplinary research [ , , ]; organizing internships for employees of universities and production companies [ , , ]; the creation of personalized algorithms and systems of research communication and management with high-tech partner companies of universities [ , ]; introducing an internet-of-things (IoT)-based machine-learning approach [ ].
6. Involving lecturers in scientific activities: “lecturers (teachers) are, for the most part, interested in educational activities, and conducting scientific research is perceived as something forced” [ ]; current real-world problems or scenarios are not invented enough in educational practice [ ]. Shifting the focus to the formation of “interdisciplinary competencies” and problem-solving skills of lecturers, which allows for them to carry out desk research on their own, as well as to involve talented students in scientific work [ , , ].
7. Limitations of scientometric (bibliometric) indicators: quantitative methods of the integer counting of publications for assessing the effectiveness of academic research are not sufficiently objective, and they need additional qualitative diversification [ , , ]. The use of the “fractional counting” of scientific publications to increase the objectivity of scientific result evaluation [ ], taking into account the societal impact, research topic, and other qualitative factors while ranking the publication [ , , ].
8. Problems of small (regional) universities in attracting qualified scientific personnel capable to “make a significant contribution to … the production of knowledge and its transfer” [ , ]. Regional universities should stress the most-relevant area of research for the territory, with the partial involvement of qualified specialists from local production leaders as consultants [ , , ].
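The “fractional counting” remedy mentioned above is easy to illustrate: under integer (whole) counting, every contributing institution receives a full credit for each paper it appears on; under fractional counting, the single credit per paper is split equally among the n contributing institutions, so heavily collaborative papers no longer inflate totals. The byline data below is hypothetical, purely for illustration.

```python
# Integer vs. fractional counting of publications (hypothetical bylines).
from collections import defaultdict

# Each paper is represented by the list of universities on its byline.
papers = [
    ["Mining"],
    ["Mining", "Polytech"],
    ["Mining", "SPbSU", "ITMO"],
]

integer_count = defaultdict(float)
fractional_count = defaultdict(float)
for byline in papers:
    for uni in byline:
        integer_count[uni] += 1.0              # full credit per appearance
        fractional_count[uni] += 1.0 / len(byline)  # credit split n ways

# Mining: integer credit 3.0, fractional credit 1 + 1/2 + 1/3 = 1.8333...
# Fractional credits across all universities always sum to the paper count.
```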
| Characteristic | Mining University | St. Petersburg State University | SPb Polytechnical University | ITMO University | LETI University |
| --- | --- | --- | --- | --- | --- |
| 1. Number of undergraduate and graduate students, thousands of people | 16.7 | 32.1 | 34 | 14.5 | 9.1 |
| 2. Number of lecturers (employees of education units, teaching staff, and support staff), thousands of people | 2.5 | 3.3 | 2.5 | 1.3 | 1.1 |
| 3. Number of researchers (employees of scientific units), thousands of people | 0.18 | 0.23 | 0.25 | 0.2 | 0.05 |
| 4. Ratio of the number of researchers to the number of lecturers, % | 12% | 7% | 8% | 15% | 5% |
| 5. Annual volume of scientific work performed, millions of rubles | 1500–1950 | 580–650 | 710–790 | 650–780 | 130–170 |
| 6. Share of government and organizations with state participation that order research, % of the total volume of contracts | 20.7% | 69.7% | 59.5% | 48.5% | 78.9% |
| 7. Lecturers who published research in journals in Scopus/WoS quartiles 1–2 | 36% | 14% | 29% | 39% | 17% |
| 8. Share of researchers who regularly publish the results of their research in journals in Scopus/WoS quartiles 1–2 | 53% | 44% | 57% | 64% | 39% |
| 9. Number of patents registered to the university | 187–298 | 55–112 | 312–628 | 215–365 | 89–178 |
| 10. Share of patent authorship attributable to researchers/lecturers | 65/35% | 85/15% | 78/22% | 62/38% | 82/18% |
| 11. Annual volume of scientific work per employee of the SU, thousands of rubles (average estimate) | 9444 | 2652 | 3000 | 3625 | 3000 |

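Row 11 (the per-employee “average estimate”) can be roughly re-derived from rows 3 and 5: take the midpoint of each reported volume range (millions of rubles) and divide by the researcher headcount. The midpoint assumption is mine, not stated in the source, which is presumably why the estimates deviate from the reported figures by up to about 2%.

```python
# Re-derive row 11 from rows 3 and 5, assuming range midpoints were used.
volume_ranges = {  # annual volume of scientific work, millions of rubles (row 5)
    "Mining": (1500, 1950), "SPbSU": (580, 650), "Polytech": (710, 790),
    "ITMO": (650, 780), "LETI": (130, 170),
}
researchers = {"Mining": 180, "SPbSU": 230, "Polytech": 250, "ITMO": 200, "LETI": 50}
reported = {"Mining": 9444, "SPbSU": 2652, "Polytech": 3000, "ITMO": 3625, "LETI": 3000}

estimates = {}
for uni, (lo, hi) in volume_ranges.items():
    midpoint = (lo + hi) / 2                                # millions of rubles
    estimates[uni] = midpoint * 1000 / researchers[uni]     # thousands of rubles/person
    # Every estimate lands within a few percent of the tabulated value.
    assert abs(estimates[uni] - reported[uni]) / reported[uni] < 0.05, uni
```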
| Characteristic | Mining University | SPb State University | SPb Polytechnical University | ITMO University | LETI University |
| --- | --- | --- | --- | --- | --- |
| The share of students and postgraduates who study technical specialties | 93% | 44% | 68% | 94% | 78% |
| University type (EE—engineering; C—comprehensive; E—mixed, closer to engineering) | EE | C | E | EE | E |

| Performed by Units (share of the total volume, %) | Mining University | St. Petersburg State University | SPb Polytechnical University | ITMO University | LETI University | Weighted Average * |
| --- | --- | --- | --- | --- | --- | --- |
| 1. Scientific units (SUs/STUs), total | 90.3% | 62.8% | 79.8% | 93.6% | 65.3% | 83.7% |
| (a) fundamental research | 19.8% | 16.8% | 14.6% | 12.6% | 27.8% | 17.3% |
| (b) engineering projects | 70.5% | 46.0% | 65.2% | 81.0% | 37.5% | 66.4% |
| 2. Education units (EUs), total | 9.7% | 37.2% | 20.2% | 6.4% | 34.7% | 16.3% |
| (a) fundamental research | 9.1% | 35.0% | 15.3% | 6.0% | 28.0% | 14.4% |
| (b) engineering projects | 0.6% | 2.2% | 4.9% | 0.4% | 6.7% | 1.9% |
| TOTAL | 100% | 100% | 100% | 100% | 100% | 100.0% |
| (a) fundamental research | 28.9% | 51.8% | 29.9% | 18.6% | 55.8% | 31.8% |
| (b) engineering projects | 71.1% | 48.2% | 70.1% | 81.4% | 44.2% | 68.2% |

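The “Weighted Average” column can be reproduced, to rounding, by weighting each university’s share by its annual research volume. The weights below are the midpoints of the volume ranges reported earlier (1500–1950, 580–650, 710–790, 650–780, and 130–170 million rubles); this weighting scheme is an assumption on my part, since the source does not state the exact weights.

```python
# Reproduce the weighted-average share of scientific units (SUs/STUs)
# using volume-range midpoints (millions of rubles) as weights.
volumes = {"Mining": 1725, "SPbSU": 615, "Polytech": 750, "ITMO": 715, "LETI": 150}
su_share = {"Mining": 90.3, "SPbSU": 62.8, "Polytech": 79.8, "ITMO": 93.6, "LETI": 65.3}

total_volume = sum(volumes.values())
weighted = sum(su_share[u] * volumes[u] for u in volumes) / total_volume
# round(weighted, 1) gives 83.7, matching the tabulated weighted average.
```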
| Groups of Indicators | Significance for Fundamental Research, % | Significance for Engineering Projects, % |
| --- | --- | --- |
| 1. Basic scientific performance indicators | 10.9% | 11.0% |
| 2. Student cooperation indicators | 7.6% | 13.2% |
| 3. Quantitative economic indicators | 29.8% | 65.4% |
| 4. Quantitative scientometric indicators | 31.7% | 4.4% |
| 5. International cooperation indicators | 3.2% | 1.3% |
| 6. Qualitative assessment (comprehensive multifactorial assessment) | 16.8% | 4.7% |
| TOTAL | 100.0% | 100.0% |

| Indicators for Fundamental Research | % | Indicators for Engineering Projects | % |
| --- | --- | --- | --- |
| 4.1. the quantity of scientific publications indexed in Scopus/WoS quartiles 1–2 | 8.8% | 1.1. the number of registered patents | 7.8% |
| 4.5. the quantity of citations in Scopus/WoS databases | 7.8% | 3.12. net profit or pure income (proceeds minus all costs and taxes) | 6.9% |
| 6.1. possibilities for integration with the results of previous and related studies | 5.6% | 3.4. constantly used laboratory spaces, m² | 6.4% |
| 1.3. the number of defended dissertations (Ph.D.; science candidate) by employees of SUs | 5.3% | 3.2. working time of researchers, hours | 6.1% |
| 4.7. the quantity of reviews performed for Scopus/WoS | 4.5% | 3.3. working time of researchers, costs (if available) | 5.7% |
| 4.8. the quantity of reviews performed for publications indexed in national citation databases | 3.7% | 3.8. cost of specially purchased equipment for the project | 5.7% |
| Subtotal | 35.7% | Subtotal | 38.6% |

| Hypothesis | Conclusion |
| --- | --- |
| H1 | Partially proved (70%) |
| H2 | Proved |
| H3 | Partially proved (90%) |
| H4 | Partially proved (50%) |
| H5 | Proved |
| H6 | Proved |


Radushinsky, D.A.; Zamyatin, E.O.; Radushinskaya, A.I.; Sytko, I.I.; Smirnova, E.E. The Performance and Qualitative Evaluation of Scientific Work at Research Universities: A Focus on the Types of University and Research. Sustainability 2024, 16, 8180. https://doi.org/10.3390/su16188180


  • Open access
  • Published: 19 September 2024

Strengthening rural healthcare outcomes through digital health: qualitative multi-site case study

  • Leanna Woods,
  • Rebekah Eden,
  • Sophie Macklin,
  • Jenna Krivit,
  • Rhona Duncan,
  • Helen Murray,
  • Raelene Donovan &
  • Clair Sullivan

BMC Health Services Research, volume 24, Article number: 1096 (2024)


Background

Rural populations experience ongoing health inequities, with disproportionately high morbidity and mortality rates, yet digital health in rural settings is poorly studied. Our research question was: How does digital health influence healthcare outcomes in rural settings? The objective was to identify how digital health capability enables the delivery of outcomes in rural settings according to the quadruple aims of healthcare: population health, patient experience, healthcare costs and provider experience.

Methods

A multi-site qualitative case study was conducted, with interviews and focus groups performed with healthcare staff (n = 93) employed in rural healthcare systems (n = 10) in the state of Queensland, Australia. An evidence-based digital health capability framework and the quadruple aims of healthcare served as classification frameworks for deductive analysis. Theoretical analysis identified the interrelationships among the capability dimensions, and relationships between the capability dimensions and healthcare outcomes.

Results

Seven highly interrelated digital health capability dimensions were identified from the interviews: governance and management; information technology capability; people, skills, and behaviours; interoperability; strategy; data analytics; and consumer-centred care. Outcomes were directly influenced by all dimensions except strategy. The interrelationship analysis demonstrated the influence of strategy on all digital health capability dimensions apart from data analytics, where the outcomes of data analytics shaped ongoing strategic efforts.

Conclusions

The study indicates the need to coordinate improvement efforts targeted across the dimensions of digital capability, optimise data analytics in rural settings to further support strategic decision making, and consider how consumer-centred care could influence digital health capability in rural healthcare services. Digital transformation in rural healthcare settings is likely to contribute to the achievement of the quadruple aims of healthcare if transformation efforts are supported by a clear, resourced digital strategy that is fit-for-purpose to the nuances of rural healthcare delivery.

Peer Review reports

Continuing significant health inequalities exist within and across countries [ 1 ]. In Australia, people in rural and remote areas have a 40% increased disease risk and a 2.3 times higher preventable mortality rate than those in major cities [ 2 ]. First Nations Australians are particularly disadvantaged: those living in remote areas have a life expectancy 6–7 years lower than those in major cities [ 3 ]. The population is unevenly dispersed across vast distances and harsh environments, contributing to healthcare access issues [ 4 ].

In these rural settings, geographic, resourcing and health equity challenges provide great opportunity for digital models of care [ 5 ]. Precision medicine and virtual care hold the promise of more integrated and value-based health systems with improved outcomes and care closer to home [ 6 ]. Outcomes are increasingly measured and mapped using the quadruple aim of healthcare: population health, patient experience, healthcare costs and provider experience [ 7 ]. The quadruple aim enables a balanced view of healthcare outcomes beyond traditionally measured productivity measures to benefits that are meaningful to staff, clinicians and consumers [ 8 ].

Literature on digital health transformation acknowledges the disparities faced by rural communities and their difficulty in delivering on the quadruple aim of healthcare given geographic constraints [9, 10, 11]. Although digital adoption across healthcare settings varies [12], the need for digital health is highest in rural and resource-constrained settings [13]. Rural areas need more reliable digital connectivity to compensate for geographical remoteness, yet rural communities are generally less well connected by technologies [10]. Described as a digital vulnerability [10], rurality is contributing to widening digital health inequities [9]. Careful consideration of transformation efforts is required to ensure that unanticipated consequences [8], such as exacerbating the rural digital divide, are adequately managed [10, 14]. ‘TechQuity’ has become an increasingly prioritised commitment to use technology to eliminate structural inequities among diverse social, economic, demographic or geographic groups [14]. Although enormous efforts are currently underway to bring digital services to rural and remote areas, challenges remain in supporting an ageing and understaffed rural workforce [15] to adopt digitally enabled models of care amid immature infrastructure, network fragility, public policy constraints, adoption barriers, a lack of digital devices, constrained local technical knowledge, and variable digital inclusion of citizens [9, 10, 11].

Digital health capability refers to the enabling environment required for executing digital health [ 16 ]. Digital health capability (or ‘maturity’) assessments examine the enabling environment across groups of variables often referred to as dimensions [ 16 , 17 , 18 ]. Despite the interest of digital transformation leaders to define clear targets for success, and the many frameworks available to assess digital health capability (e.g., Electronic Medical Record Adoption Model, Picture Archiving and Communication Systems Maturity Model, Clinical Digital Maturity Index) [ 17 ], international consensus on how digital health capability should be measured remains elusive [ 19 ]. Digital capability assessments have traditionally focused on technical implementation alone [ 19 ]. To consolidate the diverse dimensions, we conducted a systematic literature review and developed a synthesised digital health capability framework [ 17 ], and refined it through a consultative process with various healthcare stakeholders [ 18 ] (Fig.  1 ). This framework provides health organisations with a more comprehensive understanding of the current state of digital health capability and guidelines to develop roadmaps for digital health to result in meaningful improvement in patient care, health outcomes and health equity. While building digital health capability can positively contribute to the health equity gap [ 20 ], the digital health capability of rural and remote health services is poorly studied.

figure 1

Digital health capability framework

Previous research suggests the need for healthcare organisations to improve digital health capability [ 19 ] with calls for research to investigate: (1) digital health capability in resource-constrained rural health services [ 12 , 13 , 21 ], (2) the interrelationships among the different dimensions of digital health capability [ 17 ], and (3) the outcomes that meaningfully benefit patients and populations [ 17 , 21 , 22 ]. To address this gap, we sought to answer the research question: How does digital health capability influence healthcare outcomes in rural settings according to the quadruple aims of healthcare?

Our objectives were to:

Identify digital health capability dimensions in rural settings.

Identify the relationships among the digital health capability dimensions and healthcare outcomes.

Identify the interrelationships among the digital health capability dimensions.

Methods

A multi-site qualitative case study was conducted, using semi-structured interviews supplemented with focus groups.

Setting and participants

In the geographically large Australian state of Queensland, approximately 38% of the total population and 66% of First Nations people live in non-metropolitan areas [4]. Universal healthcare is delivered by Queensland Health across 16 geographically defined healthcare systems, providing a range of public healthcare services from small rural clinics to large quaternary academic hospitals [23]. Of the 16 healthcare systems, six are considered regional, as they have a mix of regional, rural and remote healthcare services, and four are considered remote, as they serve only rural and remote communities [23, 24] (Fig. 2). Queensland Health’s ten-year digital strategic plan includes the vision to improve access to care and support better health outcomes for rural and remote Queenslanders [2]. Regional (n = 6) and remote (n = 4) healthcare systems were included in this study and are collectively referred to as rural healthcare systems (n = 10).

figure 2

Regional and remote healthcare systems in Queensland, Australia

Data collection

As digital transformation is an interdisciplinary endeavour, healthcare staff working in diverse roles (e.g., clinicians, executives, informatics team members) within rural healthcare systems were eligible participants, recruited via purposive sampling [25]. Site contact persons and the state-wide health executive forum helped identify participants. Participants were invited to attend an interview or focus group via videoconferencing, and a semi-structured interview guide was administered by two interviewers. The interview guide contained questions pertaining to strategic vision, experiences of implementations, and evaluations of digital transformations, tailored to each professional group (Appendix 1). Participation was voluntary. Interviews and focus groups were audio recorded, transcribed, and anonymised.

Data analysis

The interview data were deductively analysed using the digital health capability framework [18] and the quadruple aims of healthcare [7] as classification frameworks. Two researchers coded the interview data and classified the relevant perceptions of interview participants to the respective subdimensions of digital health capability and healthcare outcomes. An initial qualitative analysis showed that the themes described by regional and remote healthcare systems were consistent, so the analysis was performed on the combined sample. When the analysis revealed a theme not included in the existing framework [18], it was inductively analysed [26]. Three new sub-dimensions resulted: resources (governance and management dimension); fit-for-purpose (IT capability dimension); and attitudes (people, skills and behaviour dimension) (Fig. 1, Appendix 2).

The quadruple aims of healthcare were used to identify the outcomes related to population health (e.g., equity, access, disparities), patient experience (e.g., preferences, communication, access, engagement), provider experience (e.g., workload, preferences), and healthcare costs (i.e., resourcing, efficiency) [7, 27].

To address objective 1, the quotes classified to the sub-dimensions and dimensions of the digital health capability framework were synthesised in tables and reported narratively. We did not identify divergent perspectives in this analysis.

To address objectives 2 and 3, theoretical coding was performed to identify relationships between the dimensions of the digital health capability framework and outcomes (e.g., between interoperability and population health) and interrelationships among the dimensions (e.g., between interoperability and strategy), respectively. The analyses for objectives 2 and 3 were supported by the matrix querying functionality in NVivo (version 12, QSR International). The outputs of the matrix queries were manually analysed by four researchers to confirm the relationships and to identify the direction of each relationship.
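Conceptually, a matrix query cross-tabulates coded transcript segments: a segment coded with both a capability dimension and an outcome contributes to the corresponding cell, and the resulting counts flag candidate relationships for manual review. A minimal sketch of that step outside NVivo (segment data and code names below are hypothetical, for illustration only):

```python
# Sketch of a matrix-query-style cross-tabulation: count how often a
# dimension code and an outcome code are attached to the same segment.
from itertools import product
from collections import Counter

dimensions = {"interoperability", "strategy", "data analytics"}
outcomes = {"population health", "provider experience"}

# Each segment is the set of codes two researchers attached to one quote.
segments = [
    {"interoperability", "population health"},
    {"interoperability", "provider experience"},
    {"data analytics", "strategy"},      # two dimensions, no outcome
    {"interoperability", "population health"},
]

matrix = Counter()
for seg in segments:
    # Pair every dimension code on the segment with every outcome code on it.
    for dim, out in product(seg & dimensions, seg & outcomes):
        matrix[(dim, out)] += 1

# matrix[("interoperability", "population health")] is 2: two co-coded segments.
```

Cells with non-zero counts would then be reviewed against the underlying quotes to confirm the relationship and determine its direction, as described above.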

In all analyses, findings were refined and finalised through consensus in researcher workshops [ 25 ].

Results

In total, 93 participants attended an interview or focus group across regional (n = 6) and remote (n = 4) healthcare systems (Table 1).

Perceptions of rural digital health capability

The perceptions of digital health capability dimensions in rural healthcare systems are described below. Three new sub-dimensions were identified: resources (governance and management dimension); fit-for-purpose (IT capability dimension); and attitudes (people, skills and behaviour dimension). Appendix 3 provides additional evidence of the capability subdimensions.

Consumer-centred care, in terms of supporting consumers to manage their own health through technology-enhanced care and improved consumer health literacy, was valued by participants: “Communication with patients, so the ability to send text messages or have an app that allows them to track their own health, in terms of appointments, results, medications …would be …great.” (G6). Limited resourcing, time availability and the dispersed location of consumers were described as barriers to the access and use of the technological infrastructure needed to enable consumer-centred care.

Governance and management were considered vital as rural sites transitioned from paper to digital. Adequate resourcing and support for staff through this transition were important: “In …the last five years, we’ve introduced them (staff) to [the electronic medical record] and that has been a huge challenge with a lot of change initiatives and management to …bring them along on the journey.” (E2). Participants identified that governance and management efforts should be targeted at providing effective data governance, protocols for sharing data with external providers, and reducing unsafe workarounds.

In the IT capability dimension, digital infrastructure was limited by internet connectivity: “The other barrier …is our connectivity within our sites, with being rural and remote. We don’t always have the greatest internet. …Our bandwidth and our speed [is poor]” (F8). During transition phases, participants reported that the ‘hybrid’ paper-digital model resulted in duplicated information and poor data accuracy. The use of dashboards was valuable for visualising data.

In the people, skills and behaviour dimension, the digital literacy of the healthcare workforce mediated their acceptance of digital health: “The hardest thing was they’re extremely experienced and knowledgeable clinicians, and they’ve had to now go into something where they feel inadequate and feel that they can’t do their job” (E1). Participants indicated that the continued use of telehealth and investment in education to enhance individual competence and clinician confidence in digital technologies can minimise unnecessary workarounds and contribute to providing equitable care to rural populations.

Interoperable systems were perceived to facilitate the efficient and accurate exchange of clinical information. Continuity of care is difficult between primary care providers, state-funded health services, and external providers in rural settings: “I can see a benefit in patient care delivery for when the entire health service is on the same system, because it will help with the transfer of the patient care through the different journeys” (B4). Participants emphasised that poor information visibility with external providers limited external interoperability, and the numerous systems utilised within a healthcare system limited internal interoperability.

The strategic focus of rural healthcare systems is on interoperability, digital competency, and investment in education and training. Strategic adaptability and alignment with organisational strategies were reported as challenging in rural contexts, as digital health solutions were originally tailored for other healthcare systems in Queensland: “The state-wide solutions don’t really cater for the [rural health], because of how isolated and remote it is” (J10).

Rural healthcare providers use data analytics for healthcare performance tracking. Accurate data input by clinicians was critical for data analytics: “[The digital system] is our source of truth and it needs to be correct and up to date because that’s where all of our information from a funding perspective comes from” (B3). The continued use of paper is required for manual auditing. Participants saw value in extending data analytics to provide insight into trends and identify clinical risks, particularly for chronic disease management.

Relationships among digital health capability dimensions and outcomes

All four healthcare outcomes (population health, patient experience, provider experience, and healthcare costs) were described as being influenced by the digital health capability dimensions (Fig. 3; Table 2, Appendix 4). Strategy did not directly impact any outcome.

figure 3

Dimensions of digital health capability and their relationships to the healthcare outcomes

Relationships among the digital health capability dimensions

Interrelationships were present among all digital health capability dimensions (Fig. 4, Appendix 5).

figure 4

Interrelationships among the digital health capability dimensions

Participants described how the strategy dimension influenced the governance and management; people, skills and behaviour; IT capability; interoperability; and consumer-centred care dimensions. To incorporate digital transformation in the strategy and facilitate “care closer to the home” (J4), monetary and human resources are required. Enacting the digital strategy requires balancing priorities to develop the digital capability of the health system and to “invest in the education and training of staff” (F7) to improve people, skills, and behaviour. Investment in technology infrastructure is required: “to invest in any further digital transformation, [it] must come with updated hardware infrastructure” (F6). Interoperability was core to the strategy, with the vision to “have one system, …so …all of our users …are only seeing one system, and if that’s not possible, then get interoperability front and centre on everyone’s roadmap” (G2).

Participants reported the challenge of governance and management and described its influence on all dimensions. Resource constraints and a lack of digital health governance within individual healthcare systems have hampered the digital health strategy from being realised. Technologies (e.g., telehealth) and structures (a “consumer and community engagement team” (I6)) to facilitate consumer-centred care have been introduced in rural settings. Due to the segmentation of healthcare systems and of primary and secondary care, participants expressed the importance of interoperability standards: “blanket rules that all digital platforms …talk(ed) to each other” (B6). In some instances, efforts to improve interoperability have been introduced by management, ensuring that the same system is used across sites: “Patient flow manager for us is our communication tool and we made sure that every site has [it]” (D12). Although there are “government documents that dictate how it should be used, …they’re not well enforced” (F8), and individual variations exist in technology use. Establishing a “culture …[that] supports very open and transparent reporting of incidents” (A7) and structures such as a “business analysis and decision support team …[and] a nurse informatics [area]” (B10) facilitated data analytics.

IT capability influenced the governance and management; data analytics; consumer-centred care; people, skills and behaviour; and interoperability dimensions. Governance and management were facilitated by providing “a full audit system …we can audit, we can review, we can look at gaps” (F8). IT also enabled analytics: “we use [software] to collate and to present data back to clinical teams, …for decision making, …to understand trends, …[and] for reporting …to the board and executive” (I6). Leveraging the capability of web-based portals, community healthcare directories, and telehealth has “enabled deeper engagement with the community” (F9), facilitating consumer-centred care. In some instances, IT improved interoperability: “we can see every admission to a public hospital in Queensland, and clinical notes, and discharge summaries” (B2). However, other IT did not meet interoperability standards, and the inclusion of additional IT was sometimes met with fear and frustration.

Interoperability influenced the governance and management; data analytics; people, skills and behaviour; and IT capability dimensions. Participants expressed frustration with the limited interoperability: “instead of having one single channel which is commonly shared by everybody, we now have five different variants at both ends” (A3). Suboptimal interoperability impedes IT capability and data analytics: “if everything is linked to all databases, …you [would] have these wonderful capabilities to put your parameters in and run a report” (D9). Participants noted that a lack of interoperability between state-wide systems is “fraught with risk” (C9) that needs to be managed.

Participants described how data analytics influenced the strategy; governance and management; and people, skills and behaviour dimensions. Reporting and visualisation of data facilitated “operational and strategic planning” (J8). Data analytics can be used to actively monitor whether national standards are met: “We have a national standard called Emergency Length of Stay – …making sure patients leave within four hours. …I use the [digital] system …to keep tracking ‘where are we at? What’s our timing, what are we doing’” (E1). However, individuals had a “sense that the data will be used against people” (D6) and expressed concerns “over being a micro-manager” (D4).

The consumer-centred care dimension did not influence any digital health capability dimensions.

People, skills and behaviour appeared to influence the data analytics and consumer-centred care dimensions. Healthcare professionals perform workarounds to overcome system limitations, often resulting in poor data quality that hampers data analytics: “garbage in, garbage out. …Because people didn’t enter information correctly, it …threw out the whole dataset.” (E9). The digital health literacy of healthcare professionals and consumers influenced the delivery of consumer-centred care: “To access healthcare moving forward you will have to have a level of digital acuity and we need to be responsible for building competency and checking for competency. …It’s not just for the patients, …it’s a skill that we need to embed with our staff.” (B7).

Despite the potentially transformative role of digital health to address access and equity [ 16 ], healthcare systems can struggle with implementing digital health in a way that addresses the health needs of marginalised communities. The key challenges across the seven capability dimensions involved enhancing workforce education and support, increasing IT capability, enabling interoperability, and catering healthcare delivery to rural settings. The existing digital health capability framework was improved by adding the sub-dimensions of resourcing, fit-for-purpose, and attitude, which are essential for rural digital transformation. The extensive use of telehealth and clinical dashboards are exemplar technological implementations enhancing rural digital health capability.

Our study in rural and remote Australia adds critical insight into the interrelationships among dimensions of digital health capability. First, it confirms the need for rigorous strategy, governance, optimised digital technologies and a capable workforce to drive digital transformation. Second, it links the capability dimensions with outcomes, reporting how healthcare outcomes in the rural setting are directly influenced by six of the seven digital health capability dimensions. Third, it extends knowledge on the critical role of digital health in healthcare settings where need is high but appropriate, sustainable adoption is challenging.

Implications for practice

Healthcare leaders need to coordinate improvement efforts targeted across the dimensions of digital health. First, rural healthcare systems require a robust digital health strategy. The strategy dimension did not appear to influence outcomes directly; rather, it influenced other dimensions, which in turn influenced outcomes. To deliver on the digital strategy, governance and management frameworks need to be established to support the provisioning of technical elements and the development of workforce capability and consumer digital literacy. To ensure local needs are met, digital capability building initiatives require a clear local digital strategy, critical for coordinating activities [ 19 ], setting clear targets towards the desired future state [ 28 ], and motivating stakeholders towards pursuit of the quadruple aims of healthcare [ 19 ]. A concurrent state-wide study of digital health capability in Queensland identified the need for strategic initiatives to address variations in capability in rural healthcare systems [ 29 , 30 ]. A metropolitan-originated digital health strategy is unlikely to address rural healthcare delivery challenges.

Data analytics capability can be optimised in rural settings to support decision-making and strategic development. We evidenced that data analytics can influence strategy and has a bidirectional relationship with the governance and management and people, skills and behaviour dimensions. Foundationally, data analytics capability is influenced by the interoperability and IT capability dimensions. This indicates that improving interoperability and IT capability to enhance information exchange and the usability of data can strengthen the data analytics capability, resulting in better informed and executed strategy supported by governance and management. In settings challenged with health inequities and coverage of essential healthcare [ 1 ], investing in data analytics capability will build a foundation for predictive and prescriptive analytics to benefit individual and population health [ 31 , 32 ]. Clinical insights gained from data analytics can direct strategic priorities of rural healthcare delivery. We found that rural healthcare workers are seeing these early benefits through identification of clinical risk, particularly in chronic disease management and for early intervention.

Our findings demonstrate that participants view consumer-centred care as a capability that is shaped by other dimensions rather than informing other dimensions. Existing literature has struggled to articulate the intersection between consumers and digital health in system-wide transformation. A systematic review of digital health capability revealed that less than 10% of studies examined consumer-centred care [ 17 ]. The importance of the consumer-centred care dimension was evident in participants’ perceptions that it improved population health and patient experience. Emerging insights from broader literature indicate the need to consider additional consumer digital health themes (e.g., ethical implications, choice, transparency) [ 33 , 34 , 35 ]. Defining best practice processes (e.g., consumer panels, committees) and people (e.g., diversity, inclusion) to integrate consumer perspectives into digital initiatives remains necessary for rural healthcare improvement to address access and equity challenges.

Limitations

The qualitative methodology is limited to inferring association among the capability dimensions and outcomes, rather than attributing causality. Perceptions of digital health capability were captured from the purposive sample of interviewed participants with diverse roles in the health service. Differences in perspectives across different participant cohorts were not examined. We are unable to account for confounders of perceived digital capability and outcomes, which are likely to be influenced by participants’ experience in their work setting at the time of data collection.

Digital health holds the promise of overcoming the “tyranny of distance”. Improvement of essential health services in rural settings is critical to health and wellbeing of populations experiencing health inequities. Using a multi-site case study analysis of rural healthcare systems, this study evidenced the complex interplay among the dimensions of digital health capability and the association between the dimensions and healthcare outcomes. The drivers of digitally enabled healthcare improvement in rural settings transcend single dimensions of digital health capability. A focus on digital strategy is foundational to enabling improvements across technical and human capabilities. Better leveraging data analytics and embedding consumer-centred care are likely to enable digital health improvements that meet local needs, enhancing the patient experience, improving population health, reducing the cost of care, and improving the provider experience. Improving care for our rural populations is critical to entire system improvement.

Data availability

The data that support the findings of this study are not openly available in accordance with the approved Human Research Ethics Application and institutional requirements. For enquiries about the availability of data and materials, contact the corresponding author at [email protected].

Abbreviations

  • IT – Information technology

United Nations Department of Economic and Social Affairs, Sustainable Development. Ensure healthy lives and promote well-being for all at all ages. United Nations; 2023 [cited 2023 Sep 29]. https://sdgs.un.org/goals/goal3

eHealth Queensland. Digital strategy for rural and remote healthcare. Queensland: State of Queensland (Queensland Health); 2021.


Australian Institute of Health and Welfare. Aboriginal and Torres Strait Islander Health Performance Framework 2020 summary report. Canberra: AIHW; 2020.

Queensland Department of Health. Rural and remote health and wellbeing strategy 2022–2027. Queensland: State of Queensland (Queensland Health); 2022.

Marshall A, Babacan H, Dale AP. Leveraging digital development in regional and rural Queensland: Policy Discussion Paper. 2021.

Queensland Health. Digital health 2031: a digital vision for Queensland’s health system. Brisbane, Queensland: State of Queensland; 2022.

Bodenheimer T, Sinsky C. From triple to quadruple aim: care of the patient requires care of the provider. Annals Family Med. 2014;12(6):573–6.


Woods L, Eden R, Canfell OJ et al. Show me the money: how do we justify spending health care dollars on digital health? Med J Aust. 2022.

Yao R, Zhang W, Evans R, et al. Inequities in health care services caused by the adoption of digital health technologies: scoping review. J Med Internet Res. 2022;24(3):e34144.


Esteban-Navarro M-Á, García-Madurga M-Á, Morte-Nadal T, et al. The rural digital divide in the face of the COVID-19 pandemic in Europe—recommendations from a scoping review. Informatics. 2020;7(4):54.

Manyazewal T, Woldeamanuel Y, Blumberg HM, et al. The potential use of digital health technologies in the African context: a systematic review of evidence from Ethiopia. NPJ Digit Med. 2021;4(1):125.

Clarke MA, Skinner A, McClay J et al. Rural health information technology and informatics workforce assessment: a pilot study. Health Technol. 2023:1–9.

Curioso WH. Building capacity and training for digital health: challenges and opportunities in Latin America. J Med Internet Res. 2019;21(12):e16513.

Clark CR, Akdas Y, Wilkins CH, et al. TechQuity is an imperative for health and technology business: Let’s work together to achieve it. J Am Med Inform Assoc. 2021;28(9):2013–6.

World Health Organization. WHO Guideline on Health Workforce Development, Attraction, Recruitment and Retention in Rural and Remote Areas. 2021.

World Health Organization. Recommendations on digital interventions for health system strengthening. World Health Organ. 2019:2020–10.

Duncan R, Eden R, Woods L, et al. Synthesizing dimensions of digital maturity in hospitals: systematic review. J Med Internet Res. 2022;24(3):e32994.

Woods LS, Eden R, Duncan R et al. Which one? A suggested approach for evaluating digital health maturity models. Front Digit Health. 2022.

Cresswell K, Sheikh A, Krasuska M, et al. Reconceptualising the digital maturity of health systems. Lancet Digit Health. 2019;1(5):e200–1.


World Health Organization. Global strategy on digital health 2020–2025. Geneva: World Health Organization; 2021. p. 60.

Martin G, Clarke J, Liew F, et al. Evaluating the impact of organisational digital maturity on clinical outcomes in secondary care in England. NPJ Digit Med. 2019;2(1):1–7.

Flott K, Callahan R, Darzi A, et al. A patient-centered framework for evaluating digital maturity of health services: a systematic review. J Med Internet Res. 2016;18(4):e75.

Queensland Health. Rural regions and locations. Queensland; 1996–2023 [cited 2023 Sep 18]. https://www.health.qld.gov.au/employment/what-its-like-to-work-for-us/rural-remote/rural-regions-and-locations

Australian Government Department of Health. Health Workforce Locator. Australia (no date). https://www.health.gov.au/resources/apps-and-tools/health-workforce-locator/health-workforce-locator

Yin RK. Qualitative research from start to finish. Guilford; 2015.

Saldaña J. The coding manual for qualitative researchers. Sage; 2021.

Avdagovska M, Menon D, Stafinski T. Capturing the impact of patient portals based on the Quadruple Aim and benefits evaluation frameworks: scoping review. J Med Internet Res. 2020;22(12):e24568.

Burmann A, Meister S, editors. Practical application of maturity models in healthcare: findings from multiple digitalization case studies. HEALTHINF; 2021.

Woods L, Eden R, Pearce A, et al. Evaluating digital health capability at scale using the Digital Health Indicator. Appl Clin Inform. 2022;13(05):991–1001.

Woods L, Dendere R, Eden R, et al. Perceived impact of digital health maturity on patient experience, population health, health care costs, and provider experience: mixed methods case study. J Med Internet Res. 2023;25:e45868.

Sanchez P, Voisey JP, Xia T, et al. Causal machine learning for healthcare and precision medicine. Royal Soc Open Sci. 2022;9(8):220638.

Almond H, Mather C. Digital Health: A Transformative Approach. Elsevier Health Sciences; 2023.

Kukafka R. Digital health consumers on the road to the future. J Med Internet Res. 2019;21(11):e16359.

Health Consumers Queensland. Queensland Digital Health Consumer Charter. Queensland: Health Consumers Queensland; 2021.

Viitanen J, Valkonen P, Savolainen K, et al. Patient experience from an eHealth perspective: a scoping review of approaches and recent trends. Yearb Med Inform. 2022;31(01):136–45.


Acknowledgements

The authors would like to extend their appreciation to the many Queensland Health staff members who participated in this research.

This research was supported by the Digital Health CRC Limited (“DHCRC”) and Queensland Department of Health. DHCRC is funded under the Australian Commonwealth’s Cooperative Research Centres (CRC) Program.

Author information

Authors and affiliations.

Centre for Health Services Research, Faculty of Medicine, The University of Queensland, Brisbane, Australia

Leanna Woods & Clair Sullivan

Queensland Digital Health Centre, Faculty of Medicine, The University of Queensland, Brisbane, Australia

Leanna Woods, Rebekah Eden, Sophie Macklin, Jenna Krivit & Clair Sullivan

Digital Health Cooperative Research Centre, Sydney, Australia

Leanna Woods

UQ Business School, Faculty of Business, Economics, and Law, The University of Queensland, Brisbane, Australia

Rebekah Eden

School of Information Systems, Faculty of Science, Queensland University of Technology, Brisbane, Australia

Rhona Duncan

North West, Central West, South West Hospital and Health Services, Longreach, QLD, Australia

Helen Murray

eHealth Queensland, Brisbane, Australia

Raelene Donovan

Metro North Hospital and Health Service, Brisbane, Australia

Clair Sullivan


Contributions

LW, RE and CS devised the project with strategic direction from HM and RaD. LW, RE and RhD contributed to data collection under the supervision of CS. SM and JK contributed to data analysis and reporting of results with the assistance of LW, RE and CS. LW and RE wrote the manuscript with assistance from SM, JK, ReD, HM, RhD and CS. All authors contributed to the article and approved the submitted version.

Corresponding author

Correspondence to Leanna Woods .

Ethics declarations

Ethics approval and consent to participate.

The study received multisite ethics approval from the Royal Brisbane and Women’s Hospital [ID: HREC/2020/QRBW/66895], followed by site-specific research governance approvals. Informed, written consent was provided by participants prior to data collection.

Consent for publication

Not applicable.

Competing interests

Author Clair Sullivan is an editorial board member of BMC Medical Informatics and Decision Making.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Rights and permissions.

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/ .


About this article

Cite this article.

Woods, L., Eden, R., Macklin, S. et al. Strengthening rural healthcare outcomes through digital health: qualitative multi-site case study. BMC Health Serv Res 24, 1096 (2024). https://doi.org/10.1186/s12913-024-11402-4

Received : 28 March 2024

Accepted : 05 August 2024

Published : 19 September 2024



  • Rural health services
  • Informatics
  • Quality of health care
  • Digital maturity

BMC Health Services Research

ISSN: 1472-6963


NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2024 Jan-.


Qualitative Study

Steven Tenny ; Janelle M. Brannan ; Grace D. Brannan .

Affiliations

Last Update: September 18, 2022.

  • Introduction

Qualitative research is a type of research that explores and provides deeper insights into real-world problems. [1] Instead of collecting numerical data points or intervening or introducing treatments as in quantitative research, qualitative research helps generate hypotheses to further investigate and understand quantitative data. Qualitative research gathers participants' experiences, perceptions, and behavior. It answers the hows and whys instead of how many or how much. It can be structured as a standalone study, purely relying on qualitative data, or as part of mixed-methods research that combines qualitative and quantitative data. This review introduces the readers to some basic concepts, definitions, terminology, and applications of qualitative research.

Qualitative research, at its core, asks open-ended questions whose answers are not easily put into numbers, such as "how" and "why." [2] Due to the open-ended nature of the research questions, qualitative research design is often not linear like quantitative design. [2] One of the strengths of qualitative research is its ability to explain processes and patterns of human behavior that can be difficult to quantify. [3] Phenomena such as experiences, attitudes, and behaviors can be complex to capture accurately and quantitatively. In contrast, a qualitative approach allows participants themselves to explain how, why, or what they were thinking, feeling, and experiencing at a particular time or during an event of interest. Quantifying qualitative data is certainly possible, but qualitative analysis fundamentally looks for themes and patterns that can be difficult to quantify, and it is essential to ensure that the context and narrative of qualitative work are not lost by trying to quantify something that is not meant to be quantified.

However, qualitative research is sometimes placed in opposition to quantitative research, as if the two approaches, and the philosophical paradigms associated with them, necessarily "compete" against each other. [4] While qualitative and quantitative approaches are different, they are not opposites and certainly not mutually exclusive. For instance, qualitative research can help expand and deepen understanding of data or results obtained from quantitative analysis. Say a quantitative analysis has determined a correlation between length of stay and level of patient satisfaction; qualitative methods can then explore why this correlation exists. This dual-focus scenario shows one way in which qualitative and quantitative research can be integrated.

Qualitative Research Approaches

Ethnography

Ethnography as a research design originates in social and cultural anthropology and involves the researcher being directly immersed in the participant’s environment. [2] Through this immersion, the ethnographer can use a variety of data collection techniques to produce a comprehensive account of the social phenomena that occurred during the research period. [2] That is to say, the researcher’s aim with ethnography is to immerse themselves into the research population and come out of it with accounts of actions, behaviors, events, etc, through the eyes of someone involved in the population. Direct involvement of the researcher with the target population is one benefit of ethnographic research because it can then be possible to find data that is otherwise very difficult to extract and record.

Grounded theory

Grounded Theory is the "generation of a theoretical model through the experience of observing a study population and developing a comparative analysis of their speech and behavior." [5] Unlike quantitative research, which is deductive and tests or verifies an existing theory, grounded theory research is inductive and, therefore, lends itself to research aimed at social interactions or experiences. [3] [2] In essence, Grounded Theory’s goal is to explain how and why an event occurs or how and why people might behave a certain way. Through observing the population, a researcher using the Grounded Theory approach can then develop a theory to explain the phenomena of interest.

Phenomenology

Phenomenology is the "study of the meaning of phenomena or the study of the particular.” [5] At first glance, it might seem that Grounded Theory and Phenomenology are pretty similar, but the differences can be seen upon careful examination. At its core, phenomenology looks to investigate experiences from the individual's perspective. [2] Phenomenology is essentially looking into the "lived experiences" of the participants and aims to examine how and why participants behaved a certain way from their perspective. Herein lies one of the main differences between Grounded Theory and Phenomenology. Grounded Theory aims to develop a theory for social phenomena through an examination of various data sources. In contrast, Phenomenology focuses on describing and explaining an event or phenomenon from the perspective of those who have experienced it.

Narrative research

One of qualitative research’s strengths lies in its ability to tell a story, often from the perspective of those directly involved in it. Reporting on qualitative research involves including details and descriptions of the setting involved and quotes from participants. This detail is called a "thick" or "rich" description and is a strength of qualitative research. Narrative research is rife with the possibilities of "thick" description as this approach weaves together a sequence of events, usually from just one or two individuals, hoping to create a cohesive story or narrative. [2] While it might seem like a waste of time to focus on such a specific, individual level, understanding one or two people’s narratives for an event or phenomenon can help to inform researchers about the influences that helped shape that narrative. The tension or conflict of differing narratives can be "opportunities for innovation." [2]

Research Paradigm

Research paradigms are the assumptions, norms, and standards underpinning different research approaches. Essentially, research paradigms are the "worldviews" that inform research. [4] It is valuable for qualitative and quantitative researchers to understand what paradigm they are working within because understanding the theoretical basis of research paradigms allows researchers to understand the strengths and weaknesses of the approach being used and adjust accordingly. Different paradigms have different ontologies and epistemologies. Ontology is defined as the "assumptions about the nature of reality,” whereas epistemology is defined as the "assumptions about the nature of knowledge" that inform researchers' work. [2] It is essential to understand the ontological and epistemological foundations of the research paradigm researchers are working within to allow for a complete understanding of the approach being used and the assumptions that underpin the approach as a whole. Further, researchers must understand their own ontological and epistemological assumptions about the world in general because their assumptions about the world will necessarily impact how they interact with research. A discussion of the research paradigm is not complete without describing positivist, postpositivist, and constructivist philosophies.

Positivist versus postpositivist

To further understand qualitative research, we must discuss positivist and postpositivist frameworks. Positivism is a philosophy that the scientific method can and should be applied to social and natural sciences. [4] Essentially, positivist thinking insists that the social sciences should use natural science methods in their research. It stems from positivist ontology, that there is an objective reality that exists that is wholly independent of our perception of the world as individuals. Quantitative research is rooted in positivist philosophy, which can be seen in the value it places on concepts such as causality, generalizability, and replicability.

Conversely, postpositivists argue that social reality can never be one hundred percent explained, but could be approximated. [4] Indeed, qualitative researchers have been insisting that there are “fundamental limits to the extent to which the methods and procedures of the natural sciences could be applied to the social world,” and therefore, postpositivist philosophy is often associated with qualitative research. [4] An example of positivist versus postpositivist values in research might be that positivist philosophies value hypothesis-testing, whereas postpositivist philosophies value the ability to formulate a substantive theory.

Constructivist

Constructivism is a subcategory of postpositivism. Most researchers invested in postpositivist research are also constructivist, meaning they think there is no objective external reality but instead that reality is constructed. Constructivism is a theoretical lens that emphasizes the dynamic nature of our world. "Constructivism contends that individuals' views are directly influenced by their experiences, and it is these individual experiences and views that shape their perspective of reality." [6] Constructivist thought focuses on how "reality" is not a fixed certainty and how experiences, interactions, and backgrounds give people a unique view of the world. Constructivism contends, unlike positivist views, that there is not necessarily an "objective" reality we all experience. This is the "relativist" ontological view that "reality and our world are dynamic and socially constructed. Therefore, qualitative scientific knowledge can be inductive as well as deductive." [4]

So why is it important to understand the differences in assumptions that different philosophies and approaches to research have? Fundamentally, the assumptions underpinning the research tools a researcher selects provide an overall base for the assumptions the rest of the research will have. It can even change the role of the researchers. [2] For example, is the researcher an "objective" observer, such as in positivist quantitative work? Or is the researcher an active participant in the research, as in postpositivist qualitative work? Understanding the philosophical base of the study undertaken allows researchers to fully understand the implications of their work and their role within the research and reflect on their positionality and bias as it pertains to the research they are conducting.

Data Sampling 

The better the sample represents the intended study population, the more likely the researcher is to capture the varying factors at play. The following are examples of participant sampling and selection: [7]

  • Purposive sampling – selection based on the researcher's rationale for being the most informative.
  • Criterion sampling – selection based on pre-identified factors.
  • Convenience sampling – selection based on availability.
  • Snowball sampling – selection by referral from other participants or people who know potential participants.
  • Extreme case sampling – targeted selection of rare cases.
  • Typical case sampling – selection based on regular or average participants.
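The referral logic of snowball sampling can be sketched programmatically. The following is an illustrative Python sketch, not part of the source chapter; the participant IDs and referral network are hypothetical:

```python
def snowball_sample(seeds, referrals, target_size):
    """Recruit participants breadth-first by following referrals from
    earlier participants until the target sample size is reached."""
    recruited = list(seeds)
    frontier = list(seeds)
    while frontier and len(recruited) < target_size:
        person = frontier.pop(0)
        for referred in referrals.get(person, []):
            if referred not in recruited and len(recruited) < target_size:
                recruited.append(referred)
                frontier.append(referred)
    return recruited

# Hypothetical referral network: each participant names acquaintances
# who might also meet the study criteria.
referrals = {
    "P1": ["P3", "P4"],
    "P3": ["P5"],
    "P4": ["P6", "P7"],
}
sample = snowball_sample(["P1"], referrals, target_size=5)
# sample now holds "P1" plus the first four people reached by referral.
```

In practice, recruitment stops when saturation is reached rather than at a fixed size; the fixed `target_size` here is only a simplification.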

Data Collection and Analysis

Qualitative research uses several techniques, including interviews, focus groups, and observation. [1] [2] [3] Interviews may be unstructured, with open-ended questions on a topic where the interviewer adapts to the responses, or structured, with a predetermined set of questions that every participant is asked. Interviews are usually one-on-one and are appropriate for sensitive topics or topics needing in-depth exploration. Focus groups are often held with 8-12 target participants and are used when group dynamics and collective views on a topic are desired. Researchers can be participant-observers, sharing the experiences of the subjects, or non-participant, detached observers.

While quantitative research design prescribes a controlled environment for data collection, qualitative data collection may occur in a central location or in the participants' environment, depending on the study goals and design. Qualitative research can generate a large amount of data. Data are transcribed and may then be coded manually or using computer-assisted qualitative data analysis software (CAQDAS) such as ATLAS.ti or NVivo. [8] [9] [10]
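At its simplest, the coding step amounts to tagging transcript segments with labels and tallying them to surface candidate themes. A minimal Python sketch with hypothetical excerpts and code labels (real CAQDAS tools add far richer querying, memoing, and visualisation):

```python
from collections import Counter

# Hypothetical coded transcript segments: each segment of interview text
# is tagged with one or more analyst-assigned codes.
coded_segments = [
    ("...we just don't have the staff to run the new system...",
     ["workforce", "resourcing"]),
    ("...the dashboard tells me where every patient is at...",
     ["data_analytics"]),
    ("...training was a one-off session years ago...",
     ["workforce"]),
]

# Tally code frequencies across all segments, a first step toward
# grouping related codes into themes.
code_counts = Counter(code for _, codes in coded_segments for code in codes)
```

Frequency alone does not establish a theme; the analyst still returns to the tagged excerpts to interpret them in context.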

After the coding process, qualitative research results can take various formats: a synthesis and interpretation presented with excerpts from the data, [11] or themes and the development of a theory or model.

Dissemination

The healthcare team can use two reporting standards to standardize and facilitate the dissemination of qualitative research outcomes. The Consolidated Criteria for Reporting Qualitative Research or COREQ is a 32-item checklist for interviews and focus groups. [12] The Standards for Reporting Qualitative Research (SRQR) is a checklist covering a more comprehensive range of qualitative research. [13]

Applications

Many times, a research question will start with qualitative research. The qualitative research helps generate the research hypothesis, which can then be tested with quantitative methods. After the data are collected and analyzed with quantitative methods, qualitative methods can be used to dive deeper into the data to better understand what the numbers truly mean and their implications. The qualitative techniques can then help clarify the quantitative data and refine the hypothesis for future research. Furthermore, qualitative research lets researchers explore subjects that are poorly studied with quantitative methods, including opinions, individual actions, and social science questions.

An excellent qualitative study design starts with a goal or objective, which should be clearly defined or stated. The target population needs to be specified. A method for obtaining information from the study population must be carefully detailed to ensure no part of the target population is omitted. A proper collection method should be selected that will help obtain the desired information without overly limiting the collected data, because the information sought is often not well categorized or easily obtained. Finally, the design should ensure adequate methods for analyzing the data. An example may help clarify some of the various aspects of qualitative research.

A researcher wants to decrease the number of teenagers who smoke in their community. The researcher could begin by asking current teen smokers why they started smoking through structured or unstructured interviews (qualitative research). The researcher can also get together a group of current teenage smokers and conduct a focus group to help brainstorm factors that may have prevented them from starting to smoke (qualitative research).

In this example, the researcher has used qualitative research methods (interviews and focus groups) to generate a list of ideas about why teens start to smoke and the factors that may have prevented them from starting. Next, the researcher compiles this data. The researcher hypothetically finds that peer pressure, health issues, cost, being considered "cool," and rebellious behavior all might increase or decrease the likelihood of teens starting to smoke.

The researcher creates a survey asking teen participants to rank how important each of the above factors is in either starting smoking (for current smokers) or not smoking (for current nonsmokers). This survey provides specific numbers (ranked importance of each factor) and is thus a quantitative research tool.

The researcher can use the survey results to focus efforts on the one or two highest-ranked factors. Let us say the researcher found that health was the primary factor that keeps teens from starting to smoke, and peer pressure was the primary factor that contributed to teens starting smoking. The researcher can go back to qualitative research methods to dive deeper into these for more information. The researcher wants to focus on keeping teens from starting to smoke, so they focus on the peer pressure aspect.

The researcher can conduct interviews and focus groups (qualitative research) about what types and forms of peer pressure are commonly encountered, where the peer pressure comes from, and where smoking starts. The researcher hypothetically finds that peer pressure often occurs after school at the local teen hangouts, mostly in the local park. The researcher also hypothetically finds that peer pressure comes from older, current smokers who provide the cigarettes.

The researcher could explore this further through observation at the local teen hangouts (qualitative research), taking notes regarding who is smoking, who is not, and what observable factors are at play in the peer pressure to smoke. The researcher finds a local park where many local teenagers hang out and sees that the smokers tend to hang out in a shady, overgrown area of the park. The researcher notes that smoking teenagers buy their cigarettes from a local convenience store adjacent to the park, where the clerk does not check identification before selling cigarettes. These observations fall under qualitative research.

If the researcher returns to the park and counts how many individuals smoke in each region, this numerical data would be quantitative research. Based on the researcher's efforts thus far, they conclude that local teen smoking and teenagers who start to smoke may decrease if there are fewer overgrown areas of the park and the local convenience store does not sell cigarettes to underage individuals.

The researcher could try to have the parks department reassess the shady areas to make them less conducive to smokers or identify how to limit the sales of cigarettes to underage individuals by the convenience store. The researcher would then cycle back to qualitative methods of asking at-risk populations their perceptions of the changes and what factors are still at play, and quantitative research that includes teen smoking rates in the community and the incidence of new teen smokers, among others. [14] [15]

Qualitative research functions as a standalone research design or combined with quantitative research to enhance our understanding of the world. Qualitative research uses techniques including structured and unstructured interviews, focus groups, and participant observation not only to help generate hypotheses that can be more rigorously tested with quantitative research but also to help researchers delve deeper into the quantitative research numbers, understand what they mean, and understand what the implications are. Qualitative research allows researchers to understand what is going on, especially when things are not easily categorized. [16]

  • Issues of Concern

As discussed in the sections above, quantitative and qualitative work differ in many ways, including the evaluation criteria. There are four well-established criteria for evaluating quantitative data: internal validity, external validity, reliability, and objectivity. Credibility, transferability, dependability, and confirmability are the correlating concepts in qualitative research. [4] [11] The corresponding quantitative and qualitative concepts can be seen below, with the quantitative concept on the left and the qualitative concept on the right:

  • Internal validity: Credibility
  • External validity: Transferability
  • Reliability: Dependability
  • Objectivity: Confirmability

In conducting qualitative research, ensuring these concepts are satisfied and well thought out can prevent potential issues from arising. For example, just as a researcher will ensure that their quantitative study is internally valid, qualitative researchers should ensure that their work has credibility.

Indicators such as triangulation and peer examination can help evaluate the credibility of qualitative work.

  • Triangulation: Triangulation involves using multiple data collection methods to increase the likelihood of getting a reliable and accurate result. In our above magic example, the result would be more reliable if we interviewed the magician, backstage hand, and the person who "vanished." In qualitative research, triangulation can include telephone surveys, in-person surveys, focus groups, and interviews and surveying an adequate cross-section of the target demographic.
  • Peer examination: A peer can review results to ensure the data is consistent with the findings.

A "thick" or "rich" description can be used to evaluate the transferability of qualitative research, whereas an indicator such as an audit trail might help evaluate the dependability and confirmability.

  • Thick or rich description: This is a detailed and thorough account of the setting and quotes from participants in the research. [5] A thick description also includes a detailed explanation of how the study was conducted. Thick descriptions are detailed enough to allow readers to draw conclusions and interpret the data, which can help with transferability and replicability.
  • Audit trail: An audit trail provides a documented set of steps of how the participants were selected and the data was collected. The original information records should also be kept (eg, surveys, notes, recordings).

One issue of concern that qualitative researchers should consider is observation bias. Here are a few examples:

  • Hawthorne effect: This is the change in participant behavior that occurs when participants know they are being observed. Suppose a researcher wanted to identify factors that contribute to employee theft and told the employees they would be watched. In that case, one would suspect employee behavior would change once they know they are being observed.
  • Observer-expectancy effect: Some participants change their behavior or responses to produce what they perceive to be the researcher's desired result. This often happens unconsciously on the participant's part, so it is essential to eliminate or limit the transmission of the researcher's views.
  • Artificial scenario effect: Some qualitative research occurs in contrived scenarios with preset goals. In such situations, the information may not be accurate because of the artificial nature of the scenario. The preset goals may limit the qualitative information obtained.

  • Clinical Significance

Qualitative or quantitative research helps healthcare providers understand patients and the impact and challenges of the care they deliver. Qualitative research provides an opportunity to generate and refine hypotheses and delve deeper into the data generated by quantitative research. Qualitative research is not an island apart from quantitative research but an integral part of research methods to understand the world around us. [17]

  • Enhancing Healthcare Team Outcomes

Qualitative research is essential for all healthcare team members, as all are affected by its findings. Qualitative research may help develop a theory or a model for health research that can be further explored with quantitative methods. Much of qualitative data acquisition is completed by numerous team members, including social workers, scientists, and nurses. Within each area of the medical field, there is copious ongoing qualitative research, including physician-patient interactions, nursing-patient interactions, patient-environment interactions, healthcare team function, and patient information delivery.

Disclosure: Steven Tenny declares no relevant financial relationships with ineligible companies.

Disclosure: Janelle Brannan declares no relevant financial relationships with ineligible companies.

Disclosure: Grace Brannan declares no relevant financial relationships with ineligible companies.

This book is distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) ( http://creativecommons.org/licenses/by-nc-nd/4.0/ ), which permits others to distribute the work, provided that the article is not altered or used commercially. You are not required to obtain permission to distribute this article, provided that you credit the author and journal.

  • Cite this Page Tenny S, Brannan JM, Brannan GD. Qualitative Study. [Updated 2022 Sep 18]. In: StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2024 Jan-.

9 Data Collection Methods in Qualitative Research

Explore top methods for collecting qualitative data, from interviews to social media monitoring, to gain deeper customer insights for your strategy.

In the world of customer insights, having access to the right data is crucial. Numbers and metrics can provide valuable direction, but they often fail to capture the full picture of how your customers truly feel, what they need, or why they behave in certain ways.

That’s where qualitative research shines. Using multiple qualitative data collection methods is like casting a wider net for insights — the more varied your approach, the better your chances of capturing nuanced feedback that standard surveys might miss.

Whether it’s through in-depth interviews or mining customer chat logs, the diversity of data sources can help build a robust understanding of your customers’ experiences.

In this article, we’ll cover the top methods you can use to collect qualitative data to inform your customer experience strategy .

Table of contents

  • Qualitative vs quantitative methods
  • 9 essential qualitative data collection methods
    1. In-depth Interviews
    2. Focus Groups
    3. Observational Research
    4. Case Studies
    5. Surveys with Open-ended Questions
    6. Ethnographic Research
    7. Customer Support Center Chat History
    8. Social Media Conversation Monitoring
    9. Review Sites
  • Pitfalls to Avoid in Qualitative Data Collection
  • Analyzing qualitative data

When it comes to gathering customer insights, there are two main avenues: qualitative and quantitative research. Both are crucial, but they serve different purposes.

Quantitative methods rely on numerical data. Think of them as your go-to for answering “how many?” and “how much?” questions. It’s all about measurable facts, trends, and patterns. For example, you might run a large-scale survey asking customers to rate their satisfaction on a 1-10 scale, and you’ll get hard numbers to analyze. This kind of data is easy to visualize in graphs and charts, which helps you track customer satisfaction metrics like NPS or CSAT scores over time.

But qualitative methods? This is where you dig deeper. These methods focus on the “why” and “how,” uncovering insights into the emotions, motivations, and thought processes behind customer behaviors. Instead of numbers, qualitative research gives you rich, detailed feedback in the form of words, offering nuanced insight into individuals' or groups' experiences, perspectives, and behaviors. It’s an excellent way to get to the heart of customer experiences and understand their pain points on a human level.

Why Qualitative Research Is Critical for Customer Experience Strategy

Quantitative data can tell you what’s happening, but qualitative data tells you why it’s happening. If your quantitative research shows a drop in customer satisfaction scores, qualitative research can explain why. By diving into customer stories, open-ended survey responses, or even analyzing chat logs, you gain invaluable insights into where things might be going wrong (or right!).

Let’s dive into the most impactful methods you can use to gather valuable customer insights. Each of these methods offers a unique lens into the customer experience, helping you build a comprehensive understanding of your audience.

1. In-Depth Interviews

In-depth interviews are one-on-one conversations where the researcher asks open-ended questions, allowing the customer to share their thoughts and experiences in detail. These interviews are incredibly useful when you want to understand the “why” behind customer behavior or preferences, yielding rich, detailed insight into each participant's perspective.

Maximizing the method: To get the most out of in-depth interviews, focus on creating a comfortable environment where participants feel free to express their honest opinions. Listen actively, ask follow-up questions, and don’t shy away from allowing the conversation to go off-script if it leads to richer insights.

Example: Imagine you’re an insights manager at a retail brand conducting an in-depth interview with a frequent shopper. By asking about their shopping habits, you can uncover that the customer values sustainability and chooses brands with eco-friendly packaging. This insight could inform future product packaging decisions.

2. Focus Groups

A focus group is a facilitated discussion with a small group of customers – usually around 6-10 people. The goal is to encourage interaction between participants, sparking conversations that reveal insights through group dynamics. The collective experience of a focus group can surface opinions that may not emerge in individual interviews.

Maximizing the method: Ensure that the focus group facilitator is skilled at guiding discussions without leading them. It’s important to let the conversation flow naturally, but the facilitator should know when to probe deeper or refocus the group when necessary.

Example: Let’s say a tech company runs a focus group with power users of their app. During the session, one participant mentions a feature they find confusing, which prompts others to agree. This shared feedback provides the company with a clear signal to revisit that feature for usability improvements.

3. Observational Research

Observational research (sometimes called field research) involves observing customers in their natural environment, whether it’s a store, website, or another setting. Instead of asking questions, researchers watch how customers interact with products, services, or environments in real time, capturing behavior as it actually happens.

Maximizing the method: The key to observational research is to remain unobtrusive. Customers should behave naturally without being influenced by the researcher’s presence. It’s also crucial to take detailed notes on both the behaviors you expected and any surprising actions that arise.

Example: A coffee shop chain might use observational research to see how customers navigate their in-store experience. Do they head straight to the counter or linger at the menu? Are they confused about the ordering process? These observations could highlight ways to improve the store layout or ordering flow.

4. Case Studies

Case studies are in-depth analyses of individual customer experiences, often focusing on how a product or service has solved a specific problem for them. By following a single customer’s journey from problem to solution, case studies offer detailed narratives that illustrate the broader impact of your offerings.

Maximizing the method: Choose case study subjects that reflect common challenges or experiences within your customer base. The more relatable the story, the more likely other customers will see themselves in the narrative.

Example: A B2B SaaS company could create a case study around a client that successfully used their software to reduce employee churn. By detailing the challenges, implementation, and results, the case study could serve as a powerful testimonial for potential clients.

5. Surveys with Open-Ended Questions

While surveys are typically quantitative, surveys with open-ended questions add a qualitative element by allowing customers to write out their responses in their own words. This method bridges the gap between structured data and personal insights, making it easier to spot recurring themes or unique perspectives.

Maximizing the method: Be strategic with the placement of open-ended questions. Too many can overwhelm respondents, but including one or two at key points in your survey allows for deeper insights without causing survey fatigue.

Example: A travel company might send out a post-trip survey asking, “What was the most memorable part of your experience?” The open-ended responses could reveal customer preferences that the company wasn’t previously aware of, informing future offerings or services.

6. Ethnographic Research

Ethnographic research takes immersion to a new level. In this method, researchers embed themselves in the customer’s environment for extended periods to observe and experience their behaviors firsthand. It’s about gaining a deep understanding of customer culture, motivations, and interactions.

Maximizing the method: This method works best when researchers fully integrate into the customer’s world, whether that’s living among a target community or spending time on-site with customers in their daily routines. It’s a time-intensive process, but the insights can be incredibly rich.

Example: A researcher for a clothing brand might spend several weeks with a group of customers, observing how they shop for and wear clothes in their daily lives. This immersive research could uncover nuanced preferences about fabric types, fit, and style that surveys alone wouldn’t reveal.

7. Customer Support Center Chat History

Your customer support center chat history can be a treasure trove of qualitative data. By analyzing conversations between customers and support agents, you can identify recurring issues, concerns, and sentiments that might not surface in formal surveys or interviews. This method provides an authentic, real-time view of how customers feel as they interact with your brand to solve problems.

Maximizing the method: Use text analysis tools to sift through large volumes of chat data, identifying common themes and patterns. Pay special attention to moments of frustration or satisfaction, as these often hold the key to customer experience improvements.

Example: A software company analyzes its chat history and notices that many customers express confusion about a particular feature. This insight leads the product team to create clearer in-app tutorials, ultimately reducing the number of support requests related to that feature.
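The theme-spotting step described above can be prototyped with very little code before any specialized tooling is involved. Below is a minimal sketch in Python; the theme names, keyword lists, and chat messages are all invented for illustration, and a production pipeline would use a proper text analysis tool:

```python
from collections import Counter

# Hypothetical theme dictionary: each theme maps to keywords that signal it.
THEMES = {
    "feature_confusion": ["confusing", "confused", "unclear"],
    "pricing": ["price", "expensive", "billing"],
    "performance": ["slow", "lag", "crash"],
}

def tally_themes(chats):
    """Count how many chat messages touch on each theme (one hit per message)."""
    counts = Counter()
    for message in chats:
        text = message.lower()
        for theme, keywords in THEMES.items():
            if any(kw in text for kw in keywords):
                counts[theme] += 1
    return counts

chats = [
    "The export feature is really confusing to use.",
    "App keeps crashing on startup and feels slow today.",
    "I'm confused about how billing works.",
]
print(tally_themes(chats).most_common())
```

Counting one hit per message, rather than per keyword, keeps a single long rant from inflating a theme’s apparent frequency.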

8. Social Media Conversation Monitoring

Social media platforms are filled with candid, unsolicited customer feedback. Social media conversation monitoring involves tracking brand mentions, hashtags, and keywords to gauge customer sentiment and uncover insights about your audience. This method gives you access to a wide range of voices, including those who may never participate in formal research.

Maximizing the method: Leverage social listening tools to automate the monitoring and analysis of conversations across platforms like Instagram, Facebook, or X. Be sure to track both direct mentions of your brand and broader industry-related conversations that could reveal trends or shifting customer preferences.

Example: A beauty brand might notice that customers are frequently discussing a competitor’s eco-friendly packaging on social media. By monitoring this trend, the brand could introduce more sustainable packaging solutions to align with emerging customer values.

9. Review Sites

Review sites such as Yelp, Google Reviews, and Trustpilot are another goldmine for qualitative data. Customers who leave reviews are often highly motivated to share their experiences, whether positive or negative. By mining these reviews, you can gather insights into customer satisfaction, product issues, and potential areas for improvement.

Maximizing the method: Don’t just focus on star ratings—read through the text of each review to extract the underlying emotions and motivations. Look for patterns in the language used and the specific aspects of your product or service that are frequently mentioned.

Example: A restaurant chain may notice through online reviews that customers often comment on the long wait times during dinner hours. This feedback prompts management to reassess staffing levels during peak times, improving both operational efficiency and customer satisfaction.

As with any research process, there are a few key pitfalls to watch out for when collecting qualitative data. Avoiding these three common mistakes will ensure that your insights are both accurate and actionable.

1. Bias in Data Collection

Bias can creep into qualitative research in many forms, from how questions are phrased in interviews or surveys to how data is interpreted. For example, leading questions might push respondents toward a specific answer. Similarly, during observational research or focus groups, the presence or behavior of the researcher could unintentionally influence participants.

How to avoid it: Ensure your research methods are designed to be neutral and that questions are open-ended. It’s also important to train researchers to minimize their influence during interviews or observations. Using standardized protocols can help maintain consistency across different data collection methods.

2. Over-reliance on a Single Method

While one method may seem like the easiest or most convenient to implement, relying solely on one form of data collection can lead to incomplete or skewed insights. For example, in-depth interviews might provide detailed information, but they won’t capture broad patterns across your entire customer base.

How to avoid it: Combine multiple data collection methods, like surveys, focus groups, and social media monitoring, to get a fuller picture. Each method will reveal different aspects of customer experience, and when analyzed together, they provide more comprehensive insights.

3. Failing to Document the Research Process

One of the easiest ways to undermine the quality of your qualitative data is by failing to document the research process adequately. Without a clear record of how data was collected, analyzed, and interpreted, it becomes difficult to validate findings or replicate the study in the future.

How to avoid it: Keep detailed notes, records, and transcriptions of every stage of the research process. Having a clear audit trail ensures that your findings are credible and can be trusted by decision-makers.

With these qualitative data collection methods at your disposal, you’ll find yourself with a wealth of unstructured qualitative data. While an abundance of data is valuable, it also presents a significant challenge: how to make sense of it all efficiently.

This is where advanced tools and technology come into play.

The Challenge of Unstructured Data

Qualitative research methods produce, by their nature, unstructured data. Whether you’re working with transcripts from focus groups, feedback from review sites, or social media conversations, the data doesn’t neatly fit into rows and columns like quantitative data does. Instead, you’re dealing with text—rich, narrative-driven, and full of context. This makes it incredibly insightful but also hard to analyze manually.

Manually categorizing themes, identifying patterns, and summarizing key takeaways from large datasets is time-consuming and prone to human error. It’s easy to miss out on emerging trends or nuances that could offer strategic value, especially if you're dealing with diverse data sources.
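Even before reaching for dedicated tooling, a rough first pass at recurring themes can come from counting frequent word pairs across the raw text. A minimal illustrative sketch, with an invented stopword list and feedback snippets:

```python
import re
from collections import Counter

# Small invented stopword list; a real analysis would use a fuller one.
STOPWORDS = {"the", "a", "an", "is", "it", "to", "and", "of", "but", "was"}

def top_bigrams(texts, n=5):
    """Return the n most frequent adjacent word pairs across all texts."""
    counts = Counter()
    for text in texts:
        words = [w for w in re.findall(r"[a-z']+", text.lower())
                 if w not in STOPWORDS]
        counts.update(zip(words, words[1:]))
    return counts.most_common(n)

feedback = [
    "Wait times were far too long at dinner.",
    "Long wait times, but the food was great.",
    "Great food, shame about the wait times.",
]
print(top_bigrams(feedback, 3))
```

Frequent bigrams like “wait times” will not replace careful reading, but they can direct manual review toward the patterns worth coding first.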

How Kapiche’s AI-Powered Auto-Theming Can Help

Kapiche’s automatic theming feature is designed to solve this problem. By leveraging AI-powered technology, Kapiche cleans, categorizes, and analyzes your text data quickly and accurately. The platform automatically identifies themes, clusters related data points, and even provides summaries that help you interpret what your customers are saying.

For example, Kapiche can scan through customer support chat histories or social media mentions and instantly group similar pieces of feedback together—whether customers are talking about product performance, customer service, or price sensitivity. With these insights readily available, you can take faster action to improve your customer experience.

Benefits of Auto-Theming for Insights Managers

Here's how auto-theming can transform your qualitative data analysis:

Speed and Efficiency: Automating the process saves you countless hours of manual work.

Comprehensive Analysis: By aggregating data from multiple sources, you get a fuller picture of customer sentiment across various touchpoints.

Uncover Hidden Insights: The AI detects patterns that you might not notice through manual analysis, offering deeper insights into customer behavior.

Actionable Summaries: Instead of wading through raw text, Kapiche provides concise summaries of key themes and trends, enabling you to act on insights faster.

With tools like this at your disposal, the overwhelming task of analyzing qualitative data becomes manageable, empowering your insights team to make data-driven decisions more effectively.

Let Us Help You

Navigating the complexities of qualitative data collection and analysis can be challenging, but you don’t have to do it alone. At Kapiche, we’re committed to helping insights teams like yours make the most of your qualitative customer data.

Our AI-powered auto-theming capabilities simplify the process by automatically categorizing, analyzing, and summarizing your data. This means you can quickly uncover key insights and trends without getting bogged down by the sheer volume of unstructured information.

Ready to see how Kapiche can transform your research process? Click the link below to watch an on-demand demo and discover how our platform can enhance your customer insights strategy.

Book a Demo with Kapiche

You might also like

Thematic Analysis in Qualitative Research_ A Step-by-Step Guide

