Factors impacting critical thinking skill development during skills fair intervention
| Themes | Subthemes | Frequency of mentions |
| --- | --- | --- |
| Internal factors | | 33 |
| | Confidence and anxiety levels | 17 |
| | Attitude | 10 |
| | Age | 6 |
| External factors | | 62 |
| | Experience and practice | 21 |
| | Faculty involvement | 24 |
| | Positive learning environment | 11 |
| | Faculty prompts | 6 |
Skills fair intervention as a developmental strategy for critical thinking
| Themes | Subthemes | Frequency |
| --- | --- | --- |
| Develops alternative thinking | | 13 |
| | Application of knowledge and skills | 9 |
| | Noticing trends to prevent complications | 4 |
| Thinking before doing | | 10 |
| | Considering future outcomes | 5 |
| | Analyzing relevant data | 5 |
Evidence-based practice: What is EBP?
As nurses, we often hear the term evidence-based practice (EBP). But, what does it actually mean? EBP is a process used to review, analyze, and translate the latest scientific evidence. The goal is to quickly incorporate the best available research, along with clinical experience and patient preference, into clinical practice, so nurses can make informed patient-care decisions ( Dang et al., 2022 ). EBP is the cornerstone of clinical practice. Integrating EBP into your nursing practice improves quality of care and patient outcomes.
As a nurse, you will have plenty of opportunities to get involved in EBP. Take that “AHA” moment. Do you think there’s a better way to do something? Let’s turn to the evidence and find out!
When conducting an EBP project, it is important to use a model to help guide your work. In the Johns Hopkins Health System, we use the Johns Hopkins Evidence-Based Practice (JHEBP) model. It is a three-phase approach referred to as the PET process: practice question, evidence, and translation. In the first phase, the team develops a practice question by identifying the patient population, interventions, and outcomes (PICO). In the second phase, a literature search is performed, and the evidence is appraised for strength and quality. In the third phase, the findings are synthesized to develop recommendations for practice.
The JHEBP model is accompanied by user-friendly tools. The tools walk you through each phase of the project. Johns Hopkins nurses can access the tools via our Inquiry Toolkit . The tools are available to individuals from other institutions via the Institute for Johns Hopkins Nursing (IJHN) .
If you’re interested in learning more about the JHEBP model and tools, Johns Hopkins nurses have access to a free online course entitled JHH Nursing | Central | Evidence-Based Practice Series in MyLearning. The course follows the JHEBP process from beginning to end and provides guidance to the learner on how to use the JHEBP tools. The course is available to individuals from other institutions for a fee via the Institute for Johns Hopkins Nursing (IJHN) .
All EBP projects need to be submitted to the Center for Nursing Inquiry (CNI) for review. The CNI ensures that all nurse-led EBP projects are high quality and add value. We also offer expert guidance and support, if needed.
The Center for Nursing Inquiry can answer any questions you may have about the JHEBP tools. All 10 JHEBP tools can be found in our Inquiry Toolkit : project management guide, question development tool, stakeholder analysis tool, evidence level and quality guide, research evidence appraisal tool, non-research evidence appraisal tool, individual evidence summary tool, synthesis process and recommendations tool, action planning tool, and dissemination tool. The tools walk you through each phase of an EBP project.
The Welch Medical Library serves the information needs of the faculty, staff, and students of Johns Hopkins Medicine, Nursing and Public Health. Often, one of the toughest parts of conducting an EBP project is finding the evidence. The informationist assigned to your department can assist you with your literature search and citation management.
Your project is complete. Now what? It’s time to share your project with the scholarly community.
To prepare your EBP project for publication, use the JHEBP Dissemination Tool . The JHEBP Dissemination Tool (Appendix J) details what to include in each section of your manuscript, from the introduction to the discussion, and shows you which EBP appendices correspond to each part of a scientific paper. You can find the JHEBP Dissemination Tool in our Inquiry Toolkit .
You can also present your project at a local, regional, or national conference. Poster and podium presentation templates are available in our Inquiry Toolkit .
To learn more about sharing your project, check out our Abstract & Manuscript Writing webinar and our Poster & Podium Presentations webinar !
Do you have an idea for an EBP project?
A step-by-step approach to using evidence-based practice in your decision-making
People professionals are often involved in solving complex organisational problems and need to understand ‘what works’ in order to influence key organisational outcomes. The challenge is to pick reliable, trustworthy solutions and not be distracted by unreliable fads, outdated received wisdom or superficial quick fixes.
This challenge has led to evidence-based practice . The goal is to make better, more effective decisions to help organisations achieve their goals.
At the CIPD, we believe this is an important step for the people profession to take: our Profession Map describes a vision of a profession that is principles-led, evidence-based and outcomes-driven.
This guide sets out what evidence-based practice is, why it’s important, what evidence we should use and how the step-by-step approach works. It builds on previous CIPD publications 1 and the work of the Center for Evidence-Based Management ( CEBMa ), as well as our experience of applying an evidence-based approach to the people profession.
The basic idea of evidence-based practice is that high-quality decisions and effective practices are based on critically appraised evidence from multiple sources. When we say ‘evidence’, we mean information, facts or data supporting (or contradicting) a claim, assumption or hypothesis. This evidence may come from scientific research, the local organisation, experienced professionals or relevant stakeholders. We use the following definition from CEBMa :
“Evidence-based practice is about making decisions through the conscientious, explicit and judicious use of the best available evidence from multiple sources… to increase the likelihood of a favourable outcome.”
This technical definition is worth unpacking.
Conscientious means that you make a real effort to gather and use evidence from multiple sources – not just professional opinion. Good decisions will draw on evidence from other sources as well: the scientific literature, the organisation itself, and the judgement of experienced professionals.
Explicit means you take a systematic, step-by-step approach that is transparent and reproducible – describing in detail how you acquired the evidence and how you evaluated its quality. In addition, in order to prevent cherry-picking, you make explicit the criteria you used to select the evidence.
Judicious means critically appraised. Evidence-based practice is not about using all the evidence you can find, but focuses only on the most reliable and trustworthy evidence.
Increased likelihood means that taking an evidence-based approach does not guarantee a certain outcome. Evidence-based people professionals typically make decisions not based on conclusive, solid evidence, but on probabilities, indications and tentative conclusions. As such, an evidence-based approach does not tell you what to decide, but it does help you to make a better-informed decision.
The importance of evidence-based practice and the problems it sets out to solve are explained in more detail in our factsheet and thought leadership article but, in essence, it has three main benefits:
Before making an important decision or introducing a new practice, an evidence-based people professional should start by asking: "What is the available evidence?" As a minimum, people professionals should consider four sources of evidence.
The expertise and professional judgement of practitioners – colleagues, managers, staff members and leaders – is vital for determining whether a people management issue requires attention, whether the data from the organisation are reliable, whether research findings are applicable, and whether a proposed solution or practice is likely to work in the organisational context.
In past decades, a large number of scientific studies have been published on topics relevant to people professionals: the characteristics of effective teams, the drivers of knowledge worker performance, the recruitment and selection of personnel, the effect of feedback on employee performance, the antecedents of absenteeism, and the predictors of staff turnover. Empirical studies published in peer-reviewed journals are especially relevant, as they provide the strongest evidence on cause-and-effect relationships – and thus on what works in practice.
This can be financial data or performance indicators (for example, number of sales, costs, return on investment, market share), but it can also come from customers (for example, customer satisfaction, brand recognition), or employees (for example, task performance, job satisfaction). It can be ‘hard’ numbers such as staff turnover rates, medical errors or productivity levels, but it can also include ‘soft’ elements such as perceptions of the organisation’s risk climate or attitudes towards senior management. This source of evidence typically helps leaders identify the existence or scale of a need or problem, possible causes and potential solutions.
Stakeholders are people (individuals or groups inside or outside the organisation) whose interests affect or are affected by a decision and its outcomes. For example, internal stakeholders such as employees can be affected by the decision to reduce the number of staff, or external stakeholders such as suppliers may be affected by the decision to apply higher-quality standards. However, stakeholders can also influence the outcome of a decision; for example, employees can go on strike, regulators can block a merger, and the general public can stop buying a company’s products. For this reason, this evidence is often an important guide to what needs or issues an organisation investigates, what improvements it considers and whether any trade-offs or unintended consequences of proposed interventions are acceptable.
Finally, none of the four sources are enough on their own – every source of evidence has its limitations and its weaknesses. It may not always be practical to draw on all four sources of evidence or to cover each of them thoroughly, but the more we can do, the better decisions will be.
Since the 1990s, evidence-based practice has become an established standard in many professions. The principles and practices were first developed in the field of medicine and following this, have been applied in a range of professions – including architecture, agriculture, crime and justice, education, international development, nutrition and social welfare, as well as management. Despite differing contexts, the approach broadly remains the same.
Below, each step is discussed and illustrated with examples. It is important to note that following all six steps will not always be feasible. However, the more that professionals can do – especially when making major or strategic decisions – the better.
Asking questions to clarify the problem and potential solution and to check whether there is evidence in support of that problem and solution is an essential first step. Without these questions the search for evidence will be haphazard, the appraisal of the evidence arbitrary, and its applicability uncertain.
Asking critical questions should be constructive and informative. It is not about tearing apart or dismissing other people's ideas and suggestions. By the same token, evidence-based practice is not an exercise in myth busting, but rather seeking to establish whether claims are likely to be true and potential solutions are likely to be effective.
Example 1 shows the type of questions you can ask.
Consider a typical starting point: a senior manager asks you to develop and implement autonomous teams in the organisation. Rather than jumping into action and implementing the proposed solution, an evidence-based approach first asks questions to clarify the (assumed) problem:
The next step would be to ask more specific questions on whether there is sufficient evidence confirming the existence and seriousness of the problem. The senior manager explains that the organisation has a serious problem with absenteeism, and that a lack of autonomy – employees’ discretion and independence to schedule their work and determine how it is to be done – is assumed to be its major cause. Important questions to ask are:
Based on the answers, you should be able to conclude whether there is sufficient evidence to support the senior manager’s claim that the organisation has a problem with absenteeism, and that this problem is most likely caused by a lack of autonomy. If one or more questions can’t be answered, this may be an indication that more evidence is needed. The next step would be to ask the executive manager critical questions about the proposed solution:
The final step would be to ask questions to check whether there is sufficient evidence from multiple sources indicating the proposed solution will indeed solve the problem:
Based on the answers to the questions in Step 1 , we should have a good understanding of whether there is sufficient evidence from multiple sources to support the assumed problem and preferred solution. In most cases, however, the available evidence is too limited or important sources are missing. In that case, we proceed with the second step of evidence-based practice: acquiring evidence.
This could be through:
Practitioner expertise is a useful starting point in evidence-based practice to understand the assumed problem and preferred solutions. It is also helpful in interpreting other sources of evidence – for example, in assessing whether insights from scientific literature are relevant to the current context.
This could be through the following types of publication:
This could be through the following sources:
Organisational decisions often have lots of stakeholders both inside and outside the organisation. A stakeholder map is therefore a useful tool to identify which stakeholders are the most relevant. A stakeholder’s relevance is determined by two variables:
When the most important stakeholders are identified, often qualitative methods such as focus groups and in-depth interviews are used to discuss their concerns.
Unfortunately, evidence is never perfect and can be misleading in many different ways. Sometimes the evidence is so weak that it is hardly convincing at all, while at other times the evidence is so strong that no one doubts its correctness. After we have acquired the evidence we therefore need to critically appraise what evidence is ‘best’ – that is, the most trustworthy.
When appraising the evidence from practitioners, the first step is to determine whether their insights and opinions are based on relevant experience or personal opinions. Next, we need to determine how valid and reliable that experience is. We can assess this by considering:
For example, based on these three criteria, it can be determined that the expertise of a sales agent is more likely to be trustworthy than the expertise of a business consultant specialised in mergers. In general, sales agents work within a relatively steady and predictable work environment, they give their sales pitch several times a week, and they receive frequent, direct and objective feedback. Consultants, however, are involved in a merger only a few times a year (often less), so there are not many opportunities to learn from experience. In addition, the outcome of a merger is often hard to determine – what is regarded as a success by one person may be seen as a failure by another. Finally, consultants accompanying a merger do not typically operate in a regular and predictable environment: contextual factors such as organisational differences, power struggles and economic developments often affect the outcome.
Professional expertise that is not based on valid and reliable experience is especially prone to bias, but any evidence from practitioners is likely to reflect personal views and be open to bias (see Building an evidence-based people profession ). For this reason, we should always ask ourselves how a seemingly experienced professional’s judgement could be biased and always look at it alongside other evidence.
Appraising scientific evidence requires a certain amount of research understanding. To critically appraise findings from scientific research, we need to understand a study’s ‘design’ (the methods and procedures used to collect and analyse data). Examples of common study designs are cross-sectional studies (surveys), experiments (such as randomised controlled trials, otherwise known as ‘RCTs’), qualitative case studies, and meta-analyses. The first step is to determine whether a study’s design is the best way to answer the research question. This is referred to as ‘methodological appropriateness’.
Different types of research questions occur in the domain of people management. Very often we are concerned with ‘cause-and-effect’ or ‘impact’ questions, for example:
The most appropriate study designs to answer cause-and-effect questions are RCTs and controlled before-after studies, along with systematic reviews and meta-analyses that gather together these types of study.
Other research questions that are relevant to people professionals are questions about prevalence (for example, “How common is burnout among nurses in hospitals?”), attitudes (for example, “How do employees feel about working in autonomous teams?”), prediction (for example, “What are drivers/predictors of absenteeism?”) or differences (for example, “Is there a difference in task performance between virtual teams and traditional teams?”).
Each of these questions would ideally have a specific study design to guarantee a valid and reliable (non-biased) answer.
An overview of common study designs and what they involve can be found in the Appendix . Following this (also in the Appendix) is an overview of types of questions and the appropriateness of each design. This gives a useful guide for both designing and appraising studies. For example, if you want information on how prevalent a problem is, a survey will work best; a qualitative study will give the greatest insight into people’s experiences of, or feelings about, the problem; and an RCT or before-after study will give the best information on whether a solution to the problem has the desired impact.
Of course, which design a study uses to answer a research question is not the only important aspect to consider. The quality of the study design – how well it was conducted – is equally important. For example, key considerations in quantitative studies include how participants are selected and whether measures are reliable and valid. 2 For systematic reviews and meta-analyses, a key question is how included studies are selected.
Finally, if an impact study seems to be trustworthy, we want to understand what the impact is of the intervention or factor being studied. Statistical measures of ‘effect sizes’ give us this information, both for an intervention or factors of influence itself, and how it compares to others. Being able to compare effect sizes is very important for practice. For example, the critical question is not simply, “Does a new management practice have a small or large effect on performance?” but rather, “Is this the best approach or are other practices more impactful?” For more information on effect sizes, see Effects sizes and interpreting research findings in the Appendix .
When critically appraising evidence from the organisation, the first thing to determine is whether the data are accurate. Nowadays, many organisations have advanced management information systems that present metrics and KPIs in the form of graphs, charts and appealing visualisations, giving the data a sense of objectivity. However, the data in such systems are often collected by people, which is in fact a social and political endeavour. An important appraisal question therefore is: “Were the data collected, processed and reported in a reliable way?”
In addition to the accuracy of data, several other factors can affect its trustworthiness, such as measurement error, missing contextual information and the absence of a logic model. Some organisations use advanced data-analytic techniques that involve big data, artificial intelligence or machine learning. Big data and AI technology often raise serious social, ethical and political concerns as these techniques are based on complex mathematical algorithms that can have hidden biases and, as a result, may introduce gender or racial biases into the decision-making process.
Unlike the scientific literature and organisational data, which serve to give objectifiable and trustworthy insights, stakeholder evidence concerns subjective feelings and perceptions that can’t be considered as facts. Nonetheless, we can make sure that stakeholder evidence comes from a representative sample, so that it is an accurate reflection of all relevant stakeholders.
Evidence-based practitioners should present stakeholders with a clear view of the other sources of evidence. That is, they should summarise what the body of published research and organisational data tell us, as viewed with the benefit of professional knowledge. This can serve as the basis for a well-informed and meaningful two-way exchange. For example, the scientific literature may point to a certain solution being most effective, but stakeholders may advise on other important aspects that should be weighed up against this evidence – for example, whether the intervention is difficult to implement, ethically questionable or too expensive.
The purpose of critical appraisal is to determine which evidence is the best available – that is, the most trustworthy. Sometimes, the quality of the evidence available is less than ideal; for example, there may not be any randomised controlled trials on your intervention of interest. But this does not leave us empty-handed. We can look at other studies that are less trustworthy but still go some way to showing cause-and-effect. Indeed, it’s possible that the best available evidence on an important question is the professional experience of a single colleague. However, even this limited evidence can still lead to a better decision than not using it, as long as we are aware of and open about its limitations. A useful maxim is that “the perfect is the enemy of the good”: if you don’t have the ideal evidence, you can still be evidence-based in how you make decisions.
After we have acquired and critically appraised the different types of evidence, how should we bring them all together? The broad process of knitting together evidence from multiple sources is more craft than science. It should be based on the question you wish to answer and the resources available.
A potential approach is illustrated in Figure 3 below. The steps illustrated here are as follows:
In most cases, the answer as to whether to implement a new practice is not a simple yes or no. Questions to consider include the following:
The final part of this step concerns how, and in what form, the evidence should be applied. This includes the following possible approaches:
The final step of evidence-based practice is assessing the outcome of the decision taken: did the decision (or the implementation of the new practice) deliver the desired results?
Unfortunately, organisations seldom evaluate the outcome of decisions, projects or new practices. Nevertheless, assessing the outcome of our decisions is something we can and always should do. Before we assess the outcome, however, we first need to determine:
After all, if we don’t know with certainty if a practice was implemented as planned, we don’t know whether a lack of impact is due to poor implementation or the practice itself.
When we assess the outcome of a decision, we are asking whether the decision had an effect on a particular outcome. As discussed in Step 3 , for a reliable answer to a cause-and-effect question, we need a control group (preferably randomised), a baseline measurement, and a post-measurement – also referred to as a randomised controlled trial. However, often there is no control group available, which leaves us no other option than to assess the impact of the decision or practice by comparing the baseline with the outcome. This type of assessment is referred to as a before-after measurement.
When we do not (or cannot) obtain a baseline, it is harder to reliably assess the outcome of a new practice or intervention. This is often the case in large-scale interventions or change projects that have multiple objectives. But even in those cases, assessing the outcome retrospectively is still beneficial. For example, a meta-analysis found that retrospective evaluations can increase performance by 25%.
People professionals may be thinking: “How do I follow the six steps if I’m not a qualified researcher?” But evidence-based practice is not about trying to turn practitioners into researchers. Rather, it’s about bringing together complementary evidence from different sources, including research and practice. Some aspects of evidence-based practice are technical, so people professionals may find it useful to work with academics or other research specialists.
For example, it’s unlikely all people professionals in an organisation will need to be highly trained in statistics, but an HR team may benefit from bringing in one or two data specialists, or hiring them for ad hoc projects. Similarly, conducting evidence reviews and running trials requires well-developed research skills – practitioners could either develop these capabilities in-house or bring them in from external researchers. Academics are also usually keen to publish research, so it may not be necessary for practitioners to do much additional work to support this. Events like the CIPD’s annual Applied Research Conference can be a good way for people professionals to develop networks with academic researchers.
A good aim for practitioners themselves is to become a ‘savvy consumer’ of research, understanding enough that one can ask probing questions; for example, about the strength of evidence and the size of impacts. This is underpinned by skills in critical thinking, in particular being clear about what questions are really of interest and what evidence will do the best job of answering those questions.
To develop your knowledge and capability, visit the CIPD’s online course on evidence-based practice or, for more advanced skills, look at CEBMa’s online course .
While it is fair to say that evidence-based practice and HR is still in its infancy compared to some other professions, people professionals can begin with these practical steps to pump prime their decision-making:
Our thought leadership article gives further insight into how the people profession can become more evidence-based.
Evidence-based practice is about using the best available evidence from multiple sources to optimise decisions. Being evidence-based is not a question of looking for ‘proof’, as this is far too elusive. However, we can – and should – prioritise the most trustworthy evidence available. The gains in making better decisions on the ground, strengthening the body of knowledge and becoming a more influential profession are surely worthwhile.
To realise the vision of a people profession that’s genuinely evidence-based, we need to move forward on two fronts:
In applying evidence-based thinking in practice, there are certain tenets to hold onto. For substantial decisions, people professionals should always consider drawing on four sources of evidence: professional expertise, scientific literature, organisational data, and stakeholder views and concerns. It can be tempting to rely on professional judgement, received wisdom and ‘best practice’ examples, and bow to senior stakeholder views. But injecting evidence from the other sources will greatly reduce the chance of bias and maximise your chances of effective solutions.
Published management research is a valuable source of evidence for practitioners that seems to be the most neglected. When drawing on the scientific literature, the two principles of critical appraisal (‘not all evidence is equal’) and looking at the broad body of research on a topic (‘one study is not enough’) stand us in excellent stead. This has clear implications for how we look for, prioritise and assess evidence. A systematic approach to reviewing published evidence goes a long way to reducing bias and giving confidence that we’ve captured the most important research insight.
Becoming a profession worthy of the label ‘evidence-based’ is a long road. We need to chip away over time to see real progress. HR, learning and development, and organisational development are newer to evidence-based practice than other professions, but we can take inspiration from them, for whom it has also been a long road, and be ambitious.
This appendix explains some important technical aspects of appraising scientific research, which is inevitably the trickiest aspect of evidence-based practice for non-researchers. As we note in this guide, most people professionals won’t need to become researchers themselves, but a sensible aim is to become ‘savvy consumers’ of research.
To support this, below we explain four aspects of appraising scientific research:
We hope that this assists you in developing enough understanding to be able to ask probing questions and apply research insights.
In HR, people management and related fields, we are often concerned with questions about ‘what works’ or what’s effective in practice. To answer these questions, we need to get as close as possible to establishing cause-and-effect relationships.
Many will have heard the phrase ‘correlation is not causality’ or ‘correlation does not imply causation’. It means that a statistical association between two measures or observed events is not enough to show that one characteristic or action leads to (or affects, or increases the chances of) a particular outcome. One reason is that statistical relationships can be spurious, meaning two things appear to be directly related, but are not.
For example, there is a statistically solid correlation between the amount of ice-cream consumed and the number of people who drown on a given day. But it does not follow that eating ice-cream makes you more likely to drown. The better explanation is that you’re more likely to both eat ice-cream and go swimming (raising your chances of drowning) on sunny days.
So what evidence is enough to show causality? Three key criteria are needed: 3 the cause and the effect must co-vary; the cause must precede the effect in time; and plausible alternative explanations for the relationship must be ruled out.
Different study designs do better or worse jobs at explaining causal relationships.
Experimental and longitudinal designs – randomised controlled trials and prospective longitudinal studies, for example – go much further to show cause-and-effect or prediction than cross-sectional surveys, which only observe variables at one point in time. In survey analysis, statistical relationships could be spurious, or the direction of causality could even be the opposite of what you might suppose. For example, a simple correlation between ‘employee engagement’ and performance could exist because engagement contributes to performance, or because being rated as high-performing makes people feel better.
Other single study designs include controlled before-after studies (also called 'non-randomized controlled trials' or 'controlled longitudinal studies'), controlled studies with post-test only, and case studies. Case studies often use qualitative methods, such as interviews, focus groups, documentary analysis, narrative analysis, and ethnography or participant observation. Qualitative research is often exploratory, in that it is used to gain an understanding of underlying reasons or opinions and generate new theories. These can then be tested as hypotheses in appropriate quantitative studies.
Systematic reviews and meta-analyses are central to evidence-based practice. Their strength is that they look across the body of research, allowing us to understand the best available evidence on a topic overall. In contrast, even well-conducted single studies can give different results on the same topic, due to differences in context or the research approaches used.
Characteristics of these are as follows:
More information on research designs can be found in CEBMa resources .
When conducting an evidence review, we need to determine which research evidence is ‘best’ (that is, most trustworthy) for the question in hand, so we can prioritise it in our recommendations. At the same time, we assess the quality of research evidence to establish how certain we can be of our recommendations: well-established topics often have a strong body of research, but the evidence on new or emerging topics is often far less than ideal.
This involves appraising the study designs or research methods used. For questions about intervention effectiveness or cause-and-effect, we use tables such as that below to inform a rating of evidence quality. Based on established scientific standards, we can also estimate the trustworthiness of the study. Hypothetically, if you were deciding whether to use a particular intervention based on evidence that was only 50% trustworthy, you would have the same 50/50 chance of success as tossing a coin, so the evidence would be useless. On the other hand, using evidence that was 100% trustworthy would give you certain success. Of course, in reality nothing is 100% certain, but highly trustworthy research can conclusively demonstrate that, in a given context, an intervention has a positive or negative impact on the outcomes that were measured.
Table 1: Methodological appropriateness of effect studies and impact evaluations
| Study design | Evidence quality | Trustworthiness |
| --- | --- | --- |
| Systematic review or meta-analysis of randomized controlled studies | AA: Very high | 95% |
| Systematic review or meta-analysis of non-randomized controlled and/or before-after studies | A: High | 90% |
| Randomized controlled study | A: High | 90% |
| Systematic review or meta-analysis of controlled studies without a pre-test or uncontrolled studies with a pre-test | B: Moderate | 80% |
| Non-randomized controlled before-after study | B: Moderate | 80% |
| Interrupted time series | B: Moderate | 80% |
| Systematic review or meta-analysis of cross-sectional studies | C: Limited | 70% |
| Controlled study without a pre-test or uncontrolled study with a pre-test | C: Limited | 70% |
| Cross-sectional survey | D: Low | 60% |
| Case studies, case reports, traditional literature reviews, theoretical papers | E: Very low | 55% |
Notes: Trustworthiness takes into consideration not only which study design was used but also how well it was applied. Table reproduced from CEBMa (2017), based on the classification system of Shadish, Cook and Campbell (2002) 4 and Petticrew and Roberts (2006) 5 .
There are two important points to note about using such hierarchies of evidence. First, as we discuss in this guide, evidence-based practice involves prioritising the best available evidence. A good mantra here is ‘the perfect is the enemy of the good’: if studies with very robust (highly methodologically appropriate) designs are not available on your topic of interest, look at others. For example, if systematic reviews or randomized controlled studies are not available on your question, you will do well to look at other types of studies, such as those with quasi-experimental designs.
Second, although many questions for managers and people and HR relate to effectiveness or causality, this is by no means always the case. Broadly, types of research questions include the following:
Table 2: Types of research question
| Type of question | Example questions |
| --- | --- |
| Effect | Does A have an effect/impact on B? What are the critical success factors for A? What are the factors that affect B? |
| Prediction | Does A precede B? Does A predict B over time? |
| Association | Is A related to B? Does A often occur with B? Do A and B co-vary? |
| Difference | Is there a difference between A and B? |
| Frequency/prevalence | How often does A occur? |
| Attitude/opinion | What is people's attitude toward A? Are people satisfied with A? How many people prefer A over B? Do people agree with A? |
| Experience/needs | What are people's experiences, feelings or perceptions regarding A? What do people need to do/use A? |
| Explanation/exploration | Why does A occur? How does A impact/affect B? Why is A different from B? |
Different methods are suited to different types of questions. For example, a cross-sectional survey is a highly appropriate or trustworthy design for questions about association, difference, prevalence, frequency and attitudes. And qualitative research is highly appropriate for questions about experience, perceptions, feelings, needs and exploration and theory building. For more discussion of this, see Petticrew and Roberts (2003).
Even if practitioners wanting to be evidence-based can search for and find relevant research, they are left with another challenge: how to interpret it. Unfortunately, academic research in human resource management is often highly technical, written in inaccessible language and not closely linked to practice. A recent analysis of 324 peer-reviewed articles found that half dedicated less than 2% of their text to practical implications, and where implications were discussed, they were often obscure and implicit.
Even if published research does include good discussion of practical implications, it’s helpful and perhaps necessary for practitioners wishing to draw on them to understand the findings. This can be tricky, as they contain fairly technical statistical information.
There’s an obvious need to simplify the technical findings of quantitative studies. The typical way to do this is to focus on statistical significance, or p-values. Reading through a research paper, this may seem intuitive, as the level of significance is flagged with asterisks: typically, * means statistically significant and ** or *** means highly significant. However, there is a lot of confusion about what the p-value actually is – even quantitative scientists struggle to translate it into something meaningful and easy to understand – and a growing number of scientists argue that it should be abandoned. What’s more, statistical significance does nothing to help a practitioner who wants to know whether a technique or approach is likely to have a meaningful impact – that is, it does not answer the most important practical question: how much difference does an intervention make?
The good news is that effect sizes do give this information. The information is still technical and can still be hard to understand, as studies often use different statistics for effect sizes. Fortunately, however, we can translate effect sizes into every-day language. A useful tool is 'Cohen’s Rule of Thumb', which matches different statistical measures to small/medium/large categories. 6
According to Cohen’s widely cited rule of thumb for the standardised mean difference (d), a value of around 0.2 indicates a small effect, around 0.5 a medium effect, and around 0.8 a large effect.
The rule of thumb has since been extended to account for very small, very large and huge results. 7
Effect sizes need to be contextualised. For example, a small effect is of huge importance if the outcome is the number of fatalities, or indeed, sales revenue. Compared to this, if the outcome is work motivation (which is likely to affect sales revenue but is certainly not the same thing) even a large effect will be less important. This shows the limits of scientific studies and brings us back to evidence from practitioners and stakeholders, who are well placed to say what outcomes are most important.
1 Gifford, J. (2016) In search of the best available evidence: Positioning paper. London: Chartered Institute of Personnel and Development.
2 For a discussion of reliability and validity in performance measures, see People performance: an evidence review.
3,4 Shadish, W. R. Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Belmont, CA, Wadsworth Cengage Learning.
5 Petticrew, M., & Roberts, H. (2006). How to appraise the studies: an introduction to assessing study quality. In: Systematic reviews in the social sciences: A practical guide, pp125-63. Oxford: Blackwell.
6 For a table showing different measures of effect sizes according to Cohen’s Rule of Thumb, see CEBMa Guideline for Rapid Evidence Assessments in Management and Organizations, p 20.
7 Sawilowsky, S. S. (2009) New Effect Size Rules of Thumb. Journal of Modern Applied Statistical Methods. Vol 8(2), pp 597–599.
CIPD evidence reviews are available on a range of HR and L&D topics.
Barends, E. and Rousseau, D. M. (2018) Evidence-based management: how to use evidence to make better organizational decisions . London: Kogan Page.
Barends, E., Rousseau, D. and Briner, R. B. (2014) Evidence-Based Management: the basic principles . Amsterdam, Center for Evidence-Based Management.
Guadalupe, M. (2020) Turn the Office Into a Lab . INSEAD Economics & Finance – Blog.
Petticrew, M. and Roberts, H. (2003) Evidence, hierarchies, and typologies: horses for courses . Journal Of Epidemiology And Community Health. Vol 57(7): 527.
Pfeffer, J. and Sutton, R. I. (2006) Hard facts, dangerous half-truths, and total nonsense: profiting from evidence-based management . Boston, Mass., Harvard Business School Press.
Pindek, S., Kessler, S. R. and Spector, P. E. (2017) A quantitative and qualitative review of what meta-analyses have contributed to our understanding of human resource management . Human Resource Management Review. Vol 27(1), pp26–38.
Rousseau, D. M. (2006) Is there such a thing as "evidence-based management"? Academy of Management Review. Vol 31(2), pp256–269.
Rousseau, D. M. (2020) Making Evidence-Based Organizational Decisions in an Uncertain World . Organizational Dynamics. Vol 49(1): 100756.
This report was written by Jonny Gifford and Jake Young of the CIPD and Eric Barends of the Center for Evidence-Based Management (CEBMa).
Please cite this guide as: Gifford, J., Barends, E. and Young, J. (2023) Evidence-based HR: Make better decisions and step up your influence . Guide. London: Chartered Institute of Personnel and Development.
Background:.
This study is to summarize the status of knowledge, attitudes, implementation, facilitators, and barriers of evidence-based practice (EBP) in community nurses (CNs). EBP has been widely adopted but the knowledge, attitudes, and implementation of EBP among CNs, and the facilitators and barriers they perceived have not been clearly confirmed.
A literature search was conducted using combined keywords in 3 English databases and 3 Chinese databases of peer-reviewed publications covering the dates of publication from 1996 to July, 2018. Twenty articles were included. The information of the knowledge, attitudes, implementation, and the perceived facilitators and barriers of EBP in CNs was extracted and summarized.
CNs had positive attitudes toward EBP, but insufficient knowledge and unprepared implementation. The most cited facilitators were academic training, management functions, and younger age. Inadequate time and resources were recognized as main barriers hindering the transforming from knowledge and attitudes to implementation. Developed interventions mainly focused on knowledge facilitation rather than the elimination of objective barriers.
Conclusions:
The findings demonstrate a compelling need to improve CNs' EBP knowledge and implementation, which lag behind their comparatively positive attitudes. Beyond education, translating knowledge into implementation requires coordination with authorities to amplify the facilitators and overcome the barriers. Further studies should concentrate on the deficits in CNs' EBP knowledge and implementation. Policy makers can use the facilitators and barriers identified in this review to reform nursing education, supplement current scientific resources, and support practice improvement.
Nurses can provide personal care and treatment, work with families and communities, and play a central part in public health and in controlling disease and infection; these roles have been recognized by the World Health Organization. [1] Community nurses (CNs) combine the skills of nursing, public health, and some aspects of social assistance, and they function as part of broader public health programs. [2] CNs provide healthcare services that contribute to disease and injury prevention, disability alleviation, and health promotion. [1,2] CNs generally work more independently in varied and dynamic community settings, where no medical diagnosis or treatment is provided by physicians for either patients or families. They therefore have to think critically, analyze complex situations, perform health assessments, and make decisions. [3] However, CNs do not always base their decisions on up-to-date, high-quality evidence, relying instead on experience. [4,5]
WHO has suggested that health improvement in communities depends on nursing services underpinned by evidence-based practice (EBP). [2] EBP refers to using the best available evidence for decision-making and providing efficient and effective care for patients on a scientific basis. [6] Systematic implementation of EBP can enhance healthcare safety and improve patient outcomes. [7,8] Although EBP is as important to CNs as it is to clinical nurses, EBP in community nursing is still at an early stage. [5]
Researchers have reviewed the importance of nursing leadership in EBP, [9] the state of readiness for EBP, [10] barriers and facilitators in guidelines, [11] and strategies for EBP implementation, [12] but all of these studies were designed for hospital nurses rather than CNs. One study [7] summarized the practical content of EBP in community nursing without analyzing the level of CNs' EBP. Another [13] reviewed the attitudes, knowledge, and perceptions of CNs regarding EBP, but was limited to European community settings.
In this review, the knowledge, attitudes, and implementation of EBP among CNs were analyzed globally, together with the facilitators and barriers to CNs' implementation of EBP. The review aims to answer two questions: what is the status of EBP knowledge, attitudes, and implementation among CNs worldwide, and which facilitators and barriers influence CNs' implementation of EBP?
2.1. Literature search
Literature was retrieved through the institutional access of the authors' university. A literature search was conducted using combined keywords in 3 English databases (PubMed/MEDLINE, Mag Online Library, Science Direct) and 3 Chinese databases (Chinese Journal Full-Text Database, Wanfang Database, VIP Database for Chinese Technical Periodicals) of peer-reviewed publications, covering dates of publication from January 1996 (the earliest year in which EBP in primary care was introduced in detail [14]) to July 2018. The following keywords were used: [((primary care nursing) OR (community health nursing) OR (public health nursing)) OR ((primary care) AND (nurse)) OR ((community health) AND (nurse)) OR ((public health) AND (nurse))] AND [(evidence based) OR (evidence-based practice) OR (evidence-based nursing)] AND [(knowledge) OR (skill) OR (attitude) OR (belief) OR (facilitators) OR (barriers)]. The field was limited to "title/abstract" and the publication type to "journal article." Reference tracking was carried out to identify additional potentially relevant references. The bibliographic citation manager NoteExpress (Version V3.2.0, Aegean Corporation, Beijing, China) was used to manage the retrieved studies. No published or in-progress systematic review on this topic was found in the Cochrane Library or the Joanna Briggs Institute Library before this review.
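A search strategy of this shape can also be expressed programmatically. The sketch below is purely illustrative (the review does not describe any scripted search): it shows how a simplified adaptation of the published Boolean string could be submitted to PubMed/MEDLINE via Biopython's Entrez E-utilities wrapper. The email address is a placeholder, and the field tags are assumptions.

```python
# Illustrative only: a scripted version of a PubMed/MEDLINE search,
# loosely adapted from the review's reported keyword string.
# Requires Biopython (pip install biopython).
from Bio import Entrez

Entrez.email = "your.name@example.org"  # placeholder; NCBI requires a contact email

# Simplified Boolean query restricted to title/abstract and journal articles.
query = (
    '("primary care nursing"[Title/Abstract] OR "community health nursing"[Title/Abstract] '
    'OR "public health nursing"[Title/Abstract] OR (("primary care"[Title/Abstract] '
    'OR "community health"[Title/Abstract] OR "public health"[Title/Abstract]) '
    'AND nurse[Title/Abstract])) '
    'AND ("evidence based"[Title/Abstract] OR "evidence-based practice"[Title/Abstract] '
    'OR "evidence-based nursing"[Title/Abstract]) '
    'AND (knowledge[Title/Abstract] OR skill[Title/Abstract] OR attitude[Title/Abstract] '
    'OR belief[Title/Abstract] OR facilitators[Title/Abstract] OR barriers[Title/Abstract]) '
    'AND "journal article"[Publication Type]'
)

# Date window matching the review: January 1996 to July 2018.
handle = Entrez.esearch(
    db="pubmed",
    term=query,
    mindate="1996/01/01",
    maxdate="2018/07/31",
    datetype="pdat",
    retmax=100,
)
record = Entrez.read(handle)
handle.close()

print(record["Count"])   # total number of matching records
print(record["IdList"])  # first 100 PubMed IDs
```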
Two researchers independently screened the titles and abstracts to identify studies of CNs, based on the authors' descriptions and the WHO definition. [2] Inclusion criteria: reports on CNs' EBP knowledge/skills, attitudes/beliefs, or implementation; reports on CNs' perceived barriers to or facilitators of EBP; original scientific studies; written in English or Chinese. Exclusion criteria: reports on EBP theory/frameworks and narrative statements of the writer's personal opinion; studies without a clearly defined population or a sub-analysis of CNs, or with a mixed population (hospital and care-organization nurses, other health professionals); reports on private nursing homes and rural hospitals; systematic reviews; non-research literature (e.g., conference notices).
Two researchers extracted the data, including author, year, study design, sampling, outcome measures, and main results. The following aspects of CNs' EBP were extracted: knowledge, attitudes, implementation, facilitators, and barriers. Each study covered at least one of these themes (Table 1).
Summary of the included studies.
Ethical approval was not necessary because no human subjects or patient information were collected or studied.
A systematic integrated literature review was conducted following the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P). [46] A total of 2873 articles were obtained by database searching, and 4 additional articles were identified by reference tracking (Fig. 1). After screening titles, abstracts, and full texts, 19 English articles and 1 Chinese article were included in this study.
Flowchart of the systematic search and review process.
As shown in Table 1, the articles included in this review were published between 2004 and 2018 and were conducted in 8 developed countries and 1 developing country. The study designs were mainly cross-sectional surveys (n = 11), and sample sizes ranged from 19 to 719. Four studies (20%) used probability sampling. Thirteen previously reported questionnaires [16,18–20,25,26,31,34,35,38,40,44] and 6 self-developed questionnaires were used to explore the status of CNs' EBP. Of the studies that used questionnaires (n = 16), 12 reported the reliability or validity of the instruments. Only 2 studies had a response rate under 50%, so the results are likely to reflect the actual situation. Face-to-face interviews were used in 5 studies, allowing participants' perceptions to be captured directly. All studies addressed at least 1 of the 5 themes: knowledge (n = 8, 40%), attitude (n = 12, 60%), implementation (n = 14, 70%), facilitators (n = 8, 40%), and barriers (n = 7, 35%).
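For readers who want to verify the theme-coverage arithmetic, the following minimal Python sketch (not part of the review itself) reproduces the percentages just quoted; the counts are taken directly from the text, with n = 20 included studies.

```python
# Counts taken directly from the review's narrative (n = 20 included studies).
theme_counts = {
    "knowledge": 8,
    "attitude": 12,
    "implementation": 14,
    "facilitator": 8,
    "barrier": 7,
}

TOTAL_STUDIES = 20
for theme, n in theme_counts.items():
    # Prints, e.g., "knowledge: n = 8 (40%)" -- matching the reported figures.
    print(f"{theme}: n = {n} ({n / TOTAL_STUDIES:.0%})")
```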
No study was excluded at the quality assessment stage, because even lower-quality research may offer valuable insights. [47] Potential researcher bias should be noted: no cross-sectional study described standardized training for data collectors in the use of the measurement tools, and the handling of confounding factors in the study design or data analysis was not reported. No author of a qualitative study discussed the researcher's influence on the study or specified a clear philosophical perspective. The quasi-experimental studies did not report whether participants were concurrently involved in other similar studies, a potential confounder. The randomization and allocation concealment of the randomized controlled trial (RCT) were insufficient, and its blinding needed improvement. [48] The mixed-methods studies applied their methods comprehensively, but descriptions of the interview context and the control of confounding variables were deficient. [49]
3.4.1. CNs' knowledge of EBP
CNs' EBP knowledge was generally unsatisfactory. A total of 93% (n = 139) of CNs knew little or nothing about EBP; they made decisions based on individual experience and consultation. [4,47] Pericas-Beltran et al [23] suggested that a possible reason is that most CNs lack EBP training or education, although some search for information via Google, e-journals, clinical enquiry websites, institutional websites, and databases. CNs had difficulty thinking critically, identifying research articles and journals, and evaluating research quality. [41,29,15] Gerrish and Cooke [29] found that only 26% of participants were competent in finding research evidence and 29% in using it. Although a UK study [39] reported that 64% of CNs were able to find research evidence and review it critically, all of its participants were nursing leaders. In addition, participants with higher professional titles have been shown to have greater EBP competency. [50]
In contrast to their lack of knowledge, CNs consistently expressed positive attitudes and beliefs about EBP. They agreed that primary care needed to keep up with the current scientific base and best evidence. [23,27,28] This positive attitude may be associated with evidence-based intervention experience, years of nursing experience, [37] a leadership role (carrying more authority and influence), [15] EBP experience, [24] job satisfaction and group cohesion, organizational culture, and readiness for system-wide integration of EBP. [33] However, positive attitudes toward EBP did not translate into implementation. [51] Rutledge and Skelton [15] found that after a 1-year training program focused on EBP facilitation skills, almost all participants had more confidence in EBP, yet they had not implemented it in their daily work. Other researchers [51] likewise reported that despite familiarity with EBP, nurses seldom participated in it.
CNs' positive attitudes toward EBP do not guarantee its implementation. [10] One study [39] found that 97% of respondents agreed or strongly agreed that EBP promotes patient care, while only 5% had acted on this by learning EBP skills and 2% by seeking and applying evidence-based summaries. Pereira et al [45] found that CNs scored highly on the belief that EBP improves care (4.02) but relatively low on implementing it sufficiently (2.71). Similarly poor implementation was reported in Canada, [35] where 7% of respondents formulated questions and performed knowledge searches, 20% used databases, and 22% used research in practice. Some CNs did read nursing journals, which gave them access to relevant information in their fields; nonetheless, this did not lead to EBP implementation. [4]
In a pilot study in the United States, [43] an intervention program was delivered to 24 CNs by EBP mentors. It included EBP teaching content; an EBP toolkit; environmental prompts; mentors who encouraged participants to use EBP and provided support; and EBP training (e.g., how to build searchable questions, find evidence, conduct systematic reviews and meta-analyses, appraise evidence, and use EBP in clinical decision making). The intervention comprised 2 periods integrating learning and practice: a 16-week educational phase followed by a 12-week project phase. Data were collected at baseline (Time 1), after the 16-week educational phase (Time 2), after the mentored project phase (Time 3), and after completion of the intervention (Time 4, i.e., 9 months after Time 3). The results suggested that the training might be a promising strategy for short-term enhancement of EBP implementation, but the long-term effect was undetermined. Another study [28] evaluated a project comprising EBP training handbooks and online courses designed to help CNs deliver evidence-based sexual healthcare: 86% of participants found the training handbooks useful, and all agreed that the online courses supported EBP implementation. However, nearly one-third reported not implementing the EBP skills they had learned.
Fourteen facilitators and 21 barriers to CNs' EBP were identified and grouped into 3 themes: characteristics of the evidence (e.g., the presentation, quantity, and quality of studies); characteristics of the environment, that is, facilitators and barriers perceived in the work setting; and characteristics of the nurses, that is, nurses' values, skills, and awareness of EBP (Table 2).
Facilitators and barriers of CNs’ EBP.
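To make the three-theme coding scheme concrete, the sketch below models it as a small data structure. This is an illustration only: the theme names follow the text above, but the example items are paraphrased from the narrative, and the full inventory of 14 facilitators and 21 barriers appears in the original Table 2.

```python
# Hypothetical coding helper for the review's three-theme scheme.
from collections import defaultdict

THEMES = ("evidence", "environment", "nurses")

# theme -> {"facilitators": [...], "barriers": [...]}
coded = defaultdict(lambda: {"facilitators": [], "barriers": []})

def code_item(theme: str, kind: str, item: str) -> None:
    """File one facilitator or barrier under its characteristic theme."""
    if theme not in THEMES or kind not in ("facilitators", "barriers"):
        raise ValueError(f"unknown theme or kind: {theme!r}, {kind!r}")
    coded[theme][kind].append(item)

# Example items paraphrased from the review's narrative.
code_item("nurses", "facilitators", "academic EBP training")
code_item("nurses", "facilitators", "leadership role with more authority")
code_item("environment", "barriers", "inadequate time (heavy workload)")
code_item("environment", "barriers", "limited access to databases and libraries")
code_item("evidence", "barriers", "presentation and quality of studies")

for theme in THEMES:
    print(theme, coded[theme])
```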
This review found that CNs were interested in EBP and believed that applying it improved the quality of care. [23] However, their lack of knowledge and skills, together with the barriers they faced, limited EBP implementation. [52] Numerous teaching approaches, such as small-group exercises, article review and critique, case studies, literature searching, and scenario-simulation training, have shown promise in improving EBP knowledge and beliefs. [53] EBP as a scientific approach is easy to accept but difficult to acquire and apply. [54] This review found that researchers tended to focus on cultivating EBP knowledge and interest rather than on implementation, yet ensuring implementation is the ultimate aim.
Understanding and identifying the facilitators of and barriers to EBP may be the cornerstone of successful knowledge transfer. [55] This review found that most facilitators related to individual knowledge and beliefs. Administrators tend to carry out more EBP because they command more authority and coordination than general nurses. [56] Younger age and shorter working experience were also recognized as promoting factors, which may reflect more recent academic training: [22] these CNs received modern nursing curricula that include EBP, and they were inclined to apply it with the consciousness and skills thus cultivated. [17] Effective implementation is also associated with an organizational culture that values EBP, an "evidence-based culture" perceived and created by community healthcare providers and managers. [41,33,22] CNs working in an evidence-based group with stronger EBP beliefs and culture are more likely to support new findings and scientific behaviors. [57]
Barriers to EBP among CNs clustered mostly at the environmental level, with time and resources cited most often. [58] When the workload is too heavy, nurses are less likely to search for and apply evidence. [55,58] Nurses with access to more resources, such as electronic databases, libraries, and professional guidelines, tend to rely more on scientific evidence. [59,30] Four articles cited inadequate knowledge as a barrier and indicated that current academic education programs did not adequately prepare nurses for EBP implementation. [23,15,28,59,60] Fewer barriers concerned the evidence itself, and these can be overcome by providing more evidence resources, peer support, and literature-screening skills. Ultimately, all the cited barriers can be ascribed to 3 factors: inadequate time and resources, inadequate knowledge and training, and inadequate encouragement and assistance from organizations. Although researchers recognized these barriers, workable and comprehensive approaches to overcoming them are lacking. The measures reported in the studies were generally educational programs, which are effective for improving knowledge, but more attention should be paid to EBP implementation. Barriers must be addressed not only through education and training but also through the comprehensive elimination of objective obstacles. Policy support and institutional protection are not a choice but a necessity. More investment in resources, the nursing workforce, nursing guidance, and EBP approaches is needed for successful implementation, so that CNs can obtain adequate research time and resources, a better environment for practising EBP, and additional support from colleagues and management.
One limitation is that, although the electronic databases were searched systematically, some relevant literature may have been missed, as in any review. [61] To mitigate this, 2 researchers searched for articles independently, and all eligible articles were retained to the greatest extent possible. Language and publication bias remain possible despite the worldwide scope of the review.
The implications of this review concern the need for relevant EBP training. Nursing policy reform must provide a systematic EBP curriculum for nursing students and manageable continuing education for nurses. Removing barriers must then become a priority for authorities in order to close the gap between CNs' EBP knowledge and implementation. It is hoped that more policies will be introduced to support EBP among CNs, such as protected research time, resource provision for community health institutions, and responsible EBP coordination.
The findings suggest that most CNs' EBP is unsatisfactory. Although CNs express positive attitudes and believe in the value of EBP for improving nursing practice and patient outcomes, they lack matching knowledge and skills, such as finding appropriate evidence. The application of EBP is weaker still; several interventions do improve knowledge, but how to ensure that the abilities CNs acquire are used in professional practice remains to be explored.
The facilitators of CNs' EBP mostly belong to the "nurses" theme and relate to improving EBP ability and values at both the individual and organizational levels. Barriers mostly belong to the environmental theme and can be attributed to the following factors: lack of time and resources, lack of knowledge and training, and inadequate encouragement and assistance. Organizations must ensure that the required resources and support are available to CNs. Strong experimental designs are required to assess the long-term effects of EBP training strategies accurately, and more research should be devoted to removing objective barriers to EBP implementation.
Conceptualization: Shu Li, Meijuan Cao, Xuejiao Zhu.
Data curation: Shu Li, Meijuan Cao.
Formal analysis: Shu Li, Meijuan Cao.
Project administration: Xuejiao Zhu.
Writing – original draft: Shu Li, Meijuan Cao, Xuejiao Zhu.
Abbreviations: CNs = community nurses, EBP = evidence-based practice.
How to cite this article: Li S, Cao M, Zhu X. Evidence-based practice. Medicine. 2019;98:39(e17209).
This article was funded by the 2018 Zhejiang Medical and Health Science and Technology Plan Project (Zhejiang Health and Family Planning Commission, File No.: 2017/66) and is part of the Construction of Community Chronic Disease Self-Management Scheme Integrating Patient-Volunteer and Medical Staff (Ref. 2018KY146).
The authors have no conflicts of interest to disclose.
Nursing is often referred to as both an art and a science. Evidence-based practitioners must combine understanding the science of health, illness, and disease with the art of adapting care to individual patients and situations, all while thinking critically to improve patient outcomes (see Figure 1).
Applying the best evidence to our clinical decision making involves examining, critiquing, and synthesizing the available research evidence. However, we must consider the science along with our clinical experience and patients’ values, beliefs, and preferences. In this article we’ll discuss how to incorporate patient preferences and clinical judgment into evidence-based decision making.
Good clinical judgment integrates our accumulated wealth of knowledge from patient care experiences as well as our educational background. We learn quickly as healthcare professionals that one size does not fit all. What works for one patient with fatigue may not work for another. What we can do is draw from our clinical expertise and past experiences to inform our decisions going forward. Our clinical expertise, combined with the best available scientific evidence, allows us to provide patients with the options they need. Patients can’t have a preference if they aren’t given a choice, and they can’t make that choice if they aren’t presented with all options.
When we talk about including patient preferences in treatment and care decisions, what does this mean? Patient preferences can include religious or spiritual values, social and cultural values, thoughts about what constitutes quality of life, personal priorities, and beliefs about health. Even though healthcare providers know that they should seek patient input into care decisions, this does not always happen, owing to barriers such as time constraints, literacy, prior knowledge, and gender, racial, and sociocultural influences.
However, we can learn tools and techniques to help elicit patient preferences. First and foremost, we must listen to our patients. Developing our interpersonal skills is important in enabling us to have a conversation with patients rather than just deliver information. We should also involve family members if patients so desire. The ASK (AskShareKnow) Patient–Clinician Communication Model is a tool that teaches patients and families three questions to ask their healthcare providers to get the information they need to make healthcare decisions.
In testing, the ASK questions were introduced to patients before they met with their healthcare providers. Patients asked their provider one or more of the questions during the visit, and they could still recall the questions weeks later.
Using the best available scientific evidence by itself is not enough to care for our patients in an evidence-based environment. We must also incorporate our clinical expertise and our patients' preferences and values, pairing the art with the science, if we are to see patient outcomes improve.
New York Institute of Technology College of Osteopathic Medicine (NYITCOM) is committed to training osteopathic physicians for a lifetime of learning and practice, based upon the integration of evidence-based knowledge, critical thinking, and the tenets of osteopathic principles and practice.
We are also committed to preparing osteopathic physicians for careers in healthcare, including in inner-city and rural communities, as well as to the scholarly pursuit of new knowledge concerning health and disease. We provide a continuum of educational experiences to NYITCOM students, extending through the clinical and post-graduate years of training. This continuum gives the future osteopathic physician the foundation necessary to maintain competence and compassion, as well as the ability to better serve society through research, teaching, and leadership.
To advance patient-centered, population-based osteopathic healthcare through transformative education and illuminating research.
Our graduates will be able to:
The College of Osteopathic Medicine traces its roots to W. Kenneth Riland, D.O., and a group of visionary osteopathic physicians practicing in the State of New York. Dr. Riland was the personal physician to former President Nixon, Vice President Nelson Rockefeller, and Secretary of State Henry Kissinger. Dr. Riland and his colleagues saw the establishment of the medical school as a way to promote and strengthen the credibility of osteopathic medicine and leveraged the support of Rockefeller and other political leaders to establish the New York College of Osteopathic Medicine in 1977. The college changed its name to the NYIT College of Osteopathic Medicine in 2012.
The AOA Commission on Osteopathic College Accreditation (COCA) serves the public by establishing, maintaining, and applying accreditation standards and procedures to ensure that academic quality and continuous quality improvement delivered by the colleges of osteopathic medicine reflect the evolving practice of osteopathic medicine. The scope of the COCA encompasses the accreditation of the colleges of osteopathic medicine.
NYITCOM most recently received a seven-year accreditation from the COCA. Any student who has a complaint related to the COCA accreditation standards and procedures should file the complaint, confidentially, with the following:
The American Osteopathic Association Department of Accreditation 142 East Ontario Chicago, IL 60611-2864 312.202.8124 [email protected]