Evidence Based Practice In Health Care
Evidence-based practice in health care is a process of finding evidence of the efficacy of different treatment options and determining their relevance to a particular client's situation (Liamputtong, 2010, p. 270). It is decision making or practice based on evidence, which consists of research evidence, clinical expertise, patient preferences and goals, the appropriate circumstances in which to implement the action, and population needs, priorities and resources (Wood & Haber, 2006).
Before evidence-based practice, the ill person was seen as having a spiritual failing or being possessed by a demon; prehistoric man looked upon illness as a spiritual event. Research done before the twentieth century was largely anecdotal, consisting of descriptions of patients or pathological findings, and practitioners relied on well-experienced senior colleagues as their main source of information (Taylor, Kermode, & Roberts, 2006).
Evidence-based practice is important to health care because it is an approach to decision making in which the practitioner uses reliable evidence that affects the care of individual patients. That information is considered carefully against all relevant and valid research in order to make the plan best suited to each patient. Evidence-based practice in nursing care rests on solid evidence that is up to date and well researched. It provides the best care to the patient and family, which gradually leads to improved patient outcomes and greater patient and family satisfaction with care. To support clinical decisions, evidence from research is used to evaluate the efficiency of interventions and their outcomes. Evidence-based practice has also increased accountability in nursing research (Hammell & Carpenter, 2004).
Fundamentally, evidence-based practice in health care refers to the process of finding empirical evidence regarding the effectiveness and efficacy of various treatment options and then determining the relevance of those options to specific clients (Liamputtong, 2010, p. 270).
Quantitative research is a valid tool and can assist evidence-based practice. Quantitative research is the "…science of numbers" (Landorf, 2010) and uses data to investigate relationships. It can help to explain why things happen with minimal bias because of its heavy reliance on numbers and facts. However, there are issues with quantitative research. It will not always give a clinician a clear answer: it can show relationships, but it will not always explain why those relationships exist or why they do not. Quantitative research can also suffer from bias, so it is essential to investigate and analyse all the data presented in any study.
Cox et al. (2008) conducted quantitative research in the form of a randomised controlled trial performed at the Primary Care Organisation (PCO) level, addressing falls, one of the primary causes of accidental death and fragility fractures in older adults. The trial assessed whether specialist osteoporosis nurses delivering training to care home staff could reduce fractures and improve the prescription of fracture-reducing treatments, compared with usual care.
“The randomized controlled trial is one of the simplest but most powerful tools of research. In essence, the randomized controlled trial is a study in which people are allocated at random to receive one of several clinical interventions.” (Norman, Stolberg, & Trop, 2006, p. 1539). There are different forms of randomisation (Landorf, 2010). This research can be considered blocked randomisation, since participants are allocated in equal-sized blocks. Blocked randomisation is valuable to the quantitative researcher because it enables a comparison between equal numbers of participants. The use of large, equally sized groups is an advantage of quantitative research: it can assess the effectiveness of a practice on larger groups of people, making the assessment more reliable.
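As an illustration, blocked randomisation can be sketched in a few lines of code. This is a hypothetical example, not the trial's actual allocation program; the block size and group labels are assumptions chosen to mirror the two study arms:

```python
import random

def blocked_randomisation(n_participants, block_size=2, groups=("IP", "usual care")):
    """Allocate participants in equal-sized blocks so group sizes stay balanced."""
    allocation = []
    per_group = block_size // len(groups)
    while len(allocation) < n_participants:
        # Each block contains every group an equal number of times, then is shuffled.
        block = list(groups) * per_group
        random.shuffle(block)
        allocation.extend(block)
    return allocation[:n_participants]

# 58 participants, as in the trial, split evenly between the two arms
arms = blocked_randomisation(58, block_size=2)
print(arms.count("IP"), arms.count("usual care"))  # → 29 29
```

Because every completed block contains each group equally often, the final group sizes can never drift far apart, which is exactly the balance the trial's 29/29 split reflects.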
Interventions can be divided into three categories: single, multiple and multi-factorial. In the research article, a multi-factorial intervention was used, where different participants received different combinations of interventions based on an individual assessment (Gillespie et al., 2009). The interventions were described very clearly and were designed to be easily accessed by all participants. They were also based on strategies deemed appropriate and cost-effective at the primary care level. They included different methods such as verbal and written training, risk factors for falls and fractures, methods of risk assessment, and the prevention of fractures in the workplace. This is a strength of quantitative research: the data can be clearly assessed to see whether one of these interventions, or a combination of them, would decrease the likelihood of falls. This is an excellent example of how quantitative research can inform evidence-based practice.
A further strength of quantitative research is the clear conclusions it can draw. The trial gave an answer, and that answer informed clinicians about the use of these interventions in reducing falls in older people: the interventions were ineffective. However, there was no explanation of why they were ineffective. Hypotheses were presented, such as participants becoming "more aware" of falls, but there was no definitive answer.
A computer program and a biostatistician were used to randomly allocate patients to either the control group, referred to as usual care, or the Intervention Protocol (IP). This is another strength of quantitative research techniques: when a computer separates the groups, allocation bias is removed.
The research article reports 242 excluded patients, noting that there was not enough time to gain ethical approval and research governance clearance for them. The reasons why numerous people refused to participate in the research are not mentioned. This is a weakness of quantitative research: the article does not state this clearly, and it is not mentioned in the conclusion. It is shown only in the flow diagram, so it could easily be missed. The emotions, feelings, insights, motives, intents, views and opinions of the subjects are not taken into account.
Additionally, of the 58 actual participants, 29 were allocated to the Intervention Protocol (IP), while the remaining 29 formed the control group. The method used to identify participants is never discussed in the research article. This, once again, is a weakness of quantitative research.
However, at the six-month stage the clinicians involved in the trial knew that the interventions had been unsuccessful, because they could not be blinded to the results. It can therefore be questioned how effective the treatment received by the second group was: it would arguably be difficult to stay motivated when the clinicians already knew the trial had been unsuccessful. This in turn could bias the second group of usual-care patients in the study, which demonstrates another issue with quantitative research techniques.
According to Sackett et al. (1996, p. 71), evidence-based practice is "the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients." The above analysis demonstrates that quantitative research has many strengths and can assist clinicians in determining the best practice to obtain the best outcomes for their patients. However, a well-informed researcher and clinician will always be aware of the bias that can exist when presented with information from a trial using quantitative research techniques.
References
Cox, H., Puffer, S., Morton, V., Cooper, C., Hodson, J., Masud, T., … Torgerson, D. (2008). Educating nursing home staff on fracture prevention: A cluster randomised trial. Age & Ageing, 37(2), 167-172.
Gillespie, L. D., Gillespie, W. J., Robertson, M. C., et al. (2009). Interventions for preventing falls in older people living in the community (Review), (4), 1-25. The Cochrane Collaboration: John Wiley & Sons, Ltd. Retrieved from http://www.thecochranelibrary.com
Hammell, K., & Carpenter, C. (2004). Qualitative research in evidence-based rehabilitation (pp. 1-89). London: Elsevier.
Landorf, K. (2010). Clinical trials: The good, the bad and the ugly. In P. Liamputtong (Ed.), Research methods in health: Foundations for evidence-based practice (pp. 252-266). South Melbourne: Oxford University Press.
Liamputtong, P. (2010). Research methods in health: Foundations for evidence-based practice. Australia and New Zealand: Oxford University Press.
Norman, G., Stolberg, H., & Trop, I. (2006). Fundamentals of clinical research for radiologists: Randomized controlled trials. AJR, 1539-1544. Retrieved from
Sackett, D. L., Rosenburg, W. M., Gray Muir, J. A., Haynes, R. B., & Richardson, W. S. (1996). Evidence based medicine: What it is and what it isn't. British Medical Journal, 312, 71-72.
Taylor, B., Kermode, S., & Roberts, K. (2006). Research in nursing and health care: Evidence based practice. Australia: Thomson.
Wood, G. L., & Haber, J. (2006). Nursing research: Methods and critical appraisal for evidence-based practice (6th ed., pp. 288-295). Missouri: Mosby Elsevier.