
More about evidence-based practices

Last week we spoke about Evidence-Based Practices (EBP) and how their use has helped create more effective interventions. However, we also mentioned that EBP are difficult to implement: part of the problem is that they can be costly and can go against what most people in the field are used to doing in their practice. This time, I want to explain why these interventions are often costly and difficult to move into real-world practice, not only because they may go against what the field is used to doing, but also for some other practical reasons.
EBP are usually tested under very rigorous conditions. The most stringent criteria for calling something an Evidence-Based Practice require the use of a randomized controlled trial. That means participating individuals may be assigned to one of two (or more) groups: one that receives the treatment, and one that does not. The justification for doing something like this is that we want to be able to demonstrate that the change we see after the treatment is due to the treatment and not to some other reason (for example, the mere passing of time or, in cases where developmental changes make sense, developmental reasons). Even under those conditions, there may be confounding variables that affect the final outcome.
One of the biggest problems facing many treatments is that individuals often show improvement just because they are told (or believe) that they are receiving some wonder therapy (or drug). This is so prevalent in clinical trials that people speak of the “placebo effect”. Therefore, one way to control for it is to include a comparison condition that is itself a placebo (when testing medications, people speak of “sugar pills”) or the “normal treatment” (sometimes labeled “business as usual”), where those who are not in the treatment being tested receive the treatment they would have received had the new treatment not been under testing. The placebo effect is very powerful, and many of the therapies advertised on late-night TV may work because of it (quiz: how many times have you seen a comparison group in those ads? Or a comparison against a placebo control?).
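The logic of a placebo comparison can be sketched with a short simulation. All the numbers below (a 5-point placebo improvement, a 3-point true treatment effect) are made up for illustration and do not come from any real trial:

```python
import random
import statistics

random.seed(0)

N = 500  # hypothetical participants per group

# Symptom improvement scores (higher = better). Everyone improves somewhat
# just from expecting help (the placebo effect); the real treatment adds
# an extra effect on top of that.
placebo_effect = 5.0
treatment_effect = 3.0

placebo_group = [random.gauss(placebo_effect, 2.0) for _ in range(N)]
treatment_group = [random.gauss(placebo_effect + treatment_effect, 2.0)
                   for _ in range(N)]

mean_placebo = statistics.mean(placebo_group)
mean_treatment = statistics.mean(treatment_group)

# Comparing the treatment group against zero would overstate the benefit
# (about 8 points); comparing against the placebo group isolates the
# treatment's own contribution (about 3 points).
print(f"Placebo group improved by:        {mean_placebo:.1f}")
print(f"Treatment group improved by:      {mean_treatment:.1f}")
print(f"Effect attributable to treatment: {mean_treatment - mean_placebo:.1f}")
```

Without the placebo arm, the full 8-point improvement would be credited to the treatment; with it, only the difference between the two groups counts as evidence.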
There are multiple ways to try to prove that a specific intervention works, but as explained, most people agree that the best approach is what is known as the “gold standard”: random assignment to the different clinical conditions. The reason random assignment is considered the gold standard is that, for the most part, it balances out the many variables that could affect the outcomes in unexpected ways: things like age, gender, ethnicity, length of time with the illness, types of treatment received in the past, and so forth. How does random assignment control for all of that? Because every individual, with any potential combination of these variables, has the same chance of being assigned to any of the treatments in the study. Therefore, individuals with many if not all of the combinations that could affect the final outcomes are expected to end up in each of the groups, and the effects of those variables cancel out.
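You can see this balancing act in a minimal simulation. The participants below are entirely synthetic (random ages and years with the illness), but the point carries: after random assignment, neither group ends up systematically older or sicker than the other, even though nobody matched them on purpose:

```python
import random
import statistics

random.seed(42)

# 1,000 hypothetical participants, each with an age and years-with-illness.
participants = [
    {"age": random.randint(18, 65), "years_ill": random.randint(0, 20)}
    for _ in range(1000)
]

# Random assignment: shuffle, then split in half.
random.shuffle(participants)
half = len(participants) // 2
group_a = participants[:half]
group_b = participants[half:]

mean_age_a = statistics.mean(p["age"] for p in group_a)
mean_age_b = statistics.mean(p["age"] for p in group_b)

# With enough participants, the group averages come out nearly equal,
# so age is "balanced" across treatments without anyone planning for it.
print(f"Mean age, group A: {mean_age_a:.1f}")
print(f"Mean age, group B: {mean_age_b:.1f}")
```

The same near-equality holds for any other variable you could measure (or fail to measure), which is exactly why randomization is trusted to neutralize confounds that no one thought to list.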
All this dancing is so that scientists and the public in general can make informed decisions about the effectiveness of a treatment (i.e., are my outcomes better when I use treatment “A” as opposed to treatment “B”?), and so the results can be generalized to a larger group of people than those included in the study. After all, if you were not included in the study, what good does it do you to know that a program may work, if you cannot be sure it will work for people like you?
Doing this work takes time and money. The people testing the treatment need to conduct multiple studies so they can gain some assurance that the results are sound and can withstand repeated tests under different conditions. The studies also need to be closely monitored so researchers can be alerted if something is not going well. If the new treatment under scrutiny shows the potential to be harmful, they may want to stop the study early. On the other hand, if the results are going very well, perhaps it is time to stop the study with confidence that the new treatment works as expected (though when human lives are at stake, you don’t want to take chances).
There are multiple institutions that have created databases where evidence for or against Evidence-Based Practices can be found. The Substance Abuse and Mental Health Services Administration (SAMHSA) maintains a website with links to several organizations where such information can be found.
Creating and documenting the effectiveness of a specific intervention is not enough. In a country as diverse as the U.S., there are many instances where an intervention proven to work for one group of people (say, African Americans) may not necessarily work for another ethnic group (e.g., Latinos). The reasons can be associated with genetic makeup as well as with ethnic background (customs and traditions, for example, can be a big driver of, or deterrent to, some interventions). Therefore, interventions proven to work in one ethnic group (or in a research setting) sometimes need to be tested under different conditions (e.g., with a different ethnic group or in a community-based environment). This is no easy task, which once more affects how quickly an intervention can be used outside the testing grounds.
This is a very active area of research, known as validity. People speak about internal and external validity, and if you ever took a research methods class in college, you may recognize many of these ideas, or even the terms. One book that describes the rationale, with many specific examples, is Shadish, Cook and Campbell. Be warned, however, that this book can be hard to read without some introduction to research methods.
One final note: Evidence-Based Practices sit at the top of the pyramid, but there are some interventions/programs that have not yet been able to prove their worth under the most restrictive criteria (the gold standard) and are nonetheless considered worthy of further research.
A ‘Promising model/practice’ is defined as “one with at least preliminary evidence of effectiveness in small-scale interventions or for which there is potential for generating data that will be useful for making decisions about taking the intervention to scale and generalizing the results to diverse populations and settings” (Department of Health and Human Services, Administration for Children and Families Program Announcement. Federal Register, Vol. 68, No. 131, July 2003, p. 40974). These are interventions for which some initial testing has been done, and the outcomes observed so far suggest that the intervention may be effective. However, more, and stricter, testing is needed before it can be endorsed as an EBP.
Emerging practices, on the other hand, are “practices that have very specific approaches to problems or ways of working with particular people that receive high marks from consumers and/or clinicians but which are too new or used by too few practitioners to have received general, much less scientific attention.” We took this definition from the Oakland County Community Mental Health Authority. Here, as with promising practices, the argument is that the intervention being described has produced good outcomes, but much more testing is still necessary.

Evidence-based practices

Currently, one of the most important areas in healthcare is accountability. As part of this movement toward accountability, the mental healthcare industry and its stakeholders tend to talk about Evidence-Based Practices (EBP) as a way to link programs to desirable outcomes.
Evidence-based practices can be found in multiple areas, from education to mental health. Within mental health you can find them in medication (the Kentucky Medication Algorithm and the Texas Medication Algorithm, whose main goal is to use the medication that will produce the best outcomes), in specific interventions or programs like Assertive Community Treatment (ACT) for adults and Multisystemic Therapy (MST) for youngsters, and for specific illnesses like schizophrenia and bipolar disorder. Furthermore, SAMHSA, which supports most substance abuse and mental health funding at the Federal level, maintains and funds multiple studies to determine and encourage the use of EBP throughout the country (go here to see what SAMHSA endorses). Professional organizations like the American Psychological Association and the American Psychiatric Association, as well as organizations for occupational therapy, psychiatric rehabilitation, nursing, etcetera, endorse the use of EBP with their members. Insurance providers, federally funded entities like the National Institutes of Health, and consumer advocacy groups like NAMI fund or endorse Evidence-Based Practices. In fact, Tanenbaum (2008) states that “EBP is a matter of mental health policy in USA” (p. 699).
So what is the big deal about EBP? Why would we want to use EBP rather than other practices that are not considered EBPs? The main reason has to do with the definition of EBP and the rationale for their creation. There are multiple definitions for Evidence-Based Practices (this is one), but most of them speak of interventions that are backed by empirical or scientific research. What that means for the individual on the receiving end is some assurance that what is being used is scientifically sound, and not just some unproven therapy or, even worse, some form of quackery that will not deliver the expected outcomes on a regular basis.
If EBPs are the best thing since sliced bread, then why is there resistance to implementing them? There are several issues associated with the implementation of EBP. One is related to the level of information about EBPs (who knows about them, and how much). Evidence about consumers knowing about, or participating in decisions regarding, services (in this case, EBP services) is usually limited. Tanenbaum, for example, found that although consumers may be willing to use EBP, they are rarely consulted about the services they receive (the decision is not up to them).
Another area is the science-to-service gap associated with research. There are multiple numbers being tossed around, but Druss (2005) speaks of a twenty-year gap between scientific research and implementation in applied settings. In that regard, entities like SAMHSA are doing their best to help move research into practice. For example, SAMHSA instituted an award for centers that do their best to bridge that gap (MHCD received this award in 2009 for its Growth and Recovery Opportunities for Women (GROW) program).
Finally, there is also resistance from providers to implementing EBP, for multiple reasons: from the need for new training, to expense, to the demands of fidelity to the model.
• Regarding training, most EBP require that clinicians learn new techniques, or ways of doing things that seem counterintuitive to what has been known or practiced for many years. As an example, the Trauma Recovery and Empowerment Model (TREM), an intervention for women survivors of trauma, uses an approach in which abuse is not seen as “the primary problem”.
• Regarding expense, many of these interventions require very extensive training or special certifications to be used. This means expense not only for training and materials but also for certification; not many centers can afford such implementations.
• Finally, most of these models were created in research settings, under very controlled conditions, and have been proven to work mostly under those circumstances. Therefore, the model creators will require that you “follow the model” with fidelity. For example, clinicians may have to be on call 24 hours a day, 7 days a week, or maintain a ratio of one clinician for every 10 individuals receiving services. And if you do not follow the model within specific bounds (measured by instruments created by the model designers), then the center or clinicians doing the implementation are formally not using the model, or will not be endorsed by the model developer.
Why then try to use Evidence-Based Practices? The short answer is that they have been proven to work in most situations; that is, the expected outcomes are met as described by the model. For example, youth receiving Multisystemic Therapy (MST) are expected to stay at home (rather than in out-of-home placements), stay in school, be arrested less often, and show reduced psychiatric symptoms and substance/alcohol use. Therefore, most people figure that the cost, extra training, and continuing certification are worth the hassle. But the field is new, and sometimes it is not clear whether all the program components work as intended, or whether the model really works outside the often very restrictive conditions imposed by the program developers. New evidence is mounting every day that speaks for or against what we know about EBP. We’ll have more to say about this area in future posts.