Showing posts with label Mental. Show all posts

There Is Actually Only One Way to Express True Love - Mental Health Research

What is the purpose of life? The only purpose that makes rational sense is to express true love to all living beings. All other purposes are bound to be selfish, with a "getting" motive attached. The experience of true love seems to occur rarely on our planet, as indicated by the negative condition of people and situations worldwide. The quality of our well-being and mental health depends on our willingness to express love to the life around us.
Many may be surprised to discover that true love is not a personal resource. I have no love of my own, nor do you, nor does anyone else. There is only one way we can express true love to the life around us: by consistently acting on what is truly right.
Here is another shocker: to act rightly, a person cannot be acting selfishly. That means that he or she cannot be acting from a selfish or egotistical intention. It means that a person cannot be in a selfish, controlling, or manipulative mode, and cannot be acting to get something for self.
The expression of true love requires that our intentions be pure; that we sincerely will to give with no strings attached. We must also be willing to act in lovingly responsible ways, which includes being willing to express truth as we perceive it in appropriate ways. In a selfish environment, the expression of truth can sometimes be dangerous, so discretion is in order.
At the heart of the act of expressing true love is a sincere willingness to express love. Without that willingness, whatever comes forth will be some form of selfish action.
Here is an analogy:
Think of a human being as a "garden hose," and his or her will as the "faucet" attached to the side of a house. The "water" is love.
In order for us to experience or express love, we must open our personal "faucet" (will) and be willing to allow the water to flow (express love). If we are willing to express love, "water" flows through us and we feel good (we experience love). In addition, those around us get "wet" (are loved).
On the other hand, if we selfishly and defiantly refuse to express love, we keep our personal "faucet" shut so that no "water" can flow through us. Like an unused garden hose left out in the sun, we soon dry out and begin to decay.

Integrating Data Instruments into Clinical Practice

One issue MHCD has looked at recently in improving the care of mental health consumers is the use of the data instruments that the Evaluation and Research Department employs to collect information on consumer recovery, and how to translate that information back to consumers so they gain more insight into their treatment progress.  The Evaluation and Research Department uses two instruments: the Consumer Recovery Marker (CRM) and the Recovery Marker Index (RMI).  The CRM is a 15-item survey, filled out by clients, that measures "active growth/orientation, hope, symptom interference, sense of safety and social networks" and is described as capturing the "consumer's perception of their mental health recovery" (Deroche, Olmos, Hester, McKinney 2007: 1).  The RMI is a 7-item survey, filled out by clinicians, that includes such items as "employment, education/learning, active/growth orientation, symptom interference, engagement, housing" and substance use (added since inception), and is described as measuring "indicators usually associated with individual's recovery, but are not necessary for recovery" (Deroche et al. 2007: 1).  In combination, these two instruments provide multiple perspectives on the consumer's recovery, collect the information that makes evaluation possible, and allow progress to be reported back to MHCD's various stakeholders.  The Evaluation and Research Department also utilizes a tool, the Recovery Profile (RP), which combines all the data collected from the CRM and RMI and puts it in an interpretable form, using line and bar graphs, averages of all scores, and so on.  The question MHCD faces is how to provide this information to clinicians, as well as consumers, so that it can be used to make clinical decisions about consumer treatment while offering the consumer insight into his or her recovery process.  In considering this issue, a literature review has shown a number of factors that come into play.
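As a rough sketch of what a Recovery Profile does with these scores, the snippet below averages one consumer's CRM and RMI item responses into a two-number summary, one per perspective. The item values and the 1-5 scale are invented for illustration; the actual CRM/RMI scoring rules are not described in this post.

```python
from statistics import mean

# Hypothetical item responses for one consumer, on an assumed 1-5 scale.
crm_responses = [4, 3, 5, 4, 4, 3, 4, 5, 3, 4, 4, 3, 5, 4, 4]  # 15 CRM items (consumer-rated)
rmi_ratings = [3, 4, 2, 4, 3, 4, 3]                            # 7 RMI items (clinician-rated)

# A Recovery Profile-style summary: one average per perspective,
# ready to be tracked over time on a line or bar graph.
profile = {
    "CRM average (consumer view)": round(mean(crm_responses), 2),
    "RMI average (clinician view)": round(mean(rmi_ratings), 2),
}
print(profile)
```

Plotting each quarter's pair of averages side by side is one simple way the consumer and clinician views could be compared over the course of treatment.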

Dilemma for Clinician and Organization

Another perspective is to consider the dilemma that clinicians face when data instruments are introduced into clinical practice, a switch toward more standardized, evidence-based practices.  An article by Broom, Adams and Tovey (2009) looks at this issue within healthcare and, in particular, oncology practice.  The article describes the use of evidence-based practices within medicine and states that the challenge is to adopt these principles while still maintaining "professional autonomy, clinical judgment, and therapeutic integrity" (Broom et al. 2009: 192).  I think part of clinicians' fear about using instruments is that doing so would minimize some of their intuitive experience with consumers and would not allow their unique talents as clinicians to shine through.  In other words, in the mental health field much of a consumer's story is shared through narrative forms (treatment plans, intakes, histories, etc.), so there might be some initial hesitance about how that story can be translated into more quantitative forms, or data instruments.  The study found that executive management was more likely to support using evidence-based medicine, because it minimizes clinician error and is more scientific and objective, while clinicians were more opposed because they felt it takes away from their expertise and uniqueness as individuals (Broom et al. 2009).  The challenge is to find a balance where the clinician's unique talents are represented and acknowledged while the clinician also has more information at his or her disposal when making decisions about the consumer's treatment.

Another aspect of this dilemma is the strain placed on an organization in trying to satisfy the reporting needs of its stakeholders while also using that data in a way that improves its programs.  Carman (as cited by Hoole and Patterson 2008) reports that 65% of nonprofits engage in formal program evaluation, 95% report to their boards and 90% experience site visits from their funders (3).  The problem Hoole and Patterson (2008) raise is that most of this data is merely collected, not actually used to improve programs.  This is most likely due to conflicting stakeholder demands, to the difficulty of securing support and funding for data collection, and to the challenge of conveying its importance to managers and staff across the organization (Hoole and Patterson 2008).  With nonprofit organizations, depending on their funders, outcome goals can often reflect the interests of the stakeholders and thus are not intertwined with the mission of the organization (Hoole and Patterson 2008).  Basically, the answer for Hoole and Patterson (2008) is for the nonprofit to work more with stakeholders on integrating outcome requirements with its own internal mission or goal statements.  This holds true for MHCD as well, in that it is currently making efforts to take data already being collected to satisfy Medicaid, state, or private requirements and translate it so that clinicians can use it in clinical practice and in making decisions about client care.

Structure and Funding

An article by Carman and Fredericks (2009) describes how successfully evaluation is implemented as depending on "autonomy, internal structures and external relations, leadership styles and maturity" (Carman and Fredericks 2009: 3).  Carman and Fredericks also identify the Executive Director as the key to how research and evaluation are implemented in nonprofits (Carman and Fredericks 2009).  Their study examines three different clusters of nonprofits and finds that those primarily funded by government, Medicaid, or other public funds have fewer problems securing support and funding for research and evaluation (Carman and Fredericks 2009).  The following breakdown of funding for MHCD was taken from "MHCD 2009: Report to the Community" and covers the fiscal year concluded June 30, 2009.

Source                              Amount          Percentage
Medicaid                            $23,925,630     44.4
State of Colorado                   $12,945,194     24.0
Client, Third Party, Pharmacy       $8,826,030      16.4
Contracts and Grants                $5,632,041      10.5
Interest, Rent, Other               $1,604,777      3.0
Medicare                            $498,286        0.9
Public Support                      $444,236        0.8
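The percentages in the table follow from the dollar amounts alone. The short script below recomputes each source's share of the total (about $53.9 million) and reproduces the reported figures to one decimal place.

```python
# Funding amounts from the MHCD 2009 Report to the Community (FY ending June 30, 2009).
funding = {
    "Medicaid": 23_925_630,
    "State of Colorado": 12_945_194,
    "Client, Third Party, Pharmacy": 8_826_030,
    "Contracts and Grants": 5_632_041,
    "Interest, Rent, Other": 1_604_777,
    "Medicare": 498_286,
    "Public Support": 444_236,
}

total = sum(funding.values())  # 53,876,194
shares = {source: round(100 * amount / total, 1) for source, amount in funding.items()}

for source, pct in shares.items():
    print(f"{source:30s} {pct:5.1f}%")
```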

Some other findings were that the larger the organization, the more likely it was to identify staff resistance as a problem in evaluation and data collection (Carman and Fredericks 2009).  Younger organizations were found to have more technical-assistance issues with evaluation, while organizations with connections to housing or community development had fewer problems with implementation and design (Carman and Fredericks 2009).  With MHCD being a larger organization with a lot of support for evaluation from executive management, as well as primary funding from Medicaid and the State of Colorado, there is already a lot of familiarity with data collection and evaluation, at least at the administrative level.  However, because it is a large organization with multiple layers, treatment teams, and residential and employment facilities, translating the information collected by the Evaluation and Research Department to clinicians and consumers offers more chance for resistance.

Philosophy

The mental health field also has some differences in philosophy from the medical field that have made it more difficult for evidence-based practices to be implemented successfully.  Rishel (2007) points out that the mental health field focuses more on clinical outcome trials and best models of treatment than on prevention itself.  Rishel (2007) goes on to state that two possible reasons for this are that prevention comes from a public health perspective and looks at the population as a whole, which differs from a clinical approach aimed at the best methods to treat those already diagnosed with mental illness.  Also, prevention methods are usually thought of as requiring larger samples and longer participant follow-up, compared with clinical trials, which are shorter and require smaller sample sizes (Rishel 2007).  The mental health field is also seen as hard to evaluate in terms of outcomes, as there are really no standardized outcomes (Rishel 2007).  This becomes even more difficult with the looser definitions found in nonprofit organizations.  This proves true for MHCD in that most of the focus to this point has been on the best treatment models to integrate into clinical practice.  However, the integration of data instruments allows for more longitudinal data and more focus on the prevention side.

Clinician Feelings Toward Instruments

A study looking at the mental health field and how clinicians feel about data instruments was conducted by Garland, Kruse and Aarons (2003).   The study was done on a mental health system in California and found that 92% of clinicians reported never using scores from data instruments in their clinical practice (Garland et al. 2003: 400).   Further, 90% identified collecting the data as a "significant time burden" in terms of fitting it into their daily work tasks (Garland et al. 2003: 400).  This shows clinicians not putting much value into data collection and thinking of it as a burden rather than as something that could be useful in their practice.  The article also showed that 55% of clinicians felt the measures used by the instruments were "not appropriate, nor valid, for their particular patient population" (Garland et al. 2003: 400).  It would be a hard sell to get clinicians to buy into using these instruments if they don't believe they are valid or useful for their population.  When asked what changes they wanted to see in how the data was reported, clinicians answered "briefer administration" or "simpler language", and wished results were presented in "narrative, as opposed to quantitative form" (Garland et al. 2003: 400).  As we can see, there is a lot of doubt among clinicians about whether the data is appropriate for their client populations, whether they can interpret it, and whether it reflects anything that can be used in clinical practice.
As we can see from the literature, there are multiple interests to consider when implementing data instruments into clinical practice.  MHCD is unique among nonprofits in having its own internal Evaluation and Research Department, and this creates a lot of opportunity to move forward in how clinical information is relayed to clinicians and to consumers receiving mental health services.  MHCD has already taken a number of steps to ease this transition.  The MHCD Recovery Committee, a committee already in place, is now working on how to help with the change.  Focus groups were held with clinicians about their concerns with the instruments, and this information was used to make changes to the instruments and to develop trainings.  The trainings were designed to show how the data-collection instruments can be interpreted and how to use that information in discussions with consumers, giving them an incentive to participate in data collection.  MHCD also has a team of MHCD consumers who have taken on the responsibility of going to each site and sharing with other consumers their experiences with the RP, and how beneficial it is to view this data and learn more about their treatment progress.  MHCD will continue to evaluate how this integration has gone, but it has created a unique opportunity, as an organization, to bring various clinical teams and consumers together to get the most out of data that is already collected.  In the medical field, we have seen a lot of growth in how we can view lab work and communicate with our doctors through e-mail and other electronic means.  I think through this example MHCD has shown how the mental health field can benefit from data instruments as well, resulting in care that is both more client-informed and more clinician-informed.

Submitted by Jim Linderman, Evaluation Specialist with the Evaluation and Research Department at MHCD and M.A. student in the University of Colorado Denver Sociology Program.


References

Broom, A., J. Adams and P. Tovey (2009). “Evidence-based healthcare in practice: A study of clinician resistance, professional de-skilling, and inter-specialty differentiation in oncology”. Social Science & Medicine 68: 192-200.

Carman, J.G., K.A. Fredericks (2009). “Evaluation Capacity and Nonprofit Organizations: Is Glass Half-Empty or Half-Full?” American Journal of Evaluation 31:84.

DeRoche, K., Hester, M., Olmos, P.A., McKinney, C.J. (October, 2007). Evaluation of Mental Health Recovery: Using Data to Inform System Change. Poster presented at the 'Culture of Data' Conference. Denver, CO.

Garland, A.F., M. Kruse and G.A. Aarons (2003). “Clinicians and Outcome Measurement: What’s the Use?”. Journal of Behavioral Health Services & Research 30(4): 393-405.

Hoole, E. and T.E. Patterson (2008). “Voices from the Field: Evaluation as part of a Learning Culture”. Nonprofits and Evaluation. New Directions for Evaluation 119:93-113.

MHCD 2009: Report to the Community (can be found at www.mhcd.org)

Rishel, C. (2007). “Evidence Based Prevention Practice in Mental Health: What is it and how do we get there?” American Journal of Orthopsychiatry 77(1): 153-164.          

Future Blogs

Please note that it takes time to compile research on different subjects.
For this reason we will only be posting on this blog monthly.
We hope the subjects will be of interest to you and that you will continue to visit our blog in the future.
Thank you

More About Evidence-Based Practices

Last week we spoke about Evidence-Based Practices (EBP) and how their use has helped create more effective interventions. However, we also mentioned that EBP are difficult to implement. Part of the problem is that they can be costly and can go against what most people in the field are used to doing in their practice. This time, I want to explain why these interventions are often costly and difficult to move into real-world practice, not only because they may go against what the field is used to doing, but also for some other practical reasons.
EBP are usually tested under very rigorous conditions: the most stringent criteria for calling something an Evidence-Based Practice require the use of a randomized controlled trial. That means that participating individuals may be assigned to one of two (or more) groups: one that receives the treatment and one that does not. The justification for doing this is that we want to be able to demonstrate that any change we see after treatment is due to the treatment and not to some other reason (for example, just the passing of time or, where it makes sense, developmental changes). Even under these conditions, there may be potential confounding variables that affect the final outcome.
One of the biggest problems facing many treatments is that individuals often show improvement just because they are told (or believe) that they are receiving some wonder therapy (or drug). This is so prevalent in clinical trials that people speak of the “placebo effect”.  Therefore, one way to control for it is to include a comparison condition that is a placebo (when testing medications, people speak of “sugar pills”) or the “normal treatment” (sometimes labeled “business as usual”), where those who do not go into the treatment being tested receive the treatment they would have received had the new treatment not been under testing. The placebo effect is very powerful, and many of the therapies advertised on TV may work because of it (quiz: how many times have you seen a comparison group in those late-night TV ads? Or comparisons against a placebo control?).
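A toy simulation can show why a placebo control matters. In the sketch below, every number is invented: everyone improves about 5 points just from expecting help, and the drug adds roughly 3 more. Looking only at the treated group makes the drug appear far more effective than it is, while subtracting the placebo group's improvement recovers the drug's specific effect.

```python
import random

random.seed(1)

PLACEBO_EFFECT = 5.0  # improvement from expectation alone (assumed)
DRUG_EFFECT = 3.0     # extra improvement the drug actually adds (assumed)

# Simulated improvement scores for 200 people per group, with individual noise.
placebo_group = [PLACEBO_EFFECT + random.gauss(0, 2) for _ in range(200)]
drug_group = [PLACEBO_EFFECT + DRUG_EFFECT + random.gauss(0, 2) for _ in range(200)]

naive_estimate = sum(drug_group) / len(drug_group)       # ~8 points: overstates the drug
placebo_mean = sum(placebo_group) / len(placebo_group)   # ~5 points: expectation alone
specific_effect = naive_estimate - placebo_mean          # ~3 points: the drug's real effect

print(round(naive_estimate, 1), round(specific_effect, 1))
```

Without the placebo arm, the naive estimate would credit the drug with the full improvement, which is exactly the mistake an uncontrolled before/after comparison makes.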
There are multiple ways to try to prove that a specific intervention is working, but as explained, most people agree that the best approach is the “gold standard”: random assignment to the different clinical conditions. The reason random assignment is considered the gold standard is that, for the most part, it balances out the many variables that could affect the outcomes in unexpected ways: things like age, gender, ethnicity, length of time with the illness, types of treatment received in the past and so forth. How does random assignment control for all of that? Because every individual, with any potential combination of these variables, has the same chance of being assigned to any of the treatments in the study. It is therefore expected that individuals with many if not all of the combinations that may affect the final outcomes end up in each of the groups, so the effects of all those variables cancel out.
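This balancing is easy to see in simulation. The snippet below builds a hypothetical cohort (not real data) and randomly splits 1,000 people into two groups without ever looking at their age, yet the average age ends up nearly identical in both groups.

```python
import random
import statistics

random.seed(0)

# Hypothetical cohort: each person carries covariates we never use for assignment.
cohort = [{"age": random.randint(18, 65), "years_ill": random.randint(0, 20)}
          for _ in range(1000)]

# Random assignment: shuffle, then split down the middle.
random.shuffle(cohort)
treatment, control = cohort[:500], cohort[500:]

age_gap = abs(statistics.mean(p["age"] for p in treatment)
              - statistics.mean(p["age"] for p in control))
print(f"difference in mean age between groups: {age_gap:.2f} years")
```

The same near-zero gap would appear for any other covariate in the cohort, measured or not, which is what lets a difference in outcomes be attributed to the treatment itself.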
All this dancing is so that scientists and the public in general can make informed decisions about the effectiveness of a treatment (i.e., are my outcomes better when I use treatment “A” as opposed to treatment “B”?), and so the results can be generalized to a larger group of people than those included in the study. After all, if you were not included in the study, what good does it do you to know that a program may work if you are not sure the treatment will work in people like you?
Doing this work takes time and money. The people testing the treatment need to conduct multiple studies so they can get some assurance that the results are sound and can withstand multiple tests under different conditions. The studies also need to be closely monitored so researchers can be alerted if something is not going well. If the new treatment under scrutiny has the potential to be harmful, they may want to stop the study early. On the other hand, if the results are going very well, perhaps it is time to stop the study with confidence that the new treatment will work as expected (though when human lives are being treated, you don’t want to take any chances).
Multiple institutions have created databases where evidence for or against Evidence-Based Practices can be found. The Substance Abuse and Mental Health Services Administration (SAMHSA) maintains a website with links to several organizations where such information can be found.
Creating and documenting the effectiveness of a specific intervention is not enough. In a country as diverse as the U.S., there are many instances where an intervention that has been proven to work for one group of people (say, African Americans) may not necessarily work for another ethnic group (e.g., Latinos). The reasons can be associated with genetic makeup as well as with ethnic background (customs and traditions, for example, can be a big impetus or deterrent for some interventions). Therefore, interventions that have been proven to work in one ethnic group (or in a research setting) sometimes need to be tested under different conditions (e.g., a different ethnic group or a community-based environment). This is no easy task, which once more affects how quickly an intervention can be used outside the testing grounds.
This is a very active area of research, known as validity. People speak of internal or external validity, and if you ever took a “research methods” class in college, you may recognize many of these ideas or even the terms. One book that describes the rationale, with many specific examples, is the one by Shadish, Cook and Campbell. However, be warned that this book can be hard to read without some introduction to research methods.
One final note: Evidence-Based Practices are the top of the pyramid, but there are some interventions and programs that have not yet proven their worth under the most restrictive criteria (the gold standard) and are nonetheless considered worth more research.
A “promising model/practice” is defined as “one with at least preliminary evidence of effectiveness in small-scale interventions or for which there is potential for generating data that will be useful for making decisions about taking the intervention to scale and generalizing the results to diverse populations and settings” (Department of Health and Human Services, Administration for Children and Families Program Announcement, Federal Register, Vol. 68, No. 131, July 2003, p. 40974). These are interventions where some initial testing has been done and the outcomes observed so far seem to indicate that the intervention may be effective. However, more and stricter testing is needed to endorse it as an EBP.
Emerging practices, on the other hand, are “practices that have very specific approaches to problems or ways of working with particular people that receive high marks from consumers and/or clinicians but which are too new or used by too few practitioners to have received general, much less scientific attention.”  We took this definition from the Oakland County Community Mental Health Authority. In this case, just as with promising practices, it is argued that the intervention being described has produced effective outcomes, but much more testing is still necessary.

Mindfulness and Psychotherapy

The practice of mindfulness is finding increased attention in the application of psychotherapy. What exactly is mindfulness as it relates to psychotherapy? The term mindfulness comes from the word sati, taken from the Buddhist tradition of meditation and psychology, which suggests awareness, attention and remembering. According to Dr. Ronald Siegel, Psy.D., Assistant Clinical Professor of Psychology at Harvard Medical School, mindfulness as it relates to psychotherapy means assisting a person to cultivate a practice of awareness of present emotional experience. In the book he co-edited, Mindfulness and Psychotherapy (New York: Guilford Press, 2005), Dr. Siegel suggests that it is also very important that the person be able to practice acceptance of that emotional state as it arises. As used in psychotherapy, mindfulness is a practice that systematically teaches the patient how to accept his or her emotional experience. This is similar to the use of mindfulness in Marsha Linehan’s Zen-inspired dialectical behavior therapy (DBT); see Linehan, M. (1993), Cognitive-Behavioral Treatment of Borderline Personality Disorder, New York: Guilford Press. As emphasized in DBT, emotions can become overwhelming, and this may impact one’s behaviors and thoughts in a negative or destructive manner. Mindfulness as utilized in dialectical behavior therapy attempts to break this pattern by helping the patient better manage these emotions.
While mindfulness has most often been related to Buddhist or religious/contemplative practices, it is now also being integrated into what we might call the more traditional forms of psychotherapy, as part of what is being called the third wave in behavior therapy. The first wave was operant and classical conditioning, and the second was cognitive behavioral therapy. The third wave incorporates mindfulness into the well-known evidence-based practice of cognitive behavioral therapy as Mindfulness-Based Cognitive Therapy (MBCT).
Mindfulness-based cognitive therapy was developed by Zindel Segal, Mark Williams and John Teasdale (2001), Mindfulness-Based Cognitive Therapy for Depression: A New Approach to Preventing Relapse, New York: Guilford Press. Their work was largely influenced by Jon Kabat-Zinn, whose development of the Mindfulness-Based Stress Reduction Program at the University of Massachusetts Stress Reduction Center was discussed in a previous article on this blog.
Mindfulness-based cognitive therapy is a blend of cognitive behavioral therapy (CBT), which focuses on changing our thoughts in order to change our behaviors, and the meditative practice of mindfulness, a process of identifying our thoughts on a moment-to-moment basis while trying not to pass judgment on them and to experience them with acceptance, as suggested by Dr. Ronald Siegel. While cognitive behavioral therapy has always emphasized the end result of changing one’s thoughts, mindfulness really looks at how a person thinks (the process of thinking) to help one be more effective in changing negative thoughts. What does current research suggest about the effectiveness of this newer form of psychotherapy?
Coelho et al. looked at research on mindfulness-based cognitive therapy and found four relevant studies that examined the effectiveness of this approach: Coelho, H.F. (2007). “Mindfulness-based cognitive therapy: Evaluating current evidence and informing future research”. J Consult Clin Psychol 75(6): 1000-5.
The current evidence from the randomized trials suggests that, for patients with three or more previous depressive episodes, MBCT has an additive benefit over usual care. It is important to note that MBCT is designed to help people who suffer from repeated bouts of depression. Coelho found, however, that because of the nature of the control groups, these findings cannot be attributed to MBCT-specific effects. The researchers did suggest that MBCT has shown some positive results for those with more chronic depression, but they could not say that this was a result of MBCT specifically.
It is clear that there is an ever-increasing mindfulness-oriented model of psychotherapy. Treatment strategies can be derived from the basic elements of mindfulness: awareness of present experience, with acceptance. A review of the empirical literature by Baer (2003) (Baer, R., “Mindfulness training as a clinical intervention: A conceptual and empirical review”, Clinical Psychology: Science and Practice, 10(2), 125-142) suggests that mindfulness-based treatments are “probably efficacious” and en route to becoming “well established”.
The emerging model of mindfulness as integrated into psychotherapy shows promise in many areas of psychology and psychotherapy and has indeed become well established. Similarly, empirical research in this area has seen a significant increase. In 2003, at the time of Baer’s review, there were several hundred empirical research articles on mindfulness and psychotherapy; now, in 2010, one can find several thousand. Mindfulness is beginning to move into other areas such as brain science, health/medical psychology and positive psychology. The clinical literature is promising, and psychologists and mental health clinicians have the opportunity to integrate a form of mental practice based on a 2,000-year-old contemplative practice of bringing the mind to the present state, experiencing this state and accepting this state.
Additional Resources
University of Massachusetts Medical School, Center for Mindfulness and Medicine

www.NICABM.com
Dr. Ronald Siegel, (2010), The Mindfulness Solution: Everyday Practices for Everyday Problems, New York: Guilford Press

By Marcia Middel, Ph.D.
Dr. Middel is the chief psychologist at the Mental Health Center of Denver. She is also the Director of the Center for Integrated Psychological Services (CIPS) and a team associate with the MHCD Evaluation and Research team.