Client Suicide and Clinician Response: Ensuring Policy Guidelines and Clinician Safety

In looking at the issue of suicide within the mental health field, we know that a person who suffers from mental illness is more likely to be at risk for suicide. According to the Mental Health America website, most suicide victims "suffer from major depression or bipolar (manic-depressive) disorder." One aspect that doesn't often get looked at is how the suicide of a client affects the clinician they work with. In the mental health field, depending on setting and caseload, clinicians often have fluctuating caseloads; they might lose a client to suicide and then be required, on short turnaround, to pick up a new client to replace them. In dealing with a population that may show higher risk of suicide, the question becomes: are we ensuring that clinicians are protected by policies, and that they have confidence in their ability to assess suicidal behavior? Wurst et al. (2010) surveyed 172 therapists, 125 from private practice and 47 from institutions, to assess their responses to client suicide. The study found that 85% of therapists from institutional settings and 17% from private practice had experienced at least one client suicide in their professional careers. Clinicians from institutional settings thus appeared more likely to have experienced a client suicide. This would make sense, in that clients in institutional settings would tend to have more severe symptoms of mental illness; in general, being institutionalized results from not being able to take care of oneself, or possibly from self-harming behavior in the past.

The same study found that psychiatrists with "less than 5 years of professional experience" reported significantly more feelings of being "guilty, shocked and insufficient" at their job six months after a client suicide than colleagues with more experience. This makes an argument for the importance of developing policies, as well as offering trainings, to ensure new staff feel confident in assessing clients for suicidal risk. Interestingly, one of the main sources of distress was "fear of reaction of patient's relatives," which was found to be even higher than fear of a lawsuit in this study. The study also found that, among therapists who had dealt with a client suicide, 80% reported being "supported by the institution" they worked for, 72.3% found some type of "conference" around grieving to be helpful, and 44% reported wishing they had some type of "conference." This data shows that clinicians want to feel supported and protected by the institution's policy guidelines, and want some type of debriefing process that allows discussion of the client's case. Of all the therapists involved in the study, "one third or 34.5% suffered from severe distress"; the study did not find significant gender differences, though severe distress was slightly more prevalent among women. The study also drew a distinction between mild distress, whose symptoms usually lessened over 6 months, and severe distress, whose symptoms persisted over that time. This prevalence of "severe distress" among clinicians argues for being able to notice it in staff, as well as for developing policies that ensure staff feel protected and confident in assessing clients for suicidal risk.

Another article describes a study done within the United States Air Force, which collected information from 74 medical treatment facilities to determine whether trainings around suicide assessment could impact clinician confidence, as well as policy and clinicians' ability to assess suicidal behavior. One argument the article starts with is that most clinicians' ability to do suicide assessment effectively depends on organizational policies, as well as on clinician motivation to access the literature or pursue continuing education in this area. For instance, Bongar and Harmatz (as cited by Oordt et al., 2009) found that "only 40% of graduate programs in clinical psychology provided any formal training in clinical work with suicidal patients." In other words, even clinicians with advanced degrees in psychology or other mental health fields may have had minimal exposure to working with suicidal patients, which makes their effectiveness dependent on what they learn working in the field, and puts further emphasis on the trainings their employer offers, what program policies exist around this, and so on. Depending on the state, continuing education might not be required, which would put more emphasis on clinicians seeking out these trainings themselves. The article offers a link to the Air Force website, which offers "18 recommendations for effective clinical work with suicidal patients." Without continuing education, Oordt et al. (2009) note, clinical supervision would be the primary source of guidelines for how to assess suicide risk. With good policies, supervisors could feel confident in providing feedback to clinicians, while clinicians who are trained and know what to do in these situations would need less supervision.

Returning to the article, the study used a 12-hour training session, with 4 hours spent on "suicide assessment," 4 hours on "management and treatment of suicidal behavior," and 4 hours on "military specific practices, policies" around suicide assessment. The goal of the study was to follow up with participants for up to 6 months after the training to see if it had an impact. The study's 82 participants were 48% doctoral-level psychology clinicians, 27% doctoral-level social workers, and 13% psychiatrists. Initially, 43% of participants reported "little or no formal trainings in graduate programs" around suicide assessment, and 42% reported "little or no postgraduate or continuing education." This supports the finding that even clinicians with advanced degrees have had little exposure to policies or guidelines around suicide assessment. At the 6-month follow-up, 44% of all participants reported "increased confidence in managing suicidal patients," 83% reported "changing suicide practices," and 66% reported "changing clinical policy" as a result of attending the trainings. The article also offers a 9-step guide to what trainings around suicide assessment should look like. This study was done specifically with the Air Force, but it offers support for giving clinicians trainings in the practice of suicide assessment, as well as for making sure they know the organization's policies and what is expected of them in doing suicide assessments.

From these articles, we can see the prevalence of "distress" among clinicians who have dealt with a client suicide. As clinicians, we often feel a need to remain detached or professional in dealing with our clients, yet it is important to understand that it is normal to experience some grief in losing a client to suicide or other causes. What is important is knowing our organization's policies and expectations for assessing suicidal risk, as well as the resources available to aid us. As these articles point out, through trainings and continuing education, we can feel more confident, and we can develop better policies for working with clients who display suicidal behaviors. Below are some resources on defining policies around suicide assessment, as well as tips for clinicians coping with the loss of a client to suicide.
Mental Health America (Suicide Info)
SAMHSA (Statistics on Suicide Likelihood)
Suicide.Org Non-Profit Organization (Warning Signs)
Mayo Clinic Website (General Coping Skills for Losing Someone to Suicide)


Oordt, M. S., Jobes, D. A., Fonseca, V. P., & Schmidt, S. M. (2009). Training mental health professionals to assess and manage suicidal behavior: Can provider confidence and practice behaviors be altered? Suicide and Life-Threatening Behavior, 39(1).

Wurst, F. M., Mueller, S., Petitjean, S., Euler, S., Thon, S., Wiesbeck, G., & Wolfersdorf, M. (2010). Patient suicide: A survey of therapists' reactions. Suicide and Life-Threatening Behavior, 40(4).
Submitted by Jim Linderman. Jim is currently an M.A. student in the University of Colorado-Denver Sociology Program.

More about evidence-based practices

Last week we spoke about Evidence-Based Practices (EBP) and how their use has helped create more effective interventions. However, we also mentioned that EBP are difficult to implement: part of the problem is that they can be costly and can go against what most people in the field are used to doing in their practice. This time, I want to explain why these interventions are often costly and difficult to move into real-world practice, not only because they may go against what the field is used to doing, but also for some other practical reasons.
EBP are usually tested under very rigorous conditions. The most stringent criteria for calling something an Evidence-Based Practice require the use of a randomized controlled trial. That means participating individuals may be assigned to one of two (or more) groups: one that receives the treatment, or one that does not. The justification for doing something like this is that we want to be able to demonstrate that the change we see after treatment is due to the treatment and not to some other reason (for example, just the passing of time, or, where it makes sense, developmental changes). Even under those conditions, there may be confounding variables that affect the final outcome.
One of the biggest problems facing many treatments is that individuals often show improvement just because they are told (or believe) that they are receiving some wonder-therapy (or drug). This is so prevalent in clinical trials that people speak of the "placebo effect." One way to control for it is to include a comparison condition that is a placebo (when testing medications, people speak of "sugar pills") or the "normal treatment" (sometimes labeled "business as usual"), where those who are not in the treatment being tested receive the treatment they would have received had the new treatment not been under testing. The placebo effect is very powerful, and many of the therapies advertised on TV may work because of it (quiz: how many times have you seen a comparison group in those late-night TV ads? Or comparisons against a placebo control?).
There are multiple ways to try to prove that a specific intervention works, but as explained, most people agree that the best approach is the "gold standard": random assignment to the different clinical conditions. The reason random assignment is considered the gold standard is that, for the most part, it balances out many variables that could affect the outcomes in unexpected ways: things like age, gender, ethnicity, length of time with the illness, types of treatments received in the past, and so forth. How does random assignment control for all of that? Because every individual, with any potential combination of these variables, has the same chance of being assigned to any of the treatments in the study. Therefore, individuals with many if not all of the combinations that might affect the outcomes are expected to end up in each of the groups, and the effect of those variables cancels out.
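For readers who like to see this balancing effect rather than take it on faith, here is a minimal Python sketch. It simulates a purely hypothetical cohort (all names and numbers are made up for illustration), randomly splits it into a treatment and a control group, and shows that the two groups end up with similar average age and similar rates of prior treatment, even though nothing was deliberately matched:

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical cohort: each person has an age and a prior-treatment flag.
cohort = [{"age": random.randint(18, 80),
           "prior_treatment": random.random() < 0.3}
          for _ in range(1000)]

# Random assignment: shuffle the cohort, then split it in half.
random.shuffle(cohort)
treatment, control = cohort[:500], cohort[500:]

def summary(group):
    """Mean age and proportion with prior treatment for a group."""
    return (statistics.mean(p["age"] for p in group),
            sum(p["prior_treatment"] for p in group) / len(group))

t_age, t_prior = summary(treatment)
c_age, c_prior = summary(control)

# With enough participants, the two groups look alike on both
# variables -- randomization balanced them without any matching.
print(f"treatment: mean age {t_age:.1f}, prior-treatment rate {t_prior:.2f}")
print(f"control:   mean age {c_age:.1f}, prior-treatment rate {c_prior:.2f}")
```

Running this, the two printed lines come out nearly identical, which is the whole point: with a large enough sample, randomization evens out the variables you measured and, crucially, the ones you never thought to measure.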
All this dancing is so scientists and the public in general can make informed decisions about the effectiveness of a treatment (i.e., are my outcomes better when I use treatment "A" as opposed to treatment "B"?), as well as generalize to a larger group of people than those included in the study. After all, if you were not included in the study, what good does it do you to know that a program may work, if you cannot be sure the treatment will work in people like you?
Doing this work takes time and money. People involved in testing the treatment need to conduct multiple studies so they can gain some assurance that the results are sound and can withstand multiple tests under different conditions. Studies also need to be closely monitored so researchers can be alert if something is not going well: if the treatment under scrutiny has the potential for harm, they may want to stop the study early. On the other hand, if the results are going very well, it may be time to stop the study with confidence that the new treatment will work as expected (though when human lives are involved, you don't want to take any chances).
 There are multiple institutions that have created databases where evidence for or against Evidence Based Practices can be found. The Substance Abuse and Mental Health Services Administration (SAMHSA)  maintains a website with links to several organizations where such information can be found.
Creating and documenting the effectiveness of a specific intervention is not enough. In a country as diverse as the U.S., there are many instances where an intervention that has been proven to work for one group of people (say, African Americans) may not necessarily work for another ethnic group (e.g., Latinos). The reasons can be associated with genetic makeup as well as with ethnic background (customs and traditions, for example, can be a big impetus or deterrent for some interventions). Therefore, interventions that have been proven to work in one ethnic group (or in a research setting) sometimes need to be tested under different conditions (e.g., with a different ethnic group or in a community-based environment). This is no easy task, which once more affects how quickly an intervention can be used outside the testing grounds.
This is a very active area of research, known as validity. People speak about internal and external validity, and if you ever took a "research methods" class in college, you may recognize many of these ideas, or even the terms. One book that describes the rationale and many specific examples is the one by Shadish, Cook, and Campbell. However, be warned that this book can be hard to read without some introduction to research methods.
One final note: Evidence-Based Practices are the top of the pyramid, but there are some interventions/programs that have not yet proven their worth under the most restrictive criteria (the gold standard) and yet are considered worth more research.
A "promising model/practice" is defined as "one with at least preliminary evidence of effectiveness in small-scale interventions or for which there is potential for generating data that will be useful for making decisions about taking the intervention to scale and generalizing the results to diverse populations and settings" (Department of Health and Human Services, Administration for Children and Families Program Announcement, Federal Register, Vol. 68, No. 131, July 2003, p. 40974). These are interventions where some initial testing has been done, and the outcomes observed so far seem to indicate that the intervention may be effective. However, more, and stricter, testing is needed to endorse it as an EBP.
Emerging practices, on the other hand, are "practices that have very specific approaches to problems or ways of working with particular people that receive high marks from consumers and/or clinicians but which are too new or used by too few practitioners to have received general, much less scientific attention." We took this definition from the Oakland County Community Mental Health Authority. In this case, as with promising practices, it is argued that the intervention has produced effective outcomes, but much more testing is still necessary.