Evidence based assessment/Step 1: Plan for most common issues in clinic setting
Medical disclaimer: This page is for educational and informational purposes only and should not be construed as medical advice. The information is not intended to replace medical advice offered by physicians. Please refer to the full text of the Wikiversity medical disclaimer.
|Steps 1-2: Preparation phase|
|Steps 3-5: Prediction phase|
|Steps 6-9: Prescription phase|
|Steps 10-12: Process/progress/outcome phase|
Pareto's 80:20 Rule of Thumb (or "Law of the Vital Few") provides a guiding example for implementing EBA. There can be an incredible richness and complexity to people and their challenges and circumstances. If we try to deal with all the complexity at the beginning, we will not get very far with the limited amount of time and resources that we have. Instead, focusing on the most common issues first will give us tools and procedures that we can use for most (if not all) of the cases with whom we work. Taking care of these topics first maximizes the return on investment in getting our practice organized.
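As a toy illustration of the 80:20 logic, a short script can show how quickly the most common presenting problems cover a caseload. The counts below are invented for illustration only, not drawn from any real clinic:

```python
# Hypothetical caseload counts for illustration only (not real data)
caseload = {"anxiety": 40, "depression": 30, "ADHD": 15, "trauma": 8,
            "eating concerns": 4, "psychosis": 2, "other": 1}

total = sum(caseload.values())
covered = 0
# Walk from the most common issue down, tracking cumulative coverage
for issue, n in sorted(caseload.items(), key=lambda kv: -kv[1]):
    covered += n
    print(f"{issue}: cumulative coverage {covered / total:.0%}")
```

In this made-up example, preparing assessment tools for just the top three issues already covers 85% of cases: a small, well-chosen toolkit handles most of the caseload.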
The first step is to gather descriptive information and external benchmarks about which issues are common in clinical settings similar to ours.
We can also consider a formal audit of our local practice and rates, making a list of the most common presenting problems and needs.
We then want to start gathering resources, including assessment tools that address the different issues. If we are working at an established clinic, then we can sort the measures that we have and tag them to use for specific assessment questions. At this stage, we are looking for gaps, where we may have a frequent issue that does not have a matching assessment tool. If we have a gap like this, then it becomes a top priority to find a reasonable tool for filling the gap. If all the bases are covered, then we can start evaluating whether there are "upgrades" available that would provide better utility for our clients.
Focusing on the most common issues is a strategy for trying to accomplish the most coverage with an initial investment of time and effort.
One interesting question is, "What counts as an issue?" Candidates include DSM diagnoses; clinical issues that are not diagnoses (e.g., couples in distress, parent-child conflict); and transdiagnostic issues (e.g., nonsuicidal self-injury, or NSSI).
A second issue is, "What definition do we use?" This includes the tension between local diagnoses and base rates versus external benchmarks such as epidemiological studies and meta-analyses, as well as billing diagnoses versus actual diagnoses, and unstructured versus semi-structured or structured approaches.
A third issue is the type of setting: community epidemiological samples include many people who are not seeking treatment. This raises points about Berkson's bias and about differences in stigma (with culture as a component affecting both).
Large epidemiological studies (such as the NCS-R, NCS-A, NESARC, NLAS, and NHANES) are a major source of external benchmarks, and it would be helpful to add links to the major ones. It is also worth contrasting diagnosis as usual (DAU) with semi-structured and structured diagnostic interviews. Several points about DAU are worth noting:
- It reveals local practices and norms (for example, whether reactive attachment disorder is treated as "real" or "fluffy" depends a lot on where you are working).
- It helps identify topics where you may want to upgrade your reading or skills quickly if you are changing settings (as happens frequently during graduate training, internship, and postdoctoral placements). When you identify a priority, look for reviews focused on that topic (and read the chapter, not the whole book!).
- Discrepancies are informative (see "Benchmarking" -- next section)
- These chart diagnoses have low inter-rater reliability, and their validity can be systematically biased by billing and reimbursement issues (e.g., conduct disorder).
- This has implications for the machine-learning approaches that people are starting to use with big data. If the main concern were low reliability, then big enough samples could compensate for that in theory. But if the diagnoses used to train the machine learning model are systematically biased, that creates major problems for the model. Machine learning will have a hard time using billing diagnoses to "learn" about predictors of conduct disorder, because (a) although conduct disorder exists, and there will likely be meaningful predictors and correlates available to the model, (b) conduct disorder is not diagnosed accurately (if at all) in billing diagnoses.
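A small simulation makes the point concrete. The numbers are made up for illustration, assuming (hypothetically) that clinics code only a fraction of true conduct disorder cases in billing records:

```python
import random

random.seed(0)

n = 10_000
# Hypothetical "true" conduct disorder status at a 10% clinic prevalence
true_cd = [random.random() < 0.10 for _ in range(n)]

# Hypothetical billing labels: only 30% of true cases get coded
# (systematic under-diagnosis), plus 1% spurious codes among non-cases
billing_cd = [(t and random.random() < 0.30) or (not t and random.random() < 0.01)
              for t in true_cd]

true_rate = sum(true_cd) / n
billed_rate = sum(billing_cd) / n
print(f"true prevalence:   {true_rate:.3f}")
print(f"billed prevalence: {billed_rate:.3f}")

# A model trained on billing codes can never "see" the missed cases
hits = sum(1 for t, b in zip(true_cd, billing_cd) if t and b)
sensitivity = hits / sum(true_cd)
print(f"share of true cases visible to the model: {sensitivity:.2f}")
```

No amount of additional data fixes this: under these assumptions the expected billed rate stays near 0.30 × 10% + 0.01 × 90% ≈ 4%, so a bigger sample just estimates the wrong quantity more precisely.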
Still to discuss: the nuances of CDC surveillance data, and latent class analysis (LCA), growth mixture modeling (GMM), and cluster analysis as multivariate methods that could define groups (potentially replacing DSM diagnoses and single-variable ROC approaches).
Steps to put into practice
There are several different options for approaching the "common issues."
One is to look at what experts recommend. Handbooks often build their table of contents around what are seen as the most common issues. The web sites curated by different professional societies also provide good coverage of common issues.
A second strategy would be to gather estimates from external sources. We have pulled together several options on this page and on the "Benchmarking" pages that come in the next section.
A third option is to do an audit or build a local list of main issues. This could be as simple as jotting down a list of "Top Ten Clinical Issues," or it could be a formal query of a database of electronic medical records. At UNC Chapel Hill, we have been using a program called Titanium for our electronic medical record for several years. We pulled a report of the diagnoses that we entered as part of our routine clinical practice, and we will show some examples of the steps involved in using this to estimate local base rates.
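The counting step of such an audit can be sketched in a few lines of code. The column names, IDs, and diagnoses below are invented for illustration; a real export from Titanium or another EMR would have its own format:

```python
import csv
import io
from collections import Counter

# Hypothetical diagnosis report exported from an EMR (one row per
# client-diagnosis pair; data and column names are made up)
report = io.StringIO("""client_id,diagnosis
1001,ADHD
1001,Anxiety
1002,Depression
1003,ADHD
1003,Depression
1004,Anxiety
1005,ADHD
""")

rows = list(csv.DictReader(report))
n_clients = len({r["client_id"] for r in rows})
counts = Counter(r["diagnosis"] for r in rows)

# Local base rate = clients carrying the diagnosis / total clients seen
for dx, k in counts.most_common():
    print(f"{dx}: {k}/{n_clients} = {k / n_clients:.0%}")
```

Sorting by frequency gives a first-pass "Top Ten Clinical Issues" list; with real data, remember to count each diagnosis at most once per client so repeat visits do not inflate the rates.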
For now, we will focus first on clinical psychology, and loosely divide things into "youth" and "adult" assessment. Over time, we will expand to add other age groups, specialized populations, and additional professions.
Child and Adolescent Assessment Recommendations
Professional Society Web Sites
- Assessment in Children and Adolescents (4th ed.). New York, NY: Guilford Press.
- [Disclaimer: Eric Youngstrom and Mitch Prinstein are going to be the editors for the 5th edition of this book, which is tentatively scheduled for publication in late 2018 or 2019]
Adult Assessment Recommendations
Professional Society Web Sites
A guide to assessments that work. New York, NY: Oxford University Press.
First, we should acknowledge that a lot of experts think that using local rates is a bad idea. We'll briefly go through the biggest concerns.
We discuss these in more detail in the next section.
Tables and figures
(Rettew et al., 2009)
|Condition||A Guide to Assessments That Work Chapter||CDC||DAU||SDI||General Population*|
|ADHD||4||11% or 6.8%||23%||38%||--|
|Conduct Problems||5||3.5%||17% CD, 37% ODD||25% CD, 38% ODD||1.5-3.2% antisocial|
|Mood Disorders||--||--||17% MDD, 10% dysthymia||26% MDD, 8% dysthymia||--|
|Bipolar Disorder||9||--||--||--||2.9% NCS-A, 1% bipolar I, 2-4% spectrum|
|Self-Injurious Thoughts and Behaviors||10||--||--||--||--|
|Social Anxiety Disorder||12||--||6% (social)||20% (social)||9% social & 19% specific NCS-A|
|Panic Disorder||13||--||12% (panic)||11% (panic)||2.4% agoraphobia / 2.3% panic NCS-A|
|Substance Use Disorders||17||4.7%||14%||17%||--|
|Alcohol Use Disorder||18||4.2%||10%||13%||--|
|Schizophrenia||20||--||--||--||0.014% child|
|Couple Distress||22||--||--||--||~50% divorce rate (Ch. 22)|
|Pain in Children and Adolescents||26||--||--||--||--|
|Chronic Pain in Adults||27||--||--||--||--|
- Note. Adapted from Youngstrom & Van Meter (2016) https://en.wikiversity.org/wiki/Evidence_based_assessment CC-BY-SA 4.0.
- Epidemiological rates refer to general population, not treatment seeking samples, and so often represent a lower bound of what might be expected at a clinic.