Evidence-based assessment/Step 7: Add more intensive methods to finalize diagnoses and case formulation

Medical disclaimer: This page is for educational and informational purposes only and may not be construed as medical advice. The information is not intended to replace medical advice offered by physicians. Please refer to the full text of the Wikiversity medical disclaimer.



Prescription Phase: More Intensive Methods to Finalize Diagnoses and Case Formulations

Overview

Updating Probabilities

One way of deciding diagnoses and treatment targets is to add more information, pushing some probabilities up into the Treatment Zone. This is the approach taken by several projects with IBM's Watson, along with other applications of machine learning.
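
The arithmetic behind this updating follows the same logic as a probability nomogram: convert the working probability to odds, multiply by the likelihood ratio attached to the new finding, and convert back. A minimal Python sketch, with illustrative numbers rather than values from any validated instrument:

 def update_probability(prior_prob, likelihood_ratio):
     """Revise a diagnostic probability using a likelihood ratio."""
     prior_odds = prior_prob / (1.0 - prior_prob)
     posterior_odds = prior_odds * likelihood_ratio
     return posterior_odds / (1.0 + posterior_odds)
 
 # Illustrative only: a 25% working estimate combined with a positive
 # finding carrying a likelihood ratio of 7 rises to 70%, which might
 # cross a clinic's threshold into the Treatment Zone.
 print(round(update_probability(0.25, 7.0), 2))  # 0.7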

Structured and Semi-Structured Diagnostic Interviews

Comparing typical clinical interviews to more structured formats reveals several patterns. Agreement about some diagnoses is modest overall[1], and significantly worse for many[2]. Some of the gaps are likely due to fiscal considerations: many payors will not reimburse psychosocial treatment for conduct disorder, creating an incentive to attribute the same behaviors to something reimbursable. Others are due to gaps or variations in training, such as the wide discrepancies in diagnosing pediatric bipolar disorder or autism[3].

Clinicians also tend to diagnose fewer conditions than a more structured interview detects[4]. Sometimes more structured interviews detect cases that lack sufficient impairment to warrant clinical concern; but at least as important is the pattern of clinicians calling off the search as soon as they have confirmed a (reimbursable) diagnosis[5]. This leads them to underestimate rates of comorbidity compared to structured diagnostic interviews (SDIs), often missing additional treatment targets or factors that might moderate treatment selection. Reviews of notes and interviews also find that clinicians often assign a diagnosis prematurely, without having documented enough symptoms to meet formal criteria[6].

Adding SDIs improves diagnostic reliability across a gamut of clinical settings, from pediatric community mental health clinics[7][8] to emergency rooms[9] and inpatient units[10]; it improves psychosocial outcomes, and it provides equally good symptom reduction with less clinician time, allowing more patients to be seen or less expense per patient[11]. When the clinician's diagnosis of record happened to be confirmed by an SDI, families engaged more with treatment, had fewer no-shows or cancellations, and showed significantly better improvement in internalizing outcomes[12], indirectly suggesting that more accurate diagnosis leads to better process and outcome.
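
In these comparison studies, "agreement" is typically indexed with a chance-corrected statistic such as Cohen's kappa. As a worked illustration (the counts below are invented, not taken from the cited studies), kappa can be computed from a 2×2 cross-tabulation of clinician versus SDI diagnoses:

 def cohens_kappa(both_yes, clin_only, sdi_only, both_no):
     """Cohen's kappa for a 2x2 table of clinician vs. SDI diagnoses."""
     n = both_yes + clin_only + sdi_only + both_no
     observed = (both_yes + both_no) / n
     # Chance agreement from each rater's marginal rate of diagnosing
     clin_yes = (both_yes + clin_only) / n
     sdi_yes = (both_yes + sdi_only) / n
     expected = clin_yes * sdi_yes + (1 - clin_yes) * (1 - sdi_yes)
     return (observed - expected) / (1 - expected)
 
 # Invented counts: 75% raw agreement shrinks to kappa = .43 once
 # chance agreement is removed, a "modest" level of agreement.
 print(round(cohens_kappa(20, 15, 10, 55), 2))  # 0.43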

Despite these advantages, clinicians rarely choose to use semi-structured interviews[13]. They see them as infringing on their professional autonomy and potentially damaging rapport with clients[14][15]. Ironically, surveys of clients find that they actually prefer semi-structured interviews, believing that the evaluation is more thorough and that the clinician gains a more comprehensive understanding of the client's needs and situation[16][15]. Recent surveys also indicate that cost and practical barriers may be the main stumbling blocks as attitudes become less of an issue. Development and validation of free SDIs – such as the Child and Adolescent Psychiatric Assessment (CAPA) and the Development and Well-Being Assessment (DAWBA) – directly address the cost obstacle[17], as do the growing number of free but validated instruments for other niches[18].

Why not just use the semi-structured interview as the basis for diagnosis, skipping the preamble of the rating scales and risk factors? First, no single interview covers all of the diagnoses and clinical topics that a clinician might encounter, and surprisingly, some of the omissions are common issues: older interviews did not ask about ADHD in adults, nor about mania or hypomania in youths, and many versions of the Kiddie Schedule for Affective Disorders and Schizophrenia (KSADS) lack modules for pervasive developmental disorders (e.g., [19][20]). Second, the training requirements, and the issue of rater drift, create practical obstacles to implementation. Finally, a questionnaire or computer may be more likely to elicit disclosures about sensitive issues that are difficult for parents or patients to bring up in person[21]. It may be easier for our client to first broach her drinking, her thoughts of self-harm, or her confusion about her sexuality in the privacy of a questionnaire or computer session than face-to-face. The discomfort or ease of addressing sensitive topics is likely to vary across cultures as well, with degree of stigma as a key factor[22].

A hybrid approach, using rating scales and risk factors to generate the contending hypotheses, can guide the selection of interview modules for follow-up. It becomes possible to "take the best," picking the sections of different interviews that are best suited to each purpose: the KSADS modules for mood disorder[23][19], the ADIS for anxiety[24], and so forth.
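
A minimal sketch of how that selection step could be operationalized, assuming hypothetical scale names, a T-score threshold of 65, and a scale-to-module mapping chosen purely for illustration (not a validated protocol):

 # Hypothetical mapping from elevated screening scales to the interview
 # modules best suited to follow up each contending hypothesis.
 MODULE_FOR_SCALE = {
     "mood": "KSADS mood disorder modules",
     "anxiety": "ADIS anxiety modules",
     "attention": "KSADS ADHD module",
 }
 
 def select_modules(t_scores, threshold=65.0):
     """Return follow-up modules for every scale at or above threshold."""
     return [module for scale, module in MODULE_FOR_SCALE.items()
             if t_scores.get(scale, 0.0) >= threshold]
 
 # Elevated mood and anxiety scales trigger two follow-up modules;
 # the attention scale stays below threshold and is not pursued.
 print(select_modules({"mood": 72, "anxiety": 68, "attention": 55}))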

The DAWBA already integrates checklist responses (specifically the SDQ) into the formal diagnostic process, combining them with structured interview data and clinical judgment[25]. Intriguingly, the DAWBA offers the possibility of subcontracting the clinical-judgment component, noting that it may be more cost-effective to pay a small fee for outside expertise than to train and support personnel for that role in-house (http://www.dawba.info/d0.html, accessed January 19, 2016).


Cognitive and Achievement Testing

Cognitive and achievement testing may also play an important role at this stage for some cases. If the referral question focuses on academic performance, or if the contending hypotheses include a learning disorder, then cognitive and achievement testing clearly provide valuable data[26]. Sometimes the referral question focuses on possible ADHD or learning disability, and cognitive testing may reveal that the person has average or low-average ability and has been keeping up in a fast-paced or challenging environment by dint of hard work, rather than having some other disorder. Similarly, autism and developmental disorder evaluations need to gauge whether social deficits are out of line with verbal and nonverbal cognitive ability. Brief, four-subtest ability tests are well suited to recognizing these scenarios[27][28], and verbal ability estimates may also help gauge the suitability of more cognitive components in therapy versus emphasizing behavioral interventions.

In contrast, despite decades of speculation and clinical lore about subtest scatter or profile analysis being associated with clinical conditions such as ADHD, the EBA framework makes clear that the associations are too weak to be informative for individual decision-making[29]. Deficits in processing speed or working memory are non-specific, meaning that many different clinical conditions can show them, so the pattern does not help to pick between the hypotheses. Ipsative analysis has technical psychometric problems that make it unlikely that apparent strengths and weaknesses will be stable over time or show incremental predictive value[5][30][31]. The validity coefficients for checklists are significantly higher for predicting diagnoses such as ADHD or autism than those found with cognitive profile analysis[32][33], and checklists can usually be administered and scored much more quickly and at much lower cost. Even clients who would benefit from assessment of cognitive ability usually will not need the traditional full battery; in most cases, a four-subtest estimate of general ability will provide the most robust and empirically supported information. Specific questions about neuropsychological deficits would also be better addressed by tailored, high-validity batteries[34] than by omnibus cognitive ability tests that try to double as neuropsychological Swiss Army knives but are not adequate for high-stakes use[5].
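
The instability of ipsative "strengths and weaknesses" follows directly from classical test theory: the reliability of a difference between two subtests falls as the subtests correlate more highly with each other. A small sketch with illustrative values:

 def difference_score_reliability(r_xx, r_yy, r_xy):
     """Classical test theory reliability of a difference score.
 
     r_xx, r_yy: reliabilities of the two subtest scores
     r_xy: correlation between the two subtests
     """
     return ((r_xx + r_yy) / 2.0 - r_xy) / (1.0 - r_xy)
 
 # Two subtests each with reliability .85 that correlate .70 yield a
 # difference score with reliability of only .50, far too unstable
 # to anchor individual decisions about "scatter."
 print(round(difference_score_reliability(0.85, 0.85, 0.70), 2))  # 0.5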

Rationale

Steps to put into practice

Structured Diagnostic Interviews

Semi-Structured Diagnostic Interviews

Semi-structured interviews require more training to use consistently, and periodic re-calibration to avoid drift. Large grants can support this type of infrastructure, but it is challenging to implement in clinical practice.

  • Kiddie Schedule for Affective Disorders and Schizophrenia (KSADS) – a set of semi-structured interviews widely used in research, but intended for use by trained mental health clinicians and more in-depth than may be feasible in many clinical settings
  • Diagnostic Interview for Anxiety, Mood, and Obsessive-Compulsive and Related Neuropsychiatric Disorders (DIAMOND) – a free semi-structured interview paired with a self-report screening questionnaire
  • Structured Clinical Interview for DSM (SCID) – one of the most widely used interviews with adults, with DSM-5 and DSM-IV versions (the older versions had a separate interview for personality disorders, which were on "Axis II" of the five axes in the DSM-IV)
  • Composite International Diagnostic Interview (CIDI)
  • Child and Adolescent Psychiatric Assessment (CAPA) – a free option discussed above

Tables and figures

References

  1. Rettew, D. C., Lynch, A. D., Achenbach, T. M., Dumenci, L., & Ivanova, M. Y. (2009). Meta-analyses of agreement between diagnoses made from clinical evaluations and standardized diagnostic interviews. International Journal of Methods in Psychiatric Research, 18(3), 169-184. doi: 10.1002/mpr.289
  2. Lewczyk, C. M., Garland, A. F., Hurlburt, M. S., Gearity, J., & Hough, R. L. (2003). Comparing DISC-IV and clinician diagnoses among youths receiving public mental health services. Journal of the American Academy of Child and Adolescent Psychiatry, 42(3), 349-356.
  3. Dubicka, B., Carlson, G. A., Vail, A., & Harrington, R. (2008). Prepubertal mania: Diagnostic differences between US and UK clinicians. European Child & Adolescent Psychiatry, 17, 153-161. doi: 10.1007/s00787-007-0649-5
  4. Jensen-Doss, A., Youngstrom, E. A., Youngstrom, J. K., Feeny, N. C., & Findling, R. L. (2014). Predictors and moderators of agreement between clinical and research diagnoses for children and adolescents. Journal of Consulting & Clinical Psychology, 82(6), 1151-1162. doi: 10.1037/a0036657
  5. Canivez, G. L. (2013). Psychometric versus actuarial interpretation of intelligence and related aptitude batteries. In D. H. Saklofske, V. L. Schwean, & C. R. Reynolds (Eds.), The Oxford Handbook of Child Psychological Assessments (pp. 84-112). New York, NY: Oxford University Press; Croskerry, P. (2002). Achieving quality in clinical decision making: Cognitive strategies and detection of bias. Academic Emergency Medicine, 9(11), 1184-1204. doi: 10.1197/aemj.9.11.1184
  6. Miller, P. R. (2002). Inpatient diagnostic assessments: 3. Causes and effects of diagnostic imprecision. Psychiatry Research, 111(2–3), 191-197. doi: 10.1016/S0165-1781(02)00147-6
  7. Hughes, C. W., Rintelmann, J., Mayes, T., Emslie, G. J., Pearson, G., & Rush, A. J. (2000). Structured interview and uniform assessment improves diagnostic reliability. Journal of Child and Adolescent Psychopharmacology, 10(2), 119-131.
  8. Jensen-Doss, A., Youngstrom, E. A., Youngstrom, J. K., Feeny, N. C., & Findling, R. L. (2014). Predictors and moderators of agreement between clinical and research diagnoses for children and adolescents. Journal of Consulting & Clinical Psychology, 82(6), 1151-1162. doi: 10.1037/a0036657
  9. Miller, P. R. (2002). Inpatient diagnostic assessments: 3. Causes and effects of diagnostic imprecision. Psychiatry Research, 111(2–3), 191-197. doi: 10.1016/S0165-1781(02)00147-6
  10. Miller, P. R., Dasher, R., Collins, R., Griffiths, P., & Brown, F. (2001). Inpatient diagnostic assessments: 1. Accuracy of structured vs. unstructured interviews. Psychiatry Research, 105(3), 255-264. doi: 10.1016/S0165-1781(01)00317-1
  11. Hughes, C. W., Emslie, G. J., Wohlfahrt, H., Winslow, R., Kashner, T. M., & Rush, A. J. (2005). Effect of structured interviews on evaluation time in pediatric community mental health settings. Psychiatric Services, 56(9), 1098-1103. doi: 10.1176/appi.ps.56.9.1098
  12. Jensen-Doss, A., & Weisz, J. R. (2008). Diagnostic agreement predicts treatment process and outcomes in youth mental health clinics. Journal of Consulting and Clinical Psychology, 76(5), 711-722. doi: 10.1037/0022-006x.76.5.711
  13. Jensen-Doss, A., & Hawley, K. M. (2010). Understanding barriers to evidence-based assessment: clinician attitudes toward standardized assessment tools. Journal of Clinical Child and Adolescent Psychology, 39(6), 885-896. doi: 10.1080/15374416.2010.517169
  14. Meehl, P. E. (1997). Credentialed persons, credentialed knowledge. Clinical Psychology: Science and Practice, 4, 91-98. doi: 10.1111/j.1468-2850.1997.tb00103.x
  15. Suppiger, A., In-Albon, T., Hendriksen, S., Hermann, E., Margraf, J., & Schneider, S. (2009). Acceptance of structured diagnostic interviews for mental disorders in clinical practice and research settings. Behavior Therapy, 40(3), 272-279. pii: S0005-7894(08)00088-9
  16. Bruchmuller, K., Margraf, J., Suppiger, A., & Schneider, S. (2011). Popular or unpopular? Therapists' use of structured interviews and their estimation of patient acceptance. Behavior Therapy, 42(4), 634-643. doi: 10.1016/j.beth.2011.02.003
  17. Bruchmuller, K., Margraf, J., Suppiger, A., & Schneider, S. (2011). Popular or unpopular? Therapists' use of structured interviews and their estimation of patient acceptance. Behavior Therapy, 42(4), 634-643. doi: 10.1016/j.beth.2011.02.003
  18. Beidas, R. S., Stewart, R. E., Walsh, L., Lucas, S., Downey, M. M., Jackson, K., . . . Mandell, D. S. (2015). Free, brief, and validated: Standardized instruments for low-resource mental health settings. Cognitive & Behavioral Practice, 22(1), 5-19. doi: 10.1016/j.cbpra.2014.02.002
  19. Kaufman, J., Birmaher, B., Brent, D., Rao, U., Flynn, C., Moreci, P., . . . Ryan, N. (1997). Schedule for Affective Disorders and Schizophrenia for School-Age Children-Present and Lifetime version (K-SADS-PL): Initial reliability and validity data. Journal of the American Academy of Child & Adolescent Psychiatry, 36(7), 980-988. doi: 10.1097/00004583-199707000-00021
  20. Orvaschel, H. (1995). Schizophrenia and Affective Disorders Schedule for children - Epidemiological Version (KSADS-E). Nova Southeastern University, Ft. Lauderdale, FL.
  21. Shaffer, D., Fisher, P., Lucas, C. P., Dulcan, M. K., & Schwab-Stone, M. E. (2000). NIMH Diagnostic Interview Schedule for Children Version IV (NIMH DISC-IV): Description, differences from previous versions, and reliability of some common diagnoses. Journal of the American Academy of Child & Adolescent Psychiatry, 39(1), 28-38.
  22. Hinshaw, S. P., & Cicchetti, D. (2000). Stigma and mental disorder: Conceptions of illness, public attitudes, personal disclosure, and social policy. Development & Psychopathology, 12(4), 555-598. doi: 10.1017/S0954579400004028
  23. Geller, B., Zimerman, B., Williams, M., Bolhofner, K., Craney, J. L., DelBello, M. P., & Soutullo, C. (2001). Reliability of the Washington University in St. Louis Kiddie Schedule for Affective Disorders and Schizophrenia (WASH-U-KSADS) mania and rapid cycling sections. Journal of the American Academy of Child & Adolescent Psychiatry, 40(4), 450-455.
  24. Di Nardo, P., Moras, K., Barlow, D. H., Rapee, R. M., & Brown, T. A. (1993). Reliability of DSM-III-R anxiety disorder categories. Using the Anxiety Disorders Interview Schedule-Revised (ADIS-R). Archives of General Psychiatry, 50(4), 251-256.
  25. Wichstrøm, L., Berg-Nielsen, T. S., Angold, A., Egger, H. L., Solheim, E., & Sveen, T. H. (2012). Prevalence of psychiatric disorders in preschoolers. Journal of Child Psychology and Psychiatry, 53, 695-705. doi: 10.1111/j.1469-7610.2011.02514.x
  26. Fletcher, J. M., Francis, D. J., Morris, R. D., & Lyon, G. R. (2005). Evidence-based assessment of learning disabilities in children and adolescents. Journal of Clinical Child and Adolescent Psychology, 34, 506-522. doi: 10.1207/s15374424jccp3403_7
  27. Glutting, J. J., Adams, W., & Sheslow, D. (2000). Wide Range Intelligence Test Manual. Wilmington, DE: Wide Range.
  28. The Psychological Corporation. (1999). Wechsler Abbreviated Scale of Intelligence Manual. San Antonio, TX: Harcourt Brace and Company; Watkins, M. W., Kush, J. C., & Glutting, J. J. (1997). Prevalence and diagnostic utility of the WISC-III SCAD profile among children with disabilities. School Psychology Quarterly, 12(3), 235-248.
  29. Watkins, M. W., Kush, J. C., & Glutting, J. J. (1997). Discriminant and predictive validity of the WISC-III ACID profile among children with learning disabilities. Psychology in the Schools, 34(4), 309-319. doi: 10.1002/(SICI)1520-6807(199710)34:4<309::AID-PITS2>3.0.CO;2-G
  30. Macmann, G. M., & Barnett, D. W. (1997). Myth of the master detective: Reliability of interpretations for Kaufman's "intelligent testing" approach to the WISC-III. School Psychology Quarterly, 12(3), 197-234. 
  31. McDermott, P. A., Fantuzzo, J. W., Glutting, J. J., Watkins, M. W., & Baggaley, A. R. (1992). Illusions of meaning in the ipsative assessment of children's ability. Journal of Special Education, 25(4), 504-526. 
  32. Ozonoff, S., Goodlin-Jones, B. L., & Solomon, M. (2005). Evidence-based assessment of autism spectrum disorders in children and adolescents. Journal of Clinical Child and Adolescent Psychology, 34, 523-540. doi: 10.1207/s15374424jccp3403_8
  33. Pelham, W. E., Jr., Fabiano, G. A., & Massetti, G. M. (2005). Evidence-based assessment of attention deficit hyperactivity disorder in children and adolescents. Journal of Clinical Child & Adolescent Psychology, 34, 449-476. doi: 10.1207/s15374424jccp3403_5
  34. Sheslow, D., & Adams, W. (2003). Wide Range Assessment of Memory and Learning, Second Edition. Lutz, FL: PAR.