Evidence-based assessment/Prescription phase/Interviews Are Not Perfect
Deep Thought/Rabbit Hole[edit | edit source]

Are interviews perfect? It would be nice if they were, but they aren't.

Unstructured interviews have all kinds of limitations, as covered in the Jensen-Doss et al. (2020) chapter.

Fully structured interviews have the highest inter-rater reliability (because everyone is getting the same script), which should improve the validity of conclusions based on them (remember that reliability sets an upper limit on validity). But if the patient does not understand the question, or misinterprets what is being asked, that will lower the validity. There is a huge debate in epidemiology about whether diagnoses made by research assistants using a structured interview (such as a SCID, MINI, or CIDI) represent "real" cases with clinical impairment. (Bird; Kessler)
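The "reliability sets an upper limit on validity" point can be made concrete with the classical attenuation formula: the observed validity correlation cannot exceed the square root of the product of the reliabilities of the test and the criterion. A minimal sketch (the reliability values below are hypothetical, chosen only to contrast a highly standardized interview with a less reliable one):

```python
import math

def max_validity(rel_test: float, rel_criterion: float) -> float:
    """Upper bound on an observed validity correlation under classical
    test theory: r_xy <= sqrt(r_xx * r_yy)."""
    return math.sqrt(rel_test * rel_criterion)

# Hypothetical reliabilities: a fully structured interview with high
# inter-rater reliability vs. a less standardized interview, with the
# criterion's reliability held at .90 in both cases.
print(max_validity(0.95, 0.90))  # structured interview
print(max_validity(0.70, 0.90))  # less standardized interview
```

Note that this is only a ceiling: high reliability makes high validity possible, but (as the structured-interview debate above illustrates) it does not guarantee it.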

Semi-structured interviews are supposed to hit the sweet spot: the structure makes sure that we ask about the "vital few" topics for all clients, while still going deeper on topics that are relevant to the individual. The clinician/interviewer can rephrase things, departing from the script to improve communication. This improves validity compared to fully structured interviews, but it lowers inter-rater reliability (because every interviewer gets some discretion about how to ask questions and how to interpret the answers). That wiggle room is why semi-structured approaches are supposed to be used only by trained clinicians. Even that is not a magical solution, given the potentially large differences in clinicians' training and theoretical models. The actual validity of a semi-structured approach lands somewhere between the gain from adding clinical judgment and the loss of inter-rater reliability that comes from letting interviewers improvise.

LEAD Diagnoses[edit | edit source]

Describe LEAD. Talk about whether we can do this in clinical practice.

Research Corner[edit | edit source]

Talk about the ways of estimating the validity of SDIs (e.g., comparing to LEAD).

Kraemer (1992) and effects on AUC.

Could paste in cuttings from parent page.

Psychometric properties of common diagnostic interviews[edit | edit source]

Composite International Diagnostic Interview (CIDI) 3.0[1]
  AUC: adolescent report .57; parent report .71 (N=321)
  LR+: 8.36 (adolescent report); 9.67 (parent report), when classified positive by the CIDI
  LR-: 0.86 (adolescent report); 0.56 (parent report), when classified negative by the CIDI
  Clinical generalizability: Moderate
  Study description: Used the NCS-A sample of 10,148 adolescents aged 13-17 and their parents.[2]

Mini International Neuropsychiatric Interview for Children and Adolescents (MINI-KID)[3]
  AUC: .81-.96 for diagnoses of any mood, anxiety, substance use, ADHD/behavioral, and eating disorders; .94 for psychotic disorders (N=226)
  LR+: greater than or equal to 3.21
  LR-: 0.00 to 0.31
  Study description: Sample of 226 children and adolescents aged 6-17 (190 outpatients and 36 controls), recruited from a South Florida psychiatric center.[4]

Note: “LR+” is the likelihood ratio associated with a positive test score, and “LR-” is the likelihood ratio associated with a negative (low) score. A likelihood ratio of 1 indicates that the test result did not change the clinical impression at all. LRs larger than 10 or smaller than 0.10 are frequently clinically decisive; ratios around 5 or 0.20 are helpful; and ratios between 2.0 and 0.5 are small enough that they rarely result in clinically meaningful changes of formulation (Sackett et al., 2000).
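How a likelihood ratio revises a diagnostic impression can be shown with the odds form of Bayes' theorem: convert the pretest probability to odds, multiply by the LR, and convert back. A minimal sketch, using the parent-report CIDI likelihood ratios from the table above and a hypothetical 10% pretest probability:

```python
def posttest_probability(pretest_prob: float, lr: float) -> float:
    """Apply a likelihood ratio via Bayes' theorem in odds form:
    posttest odds = pretest odds * LR."""
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1 + posttest_odds)

# Hypothetical 10% pretest probability, with the parent-report CIDI
# likelihood ratios from the table (LR+ = 9.67, LR- = 0.56).
print(round(posttest_probability(0.10, 9.67), 3))  # positive result → 0.518
print(round(posttest_probability(0.10, 0.56), 3))  # negative result → 0.059
```

This illustrates the Sackett et al. (2000) benchmarks: the strong LR+ moves a 10% pretest probability past 50%, while the weak LR- (between 0.5 and 1) barely shifts the formulation.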

  1. Merikangas, Kathleen; Avenevoli, Shelli; Costello, Jane; Koretz, Doreen; Kessler, Ronald C. (April 2009). "The National Comorbidity Survey Adolescent Supplement (NCS-A): I. Background and Measures". Journal of the American Academy of Child and Adolescent Psychiatry 48 (4): 367-9. doi:10.1097/CHI.0b013e31819996f1. PMID 19242382. PMC 2736858. //www.ncbi.nlm.nih.gov/pmc/articles/PMC2736858/. 
  2. Green, Jennifer Greif; Avenevoli, Shelli; Finkelman, Matthew; Gruber, Michael J.; Kessler, Ronald C.; Merikangas, Kathleen R.; Sampson, Nancy A.; Zaslavsky, Alan M. (March 2010). "Attention deficit hyperactivity disorder: concordance of the adolescent version of the Composite International Diagnostic Interview Version 3.0 (CIDI) with the K-SADS in the US National Comorbidity Survey Replication Adolescent (NCS-A) supplement". International Journal of Methods in Psychiatric Research 19 (1): 34-49. doi:10.1002/mpr.303. PMID 20191660. PMC 2938790. //www.ncbi.nlm.nih.gov/pmc/articles/PMC2938790/. 
  3. Sheehan, David V.; Sheehan, Kathy H.; Shytle, R. Douglas; Janavs, Juris; Bannon, Yvonne; Rogers, Jamison E.; Milo, Karen M.; Stock, Saundra L. et al. (March 2010). "Reliability and validity of the Mini International Neuropsychiatric Interview for Children and Adolescents (MINI-KID)". Journal of Clinical Psychiatry 71 (3): 313-26. doi:10.4088/JCP.09m05305whi. PMID 20331933. 
  4. Sheehan, David V. et al. (March 2010); see reference 3.