Evidence-based assessment/Assessment Center

From Wikiversity

Subject classification: this is a psychology resource.

The HGAPS Assessment Center was created by the UNC-Chapel Hill chapter of Helping Give Away Psychological Science in collaboration with the Society for Clinical Child and Adolescent Psychology (SCCAP), the North Carolina Psychological Association (NCPA), Division 5 of the American Psychological Association, and Division 12 Section 9 of the American Psychological Association. Clinical psychologists in the Department of Psychology at the University of North Carolina at Chapel Hill also consulted on its creation. The Assessment Center has two versions: one for the general public and one for clinicians.

Here is a video describing how the Assessment Center works, what it contains, and what is coming next.

Frequently asked questions (FAQs)[edit | edit source]

Q: Are my answers confidential? Who can see them?

A: Your privacy is important. Unlike most websites, the Assessment Center does not use cookies, and it does not track any information about you, your phone, or your computer. Your responses are anonymous, meaning that no identifying information is stored with them. The tools have been used hundreds to tens of thousands of times, so each person's score is one needle in a very large (and growing) haystack. Anonymous (not identifiable) is an even stronger protection than "confidential."

Q: I don't feel comfortable using measures when I don't know the research behind them. I realize that you have many different measures in your library, but how can I know whether they are reliable and valid?

A: We are always glad when someone wants to see the evidence rather than just use whatever is available!

There are several parts to the answer:

How We Prioritize Measures[edit | edit source]

We use reviews and meta-analyses to help decide which measures to prioritize. Our original set drew on the Beidas et al. review of the best free measures available[1]. We picked some measures from it, and we supplemented the list with results from meta-analyses (Stockings et al., 2015[2]; Youngstrom et al., 2018[3]; Youngstrom, Genzlinger, Egerton, & Van Meter, 2015[4]). Most recently, we have been asking content experts who are updating handbooks on assessment (the Mash & Barkley series with Guilford, which Mitch Prinstein, PhD, and Eric A. Youngstrom, PhD, are co-editing, and the Hunsley & Mash Guide to Assessments That Work, 2nd ed.[5]) to recommend a reasonable starter kit of measures.

There is not perfect overlap: Not everything that fared well in the reviews is up on Wiki, nor is everything on Wiki top tier in the reviews (we did not write every assessment-related page on Wikipedia). Instead, think of it as a garden where we are using the research reviews to decide which things to cultivate and help grow more rapidly.

Our Priorities[edit | edit source]

  1. Evidence of validity from multiple samples (we pay more attention to validity than reliability; if a measure shows good validity, it is reliable enough!)
  2. Free measures come before commercially distributed ones (the commercial ones have their own advertising budgets; the best free measures are underutilized because people do not know about them or where to find them).
  3. Measures with promise of clinical utility take precedence over basic research measures.

The measures that go into the Assessment Center are filtered even more: we are hand-picking all of them and having ongoing conversations with colleagues to refine the list. Everything in this smaller set fared well in the reviews and has a good-sized research base (at least four different samples, and usually dozens, behind it).
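As a toy illustration (not actual HGAPS code), the three priorities above could be encoded as a filter-and-sort over candidate measures. The records and field names below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Measure:
    name: str
    validity_samples: int    # independent samples with validity evidence
    is_free: bool            # freely available vs. commercially distributed
    clinical_utility: bool   # promise of clinical utility vs. basic research only

# Hypothetical candidates, for illustration only.
candidates = [
    Measure("Free Screener A", validity_samples=12, is_free=True, clinical_utility=True),
    Measure("Commercial Scale B", validity_samples=30, is_free=False, clinical_utility=True),
    Measure("Lab Task C", validity_samples=2, is_free=True, clinical_utility=False),
]

# Priority 1 as a hard floor: at least four independent validity samples.
eligible = [m for m in candidates if m.validity_samples >= 4]

# Priorities 2 and 3 as tie-breakers: free first, then clinical utility,
# then depth of the validity evidence.
eligible.sort(key=lambda m: (not m.is_free, not m.clinical_utility, -m.validity_samples))

for m in eligible:
    print(m.name)
```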

Where Can We Find This Evidence?[edit | edit source]

It is always possible to do your own search in Google Scholar or PubMed (both of which are free). The TRIP Database is helpful as an aggregation tool, and it finds practice guidelines that would be harder to locate with other search engines, but it does not do a great job covering psychological assessments. TRIP was designed to cover all of medicine, so in fairness, psychological assessment is a special niche. The Buros Mental Measurements Yearbook does not prioritize reviewing free measures, so it is not as helpful in this context as we had hoped.
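For those comfortable with a little code, PubMed can also be queried programmatically. Here is a minimal sketch using NCBI's public E-utilities API; the endpoint and parameters are real, but the search term is just an example.

```python
import requests

# NCBI E-utilities search endpoint (public; no API key required for light use).
ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def search_pubmed(term, max_results=20):
    """Return a list of PubMed IDs (PMIDs) matching the search term."""
    params = {"db": "pubmed", "term": term, "retmax": max_results, "retmode": "json"}
    response = requests.get(ESEARCH_URL, params=params, timeout=30)
    response.raise_for_status()
    return response.json()["esearchresult"]["idlist"]

# Example: look for validity evidence on a free depression screener.
for pmid in search_pubmed("PHQ-9 AND validity AND adolescents"):
    print(f"https://pubmed.ncbi.nlm.nih.gov/{pmid}/")
```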

The goal, though, is to have the information just a click away. We would like to get the supporting research onto the Wikipedia and Wikiversity pages. It has been a slower process than we expected, for several reasons, but we are getting the hang of it. We have added the equivalent of seven journal articles' worth of words to Wiki over the last two years.

Wikipedia Documentation[edit | edit source]

Wikipedia is intended for the general public, so we are not putting technical information about measures there. Wikipedia editors want to see review articles, chapters, meta-analyses, and books as citations, and they often reject citations to the primary research studies. That makes it hard to put a bibliography of a measure's evidence on Wikipedia. If someone writes a review or meta-analysis, we can cite that; but this leaves an awkward zone where a free measure has multiple research studies supporting it but no review, so we cannot add those citations to the measure's Wikipedia page and have them stick.

Wikiversity Documentation[edit | edit source]

Wikiversity is our way around this. We can write a page on Wikiversity that gets more technical, includes a more thorough supporting bibliography, and has links to other goodies (scoring syntax, a “live” version in the Assessment Center). We have had a lot more artistic control over the content on the Wikiversity pages, and we have spent more time on them over the last two years as a result.

The rate-limiting factor right now is the number of people who know how to make edits. Most of the HGAPS editors are undergraduates or early graduate students, so they do not know the literature off the top of their heads. For researchers, writing content for Wiki has not been a high priority, because it counts as service, not peer-reviewed research.

That is changing, though.

Things to Look for in 2020[edit | edit source]

  • More citations on Wikiversity: We are building a "library in the cloud" with all of the citations from a couple dozen reviews. It lives in Zotero, a free reference management program. HGAPS and others will be able to use it to add citations quickly to Wiki. It took a while to build, but with one resource holding 2,000+ citations, shared with 100+ people along with a to-do list, we think we will make a lot of progress quickly. (A sketch of pulling citations from such a shared library appears after this list.)
  • Peer Reviewed Wiki Articles: There are Wikipedia Journals, a hybrid: open access (no fee!), peer reviewed, with an author byline (so researchers get credit!), a DOI (so people know it is legitimate!), a stable version (a free PDF), and written in Wiki format, so it has live hyperlinks and chunks are easy to move onto Wikipedia. This is a relatively new development, but a very exciting one. It creates a path where researchers can earn peer-reviewed credit while writing content for Wiki. Thomas Shafee, a molecular biologist from Australia, gave a couple of talks at the APA Convention in Chicago that laid out the model and some intriguing examples (see the Lysine page on Wikipedia).
  • Branding Pages: Mian-Li Ong got a pair of icons/boxes made that let us tag pages to show there is a sister page on Wikipedia or Wikiversity. There is always more to be done, but it is a visual signal to those in the know that we were there and made some improvements to the content.
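As a sketch of how a shared "library in the cloud" can be reused, here is one way to pull citations from a public Zotero group library via the Zotero Web API v3. The GROUP_ID below is a hypothetical placeholder, not the actual HGAPS library ID.

```python
import requests

GROUP_ID = "123456"  # hypothetical placeholder; substitute the real group library ID
BASE_URL = f"https://api.zotero.org/groups/{GROUP_ID}/items"

def fetch_citations(limit=25):
    """Fetch one page of items from a public Zotero group library."""
    response = requests.get(
        BASE_URL,
        params={"format": "json", "limit": limit},
        headers={"Zotero-API-Version": "3"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

# Print a quick title/date listing of the fetched items.
for item in fetch_citations():
    data = item["data"]
    print(data.get("title", "(untitled)"), "-", data.get("date", "n.d."))
```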

Resource lists[edit | edit source]

General public resources[edit | edit source]

  • This page details resources associated with the HGAPS Assessment Center, broken down by symptom area. This page is duplicated on the HGAPS Assessment Center.

Clinician resources[edit | edit source]

  • This page contains resources associated with the clinician version of the HGAPS Assessment Center, including (1) a list of the assessments in the clinician version, (2) resources for clinicians tied to specific content areas and symptom clusters, and (3) resources that otherwise support the evidence-based assessment process and increase the Assessment Center's utility for clinicians. This page is duplicated on the HGAPS Assessment Center.

References[edit | edit source]

  1. Beidas, Rinad S.; Stewart, Rebecca E.; Walsh, Lucia; Lucas, Steven; Downey, Margaret Mary; Jackson, Kamilah; Fernandez, Tara; Mandell, David S. (2015-02-01). "Free, Brief, and Validated: Standardized Instruments for Low-Resource Mental Health Settings". Cognitive and Behavioral Practice 22 (1): 5–19. doi:10.1016/j.cbpra.2014.02.002. ISSN 1077-7229. PMID 25642130. PMC 4310476. http://www.sciencedirect.com/science/article/pii/S1077722914000145.
  2. Stockings, Emily; Degenhardt, Louisa; Lee, Yong Yi; Mihalopoulos, Cathrine; Liu, Angus; Hobbs, Megan; Patton, George (2015-03-15). "Symptom screening scales for detecting major depressive disorder in children and adolescents: A systematic review and meta-analysis of reliability, validity and diagnostic utility". Journal of Affective Disorders 174: 447–463. doi:10.1016/j.jad.2014.11.061. ISSN 0165-0327. http://www.sciencedirect.com/science/article/pii/S016503271400785X. 
  3. Youngstrom, Eric A.; Egerton, Gregory A.; Genzlinger, Jacquelynne; Freeman, Lindsey K.; Rizvi, Sabeen H.; Van Meter, Anna (2018-03). "Improving the global identification of bipolar spectrum disorders: Meta-analysis of the diagnostic accuracy of checklists.". Psychological Bulletin 144 (3): 315–342. doi:10.1037/bul0000137. ISSN 1939-1455. https://psycnet.apa.org/doi/10.1037/bul0000137. 
  4. Youngstrom, Eric A.; Genzlinger, Jacquelynne E.; Egerton, Gregory A.; Van Meter, Anna R. (2015-11-16). "Multivariate meta-analysis of the discriminative validity of caregiver, youth, and teacher rating scales for pediatric bipolar disorder: Mother knows best about mania.". Archives of Scientific Psychology 3 (1): 112–137. doi:10.1037/arc0000024. ISSN 2169-3269. http://dx.doi.org/10.1037/arc0000024. 
  5. Hunsley, John; Mash, Eric J., eds. (2018). A Guide to Assessments That Work (2nd ed.). New York: Oxford University Press.