OToPS/Measures/Child Mania Rating Scale


Lead section


The lead section gives a quick summary of what the assessment is. Here are some pointers (please do not use bullet points when writing the article):

  1. Make sure to include a link to the "anchor citation."
  2. What are its acronyms?
  3. What is its purpose?
  4. What population is it intended for? What do the items measure?
  5. How long does it take to administer?
  6. Who (individual or group) created it?
  7. How many questions does it contain? Is it multiple choice?
  8. What has been its impact on the clinical world in general?
  9. Who uses it? Clinicians? Researchers? In what settings?
  10. Using the Edit Source function, remove the collapse top and collapse bottom templates (the double curly braces) to show the content.

Reliability


Steps for evaluating reliability and validity

  1. Evaluate the instrument by referring to the rubrics for evaluating reliability and validity (both external Wikiversity pages). For easy reference, open these pages in separate tabs.
    1. Reliability rubric
    2. Validity rubric
  2. Refer to the relevant instrument rubric table. This is the table that you will be editing. Do not confuse it with the external pages on reliability and validity.
    1. Instrument rubric table: Reliability
    2. Instrument rubric table: Validity
  3. Depending on whether the instrument is adequate, good, excellent, or too good:
    1. Insert your rating.
    2. Add the evidence from journal articles that supports your evaluation.
    3. Provide citations.
  4. Refer to the heading for the instrument rubric table ("Rubric for evaluating norms and reliability for the XXX ... indicates new construct or category").
    1. Make sure that you change the name of the instrument accordingly.
  5. Using the Edit Source function, remove the collapse top and collapse bottom templates (the double curly braces) to show the content.

Instrument rubric table: Reliability


Note: Not all of the different types of reliability apply to the way that questionnaires are typically used. Internal consistency (whether all of the items measure the same construct) is not usually reported in studies of questionnaires; nor is inter-rater reliability (which would measure how similar people's responses were if the interview were repeated, or if different raters listened to the same interview). Therefore, make adjustments as needed.


Reliability refers to whether the scores are reproducible. Unless otherwise specified, the reliability scores and values come from studies done with a United States population sample. Here is the rubric for evaluating the reliability of scores on a measure for the purpose of evidence-based assessment.

Evaluation of norms and reliability for the Big Five Inventory (table from Youngstrom et al., extending Hunsley & Mash, 2008; * indicates new construct or category)

Criterion | Rating (adequate, good, excellent, too good*) | Explanation with references
Norms | Adequate | Research on the BFI includes mostly convenience samples and samples taken from surveys in other research projects. Sample sizes were quite large for most of the studies evaluating the psychometrics of the BFI, and the samples mainly came from nonclinical populations.
Internal consistency (Cronbach's alpha, split half, etc.) | Good | Cronbach's alphas typically range from .75 to .90 for the subscales and average above .80.[citation needed]
Interrater reliability | Not applicable | The BFI is designed to be a self-report scale; therefore, inter-rater reliability does not apply.[1]
Test-retest reliability (stability) | Good | Reliability correlations ranged from .80 to .90 over a 3-month retest interval. Other studies found that 2-month retest correlations were greater than .75,[2] with data also showing high stability in clinical trials.[citation needed]
Repeatability | Not published | No studies have been published that evaluate the repeatability of the BFI.
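For illustration only, here is a minimal R sketch of how the internal consistency and test-retest entries in a table like this could be computed. The data, item names, and sample size are hypothetical, not drawn from any BFI or CMRS study.

  # Hypothetical data: 200 respondents, 10 items scored 0-3.
  library(psych)
  set.seed(1)
  items <- as.data.frame(matrix(sample(0:3, 200 * 10, replace = TRUE),
                                ncol = 10,
                                dimnames = list(NULL, paste0("item_", 1:10))))

  # Internal consistency: Cronbach's alpha (the raw_alpha element).
  alpha_out <- psych::alpha(items)
  alpha_out$total$raw_alpha

  # Test-retest reliability: correlation of total scores at two time points.
  total_t1 <- rowSums(items)
  total_t2 <- total_t1 + rnorm(200, sd = 2)   # stand-in for a real retest
  cor(total_t1, total_t2)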

Instrument rubric table: Validity


Validity describes the evidence that an assessment tool measures what it is supposed to measure. There are many different ways of checking validity. For screening measures, diagnostic accuracy and w:discriminative validity are probably the most useful ways of looking at validity. Unless otherwise specified, the validity scores and values come from studies done with a United States population sample. Here is a rubric for describing the validity of test scores in the context of evidence-based assessment.
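As an illustration of the diagnostic accuracy approach, the sketch below estimates an area under the ROC curve (AUC) in R using the pROC package. The diagnosis and score variables are simulated for the example, not real CMRS data.

  # Hypothetical data: a binary diagnosis and a screening total score.
  library(pROC)
  set.seed(2)
  diagnosis <- rbinom(200, 1, 0.3)                      # 1 = case, 0 = non-case
  total     <- 20 + 8 * diagnosis + rnorm(200, sd = 6)  # cases score higher on average

  # AUC: probability that a random case scores above a random non-case.
  roc_out <- pROC::roc(response = diagnosis, predictor = total)
  pROC::auc(roc_out)                  # .5 = chance, 1.0 = perfect discrimination
  pROC::coords(roc_out, x = "best")   # threshold balancing sensitivity and specificity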

Development and history

  • Why was this instrument developed? Why was there a need to do so? What need did it meet?
  • What was the theoretical background behind this assessment? (e.g., does it address the importance of "negative cognitions," such as intrusive, inaccurate, or sustained thoughts?)
  • How was the scale developed?
  • How are these questions reflected in applications to theories, such as w:cognitive behavioral therapy (CBT)?
  • If there were previous versions, when were they published?
  • Discuss the theoretical ideas behind the changes.

Impact


Use in other populations


Limitations


Scoring instructions and syntax


We have syntax in three major languages: R, SPSS, and SAS. All variable names are the same across all three, and all match the CSV shell that we provide as well as the Qualtrics export.

Hand scoring and general instructions


Typically, the total score of all items is used.
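A minimal R sketch of that rule follows; the item names (cmrs_1 ... cmrs_21), the item count, and the 0-3 response format are assumptions for illustration, so adjust them to match the actual measure and CSV shell.

  # Hypothetical items: 21 questions scored 0-3 for 50 respondents.
  item_names <- paste0("cmrs_", 1:21)
  set.seed(3)
  dat <- as.data.frame(matrix(sample(0:3, 50 * 21, replace = TRUE),
                              ncol = 21, dimnames = list(NULL, item_names)))

  # Total score: simple sum of all items (NA if any item is missing).
  dat$cmrs_total <- rowSums(dat[, item_names])
  summary(dat$cmrs_total)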

CSV shell for sharing

  • <Paste link to CSV shell here>

Here is a shell data file that you could use in your own research. The variable names in the shell correspond with the scoring code for all three statistical programs.

Note that our CSV includes several demographic variables, which follow current conventions in most developmental and clinical psychology journals. You may want to modify them, depending on where you are working. Also pay attention to the possibility of "deductive identification": if we ask for personal information in enough detail, it may be possible to figure out the identity of a participant based on a combination of variables.

When different research projects and groups use the same variable names and syntax, it becomes easier to share data and collaborate on integrative data analyses, or "mega" analyses (which differ from meta-analyses in that they combine the raw data rather than working with summary descriptive statistics).
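To make this concrete, here is a minimal R sketch of why shared names matter; the site data and variable names are invented for the example.

  # Two sites that both used the same CSV shell, so the columns line up exactly.
  site_a <- data.frame(id = 1:3, site = "A", cmrs_total = c(12, 30, 7))
  site_b <- data.frame(id = 4:6, site = "B", cmrs_total = c(22, 5, 18))

  # Mega-analysis: stack the raw records and analyze the pooled data,
  # rather than combining each site's summary statistics.
  pooled <- rbind(site_a, site_b)
  aggregate(cmrs_total ~ site, data = pooled, FUN = mean)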

R/SPSS/SAS syntax

R code goes here
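Until the official OToPS syntax is pasted in above, here is a minimal sketch of what the R scoring code for a measure like this might look like. The file name, item names, item count, and the two-items-missing prorating rule are all assumptions for illustration, not the project's actual conventions.

  # Sketch only: not the official OToPS syntax.
  dat <- read.csv("cmrs_data.csv")        # assumed file in the CSV shell layout
  item_names <- paste0("cmrs_", 1:21)     # assumed item variable names

  # Simple sum score; left as NA when any item is missing.
  dat$cmrs_total <- rowSums(dat[, item_names])

  # Prorated score when no more than 2 items are missing (assumed convention).
  n_miss <- rowSums(is.na(dat[, item_names]))
  dat$cmrs_total_pro <- ifelse(n_miss <= 2,
                               rowMeans(dat[, item_names], na.rm = TRUE) * length(item_names),
                               NA)

  write.csv(dat, "cmrs_scored.csv", row.names = FALSE)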

SPSS code goes here

SAS code goes here

See also

References

  1. Achenbach, T. M.; McConaughy, S. H.; Howell, C. T. (March 1987). "Child/adolescent behavioral and emotional problems: implications of cross-informant correlations for situational specificity". Psychological Bulletin 101 (2): 213–232. PMID 3562706.
  2. Depue, Richard A.; Slater, Judith F.; Wolfstetter-Kausch, Heidi; Klein, Daniel; Goplerud, Eric; Farr, David (1981). "A behavioral paradigm for identifying persons at risk for bipolar depressive disorder: A conceptual framework and five validation studies". Journal of Abnormal Psychology 90 (5): 381–437. doi:10.1037/0021-843X.90.5.381.