Evidence-based assessment/Instruments/Dimensional Obsessive Compulsive Scale
This is a shell that we use for building new instrument pages on Wikiversity.
Before starting
If you are starting a Wikiversity instrument page using this template, please do the following:
- Click on the Edit source tab and copy all of the content.
- Create a new page.
- Paste the content into the new page.
- Delete these instructions.
Lead section
Acronyms/Purpose
DOCS-SF
The Dimensional Obsessive-Compulsive Scale (DOCS) was developed to account for patients with uncommon obsessive-compulsive disorder (OCD) symptoms, to measure obsessions and compulsions together, and to bring more dimensions into the assessment of OCD. It is a 20-item self-report assessment of the severity of four dimensions of obsessive and compulsive symptoms: contamination, responsibility for harm and mistakes, unacceptable thoughts, and symmetry/completeness.[1][2]
The DOCS-SF is used by both clinicians and researchers. For clinicians, it gives general practitioners a brief initial screen for OCD symptoms before referring patients to a specialist for treatment. For researchers, it is a new, valid, and time-efficient measure of OCD symptom severity.[1]
DOCS
The DOCS was created to address the limitations of previous OCD symptom measures and to assess the severity of each symptom dimension at many different levels.[3][4]
The DOCS is intended for clinicians to administer to patients with OCD in order to measure the severity of their symptoms. It is used with adults 18 and older who already have a current OCD diagnosis.[5] Clinicians also use the DOCS to track patients' progress throughout treatment.[6] The DOCS comprises 20 items, five for each of the four OC symptom dimensions. Within each dimension, the DOCS rates the severity of symptoms over the past month in terms of time occupied, avoidance behavior, associated distress, functional interference, and difficulty disregarding the obsessions and associated compulsions.[7]
Click here for instructions for lead section

The lead section gives a quick summary of what the assessment is. Here are some pointers (please do not use bullet points when writing the article):
Psychometrics
Steps for evaluating reliability and validity
- Evaluate the instrument by referring to the rubrics for evaluating reliability and validity (both external Wikiversity pages). For easy reference, open these pages in separate tabs.
- Refer to the relevant instrument rubric table. This is the table that you will be editing. Do not confuse this with the external pages on reliability and validity.
- Depending on whether the instrument was rated adequate, good, excellent, or too good:
- Insert your rating.
- Add the evidence from journal articles that support your evaluation.
- Provide citations.
- Refer to the heading for the instrument rubric table ("Rubric for evaluating norms and reliability for the XXX ... indicates new construct or category")
- Make sure that you change the name of the instrument accordingly.
- Using the Edit Source function, remove the collapse top and collapse bottom templates (the text in double curly braces) to show the content.
Instrument rubric table: Reliability
Note: Not all of the different types of reliability apply to the way that questionnaires are typically used. Internal consistency (whether all of the items measure the same construct) is not usually reported in studies of questionnaires; nor is inter-rater reliability (which would measure how similar peoples' responses were if the interviews were repeated again, or different raters listened to the same interview). Therefore, make adjustments as needed.
Reliability
Reliability refers to whether the scores are reproducible. Unless otherwise specified, the reliability scores and values come from studies done with a United States population sample. Here is the rubric for evaluating the reliability of scores on a measure for the purpose of evidence-based assessment.
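Internal consistency (the next section's table reports Cronbach's alpha for the DOCS) is computed as α = k/(k−1) × (1 − Σ item variances / total-score variance). Here is a minimal sketch in Python; the function name and the toy data are illustrative, not part of any published scoring code.

```python
from statistics import variance

def cronbach_alpha(rows):
    """Cronbach's alpha for a respondents-by-items score matrix.

    rows: list of lists, one inner list of item scores per respondent.
    Uses sample variances (ddof = 1), the common convention.
    """
    k = len(rows[0])  # number of items
    item_vars = [variance([r[i] for r in rows]) for i in range(k)]
    total_var = variance([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Perfectly consistent items (every item rises together) yield alpha = 1.0.
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))  # 1.0
```

Real reliability analyses would of course use the full item-level dataset rather than a toy matrix.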
| Criterion | Rating (adequate, good, excellent, too good*) | Explanation with references |
|---|---|---|
| Norms | Good | Multiple samples that were neither random nor representative of census data[8] |
| Internal consistency (Cronbach's alpha) | Excellent | Alpha usually between .87 and .95[3] |
| Interrater reliability | Not applicable | Designed as a self-report scale[1] |
| Test-retest reliability (stability) | | r = .73 over 15 weeks. Evaluated in initial studies,[9] with data also showing high stability in clinical trials |
| Repeatability | Not published | No published studies formally checking repeatability |
Instrument rubric table: Validity
Click here for instrument validity table

Validity

Validity describes the evidence that an assessment tool measures what it was supposed to measure. There are many different ways of checking validity. For screening measures, diagnostic accuracy and discriminative validity are probably the most useful ways of looking at validity. Unless otherwise specified, the validity scores and values come from studies done with a United States population sample. Here is a rubric for describing the validity of test scores in the context of evidence-based assessment.
Development and history
Click here for instructions for development and history
Impact
- What was the impact of this assessment? How did it affect assessment practice in psychiatry, psychology, and other health care professions?
- What can the assessment be used for in clinical settings? Can it be used to measure symptoms longitudinally? Developmentally?
Use in other populations
- How widely has it been used? Has it been translated into different languages? Which languages?
Scoring instructions and syntax
We have syntax in three major languages: R, SPSS, and SAS. All variable names are the same across all three, and all match the CSV shell that we provide as well as the Qualtrics export.
Hand scoring and general instructions
Click here for hand scoring and general administration instructions

<Information about hand scoring and general instructions goes here>

If there are any hand scoring and general administration instructions, they should go here.
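Hand scoring of the DOCS amounts to summing five items per dimension and the four dimension scores for a total. The sketch below assumes the published DOCS response format (items rated 0–4, so subscales range 0–20 and the total 0–80); the variable names (docs_1 … docs_20) and the item-to-subscale mapping are hypothetical and should be verified against the actual instrument and shared shell.

```python
# Illustrative DOCS scoring sketch: 20 items rated 0-4, five per dimension.
# Variable names (docs_1 ... docs_20) and the item-to-subscale mapping are
# assumptions -- verify them against the published instrument before use.
SUBSCALES = {
    "contamination": range(1, 6),
    "responsibility_for_harm": range(6, 11),
    "unacceptable_thoughts": range(11, 16),
    "symmetry_completeness": range(16, 21),
}

def score_docs(responses):
    """responses: dict mapping 'docs_1'..'docs_20' to integers 0-4."""
    scores = {}
    for name, items in SUBSCALES.items():
        vals = [responses[f"docs_{i}"] for i in items]
        if any(not 0 <= v <= 4 for v in vals):
            raise ValueError(f"{name}: item scores must be 0-4")
        scores[name] = sum(vals)  # each subscale ranges 0-20
    scores["total"] = sum(scores[s] for s in SUBSCALES)  # total ranges 0-80
    return scores

example = {f"docs_{i}": 2 for i in range(1, 21)}
print(score_docs(example)["total"])  # 40
```

Keeping the scoring in one function like this makes it easy to mirror the same logic in the R, SPSS, and SAS syntax.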
CSV shell for sharing
Click here for CSV shell
Here is a shell data file that you could use in your own research. The variable names in the shell correspond with the scoring code for all three statistical programs.
Note that our CSV includes several demographic variables, which follow current conventions in most developmental and clinical psychology journals. You may want to modify them, depending on where you are working. Also pay attention to the possibility of "deductive identification": if we ask for personal information in enough detail, it may be possible to figure out the identity of a participant from a combination of variables.
When different research projects and groups use the same variable names and syntax, it is easier to share data and collaborate on integrative data analyses or "mega" analyses (which differ from meta-analysis, for the better, in that they combine the raw data rather than working with summary descriptive statistics).
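To illustrate why a shared shell helps, the snippet below writes and re-reads a tiny CSV with an agreed-upon header using Python's standard csv module. The column names here (subject_id, age, gender, docs_1 … docs_20) are hypothetical stand-ins; real projects should use whatever names the shared shell actually defines.

```python
import csv
import io

# Hypothetical shell columns: a participant ID, two demographics, and the
# item variables. Substitute the agreed-upon names from the shared shell.
FIELDS = ["subject_id", "age", "gender"] + [f"docs_{i}" for i in range(1, 21)]

def write_shell(rows):
    """Serialize rows (a list of dicts) to CSV text using the shared header."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

def read_shell(text):
    """Parse CSV text back into a list of dicts keyed by the shared names."""
    return list(csv.DictReader(io.StringIO(text)))

row = {f: "0" for f in FIELDS}
row.update(subject_id="S001", age="25", gender="F")
parsed = read_shell(write_shell([row]))
print(parsed[0]["subject_id"])  # S001
```

Because every group writes and reads the same header, files from different sites can be concatenated directly for integrative analyses.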
R/SPSS/SAS syntax
Click here for R code

R code goes here

Click here for SPSS code

SPSS code goes here

Click here for SAS code

SAS code goes here
See also
Additional assessments on obsessive-compulsive disorder can be found at the page below:
External links
- The Dimensional Obsessive-Compulsive Scale: Development and Validation of a Short Form (DOCS-SF)
- Dimensional Obsessive-Compulsive Scale
- Assessment of obsessive-compulsive symptom dimensions: Development and evaluation of the Dimensional Obsessive-Compulsive Scale
- Evidence-Based Assessment of Obsessive–Compulsive Disorder
- Society of Clinical Child and Adolescent Psychology
- EffectiveChildTherapy.Org information on rule-breaking, defiance, and acting out
Example page
References
Click here for references