Evidence-based assessment/Instruments/Mood and Feelings Questionnaire
The Mood and Feelings Questionnaire (MFQ) is a suite of six questionnaire versions designed to measure depression in children and adolescents ages 6-18 (self-report or parent report), along with a version for adults to report on their own mood. The questionnaire was created by researchers at Duke University as part of the Great Smoky Mountains epidemiological study in western North Carolina. The MFQ can be used both as a screening measure to detect possible cases and as a follow-up assessment to monitor symptoms over time. It takes 5-10 minutes to administer and is used by clinicians with community samples of ages 6-18. The short forms have 13 items and the full-length forms have 33 items; each item is scored 0 to 2 points, and higher scores indicate more severe depression.
Psychometrics
Reliability
Reliability refers to whether the scores are reproducible. Not all of the different types of reliability apply to the way that questionnaires are typically used: internal consistency (whether all of the items measure the same construct) is not usually reported in studies of questionnaires, nor is inter-rater reliability (which would measure how similar people's responses were if the interview were repeated, or if different raters listened to the same interview), so adjustments are made as needed. Unless otherwise specified, the reliability scores and values come from studies done with a United States population sample, and they are evaluated against the standard evidence-based assessment rubric for the reliability of scores on a measure.
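Below is a minimal sketch of how test-retest reliability could be checked in R, in the same style as the scoring syntax later on this page. The totals are invented for illustration only; they are not from any MFQ study.
# Test-retest reliability: correlation between total scores from two administrations.
# The values below are hypothetical and used only to show the calculation.
time1_total <- c(5, 12, 8, 20, 3, 15, 9, 11)   # hypothetical short-form totals at Time 1
time2_total <- c(6, 10, 9, 18, 4, 14, 8, 13)   # hypothetical totals for the same youths at Time 2
cor(time1_total, time2_total, use = "pairwise.complete.obs")   # values near 1 indicate reproducible scores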
Validity
Validity describes the evidence that an assessment tool measures what it is supposed to measure. There are many different ways of checking validity. For screening measures, diagnostic accuracy and discriminative validity are probably the most useful ways of looking at validity. Unless otherwise specified, the validity scores and values come from studies done with a United States population sample, and they are evaluated against the standard evidence-based assessment rubric for describing the validity of test scores.
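As an illustration of what a diagnostic accuracy analysis looks like, the sketch below computes the area under the ROC curve (AUC) for MFQ totals against a diagnostic criterion. The pROC package, the variable names, and all of the values are assumptions made for demonstration; they are not official MFQ materials or published results.
# Diagnostic accuracy sketch: how well do total scores discriminate cases from non-cases?
library(pROC)
diagnosis <- c(0, 0, 1, 0, 1, 1, 0, 1, 0, 1)          # hypothetical criterion diagnoses (1 = depressed)
mfq_total <- c(4, 7, 18, 5, 21, 15, 9, 19, 6, 23)     # hypothetical short-form MFQ totals
roc_obj <- roc(response = diagnosis, predictor = mfq_total)
auc(roc_obj)              # AUC near 0.5 is chance-level; closer to 1.0 means better discrimination
coords(roc_obj, "best")   # score threshold that best balances sensitivity and specificity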
Development and history
Impact
- What was the impact of this assessment? How did it affect assessment by psychiatrists, psychologists, and other health care professionals?
- What can the assessment be used for in clinical settings? Can it be used to measure symptoms longitudinally? Developmentally?
Use in other populations
- How widely has it been used? Has it been translated into different languages? Which languages?
Scoring instructions and syntax
We provide scoring syntax in three major statistical languages: R, SPSS, and SAS. The variable names are the same across all three, and they match both the CSV shell that we provide and the Qualtrics export.
Hand scoring and general instructions
Each item is rated on a 0-2 Likert scale, where 0 = NOT TRUE, 1 = SOMETIMES, and 2 = TRUE. The total score is the sum of the item ratings, so the 13-item short forms range from 0 to 26 and the 33-item full-length forms range from 0 to 66. There are no prescribed cut points for the MFQ.
CSV shell for sharing
Here is a shell data file that you could use in your own research. The variable names in the shell correspond with the scoring syntax for all three statistical programs.
Note that our CSV includes several demographic variables, which follow current conventions in most developmental and clinical psychology journals. You may want to modify them depending on where you are working. Also pay attention to the possibility of "deductive identification": if we ask for personal information in enough detail, it may become possible to figure out the identity of a participant from a combination of variables.
When different research projects and groups use the same variable names and syntax, it is easier to share data and collaborate on integrative data analyses, or "mega" analyses, which differ from (and improve on) meta-analysis by combining the raw data rather than working with summary descriptive statistics.
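To show why shared variable names help, here is a minimal sketch of a pooled analysis in R. The file names and the site labels are hypothetical, and the sketch assumes both files use the same item1-item13 variable names as the scoring syntax below.
# Hypothetical mega-analysis: stack raw data from two projects that used the shared CSV shell.
site_a <- read.csv("mfq_site_a.csv")   # placeholder file name
site_b <- read.csv("mfq_site_b.csv")   # placeholder file name
site_a$site <- "A"
site_b$site <- "B"
combined <- rbind(site_a, site_b)      # identical variable names let the raw data stack directly
combined$mfq_total <- rowSums(combined[, paste0("item", 1:13)], na.rm = TRUE)   # same scoring code runs on the pooled data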
R/SPSS/SAS syntax
R code
# The psych package allows for easy computation and visualization of different psychological assessments and more.
library(psych)
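# Example data: ratings from five hypothetical respondents on the 13 short-form items (not real MFQ data)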
mfq_data <- data.frame(
  item1 = c(1, 0, 2, 1, 1), item2 = c(0, 1, 0, 2, 1), item3 = c(2, 1, 1, 0, 2),
  item4 = c(0, 2, 1, 1, 0), item5 = c(1, 1, 2, 0, 2), item6 = c(2, 0, 1, 2, 1),
  item7 = c(0, 1, 2, 0, 1), item8 = c(2, 0, 1, 1, 2), item9 = c(1, 2, 0, 1, 0),
  item10 = c(0, 2, 1, 1, 1), item11 = c(1, 0, 2, 0, 2), item12 = c(2, 1, 0, 1, 1),
  item13 = c(1, 2, 1, 0, 2)
)
mfq_data$mfq_total <- rowSums(mfq_data, na.rm = TRUE)

# describe() is the command for descriptive statistics
mfq_descriptive <- describe(mfq_data)
print("Descriptive statistics for each item and total score:")
print(mfq_descriptive)

mfq_total_descriptive <- describe(mfq_data$mfq_total)
print("Descriptive statistics for the total MFQ:")
print(mfq_total_descriptive)

mfq_reliability <- alpha(mfq_data[, 1:13])   # Items 1 to 13
print("Cronbach's alpha for internal reliability:")
print(mfq_reliability)

hist(mfq_data$mfq_total,
     main = "Distribution of Short-Form MFQ Total Scores",
     xlab = "MFQ Total Score",
     breaks = 10, col = "lightgreen", border = "black")