
OToPS/Measures/Creative Achievement Questionnaire


The Open Teaching of Psychological Science (OToPS) template is a shell that we use for building new instrument pages on Wikiversity.

Before starting


If you are starting a Wikiversity instrument page using this template, please do the following:

  1. We will import all of the content here to a new page.
    1. Click on the Edit source tab. Copy all content.
    2. Create a new page.
    3. Paste it in!
    4. Delete these instructions.

Lead section


The Creative Achievement Questionnaire (CAQ) was created in 2005 by Shelley H. Carson of Harvard University, who anchored its items to levels of public acclaim as ranked by "experts" in each field. The CAQ can be used to test both individual and group differences in creative achievement. It is an objective self-report measure of creative achievement across 10 domains[1]: Visual Arts, Music, Dance, Architectural Design, Creative Writing, Scientific Discovery, Humor, Theater and Film, Inventions, and Culinary Arts. Each domain contains eight items numbered 0 through 7. The CAQ provides individual domain scores, and the total CAQ score is the sum of all 10 domain scores. Beyond these 10 domains, three additional domains are included but not covered in as much detail: individual sports, team sports, and entrepreneurial ventures. Each of the 96 items on the CAQ asks respondents either to place a checkmark beside it if it applies to them or to enter a number indicating how many times they have accomplished that item; there are no multiple-choice items. The CAQ measures Big-C creativity, the rare, genius-level creative ability marked by significant and observable accomplishments.[2] Because Big-C creativity is rare, Carson explains that total CAQ scores are meant to be highly positively skewed[2]. This is intended to be an accurate representation of the general population, with the majority of people scoring low and only individuals with significant achievements in at least one domain receiving high scores[1]. Unlike a simple Likert scale, the CAQ uses an unusual scoring approach: because of multiplying items, scores start at zero and have no maximum. These multiplying items, usually items 6 or 7 within a domain, multiply the item's score by the number of times the individual reports accomplishing that item. The CAQ is extensively used, and some researchers apply a log transformation to total CAQ scores to simplify analyses and produce a more nearly normal distribution[3].
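To illustrate the log transformation mentioned above, here is a minimal R sketch. The scores are made-up values, and the variable name caqtot is borrowed from the SPSS syntax later on this page; neither comes from Carson's materials.

#hypothetical total CAQ scores: most respondents score low, a few very high
caqtot <- c(0, 1, 2, 2, 3, 5, 8, 14, 40, 120)

#log1p(x) computes log(1 + x), which keeps zero totals defined while
#pulling in the long right tail for a more nearly normal distribution
log_caqtot <- log1p(caqtot)
hist(log_caqtot)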

Carson suggested that the CAQ has three underlying factors: expressive, scientific inquiry, and performance. These factors were derived by grouping related domains together and then performing a factor analysis. However, the common use of just the overall score in research studies suggests a single underlying factor[1], and more research is needed to establish how many factors this construct actually comprises[2]. Another limitation that raises some concern is the CAQ's ability to accurately assess all age ranges in a population: some research suggests that CAQ scores underestimate the creative accomplishments typical of college students, who have not yet reached the stage of life at which the high-scoring items are usually achieved[2]. One piece of evidence that the CAQ accurately measures creative achievement is that openness to experience, a personality trait from the Big Five Inventory, is a common predictor of CAQ scores[1]; openness to experience has also been found to be strongly correlated with scores on other creativity measures.[2] The CAQ can be used in many settings by both clinicians and researchers, and it can be administered as part of a survey.

Psychometrics


Steps for evaluating reliability and validity

  1. Evaluate the instrument by referring to the rubrics for evaluating reliability and validity (both separate Wikiversity pages). For easy reference, open these pages in separate tabs.
    1. Reliability rubric
    2. Validity rubric
  2. Refer to the relevant instrument rubric table. This is the table that you will be editing. Do not confuse this with the external pages on reliability and validity.
    1. Instrument rubric table: Reliability
    2. Instrument rubric table: Validity
  3. Depending on whether the instrument was adequate, good, excellent, or too good:
    1. Insert your rating.
    2. Add the evidence from journal articles that support your evaluation.
    3. Provide citations.
  4. Refer to the heading for the instrument rubric table ("Rubric for evaluating norms and reliability for the XXX ... indicates new construct or category")
    1. Make sure that you change the name of the instrument accordingly.
  5. Using the Edit Source function, remove the collapse top and collapse bottom templates (the double curly braces) to show content.

Instrument rubric table: Reliability


Note: Not all of the different types of reliability apply to the way that questionnaires are typically used. Internal consistency (whether all of the items measure the same construct) is not usually reported in studies of questionnaires; nor is inter-rater reliability (which would measure how similar peoples' responses were if the interviews were repeated again, or different raters listened to the same interview). Therefore, make adjustments as needed.


Reliability



Reliability refers to whether the scores are reproducible. Unless otherwise specified, the reliability scores and values come from studies done with a United States population sample. Here is the rubric for evaluating the reliability of scores on a measure for the purpose of evidence based assessment.

Evaluation for norms and reliability for the XXX (table from Youngstrom et al., extending Hunsley & Mash, 2008; *indicates new construct or category)
Criterion | Rating (adequate, good, excellent, too good*) | Explanation with references
Norms | Adequate | Multiple convenience samples and research studies, including both clinical and nonclinical samples[citation needed]
Internal consistency (Cronbach's alpha, split half, etc.) | Excellent; too good for some contexts | Alphas routinely over .94 for both scales, suggesting that scales could be shortened for many uses[citation needed]
Interrater reliability | Not applicable | Designed originally as a self-report scale; parent and youth report correlate about the same as cross-informant scores correlate in general[4]
Test-retest reliability (stability) | Good | r = .73 over 15 weeks. Evaluated in initial studies,[1] with data also showing high stability in clinical trials[citation needed]
Repeatability | Not published | No published studies formally checking repeatability

Instrument rubric table: Validity


Validity


Validity describes the evidence that an assessment tool measures what it is supposed to measure. There are many different ways of checking validity. For screening measures, diagnostic accuracy and discriminative validity are probably the most useful ways of looking at validity. Unless otherwise specified, the validity scores and values come from studies done with a United States population sample. Here is a rubric for describing validity of test scores in the context of evidence-based assessment.

Evaluation of validity and utility for the XXX (table from Youngstrom et al., unpublished, extended from Hunsley & Mash, 2008; *indicates new construct or category)
Criterion | Rating (adequate, good, excellent, too good*) | Explanation with references
Content validity | Excellent | Covers both DSM diagnostic symptoms and a range of associated features[1]
Construct validity (e.g., predictive, concurrent, convergent, and discriminant validity) | Excellent | Shows convergent validity with other symptom scales, longitudinal prediction of development of mood disorders,[5][6][7] criterion validity via metabolic markers[1][8] and associations with family history of mood disorder.[9] Factor structure complicated;[1][2] the inclusion of "biphasic" or "mixed" mood items creates a lot of cross-loading
Discriminative validity | Excellent | Multiple studies show that GBI scores discriminate cases with unipolar and bipolar mood disorders from other clinical disorders;[1][10][11] effect sizes are among the largest of existing scales[12]
Validity generalization | Good | Used both as self-report and caregiver report; used in college student[2][13] as well as outpatient[10][14][15] and inpatient clinical samples; translated into multiple languages with good reliability
Treatment sensitivity | Good | Multiple studies show sensitivity to treatment effects comparable to using interviews by trained raters, including placebo-controlled, masked-assignment trials.[16][17] Short forms appear to retain sensitivity to treatment effects while substantially reducing burden[17][18]
Clinical utility | Good | Free (public domain), strong psychometrics, extensive research base. Biggest concerns are length and reading level. Short forms have less research, but are appealing based on reduced burden and promising data

Development and history

  • Why was this instrument developed? Why was there a need to do so? What need did it meet?
  • What was the theoretical background behind this assessment? (e.g., addresses the importance of "negative cognitions", such as intrusive, inaccurate, or sustained thoughts)
  • How was the scale developed? What was the theoretical background behind it?
  • If there were previous versions, when were they published?
  • Discuss the theoretical ideas behind the changes.

Impact

  • What was the impact of this assessment? How did it affect assessment in psychiatry, psychology and health care professionals?
  • What can the assessment be used for in clinical settings? Can it be used to measure symptoms longitudinally? Developmentally?

Use in other populations


Besides the English original, the CAQ has also been used in Chinese[19], French[20], and German[21] versions.

Scoring instructions and syntax


We provide scoring syntax for three major statistical programs: R, SPSS, and SAS. All variable names are the same across all three, and they match both the CSV shell that we provide and the Qualtrics export.

Hand scoring and general instructions


Shelley H. Carson outlined how to score her self-report measure of creative achievement, the Creative Achievement Questionnaire (CAQ). The measure covers 10 domains of creativity, which appear in this order on the CAQ: Visual Arts, Music, Dance, Architectural Design, Creative Writing, Humor, Inventions, Scientific Discovery, Theater and Film, and Culinary Arts. Each domain contains eight items, numbered 0 to 7 and ordered by ascending creative achievement in that area. The number beside each item is the number of points awarded if the respondent places a checkmark beside it, indicating that the item is a true statement about their accomplished creative achievements. Items marked with an asterisk require the participant to specify how many times they have earned that achievement.[1]

First, score each individual domain. Within a domain, each checked item earns the number of points given by its item number. For asterisk-marked items, multiply the item number (e.g., 5, 6, 7) by the number the participant reports, which represents how many times they have achieved that item. Next, sum the points within the domain to obtain the domain score. After finding all 10 domain scores, sum them to obtain the participant's total CAQ score.[22]
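As a concrete sketch of these hand-scoring rules, here is a small R function. The function name and input format are illustrative assumptions, not part of the published scoring instructions.

#score one CAQ domain: 'checked' holds the item numbers (0-7) the
#respondent checked; 'times' holds how many times each was achieved
#(1 for ordinary items, possibly >1 for asterisk-marked items)
score_domain <- function(checked, times = rep(1, length(checked))) {
  sum(checked * times)
}

#example: items 0 through 4 checked once each, and multiplying item 6
#achieved twice: (0+1+2+3+4) + 6*2 = 22
score_domain(c(0, 1, 2, 3, 4, 6), times = c(1, 1, 1, 1, 1, 2))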

CSV shell for sharing


Here is a shell data file that you could use in your own research. The variable names in the shell correspond to the scoring code for all three statistical programs.

Note that our CSV includes several demographic variables, which follow current conventions in most developmental and clinical psychology journals. You may want to modify them depending on where you are working. Also pay attention to the possibility of "deductive identification": if we ask for personal information in enough detail, it may be possible to figure out a participant's identity from a combination of variables.

When different research projects and groups use the same variable names and syntax, it becomes easier to share data and work together on integrative data analyses or "mega" analyses (which differ from, and improve on, meta-analyses by combining the raw data rather than working with summary descriptive statistics).

  • <Paste link to CSV shell here>

R/SPSS/SAS syntax

R code
#for alpha (requires the psych package; caqitems should be a data frame
#holding the CAQ scores to be analyzed)
library(psych)
alpha(caqitems, keys = NULL, cumulative = FALSE, title = NULL, max = 10,
      na.rm = TRUE, check.keys = TRUE, n.iter = 1, delete = TRUE)


#Yen-Ling Chen’s R code
#PSYC395 Research Seminar
#Creative Achievement Questionnaire
#Scoring
#Reliability
#Yen-Ling Chen 09052017

#import CSV file (read.csv is base R, so no extra package is needed;
#adjust the path to wherever your file lives)
CAQ <- read.csv("C:/Users/yenli/Google Drive/UNC Psyc395/caq.csv", header = TRUE)

#correct the scoring (reference columns explicitly rather than using attach())
CAQ$caq01 <- CAQ$caq01 - 1
CAQ$caq02 <- CAQ$caq02 - 1
CAQ$caq03 <- CAQ$caq03 - 1
CAQ$caq04 <- CAQ$caq04 - 1
CAQ$caq05 <- CAQ$caq05 - 1
CAQ$caq06 <- CAQ$caq06 - 1
CAQ$caq07 <- CAQ$caq07 - 1
CAQ$caq08 <- CAQ$caq08 - 1
CAQ$caq09 <- CAQ$caq09 - 1
CAQ$caq10 <- CAQ$caq10 - 1

#CAQ total score (columns 8 through 17 are assumed to hold caq01-caq10)
CAQ$total <- rowSums(CAQ[, 8:17], na.rm = TRUE)

#identify Eric's score: find rows whose Age matches 48, then pull that
#respondent's total score (row 237, column 18 in this particular file)
which(grepl(48, CAQ$Age))
CAQ[237, 18]

#reliability for each domain and total score
library("psych")
reliability <- CAQ[, 8:18]   #the 10 domain scores plus the total
names(reliability) <- c("Visual Arts", "Music", "Dance",
                        "Architectural Design", "Creative Writing", "Humor",
                        "Inventions", "Scientific Discovery",
                        "Theater and Film", "Culinary Arts", "Total")
alpha(reliability, na.rm = TRUE)

SPSS code
*for the total score without altering the dataset.

compute caqtot = sum(caq01, caq02, caq03, caq04, caq05, caq06, caq07, caq08, caq09, caq10).


var labels caqtot 'Creative Achievement Questionnaire - total'. 


desc /var caqtot.


*for computing alpha.


RELIABILITY
  /VARIABLES=caq01 caq02 caq03 caq04 caq05 caq06 caq07 caq08 caq09 caq10
  /SCALE('ALL VARIABLES') ALL
  /MODEL=ALPHA
  /STATISTICS=DESCRIPTIVE SCALE
  /SUMMARY=TOTAL.

SAS code

SAS code goes here
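Until contributed syntax replaces the placeholder above, here is a minimal sketch assuming the same caq01-caq10 variable names used in the R and SPSS code; treat it as an illustration rather than vetted OToPS syntax.

/* total CAQ score across the 10 domain variables (assumed names) */
data caq;
  set caq;
  caqtot = sum(of caq01-caq10);
run;

/* Cronbach's alpha for the 10 domain scores */
proc corr data=caq alpha nomiss;
  var caq01-caq10;
run;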

See also


Here, it would be good to link to any related articles on Wikipedia. For instance:


Example page


OToPS usage history

Details
  • Date Added (when was the measure added to the OToPS survey?): <Date>
  • Date Deleted (when was the measure dropped from the OToPS survey?): <active/deleted>, <date>
  • Qualtrics scoring: Variable name of internally scored variable: XXX. Notes on internal scoring: Is it piped? Is it POMP-ed? Any transformations needed to make it comparable to published benchmarks?
  • Content expert: Name: Jane Doe, Ph.D. Institution/Country: University of Wikiversity / Canada. Email: Type email out. Contacted: Y/N. Following page: Y/N

References

  1. Depue, Richard A.; Slater, Judith F.; Wolfstetter-Kausch, Heidi; Klein, Daniel; Goplerud, Eric; Farr, David (1981). "A behavioral paradigm for identifying persons at risk for bipolar depressive disorder: A conceptual framework and five validation studies." Journal of Abnormal Psychology 90 (5): 381–437. doi:10.1037/0021-843X.90.5.381.
  2. Silvia, Paul J.; Wigert, Benjamin; Reiter-Palmon, Roni; Kaufman, James C. (2012). "Assessing creativity with self-report scales: A review and empirical evaluation." Psychology of Aesthetics, Creativity, and the Arts 6 (1): 19–34. doi:10.1037/a0024071.
  3. Prabhakaran, Ranjani; Green, Adam E.; Gray, Jeremy R. (2014). "Thin slices of creativity: Using single-word utterances to assess creative cognition." Behavior Research Methods 46 (3): 641–659. doi:10.3758/s13428-013-0401-7. ISSN 1554-3528.
  4. Achenbach, TM; McConaughy, SH; Howell, CT (March 1987). "Child/adolescent behavioral and emotional problems: implications of cross-informant correlations for situational specificity.". Psychological Bulletin 101 (2): 213–32. PMID 3562706. 
  5. Klein, DN; Dickstein, S; Taylor, EB; Harding, K (February 1989). "Identifying chronic affective disorders in outpatients: validation of the General Behavior Inventory.". Journal of consulting and clinical psychology 57 (1): 106–11. PMID 2925959. 
  6. Mesman, Esther; Nolen, Willem A.; Reichart, Catrien G.; Wals, Marjolein; Hillegers, Manon H.J. (May 2013). "The Dutch Bipolar Offspring Study: 12-Year Follow-Up". American Journal of Psychiatry 170 (5): 542–549. doi:10.1176/appi.ajp.2012.12030401. 
  7. Reichart, CG; van der Ende, J; Wals, M; Hillegers, MH; Nolen, WA; Ormel, J; Verhulst, FC (December 2005). "The use of the GBI as predictor of bipolar disorder in a population of adolescent offspring of parents with a bipolar disorder.". Journal of affective disorders 89 (1-3): 147–55. PMID 16260043. 
  8. Depue, RA; Kleiman, RM; Davis, P; Hutchinson, M; Krauss, SP (February 1985). "The behavioral high-risk paradigm and bipolar affective disorder, VIII: Serum free cortisol in nonpatient cyclothymic subjects selected by the General Behavior Inventory.". The American journal of psychiatry 142 (2): 175–81. PMID 3970242. 
  9. Klein, DN; Depue, RA (August 1984). "Continued impairment in persons at risk for bipolar affective disorder: results of a 19-month follow-up study.". Journal of abnormal psychology 93 (3): 345–7. PMID 6470321. 
  10. Danielson, CK; Youngstrom, EA; Findling, RL; Calabrese, JR (February 2003). "Discriminative validity of the general behavior inventory using youth report." Journal of abnormal child psychology 31 (1): 29–39. PMID 12597697. 
  11. Findling, RL; Youngstrom, EA; Danielson, CK; DelPorto-Bedoya, D; Papish-David, R; Townsend, L; Calabrese, JR (February 2002). "Clinical decision-making using the General Behavior Inventory in juvenile bipolarity.". Bipolar disorders 4 (1): 34–42. PMID 12047493. 
  12. Youngstrom, Eric A.; Genzlinger, Jacquelynne E.; Egerton, Gregory A.; Van Meter, Anna R. (2015). "Multivariate meta-analysis of the discriminative validity of caregiver, youth, and teacher rating scales for pediatric bipolar disorder: Mother knows best about mania.". Archives of Scientific Psychology 3 (1): 112–137. doi:10.1037/arc0000024. 
  13. Alloy, LB; Abramson, LY; Hogan, ME; Whitehouse, WG; Rose, DT; Robinson, MS; Kim, RS; Lapkin, JB (August 2000). "The Temple-Wisconsin Cognitive Vulnerability to Depression Project: lifetime history of axis I psychopathology in individuals at high and low cognitive risk for depression.". Journal of abnormal psychology 109 (3): 403–18. PMID 11016110. 
  14. Klein, Daniel N.; Dickstein, Susan; Taylor, Ellen B.; Harding, Kathryn (1989). "Identifying chronic affective disorders in outpatients: Validation of the General Behavior Inventory.". Journal of Consulting and Clinical Psychology 57 (1): 106–111. doi:10.1037/0022-006X.57.1.106. 
  15. Youngstrom, EA; Findling, RL; Danielson, CK; Calabrese, JR (June 2001). "Discriminative validity of parent report of hypomanic and depressive symptoms on the General Behavior Inventory.". Psychological assessment 13 (2): 267–76. PMID 11433802. 
  16. Findling, RL; Youngstrom, EA; McNamara, NK; Stansbrey, RJ; Wynbrandt, JL; Adegbite, C; Rowles, BM; Demeter, CA et al. (January 2012). "Double-blind, randomized, placebo-controlled long-term maintenance study of aripiprazole in children with bipolar disorder.". The Journal of clinical psychiatry 73 (1): 57–63. PMID 22152402. 
  17. Youngstrom, E; Zhao, J; Mankoski, R; Forbes, RA; Marcus, RM; Carson, W; McQuade, R; Findling, RL (March 2013). "Clinical significance of treatment effects with aripiprazole versus placebo in a study of manic or mixed episodes associated with pediatric bipolar I disorder." Journal of child and adolescent psychopharmacology 23 (2): 72–9. PMID 23480324. 
  18. Ong, ML; Youngstrom, EA; Chua, JJ; Halverson, TF; Horwitz, SM; Storfer-Isser, A; Frazier, TW; Fristad, MA et al. (1 July 2016). "Comparing the CASI-4R and the PGBI-10 M for Differentiating Bipolar Spectrum Disorders from Other Outpatient Diagnoses in Youth.". Journal of abnormal child psychology. PMID 27364346. 
  19. Wang, Chia-Chi; Ho, Hsiao-Chi; Cheng, Chih-Ling; Cheng, Ying-Yao (2014). "Application of the Rasch Model to the Measurement of Creativity: The Creative Achievement Questionnaire." Creativity Research Journal 26 (1). ISSN 1040-0419. https://eric.ed.gov/?id=EJ1088863.
  20. Bendetowicz, David; Urbanski, Marika; Aichelburg, Clarisse; Levy, Richard; Volle, Emmanuelle (January 2017). "Brain morphometry predicts individual creative potential and the ability to combine remote ideas." Cortex 86: 216–229. doi:10.1016/j.cortex.2016.10.021. ISSN 0010-9452.
  21. Form, Sven; Schlichting, Kerrin; Kaernbach, Christian (November 2017). "Mentoring functions: Interpersonal tensions are associated with mentees' creative achievement." Psychology of Aesthetics, Creativity, and the Arts 11 (4): 440–450. doi:10.1037/aca0000103. ISSN 1931-390X.
  22. Carson, Shelley H.; Peterson, Jordan B.; Higgins, Daniel M. (2005). "Reliability, Validity, and Factor Structure of the Creative Achievement Questionnaire." Creativity Research Journal 17 (1): 37–50. doi:10.1207/s15326934crj1701_4. ISSN 1040-0419.