OToPS/Measures/Creative Achievement Questionnaire
The Open Teaching of Psychological Science (OToPS) template is a shell that we use for building new instrument pages on Wikiversity.
If you are starting a Wikiversity instrument page using this template, please do the following:
- Click on the Edit source tab and copy all of the content here.
- Create a new page.
- Paste the content in.
- Delete these instructions.
The Creative Achievement Questionnaire (CAQ) was created by Shelley H. Carson of Harvard University in 2005, who based the CAQ upon public acclaim as ranked by field “experts.” The CAQ can be used to test both individual and group differences in creative achievement. It is an objective self-report measure of creative achievement across 10 domains: Visual Arts, Music, Dance, Architectural Design, Creative Writing, Scientific Discovery, Humor, Theater and Film, Inventions, and Culinary Arts. Each of the 10 domains contains eight items, numbered 0-7. The CAQ provides individual domain scores as well as a total CAQ score, which is the sum of all 10 domain scores. Beyond these 10 domains, there are three additional domains that are not covered in as much detail: individual sports, team sports, and entrepreneurial ventures. All 96 items on the CAQ either ask for a checkmark if the item applies to the respondent or for a number representing how many times the respondent accomplished that item; there are no multiple-choice items.
The CAQ measures Big-C creativity, which is rare, genius-level creative ability marked by significant and observable accomplishments. Because Big-C creativity is rare, Carson explains that CAQ total scores are meant to be highly positively skewed. This is intended to be an accurate representation of the general population, with the majority of people scoring low and only individuals with significant achievements in at least one domain receiving high scores. The CAQ uses an innovative and complex scoring approach rather than a simple Likert scale. Because of the way the items in each domain are set up, scores range from zero upward with no maximum, due to multiplying items on the CAQ. These multiplying items are usually items 6 or 7, and they multiply the item's score by the number of times the individual accomplished that item.
The CAQ is extensively used, and some researchers apply a log transformation to the total CAQ scores to make analyses easier and to create a more nearly normal distribution of scores.
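As an illustrative sketch (not part of the published scoring), a common convention for this kind of transformation is log(total + 1), which keeps zero scores defined; the scores below are hypothetical example data:

```python
import math

# Hypothetical CAQ total scores; real distributions are highly
# positively skewed, with most respondents near zero.
caq_totals = [0, 1, 2, 3, 5, 8, 13, 40, 120]

# log(x + 1) keeps a raw score of 0 defined and pulls in the long right tail.
log_totals = [math.log(t + 1) for t in caq_totals]

print(log_totals[0])  # prints 0.0 for a raw score of 0
```

The transformation is monotonic, so it changes the shape of the distribution without reordering participants.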
Carson suggested that the CAQ has three underlying factors: expressive, scientific inquiry, and performance. These factors were created by grouping related domains together and then performing a factor analysis. However, the common use of just the overall score in research studies suggests a single underlying factor, and more research is needed to determine how many factors this construct actually tests for. Another limitation that raises some concern is the CAQ's ability to accurately test all age ranges in a population: some research suggests that CAQ scores underestimate the creative accomplishments typical of college students, since they have not yet reached the stage of life at which high-scoring items would usually be accomplished. One piece of evidence supporting the CAQ as an accurate measure of creative achievement is that openness to experience, a personality trait from the Big Five Inventory, is a common predictor of CAQ scores; openness to experience has also been found to be strongly correlated with scores on other creativity measures. The CAQ can be used in many settings by both clinicians and researchers, and it can also be administered through a survey.
Steps for evaluating reliability and validity
|Click here for instructions|
Instrument rubric table: Reliability
Note: Not all of the different types of reliability apply to the way that questionnaires are typically used. Internal consistency (whether all of the items measure the same construct) is not usually reported in studies of questionnaires; nor is inter-rater reliability (which would measure how similar peoples' responses were if the interviews were repeated again, or different raters listened to the same interview). Therefore, make adjustments as needed.
|Click here for instrument reliability table|
Reliability refers to whether the scores are reproducible. Unless otherwise specified, the reliability scores and values come from studies done with a United States population sample. Here is the rubric for evaluating the reliability of scores on a measure for the purpose of evidence-based assessment.
Instrument rubric table: Validity
|Click here for instrument validity table|
Validity describes the evidence that an assessment tool measures what it was supposed to measure. There are many different ways of checking validity. For screening measures, diagnostic accuracy and discriminative validity are probably the most useful ways of looking at validity. Unless otherwise specified, the validity scores and values come from studies done with a United States population sample. Here is a rubric for describing validity of test scores in the context of evidence-based assessment.
Development and history
|Click here for instructions for development and history|
- What was the impact of this assessment? How did it affect assessment in psychiatry, psychology and health care professionals?
- What can the assessment be used for in clinical settings? Can it be used to measure symptoms longitudinally? Developmentally?
Use in other populations
Scoring instructions and syntax
We have syntax in three major languages: R, SPSS, and SAS. All variable names are the same across all three, and all match the CSV shell that we provide as well as the Qualtrics export.
Hand scoring and general instructions
|Click here for hand scoring and general administration instructions|
Shelley H. Carson outlined how to score her self-report measure of creative achievement, the Creative Achievement Questionnaire (CAQ). The measure covers 10 domains of creativity, which appear in this order on the CAQ: Visual Arts, Music, Dance, Architectural Design, Creative Writing, Humor, Inventions, Scientific Discovery, Theater and Film, and Culinary Arts. Within each of these domains are eight items, numbered 0 to 7 and ordered by ascending creative achievement in that area. The number beside each item represents the points allotted if the respondent places a checkmark beside that item, indicating that the item is a true statement about their accomplished creative achievements. Items marked with an asterisk require the participant to specify how many times they have earned that achievement.
First, score each individual domain. For every domain, each checked item receives the number of points equal to its item number. For asterisk-marked items, multiply the item number (e.g., 5, 6, 7) by the number of times the participant reports having achieved that item. Next, sum the points for that domain to determine the domain score. After finding all 10 domain scores, sum them to determine the participant's total CAQ score.
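The hand-scoring steps above can be sketched in code. This is an illustrative Python translation of the rules just described, not official syntax; the function name and data structures are invented for the example:

```python
def score_domain(checked, counts=None):
    """Score one CAQ domain.

    checked: set of item numbers (0-7) the respondent checked.
    counts:  dict mapping an asterisk-marked item number to how many
             times the respondent reports achieving it.
    Each checked item is worth its item number; asterisk-marked items
    are worth (item number * reported count).
    """
    counts = counts or {}
    total = 0
    for item in checked:
        if item in counts:
            total += item * counts[item]  # multiplying item
        else:
            total += item                 # simple checked item
    return total

# Example: items 1, 2, and 5 checked; item 5 is asterisk-marked and
# was achieved 3 times -> 1 + 2 + (5 * 3) = 18
domain_score = score_domain({1, 2, 5}, counts={5: 3})
print(domain_score)  # prints 18

# The total CAQ score is then the sum of all 10 domain scores.
```

Note that item 0 contributes nothing even when checked, and the multiplier items are what let scores grow without a fixed maximum.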
CSV shell for sharing
|Click here for CSV shell|
Here is a shell data file that you could use in your own research. The variable names in the shell correspond with the scoring code for all three statistical programs.
Note that our CSV includes several demographic variables, which follow current conventions in most developmental and clinical psychology journals. You may want to modify them, depending on where you are working. Also pay attention to the possibility of "deductive identification": if we ask for personal information in enough detail, it may be possible to figure out the identity of a participant based on a combination of variables.
When different research projects and groups use the same variable names and syntax, it becomes easier to share data and work together on integrative data analyses, or "mega" analyses (which differ from, and improve on, meta-analysis in that they combine the raw data rather than working with summary descriptive statistics).
|Click here for R code|
#Yen-Ling Chen's R code
#PSYC395 Research Seminar
#Creative Achievement Questionnaire: scoring and reliability
#Yen-Ling Chen 09052017

#Import the CSV file (edit the path to match your own machine)
CAQ <- read.csv("C:/Users/yenli/Google Drive/UNC Psyc395/caq.csv", header = TRUE)

#Correct the scoring: subtract 1 from each domain score
CAQ$caq01 <- CAQ$caq01 - 1
CAQ$caq02 <- CAQ$caq02 - 1
CAQ$caq03 <- CAQ$caq03 - 1
CAQ$caq04 <- CAQ$caq04 - 1
CAQ$caq05 <- CAQ$caq05 - 1
CAQ$caq06 <- CAQ$caq06 - 1
CAQ$caq07 <- CAQ$caq07 - 1
CAQ$caq08 <- CAQ$caq08 - 1
CAQ$caq09 <- CAQ$caq09 - 1
CAQ$caq10 <- CAQ$caq10 - 1

#CAQ total score (domain scores are in columns 8-17)
CAQ$total <- rowSums(CAQ[, 8:17], na.rm = TRUE)

#Identify one participant's score (example lookup by age)
which(grepl(48, CAQ$Age))
CAQ[237, 18]

#Reliability (coefficient alpha) for each domain and the total score
library("psych")
reliability <- CAQ[, 8:18]
names(reliability)[names(reliability) == 'caq01'] <- 'Visual Arts'
names(reliability)[names(reliability) == 'caq02'] <- 'Music'
names(reliability)[names(reliability) == 'caq03'] <- 'Dance'
names(reliability)[names(reliability) == 'caq04'] <- 'Architectural Design'
names(reliability)[names(reliability) == 'caq05'] <- 'Creative Writing'
names(reliability)[names(reliability) == 'caq06'] <- 'Humor'
names(reliability)[names(reliability) == 'caq07'] <- 'Inventions'
names(reliability)[names(reliability) == 'caq08'] <- 'Scientific Discovery'
names(reliability)[names(reliability) == 'caq09'] <- 'Theater and Film'
names(reliability)[names(reliability) == 'caq10'] <- 'Culinary Arts'
names(reliability)[names(reliability) == 'total'] <- 'Total'
alpha(reliability, na.rm = TRUE)
|Click here for SPSS code|
*For the total score without altering the dataset.
compute caqtot = sum(caq01, caq02, caq03, caq04, caq05,
                     caq06, caq07, caq08, caq09, caq10).
var labels caqtot 'Creative Achievement Questionnaire - total'.
desc /var caqtot.

*For computing alpha.
RELIABILITY
  /VARIABLES=caq01 caq02 caq03 caq04 caq05 caq06 caq07 caq08 caq09 caq10
  /SCALE('ALL VARIABLES') ALL
  /MODEL=ALPHA
  /STATISTICS=DESCRIPTIVE SCALE
  /SUMMARY=TOTAL.
|Click here for SAS code|
SAS code goes here
Here, it would be good to link to any related articles on Wikipedia. For instance:
OToPS usage history
(When was the measure added to the OToPS survey?)
(When was the measure dropped from the OToPS survey?)
|Qualtrics scoring||Variable name of internally scored variable:
Notes on internal scoring:
- Is it piped?
- Is it POMP-ed?
- Any transformations needed to make it comparable to published benchmarks?
|Content expert||Name: Jane Doe, Ph.D.
Institution/Country: University of Wikiversity / Canada
Email: Type email out
Following page: Y/N
|Click here for references|