Survey research and design in psychology/Assessment/Lab report/Detailed marking criteria

Lab report - Detailed marking criteria

These detailed guidelines describe the requirements for each section of the lab report:

Overview of sections

This is an overview of the requirements and weighting for each section:

Criteria | Description | % | Suggested word count
Title/Abstract | Succinct, specific title; abstract covers purpose, method, findings, and conclusions | 5% | 10-15 (title); 160-200 (abstract)
Introduction | Establishes the problem, reviews theory and research, develops research question(s) and hypotheses | 10% | 400-700
Method | Describes the method and design, including Participants, Materials, and Procedure | 15% | 500-700
Results | Screens the data (5%) and analyses it using EFA, internal consistency, descriptive statistics and correlations for the composite scores (20%), and MLR (20%) | 45% | 900-1500
Discussion | Summarises and interprets the findings, considers implications, and makes recommendations | 25% | 500-750

The word count ranges per section are suggestions only. The only restriction is an upper limit on the overall word count.

Cover sheet

  1. p. 0
  2. Download via the Lab report cover sheet: click File - Download as - select a file type

Running head

  1. APA style

Title page

  1. p. 1 (note that the running head (short title and p#) should appear differently on p. 1 than on subsequent pages)
  2. Follow APA style (i.e., title, author, institution) except:
    1. use student ID number instead of name (for blind marking)
    2. do not include an author note or non-APA style material such as word count, unit name, date, tutor name, etc.
  3. Title
    1. ~10-15 words
    2. Does the title convey the content and purpose of the report? It should reflect the psychometric and hypothesis-testing aims of the study.
    3. Is the title succinct, yet specific to the study (e.g., does it mention the key constructs)?
    4. Is it catchy and memorable?
    5. APA style notes: Capitalise the first letter of each major word in the title (title case); no full stop
  4. Institution name

Abstract

  1. p. 2
  2. ~160-200 words
  3. Abstract heading (APA style - should not be in bold)
  4. Summarise (avoiding excessive detail)
    1. Purpose of the study
    2. Method, including the sample (after data screening) and sampling method
    3. Key findings, including:
      1. % of variance explained in the EFA and the identified factors
      2. R2 for the MLR and the relative contribution and meaning (consider strength, direction, and significance) of each predictor
    4. Conclusions about theory and method, with key recommendation(s)

Key words

  1. APA style
  2. 3 to 5 terms

Introduction

  1. pp. 3-
  2. Introduce the topic and concisely explain the study's purpose(s).
  3. Provide a critical overview of relevant past research and identify key issues to be addressed in this study.
  4. Only review constructs which are analysed in the Results - e.g., see possible topics.
  5. Use citations to key background literature - Wikiversity (see readings) lists some starting references, but use of additional references is recommended.
  6. Avoid describing methodological details about the current study, such as measurement tools - this belongs in the Method.
  7. Provide logically-derived and clearly-stated research question(s) and/or hypotheses (can be null and/or alternative). The derivation of the questions and hypotheses should be supported by theoretical argument and citations.
    1. Firstly, specify a research question about the underlying factor structure of one of the multidimensional constructs in the surveys about university student motivation, satisfaction, and time management (i.e., either time perspective or time management), e.g., "How many distinct dimensions (factors) of X are there, what are they, and which items best represent these factors?"
    2. Secondly, make a hypothesis about the extent to which each of the independent variables (IVs) predicts a dependent variable (DV) (→ Multiple linear regression (MLR)). Choose any suitable variables, or a composite of several variables, from the surveys about university student motivation, satisfaction, and time management.

Method

  1. Clearly explain how the study was conducted in sufficient detail to allow a replication study, but without extraneous detail.
  2. Key marking criteria: Is the study replicable? Is sufficient detail provided for a "naive person" (say, someone in Japan in 20 years' time) to be able to fully replicate the study?

Participants (5%)

  1. Provide a one to two paragraph descriptive overview of the participants in the final (after data screening) sample.
  2. Consider which of the available data can be summarised in order to provide an insightful description (see the sketch after this list).
  3. Advanced option: You may wish to compare the sampled data with population statistics for UC students (e.g., see UC at a glance and Annual Reports).
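
To illustrate the kind of summary intended in item 2, here is a minimal sketch in Python (pandas), assuming the screened data sit in a CSV file and using hypothetical column names such as age and gender; adapt the names to the actual data file.

  # Minimal sketch: summarising participant characteristics with pandas.
  # The file name and the "age"/"gender" columns are hypothetical placeholders.
  import pandas as pd

  df = pd.read_csv("screened_survey_data.csv")  # hypothetical file of screened data

  n = len(df)
  age_m, age_sd = df["age"].mean(), df["age"].std()
  gender_pct = df["gender"].value_counts(normalize=True) * 100

  print(f"N = {n}; age M = {age_m:.2f}, SD = {age_sd:.2f}")
  print(gender_pct.round(1))  # percentage of each gender category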

Measures (5%)

  1. Briefly summarise the development of the survey instrumentation.
  2. For the measures used to collect the data analysed in the Results, describe, for example:
    1. the type of questions
    2. the response format
    3. any reverse-scoring and the meaning/direction of high and low scores
  3. Do not describe measures which are not used in the current study.
  4. Optional: Use a table to help present the proposed factors (e.g., labels, definitions, and example items).

Procedure (5%)

  1. Sampling:
    1. What was the target population and the sampling frame?
    2. What sampling technique was used?
  2. Administration:
    1. Briefly summarise and provide an APA style reference for the Survey administration guidelines
    2. Where and how did you collect data?
    3. How long did participants typically take to complete the survey?
    4. Refusal rate? (for the surveys you administered)
    5. Procedural anomalies? (e.g., explain any unanticipated responses or unplanned occurrences)

Results

  1. The analysis should proceed through three basic steps:
    1. Data screening - Summarise in one to two paragraphs how the data was screened and what changes were made. Is enough detail provided for the same steps to be followed by someone else? However, avoid excessive detail (e.g., CaseID numbers are meaningless to a reader). Note that sample size assumptions do not belong in this section - they belong in the section(s) for the corresponding analyses.
    2. Psychometric instrument development
      1. Conduct an EFA of a multidimensional construct (either the time perspective items or the time management items).
      2. For each extracted factor, provide reliability analysis (internal consistency - Cronbach's alpha), composite score descriptive statistics, and correlations between factors.
    3. Multiple linear regression - Conduct an MLR with at least three IVs to address one hypothesis per IV
  2. Communicate the depth of your understanding by using your own words; avoid writing results in a robotic (mindless) manner (e.g., avoid overparaphrasing a specific sample write-up)
  3. Most statistics should be rounded to two decimal places unless there is particularly useful information communicated by including a third decimal place (e.g., when reporting exact p values).
  4. Scope and depth of analysis
    1. Additional analyses may be presented. However, it is quite possible to gain maximum marks by conducting one of each of the required analyses. If additional analyses are presented, then they must be clearly related to the research question and hypothesis(es).
    2. In marking, some account will be taken of the scope of the analysis undertaken. Where a more advanced analysis is appropriate (given the research question(s) and/or hypothesis(es)) and is well conducted, this could represent higher quality work than a simpler analysis. However, there is also much to be said for parsimony (keep it simple and get it right) by focusing on doing a good job of fulfilling the minimum requirements. The best reports are usually not the most complex ones. If in doubt, go with analyses which meet the minimum criteria, which relate to the research question and/or hypotheses, and which you are confident about accurately conducting, interpreting, and presenting.

Data screening (5%)

  1. Summarise what was done to check and correct errors in the data - see data screening; a brief illustrative sketch follows this list.
  2. Statements like "All analyses were conducted using SPSS version 23" are unnecessary.
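
As a rough illustration of the screening summarised above, the following sketch checks hypothetical Likert-type items (tm01-tm12, assumed to use a 1-5 response scale) for out-of-range and missing values using pandas; the file and item names are placeholders, not part of the unit's materials.

  # Minimal data-screening sketch with pandas. The file name, item names (tm01-tm12),
  # and the assumed 1-5 response range are hypothetical; adapt them to the survey.
  import pandas as pd

  df = pd.read_csv("raw_survey_data.csv")
  items = [f"tm{i:02d}" for i in range(1, 13)]

  # Out-of-range responses (e.g., data-entry errors on a 1-5 scale)
  out_of_range = (df[items] < 1) | (df[items] > 5)
  print("Out-of-range responses per item:\n", out_of_range.sum())

  # Missing data per item
  print("Missing values per item:\n", df[items].isna().sum())

  # Set impossible values to missing rather than guessing a correction
  df[items] = df[items].where(~out_of_range)
  df.to_csv("screened_survey_data.csv", index=False)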

Psychometric instrument development (20%)

  1. Report the results of an EFA of a multidimensional survey construct (either time perspective or time management); an illustrative sketch of these analyses follows this list.
    1. The minimum requirement is to report psychometric analysis (EFA, composite scores, correlations between composite scores, and internal consistency) for one set of items (time perspective or time management). However, it may also be of interest to conduct psychometric analysis of another set of items (e.g., stress) in order to develop other composite scores for further analysis. In this case, present one EFA (of either time perspective or time management) in full and briefly summarise the results of the other EFA, perhaps with relevant output in an appendix.
  2. Indicate the type of EFA used (Type of extraction? Type of rotation?)
  3. Explain the extent to which EFA assumptions were met, but not excessively (e.g., one indicator of factorability is quite sufficient; more is redundant)
    1. Sample size (incl. cases:variables ratio)
    2. Linearity (e.g., check at least some scatterplots, particularly for bivariate outliers or non-linear relations)
    3. Factorability of the correlation matrix (examine either the item correlations, the anti-image correlation matrix diagonals, the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy, or Bartlett's test of sphericity - but do not report all of these as they are redundant)
  4. Focus on the final model but summarise the steps taken to get there (e.g., How many factors were extracted initially? What models/factor structures were examined? To what extent was the expected structure evident?)
  5. % of variance explained (for the initial and final model(s))
  6. Label and describe each factor
  7. Which items were retained and/or dropped and the reasons why
  8. Table of factor loadings (sorted by size) and communalities (for the final model)
  9. Reliability analysis (internal consistency - Cronbach's alpha) for each factor
  10. Calculation of composite scores to represent each factor
  11. Table of descriptive statistics for the composite scores
  12. Table of correlations between composite scores
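
The sketch referred to in item 1 is below. It shows one possible way to run these analyses in Python, assuming the third-party factor_analyzer package and hypothetical item names (tm01-tm12), factor count, and item-to-factor allocation; it is an illustration under those assumptions, not the unit's prescribed procedure or output.

  # Illustrative EFA / reliability / composite-score workflow. The file name,
  # item names (tm01-tm12), number of factors, and item-to-factor allocation
  # are hypothetical; decide the real ones from your own loadings and theory.
  import pandas as pd
  from factor_analyzer import FactorAnalyzer
  from factor_analyzer.factor_analyzer import calculate_kmo

  df = pd.read_csv("screened_survey_data.csv")
  items = df[[f"tm{i:02d}" for i in range(1, 13)]].dropna()

  # Factorability: one indicator (e.g., overall KMO) is sufficient to report
  _, kmo_overall = calculate_kmo(items)
  print(f"KMO = {kmo_overall:.2f}")

  # Extract a trial number of factors with an oblique rotation
  fa = FactorAnalyzer(n_factors=3, rotation="oblimin")
  fa.fit(items)
  loadings = pd.DataFrame(fa.loadings_, index=items.columns)        # for the loadings table
  communalities = pd.Series(fa.get_communalities(), index=items.columns)
  _, _, cumulative = fa.get_factor_variance()
  print(f"% of variance explained: {cumulative[-1] * 100:.2f}")

  def cronbach_alpha(item_scores: pd.DataFrame) -> float:
      # alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
      k = item_scores.shape[1]
      total_var = item_scores.sum(axis=1).var(ddof=1)
      return (k / (k - 1)) * (1 - item_scores.var(ddof=1).sum() / total_var)

  # Hypothetical final item-to-factor allocation, based on the loadings table
  factors = {"Factor1": ["tm01", "tm02", "tm03", "tm04"],
             "Factor2": ["tm05", "tm06", "tm07", "tm08"],
             "Factor3": ["tm09", "tm10", "tm11", "tm12"]}

  composites = pd.DataFrame({name: items[cols].mean(axis=1) for name, cols in factors.items()})
  for name, cols in factors.items():
      print(f"{name}: Cronbach's alpha = {cronbach_alpha(items[cols]):.2f}")

  print(composites.describe().round(2))  # descriptive statistics for composite scores
  print(composites.corr().round(2))      # correlations between composite scores

An oblique rotation is shown because factors from such surveys are often correlated; an orthogonal (e.g., varimax) rotation could be substituted if the factors turn out to be essentially uncorrelated.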

Multiple linear regression (20%)

  1. Report the results of an MLR with at least three predictors - can use any variables in, or derived from, the supplied data set (if they meet the assumptions for MLR); an illustrative sketch follows this list.
  2. Reiterate the purpose (research question and/or hypotheses) of the MLR
  3. Mention the type of MLR (e.g., standard, hierarchical, or stepwise)
  4. Describe the IVs and DVs, and any manipulations of the variables (e.g., recoding or creating an interaction term). If not already clear from the Method, clarify the direction of scoring.
  5. Explain the extent to which assumptions were met (e.g., sample size, multicollinearity, multivariate outliers)
  6. Present the correlations between the variables (these can be part of the MLR coefficients table - see sample write-ups for examples). Demonstrate understanding of the direction of any relationships (e.g., if there is a positive correlation between X and Gender, what does this mean? Are higher values of X associated with males or females?)
  7. Report the amount of variance explained (R2 and adjusted R2, and the R2 change at each step if a hierarchical MLR is conducted), along with the inferential tests (F(df), p)
  8. Report significance, size, direction and relative contribution of each IV. Make sure to explain what the direction of the relationships mean in plain English.
  9. Provide a table showing the correlations and MLR coefficients, including B for the intercept and each IV, beta (β), statistical significance (e.g., t, p), and the squared semi-partial correlation (sr2) for each IV; explain the direction and size of the results.
  10. Consider the shared and unique percentages of explained variance.
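
As a rough illustration of the quantities listed above (R2, adjusted R2, F, B, beta, t, p, and sr2), the following sketch runs a standard MLR with statsmodels, assuming hypothetical composite predictors (Factor1-Factor3) and a hypothetical satisfaction DV stored in a CSV file; the file and variable names are placeholders.

  # Illustrative standard MLR with statsmodels. File, IV, and DV names are
  # hypothetical placeholders for composite scores derived from the survey.
  import pandas as pd
  import statsmodels.api as sm

  df = pd.read_csv("composite_scores.csv")
  ivs = ["Factor1", "Factor2", "Factor3"]
  dv = "satisfaction"

  print(df[ivs + [dv]].corr().round(2))        # correlations between the variables

  X = sm.add_constant(df[ivs])                 # adds the intercept term
  model = sm.OLS(df[dv], X, missing="drop").fit()
  print(model.summary())                       # R2, adjusted R2, F(df), p, B, t, p per IV

  # Standardised coefficients (beta): refit using z-scored variables
  z = (df[ivs + [dv]] - df[ivs + [dv]].mean()) / df[ivs + [dv]].std()
  betas = sm.OLS(z[dv], sm.add_constant(z[ivs]), missing="drop").fit().params[ivs]
  print("Beta:", betas.round(2))

  # Squared semi-partial correlation (sr2) per IV: drop in R2 when that IV is removed
  for iv in ivs:
      reduced = sm.OLS(df[dv], sm.add_constant(df[[p for p in ivs if p != iv]]),
                       missing="drop").fit()
      print(f"sr2 ({iv}) = {model.rsquared - reduced.rsquared:.3f}")

The sr2 values here are computed as the drop in R2 when each IV is removed from the full model, which matches the usual definition of a squared semi-partial correlation.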

Discussion

  1. Build on the introduction to explain the results and what they mean in a balanced manner.
  2. Demonstrate breadth and depth of understanding of the results and their implications. Avoid merely summarising the results without providing additional critical commentary.
  3. Critically review the strengths and weaknesses of the study's methodology and make practical suggestions for how it could be improved, e.g.,
    1. Validity and reliability of the measures?
    2. Statistical power?
    3. Appropriateness of the sampling technique?
    4. Generalisability of the findings?
  4. Provide tangible recommendations for future research and practical implications.

References

  1. Is the reference list complete (i.e., none missing and all cited)?
  2. Does the lab report make effective use of a core set of relevant, high-quality, peer-reviewed, citations? This involves citing and meaningfully discussing appropriate references (as opposed to just dumping citations without explanation), particularly in the Introduction and Discussion.
  3. Reference the instrumentation and the survey administration guidelines - do not copy the guidelines into the Appendices.
  4. Use APA style, including for electronic sources. Include DOIs where relevant.

Appendices

  1. Appendices are optional - they are not required.
  2. Appendices are for additional detail which is relevant to understanding the main body, but which would break the flow of the main report e.g., the correlations between the items used in the factor analysis.
  3. Appendix content does not need to follow APA style but should be well organised, with clear labeling.
  4. Each appendix should be referred to at least once in the main body.