Industrial and organizational psychology/Module 2
Module 2.1
Science: An approach that consists of understanding, predicting, and controlling some phenomenon of interest. Its methods are shared, it follows a logical approach to investigation (based on a theory, a hypothesis, or simple interest), and it depends on data collected in the lab or in the field.
The research in question should be communicable and publicly accessible; it may be published in journals, reports, or textbooks. A published report lists the methods of data collection, the data from the investigation, the analyses of those data, and the conclusions drawn.
Scientists disprove or support theories and hypotheses. The goal is to rule out every plausible explanation except one. Scientists are objective in their pursuit of the truth.
An example of science playing its large role in society is the expert witness in a lawsuit (who could be an I-O psychologist!).
Why do I-O psychs do research? It makes them better HR professionals in organizational decision making and gives HR personnel a basis for predicting the outcomes of the decisions they make.
Common research designs include:
- Experimental: the researcher manipulates a variable and randomly assigns participants to conditions; done in a lab or in the workplace.
- Non-experimental: no manipulation or random assignment; typically observational designs or surveys.
- Quasi-experimental: non-random assignment of participants to certain, set conditions.
Methods of data collection include:
- Quantitative methods: tests, rating scales, questionnaires, and physiological measures; results are expressed in numbers.
- Qualitative methods: observations, interviews, case studies, analysis of written documents.
The two approaches are not mutually exclusive, and triangulation may occur (the same study draws on both qualitative and quantitative information).
Generalizability is the extent to which the results of one study or sample can be applied to other participants and situations. It increases as the study samples more broadly; design compromises reduce it.
Sampling domains go from participants --> job titles --> time --> organizations.
Control in research is broken down into:
- Experimental control: designing the study to remove influences that could 'screw up' the results.
- Statistical control: statistical techniques used to mitigate or account for those influences after the data are collected.
Ethical behavior must be in accordance with the APA's ethics code.
Module 2.2 - Data Analysis
Descriptive statistics: summarize, organize, and describe a sample of data.
A frequency distribution is plotted as follows:
- X-axis = scores, going from low to high
- Y-axis = frequency of occurrence
Central tendency is measured through the mean, median, and mode. Variability is seen through the standard deviation.
- The mean is affected by extreme high/low scores, but the median is not.
- The mean is pulled in the direction of the skew (see the sketch below).
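A minimal sketch of these points (hypothetical scores; Python used only for illustration), showing how one extreme score pulls the mean but not the median:

```python
# Hypothetical, positively skewed scores: one extreme high value.
from statistics import mean, median, mode, stdev

scores = [3, 4, 4, 5, 5, 5, 6, 6, 7, 30]

print(mean(scores))    # 7.5 -- pulled upward, in the direction of the skew
print(median(scores))  # 5.0 -- unaffected by the extreme score
print(mode(scores))    # 5   -- most frequent score
print(stdev(scores))   # sample standard deviation (variability)
```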
Inferential statistics: help test hypotheses and draw inferences from sample data to a larger population. Common tests include the t-test, F-test, and chi-square test.
Statistical significance is declared when the probability that the result occurred by chance (the p-value) is less than or equal to .05; in other words, we are confident the result was not just a "lucky chance". The SMALLER the sample size, the LOWER the power to detect a true/real difference.
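A minimal sketch of a significance test, assuming hypothetical scores for two groups and using SciPy's independent-samples t-test:

```python
# Hypothetical job-satisfaction scores for two training groups.
from scipy import stats

group_a = [4.2, 3.8, 4.5, 4.0, 4.7, 3.9, 4.4, 4.1]
group_b = [3.5, 3.2, 3.9, 3.4, 3.6, 3.1, 3.8, 3.3]

t_stat, p_value = stats.ttest_ind(group_a, group_b)
if p_value <= .05:
    print(f"Statistically significant difference (t = {t_stat:.2f}, p = {p_value:.3f})")
else:
    # With small samples, low power may hide a real difference.
    print(f"No significant difference detected (p = {p_value:.3f})")
```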
Scatterplots and regression (the straight line that best fits the data when x is plotted against y) show correlation. The correlation coefficient is a statistic measuring the relationship between two variables; it ranges from -1.00 to +1.00.
- Positive correlation: One variable increases, other variable increases as well.
- Negative correlation: One variable increases, other variable decreases.
- Multiple correlation coefficient: the overall linear association between multiple predictor variables and a single outcome variable.
0.00 = NO linear correlation (the relationship may still be curvilinear).
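A minimal sketch of computing a correlation coefficient, assuming hypothetical test scores and performance ratings:

```python
# Hypothetical data: higher test scores tend to go with higher ratings.
from scipy import stats

test_scores = [55, 60, 65, 70, 75, 80, 85, 90]
performance = [2.1, 2.4, 2.3, 3.0, 3.2, 3.1, 3.8, 4.0]

r, p = stats.pearsonr(test_scores, performance)
print(f"r = {r:.2f}")  # positive r: as one variable increases, so does the other
```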
Meta-analysis: a statistical method for combining results from multiple studies to reach one overall conclusion. Statistical artifacts are characteristics of a particular study (e.g., small sample size or unreliable measures) that distort its results.
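A bare-bones sketch of one piece of what a meta-analysis does, combining correlations from several hypothetical studies into a sample-size-weighted average (full meta-analyses typically also correct for statistical artifacts):

```python
# Hypothetical studies: each contributes a sample size (n) and a correlation (r).
studies = [
    {"n": 50,  "r": 0.30},
    {"n": 200, "r": 0.22},
    {"n": 120, "r": 0.28},
]

total_n = sum(s["n"] for s in studies)
weighted_mean_r = sum(s["n"] * s["r"] for s in studies) / total_n
print(f"Sample-size-weighted mean r = {weighted_mean_r:.3f}")
```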
Module 2.3 - Interpretation through Reliability and Validity
What is reliability? The consistency/stability of a measure.
Test-retest reliability is calculated by correlating measurements taken at one point in time with measurements of the same people taken at another point in time.
Equivalent forms reliability: calculated by correlating the scores of a sample of test takers who completed two different versions (forms) of the same test.
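Both test-retest and equivalent-forms reliability boil down to correlating two sets of scores from the same people; a minimal sketch with hypothetical data:

```python
# Hypothetical scores for the same eight people at two points in time
# (or on two equivalent forms of the same test).
from scipy import stats

time_1 = [12, 15, 9, 20, 17, 14, 11, 18]
time_2 = [13, 14, 10, 19, 16, 15, 10, 17]

reliability, _ = stats.pearsonr(time_1, time_2)
print(f"Reliability estimate = {reliability:.2f}")
```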
Internal consistency is demonstrated when the items of a test consistently measure a single construct. Inter-rater reliability is calculated with statistics that show how consistently multiple raters score the same targets (.70 - .80 = reasonable reliability). Generalizability theory considers all of these sources of error in reliability estimates at the same time.
Validity?
Validity asks whether accurate and complete (representative) measurements are being taken.
- Analysis: a test is developed to assess the identified abilities or other characteristics (KSAOs: knowledge, skills, abilities, and other characteristics).
- Criterion: an outcome variable that describes an important performance domain. Criterion-related validity correlates a predictor (test score) with a performance measure; the correlation that results is called a validity coefficient (see the sketch after this list).
- Predictive validity design: there is a time lag between collection of the test data and the criterion data; the test is usually given to job applicants.
- Concurrent validity design: unlike the predictive design, there is no time lag; the test is given to current employees and the performance measure is collected at the same time. The issue is that no data are available about people who are not employed by the organization.
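A minimal sketch of a validity coefficient, computed by hand as the Pearson correlation between a hypothetical predictor (test score) and criterion (later performance rating):

```python
# Hypothetical predictor and criterion data for eight people.
from math import sqrt

test_scores = [72, 85, 64, 90, 78, 69, 88, 75]          # predictor (collected at hiring)
performance = [3.1, 4.0, 2.8, 4.3, 3.5, 3.0, 4.1, 3.4]  # criterion (collected later)

n = len(test_scores)
mean_x = sum(test_scores) / n
mean_y = sum(performance) / n

# Pearson correlation: covariance divided by the product of the spreads.
cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(test_scores, performance))
ss_x = sum((x - mean_x) ** 2 for x in test_scores)
ss_y = sum((y - mean_y) ** 2 for y in performance)

validity_coefficient = cov / sqrt(ss_x * ss_y)
print(f"Validity coefficient = {validity_coefficient:.2f}")
```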
Content-Related Validity
The content of the selection procedure represents an adequate sample of the important work behaviors, activities, and worker KSAOs defined by a job analysis.
I-O psychs can use incumbents to gather content-validity evidence.
Construct-Related Validity
Scientists gather evidence to support decisions/inferences [guesses] about psychological constructs, which are the concepts a test score is intended to measure (an IQ test for intelligence, for example).