Evaluation Theories/Week 1: Historical Context of Evaluation and the Role of Theory in Evaluation
Introductions: Teaching Team
Introduce Teaching Team on which these notes are based: (M=Mentor: can refer to TA or Professor)
M1: I love eval theory so much! I'm excited to share my passion for the subject with all of you: I took this with (another professor); TA'd this class; read this stuff independently for fun. Have TA'd multiple classes with the professor for this class . . . so I'm familiar with what she wants courses to look like.
- Passionate about high-risk children; no matter who my professional client is, my client is the children being served in the program; they often don't have a voice or choice about the programs that support them. As an evaluator interested in program design, crossing boundaries into program development, it's my responsibility to represent them. That's what I've committed my life to doing.
M2: 2nd-year MA, Social & Eval; also passionate about developmental psychology; working on an eval with the Boys & Girls Club of Pomona Valley. Ditto to Silvana: love working with kids; feel it's my job / passion to help kids fulfill their potential. So I want to evaluate programs looking to help the kids who are in the most need; evaluating their programs is helping all of them.
M3: 2nd year; came out of public policy & public health; run a consulting firm that does evaluations across the board. My twist on it is "what have other people done that we can change to make things more usable?" Passion is promoting and facilitating evidence-based policies and programs, with a really wide definition of what "evidence" is. I love coming up with innovative, different ways to measure things that others measure poorly. Not content-specific at all; worked in health, policy, development & water chlorination programs, criminal rehabilitation programs after prison, disease. It can be fun and really impactful when you do it in a way that makes sense to people working on the ground: facilitating using tools that we all know and the knowledge that stakeholders have; get information from the ground up and use rigorous methods to make those things sing and emerge.
M4: Second year in Pos Org; interested in what works effectively in training and development; program evals with City of Hope and Girls Inc.; interested in an Org. Learning approach, as you'll learn about (Halley Prescal?) in this class. Passionate about the fact that organizations are really powerful and that we can use that for good; CSR, specifically the idea of shared value: how orgs create value not just for (customers) and (investors) but for the communities they impact as well.
- Opening Activity
- Review Syllabus
- Intro to Eval Theory
- History Activity & Discussion
Content is relatively stable. This is not a class where you surf the internet and do whatever you do; you will have to come and participate, and in order to do that you have to have the readings under your belt. **We cut down the readings so that they are manageable during the week.**
We expect you to have thoughts. One other thing: I need a volunteer for note-taking.
What is your professional passion?
“Passion 1: Using technology to augment human intellect, develop sustainable technologies, build intercultural intelligence, and create people-focused structures that help organizations flourish.”
Passion 2: C
(J: I'm too tired to keep writing everything out :)) <- TO DO: WRITE DOWN OWN EXPLANATION OF PASSION
___ - Finding the root cause of things ___ -
Evaluation Theory Helps you with Organizations, Programs, Context, and Culture
- This class is going to help you fulfill your passion!
When I hear and try to thematically integrate your passions, there's a focus on: organizations, programs, contextual issues, culture.
All of those play a role in evaluating what you're trying to look at. Evaluation theory guides the practice of evaluation.
Sometimes students take this class and see it as just a requirement ("I didn't know what eval was when I got here . . . I don't care about it"), and others are super-passionate; the idea is that evaluation as a tool could be _____
How it does that: you can develop a tool-kit of evaluation theories to use in the development of your own passion, to make sure that we're connecting to your interests. We set up this curriculum by design, for a reason; we have TAs who represent the different co-concentrations that we have here. So we want to make sure that connections are being made; we have assignments; we will evaluate the course mid-stream; but these are things we want you to keep in your head.
___ You will speak as the theorist; you are expected to use at least four original sources.
In-Class Theorist Debates: Every week; there isn't reading assigned for those weeks.
- It's really important that you read original writing: there are lots of summaries floating around, but you need to get a first-hand idea of what these theorists thought about.
- You're provided with the topics that people are asking about; each person is responsible for having an answer to each of these questions.
- Once you submit . . . it's a great way to prepare; to think about your theorist as they relate to other theorists.
- If you're having trouble finding sources, please get in touch with the TAs.
- If a theorist changes opinion over the course of their career, what point of their career do you pick? (J: I think their current thinking should be what you do.) But older opinions are not necessarily not worth sharing.
- We'll give you a sheet where you rank your preference for theorist; how well we know what - how do we pick? Next week you're responsible for reading ____; the evaluation theory tree; be introduced; little blurbs; get a feel for each person. You'll know, for instance, that Halley Prescal is very popular with the Org. folks . . . if you're interested in someone with very strict theories about causality, then you might want to choose ____.
One concern with taking away the paper requirement was that you would not be prepared for the debates. (J: Yes! I organize my thoughts through writing essays.)
While you will be responsible on your exam for in-depth knowledge of the theorist you studied, you're responsible for knowledge of all 12; you can't check out when others are talking, because of the info generated (and then corrected, as it may be). (J: What's the rationale for these people . . . I am still unconvinced that the level of analysis at which these theorists work is the appropriate level of analysis for EVALUATION writ large.)
Midterm (20%): Short answer + essay questions; blue book; questions come from the assigned readings; essay questions right before . . .
Brief Reflection Papers (5%): Haven't figured out quite what I want these to be. Integrate material; extend your thinking on a topic / idea. I might pick one and say, "No more than 2 pages," to get your ideas about something; extend, or apply. I'll know it when I feel it; if we end up not doing these, the points will be distributed proportionally across the other assignments.
Term Paper (30%): Your major writing assignment. 1) How theory & research from a social science discipline can inform evaluation practice, and vice versa; 2) how a specific evaluation theorist's approach can be applied to a "real life" program evaluation.
Instructions on p. 9; hopefully it connects to your passion and what you want to do. We'll talk more about the paper as we go throughout the semester, but I encourage you to read it and see what we're asking you to do; it applies a social evaluation theory to the evaluation of a program. Take some time to read it, think about it, come up with questions and concerns.
We grade on the quality of your product, not the amount that you have written. If you write something wonderfully targeted, great; it might still be hard to do what you need to do in less than 15 pages, though.
Notes: I grade harder when there are notes; do a lot of things that you won't like. (J: No notes . . . wow . . . what exactly are you grading on then?)
Final: Closed book, closed note; you'll bring a blue book; the exam will be cumulative, weighted more heavily towards the second half of the class. Get to know and love those theorists; the 2nd half looks at those theorists in practice.
A review session will be held in the final week of class.
The way I historically grade, it's a team effort, especially for exams: you get a blue book; code; sit around a table; homework; what we're grading on . . .
For papers, the TAs will grade them and send them to me for review; I'll be able to look at your papers, but won't provide a lot of substantive feedback. (J: The problem with TAs in general is that they are in love with what they have LEARNED and do not do well with innovation . . . it often seems that, the more "competent" the TA is at learning the stuff . . .)
If you have a macro approach . . . like, “Patton was interested in use”…
We really want to make sure that you do these things on these last bullets:
- Depend on each other
- Challenge
- Attend all classes
- Raise relevant questions
- Be mindful of the time of the course; don't talk for half an hour
And there we go!
2014-01-22 14:02:54 - Come back at 14:17
Intro to Evaluation Theory
So: Intro to Eval Theory. We all live under the stars, so evaluation is something that we can relate to even if we have different interests.
When I think about what evaluation is, I think of it as this very big, vast, multi-faceted, multi-disciplinary, diverse thing. It's big; it's vast.
Diversity: This list is not Exhaustive: People, Clients, Sectors, Disciplinary Approaches, Funding & Accountability Requirements
Sectors: Every sector we can think of could map onto evaluation.
Q&A: A lot of diversity in what is required for evaluation: NSF eval requirements look very different from NIH or Department of Ed requirements, or a local funder's requirements, or Institutional Research trying to map onto WASC eval / accreditation guidelines.
Disciplinary approaches: Positivism; constructivism; lots of ways we can think about doing evaluation.
Clients: Government; non-profit; for-profit.
Evaluation theory, according to Shadish, "includes a vast array of decisions."
There are many differences among the people doing evaluation:
- Decision making; consequences; what type of evaluation you're conducting. When I read this I think about all those things, in addition to evaluation being embedded in a practice-based discipline. (J: I never thought I would meet people who are more practice-oriented than me in the academic community - but I have.) (J: Inefficient "calling upon" system.)
Difficult to compare strengths and differences across theories; to see whether a theory brings anything new to the field, or just repackages existing readings.
- i.e., the extent to which we have these broad frameworks that help guide our practice.
What’s empirically based about evaluation theory:
- The large bulk of what's called "evaluation theory" is not empirically based; it's better described as an approach or a model rather than a theory.
- We have a lot of practice-based evidence, but a lot of it is anecdotal, at times hypothetical; we don't have empirical data to say, "There's evidence that this works in this case in order to get this particular result." What we're trying to do in the PhD program in eval is develop that [cumulative knowledge base] about ___ in eval.
- Shadish says: without theory, we have loosely conglomerated researchers with principal allegiances to diverse disciplines, dedicated to applying . . . (J: Politically, this will be . . .)
Without theory, we have space: everybody doing their own thing, practicing evaluation but not necessarily guided by any schema or approach. But with theory, we have constellations: some order to the chaos.
You no longer see eval as just this big thing with evaluators doing different things across different disciplines; eval theory brings order. (J: It should be inductive: it should derive from what is around you.)
Three quotes on the importance of evaluation theory:
- If all you have is a hammer, everything looks like a nail
- “Evaluation theories are like military strategy and tactics, methods are like military weapons and logistics”
- “Each practitioner is a nascent evaluation theorist”
(J: 1. The first one describes almost every disciplinary silo; 2. The second is inaccurate (already wrote 137 words on it); 3. Theory is always a part of practice; each practitioner is guided by a theory, whether known or not.)
- You have one method; you try to apply your method to everything. (J: That's my problem, right? I think Info. Theory will be able to describe everything . . . but then, it will refer to appropriate other "theories".)
This one is all about those disciplinary silos: that's what you see . . . that's all you see! That's a problem that relates to why evaluation theory is important.
Military strategy: it makes me think about what I've seen in movies. They re-evaluate; adjust.
strategy |ˈstratəjē| noun (pl. strategies): a plan of action or policy designed to achieve a major or overall aim.
tactic |ˈtaktik| noun: an action or strategy carefully planned to achieve a specific end.
(New Oxford American Dictionary)
Evaluation needs to be carefully planned and implemented, otherwise you're just ___; integration across concepts.
The 2nd one made me think about teamwork; communication; the need for - not really knowing . . . You have to earn the right to know how to use it. (J: Does that apply to methods? Yeah, I think it does - very cool.)
S1: When picking a strategy, you need to be aware of the (weapons) available to you (J: tools), and that influences the strategy you come up with. (J: Well, tools are developed in response to strategy; tactics look at the tools available.)
S2: Essentially, each person who practices evaluation builds, through multiple evaluations, their own theory that they utilize; they see what works and what doesn't, and in that way they are a budding evaluation theorist.
Evaluation Theory is Who We Are
What does that mean? -
- Identity: "It gives me power to survive even though there are only a small percentage of people [doing this]" - that kind of identity helps me survive. If evaluation theorists have an identity for their profession, it will help us have pride in the profession, be more confident about the job, and give us the substantive content of the job. - Changkiu
- That's something that, to my knowledge, they haven't studied.
- Common Language
- Gives rise to our debates
- Unique Identity
- Face we show to the outside world
- Unique Knowledge base
- Have you heard about evaluation? "No!" Well . . . you do it all the time, every day . . . (Devil's advocate: . . .)
"The four steps of the logic of evaluation" <- (J: You can't say "THE" when you yourself say that there is no metatheoretical nomenclature to classify theories and no comparative theory; without knowing strengths, weaknesses, and categorizations, how can you say that these are THE four steps?!) Without those, these FOUR STEPS are . . .
What difference does it make whether the program you are evaluating is new or has existed for many years? <- THIS IS ALL ABOUT . . .
- 1. Logic
- 2. New or existed before
- 3. large, local, small
- 4. use result to change [FORMATIVE]
- 5. time [SCOPE]
- 6. Causal Inference
TOPICS FROM These:
- -> Program Context;
- -> Big / Small; Mature / New
- -> Scope
- -> Planning Process
- -> Outcomes
- -> Use; general context; eval theory is a way of understanding constructs.
- - What is needed out there in the field?
___ It's very similar to a well-oiled machine (cogs and gears): things are going to be off if you make inappropriate decisions early on, without using eval theory to guide your practice; if you're flying by the seat of your pants, not thinking about how your decisions are being guided, not thinking about the context for the program or your methodological choices . . .
Problems with Evaluation Theory:
- The general failure of most theorists to develop contingent theories of evaluation practice that tell evaluators how to make choices based on the contingencies of the situation.
- The general omission of a consensual vision of the place of evaluation in the world today.
The second two problems concern what we might call evaluation metatheory:
- The lack of a widely accepted metatheoretical nomenclature that would help us to classify any given theory about evaluation, and to use that classification to understand what a particular theory does and does not claim to do.
- The neglect of a comparative theory of evaluation, one that uses the common metatheoretical nomenclature to compare and contrast the relative strengths and weaknesses of individual theories.
"If you do not know much about evaluation theory, you are not an evaluator. You may be a g… you need to know that knowledge base that makes the field unique." (J: THE HUBRIS! To think that their discipline, which even they say has major problems (Shadish, 1998) . . .)
History Activity & Discussion
Think through what you know; Generate list of at least 5 events / time periods / acts important in development of field.
We can go back really far; Scriven has a quote: "We are a young discipline but an old practice."
We're going to start in the 1930s-1950s: the Eight-Year Study; Cronbach, who was the mentor for Greene - one of the greatest lineages of all time.
Tyler's Eight-Year Study: He set the stage by introducing the idea of behavioral-objective-based criteria. He realized that you couldn't just assess the outcomes: there were 15 different high school curricula; you had to determine whether these programs were implemented as planned.
Formative evaluation; very important for evaluation.
Launch of Sputnik (This is not explained)
Issues that came up:
1. Accountability and ROI; evaluative data to figure out what was going on
2. Political issues
3. Managerial concerns
4. Intellectual debates
There was no discipline of "evaluators" to fulfill these roles. (J: We are a little stream of evaluation that has come in and actually called itself "Evaluation" . . .)
This opened a new, viable alternative outside of academia.
- 2001: The PARADIGM WARS RETURNED.
  - Accountability (NCLB)
  - Office of Management & Budget evaluation-emphasis memo (Oct. 2009)
  - Randomized controlled trials
These Debates are Still Raging: Paradigm Wars haven’t been settled.
15:32:30 - From Conference:
After No Child Left Behind there was a big push towards RCTs. One recent change: in August 2013, new criteria came from
- the Department of Ed
- the National Science Foundation
which came together to set standards for [educational research and evaluation]. They have de-emphasized the importance of RCTs UNLESS you are doing something about "impact".
Exploratory research on knowledge bases is okay now; they have developed different standards of evidence, and of what's appropriate, for different kinds of questions.
Traditionally, in our eval bucket, we have been trying to understand fidelity of implementation as well as logic models; those are two areas that have been taken out of the "evaluation" bucket and moved into the "research" bucket:
- If you are talking about testing the effectiveness of an educational intervention, you have to have a good [theory of change model] and a good grasp of how it's being implemented, instead of [just an RCT . . . ?]
- So that doesn't help anybody build this [cumulative knowledge base].
Not everyone agrees that if you want to assess impact you should do an RCT -
Voices at the table:
- - Special Interest
- - People’s Opinions
- - Convincing anecdotes (with high saliency?)
We are open to things that you like and things that you don't! If we need to make course changes or corrections, please let us know; we're open to feedback.