Monday, October 26, 2009

October 26

In the pre-evaluation phase, it is important to:
  • Define the evaluand
  • See if the evaluand even exists
  • Do a needs assessment
  • decide whether or not to proceed
  • define the stakeholders
  • look at the success indicators
This needs to be done to save you headaches.

Friday, October 23, 2009

October 23

What is my evaluation proposal going to be on? Blended learning? Video Communication in an online environment? Who are the stakeholders going to be?

Pre-Evaluation Stage

Evaluations should not be cast into a single mold. For any evaluation, many good designs can be proposed, but no perfect ones. (Cronbach)

Here are the things we need to do for the proposal:

Purpose
  • Questions and Criteria - Success indicators for judging the merit and worth of the evaluand
  • Proposed Evaluation activities (data)
  • Detail how the evaluation will be conducted
  • Timeline and budget
What is the purpose of this Pre-evaluation phase?
  • evaluability assessment--can the object be assessed
  • determine a budget
  • describe the program and understand it
  • link program goals with program activities
  • determine whether the program is ready for a formal evaluation
  • help focus the evaluation
Questions that I should ask:
  • What is to be evaluated? (Logic models: goals, process, activities)
  • Why is the evaluation needed? Intended purpose
  • Who needs it done? What do they want to know about it?
  • What conditions might constrain the evaluation?
  • Page 186 has questions you should ask to see if the evaluation can and should be done
Why should it not be done?
  • it is premature
  • the purpose is inappropriate (to delay or posture)
  • Resources are inadequate
What do I need to do?
  • Identify key stakeholders and audience (what are their concerns)
  • determine availability of resources
  • Identify limitations/restrictions
  • Discuss potential evaluation approaches
  • Discuss budget
  • Establish communication expectations
  • Evaluate political context
  • Timeline and budget

Monday, October 12, 2009

Participatory Evaluation

Relies on inductive reasoning
Uses several sources of data (this is on the constructivist side)
Is pragmatic and is not concerned so much with theory
You start where the participants are, so there are no set steps that you follow.

The evaluator needs to teach the clients how to evaluate
They act as coaches in that they help people take charge.

This is a very bottom-up approach.

Strengths
  • Broad scope
  • willingness to handle complexity
  • Flexibility (no arrows)
  • Attention to contextual variables
  • Reflection on nuances surrounding the program
  • Fosters activism
Weaknesses
  • Willingness to look at complexity
  • Cost
  • Time
  • Bias
Responsive Evaluation

It is responsive to the realities in the program

The evaluator interacts continually with the stakeholders

Create narratives or product displays

Share case studies

Wednesday, October 7, 2009

October 7th

Expertise-Oriented Evaluation

This is one of the oldest forms of evaluation. We can see this with guilds during the Middle Ages.

Expertise is used to judge an institution, program, product, or activity.
Often a team of experts is used to evaluate the different parts of the design. The example used was a team of umpires, where each umpire is responsible for only specific things. Being an expert just means that you know how to evaluate, not necessarily how to perform the task (like an Olympic judge).

It operates through several types of review systems:
  • Formal Review Systems (Accreditation)--normally based not on learning outcomes but on what the expert sees
  • Informal Review Systems
  • Peer Reviews for Journals
  • Ad Hoc Panel Review--funding agency review panels such as the United Way Funding
  • Ad Hoc Individual Review--Consultants
Limitations--evaluations may just reflect personal bias
Strength--emphasizes the role of expertise and human wisdom in evaluation

Utilization-Focused Evaluation

A good evaluation is an evaluation that is used.

The role of the evaluator:
  1. Identify intended users
  2. Engender commitment/increase "readiness for evaluation" (this is the foundation)
  3. Help users generate own questions
  4. Carry out the evaluation, working closely with users throughout the process

Monday, October 5, 2009

October 5

Consumer-Based Evaluation
Listening to a Sales Rep.
  • It can be biased
  • Alternative motivation
  • Assumptions
  • What are the criteria?
Listening to the State Evaluation
  • Not really an evaluation
  • How was the evaluation done?
  • Was learning the criterion, or was it about money or back-room relationships?
Teacher Development Instructor
  • Lack of creativity
These evaluations are meant to help consumers make the right decisions.
What information would be helpful to help consumers?
  • Cost
  • Durability
  • Is there a need and if there is does this meet the need?
  • Other supporting costs
  • Accuracy of the tool
  • Does it support the end goal (perhaps standardized tests)
Many school districts do not look at all of these things but just focus on the standards.

There is a checklist of what consumers should look at on page 105.

Michael Scriven and Goal-Free Evaluation

When the evaluator shows up the goals are not given to the evaluator. They then observe the company and then attempts to try to deduct the goals. This lets the evaluator discover some unintended goals. Another advantage to this is that you can switch to a goal evaluation. It is also less susceptible to bias. They interact with the staff but they do not focus on certain things but do a holistic observation. They also are able to find the population that is actually impacted rather than who it is supposed to impact.