Sunday, November 18, 2012

Section 3: Evaluating, Implementing and Managing Instructional Programs and Projects



Item 1 Reflection:

Instructional designs (ID) are evaluated for their design, development and delivery. Two forms of evaluation are formative (occurring during the development stages of the instructional materials) and summative (occurring after the implementation or delivery of the instructional materials). While the CIPP (Context, Input, Process and Product) and Kirkpatrick models are popular approaches to formative and summative evaluation, there are several variations of formative evaluation, such as those of Flagg and Tessmer.

Dr. Barbara Flagg stresses the need to use formative evaluation to inform decision-making during every stage of an ID, with the purpose of improving each stage: planning, designing, producing and implementing. There are four phases in Dr. Flagg's formative evaluation process, summarized below.

Planning (Plan stage): Data, consisting of existing studies, tests and curricula, experts' reviews and characteristics of the targeted audience, are analyzed regarding the reason for the program, the content and the feasibility of delivery.

Pre-Production (Design stage): The targeted audience is involved in making design decisions about content, objectives and production formats. This phase is guided by the preliminary scripts of the planning stage.

Production (Production stage): Feedback from pilot studies of early programs is considered and revisions are made accordingly.

Implementation (Implementation stage): The evaluator analyzes how well the ID works with the targeted audience through field tests. Feedback from field tests assists with the development of support materials.

Dr. Martin Tessmer considers three important reasons for utilizing formative evaluation in instruction: improving the learning effectiveness of the materials, obtaining criticism and suggestions from users about the instruction's interest or motivational appeal, and the fact that it is already part of the real world of ID. Dr. Tessmer identifies four stages of formative evaluation, each carried out to accomplish different things, yet each progressively improving the instruction. The four stages, in order, are Expert Review, One-to-One, Small Group and Field Test. Below is a summary of Dr. Tessmer's formative evaluation process.

Expert Review: Experts (in content, technical matters, design or instruction) review the instruction with or without the evaluator present. A decision is made as to what information is needed and from whom, questions are prepared identifying concerns or areas for improvement, and the recording tool is designed.

One-to-One: Learners review the instructional materials one at a time with the evaluator present. Topics discussed range from content and clarity of directions to level of difficulty to motivational appeal.

Small Group: The evaluator uses students as the primary subjects in small groups, focusing on performance data to confirm previous revisions and generate new ones.

Field Test: The instruction and materials, polished yet still amenable to revision, are evaluated in the same learning environment in which they will be used when finished.

The concept of formative evaluation is basically the same for Flagg and Tessmer; both describe it as a process of collecting data used to judge the strengths and weaknesses of an ID in order to revise and improve it. Moreover, I feel Flagg's and Tessmer's processes of formative evaluation are similar; only the stage names differ.

My classroom instruction was evaluated annually per PDAS requirements. Luckily, each year I exceeded the preset standards. But that form of evaluation involved only one person, with one frame of mind, as opposed to formative instructional design evaluation, which involves many people with different perspectives. This brings to mind my previous association with a small learning community (SLC) at the high school where I taught. Our SLC met weekly and discussed many things, one being instruction. Each of us took turns teaching a lesson from our curriculum, and both the manner of instruction and the lesson itself were critiqued. Of course, we did not follow either of these approaches to evaluation, but had we followed one of them, I feel our evaluation of each other's instruction and lessons would have been more formal and more in line with research practices.

Source:
Formative Evaluation: What, why, when, and how. Retrieved November 14, 2012, from http://www.oocities.org/zulkardi/books.html.

Item 2 Reflection:

I would be interested in knowing what the community has to say about instructional design. Do they agree with or approve of the processes involved, and do those processes validate the type of employee they will need, namely, the students of the future? In career prep classes, for example, it would be useful to know whether the end result examined in an evaluation is what an employer needs from a student who will be entering the workforce. In my field of employment, it would be beneficial to include the business community among the evaluators of our teachers' instructional designs.

Item 3 Reflection:

Below is an example of using situational leadership to facilitate professional development sessions focused on classroom technology use during an economic decline. It just so happens that this situation actually occurred in our district recently.

Two weeks ago we held a half-day inservice for the teachers in our department. About half of the teachers were assigned to me for a technology session. After brainstorming the possibilities, I talked with my director about facilitating specific training sessions within the inservice to benefit teachers in three program areas. I explained that since most of our teachers use online curriculum to supplement their lessons, I would focus on how best to use the software to increase their students' chances of passing their program's industry certification examinations. My director approved my inservice plans, and I proceeded to plan.

I divided our teachers into three broad program areas and tentatively assigned our top three industry certification-producing teachers to lead their program area's inservice. These teachers are leaders in their fields, self-motivated and resourceful, well liked and respected by everyone, and already a resource to other teachers in their content areas. Luckily, when I spoke with them about my plans, each teacher agreed to lead their program area's inservice. Together we decided upon their training agenda and the materials to disseminate during the inservice. During the training, each lead teacher offered shortcuts and surer methods of using the online curriculum to help students succeed and pass their program area's industry certification examinations. Because of the trust I feel our departmental teachers have in these lead teachers, I believe our teachers will take what they learned and help their students succeed. I served as the delegator of this project and will monitor the effects of this training for the rest of the school year.

1 comment:

  1. Your situational leadership example really is great. Many schools with limited budgets try these in-house inservices for their teachers. It sounds like you really thought it through and got the right people involved in the planning and implementing. Unfortunately, too many campuses depend on people who are not experts and of whom employees have poor perceptions. In choosing presenters, it is vital to choose those who truly are experts and who have good rapport with employees; otherwise the information may become irrelevant to the staff.
