Evaluation Theme Reflection

Artifacts:

  1. Logic Model Evaluation (Fall 2024 - EDIT 7350e)
  2. CTE Data Collection Tools and Analysis (Spring 2025 - EDIT 7150e)

Evaluation has become one of the most formative parts of my instructional design practice. I see it as the point where curiosity meets evidence, where we pause to ask what worked, what needs rethinking, and how we know. My journey in developing evaluative thinking took shape across two projects: the first, a group evaluation plan in Fall 2024 with Dr. Bagdy, where I helped build a Kellogg Logic Model and design survey instruments; and the second, an independent evaluation design project with Dr. Stefaniak in Spring 2025, where I created customized data-collection tools for a college-wide needs assessment. Looking back, I can clearly see how the collaborative framework I learned in the first project directly informed the structure, rigor, and ethical sensitivity I brought to the second.

The Fall 2024 Evaluation Plan Project introduced me to the Kellogg Logic Model as a systematic way to connect program inputs, activities, and intended outcomes. Working as part of a small design team, we were tasked with evaluating a professional development program. My primary responsibility on the team was to construct the Logic Model and to design Instruments 1 and 4. I began by carefully examining the background materials we were given about the program’s structure and intended goals. Then I mapped out a clear pathway of change, from resources and activities to short-, medium-, and long-term outcomes, so that our evaluation plan would show exactly how success was supposed to unfold.

Creating the Logic Model helped me translate abstract aspirations, such as “enhancing instructional innovation,” into measurable indicators. It required asking the kinds of “if–then” questions that make evaluation feel investigative: If we provide learners with sustained workshops and peer mentoring, then we should expect observable shifts in their practice. Designing the survey instruments meant ensuring that those expectations were testable and coherent. Instrument 1 captured participants’ immediate perceptions of the training, while Instrument 4 focused on longer-term professional outcomes. Aligning each instrument to the model clarified not only what data to collect, but also why it mattered.

Collaboration was central to this experience. We revised wording and refined question flow so our final tools would produce credible, useful results. These discussions reminded me that good evaluation depends as much on dialogue as on design. Diverse voices reveal blind spots and help prevent overgeneralization or an overly narrow focus. I also gained a deeper appreciation for ethical transparency: clearly stating assumptions, limitations, and the boundaries of inference in our plan.

By the end of the semester, I realized that evaluation is not just an outcome-checking exercise; it is a design activity in its own right. The logic model acted like a blueprint for reflection, showing how evidence could be used to improve rather than merely to judge. This perspective stayed with me and became the foundation for my independent work the following semester.

In Spring 2025, I applied those lessons to my individual project, where I constructed data-collection tools for the Center for Teaching Excellence (CTE). Whereas the Fall project focused on building a conceptual framework, this one required me to design and operationalize a full suite of data-collection instruments to evaluate CTE’s faculty development efforts. I identified four tiers of stakeholders, namely administrators, advisory committee members, marketing staff, and faculty, and created a separate survey for each. Each instrument had its own purpose, tone, and analytic focus, reflecting the unique relationship of that group to the CTE’s mission.

For instance, the administrator survey explored institutional priorities, resource allocation, and perceived barriers to engagement. The advisory committee survey captured topic prioritization and feedback loops. The marketing and communications survey examined outreach effectiveness and internal collaboration, while the faculty survey addressed awareness, motivation, and accessibility of professional development opportunities. Developing these tools required balancing precision with empathy. Every question had to be purposeful, unbiased, and inclusive. I piloted formats, adjusted Likert scales, and sequenced questions to avoid response fatigue, lessons that all traced directly back to the structured thinking I gained through the Kellogg model.

What distinguished this project for me was its ethical and contextual depth. Working with real institutional stakeholders meant paying close attention to confidentiality and data security. I made sure surveys were anonymous and stored securely, so respondents could speak honestly about sensitive issues like workload or institutional culture. I also considered representativeness, ensuring that adjunct faculty and new faculty were not overshadowed by dominant voices. In that sense, the project became both a technical and a moral exercise, which I believe embodied ibstpi competencies related to ethical conduct, systematic data collection, and communication of results.

Analyzing these instruments collectively gave me a new understanding of evaluation as a living system rather than a static document. I saw how each data source complemented the others: administrators offering the big-picture strategy, faculty providing the ground-level experience, and marketing staff bridging communication gaps. This triangulated structure allowed the CTE to view engagement as an interconnected ecosystem rather than a single metric. Designing for that level of coherence was deeply satisfying; it showed me how thoughtful evaluation design can foster organizational learning, not just accountability. I believe my client saw the same in the final presentation and product.

Taken together, these two experiences, one collaborative and conceptual, the other independent and applied, mark a significant progression in my professional growth. The Fall 2024 project trained me to think systematically, to visualize cause and effect, and to articulate alignment between design and evidence. The Spring 2025 project challenged me to operationalize those ideas in a real institutional context, making ethical and methodological choices that affect how people’s voices are represented. Both required advanced skills in analysis, synthesis, and communication, as well as a reflective mindset that views evaluation as an ongoing dialogue rather than a final verdict.

Today, I approach evaluation as both an analytical and a human process. It demands rigor, but it also calls for empathy, transparency, and adaptability; the people matter! Whether I am mapping a program’s outcomes or designing a faculty survey, I now begin with the same guiding question: How can this evaluation help people learn something useful about what they do? I think that question keeps the process alive, purposeful, and deeply connected to the heart of instructional design.