Quality Assessment

by Vanessa Preast and Erik Simpson

What makes for a quality assessment?

Our goal with assessment is not to make infallible decisions, but to make better decisions than we would have made without assessment results. Our assessment plan follows a ground-up approach that gives our departments and units a good deal of freedom to define their own goals and to assess their progress toward them using methods appropriate to our institutional context.

When we conduct assessment, many factors influence what we consider satisfactory evidence. Often, we are balancing practical tradeoffs between resources (time, money, personnel) and confidence in our data. Assessment shares many features with educational research, but it has different purposes and different standards for acceptable evidence.

Considerations

Utility

Assessments should be useful and actionable. Choose assessments that help you answer questions that interest you. (Presumably it is interesting to know how well students are achieving the learning outcomes you have selected.) If the evidence you’re collecting isn’t important and you’re not using it to take action, stop collecting it. Collecting evidence you never use wastes resources, and we will happily partner with you to gather evidence that will be more useful.

Stakes and Evidence

Standards of evidence vary according to the importance and intended audience of an assessment exercise. Generally speaking, the primary audience for a departmental assessment project is the department itself, and the stakes are reasonably low. For example, a department interested in how effectively third-year students work in groups might implement a thoughtfully designed, informal peer evaluation form in a required 300-level course. This easily implemented instrument could probably reveal enough information to fuel productive conversations in the department about how to use group learning and teach interpersonal skills.

Sometimes the stakes are higher, and it is critically important to be certain that the evidence generated by the assessment is accurate. A student who goes on to practice medicine or fly a plane, for example, could do serious harm to themselves or others through poor performance, so we would expect a much more rigorous assessment before feeling certain we know how well such students will perform. At Grinnell College, where departmental assessment rarely involves life-and-death situations, a high-stakes scenario might take the form of publishing assessment results in an educational research journal. If you wanted to make claims of cause and effect about your departmental programming, you might implement a carefully designed experimental study and seek IRB approval. The ambitions of that project would reach well beyond the requirements of regular departmental assessment or the expectations of the Dean’s office.

Representativeness

Assessment projects often gather data by sampling certain classes or students within classes. When selecting students to assess, avoid systematically excluding particular kinds of students. Consider accessibility and whether the assessment allows all students to demonstrate their knowledge and abilities. Even if an assessment exercise does not include every potential participant, consider whether its evidence represents a significant proportion of the students who will be affected by the decisions based on the data.

Alignment

An assessment is most valid when it aligns with and supports both the learning outcome and the teaching approach. An outcome about oral communication, for example, is better assessed through a presentation than through a written exam alone.

