Assessment Planning

Ultimately, the goal of student learning assessment is to inform the ongoing review and improvement of academic programs. Assessment processes should identify where student learning needs must be addressed so that programs can continue to uphold academic standards and prepare their graduates for success.

Please feel free to contact any member of the University Assessment Committee (UAC) with questions regarding the development or refinement of your program’s assessment plan.

Developing an Assessment Plan

Academic Programs

An assessment plan for an academic program should start with the end in mind: the program-level learning outcomes. Essentially, what should a graduate of the program know or be able to do to be successful in the field?

Student Learning Outcomes (SLOs)

Student Learning Outcomes identify what a student is expected to have learned by the end of a program/graduation (if program level) or course (if course level). It is important to stress here that the focus is always on the student’s demonstrated learning. That demonstrated learning best reflects how the institution supported the student, since institutions of higher education share a common mission of educating students to prepare them for success beyond the institution (career, graduate school, etc.).

SLOs (whether at the course or program level) should be clearly stated, with language that clarifies what students will know or be able to do (e.g., “Students will demonstrate their knowledge of (insert key concept for majors to understand) through a culminating assignment.”).

The focus on clarity, which extends to how performance can be evaluated in an observable way, is necessary to ensure student performance can be easily measured. When student performance can be measured, it can be evaluated, and evidence-based plans can be developed and implemented for improving student performance in the future.

After the program-level outcomes are determined, thought should be given to how course-level learning contributes to program-level learning. This is achieved by introducing and reinforcing key content throughout the curriculum, enabling students to demonstrate appropriate performance on program-level learning outcomes at key points, or at least prior to graduation (often measured through capstone courses).

Additional guidelines for developing student learning outcomes can be found online at the National Institute for Learning Outcomes Assessment (NILOA) website.

In addition to determining what student learning will be measured (driven by the SLOs), faculty also need to determine where in the curriculum assessment of student learning makes the most sense, when to measure it, and how best to determine whether student performance related to the SLOs is adequate.

Where

For the “where”, assessment of student learning at the program level should occur, at the very least, at the conclusion of the program. This would likely mean measuring student performance in (at least) a capstone course or other high-level course(s) required of program majors.

Assessment can be formative or summative. Achievement of program-level outcomes is summative, since it indicates that students have reached the end-of-program proficiency goal. Achievement of course-level outcomes (except perhaps in a capstone course) is formative, since it forms, or builds toward, achievement of the program-level outcomes.

It is helpful to periodically use formative assessment in the curriculum for program majors. This can mean identifying key courses or points in time to evaluate what students should know or be able to do at earlier points in the curriculum. For instance, it may be constructive to periodically evaluate performance in a key course typically taken by juniors, as it may strongly impact student learning/demonstrated performance related to a program-level SLO later in the curriculum. It may also be useful to “map” the curriculum for majors, indicating where key content, concepts, skills, etc. are first introduced and then reinforced across the curriculum, until demonstrated through the capstone or other program-culminating activity. This mapping of the curriculum can facilitate the determination of when (and where) to periodically implement formative assessments, as in the example below.
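
For illustration, a simplified curriculum map for a hypothetical three-outcome program might look like the following, where “I” indicates that an outcome is introduced, “R” that it is reinforced, and “M” that mastery is demonstrated (the course numbers here are placeholders, not a required format):

SLO #1: Course 101 (I) → Course 210 (R) → Capstone 490 (M)
SLO #2: Course 101 (I) → Course 305 (R) → Capstone 490 (M)
SLO #3: Course 210 (I) → Course 305 (R) → Capstone 490 (M)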

When

The “when” is the actual plan for when to evaluate learning. If an academic program has five program-level SLOs, it may be too much for faculty to evaluate all outcomes in a given year. Therefore, the frequency of assessment of each SLO should be determined. For instance, there may be a key assignment used in a 300-level course for majors that is useful for evaluating student learning related to SLOs #1 and 3. There may be a different key assignment used in a different 300-level course required for majors that is useful for evaluating student learning related to SLOs #2, 4, & 5. Faculty may choose to evaluate SLOs #1 & 3 one year and SLOs #2, 4, & 5 the next. This sets the stage for a more reasonable assessment-related workload than if all five needed to be evaluated within the same year. The span of time in which all five program SLOs are evaluated is still relatively narrow, which is another advantage of such a plan.
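
As an illustration, such a plan could be summarized as a simple two-year cycle (the courses here are placeholders):

Year 1: SLOs #1 & 3, evaluated via a key assignment in one 300-level course, scored with a rubric
Year 2: SLOs #2, 4, & 5, evaluated via a key assignment in another 300-level course, scored with a rubric

Every program SLO is then evaluated at least once every two years, keeping the assessment workload in any single year manageable.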

How

For the “how”, assessment methods can be either direct or indirect. The development and use of benchmarks/performance goals can also help in interpreting results so they can inform improvement.

Direct Methods

Direct methods are preferable when possible, since the specific learning outcomes (for the course or program) are directly referenced by, or can be linked to, the tool used for the assessment. An example of a direct assessment method would be a sample of student assignments (selected because of their relationship to the SLOs) evaluated and scored using a rubric developed for use with the assignment. A rubric is a tool used to evaluate an artifact/assignment against a set of standardized criteria, allowing the level of student performance demonstrated to be determined more objectively. Grades by themselves are not considered a form of direct assessment, since a course grade aggregates many factors and does not isolate performance on a specific outcome.
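
For illustration, a single criterion from such a rubric might look like the following (the criterion and descriptors here are examples only, not a prescribed format):

Criterion: Applies key disciplinary concepts
Exceeds expectations: applies concepts accurately and extends them to new situations
Meets expectations: applies concepts accurately within the scope of the assignment
Does not meet expectations: applies concepts inaccurately or incompletely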

Direct assessment methods can more easily be used to inform curricular change when tied to a benchmark, or performance goal. For instance, suppose the benchmark for a learning outcome is that 85% of majors will receive a rating of “meets or exceeds expectations” on the rubric used with a key assignment. If results indicate that only 65% of majors received that rating, there is a clear need to look more deeply at the situation and/or to follow up with specific plans to address the gap in student learning identified through the assessment.

Indirect Methods

Indirect assessment methods, although considered less useful for evaluating student performance, can still be useful for reflection. An example of an indirect assessment would be the results of a course evaluation. On the evaluation, students may indicate that they learned a lot through the course, but since it is an indirect measure, it is not apparent exactly what they learned or how well they learned it. Indirect assessment results can have the most impact when they corroborate the findings of a direct assessment. For example, if a direct assessment indicates that 95% of students were able to demonstrate their knowledge of key content, then students’ feedback on the course evaluation indicating that they learned a lot can be used as an additional piece of evidence that they learned what faculty wanted them to learn (the SLOs).

When possible, multiple measures (whether multiple direct measures, or direct and indirect measures) are preferable to a single measure of student learning related to an SLO. When multiple reference points indicate, for instance, that sufficient learning was demonstrated, there is greater confidence in the reliability of those results.

The current set of Middle States Standards for Accreditation (see Standard V) can be found online on the Middle States Commission on Higher Education website.

Contacts

If you have any questions about the University Assessment Committee, please contact either of the committee co-chairs:

Brian Bogert

Director of Institutional Research
brian.bogert@wilkes.edu

Jayme Hines

Assistant Professor of Education
jayme.hines@wilkes.edu