Building a Culture of Assessment

The process for assessing student learning described above was the result of a purposeful effort to develop a culture of assessment regarding student learning outcomes at the University of Pittsburgh. It was presented as a natural extension of the planning processes of each program and school/campus rather than as a directive from the Provost’s office for the purpose of satisfying accreditation standards. These efforts began when the Council of Deans held initial panel discussions of learning outcomes assessment at its meetings in spring 2004 and fall 2005. Following these discussions, in September 2006, the Provost charged an ad hoc working group of the council to develop guidelines for documenting the assessment of student learning in all academic programs. The working group was chaired by the dean of the largest school (the Kenneth P. Dietrich School of Arts and Sciences) and included members from several other key units. This group of deans and campus presidents developed the basic process and structure for assessing student learning outcomes. At its November 2006 meeting, the Council of Deans held a session devoted to assessing student learning, led by the dean of the Dietrich School, at which the council approved the proposed guidelines and discussed how to implement them.

Following the passage of the guidelines by the Council of Deans, each school and campus began developing its own processes, and the individual faculties began articulating mission statements and student learning outcomes as well as considering methods to assess these outcomes. To assist the senior administrators in moving the new initiative forward, assessment of student learning also was the topic of a session at the March 2007 meeting of the Council of Deans. At this meeting, the vice provost for graduate and undergraduate studies, who was leading the implementation of the effort, discussed expectations, and three members of the council (the deans of the Swanson School of Engineering and the School of Information Sciences and the director of the University Library System) described how their units were approaching the assessment of student learning.

A similar panel discussion was held as part of the annual University-wide chairs retreat in spring 2007. Over the next year, the learning outcomes assessment initiative also was discussed at meetings of the University Council on Graduate Studies, the Provost’s Advisory Committee on Undergraduate Programs, the Enrollment Management Committee, the Faculty Assembly, the University Senate Education Policies Committee, and the Board of Trustees Academic Affairs and Libraries Committee as well as at annual chairs meetings and numerous school and department meetings. It also was the topic of articles in campus newspapers and newsletters (see Appendix C2).

To assist individual programs as they developed plans for assessing student learning outcomes, the Provost’s office developed a website explaining expectations regarding assessment and providing links to assessment resources, including examples from programs at other universities. The Center for Instructional Development & Distance Education held workshops and worked with 24 individual programs as well as individual schools and campuses (Appendix C3). Deans and campus presidents led the efforts in their individual schools.

The initial assessment plans reviewed by the Office of the Provost in March 2007 were uneven. Though many programs developed meaningful plans consistent with the guidelines, the concept of assessing student learning for the purpose of improving academic programs was not clearly understood by all. Some of the common areas needing improvement were discussed at the March 2007 Council of Deans meeting and were included in the individualized feedback provided to the deans and campus presidents.

Over time, however, assessment efforts strengthened as programs became familiar with the process and responded to feedback from the deans, campus presidents, and Provost’s office. The process of assessing student learning has now become part of the culture of the University of Pittsburgh, with virtually all programs having meaningful assessment processes in place. Deans and campus presidents report to the Provost annually on the assessment processes in their schools, including submission of the assessment matrices, and the Office of the Provost continues to provide feedback on these assessment efforts. The Provost’s Advisory Committee on Undergraduate Programs and the University Council on Graduate Studies require that all new and revised programs include plans for assessing student learning outcomes as a component of the review process.

Demonstrating a Growing Culture of Assessment

WGSE conducted a careful review of the documentation on assessment of student learning at the University to assess compliance with the guidelines and to determine how the process of assessing student learning outcomes has changed since its inception in 2007. This included a review of four years of the annual assessment reports submitted by a randomly selected 10 percent sample of the University’s 350 degree and certificate programs. Using the Middle States’ Rubric for Evaluating Institutional Student Learning Assessment Processes as a guide, the working group rated each assessment report on 12 dimensions, including appropriate learning outcomes, use of direct evidence, and dissemination and use of results to drive curricular change. The key findings of the working group’s assessment are as follows:

Finding 1: Responsibility for the development and implementation of assessment plans firmly resides with the faculty members who have the disciplinary expertise and curricular proximity to make decisions about the results of the assessments.

Finding 2: Every degree and certificate program has a plan to assess student learning outcomes; the majority are both robust and sustainable, and more than 90 percent are in compliance with the Council of Deans’ timetable.

Finding 3: Programs have well-developed statements of learning outcomes that are appropriate to their specific aims, and they are using a variety of discipline-appropriate methods of collecting direct evidence.


Finding 4: Assessment plans have improved on all 12 criteria used by the working group in its evaluation; in some cases, these improvements have been quite substantial.

Finding 5: The process of oversight, review, and support by the next level of authority in the administrative hierarchy has had the desirable result of helping to improve the plans over time. The Office of the Provost has provided feedback on the assessment plans every year, and the deans, directors, and campus presidents have used this feedback as they work with their programs to improve their plans.

Finding 6: Through the assessment process, faculty have identified programmatic strengths as well as areas for improvement and have developed plans to make needed improvements.

Finding 7: The assessment process has resulted in improvements throughout the University, ranging from smaller adjustments to more substantive programmatic changes such as redesigning or adding courses.

The working group confirmed the last finding through a review of reports submitted by deans, directors, and campus presidents documenting curricular changes resulting from this assessment process. A list of curricular improvements since 2007 compiled by the working group identifies 310 changes, including new courses and course content, revised course sequences, changes in teaching assignments, new assessment methods, major overhauls of programs, creation of new tracks within programs, raised standards, additional required seminars, and restructured advising (see Figure 6 and Appendix C16). The implementation of these changes is strong evidence of closing the loop and demonstrates that a culture of assessment has become ingrained throughout the University.