Examples of General Education Assessment

Writing in the Disciplines, Dietrich School of Arts and Sciences

The experience with assessing the Writing in the Disciplines curriculum illustrates how the assessment of student learning at the University has evolved over the past decade. The University’s composition program, and in particular the Writing in the Disciplines program, has long been well regarded. Part of the reason for this success is the focused attention the faculty have given to reviewing and modifying the curriculum through organized, periodic program evaluations. Traditionally, these evaluations have focused more on the curriculum and how it is delivered and less on a systematic assessment of outcomes. The first steps toward outcomes assessment came in the early 2000s, as the University started to use student surveys (both the internal Student Satisfaction Survey and NSSE) to gather indirect evidence of student learning in the writing program through questions such as “To what extent did your experience at the University contribute to your ability to write clearly?” As noted in the 2007 Periodic Review Report to Middle States, these surveys prompted a comprehensive review of the Writing in the Disciplines program when they revealed that other institutions were closing the gap with the program in students’ assessments of its effectiveness.

At the request of the Enrollment Management Committee, two members of the composition faculty directed a comprehensive assessment of undergraduate writing from 2004 to 2005. This careful and thoughtful evaluation included a comprehensive review of the curriculum: a survey of existing writing courses, a review of the writing requirement, online student surveys, and student and teaching assistant focus groups. Notably for this discussion, the evaluation included in-depth interviews in which faculty members from across the academic departments were asked to provide their assessment of student writing and how it had evolved over the previous decade. The inclusion of faculty assessments of student writing alongside student self-assessments marked an expansion of the program’s efforts to include outcomes assessment in its overall program evaluation. The report provided insights into what students and faculty thought about writing and writing courses; compelling examples of best practices in teaching writing; and several recommendations for program improvement that were implemented, including additional faculty, smaller section sizes, a new peer tutoring program, and resources to promote campus-wide discussions of writing and the development of new writing-intensive courses.

As the University became engaged in using direct evidence to assess student learning, a process was developed in 2009 to gather samples of student writing on an ongoing basis that could then be used as part of program evaluation and curriculum development.

The new process included clearly articulated expected learning outcomes (see Figure 8) and performance goals/expectations, a plan for the ongoing collection of direct evidence from student writing samples, and a structure through which assessment results will be used to inform curricular discussions. Appendix C14 includes the assessment matrix summarizing the process and a report on the results of the first round of assessments conducted in 2009. Briefly, the nine faculty members serving on the College Writing Board assess the four learning outcomes triennially by reviewing a sample of student papers drawn from writing-intensive courses.

The papers are evaluated using seven criteria derived from the learning outcomes, and the expectation is that at least 50 percent of the papers reviewed should be rated as proficient or above on a scale that includes superior, proficient, adequate, and inadequate. The criteria and results of the first round of assessments are reported in Appendix C14. The assessment indicated that the goal of having 50 percent of papers rated proficient or above on the seven criteria was not met. Using these assessments, the College Writing Board identified specific weaknesses and made recommendations for increasing the number of papers rated proficient and above. In 2010, these recommendations were presented to the Dietrich School of Arts and Sciences Undergraduate Council, were approved for implementation, and are now part of the assessment system. The school looks forward to the next assessment by the College Writing Board in 2012, which will provide the first reading against the existing baseline.

Second Language, Dietrich School of Arts and Sciences

A similarly rigorous assessment of student learning related to the Dietrich School’s second language requirement was conducted in academic year 2009–10; the assessment matrix summarizing this work can be found in Appendix C15. Under the direction of a faculty member in the School of Education, four specific learning outcomes for second language acquisition were developed, standards for comparison were established, and standardized rubrics were created for assessing these outcomes in reading, writing, and interpersonal communication. The assessments were based on oral interviews and integrated reading and writing assessments given to 10 percent of the students every two years. Based on the results, the dean appropriated resources for increased instructor training in identified areas of weakness, and a committee of language coordinators is conducting full reviews of the curriculum in these areas that will extend beyond the general education level to include majors and certificate programs.

Quantitative Reasoning, School of Nursing

Key learning outcomes for nursing students include the ability to engage in evidence-based practice, to write a critical appraisal of published research studies, and to critique and interpret statistical methods and results. Assessments of these outcomes indicated a weakness in students’ ability to read and interpret research findings and showed that this weakness was common to all nursing students, regardless of which existing statistics course they had taken. As a result, the school created a new course, Introduction to Basic Statistics for Evidenced-based Practice, which is now required for all nursing students. By creating a statistics course specifically designed to meet the needs of nursing students, the school is better able to ensure that all students meet curricular requirements directly while also meeting the University goals that students gather and evaluate information effectively and appropriately and understand and apply scientific and quantitative reasoning. (See Appendix C1, pp. 30–31.)

Economic Analysis, Swanson School of Engineering

An example of an assessment-driven curricular change relates to the ABET expectation that students attain “the broad education necessary to understand the impact of engineering solutions in a global, economic, environmental, and societal context.” This outcome is assessed, in part, using student projects from the Engineering Economic Analysis course. Prior to fall 2009, students completed a single comprehensive project requiring an economic analysis of a contemporary problem, including a consideration of the societal implications of their solutions. Based on an assessment of these projects, the faculty concluded that students were not adequately addressing the societal issues. As a result, beginning in fall 2009, the course was modified to require three mini projects based on model-eliciting activities (MEAs), a proven educational methodology for presenting complex, realistic, open-ended problems to students. Subsequent assessments demonstrate significant improvements in this learning outcome, as students are now better able to consider environmental, ethical, and other societal issues in finalizing their solutions to these problems. These positive results have led the school to introduce MEAs into other courses to reinforce concepts related to other learning outcomes, including an expectation that graduates be able to design and conduct experiments and analyze and interpret data. Preliminary assessments of these student learning outcomes indicate that the introduction of MEAs in these courses also is resulting in improved learning in these areas. (See Appendix C1, pp. 31–34.)

Writing, Pitt–Greensburg

During academic year 2009–10, Pitt–Greensburg’s assessment of student composition skills was based on a 20-item instrument. Of the 20 students assessed, 14 (70 percent) scored three or higher overall on a scale of one to five, falling just short of the stated goal of 75 percent. This result indicates that the program is effective overall but could improve in specific areas. After rigorous inquiry and discussion, the faculty agreed to create clearer guidelines for the first two composition courses to address the identified areas. Furthermore, the purposes of the third composition course are being closely examined; current versions of this course may be replaced with classes that are specific to each major or department. (See Appendix C6: Greensburg 2011, “Assessment Appendix Pitt–Greensburg May 2011,” pp. 46–49.)

Oral Communication, Pitt–Johnstown

In the area of oral communication, faculty used an established general education evaluation form to assess 26 randomly selected Primary Speaking (PS) videotaped speeches. Overall, all of the PS speeches met the desired 75 percent proficiency level; however, 15 percent of the speeches fell below that level in the areas of content and performance. To improve performance in these areas, the faculty are strengthening relevant support services. (See Appendix C6: Johnstown 2011, “Gen_Ed_Sp_Enhanced_Assessment Matrix.”)

Mathematics, Pitt–Titusville

In the area of mathematics, the faculty assessed students’ ability to apply quantitative reasoning to physics, computer science, business, and nursing. In surveys, 79 percent of students responded that the course had shown them new ways to apply mathematics to other fields and had helped them better understand the connections among those fields. However, when the department assessed a set of core concepts embedded in assignments and exams, it found that only 57 percent of students could satisfactorily solve practical application problems. The faculty are presently reviewing enhancements to improve student competence in core concepts. (See Appendix C6: Titusville 2010, “UPT Assessment of General Education 2009–10,” pp. 10–14.)