Assessment FAQs

What is the QTA Committee? What is its role?

The Quality Through Assessment (QTA) Committee has been established by the College president for the purposes of (1) reviewing the nature and extent of assessment efforts underway throughout the campus and (2) identifying best practices in assessing student learning and institutional effectiveness. It is composed of constituents from throughout the College who are directly involved with assessment.

The committee's role is to:

  • Examine the existing processes and elements of assessment across the campus and show how they advance the College's mission.
  • Facilitate the accreditation/certification/endorsement efforts of the College and its programs.
  • Review statistical reports, surveys, and program reviews; look for changes and trends.
  • Communicate assessment efforts and needs to the College community.
  • Collect and share useful information about student learning and institutional effectiveness.

Why is the College initiating this outcomes assessment effort?

For years, the College has employed a variety of assessments at the institutional, school, and program levels. While this proactive approach has resulted in valuable data and significant improvements, the decentralized nature of these assessment activities allowed for limited systematic use of the information collected. As a coordinating entity, the QTA Committee can ensure that information is gathered and utilized efficiently and consistently. The committee, as a centralized unit with "big picture" perspective, also can effectively share the depth and breadth of the College's proactive assessment efforts with external stakeholders.

What was the purpose of the assessment survey completed by faculty in 2007? Why do you need to know the various assessment mechanisms? What will be done with the information? Can I see the results?

The purpose of administering this instrument was to determine where the College stood in terms of assessment. Among the guiding principles for establishing the QTA Committee as the coordinating assessment entity is the acknowledgment that assessment requires multiple measures to effectively close the loop between teaching, evaluation, and feedback. No single approach can be prescribed from a centralized College perspective.

For example, research findings and reports can be used in a number of ways to provide feedback on selected institutional effectiveness measures, such as evaluation of instruction and services, student demographics, job placement, skills, satisfaction and salaries, developmental skills and abilities, and comparisons in student learning between classes and with other institutions. Schools may, in turn, use this information to evaluate their current assessment techniques and discuss the effectiveness of various approaches in terms of goals and desired outcomes.

The results of the faculty survey are available on this website.

Does this assessment initiative mean that faculty will have to change the way they teach or the way they approach day-to-day operations?

Since the purpose of assessment is to inform and improve future practice, change may be a natural outcome. Assessment initiatives should highlight the strengths and weaknesses in the teaching/learning process and service delivery. Future practice would then accentuate the strengths and address the weaknesses.

A previous assessment initiative at the College gained initial momentum and faded. What assurances do we have that this initiative will not meet the same fate?

As with any initiative, continued momentum depends on having the support of all stakeholders and mechanisms that ensure follow-through. The College president has expressed her dedication and has devoted resources to this effort. Through the ongoing efforts of the QTA Committee, assessment is firmly on the "radar screen."

What is the role of the QTA Committee with regard to the way learning and services are evaluated? What role will the school/office play in this?

It is not the intent of this initiative to alter the specific interaction between faculty, staff, and students. Rather, through this coordinated College-wide effort, the committee hopes to establish a systematic, iterative process of gathering and reviewing assessment mechanisms and processes across the campus. As with all initiatives that affect the teaching/learning process and service delivery, school/office involvement is critical. Levels of involvement in student evaluation, in turn, will vary depending upon the goals, outcomes, and performance indicators consistent with the College's philosophy and mission.

What are faculty and academic units to do with the assessment data?

Assessing student learning and the services supporting that learning makes sense only if the institution, schools, and programs make use of the results; otherwise, it becomes an empty exercise in data collection. The expectation is that units will use assessment results for decision-making that improves instruction; strengthens the curriculum, programs, and services; and forms the basis for policy development and resource allocations. Assessment data should help answer the following questions:

  • What are students learning?
  • Are they learning what we want them to learn?
  • What do the findings/conclusions from data analyses suggest about the students, the programs, and student learning?
  • How well do our curricular and co-curricular activities enhance learning?
  • How do students view their Penn College experience?
  • How do Penn College students compare with their counterparts across the region or nation?

An academic or administrative program review should be able to answer the following questions:

  • How have the outcomes assessment results been reflected in decision making for determining personnel needs/issues?
  • How have the outcomes assessment results been reflected in allocating financial resources?
  • How have the outcomes assessment results been reflected in developing new programs?
  • How have the outcomes assessment results been reflected in making program changes?
  • How have the outcomes assessment results been reflected in accomplishing departmental (program) goals?
  • How have the outcomes assessment results been reflected in decision making relative to professional faculty development targeting student teaching and learning?
  • What major changes to the assessment plan will be introduced this year?

What are some of the most common tools or techniques for measuring outcomes?

  • Tests developed by instructor
  • Assessment tools developed by the department/program
  • Exams drawn from a textbook or test bank
  • Placement rates
  • Review of types of employers hiring students
  • Student evaluation of instructional quality and services
  • Student developmental skill ratings
  • Student performance on national tests
  • Graduation rates
  • Retention rates
  • Students who continue their education
  • Student attitudes toward and engagement in college
  • Portfolios
  • Final projects/exams
  • Capstone projects
  • Student focus groups
  • Student surveys
  • Employer feedback
  • Quizzes
  • Reflective writing/journal writing
  • Notebook of course work
  • Research papers
  • Non-research papers (e.g., position statement)
  • Take home tests
  • Graded homework (in addition to/other than reading a text)
  • Standardized tests (e.g., prepared from outside, competency tests, program-specific tests, MS cert.)
  • Internship/co-op/practicum - Evaluation sheets from College
  • Internship/co-op/practicum - Other evaluation
  • Personal interviews
  • Lab evaluations
  • Group projects
  • Presentations (oral, visual)
  • Reviews from College entity (other than supervisor)
  • Reviews from external entity
  • Rubrics

What should we do about the learning goals that are not easily measured?

It is true that not all learning goals can be measured. When skills are difficult to measure with direct outcomes assessment, we use multiple indirect measurements. For example, accrediting agencies are aware of the difficulties of proving that students are lifelong learners. However, it is possible to demonstrate through alumni surveys that students are still learning. This might show up as a positive response to questions such as "Have you attained another degree since graduation from college?" The bottom line is to build a credible body of evidence, using several different tools, that establishes a pattern from which one can reasonably infer that an outcome occurred.

Student grades and attendance at student activities are already being reported. Do these count as assessment?

Absolutely. However, in the matter of grades, most final course grades consist of components that may not accurately reflect the degree of student learning. Consider attendance policies and extra credit, for example: one will lower final grades, the other will raise them. More reliable than an overall course grade are individual grades on exams or projects within a course. If you can "match" or "align" a course's goals to specific items on an assessment within the course, even an assessment you are already giving, then you have a clear way to track student success on particular goals over time. You might notice that students are performing better or worse on a certain competency, which allows you to home in on areas of interest.

In the case of student activities, attendance records give an account of the popularity of an event, but provide little indication of the learning outcomes achieved.

Have you ever wondered why your students are succeeding in a particular area? Holding a focus group or reviewing particular items on an exam may provide an answer. Doing so would not only be useful to you in future semesters; it would also stand as an example of what Middle States calls "good" assessment: it is directly relevant to you, and it answers questions you have about your own students.

What else does assessment entail besides grading and counting attendance?

As we mentioned earlier, the QTA Committee's purpose is to identify best practices both within the classroom and throughout the institution, while recognizing that there is no "one size fits all" approach. Assessment can take many forms, depending on the situation and on whether the focus is institutional effectiveness or student learning. Results from Institutional Research studies and reports may shed some light on student attitudes and opinions at any level within the institution. At the program level, assessment may involve handing out a survey in a class or assisting with a program review. Some faculty and staff form small groups or pairs to design surveys incorporating common questions to be used on final exams or assessments across areas. Others hold focus groups with students or collaborate on new programs or capstone projects designed to measure success on specific goals. The variety of projects is endless, and results from assessments can be reported in either qualitative or quantitative ways.


We would like to acknowledge our colleagues at the following organizations, colleges, and universities for providing some foundational elements around which some of the FAQ content was built:

  • Middle States Association of Colleges and Schools
  • University of Delaware
