FAQ & Definitions

Basic Definitions

What is assessment and why is it important?

Assessment is the systematic process of understanding and improving student learning. It involves gathering information about student learning, which may be qualitative or quantitative in nature, and using it for some purpose, such as providing feedback to students, assigning grades, or making curricular improvements. Assessment is the scholarly aspect of teaching.

Assessment:

  • carries practical value. It highlights our unique strengths and uncovers ways to improve. Findings inform curriculum design and pedagogical approaches and allow us to articulate the value of our programs.
  • transcends grades. It can address areas such as student engagement, growth in self-efficacy, and student experiences, and can uncover unintended outcomes, both positive and negative.
  • allows us to look forward as well as reflect. We can gauge learning as we go, adjusting the curriculum as needed. At the end of a course, reflecting on assessment findings can lead us in new directions.
  • engenders intentionality. Learning objectives make the learning and assessment process transparent for students and lay a solid foundation for curriculum development at both the course and program levels.
  • extends responsibility for learning to students themselves. When students understand learning objectives and are included in the assessment process, they take more ownership of their learning.

What is formative assessment?

Formative assessment typically concerns development and ongoing improvement. In a course context, formative assessment may be used at various points during a term to provide feedback on how a course is going and to improve learning, motivate students, diagnose a student’s strengths and weaknesses, or help students reflect critically on their own learning. By identifying what students are learning and what challenges they are facing in the middle of a course or module, instructors can build on strengths and reduce weaknesses as they proceed. Formative assessment is especially relevant for new and substantially revised initiatives, as a check on how well implementation and impact match intentions. Classroom assessment techniques are a common example of formative assessment.

In a program context, formative assessment may be used to steadily improve quality, offer points of feedback and reflection on recently implemented changes, and diagnose program strengths and weaknesses.

What is summative assessment?

Summative assessment typically concerns accountability, performance, and impact. In a course context, summative assessment may be used to grade or rank a student, predict a student’s success in other courses, or otherwise measure a student’s proficiency. Usually, it is obtained at the end of an activity, course or experience, demonstrating to instructors what their students have achieved. In a program context, summative assessment may be used to demonstrate performance or serve as a benchmark to other programs and schools. This information can be used internally and can also be shared with others. Evaluations of student work using criteria aligned with learning objectives are a key approach to summative assessment, showing instructors what students now know and what they can do.

What is the difference between assessment and grading?

First, assessment often takes a much broader approach to student learning than grading does. Grades reflect how students did in a single course or on a particular project or exam. Assessment can involve looking at what students do across a set of courses or experiences—whether they have been able to make connections across courses, whether individual courses have been transformed into a coherent whole, what they know and are able to do after completing some component of their degree requirements.

In addition, grades by themselves do not explain in detail what students have learned. They can reflect criteria not directly related to student learning, such as class participation and getting papers in by the due dates, as well as how a student’s performance compares to that of classmates. Graded course assignments can be, in part, teaching tools used to enhance student growth as much as to measure it. Furthermore, many student learning assessment tools differ from those used to determine grades; these include baseline measures given to students beginning a course or a program of study and surveys of current and recent graduates.

Grades can, however, be a key source of data for assessment of student learning. This is particularly true when an assignment is clearly linked to course or program objectives and the grading of that assignment uses criteria carefully aligned with those objectives. See our page on assessing learning in classes for more information on grading.

Process of Assessment

What are the differences among course goals, learning objectives, and outcomes?

These terms are used in different but overlapping ways in the context of assessment. Here is one way to think about the distinctions among them:

Goals: General statements of educational intent, whether for a course or other instructional unit or for a program. These might, for example, refer to the topics or content areas that an instructor intends to explore in the course or program.

Objectives: Intended results of instruction, curricula, programs or activities.

  • Learning objectives are more specific and concrete statements of what students are expected to learn or be able to do upon completion of an instructional unit or program, and may focus on cognitive, affective, or psychomotor learning. Learning objectives identify the learning behavior and the criteria to be met. Strong learning objectives avoid vague verbs like “know,” “learn,” “understand,” and “appreciate,” and instead use more specific language such as “analyze,” “evaluate,” “demonstrate,” “synthesize,” “generate,” and “create” that gets at higher-level skills. See more information on designing learning objectives.
  • Outcomes: The achieved results of instruction, curricula, programs or activities—what students actually know and are able to do, and what values and ways of thinking and behaving they have acquired, by the time they complete a class or a broader course of study such as a major.
    • Learning outcomes: Specific observable/measurable statements of the learning students achieve. To what extent did the student achieve the stated learning objective?
    • Value-added outcomes: The amount of learning achieved as a result of instruction that has taken place within a particular context (e.g., a classroom or a university), over and above the knowledge or skills a student had upon arrival or gained in other ways.

View our guide to creating effective learning objectives.

What is the difference between direct and indirect measures?

Assessment plans generally include two types of approaches:

  • Direct measures allow students to demonstrate knowledge, capabilities, and ways of thinking related to the learning objectives.
  • Indirect measures get at this information in other ways. For example, instructors can gain insights on what students may now know and be able to do by asking them to reflect on their experiences. Surveys of alumni are another approach. Or, instructors might look at the content of their courses, and the patterns of courses their students take, noting connections with the learning objectives. Faculty members, research mentors, and employers can provide helpful observations as well.

Some types of direct measures

  • Performance on selected exam questions in foundational courses
  • Scoring of a sample of student papers using rubrics linked to learning objectives
  • Comparisons of responses to exam questions given earlier and later in a quarter, or characteristics of papers written in lower-level courses compared to papers in advanced courses
  • Analysis of characteristics of senior theses
  • Analysis of electronic discussion threads or of in-class presentations or discussions

Some types of indirect measures

  • Interviews of students completing Graduation Petitions
  • Responses to CTEC questions
  • Surveys of recent graduates
  • Counts of students involved in faculty research or choosing particular types of courses
  • Surveys of faculty regarding student preparation for advanced classes or surveys of employers regarding knowledge, skills, and attitudes that graduates bring to the workplace
  • A curriculum map showing how the content and the learning objectives for individual courses, or for sets of courses fulfilling the same requirement, fit with the general learning objectives

What are the different ways that assessment data can be used?

There are four main ways that units or individual instructors may use assessment data to evaluate student learning outcomes: formatively, summatively, as baselines, and as benchmarks. See Assessment in Practice for specific examples from Northwestern faculty and staff.

Methods of Assessment

What are criterion and norm-referenced assessment?

  • Norm-referenced: This kind of assessment allows the instructor to differentiate the performance of different students or groups of students in relation to one another. Students are rated on whether they have performed better, worse or similarly to other students, but they are not rated on what they have learned or whether they gained specific knowledge or a set of skills. Grading on a curve is a common example of norm-referenced assessment.
  • Criterion-referenced: This kind of assessment allows the instructor to gauge achievement or performance according to a pre-existing standard. For example, student papers in a history course might be assessed on the extent to which the paper used evidence to inform the thesis or how effectively the paper integrated primary sources and scholarly theory.

What are self and peer-referenced assessment?

  • Self-referenced: Students compare their own performance to a standard and/or reflect on their own learning and development and areas to improve and grow.
  • Peer-referenced: Students compare the performance of their peers to a standard. They might seek to offer feedback and guidance to their peers, to help them identify strengths and areas to improve and grow.

Additional resources on self and peer-referenced assessment are available on the assessing learning in courses page.

What is the role of validity and reliability in designing assessments?

In designing assessments, instructors should strive to ensure that they are both valid and reliable.

A valid assessment gauges whether the student has met one or more of the stated learning objectives, and not something else. Sometimes, assessments may appear to measure the right thing, but actually may be measuring knowledge or skills that are not of central importance in a course. For example, some multiple-choice tests measure how adept a student is at the mechanics of test-taking, rather than their understanding of the material. A written assignment that is meant to measure a student’s ability to critically analyze a problem may actually be measuring writing skill (not unimportant, but perhaps not a central objective of the course), and a presentation-based assignment meant to measure understanding of concepts may actually be measuring public-speaking skill.

A reliable assessment will produce consistent results every time it is used. Certainly, one of the most reliable types of assessments would be a multiple-choice or true-false exam, assuming there is a clearly defined answer key with right and wrong answers. In such a case, the resulting scoring would be reliable and consistent, no matter who carries it out. For other types of assignments, particularly for those that are more open-ended in nature, where there may be a wider range of possible responses or where “correct” responses are not the goal, reliability depends on a shared set of criteria (and agreed-upon designations of quality).

How is student learning assessed in co-curricular activities?

Assessing student learning in co-curricular activities is as important as assessing in-classroom learning. Co-curricular learning takes place through a variety of programs, activities, and services, including summer transition programs, the residential experience, leadership development, career development, and more. The process and strategies used to assess learning in the co-curriculum are much the same as those used in the classroom. More information about assessing student learning in the co-curriculum can be found on the student affairs assessment page.

Funds for Enhancing Learning and Teaching

University-wide Funds

  • Faculty Research Grants: Provides individual research grants, creative arts grants, and subvention publication grants (for production costs related to publication) available to certain NU faculty members. Application deadlines are in October, January, and April.
  • The Alice Kaplan Institute for the Humanities: Offers fellowships (by competitive application) for research leaves of absence to NU faculty only. Applications for fellowships are due in January.
  • The Alumnae of Northwestern University: The Gifts and Grants Committee of the Alumnae of Northwestern solicits proposals for projects that will benefit students and promote research and scholarship. Enhancing partnerships among disciplines is one of their goals. Individual grants have ranged from $100 to $20,000, but most awards are for $1,000 to $3,000. Preference is given to projects not previously funded.

School-based Funds

Some individual schools within Northwestern offer limited funding for such activities as:

  • developing innovative new courses
  • substantially revising existing courses and curricula
  • enhancing student learning experiences through guest speakers, field trips, and the like

Examples include:

  • Hewlett Curricular Fellowship Program: Supports efforts by faculty in Weinberg to create new courses, or revise currently taught courses, to serve as examples of courses that might meet a social inequalities and diversities graduation requirement in the event that the Weinberg faculty approve such a requirement. Learn more about the Hewlett Curricular Fellowship Program.
  • Course Enhancement Grants: Provides funds for special events or activities associated with a Weinberg course. Read about the course enhancement grants and the application process.
  • Freshman Seminar Enhancement funds: Provides funds for special events or activities associated with a Weinberg first-year seminar. For more information, contact: Lane Fenrich (fenrich@northwestern.edu).
  • The Associate Dean for Undergraduate Academic Affairs in Weinberg has limited discretionary funds available for special requests associated with courses and other faculty-student activities. For more information, contact: Mary E. Finn (mfinn@northwestern.edu).
  • Dispute Resolution Research Center, Kellogg School of Management provides funds for research projects on conflict theory, particularly interpersonal conflicts and disputes. Projects must be likely to produce publishable material for the Center working paper series. Grant proposals are due in September and in April.