Assessing Specific Types of Assignments

Learn more about assessing specific types of assignments below.

Critical Thinking

To assess critical thinking, instructors should first identify the critical thinking skill(s) they wish their students to achieve. Common critical thinking skills include the ability to:

  * evaluate evidence or authority
  * examine central issues or assumptions
  * recognize and evaluate important relationships
  * evaluate multiple or competing perspectives
  * provide alternate explanations and interpretations
  * separate relevant from irrelevant information
  * use evidence to inform arguments and conclusions
  * identify and explain the best solution to a real-world problem using relevant information
  * explain how changes in a (real-world) problem situation might affect the solution
  * look for, or create, new solutions
  * reframe problems, questions, and issues
  * reflect on one's own gaps, strengths, and weaknesses

From there, instructors can develop a rubric that identifies the criteria associated with the activity and skill, and describes the level to which each criterion has been met. Rubric scales for each criterion might include performance levels such as less plausible to most plausible, superficial to deep, unconnected to deeply connected, irrelevant to relevant, or novice to mastery.
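For instructors who aggregate rubric results across a class, a rubric of this kind can also be represented and scored programmatically. The sketch below is purely illustrative: the criteria, level names, and function are assumptions for demonstration, not part of any published rubric.

```python
# Illustrative rubric: each criterion maps to an ordered scale of
# performance levels, from lowest to highest (names are examples only).
RUBRIC = {
    "use of evidence": ["irrelevant", "partially relevant", "relevant"],
    "depth of analysis": ["superficial", "developing", "deep"],
    "plausibility of solution": ["less plausible", "plausible", "most plausible"],
}

def score(ratings):
    """Convert a student's per-criterion level into numeric points
    (0 = lowest level on that criterion's scale) and total them."""
    points = {
        criterion: RUBRIC[criterion].index(level)
        for criterion, level in ratings.items()
    }
    return points, sum(points.values())
```

For example, a student rated "relevant," "developing," and "most plausible" would earn 2 + 1 + 2 = 5 points on this three-level scale; the numeric totals then make it easy to compare performance across criteria or across a class.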

Examples

  1. Students in an engineering class have to determine which data points are extraneous and which need to be charted on a graph and explained. They are required to explain which data were used and which were omitted in the final project. Their responses are assessed on the logic of their data selection and the clarity of their explanations.
  2. Students in a marketing class have been asked to analyze a case study and create a set of solutions based on outside research of trends. The rubric to assess their critical thinking includes the following criteria: (a) the extent to which underlying assumptions are identified and questioned, (b) the extent to which the student has identified plausible solutions, and (c) how effectively solutions are communicated.
  3. Students in a history class must identify which evidence drawn from a collection of documents will help support an original thesis statement, offering a cause or set of causes to help explain a given trend or event. They must then rank a set of scholarly arguments, from “Most Plausible to Least Plausible” and describe their reasoning behind each chosen ranking. The responses are assessed for their level of plausibility, use of evidence, the depth of their analysis, as well as their successful evaluation of multiple points of view.

References

List adapted from Stein, B., & Haynes, A. (2011). Engaging faculty in the assessment and improvement of students' critical thinking using the Critical Thinking Assessment Test. Change: The Magazine of Higher Learning, 43(2), 44-49.

Essays & Other Written Work

Assessing essays and other written work can be tricky, partly because writing assessment is subjective and partly because “good writing” varies with the purpose of an assignment and the field or “discourse community.” For example, instructors in English look for different characteristics in an analysis of a play than biology instructors want to see in a lab report. But when grades vary, many students mistakenly assume that assessment of writing in college is idiosyncratic and unfair.

To prevent that misconception, and to help students develop a better understanding of how to write in different fields and for different audiences, one best practice is to share grading criteria with students both when making an assignment and when returning a graded paper, as in these examples from first-year seminars, an introductory engineering course, and an upper-level biology course.

In addition, if instructors assign more than one paper in a quarter, they can explain how the categories of assessment – such as purpose, organization, development, style, and mechanics – remain constant even though in some instances students will be writing an argument or analysis, while in others they may be telling a story or reporting on survey data. The more we help students move beyond a simplistic and rigid, “rules-based” understanding of writing, the more we will be helping them prepare for the variety of writing they will need to do throughout their lives.

One special case is assessment of writing done by non-native English speakers. In this case, instructors often grade primarily on content (purpose, development, organization, and quality of insights), while looking for improvement in style and mechanics, but not mastery. One good strategy is to help students with one area of grammar at a time, such as tenses or pronoun reference, and then grade on improvement in that one area.

For more resources on assessing and responding to student writing, visit nuwrite.northwestern.edu.

Group Projects

Group projects can help develop skills in collaboration, leadership, and time management, and they often mirror the work settings many students will experience. Assessment of group projects can place more emphasis on process and skills than on content recall, and it often encourages students to explain strategies rather than simply give answers. It can be challenging, however, to assess individual students and their contributions within this group context.

Two approaches balance assessment of the group with assessment of individual achievement by assigning a grade for each. In the first, if a project can be equitably divided into distinct tasks (or roles), those tasks are assigned to individuals and their contributions are marked separately. Because some tasks will likely involve everyone, the final product or outcome receives a group mark that is given to each member of the group.

In the second approach, each member again receives the same group grade. However, each student additionally writes a reflection paper on the process and group dynamics, describing their unique contributions to the project, and is graded individually on that paper.

Peer assessment can be a fitting approach to group projects, since students' peers may be best informed about one another's contributions. Students can be asked to allocate a proportion of a total mark to each member of their group. Peer assessment requires a good deal of preparation and expectation-setting, with students agreeing ahead of time on the criteria to be used.
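One common way to combine peer allocations with a group mark is to scale the group mark by each member's average peer-allocated share. The sketch below is a hypothetical illustration of that arithmetic; the function name and weighting scheme are assumptions for demonstration, not a prescribed method.

```python
def individual_marks(group_mark, allocations):
    """Scale a shared group mark by each member's average
    peer-allocated share of the work.

    allocations maps each assessor to the shares (summing to 1.0)
    they allocate across all members of their group.
    """
    members = list(allocations)
    n = len(members)
    marks = {}
    for member in members:
        # Average the share that every peer allocated to this member.
        avg_share = sum(allocations[assessor][member] for assessor in members) / n
        # Multiply by n so an exactly equal split (1/n each)
        # reproduces the group mark unchanged.
        marks[member] = round(group_mark * avg_share * n, 1)
    return marks
```

With equal allocations, every member simply receives the group mark; unequal allocations shift marks up or down around it. In practice, instructors typically cap individual marks at the maximum possible score and moderate outliers.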

References

Cooper, J. L., MacGregor, J., Smith, K. A., & Robinson, P. Implementing Small-Group Instruction: Insights from Successful Practitioners.

Discussion & Participation

Instructors often consider discussion and student participation to be a vital component of their courses—and for student engagement overall—but many struggle with how to meaningfully assess these elements.

To begin, instructors should think carefully about what they believe constitutes meaningful student contributions. Does it mean asking (or answering) questions? Does it mean listening to others and responding in respectful, thoughtful ways? Does it mean reflecting on course concepts and ideas before or after class, and sharing those reflections with others? Is it to consider multiple points of view or to synthesize the ideas of others?

Creative Work

This topic is often met with skepticism: how can something as elusive as creativity be measured? Yet a great deal of work has been done in this area over the past several decades. Given creativity's associations with innovation, adaptability, critical thinking, and problem solving, it can be important to understand the degree to which students are demonstrating it.

With so many definitions of creativity in existence, it is important to clarify the definition being used in the context of the course or program as well as the specific student outcomes around creativity.

If students are assigned to create a tangible product (or an idea for a product), the focus of the assessment might be the product/idea itself. Following the practice of creative product assessment in both academic and corporate settings, specific dimensions of creativity can be identified (such as novelty, usefulness, and elegance of design), and those dimensions can be assessed. Those making the judgments along these dimensions might include instructors, other students, and/or external field experts. The use of “authentic assessment,” where the assessment process reflects the way feedback would be shared and used in the “real world,” can appropriately be applied to product/idea development.

The process of creativity and creative thinking can also be assessed. For example, students might fill out checklists of behaviors deemed creative (such as identifying problems or referencing sources outside the discipline for inspiration). Tests of divergent thinking might additionally be used if one of the outcomes is to develop creative capacities that transfer beyond the subject at hand.

Using multiple assessment techniques and having the assessment stem from multiple sources, such as the instructors, peers, experts, and self-assessments, is particularly valuable when assessing and providing feedback on creativity.

Authentic Assessment

Just as authentic learning experiences engage students in work processes that researchers and experts use every day, authentic assessment reflects the ways in which field practitioners work and receive feedback. It directly examines the intended benefit of student learning, rather than measuring an out-of-context proxy.

Part of what distinguishes authentic assessment is the intended audience of the student's presentation of material. In more traditional assessment approaches, students may write papers with the instructor as the intended audience. In authentic assessment, students may present a proposal, performance, or service as they would in their field of practice, for example to a real or imagined audience relevant to that field. The work itself may also serve a real purpose, such as a problem to be solved.

Because students or groups of students within a single class may be producing very different "products" to be assessed in very different environments, authentic assessment necessitates a degree of adaptability in grading. The time this requires, and the challenge of creating consistent grading criteria, should be weighed against the learning benefits of this approach.