A couple weeks ago, this blog argued for creating an “Equity Mindset” when it came to our work with our students and one another. This week, I’d like to dive into one of the more interesting—and difficult—aspects of equity work in higher education: Assessment.
In the last few years, a sizable body of research has grown around the concept of equitable assessment. What does it mean to assess equitably? As Erick Montenegro and Natasha Jankowski have suggested, a big part of the answer to that question is “for assessment to meet the goal of improving student learning and authentically document what students know and can do, a culturally responsive approach to assessment is needed.” In addition to being culturally responsive, equitable assessment should acknowledge that learning outcomes can often be achieved via multiple means, and assessment practices ought to be designed with that diversity in mind. The problem, however, is that we all too often design assessment to be the fastest and most concise way to tell the story of student learning, rather than the fullest or most accurate. As Grant Wiggins (one of the pioneers of Backward Design) warned, “we seem unable to see any moral harm in bypassing context-sensitive human judgments of human abilities in the name of statistical accuracy and economy.” Wiggins was writing in the context of K-12 education’s mania for standardized testing, but his admonition is equally salient for those of us in higher ed. To use an obvious example: is a high-stakes, in-class, multiple-choice examination the only way to measure student learning? It is one way, but certainly not the way, to do so.
Assessing equitably, then, asks us to be intentionally mindful of our particular students and the contexts in which they are attempting to learn—both the learning environments we create as part of our curriculum and the larger socio-political-economic structures in which we all operate. We must endeavor to, in the words of Montenegro and Jankowski, “not only understand why our students are achieving, persisting, or stopping-out in the ways they are, but to also understand the underpinning structures of why these things are happening in the first place.” One important tool with which we can do so is student voice, but it’s also a tool we under-utilize. Assessment reflects what faculty and administration think are both appropriate learning outcomes and the best methods by which students achieve them; what if students were invited into that conversation as well? Are the assumptions we have about how students learn, as well as how they show us they’ve done so, accurate?
Part of being mindful of our particular students and their contexts is being culturally responsive with our assessment practices. One common area in which we see the importance of these types of practices is in students’ language use and writing. Dominant Academic English is almost always upheld as the sole correct way to write, to produce knowledge; what are the cultural origins of that hegemony, and how does this culturally-specific conception of language get weaponized against students from marginalized groups?1 Is the traditional research assignment the most appropriate method of assessment for specific learning outcomes? Well, it depends. Maybe the answer is yes, but it’s also one of a range of appropriate assessments. Maybe that answer leads us to reconsider when in the curriculum this type of work is most appropriate for our students as well as the ways in which we prepare them to be successful once they embark on this task.
Another element we should consider in practicing culturally-responsive assessment is the danger of implicitly norming specific practices. For example, it’s one thing to study racial disparities—typically white vs. non-white students—in institutional data. If we simply assume that the outcomes exhibited by white students are the standard, however, and thus our BIPOC students2 are “deficient,” we might exacerbate, rather than constructively address, gaps in student success: “white students are then normed as the population to which others should strive.” We must also be wary of disregarding data simply because of small sample size. It’s one thing to recognize a small n and approach the results with the understanding that they will be more volatile; it’s another thing entirely to not count that n at all. That is erasure rather than assessment, and it moves us no closer to addressing the real needs of some of our students.
These are but two examples—one on the individual class scale and the other from a larger institutional perspective—of factors we ought to be thinking about in order to approach assessment more equitably. We can’t create an academic community in which all of our students feel they belong if we aren’t attentive to the many ways in which we’re judging their work and shaping their academic success. By paying close attention to, for example, the types of work we implicitly norm as “exemplary,” or the ways in which we ask our students to demonstrate the skills and competencies they’ve acquired in our curricula and learning spaces, we demonstrate a commitment to equity and thus to every student’s chance to succeed. But this work does require an unpacking of the assumptions many of us hold about what “counts,” what’s “standard,” and indeed what learning really is, as well as how we show it. There may be no better model for our students, however, than for us to commit to that work in the same spirit of reflection and improvement we hope they show to us.
THIS WEEK IN CETL
Monday, Feb. 17, 11:00-11:30 ♦ Lunch and Learn: Using Audio Feedback for Student Grading
Tuesday, Feb. 18, 3:00-4:00 PM ♦ Creating and Using Rubrics Effectively
Finally, there’s nothing quite like that midterm feeling, am I right?
Live look at your average professor managing all of the responsibilities of the semester (wait for it): pic.twitter.com/ZkLAFtBcqq
— Josh Grubbs (@JoshuaGrubbsPhD) February 15, 2020