Evaluating Multimodal Student Work

[Image: Old School Evaluation: Scantron. By Obscure Associate on Flickr: http://www.flickr.com/photos/obscureassociate/4088365078/]

As part of my job, I’m often expected to critique works for which there is no clear-cut evaluative rubric — I grade student papers, review journal submissions, serve on selection committees — yet because I’m confident in my knowledge in these contexts, the “squishiness” of evaluation doesn’t bother me. I know a good thing when I see it, and a not-so-good one, too. That’s not how I justify my grades to students, or explain an article rejection to an author, of course. I typically offer comments within the body of a text, which allows me to address micro issues, and then I write a short “cover letter” that offers big-picture, summary comments. I almost never have to deal with disputes — and in some cases, even if I’ve had to award a not-so-hot grade or deliver the news of a rejection, people have thanked me for my helpful comments. That’s good. I want to be helpful. That’s why I dedicate all that time to offering constructive, critical feedback.

Since I began teaching graduate students at The New School, I’ve given students the option to complete work in formats that lie outside my own realm of experience. One class curated an exhibition. Another produced an online journal. And even in my more traditional classes, I give students the option of completing creative projects that critically address the subject matter of the course. I also ask them to submit a short written supplement in which they explain the ideas at the center of their work, what “argument” they hoped to make through the production, and how successful they think they were. This supplemental paper allows me to base my evaluation primarily on content (after all, my courses aren’t intended to teach them production skills; I’m simply allowing them to use their existing production skills to work through the ideas central to my courses), but also to address how the form of the project serves its content.

As I’m about to embark on two brand new fall classes that will result in the creation of collaborative, research-based interactive projects — one, an exhibition of “material media,” the other, a map of historical urban media networks — I think it’s time to develop a more formal (though not rigid) evaluation rubric. I’m doing this not only for myself, to guide the assignment of grades, but also to aid students: to help them figure out how to evaluate their own scholarly production work and other multimodal projects — critical skills that will help them navigate a map-crazy and dataviz-obsessed popular media and design culture.

[Image: Our faux scantron wedding RSVPs. By our friend Dan Richardson.]

So, I’m investigating “how to evaluate multimodal work.” I’m still thinking through this, but here’s some of what I’ve gathered thus far. I’m including only the material that’s relevant to evaluating student projects; visit the original sources for how these standards can be applied to peer-reviewed faculty work. My own comments appear in square brackets.

From the MLA’s “Short Guide to Evaluation of Digital Work” (the following are direct — but, in some cases, abridged — quotations): 

  • Is it accessible to the community of study? The most basic question to ask of digital work is whether it is accessible to its audience, be it students (in the case of pedagogical innovation) or users (in the case of a research resource). A work that is hidden and not made available is one that is typically not ready in some fashion. It is normal for digital work to be put up in “beta” or untested form just as it is normal for digital work to be dynamically updated (as in versions of a software tool)... [Our projects will be hosted on either Parsons or Media Studies servers, and will be made publicly accessible. In addition, our creation process will be made public; all students will be asked to keep blogs on which they chronicle their research and production processes and offer feedback to one another. These individual blogs will be aggregated on a central project blog, to which I will add summary comments; a rough sketch of one way to pull those feeds together follows this list.]
  • Have there been any expert consultations? Has this been shown to others for expert opinion? Given the absence of peer review mechanisms for many types of digital work, candidates should be encouraged to plan for expert consultations, especially when applying for funding…. [Our mapping tool is currently in development with expert designers at Parsons. Our exhibition platform is being developed by an open-source community, to which I will ask the students to contribute. In addition, I’ll be asking faculty experts to attend class and offer constructive criticism at key points throughout the semester.]
  • Has the work been reviewed? Can it be submitted for peer review? Has the work been presented at conferences?… Have papers or reports about the project been published? [I’m currently seeking opportunities to highlight these projects, and critically assess their success/failure as pedagogical experiments, in peer-reviewed publications and at conferences. I also hope to fold particular portions of the mapping project into my next book project.]
  • Do others link to it? Does it link out well? …One indication of how a digital work participates in the conversation of the humanities is how it links to other projects and how in turn, it is described and linked to by others. With the advent of blogging it should be possible to find bloggers who have commented on a project and linked to it. While blog entries are not typically careful reviews they are a sign of interest in the professional community. [We’ll be drawing on the material housed in several local archives and special collections. Proper attribution will be a top priority in both of my student projects; we’ll provide links to relevant collections and finding aids. In addition, through the students’ “process blogs,” we’ll link out to other resources that informed their projects’ development. Finally, a data mining platform will lie behind the interactive mapping project, allowing the map to draw connections to relevant courses, relevant faculty publications, relevant student projects completed outside the context of the class, etc.]
  • If it is an instructional project, has it been assessed appropriately? A scholarly pedagogical project is one that claims to have advanced our knowledge of how to teach or learn. Such claims can be tested and there is a wealth of evaluation techniques including dialogical ones that are recognizable as being in the traditions of humanities interpretation. Further, most universities have teaching and learning units that can be asked to help advise (or even run) assessments for pedagogical innovations from student surveys to focus groups. [Ha!] …Evaluators should not look for enthusiastic and positive results – even negative results (as in this doesn’t help students learn X) are an advance in knowledge. A well designed assessment plan that results in new knowledge that is accessible and really helps others is scholarship, whether or not the pedagogical innovation is demonstrated to have the intended effect. // That said, there are forms of pedagogical innovation, especially the development of tools that are used by instructors to create learning objects, that cannot be assessed in terms of learning objectives but in terms of their usability by the instructor community to meet their learning objectives. In these cases the assessment plan would resemble more usability design and testing…. [Everything I’m posting right here pertains to my attempt to develop appropriate models for assessment. I’ll have to build in mechanisms for evaluating both individual students’ contributions and the collective class effort.]
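
Since the aggregated process blogs come up in the first item above, here’s a minimal sketch of how the individual student feeds might be pulled into a single stream for the central project blog. It assumes each blog exposes an RSS or Atom feed and uses the third-party Python feedparser library; the feed URLs are placeholders, and in practice a WordPress or Drupal aggregation plugin would do the same work.

```python
# Minimal sketch: aggregate student "process blog" feeds into one stream.
# Assumes each blog exposes an RSS/Atom feed; the URLs below are placeholders.
# Requires the third-party feedparser library (pip install feedparser).
import time
import feedparser

STUDENT_FEEDS = [
    "http://example.edu/blogs/student-one/feed",  # hypothetical feed URLs
    "http://example.edu/blogs/student-two/feed",
]

def aggregate(feed_urls):
    """Collect entries from every feed and sort them newest-first."""
    entries = []
    for url in feed_urls:
        parsed = feedparser.parse(url)
        for entry in parsed.entries:
            entries.append({
                "blog": parsed.feed.get("title", url),
                "title": entry.get("title", "(untitled)"),
                "link": entry.get("link", ""),
                "published": entry.get("published_parsed"),  # struct_time or None
            })
    # Posts without a parseable date sort to the bottom.
    entries.sort(key=lambda e: e["published"] or time.gmtime(0), reverse=True)
    return entries

if __name__ == "__main__":
    for post in aggregate(STUDENT_FEEDS):
        print(post["blog"], "|", post["title"], "|", post["link"])
```

The particular tool doesn’t matter; the point is that the central blog should assemble everyone’s posts automatically, so the whole class’s process stays visible in one place.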

More from the MLA: “Best Practices in Digital Work” (the following are direct — but, in some cases, abridged — quotations):

  • Appropriate Content
  • Enrichment (Has the data been annotated, linked, and structured appropriately?) One of the promises of digital work is that it can provide rich supplements of commentary, multimedia enhancement, and annotations to provide readers with appropriate historical, literary, and philosophical context. An electronic edition can have high resolution manuscript pages or video of associated performances. A digital work can have multiple interfaces for different audiences from students to researchers. Evaluators should ask about how the potential of the medium has been exploited. Has the work taken advantage of the multimedia possibilities? If an evaluator can imagine a useful enrichment they should ask the candidate whether they considered adding such materials. // Enrichment can take many forms and can raise interesting copyright problems. Often video of dramatic performances are not available because of copyright considerations. Museums and archives can ask for prohibitive license fees for reproduction rights which is why evaluators shouldn’t expect it to be easy to enrich a project with resources, but again, a scholarly project can be expected to have made informed decisions as to what resources they can include. Where projects have negotiated rights evaluators should recognize the decisions and the work of such negotiations. // In some cases enrichment can take the form of significant new scholarship organized as interpretative commentary or essay trajectories through the material. Some projects like NINES actually provide tools for digital exhibit curation so that others can create and share new annotated itineraries through the materials mounted by others…. [This is a primary concern of both of my classes. Rather than uploading data and expecting it to stand on its own, my students will be charged with contextualizing it, and linking their individual data points together into a compelling argument. I’ve already made special arrangements with several institutions for copyright clearances and waiver of reproduction fees. In other cases, students will have to negotiate (with the libraries’ and my assistance) copyright clearances; this will be a good experience for them!]
  • Technical Design (Is the delivery system robust, appropriate, and documented?) In addition to evaluating the decisions made about the representation, encoding and enrichment of evidence, evaluators can ask about the technical design of digital projects. There are better and worse ways to implement a project so that it can be maintained over time by different programmers. A scholarly resource should be designed and documented in a way that allows it to be maintained easily over the life of the project. While a professional programmer with experience with digital humanities projects can advise evaluators about technical design there are some simple questions any evaluator can ask like, “How can new materials be added?”, “Is there documentation for the technical set up that would let another programmer fix a bug?”, and “Were open source tools used that are common for such projects?” // It should be noted that pedagogical works are often technically developed differently than scholarly resources, but evaluators can still ask about how they were developed and whether they were developed so as to be easily adapted and maintained. [Project developers are focusing on this, and they’re documenting the process through a “ticket” system. We’ll ask our students for technical feedback at various points throughout the semester. Their suggestions — which they’ll elaborate upon in their blogs — will inform the development of the platforms even after the end of the semester.]
  • Interface Design and Usability (Is it designed to take advantage of the medium? Has the interface been assessed? Has it been tested? Is it accessible to its intended audience?) …Now best practices in web development suggest that needs analysis, user modeling, interface design and usability testing should be woven into large scale development projects. Evaluators should therefore ask about anticipated users and how the developers imagined their work being used. Did the development team conduct design experiments? Do they know who their users are and how do they know how their work will be used?… // It should be noted that interface design is difficult to do when developing innovative works for which there isn’t an existing self-identified and expert audience. Scholarly projects are often digitizing evidence for unanticipated research uses and should, for that reason, try to keep the data in formats that can be reused whatever the initial interface. There is a tension in scholarly digital work between a) building things to survive and be used (even if only with expertise) by future researchers and b) developing works that can be immediately accessible to scholars without computing skills. It is rare that a project has the funding to both digitize to scholarly standards and develop engaging interfaces that novices find easy. Evaluators should therefore look for plans for long term testing and iterative improvement that is facilitated by a flexible information architecture that can be adapted over time… // Finally, it should be said that interface design is itself a form of digital rhetorical work that should be encouraged. Design can be done following and innovating on practices of asking questions and imagining potential… Evaluators should expect candidates presenting digital work to have reflected on the engineering and design, even if they didn’t do it, and evaluators should welcome the chance to have a colleague unfold the challenges of the medium.

Cheryl Ball discusses the MLA’s recommendations here, in the discussion forum for her “Evaluating Digital Scholarship” workshop (2010). I especially appreciate her comments about the wide variety of projects that constitute “digital scholarship,” and which require dynamic criteria for evaluation. She also talks about a fantastic “peer review” exercise she designed for her undergrad “Multimodal Composition” class. They began with Virginia Kuhn’s “components of scholarly multimedia” — conceptual core (“controlling ideas, productive alignment with genre”); research component; form/content (do the formal elements serve the concept?); and creative realization (“does the project use appropriate register? could this have been done on paper?”) — then added two criteria of their own: audience and timeliness. In each of my fall classes we’ll spend a good deal of time examining other online exhibitions and mapping projects and assessing their strengths and weaknesses. I think asking students to write a formal “reader’s report” — after we’ve generated a list of criteria for assessment — could push their critiques beyond the “I like it,” “I don’t like it,” “There’s too much going on,” or “This wasn’t clear” feedback they usually offer. I attribute the limitations of their feedback not to any lack of serious engagement or interest, but to the fact that they (myself included!) don’t always know what criteria should be informing their judgment, or what language is typically used in or is appropriate for such a review.
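
To make that reader’s-report exercise concrete for myself, here’s a rough sketch of how those criteria might be written down as a report template we could fill in. The criterion names come from Kuhn and from Ball’s class; the prompts and the little helper function are placeholders of my own (in Python only because it’s handy), not anything Kuhn or Ball prescribes.

```python
# Rough sketch: the evaluation criteria discussed above, expressed as a
# reader's-report template. The prompts are placeholder wording of my own.
RUBRIC = {
    "Conceptual core": "Are the controlling ideas clear? Is the project productively aligned with its genre?",
    "Research component": "Does the project show substantial research and thoughtful engagement with sources?",
    "Form/content": "Do the formal elements serve the concept?",
    "Creative realization": "Does the project use an appropriate register? Could this have been done on paper?",
    "Audience": "Who is the intended audience, and does the project address it effectively?",
    "Timeliness": "Does the project engage a timely question or conversation?",
}

def report_template(project_title, rubric=RUBRIC):
    """Build a plain-text reader's report skeleton for one project."""
    lines = [f"Reader's report: {project_title}", ""]
    for criterion, prompt in rubric.items():
        lines.append(f"{criterion}: {prompt}")
        lines.append("Comments:")
        lines.append("")
    lines.append("Summary recommendation:")
    return "\n".join(lines)

print(report_template("Material media exhibition (draft)"))
```

We’d revise the prompts together in class, of course; the value is in forcing ourselves to say, in one place, what each criterion actually asks of a project.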

The Institute for Multimedia Literacy has created a handout on “multimedia scholarship grading parameters” that also starts from, and expands upon, Kuhn’s criteria (the following are direct — but, in some cases, abridged — quotations):

  • Conceptual Core: Is the project’s thesis clearly articulated? Is the project productively aligned with one or more of the multimedia genres outlined in the IML program? Does the project effectively engage with the primary issues raised in the project’s research?
  • Research Competence: Does the project display evidence of substantial research and thoughtful engagement with its subject? Does the project use a variety of types of sources (i.e., not just Web sites)? Does the project deploy more than one approach to its topic?
  • Form and Content: Do structural and formal elements of the project reinforce the conceptual core in a productive way? Are design decisions deliberate and controlled? Is the effectiveness of the project uncompromised by technical problems?
  • Creative Realization: Does the project approach its subject in creative or innovative ways? Does the project use media and design principles effectively? Does this project achieve significant goals that could not have been realized on paper?

Here are their additions:

  • Coherence: First and foremost, academic multimedia projects should be coherent, effectively spanning the gap between “tradition” (text) and “innovation” (multimedia) and ultimately balancing their components. A successful multimedia project, in other words, would clearly suffer if translated into a traditional essay, or, conversely, into a “purely” multimedia experience with little or no connection to the broader field within which it participates. The strong multimedia project is not merely a well-written paper with multimedia elements “pasted in”; neither is it merely a good multimedia project with more familiar textual elements “tacked on.” Coherence, then, refers to the graceful balance of familiar scholarly gestures and multimedia expression which mobilizes the scholarship in new ways.
  • Self-reflexivity: A second quality accounts for the authorial understanding of the production choices made in constructing the project. Because these may be difficult or impossible to discern by engaging with the project, we advocate post-production reflection, offering students the opportunity to reflect on and to justify the choices and decisions made during the creation of the project. We also recognize that in many instances it may be more significant for students to reckon with the process of production rather than an end product; again, reflexivity through reflection helps manifest the evolution, and gives instructors a means for gauging learning. [Students will be encouraged to use their process blogs to address these issues.]
  • Control: By control, we mean the extent to which a project demonstrates authorial intention by providing the user with a carefully planned structure, often made manifest through a navigation scheme and a design suited to the project’s argument and content. Control has to do with authorial tone / voice / cuing as well as with the quality of the project’s interactivity if it calls for user interaction. If, for example, it is the student’s intention to confuse a user, it is perfectly appropriate to build that confusion into the project’s navigation scheme; such choices, however, must always be justified in the project’s self-reflexivity.
  • Cogency: Cogency refers to the quality of the project’s argument and its reflection of a conceptual core. Cogency is not a function of an argument’s “rightness” or “wrongness.” With most assignments, students are free to take any position they like; cogency is reflected in the way the argument is made, not in what the argument is.
  • Evidence: What is the quality of the data used to support the project’s argument? Is it suited to the argument? Further, the project should reflect fundamental research competency as understood and dictated by evolving standards of multimedia research and expression.
  • Complexity: Multimedia projects often suffer in being considered somehow outside a larger discourse or context. Complexity refers to the ways in which the project acknowledges its broader context, contributes to a larger discussion and generally participates in an academic community.
  • Technique: Strong scholarly multimedia projects should exhibit an understanding of the affordances of the tools used to create the project.
  • Documentation: Finally, with a nod toward the dramatic technological shifts that characterize contemporary media practices and the fact that formats come and go with alarming rapidity, we advocate a documentation process that describes the project, its formal structure and thematic concerns, with attention to the project’s attributes and the particular needs required for either the student’s own archival process, or those of an instructor, program, or other entity. This, too, offers another stage for assessment, inviting students to consider their work within a larger context, and offering instructors a site for understanding the learning that has occurred [Our individual and aggregated class blogs will serve this purpose].

I’d have to adapt this for graduate classes — but it’s a great starting point.
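
As one small adaptation, I could fold the “Documentation” criterion into each group’s final process-blog post by asking them to fill out a short, structured record of their project. Here’s a rough sketch of what such a record might contain; the field names and the sample entry are placeholders of my own, not the IML’s.

```python
# Rough sketch: a per-project documentation record, loosely following the
# IML's "Documentation" criterion. Field names and sample values are my own
# placeholders, not the IML's.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProjectRecord:
    title: str
    description: str                  # what the project is and what it argues
    formal_structure: str             # how the piece is organized and navigated
    thematic_concerns: List[str]      # the ideas at the project's conceptual core
    tools_and_formats: List[str]      # platforms, file formats, versions used
    archival_notes: str = ""          # what it would take to preserve or maintain it
    contributors: List[str] = field(default_factory=list)

example = ProjectRecord(
    title="Urban media networks map (hypothetical entry)",
    description="An interactive map tracing historical urban media networks.",
    formal_structure="Map interface with data points linked to short contextual essays.",
    thematic_concerns=["urban media history", "infrastructure"],
    tools_and_formats=["mapping platform in development at Parsons", "process blogs"],
    archival_notes="Export the data layers at the end of the semester; note the platform version.",
    contributors=["(student names)"],
)
print(example.title, "-", example.description)
```

Even a record this bare would give me, and future students, a quick way to see what a project was, how it was built, and what it would take to keep it running.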

There’s also this “Grading 2.0: Evaluation in the Digital Age” discussion thread on HASTAC, which, although I haven’t been able to wade through all the comments yet, seems to advocate for a portfolio approach. I think my students’ “process blogs” will function much like a portfolio.

Finally, I’ve found Steve Anderson’s “Regeneration: Multimedia Genres and Emerging Scholarship” essay extremely helpful in addressing my concerns about the evaluation of “self-expressive” projects. I plan to ask everyone to read Anderson’s piece early in the semester. I had been concerned that some students would assume that, because our projects make use of the same tools they use to create their (often self-expressive or experimental) student films and psychogeographic maps and impressionistic audio pieces, our multimodal scholarly projects could be narrative-based and purely expressive, too. I imagine that at least a few students are unfamiliar with using production as a research methodology: how many have conceived of geotagging as more than a means of “placing” their Flickr uploads or recording their “sensory memories” of particular places in the city? I’m not denigrating these activities — there’s definitely a place for them (including in some of my other classes) — but this fall, I want to focus on multimodal scholarship, and developing appropriate criteria for evaluating it. As Anderson says, “narrative may productively serve as an element of a scholarly multimedia project but should not serve as an end in itself.”