Evaluating Multimodal Student Work

[Image: “Old School Evaluation: Scantron” by Obscure Associate on Flickr: http://www.flickr.com/photos/obscureassociate/4088365078/]

As part of my job, I’m often expected to critique works for which there is no clear-cut evaluative rubric — I grade student papers, review journal submissions, serve on selection committees — yet because I’m confident in my knowledge in these contexts, the “squishiness” of evaluation doesn’t bother me. I know a good thing when I see it, and a not-so-good one, too. That’s not how I justify my grades to students, or explain an article rejection to an author, of course. I typically offer comments within the body of a text, which allows me to address micro issues, and then I write a short “cover letter” that offers big-picture, summary comments. I almost never have to deal with disputes — and in some cases, even if I’ve had to award a not-so-hot grade or deliver the news of a rejection, people have thanked me for my helpful comments. That’s good. I want to be helpful. That’s why I dedicate all that time to offering constructive, critical feedback.

Since I began teaching graduate students at The New School, I’ve given students the option to complete work in formats that lie outside my own realm of experience. One class curated an exhibition. Another produced an online journal. And even in my more traditional classes, I give students the option of completing creative projects that critically address the subject matter of the course. I also ask them to submit a short written supplement in which they explain the ideas at the center of their work, what “argument” they hoped to make through the production, and how successful they think they were. This supplemental paper allows me to base my evaluation primarily on content (after all, my courses aren’t intended to teach them production skills; I’m simply allowing them to use their existing production skills to work through the ideas central to my courses), but also to address how the form of the project serves its content.

As I’m about to embark on two brand-new fall classes that will result in the creation of collaborative, research-based interactive projects — one, an exhibition of “material media,” the other, a map of historical urban media networks — I think it’s time to develop a more formal (though not rigid) evaluation rubric. I’m doing this not only for myself, to guide the assignment of grades, but also to aid students: to help them figure out how to evaluate their own scholarly production work and other multimodal projects — critical skills that will help them navigate a map-crazy and dataviz-obsessed popular media and design culture.

[Image: Our faux scantron wedding RSVPs. By our friend Dan Richardson.]

So, I’m investigating “how to evaluate multimodal work.” I’m still thinking through this, but here’s some of what I’ve gathered thus far (I’m including only the stuff that’s relevant to evaluating student projects; visit the original sources for how these standards can be applied to peer-reviewed faculty work. My own comments appear in brackets.):

From the MLA’s “Short Guide to Evaluation of Digital Work” (the following are direct — but, in some cases, abridged — quotations): 

  • “Is it accessible to the community of study? The most basic question to ask of digital work is whether it is accessible to its audience, be it students (in the case of pedagogical innovation) or users (in the case of a research resource). A work that is hidden and not made available is one that is typically not ready in some fashion. It is normal for digital work to be put up in “beta” or untested form just as it is normal for digital work to be dynamically updated (as in versions of a software tool.)... [Our projects will be hosted on either Parsons or Media Studies servers, and will be made publicly accessible. In addition, our creation process will be made public; all students will be asked to keep blogs on which they chronicle their research and production processes and offer feedback to one another. These individual blogs will be aggregated on a central project blog, to which I will add summary comments; a rough sketch of how that aggregation might work appears just after this list.]
  • Have there been any expert consultations? Has this been shown to others for expert opinion? Given the absence of peer review mechanisms for many types of digital work candidates should be encouraged to plan for expert consultations, especially when applying for funding…. [Our mapping tool is currently in development with expert designers at Parsons. Our exhibition platform is developed through an open-source community, to which I will ask the students to contribute. In addition, I'll be asking faculty experts to attend class and offer constructive criticism at key points throughout the semester.]
  • Has the work been reviewed? Can it be submitted for peer review? Has the work been presented at conferences?… Have papers or reports about the project been published? [I'm currently seeking opportunities to highlight these projects, and critically assess their success/failure as pedagogical experiments, in peer-reviewed publications and at conferences. I also hope to fold particular portions of the mapping project into my next book project.]
  • Do others link to it? Does it link out well? …One indication of how a digital work participates in the conversation of the humanities is how it links to other projects and how in turn, it is described and linked to by others. With the advent of blogging it should be possible to find bloggers who have commented on a project and linked to it. While blog entries are not typically careful reviews they are a sign of interest in the professional community. [We'll be drawing on the material housed in several local archives and special collections. Proper attribution will be a top priority in both of my student projects; we'll provide links to relevant collections and finding aids. In addition, through the students' "process blogs," we'll link out to other resources that informed their projects' development. Finally, a data mining platform will lie behind the interactive mapping project, allowing the map to draw connections to relevant courses, relevant faculty publications, relevant student projects completed outside the context of the class, etc.]
  • If it is an instructional project, has it been assessed appropriately? A scholarly pedagogical project is one that claims to have advanced our knowledge of how to teach or learn. Such claims can be tested and there is a wealth of evaluation techniques including dialogical ones that are recognizable as being in the traditions of humanities interpretation. Further, most universities have teaching and learning units that can be asked to help advise (or even run) assessments for pedagogical innovations from student surveys to focus groups. [Ha!] …Evaluators should not look for enthusiastic and positive results – even negative results (as in this doesn’t help students learn X) are an advance in knowledge. A well designed assessment plan that results in new knowledge that is accessible and really helps others is scholarship, whether or not the pedagogical innovation is demonstrated to have the intended effect. // That said, there are forms of pedagogical innovation, especially the development of tools that are used by instructors to create learning objects, that cannot be assessed in terms of learning objectives but in terms of their usability by the instructor community to meet their learning objectives. In these cases the assessment plan would resemble more usability design and testing…. [Everything I'm posting right here pertains to my attempt to develop appropriate models for assessment. I'll have to build in mechanisms for evaluating both individual students' contributions and the collective class effort.]
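
Because the MLA guide’s first question turns on public accessibility, it seems worth noting how little plumbing the blog aggregation mentioned above actually requires. What follows is a rough sketch only, assuming each student’s process blog exposes a standard RSS or Atom feed; the feed URLs and the script itself are hypothetical placeholders, not part of any existing class infrastructure.

    # A rough, hypothetical sketch of pulling student "process blogs" into a
    # central project page. Assumes each blog publishes a standard RSS/Atom
    # feed; the URLs below are placeholders, not real class blogs.
    import time
    import feedparser  # third-party library: pip install feedparser

    STUDENT_FEEDS = [
        "https://example.edu/student-one/feed",
        "https://example.edu/student-two/feed",
    ]

    def aggregate(feed_urls):
        """Collect entries from every student feed and sort them, newest first."""
        entries = []
        for url in feed_urls:
            parsed = feedparser.parse(url)
            for entry in parsed.entries:
                entries.append({
                    "blog": parsed.feed.get("title", url),
                    "title": entry.get("title", "Untitled"),
                    "link": entry.get("link", ""),
                    # struct_time, or None if the feed omits a date
                    "published": entry.get("published_parsed"),
                })
        entries.sort(key=lambda e: e["published"] or time.gmtime(0), reverse=True)
        return entries

    if __name__ == "__main__":
        for post in aggregate(STUDENT_FEEDS)[:20]:
            print(post["blog"], "|", post["title"], "|", post["link"])

In practice the central project blog would render these entries as posts or a sidebar feed (most blogging platforms have built-in syndication tools for exactly this); the sketch is just a reminder that aggregation is standard, low-cost plumbing rather than a bespoke build.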

More from the MLA: “Best Practices in Digital Work” (the following are direct — but, in some cases, abridged — quotations):

  • Appropriate Content
  • Enrichment (Has the data been annotated, linked, and structured appropriately?) One of the promises of digital work is that it can provide rich supplements of commentary, multimedia enhancement, and annotations to provide readers with appropriate historical, literary, and philosophical context. An electronic edition can have high resolution manuscript pages or video of associated performances. A digital work can have multiple interfaces for different audiences from students to researchers. Evaluators should ask about how the potential of the medium has been exploited. Has the work taken advantage of the multimedia possibilities? If an evaluator can imagine a useful enrichment they should ask the candidate whether they considered adding such materials. // Enrichment can take many forms and can raise interesting copyright problems. Often video of dramatic performances are not available because of copyright considerations. Museums and archives can ask for prohibitive license fees for reproduction rights which is why evaluators shouldn’t expect it to be easy to enrich a project with resources, but again, a scholarly project can be expected to have made informed decisions as to what resources they can include. Where projects have negotiated rights evaluators should recognize the decisions and the work of such negotiations. // In some cases enrichment can take the form of significant new scholarship organized as interpretative commentary or essay trajectories through the material. Some projects like NINES actually provide tools for digital exhibit curation so that others can create and share new annotated itineraries through the materials mounted by others…. [This is a primary concern of both of my classes. Rather than uploading data and expecting it to stand on its own, my students will be charged with contextualizing it, and linking their individual data points together into a compelling argument. I've already made special arrangements with several institutions for copyright clearances and waiver of reproduction fees. In other cases, students will have to negotiate (with the libraries' and my assistance) copyright clearances; this will be a good experience for them!]
  • Technical Design (Is the delivery system robust, appropriate, and documented?) In addition to evaluating the decisions made about the representation, encoding and enrichment of evidence, evaluators can ask about the technical design of digital projects. There are better and worse ways to implement a project so that it can be maintained over time by different programmers. A scholarly resource should be designed and documented in a way that allows it to be maintained easily over the life of the project. While a professional programmer with experience with digital humanities projects can advise evaluators about technical design there are some simple questions any evaluator can ask like, “How can new materials be added?”, “Is there documentation for the technical set up that would let another programmer fix a bug?”, and “Were open source tools used that are common for such projects?” // It should be noted that pedagogical works are often technically developed differently than scholarly resources, but evaluators can still ask about how they were developed and whether they were developed so as to be easily adapted and maintained. [Project developers are focusing on this, and they're documenting the process through a "ticket" system. We'll ask our students for technical feedback at various points throughout the semester. Their suggestions -- which they'll elaborate upon in their blogs -- will inform the development of the platforms even after the end of the semester.]
  • Interface Design and Usability (Is it designed to take advantage of the medium? Has the interface been assessed? Has it been tested? Is it accessible to its intended audience?) …Now best practices in web development suggest that needs analysis, user modeling, interface design and usability testing should be woven into large scale development projects. Evaluators should therefore ask about anticipated users and how the developers imagined their work being used. Did the development team conduct design experiments? Do they know who their users are and how do they know how their work will be used?… // It should be noted that interface design is difficult to do when developing innovative works for which there isn’t an existing self-identified and expert audience. Scholarly projects are often digitizing evidence for unanticipated research uses and should, for that reason, try to keep the data in formats that can be reused whatever the initial interface. There is a tension in scholarly digital work between a) building things to survive and be used (even if only with expertise) by future researchers and b) developing works that can be immediately accessible to scholars without computing skills. It is rare that a project has the funding to both digitize to scholarly standards and develop engaging interfaces that novices find easy. Evaluators should look therefore for plans for long term testing and iterative improvement that is facilitated by a flexible information architecture that can be adapted over time… // Finally, it should be said that interface design is itself a form of digital rhetorical work that should be encouraged. Design can be done following and innovating on practices of asking questions and imagining potential… Evaluators should expect candidates presenting digital work to have reflected on the engineering and design, even if they didn’t do it, and evaluators should welcome the chance to have a colleague unfold the challenges of the medium. [The point about keeping data in formats that can be reused apart from any particular interface is one we'll act on; a minimal sketch of what that could look like follows this list.]
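
The tension the MLA flags in that last item, between data built to survive and interfaces built to engage, is one we can address concretely by keeping the underlying evidence in plain, interface-independent formats. The sketch below is purely illustrative: the field names, values, and file names are my own placeholders, not the mapping project’s actual schema.

    # A hypothetical sketch of storing the mapping project's data points as
    # plain JSON and CSV, so the evidence outlives any particular interface.
    # Every field name and value here is a placeholder, not real project data.
    import csv
    import json

    data_points = [
        {
            "title": "Example telegraph office",
            "latitude": 40.7356,
            "longitude": -73.9940,
            "date": "1888",
            "source": "Example archive, Collection 12, Box 3",
            "rights": "Reproduced by permission; fees waived",
            "contributor": "Student A",
        },
    ]

    # JSON keeps the full record for future researchers and other tools.
    with open("media_map_points.json", "w", encoding="utf-8") as f:
        json.dump(data_points, f, indent=2, ensure_ascii=False)

    # CSV gives a flat copy that opens in any spreadsheet or GIS package.
    with open("media_map_points.csv", "w", encoding="utf-8", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(data_points[0].keys()))
        writer.writeheader()
        writer.writerows(data_points)

Whatever the eventual interactive map looks like, a flat export along these lines is what lets the data be cited, migrated, and reused after the semester ends.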

Cheryl Ball discusses the MLA’s recommendations here, in the discussion forum for her “Evaluating Digital Scholarship” workshop (2010). I especially appreciate her comments about the wide variety of projects that constitute “digital scholarship,” which require dynamic criteria for evaluation. She also talks about a fantastic “peer review” exercise she designed for her undergrad “Multimodal Composition” class. They began with Virginia Kuhn’s “components of scholarly multimedia” — conceptual core (“controlling ideas, productive alignment with genre”); research component; form/content (do the formal elements serve the concept?); and creative realization (“does the project use appropriate register? could this have been done on paper?”) — then added two criteria of their own: audience and timeliness. In each of my fall classes we’ll spend a good deal of time examining other online exhibitions and mapping projects and assessing their strengths and weaknesses. I think asking the students to write a formal “reader’s report” — after we’ve generated a list of criteria for assessment — could push their critiques beyond the “I like it,” “I don’t like it,” “There’s too much going on,” or “This wasn’t clear” feedback they usually offer. I attribute the limitations of their feedback not to any lack of serious engagement or interest, but to the fact that they (myself included!) don’t always know what criteria should be informing their judgment, or what language is typically used in, or appropriate for, such a review.

The Institute for Multimedia Literacy has created a handout on “multimedia scholarship grading parameters” that also starts from, and expands upon, Kuhn’s criteria (the following are direct — but, in some cases, abridged — quotations):

  • Conceptual Core: Is the project’s thesis clearly articulated? Is the project productively aligned with one or more of the multimedia genres outlined in the IML program? Does the project effectively engage with the primary issues raised in the project’s research?
  • Research Competence: Does the project display evidence of substantial research and thoughtful engagement with its subject? Does the project use a variety of types of sources (i.e., not just Web sites)? Does the project deploy more than one approach to its topic?
  • Form and Content: Do structural and formal elements of the project reinforce the conceptual core in a productive way? Are design decisions deliberate and controlled? Is the effectiveness of the project uncompromised by technical problems?
  • Creative Realization: Does the project approach its subject in creative or innovative ways? Does the project use media and design principles effectively? Does this project achieve significant goals that could not have been realized on paper?

Here are their additions:

  • Coherence: First and foremost, academic multimedia projects should be coherent, effectively spanning the gap between “tradition” (text) and “innovation” (multimedia) and ultimately balancing their components. A successful multimedia project, in other words, would clearly suffer if translated into a traditional essay, or, conversely, into a “purely” multimedia experience with little or no connection to the broader field within which it participates. The strong multimedia project is not merely a well-written paper with multimedia elements “pasted in”; neither is it merely a good multimedia project with more familiar textual elements “tacked on.” Coherence, then, refers to the graceful balance of familiar scholarly gestures and multimedia expression which mobilizes the scholarship in new ways.
  • Self-reflexivity: A second quality accounts for the authorial understanding of the production choices made in constructing the project. Because these may be difficult or impossible to discern by engaging with the project, we advocate post-production reflection, offering students the opportunity to reflect on and to justify the choices and decisions made during the creation of the project. We also recognize that in many instances it may be more significant for students to reckon with the process of production rather than an end product; again, reflexivity through reflection helps manifest the evolution, and gives instructors a means for gauging learning. [Students will be encouraged to use their process blogs to address these issues.]
  • Control: By control, we mean the extent to which a project demonstrates authorial intention by providing the user with a carefully planned structure, often made manifest through a navigation scheme and a design suited to the project’s argument and content. Control has to do with authorial tone / voice / cuing as well as with the quality of the project’s interactivity if it calls for user interaction. If, for example, it is the student’s intention to confuse a user, it is perfectly appropriate to build that confusion into the project’s navigation scheme; such choices, however, must always be justified in the project’s self-reflexivity.
  • Cogency: Cogency refers to the quality of the project’s argument and its reflection of a conceptual core. Cogency is not a function of an argument’s “rightness” or “wrongness.” With most assignments, students are free to take any position they like; cogency is reflected in the way the argument is made, not in what the argument is.
  • Evidence: What is the quality of the data used to support the project’s argument? Is it suited to the argument? Further, the project should reflect fundamental research competency as understood and dictated by evolving standards of multimedia research and expression.
  • Complexity: Multimedia projects often suffer in being considered somehow outside a larger discourse or context. Complexity refers to the ways in which the project acknowledges its broader context, contributes to a larger discussion and generally participates in an academic community.
  • Technique: Strong scholarly multimedia projects should exhibit an understanding of the affordances of the tools used to create the project.
  • Documentation: Finally, with a nod toward the dramatic technological shifts that characterize contemporary media practices and the fact that formats come and go with alarming rapidity, we advocate a documentation process that describes the project, its formal structure and thematic concerns, with attention to the project’s attributes and the particular needs required for either the student’s own archival process, or those of an instructor, program, or other entity. This, too, offers another stage for assessment, inviting students to consider their work within a larger context, and offering instructors a site for understanding the learning that has occurred. [Our individual and aggregated class blogs will serve this purpose; a minimal sketch of a machine-readable project record follows this list.]
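
Since the “Documentation” criterion asks for a record of the project’s structure, attributes, and archival needs, a small machine-readable project record could sit alongside the process blogs. Again, this is only a sketch under my own assumptions; the fields are illustrative, not an IML or New School template.

    # A hypothetical archival record for a multimodal class project.
    # The fields are illustrative; adapt them to whatever the program,
    # instructor, or archive actually needs.
    import json

    project_record = {
        "title": "Placeholder project title",
        "contributors": ["Student A", "Student B", "Instructor"],
        "description": "One-paragraph summary of the argument and the materials used.",
        "formats": ["HTML/CSS/JavaScript front end", "JSON data exports", "JPEG scans"],
        "dependencies": ["Open-source exhibition platform (version noted at semester's end)"],
        "hosting": "Departmental server (URL recorded here)",
        "rights": "Per-item rights statements kept in the data exports",
        "archival_contact": "Department or program archive",
        "last_updated": "YYYY-MM-DD",
    }

    with open("project_record.json", "w", encoding="utf-8") as f:
        json.dump(project_record, f, indent=2, ensure_ascii=False)

A record like this takes a few minutes to fill out at the end of the semester and answers most of the questions a future instructor, archivist, or evaluator would ask first.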

I’d have to adapt this for graduate classes — but it’s a great starting point.

There’s also this “Grading 2.0: Evaluation in the Digital Age” discussion thread on HASTAC, which, although I haven’t been able to wade through all the comments yet, seems to advocate for a portfolio approach. I think my students’ “process blogs” will function much like a portfolio.

Finally, I’ve found Steve Anderson’s “Regeneration: Multimedia Genres and Emerging Scholarship” essay extremely helpful in addressing my concerns about the evaluation of “self-expressive” projects. I plan to ask everyone to read Anderson’s piece early in the semester. I had been concerned that some students would assume that, because our projects make use of the same tools they use to create their (often self-expressive or experimental) student films and psychogeographic maps and impressionistic audio pieces, our multimodal scholarly projects could be narrative-based and purely expressive, too. I imagine that at least a few students are unfamiliar with using production as a research methodology: how many have conceived of geotagging as more than a means of “placing” their Flickr uploads or recording their “sensory memories” of particular places in the city? I’m not denigrating these activities — there’s definitely a place for them (including in some of my other classes) — but this fall, I want to focus on multimodal scholarship and on developing appropriate criteria for evaluating it. As Anderson says, “narrative may productively serve as an element of a scholarly multimedia project but should not serve as an end in itself.”

Evaluating Multimodal Student Work by Shannon Mattern, unless otherwise expressly stated, is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.

9 comments on “Evaluating Multimodal Student Work”

  1. Cheryl August 13, 2010 10:09 AM

    Shannon, Thanks for posting to my multimodal class evaluation criteria. I’m really excited by how that assessment project turned out and am working on an article about it. I’d love to hear how it worked for you, if you use any of it in your class. Also, thanks for summarizing the other evaluation criteria here. My bet is you’ll get lots of folks who are looking for that information, and having it all in one place will help a lot of folks.

    best,
    Cheryl.

  2. Shannon August 13, 2010 6:34 PM

    Thanks, Cheryl! I was so excited to come across the description of your peer review exercise, and I’m eager to try it out! I’ll let you know how it goes.

  3. Trudy August 20, 2010 11:23 AM

    I taught advanced business writing last year at USC. The first semester, I developed my own rubrics based on information I had. That worked o.k. But, what I found to be even more successful was to have students create their own rubrics based on their comparison of other publicly available work (their assignment was to create a business-related blog — so they had to compare two other published blogs and develop criteria about what made one better than another). What I found was that this added a dimension of attention to detail that they did not demonstrate when I provided the completed rubric. Then, I graded their work based on their rubric (which they thought was more fair, somehow).

    • Shannon August 20, 2010 5:23 PM

      Thanks, Trudy! I’m glad to know that this worked well for you. I’m planning to do something similar — perhaps have students look over some existing models for evaluation (maybe I’ll even ask them to read this blog post), then we’ll apply those rubrics to a few examples of multimodal work, see how the existing rubrics hold up, then develop our own set of criteria.

  4. Holly Willis August 21, 2010 8:22 PM

    Hi Shannon,
    I’m so pleased to see you work in this area! It’s much needed. I wanted to clarify the genesis of IML material, though – our rubric was derived in concert with faculty, staff and students over several years, then codified as a hand-out for shared use, and then more recently used by IML associate director Virginia Kuhn in her project for Kairos titled “Speaking With Students: Profiles in Digital Pedagogy.” For us, the shared development of the rubric among the several constituencies and areas of expertise was key, and Virginia’s project shows another significant cycle, as students reflect back on their own work, and in the process, expand the parameters…

    • Shannon August 23, 2010 1:52 AM

      Thanks so much for your comment, Holly! I’ll revisit my post to see if I should revise in order to better reflect the process through which the IML’s rubric evolved. You bring up several really important points: the number of constituencies involved in multimodal projects and the different skills, values, standards, etc., they bring to the table; the fact that this convergence of differing viewpoints will require that any collaborative project undergo rounds of (potentially frustrating) negotiation and revision; and, finally, the realization that all this reflection and revision usually pays off in the end — in a project whose value is much greater than the sum of its parts.

  5. Rory Solomon September 16, 2010 4:22 PM

    I like that “open source” technologies are mentioned in the MLA “Best Practices” guide, and I think this is a point that bears some elaboration and emphasis. A key tenet of the open source philosophy is peer review in the software development process, in a way that largely parallels academia. It could even be said that this is because the open source movement evolved out of academic culture. I think if a research project were to evolve to the point of needing to shape the software tools being used, it would find support and compatible processes in the open source community.

    Also, I’m learning that a big issue with this kind of research work is concern about longevity (i.e., the MLA point about “Technical Design” and the Institute for Multimedia Literacy’s point about “Documentation”), and in this regard digital formats and platforms are often eyed with trepidation. (i.e., “Should I invest time and energy into using this tool if it’s going to become obsolete in 3 years?”) Again, here the open source movement provides means to help minimize these concerns, in that open source projects provide many ways to evaluate a given software tool / format / platform. Any serious project will have an open, public web presence, including developer and user mailing lists, documentation, and so on. It is fairly easy, then, to evaluate the depth and breadth of the developer and user communities. It is useful to check, via Wikipedia and other open source project websites, whether there are competing initiatives, whether the project is getting support from one of the larger foundations (e.g., FSF, Apache, etc.), and, if there is competition, what trends there are in terms of which tools seem to be “winning out.” Once a critical mass is reached and/or once a certain level of standardization has been achieved (through bodies and standards like the IETF, ISO, and RFCs), one can be fairly confident that a tool will be around for a very long time (e.g., no one questions the particular voltage and amp levels coming out of our wall sockets), and even if a tool does become obsolete, there will be many users and developers also contending with this issue, and many well-defined and well-publicized “migration paths” to ensure continued functioning, accessibility, etc.

  6. Roger Whitson November 27, 2012 7:01 AM

    Cheryl + Shannon = multimodal awesomeness.

    • Shannon November 27, 2012 9:29 AM

      Thanks, Roger! You can find an update to the post — and links to some super-helpful recent publications by Cheryl — here.
