What new kinds of research, or what new kinds of statements, become possible from an algorithmically generated (or processual) archive? That is certainly worth thinking about, but when Ernst argues that this means the audiovisual archive "can for the first time be organized not just by metadata but according to proper media-inherent-criteria – a sonic and visual memory in its own medium" (28), it also seems, to us, like a step back into thinking of the archive as a transparent source.
He then places the archival medium somehow outside of politics again, as an objective mechanism prior to human perception, as "a genuinely code-mediated look at a well-defined number of information patterns" (29). I get that the strength of his argument comes from the idea that digitization, to some extent, strips away medium specificity, at least for the machine, and subjects everything to code. Once both sound and images become binary, the archival implication is that human tastes and distinctions matter less. The archival medium becomes the first archaeologist, historian, or researcher, before its human user. I'm not sure "machine objectivity" is exactly what Ernst is after, but I would argue we should be wary of falling back into the equation "algorithm = objectivity." As Kate Crawford's work on bias in face recognition has shown, it's the same old bias – only now hardcoded.