I’ve been thinking about some themes we’ve touched on a couple of times: what is and isn’t archivable. I wonder what is the un-meta-able? And where metadata is partially a function of query and index, can we have non-text metadata? I imagine a scenario in which a performance is the subject. It happens on a certain date, at a certain place, at a certain time. How does one capture the metadata of an audience’s mood? Potentially, I could have a field recording of an audience conversing moments before a performance as metadata. For me, it also comes back to this question of recordings being a capture of a performance or an event, and whether the two are even separable. Are the noises, the coughs, the hums, even the silences a part of the performance? Who gets to decide this?

And even if that much is figured out, and we know which parts of an event are to be recorded faithfully, how do we faithfully preserve audio? In the readings, we see degradation of the archival medium’s material, and thus degradation on each transfer from one material to another.

It makes me think of a piece by a classmate of mine, an iteration on Alvin Lucier’s I Am Sitting in a Room. He recorded his voice, played it back, and re-recorded the playback almost a dozen times over a variety of media (including recording the playback off of a sound system in a cafe!). What we get in return are the undulations of not only physical space, but of the digital encoding/decoding process.
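The degradation he captured can be sketched as repeated lossy quantization, the digital cousin of dubbing tape to tape. This is a toy model, not how any real codec behaves, and every sample value below is invented:

```python
# Four audio "samples" (8-bit-ish brightness of the waveform), invented values.
signal = [12, 57, 33, 91]

def transfer(samples, step):
    """One lossy copy: round every sample to the nearest multiple of `step`.

    Information lost here can never be recovered by later copies."""
    return [round(s / step) * step for s in samples]

copy = signal
for _ in range(10):  # ten playback-and-rerecord generations
    copy = transfer(copy, 10)

print(copy)  # → [10, 60, 30, 90]
```

Notice that after the first lossy pass the signal stabilizes: further transfers change nothing, because the detail is already gone.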

Digital Vocal Saturation (2018), Parsons senior thesis

Photo Collections

I’m really drawn to the text that draws out the two lines of thought in photographic archiving strategies. Firstly, I had only recently become aware of MCAD when I discovered my program director is an alum, so it’s interesting to see it appear again. It also makes me think about how underrepresented Midwest design and art (and archiving!) are relative to the larger coastal institutions, whose money is old and vast.

I’m also drawn to the idea of archiving images by their visual semantics. As a technologist, I always think about how machines “see” and process information: They are highly semantic! At their core, computers see one small block of color at a time in a single long line of colors. With some math, they see edges, they see gradients:
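To make that pixels-to-gradients step concrete, here is a minimal sketch (all brightness values are invented) of a grayscale “image” held as one flat list of numbers, with a simple neighbor-difference filter pulling out an edge:

```python
# A 4x4 grayscale "image" stored as one flat list of brightness values,
# the way a machine actually holds it: dark left half, bright right half.
WIDTH, HEIGHT = 4, 4
image = [0, 0, 255, 255,
         0, 0, 255, 255,
         0, 0, 255, 255,
         0, 0, 255, 255]

def horizontal_gradient(pixels, width, height):
    """Difference between each pixel and its right-hand neighbor.

    Large absolute values mark vertical edges."""
    grad = []
    for y in range(height):
        for x in range(width - 1):
            left = pixels[y * width + x]
            right = pixels[y * width + x + 1]
            grad.append(right - left)
    return grad

grad = horizontal_gradient(image, WIDTH, HEIGHT)
# The jump from 0 to 255 shows up as a spike: that spike *is* the edge.
print(grad[:3])  # → [0, 255, 0]
```

Real CV pipelines use 2D convolution kernels (Sobel and friends) rather than a bare difference, but the principle is the same: semantics are built up from arithmetic on neighboring numbers.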

The goal of computer vision (CV) is to get context. The largest competition/effort in CV is COCO, or Common Objects in Context: it’s right there in the name! So, at least for machine learning, the visual semantic is the base and necessary component for contextual organization. I guess I’m curious whether machines could implement the discourse of the document, or is that solely a human task? Could the NYPL scan their magazine clippings and have a machine assign the heading, especially when they have wild headings like Views from Behind?

Acknowledging and Addressing Archival Injustices

I am surprised to read in the Caswell interview that archivists are resistant to online records, and that records require materiality. Perhaps I am missing a key difference between an archive and a record, but that seems to exclude a vast amount of data. As mentioned, it excludes oral and kinetic records, but does it also exclude databases and online records?! What are the characteristics of a dataset that would make it a record? And in thinking about archiving radical movements, I also struggle to see how we can stick only to materiality when entire political struggles are started and maintained through hashtags.

I’m also interested in the ethics of cataloging radical movements. Like the participants of On Our Backs, should protesters be subject to having their dissidence preserved? Is there an intersection between anonymity and the accuracy and authenticity of archives?

And a last thought on radical digital archives: with the advent of deep fakes, professional trolls, and misinformation, what are the ethics of including, for example, false tweets and disinformation campaigns? On one hand, presenting that “data” on equal ground with legitimate data is problematic. On the other, those campaigns should be documented as part of fighting these struggles.

I guess this week brought more questions than it did answers…

Epistemological and Political Subjects

What strikes me about these readings is how temporality can transform archives. The structure and means of organizing an archive reflect the environment that communities are building. We have seen this in other archives we’ve explored this semester, especially in archives that reflect colonial natures. So it is no surprise that we see the archive radicalize within the last few decades in line with radical feminism. I wonder, however, what radical taxonomical features we are injecting into archives that will either fade with time or even be amplified in retrospect.

Ordering Logics Presentation


Much of my interest in my art and design practice is in surveillance and how it is enabled through technology. My research primarily covers the systems and technologies that government agencies create, from GCHQ to the NYPD, and the systems and technologies created by corporate institutions that are marketed and sold to local and federal governments. These technologies are top secret and/or trade secrets. Moreover, the policies, strategies, and rules of engagement are even more secretive, hidden behind impenetrable secret courts, cries of national security, and private contracts. Through leaked documents and product demos we can glean the priorities, hierarchies, and intent of these technologies.

In 2013, Edward Snowden, an NSA contractor, released documents uncovering the extent of the abuse and power the NSA holds. The documents outline several projects of mass surveillance that the NSA maintains. One of these technologies is XKEYSCORE, the NSA’s “widest-reaching” (the NSA’s own words) system; developing intelligence from computer networks, the program covers nearly everything a typical user does on the internet, including emails, websites, and Google searches. The XKEYSCORE system continuously collects so much internet data that it can be stored for only 3-5 days at a time. XKEYSCORE was also used to hack into systems that gave the NSA the keys to cell phone communication. Though FISA restricts how the NSA may surveil US citizens, FISA warrants that allow the use of XKEYSCORE may lead to some US-based information being swept up by its wide-reaching, filterless data gathering. Snowden claimed that he could “wiretap anyone, from you or your accountant, to a federal judge or even the president,” given a personal email address.

XKEYSCORE’s (or XKS’s) strength is its ability to go deep. Because much of the traffic on the internet is anonymous (thanks to the efforts of privacy activists), XKS’s dragnet approach allows analysts to pick up on small bits of info to start creating profiles on “targets”. As XKS relies on scraping and acquiring telecom signals, it looks at different points of access into the telecom systems [pg.11]. These primarily include phone numbers, email addresses, log-ins, and other internet activity. In looking at the slide deck for XKS we can glean some classifications that are of interest and that make someone susceptible to surveillance; they are sprinkled throughout the deck [pg.14-20]. Something I find interesting is not only data as classification, but ease of access to data as a class [pg.23]. For me, two major themes in the classification stand out. One is anti-globalist and Islamophobic: this person doesn’t belong in this place. The other is an interesting position on internet security and safety: Are you exposed? We’ll target you! Do you care about protecting yourself on the Internet? We’ll target you!
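The selector-to-profile workflow the deck implies might be sketched like this; to be clear, every record, selector, and field name below is invented for illustration, not drawn from the actual system:

```python
# Hypothetical traffic log: each record ties one "selector" (an email,
# phone number, or log-in) to a piece of observed internet activity.
traffic_log = [
    {"selector": "alice@example.com", "kind": "email", "site": "forum.example"},
    {"selector": "+1-555-0100",       "kind": "phone", "site": "voip.example"},
    {"selector": "alice@example.com", "kind": "login", "site": "mail.example"},
]

def build_profile(log, selector):
    """Collect every record tied to one selector into a single profile."""
    hits = [rec for rec in log if rec["selector"] == selector]
    return {"selector": selector, "activity": [r["site"] for r in hits]}

profile = build_profile(traffic_log, "alice@example.com")
print(profile["activity"])  # → ['forum.example', 'mail.example']
```

The unsettling part is how little this requires: one stable selector is enough to stitch scattered bits of traffic into a named profile.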

Palantir is a data mining company founded by Peter Thiel, co-founder and former CEO/”don” of PayPal. They use data fusion to solve big problems, from national defense to improving medical patient outcomes to supply chain management. Data fusion is the process of taking different sets of data and finding trends between them. This is exemplified in Palantir’s efforts to aid law enforcement, currently for the LAPD and, secretly for six years, for the NOPD. Palantir’s models utilize datasets that include court filings, licenses, addresses, phone numbers, and social media data. Like others, the model uses these to index a probability for a given target, but instead of indexing the likelihood of buying a product or voting for a candidate, it models the likelihood of committing a crime.
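At its simplest, data fusion is a join on a shared key across datasets that were never meant to meet. A toy illustration (all records fabricated, and no claim that this is Palantir’s actual method):

```python
# Two unrelated datasets keyed by the same person identifier (all invented).
court_filings = {"person_a": 2, "person_b": 0}  # count of court filings
social_media  = {"person_a": 5, "person_b": 1}  # count of flagged posts

def fuse(*datasets):
    """Merge per-person values from several datasets into one profile each."""
    fused = {}
    for data in datasets:
        for person, value in data.items():
            fused.setdefault(person, []).append(value)
    return fused

profiles = fuse(court_filings, social_media)
print(profiles["person_a"])  # → [2, 5]
```

The power (and the danger) lies entirely in the key: once disparate records share an identifier, every new dataset enriches the profile automatically.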

In this so-called “crime forecasting”, Palantir used models that treated gun violence as a communicable disease. This is to say: those who were related to, or closely associated with, people who had committed crimes were considered likely to commit a crime, too. For those who have already been charged with a crime, the model computes an automated “chronic offender score”; above a certain threshold, the individual is placed on a watch list. The individual is notified that they will be under increased surveillance, to be removed only if they have no interactions with law enforcement officers, a murky condition since officers are now encouraged to scrutinize that citizen. Companies like Palantir allow local law enforcement to bolster its tactics of surveillance. Unlike the NSA, Palantir enables law enforcement to laser-focus its efforts to surveil certain people in their communities. In the slide deck presenting and pitching their work to other cities, they highlight where Palantir gathers its data: jail calls and phone logs, gang affiliation data, crime data, and social media [pg.13]. This again perpetuates existing violence, as they use data that already skews (we know that the criminal justice system disproportionately targets Black and Hispanic people) to find “new” criminals, ones who have not yet been absorbed into that system. We often see in the slide decks that “indirect” connections are plotted in the system, glossing over the fact that these are not “indirect” but “systemic” connections, in a system that already favors guilt by association [pg.16].
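The threshold mechanism can be sketched as follows; the real scoring formula is proprietary and undisclosed, so the weights and cutoff here are entirely invented, chosen only to show how guilt by association enters the arithmetic:

```python
# Hypothetical cutoff for the watch list (invented).
THRESHOLD = 50

def chronic_offender_score(arrests, associates_with_records):
    # Invented weights: prior arrests AND associations with people who
    # have records both raise the score. The second term is exactly the
    # guilt-by-association worry: you are scored for who you know.
    return 20 * arrests + 10 * associates_with_records

def on_watch_list(arrests, associates):
    return chronic_offender_score(arrests, associates) >= THRESHOLD

# One arrest plus three associates with records crosses the line.
print(on_watch_list(arrests=1, associates=3))  # → True
```

Note the feedback loop baked in: being on the list invites police contact, and police contact is the very “interaction” that keeps you on the list.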

IBM is one of the oldest tech companies around. The popular IBM PC was released over 37 years ago, in 1981. Since then IBM has grown from a computer company into a computational one and has always been a major player in emerging technologies. The case has been no different with artificial intelligence. Perhaps most famously, IBM developed Watson, a machine-learning, natural-language question-answering system that appeared on Jeopardy!, brutally defeating its human opponents. Since then Watson has been built out into one of the largest services IBM offers, extending the QA features to CV, text-to-speech, speech-to-text, and much more. With IBM’s rapid growth in the AI sector, it comes as no surprise that they have lent their machine learning prowess to systems of surveillance.

Last month The Intercept, in partnership with the Investigative Fund, reported on software developed by IBM for the NYPD. Through leaked corporate documents, the public is able to see how and what the NYPD is interested in surveilling. Unbeknownst to the public, IBM engineers had used access to the NYPD’s CCTV network of over 500 cameras to tag individuals and train the system. The data of public citizens is claimed to be kept safe via NDAs and background checks. For IBM, the NYPD was one of the first serious customers for its surveillance tech, especially after 9/11. Looking through the slides of the leaked documents, we can see how the NYPD prioritizes and categorizes its citizens in order to find state aggressors. One of the most appalling categorizations is skin color. This is reminiscent of IBM’s body camera tech that allowed people to be categorized by “ethnicity” tags such as “Asian,” “Black,” and “White.” For me, the standout is less the nuisance of classifying citizens by skin color than the assumptions built into the camera annotations themselves. It further reinforces the idea that the camera provides a ground truth when we see the defaults of these annotations [pg.33-34], such as “Light” as the default skin color and “Black” as the default torso color (perhaps more a nod to the default fashion of NYC). Further, I’m curious about the classification field “Large Amount of Skin in Torso” and the general interest in skin at all. The NYPD has largely shot down claims that it used these fields, saying it recognized early on their tendency to enable profiling. But an IBM engineer offers a telling statement to The Intercept: “A company won’t invest in what the customer doesn’t want.”
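The defaults point is worth making concrete: in most annotation tooling, a default value is what gets recorded when the annotator does nothing. A minimal sketch, where the field names echo the leaked slides but the structure and values are my assumptions:

```python
from dataclasses import dataclass

# Sketch of an annotation record with baked-in defaults. The field names
# echo the leaked slides; the structure and types here are hypothetical.
@dataclass
class PersonAnnotation:
    skin_color: str = "Light"   # default skin color per the deck
    torso_color: str = "Black"  # default torso color per the deck

# An annotator who never touches the form still produces a fully labeled
# person: the "ground truth" arrives pre-filled with assumptions.
untouched = PersonAnnotation()
print(untouched.skin_color)  # → Light
```

The trouble is that a default is indistinguishable, in the resulting data, from a deliberate judgment, so the schema’s assumptions silently become the archive’s facts.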


This week’s readings are very interesting in light of last week’s topic of infrastructure. Especially in libraries like the Prelinger and the Warburg, furniture and organizing ideas are intertwined. The Warburg library’s infrastructure mirrors its abstract structure, and thus requires furnishing to match. I think it’s interesting to consider this in light of BILLY bookcases. While the construction and material are quite shoddy (I too am doubtful of butt joints), their accessi-BILLY-ty allows broader participation in thinking about how we display ideas and how they relate to each other. In that way, it could be understood that IKEA holds great power over what we decide to display and organize, not to mention that it fuels our need to do so.