When the Harvard Art Museums collection looks back at us, in which direction does it look? Up, down, left, or right? How deeply or shallowly does it cast its gaze? Do most images peer straight into the visitor’s eyes? Is the subject’s head frontal or rotated? Do particular media or cultural traditions correlate with preferences in the directionality of the human gaze? The installation is built upon the AI-based extraction and analysis, fine-tuned through human supervision, of pairs of eyes from the Harvard Art Museums’ painting, print, sculpture, and coin collections. Equipped with an input device, the visitor explores the collections from the standpoint of the depicted subjects’ gaze directions. A red dot appears wherever the device is pointed on the wall of monitors, establishing a focal point of convergence around which arrays of images are summoned up nine at a time. Opposite the monitors, highlighted zones within the overall cartography of gazes are presented via the gallery’s projection system. For centuries visitors have navigated collections on the basis of culture, chronology, genre, and medium; to those conventional modes of exploration, A Flitting Atlas of the Human Gaze adds a new one based on the distribution of looks across media and time. (Kevin Brewster, Todd Linkner, Dietmar Offenhuber, Jeffrey Schnapp.)
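The installation's actual pipeline is not published here, but the idea of classifying a depicted subject's gaze direction from a pair of extracted eyes can be sketched in a few lines. The snippet below is purely illustrative: it assumes 2D eye-corner and iris-center coordinates (for example, from a face-landmark detector run over the collection images) and bins the iris offset into coarse gaze directions. All function and parameter names are hypothetical.

```python
import numpy as np

def classify_gaze(eye_corners, iris_center, threshold=0.15):
    """Bin a gaze direction from 2D eye landmarks.

    eye_corners: two (x, y) points for the outer and inner corners of one
    eye; iris_center: the (x, y) iris position. These are hypothetical
    inputs, e.g. produced by a face-landmark detector.
    """
    corners = np.asarray(eye_corners, dtype=float)
    iris = np.asarray(iris_center, dtype=float)
    center = corners.mean(axis=0)                 # midpoint of the eye opening
    width = np.linalg.norm(corners[0] - corners[1])
    # Offset of the iris from the eye's center, normalized by eye width.
    dx, dy = (iris - center) / width
    if abs(dx) < threshold and abs(dy) < threshold:
        return "straight"
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"             # image y grows downward

# A gaze shifted toward the image's right edge:
classify_gaze([(0.0, 0.0), (1.0, 0.0)], (0.8, 0.02))  # → "right"
```

Tallying these bins across a collection is one plausible way to build the kind of "cartography of gazes" the installation navigates; the real system would also account for head rotation, which this sketch ignores.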
Curatorial A(i)gents presents a series of machine-learning-based experiments with museum collections and data developed by members and affiliates of metaLAB (at) Harvard, a creative research group working in the networked arts and humanities.