On October 30, 2014, SFMOMA and Stamen Design hosted Art + Data Day at the new Gray Area Art and Technology Theater in San Francisco. The event was formatted as an “unhackathon,” focusing on collaboration and problem solving — rather than competition and speed — as a way of testing an alpha version of SFMOMA’s new API. Since public sharing is a focus of the API, all code created during the event has since been posted to GitHub, and all of the projects will be summarized on SFMOMA Lab in the coming weeks, continuing with Team Selfie’s project.
Beginning with the premise that photos of works in SFMOMA’s collection found on Google Images, Flickr, Instagram, or Foursquare serve as manifestations of visitor interactions, our team started its experiment by asking the following questions: How do we capture, share, and save our experiences with artworks? How do the photographers choose to frame the works of art? Do they include themselves in the images? Are they facing the artworks or facing the camera? How are the artworks lit? From what angles are the images captured?
Team Selfie — consisting of Flickr engineer Bertrand Fan; Tim Svenonius, senior content strategist at SFMOMA; Bosco Hernández, art director for the SFMOMA Design Studio; Eric Gelinas, design technologist at Stamen Design; and myself, an intern in SFMOMA’s web and digital platforms department — investigated how these actions may or may not become codified into a rhetoric of engagement with an artwork. Beyond that, we looked at which types of artworks inspired self-documentation and which were photographed only in reference to, or as an appropriation of, the artwork itself. For example, one of SFMOMA’s most famous works, Mark Rothko’s No. 14, 1960, repeatedly prompts a similar visitor photo: a wide shot of viewers standing or sitting on the bench in front of the painting, facing the work head on.
Yet a search for Jim Campbell’s Exploded Views returns the artwork captured from a variety of perspectives, with different angles, zooms, and lighting schemes.
To map these kinds of patterns, we designed a web application that aimed to visualize the divergences and parallels that emerge when visitors take and post pictures of artworks. We pulled JPEG images of artworks from the SFMOMA API and placed them next to a slideshow of visitor photos retrieved through Flickr’s API. To do that, Bert wrote an application that searches Flickr using the artwork title, the artist name, and the search term “SFMOMA.” Eric then created a visualization that loads the artwork’s title and artist, the SFMOMA image, and a slideshow of the Flickr images.
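That search step can be sketched roughly as follows, using Flickr’s public flickr.photos.search endpoint. This is a minimal reconstruction for illustration rather than the code Bert wrote: the searchFlickr helper, the environment-variable API key, and the result count are all assumptions.

```javascript
// Minimal sketch of the Flickr search step. Assumes a Flickr API key
// in FLICKR_API_KEY; searchFlickr and its parameters are illustrative.
const https = require('https');

function searchFlickr(artistName, artworkTitle) {
  const params = new URLSearchParams({
    method: 'flickr.photos.search',
    api_key: process.env.FLICKR_API_KEY,
    // Combine artist, title, and "SFMOMA", as described above.
    text: `${artistName} ${artworkTitle} SFMOMA`,
    format: 'json',
    nojsoncallback: '1',
    per_page: '20',
  });
  const url = `https://api.flickr.com/services/rest/?${params}`;

  return new Promise((resolve, reject) => {
    https.get(url, (res) => {
      let body = '';
      res.on('data', (chunk) => { body += chunk; });
      res.on('end', () => {
        // Flickr responds with { photos: { photo: [...] }, stat: "ok" }.
        const photos = JSON.parse(body).photos.photo;
        // Build 640px-wide image URLs from each result's id fields.
        resolve(photos.map((p) =>
          `https://live.staticflickr.com/${p.server}/${p.id}_${p.secret}_z.jpg`));
      });
    }).on('error', reject);
  });
}

// Usage: fetch visitor photos to pair with the SFMOMA catalogue image.
searchFlickr('Mark Rothko', 'No. 14, 1960').then((urls) => console.log(urls));
```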
Next, the team explored further possibilities, such as using edge detection to align a painting across a set of photographs, and face detection to create an option that would show only selfies taken with the artwork. Our experiments with edge and face detection were made using OpenCV via the opencv npm module. Detecting faces head-on, and thereby determining whether a photo was a selfie, turned out to be easy. However, this seemed less interesting to us than determining where in the photo the artwork was positioned and centering the photo accordingly. For example, if we lined up the borders of Rothko’s No. 14, 1960 across photos, we would be able to see the powerful emphasis on visitor interactions rather than on the artwork itself. Lining up the artworks would require finding a pattern of shapes in the SFMOMA photo that is also present in the Flickr photo. We realized early on that we did not have enough time to write such an algorithm during Art + Data Day, so we decided instead to simply center the Flickr photos while maintaining their aspect ratios. This solution came close to the effect we were looking for.
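The face-detection pass can be approximated with a short sketch like the one below, using the detectObject call and the bundled frontal-face Haar cascade from the opencv npm module. The isSelfie helper and its any-face-counts threshold are assumptions for illustration, not the exact code from the event.

```javascript
// Hedged sketch of the face-detection check using the `opencv` npm
// module; the isSelfie helper and its threshold are illustrative.
const cv = require('opencv');

function isSelfie(imagePath, callback) {
  cv.readImage(imagePath, (err, im) => {
    if (err) return callback(err);
    // detectObject runs a Haar cascade over the image. FACE_CASCADE
    // ships with the module and is tuned for faces seen head-on,
    // which is why frontal selfies were the easy case to catch.
    im.detectObject(cv.FACE_CASCADE, {}, (err, faces) => {
      if (err) return callback(err);
      // Treat any detected face as evidence of a selfie (an assumption).
      callback(null, faces.length > 0, faces);
    });
  });
}

// Usage: filter Flickr results down to photos with at least one face.
isSelfie('./flickr-photo.jpg', (err, selfie, faces) => {
  if (err) throw err;
  console.log(selfie ? `Found ${faces.length} face(s)` : 'No faces found');
});
```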
It would also be worthwhile to try grabbing images from sources beyond Flickr. And because most artworks in our collection returned no image results when searched, we discussed prompting users to visit the museum, find the artworks for themselves, and post their photos with a particular tag, as a way of encouraging the addition of new images to our project.