Our first digital project is due this week. JJ asked us to create a collection of media objects in either Omeka or Scalar, using the built-in tools to play with footnotes, annotations, linking and adding metadata.
I chose Scalar, since it seemed more receptive to video. It took a while to adjust to the program, but by the end I had gained a general grasp of the tools (except for how to remove or edit embedded media—that still escapes me).
My theme was the history and contemporary use of Coptic binding. Check out the finished project to see how it turned out!
The potential for non-linearly structured books in Omeka or Scalar fascinates me. I expected to maintain my excitement for the fluidity of movement throughout the content, and I did, in large part. However, in formulating the paths between pages and media, my brain’s habit of linear construction continually pushed against letting my thought process move in a natural form. The exercise was entertaining, and interrogating the codex form (while doing a write-up of THE codex form) made it doubly so. A nagging concern remains, though, that readers might miss content—it would be a shame for a user to miss that one page, that one piece of media that might engage someone not quite ready to be as excited about Coptic stitch bindings as I am. Then again, that might be an argument for making each page appropriately rich with multimedia content, enough to engage readers without overwhelming them.
I experimented with various levels of reader engagement by including both image and video on one page, mostly technical images on another, and essentially just text with occasional annotations on the first. The idea was that—once I discovered which level of multimedia depth most readers enjoy—all the pages could then be addressed somewhat consistently in that way, allowing the book to feel more uniform and tailored throughout.
This week we worked with several media annotation programs and studied oral history recording and transcription processes.
Thinglink provides a handy means of annotating media for free, at least for the basic features. It’s especially useful because the files users create are easily shared or embedded in a variety of platforms (WordPress—as seen below—Facebook, Twitter, Pinterest, Google+, Tumblr, Edmodo, Tackk, email or the Thinglink Facebook app). Beyond sharing capabilities, it also allows users to create channels, which essentially act as ‘playlists’ of media (either image or video) that can be shared for play or educational purposes. One of the more education-oriented features is group annotation, which facilitates group projects in which users other than the owner can collaborate on contributing annotations of various types—text, images, links and video. JJ also mentioned a small museum that incorporated the embedded files as a means of making its collection more interactive online. The site is a supremely useful one that I would strongly recommend.
If you hover your mouse over the image above, two dots appear on the lower right of the image. Hover over a dot and the text of the annotation will appear. The blue dot shows how I linked to the Wikipedia page from which I drew my basic history of the Coptic stitch. Users can also embed videos into their annotations, as I did with the red dot on the lower left.
Relatedly, ImageDiver (in beta testing) supports image annotation, displaying thumbnails of the section of the image users wish to annotate on the left with the full image on the right (Img 1). Upon clicking either the red rectangle or the thumbnail, users are presented with an enlarged image of the annotated section (Img 2).
I didn’t create an account for myself, so I haven’t played with it firsthand. But if you have high-quality images that can bear this level of zoom, then I’d imagine it would be a powerful teaching tool. Providing all of the thumbnail annotations on the left might prove especially useful, since students could browse through them almost like flashcards.
YouTube, on the other hand, focuses on video and allows for a certain level of customizable annotation, as seen in my video below. The major frustration is that there’s no way to display the annotations off of the image itself. When JJ walked us through the process of annotating a YouTube video, we discovered that even clips shown in full screen (rather than widescreen) don’t allow users to place annotations on the black, empty margins on either side. One feature of YouTube that I very much appreciate, though, is how easy it makes sharing and embedding. Once users have embedded the video in another webpage (like this blog), viewers are provided with all of the same tools—volume control, replay/pause/play, settings, full screen, share, and watch later—as they would have on YouTube, but with the added bonus of an easy redirect button back to the original YouTube page from which the embed is drawn (just click the YouTube icon on the lower right of the video after pressing play and hovering your mouse over the video).
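For anyone curious what the embedding actually involves, it’s just YouTube’s standard iframe snippet (from the video’s Share > Embed option) pasted into the page’s HTML—something like the sketch below, where VIDEO_ID is a placeholder rather than my actual video:

```html
<!-- Standard YouTube embed code, copied from the video's Share > Embed option. -->
<!-- VIDEO_ID is a placeholder for the ID of the video being embedded. -->
<iframe width="560" height="315"
        src="https://www.youtube.com/embed/VIDEO_ID"
        frameborder="0" allowfullscreen>
</iframe>
```

WordPress handles most of this automatically when you paste a YouTube link, but seeing the raw snippet makes it clear why the embedded player carries all of YouTube’s own controls with it.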
For non-classroom purposes, Animoto provides a fun way of putting together a visual presentation that could potentially incorporate annotations. The reason I don’t find it classroom-friendly lies in its handling of text: adding text alters the size of the image, and the text is clear and legible only in certain styles (even though the style I used was listed under Education, the text was washed out and only partially legible). At the free level of usage, I also didn’t discover any way to edit which images were presented together. The images shown alone or together in the following video appear that way purely by coincidence. To ensure that the images that looked good together actually went together, I just had to play with the ordering of the thumbnails and the number of images I included. Perhaps at a paying level this tool would prove more powerful and thus more useful in a teaching setting.
I also have no explanation for why there’s a giant margin below the video (it shows as a grey rectangle on my editing screen but as giant blank space on this published page). Another disadvantage is that, unless you upgrade your account ($$), there’s no way to remove the “Animoto Trial” banners. I’m not a huge fan, though it could prove quite useful for marketing an event or a business.
Aside from these media annotation tools, we also took a look at TinEye and Google’s reverse image search to find sources for images. TinEye worked well for 2D images, while Google managed images of 3D objects along with the 2D. Google also provides images that are “visually similar.” While those similarities can be quite divergent in terms of content, Google’s algorithm picked up on some aspect of the image that might prove useful in creating a group of images or locating unexpected connections between them. TinEye, I’m sure, does just as well with images of 2D objects as Google. But I’m a bit biased towards Google, since it found the original source for an image of a Coptic bound book that I’d been seeking for months, while TinEye couldn’t locate any instances of the image.
To reverse search an image in either, drag the image to the search bar on the websites I’ve linked in the previous paragraph. Users can also upload an image manually. Unfortunately (or fortunately, depending upon need), you have to clear your image search in Google before entering a second image; if you try to search a second image on top of the first, Google will search for images that share the qualities of both. Just something to remember: neither algorithm (as far as I know) deals as much with content as with characteristics of the uploaded image—light and color patterns being most notable.
ORAL HISTORIES & THEIR ANNOTATIONS
Transcriptions serve as the oral history equivalent to media annotations.
We looked primarily at the admirable work being done by the Oral History Metadata Synchronizer (OHMS). While still in beta, the synchronizer seeks to break oral history recordings down into their constituent parts, highlighting the segment that’s being recounted. The “index” for each segment can be labeled with a partial transcript and a segment synopsis, and tagged with keywords and subjects. To the left of this segment metadata is the time at which that segment begins (if input correctly by the technician). The transcript itself is also keyed with links that jump to the beginnings of the segments (again, if keyed in properly). The process used by OHMS makes transcripts far more searchable and skimmable, which will facilitate speedier research and general perusal of these oral histories.
Lynda.com uses another method of syncing an oral presentation and a transcript, one that is likely quite manual and labor-intensive (I’ve not been able to find what program they’re using). In the Information Literacy video course we’ve been watching for my Reference and Resource Selection class, the transcript highlights the section of the audio that’s being spoken at that moment. JJ’s thought was that the person syncing the transcript with the audio was likely manually highlighting each word or phrase as the audio track progressed. If that’s the case, the process shares striking similarities with editing an annotation in YouTube.
My first semester at Chapel Hill, in Carol Magee’s Art Historical Methods course, our class read “Is There a Digital Art History?” by Johanna Drucker. Subsequently, I attended a session of the Digital Salon Series at UNC titled “What is Digital Art History?” in which we discussed our responses to the Drucker article and heard JJ Bauer and Carolyn Allmendinger reflect on their experience working in digital art history. Now, in my second semester, JJ Bauer asks once more for consideration of the realities and possibilities of digital art history, this time for her course on Alternative Methods: Digital Art History.
What struck me most when rereading the Drucker and related articles was a frustration over why art historians (and scholars generally) would resist tools that could potentially simplify the research process and allow researchers a different lens through which to examine their subjects. “The Limits of the Digital Humanities” by Adam Kirsch and Transitioning to a Digital World: Art History, Its Research Centers, and Digital Scholarship by Diane M. Zorich proved particularly challenging. In Zorich’s research, art historians’ responses to the question “What new tools are needed to facilitate art research, scholarship and teaching?” displayed their awareness of the usefulness and power of tools created in response to the digital humanities, including tools that would:
Facilitate search across disparate image sets
Allow search on image metadata and on visual patterns
Enable robust image annotation, including imbedding video, text, and drawings, and allowing links (via URLs) for citation within an image and within specific areas (i.e., details) of an image
Display and register images for side-by-side comparisons and analyses of works of art
Rectify maps, landscape drawings, plans, elevations and other schematic representations of location
Allow bulk downloading of images.
These scholars also mentioned the desirability of being able to:
Annotate digital publications
Cite particular sections within a Web site or other digital publication
Cite particular images and details of images within a digital publication.
But then just a few pages later, Zorich cites the skepticism of many art historians as to the appropriateness of employing technology in research: doubts that digital technology can actually improve on what they’re already doing, add any significance to traditional research, or serve art historical scholarship generally. Perhaps these doubts are an artifact of an older generation of scholars unfamiliar with new technologies and a newer generation overwhelmed by them, but technology is a tool with the potential to facilitate speedier and more in-depth research.
For example, no longer is art historical research limited by what a scholar can physically observe on her own. Thanks to the incorporation of interdisciplinary collaboration and imaging technology into traditional research and connoisseurship, historians are confidently attributing previously anonymous works to their original authors. La Bella Principessa represents the ideal example of how the digital humanities can work for art history in this way. Based in part on the application of a multispectral camera to “capture light from frequencies beyond the visible light range,” historians now attribute the work to Leonardo Da Vinci. The camera’s images allowed for statistical comparison to other works by Leonardo (a more manual version of Drucker’s insistence on creating digital webs of works for comparison). Through the combination of spectral imaging, carbon dating, formal analysis, comparison to similar works and socio-cultural analysis, the art historical community has access to a deeper understanding both of La Bella Principessa singly and of Leonardo’s oeuvre generally.
Admittedly, projects such as these are few and far between due to funding, tenure restrictions, and a general disinterest in engaging with technology. Those less skeptical of the value of the digital humanities are still reluctant to participate in digital scholarship endeavors, citing a lack of technological savvy and an unwillingness to work with those ‘lesser scholars’ who can program and engineer the framework for digital projects. Current standards of publishing, tenure and institutional or departmental regulations don’t help the situation: virtually none of them have embraced the idea of a more fluid delivery system, grappled with the slipperiness of funding and acknowledging interdisciplinary efforts, or come to terms with seeing digital collaborations and online publications as up to scholarly snuff.
During class discussion, JJ confirmed that departments (and even whole institutions) are still unwilling to alter their tenure requirements to acknowledge online and digital projects for tenure applications, ensuring that important projects begun through class collaboration don’t get funding or attention after the students are no longer there to justify the project’s existence. Until those obstacles are removed, it seems unlikely that individual attitudes towards the inclusion of digital humanist tools will change for the better. As Zorich notes, there is no incentive to take leadership on these projects, even in art historical research centers not associated with an academic institution. It will take highly visible scholars in institutions the world over pushing for a move towards funding and fostering digital art history before wide acceptance might be possible. So as exciting as it is to see projects like the one conducted around La Bella Principessa, the satisfaction of those results is bittersweet.
Johanna Drucker, “Is There a ‘Digital’ Art History?” Visual Resources: An International Journal of Documentation, vol. 29, no. 1-2 (2013): 5-13.
Adam Kirsch, “The Limits of the Digital Humanities,” New Republic, May 2, 2014; Diane M. Zorich, Transitioning to a Digital World: Art History, Its Research Centers, and Digital Scholarship, Report to the Samuel H. Kress Foundation and the Roy Rosenzweig Center for History and New Media, George Mason University, May 2012.
Zorich, Transitioning to a Digital World, 15.
Ibid., 22.
Blair Howell, “PBS Documentary Attempts to Identify Renaissance-era Drawing as the Work of Da Vinci,” Deseret News, January 23, 2012.
Drucker, “Is There a ‘Digital’ Art History?” 9.
Zorich, Transitioning to a Digital World, 22-24.
Kirsch captures perfectly the concerns of scholars in the humanities who have yet to engage with or gain full knowledge of the digital humanities, noting that these efforts don’t live up to the elegant standards of traditional, print-based scholarship. He also, reasonably, cites the issue of tenure (and thus publishing) obstacles were the digital humanities to be incorporated into humanities departments; Kirsch, “The Limits of the Digital Humanities.”
Zorich, Transitioning to a Digital World, 22-23.