Thinking about media and oral history annotation

This week we worked with several media annotation programs and studied oral history recording and transcription processes.


Thinglink provides a handy means of annotating media for free, at least for the basic features.  It’s especially useful because the files users create are easily shared or embedded on a variety of platforms (WordPress—as seen below—Facebook, Twitter, Pinterest, Google+, Tumblr, Edmodo, Tackk, email, or the Thinglink Facebook app).  Beyond sharing capabilities, it also allows users to create channels, which essentially act as ‘playlists’ of media (either image or video) that can be shared for play or educational purposes.  One of the more education-oriented features is group annotation, which facilitates group projects in which users other than the owner can collaborate on contributing annotations of various types—text, images, links, and video.  JJ also mentioned a small museum that incorporated the embedded files as a means of making their collection more interactive online.  The site is a supremely useful one that I would strongly recommend.

If you hover your mouse over the image above, two dots appear on the lower right of the image.  Hover over one of the dots and the text of the annotation will appear.  In the blue dot, you can see how I linked to the Wikipedia page from which I drew my basic history of the Coptic stitch.  Users can also embed videos into their annotations, as I did with the red dot on the lower left.

Relatedly, ImageDiver (in beta testing) supports image annotation, displaying thumbnails of the section of the image users wish to annotate on the left with the full image on the right (Img 1).  Upon clicking either the red rectangle or the thumbnail, users are presented with an enlarged image of the annotated section (Img 2).

Img 1

Screenshot of ImageDiver's annotations of The Ambassadors (Hans Holbein, 1533)

Img 2

ImageDiver provides a closeup of the annotated section.

I didn’t create an account for myself, so I haven’t played with it firsthand.  But if you have high-quality images that can bear this level of zoom, I’d imagine it would be a powerful teaching tool.  Providing all of the thumbnail annotations on the left might prove especially useful, since students could browse through them almost like flashcards.

YouTube, on the other hand, focuses on video and allows a certain level of customizable annotation, as seen in my video below.  The major frustration is that there’s no way to display the annotations off of the image.  When JJ walked us through the process of annotating a YouTube video, we discovered that even clips shown in full screen (rather than widescreen) don’t allow users to place annotations on the black, empty margins on either side.  A feature of YouTube that I very much appreciate, though, is how easy it is to share and embed videos.  Once users have embedded a video in another webpage (like this blog), viewers get all of the same tools they would have in YouTube—volume control, replay/pause/play, settings, full screen, share, and watch later—plus an easy redirect button back to the original YouTube page from which the embed is drawn (just press play, hover your mouse over the video, and click the YouTube icon on the lower right).

For non-classroom purposes, Animoto provides a fun way of putting together a visual presentation that could potentially incorporate annotations.  The reason I don’t find it classroom-friendly lies in how it incorporates text: adding text changes the size of the image, and the text is clear and legible only in certain styles (even though the style I used was listed under Education, the text was washed out and only partially legible).  At the free level of usage, I also didn’t discover any way to control which images were presented together.  The images shown alone or together in the following video appear that way purely by coincidence.  To ensure that the images that did look good together went together, I just had to play with their order in the thumbnails and the number of images I included.  Perhaps at a paying level this tool would prove more powerful, and thus useful in a teaching setting.

I also have no explanation for why there’s a giant margin below the video (it shows as a grey rectangle on my editing screen but as giant blank space on this published page).  Another disadvantage is that, unless you upgrade your account ($$), there’s no way to remove the “Animoto Trial” banners.  I’m not a huge fan, though it could prove quite useful for marketing an event or a business.

Aside from these media annotation tools, we also took a look at TinEye and Google’s reverse image search as ways to find sources for images.  TinEye worked well for 2D images, while Google managed images of 3D objects along with the 2D.  Google also provides images that are “visually similar.”  While those similarities can be quite divergent in terms of content, Google’s algorithm picked up on some aspect of the image that might prove useful in creating a group of images or locating unexpected connections between images.  TinEye, I’m sure, does just as well with images of 2D objects as Google.  But I’m a bit biased towards Google, since it found the original source for an image of a Coptic bound book that I’d been seeking for months, while TinEye couldn’t locate any instances of the image.

The process for reverse searching an image in both is to drag an image to the search bar on the websites I’ve linked in the previous paragraph.  Users can also upload an image manually.  Unfortunately (or fortunately, depending upon need), you have to clear your image search in Google before entering a second image for reverse image searching.  If you try to search a second image on top of the first, Google will search for images that share the qualities of both.  One thing to remember: neither algorithm (as far as I know) deals as much with content as with characteristics of the uploaded image—light and color patterns being most notable.
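To give a sense of what “characteristics rather than content” means, here is a toy sketch of a perceptual “average hash” in plain Python. This is emphatically not TinEye’s or Google’s actual algorithm (both are proprietary); it just illustrates the general idea of matching images by their light-and-dark patterns rather than by what they depict. The tiny 4x4 “images” are made up for the example.

```python
# Toy "average hash": an illustration of matching images by
# brightness patterns rather than by semantic content.

def average_hash(pixels):
    """Reduce a grayscale image (rows of 0-255 values) to a bit
    string: 1 where a pixel is brighter than the mean, else 0."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

def hamming_distance(h1, h2):
    """Count differing bits; a small distance means the images
    share a similar light/dark layout."""
    return sum(a != b for a, b in zip(h1, h2))

# img_b is a slightly brightened copy of img_a;
# img_c is an unrelated checkerboard pattern.
img_a = [[200, 200, 10, 10],
         [200, 200, 10, 10],
         [10, 10, 200, 200],
         [10, 10, 200, 200]]
img_b = [[210, 215, 25, 20],
         [205, 220, 15, 30],
         [20, 25, 210, 205],
         [15, 10, 215, 220]]
img_c = [[10, 200, 10, 200],
         [200, 10, 200, 10],
         [10, 200, 10, 200],
         [200, 10, 200, 10]]

print(hamming_distance(average_hash(img_a), average_hash(img_b)))  # 0: near-duplicate
print(hamming_distance(average_hash(img_a), average_hash(img_c)))  # 8: different layout
```

The brightened copy hashes to the same bit pattern as the original, which is why a reverse image search can find re-encoded or recolored copies of an image even when it has no idea what the image is a picture of.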


Transcriptions serve as the oral history equivalent to media annotations.

We looked primarily at the admirable work being done by the Oral History Metadata Synchronizer (OHMS).  While still in beta, the synchronizer seeks to break oral history recordings down into their constituent parts, highlighting the segment that’s being recounted.  The “index” for each segment can be labeled with a partial transcript and a segment synopsis, and tagged with keywords and subjects.  To the left of this segment metadata is the time at which that segment begins (if input correctly by the technician).  The transcript itself is also keyed with links that jump to the beginnings of the segments (again, if keyed in properly).  The process used by OHMS makes transcripts far more searchable and skimmable, which will facilitate speedier research and general perusal of these oral histories.  The Information Literacy video course we’ve been watching for my Reference and Resource Selection class uses another method of syncing an oral presentation and a transcript, one that is likely quite manual and labor intensive (I’ve not been able to find what program they’re using).  In those videos, the transcript highlights the section of the audio that’s being spoken at that moment.  JJ’s thought was that the person syncing the transcript with the audio was likely manually highlighting the word or phrase as the audio track progressed.  If that’s the case, the process shares striking similarities with editing an annotation in YouTube.
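The OHMS approach is easiest to see as data. Below is a hypothetical sketch of a segment index in Python: the field names and sample segments are my own invention, not OHMS’s actual schema, but the shape is the same idea the post describes, where each segment carries a start time, a synopsis, and keywords, so a keyword search returns jump-to points in the recording instead of a wall of transcript.

```python
# Hypothetical OHMS-style segment index (field names and sample
# data are invented for illustration, not the real OHMS schema).
segments = [
    {"start": 0,   "synopsis": "Childhood on the family farm",
     "keywords": ["farming", "family"]},
    {"start": 312, "synopsis": "Moving to the city for work",
     "keywords": ["migration", "labor"]},
    {"start": 781, "synopsis": "Organizing the first union local",
     "keywords": ["labor", "unions"]},
]

def jump_points(index, keyword):
    """Return the start times (in seconds) of every segment
    tagged with the given keyword."""
    return [seg["start"] for seg in index if keyword in seg["keywords"]]

print(jump_points(segments, "labor"))  # [312, 781]
```

A researcher searching for “labor” gets two timestamps to jump to rather than having to skim the whole recording, which is exactly the speedier perusal the indexing makes possible. It also shows why the “if input correctly by the technician” caveat matters: a mistyped start time silently sends the listener to the wrong place.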

3 thoughts on “Thinking about media and oral history annotation”

  1. Hi Elizabeth,

    I was also pretty inspired by the capabilities of Thinglink in terms of sharing, educating, and exploring. I like that the creator has a certain amount of control over organization into channels, but that there is kind of a social media aspect where you can follow other peoples’ public projects, and the group tagging and annotating is such an interesting way to share ideas. Classroom discussion is important, but sometimes more well-thought-out ideas come while doing homework or engaging with your peers in an online forum. The group tagging and annotating could also be really helpful with online classes or distance education. The interface kind of reminded me of a more sophisticated/dynamic version of Pinterest! I enjoyed the example you gave from your Thinglink project. I think it was just the right amount of information about a specific item, not enough to overwhelm, which could be the danger here, but that is up to the discretion of the creator and thankfully not limited by Thinglink itself. I was also so frustrated with annotations showing up on top of the video with no reference to the text anywhere else. It’s like, if you miss it as a viewer, it’s just gone, or it’s annoying rewinding until the information is there. A more static holding place for the annotation in addition to the time-limited pop-up on the video would really be helpful. I suppose a YouTube annotator could put the annotations in the description, but I don’t know that that would be intuitive for your everyday YouTube user.

  2. Elizabeth,

    I appreciate how you provide a very helpful overview of many of the different tools that we’ve looked at for annotating and commenting upon various types of media. It is often simultaneously a benefit and a pitfall of digital technologies that there are so many different tools and platforms available for doing similar tasks. It becomes increasingly important, then, to keep track of the variety of options that are out there and to think critically about what tool will best serve the project at hand. It’s easy to just start using a tool that you’re familiar with, but you might be missing out on a tool that’s much more suited to what you’re trying to accomplish. With digital technologies, it certainly pays to be flexible, open, and willing to at least experiment with new tools and platforms as they become accessible. I think that’s been one of the great benefits of this particular course so far: we have the opportunity to try out a lot of different tools, and then have the space and time to think, discuss, and write about what we like and what we’re having issues with. Moving into the future as scholars using digital tools for our research or work, it will certainly be beneficial to continue to make space for this kind of reflection, comparison, and discourse about the platforms we use to do our work.

    As you outline here, for audio-visual annotation, there are a lot of potential factors that someone might want to take into account when deciding what tool to use, including ease of use, cost, interoperability, and audience. As you touch on throughout, I think cost especially becomes a determining factor. Unless you find a tool that is essential to your work and that you know you will continue to use for a long time, it is really hard to justify paying for the “premium” account. It can also be really off-putting, as you suggest with Animoto, when even the free version of a tool comes with severe limitations or annoying “extras” like the obligatory watermark. Restrictions like that definitely do NOT make me want to pay for the full version of the service. I like how, throughout this post, you pay attention to the different tools’ potential educational and scholarly value. The audience for a platform is so crucial to how someone will evaluate the tool; what might be good for marketing purposes (something like Animoto) may be terrible for the classroom.

  3. I agree with your general assessment of TinEye image search versus Google’s reverse image search. In my exploratory searches with both tools, TinEye provided a more focused set of search returns, which could be very useful for institutions looking to trace how the images they make accessible online are accessed and used (as in the example article, “Where Do Images of Art Go Once They Go Online”), as well as for scholars seeking more “exact” copies of the images they search for. However, Google’s broader search returns could prove useful when seeking variant permutations of a given image, how certain images may have been appropriated in other works, or iconological/stylistic connections between sets of very similar images. In general, it seems that Google’s reverse image search might be more useful in an exploratory phase of research, while TinEye image search could prove very helpful when you know exactly what you’re looking for. Of course, your example described above proves an excellent exception to this rule, reminding us that it’s always valuable to check multiple sources–always a good scholarly practice, whether using digital tools or not. And, in the case of these image searching tools, there’s an added advantage to checking multiple sources multiple times, since search returns will change over time as images are added and manipulated in new ways.
