Experimenting with Microsoft Sway

The final project of our half semester course INLS 690-249: Intellectual Property and Copyright in Archives required us to learn how to create presentations in Microsoft Office Sway.

The app provides a vast array of aesthetically cohesive templates, but they’re not terribly customizable.  In the ‘Design’ tab, you can alter the inspiration image that drives the automatic selection of a color scheme (helpful if all of your images are already color cooperative), as well as the color scheme itself, the font style & size, and the animation emphasis (AKA text block and media size).  In the ‘Layout’ tab, you can select among three options for how the flow proceeds.

From the Sway homepage, I chose to start the presentation from a document I had saved to my computer.  This generated an automatic template that segmented the text and images into relatively intuitive chunks and headings.  It intuited the headings just from the bolded caps lines that I had in the document as placeholders.  Anything placed in a table cell (my shorthand for Tips & Tricks boxes) gets turned into an image.  This is a nice idea, but if you have linked material, it’s no longer accessible in the image and needs to be reformatted as text.  The media additions were relatively intuitive and responsive, and embedding on Sway is by far the easiest embed process of any platform I’ve worked with.

In trading customizability for pre-curated designs, Sway operates as PowerPoint for those who are more comfortable with a WordPress visual editor approach to user experience: there’s a button for every function, and there aren’t too many buttons.

This app also works best for small presentations that aren’t terribly text heavy.  The assignment required us to select a focused topic and write up a multimedia mini blog post (500-1000 words).  Even with the mix of media and the limited word count, it still proved challenging to keep the text dynamic and the visuals well paced.  If coordinated with a formal verbal presentation, I could see this app as a major time saver and a means of cutting down on design stress.

It strikes me that the main virtue of Sway is that, once you get over the relatively small learning hump, it’s a quick and dirty means of creating a low-input, low-stress presentation that looks like you put more effort in than you did.

Take a look:

To see how all of the individual Sway presentations embed into one presentation as a compilation, check out the class Sway presentation that our professor Denise Anthony put together:

Timemapping presentation

For our final presentations in JJ’s Digital Art History course, we were directed to put together a five-minute presentation on our favourite projects from the semester, and I chose timemaps.  I didn’t make one this semester, but I ran across them for another class and when building the timelines for this class.

To see the accompanying notes, click the gear underneath the presentation to select ‘Open Speaker Notes.’

Testing of Online Timeline Options

It’s a challenge to find a good timeline application.  In part, this is because timelines are so unwieldy to begin with.  But this is also due to the diversity of purposes for timelines.  Some folks want to track the life of one notable person (see Kelsey or Lauren’s post).  Others are looking to track the highlights of a period or topic (see Erin’s post).  Others still use timelines to track the development of a particular object or media (see Colin’s post).  I chose none of the above, and instead am tracking family history.  Since this is one of the main activities that engages the average person in an archival setting or research context, I figured that modeling various timeline options for family history might be most widely applicable.  So, welcome to the wacky world of the Grabs as they guide us through this timeline investigation.

Tiki-Toki

To see the Tiki-Toki timeline, click this link:

http://www.tiki-toki.com/timeline/entry/623027/The-Grab-Slamin-Family/

Inputting data into this timeline wasn’t too bad, but the subsequent visuals just weren’t doing what I was aiming for.

First of all, the data input isn’t into a spreadsheet.  Instead, you’re creating entries based on a type: span or story.  Though consistency is harder to maintain due to the lack of a database (and input takes far longer), the instructions provided by the interface are helpful and clear, so you can personalize each entry as you prefer.  Spans are a new function, so perhaps that’s why they don’t work as well.  I tried creating spans for the lifespans of some Grabs.  As far as I can tell, there’s no way to clarify where one span ends if multiple spans overlap.  The stories are easy enough, though.  Stories are marked by little lightning bug-looking things on the slider bar and appear like chat boxes on the timeline itself.  These lightning bug indicators, as charming as they are, would be problematic if you’re looking for something with more textual clues as to what’s going on from the slider bar.  N.B. You have to actually click “More” at the bottom of a chat box if you want to read an entire entry—you can’t just click anywhere in the chat box, which is what my intuition called for.

Secondly, the photos from Flickr just would not work, so I ended up linking to the copies on Facebook (linking is meant to save you and them storage space, so there’s no download option).  Then once (if) the links work, the span images aren’t adjustable, so you have to hope that your photo is composed with the primary area of interest concentrated in the exact center.  Mine were not, so you’re getting a lot of midsection and no faces on people.  The linking applies to all media, as well.  I wanted to include an mp3 from my desktop, but needed to upload it to SoundCloud in order for Tiki-Toki to acknowledge it.

The really cool feature of Tiki-Toki is in the visualization.  It has a 3D option.  Instead of scrolling right to follow the timeline, you’re traveling down a road and the events slide past you.  It reminds me of a cheesy ’80s time travel movie.  So, not great for academic settings, but a cool toy for at-home use.  Also handy:  Once you’ve input your data, you can print it, or it can be downloaded as a PDF, CSV or JSON file.  There’s a super handy little embed tab, too, but that’s only accessible if you have a paid account rather than a free one, which is why I’ve only linked to this timeline, not embedded it.  Tiki-Toki also has a group edit function, so others can contribute if you’d like.
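If you do download the JSON export, it’s easy enough to poke at the data outside of Tiki-Toki.  Here’s a minimal Python sketch of what that might look like; the file name and field names are placeholder guesses rather than Tiki-Toki’s documented schema, so check your own export before borrowing it:

```python
import json

# Hypothetical sketch: peek inside a Tiki-Toki JSON export and list its stories.
# The file name and field names ("stories", "title", "startDate") are placeholder
# guesses -- open your own download first to see what the export actually contains.
with open("grab-slamin-family.json", encoding="utf-8") as f:
    timeline = json.load(f)

for story in timeline.get("stories", []):
    print(story.get("startDate", "undated"), "-", story.get("title", "untitled"))
```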

Timeline JS3

Timeline JS provides a Google spreadsheet template into which you can enter your data.  Be careful not to alter or delete any of the column headings; otherwise it won’t translate back into Timeline JS properly.  The data entry part is pretty self-explanatory—it’s just like any other spreadsheet.  This format of data entry is nice, too, since it enforces consistency.  Tiki-Toki does, though, allow more play with the visual aspect of the stories and spans.

The website walks you through the process of opening the template, filling it in, linking it back to the website and then producing an embed code and preview without mishap.  I do like that you no longer need to have an account with Timeline JS, since it’s really just providing an interface for your outside spreadsheet.  Plus it’s one less password to remember.  Since it’s based on a Google spreadsheet, it would also be compatible with group contributions.
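For the curious, the embed code you end up with is just an iframe pointing at Timeline JS’s embed page with your published spreadsheet’s key in the source parameter.  A rough sketch of assembling it yourself (the sheet key is a placeholder, and the exact URL parameters may change over time):

```python
from urllib.parse import urlencode

# Sketch: assemble a Timeline JS embed snippet from a published Google Sheet key.
# The sheet key is a placeholder; the URL pattern may change, so treat this as
# illustrative rather than definitive.
sheet_key = "YOUR_PUBLISHED_SHEET_KEY"
params = urlencode({"source": sheet_key, "font": "Default", "lang": "en", "height": 650})
embed_url = "https://cdn.knightlab.com/libs/timeline3/latest/embed/index.html?" + params

print("<iframe src='{}' width='100%' height='650' frameborder='0'></iframe>".format(embed_url))
```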

The appearance is quite clean and it travels linearly in an intuitive way.  I like how each span and event sits in its own space on the slider bar, unlike the overlap in Tiki-Toki.

This timeline also handles media as links, not direct uploads.  But it doesn’t appear to struggle with Flickr the way Tiki-Toki does.

The one really major drawback of Timeline JS is that the slider bar covers a good portion of the timeline.  It’s not as much of an issue for me, but if you have a lot of text, there’s no apparent way to minimize the slider bar to allow a full screen view.

TimeMapper

TimeMapper is another spreadsheet based timeline application.  Like Timeline JS, TimeMapper uses a Google spreadsheet template, but you can create an account with the website and it does indicate the TimeMapper account to which the timeline belongs (though you can create timemaps anonymously).  I found the template for Timeline JS to be more intuitive, especially because acquiring the link to plug into the website requires one less step in Timeline JS (for TimeMapper’s spreadsheet, make sure you ‘publish to web,’ but then get the shareable link, NOT the link that the ‘publish to web’ window offers).

Like Tiki-Toki, TimeMapper accommodates different data views.  This application provides three: Timemap, Timeline and Map.  If you’re looking to map something over time, the timemap option provides ideal (and unique) functionality—as far as I’m aware, not many other mapping or timeline applications allow you to travel across a map chronologically.  If you do want a map, pay close attention to the data entry instructions from the template provided.  Because my data set doesn’t incorporate any GIS information, I’ve stuck with the traditional timeline view.

I did try to use the latitude, longitude field for the last slide here, but either I entered the numbers in a way that their system didn’t recognize or it doesn’t produce a map.  That will take some experimenting to make work.

The clean lines of this timeline are much like Timeline JS, but more of the text is visible.  This, I think, is the best of the 4 timeline options I sampled for this post.

Dipity

Dipity is the social media version of a timeline application.  It’s meant to be more commercial and click bait-y.  By its own account, Dipity means to interactively “organize the web’s content by date and time” to “increase traffic and user engagement on your website.”  That mission might work if the website would actually allow me to create a timeline.  I’ve gone to create one three times, and every time it kicks me back to the homepage after I’ve entered the timeline’s title information (how do I know I’ve tried three times?  That’s the number of timelines allowed to free accounts).  Even more frustratingly, when I try to go to ‘My Profile,’ the website generates a page-not-found message, despite showing me as logged in.  Basically, it looks cheap and it doesn’t work.  Give the other three timelines a chance before trying this one out, if you can make it work.

Another option

Neatline

I haven’t tried this timeline application out, so I can’t attest to its functionality.  But Neatline looks super fun to use.  It’s only available as a plugin on Omeka.org (not Omeka.net), which means a paywall, unless your institution can offer you free access to its account, should it have one.  Neatline, like TimeMapper, allows for a timemap.  Check out some of the demos to see what it can do.

Digital Humanities vs Humanities?

This week our readings covered the digital humanities (and humanities generally) debate on instrumentalism vs criticism.  This idea that digital humanities is solely product-oriented, neglecting the traditional humanities’ concern with criticism, is a divide with which I struggle.  Since JJ does display a critical approach, perhaps this is an artefact of her take on digital humanities.  But I’m inclined to think that this divide is an artificial wall we’ve constructed, rather than anything inherent in either DH or the humanities.

Digital humanities, as I understand it, is really just an extension of traditional humanities.  Without the humanities, digital humanities wouldn’t exist.  Digital humanities largely represents a new humanist method that helps the discipline contribute to cross-disciplinary conversation and public relevance by meeting the audience on its native information ground.  Without the critical aspect, the digital humanities wouldn’t be able to perform that work.  From the projects we have examined in JJ’s course, it seems to me that both internal and outward-looking criticism are built into DH.  Take, for example, the GIS project Transatlantic Encounters, conducted by Beth Shook, on the presence of Latin American artists contributing to and interacting with the Parisian art scene during the interwar period.  Shook used the tools provided by DH to critically examine the canon of history and art history, with its focus on Western Europeans and white Americans.  Or take Digitizing “Chinese Englishmen” from Adeline Koh, who also used DH’s production and criticism features to produce a blog that strives for a decolonized archive (click here for another fascinating assessment of decolonizing the archive).  Both of these projects tie instrumentality to the critical foundation of the humanities.  As do Medieval People of Color, Barnard’s Reacting to the Past game Modernism vs. Traditionalism: Art in Paris, 1888-89, the University of Sydney’s Digital Harlem, and Dan Cohen’s Searching for the Victorians.  For more projects that display the interplay inherent in critical product for critical scholarship, see Alexis Lothian and Amanda Phillips’s article in E-Media Studies.  A rough list of critical DH projects alone could fulfill the required word count for this post.

Perhaps these projects are atypical in the way that they internally critique the humanities or pedagogy while also contending with outward critique regarding under-acknowledged sections of scholarship.  But the syllabus JJ created and the resources she compiled already strike me as evidence of a pretty broad practice of criticism built into the instrumentalist nature of DH.

When I asked broadly about the division between product and criticism in class, we landed on the comparison between a welder and a philosopher, thanks to Marco Rubio’s fictitious statement that welders make more than philosophers.  Another reference was then made to how philosophy helps welders operate ethically in the market economy (those of you versed in the conversation around the presidential debates can supply the exact reference in the comments?).  I would take a slightly different tack, though, one sparked by a comment in Scott Samuelson’s Atlantic article:

those in the lower classes are assessed exclusively on how well they meet various prescribed outcomes, those in the upper class must know how to evaluate outcomes and consider them against a horizon of values.

Historically, this is true.  But isn’t the point of modern education in the United States to ensure that, no matter one’s profession—plumber or scholar—each individual can think critically, can think for oneself?  Samuelson starts to tease out this idea, but remains on a loftier level.  My inclination is to examine the minute practicalities.  Those who also revel in home improvement shows like This Old House will immediately grasp why critical thinking is so essential to any manual labor at both the minute and holistic levels.  Skilled workers have to respond to the demands and quirks of each particular environment, analyzing whether they are using the right work-arounds to ensure a project’s long-term success and whether those actions will interfere with other, unrelated projects (eg plumbing and electric).  Otherwise, next week, two years, or ten years down the line, a homeowner ends up with a massive plumbing, roofing or other nightmare that negatively impacts the house.  Thus any uncritical work proves not just useless, but damaging.

If the driving force behind the humanities is criticism, then isn’t it equally important for those receiving a technical education to learn independent thought as those with a liberal education?  It’s this foundational assumption that makes it so challenging for me to understand how criticism could possibly be divided from the DH, making it into pure product creation.  If not even a  plumber or welder’s everyday actions can be divided from criticism, then how can a direct derivation of the humanities be, at its core, uncritical?


Open Access: Increasing participation in scholarship

I have spent the majority of the past week discussing the value of education vs. degrees and the barriers a significant portion of the population faces in obtaining the credentials and associations required for respected participation in scholarship.

In areas where primary and secondary education provision remains troublingly weak, the higher ed options available to students produced by those systems are limited.  Unfortunately, this means that any scholarship addressing those populations is represented either by outside observers or by the limited number of in-group folks who made their way into academia.  This leaves out the valuable perspectives of a massive section of our population.

Thanks to the growth of independent or self-published avenues and online, semi-formal scholarly platforms, however, participation barriers for a portion of that excluded population (and others not generally included in academia) are diminished.  Of course, these avenues still face ridicule from a vocal core of academics and administrators.  But the shuttering of university presses and the standardization of open access journals have cleared the way for a rethinking of publication options.

Open access (OA) journals are one hammer whacking away at the rigidity of academic publishing.  Open access literature is online, free of access charges, and free of most copyright or licensing restrictions.  Despite the insistence of many who don’t use OA publishing, all major scientific or scholarly OA journals insist on peer review.  The primary difference between OA and traditional publishing lies in the pay structure.  OA literature is free to read.  Producing OA journals is not without cost, though, even if it is much cheaper than traditional publication.  As with traditional journals, accepted authors pay a publication fee (often waived in instances of hardship).  Editors and referees all donate their labor.  This model ensures that readers don’t face a paywall, and thus ensures the widest possible online access.

Questions of reliability aren’t entirely unjustified, though.  Unscrupulous individuals do take advantage of this new publishing system.  In my library science reference class with Stephanie Brown, we went over the pitfalls of OA publishing.  Here’s a checklist Stephanie created on things to keep in mind when assessing or submitting to an OA journal:

  • Is the journal listed in the Directory of Open Access Journals?  If so, then it’s likely reliable.
  • Does the journal provide a clear statement on article processing charges (APCs) and other fees?  If the fees are unreasonable, stop and find another journal.
  • Receiving continual article solicitations from a journal via email?  File it under spam and find a different journal.
  • Does the journal make unlikely promises (e.g. brief peer review turnaround)? Stop & find another journal.
  • Download an article and read it.  Would you want your work to appear next to it?  If not, find a different journal.

Traditional publishing does have the possibility of facilitating inclusion if modeled to do so, however.  In the Wikipedia summary of Kathleen Fitzpatrick’s Planned Obsolescence: Publishing, Technology, and the Future of the Academy, the author notes that the current university press model treats the press as a separate institution from the university, one that’s meant to at least partly support itself financially.  But if the university incorporates the press more fully into itself, then the press “has a future as the knowledge-disseminating organ of the university.”  In order for this to happen, institutions of higher learning must first reconceive of themselves as “a center of communication, rather than principally as a credential-bestowing organization.”  Tabling the issue of overemphasis on credentialization in the job market, ensuring that a press reflects the learning of an institution’s constituents is both a way to provide professors and students an opportunity to publish and a means of holding the university accountable as an institute of learning rather than a degree churn.  Many schools’ student groups publish a law or business review comprised of student contributions, but few schools encourage students to publish for a wider audience through their own presses.

Until university presses are revamped, we have OA publishing and peer-to-peer online platforms.  Peer-to-peer—like OA—provides a different publication model, but one focused on dialogue between participants for a broader conception of peer review.  MediaCommons from the Institute for the Future of the Book provides an ideal example of this new approach.  It focuses on exploring new forms of publishing for media studies through blog posts that others can comment upon in the same capacity as peer reviewers.  These posts are tied to profiles that link to the participants’ projects and works, which yields a networked approach to publishing, both through the interpersonal networks displayed in post commentary and through links to related scholarship.

These online networks become increasingly important as the volume of publication submissions increases.  Peer-reviewed journals (the form of journal required by tenure committees) require a sufficient pool of referees from which to draw so that no individual is overburdened with requests for reviews.  As Maxine Clarke points out in her blog post on “Reducing the peer reviewer’s burden,” if more scholars with subject expertise are findable, then the pool of referees to participate in peer review deepens.  And given participation on communal review platforms like MediaCommons, those scholars will be more prepared to perform the duty of jurors, even if their university did not formally prepare them.  This has the added benefit of not just relieving the pressure on the current pool of peer reviewers, but also reducing the influence of a few on many.  A reader’s personal experience of the world and focus within the subject colors her or his perspectives, and thus her or his edits and comments.  More readers means more diversity in editing perspectives.

Other publishing avenues to keep in mind for monographs are self-publishing, print-on-demand, and independent publishing.  Print-on-demand is a form of self-publishing (an option still derided by the academy) that allows you to publish your monograph to an online platform from which visitors can download or order a printed copy of the book.  Lulu.com is a particularly popular print-on-demand self-publishing site.  Independent publishers are generally smaller presses and can also offer print-on-demand.  For instance, Library Partners Press at Wake Forest University is one of the many independent presses that operates on a digital platform, allowing for the option of printing.

Folks have information to share with one another, and so many scholars (whether members of the academy or not) have expertise to tap.  The current Big Publishing business doesn’t fully acknowledge or use those people—it’s only natural that legitimate alternatives would pop up in place of that operating procedure.


Pedagogy & Digital Media

When I heard Jaskot’s talk, I realized that I was missing out on a new and interesting approach to art history. I had previously used technology to record, organize, and even represent my work as part of a larger conventional framework. I had not used technology to help me better understand my work or to help me draw new conclusions.  —Nancy Ross, Dixie State University

This comment from Nancy Ross’ article “Teaching Twentieth Century Art History with Gender and Data Visualizations” gets at the heart of digital humanities research as we’ve understood it in this class.  For most scholars, digital humanities tools are a means of producing an accompanying visualization.  This neglects how digital humanities tools can actually serve as a new means of interpretation and scholarly output—an expression of research in itself.  Ross goes on to describe how she used a non-canonical textbook paired with a group networking visualization project to help her undergraduates better grasp the implications of the research they were conducting on women artists and their social-professional networks.  The students responded with enthusiasm and noted how the process of constructing the visualization altered and strengthened the conclusions they had begun to draw before seeing their research in a new form.

In a class discussion on virtual reality models for architectural spaces, JJ commented that many of the scholars working to put the visualization together found that the process of compiling the data and making it work cohesively was actually far more revealing than the finalized model itself.  Inputting the data dramatically altered the scholars’ understanding of how the space worked, while the final product looked as if the space was always meant to be understood in that way.  Process can be a powerful tool in research.  See, for example, the outputs resulting from George Mason University’s new technology requirement for its art history master’s program.  These projects allowed the students to experiment with new media relevant to their professional interests while exploring previously unforeseen connections and research conclusions facilitated by their software.

In terms of pedagogy, digital humanities projects really can prove the ideal form for student engagement.  Randy Bass of Georgetown’s Center for New Designs in Learning & Scholarship provides a comprehensive breakdown of learning approaches made possible by assigning digital projects.

  1. Distributive Learning – the combination of growing access to distributed resources and the availability of media tools by which to construct and share interpretation of these resources allows for distribution of classroom responsibility to students.
  2. Authentic Tasks and Complex Inquiry – the availability of large archives of primary resources online makes possible assignments that allow for authentic research and the complex expression of research conclusions.
  3. Dialogic Learning – interactive and telecommunications technologies allow for asynchronous and synchronous learning experiences and provide spaces for conversations and exposure to a wide array of viewpoints and varied positions.
  4. Constructive Learning – the ability to create environments where students can construct projects that involve interdisciplinary, intellectual connections through the use of digital media that are usable.
  5. Public Accountability – the ease of transmission of digital media makes it easy to share work, raising the stakes of participation due to the possibility of public display.
  6. Reflective and Critical Thinking – in aggregate, learning as experienced within digital media now available to pedagogues contributes to the development of complex reflective and critical thinking that educators wish to instill in their students.

In my own learning experiences, I’ve found that engaging with new media with an assortment of these 6 learning approaches in mind allows me to conceive of my research in a broader context and with a more diverse audience while still delving deeply into the subject.  Like Nancy Ross’ students, my attention was sustained for much longer and in manifold ways by having to think critically about making the platform or software work for my purposes.

This course is also making me look back on previous assignments or projects I’ve worked on that could have dramatically benefited from the creation and inclusion of DH visuals.  For example, as a senior at Wellesley College, I contributed to a transcription and annotation project around Anne Whitney that was associated with Jackie Musacchio’s course on 19th century women artists traveling abroad.  As part of the course, we established a standard for transcribing Anne Whitney’s letters from the collection in our archives.  The annotation process included researching & footnoting all items of note in our assigned letters, establishing biographies for each individual mentioned, and linking letters that she referenced or in which she continued a conversation on a topic.  The relationships among Americans participating in the Grand Tour and establishing studios in Europe (Italy especially) were quite tight, ensuring few degrees of separation between cliques.  Since Whitney so considerately dated all of her letters and virtually always provided her geographic location, a networked or GIS visualization of the data compiled in the annotation process could show Whitney’s place in the social melee: who she was interacting with, where and when; how they connected to other travelers and artists; where Whitney was showing and with whom, just to name a few avenues for the map.  This could prove especially fruitful when compared to similar projects in the field that focus on artistic output in the areas where Whitney was active.
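To make that idea concrete, the annotation data we compiled (dates, places and people mentioned) maps naturally onto a list of dated, located ties that a network or GIS tool could then visualize.  A toy sketch, with invented placeholder letters rather than actual transcriptions from the project:

```python
# Toy sketch: turn annotated letter data into edges for a network or GIS
# visualization. The letters below are invented placeholders, not actual
# transcriptions from the Anne Whitney project.
letters = [
    {"date": "1867-03-02", "place": "Rome", "people": ["Correspondent A", "Sculptor B"]},
    {"date": "1868-11-15", "place": "Paris", "people": ["Correspondent A"]},
]

edges = []
for letter in letters:
    for person in letter["people"]:
        # Each mention becomes a dated, located tie between Whitney and that person.
        edges.append(("Anne Whitney", person, letter["date"], letter["place"]))

for edge in edges:
    print(edge)
```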

Part of why this sort of visualization might prove important is that it would use the data we’d already compiled to participate in dialogue around these artists and the state of the art world at the time, ensuring relevance beyond just those interested in Anne Whitney.  Scrubbing the data in a way that allows for replication or other modeling, and then making that data open source, would doubly ensure this.

Conceiving of social media as another teaching platform for students.

I use Pinterest with embarrassing regularity—both as a personal indulgence in planning the minutiae of my theoretical future home and as a platform for more scholarly endeavors that incorporate various media.  Other than the lack of tagging capabilities, the site is beautifully suited for research and reference.

For example, in my Collections Development course, Professor Mary Grace Flaherty assigned a final project in which we developed a collection for a specific population.  I chose to create a core collection for book artists.  Instead of simply compiling a bibliography of resources, I created a Pinterest board to host a more dynamic catalogue.  Why Pinterest, you may ask?  One obvious advantage is that it embeds video directly into the pin (AKA a catalogue entry, in this case).  Of far more importance, however, are Pinterest’s networked search functions.  As mentioned, Pinterest doesn’t allow for tagging of pins to simplify searching by category within a single board.  It does allow for 1 keyword search function and 4 different browsing functions across boards and pins, though.

Let me break those 5 functions down for you:

  1. A keyword search function that seeks out pins using 4 limiters: searching across all pins on Pinterest; searching across your own pins; searching for other pinners; or searching for boards.  This search function also suggests terms to allow for greater granularity.
  2. A browsing function that allows users to see other pins related to a particular pin.
  3. A browsing function that allows pinners to see other boards with items related to a particular pin.
  4. A browsing function that allows pinners to see other boards on which a particular pin is pinned.
  5. A browsing function that allows pinners to see what else from a particular site has been pinned on Pinterest.

This sort of searching and browsing turns Pinterest into a highly linked catalogue and thesaurus.  One pin can create an entire network of related items, which turns the items in the Book Artists’ Core Collection into a starting point for practitioners or scholars to conduct more in-depth research into a project.  When I began the research that inspired this collection (for a book arts class, which also has its own reference board), Pinterest allowed me to delve deeply into contemporary and traditional methods for emulating every aspect of a British 14th century book of hours.  It also provided inspiration for how to shape that book of hours into an artist book documenting the history of the book.  By identifying one relevant pin on parchment & papermaking or regional binding methods or paleography & typography, I could follow those pins to related resources by using the browsing functions, or even just by following the pin back to its source.

By using Pinterest as the host for my catalogue, I also skirted around the limitations inherent in any collection—one can only develop each topic within a collecting area up to a point before the resources are outside of the collecting scope.  Pinterest lets users control when they’re deep enough into a topic, since the boundaries of the catalogue are so malleable.  For instance, my core collection doesn’t contain an encyclopaedia on the saints or regional medieval plants.  This information is highly relevant to a book artist working on a book of hours, but it’s too granular for a core collection of resources.  For more on the Book Artists’ Core Collection, see this blog post.


Experimenting with 3D Scanning

Autodesk 123D Catch desktop & smart phone app

Last week, we visited the Ackland Museum to use one of their objects for our first foray into 3D scanning.  I chose a squat, spouted object for my first project.


The phone app is a huge help in terms of starting out with lens distance and the number of photos required for a baseline.  There is a 70-photo cutoff, so don’t get too enthusiastic if you’re using the app.  But if you follow their model (the blue segments), you get the right number of photos with a sufficient amount of overlap to construct the 3D render.

[Screenshot of the 123D Catch smartphone app]
This is a sculpture from home, nothing so fancy as an Ackland object. But it lets you see how the app works.

But it’s certainly a lesson in patience.  There are several stages of upload that require users to dredge up the patience we used to exercise when dial-up was our only option.  JJ supposed that the wait time might be part of the way the site processes the photos—maybe it’s constructing the model as it uploads the photos.  If you have internet connectivity issues (the struggle is real, even on our university network), don’t worry too much—I walked too far from the building while it was ‘Finishing,’ and the app asked me if I wanted to continue on my data plan.  Even still, it stayed on that ‘Finishing’ screen for hours.  I finally just left the phone alone and went to bed.  When I opened the app the next day, it offered me the option of publishing the upload, which went speedily.  So lesson learned: don’t attempt a project under time pressure.

The Autodesk application is meant as an intro to 3D scanning and potential printing (123D Make).  So there’s a limit to how much you can edit the model after it’s been uploaded.  Pretty much all you can do is add tags and a description; there are no tools to manipulate the model itself.  You can download several different file formats, though.  It also allows you to share the project on various social media platforms or embed it (as seen in the first image).  If you’re new to 3D scanning, this is definitely the way to start.

Agisoft PhotoScan (& Sketchfab)

PhotoScan, on the other hand, is better either for beginners with the option of a tutorial or for those with a little more familiarity with image manipulation software.  The learning curve isn’t as steep as with Adobe products, however, since the basics of what you need to do (in the order they need doing) show up in the ‘Workflow’ dropdown of the menu.  As you complete each step, the next steps are no longer greyed out.  For each task completed, more of the toolbar is made available to you.  For example, once the photos are aligned, the dense cloud constructed and the mesh built, you can view the various underlying structures of the model.  To help fill in the gaps, you can use Tools>Mesh>Close Holes and the software will interpolate what’s missing.  The best part, though, is that the default settings for all of the stages result in a respectable model that renders non-reflective surfaces and textures beautifully.  For example, the model we constructed during class involved a brick wall, and it really did look like an actual brick wall by the time we finished, and just on the default (medium) settings.  The only caveat: make sure you’re working on a computer with a good bit of memory behind it—the speed and capability of the processing is limited otherwise.
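As an aside, the Pro edition exposes this same Workflow sequence through a built-in Python console.  I haven’t scripted it myself, so take the sketch below as a rough outline only; the method names follow the PhotoScan 1.x scripting reference, and names or defaults may differ in your version:

```python
# Rough sketch of the Workflow menu steps as a PhotoScan (Pro edition) script.
# Method names follow the 1.x Python scripting reference and may differ by
# version -- consult the current reference manual before relying on this.
import PhotoScan

doc = PhotoScan.app.document
chunk = doc.addChunk()
chunk.addPhotos(["IMG_0001.JPG", "IMG_0002.JPG"])  # placeholder file names

chunk.matchPhotos()      # Workflow > Align Photos (matching stage)
chunk.alignCameras()     # Workflow > Align Photos (alignment stage)
chunk.buildDenseCloud()  # Workflow > Build Dense Cloud
chunk.buildModel()       # Workflow > Build Mesh
chunk.buildUV()
chunk.buildTexture()     # Workflow > Build Texture

doc.save("project.psz")  # placeholder project name
```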

Once you have the model as you like it, you can use image editing software to alter the aesthetics of the model.  DO NOT make any alterations (including cropping) to the photos in the process of taking them or before aligning them.  Once the photos are aligned, you can crop them in the program to hone in on the object.  To adjust the amount of light and the sharpness, export the material file and .jpg.  Edit the .jpg in your image editing software (it will look like an unrecognizable blob—just go with it) and then feed it back into the model for a reapplication of the texture by going to Tools>Import>Import Texture.

Once the model is perfected, you have several export options.  Did you know that Adobe Reader lets users move around a 3D model?  It’s a fun and approachable option.  The Wavefront .obj option allows you to save without compression, though the file size is large.

[Screenshot of my experiment with PhotoScan]

For this model:

The photos that align properly show up with green check marks.  I think part of the issue with this model is that I uploaded the photos slightly out of order, thus missing the full scan of the little Frenchman’s beret.  That, the uneven lighting and the poorly contrasted background all contributed to the editing problems.  If the model were better, it would be worth spending time with the magic wand or lasso tools to edit closer to the sculpture.  For lighting, try to get neutral lighting that casts minimal shadows.  Bright or harsh lighting is not recommended.  If you’re shooting outdoors, aim for cloudy days.

Sketchfab is a means of publicly sharing projects from 3D rendering programs.  If you have paid for an Agisoft account, you can upload projects directly to Sketchfab.  My account is super boring, since the free trial of PhotoScan won’t allow file export or saving.  But RLA Archaeology has a fascinating and active account, so take a look there for a sampling of what Sketchfab has to offer as a public gallery of projects.

3D Printing

The process of creating the scan and watching such a textured image develop was gratifying—that scan being the one that we created in class with the brick, and not reproduced here.  I’m certain that watching the model go through the 3D printing process would be equally fascinating.  But the final product may be less satisfying.  Most printers produce those heavily ringed models that require sanding before they look and feel more like a whole—rather than composite—object.

What’s really cool about 3D printing, though, is the assortment of materials you can print with.  For example, the World’s Advanced Saving Project is building prototypes for affordable housing 3D printed from mud.  Some artists are constructing ceramics from 3D models using clay (basically more refined mud).  Still others are using metals for their work.  The possibilities for art, architecture and material culture are overwhelming.

Folksonomies in the GLAM context

Folksonomies, also known as crowd sourced vocabularies, have proved their usefulness time and again in terms of deepening user experience and accessibility, especially in cultural heritage institutions.  Often termed GLAMs (Gallery, Library, Archive and Museum), these institutions can use folksonomies to tap into the knowledge base of their users to make their collections more approachable.

For example, when one user looks at the record for Piet Mondrian’s Composition No. II, with Red and Blue, she may assign the tags colour blocking, linear and geometric.  Another user may tag the terms De Stijl, Dutch and neoplasticism.  By combining both approaches to the painting, the museum ensures that those without contextual knowledge have just as much access to its collections as those with an art historical background.  Linking tags can allow users to access and search collections at their needed comfort level or granularity.  It also frees up time for the employees handling those collections, since they can depend—at least in part—upon the expertise and observations of their public.
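Under the hood, combining those two users’ tags really just means building an index from tag to object.  A toy sketch (the record ID and tags come from the Mondrian example above; a real GLAM system would map this onto its own cataloguing schema):

```python
from collections import defaultdict

# Toy sketch: merge tags from multiple users into a tag -> objects index.
# The record ID and tags come from the Mondrian example above; values are
# illustrative only.
user_tags = {
    "visitor_1": {"composition-no-ii": ["colour blocking", "linear", "geometric"]},
    "visitor_2": {"composition-no-ii": ["De Stijl", "Dutch", "neoplasticism"]},
}

tag_index = defaultdict(set)
for tagger, records in user_tags.items():
    for record_id, tags in records.items():
        for tag in tags:
            tag_index[tag.lower()].add(record_id)

# A search on either a lay term or an art-historical one now finds the work.
print(tag_index["geometric"])      # {'composition-no-ii'}
print(tag_index["neoplasticism"])  # {'composition-no-ii'}
```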

The diversity of tags also increases findability for educational uses of online collections.  If an elementary school teacher wants to further her students’ understanding of movements like cubism, she can just open up the catalogue of her local museum to explore the paintings tagged with ‘cubism.’  This becomes ever more important as field trips grow less available to public schools and larger class sizes require more class prep than ever.  With a linked folksonomic vocabulary, the teacher need not fight for the field trip nor dedicate even more time to personally constructing a sampling of the online collection to display.

Crowd sourcing knowledge can also go beyond vocabularies and prove especially useful in an archival context.[1]  When limited information is known about a collection or object and those with institutional knowledge are unavailable (a persistent problem plaguing archives), another source of information is needed.  The best way to find one is to tap those who would have the necessary expertise or experience.  For example, Wellesley College’s archive began including digitized copies of unknown photographs from the archive in the monthly newsletters emailed to alumni.  With those photos, the archive sent a request for any information that alums could provide.  In this way, the archive has recovered a remarkable amount of knowledge about historical happenings around the college.

But folksonomies and other crowd sourcing projects are only useful if institutions incorporate the generated knowledge into their records.  Some gamify the crowd sourcing process in order to engage users, but then lack the follow-through to incorporate the results.  Dropping the ball in this way may be due in part to the technical challenges of coordinating user input with the institution’s online platform.  It may also stem from the same fear that many educators hold for Wikipedia: What if the information provided is WRONG?  Fortunately, both anecdotal and research evidence is proving those fears largely unfounded.[2]  The instinct to participate with institutions in crowd sourcing is a self-selective process, requiring users to be quite motivated.  Those interested in that level of participation are going to take the process seriously, since it does require mental engagement and the sacrifice of time.  In terms of enthusiastic but incorrect contributions, institutions may rest assured that the communities that participate in crowd sourcing efforts are quite willing to self-police their fellows.  If something is wrong or misapplied (eg Stephen Colbert’s Wikipedia antics), another user with greater expertise will make the necessary alterations, or the institution can step in directly.

GLAMs are experiencing an identity shift with the increased emphasis on outreach and engagement.[3]  The traditional identity of a safeguarded knowledge repository no longer stands.  GLAMs now represent knowledge networks with which users can engage.  If obstacles hinder that engagement, the institution loses its relevance and therefore its justification to both the bursar and the board.  These open sourced projects can break down the perceived barriers between an institution and its public.  Engaging with users at their varying levels and then using that input shows that the institution envisions itself as a member of that community rather than a looming, inflexible dictator of specific forms of knowledge.  Beyond that, though, institutions can use all the help they can get.  Like Wellesley’s, an organization may be lacking in areas of knowledge, or it may not have the resources to deeply engage with all of its objects at a cataloguing or transcription level.  Crowd sourcing not only engages users, but it also adds to an institution’s own understanding of its collections.  By mining a community’s knowledge, everyone benefits.


From a more data science-y perspective: Experimenting with folksonomies in an image based collection

For a class on the organization of information, we read an article covering an experiment analyzing the implementation of user generated metadata.[4]  For those in image based institutions looking to build on the attempts of others in the field of crowd sourcing, this experiment is a solid place to begin.  Some alterations, however, may prove helpful.  Firstly, I would recommend a larger pool of participants from a broader age range, at least 25-30 people between the ages of 13 and 75.  This way the results may be extrapolated with more certainty across a user population.  Secondly, for the tag scoring conducted in the second half of the experiment, I would use the Library of Congress Subject Headings hierarchy in tandem with the Getty’s Art & Architecture Thesaurus, so as to compare the user generated data to discipline specific controlled vocabularies.  Retain the two tasks assigned to the users, including the random assignment of controlled vs. composite indexes and the 5 categories for the images (Places, People-recognizable, People-unrecognizable, Events/Action, and Miscellaneous formats).  In terms of data analysis, employ a one-way analysis of variance, since it provides a clear look at the data, even accounting for voluntary search abandonment.  By maintaining these analytical elements (the one-way ANOVAs and the scoring charts), it would be easy enough to compare your findings with those in the article to see if there’s any significant difference in index efficiency (search time or tagging scores) between general cultural heritage institutions and GLAMs with more image based collections.
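For anyone replicating the analysis, the one-way ANOVA itself is a one-liner once the search times are grouped by index type.  A minimal sketch with made-up numbers, assuming SciPy is installed (the real experiment would feed in the recorded search times and handle abandoned searches however the original protocol specifies):

```python
from scipy import stats

# Made-up search times (in seconds) per index type -- purely illustrative.
controlled_index = [42.1, 38.5, 51.0, 47.3, 44.8]
composite_index = [35.2, 31.8, 40.4, 33.9, 37.6]  # controlled vocab + user tags

f_stat, p_value = stats.f_oneway(controlled_index, composite_index)
print("F = {:.2f}, p = {:.3f}".format(f_stat, p_value))
# A small p-value would suggest the index type makes a real difference in
# search time; with only two groups this is equivalent to a t-test.
```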


Google Maps as a means of enacting digital humanities

For our second assignment in JJ’s Digital Art History course, we created a map that would incorporate multiple layers and associated multimedia.  For my map, the layers represent the places in which I’ve lived.  The individual locations and routes represent oft frequented or well enjoyed locations.

The England layer is far more detailed, with supplemental information from the addition of attributes to its data table (Link and Time period).  It also holds more images and even a YouTube video (see the Bodleian Library pin).

The only real trouble I encountered was the number of layers the map would accommodate.  When trying to add another route, the option was greyed out, informing me that I’d reached my limit.  Other than that, the application was primarily intuitive and allowed me the freedom to manipulate it as I wished.

Post-publishing realization: The map automatically adds photos associated with the location after the photos (and singular video) that I linked to it myself.  I’m working to figure out how to disable these media that aren’t mine.

A Reaction to Image Digitization Standards

Instructional Resources for Digitization:

  • Besser’s Introduction to Imaging
  • “Becoming Digital”
  • Library of Congress digitization guidelines
  • FADGI guidelines

After reading through suggested standards for scanning processes based on each type of material or object, I can thoroughly understand why institutions—despite enthusiasm for access and digital humanities—might shy away from long term and collection-wide scanning projects.

The Harry Ransom Center recently launched a digitization project entitled Project REVEAL (Read and View English and American Literature).  As an institution with astounding resources available to them, the Center had the luxury of approaching the project as a model of ideal digitization workflows for future scanning endeavors both in-house and in the wider community.  Part of Project REVEAL’s objective was to scan twenty-five of the HRC’s English & American author collections in a way that respects original order and disregards the traditional approach of only digitizing collection highlights.  Going box by box, folder by folder, and item by item, the project digitized and provided detailed metadata at the most granular level.

Most organizations, however, do not have the time, finances, nor support to accomplish such a dedicated attempt at digitizing their collections.  Collections may not be processed to the item level.  Oversized or delicate items may require more specialized handling and scanning than the facility can accommodate.  The money supporting the digitization may only stretch far enough to allow the scanning of collection highlights.

To experiment with just how long a high quality scan can take, I digitized a proofing press run of a block from Wellesley College’s Book Arts Lab collection on my HP Photosmart C4700 flatbed scanner.  The 3×4 inch scrap of card, scanned at 4800 dpi, took 1 minute to scan and save.  While my flatbed is aged and amateurish compared to newer, professional grade scanners, items scanned at a high enough dpi to withstand being resized and to accommodate zooming features take a marked amount of time, which increases many times over when digitizing larger items.  Then there’s the concern of file size and type.  I used a .jpeg format, since my purposes don’t require an uncompressed, high quality .tiff.  But any archival purposes would require that caliber of format.  The opening and downloading of higher quality files is its own time commitment and can severely slow down the loading of the image or the webpage in which it’s embedded, even though the excellence of the scanned image is worth the wait.
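The storage side of that trade-off is easy to estimate before you ever hit scan: an uncompressed image is roughly width × dpi × height × dpi × bytes per pixel.  A quick back-of-the-envelope sketch for the card described above:

```python
# Back-of-the-envelope estimate of an uncompressed scan's size.
width_in, height_in = 3, 4   # the proofing-press card above
dpi = 4800
bytes_per_pixel = 3          # 24-bit colour (8 bits per channel)

pixels = (width_in * dpi) * (height_in * dpi)
size_mb = pixels * bytes_per_pixel / (1024 ** 2)
print("{:,} pixels, ~{:,.0f} MB uncompressed".format(pixels, size_mb))
# Roughly 276 million pixels and about 790 MB before any compression --
# which is why the choice between a .jpeg and an archival .tiff matters so much.
```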

[Scanned image: senior print]
This cut illustrates Wellesley seniors’ hoop rolling tradition.  For more on how scanning setting influences image quality, see “The Image” section of Besser’s Introduction to Imaging (linked at beginning of post).

While these considerations might discourage most organizations from keeping up with the standards the HRC laid out in Project REVEAL, the project’s imaging and metadata standards are still worth treating as a benchmark.  And they don’t differ much from the standards laid out by the Library of Congress, FADGI or Besser’s recommendations.  Without a quality image available for easy, detailed viewing (and potentially for download), a digitization project loses all significance.

These quality images don’t mean much without their metadata, though.  Current technology remains incapable of auto-populating metadata fields better than a human viewer.[1]  Consequently, to compare or search image files, one needs sufficiently descriptive metadata.  Project REVEAL, under ‘Object Description,’ includes the fields Title, Creator, Date, Description, Subject (includes medium and format details), Subject, Language, Format, Extent (eg number of manuscript pages), Digital Object Type, Physical Collection, Collection Area, Digital Collection, Repository, Rights, Call Number, Series, Identifier, Finding Aid and File Name.  For description of the image, rather than the object, Project REVEAL includes the categories of Title and File Name.  The metadata sections also provide menus for Tags and Comments, which allows for user participation in making the object even more relevant to viewers.  As an archival institution, the HRC emphasizes metadata representative of their storage structure, such as collections and series identifiers.  While the HRC modified their metadata structure for their purposes, the Library of Congress lays out general guidelines for digital material metadata that follow similar lines:

[Screenshot: Library of Congress metadata guidelines]
For more, click on the Library of Congress link at the top of this post.
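Translated into something machine-readable, a single object record is essentially a set of key-value pairs over the Project REVEAL fields listed above.  A sketch with invented placeholder values, just to show the shape of the thing:

```python
# Sketch of one object-description record using the Project REVEAL fields
# listed above. All values are invented placeholders, not real HRC data.
object_record = {
    "Title": "Letter to an unnamed correspondent",
    "Creator": "Example Author",
    "Date": "1923-04-17",
    "Description": "Single-page autograph letter.",
    "Subject": ["Correspondence", "Manuscripts"],
    "Language": "English",
    "Format": "Manuscript",
    "Extent": "1 leaf",
    "Repository": "Harry Ransom Center",
    "Rights": "See repository for rights information",
    "Call Number": "EX-0000",        # placeholder
    "File Name": "ex_0000_001.tif",  # placeholder
}

# Missing fields are easy to flag before records go online.
required = ["Title", "Creator", "Date", "Rights", "File Name"]
missing = [field for field in required if not object_record.get(field)]
print("Missing required fields:", missing or "none")
```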

More information on metadata standards will come in a later post.

Digitization standards hold relevance beyond the institutional level, too.  Individuals establishing an archive—such as visual artists, writers or genealogists—also require workflows for the digitization of their materials.  While both Introduction to Imaging and “Becoming Digital” are oriented towards organizations, their suggestions can be instructive for project planning at an at-home level.  When looking to digitize personal materials, the highest quality images might not be necessary (especially since materials at the highest resolution take quite a while on at-home scanners, as illustrated by my own experimentation).  But Introduction to Imaging‘s chapter on “The Image” is supremely instructive in identifying what an individual might require for their digitized images to serve their needs.  The subsections on Bit Depth, Resolution and File Format illustrate what one will get out of using various levels of depth, size and compression.  By literally illustrating the limits and advantages of each aspect of the scanning process, Besser saves individuals’ time and storage by allowing them to see what those images can do at each level—one need not reinvent the wheel in taking the time to experiment with all of the combinations of aspects personally.

For those working with text-heavy digitized objects, OCR (Optical Character Recognition) might be an avenue to consider before setting off on a digitization project.  Unfortunately, even neatly written handwritten materials don’t respond well to OCR programs, nor do items with variable formatting and fonts (like newspapers or mathematical treatises).  But if your materials are typewritten—such as manuscripts produced on a typewriter—OCR might be an avenue to explore.  “Becoming Digital” explains the OCR evaluation process quite well, including a breakdown of programs’ efficacy and price points, in its chapter on “How to Make Text Digital.”  Impact’s Tools & Resources page on Tools for Text Digitization provides an even more in-depth breakdown of the options out there, though it does not provide pricing information.
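If you want to test whether OCR is even worth pursuing for your materials before spending money, one quick check is to run a sample page through a free engine like Tesseract (not one of the tools evaluated in the resources above) and eyeball the error rate.  A rough sketch, assuming the Tesseract engine plus the pytesseract and Pillow packages are installed:

```python
# Rough feasibility check: OCR one sample page and skim the output.
# Assumes the Tesseract engine and the pytesseract + Pillow packages are
# installed; "sample_page.tif" is a placeholder file name.
from PIL import Image
import pytesseract

text = pytesseract.image_to_string(Image.open("sample_page.tif"))
print(text[:500])  # skim the first few hundred characters for obvious garbage

# If a clean typescript page comes back riddled with errors, thorough
# descriptive metadata is probably the better investment.
```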

While I can see the appeal of OCR for a larger typewritten collection that needs to be searchable down to the phrase, it seems less useful in virtually any other context, since the price of the software, the time for processing and the effort of verification/correction result in an imbalance between energy input and product output.  An answer to the instinct to OCR materials might instead be thorough description in the metadata fields of Description, Subject and Tags.

For more examples of digitization projects, take a look at sites like the Digitization Projects Registry to see how various organizations are implementing these technical suggestions.

[1] MIT recently developed a program that could recognize whether a painting was in the style of cubism, but it’s far less expensive to have a practiced eye tell you that in the same amount of time than to ask a computer to run the program, which isn’t equipped to identify every object, style or variation therein.