Digital histories in Northumbria: a workshop

Screen Shot from The Discovery Museum website showing a permanent gallery where workshop delegates played on interactive exhibits.


Digital Histories: Advanced Skills for Historians
Northumbria University Newcastle and Tyne and Wear Archives and Museums
24-25 April 2014

Flying to Newcastle for a two-day workshop may seem an extravagant use of research time, but this AHRC-funded workshop (organised by Laura Hutchinson and André Keil of Northumbria University) promised to investigate issues around the digital humanities — archives, text, image, data and metadata — and examine a number of innovative projects into the bargain. Speakers, including a medieval scholar, a social media maven, community organisers and university- and museum-based IT consultants, were to discuss the implications of: putting archives online; striving for web- and museum-based interactivity; and crowd-sourcing projects that link institutions with volunteers.

The event spotlit concerns around digital and online cultural activity that will now inform my museum-based research. Delegates admitted to unease with unfamiliar material: text-based historians, comfortable working with online resources, be they newspaper archives or scanned records, confessed to lacking confidence when it came to image-based documentation. There also seems to be a conspicuous lack of art- and design-led projects within this digital arena, perhaps because art and design historians prefer to interact with objects and images offline.

Day One started with Tanja Bueltmann’s run-down of “opportunities and challenges” for research in the “digital age”. She endorsed the idea that digital media can increase engagement, but went on to question whether it might produce a new or different type of knowledge. She uses digitised newspapers as a source of information about migration and diaspora, and mentioned that the British Newspaper Archive, established with the British Library, plans to digitise another 40 million pages over the next decade. Tanja had some tips: be systematic about search terms; make a note of “hit rates”; and log variations between search engines. She recognised the challenge of organising material and taking digital photos, and suggested a naming protocol for saved documents and images: by year, date (always in the same format), newspaper reference (perhaps a shortened version of the title), and page number. Enter these into Excel spreadsheets, titled by the “search”. When saving images, copy the information across from spreadsheet to PDF file and create a text file too. Over time the digital researcher will identify patterns, but must take into account that various sources may have been available at different times. As a way of mitigating this partiality, Tanja advocated “sampling” rather than attempting to record “everything”, although she reiterated how crucial it is to be systematic “when attempting to contribute to the advancement of knowledge”. Digital data can help us understand historical developments, spot patterns in cultural trends and produce novel insights.
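Tanja’s naming protocol is easy to automate. The sketch below is purely illustrative (she presented no code): the field order follows her suggestion — year, date in a fixed format, a shortened newspaper title, page number — and a CSV log stands in for her per-search Excel spreadsheet. The file name, search term and newspaper abbreviation are invented for the example.

```python
import csv
from datetime import date

def record_name(pub_date, paper_abbrev, page):
    """Build a file name following the suggested convention:
    year, date (always ISO format), newspaper abbreviation, page."""
    return f"{pub_date.year}_{pub_date.isoformat()}_{paper_abbrev}_p{page}"

def log_hit(csv_path, search_term, pub_date, paper_abbrev, page):
    """Append one row per hit to a per-search log (CSV here,
    standing in for the suggested Excel spreadsheet)."""
    name = record_name(pub_date, paper_abbrev, page)
    with open(csv_path, "a", newline="") as f:
        csv.writer(f).writerow(
            [search_term, pub_date.isoformat(), paper_abbrev, page, name]
        )
    return name

# Hypothetical example: a hit in an abbreviated "NDC" paper, page 5.
print(log_hit("emigration_search.csv", "emigration", date(1887, 3, 14), "NDC", 5))
```

The point is consistency: the same fields, in the same order, in both the file name and the spreadsheet row, so hits remain matchable later.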

A delegate asked whether a “free” website is a reliable source; will it be properly maintained and updated? Tanja answered that most archives hosted by national and local government are reliable, but that errors in scanning and transcribing can occur, so cross-check with other sources or actual “paper” versions. Another question asked how “accidents and politics” might play a part in the formation of archives. Tanja flagged up issues around copyright and restricted access to commercial archives, and suggested mentioning these concerns as part of a research project’s Methodology, which might also discuss search tactics and caveats.

Katherine Krick of Durham University introduced her research into book production, explaining how she used digital and online techniques to study Medieval manuscripts. She also documented the process, which recorded advances in the means and methods used to archive original material. She introduced a number of online projects and sources for comparison, including DM2E, described on its website as “building the tools and communities to enable humanities researchers to work with manuscripts in the Linked Open Web”. Katherine suggested that “you can catch the attention of funding bodies” by including a digital/digitising element in a project. That got me thinking about how a design museum might generate a digitised archive by scanning trade magazines and sales catalogues through annual trawls of printed matter gathered at design weeks and trade shows. The idea of a “trawl” is not new; in the past the V&A’s Prints and Drawings department occasionally performed a “High Street” version to pick up on stylistic changes in packaging and Point of Sale (POS) material.

Katherine’s final takeaway stressed context: when digitising documents she wanted to see the gutters and margins, to get a sense of the quality and size of the parchment and the binding. In effect, she was saying, don’t forget that these are objects. Having experienced a flood at Durham Cathedral during her research, Katherine stressed that digitisation can insure against disaster, but nothing beats seeing the object itself.

Alun Edwards from University of Oxford’s RunCoCo initiative, AKA “the Oxford Community Collection Model”, offered insights into a continent-wide project, Europeana 1914-1918, which asks the public to contribute their family histories from the First World War, be they stories, photographs, films or material culture. The project links national libraries across Europe, making it possible to search partner organisations and view and use open source texts and images. The user-generated content is compiled, tagged, archived and searchable, while the project enables “public-engagement, education, and knowledge exchange”, all terms that are music to the ears of funding bodies eager to prove “impact” and earn budgets. Again, I began to imagine what user-generated but design-focused content might look like in the context of a design museum’s digital archive; perhaps designers would contribute sketch-books, mood boards, artwork, product shots and project documentation? Contextualising design objects has been a frequently voiced goal of design museums, but limitations on space and resources can curtail the best plans. A crowd-sourced initiative, using an online template and in-house moderation, could build into a substantial database, perhaps delineated by regional or national boundaries, so as to emphasise the distinct qualities of local scenes and industries.

Lateral thinking was one of Alun’s recommendations, with social media top of his list for generating “impact”. He suggested posting images on Flickr and Pinterest and releasing them to Wikimedia under a Creative Commons (CC) licence; such cross-platform interaction could encourage image reuse in blogs such as Retronaut, which might then lead viewers back to the main site. He even offered price comparisons: from £40 per digitised item for a university-run, subject-specific online archive, to £3.50 on a crowd-sourced platform that included “how to” roadshows prompting the public to upload. Alun pointed to Nina Simon’s influential book, The Participatory Museum, for more examples of this sort of “interaction”. Lasting impact, AKA the after-effects of such a project, was listed as: the up-skilling of museum staff and the participating public; the preservation of documents that would otherwise be stored or lost; accessible photographic records; an increase in donated original material; and a spike in visitor numbers, as irregular museum-goers turn into new audiences. When questioned about how to sustain such initiatives, Alun suggested: use the project to build a community, and then they will run it.

The next speakers, Alan Fidler and Steve Young of the Tynemouth World War One Commemoration Project, were in just such a position; they help run a community-based project with 70 volunteer researchers and a grant from the Heritage Lottery Fund. Converting a local newspaper column into an interactive website, the project uses crowd-sourced data to plot – street by street – the extraordinary contribution made by the communities of the North East to the First World War. In the process, the data also revealed the extent of economic migration from Scotland and Ireland to what was then a booming industrial region. The HLF requires the project to document outcomes and engagement, particularly with hard-to-reach groups.

The end-of-day round table raised concerns about digitisation’s potentially negative impact on archives, particularly at local government level. Despite the public’s appetite for archives, it was noted that government money is “drying up” and the actual places and spaces are “being decimated”. Local records offices, which previously converted to microfiche, are now being digitised. But if usage isn’t tracked, whether that be footfall or page-views, funding will be cut. As records are digitised, archives are being closed to the public, with access offered only online. It was suggested that the loss of primary sources significantly changes the concept of history. One delegate asked: “are we gatekeepers or commentators?” A final plea was made for joined-up networks, co-ordination across institutions, and for templates that could be used as “shell” documents to create consistent formats.

Day Two convened within the Victorian ironwork splendour of The Discovery Museum’s Great Hall. Appropriately, the first speaker was Chris Wild, the founder of Retronaut. Chris described his initial motivation for setting up the blog as “wanting to go back in time”, but creatively reinterpreted his wish into, “how close can I get?” Describing the project as a “photographic time machine” his curation aims “…to tear a hole in people’s map of time” by surprising viewers out of any complacent preconceptions. He recalled seeing the earliest colour photographs from 1907, and realised that he’d only ever imagined the past in black and white; the colour images made it feel contemporary! Admitting that Retronaut is for people who never visit archives and think museums are boring, Chris aims to find images that don’t fit our cosy “map of time”; to stretch, realign and develop that map. While praising “the shallow” he suggested that museum texts are too long (“the book on the wall syndrome”), but revealed that staff and volunteers “fill in the gaps” by attaching researched information, sourced from archives, to the “published” images.

At the time Retronaut was transitioning from a predominance of webpage views to increased activity on social media, with subscribers receiving daily emails and 24 pics a day via Twitter. Chris shared his curating “formula” – S.P.E.E.D. – while urging digital academics to use a bit more personal judgement. S is for Seen: the language of the Internet is images, and powerful pictures can earn a project masses of attention. P is for Positive: will a viewer get something out of it? The first E is for Easy to get; the second for Emotive. And D is for Disruptive: use an image that doesn’t fit our perception of the past. In praise of shallowness, Chris advocated the “shop window” approach, leaving it to viewers to delve deeper; helpfully, Retronaut points them to various archives.

The final speaker, Ian Johnson of University of Newcastle’s Special Collections, didn’t quite rain on our parade, but his talk, “Digital Pitfalls, an Archivist’s Perspective”, brought more issues to our attention. Positioning the digital archive at the intersection of information management and cultural heritage management, he made the point that a rare skill-set is needed to deal with every eventuality. Add to that the requirement for outreach, and you can stick “curator” in the job title too. He explained his role as: taking objects in, preserving them and getting people to use them. Digitisation presents once isolated objects to a community, which includes experts; presented without context, though, a collection reverts to ephemera. Ian flagged up the “headache” of collecting digitally created material, i.e., websites, as opposed to analogue material digitised within the archive; problems include preservation of the platform media and the difficulty of attaching or accessing metadata. Ian held up the BBC Domesday Project as an example of “how not to do” digital preservation, as it required a second project, launched in 2011, to salvage the original material collected in the 1980s. It was then discovered that copyright hadn’t been considered, so the material is not “free” to use until the end of this century. By way of contrast, Ian pointed to best practice embodied in the Hillsborough Archive, where digitisation brought material (used as evidence) back into the public domain and aided the judicial process.

Ian went on to discuss the need for funding bodies and institutions to future-proof digital archives with a “technical plan”, pointing to The National Archives’ Digital Preservation Policies: Guidance for archives. He also suggested investing in IT expertise, as there’s little point in scanning documents if they are not searchable (metadata tagging) or accessible (via a GUI, or Graphical User Interface). More practical advice: keep it legal! The ICO (Information Commissioner’s Office) can impose fines, and “Take Down Notices” can be issued for copyright infringement. Ownership means two things: of the physical object (which might reside in your museum or archive); and of the intellectual property (copyright of content). Confused? Read Tim Padfield’s Copyright for Archivists and Records Managers. Ian’s final takeaway: before digitising, try the Why/Who/What test. Ask: Why are we digitising? Who will use it? What will they use it for?

The last round table discussion, fuelled by an impressive biscuit selection, became a forum for speculation. We discussed the need for: input standardisation, data security and access protocols; a wider skill-set taught to “digital historians”, which could include software tutorials, coding and content capture (audio/video/photography); and a deeper understanding of visual methodologies, to read and interpret images and objects. Then delegates brainstormed “Final Thoughts” on giant Post-it Notes (very handy). My takeaway was “linking”… What information and context can be added to digital records? How can we ensure compatibility? Perhaps using metadata? How can digital material be accessed by and tailored to users with varied needs, means and perspectives? Perhaps through a matrix of enhancements and intertextuality? Looking back on the event now, having written up my notes, I’d suggest that inviting Retronaut along was an inspired move. It could be argued that Retronaut represents a contemporary real-world solution to these very issues, publishing as it does across Web, Email and Social Media. Perhaps starting from scratch, without being encumbered by the requirements of a physical archive, meant that it was able to focus specifically on digital problems as they cropped up, without the need to consider analogue activities. It didn’t have the weight of history or precedent to deal with…
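The “linking” question — what context can be attached to a digital record, and how compatibility across institutions might be ensured — is usually answered with a shared metadata schema. As a purely illustrative sketch (nothing discussed at the workshop), here is a Dublin Core-style record: the dc: field names follow the real Dublin Core element set, but the record itself and the example URL are invented.

```python
# A Dublin Core-style record: shared field names (dc:title, dc:date,
# dc:relation, etc.) are what make records from different archives
# comparable and linkable. The content here is invented.
record = {
    "dc:title": "Shop-front photograph, Grainger Street",
    "dc:date": "1907",
    "dc:format": "image/tiff",
    "dc:rights": "CC BY-NC 4.0",
    "dc:relation": [  # links out to related records and archives
        "https://example.org/archive/item/1234",
    ],
}

def related_items(rec):
    """Return the identifiers a record links to, if any."""
    return rec.get("dc:relation", [])

print(related_items(record))
```

The design choice is the point: because every institution uses the same element names, a record can point beyond itself, and an aggregator (Europeana works along these lines) can follow those links without bespoke translation.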

I took a chance on attending an event that initially seemed tangential to my project. With my research further along, I’m realising just how central digital communication and data management are to the future of museums. This, and the Big Data event at the British Museum (reviewed here) have provided me with access to a mass of useful information that I wouldn’t otherwise have come across, which is why I’m writing up my notes and observations in detail.

Thanks again for time well spent; this event was funded and organised by the AHRC and the Faculty of Arts, Design and Social Sciences at Northumbria University. The event was discussed on Twitter via a dedicated account, @digihistories, and the hashtags #digihist2014 and #dhist.

Looking towards Gateshead’s Baltic from Newcastle’s quayside.
