From the Archive: Collecting Digital Art and Design

I’m reposting these talk-and-conference reviews from 2010, as the issues covered are now so topical. With Louise Shannon recently made Curator of Digital Design at the Victoria and Albert Museum, debate continues around what museums might collect from the field of digital art and design, and how it should be conserved (MoMA’s Architecture and Design Department set the pace with its video game acquisitions, which I spent a fun afternoon playing on my last visit). Louise co-curated the seminal show, Decode: Digital Design Sensations, staged in the Porter Gallery, and hearing one of the co-curators of the latest show to open there, “Sky Arts Ignition: Memory Palace”, on BBC Radio 4’s “Start the Week” (17/6/13), made me think about how we might be “reneging on remembering” by relying on digital storage to save our real-life memories. But what happens when digital formats deteriorate and/or become obsolete, and those files are no longer accessible? Check back for my review of Memory Palace, coming soon…

John Maeda in conversation with Alice Rawsthorn
Attended 2 February 2010

Decoding the Digital Conference
Victoria and Albert Museum, London
Attended 4-5 February 2010

John Maeda talks at V&A

Monday 7:01pm, 22 February 2010
“Discussing the digital”
by Liz Farrelly
Originally published on Eye Blog

John Maeda on dirt, de-cluttering and the power of art

Even though John Maeda wasn’t speaking at the conference, “Decoding the Digital”, staged alongside the exhibition Decode: Digital Design Sensations, his work and words (from this lecture earlier in the week) were referenced during the proceedings.

That Maeda’s computer-generated, algorithmic film “Nature” features in the exhibition, and that he is President of the Rhode Island School of Design (RISD), which offers 50/50 design and fine art courses, made his appearance doubly relevant to a conference where algorithms (the generation of images via computer code) became the “default” leitmotif, and the art and design divide felt wider than ever.

Maeda was witty and inspirational while suggesting that we could all benefit from a mental de-cluttering (see The Laws of Simplicity, MIT Press, 2006). He also proclaimed that art and design will save the economy (didn’t Henry Cole think that too?), but only if we convince business leaders that divergent thinking is more useful than trying to convert every idea into a product line. S.T.E.M. (science, technology, engineering, maths), the mantra of education and commerce, should become S.T.E.A.M., with the addition of art.

Using word games, Maeda finds significant acronyms to add a little mysticism to his decision making process. He also advocated dirt, chance and failure, revealing that when he shows images of art students’ inked-up hands at conferences, computer geeks in the audience assail him with pleas of “we want to get dirty too”. Finally, Maeda reminded us that, “artists aren’t afraid to fail”, because, of course, that’s how we learn.

Decoding the Digital Conference V&A Day One

Thursday 5:01pm, 25 February 2010
“The Wm. Morris code”
by Liz Farrelly
Originally published on Eye Blog

Finding digital roots in the Arts and Crafts Movement

The first day of the “Decoding the Digital” conference was ground-breaking, welcoming a new V&A collection, and pointing up areas of art and design history still in need of writing.

Much scholarship was evident. Charlie Gere (Chair of Computers and the History of Art at Lancaster University) opened the discussion by tracing aesthetic, methodological and philosophical roots back to Descartes and William Morris, adding that perhaps the complexities of Medieval art and its reincarnation in the Arts and Crafts Movement are where we could look for the genesis of computer art, not the avant-garde schools of the twentieth century.

Frieder Nake (an early practitioner and Professor of Interactive Computer Graphics at the University of Bremen) provided the conference with its best sound bites, calling for a new methodology to chart the “archaeology of digital art”, because we can’t “dig it up”. He ran through exhibitions and publications that established the genre, from the 1950s onwards, pinpointing the ICA’s “Cybernetic Serendipity” (1968) as epoch making. His fundamental message was that “innovative art doesn’t come from photography or computers, you have to dominate your tools as an artist, because material always resists … the algorithm is the material in digital art so you must manipulate them”. His call to “code” (used as a verb) was followed by a plea to “decode”, adding, “these are beautiful images, but what is behind it? Decode it”.

During the conference there was much talk of algorithms, as many of the speakers, the pioneers of computer art, were code-writing aficionados of generative programming. This focus caused murmurs from the audience, which finally found voice during a Q&A session, when Ghislaine Boddington of Body Data Space pointed out that there are many other forms and formats of computer-enabled practice: performance and installation, film and animation, screen- and Web-based interactivity, robotics and 3D prototyping, plus applications within games and activism.

Roman Verostko described his fascinating career. Starting out as a painter who combined “hand gesture and the careful placement of elements”, he aimed to translate the “art idea to guide a machine”. He likened writing code to writing a score for drawing. Then he showed his work, simultaneously gestural and controlled, random and planned, created using multi-pen plotters. He also pointed to examples by his contemporaries in the Digital Pioneers exhibition (at the V&A), curated by Douglas Dodds and Honor Beddard.

Day One focused on the ground-breakers who used computers as “a tool or medium from the 1950s to the 1970s”, and included an introduction by Dodds and Beddard to the newly acquired “National Collection of Computer-Generated Art and Design”, now housed at the V&A. Much of the early material on display was sourced from a sizeable donation from Patric Prince, art historian and curator of many SIGGRAPH exhibitions.

Prince explained how she amassed her collection over several decades, initially sparked by an information void about this new art form. She was drawn to it because she liked “hard-edged” art. She urged: “collect something because you love it, not because it’s going to become valuable”.

Another art historian and collector, Anne Morgan Spalter, named this confluence of conference and exhibition “the beginning of the beginning of recognition for computers in the visual arts”. She elaborated: this is revolutionary, but it isn’t obvious just by looking; to properly judge computer art, you need to know how it is created. But, she explained, because of the divide between art and science (as recognised by C.P. Snow), it’s difficult for art’s audience to understand just how significant it is. In response to a question, Spalter and Prince explained that this history not only needs writing, but these objects need conserving. As hardware and software – printers, platforms and entire computer languages – break down, become obsolete, and are superseded, this pioneering work needs protecting.

A father and son team closed the first day. Paul Brown is featured in the Digital Pioneers exhibition, while his son Daniel Brown provided Decode with its frontispiece: the artwork On Growth and Form is a continually blossoming, fantastical plant, mapped with patterns culled from the V&A’s collection.

Paul Brown recounted how he worked in the 1970s with the Systems Group at the Slade School of Art. It may have been ignored by the art world, but he later discovered it was well known to research scientists, who recognised that artists were thinking outside the box, investigating computers in ways disallowed in science departments. The art and design divide was explored, with Daniel Brown only differentiating between art and design commissions when it comes to the fee scale. He offers his “Generative Aesthetic” to both cultural and commercial clients.

Decoding the Digital Conference at V&A Day Two

Friday 5:37pm, 26 February 2010
“Back to the future”
by Liz Farrelly
Originally published on Eye Blog

Day two of the V&A’s “Decoding the Digital” kicked off with a change of scene. Graphic design, branding and advertising were centre stage, with filmmaker Johnny Hardstaff in conversation with Shane Walter of onedotzero. Here was the first mention of gaming, as Hardstaff showed extracts from his two short “manifesto” movies, History of Gaming and Future of Gaming, the second of which was commissioned by Sony PlayStation in 2001. Trained as a print designer, Hardstaff revealed how he’d pushed the limits of available software, by making movies using Photoshop. He suggested that software upgrades are prompted by such experiments (Photoshop now includes animation functions).

Hardstaff’s dystopian vision of the future shocked and awed his commercial patrons, who publicly distanced themselves from the movie, while off the record, “they loved it”. The irony was glaring; a major corporation, which facilitates ultra violence in the form of “shoot ‘em up” games, was reluctant to validate a fictionalised account of the games industry that revealed undercurrents of control and exploitation. The fact that those clients “paid very little”, but allowed Hardstaff to retain copyright, meant the movie became his calling card, parlayed into subsequent commercial collaborations. He added: “Advertising is imploding in a delicious way, where consumers are actually getting what they want and artists are getting to do their own thing”. The message being, use this new patronage for your own ends.

In the question session, Hardstaff pointed out that gaming owes its origins to missile tracking systems, adding: “I’m on the cusp of finding it repellent and exciting…I like to take the visual language of systems that people in power use and wield them myself”. Charlie Gere added that Hardstaff’s movies were not simply representation, but “a new way of constructing meaning”. Hardstaff explained his method as “collating, collage, sampling and feedback”, labelling it “Craft Plus”.

Casey Reas of UCLA’s Department of Design Media Arts, and another Decode exhibitor, explained Processing, which he developed in partnership with Ben Fry in 2001. A show of hands revealed that this simple-to-use, open-source language had been used by the majority of the audience. Reas uses it to “develop my own world, with no relationship to the worlds we know”. To disseminate this DIY language to an ever wider audience, though, he’s using good old-fashioned book publishing, and writing ever more accessible instructions, in the hope that one day coding will be the new literacy.

During questions Reas elaborated on an earlier point; how do you choose one image to exhibit from the multiple, drawn iterations of an algorithm? He suggested that the looping and phasing of computer-generated imagery is closer to the performance of a musical score than a unique work of art, declaring: “I relate more to musicians than to visual artists”.
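Reas’s point about iterations is easy to demonstrate. Processing itself is a Java-based language, but the underlying idea can be sketched in a few lines of any language: one fixed algorithm, re-run with different seeds, yields an open-ended family of distinct “drawings”, each a repeatable performance of the same score. The sketch below is purely illustrative (the function name and the random-walk rule are my own hypothetical choices, not anything shown at the conference):

```python
import random

def iteration(seed, steps=8):
    # One "performance" of a single generative rule: a 1-D random
    # walk whose shape is determined entirely by the seed.
    rng = random.Random(seed)
    x = 0.0
    path = []
    for _ in range(steps):
        x += rng.uniform(-1.0, 1.0)
        path.append(round(x, 2))
    return path

# The same algorithm run three times: three distinct results,
# yet each one is exactly reproducible from its seed.
for seed in (1, 2, 3):
    print(seed, iteration(seed))
```

Because each seed reproduces its result exactly, the artist’s problem Reas described remains a curatorial one: the algorithm can generate endlessly, so choosing which iteration to hang on a wall is an act of selection, not of making.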

Karsten Schmidt followed, thanking Reas for Processing, which he has incorporated into a palette of software used to create moving, interactive imagery, including the identity for Decode: Digital Design Sensations. He talked about how design is moving from “objects to operations”. Questioned as to why “all this ‘art’ looks beautiful, where is the ‘bad’?”, Schmidt declared: “…this is so young, we’re only scratching the surface of what’s possible”, while curator Louise Shannon pointed out “…data can be troubling, unnerving”. Again, music was mentioned, as “evoking more powerful emotions, while art evokes aesthetics; no painting ever affected me like music does, but this (meaning computer art) could reach the same impact”, declared Schmidt.

Next, curators Louise Shannon and Shane Walter took the stage to discuss the long and involved process of organising the exhibition, Decode: Digital Design Sensations. Walter went on to describe the work of onedotzero, which he co-founded in the mid 1990s. A hybrid of travelling film festival and “culture club”, onedotzero has brought digital creativity to a worldwide audience, far beyond the art ghetto. This is unashamedly entertaining! And while successive speakers had asked, “what is the cathedral of this new art form”, Walter dared to suggest it may be U2’s stadium tour stage set (commissioned from onedotzero industries), which pulsated with interactive, reactive lights forming and reforming pattern and image. The achievement of onedotzero poses a fundamental question; these digital displays evoke the notion of spectacle, so perhaps judging it in relation to art, the understanding of which demands contemplation and perhaps translation, is missing the point. Instead, this affects the body and the senses, offering new experiences. In short, it awes.

Beryl Graham is Professor of New Media Art at the University of Sunderland, and a member of CRUMB (Curatorial Resource for Upstart Media Bliss). She brought us up to date with innovative initiatives, back in the art ghetto. Her remit is to help curators understand computer art, the better to judge, exhibit and promote it; “if they can understand the process of bronze casting, they can understand the technical details of code”. Going back to the 1970s she revealed how what was once deemed beyond the pale, and therefore not to be shared with art audiences, was later reconsidered and proved popular. Interactivity in galleries is fundamentally feared though, as “participation is beautiful, but it’s difficult…it’s out of control”. If interactivity tests institutions, and the evidence shows that the worst didn’t happen, then, she suggests, the rules will change.

Changing rules were explored further by Hannah Redler, the Science Museum’s resident art curator. Since 1997 she’s facilitated more than 70 commissioned arts projects, many of which were initially considered “display material”. She’s managed, however, to “collect” many of those commissions by lobbying for changes to the museum’s accession “politics”. Now those commissions will be conserved, and once protected, may be re-exhibited in the future. But, she pointed out, artists aren’t always co-operative when it comes to extending the longevity of an artwork, “they don’t necessarily want to rewrite code for new systems software”. She suggested that computer art should be collected, “as an animal, not an object”, meaning, constant care is needed, rather than simply wrapping it in cotton wool and storing it in an acid-free box. So, the Science Museum has instigated Team Media, a cross-departmental initiative to audit the components of each artwork so it may be “re-ignited”. They’ve also recognised that asking artists to write bespoke software provides better insurance against obsolescence.

The last speaker, Bruce Wands of New York Digital Salon, and Chair of the MFA in Computer Art at the School of Visual Arts, sees a future where the word “digital” is submerged, where even more new art forms will mix with traditional means, and it all just becomes art, again. The “art space” though, will have expanded beyond even the art and design divide, to become embedded in our homes and cities; much like onedotzero’s immersive and experiential approach, perhaps. He’s basing his predictions on “34 years of research”, having logged the evolution of computer art and design across higher education, cultural institutions and the commercial realm.

So, the elephant in the room was finally addressed. Dropping the word “computer” implies that this work be judged on a level playing field, as art or design. But it’s evident that those judging it need to understand how it is made: the processes, the context, and the issues unique to this form. So for now, perhaps it’s more useful to keep the nomenclature, so as to identify that need for scholarship and understanding. Verostko’s career gave a clue as to why, at this conference, there was a fascination with the algorithm. The first generation of computer artists worked concurrently with Abstraction. And although much of that early work was exhibited alongside a set of written instructions or even code (which would place it closer to Conceptual Art), appreciation of the surface – colour and pattern – appeared to take precedence over an understanding of the method of production.

At times during the conference, I felt as if I’d been transported back to the early 1990s, when fears were voiced about the role of computers in the design process, principally by graphic designers. Will everything look the same; will we be seduced by surface and image; will content and problem solving be sacrificed? The “computer aesthetic” was celebrated and fetishised in some quarters, but most fears were dispelled when users realised, it’s just a tool, a more powerful pencil. And the output must be judged, good or bad, like any other design. Designers are still exploring the possibilities of this tool, but the fear of losing control to the machine was dispelled a long time ago. It looks, though, as if those concerns are still being debated in the art world, along with issues regarding display, collecting, crediting (who gets the name check) and the institutional validation of a “new” art form.

[Check back to Eye Blog to see comments relating to this; I’m not going to “acquire” them for this page as the writers didn’t leave them on this blog]