Interview with Lori Hepner
Lori Hepner was interviewed by e-mail on April 25; her exhibit opened on April 11 and remains on view through May 8, 2011. She is our final featured artist for the semester, but be sure to check out our SRO Photo Gallery Artists for 2011-2012.
What inspired you for the concept of this project?
I’ve been interested in digital technology for a number of years, both as a means to make artwork and as a way to look at culture. This work, Status Symbols, was started in a flash of inspiration that hit while I was learning to solder electronics for the first time at a local workshop. The concept was close to fully formed at the outset, at least in terms of what the aesthetics would be. I was interested in the layering of light that would wash together through long exposures of moving LEDs. I also knew that color negative film would be the best medium to use, as I wanted the build-up of color density in the highlights, rather than maxing out at pure digital white: 255, 255, 255. Since I had been using binary code as both a technique and a concept in previous bodies of work, the idea of turning the letters of a tweet’s text into a string of ones and zeros happened pretty naturally. The ons and offs of the LEDs would be the actual text of the tweet, character by character.
In lay-person’s terms, explain how the LED Arduino system works. How was this software applied to your project? How are the images captured and converted into photographs?
The Arduino is an open-source microcontroller, basically a mini-computer, that can easily be programmed to control physical items, such as motors, LEDs, light sensors, and touch sensors, that an artist might want to be a part of a sculpture, installation, or photo set-up. It is designed to be affordable, as even fully constructed Arduinos cost less than $50, and the software is cross-platform for Mac, Windows, and Linux users. A good introduction to the technology can be found on the Arduino website for those who want more details (http://www.arduino.cc/en/Guide/Introduction).
In my particular use of the Arduino for the Status Symbols project, an array of 8 RGB LEDs, each similar to one pixel on a computer screen, is turned on and off by the Arduino software. The customizable program allows the color of the light to be changed by varying the intensity of the red, green, and blue channels in the LEDs. It sets the speed of the on-and-off blinking and controls the sequence of blinks so that it lines up with the binary code that makes up each alphanumeric character of a tweet. Using ASCII binary code, each character on the keyboard is turned into a series of 8 ones and zeros that are transmitted to the LEDs very quickly. An entire 140-character tweet can blink by in a few seconds.
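As a rough sketch of that character-to-blink idea (the pin number, bit timing, and sample text below are assumptions for illustration, not Hepner’s actual code), the encoding could look something like this in an Arduino sketch:

```cpp
// Sketch of the encoding idea: each ASCII character of a tweet becomes
// 8 on/off blinks, most significant bit first. Pin number, timing, and
// the sample text are assumptions, not values from Hepner's program.

const int LED_PIN = 13;     // one LED standing in for the 8-LED array
const int BIT_MS  = 3;      // ~3 ms per bit: a 140-character tweet
                            // (140 x 8 bits) blinks by in a few seconds

const char* tweet = "@1stfans hello";   // example text only

void blinkChar(char c) {
  for (int bit = 7; bit >= 0; bit--) {
    // Drive the LED with one bit of the character's ASCII code
    digitalWrite(LED_PIN, ((c >> bit) & 1) ? HIGH : LOW);
    delay(BIT_MS);
  }
  digitalWrite(LED_PIN, LOW);
}

void setup() {
  pinMode(LED_PIN, OUTPUT);
}

void loop() {
  for (const char* p = tweet; *p != '\0'; p++) {
    blinkChar(*p);
  }
  delay(1000);   // pause before repeating the tweet
}
```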
My set-up has the 8 LEDs and the Arduino spinning below a Hasselblad camera loaded with color negative film. Each individual Twitter portrait is turned into a computer program that controls the 8 LEDs. The LED array is set to spin, and long exposures capture the moving blinks of the LEDs, sometimes through diffusion material, to form the basis of each portrait on the film. After processing, each frame is scanned into the computer and turned into a printable image file.
What patterns have you noticed in the words from status updates and tweets once they are converted into the images of light that the Arduino produces?
One of the most unexpected patterns I’ve noticed in the work is when tweets have recurring characters. In the work that I did for the Brooklyn Museum’s now-defunct @1stfans project, I was asking followers to tweet back their portrait in text. The first reply was @1stfans rrrrrrrrrrrrrrrrrrrrrr… until all 140 characters of the tweet were used. Since the tweet was 8 differing characters from the @1stfans reply, followed by a single repeated character, an interesting donut shape emerged. Others mimicked this technique during the month-long project, specifically one of my friends, @npghjunk, who told me that he liked the visual formed by the first tweet. He then responded by using a series of periods following the text of his portrait.
How do #trendingtopics from Twitter or heavily discussed events on other social media sites affect your work? Explain the selection process for words or phrases to be converted into light-image form.
The one trending topic that I made some images from was the #iranelection. I was interested in capturing the large volume of messages, but was unable to retrieve more than 1,000 tweets at a time due to an API search limit imposed by Twitter. The tweets I was able to save only scratched the surface of the hundreds of thousands tagged with that hashtag. I shot a great deal of the first few minutes of the #iranelection, but I haven’t shown any of these images. I may go back to them at some point to create a body of portraits, but I haven’t felt that the timing is right to exhibit them. Since we are far past the immediacy of the original hashtag, the work would need to be viewed at a time with more cultural relevance than there is right now. It’s a battle between the quickness of Twitter and the slowness that comes with planning physical exhibitions.
You mentioned in an interview with the Brooklyn Museum that you “tweak” the code to create different aesthetic experiences; how do you work to preserve the specific content carried by the words, whether intellectual, emotional, or otherwise?
The tweaking refers to the colors that I’ve added to the code for certain tweet characteristics in the portraits from that project: hashtags (#) are red, @ symbols are orange, “quotes” are purple, and some strategic words are blue.
Otherwise, the words in each tweet are given a random color that the code generates at the beginning of each word. Random values are generated for each of the R, G, and B channels, and that color stays in effect until a space signifies a new word, at which point a new set of color values is generated by the program.
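A loose illustration of how those color rules might be expressed in Arduino code (the specific RGB values, names, and structure here are assumptions for illustration, not her actual program):

```cpp
// Sketch of the color rules described above: fixed colors for special
// characters, and a random per-word color that holds until the next space.
// RGB values and names are assumptions, not Hepner's actual code.

struct Color { int r, g, b; };

Color wordColor = {255, 255, 255};   // color currently in effect for the word

Color colorForChar(char c, bool startOfWord) {
  if (c == '#')  return {255, 0, 0};     // hashtags are red
  if (c == '@')  return {255, 128, 0};   // @ symbols are orange
  if (c == '"')  return {128, 0, 128};   // quotes are purple
  // (the blue "strategic words" would need a lookup against a chosen
  //  word list, omitted here)
  if (startOfWord) {
    // a space just ended the previous word: pick new random channel values
    wordColor = { (int)random(256), (int)random(256), (int)random(256) };
  }
  return wordColor;   // same color through the rest of the word
}
```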
How would you like audiences to view your “digital portraits”?
I consider the work to be portraits of the individuals/organizations at the moment of their tweet. Each tweet is a new representation, whether made into an artwork, or left un-imaged. Each status update is a momentary portrait.
In what way do you see the images as abstract, or has the series been considered abstract because audiences may not fully understand what they are seeing?
The work was purposely constructed to be abstract on multiple levels. It is first a direct abstraction of the tweet, as it changes text into code into light. It is also visibly abstract in the formal qualities that emerge in the produced image. It is unlike most abstract photographs in that it does not visualize a part of the everyday world, but uses the defining properties of photography to create images that cannot be made through any other route.
To what extent is the round image a byproduct of the microcontroller and to what extent are you free to choose the shape of the image output?
I chose to make the portraits circular from the outset. I have a few experiments going on that involve other forms, but I haven’t shown these outside of my studio.
What image sticks out in your mind? What was the word behind its creation?
A piece that stands out is @justinrmeyer, 12:41 PM Dec 21st from web (#2), @1stfans gaugcuacugcuaug. This tweet is from a personal friend who is a biologist. He tweeted a string of DNA base pairs as his portrait: gaugcuacugcuaug. The code within the code comes through in the image, which visually reminds me of the nuclear fallout symbol.
Many thanks to Lori for answering our questions. Thank you for a great season.