Posted by Steve Dietz on September 2, 2004 7:37 PM
I attended part of the Language of Networks conference the afternoon before Ars officially opened. There was some good technical discussion of how to represent networks. Anne Nigten from V2_lab gave a nice talk about mental mapping and announced the launch of D-Tower by Q.S. Serafijn and NOX Architects. Selected residents of Doetinchem in the Netherlands fill out a questionnaire.
"This questionnaire contains 360 questions. Every other day, four new questions are made available to the inhabitants of Doetinchem. An example: 'Are you happy with your partner?' Possible answers: 'very much' - 'yes' - 'a little' - 'no' - 'absolutely not' - 'not applicable'. Each answer has a score."
These scores can be mapped to the respondents' emotional states - specifically love (red), hate (green), happiness (blue), and fear (yellow). Their answers, along with their postal codes, are used to create a dynamic, emotional map of the city, showing which parts have a happier profile, for instance. The really cool part, I think, is that a tower at the edge of town (webcam view at left) is lit by a combination of colored lights that represents the emotional state of the town that day. If it's too hateful or fearful, you might want to stay away.
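The basic pipeline - scored answers, aggregated by postal code into four emotion channels, reduced to a tower color - could be sketched roughly like this. To be clear, the actual scoring used by Serafijn and NOX isn't published here, so the score values and the aggregation rule below are invented for illustration:

```python
# Hypothetical sketch of D-Tower-style emotion aggregation. The real
# questionnaire assigns its own scores per answer; these are made up.

from collections import defaultdict

# The four emotions and their tower colors, as described above.
EMOTION_COLORS = {"love": "red", "hate": "green",
                  "happiness": "blue", "fear": "yellow"}

# Invented answer-to-score mapping for the questionnaire's answer scale.
ANSWER_SCORES = {"very much": 2.0, "yes": 1.0, "a little": 0.5,
                 "no": -1.0, "absolutely not": -2.0, "not applicable": 0.0}

def aggregate(responses):
    """responses: list of (postal_code, emotion, answer) tuples.
    Returns per-postal-code emotion totals, for the city map."""
    totals = defaultdict(lambda: dict.fromkeys(EMOTION_COLORS, 0.0))
    for postal_code, emotion, answer in responses:
        totals[postal_code][emotion] += ANSWER_SCORES[answer]
    return dict(totals)

def tower_color(totals):
    """Sum the map city-wide and light the tower with the
    dominant emotion's color."""
    city = dict.fromkeys(EMOTION_COLORS, 0.0)
    for scores in totals.values():
        for emotion, value in scores.items():
            city[emotion] += value
    dominant = max(city, key=city.get)
    return EMOTION_COLORS[dominant]
```

With a handful of responses, `aggregate` gives you the per-neighborhood map and `tower_color` the single light mix for the evening; the real piece presumably blends the four colors rather than picking a winner.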
Which is what New York might be, I suppose. I ran into Zhang Ga, who had just arrived from there, and he said that about 1400 people had been arrested so far. This is not news that I'm getting on CNN Asia in the evening, the only English-language news in my unwired hotel room. I'd like to be able to consult a D-Tower map of Manhattan.
Hot tip. Zhang Ga's The Peoples' Portrait will launch in Singapore and Times Square on the Reuters Building in November. He has 5 minutes of every hour from Reuters for the project, so if you want to see yourself many stories high, get to the Times Square Information Center early and often. It should be very Blade Runner, although I think that Zhang Ga's intentions are less noir.
W. Bradford Paley also gave a spirited presentation of Textarc at the Language of Networks conference, along with some of the background to his process. He argues that there is a universal process by which the human sensorium comprehends, which goes from sensation -> differentiation -> segmentation -> recognition -> interpretation -> association -> comprehension, and that one of his goals is to take advantage of this process and offload, so to speak, as much of the information visualization to the earlier stages as possible. Quite convincing. He also spoke briefly about his new project at Ars, which is a social networking project. He is distributing 1,000 stick pins that will anonymously log - he didn't say the mechanism, but I am assuming RFID - every conversation the wearer has: its length and the other participant (assuming s/he also has a stick pin). He'll then create a visualization of the conversations of these 1,000 people over the course of the conference. When you walk up to the visualization wearing your pin, it will automatically identify your node(s) as well. Should be fascinating, although I heard a rumor that the pins had been held up in customs so far.
It's still a day before the official opening of Ars, and wandering around, I found the usual state of disarray that will turn magically into (mostly) functioning projects overnight. While I was sitting in the office at OK Centrum, where most of the Prix winners are shown, an artist walked in asking if any of us had a 1-ohm resistor on us. I had just given my last one away, unfortunately.
I checked out Ben Rubin and Mark Hansen's Listening Post, which is one of my all-time favorite projects and which won the Golden Nica for interactive art. Yeah! They had everything pretty much under control, even though their equipment had arrived 3 days late and they had had a minor electrical fire. Ben was chasing down a hum the lights were causing and also preparing to enter by hand thousands of instructions for the audio sequencing, as the version of their sound board at Ars was an early one and apparently couldn't download the program automatically. When Listening Post is shown at La Villette Numerique later in the month, it will have a 300-foot sight line, so Mark was working on a new font for a new "movement" they've added to the program, which briefly displays ALL CAPS MESSAGES from the chat rooms. Mostly, they think they'll get FUCK YOU and LOSER and similar INVECTIVE. Quite an attract sequence.
The one criticism of Listening Post I've heard - and agree with - is that it only has a male voice. Mark explained that there are different levels of text-to-speech synthesizers available, and at the most sophisticated end of the spectrum, which they require, the female voice is buggy. Hopefully, this will be fixed by the manufacturer at some point, and they can add more variety to the piece then.
Ran into Jonah Brucker-Cohen and Katherine Moriwaki, and was excited to hear that they'll be demoing umbrella.net - which I mention in my "Locative = 'Yes'" selection for low-fi net art locator - at Spectropolis in New York at the beginning of October. The umbrellas are custom-designed with LED lighting and iPAQ units for messaging whenever an ad hoc network is established. Wish I could be there to try it. It sounds ripe for corporate sponsorship at some golf tournament. You'd really be able to track the dealmaking going on, like a physical version of Josh On's They Rule.