Sunday, 2 May 2010

references

O'Sullivan, D., & Igoe, T. (2004). Physical Computing: Sensing and Controlling the Physical World. Boston: Thomson Course Technology.

Freyer, C., Noel, S., & Rucki, E. (2008). Digital by Design. London: Thames & Hudson Ltd.

Noble, J. (2009). Programming Interactivity. USA: O'Reilly Media, Inc.

Bradski, G., & Kaehler, A. (2008). Learning OpenCV: Computer Vision with the OpenCV Library. USA: O'Reilly Media, Inc.

Reas, C., & Fry, B. (2007). Processing: A Programming Handbook for Visual Designers and Artists. Massachusetts: The MIT Press.

Shiffman, D. (2008). Learning Processing: A Beginner's Guide to Programming Images, Animation, and Interaction. Burlington, USA: Elsevier Inc.

http://arduino-activists.blogspot.com/

http://www.arduino.cc/cgi-bin/yabb2/YaBB.pl?num=1272485104/0#0

Chung, B. (14/05/2007). Bryan Chung's personal website on digital art, software design, popular culture and entertainment. http://www.bryanchung.net/?p=177. Accessed 20/04/2010.

timm (03/11/2009). Pixel silhouette to shape. http://processing.org/discourse/yabb2/YaBB.pl?num=1257273175. Accessed 10/04/2010.

Law, C. (25/04/2010). Computer vision, JMyron, webcam and an LED. http://www.arduino.cc/cgi-bin/yabb2/YaBB.pl?num=1272485104. Accessed 25/04/2010.

mobic (20/08/2007). blobDetection library: Triangles ??? http://processing.org/discourse/yabb2/YaBB.pl?num=1187621962. Accessed 03/04/2010.

LED Matrix - Serial Interface - Red/Green/Blue. http://www.sparkfun.com/commerce/product_info.php?products_id=760. Accessed 01/03/2010.

Download Processing (11/03/2010). http://processing.org/download/. Accessed 01/02/2010.

Serial & Parallel I/O (04/02/2009). http://rxtx.org. Accessed 03/03/2010.

Arduino and Processing. http://www.arduino.cc/playground/Interfacing/Processing
BlobDetection library (2006). http://www.v3ga.net/processing/BlobDetection/index-page-download.html. Accessed 03/02/2010.

OpenCV Processing and Java Library. http://ubaa.net/shared/processing/opencv/. Accessed 10/03/2010.
 
Download JMyron 0025 (Processing library). http://webcamxtra.sourceforge.net/download.shtml. Accessed 20/04/2010.
 

Possible clothing material

Plastic bottles can be recycled into polyester, which is a suitable material for my garment.

mock-up storyboards

Friday, 30 April 2010

Reflective essay


My final approach differed dramatically from my initial concept once I realised the power of Processing's external libraries. Stepping away from the physical components discussed in my original design idea, I opted to replace multiple input sensors with a webcam. In my blog I have looked at several different software implementations: blob detection, OpenCV, video libraries with thresholds, background subtraction, and finally JMyron.

The software route was definitely far more versatile than trying to hard-wire my project. I have to accept that my final product will be closer to the physical computing world than this prototype, as an LED matrix with a backpack may not be the most suitable item to embed in clothing when such a large panel of LEDs is required.

This approach has allowed me the time and flexibility to investigate a wide variety of methods, which have opened my eyes to the possibilities created for digital artists through the use of these open source technologies.

I focused on the JMyron sketch from http://www.bryanchung.net/?p=177 because it was so close to what I wanted to accomplish. He had basically done most of the work for me; all I needed to do was adapt the code for the LED board I have, which proved more difficult than I thought.

The other sketches simplified the camera image using various algorithms, but Bryan Chung's was the only sketch to transform the live feed into an 8*8 matrix and to communicate with an LED matrix through the Arduino; this is essentially what I have been striving to achieve.
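That reduction step (live feed down to an 8*8 grid) can be sketched in plain Java, stripped of the JMyron capture code. This mirrors the updateCell() logic in the Camera class quoted later in this post: sample the centre pixel of each cell and threshold its brightness. The frame here is just an array of brightness values, and the class and method names are mine, not from the original sketch.

```java
// A minimal sketch of Bryan Chung's 8x8 reduction: sample the centre pixel of
// each of 8x8 cells from a square brightness frame and threshold it at 128,
// as in the original Camera.updateCell().
public class GridReducer {
    static final int NUM = 8;

    // Reduce a square frame (side x side brightness values, 0-255) to an 8x8 on/off grid.
    static boolean[][] reduce(int[] frame, int side) {
        int inc = side / NUM;      // width of one cell in source pixels
        int off = inc / 2;         // sample at the cell centre
        boolean[][] cells = new boolean[NUM][NUM];
        for (int i = 0; i < NUM; i++) {
            for (int j = 0; j < NUM; j++) {
                int x = i * inc + off;
                int y = j * inc + off;
                cells[i][j] = frame[y * side + x] >= 128;  // bright pixel -> LED on
            }
        }
        return cells;
    }

    public static void main(String[] args) {
        // A 240x240 frame whose left half is dark and right half is bright.
        int side = 240;
        int[] frame = new int[side * side];
        for (int y = 0; y < side; y++)
            for (int x = 0; x < side; x++)
                frame[y * side + x] = (x < side / 2) ? 0 : 255;
        boolean[][] cells = reduce(frame, side);
        System.out.println(cells[0][0] + " " + cells[7][0]); // prints "false true"
    }
}
```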

What my board allowed, which Bryan Chung's did not, was for the representation to be displayed in multiple colours. If the LED colours could be manipulated to reflect brightness, or real-world colour as seen through the eye of the camera, then this would enhance my product tremendously.

Where this sketch failed was in the clarity of the representation of the image, which is an element that I would like to continue improving: I could only make out the fingers on my hand at close range. This is something that the other methods achieved better, but they had better resolutions to work with. It would be interesting to see how much more recognisable the imagery would be if the number of LEDs were increased. It is worth researching in the future how I could achieve the best of both worlds, since as accurate a silhouette as possible would be advantageous.

Further to this, I decided to undertake the recreation of the scrolling text exhibited by Barbara Layne in her very impressive exhibitions of interactive LED clothing, mentioned in my introduction essay. This was not a critical part of my ambition for the project outlined in my introduction, yet I feel it was a significant achievement in the prototyping phase.

Daniel Rozin’s tangible technology displays, “The Mirrors collection”, Gebhard Sengmüller’s “Parallel Image”, and Jim Campbell’s low-resolution digital imagery have inspired me from a technical point of view. My product is tangible media in the sense that it is not only the wearer who feels connected to it, but also people in the immediate environment, as it reacts to their presence in close to real time. That connection between people is what makes the idea work, in my opinion. This aspect draws curiosity, and it is this quality that delivers the messages.

From a more theoretical viewpoint I refer to documentary films that I have become very fond of in recent years. “The Cove”, about Ric O’Barry, the man who trained “Flipper”, and his quest to save dolphins in Japan, was where I took the example emblem for my mock-up and scrolling text. People who make documentaries have a passion for conveying messages that they think the public needs to be made aware of. Allowing people to express themselves on an individual level, and most importantly passively, I think has a lot of power.
The Free Art & Technology website at http://fffff.at/ was also interesting. I guess defying capitalism is at the heart of all open source technologies, and using them to this end is very suitable in a way.
Open source technology is important for keeping the opportunity open to the masses to express themselves creatively and, in turn, drive innovation.

Personally I feel quite strongly about a number of social and environmental issues, and feel that certain things aren’t given enough coverage in the media. The media can be a source of distraction from real issues but at the very same time be the key instrument in public awareness. I am proud of the strong British heritage of good documentary filmmaking, but what is often overlooked in our society is the individual’s right to freedom of expression. Demonstrators’ rights have been a cause for concern recently, with various anti-terror laws hacking away at our civil liberties. These laws have been deemed necessary for reasons of public safety, although it’s always a moral dilemma trying to justify the erosion of the civil rights our elders have fought to keep.

What I am perhaps looking at is an alternative method of demonstrating. T-shirts have been a key means of transmitting beliefs in modern culture, and I really think they work. If you see someone wearing a shirt displaying something meaningful you are impelled to take a step back and look; for one, this person feels strongly enough about the cause to make an example of themselves. My clothing takes it a step further by not only lighting up but also allowing that ever-important interactive aspect.

Reiterating what I said earlier, putting consideration into how the product is produced is important given the nature of my project. Keeping it green is essential from an ethical stance; it should also use as little energy as possible, and be made from recycled products.

An afterthought about what the clothing should be allowed to display, and whether it should be restricted due to the possibility of people being gratuitous: I think restriction would go against what the shirt stands for, freedom of expression. It may get abused, but that’s part of the beauty of it; it is a reflection of your own persona.

Although I thoroughly enjoyed the challenge set before me with my first project in Arduino and Processing, I fell short of the challenge of creating a representation of a silhouette using LEDs. My research has shown that it can be done. I like to create art that has a larger purpose than pure aesthetics, and this project allowed me to do just that.

Thursday, 29 April 2010

Elections: personal thoughts

CHANGE is a hot topic at the moment with the elections looming. Most people are quite sceptical about change. Through lives in which they have experienced both Labour and Tory governments, many people do not feel that they have seen real change, with many statistics pointing to social mobility declining under the Labour governments that adopted Third Way and free market policies.
A major change in society, increasing at an exponential rate, is the acquisition of new technologies regardless of class, gender, or culture. Even a flat in the poorest of neighbourhoods will commonly be equipped with numerous high-tech gadgets and devices, which have become a reflection of the outward image we want to convey to the world. Technologies have somewhat comforted those who could not afford to buy their own home or aspire to childhood dreams. If you can’t have the big house with the Bentley parked in the driveway, then at least you can get the new iPhone on contract to keep you bemused. I know, it sounds like a conspiracy theory, and it probably is, but I still think there is truth in how marketing has managed to create a consumerist society, which has played a part in diverting people’s attention away from more traditional forms of wealth like property, which will not devalue to next to nothing over a few years.
My product is in itself a high-tech gadget, which is a kind of paradoxical implementation given the controversy surrounding energy and our unsustainable consumerism in society today, and given what the purpose of my clothing is. LEDs are a very energy-efficient light source, and being green should be a core feature of the clothing’s design; I was thinking that it could be made from recycled materials. Safety is also a concern, as there will be an electric current and we can’t have people getting electrocuted.
Wearable technologies are of interest to me because they open doors to personalising clothing to promote our own ideals. Movements related to music genres have seen people personalise their clothing in order to convey messages about, for instance, a person’s dissenting political viewpoint. People who follow punk or metal personalise their clothes in anti-conformist fashions.
I think encouraging this voice through the use of technologies, as a peaceful outlet for everyday frustrations, is a good thing. You no longer need to be confined to promoting a company logo or band, as current clothing trends restrict us to.
Personalisation of clothing is also popular because a lot of people want to be seen as individuals, free and independent; not just another cog in the machine.
There is also unprecedented “change” happening within our communities, which I think politics often struggles to keep up with. The House of Lords, after all, does not represent the views of the majority, as its members are an uneven sample of the population; neither do the views of many politicians, for that matter. My point is that everyone sees life from a different viewpoint, and no one is necessarily right or wrong, but unless you have an even representation of the electorate in government, the plight of the poor is very unlikely to get a fair hearing. We live in a so-called democracy, but what sort of voice do everyday people have on the bigger picture?
Modern technologies and their ubiquity are allowing this voice to break through in unforeseen ways. Social networking and blogs are very powerful tools that can reach the masses. I often think of my dad and how he spends a lot of his time swearing at the TV set when the news comes on, and I have been trying to encourage him to open a blog to vent his concerns and maybe find some peace by getting his opinions out there.
Personally I think politicians should spend less time trying to control change and more time supporting it as it happens within our communities. “The Big Society” is a Tory promise, but we have heard campaign slogans all our lives to little effect.
Technology is putting the mechanisms in place to make these visions a reality through enhancing our communication networks. Government is coming round to realising its power. I see the “Big Society” ideology as a product of technological change rather than a Tory ideal.
"The device maintains a single 64 byte buffer which represents each position in the matrix. When CS is asserted (low) the device begins reading data from the SPI input and writing it sequentially to the 64 byte buffer. Simultaneously the device will output the old buffer data on the MISO line. Hence, to display an image on the matrix a set of 64 bytes must be sequentially transferred to the backpack while keeping the CS pin low (this process is slightly different for a daisy-chained system). 

By default, the backpack recognizes up to 255 individual colors. The 64 bytes transferred to the backpack represent the desired color of each LED. The first 3 bits of each byte represent the Red brightness level for that LED; the second 3 bits represent the Green brightness level while the last 2 bits represent the Blue brightness level. Below is a table which illustrates how to construct your color value. "

From the microcontroller datasheet at www.sparkfun.com, product code "COM-00760".
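From the colour layout in the quote above (3 bits red, 3 bits green, 2 bits blue per byte), a colour value can be constructed as below. This is a sketch assuming "first 3 bits" means the most significant bits; the helper name is mine, not from the datasheet.

```java
// Build a backpack colour byte in RRRGGGBB layout (assumed MSB-first per the
// datasheet quote): red and green range 0-7, blue ranges 0-3.
public class BackpackColor {
    static int pack(int r, int g, int b) {
        return ((r & 0x07) << 5) | ((g & 0x07) << 2) | (b & 0x03);
    }

    public static void main(String[] args) {
        System.out.printf("white = 0x%02X%n", pack(7, 7, 3)); // all channels full: 0xFF
        System.out.printf("red   = 0x%02X%n", pack(7, 0, 0)); // 0xE0
        System.out.printf("blue  = 0x%02X%n", pack(0, 0, 3)); // 0x03
    }
}
```

Sixty-four such bytes, sent while CS is held low, would then make up one full frame for the matrix.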

com port

My post on the Arduino forum http://www.arduino.cc/cgi-bin/yabb2/YaBB.pl?num=1272485104 was answered very quickly, and what I had established yesterday was backed up: it was a problem with the serial port, which was not being identified as open.
"port = new Serial(parent,"/dev/cu.usbserial-A900705s",9600);"

What I hadn't realised this time was that since I changed my board, the port name had also changed.
I had copied the port name from the Arduino toolbar into my sketch before, but on double-checking it after this post, I realised the identifier at the end of the port had now changed to "/dev/cu.usbserial-A70060ta". I am assuming this is what happened. Anyway, there is light at the end of the tunnel. The LED matrix is now responding, although not as expected. Only four of the 64 LEDs are responding to movement, but it is a vital step; we now have communication between Processing and the board.
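Since the port name changes when the board does, one way to avoid hard-coding it is to scan whatever Serial.list() returns for the usbserial entry. A minimal sketch of the idea in plain Java (the Processing calls themselves are omitted, and pick() is my own helper, not part of any library):

```java
// Pick the first serial port whose name contains "usbserial" (the pattern
// both of my Arduino boards' Mac port names share), or null if none is found.
public class PortPicker {
    static String pick(String[] ports) {
        for (String p : ports) {
            if (p.contains("usbserial")) return p;
        }
        return null; // no Arduino-style USB serial port attached
    }

    public static void main(String[] args) {
        String[] ports = { "/dev/cu.Bluetooth-Modem", "/dev/cu.usbserial-A70060ta" };
        System.out.println(pick(ports)); // prints "/dev/cu.usbserial-A70060ta"
    }
}
```

In the Processing sketch this could then replace the hard-coded name, e.g. `port = new Serial(this, pick(Serial.list()), 9600);`, assuming exactly one USB serial device is attached.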
My matrix differs from the one used by Bryan Chung in several ways. Mine displays seven different colours, being a full R/G/B serial interface matrix with its own backpack; all the refreshing and communication control is taken care of by the backpack. Bryan Chung had developed his own custom PCB backpack board.



An interesting thing here is that we were getting some colour-change response as well as a simple on/off response to light conditions. The refresh rate, or reaction time to changing light conditions, was very slow for some reason. This is a great start, but from here the real challenge will begin.

Tuesday, 27 April 2010

problem returned

Worst luck: the very same problem has occurred. I am back hunting through logs, which have suggested a variety of solutions, but none have worked; people who have had similar error messages have had a variety of problems. I tried changing the baud rate in various text documents in the Processing files, once in the preferences.txt document and again in a boot.txt file, as this was a commonly recorded fix. I then reinstalled the software and its plug-ins on a PC, but this again had no effect. There is some movement in the RX/TX LEDs, so it definitely reaches the board but fails to fully load. The LED attached to pin 13 also blinks a few times as the bootloader retries the load. There has been some strange behaviour on my Mac.
While running some sketches, Processing corrupted my computer. I was getting problems running Premiere, which I had running simultaneously, and whenever I pressed a key on the keyboard it would continually repeat the character until I quit the document or clicked outside the window. For example, if I pressed r it would go rrrrrrrrrrrrrrrrrrrrrrrrrrrr... Not good. I am going to buy a new board tomorrow, as this one belongs to the university and I wanted one for personal use anyway.


I have now replaced my board with an Arduino Diecimila. It is uploading fine again.
I am getting the message below when I try to run Bryan Chung's JMyron sketches. I've tried reverting back to Java 5 in the Java Preferences utility, on a whim from a blog that said the serial library doesn't run on Java 6 or 64-bit Processing, but that didn't seem to be the problem. This was on Snow Leopard, which I am running. It definitely seems to be something to do with the serial communication between Processing and the board.


java.lang.NullPointerException
    at processing.serial.Serial.write(Serial.java:518)
    at brianChungSoloutionIntelmac$LED.sendMsg(brianChungSoloutionIntelmac.java:159)
    at brianChungSoloutionIntelmac$LED.refresh(brianChungSoloutionIntelmac.java:146)
    at brianChungSoloutionIntelmac.draw(brianChungSoloutionIntelmac.java:43)
    at processing.core.PApplet.handleDraw(PApplet.java:1425)
    at processing.core.PApplet.run(PApplet.java:1327)
    at java.lang.Thread.run(Thread.java:613)

Monday, 26 April 2010

Back on the road

Wonderful, it fixed itself.
Some changes had to be made to Bryan Chung's code in order for it to run on a Mac. The method with the getForcedWidth() and getForcedHeight() calls needs to be removed, as those calls are only needed on a Windows machine to get around a video problem. You also need the Mac version of the JMyron library, available here: libJMyron.jnilib compiled for Intel Macs.
Also, if you are running this on a Mac, make sure your com port has been changed. I changed from the Windows port "COM6" referred to in the code to my Mac port, "/dev/cu.usbserial-A900705s".
I have now got something which is a step closer to what I need to achieve. My latest video shows the results thus far: an 8*8 on-screen display representation of an LED matrix responding to camera movement.

Friday, 16 April 2010

jMyron




Here is another application where a black-and-white image caught by a webcam has been successfully sent to an 8*8 LED matrix, this time using the JMyron library. Again, I hope to be able to add to this post soon when I get my board working. Currently my board will accept some sketches, but none of the later ones I developed, such as the scrolling text. I am getting this common error, which can unfortunately refer to many things:
avrdude: stk500_recv(): programmer is not responding
SOS, can anyone help? It worked before!



http://www.bryanchung.net/?p=177

The Arduino runs the last program below; the host machine runs the following Processing sketches.

LEDCtrl03.pde

import JMyron.*;
import processing.serial.*;

LED myLed;
Camera myCam;
// set up led screen display
void setup() {
  size(300,300);
  background(0);
  ellipseMode(CORNER);
  colorMode(HSB);
  smooth();
 // sets frames per sec
  frameRate(15);
// disable outline
  noStroke();
  myLed = new LED(this);
  myCam = new Camera();
 // write to console
  println(Serial.list());
}
// called by the framerate function
void draw() {
  background(0);
  myCam.refresh();
  myLed.refresh();
}

void serialEvent(Serial _p) {
}
// call method sendMsg on mouse click
void mousePressed() {
  myLed.sendMsg();
}


//This part grabs the image and reduces it to 8*8
Camera.pde
class Camera {
  final int XNUM = 8;
  final int YNUM = 8;
  JMyron cam;
  PImage img;
  int w, h;
  int xInc, yInc;
  boolean [][] cells;

  Camera() {
    cam = new JMyron();
    w = 320;
    h = 240;
    img = new PImage(h,h);
    cam.start(w,h);
    cam.findGlobs(0);
    println("Myron " + cam.version());
// Next few lines are only for windows users  
// println("Forced Dimensions: " +
    //  cam.getForcedWidth() + " " +
    //  cam.getForcedHeight());
    cells = new boolean[XNUM][YNUM];
   //240/XNUM
   //240/YNUM
    xInc = h/XNUM;
    yInc = h/YNUM;
  }

  void refresh() {
 // PImage  data type for storing images. Processing can store png, jpg, tga, gif
    PImage tmg = new PImage(w,h);
    cam.update();
    cam.imageCopy(tmg.pixels);
//updates image with data in the pixel array
    tmg.updatePixels();
    img.copy(tmg,40,0,h,h,0,0,h,h);
 // update Pixels function updates change   
    img.updatePixels();
    updateCell();
  }

  void updateCell() {
    int xOff = xInc/2;
    int yOff = yInc/2;
    for (int i=0;i<XNUM;i++) {
      for (int j=0;j<YNUM;j++) {
        int x = i*xInc+xOff;
        int y = j*yInc+yOff;
       //array for all pixels in display window
        color c = img.pixels[y*h+x];
        cells[i][j] = (brightness(c)>=128);
      }
    }
  }
 
  boolean getCol(int _x, int _y) {
    return cells[_x][_y];
  }
}

LED.pde
// This section sends data to the arduino
class LED {
/* Final    Keyword used to state that a value, class, or method can't be changed. If the final keyword is used to define a variable, the variable can't be changed within the program */
  final int XNUM = 8;
  final int YNUM = 8;
  int w, h;
  int w1, h1;
  int xOff, yOff;
  Light [][] board;
  Serial port;
  PApplet parent;
  byte [] msg = new byte[YNUM+1];

  LED(PApplet _p) {
    parent = _p;
    board = new Light[XNUM][YNUM];
    w = 30;
    h = 30;
  // Make sure this is the same port you used to upload the Arduino sketch to the board
    port = new Serial(parent,"/dev/cu.usbserial-A70060ta",9600);
    init();
  }

  void init() {
    w1 = w+4;
    h1 = h+4;
    xOff = (width-w1*XNUM)/2;
    yOff = (height-h1*YNUM)/2;
    for (int x=0;x<XNUM;x++) {
      for (int y=0;y<YNUM;y++) {
        int tx = xOff+x*w1;
        int ty = yOff+y*h1;
        board[x][y] = new Light(tx, ty, w, h);
      }
    }
    msg[0] = (byte) 0x55;
  }

  void refresh() {
    for (int x=0;x<XNUM;x++) {
      for (int y=0;y<YNUM;y++) {
        board[x][y].setLight(myCam.getCol(x,y));
        board[x][y].show();
      }
    }
    sendMsg();
  }

  void sendMsg() {
    for (int y=0;y<YNUM;y++) {
      int result = 0;
      for (int x=0;x<XNUM;x++) {
        int x1 = 8-x;
        x1 %= 8;
        result = result*2+board[x1][y].getLight();
      }
      msg[y+1] = (byte) result;
    }
    port.write(msg);
  }
}

Light.pde

// This part controls the onscreen representation
class Light {
  boolean on;
  int w, h;
  int x, y;
 
  Light(int _x, int _y, int _w, int _h) {
    x = _x;
    y = _y;
    w = _w;
    h = _h;
  }
 
  void show() {
    if (on) {
      fill(240,255,255);
    } else {
      fill(0,0,50);
    }
    ellipse(x+2,y+2,w,h);
  }
 
  void toggle() {
    on = !on;
  }
    //Bytes are a convenient datatype for sending information to and from the serial port
  byte getLight() {
    byte state = 0;
    if (on) {
      state = 1;
    }
    return state;
  }
 
  void setLight(boolean _o) {
    on = _o;
  }
}

Arduino



int CLOCK = 12;
int LATCH = 13;
int DATA  = 11;
// byte stores an unsigned number from 0 to 255
byte matrix[8];
byte head;
// two byte value -32000> 32000
int state = 0;

void setup() {
  /*When a pin is configured to OUTPUT with pinMode, and set to LOW with digitalWrite, the pin is at 0 volts. In this state it can sink current, e.g. light an LED that is connected through a series resistor to, +5 volts, or to another pin configured as an output, and set to HIGH.
 Digital pins can be used either as INPUT or OUTPUT. Changing a pin from INPUT TO OUTPUT with pinMode() drastically changes the electrical behavior of the pin. */
  pinMode(CLOCK, OUTPUT);
  pinMode(LATCH, OUTPUT);
  pinMode(DATA,  OUTPUT);
  digitalWrite(CLOCK, LOW);
  digitalWrite(LATCH, LOW);
  digitalWrite(DATA,  LOW);
  initLED();
  clearLED();
  //Begin. Sets the data rate in bits per second (baud) for serial data transmission.
  Serial.begin(9600);
  head = (byte) 0x55;
}

void loop() {
   /*available. Get the number of bytes (characters) available for reading from the serial port. This is data that's already arrived and stored in the serial receive buffer (which holds 128 bytes). */
  if (Serial.available()>0) {
    int input = Serial.read();
   /*Like if statements, switch...case controls the flow of programs by allowing programmers to specify different code that should be executed in various conditions. In particular, a switch statement compares the value of a variable to the values specified in case statements. When a case statement is found whose value matches that of the variable, the code in that case statement is run.
   The break keyword exits the switch statement, and is typically used at the end of each case.*/
    switch (state) {
    case 0:
      if (input==head) {
        state = 1;
      }
      break;
    case 1:
    case 2:
    case 3:
    case 4:
    case 5:
    case 6:
    case 7:
      matrix[state-1] = (byte) input;
      state++;
      break;
    case 8:
      matrix[state-1] = (byte) input;
      state = 0;
      refreshLED();
      break;
    }
  }
}

void ledOut(int n) {
  digitalWrite(LATCH, LOW);
 /*Shifts out a byte of data one bit at a time. Starts from either the most (i.e. the leftmost) or least (rightmost) significant bit. Each bit is written in turn to a data pin, after which a clock pin is toggled to indicate that the bit is available.
 This is known as synchronous serial protocol and is a common way that microcontrollers communicate with sensors, and with other microcontrollers. */
 //MSBFIRST. most significant bit first
  shiftOut(DATA, CLOCK, MSBFIRST, (n>>8));
  shiftOut(DATA, CLOCK, MSBFIRST, (n));
  digitalWrite(LATCH, HIGH);
  delay(1);
  digitalWrite(LATCH, LOW);
}

void initLED() {
  // Driver configuration registers (MAX7219/7221-style addresses, assumed):
  ledOut(0x0B07);  // scan limit: drive all 8 rows
  ledOut(0x0A0C);  // intensity: fairly bright
  ledOut(0x0900);  // decode mode off: raw bit patterns
  ledOut(0x0C01);  // shutdown register: normal operation
}

void clearLED() {
  for (int i=0;i<8;i++) {
    matrix[i] = 0x00;
  }
  refreshLED();
}

void refreshLED() {
  int n1, n2, n3;
  for (int i=0;i<8;i++) {
    n1 = i+1;
    n2 = matrix[i];
    n3 = (n1<<8)+n2;
    ledOut(n3);
  }
}

void updateLED(int i, int j, boolean b) {
  // = assignment   == equal to
  int t = 1;
  int n = 0;
  int m = 0;
  if (j==0) {
    m = 7;
  }
  else {
    m = j-1;
  }
  n = t << m;  // bit mask for the target column
  if (b) {
    matrix[i] = n | matrix[i];
  }
  else {
    n = ~n;
    matrix[i] = n & matrix[i];
  }
}

background subtraction

This is an interesting approach. The controlP5 library is used as a slider to adjust the level of background subtraction using RGB colours. The effect is pretty good, and this highlights the fact, I think, that some sort of potentiometer could possibly be used to adjust the sensitivity of the camera's exposure to picking up light in different conditions. I guess this was shown earlier with the threshold click-and-drag function in my first blob tracking example.
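Stripped of the video capture and controlP5 parts, the per-pixel test this approach performs boils down to the following (plain Java; the class and method names are mine):

```java
// Per-pixel background subtraction: a pixel counts as foreground when any of
// its RGB channels differs from the stored background pixel by more than the
// threshold (the value the slider controls in the sketch).
public class BgSubtract {
    static boolean isForeground(int curr, int bkgd, int threshold) {
        int diffR = Math.abs(((curr >> 16) & 0xFF) - ((bkgd >> 16) & 0xFF));
        int diffG = Math.abs(((curr >> 8) & 0xFF) - ((bkgd >> 8) & 0xFF));
        int diffB = Math.abs((curr & 0xFF) - (bkgd & 0xFF));
        return diffR > threshold || diffG > threshold || diffB > threshold;
    }

    public static void main(String[] args) {
        int bkgd = 0x202020;  // a dark grey background pixel
        System.out.println(isForeground(0x2A2A2A, bkgd, 60)); // small change: false
        System.out.println(isForeground(0xFFFFFF, bkgd, 60)); // big change: true
    }
}
```

Raising the threshold makes the silhouette less sensitive to noise but also less sensitive to genuine movement, which is exactly the trade-off the slider exposes.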

http://processing.org/discourse/yabb2/YaBB.pl?num=1257273175
/* This is the example of background substraction by golan levin;
i just messed around a little bit. after having started the sketch
press SPACE(!) to get an background image. after this the blob is
green and in x[] and y[] the edges are stored. out of those i'd
like to do my shape.*/

import processing.video.*;
import controlP5.*;
/* if you don't got the control P5 thing than delete all the stuff out
sliderValue is the treshold*/
int sliderValue = 60;

int numPixels;
int[] backgroundPixels;
Capture video;

ControlP5 controlP5;
ControlWindow controlWindow;

int num = 0;
int[] x = new int[0];
int[] y = new int[0];
boolean once = false;

void setup() {
  size(320, 240, P2D);
 
  //threshold slider
  controlP5 = new ControlP5(this);
  controlWindow = controlP5.addControlWindow("settings", 500, 100, 490, 180);
  controlWindow.setBackground(color(0));
  Slider s = controlP5.addSlider("sliderValue",0,255,60,40,120,350,20);
  s.setWindow(controlWindow);
 
  video = new Capture(this, width, height, 24);
  numPixels = video.width * video.height;
  // Create array to store the background image
  backgroundPixels = new int[numPixels];
  // Make the pixels[] array available for direct manipulation
  loadPixels();
}

void draw() {
  if (video.available()) {
    video.read();
    video.loadPixels();
    int presenceSum = 0;
   
    //j does count the lines
    int j = 0;
   
    x = new int[0];
    y = new int[0];
   
    for (int i = 0; i < numPixels; i++) {
     
      // ----background substraction as in the example by Golan Levin-------
      color currColor = video.pixels[i];
      color bkgdColor = backgroundPixels[i];
      // Extract the red, green, and blue components of the current pixel’s color
      int currR = (currColor >> 16) & 0xFF;
      int currG = (currColor >> 8) & 0xFF;
      int currB = currColor & 0xFF;
      // Extract the red, green, and blue components of the background pixel’s color
      int bkgdR = (bkgdColor >> 16) & 0xFF;
      int bkgdG = (bkgdColor >> 8) & 0xFF;
      int bkgdB = bkgdColor & 0xFF;
      // Compute the difference of the red, green, and blue values
      int diffR = abs(currR - bkgdR);
      int diffG = abs(currG - bkgdG);
      int diffB = abs(currB - bkgdB);
     
      pixels[i] = 0xFF000000 | (diffR << 16) | (diffG << 8) | diffB;
      // -------------------------------------------------------------------
     
     
      //------------------starting to mess around---------------------------
      if (diffR > sliderValue || diffG > sliderValue || diffB > sliderValue){  //make the silhouette green
        pixels[i] = color(0,255,0);   
       
        if(j > 0 && (pixels[i-1] != color(0,255,0))){  //quick and dirty edge detection from black to green
        x = append(x, i-((j-1)*width));  //getting x, when pixel changes from black to green
        y = append(y, j);            //same with y
        }
      }
     
      if(j > 0 && pixels[i-1] == color(0,255,0) && pixels[i] != color(0,255,0)){    //quick and dirty edge detection from green to black
        x = append(x, i-((j-1)*width));  //getting x, when pixel changes from green to black
        y = append(y, j);            //same with y
      }
     
      if(i > width && i > j*width) j++;  //count the lines we got
    }
    updatePixels(); // Notify that the pixels[] array has changed
   
    //here is where i somehow would like to do an silhouette out of all those x and y i got
    //but curently its only rectangles
    noStroke();
    fill(255,0,0);
    //beginShape();    
    for (int i = 0; i < x.length; i++){
    rect(x[i], y[i], 10,10);
    }
    //endShape();
    noFill();
  }
}


void keyPressed() {
  if (key == ' '){
    video.loadPixels();
    arraycopy(video.pixels, backgroundPixels);
  }
  if (key == 'e') exit();
}

Using native libraries

Here's a solution you don't need an extension library for; it uses the libraries native to Processing. Unfortunately I am not getting my Arduino to operate at the moment, so that's thrown a spanner in the works.
www.arduino.cc/cgi-bin/yabb2/YaBB.pl?action=print,num=1205334952
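The core of this approach, separated from the capture and drawing code, is thresholding each of the 8*8 camera pixels and packing every row of eight on/off bits into a byte for the serial port. A rough plain-Java sketch of that core (helper names are mine; the forum sketch keeps an int bit table rather than packed bytes):

```java
// Threshold a row-major 8x8 array of brightness values and pack each row of
// eight bits, MSB first, into one byte ready for serial transfer.
public class ThresholdPacker {
    static byte[] pack(int[] brightness, int threshold) {
        byte[] rows = new byte[8];
        for (int y = 0; y < 8; y++) {
            int b = 0;
            for (int x = 0; x < 8; x++) {
                b <<= 1;
                if (brightness[y * 8 + x] > threshold) b |= 1; // white pixel -> bit set
            }
            rows[y] = (byte) b;
        }
        return rows;
    }

    public static void main(String[] args) {
        int[] frame = new int[64];
        for (int i = 0; i < 64; i++) frame[i] = (i % 8 < 4) ? 255 : 0; // left half lit
        byte[] rows = pack(frame, 127);
        System.out.printf("row 0 = 0x%02X%n", rows[0]); // prints "row 0 = 0xF0"
    }
}
```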
import processing.video.*;
import processing.serial.*;
Capture video;
Serial arduinoSerPort;

color black = color(0);
color white = color(255);
int threshold = 127; // Set the threshold value
float pixelBrightness; // Declare variable to store a pixel's color
//single matrix=8,8,80 3x3Matrix=24,24,24 8x6Matrix=64,48,10
int numPixels;
int matrixWidth = 8;
int matrixHeight = 8;
int matrixMult = 80;
int screenWidth = matrixWidth * matrixMult;
int screenHeight = matrixHeight * matrixMult;
int cycles;
int bitValue;
int[] matrixBitTable = new int[matrixWidth*matrixHeight];
byte [] serialBytes = new byte[8];
String tempSerialString = "";
boolean dataDump = false;


void setup() {
 size(screenWidth, screenHeight);
 // Uses the default video input, see the reference if this causes an error
 video = new Capture(this, matrixWidth, matrixHeight, 12);
 //frameRate(6);
 numPixels = matrixWidth * matrixHeight;
 noCursor();
 // Open the port that the board is connected to and use the same speed (9600 bps)
 println(Serial.list());
 arduinoSerPort = new Serial(this, Serial.list()[0], 9600);
 //arduinoSerPort = new Serial(this, "/dev/tty.usbserial-A4004BWB", 9600);

}

//
void draw() {
 if (video.available()) {
   // 1) get input from video camera:
   video.read();
   video.loadPixels();
   loadPixels();
   int matrix_x = 0;
   int matrix_y = 0;

   // 2) Draw video pixels to a scaled grid
   // a single pixel from the video feed gets drawn as a set of pixels
   for (int i = 0; i < numPixels; i++) {
     pixelBrightness = brightness(video.pixels[i]);
     // test to see if the thresholded value is White:
     if (pixelBrightness > threshold) {
       for (int offset_x = 0; offset_x < matrixMult; offset_x++) {
         for (int offset_y = 0; offset_y < matrixMult; offset_y++) {
           int tempPixel = (matrix_x * matrixMult) + ((matrix_y * matrixMult) * screenWidth);
           pixels[tempPixel+offset_x+(screenWidth * offset_y)] = white;
         }
       }
       bitValue = 1;
     }
     // the thresholded value is black:
     else {
       for (int offset_x = 0; offset_x < matrixMult; offset_x++) {
         for (int offset_y = 0; offset_y < matrixMult; offset_y++) {
           int tempPixel = (matrix_x * matrixMult) + ((matrix_y * matrixMult) * screenWidth);
           pixels[tempPixel+offset_x+(screenWidth * offset_y)] = black;
         }
       }
       bitValue = 0;
     }
     // create an array to hold bitvalues for Arduino LED - this is using an 8x8 display only
     int boolLoc = matrix_x + (matrix_y * 8);
     matrixBitTable[boolLoc] = bitValue;
     // update a model of the video grid as we scan through the pixels
     matrix_x++;
     if(matrix_x == matrixWidth){
       matrix_x = 0;
       matrix_y++;
     }
   }

   // 4) update the image on the screen:
   updatePixels();
   cycles++;

   // 5) send bytes to Arduino
    // send some strangely symmetrical bytes as a header packet:
   for (int x = 0; x < 8; x++) {
     arduinoSerPort.write(85);
   }
   // put bitvalues from the camera into serial bytes  - this is using an 8x8 display only
   for (int x = 0; x < 8; x++) {
     tempSerialString = "";
     // build a bit sequence as a string for conversion
     for (int y = 0; y < 8; y++) {
       int tablePointer = y+(8*x);
       tempSerialString += str(matrixBitTable[tablePointer]);
     }
     // convert that binary string into a byte value, and send it!:
     int tempInt = unbinary(tempSerialString);
     print(tempSerialString+", ");
     byte tempByte = byte(tempInt);
     serialBytes[x] = tempByte;
     arduinoSerPort.write(serialBytes[x]);
     //print(serialBytes[x]+", ");
   }
   println("");
 
   //delay(10000);
 }
}
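The string-and-unbinary() conversion above can also be done directly with bit shifts. Here is a plain-Java sketch of the same packing (the class name BitPack is mine, just for illustration): each row of eight 0/1 cells from the matrix table collapses into one byte, MSB first, exactly what gets written to the serial port.

```java
public class BitPack {
    // Pack one row of 8 on/off cells (like a row of matrixBitTable) into a
    // single byte value, most significant bit first -- the same result the
    // sketch gets by building a binary string and calling unbinary().
    static int packRow(int[] bits) {
        int value = 0;
        for (int i = 0; i < 8; i++) {
            value = (value << 1) | (bits[i] & 1); // shift left, append next bit
        }
        return value;
    }

    public static void main(String[] args) {
        int[] row = {1, 0, 1, 0, 1, 0, 1, 0};
        System.out.println(packRow(row)); // "10101010" in binary = 170
    }
}
```

One byte per row means a whole 8x8 frame travels in just eight serial writes, which is why the header packet is also eight bytes.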

Thursday, 15 April 2010

blob detection plugin

I have chosen to further my research into other library plug-ins for Processing which have been built to enhance computer vision within the environment. This blobDetection plug-in sketch may be more suitable to my needs because of the large, well-defined blobs it detects. The devil was in the detail with the previous sketch: the more broken the blobs are, the harder it is to recognise a shape at such a small resolution. To be practical to power, and to keep the cost of the artifact down, the number of LEDs making up the jacket display will have to be low. Different light conditions will affect the blob-detection capabilities of the prototype, so the software needs the ability to adapt to changing conditions. The 'threshold' value in the previous sketch shows how blob detection would change under changing conditions. Designing a system that identifies light changes and adapts to them intelligently would be great, but I'd better take it one step at a time and not become too hubristic.
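As a side note, one simple way a threshold could adapt to lighting is to set it relative to the frame's average brightness rather than a fixed constant. This plain-Java sketch is just my own illustration of that idea (the class, method names, and the offset parameter are mine, not part of the BlobDetection library; pixel values are assumed to be 0-255 greyscale):

```java
public class AdaptiveThreshold {
    // Mean brightness of a greyscale frame.
    static int meanBrightness(int[] gray) {
        long sum = 0;
        for (int g : gray) sum += g;
        return (int) (sum / gray.length);
    }

    // Mark pixels as "bright" when they sit a fixed offset above the frame's
    // mean, so the cut-off rises and falls with the ambient light level.
    static boolean[] threshold(int[] gray, int offset) {
        int cut = meanBrightness(gray) + offset;
        boolean[] out = new boolean[gray.length];
        for (int i = 0; i < gray.length; i++) out[i] = gray[i] > cut;
        return out;
    }

    public static void main(String[] args) {
        int[] frame = {10, 20, 200, 30, 220, 15}; // mean = 82, cut = 122
        boolean[] mask = threshold(frame, 40);
        for (boolean b : mask) System.out.print(b ? '1' : '0'); // prints 001010
    }
}
```

A fuller system would smooth the mean over several frames so a passing shadow doesn't flip the threshold, but this is the core of it.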
//=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
//BlobDetection by v3ga
//May 2005
//Processing(Beta) v0.85
//
// Adding edge lines on the image process in order to 'close' blobs
//
// ~~~~~~~~~~
// software :
// ~~~~~~~~~~
// - Super Fast Blur v1.1 by Mario Klingemann
// - BlobDetection library
//
// ~~~~~~~~~~
// hardware :
// ~~~~~~~~~~
// - Sony Eye Toy (Logitech)
//
//=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

import processing.video.*;
import blobDetection.*;

Capture cam;
BlobDetection theBlobDetection;
PImage img;
boolean newFrame=false;

// ==================================================
// setup()
// ==================================================
void setup()
{
    // Size of applet
    size(640, 480);
    // Capture
    cam = new Capture(this, 40*4, 30*4, 15);
    // BlobDetection
    // img which will be sent to detection (a smaller copy of the cam frame);
    img = new PImage(80,60);
    theBlobDetection = new BlobDetection(img.width, img.height);
    theBlobDetection.setPosDiscrimination(true);
    theBlobDetection.setThreshold(0.2f); // will detect bright areas whose luminosity > 0.2f;
}

// ==================================================
// captureEvent()
// ==================================================
void captureEvent(Capture cam)
{
    cam.read();
    newFrame = true;
}

// ==================================================
// draw()
// ==================================================
void draw()
{
    if (newFrame)
    {
        newFrame=false;
        image(cam,0,0,width,height);
        img.copy(cam, 0, 0, cam.width, cam.height,
                0, 0, img.width, img.height);
        fastblur(img, 2);
        theBlobDetection.computeBlobs(img.pixels);
        drawBlobsAndEdges(true,true);
    }
}

// ==================================================
// drawBlobsAndEdges()
// ==================================================
void drawBlobsAndEdges(boolean drawBlobs, boolean drawEdges)
{
    noFill();
    Blob b;
    EdgeVertex eA,eB;
    for (int n=0 ; n<theBlobDetection.getBlobNb() ; n++)
    {
        b=theBlobDetection.getBlob(n);
        if (b!=null)
        {
            // Edges
            if (drawEdges)
            {
                strokeWeight(3);
                stroke(0,255,0);
                for (int m=0;m<b.getEdgeNb();m++)
                {
                    eA = b.getEdgeVertexA(m);
                    eB = b.getEdgeVertexB(m);
                    if (eA !=null && eB !=null)
                        line(
                            eA.x*width, eA.y*height,
                            eB.x*width, eB.y*height
                            );
                }
            }

            // Blobs
            if (drawBlobs)
            {
                strokeWeight(1);
                stroke(255,0,0);
                rect(
                    b.xMin*width,b.yMin*height,
                    b.w*width,b.h*height
                    );
            }

        }

      }
}

// ==================================================
// Super Fast Blur v1.1
// by Mario Klingemann
//
// ==================================================
void fastblur(PImage img,int radius)
{
 if (radius<1){
    return;
  }
  int w=img.width;
  int h=img.height;
  int wm=w-1;
  int hm=h-1;
  int wh=w*h;
  int div=radius+radius+1;
  int r[]=new int[wh];
  int g[]=new int[wh];
  int b[]=new int[wh];
  int rsum,gsum,bsum,x,y,i,p,p1,p2,yp,yi,yw;
  int vmin[] = new int[max(w,h)];
  int vmax[] = new int[max(w,h)];
  int[] pix=img.pixels;
  int dv[]=new int[256*div];
  for (i=0;i<256*div;i++){
    dv[i]=(i/div);
  }

  yw=yi=0;

  for (y=0;y<h;y++){
    rsum=gsum=bsum=0;
    for(i=-radius;i<=radius;i++){
      p=pix[yi+min(wm,max(i,0))];
      rsum+=(p & 0xff0000)>>16;
      gsum+=(p & 0x00ff00)>>8;
      bsum+= p & 0x0000ff;
    }
    for (x=0;x<w;x++){

      r[yi]=dv[rsum];
      g[yi]=dv[gsum];
      b[yi]=dv[bsum];

      if(y==0){
        vmin[x]=min(x+radius+1,wm);
        vmax[x]=max(x-radius,0);
      }
      p1=pix[yw+vmin[x]];
      p2=pix[yw+vmax[x]];

      rsum+=((p1 & 0xff0000)-(p2 & 0xff0000))>>16;
      gsum+=((p1 & 0x00ff00)-(p2 & 0x00ff00))>>8;
      bsum+= (p1 & 0x0000ff)-(p2 & 0x0000ff);
      yi++;
    }
    yw+=w;
  }

  for (x=0;x<w;x++){
    rsum=gsum=bsum=0;
    yp=-radius*w;
    for(i=-radius;i<=radius;i++){
      yi=max(0,yp)+x;
      rsum+=r[yi];
      gsum+=g[yi];
      bsum+=b[yi];
      yp+=w;
    }
    yi=x;
    for (y=0;y<h;y++){
      pix[yi]=0xff000000 | (dv[rsum]<<16) | (dv[gsum]<<8) | dv[bsum];
      if(x==0){
        vmin[y]=min(y+radius+1,hm)*w;
        vmax[y]=max(y-radius,0)*w;
      }
      p1=x+vmin[y];
      p2=x+vmax[y];

      rsum+=r[p1]-r[p2];
      gsum+=g[p1]-g[p2];
      bsum+=b[p1]-b[p2];

      yi+=w;
    }
  }

}
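What makes the Super Fast Blur fast is that it never re-sums the whole blur window: it keeps a running sum and, for each step, adds the pixel entering the window and subtracts the one leaving it (the dv[] lookup table then replaces the division). A one-dimensional plain-Java sketch of that trick (class name mine; edge indices clamped the same way the sketch clamps with min/max):

```java
public class BoxBlur1D {
    // Sliding-window box blur over one row: add the entering element,
    // subtract the leaving one, so each output costs O(1) not O(radius).
    static int[] blur(int[] src, int radius) {
        int n = src.length;
        int div = radius * 2 + 1;
        int[] out = new int[n];
        int sum = 0;
        // prime the window for x=0, clamping indices at the edges
        for (int i = -radius; i <= radius; i++)
            sum += src[Math.min(n - 1, Math.max(i, 0))];
        for (int x = 0; x < n; x++) {
            out[x] = sum / div;
            int enter = src[Math.min(x + radius + 1, n - 1)]; // pixel sliding in
            int leave = src[Math.max(x - radius, 0)];         // pixel sliding out
            sum += enter - leave;
        }
        return out;
    }

    public static void main(String[] args) {
        int[] row = {0, 0, 90, 0, 0};
        System.out.println(java.util.Arrays.toString(blur(row, 1))); // [0, 30, 30, 30, 0]
    }
}
```

The 2-D version above simply runs this once across rows and once down columns, which is why a larger radius costs no extra time per pixel.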






Wednesday, 14 April 2010

code for blob tracking

Noble, J., (2009). Programming Interactivity. United States: O'Reilly Media
// the following statement imports the OpenCV library
import hypermedia.video.*;
OpenCV opencv;
// screen size
int w = 640;
int h = 480;
// This is the threshold of detection
int threshold = 80;
boolean find=true;
void setup() {
  size( w*2+30, h*2+30 );
  opencv = new OpenCV( this );
  opencv.capture(w,h);
}

void draw() {
  background(0);
  // read image from camera
  opencv.read();
  image( opencv.image(), 10, 10 );              // RGB image
  image( opencv.image(OpenCV.GRAY), 20+w, 10 ); // GRAY image
  // here the difference between the background and the current image is drawn to screen
  opencv.absDiff();
  opencv.threshold(threshold);
  image( opencv.image(OpenCV.GRAY), 20+w, 20+h ); // absolute difference image
  // the following code detects the blobs and interprets them as rectangles
  Blob[] blobs = opencv.blobs( 100, w*h/3, 20, true );
  noFill();
  pushMatrix();
  translate(20+w,20+h);
  for( int i=0; i<blobs.length; i++ ) {
    Rectangle bounding = blobs[i].rectangle;
    noFill();
    rect( bounding.x, bounding.y, bounding.width, bounding.height );
    float area = blobs[i].area;
    float circumference = blobs[i].length;
    Point centroid = blobs[i].centroid;
    // Point[] points: the points which define the blob's contour
    Point[] points = blobs[i].points;
    // centroid: the binary centre of the blob
    stroke(0,0,255);
    line( centroid.x-5, centroid.y, centroid.x+5, centroid.y );
    line( centroid.x, centroid.y-5, centroid.x, centroid.y+5 );
    fill(255,0,255,64);
    stroke(255,0,255);
    if ( points.length>0 ) {
      beginShape();
      for( int j=0; j<points.length; j++ ) {
        vertex( points[j].x, points[j].y );
      }
      endShape(CLOSE);
    }
  }
  popMatrix();
}

// the space bar will take a new background image for pixel comparison
void keyPressed() {
  if ( key==' ' ) opencv.remember();
}

// dragging the mouse alters the threshold
void mouseDragged() {
  threshold = int( map(mouseX,0,width,0,255) );
}

public void stop() {
  opencv.stop();
  super.stop();
}

blob tracking

Here's a video where I used OpenCV for blob tracking. This is my first experiment with a webcam and Processing. The code will be posted in the next few days. In this example the image from the camera is continually compared with a background image. The space bar captures a new background image, and the threshold can be varied by dragging the mouse. This library, and another dedicated blob-detection library written solely for Processing, are available for download from the Processing website:
http://processing.org/reference/libraries/
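The background-comparison step that the video shows boils down to a per-pixel absolute difference against the remembered frame, followed by a threshold. A minimal plain-Java sketch of that logic (the class and method names are mine; greyscale values are assumed for simplicity, whereas OpenCV works per channel):

```java
public class BackgroundDiff {
    // What absDiff() + threshold() amount to per pixel: compare the live
    // frame against a remembered background and keep only the pixels that
    // changed by more than the threshold.
    static boolean[] changedPixels(int[] frame, int[] background, int threshold) {
        boolean[] changed = new boolean[frame.length];
        for (int i = 0; i < frame.length; i++) {
            changed[i] = Math.abs(frame[i] - background[i]) > threshold;
        }
        return changed;
    }

    public static void main(String[] args) {
        int[] background = {50, 50, 50, 50};   // captured when space is pressed
        int[] frame      = {52, 200, 48, 140}; // live camera frame
        boolean[] moved = changedPixels(frame, background, 80);
        for (boolean b : moved) System.out.print(b ? '1' : '0'); // prints 0101
    }
}
```

This also makes clear why pressing space matters: if the lighting drifts after the background was captured, every pixel starts to "change" and the blobs flood the frame.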

Tuesday, 13 April 2010

I created this mock image in an earlier stage of development.

Changing number of frames in scrolling text

The following video shows how I have changed the number of frames. A frame is set as an 8x8 array. Initially this was set to 8, as reflected in the code I posted previously. Here are the alterations I made;

byte bitmaps[10][8][8];     // Space for 10 frames of 8x8 pixels
was changed to;
byte bitmaps[12][8][8];     // Space for 12 frames of 8x8 pixels


currentBitmap = 8;
  targetBitmap = 8;
  lastTime = millis();
was changed to;
currentBitmap = 12;
  targetBitmap = 12;
  lastTime = millis();


targetBitmap%=8;  // there are 8 frames, from 0 to 7
was changed to;
targetBitmap%=12;  // there are 12 frames, from 0 to 11


void drawFrame(byte frame[8][8])
was changed to;
void drawFrame(byte frame[12][8])

I also added more arrays (12 in total). They are easy to work with: you simply draw your text or symbol by changing the digits in the array, as each array accurately represents a single 8x8 LED matrix frame.
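The frame-count changes above all serve the same wrap-around: the animation advances to the next frame and the modulo folds it back to 0 when it runs off the end, which is why every `%=` has to be updated when frames are added. A tiny plain-Java sketch of that cycling (class name mine):

```java
public class FrameCycle {
    // Advance to the next frame index, wrapping back to 0 after the last
    // frame -- the same job targetBitmap++ followed by targetBitmap %= N does.
    static int next(int current, int frameCount) {
        return (current + 1) % frameCount;
    }

    public static void main(String[] args) {
        int frame = 10;
        for (int i = 0; i < 4; i++) {
            frame = next(frame, 12);
            System.out.print(frame + " "); // prints 11 0 1 2
        }
    }
}
```

Keeping the frame count in one variable instead of repeating the literal 8 or 12 in several places would avoid having to hunt down every `%=` when frames are added.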







Code for scrolling text

int bits[8] = {
  128, 64, 32, 16, 8, 4, 2, 1 };

int clock = 13;  // pin SCK del display
int data = 12;   // pin DI del display
int cs = 10;     // pin CS del display

byte bitmaps[10][8][8];     // Space for 10 frames of 8x8 pixels
byte displayPicture[8][8];  // What is currently ON display.
int currentBitmap = 0;      // current displayed bitmap, per display
int targetBitmap = 1;       // Desired image, for the animation to strive for, per display
int step;                   // animation step, usually from 0 to 8, per screen
int stepDelay = 19;         // the wait time between each animation frame
unsigned int delayCounter;           // holder for the delay, as to not hog to processor, per screen
int animationStyle = 0;     // different types of animation 0 = slide 1 = frame replace
unsigned long lastTime;     // display refresh time

void setup() {
  Serial.begin(115200);  // used for debug

  matrixInit();
  int bitmap = 0;

  // black color for buildings 0
  // sky rotating color 1-7
  // here I changed the digits around an added frames
  // bitmap 0
  addLineTobitmap(bitmap,0,1,1,1,1,1,1,1,1);
  addLineTobitmap(bitmap,1,1,1,1,1,4,4,4,1);
  addLineTobitmap(bitmap,2,1,1,1,4,1,1,1,1);
  addLineTobitmap(bitmap,3,1,1,4,1,1,1,1,1);
  addLineTobitmap(bitmap,4,1,1,4,1,1,1,1,1);
  addLineTobitmap(bitmap,5,1,1,1,4,1,1,1,1);
  addLineTobitmap(bitmap,6,1,1,1,1,4,4,4,1);
  addLineTobitmap(bitmap,7,1,1,1,1,1,1,1,1);

  // bitmap 1
  bitmap++;
  addLineTobitmap(bitmap,0,1,1,1,1,1,1,1,1);
  addLineTobitmap(bitmap,1,1,4,1,1,1,4,1,1);
  addLineTobitmap(bitmap,2,1,4,1,1,1,4,1,1);
  addLineTobitmap(bitmap,3,1,4,4,4,4,4,1,1);
  addLineTobitmap(bitmap,4,1,4,1,1,1,4,1,1);
  addLineTobitmap(bitmap,5,1,4,1,1,1,4,1,1);
  addLineTobitmap(bitmap,6,1,4,1,1,1,4,1,1);
  addLineTobitmap(bitmap,7,1,1,1,1,1,1,1,1);

  // bitmap 2
  bitmap++;
  addLineTobitmap(bitmap,0,1,1,1,1,1,1,1,1);
  addLineTobitmap(bitmap,1,1,4,1,1,1,4,1,1);
  addLineTobitmap(bitmap,2,1,4,1,1,1,4,1,1);
  addLineTobitmap(bitmap,3,1,4,4,4,4,1,1,1);
  addLineTobitmap(bitmap,4,1,4,1,1,1,4,1,1);
  addLineTobitmap(bitmap,5,1,4,1,1,1,4,1,1);
  addLineTobitmap(bitmap,6,1,4,4,4,4,1,1,1);
  addLineTobitmap(bitmap,7,1,1,1,1,1,1,1,1);

  // bitmap 3
  bitmap++;
  addLineTobitmap(bitmap,0,1,1,1,1,1,1,1,1);
  addLineTobitmap(bitmap,1,1,4,4,4,4,4,1,1);
  addLineTobitmap(bitmap,2,1,1,1,4,1,1,1,1);
  addLineTobitmap(bitmap,3,1,1,1,4,1,1,1,1);
  addLineTobitmap(bitmap,4,1,1,1,4,1,1,1,1);
  addLineTobitmap(bitmap,5,1,1,1,4,1,1,1,1);
  addLineTobitmap(bitmap,6,1,4,4,4,4,4,1,1);
  addLineTobitmap(bitmap,7,1,1,1,1,1,1,1,1);

  // bitmap 4
  bitmap++;
  addLineTobitmap(bitmap,0,1,1,1,4,4,4,1,1);
  addLineTobitmap(bitmap,1,1,1,4,1,1,1,4,1);
  addLineTobitmap(bitmap,2,1,1,1,1,1,1,4,1);
  addLineTobitmap(bitmap,3,1,1,1,4,4,4,1,1);
  addLineTobitmap(bitmap,4,1,1,4,1,1,1,1,1);
  addLineTobitmap(bitmap,5,1,1,4,1,1,1,4,1);
  addLineTobitmap(bitmap,6,1,1,1,4,4,4,1,1);
  addLineTobitmap(bitmap,7,1,1,1,1,1,1,1,1);

  // bitmap 5
  bitmap++;
  addLineTobitmap(bitmap,0,1,1,1,1,1,1,1,1);
  addLineTobitmap(bitmap,1,1,1,1,4,1,1,1,1);
  addLineTobitmap(bitmap,2,1,1,1,4,1,1,1,1);
  addLineTobitmap(bitmap,3,1,1,1,4,1,1,1,1);
  addLineTobitmap(bitmap,4,1,1,1,4,1,1,1,1);
  addLineTobitmap(bitmap,5,1,1,1,4,1,1,1,1);
  addLineTobitmap(bitmap,6,1,4,4,4,4,4,1,1);
  addLineTobitmap(bitmap,7,1,1,1,1,1,1,1,1);

  // bitmap 6
  bitmap++;
  addLineTobitmap(bitmap,0,1,1,9,9,9,9,1,1);
  addLineTobitmap(bitmap,1,1,9,4,4,4,4,9,1);
  addLineTobitmap(bitmap,2,9,4,1,1,1,1,4,9);
  addLineTobitmap(bitmap,3,9,4,1,1,1,1,4,9);
  addLineTobitmap(bitmap,4,9,4,1,1,1,1,4,9);
  addLineTobitmap(bitmap,5,9,4,1,1,1,1,4,9);
  addLineTobitmap(bitmap,6,1,9,4,4,4,4,9,1);
  addLineTobitmap(bitmap,7,1,1,9,9,9,9,1,1);

  // bitmap 7
  bitmap++;
  addLineTobitmap(bitmap,0,0,0,0,0,0,0,0,0);
  addLineTobitmap(bitmap,1,0,0,0,0,5,5,5,0);
  addLineTobitmap(bitmap,2,0,5,5,5,5,5,0,0);
  addLineTobitmap(bitmap,3,0,0,5,5,5,5,0,0);
  addLineTobitmap(bitmap,4,0,0,5,5,0,0,0,0);
  addLineTobitmap(bitmap,5,0,0,5,0,0,0,0,0);
  addLineTobitmap(bitmap,6,5,5,5,0,0,0,0,0);
  addLineTobitmap(bitmap,7,0,0,5,0,0,0,0,0);
 
 
  currentBitmap = 7;
  targetBitmap = 7;
  lastTime = millis();
}

void loop() {
  if(currentBitmap == targetBitmap) {
    targetBitmap++;
    targetBitmap%=8;  // there are 8 frames, from 0 to 7
  }

  if((millis() - lastTime) > 70) {
    handleAnimations();
    lastTime = millis();
    Serial.print("currentBitmap: ");
    Serial.print(currentBitmap);
    Serial.print(" targetBitmap: ");
    Serial.println(targetBitmap);
  }
  drawFrame(displayPicture);
}

void drawFrame(byte frame[8][8]) {
  digitalWrite(clock, LOW);  //sets the clock for each display, running through 0 then 1
  digitalWrite(data, LOW);   //ditto for data.
  delayMicroseconds(10);
  digitalWrite(cs, LOW);     //ditto for cs.
  delayMicroseconds(10);
  for(int x = 0; x < 8; x++) {
    for (int y = 0; y < 8; y++) {
      //Drawing the grid. x across then down to next y then x across.
      writeByte(frame[x][y]); 
      delayMicroseconds(10);
    }
  }
  delayMicroseconds(10);
  digitalWrite(cs, HIGH);
}

// prints out bytes. Each colour is printed out.
void writeByte(byte myByte) {
  for (int b = 0; b < 8; b++) {  // converting it to binary from colour code.
    digitalWrite(clock, LOW);
    if ((myByte & bits[b])  > 0) {
      digitalWrite(data, HIGH);
    }
    else {
      digitalWrite(data, LOW);
    }
    digitalWrite(clock, HIGH);
    delayMicroseconds(10);
    digitalWrite(clock, LOW);
  }
}

void matrixInit() {
  pinMode(clock, OUTPUT); // sets the digital pin as output
  pinMode(data, OUTPUT);
  pinMode(cs, OUTPUT);
}

void handleAnimations() {     
  if(currentBitmap != targetBitmap){
    // the function takes 3 variables
    drawAnimationToDisplay(currentBitmap, targetBitmap, step);
    delayCounter++;
    if(delayCounter > stepDelay){
      step--;
    }
    if(step < 0){
      step = 8;
      currentBitmap = targetBitmap;
    }
  }
  else {
    drawBitmapToDisplay(currentBitmap);
  }
}

void drawBitmapToDisplay(int bitmap) {
  for(int x = 0; x < 8; x++) {
    for (int y = 0 ; y < 8; y++) {
      //copies the bitmap to be displayed ( in memory )
      displayPicture[x][y] = bitmaps[bitmap][x][y];
    }
  }   
}

void drawAnimationToDisplay(int bitmap, int targetBitmap, int step) { 
  switch (animationStyle) {
  case 0:   // slide transition
    for(int x = 0; x < 8-step; x++) {
      for (int y = 0 ; y < 8; y++) {
        displayPicture[x][y] = bitmaps[targetBitmap][x+step][y];
      }
    }
    for(int x = 0; x < step ;x++) {
      for (int y = 0 ; y < 8;y++) {
        displayPicture[8-step+x][y] = bitmaps[bitmap][x][y];
      }
    }
    break;
  case 1:  // frame by frame
    for(int x = 0; x < 8; x++) {
      for (int y = 0 ; y < 8; y++) {
        displayPicture[x][y] = bitmaps[bitmap][x][y];
      }
    }
    break; 
  }
}

void addLineTobitmap(int bitmap, int line, byte a,byte b,byte c, byte d, byte e, byte f,byte g, byte h) {

 
  bitmaps[bitmap][7][line] = a;
  bitmaps[bitmap][6][line] = b;
  bitmaps[bitmap][5][line] = c;
  bitmaps[bitmap][4][line] = d;
  bitmaps[bitmap][3][line] = e;
  bitmaps[bitmap][2][line] = f;
  bitmaps[bitmap][1][line] = g;
  bitmaps[bitmap][0][line] = h;
}
Here are some examples of animation and scrolling text using the LED matrix. The code for the animation was adapted to display my name and a dolphin animation. The photo shows the circuit, and the details below explain how to wire the devices. A USB cable powers the devices by connecting the board to your computer.


SPI IN, Matrix       Arduino Duemilanove board

GND-                  (Power) GND
MOSI-                 digital pin 12
CS-                   digital pin 10
SCLK-                 digital pin 13
VCC-                  (Power) 5V




Friday, 2 April 2010

My idea is to allow you to be a passive activist through the clothes you wear. The top will be able to display user-defined slogans which stand out. This could be seen as a scary prospect, as people may not want to stand out. Football supporters proudly wear their team's shirts but run the risk of receiving abuse from other members of the public who do not share their enthusiasm. However, many individuals and groups are brave enough to stand up for what they believe in, and my t-shirt allows them to do this while going about their everyday lives in a passive manner. Since the slogans are user-defined, it is down to the user to be responsible and cautious about the messages they portray. "Save the dolphins", for instance, is a nice example of a message you could promote with this wearable method of public persuasion.

The public's conscience is affected daily by messages we receive through the media, or subliminally through alternative vehicles (like my clothing line). My product aims to promote positive change by giving people the power of a voice. Clothes are personal possessions, so it is only right that they should be used as a means of expression.



This is partly a backlash against corporate media, which has so much influence in the public domain. Through the press and advertising we are continually bombarded with ideologies crafted for commercial and political purposes. Money talks, and large corporations have the funding to sell their ideas and values to us. Important issues such as climate change and deforestation have been ignored by the media in the past, which diverted the public's attention to "more important" issues like celebrities' cellulite until the situation became critical. Protestors were once stereotyped as tree-hugging do-gooders who hadn't quite gotten over the '60s, but I think the public is, and needs to become, more conscious of issues beyond a local level. I think new devices which get important messages across to the public passively are important in raising awareness of key environmental issues. Other areas like public health could adopt similar strategies: "Stop Smoking" and "Wear a Condom" are just examples of positive slogans which could be used.

I guess that since interaction design is all about how we interact with our environment, it is fitting that my product has the wider environment in mind.

It is time the masses spoke rather than the minority; that's real democracy. There are injustices we all feel we should do something about but don't find the time or willpower to act on. This technology allows people to express themselves more freely, to positive ends, as they go about their everyday lives.




The activists top.

A shirt/jacket into which you can input text, which will then be displayed by LEDs embedded in the clothing via a microprocessor. This will be designed with political and social persuasion in mind. Light or infrared sensors will detect people approaching and display a silhouette of them as a background to the text. I have decided the silhouette should be green, being the most restful and natural of all colours, conveying a positive aura at one with the environment. The idea of reducing your carbon footprint and being visually registered in this manner is also something I find quite fitting to this idea. Text will be displayed using green's complementary colour, red, which also connotes awareness. My prototype will look at the interactive background specifically, since my idea is to incorporate my project with existing technology created by another artist, Barbara Layne. Her video is included on my DVD, or it can be seen at the following URL: http://www.youtube.com/watch?v=B9obd_JRgek. This allows me to concentrate on the interactive background alone, which I believe is enough of a challenge for a project with this narrow a time frame. I aim to design wear for different occasions: t-shirts and jumpers for both females and males. The cloth or material should be black, as this will make the LEDs stand out more.



I was also inspired by a talk at www.ted.com/talks/sendhil_mullainathan.html. It deals with "the last mile problem": awareness and how it can be brought about. A BMW advertisement is used as an example in the talk, promoting the idea that you are safe in a BMW because you can avoid collisions, rather than because it is a solid car. It questions the best approach to selling an idea: what is the most effective method of delivery?

For example, slogans could read, "We are heading towards a greener world. www.greenpeace.org.uk". This is much-needed positivity in an age of pessimism.

In the Adam Curtis documentary "The Century of the Self", he examines the work of Edward Bernays, a pioneer of public relations who was brought in to change the public's perception of women and cigarettes, as smoking was originally seen as a man's habit. Female models were employed to walk around smoking on a national event day, so as to catch the media's attention. It worked, with a massive uptake in female smokers from that day forth!

The ethnography interviews and influences are available on the provided DVD. My research goes beyond what I would call an ethnographic study because of the wide audience I will be trying to capture with my product. What I have deduced while observing the public as a whole is that younger people, particularly those under 30, are more likely to brandish logos and slogans on their clothes than older people. This may be because people become more conservative and aware of the attention drawn by large logos or slogans as they age. I also found that males were significantly more likely than females to wear garments displaying large logos and slogans, although females were by no means absent, and this may simply have been down to current clothes trends. It could also be because males are keener to stand out, or less inhibited about attracting attention in public, at least unwanted attention.


I looked at people's usage of the ubiquitous computing products they carry with them, and how this related to their opinion of my product; secondly, their attitudes towards freedom of speech. My interviews unveiled some common thoughts among my interviewees, including a generally positive response to my product, which I was very pleased about. Not everyone felt they would wear the product, but there was a general consensus that it was a good idea and that they could envisage people wearing it. I found that those with more extrovert personalities, and those carrying more mobile digital devices, were more enthusiastic and more likely to wear the clothing. Making the product's slogan user-defined was a popular idea because it allows personalisation. Most interviewees saw the advantage of a brighter display, since the whole idea is to raise awareness; but the fact that not everyone is so brave, and that there may be a market for a more subtle message, was also brought up.

The most commonly given reason for the general public's lack of interest in demonstrations was laziness, which could easily be bypassed by wearing this type of clothing, since it requires little effort.



I aim to use bright LEDs as outputs, with the possibility of using super-bright ones (www.superbrightleds.com), although their energy consumption may make them impractical. The brighter they are, the more visible they will be during daylight. If I choose to use light-dependent input sensors, the product can only work where there is high contrast between light and dark.



Choosing the input transducer type is probably the most difficult part. The basic idea is to have an equal number of input sensors and output LEDs, then map the input data to the outputs. An alternative approach would be to capture a simple image outline using a low-resolution infrared camera, and then reduce the resolution to a value where it could be mapped to an array of LEDs, creating a representation of the image. A closer look at the capabilities of different sensors has been organised into the table below.
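The resolution-reduction idea above can be sketched as block averaging: each block of camera pixels collapses to one LED cell. A plain-Java illustration (class name mine; greyscale values assumed, and the source dimensions assumed to divide evenly by the output dimensions):

```java
public class Downsample {
    // Shrink a camera image to LED-matrix resolution by averaging each
    // block of source pixels down to one output cell.
    static int[][] shrink(int[][] src, int outW, int outH) {
        int bw = src[0].length / outW; // block width in source pixels
        int bh = src.length / outH;    // block height in source pixels
        int[][] out = new int[outH][outW];
        for (int y = 0; y < outH; y++) {
            for (int x = 0; x < outW; x++) {
                int sum = 0;
                for (int dy = 0; dy < bh; dy++)
                    for (int dx = 0; dx < bw; dx++)
                        sum += src[y * bh + dy][x * bw + dx];
                out[y][x] = sum / (bw * bh); // average brightness of the block
            }
        }
        return out;
    }

    public static void main(String[] args) {
        int[][] img = {
            {0,   0,   255, 255},
            {0,   0,   255, 255},
            {255, 255, 0,   0},
            {255, 255, 0,   0},
        };
        int[][] small = shrink(img, 2, 2);
        System.out.println(small[0][0] + " " + small[0][1]); // prints 0 255
    }
}
```

Each averaged cell would then be thresholded on/off for its LED, which is effectively what the Capture(this, matrixWidth, matrixHeight, ...) sketch earlier does by asking the camera for the low resolution directly.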

To create an array of sensors and output transducers using a minimal number of pins, my research has led me to the following schematics, which represent a structure known as row-column scanning. Other methods such as shift registers, latches, or multiplexers may be used in combination, and will be investigated during the prototyping stage of my product's development.
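The pin saving from row-column scanning comes from addressing: R*C LEDs need only R+C pins, because each LED lights when its row pin and its column pin are both active. The index arithmetic behind that can be sketched in plain Java (class and method names mine):

```java
public class RowColumnScan {
    // Map a linear LED index to the (row pin, column pin) pair that must
    // both be driven for that LED to light in a row-column scanned matrix.
    static int[] pinsFor(int ledIndex, int columns) {
        int row = ledIndex / columns; // which row pin
        int col = ledIndex % columns; // which column pin
        return new int[]{row, col};
    }

    public static void main(String[] args) {
        // 8x8 matrix: 64 LEDs addressed through 8 row pins + 8 column pins
        int[] pins = pinsFor(42, 8);
        System.out.println("row " + pins[0] + ", col " + pins[1]); // prints row 5, col 2
    }
}
```

In practice the rows are strobed one at a time fast enough that persistence of vision makes the whole frame appear lit, which is the part the schematics and the matrix's driver chip handle.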


Certain problems remain with implementing the design on a wearable top. Clothing usually deforms when worn, which could interfere with capturing and displaying the desired silhouette accurately. To get around this, I would make the areas on the front and back of the jacket used for capturing and displaying data reasonably inflexible, made of a material which bounces back to its original shape if deformed. The front of the jacket will zip up one side, as illustrated below, rather than through the traditional centre position.

In artificial light there may be multiple light sources, leading to ineffective shadows being produced by people trying to interact with the display. This is an argument for using an infrared camera, which could detect heat rather than light, also avoiding the problem of displaying inanimate objects (unless they emit heat). I currently don't have the technical knowledge to propose the use of an infrared camera, but it is a route I will look at in more detail during prototyping.

Originally I thought that my product's text display should be completely open to the user, but then I considered its misuse. You do not want people going around with swear words lit up in neon or, as suggested by my interviewees, with campaigns which may be highly offensive to some groups. This could undermine the product and lead to the opposite effect to that which it is designed to achieve. To combat this problem I could provide a wide selection of effective slogans for the user to choose from.

Proxemics is an issue, as others have to be close enough to the wearer to cast a shadow. You are in fact inviting people into your personal space by wearing the technology, which projects the outline of the unknown participant (because of the polar positioning of the input and output transducers).