Physical Computing

The first thing was how to make the cubes stay in place after being hit. Somehow they knew how to hop or roll away.. and with the projection, we needed them sturdy and in place. After hours of sewing, this is how we did it:

(this is a .GIF, so be sure to click it if it doesn’t play right away)

karate cube gif

the body weight keeps the cube in place when it’s hit, and we sewed a robe for two cubes so that they can be transformed: standing on top of each other (for adults) or next to each other (so that the height suits the kids).


We also got rid of the messy wires and sensors, and in the coat that we made, we added a pocket for conductive foam. Using that and conductive thread (the sewing never ends), I think we did a good job making the pressure sensor (thanks, Antonius, for teaching us how to do it).


The fabric is sewed with conductive thread on opposite sides of the foam, and it works exactly the same as an FSR. Look inside our cube:

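Since the foam works like an FSR, it forms one leg of a voltage divider with a fixed resistor: squeezing the foam lowers its resistance, so the analog reading goes up. Here is a quick sketch of that math in plain Java (the 10k pull-down value and the class name are assumptions for illustration, not measured from our circuit):

```java
// Voltage-divider math behind the foam sensor: the foam is the top leg,
// a fixed resistor is the bottom leg, and the ADC sees the divided voltage.
// FIXED_OHMS is an assumed value, not measured from our actual circuit.
public class FoamSensor {
    static final double FIXED_OHMS = 10000.0; // assumed pull-down resistor
    static final int ADC_MAX = 1023;          // 10-bit Arduino ADC

    // Predict the analogRead() value for a given foam resistance:
    // lower foam resistance (more squeeze) gives a higher reading.
    public static int adcReading(double foamOhms) {
        double fraction = FIXED_OHMS / (FIXED_OHMS + foamOhms);
        return (int) Math.round(fraction * ADC_MAX);
    }
}
```

So pressing harder pushes the reading toward the top of the scale, which is what the Processing sketch later maps into screen coordinates.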

We put an FSR underneath the board, so that it changes the background music and starts the game (working like the game’s start button).
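The start-button idea can be sketched in plain Java (this is not our actual code, and the threshold numbers are made up): fire once when the pressure crosses a threshold, then re-arm only after it drops back down, so a wobbly reading can’t restart the game over and over.

```java
// Illustrative start-button logic for an FSR under a board.
// PRESS and RELEASE are hypothetical thresholds, not measured values.
public class StartButton {
    static final int PRESS = 300;   // reading that counts as a step
    static final int RELEASE = 150; // must drop below this to re-arm
    private boolean armed = true;

    // Feed in each analog reading; returns true exactly once per press.
    public boolean update(int reading) {
        if (armed && reading > PRESS) {
            armed = false;
            return true; // start the game / switch the music
        }
        if (!armed && reading < RELEASE) {
            armed = true; // sensor released, ready for the next press
        }
        return false;
    }
}
```

The gap between the two thresholds (hysteresis) is what keeps a noisy reading hovering near the trigger point from firing repeatedly.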


Woon loves our red reset button (circuit inside), so there’s no doubt whose room it will end up in:


sensor test and some sound effects:










After building a circuit with multiple pressure sensors, the technical challenge was to write code in Processing and turn the serial data from the sensors into a meaningful sketch on screen, since the project was about both a physical and a graphical interface. The goal was to make an interactive space, a game scene, with a player and an audience ready to play next. The system is meant to be interactive: a physical object on a scene equipped with sensors, sound responding to it, and a graphical projection onto the scene and the player. Conceptually the project was elaborate, and the code required the same approach. Mapping the sensors so their values become meaningful in the context of the game was not the hard part; linking them to the sketch turned out to be. Trying to use only the maximum values from sensors that constantly stream data demanded more variables. On top of that, we ran into the problem of noisy analog readings, and averaging the values turned out to be the solution. We faced some very new concepts in computing.
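The averaging fix can be sketched like this in plain Java (the class name and window size are ours for illustration, not from our sketch): keep a small ring buffer of recent readings and report their mean, then rescale with the same formula Processing’s map() uses.

```java
// Smoothing for a jittery analog sensor: average the last N readings so
// the sketch doesn't flicker. SensorSmoother and the window size of 4 in
// the usage below are illustrative choices, not from our actual code.
public class SensorSmoother {
    private final int[] readings; // ring buffer of recent raw values
    private int index = 0, total = 0, count = 0;

    public SensorSmoother(int windowSize) {
        readings = new int[windowSize];
    }

    // Feed in a new raw reading, get back the running average.
    public int smooth(int reading) {
        total -= readings[index];  // drop the oldest value
        readings[index] = reading; // store the new one
        total += reading;
        index = (index + 1) % readings.length;
        if (count < readings.length) count++;
        return total / count;
    }

    // Same linear rescale as Processing's map(): sensor range -> screen range.
    public static float map(float v, float inMin, float inMax,
                            float outMin, float outMax) {
        return outMin + (outMax - outMin) * (v - inMin) / (inMax - inMin);
    }
}
```

A smoothed value fed through map() is what finally moves things on screen without the trembling you get from raw readings.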


first sketch: white background, one pressure sensor:



more sensors and a switch added:



Woonyung (lying down) testing sensor numbers:



and final testing!


We started with a cube; one of the many wonderful black firm-foam cubes on the ITP floor. Most of them usually serve a sitting purpose, but I have no idea where they came from, or what they were actually made for. Some crazy project, maybe? Who knows, but they are so very tactile, thanks to the material they’re made of, that they can’t stay just for sitting…

Woonyung and me:


Truth be told, I’ve always been the one who doesn’t even press a button without reading the instructions first. And it is boring, but hey, how else am I supposed to figure out how a microwave works? Or a door? But I had never thought that a door could be of poor design, or how illegible it becomes when it needs a ‘pull’/‘push’ label. Obviously, it can. ‘Labels are never excuses for bad design’, and even a label can be put in the wrong place. Plenty of room for misleading, and plenty of things to think through; but seriously think through.

It looks like a job that takes real thinking, but I feel rather relieved that all those labels are not supposed to be there and things can be legible by their appearance alone.

Look at the toaster: bread on the table, a knife, maybe some butter? I think you can see the movie rolling in your head of how this goes. Self-explanatory.

Installations should be ‘toaster, bread and butter’. And please, no messing around when someone is making their own lunch. Make it and leave it. Do the good thinking beforehand and let the interaction do the ‘talking’.



I love this drawing. It tells everything about the design process and the mental image you get when you just throw a glance at an object and get all the information available from its appearance. Conceptual models and the visibility of things; it seems like the most important design lesson ever.

Teapot, thanks a million! Now I’ve got to go make my design worth a hundred pictures.

first the famous ball as always:

then the processing sketch with some tweaks:

after mapping:

the code:
import processing.serial.*;

Serial myPort; // The serial port
float bgcolor; // Background color
float fgcolor; // Fill color
float xpos = 130; // X-coordinate
float ypos = 210; // Y-coordinate
int radius = 20; // wheel Radius

float noteX;
float noteY;

float carAngle = 0;
float angleChange = 0.19;
float angleLimit = -6;
float angleBall;
void setup() {
size(800, 400);

// List all the available serial ports.
// I know that the first port in the serial list on my mac
// is always my Arduino module, so I open Serial.list()[0].
// Change the 0 to the appropriate number of the serial port
// that your microcontroller is attached to.
String portName = Serial.list()[0];
myPort = new Serial(this, portName, 9600);

// read bytes into a buffer until you get a linefeed (ASCII 10):
myPort.bufferUntil('\n');
}

void draw() {
background(255); // white background

// bounce the car angle between angleLimit and 0:
carAngle -= angleChange;

if (carAngle < angleLimit || carAngle > 0) {
angleChange = -angleChange;
carAngle -= angleChange;
}
//rear wheel
fill(#36322E); //biggest
ellipse(xpos, ypos, radius, radius);

fill(255); //white
ellipse(xpos, ypos, radius-7, radius-7);
ellipse(xpos, ypos, radius-11, radius-11);

fill(#36322E); //thinnest
ellipse(xpos, ypos, radius-18, radius-18); //inside

//FRONT wheel
fill(#36322E); //biggest
ellipse(xpos+330, ypos, radius, radius);

fill(255); //white
ellipse(xpos+330, ypos, radius-7, radius-7);
ellipse(xpos+330, ypos, radius-11, radius-11);

ellipse(xpos+330, ypos, radius-18, radius-18); //inside if ever BLIN-BLIN


translate(xpos, 210);
rotate(radians(carAngle)); // tilt the body with the mapped sensor value

// car body:
beginShape();
vertex(0, ypos-210-radius * 1.7); //1
vertex(0+radius*2.2/2, ypos-210-radius * 1.7*0.8); //2
vertex(130 + 242, ypos-210-radius * 1.7*1.1); //3
vertex(130 + 242+radius*0.6, ypos-210-radius * 1.7*0.7); //4
vertex(130 + 242, ypos-210+ radius/2); //5
vertex(130 + 242-radius*0.8, ypos-210+ radius/2); //6
vertex(130 + 242-radius*1.1, ypos-210- radius); //7
vertex(130 + 242-radius*0.8-radius*2.2, ypos-210- radius); //8
vertex(130 + 242-radius*2.2*1.7, ypos-210+ radius*0.6); //9
vertex(130+10, ypos-210+ radius*0.6); //10
vertex(0+radius*2.2/2, ypos-210- radius*0.6); //11
vertex(0-radius*2.2/2, ypos-210- radius*0.6); //12
vertex(0-radius*2.2*0.6, ypos-210+ radius*0.6); //13
vertex(0-radius*2.2*1.7, ypos-210+ radius*0.6); //14
vertex(0-radius*2.2*1.9, ypos-210); //15
vertex(0-radius*2.2*1.5, ypos-210 - radius * 1.4*0.9); //16
endShape(CLOSE);

// cabin:
beginShape();
vertex(0, ypos-210-radius * 1.7); //1
vertex(0+radius*2.2*0.9, ypos-210-radius * 1.7*1.6); //2
vertex(0+radius*6.6*0.9, ypos-210-radius * 1.7*1.9); //3
vertex(0+radius*6.6+radius*2.2*0.8, ypos-210-radius * 1.7); //4
vertex(0+radius*2.2/2, ypos-210-radius * 1.7*0.8); //5
endShape(CLOSE);
}

void serialEvent(Serial myPort) {
// read the serial buffer:
String myString = myPort.readStringUntil('\n');
if (myString != null) {
myString = trim(myString);

// split the string at the commas
// and convert the sections into integers:
int sensors[] = int(split(myString, ','));

for (int sensorNum = 0; sensorNum < sensors.length; sensorNum++) {
print("Sensor " + sensorNum + ": " + sensors[sensorNum] + "\t");
}
// add a linefeed after all the sensor values are printed:
println();

// make sure there are three values before you use them:
if (sensors.length > 2) {
xpos = map(sensors[0], 35, 200, width + 130, 0);
carAngle = map(sensors[1], 44, 200, 0, -6);
// the switch values are 0 and 1. This makes them 0 and 255:
fgcolor = sensors[2] * 255;
}
}
}


iOS 7 on the iPhone is a mixed experience, according to users who made the jump as soon as Apple released the latest software update.


The biggest iOS 7 complaints are lag and performance, but battery life is also an issue. The video highlights some of the problems with using iOS 7 on the iPhone 4.

Each of us has developed our own system of daily phone use. Consequently, the applications used differ and things are done differently.

The phone is used probably more than any other device, and it’s important that it be very efficient to use.

When used daily, motor skills and logic are invested into routine. Tasks become automatic. Automation saves time. So any minor change in the interface means extra time spent re-‘learning’ and breaking in the ‘new stuff’. An update that changes the interface that way is inefficient.

With iOS 7, such changes apply to applications that are used very frequently every day (such as Calendar or Messages). Losing that level of automation is quite frustrating. What appear to be the main difficulties are the simplest tasks, such as deleting messages (deleting one by dragging is no longer available) or navigating Google Calendar (as it is laid out differently now).

On the other hand, each update comes with better and upgraded things, and the new iPhone interface has made some things easier than before. All users listed adjusting the brightness as the easiest task ever. It is essential that they do not waste time on such basic tasks; they are visible and simple.

All these wonderful applications that we commonly use are supposed to help us save time and produce faster, superior results. Mapping is the relationship between what you want to do and what appears to be possible; in this case, what was supposed to be possible doesn’t seem to do its job. When an action has no apparent result, you may conclude that the action was ineffective. So you repeat it. You repeat it again. Wasting time leads to frustration.

Only poor design allows frustration, and here the frustration comes from changing adopted habits. But this is mostly my own conclusion, and I am not an iPhone user. In the video you’ll see users barely complaining about any of that. They easily got used to the new interface, the new photo arranging, and the extra options! They had no hard time adapting to the new stuff. Apparently changing habits is not such a problem if you are offering an improved thing. The problem stays with functions that are no longer available and that were simpler than their replacements. Deleting messages with a swipe: cool! But now it’s gone: noo. That is a big deal.

There are also camera changes that some people just do not like, while others are enthused about them. Although both sides are Instagram users, opinions differ. Obviously, emotions are on trial here, not functionality. Yeah, “beautiful things do work better”.

I LOVE that in the middle of recording I asked for the brightness to be lowered, and it took like a millisecond to be done! Simple commands: don’t change them.


With keyboard input we had a serious problem. The idea was simple: use the same few buttons to write a few letters on the screen. Each button corresponds to a letter, so it would be something like this:


Instead, what we got was:

NNNNNNNNNNNNNNNNNNNNeeeeeeeeeeeeeevvvvvvvvvvvvvvvvvaaaaaaaaaaaaaaaaaaaa zzzzzzzzzziiiiiiiiiiiiiiiivvvvvvvvvvvvvvvvvvvvvv

We tried pressing a millisecond fast! It didn’t work.

The buttons work, obviously.. Then we spent a lot of time with the code. Moon helped us understand what’s going on. It was not that easy, but it was worth it! Moon, thanks a million. Yaaay

Soon we are talking to computers on another level.


CODE: keys_no_interval

CODE: keys_interval
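The repeated letters happen because loop() reads the held button on every pass. One common fix, sketched here in plain Java rather than our actual keys_interval code, is state-change detection: emit the letter only on the transition from released to pressed, no matter how long the button stays down. The names here (EdgeDetect, type) are ours for illustration.

```java
// State-change ("edge") detection: a held button reads HIGH on every loop
// pass, but should type its letter only once, at the moment it goes down.
public class EdgeDetect {
    private int lastState = 0; // state of the button last time we checked

    // Returns true only on the released -> pressed transition.
    public boolean risingEdge(int state) {
        boolean pressed = (state == 1 && lastState == 0);
        lastState = state;
        return pressed;
    }

    // Simulate a loop reading one button once per pass: the readings array
    // stands in for successive digitalRead() results.
    public static String type(char letter, int[] readings) {
        EdgeDetect button = new EdgeDetect();
        StringBuilder out = new StringBuilder();
        for (int state : readings) {
            if (button.risingEdge(state)) out.append(letter);
        }
        return out.toString();
    }
}
```

With this, holding N down through a hundred loop passes types a single N instead of a whole row of them.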

I eventually forgot the name of the game we played. It is Ziv’s favorite game, so I did not want to insist on playing Super Mario.

We’ve figured out all the buttons, but I have not figured out the game rules  🙂


#include <Mouse.h>

int lastButtonState = LOW;     // state of the button last time you checked
boolean mouseIsActive = true;  // whether or not the Arduino is controlling the mouse

void setup() {
// initialize mouse control:
Mouse.begin();

// initialize serial communication:
Serial.begin(9600);

// make pins 2 through 6 inputs:
pinMode(2, INPUT);
pinMode(3, INPUT);
pinMode(4, INPUT);
pinMode(5, INPUT);
pinMode(6, INPUT);
}

void loop() {
if (mouseIsActive) {
// read the buttons:
int button2State = digitalRead(3);
int button3State = digitalRead(4);
int button4State = digitalRead(5);
int button5State = digitalRead(6);
int button6State = digitalRead(2);

if (button2State == HIGH) {
Mouse.move(1, 0, 0);      // move right
}
if (button3State == HIGH) {
Mouse.move(-1, 0, 0);     // move left
}
if (button4State == HIGH) {
Mouse.move(0, 1, 0);      // move down
}
if (button5State == HIGH) {
Mouse.move(0, -1, 0);     // move up
}
if (button6State == HIGH) {
Mouse.click();            // CLICK
}
}
}




a warm-up: control the mouse with analog inputs, two photoresistors.

The thing was that you have to totally cover the photoresistors with your fingers. It didn’t work when we only put a finger over the top or against one. It worked when gently squeezed.

Mouse goes left, mouse goes right, mouse goes left, mouse goes right. Thank God it does, it works.




#include <Mouse.h>

int lastButtonState = LOW;      // state of the button last time you checked
boolean mouseIsActive = false;  // whether or not the Arduino is controlling the mouse

void setup() {
// initialize mouse control:
Mouse.begin();
// initialize serial communication:
Serial.begin(9600);
// make pin 2 an input:
pinMode(2, INPUT);
}

void loop() {
// read the pushbutton:
int buttonState = digitalRead(2);

// if it's changed and it's high, toggle the mouse state:
if (buttonState != lastButtonState) {
if (buttonState == HIGH) {
// if mouseIsActive is true, make it false;
// if it's false, make it true:
mouseIsActive = !mouseIsActive;
}
}
// save button state for next comparison:
lastButtonState = buttonState;

// read the analog sensors:
int sensor1 = analogRead(A0);
int sensor2 = analogRead(A1);

// print their values. Remove this when you have things working:
Serial.print(sensor1);
Serial.print(" ");
Serial.println(sensor2);

// if there's a significant difference to the left:
if (sensor1 > sensor2 + 100) {
// Serial.println("L");
if (mouseIsActive == true) {
Mouse.move(-1, 0, 0);
}
}
// if there's a significant difference to the right:
else if (sensor2 > sensor1 + 100) {
// Serial.println("R");
if (mouseIsActive == true) {
Mouse.move(1, 0, 0);
}
}
}
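The steering logic from that sketch can be pulled out as a plain Java function so it can be tested without a board (the class and method names are ours for illustration): the 100-count gap is the “significant difference” that decides whether the mouse moves at all.

```java
// Direction decision for the two-photoresistor mouse: move only when one
// reading beats the other by a clear margin, otherwise hold still.
public class LightSteer {
    static final int THRESHOLD = 100; // the "significant difference" margin

    // Returns -1 (move left), 1 (move right), or 0 (stay put)
    // for a pair of analog readings in the 0-1023 range.
    public static int direction(int sensor1, int sensor2) {
        if (sensor1 > sensor2 + THRESHOLD) return -1; // left
        if (sensor2 > sensor1 + THRESHOLD) return 1;  // right
        return 0; // difference too small to act on
    }
}
```

The dead zone in the middle is why gentle, uneven light across the two sensors doesn’t make the cursor drift.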

I need you to forget a bike.

Try again.

You cannot.

And how does it work? You know how it works because you form a conceptual model of the device and mentally simulate its operation. Visibility acts as a good reminder of what can be done. As a result, there is little to remember.


microwave_outline_0 copy

The mental model of a device is formed largely by interpreting its perceived actions and its visible structure.

Now, please recall the procedure for reheating something in the microwave. Isn’t it that what you want to do is not what appears to be possible? One button that does two things? How can you figure out how to control the microwave? Is it as easy as forgetting a bike? And how hard is it to remember the bike and how to ride it? Natural design, natural signs. Well-designed objects are easy to interpret and understand; visible clues are important to their operation. Poorly designed objects are difficult and frustrating to use: they provide no clues, or even worse, false clues. “Humans do not always err. But they do when the things they use are badly conceived and designed.”

It is in everyone’s best interest to make designs simple to use. If a product has to be patched up with labels, instruction manuals, or training courses, the whole purpose of the design is lost. Designers know too much about their product to be objective judges. The design process should evolve the product. Good design is like that: tested by users and changing.


: After this class’ discussion and exercise, and reading Chris Crawford’s definition and Bret Victor’s rant, how would you define physical interaction? What makes for good physical interaction? Are there works from others that you would say are good examples of digital technology that are not interactive? :

A physical interaction must be thoughtful communication: an exchange of information between the two, but thoughtful both ways, in the sense that both parties must respond to the other, not by simple reaction but rather as an emotional interaction; in the sense that one’s output must be the other’s input for possible continuous communication.

That way, a good interaction would be one that puts us in a state of joy while interacting; and a good physical interaction would employ not just our intellect but also most parts of our bodies. Instead of just using fingers and eyes, we should concentrate more on how to design platforms that would use actual movements and all the senses we intuitively use with such ease.

The use of a keyboard or a touch screen is limited to just touching and fingers, and after a while it is far from fun. However, technology today allows us to make interfaces that could make everyday experiences much more enjoyable and intuitive.

In addition to that, Google Glass could have been more than responses to your commands. It provides you with all kinds of information and answers your questions; but for comparison’s sake, it is more like addressing your secretary than having a pleasant talk with a friend. Yes, I see it as a good example of digital technology that just should have been “more” interactive.