The author of this article is betting on live video: “If, within 10 years, more than half of all video watched is live, I win.”

The question is: would you bet on it?

…………………………..

A small digression: video ads next to the article you are trying to read are barbaric.

…………………………..

I wouldn’t.

When email arrived and we started using it, it became easier to write a message to someone in the next room than to get up and talk. It is definitely more convenient to answer when you find time for it than when someone is, OMG, interrupting you by knocking on your door. Isn’t that the advantage of the internet, that content is available exactly when you want it? So if the live event happens at the ‘wrong time’ compared to ‘my time’, watching it wherever, but later, wins.

The reflex of recording with your phone is one world, and it is quite different from the live-streaming we are talking about here. I would say it is the time factor again. You are already there, so one click on your phone is all it takes; it is not hard to adopt that as a reflex. On the other side, a notification arrives that your friend is live-streaming. OK, realistic. Still, unless there is a siren for ‘SOS’, I don’t see the demand to watch right at that moment. I’m talking to someone right now.

Klip, the video Twitter, is only as live as Twitter is. Most people are active in the morning and at night.

Now, I’m going to search for funny+mistakes+happening+right+now?

I see live video doing well for huge events, like the World Cup, concerts, ITP thesis presentations : ) when you have time devoted to it but are unable to be present at the event itself.

Communication.. well, the numbers here might come close to 50% (of live video), but searching for one.. no. Live videos, like reality shows, are too time-consuming, too much nothing-happening. People change channels, turn the TV off, or leave a web page if there is nothing to keep their attention. There is too much choice out there already in the vast land of the internet, and what do you do when confronted with that? You read reviews, comments and ratings; you watch previews and trailers. That will be hard for happening-right-now content.

Live video is an event. Only if half of the planet were at events, or communicating not-in-person, for more than half of the day, would it be worth betting on.

search parameter: ‘ 🙂 ‘

word freq
que 20
aku 18
zaynmalik 18
follow 17
love 17
hope 13
good 12
haha 12
kamu 12
sama 12
gak 11
dont 10
birthday 9
hahaha 9
kita 9
like 9
lol 9
please 9
ada 8
day 8
happy 8
itu 8
video 8
youre 8


 

search parameter: ‘ 😦 ‘

word freq
que 31
gak 6
mas 6
pra 6
dormir 5
hate 5
los 5
voy 5
— 4
ahora 4
ako 4
boleh 4
con 4
las 4
mañana 4
nadie 4
now 4
quiero 4
tau 4
vida 4
∠3
como 3
des 3
dito 3


search parameter: ‘ serbia ‘

word freq
serbia 1229
travel 665
rusia 449
sanksi 438
jatuhkan 414
takkan 412
atas 410
traveleurope 335
httptcokgchoebwso 330
talkingsloth 330
tomaco 330
vucic 302
aleksandar 211
menteri 210
perdana 210
meminta 199
komisaris 189
perluasan 178
nyatakan 97
eropa 40
fule 32
stefan 32
yang 32
berikan 25


.

search parameter: ‘ srbijo ‘


search parameter: ‘ nyc ‘


search parameter: ‘ beograd ‘

search parameter: ‘ itp’


search parameter: ‘ wtf ‘

 

.

Sentiment analysis, when tweeting about ITP and staying late nights:



 

 

Adjusting two variables of the flock: the magnitude and the speed limit of the vector. The speed limit gives different results when negative than when positive, while the magnitude is changed by hooking up curves (exponential, logarithmic, bouncing, sigmoid and linear).
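Not the actual sketch, but a minimal illustration of what hooking a curve up to those two variables could look like in Processing. The names maxspeed and maxforce follow the Shiffman example broken down further below; the curve functions and the mouse standing in for the conductor are my own assumptions.

float maxspeed = 2.0;    // speed limit of the velocity vector
float maxforce = 0.03;   // magnitude of the steering force

// A few curve shapes to hook the magnitude up to; t runs from 0 to 1
float linearCurve(float t)      { return t; }
float exponentialCurve(float t) { return pow(t, 3); }
float logCurve(float t)         { return log(1 + 9*t) / log(10); }
float sigmoidCurve(float t)     { return 1.0 / (1.0 + exp(-10*(t - 0.5))); }

void setup() {
  size(200, 200);
}

void draw() {
  float t = map(mouseX, 0, width, 0, 1);        // control value, e.g. from hand position
  maxforce = 0.03 + 0.2 * sigmoidCurve(t);      // magnitude follows a curve (swap in any curve above)
  maxspeed = (mouseY < height/2) ? 2.0 : -2.0;  // negative vs positive speed limit gives different behavior
  println("maxforce: " + maxforce + "  maxspeed: " + maxspeed);
}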

 

.

.

.

.

.

.

.

.

This is how it started, with the most simplified flocking algorithm; it had only attraction towards a target. Here, the flock is conducted with a Leap Motion, using hand tracking.
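A minimal sketch of that attraction-only stage, with the mouse standing in for the tracked hand (the structure and names here are illustrative, not the original sketch):

// Attraction-only 'flock': every boid just seeks a target point.
ArrayList<PVector> locations = new ArrayList<PVector>();
ArrayList<PVector> velocities = new ArrayList<PVector>();
float maxspeed = 3;
float maxforce = 0.05;

void setup() {
  size(640, 480);
  for (int i = 0; i < 100; i++) {
    locations.add(new PVector(random(width), random(height)));
    velocities.add(new PVector());
  }
}

void draw() {
  background(255);
  PVector target = new PVector(mouseX, mouseY);   // stand-in for the Leap hand position
  for (int i = 0; i < locations.size(); i++) {
    PVector loc = locations.get(i);
    PVector vel = velocities.get(i);
    PVector desired = PVector.sub(target, loc);   // Steering = desired minus velocity
    desired.setMag(maxspeed);
    PVector steer = PVector.sub(desired, vel);
    steer.limit(maxforce);
    vel.add(steer);
    vel.limit(maxspeed);
    loc.add(vel);
    ellipse(loc.x, loc.y, 4, 4);
  }
}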

.

.

processing sketch : here :

However, after the user testing, I decided to use the Kinect for motion tracking and conducting, to utilize more body parts for more complex motions instead of just one hand.


Field report:

1. Testing a responsive Processing sketch

Hoping to see how people respond to what they see. Before that, how noticeable is it? Will they see it or pass by? If they do see it, how interesting is it: will they stop, will they try to interact? How many times do they need to come by before they get intrigued and interact?

How long do they interact? What is the interaction like? Will they come back, and how do they act next time, once they are already familiar with it?

Note:

Testing was planned to be executed using one of the ITP screens: run the Processing sketch on the screen and attach the Leap Motion next to it. Ideally, I would get the screen on the 1st floor of the Tisch building, one that is inside but faces the street, so random passers-by would be the users. However, that was impossible to do, as someone else’s project is running on it until May 1st.

After that, I got permission to use one of the screens on the floor, in the hallway, which was satisfactory enough. Again, I encountered difficulties. I spent more than 2 hours trying to get the sketch running, concluding that I would have to ask for software updates on the screen computers, and for permission for that too. As that also would not be done in time, I used a projector for testing. This major change affected the results of the test (as I knew it would), but I was still hoping to get at least some feedback.

2. Interviewed persons’ info:

ITPers:

Random ITPers passing by, as they would every day. The testing day was Tuesday, when mostly 2nd years are on the floor, doing their thesis. Not as many students as during other weekdays. (So many factors.)

– Most of them just passed by, with a glance, but walked on.

– 20% actually took a look for a few seconds to see what was going on. Here, not only at the projection, but at everything around it: projector, Leap, cables. More likely they were interested in what was going on in general than in what was on the screen. Of course, since the projector, cables and Leap were improvised and so obvious.

– And those 10% who were walking close enough to trigger the sketch, and were intrigued enough to try to interact, spent no more than 20-30 seconds. One reason is that the projector was projecting onto their backs. (Only one crouched to try to play a bit more.) That, again, lasted no more than 30 seconds, putting a hand right in front of the Leap.

3. Overall interpretation

Screen, screen, screen: it has to be on a screen. Even if it were a short-throw projector, facing a wall with something behind you is different from facing a screen with something ‘in it’. *Unless this is to be installed in the subway, projecting onto the opposite platform, where we want people to look behind them, to the other side.

Replace the Leap with a Kinect. The Leap has too short a range, so a significant number of potential interactions is lost; much more than I thought. Also, when placing the sensor, Leap or Kinect, I should make sure it is well hidden, or at least placed as if it were part of the screen. The aim is to make people interact with the thing on the screen, not so obviously with the sensor itself.

More important, the Kinect will enable more interesting (possible) movements/actions than just hand movements, which stop being interesting in an amazingly short time.

Also, the sketch itself should be changed to possess more distinct behaviors, so it can build more complex interactions; counting on more complex meaning more engaging.

The states of the sketch, when idle and when it ‘is after someone’, should be more distinct.

Stormy day; it starts with a few seconds of night, to emphasize and show that it is daylight. Later in the composition, I was trying to mimic lightning with fast darkening instead of flashing (because I had no other option; still, I think that the sudden change is good enough to depict what is going on).

A nice day has started, birds are coming out. As time passes, there are more birds flying and singing.. until the storm starts and the birds have to hide. Rather impatiently they fly out again as soon as the rain soothes.. The day goes on, interlacing those two.

The tan function seemed to be the most effective for dramatic effects: the pouring rain and the bunch of birds coming out later. Sine and sawtooth for ‘the rest of the day’.
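Just to illustrate why tan works for the dramatic parts, here is a tiny sketch (my own illustration, not part of the composition) plotting the three functions as intensity curves over time: tan spikes violently near its asymptotes, while sine and sawtooth stay gentle.

void setup() {
  size(600, 300);
  background(255);
  for (int px = 0; px < width; px++) {
    float t = map(px, 0, width, 0, TWO_PI * 2);
    float sine = sin(t);                                      // smooth rise and fall
    float saw  = 2 * (t / TWO_PI - floor(t / TWO_PI + 0.5));  // sawtooth ramp
    float tangent = constrain(tan(t), -4, 4) / 4;             // tan blows up near PI/2: the 'storm'
    stroke(0, 0, 255);  point(px, map(sine, -1, 1, height - 10, 10));
    stroke(0, 160, 0);  point(px, map(saw, -1, 1, height - 10, 10));
    stroke(255, 0, 0);  point(px, map(tangent, -1, 1, height - 10, 10));
  }
}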


This is about a person who gets an idea and goes on a journey; about the time spent with friends before flying away across the ocean, and after; an empty room that welcomes her and how she turns a carpet into a bed, and later finds a bunk-bed roommate to split the rent. Missing Belgrade, loving New York.

After the midterm deadline, I had time for coloring, which was ‘so-much-fun’, especially the spread with the cylinder:

Thanks WoonYung for the color pencils!

Also, I strengthened the cylinder spread by adding another layer:



 

The finished, uncolored version:

The hardest part was making the Gmail chat pop-up windows. A well-known mechanism, no reason for it not to go smoothly, but somehow it took most of the time. I guess because there were so many of them, or maybe just the fact that I was thinking it would be smooth and fast. Oh well.



 

 

 

This spread was fun and smooth, though : )



 

The bunk-bed spread emerged at an incredible time, when the actual bed was at school serving another project, my roommate’s:


 

Thanks to an event like this, I invented the mechanism:

 

One with a diminutive kinetic sculpture behind it:

 


 

And the covers: quick laser cutting and tape:


 

Exploring mechanisms on 507movements.com, testing and making a few of them, I found a beautiful use for one: to include it in a pop-up book.

The complete process of making the mechanisms is in a PDF here:  whole_proces_kinetic sculputres

In short, it started with movement 94:

.

Laser cutting and assembling the parts:



.

.

Then adding gears and placing it on paper:


Adding the book characters and testing again:



.

.

.

Voilà, a diminutive kinetic sculpture in a pop-up book:

.


Had a lot of fun working late nights with spare spirals : )



Imagined as a large-scale installation or digital projection; also, the flock (boids) is supposed to generate music while moving.

.

The flock is conducted with a Leap Motion, using hand tracking. For the actual, planned installation I might use a Kinect for motion tracking and conducting, to utilize more body parts and more complex motions instead of just one hand.

 

processing sketch : here :

inspired by Cats Video:

.

timing and pacing here:

.

A digression on the subject of cat videos, another one:

timing and pacing concept for the creature:

hurry ‘with purpose’; short breaks to ‘look around and change’;

finding itself ‘in the desert’ at one point – the slowest pacing, but the most ‘critical’ one.

Still working on this. I started at 2am last night and, thanks to my ‘efficiency rule’, I don’t carry my computer charger home : ) So far, this is the 1st iteration of the Creature sketch with one simple recursion.

I wanted to affect the pattern and the change in movement; I aimed for organic growth, to look like a living creature, not a machine (not sure why this matters).

An interesting thing happens after some time, as the creature starts eating itself. Like that homework task a few weeks ago, to watch what happens after some time, I put in constraints for speeding the sketch up and slowing it down (slower is actually easier to keep watching). So here is how this recursion plays out over time:


It gets really slow afterwards; it goes slower and slower, in fact..


-If I go to the kitchen and come back, how different will it be? No dynamic shifts.

-I’m saving frames to see how it changes, but I don’t even care about frame 211,000 any longer.. (still saving)

-Waiting for a change, not even sure there will be one. Hope it’s not like the “machine in concrete”.

-Waiting for it to die. Tiresome.

-Like this is a pet.

-Finally, it crashed at 266,000 frames (those are not actual frames, but a variable used to speed the frame rate up or slow it down; a minimal sketch of that idea is below).
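Not the creature sketch itself, but a minimal sketch of the kind of speed-up/slow-down constraint described above, assuming a key-controlled ‘virtual frame’ variable (all names and the stand-in drawing are illustrative):

float virtualFrame = 0;   // the 'frame' counter the numbers above refer to
float speed = 1.0;        // how fast virtual time advances

void setup() {
  size(400, 400);
}

void draw() {
  background(255);
  virtualFrame += speed;                          // advance by the current speed
  float a = virtualFrame * 0.01;
  translate(width/2, height/2);
  line(0, 0, 150 * cos(a), 150 * sin(a * 0.7));   // stand-in drawing that changes with virtual time
  fill(0);
  text("virtual frame: " + int(virtualFrame) + "   speed: " + nf(speed, 1, 2), -width/2 + 10, -height/2 + 20);
}

void keyPressed() {
  if (keyCode == UP)   speed *= 1.5;
  if (keyCode == DOWN) speed /= 1.5;
  speed = constrain(speed, 0.05, 20);             // the constraint on speeding up / slowing down
}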


 

Looks like nothing much has happened.

Sounds like it’s time for a more engaging one.

Embedded data visualization / a general data piece: data used on social networks, visualized with the flocking algorithm.

I will try to represent each user as one flock and the data through boids. I would visualize interactions between users, represented with flocks and boids; many flocks on one screen.

Maybe groups of boids inside a flock? Different data sets with different rules? What is uploading data, and what is downloading data? What I think will be interesting is showing the everyday ‘data interactions’ that we are not quite aware of.

I have no idea how to get the data; a vague idea of how to assign rules; and I’m not sure what I will find out.


HOW IT WORKS:


(The code is for Processing, and I used Daniel Shiffman’s Flocking example to break it down; the concept and principles should be applicable in any programming language.)

There are three simple rules of flocking behavior:
alignment, cohesion, and separation,
which, when used in combination, display the full flocking behavior.
Let’s see what happens in each.

.

ALIGNMENT:


A behavior that causes a particular boid to line up with boids close by.
We iterate through all of the boids and find the ones within the neighbor radius – that is, those close enough to be considered neighbors of the specified boid. If a boid is found within the radius, its velocity is added to the computation vector and the neighbor count is incremented.

PVector align (ArrayList<Boid> boids) {
float neighbordist = 50;
PVector sum = new PVector(0, 0);
int count = 0;
for (Boid other : boids) {
float d = PVector.dist(location, other.location);
   if ((d > 0) && (d < neighbordist)) {                                    //why do we need (d > 0)?
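      // (d > 0) skips the boid itself, whose distance to itself is 0, so it is not counted as its own neighbor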
      sum.add(other.velocity);
      count++;
}
}

Then we divide the computation vector by the neighbor count and normalize it (divide it by its length to get a vector of length 1), obtaining the final resultant vector, which is then scaled to the maximum speed and turned into a steering force. And of course, we will not do the step above if there are no neighbors; we will simply return a zero vector (0, 0).

if (count > 0) {
   sum.div((float)count);                                                  // why do we need to div if we use .setMag after?
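   // dividing gives the average velocity; since setMag() below rescales the vector to a fixed length anyway, the division does not change the final direction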
   sum.setMag(maxspeed);
   PVector steer = PVector.sub(sum, velocity);
   steer.limit(maxforce);
   return steer;
   } else {
   return new PVector(0, 0);
}


.

.

COHESION


Cohesion is a behavior that causes boids to steer towards the “center of mass” – that is, the average position of the boids within a certain radius.

The implementation is almost identical to that of the alignment behavior, but there are some key differences. First, instead of adding the velocity to the computation vector, the position is added.

Like before, the computation vector is divided by the neighbor count, resulting in the position that corresponds to the center of mass. However, we don’t want the center of mass itself, we want the direction towards it, so we recompute the vector as the one pointing from the boid to the center of mass. Finally, this value is turned into a steering force and returned (the seek() call below).

PVector cohesion (ArrayList<Boid> boids) {
   float neighbordist = 50;
   PVector sum = new PVector(0, 0);        // Start with empty vector to accumulate all locations
   int count = 0;
   for (Boid other : boids) {
      float d = PVector.dist(location, other.location);
      if ((d > 0) && (d < neighbordist)) {
         sum.add(other.location);          // Add location
         count++;
      }
   }
   if (count > 0) {
      sum.div(count);
      return seek(sum);                    // Steer towards the location
   } else {
      return new PVector(0, 0);
   }
}

.

.

SEPARATION


Separation is the behavior that causes a boid to steer away from all of its neighbors.

The implementation of separation is very similar to that of alignment and cohesion. When a neighboring boid is found, the offset from the neighbor to the boid is added to the computation vector. The computation vector is divided by the corresponding neighbor count, but before normalizing, there is one more crucial step involved. The computed vector needs to point away from the neighbors in order for the boid to steer away from them; and we get that by subtracting the neighbor’s location from our boid’s location.

PVector diff = PVector.sub(location, other.location);               //not  .sub(other.location ,location);
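For completeness, a sketch of the full separate() function in the same shape as align() and cohesion(), following the structure of the Shiffman Flocking example this walkthrough is based on:

PVector separate (ArrayList<Boid> boids) {
   float desiredseparation = 25.0;
   PVector steer = new PVector(0, 0);
   int count = 0;
   for (Boid other : boids) {
      float d = PVector.dist(location, other.location);
      if ((d > 0) && (d < desiredseparation)) {
         PVector diff = PVector.sub(location, other.location);   // points away from the neighbor
         diff.normalize();
         diff.div(d);                      // weight by distance: closer neighbors push harder
         steer.add(diff);
         count++;
      }
   }
   if (count > 0) {
      steer.div((float)count);
   }
   if (steer.mag() > 0) {
      // Steering = Desired minus Velocity
      steer.setMag(maxspeed);
      steer.sub(velocity);
      steer.limit(maxforce);
   }
   return steer;
}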

.

.

STEER VECTOR     // Steering = Desired minus Velocity

PVector seek (PVector target) {
   PVector desired = PVector.sub(target, location);   // A vector pointing from the location to the target
   // Scale to maximum speed
   desired.setMag(maxspeed);

   PVector steer = PVector.sub(desired, velocity);
   steer.limit(maxforce);                              // Limit to maximum steering force
   return steer;
}

.

.

.

After getting the principles of the algorithm, this is what we should do in Processing to make it work:

00. MAKE OUR GUYS and PVector variables

Flock flock;                                   // The Flock (an array of Boid objects)

// In setup(): add an initial set of boids into the system
flock = new Flock();
for (int i = 0; i < 150; i++) {
   flock.addBoid(new Boid(width/2, height/2));
}

// Inside each Boid:
PVector location;
PVector velocity;
PVector acceleration;
float r;

01. ASSIGN THESE RULES TO EACH BOID with magic numbers:

void flock(ArrayList<Boid> boids) {
   PVector sep = separate(boids);    // Separation
   PVector ali = align(boids);              // Alignment
   PVector coh = cohesion(boids);   // Cohesion
   // Arbitrarily weight these forces            // magic numbers for forces to work nicely
   sep.mult(1.5);
   ali.mult(1.0);
   coh.mult(1.0);
   // Add the force vectors to acceleration
   applyForce(sep);
   applyForce(ali);
   applyForce(coh);
}

02. CONNECT THEM AND APPLY through acceleration vector:   

void applyForce(PVector force) {
acceleration.add(force);
}

// Method to update location
void update() {
   // Update velocity
   velocity.add(acceleration);
   // Limit speed
   velocity.limit(maxspeed);
   location.add(velocity);
   // Reset acceleration to 0 each cycle
   acceleration.mult(0);
}

.

.

.

complete Daniel Shiffman flocking example and code : here :

code for processing sketch that post begins with : here :

huge png presentation file here :

.

.


.

10s:     cool, expected; like the pattern.

30s:     trying to see if the speed is changing, although I know it does not. Expecting something.

1min:   feels like ‘OK, got the point’, but I can still keep watching

2min:   seeing the differences in symmetry; paying more attention to ‘details’. Seeing many different parts of the arc. Feels like I’m almost learning the pattern, what the next turn will be.

5min: thinking about earth, cosmos, forces, life

10min: what else would have made me look at this for 10 minutes? It still was not bothersome to keep watching, I guess because of the organic movement.

How is it for someone giving audiograms for hours?

Imagining small multiples.

I was interrupted 3 times.

This is a ‘complex’ path/shape. I’ve been watching it for 10 minutes and I do feel like I know the path; it still goes another way, but it doesn’t surprise me. I kind of feel familiar with the shape.

Ended. Could do more.

All of them follow the same rules:

//Angle changes over time
angle = map(frameCount, 0, pointCount, 0, TWO_PI);
x = cos(angle*freqX + radians(phix)) * cos(angle*modFreqX);
y = sin(angle*freqY + radians(phiy)) * sin(angle*modFreqY);

Changing the parameters (freqX, freqY, modFreqX, modFreqY, phix, phiy) resulted in different shapes.

I kept changing them over and over again and didn’t keep a record of all the values. I do have them for one, but it doesn’t matter anyway:

freqX: 8,  freqY: 4,  modX: 1,   modY: 1,   phsx: 600,   phsy: 165
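A minimal, self-contained version of the idea, using the equations above with those last parameter values (the window size, pointCount and the scaling are my own assumptions; modX/modY and phsx/phsy correspond to modFreqX/modFreqY and phix/phiy in the code):

int pointCount = 2000;
float freqX = 8, freqY = 4;
float modFreqX = 1, modFreqY = 1;
float phix = 600, phiy = 165;

void setup() {
  size(600, 600);
  background(255);
  stroke(0, 40);
}

void draw() {
  // Angle changes over time
  float angle = map(frameCount % pointCount, 0, pointCount, 0, TWO_PI);
  float x = cos(angle*freqX + radians(phix)) * cos(angle*modFreqX);
  float y = sin(angle*freqY + radians(phiy)) * sin(angle*modFreqY);
  // x and y land in [-1, 1]; scale them to the window and let the points accumulate into a trail
  point(width/2 + x * width * 0.45, height/2 + y * height * 0.45);
}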

.

The initial idea was to make a ‘background’ process, pattern-like and slowly building itself, insinuating a certain logic and easing one softly into the process. One should start to anticipate the pattern arising, and then a dramatic change would happen.


.

This is what Laura later said about surprise:


.

.

So the sketch transformed into this:

I also tried to make something less flat, more enjoyable. Hope to surprise her.

.