A coffee cup, for people who always have one in their hand: it navigates with simple light signals (for left, right, and forward).

It also vibrates when telling you to stop, along with flashing red lights. When you reach your destination, it vibrates in a different manner, followed by green lights 🙂


upcoming turn: flashing yellow for 10 seconds, then the green light turns on

turn: green light on

forward: flashing green

turn 180°, go backwards: the coffee lid turns 180 degrees

stop: “hysteric” vibration with flashing red lights 🙂

stop, you’ve reached your destination: “pleasant” vibration followed by green flashing lights
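The signal vocabulary above fits in a small lookup table. A minimal sketch in JavaScript; the event names and structure are illustrative, not taken from our actual firmware:

```javascript
// Map each navigation event to the cup's feedback signals.
// Names are hypothetical; the light/vibration values mirror the list above.
const SIGNALS = {
  upcomingTurn: { light: "flashing yellow (10 s), then solid green", vibration: null },
  turn:         { light: "solid green",    vibration: null },
  forward:      { light: "flashing green", vibration: null },
  reverse:      { light: null,             vibration: null, lid: "rotate 180°" },
  stop:         { light: "flashing red",   vibration: "hysteric" },
  arrived:      { light: "flashing green", vibration: "pleasant" },
};

// Given a navigation event, return the feedback to play.
// Unknown events fail safe: tell the walker to stop.
function signalFor(event) {
  return SIGNALS[event] || SIGNALS.stop;
}
```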

This is what Woon and I are working on, Packing Together:


Our site layout on paper



First draft of that in code:



Next, testing the OpenWeather API:
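The shape of that test looks roughly like this. A sketch against OpenWeatherMap's current-weather endpoint; `API_KEY` is a placeholder you'd replace with your own key:

```javascript
// Build the request URL for OpenWeatherMap's current-weather endpoint.
// API_KEY is a placeholder; q, units, and appid are the documented parameters.
const API_KEY = "YOUR_API_KEY";

function weatherUrl(city) {
  return "https://api.openweathermap.org/data/2.5/weather" +
         "?q=" + encodeURIComponent(city) +
         "&units=metric&appid=" + API_KEY;
}

// In the browser, fetch it and read the temperature:
function fetchWeather(city) {
  return fetch(weatherUrl(city))
    .then(function (res) { return res.json(); })
    .then(function (data) { return data.main.temp; }); // °C, because units=metric
}
```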



Testing how it works, putting stuff in the bag 🙂





After building the basic code structure and making sure things work, we started drawing all the items we’ll need, as well as all the buttons and text:




AND weather icons, to replace the ones from the API we are using:




Now we are working on dragging and dropping individual items for packing; this is how it looks:
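The bookkeeping behind the drag and drop can be sketched as a small state update: dropping an item moves it from the suggestion list into the bag. A simplified sketch (our real sketch draws everything to the canvas, but the idea is the same); the names here are mine:

```javascript
// Move an item from the suggestion list into the bag.
// Returns a new state object; ignores items that are already packed.
function packItem(state, item) {
  if (state.bag.indexOf(item) !== -1) return state; // already packed
  return {
    suggestions: state.suggestions.filter(function (s) { return s !== item; }),
    bag: state.bag.concat([item]),
  };
}

// A drop handler on the bag element would call it, e.g.:
// bagElement.ondrop = function (e) {
//   state = packItem(state, e.dataTransfer.getData("text"));
// };
```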



Next we will add a printable checklist and a save function, so you have your own, custom-made list… we have more ideas and are waiting for office hours, so : )


Our [ code is here ]

This is how Diana F. and I imagined Shout-Out:

Social media is an integral part of identity, allowing for immediate self-expression and feedback. The therapeutic relationship has traditionally been defined by the exchange between patient and clinician, with little personal information disclosed by the therapist. Today both therapists and patients can easily find each other on social media, adding both a greater sense of knowledge and confusion.

  • Identify users who post in health boards (Inspire)
  • Track users across social media: Twitter, Pinterest, Inspire, Instagram
  • Analyze language for word frequency and sentiment
  • Identify trigger words that convey the user is unhappy
  • Identify the user’s “likes” through pins on Pinterest
  • Send “shout-out” gifts/messages when the user posts using a trigger word

  • Can we use an aggregate of social media platforms to identify patterns and reach out?
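The word-frequency step from the list above can be sketched in a few lines. A minimal version (the sentiment step would score these words against a sentiment word list, which is omitted here):

```javascript
// Count how often each word appears in a user's posts.
// Lowercases everything and treats runs of letters/apostrophes as words.
function wordFrequency(text) {
  var counts = {};
  var words = text.toLowerCase().match(/[a-z']+/g) || [];
  words.forEach(function (w) {
    counts[w] = (counts[w] || 0) + 1;
  });
  return counts;
}
```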


Perceived Privacy among patients and caregivers

“Inspire was created because:

We all need a safe place to discuss health.
We can help each other.
Together, we are better.”

  • Patients/caregivers seek support online
  • Identity is connected to health and emotional posts
  • Users often use the same name across social media

Here are the search results for Mnydwhite, which we simply googled:




How it works:

* Connected users are notified and the user is sent immediate information
* The user does not have to opt in but can be selected randomly by people who join Shout-Out


  • Can we use our ability to connect and track online to help?
  • Can we intervene or help when someone posts something “concerning”?
  • Therapy is limited by cost, and telehealth is part of acceptable care
  • McCormick et al. showed Twitter can be used to predict depression


We are analyzing Twitter data based on a username and looking for trigger words. We are using Box2D to visualize, and sockets to broadcast: everything is sent to everyone who is on the page / ‘following’ (in this case) mandyWhite81.
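The trigger-word check we run over incoming tweets can be sketched like this. The `TRIGGERS` list is illustrative, not our real one; in the app the match would then be broadcast to every connected socket (Socket.IO's `io.emit`):

```javascript
// Scan a tweet's text for any of the trigger words/phrases.
// Returns the first match, or null if the tweet looks fine.
var TRIGGERS = ["alone", "hopeless", "tired of"]; // illustrative list

function findTrigger(tweetText) {
  var text = tweetText.toLowerCase();
  for (var i = 0; i < TRIGGERS.length; i++) {
    if (text.indexOf(TRIGGERS[i]) !== -1) return TRIGGERS[i];
  }
  return null;
}

// Server side (sketch): on a match, broadcast to everyone on the page:
// io.emit("shoutout", { user: "mandyWhite81", trigger: findTrigger(tweetText) });
```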




code is [ here ]

Woon and I did this for our final:

We decided to make a subway game. Our first intention was to create a game where the player cannot stare at the weird creatures sitting around on the subway and acting like people; if the player does that, they lose points or it’s game over. However, there were so many issues to figure out that, instead of doing that, we started with something simple.

Modeling the subway in Softimage:


and modeling the figure (creatures with donut heads) in Softimage:



our rig:




Then we imported all the assets into Unity, and users can navigate the subway with their mouse.
We also tried to write code for: when a player is hit by donuts, all the lights turn off… ah, it didn’t work in time for ‘tomorrow’.



This is how the app we’ve built looks so far.
You can check out [ here ] to see how it works.


A web interface that helps you pack with your friends: who brings what, and sharing things on your trip, such as one camera for all, one toothpaste for all, speakers for loud music. The main purpose of the project is to save time organizing, not forget important stuff, and not bring unnecessary/duplicated things when traveling in a group. All your and your friends’ stuff in one bag!




First thing you do is specify where and when you want to travel, so you get the weather forecast. Based on the weather data, the number of days you will be traveling, and the purpose of your trip (vacation, skiing, business…), you are provided with things you should pack for your trip. This will help you not forget things you’ll need.
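The suggestion step described above (combining trip length, purpose, and forecast into a packing list) can be sketched like this. The item lists are illustrative, not the app's real data:

```javascript
// Suggest items to pack based on trip length, purpose, and average temperature.
// The rules and items here are a hypothetical sketch of the real suggestion data.
function suggestItems(days, purpose, avgTempC) {
  var items = ["toothbrush", "phone charger"];
  for (var d = 0; d < days; d++) items.push("socks (day " + (d + 1) + ")");
  if (purpose === "skiing") items.push("gloves", "goggles");
  if (purpose === "business") items.push("laptop");
  if (avgTempC < 10) items.push("warm jacket");
  if (avgTempC > 22) items.push("sunscreen");
  return items;
}
```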





Then you start packing with your friends. Like a Google Doc, you can invite friends (share the URL) so you are all on the same page. Aside from personal belongings (contact lenses, toothbrush, socks…), each of you selects one or more of the common things you will share on your trip, so there is no need to carry 5 toothpastes, 4 shampoos, or 7 cameras. Also, one of you might have spare things (gloves, or a snowboard if you go skiing) and will let the others know, so a friend doesn’t have to buy them.







Every New Yorker knows that, when entering a subway car, you should be staring at your phone, book, or feet, and avoid any possible communication with other humans.

My intention was to reveal these social layers and make a digital portrait of this NYC subway phenomenon: a common ground shared by an enormous number and diversity of people who successfully avoid any kind of interaction.



Here I have a bunch of GoPros, charging them and setting them all to respond to one remote controller.





Unfortunately, with GoPros you quickly learn how great but unreliable they are. This is the only time it worked…


Now a first test, trying to generate a 3D model out of multiple photos. I’m using Autodesk 123D Catch.



What helped is the fisheye removal tool, made by a 15-year-old guy who loves to hack GoPros 🙂




Second test, which actually generated something more like a 3D model out of the photos (after removing the fisheye distortion).



Third test, a nicely generated 3D model of myself. It is detailed enough with no more than 35 photos.




Testing again, with a student sleeping 🙂




4 GoPros hidden in the box.


Photos taken secretly with the box…



Then I decided to hide the cameras better.





Testing the cameras from the backpack; generating the 3D model (school restroom).



And then, time for the train.

I tried 2 methods. One was taking photos every few seconds (with multiple cameras); the other was shooting video (with all the cameras again), then picking out frames later and generating the 3D model from those. I was not sure which would be clearer and sharper. I assumed I would have more success with the photos than with the video, which turned out not to be true on the train: since I was constantly moving, all the photos I took were blurry and therefore useless.

It is hard to stand next to and in front of people on the train, pretending you are doing something train-casual while actually scanning them with a backpack. And what is casual/normal when riding? –> Stare at your feet and don’t move : )





This is the model generated of these 2 people. There is not enough detail of the people, so they are lost in the model; only the surroundings remain.




One of the photos:



Not enough information to generate 3D model:



Hey, my classmate!




Instead of the intended 3D model, where you can walk through the interactions of people in NYC subway cars, what depicts that common ground is this video from my back:


I’m using Andy Sygler’s example for Node.js and Arduino (Uno or Yún) to test and understand the concept of sending serial data from the browser, through the server, to the Arduino. The LED lights up with a random color when the mouse is pressed; when the mouse is released, the browser gets the color information and changes its color to match what the LED showed a second ago (when the mouse was pressed).



There are only two events in the browser window, with only two values to be forwarded (0 or 1) to the Arduino, sent here through the socket connection the example sets up:

window.onmousedown = function(){
    socket.emit('output', '1'); // mouse pressed
};

window.onmouseup = function(){
    socket.emit('output', '0'); // mouse released
};


The key concept here is that the server only makes the connection between the two and passes the values along; nothing more complicated than that.

The browser listens for the 2 mouse events;

and the Arduino waits for one of the two possible values: on mousedown it picks a random color and turns the LED on; on mouseup it tells the server which color it picked, turns the LED off, and waits again.


void loop() {

  char inByte = Serial.read();

  // mousedown event
  if (inByte == '1') {

    // make a new color, and turn the LED on
    r = random(255);
    g = random(255);
    b = random(255);
    analogWrite(redPin, r);
    analogWrite(greenPin, g);
    analogWrite(bluePin, b);
  }

  // mouseup event
  else if (inByte == '0') {

    // send the color to the browser, then turn the LED off
    Serial.print(r);
    Serial.print(",");
    Serial.print(g);
    Serial.print(",");
    Serial.println(b);
    analogWrite(redPin, 0);
    analogWrite(greenPin, 0);
    analogWrite(bluePin, 0);
  }
}

(redPin, greenPin, and bluePin are the PWM pins defined at the top of the sketch.)