talkTable: Project Update


Code:


There has been significant progress with talkTable. The code is in good shape. I have integrated speech recognition into the program using Luke DuBois’ speech.js library, which essentially acts as an interface between p5 and Google’s speech recognition engine. There were a few issues that I faced while doing this:

  1. The speech.js library doesn’t provide a way to stop speech detection once it has started. I made a small change to the library to get around this: I simply added a method that stops the speech detection process.
  2. The Google API automatically stops listening after a certain amount of time (roughly 1.5 – 2 minutes). I figure this happens because Google’s API is designed to trigger speech detection on a keypress, and its purpose revolves around detecting a single search query. I added a running interval to the program that re-initiates the API every 30 seconds; if it’s already running, the program does nothing.
  3. The API calls couldn’t be made using the server created by the p5 IDE. Instead of diving into the server code for the p5 editor, I resorted to using a different server. With Sharif’s help, I created a Python server that runs on localhost and can easily talk to the API.

Now that I have worked around these challenges, I have a continuous workflow that uses speech recognition to interpret what the users are talking about. The speech detection event is triggered when the users select a topic of discussion. The only thing I still wish to add to the code is a broader data set of words for the program to recognize. I considered using an AI extension, but AI chatbots serve a different objective than what I’m trying to achieve: the bots are meant to chat with the user, whereas my program aims at making the users talk amongst themselves, stepping in only intermittently to keep the conversation going. Therefore, I will be creating my own data set for the program. The advantages of doing this are:

  1. The program can be personalized to have certain opinions and characteristics. I much prefer this over a generic response machine.
  2. The program can be curated to listen to the users and prompt them to keep talking.

 


Physical Computing:


I ran into many issues with the physical computing bit. The issues are as follows:

  1. FSRs have minds of their own. First of all, there is a lot of noise in the analog readings they produce. Since the readings stay within a certain range, I can work around that. The bigger problem is that if their position is disturbed, they start producing different ranges. They are highly sensitive to placement and are also very fragile. I constructed my own FSRs, and they lack the durability of manufactured ones.
  2. Finishing and fabrication. I’m very new to fabrication and it intimidates me. This is my first proper attempt at fabricating and consolidating circuits. Right now, the circuit is all over the place: it doesn’t fit into the designated box and is highly susceptible to losing connections when physically disturbed.

To solve these issues, I’m considering the solutions stated below:

  1. I might use a switch instead of FSRs (as suggested by Benedetta). The only advantage of an FSR is that it gives me an approximate idea of the weight of the object, which helps confirm that it’s a phone that has been inserted into the socket and not some other object. But at this point, using a switch seems like the more favorable approach.
  2. I have fabricated the boxes, the switch panels, sockets and socket covers. The overall structure looks swell. However, I’m facing issues with optimizing my circuit and making it durable. Mithru is helping me out with better circuit design techniques. I will update this post with pictures as it progresses.

 


User Testing:


User testing has been extremely helpful in giving me implementation ideas. Following is some of the feedback that I’m planning to incorporate into the project:

  1. Projecting the visualization onto the table instead of using a computer screen. This was suggested by Dominic and sounded like a great idea. First of all, it makes the project actually look like a talkTable. It also takes away the awkward positioning of the laptop in the middle of the table. I think it just makes the whole conversation setting more natural.
  2. Using speech synthesis with, or instead of, the text. This was suggested to me by Benedetta. Spoken word is a good alternative to text, which can be distracting in the middle of a conversation. I will implement this depending on the time I can find.
  3. I also tested the project in the ITP lobby to make sure that the audio input works fine in the presence of noise, so that the device doesn’t malfunction during the show. The words were comprehensible as long as the user spoke into the mic, which was great.

To give a sense of the idea, the structure of the device will look something like the photo below. Right now, the project is in a pretty dismal state since I’m still working on the fabrication and rewiring the circuit. I will soon post pictures of the progress I make.

[Image: wp_20161216_004]

Pcomp: Final Project Proposal: TalkTable


Introduction to Physical Computing / Following Weeks / Final Project Proposal


Project Description


With the increasing affinity for and accessibility of mobile phones, it’s often noticed that people tend to resort to their handheld devices in social situations such as family dinners, restaurants and bars. Interaction with the phone takes priority over actual conversations with the people present in that space.

TalkTable is an attempt to separate the individual from the mobile phone and encourage them to engage in first-hand conversations.

Design: TalkTable is a surface with designated sockets at each edge that can hold a mobile phone. When a phone is inserted in each socket, a screen activates with content that the participants can talk about. If any one of the phones is withdrawn from the device, the visual interaction is interrupted and the participants have to start over.

Participants: Two or more

Phase 1: The initial phase aims at establishing the physical capability of detecting phones in the designated sockets and triggering a visualization. This visualization acts as a third party that starts the conversation and tries to maintain it. For the initial prototype, I’m planning to display a list of topics ranging from films, art and television to daily lives and religion. Once the participants select a topic of discussion, the interface displays selected questions. I’m also planning to integrate speech recognition into this program so that it can respond to the discussion and interrupt it only when necessary.

Tools: I’m planning to execute the initial phase using a computer screen and physical sockets for the mobile phones. I’ll be using an Arduino Uno for the physical interactions and p5 for the visualizations.

Phase 2: The second phase involves taking this interaction to a surface model. This will require some fabrication, and perhaps a Raspberry Pi to control an LCD screen.

Phase 3: The third phase requires more time and work. It involves pulling data from the participants’ social feeds (Facebook, Twitter, Spotify and Instagram) and curating the topics and discussions accordingly. This phase is out of scope for the present semester.

The eventual idea is to make social spaces more interactive and appealing to people, and to make it easier for participants to get to know each other in situations such as conferences, orientation events and the like.


System Diagram


Below is a rough system diagram of the device as it’s intended to be.

[Image: system-diagram]


Estimated Bill of Materials


[Image: screen-shot-2016-12-06-at-1-39-41-pm]


Timeline


Week 1 : November 16 – 23

  • Collect all the materials needed for the first phase of the project
  • Figure out the circuit diagram and conduct a basic test of the sockets, using breadboards and mock set up
  • Figure out the visualization: will it be a game, a list of topics, or a visualization of the users’ social feeds?

Week 2 : November 23 – 30

  • Art Strategies final project
  • Animation After Effects project
  • While fabricating for art strategies, also fabricate the casing for the sockets

Week 3 : November 30 – December 7

  • Create a barebones circuit within the fabricated sockets
  • Learn p5.js and start writing the interface
  • Complete the initial workflow, which includes introductory messages, topic lists, and basic computer responses based on the choices that the user makes
  • Run the circuit in sync with the visualization

Week 4 : December 7 – 14

  • Add speech recognition to the program
  • Make the program as responsive as possible
  • Consolidate the circuit and finish the fabricated parts
  • Develop a presentation

 


Present Status and Video Demo


Presently, on December 6, I’m running behind on the circuit bit due to the failure of phone detection using conventional FSRs. I will be making my own FSRs customized for the sockets. Besides that, the code is in pretty good shape. The introduction and topic-listing workflows are ready. Still pending in the code are speech recognition (which I’ll start after December 7) and the addition of more information and responsive messages to the data set I’m using for the visualization. Regardless, I’ve prepared a video demonstration that gives an idea of what the final project will look like. If I find time on my hands, I may consider taking the project to Phase 2 (refer to the introduction).

 

Aquarduino


Introduction to Physical Computing / Week 7 / Midterm Project


For the midterm project, we present Aquarduino! Aquarduino is basically a smart coaster that detects the level of water in your cup and, based on the time elapsed, reminds you to drink water and not dehydrate yourself. I worked with Lindsey on this project. We brainstormed many ideas revolving around board games, toys for pets and audio modulation, but we eventually decided on something that can find an application in the busy world and assist people in some way.

Introduction:
At this point in time, people are becoming increasingly conscious about the food choices they make and the physical activity needed to live a healthy life. We felt that water is an important part of our daily intake, yet it has been overlooked by the plethora of apps and smartwatches out there. Therefore, we settled on the idea of a smart coaster. Voila!

In Theory:
Theoretically, Aquarduino detects the level of water in a cup and keeps track of the number of cups of water the user has had during the day. For the numbers, we estimated that the average person sleeps 8 hours a day (not applicable to ITP though, haha), so a person is awake for around 16 hours and should have at least 8 glasses of water in that time. That’s what the project tries to achieve: reminding the user to have at least 8 glasses of water during the day.

Components:
For the project, we worked with a force-sensitive resistor (FSR) to detect the water level in the cup and a NeoPixel LED to update the user about their water consumption. We also built an application in p5.js that provides a real-time simulation of the water level in the cup and a count of the cups of water the user has had during the day. The user can reset the coaster in the morning to start the count over.

Code:
The p5 code can be found here.
The Arduino code can be found here.

Challenges:
Most of the challenges that we faced were centered around serial communication between Arduino and p5.
1. Noise. An FSR is not steady and can produce a lot of noise in the values it passes. To control this, we introduced a minimum threshold in the Arduino code, so the program can detect when the cup has been picked up from the coaster and skip simulating results during that time.
2. Constant flow of values. The constant stream of input from the Arduino to p5 had to be controlled. With the Arduino writing values continuously over the 9600-baud serial connection, the simulation would go haywire and show undesired results, for instance random variations in the water level while the cup was picked up to drink from. We decided to change the way information was exchanged between the Arduino and p5 (a rough sketch follows below).
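For illustration only (the actual code is linked above), an Arduino-side sketch that combines a minimum threshold with rate-limited serial output might look something like this; the pin, threshold and interval values here are placeholders, and the rate limiting simply stands in for whatever exchange scheme ends up being used:

```cpp
// Rough sketch: FSR threshold plus rate-limited serial output.
// Pin, threshold and interval values are placeholders, not the real ones.

const int fsrPin = A0;                   // FSR voltage divider into analog pin 0
const int cupThreshold = 100;            // below this, assume the cup is off the coaster
const unsigned long sendInterval = 250;  // milliseconds between serial updates

unsigned long lastSend = 0;

void setup() {
  Serial.begin(9600);
}

void loop() {
  int reading = analogRead(fsrPin);   // 0-1023

  // Only report while the cup is actually on the coaster, and only every
  // sendInterval milliseconds, so p5 isn't flooded with noisy values.
  if (reading > cupThreshold && millis() - lastSend >= sendInterval) {
    Serial.println(reading);
    lastSend = millis();
  }
}
```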

Simulation Results:
[Images: screen-shot-2016-10-26-at-11-23-33-am, screen-shot-2016-10-26-at-11-24-35-am, screen-shot-2016-10-26-at-11-27-21-am, screen-shot-2016-10-26-at-11-31-49-am]

Circuit:
[Image: img_4604]

Demonstration:

 Applications:
This simple project can find a useful application in an office or library, where people generally get engrossed in work or assignments and forget to consume the recommended quantity of water. If the FSR input is noise-free and works well around water, the application could be extended to alcohol consumption: the coaster could be used in bars and pubs by a bartender to keep track of the amount of beer or wine a customer is having, and to remind them when it’s a good time to stop.

Credits:
Lindsey Daniels, Utsav Chadha

Asynchronous Serial Communication (PComp Lab 5)


Introduction to Physical Computing / Week 6 / Asynchronous Serial Communication


Week 5 concentrated on revising analog and digital input and output. Last week, we began using asynchronous serial communication.

While working with serial communication, there are some things to keep in mind. Firstly, in order for two devices to communicate serially, they should agree on three things:

  1. Baud rate – the speed at which the two devices send information (generally 9600 bits per second between a computer and an Arduino microcontroller)
  2. The logic – the interpretation of high and low voltages. This can be simple logic or inverse logic (as can be found in RS-232)
  3. The connections – the microcontroller’s transmit line connects to the port’s receive line, the port’s transmit line connects to the microcontroller’s receive line, and the microcontroller’s ground connects to the port’s ground

This week’s lab focused on reading analog input from a potentiometer and using that input to interact serially with an application such as the Serial Monitor, CoolTerm or p5. As demonstrated in class, I used the analog input to move a circle horizontally in p5.

After this lab, I tried to run code in which three analog inputs are read and printed out as a string on the Serial Monitor. However, my p5 window was still running while I was working on this code, which led to unexpected results on the Serial Monitor. After a while I realized what was happening: the USB serial port can interact with only one application at a time, and having multiple applications trying to read serial data from the microcontroller will lead to erroneous output.
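For reference, a minimal sketch that reads three analog inputs and prints them as one comma-separated string could look like this (the pin choices here are arbitrary):

```cpp
// Sketch: read three analog sensors and print them as one
// comma-separated line. Pin choices are arbitrary.

void setup() {
  Serial.begin(9600);   // must match the baud rate on the computer side
}

void loop() {
  int a = analogRead(A0);
  int b = analogRead(A1);
  int c = analogRead(A2);

  // One line per set of readings, e.g. "512,13,890"
  Serial.print(a);
  Serial.print(",");
  Serial.print(b);
  Serial.print(",");
  Serial.println(c);

  delay(10);   // small pause so the receiving application isn't flooded
}
```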

Analog Output (PComp Lab 3)


Introduction to Physical Computing / Week 4 / Analog Output


Microcontrollers generally can’t produce a continuously varying voltage; they can only produce a digital voltage, which is either HIGH or LOW. So, to emulate an analog output, the microcontroller generates a series of voltage pulses at regular intervals and varies how long each pulse stays high. This is called pulse width modulation (PWM). Analog outputs can be used for many purposes, such as fading an LED or controlling a motor or speaker.

As part of the first lab, I fed an analog input into the Arduino from a potentiometer and then used that input to regulate the brightness of an LED. The results are shown below:

Arduino Code:

[Image: screen-shot-2016-10-04-at-3-06-55-pm]
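In essence, the code turns the potentiometer reading into a PWM value; a minimal version of that kind of sketch could look like the following (the pin numbers here are placeholders, and the LED has to sit on a PWM-capable pin, marked ~ on the Uno):

```cpp
// Sketch: map a potentiometer reading to LED brightness using PWM.
// Pin numbers are placeholders; the LED must be on a PWM-capable pin.

const int potPin = A0;   // potentiometer wiper
const int ledPin = 9;    // PWM pin driving the LED through a resistor

void setup() {
  pinMode(ledPin, OUTPUT);
}

void loop() {
  int potValue = analogRead(potPin);                 // 0-1023
  int brightness = map(potValue, 0, 1023, 0, 255);   // scale to the PWM range
  analogWrite(ledPin, brightness);                   // set the duty cycle
}
```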

Similar to an LED, one can use the analog input provided by the potentiometer to control other devices such as servo motors or speakers. Next, I worked with a speaker, where I noticed that changing just the pulse width only produced a variation in the speaker’s volume. On the Arduino Uno, one can also vary the frequency of the pulses, which changes the sound produced by the speaker. This is done using the tone() function.

Arduino code:

[Image: screen-shot-2016-10-04-at-8-26-13-pm]
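A minimal version of the pitch-control idea could look like this (the pins and the frequency range are placeholders):

```cpp
// Sketch: use a potentiometer to set the pitch of a tone on a speaker.
// Pin numbers and the frequency range are placeholders.

const int potPin = A0;
const int speakerPin = 8;   // speaker or piezo, through a resistor

void setup() {
  // nothing to set up; tone() configures the output pin itself
}

void loop() {
  int potValue = analogRead(potPin);              // 0-1023
  int freq = map(potValue, 0, 1023, 100, 1000);   // frequency in Hz
  tone(speakerPin, freq, 20);                     // play for 20 ms
  delay(20);                                      // then loop and re-trigger
}
```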

I used a servo motor that had the capability to turn 180 degrees.

Arduino code:

[Image: screen-shot-2016-10-04-at-8-47-55-pm]
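A minimal sketch that maps the potentiometer onto the servo’s 0–179 degree range using the Servo library could look like this (pins are placeholders):

```cpp
// Sketch: drive a servo from a potentiometer using the Servo library.
// Pin numbers are placeholders.
#include <Servo.h>

Servo myServo;
const int potPin = A0;
const int servoPin = 9;

void setup() {
  myServo.attach(servoPin);
}

void loop() {
  int potValue = analogRead(potPin);            // 0-1023
  int angle = map(potValue, 0, 1023, 0, 179);   // servo range in degrees
  myServo.write(angle);
  delay(15);   // give the servo time to reach the position
}
```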

Yo Arduino! (PComp Lab 2)


Introduction to Physical Computing / Week 3 / Microcontroller Basics


This week’s labs mark the start of programming with an Arduino microcontroller and processing analog and digital input/output.


I’m going to revise the concepts first (for embedding the definitions into my memory): an Arduino board is a microcontroller with a lot of circuitry built around it (digital and analog ports, voltage regulators, et cetera). The board comes with the Arduino IDE, a simplified programming tool that compiles sketches and uploads them to the board. Circuits using analog and/or digital sensors can then be connected to the board, and their inputs/outputs can be processed using programming logic. Fun!

The labs from this week required us to deal with digital input and output, and then move on to analog input. Digital inputs are obtained from digital sensors, which are sensors that sense binary data (1 or 0, HIGH or LOW, ON or OFF). It’s basically like Hodor from Game of Thrones: it either says Hodor or nothing at all. Examples of digital sensors are two-state switches and pushbuttons.

Analog sensors, on the other hand, sense a range of values which can be read and interpreted using programming logic. Examples of analog sensors are potentiometers and variable resistors such as photoresistors and force-sensitive resistors.


Lab 1: Digital Output from Arduino


The first lab revolved around generating a digital output and passing it to the circuit using the Arduino. The digital output periodically switches an LED on and off; the following example toggles the LED every 500 ms.

Arduino Code:

[Image: digital-output]
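A minimal version of this blink sketch could look like the following (pin 13 is assumed here, since it also drives the Uno’s built-in LED):

```cpp
// Sketch: blink an LED on and off every 500 ms.
// Pin 13 is an assumption; it also drives the Uno's built-in LED.

const int ledPin = 13;

void setup() {
  pinMode(ledPin, OUTPUT);
}

void loop() {
  digitalWrite(ledPin, HIGH);   // LED on
  delay(500);                   // wait 500 ms
  digitalWrite(ledPin, LOW);    // LED off
  delay(500);
}
```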


Lab 2: Digital Input/Output


The second lab is about receiving digital inputs into the Arduino using a pushbutton, and processing that input to light up an LED (using the digital output circuit from the previous example).

Error: I ran into an error while performing this lab. My LED was connected to the +5V line on my breadboard, which kept it permanently on. The LED is supposed to be detached from the +5V line, since its voltage depends on the digital output produced by the Arduino board.

Arduino Code:

[Image: digital-input-output]
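A minimal version of the pushbutton-to-LED logic could look like this (the pins are placeholders, and the wiring assumed here uses a pull-down resistor from the button to ground):

```cpp
// Sketch: light an LED while a pushbutton is pressed.
// Pin numbers are placeholders; the button is wired with a pull-down
// resistor, so the input reads HIGH only while the button is pressed.

const int buttonPin = 2;
const int ledPin = 13;

void setup() {
  pinMode(buttonPin, INPUT);
  pinMode(ledPin, OUTPUT);
}

void loop() {
  int buttonState = digitalRead(buttonPin);

  if (buttonState == HIGH) {
    digitalWrite(ledPin, HIGH);
  } else {
    digitalWrite(ledPin, LOW);
  }
}
```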


Lab 3: Analog Input to Arduino


In the third lab, I used the Arduino board to receive an analog input from a potentiometer, then processed the input and displayed the results on the computer.

Arduino Code:

[Image: analog-input]

Results:
The values varied between 17 and 1023 depending on how the resistance was changed on the potentiometer. My guess is that the output didn’t go all the way down to zero because, even when the dial was turned all the way, the potentiometer still presented some minimal resistance.

[Image: analog-input-results]


Observing User Interaction in a Public Space, and a Simple Application of Microcontrollers:


I’ve observed that students spend a lot of time working out the combinations to their lockers. It’s a lengthy process, and it’s highly prone to going wrong even when the student remembers their passcode. An alternative solution would be a simple number pad and a microcontroller to read and verify the passcode.

The idea is to use this input to verify whether the correct passcode has been entered. Students can enter their passcode on a numbered pad and the code can then be checked in software. It’s nothing new; this is used all around us, for example in buildings, safety deposit boxes, ATMs and even suitcases. However, using it in a school system would save the students some time, and perhaps prevent them from getting late to classes (haha).

Extending the idea to mobile applications, we already have number pads on our cellphones. Perhaps a student could connect to the lock over the wifi network when in the vicinity and open it while walking up to the locker in the hallway. This would make opening the locks super easy and more time-efficient.

Switches and LEDs (PComp Lab 1)


Introduction to Physical Computing / Week 2 / Switches and LEDs


My previous experiences with circuits were not great. I had a few mandatory classes during my undergrad that dealt with electrical circuits and theory. However, in a program that was trying to ‘focus’ on teaching software, the faculty as well as the labs were not good (they were bad and uninspiring). What this transpired into, personally, was an absolute befuddlement with how circuits worked, and barely getting through those courses with a minimum passing grade.

So yeah, there were some apprehensions as I was preparing myself to work on circuits. And to simply see that LED light up was magical. What felt better was to finally understand how a breadboard is structured, how the current flows into the wires, and how to deal with the anodes and cathodes. For the first time, I made a circuit without help or a grudge (haha).

I experimented with two kinds of switches. One was a basic switch, wherein touching the ends of a conducting wire together would turn the switch on and holding them apart would turn it off. Very basic! The second switch was implemented using, well, an actual switch.

After this, as illustrated in the previous class, I played with a series circuit and a parallel circuit. The series circuit didn’t work the first time; the LEDs I was using apparently needed a higher voltage. Post-modifications, I had built the series circuit successfully. The parallel circuit went smoothly.

[Images: wp_20160920_004, wp_20160920_006]

 

Shoelace switch:

[Image: shoe-laces]

The lab also asked us to think about a simple application for a switch and LED circuit. One useful application I could think of was a shoelace switch. When the shoelace is taut, the current flows through one part of the circuit, lighting up a green LED. When the laces come undone, the current starts flowing through another segment of the circuit, lighting up a red LED. A simple circuit like this could notify a person and prevent them from tripping over. Voila!

Below is a diagram for such a circuit. There are two resistors and two LEDs connected in parallel. The resistance of R1 is significantly lower than that of R2. Following the path of least resistance, the current through the green LED is high and the current through the red LED is very low when the laces are tied together. I haven’t implemented the circuit yet, so I’m not sure if this will work. I’ll post a picture/video of a working model soon.

[Image: shoe-lace-switch]

What is Physical Interaction?


Introduction to Physical Computing / Week 1 / What is Physical Interaction?


Prior to laying out my ideas about physical interaction, I’d like to talk about interaction as a concept.

in·ter·ac·tion
/ˌin(t)ərˈakSH(ə)n/
noun (plural: interactions)
1. reciprocal action or influence.

 

Simply put, interaction is a reciprocative conversation.
Chris Crawford puts forth the concept of interactivity in a very articulate manner: an interaction is a conversation between two subjects that involves listening, thinking and speaking (input, processing and output). My understanding of interaction is an experience where the user talks to a system, and the system understands the needs of the user and reciprocates.

“Interaction: a cyclic process in which two actors alternately listen, think, speak” – Chris Crawford

There are misinterpretations of interaction these days, which weren’t clear to me before I went through the reading. The reading clearly draws lines between visual experiences, user interfaces and interactivity. Visual experiences, such as books, films or music, can be highly immersive and can shift the listener/viewer into a different space of experience or contemplation. However, that’s an extreme reaction, not an interaction: such mediums do not think or listen, they simply talk. Similarly, user interfaces are forms through which the user interacts with a system; the interface itself is generally not involved in the functioning of the system. An interactive system is one which, through an understanding of function and form, performs all three tasks in an effective and reciprocative manner. While the function can be designed well through the implementation of concepts such as machine learning and thorough algorithms, the form can be refined through graphic design and animation.

Moving on, physical interaction is about tools: tools that can think, that accentuate human capabilities through intuitive and easy-to-use design, and that serve an effective function. This is something that resonates in Bret Victor’s rant. His singular point in the article is to question why technology is still clinging to the two-dimensional surface. He poignantly remarks that a two-dimensional device, no matter how advanced, is eventually going to be operated with a finger or two. His vision involves working towards systems that build on the many other human capabilities, and this can be done via good physical interaction.

“With an entire body at your command, do you seriously think the Future Of Interaction should be a single finger?” – Bret Victor

For me, a good and very basic example of good physical interaction would be a musical instrument. Let’s say, a drum kit. Through the use of the human limbs, the drum kit reciprocates and is capable of producing various amplified sounds, something that accentuates the human capability to produce music. Mr. Victor might agree with this, but Mr. Crawford might not, since the drum kit doesn’t think. But what if it did? Maybe the drum kit could modulate its volume levels by sensing the size of the theater. Or maybe the drums could preset themselves based on which song is to be played next. Both readings have given me a clearer definition of interaction and have raised some interesting questions in my head. I’m anticipating that the coming months will give me many opportunities to ask myself similar questions and look for possible solutions.