Pcomp: Final Project Proposal: TalkTable




Project Description


As mobile phones become more accessible and more habit-forming, people often retreat into their handheld devices in social settings such as family dinners, restaurants, and bars. Interaction with the phone takes priority over actual conversation with the people present in the space.

TalkTable is an attempt to separate people from their mobile phones and encourage them to engage in face-to-face conversation.

Design: TalkTable is a surface with a designated socket at each edge that can hold a mobile phone. When a phone is inserted into each socket, a screen activates with content for the participants to talk about. If any phone is withdrawn from its socket, the visual interaction is interrupted and the participants have to start over.

Participants: Two or more

Phase 1: The initial phase aims to establish the physical capability of detecting phones in the designated sockets and triggering a visualization. This visualization acts as a third party that starts the conversation and tries to keep it going. For the initial prototype, I'm planning to display a list of topics ranging from films, art, and television to daily life and religion. Once the participants select a topic of discussion, the interface displays selected questions. I'm also planning to integrate speech recognition into this program, so that it can respond to the discussion and interrupt only when necessary.

Tools: I'm planning to execute the initial phase using a computer screen and physical sockets for the mobile phones. I'll be using an Arduino Uno for the physical interactions and p5.js for the visualizations.
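
Roughly, the Phase 1 logic could look like the minimal p5.js sketch below. The topic list is a placeholder of my own, and key presses (1 to 4) stand in for the Arduino's socket signals until the serial link is in place; the real version would flip the socketFilled flags from FSR readings sent by the Uno.

// Phase 1 flow: all sockets filled -> show topics -> pick one.
// Keys 1-4 toggle the sockets as a stand-in for the Arduino input.
let socketFilled = [false, false, false, false];   // one flag per socket
let topics = ['Films', 'Art', 'Television', 'Daily life', 'Religion'];
let selectedTopic = null;

function setup() {
  createCanvas(800, 600);
  textAlign(CENTER, CENTER);
  textSize(24);
}

function draw() {
  background(20);
  fill(255);
  if (!socketFilled.every(s => s)) {
    // Not all phones are docked yet: prompt and reset any selection.
    selectedTopic = null;
    text('Place all phones in their sockets to begin', width / 2, height / 2);
  } else if (selectedTopic === null) {
    // All phones docked: show the topic list.
    text('Pick a topic (click one):', width / 2, 80);
    topics.forEach((t, i) => text(t, width / 2, 160 + i * 60));
  } else {
    // A topic was chosen: this is where the curated questions would appear.
    text('Topic: ' + selectedTopic, width / 2, height / 2);
  }
}

function mousePressed() {
  // Rough hit test on the topic list.
  if (socketFilled.every(s => s) && selectedTopic === null) {
    let i = floor((mouseY - 130) / 60);
    if (i >= 0 && i < topics.length) selectedTopic = topics[i];
  }
}

function keyPressed() {
  // Stand-in for socket detection: keys 1-4 toggle each socket.
  let i = parseInt(key) - 1;
  if (i >= 0 && i < socketFilled.length) socketFilled[i] = !socketFilled[i];
}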

Phase 2: The second phase involves taking this interaction to a surface model. This will require some fabrication, and perhaps a Raspberry Pi to control an LCD screen.

Phase 3: The third phase requires more time and work. It involves pulling data from the participants' social feeds (Facebook, Twitter, Spotify and Instagram) and curating the topics and discussions. This phase is out of scope for the present semester.

The eventual goal is to make social spaces more interactive and appealing. The idea is to make it easier for participants to get to know each other in settings such as conferences, orientation events, and the like.


System Diagram


Below is a rough system diagram of the device as it’s intended to be.

[Image: system diagram of the device]


Estimated Bill of Materials


[Image: estimated bill of materials]


Timeline


Week 1: November 16 – 23

  • Collect all the materials needed for the first phase of the project
  • Figure out the circuit diagram and conduct a basic test of the sockets, using breadboards and a mock setup
  • Figure out the visualization: will it be a game, a list of topics, or a visualization of the users' social feeds?

Week 2: November 23 – 30

  • Art Strategies final project
  • Animation After Effects project
  • While fabricating for Art Strategies, also fabricate the casing for the sockets

Week 3: November 30 – December 7

  • Create a barebones circuit within the fabricated sockets
  • Learn p5.js and start writing the interface
  • Complete the initial workflow, which includes introductory messages, topic lists, and basic computer responses based on the choices that the user makes
  • Run the circuit in sync with the visualization (a rough serial-sync sketch follows this list)
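
For running the circuit in sync with the visualization, something like the sketch below could work on the p5 side. It assumes the p5.serialport library (with its companion serial server running) and an Arduino sketch that prints the socket states once per loop as a comma-separated line such as "1,1,0,1"; the port name is a placeholder.

// Reads comma-separated socket states ("1,1,0,1") sent by the Arduino
// and updates the socketFilled flags used by the interface.
let serial;                                   // p5.SerialPort instance
let socketFilled = [false, false, false, false];
const PORT_NAME = '/dev/cu.usbmodem1411';     // placeholder; use serial.list() to find yours

function setup() {
  createCanvas(800, 600);
  textAlign(CENTER, CENTER);
  serial = new p5.SerialPort();               // needs the p5.serialport server running locally
  serial.open(PORT_NAME);
  serial.on('data', serialEvent);             // called whenever new bytes arrive
}

function serialEvent() {
  let line = serial.readLine();               // one reading per line, e.g. "1,1,0,1"
  if (!line) return;
  let parts = line.trim().split(',');
  if (parts.length === socketFilled.length) {
    socketFilled = parts.map(p => p === '1');
  }
}

function draw() {
  background(20);
  fill(255);
  text(socketFilled.every(s => s) ? 'All phones docked' : 'Waiting for phones...',
       width / 2, height / 2);
}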

Week 4: December 7 – 14

  • Add speech recognition to the program (a rough p5.speech sketch follows this list)
  • Make the program as responsive as possible
  • Consolidate the circuit and finish the fabricated parts
  • Develop a presentation
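
For the speech recognition step, the snippet below is a rough starting point using the p5.speech library's p5.SpeechRec object (a wrapper around the browser's Web Speech API). The keyword rule and the nudge message are placeholders for whatever "interrupting only when necessary" turns out to mean.

// Continuous speech recognition: show what the table hears, and nudge the
// conversation only when it drifts (here, naively, when a phone comes up).
let rec;
let lastHeard = '';
let nudge = '';

function setup() {
  createCanvas(800, 200);
  rec = new p5.SpeechRec('en-US', gotSpeech);   // from p5.speech
  rec.continuous = true;                        // keep listening across pauses
  rec.interimResults = false;                   // only report final results
  rec.start();
}

function gotSpeech() {
  if (!rec.resultValue) return;                 // ignore failed recognitions
  lastHeard = rec.resultString;
  // Placeholder rule: if the conversation turns to phones, gently interrupt.
  nudge = /phone|instagram|twitter/i.test(lastHeard)
    ? 'Eyes up! Back to the topic on the table.'
    : '';
}

function draw() {
  background(20);
  fill(255);
  text('Heard: ' + lastHeard, 20, 60);
  text(nudge, 20, 120);
}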



Present Status and Video Demo


As of December 6, I'm running behind on the circuit, because phone detection with conventional FSRs failed. I will be making my own FSRs, customized for the sockets. Besides that, the code is in pretty good shape: the introduction and topic-listing workflows are ready. Still pending in the code are speech recognition (which I'll start after December 7) and the addition of more information and responsive messages to the data set I'm using for the visualization. Regardless, I've prepared a video demonstration that gives an idea of how the final project will look. If I have time, I may take the project to Phase 2 (see the introduction).

