
TRANSPARENT INTENT


A collaboration with Y. Sato Lab

ABOUT THE PROJECT

As technology rapidly evolves and the boundaries between the physical and digital worlds begin to blur, interfaces and control systems need to develop accordingly. At present, this change is gradual: we find ourselves constantly learning new actions that we must fold into our behaviors and eventually turn into habits.

What if objects could learn from individual human behavioral patterns and understand a user’s true intentions? What if we could use this to create intuitive, invisible, and premonitory interfaces controlled by our instincts?

Transparent Intent explores the future of the interface, envisioning a world where objects can be controlled subconsciously. Using computer vision technology developed by Y. Sato Lab, we’ve designed a set of interfaces that demonstrate this evolution. The first step removes the tangible interface: the object is controlled through an action. The next step removes the need for direct contact with the object: by controlling it with a gaze and a gesture, we can start to control objects with our intent alone. The final stage explores a scenario where objects become clairvoyant and control themselves based on human behavior: the object detects the sensations you are feeling and adjusts itself accordingly.
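To make the second step concrete, here is a minimal sketch of how a gaze-plus-gesture “intent” gate might work. The Observation values, the "swipe_up" gesture name, and the dwell threshold are illustrative assumptions rather than part of the project; a real system would take these signals from a computer vision pipeline.

```python
# Minimal sketch of the "gaze + gesture" step: act only when sustained gaze on
# the object coincides with an explicit gesture. gaze_target and gesture are
# hypothetical outputs of a vision pipeline; only the fusion logic is shown.

import time
from dataclasses import dataclass
from typing import Optional


@dataclass
class Observation:
    gaze_target: Optional[str]  # object the user is looking at, if any
    gesture: Optional[str]      # e.g. "swipe_up", or None


class IntentController:
    """Gate an object's control on gaze dwell plus a confirming gesture."""

    def __init__(self, target: str, dwell_seconds: float = 0.5):
        self.target = target
        self.dwell_seconds = dwell_seconds
        self._gaze_since: Optional[float] = None

    def update(self, obs: Observation, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        if obs.gaze_target == self.target:
            if self._gaze_since is None:
                self._gaze_since = now      # gaze dwell starts
        else:
            self._gaze_since = None         # gaze left the object
        dwelled = (self._gaze_since is not None
                   and now - self._gaze_since >= self.dwell_seconds)
        return dwelled and obs.gesture == "swipe_up"


lamp = IntentController(target="lamp")
lamp.update(Observation(gaze_target="lamp", gesture=None), now=10.0)   # gaze begins
if lamp.update(Observation(gaze_target="lamp", gesture="swipe_up"), now=10.6):
    print("turn lamp on")   # gaze held 0.6 s and gesture observed -> act
```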

Pushed further into the future, where advanced computer vision would allow objects to react to your subconscious behavior, technology could effectively become invisible and simply form an extension of the human mind.


HOW WE USE Y. SATO LAB RESEARCH

One of Y. Sato Lab’s main bodies of research is a computer vision technology in which an ordinary camera system recognizes the actions and behaviors of the people it captures. Once the computer has identified these actions and behaviors, it can go further and infer the person’s intention by analyzing gaze, facial direction, and depth. We have used this research as a basis to speculate that computer vision could eventually give us access to subconscious human behavior, and to design new, invisible, premonitory interfaces around it. Imagine an object or environment that could guess your intention by reading your subconscious behavior and adapt to you accordingly.
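As a rough illustration of this kind of inference (not the lab’s actual method), the sketch below scores candidate objects by how directly an estimated gaze ray points at them; the eye position, gaze direction, and object coordinates are assumed inputs from an upstream vision system.

```python
# Toy sketch: infer the attended object from gaze direction alone by scoring
# each candidate by the angle between the gaze ray and the direction to it.
# Positions are assumed to be 3D coordinates in metres from a vision system.

import numpy as np

def infer_attended_object(eye_pos, gaze_dir, objects, max_angle_deg=10.0):
    """Return the object the gaze ray points at most directly, or None."""
    eye_pos = np.asarray(eye_pos, dtype=float)
    gaze_dir = np.asarray(gaze_dir, dtype=float)
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    best_name, best_angle = None, np.inf
    for name, pos in objects.items():
        to_obj = np.asarray(pos, dtype=float) - eye_pos
        to_obj /= np.linalg.norm(to_obj)
        cos = np.clip(np.dot(gaze_dir, to_obj), -1.0, 1.0)
        angle = np.degrees(np.arccos(cos))
        if angle < best_angle:
            best_name, best_angle = name, angle
    return best_name if best_angle <= max_angle_deg else None

eye = (0.0, 1.6, 0.0)                      # estimated eye position
gaze = (0.0, -0.2, 1.0)                    # estimated gaze direction
scene = {"lamp": (0.1, 1.3, 1.5), "fan": (1.5, 1.0, 1.0)}
print(infer_attended_object(eye, gaze, scene))   # -> "lamp"
```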

ABOUT Y. SATO LAB

The Y. Sato Laboratory specializes in computer-vision-based technologies. The laboratory’s vision is to create a world where computers and robots can recognize your actions and behaviors as accurately as a human being.

In the real world, people can easily understand how others behave and identify what they are doing simply by observing them. Computers, however, cannot yet do this: they lack the ability to identify and analyze the many nuances of human behavior, and cannot break down the complex and subtle cues, such as location, gesture, facial expression, and gaze, that make it up.

Y. Sato Lab is developing techniques it refers to as ‘collective visual sensing’ (CVS) to help computers decipher these cues and analyze human behavior in real time. The objective is to understand the attention and activities of a group by gathering and analyzing information from wearable devices such as eye trackers and wearable cameras.
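The toy example below illustrates the collective idea only; it is not the lab’s CVS implementation. It assumes each wearer’s eye tracker yields an estimated 3D gaze target point in a shared room coordinate frame, and declares joint attention when those points cluster tightly.

```python
# Toy illustration of collective sensing: given each person's estimated gaze
# target point, decide whether the group is attending to roughly the same spot.

import numpy as np

def joint_attention(gaze_points, radius=0.3):
    """Return (is_joint, centroid): joint attention if every gaze point lies
    within `radius` metres of the group's centroid."""
    pts = np.asarray(gaze_points, dtype=float)
    centroid = pts.mean(axis=0)
    spread = np.linalg.norm(pts - centroid, axis=1).max()
    return spread <= radius, centroid

# Three wearers' estimated gaze targets in a shared room coordinate frame.
points = [(1.02, 0.80, 2.10), (0.95, 0.84, 2.02), (1.10, 0.78, 2.15)]
is_joint, where = joint_attention(points)
print(is_joint, where.round(2))   # True, plus the shared focus of attention
```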

CVS opens the door to a new world where computers sense and understand group activities. This could enable a broad range of new applications, from extracting behavioral measurements from CVS data to building assistive systems that support collaborative work in environments such as an operating room.
