Nightmare Voyeurism: Google Tech Can Read Your Body Language – Without Cameras

Leon Neal/Getty Images

Wired reports that Google’s latest privacy-invading technology can read your body language without using cameras. One Google designer ominously commented, “We’re really just pushing the bounds of what we perceive to be possible for human-computer interaction.”

According to the report, Google’s newest tech uses radar to detect users’ body language and then performs actions based on its analysis. Google’s Advanced Technology and Projects division (ATAP) has reportedly spent more than a year exploring how radar could be built into computers to understand humans from their movements and react to them.

Google CEO Sundar Pichai (pool/Getty)

Sabo mocks Google CEO Sundar Pichai (unsavoryagents.com)

Google has experimented with radar in its technology in the past. In 2015 the company unveiled Soli, a sensor that uses radar’s electromagnetic waves to detect gestures and movement. The sensor first appeared in the Google Pixel 4 smartphone, which could recognize hand gestures to turn off alarms or pause music without the user touching the device.
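Conceptually, that kind of gesture control amounts to mapping whatever gesture the radar classifier recognizes onto a device action. The Python sketch below is a hypothetical illustration of that mapping; the gesture names and handler functions are assumptions for the example, not part of Soli’s actual software.

```python
# Hypothetical sketch of the gesture-to-action mapping the article attributes to
# the Pixel 4: a recognized mid-air gesture triggers a device action without touch.
# Gesture names and handlers below are illustrative, not Soli's real API.

from typing import Callable, Dict


def silence_alarm() -> None:
    print("Alarm silenced")


def toggle_playback() -> None:
    print("Music paused/resumed")


# Map each gesture the radar classifier might report to a device action.
GESTURE_ACTIONS: Dict[str, Callable[[], None]] = {
    "wave": silence_alarm,
    "swipe": toggle_playback,
}


def handle_gesture(gesture: str) -> None:
    """Dispatch a recognized gesture to its action; ignore anything unknown."""
    action = GESTURE_ACTIONS.get(gesture)
    if action is not None:
        action()


handle_gesture("swipe")  # -> "Music paused/resumed"
```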

Now the Soli sensor is being used in further research. Google’s ATAP is reportedly investigating whether radar sensor input can be used to directly control a computer. Leonardo Giusti, head of design at ATAP, commented: “We believe as technology becomes more present in our life, it’s fair to start asking technology itself to take a few more cues from us.”

A large part of the technology is based on proxemics, the study of how people use the space around them to mediate social interactions. For instance, moving closer to another person signals increased engagement and intimacy.

Wired writes:

Radar can detect you moving closer to a computer and entering its personal space. This might mean the computer can then choose to perform certain actions, like booting up the screen without requiring you to press a button. This kind of interaction already exists in current Google Nest smart displays, though instead of radar, Google employs ultrasonic sound waves to measure a person’s distance from the device. When a Nest Hub notices you’re moving closer, it highlights current reminders, calendar events, or other important notifications.

Proximity alone isn’t enough. What if you just ended up walking past the machine and looking in a different direction? To solve this, Soli can capture greater subtleties in movements and gestures, such as body orientation, the pathway you might be taking, and the direction your head is facing—aided by machine learning algorithms that further refine the data. All this rich radar information helps it better guess if you are indeed about to start an interaction with the device, and what the type of engagement might be.
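The Wired passage above describes a layered decision: proximity acts as a first gate, and body orientation, trajectory, and head direction act as additional checks before the device reacts. The Python sketch below is a hypothetical illustration of that kind of logic; the field names, thresholds, and function are assumptions for the example, not Google’s implementation.

```python
# Hypothetical sketch of the decision logic Wired describes: combining
# radar-derived distance with orientation cues before waking a display.
# None of these names come from Google's Soli software; they are illustrative only.

from dataclasses import dataclass


@dataclass
class RadarFrame:
    distance_m: float             # estimated distance of the person from the device
    body_orientation_deg: float   # 0 = body facing the device squarely
    head_yaw_deg: float           # 0 = head pointed straight at the device
    approaching: bool             # derived from the person's movement path


def should_wake_display(frame: RadarFrame,
                        proximity_threshold_m: float = 1.2,
                        facing_tolerance_deg: float = 45.0) -> bool:
    """Wake only when the person is close, approaching, and oriented toward the device.

    Proximity alone is not enough (someone may simply walk past), so orientation
    and trajectory act as additional gates, mirroring the article's description.
    """
    close_enough = frame.distance_m <= proximity_threshold_m
    facing_device = (abs(frame.body_orientation_deg) <= facing_tolerance_deg
                     and abs(frame.head_yaw_deg) <= facing_tolerance_deg)
    return close_enough and frame.approaching and facing_device


# Example: someone passing by at 1.0 m while looking away does not trigger the screen.
passerby = RadarFrame(distance_m=1.0, body_orientation_deg=80.0,
                      head_yaw_deg=60.0, approaching=False)
print(should_wake_display(passerby))  # False
```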

This improved sensing came from the team performing a series of choreographed tasks in their own living rooms (they stayed home during the pandemic), with overhead cameras tracking their movements alongside real-time radar sensing.
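One plausible reading of that setup is that the camera footage supplies labels for each choreographed movement while the radar supplies the corresponding feature streams, which are then aligned in time to build training data. The sketch below illustrates only that alignment step, under those assumptions; the names, formats, and timestamps are hypothetical and not drawn from ATAP’s tooling.

```python
# Hypothetical sketch of turning choreographed sessions into training data:
# overhead cameras supply movement labels, the radar sensor supplies feature
# vectors, and the two are matched by timestamp. An assumption about the
# workflow, not ATAP's actual pipeline.

from typing import List, Tuple

RadarFeatures = List[float]           # e.g. range, velocity, and angle bins
LabeledSample = Tuple[RadarFeatures, str]


def align_sessions(radar_frames: List[Tuple[float, RadarFeatures]],
                   camera_labels: List[Tuple[float, str]],
                   max_skew_s: float = 0.05) -> List[LabeledSample]:
    """Match each radar frame to the nearest camera-derived label in time."""
    dataset: List[LabeledSample] = []
    if not camera_labels:
        return dataset
    for radar_ts, features in radar_frames:
        # Find the camera label whose timestamp is closest to this radar frame.
        nearest_ts, label = min(camera_labels, key=lambda c: abs(c[0] - radar_ts))
        if abs(nearest_ts - radar_ts) <= max_skew_s:
            dataset.append((features, label))
    return dataset


# Toy usage with made-up readings and labels.
radar = [(0.00, [0.2, 0.1, 0.9]), (0.04, [0.3, 0.1, 0.8])]
labels = [(0.01, "walk_toward"), (0.05, "turn_away")]
print(align_sessions(radar, labels))
```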

Lauren Bedal, a senior interaction designer at ATAP, commented: “We were able to move in different ways, we performed different variations of that movement, and then—given this was a real-time system that we were working with—we were able to improvise and kind of build off of our findings in real time.” She added: “We’re really just pushing the bounds of what we perceive to be possible for human-computer interaction.”

Read more at Wired here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan or contact via secure email at the address lucasnolan@protonmail.com.
