New software allows non-specialists to intuitively train machines using gestures
Many computer systems people interact with daily require knowledge about certain aspects of the world, or models, to work. These systems have to be trained, often by learning to recognize objects from video or image data. That data often contains superfluous content that reduces model accuracy. So researchers found a way to incorporate natural hand gestures into the teaching process, letting users more easily teach machines about objects while the machines learn more effectively.
You’ve probably heard the term machine learning before, but are you familiar with machine teaching? Machine learning is what happens behind the scenes when a computer uses input data to form models that can later perform useful functions. Machine teaching is the somewhat less explored part of the process: how the computer gets its input data in the first place. In the case of visual systems, such as ones that can recognize objects, people need to show objects to a computer so it can learn about them. But the ways this is typically done have drawbacks that researchers from the University of Tokyo’s Interactive Intelligent Systems Laboratory sought to address.
“In a typical object training scenario, people can hold an object up to a camera and move it around so a computer can analyze it from all angles to build up a model,” said graduate student Zhongyi Zhou. “However, machines lack our evolved ability to isolate objects from their environments, so the models they make can inadvertently include unnecessary information from the backgrounds of the training images. This often means users must spend time refining the generated models, which can be a rather technical and time-consuming task. We thought there must be a better way of doing this that’s better for both users and computers, and with our new system, LookHere, I believe we have found it.”
Zhou, working with Associate Professor Koji Yatani, created LookHere to address two fundamental problems in machine teaching. The first is teaching efficiency: minimizing the time and technical knowledge users need. The second is learning efficiency: ensuring better learning data for machines to build models from. LookHere achieves both by doing something novel and surprisingly intuitive. It incorporates users’ hand gestures into the way an image is processed before the machine incorporates it into its model, building on a data set of such gestures the team calls HuTics. For example, a user can point to or present an object to the camera in a way that emphasizes its significance relative to the other elements in the scene, exactly as people might show objects to each other. By adding emphasis to what is actually important in the image and eliminating extraneous details, LookHere gives the computer better input data for its models.
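As a rough illustration of the idea, not LookHere’s actual algorithm, a gesture-aware preprocessor might suppress everything far from the user’s hands before the image reaches model training. The function name, the dimming factor, and the assumption that a hand mask comes from some separate off-the-shelf hand segmenter are all hypothetical:

```python
def emphasize_object(image, hand_mask, pad=2):
    """Dim pixels far from the user's hands (illustrative sketch).

    `image` is a list of rows of grayscale pixel values; `hand_mask` is a
    same-shaped grid of booleans marking hand pixels, assumed to come from
    a separate hand segmenter. Pixels inside a padded bounding box around
    the hands, where a presented object is likely to be, are kept at full
    intensity; everything else is scaled down so a downstream model trains
    mostly on the emphasized object rather than the background.
    """
    h, w = len(image), len(image[0])
    coords = [(y, x) for y in range(h) for x in range(w) if hand_mask[y][x]]
    if not coords:                       # no hands detected: leave image as-is
        return [row[:] for row in image]
    ys = [y for y, _ in coords]
    xs = [x for _, x in coords]
    top, bottom = max(min(ys) - pad, 0), min(max(ys) + pad, h - 1)
    left, right = max(min(xs) - pad, 0), min(max(xs) + pad, w - 1)
    return [[image[y][x] if top <= y <= bottom and left <= x <= right
             else image[y][x] * 0.2      # dim the background
             for x in range(w)] for y in range(h)]
```

A real system would of course work on camera frames and use a learned saliency model rather than a simple bounding box, but the principle is the same: the hands tell the preprocessor where the object of interest is.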
“The idea is quite straightforward, but the implementation was very challenging,” said Zhou. “Everyone is different and there is no standard set of hand gestures. So, we first collected 2,040 example videos of 170 people presenting objects to the camera to build HuTics. These assets were annotated to mark what was part of the object and what parts of the image were just the person’s hands. LookHere was trained with HuTics, and when compared to other object recognition approaches, can better determine what parts of an incoming image should be used to build its models. To make sure it’s as accessible as possible, users can use their smartphones to work with LookHere and the actual processing is done on remote servers. We also released our source code and data set so that others can build upon it if they wish.”
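The annotation scheme Zhou describes, marking which parts of each frame are the object and which are just hands, can be sketched as follows. The label names and grid format here are assumptions for illustration only; the released data set may encode its annotations quite differently:

```python
def object_only_mask(labels):
    """Split per-pixel annotations into object vs. everything else.

    `labels` is a grid of strings, each "object", "hand", or "background",
    standing in for HuTics-style annotations that mark which parts of a
    frame are the presented object and which are just the person's hands.
    Returns a boolean mask selecting only the object pixels, the part
    worth feeding into model building.
    """
    return [[lab == "object" for lab in row] for row in labels]
```

With annotations like these, a training pipeline can learn to discard both the background and the hands themselves, keeping only the object the user is presenting.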
Factoring in the time LookHere saves users, Zhou and Yatani found that it can build models up to 14 times faster than some existing systems. At present, LookHere teaches machines about physical objects using exclusively visual input. But in theory, the concept could be extended to other kinds of input data, such as sound or scientific data, and models built from that data would likely see similar gains in accuracy.
Original Article: Machine learning, from you