
MIT CSAIL teaches a robot to follow contextual voice commands

MIT’s Computer Science and Artificial Intelligence Lab has devised a method by which robots can understand and respond to voice commands, stated in clear, plain language. The system is advanced enough to understand contextual commands, too, including references made to previously mentioned commands and objects.

The ComText system (short for “commands in context”) created by CSAIL researchers provides “Alexa-like” voice control of robots, giving them a contextual understanding not just of previous commands, but also of the objects they interact with and their surrounding environment.

This all adds up to a robot that users can interact with much as they would with another person. Interfacing with robots is clearly a big challenge, and a potentially huge barrier to their commercial introduction and use in general consumer-facing applications. Even in industry, it would be far easier for humans and robots to work together if robots understood natural-language voice commands.

ComText works by learning designations for particular objects: you can, for instance, tell it that “this tool I’m holding is my tool,” and from then on, whenever you say something like “hand me my tool,” it will find the right object and retrieve it. The researchers tested ComText on Baxter, a two-armed, essentially humanoid robot created by Rethink Robotics.

ComText is made possible by giving the robot different kinds of memory: semantic memory, which holds general facts, and episodic memory, which is tied to specific occurrences and events. In tests, the robot did the right thing in response to testers’ voice commands around 90 percent of the time, a remarkable success rate, and the team hopes to push the limits using more complex input, including multi-step commands and a deepening of the robot’s contextual knowledge.
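To make the semantic/episodic distinction concrete, here is a minimal, hypothetical sketch of how such a memory split might be organized. Everything in it (the ContextMemory class, its methods, and the object identifier wrench_2) is an illustrative assumption, not CSAIL’s actual implementation:

    from dataclasses import dataclass

    # Hypothetical sketch only -- not CSAIL's code. It illustrates how a
    # robot might combine episodic memory (a log of specific events) with
    # semantic memory (stable facts) to resolve "hand me my tool."

    @dataclass
    class Episode:
        action: str   # e.g. "pick_up"
        obj: str      # e.g. "wrench_2" (an assumed object identifier)

    class ContextMemory:
        def __init__(self):
            self.semantic = {}   # general facts: designation -> object ID
            self.episodic = []   # time-ordered record of observed events

        def observe(self, action, obj):
            # Episodic memory: log each specific event as it happens.
            self.episodic.append(Episode(action, obj))

        def learn_designation(self, name):
            # "This tool I'm holding is my tool": bind the name to the
            # most recently manipulated object, then store that binding
            # as a general semantic fact.
            self.semantic[name] = self.episodic[-1].obj

        def resolve(self, name):
            # "Hand me my tool": look the designation up in semantic memory.
            return self.semantic[name]

    memory = ContextMemory()
    memory.observe("pick_up", "wrench_2")   # robot sees the user grab a wrench
    memory.learn_designation("my tool")     # "this tool I'm holding is my tool"
    print(memory.resolve("my tool"))        # -> wrench_2

In the real system the resolution step is of course far richer, drawing on language grounding and perception, but the division of labor between the two memory stores follows the same idea the researchers describe.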

Featured Image: Tom Buehler/MIT CSAIL



