Scientists from the University of California, Berkeley (United States) have developed a robotic dog with an artificial-intelligence-based brain that, among other things, allowed it to learn to walk in an hour.
A report from the website Singularity Hub explains that the robot also learned to roll over, stand up, and flail its legs. The experts trained this unique machine with a cardboard roll to teach it to withstand and recover from pushes by its handlers.
The key to the robot dog's favorable results was the algorithm it ran, called Dreamer, which avoids the errors that typically arise when transferring a simulation-trained algorithm to the real world, and does so without excessive trial and error.
The virtues of Dreamer
By building what is called a “world model,” Dreamer can estimate the probability that a future action will achieve its goal, and its projections become more accurate with experience.
By filtering out less successful actions in advance, the world model allows the robot to figure out what works more efficiently.
The researchers wrote: “Learning world models from past experience enables robots to imagine the future outcomes of potential actions, reducing the amount of trial and error in the real environment needed to learn successful behaviors.”
“By predicting future outcomes, world models enable planning and behavior learning with only a small amount of real-world interaction,” they added.
In summary, a world model can reduce the equivalent of years of training time in a simulation to no more than an awkward hour in the real world.
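The planning idea described above can be sketched with a toy example. Everything here is an illustrative assumption, not Dreamer's actual implementation: a hand-written one-dimensional “environment” stands in for a learned neural-network model, and a simple random-shooting planner stands in for Dreamer's learned policy. The point is only to show how imagined rollouts replace real-world trial and error:

```python
import random

# Toy stand-in for a learned "world model": it predicts the next
# position of a 1-D walker from the current position and an action
# (a step size). In Dreamer the model is learned from experience;
# this hand-written version only illustrates the planning idea.
def world_model(position, action):
    return position + action

def imagine(position, actions):
    """Roll an action sequence through the model without touching
    the real environment; return the predicted final position."""
    for a in actions:
        position = world_model(position, a)
    return position

def plan(position, goal, horizon=5, candidates=1000, seed=0):
    """Random-shooting planner: sample candidate action sequences,
    score each by its imagined outcome, keep the best one."""
    rng = random.Random(seed)
    best_seq, best_err = None, float("inf")
    for _ in range(candidates):
        seq = [rng.uniform(-1.0, 1.0) for _ in range(horizon)]
        err = abs(imagine(position, seq) - goal)
        if err < best_err:
            best_seq, best_err = seq, err
    return best_seq

# All the trial and error happens in imagination; only the chosen
# sequence is executed "for real" (same dynamics in this toy case).
start, goal = 0.0, 3.0
actions = plan(start, goal)
print(abs(imagine(start, actions) - goal) < 1.0)  # prints True
```

Because the thousand candidate sequences are evaluated inside the model rather than on a physical robot, the real environment sees only the single chosen sequence, which is the efficiency gain the researchers describe.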
The results mark another step forward for artificial intelligence in robotics. Dreamer reinforces the argument that “reinforcement learning will be a cornerstone tool in the future of robot control,” as Jonathan Hurst, professor of robotics at Oregon State University, argues.