AI, rat whiskers, and robots in the wild: Talking with Ian Abraham

Teaching robots to adapt to their environments has proven tricky. Using advanced AI systems, a Yale researcher is making robots more adaptive and independent.
Ian Abraham with a robot

Humans and animals are incredibly adept when it comes to adapting to their environments. For example, when humans step on ice, it only takes a few shuffles of our feet for us to learn to glide. Instilling similar adaptive abilities in robots, though, has proven tricky. Typically, immense amounts of data and computation are needed for robots to even approach the nimble ways of humans and animals navigating their surroundings.

Ian Abraham, an assistant professor of mechanical engineering & materials science at Yale School of Engineering & Applied Science, has a better way. He is leveraging artificial intelligence to help robots reason and learn from their surroundings to quickly develop new skills on their own. Using everything from robot swarms to robotic dogs to rat whiskers, he’s developing physically embodied AI systems that — by interacting with their environments — explore, play, and exhibit curiosity. With these tools, the robots can make their way in the wild with dexterity and agility.

In the first of a series of conversations about AI with Yale researchers, Abraham discusses his work on making robots more adaptive and independent.

How does AI figure into your work?

Ian Abraham: I see AI as a system that can act and is able to retrieve information, and then do something with that information at the level of what a human is capable of. My work probably goes beyond that, as it looks at how the physical embodiment of AI — for example, a robot — would go about collecting data and information about the world. A lot of my work, from my Ph.D. to now, is inspired by rats and how they navigate their environment and use their whiskers to interact with objects: That is, how do the mechanics of whiskers influence how rats go about exploring and understanding their world? We took those insights and applied them to control theory, where the goal is to optimize behaviors that leverage the mechanical interactions in order to do things like navigate or learn about object properties like textures and curvature. We pushed it a bit further to then think about specific motor skills — manipulation, walking — all these things that are related to robotics.

How did you measure what the whiskers were doing?

Abraham: We literally had tiny little rat whiskers and put them on motors. We had custom, tiny force sensors that would go at the base of the rat whiskers, and we would rotate the whiskers to hit objects and measure the forces that way. We also had really high-fidelity models of the rat whisker system that would create a simulation, where we would predict how the whisker was bending and, from that, the forces at the base of the whisker.

One of the takeaways was that mechanical contact, even with a tiny whisker, is incredibly informative. Just getting into contact with an object tells us so much about it. Then consecutively hitting that object the way rats do with their whiskers refines what we already know about the object we’re interacting with. You have to move the whiskers to interact with the world. Same with robots. Stationary robots are useless — I need robots to move to do something meaningful in the world. I want the robot to understand how to manipulate objects by interacting with them, and to think about what data to collect from an object in order to manipulate it.
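The idea of reading contact information from forces at the whisker base can be sketched with a toy model. This is an illustrative assumption, not the group's actual high-fidelity whisker model: treat the whisker as a rigid cantilever, so a point contact with force F at distance d from the base produces a base bending moment M = F · d, which can be inverted to localize the contact.

```python
def base_moment(contact_force: float, contact_distance: float) -> float:
    """Bending moment at the whisker base for a point contact (N*m).

    Toy quasi-static model: a point load F applied a distance d from the
    base of a rigid cantilever produces a base moment M = F * d.
    """
    return contact_force * contact_distance


def locate_contact(moment: float, contact_force: float) -> float:
    """Invert M = F * d to estimate where along the whisker contact occurred."""
    return moment / contact_force


# Example: a 0.01 N contact 0.05 m from the base
m = base_moment(0.01, 0.05)   # base moment of 5e-4 N*m
d = locate_contact(m, 0.01)   # recovers the 0.05 m contact location
```

Real whiskers flex, which is why the researchers use full deflection models in simulation; this linear inversion only illustrates why a single contact sensed at the base is already so informative.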

Beyond your own projects, this could benefit the robotics field in general, right?

Abraham: That’s the hope. If it works, you can get some really impressive capabilities out of robots. My group has preliminary results that have demonstrated some of these abilities. We have a little hopping robot simulation where the question was: How do you learn how to hop? We found that it just takes a couple of the “right” hops to not only figure out all the mechanics of hopping, but then how to propel forward at pretty high speeds that yield very natural locomotion.

So it’s taking data and then using that to learn how to navigate the rest of the world.

Abraham: Exactly. What we’re interested in is how do we even come up with that data to begin with? The way most AI systems are being developed is you have a bucket of data that you put into a machine-learning process, and it spits out an AI. We’re asking, what if the AI can actually collect the data? What does that mean about the system, and where do we go from there?
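The contrast Abraham draws — data handed to a learner versus data the system gathers for itself — can be sketched as a simple loop in which the agent decides what to try next. Everything here (the function names and the least-tried-action heuristic) is an illustrative assumption, not the lab's actual method:

```python
def active_collection(env_sample, candidate_actions, steps=10):
    """Collect a dataset by interacting, rather than receiving one up front.

    env_sample(action) -> observation stands in for the robot acting in the
    world; the toy heuristic favors the action tried least so far.
    """
    data = []
    for _ in range(steps):
        # Count how often each action has been tried...
        counts = {a: sum(1 for act, _ in data if act == a)
                  for a in candidate_actions}
        # ...and pick the least-explored one (a crude proxy for
        # "expected to be most informative").
        action = min(candidate_actions, key=lambda a: counts[a])
        observation = env_sample(action)    # interact with the world
        data.append((action, observation))  # the AI builds its own dataset
    return data
```

A bucket-of-data pipeline would start from a fixed `data`; here the dataset itself is a product of the system's choices, which is the shift the question is about.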


Media Contact

Michael Greenwood: michael.greenwood@yale.edu, 203-737-5151