May 7, 2026

Why Touch Is the Missing Piece in Robotics with Tao Yu

The Thinking Machine

Vision got robots looking. Language got them reasoning. But the moment a robot has to actually do something with its hands, such as threading a cable, screwing a nut onto a bolt, or manipulating a deformable object it can't fully see, touch becomes the missing piece. And touch is one of the hardest unsolved problems in physical AI.

In this episode of The Thinking Machine, I sit down with Tao Yu, Director of Dexterous AI Group at Analog Devices, to unpack why ADI is now building a humanoid hand platform with tactile sensing at its core.

Here are just a few of the topics we get into:

  • Why dexterous manipulation is the "crown jewel" of robotics, and why it's a hardware + sensor + data + AI problem all at once
  • What tactile sensing actually involves: pressure, vibration, temperature, and the multimodal fusion problem
  • Taxels (tactile pixels) and how human skin's ~1mm resolution sets the bar for fingertip sensors
  • Why collecting good manipulation data is so hard when the human operator can't feel what the robot feels
  • Real2Sim2Real with physics as the interface: how ADI thinks about modeling sensor non-idealities so policies trained in simulation actually transfer


About Tao Yu:
Tao Yu is the Director of Dexterous AI Group at Analog Devices, Inc. (ADI), where he leads research on multimodal tactile sensing for robotics and Physical AI. His work sits at the intersection of ADI's 60-year analog semiconductor heritage and the emerging demand for production-grade tactile sensors in dexterous manipulation and humanoid robotics.

Tao holds a PhD from MIT and has published research spanning tactile sensing, wireless sensor platforms, and signal processing. At ADI, he and his team are pioneering industrial-grade tactile sensors that capture force, vibration, and temperature at sub-millimeter resolution, as well as Physical AI research on tactile-enabled dexterous manipulation.

Tao on LinkedIn: linkedin.com/in/taoyumit/

Thanks to Lightwheel for making this episode possible.
Learn about how Lightwheel is making physical AI successful at https://lightwheel.ai

Tao Yu: Robotics is not new, it's been around for decades. And how to find the right angle in robotics that we can tackle using physical intelligence, that's where I started to find the real crown jewel of robotics, which is dexterous manipulation. And that's where I started to realize that it's not just an algorithm problem, it's not just a data problem, it's a system problem. It's hardware, it's sensors, and data, and AI all coming together to solve the dexterous manipulation problem. And that's where I started to realize the missing piece, which is tactile sensing.

Jonathan Stephens: That's Tao Yu, Director of the Dexterous AI Group at Analog Devices, where he leads research on multimodal tactile sensing for robotics and physical AI. He and his team are building industrial-grade sensors that capture force, vibration, and temperature at submillimeter resolution, giving robots a genuine sense of touch. In this episode, we're going to get into why manipulation is the hardest unsolved problem in robotics, why teleoperation breaks down when humans can't feel, where ADI thinks this is all heading, and much, much more. Thanks to Lightwheel for making this episode possible. This is The Thinking Machine podcast. I'm Jonathan Stephens. Now let's get to it.

Tao, welcome to The Thinking Machine podcast. I'm glad to have you here. We originally met each other at a GTC event. I wasn't even at Lightwheel at the time, but now I work at Lightwheel, and there's a lot of crossover between what we're doing in simulation and what you're doing with touch and these different technologies. As I've gotten into robotics, I've learned it takes a lot of disciplines to make these things work. I just want to start off with you. You work at Analog, and Analog has been around for 60 years, a long time. When I think about that company, the first thing that comes to mind is not robotics, it's semiconductors, signal processing. You have a huge history in that. Why did Analog decide to explore robotics and cross into a different domain?

Tao Yu: Yeah, it's a good starting point, right? I'd say probably most of the audience may not have heard of Analog Devices, even though we're one of the Fortune 500 companies, with something like a 120 or 160 billion dollar market cap, and we've been around for sixty years, a very long history. The company is primarily focused on what we call the interface between the analog, or physical, and the digital world. That is, it's about understanding and measuring physical quantities and the interaction with the physical world, but also a lot of deep understanding of how physical quantities like pressure and magnetic fields translate to electrical signals that we can measure, interpret, and use in various systems. And in fact, if we take any piece of electronics and open it up, I'm pretty sure you can find at least a few Analog Devices products in there, just because we have such a big portfolio of products that covers power, amplifiers, DSP, and all that.

Jonathan Stephens: Okay, so it isn't really a big addition to the company's expertise, it's just extending it to an emerging field. It's not something new. Now we're seeing that instead of a small group putting sensors on robots, it's getting a little bit larger, more of a bigger industry.

Tao Yu: Yeah, yeah.
It's about understanding the problem, because in many cases we build components out of our own knowledge or some customer-specific requirements. But these days technology is moving so fast, and the boundary between components and systems becomes blurry, so we have to figure out where our customers are headed: what problem are they trying to solve? What is the customer's customer's problem, right? If we're selling components to robotics companies, we'd better understand what they're using them for, so that we can create a solution that is better suited to solve that problem.

Jonathan Stephens: Okay. And so that makes sense, to go two layers deep and talk to your customers' customers, because you're not selling to a warehouse, you're not selling to me, someone at home; you're selling to someone who's manufacturing a robot for use cases in those environments. If you don't know what that end use case is, you end up making kind of a general component that...

Tao Yu: Exactly.

Jonathan Stephens: ...has to be okay at anything, right? And that's an interesting question as I've dived into robotics: we see people making these specialized robots that can do certain things really well, and then some are just trying to make the do-it-all robot. I think what you're talking about is the way we're going to move forward initially: make these purpose-driven robots with purpose-driven components on them to solve really hard challenges, or just the early needs. Then maybe that robot can do other things in the future, but let's get it out there doing some pivotal work. So then personally, what made you want to move into the robotics end? I don't think you've been doing this your whole career. Personally, was there something that inspired you, something you were seeing in the market or in the world, that made you want to explore moving into tactile sensing and humanoid hands within robotics?

Tao Yu: Yeah. This story kind of starts about three years ago. The team I'm leading is more of an advanced, cutting-edge machine learning team. We have a lot of examples of problems we solved in the past using machine learning to better model or correct certain physical phenomena, like some non-idealities in physical systems. So this is kind of our bread and butter: we know how to characterize and understand the behavior of a physical system, model it, figure out what can be done with a model-driven approach and what needs to be done with a data-driven approach, so that we can put together a combined model to account for the intractable part of the physics and provide a solution that makes our overall system performance better. That's a very high-level way to describe the type of work that I did before robotics. When it comes to the age of AI, especially after ChatGPT and the explosion of generative AI, we were trying to figure out the intersection between that and what ADI is good at overall: understanding and interacting with the physical world, as well as our deep knowledge and expertise in modeling and correcting physical non-idealities and better understanding the physics that the system is interacting with. This is what we call physical intelligence: AI that understands the physics behind it. And that intersection happened to be robotics, because robots are the embodiment of AI capabilities in the physical world.
There's no better way to interact with the physical world than the robots that exist today. So that naturally led us into the robotics space. And though humanoids have started to get a lot of hype, robotics is not new, right? It's been around for decades as well. How to find the right angle in robotics that we can tackle using physical intelligence, that's where I started to find the real crown jewel of robotics, which is manipulation, dexterous manipulation. And that's where I started to realize that it's not just an algorithm problem, it's not just a data problem, it's a system problem. It's hardware, it's sensors, and data, and then AI, all coming together to create a solution to the dexterous manipulation problem. And that's where I started to realize the missing piece, which is tactile sensing. That's where we've spent a lot of time trying to put together really good solutions to enable tactile sensing, not just for ourselves, but also for the whole community.

Jonathan Stephens: To that point, I think what people take for granted when they look at these robots is that unlike an LLM, a ChatGPT, which just has to understand text and images, a robot has to understand not only what it's seeing around it, but also what it's feeling. So when it picks up something and something unexpected happens, if it doesn't even have the sensors to understand that, then it's not going to be able to analyze or react appropriately. There are only so many things I can know about an object by just looking at it. To stick to the basics: when you're buying clothes, this isn't quite the same, but if I go buy clothes, I'm always touching the fabric, how soft is it, things like that, things you can't perceive by just looking. So robots need that same level of sense. Is that soft and deformable? It looks solid, but maybe it's not. You need to be able to pick it up, manipulate it, and then turn those signals into bits that an AI system can understand and react to.

Tao Yu: Yeah, it's generating actionable insights, right? If we're using only vision to supervise the motion, especially in a case, as you mentioned, where something unexpected is happening, it becomes just reacting to what already happened. If something is falling, you won't be able to catch it like a human reflex; it's more like, oops, it dropped, let me just pick it up again to recover.

Jonathan Stephens: So is ADI actually building sensors that you'd put on the ends of, like, a human hand, or are you building the whole hand? What portion of that robot would you be building?

Tao Yu: Yeah. So ADI is going to build a hand platform; that's what we're going to call it. It's not a hand per se, but a set of technologies that really enables dexterity. That is not just tactile sensing, which is where we started, because that's what we see as the missing piece and we have great technology to enable it, but also actuation. Because the overall goal of dexterity is not just being able to perceive some physical signals of the object, but also to close the loop and act on events. If slip or some unexpected contact happens, you need the actuators to actually respond to it. So it's about having that closed-loop system with the right parameters, and even compute: if we have compute in the hand that allows us to fit a small model, we can run a certain reactive loop.
Those are really the necessary components, the pieces needed to put together what we call a platform, so that it benefits the overall humanoid space. The hand can be designed differently mechanically, different sizes and shapes, but the capabilities are built on top of our platform's offering.

Jonathan Stephens: Okay, that makes sense. There's not just one hand shape and model, right? I'm seeing all sorts of different applications where you might not need five fingers; you could even get away with fewer.

Tao Yu: We actually use fewer than five fingers in our day, right? In fact, fun fact, something I learned is that it turns out the index finger is the least useful, which is kind of counterintuitive, because you can use the middle finger to actually replace it if the index finger somehow gets lost. That is just surprising and counterintuitive.

Jonathan Stephens: I would have thought that was the most important, that and your thumb.

Tao Yu: The thumb is definitely the most important. Without it, you can't finish the grasp.

Jonathan Stephens: Okay, so let's just talk about the sensors first, and not the whole hand, and we'll move on from there. I'm really learning hardware as fast as I can, because I come from the software world. Understanding the hardware is just as important, even if I don't actually build it; if you don't know how it works together with the software, you only have half the equation. So let's start with the hardware part of tactile sensing. What kinds of signals can you actually sense? I know of course there's pressure if I touch something, but are we getting heat, vibration? How do those even work together if you can sense more than one thing? How do you train a robot to know what heat is and what to do when it encounters it?

Tao Yu: Yeah, so this gets into the multimodal nature of tactile, right? In many cases people think of tactile as just being able to sense force or pressure. But in fact, in human skin there are so many different receptors that respond to different excitations. Some respond to static force; that's where we get better spatial resolution, so we know the rough size and shape of the object. But there are also receptors that respond to rapid signals like vibration, which can go up to, say, one to two kilohertz. That is actually more useful when we try to sense, for example, fabric texture. It's not that you press on it and know this is cotton, this is silk; it's more that when you swipe your finger over it, you get really fine-grained vibration, down to even tens of nanometers of roughness. It's incredible how human skin can sense all that. And of course there are temperature receptors. All of these come together; it's really incredible how human skin achieves all this through natural evolution. Our brain fuses all of them together, and now we have to find a similar way to do it in AI. Thankfully the AI community has largely solved that problem with multimodal models, where we can train an individual representation model for each modality and then combine them in a backbone that figures out how to fuse them. But of course, that's built on the type of tasks we're handling.
For example, if we're trying to train a grasping policy, we usually try to put in as many modalities as possible, even vision, and fuse them together with a backbone network, usually a transformer these days. But we also created some interesting ways to put in certain human priors. For example, if I'm grasping something that is symmetric, I have a prior expectation that the forces on my two fingers grasping a symmetric object should be about the same. If they're not, then I know there's something wrong: either the object is in contact with something else, or, in the case of insertion, there's the very well-known challenge of wedging, where the part wedges into the hole, gets stuck, and can't be fully inserted. This is where we get a lot of physical insight that lets us innovate on the overall AI architecture: can we use cross-attention to compare the embeddings from different fingers to arrive at a better policy for the task, knowing there is a certain prior on the problem or the object? (A sketch of this idea appears after this exchange.) So this is a way to fuse all this information in a learning-based approach. And of course, that behavior needs to be trained on a set of data. How do we collect the data? How do we translate human behavior, human intuition, from a dataset into the policy model? All of these are advanced research topics that we're exploring.

Jonathan Stephens: Right. It feels like physical AI as a whole is still fairly new, but what you're working on is even newer. I think vision kind of came first, and if you're moving an arm and it hits a certain amount of resistance, it knows, okay, there's something blocking that arm, and it has the vision. But all these other senses we're talking about, I don't know too many companies focused on them. I see a lot of early demos using just basic pinchers, and you can look at them: they're rigid, there are no sensors on the ends. They're just picking up objects and moving them, right? A lot of the sensors are in the arms and things like that, figuring out how heavy something is when transferring weight. So having your sensors on the fingers unlocks a lot of things. At GTC, when we talked, we talked about, let's say, screwing a nut onto the end of a bolt when you can't see that bolt. I could blindly do that all day long, but without the touch sensors we're using in our fingertips, a robot wouldn't even be able to do that. Even if it sees it, it might still not understand that it's catching because it's cross-threaded a little, not quite correct. So when I'm thinking about that, how are you even training it? Do you have to have specific haptic sensors? I'm trying to think about vibration: that means you're able to touch something and it's reproduced in the hand. Because if I'm swiping a piece of fabric, the robot has to be swiping the exact same piece of fabric, or somebody has to translate that to my hand. Is that just done using special haptic gloves to train those modalities?

Tao Yu: Yeah, I would say right now this is really a missing piece.
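To make the cross-attention idea Tao describes concrete, here is a minimal sketch of comparing per-finger tactile embeddings in PyTorch. Everything in it, the encoder shape, the dimensions, and the policy head, is an illustrative assumption, not ADI's actual architecture:

```python
import torch
import torch.nn as nn

class FingerCrossAttention(nn.Module):
    """Sketch: encode each finger's taxel map, then let the fingers attend
    to each other, so a policy can exploit priors like 'a symmetric grasp
    should produce roughly symmetric contact forces'."""

    def __init__(self, taxel_dim=3 * 20 * 50, embed_dim=128, num_heads=4):
        super().__init__()
        # Shared encoder over each finger's flattened 3-axis taxel grid.
        self.encoder = nn.Sequential(
            nn.Linear(taxel_dim, 256), nn.ReLU(), nn.Linear(256, embed_dim)
        )
        # Cross-attention: each finger's embedding attends to the others'.
        self.cross_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.policy_head = nn.Linear(embed_dim, 16)  # hypothetical joint targets

    def forward(self, taxels):            # (batch, n_fingers, taxel_dim)
        tokens = self.encoder(taxels)     # (batch, n_fingers, embed_dim)
        fused, _ = self.cross_attn(tokens, tokens, tokens)
        return self.policy_head(fused.mean(dim=1))  # pool fingers -> action

model = FingerCrossAttention()
obs = torch.randn(8, 2, 3 * 20 * 50)      # two fingers, 20x50 grids of 3-D force
actions = model(obs)                      # (8, 16)
```

A mismatch between the two finger tokens, say asymmetric forces on a symmetric object, is exactly the kind of relation attention can pick up.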
And this came to a realization when we developed the model we demonstrated at NVIDIA GTC DC just about a month ago, where we have this dexterous hand, equipped with tactile sensors on two or three fingertips, that is able to trace a cable nicely without vision.

Jonathan Stephens: I'll make sure to put it on the screen so people can see that. It was quite an impressive demo.

Tao Yu: Yeah, thank you. It actually was quite a challenge to put together, because when we collected the data, we used teleoperation like everyone else. We have a glove that captures human motion and translates it to joint motion of the robot. But of course, we are not holding the cable; it's the robot hand that's holding the cable. The problem is we don't have the same sense of touch when we're doing the teleoperation. The problem now becomes: I have no way to use my decades of living as a human to guide how the hand should behave when tracing a cable. In fact, a few engineers on my team, myself included, tried for days to figure out how to manipulate this cable, because it just kept falling, or we couldn't reach it because it deformed and the hand has certain kinematic constraints, so we just couldn't do it. Then we had to rewire our brains and try different things. If you look at the demo, the hand is rotated about 30 degrees, just to use gravity as another degree of freedom to help us better position the cable. I'm sharing a few things from behind the scenes of the demo we showed. It took us quite a bit of time to figure out; it is very, very hard for a robot to perform the same as a human. And this is especially challenging if we don't have a good way to capture data, because then we can only use our own vision to supervise the data collection. And there's always going to be the question: okay, if I can do it by watching, does that mean vision can solve the problem? Or do you really need haptic feedback, so that the human can feel what the robot feels and collect data with that feedback, capturing the human intuition from our own sense of touch? So I would say this is still quite an open space, and I've seen a lot of work on data collection gloves, collecting human data, because after all, teleoperation is very expensive, and we have that huge gap of not getting tactile feedback. The next natural path, with a lot of people's attention on egocentric data collection, is just adding more sensors to the gloves, as a way to capture these human intuitions as people perform the tasks themselves. So there are definitely a lot of interesting unsolved problems today; even the sensors themselves are not well equipped to be installed on those gloves. How do we solve the data problem? Then we can move on to the next big challenge, which is the AI problem.

Jonathan Stephens: Yeah. So you're almost having to build your own capture equipment, or work with other parties that are building, not hands, but gloves and capture equipment. You almost have to help them get the best capture quality possible so you can get the data to train your own AI. That's a tough problem. When we get to vision, there are lots of camera sensors out there, with stereo, RGB, all these options.
But for you, you almost have to create your own options or work with other vendors. So that's an interesting challenge you have to beat. I want to talk about those sensors then. You introduced me to the concept of a taxel. It's like a pressure point or a sensing point on a fingertip.

Tao Yu: Yeah, a tactile pixel.

Jonathan Stephens: Can you explain what a taxel is and how it relates to sensing and touching?

Tao Yu: Yeah, a taxel is an analogy to a pixel in an image. The easiest way to put together a tactile sensor is to create individual sensing elements on a surface, over a certain area, so that each one responds to pressure applied to it. So we define taxels similarly to an image: at the end of the day we get a kind of small image, with a one-dimensional value at every physical or spatial location of the sensor, or three values in some cases, if we're measuring 3D force. This gives us a data representation of the sensor over that whole area, the entire coverage of the sensor on the fingertip. (A small sketch of this representation appears after this exchange.)

Jonathan Stephens: Interesting. I like the idea of relating it to a TV with pixels: there we're recording at 1920 by 1080, which is millions of pixels. For a fingertip, it's thousands of pixels.

Tao Yu: Yeah, I'd say a thousand is probably on the higher side, because that's similar to what a human has. We have roughly one-millimeter resolution in our skin, which accounts for about a thousand taxels per finger.

Jonathan Stephens: So if you put 2,000 taxels, if you managed to make them that small in a sensor on a robot, it would have super touch compared to a human.

Tao Yu: Yeah, superhuman capability.

Jonathan Stephens: Right? But I also see the challenge of training that, because humans don't have superhuman touch. I could see how it could be helpful in some ways, though. Same with camera sensors: some can see things really far away, because of the number of megapixels they collect, that a human wouldn't be able to see and zoom into.

Tao Yu: Yeah, that is a unique training problem.

Jonathan Stephens: Okay, so let's move on. You have these sensors on a humanoid hand platform that can sense, like you said, vibration and temperature and force. So then the hand: you're building this platform. What is the biggest challenge right now in building a hand platform? Is it getting actuators and motors in there that are strong enough and can go through long cycles of movement? There are a lot of complex things that go into a hand. What's the greatest challenge right now, besides the sensors, that you're exploring?

Tao Yu: Yeah. For a humanoid hand, the problem is that it's very size-constrained. It's the size of our own hand, hopefully. And we're trying to cram a lot of motors and a lot of wiring and electronics into that very limited space. Also, the overall technology and architecture has not converged. There are direct-drive hands, which are great but very limited in torque density, so the overall torque output is very limited, and they tend to get very hot, because there's very little space for the heat to dissipate. And then there are tendon-driven hands, where the motors can be larger; it can be a bank of motors sitting in the palm or in the forearm. But then comes the problem of: what about the slack of the cable? What about the reliability and serviceability of these hands if they break?
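To make the taxel picture above concrete: a taxel map is just an image-like array, with one (or three) force values per millimeter-scale site. A tiny sketch, with all dimensions and numbers invented for illustration:

```python
import numpy as np

# Illustrative numbers only: a 20 x 50 mm sensing surface at 1 mm pitch
# gives 1,000 taxels, roughly the resolution Tao attributes to human skin.
TAXEL_PITCH_MM = 1.0
ROWS, COLS = 20, 50
taxels_scalar = np.zeros((ROWS, COLS))   # one value per site: normal pressure
taxels_3d = np.zeros((ROWS, COLS, 3))    # three values per site: 3-D force

# A press shows up as a localized blob, exactly like pixels in an image:
taxels_scalar[8:12, 20:26] = 0.5         # say, newtons per taxel
total_force_n = taxels_scalar.sum()                               # 12.0 N
contact_area_mm2 = (taxels_scalar > 0).sum() * TAXEL_PITCH_MM**2  # 24.0 mm^2
print(total_force_n, contact_area_mm2)
```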
So overall, there hasn't been a convergence of architecture, and I don't expect it to converge in a short amount of time. Each camp will continue to build the best hand possible given the constraints and try to overcome those constraints. Being able to serve the different architectures, I think that is a huge challenge for us. In addition, connectivity, we see, is a problem. Hands without those sensors have, what, 20 degrees of freedom, or 24 for the craziest ones I've seen. It's mostly joint positions and maybe velocities coming out, and torque commands or position commands coming in. They're relatively low-dimensional in terms of data, and pretty much symmetric. When it comes to these really sensor-equipped hands, with tactile sensors at 1,000 or 2,000 points per finger, and maybe someone puts a camera in the middle of the thumb, like if you look at Figure, they create this huge bottleneck, a requirement for uplink data that needs to go from the hand to the brain. So having reliable connectivity that can at the same time achieve the low latency and reliability needed across the moving joints at the wrist, that's a huge, huge challenge.

Jonathan Stephens: There are several points I could dive into on that, but one thing you just pointed out at the very end: I don't think about the fact that adding more sensors means more data we have to process simultaneously. I'm seeing some very interesting world action models coming out from, for example, NVIDIA. They put out one that used all human egocentric data, and it was very impressive. But if you jump to the limitations section of that paper, it admits it's running on two H200 servers simultaneously to process the amount of data it has to handle. It runs at, I forget the rate, something like seven hertz; whatever it was, it was enough to make it feel human-like. I can imagine as you add more of these sensors, more data you have to process simultaneously, that burden gets higher and higher. So how do you add all those sensors onto a hand and then distill that into the smallest model possible? I can imagine, like you said, almost a sub-model that digests it into something simpler. I'm not 100% an AI person, but how do you take all that signal and turn it into a simpler signal for the big brain to handle? How can you handle that throughput and still react in time? You don't want these humanoid hands to move a little bit and then wait for the input to compute, to the point where it becomes unrealistic to solve any problems because it takes so long for the robot to deal with the world. So you have some unique challenges there.
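The uplink bottleneck Tao describes is easy to see with back-of-envelope arithmetic. The taxel counts come from the conversation; the sample rate and word size below are illustrative guesses:

```python
# Rough uplink budget for a sensor-rich hand, tactile only (no camera).
taxels_per_finger = 1000   # ~1 mm pitch, per the discussion above
fingers = 5
axes = 3                   # 3-D force per taxel
bytes_per_sample = 2       # assume 16-bit readings
sample_rate_hz = 1000      # fast enough to catch slip/vibration events

uplink = taxels_per_finger * fingers * axes * bytes_per_sample * sample_rate_hz
print(f"{uplink / 1e6:.0f} MB/s")   # 30 MB/s
```

Compare that with a sensorless hand streaming, say, 24 joint positions at the same rate: under 50 KB/s. That jump of roughly three orders of magnitude is the connectivity problem.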
Jonathan Stephens: I thought it was interesting when you talked about the two different motor drives, the ways you can move a hand. I keep moving my hands on camera here, but when I went to NeurIPS, they had Optimus there; they were, I think, unveiling the new hand, or had just done so, and they had, I don't know if there's a better word, de-gloved the hand on display. Again, I'll put it on the screen here, but you could see the tendons running through, because it was a tendon-driven hand. I thought that was interesting. I think Neo uses a tendon-driven hand as well. I'm seeing that these more generalist humanoid robots are going that way. Is there a reason they would use that? Do you suspect it's because it's lighter, or lower torque? They don't need the high torque; you don't need to grip something as strongly as, say, an industrial component. Is that the reason you'd use it?

Tao Yu: Yeah. I would say the tendon-driven hand has a few benefits. One is that it's more human-like, because we use tendons in our hands, in our whole body, to drive our motion. It's more human-like, but at the same time it provides more compliance. If you accidentally hit something with a direct drive, it just keeps that force and pushes in. A tendon, because it has certain slack, provides compliance just like our hands: it can bend and not incur a lot of damage if that happens. So there are a lot of benefits, and it also provides a bit more overall torque, because you can put the motor somewhere else, make the hand smaller, and let the heat dissipate better. So overall there are a lot of benefits to going with tendon hands. But the problems come in serviceability and the complexity of the overall design. You have to ensure the material used as the tendon is strong enough and lasts long enough that it doesn't just break every other day. Also, there's another challenge related to simulation. A lot of robotic skills, or physical AI models, are trained with a combination of simulation data and real-world data, just because it's so expensive to collect that amount of data in the real world with robots. So it's a much easier path if we can make simulation accurately represent the behavior of the robot. The tendon-driven hand, given that there's certain slack, has certain non-idealities that are hard to model in a simulation environment. It is much more straightforward with a direct-drive hand, where you can model exactly the actuation and the joint angles, where the torque goes, how each joint gets actuated. So that's also why a lot of people, especially people who are heavy on simulation-based policy training, like direct-drive hands much better.

Jonathan Stephens: Interesting. I hadn't thought about that; I was thinking more about deployment and what goes through the most cycles. You wouldn't want to replace the whole arm on a robot every time the tendon breaks or needs to be serviced, versus replacing just a finger or a motor. Interesting. So you brought up simulation; let's pivot to that for a bit. Of course, I work at Lightwheel, where simulation is what we do. We also do egocentric data, but simulation is the biggest challenge we're tackling, because it's a really hard thing to do. Like you were saying earlier, getting sensor data is a really tough challenge. What we're trying to achieve at Lightwheel is the highest-quality, physically accurate simulation data possible, and that all starts with modeling the real world as closely as possible in simulation, the high-fidelity component. Tell me, how does that connect to what you're building? How do your high-resolution sensors help us build a better simulation?
Tao Yu: Yeah, this is a great topic to touch on, because simulation has proven extremely useful when it comes to locomotion. All the robots doing flips and parkour, that's ridiculous; I can't do it. A lot of those models are trained in simulation, because simulation has a pretty good representation of the overall behavior of the robot, and you don't really need to pay much attention to interaction with objects. Of course you interact with the environment, with the ground, maybe gravel or some terrain where you want the robot to behave robustly, but you can abstract certain things to make it much easier to work with. On the other hand, manipulation has been extremely hard, because it's not just about the robot itself; it's about the robot getting in contact with something. If I hold a stick, it's not just about me holding the stick and understanding the interaction of the hand with the stick; if I swing it, it may hit something else, and how do we understand the interaction of that stick with something else? This compounds into a much larger environment, especially for deformable bodies. For example, if I'm holding a cable, what if the cable is entangled with another cable? How do I simulate such a complex environment, with so many objects interacting with each other, where I need to capture all those interactions at the same time? And when it comes to this manipulation problem, as I mentioned, tactile is going to be important. How do we capture the tactile information from those interactions in simulation? That is a really hard problem to solve today. It's easier to model rigid-body interaction: if the finger is more or less rigid and you're grasping a rigid object, it is easier to compute the contact force and the distribution of force at the contact points. But when it comes to something like a plush toy or a cable, something that deforms, we run into the problem of not being able to faithfully extract what that contact force is. And how can we train a policy that relies on an accurate measurement of those forces? It becomes a bigger challenge. So having a simulation model that provides not just high-fidelity deformable-body simulation, where the cable bends the way it should, but also accurate force information, so the robot can make decisions and the policy can act on the force it perceives at the fingertip, that is a problem that is just not solved today.

Jonathan Stephens: Let me make sure I understand that correctly. We of course need to be building assets, objects in simulation that react how we'd expect in the real world. Take the cable you mentioned: you can bend a network cable and it'll bounce back, and we can model that; we can measure how much resistance it has to bending, and there's probably some friction coefficient on the surface of these rubber cables, things like that we can model. But then you also need to model backwards: you need to model the robot, make sure it's correctly reproducing the signal from touching these cables back into the real world, make sure we've reproduced the signal input. And then you also have to make sure you have a high-quality physics engine and all of that, so the physics actually works as expected. It's a really hard challenge, right?
You also brought up that you might have a cable snagged on another cable. One example we've been showing is wiring through a door panel when manufacturing a car. Think about that: as I'm pulling a cable through that door, sometimes it snags and you can't see what it snagged on. That all has to be a touch thing. You intuitively say in your brain: maybe if I feed some more slack on the other end, it'll change the cable's deformation a little in the middle, whatever it's snagged on will release, and then I can pull. Or as you're pulling, it's got friction, you've got the panel against the cable: is it still going? Should I pull more? I don't want to snap the cable, and this robot could be strong enough to damage it. So I can see where touch is becoming more and more important; it always has been. But I think it's one of those senses we don't even think about as we move through our day, because it's so ingrained in how we operate.

Tao Yu: Right, it's all human intuition. I don't think about blinking. I also don't think, when I pick up my cup, is it going to slip from my fingers? It just doesn't come up.

Jonathan Stephens: Yeah, intuition. So simulation is a tough one. What does that look like when you're building or working in simulation? You build your sensor; we have our objects, since Lightwheel makes objects, and we've worked with you to make sure we have high-quality ones; we're doing that starting with your sensors, rebuilding them; and then when you run your simulation, you want to redeploy back to real. So you go real-to-sim-to-real, and there should be a high success rate at each step. How do we know we're going from real to sim correctly, and not just failing in the real world? Is there a way to know, once it's in simulation, that we actually have a fully reproduced object? Or do we only know by going back to real?

Tao Yu: Right, that's a good question. In fact, I would say there are many ways to do this, but what we believe in is using physics as the interface: understanding what physical quantity we're measuring, being able to measure it in the real world, and making sure the same physics is recreated or modeled correctly in the simulation environment. With that, we have achieved half of the equation, because you know, based on Newtonian physics, that the reaction force is equal to the force applied to it. So you know the simulation actually provides the same kind of contact force that you're supposed to measure at the surface of the sensor. The second half of the equation is where ADI comes in, because we create these sensors. We know how much error these sensors make under what circumstances, what the standard deviation is, what the drift over temperature is, and so on. We can correct them, but they'll never be perfect. So how can we model the other side of the world, so that the sensor in simulation represents the characteristics of the sensor we created? When all of this is in simulation, we know the contract is to align on the physical quantity, and we train a model on the degraded, more representative sensor output, which combines the physical quantity with the sensor model we created, and train a policy to take that data and act well to perform the task.
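A minimal sketch of what that degraded-sensor idea might look like inside a simulator loop. The error model and every number here are invented for illustration; a real model would come from characterizing the actual parts:

```python
import numpy as np

rng = np.random.default_rng(0)

def sensor_model(true_force_n, temp_c, gain, offset_n, noise_sd):
    """Wrap the simulator's ground-truth contact force in the kinds of
    errors a characterized sensor exhibits (all parameters assumed)."""
    drift = 0.002 * (temp_c - 25.0)  # assumed thermal drift, N per degC
    reading = true_force_n * gain + offset_n + drift
    reading = reading + rng.normal(0.0, noise_sd, size=np.shape(true_force_n))
    quantized = np.round(reading / 0.01) * 0.01   # 10 mN quantization step
    return np.clip(quantized, 0.0, 20.0)          # 20 N full scale

def randomized_sensor(true_force_n):
    """Domain randomization: draw a new plausible sensor per episode so the
    policy also learns unit-to-unit variation and aging, not just noise."""
    return sensor_model(
        true_force_n,
        temp_c=rng.uniform(15.0, 45.0),
        gain=rng.normal(1.0, 0.03),      # +/- 3% gain spread (assumed)
        offset_n=rng.normal(0.0, 0.05),  # 50 mN offset spread (assumed)
        noise_sd=0.02,
    )

# What the policy would see for a near-symmetric two-finger grasp:
print(randomized_sensor(np.array([1.00, 1.02])))
```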
In simulation, we then have good confidence that this policy should work in the real world, because everything is aligned on that interface. So that's what we convey to our customers.

Jonathan Stephens: We have our physical measurement factory, which basically takes robot arms with your sensors on them, manipulates real-world objects, gets a sensor output reading of pressure and graphs it, and then redoes that exact same experiment in sim. You're going for 100% fidelity, which is never quite going to happen, but you get as close as you can. That should give you high confidence that you've reached a benchmark: say, this is 99.4% similarity as we run test after test, this is as high quality as is achievable right now. Sure, we're always going to be chasing higher, but I can imagine that if you're close enough, you should still be able to deploy the system back from sim to real, and with generalized AI it should still be able to succeed.

Tao Yu: Yeah, great. Especially since you can introduce domain randomization, right? How the sensor may vary from one unit to another, aging over time. All of these can be built into the simulation to account for non-idealities you will see in the real world, so the model doesn't just learn from perfect measurements; it also learns how to react to those non-idealities.
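One hypothetical way to score the real-versus-sim agreement Jonathan describes: replay the same experiment in both worlds and compare the force traces. The metric below is a made-up illustration; an actual benchmark would use task- and sensor-specific measures:

```python
import numpy as np

def real_sim_similarity(real_trace, sim_trace):
    """Toy fidelity score: 1.0 means the simulated sensor trace exactly
    matches the trace recorded on the physical robot."""
    real = np.asarray(real_trace, dtype=float)
    sim = np.asarray(sim_trace, dtype=float)
    rmse = np.sqrt(np.mean((real - sim) ** 2))
    span = float(real.max() - real.min()) or 1.0  # avoid divide-by-zero
    return 1.0 - rmse / span

# Same grasp, once on hardware and once replayed in simulation:
real = np.array([0.0, 0.4, 1.1, 1.2, 1.2])   # newtons over time
sim  = np.array([0.0, 0.5, 1.0, 1.2, 1.2])
print(f"{real_sim_similarity(real, sim):.3f}")  # ~0.95
```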
Jonathan Stephens: Okay. So then to wrap up this part about tactile sensing: I think people understand how you're helping with simulation, and that you have a platform with sensors that can sense different inputs, again, pressure, temperature, vibration, things like that. Now say you have a customer who says: I've just won a big contract to build a manufacturing robot for a big industrial project, and I want to integrate sensing technology, maybe for some fine manipulation, plugging in cables, like I said, screwing a nut onto a bolt, things that really require touch more than just sight. How would you work with that customer? What's the first thing you would ask them, to make sure you're building the correct sensor suite and hand platform for them?

Tao Yu: Yeah, that's a great question, because there are so many robotics companies today, serving a lot of different segments. Usually the first thing we align on is the use case, because we see a high-end sensor providing the most value on the really hard dexterity problems: for example, assembling very delicate parts, putting together fragile or deformable objects, tasks that require much more tactile capability than just vision, or than the very sparse or single-modality tactile that's available today. So understanding the actual use case is really the first step. The second thing is aligning on expectations. Is it a really good sensor on paper, just from the sensor perspective, the specs? Or is the value of the sensor demonstrated through training an ML model that shows the robot performing certain tasks better? Having an evaluation framework and aligning on the metric is important, because there are different ways to solve this problem. You can try to find the best sensor and do everything else afterwards, or you can put it in use and see how it works, and that is also a very important approach. It's a methodology we want to align on, so that we know: are we aiming for the perfect sensor, putting a lot of emphasis on solving the variability, the temperature and humidity drift, all the reliability and repeatability problems? Or are we thinking, let's first get a large amount of data and see how it behaves in the real world? And then lastly, the start of the project is about understanding the mechanical integration. This is different from a camera, which can come in different sizes and shapes, with or without an enclosure; it doesn't matter, it's just a camera, you can mount it anywhere as long as there's a hole and a place to put it. These tactile sensors are very, very different, because they are basically the surface of the robot. They have to come in a certain size and shape and material, so that when the robot performs these tasks they don't just fall apart or break. Having that blurred boundary between us and the customer, that is where the fun part of the project is. Then we can think: I know how to solve this from the sensor perspective, from the mechanical and material-science perspective, from the overall system perspective. Say, can we make the fingertip replaceable? Maybe the motor lasts four or five years but the fingertip lasts three months; but hey, it's easy and cheap to replace the fingertip, so that would be a path forward. There are a lot of unknowns in this journey. I'd say right now we're still exploring with a few select partners, but every day we interact with a partner we work with, we learn a lot, and hopefully it's reciprocal for the other side. And solving this problem, having an industrial-grade solution with tactile sensors in a humanoid hand or in some kind of physical AI system, I would say that is a really big milestone for us, showing how we can leverage this new sensor technology to solve really hard problems that today cannot be solved with vision alone or with existing robotics solutions. That is really a huge win for us.

Jonathan Stephens: Yeah. I can see finding the right partners who are willing to do some testing and experimenting; you're in that phase of figuring it out as you're building it, right? A lot of things in the optics world were already figured out in the past, versus what you're doing. I also hadn't considered that because your sensors are touching things, they will wear out. Versus a camera, which is a passive sensor, placed away from the objects, looking at things; eventually it might also need to be replaced, but not on a three-month cycle. It's a much different sensor, and that's a unique challenge. You want a high-resolution sensor getting really good touch data, which probably means not having much material between it and what it's touching, but that also makes it more exposed and more likely to wear out over time. So you're tackling many challenges all at once, including finding the right partner who's willing to go through that journey with you.
I can see it really helps them too, because at the end they get a sensor that's tailored specifically to what they need, instead of just buying an off-the-shelf sensor they're willing to live with because there are no other options. I'm assuming the customers who want this level of sensing are trying to solve harder problems than just picking up a toy and moving it, which, again, is not an easy problem, but these tasks require a bit more dexterity and fine-grained movement that you just can't achieve any other way. So I guess I have one remaining question to wrap this up. Let's fast-forward five years, which is a long time in robotics. How do you envision your group's product looking? Would it be a class of sensors you could buy, like a low grade, a medium, and a high-end grade? Or would everything be, I'm going to say boutique, everything customizable? What do you think that looks like in the future as you build out this touch sensor program and the hand, the whole platform?

Tao Yu: Yeah. I would say it's really a portfolio of offerings, because not everyone needs the highest-end sensor, and not everyone has five-finger hands or needs sensors at both the fingertips and the palm. Then it's a question of how we put together different combinations of systems, but really the key is understanding the capabilities we enable. Because it's not just about the sensor; it's also about the overall system integration with the software and the AI. We really want to build up capabilities through the variety of customers we engage with and learn how we can benefit. For example, if I get a lot of data in a warehousing environment; in fact, it hadn't occurred to me that warehouses, like Amazon's or Walmart's, have way more items than we have at home. So if we can solve the grasping problem in a warehouse, it should help solve the grasping problem at home, right? Having this cross-pollination of different market segments, and creating a solution around a sensor offering and a hand offering, that is really our dream. It's never going to be one piece that solves all the problems; it's a combination of different pieces, with certain customization tailored to those market segments and use cases, that is the key to unlocking them.

Jonathan Stephens: So let's think about it: you're not just selling hardware.

Tao Yu: Correct.

Jonathan Stephens: It's not like you go online and shop at analog.com slash humanoid hand...

Tao Yu: We could.

Jonathan Stephens: ...and say, I want the three-fingered hand. There's also a software component. You're probably building AI models that support this as well, or simulation models. You yourself come from MIT, with that research background. You understand that there's a whole ecosystem around these hands.

Tao Yu: Yeah, exactly. Nothing works on its own.

Jonathan Stephens: So it has to be able to work with different AI models: NVIDIA, Generalist, so many different companies right now are putting amazing models out. And you want to make sure the data coming from these fingers can be ingested into those models as well, not just your own. So much to explore here. I just want to thank you for your time.
We may have to do a follow-up interview in six months, because at the speed this field is iterating, I'm sure there'll be more to explore. But if people want to learn more and follow your work, or ADI's work, how can they learn more about what you're doing with the humanoid hand and touch?

Tao Yu: Yeah, definitely come to our AI portal at www.analog.com/ai. We have a lot more AI content there that we're happy to share with the audience, and we're going to continue to post our capabilities and updates on what we're doing. So hopefully everyone enjoys it.

Jonathan Stephens: Yeah. And if anyone sees you at a conference: I saw you at GTC, came to your booth, and everyone there was so sharp. If you see these guys live, they did a great demo, but also, it's not just salespeople in the booth; it's actually people who are working on the system, and that was really neat. You can go ask good questions and get real answers; it's not just lines the team has been taught, they actually understand what they're talking about. That's a very neat thing you guys do. So thank you very much for your time.

Tao Yu: Thanks for having me here.

Jonathan Stephens: That's going to do it for this episode. Huge thanks to Tao Yu for the conversation. Go check out ADI's AI portal at analog.com/ai to follow what Tao and his team are building. And as always, thank you to Lightwheel for making this show possible. If you enjoyed this one, subscribe wherever you're listening, and I'll see you in the next episode of The Thinking Machine.