with Ulrich Irnich & Markus Kuckertz

Shownotes

Episode 54 focuses on the future of robotics, in particular the next tangible breakthroughs and expected benefits in robotics. Our guest is Roberto Calandra, Full Professor at the Technical University of Dresden and founder of the Learning, Adaptive Systems and Robotics Lab. Roberto’s scientific interests are in the field of robotics and machine learning, with the aim of making robots smarter and more useful in the real world.

The conversation between Uli, Markus and Roberto focuses on the groundbreaking advances in touch sensing technology, in particular the development of Digit, a high-precision touch sensor for robots that mimics the human sense of touch. Roberto discusses how this technology is revolutionising the industry by enabling robots to interact with their environment in ways previously thought impossible.

The conversation highlights how the integration of high-resolution tactile sensors and AI algorithms is pushing the boundaries of what robots can do, from handling delicate objects to enhancing virtual reality experiences. The potential of these innovations for areas such as manufacturing, healthcare and e-commerce becomes clear. The discussion offers a glimpse into a future where robots will become more integrated into our daily lives and change the world we live in.

More information can be found here:

  • Website of the TU Dresden: https://tu-dresden.de
  • Website of the Learning, Adaptive Systems and Robotics (LASR) Lab: https://lasr.org/
  • Website of the Digit Tactile Sensor: https://digit.ml
  • Roberto’s personal website: https://www.robertocalandra.com

Your feedback on the episode and suggestions for topics and guests are very welcome! Connect and discuss with us:

  • Roberto Calandra: https://www.linkedin.com/in/rcalandra/
  • Ulrich Irnich: https://www.linkedin.com/in/ulrichirnich/
  • Markus Kuckertz: https://www.linkedin.com/in/markuskuckertz/

Contributors – Hosts: Ulrich Irnich & Markus Kuckertz // Production: Daniel Sprügel, Maniac Studios (https://maniacstudios.com/) // Editorial: Marcus Pawlik © Digital Pacemaker Podcast 2024

Summary

In this episode, we delve into the transformative world of tactile sensing in robotics with our guest, Roberto Calandra, a leading expert in the field. As a professor at the Technical University of Dresden and founder of the Learning, Adaptive Systems and Robotics Lab, Roberto has pioneered research that brings the sense of touch to robots. We explore how recent advances in inexpensive, high-resolution hardware combined with artificial intelligence algorithms allow robots to sense touch in a way that closely mimics human perception.

Roberto introduces us to Digit, a groundbreaking tactile sensor developed by his team, which has revolutionized the industry by enabling robots to detect touch similar to how human fingers would. This sensor not only provides high-resolution tactile feedback but is also mass-producible, resulting in widespread adoption across various sectors, including manufacturing, healthcare, and virtual environments. Through our conversation, we unpack the key technological advancements that have made this breakthrough possible and the implications for industries that stand to benefit from robots equipped with a sense of touch.

The discussion elaborates on the parallels between the digitization of touch and the historical advancements in audio and visual technology. Just as the telephone and digital cameras changed how we interact with audio and images, the advent of tactile sensors could profoundly affect how we experience and relate to the physical world, paving the way for a new field we term “touch processing.” We examine how this technology can lead to innovative applications—ranging from improving surgical precision in healthcare to enhancing product interactions in e-commerce.

Roberto shares insightful examples of how tactile sensing is already influencing industries such as quality control in manufacturing and the logistics of delicate food items. We also speculate on the future potential of tactile sensory technology, particularly its role in addressing the growing connectivity of digital and physical realms and its ability to combat social issues like loneliness through increased human-robot interaction.

As we engage in deeper conversations about the societal implications of these technologies, Roberto highlights the importance of integrating multimodal understanding into robotics. By combining touch, vision, and audio, we can create robots that not only perform tasks more effectively but also interact with us in increasingly meaningful ways, ultimately bridging the gap between humans and machines.

Listeners are invited into the excitement of this rapidly evolving field, as we discover the inspirations driving Roberto’s research, from the subtle nuances of human development to the influence of science fiction. Our dialogue ends with a call for a greater understanding and accessibility of tactile technology, encouraging both technical and non-technical audiences to engage with the potential it holds for shaping our future.

Transcript

Speaker0:[0:00] This availability of basically reliable, cheap, and high-resolution hardware, in conjunction with the use of artificial intelligence algorithms that can make sense of these rich, complex images, allows us, for the first time, to digitize touch in a way that is almost on par with the human sense of touch.

Music:[0:22] Music

Speaker2:[0:37] Welcome back to the Digital Pacemaker Podcast with Uli Irnich and myself, Markus Kuckertz. Uli, how is life today?

Speaker1:[0:43] Life is very well. You know, the sun is shining. I did my sports this morning. So I’m super excited, especially for your conversation together with Roberto.

Speaker2:[0:52] Today we talk about the next tangible breakthroughs in robotics. Our guest is Roberto Calandra, professor at the Technical University of Dresden and founder of the Learning, Adaptive Systems and Robotics Lab. Welcome, Roberto.

Speaker0:[1:06] Hi, very nice to be here today and thank you for having me in your podcast.

Speaker2:[1:11] Roberto is a scientist who works on making robots smarter using machine learning. He and his team developed Digit, a small but highly precise tactile sensor for robots. Digit can detect touch similarly to human fingers. It was the first sensor of its kind sold commercially and is now the most widely used tactile sensor in robotics worldwide. Roberto was previously a research scientist at Meta AI, where he founded the robotics lab in Menlo Park. He was also a postdoctoral researcher at the Artificial Intelligence Research Lab at the University of California, Berkeley. And he holds a PhD from the Technical University of Darmstadt, a Master of Science in Machine Learning and Data Mining from Aalto University in Finland, and a Bachelor of Science in Computer Science from, okay, now my Italian comes out in our podcast, from the Università degli Studi di Palermo in Italy. Is that correct, Roberto?

Speaker0:[2:07] It’s good enough.

Speaker2:[2:10] Roberto, I would like to summarize your statements for our discussion. Firstly, you say: bringing the sense of touch to robots will revolutionize the way robots interact with the world around them. Furthermore, you say: with their sense of touch, robots are transforming industries such as manufacturing, online shopping, healthcare, and even virtual worlds. And finally, you say: as the digitization of touch continues to advance, it will open up exciting new uses for robots, including helping us socially. Uli, robots are in the house today. An exciting episode is waiting for us. Do you use robotics at home?

Speaker1:[2:48] Of course. Of course. Starting with the simple one, right? A vacuum cleaner, which cleans our household every day, right? Very precisely. Then in the garden, if you want to cut your grass, you have your robot there. Then if you look at your house automation, you have some automation there, right? And of course, I already had one robot at home, one of my gadgets that I tried to program myself, to figure out what the tricky part of it is. But I can tell you, I had to sell it again because I wanted to get another gadget, and the rule of the house is: if you want another gadget, you need to get rid of one. So yes, I'm addicted to robots. Looking at Roberto, it's such a pleasure to have you here, Roberto. Let me shoot the first question at you, especially regarding the sense of touch of robots: what are the key technological advances that have enabled robots to develop a sense of touch? Can you explain that a bit? Because normally it doesn't look like robots have any sense of touch.

Speaker0:[4:03] Thanks, Uli. Yes, you're correct. The vast majority of robots that we use in our everyday life, in fact, don't have any sense of touch. Sometimes they use information about their position, maybe their joint angles. They can sometimes have information from cameras about how the environment looks around them.

Speaker0:[4:24] But they don’t really have, at the moment, the sense of touch. And this was one of the main motivations for us to start sort of a long journey in this field.

Speaker0:[4:35] Motivated by the fact that if we look at the way that humans interact with the environment, we can clearly see how humans heavily rely on touch in many aspects of everyday life. You know, every single object that we grasp in our life, every single glass that we pick up from the kitchen and try to hold it full of water, what we are doing at the end of the day is that we are trying to measure the forces that our fingers are exerting on the glass, and we are effectively trying to balance them. The same things can be argued even for walking. What we do when we walk, what we do when we interact with any environment is fundamentally that we are trying to sense forces, and we are trying to balance them to achieve the desired result, whether this is moving, whether this is grasping, whether this is juggling balls because we want to do something. The fact that robots nowadays don’t have any sense of touch makes all of this very difficult because if we cannot measure forces, how can we hope to control them? This is the reason why at the moment robots are not, for example, capable of reliably being able to grasp objects into our everyday life. They are not able to manipulate fragile objects like strawberries or eggs, because they simply cannot feel how much force they need to apply.

Speaker0:[6:00] And so tactile sensing is a field that has existed for quite a long time already. In fact, for many decades.

Speaker0:[6:09] Until recently, the majority of hardware tactile sensors that were available to scientists were hardware that was fundamentally not rich enough to capture the complexity of the sense of touch as we have as humans. The sense of touch in humans is very rich, it’s very complex, because we can sense many different information from the shape of the object to the force, to the temperature, to vibrations. And the majority of technology available until recent in touch were quite plain they were able to identify where we were touching maybe the amount of force but not with a high resolution not with a high richness one of the sort of enablers of this huge leap in tactile sensing that has been happening in the last few years was the reintroduction of a technology called vision-based tactile sensors, which was originally invented in the 80s and rediscovered by Ted Edelson in 2010.

Speaker0:[7:15] And the idea of these sensors is that we can make use of digital cameras that are now everywhere in our life to be able to measure the deformation of silicons, basically very thin layers of silicon. And we can then use these layers of silicon to interact with the world. And through the camera we can measure how this silicon deform and from there we can infer forces we can infer a lot of the mechanical and geometrical properties of what we are touching and this is very interesting because it now allows us to have very high resolution images of what we are touching down to a resolution of about 20 micron which is on par with what human finger resolution is. This availability of basically reliable, cheap, and high-resolution hardware in conjunction with the use of artificial intelligence algorithm that can make sense of these rich, complex images allow us to, for the first time, really to be able to digitize touch in a way that is almost on par with human sensing of touch. So it’s really this conjunction of better hardware, which is available for the first time, and the use of AI for processing this data.

Speaker1:[8:34] And Roberto, thanks for sharing that. And this kind of technology, if you look at those thin layers, is this still in the laboratory, or is it already something that is going into mass production?

Speaker0:[8:47] This is a technology that, for the first time, is actually moving outside of laboratories and into mass production. Our small contribution to this field was the development and open-sourcing of a sensor that we call Digit. Digit was the first mass-producible, high-resolution, vision-based tactile sensor. In 2020, we open-sourced this sensor so that any lab in the world could reproduce it with minimal knowledge of 3D printing and electronics. Then in 2021, we partnered with a startup in Boston called GelSight, and together with them we commercialized the sensor, making it even easier for scientists and practitioners to actually use it. It used to be that if you wanted a high-resolution tactile sensor, you had to design it yourself, get into the mechanics, and build it.

Speaker0:[9:47] And now instead, you can just click online on a website, order it online, and you get at home after a few days. And this really enabled us to lower the entry bar for scientists and practitioners in this field. And for the first time, we are really seeing an adoption of this type of technology at scale by more and more people, by more and more universities, and also by more and more companies that are interested in evaluating this type of technology and see how they can fit to their needs and applications.

Speaker2:[10:22] So in relation to the earlier impact of the digitization of audio and then video, how does the impact of the digitization of touch compare from your point of view and can you draw parallels between these developments?

Speaker0:[10:35] This is an excellent question. I like to draw a comparison of a historical nature. The introduction of the telephone at scale in the 1890s allowed us, for the first time, to record audio, transfer it over long distances, and play it back on the other side of the world. This had two interesting consequences. One consequence was the mass adoption of this type of technology in our society, with quite impactful applications and also business opportunities. And I think that this is certainly very applicable to touch as well. But at the same time, it led to the creation of a new field of computational science dedicated to making sense of audio. How do we process audio? How do we make sense of it? And more recently, the introduction of AI and machine learning also enabled us to process audio in a way that is more natural, so that you can now talk to your phone and your phone can understand what you mean. A similar revolution happened in the 1990s with the introduction of digital cameras, where for the first time we could digitize images at scale. We can take an image, digitize it, transfer it over the internet, and show it on the other side of the world.

Speaker0:[12:01] And once again, this had a very deep impact on our society, on to business opportunities, and allowed us the creation of a new field of computational science dedicated to making sense and processing vision, a field, for example, like computer vision.

Speaker0:[12:20] If we wouldn’t have had these two breakthroughs, today we would not be here listening to each other, seeing each other. And having cell phones in our pocket that allow us at any moment to just call somebody on the other side of the planet seeing them hearing them and so what i believe is happening now with touch is that we are having a breakthrough of similar relevance and importance where for the first time the mass availability of tactile sensors at scale will enable us to digitize touch in a way that was not possible before and this will allow us to record touch to transfer it over distances and also eventually to play it back and this will again have a deep impact onto our societies and business opportunities but will also lead to the creation of a new field of computational science dedicated to making sense of touch and processing touch and the creation of this new field that we call touch processing, is one of the main focus that we have had over the last few years of our work, really trying to bring the community together and to making more and more people, both in robotics and machine learning, aware of the existence of this technology and of the potential benefits of doing research in this field.

Speaker2:[13:45] So looking at the industries, can you give specific examples of where touch-enabled robots are having a significant impact and which industries are leading the way in adopting this technology?

Speaker0:[13:57] At the moment, there are still relatively few industries that are actively using this technology at scale. I would say that the first two industries that come to my mind are basically industries that need metrology and quality control. Imagine that you are producing,

Speaker0:[14:16] For example, a wing of an airplane. You probably want to use, or you are already using, this type of technology to check what is the surface of the metal that you’re building, to check for impurities or quality control of cracks in the wing. And having this type of tactile sensors that really gives you very detailed information about very small objects is fundamentally better than having somebody manually trying to look for cracks from a visual point of view. On the other side, I imagine and I believe that there will be many more industries that will, with the growing maturity of the technology, eventually adopt it. Some that are certainly interested, and I believe are the next one, include logistics, and especially logistics of food. There are several companies that are already using robots in order to pack, for example, food, fruits, and being able to have tactile sensors that allow you to understand if the fruit is already ripe, if the fruit has been damaged, and at the same time to manipulate it in a way that is gentle without squeezing the strawberry is certainly something that is on the mind of many people. So it’s very likely that this is something we will see soon.

Speaker1:[15:40] If I listen to you, Roberto, directly what jumps into my mind is the health sector, right? Because especially if you look to surgery and how we are reducing the impact on the body on surgery and how we use already algorithms to assist us in the surgery, right? This can have a massive impact, especially in that industry.

Speaker0:[16:04] I completely agree with you, Uli. I think that medical applications of touch will be very, very important.

Speaker0:[16:12] A small example, which is something that is a paper that we recently published, is the use of one of our tactile sensors in collaboration with Stanford and MetAI to be able to palpate prostate tissue from a real prostate that was removed from a patient and being able through these tactile sensors to identify if this tissue has cancer or not. This is something that we show that is possible. In some way, I think it’s still a few years ago from being really available in our everyday life, since typical medical applications go through very rigorous processes for safety and whatnot. But I certainly look at the future and very often think, okay, hey, at some point in the future, we will have robots that will complement surgeons. And most definitely, we want them to have touch so that they are able to understand how much pressure they’re putting on people, that they can feel the different tissues, that they can understand if there are anomalies, cancers inside the tissue. Even to give back to people that lost their limbs the sense of touch. I think that all of this application would be amazing. It’s just that sometimes technology takes a little bit longer than one would want to reach the real world.

Speaker1:[17:38] Yeah, and that’s good, right? We need to have a certain level of security and safety and that kind of thing, right? And we don’t want to be test objects, right? But nevertheless, especially the health industry, if you compare that to the aircraft industry, right? There is a huge gap to close if you compare quality levels, right?

Speaker0:[17:58] Certainly.

Speaker2:[18:00] Okay, we have aircraft, food and medical applications as an example. So what about e-commerce? Is there any pilot or implementation that you are observing using this technology?

Speaker0:[18:13] Not yet, but I very strongly believe that this is going to be one of the next frontiers of touch as well. You know, e-commerce is every day in our life. There are already examples of, for example, visual e-commerce where you can take pictures of product and you can ask your phone to find online a product that looks the same so that you can buy it. At the same time, you can take a picture of yourself and this is how you would look like if you had this particular dress on, you know, from a visual point of view. I think that touch is going to play a very important role because it will allow us to do, in a similar way, tactile searches. You know, you can take your tactile sensor and, you know, I hope in the future we’ll be able to take our tactile sensor, touch your favorite sweater and say, hey, phone, find me a sweater that has the same material or that uses the same thread or that feel as soft as this one that I currently have.

Speaker0:[19:17] And at the same time, a lot of e-commerce websites that sell especially textiles for people often have the problem of having high rates of return, where people look at a product, it looks good, but then when they get it in their hand at home, they feel it and say, ah, no, you know, this leather doesn’t quite feel like the high quality leather that I was expecting it. Let’s send it back. And really being able to provide information of how a product would feel from a tactile point of view, I think that would be also very beneficial for customers so that they can feel it. Does it feel right on you before you buy it?

Speaker1:[20:00] Roberto, one question triggers to me because I understand this kind of sensor gets you an image of the temperature, how it feels and all kinds of things and translate that for the algorithm, right? Now, the question is, especially in e-commerce, when you are very haptic, right? You want to touch your shirt. You want to understand what kind of material it is. can you also go the other way so that my sensors my human sensors feel how does it feel when i have this material in my hand

Speaker0:[20:33] Yes this is basically the the goal of the whole field of of optics devices if you think about this loop of you know being able to record touch transferring it processing it and then playing it back the goal of aptX devices is to play back this touch and what we do with tactile sensors and touch processing is mostly concerned with instead how do we record touch and how do we process it and sort of transfer it the field of aptX has there are already commercial products available that allow you to sense in some way touch but in my experience, they are, I would say, pretty crude yet. They are not really able to express the full spectrum of human touch. It’s a very big field. A lot of people, a lot of scientists, a lot of also of companies are working on this. So I’m confident that in some years, we will certainly see better and better product and better and better capability also for playing back touch onto our skin.

Speaker2:[21:44] So you speak about a couple of years. So what is still missing to reproduce touch elsewhere? And how close are we to achieve seamless touch reproduction?

Speaker0:[21:56] Yes, this is a very good question. I don’t directly work in the field of optics, so I cannot give a technical answer to this. But as a sort of user, my experience so far is that the optic device that we currently have are certainly lacking. They are typically able to give you sensations about vibrations. They can give you sensation about forces. But already giving sensation, for example, about textures or geometry, fine geometries, is something that is very, very hard. And the main reason is simply that building devices that act at the resolution that the human skin works is very challenging because the human skin is amazing. It can sense objects that are down to just a few microns. It can sense high-frequency vibrations up to illicitly 9 kHz.

Speaker0:[22:57] And building devices that can do all of this in a way that is also at the same time a small form factor that you can put on your fingertip and that don’t cost thousands and thousands of euros is a very, very big technological and engineering challenge. So I’m sure that as the technology grows, as also more and more markets become interested in this, as you can imagine, anything connected with AR-VR would certainly benefit from better tactile optic devices. I believe that we will get there, but how fast is something I’m not willing to foresee or predict.

Speaker2:[23:43] So, Uli, if you think about robotics, I know that you think about that a lot. What are you most excited about based on what Roberto told us today?

Speaker1:[23:55] Well, to be very honest, especially with a touch, the digital and physical world is getting much closer together, right? Because if you look, the number of Internet of Things devices, right, is increasingly rising, I would say, every week. So that’s a thing. And the other thing is, especially if we talk about how can we, let’s say, close the gap of touch on robotics, that’s really fascinating. You feel that already. I’m very excited about that because that will provide, I would say, a next level of robotics and also a next level of how the digital and the physical world is coming together.

Speaker2:[24:37] And Roberto, what about you? What are you most excited about? What are you mostly looking forward to?

Speaker0:[24:43] From a personal point of view, I would be very excited to be able to transfer touch in the same way that we can transfer vision and audio. Imagine being able to hug and to feel a loved one over long distances. From an emotional point of view, it’s great to be able to see your grandparents and to hear them. But being able to hug them, I think, would be spectacular. So this is something that I would really like to see.

Speaker0:[25:15] From a scientific point of view, what I look forward is to give robots the capability to better understand the world through touch, to be able to integrate vision.

Speaker0:[25:28] Audio, language, and touch all together, to be able to have this comprehensive understanding of the world that us humans we have. You know, you touch an object, you’re probably able to guess what’s the shape of the object, what’s the material. You are able to provide semantic attributes of how it feels. And at the moment, we don’t really have these capabilities, neither in robots nor with our artificial intelligence systems. So giving this multimodality understanding of the world from many different points of view is something that I look forward. And in practice, I think that this will lead to robots that are able to interact with humans in a way that is safer, that are able to perform tasks that at the moment are very hard. We don’t have robots that, for example, are able to grasp all the objects that you might have on your table exactly because they don’t understand the sense of touch. They cannot sense forces. They don’t understand the concept of sleep. So having a robot that can one day clean your kitchen or cook for you, understand how much force to apply when cutting a carrot, I think this is something that would be very, very interesting from a scientific point of view.

Speaker2:[26:46] And I always love the question, actually, if someone like you is here in our podcast to ask, Where do you get your inspiration from on the one hand? And how do you keep yourself on track in such an area like robotics? I mean, on the one hand, it’s a very broad area, actually. A lot of disciplines that come together. And then actually, yeah, what’s your source of information? How do you inspire yourself? Yeah. So what do you do?

Speaker0:[27:13] Okay. These are three different questions in one. So where do I inspire myself? As a human being, a lot of my inspiration is certainly from science fiction. You know, you grew up reading Asimov, you grew up reading Jan Banks and Dan Simons. I think it’s very hard to not be fascinated by the possibility of having robots at some point in our society, being able to interact with humans in a meaningful way. So a lot of the inspiration is how do we reach robots that can actually be useful into our everyday life this is sort of the north star that guides most of my research in practice at the same time i also received a lot of everyday life inspiration from looking at my kids as they grew up you know especially as babies you could really see every day how they were learning and how they were getting better capabilities kids start by having basically a single degree of freedom in their hand where they can just open and close their hand at once and slowly over time they get more and more dexterous so around two years they start being able to move the index separately so you have this finger that start probe the world and start touch things and over time basically all the other joints of the hand of the joints unlock.

Speaker0:[28:39] At the end, when they’re fully developed, they’re really able to have the full dexterity of their hand. But this is not something that they’re born with. It comes with time. It comes with experience. And ultimately, as humans, we think that grasping is an easy task simply because we’ve practiced it a lot. We basically spend the first three years of our life doing not much more than trying to grasp objects, trying to grasp blocks and putting them together and assembling things and filling different surfaces. This is something that often leads to sort of misleading interpretation from humans because they ask, oh, why is it so difficult for a robot to grasp? Well, you as humans know how to do this because they have the right sensing and you have a lot of time and experience in grasping objects every day of your life. So I think that this is one of the inspirations of our work, seeing kids and looking at how they develop and asking, how can we have robots that develop in the same way, that learn over time the same skills and can ultimately master these skills at the same level of adult humans?

Speaker0:[29:52] The second question that you asked was, how do I keep up with the fast pace of our fields? And indeed robotics but even more artificial intelligence are filled that are basically working at speed that is really really hard to catch on sometimes you see an article being published one day a couple of days after you have an article already building on top of it and sometimes this can be frustrating to be honest because you say okay this is an interesting article, I would like to build something on top of it. But by the time that you start the work, already somebody else has done this work and is no longer current. And I know that many people in my field, especially PhD students that need to produce work to achieve their PhD, often feel almost burned out from this process of seeing how fast the science is and having this feeling of not been able to catch up with it. Often thinking about not what is the next step, but thinking what are three steps ahead is the only way to really being able to do work that is meaningful.

Speaker0:[31:07] On the other hand, the availability of conferences where you can really see and interact with new robots, with new AI models, and the democratization of science through, for example, Twitter and other social media means that every day I can basically keep up with new scientific advances from my phone. And I can just look at what other scientists have been posting. And this is fundamentally different from how science used to be done let’s say 50 years ago or 100 years ago where you instead had physical manuscript and you had to go to the library to read that one particular paper that was written you know in this particular journal so really the availability of digital resources and internet and the speed up of communication lead us to have a much more global community of science where information and ideas travel faster and can be disseminated also to a non-technical audience.

Speaker2:[32:11] You mentioned conferences and events. So if I would like to see live what’s going on with Roberts, would you recommend any convention event in Germany or in Europe which would be the place to go and see things?

Speaker0:[32:26] Yes, absolutely. My personal robotic conference is called ICRA, International Conference in Robotics and Automation. It’s usually held in May, and this year was in Japan, in Yokohama. Next year, I believe, is going to be held in the US, so it might not be in Europe. But this year, you are still on time to participate to the conference on robot learning that will take place in Munich in October, I believe. CORE is one of the growing conferences that really mix robotics with basically artificial intelligence and has become basically the flagship conference in the field of robot learning. So I would certainly suggest to participate there.

Speaker2:[33:15] Really cool. Yeah, thank you very much to both of you. Exciting exchange about a rapid developing topic. And yet, like always, the question at this stage, what did you take away from the conversation? Uli, would you like to start?

Speaker1:[33:29] Yes, I’d love to start. So first of all, very exciting moment to have this missing dimension into robotics coming in, right? And that Robaster on the team and all the scientists made a huge step forward on censoring, that’s number one. What kicks even more is, you know, we are always talking about making a big impact on the social, on the digital society, right? And, you know, one of the big issues we have in our society is loneliness, right? And missing, let’s say, the things what humans need to be, right? And this is something which we can bridge with that kind of technology. And it helps the society a bit to protect us from this loneliness. So that’s a massive impact.

Speaker2:[34:18] Roberto, what’s your takeaway?

Speaker0:[34:21] Well, my takeaway from this discussion is that it seems that there is interest also from non-technical audience on this type of technology, which is, of course, very gratifying because it means that hopefully we are moving in the right direction with this type of technology, trying to make it more accessible and more available to practitioners and with the ultimate goal of having it in our society. So I think that this is the big takeaway for me. And certainly a lot of the applications that we discussed from medicine to e-commerce, virtual reality, are certainly things that are on our mind. And I look forward to see what promises they held.

Speaker2:[35:10] Good last words. And yeah, thank you, Roberto, very much for taking the time, for the exciting insights and for being our guest today.

Speaker0:[35:17] Thank you very much for having me here today. It was a great pleasure chatting with you, Markus and Uli.

Speaker2:[35:24] And Uli, thank you as well. That was the Digital Pacemaker podcast with Roberto Calandra about robotics. Follow our podcast on Spotify or Apple Podcasts so you never miss an episode. Have fun and see you soon. your Oli and Markus Rock and roll

Music:[35:38] Music