Rachel Gordon | MIT News, Author at The Robot Report | Robotics news, research and analysis

MIT: LucidSim training system helps robots close Sim2Real gap

Published Sun, 17 Nov 2024. LucidSim uses generative AI and physics simulators to create realistic virtual training environments that help robots learn tasks without any real-world data.


For roboticists, one challenge towers above all others: generalization – the ability to create machines that can adapt to any environment or condition. Since the 1970s, the field has evolved from writing sophisticated programs to using deep learning, teaching robots to learn directly from human behavior. But a critical bottleneck remains: data quality. To improve, robots need to encounter scenarios that push the boundaries of their capabilities, operating at the edge of their mastery. This process traditionally requires human oversight, with operators carefully challenging robots to expand their abilities. As robots become more sophisticated, this hands-on approach hits a scaling problem: the demand for high-quality training data far outpaces humans’ ability to provide it.

A team of MIT CSAIL researchers has developed an approach to robot training that could significantly accelerate the deployment of adaptable, intelligent machines in real-world environments. The new system, called “LucidSim,” uses recent advances in generative AI and physics simulators to create diverse and realistic virtual training environments, helping robots achieve expert-level performance in difficult tasks without any real-world data.

LucidSim combines physics simulation with generative AI models, addressing one of the most persistent challenges in robotics: transferring skills learned in simulation to the real world.

“A fundamental challenge in robot learning has long been the ‘sim-to-real gap’ – the disparity between simulated training environments and the complex, unpredictable real world,” said MIT CSAIL postdoctoral associate Ge Yang, a lead researcher on LucidSim. “Previous approaches often relied on depth sensors, which simplified the problem but missed crucial real-world complexities.”

The multi-pronged system is a blend of different technologies. At its core, LucidSim uses large language models to generate various structured descriptions of environments. These descriptions are then transformed into images using generative models. To ensure that these images reflect real-world physics, an underlying physics simulator is used to guide the generation process.

Related: How Agility Robotics closed the Sim2Real gap for Digit

Birth of an idea: from burritos to breakthroughs

The inspiration for LucidSim came from an unexpected place: a conversation outside Beantown Taqueria in Cambridge, MA.

“We wanted to teach vision-equipped robots how to improve using human feedback. But then, we realized we didn’t have a pure vision-based policy to begin with,” said Alan Yu, an undergraduate student at MIT and co-lead on LucidSim. “We kept talking about it as we walked down the street, and then we stopped outside the taqueria for about half an hour. That’s where we had our moment.”


To cook up their data, the team generated realistic images by extracting depth maps, which provide geometric information, and semantic masks, which label different parts of an image, from the simulated scene. They quickly realized, however, that with tight control on the composition of the image content, the model would produce nearly identical images from the same prompt. So, they devised a way to source diverse text prompts from ChatGPT.
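
The article doesn't reproduce the team's code, but the workflow it describes (prompt a language model for many varied scene descriptions, then condition an image generator on each description plus the simulator's depth map and semantic mask) can be sketched roughly as follows. The function names `query_llm` and `generate_image` and the meta-prompt are placeholders for illustration, not LucidSim's actual interfaces.

```python
# Rough sketch of the prompt-diversification step described above.
# `query_llm` and `generate_image` are hypothetical stand-ins for a chat LLM
# and a depth/mask-conditioned image generator; they are not LucidSim's API.

import random

META_PROMPT = (
    "Describe, in one sentence, a distinct outdoor scene a quadruped robot "
    "might walk through (surface, lighting, weather, nearby objects)."
)

def query_llm(prompt: str, n: int) -> list[str]:
    # Placeholder: a real system would call a chat model (e.g. ChatGPT) here.
    surfaces = ["wet cobblestone", "mossy stairs", "gravel path", "snowy curb"]
    lights = ["at dusk", "in harsh noon sun", "under streetlights", "in fog"]
    return [f"A quadruped crosses {random.choice(surfaces)} {random.choice(lights)}."
            for _ in range(n)]

def generate_image(description: str, depth_map, semantic_mask):
    # Placeholder: a conditioned generative model would render an RGB image
    # whose geometry follows `depth_map` and whose layout follows `semantic_mask`.
    return {"prompt": description, "depth": depth_map, "mask": semantic_mask}

def build_training_batch(depth_map, semantic_mask, n_variants: int = 8):
    """One simulated scene -> many visually distinct but geometrically
    consistent training images, per the strategy described in the article."""
    descriptions = query_llm(META_PROMPT, n_variants)
    return [generate_image(d, depth_map, semantic_mask) for d in descriptions]

if __name__ == "__main__":
    batch = build_training_batch(depth_map="<depth>", semantic_mask="<mask>")
    for item in batch:
        print(item["prompt"])
```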

This approach, however, only resulted in a single image. To make short, coherent videos that serve as little “experiences” for the robot, the scientists combined this image generation with a second technique the team created, called “Dreams In Motion” (DIM). The system computes the movements of each pixel between frames to warp a single generated image into a short, multi-frame video. Dreams In Motion does this by considering the 3D geometry of the scene and the relative changes in the robot’s perspective.
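
The article doesn't include the warping math, but the core idea, unprojecting each pixel with the simulator's depth map, moving it by the robot's relative camera motion, and reprojecting it into the next frame, can be sketched as below. The pinhole camera model is standard; the intrinsics, poses, and image sizes are illustrative values, not numbers from the paper.

```python
# Minimal pixel-warping sketch in the spirit of "Dreams In Motion":
# reproject one generated RGB frame into the next camera pose using the
# simulator's depth map. Intrinsics and poses here are illustrative only.

import numpy as np

def warp_frame(rgb, depth, K, T_next_from_cur):
    """Forward-warp `rgb` (H, W, 3) into the next camera pose.

    depth            -- (H, W) metric depth from the physics simulator
    K                -- (3, 3) camera intrinsics
    T_next_from_cur  -- (4, 4) relative pose of the next frame
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N

    # Unproject to 3D camera coordinates, then move to the next camera frame.
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    pts_h = np.vstack([pts, np.ones((1, pts.shape[1]))])
    pts_next = (T_next_from_cur @ pts_h)[:3]

    # Reproject and scatter colors into the new image (nearest-pixel splat).
    proj = K @ pts_next
    uv = (proj[:2] / np.clip(proj[2], 1e-6, None)).round().astype(int)
    out = np.zeros_like(rgb)
    valid = (uv[0] >= 0) & (uv[0] < W) & (uv[1] >= 0) & (uv[1] < H) & (pts_next[2] > 0)
    out[uv[1, valid], uv[0, valid]] = rgb.reshape(-1, 3)[valid]
    return out

if __name__ == "__main__":
    K = np.array([[200.0, 0, 160], [0, 200.0, 120], [0, 0, 1]])
    step = np.eye(4)
    step[2, 3] = -0.05                            # camera moves 5 cm forward per frame
    rgb = np.random.randint(0, 255, (240, 320, 3), dtype=np.uint8)
    depth = np.full((240, 320), 2.0)              # flat wall 2 m away
    video, pose = [rgb], np.eye(4)
    for _ in range(5):                            # short multi-frame "experience"
        pose = step @ pose                        # accumulate camera motion
        video.append(warp_frame(rgb, depth, K, pose))
```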

“We outperform domain randomization, a method developed in 2017 that applies random colors and patterns to objects in the environment, which is still considered the go-to method these days,” says Yu. “While this technique generates diverse data, it lacks realism. LucidSim addresses both diversity and realism problems. It’s exciting that even without seeing the real world during training, the robot can recognize and navigate obstacles in real environments.”

The team is particularly excited about the potential of applying LucidSim to domains outside quadruped locomotion and parkour, their main testbed. One example is mobile manipulation, where a mobile robot is tasked with handling objects in an open area and where color perception is critical.

“Today, these robots still learn from real-world demonstrations,” said Yang. “Although collecting demonstrations is easy, scaling a real-world robot teleoperation setup to thousands of skills is challenging because a human has to physically set up each scene. We hope to make this easier, thus qualitatively more scalable, by moving data collection into a virtual environment.”


MIT researchers used a Unitree Robotics Go1 quadruped. | Credit: MIT CSAIL

The team put LucidSim to the test against an alternative, where an expert teacher demonstrates the skill for the robot to learn from. The results were surprising: robots trained by the expert struggled, succeeding only 15 percent of the time – and even quadrupling the amount of expert training data barely moved the needle. But when robots collected their own training data through LucidSim, the story changed dramatically. Just doubling the dataset size catapulted success rates to 88 percent.

“And giving our robot more data monotonically improves its performance – eventually, the student becomes the expert,” said Yang.

“One of the main challenges in sim-to-real transfer for robotics is achieving visual realism in simulated environments,” said Stanford University assistant professor of Electrical Engineering Shuran Song, who wasn’t involved in the research. “The LucidSim framework provides an elegant solution by using generative models to create diverse, highly realistic visual data for any simulation. This work could significantly accelerate the deployment of robots trained in virtual environments to real-world tasks.”

From the streets of Cambridge to the cutting edge of robotics research, LucidSim is paving the way toward a new generation of intelligent, adaptable machines – ones that learn to navigate our complex world without ever setting foot in it.

Yu and Yang wrote the paper with four fellow CSAIL affiliates: mechanical engineering postdoc Ran Choi; undergraduate researcher Yajvan Ravan; John Leonard, Samuel C. Collins Professor of Mechanical and Ocean Engineering in the MIT Department of Mechanical Engineering; and MIT Associate Professor Phillip Isola.

Editor’s Note: This article was republished from MIT CSAIL.

RoboGrocery from MIT CSAIL is a soft robot to pack groceries, pick recyclables

Published Mon, 01 Jul 2024. RoboGrocery uses soft robotics, sensors, and algorithms to handle a stream of unpredictable objects on a conveyor belt, said MIT CSAIL.


The RoboGrocery system combines vision, algorithms, and soft grippers to prioritize items to pack. Source: MIT CSAIL

As a child, I often accompanied my mother to the grocery store. As she pulled out her card to pay, I heard the same phrase like clockwork: “Go bag the groceries.” It was not my favorite task. Now imagine a world where robots could delicately pack your groceries, and items like bread and eggs are never crushed beneath heavier items. We might be getting closer with RoboGrocery.

Researchers at the Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory (MIT CSAIL) have created a new soft robotic system that combines advanced vision technology, motor-based proprioception, soft tactile sensors, and a new algorithm. RoboGrocery can handle a continuous stream of unpredictable objects moving along a conveyor belt, they said.

“The challenge here is making immediate decisions about whether to pack an item or not, especially since we make no assumptions about the object as it comes down the conveyor belt,” said Annan Zhang, a Ph.D. student at MIT CSAIL and one of the lead authors on a new paper about RoboGrocery. “Our system measures each item, decides if it’s delicate, and packs it directly or places it in a buffer to pack later.”


RoboGrocery demonstrates a light touch

RoboGrocery’s pseudo market tour was a success. In the experimental setup, researchers selected 10 items from a set of previously unseen, realistic grocery items and put them onto a conveyor belt in random order. This process was repeated three times, and the evaluation of “bad packs” was done by counting the number of heavy items placed on delicate items.

The soft robotic system showed off its light touch by performing nine times fewer item-damaging maneuvers than the sensorless baseline, which relied solely on pre-programmed grasping motions without sensory feedback. It also damaged items 4.5 times less than the vision-only approach, which used cameras to identify items but lacked tactile sensing, said MIT CSAIL.
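
As a rough illustration of the scoring described above, and not the authors' evaluation code, the "bad pack" count can be computed by tallying heavy items placed on top of delicate ones. The item names and packing orders below are invented examples.

```python
# Illustrative "bad pack" tally: count heavy items placed on top of at least
# one delicate item. The item names and packing orders are invented examples,
# not the experimental data from the paper.

DELICATE = {"grapes", "bread", "chips", "muffins"}

def count_bad_packs(pack_order):
    """pack_order lists items bottom-to-top as they were placed in the bin."""
    bad = 0
    for i, item in enumerate(pack_order):
        if item not in DELICATE and any(below in DELICATE for below in pack_order[:i]):
            bad += 1
    return bad

trials = [
    ["grapes", "soup can", "bread", "coffee"],   # two heavy items land on delicate ones
    ["soup can", "coffee", "bread", "grapes"],   # delicate items kept on top
]
print([count_bad_packs(t) for t in trials])      # -> [2, 0]
```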

To illustrate how RoboGrocery works, let’s consider an example. A bunch of grapes and a can of soup come down the conveyor belt. First, the RGB-D camera detects the grapes and soup, estimating sizes and positions.

The gripper picks up the grapes, and the soft tactile sensors measure the pressure and deformation, signaling that they’re delicate. The algorithm assigns a high delicacy score and places them in the buffer.

Next, the gripper goes in for the soup. The sensors measure minimal deformation, meaning “not delicate,” so the algorithm assigns a low delicacy score, and packs it directly into the bin. 

Once all non-delicate items are packed, RoboGrocery retrieves the grapes from the buffer and carefully places them on top so they aren’t crushed. Throughout the process, a microprocessor handles all sensory data and executes packing decisions in real time. 
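
The walkthrough above amounts to a small decision loop: measure each item, score its delicacy from how much it deforms under a gentle squeeze, pack sturdy items immediately, and buffer delicate ones for the top layer. The sketch below paraphrases that logic; the threshold, the deformation readings, and the data structures are assumptions for illustration, not the team's implementation.

```python
# Paraphrase of the packing loop described above. The delicacy threshold and
# the deformation readings are invented; the real system fuses RGB-D vision,
# motor proprioception, and soft tactile sensing to score each item.

from collections import namedtuple

Item = namedtuple("Item", "name deformation_mm")   # deformation under a gentle squeeze

DELICACY_THRESHOLD_MM = 2.0   # assumed cutoff, not a published value

def delicacy_score(item: Item) -> float:
    return item.deformation_mm            # softer items deform more

def pack_conveyor(stream):
    bin_items, buffer = [], []
    for item in stream:                                   # items arrive in arbitrary order
        if delicacy_score(item) >= DELICACY_THRESHOLD_MM:
            buffer.append(item)                           # hold back, pack on top later
        else:
            bin_items.append(item)                        # pack directly into the bin
    return bin_items + buffer                             # delicate items end up on top

stream = [Item("grapes", 4.5), Item("soup can", 0.1), Item("bread", 3.8), Item("coffee", 0.3)]
print([i.name for i in pack_conveyor(stream)])            # sturdy first, delicate on top
```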

The researchers tested various grocery items to ensure robustness and reliability. They included delicate items such as bread, clementines, grapes, kale, muffins, chips, and crackers. The team also tested non-delicate items like soup cans, ground coffee, chewing gum, cheese blocks, prepared meal boxes, ice cream containers, and baking soda. 


RoboGrocery was tested in its ability to handle a range of delicate grocery items. Source: MIT CSAIL

RoboGrocery handles more varied objects than other systems

Traditionally, bin-packing tasks in robotics have focused on rigid, rectangular objects. These methods, though, can fail to handle objects of varying shapes, sizes, and stiffness. 

However, with its custom blend of RGB-D cameras, closed-loop control servo motors, and soft tactile sensors, RoboGrocery gets ahead of this, said MIT. The cameras provide depth information and color images to accurately determine the object’s shapes and sizes as they move along the conveyor belt.

The motors offer precise control and feedback, allowing the gripper to adjust its grasp based on the object’s characteristics. Finally, the sensors, integrated into the gripper’s fingers, measure the pressure and deformation of the object, providing data on stiffness and fragility.

Despite its success, there’s always room for improvement. The current heuristic to determine whether an item is delicate is somewhat crude, and could be refined with more advanced sensing technologies and better grippers, acknowledged the researchers.

“Currently, our grasping methods are quite basic, but enhancing these techniques can lead to significant improvements,” said Zhang. “For example, determining the optimal grasp direction to minimize failed attempts and efficiently handle items placed on the conveyor belt in unfavorable orientations. For example, a cereal box lying flat might be too large to grasp from above, but standing upright, it could be perfectly manageable.”


RoboGrocery is able to determine the best grasping and packing approach for each item. Source: MIT CSAIL

MIT CSAIL team looks ahead

While the project is still in the research phase, its potential applications could extend beyond grocery packing. The team envisions use in various online packing scenarios, such as packing for a move or in recycling facilities, where the order and properties of objects are unknown.

“This is a significant first step towards having robots pack groceries and other items in real-world settings,” said Zhang. “Although we’re not quite ready for commercial deployment, our research demonstrates the power of integrating multiple sensing modalities in soft robotic systems.”

“Automating grocery packing with robots capable of soft and delicate grasping and high-level reasoning like the robot in our project has the potential to impact retail efficiency and open new avenues for innovation,” said senior author Daniela Rus, CSAIL director and professor of electrical engineering and computer science (EECS) at MIT.

“Soft grippers are suitable for grasping objects of various shapes and, when combined with proper sensing and control, they can solve long-lasting robotics problems, like bin packing unknown objects,” added Cecilia Laschi, Provost’s Chair Professor of robotics at the National University of Singapore, who was not involved in the work. “This is what this paper has demonstrated — bringing soft robotics a step forward towards concrete applications.”

“The authors have addressed a longstanding problem in robotics — the handling of delicate and irregularly-shaped objects — with a holistic and bioinspired approach,” said Robert Wood, a professor of electrical engineering at Harvard University who was not involved in the paper. “Their use of a combination of vision and tactile sensing parallels how humans accomplish similar tasks and, importantly, sets a benchmark for performance that future manipulation research can build on.”

Zhang co-authored the paper with EECS Ph.D. student Valerie K. Chen ’22, M.Eng. ’23; Jeana Choi ’21, M.Eng. ’22; and Lillian Chin ’17, SM ’19, Ph.D. ’23, currently an assistant professor at the University of Texas at Austin. The researchers presented their findings at the IEEE International Conference on Soft Robotics (RoboSoft) earlier this year.

About the author

Rachel Gordon is senior communications manager at MIT CSAIL. This article is reposted with permission.

MIT testing autonomous Roboat II that carries passengers

Published Mon, 26 Oct 2020. Five years in the making, MIT’s autonomous floating vessels get a size upgrade and learn a new way to communicate aboard the waters.


Roboat II, MIT’s latest autonomous boat, is 2 meters long and capable of carrying passengers. | Credit: MIT

The feverish race to produce the shiniest, safest, speediest self-driving car has spilled over into our wheelchairs, scooters, and even golf carts. Recently, there’s been movement from land to sea, as marine autonomy stands to change the canals of our cities, with the potential to deliver goods and services and collect waste across our waterways.

In an update to a five-year project from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Senseable City Lab, researchers have been developing the world’s first fleet of autonomous boats for the City of Amsterdam, the Netherlands, and have recently added a new, larger vessel to the group: “Roboat II.” Now sitting at 2 meters long, which is roughly a “Covid-friendly” 6 feet, the new robotic boat is capable of carrying passengers.

Alongside the Amsterdam Institute for Advanced Metropolitan Solutions, the team also created navigation and control algorithms to update the communication and collaboration among the boats.

“Roboat II navigates autonomously using algorithms similar to those used by self-driving cars, but now adapted for water,” said MIT Professor Daniela Rus, a senior author on a new paper about Roboat and the director of CSAIL. “We’re developing fleets of Roboats that can deliver people and goods, and connect with other Roboats to form a range of autonomous platforms to enable water activities.”

Self-driving boats have been able to transport small items for years, but adding human passengers has felt somewhat intangible due to the current size of the vessels. Roboat II is the “half-scale” boat in the growing body of work, and joins the previously developed quarter-scale Roboat, which is 1 meter long. The third installment, which is under construction in Amsterdam and is considered to be “full scale,” is 4 meters long and aims to carry anywhere from four to six passengers.

Aided by powerful algorithms, Roboat II autonomously navigated the canals of Amsterdam for three hours collecting data, and returned to its start location with an error margin of only 0.17 meters, or less than 7 inches.

“The development of an autonomous boat system capable of accurate mapping, robust control, and human transport is a crucial step towards having the system implemented in the full-scale Roboat,” said senior postdoc Wei Wang, lead author on a new paper about Roboat II. “We also hope it will eventually be implemented in other boats in order to make them autonomous.”

Wang wrote the paper alongside MIT Senseable City Lab postdoc Tixiao Shan, research fellow Pietro Leoni, postdoc David Fernandez-Gutierrez, research fellow Drew Meyers, and MIT professors Carlo Ratti and Daniela Rus. The work was supported by a grant from the Amsterdam Institute for Advanced Metropolitan Solutions in the Netherlands. A paper on Roboat II will be virtually presented at the International Conference on Intelligent Robots and Systems.

To coordinate communication among the boats, another team from MIT CSAIL and Senseable City Lab, also led by Wang, came up with a new control strategy for robot coordination.

With the intent of self-assembling into connected, multi-unit trains — with distant homage to children’s train sets — “collective transport” takes a different path to complete various tasks. The system uses a distributed controller (a collection of sensors, controllers, and associated computers distributed throughout the system), and a strategy inspired by how a colony of ants can transport food without communication. Specifically, there’s no direct communication among the connected robots — only one leader knows the destination. The leader initiates movement to the destination, and then the other robots can estimate the intention of the leader, and align their movements accordingly.

“Current cooperative algorithms have rarely considered dynamic systems on the water,” said Ratti, the Senseable City Lab director. “Cooperative transport, using a team of water vehicles, poses unique challenges not encountered in aerial or ground vehicles. For example, inertia and load of the vehicles become more significant factors that make the system harder to control. Our study investigates the cooperative control of the surface vehicles and validates the algorithm on that.”

The team tested their control method on two scenarios: one where three robots are connected in a series, and another where three robots are connected in parallel. The results showed that the coordinated group was able to track various trajectories and orientations in both configurations, and that the magnitudes of the followers’ forces positively contributed to the group — indicating that the follower robots helped the leader.

Wang wrote a paper about collective transport alongside Stanford University PhD student Zijian Wang, MIT postdoc Luis Mateos, MIT researcher Kuan Wei Huang, Stanford Assistant Professor Mac Schwager, Ratti, and Rus.

Roboat II

In 2016, MIT researchers tested a prototype that could move “forward, backward, and laterally along a pre-programmed path in the canals.” Three years later, the team’s robots were updated to “shapeshift” by autonomously disconnecting and reassembling into a variety of configurations.

Now, Roboat II has scaled up to explore transportation tasks, aided by updated research. These include a new algorithm for Simultaneous Localization and Mapping (SLAM), a model-based optimal controller called nonlinear model predictive controller, and an optimization-based state estimator, called moving horizon estimation.

Here’s how it works: When a user requests a passenger pickup at a specific position, the system coordinator will assign the task to an unoccupied boat that’s closest to the passenger. As Roboat II picks up the passenger, it will create a feasible path to the desired destination, based on the current traffic conditions.

Then, Roboat II, which weighs more than 50 kilograms, will start to localize itself by running the SLAM algorithm and utilizing lidar and GPS sensors, as well as an inertial measurement unit for localization, pose, and velocity. The controller then tracks the reference trajectories from the planner, which updates the path to steer around detected obstacles and avoid potential collisions.
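
The dispatch step described above, assigning a pickup to the closest unoccupied boat, can be sketched in a few lines. The positions and the straight-line distance metric below are illustrative; the real system layers SLAM, a nonlinear model predictive controller, and moving horizon estimation on top of this, none of which is reproduced here.

```python
# Sketch of the dispatch step described above: assign a pickup to the nearest
# unoccupied boat. Coordinates and the distance metric are illustrative; the
# real system also runs SLAM, a nonlinear MPC tracker, and a moving-horizon
# estimator, which are not reproduced here.

import math

boats = {
    "roboat_1": {"pos": (0.0, 0.0), "occupied": False},
    "roboat_2": {"pos": (12.0, 3.0), "occupied": True},
    "roboat_3": {"pos": (5.0, 8.0), "occupied": False},
}

def assign_pickup(boats, passenger_pos):
    """Return the id of the closest unoccupied boat, or None if all are busy."""
    free = {bid: b for bid, b in boats.items() if not b["occupied"]}
    if not free:
        return None
    return min(free, key=lambda bid: math.dist(free[bid]["pos"], passenger_pos))

chosen = assign_pickup(boats, passenger_pos=(6.0, 6.0))
print(chosen)   # -> "roboat_3" for the example positions above
```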

The team notes that the improvements in their control algorithms have made the obstacles feel like less of a giant iceberg since their last update; the SLAM algorithm provides a higher localization accuracy for Roboat, and allows for online mapping during navigation, which they didn’t have in previous iterations.

Increasing the size of Roboat also required a larger area to conduct the experiments, which began in the MIT pools and subsequently moved to the Charles River, which cuts through Boston and Cambridge, Massachusetts.

While navigating the congested roads of a city can leave drivers feeling trapped in a maze, canals largely avoid this. Nevertheless, tricky scenarios in the waterways can still emerge. Given that, the team is working on developing more efficient planning algorithms to let the vessel handle more complicated scenarios, by applying active object detection and identification to improve Roboat’s understanding of its environment. The team plans to estimate disturbances such as currents and waves, to further improve the tracking performance in more noisy waters.

“All of these expected developments will be incorporated into the first prototype of the full-scale Roboat and tested in the canals of the City of Amsterdam,” said Rus.


Roboat II has scaled up to explore transportation tasks, aided by updated research. These include a new algorithm for SLAM, a model-based optimal controller called nonlinear model predictive controller, and an optimization-based state estimator, called moving horizon estimation. | Credit: MIT

Collective transport

Making our intuitive abilities a reality for machines has been a persistent goal since the birth of the field, from straightforward commands for picking up items to the nuances of organizing in a group.

One of the main goals of the project is enabling self-assembly to complete the aforementioned tasks of collecting waste, delivering items, and transporting people in the canals — but controlling this movement on the water has been a challenging obstacle. Communication in robotics can often be unstable or have delays, which may worsen the robot coordination.

Many control algorithms for this collective transport require direct communication, the relative positions in the group, and the destination of the task — but the team’s new algorithm simply needs one robot to know the desired trajectory and orientation.

Normally, the distributed controller running on each robot requires the velocity information of the connected structure (represented by the velocity of the center of the structure), but this requires that each robot knows the relative position to the center of the structure. In the team’s algorithm, they don’t need the relative position, and each robot simply uses its local velocity instead of the velocity of the center of the structure.

When the leader initiates the movement to the destination, the other robots can therefore estimate the intention of the leader and align their movements. The leader can also steer the rest of the robots by adjusting its input, without any communication between any two robots.
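
Conceptually, the behavior described above can be caricatured as a leader that pulls toward the goal and followers that push along whatever motion they sense locally, with no messages exchanged. The point-mass dynamics and gains in the sketch below are invented for illustration and are far simpler than the surface-vessel dynamics the paper actually controls.

```python
# Conceptual leader-follower sketch: only the leader knows the goal; each
# follower pushes along the velocity it measures locally, reinforcing the
# leader's motion without any communication. Point-mass dynamics and gains
# are invented and much simpler than real surface-vessel dynamics.

import numpy as np

def leader_force(pos, vel, goal, kp=1.0, kd=3.0):
    return kp * (goal - pos) - kd * vel           # PD pull toward the known goal

def follower_force(local_vel, k_align=0.5):
    return k_align * local_vel                    # amplify locally sensed motion

def simulate(goal, n_followers=2, steps=600, dt=0.05, mass=3.0):
    pos, vel = np.zeros(2), np.zeros(2)           # rigidly connected structure
    for _ in range(steps):
        total = leader_force(pos, vel, goal)
        total += sum(follower_force(vel) for _ in range(n_followers))
        vel += (total / mass) * dt
        pos += vel * dt
    return pos

print(simulate(goal=np.array([4.0, 2.0])))        # settles near the goal
```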

In the future, the team plans to use machine learning to estimate (online) the key parameters of the robots. They’re also aiming to explore adaptive controllers that allow for dynamic change to the structure when objects are placed on the boat. Eventually, the boats will also be extended to outdoor water environments, where large disturbances such as currents and waves exist.

Editor’s Note: This article was republished from the Massachusetts Institute of Technology.

MIT creates tactile-reactive robot gripper that manipulates cables

Published Mon, 13 Jul 2020. MIT CSAIL created a system that uses two robotic grippers with soft, sensitive fingers to handle cables with unprecedented dexterity.


MIT’s system uses a pair of soft robotic grippers with high-resolution tactile sensors to manipulate freely moving cables. | Photo Credit: MIT CSAIL.

For humans, it can be challenging to manipulate thin flexible objects like ropes, wires, or cables. But if these problems are hard for humans, they are nearly impossible for robots. As a cable slides between the fingers, its shape is constantly changing, and the robot’s fingers must be constantly sensing and adjusting the cable’s position and motion.

Standard approaches have used a series of slow and incremental deformations, as well as mechanical fixtures, to get the job done. Recently, a group of researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) pursued the task from a different angle, in a manner that more closely mimics us humans. The team’s new system uses a pair of soft robotic grippers with high-resolution tactile sensors (and no added mechanical constraints) to successfully manipulate freely moving cables.

One could imagine using a system like this for both industrial and household tasks, to one day enable robots to help us with things like tying knots, wire shaping, or even surgical suturing.

The team’s first step was to build a novel two-fingered gripper. The opposing fingers are lightweight and quick moving, allowing nimble, real-time adjustments of force and position. On the tips of the fingers are vision-based “GelSight” sensors, built from soft rubber with embedded cameras. The gripper is mounted on a robot arm, which can move as part of the control system.

The team’s second step was to create a perception-and-control framework to allow cable manipulation. For perception, they used the GelSight sensors to estimate the pose of the cable between the fingers, and to measure the frictional forces as the cable slides. Two controllers run in parallel: one modulates grip strength, while the other adjusts the gripper pose to keep the cable within the gripper.
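
A minimal caricature of that parallel structure, one loop servoing grip force around a sliding-friction target and another nudging the gripper to keep the estimated cable centerline between the fingers, might look like the sketch below. The sensor interface, gains, and setpoints are invented; the real controllers run on pose and friction estimates extracted from GelSight images.

```python
# Caricature of the two parallel control loops described above. The sensor
# interface, gains, and setpoints are invented for illustration; the actual
# system derives cable pose and friction from GelSight tactile images.

from dataclasses import dataclass

@dataclass
class TactileEstimate:
    cable_offset_mm: float    # cable centerline offset from the finger center
    friction_force_n: float   # tangential force as the cable slides

def grip_force_controller(est, target_friction_n=0.6, kp=2.0, nominal_grip_n=3.0):
    """Modulate squeeze so the cable slides smoothly without being dropped."""
    return nominal_grip_n + kp * (target_friction_n - est.friction_force_n)

def pose_controller(est, kp=0.15):
    """Nudge the gripper sideways to re-center the cable between the fingers."""
    return -kp * est.cable_offset_mm      # commanded lateral velocity, mm per cycle

# One control cycle with a made-up tactile reading:
est = TactileEstimate(cable_offset_mm=4.0, friction_force_n=1.1)
print(grip_force_controller(est))   # squeeze less: friction is above target
print(pose_controller(est))         # move to re-center the drifting cable
```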

When mounted on the arm, the gripper could reliably follow a USB cable starting from a random grasp position. Then, in combination with a second gripper, the robot can move the cable “hand over hand” (as a human would) in order to find the end of the cable. It could also adapt to cables of different materials and thicknesses.

As a further demo of its prowess, the robot performed an action that humans routinely do when plugging earbuds into a cell phone. Starting with a free-floating earbud cable, the robot was able to slide the cable between its fingers, stop when it felt the plug touch its fingers, adjust the plug’s pose, and finally insert the plug into the jack.

“Manipulating soft objects is so common in our daily lives, like cable manipulation, cloth folding, and string knotting,” says Yu She, MIT postdoc and lead author on a new paper about the system. “In many cases, we would like to have robots help humans do this kind of work, especially when the tasks are repetitive, dull, or unsafe.”

String me along

Cable following is challenging for two reasons. First, it requires controlling the “grasp force” (to enable smooth sliding), and the “grasp pose” (to prevent the cable from falling from the gripper’s fingers).

This information is hard to capture from conventional vision systems during continuous manipulation, because it’s usually occluded, expensive to interpret, and sometimes inaccurate.

What’s more, this information can’t be directly observed with just vision sensors, hence the team’s use of tactile sensors. The gripper’s joints are also flexible – protecting them from potential impact.

The algorithms can also be generalized to different cables with various physical properties like material, stiffness, and diameter, and also to those at different speeds.

When comparing different controllers applied to the team’s gripper, their control policy could retain the cable in hand for longer distances than three others. For example, the “open-loop” controller only followed 36 percent of the total length, the gripper easily lost the cable when it curved, and it needed many re-grasps to finish the task.

MIT looks ahead

The MIT team observed that it was difficult to pull the cable back when it reached the edge of the finger, because of the convex surface of the GelSight sensor. Therefore, they hope to improve the finger-sensor shape to enhance the overall performance.

In the future, they plan to study more complex cable manipulation tasks such as cable routing and cable inserting through obstacles, and they want to eventually explore autonomous cable manipulation tasks in the auto industry.

Editor’s Note: This article was republished with permission from MIT News.

UVC robot built by MIT CSAIL disinfects Greater Boston Food Bank

Published Thu, 02 Jul 2020. Using ultraviolet light, a UVC robot developed by MIT and Ava Robotics can disinfect a warehouse floor in half an hour. It could one day be used in grocery stores, schools, and other public spaces.

Essential services such as healthcare and food distribution have been under sustained stress during the novel coronavirus pandemic. A team from the Massachusetts Institute of Technology has teamed up with telepresence provider Ava Robotics Inc. and the Greater Boston Food Bank to design a new disinfection system.

The United Nations projected that the number of people facing severe food insecurity worldwide could double to 265 million because of the pandemic. In the U.S. alone, the five-week total of job losses has risen to 26 million, potentially pushing millions more into food insecurity.

One threat of COVID-19 is that droplets can persist, especially on surfaces. Chemical cleaners can kill the virus, but applying them can be expensive, dangerous, and time consuming. Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) designed a UVC light fixture that is integrated with Ava Robotics’ mobile robot base to disinfect surfaces and neutralize aerosolized forms of the virus.

“Food banks provide an essential service to our communities, so it is critical to help keep these operations running,” stated Alyssa Pierson, CSAIL research scientist and technical lead of the UVC lamp assembly. “Here, there was a unique opportunity to provide additional disinfecting power to their current workflow and help reduce the risks of COVID-19 exposure.”

Devising an effective UVC robot

Specifically, the robot uses short-wavelength ultraviolet light to kill microorganisms and disrupt their DNA in a process called “ultraviolet germicidal irradiation.”

Ultraviolet light has proven to be effective at killing viruses and bacteria, but it is unsafe for humans to be exposed to it. The collaborators replaced the telepresence top of Ava’s autonomous robot with the UVC array for disinfecting surfaces.

Typically, this method of ultraviolet germicidal irradiation is used largely in hospitals and medical settings, to sterilize patient rooms, and stop the spread of microorganisms like MRSA and C. diff. The UVC light also works against airborne pathogens. While it’s most effective in the direct line of sight, the light can bounce off of surfaces to reach certain areas, said the MIT researchers.

As far as production went, “in-house manufacturing” took on a whole new meaning for this prototype and the team. The UVC lamps were assembled in Pierson’s basement, and CSAIL Ph.D. student Jonathan Romanishin crafted a makeshift shop in his apartment for the electronics board assembly.

The complete robot system is capable of mapping a space such as the Greater Boston Food Bank’s warehouse and navigating between waypoints and other specified areas.


The UVC robot’s point of view as it navigates the food bank facility. Source: MIT CSAIL

Mapping, testing UVC robot in the food bank warehouse

First, the team tele-operated the robot to teach it a path around the Greater Boston Food Bank warehouse so that it could then navigate autonomously. The UVC robot can go to human-defined waypoints on its map, such as going to the loading dock, then the warehouse shipping floor, and then returning to base. Users can add new waypoints as needed.

“Our 10-year-old warehouse is a relatively new food-distribution facility with AIB-certified state-of-the-art cleanliness and food safety standards,” said Catherine D’Amato, president and CEO of the Greater Boston Food Bank. “COVID-19 is a new pathogen that GBFB, and the rest of the world, was not designed to handle. We are pleased to have this opportunity to work with MIT CSAIL and Ava Robotics to innovate and advance our sanitation techniques to defeat this menace.”

Within the food bank, the team identified the warehouse shipping floor as a “high-importance area” for the UVC robot to disinfect. Each day, workers stage aisles of products and arrange them for up to 50 pick-ups by partners and distribution trucks the next day. By focusing on the shipping area, the team prioritized disinfecting items leaving the warehouse to reduce the risk of infection in the community.

A unique challenge is that the shipping area is constantly changing, so each night, the robot encounters a slightly new environment. When the UVC robot is deployed, it doesn’t necessarily know which of the staging aisles will be occupied or how full each aisle may be. Therefore, the researchers noted that they needed to teach the robot to differentiate between occupied and unoccupied aisles, so it can change its path accordingly.

The team used a dosimeter, which confirmed that the robot was delivering the expected dosage of UVC light predicted by the model. The robot was able to drive by pallets and storage aisles at roughly 0.22 mph. At this speed, it could cover a 4,000-sq.-ft. space in the warehouse in just half an hour. The UVC dosage delivered during this time can neutralize approximately 90% of coronaviruses on surfaces. For many surfaces, this dose will be higher, resulting in more of the virus neutralized, said the team.
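
As a quick sanity check on those figures (the implied swath width below is our inference, not a number reported by the team): at 0.22 mph the robot travels roughly 580 feet in half an hour, so covering 4,000 square feet implies an effective disinfection swath of about 7 feet.

```python
# Back-of-the-envelope check on the coverage figures quoted above. The implied
# swath width is an inference from those numbers, not a value reported by MIT.

speed_mph = 0.22
run_time_h = 0.5
area_sqft = 4000

distance_ft = speed_mph * run_time_h * 5280           # ~580 ft traveled
implied_swath_ft = area_sqft / distance_ft             # ~6.9 ft effective width

print(f"distance traveled: {distance_ft:.0f} ft")
print(f"implied disinfection swath: {implied_swath_ft:.1f} ft")
```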

“As we drive the robot around the food bank, we are also researching new control policies that will allow the robot to adapt to changes in the environment and ensure all areas receive the proper estimated dosage,” said Pierson. “We are focused on remote operation to minimize human supervision, and therefore, the additional risk of spreading COVID-19, while running our system.”

MIT’s disinfection robot. | Source: Alyssa Pierson, MIT CSAIL

Next steps for UVC robot development

For immediate next steps, the team is focused on increasing the capabilities of the UVC robot at the Greater Boston Food Bank, as well as eventually implementing design upgrades. The developers are exploring how to use the robot’s onboard sensors to adapt to changes in the environment, such that in new territory, the robot would adjust its speed to ensure the recommended dosage is applied to new objects and surfaces.

In addition, the MIT researchers are studying how to make these systems more adaptable. For instance, a robot could dynamically change its plan based on estimated UVC dosages and learn to work in new environments, and teams of UVC robots could be coordinated.

“MIT has been a great partner, and when they came to us, the team was eager to start the integration, which took just four weeks to get up and running,” said Ava Robotics CEO Youssef Saleh. “The opportunity for robots to solve workplace challenges is bigger than ever, and collaborating with MIT to make an impact at the food bank has been a great experience.”

Pierson and Romanishin worked alongside Hunter Hansen on software capabilities; Bryan Teague of MIT Lincoln Laboratory, who assisted with the UVC lamp assembly; and Igor Gilitschenski and Xiao Li on future autonomy research. The project also involved MIT professors Daniela Rus and Saman Amarasinghe, as well as Ava leads Marcio Macedo and Youssef Saleh. Ava Robotics provided its platform and team support.

Although the MIT and Ava Robotics researchers are currently focusing on the Greater Boston Food Bank, the algorithms and systems they are developing could be transferred to other use cases in the future. The team said the initial results were encouraging enough that the approach could be useful for autonomous UV disinfection in other environments, such as factories, warehouses, and restaurants.

“We are excited to see the UVC disinfecting robot support our community in this time of need,” said Rus, director of CSAIL and project lead. “The insights we received from the work at GBFB has highlighted several algorithmic challenges. We plan to tackle these in order to extend the scope of autonomous UV disinfection in complex spaces, including dorms, schools, airplanes, and grocery stores.”

MIT gives soft robotic gripper better sense of touch and perception

Published Tue, 02 Jun 2020. MIT researchers built a soft robotic gripper that uses embedded cameras and deep learning to enable tactile sensing and awareness of its positions and movements.


MIT researchers built a soft robotic gripper that uses embedded cameras and deep learning to enable high-resolution tactile sensing and “proprioception” (awareness of positions and movements of the body). | Credit: MIT CSAIL

One of the hottest topics in robotics is the field of soft robots, which utilizes squishy and flexible materials rather than traditional rigid materials. But soft robots have been limited due to their lack of good sensing. A good robotic gripper needs to feel what it is touching (tactile sensing), and it needs to sense the positions of its fingers (proprioception). Such sensing has been missing from most soft robots.

In a new pair of papers, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) came up with new tools to let robots better perceive what they’re interacting with: the ability to see and classify items, and a softer, delicate touch.

“We wish to enable seeing the world by feeling the world. Soft robot hands have sensorized skins that allow them to pick up a range of objects, from delicate, such as potato chips, to heavy, such as milk bottles,” says CSAIL Director Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science and the deputy dean of research for the MIT Stephen A. Schwarzman College of Computing.

One paper builds off last year’s research from MIT and Harvard University, where a team developed a strong and soft robotic gripper in the form of a cone-shaped origami structure. It collapses in on objects much like a Venus’ flytrap, to pick up items that are as much as 100 times its weight.

To get that newfound versatility and adaptability even closer to that of a human hand, a new team came up with a sensible addition: tactile sensors, made from latex “bladders” (balloons) connected to pressure transducers. The new sensors let the soft robotic gripper not only pick up objects as delicate as potato chips, but it also classifies them — letting the robot better understand what it’s picking up, while also exhibiting that light touch.

When classifying objects, the sensors correctly identified 10 objects with over 90 percent accuracy, even when an object slipped out of grip.

“Unlike many other soft tactile sensors, ours can be rapidly fabricated, retrofitted into grippers, and show sensitivity and reliability,” says MIT postdoc Josie Hughes, the lead author on a new paper about the sensors. “We hope they provide a new method of soft sensing that can be applied to a wide range of different applications in manufacturing settings, like packing and lifting.”

In a second paper, a group of researchers created a soft robotic finger called “GelFlex” that uses embedded cameras and deep learning to enable high-resolution tactile sensing and “proprioception” (awareness of positions and movements of the body).

The gripper, which looks much like a two-finger cup gripper you might see at a soda station, uses a tendon-driven mechanism to actuate the fingers. When tested on metal objects of various shapes, the system had over 96 percent recognition accuracy.

“Our soft finger can provide high accuracy on proprioception and accurately predict grasped objects, and also withstand considerable impact without harming the interacted environment and itself,” says Yu She, lead author on a new paper on GelFlex. “By constraining soft fingers with a flexible exoskeleton, and performing high-resolution sensing with embedded cameras, we open up a large range of capabilities for soft manipulators.”

Magic ball senses

The magic ball gripper is made from a soft origami structure, encased by a soft balloon. When a vacuum is applied to the balloon, the origami structure closes around the object, and the gripper deforms to its structure.

While this motion lets the gripper grasp a much wider range of objects than ever before, such as soup cans, hammers, wine glasses, drones, and even a single broccoli floret, the greater intricacies of delicacy and understanding were still out of reach — until they added the sensors.

When the sensors experience force or strain, the internal pressure changes, and the team can measure this change in pressure to recognize when the gripper encounters that force or strain again.

In addition to the latex sensor, the team also developed an algorithm which uses feedback to let the gripper possess a human-like duality of being both strong and precise — and 80 percent of the tested objects were successfully grasped without damage.

The team tested the gripper-sensors on a variety of household items, ranging from heavy bottles to small, delicate objects, including cans, apples, a toothbrush, a water bottle, and a bag of cookies.

Going forward, the team hopes to make the methodology scalable, using computational design and reconstruction methods to improve the resolution and coverage using this new sensor technology. Eventually, they imagine using the new sensors to create a fluidic sensing skin that shows scalability and sensitivity.

Hughes co-wrote the new paper with Rus; they will present it virtually at the 2020 International Conference on Robotics and Automation.

GelFlex

In the second paper, a CSAIL team looked at giving a soft robotic gripper more nuanced, human-like senses. Soft fingers allow a wide range of deformations, but to be used in a controlled way there must be rich tactile and proprioceptive sensing. The team used embedded cameras with wide-angle “fisheye” lenses that capture the finger’s deformations in great detail.

To create GelFlex, the team used silicone material to fabricate the soft and transparent finger, and put one camera near the fingertip and the other in the middle of the finger. Then, they painted reflective ink on the front and side surface of the finger, and added LED lights on the back. This allows the internal fish-eye camera to observe the status of the front and side surface of the finger.

The team trained neural networks to extract key information from the internal cameras for feedback. One neural net was trained to predict the bending angle of GelFlex, and the other was trained to estimate the shape and size of the objects being grabbed. The soft robotic gripper could then pick up a variety of items such as a Rubik’s cube, a DVD case, or a block of aluminum.
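
The split into two networks can be pictured as two small regressors sharing the same internal-camera input: one head outputs the bending angle, the other scores object categories. The toy sketch below uses random weights and a flattened fake frame purely to show the wiring; the real system trains convolutional networks on GelFlex's internal fisheye images.

```python
# Toy sketch of the two-network split described above: one regressor for the
# finger's bending angle (proprioception), one classifier for the grasped
# object. Random weights and flattened pixels stand in for the real trained
# networks operating on the internal fisheye images.

import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, w2):
    return np.tanh(x @ w1) @ w2         # minimal two-layer forward pass

# Fake internal-camera frame (grayscale, downsampled) flattened into a vector.
frame = rng.random((32 * 32,))

# Untrained placeholder weights; a real system would learn these from data.
w1_bend, w2_bend = rng.normal(size=(1024, 64)), rng.normal(size=(64, 1))
w1_obj,  w2_obj  = rng.normal(size=(1024, 64)), rng.normal(size=(64, 3))

bend_angle_deg = mlp(frame, w1_bend, w2_bend)[0]          # proprioception head
object_logits  = mlp(frame, w1_obj,  w2_obj)              # e.g. cylinder/box/other
print(bend_angle_deg, object_logits.argmax())
```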

During testing, the average positional error while gripping was less than 0.77 millimeter, which is better than that of a human finger. In a second set of tests, the soft robotic gripper was challenged with grasping and recognizing cylinders and boxes of various sizes. Out of 80 trials, only three were classified incorrectly.

In the future, the team hopes to improve the proprioception and tactile sensing algorithms, and utilize vision-based sensors to estimate more complex finger configurations, such as twisting or lateral bending, which are challenging for common sensors, but should be attainable with embedded cameras.

Yu She co-wrote the GelFlex paper with MIT graduate student Sandra Q. Liu, Peiyu Yu of Tsinghua University, and MIT Professor Edward Adelson. They will present the paper virtually at the 2020 International Conference on Robotics and Automation.

Editor’s Note: This article was reprinted with permission from MIT News.

‘Conduct-A-Bot’ system uses muscle signals to control drones

Published Tue, 28 Apr 2020.


Joseph DelPreto controls a Conduct-A-Bot drone with his arm muscles. | Credit: MIT CSAIL

Albert Einstein famously postulated that “the only real valuable thing is intuition,” arguably one of the most important keys to understanding intention and communication.

But intuitiveness is hard to teach – especially to a machine. Looking to improve this, a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) came up with a method that dials us closer to more seamless human-robot collaboration. The system, called “Conduct-A-Bot,” uses human muscle signals from wearable sensors to pilot a robot’s movement.

“We envision a world in which machines help people with cognitive and physical work, and to do so, they adapt to people rather than the other way around,” said Professor Daniela Rus, director of CSAIL, deputy dean of research for the MIT Stephen A. Schwarzman College of Computing, and co-author on a paper about the system.

To enable seamless teamwork between people and machines, electromyography and motion sensors are worn on the biceps, triceps, and forearms to measure muscle signals and movement. Algorithms then process the signals to detect gestures in real time, without any offline calibration or per-user training data. The system uses just two or three wearable sensors, and nothing in the environment – largely reducing the barrier to casual users interacting with robots.

While Conduct-A-Bot could potentially be used for various scenarios, including navigating menus on electronic devices or supervising autonomous robots, for this research the team used a Parrot Bebop 2 drone, although any commercial drone could be used.

By detecting actions like rotational gestures, clenched fists, tensed arms, and activated forearms, Conduct-A-Bot can move the drone left, right, up, down, and forward, as well as allow it to rotate and stop.

If you gestured toward the right to your friend, they could likely interpret that they should move in that direction. Similarly, if you waved your hand to the left, for example, the drone would follow suit and make a left turn.

In tests, the drone correctly responded to 82 percent of over 1,500 human gestures when it was remotely controlled to fly through hoops. The system also correctly identified approximately 94 percent of cued gestures when the drone was not being controlled.

“Understanding our gestures could help robots interpret more of the nonverbal cues that we naturally use in everyday life,” says Joseph DelPreto, lead author on the new paper. “This type of system could help make interacting with a robot more similar to interacting with another person, and make it easier for someone to start using robots without prior experience or external sensors.”

This type of system could eventually target a range of applications for human-robot collaboration, including remote exploration, assistive personal robots, or manufacturing tasks like delivering objects or lifting materials.

These intelligent tools are also consistent with social distancing — and could potentially open up a realm of future contactless work. For example, you can imagine machines being controlled by humans to safely clean a hospital room, or drop off medications, while letting us humans stay at a safe distance.


Conduct-A-Bot allows users to fly a drone through an obstacle course. | Credit: MIT CSAIL

Muscle signals can often provide information about states that are hard to observe from vision, such as joint stiffness or fatigue.

For example, if you watch a video of someone holding a large box, you might have difficulty guessing how much effort or force was needed — and a machine would also have difficulty gauging that from vision alone. Using muscle sensors opens up possibilities to estimate not only motion, but also the force and torque required to execute that physical trajectory.

For the gesture vocabulary currently used to control the robot, the movements were detected as follows:

  • Stiffening the upper arm to stop the robot (similar to briefly cringing when seeing something going wrong): biceps and triceps muscle signals
  • Waving the hand left/right and up/down to move the robot sideways or vertically: forearm muscle signals (with the forearm accelerometer indicating hand orientation)
  • Fist clenching to move the robot forward: forearm muscle signals
  • Rotating clockwise/counterclockwise to turn the robot: forearm gyroscope

Machine learning classifiers detected the gestures using the wearable sensors. Unsupervised classifiers processed the muscle and motion data and clustered it in real time to learn how to separate gestures from other motions. A neural network also predicted wrist flexion or extension from forearm muscle signals.

The system essentially calibrates itself to each person’s signals while they’re making gestures that control the robot, making it faster and easier for casual users to start interacting with robots.
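
A much-simplified version of that gesture vocabulary can be written as a mapping from per-window signal summaries to drone commands, as sketched below. The feature names and cutoff values are invented for illustration; the actual pipeline separates gestures from other motion by clustering the streaming EMG and IMU data rather than applying fixed thresholds.

```python
# Simplified gesture-to-command mapping following the vocabulary listed above.
# The real pipeline clusters streaming EMG and IMU data without fixed
# thresholds; the cutoff values and feature names here are illustrative only.

def classify_gesture(features):
    """features: dict of per-window signal summaries (all values invented)."""
    if features["biceps_rms"] > 0.6 and features["triceps_rms"] > 0.6:
        return "stop"                     # stiffened upper arm
    if abs(features["forearm_gyro_z"]) > 1.0:
        return "turn_cw" if features["forearm_gyro_z"] > 0 else "turn_ccw"
    if features["forearm_rms"] > 0.5 and features["wrist_flexion"] == "neutral":
        return "forward"                  # clenched fist
    if features["forearm_rms"] > 0.3:
        return {"left": "left", "right": "right",
                "up": "up", "down": "down"}.get(features["hand_orientation"], "hover")
    return "hover"

window = {"biceps_rms": 0.1, "triceps_rms": 0.2, "forearm_rms": 0.55,
          "forearm_gyro_z": 0.1, "wrist_flexion": "extended", "hand_orientation": "left"}
print(classify_gesture(window))           # -> "left" for this made-up window
```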

In the future, the team hopes to expand the tests to include more subjects. And while the movements for Conduct-A-Bot cover common gestures for robot motion, the researchers want to extend the vocabulary to include more continuous or user-defined gestures. Eventually, the hope is to have the robots learn from these interactions to better understand the tasks and provide more predictive assistance or increase their autonomy.

“This system moves one step closer to letting us work seamlessly with robots so they can become more effective and intelligent tools for everyday tasks,” says DelPreto. “As such collaborations continue to become more accessible and pervasive, the possibilities for synergistic benefit continue to deepen.”

Editor’s Note: This article was republished with permission from MIT News.
