Robot ‘SuperLimbs’ help astronauts stand up after falling



Need a moment of levity? Try watching videos of astronauts falling on the moon. NASA’s outtakes of Apollo astronauts tripping and stumbling as they bounce in slow motion are delightfully relatable.

For MIT engineers, the lunar bloopers also highlight an opportunity to innovate.

“Astronauts are physically very capable, but they can struggle on the moon, where gravity is one-sixth that of Earth’s but their inertia is still the same. Furthermore, wearing a spacesuit is a significant burden and can constrict their movements,” says Harry Asada, professor of mechanical engineering at MIT. “We want to provide a safe way for astronauts to get back on their feet if they fall.”

Asada and his colleagues are designing a pair of wearable robotic limbs that can physically support an astronaut and lift them back on their feet after a fall. The system, which the researchers have dubbed Supernumerary Robotic Limbs, or “SuperLimbs,” is designed to extend from a backpack that would also carry the astronaut’s life support system, along with the controller and motors to power the limbs.

The researchers have built a physical prototype, as well as a control system to direct the limbs, based on feedback from the astronaut using it. The team tested a preliminary version on healthy subjects who also volunteered to wear a constrictive garment similar to an astronaut’s spacesuit. When the volunteers attempted to get up from a sitting or lying position, they did so with less effort when assisted by SuperLimbs, compared to when they had to recover on their own.

The MIT team envisions that SuperLimbs can physically assist astronauts after a fall and, in the process, help them conserve their energy for other essential tasks. The design could prove especially useful in the coming years, with the launch of NASA’s Artemis mission, which plans to send astronauts back to the moon for the first time in over 50 years. Unlike the largely exploratory mission of Apollo, Artemis astronauts will endeavor to build the first permanent moon base — a physically demanding task that will require multiple extended extravehicular activities (EVAs).

“During the Apollo era, when astronauts would fall, 80 percent of the time it was when they were doing excavation or some sort of job with a tool,” says team member and MIT doctoral student Erik Ballesteros. “The Artemis missions will really focus on construction and excavation, so the risk of falling is much higher. We think that SuperLimbs can help them recover so they can be more productive, and extend their EVAs.”

Asada, Ballesteros, and their colleagues presented their design and study at the IEEE International Conference on Robotics and Automation (ICRA). Their co-authors include MIT postdoc Sang-Yoep Lee and Kalind Carpenter of the Jet Propulsion Laboratory.

Taking a stand

The team’s design is the latest application of SuperLimbs, which Asada first developed about a decade ago and has since adapted for a range of applications, including assisting workers in aircraft manufacturing, construction, and ship building.

Most recently, Asada and Ballesteros wondered whether SuperLimbs might assist astronauts, particularly as NASA plans to send astronauts back to the surface of the moon.


SuperLimbs, a system of wearable robotic limbs, is designed to lift up astronauts after they fall. Credit: MIT

“In communications with NASA, we learned that this issue of falling on the moon is a serious risk,” Asada says. “We realized that we could make some modifications to our design to help astronauts recover from falls and carry on with their work.”

The team first took a step back, to study the ways in which humans naturally recover from a fall. In their new study, they asked several healthy volunteers to attempt to stand upright after lying on their side, front, and back.

The researchers then looked at how the volunteers’ attempts to stand changed when their movements were constricted, similar to the way astronauts’ movements are limited by the bulk of their spacesuits. The team built a suit to mimic the stiffness of traditional spacesuits, and had volunteers don the suit before again attempting to stand up from various fallen positions. The volunteers’ sequence of movements was similar, though it required much more effort than their unencumbered attempts.

The team mapped the movements of each volunteer as they stood up, and found that they each carried out a common sequence of motions, moving from one pose, or “waypoint,” to the next, in a predictable order.

“Those ergonomic experiments helped us to model in a straightforward way, how a human stands up,” Ballesteros says. “We could postulate that about 80 percent of humans stand up in a similar way. Then we designed a controller around that trajectory.”
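For readers who want a concrete sense of what a waypoint-based assist controller can look like, here is a minimal Python sketch. The torso poses, gains, and tolerances below are illustrative assumptions, not values from the MIT study, and the real SuperLimbs controller also incorporates feedback from the wearer.

```python
# Hypothetical sketch: advance through a fixed sequence of "stand-up" waypoints
# with a simple proportional controller. Poses, gains, and tolerances are invented.
import numpy as np

WAYPOINTS = [                       # assumed torso [x, z, pitch] targets, not from the paper
    np.array([0.00, 0.20, 1.40]),   # lying on one side
    np.array([0.15, 0.45, 0.90]),   # propped up / kneeling
    np.array([0.25, 0.75, 0.40]),   # crouched
    np.array([0.30, 1.00, 0.00]),   # upright
]

def assist_step(current_pose, waypoint_idx, kp=1.5, tol=0.05):
    """Return a corrective velocity command and the (possibly advanced) waypoint index."""
    error = WAYPOINTS[waypoint_idx] - current_pose
    if np.linalg.norm(error) < tol and waypoint_idx < len(WAYPOINTS) - 1:
        waypoint_idx += 1                            # close enough: move to the next pose
        error = WAYPOINTS[waypoint_idx] - current_pose
    return kp * error, waypoint_idx                  # proportional push toward the active waypoint

# Toy rollout: start lying down and integrate the commanded velocity forward in time.
pose, idx, dt = WAYPOINTS[0].copy(), 0, 0.05
for _ in range(400):
    cmd, idx = assist_step(pose, idx)
    pose += cmd * dt
print("reached waypoint", idx, "final pose:", np.round(pose, 2))
```

The structure mirrors the description above: the controller pushes the wearer toward the next pose in a fixed stand-up sequence and only advances once that waypoint has been reached.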

SuperLimbs lend a helping hand

The team developed software to generate a trajectory for a robot, following a sequence that would help support a human and lift them back on their feet. They applied the controller to a heavy, fixed robotic arm, which they attached to a large backpack. The researchers then attached the backpack to the bulky suit and helped volunteers back into the suit. They asked the volunteers to again lie on their back, front, or side, and then had them attempt to stand as the robot sensed the person’s movements and adapted to help them to their feet.

Overall, the volunteers were able to stand stably with much less effort when assisted by the robot, compared to when they tried to stand alone while wearing the bulky suit.

“It feels kind of like an extra force moving with you,” says Ballesteros, who also tried out the suit and arm assist. “Imagine wearing a backpack and someone grabs the top and sort of pulls you up. Over time, it becomes sort of natural.”




The experiments confirmed that the control system can successfully direct a robot to help a person stand back up after a fall. The researchers plan to pair the control system with their latest version of SuperLimbs, which comprises two multi-jointed robotic arms that can extend out from a backpack. The backpack would also contain the robot’s battery and motors, along with an astronaut’s ventilation system.

“We designed these robotic arms based on an AI search and design optimization, to look for designs of classic robot manipulators with certain engineering constraints,” Ballesteros says. “We filtered through many designs and looked for the design that consumes the least amount of energy to lift a person up. This version of SuperLimbs is the product of that process.”
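The team’s actual design-optimization pipeline is not public, but the general pattern of filtering candidate designs against constraints and keeping the lowest-energy one can be sketched as follows. The parameter ranges, reach constraint, and energy proxy are invented for illustration.

```python
# Hypothetical sketch of a constrained design search: sample arm designs, discard
# those that violate a reach constraint, keep the one with the lowest energy proxy.
# The parameter ranges, constraint, and energy model are illustrative, not MIT's.
import random

def sample_design():
    return {
        "link_lengths": [random.uniform(0.2, 0.6) for _ in range(3)],  # meters
        "gear_ratio": random.uniform(20, 120),
    }

def satisfies_constraints(d, min_reach=0.9):
    return sum(d["link_lengths"]) >= min_reach        # must reach the ground from the backpack

def energy_to_lift(d, payload_kg=90.0):
    # Crude proxy: torque demand grows with reach and payload, motor losses with gear ratio.
    reach = sum(d["link_lengths"])
    return payload_kg * 9.81 * reach + 0.5 * d["gear_ratio"]

best = min(
    (d for d in (sample_design() for _ in range(10_000)) if satisfies_constraints(d)),
    key=energy_to_lift,
)
print("best candidate:", best, "proxy energy:", round(energy_to_lift(best), 1))
```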

Over the summer, Ballesteros will build out the full SuperLimbs system at NASA’s Jet Propulsion Laboratory, where he plans to streamline the design and minimize the weight of its parts and motors using advanced, lightweight materials. Then, he hopes to pair the limbs with astronaut suits, and test them in low-gravity simulators, with the goal of someday assisting astronauts on future missions to the moon and Mars.

“Wearing a spacesuit can be a physical burden,” Asada notes. “Robotic systems can help ease that burden, and help astronauts be more productive during their missions.”

Editor’s Note: This article was republished from MIT News.


MIT designs robotic heart chamber

The realistic model could aid the development of better heart implants and shed light on understudied heart disorders.



A new robotic model developed by MIT simulates the heart’s lesser-known right ventricle. | Credit: MIT

MIT engineers have developed a robotic replica of the heart’s right ventricle, which mimics the beating and blood-pumping action of live hearts.

The robo-ventricle combines real heart tissue with synthetic, balloon-like artificial muscles that enable scientists to control the ventricle’s contractions while observing how its natural valves and other intricate structures function.

The artificial ventricle can be tuned to mimic healthy and diseased states. The team manipulated the model to simulate conditions of right ventricular dysfunction, including pulmonary hypertension and myocardial infarction (heart attack). They also used the model to test cardiac devices. For instance, the team implanted a mechanical valve to repair a natural malfunctioning valve, and then observed how the ventricle’s pumping changed in response.

They say the new robotic right ventricle, or RRV, can be used as a realistic platform to study right ventricle disorders and test devices and therapies aimed at treating those disorders.

“The right ventricle is particularly susceptible to dysfunction in intensive care unit settings, especially in patients on mechanical ventilation,” says Manisha Singh, a postdoc at MIT’s Institute for Medical Engineering and Science (IMES). “The RRV simulator can be used in the future to study the effects of mechanical ventilation on the right ventricle and to develop strategies to prevent right heart failure in these vulnerable patients.”

Singh and her colleagues report details of the new design in an open-access paper appearing today in Nature Cardiovascular Research. Her co-authors include Associate Professor Ellen Roche, who is a core member of IMES and the associate head for research in the Department of Mechanical Engineering at MIT; along with Jean Bonnemain, Caglar Ozturk, Clara Park, Diego Quevedo-Moreno, Meagan Rowlett, and Yiling Fan of MIT; Brian Ayers of Massachusetts General Hospital; Christopher Nguyen of Cleveland Clinic; and Mossab Saeed of Boston Children’s Hospital.

A ballet of beats

The right ventricle is one of the heart’s four chambers, along with the left ventricle and the left and right atria. Of the four chambers, the left ventricle is the heavy lifter, as its thick, cone-shaped musculature is built for pumping blood through the entire body. The right ventricle, Roche says, is a “ballerina” in comparison, as it handles a lighter though no-less-crucial load.

“The right ventricle pumps deoxygenated blood to the lungs, so it doesn’t have to pump as hard,” Roche notes. “It’s a thinner muscle, with more complex architecture and motion.”

This anatomical complexity has made it difficult for clinicians to accurately observe and assess right ventricle function in patients with heart disease.

“Conventional tools often fail to capture the intricate mechanics and dynamics of the right ventricle, leading to potential misdiagnoses and inadequate treatment strategies,” Singh says.

To improve understanding of the lesser-known chamber and speed the development of cardiac devices to treat its dysfunction, the team designed a realistic, functional model of the right ventricle that both captures its anatomical intricacies and reproduces its pumping function.

The model includes real heart tissue, which the team chose to incorporate because it retains natural structures that are too complex to reproduce synthetically.

“There are thin, tiny chordae and valve leaflets with different material properties that are all moving in concert with the ventricle’s muscle. Trying to cast or print these very delicate structures is quite challenging,” Roche explains.

A heart’s shelf-life

In the new study, the team reports explanting a pig’s right ventricle, which they treated to carefully preserve its internal structures. They then fit a silicone wrapping around it, which acted as a soft, synthetic myocardium, or muscular lining. Within this lining, the team embedded several long, balloon-like tubes, which encircled the real heart tissue, in positions that the team determined through computational modeling to be optimal for reproducing the ventricle’s contractions. The researchers connected each tube to a control system, which they then set to inflate and deflate each tube at rates that mimicked the heart’s real rhythm and motion.
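A control system like the one described can be as simple as an open-loop pressure schedule that repeats every cardiac cycle. The sketch below assumes a raised-cosine pressure pulse, a 70 bpm rhythm, and small per-tube phase offsets; none of these numbers come from the study.

```python
# Hypothetical sketch of an open-loop pneumatic schedule: each tube is inflated
# during systole and deflated during diastole, with small per-tube phase offsets.
# Timings, offsets, and pressures are illustrative only.
import math

HEART_RATE_BPM = 70
CYCLE = 60.0 / HEART_RATE_BPM        # one cardiac cycle, in seconds
SYSTOLE_FRACTION = 0.35              # assumed fraction of the cycle spent contracting
TUBE_PHASE_OFFSETS = [0.00, 0.03, 0.06, 0.09]   # seconds; staggered contraction

def tube_pressure(t, tube_idx, p_max_kpa=40.0):
    """Target pressure for one tube at time t (simple raised-cosine pulse per beat)."""
    phase = ((t - TUBE_PHASE_OFFSETS[tube_idx]) % CYCLE) / CYCLE
    if phase < SYSTOLE_FRACTION:
        return p_max_kpa * 0.5 * (1 - math.cos(2 * math.pi * phase / SYSTOLE_FRACTION))
    return 0.0                        # deflated during diastole

# Print one beat's worth of commands for the first tube.
for step in range(10):
    t = step * CYCLE / 10
    print(f"t={t:.2f}s  tube0 target ~ {tube_pressure(t, 0):.1f} kPa")
```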

To test its pumping ability, the team infused the model with a liquid similar in viscosity to blood. This particular liquid was also transparent, allowing the engineers to observe with an internal camera how internal valves and structures responded as the ventricle pumped liquid through.

They found that the artificial ventricle’s pumping power and the function of its internal structures were similar to what they previously observed in live, healthy animals, demonstrating that the model can realistically simulate the right ventricle’s action and anatomy. The researchers could also tune the frequency and power of the pumping tubes to mimic various cardiac conditions, such as irregular heartbeats, muscle weakening, and hypertension.

“We’re reanimating the heart, in some sense, and in a way that we can study and potentially treat its dysfunction,” Roche says.

To show that the artificial ventricle can be used to test cardiac devices, the team surgically implanted ring-like medical devices of various sizes to repair the chamber’s tricuspid valve — a leafy, one-way valve that lets blood into the right ventricle. When this valve is leaky, or physically compromised, it can cause right heart failure or atrial fibrillation and lead to symptoms such as reduced exercise capacity, swelling of the legs and abdomen, and liver enlargement.

The researchers surgically manipulated the robo-ventricle’s valve to simulate this condition, then either replaced it by implanting a mechanical valve or repaired it using ring-like devices of different sizes. They observed which device improved the ventricle’s fluid flow as it continued to pump.

“With its ability to accurately replicate tricuspid valve dysfunction, the RRV serves as an ideal training ground for surgeons and interventional cardiologists,” Singh says. “They can practice new surgical techniques for repairing or replacing the tricuspid valve on our model before performing them on actual patients.”

Currently, the RRV can simulate realistic function over a few months. The team is working to extend that performance and enable the model to run continuously for longer stretches. They are also working with designers of implantable devices to test their prototypes on the artificial ventricle and possibly speed their path to patients. And looking far in the future, Roche plans to pair the RRV with a similar artificial, functional model of the left ventricle, which the group is currently fine-tuning.

“We envision pairing this with the left ventricle to make a fully tunable, artificial heart, that could potentially function in people,” Roche says. “We’re quite a while off, but that’s the overarching vision.”

This research was supported, in part, by the National Science Foundation.

Editor’s Note: This article was republished from MIT News.


Inflatable robotic hand gives amputees real-time tactile control



The smart hand is soft and elastic, weighs about half a pound, and costs a fraction of comparable prosthetics.

For the more than 5 million people in the world who have undergone an upper-limb amputation, prosthetics have come a long way. Beyond traditional mannequin-like appendages, there is a growing number of commercial neuroprosthetics — highly articulated bionic limbs, engineered to sense a user’s residual muscle signals and robotically mimic their intended motions.

But this high-tech dexterity comes at a price. Neuroprosthetics can cost tens of thousands of dollars and are built around metal skeletons, with electrical motors that can be heavy and rigid.

Now engineers at MIT and Shanghai Jiao Tong University have designed a soft, lightweight, and potentially low-cost neuroprosthetic hand. Amputees who tested the artificial limb performed daily activities, such as zipping a suitcase, pouring a carton of juice, and petting a cat, just as well as — and in some cases better than — those with more rigid neuroprosthetics.

The researchers found the prosthetic, designed with a system for tactile feedback, restored some primitive sensation in a volunteer’s residual limb. The new design is also surprisingly durable, quickly recovering after being struck with a hammer or run over with a car.

The smart hand is soft and elastic, and weighs about half a pound. Its components total around $500 — a fraction of the weight and material cost associated with more rigid smart limbs.

“This is not a product yet, but the performance is already similar or superior to existing neuroprosthetics, which we’re excited about,” said Xuanhe Zhao, professor of mechanical engineering and of civil and environmental engineering at MIT. “There’s huge potential to make this soft prosthetic very low cost, for low-income families who have suffered from amputation.”

Zhao and his colleagues have published their work today in Nature Biomedical Engineering. Co-authors include MIT postdoc Shaoting Lin, along with Guoying Gu, Xiangyang Zhu, and collaborators at Shanghai Jiao Tong University in China.

Big Hero hand

The team’s pliable new design bears an uncanny resemblance to a certain inflatable robot in the animated film “Big Hero 6.” Like the squishy android, the team’s artificial hand is made from soft, stretchy material — in this case, the commercial elastomer EcoFlex. The prosthetic comprises five balloon-like fingers, each embedded with segments of fiber, similar to articulated bones in actual fingers. The bendy digits are connected to a 3-D-printed “palm,” shaped like a human hand.

Related: Watch a soft robotic hand play Mario Bros.

Rather than controlling each finger using mounted electrical motors, as most neuroprosthetics do, the researchers used a simple pneumatic system to precisely inflate fingers and bend them in specific positions. This system, including a small pump and valves, can be worn at the waist, significantly reducing the prosthetic’s weight.

Lin developed a computer model to relate a finger’s desired position to the corresponding pressure a pump would have to apply to achieve that position. Using this model, the team developed a controller that directs the pneumatic system to inflate the fingers, in positions that mimic five common grasps, including pinching two and three fingers together, making a balled-up fist, and cupping the palm.
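As a rough illustration of that position-to-pressure idea, the sketch below assumes a simple linear calibration per finger and a small table of grasp postures. The calibration constants, grasp angles, and grasp names are invented; they are not the model Lin built.

```python
# Hypothetical sketch of a position-to-pressure lookup: a per-finger calibration maps
# a desired bend angle to an inflation pressure, and a grasp table stores target angles.
# All constants here are invented for illustration.
FINGERS = ["thumb", "index", "middle", "ring", "pinky"]

# Assumed linear calibration: pressure_kpa = slope * bend_deg + offset.
CALIBRATION = {f: {"slope": 0.9, "offset": 5.0} for f in FINGERS}

GRASPS = {                              # target bend angles (degrees) per grasp, illustrative
    "fist":   {f: 90 for f in FINGERS},
    "pinch2": {"thumb": 60, "index": 60, "middle": 0, "ring": 0, "pinky": 0},
    "pinch3": {"thumb": 60, "index": 60, "middle": 60, "ring": 0, "pinky": 0},
    "cup":    {f: 35 for f in FINGERS},
    "open":   {f: 0 for f in FINGERS},
}

def pressures_for_grasp(grasp_name):
    """Convert a named grasp into per-finger pump pressures via the calibration model."""
    angles = GRASPS[grasp_name]
    return {
        f: CALIBRATION[f]["slope"] * angles[f] + CALIBRATION[f]["offset"]
        for f in FINGERS
    }

print(pressures_for_grasp("pinch2"))
```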

The pneumatic system receives signals from EMG sensors — electromyography sensors that measure electrical signals generated by motor neurons to control muscles. The sensors are fitted at the prosthetic’s opening, where it attaches to a user’s limb. In this arrangement, the sensors can pick up signals from a residual limb, such as when an amputee imagines making a fist.

The team then used an existing algorithm that “decodes” muscle signals and relates them to common grasp types. They used this algorithm to program the controller for their pneumatic system. When an amputee imagines, for instance, holding a wine glass, the sensors pick up the residual muscle signals, which the controller then translates into corresponding pressures. The pump then applies those pressures to inflate each finger and produce the amputee’s intended grasp.
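The decoding step itself can be pictured as a lightweight classifier that turns a window of muscle activity into one of the trained grasp labels. The sketch below uses a nearest-centroid rule over per-channel RMS features with made-up centroids; the algorithm the team actually used may differ.

```python
# Hypothetical sketch of the decoding step only: classify a window of EMG features
# into a grasp label with a nearest-centroid rule. Channel count, centroids, and
# the feature (per-channel RMS) are illustrative.
import math

GRASP_CENTROIDS = {                  # assumed mean RMS per EMG channel for each grasp
    "fist":   [0.80, 0.75, 0.70, 0.65],
    "pinch2": [0.30, 0.60, 0.20, 0.15],
    "pinch3": [0.35, 0.55, 0.45, 0.20],
    "cup":    [0.50, 0.40, 0.40, 0.40],
    "open":   [0.05, 0.05, 0.05, 0.05],
}

def rms(window):
    return math.sqrt(sum(x * x for x in window) / len(window))

def decode_grasp(emg_windows):
    """emg_windows: one list of raw samples per channel -> best-matching grasp label."""
    features = [rms(w) for w in emg_windows]
    return min(
        GRASP_CENTROIDS,
        key=lambda g: sum((f - c) ** 2 for f, c in zip(features, GRASP_CENTROIDS[g])),
    )

# Toy input: four channels of fake EMG samples that resemble a two-finger pinch.
fake = [[0.3, -0.3, 0.3], [0.6, -0.6, 0.6], [0.2, -0.2, 0.2], [0.15, -0.15, 0.15]]
print(decode_grasp(fake))            # -> "pinch2"
```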

Going a step further in their design, the researchers looked to enable tactile feedback — a feature that is not incorporated in most commercial neuroprosthetics. To do this, they stitched to each fingertip a pressure sensor, which when touched or squeezed produces an electrical signal proportional to the sensed pressure. Each sensor is wired to a specific location on an amputee’s residual limb, so the user can “feel” when the prosthetic’s thumb is pressed, for example, versus the forefinger.
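A minimal sketch of that sensor-to-limb mapping, assuming a simple proportional (and clamped) relationship between fingertip pressure and the feedback delivered at each site; the site names and gains are invented:

```python
# Hypothetical sketch of fingertip-to-limb feedback: each fingertip sensor drives a
# stimulation site on the residual limb with an amplitude proportional to sensed
# pressure. Site names, gain, and the clamp are invented values.
SITE_FOR_FINGER = {          # which limb site "feels" which fingertip (assumed layout)
    "thumb": "site_A",
    "index": "site_B",
    "middle": "site_C",
    "ring": "site_D",
    "pinky": "site_E",
}

def feedback_commands(fingertip_pressures_kpa, gain=0.1, max_amplitude=5.0):
    """Map sensed fingertip pressures to per-site stimulation amplitudes (arbitrary units)."""
    return {
        SITE_FOR_FINGER[f]: min(gain * p, max_amplitude)    # clamp to a safe ceiling
        for f, p in fingertip_pressures_kpa.items()
    }

print(feedback_commands({"thumb": 12.0, "index": 30.0, "middle": 0.0, "ring": 0.0, "pinky": 0.0}))
```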

Good grip

To test the inflatable hand, the researchers enlisted two volunteers, each with upper-limb amputations. Once outfitted with the neuroprosthetic, the volunteers learned to use it by repeatedly contracting the muscles in their arm while imagining making five common grasps.

After completing this 15-minute training, the volunteers were asked to perform a number of standardized tests to demonstrate manual strength and dexterity. These tasks included stacking checkers, turning pages, writing with a pen, lifting heavy balls, and picking up fragile objects like strawberries and bread. They repeated the same tests using a more rigid, commercially available bionic hand and found that the inflatable prosthetic was as good, or even better, at most tasks, compared to its rigid counterpart.

One volunteer was also able to intuitively use the soft prosthetic in daily activities, for instance to eat food like crackers, cake, and apples, and to handle objects and tools, such as laptops, bottles, hammers, and pliers. This volunteer could also safely manipulate the squishy prosthetic, for instance to shake someone’s hand, touch a flower, and pet a cat.

In a particularly exciting exercise, the researchers blindfolded the volunteer and found he could discern which prosthetic finger they poked and brushed. He was also able to “feel” bottles of different sizes that were placed in the prosthetic hand, and lifted them in response. The team sees these experiments as a promising sign that amputees can regain a form of sensation and real-time control with the inflatable hand.

The team has filed a patent on the design, through MIT, and is working to improve its sensing and range of motion.

“We now have four grasp types. There can be more,” Zhao said. “This design can be improved, with better decoding technology, higher-density myoelectric arrays, and a more compact pump that could be worn on the wrist. We also want to customize the design for mass production, so we can translate soft robotic technology to benefit society.”

Editor’s Note: This article was republished from MIT News.


Training drones to avoid obstacles at high speeds



A drone flies a race course through several gates to find the fastest feasible trajectory. | Photo Credit: MIT

If you follow autonomous drone racing, you likely remember the crashes as much as the wins. In drone racing, teams compete to see which vehicle is better trained to fly fastest through an obstacle course. But the faster drones fly, the more unstable they become, and at high speeds their aerodynamics can be too complicated to predict. Crashes, therefore, are a common and often spectacular occurrence.

But if they can be pushed to be faster and more nimble, drones could be put to use in time-critical operations beyond the race course, for instance to search for survivors in a natural disaster.

Now, aerospace engineers at MIT have devised an algorithm that helps drones find the fastest route around obstacles without crashing. The new algorithm combines simulations of a drone flying through a virtual obstacle course with data from experiments of a real drone flying through the same course in a physical space.

The researchers found that a drone trained with their algorithm flew through a simple obstacle course up to 20% faster than a drone trained on conventional planning algorithms. Interestingly, the new algorithm didn’t always keep a drone ahead of its competitor throughout the course. In some cases, it chose to slow a drone down to handle a tricky curve, or save its energy in order to speed up and ultimately overtake its rival.

“At high speeds, there are intricate aerodynamics that are hard to simulate, so we use experiments in the real world to fill in those black holes to find, for instance, that it might be better to slow down first to be faster later,” said Ezra Tal, a graduate student in MIT’s Department of Aeronautics and Astronautics. “It’s this holistic approach we use to see how we can make a trajectory overall as fast as possible.”

“These kinds of algorithms are a very valuable step toward enabling future drones that can navigate complex environments very fast,” added Sertac Karaman, associate professor of aeronautics and astronautics, and director of the Laboratory for Information and Decision Systems at MIT. “We are really hoping to push the limits in a way that they can travel as fast as their physical limits will allow.”

Tal, Karaman, and MIT graduate student Gilhyun Ryou have published their research.

Fast effects

Training drones to fly around obstacles is relatively straightforward if they are meant to fly slowly. That’s because aerodynamics such as drag don’t generally come into play at low speeds, and they can be left out of any modeling of a drone’s behavior. But at high speeds, such effects are far more pronounced, and how the vehicles will handle is much harder to predict.

“When you’re flying fast, it’s hard to estimate where you are,” Ryou said. “There could be delays in sending a signal to a motor, or a sudden voltage drop which could cause other dynamics problems. These effects can’t be modeled with traditional planning approaches.”

Watch: drone detects & avoids obstacles in 3.5 milliseconds

To get an understanding for how high-speed aerodynamics affect drones in flight, researchers have to run many experiments in the lab, setting drones at various speeds and trajectories to see which fly fast without crashing — an expensive, and often crash-inducing training process.

Instead, the MIT team developed a high-speed flight-planning algorithm that combines simulations and experiments, in a way that minimizes the number of experiments required to identify fast and safe flight paths.

The researchers started with a physics-based flight planning model, which they developed to first simulate how a drone is likely to behave while flying through a virtual obstacle course. They simulated thousands of racing scenarios, each with a different flight path and speed pattern. They then charted whether each scenario was feasible (safe), or infeasible (resulting in a crash). From this chart, they could quickly zero in on a handful of the most promising scenarios, or racing trajectories, to try out in the lab.

“We can do this low-fidelity simulation cheaply and quickly, to see interesting trajectories that could be both fast and feasible. Then we fly these trajectories in experiments to see which are actually feasible in the real world,” Tal said. “Ultimately we converge to the optimal trajectory that gives us the lowest feasible time.”
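Put together, the workflow reads as: score many candidate speed profiles in a cheap simulator, shortlist the most promising, and fly only those. The toy sketch below captures that structure; the feasibility thresholds, costs, and noise model are stand-ins, not the team’s flight-planning algorithm.

```python
# Hypothetical sketch of the sim-then-fly loop: cheaply score many candidate speed
# profiles, shortlist the fastest feasible ones, then run only those as "experiments".
# Thresholds, costs, and noise are toy stand-ins.
import random

random.seed(0)

def simulate(speed):
    """Cheap low-fidelity model: faster is better, but beyond a threshold it predicts a crash."""
    feasible = speed <= 9.0                         # assumed simulated crash threshold (m/s)
    lap_time = 100.0 / max(speed, 0.1)              # 100 m course, constant speed
    return feasible, lap_time

def fly_experiment(speed):
    """Expensive high-fidelity check: unmodeled aerodynamics lower the real threshold."""
    feasible = speed <= 8.5                         # assumed real-world crash threshold
    lap_time = 100.0 / max(speed, 0.1) * random.uniform(1.00, 1.05)
    return feasible, lap_time

candidates = [2.0 + 0.1 * i for i in range(90)]                  # candidate average speeds
sim_ok = [s for s in candidates if simulate(s)[0]]
shortlist = sorted(sim_ok, key=lambda s: simulate(s)[1])[:10]    # most promising in simulation

flown = [(s, *fly_experiment(s)) for s in shortlist]             # run only the shortlist
feasible_flights = [f for f in flown if f[1]]
best = min(feasible_flights, key=lambda f: f[2])
print(f"fastest trajectory that survived the experiments: {best[0]:.1f} m/s, {best[2]:.1f} s")
```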

Researchers simulated thousands of racing scenarios, each with a different flight path and speed pattern. | Photo Credit: MIT

Going slow to go fast

To demonstrate their new approach, the researchers simulated a drone flying through a simple course with five large, square-shaped obstacles arranged in a staggered configuration. They set up this same configuration in a physical training space, and programmed a drone to fly through the course at speeds and trajectories that they previously picked out from their simulations. They also ran the same course with a drone trained on a more conventional algorithm that does not incorporate experiments into its planning.

Overall, the drone trained on the new algorithm “won” every race, completing the course in a shorter time than the conventionally trained drone. In some scenarios, the winning drone finished the course 20% faster than its competitor, even though it took a trajectory with a slower start, for instance taking a bit more time to bank around a turn. This kind of subtle adjustment was not taken by the conventionally trained drone, likely because its trajectories, based solely on simulations, could not entirely account for aerodynamic effects that the team’s experiments revealed in the real world.

The researchers plan to fly more experiments, at faster speeds, and through more complex environments, to further improve their algorithm. They also may incorporate flight data from human pilots who race drones remotely, and whose decisions and maneuvers might help zero in on even faster yet still feasible flight plans.

“If a human pilot is slowing down or picking up speed, that could inform what our algorithm does,” Tal said. “We can also use the trajectory of the human pilot as a starting point, and improve from that, to see, what is something humans don’t do, that our algorithm can figure out, to fly faster. Those are some future ideas we’re thinking about.”

Editor’s Note: This article was republished from MIT News.


The future of human-robot collaboration the subject of a new book, RoboBusiness Direct session



As COVID-19 has made it necessary for people to maintain “social distancing,” robots are stepping in to sanitize warehouses and hospitals, ferry test samples to laboratories, and provide avatars for telemedicine. A new book, What to Expect When You’re Expecting Robots, extrapolates from current trends to posit a future filled with assistive systems.

There are signs that people may be increasingly receptive to robotic help, preferring, at least hypothetically, to be picked up by a self-driving taxi or have their food delivered via robot, to reduce their risk of catching the virus.

As more intelligent, independent machines make their way into the public sphere, engineers Julie Shah and Laura Major are urging designers to rethink not just how robots fit in with society, but also how society can change to accommodate these new, “working” robots.

Shah is an associate professor of aeronautics and astronautics at the Massachusetts Institute of Technology and the associate dean of social and ethical responsibilities of computing in the MIT Schwarzman College of Computing. Major, SM ’05, is chief technology officer of Motional, a self-driving car venture supported by automotive companies Hyundai and Aptiv. Together, they have written a new book, What to Expect When You’re Expecting Robots: The Future of Human-Robot Collaboration, published this month by Basic Books.

Robots to be less like tools and more like partners


Laura Major, CTO of Motional

What we can expect, they wrote, is that robots of the future will no longer work for us, but with us. They will be less like tools, programmed to carry out specific tasks in controlled environments, as factory automatons and domestic Roombas have been, and more like partners, interacting with and working among people in the more complex and chaotic real world. As such, Shah and Major said that robots and humans will have to establish a mutual understanding.

“Part of the book is about designing robotic systems that think more like people, and that can understand the very subtle social signals that we provide to each other, that make our world work,” Shah said. “But equal emphasis in the book is on how we have to structure the way we live our lives, from our crosswalks to our social norms, so that robots can more effectively live in our world.”

Related content: The Robot Report Podcast: Motional talks robotaxis, demystifying China’s robotics industry, and TRI previews kitchen robot

Getting to know you

As robots increasingly enter public spaces, they may do so safely if they have a better understanding of human and social behavior.

Consider a package delivery robot on a busy sidewalk: The robot may be programmed to give a standard berth to obstacles in its path, such as traffic cones and lampposts. But what if the robot is coming upon a person wheeling a stroller while balancing a cup of coffee? A human passerby would read the social cues and perhaps step to the side to let the stroller by. Could a robot pick up the same subtle signals to change course accordingly?


Julie Shah, associate professor, MIT

Shah said she believes the answer is yes. As head of the Interactive Robotics Group at MIT, she is developing tools to help robots understand and predict human behavior, such as where people move, what they do, and who they interact with in physical spaces.

Shah has implemented these tools in robots that can recognize and collaborate with humans in environments such as the factory floor and the hospital ward. She is hoping that robots trained to read social cues can more safely be deployed in more unstructured public spaces.

Major, meanwhile, has been helping to make robots, and specifically self-driving cars, work safely and reliably in the real world, beyond the controlled, gated environments where most driverless cars operate today. About a year ago, she and Shah met for the first time, at a robotics conference.

“We were working in parallel universes, me in industry, and Julie in academia, each trying to galvanize understanding for the need to accommodate machines and robots,” Major recalls.

From that first meeting, the seeds for their new book began quickly to sprout.

Editor’s note: MassRobotics will host a discussion with Major and Shah on “The Future of Human-Robot Collaboration: Developing the Next Generation of Industrial, Commercial, and Consumer Robotics Systems” as part of the RoboBusiness Direct series on Friday, Dec. 11, 2020. Register now.

The future could bring a cyborg city

In their book, the engineers describe ways that robots and automated systems can perceive and work with humans — but also ways in which our environment and infrastructure can change to accommodate robots.

A cyborg-friendly city, engineered to manage and direct robots, could avoid scenarios such as the one that played out in San Francisco in 2017. Residents there were seeing an uptick in delivery robots deployed by local technology startups. The robots were causing congestion on city sidewalks and were an unexpected hazard to seniors with disabilities. Lawmakers ultimately enforced strict regulations on the number of delivery robots allowed in the city — a move that improved safety, but potentially at the expense of innovation.

If in the near future there are to be multiple robots sharing a sidewalk with humans at any given time, Shah and Major propose that cities might consider installing dedicated robot lanes, similar to bike lanes, to avoid accidents between robots and humans. The engineers also envision a system to organize robots in public spaces, similar to the way airplanes keep track of one another in flight.

In 1958, the Federal Aviation Agency was created, partly in response to a catastrophic 1956 crash between two planes flying through a cloud over the Grand Canyon. Prior to that crash, airplanes were virtually free to fly where they pleased. The FAA began organizing airplanes in the sky through innovations like the traffic collision avoidance system, or TCAS — a system onboard most planes today that detects other planes outfitted with a universal transponder. TCAS alerts the pilot to nearby planes and automatically charts a path, independent of ground control, for the plane to take in order to avoid a collision.

Similarly, Shah and Major said that robots in public spaces could be designed with a sort of universal sensor that enables them to see and communicate with one another, regardless of their software platform or manufacturer. This way, they might stay clear of certain areas, avoiding potential accidents and congestion, if they sense robots nearby.

“There could also be transponders for people that broadcast to robots,” Shah said. “For instance, crossing guards could use batons that can signal any robot in the vicinity to pause so that it’s safe for children to cross the street.”

Whether we are ready for them or not, the trend is clear: The robots are coming, to our sidewalks, our grocery stores, and our homes. And as the book’s title suggests, preparing for these new additions to society will take some major changes, in our perception of technology, and in our infrastructure.

“It takes a village to raise a child to be a well-adjusted member of society, capable of realizing his or her full potential,” wrote Shah and Major. “So, too, a robot.”


MIT system improves robots’ spatial perception



MIT researchers have developed a representation of spatial perception for robots that is modeled after the way humans perceive and navigate the world. The key component of the team’s new model is Kimera, an open-source library that the team previously developed to simultaneously construct a 3D geometric model of an environment. Kimera builds a dense 3D semantic mesh of an environment and can track humans in the environment. The figure shows a multi-frame action sequence of a human moving in the scene. | Credit: MIT

Wouldn’t we all appreciate a little help around the house, especially if that help came in the form of a smart, adaptable, uncomplaining robot? Sure, there are the one-trick Roombas of the appliance world. But MIT engineers are envisioning robots more like home helpers, able to follow high-level, Alexa-type commands, such as “Go to the kitchen and fetch me a coffee cup.”

To carry out such high-level tasks, researchers believe robots will have to be able to perceive their physical environment as humans do.

“In order to make any decision in the world, you need to have a mental model of the environment around you,” says Luca Carlone, assistant professor of aeronautics and astronautics at MIT. “This is something so effortless for humans. But for robots it’s a painfully hard problem, where it’s about transforming pixel values that they see through a camera, into an understanding of the world.”

Now Carlone and his students have developed a representation of spatial perception for robots that is modeled after the way humans perceive and navigate the world.

Podcast: Hello Robot exits stealth; White Castle turns to robotics

The new model, which they call 3D Dynamic Scene Graphs, enables a robot to quickly generate a 3D map of its surroundings that also includes objects and their semantic labels (a chair versus a table, for instance), as well as people, rooms, walls, and other structures that the robot is likely seeing in its environment.

The spatial perception model also allows the robot to extract relevant information from the 3D map, to query the location of objects and rooms, or the movement of people in its path.

“This compressed representation of the environment is useful because it allows our robot to quickly make decisions and plan its path,” Carlone says. “This is not too far from what we do as humans. If you need to plan a path from your home to MIT, you don’t plan every single position you need to take. You just think at the level of streets and landmarks, which helps you plan your route faster.”

Beyond domestic helpers, Carlone says robots that adopt this new kind of spatial perception may also be suited for other high-level jobs, such as working side by side with people on a factory floor or exploring a disaster site for survivors.

He and his students, including lead author and MIT graduate student Antoni Rosinol, will present their findings at the Robotics: Science and Systems virtual conference.

A mapping mix

At the moment, robotic vision and navigation have advanced mainly along two routes: 3D mapping that enables robots to reconstruct their environment in three dimensions as they explore in real time; and semantic segmentation, which helps a robot classify features in its environment as semantic objects, such as a car versus a bicycle, which so far is mostly done on 2D images.

Carlone and Rosinol’s new model of spatial perception is the first to generate a 3D map of the environment in real-time, while also labeling objects, people (which are dynamic, unlike objects), and structures within that 3D map.

The key component of the team’s spatial perception model is Kimera, an open-source library that the team previously developed to simultaneously construct a 3D geometric model of an environment, while encoding the likelihood that an object is, say, a chair versus a desk.

“Like the mythical creature that is a mix of different animals, we wanted Kimera to be a mix of mapping and semantic understanding in 3D,” Carlone says.

Kimera works by taking in streams of images from a robot’s camera, as well as inertial measurements from onboard sensors, to estimate the trajectory of the robot or camera and to reconstruct the scene as a 3D mesh, all in real-time.

To generate a semantic 3D mesh, Kimera uses an existing neural network trained on millions of real-world images, to predict the label of each pixel, and then projects these labels in 3D using a technique known as ray-casting, commonly used in computer graphics for real-time rendering.
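The back-projection step, lifting per-pixel labels into 3D using depth and camera intrinsics, can be sketched in a few lines. The image size, intrinsics, and labels below are invented, and a real pipeline such as Kimera fuses many frames into a mesh rather than producing a single labeled point cloud.

```python
# Hypothetical sketch of label back-projection: per-pixel class labels plus a depth
# image are lifted into a labeled 3D point set using the camera intrinsics.
# Intrinsics, image size, and labels are invented for illustration.
import numpy as np

H, W = 4, 6                                  # tiny "image" for illustration
FX = FY = 5.0                                # assumed focal lengths (pixels)
CX, CY = W / 2, H / 2

depth = np.full((H, W), 2.0)                 # meters; a flat surface 2 m away
labels = np.zeros((H, W), dtype=int)         # 0 = wall
labels[:, 4:] = 1                            # 1 = chair, on the right side of the frame

def backproject(depth, labels):
    """Return (N, 3) 3D points in the camera frame and their per-point labels."""
    v, u = np.indices(depth.shape)
    z = depth
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points, labels.reshape(-1)

pts, lab = backproject(depth, labels)
print(pts.shape, "points;", (lab == 1).sum(), "labeled 'chair'")
```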

The result is a map of a robot’s environment that resembles a dense, three-dimensional mesh, where each face is color-coded as part of the objects, structures, and people within the environment.


A 3D dynamic scene graph of an office environment. The nodes in the graph represent entities in the environment (humans, objects, rooms, structures) while edges represent relations between entities. | Credit: MIT

A layered scene

If a robot were to rely on this mesh alone to navigate through its environment, it would be a computationally expensive and time-consuming task. So the researchers built off Kimera, developing algorithms to construct 3D dynamic “scene graphs” from Kimera’s initial, highly dense, 3D semantic mesh.

Scene graphs are popular computer graphics models that manipulate and render complex scenes, and are typically used in video game engines to represent 3D environments.

In the case of the 3D dynamic scene graphs, the associated algorithms abstract, or break down, Kimera’s detailed 3D semantic mesh into distinct semantic layers, such that a robot can “see” a scene through a particular layer, or lens. The layers progress in hierarchy from objects and people, to open spaces and structures such as walls and ceilings, to rooms, corridors, and halls, and finally whole buildings.
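A scene graph of this kind is, at its core, a layered containment hierarchy that can be queried without touching the dense mesh. Here is a minimal sketch; the layer names and example nodes are illustrative, and the actual 3D dynamic scene graph stores far richer geometry and relations.

```python
# Hypothetical sketch of a layered scene graph: nodes live on named layers
# (building, rooms, structures, objects/agents) and edges express containment,
# so a query can skip the dense mesh entirely. Node names are invented.
from collections import defaultdict

class SceneGraph:
    def __init__(self):
        self.layer_of = {}                      # node -> layer name
        self.parent = {}                        # node -> containing node one layer up
        self.children = defaultdict(list)       # node -> contained nodes one layer down

    def add(self, node, layer, parent=None):
        self.layer_of[node] = layer
        if parent is not None:
            self.parent[node] = parent
            self.children[parent].append(node)

    def ancestor_on(self, node, layer):
        """Walk containment edges upward until a node on the requested layer is found."""
        while node is not None and self.layer_of.get(node) != layer:
            node = self.parent.get(node)
        return node

g = SceneGraph()
g.add("building_1", "building")
g.add("kitchen", "room", parent="building_1")
g.add("counter", "structure", parent="kitchen")
g.add("coffee_cup", "object", parent="counter")
g.add("person_1", "agent", parent="kitchen")

print(g.ancestor_on("coffee_cup", "room"))      # -> kitchen
```

Because the query walks containment edges upward, asking which room contains an object never touches the millions of mesh faces sitting below the object layer.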

Carlone says this layered representation avoids a robot having to make sense of billions of points and faces in the original 3D mesh.

Within the layer of objects and people, the researchers have also been able to develop algorithms that track the movement and the shape of humans in the environment in real time.

The team tested their new model in a photo-realistic simulator, developed in collaboration with MIT Lincoln Laboratory, that simulates a robot navigating through a dynamic office environment filled with people moving around.

“We are essentially enabling robots to have mental models similar to the ones humans use,” Carlone says. “This can impact many applications, including self-driving cars, search and rescue, collaborative manufacturing, and domestic robotics. Another domain is virtual and augmented reality (AR). Imagine wearing AR goggles that run our algorithm: The goggles would be able to assist you with queries such as ‘Where did I leave my red mug?’ and ‘What is the closest exit?’ You can think about it as an Alexa which is aware of the environment around you and understands objects, humans, and their relations.”

“Our approach has just been made possible thanks to recent advances in deep learning and decades of research on simultaneous localization and mapping,” Rosinol says. “With this work, we are making the leap toward a new era of robotic perception called spatial-AI, which is just in its infancy but has great potential in robotics and large-scale virtual and augmented reality.”

Editor’s Note: This article was republished from MIT News.


Flexible robot from MIT can ‘grow’ like a plant to reach in tight spaces



CAMBRIDGE, Mass. — Mobile robots today have little difficulty navigating across relatively open layouts in factories or warehouses as they move materials. However, robotic manipulators often aren’t flexible enough to get a product at the back of a cluttered shelf or to reach around a car engine to unscrew an oil cap.

Engineers at the Massachusetts Institute of Technology have developed a robot designed to extend a chain-like appendage flexible enough to twist and turn in any necessary configuration, yet rigid enough to support heavy loads or apply torque to assemble parts in tight spaces. When the task is complete, the robot can retract the appendage and extend it again, at a different length and shape, to suit the next task.

The appendage design is inspired by the way plants grow, which involves the transport of nutrients, in a fluidized form, up to the plant’s tip. There, they are converted into solid material to produce, bit by bit, a supportive stem.

Likewise, the robot consists of a “growing point,” or gearbox, that pulls a loose chain of interlocking blocks into the box. Gears in the box then lock the chain units together and feed the chain out, unit by unit, as a rigid appendage.

Flexible robot moves sensors to gearbox

The researchers presented the plant-inspired “growing robot” this week at the IEEE International Conference on Intelligent Robots and Systems (IROS) in Macau. They envision that grippers, cameras, and other sensors could be mounted onto the robot’s gearbox, enabling it to meander through an aircraft’s propulsion system and tighten a loose screw, or to reach into a shelf and grab a product without disturbing the organization of surrounding inventory, among other tasks.

“Think about changing the oil in your car,” said Harry Asada, professor of mechanical engineering at MIT. “After you open the engine roof, you have to be flexible enough to make sharp turns, left and right, to get to the oil filter, and then you have to be strong enough to twist the oil filter cap to remove it.”

“Now we have a robot that can potentially accomplish such tasks,” said Tongxi Yan, a former graduate student in Asada’s lab, who led the work. “It can grow, retract, and grow again to a different shape, to adapt to its environment.”

The team also includes MIT graduate student Emily Kamienski and visiting scholar Seiichi Teshigawara, who presented the results at the conference.


The “growing robot” can be programmed to extend in different directions, based on the sequence of chain units that are locked and fed out from the “growing tip,” or gearbox. Source: MIT

The last foot

The design of the new robot is an offshoot of Asada’s work in addressing the “last one-foot problem” — an engineering term referring to the last step, or foot, of a robot’s task or exploratory mission. While a robot may spend most of its time traversing open space, the last foot of its mission may involve more nimble navigation through tighter, more complex spaces to complete a task.

Engineers have devised various concepts and prototypes to address the last one-foot problem, including robots made from soft, balloon-like materials that grow like vines to squeeze through narrow crevices. But, Asada said, such soft extendable robots aren’t sturdy enough to support end effectors such as grippers, cameras, and other sensors needed to carry out a task once the robot has wormed its way to its destination.

“Our solution is not actually soft, but a clever use of rigid materials,” said Asada, who is the Ford Foundation Professor of Engineering.




Chain links for flexible, extendible ‘stem’

Once the team defined the general functional elements of plant growth, they looked to mimic this in a general sense, in an extendable robot.

“The realization of the robot is totally different from a real plant, but it exhibits the same kind of functionality, at a certain abstract level,” Asada said.

The researchers designed a gearbox to represent the robot’s “growing tip,” akin to the bud of a plant, where, as more nutrients flow up to the site, the tip feeds out more rigid stem. Within the box, they fit a system of gears and motors, which works to pull up a fluidized material — in this case, a bendy sequence of 3-D-printed plastic units interlocked with each other, similar to a bicycle chain.

As the chain is fed into the box, it turns around a winch, which feeds it through a second set of motors programmed to lock certain units in the chain to their neighboring units, creating a rigid appendage as it is fed out of the box.

The researchers can program the robot to lock certain units together while leaving others unlocked, to form specific shapes, or to “grow” in certain directions. In experiments, they were able to program the robot to turn around an obstacle as it extended or grew out from its base.
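In other words, a “shape program” for the robot is just a sequence of lock angles, one per chain unit. The sketch below assumes a planar chain, a fixed unit length, and invented angles to show how a locked pattern determines where the tip ends up; it is not the team’s control code.

```python
# Hypothetical sketch of "programming" the chain: each fed-out unit is assigned a
# lock angle (0 = straight, nonzero = bend), and chaining the unit transforms gives
# the appendage's shape. Unit length and the angle sequence are invented values.
import math

UNIT_LENGTH = 0.03   # meters per chain unit (assumed)

def appendage_shape(lock_angles_deg):
    """Return the 2D positions of unit tips for a given sequence of lock angles."""
    x, y, heading = 0.0, 0.0, 0.0
    tips = []
    for angle in lock_angles_deg:
        heading += math.radians(angle)          # each locked joint adds a fixed bend
        x += UNIT_LENGTH * math.cos(heading)
        y += UNIT_LENGTH * math.sin(heading)
        tips.append((round(x, 3), round(y, 3)))
    return tips

# Grow straight for 10 units, then curve left around an obstacle for 10 more.
pattern = [0] * 10 + [9] * 10
tip = appendage_shape(pattern)[-1]
print("tip position after 20 units:", tip)
```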

“It can be locked in different places to be curved in different ways, and have a wide range of motions,” Yan said.

When the chain is locked and rigid, it is strong enough to support a 1 lb. weight. If a gripper were attached to the robot’s growing tip, or gearbox, the researchers say the robot could grow long enough to meander through a narrow space, then apply enough torque to loosen a bolt or unscrew a cap.

Auto maintenance is a good example of tasks the robot could assist with, according to Kamienski. “The space under the hood is relatively open, but it’s that last bit where you have to navigate around an engine block or something to get to the oil filter, that a fixed arm wouldn’t be able to navigate around,” she said. “This robot could do something like that.”

This research was partly funded by Japanese bearings maker NSK Ltd.

Editor’s note: This article reprinted with permission from MIT News.


Semantic SLAM navigation targets last-mile delivery robots



Last-mile delivery robots could use an MIT algorithm to find the front door, using environmental clues. | Credit: MIT

In the not too distant future, last-mile delivery robots may be able to drop your takeout order, package, or meal-kit subscription at your doorstep – if they can find the door.

Standard approaches for robotic navigation involve mapping an area ahead of time, then using algorithms to guide a robot toward a specific goal or GPS coordinate on the map. While this approach might make sense for exploring specific environments, such as the layout of a particular building or planned obstacle course, it can become unwieldy in the context of last-mile delivery robots.

Imagine, for instance, having to map in advance every single neighborhood within a robot’s delivery zone, including the configuration of each house within that neighborhood along with the specific coordinates of each house’s front door. Such a task can be difficult to scale to an entire city, particularly as the exteriors of houses often change with the seasons. Mapping every single house could also run into issues of security and privacy.

Now MIT engineers have developed a navigation method that doesn’t require mapping an area in advance. Instead, their approach enables a robot to use clues in its environment to plan out a route to its destination, which can be described in general semantic terms, such as “front door” or “garage,” rather than as coordinates on a map. For example, if a robot is instructed to deliver a package to someone’s front door, it might start on the road and see a driveway, which it has been trained to recognize as likely to lead toward a sidewalk, which in turn is likely to lead to the front door.

Related: Delivery tests combine autonomous vehicles, bipedal robots

The new technique can greatly reduce the time last-mile delivery robots spend exploring a property before identifying its target, and it doesn’t rely on maps of specific residences.

“We wouldn’t want to have to make a map of every building that we’d need to visit,” says Michael Everett, a graduate student in MIT’s Department of Mechanical Engineering. “With this technique, we hope to drop a robot at the end of any driveway and have it find a door.”

Everett presented the group’s results at the International Conference on Intelligent Robots and Systems. The paper, which is co-authored by Jonathan How, professor of aeronautics and astronautics at MIT, and Justin Miller of the Ford Motor Company, is a finalist for “Best Paper for Cognitive Robots.”

“A sense of what things are”

In recent years, researchers have worked on introducing natural, semantic language to robotic systems, training robots to recognize objects by their semantic labels, so they can visually process a door as a door, for example, and not simply as a solid, rectangular obstacle.

“Now we have an ability to give robots a sense of what things are, in real-time,” Everett says.

Everett, How, and Miller are using similar semantic techniques as a springboard for their new navigation approach, which leverages pre-existing algorithms that extract features from visual data to generate a new map of the same scene, represented as semantic clues, or context.

In their case, the researchers used an algorithm to build up a map of the environment as the robot moved around, using the semantic labels of each object and a depth image. This algorithm is called semantic SLAM (Simultaneous Localization and Mapping).
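
A rough way to picture what that mapping step produces: project each labeled depth measurement into a top-down grid, so that every cell stores the semantic label most recently seen there. The sketch below is a simplified illustration of that idea, not the researchers' implementation; the label set, cell size, and scan format are assumptions made for the example.

```python
import numpy as np

# Hypothetical label codes; the real system's label set may differ.
LABELS = {0: "unknown", 1: "road", 2: "driveway", 3: "sidewalk", 4: "front_door"}


def update_semantic_grid(grid, labels, depths, angles, robot_xy, robot_yaw, cell_m=0.25):
    """Fuse one labeled depth scan into a top-down semantic grid.

    grid:   2-D integer array of label codes (0 = unknown)
    labels: per-ray semantic label codes taken from the camera image
    depths: per-ray range measurements in meters
    angles: per-ray bearing in radians, relative to the robot's heading
    """
    for label, depth, angle in zip(labels, depths, angles):
        # Project the labeled point into world coordinates.
        wx = robot_xy[0] + depth * np.cos(robot_yaw + angle)
        wy = robot_xy[1] + depth * np.sin(robot_yaw + angle)
        i, j = int(wy / cell_m), int(wx / cell_m)
        if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
            grid[i, j] = label  # keep the most recent observation


grid = np.zeros((80, 80), dtype=int)
update_semantic_grid(grid,
                     labels=[2, 3, 3], depths=[4.0, 5.5, 6.0],
                     angles=[-0.1, 0.0, 0.1], robot_xy=(10.0, 10.0), robot_yaw=0.0)
```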

While other semantic algorithms have enabled robots to recognize and map objects in their environment for what they are, they haven’t allowed a robot to make decisions in the moment, while navigating a new environment, about the most efficient path to take to a semantic destination such as a “front door.”

“Before, exploring was just, plop a robot down and say ‘go,’ and it will move around and eventually get there, but it will be slow,” How says.

The cost to go

The researchers looked to speed up a robot’s path-planning through a semantic, context-colored world. They developed a new “cost-to-go estimator,” an algorithm that converts a semantic map created by pre-existing SLAM algorithms into a second map, representing the likelihood of any given location being close to the goal.

“This was inspired by image-to-image translation, where you take a picture of a cat and make it look like a dog,” Everett says. “The same type of idea happens here where you take one image that looks like a map of the world, and turn it into this other image that looks like the map of the world but now is colored based on how close different points of the map are to the end goal.”

This cost-to-go map is colorized, in gray-scale, to represent darker regions as locations far from a goal, and lighter regions as areas that are close to the goal. For instance, the sidewalk, coded in yellow in a semantic map, might be translated by the cost-to-go algorithm as a darker region in the new map, compared with a driveway, which is progressively lighter as it approaches the front door — the lightest region in the new map.

The researchers trained this new algorithm on satellite images from Bing Maps containing 77 houses from one urban and three suburban neighborhoods. The system converted a semantic map into a cost-to-go map, and mapped out the most efficient path, following lighter regions in the map, to the end goal. For each satellite image, Everett assigned semantic labels and colors to context features in a typical front yard, such as grey for a front door, blue for a driveway, and green for a hedge.

During this training process, the team also applied masks to each image to mimic the partial view that a robot’s camera would likely have as it traverses a yard.

“Part of the trick to our approach was [giving the system] lots of partial images,” How explains. “So it really had to figure out how all this stuff was interrelated. That’s part of what makes this work robustly.”

The researchers then tested their approach in a simulation of an image of an entirely new house, outside of the training dataset, first using the preexisting SLAM algorithm to generate a semantic map, then applying their new cost-to-go estimator to generate a second map and a path to a goal, in this case the front door.
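
Once a cost-to-go map exists, following it can be as simple as repeatedly stepping toward the lightest neighboring cell. The sketch below is a hypothetical greedy follower over such a grid, included to illustrate the idea rather than reproduce the group's planner; larger values stand in for lighter, closer-to-goal pixels.

```python
import numpy as np


def follow_cost_to_go(cost_to_go, start, max_steps=500):
    """Greedily walk toward lighter (higher-value) cells of a cost-to-go map.

    cost_to_go: 2-D array in which larger values mean "closer to the goal"
    start:      (row, col) of the robot's current cell
    Returns the sequence of cells visited.
    """
    path = [start]
    r, c = start
    for _ in range(max_steps):
        neighbors = [(r + dr, c + dc)
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0)
                     and 0 <= r + dr < cost_to_go.shape[0]
                     and 0 <= c + dc < cost_to_go.shape[1]]
        best = max(neighbors, key=lambda cell: cost_to_go[cell])
        if cost_to_go[best] <= cost_to_go[r, c]:
            break  # no lighter neighbor: treat as having reached the goal region
        r, c = best
        path.append(best)
    return path


# Toy map whose values increase toward a "front door" at the top-right corner.
toy = np.add.outer(np.arange(20)[::-1], np.arange(20)).astype(float)
print(follow_cost_to_go(toy, start=(19, 0))[-1])  # reaches (0, 19)
```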

The group’s new cost-to-go technique found the front door 189 percent faster than classical navigation algorithms, which do not take context or semantics into account, and instead spend excessive steps exploring areas that are unlikely to be near their goal.

Everett says the results illustrate how robots can use context to efficiently locate a goal, even in unfamiliar, unmapped environments.

“Even if a robot is delivering a package to an environment it’s never been to, there might be clues that will be the same as other places it’s seen,” Everett says. “So the world may be laid out a little differently, but there’s probably some things in common.”

This research is supported, in part, by the Ford Motor Company.

Editor’s Note: This article was republished with permission from MIT News.

The post Semantic SLAM navigation targets last-mile delivery robots appeared first on The Robot Report.

Bipedal robot has humanlike balance for running and jumping https://www.therobotreport.com/bipedal-robot-has-humanlike-balance-running-jumping/ https://www.therobotreport.com/bipedal-robot-has-humanlike-balance-running-jumping/#respond Wed, 30 Oct 2019 18:00:33 +0000 https://www.therobotreport.com/?p=103173 Little HERMES, a bipedal robot developed by researchers at MIT and the University of Illinois at Urbana-Champaign, uses a teleoperation method to improve its balance.

The post Bipedal robot has humanlike balance for running and jumping appeared first on The Robot Report.


CAMBRIDGE, Mass. — Although quadruped robots are more stable, humanoid bipedal models could be useful for emergency situations and for moving in spaces designed by and for humans. For instance, rescue robots someday might bound through rubble on all fours, then rise up on two legs to push aside a heavy obstacle or break through a locked door.

Engineers are making strides on the design of four-legged robots and their ability to run, jump, and even do backflips. But getting two-legged robots to exert force or push against something without falling has been a significant stumbling block.

Teleoperated bipedal robot

Engineers at the Massachusetts Institute of Technology and the University of Illinois at Urbana-Champaign have developed a method to control balance in a bipedal, teleoperated robot — an essential step toward enabling a humanoid to carry out high-impact tasks in challenging environments.

The team’s robot, physically resembling a machined torso and two legs, is controlled remotely by a human operator wearing a vest that transmits information about the human’s motion and ground reaction forces to the robot.

Through the vest, the human operator can both direct the robot’s locomotion and feel the robot’s motions. If the robot is starting to tip over, the human feels a corresponding pull on the vest and can adjust in a way to rebalance both herself and, synchronously, the robot.

In experiments with the robot to test this new “balance feedback” approach, the researchers were able to remotely maintain the robot’s balance as it jumped and walked in place in sync with its human operator.

“It’s like running with a heavy backpack — you can feel how the dynamics of the backpack move around you, and you can compensate properly,” said Joao Ramos, who developed the approach during his postdoctoral studies at MIT. “Now if you want to open a heavy door, the human can command the robot to throw its body at the door and push it open, without losing balance.”

Ramos, who is now an assistant professor at the University of Illinois at Urbana-Champaign, has detailed the approach in a study appearing in Science Robotics. His co-author on the study is Sangbae Kim, associate professor of mechanical engineering at MIT.

HERMES mimics more than motion

Previously, Kim and Ramos built the bipedal robot HERMES (for Highly Efficient Robotic Mechanisms and Electromechanical System) and developed methods for it to mimic the motions of an operator via teleoperation, an approach that the researchers say comes with certain humanistic advantages.

“Because you have a person who can learn and adapt on the fly, a robot can perform motions that it’s never practiced before [via teleoperation],” Ramos said.

In demonstrations, HERMES has poured coffee into a cup, wielded an ax to chop wood, and handled an extinguisher to put out a fire.

All these tasks have involved the robot’s upper body and algorithms to match the robot’s limb positioning with that of its operator’s. HERMES was able to carry out high-impact motions because the robot was rooted in place. In these cases, balance was much simpler to maintain. If the robot were required to take any steps, however, it would have likely tipped over in attempting to mimic the operator’s motions.

“We realized in order to generate high forces or move heavy objects, just copying motions wouldn’t be enough, because the robot would fall easily,” Kim said. “We needed to copy the operator’s dynamic balance.”

Enter Little HERMES, a miniature version of HERMES that is about a third the size of an average human adult. The team engineered the robot as simply a torso and two legs, and designed the system specifically to test lower-body tasks, such as locomotion and balance. As with its full-body counterpart, Little HERMES is designed for teleoperation, with an operator suited up in a vest to control the robot’s actions.


The teleoperation interface for the human operator. Credit: Ramos and Kim, Sci. Robot. 4, eaav4282 (2019)

For the bipedal robot to copy the operator’s balance rather than just their motions, the team had to first find a simple way to represent balance. Ramos eventually realized that balance could be stripped down to two main ingredients: a person’s center of mass and their center of pressure — basically, a point on the ground where a force equivalent to all supporting forces is exerted.

The location of the center of mass in relation to the center of pressure, Ramos found, relates directly to how balanced a person is at any given time. He also found that the position of these two ingredients could be physically represented as an inverted pendulum.

Imagine swaying from side to side while staying rooted to the same spot. The effect is similar to the swaying of an upside-down pendulum, the top end representing a human’s center of mass (usually in the torso) and the bottom representing their center of pressure on the ground.
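
In control terms, this is the textbook inverted-pendulum view of balance, often written using the linear inverted pendulum approximation. The relation below is shown as general background, not necessarily the exact formulation used in the paper:

```latex
\ddot{x}_{\mathrm{CoM}} \;\approx\; \frac{g}{z_{\mathrm{CoM}}}\,\bigl(x_{\mathrm{CoM}} - x_{\mathrm{CoP}}\bigr)
```

Here x_CoM is the horizontal position of the center of mass, x_CoP the center of pressure, z_CoM the roughly constant height of the center of mass, and g the gravitational acceleration. The farther the center of mass drifts from the center of pressure, the faster the body topples, which is precisely the relationship the vest is meant to communicate between operator and robot.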

Heavy lifting for bipedal robot

To define how center of mass relates to center of pressure, Ramos gathered human motion data, including measurements in the lab, where he swayed back and forth, walked in place, and jumped on a force plate that measured the forces he exerted on the ground, as the position of his feet and torso were recorded. He then condensed this data into measurements of the center of mass and the center of pressure, and developed a model to represent each in relation to the other, as an inverted pendulum.

He then developed a second model, similar to the model for human balance but scaled to the dimensions of the smaller, lighter robot, and he developed a control algorithm to link and enable feedback between the two models.

The researchers tested this balance feedback model, first on a simple inverted pendulum that they built in the lab, in the form of a beam about the same height as Little HERMES. They connected the beam to their teleoperation system, and it swayed back and forth along a track in response to an operator’s movements.


Joao Ramos teleoperates Little HERMES, which can mimic an operator’s balance to stay upright while running, walking, and jumping in place. Courtesy of researchers Joao Ramos and Sangbae Kim

As the operator swayed to one side, the beam did likewise — a movement that the operator could also feel through the vest. If the beam swayed too far, the operator, feeling the pull, could lean the other way to compensate, and keep the beam balanced.

The experiments showed that the new feedback model could work to maintain balance on the beam, so the researchers then tried the model on Little HERMES. They also developed an algorithm for the bipedal robot to automatically translate the simple model of balance to the forces that each of its feet would have to generate, to copy the operator’s feet.
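
A minimal sketch of that translation step, under heavy simplifying assumptions (two feet on a line and vertical forces only, with names and numbers invented for the example): the desired net force is split between the feet by a lever rule so that the combined center of pressure lands where the balance model asks.

```python
def split_foot_forces(total_force_n, cop_x, left_foot_x, right_foot_x):
    """Distribute a desired net vertical force between two feet so that the
    resulting center of pressure lands at cop_x (positions in meters,
    measured along the line joining the feet).

    Assumes cop_x lies between the feet; outside that range one foot would
    have to pull on the ground, which it cannot do.
    """
    span = right_foot_x - left_foot_x
    w_right = (cop_x - left_foot_x) / span   # lever-rule weighting
    w_right = min(max(w_right, 0.0), 1.0)    # clamp to the physically valid range
    f_right = total_force_n * w_right
    f_left = total_force_n - f_right
    return f_left, f_right


# The operator leans slightly right of center: the right foot takes more load.
print(split_foot_forces(total_force_n=90.0, cop_x=0.06,
                        left_foot_x=-0.15, right_foot_x=0.15))  # about (27.0, 63.0)
```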

In the lab, Ramos found that as he wore the vest, he could not only control the robot’s motions and balance, but he also could feel the robot’s movements. When the robot was struck with a hammer from various directions, Ramos felt the vest jerk in the direction the robot moved. Ramos instinctively resisted the tug, which the robot registered as a subtle shift in the center of mass in relation to center of pressure, which it in turn mimicked. The result was that the robot was able to keep from tipping over, even amidst repeated blows to its body.

Little HERMES also mimicked Ramos in other exercises, including running and jumping in place, and walking on uneven ground, all while maintaining its balance without the aid of tethers or supports.

“Balance feedback is a difficult thing to define because it’s something we do without thinking,” said Kim. “This is the first time balance feedback is properly defined for the dynamic actions. This will change how we control a tele-operated humanoid.”


The Robot Report is launching the Healthcare Robotics Engineering Forum, which will be on Dec. 9-10 in Santa Clara, Calif. The conference and expo will focus on improving the design, development, and manufacture of next-generation healthcare robots. Learn more about the Healthcare Robotics Engineering Forum, and registration is now open.


Future plans for humanoid robots

Kim and Ramos plan to continue developing a full-body humanoid with similar balance control. They said they hope it can one day gallop through a disaster zone and rise up to push away barriers as part of rescue or salvage missions.

“Now we can do heavy door opening or lifting or throwing heavy objects, with proper balance communication,” Kim said.

This research was supported in part by Hon Hai Precision Industry Co. (also known as Foxconn Technology Group) and Naver Labs Corp.

Editor’s note: Article reprinted courtesy of MIT News.

The post Bipedal robot has humanlike balance for running and jumping appeared first on The Robot Report.

Algorithm speeds up planning process for robotic grippers https://www.therobotreport.com/algorithm-speeds-up-planning-process-robotic-grippers/ https://www.therobotreport.com/algorithm-speeds-up-planning-process-robotic-grippers/#respond Sun, 20 Oct 2019 21:30:34 +0000 https://www.therobotreport.com/?p=103065 If you’re at a desk with a pen or pencil handy, try this move: Grab the pen by one end with your thumb and index finger, and push the other end against the desk. Slide your fingers down the pen, then flip it upside down, without letting it drop. Not too hard, right? But for…

The post Algorithm speeds up planning process for robotic grippers appeared first on The Robot Report.


If you’re at a desk with a pen or pencil handy, try this move: Grab the pen by one end with your thumb and index finger, and push the other end against the desk. Slide your fingers down the pen, then flip it upside down, without letting it drop. Not too hard, right?

But for a robot — say, one that’s sorting through a bin of objects and attempting to get a good grasp on one of them — this is a computationally taxing maneuver. Before even attempting the move, it must calculate a litany of properties and probabilities, such as the friction and geometry of the table, the pen, and its two fingers, and how various combinations of these properties interact mechanically, based on fundamental laws of physics.

Now MIT engineers have found a way to significantly speed up the planning process required for a robot to adjust its grasp on an object by pushing that object against a stationary surface. Whereas traditional algorithms would require tens of minutes for planning out a sequence of motions, the new team’s approach shaves this preplanning process down to less than a second.

Alberto Rodriguez, associate professor of mechanical engineering at MIT, says the speedier planning process will enable robots, particularly in industrial settings, to quickly figure out how to push against, slide along, or otherwise use features in their environments to reposition objects in their grasp. Such nimble manipulation is useful for any tasks that involve picking and sorting, and even intricate tool use.

“This is a way to extend the dexterity of even simple robotic grippers, because at the end of the day, the environment is something every robot has around it,” Rodriguez says.

The team’s results are published today in The International Journal of Robotics Research. Rodriguez’ co-authors are lead author Nikhil Chavan-Dafle, a graduate student in mechanical engineering, and Rachel Holladay, a graduate student in electrical engineering and computer science.

Physics in a cone

Rodriguez’ group works on enabling robots to leverage their environment to help them accomplish physical tasks, such as picking and sorting objects in a bin.

Existing algorithms typically take hours to preplan a sequence of motions for a robotic gripper, mainly because, for every motion that it considers, the algorithm must first calculate whether that motion would satisfy a number of physical laws, such as Newton’s laws of motion and Coulomb’s law describing frictional forces between objects.

“It’s a tedious computational process to integrate all those laws, to consider all possible motions the robot can do, and to choose a useful one among those,” Rodriguez says.

He and his colleagues found a compact way to solve the physics of these manipulations, in advance of deciding how the robot’s hand should move. They did so by using “motion cones,” which are essentially visual, cone-shaped maps of friction.

The inside of the cone depicts all the pushing motions that could be applied to an object in a specific location, while satisfying the fundamental laws of physics and enabling the robot to keep hold of the object. The space outside of the cone represents all the pushes that would in some way cause an object to slip out of the robot’s grasp.
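
One common way to represent such a cone in software is as a set of generator vectors: a candidate push lies inside the cone if it can be written as a non-negative combination of those generators. The snippet below is a generic membership test meant only to illustrate that idea; it is not the authors' planner, and the toy generators are invented for the example.

```python
import numpy as np
from scipy.optimize import nnls


def inside_motion_cone(generators, push, tol=1e-6):
    """Check whether a candidate push lies inside a convex cone.

    generators: (d, k) array whose columns are the cone's edge directions
    push:       length-d vector describing the candidate pushing motion
    The push is inside the cone if it is a non-negative combination of the
    generators, tested here with a non-negative least-squares fit.
    """
    _, residual = nnls(generators, push)
    return residual <= tol * max(1.0, np.linalg.norm(push))


# Toy 2-D cone spanning the region between the directions (1, 0) and (1, 1).
G = np.array([[1.0, 1.0],
              [0.0, 1.0]])
print(inside_motion_cone(G, np.array([2.0, 1.0])))   # True: the grasp holds
print(inside_motion_cone(G, np.array([-1.0, 0.5])))  # False: the object would slip
```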

“Seemingly simple variations, such as how hard the robot grasps the object, can significantly change how the object moves in the grasp when pushed,” Holladay explains. “Based on how hard you’re grasping, there will be a different motion. And that’s part of the physical reasoning that the algorithm handles.”

The team’s algorithm calculates a motion cone for different possible configurations between robotic grippers, an object that it is holding, and the environment against which it is pushing, in order to select and sequence different feasible pushes to reposition the object.

“It’s a complicated process but still much faster than the traditional method – fast enough that planning an entire series of pushes takes half a second,” Holladay says.

A robot picking up a block letter, T, and pushing it against a nearby wall to re-angle it, before setting it back down in an upright position. | Credit: MIT

Big plans

The researchers tested the new algorithm on a physical setup with a three-way interaction, in which a simple robotic gripper was holding a T-shaped block and pushing against a vertical bar. They used multiple starting configurations, with the robot gripping the block at a particular position and pushing it against the bar from a certain angle. For each starting configuration, the algorithm instantly generated the map of all the possible forces that the robot could apply and the position of the block that would result.

“We did several thousand pushes to verify our model correctly predicts what happens in the real world,” Holladay says. “If we apply a push that’s inside the cone, the grasped object should remain under control. If it’s outside, the object should slip from the grasp.”

The researchers found that the algorithm’s predictions reliably matched the physical outcome in the lab, planning out sequences of motions — such as reorienting the block against the bar before setting it down on a table in an upright position — in less than a second, compared with traditional algorithms that take over 500 seconds to plan out.

“Because we have this compact representation of the mechanics of this three-way-interaction between robot, object, and their environment, we can now attack bigger planning problems,” Rodriguez says.

The group is hoping to apply and extend its approach to enable robotic grippers to handle different types of tools, for instance in a manufacturing setting.

“Most factory robots that use tools have a specially designed hand, so instead of having the ability to grasp a screwdriver and use it in a lot of different ways, they just make the hand a screwdriver,” Holladay says. “You can imagine that requires less dexterous planning, but it’s much more limiting. We’d like a robot to be able to use and pick lots of different things up.”

Editor’s Note: This article was republished from MIT News.


A new algorithm speeds up the planning process for robotic grippers to manipulate objects. | Credit: MIT

The post Algorithm speeds up planning process for robotic grippers appeared first on The Robot Report.

Robotic thread from MIT could worm its way into brain blood vessels https://www.therobotreport.com/robotic-thread-mit-worm-brain-blood-vessels/ https://www.therobotreport.com/robotic-thread-mit-worm-brain-blood-vessels/#respond Thu, 29 Aug 2019 13:10:42 +0000 https://www.therobotreport.com/?p=102457 Magnetically controlled robotic thread could deliver clot-reducing therapies in response to stroke or other brain blockages, said MIT researchers.

The post Robotic thread from MIT could worm its way into brain blood vessels appeared first on The Robot Report.


CAMBRIDGE, Mass. — Engineers at the Massachusetts Institute of Technology have developed a magnetically steerable, robotic thread that can actively glide through narrow, winding pathways, such as the labyrinthine vasculature of the brain.

In the future, this threadlike robot could be paired with existing endovascular technologies, enabling doctors to remotely guide the robot through a patient’s brain vessels to quickly treat blockages and lesions, such as those that occur in aneurysms and stroke.

“Stroke is the No. 5 cause of death and a leading cause of disability in the United States,” stated Xuanhe Zhao, associate professor of mechanical engineering and of civil and environmental engineering at MIT. “If acute stroke can be treated within the first 90 minutes or so, patients’ survival rates could increase significantly. If we could design a device to reverse blood-vessel blockage within this ‘golden hour,’ we could potentially avoid permanent brain damage. That’s our hope.”

Zhao and his team, including lead author Yoonho Kim, a graduate student in MIT’s Department of Mechanical Engineering, described their soft robotic design in the journal Science Robotics. The paper’s other co-authors are MIT graduate student German Alberto Parada and visiting student Shengduo Liu.

Getting robotic thread to tight spots

To clear blood clots in the brain, doctors often perform an endovascular procedure, a minimally invasive surgery in which a surgeon inserts a thin wire through a patient’s main artery, usually in the leg or groin. Guided by a fluoroscope that simultaneously images the blood vessels using X-rays, the surgeon then manually rotates the wire up into the damaged brain vessel. A catheter can then be threaded up along the wire to deliver drugs or clot-retrieval devices to the affected region.

Kim said the procedure can be physically taxing, requiring surgeons, who must be specifically trained in the task, to endure repeated radiation exposure from fluoroscopy.

“It’s a demanding skill, and there are simply not enough surgeons for the patients, especially in suburban or rural areas,” Kim said.

The medical guidewires used in such procedures are passive, meaning they must be manipulated manually. They are typically made from a core of metallic alloys, coated in polymer, a material that Kim said could potentially generate friction and damage vessel linings if the wire were to get temporarily stuck in a particularly tight space.

The team realized that developments in their lab could help improve such endovascular procedures, both in the design of the guidewire and in reducing doctors’ exposure to any associated radiation.


Xuanhe Zhao, associate professor of mechanical engineering and of civil and environmental engineering at MIT. Source: MIT

Threading a needle

Over the past few years, the team has built up expertise in both hydrogels — biocompatible materials made mostly of water — and 3D-printed magnetically-actuated materials that can be designed to crawl, jump, and even catch a ball, simply by following the direction of a magnet.

In this new paper, the researchers combined their work in hydrogels and in magnetic actuation, to produce a magnetically steerable, hydrogel-coated robotic thread, or guidewire, which they were able to make thin enough to magnetically guide through a life-size silicone replica of the brain’s blood vessels.

The core of the robotic thread is made from nickel-titanium alloy, or “nitinol,” a material that is both bendy and springy. Unlike a clothes hanger, which would retain its shape when bent, a nitinol wire would return to its original shape, giving it more flexibility in winding through tight, tortuous vessels. The team coated the wire’s core in a rubbery paste, or ink, which they embedded throughout with magnetic particles.

Finally, they used a chemical process they developed previously, to coat and bond the magnetic covering with hydrogel — a material that does not affect the responsiveness of the underlying magnetic particles and yet provides the wire with a smooth, friction-free, biocompatible surface.


The Robot Report has launched the Healthcare Robotics Engineering Forum, which will be on Dec. 9-10 in Santa Clara, Calif. The conference and expo focuses on improving the design, development and manufacture of next-generation healthcare robots. Learn more about the Healthcare Robotics Engineering Forum.


Testing and adding functions to robotic thread

They demonstrated the robotic thread’s precision and activation by using a large magnet, much like the strings of a marionette, to steer the thread through an obstacle course of small rings, reminiscent of a thread working its way through the eye of a needle.

The researchers also tested the thread in a life-size silicone replica of the brain’s major blood vessels, including clots and aneurysms, modeled after the CT scans of an actual patient’s brain. The team filled the silicone vessels with a liquid simulating the viscosity of blood, then manually manipulated a large magnet around the model to steer the robot through the vessels’ winding, narrow paths.

Kim said the robotic thread can be functionalized, meaning that features can be added — for example, to deliver clot-reducing drugs or break up blockages with laser light. To demonstrate the latter, the team replaced the thread’s nitinol core with an optical fiber and found that they could magnetically steer the robot and activate the laser once the robot reached a target region.

When the researchers ran comparisons between the robotic thread coated versus uncoated with hydrogel, they found that the hydrogel gave the thread a much-needed, slippery advantage, allowing it to glide through tighter spaces without getting stuck. In an endovascular surgery, this property would be key to preventing friction and injury to vessel linings as the thread works its way through.


Yoonho Kim, a graduate student in MIT’s Department of Mechanical Engineering. Source: MIT

Avoiding radiation

And just how can this new robotic thread keep surgeons radiation-free? Kim said that a magnetically steerable guidewire does away with the necessity for surgeons to physically push a wire through a patient’s blood vessels. This means that doctors also wouldn’t have to be in close proximity to a patient, and more importantly, the radiation-generating fluoroscope.

In the near future, he envisions endovascular surgeries that incorporate existing magnetic technologies, such as pairs of large magnets, the directions of which doctors can manipulate from just outside the operating room, away from the fluoroscope imaging the patient’s brain, or even in an entirely different location.

“Existing platforms could apply magnetic field and do the fluoroscopy procedure at the same time to the patient, and the doctor could be in the other room, or even in a different city, controlling the magnetic field with a joystick,” said Kim. “Our hope is to leverage existing technologies to test our robotic thread in vivo in the next step.”

This research was funded, in part, by the U.S. Office of Naval Research, the MIT Institute for Soldier Nanotechnologies, and the National Science Foundation.

The post Robotic thread from MIT could worm its way into brain blood vessels appeared first on The Robot Report.

MIT algorithm helps robots quickly find objects hidden in dense point clouds https://www.therobotreport.com/mit-algorithm-helps-robots-quickly-find-objects-hidden-in-dense-point-clouds/ https://www.therobotreport.com/mit-algorithm-helps-robots-quickly-find-objects-hidden-in-dense-point-clouds/#respond Thu, 20 Jun 2019 16:35:21 +0000 https://www.therobotreport.com/?p=101707 A new MIT-developed technique enables robots to quickly identify objects hidden in a 3D cloud of data, reminiscent of how some people can make sense of a densely patterned “Magic Eye” image if they observe it in just the right way. Robots typically “see” their environment through sensors that collect and translate a visual scene…

The post MIT algorithm helps robots quickly find objects hidden in dense point clouds appeared first on The Robot Report.


Robots currently attempt to identify objects in a point cloud by comparing a 3D dot representation of an object with a point cloud representation of the real world that may contain that object. | Credit: MIT News

A new MIT-developed technique enables robots to quickly identify objects hidden in a 3D cloud of data, reminiscent of how some people can make sense of a densely patterned “Magic Eye” image if they observe it in just the right way.

Robots typically “see” their environment through sensors that collect and translate a visual scene into a matrix of dots. Think of the world of, well, “The Matrix,” except that the 1s and 0s seen by the fictional character Neo are replaced by dots – lots of dots – whose patterns and densities outline the objects in a particular scene.

Conventional techniques that try to pick out objects from such clouds of dots, or point clouds, can do so with either speed or accuracy, but not both.

With their new technique, the researchers say a robot can accurately pick out an object, such as a small animal, that is otherwise obscured within a dense cloud of dots, within seconds of receiving the visual data. The team says the technique can be used to improve a host of situations in which machine perception must be both speedy and accurate, including driverless cars and robotic assistants in the factory and the home.

“The surprising thing about this work is, if I ask you to find a bunny in this cloud of thousands of points, there’s no way you could do that,” says Luca Carlone, assistant professor of aeronautics and astronautics and a member of MIT’s Laboratory for Information and Decision Systems (LIDS). “But our algorithm is able to see the object through all this clutter. So we’re getting to a level of superhuman performance in localizing objects.”

Carlone and graduate student Heng Yang will present details of the technique later this month at the Robotics: Science and Systems conference in Germany.

“Failing without knowing”

Robots currently attempt to identify objects in a point cloud by comparing a template object – a 3D dot representation of an object, such as a rabbit – with a point cloud representation of the real world that may contain that object. The template image includes “features,” or collections of dots that indicate characteristic curvatures or angles of that object, such as the bunny’s ear or tail. Existing algorithms first extract similar features from the real-life point cloud, then attempt to match those features to the template’s features, and ultimately rotate and align the features to the template to determine if the point cloud contains the object in question.

But the point cloud data that streams into a robot’s sensor invariably includes errors, in the form of dots that are in the wrong position or incorrectly spaced, which can significantly confuse the process of feature extraction and matching. As a consequence, robots can make a huge number of wrong associations, or what researchers call “outliers” between point clouds, and ultimately misidentify objects or miss them entirely.

Carlone says state-of-the-art algorithms are able to sift the bad associations from the good once features have been matched, but they do so in “exponential time,” meaning that even a cluster of processing-heavy computers, sifting through dense point cloud data with existing algorithms, would not be able to solve the problem in a reasonable time. Such techniques, while accurate, are impractical for analyzing larger, real-life datasets containing dense point clouds.

Other algorithms that can quickly identify features and associations do so hastily, creating a huge number of outliers or misdetections in the process, without being aware of these errors.

“That’s terrible if this is running on a self-driving car, or any safety-critical application,” Carlone says. “Failing without knowing you’re failing is the worst thing an algorithm can do.”

MIT’s technique matches objects to those hidden in dense point clouds (left) versus existing techniques (right) that produce incorrect matches. | Credit: MIT News

A relaxed view

Yang and Carlone instead devised a technique that prunes away outliers in “polynomial time,” meaning that it can do so quickly, even for increasingly dense clouds of dots. The technique can thus quickly and accurately identify objects hidden in cluttered scenes.

The researchers first used conventional techniques to extract features of a template object from a point cloud. They then developed a three-step process to match the size, position, and orientation of the object in a point cloud with the template object, while simultaneously identifying good from bad feature associations.

The team developed an “adaptive voting scheme” algorithm to prune outliers and match an object’s size and position. For size, the algorithm makes associations between template and point cloud features, then compares the relative distance between features in a template and corresponding features in the point cloud. If, say, the distance between two features in the point cloud is five times that of the corresponding points in the template, the algorithm assigns a “vote” to the hypothesis that the object is five times larger than the template object.

The algorithm does this for every feature association. Then, the algorithm selects those associations that fall under the size hypothesis with the most votes, and identifies those as the correct associations, while pruning away the others. In this way, the technique simultaneously reveals the correct associations and the relative size of the object represented by those associations. The same process is used to determine the object’s position.
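
The consensus idea behind the voting scheme can be illustrated in a few lines: every pair of putative matches votes for a scale equal to the ratio of its point-cloud distance to its template distance, correct matches pile their votes into the same bin, and outliers scatter across other bins. The sketch below is a deliberately simplified, hypothetical version of that idea, not the published algorithm.

```python
import numpy as np


def vote_for_scale(template_pts, cloud_pts, matches, bin_width=0.05):
    """Estimate object scale by voting over pairwise distance ratios.

    matches: list of (template_index, cloud_index) putative correspondences,
             some of which may be outliers.
    Returns the center of the scale bin that receives the most votes.
    """
    votes = {}
    for a in range(len(matches)):
        for b in range(a + 1, len(matches)):
            ti, ci = matches[a]
            tj, cj = matches[b]
            d_template = np.linalg.norm(template_pts[ti] - template_pts[tj])
            d_cloud = np.linalg.norm(cloud_pts[ci] - cloud_pts[cj])
            if d_template < 1e-9:
                continue  # degenerate pair, casts no vote
            key = round(d_cloud / d_template / bin_width)
            votes[key] = votes.get(key, 0) + 1
    return max(votes, key=votes.get) * bin_width


# Toy example: the object in the "cloud" is the template scaled by 2,
# and the last correspondence is an outlier.
template = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
cloud = np.vstack([2.0 * template, [[9.0, 9.0, 9.0]]])
matches = [(0, 0), (1, 1), (2, 2), (2, 3)]
print(vote_for_scale(template, cloud, matches))  # close to 2.0
```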

The researchers developed a separate algorithm for rotation, which finds the orientation of the template object in three-dimensional space.

Doing this is an incredibly tricky computational task. Imagine holding a mug and trying to tilt it just so, to match a blurry image of something that might be that same mug. There are any number of angles you could tilt that mug, and each of those angles has a certain likelihood of matching the blurry image.

Existing techniques handle this problem by considering each possible tilt or rotation of the object as a “cost” – the lower the cost, the more likely that that rotation creates an accurate match between features. Each rotation and associated cost is represented in a topographic map of sorts, made up of multiple hills and valleys, with lower elevations associated with lower cost.

But Carlone says this can easily confuse an algorithm, especially if there are multiple valleys and no discernible lowest point representing the true, exact match between a particular rotation of an object and the object in a point cloud. Instead, the team developed a “convex relaxation” algorithm that simplifies the topographic map, with one single valley representing the optimal rotation. In this way, the algorithm is able to quickly identify the rotation that defines the orientation of the object in the point cloud.
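
In rough mathematical terms, the rotation step has the shape sketched below. This is written generically (the published method's cost function and relaxation are more sophisticated): search for the rotation that best aligns the matched features under a robust penalty, then replace the hard rotation constraint with a convex set that contains it, so the search landscape has a single valley.

```latex
\min_{R \in \mathrm{SO}(3)} \; \sum_{i} \rho\!\left(\bigl\| q_i - s\,R\,p_i - t \bigr\|\right)
\quad \longrightarrow \quad
\min_{R \in \mathcal{C} \,\supseteq\, \mathrm{SO}(3)} \; \sum_{i} \rho\!\left(\bigl\| q_i - s\,R\,p_i - t \bigr\|\right)
```

Here p_i are the template features, q_i their matched point-cloud features, s and t the scale and position recovered by the voting steps, rho a robust penalty that discounts outliers, and C a convex set containing the rotation group; if the relaxed optimum happens to be a genuine rotation, it also solves the original problem.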

With their approach, the team was able to quickly and accurately identify three different objects – a bunny, a dragon, and a Buddha – hidden in point clouds of increasing density. They were also able to identify objects in real-life scenes, including a living room, in which the algorithm quickly was able to spot a cereal box and a baseball hat.

Carlone says that because the approach is able to work in “polynomial time,” it can be easily scaled up to analyze even denser point clouds, resembling the complexity of sensor data for driverless cars, for example. “Navigation, collaborative manufacturing, domestic robots, search and rescue, and self-driving cars is where we hope to make an impact,” Carlone says.

Editor’s Note: This article was republished from MIT News.

The post MIT algorithm helps robots quickly find objects hidden in dense point clouds appeared first on The Robot Report.

MIT algorithm helps robots better predict human movement https://www.therobotreport.com/mit-algorithm-helps-robots-better-predict-human-movement/ https://www.therobotreport.com/mit-algorithm-helps-robots-better-predict-human-movement/#respond Sat, 15 Jun 2019 14:13:17 +0000 https://www.therobotreport.com/?p=101635 In 2018, researchers at MIT and the auto manufacturer BMW were testing ways in which humans and robots might work in close proximity to assemble car parts. In a replica of a factory floor setting, the team rigged up a robot on rails, designed to deliver parts between work stations. Meanwhile, human workers crossed its…

The post MIT algorithm helps robots better predict human movement appeared first on The Robot Report.


A new algorithm helps robots predict the paths people take in structured environments. | Credit: MIT News

In 2018, researchers at MIT and the auto manufacturer BMW were testing ways in which humans and robots might work in close proximity to assemble car parts. In a replica of a factory floor setting, the team rigged up a robot on rails, designed to deliver parts between work stations. Meanwhile, human workers crossed its path every so often to work at nearby stations.

The robot was programmed to stop momentarily if a person passed by. But the researchers noticed that the robot would often freeze in place, overly cautious, long before a person had crossed its path. If this took place in a real manufacturing setting, such unnecessary pauses could accumulate into significant inefficiencies.

The team traced the problem to a limitation in the trajectory alignment algorithms used by the robot’s motion-predicting software. While the algorithms could reasonably predict where a person was headed, poor time alignment meant they couldn’t anticipate how long that person would spend at any point along their predicted path – and, in this case, how long it would take for a person to stop, then double back and cross the robot’s path again.

Now, members of that same MIT team have come up with a solution: an algorithm that accurately aligns partial trajectories in real-time, allowing motion predictors to accurately anticipate the timing of a person’s motion. When they applied the new algorithm to the BMW factory floor experiments, they found that, instead of freezing in place, the robot simply rolled on and was safely out of the way by the time the person walked by again.

“This algorithm builds in components that help a robot understand and monitor stops and overlaps in movement, which are a core part of human motion,” says Julie Shah, associate professor of aeronautics and astronautics at MIT. “This technique is one of the many ways we’re working on robots better understanding people.”

Shah and her colleagues, including project lead and graduate student Przemyslaw “Pem” Lasota, will present their results this month at the Robotics: Science and Systems conference in Germany.

Clustered up

To enable robots to predict human movements, researchers typically borrow algorithms from music and speech processing. These algorithms are designed to align two complete time series, or sets of related data, such as an audio track of a musical performance and a scrolling video of that piece’s musical notation.

Researchers have used similar alignment algorithms to sync up real-time and previously recorded measurements of human motion, to predict where a person will be, say, five seconds from now. But unlike music or speech, human motion can be messy and highly variable. Even for repetitive movements, such as reaching across a table to screw in a bolt, one person may move slightly differently each time.

Existing algorithms typically take in streaming motion data, in the form of dots representing the position of a person over time, and compare the trajectory of those dots to a library of common trajectories for the given scenario. An algorithm maps a trajectory in terms of the relative distance between dots.

But Lasota says algorithms that predict trajectories based on distance alone can get easily confused in certain common situations, such as temporary stops, in which a person pauses before continuing on their path. While paused, dots representing the person’s position can bunch up in the same spot.

“When you look at the data, you have a whole bunch of points clustered together when a person is stopped,” Lasota says. “If you’re only looking at the distance between points as your alignment metric, that can be confusing, because they’re all close together, and you don’t have a good idea of which point you have to align to.”

The same goes with overlapping trajectories — instances when a person moves back and forth along a similar path. Lasota says that while a person’s current position may line up with a dot on a reference trajectory, existing algorithms can’t differentiate between whether that position is part of a trajectory heading away, or coming back along the same path.

“You may have points close together in terms of distance, but in terms of time, a person’s position may actually be far from a reference point,” Lasota says.

It’s all in the timing

As a solution, Lasota and Shah devised a “partial trajectory” algorithm that aligns segments of a person’s trajectory in real-time with a library of previously collected reference trajectories. Importantly, the new algorithm aligns trajectories in both distance and timing, and in so doing, is able to accurately anticipate stops and overlaps in a person’s path.

“Say you’ve executed this much of a motion,” Lasota explains. “Old techniques will say, ‘this is the closest point on this representative trajectory for that motion.’ But since you only completed this much of it in a short amount of time, the timing part of the algorithm will say, ‘based on the timing, it’s unlikely that you’re already on your way back, because you just started your motion.’”
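
A standard way to implement that kind of reasoning is a dynamic-time-warping-style alignment whose per-step cost blends spatial distance with a timing mismatch, so that samples bunched up during a pause can no longer match arbitrary points on the reference. The sketch below is a generic illustration built on that assumption; it is not the team's partial-trajectory algorithm, and its data and weighting are invented for the example.

```python
import numpy as np


def align_partial(observed, reference, time_weight=0.5):
    """Align an observed partial trajectory against a reference trajectory.

    observed, reference: arrays of shape (n, 3) and (m, 3) holding (x, y, t)
    samples. Each alignment step pays a cost that blends spatial distance
    with a timing mismatch, so pauses and doubling back are disambiguated.
    Returns the reference index best matching the last observed sample.
    """
    n, m = len(observed), len(reference)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, :] = 0.0  # the partial trajectory may start anywhere on the reference
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            spatial = np.linalg.norm(observed[i - 1, :2] - reference[j - 1, :2])
            timing = abs(observed[i - 1, 2] - reference[j - 1, 2])
            step = spatial + time_weight * timing
            cost[i, j] = step + min(cost[i - 1, j],      # observed sample repeats
                                    cost[i, j - 1],      # reference sample skipped
                                    cost[i - 1, j - 1])  # both advance together
    return int(np.argmin(cost[n, 1:]))


# Toy data: the person pauses partway along the path; the timing term keeps
# the alignment from sliding ahead along the reference.
t = np.linspace(0.0, 4.0, 9)
reference = np.column_stack([np.linspace(0.0, 4.0, 9), np.zeros(9), t])
observed = reference[:5].copy()
observed[3:, :2] = observed[2, :2]  # the last samples bunch up at a stop
print(align_partial(observed, reference))
```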

The team tested the algorithm on two human motion datasets: one in which a person intermittently crossed a robot’s path in a factory setting (these data were obtained from the team’s experiments with BMW), and another in which the group previously recorded hand movements of participants reaching across a table to install a bolt that a robot would then secure by brushing sealant on the bolt.

For both datasets, the team’s algorithm was able to make better estimates of a person’s progress through a trajectory, compared with two commonly used partial trajectory alignment algorithms. Furthermore, the team found that when they integrated the alignment algorithm with their motion predictors, the robot could more accurately anticipate the timing of a person’s motion. In the factory floor scenario, for example, they found the robot was less prone to freezing in place, and instead smoothly resumed its task shortly after a person crossed its path.

While the algorithm was evaluated in the context of motion prediction, it can also be used as a preprocessing step for other techniques in the field of human-robot interaction, such as action recognition and gesture detection. Shah says the algorithm will be a key tool in enabling robots to recognize and respond to patterns of human movements and behaviors. Ultimately, this can help humans and robots work together in structured environments, such as factory settings and even, in some cases, the home.

“This technique could apply to any environment where humans exhibit typical patterns of behavior,” Shah says. “The key is that the [robotic] system can observe patterns that occur over and over, so that it can learn something about human behavior. This is all in the vein of work on the robot better understanding aspects of human motion, to be able to collaborate with us better.”

This research was funded, in part, by a NASA Space Technology Research Fellowship and the National Science Foundation.

Editor’s Note: This article was republished with permission from MIT News.

The post MIT algorithm helps robots better predict human movement appeared first on The Robot Report.

MIT mini cheetah is the first four-legged robot to do a backflip https://www.therobotreport.com/mit-mini-cheetah-four-legged-robot-backflip/ https://www.therobotreport.com/mit-mini-cheetah-four-legged-robot-backflip/#respond Mon, 04 Mar 2019 18:55:10 +0000 https://www.therobotreport.com/?p=100689 MIT’s new mini cheetah robot is springy and light on its feet, with a range of motion that rivals a champion gymnast. The four-legged power pack can bend and swing its legs wide, enabling it to walk either right-side up or upside down. The robot can also trot over uneven terrain about twice as fast…

The post MIT mini cheetah is the first four-legged robot to do a backflip appeared first on The Robot Report.


Graduate student Ben Katz (left) and undergraduate student Jared DiCarlo at the MIT Biomimetics Lab with their robot cheetah. (© Bryce Vickmark. All rights reserved. www.vickmark.com 617.448.6758)

MIT’s new mini cheetah robot is springy and light on its feet, with a range of motion that rivals a champion gymnast. The four-legged power pack can bend and swing its legs wide, enabling it to walk either right-side up or upside down. The robot can also trot over uneven terrain about twice as fast as an average person’s walking speed.

Weighing in at just 20 pounds — lighter than some Thanksgiving turkeys — the limber quadruped is no pushover: When kicked to the ground, the robot can quickly right itself with a swift, kung-fu-like swing of its elbows.

Perhaps most impressive is its ability to perform a 360-degree backflip from a standing position. Researchers claim the mini cheetah is designed to be “virtually indestructible,” recovering with little damage, even if a backflip ends in a spill.

In the event that a limb or motor does break, the mini cheetah is designed with modularity in mind. Each of the robot’s legs is powered by three identical, low-cost electric motors that the researchers engineered using off-the-shelf parts. Each motor can easily be swapped out for a new one.

“You could put these parts together, almost like Legos,” said lead developer Benjamin Katz, a technical associate in MIT’s Department of Mechanical Engineering.

The researchers will present the mini cheetah’s design at the International Conference on Robotics and Automation in May. They are currently building more of the four-legged machines, aiming for a set of 10, each of which they hope to loan out to other labs.

“A big part of why we built this robot is that it makes it so easy to experiment and just try crazy things, because the robot is super robust and doesn’t break easily, and if it does break, it’s easy and not very expensive to fix,” said Katz, who worked on the robot in the lab of Sangbae Kim, associate professor of mechanical engineering.

Kim said loaning mini cheetahs out to other research groups gives engineers an opportunity to test out novel algorithms and maneuvers on a highly dynamic robot that they might not otherwise have access to.

“Eventually, I’m hoping we could have a robotic dog race through an obstacle course, where each team controls a mini cheetah with different algorithms, and we can see which strategy is more effective,” Kim said. “That’s how you accelerate research.”

‘Dynamic stuff’

The mini cheetah is more than just a miniature version of its predecessor, Cheetah 3, a large, heavy, formidable robot, which often needs to be stabilized with tethers to protect its expensive, custom-designed parts.

“In Cheetah 3, everything is super integrated, so if you want to change something, you have to do a ton of redesign,” Katz said. “Whereas with the mini cheetah, if you wanted to add another arm, you could just add three or four more of these modular motors.”


MIT’s new mini cheetah robot is springy, light on its feet, and weighs in at just 20 pounds.(© Bryce Vickmark. All rights reserved. www.vickmark.com 617.448.6758)

Katz came up with the electric motor design by reconfiguring the parts to small, commercially available motors normally used in drones and remote-controlled airplanes.

Each of the robot’s 12 motors is about the size of a Mason jar lid, and consists of: a stator, or set of coils, that generates a rotating magnetic field; a small controller that conveys the amount of current the stator should produce; a rotor, lined with magnets, that rotates with the stator’s field, producing torque to lift or rotate a limb; a gearbox that provides a 6:1 gear reduction, enabling the rotor to provide six times the torque that it normally would; and a position sensor that measures the angle and orientation of the motor and associated limb.

Each leg is powered by three motors, to give it three degrees of freedom and a huge range of motion. The lightweight, high-torque, low-inertia design enables the robot to execute fast, dynamic maneuvers and make high-force impacts on the ground without breaking gearboxes or limbs.

“The rate at which it can change forces on the ground is really fast,” Katz said. “When it’s running, its feet are only on the ground for something like 150 milliseconds at a time, during which a computer tells it to increase the force on the foot, then change it to balance, and then decrease that force really fast to lift up. So it can do really dynamic stuff, like jump in the air with every step, or run with two feet on the ground at a time. Most robots aren’t capable of doing this, so they move much slower.”


Flipping out

The engineers ran the mini cheetah through a number of maneuvers, first testing its running ability through the hallways of MIT’s Pappalardo Lab and along the slightly uneven ground of Killian Court.

In both environments, the quadruped bounded along at about 5 miles per hour. The robot’s joints are capable of spinning three times faster, with twice the amount of torque, and Katz estimates the robot could run about twice as fast with a little tuning.

The team wrote another program to direct the robot to stretch and twist in various, yoga-like configurations, showcasing its range of motion and ability to rotate its limbs and joints while maintaining balance. They also programmed the robot to recover from an unexpected force, such as a kick to the side. When the researchers kicked the robot to the ground, it automatically shut down.

“It assumes something terrible has gone wrong, so it just turns off, and all the legs fly wherever they go,” Katz said.

When it receives a signal to restart, the robot first determines its orientation, then performs a preprogrammed crouch or elbow-swing maneuver to right itself on all fours.

Katz and co-author Jared Di Carlo, an undergraduate in the Department of Electrical Engineering and Computer Science (EECS), wondered whether the robot could take on even higher-impact maneuvers. Inspired by a class they took last year, taught by EECS Professor Russ Tedrake, they set about programming the mini cheetah to perform a backflip.

“We thought it would be a good test of robot performance, because it takes a lot of power, torque, and there are huge impacts at the end of a flip,” Katz said.

The team wrote “giant, nonlinear, offline trajectory optimizations” that incorporated the robot’s dynamics and actuator capabilities, and specified a trajectory in which the robot would start out in a certain, right-side-up orientation, and end up flipped 360 degrees. The program they developed then solved for all the torques that needed to be applied to each joint, from each individual motor, and at every time period between start and end, in order to carry out the backflip.
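
For a sense of what even a drastically stripped-down version of such an optimization looks like, the sketch below poses a hypothetical, single-degree-of-freedom pitch model with made-up inertia, timing, and torque limits (nothing like the full robot), and asks an off-the-shelf solver for a torque profile that rotates the body through 360 degrees while penalizing actuator effort.

```python
import numpy as np
from scipy.optimize import minimize

N, DT = 30, 0.02   # knot points and time step (s) for a 0.6-second flip
INERTIA = 0.05     # hypothetical body pitch inertia (kg m^2)


def rollout(torques):
    """Integrate the toy pitch dynamics theta_ddot = tau / I over the flip."""
    theta, omega = 0.0, 0.0
    for tau in torques:
        omega += (tau / INERTIA) * DT
        theta += omega * DT
    return theta, omega


def objective(torques):
    theta, omega = rollout(torques)
    effort = np.sum(np.square(torques)) * DT
    # Penalty terms pull the final state toward a full 360-degree rotation
    # with little residual spin at landing.
    return effort + 1e4 * (theta - 2.0 * np.pi) ** 2 + 1e3 * omega ** 2


result = minimize(objective, x0=np.zeros(N), method="L-BFGS-B",
                  bounds=[(-4.0, 4.0)] * N)  # made-up torque limits in N·m
theta_f, omega_f = rollout(result.x)
print(np.degrees(theta_f), omega_f)  # should land close to 360 degrees with little spin
```

The team's actual problem is far harder, since the full-body dynamics are nonlinear and the flip ends in a hard landing, which is part of why the optimization was run offline.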

“The first time we tried it, it miraculously worked,” Katz said.

“This is super exciting,” Kim added. “Imagine Cheetah 3 doing a backflip — it would crash and probably destroy the treadmill. We could do this with the mini cheetah on a desktop.”


Ben Katz (left) and Jared DiCarlo with their robot cheetah in Cambridge, Mass. (© Bryce Vickmark. All rights reserved. www.vickmark.com 617.448.6758)

The team is building about 10 more mini cheetahs, each of which they plan to loan out to collaborating groups, and Kim intends to form a mini cheetah research consortium of engineers, who can invent, swap, and even compete with new ideas.

Meanwhile, the MIT team is developing another, even higher-impact maneuver.

“We’re working now on a landing controller, the idea being that I want to be able to pick up the robot and toss it, and just have it land on its feet,” Katz said. “Say you wanted to throw the robot into the window of a building and have it go explore inside the building. You could do that.”

Editor’s note: This article by Jennifer Chu was republished with permission of MIT News.

The post MIT mini cheetah is the first four-legged robot to do a backflip appeared first on The Robot Report.
