Academia / Research Archives - The Robot Report

Hello Robot’s Stretch AI toolkit explores embodied intelligence


Hello Robot released an open-source collection of tools, tutorials, and reference code called Stretch AI that empowers developers to explore the future of embodied AI on the Stretch 3 mobile manipulator. Stretch 3, released in February 2024, is gaining traction with university labs as both a platform for AI research and for real-world deployments.

The release comes on the heels of advances in robot utility models, a precursor to embodied AI capabilities. The toolkit’s available learning policies include ACT, VQ-BeT, and Diffusion Policy.

Stretch AI is a powerful toolkit designed to empower researchers and developers to create intelligent behaviors for the Stretch 3 mobile manipulator. This platform offers a range of capabilities, including:

  • Code for precise grasping and manipulation
  • Advanced mapping and navigation techniques
  • Integration with LLM agents for sophisticated decision-making
  • Seamless text-to-speech and speech-to-text functionality
  • Robust visualization and debugging tools to streamline development and testing

Stretch AI integrates open-source AI models, allowing it to accomplish home tasks with natural verbal requests such as “Stretch, pick up the toy, and put it in the bin.” There is a dedicated GitHub repo for Stretch AI.
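
The general shape of such a voice-driven pipeline (speech-to-text, an LLM that turns the utterance into a plan, and learned skills that execute each step) can be sketched in a few lines. The function and skill names below are illustrative stand-ins, not the actual Stretch AI API; see the GitHub repo for the real interface.

```python
# Minimal sketch of a voice-to-action loop, assuming hypothetical skill names.
# The stubs stand in for real ASR, LLM, and policy components.

def speech_to_text(audio: bytes) -> str:
    """Stub: a real system would run an open-source ASR model here."""
    return "Stretch, pick up the toy, and put it in the bin."

def llm_plan(command: str) -> list[tuple[str, str]]:
    """Stub: a real system would prompt an LLM to turn the utterance
    into a sequence of (skill, argument) steps."""
    return [("grasp", "toy"), ("place", "bin")]

SKILLS = {
    "grasp": lambda target: print(f"running grasp policy on '{target}'"),
    "place": lambda target: print(f"running place policy at '{target}'"),
}

def run(audio: bytes) -> None:
    command = speech_to_text(audio)
    for skill, arg in llm_plan(command):
        SKILLS[skill](arg)  # dispatch to a learned policy (e.g., ACT, Diffusion Policy)

run(b"")  # prints the two dispatched steps
```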

“With Stretch AI, we wanted to open up access to the latest Embodied AI techniques and make them available to the fast-growing community of Stretch developers,” said Chris Paxton, senior embodied AI lead at Hello Robot. “We’re moving towards a world where robots can perform complex, multi-step tasks in homes. Stretch AI advances the ability to simply develop autonomous systems such as these using AI.”




Taking AI from labs to living rooms

“Thanks to advances in AI, general-purpose home robots like Stretch are developing faster than expected,” said Hello Robot CEO Aaron Edsinger. “However, it is uncommon to see these robots actually working in real homes with real people. With Stretch AI, roboticists can take their work from the lab and begin developing real applications for realistic home settings.”

Stretch AI offers a distinct vision of the future in which AI-powered robots benefit everyone, including older adults, children, and people with disabilities. “Homes are an inclusive place. To truly succeed in homes, robots, and the AI that powers them, should be made for everyone,” said Edsinger.

Hello Robot said its Stretch mobile manipulator is used by developers in 20 countries, from leading universities to innovative companies. With Stretch AI, Hello Robot invites the research community to collaborate on shaping the future of embodied intelligence.

The Stretch 3 is priced at $24,950 and is available on Hello Robot’s website.


Stretch 3 is portable, lightweight, and designed from the ground up to work around people. | Credit: Hello Robot

Project CETI uses AI and robotics to track down sperm whales


Sperm whales spend, on average, 10 minutes of every hour on the surface, presenting challenges for researchers studying them. | Source: Amanda Cotton/Project CETI

In the chilly waters off the New England coast, researchers from the Cetacean Translation Initiative, Project CETI, can spend hours searching and waiting for an elusive sperm whale to surface. During the minutes the whales spend above water, the researchers need to gather as much information as possible before the animals dive back beneath the surface for long periods.

With one of the widest global distributions of any marine mammal species, these whales are difficult to track down, and even more difficult to learn from. Project CETI aims to use robotics and artificial intelligence to decode the vocalizations of sperm whales. It recently released research about how it tracks the whales down across the open ocean.

“The ocean and the natural habitat of the whales is this vast place where we don’t have a lot of infrastructure, so it’s hard to build infrastructure that will always be able to observe the whales,” said Stephanie Gil, an assistant professor of Computer Science at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and an advisor on the project.

The project brings together some of the world’s leading scientists in biology, linguistics, robotics, and more. The founder of Project CETI, David Gruber, estimated that it’s one of the largest multi-disciplinary research projects active today.

“Project CETI was formed in March 2020, and we’re now over 50 scientists across eight different disciplines,” he said. “I think we’re over 15 institutions, which I believe puts us as one of the most interdisciplinary, large-scale science projects that’s ever been conducted. It’s incredibly rewarding to see so many disciplines working together.”

Project CETI shares latest research

The researchers at the nonprofit organization have developed a reinforcement learning framework that uses autonomous drones to find sperm whales and predict where they will surface. The paper, published in Science Robotics, said it’s possible to predict when and where a whale may surface using various sensor data and predictive models of sperm whale dive behavior.

This new study involved various sensing devices, such as Project CETI aerial drones with very high frequency (VHF) signal sensing capability that use signal phase along with the drone’s motion to emulate an “antenna array in the air” for estimating the direction of pings from CETI’s on-whale tags.

“There are two basic advantages of [VHF signals]. One is that they are really low power, so they can operate for a really, really long time in the field, like months or even years. So, once those small beacons are deployed on the tag, you don’t have to really replace the batteries,” said Ninad Jadhav, a co-author on the paper and a robotics and engineering Ph.D. student at Harvard University.

“The second thing is these signals that these tags transmit, the VHF, are very high-frequency signals,” he added. “They can be detected at really long ranges.”

“That’s a really huge advantage because we never know when the whales will surface or where they will surface, but if they have been tagged before, then you can sense, for example, simple information such as the direction of the signal,” Jadhav told The Robot Report. “You can deploy an algorithm on the robot to detect that, and that gives us an advantage of finding where the whales are on the surface.”
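
A toy example helps make the “antenna array in the air” idea concrete. As the drone moves, the carrier phase of the tag’s ping shifts with the projection of the drone’s position onto the signal’s direction of arrival; scanning candidate bearings for the one that best explains the measured phases recovers the direction to the whale. This is a generic synthetic-aperture sketch with invented numbers, not Project CETI’s actual estimator.

```python
# Toy synthetic-aperture bearing estimate from carrier phase along a flight path.
import numpy as np

WAVELENGTH = 2.0          # meters; roughly a 150 MHz VHF carrier (assumed)
true_bearing = np.deg2rad(40.0)

# Drone positions along a short flight segment (x, y in meters).
t = np.linspace(0.0, 1.0, 30)
positions = np.stack([15.0 * t, 5.0 * np.sin(2 * np.pi * t)], axis=1)

# Simulated phase at each position for a far-field source, plus noise.
u_true = np.array([np.cos(true_bearing), np.sin(true_bearing)])
phase = (2 * np.pi / WAVELENGTH) * positions @ u_true
phase += 0.1 * np.random.default_rng(0).normal(size=phase.shape)

# Grid search: the bearing whose predicted phases line up with the
# measurements maximizes the coherent sum (classic beamforming).
candidates = np.deg2rad(np.arange(0.0, 360.0, 0.5))
scores = []
for theta in candidates:
    u = np.array([np.cos(theta), np.sin(theta)])
    predicted = (2 * np.pi / WAVELENGTH) * positions @ u
    scores.append(np.abs(np.exp(1j * (phase - predicted)).sum()))
estimate = candidates[int(np.argmax(scores))]
print(f"estimated bearing: {np.rad2deg(estimate):.1f} deg")  # ~40.0
```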

Sperm whales present unique challenges for data collection


From left to right: Stephanie Gil, Sushmita Bhattacharya, and Ninad Jadhav. | Source: Stu Rosner

“Sperm whales are only on the surface for about 10 minutes every hour,” said Gil. “Other than that, they’re diving pretty deep in the ocean, so it’s hard to access information about what the whales are actually doing. That makes them somewhat elusive for us and for science.”

“Even we humans have certain patterns day to day. But if you’re actually out observing whales on a particular day, their behavior is not going to exactly align with the models, no matter how much data you’re using to make those models right. So it’s very difficult to really predict with precision when they might be coming up,” she continued.

“You can imagine, if [the scientists are] out on the water for days and days, only having a few encounters with the whales, we’re not being that efficient. So this is to increase our efficiency,” Gruber told The Robot Report.

Once the Project CETI researchers can track down the whales, they must gather as much information as possible during the short windows of time sperm whales spend on the surface.

“Underwater data collection is quite challenging,” said Sushmita Bhattacharya, a co-author on the paper and a computer science and robotics Ph.D. student at Harvard University. “So, what is easier than underwater data collection is to have data collected when they’re at the surface. We can leverage drones or shallow hydrophones and collect as much data as possible.”




Developing the AVATARS framework

At the center of the research is the Autonomous Vehicles for Whale Tracking And Rendezvous by Remote Sensing, or AVATARS framework. AVATARS is the first co-development of VHF sensing and reinforcement learning decision-making for maximizing the rendezvous of robots and whales at sea.

“We tried to build up a model which would kind of mimic [sperm whale] behavior,” Bhattacharya said of AVATARS. “We do this based on the current information that we gather from the sparse data set.”

Being able to predict when and where the whales will surface allowed the researchers to design algorithms for the most efficient route for a drone to rendezvous with—or encounter—a whale at the surface. Designing these algorithms was challenging on many levels, the researchers said.

“Probably the hardest thing is the fact that it is such an uncertain problem. We don’t have certainty at all in [the whales’] positions when they’re underwater, because you can’t track them with GPS when they’re underwater,” Gil said. “You have to think of other ways of trying to track them, for example, by using their acoustic signals and an angle of arrival to their acoustic signals that give you a rough idea of where they are.”

“Ultimately, these algorithms are routing algorithms. So you’re trying to route a team of robots to be at a particular location in the environment, in the world, at a certain given time when it’s necessary to be there,” she told The Robot Report. “So this is analogous to something like rideshare.”
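
In that rideshare-like spirit, a toy version of the routing step might score each predicted surfacing event by whether a drone can reach it in time and how likely the prediction is, then assign drones greedily. The AVATARS paper uses reinforcement learning for this decision-making; the greedy sketch below only shows the structure of the problem, and every name and number in it is invented.

```python
# Toy drone-to-surfacing-event assignment; not the AVATARS algorithm.
import math

DRONE_SPEED = 10.0  # m/s; an assumed cruise speed

def expected_value(drone, event, now):
    """Value of sending this drone to this predicted surfacing event."""
    dist = math.hypot(event["x"] - drone["x"], event["y"] - drone["y"])
    arrival = now + dist / DRONE_SPEED
    if arrival > event["t"] + event["window"]:
        return 0.0                    # drone can't arrive before the whale dives
    return event["p_surface"]         # probability the predicted surfacing happens

def assign(drones, events, now=0.0):
    """Greedily give each drone the best still-unclaimed event."""
    plan, taken = {}, set()
    for drone in drones:
        scored = [(expected_value(drone, e, now), i)
                  for i, e in enumerate(events) if i not in taken]
        if not scored:
            continue
        value, i = max(scored)
        if value > 0.0:
            plan[drone["name"]] = events[i]
            taken.add(i)
    return plan

drones = [{"name": "drone1", "x": 0.0, "y": 0.0}]
events = [
    {"x": 900.0, "y": 0.0, "t": 60.0, "window": 600.0, "p_surface": 0.7},  # reachable
    {"x": 5000.0, "y": 0.0, "t": 30.0, "window": 60.0, "p_surface": 0.9},  # too far
]
print(assign(drones, events))  # drone1 is routed to the reachable event
```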

Before bringing the algorithms into the real world with real whales, the team tested them in a controlled environment with devices the team put together to mimic whales.

“We mimicked the whale using an engineered whale,” recalled Bhattacharya. “So basically we used a speed boat, and it had a loud engine. We used that engine noise to mimic the whale vocalization, and we had it move to mimic whale motion. And then we used that as our ground test.”

Project CETI tests AVATARS in the real world


A customized off-the-shelf drone flying to deploy a whale tag developed by Project CETI researchers. | Source: Project CETI

“Every day was a challenge when we were out on the boat, because this was for me, and my co-author Sushmita, the first time we were deploying real autonomous robots from a boat in the middle of the sea trying to collect some information,” Jadhav said.

“One of the major challenges of working in this environment was the noise in the sensor,” he continued. “As opposed to running experiments in the lab environment, which is more controlled, there are fewer sources of noise that impact your experiments or your sensor data.”

“The other key challenge was deploying the drone itself from the boat,” noted Jadhav. “I remember one instance where this was probably the first or second day of the second expedition that we went on last November, and I had the drone ready. It had the payload. It was waterproof.”

“I had already run experiments here in Boston locally, where I had an estimate of how long the drone would fly with the payload. And then we were out on the boat running some initial tests, and the drone took off,” he said. “It was fine, it was doing its thing, and within a minute of it collecting data, there was a sudden gust of wind. The drone just lost control and crashed in the water.”

The team also had to try to predict and react to whale behavior when performing field tests.

“Our algorithm was designed to handle sensor data from a single whale, but what we ended up seeing is that there were four whales together, who were socializing,” Jadhav said. “They were diving and then surfacing at the same time. So, this was tricky, because then it becomes really hard for us on the algorithm side to understand which whale is sending which acoustic signal and which one we are tracking.”

Team tries to gather data without disturbing wildlife

While Project CETI works closely with sperm whales and other sea life that might be around when the whales surface, it aims to leave the whales undisturbed during data collection.

“The main concern that we care about is that even if we fail, we should not harm the whales,” Bhattacharya said. “So we have to be very careful about respecting the boundaries of those animals. That’s why we are looking at a rendezvous radius. Our goal is to go near the whale and not land on it.”

“Being minimally invasive and invisible is a key part of Project CETI,” said Gruber. “[We’re interested in] how to collect this information without interacting directly with the whale.”

This is why the team works mostly with drones that won’t disturb sea life and with specially developed tags that latch onto the whales and collect data. The CETI team eventually collects these tags, and the valuable data they contain, after they fall off the whales.

“A lot of times, people might think of robotics and autonomy as a scary thing, but this is a really important project to showcase that robots can be used to extend the reach of humans and help us understand our world better,” Gil told The Robot Report.

Project CETI aims to decode whale communications

This latest research is just one step in Project CETI’s overarching goal to decode sperm whale vocalizations. In the short term, the organization plans to ramp up data collection, which will be crucial for the project’s long-term goals.

“Once we have all the algorithms worked out, a future outlook is one where we might have, for example, drone ports in the sea that can deploy robots with sensors around the clock to observe whales when they’re available for observation,” Gil said.

“We envision a team of drones that will essentially meet or visit the whales at the right place, at the right time,” Jadhav said. “So whenever the whales surface, you essentially have a kind of autonomous drone, or autonomous robot, very close to the whale to collect information such as visual information or even acoustic if the drone is equipped with that.”

Outside of Project CETI, organizations could use AVATARS to further protect sperm whales in their natural environments. For example, this information could be used to reroute ships away from sperm whale hot spots, reducing the odds of a ship colliding with a pod of sperm whales.

“The idea is that if we understand more about the whales, more about the whale communities, more about their social structures, then this will also enable and motivate conservation projects and understanding of marine life and how it needs to be protected,” Gil said.

In addition, the researchers said they could apply these methods to other sea mammals that vocalize.

“Here at Project CETI, we’re concerned about sperm whales, but I think this can be generalized to other marine mammals, because a lot of marine mammals vocalize, including humpback whales, other types of whales, and dolphins,” Bhattacharya said.

Clearpath Robotics discusses development of Husky A300 ground vehicle


The Husky A300 is designed to be tougher and have longer endurance than the A200. Source: Clearpath Robotics

Developers of robots for indoor or outdoor use have a new platform to build on. In October, Clearpath Robotics Inc. released the Husky A300, the latest version of its flagship mobile robot for research and development. The Waterloo, Ontario-based company said it has improved the system’s speed, weather resistance, payload capacity, and runtime.

“Husky A200 has been on the market for over 10 years,” said Robbie Edwards, director of technology at Clearpath Robotics. “We have lots of experience figuring out what people want. We’ve had different configurations, upgrades, batteries and chargers, computers, and motors.”

“We’ve also had different configurations of the internal chassis and ingress protection, as well as custom payloads,” he told The Robot Report. “A lot of that functionality that you had to pay to add on is now stock.”

Husky A300 hardware is rugged, faster

The Husky A300 includes a high-torque drivetrain with four brushless motors that enable speeds of up to 2 m/s (4.4 mph), twice as fast as the previous version. It can carry payloads up to 100 kg (220.4 lb.) and has a runtime of up to 12 hours, said Clearpath Robotics.

The company, which Rockwell Automation acquired last year, noted that the platform can integrate third-party components and accessories including depth cameras, directional lidar, dual-antenna GPS, and manipulators. Husky A300 has an IP54 rating against dust and water and can withstand industrial environments or extreme temperatures outdoors, it said. 

“Before, the Husky was configured on a bespoke basis,” said Edwards. “Now we’re [offering it] at a more competitive price, which is great for our customers, and it now comes off our production line instead of our integration line.”

Founded in 2009, the company has tested its hardware and software near its office in a wide range of weather conditions.

Clearpath’s integration with Rockwell has gone smoothly so far, with Rockwell’s procurement team easing access to components and manufacturing, said Edwards. He observed that some of Rockwell’s customers in mining or other industrial automation could find new use cases in time.


The Husky A300 can withstand dust and temperature variances. Source: Clearpath Robotics

Clearpath includes ROS 2 support with A300

Husky A300 ships with Robot Operating System (ROS) 2 Jazzy plus demonstrations of Nav2, MoveIt 2, and other developer utilities.

“Over the past two years, there was a big push to get all Clearpath products to ROS 2 Humble because its configuration management system made life easier for our integration team and customers,” recalled Edwards. “We also provide support for simulation, and URDF [Unified Robot Description Format] is configured.”

Many of Clearpath’s R&D customers were familiar with ROS, C++, and Python, so it offered visualization and simulation tools in addition to the ROS stack, he added. However, as the company got non-expert customers, it wanted to enable them to also work with Husky.

“Academics who aren’t roboticists but want to do data collection can now do so with a simple Python interface, without learning ROS,” Edwards said. “We’ve maintained a level of flexibility with integrating different payloads and compute options while still giving a pretty good price point and usability.”
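
As a rough illustration of what such ROS-free data collection could look like, the sketch below logs pose and battery readings to a CSV file through a simple client object. The client class and its methods are hypothetical stand-ins (a fake in-memory robot), not Clearpath’s actual Python interface; consult the company’s documentation for the real API.

```python
# Hypothetical sketch of ROS-free data collection on a Husky-like platform.
import csv
import time

class FakeHuskyClient:
    """Stand-in for a vendor-provided client so the sketch runs anywhere."""
    def __init__(self):
        self._t0 = time.time()

    def get_pose(self):
        t = time.time() - self._t0
        return {"x": 0.5 * t, "y": 0.0, "heading": 0.0}  # pretend odometry

    def get_battery(self):
        return max(0.0, 100.0 - 0.01 * (time.time() - self._t0))

robot = FakeHuskyClient()
with open("husky_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["stamp", "x", "y", "heading", "battery"])
    writer.writeheader()
    for _ in range(5):  # a real survey would loop for hours
        pose = robot.get_pose()
        writer.writerow({"stamp": time.time(), **pose, "battery": robot.get_battery()})
        time.sleep(0.2)
```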




Husky AMP a ‘turnkey’ option

Clearpath Robotics is offering a “turnkey” version of the robot dubbed Husky AMP, or autonomous mobile platform. It comes with a sensor suite for navigation, pre-installed and configured OutdoorNav software, a Web-based user interface, and an optional wireless charging dock.

“Robotics developers can easily integrate payloads onto the mounting deck, carry out a simple software integration through the OutdoorNav interface, and get their system working in the field faster and more efficiently,” said Clearpath.

“We’ve lowered the barrier to entry by providing all software function calls and a navigation stack,” Edwards asserted. “The RTK [real-time kinematic positioning] GPS is augmented with sensor fusion, including wheel odometry, and visual and lidar sensors.”

“With a waypoint following system, the robotics stack does the path planning, which is constrained and well-tested,” he said. “Non-roboticists can use Husky A300 as a ground drone.”

More robot enhancements, use cases to come

Clearpath Robotics is considering variant drive trains for the Husky A300, such as tracks for softer terrain as in agriculture, said Edwards.

“Husky is a general-purpose platform,” he said. “We’re serving outdoors developers rather than end users directly, but there’s a lot of demand for larger, high-endurance materials transport.”

For the A300, the company surveyed its client base, which came back with 150 use cases.

“I’ve seen lots of cool stuff — robots herding animals, helping to grow plants, working in mines, participating in the DARPA Subterranean Challenge in fleets of Husky and [Boston Dynamics’] Spot,” Edwards said. “Husky Observer conducts inspections of sites such as solar farms.”

“The benefits for industrial users also help researchers,” he said. “Making the robot cheaper to deploy for faster time to value also means better battery life, weatherproofing, and integrations.”

Edwards added that Clearpath has received a lot of interest in mobile manipulation with its Ridgeback omnidirectional platform.

“This trend is finding its way outdoors as well,” he said. “On the application engineering side, developers have put two large Universal Robots arms on our Warthog UGV [uncrewed ground vehicle] for things like changing tires.”


The Husky A300 can carry different sensor payloads or robotic arms. Source: Clearpath Robotics

ASTM developing testing standards for mobile manipulators


The new MC600 is designed for reliable mobile manipulation, says MiR. Source: Mobile Industrial Robots

While humanoid robots are taking their first steps into industrial applications, ASTM International is working on standards for mobile manipulators. Its F45.05 Committee on Robotics, Automation, and Autonomous Systems is developing a standard designated as WK92144.

A mobile manipulator is broadly defined as an autonomous mobile robot (AMR) base with an attached multi-axis robotic arm. The ANSI/RIA R15.08-1-2020 standard defines a classification scheme for industrial mobile robots (IMR), which includes mobile manipulators as a class.

The goal of ASTM’s standard is to demonstrate the precision of such a robot and to provide a series of quantifiable tests for measuring the accuracy of the manipulator’s and mobile base’s movements. The research for the emerging testing standard is based on a National Institute of Standards and Technology (NIST) paper.
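
Such quantifiable tests typically reduce to statistics over repeated trials. The sketch below computes two common metrics, accuracy (offset of the mean attained position from the commanded one) and repeatability (spread of attained positions around their mean), in the style of ISO 9283 robot testing; the trial data are invented, and this is a generic illustration, not the WK92144 procedure itself.

```python
# Generic accuracy/repeatability metrics over repeated positioning trials.
import numpy as np

commanded = np.array([0.500, 0.200, 0.300])  # target position, meters
attained = np.array([                        # positions reached over 5 trials
    [0.503, 0.198, 0.301],
    [0.497, 0.202, 0.299],
    [0.501, 0.199, 0.302],
    [0.498, 0.201, 0.298],
    [0.502, 0.200, 0.300],
])

mean_attained = attained.mean(axis=0)
accuracy = np.linalg.norm(mean_attained - commanded)       # systematic offset
spread = np.linalg.norm(attained - mean_attained, axis=1)
repeatability = spread.mean() + 3.0 * spread.std()         # ISO-style 3-sigma band

print(f"accuracy:      {accuracy * 1000:.2f} mm")
print(f"repeatability: {repeatability * 1000:.2f} mm")
```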




Standards efforts focus on definitions, testing

“We started this standards effort in 2014 or 2015,” said Omar Aboul-Enein, co-author of the NIST paper and an F45 committee member. “In many cases, there’s noncontinuous performance. For example, the arm and vehicle don’t move simultaneously for machine tending or assembly tasks. We needed terminology for discrete pose-constrained tasks.”

In 2022, ASTM International expanded its scope to include robotics and automation, he told The Robot Report. “NIST had tested noncontinuous performance with multiple robots, such as AGVs [automated guided vehicles] and then experimented with continuous systems for large-scale manufacturing, like for aircraft wings, ship bows, and wind turbine blades.”

With R15.08, ASTM has focused on AMR testing, with task groups for mobile manipulation, grasp-type end effectors, and robotic assembly, Aboul-Enein explained. The mobile manipulation group has more than 30 members.

By supporting foundational tests for workpiece properties, ASTM wants to help industry create consistent documentation of robot configurations. Aboul-Enein described a configurable test apparatus for mobile manipulation that uses low-cost components and is designed to be easy to reproduce and allow for in-situ testing.

However, the new standards would not apply to end effectors, payloads, or fleet behavior. They could be used to develop simulations of robots and their behaviors, acknowledged Aboul-Enein.

“It definitely has potential, but there are always factors lurking in the real world, such as a dip in the lab floor or the weight of the arm when it’s fully stretched out to one side,” he said. “We’ve been working on items to assess mobile manipulators and measure their behavior, all based on consensus out of the committee. These standards are living documents.”

ASTM introduces testing table for mobile manipulators

The Robot Report also reached out to Aaron Prather, director of the Robotics and Autonomous Systems Program at ASTM International, for more detail on the WK92144 standard and where it’s headed.

The organization’s F45 committee is introducing a new testing table, a tool that helps show the precision of a mobile manipulator by linking its arm and base movements. The robot must try to maneuver around the table while its arm performs tasks on the surface. These tasks include tracking an S-shaped black area for welding or gluing and inserting pegs.


This image, provided by Prather, is an early prototype built by the F45 team to test the emerging standard. | Credit: Aaron Prather, ASTM

Operators can adjust the tabletop to stand at 90 degrees, tilt to 45 degrees, or lay flat at 0 degrees. To make the tests more challenging, they can attach a shaker that adds motion and vibrations.

“The table design will be standardized, and the committee will provide instructions on how everyone can build their table,” said Prather. “Several test standards are planned based on the table. The goal is to have NIST task boards and this new table be the basis for how we test new grasping/manipulation/assembly applications for accuracy and repeatability.”

“Also, expect to see our new Student Competition Challenges to use the boards and table,” he added. “This will help get students involved in how to use standards and send them out into the community with the knowledge on how to leverage these new test tools we are going to keep launching to ensure new robot systems can pass them.”

“Our hope is that we see humanoids and mobile manipulators having to show their results to help end users better understand capabilities and ensure they are getting the right system for their application,” Prather said.

In just seven years, the global robot density in factories has doubled, IFR finds


Robot density in the manufacturing industry in 2023, according to the IFR. | Source: IFR

Factories worldwide are continuing to adopt more robots, according to the International Federation of Robotics, or IFR. The new global average robot density reached a record 162 units per 10,000 employees in 2023 — more than double the 74-unit average measured only seven years ago.

The Frankfurt, Germany-based IFR noted the growth in its “World Robotics 2024” report.

“Robot density serves as a barometer to track the degree of automation adoption in the manufacturing industry around the world,” stated Takayuki Ito, the IFR’s new president. “This year’s runner-up is China, which ranks third worldwide behind Korea and Singapore, but right up with Germany and Japan.”

Europe leads in regional robot density

When breaking these numbers down by region, the European Union had the highest robot density, with 219 units installed per 10,000 employees. This is an increase of 5.2% from 2022, with Germany, Sweden, Denmark, and Slovenia in the global Top 10. 

North America followed with 197 units per 10,000 employees, up 4.2% from 2022.

Asia has 182 units per 10,000 people employed in manufacturing — an increase of 7.6%. The economies of Korea, Singapore, mainland China, and Japan were among the top ten most automated countries in 2023.

Founded in 1987, IFR aims to connect the world of robotics around the globe. Its institutional members come from the robotics industry, national or international industry associations, and research and development institutes. The non-profit organization directly represents more than 90 members from over 20 countries.




IFR lists countries that are top robot users

The Republic of Korea was the world’s No. 1 adopter of industrial robots in 2023, with 1,012 robots per 10,000 employees. Robot density has increased by 5% in the country on average each year since 2018.

With a world-renowned electronics industry and a strong automotive industry, the Korean economy relies on these two largest customer industries for industrial robots, said the IFR.

Singapore followed with 770 robots per 10,000 employees. It is a small country with a very low number of employees in the manufacturing industry, so it can reach a high density with a relatively small operational stock.

China took third place in 2023, surpassing Germany and Japan for the first time. It has been heavily investing in automation technology in recent years. This investment seems to have paid off, the IFR noted, as the People’s Republic of China reached a high robot density of 470 robots per 10,000 employees, compared with 402 units in 2022.

“China’s massive investment in automation technology has achieved this high robot density despite a huge manufacturing workforce of around 37 million people,” Ito said. “Robot density serves as a useful barometer for comparing the level of automation in manufacturing between countries.”
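
Robot density is simply installed robots per 10,000 manufacturing employees, so the quoted figures can be sanity-checked directly. A quick computation using the article’s numbers for China:

```python
# Cross-check: density of 470 per 10,000 workers over a ~37M workforce.
workforce = 37_000_000
density = 470  # robots per 10,000 employees

operational_stock = density * workforce / 10_000
print(f"implied operational stock: {operational_stock:,.0f} robots")
# -> implied operational stock: 1,739,000 robots
```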

Germany and Japan followed China, in fourth and fifth place respectively. Germany has 429 robots per 10,000 employees, and its robot density has grown by 5% CAGR since 2018.

Japan is close behind with 419 units. The robot density of the world’s predominant robotics manufacturing country has grown by 7% on average each year from 2018 to 2023.

The U.S. reached 295 units per 10,000 employees in 2023, falling just outside of the top 10 by ranking eleventh in the world. While it has been increasing its robot density, the U.S. had 285 units per 10,000 employees in 2022.

The U.S. hasn’t been able to keep up with the rest of the world’s pace. In 2022, it was the 10th most automated country worldwide, and in 2021, it was the ninth most automated country. 

KIMM develops automated mooring system for docking autonomous vessels


Principal researcher Dr. Yongjin Kim (right) and senior researcher Dr. Young-ki Kim (left) from the Department of Reliability at KIMM developed an automated mooring system. | Source: the Korea Institute of Machinery and Materials

The Korea Institute of Machinery and Materials, or KIMM, has developed an automated mooring system to enhance the safety and efficiency of docking operations for autonomous vessels. The institute designed the system to overcome the limitations of conventional wire-based mooring methods. KIMM said it expects the innovation to be commercially available by 2025.

“This automated mooring system represents a key advancement in the safe docking of autonomous vessels and will play a pivotal role in the development of smart port infrastructure,” stated Dr. Yongjin Kim, principal researcher in the Department of Reliability at KIMM. “We expect this solution to set a new standard in operational safety and efficiency across the marine industry.”

The Korea Institute of Machinery and Materials is a non-profit government-funded research institute under the Ministry of Science and Information and Communication Technology. Since its foundation in 1976, KIMM has contributed to South Korea’s economic growth by researching and developing key technologies in machinery and materials, conducting reliability evaluations, and commercializing products.




KIMM aims to make the mooring process safer, faster

Previously, workers secured vessels to the port manually using thick mooring lines. This method required high tensile strength, depending on the ship’s size and weight. If the wire broke, there was a risk of accidents, and the manual mooring process demanded substantial manpower and time.

KIMM said its automated mooring system directly addresses these challenges. It uses vacuum suction pads for secure attachment and a flexible, four-degree-of-freedom hydraulic system for automated control.

The new technology can streamline the mooring process, increasing both speed and accuracy while reducing accident risks and labor needs, according to the researchers.


The fixture used for quantitative evaluation of suction force. Source: KIMM

Korean team receives recognition, prepares for commercialization

Dr. Yongjin Kim led the team at KIMM under President Seog-Hyeon Ryu. Dr. Young-ki Kim served as a senior researcher.

The institute’s project was conducted under the “Development of Smart Port-Autonomous Ship linkage Technology” initiative, supported by Korea’s Ministry of Oceans and Fisheries. For its innovation and impact, the technology has been recognized by the Korea Federation of Mechanical Engineering Societies as one of “Korea’s Top 10 Mechanical Technologies of the Year.”

The final performance will be verified at sea in 2025, after which the technology development will be completed, including efforts to commercialize the system.


The automated ship-mooring platform, from concept to manufactured product. Source: KIMM

RBR50 Spotlight: Robotic ventricle advances understanding of heart disease


Organization: Massachusetts Institute of Technology
Country: U.S.
Website: https://www.mit.edu/
Year Founded: 1861
Number of Employees: 500+
Innovation Class: Technology, Product & Services

MIT engineers developed a robotic replica of the heart’s right ventricle that mimics the beating and blood-pumping action of live hearts. The robotic right ventricle (RRV) combines real heart tissue with synthetic, balloon-like artificial muscles. It allows scientists to control contractions and observe the function of natural valves and structures.

The right ventricle is one of the heart’s four chambers, but its anatomical complexity has made it difficult for clinicians to accurately observe and assess its function in patients with heart disease.

The researchers said the RRV serves as a platform to study right ventricle disorders and test cardiac devices, providing insights into conditions such as right ventricular dysfunction, pulmonary hypertension, and myocardial infarction.

They also said conventional tools often fail to capture the intricate mechanics and dynamics of the right ventricle, leading to potential misdiagnosis and inadequate treatment strategies.

MIT has also demonstrated the model’s potential as a training tool for surgeons and cardiologists by implanting mechanical valves and ring-like medical devices to repair or replace malfunctioning valves. With plans to extend its performance and test implantable devices, the RRV represents a significant advancement in understanding and treating heart disease, said the team.

The RRV currently simulates realistic heart functions over a few months. The team is working to extend that performance and enable the model to run continuously for longer stretches. It is also working with designers of implantable devices to test their prototypes on the artificial ventricle and possibly speed their path to patients.

In the future, the researchers plan to pair the RRV with a similar artificial, functional model of the left ventricle, which they are fine-tuning.

Explore the RBR50 Robotics Innovation Awards 2024.





RBR50 Robotics Innovation Awards 2024

Organization | Innovation
ABB Robotics | Modular industrial robot arms offer flexibility
Advanced Construction Robotics | IronBOT makes rebar installation faster, safer
Agility Robotics | Digit humanoid gets feet wet with logistics work
Amazon Robotics | Amazon strengthens portfolio with heavy-duty AGV
Ambi Robotics | AmbiSort uses real-world data to improve picking
Apptronik | Apollo humanoid features bespoke linear actuators
Boston Dynamics | Atlas shows off unique skills for humanoid
Brightpick | Autopicker applies mobile manipulation, AI to warehouses
Capra Robotics | Hircus AMR bridges gap between indoor, outdoor logistics
Dexterity | Dexterity stacks robotics and AI for truck loading
Disney | Disney brings beloved characters to life through robotics
Doosan | App-like Dart-Suite eases cobot programming
Electric Sheep | Vertical integration positions landscaping startup for success
Exotec | Skypod ASRS scales to serve automotive supplier
FANUC | FANUC ships one-millionth industrial robot
Figure | Startup builds working humanoid within one year
Fraunhofer Institute for Material Flow and Logistics | evoBot features unique mobile manipulator design
Gardarika Tres | Develops de-mining robot for Ukraine
Geek+ | Upgrades PopPick goods-to-person system
Glidance | Provides independence to visually impaired individuals
Harvard University | Exoskeleton improves walking for people with Parkinson’s disease
ifm efector | Obstacle Detection System simplifies mobile robot development
igus | ReBeL cobot gets low-cost, human-like hand
Instock | Instock turns fulfillment processes upside down with ASRS
Kodama Systems | Startup uses robotics to prevent wildfires
Kodiak Robotics | Autonomous pickup truck to enhance U.S. military operations
KUKA | Robotic arm leader doubles down on mobile robots for logistics
Locus Robotics | Mobile robot leader surpasses 2 billion picks
MassRobotics Accelerator | Equity-free accelerator positions startups for success
Mecademic | MCS500 SCARA robot accelerates micro-automation
MIT | Robotic ventricle advances understanding of heart disease
Mujin | TruckBot accelerates automated truck unloading
Mushiny | Intelligent 3D sorter ramps up throughput, flexibility
NASA | MOXIE completes historic oxygen-making mission on Mars
Neya Systems | Development of cybersecurity standards hardens AGVs
NVIDIA | Nova Carter gives mobile robots all-around sight
Olive Robotics | EdgeROS eases robotics development process
OpenAI | LLMs enable embedded AI to flourish
Opteran | Applies insect intelligence to mobile robot navigation
Renovate Robotics | Rufus robot automates installation of roof shingles
Robel | Automates railway repairs to overcome labor shortage
Robust AI | Carter AMR joins DHL’s impressive robotics portfolio
Rockwell Automation | Adds OTTO Motors mobile robots to manufacturing lineup
Sereact | PickGPT harnesses power of generative AI for robotics
Simbe Robotics | Scales inventory robotics deal with BJ’s Wholesale Club
Slip Robotics | Simplifies trailer loading/unloading with heavy-duty AMR
Symbotic | Walmart-backed company rides wave of logistics automation demand
Toyota Research Institute | Builds large behavior models for fast robot teaching
ULC Technologies | Cable Splicing Machine improves safety, power grid reliability
Universal Robots | Cobot leader strengthens lineup with UR30

MIT: LucidSim training system helps robots close Sim2Real gap


For roboticists, one challenge towers above all others: generalization – the ability to create machines that can adapt to any environment or condition. Since the 1970s, the field has evolved from writing sophisticated programs to using deep learning, teaching robots to learn directly from human behavior.

But a critical bottleneck remains: data quality. To improve, robots need to encounter scenarios that push the boundaries of their capabilities, operating at the edge of their mastery. This process traditionally requires human oversight, with operators carefully challenging robots to expand their abilities. As robots become more sophisticated, this hands-on approach hits a scaling problem: the demand for high-quality training data far outpaces humans’ ability to provide it.

A team of MIT CSAIL researchers has developed an approach to robot training that could significantly accelerate the deployment of adaptable, intelligent machines in real-world environments. The new system, called “LucidSim,” uses recent advances in generative AI and physics simulators to create diverse and realistic virtual training environments, helping robots achieve expert-level performance in difficult tasks without any real-world data.

LucidSim combines physics simulation with generative AI models, addressing one of the most persistent challenges in robotics: transferring skills learned in simulation to the real world.

“A fundamental challenge in robot learning has long been the ‘sim-to-real gap’ – the disparity between simulated training environments and the complex, unpredictable real world,” said MIT CSAIL postdoctoral associate Ge Yang, a lead researcher on LucidSim. “Previous approaches often relied on depth sensors, which simplified the problem but missed crucial real-world complexities.”

The multi-pronged system is a blend of different technologies. At its core, LucidSim uses large language models to generate various structured descriptions of environments. These descriptions are then transformed into images using generative models. To ensure that these images reflect real-world physics, an underlying physics simulator is used to guide the generation process.

Related: How Agility Robotics closed the Sim2Real gap for Digit

Birth of an idea: from burritos to breakthroughs

The inspiration for LucidSim came from an unexpected place: a conversation outside Beantown Taqueria in Cambridge, MA.

“We wanted to teach vision-equipped robots how to improve using human feedback. But then, we realized we didn’t have a pure vision-based policy to begin with,” said Alan Yu, an undergraduate student at MIT and co-lead on LucidSim. “We kept talking about it as we walked down the street, and then we stopped outside the taqueria for about half an hour. That’s where we had our moment.”




To cook up their data, the team generated realistic images by extracting depth maps, which provide geometric information, and semantic masks, which label different parts of an image, from the simulated scene. They quickly realized, however, that with tight control on the composition of the image content, the model would produce near-identical images from the same prompt. So, they devised a way to source diverse text prompts from ChatGPT.

This approach, however, only resulted in single images. To make short, coherent videos that serve as little “experiences” for the robot, the scientists combined this image generation with another novel technique the team created, called “Dreams In Motion” (DIM). The system computes the movement of each pixel between frames to warp a single generated image into a short, multi-frame video. Dreams In Motion does this by considering the 3D geometry of the scene and the relative changes in the robot’s perspective.
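
The per-pixel warping idea can be illustrated with a generic depth-based reprojection: unproject each pixel to 3D using its depth, shift the points by the camera’s motion, and reproject them into the new view. The sketch below makes simplifying assumptions (pinhole camera, constant depth, nearest-neighbor splatting, no occlusion handling) and is not the authors’ implementation.

```python
# Toy depth-based warp: one generated image plus a camera motion -> next frame.
import numpy as np

H, W, F = 64, 64, 64.0  # image size and focal length (pixels), assumed
cx, cy = W / 2, H / 2

image = np.random.default_rng(0).random((H, W))  # stand-in "generated" image
depth = np.full((H, W), 5.0)                     # flat wall 5 m away

# Unproject every pixel to a 3D point in the current camera frame.
v, u = np.mgrid[0:H, 0:W]
x = (u - cx) / F * depth
y = (v - cy) / F * depth
points = np.stack([x, y, depth], axis=-1)

# Camera moves 0.2 m to the right: points shift left in the new frame.
points_new = points - np.array([0.2, 0.0, 0.0])

# Reproject into the new view and resample (nearest neighbor for brevity).
u_new = np.round(points_new[..., 0] / points_new[..., 2] * F + cx).astype(int)
v_new = np.round(points_new[..., 1] / points_new[..., 2] * F + cy).astype(int)

frame = np.zeros_like(image)
valid = (0 <= u_new) & (u_new < W) & (0 <= v_new) & (v_new < H)
frame[v_new[valid], u_new[valid]] = image[valid]  # forward-splat the pixels
print(f"{valid.mean():.0%} of pixels landed in the next frame")
```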

“We outperform domain randomization, a method developed in 2017 that applies random colors and patterns to objects in the environment, which is still considered the go-to method these days,” says Yu. “While this technique generates diverse data, it lacks realism. LucidSim addresses both diversity and realism problems. It’s exciting that even without seeing the real world during training, the robot can recognize and navigate obstacles in real environments.”

The team is particularly excited about the potential of applying LucidSim to domains outside quadruped locomotion and parkour, its main testbed. One example is mobile manipulation, where a mobile robot is tasked with handling objects in an open area and where color perception is critical.

“Today, these robots still learn from real-world demonstrations,” said Yang. “Although collecting demonstrations is easy, scaling a real-world robot teleoperation setup to thousands of skills is challenging because a human has to physically set up each scene. We hope to make this easier, thus qualitatively more scalable, by moving data collection into a virtual environment.”


MIT researchers used a Unitree Robotics Go1 quadruped. | Credit: MIT CSAIL

The team put LucidSim to the test against an alternative, where an expert teacher demonstrates the skill for the robot to learn from. The results were surprising: robots trained by the expert struggled, succeeding only 15% of the time – and even quadrupling the amount of expert training data barely moved the needle. But when robots collected their own training data through LucidSim, the story changed dramatically. Just doubling the dataset size catapulted success rates to 88%.

“And giving our robot more data monotonically improves its performance – eventually, the student becomes the expert,” said Yang.

“One of the main challenges in sim-to-real transfer for robotics is achieving visual realism in simulated environments,” said Stanford University assistant professor of Electrical Engineering Shuran Song, who wasn’t involved in the research. “The LucidSim framework provides an elegant solution by using generative models to create diverse, highly realistic visual data for any simulation. This work could significantly accelerate the deployment of robots trained in virtual environments to real-world tasks.”

From the streets of Cambridge to the cutting edge of robotics research, LucidSim is paving the way toward a new generation of intelligent, adaptable machines – ones that learn to navigate our complex world without ever setting foot in it.

Yu and Yang wrote the paper with four fellow CSAIL affiliates: mechanical engineering postdoc Ran Choi; undergraduate researcher Yajvan Ravan; John Leonard, Samuel C. Collins Professor of Mechanical and Ocean Engineering in the MIT Department of Mechanical Engineering; and MIT Associate Professor Phillip Isola.

Editor’s Note: This article was republished from MIT CSAIL

The AI Institute introduces Theia vision foundation model to improve robot learning


In the field of robotics, vision-based learning systems are a promising strategy for enabling machines to interpret and interact with their environment, said the AI Institute today. It introduced the Theia vision foundation model to facilitate robot training.

Vision-based learning systems must provide robust representations of the world, allowing robots to understand and respond to their surroundings, said the AI Institute. Traditional approaches typically focus on single-task models—such as classification, segmentation, or object detection—which individually do not encapsulate the diverse understanding of a scene required for robot learning.

This shortcoming highlights the need for a more holistic solution capable of interpreting a broad spectrum of visual cues efficiently, said the Cambridge, Mass.-based institute, which is developing Theia to address this gap.

In a paper published at the Conference on Robot Learning (CoRL), the AI Institute introduced Theia, a model designed to distill the expertise of multiple off-the-shelf vision foundation models (VFMs) into a single model. By combining the strengths of multiple different VFMs, each trained for a specific visual task, Theia generates a richer, unified visual representation that can be used to improve robot learning performance.

Robot policies trained using Theia’s encoder achieved a higher average task success rate of 80.97% when evaluated against 12 robot simulation tasks, a statistically significant improvement over other representation choices.

Furthermore, in real robot experiments, where the institute used behavior cloning to learn robot policies across four multi-step tasks, the trained policy success rate using Theia was on average 15 percentage points higher than policies trained using the next-best representation.


Robot control policies trained with Theia outperform policies trained with alternative representations on MuJoCo robot simulation tasks, with much less computation, measured by the number of Multiply-Accumulate operations in billions (MACs). Source: The AI Institute

Theia designed to combine visual models

Theia’s design is based on a distillation process that integrates the strengths of multiple VFMs such as CLIP (vision language), DINOv2 (dense visual correspondence), and ViT (classification), among others. By carefully selecting and combining these models, Theia is able to produce robust visual representations that can improve downstream robot learning performance, said the AI Institute.

At its core, Theia consists of a visual encoder (backbone) and a set of feature translators, which work in tandem to incorporate the knowledge from multiple VFMs into a unified model. The visual encoder generates latent representations that capture diverse visual insights.

These representations are then processed by the feature translators, which refine them by comparing the output features against ground truth. This comparison serves as a supervisory signal, optimizing Theia’s latent representations to enhance their diversity and accuracy.

These optimized latent representations are subsequently used to fine-tune policy learning models, enabling robots to perform a wide range of tasks with greater accuracy.
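
In code, that training setup amounts to a shared encoder with one lightweight translator head per teacher model, optimized so each head’s output matches the corresponding frozen VFM’s features. The sketch below mirrors that description with placeholder dimensions and random stand-ins for the teacher outputs; it is not the institute’s released implementation.

```python
# Minimal sketch of Theia-style multi-teacher feature distillation.
import torch
import torch.nn as nn

class Student(nn.Module):
    def __init__(self, teacher_dims: dict[str, int], latent_dim: int = 256):
        super().__init__()
        self.encoder = nn.Sequential(        # stand-in visual backbone
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, latent_dim),
        )
        self.translators = nn.ModuleDict({   # one head per teacher VFM
            name: nn.Linear(latent_dim, dim) for name, dim in teacher_dims.items()
        })

    def forward(self, images):
        z = self.encoder(images)             # shared latent representation
        return z, {name: head(z) for name, head in self.translators.items()}

teacher_dims = {"clip": 512, "dinov2": 384, "vit": 768}  # placeholder sizes
model = Student(teacher_dims)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

images = torch.randn(8, 3, 224, 224)  # fake batch
# Frozen teacher outputs would be precomputed; random tensors stand in here.
targets = {n: torch.randn(8, d) for n, d in teacher_dims.items()}

z, predictions = model(images)
loss = sum(nn.functional.mse_loss(predictions[n], targets[n]) for n in teacher_dims)
loss.backward()
opt.step()
print(f"distillation loss: {loss.item():.3f}")
```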

Theia’s design is based on a process that distills the strengths of multiple VFMs, including CLIP, SAM, DINOv2, Depth-Anything, and ViT, among others. Source: The AI Institute

Robots learn in the lab

Researchers at the AI Institute tested Theia in simulation and on a number of robot platforms, including Boston Dynamics’ Spot and a WidowX robot arm. In one round of lab testing, the team used Theia to train a policy enabling a robot to open a small microwave, place toy food inside, and close the microwave door.

Previously, researchers would have needed to combine all the VFMs, which is slow and computationally expensive, or select which VFM to use to represent the scene in front of the robot. For example, they could choose a segmentation image from a segmentation model, a depth image from a depth model, or a text class name from an image classification model. Each provided different types and granularity of information about the scene.

Generally, a single VFM might work well for a single task with known objects but might not be the right choice for other tasks or other robots.

With Theia, the same image from the robot can be fed through the encoder to generate a single representation with all the key information. That representation can then be input into Theia’s segmentation decoder to output a segmentation image. The same representation can be input into Theia’s depth decoder to output a depth image, and so on.

Each decoder uses the same representation as input because the shared representation possesses the information required to generate all the outputs from the original VFMs. This streamlines the training process and makes actions transferable to a broader range of situations, the researchers said.
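As a standalone toy sketch of that inference pattern, the image below is encoded once and each task head reads the identical latent. The tiny linear layers stand in for Theia's real segmentation and depth decoders and are not its actual architecture.

```python
import torch
import torch.nn as nn

# Toy stand-ins for a shared encoder and two task decoders reading the same latent.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128))
seg_decoder = nn.Linear(128, 64 * 64)    # would emit per-pixel class logits in practice
depth_decoder = nn.Linear(128, 64 * 64)  # would emit per-pixel depth in practice

image = torch.randn(1, 3, 64, 64)
with torch.no_grad():
    z = encoder(image)                     # encode the image once
    seg = seg_decoder(z).view(1, 64, 64)   # the same z drives every decoder
    depth = depth_decoder(z).view(1, 64, 64)

print(z.shape, seg.shape, depth.shape)
```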

While it sounds easy for a person, the microwaving task represents a more complex behavior because it requires successful completion of multiple steps: picking up the object, placing it into the microwave, and closing the microwave door. The policy trained with Theia is among the top performers for each of these steps, comparable only to E-RADIO, another approach which also combines multiple VFMs, although not specifically for robotics applications.

Researchers used Theia to train a policy enabling a robot arm to microwave various types of toy food.

Researchers used Theia to train a policy enabling a robot arm to microwave various types of toy food. Source: The AI Institute

Theia prioritizes efficiency

One of Theia’s main advantages over other VFMs is its efficiency, said the AI Institute. Training Theia requires about 150 GPU hours on datasets like ImageNet, reducing the computational resources needed compared to other models.

This high efficiency does not come at the expense of performance, making Theia a practical choice for both research and application. With a smaller model size and reduced need for training data, Theia conserves computational resources during both the training and fine-tuning processes.
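The MACs metric cited in the chart caption earlier can be estimated with a generic forward-hook counter such as the sketch below. This is a standard counting technique rather than the institute's measurement code, and it covers only linear and 2D convolution layers.

```python
import torch
import torch.nn as nn

def count_macs(model: nn.Module, example_input: torch.Tensor) -> int:
    """Rough multiply-accumulate count for Linear and Conv2d layers via forward hooks."""
    total = 0
    hooks = []

    def linear_hook(module, inputs, output):
        nonlocal total
        # Each output element needs in_features multiply-accumulates.
        total += output.numel() * module.in_features

    def conv_hook(module, inputs, output):
        nonlocal total
        kh, kw = module.kernel_size
        macs_per_output = (module.in_channels // module.groups) * kh * kw
        total += output.numel() * macs_per_output

    for m in model.modules():
        if isinstance(m, nn.Linear):
            hooks.append(m.register_forward_hook(linear_hook))
        elif isinstance(m, nn.Conv2d):
            hooks.append(m.register_forward_hook(conv_hook))

    with torch.no_grad():
        model(example_input)
    for h in hooks:
        h.remove()
    return total

# Example on a toy network; real visual encoders land in the billions of MACs.
net = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(16 * 32 * 32, 10),
)
print(count_macs(net, torch.randn(1, 3, 32, 32)) / 1e9, "GMACs")
```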

AI Institute sees transformation in robot learning

Theia enables robots to learn and adapt more quickly and effectively by refining knowledge from multiple vision models into compact representations for classification, segmentation, depth prediction, and other modalities.

While there is still much work to be done before reaching a 100% success rate on complex robotics tasks using Theia or other VFMs, Theia makes progress toward this goal while using less training data and fewer computational resources.

The AI Institute invited researchers and developers to explore Theia and further evaluate its capabilities to improve how robots learn and interpret their environments.

“We’re excited to see how Theia can contribute to both academic research and practical applications in robotics,” the institute said. Visit the AI Institute’s project page and demo page to learn more about Theia.


Humanoid study group launches survey on human-robot interaction https://www.therobotreport.com/humanoid-study-group-launches-survey-on-human-robot-interaction/ Tue, 12 Nov 2024 19:55:27 +0000 The study group needs to determine what aspects of human-robot interaction might impact the development of standards for humanoids.

The survey asks questions such as: “How should humanoid robots respond to accidental/intentional physical contact with people to ensure both safety and utility?”

Product teams need to consider how to safely shut down humanoids in the case of an error or emergency. | Credit: This image was created by Adobe Firefly AI and modified by The Robot Report

The IEEE Humanoids Study Group was formed in June 2024 and initiated the process of evaluating the current safety standards that might impact the safe design and deployment of humanoid robots. The group’s goal is not to develop any standards, but rather to do the homework that will tee up the development of new standards, or the revision of existing standards, to cover the needs of humanoids.

Many companies are currently developing humanoids for commercial use, including Agility Robotics, Apptronik, Boston Dynamics, Figure, and many others. Agility Robotics’ Digit is widely seen as the leader in the clubhouse at the moment because it is actually deployed commercially with Spanx in a tote-moving application.

The group aims to complete its market research and final report by May 2025. Subsequently, standards development organizations (SDOs) will require an additional one to two years to develop and ratify any new standards. Consequently, it will take a minimum of 18 to 30 months from now — roughly six months until the report, plus 12 to 24 months of standards work — before humanoid robots can adhere to the necessary safety standards, a crucial step toward mitigating risks in their deployment.




The Study Group is split into two subgroups: one is actively reviewing all existing robot safety standards, while the second, focused on human-robot interaction (HRI), is evaluating the various use cases for humanoids.

The HRI subgroup has been tasked with determining what aspects of HRI might affect the development and application of standards for humanoids. To that end, it has put together a 12-question survey, divided into three thematic sections. If you would like to contribute to the initiative, please take the survey and answer as many questions as possible; the responses will help determine which aspects of HRI should shape robotics standards for real-world humanoid applications.


Researchers use imitation learning to train surgical robots https://www.therobotreport.com/researchers-use-imitation-learning-to-train-surgical-robots/ Mon, 11 Nov 2024 18:57:54 +0000 Researchers said the successful use of imitation learning to train surgical robots could eliminate the need to program robots with each individual move required during a medical procedure and brings the field of robotic surgery closer to true autonomy.


A robot, trained for the first time by watching videos of seasoned surgeons, executed the same surgical procedures as skillfully as the human doctors.

The successful use of imitation learning to train surgical robots eliminates the need to program robots with each individual move required during a medical procedure and brings the field of robotic surgery closer to true autonomy, where robots could perform complex surgeries without human help.

“It’s really magical to have this model and all we do is feed it camera input and it can predict the robotic movements needed for surgery,” said senior author Axel Krieger, an assistant professor in Johns Hopkins University’s Department of Mechanical Engineering. “We believe this marks a significant step forward toward a new frontier in medical robotics.”

The team, which included Stanford University researchers, used imitation learning to train Intuitive’s da Vinci Surgical System robot to perform three fundamental tasks required in surgical procedures: manipulating a needle, lifting body tissue, and suturing. In each case, the robot trained on the team’s model performed the same surgical procedures as skillfully as human doctors.

The model combined imitation learning with the same machine learning architecture that underpins ChatGPT. However, where ChatGPT works with words and text, this model speaks “robot” with kinematics, a language that breaks down the angles of robotic motion into math.

The researchers fed their model hundreds of videos recorded from wrist cameras placed on the arms of da Vinci robots during surgical procedures. These videos, recorded by surgeons all over the world, are used for post-operative analysis and then archived. Nearly 7,000 da Vinci robots are used worldwide, and more than 50,000 surgeons are trained on the system, creating a large archive of data for robots to “imitate.”

A surgical robot sutures tissue after a procedure. | Credit: Johns Hopkins University

While the da Vinci system is widely used, researchers say it’s notoriously imprecise. But the team found a way to make the flawed input work. The key was training the model to perform relative movements rather than absolute actions, whose recorded positions are inaccurate.
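The team's code isn't included in this article, but the relative-action idea can be illustrated with a simple NumPy sketch: the policy learns per-step deltas instead of absolute positions, so a constant calibration error in the recorded kinematics shifts the trajectory without distorting the motion. A real pipeline would also handle orientation; this toy tracks only 3D position.

```python
import numpy as np

def to_relative_actions(poses: np.ndarray) -> np.ndarray:
    """Convert a trajectory of absolute gripper positions (T, 3) into
    per-step deltas (T-1, 3), so the policy predicts motion, not location."""
    return np.diff(poses, axis=0)

def apply_relative_actions(start: np.ndarray, deltas: np.ndarray) -> np.ndarray:
    """Roll the deltas forward from the robot's current (possibly offset) pose."""
    return start + np.cumsum(deltas, axis=0)

# Demo: the same deltas remain valid even when the robot starts with a small
# calibration offset, which is the point of training on relative movements.
demo_poses = np.array([[0.10, 0.00, 0.20],
                       [0.12, 0.01, 0.19],
                       [0.14, 0.01, 0.17]])
deltas = to_relative_actions(demo_poses)
offset_start = demo_poses[0] + np.array([0.005, -0.003, 0.002])  # imprecise kinematics
replayed = apply_relative_actions(offset_start, deltas)
print(deltas)
print(replayed)  # the same shape of motion, shifted by the start-pose error
```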

“All we need is image input and then this AI system finds the right action,” said lead author Ji Woong “Brian” Kim, a postdoctoral researcher at Johns Hopkins. “We find that even with a few hundred demos, the model is able to learn the procedure and generalize to new environments it hasn’t encountered.”

Added Krieger: “The model is so good at learning things we haven’t taught it. Like if it drops the needle, it will automatically pick it up and continue. This isn’t something I taught it to do.”

The model could be used to quickly train surgical robots to perform any type of surgical procedure, the researchers said. The team is now using imitation learning to train a robot to perform not just small surgical tasks but a full surgery.




Before this advancement, programming a robot to perform even a simple aspect of a surgery required hand-coding every step. Someone might spend a decade trying to model suturing, Krieger said. And that’s suturing for just one type of surgery.

“It’s very limiting,” Krieger said. “What is new here is we only have to collect imitation learning of different procedures, and we can train a robot to learn it in a couple days. It allows us to accelerate to the goal of autonomy while reducing medical errors and achieving more accurate surgery.”

Authors from Johns Hopkins include PhD student Samuel Schmidgall; Associate Research Engineer Anton Deguet; and Associate Professor of Mechanical Engineering Marin Kobilarov. Stanford University authors are PhD student Tony Z. Zhao and Assistant Professor Chelsea Finn.

Editor’s Note: This article was republished from Johns Hopkins University.

Korea University Medicine shares single-port robotic thymectomy comparative results https://www.therobotreport.com/korea-university-medicine-shares-single-port-robotic-thymectomy-comparative-results/ Sun, 10 Nov 2024 15:00:28 +0000 The Korea University team compared and analyzed the perioperative outcomes of 110 cases of robotic thymectomy.

A graphical abstract of the research, including an illustration of propensity score matching. | Source: Korea University Medicine

A joint research team from the Department of Thoracic and Cardiovascular Surgery at the Korea University College of Medicine last week announced comparative results for thymectomy performed with a single-port robotic system.

The team compared and analyzed the perioperative outcomes of 110 cases of single-port robotic thymectomy against conventional video-assisted thoracic surgery (VATS) thymectomy from November 2018 to May 2024. All of the robotic thymectomies were completed successfully without conversion to median sternotomy, and 98% of patients had no major complications.

“Through this research, we have an opportunity for our country’s single-port robotic thoracic surgery to be recognized worldwide,” said Prof. Jin-Wook Hwang, a lead author on the paper. “We expect to have greater results from single-port robotic thoracic surgery in the future.”

In addition, compared with VATS thymectomy, the conversion rate to multi-port surgery was lower (0%), and both the chest tube duration (1.32±0.75 days) and the hospitalization period (2.52±1.00 days) were shorter.

“This research proved that single-port robotic thymectomy can overcome the limitations of conventional thymectomy and provide a better environment,” added Prof. Jun-Hee Lee, lead author on the paper. “We will continue to do our best to make sure that Korea University Medicine leads robotic surgery in the thoracic field.”




Korea University team says results are significant

The entire team included Korea University professors Jun-hee Lee, Hyun-koo Kim, Jin-Wook Hwang, and Jae-Ho Chung. They said the study proved that single-port robotic thymectomy using the single-port robotic system is safer than the conventional VATS thymectomy and can overcome the limitations of the previous method.

“Thoracic surgery departments in three hospitals of Korea University Medicine have come up with very significant results, which successfully shed light on the safety and efficiency of the latest technique of single-port robotic thymectomy,” stated Prof. Jae-Ho Chung. “Based on the results, we will continue our best efforts in related clinical studies and research, so that single-port robot surgery can be safely applied to more patients who need thoracic surgery.”

The Korea University researchers said they demonstrated that single-port robotic thymectomy not only provides a safer surgical environment for patients, but also opens the possibility of the single-port robotic method becoming the standard treatment for thymectomy. This lays the foundation for providing patients with a better treatment experience, they said.

“This research suggested the future possibilities of single-port robotic surgery,” Prof. Hyun-Koo Kim noted. “We will continue exerting our utmost efforts to improve the quality of life of more patients by continuously conducting robotic surgery research.”

In addition, the research team at the Department of Thoracic and Cardiovascular Surgery of Korea University Medical Center has applied single-port robotic surgery beyond thymectomy, performing minimally invasive lung cancer and esophageal cancer operations with the same robotic system.

The research results were published in the international academic journal Cancers (MDPI).

From left: Professors Jun-hee Lee, Hyun-koo Kim, Jin-Wook Hwang, and Jae-Ho Chung of the Department of Thoracic and Cardiovascular Surgery. | Source: Korea University College of Medicine

Stanford Robotics Center partners with Stanford HAI for AI research, policy https://www.therobotreport.com/stanford-robotics-center-partners-with-stanford-hai-for-ai-research/ Wed, 06 Nov 2024 16:02:27 +0000 The Stanford Robotics Center and Stanford HAI have partnered to explain rapid advances in artificial intelligence to policymakers.

From left: HAI Co-Director James Landay and Stanford Robotics Lab Director Oussama Khatib. | Source: Madeleine Wright

The Stanford Robotics Center and the Stanford Institute for Human-Centered Artificial Intelligence last week launched a partnership to identify responsible uses for AI. The initiative will involve interdisciplinary research into how humanity can benefit from the latest technological advances and how to ensure those benefits are broadly shared, said the organizations.

The Stanford Institute for Human-Centered Artificial Intelligence (HAI) and Stanford Robotics Center (SRC) said they want to respond to rapid advances in AI and seize the opportunity for AI to accelerate the field of robotics.

Oussama Khatib, director of the SRC, and Fei-Fei Li, John Etchemendy, and James Landay, co-directors of HAI, will co-lead the project. The team aims to address the technical, societal, and economic challenges confronting fields that use robotics, including health and wellness, education, sustainability, and the future of work.

Partners to explain AI, robotics work to policymakers

“We’re thrilled to be collaborating on this exciting venture,” stated Khatib. “We’re just beginning to understand the exciting ways AI will drive robotics to new capabilities, and now is the time to talk about its effective governance.”

The partnership’s stated goals include translating multidisciplinary research into AI-enabled robotics for policymakers, as well as producing original research to equip them with tools to understand and govern the technology.

“We’re watching the robotics field accelerate in ways we’d never dreamed even a few years ago,” Landay said. “I expect to see this technology change almost every industry over the next decade. For this to be done safely, fairly, and successfully, we need to work together now to understand its potential and its dangers.”




More about the Stanford robotics research

The Stanford Robotics Center said it brings together cross-disciplinary, world-class researchers and industrial affiliates with a shared vision of robotics’ future. Its collaborative facility supports large-scale innovative projects for transformative impact on people and the planet.

The SRC has projects in five core areas of study: field robotics, medical robotics, education/culture, the future of work, and domestic robotics. The center said it is working to anticipate and deliver solutions to society's coming needs as the robotics revolution brings these systems more closely into our working and daily lives.

Established in 2019, Stanford HAI seeks to understand and influence the creation and impact of AI. Led by faculty from multiple departments across Stanford University, it is researching AI technologies inspired by human intelligence. It is also studying, forecasting, and guiding the human and societal impact of AI, plus designing and creating AI applications to augment human capabilities. 

The institute said it is educating students and leaders at all stages about a range of AI fundamentals and perspectives. In addition, HAI said it fosters regional and national discussions to lead to direct legislative impact.

Interact Analysis identifies opportunities for motion control and robotics https://www.therobotreport.com/interact-analysis-identifies-opportunities-motion-control-robotics/ Sun, 03 Nov 2024 13:30:31 +0000 Interact Analysis discusses two growth areas for motion control: smart conveyance technology and robots with machine-integrated control.

Due to high interest rates, elevated inventory, and sluggish demand, the global machinery industry has been facing a tough year in 2024. This has affected sales of industrial automation components to machine builders and OEMs, including motion control products, said Interact Analysis.

Despite this, innovative technologies continue to create new opportunities for motion control, attracting new entrants to the market through product launches or partnerships. Interact Analysis here discusses two new growth areas it has identified from its research and conversations with manufacturers: smart conveyance technology and robots with machine-integrated control.

Smart conveyance technology

Smart conveyance is a multi-carrier transport technology available as either linear or planar systems. The market for linear systems has surged over the past three years, with sales revenue growing from $237 million in 2020 to $488 million in 2023.

By 2029, sales of linear systems could exceed $1.1 billion, nearly five times the market size in 2020, according to Interact Analysis. Planar technology is still in its trial period, generating sales of nearly $20 million in 2023.
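Both the historical growth and the 2029 projection reduce to the standard compound-annual-growth-rate formula. As a quick, illustrative check of the rounded figures quoted above:

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by a start and end value."""
    return (end / start) ** (1 / years) - 1

# Linear smart conveyance revenue, using the figures quoted above ($M).
print(f"2020-2023 implied CAGR: {cagr(237, 488, 3):.1%}")            # ~27%
print(f"2023-2029 CAGR needed for $1.1B: {cagr(488, 1100, 6):.1%}")  # ~14-15%
print(f"2029 vs. 2020 multiple: {1100 / 237:.1f}x")                  # 'nearly five times'
```

By this arithmetic, the projected 2023-2029 pace of roughly 14 to 15% per year is well below the 27% pace of 2020-2023, which is consistent with growth moderating from a larger base.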

Since the research firm began tracking smart conveyance market data in 2020, the food, pharmaceutical, and general packaging industries have been the main application markets for such conveyance systems. However, over the past two to three years, the landscape has changed with the rapid penetration of smart conveyance products in the Asian market and a sharp increase in sales from the battery and electronics industries.

Encouraged by the growth momentum, new entrants are rapidly entering the market. As shown in the chart below, the number of suppliers almost doubled in 2023 compared with the year before. As of last month, 12 more companies have launched new linear smart conveyance products in 2024.

Most recently, German rotary indexer and conveyor manufacturer TAKTOMAT presented its new linear smart conveyor system powered by SEW Eurodrive at the Motek trade show. TAKTOMAT’s key clients are primarily from the automotive sector, so the new product is expected to have applications within this industry.

New vendors have not acquired meaningful market share, as the supplier base has not yet consolidated, noted Interact Analysis. However, it said it expects vendors to increase their presence, especially in their local markets: China, Japan, and Europe.

The supplier base for linear smart conveyance systems has expanded, particularly in the Asia-Pacific region, which now counts more than 25 suppliers, versus fewer than 10 in EMEA and fewer than five in the Americas. | Source: Interact Analysis

Beyond this, Interact Analysis has also conducted research with companies planning to launch new smart conveyance technology products within the next few years. Most of them are suppliers of motion control components such as linear motors and servo products, or conveyor manufacturers.

After the cyclical downturn in the machinery industry comes to an end, we expect the revenue and supplier base of smart conveyance systems to see promising growth in Europe and North America, driven by developments in battery manufacturing and warehouse automation.

Naturally, rising demand for smart conveyance technology represents a growing market for motion control products, including servo and direct-drive technologies. Rather than offering smart conveyance systems in their own portfolio, some vendors are supplying key components to system providers.

For example, many automation companies, including Rockwell and Siemens, have partnerships with Planar Motor Inc. (PMI), which makes planar smart conveyance products, to equip PMI systems with servo drives and controllers.




Machine-integrated robots

The term machine-integrated robots refers to robots that are fully integrated into machine control platforms, either by eliminating robot-specific controllers or by retaining robot controllers but integrating the programming platform into the machine systems.

The first approach is more common for those machine-integrated robots currently deployed, which include customized robots made by machine builders or OEMs.

In 2023, global machine-integrated robot shipments reached nearly 20,000 units, of which shipments in the Americas, EMEA, and Asia-Pacific regions accounted for 31%, 41%, and 28% respectively. From 2023 to 2029, the market is projected to grow at a compound annual growth rate (CAGR) of 14.6%.

Compared with the standard industrial robot market — with annual shipments of more than 520,000 units — the machine-integrated robot market is currently much smaller but is expected to grow at a faster rate.

Steady growth is forecast for machine-integrated robots in every region from 2023 to 2029, with the largest gains in EMEA, followed by the Americas and Asia-Pacific. | Source: Interact Analysis

Engineer shortage drives automation demand

The shortage of experienced engineers is one of the major drivers of growth for the machine-integrated robot market. By integrating robot and machine controllers, engineers can control machines and robots in a unified development environment, without using robot programming languages. This helps reduce challenges for both machine builders and end users in finding or training engineers and operators for robotic machines.

OEMs’ motivation to build robots in-house is also fueling the adoption of machine-integrated robots. Machine builders and integrators increasingly have the capability to build mechanical parts for robots, with some OEMs choosing to make robots themselves to save costs.

In customized scenarios, OEMs build special robot kinematics in-house, with a general automation controller enabling the practical integration of OEM-made robots with machines.

New entrants and partnerships are increasing the number of systems available for machine-integrated robots. Robot manufacturers, machine builders, and motion control system suppliers are all actively introducing new products and solutions.

For example, Rockwell Automation partnered with autonox Robotics in 2023, having previously entered a partnership with Atom Robot in late 2022. Now, robot arms from three vendors can be directly equipped with Rockwell PLCs.

Most recently, Siemens confirmed new cooperation agreements with collaborative robot makers Universal Robots and Jaka, further expanding the range of robots that can be directly programmed on its platform.

In the meantime, motion control suppliers also work closely with machine builders to provide solutions for OEM-made robots. For example, SEW offers a Parallel Arm Kinematics Kit to OEMs looking to make their own delta robots.

In China, many packaging machinery manufacturers exhibited machines with picking robots made in-house at the recent CIIF tradeshow.

Motion control has room to grow, finds Interact Analysis

The surging smart conveyance market and the emergence of machine-integrated robots offer new opportunities to motion control suppliers. Driven by the trends of digitalization, flexibility, and ease of use in the manufacturing industry, both technologies are expected to increase their penetration in the machinery industry.

Despite current challenges, many suppliers are preparing strategies for the next growth cycle. Companies with competitive products and solutions will gain an advantage when demand inevitably picks up.

About the author and Interact Analysis

As a research analyst based in China, Samantha Mou provides support in the industrial automation sector. Mou has a master's degree in economics and gained experience conducting market research on industrial equipment and automobile components while working in Germany.

Interact Analysis said each of its team members has more than 15 years of experience in technology and market research. The firm has offices in Irthlingborough, U.K.; Austin, Texas; and Shanghai, China.

Editor’s note: This article was syndicated from Interact Analysis.

RBR50 Spotlight: evoBot offers unique design for autonomous mobile robots https://www.therobotreport.com/rbr50-spotlight-evobot-offers-unique-design-for-autonomous-mobile-robots/ Thu, 31 Oct 2024 20:01:03 +0000 evoBOT from Fraunhofer IML has a seemingly simple design that doesn’t resemble any other mobile robot on the market today.



Organization: Fraunhofer Institute for Material Flow and Logistics
Country: Germany
Website: https://www.iml.fraunhofer.de/en/fields_of_activity/material-flow-systems/iot-and-embedded-systems/evobot.html
Year Founded: 1981
Number of Employees: 500+
Innovation Class: Technology, Product & Services

evoBOT is the kind of robot that stands out the moment you look at it. Its unique and seemingly simple design doesn’t resemble any other autonomous mobile robot (AMR) on the market today. Designed at the Fraunhofer Institute for Material Flow and Logistics (IML), the robot consists of two wheels and gripper arms. If you’re looking at it head-on while it’s zooming around without any cargo, the robot simply looks like an arch with wheels attached at the bottom.

The robot uses these arms to grip boxes by applying pressure from each side, and it maintains balance as it zips across airports. It keeps its balance using a dynamically stable system based on the principle of an inverted pendulum, which means it doesn’t have an external counterweight. Its ability to balance enables it to move on different and uneven surfaces.
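As a rough illustration of that inverted-pendulum principle, the sketch below runs a proportional-derivative feedback loop on linearized two-wheeled pendulum dynamics until the pitch decays to zero. The parameters and gains are invented for demonstration and are not Fraunhofer IML's actual controller.

```python
# Minimal balance sketch: linearized wheeled inverted pendulum with PD control.
# All parameters are illustrative, not evoBOT's real mass properties or gains.
g = 9.81            # gravity (m/s^2)
length = 0.5        # effective pendulum length (m)
dt, steps = 0.005, 1000
kp, kd = 40.0, 8.0  # hand-tuned PD gains on pitch

theta, theta_dot = 0.1, 0.0  # initial tilt of ~5.7 degrees, at rest
for _ in range(steps):
    u = kp * theta + kd * theta_dot        # wheel acceleration command opposing tilt
    theta_ddot = (g * theta - u) / length  # linearized pitch dynamics
    theta_dot += theta_ddot * dt           # Euler integration
    theta += theta_dot * dt

print(f"pitch after {steps * dt:.1f} s: {theta:.5f} rad")  # decays toward zero: balanced
```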

evoBOT can reach a maximum speed of up to 60 kph (37.2 mph) and can transport a load of up to 100 kg (220.4 lb.). It can handle hazardous goods, transport parcels for longer recurring distances, and relieve employees during lifting and overhead work. The mobile manipulator can also procure materials and provide support during the loading and unloading of an aircraft.

Last year, evoBOT completed its first test run at the Munich Airport. There, it performed a practical test in the cargo terminal and on the apron of the airport.

These tests further proved the versatility of Fraunhofer IML’s system, setting it up for potential deployments in numerous industries. evoBOT’s innovative design sets it apart from its AMR counterparts, and it has the capabilities to back that up.

Explore the RBR50 Robotics Innovation Awards 2024.


RBR50 Robotics Innovation Awards 2024

Organization: Innovation

ABB Robotics: Modular industrial robot arms offer flexibility
Advanced Construction Robotics: IronBOT makes rebar installation faster, safer
Agility Robotics: Digit humanoid gets feet wet with logistics work
Amazon Robotics: Amazon strengthens portfolio with heavy-duty AGV
Ambi Robotics: AmbiSort uses real-world data to improve picking
Apptronik: Apollo humanoid features bespoke linear actuators
Boston Dynamics: Atlas shows off unique skills for humanoid
Brightpick: Autopicker applies mobile manipulation, AI to warehouses
Capra Robotics: Hircus AMR bridges gap between indoor, outdoor logistics
Dexterity: Dexterity stacks robotics and AI for truck loading
Disney: Disney brings beloved characters to life through robotics
Doosan: App-like Dart-Suite eases cobot programming
Electric Sheep: Vertical integration positions landscaping startup for success
Exotec: Skypod ASRS scales to serve automotive supplier
FANUC: FANUC ships one-millionth industrial robot
Figure: Startup builds working humanoid within one year
Fraunhofer Institute for Material Flow and Logistics: evoBot features unique mobile manipulator design
Gardarika Tres: Develops de-mining robot for Ukraine
Geek+: Upgrades PopPick goods-to-person system
Glidance: Provides independence to visually impaired individuals
Harvard University: Exoskeleton improves walking for people with Parkinson’s disease
ifm efector: Obstacle Detection System simplifies mobile robot development
igus: ReBeL cobot gets low-cost, human-like hand
Instock: Instock turns fulfillment processes upside down with ASRS
Kodama Systems: Startup uses robotics to prevent wildfires
Kodiak Robotics: Autonomous pickup truck to enhance U.S. military operations
KUKA: Robotic arm leader doubles down on mobile robots for logistics
Locus Robotics: Mobile robot leader surpasses 2 billion picks
MassRobotics Accelerator: Equity-free accelerator positions startups for success
Mecademic: MCS500 SCARA robot accelerates micro-automation
MIT: Robotic ventricle advances understanding of heart disease
Mujin: TruckBot accelerates automated truck unloading
Mushiny: Intelligent 3D sorter ramps up throughput, flexibility
NASA: MOXIE completes historic oxygen-making mission on Mars
Neya Systems: Development of cybersecurity standards hardens AGVs
NVIDIA: Nova Carter gives mobile robots all-around sight
Olive Robotics: EdgeROS eases robotics development process
OpenAI: LLMs enable embedded AI to flourish
Opteran: Applies insect intelligence to mobile robot navigation
Renovate Robotics: Rufus robot automates installation of roof shingles
Robel: Automates railway repairs to overcome labor shortage
Robust AI: Carter AMR joins DHL's impressive robotics portfolio
Rockwell Automation: Adds OTTO Motors mobile robots to manufacturing lineup
Sereact: PickGPT harnesses power of generative AI for robotics
Simbe Robotics: Scales inventory robotics deal with BJ’s Wholesale Club
Slip Robotics: Simplifies trailer loading/unloading with heavy-duty AMR
Symbotic: Walmart-backed company rides wave of logistics automation demand
Toyota Research Institute: Builds large behavior models for fast robot teaching
ULC Technologies: Cable Splicing Machine improves safety, power grid reliability
Universal Robots: Cobot leader strengthens lineup with UR30
