Sensors, machine vision, and feedback for robotic designs
https://www.therobotreport.com/category/technologies/cameras-imaging-vision/

binder introduces M16 connectors with compact design, high sealing performance
https://www.therobotreport.com/binder-introduces-m16-connectors-with-compact-design-high-sealing-performance/
Wed, 04 Dec 2024 13:20:24 +0000
Binder USA has released redesigned M16 connectors designed for reliability and performance in harsh conditions.

binder new modular M16 connectors.

The new M16 connectors have been redesigned to be modular and easier to handle. Source: binder

For demanding environments, Binder USA LP has introduced a new generation of molded M16 connectors, which it said are engineered to deliver reliability and performance even in the harshest conditions. The M16 circular connectors are designed for applications ranging from heavy-duty machinery like construction cranes and excavators to precision-driven laboratory equipment.

These connectors must meet diverse requirements, ensuring stable and reliable connections in extreme conditions, such as freezing temperatures and exposure to dirt and dust. To address these challenges, they must combine high electrical performance with durability and resilience, noted Camarillo, Calif.-based binder.

binder redesigns connectors to be modular

binder said it has completely redesigned its latest generation of molded M16 connectors. The previous version included many existing parts from field-wireable connectors, not all of which were ideal for the molded version, the company explained.

With an expanding portfolio and increasing demand, the company said it decided to fundamentally redesign the product to use a modular system, enabling many common parts between the unshielded and shielded variants.

“A key feature of the new connector design is the reduction in components,” said Sebastian Ader, product manager at binder. “Thanks to the modular system, we only need one additional part for the shielded and unshielded variants. This allows us to produce much more efficiently, offering cost advantages to customers without compromising on quality.”

Developing the new M16 connector was particularly challenging, said binder, because it had to comply with both the M16 standard (DIN EN 61076-2-106) and the stringent AISG standard (for the eight-pin shielded variant) in terms of IP68 sealing and compatibility between different manufacturers.

By optimizing the sealing system, the new M16 system resolves compatibility problems that have previously led to insufficient sealing, the company said. It added that the new generation of connectors is lead-free, meeting the EU RoHS2 Directive 2011/65/EU, including 2015/863/EU.

M16 suitable for industrial, field applications

When redesigning the M16 molded connectors, binder said it paid particular attention to applications in industrial machinery, camera systems, and pressure sensors. These areas require maximum electrical reliability, and therefore a robust connector system that functions under difficult operating conditions, it noted.

“Crane and excavator applications are a good example. Here, fixed-plug connections are required,” said Ader. “Particularly in critical moments, such as when lifting heavy loads, it is important that the connectors not only fit securely, but are also quick and easy to use.”

A triangular design is intended to make the new M16 connectors easy to handle, even in sub-zero temperatures or when wearing gloves.

“The new triangular design not only makes handling easier, but it also minimizes dirt-prone areas and undercuts, which enables use even in very harsh and demanding environments,” Ader said. “The new connectors can be reliably mated, unmated and locked at any time.”

The molded M16 connectors also meet requirements for shock resistance, vibration tolerance, and tightness, said binder. “In summary, the robust design ensures a reliable connection in extreme temperatures, dirt, and moisture, minimizes the risk of failure, and ensures the continuous operational readiness of the machines,” it asserted.

“With the molded M16 connector, we have succeeded in meeting market demands in terms of technical properties, handling, and price,” Ader said. “All this makes our solution a future-proof choice for demanding industrial applications.”

About binder

Binder USA LP is a subsidiary of binder Group, a leading global manufacturer of circular connectors, custom cord sets, and LED lights. The company’s products are used worldwide in industrial environments for factory automation, process control, and medical technology applications.

Binder said its technical innovations meet the highest standards of quality and reliability. The company’s quality management system is ISO 9001 and 14001-certified, but binder said its solution-focused approach to customer applications and commitment to service differentiate it from the competition.

Oxipital AI releases VX2 Vision System for inspection and picking
https://www.therobotreport.com/oxipital-ai-releases-vx2-vision-system-for-inspection-and-picking/
Fri, 29 Nov 2024 13:05:04 +0000
Oxipital AI says its advanced vision system is more compact, delivers greater precision, and is more affordable than its predecessor.

The VX2 Vision System uses AI for food-grade inspection, shown here, says Oxipital AI.

The VX2 Vision System uses AI for food-grade inspection and picking, says Oxipital AI.

Oxipital AI this month launched its VX2 Vision System, which uses artificial intelligence for inspection and high-speed picking applications across food-grade and industrial sectors. Built on the company’s proprietary Visual AI platform, the VX2 comes in a more compact package at a more accessible price than its predecessor.

“At Oxipital AI, we believe that listening to our customers and learning from real-world applications is the key to driving innovation,” said Austin Harvey, vice president of product at Oxipital. “The VX2 is the result of that philosophy in action. It’s smaller, more powerful, and more versatile, enabling our customers to build more resilient manufacturing processes.”

Formerly Soft Robotics, Oxipital is developing machine vision for product inspection and robotic process automation in critical industries such as food processing, agriculture, and consumer goods production.

The Bedford, Mass.-based company’s stated mission is “to deliver actionable insights through deep object understanding to customers as they embrace Industry 5.0 and unlock previously unachievable levels of resiliency, efficiency, and sustainability in their manufacturing operations.”

VX2 Vision System includes several enhancements

Oxipital AI said the VX2 Vision System represents a significant improvement over its first-generation vision platform. The company said it incorporated customer feedback and extensive field learning to meet the evolving needs of the industry.

The VX2 has enhanced capabilities for inspection, high-speed picking, and high-speed picking with inspection, said Oxipital. It asserted that the system ensures optimal efficiency and precision in a wide variety of environments and listed the following benefits:

  • Compact and powerful: The VX2 packs more processing power into a smaller, more efficient design, providing greater flexibility for installations in tight spaces or complex environments, said Oxipital.
  • Versatile application: Designed for food-grade and industrial use, the VX2 excels in inspection tasks, high-speed handling, and combining both, ensuring accuracy and speed in demanding workflows.
  • Enhanced Visual AI platform: Oxipital said its platform delivers faster, more accurate decision-making capabilities, ensuring high-performance, real-time operations.
  • Better price point: Despite significant improvements in power and versatility, the VX2 is available at a more competitive price, said the company. This makes it an attractive option for businesses seeking to upgrade their capabilities without incurring significant costs, it added.

Oxipital AI schematic of its vision technology. The VX2 Vision System continues the company's response to user feedback.
The VX2 Vision System continues Oxipital’s response to user feedback. Source: Oxipital AI

Oxipital AI applies vision to industry needs

With the VX2 launch at PACK EXPO this month, Oxipital said the technology demonstrates its commitment to innovations that address the challenges that industry is currently facing.

“Oxipital AI continues to push the boundaries of what is possible with vision systems in automated environments,” it said. Soft Robotics previously made compliant grippers before pivoting to vision AI.

Oxipital has partnered with Schmalz and Velec, and it was nominated as a PACK EXPO Food and Beverage Technology Excellence Award finalist.

GE HealthCare unveils new applications for mobile C-arm portfolio
https://www.therobotreport.com/ge-healthcare-unveils-new-applications-mobile-c-arm-portfolio/
Mon, 25 Nov 2024 20:28:59 +0000
GE HealthCare said complex pulmonary and thoracic procedures require precise intraoperative imaging systems.

The OEC 3D Imaging System, which is made up of three carts with monitors, and one cart with a large, C shaped device.

The OEC 3D Imaging System. | Source: GE HealthCare

GE HealthCare Technologies Inc. last week announced that it has added new clinical applications to its OEC 3D mobile CBCT C-arm portfolio. The Chicago-based company said the additions will enable precise and efficient imaging during endoscopic bronchoscopy procedures in the practice of interventional pulmonology.

Complex pulmonary and thoracic procedures require precise intraoperative imaging systems, explained GE HealthCare. The position of a nodule can differ from pre-operative CT images, it noted. This happens as a result of differences in respiratory patterns, patient positioning, and other factors, resulting in CT-to-body divergence at the time of the procedure, said the company.

GE HealthCare claimed that its operational electronic chart (OEC) 3D intraoperative mobile cone beam computed tomography (CBCT) offers “imaging excellence” and versatility. It said it can aid in everyday procedures ranging from neuro-spine and orthopedic trauma to interventional procedures such as bronchoscopy.

OEC 3D enables the visualization of both 2D and 3D images of the lung using a single mobile C-arm. The lung suite now includes an augmented fluoroscopy overlay of 3D points of interest and adjustable motorized 3D scans.

OEC interfaces continue to expand

During bronchoscopy procedures, clinicians can use navigation or robotic assistance with the OEC Open interface to automatically transfer 3D volumetric data after reconstruction.

GE HealthCare recently added a verified interface with the Intuitive Ion endoluminal robotic bronchoscopy system. The company said it continues to expand OEC open interfaces for a variety of clinical procedures as an agnostic ecosystem. It’s currently verified with eight third-party systems across robotics, navigation, and augmented reality (AR) vision.

“As we continue to build out our OEC ecosystem, GE HealthCare is excited about the addition of the Intuitive Ion robotic system to our OEC Open interface,” said Christian O’Connor, global general manager for surgery at GE HealthCare. “This interface provides interventional pulmonologists using the OEC 3D C-arm a seamless experience during minimally invasive, robotic-assisted bronchoscopy procedures.”

“With Intuitive’s Ion Robotic Bronchoscopy System now verified to interface with GE HealthCare’s OEC 3D through the OEC Open interface, I believe we can now reach and diagnose almost any nodule in the lung,” stated Dr. Dominique Pepper. She is medical director of bronchoscopy and respiratory care at Providence Swedish South Puget Sound and a consultant for GE HealthCare.

“This is a game-changer for clinicians – this can help us confidently and accurately provide answers when we see a suspicious area of interest,” Pepper said.


About GE HealthCare

GE HealthCare said it is a global medical technology, pharmaceutical diagnostics, and digital solutions innovator. The company said its integrated systems, services, and data analytics can make hospitals more efficient, clinicians more effective, therapies more precise, and patients healthier and happier. It said it is a $19.6 billion business with approximately 51,000 employees worldwide. 

First introduced in 2021, the OEC 3D mobile CBCT C-arm provides precise 3D and 2D imaging in a variety of procedures. During bronchoscopies, clinicians can use CBCT visualization features, such as Lung Preset, to help optimize viewing of airway structures and Augmented Fluoroscopy with Lung Suite to help confirm tool-in-lesion.

The OEC 3D enables a transition from 3D to 2D imaging through one versatile mobile CBCT imaging C-arm. GE said it includes an intuitive user interface and workflow to further optimize space in the bronchoscopy suite.

Editor’s note: This article was syndicated from The Robot Report sibling site MassDevice.

Imagry moves to make buses autonomous without mapping
https://www.therobotreport.com/imagry-moves-to-make-buses-autonomous-without-mapping/
Mon, 25 Nov 2024 19:18:36 +0000
Imagry has developed hardware-agnostic systems to provide Level 4 autonomy to buses with time to market in mind.

Imagry says its autonomy kit enables buses to autonomously handle roundabouts, as shown here.

Imagry says its software enables buses to autonomously handle complex situations such as roundabouts. Source: Imagry

Autonomous vehicles often rely heavily on prior information about their routes, but new technology promises to improve real-time situational awareness for vehicles including buses. Imagry said its “HD-mapless driving” software stack enables vehicles to react to dynamic contexts and situations more like human drivers.

The company also said its AI Vision 360 eliminates the need for external sensor infrastructure. It claimed that its bio-inspired neural network and hardware-agnostic systems allow for SAE Level 3/4 operations without spending time on mapping.

“We’ve been focusing on two sectors,” said Eran Ofir, CEO of Imagry. “We’ve been selling our perception and motion-planning stack to Tier 1 suppliers and automotive OEMs for autonomous vehicles. We signed a 10-year contract with Continental and are jointly developing a software-defined vehicle platform.”

“And we’ve started working with transportation operators on providing autonomous buses,” he told The Robot Report. “For example, in Turkey, France, Spain, and soon Japan, we’re retrofitting electric buses to be autonomous.”


Imagry trains in real time with supervision

Imagry was established in 2015 with a focus on computer vision for retail. In 2018, it began focusing entirely on autonomous driving. The company now has about 120 employees in San Jose, Calif., and Haifa, Israel.

Imagry said its technology is similar to that of Tesla in relying on 3D vision for perception and motion planning rather than rule-based coding or maps.

“Most players in the industry use HD maps with 5 cm [1.9 in.] resolution, telling the vehicle where lights, signs, and lane markers are,” said Ofir. “Our system teaches itself with supervised learning. It maps in real time while driving. Like a human driver, it gets the route but doesn’t know what it will find.”

How does Imagry deal with the need for massive data sets to train for navigation and obstacle detection and avoidance?

“We wrote a proprietary tool for annotation to train faster, better, and cheaper,” Ofir replied. “The data is collected but doesn’t live in the cloud. The human supervisor tells the vehicle where it was wrong, like a child. We deliver over-the-air updates to customers.”

“The world doesn’t belong to HD maps — it’s a matter of trusting AI-based software for perception and motion planning,” he said.

Ofir cited an example of a vehicle in Arizona on a random route with no communications to centralized computing. Its onboard sensors and compute recognized construction zones, skateboarders, a bike lane, and stop signs.

“The capability to drive out of the box in new places is unique to Imagry,” asserted Ofir. “We can handle righthand and lefthand driving, such as in Tokyo, where we’ve been driving for a year now.”

How does the bus know when to stop for passengers?

It could stop at every bus stop, upon request via a button at the stop (for the elderly, who may not use phone apps), or be summoned by an app that also handles payment, responded Ofir. Imagry’s system also supports “kneeling” for people with disabilities.

Why buses are a better focus for autonomy

Imagry has decided to focus on urban use cases rather than highways. Buses are simpler to get to Level 4 autonomy, said Ofir.

“Autonomous buses are better than ride hailing; they’re simpler than passenger vehicles,” said Ofir. “They drive in specific routes and at a speed of only 50 kph [31 mph] versus 80 kph [50 mph]. It’s a simpler use case, with economies of scale.”

“The time to revenue is much faster — the design cycle is four years, while integrating with a bus takes two to three months,” he explained. “Once we hand it over to the transport operator, we can get to L4 in 18 months, and then they can buy and deploy 40 more buses.”

In addition, the regulations for autonomous buses are clearer, with 22 countries running pilots, he noted.

“We already have projects with a large medical center and on a public road in Israel,” Ofir said. “We’re not doing small pods — most transport operators desire M3-class standard buses for 30 to 45 passengers because of the total cost of ownership, and they know how to operate them.”

In September and October, Imagry submitted bids for autonomous buses in Austria, Portugal, Germany, Sweden, and Japan.

Software focus could save money

By being vehicle-agnostic, Ofir said Imagry avoids being tied to specific, expensive hardware. Fifteen vendors are making systems on chips (SoCs) that are sufficient for Level 3 autonomy, he said.

“OEMs want the agility to use different sets of hardware in different vehicles. A $30,000 car is different from a $60,000 car, with different hardware stacks and bills of materials, such as camera or compute,” said Ofir. “It’s a crowded market, and the autonomy stack still costs $100,000 per vehicle. Ours is only $3,000 and runs on Ambarella, NVIDIA, TI, Qualcomm, and Intel.”

“With our first commercial proof of concept for Continental in Frankfurt, Germany, we calibrated our car and did some localization,” he added. “Three days after arrival, we simply took it out on the road, and it drove, knowing there’s no right on red.”

With shortages of drivers, particularly in Japan, operators could save $40,000 to $70,000 per bus per year, he said. The Japanese government wants 50 locations across the country to be served with autonomous buses by the end of 2025 and 100 by the end of 2027.

Autonomous buses are also reliable around the clock and don’t get sick or go on strike, he said.

“We’re working on fully autonomous parking, traffic jam assist, and Safe Driver Overwatch to help younger or older drivers obey traffic signs, which could be a game-changer in the insurance industry,” he added. “Our buses can handle roundabouts, narrow streets, and mixed traffic and are location-independent.”

Phases of autonomous bus deployment

Technology hurdles aside, getting autonomous buses recognized by the rules of the road requires patience, said Ofir.

“Together with Mobileye, which later moved to the robotaxi market, Imagry helped draft Israel’s regulatory framework for autonomous driving, which was completed in 2022,” recalled Ofir. “We’re working with lawmakers in France and Germany and will launch pilots in three markets in 2025.”

Testing even Level 3 autonomy can take years, depending on the region. He outlined the phases for autonomous bus rollout:

  1. Work with the electric bus for that market, then activate the system on a public road. “In the U.S., we’ve installed the full software and control stack in a vehicle and are testing FSD [full self-driving],” Ofir said.
  2. Pass NCAP (European New Car Assessment Programme) testing for merging and stops in 99 scenarios. “We’re the only company to date to pass those tests with an autonomous bus,” said Ofir. “Japan also has stringent safety standards.”
  3. Pass the cybersecurity framework, then allow passengers onboard buses with a safety driver present.
  4. Autonomously drive 100,000 km (62,137 mi.) on a designated route with one or more buses. After submitting a report to a department of motor vehicles or the equivalent, the bus operator could then remove the human driver.

“The silicon, sensors, and software don’t matter for time to revenue, and getting approvals from the U.S. National Highway Traffic Safety Administration [NHTSA] can take years,” Ofir said. “We expect passenger vehicles with our software on the road in Europe, the U.S., and Japan sometime in 2027.”

Imagry has joined Partners for Automated Vehicle Education (PAVE) and will be exhibiting at CES in January 2025.

Pickle Robot gets orders for over 30 unloading systems plus $50M in funding
https://www.therobotreport.com/pickle-robot-gets-orders-over-30-unloading-systems-plus-50m-funding/
Thu, 21 Nov 2024 16:17:33 +0000
Pickle Robot plans to deploy more trailer-unloading robots and to use its latest funding to expand into new locations.

Pickle applies AI and computer vision to unload a range of items. Source: Pickle Robot

Robotic truck unloading fits the classic definition of dull, dirty, or dangerous jobs worth automating. Pickle Robot Co. yesterday announced that it has raised $50 million in Series B funding and that six customers placed orders during the third quarter for more than 30 robots to deploy in the first half of 2025. The new orders include pilot conversions, existing customer expansions, and new customer adoption.

“Pickle Robot customers are experiencing the value of ‘Physical AI’ applied to a common logistics process that challenges thousands of operations every day,” said AJ Meyer, founder and CEO of Pickle Robot. “The new funding and our strategic customer relationships enable Pickle to chart the future of supply chain robotics, rapidly expand our core product capabilities, and grow our business to deliver tremendous customer value now and in the future.”

Founded in 2018, Pickle Robot said its robots are designed to autonomously unload trucks, trailers, and import containers at human-scale or better performance. The Cambridge, Mass.-based company’s goal is to relieve scarce workers and improve productivity and safety at distribution centers around the world.


Robots and AI unload a growing range of items

Truck unloading is one of the most labor-intensive, physically demanding, and highest-turnover work areas in logistics, noted Pickle Robot.

The company claimed that its Physical AI combines sensors and a computer vision system with industrial robotics, machine learning, and artificial intelligence. It uses generative AI foundation models trained on millions of data points from real logistics and warehouse operations.

The Pickle Unload Systems have been collaborating with staffers in production operations at distribution centers since the summer of 2023. To date, they have unloaded more than 10 million lb. (4.5 million kg.) of merchandise from import containers and domestic floor-loaded trailers.

The company said its customers include distributors of footwear, apparel, power tools, toys, kitchenware, packaging materials, small appliances, and other general merchandise. Its product roadmap is expanding to service parcel-type freight.

Pickle plans to add features, global marketing

The company said its Series B funding included participation from a strategic customer. Teradyne Robotics Ventures, Toyota Ventures, Ranpak, Third Kind Venture Capital, One Madison Group, Hyperplane, Catapult Ventures, and others also participated.

“Pickle is hitting its strides delivering innovation, development, commercial traction, and customer satisfaction,” said Omar Asali, CEO of Ranpak and a Pickle board member.

“The company is building groundbreaking technology while executing on essential recurring parts of a successful business like field service and manufacturing management,” he said. “It is a testament to the strong team at Pickle that world-class customers want to work with them and that investors are excited about their trajectory.”

The company said it plans to use its latest funding to accelerate the development of new feature sets. It also plans to build out its commercial teams to unlock new markets and geographies worldwide.

It added that it is “on a mission to automate inbound and outbound processes at 1 million warehouse doors over the next 10 years.”

Pickle Robot demonstrates lifting a 50-lb. box in a trailer.

Pickle demonstrates lifting a 50-lb. box in a trailer. Source: Pickle Robot

The AI Institute introduces Theia vision foundation model to improve robot learning
https://www.therobotreport.com/theia-vision-foundation-model-aiinstitute-generates-improve-robot-learning/
Wed, 13 Nov 2024 20:02:38 +0000
Theia is a visual foundation model that the AI Institute says can distill diverse models for policy learning at a lower computation cost.


In the field of robotics, vision-based learning systems are a promising strategy for enabling machines to interpret and interact with their environment, said the AI Institute today. It introduced the Theia vision foundation model to facilitate robot training.

Vision-based learning systems must provide robust representations of the world, allowing robots to understand and respond to their surroundings, said the AI Institute. Traditional approaches typically focus on single-task models—such as classification, segmentation, or object detection—which individually do not encapsulate the diverse understanding of a scene required for robot learning.

This shortcoming highlights the need for a more holistic solution capable of interpreting a broad spectrum of visual cues efficiently, said the Cambridge, Mass.-based institute, which is developing Theia to address this gap.

In a paper published in the Conference on Robot Learning (CoRL), the AI Institute introduced Theia, a model that is designed to distill the expertise of multiple off-the-shelf vision foundation models (VFMs) into a single model. By combining the strengths of multiple different VFMs, each trained for a specific visual task, Theia generates a richer, unified visual representation that can be used to improve robot learning performance.

Robot policies trained using Theia’s encoder achieved a higher average task success rate of 80.97% when evaluated against 12 robot simulation tasks, a statistically significant improvement over other representation choices.

Furthermore, in real robot experiments, where the institute used behavior cloning to learn robot policies across four multi-step tasks, the trained policy success rate using Theia was on average 15 percentage points higher than policies trained using the next-best representation.

The AI Institute plots robot control policies trained with Theia outperform policies trained with alternative representations on MuJoCo robot simulation tasks, with much less computation, measured by the number of Multiply-Accumulate operations in billions.

Robot control policies trained with Theia outperform policies trained with alternative representations on MuJoCo robot simulation tasks, with much less computation, measured by the number of Multiply-Accumulate operations in billions (MACs). Source: The AI Institute

Theia designed to combine visual models

Theia’s design is based on a distillation process that integrates the strengths of multiple VFMs such as CLIP (vision language), DINOv2 (dense visual correspondence), and ViT (classification), among others. By carefully selecting and combining these models, Theia is able to produce robust visual representations that can improve downstream robot learning performance, said the AI Institute.

At its core, Theia consists of a visual encoder (backbone) and a set of feature translators, which work in tandem to incorporate the knowledge from multiple VFMs into a unified model. The visual encoder generates latent representations that capture diverse visual insights.

These representations are then processed by the feature translators, which refine them by comparing the output features against ground truth. This comparison serves as a supervisory signal, optimizing Theia’s latent representations to enhance their diversity and accuracy.

These optimized latent representations are subsequently used to fine-tune policy learning models, enabling robots to perform a wide range of tasks with greater accuracy.
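To make the distillation idea above concrete, here is a minimal, hypothetical PyTorch sketch of a shared encoder feeding per-teacher "feature translator" heads, each supervised by a frozen vision foundation model's features. It is an illustration based on the description in this article, not the AI Institute's actual Theia code, and all class, function, and variable names are placeholders.

```python
# Hypothetical sketch of the distillation scheme described above; not the
# AI Institute's Theia implementation. A shared encoder produces one latent
# representation, and one translator head per teacher VFM is trained to match
# that teacher's (frozen) output features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TheiaStyleDistiller(nn.Module):
    def __init__(self, encoder: nn.Module, feat_dim: int, teacher_dims: dict):
        super().__init__()
        self.encoder = encoder  # shared visual backbone, e.g. a compact ViT
        # one lightweight head per teacher model (names are illustrative)
        self.translators = nn.ModuleDict(
            {name: nn.Linear(feat_dim, dim) for name, dim in teacher_dims.items()}
        )

    def forward(self, images: torch.Tensor) -> dict:
        latent = self.encoder(images)  # unified representation used downstream
        return {name: head(latent) for name, head in self.translators.items()}

def distillation_loss(student_out: dict, teacher_feats: dict) -> torch.Tensor:
    # each translator is supervised by the corresponding frozen teacher's features
    losses = [F.smooth_l1_loss(student_out[k], teacher_feats[k]) for k in teacher_feats]
    return torch.stack(losses).mean()
```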

Theia's design is based on a process that distills the strengths of multiple VFMs, including CLIP, SAM, DINOv2, Depth-Anything, and ViT, among others, according to the AI Institute.

Theia’s design is based on a process that distills the strengths of multiple VFMs, including CLIP, SAM, DINOv2, Depth-Anything, and ViT, among others. Source: The AI Institute

Robots learn in the lab

Researchers at the AI Institute tested Theia in simulation and on a number of robot platforms, including Boston Dynamics‘ Spot and a WidowX robot arm. For one of the rounds of lab testing, it used Theia to train a policy enabling a robot to open a small microwave, place toy food inside, and close the microwave door.

Previously, researchers would have needed to combine all the VFMs, which is slow and computationally expensive, or select which VFM to use to represent the scene in front of the robot. For example, they could choose a segmentation image from a segmentation model, a depth image from a depth model, or a text class name from an image classification model. Each provided different types and granularity of information about the scene.

Generally, a single VFM might work well for a single task with known objects but might not be the right choice for other tasks or other robots.

With Theia, the same image from the robot can be fed through the encoder to generate a single representation with all the key information. That representation can then be input into Theia’s segmentation decoder to output a segmentation image. The same representation can be input into Theia’s depth decoder to output a depth image, and so on.

Each decoder uses the same representation as input because the shared representation possesses the information required to generate all the outputs from the original VFMs. This streamlines the training process and makes actions transferable to a broader range of situations, said the researchers.

While it sounds easy for a person, the microwaving task represents a more complex behavior because it requires successful completion of multiple steps: picking up the object, placing it into the microwave, and closing the microwave door. The policy trained with Theia is among the top performers for each of these steps, comparable only to E-RADIO, another approach which also combines multiple VFMs, although not specifically for robotics applications.

Researchers used Theia to train a policy enabling a robot arm to microwave various types of toy food. Source: The AI Institute

Theia prioritizes efficiency

One of Theia’s main advantages over other VFMs is its efficiency, said the AI Institute. Training Theia requires about 150 GPU hours on datasets like ImageNet, reducing the computational resources needed compared to other models.

This high efficiency does not come at the expense of performance, making Theia a practical choice for both research and application. With a smaller model size and reduced need for training data, Theia conserves computational resources during both the training and fine-tuning processes.

AI Institute sees transformation in robot learning

Theia enables robots to learn and adapt more quickly and effectively by refining knowledge from multiple vision models into compact representations for classification, segmentation, depth prediction, and other modalities.

While there is still much work to be done before reaching a 100% success rate on complex robotics tasks using Theia or other VFMs, Theia makes progress toward this goal while using less training data and fewer computational resources.

The AI Institute invited researchers and developers to explore Theia and further evaluate its capabilities to improve how robots learn and interpret their environments.

“We’re excited to see how Theia can contribute to both academic research and practical applications in robotics,” it said. Visit the AI Institute’s project page and demo page to learn more about Theia.


CynLr raises Series A funding to realize robot vision for ‘universal factory’
https://www.therobotreport.com/cynlr-raises-series-a-funding-to-realize-robot-vision-for-universal-factory/
Wed, 06 Nov 2024 22:03:21 +0000
CynLr, which is developing technology to enable robots to manipulate unknown objects, will grow its team and expand its supply chain network.

CynLr has designed CLX to provide human-level vision to machines.

The CLX robotic vision stack was inspired by human eyesight. Source: CynLr

CynLr, or Cybernetics Laboratory, today said it has raised $10 million in Series A funding. The company said it plans to use the investment to enhance its hardware reliability, improve its software performance and user experience, reduce costs for the customer, and expand its team.

Gokul NA and Nikhil Ramaswamy founded CynLr in 2019. The Bengaluru, India-based company specializes in “visual object sentience,” robotics, and cybernetics. It is developing technology to enable robots to manipulate objects of any shape, color, size, and form toward its “universal factory” or “factory-as-a-product” concept.

“This round of investments will help us focus on deeper R&D to build more complex applications and solutions for our customers, like Denso, where they need to manage their demand variability for different parts through a hot-swappable robot station,” stated Ramaswamy, founder and lead for go to market sales and investment at CynLr.

He also cited plant-level automation customers. “With General Motors … they require one standard robot platform to handle 22,000+ parts for assembly of the vehicles,” Ramaswamy said. 


CynLr CLX-01 stack provides real-time, versatile vision

CynLr said its mission is to simplify automation and optimize manufacturing processes for universal factories. It was an exhibitor at the 2024 Robotics Summit & Expo in Boston.

The company claimed that it is building “the missing layers of fundamental technology” that will enable robots to intuitively recognize and manipulate even unknown objects just like a human baby might. CynLr said its “visual robot platform” enables robots to comprehend, grasp, and manipulate objects in complex and unpredicted environments. 

CyRo is a three-armed, modular, general-purpose dexterous robot. The company said the robot is its first product that can intuitively pick any object without training and can be quickly configured for complex manipulation tasks.

CyRo uses CynLr’s proprietary CLX-01 robotic vision stack, inspired by the human eye. Unlike traditional vision systems that rely only on pre-fed data for machine learning, CLX-01 uses real-time motion and convergence of its two lenses to dynamically see the depth of previously unknown objects.

CynLr added that its Event Imaging technology is agnostic to lighting variations, even for transparent and highly reflective objects. The company is partnering with multinational customers in the U.S. and EU to co-develop pilot applications.

“With the CyRo form factor receiving a resounding response from customers, technology-market fit has been firmly established,” said Gokul NA, co-founder and design, product, and brand leader at CynLr. “These customers are now eager to integrate CyRo into their production lines and experiment the transformational vision of a universal factory that can profitably produce custom-fit consumer goods, even at low volumes.”

CyRo from CynLr includes the CLX-01 perception system and robotic arms.

The CyRo modular robot includes three robotic arms for complex manipulation tasks. Source: CynLr

Investors support universal factory concept

Pavestone, Grow X Ventures, and Athera Ventures (formerly Inventus India) led CynLr’s Series A round, which brings its total funding over two rounds to $15.2 million. Existing investors Speciale Invest, Infoedge (Redstart), and others also participated.

“CynLr’s concept of a universal factory will simplify and eliminate the minimum order quantity bottleneck for manufacturing,” said Sridhar Rampalli, managing partner at Pavestone Capital. “Furthermore, the idea of changing automation by simply downloading task recipe from an online platform makes factories … product-agnostic. [They] can produce entirely new products out of same factory at a click of a button; it’s a future that we look forward to.” 

Vishesh Rajaram, managing partner at Speciale Invest, said: “Automating using a state-of-the-art industrial robot today costs 3x the price of a robot in customization, along with 24+ months of design modifications. This is the significant technological bottleneck that the team at CynLr is solving, paving the way for long-overdue evolution in automation. We are excited to be a part of their journey in building the factories of the future.”

“Enabling an industrial robot to perform seemingly simple tasks — like inserting a screw without slipping, for example — is what CynLr has managed to crack,” said Samir Kumar, general partner at Athera Venture Partners. “This breakthrough will enable the manufacturing industry to dramatically increase efficiency and maximize the value of production setups.”

From left, Gokul NA and Nikhil Ramaswamy, co-founders of CynLr. Source: CynLr

CynLr to expand staff, production, and ‘object store’

CynLr plans to expand its 60-member core team into a 120-member global team. In addition to expanding its software research and development team, the company said it will hire business and operational leaders, plus marketing and sales teams across India, the U.S., and Switzerland.

The 13,000-sq.-ft. (1,207.7 sq. m) robotics lab in Bengaluru currently hosts a “Cybernetics H.I.V.E.” of 25 robots, which CynLr plans to expand to more than 50 systems by 2026.

“CynLr manages an extensive supply chain of 400+ parts sourced across 14 countries and will expand its manufacturing capacity to achieve the goal of deploying one robot system per day and reach the $22 million revenue milestone by 2027,” said Gokul NA.

During Swiss-Indian Innovation Week in September, the company opened its Design & Research Center at the Unlimitrust Campus in Prilly, Switzerland. The center will work closely with CynLr’s research partners at EPFL (the Swiss Federal Institute of Technology Lausanne), in its Learning Algorithms and Systems (LASA) Laboratory, and at the Swiss Center for Electronics and Microtechnology (CSEM) in Neuchâtel.

“With the current momentum of breakthroughs in CyRo’s capabilities, we will be able to substantially reduce costs and drive adoption, bringing it closer to realizing the possibility of creating an ‘object store’ — a platform similar to today’s app stores, allowing customers to pick a recipe of applications and object models to have the Robot instantaneously perform a desired task,” explained Ramaswamy. “The company will simultaneously invest in infrastructure for support, solutions engineering, and sales to support this larger vision.”

Geek+ and Intel launch Vision Only Robot Solution for smart logistics
https://www.therobotreport.com/geekplus-intel-launch-vision-only-robot-system-logistics/
Mon, 04 Nov 2024 19:19:25 +0000
Geek+ expects these robots to work in factory and warehouse transportation, helping customers build agile, digital, and intelligent supply chains.

An image of Intel's robotic vision hub. You can see the outline of an AMR, with the hardware of the vision hub being the only thing visible.

The Robotic Vision Hub, which contains components such as the Intel Core i7-1270P processor and connection modules. | Source: Geek+

Geekplus Technology Co. today launched its Vision Only Robot Solution. The system includes Intel Visual Navigation Modules, which Geek+ said will drive the digital transformation of the logistics industry. 

“The Vision Only Robot Solution, developed in collaboration with Intel, effectively leverages the depth vision perception of the Intel RealSense camera,” stated Solomon Lee, vice president of product at Geek+. “Together with the deep algorithmic innovations from both sides, it results in a boost in business growth and efficiency for customers, driving the digital and intelligent upgrade of smart logistics.”

Geek+ claimed that its new system is the world’s first vision-only autonomous mobile robot (AMR) using Intel Corp.’s Visual Navigation Modules. It also features algorithmic innovations in V-SLAM (visual simultaneous localization and mapping) positioning, composite detection networks, and robot following, the partners said. This allows for highly accurate navigation and obstacle avoidance, helping enterprises cope with diverse and complex logistics scenarios while enhancing both efficiency and accuracy, said Geek+.

The vision-only robots equipped with the Intel Visual Navigation Modules will debut this week at CeMAT in Shanghai. Geek+ said it plans to strengthen its partnership with Intel to develop more smart logistics systems.

Founded in 2015, Geek+ said that more than 1,000 customers use its AMRs for warehouses and supply chain management. The company has offices in the U.S., Germany, the U.K., Japan, South Korea, China, and Singapore. Last month, it opened a 40,000-sq.-ft. facility near Atlanta, announced a 12 m (40 ft.) tall automated storage system, and partnered with Floatic.

Intel RealSense supports vision-based AI

Geek+ explained that its Vision Only Robot Solution integrates the Intel RealSense camera. This camera has an all-in-one design that enables all depth calculations to be performed directly within the device. This will result in low power consumption and independence from specific platforms or hardware, said the companies.

The Intel RealSense also supports various vision-based AI, noted Intel. When paired with a dedicated visual processor, it can accelerate the machine-learning process and shorten the deployment cycle for new automation.

Thanks to the Intel RealSense camera, Geek+ said its Vision Only Robot can observe, understand, and learn from its environment. By obtaining highly accurate and consistent depth data, the robot can accurately recognize and interact with its surroundings, the company said.
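As a rough illustration of how such on-camera depth data can be consumed, the following sketch uses the publicly available pyrealsense2 SDK to read a depth frame and query a distance. It is a generic example, not Geek+'s integration code, and the stream settings are assumptions.

```python
# Generic example of reading on-device depth from an Intel RealSense camera
# with the public pyrealsense2 SDK; this is not Geek+'s software.
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
# depth is computed on the camera itself; the host simply receives z16 frames
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)
try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    # distance, in meters, to whatever is at the image center
    print(f"Distance at center: {depth.get_distance(320, 240):.2f} m")
finally:
    pipeline.stop()
```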

“Highly accurate and consistent depth vision data is critical for [an] AMR to achieve environmental perception, significantly influencing its performance in positioning, navigation, and obstacle avoidance,” said Mark Yahiro, vice president of corporate strategy and ventures and the general manager of the RealSense business unit within Intel’s Corporate Strategy Office.

“Through collaboration with Geek+, we are driving AMR innovations based on depth vision data, enabling logistics robots to deliver highly stable and accurate transport services in complex environments, thereby empowering agile, digital, and intelligent supply chains,” he said.

In addition to the camera, the Intel Visual Navigation Module includes the Robotic Vision Hub, which contains components such as the Intel Core i7-1270P processor and connection modules. The module also enables cloud-edge collaboration through high-speed networks, said the partners.


Geek+ aims for algorithmic innovation 

Geek+ said is building on the Intel Visual Navigation Module to provide reliable computational support for algorithms running on its Vision Only Robot:

  • V-SLAM positioning algorithm: This fuses multi-sensor data and various visual feature elements to generate composite maps, such as point feature maps, line feature maps, object maps, and special area maps. It can deliver reliable and precise positioning in complex and dynamic environments, said the companies.
  • Composite detection network: With both a traditional object-detection network and a validation network, it processes detection data from multiple dimensions, thus enhancing accuracy and reducing the false detection rate.
  • Robot following: By integrating modules such as personnel detection, re-identification, and visual target tracking, Geek+ said it has developed a flexible and efficient visual perception pipeline. Once the relative position between the target personnel and the AMR is determined, the local planning algorithm in Geek+’s self-developed RoboGo, a robotic standalone system, will enable autonomous obstacle avoidance for smooth AMR following of target personnel. A simplified sketch of such a following loop appears after this list.
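Below is a simplified, hypothetical sketch of such a person-following loop (detect, re-identify, estimate relative position, plan). The function and class names are placeholders for illustration only and are not Geek+ APIs.

```python
# Hypothetical person-following loop illustrating the pipeline described in the
# list above; none of these interfaces are Geek+ APIs.
from dataclasses import dataclass

@dataclass
class Detection:
    bbox: tuple       # (x, y, w, h) in image coordinates
    embedding: list   # appearance feature used for re-identification

def follow_step(frame, depth, target_embedding, detector, reid, planner):
    # 1. detect all people in the current camera frame
    people = detector(frame)  # -> list of Detection
    if not people:
        return planner.stop()
    # 2. re-identify the tracked person by appearance similarity
    target = max(people, key=lambda d: reid.similarity(d.embedding, target_embedding))
    # 3. use depth at the detection to estimate the person's position relative to the AMR
    x, y, w, h = target.bbox
    rel_position = depth.point_at(int(x + w / 2), int(y + h / 2))
    # 4. the local planner returns a velocity command that follows while avoiding obstacles
    return planner.command_toward(rel_position)
```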

Geek+ said the combination of the Intel Visual Navigation Module’s depth perception and collaborative algorithmic innovations will ensure efficiency for its Vision Only Robot. It will also provide high precision and efficiency for environmental perception, positioning, and tracking, the company said.

Intel and Geek+ said they expect to see widespread adoption of these robots in areas such as factory and warehouse transportation.

Geek+ and Intel have debuted the Vision Only Robot Solution. Source: Geek+

Advantech partners with oToBrite to create low-latency AI for AMRs
https://www.therobotreport.com/advantech-partners-with-otobrite-to-create-low-latency-ai-for-amrs/
Thu, 31 Oct 2024 12:30:22 +0000
Advantech and oToBrite said the joint system will enable high-resolution, low-latency AI for next-generation AMRs.

A graphic showing oToBrite's automotive GMSL cameras and the Intel Core Ultra H/U, which now work with Advantech AI.

Advantech and oToBrite said their joint system will benefit industries from logistics to manufacturing. | Source: oToBrite

oToBrite Electronics Inc. this week announced a strategic partnership with Advantech to co-develop high-performance and cost-effective perception for mobile robots. oToBrite will bring its experience with artificial intelligence, machine vision, and automotive-grade cameras, while Advantech will provide expertise with global industrial Internet of Things.

The collaborators said they will integrate oToBrite’s high-speed automotive Gigabit Multiple Serial Link (GMSL) cameras with Advantech’s AFE-R360 platform, powered by the Intel Core Ultra H/U (Meteor Lake). 

The joint system will enable high-resolution, low-latency AI for next-generation autonomous mobile robots (AMRs), benefiting industries from logistics to manufacturing, said the companies.

oToBrite says GMSL cameras meet industry needs

AMR applications have expanded into warehouse logistics, last-mile delivery, and terminal or yard tractors. In response to this, oToBrite said integrating GMSL technology addresses the increasing need for real-time, uncompressed, and high-resolution perception. The company said its technologies enable accurate autonomous navigation in diverse environments.

As a provider of advanced driver-assist systems (ADAS), oToBrite has manufactured several vision-AI products for major automakers. Those products rely on high-speed data transmission to handle the large data flow from multiple cameras and enable real-time processing in vehicles.

To meet demand, oToBrite has integrated GMSL technology in products like SAE Level 2+ ADAS and Level 4 autonomous valet parking. The Hsinchu, Taiwan-based company said its automotive experience and technology will enable customers to successfully deploy AMRs with Advantech.

The oToCAM222 series with 2.5M pixels offers multiple viewing angles (63.9°/120.6°/195.9°). The company said this makes it suitable for low-speed AMR applications in challenging industrial environments. The camera offers high-speed, low-latency data processing and IP67/69K-rated durability, said oToBrite.

The company also noted that it has created advanced vision-AI models, embedded system software for various platforms, and active alignment technology for IP67/69K automotive cameras in its IATF16949-certified factory. 


Advantech initiates partnerships with other camera providers

The AFE-R360 platform is powered by Intel’s Core Ultra 16-core processor with Arc graphics and an integrated neural processing unit (NPU) delivering up to 32 trillion operations per second (TOPS), enhanced by the OpenVINO toolkit for optimized AI performance, said Advantech.
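As a rough sketch of what targeting that integrated NPU could look like with the OpenVINO Python API, the snippet below compiles a model for the NPU device when it is available. The model path is a placeholder, and this is not Advantech's software stack.

```python
# Rough sketch of compiling a vision model for an Intel NPU with OpenVINO.
# "perception_model.xml" is a placeholder path; this is not Advantech's code.
import openvino as ov

core = ov.Core()
print("Available devices:", core.available_devices)  # e.g. ['CPU', 'GPU', 'NPU']

model = core.read_model("perception_model.xml")
# prefer the NPU when the plugin is present, otherwise fall back to the CPU
device = "NPU" if "NPU" in core.available_devices else "CPU"
compiled = core.compile_model(model, device)
# compiled(inputs) then runs low-latency inference on the chosen device
```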

In addition to its partnership with oToBrite, the Taipei, Taiwan-based company recently initiated a partner alignment strategy, gathering top camera providers to develop systems for the AFE-R360. These include Intel RealSense, e-con Systems, and Innodisk. 

“Advantech is proud to collaborate with Intel and camera vendors to strengthen our AMR solution,” stated James Wang, director of embedded applications at Advantech. “By incorporating MIPI and GMSL interfaces into AFE-R360, Advantech is committed to providing our customers with cutting-edge technology that meets the challenges of tomorrow. These interfaces not only enhance performance but also enable new possibilities in imaging applications across various industries.” 

The offering also includes a 3.5-in. (8.8-cm) single board computer (SBC) supporting up to eight MIPI-CSI lanes for seamless GMSL input, ensuring low latency and high noise immunity essential for autonomous operations, said the companies. It also has three LAN and three USB-C ports for integrating depth and lidar sensors.

oToBrite and Advantech said their combination of AI and advanced GMSL camera technology will enhance cost-effective AMR systems.

Path Robotics raises $100M to automate welding
https://www.therobotreport.com/path-robotics-raises-100m-to-automate-welding/
Mon, 28 Oct 2024 20:54:55 +0000
Founded in 2018 by brothers Andy and Alex Lonsberry, Path Robotics' goal is to fill gaps in the manufacturing industry, starting with welding.

The post Path Robotics raises $100M to automate welding appeared first on The Robot Report.

A grayscale image of Path Robotics' autonomous welder, with the company's logo at the top, and white text reading $100 Million Series D.

Path Robotics offers autonomous welding robots with a RaaS pricing model. | Source: Path Robotics

Over the past 12 months, Path Robotics has closed $100 million in new investments. The Columbus, Ohio-based company said it is dedicated to embodying artificial intelligence so that robotic systems can take on challenges that traditional automation is incapable of tackling.

Founded in 2018 by brothers Andy and Alex Lonsberry, Path Robotics’ stated goal is to fill gaps in the manufacturing industry. As a result, the company has created robots that use AI, machine learning, and computer vision systems to fit up and weld parts.

The company currently has two robotic welding products in the market. The first, the AW-3 Robotic Welding Cell, can handle parts as large as 70 ft. (21.3 m) long.

The AF-1 Robotic Welding Cell, on the other hand, can pick, fit, and weld parts without human intervention. Path Robotics said both of its welding cells can autonomously weld steel parts and are deployed in fabrication shops across the U.S. and Canada.

“As a firm, we seek out transformative technologies that solve pressing, real-world challenges — and Path Robotics exemplifies that vision,” said Haomiao Huang, founding partner at Matter Venture Partners, a leader of the funding round.

“Path’s AI robotics technology is a game changer for manufacturing, starting with addressing critical labor shortages in welding but with potential far beyond that,” Huang said. “We are incredibly proud to partner with Andy and the Path Robotics team in their mission to revitalize American manufacturing and lead the future of AI robotics in the factory.”

Path Robotics adds investors

Matter Venture Partners and Drive Capital led the round, which also included participation from Yamaha Ventures, Taiwania Capital, MediaTek, Catapult Ventures, Gaingels, Addition, Tiger Global, and Basis Set.

“With Matter Venture Partners, there was a mutual attraction we both recognized,” said Andy Lonsberry, co-founder and CEO of Path Robotics. “There are multiple layers to what we’re doing at Path; we build software, but it powers hardware.”

“We serve the welding industry, but with cutting-edge AI, robotics, and machine learning,” he reiterated. “Having an investor that understands those layers is important to our success and continued growth.”

Path Robotics previously received venture backing from prominent investors such as Drive Capital, Addition, Tiger Global, Basis Set, Lemnos, and Silicon Valley Bank totaling $170 million.

“This investment from Matter Venture Partners is valuable, but their expertise and relationships are just as meaningful,” said Lonsberry. “The manufacturing industry’s challenge with the shortage of welders is not going away, and they understand the nuance of what we’re tackling. They are aligned with us in our belief that to rebuild manufacturing, we are going to have to create autonomous systems that can take on the labor challenges currently plaguing American manufacturing.”

Simbe raises $50M Series C round to expand retail, wholesale inventory robotics https://www.therobotreport.com/simbe-raises-50m-series-c-expands-retail-wholesale-inventory-robotics/ https://www.therobotreport.com/simbe-raises-50m-series-c-expands-retail-wholesale-inventory-robotics/#respond Thu, 24 Oct 2024 12:25:52 +0000 https://www.therobotreport.com/?p=581253 Simbe, whose Store Intelligence platform includes the Tally inventory robot, has raised funding to continue its global growth.

The post Simbe raises $50M Series C round to expand retail, wholesale inventory robotics appeared first on The Robot Report.

Tally from Simbe Robotics works at BJ's Wholesale Stores, shown here, and other retailers.

Simbe won an RBR50 award for expanding its Tally deployment at BJ's Wholesale Club. Source: Simbe Robotics

Retailers’ desire for data is driving the spread of automation. Simbe Robotics Inc. today announced that it has closed a $50 million Series C equity financing round. The South San Francisco-based company said it plans to use its latest capital to meet the need for retail technology, continue growing its fleet to stores and brands worldwide, and expand into new product areas.

“Retail is a cornerstone of modern society, yet physical stores remain burdened by what we call ‘the last great data desert’ – knowing precisely what’s happening on store shelves,” stated Brad Bogolea, co-founder and CEO of Simbe. “In partnership with top global retailers, Simbe is building the essential system of record to power retail’s operating layer.”

Simbe’s Store Intelligence platform includes the Tally item-scanning robot, which uses computer vision to identify product locations, stock levels, and pricing and promotion information. The platform also includes artificial intelligence to help streamline inventory management and store operations while enhancing store team and shopper experiences, said the company.

Simbe introduced new products, capabilities in 2024

In response to surging demand across retail verticals, Simbe Robotics claimed that its mobile robots and software can provide “unprecedented visibility and near-real-time insights.” The company added that it has refined, expanded, and scaled its platform to automate shelf intelligence for retail banners around the world.

This year, Simbe announced new and expanded partnerships with major chains, including SpartanNash, Wakefern Food Corp., Northeast Grocery, Albertsons Companies, and CarrefourSA. It also strengthened existing partnerships with BJ’s Wholesale Club, Schnuck Markets, and multiple Fortune 500 retailers.

The company was recognized with a 2024 RBR50 Robotics Innovation Award for scaling its collaboration with BJ’s Wholesale Club. In 2024, Simbe introduced products and capabilities including:

  • Simbe Brand Insights, which extends the value of near-real-time, shelf-level data to retailers’ vendors, consumer packaged goods (CPG) brands, and manufacturers
  • Simbe Virtual Tour, which allows retailers to view their stores from anywhere in the world at a new depth and frequency
  • Simbe Mobile app, which streamlines work for store teams by providing a timely, prioritized list of pricing and restocking tasks at their fingertips
  • Simbe Wholesale Club Solution, which it said is the industry’s first shelf-intelligence platform designed specifically for wholesale club environments

Goldman Sachs invests in market opportunity 

Growth Equity at Goldman Sachs Alternatives led Simbe’s Series C, with participation from Eclipse and Valo Ventures and other existing investors. Since raising its Series B in July 2023, Simbe said it has achieved significant momentum and milestones. The new capital brings the total amount it has raised to more than $100 million.

“Retail automation is a rapidly growing sector, and Simbe is well-positioned to capitalize on the enormous market opportunity due to its strong track record with top global retailers, underscoring its proven impact at scale and strong capabilities,” said Ben Fife, growth equity investor at Goldman Sachs Alternatives.

“We proactively led Simbe’s $50 million round because we recognize their distinct ability to steer retail transformation and meet surging demand for AI and robotics,” he added. “It’s only a matter of time until we see technology like Simbe’s in every retail store.”

Goldman Sachs Alternatives has more than $500 billion in assets under management and over 30 years of experience. The alternative investments platform is part of Goldman Sachs Asset Management, which delivers investment and advisory services across public and private markets.

Since 2003, Growth Equity at Goldman Sachs Alternatives has invested more than $13 billion in growth-stage companies spanning multiple industries, including enterprise technology, financial technology, consumer, and healthcare.

“Goldman Sachs is renowned for supporting and scaling enterprise technology and automation companies, and this new capital underscores our vision to transform retail with true in-store visibility,” said Bogolea. “Simbe’s technology will power every store, improving the experience for every retailer, brand, employee, and shopper.”

Simbe said the proceeds will be used to accelerate global deployments, broaden retail offerings, and pursue strategic growth opportunities. The company also plans to expand its team, which grew by 100% in the past year, by adding talent at the leadership level for the next phase of its growth.

See3CAM_CU83 camera from e-con Systems offers RGB-IR tech for diverse applications https://www.therobotreport.com/see3cam-cu83-camera-from-e-con-systems-offers-rgb-ir-tech-for-diverse-applications/ https://www.therobotreport.com/see3cam-cu83-camera-from-e-con-systems-offers-rgb-ir-tech-for-diverse-applications/#respond Wed, 23 Oct 2024 20:21:10 +0000 https://www.therobotreport.com/?p=581250 The new See3CAM from e-con Systems includes proprietary technology for separating RGB and IR frames for precise embedded vision.

The post See3CAM_CU83 camera from e-con Systems offers RGB-IR tech for diverse applications appeared first on The Robot Report.

The new See3CAM_CU83 4K RGB-IR superspeed USB camera from e-con Systems.

The new See3CAM_CU83 4K RGB-IR camera can be used in precision agriculture and surgery. Source: e-con Systems.

e-con Systems Inc. last week launched its latest camera, the See3CAM_CU83, a 4K superspeed USB camera featuring onsemi's AR0830 sensor. The latest See3CAM, an RGB-IR camera, promises performance for a wide range of applications, including biometric access control, in-cabin monitoring, crop health monitoring, image-guided surgeries, and smart patient monitoring, said the company.

“See3CAM_CU83 represents a significant milestone in our product lineup,” stated Prabu Kumar, head of the Camera Solutions Unit at e-con Systems. “With over 20 years of experience in embedded vision, e-con Systems has generated multiple patents.”

“This camera uses e-con’s own proprietary algorithm that processes RGB-IR frames from the single sensor into separate RGB and IR frames,” he added. “The camera’s ability to capture both visible and infrared light at the same time with a dual band-pass optical system also allows it to operate seamlessly in both day and night modes across a wide range of vision applications.”

e-con’s product portfolio includes ToF (time-of-flight) cameras, MIPI (Mobile Industry Processor Interface) camera modules, GMSL (Gigabit Multimedia Serial Link) cameras, USB 3.1 Gen 1 cameras, stereo cameras, GigE cameras, and low-light cameras. The Riverside, Calif.-based company last month also demonstrated its 3MP and 5MP global shutter cameras for basketball posture analysis and precision agriculture, respectively.

See3CAM_CU83 eliminates sensors, filters

“See3CAM_CU83 sets a new standard in the industry with its ability to simultaneously stream RGB-IR frames, capturing high-quality 4K images in both visible and IR lighting conditions,” said e-con Systems.

By separating visual light and infrared frames, the new camera eliminates the need for separate RGB and IR sensors, the embedded vision provider claimed. It can deliver clear, precise images with low latency and is a cost-effective system, e-con asserted.
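
e-con has not disclosed how its separation algorithm works, but the toy sketch below illustrates the basic idea of splitting a single RGB-IR mosaic into separate RGB and IR frames. The 2x2 pixel pattern assumed here is purely illustrative and is not the AR0830's actual color filter array.

```python
# Toy split of a single RGB-IR mosaic into separate RGB and IR frames.
# The simple 2x2 pattern assumed here (R, G / B, IR) is illustrative only;
# e-con's algorithm and the AR0830 color filter array are proprietary.
import numpy as np

def split_rgb_ir(raw):
    """raw: 2D mosaic array with even height and width."""
    r = raw[0::2, 0::2].astype(np.float32)
    g = raw[0::2, 1::2].astype(np.float32)
    b = raw[1::2, 0::2].astype(np.float32)
    ir = raw[1::2, 1::2].astype(np.float32)
    rgb = np.stack([r, g, b], axis=-1)   # quarter-resolution RGB, no interpolation
    return rgb, ir

raw = (np.random.rand(480, 640) * 255).astype(np.uint8)   # stand-in sensor readout
rgb, ir = split_rgb_ir(raw)
print(rgb.shape, ir.shape)   # (240, 320, 3) (240, 320)
```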

The camera’s reliability is further enhanced by the absence of mechanical switch filters, it added. e-con said its expertise in image signal processing (ISP) fine-tuning enables See3CAM to deliver high-resolution images.

“By integrating our AR0830 sensor into their 4K RGB-IR superspeed USB camera, See3CAM_CU83, and combining it with their RGB-IR separation algorithm, e-con Systems delivers a composite camera system that now can be extensively deployed in both visible and NIR spectrums,” said Steve Harris, senior director of marketing in onsemi‘s Industrial and Commercial Sensing Division. “This combination enables superior performance and color accuracy across a wide range of embedded vision applications.”

e-con Systems addresses vision pain points across industries

e-con Systems said the See3CAM_CU83 addresses pain points of embedded vision applications across multiple industries:

  • Biometric access control: RGB-IR can enhance biometric accuracy, distinguishing between live persons and spoofing attempts, said the company. This promises to improve security for facility entry, attendance systems, and high-security environments.
  • In-cabin monitoring: RGB-IR functionality maintains clear video in varying light conditions for driver or passenger monitoring, explained e-con. 4K resolution enhances detail for advanced computer vision techniques, enabling drowsiness detection and cabin occupancy tracking. In addition, the camera's wake-on-motion feature ensures rapid response along with power efficiency, it said.
  • Crop health monitoring: Dual RGB and IR imagery can capture near-infrared spectral data, revealing crop health indicators like chlorophyll content and water status. This can help detect crop distress, diseases, or pest infestations for targeted interventions; a simple vegetation-index sketch follows this list.
  • Image-guided surgeries: IR imaging functionality can provide surgeons with additional visual cues and information beyond what is available in the visible spectrum alone. For example, IR imaging can help differentiate between healthy and diseased tissue, enhance visualization of blood vessels, or aid in tumor identification during precision surgeries.
  • Smart patient monitoring: The latest See3CAM captures both visible and infrared light, ensuring clear images even in low-light conditions and enhancing the accuracy of patient monitoring applications. This helps with early detection of critical changes in health status, according to e-con.
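
A common way such RGB-plus-NIR data is turned into a crop-health indicator is a normalized difference vegetation index (NDVI), computed as (NIR - red) / (NIR + red). The sketch below is a generic illustration, not a description of e-con's software; band alignment and radiometric calibration are assumed away.

```python
# Generic NDVI computation from co-registered red and near-infrared frames.
# Illustrative only: a real pipeline needs band alignment and radiometric
# calibration, and this is not e-con's implementation.
import numpy as np

def ndvi(nir, red, eps=1e-6):
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    return (nir - red) / (nir + red + eps)   # values near +1 suggest dense, healthy vegetation

nir = np.random.rand(480, 640)   # stand-in for the IR frame
red = np.random.rand(480, 640)   # stand-in for the red channel of the RGB frame
print(float(ndvi(nir, red).mean()))
```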

See3CAM is now available

e-con Systems said that its expertise in OEM innovation and the new camera’s advanced features stand out in a competitive landscape. “See3CAM_CU83 is also the only RGB-IR camera with 4K resolution available in the market,” the company claimed.

e-con also offers customization services and integration support for the See3CAM to meet the requirements of unique applications. For customization or integration support, contact the company at camerasolutions@e-consystems.com.

VERSES and Volvo Cars aim to make autonomous vehicles safer for pedestrians https://www.therobotreport.com/verses-and-volvo-cars-aim-to-make-avs-safer-for-pedestrians/ https://www.therobotreport.com/verses-and-volvo-cars-aim-to-make-avs-safer-for-pedestrians/#respond Tue, 22 Oct 2024 12:28:43 +0000 https://www.therobotreport.com/?p=581207 The paper uses VERSES AI's algorithms to predict the appearance of pedestrians, cyclists, and cars that are obscured behind stationary vehicles.

The post VERSES and Volvo Cars aim to make autonomous vehicles safer for pedestrians appeared first on The Robot Report.

A white Waymo vehicle with the Waymo sensor stack on top. The car is stopped at an intersection with a pedestrian walking across the road in front of it.

The VERSES team presented its initial experiments by illustrating its capabilities using the Waymo open dataset. | Source: Waymo

Autonomous vehicles need to be able to anticipate and react to people entering streets. VERSES AI Inc. today announced the initial results of its Genius Beta Partner collaboration on that topic. The cognitive computing company published a paper co-authored by research teams at Volvo Cars and VERSES.

The paper explained the use of algorithms from VERSES to predict the appearance of pedestrians, cyclists, and cars that are obscured behind stationary vehicles and objects. The companies claimed that the paper represents an advancement beyond the current capabilities of autonomous vehicles and artificial intelligence. 

“As the automotive industry progresses towards fully autonomous self-driving cars, predicting where unseen obstacles like people or bicyclists may be or which trajectory they may be on has been a significant unsolved safety challenge,” said Gabriel René, CEO of VERSES.

“We believe the inability of current autonomous driving systems to overcome this hurdle is holding back the AV industry worldwide,” he added. “Volvo Cars is globally recognized for its unwavering commitment to vehicle safety. So, they were the perfect partner to work with to showcase how VERSES can help solve this problem.”

VERSES addresses driving uncertainty

The paper, titled “Navigation under uncertainty: trajectory prediction and occlusion reasoning with switching dynamical systems,” explores a way to help vehicles avoid people if they enter a street unexpectedly. It presents completed experiments illustrating capabilities using the Waymo open dataset.

The results demonstrate significant improvements in predicting animals, people, and objects entering the street, according to VERSES AI and Volvo.

VERSES said it is designing cognitive computing systems around first principles found in physics and biology. The Los Angeles-based company asserted that its flagship product, Genius, is a toolkit for developers to generate intelligent software agents that enhance existing applications with the ability to reason, plan, and learn. 

How does the framework operate?

Six illustrations showing visualizations of predicted vehicle trajectories.

The research included visualizations of predicted vehicle trajectories. | Source: VERSES AI

Predicting the future trajectories of nearby objects, especially under occlusion, is a crucial task in autonomous driving and safe robot navigation. The researchers said that prior works typically neglected to maintain uncertainty about occluded objects. Instead, they only predicted trajectories of observed objects through high-capacity models such as transformers trained on large datasets.

While these approaches are effective in standard scenarios, they can struggle to generalize to the long-tail, safety-critical scenarios, according to VERSES. This is why it set out to explore a conceptual framework unifying trajectory prediction and occlusion reasoning under the same class of structured probabilistic generative model, namely, switching dynamical systems.

The teams aimed to combine the reasoning of object trajectories and occlusions in a single framework. They did this with a class of structured probabilistic models called switching dynamical systems, which divides the modeling of complex continuous dynamics into a finite set/mixture of simple dynamics arbitrated by switching variables.

The main attraction of this class of models is that it provides a unified representation where hierarchical compositions generalize to both prototype-based trajectory prediction and object-centric occlusion reasoning, said the paper. For trajectory prediction, the switching variable represents the intent or behavior primitive chosen by the modeled object, and executing the selected intent generates trajectories prescribed by the attractor of the local dynamics.

“For occlusion reasoning, the switching variable represents objects’ existence, which in turn modulates the prediction of their sensory measurements in combination with the scene geometry,” said René. “A potential advantage of this unified yet structured framework is that it could use efficient inference and learning algorithms while still being amenable to manual specification of specific critical components, such as scene geometry.”
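
As a rough intuition for the model class, the toy sketch below simulates a switching linear dynamical system in which a discrete mode selects an attractor (for example, "continue along the lane" versus "drift toward the crosswalk") and simple linear dynamics pull the state toward it. It illustrates switching dynamical systems in general, not the rSLDS implementation evaluated in the paper.

```python
# Toy switching linear dynamical system: a discrete mode picks an attractor,
# and simple linear dynamics pull the 2D state toward it. This illustrates the
# model class only; it is not the rSLDS implementation from the paper.
import numpy as np

rng = np.random.default_rng(0)
attractors = {0: np.array([20.0, 0.0]),   # mode 0: keep heading down the lane
              1: np.array([10.0, 8.0])}   # mode 1: drift toward the crosswalk
transition = np.array([[0.95, 0.05],      # row-stochastic mode-transition matrix
                       [0.10, 0.90]])

x, mode, trajectory = np.zeros(2), 0, []
for _ in range(50):
    mode = int(rng.choice(2, p=transition[mode]))   # sample the switching variable
    # Linear pull toward the active attractor, plus process noise.
    x = x + 0.1 * (attractors[mode] - x) + rng.normal(scale=0.05, size=2)
    trajectory.append(x.copy())

print(np.round(trajectory[-1], 2))
```

In this framing, trajectory prediction amounts to inferring the current mode and rolling the generative model forward, while occlusion reasoning adds a switching variable for whether an object exists at all.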

Team uses Waymo data set to predict movements

To demonstrate the feasibility of this framework, the VERSES and Volvo team evaluated a minimal implementation on the Waymo open motion dataset to predict the motion of vehicles and pedestrians in occluded traffic scenes.

For trajectory prediction, the team compared the model’s prediction accuracy and uncertainty calibration against a few ablations. For occlusion reasoning, it visualized the model’s projections of potentially occluded pedestrian positions to show how uncertainty is maintained over time.

The team showed that both tasks can be embedded in the same framework and yet still be solvable with divide-and-conquer approaches. Its experimental results showed that, when conditioned on the same information, the closed-loop rSLDS models achieved higher predictive accuracy and uncertainty calibration.

“We believe this research project with Volvo Cars, part of our Genius Beta project, demonstrates a major advancement in autonomous vehicle safety capability,” the company added. “We expect the research project to pave the way for safer streets for pedestrians, cyclists, cars, robots, and beyond.”

VERSES said it plans to incorporate auxiliary information such as road graphs to improve prediction accuracy and to implement efficient inference algorithms that are particularly suitable to this family of models.

Sharpshooter implement uses Verdant Robotics AI to target weeds https://www.therobotreport.com/sharpshooter-implement-uses-verdant-robotics-ai-to-target-weeds/ https://www.therobotreport.com/sharpshooter-implement-uses-verdant-robotics-ai-to-target-weeds/#respond Tue, 22 Oct 2024 12:15:17 +0000 https://www.therobotreport.com/?p=581205 Verdant Robotics has launched the 2025 edition of its Sharpshooter weedkilling smart implement, which is designed for high-density crops.

The post Sharpshooter implement uses Verdant Robotics AI to target weeds appeared first on The Robot Report.

tractor pulls a verdant implement through a field.

Capable of covering up to 5 acres per hour, the Sharpshooter is light enough to operate in challenging conditions such as damp fields. | Credit: Verdant Robotics

Verdant Robotics yesterday unveiled the 2025 Sharpshooter and will debut the robotic weed killer this week at FIRA 2024 in Woodland, Calif. As a smart implement, the Sharpshooter attaches to a tractor and can spray up to 5 acres per hour, day or night. It uses artificial intelligence to target weeds while avoiding crop plants.

Early Sharpshooter customers are experiencing significant savings, cutting chemical inputs by up to 96% and reducing hand-weeding costs by an average of 65%, claimed the company. These efficiencies enable growers to achieve a rapid return on investment, typically within 12 to 24 months, it said.

Hayward, Calif.-based Verdant closed its $46.6 million Series A funding round in 2022.

AI-powered machine vision targets weeds

The Sharpshooter system uses Bullseye Aim & Apply Technology, Verdant Robotics’ proprietary Spatial AI, machine learning, and aiming-nozzle technology. The AI model can distinguish crop plants from weeds and then deliver millimeter-level accuracy for chemical application.

The system sprays plant targets ranging from the size of a dime to that of a dinner plate at a rate of 120 to 480 shots per second. Ninety-nine percent of shots land within 5 mm (0.1 in.) of the target, while efficiently covering up to 5 acres (2 hectares) per hour, said the company.
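
Taken at face value, those published figures imply a large per-acre shot budget; the arithmetic below uses only the numbers quoted above and is illustrative rather than a Verdant specification.

```python
# Back-of-the-envelope shot budget from the published figures.
shots_per_second = (120, 480)   # stated firing-rate range
acres_per_hour = 5              # stated coverage rate

for rate in shots_per_second:
    shots_per_acre = rate * 3600 / acres_per_hour
    print(f"{rate} shots/s -> {shots_per_acre:,.0f} shots per acre")
# 120 shots/s -> 86,400 shots per acre
# 480 shots/s -> 345,600 shots per acre
```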

“At Verdant, our mission is to provide growers with significant value through the application of cutting-edge technology,” said Gabe Sibley, CEO of Verdant Robotics. “The Sharpshooter is the culmination of years of research, field testing, and invaluable grower feedback. It delivers unmatched speed, efficiency, and versatility, giving growers the savings, precision, and control they need.”

rear shot of the verdant sharpshooter.

Sharpshooter targets and eliminates weeds of all sizes without slowing down. | Credit: Verdant Robotics

Sharpshooter is designed for precision

Verdant Robotics listed the following features of Sharpshooter:

  • Fast and lightweight design: The Sharpshooter’s lightweight construction allows for swift, agile operation even in challenging conditions—such as damp fields—boosting efficiency while minimizing soil compaction.
  • Optimized for high-density crops: Designed to handle both low- and high-density planting systems, the system can easily navigate crops like lettuce and carrots, said Verdant. Bullseye Aim & Apply ensures comprehensive weed control, thinning, and targeted application, even in tightly planted configurations.
  • Weed control for all sizes: The Sharpshooter targets and eliminates weeds of all sizes—from cotyledons to mature weeds—without slowing down. This capability ensures full coverage across the field and extends the operational window for growers.
  • Precision control suite: Across many crops and conditions, growers have full control of advanced features such as Crop Band Zone detection, 3D Crop Shield, Plant-line Detection, Adaptive Spray Size, and Customizable Safety Buffers.
  • Provides insights: Agronomic Performance Reports provide growers with visibility into weed density, coverage, application volume, and more.

Verdant offers limited-time founders plan

Verdant Robotics is offering a limited-time Founders Plan to celebrate the Sharpshooter’s launch. This exclusive support plan includes ongoing software updates, feature enhancements, and access to the Innovator Plan at no additional cost.

The company said this ensures that growers can benefit from continuous improvements and premium support. It is now taking orders for the Sharpshooter Model B 20-ft. implement, but availability is limited.

Gemini 335Lg Stereo Vision 3D Camera from Orbbec designed for collaborative, mobile robots https://www.therobotreport.com/gemini-335lg-stereo-vision-3d-camera-orbbec-launches-collaborative-mobile-robots/ https://www.therobotreport.com/gemini-335lg-stereo-vision-3d-camera-orbbec-launches-collaborative-mobile-robots/#respond Mon, 21 Oct 2024 14:36:00 +0000 https://www.therobotreport.com/?p=581194 Orbbec has released the Gemini 335LG GMSL2/FAKRA stereo vision 3D camera at ROSCon and has partnered with Universal Robots and NVIDIA.

The post Gemini 335Lg Stereo Vision 3D Camera from Orbbec designed for collaborative, mobile robots appeared first on The Robot Report.

The new Gemini 335LG camera from Orbbec with built-in depth processing.

The new Gemini 335LG camera offers secure and reliable connectivity for AMRs and cobots. Source: Orbbec

Collaborative robot arms and mobile robots need reliable perception and connectivity for complex tasks such as bin picking and palletizing, noted Orbbec Inc. The company today announced its Gemini 335Lg Stereo Vision 3D Camera.

The latest addition to the Gemini 330 series uses advanced Gigabit Multimedia Serial Link 2 (GMSL2) and a FAKRA connector to ensure secure and reliable connectivity for autonomous mobile robots (AMRs) in fulfillment centers, warehouses, and factories, said Orbbec.

“We are excited to introduce the Gemini 335Lg at ROSCon 2024,” said Michael McSweeney, vice president of Sales at Orbbec. “This new addition reflects our commitment to enhancing robotics solutions with advanced technology, offering greater stability and reliability for AMRs and robotic arms.”

Gemini 335Lg promises versatility

The Gemini 330 series offers versatile 3D capabilities for AMRs and robotic arms, said Orbbec. These cameras operate in both passive and active laser-illuminated modes, ensuring high-quality depth and RGB output in challenging indoor and outdoor lighting conditions, the company added.

  • Built-in depth processing: Equipped with Orbbec’s latest depth engine, ASIC MX6800, the camera processes high-resolution depth maps internally, freeing up bandwidth on the host processor for AI and robotics tasks.
  • Stable and smooth data transmission: This GMSL2/FAKRA-powered and IP65-rated Gemini 335Lg supports high-speed transmission (bandwidth up to 6Gbps) and long-distance connection (up to 15 m). It is also vibration-resistant and resilient in harsh environments, preventing disconnections during high-speed movement, long-distance transmissions, and challenging conditions with uneven terrain, dust and water, or electromagnetic interference.
  • Streamlined multi-device collaboration: The Gemini 335Lg supports the simultaneous connection of up to 16 cameras, promising precise synchronization of depth and color streams across devices for complex multi-camera setups.
  • Out-of-the-box experience: Fully compatible with the latest NVIDIA Jetson, ROS, and ROS2 platforms, the Gemini 335Lg offers seamless integration, whether connecting through a GMSL2 camera board or directly to a complete system; a minimal ROS 2 subscriber sketch follows this list.
  • USB support: Building on the foundation of the Gemini 335L, the Gemini 335Lg maintains the USB Type-C interface and data streaming performance while adding support for GMSL2 and FAKRA. Both cameras use a unified software development kit (SDK). According to Orbbec, this enables seamless application development and validation on the Gemini 335Lg using existing work developed for the Gemini 335L.
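
As a rough illustration of what ROS 2 integration looks like on the consuming side, the minimal rclpy sketch below subscribes to a depth image stream. The topic name is an assumption based on common camera-driver conventions, not a documented default of Orbbec's driver.

```python
# Minimal rclpy subscriber for a depth stream. The topic name is an assumption;
# check the topics the Orbbec ROS 2 driver actually advertises on your system.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image


class DepthListener(Node):
    def __init__(self):
        super().__init__("depth_listener")
        self.create_subscription(Image, "/camera/depth/image_raw", self.on_depth, 10)

    def on_depth(self, msg: Image):
        self.get_logger().info(f"depth frame {msg.width}x{msg.height}, encoding={msg.encoding}")


def main():
    rclpy.init()
    rclpy.spin(DepthListener())
    rclpy.shutdown()


if __name__ == "__main__":
    main()
```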

Orbbec collaborates with UR AI Accelerator

Orbbec will show the Gemini 335Lg at ROSCon 2024 this week in Odense, Denmark, as part of its partnership with collaborative robot leader Universal Robots‘ new UR AI Accelerator. The accelerator integrates the Gemini 335Lg with Universal Robots’ PolyScope X software and is powered by NVIDIA Isaac accelerated libraries and AI models, running on the NVIDIA Jetson AGX Orin system-on-module.

The UR AI Accelerator provides developers with an extensible platform to build cobot applications, accelerate research, and reduce time to market for artificial intelligence products. Orbbec previously worked with NVIDIA Jetson for the Persee N1 3D camera module.

The company also announced that its Depth+RGB cameras are now available for AMD Kria KR260 Robotics Starter Kits for AMR developers.

Founded in 2013, Orbbec offers products spanning structured light, stereo vision, and time-of-flight (ToF) technologies. The company said that developers and enterprises have deployed its AI vision systems in thousands of robotic, manufacturing, logistics, retail, 3D scanning, health, and fitness systems.

With in-house research and development, manufacturing, and supply chain management plus global support, Orbbec also offers ODM (original design manufacturer) engagements for custom and embedded designs. The company has offices in Troy, Mich., and Shenzhen, China.
