Microprocessors / SoCs Archives - The Robot Report
Robotics news, research and analysis

Renesas launches its highest performing MPU for industrial equipment
Thu, 28 Nov 2024

The RZ/T2H comes with the Renesas Flexible Software Package and a Linux package with long-term support.


An illustration of the RZ/T2H MPU and a blue industrial robot arm.

Renesas said the RZ/T2H MPU provides powerful application processing and fast real-time control. | Source: Renesas Electronics Corporation

Renesas Electronics Corp. this week launched the RZ/T2H, its highest-performance microprocessor for industrial equipment. Thanks to its powerful application processing and real-time performance, the RZ/T2H is capable of high-speed, high-precision control of industrial robot motors for up to nine axes, the company said.

As demand grows for automation to augment scarce labor, manufacturers are deploying systems such as vertically articulated robots and industrial controller equipment. Renesas claimed that the RZ/T2H microprocessor (MPU) combines all the functionality and performance needed for developing production applications.

Industrial systems traditionally required multiple MPUs or a combination of field programmable gate arrays (FPGAs) to control these applications. However, the RZ/T2H MPU offers the same functionality on a single chip, said Renesas. This can reduce the number of components and save time and cost of FPGA program development.

The MPU supports a variety of network communications including Industrial Ethernet on a single chip. It targets industrial controller equipment such as programmable logic controllers (PLCs), motion controllers, distributed control systems (DCSs), and computerized numerical controls (CNCs).

“We have enjoyed outstanding market success with RZ/T2M and RZ/T2L,” said Daryl Khoo, the vice president of the Embedded Processing 1st Business Division at Renesas. “The RZ/T2H builds on that momentum, allowing our industrial customers to leverage their existing design assets while addressing even more innovative, demanding industrial motor control and Linux applications. Our customers have been particularly impressed that the RZ/T2H enables them to implement a nine-axis motor control all on just one chip.”

A global provider of microcontrollers, Renesas combines expertise in embedded processing, analog, power, and connectivity to deliver complete semiconductor solutions. The Tokyo-based company said its products accelerate time to market for automotive, industrial, infrastructure, and Internet of Things (IoT) applications.




RZ/T2H can generate robot trajectories

The RZ/T2H is equipped with four Arm Cortex-A55 application CPUs with a maximum operating frequency of 1.2 GHz. For external memory, it supports 32-bit LPDDR4-3200 SDRAM. Two Cortex-R52 CPUs with a maximum operating frequency of 1 GHz handle the real-time processing, with each core equipped with a total of 576 KB of high-capacity tightly coupled memory (TCM).

This allows high CPU- and memory-intensive tasks such as running Linux applications, robot trajectory generation, and PLC sequence processing to be executed on a single chip. At the same time, the RZ/T2H can handle fast and precise real-time control such as motor control and Industrial Ethernet protocol processing, said Renesas.
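Trajectory generation of the kind described above is, at its core, a numeric routine run per axis. As a vendor-neutral illustration (not Renesas code), a minimal trapezoidal velocity profile for one axis might look like:

```python
import math

def trapezoidal_profile(dist, v_max, a_max, dt):
    """Sample positions along a trapezoidal velocity profile.

    Accelerate at a_max up to v_max, cruise, then decelerate to rest,
    falling back to a triangular profile when the move is too short
    to reach v_max.
    """
    t_acc = v_max / a_max
    d_acc = 0.5 * a_max * t_acc ** 2
    if 2 * d_acc > dist:                  # too short: triangular profile
        t_acc = math.sqrt(dist / a_max)
        v_peak = a_max * t_acc
        t_cruise = 0.0
    else:
        v_peak = v_max
        t_cruise = (dist - 2 * d_acc) / v_max
    t_total = 2 * t_acc + t_cruise
    d_acc = 0.5 * a_max * t_acc ** 2      # recompute for triangular case

    points = []
    for i in range(int(t_total / dt) + 1):
        t = i * dt
        if t < t_acc:                     # acceleration phase
            p = 0.5 * a_max * t ** 2
        elif t < t_acc + t_cruise:        # constant-velocity cruise
            p = d_acc + v_peak * (t - t_acc)
        else:                             # deceleration phase
            p = dist - 0.5 * a_max * (t_total - t) ** 2
        points.append(p)
    points.append(dist)                   # land exactly on the target
    return points
```

In a real system, one such profile would be computed per axis, nine of them for a nine-axis arm, with the real-time cores closing the current and position loops at each sample.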

The RZ/T2H can control up to nine axes, with peripherals including three-phase PWM timers, delta-sigma interfaces for measuring current values, and encoder interfaces. It supports the A-format, EnDat, BiSS, HIPERFACE DSL, and FA-CODER encoder protocols.

In addition, the company placed peripheral functions for motor control on a low-latency peripheral port (LLPP) bus of the Cortex-R52 real-time CPU core, allowing high-speed access from the CPU.

The RZ/T2H has four Ethernet ports and three Gigabit Ethernet MACs (GMACs), plus an Ethernet switch. It also supports EtherCAT, PROFINET, EtherNet/IP, OPC UA, and the next-generation Time-Sensitive Networking (TSN) standard.

The combination of these Ethernet switches and GMAC allows the MPU to support multiple Industrial Ethernet controllers and devices. Renesas said this allows the system to adapt to a wide range of controller requirements, such as upper-layer Ethernet communications.

Block diagram of the new RZ/T2H SoC. Source: Renesas

Renesas offers specialized boards and software

Like all Renesas MPUs, the RZ/T2H comes with the Renesas Flexible Software Package (FSP), as well as a Linux package with long-term support. An out-of-the-box, multi-axis motor control evaluation system is available. It includes inverter boards for driving nine-axis motors, a multi-axis motor control software package, and the Motion Utility Tool, a motor control software tool.

Renesas has also included sample protocols for Industrial Ethernet and software PLC packages to kick-start system development.

The company offers a “9-axis Industrial Motor Control with Ethernet” solution that combines the RZ/T2H with numerous compatible devices such as the RV1S9231A IGBT Drive Photocoupler and RV1S9353A Optically Isolated Delta-Sigma Modulator.

It said the resulting products enable compatible devices to work together to bring optimized, low-risk designs to market faster. Renesas offers more than 400 of these combinations with a wide range of products from its portfolio.

The RZ/T2H is now available. Renesas said it plans to release the new RZ/N2H device, which offers the same performance as the RZ/T2H in a smaller package, in the first quarter of 2025. It said this will be suitable for industrial controller equipment such as PLCs and motion controllers.

The RZ/T2H is managed under the Product Longevity Program (PLP) for industrial equipment that requires long life cycles.


ANELLO Photonics secures funding for inertial navigation in GPS-denied environments
Tue, 19 Nov 2024

ANELLO Photonics, which has developed compact navigation and positioning for autonomous systems, has closed its Series B round.


ANELLO evaluation kit for its SiPhOG optical navigation system.

ANELLO offers an evaluation kit for its navigation and positioning system. Source: ANELLO Photonics

Self-driving vehicles, mobile robots, and drones need multiple sensors for safe and reliable operation, but the cost and bulk of those sensors have posed challenges for developers and manufacturers. ANELLO Photonics Inc. yesterday said it has closed its Series B funding round for its SiPhOG inertial navigation system, or INS.

“This investment not only validates our SiPhOG technology and products in the marketplace, but will [also] allow us to accelerate our manufacturing and product development as we continue to push the boundaries and leadership for navigation capabilities and performance to our customers who want solutions for GPS-denied environments,” stated Dr. Mario Paniccia, co-founder and CEO of ANELLO Photonics.

Founded in 2018, ANELLO has developed SiPhOG — Silicon Photonics Optical Gyroscope — based on integrated photonic system-on-chip (SoC) technology. The Santa Clara, Calif.-based company said it has more than 28 patents, with 44 pending. Its technologies also include a sensor-fusion engine using artificial intelligence.

“I spent 22 years at Intel and started this field of silicon photonics, which is the idea of building optical devices out of standard silicon processing, mostly focused on the data center,” recalled Paniccia. “Mike Horton, my co-founder, was a sensor gyro expert who started a company called Crossbow coming out of UC Berkeley.”

“Everyone doing autonomy was saying lidar and radar, but customers told Mike that if we could build an integrated photonic chip, they’d be very interested,” he told The Robot Report. “If you look at fiber gyros, they work great but are big, bulky, and expensive.”

“The stuff on our phones is MEMS [micro-electromechanical systems]-based today, which is not very accurate and is very sensitive to temperature, vibration, and EM interference,” Paniccia explained. “With the same concept as a fiber gyro — the idea of light going around a coil, and you measure the phase based on rotation — we integrated all those components on a single chip, added a little laser, and put electronics around it, and you now get SiPhOG, which fits in the palm of your hand.”
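The phase-based measurement Paniccia describes is the Sagnac effect: rotation at rate Ω induces a phase difference Δφ = 4πLRΩ/(λc) between counter-propagating beams in a coil of length L and radius R. A quick back-of-the-envelope check, using illustrative numbers only (not ANELLO's actual design parameters):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def sagnac_phase(coil_length_m, coil_radius_m, rate_rad_s, wavelength_m=1.55e-6):
    """Phase difference between counter-propagating beams in a rotating coil."""
    return (4 * math.pi * coil_length_m * coil_radius_m * rate_rad_s) / (wavelength_m * C)

# Earth's rotation (~7.29e-5 rad/s) seen by a modest 100 m, 2 cm-radius coil:
earth_rate = 7.292e-5
dphi = sagnac_phase(100.0, 0.02, earth_rate)
print(f"{dphi:.2e} rad")  # a few microradians
```

The 50-nanoradian resolution Paniccia cites sits well below even this tiny signal, which is what makes slow rotations measurable on a chip-scale device.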




SiPhOG combines compactness and precision

SiPhOG brings high-precision sensing to an integrated silicon photonics platform, claimed ANELLO. It is based on the interferometric fiber-optic gyroscope (FOG) but is designed for compactness, said Paniccia.

“It’s literally 2 by 5 mm,” he said. “On that chip, we have all the components — the splitters, the couplers, the phase modulators, and the delay lines. We measure about 50 nano-radians of signal, so a tiny, tiny signal, but we measure it very accurately.”

The system also has a non-ASIC, two-sided electronics board with an analog lock-in amplifier, a temperature controller, and an isolator, Paniccia said. It has none of the drawbacks of MEMS and uses 3.3 volts, he added.

Paniccia said the SiPhOG unit includes an optical gyro, triple-redundant MEMS, accelerometers, and magnetometers. It also has two GPS chips and dual antennas and is sealed to be waterproof.

The ANELLO IMU+ is designed for harsh environments, including construction, robotics, mining, trucking, and defense. Source: ANELLO

Navigation system ready for multiple markets

Autonomous systems can work with ANELLO’s technology and the Global Navigation Satellite System (GNSS) for navigation, positioning, and motion tracking for a range of applications, said the company.

“We’re shipping to customers now in orchards, where the leaves come in, and the water in them essentially acts like a tunnel, absorbing GPS,” Paniccia said. “Our algorithm says, ‘I’m losing GPS, so weigh the navigation algorithm more to the optical gyro.’ You want the robot to stay within a tenth of a meter across a distance of half a mile. Long-distance, we’re looking at 100 km of driving without GPS with less than 100-m lateral error.”
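The re-weighting Paniccia describes can be pictured as a complementary filter that shifts trust toward the gyro as GPS quality degrades. The sketch below is a hypothetical illustration of that idea, not ANELLO's actual sensor-fusion algorithm:

```python
def fuse_heading(gyro_heading, gps_heading, gps_quality):
    """Blend a dead-reckoned heading with a GPS-derived heading.

    gps_quality runs from 0.0 (no fix) to 1.0 (full fix); with no fix,
    the estimate falls back entirely to the integrated gyro heading.
    """
    w = max(0.0, min(1.0, gps_quality))
    return (1.0 - w) * gyro_heading + w * gps_heading

def integrate_gyro(heading, rate_rad_s, dt_s):
    """Dead-reckon heading by integrating the gyro's rate output."""
    return heading + rate_rad_s * dt_s
```

In practice, the weighting would be handled by a full state estimator (e.g., a Kalman filter) over position, velocity, and attitude, but the principle is the same: as GPS confidence drops, the optical gyro carries the solution.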

In addition, SiPhOG is built for scalability and cost-effectiveness.

“VC friends tell me that automakers are putting six lidar systems on a car, and each one is $10,000. It’s never going to get to mass market,” Paniccia said. “We have an optical technology for land, air, and sea. And whether that land vehicle is for agriculture or construction, or in the longer term, trucking or autonomous cars, we can do it.”

“You can literally tape SiPhOG to a dashboard and plug it into the cigarette lighter,” he said. “We have self-alignment correction, and within 15 minutes, you can have GPS-denied navigation capability. We’re also shipping this system for indoor robots like in construction.”

“If I put three SiPhOGs in a cube, I can have the same performance but at one-fifth the size and weight and a quarter of the power for precision in three dimensions,” said Paniccia. “That’s exciting for drones and maritime.”

Investors to accelerate ANELLO 

Lockheed Martin, Catapult Ventures, and One Madison Group co-led ANELLO’s unspecified Series B round. New Legacy, Build Collective, Trousdale Ventures, In-Q-Tel (IQT), K2 Access Fund, Purdue Strategic Ventures, Santuri Ventures, Handshake Ventures, Irongate Capital, and Mana Ventures also participated. 

“We’re committed to fostering the art of the possible with investments in cutting edge technologies, including advancements in inertial navigation that have the potential to enhance autonomous operations in GPS-denied environments,” said Chris Moran, vice president and general manager of Lockheed Martin Ventures. “Our continued investment in ANELLO reflects our mission to accelerate technologies that can ultimately benefit national security.”

ANELLO said it plans to use its latest funding to continue developing and deploying its technology. The company has worked with the U.S. Department of Defense to optimize its algorithms against jamming or spoofing.

“Every week, there’s an article about a commercial flight or defense-related mission getting GPS jammed, like thousands of flights to and from Europe affected by suspected Russian jamming,” noted Tony Fadell, founder of Nest and a principal at investor Build Collective. “GPS has become a single point of failure because it’s too easily compromised with various jamming and spoofing techniques.”

“ANELLO’s proven and commercially available optical gyroscope is the only navigational tool that can take over, [offering] precision over long periods of time, the size of a golf ball, low-power, low-cost, that’s immune to shock and vibration,” he added. “ANELLO will save lives in the air, on the road, and over water.”


Advantech partners with oToBrite to create low-latency AI for AMRs
Thu, 31 Oct 2024

Advantech and oToBrite said the joint system will enable high-resolution, low-latency AI for next-generation AMRs.


A graphic showing oToBrite's automotive GMSL cameras and the Intel Core Ultra H/U, which now work with Advantech AI.

Advantech and oToBrite said their joint system will benefit industries from logistics to manufacturing. | Source: oToBrite

oToBrite Electronics Inc. this week announced a strategic partnership with Advantech to co-develop high-performance, cost-effective perception for mobile robots. oToBrite will bring its experience with artificial intelligence, machine vision, and automotive-grade cameras, while Advantech will provide its expertise in the global industrial Internet of Things (IIoT).

The collaborators said they will integrate oToBrite’s high-speed automotive Gigabit Multimedia Serial Link (GMSL) cameras with Advantech’s AFE-R360 platform, powered by the Intel Core Ultra H/U (Meteor Lake).

The joint system will enable high-resolution, low-latency AI for next-generation autonomous mobile robots (AMRs), benefiting industries from logistics to manufacturing, said the companies.

oToBrite says GMSL cameras meet industry needs

AMR applications have expanded into warehouse logistics, last-mile delivery, and terminal or yard tractors. In response to this, oToBrite said integrating GMSL technology addresses the increasing need for real-time, uncompressed, and high-resolution perception. The company said its technologies enable accurate autonomous navigation in diverse environments.

As a provider of advanced driver-assist systems (ADAS), oToBrite has manufactured several vision-AI products for major automakers. Those products rely on high-speed data transmission to handle the large data flow from multiple cameras and enable real-time processing in vehicles.

To meet demand, oToBrite has integrated GMSL technology in products like SAE Level 2+ ADAS and Level 4 autonomous valet parking. The Hsinchu, Taiwan-based company said its automotive experience and technology will enable customers to successfully deploy AMRs with Advantech.

The oToCAM222 series, with 2.5-megapixel resolution, offers multiple viewing angles (63.9°, 120.6°, or 195.9°). The company said this makes it suitable for low-speed AMR applications in challenging industrial environments. The camera offers high-speed, low-latency data processing and IP67/69K-rated durability, said oToBrite.

The company also noted that it has created advanced vision-AI models, embedded system software for various platforms, and active alignment technology for IP67/69K automotive cameras in its IATF16949-certified factory. 




Advantech initiates partnerships with other camera providers

The AFE-R360 platform is powered by Intel’s Core Ultra 16-core processor with Arc graphics and an integrated neural processing unit (NPU) delivering up to 32 trillion operations per second (TOPS), enhanced by the OpenVINO toolkit for optimized AI performance, said Advantech.

In addition to its partnership with oToBrite, the Taipei, Taiwan-based company recently initiated a partner alignment strategy, gathering top camera providers to develop systems for the AFE-R360. These include Intel RealSense, e-con Systems, and Innodisk. 

“Advantech is proud to collaborate with Intel and camera vendors to strengthen our AMR solution,” stated James Wang, director of embedded applications at Advantech. “By incorporating MIPI and GMSL interfaces into AFE-R360, Advantech is committed to providing our customers with cutting-edge technology that meets the challenges of tomorrow. These interfaces not only enhance performance but also enable new possibilities in imaging applications across various industries.” 

The offering also includes a 3.5-in. (8.8-cm) single board computer (SBC) supporting up to eight MIPI-CSI lanes for seamless GMSL input, ensuring low latency and high noise immunity essential for autonomous operations, said the companies. It also has three LAN and three USB-C ports for integrating depth and lidar sensors.

oToBrite and Advantech said their combination of AI and advanced GMSL camera technology will enhance cost-effective AMR systems.


Pittsburgh Robotics Network partners with NVIDIA to accelerate ecosystem growth
Tue, 08 Oct 2024

Pittsburgh Robotics Network members, CMU, and Pitt will all gain access to NVIDIA technologies for developing AI and robotics.



NVIDIA CEO Jensen Huang with PRN Director Jennifer Apicella. Source: Pittsburgh Robotics Network

The Pittsburgh Robotics Network today announced that it is working with NVIDIA to foster innovation and enhance connections between the commercial robotics community, academia, and research institutions. The collaborators said they intend “to advance the growth and impact of robotics by bridging the gap between research, innovation, and the commercialization of intelligent autonomy and robotics across multiple industries.”

As part of this initiative, NVIDIA is offering its accelerated computing and artificial intelligence platforms to accelerate the development and commercialization of robotics technologies in Pittsburgh. The Santa Clara, Calif.-based company said it plans to expand its cooperation with local institutions such as Carnegie Mellon University (CMU) and the University of Pittsburgh (Pitt) to encourage the application of research to real-world applications, from autonomous systems to intelligent machines.

“The era of physical AI is here,” stated Amit Goel, head of robotics ecosystems at NVIDIA. “Working with the Pittsburgh Robotics Network, the University of Pittsburgh, and Carnegie Mellon University will jumpstart meaningful private-public collaborations to further accelerate national generative AI and robotics expertise and innovation.”

Goel will be participating in a keynote panel discussion on robotics innovation at RoboBusiness 2024 next week in Santa Clara, Calif.




NVIDIA launches AI Tech Community

NVIDIA has also launched its inaugural NVIDIA AI Tech Community in Pittsburgh, aimed at fostering public-private partnerships for AI innovation. As part of this initiative, the company plans to establish joint technology centers at CMU and Pitt to equip researchers, students, and faculty with cutting-edge AI technologies.

NVIDIA will provide the universities with access to its latest AI software and frameworks — such as NVIDIA Isaac Lab for robot learning, NVIDIA Isaac Sim for designing and testing robots, NVIDIA NeMo for custom generative AI, and NVIDIA NIM microservices, available through the NVIDIA AI Enterprise software platform.

The centers, slated for launch next month, will serve as hubs for research and development in areas such as autonomous systems, AI-driven robotics, and intelligent systems, said NVIDIA. It is also increasing its engagement with Pittsburgh-based members of the NVIDIA Inception program for startups and the NVIDIA Connect program for software-development companies and service providers.

Pittsburgh Robotics Network welcomes regional support

“Pittsburgh is home to a vibrant and growing robotics and AI ecosystem,” said Jennifer Apicella, executive director of the PRN. “With NVIDIA’s technical support, the Pittsburgh Robotics Network is in a stronger position than ever to continue accelerating the region’s leadership in commercializing advanced technologies.”

“With NVIDIA’s collaboration, we are better equipped to support the ecosystem and bring cutting-edge robotics solutions to market faster,” she added.

The Pittsburgh Robotics Network is a non-profit organization dedicated to building a world-leading robotics ecosystem in the region. By connecting businesses, investors, and academia, the PRN said it is working to promote the commercial growth and impact of robotics in the region and beyond.


In surgical robotics, buying time can save lives, explains BlackBerry QNX
Sun, 06 Oct 2024

Surgical robots need precision and repeatability, which real-time operating systems and modern perception can provide.



Surgical robots and other automation can benefit from real-time operating systems, says BlackBerry QNX. Source: Adobe Stock

Imagine this: In the blink of an eye—approximately 100 milliseconds—your brain has already processed visual information, allowing you to react to what you see in real time. However, in the world of surgical robotics, the blink of an eye is a lifetime. It’s simply not fast or good enough.

Consider the precision required to navigate a scalpel through delicate tissues, avoid vital organs and blood vessels, and respond to any sudden patient movements. A delay or miscalculation, even by 100 milliseconds, could mean the difference between life and death.

For this reason, surgical robotic systems must operate with extraordinary speed and precision, often needing to perform actions and respond to any event in the range of low single-digit milliseconds.

But let’s break this down even further. In critical scenarios, like stopping a bleeding vessel or making an incision near a sensitive nerve, every microsecond counts. A surgeon relies on the robotic system to translate their hand movements instantaneously into action, without delay, jitter, or hesitation, and react to events such as patient movement or one of the sensors failing.

If the system takes too long to respond, or if there’s any inconsistency in timing — known as jitter — the outcome becomes unpredictable, and that, in itself, could be catastrophic. Surgical robotic systems are built to strict timing requirements; failing to meet them could cause unintended damage, prolong procedures, or increase the risk of complications.
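Jitter is straightforward to quantify: run a nominally fixed-period loop and record how far each wake-up drifts from its deadline. The sketch below uses ordinary Python on a general-purpose OS, so it demonstrates the measurement technique, not RTOS-grade numbers:

```python
import time

def measure_jitter(period_s=0.001, cycles=200):
    """Return each cycle's deviation (seconds) from its ideal wake-up time."""
    deviations = []
    start = time.perf_counter()
    for i in range(1, cycles + 1):
        target = start + i * period_s
        # Sleep until just before the deadline, then busy-wait the rest
        # (a common trick to reduce scheduler-induced oversleep).
        remaining = target - time.perf_counter()
        if remaining > 0.0002:
            time.sleep(remaining - 0.0002)
        while time.perf_counter() < target:
            pass
        deviations.append(time.perf_counter() - target)
    return deviations

jitter = measure_jitter()
print(f"worst-case jitter: {max(jitter) * 1e6:.1f} µs")
```

On a desktop OS, the worst case can swing by milliseconds under load; an RTOS is engineered so the same measurement stays bounded in the microsecond range, which is exactly the guarantee surgical control loops need.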




Haptic, visual systems support real-time integration

Modern surgical robotics systems are moving to combine advanced visualization tools with haptic feedback to provide a comprehensive sensory experience for the surgeon. The integration of stereoscopic UHD (ultra-high-definition) vision systems and haptic feedback mechanisms allows surgeons to see and feel the surgical environment as if they were directly interacting with the patient’s tissues.

The reliability of these sensory systems is crucial for robotic surgery. If a process were to hang, stall, or jitter—whether due to system overload, software or hardware issues, or resource contention—it would lead to significant issues and a lack of trust in the system itself. For example:

Visual delays: A delayed camera feed could inhibit the real-time visual information provided to the surgeon to navigate and make precise movements.

Even a “blink of an eye” lag could impair the surgeon’s ability to accurately perceive the surgical field. This visual lag may cause the surgeon to make an incorrect motion or misjudge the spatial relationships between tissues, potentially leading to accidental damage or errors in the procedure.

Haptic latency: Similarly, latency in tactile response could disrupt the surgeon’s sense of touch, preventing them from feeling the texture, resistance, and tension of tissues and instruments in real-time. Any delay in haptic feedback could cause the surgeon to receive late tactile information, leading them to apply too much or too little force, which could result in potential tissue damage or improper manipulation of instruments.

The combination of these systems must operate seamlessly in real time to ensure that the surgeon receives immediate and accurate feedback from both visual and tactile sources. This level of precision and accuracy is only possible when the software and hardware are perfectly synchronized, ensuring low latency and minimal jitter across all processes.

OS and hardware have a symbiotic relationship

To achieve the level of precision and speed required in surgical robotics, it’s not just about powerful hardware or an advanced operating system (OS), or complex applications potentially using artificial intelligence. It’s also about how well-integrated and responsive all of these elements are, and how they work together.

The relationship between software and hardware is akin to the synergy between a skilled surgeon and their instruments. Even the most advanced tool is only as effective as the hand that guides it. In the same way, high-performance hardware with advanced CPU and GPU functionalities requires an equally sophisticated operating system to maximize its potential.

In surgical robotics, innovations like stereoscopic UHD vision systems and haptic feedback generate enormous amounts of data that must be processed in real-time. The GPU handles the heavy lifting of processing the high-definition video feed, providing the surgeon with an immersive and detailed view of the surgical field.

Meanwhile, the CPU is tasked with managing the influx of data, coordinating various processes, and ensuring smooth communication between system components.

However, for this intricate dance between the CPU and GPU to succeed, the OS must manage these resources so that the complex surgical applications running on top can use the underlying hardware efficiently, reliably, and deterministically. The OS needs to ensure that both the CPU and GPU operate in harmony, processing data efficiently and in real time.

Without a robust and real-time OS to synchronize these components, the system could falter, unable to meet the demands of modern surgery.

 

The importance of low latency and low jitter in operating systems

This is where the importance of a real-time operating system (RTOS) comes into play. For example, an RTOS like BlackBerry QNX OS 8.0 isn’t just about managing different tasks in parallel as quickly as possible—it’s also about ensuring that every task is executed with the utmost precision, accuracy, and speed.

The RTOS must be finely tuned to work in harmony with the hardware and user applications, ensuring that the system can handle multiple high-priority tasks simultaneously, with minimal latency and jitter.

By minimizing latency and jitter, the RTOS effectively buys more time for complex surgical applications to process critical information and make real-time decisions.

Unexpected delays or issues introduced by the RTOS will have a cascading effect, amplifying the total delay across the entire system. This impact will degrade overall system performance, potentially leading to life-threatening situations in a surgical environment, not to mention a lack of trust in the surgical system.

Therefore, maintaining low latency and jitter is not just about performance; it’s about ensuring that the system performs its life-saving functions consistently without compromise.

Real-time operating systems: The heartbeat of surgical robotics

In surgical robotics, the coupling of hardware and software is not just important; it’s critical. This synergy ensures that the system can manage tasks as efficiently as possible, leaving room for software applications to run without compromising performance.

In such systems, handling interrupts is of the utmost importance. An interrupt signals that a process or event, such as a failing sensor, needs urgent attention; handling it as quickly as possible, ideally within microseconds, is a necessity.

This is why an RTOS designed specifically for this purpose is essential, capable of handling such critical tasks and interrupts with minimal jitter. This buys time for the surgical software to respond to such interrupts and, in some cases, enter a “fail-safe” state.
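The fail-safe behavior described above amounts to a deadline watchdog: if an expected event, such as a sensor heartbeat, does not arrive in time, the system stops trusting its inputs. A simplified, hypothetical sketch of the pattern (not QNX API code):

```python
import time

class SensorWatchdog:
    """Enter a fail-safe state if a sensor heartbeat misses its deadline."""

    def __init__(self, timeout_s):
        self.timeout_s = timeout_s
        self.state = "running"
        self._last_beat = time.monotonic()

    def heartbeat(self):
        """Called whenever the sensor delivers fresh, valid data."""
        self._last_beat = time.monotonic()

    def check(self):
        """Called periodically; latches fail-safe once the deadline is missed."""
        if time.monotonic() - self._last_beat > self.timeout_s:
            self.state = "fail-safe"  # e.g., freeze the arm and alert the surgeon
        return self.state
```

In a real RTOS, the check would be driven by a timer interrupt at a guaranteed rate, which is precisely why minimal, bounded interrupt latency matters: the watchdog is only as trustworthy as the scheduler that runs it.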

Real-time performance is the future of robotics

The importance of a high-performance RTOS in the surgical environment cannot be overstated. It is the backbone that allows these systems to operate with the precision and reliability that surgeons and patients alike depend on.

But the need for such robust, real-time performance isn’t limited to surgical robots. Given the advanced capabilities of modern RTOS, one must wonder: Why aren’t advanced RTOS deployed everywhere, from industrial robots that require precise, fault-tolerant operation on the factory floor to drones that must navigate complex environments with split-second timing?

As the field of robotics continues to evolve across various industries, the adoption of advanced RTOS will be key to pushing the boundaries of what’s possible, ensuring not just the success of surgical procedures, but also the reliability and safety of robotics in manufacturing, logistics, defense, and beyond.

About the author

Winston Leung is a senior manager at BlackBerry QNX.

Founded in 1980, QNX supplies commercial operating systems, hypervisors, development tools, and support and services for critical embedded systems. Acquired by BlackBerry in 2010, the Ottawa, Canada-based unit serves industries including aerospace and defense, automotive, heavy machinery, industrial controls, medical, and robotics.

Editor’s note: This article is posted with permission.

The post In surgical robotics, buying time can save lives, explains BlackBerry QNX appeared first on The Robot Report.

BlackBerry QNX provides guidance on minimizing jitter, latency in robotics
https://www.therobotreport.com/blackberry-qnx-provides-guidance-on-minimizing-jitter-latency-in-robotics/
Fri, 20 Sep 2024

BlackBerry QNX says its foundational software enables developers to make robots more reliable and precise for a range of applications.

A whitepaper explains how to reduce robot jitter with software for greater industrial productivity. Source: BlackBerry QNX

Robots need to precisely synchronize for manufacturing applications such as assembly, welding, and materials handling. BlackBerry QNX recently released a whitepaper on “Optimizing Robotic Precision: Unleashing Real-Time Performance with Advanced Foundational Software Solutions.”

The document provides guidance to manufacturers on reducing jitter in high-speed robotic motion. Otherwise, it can lead to misaligned components, defective products, and a decrease in throughput and efficiency, said the company.





BlackBerry QNX explains purpose of whitepaper

Louay Abdelkader, senior product manager at QNX, replied to the following questions about the whitepaper from The Robot Report:

Who is the target audience for this whitepaper?

Abdelkader: Our QNX whitepaper is meant to advise and inform those responsible for building the software that goes into automated guided vehicles (AGVs), autonomous mobile robots (AMRs), robot motion controllers, and teach pendants for robot control, as well as data collection and processing, mapping, image analysis, path planning, obstacle avoidance, and autonomy.

For example, it is relevant to software engineers, developers and leaders, product/program managers, and other technical and non-technical audiences.

What are the conventional approaches for mitigating jitter and latency in robotics, and how do they fall short?

Abdelkader: As the needs for fully or even partially autonomous systems increase, the software stack becomes the centerpiece of these systems, and as such, they become more feature-rich and complex. Because these systems are safety-critical and require very reliable and deterministic behavior, “hard” real-time requirements become prevalent.

The limitations of general-purpose operating systems like Linux become more apparent due to their lack of safety certifications and hard real-time behavior.

There are several reasons why the switch from a general-purpose OS with “soft” real-time behavior to a hard real-time OS (RTOS) makes sense. These include:

  • Determinism and timing guarantees – Hard RTOS provide strict timing guarantees on response times and execution deadlines.
  • Complexity management – As software in robots becomes more complex, ensuring that your OS can handle mixed-criticality tasks is essential. Hard RTOS provide features and capabilities that allow designers to run mixed-criticality software in the same stack while ensuring the right separation to avoid cross-contamination in case of faults or safety- or security-related events during deployment of the system.
  • Safety – Hard RTOS come with safety and security features baked in, because they are used in safety- and mission-critical systems. Some hard RTOS, like the ones QNX provides, come not only with the safety features and capabilities, but also with the safety certifications needed for specific environments, including industrial automation.
  • Fault tolerance and reliability – In safety-critical applications, such as collaborative robot applications and surgical robots, fault tolerance and reliability are paramount. Hard RTOS are often designed to be very robust, with high mean time between failures, as well as mechanisms to handle faults and ensure continued operation even in the event of hardware failures or unexpected events. This is particularly effective with a microkernel architecture, where the kernel and OS services are separated: if an OS service fails, it cannot contaminate the kernel and bring the system down.
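The determinism described above comes largely from priority-preemptive scheduling. The dispatch rule itself is simple enough to sketch in a few lines of Python; this illustrates the policy generically and is not QNX code:

```python
def next_task(ready_tasks):
    """Priority-preemptive dispatch rule used by hard real-time schedulers:
    of all ready tasks, the highest-priority one always runs next.
    Ties fall back to arrival order, because Python's max() returns the
    first maximal element (time-slicing among equals is omitted here).
    """
    return max(ready_tasks, key=lambda t: t["priority"])

ready = [
    {"name": "logging", "priority": 10},
    {"name": "motor_control", "priority": 90},
    {"name": "ui_refresh", "priority": 20},
]
```

With this rule, a motor-control task always preempts logging and UI work, no matter how the ready queue is ordered.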

What are some examples of real-time operating systems for the “soft,” “firm,” and “hard” approaches?

Abdelkader: With hard real-time behavior, strict time constraints with guaranteed response times are expected. Missing a deadline is not an option, as the consequences are nothing short of catastrophic, especially in highly safety-critical applications.

Consider an AMR navigating a high-traffic warehouse, for example. Any delay in its ability to respond to obstacles and change direction could lead to collisions, potentially causing damage to goods and posing a safety risk to personnel. 

Soft real-time behaviors introduce a measure of flexibility where a system’s operations degrade if it cannot meet specific timing requirements. While these systems aim to meet deadlines, they can occasionally tolerate minor deviations without disastrous outcomes.

In an industrial setting, vision systems for inspection play a role here. These systems ensure the quality and accuracy of manufactured products, where minor delays in inspection may affect production efficiency but not result in severe consequences. 

Firm real-time behavior is akin to soft real-time, but with a slight difference: data arriving after the deadline is deemed invalid. A prime example in robotics is automated 3D printing systems.

In additive manufacturing, if a layer isn’t deposited precisely on time, it can result in defects in the final product. While minor deviations might not be catastrophic, they could lead to the rejection of a printed part, which can harm production efficiency and waste materials.
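The three behaviors can be summarized with the textbook value-of-result model, sketched here in Python; the categories follow the article's examples, and the function is purely illustrative:

```python
def result_value(behavior, completed_by_deadline):
    """Value of a computation's result under the three real-time classes:
    hard - a missed deadline is a system failure;
    firm - a late result is worthless but not catastrophic;
    soft - a late result retains some degraded value.
    """
    if completed_by_deadline:
        return "full value"
    return {
        "hard": "system failure",    # e.g. AMR obstacle avoidance
        "firm": "result discarded",  # e.g. late 3D-printer layer
        "soft": "degraded value",    # e.g. delayed vision inspection
    }[behavior]
```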

We’ve heard that artificial intelligence can get away from deterministic and rigid approaches to robotic reactions. Is that true yet?

Abdelkader: In robotic systems, low latency and jitter are essential for both AI and non-AI applications. In robot applications, real-time control will continue to require low latency (minimal delay in processing) and low jitter (consistency in timing) to ensure safe and deterministic operation.

Determinism ensures that responses to sensor inputs or environmental changes happen predictably and within a specified timeframe. Where AI models are deployed on the edge, on devices with limited computation resources, deterministic processing can also help optimize resource utilization and ensure timely responses without unpredictable delays.

RTOS promises to improve robot reliability

How might QNX improve reliability and safety? Would multiple systems be needed while a component is restarted?

BlackBerry has designed its QNX microkernel for optimization.


Abdelkader: The QNX RTOS is a hard real-time OS built with the microkernel architecture, renowned for its inherent security, safety, and reliability. This architecture isolates the kernel, which is the most important component of the OS, in its own memory space and operates the system services in their own memory space outside the kernel, which provides additional isolation and safety barriers within the OS.

By reducing complexity and potential failure points, QNX facilitates thorough verification through rigorous testing, including fault injection testing and formal methods. Faults within individual components are contained, enabling dynamic restarts without system-wide impact or shutdowns.

In safety-critical embedded systems, where the maxim “No safety without security” holds true, the microkernel’s small footprint enhances security practices and restricts privileged access. Moreover, its modular design allows for customization tailored to specific application requirements, making QNX an ideal choice for robotics systems prioritizing robust safety and reliability.

Does this RTOS require additional power or sensing? Are there minimum requirements? How ruggedized does it need to be?

Abdelkader: The QNX RTOS runs on microprocessors with memory management units, based on Intel x86, Armv8, or Armv9 processors. When comparing the size of QNX to Linux, the number of software lines of code in the kernel is significantly smaller.

As a result, it typically requires less memory and processing power to operate and bring up the kernel efficiently with the added benefits of improving performance where real-time applications require predictability and consistent behavior.

The smaller kernel codebase also means that there are fewer, and smaller, potential points of vulnerability. This helps enhance the robustness and reliability of the system, since the kernel is the most important component of the OS.

Time-tested in numerous applications around the world, QNX is typically deployed everywhere from slower-paced environments, like nuclear power plants or ocean buoys, to faster-paced environments, like industrial robots and automotive systems.

QNX can integrate with other systems

Does the QNX architecture rely on any connectivity to the cloud, fleet managers, or other robots? Can it be used with them?

Abdelkader: The QNX architecture itself is primarily designed for embedded systems and real-time applications where reliability, safety, and security are a priority. It operates independently of cloud connectivity or fleet management systems.

Although QNX itself does not provide built-in cloud-connectivity features, it can be integrated with cloud services, fleet management systems or additional software layers. Developers can implement cloud connectivity with middleware such as ROS or applications that support communication protocols like MQTT or OPC UA.

This process allows QNX-based devices to interact with cloud services for data storage, remote monitoring, digital twinning, etc.

QNX can also be integrated into fleet management systems through software applications that handle tasks such as device tracking, telemetry data collection, and fleet optimization. This integration involves developing software components that communicate with QNX devices.

As it relates to inter-robot communication, robots using QNX can use standard communication protocols like TCP/IP to collaborate, share data, and coordinate tasks effectively.
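As a toy illustration of that kind of peer-to-peer exchange, assuming nothing about any particular robot's actual protocol, two processes could trade status over plain TCP sockets like this:

```python
import socket
import threading

def start_status_server(host="127.0.0.1", port=0):
    """Tiny TCP endpoint a robot could expose so peers can query its
    status. Returns the OS-assigned port. The one-line request/response
    protocol here is purely illustrative."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)

    def serve_once():
        conn, _ = srv.accept()
        with conn:
            if conn.recv(1024).strip() == b"STATUS?":
                conn.sendall(b"IDLE\n")
        srv.close()

    threading.Thread(target=serve_once, daemon=True).start()
    return srv.getsockname()[1]

def query_status(port, host="127.0.0.1"):
    """Peer-side call: ask another robot for its status over TCP."""
    with socket.create_connection((host, port), timeout=5) as c:
        c.sendall(b"STATUS?\n")
        return c.recv(1024).strip().decode()
```

In practice such exchanges would ride on middleware like ROS or a broker protocol like MQTT, but the underlying transport is the same standard TCP/IP stack.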

What do robotics developers and suppliers need to know about integrating such systems in their products? How much work is required on their side?

Abdelkader: QNX provides a POSIX-compliant RTOS, which simplifies development for those familiar with POSIX-compliant operating systems like Linux. This compatibility means that developers can use existing knowledge and tools, streamlining the integration process.

Moreover, QNX offers safety-certified products such as the QNX OS and the QNX Hypervisor. The RTOS ensures deterministic behavior for safety-critical applications, while the hypervisor allows consolidation of hardware onto one SoC through software, so developers can build safety and non-safety applications on the same platform.

SiLC Technologies gets investment from Honda for FMCW lidar on chip
https://www.therobotreport.com/silc-technologies-gets-honda-investment-fmcw-lidar-chip/
Tue, 10 Sep 2024

SiLC Technologies says its FMCW lidar is superior to time-of-flight sensors for the various types of mobility that Honda produces.

SiLC in October released four versions of Eyeonic optimized for AI and machine vision applications. Source: SiLC Technologies

Sensors are continuing to advance for safer mobility. SiLC Technologies Inc. today announced that Honda Xcelerator Ventures has invested in its development of next-generation frequency-modulated continuous-wave, or FMCW, lidar.

“SiLC is the industry leader in the research and development of FMCW lidar, which is capable of detecting vehicles and various obstacles from long distances, and Honda has high expectations for its potential,” stated Manabu Ozawa, managing executive officer of Honda Motor Co. “Honda is striving for zero traffic-collision fatalities involving our motorcycles and automobiles globally by 2050.”

“We believe that SiLC’s advanced lidar technology will become an important technology for us,” he added. “Honda continues to discover, collaborate with, and invest in innovative startups like SiLC through our global open innovation program, Honda Xcelerator Ventures.”

While SiLC has raised about $67 million to date, less than other lidar makers, the company has stayed focused on producing key components and working with its customers and partners, said Mehdi Asghari, CEO of SiLC Technologies.

“More than half of our money comes from customers, making sure they have skin in the game. We believe we have the best silicon lidar platform out there,” he told The Robot Report. “We’re not a software development company. It has been challenging, with the pandemic, wars, and lidar coming down from the hype cycle, but we have a really differentiated technology.”




Honda evaluated Eyeonic before investing

“SiLC went through a fairly hefty due-diligence process as Honda made sure that our technology is useful to it,” said Asghari. “We’ve talked to many OEMs around the world, and our technology team and investment group worked hard.” 

Artificial intelligence and vision capabilities can make advanced driver-assistance systems (ADAS), autonomous vehicles, and mobile robots safer, according to SiLC. However, they need to detect objects at greater distances, higher resolutions, and faster speeds, the company said.

Asghari asserted that SiLC’s FMCW lidar is superior to time-of-flight (ToF) systems because it can detect objects such as tires at 150 m (492.1 ft.) away and a person in dark clothing at 300 m (984.2 ft.). ToF sensors are prone to interference from sunlight, reflections, and other lidar systems and can only “see” a few hundred meters.

SiLC’s systems have also demonstrated a range of more than 1 km (0.6 mi.) for counter-uncrewed aerial systems (C-UAS) perimeter security. 

The company said its Eyeonic Vision System equips machines with “near-human vision capabilities, addressing the critical need for accurate, real-time perception in various sectors.”

“Honda wanted highly scalable, robust performance integrated on a single chip,” Asghari explained. “It’s not just automotive — Honda makes lawnmowers, marine systems, airplanes, and robots — so it wanted to see something useful for several markets.”

FMCW provides crucial data for AI, says SiLC

In addition to enabling real-time perception, advanced sensors can provide high-quality data that can improve the training and testing of machine vision algorithms, said SiLC. From simulation to AI, the company claimed that the data from its integrated systems, plus sensor fusion, enables mobile systems to become more intelligent.

“The Eyeonic Vision System represents significant advancements in machine vision, providing machines with the depth perception, velocity measurement, and polarization information necessary to navigate and interact with the physical world effectively,” it said.

“Eyeonic can see a person in dark clothing 300 m [984 ft.] away or see a child through a windshield — those are hard to do with a camera,” added Asghari. “By enabling predictive behavior and reducing power requirements, we help bring down the cost of the perception stack.”


SiLC says its integrated, single-chip FMCW lidar offers real-time, long-range, precise machine vision. Source: SiLC Technologies

Investment to grow U.S. testing, manufacturing

FMCW lidar development started in the military, and SiLC claimed that it is the only company producing fully integrated sensors that can scale for commercial use. The company also plans to expand its laboratory and manufacturing space in the U.S.

“We can do everything under one roof in Monrovia, [Calif.],” Asghari said. “We’re pushing into three markets — automotive, which is a big long-term market; high-precision capabilities for industrial robotics; and drone detection and classification.”

“Our sensors work with cameras and radar to help identify drones from a couple of kilometers away,” he said. “Our technology can calculate the velocity of a propeller to help identify whether a drone is friendly or an enemy.”
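The velocity measurement Asghari describes comes from the Doppler shift that FMCW lidar measures directly. The textbook relations for a triangular chirp, sketched generically here and not as SiLC's implementation, recover both range and radial velocity from the up- and down-chirp beat frequencies:

```python
C = 299_792_458.0  # speed of light, m/s

def fmcw_range_velocity(f_up_hz, f_down_hz, chirp_slope_hz_per_s, wavelength_m):
    """Recover target range and radial velocity from the beat frequencies
    of a triangular FMCW chirp (standard textbook relations):

    f_range   = (f_up + f_down) / 2   -> proportional to range
    f_doppler = (f_down - f_up) / 2   -> proportional to radial velocity
    """
    f_range = (f_up_hz + f_down_hz) / 2.0
    f_doppler = (f_down_hz - f_up_hz) / 2.0
    range_m = C * f_range / (2.0 * chirp_slope_hz_per_s)
    velocity_mps = f_doppler * wavelength_m / 2.0  # e.g. 1550 nm telecom band
    return range_m, velocity_mps
```

A stationary target produces identical up- and down-chirp beat frequencies, so the Doppler term, and hence the reported velocity, is zero; a spinning propeller blade produces a distinctive spread of Doppler shifts.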

“This investment from Honda, the world’s largest mobility vehicle manufacturer, will accelerate our progress toward a society with fully autonomous solutions that enhance our safety and address our widely spread labor shortages across many critical markets,” stated Asghari. “Our silicon photonics platform offers a powerful, low-cost, efficient and scalable FMCW lidar engine, which is essential for the high volumes required by the automotive industry.”

Robotic surgery to benefit from advanced processors and AI, says AMD
https://www.therobotreport.com/robotic-surgery-benefits-advanced-processors-ai-says-amd/
Tue, 03 Sep 2024

Robotic surgery continues to become more autonomous and affordable, thanks to advanced computing, according to AMD.

AMD said its processors enable robotic surgery such as with Intuitive Surgical’s da Vinci system. Source: AMD

The need for robotic surgery is well established, but most systems are still costly to purchase, operate, and maintain, noted Advanced Micro Devices Inc. The company said its technologies can help control those costs, and AMD is already working with leading surgical robot providers.

In 2021, almost 644,000 robotic surgeries were performed in the U.S., and that number could reach 1 million in 2028, according to the National Library of Medicine (NLM).

The global market for robot-assisted surgery could grow to $83 billion by 2032, predicted Noah Medical. However, many technical and regulatory hurdles remain to increasing autonomy, noted MDPI Sensors, and cost is a major consideration for adoption.


Surgical robots and related technologies are moving along Gartner’s “hype cycle.” Source: MDPI Sensors

AMD positions itself in the healthcare tech stack

“AMD is one of the fastest-growing semiconductor companies and has grown substantially in the healthcare space,” said Subhankar Bhattacharya, lead for healthcare and sciences at AMD. “We have a wide portfolio of processors, FPGAs, GPUs, CPUs, SoCs, PLCs, and programmable I/Os. They’re used in industrial automation, automotive, gaming, servers and data centers, and increasingly in healthcare.”

Bhattacharya has an electrical engineering background and worked for Intel, Sun, and PMC. He later worked on software-as-a-service (SaaS) for hospitals; with pharmaceutical company Novartis on medical devices; and with GE Digital on the Internet of Things (IoT), healthcare, and cybersecurity.

After working at Xilinx, which AMD acquired in 2022, Bhattacharya has seen the applicability of high-performance computing to robotic surgery.

“The FDA used to be very conservative, but it has started a new group for software as a medical device to consider these products in medical devices, which were previously under the OEM’s perspective,” he told The Robot Report. “That opens up the technology, making artificial intelligence appear in almost every phase of the industry, from the devices themselves and ECR [electronic case reporting] to surgical robots.”

AMD makes several high-performance processors suitable for healthcare applications.

AMD makes processors for data centers, gaming, PCs, and increasingly embedded computing such as surgical robots. Source: AMD

Pandemic propels telemedicine, robotic surgery

“COVID-19 was a major market-changer,” observed Bhattacharya. “If you looked at emerging trends in PoC [point-of-care] for AI, remote patient monitoring, telemedicine, and robotic surgery, they were projected in 2012 to grow, but it wasn’t happening. COVID gave a boost to these, and people saw with their own eyes how effective something like point-of-care ultrasound could be in saving lives.”

He cited the example of Clarius, an AMD customer that built a handheld device with AI capabilities for local physicians without sonography experience. They can now check complaints of back pain for potential cancer and then refer patients as needed to hospitals in cities.

“AMD is building adaptive SoCs [system-on-chips] that have low latency and high-speed data processing from the edge,” Bhattacharya said. “Once AI developers have trained models, they can do a lot more with inferencing with smaller devices.”




Intuitive Surgical robots get improved sensing, controls

“Diagnostic medical imaging has been AMD’s strength — in cart-based care, ultrasound, diagnostic endoscopy, and signal processing,” asserted Bhattacharya. “In robotics, we’re the market leader, and we’ve been working with Intuitive Surgical since 2010.”

The company’s Xilinx unit worked with Intuitive Surgical to design the second-generation da Vinci robotic surgical system. Last year, more than 7,500 da Vinci systems were in use in 69 countries, said the NLM.

“Intuitive has built up its IP [intellectual property] with design and reuse potential,” Bhattacharya said. “In its surgeon side-cart AR/VR [augmented/virtual reality] system, a visualization system processes the image signal and makes it available for the next set of modules.”

“On the multi-arm robot side, nurses control the technologies with SoCs, and back-end video systems use not just one or two of our products but 30 to 50 per each da Vinci X or Xi multiport and single-port system,” he said. “The da Vinci 5 is a significant step forward in terms of haptic feedback.”

Xilinx reported favorable results, and the da Vinci 5 this year obtained U.S. Food and Drug Administration and European CE clearance. 

Processors enable a range of medical devices

Reliable data processing is not only necessary for high-end surgical robots, but it can also help less-expensive devices, said Bhattacharya.

“Capable hardware allows customers to scale software as they build up — we’ve provided SoCs to $10,000 to $150,000 machines,” he said. “For small and midsize enterprises, the ability to build and reuse app code is the secret sauce for developers.”

Bhattacharya touted the density of AMD’s FPGA (field-programmable gate array), its fast memory access, and the ability of adaptive SoCs to partition-load to various blocks for programmability and upgradeability.

“For example, a large CT or ultrasound scanner can acquire signals with an analog/digital interface, then use beamforming to move the data to the host for rendering and visualization,” he explained.
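Beamforming itself is conceptually simple. A delay-and-sum beamformer, sketched generically here rather than as AMD's FPGA implementation, aligns each array element's samples by its steering delay and sums them coherently:

```python
def delay_and_sum(channels, delays):
    """Minimal delay-and-sum beamformer.

    `channels` is a list of per-element sample lists; `delays` gives each
    element's steering delay in whole samples. Samples arriving from the
    steered direction line up after delaying, so they add coherently,
    while off-axis signals tend to cancel.
    """
    # Usable length once every channel is shifted by its delay.
    n = min(len(ch) - d for ch, d in zip(channels, delays))
    return [
        sum(ch[d + i] for ch, d in zip(channels, delays))
        for i in range(n)
    ]
```

In a real scanner this runs per focal point at hardware speeds, which is why FPGAs and adaptive SoCs are a natural fit for the front end.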

The right processors can reduce latency and help accelerate development of medical devices such as endoscopes and surgical robots. AMD said its heterogeneous approach to specialized and adaptable computing allows developers to choose from a range of systems for real-time visualization and multi-axis robot controls.

“With a bigger device and our Embedded+ offering — an x86 processor next to one of our high-end Versal adaptive SoCs with PCI Express in between — we can help cut 10 months off development time and provide software for moving data and partitioning,” Bhattacharya said.

AI to improve the quality of robotic surgery

“Robot-assisted surgery provides a clear advantage of smaller incisions and faster recovery,” said Bhattacharya. “The preferred approach of the FDA is to use AI to improve productivity while minimizing risk, so we still see a lot of assistance rather than AI making decisions.”

In addition to diagnostics, AI and machine learning can improve contrast or add filters for surgical robot displays, which don’t require FDA approval, he noted. Ultrasound also provides guidance on how to position a probe.

“Another use of AI is for training. I was at a radiological conference, and a demo showed the layman where to put a device to take a report on the carotid artery,” Bhattacharya recalled. “Improving PoC training is low-hanging fruit, but it’s extremely important for medicine.”

Another area where AMD’s processors can enable AI and improve care is in imaging of small lesions to detect skin cancer at early stages, he said.

In the future, AI could even enable PoC surgery, but cybersecurity and surgeon oversight are still necessary for robotic and laparoscopic procedures, acknowledged Bhattacharya.

NVIDIA, Foxconn to build advanced computing center in Taiwan
https://www.therobotreport.com/nvidia-foxconn-to-build-advanced-computing-center-in-taiwan/
Thu, 06 Jun 2024

The Foxconn computing center will be anchored by NVIDIA's GB200 super chip servers and enable electric vehicle and smart city development.

NVIDIA CEO Jensen Huang and Foxconn CEO Young Liu celebrate their cooperation. | Source: Foxconn

Hon Hai Technology Group, better known as Foxconn, and NVIDIA this week said they plan to jointly build an advanced computing center in Kaohsiung, Taiwan. At the core of the center will be the NVIDIA Blackwell platform. The companies made the announcement at Computex 2024.

NVIDIA said the cutting-edge computing center will be anchored by GB200 superchip servers and consist of a total of 64 racks and 4,608 GPUs. The electronics manufacturer will contribute its production scale and said it expects to complete the center by 2026.

The companies said their latest collaboration demonstrates their commitment to building servers to drive artificial intelligence, electric vehicles (EVs), smart factories, smart cities, robotics, and more. 

“A new era of computing has dawned, fueled by surging global demand for generative AI data centers,” stated Jensen Huang, founder and CEO of NVIDIA. “Foxconn stands at the forefront as a leading supplier of NVIDIA computing and a trailblazer in the application of generative AI in manufacturing and robotics.”

“Leveraging NVIDIA Omniverse and Isaac robotics platforms, Foxconn is harnessing cutting-edge AI and digital twin technologies to construct their advanced computing center in Kaohsiung,” he added.

Cooperation continues for Foxconn with new superchip 

This isn’t the first time Foxconn and NVIDIA have collaborated. The company has worked closely with NVIDIA on various product development projects. NVIDIA said Foxconn has excellent vertical integration capabilities and is a vital partner for the new GB200 Grace Blackwell Superchip. 

The superchip connects two NVIDIA B200 Tensor Core GPUs to the NVIDIA Grace CPU over a 900GB/s ultra-low-power NVLink chip-to-chip interconnect. The company said that for the highest AI performance, GB200-powered systems can be connected with the NVIDIA Quantum-X800 InfiniBand and Spectrum-X800 Ethernet platforms.




NVIDIA and Foxconn plan for the facility

The partners said that NVIDIA’s AI technology will drive Foxconn’s three smart platforms: Smart Manufacturing, Smart EV, and Smart City. The new facility will use NVIDIA Omniverse to create digital twins for these platforms.

Foxconn plans to use image-recognition technology combined with its autonomous mobile robots (AMRs) to provide optimal capacity utilization in smart manufacturing. The companies said they will also take on production-line planning, which will encompass the existing manufacturing of AI servers and electric vehicle assembly plants. 

Foxconn subsidiary Foxtron’s Qiaotou automotive manufacturing facility will be one of Foxconn’s benchmark AI factories. Currently under construction, the site will use digital twins connected to cloud technologies. The company also hopes to enable collaboration between virtual and physical production lines.

In addition, the facility is set up with digital real-time monitoring to ensure high-quality manufacturing of an electric bus. 

NVIDIA and Foxconn plan to collaborate on future electric vehicle models designed by Foxconn. Currently, the company is negotiating projects with traditional European and American automakers. The partners also said they plan to develop a “cabin-driving-in-one” smart travel system. 

The post NVIDIA, Foxconn to build advanced computing center in Taiwan appeared first on The Robot Report.

Foresight to collaborate with KONEC on autonomous vehicle concept
https://www.therobotreport.com/foresight-collaborates-with-konec-autonomous-vehicle-concept/
Mon, 03 Jun 2024 12:30:02 +0000

Foresight will integrate its ScaleCam 3D perception technology with KONEC into a conceptual autonomous driving vehicle.

The post Foresight to collaborate with KONEC on autonomous vehicle concept appeared first on The Robot Report.


Foresight says its ScaleCam system can generate high-quality depth maps. | Source: Foresight

Foresight Autonomous Holdings Ltd. last week announced that it has signed a co-development agreement with KONEC Co., a Korean Tier 1 automotive supplier. Under the agreement, the companies will integrate Foresight’s ScaleCam 3D perception technology into a concept autonomous vehicle. 

The collaboration is sponsored by the Foundation of Korea Automotive Parts Industry Promotion (KAP), founded by Hyundai Motor Group. The partners said they will combine KONEC’s expertise in developing advanced automotive systems with KAP’s mission to foster innovation within the automobile parts industry. 

“We believe that the collaboration with KONEC represents a significant step forward in the development of next-generation autonomous driving solutions,” stated Haim Siboni, CEO of Foresight. “By combining our resources, image-processing expertise, and innovative technologies, we aim to accelerate the development and deployment of autonomous vehicles, ultimately contributing to safer transportation solutions in the Republic of Korea.” 

Foresight is an innovator in automotive vision systems. The Ness Ziona, Israel-based company is developing smart multi-spectral vision software systems and cellular-based applications. Through its subsidiaries, Foresight Automotive Ltd., Foresight Changzhou Automotive Ltd., and Eye-Net Mobile Ltd., it develops both in-line-of-sight vision systems and beyond-line-of-sight accident-prevention systems. 

KONEC has established a batch production system for lightweight metal raw materials, models, castings, processing, and assembly through cooperation among its group affiliates. The Seosan-si, South Korea-based company's major customers include Tesla, Hyundai Motor, and Kia.

KONEC has entered the field of information processing technology using cameras to perform tasks such as developing a license-plate recognition system with companies that have commercialized systems on chips (SoCs) and modules for Internet of Things (IoT) communication. 




Foresight ScaleCam to enhance autonomous capabilities 

The collaboration will incorporate Foresight’s ScaleCam 360º 3D perception technology. The company said it will enable the self-driving vehicle to accurately perceive its surroundings. Foresight and KONEC said the successful integration of ScaleCam could significantly enhance the capabilities and safety of autonomous vehicles.

ScaleCam is based on stereoscopic technology. The system uses advanced and proven image-processing algorithms, according to Foresight. The company claimed that it provides seamless vision by using two visible-light cameras for highly accurate and reliable obstacle-detection capabilities. 

Typical stereoscopic vision systems require constant calibration to ensure accurate distance measurements, Foresight noted. To solve this, some developers mount stereo cameras on a fixed beam, but this can limit camera placement positions and lead to technical issues, it said.

Foresight asserted that its technology allows for the independent placement of both visible-light and thermal infrared camera modules. This allows the system to support large baselines without mechanical constraints, providing greater distance accuracy at long ranges, it said. 
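The geometry behind that claim is the classic stereo relation: range is recovered from the disparity between the two images, so a wider baseline yields a larger disparity at the same range and finer depth resolution. A minimal sketch of the math (generic pinhole stereo, not Foresight's implementation; the focal length and baseline values are made-up):

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Classic pinhole stereo relation: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: a 1000-px focal length, 0.3 m vs. 1.2 m baseline.
# At 75 m range, the 4x wider baseline produces a 4x larger disparity,
# so each pixel of disparity error costs 4x less range error.
print(depth_from_disparity(1000, 0.3, 4.0))   # 75 m from a 4 px disparity
print(depth_from_disparity(1000, 1.2, 16.0))  # 75 m from a 16 px disparity
```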

Lumotive and Hokuyo release 3D lidar sensor with solid-state beam steering
https://www.therobotreport.com/lumotive-and-hokuyo-release-3d-lidar-sensor-with-solid-state-beam-steering/
Wed, 22 May 2024 13:06:38 +0000

The new sensor from Lumotive uses the latest beamforming technology for industrial automation and service robotics.

The post Lumotive and Hokuyo release 3D lidar sensor with solid-state beam steering appeared first on The Robot Report.


Hokuyo’s YLM-10LX 3D uses Lumotive’s patented LCM optical beamforming for robotics applications. Source: Lumotive

Perception technology continues to evolve for autonomous systems, becoming more robust and compact. Lumotive and Hokuyo Automatic Co. today announced the commercial release of the YLM-10LX 3D lidar sensor, which they claimed “represents a major leap forward in applying solid-state, programmable optics to transform 3D sensing.”

The product uses Lumotive’s Light Control Metasurface (LCM) optical beamforming technology and is designed for industrial automation and service robotics applications.

“We are thrilled to see our LM10 chip at the heart of Hokuyo’s new YLM-10LX sensor, the first of our customers’ products to begin deploying our revolutionary beam-steering technology into the market,” stated Dr. Axel Fuchs, vice president of business development at Lumotive.

“This product launch highlights the immense potential of our programmable optics in industrial robotics and beyond,” he added. “Together with Hokuyo, we look forward to continuing to redefine what’s possible in 3D sensing.”

Lumotive LCM offers stable lidar perception

Lumotive said its award-winning optical semiconductors enable advanced sensing in next-generation consumer, mobility, and industrial automation products such as mobile devices, autonomous vehicles, and robots. The Redmond, Wash.-based company said its patented LCM chips “deliver an unparalleled combination of high performance, exceptional reliability, and low cost — all in a tiny, easily integrated solution.”

The LCM technology uses dynamic metasurfaces to manipulate and direct light “in previously unachievable ways,” said Lumotive. This eliminates the need for the bulky, expensive, and fragile mechanical moving parts found in traditional lidar systems, it asserted.

“As a true solid-state beam-steering component for lidar, LCM chips enable unparalleled stability and accuracy in 3D object recognition and distance measurement,” said the company. “[The technology] effectively handles multi-path interference, which is crucial for industrial environments where consistent performance and safety are paramount.”

Lumotive said the LM10 LCM allows sensor makers such as Hokuyo to rapidly integrate compact, adaptive programmable optics into their products. It manufactures the LM10 like its other products, following well-established and scalable silicon fabrication techniques. The company said this cuts costs through economies of scale, making solid-state lidar economically feasible for widespread adoption in a broad spectrum of industries.




Software-defined sensing provides flexibility, says Hokuyo

Hokuyo claimed that the new sensor “is the first of its kind in the lidar industry, achieving superior range and field of view (FOV) compared to any other solid-state solution on the market by integrating beam-steering with Lumotive’s LM10 chip.”

In addition, the software-defined scanning capabilities of LCM beam steering allow users to adjust performance parameters such as the sensor’s resolution, detection range, and frame rate, said the Osaka, Japan-based company. They can program and use multiple FOVs simultaneously, adapting to application needs and changing conditions, indoors and outdoors.
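One practical consequence of software-defined scanning is the trade-off among resolution, frame rate, and field of view: the sensor's point budget is shared, so raising one parameter typically costs another. A hedged sketch of that bookkeeping (generic arithmetic; the parameter names and the points-per-second budget are illustrative assumptions, not Hokuyo's API):

```python
from dataclasses import dataclass

@dataclass
class ScanConfig:
    h_points: int        # horizontal samples per frame
    v_points: int        # vertical samples per frame
    frame_rate_hz: float

    @property
    def points_per_second(self) -> float:
        return self.h_points * self.v_points * self.frame_rate_hz

# Illustrative budget: suppose the sensor can sustain ~1.2M points/s.
BUDGET = 1_200_000

wide = ScanConfig(h_points=400, v_points=100, frame_rate_hz=30)    # full FOV, slow
narrow = ScanConfig(h_points=200, v_points=50, frame_rate_hz=120)  # zoomed ROI, fast

# Both configurations spend the same budget in different ways.
assert wide.points_per_second <= BUDGET
assert narrow.points_per_second <= BUDGET
```

The point of software-defined sensing is that such reconfiguration happens at runtime, without changing hardware.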

Hokuyo said the commercial release of the YLM-10LX sensor marks another milestone in its continued investment in its long-term, strategic collaboration with Lumotive.

“With the industrial sectors increasingly demanding high-performance, reliable lidar systems that also have the flexibility to address multiple applications, our continued partnership with Lumotive allows us to harness the incredible potential of LCM beam steering and to deliver innovative solutions that meet the evolving needs of our customers,” said Chiai Tabata, product and marketing lead at Hokuyo.

Founded in 1946, Hokuyo Automatic offers a range of industrial sensor products for the factory automation, logistics automation, and process automation industries. The company's products include collision-avoidance sensors, safety laser scanner and obstacle-detection sensors, optical data transmission devices, laser rangefinders (lidar), and hot-metal detectors. It also provides product distribution and support services.

March 2024 robotics investments total $642M
https://www.therobotreport.com/march-2024-robotics-investments-total-642m/
Thu, 18 Apr 2024 14:14:18 +0000

March 2024 robotics funding was buoyed by significant investment into software and drone suppliers.

The post March 2024 robotics investments total $642M appeared first on The Robot Report.


Chinese and U.S. companies led March 2024 robotics investments. Credit: Eacon Mining, Dan Kara

Thirty-seven robotics firms received funding in March 2024, pulling in a total monthly investment of $642 million. March’s investment figure was significantly less than February’s mark of approximately $2 billion, but it was in keeping with other monthly investments in 2023 and early 2024 (see Figure 1, below).

Figure 1: March 2024 robotics investments dropped from the previous month.

California companies secure investment

As described in Table 1 below, the two largest robotics investments in March were secured by software suppliers. Applied Intuition, a provider of software infrastructure to deploy autonomous vehicles at scale, received a $250 million Series E round, while Physical Intelligence, a developer of foundation models and other software for robots and actuated devices, attracted $70 million in a seed round. Both firms are located in California.

Other California firms receiving substantial rounds included Bear Robotics, a manufacturer of self-driving indoor robots that raised a $60 million Series C round, and unmanned aerial system (UAS) developer Firestorm, whose seed funding was $20 million.

Table 1: March 2024 robotics investments

Company | Amount ($) | Round | Country | Technology
Agilis Robotics | 10,000,000 | Series A | China | Surgical / interventional systems
Aloft | Estimate | Other | U.S. | Drones, data acquisition / processing / management
Applied Intuition | 250,000,000 | Series E | U.S. | Software
Automated Architecture | 3,280,000 | Estimate | U.K. | Micro-factories
Bear Robotics | 60,000,000 | Series C | U.S. | Indoor mobile platforms
BIOBOT Surgical | 18,000,000 | Series B | Singapore | Surgical systems
Buzz Solutions | 5,000,000 | Other | U.S. | Drone inspection
Cambrian Robotics | 3,500,000 | Seed | U.K. | Machine vision
Coctrl | 13,891,783 | Series B | China | Software
DRONAMICS | 10,861,702 | Grant | U.K. | Drones
Eacon Mining | 41,804,272 | Series C | China | Autonomous transportation, sensors
ECEON Robotics | Estimate | Pre-seed | Germany | Autonomous forklifts
ESTAT Automation | Estimate | Grant | U.S. | Actuators / motors / servos
Fieldwork Robotics | 758,181 | Grant | U.K. | Outdoor mobile manipulation platforms, sensors
Firestorm Labs | 20,519,500 | Seed | U.S. | Drones
Freespace Robotics | Estimate | Other | U.S. | Automated storage and retrieval systems
Gather AI | 17,000,000 | Series A | U.S. | Drones, software
Glacier | 7,700,000 | Other | U.S. | Articulated robots, sensors
IVY TECH Ltd. | 421,435 | Grant | U.K. | Outdoor mobile platforms
KAIKAKU | Estimate | Pre-seed | U.K. | Collaborative robots
KEF Robotics | Estimate | Grant | U.S. | Drone software
Langyu Robot | Estimate | Other | China | Automated guided vehicles, software
Linkwiz | 2,679,725 | Other | Japan | Software
Motional | Estimate | Seed | U.S. | Autonomous transportation systems
Orchard Robotics | 3,800,000 | Pre-seed | U.S. | Crop management
Pattern Labs | 8,499,994 | Other | U.S. | Indoor and outdoor mobile platforms
Physical Intelligence | 70,000,000 | Seed | U.S. | Software
Piximo | Estimate | Grant | U.S. | Indoor mobile platforms
Preneu | 11,314,492 | Series B | S. Korea | Drones
QibiTech | 5,333,884 | Other | Japan | Software, operator services, uncrewed ground vehicles
Rapyuta Robotics | Estimate | Other | Japan | Indoor mobile platforms, autonomous forklifts
RIOS Intelligent Machines | 13,000,000 | Series B | U.S. | Machine vision
RITS | 13,901,825 | Series A | China | Sensors, software
Robovision | 42,000,000 | Other | Belgium | Computer vision, AI
Ruoyu Technology | 6,945,312 | Seed | China | Software
Sanctuary Cognitive Systems | Estimate | Other | Canada | Humanoids / bipeds, software
SeaTrac Systems | 899,955 | Other | U.S. | Uncrewed surface vessels
TechMagic | 16,726,008 | Series C | Japan | Articulated robots, sensors
Thor Power | Estimate | Seed | China | Articulated robots
Viam | 45,000,000 | Series B | Germany | Smart machines
WIRobotics | 9,659,374 | Series A | S. Korea | Exoskeletons, consumer, home healthcare
X Square | Estimate | Seed | U.S. | Software
Yindatong | Estimate | Seed | China | Surgical / interventional systems
Zhicheng Power | Estimate | Series A | China | Consumer / household
Zhongke Huiling | Estimate | Seed | China | Humanoids / bipeds, microcontrollers / microprocessors / SoC

Drones get fuel for takeoff in March 2024

Providers of drones, drone technologies, and drone services also attracted substantial individual investments in March 2024. Examples included Firestorm and Gather AI, a developer of inventory monitoring drones whose Series A was $17 million.

In addition, drone services provider Preneu obtained $11 million in Series B funding, and DRONAMICS, a developer of drone technology for cargo transportation and logistics operations, got a grant worth $10.8 million.

Companies in the U.S. and China received the majority of the March 2024 funding, at $451 million and $100 million, respectively (see Figure 2, below).

Companies based in Japan and the U.K. were also well represented among the March 2024 investment totals. Four companies in Japan secured a total of $34.7 million, while an equal number of firms in the U.K. attracted $13.5 million in funding.

Figure 2: March 2024 robotics investment by country.

Nearly 40% of March’s robotics investments came from a single Series E round — that of Applied Intuition. The remaining funding classes were all represented in March 2024 (Figure 3, below).

Figure 3: March 2024 robotics funding by type and amounts.
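The "nearly 40%" figure checks out against the monthly total. A quick verification of the shares cited above (amounts in millions of dollars, as reported in the article):

```python
# Shares of the $642M March 2024 total, using the amounts reported above.
total_m = 642
applied_intuition_m = 250  # largest single round (Series E)
us_m, china_m = 451, 100   # country totals from Figure 2

print(f"Applied Intuition share: {applied_intuition_m / total_m:.1%}")  # 38.9%
print(f"U.S. share: {us_m / total_m:.1%}")                              # 70.2%
print(f"China share: {china_m / total_m:.1%}")                          # 15.6%
```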

Editor’s notes

What defines robotics investments? The answer to this simple question is central in any attempt to quantify them with some degree of rigor. To make investment analyses consistent, repeatable, and valuable, it is critical to wring out as much subjectivity as possible during the evaluation process. This begins with a definition of terms and a description of assumptions.

Investors and investing

Investment should come from venture capital firms, corporate investment groups, angel investors, and other sources. Friends-and-family investments, government/non-governmental agency grants, and crowd-sourced funding are excluded.

Robotics and intelligent systems companies

Robotics companies must generate or expect to generate revenue from the production of robotics products (that sense, analyze, and act in the physical world), hardware or software subsystems and enabling technologies for robots, or services supporting robotics devices. For this analysis, autonomous vehicles (including technologies that support autonomous driving) and drones are considered robots, while 3D printers, CNC systems, and various types of “hard” automation are not.

Companies that are “robotic” in name only, or use the term “robot” to describe products and services that do not enable or support devices acting in the physical world, are excluded. For example, this includes “software robots” and robotic process automation. Many firms have multiple locations in different countries. Company locations given in the analysis are based on the publicly listed headquarters in legal documents, press releases, etc.

Verification

Funding information is collected from several public and private sources. These include press releases from corporations and investment groups, corporate briefings, market research firms, and association and industry publications. In addition, information comes from sessions at conferences and seminars, as well as during private interviews with industry representatives, investors, and others. Unverifiable investments are excluded and estimates are made where investment amounts are not provided or are unclear.




BlackBerry and AMD partner to reduce latency in robotics
https://www.therobotreport.com/blackberry-amd-partner-reduce-latency-in-robotics/
Thu, 11 Apr 2024 20:57:05 +0000

BlackBerry and Advanced Micro Devices said they plan to address the need for 'hard' real-time capabilities in robotics-focused hardware.

The post BlackBerry and AMD partner to reduce latency in robotics appeared first on The Robot Report.


AMD’s Kria K26 SOM will power the hardware with the BlackBerry QNX SDP. | Source: AMD

BlackBerry Ltd. announced at Embedded World this week that it is collaborating with Advanced Micro Devices Inc. The partners said they want to enable next-generation robotics by reducing latency and jitter and delivering “repeatable determinism.”

The companies said they will jointly “address the critical need for ‘hard’ real-time capabilities in robotics-focused hardware.” BlackBerry and AMD plan to release an affordable system-on-module (SOM) platform that delivers enhanced performance, reliability, and scalability for robotic systems in industrial and healthcare applications.

This platform will combine BlackBerry’s QNX expertise in real-time foundational software and the QNX Software Development Platform (SDP) with heterogeneous hardware powered by the AMD Kria K26 SOM, which features both Arm cores and FPGA (field-programmable gate array) logic.

“With the QNX Software Development Platform, customers can start development quickly on the AMD Kria KR260 Starter Kit and seamlessly scale to other higher-performance AMD platforms as their needs evolve,” stated Chetan Khona, senior director of industrial, vision, healthcare, and sciences markets at AMD.

“Combining the industry-leading strengths of AMD and QNX will provide a foundation platform that opens new doors for innovation and takes the future of robotics technology well beyond the constraints experienced until now,” he said.

BlackBerry, AMD provide capabilities with less latency

With Kria, an Arm sub-system can power the advanced capabilities of the QNX microkernel real-time operating system (RTOS), said Advanced Micro Devices and BlackBerry. It can do this while allowing users to run low-latency, deterministic functions on the programmable logic of the AMD Kria KR260 robotics starter kit.

This combination enables sensor fusion, high-performance data processing, real-time control, industrial networking, and reduced latency in robotics applications, said the companies.

They added that customers can benefit from integration and optimization of software and hardware components. This results in streamlined development processes and accelerated time to market for robotics innovations, said AMD and BlackBerry. 

“An integrated solution by BlackBerry QNX through our collaboration with AMD will provide an integrated software-hardware foundation offering real-time performance, low latency, and determinism to ensure that critical robotic tasks are executed with the same level of precision and responsiveness every single time,” said Grant Courville, vice president of product and strategy at BlackBerry QNX.

“These are crucial attributes for industries carrying out finely tuned operations, such as the fast-growing industries of autonomous mobile robots and surgical robotics,” he added. “Together with AMD, we are committed to driving technological advancements that address some of these most complex challenges and transform the future of the robotics industry.”

The integrated system is now available to customers.

See AMD at Robotics Summit & Expo

For more than 50 years, Advanced Micro Devices has been a leading innovator in high-performance computing (HPC), graphics, and visualization technologies. The Santa Clara, Calif.-based company noted that billions of people, Fortune 500 businesses, and scientific research institutions worldwide rely on its technology daily.

AMD recently released the Embedded+ HPC architecture, the Spartan UltraScale+ FPGA family, and Versal Gen 2 for AI and edge processing.

Kosta Sidopoulos, a product engineer at AMD, will be speaking at the Robotics Summit & Expo, which takes place May 1 and 2 at the Boston Convention and Exhibition Center. His talk on “Enabling Next-Gen AI Robotics” will delve into the unique features and capabilities of AMD’s AI-enabled products. It will highlight their adaptability and scalability for diverse robotics applications.

Registration is now open for the Robotics Summit & Expo, which will feature more than 70 speakers, 200 exhibitors, and up to 5,000 attendees, as well as numerous networking opportunities.




AMD releases Versal Gen 2 to improve support for embedded AI, edge processing
https://www.therobotreport.com/amd-releases-versal-gen-2-to-support-ai-edge-processing/
Tue, 09 Apr 2024 08:15:20 +0000

The first devices in the AMD Versal Gen 2 series target high-efficiency AI Engines, and Subaru is one of the first customers.

The post AMD releases Versal Gen 2 to improve support for embedded AI, edge processing appeared first on The Robot Report.


The AMD Versal AI Edge and Prime Gen 2 are next-gen SoCs. Source: Advanced Micro Devices

To enable more artificial intelligence on edge devices such as robots, hardware vendors are adding to their processor portfolios. Advanced Micro Devices Inc. today announced the expansion of its adaptive system on chip, or SoC, line with the new AMD Versal AI Edge Series Gen 2 and Versal Prime Series Gen 2.

“The demand for AI-enabled embedded applications is exploding and driving the need for solutions that bring together multiple compute engines on a single chip for the most efficient end-to-end acceleration within the power and area constraints of embedded systems,” stated Salil Raje, senior vice president and general manager of the Adaptive and Embedded Computing Group at AMD.

“Based on over 40 years of adaptive computing leadership in high-security, high-reliability, long-lifecycle, and safety-critical applications, these latest-generation Versal devices offer high compute efficiency and performance on a single architecture that scales from the low end to high end,” he added.

For more than 50 years, AMD said it has been a leading innovator in high-performance computing (HPC), graphics, and visualization technologies. The Santa Clara, Calif.-based company noted that billions of people, Fortune 500 businesses, and scientific research institutions worldwide rely on its technology daily.

Versal Gen 2 addresses three phases of accelerated AI

Advanced Micro Devices said the Gen 2 systems put preprocessing, AI inference, and postprocessing on a single device to deliver accelerated AI. This provides the optimal mix of compute to meet the complex processing needs of real-world embedded systems, it asserted.

  • Preprocessing: The new systems include FPGA (field-programmable gate array) logic fabric for real-time preprocessing; flexible connections to a wide range of sensors; and implementation of high-throughput, low-latency data-processing pipelines.
  • AI inference: AMD said it provides an array of vector processes in the form of next-generation AI Engines for efficient inference.
  • Postprocessing: Arm CPU cores provide the power needed for complex decision-making and control for safety-critical applications, said AMD.

“This single-chip intelligence can eliminate the need to build multi-chip processing solutions, resulting in smaller, more efficient embedded AI systems with the potential for shorter time to market,” the company said.
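The three stages AMD describes map onto a familiar embedded-vision pipeline. A schematic sketch of the flow (plain Python standing in for the FPGA fabric, AI Engines, and Arm cores; the function names, toy inference, and threshold are invented for illustration):

```python
def preprocess(raw_frame):
    """FPGA-fabric stage: normalize raw sensor samples to [0, 1]."""
    return [px / 255.0 for px in raw_frame]

def infer(frame):
    """AI Engine stage: toy 'inference' = mean activation of the frame."""
    return sum(frame) / len(frame)

def postprocess(score, threshold=0.5):
    """Arm-core stage: turn the inference score into a control decision."""
    return "stop" if score > threshold else "continue"

# The three stages chained on one 'device', as in AMD's description.
raw = [10, 200, 255, 30]
decision = postprocess(infer(preprocess(raw)))
print(decision)  # prints "continue" (mean activation ~0.485, below threshold)
```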




AMD builds to maximize power and compute

AMD said its latest systems offer up to 10x more scalar compute compared with the first generation, so the devices can more efficiently handle sensor processing and complex scalar workloads. The Versal Prime Gen 2 devices include new hard IP for high-throughput video processing, including up to 8K multi-channel workflows.

This makes the scalable portfolio suitable for applications such as ultra-high-definition (UHD) video streaming and recording, industrial PCs, and flight computers, according to the company.

In addition, the new SoCs include new AI Engines that AMD claimed will deliver three times the TOPS (trillions of operations per second) per watt than the first-generation Versal AI Edge Series devices.

“Balancing performance, power, [and] area, together with advanced functional safety and security, Versal Series Gen 2 devices deliver new capabilities and features,” said AMD. It added that they “enable the design of high-performance, edge-optimized products for the automotive, aerospace and defense, industrial, vision, healthcare, broadcast, and pro AV [professional audio-visual] markets.”

“Single-chip intelligence for embedded systems will enable pervasive AI, including robotics … smart city, cloud and AI, and the digital home,” said Manuel Uhm, director for Versal marketing at AMD, in a press briefing. “All will need to be accelerated.”


The Versal Prime Gen 2 is designed for high-throughput applications such as video processing. Source: AMD

Versal powers Subaru’s ADAS vision system

Subaru Corp. is using AMD’s adaptive SoC technology in current vehicles equipped with its EyeSight advanced driver-assistance system (ADAS). EyeSight is integrated into certain car models to enable advanced safety features including adaptive cruise control, lane-keep assist, and pre-collision braking.

“Subaru has selected Versal AI Edge Series Gen 2 to deliver the next generation of automotive AI performance and safety for future EyeSight-equipped vehicles,” said Satoshi Katahira. He is general manager of the Advanced Integration System Department and ADAS Development Department, Engineering Division, at Subaru.

“Versal AI Edge Gen 2 devices are designed to provide the AI inference performance, ultra-low latency, and functional safety capabilities required to put cutting-edge AI-based safety features in the hands of drivers,” he added.

Vivado and Vitis part of developer toolkits

AMD said its Vivado Design Suite tools and libraries can help boost productivity and streamline hardware design cycles, offering fast compile times and enhanced-quality results. The company said the Vitis Unified Software Platform “enables embedded software, signal processing, and AI design development at users’ preferred levels of abstraction, with no FPGA experience needed.”

Earlier this year, AMD released the Embedded+ architecture for accelerated edge AI, as well as the Spartan UltraScale+ FPGA family for edge processing.

Early-access documentation for Versal Series Gen 2 is now available, along with first-generation Versal evaluation kits and design tools. AMD said it expects Gen 2 silicon samples to be available in the first half of 2025, followed by evaluation kits and system-on-module samples in mid-2025, and production silicon in late 2025.

Top 10 robotics news stories of March 2024
https://www.therobotreport.com/top-10-robotic-stories-of-march-2024/
Mon, 01 Apr 2024 17:01:03 +0000

From events like MODEX and GTC to new product launches, there was no shortage of robotics news to cover in March 2024.

The post Top 10 robotics news stories of March 2024 appeared first on The Robot Report.

March 2024 was a non-stop month for the robotics industry. From events such as MODEX and GTC to exciting new deployments and product launches, there was no shortage of news to cover. 

Here are the top 10 most popular stories on The Robot Report this past month. Subscribe to The Robot Report Newsletter or listen to The Robot Report Podcast to stay updated on the latest technology developments.


10. Robotics Engineering Career Fair to connect candidates, employers at Robotics Summit

The career fair will draw from the general robotics and artificial intelligence community, as well as from attendees at the Robotics Summit & Expo. Past co-located career fairs have drawn more than 800 candidates, and MassRobotics said it expects even more people at the Boston Convention and Exhibition Center this year. Read More



9. SMC adds grippers for cobots from Universal Robots

SMC recently introduced a series of electric grippers designed to be used with collaborative robot arms from Universal Robots. Available in basic and longitudinal types, SMC said the LEHR series can be adapted to different industrial environments like narrow spaces. Read More


8. Anyware Robotics announces new add-on for Pixmo unloading robots

Anyware Robotics announced in March 2024 an add-on for its Pixmo robot for truck and container unloading. The patent-pending accessory includes a vertical lift with a conveyor belt that is attached to Pixmo between the robot and the boxes to be unloaded. Read More



7. Accenture invests in humanoid maker Sanctuary AI in March 2024

In its Technology Vision 2024 report, Accenture said 95% of the executives it surveyed agreed that “making technology more human will massively expand the opportunities of every industry.” Well, Accenture put its money where its mouth is. Accenture Ventures announced a strategic investment in Sanctuary AI, one of the companies developing humanoid robots. Read More


Cambrian Robotics is applying machine vision to industrial robots

6. Cambrian Robotics obtains seed funding to provide vision for complex tasks

Machine vision startup Cambrian Robotics Ltd. has raised $3.5 million in seed+ funding. The company said it plans to use the investment to continue developing its AI platform to enable robot arms “to surpass human capabilities in complex vision-based tasks across a variety of industries.” Read More


Mobile Industrial Robots introduced the MiR1200 pallet jack in March 2024.

5. Mobile Industrial Robots launches MiR1200 autonomous pallet jack

Autonomous mobile robots (AMRs) are among the systems benefiting from the latest advances in AI. At LogiMAT in March 2024, Mobile Industrial Robots launched the MiR1200 Pallet Jack, which it said uses 3D vision and AI to identify pallets for pickup and delivery "with unprecedented precision." Read More


4. Reshape Automation aims to reduce barriers of robotics adoption

Companies in North America bought 31,159 robots in 2023, a 30% decrease from 2022. And that's not sitting well with robotics industry veteran Juan Aparicio. After a decade at Siemens and stops at Ready Robotics and Rapid Robotics, Aparicio hopes his new startup, Reshape Automation, can chip away at this problem. Read More


Apptronik Apollo moves a tote.

3. Mercedes-Benz testing Apollo humanoid

Apptronik announced that leading automotive brand Mercedes-Benz is testing its Apollo humanoid robot. As part of the agreement, Apptronik and Mercedes-Benz will collaborate on identifying applications for Apollo in automotive settings. Read More


NVIDIA CEO Jensen Huang on stage with a humanoid lineup in March 2024.

2. NVIDIA announces new robotics products at GTC 2024

The NVIDIA GTC 2024 keynote kicked off like a rock concert in San Jose, Calif. More than 15,000 attendees filled the SAP Center in anticipation of CEO Jensen Huang's annual presentation of the latest product news from NVIDIA. He discussed the new Blackwell platform, improvements in simulation and AI, and the many humanoid robot developers using the company's technology. Read More


Schneider cobot product family.

1. Schneider Electric unveils new Lexium cobots at MODEX 2024

In Atlanta, Schneider Electric announced the release of two new collaborative robots, the Lexium RL 3 and RL 12, with the Lexium RL 18 model coming later this year. From single-axis machines to high-performance, multi-axis cobots, the Lexium line enables high-speed motion and control of up to 130 axes from one processor, said the company. It added that this precise positioning helps manufacturers solve production, flexibility, and sustainability challenges. Read More
