Key Technologies Defining Robotics – CoBots and AI
14-01-2021 | By Mark Patrick
What will the series cover?
In this series of six blogs, we take a look at the key technologies defining the way robots are being designed and used today, and how that may evolve in the future. It will cover developments at the hardware and software level, and how innovations such as AI are already shaping the future of robotics.
Blog 1: Key Technologies Defining Robotics – From Static Arms to AMRs
Blog 2: Key Technologies Defining Robotics – Mobility and Dexterity
Blog 3: Key Technologies Defining Robotics – Positioning and Navigation
Blog 4: Key Technologies Defining Robotics – Robot Operating Systems
Blog 5: Key Technologies Defining Robotics – CoBots and AI
Blog 6: The Future of Robotics
CoBots and AI
Since the first industrial robots were conceived in the mid-1950s and put to work on production lines in the early 1960s, engineers and production managers have been aware of their potential. It may have taken longer than some expected, but robots are now leaving their cages and working among us.
Making Robots Safe to Work Around
These collaborative robots, or CoBots, are designed to operate in close proximity to human workers, so they typically feature smoother surfaces and rounded edges to make them less hazardous to people.
They’re also more compliant than conventional industrial robots, allowing a CoBot to yield should it come into contact with an object or person. ISO 10218 is the primary standard defining the safety requirements for industrial robots, including those operating in a collaborative environment; it is supplemented by ISO/TS 15066, which specifically addresses collaborative operation.
The next evolutionary step for the CoBot is the mobile CoBot, or autonomous mobile robot (AMR). This class of CoBot, as its name suggests, moves around the workplace without the help of an operator. This is significant: it is easy to overlook that, without mobility, a robot remains in one place for its entire working life.
If a fixed robot is stored when not in use, returning it to service requires at least one, and possibly several, human operators to move it into place and recommission it every time it is needed.
The major development here is that an AMR does not need a human operator to position it; it can position itself independently, a capability that offers considerable productivity gains.
An AMR can therefore work in multiple places within a single working period and perform more than one function in a production environment, a big step closer to the fully autonomous factory.
Autonomous Mobile Robots and Machine Learning
Hardware such as motors, drives (covered in blog 2 of this series) and sensors (blog 3) makes robots possible. But software is just as fundamental, and the use of artificial intelligence (AI) and machine learning (ML) is becoming crucial.
AMRs will operate in the same environment as human workers, with fewer restrictions on where they can go and how they interact with people. This creates an element of unpredictability that is difficult for linear software routines to handle.
Without AI, an AMR would need a predefined response for every possible situation and event in order to react predictably, with each response hardcoded into its programming. The coverage required would be vast, making it impractical to code every possible outcome.
Using AI, the software instead learns how to handle situations as they arise, having been trained on a much more modest but carefully selected set of scenarios, as the sketch below illustrates. This ability to generalise is why AI and ML offer such great potential in the field of AMRs.
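To make the contrast concrete, here is a minimal sketch in Python using scikit-learn. The sensor features, training scenarios and action labels are all invented for illustration; a real AMR would use far richer inputs and models. The point is that a small, carefully chosen training set lets the model respond sensibly to a situation it was never explicitly coded for.

```python
# Toy example: mapping simple sensor readings to avoidance actions.
# Features per scenario: [distance_m, relative_speed_m_s, is_human]
from sklearn.tree import DecisionTreeClassifier

scenarios = [
    [2.0, 0.0, 0],   # static object, far away
    [0.5, 0.0, 0],   # static object, close
    [1.5, 0.8, 1],   # person approaching
    [0.4, 0.2, 1],   # person very close
    [3.0, 0.0, 1],   # person far away
]
actions = ["continue", "steer_around", "slow_down", "stop", "continue"]

model = DecisionTreeClassifier().fit(scenarios, actions)

# A situation not in the training set: a person at moderate range,
# approaching at moderate speed. The model generalises rather than
# needing an explicit hardcoded rule for this exact case.
print(model.predict([[0.9, 0.5, 1]]))  # e.g. ['slow_down']
```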
Software Frameworks for AI
For developers new to AI and ML, getting started can be a daunting prospect. Software frameworks make it quicker, often by providing pre-trained models that can be turned into portable inference engines running on commonly used hardware platforms, and many ongoing initiatives are designed to make the process simpler for newcomers and seasoned developers alike.
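As a flavour of how little code a framework-based approach can involve, the sketch below uses TensorFlow Lite's Python interpreter to load a pre-trained model and run a single inference. The model filename is a placeholder; any converted .tflite model would work, and the dummy input is sized from whatever the model declares.

```python
# Minimal sketch: running a pre-trained TensorFlow Lite model.
# "model.tflite" is a placeholder for any converted model file.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Build a dummy input matching the model's declared shape and dtype
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)

interpreter.invoke()  # run one inference pass
result = interpreter.get_tensor(output_details[0]["index"])
print("output shape:", result.shape)
```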
One company very actively enabling AI and ML is Google. It has developed and contributed open-source platforms such as TensorFlow, as well as the Edge TPU hardware accelerator for on-device inferencing, designed to work with its Coral platform.
Bringing all of this together in a way developers can use takes time and effort; fortunately, much of that hard work has already been done. The Coral Dev Board is a single-board computer built around Google's Edge TPU and aimed at IoT applications. It also includes a quad-core i.MX 8M SoC from NXP and a cryptographic coprocessor. Additional support comes from NXP in the form of its eIQ™ Machine Learning Software Development Environment, which combines libraries and tools designed for NXP's microcontrollers and microprocessors and is an ideal companion to its i.MX 8M Quad evaluation kit.
This evaluation kit features the same processor family used on Google's Coral Dev Board. Other frameworks to explore include the Isaac software development kit from Nvidia; designed to bring intelligence to robots, it is optimised for the AI-enabled Jetson Xavier NX.
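For the Coral hardware specifically, Google's PyCoral library wraps the TensorFlow Lite interpreter with Edge TPU support. Below is a minimal image-classification sketch assuming an Edge TPU-compiled model, a label file and a test image are already on the board; all three filenames are placeholders.

```python
# Sketch: image classification on the Coral Edge TPU with PyCoral.
# The model, label and image filenames are placeholders.
from PIL import Image
from pycoral.adapters import classify, common
from pycoral.utils.dataset import read_label_file
from pycoral.utils.edgetpu import make_interpreter

interpreter = make_interpreter("model_edgetpu.tflite")  # Edge TPU-compiled model
interpreter.allocate_tensors()

labels = read_label_file("labels.txt")
width, height = common.input_size(interpreter)  # input size the model expects

image = Image.open("frame.jpg").convert("RGB").resize((width, height))
common.set_input(interpreter, image)

interpreter.invoke()
for c in classify.get_classes(interpreter, top_k=3):
    print(labels.get(c.id, c.id), f"{c.score:.2f}")
```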
The combination of AI and mobility brings a new level of autonomy to robotics, shaping future developments in this exciting field. In the last blog of the series, we speculate on what the future may look like, based on the technologies discussed so far.
Mouser’s online resources offer a wealth of material for deeper learning on the topic of robotics.
Read More
Key Technologies Defining Robotics – From Static Arms to AMRs
Key Technologies Defining Robotics – Mobility and Dexterity
Key Technologies Defining Robotics – Positioning and Navigation
Key Technologies Defining Robotics – Robot Operating Systems