Real-World Supply Chain Robotics - The Network Effect

Unlocking autonomous and semi-autonomous robots with practical teamwork

Robots hold the promise of transforming everything from factories to our homes. However, a significant hurdle remains – equipping them to navigate the unpredictable nature of real-world situations. The answer lies in a two-pronged approach: empowering them with AI and partnering them with humans.

One key challenge is a robot’s inability to understand its surroundings and make decisions based on that information. Real-world tasks are inherently messy, requiring robots to interact safely and efficiently with humans and objects in ever-changing environments.

AI offers a solution, enabling robots to learn from data and experience: training them to identify objects, navigate complex environments, and interact safely with humans. This technology is already being used to develop more intelligent and capable robots, but to unlock their true potential for complex tasks like first response, search and rescue, or even factory work, we need to accelerate AI development by leveraging the concept of “Practical Human Supervised Autonomy.”

Practical Human Supervised Autonomy

Practical Human Supervised Autonomy draws inspiration from recent AI advancements, such as large language models (LLMs), which demonstrate the power of AI-human collaboration. We can take a similar approach to building ‘smart’ robotic systems. By developing neural networks that learn to perform specific tasks, with an emphasis on learning how to make decisions about those tasks, we can create robots that eventually learn to perform complex missions, such as delivering a package across several streets and into an office building.

“An automated machine that does just one thing is not a robot. It is simply automation. A robot should have the capability of handling a range of jobs…”

Joseph Engelberger, physicist, engineer, businessman, and known as “the father of robotics”

One of the challenges in developing ‘smart’ robotic systems is the sheer number of situations a robot might encounter and the range of skills it needs in order to make decisions about them. Instead of teaching robots how to tackle complex missions and tasks “from the ground up,” a “practical autonomy” approach combines new cognitive and sensory technologies (for example, an understanding of the objects in a robot’s field of view) with a synthesis of the decisions made by human professionals. This allows robots to make routine decisions as well as humans do, while collaborating with humans on higher-level decision-making.

In essence, practical autonomy allows robots to learn from their human supervisors continuously and incrementally, reducing the “cognitive load” on those supervisors as the robots are taught new and more elaborate skills.

This will allow robots to learn to solve complex tasks in the real world, in both specific and generalized situations, without constant human intervention, while keeping “open communication” always available and thereby maintaining the required level of human-based decision-making in any process.
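
The supervised-autonomy loop described above can be sketched in code. This is a minimal, hypothetical illustration (the class, threshold, and helper names are my assumptions, not part of any real system): the robot acts alone when its model is confident, escalates to a supervisor otherwise, and logs every escalation as training data for the next version.

```python
# Hypothetical sketch of a human-supervised autonomy loop: the robot acts on
# decisions it is confident about and escalates the rest to a supervisor,
# logging each escalation as training data for future "software versions".
from dataclasses import dataclass, field

@dataclass
class SupervisedRobot:
    confidence_threshold: float = 0.8
    training_log: list = field(default_factory=list)

    def decide(self, situation: str, model_confidence: float, ask_human) -> str:
        """Return an action: autonomous if confident, else escalate."""
        if model_confidence >= self.confidence_threshold:
            return f"auto:{situation}"
        # Escalate: the supervisor's answer is recorded so a later model
        # update can handle this situation autonomously.
        action = ask_human(situation)
        self.training_log.append((situation, action))
        return f"supervised:{action}"

robot = SupervisedRobot()
easy = robot.decide("follow clear sidewalk", 0.95, ask_human=lambda s: "")
hard = robot.decide("road fully blocked", 0.30,
                    ask_human=lambda s: "cross the street")
```

Over many missions, the `training_log` is exactly the human-decision data that practical autonomy proposes to distill into new robot skills.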

To better understand the concept of Practical Autonomy, let’s consider an example with GPT-4, a large language model. If I ask GPT-4 to write an entire Netflix drama series, I might get a coherent piece of text, but it is likely far from perfect, lacking the creativity and expertise of a professional writer.

A human-supervised AI learning process can help bridge this gap by allowing GPT-4 to collaborate with an experienced human writer. The writer can provide critical feedback and guidance at crucial points, while GPT-4 takes care of the rest. Over time, we can use the data from their collaboration to create AI models that are more and more capable of writing a great TV series on their own, in the style of the supervising writers. At the same time, the supervising writers are always available to provide high-level guidance and make critical decisions.

“Using human supervision and AI-driven skills allows machines to perform complex tasks today, while acting autonomously on their own in more straightforward scenarios.”

Matteo Shapira

Let’s examine how this approach would work in last-mile delivery.

Last-Mile Delivery with Robotics

Let’s look at a real-world use case where we employ AI to enable an autonomous robot to deliver a package successfully from a van to a specific office in a high-rise on the next block. The robot must carry out a series of tasks of varying complexity to complete its mission. At a basic level, the robot must walk or fly in a straight line after being given a specific vector, which is relatively straightforward. But at the complex end of things, that robot needs to understand things like:

  • Is this an entrance to a building?
  • Where is the elevator in the building?
  • Can I enter an elevator packed with people?
  • How do I know which floor the office is on?
  • And many more dynamic and unpredictable sub-tasks.

We can map out these processes into a stack of generalized skill sets, starting with the simplest and ending with the most complex:

  • Localization and navigation
  • Understanding the world around it (using vision and classification models)
  • Making tactical decisions (such as how to bypass a significant obstacle)
  • Making strategic decisions (such as forming an overall mission plan from mission-specific data before breaking it into sub-parts)
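
The four-layer skill stack above can be sketched as an ordered hierarchy. This is an illustrative sketch only; the enum values and the toy task-to-skill mapping are my assumptions, not an actual robotics API.

```python
# Hypothetical sketch of the four-layer skill stack, ordered from simplest to
# most complex. A mission planner would dispatch each sub-task to the lowest
# skill layer able to handle it.
from enum import IntEnum

class Skill(IntEnum):
    LOCALIZATION_NAVIGATION = 1   # where am I, how do I move
    WORLD_UNDERSTANDING = 2       # vision and classification models
    TACTICAL_DECISIONS = 3        # e.g. bypassing a significant obstacle
    STRATEGIC_DECISIONS = 4       # mission-level planning

def required_skill(task: str) -> Skill:
    """Toy mapping from a sub-task to the skill layer it needs (illustrative)."""
    table = {
        "drive straight": Skill.LOCALIZATION_NAVIGATION,
        "classify entrance": Skill.WORLD_UNDERSTANDING,
        "bypass obstacle": Skill.TACTICAL_DECISIONS,
        "plan mission": Skill.STRATEGIC_DECISIONS,
    }
    return table[task]
```

Because the layers are ordered, "autonomy level" can be read off directly: the higher the layer a robot can handle without help, the less cognitive load remains with the supervisor.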

When a robot can perform all the skills in a real-life situation, in all conditions, we can consider it “completely autonomous.” At the other end of the spectrum, the robot may only be capable of navigating from point A to point B, so a human supervisor must take on nearly all the “cognitive load” of the mission and direct the robot through multiple “points” until the task is complete.

In the middle of this range is “Partial Autonomy,” where the robot has learned enough skills to perform many tasks autonomously, while the human supervisor must intervene in any high-level decision-making processes needed to complete the task (therefore, sharing the “cognitive load”).

The Foundations of Practical Autonomy

Now that we understand “skills” and “cognitive load,” we can lay out the foundations for the level of Practical Autonomy that is required to achieve a breakthrough in robotics:

  • Human supervision combined with AI-driven skills allows man and machine to perform complex tasks today: the robot relies on its supervisor for hard decisions in difficult situations while acting autonomously in more straightforward scenarios.
  • By breaking down the actions and decisions a human supervisor makes into observable and quantifiable situations, we can gradually turn many of these processes into “high-level” skill sets that the robots learn to perform autonomously. Each “software version upgrade” then requires less and less “cognitive load” from the human supervisor.
  • Once the “cognitive load” ratio between supervisors and robots reaches a certain threshold, more complex scenarios become possible, requiring minimal direct human intervention. 
  • By continuing to iterate through this process, given many different real-world use cases across other markets and platforms, robots will ultimately reach the point of genuine, complete autonomy.
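
The "cognitive load ratio" and its threshold can be made concrete with a back-of-the-envelope sketch. The function names, the 10% threshold, and the example counts below are all my assumptions, chosen purely for illustration.

```python
# Illustrative sketch of the "cognitive load" ratio: the fraction of mission
# decisions a human supervisor still has to make. Once it drops below some
# threshold, more complex scenarios can be unlocked with minimal intervention.
def cognitive_load(human_decisions: int, total_decisions: int) -> float:
    """Share of decisions still made by the human supervisor."""
    return human_decisions / total_decisions

def can_unlock_complex_scenarios(load: float, threshold: float = 0.1) -> bool:
    """Gate harder missions on the supervisor's remaining decision share."""
    return load < threshold

# Each "software version upgrade" shifts decisions from human to robot.
v1 = cognitive_load(human_decisions=40, total_decisions=100)  # early version
v2 = cognitive_load(human_decisions=8, total_decisions=100)   # after retraining
```

Tracking this ratio per software version gives the iteration loop in the list above a measurable exit condition.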

Let’s return to our package delivery example. Consider all the necessary steps to deliver a package from a centralized warehouse to the final customer. We can sort these potential steps into a hierarchy, ranging from global tasks, which then break down to individual tasks, further divided into sub-tasks.

The significant tasks can be conceptualized as the large building blocks required to fulfill the current task:

  • Travel to an outdoor destination: This includes leaving the parking lot, navigating the streets, and crossing the street.
  • Enter the building: This includes finding the entrance, taking the elevator, and getting to the correct floor.
  • Find the office: This includes navigating the hallways and identifying the correct door.
  • Deliver the package: This includes knocking on the correct door, handing over the package, and obtaining a signature (if required).
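
The global-task-to-sub-task hierarchy above maps naturally onto a tree structure. The sketch below is a plain-data illustration of that hierarchy (the dictionary layout and the `flatten` helper are my assumptions, not a real mission-planning format).

```python
# Hypothetical sketch of the delivery mission as a task hierarchy: global
# tasks break down into the sub-tasks listed above.
mission = {
    "travel to outdoor destination": [
        "leave the parking lot", "navigate the streets", "cross the street",
    ],
    "enter the building": [
        "find the entrance", "take the elevator", "reach the correct floor",
    ],
    "find the office": [
        "navigate the hallways", "identify the correct door",
    ],
    "deliver the package": [
        "knock on the door", "hand over the package", "obtain a signature",
    ],
}

def flatten(tree: dict) -> list:
    """Expand the hierarchy into the ordered list of sub-tasks to execute."""
    return [sub for subtasks in tree.values() for sub in subtasks]
```

A robot's autonomy level can then be described as "which entries in this list it can execute without escalating", which is exactly how the cognitive load gets shared.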

Then, for each global task, we can break down the individual sub-tasks required to succeed. Let’s look at some of the tasks needed to “travel to an outdoor destination”:

  • Plan a course to address using “pedestrian paths.” 
  • Identify and confirm a safe path to the next segment of the plan.
  • Travel to the next segment.
  • Assess any obstacles, whether static (debris) or dynamic (people).
  • Circumvent those obstacles.
  • Validate arrival.
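
The sub-task sequence above is essentially a per-segment loop with an escalation branch. Here is a minimal sketch of that loop; all names, and the idea of handing unsafe segments to the supervisor, are my assumptions for illustration.

```python
# Minimal sketch of the "travel to an outdoor destination" loop: for each
# path segment, confirm it is safe, travel it, circumvent any obstacle, and
# finally validate arrival. Unsafe segments are escalated to the supervisor.
def travel_outdoor(segments, is_safe, has_obstacle):
    log = []
    for segment in segments:
        if not is_safe(segment):
            log.append(f"escalate:{segment}")   # hand off to the supervisor
            continue
        log.append(f"travel:{segment}")
        if has_obstacle(segment):
            log.append(f"circumvent:{segment}")
    log.append("validate-arrival")
    return log

plan = ["sidewalk", "crosswalk", "plaza"]
log = travel_outdoor(plan,
                     is_safe=lambda s: s != "crosswalk",
                     has_obstacle=lambda s: s == "plaza")
```

In a real system, `is_safe` and `has_obstacle` would be the vision and classification skills from the stack; here they are stubbed with lambdas to keep the control flow visible.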

To have our robot “travel to the outdoor destination” on its own, it mainly needs to be proficient in the lower subset of AI skills: understanding its location, planning a path towards the destination, assessing and circumventing obstacles, and visually recognizing elements such as a “clear path,” “obstacles,” and “my destination.” Most of the time, and in most cases, the robot should be fine carrying out these tasks independently. But occasionally, it might run into an unforeseen challenge: an obstacle blocking the entire road, a jam-packed crowd of people, or bad weather.

Real-World Supply Chain Robotics: A human supervisor can help robots navigate first-time interactions and obstacles, which later become incorporated as autonomous capabilities.

In these situations, a human supervisor who can intervene in the robot’s plan with simple instructions, such as “Wait for the rain to stop and stand under a canopy” or “You can sidestep the big obstacle by crossing the street,” would help the robot complete the delivery. And with the following software update, the robots should be able to handle these specific challenges directly.

From a viability standpoint, this type of scenario enables a single supervisor to oversee hundreds of delivery robots, intervening only in exceptional cases. Each iteration (version) of the robot then requires less and less attention, as the supervisors’ feedback is implemented as additional AI skills and the robot takes on more and more of the decision-making process.
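
The "one supervisor, hundreds of robots" claim can be sanity-checked with simple arithmetic. The sketch below is back-of-the-envelope only; the escalation rates and intervention times are invented numbers, not measured data.

```python
# Back-of-the-envelope sketch of supervisor capacity: how many robots one
# person can oversee if each robot escalates only rarely and each
# intervention takes a few minutes. All numbers are assumptions.
def robots_per_supervisor(escalations_per_robot_hour: float,
                          minutes_per_intervention: float) -> int:
    """Robots one supervisor can cover in a 60-minute hour."""
    minutes_needed_per_robot = (escalations_per_robot_hour
                                * minutes_per_intervention)
    return int(60 / minutes_needed_per_robot)

# Early version: 0.5 escalations per robot-hour, 4 minutes each.
v1 = robots_per_supervisor(0.5, 4.0)
# After retraining on supervisor feedback: 0.125 escalations per robot-hour.
v2 = robots_per_supervisor(0.125, 4.0)
```

The point of the exercise is the trend, not the exact figures: each software version that absorbs supervisor feedback multiplies how many robots one person can oversee.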

Of course, last-mile delivery is just one of many practical applications for Practical Autonomy. By having robots learn more advanced skills across varying contexts and scenarios, the path towards true, complete autonomy may be closer than we imagine.

Matteo Shapira, Co-Founder and Chief Experience Officer at XTEND
Matteo is the co-founder and CXO of XTEND, provider of human-guided autonomous machine systems that enable any operator to perform accurate manoeuvres and actions, in any environment, with minimal training. The company’s patented XOS operating system fuses the best of human intelligence and machine autonomy to enhance the operator’s abilities while reducing the need for physical confrontation, thereby minimizing casualties and injuries. Hundreds of XTEND’s systems are already operationally deployed worldwide.