Physical AI Powering the New Age of Industrial Operations 2025
Enhanced perception

Advances in sensors and AI have dramatically improved robots’ ability to perceive their surroundings. Affordable
high-resolution cameras, light detection and ranging (LiDAR) and next-generation tactile sensors, among other
sensors, give robots richer raw inputs, while advanced computer vision algorithms (powered by deep learning)
enable visual perception approaching human-level capabilities. Robots can now recognize and interpret complex
environments in real time – identifying objects, recognizing their 3D orientation and assessing their physical
properties – essential prerequisites for developing an understanding of how to interact with objects. These
advances allow robots to “see” and comprehend an object and its environment with unprecedented clarity.
Autonomous decision-making and planning

Innovations in AI and software have enabled robots to make intelligent decisions in real time. Instead of rigid
pre-programming, robots now exploit reinforcement learning and simulation to learn behaviours through trial
and error in virtual environments. Advanced simulators (e.g. high-fidelity physics simulators) and domain
randomization techniques (e.g. randomization of parameters such as lighting or friction) are closing the
simulation-to-reality gap, so that behaviours learned in simulation transfer seamlessly to real machines. Robots
also increasingly benefit from powerful foundation models that integrate vision, language and action. These
models, such as Google DeepMind’s Gemini Robotics6 and Nvidia’s Isaac GR00T,7 ingest multimodal inputs
and generate task-appropriate outputs – allowing for intuitive human–robot interactions and superior contextual
understanding. This enables robust workflow planning: given a goal (e.g. unloading a shipment), the system
determines a sequenced set of actions (use the forklift to unload, cut the banderole, open the packages, etc.).
This progression enables robots to evolve from executing isolated motions to performing coherent, multistep
tasks, approaching human-level task intuition and planning capabilities. In essence, robots are enabled to “think”
and plan tasks with a level of flexibility and context-awareness previously unattainable.
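The domain randomization technique mentioned above can be illustrated with a minimal sketch. The specific parameters and ranges below (friction, lighting, object mass) are illustrative assumptions, not values from the report; the point is that each training episode runs under freshly randomized physics, so a policy cannot overfit to one fixed simulation:

```python
import random

# Hypothetical parameter ranges for domain randomization; the specific
# parameters and bounds are illustrative, not taken from the report.
RANDOMIZATION_RANGES = {
    "friction_coefficient": (0.4, 1.2),
    "lighting_intensity": (0.5, 1.5),
    "object_mass_kg": (0.2, 2.0),
}

def sample_domain():
    """Draw one randomized simulation configuration."""
    return {name: random.uniform(lo, hi)
            for name, (lo, hi) in RANDOMIZATION_RANGES.items()}

def train(episodes=1000):
    """Toy training loop: every episode uses a newly randomized sim
    configuration, which is the idea behind closing the
    simulation-to-reality gap."""
    configs = []
    for _ in range(episodes):
        cfg = sample_domain()
        configs.append(cfg)
        # policy.update(run_episode(simulator(cfg)))  # placeholder step
    return configs
```

In a real pipeline the placeholder step would invoke a high-fidelity physics simulator; only the sampling pattern is shown here.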
Dexterous manipulation and mobility

Advances in materials, actuators and robotic designs have greatly expanded what robots can physically do.
Hardware breakthroughs – from high-precision force-controlled motors to soft robotic grippers – give machines
much more dexterity in handling objects. Robots can now grasp irregular or delicate items reliably, rather than
being limited to rigid, predefined motions. This is complemented by AI-driven control software that adjusts grip
and force in real time. Notably, the incorporation of a sense of touch through modern tactile sensors is a primary
enabler of human-level dexterity, allowing robots to finely manipulate objects through feedback of pressure and
slip. Longer battery life significantly increases the uptime of mobile robots, supporting longer and more
autonomous deployments. Moreover, robotics is no longer confined to traditional form
factors. Innovations have introduced quadrupeds, humanoids, mobile manipulators and hybrid forms,
broadening the range of industrial applications and increasing the scope of feasible automation. These physical
innovations enable robots to “act” on the world with far greater skill and autonomy.
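The closed-loop adjustment of grip force from tactile slip feedback described above can be sketched as a toy controller. This is an illustrative model, not a real robot API: the gains, bounds, and update rule are assumptions chosen to show the feedback pattern (tighten on slip, relax gently otherwise to avoid crushing delicate items):

```python
def adjust_grip(force, slip_detected, *, min_force=1.0, max_force=20.0,
                slip_gain=1.5, relax_rate=0.05):
    """One control step of a toy grip controller driven by tactile
    slip feedback (illustrative only).

    - On detected slip: scale force up multiplicatively to re-secure
      the object.
    - Otherwise: decay force slowly toward the minimum so a delicate
      object is not crushed.
    The result is clamped to the actuator's force limits.
    """
    if slip_detected:
        force *= slip_gain
    else:
        force -= relax_rate * (force - min_force)
    return max(min_force, min(max_force, force))

# Example: a slip event raises the force, quiet steps relax it again.
f = adjust_grip(2.0, slip_detected=True)    # force increases
f = adjust_grip(f, slip_detected=False)     # force relaxes slightly
```

Real systems fuse pressure and slip signals at kilohertz rates; this sketch only shows the qualitative feedback loop.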
1.2 Enhanced capabilities enabling end-to-end automation
These enhanced capabilities have driven the evolution
of robotics from (1) rule-based robotics, which are
explicitly programmed, to (2) training-based robotics,
which acquire skills in the real world and through
simulation training, to (3) context-based robotics,
which perform tasks autonomously via zero-shot
learning, without explicit training. Advances in
all three robotic systems are transforming operations
and expanding the scope of automation to tasks that
previously could not be automated.8
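The three-paradigm taxonomy, and the idea of a layered strategy aligned with task variability, can be summarized in a toy selector. The decision rule below is an illustrative reading of the text, not a prescription from the report:

```python
def select_robotics_approach(task_variability, has_training_data):
    """Toy sketch (illustrative only) mapping the report's three
    robotics paradigms to task characteristics.

    task_variability: "low" or "high"
    has_training_data: whether real-world or simulation training
    for the task is available.
    """
    if task_variability == "low":
        # Fixed, repeatable tasks suit explicit programming.
        return "rule-based"
    if has_training_data:
        # Skills learned via real-world trials or simulation.
        return "training-based"
    # Novel tasks handled zero-shot by foundation-model-driven systems.
    return "context-based"
```

In practice the choice also weighs economic viability and process characteristics, as the text notes; this sketch captures only the task-variability axis.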
At the heart of this transformation, however,
lies the coexistence of all three foundational robotics systems, each expanding in automation
scope and sophistication. Together, they form a
complementary ecosystem. Rather than replacing
one another, they enable a layered automation
strategy, aligned with operational needs (e.g.
degrees of task variability) and economic
considerations. Furthermore, as factories and
warehouses move towards greater automation,
manufacturers and warehouse operators will deploy
a mix of robotic systems and embodiments – from
autonomous mobile robots (AMRs) to humanoids –
guided by task requirements, economic viability and
process characteristics.