Reinforcement Learning (RL)
Recent Posts
Big Tech eyes Industrial AI and Robotics
An overview of Big Tech's inroads into manufacturing and industrial AI. From bin picking to robotic wire arc additive manufacturing (WAAM), the pace of industrial technology advances continues to pick up as digital transformation takes hold.
Assembly Line
Transferring Industrial Robot Assembly Tasks from Simulation to Reality
By lessening the complexity of the hardware architecture, we can significantly increase the capabilities and ways of using the equipment, which makes it financially efficient even for low-volume tasks. Moreover, further development of the solution can happen mostly in software, which is easier, faster, and cheaper than hardware R&D. Having a chipset architecture allows us to start using AI algorithms, a huge prospect. To use RL for challenging assembly tasks and address the reality gap, we developed IndustReal. IndustReal is a set of algorithms, systems, and tools for robots to solve assembly tasks in simulation and transfer these capabilities to the real world.
We introduce the simulation-aware policy update (SAPU) that provides the simulated robot with knowledge of when simulation predictions are reliable or unreliable. Specifically, in SAPU, we implement a GPU-based module in NVIDIA Warp that checks for interpenetrations as the robot is learning how to assemble parts using RL.
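To make the mechanism concrete, here is a minimal sketch of the idea in Python (our own illustrative names, not the IndustReal API; the real check runs as a GPU kernel in NVIDIA Warp): transitions whose contact predictions show deep interpenetration are down-weighted in the policy update.

```python
import numpy as np

def sapu_weights(penetration_depths, tolerance=1e-3):
    """Down-weight samples whose contact predictions are unreliable.

    penetration_depths: per-sample maximum interpenetration (metres)
    between the held part and the target part, as reported by a
    collision/SDF query (in IndustReal this check runs on the GPU).
    """
    # Samples within tolerance are trusted; deeper penetrations get
    # exponentially smaller weight in the policy update.
    excess = np.clip(penetration_depths - tolerance, 0.0, None)
    return np.exp(-excess / tolerance)

def weighted_policy_loss(log_probs, advantages, penetration_depths):
    # Standard policy-gradient surrogate, reweighted by simulation reliability.
    w = sapu_weights(penetration_depths)
    return -np.mean(w * log_probs * advantages)
```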
We introduce a signed distance field (SDF) reward to measure how closely simulated parts are aligned during the assembly process. An SDF is a mathematical function that can take points on one object and compute the shortest distances to the surface of another object. It provides a natural and general way to describe alignment between parts, even when they are highly symmetric or asymmetric.
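A hedged sketch of how such a reward could be computed, assuming a hypothetical `socket_sdf` callable that returns signed distances for query points (the actual implementation operates on the assembly meshes):

```python
import numpy as np

def sdf_reward(plug_points, socket_sdf, scale=0.05):
    """Reward that increases as sampled plug surface points approach
    the socket surface (illustrative helper, not the paper's code).

    plug_points: (N, 3) points sampled on the plug, in the socket frame.
    socket_sdf:  callable mapping (N, 3) points -> signed distances.
    """
    d = np.abs(socket_sdf(plug_points))       # distance to socket surface
    return float(np.exp(-d.mean() / scale))   # 1 when aligned, -> 0 when far

# Toy example: socket modelled as an axis-aligned cylindrical bore of radius r.
def bore_sdf(points, r=0.01):
    return np.linalg.norm(points[:, :2], axis=1) - r
```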
We also propose a policy-level action integrator (PLAI), a simple algorithm that reduces steady-state (that is, long-term) errors when deploying a learned skill on a real-world robot. We apply the incremental adjustments to the previous instantaneous target pose to produce the new instantaneous target pose. Mathematically (akin to the integral term of a classical PID controller), this strategy generates an instantaneous target pose that is the sum of the initial pose and the actions generated by the robot over time. This technique can minimize errors between the robotβs final pose and its final target pose, even in the presence of physical complexities.
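The core of PLAI is just a running integration of policy actions into the commanded target pose; a minimal sketch (illustrative class name, not the authors' code) might look like this:

```python
import numpy as np

class PolicyLevelActionIntegrator:
    """Minimal sketch of PLAI: the policy outputs small pose increments,
    and the target pose sent to the low-level controller is the running
    sum of those increments on top of the initial pose."""

    def __init__(self, initial_pose):
        self.target_pose = np.asarray(initial_pose, dtype=float).copy()

    def step(self, action_delta):
        # Integrate the increment into the previous *target* pose rather
        # than the measured pose, like the integral term of a PID loop,
        # so steady-state tracking error is driven toward zero.
        self.target_pose += np.asarray(action_delta, dtype=float)
        return self.target_pose
```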
Data-Driven Wind Farm Control via Multiplayer Deep Reinforcement Learning
This brief proposes a novel data-driven control scheme to maximize the total power output of wind farms subject to strong aerodynamic interactions among wind turbines. The proposed method is model-free and has strong robustness, adaptability, and applicability. Particularly, distinct from the state-of-the-art data-driven wind farm control methods that commonly use the steady-state or time-averaged data (such as turbines' power outputs under steady wind conditions or from steady-state models) to carry out learning, the proposed method directly mines in-depth the time-series data measured at turbine rotors under time-varying wind conditions to achieve farm-level power maximization. The control scheme is built on a novel multiplayer deep reinforcement learning method (MPDRL), in which a special critic-actor-distractor structure, along with deep neural networks (DNNs), is designed to handle the stochastic feature of wind speeds and learn optimal control policies subject to a user-defined performance metric. The effectiveness, robustness, and scalability of the proposed MPDRL-based wind farm control method are tested by prototypical case studies with a dynamic wind farm simulator (WFSim). Compared with the commonly used greedy strategy, the proposed method leads to clear increases in farm-level power generation in case studies.
Robotic deep RL at scale: Sorting waste and recyclables with a fleet of robots
In "Deep RL at Scale: Sorting Waste in Office Buildings with a Fleet of Mobile Manipulators", we discuss how we studied this problem through a recent large-scale experiment, where we deployed a fleet of 23 RL-enabled robots over two years in Google office buildings to sort waste and recycling. Our robotic system combines scalable deep RL from real-world data with bootstrapping from training in simulation and auxiliary object perception inputs to boost generalization, while retaining the benefits of end-to-end training, which we validate with 4,800 evaluation trials across 240 waste station configurations.
A new intelligent fault diagnosis framework for rotating machinery based on deep transfer reinforcement learning
Artificial intelligence algorithms have gained growing interest for identifying fault types in rotary machines, but they constitute a high-efficiency rather than a human-like module. Hence, in order to build a human-like fault identification module that can learn knowledge from the environment, this paper proposes a deep reinforcement learning framework that provides an end-to-end training mode and a human-like learning process based on an improved Double Deep Q Network. In addition, to improve the convergence properties of the deep reinforcement learning algorithm, the parameters of the early layers of the convolutional neural networks are transferred from a convolutional auto-encoder trained in an unsupervised manner. The experimental results show that the proposed framework can efficiently extract fault features from raw time-domain data, achieves higher accuracy than other deep learning models on balanced samples, and performs better on imbalanced samples.
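As a rough illustration of the weight-transfer step (our own toy architecture in PyTorch, not the paper's exact network), the pre-trained auto-encoder's convolutional encoder is copied into both Q-networks of the Double DQN before RL training begins:

```python
import torch
import torch.nn as nn

def make_encoder():
    # 1-D convolutional encoder over raw time-domain vibration signals.
    return nn.Sequential(
        nn.Conv1d(1, 16, kernel_size=64, stride=8), nn.ReLU(),
        nn.Conv1d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
        nn.AdaptiveAvgPool1d(1), nn.Flatten(),        # -> 32-dim feature
    )

class QNetwork(nn.Module):
    def __init__(self, n_fault_classes=10):
        super().__init__()
        self.encoder = make_encoder()
        self.head = nn.Sequential(nn.Linear(32, 128), nn.ReLU(),
                                  nn.Linear(128, n_fault_classes))

    def forward(self, x):                             # x: (batch, 1, signal_len)
        return self.head(self.encoder(x))

# Assume `cae_encoder` is the encoder half of a convolutional auto-encoder
# trained without labels; copy its weights into the online and target nets.
cae_encoder = make_encoder()
q_online, q_target = QNetwork(), QNetwork()
q_online.encoder.load_state_dict(cae_encoder.state_dict())
q_target.load_state_dict(q_online.state_dict())
```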
AI and the chocolate factory
"After about 72 hours of training with the digital twin (on a standard computer; about 24 hours on computer clusters in the cloud), the AI is ready to control the real machine. That's definitely much faster than humans developing these control algorithms," Bischoff says. Using reinforcement learning, the AI has developed a solution strategy in which all the chocolate bars on the front conveyor belts are transported on as quickly as possible and the exact speed is only controlled on the last conveyor belt, which, interestingly, is quite different from the strategy of a conventional control system.
The researchers led by Martin Bischoff were able to make their approach even more practical by compressing and compiling the trained control models in such a way that they run cycle-synchronously on the Siemens Simatic controllers in real time. Thomas Menzel, who is responsible for the department Digital Machines and Innovation within the business segment Production Machines, sees great potential in the methodology of letting AI learn complex control tasks independently on the digital twin: "Under the name AI Motion Trainer, this method is now helping several co-creation partners to develop application-specific optimized controls in a much shorter time. Production machines are now no longer limited to tasks for which a PLC control program has already been developed but can realize all tasks that can be learned by AI. The integration with our SIMATIC portfolio makes the use of this technology particularly industry-grade."
Table Tennis: A Research Platform for Agile Robotics
Robot learning has been applied to a wide range of challenging real world tasks, including dexterous manipulation, legged locomotion, and grasping. It is less common to see robot learning applied to dynamic, high-acceleration tasks requiring tight-loop human-robot interactions, such as table tennis. There are two complementary properties of the table tennis task that make it interesting for robotic learning research. First, the task requires both speed and precision, which puts significant demands on a learning algorithm. At the same time, the problem is highly-structured (with a fixed, predictable environment) and naturally multi-agent (the robot can play with humans or another robot), making it a desirable testbed to investigate questions about human-robot interaction and reinforcement learning. These properties have led to several research groups developing table tennis research platforms.
Could Reinforcement Learning play a part in the future of wafer fab scheduling?
However, as the use of RL for JSS problems is still a novelty, it is not yet at the level of sophistication that the semiconductor industry would require. So far, the approaches can handle standard small problem scenarios but cannot handle flexible problems or batching decisions. Many constraints need to be obeyed in wafer fabs (e.g., timelinks and reticle availability) and it is not easily guaranteed that the agent will adhere to them. The objective set for the agent must be defined ahead of training, which means that any change made afterwards will require a repeat of training before new decisions can be obtained. This is less problematic for solving the instance proposed by Tassel et al., although their approach relies on a specifically modelled reward function which would not easily adapt to changing objectives.
Yokogawa and DOCOMO Successfully Conduct Test of Remote Control Technology Using 5G, Cloud, and AI
Yokogawa Electric Corporation and NTT DOCOMO, INC. announced today that they have conducted a proof-of-concept test (PoC) of a remote control technology for industrial processing. The PoC test involved the use in a cloud environment of an autonomous control AI, the Factorial Kernel Dynamic Policy Programming (FKDPP) algorithm developed by Yokogawa and the Nara Institute of Science and Technology, and a fifth-generation (5G) mobile communications network provided by DOCOMO. The test, which successfully controlled a simulated plant processing operation, demonstrated that 5G is suitable for the remote control of actual plant processes.
In a World First, Yokogawa and JSR Use AI to Autonomously Control a Chemical Plant for 35 Consecutive Days
Yokogawa Electric Corporation (TOKYO: 6841) and JSR Corporation (JSR, TOKYO: 4185) announce the successful conclusion of a field test in which AI was used to autonomously run a chemical plant for 35 days, a world first. This test confirmed that reinforcement learning AI can be safely applied in an actual plant, and demonstrated that this technology can control operations that have been beyond the capabilities of existing control methods (PID control/APC) and have up to now necessitated the manual operation of control valves based on the judgements of plant personnel. The initiative described here was selected for the 2020 Projects for the Promotion of Advanced Industrial Safety subsidy program of the Japanese Ministry of Economy, Trade and Industry.
The AI used in this control experiment, the Factorial Kernel Dynamic Policy Programming (FKDPP) protocol, was jointly developed by Yokogawa and the Nara Institute of Science and Technology (NAIST) in 2018, and was recognized at an IEEE International Conference on Automation Science and Engineering as being the first reinforcement learning-based AI in the world that can be utilized in plant management.
Given the numerous complex physical and chemical phenomena that impact operations in actual plants, there are still many situations where veteran operators must step in and exercise control. Even when operations are automated using PID control and APC, highly-experienced operators have to halt automated control and change configuration and output values when, for example, a sudden change occurs in atmospheric temperature due to rainfall or some other weather event. This is a common issue at many companies' plants. Regarding the transition to industrial autonomy, a very significant challenge has been instituting autonomous control in situations where until now manual intervention has been essential, and doing so with as little effort as possible while also ensuring a high level of safety. The results of this test suggest that this collaboration between Yokogawa and JSR has opened a path forward in resolving this longstanding issue.
Action-limited, multimodal deep Q learning for AGV fleet route planning
In traditional operating models, a navigation system completes all calculations, i.e., shortest-path planning in a static environment, before the AGVs start moving. However, due to constantly incoming offers, changes in vehicle availability, etc., this creates a huge and intractable optimization problem. Meanwhile, an optimal navigation strategy for an AGV fleet cannot be achieved if it fails to consider the fleet and delivery situation in real time. Such dynamic route planning is more realistic and requires the ability to autonomously learn complex environments. The deep Q-network (DQN), which inherits the capabilities of deep learning and reinforcement learning, provides a framework well suited to making decisions in discrete motion sequence problems.
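For reference, the basic DQN machinery behind such a planner is compact; the sketch below (illustrative dimensions and action set, not the paper's model) shows epsilon-greedy action selection and a single temporal-difference update against a target network:

```python
import random
import torch
import torch.nn as nn

# Toy DQN for discrete AGV motion decisions. States are float tensors of
# shape (STATE_DIM,) encoding the fleet/delivery situation; the five
# actions could stand for {up, down, left, right, wait}.
STATE_DIM, N_ACTIONS, GAMMA = 32, 5, 0.99

q_net = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU(),
                      nn.Linear(128, N_ACTIONS))
target_net = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU(),
                           nn.Linear(128, N_ACTIONS))
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def select_action(state, epsilon=0.1):
    if random.random() < epsilon:                      # explore
        return random.randrange(N_ACTIONS)
    with torch.no_grad():                              # exploit
        return int(q_net(state).argmax())

def td_update(state, action, reward, next_state, done):
    q = q_net(state)[action]
    with torch.no_grad():
        target = reward + (0.0 if done else GAMMA * target_net(next_state).max())
    loss = nn.functional.mse_loss(q, torch.as_tensor(target, dtype=torch.float32))
    optimizer.zero_grad(); loss.backward(); optimizer.step()
```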
Improving PPA In Complex Designs With AI
The goal of chip design has always been to optimize power, performance, and area (PPA), but results can vary greatly even with the best tools and highly experienced engineering teams. AI works best in design when the problem is clearly defined in a way that AI can understand. So an IC designer must first see whether there is a problem that can be tied to a system's ability to adapt, learn, and generalize knowledge/rules, and then apply that knowledge and those rules to an unfamiliar scenario.
Bridge the gap between Process Control and Reinforcement Learning with QuarticGym
Modern process control algorithms are the key to the success of industrial automation. The increased efficiency and quality create value that benefits everyone from the producers to the consumers. The question then is, could we further improve it?
From AlphaGo to robot-arm control, deep reinforcement learning (DRL) has tackled a variety of tasks that traditional control algorithms cannot solve. However, it requires a large and compactly sampled dataset, or many interactions with the environment, to succeed. In many cases, we need to verify and test the reinforcement learning agent in a simulator before putting it into production. However, few simulations of industrial-scale production processes are publicly available. In order to give back to the research community and encourage future work on applying DRL to process control problems, we built and published a simulation playground with data for every interested researcher to experiment with and benchmark their own controllers. The simulators are all written in the easy-to-use OpenAI Gym format. Each simulation also has a corresponding data sampler, a pre-sampled d4rl-style dataset to train offline controllers, and a set of preconfigured online and offline deep learning algorithms.
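Because the simulators follow the Gym interface, interacting with one should look like a standard Gym loop; the snippet below is a sketch with a hypothetical environment ID, so substitute whichever QuarticGym simulator is registered in your installation:

```python
import gym  # the QuarticGym simulators follow the OpenAI Gym interface

# Hypothetical environment ID used purely for illustration.
env = gym.make("PenSimEnv-v0")

obs = env.reset()
done, total_reward = False, 0.0
while not done:
    action = env.action_space.sample()        # replace with a trained controller
    obs, reward, done, info = env.step(action)
    total_reward += reward
print("episode return:", total_reward)
```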
Artificial intelligence optimally controls your plant
Until now, heating systems have mainly been controlled individually or via a building management system. Building management systems follow a preset temperature profile, meaning they always try to adhere to predefined target temperatures. The temperature in a conference room changes in response to environmental influences like sunlight or the number of people present. Simple (PI or PID) controllers are used to make constant adjustments so that the measured room temperature is as close to the target temperature values as possible.
We believe that the best alternative is learning a control strategy by means of reinforcement learning (RL). Reinforcement learning is a machine learning method that has no explicit (learning) objective. Instead, an "agent" with as complete a knowledge of the system state as possible learns the manipulated variable changes that maximize a "reward" function defined by humans. Using algorithms from reinforcement learning, the agent, meaning the control strategy, can be trained from both current and recorded system data. This requires measurements for the manipulated variable changes that have been carried out, for the (resulting) changes to the system state over time, and for the variables necessary for calculating the reward.
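As an illustration of how such a reward might be defined (our own formulation, not necessarily the project's), one can penalize deviation from the target temperature profile and, more weakly, the energy consumed, while the agent's actions are the manipulated-variable changes such as valve setpoints:

```python
def reward(room_temps, target_temps, heating_power_kw,
           comfort_weight=1.0, energy_weight=0.05):
    """Illustrative reward for an RL heating agent: penalise deviation
    from the target temperature profile and, more weakly, energy use."""
    comfort_penalty = sum(abs(t - t_ref)
                          for t, t_ref in zip(room_temps, target_temps))
    return -comfort_weight * comfort_penalty - energy_weight * heating_power_kw
```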
Getting Industrial About The Hybrid Computing And AI Revolution
Beyond Limits is applying techniques such as deep reinforcement learning (DRL), using a framework to train a reinforcement learning agent to make optimal sequential recommendations for placing wells. The approach also relies on reservoir simulations and novel deep convolutional neural networks. The agent takes in the data and learns from the various iterations of the simulator, allowing it to reduce the number of possible combinations of moves after each decision is made. By remembering what it learned from the previous iterations, the system can more quickly whittle the choices down to the one best answer.
Toward Generalized Sim-to-Real Transfer for Robot Learning
A limitation for their use in sim-to-real transfer, however, is that because GANs translate images at the pixel-level, multi-pixel features or structures that are necessary for robot task learning may be arbitrarily modified or even removed.
To address the above limitation, and in collaboration with the Everyday Robot Project at X, we introduce two works, RL-CycleGAN and RetinaGAN, that train GANs with robot-specific consistencies (so that they do not arbitrarily modify visual features that are specifically necessary for robot task learning) and thus bridge the visual discrepancy between sim and real.
Multi-Task Robotic Reinforcement Learning at Scale
For general-purpose robots to be most useful, they would need to be able to perform a range of tasks, such as cleaning, maintenance, and delivery. But training even a single task (e.g., grasping) using offline reinforcement learning (RL), a trial-and-error learning method in which the agent trains on previously collected data, can take thousands of robot-hours, in addition to the significant engineering needed to enable autonomous operation of a large-scale robotic system. Thus, the computational cost of building general-purpose everyday robots using current robot learning methods becomes prohibitive as the number of tasks grows.
Using tactile-based reinforcement learning for insertion tasks
A paper entitled "Tactile-RL for Insertion: Generalization to Objects of Unknown Geometry" was submitted by MERL and MIT researchers to the IEEE International Conference on Robotics and Automation (ICRA). In it, reinforcement learning is used to enable a robot arm, equipped with a parallel-jaw gripper that has tactile sensing arrays on both fingers, to insert differently shaped novel objects into a corresponding hole with an overall average success rate of 85% within 3-4 tries.
Way beyond AlphaZero: Berkeley and Google work shows robotics may be the deepest machine learning of all
With no well-specified rewards and state transitions that take place in a myriad of ways, training a robot via reinforcement learning represents perhaps the most complex arena for machine learning.
Multi-agent deep reinforcement learning for multi-echelon supply chain optimization
In this article, we explore how the problem can be approached from the reinforcement learning (RL) perspective that generally allows for replacing a handcrafted optimization model with a generic learning algorithm paired with a stochastic supply network simulator. We start by building a simple simulation environment that includes suppliers, factories, warehouses, and retailers, as depicted in the animation below; we then develop a deep RL model that learns how to optimize inventory and pricing decisions.
Our first step is to develop an environment that can be used to train supply chain management policies using deep RL. We choose to create a relatively small-scale model with just a few products and facilities but implement a relatively rich set of features including transportation, pricing, and competition. This environment can be viewed as a foundational framework that can be extended and/or adapted in many ways to study various problem formulations. Henceforth, we refer to this environment as the World of Supply (WoS).
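To give a feel for what such an environment looks like, here is a deliberately tiny single-product sketch in the same spirit (illustrative only; World of Supply itself models multiple facilities, transportation, pricing, and competition):

```python
import numpy as np

class SimpleEchelonEnv:
    """Toy single-node inventory environment in the spirit of the setup
    described above: the agent chooses an order quantity each step and is
    rewarded for sales minus holding and lost-sale costs."""

    def __init__(self, capacity=100, holding_cost=0.1, price=5.0, lost_sale_cost=2.0):
        self.capacity, self.h = capacity, holding_cost
        self.price, self.p = price, lost_sale_cost
        self.reset()

    def reset(self):
        self.inventory = self.capacity // 2
        return np.array([self.inventory], dtype=np.float32)

    def step(self, order_qty):
        # Replenish (bounded by capacity), then serve stochastic demand.
        self.inventory = min(self.capacity, self.inventory + int(order_qty))
        demand = np.random.poisson(20)
        sold = min(demand, self.inventory)
        self.inventory -= sold
        reward = (self.price * sold
                  - self.h * self.inventory
                  - self.p * (demand - sold))
        return np.array([self.inventory], dtype=np.float32), reward, False, {}
```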
Scalable reinforcement learning for plant-wide control of vinyl acetate monomer process
This paper explores a reinforcement learning (RL) approach that designs automatic control strategies in a large-scale chemical process control scenario, as a first step toward leveraging RL to intelligently control real-world chemical plants. The huge number of units for chemical reactions, as well as for feeding and recycling materials, in a typical chemical process induces a vast number of samples and prohibitive computational complexity for RL when deriving a suitable control policy, due to the high-dimensional state and action spaces. To tackle this problem, a novel RL algorithm, Factorial Fast-food Dynamic Policy Programming (FFDPP), is proposed. FFDPP introduces a factorial framework that efficiently factorizes the action space, together with a Fast-food kernel approximation that alleviates the curse of dimensionality caused by the high-dimensional state space, into Dynamic Policy Programming (DPP), which achieves stable learning even with insufficient samples. FFDPP is evaluated in a commercial chemical plant simulator for a Vinyl Acetate Monomer (VAM) process. Experimental results demonstrate that, without any knowledge of the model, the proposed method successfully learned a stable policy with reasonable computational resources, producing a larger amount of VAM product with performance comparable to a state-of-the-art model-based controller.
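A simple back-of-the-envelope illustration (our toy numbers, not the paper's) of why factorizing the action space matters: with N control valves and K discrete levels each, the joint action space grows as K^N, whereas a factored policy only reasons over N separate K-way choices.

```python
# Illustrative comparison of joint vs. factored action-space sizes.
N_VALVES, N_LEVELS = 10, 5

joint_actions = N_LEVELS ** N_VALVES      # 9,765,625 joint actions
factored_actions = N_VALVES * N_LEVELS    # 50 per-valve choices in total

print(joint_actions, factored_actions)
```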