Robotics
Big Tech eyes Industrial AI and Robotics
An overview of Big Tech’s inroads into manufacturing and industrial AI. From bin picking to robotic wire arc additive manufacturing (WAAM), the pace of industrial technology advances continues to pick up as digital transformation takes hold.
Assembly Line
Real-world robotic-manipulation system
So the next phase of the project was to teach the robot to use video feedback to adjust trajectories on the fly. Until now, Tedrake’s team had been using machine learning only for the robot’s perceptual system; they’d designed the control algorithms using traditional control-theoretical optimization. But now they switched to machine learning for controller design, too.
To train the controller model, Tedrake’s group used data from demonstrations in which one of the lab members teleoperated the robotic arm while other members knocked the target object around, so that its position and orientation changed. During training, the model took as input sensor data from the demonstrations and tried to predict the teleoperator’s control signals.
This requires a combination of machine learning and the more traditional, control-theoretical analysis that Tedrake’s group has specialized in. From data, the machine learning model learns vector representations of both the input and the control signal, but hand-tooled algorithms constrain the representation space to optimize the control signal selection. “It’s basically turning it back into a planning and control problem, but in the feature space that was learned,” Tedrake explains.
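The training setup described here, predicting the teleoperator's control signals from demonstration sensor data, is at its core supervised learning. A minimal sketch, assuming synthetic data and a linear controller fit by least squares in place of the real models:

```python
import numpy as np

# Behavior-cloning sketch: learn a map from sensor observations to
# teleoperator control signals. The shapes, the linear model, and the
# synthetic "demonstrations" are illustrative assumptions.
rng = np.random.default_rng(0)

obs_dim, act_dim, n_demos = 8, 3, 500
true_W = rng.normal(size=(obs_dim, act_dim))

# "Demonstrations": observations paired with the teleoperator's commands.
observations = rng.normal(size=(n_demos, obs_dim))
actions = observations @ true_W + 0.01 * rng.normal(size=(n_demos, act_dim))

# Fit the controller by least squares (a stand-in for SGD on a network).
W_hat, *_ = np.linalg.lstsq(observations, actions, rcond=None)

# The learned controller reproduces the demonstrated control signals.
mse = np.mean((observations @ W_hat - actions) ** 2)
print(f"imitation MSE: {mse:.5f}")
```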
Can Robots Follow Instructions for New Tasks?
The results of this research show that simple imitation learning approaches can be scaled in a way that enables zero-shot generalization to new tasks. That is, it shows one of the first indications of robots being able to successfully carry out behaviors that were not in the training data. Interestingly, language embeddings pre-trained on ungrounded language corpora make for excellent task conditioners. We demonstrated that natural language models can not only provide a flexible input interface to robots, but that pretrained language representations actually confer new generalization capabilities to the downstream policy, such as composing unseen object pairs together.
In the course of building this system, we confirmed that periodic human interventions are a simple but important technique for achieving good performance. While there is a substantial amount of work to be done in the future, we believe that the zero-shot generalization capabilities of BC-Z are an important advancement towards increasing the generality of robotic learning systems and allowing people to command robots. We have released the teleoperated demonstrations used to train the policy in this paper, which we hope will provide researchers with a valuable resource for future multi-task robotic learning research.
A remote village, a world-changing invention and the epic legal fight that followed
The twisted tale of the battle between Norway’s AutoStore and the UK’s Ocado for robotic grocery picking supremacy.
Experiment-Led Product Development
The term RobOps did not exist even five years ago, when InOrbit was being formed. So how did we get here? The truth is, robot manufacturers have been encountering the same pain points again and again as automation moves from controlled pilots to scaled realities. Strong RobOps means effective software solutions, but it also means interoperability, safety, harmony with humans in the real world, reliable incident management, a host of best practices, and useful analytics that can drive businesses forward on their robotics journey. These are concepts that require focused attention and dedicated investment. Robot manufacturers don’t have infinite time or resources to build the supportive infrastructure that scaled operations require. Developing the often complex tools needed to solve these robotics challenges effectively requires a new approach.
Inside X’s Mission to Make Robots Boring
It’s research by Everyday Robots, a project of X, Alphabet’s self-styled “moonshot factory.” The cafe testing ground is one of dozens on the Google campus in Mountain View, California, where a small percentage of the company’s massive workforce has now returned to work. The project hopes to make robots useful, operating in the wild instead of controlled environments like factories. After years of development, Everyday Robots is finally sending its robots into the world—or at least out of the X headquarters building—to do actual work.
Robots vs. Fatbergs: High-Tech Approaches to America’s Sewer Problem
The arsenal includes flying drones, crawling robots and remote-controlled swimming machines. They are armed with cameras, sonar, lasers and other sensors, and in some cases with tools to remove obstructions, using water-jet cutters capable of slicing through concrete, tree roots, and the giant agglomerations of grease and personal-hygiene products known as fatbergs. Some can also fix leaking pipes using plastics that cure via ultraviolet light.
The tools also include artificial-intelligence systems for automating the labor-intensive process of cataloging defects in sewer pipes and storm water culverts, and for giving priority to repairs based on need and location.
Unity moves robotics design and training to the metaverse
“The Unity Simulation Pro is the only product built from the ground up to deliver distributed rendering, enabling multiple graphics processing units (GPUs) to render the same Unity project or simulation environment simultaneously, either locally or in the private cloud,” the company said. This means multiple robots with tens, hundreds, or even thousands of sensors can be simulated faster than real time on Unity today.
According to Lange, users in markets like robotics, autonomous driving, drones, agriculture technology, and more are building simulations containing environments, sensors, and models with million-square-foot warehouses, dozens of robots, and hundreds of sensors. With these simulations, they can test software against realistic virtual worlds, teach and train robot operators, or try physical integrations before real-world implementation. This is all faster, more cost-effective, and safer, taking place in the metaverse.
“A more specific use case would be using Unity Simulation Pro to investigate collaborative mapping and mission planning for robotic systems in indoor and outdoor environments,” Lange said. He added that some users have built a simulated 4,000-square-foot building sitting within a larger forested area and are attempting to identify ways to map the environment using a combination of drones, off-road mobile robots, and walking robots. The company reports it has been working to enable creators to build and model the sensors and systems of mechatronic systems to run in simulations.
GITAI’s Autonomous Robot Arm Finds Success on ISS
In this technology demonstration, the GITAI S1 autonomous space robot was installed inside the ISS Nanoracks Bishop Airlock and succeeded in executing two tasks: assembling structures and panels for In-Space Assembly (ISA), and operating switches & cables for Intra-Vehicular Activity (IVA).
Roboat III: A Robotic Boat Transportation System
Break Through Supply Chain Blocks with Automated Container Unloading
Boston Dynamics is beginning to deploy Stretch, an autonomous case-handling robot poised to change the way warehouses and ports operate. Expected to be available later in 2022, the robot can work up to 16 hours on a single battery charge, so companies can send Stretch to unload trucks or containers for full shifts both day and night.
Built on a compact, wheeled base, Stretch can travel easily to each point of activity in a distribution center. The robot is self-reliant, untethered by power cables or air lines. Its vacuum-based gripper, at the end of a robotic arm with long reach, is designed to grasp a wide variety of box types required for a truly valuable solution in the logistics industry. With its small, pallet-sized footprint and embedded smarts, Stretch needs no pre-programming or overhaul of existing warehouse equipment to begin working, and is ready to deploy in just days.
How Construction Robotics Are Transforming Risk Management
“We’re starting to move away from purely tackling deviations on the site,” Maggs says. “It’s obviously valuable to define problems, but the quicker you find a deviation, the more valuable that data is. The destructive impact of a deviation increases the longer it goes unnoticed.
“Finding an off-spec element late in the game can be damaging for the project, so we’re moving more towards risk mitigation and risk allocations,” he continues. “We can also analyze data to identify trends within the construction process and then deliver back insights. That’s much more valuable than raw data alone. It’s providing actionable information around project risks that can help mitigate them.”
How DeepMind is Reinventing the Robot
To train a robot, though, such huge data sets are unavailable. “This is a problem,” notes Hadsell. You can simulate thousands of games of Go in a few minutes, run in parallel on hundreds of CPUs. But if it takes 3 seconds for a robot to pick up a cup, then you can only do it 20 times per minute per robot. What’s more, if your image-recognition system gets the first million images wrong, it might not matter much. But if your bipedal robot falls over the first 1,000 times it tries to walk, then you’ll have a badly dented robot, if not worse.
There are more profound problems. The one that Hadsell is most interested in is that of catastrophic forgetting: When an AI learns a new task, it has an unfortunate tendency to forget all the old ones. “One of our classic examples was training an agent to play Pong,” says Hadsell. You could get it playing so that it would win every game against the computer 20 to zero, she says; but if you perturb the weights just a little bit, such as by training it on Breakout or Pac-Man, “then the performance will—boop!—go off a cliff.” Suddenly it will lose 20 to zero every time.
There are ways around the problem. An obvious one is to simply silo off each skill. Train your neural network on one task, save its network’s weights to its data storage, then train it on a new task, saving those weights elsewhere. Then the system need only recognize the type of challenge at the outset and apply the proper set of weights.
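The siloing strategy reads directly as code: one set of weights per task, selected at run time. A toy sketch, with illustrative task names and weight shapes standing in for real trained networks:

```python
import numpy as np

# "Silo each skill": store a separate set of weights per task and load
# the right one at run time. Training is faked with random weights;
# task names and shapes are illustrative.
rng = np.random.default_rng(2)
weight_store: dict[str, np.ndarray] = {}

def train_task(name: str) -> None:
    # Stand-in for real training: produce task-specific weights.
    weight_store[name] = rng.normal(size=(4, 4))

def run_task(name: str, obs: np.ndarray) -> np.ndarray:
    # Recognize the task, load its frozen weights, and act.
    return weight_store[name] @ obs

for task in ["pong", "breakout", "pacman"]:
    train_task(task)

# Training "breakout" did not overwrite the "pong" weights.
print(sorted(weight_store))
```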
But that strategy is limited. For one thing, it’s not scalable. If you want to build a robot capable of accomplishing many tasks in a broad range of environments, you’d have to train it on every single one of them. And if the environment is unstructured, you won’t even know ahead of time what some of those tasks will be. Another problem is that this strategy doesn’t let the robot transfer the skills that it acquired solving task A over to task B. Such an ability to transfer knowledge is an important hallmark of human learning.
Hadsell’s preferred approach is something called “elastic weight consolidation.” The gist is that, after learning a task, a neural network will assess which of the synapselike connections between the neuronlike nodes are the most important to that task, and it will partially freeze their weights.
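That gist can be written down as a quadratic penalty added to the new task's loss. A minimal numeric sketch, with made-up importance values standing in for the Fisher information a real implementation would estimate:

```python
import numpy as np

# Minimal sketch of elastic weight consolidation (EWC): after task A,
# each weight gets an importance score, and a quadratic penalty anchors
# important weights while training on task B. All numbers are illustrative.
theta_A  = np.array([1.0, -0.5, 2.0])   # weights learned on task A
fisher   = np.array([10.0, 0.1, 5.0])   # importance of each weight to task A
target_B = np.array([0.0, 1.0, 2.0])    # what task B alone would prefer
lam = 1.0

# Gradient descent on task B's loss plus the EWC penalty:
#   L(theta) = ||theta - target_B||^2 + (lam/2) * sum(F * (theta - theta_A)^2)
theta = theta_A.copy()
for _ in range(500):
    grad = 2 * (theta - target_B) + lam * fisher * (theta - theta_A)
    theta -= 0.05 * grad

# High-importance weights stay near task A's values; low-importance
# weights are free to move toward task B's optimum.
print(np.round(theta, 2))
```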
In Amazon’s Flagship Fulfillment Center, the Machines Run the Show
More than the physical robots, the stars of Amazon’s facilities are the algorithms—sets of computer instructions designed to solve specific problems. Software determines how many items a facility can handle, where each product is supposed to go, how many people are required for the night shift during the holiday rush, and which truck is best positioned to get a stick of deodorant to a customer on time. “We rely on the software to help us make the right decisions,” says Shobe, BFI4’s general manager.
When managers wanted to figure out how many people they needed at each station to keep up with customer orders, they once used Excel and their gut. Then, starting in about 2014, the company flew spreadsheet jockeys from warehouses around the country to Seattle and put them in a conference room with software engineers, who distilled their work and automated it. The resulting AutoFlow program was clunky at first, spitting out recommendations to put half an employee at one station and half an employee at another, recalls David Glick, a former Amazon logistics executive who supervised initial development of the software. Eventually the system learned that humans can’t be split in half.
Robotic Inspection for Aboveground Storage Tanks
Aboveground storage tanks (ASTs) are vital assets for many industries, including power, paper and pulp, oil and gas, chemical, and even beverage production. Routine inspection of external and internal tank components helps operators understand a tank’s condition and is required by federal and local laws and regulations. Robot-enabled ultrasonic testing (UT) offers a unique solution for AST inspections because it saves plant operators valuable resources while providing more asset coverage and actionable data.
Revolutionizing the Composting Industry
“To our knowledge, this facility is the first time that AI (artificial intelligence) and robotics have been used in a pre-sort facility for organics in North America,” says McMillin. “The goal of the presort facility is to remove contaminants from the organic waste stream prior to processing instead of trying to remove those contaminants after they’ve been through the composting process via vacuums and wind sifters that have historically been attached to the screening process.
Cable-path optimization method for industrial robot arms
The production line engineer’s task of designing the external path for the cables feeding electricity, air, and other resources to robot arms is labor-intensive. Because the motions of robot arms are complex, manually designing their cable paths is a time-consuming, continuous trial-and-error process. Herein, we propose an automatic optimization method for planning cable paths for industrial robot arms. The proposed method applies current physics simulation techniques to reduce the person-hours involved in cable path design. Our method yields an optimal parameter vector (PV) that specifies the cable length and cable-guide configuration by filtering the candidate PV set through a cable-geometry simulation based on the mass–spring model.
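The filtering step the abstract describes, scoring candidate parameter vectors in simulation and keeping the best, can be sketched as a simple search loop. The cost function below is a placeholder, not the paper's mass–spring simulation:

```python
import numpy as np

# Sketch of candidate-PV filtering: sample parameter vectors (cable
# length, guide position), score each with a stand-in for the
# cable-geometry simulation, and keep the best. The cost function and
# parameter ranges are illustrative assumptions.
rng = np.random.default_rng(3)

def simulate_cost(length: float, guide: float) -> float:
    # Placeholder cost: penalize deviation from a required cable length
    # and from a preferred guide position.
    required_length, preferred_guide = 1.2, 0.3
    return abs(length - required_length) + 0.5 * abs(guide - preferred_guide)

candidates = [(rng.uniform(0.5, 2.0), rng.uniform(0.0, 1.0)) for _ in range(200)]
best = min(candidates, key=lambda pv: simulate_cost(*pv))
print(f"best PV: length={best[0]:.2f}, guide={best[1]:.2f}")
```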
Factory Robots! See inside Tesla, Amazon and Audi's operations (supercut)
How to Integrate Robotic Inspections into Your Workflow
Data is the hallmark of a robotic inspection, providing up to 1,000 times more information than traditional methods. When deciding between drones and robot crawlers, data type and quality should be considered. Drones provide aerial footage and pictures, and can even provide B-scans of assets. But, as previously mentioned, this method doesn’t result in the level of quantitative data that robot crawlers can supply. Additionally, some robots are equipped with cameras to provide the best of both worlds.
Sparks fly as BAE Systems brings innovation to welding
Funded by the U.S. Government, BAE Systems engineers collaborated with the U.S. Army Research Laboratory and Wolf Robotics to develop an Agile Manufacturing Robotic Welding Cell customized for aluminum structures that comprise the combat vehicle’s hull.
Prior to welding automation, the large aluminum pieces that form the hull were hand-welded together, requiring numerous weld passes at each seam. In hand welding, the welder holds the weld gun with both hands and pulls the trigger to feed wire into the weld joint, creating an arc; the gun is then moved slowly over the metal to create a weld. The number of weld starts and stops in a single seam depends on the length and reach of the welder’s arms: the further a welder can reach, the less he or she needs to stop and start again.
Applying Artificial Intelligence to Paint Shop Robots
Häcker says that factories in the automotive industry have “enormous amounts of latent data about manufacturing processes, raw materials, and products. The key to leveraging these data assets is connectivity with the right interface at the control level to get at the information provided by robots, ovens, cathodic electrocoating systems or conveyor technology. Operators in existing plants are often constrained because most of their systems do not have connectivity and the right interface for data acquisition.”
Concept Prove-Outs Prove Their Worth in Robotic Finishing
During shop visits, Modern Machine Shop editors have gotten used to seeing rows of people huddled over benches with spotlights, scopes and hand tools. In fact, the sight is so common that the odd juxtaposition of tedious manual work and sophisticated, automated CNC machining processes can be easy to overlook.
Different automated material removal applications teach similar lessons about the value of early testing, close collaboration and adventurous thinking.
Plug-and-Play Robot Ecosystems on the Rise
Robot ecosystems are bringing plug-and-play ease to compatible hardware and software peripherals, while adding greater value and functionality to robots. Some might argue that the first robot ecosystem was the network of robot integrators that has expanded over the last couple decades to support robot manufacturers and their customers. Robot integrators continue to be vital to robotics adoption and proliferation. Yet an interesting phenomenon began to take shape a few years ago with the growing popularity of collaborative robots and the industry’s focus on ease of use.
Campbell describes the typical process for engineering a new gripping solution for a robot: “You have to first engineer a mechanical interface, which may mean an adapter plate, and maybe some other additional hardware. If you’re an integrator, it must be documented, because everything you do as an integrator you have to document. You have to engineer the electrical interface, how you’re going to control it, what kind of I/O signals, what kind of sensors. And then you have to design some kind of software.
“When I talk to integrators, they say it’s typically 1 to 3 days’ worth of work just to put a simple gripper on a robot. What we’ve been able to do in the UR+ program is chip away at time and cost throughout the project.”
Deep Learning Boosts Robotic Picking Flexibility
Gripping and manipulating items of diverse shapes and sizes has long been one of the biggest challenges facing industrial robotics. The difficulty is perhaps best summed up by the Polanyi Paradox, which states that we “know more than we can tell.” In essence, while it may be easy to teach machines to exhibit a high level of performance on tasks that require abstract reasoning such as running computations, it is substantially harder to grant them the sensory-motor skills of even a small child in all but the most standardized and predictable environments.
How Paint Robots Reduce Rework
There are few wild beasts more fearsome and concerning to the everyday finishing engineer than the dread three R’s: Rework, Rejections and RMAs.
In finishing, particularly in spray processes, achieving the consistency and quality customers expect requires a high degree of both reliability and precision. Experienced painters and operators – or elaborate automation systems – can provide high output, but over time many parts will slip through the cracks and simply not get the attention they require.
Robotic 3D manufacturing providing greater flexibility
Robots are extending their reach. These multiaxis articulators are taking 3D manufacturing and fabrication to new heights, new part designs, greater complexity and production efficiencies. Integrated with systems that extend their reach even further, their flexibility is unmatched. Robots are virtually defying gravity in additive manufacturing (AM), tackling complex geometries in cutting, and collaborating with humans to improve efficiencies in composite layup. This is the future of 3D.
3D printing is already a multibillion-dollar industry, with much of the activity focused on building prototypes or small parts made from plastics and polymers. For metal parts, one additive process garnering lots of attention is robotic wire arc additive manufacturing (WAAM).
Improving Cycle Time with Veo FreeMove – Estimating the Benefits with a General Example
In our model, the design and operation of the application determine how, and how often, the human and robot collaborate. At one extreme there is no collaboration, and the application runs unattended throughout the operating cycle. At the other, human interactions can occur multiple times per cycle, as in a parts-presentation-for-assembly application. We concluded that the shorter the cycle time and the more frequent the required human interaction, the more collaborative the application.
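The qualitative conclusion above can be made concrete with a toy "collaboration fraction": human interaction time per cycle divided by cycle time. The formula and numbers are illustrative assumptions, not Veo's actual model:

```python
# Toy arithmetic for the claim that shorter cycles with more frequent
# human interaction are more collaborative. All values are illustrative.
def collaboration_fraction(cycle_s: float, interactions: int,
                           interaction_s: float) -> float:
    # Fraction of each cycle spent on human-robot interaction.
    return (interactions * interaction_s) / cycle_s

# A long unattended cycle vs. a short cycle with parts presentation.
unattended = collaboration_fraction(cycle_s=600, interactions=0, interaction_s=5)
assembly = collaboration_fraction(cycle_s=30, interactions=2, interaction_s=5)
print(f"unattended: {unattended:.2f}, assembly: {assembly:.2f}")
```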
Introducing Intrinsic
Intrinsic is working to unlock the creative and economic potential of industrial robotics for millions more businesses, entrepreneurs, and developers. We’re developing software tools designed to make industrial robots (which are used to make everything from solar panels to cars) easier to use, less costly and more flexible, so that more people can use them to make new products, businesses and services.
Tyson invests in AI-enabled robotics firm to boost worker productivity
Automating meat factories has long been a difficult feat because it is costly and carcasses come in varying sizes so it can be hard for robots to cut and work with all types accurately. But as the coronavirus ravaged meat plants, forcing many to temporarily shutter as thousands of workers got sick, more companies accelerated their plans for automation. Meat and poultry companies also are automating certain tasks that can be repetitious or prone to injury, such as moving or loading boxes.
Soft Robotics’ SoftAI technology uses AI and 3D vision to maneuver the company’s mGrip robotic grippers with human-like hand-eye coordination. The technology allows the automation of bulk picking for fragile and irregularly shaped proteins, produce and bakery items, according to the company. Tyson Foods is an existing user of Soft Robotics’ software.
Toward Generalized Sim-to-Real Transfer for Robot Learning
A limitation for their use in sim-to-real transfer, however, is that because GANs translate images at the pixel-level, multi-pixel features or structures that are necessary for robot task learning may be arbitrarily modified or even removed.
To address the above limitation, and in collaboration with the Everyday Robot Project at X, we introduce two works, RL-CycleGAN and RetinaGAN, that train GANs with robot-specific consistencies — so that they do not arbitrarily modify visual features that are specifically necessary for robot task learning — and thus bridge the visual discrepancy between sim and real.
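The consistency idea can be illustrated with a toy loss: alongside the usual GAN objective (omitted here), penalize any change in task-relevant features between a simulated image and its translated version. The feature extractor and images below are placeholders, not the actual RL-CycleGAN or RetinaGAN losses:

```python
import numpy as np

# Toy robot-specific consistency term: the translator may change pixel
# style, but task-relevant features (here, a thresholded "object mask"
# standing in for a perception network) must be preserved.
def task_features(img: np.ndarray) -> np.ndarray:
    # Placeholder for a task network's output, e.g. a segmentation mask.
    return (img > 0.5).astype(float)

def consistency_loss(sim_img: np.ndarray, translated: np.ndarray) -> float:
    return float(np.mean((task_features(sim_img) - task_features(translated)) ** 2))

sim = np.array([[0.9, 0.1], [0.8, 0.2]])
good = sim * 0.95   # style shifted, objects preserved -> zero penalty
bad = 1.0 - sim     # objects flipped -> maximal penalty

print(consistency_loss(sim, good), consistency_loss(sim, bad))
```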
SLAM for the real world
To take the next leap forward, the robotics industry needs software that is reliable and effective in the real-world, yet flexible and cost effective to integrate into a wider range of robot platforms and optimized to make efficient use of limited compute, power and memory resources. Creating ‘commercial-grade’ software that is robust enough to be deployed in thousands of robots in the real world, at prices that make that scale achievable, is the next challenge for the industry.
Learning to Manipulate Deformable Objects
In “Learning to Rearrange Deformable Cables, Fabrics, and Bags with Goal-Conditioned Transporter Networks,” to appear at ICRA 2021, we release an open-source simulated benchmark, called DeformableRavens, with the goal of accelerating research into deformable object manipulation. DeformableRavens features 12 tasks that involve manipulating cables, fabrics, and bags and includes a set of model architectures for manipulating deformable objects towards desired goal configurations, specified with images. These architectures enable a robot to rearrange cables to match a target shape, to smooth a fabric to a target zone, and to insert an item in a bag. To our knowledge, this is the first simulator that includes a task in which a robot must use a bag to contain other items, which presents key challenges in enabling a robot to learn more complex relative spatial relations.
F-16s Are Now Getting Washed By Robots
The Wilder Systems solution actually leverages technology previously developed for robotic drilling in commercial aircraft manufacturing and converts these components and subsystems into an automated washing system. The main changes have involved the development and addition of robot end-effectors to provide the water and soap spray, waterproofing of the robots themselves, and a robot motion path, which is dependent on the type of aircraft to be cleaned.
Robotic Flexibility: How Today’s Autonomous Systems Can Be Adapted to Support Changing Operational Needs
While robots are ideally suited to repetitive tasks, until now they lacked the intelligence to identify and handle tens of thousands of constantly changing products in a typical dynamic warehouse operation. That made applying robots to picking applications somewhat limited. Therefore, when German electrical supply wholesaler Obeta sought to install a new automated storage system from MHI member KNAPP in its new Berlin warehouse as a means to address a regional labor shortage made worse by COVID-19, the company specified a robotic picking system powered by onboard artificial intelligence (AI).
“The Covariant Brain is a universal AI that allows robots to see, reason and act in the world around them, completing tasks too complex and varied for traditional programmed robots. Covariant’s software enables Obeta’s Pick-It-Easy Robot to adapt to new tasks on its own through trial and error, so it can handle almost any object,” explained Peter Chen, co-founder and CEO of MHI member Covariant.ai.
Ford's Ever-Smarter Robots Are Speeding Up the Assembly Line
At a Ford Transmission Plant in Livonia, Michigan, the station where robots help assemble torque converters now includes a system that uses AI to learn from previous attempts how to wiggle the pieces into place most efficiently. Inside a large safety cage, robot arms wheel around, grasping circular pieces of metal, each about the diameter of a dinner plate, from a conveyor and slotting them together.
The technology allows this part of the assembly line to run 15 percent faster, a significant improvement in automotive manufacturing where thin profit margins depend heavily on manufacturing efficiencies.
Start-ups Powering New Era of Industrial Robotics
Much of the bottleneck to achieving automation in manufacturing relates to limitations in the current programming model of industrial robotics. Programming is done in languages proprietary to each robotic hardware OEM – languages “straight from the 80s” as one industry executive put it.
There are a limited number of specialists who are proficient in these languages. Given the rarity of the expertise involved, as well as the time it takes to program a robot, robotics application development typically costs three times as much as the hardware for a given installation.
Walmart Is Pulling Plug on More Robots
The retailer is phasing out the hulking automated pickup towers that were erected in more than 1,500 stores to dispense online orders. The decision reflects a growing focus on curbside pickup services that have become more popular during the Covid-19 pandemic and continues a broader retreat from some initiatives to use highly visible automation in stores.
How Robotic Automation Impacts E-Commerce
AI and machine learning technologies are enabling new applications. In fact, most applications in e-commerce fulfillment require some type of machine vision. With the huge proliferation of SKUs, however, the old approach of discretely programming for each particular part or object makes it much harder to figure out which item to pick next. AI and machine learning will give companies more opportunities to expand their capabilities and help ease the burden of dealing with high levels of product variability.
Multi-Task Robotic Reinforcement Learning at Scale
For general-purpose robots to be most useful, they would need to be able to perform a range of tasks, such as cleaning, maintenance and delivery. But training even a single task (e.g., grasping) using offline reinforcement learning (RL), a trial-and-error learning method in which the agent trains on previously collected data, can take thousands of robot-hours, in addition to the significant engineering needed to enable autonomous operation of a large-scale robotic system. Thus, the computational costs of building general-purpose everyday robots using current robot learning methods become prohibitive as the number of tasks grows.
Collaboration requires presence sensing
The challenge of automation has always been to keep people safe while trying to produce more product in the same footprint. The faster a machine runs, the more physical space is required to guarantee that, if something goes wrong, the machine has enough time to come to a complete and safe stop before potentially making contact with humans or other machines around it. Traditionally, this would involve a physical cage around the piece of automation. This cage could take the form of a frame with either polycarbonate or expanded steel (fence) panels.
Made to physically defend a person from getting too close, these types of guarding systems also take up a lot of real estate. For this reason, they are not well-suited to a cobot application where we don’t want the new automated device taking up any more space than the human it is replacing.
The technology required to respond to this need for an ever tighter operating envelope has advanced dramatically, especially over the past two or three years. While we will delve into that momentarily, it is important to note that the robot manufacturers, in addition to coming up with new ways to sense the presence of people in proximity to the robot, have had to come up with ways to safely limit the range of operation to be inside the normal operating range of the robot.
Amazon’s robot arms break ground in safety, technology
Robin, one of the most complex stationary robot arm systems Amazon has ever built, brings many core technologies to new levels and acts as a glimpse into the possibilities of combining vision, package manipulation and machine learning, said Will Harris, principal product manager of the Robin program.
Those technologies can be seen when Robin goes to work. As soft mailers and boxes move down the conveyor line, Robin must break the jumble down into individual items. This is called image segmentation. People do it automatically, but for a long time, robots only saw a solid blob of pixels.
Stretch Is Boston Dynamics' Take on a Practical Mobile Manipulator for Warehouses
Boston Dynamics is announcing Stretch, a mobile robot designed to autonomously move boxes around warehouses. At first glance, you might be wondering why the heck this is a Boston Dynamics robot at all, since the dynamic mobility that we associate with most of their platforms is notably absent. The combination of strength and speed in Stretch’s arm is something we haven’t seen before in a mobile robot, and it’s what makes this a unique and potentially exciting entry into the warehouse robotics space.
Adversarial training reduces safety of neural networks in robots
A more fundamental problem, also confirmed by Lechner and his coauthors, is the lack of causality in machine learning systems. As long as neural networks focus on learning superficial statistical patterns in data, they will remain vulnerable to different forms of adversarial attacks. Learning causal representations might be the key to protecting neural networks against adversarial attacks. But learning causal representations itself is a major challenge and scientists are still trying to figure out how to solve it.
Using tactile-based reinforcement learning for insertion tasks
A paper entitled “Tactile-RL for Insertion: Generalization to Objects of Unknown Geometry” was submitted by MERL and MIT researchers to the IEEE International Conference on Robotics and Automation (ICRA). In it, reinforcement learning enabled a robot arm, equipped with a parallel-jaw gripper carrying tactile sensing arrays on both fingers, to insert differently shaped novel objects into a corresponding hole with an overall average success rate of 85% within three to four tries.
The Electrical Heart of Manufacturing
Once, servo amplifiers were tuned with screwdrivers to adjust the motion of the motors, typically via three potentiometers, one for each element of a PID controller. “Today, most servo drives have algorithms that autotune adjustments,” said Nausley. Promess can now position its presses within a few microns. “A few years ago, there’s no way we could have done that.”
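The three potentiometers map directly onto the proportional, integral, and derivative gains of the control loop. A minimal discrete PID sketch in Python (the gains and the first-order plant here are illustrative, not a real servo drive):

```python
class PID:
    """Minimal discrete PID controller: the gains kp, ki, kd correspond
    to the three potentiometers on an old servo amplifier."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a simple integrating plant toward a setpoint of 1.0.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
position = 0.0
for _ in range(2000):
    position += pid.update(1.0, position) * 0.01
print(round(position, 3))
```

Autotuning amounts to having the drive pick `kp`, `ki`, and `kd` itself, typically by exciting the axis and fitting a model, instead of a technician iterating with a screwdriver.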
MIT's HERMIT Crab Robots Can Do Anything You Shell Them To
Robots are well known to be specialists, doing best when they’re designed for one very specific task without much of an expectation that they’ll do anything else. This is fine, as long as you’re OK with getting a new specialist robot every time you want something different done robotically. Making generalist robots is hard, but what’s less hard is enabling a generalist to easily adapt into different kinds of specialists, which we humans do all the time when we use tools.
While we’ve written about tool-using robots in the past, roboticists at the MIT Media Lab have taken inspiration from the proud and noble hermit crab to design a robot that’s able to effortlessly transition from a total generalist to a highly specialized machine and back again, simply by switching in and out of clever, custom-made mechanical shells.
Cartesian robots: simple yet sophisticated packaging automation
For many packaging use cases, cartesian robots have the edge over 6-axis models. One reason relates to the robot density. A single long-travel cartesian transfer robot can tend multiple packaging machines—without any need to rearrange machines around the robot.
By installing the transfer robots above the machines they tend, you also won’t incur a floor space penalty. Safety guarding requirements are minimal too, at least compared to 6-axis models, since an overhead installation naturally separates robots and workers. Finally, cartesian robots have lower maintenance costs and simpler programming requirements.
Introducing Amazon SageMaker Reinforcement Learning Components for open-source Kubeflow pipelines
Woodside Energy uses AWS RoboMaker with Amazon SageMaker Kubeflow operators to train, tune, and deploy reinforcement learning agents to their robots to perform manipulation tasks that are repetitive or dangerous.
Way beyond AlphaZero: Berkeley and Google work shows robotics may be the deepest machine learning of all
With no well-specified rewards and state transitions that take place in a myriad of ways, training a robot via reinforcement learning represents perhaps the most complex arena for machine learning.
The Evolution of Robotics Force Control
Because repeatability was the most important characteristic of the first robots, they were designed from the start to be very stiff. Over the years, the designs were optimized for stiffness, and the manufacturing cost and quality of robot joints reached unprecedented heights. When targeting new applications that require adaptability instead, it was therefore more convenient to use a standard stiff robot equipped with a sensor at its end-effector (tool).
For example, a robot intended for assembly or finishing tasks would be equipped with a force sensor that continuously measures the contact forces. An algorithm would then adjust the robot's motion to compensate, based on these measurements.
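The compensation scheme described above can be sketched as a simple admittance law, in which the measured contact force nudges the commanded position until the force matches a target. The gains and the stiff-surface plant below are illustrative, not the article's algorithm:

```python
def admittance_step(x_cmd, f_measured, f_target, compliance=0.0005):
    """Shift the commanded position along the contact normal so the
    measured force converges to the target force (admittance control)."""
    return x_cmd + compliance * (f_target - f_measured)

# Environment model: a stiff surface at x = 0.0 producing force k * penetration.
k_env = 1000.0  # N per unit of penetration
x = -0.001      # start just above the surface
for _ in range(500):
    f = max(0.0, k_env * x)  # contact force once the tool touches the surface
    x = admittance_step(x, f, f_target=5.0)
print(round(max(0.0, k_env * x), 2))  # settles at the 5 N target
```

Note the `compliance` gain must be small relative to the environment stiffness; too large a gain makes the loop oscillate or chatter against the surface, which is exactly why force control on stiff robots is delicate.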
Rearranging the Visual World
Transporter Nets use a novel approach to 3D spatial understanding that avoids reliance on object-centric representations, making them general for vision-based manipulation yet far more sample-efficient than benchmarked end-to-end alternatives. As a consequence, they are fast and practical to train on real robots. We are also releasing an accompanying open-source implementation of Transporter Nets together with Ravens, our new simulated benchmark suite of ten vision-based manipulation tasks.
Boston Dynamics' Spot Robot Is Now Armed
The quadruped robot can now use an arm to interact with its environment semi-autonomously.
So the real question about this arm is whether Boston Dynamics has managed to get it to a point where it’s autonomous enough that users with relatively little robotics experience will be able to get it to do useful tasks without driving themselves nuts.
This Startup's Software Programs Industrial Robots, Without Coding
Singapore-based startup Augmentus, founded by IEEE Member Daryl Lim, Yong Shin Leong, and Chong Voon Foo, is trying to make automation more accessible with its intuitive robot-programming platform.
By making industrial robots easier to program, Lim says, the software can help businesses increase efficiency and reduce costs—which would in turn help retain local manufacturing jobs.
You're Hired: Recruiting Mobile Robots
While industrial robots have been part of the automation mix for decades, key advances in sensors, artificial intelligence (AI), software, machine vision, and light detection and ranging (LiDAR), among other technologies, are coalescing to empower an emerging category of more capable mobile and collaborative robots that are easier to program, less expensive to deploy, and far more flexible in the kinds of tasks they can perform.
Challenges of Automated Assembly
A product’s design determines its features, materials, performance, time to market and costs. Its quality, however, depends on the quality of the processes used to develop and make it. This doesn’t mean good designs only come from healthy design processes, but the odds of good results are surely higher if the processes are healthy.
So, how healthy are your product development processes?
Get ready for metamorphic manufacturing
A new wrinkle in blacksmithing is hailed as the third wave of the industry’s digitization.
Metamorphic manufacturing, also known as robotic blacksmithing, is poised to bring about faster time to market, less material waste, more available materials, less energy used and more control.
Can a cobot offer the flexibility of a human on the shop floor?
Since the Great Recession more than a decade ago, metal fabricators haven’t been hiring people unless they are absolutely needed. Manufacturing companies are lean, which helps to keep fixed costs down and the business more manageable when business slows.
It’s also a gamble. Unless shop floor personnel are cross-trained, the absence of a machine operator can sabotage productivity goals for the day. While more automated bending systems are being sold to North American fabricators, many shops still require an operator to sit in front of the press brake to get parts formed.
Advanced Technologies Adoption and Use by U.S. Firms: Evidence from the Annual Business Survey
While robots are usually singled out as a key technology in studies of automation, the overall diffusion of robotics use and testing is very low across firms in the U.S. The use rate is only 1.3% and the testing rate is 0.3%. These levels correspond relatively closely with patterns found in the robotics expenditure question in the 2018 ASM. Robots are primarily concentrated in large, manufacturing firms. The distribution of robots among firms is highly skewed, and the skewness in favor of larger firms can have a disproportionate effect on the economy that is otherwise not obvious from the relatively low overall diffusion rate of robots. The least-used technologies are RFID (1.1%), Augmented Reality (0.8%), and Automated Vehicles (0.8%). Looking at the pairwise adoption of these technologies in Table 14, we find that use of Machine Learning and Machine Vision are most coincident. We find that use of Automated Guided Vehicles is closely associated with use of Augmented Reality, RFID, and Machine Vision.
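The pairwise-coincidence measure behind findings like "Machine Learning and Machine Vision are most coincident" can be computed from firm-level adoption flags. A small sketch with made-up data (the technology names and firm records are illustrative, not the ABS microdata or the paper's exact statistic):

```python
from itertools import combinations

# Toy firm-level adoption flags: 1 = the firm uses the technology.
firms = [
    {"ML": 1, "MV": 1, "AGV": 0, "RFID": 0},
    {"ML": 1, "MV": 1, "AGV": 1, "RFID": 1},
    {"ML": 0, "MV": 1, "AGV": 0, "RFID": 0},
    {"ML": 1, "MV": 0, "AGV": 0, "RFID": 0},
    {"ML": 0, "MV": 0, "AGV": 1, "RFID": 1},
]

def coincidence(tech_a, tech_b):
    """Share of firms using both technologies among firms using either
    (a Jaccard-style co-adoption measure)."""
    both = sum(f[tech_a] and f[tech_b] for f in firms)
    either = sum(f[tech_a] or f[tech_b] for f in firms)
    return both / either if either else 0.0

for a, b in combinations(["ML", "MV", "AGV", "RFID"], 2):
    print(a, b, round(coincidence(a, b), 2))
```

In this toy data, AGV and RFID co-occur in every adopting firm, mirroring the paper's observation that automated guided vehicle use is closely associated with RFID use.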