Manufacturing Analytics
Assembly Line
Data-Driven Optimization - AI, Analytics, IIoT and Oden Technologies
If you can predict that offline quality test in real time, so that you know, in real time, that you're making good products, it reduces the risk of improving the process in real time. We actually use that type of modeling to then prescribe the right set points for the customer to reach whatever outcome they want to achieve. If they want to lower the cost, lower the material consumption, lower energy consumption, or increase the speed, then we actually give them the input parameters that they need to use in order to get a more efficient output.
And then the last step, which is more exploratory, which we're working on now, is also generating work instructions for the operators, kind of like an AI support system for the operator. Because still, and we recognize this, the big bottleneck for a lot of manufacturers is talent. Talent is very scarce; it's very hard to hire a lot of people that can perform these processes, especially when they say that it's more of an art than a science. We can lower the barrier to entry for operators to become top performers, through recommendations, predictions and generative AI for how to achieve high performance. By enabling operators to leverage science more than art or intuition, we can really change the game in terms of how we make things.
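The set-point prescription described above can be sketched in a few lines: fit a model that predicts the offline quality test from process set points, then search historical set-point combinations for ones predicted to stay in spec while maximizing an objective. This is a minimal illustration, not Oden's actual method; the column names, spec limit, and objective are all assumptions.

```python
# Minimal sketch: quality prediction plus set-point prescription,
# assuming a historical dataset with hypothetical column names.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

history = pd.read_csv("process_history.csv")  # hypothetical export
features = ["line_speed", "zone_temp", "screw_rpm"]

# 1. Learn the mapping from set points to the offline quality result.
model = GradientBoostingRegressor().fit(
    history[features], history["tensile_strength"]
)

# 2. Prescribe: among previously seen set points, keep those predicted
#    to pass the spec, then pick the one with the highest line speed.
candidates = history[features].drop_duplicates().copy()
candidates["pred_quality"] = model.predict(candidates[features])
feasible = candidates[candidates["pred_quality"] >= 38.0]  # spec limit (assumed)
best = feasible.sort_values("line_speed", ascending=False).iloc[0]
print(best)
```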
Advanced Analytics at BASF with TrendMiner
Through an insightful case study on monitoring instrument air pressure and flare flows, Rooha Khan highlights how TrendMiner's platform effectively optimizes manufacturing processes. Witness the tangible value BASF has discovered by harnessing the capabilities of industrial data analysis and monitoring, and be prepared to embrace the transformative possibilities of digitalization.
Using Data Models to Manage Your Digital Twins
A continuously evolving industrial knowledge graph is the foundation of creating industrial digital twins that solve real-world problems. Industrial digital twins are powerful representations of the physical world that can help you better understand how your assets are impacting your operations. A digital twin is only as useful as what you can do with it, and there is never only one all-encompassing digital twin. Your maintenance view of a physical installation will need to be different from the operational view, which is different from the engineering view for planning and construction.
Manufacturing Process Optimization in Times of Adversity
For the current era, we can usefully define manufacturing process optimization like this:
- Digitally connected plant teams learning and implementing data-driven strategies that impact their manufacturing processes to minimize cost and maximize production toward peak operational efficiency.
- Using data-to-value technologies that integrate seamlessly with their legacy systems and progressively automate an end-to-end, continuous improvement, production loop, freeing manufacturers from a reactive troubleshooting paradigm so they can layer in further innovations toward the smart factory.
Through the above process, machine learning workflows can solve current-generation data-readiness and production process optimization issues while future-proofing operations. By easing cost pressures and driving up revenue via data-driven production efficiencies (and with increasingly data-mature plant personnel), the C-suite is free to develop strategies with innovation managers. Together, they can combat the broader external challenges experienced by many manufacturers today.
Hunting For Hardware-Related Errors In Data Centers
The data center computational errors that Google and Meta engineers reported in 2021 have raised concerns regarding an unexpected cause: manufacturing defect levels on the order of 1,000 DPPM. Specific to a single core in a multi-core SoC, these hardware defects are difficult to isolate during data center operations and manufacturing test processes. In fact, silent data errors (SDEs) can go undetected for months because the precise inputs and local environmental conditions (temperature, noise, voltage, clock frequency) have not yet been applied.
For instance, Google engineers noted "an innocuous change to a low-level library" started to give wrong answers for a massive-scale data analysis pipeline. They went on to write, "Deeper investigation revealed that these instructions malfunctioned due to manufacturing defects, in a way that could only be detected by checking the results of these instructions against the expected results; these are 'silent' corrupt execution errors, or CEEs."
Engineers at Google further confirmed their need for internal data: "Our understanding of CEE impacts is primarily empirical. We have observations of the form, 'This code has miscomputed (or crashed) on that core.' We can control what code runs on what cores, and we partially control operating conditions (frequency, voltage, temperature). From this, we can identify some mercurial cores. But because we have limited knowledge of the detailed underlying hardware, and no access to the hardware-supported test structures available to chip makers, we cannot infer much about root causes."
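A toy version of this "check results against expected results" screening can be sketched in Python on Linux: pin a deterministic workload to each core in turn and flag any core whose output disagrees with a reference. Real fleet screens cover many instruction mixes, operands, and operating conditions; this only illustrates the idea.

```python
# Sketch: run a deterministic computation pinned to each core and flag
# any core whose result disagrees with the expected value (Linux-only).
import hashlib
import os

def workload() -> str:
    # One deterministic compute kernel; a real screen would sweep many
    # instruction mixes, data patterns, and voltage/frequency corners.
    data = b"x" * 1_000_000
    for _ in range(50):
        data = hashlib.sha256(data).digest() * 31250  # keep size constant
    return hashlib.sha256(data).hexdigest()

# Reference result; ideally computed on a known-good machine.
EXPECTED = workload()

for core in sorted(os.sched_getaffinity(0)):
    os.sched_setaffinity(0, {core})          # pin to a single core
    if workload() != EXPECTED:
        print(f"core {core}: silent corruption suspected")
os.sched_setaffinity(0, set(range(os.cpu_count())))  # restore affinity
```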
Our connected future: How industrial data sharing can unite a fragmented world
The rapid and effective development of the coronavirus vaccines has set a new benchmark for today's industries, but it is not the only one. Increasingly, savvy enterprises are starting to share industrial data strategically and securely beyond their own four walls, to collaborate with partners, suppliers and even customers.
Worldwide, almost nine out of 10 (87%) business executives at larger industrial companies cite a need for the type of connected data that delivers unique insights to address challenges such as economic uncertainty, unstable geopolitical environments, historic labor shortages, and disrupted supply chains. In fact, executives report in a global study that the most common benefits of having an open and agnostic information-sharing ecosystem are greater efficiency and innovation (48%), increased employee satisfaction (45%), and staying competitive with other companies (44%).
The future is now: Unlocking the promise of AI in industrials
Many executives remain unsure where to apply AI solutions to capture real bottom-line impact. The result has been slow rates of adoption, with many companies taking a wait-and-see approach rather than diving in.
Rather than endlessly contemplate possible applications, executives should set an overall direction and road map and then narrow their focus to areas in which AI can solve specific business problems and create tangible value. As a first step, industrial leaders could gain a better understanding of AI technology and how it can be used to solve specific business problems. They will then be better positioned to begin experimenting with new applications.
Manufacturing needs MVDA: An introduction to modern, scalable multivariate data analysis
In most settings, a qualitative or semi-quantitative process understanding exists. Through extensive experimentation and knowledge transfer, subject-matter experts (SMEs) know a generally acceptable range for distinct process parameters, which is used to define the safe operating bounds of a process. In special cases, SMEs use bivariate analysis to understand how a small number of variables (no more than five) interact to influence outputs.
Quantitative process understanding can be achieved through a holistic analysis of all process data gathered throughout the product lifecycle, from process design and development, through qualification and engineering runs, to routine manufacturing. Data comes from time series process sensors, laboratory logbooks, batch production records, raw material COAs, and lab databases containing results of offline analysis. A process SME's first reaction to a dataset this complex is often that any analysis should be left to those with a deep understanding of machine learning and all the other big data buzzwords. In fact, this is the ideal opportunity for multivariate data analysis (MVDA).
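As a flavor of what MVDA looks like in practice, here is a minimal PCA-based sketch, assuming a batch-by-parameter matrix assembled from the sources above; the file and column names are hypothetical.

```python
# Minimal MVDA sketch: project many correlated process parameters onto a
# few latent components and flag batches that are multivariate outliers.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

batches = pd.read_csv("batch_summary.csv", index_col="batch_id")
X = StandardScaler().fit_transform(batches)   # put parameters on one scale

pca = PCA(n_components=3).fit(X)
scores = pca.transform(X)
print("variance explained:", pca.explained_variance_ratio_.round(2))

# Hotelling-style check: batches far from the origin in score space behave
# differently across *many* variables at once, even if each is in spec.
t2 = (scores**2 / pca.explained_variance_).sum(axis=1)
print(batches.index[t2 > t2.mean() + 3 * t2.std()])  # unusual batches
```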
Solution Accelerator: Multi-factory Overall Equipment Effectiveness (OEE) and KPI Monitoring
The Databricks Lakehouse provides an end-to-end data engineering, ETL, serving, and machine learning platform that enables organizations to accelerate their analytics workloads by automating the complexity of building and maintaining analytics pipelines through open architecture and formats. It connects high-velocity Industrial IoT data, ingested using standard protocols like MQTT, Kafka, Event Hubs, or Kinesis, with external datasets such as ERP systems, allowing manufacturers to converge their IT/OT data infrastructure for advanced analytics.
Using a Delta Live Tables pipeline, we leverage the medallion architecture: data from multiple sensors is ingested in a semi-structured format (JSON) into the bronze layer, where it is replicated in its raw form. The silver-layer transformations parse the key fields of the sensor data that must be extracted and structured for subsequent analysis, and ingest the preprocessed workforce data from ERP systems needed to complete the analysis. Finally, the gold layer aggregates sensor data using structured streaming stateful aggregations, calculates OT metrics such as OEE and technical availability (TA), and combines the aggregated metrics with shift-based workforce data, enabling IT/OT convergence.
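A hedged sketch of what the gold-layer aggregation might look like in PySpark structured streaming (the table and column names are invented, not the accelerator's actual schema). OEE is the product of availability, performance, and quality.

```python
# Gold-layer sketch: windowed, stateful aggregation of silver sensor data
# into OEE per machine per hour. Runs inside a Databricks notebook where
# `spark` is provided by the runtime.
from pyspark.sql import functions as F

silver = spark.readStream.table("sensor_silver")  # parsed sensor records

gold = (
    silver
    .withWatermark("event_time", "10 minutes")
    .groupBy(F.window("event_time", "1 hour"), "machine_id")
    .agg(
        F.sum("runtime_min").alias("runtime"),
        F.sum("planned_min").alias("planned"),
        F.sum("units_total").alias("total"),
        F.sum("units_good").alias("good"),
        F.sum("units_theoretical").alias("rated"),  # max output at rated speed
    )
    # OEE = availability x performance x quality
    .withColumn("availability", F.col("runtime") / F.col("planned"))
    .withColumn("performance", F.col("total") / F.col("rated"))
    .withColumn("quality", F.col("good") / F.col("total"))
    .withColumn("oee", F.col("availability") * F.col("performance") * F.col("quality"))
)
gold.writeStream.option("checkpointLocation", "/tmp/oee_ckpt").toTable("oee_gold")
```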
Luxury goods manufacturer gets a handle on production capacity from FourJaw
Machine monitoring software from FourJaw has driven a 14% uplift in machine utilisation at a luxury goods manufacturer. Fast-growing brass cabinet hardware manufacturer Armac Martin used data from FourJaw's machine monitoring platform to increase its production capacity and meet a surge in demand for its product range.
Armac Martin's Production Director, Rob McGrail, said: "When we were looking for a machine monitoring software supplier, a key criterion for us was not just the ease of deployment and software functionality; it was equally important that they were based locally in the UK and had a good level of customer support, both for deployment and ongoing customer success. FourJaw ticked all of these boxes."
Using AI to increase asset utilization and production uptime for manufacturers
Google Cloud created purpose-built tools and solutions to organize manufacturing data, make it accessible and useful, and help manufacturers quickly take significant steps on this journey by reducing time to value. In this post, we will explore a practical example of how manufacturers can use Google Cloud manufacturing solutions to train, deploy and extract value from ML-enabled capabilities to predict asset utilization and maintenance needs. The first step in a successful machine learning project is to unify the necessary data in a common repository. For this, we will use Manufacturing Connect, the factory edge platform co-developed with Litmus Automation, to connect to manufacturing assets and stream the asset telemetry to Pub/Sub.
The following scenario is based on a hypothetical company, Cymbal Materials. This fictitious discrete manufacturer runs 50+ factories in 10+ countries. 90% of Cymbal Materials' manufacturing processes involve milling, which is performed using industrial computer numerical control (CNC) milling machines. Although its factories implement routine maintenance checklists, unplanned and unknown failures still happen occasionally. Moreover, many Cymbal Materials factory workers lack the experience to identify and troubleshoot failures, due to the labor shortage and high turnover rate in the factories. Hence, Cymbal Materials is working with Google Cloud to build a machine learning model that can identify and analyze failures on top of Manufacturing Connect, Manufacturing Data Engine, and Vertex AI.
How United Manufacturing Hub Is Introducing Open Source to Manufacturing and Using Time-Series Data for Predictive Maintenance
The United Manufacturing Hub is an open-source Helm chart for Kubernetes that combines state-of-the-art IT/OT tools and technologies and puts them into the hands of the engineer. This allows us to standardize the IT/OT infrastructure across customers and makes the entire infrastructure easy to integrate and maintain. We typically deploy it on the edge and on-premises using k3s as a lightweight Kubernetes distribution. In the cloud, we use managed Kubernetes services like AKS. If the customer is scaling out and okay with using the cloud, we recommend services like Timescale Cloud. We are using TimescaleDB with MQTT, Kafka, and Grafana. We have microservices that subscribe to messages from the MQTT and Kafka message brokers and insert the data into TimescaleDB, as well as a microservice that reads data out and processes it before sending it to a Grafana plugin for visualization.
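The ingestion pattern described here (broker subscription feeding a TimescaleDB hypertable) can be illustrated with a short Python sketch. The topic hierarchy, table schema, and credentials are assumptions for illustration; UMH's own microservices are implemented separately.

```python
# Sketch of an MQTT-to-TimescaleDB ingestion microservice: subscribe to
# process-value messages and insert each reading into a hypertable.
import json

import paho.mqtt.client as mqtt  # paho-mqtt 1.x style client
import psycopg2

conn = psycopg2.connect("dbname=umh user=umh password=changeme host=localhost")

def on_message(client, userdata, msg):
    payload = json.loads(msg.payload)
    with conn, conn.cursor() as cur:  # commits on success
        cur.execute(
            "INSERT INTO process_value (time, topic, name, value) "
            "VALUES (to_timestamp(%s), %s, %s, %s)",
            (payload["timestamp_ms"] / 1000, msg.topic,
             payload["name"], payload["value"]),
        )

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883)
client.subscribe("ia/+/+/+/processValue")  # UMH-style topic layout (assumed)
client.loop_forever()
```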
We are currently positioning the United Manufacturing Hub with TimescaleDB as an open-source Historian. To achieve this, we are currently developing a user interface on top of the UMH so that OT engineers can use it and IT can still maintain it.
Leveraging Operations Data to Achieve 3%-5% Baseline Productivity Gains with Normalized KPIs
Traditional code-based data models are too cumbersome, cost-prohibitive and resource-intensive to support an enterprise data model. In a code-based environment, it can take six months just to write and test the code to bring a single plant's operating data into alignment with enterprise data pipelines. By contrast, a no-code solution like the Element Unify platform allows all IT/OT/ET data sources to be quickly tagged and brought into an asset hierarchy. The timeframe for a single plant to bring its operating data into alignment with the enterprise data architecture and data pipelines drops from six months to two to four weeks.
Digital transformation tools improve plant sustainability and maintenance
Maintenance is inherent to all industrial facilities. In pneumatic systems, valves wear out over time, causing leakage that leads to excessive compressed air consumption. Some systems can have many valves, which can make identifying a faulty one challenging. Leak troubleshooting can be time-consuming and, with the ongoing labor shortage and skills gap, maintenance personnel may already be stretched thin. There may not be enough staff to keep up with what must be done, and historical knowledge may not exist. When production must stop for repairs, it can be very expensive. For mid-sized food and beverage facilities, unplanned downtime costs around $30,000 per hour.
Finding Frameworks For End-To-End Analytics
New standards, guidelines, and consortium efforts are being developed to remove these barriers to data sharing for analytics purposes. But the amount of work required to make this happen is significant, and it will take time to establish the necessary level of trust across groups that historically have had minimal or no interactions.
For decades, test program engineers have relied upon the STDF file format, which is inadequate for today's use cases. STDF files cannot dynamically capture adaptive test limits, and they are unable to assist in real-time decisions at the ATE based upon current data and analytically derived models. In fact, most data analytics companies run a software agent on the ATE to extract data for decisions and model building. With ATE software updates, the agent often breaks, requiring the ATE vendor to fix each custom agent on every test platform. Emerging standards, TEMS and RITdb, address these limitations and enable new use cases.
But with a huge amount of data available in manufacturing settings, an API may be the best approach for moving sensitive data from its point of origin to a centralized repository, whether on-premises or in the cloud.
Improving asset criticality with better decision making at the plant level
The industry is beginning to see reliability, availability and maintainability (RAM) applications that integrally highlight the real constraints, including other operational and mechanical limits. A RAM-based simulation application provides fault-tree analysis based on actual material flows through a manufacturing process, with stage gates, inventory modeling, load sharing, standby/redundancy of equipment, operational phases, and duty cycles. In addition, a RAM application can simulate expectations of various random events such as weather, market dynamics, supply/distribution logistical events, and more. In one logistics example, a coker unit's bottom pump was thought to be undersized and constraining the unit's production. Changing the pump to a larger size did not fix the problem: further investigation showed that there were not enough trucks on the train to carry the product away, which kept the unit from operating at full capacity.
Renault Group and Atos launch a unique service to collect large-scale manufacturing data and accelerate Industry 4.0
Renault Group and Atos launch ID@scale (Industrial Data @ Scale), a new service for industrial data collection to support manufacturing companies in their digital journey towards Industry 4.0. "ID@S" will allow manufacturers to collect and structure data from industrial equipment at scale to improve operational excellence and product quality. Developed by the car manufacturer and already in operation within its factories, ID@scale is now industrialized, modularized and commercialized by the digital leader Atos.
More than 7,500 pieces of equipment are connected, with standardized data models representing over 50 different manufacturing processes from screwdriving to aluminum injection, including car frame welding, machining, painting, stamping, in addition to new manufacturing processes for electric motors and batteries. Renault Group is already saving 80 million euros per year and aims to deploy this solution across the remainder of its 35 plants, connecting over 22,000 pieces of equipment, by 2023 to generate savings of 200 million euros per year.
Advanced analytics improve process optimization
With advanced analytics, the engineers collaborated with data scientists to create a model comparing the theoretical and operational valve-flow coefficient of one control valve. Conditions in the algorithm were used to identify periods of valve degradation in addition to past failure events. By reviewing historical data, the SMEs determined the model would supply sufficient notification time to deploy maintenance resources so repairs could be made prior to failure.
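The core of such a model is comparing an operational flow coefficient, computed from process data, with the manufacturer's theoretical Cv at the same valve opening. A minimal sketch, assuming liquid service, a linear valve characteristic, and hypothetical column names; the article's actual model and conditions are not published here.

```python
# Valve-health sketch: operational Cv from process data vs. theoretical Cv
# from the vendor curve; a sustained gap suggests degradation.
import numpy as np
import pandas as pd

df = pd.read_csv("valve_history.csv")  # flow_gpm, dp_psi, opening_pct, sg

# Liquid-service sizing equation rearranged: Cv = Q / sqrt(dP / SG)
df["cv_operational"] = df["flow_gpm"] / np.sqrt(df["dp_psi"] / df["sg"])

# Theoretical Cv at the same opening (simple linear characteristic assumed)
CV_RATED = 120.0  # Cv fully open, assumed
df["cv_theoretical"] = CV_RATED * df["opening_pct"] / 100.0

# Alert when the rolling-average gap exceeds 15% of rated Cv
df["cv_gap"] = df["cv_operational"] - df["cv_theoretical"]
alert = df["cv_gap"].rolling(window=96).mean().abs() > 0.15 * CV_RATED
print(df.loc[alert, ["opening_pct", "cv_gap"]].tail())
```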
Batch Optimization using Quartic.ai
Aarbakke + Cognite | Boosting production, maintenance, and quality
Battery Analytics: The Game Changer for Energy Storage
Battery analytics refers to getting more out of the battery using software, not only during operation but also when selecting the right battery cell or designing the overall system. For now, the focus will be on the possibilities for optimizing the in-field operation of battery storage systems.
The TWAICE cloud analytics platform provides insights and solutions based on field data. The differentiating factor is the end-to-end approach with analytics at its heart. After processing and mapping the data, the platform's analytics layer runs different analytical algorithms: electrical, thermal and aging models as well as machine learning models. This variety of analytical approaches is key to balancing differences in data input quality and is also the basis for the wide and expanding range of solutions.
Where And When End-To-End Analytics Works
To control a wafer factory operation, engineering teams rely on process equipment and inspection statistical process control (SPC) charts, each representing a single parameter (i.e., univariate). With the complexity of some processes, the interactions between multiple parameters (i.e., multivariate) can result in yield excursions. This is when engineers leverage data to make decisions on subsequent fab or metrology steps to improve yield and quality.
"When we look at fab data today, we're doing that same type of adaptive learning," McIntyre said. "If I start seeing things that don't fit my expected behavior, they could still be okay by univariate control, but they don't fit my model in a multivariate sense. I'll work toward understanding that new combination. For instance, in a specific piece of equipment my pump-down pressure is high, but my gas flow is low and my chamber is cold, relatively speaking, and all (parameters) individually are in spec. But I've never seen that condition before, so I need to determine if this new set of process conditions has an impact. I send that material to my metrology station. Now, if that inline metrology data is smack in the center, I can probably disregard the signal."
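The situation McIntyre describes, where every parameter is in spec on its own but the combination is unlike anything seen before, is exactly what a multivariate distance check catches. A sketch using the Mahalanobis distance against historical in-spec runs; the data file, parameter values, and threshold are illustrative.

```python
# Multivariate check: flag a run whose parameter *combination* is far from
# the historical distribution, even though each parameter is in spec.
import numpy as np

history = np.loadtxt("chamber_history.csv", delimiter=",")  # rows: past runs
mean = history.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(history, rowvar=False))

def mahalanobis(run: np.ndarray) -> float:
    d = run - mean
    return float(np.sqrt(d @ cov_inv @ d))

# pump-down pressure high, gas flow low, chamber cold -- individually in spec
new_run = np.array([2.1, 14.8, 41.0])
if mahalanobis(new_run) > 3.0:   # threshold tuned on historical data
    print("unseen parameter combination -> route lot to metrology")
```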
The Hidden Factory: How to Expose Waste and Capacity on the Shop Floor
Without accurate production data, managers simply cannot hope to find the hidden waste on the shop floor. While strict manual data collection methods can take job shops only so far, the sophisticated manufacturer is leveraging solutions that collect, aggregate, and standardize production data autonomously. With this data in hand, accurate benchmarks can be set (they may be quite surprising), and areas of hidden capacity, as well as waste generators, can be far more easily identified.
How to Use Data in a Predictive Maintenance Strategy
Free-text and label correction engines are a solution to clean up missing or inconsistent work order and parts order data. Pattern recognition algorithms can replace missing items such as funding center codes. They can also fix work order (WO) descriptions to match the work actually performed. This often yields a 15% shift in root cause binning over uncorrected WO and parts data.
With programmable logic controller-generated threshold alarms (like an alarm that is generated when a single sensor exceeds a static value), "nuisance" alarms are often generated and then ignored. These false alarms quickly degrade the culture of an operating staff as their focus is shifted away from finding the underlying problem that is causing the alarm. In time, these distractions threaten the health of the equipment, as teams focus on making the alarm stop rather than addressing the issue.
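One common way to cut nuisance alarms is to require the condition to persist before annunciating, rather than alarming on every instantaneous excursion past a static threshold. A minimal sketch; the signal, threshold, and persistence window are illustrative, not a recommendation from the article.

```python
# Debounce sketch: alarm only when a signal stays above a static limit
# for a sustained period, suppressing momentary excursions.
import pandas as pd

vib = pd.read_csv("vibration.csv", index_col="time", parse_dates=True)["mm_s"]

THRESHOLD = 7.1          # static limit that currently drives the PLC alarm
PERSIST = "5min"         # condition must hold this long before annunciating

exceeding = vib > THRESHOLD
# fraction of the trailing window spent above the limit (needs sorted index)
persistence = exceeding.astype(float).rolling(PERSIST).mean()
alarms = persistence >= 1.0  # alarm only when continuously above threshold

print(f"raw excursions: {int(exceeding.sum())}, annunciated: {int(alarms.sum())}")
```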
Toward smart production: Machine intelligence in business operations
Our research looked at five different ways that companies are using data and analytics to improve the speed, agility, and performance of operational decision making. This evolution of digital maturity begins with simple tools, such as dashboards to aid human decision making, and ends with true MI, machines that can adjust their own performance autonomously based on historical and real-time data.
Connecting an Industrial Universal Namespace to AWS IoT SiteWise using HighByte Intelligence Hub
Merging industrial and enterprise data across multiple on-premises deployments and industrial verticals can be challenging. This data comes from a complex ecosystem of industrial-focused products, hardware, and networks from various companies and service providers, which drives the creation of data silos and isolated systems that perpetuate a one-to-one integration strategy.
HighByte Intelligence Hub does just that. It is a middleware solution for a universal namespace that helps you build scalable, modern industrial data pipelines in AWS. It also allows users to collect data from various sources, add context to the data being collected, and transform it into a format that other systems can understand.
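To make "adding context" concrete, here is a toy illustration of the modeling step a universal namespace performs: raw, flat tag values are wrapped in a named, contextualized structure that downstream systems can consume. The tag names and model fields are invented, not HighByte's actual configuration.

```python
# Toy contextualization: raw PLC tags in, a named asset model out.
raw = {"plc7/temp3": 78.2, "plc7/press1": 2.41}  # flat tags from the source

model = {
    "asset": "Extruder-7",      # context the raw tags lack
    "site": "Plant-A",
    "measurements": {
        "barrel_temperature_C": raw["plc7/temp3"],
        "head_pressure_bar": raw["plc7/press1"],
    },
}
print(model)  # ready for a consumer such as AWS IoT SiteWise
```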
Rub-A-Dub-Dub...It's All About the Data Hub
If these terms leave you more confused than when you started reading, join the club. I am an OT guy, and so much of this was new to me. And it's another reason to have a good IT/OT architect on your team. The bottom line is that these terms support the various perspectives that must be addressed in connecting and delivering data, from architecture and patterns to services and translation layers. Remember, we are not just talking about time-series or hierarchical asset data. Data such as time, events, alarms, units of work, units of production time, materials and material flows, and people can all be contextualized. And this is the tough nut to crack, as the new OT ecosystem operates in multiple modes, not just the transactional mode we find in the back office.
How to Reduce Tool Failure with CNC Tool Breakage Detection
There are several active technologies used in CNC machining that enable manufacturers to realize these benefits. The type of system used for tooling breakage detection may consist of one or more of the following technologies.
They're often tied to production monitoring systems and, ideally, IIoT platforms that can analyze tooling data in the cloud to better predict breakages in the future. One innovation in the area of non-contact technologies is the use of high-frequency data to diagnose, predict and avoid failures. This technology is sensorless and uses instantaneous real-time data sampled at an extremely high rate to build accurate tool failure detection models.
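An illustrative sketch of the high-frequency idea: summarize each cut's load waveform into a few features and flag cuts whose signature departs from the tool's learned baseline. The sampling source, file names, and thresholds are assumptions, not any vendor's actual algorithm.

```python
# Tool-signature sketch: featurize each cut's high-rate load trace and
# flag statistical departures from a baseline of healthy cuts.
import numpy as np

def features(load: np.ndarray) -> np.ndarray:
    # mean level, variability, and sharpest transient in the trace
    return np.array([load.mean(), load.std(), np.abs(np.diff(load)).max()])

baseline = np.stack([features(np.load(f"cut_{i}.npy")) for i in range(200)])
mu, sigma = baseline.mean(axis=0), baseline.std(axis=0)

new_cut = features(np.load("cut_current.npy"))
z = np.abs((new_cut - mu) / sigma)
if (z > 4).any():                       # threshold assumed
    print("tool signature anomaly -> inspect or retract tool")
```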
Sight Machine, NVIDIA Collaborate to Turbocharge Manufacturing Data Labeling
The collaboration connects Sight Machine's manufacturing data foundation with NVIDIA's AI platform to break through the last bottleneck in the digital transformation of manufacturing: preparing raw factory data for analysis. Sight Machine's manufacturing intelligence will guide NVIDIA machine learning software running on NVIDIA GPU hardware to process two or more orders of magnitude more data at the start of digital transformation projects.
Accelerating data labeling will enable Sight Machine to quickly onboard large enterprises with massive data lakes. It will automate and accelerate work and lead to even faster time to value. While similar automated data mapping technology is being developed for specific data sources or well documented systems, Sight Machine is the first to use data introspection to automatically map tags to models for a wide variety of plant floor systems.
Machining cycle time prediction: Data-driven modelling of machine tool feedrate behavior with neural networks
Accurate prediction of machining cycle times is important in the manufacturing industry. Usually, computer-aided manufacturing (CAM) software estimates machining times using the commanded feedrate from the toolpath file and basic kinematic settings. Typically, these methods do not account for toolpath geometry or toolpath tolerance and therefore underestimate machining cycle times considerably. Removing the need for machine-specific knowledge, this paper presents a data-driven feedrate and machining cycle time prediction method that builds a neural network model for each machine tool axis. In this study, datasets composed of the commanded feedrate, nominal acceleration, toolpath geometry and the measured feedrate were used to train the neural network models. Validation trials using a representative industrial thin-wall structure component on a commercial machining center showed that this method estimated the machining time with more than 90% accuracy. The results show that neural network models can learn the behavior of a complex machine tool system and predict cycle times. Further integration of the method will be critical to the implementation of digital twins in Industry 4.0.
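The paper's idea can be sketched simply: a small network per axis maps the commanded feedrate and local toolpath geometry to the achieved feedrate, and integrating segment length over predicted feedrate gives the cycle time. The feature names, network size, and files below are assumptions, not the paper's exact setup.

```python
# Per-axis feedrate model sketch: learn achieved feedrate from commanded
# feedrate and toolpath geometry, then integrate time along the path.
import numpy as np
import pandas as pd
from sklearn.neural_network import MLPRegressor

data = pd.read_csv("x_axis_training.csv")
features = ["cmd_feedrate", "nominal_accel", "segment_length", "curvature"]

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(data[features], data["measured_feedrate"])

# Cycle time = sum over segments of (segment length / achieved feedrate)
path = pd.read_csv("new_toolpath.csv")
feed = np.clip(net.predict(path[features]), 1e-3, None)  # mm/min, avoid /0
cycle_time_min = (path["segment_length"] / feed).sum()
print(f"predicted cycle time: {cycle_time_min:.1f} min")
```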
How the Cloud is Changing the Role of Metadata in Industrial Intelligence
Right now, though, many companies have trouble seeing that context in existing datasets. Much of that difficulty owes to the original design of operational technology (OT) systems like supervisory control and data acquisition (SCADA) systems or data historians. Today, the story around the collection of data in OT systems is much the same, even though each of these descriptive points about the data could paint a more holistic view of asset performance.
As many process businesses turn to a data lake strategy to leverage the value of their data, preserving metadata as OT data moves to the cloud represents a significant opportunity to optimize the maintenance, productivity, sustainability, and safety of critical assets. The loss of metadata has been among the most severe limiting factors in the value of OT data. By one estimate, industrial businesses are losing out on 20-30 percent of the value of their data through routine compression of metadata or losses in their asset hierarchy models. With an expertise shortage sweeping across process-intensive operations, many companies will need to digitize and conserve institutional knowledge, beginning with their own data.
Automation Within Supply Chains: Optimizing the Manufacturing Process
Is Clip A "Slack" For Factories?
Clip aims to bring data gathering and analytics, information sharing, and collaboration onto a single platform. The system connects all intelligent industrial equipment in a production facility, together with workers who can access all information and adjust operations through computers and portable devices.
It's an ambitious undertaking, one that requires guaranteeing a very high degree of interoperability to ensure that people, machines and processes can communicate with each other seamlessly, and that all key systems such as Material Requirements Planning (MRP), Enterprise Resource Planning (ERP) and others can directly access up-to-date information from machines and processes. This higher level of automation, if implemented right, can unlock a new level of efficiency for manufacturing companies.
Build a Complete Analytics Pipeline to Optimize Smart Factory Operations
2021 Assembly Plant of the Year: GKN Drives Transformation With New Culture, Processes and Tools
All-wheel drive (AWD) technology has taken the automotive world by storm in recent years, because of its ability to effectively transfer power to the ground. Today, many sport utility vehicles use AWD for better acceleration, performance, safety and traction in all kinds of driving conditions. GKN's state-of-the-art ePowertrain assembly plant in Newton, NC, supplies AWD systems to BMW, Ford, General Motors and Stellantis facilities in North America and internationally. The 505,000-square-foot facility operates multiple assembly lines that mass-produce more than 1.5 million units annually.
"Areas of improvement include a first-time-through tracking dashboard, tailored to each individual line and shift, that tracks each individual failure mode," says Tim Nash, director of manufacturing engineering. "We use this tool to monitor improvements and progress on a daily basis."
"Overhaul of process control limits has been one of our biggest achievements," claims Nash. "By setting tighter limits for assembly operations such as pressing and screwdriving, we are able to detect and reject defective units in station vs. a downstream test operation. This saves both time and scrap related to further assembly of the defective unit."
"When we started on our turnaround journey, our not-right-first-time rate was about 26 percent," adds Smith. "Today, it averages around 6 percent. A few years ago, cost of non-quality was roughly $23 million annually vs. less than $3 million today."
Digital Transformation in the Beverage Manufacturing and Bottling
How W Machine Uses FactoryWiz Machine & Equipment Monitoring
Industry 4.0 and the Automotive Industry
"It takes about 30 hours to manufacture a vehicle. During that time, each car generates massive amounts of data," points out Robert Engelhorn, director of the Munich plant. "With the help of artificial intelligence and smart data analytics, we can use this data to manage and analyze our production intelligently. AI is helping us to streamline our manufacturing even further and ensure premium quality for every customer. It also saves our employees from having to do monotonous, repetitive tasks."
One part of the plant that is already seeing benefits from AI is the press shop, which turns more than 30,000 sheet metal blanks a day into body parts for vehicles. Each blank is given a laser code at the start of production so the body part can be clearly identified throughout the manufacturing process. This code is picked up by BMW's iQ Press system, which records material and process parameters, such as the thickness of the metal and oil layer, and the temperature and speed of the presses. These parameters are related to the quality of the parts produced.
Big Data Analytics in Electronics Manufacturing: is MES the key to unlocking its true potential?
In a modern SMT fab, every time a stencil is loaded or a squeegee makes a pass, data is generated. Every time a nozzle picks and places a component, data is generated. Every time a camera records a component or board inspection image, data is generated. The abundance of data in the electronics industry is a result of the long-existing and widespread process automation and proliferation of sensors, gauges, meters and cameras, which capture process metrics, equipment data and quality data.
In SMT and electronics, the main challenge isn't the availability of data but the ability to look at the data generated by the process as a whole: making sense of the data pertaining to each shop floor transaction, using it to generate information from a single point of truth instead of disparate, unconnected point solutions, and applying the resulting insight to decisions that ultimately improve process KPIs, OEE, productivity, yield, compliance and quality.
2021 IW Best Plants Winner: IPG Tremonton Wraps Up a Repeat IW Best Plants Win
"If you wrapped it and just wound it straight, it would look like a record, with peaks and valleys," says Richardson. So instead, the machines rotate horizontally, like two cans of pop on turntables. Initially, IPG used a gauge that indicated whether the film was too thick or too thin. "That was OK," says Richardson, "but it didn't get us the information we needed."
Working with an outside company, IPG Tremonton upgraded the gauge to one that could quantify the thickness of the cut plastic in real time as the machine operates.
The benefits of the tinkering were twofold. First, the upgrade gave operators the ability to correct deviations on the fly. Second, "we found that we had some variations between a couple of our machines," Richardson says. Using the new gauge on both machines revealed that one of them was producing film "a few percentage points thicker" than its twin. "We [were] basically giving away free product," Richardson recalled. The new sensor gave IPG the information it needed to label film more accurately.
AWS IoT SiteWise Edge Is Now Generally Available for Processing Industrial Equipment Data on Premises
With AWS IoT SiteWise Edge, you can organize and process your equipment data in the on-premises SiteWise gateway using AWS IoT SiteWise asset models. You can then read the equipment data locally from the gateway using the same application programming interfaces (APIs) that you use with AWS IoT SiteWise in the cloud. For example, you can compute metrics such as Overall Equipment Effectiveness (OEE) locally for use in a production-line monitoring dashboard on the factory floor.
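Because the gateway exposes the same APIs as the cloud service, reading a locally computed metric can look like an ordinary SDK call pointed at the gateway instead of the AWS endpoint. A sketch, with the gateway address, certificate path, and asset/property IDs all placeholders.

```python
# Sketch: read a locally computed property value from a SiteWise Edge
# gateway by overriding the SDK endpoint. IDs and paths are placeholders.
import boto3

sitewise = boto3.client(
    "iotsitewise",
    endpoint_url="https://10.0.0.12:443",   # on-premises gateway (assumed)
    region_name="us-west-2",
    verify="/etc/sitewise/edge-ca.pem",     # gateway's self-signed CA
)

resp = sitewise.get_asset_property_value(
    assetId="LINE-1-ASSET-ID",              # placeholder
    propertyId="OEE-PROPERTY-ID",           # placeholder
)
value = resp["propertyValue"]["value"]["doubleValue"]
print(f"line 1 OEE (computed at the edge): {value:.2%}")
```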
Transforming quality and warranty through advanced analytics
For companies seeking to improve financial performance and customer satisfaction, the quickest route to success is often a product-quality transformation that focuses on reducing warranty costs. Quality problems can be found across all industries, and even the best companies can have weak spots in their quality systems. These problems can lead to accidents, failures, or product recalls that harm the company's reputation. They also create the need for prevention measures that increase the total cost of quality. The ultimate outcomes are often poor customer satisfaction that decreases top-line growth, and additional costs that damage bottom-line profitability.
To transform quality and warranty, leading industrial companies are combining traditional tools with the latest artificial-intelligence (AI) and machine-learning (ML) techniques. The combined approach allows these manufacturers to reduce the total cost of quality, ensure that their products perform, and meet customer expectations. The impact of a well-designed and rigorously executed transformation thus extends beyond cost reduction to include higher profits and revenues as well.
Survey: Data Analytics in the Chemical Industry
Seeq recently conducted a poll of chemical industry professionals (process engineers, mechanical and reliability engineers, production managers, chemists, research professionals, and others) to get their take on the state of data analytics and digitalization. Some of the responses confirmed behaviors we've witnessed first-hand in recent years: the challenges of organizational silos and workflow inefficiencies, and a common set of high-value use cases across organizations. Other responses surprised us; read on to see why.
AI Solution for Operational Excellence
Falkonry Clue is a plug-and-play solution for predictive production operations that identifies and addresses operational inefficiencies from operational data. It is designed to be used directly by operational practitioners, such as production engineers, equipment engineers or manufacturing engineers, without requiring the assistance of data scientists or software engineers.
Efficiency of production plants: how to track, manage and resolve micro-stops
Why are the micro-stops listed above not tracked by companies? Conversations with many business owners and maintenance managers show that everyone is aware of the problem but underestimates the impact of these stops on overall production efficiency. These stoppages are almost never justified by the operators: the personnel at the machine are busy reaching their production targets and therefore do not consider it important to stop and log the micro-stops. How often do you hear people say that the time needed to justify a downtime is greater than the downtime itself!
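This is exactly where automatic detection helps: if stop intervals are captured from machine state data, micro-stops can be counted and costed without anyone logging them by hand. A minimal sketch, with an assumed state log schema and an assumed two-minute micro-stop cutoff.

```python
# Micro-stop sketch: find stops under 2 minutes in a machine state log and
# total up the minutes lost per day, with no manual justification needed.
import pandas as pd

log = pd.read_csv("machine_state.csv", parse_dates=["start", "end"])
log["duration_s"] = (log["end"] - log["start"]).dt.total_seconds()

stops = log[log["state"] == "stopped"]
micro = stops[stops["duration_s"] < 120]          # micro-stop: under 2 min

summary = (
    micro.groupby(micro["start"].dt.date)["duration_s"]
    .agg(count="count", lost_minutes=lambda s: s.sum() / 60)
)
print(summary)  # daily count and minutes lost to stops nobody logs
```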