Generative AI

Assembly Line

🧑‍🏭🧠 Hitachi to use generative AI to pass expert skills to next generation

📅 Date:

✍️ Authors: Yoichiro Hiroi, Tsuyoshi Tamesue

🔖 Topics: Generative AI, Worker Training

🏢 Organizations: Hitachi


Japan’s Hitachi will utilize generative artificial intelligence to pass on expert skills in maintenance and manufacturing to newer workers, aiming to blunt the impact of mass retirements of experienced employees. The company will use the technology to generate videos depicting difficulties or accidents at railways, power stations and manufacturing plants and use them in virtual training for employees.

Hitachi has already developed an AI system that creates images based on 3D data of plants and infrastructure. It projects possible malfunctions – smoke, a cave-in, a buckled rail – onto an image of an actual rail track. This can also be done on images of manufacturing sites, including metal processing and assembly lines. Hitachi will merge this technology into a program for virtual drills that is now under development.

Read more at Nikkei Asia

⛓️🧠 Multinationals turn to generative AI to manage supply chains

📅 Date:

✍️ Author: Oliver Telling

🔖 Topics: Generative AI, Supply Chain Control Tower

🏢 Organizations: Unilever, Siemens, Maersk, Pactum, Walmart, Scoutbee, Altana


Navneet Kapoor, chief technology officer at Maersk, said “things have changed dramatically over the past year with the advent of generative AI”, which can be used to build chatbots and other software that generates responses to human prompts.

New supply chain laws in countries such as Germany, which require companies to monitor environmental and human rights issues in their supply chains, have driven interest and investment in the area.

Read more at Financial Times

U. S. Steel Aims to Improve Operational Efficiencies and Employee Experiences with Google Cloud’s Generative AI

📅 Date:

🔖 Topics: Generative AI

🏢 Organizations: US Steel, Google


United States Steel Corporation (NYSE: X) (“U. S. Steel”) and Google Cloud today announced a new collaboration to build applications using Google Cloud’s generative artificial intelligence (“gen AI”) technology to drive efficiencies and improve employee experiences in the largest iron ore mine in North America. As a leading manufacturer engaging in gen AI with Google Cloud, U. S. Steel continues to advance its more than 100-year legacy of innovation.

The first gen AI-driven application that U. S. Steel will launch is called MineMind™, which aims to simplify equipment maintenance by providing optimal solutions to mechanical problems, saving time and money and ultimately improving productivity. Underpinned by Google Cloud AI technologies such as Document AI and Vertex AI, MineMind™ is expected not only to improve the maintenance team’s experience by more easily bringing the information they need to their fingertips, but also to save costs through more efficient use of technicians’ time and better-maintained trucks. The initial phase of the launch will begin in September and will impact more than 60 haul trucks at U. S. Steel’s Minnesota Ore Operations facilities, Minntac and Keetac.

Read more at Business Wire

Ansys Accelerates Innovation by Expanding AI Offerings with New Virtual Assistant

📅 Date:

🔖 Topics: Generative AI

🏢 Organizations: Ansys, Microsoft


Expanding artificial intelligence (AI) integration across its simulation portfolio and customer community, Ansys (NASDAQ: ANSS) announced the limited beta release of AnsysGPT, a multilingual, conversational AI virtual assistant set to revolutionize the way Ansys customers receive support. Developed using state-of-the-art ChatGPT technology available via the Microsoft Azure OpenAI Service, AnsysGPT uses well-sourced Ansys public data to answer technical questions concerning Ansys products, relevant physics, and engineering topics within one comprehensive tool.

Expected in early 2024, AnsysGPT will optimize technical support for customers — delivering information and solutions more efficiently and furthering the democratization of simulation. While currently in beta testing with select customers and channel partners, upon its full release next year AnsysGPT will provide easily accessible 24/7 technical support through the Ansys website. Unlike general AI virtual assistants that use unsupported information, AnsysGPT is trained using Ansys data to generate tailored, applicable responses drawn from reliable Ansys resources including, but not limited to, Ansys Innovation Courses, technical documentation, blog articles, and how-to videos. Strong controls were put in place to ensure that no proprietary information of any kind was used during the training process, and that customer inputs are not stored or used to train the system in any way.

Read more at Ansys News

Sight Machine Factory CoPilot Democratizes Industrial Data With Generative AI

📅 Date:

🔖 Topics: Generative AI

🏢 Organizations: Sight Machine, Microsoft


Sight Machine Inc. today announced the release of Factory CoPilot, democratizing industrial data through the power of generative artificial intelligence. By integrating Sight Machine’s Manufacturing Data Platform with Microsoft Azure OpenAI Service, Factory CoPilot brings unprecedented ease of access to manufacturing problem solving, analysis and reporting.

Using a natural language user interface similar to ChatGPT, Factory CoPilot offers an intuitive, “ask the expert” experience for all manufacturing stakeholders, regardless of data proficiency. In response to a single question, Factory CoPilot can automatically summarize all relevant data and information about production in real time (e.g., for daily meetings) and generate user-friendly reports, emails, charts, and other content (in any language) about the performance of any machine, line, or plant across the manufacturing enterprise, based on contextualized data in the Sight Machine platform.

Read more at Sight Machine Press

Utility AI Beta

Retentive Network: A Successor to Transformer for Large Language Models

📅 Date:

✍️ Authors: Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, Furu Wei

🔖 Topics: Retentive Network, Transformer, Large Language Model, Generative AI


In this work, we propose Retentive Network (RetNet) as a foundation architecture for large language models, simultaneously achieving training parallelism, low-cost inference, and good performance. We theoretically derive the connection between recurrence and attention. Then we propose the retention mechanism for sequence modeling, which supports three computation paradigms, i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel representation allows for training parallelism. The recurrent representation enables low-cost O(1) inference, which improves decoding throughput, latency, and GPU memory usage without sacrificing performance. The chunkwise recurrent representation facilitates efficient long-sequence modeling with linear complexity, where each chunk is encoded in parallel while the chunks are summarized recurrently. Experimental results on language modeling show that RetNet achieves favorable scaling results, parallel training, low-cost deployment, and efficient inference. These intriguing properties make RetNet a strong successor to Transformer for large language models.
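The equivalence of the parallel and recurrent paradigms can be sketched in a few lines of NumPy. This is a simplified single-head illustration of the paper's retention equations (omitting multi-head decay schedules, xPos rotation, and group normalization): the parallel form computes all positions at once with a causal decay mask, while the recurrent form carries a single d×d state, which is what enables O(1)-memory decoding.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 6, 4          # sequence length, head dimension
gamma = 0.9          # scalar decay factor

Q = rng.normal(size=(T, d))
K = rng.normal(size=(T, d))
V = rng.normal(size=(T, d))

# Parallel form: O = (Q K^T ⊙ D) V, with decay mask D[n, m] = gamma^(n-m) for m <= n
n, m = np.arange(T)[:, None], np.arange(T)[None, :]
D = np.where(n >= m, float(gamma) ** (n - m), 0.0)
O_parallel = (Q @ K.T * D) @ V

# Recurrent form: S_n = gamma * S_{n-1} + k_n^T v_n ;  o_n = q_n S_n
# (constant-size state per step, independent of sequence length)
S = np.zeros((d, d))
O_recurrent = np.empty((T, d))
for t in range(T):
    S = gamma * S + np.outer(K[t], V[t])
    O_recurrent[t] = Q[t] @ S

# Both paradigms produce the same outputs
assert np.allclose(O_parallel, O_recurrent)
```

The chunkwise paradigm interpolates between the two: each chunk is computed with the parallel form, and a recurrent state summarizes the chunks that came before.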

Read more at arXiv

LongNet: Scaling Transformers to 1,000,000,000 Tokens

📅 Date:

✍️ Authors: Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei

🔖 Topics: Transformer, Large Language Model, Generative AI


Scaling sequence length has become a critical demand in the era of large language models. However, existing methods struggle with either computational complexity or model expressivity, rendering the maximum sequence length restricted. To address this issue, we introduce LongNet, a Transformer variant that can scale sequence length to more than 1 billion tokens, without sacrificing the performance on shorter sequences. Specifically, we propose dilated attention, which expands the attentive field exponentially as the distance grows. LongNet has significant advantages: 1) it has a linear computation complexity and a logarithmic dependency between any two tokens in a sequence; 2) it can serve as a distributed trainer for extremely long sequences; 3) its dilated attention is a drop-in replacement for standard attention, which can be seamlessly integrated with the existing Transformer-based optimization. Experimental results demonstrate that LongNet yields strong performance on both long-sequence modeling and general language tasks. Our work opens up new possibilities for modeling very long sequences, e.g., treating a whole corpus or even the entire Internet as a sequence.
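The core sparsity pattern behind dilated attention can be sketched with plain index arithmetic. This is a simplified illustration (not the paper's implementation): the sequence is split into segments of length w, and within each segment only positions at stride r participate; the full model mixes several such patterns with geometrically growing (w, r), which is what lets the attentive field grow exponentially at roughly constant cost per pattern.

```python
def dilated_indices(seq_len, segment_len, dilation):
    """Positions kept by one (w, r) dilated-attention pattern.

    The sequence is split into segments of length `segment_len` (w);
    within each segment, only positions at stride `dilation` (r),
    counted from the segment start, are retained.
    """
    kept = []
    for start in range(0, seq_len, segment_len):
        segment = range(start, min(start + segment_len, seq_len))
        kept.append([i for i in segment if (i - start) % dilation == 0])
    return kept

# One sparse pattern over a 16-token sequence with w=8, r=4:
print(dilated_indices(16, 8, 4))  # [[0, 4], [8, 12]]
```

Each segment's kept positions attend densely among themselves, so per-pattern cost stays proportional to the number of kept positions rather than the full sequence length.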

Read more at arXiv

🧠 Toyota Research Institute Unveils New Generative AI Technique for Vehicle Design

📅 Date:

🔖 Topics: Generative AI

🏭 Vertical: Automotive

🏢 Organizations: Toyota


Toyota Research Institute (TRI) today unveiled a generative artificial intelligence (AI) technique to amplify vehicle designers. Currently, designers can leverage publicly available text-to-image generative AI tools as an early step in their creative process. With TRI’s new technique, designers can add initial design sketches and engineering constraints into this process, cutting down the iterations needed to reconcile design and engineering considerations.

TRI researchers released two papers describing how the technique incorporates precise engineering constraints into the design process. Constraints like drag (which affects fuel efficiency) and chassis dimensions like ride height and cabin dimensions (which affect handling, ergonomics, and safety) can now be implicitly incorporated into the generative AI process. The team tied principles from optimization theory, used extensively for computer-aided engineering, to text-to-image-based generative AI. The resulting algorithm allows the designer to optimize engineering constraints while maintaining their text-based stylistic prompts to the generative AI process.

Read more at Toyota Newsroom

3DGPT - your 3D printing friend & collaborator!

Demo: Cognite Data Fusion's Generative AI Copilot

🧠 What is Visual Prompting?

📅 Date:

✍️ Author: Mark Sabini

🔖 Topics: Generative AI

🏢 Organizations: Landing AI


Landing AI’s Visual Prompting capability is an innovative approach that takes text prompting, used in applications such as ChatGPT, to computer vision. The impressive part? With only a few clicks, you can transform an unlabeled dataset into a deployed model in mere minutes. This results in a significantly simplified, faster, and more user-friendly workflow for applying computer vision.

In a quest to make Visual Prompting more practical for customers, we studied 40 projects across the manufacturing, agriculture, medical, pharmaceutical, life sciences, and satellite imagery verticals. Our analysis revealed that Visual Prompting alone could solve just 10% of the cases, but the addition of simple post-processing logic increases this to 68%.

Read more at Landing AI Blog

What does it take to talk to your Industrial Data in the same way we talk to ChatGPT?

📅 Date:

✍️ Author: Jason Schern

🔖 Topics: Generative AI, Large Language Model

🏢 Organizations: Cognite


The vast data set used to train LLMs is curated in various ways to provide clean, contextualized data. Contextualized data includes explicit semantic relationships within the data that can greatly affect the quality of the model’s output. Contextualizing the data we provide as input to an LLM ensures that the text consumed is relevant to the task at hand. For example, when prompting an LLM to provide information about operating industrial assets, the data provided to the LLM should include not only the data and documents related to those assets but also the explicit and implicit semantic relationships across different data types and sources.

An LLM is trained by parceling text data into smaller collections, or chunks, that can be converted into embeddings. An embedding is a sophisticated numerical representation of a chunk of text that takes into account the context of surrounding or related information. This makes it possible to perform mathematical comparisons of the similarities, differences, and patterns between different chunks to infer relationships and meaning. These mechanisms enable an LLM to learn a language and understand new data that it has not seen previously.
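The chunk-to-embedding comparison described above can be sketched with a toy stand-in for a learned embedding model. The hashed bag-of-words vectors below are an illustrative assumption, not how real LLM embeddings are produced, but the comparison step (cosine similarity between vectors) works the same way:

```python
import numpy as np
from zlib import crc32

def toy_embed(text, dim=64):
    """Toy stand-in for an embedding model: hash each token into a
    fixed-size vector, so chunks sharing vocabulary share components.
    Real embeddings are learned and capture context, not just tokens."""
    v = np.zeros(dim)
    for token in text.lower().split():
        v[crc32(token.encode()) % dim] += 1.0
    norm = np.linalg.norm(v)
    return v / norm if norm else v

def cosine(a, b):
    # Vectors are unit-normalized, so the dot product is the cosine similarity
    return float(a @ b)

# Hypothetical industrial-data chunks for illustration
pump = toy_embed("pump 23 bearing temperature trend")
same_topic = toy_embed("bearing temperature readings for pump 23")
unrelated = toy_embed("quarterly financial report summary")

# Chunks about the same asset land closer together in embedding space
assert cosine(pump, same_topic) > cosine(pump, unrelated)
```

A retrieval pipeline uses exactly this comparison to pull the chunks most relevant to a prompt before handing them to the LLM.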

Read more at Cognite Blog

Will Generative AI finally turn data swamps into contextualized operations insight machines?

📅 Date:

🔖 Topics: Large Language Model, Generative AI

🏢 Organizations: Cognite


Generative AI, such as ChatGPT/GPT-4, has the potential to put industrial digital transformation into hyperdrive. Whereas a process engineer might spend several hours performing “human contextualization” (at an hourly rate of $140 or more) manually – again and again – contextualized industrial knowledge graphs provide the trusted data relationships that enable Generative AI to accurately navigate and interpret data for Operators without requiring data engineering or coding competencies.

Read more at Cognite Blog