Generative AI is already used routinely in corporate processes, from document generation to procurement support. According to TAdviser, 90 of Russia's 100 largest companies used artificial intelligence and machine learning in their internal processes in 2024. The focus is now shifting as these technologies move from offices to production sites. This shift was the subject of an open discussion in which TVEL, Nornickel, Gazprom Neft NTC, Sber, Datana, and Nanyang Technological University (Singapore) talked about moving from prototypes to working solutions.
One key conclusion is that industry needs precise, process-specific tools rather than abstract universal models. Hence the interest in multimodal and multi-agent architectures that can process text, images, and telemetry, which matters most for analyzing sensor readings and camera footage in real time while staying compliant with the relevant standards and regulations.
Nornickel is one of the few companies that already uses AI directly in production. Predictive models help control the flotation process, which uses air bubbles to extract non-ferrous metals from ore: AI analyzes the composition of the froth and adjusts equipment settings in real time. Initially the technology used only tabular data, but it was later supplemented with computer vision, resulting in a 0.3-0.5% increase in recovery rates. Nornickel is now developing a language model for metallurgy. Trained on GOST standards, regulations, and process documentation, the model is intended to become the foundation of AI assistants in R&D, HR, and legal services.
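For illustration only, such a control loop might look roughly like the sketch below; the feature names, model weights, and control gain are invented for the example, and the actual Nornickel system is not described in that level of detail.

```python
import numpy as np

# Hypothetical illustration: fuse tabular telemetry with camera-derived froth
# features to predict recovery and nudge the air-flow setpoint. All feature
# names, coefficients, and the control gain below are invented.

TELEMETRY_FEATURES = ["feed_rate_tph", "reagent_dosage_gpt", "pulp_ph"]
VISION_FEATURES = ["bubble_size_mm", "froth_velocity_mms", "froth_stability"]

# Stand-in for a trained regression model (tabular-only at first,
# later extended with computer-vision features, as in the article).
WEIGHTS = np.array([0.02, 0.15, 0.4, -0.3, 0.1, 0.25])
BIAS = 80.0

def predict_recovery(telemetry: dict, vision: dict) -> float:
    """Predict flotation recovery (%) from fused telemetry and vision features."""
    x = np.array([telemetry[k] for k in TELEMETRY_FEATURES] +
                 [vision[k] for k in VISION_FEATURES])
    return float(BIAS + WEIGHTS @ x)

def adjust_airflow(current_setpoint: float, predicted: float,
                   target: float = 87.0, gain: float = 0.05) -> float:
    """Proportional correction of the air-flow setpoint toward the target recovery."""
    return current_setpoint + gain * (target - predicted)

if __name__ == "__main__":
    telemetry = {"feed_rate_tph": 120.0, "reagent_dosage_gpt": 35.0, "pulp_ph": 9.2}
    vision = {"bubble_size_mm": 12.0, "froth_velocity_mms": 40.0, "froth_stability": 0.8}
    recovery = predict_recovery(telemetry, vision)
    print(f"predicted recovery: {recovery:.1f}%")
    print(f"new air-flow setpoint: {adjust_airflow(10.0, recovery):.2f} m3/min")
```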
Gazprom Neft NTC is testing AI for developing hard-to-recover reserves, where conventional automation fails. About 70% of wells experience unstable operation, with abrupt pressure fluctuations, gas breakthroughs, and occasional pump shutdowns. Controlling the equipment from remote data centers is not effective here: data arrives with delays, and the system cannot keep pace with the rapidly changing situation. The solution was to move signal processing closer to the equipment, and local computers at the wells now analyze data in real time. The next step is to teach them to adjust parameters without human intervention. "This will help us take advantage of the operational potential that is currently being lost," said Evgeny Yudin, director of the Production Management at Current Facilities program.
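As a rough illustration of what such on-site processing could look like, here is a minimal sketch of a local monitor that flags abrupt pressure fluctuations without waiting on a remote data center; the window size, threshold, and units are assumptions, not details of the actual deployment.

```python
from collections import deque
import statistics

# Illustrative sketch only: an edge-side monitor for well pressure telemetry.
# Thresholds, window size, and labels are invented for the example.

class EdgePressureMonitor:
    def __init__(self, window: int = 60, z_threshold: float = 3.0):
        self.readings = deque(maxlen=window)  # last N samples, e.g. one per second
        self.z_threshold = z_threshold

    def ingest(self, pressure_atm: float) -> str:
        """Classify a new reading locally, without a round trip to a data center."""
        self.readings.append(pressure_atm)
        if len(self.readings) < 10:
            return "warming_up"
        mean = statistics.fmean(self.readings)
        stdev = statistics.pstdev(self.readings) or 1e-6
        z = abs(pressure_atm - mean) / stdev
        if z > self.z_threshold:
            return "abrupt_fluctuation"  # candidate for local corrective action
        return "stable"

if __name__ == "__main__":
    monitor = EdgePressureMonitor()
    stream = [95.0, 95.2, 94.9, 95.1, 94.8, 95.0] * 5 + [88.0]  # sudden drop at the end
    for sample in stream:
        status = monitor.ingest(sample)
    print(status)  # -> "abrupt_fluctuation"
```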
However, technology is only one part of the equation. The participants agreed that large-scale implementation requires a mature infrastructure, regulatory framework, and computing power, in addition to algorithms. The main question is: Who takes responsibility for decisions made by neural networks?
Another discussion within the AI track, "Artificial Intelligence for Autonomous Adaptive Systems", was moderated by Alexander Menshchikov, who heads the Artificial Intelligence for Autonomous Systems Laboratory at the Skoltech AI Center. He invited Dmitry Devitt from the Innopolis University's Center for Unmanned Autonomous Systems; Vladimir Karapetyants from the Progress Microelectronic Research Institute; Andrei Korigodsky from Sverkh; Ivan Oseledets from AIRI and Skoltech; Dmitry Sizemov from Digital Robotics; and Maxim Tomskikh from Dronshab Group to participate. The discussion focused on three interrelated challenges: continuous model adaptation in a live industrial environment, bridging the semantic gap between theoretical RL approaches and real-world production, and distributing computations between onboard computers and the cloud in environments with unstable or nonexistent connectivity.
Menshchikov opened the discussion by articulating a key dilemma: "The scenario defines the architecture. When communication is unpredictable, critical functions must be performed on board; otherwise, a 200-millisecond delay could result in a collision." Dmitry Sizemov, CTO of Digital Robotics, backed this up with examples from quarries, where heavy dump trucks "learn" to recognize dangerous obstacles. In his view, model adaptation begins with manual annotation by drivers: after a few trips, the system refines the risk map and reduces the number of false alarms. Ivan Oseledets added that modern "multi-context" reinforcement learning, in which an agent is trained on many tasks and trajectories, can greatly improve transfer from simulation to real-world shop floors, but it requires billions of examples and high-speed simulators.
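To make the latency argument concrete, here is a hypothetical sketch of how a vehicle might split work between onboard and cloud execution. Only the 200-millisecond figure comes from the quote above; the functions, thresholds, and numbers are invented for illustration.

```python
# A hypothetical dispatcher illustrating the "scenario defines the architecture"
# point: safety-critical reactions always run on board, and heavier tasks go to
# the cloud only when the measured link latency fits the budget.

LATENCY_BUDGET_S = 0.200  # beyond this, a remote decision may arrive too late

def onboard_obstacle_stop(distance_m: float, speed_mps: float) -> bool:
    """Runs locally on the vehicle: stop if the obstacle is within braking distance."""
    braking_distance = speed_mps ** 2 / (2 * 3.0)  # assume ~3 m/s^2 deceleration
    return distance_m <= braking_distance + 2.0    # plus a small safety margin

def choose_executor(task_is_safety_critical: bool, measured_latency_s: float) -> str:
    """Decide where a task runs, given its criticality and current link quality."""
    if task_is_safety_critical or measured_latency_s > LATENCY_BUDGET_S:
        return "onboard"
    return "cloud"

if __name__ == "__main__":
    # Example: a dump truck detects an obstacle 10 m ahead while moving at 8 m/s.
    print("emergency stop:", onboard_obstacle_stop(distance_m=10.0, speed_mps=8.0))
    print("route replanning runs on:", choose_executor(False, measured_latency_s=0.120))
    print("obstacle avoidance runs on:", choose_executor(True, measured_latency_s=0.350))
```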
Vladimir Karapetyants proposed "nervous cooperation", which emphasizes independence from hardware imports and a focus on Russian integrated circuits and components for AI systems. Sergei Yashchenko, whose neuromorphic research was mentioned during the discussion, picked up the theme, arguing that a transition to brain-like chips could be a "shortcut" for Russia in global microelectronics, but only if resources are channeled into targeted programs. Maxim Tomskikh, who works with fleets of drones and ground robots, presented a hybrid approach in which a low-cost robot performs basic functions locally while complex coordination and model training take place in an edge cloud. This approach makes it possible to update the skills of the entire fleet without increasing the cost of each piece of equipment.
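A minimal sketch of how such a hybrid scheme could work is shown below; all class names, methods, and numbers are invented for the example and do not describe Tomskikh's actual implementation.

```python
import random

# Illustrative sketch of the hybrid scheme described above: each low-cost robot
# runs a small policy locally, while an edge cloud pools the fleet's experience,
# retrains, and distributes a new model version to every machine.

class EdgeCloud:
    """Aggregates fleet experience and serves versioned model snapshots."""
    def __init__(self):
        self.version = 1
        self.weights = [0.5, 0.5]

    def retrain(self, pooled_experience: list) -> None:
        # Stand-in for training on the pooled experience of the whole fleet.
        self.weights = [w + 0.01 * len(pooled_experience) for w in self.weights]
        self.version += 1

    def latest(self) -> tuple:
        return self.version, list(self.weights)

class Robot:
    """Performs basic functions locally; picks up fresh skills when connected."""
    def __init__(self, name: str, cloud: EdgeCloud):
        self.name = name
        self.version, self.weights = cloud.latest()
        self.experience = []

    def act_locally(self, observation: float) -> float:
        # Cheap onboard inference; works even with no connectivity at all.
        decision = self.weights[0] * observation + self.weights[1]
        self.experience.append((observation, decision))
        return decision

if __name__ == "__main__":
    cloud = EdgeCloud()
    fleet = [Robot(f"robot-{i}", cloud) for i in range(3)]
    for robot in fleet:                        # robots work autonomously
        for _ in range(5):
            robot.act_locally(random.random())
    pooled = [x for robot in fleet for x in robot.experience]
    cloud.retrain(pooled)                      # centralized update in the edge cloud
    for robot in fleet:                        # the whole fleet gets new skills at once
        robot.version, robot.weights = cloud.latest()
    print("fleet model version:", {robot.version for robot in fleet})
```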
The discussion concluded with an interactive "trust assessment" via QR-code voting. Most participants said they were ready to rely on autonomous systems with clearly defined boundaries of responsibility and transparent safety criteria. The session thus emphasized the importance of advancing adaptive learning algorithms, developing a Russian hardware platform, and formulating business tasks as specifically as possible to narrow the semantic gap between academia and industry. Only then will workable AI solutions move from laboratories into everyday industrial practice.