AIIA - Artificial Intelligence in Automation
The following diagram outlines our main research topics:
Intelligent Configuration and Planning
Today’s fast-changing markets require fast-changing products, and fast-changing products require adaptable production plants. For this purpose, the so-called Plug-and-Produce (PnP) paradigm has been developed in the field of machine and plant construction. Nowadays, however, the automation system is increasingly becoming the bottleneck for PnP: each plant construction or reconfiguration requires a high effort for developing or adapting the automation system. The main reason for this is the static, pre-planned structure of the automation system’s software. Modular, self-organizing software structures have been proposed as a solution; such approaches are being developed in several of our projects.
As shown in the figure below, this comprises the following steps:
Based on the requirements, the automation system is planned and configured. The goal of our research projects is to formalize human knowledge in order to support the user in this task. This is complemented by technologies such as model-based development, model analysis, and simulation. If a running plant and its automation system have to be modified, we aim at a plug-and-produce solution, i.e., manual engineering efforts should be minimized. To this end, semantic descriptions of the desired system and of the building components are developed first. In a second step, algorithms for a self-configuring system are investigated.
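As a simplified illustration, the self-configuration step can be thought of as matching semantic descriptions of the required process capabilities against the capabilities offered by the available plant modules. The following sketch is purely illustrative; all module and capability names are made up:

```python
# Sketch: match required process capabilities against the capabilities
# offered by plant modules. All names and data are hypothetical.

def match_modules(required, modules):
    """For each required capability, list all modules that offer it."""
    plan = {}
    for capability in required:
        plan[capability] = [name for name, offered in modules.items()
                            if capability in offered]
    return plan

# Hypothetical semantic descriptions of a desired system and its modules.
required = ["drill", "transport", "inspect"]
modules = {
    "DrillStation1": {"drill"},
    "Conveyor2":     {"transport"},
    "CameraCell3":   {"inspect", "transport"},
}

plan = match_modules(required, modules)
# plan["transport"] lists both "Conveyor2" and "CameraCell3"
```

A real PnP system would of course match against richer semantic models (interfaces, parameters, constraints) rather than flat capability sets, but the principle of deriving a configuration from descriptions instead of manual engineering is the same.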
Intelligent Monitoring and Diagnosis
Our research topic here is the application of data mining algorithms to the analysis and improvement of technical systems such as automation systems, production plants, and embedded software systems. Examples are the monitoring of complex production plants, the diagnosis of network communication, and the development of easily redesignable modular automation systems.
As shown in the figure below, most of our solutions comprise the following steps:
Step 1: Data Integration
In most systems, important information is spread throughout the system, i.e., data must be integrated first to allow for the application of data analysis and data mining algorithms. For this, we apply and develop real-time middleware and data repositories. Data integration normally comprises both horizontal integration (e.g., middleware for Profinet or the AUTOSAR RTE) and vertical integration (e.g., OPC (UA), web services). In all cases, the data integration should not rely on manual implementation efforts, i.e., it must be transparent to the function developer.
To integrate data from different sources, all data must be interpreted in relation to their function within the overall system. For this, we use system models such as AutomationML, AUTOSAR, or IEC 61131.
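As a minimal sketch of the integration idea (signal names, sources, and values are all hypothetical), vertical integration can be pictured as merging time-stamped records from several sources into one repository keyed by the signal's meaning in the overall system:

```python
# Sketch of a data repository that integrates time-stamped values from
# several sources (e.g. a fieldbus and an OPC UA server) into one view.
# All signal names and records are hypothetical.

from collections import defaultdict

class DataRepository:
    def __init__(self):
        # signal name -> list of (timestamp, value) pairs
        self._series = defaultdict(list)

    def ingest(self, source, records):
        """Store records from one source; each record is (t, signal, value)."""
        for t, signal, value in records:
            self._series[signal].append((t, value))

    def series(self, signal):
        """Return the integrated, time-ordered series for one signal."""
        return sorted(self._series[signal])

repo = DataRepository()
repo.ingest("fieldbus", [(0.0, "motor.speed", 1480), (1.0, "motor.speed", 1492)])
repo.ingest("opcua",    [(0.5, "motor.speed", 1485)])
# repo.series("motor.speed") is ordered by time across both sources
```

In practice, the mapping from source-specific identifiers to system-level signal names is exactly where models such as AutomationML or AUTOSAR come in, so that the integration needs no manual implementation per signal.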
Step 2: System Analysis
Once the status of the overall system is known, analysis questions can be answered, e.g.:
- Is the system behaving normally?
- Is the timing of the system correct?
- Which errors have occurred in the systems?
- Which system components are erroneous and must therefore be replaced?
- Has the behavior of system components degraded, i.e., is maintenance of some components advisable?
- Can the performance of the system be improved, e.g. by re-designing the system or by re-scheduling jobs?
For this, system simulations and algorithms from the field of artificial intelligence are used: to detect abnormal system behavior, it is often necessary to compare the current system behavior to a model of the normal behavior. This so-called anomaly detection therefore relies on modeling and simulating the normal system behavior, for which we use modeling approaches such as Modelica or Simulink. Based on the discovered anomalies, error causes are then often identified using methods such as model-based diagnosis.
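The core of this comparison can be sketched very simply: the model of normal behavior produces an expected signal, and observations whose residual against that signal exceeds a threshold are flagged as anomalous. The data, model output, and threshold below are made up for illustration:

```python
# Sketch of residual-based anomaly detection: compare observed values
# to the output of a model of normal behavior and flag large residuals.
# All values and the threshold are hypothetical.

def detect_anomalies(observed, simulated, threshold):
    """Return indices where |observed - simulated| exceeds the threshold."""
    return [i for i, (o, s) in enumerate(zip(observed, simulated))
            if abs(o - s) > threshold]

simulated = [20.0, 20.5, 21.0, 21.5, 22.0]   # e.g. from a Modelica/Simulink model
observed  = [20.1, 20.4, 24.0, 21.6, 22.1]   # measured plant signal
anomalies = detect_anomalies(observed, simulated, threshold=1.0)
# index 2 deviates by 3.0 and is reported as anomalous
```

Real systems replace the fixed threshold with statistically derived bounds and feed the detected anomalies into diagnosis methods to localize the error cause.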
Currently, such approaches are being improved by learning the normal behavior: for complex systems, a manual definition of the normal system behavior is often not possible. To overcome this modeling bottleneck, we apply model learning algorithms such as automata learning, support vector machines, statistical learning, and clustering.
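A very reduced form of the automata-learning idea can be sketched as follows: record which state transitions occur in event logs of normal operation, and afterwards flag any log containing a transition that was never observed during training. The event names and logs are invented for illustration:

```python
# Sketch of a very simple form of automata learning: collect the event
# transitions observed in normal logs, then treat unseen transitions as
# anomalous. Event names and logs are hypothetical.

def learn_transitions(logs):
    """Build the set of (event, next_event) pairs seen in normal logs."""
    transitions = set()
    for log in logs:
        transitions.update(zip(log, log[1:]))
    return transitions

def is_normal(log, transitions):
    """A log is normal if every consecutive event pair was seen before."""
    return all(pair in transitions for pair in zip(log, log[1:]))

normal_logs = [
    ["start", "fill", "heat", "drain", "stop"],
    ["start", "fill", "drain", "stop"],
]
model = learn_transitions(normal_logs)
is_normal(["start", "fill", "heat", "drain", "stop"], model)   # True
is_normal(["start", "heat", "fill", "drain", "stop"], model)   # False: "start"->"heat" was never observed
```

Full automata-learning algorithms additionally infer states, timing, and probabilities, but the principle of replacing a manually built behavior model with one learned from observations is the same.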