The Analytics Engine is a product in the Utthunga ecosystem and is well integrated with other Utthunga products (data acquisition, visualization, etc.). At its core, the engine detects anomalies in process data in order to predict and prescribe. The goal is to enable the end customer to improve operational efficiency, increase asset uptime, and aid real-time decision making.
Big Data is the enabler for the analytics technology: it allows the collection and storage of the massive amounts of operational data generated in any process or discrete plant. The analytics engine inspects this data to churn out key predictive and prescriptive insights.
The Vector Quantization Clustering (VQC) and Local Subspace Classifier (LSC) algorithms perform machine learning on the raw process data. These algorithms inspect the differences between the actual data and the learned normal data, and the engine then evaluates whether the difference is within the allowed limits.
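The VQC and LSC algorithms themselves are proprietary, but the compare-against-learned-normal idea can be illustrated. A minimal sketch, assuming a plain k-means codebook and a hand-picked distance threshold (both are assumptions, not the engine's actual method):

```python
import numpy as np

def learn_codebook(normal_data, k=2, iters=50):
    """Learn k codewords (cluster centres) from normal operating data
    with a plain k-means loop -- a stand-in for the proprietary VQC."""
    # deterministic init: pick k evenly spaced samples from the history
    codebook = normal_data[np.linspace(0, len(normal_data) - 1, k).astype(int)]
    for _ in range(iters):
        # assign every sample to its nearest codeword
        dists = np.linalg.norm(normal_data[:, None] - codebook[None], axis=2)
        labels = dists.argmin(axis=1)
        # move each codeword to the mean of its assigned samples
        for j in range(k):
            if (labels == j).any():
                codebook[j] = normal_data[labels == j].mean(axis=0)
    return codebook

def anomaly_score(sample, codebook):
    """Distance from a live sample to the nearest learned 'normal' codeword."""
    return float(np.linalg.norm(codebook - sample, axis=1).min())

rng = np.random.default_rng(1)
# "normal" data: two operating regimes of a 2-parameter process
normal = np.vstack([rng.normal(0.0, 0.1, (100, 2)),
                    rng.normal(5.0, 0.1, (100, 2))])
codebook = learn_codebook(normal, k=2)
threshold = 1.0  # allowed limit; in practice tuned from the normal data
print(anomaly_score(np.array([0.05, 0.02]), codebook) < threshold)  # True
print(anomaly_score(np.array([2.5, 2.5]), codebook) > threshold)    # True
```

A sample near either learned regime scores low; one far from both regimes exceeds the allowed limit and is flagged.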
The analytics engine employs a core set of proprietary algorithms. Traditional algorithms such as Mahalanobis-Taguchi (MT) can only be applied when the data has a normal distribution; the proprietary algorithms are resistant to the effects of the data distribution. Since the algorithms are model-free, they can respond flexibly without requiring model construction or simulations for each device or change in operating status. The engine is designed to simplify cause analysis by outputting an ordered list of the parameters responsible for a detected status change.
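The ordered list of responsible parameters can be illustrated with a distribution-free ranking. A hypothetical sketch using robust median/MAD deviations (the engine's actual scoring is proprietary; the function and parameter names here are assumptions):

```python
import numpy as np

def rank_parameters(normal, sample, names):
    """Rank parameters by how far the current sample deviates from learned
    normal behaviour, using robust (distribution-free) statistics:
    median and median absolute deviation instead of mean/std."""
    med = np.median(normal, axis=0)
    mad = np.median(np.abs(normal - med), axis=0) + 1e-9  # avoid divide-by-0
    deviation = np.abs(sample - med) / mad
    order = np.argsort(deviation)[::-1]  # largest deviation first
    return [(names[i], float(deviation[i])) for i in order]

# four snapshots of normal operation: temperature, vibration, pressure
normal = np.array([[20.0, 3.0, 50.0],
                   [21.0, 3.1, 52.0],
                   [20.5, 2.9, 51.0],
                   [20.2, 3.0, 49.0]])
names = ["temperature", "vibration", "pressure"]
ranked = rank_parameters(normal, np.array([20.4, 7.5, 51.5]), names)
print(ranked[0][0])  # vibration tops the ordered list
```

Here the vibration reading is far outside its learned band, so it heads the list of parameters responsible for the status change.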
The engine has strong data acquisition capability from varied data sources and, if required, from other software systems as well. Utthunga's capabilities here ensure that legacy as well as non-intelligent ("dumb") devices don't get left out.
Historical data is used to create a powerful data model, and the analytics is based on streaming/real-time data. Proprietary and standard algorithms churn out the predictive and prescriptive analytics.
A simple user interface displays the generated insights. A SCIR table is generated, giving the Symptom, Cause, Impact and Remedy information in textual format.
Yes: if weather-related data is available, the analytics engine will process it when generating analytics. For example, if the outside temperature drops due to continuous rain over 2-3 days, the engine will take notice of that (by comparing with the normal outside-temperature data fed to the system).
The analytics engine uses about five (5) algorithms. These algorithms work together, i.e., the output is based on the combined results of all the algorithms. A few of these algorithms are proprietary and patented.
A minimum of three (3) months of data is required. The data should come from combinations of the various elements of the process; combinations are important to form the data model. For example, a non-working compressor will have an impact on the chiller. For a PoC (proof of concept), one (1) week of data (provided as an Excel file) can suffice, but later, in project mode, three (3) months of data will be required. The data should be sampled at least once per minute.
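The once-per-minute sampling requirement can be verified before data is submitted. A minimal sketch, assuming ISO-8601 timestamp strings (the input format is an assumption):

```python
from datetime import datetime, timedelta

def meets_frequency(timestamps, max_gap=timedelta(minutes=1)):
    """True if no two consecutive samples are more than max_gap apart."""
    ts = [datetime.fromisoformat(t) for t in timestamps]
    return all(b - a <= max_gap for a, b in zip(ts, ts[1:]))

print(meets_frequency(["2024-01-01T00:00:00",
                       "2024-01-01T00:01:00",
                       "2024-01-01T00:02:00"]))  # True: one-minute cadence
print(meets_frequency(["2024-01-01T00:00:00",
                       "2024-01-01T00:05:00"]))  # False: five-minute gap
```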
Yes, the prescription is in readable/textual format. A SCIR table is generated which gives the Symptom, Cause, Impact and Remedy information. For example:
Symptom: The temperature is decreasing at a high rate.
Cause: Malfunctioning of the compressor.
Impact: The temperature will fall drastically in five minutes.
Remedy: The compressor belt needs to be replaced.
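A SCIR row maps naturally onto a simple record. A hypothetical sketch (the field names are assumptions, not the engine's actual schema):

```python
from dataclasses import dataclass

@dataclass
class SCIR:
    """One row of the Symptom/Cause/Impact/Remedy table (illustrative)."""
    symptom: str
    cause: str
    impact: str
    remedy: str

row = SCIR(
    symptom="The temperature is decreasing at a high rate",
    cause="Malfunctioning of the compressor",
    impact="The temperature will fall drastically in five minutes",
    remedy="The compressor belt needs to be replaced",
)
print(f"Symptom: {row.symptom}\nRemedy: {row.remedy}")
```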
A new data point can be added to the system at any time. This only helps make the data model better (a positive effect) and has no adverse impact on the analytics engine. The engine can process 35,000 requests (sensors) per second.
The advance warning depends on the strength of the data model, which in turn depends on the quality/quantity of the data points, well-configured/defined correlations between parameters, etc. The accuracy of the prediction could initially be around 60% and, with fine-tuning, go up to 90%.
The analytics engine takes into account all maintenance that has been done. For example, if a bearing was replaced six (6) months ago due to increased vibration, the engine will take note of the new “normal” vibration after the bearing replacement and adjust its data model. It will also compare this new “normal” vibration with the old “normal” (and perhaps warn if the new bearing has actually degraded performance).
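Re-baselining around a maintenance event can be sketched as follows, assuming a simple mean over pre- and post-maintenance windows (the engine's actual model update is proprietary; the names here are illustrative):

```python
import numpy as np

def rebaseline(readings, maintenance_idx):
    """Split a sensor history at a maintenance event and learn a new
    'normal' from the post-maintenance window (illustrative sketch)."""
    old = np.asarray(readings[:maintenance_idx])
    new = np.asarray(readings[maintenance_idx:])
    old_normal, new_normal = float(old.mean()), float(new.mean())
    degraded = new_normal > old_normal  # e.g. higher vibration is worse
    return old_normal, new_normal, degraded

# vibration (mm/s): drifting upward before the bearing swap, lower after
history = [2.0, 2.1, 2.4, 2.9, 3.5, 1.2, 1.3, 1.2, 1.3, 1.2]
old_n, new_n, worse = rebaseline(history, maintenance_idx=5)
print(worse)  # False: the new bearing's "normal" is better than the old
```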
Typical requirements: a quad-core i5/i7 CPU and 16/32 GB RAM; storage depends on the amount of data to be saved. Since the analytics engine works on real-time/streaming data, the storage requirement is generally for a maximum of 1-2 weeks. If the user desires to store more data, the required capacity depends on the data polling frequency, storage duration and number of data points. If these details are available to us, we can suggest the required storage size.
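Given the polling frequency, retention duration and number of data points, the storage estimate is simple arithmetic. A back-of-the-envelope sketch (the 16 bytes per sample is an assumption, not a product figure):

```python
def storage_gb(points, samples_per_minute, days, bytes_per_sample=16):
    """Rough storage estimate: data points x polling frequency x duration.
    bytes_per_sample (timestamp + value + overhead) is an assumption."""
    samples = points * samples_per_minute * 60 * 24 * days
    return samples * bytes_per_sample / 1e9

# e.g. 500 data points sampled once a minute, retained for 14 days
print(round(storage_gb(500, 1, 14), 2))  # ~0.16 GB
```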
Yes, it can be. As long as a proper API exists, integration with any 3rd-party system is possible. Recently we integrated with an inventory management system: the analytics engine predicted that a bearing could break in six (6) months, and accordingly a ticket to purchase the bearing was raised from inside the inventory management system.
The accuracy depends on the strength of the data model, which in turn depends on the quality/quantity of the data points, well-configured/defined correlations between parameters, etc. The accuracy of the prediction could initially be around 60% and, with fine-tuning, go up to 90%.
The actual installation is about a day's effort. If you mean the time required to get the system to a point where meaningful analytics can be churned out, the duration is about 2-3 months. The following steps are important and require time:
- Understanding the process through close discussions with the customer
- Getting the data points list (data to be acquired) right: the data should come from combinations of the various elements and operating conditions of the process. Combinations are important to form the data model; for example, a non-working compressor will have an impact on the chiller.
- Fine tuning.
Yes, data from log-books, etc. can be entered into the system. Analytics/predictions are derived from real-time/streaming data; data models are based on historical data.
Key benefits of the analytics engine are:
- Reduce maintenance cost
- Reduce unplanned downtime and production loss
- Increase machine life
- Reduce reliance on expensive process specialists/engineers
- Get prescriptive input to correct the process.