How open source software is revolutionising Industrial IT
Open source tech is rapidly changing the way industrial companies think and strategise. What does this shift toward open source mean for them?
Frederik Van Leeckwyck
Companies in the process industry often tell us they’re already sitting on a pile of data. But as we dig deeper, we often find that the data is spread across different, isolated systems: on paper, in Excel files, in Access databases, and so on.
As a result, the process data only proves its value in the case of a customer complaint or an emergency, when someone digs into all these data sources to find out what went wrong and, hopefully, identify the root cause.
A data historian’s primary role is to store time series data from field devices such as sensors and valves. Once stored, process engineers can retrieve that data in a trending tool to analyse process performance.
This does not differ at all from how process historians were used decades ago, when these systems first appeared in the industry.
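To make that store-and-retrieve pattern concrete, here is a minimal sketch in plain Python; the tag name TT-101.PV and the sample values are made up for illustration, and a real historian of course persists this to a time series database instead of a list in memory.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# A historian stores one row per tag per timestamp: which signal,
# when it was sampled, and the value read from the field device.
@dataclass
class Sample:
    tag: str            # hypothetical tag name of a temperature transmitter
    timestamp: datetime
    value: float

# A tiny in-memory "historian": append samples as they arrive ...
archive: list[Sample] = []

start = datetime(2023, 1, 1, 12, 0)
for minute in range(60):
    archive.append(Sample("TT-101.PV", start + timedelta(minutes=minute), 80.0 + minute * 0.1))

# ... and let a process engineer pull back a time range for trending.
def query(tag: str, begin: datetime, end: datetime) -> list[Sample]:
    return [s for s in archive if s.tag == tag and begin <= s.timestamp < end]

trend = query("TT-101.PV", start + timedelta(minutes=15), start + timedelta(minutes=30))
print(f"{len(trend)} samples, first value {trend[0].value:.1f}")
```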
From our experience, this is where most industrial companies’ data journey begins: process engineers having a single source of truth for process data that is easily retrievable when they need to find the root cause of a problem. Like an insurance contract, really.
Our experience shows that some organisations manage to create new value from the data they have collected, often in ways they could not envisage at first. People with different responsibilities within the organisation become curious and start asking more questions. As a result, they start expecting more from the process data they collect.
Even though this transformation happens gradually, it can be sped up drastically with the help of a catalyst: making the data available to everyone in the organisation, rather than just the process engineers or the IT/OT people who know what a tag is and at which memory location to grab it in the PLC.
Making industrial data available to anyone in the organisation has become trivial with modern, open source IT technologies such as the visualisation tool Grafana and the time series database InfluxDB.
These tools are intuitive enough that anyone can get insights out of them with hardly any training. And this can be done on the live historian data, not some month-old backup in an Excel file. The performance is there, and we have many cases to prove it.
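To give an idea of what that looks like in practice, the sketch below pulls an hour of live data from InfluxDB with its open source Python client, using the same kind of Flux query a Grafana panel would run. The connection details, bucket and tag names are assumptions, not a prescribed setup.

```python
from influxdb_client import InfluxDBClient

# Connection details and the bucket/measurement names below are hypothetical;
# adjust them to your own InfluxDB instance.
client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")

# The kind of Flux query a Grafana trend panel runs: the last hour of one tag,
# downsampled to one-minute averages.
flux = '''
from(bucket: "historian")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "process" and r.tag == "TT-101.PV")
  |> aggregateWindow(every: 1m, fn: mean)
'''

for table in client.query_api().query(flux):
    for record in table.records:
        print(record.get_time(), record.get_value())
```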
Making the data available to everyone is sometimes met with resistance when we talk to managers. Some fear that people will ask too many questions; more recently, one company feared installing a ‘Big Brother’ culture because anyone would be able to view process data from anywhere.
This is absurd. In my experience, the opposite is true. Overall, people care about the companies they work for and will use the process data to improve things and make their own and their colleagues’ jobs easier.
A great example is the biomass plant A&S Energie, where all operators have access to the process data through their personal Grafana login. Some operators have created dashboards that went beyond anything we imagined. What about a dashboard that helps you start the steam turbine of the power plant? No problem! This operator’s dashboard has the turbine startup flowchart on the left and all the corresponding live process values from the historian on the right. 🧐
Industrial companies that manage to see the historian not just as an insurance policy but as a tool to enable people and the organisation as a whole are the ones that really reap the benefits of collecting and visualising process data. Why? People start making data-driven decisions and embedding this way of working in the organisation.
We have talked with production managers in the process industry who admit that much of the knowledge sits with the operators (“that’s how they have always done it”), and thus that tuning the process is sometimes more art and experience than science and facts.
By enabling people to collect and visualise any industrial data, the historian gradually becomes an asset in addition to an insurance policy. And like any great asset, it increases in value over time as more people and roles embrace it.
Another great example is Ekopak, a manufacturer of water treatment installations for the chemical and food industries.
Ekopak started using Factry Historian in 2019, initially to enable their process engineers to look back at the data from their water treatment installations in the field. Because the process and equipment data is so easily accessible and neatly visualised in Grafana, more people got access and became curious about it.
Because the company bears a large operational responsibility and is expanding rapidly, planning preventive maintenance for its installations has become crucial. With the equipment data at hand, the maintenance engineers can plan their capacity rather than run the risk of a demand peak when several installations suddenly need emergency maintenance.
The same company, Ekopak, managed to achieve the nirvana of Industry 4.0 by transforming its business model from selling and operating water treatment units to “Water as a Service”.
According to HBR, business model innovation is about delivering existing products, produced with existing technologies, to existing markets. And, they posit, because the underlying mechanics are often invisible, it is often hard to copy.
In Ekopak’s new business model, the company invoices the customer for the volume of treated process water rather than selling equipment. To do so, it needs reliable data collection to calculate the volume of water delivered. It can even take this further by integrating the historian data with the ERP system to allow for automatic monthly invoicing.
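As a rough sketch of that volume-to-invoice step, the snippet below turns two hypothetical flow-totaliser readings from the historian into a monthly invoice line; the figures, the price per cubic metre and the ERP hand-off are all illustrative assumptions.

```python
from datetime import datetime

# Hypothetical month-start and month-end readings of a flow totaliser (m³),
# as they would be pulled from the historian for one installation.
readings = {
    "start": (datetime(2023, 4, 1), 12_450.0),
    "end":   (datetime(2023, 5, 1), 15_730.0),
}

PRICE_PER_M3 = 2.75  # assumed contract price, EUR per m³ of treated water

volume_m3 = readings["end"][1] - readings["start"][1]
invoice_amount = volume_m3 * PRICE_PER_M3

# This line item is what would be handed over to the ERP system each month.
print(f"Treated water delivered: {volume_m3:.1f} m³ -> invoice EUR {invoice_amount:.2f}")
```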
This is a great example of business model innovation in the process industry: the product is identical (clean water), the technologies exist (the historian) and the market is the same. Ekopak’s position in that market will likely improve because the new business model keeps non-core assets off their clients’ balance sheets.
Finally, this new Water-as-a-Service business model is not trivial to copy because of all the pieces involved: process data, financing, streamlining maintenance, monitoring operations, etc. In practice, we see that it takes time for an organisation to shift towards properly embracing the potential of process data beyond its process engineering tasks. It is a bit like a flywheel: hard to spin up at first, but yielding great momentum for the organisation once it’s spinning.
Start now. Identify potential improvements that go beyond the data-insurance level, but don’t take it too far in the beginning. Start small, design your data architecture and get the naming right. And iterate with people across the organisation to work out what strategic value can be extracted from the data.