Jeroen Lavens
As a Data Engineer at Factry, I often face the challenge of guiding industrial companies along their Overall Equipment Effectiveness (OEE) journey. These projects have provided me with some interesting experiences and valuable insights, and I’m eager to give back to the industry by sharing what I’ve learned so far.
So expect a short introduction to OEE, a list of common pitfalls and a shortlist of best practices to set up, maintain and improve your OEE tracking.
Although the term Overall Equipment Effectiveness (OEE) has been around for decades, it is a hot topic in the manufacturing industry today, making this one of the most sought-after use cases to benefit from. For good reason: the relatively low effort of an implementation is outweighed tremendously by the potential gains (improved competitiveness, lower waste, higher quality) in both the short and longer run.
Whether an OEE implementation is lingering around, ready for a revival, or none exists today: this article about pitfalls in your OEE, including a shortlist of steps to take, is for you.
Overall Equipment Effectiveness is a mouthful of words to describe what is, in reality, a set of three highly tangible concepts in the manufacturing industry: availability, performance and quality.
Below, these three concepts are illustrated through the dialogue of a fictional operator (let's call him Joe) who runs an extruder in a semi-automatic production line during a 10-hour shift:
“Joe, the extruder on extrusion line 2 has been down unexpectedly for 1h!”
This means that the extruder (an equipment) has stopped unplanned, causing the full extrusion line to be idle.
Given that the maintenance team magically fixed the extruder the moment Joe got the message, the full production line was available for 9h of a shift planned to run for 10h; in other words, an availability of 90% (= 9h/10h).
Note: while the extruder is down, there is no new material to manufacture units with, making the extruder the bottleneck of the production line.
It turned out that a small mechanical failure caused the stop; this is registered as the downtime reason.
“Joe, why is the extrusion line only running at half speed?”
This means that the extrusion line is meant to run at the speed indicated in the current recipe, while it is only operating at half that speed right now.
Joe was only informed of the speed mismatch at the end of the shift, so the extrusion line ran at half its full speed for the entire shift; in other words, a performance of 50% (= actual speed / full speed of the recipe).
It turned out that Joe had taken the speed setpoint from the wrong recipe in the HMI, which runs at half the speed of the proper recipe. A real-time dashboard reporting live performance to Joe could have avoided this scenario.
“Joe, why are there 100 units cut at too short a length after leaving the extruder?”
This means that the extrusion line produced a batch of defective units, cut to a length that is too short according to the current recipe, after having manufactured multiple good units that reached the packaging machine.
Given that 100 defective units with too short a length were produced, compared to 300 units of good quality packaged over the entire shift, there was a quality of 75% (= 300u / (300u + 100u)).
It turned out that the sensor placed right after the extruder, which measures the distance between cuts, had shifted out of position and was registering inaccurate length values, causing the cut to be carried out in the wrong place.
To summarize, multiplying an availability of 90% by a performance of 50% and a quality of 75% yields an OEE of 33.75%.
This number might not mean anything to you, and neither does it to Joe. The bottom line is that this number lets you keep track of OEE and acts as a baseline for comparison. It gives people the right tools to continuously implement, analyze, share and report the OEE parameters in real time instead of after the fact, unlocking its value by learning together how to improve production.
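The calculation above can be sketched in a few lines of Python, using the figures from Joe's shift:

```python
# A minimal sketch of the OEE calculation from Joe's shift,
# using the figures from the dialogue above.

def oee(availability: float, performance: float, quality: float) -> float:
    """OEE is simply the product of its three factors."""
    return availability * performance * quality

# Availability: 9h of actual runtime in a planned 10h shift.
availability = 9 / 10            # 0.90
# Performance: half the recipe speed for the whole shift.
performance = 0.5 / 1.0          # 0.50
# Quality: 300 good units out of 400 produced.
quality = 300 / (300 + 100)      # 0.75

print(f"OEE = {oee(availability, performance, quality):.2%}")  # OEE = 33.75%
```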
When OEE is set up for one equipment, it enables you to evaluate OEE over time, or even per recipe, shift, team, … (essentially all fields in the dataset). Leaping forward to an OEE setup for multiple equipment in several plants, it can be compared over (similar) production lines, plants, … You name it!
It’s all about numbers, right?
After understanding how overall equipment effectiveness is used in practice on the shopfloor, it’s time to uncover a handful of pitfalls commonly encountered when implementing OEE.
Before rushing into the implementation, you and your team first have to agree on a solid definition of the availability, performance and quality calculations, mapped specifically to your manufacturing process and its bottleneck(s). You don't want different definitions of OEE being used throughout the organization.
A ton of effort is typically needed to exchange this vital information between automation, process, plant improvement and IT teams. If this is overlooked, you will run behind schedule from the start.
Likewise, if you rely on a single person or team to define and calculate OEE, and to build dashboards on top, sooner or later this will cause trouble. More than one person or team needs access to the dataset, and they need to actively work together on the continuous implementation of OEE.
Ultimately, this means that the tool(s) used to collect and transform the dataset and to report OEE need to accommodate a continuous and collaborative way of working. In the end, it’s as much managing change to have all people on board in the organization as setting up OEE itself.
In a manufacturing environment, data sources are generally dispersed over different systems, networks and/or physical locations, and are therefore aligned differently in time. Think of process data extracted from PLCs, while orders come from the ERP, quality assurance data is more often than not generated offline (e.g. in the lab), not to mention manual entries registered all over the place.
Plenty of people have the skills to fetch the required data from different data sources on the fly (e.g. with an ad hoc query or script), without considering the creation of a single-source-of-truth dataset. That is where the culprit lies: not in fetching the required data, but in having different people consume a consistent dataset in different reports on different systems. In practice, fetching data from different sources on the fly turns into a time-consuming job, resulting in various hard-to-maintain datasets that are unreliable for OEE.
Instead, to bring these data sources together, a platform is required that manages the collection and transformation (filtering, mapping, joining, time alignment, ...) into one single-source-of-truth dataset that is readily consumable. Once such a dataset is in place, establishing a continuous improvement mechanism becomes a lot less painful: the dataset is easier to maintain, all people involved get transparency, and every report is based on the same accurate data and the same solid OEE definition.
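To make the time-alignment step concrete, here is a minimal sketch using pandas, joining two dispersed sources (continuous process data and offline lab results) into one dataset. All column names and values are illustrative assumptions, not Factry APIs:

```python
# A hedged sketch: align offline lab samples with historian process data.
# Column names ("line_speed", "unit_length_ok", ...) are made up for
# illustration.
import pandas as pd

process = pd.DataFrame({
    "timestamp": pd.to_datetime(["2024-01-01 08:00", "2024-01-01 08:15",
                                 "2024-01-01 08:30"]),
    "line_speed": [1.0, 0.5, 0.5],
})

lab = pd.DataFrame({
    "sampled_at": pd.to_datetime(["2024-01-01 08:10", "2024-01-01 08:40"]),
    "unit_length_ok": [True, False],
})

# merge_asof matches each lab sample to the latest process row at or
# before the sample time, yielding one time-aligned dataset.
dataset = pd.merge_asof(lab.sort_values("sampled_at"),
                        process.sort_values("timestamp"),
                        left_on="sampled_at", right_on="timestamp")
print(dataset)
```

In a real setup, a platform performs this alignment continuously and centrally, rather than in ad hoc scripts per report.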
On top of this, the underlying data must accurately reflect the state of the manufacturing process at all times.
Say you manage to collaborate across teams to exchange vital information and to implement OEE on a single-source-of-truth dataset; chances are that real-time data collection (at second-level resolution) is still lacking for one or more data sources.
This severely hampers process operators and engineers in making appropriate decisions, based on real-time data and dashboards, to promptly steer the manufacturing process in the right direction when it is actually needed: now!
OEE boils down to one final percentage, which is the multiplication of the availability, performance and quality percentages. To derive each of those percentages, multiple calculations are needed.
What is regularly overlooked is that OEE is not calculated at one point in time (think of a photograph); rather, it manifests itself only over a time period (think of a film). For example, when determining OEE for the extruder that Joe operates, a time period of 10h (the span of a shift) acts as the basis for the OEE calculations, rather than calculating them at a specific point in time. Take into account that these calculations fully rely on the accuracy of the input data.
As the saying goes: "Garbage in, garbage out!" The input measurements used in the availability, performance and quality calculations more often than not need a filter to guarantee that only plausible values are used. This filtering is best performed by a platform, executed prior to and separately from the OEE parameter calculations themselves, both in real time and on historical data.
Combining several Excel files with handwritten scraps of paper to calculate your OEE will naturally lead to inaccurate and unreliable results.
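As a minimal sketch of such a plausibility filter, applied before any OEE calculation (the limits below are illustrative; real bounds come from the recipe and the physics of the line):

```python
# A hedged sketch of plausibility filtering on raw sensor input.
# Bounds are illustrative assumptions, not real recipe limits.

def filter_plausible(values, low, high):
    """Drop sensor readings outside the physically plausible range."""
    return [v for v in values if low <= v <= high]

# Raw length measurements (mm), including obvious sensor glitches.
raw_lengths = [250.1, 249.8, -3.0, 251.2, 9999.0, 250.4]
clean = filter_plausible(raw_lengths, low=200.0, high=300.0)
print(clean)  # [250.1, 249.8, 251.2, 250.4]
```

Keeping this step separate from the OEE calculations means the same filtered dataset feeds every downstream report.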
If you made it this far, the bulk of the effort is completed, leaving you to report on the numbers that make up the OEE.
It is essential to grasp that different personas (operator, process engineer, plant manager) want to consume the OEE dataset in different ways. For example, people want to calculate OEE over different time periods, such as an hour, a shift or a day, and to filter the dataset on the existing values of each possible field (e.g. batch X/Y/Z, articles A/B/C, ...).
For reporting OEE, it is fundamental that a shared dashboarding environment is in place, where the single-source-of-truth dataset is accessible to and consumed by the different personas. That way, the dataset, the reports and/or their filters can be re-used seamlessly.
Lastly, choose the time basis of the single-source-of-truth dataset wisely, as it is the minimum time period available for reporting OEE. For example, reporting OEE per 15 minutes or per 10 hours can both be achieved from a dataset with a 15-minute time basis, accompanied by simple aggregations in the dashboard.
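A minimal sketch of that aggregation, assuming a dataset with a 15-minute time basis and illustrative counter columns: longer reporting periods are derived by summing the underlying counters, never by averaging percentages.

```python
# A hedged sketch: roll a 15-minute-basis dataset up to a longer
# reporting period. Column names are illustrative assumptions.
import pandas as pd

idx = pd.date_range("2024-01-01 08:00", periods=4, freq="15min")
dataset = pd.DataFrame({
    "runtime_s": [900, 900, 0, 900],    # actual runtime per 15 min
    "planned_s": [900, 900, 900, 900],  # planned time per 15 min
    "good_units": [30, 28, 0, 32],
    "total_units": [30, 30, 0, 40],
}, index=idx)

# Aggregate to one hour; any period >= 15 min works the same way.
hourly = dataset.resample("1h").sum()
availability = hourly["runtime_s"] / hourly["planned_s"]
quality = hourly["good_units"] / hourly["total_units"]
print(availability.iloc[0], quality.iloc[0])
```

Summing counters first keeps the percentages correct for any reporting period; averaging 15-minute percentages would weight idle intervals incorrectly.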
Put in a ton of effort to exchange vital information between different teams to come up with one solid OEE definition that clearly outlines how to calculate availability, performance and quality for a production line. Do not rely on a single person or team for any of the steps.
Use a platform to collect and transform real-time (and historical) data from dispersed data sources on the shopfloor, such as our own Factry Historian.
Implement, maintain and continuously improve the OEE calculations together. Make sure to filter the input measurements before using them in the availability, performance and quality calculations. Choose the time basis of the single source of truth dataset wisely as the minimum time period used for reporting OEE. A good example is our OEE improvement project for Puratos Canada.
Seamlessly report OEE in a shared dashboarding environment where all personas consume the single source of truth dataset when it matters the most, using (a combination of) filters based on existing values of each possible field in the dataset. Some of the best options in the market right now are Grafana, Power BI and Tableau.