Last year the company I work for decided to move to another city. Not really far away, but too far for me to drive every single day. Since I like the job and the colleagues I work with, my wife and I decided to move to a small flat during the week and enjoy our house only on the weekend. So I decided to switch off the power (my lab!) and the heating while I am not there. That was the moment I learned that there is no way to remotely switch our heating system on or off. The old-fashioned feature for doing so is no longer supported (it required a real analogue phone line). So I cannot get any data from the power or gas meter in the house at all. And the installation is not even 15 years old, while most of our customers use even older technology concepts. That would be only my personal problem, but sadly last week I learned that it is the same for some of our customers, too.
Hundreds of thousands of systems in the field, but no data access
Many industrial systems still operate as data islands. For companies this means the following: if they want to build a nice service concept, they need to hack their own system. Instead of telling the main control unit to deliver the data you need in a defined format, you start to read it out directly from the control and even from the sensors. That means you add a dedicated computer unit that emulates the communication protocol used inside your machine and asks for the data you need. This is really not a good way to operate, because there are at least four risks:
There is additional internal traffic that was not planned for when the system was designed, and that is bad for real-time operation.
You need to know the internal setup of the system exactly. You need to know, e.g., that word 5 on bus 6 is the amount of beer in the container. This may differ from installation to installation, and it will certainly differ from generation to generation.
Because the communication gets more complicated, hardware costs also increase. Instead of $30 for a modem you pay $500 for a connectivity computer.
Software updates get more complicated due to cross-dependencies: you need to make sure that the connectivity computer still knows where to get the data.
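All four risks boil down to hard-coded knowledge living in the connectivity computer. A minimal Python sketch of what that looks like in practice (every address, name, scale factor and unit below is invented purely for illustration):

```python
# Hypothetical register map a connectivity computer must carry for ONE
# machine generation -- every entry is installation-specific knowledge.
REGISTER_MAP_GEN_A = {
    # (bus, word): (signal name, scale factor, unit)
    (6, 5): ("container_level", 0.1, "l"),
    (6, 7): ("motor_temperature", 0.5, "degC"),
}

def decode(bus: int, word: int, raw: int) -> tuple[str, float, str]:
    """Turn an anonymous raw bus value into a named, scaled reading."""
    name, scale, unit = REGISTER_MAP_GEN_A[(bus, word)]
    return name, raw * scale, unit

# The same raw value means something completely different on the next
# machine generation, because the addresses were reshuffled:
REGISTER_MAP_GEN_B = {
    (6, 5): ("motor_temperature", 0.5, "degC"),
}
```

Every new machine generation (or customized installation) forces an update of this table, and the connectivity computer breaks silently if the table and the machine ever disagree, which is exactly the cross-dependency risk described above.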
At that point you will look for solutions on all sides, but what you find in most cases is only the possibility to send your data to the cloud and view it on a dashboard. Of the more than 200 IoT suppliers I have counted so far, most focus on “big data”, “device provisioning” and “secure data transfer”. Surprisingly few focus on how to get at the data in the first place.
The data needs to be available in a way that makes it usable later on
Of course we at Kontron also offer this cloud connectivity and have even implemented ready-made communication to your Salesforce and SAP systems. But since we have been working on automation projects for many years, we also focus on ways to get the data out of your system and on supporting the protocols needed to do so.
One of the most intriguing things I have come across so far is that (for IT people like me it sounds really strange!) serial communication is quite common even in newly installed automation solutions. In office use cases, PC-to-modem communication was mainly realized via the RS-232 standard and the AT command set. Once the modems were gone, most users forgot about this. In automation and other industrial environments, the RS-485 standard is much more common: it needs fewer wires, covers a wider range and offers a bus-type architecture. The most standardized protocol on top of it is called Modbus RTU. This serial communication protocol mainly covers how to address a device and how to read or write a register. So if you have an analogue-to-digital converter, you can read its input.

Let's look at one practical example: say the feedback you get is 100. There is no way to find out whether this is the temperature of a cooled drink, your airspeed or the voltage of a battery. You simply have to know what it is and how to scale it. Assume it is the temperature of the heating: it could be 100 °C, 10.0 °C or whatever scaling the designer thought would be great: you will never be able to deduce this in your software.
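To make this concrete, here is a short Python sketch (slave address and register number are made up for illustration) that builds a standard Modbus RTU “Read Holding Registers” request, including the CRC-16 checksum the protocol requires, and then shows that the raw reply value alone tells you nothing about the physical quantity:

```python
def crc16_modbus(frame: bytes) -> bytes:
    """Modbus RTU CRC-16 (polynomial 0xA001), appended low byte first."""
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc.to_bytes(2, "little")

def read_holding_registers(slave: int, start: int, count: int) -> bytes:
    """Build a request frame for function code 0x03 (Read Holding Registers)."""
    pdu = bytes([slave, 0x03]) + start.to_bytes(2, "big") + count.to_bytes(2, "big")
    return pdu + crc16_modbus(pdu)

# Request one register at address 0 from slave 1:
frame = read_holding_registers(1, 0, 1)
print(frame.hex())  # 010300000001840a

# Suppose the reply carries the raw value 100. Is that 100 degC or 10.0 degC?
raw = 100
print(raw * 1.0)  # 100.0 if the register holds whole degrees
print(raw / 10)   # 10.0  if the designer chose tenths of a degree
```

The frame itself is fully standardized; the meaning and scaling of the register content are not, and that is exactly the gap you have to fill with machine-specific knowledge.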
The conclusion is that you have to know your machines
Guess what: most of our customers offer customization to their customers. They adapt the local installation to customer needs, and if you are lucky, the only documentation you will find at all is a printed one (on paper, in a real Leitz folder). So you might know your base machine, but how do you offer a unified way to cover the deviations?
OK, that is the past, and better solutions are within reach for the future. I see a big push for OPC-UA (https://opcfoundation.org/) in EMEA and for DDS (https://www.rti.com/products/dds/) in the US. But there is still much work to do. Let's go for it!
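What these information-model-based standards buy you, compared with a bare Modbus register, can be illustrated without any OPC-UA library at all: the value travels together with its name, data type and engineering unit. The toy class and node below are entirely made up for illustration and are not the real OPC-UA API, but they capture the idea:

```python
from dataclasses import dataclass

@dataclass
class Node:
    """Toy stand-in for an OPC-UA variable node: the value describes itself."""
    browse_name: str
    value: float
    data_type: str
    unit: str

# A bare Modbus register gives the consumer only this:
raw = 100

# An information model gives the consumer everything needed to interpret it:
heating = Node(browse_name="HeatingFlowTemperature",
               value=raw / 10,   # scaling applied once, at the source
               data_type="Float",
               unit="degC")

print(f"{heating.browse_name}: {heating.value} {heating.unit}")
```

With self-describing nodes like this, a client can browse an unknown machine and still interpret its data correctly, instead of depending on a printed register map in a Leitz folder.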
This time I would like to see all of us pushing in one of these two standardized directions instead of trying to protect our own market niche by inventing “unique” but proprietary solutions. Even if we end up with two solutions: both are well defined, and two defined protocols will be much better than 5,000 proprietary ones.
Customers should understand that even though we can help with many challenges on the way to a truly IoT-enabled device, the data still needs to be available in a way that makes it usable later on. Have you found solutions for this already?