How to read and store 600 addresses from an Allen Bradley PLC in 40 milliseconds using Python

Thomas Schwank
6 min read · Mar 29, 2021


Staying home more due to COVID restrictions gave me time to work on some projects I had always wanted to do. This weekend project was a simple solution for reading data from an Allen Bradley PLC using Python, with the goal of storing the data in a time series database (InfluxDB) for visualization with Grafana, and also publishing it via MQTT to any other software for real-time stream analytics and similar use cases on an edge device or in the cloud.

The test setup consists of an Allen Bradley PLC with simulated data for 200 assets, each asset having three tags, so 600 tags in total. Connected to the PLC is a Linux VM running Python 3.8, the InfluxDB time series database, an MQTT broker, and Grafana for visualization. The goal is to stay below 100 ms for reading the values, publishing them to MQTT, and storing them in InfluxDB.

Source code available on GitHub

All my sample code below can be found in my public GitHub repository: https://github.com/tschwankSF/allenbradley-mqtt-influx

The main driver for this project was that I couldn't find a readily available and fast solution with a small enough footprint to run on a Linux edge device, which was also easy to set up and configure.

Using pylogix to read data from Allen Bradley PLC

The core Python library used to communicate with the Allen Bradley PLC is pylogix, available at GitHub (https://github.com/dmroeder/pylogix).

Using this library, a single tag can be read quite simply and quickly with a handful of lines of code. In the example below we read the tag ‘ASSET[1].PARTCOUNT’:

# import AB PLC library "pylogix"
from pylogix import PLC
# create PLC object
ab = PLC()
# set IP address of PLC
ab.IPAddress = 'aaa.bbb.ccc.ddd'
# read one tag
t = ab.Read('ASSET[1].PARTCOUNT')
# print values
print('Tag Name: ', t.TagName, '\nTag Value: ', t.Value)
# close connection to PLC
ab.Close()

Running the code we get the value of the address from the PLC, in this case the part counter from asset 1:

Expand the code to read all 600 addresses — 400ms

The next step is to expand this sample with a loop that reads all 600 tags every second, and to check how fast we are, without sending the data to MQTT and InfluxDB yet. In the sample code ab-mqtt-loop.py the main changes are a new function that reads the 600 addresses from a text file, and an endless loop that reads them with a pause of 1 second between cycles.

The code for this step is available here: ab-mqtt-loop.py

The output below shows that reading the 600 addresses from the PLC takes about 400 milliseconds. Not bad, but also not where I want to be; still nearly a factor of ten too slow.
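The loop can be sketched roughly like this. The file name `tags.txt` and the one-address-per-line format are my assumptions, not necessarily how ab-mqtt-loop.py organizes it:

```python
import time


def load_tags(path):
    """Read tag addresses from a text file, one address per line, skipping blanks."""
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]


def read_loop(ip, tag_file, interval=1.0):
    """Read every tag once per cycle and report how long a cycle takes."""
    from pylogix import PLC  # imported here so load_tags() is usable without pylogix
    tags = load_tags(tag_file)
    with PLC(ip) as plc:
        while True:
            start = time.perf_counter()
            results = [plc.Read(tag) for tag in tags]  # one request per tag
            elapsed_ms = (time.perf_counter() - start) * 1000
            print(f'Read {len(results)} tags in {elapsed_ms:.0f} ms')
            time.sleep(interval)


# usage (hypothetical IP address and file name):
#   read_loop('aaa.bbb.ccc.ddd', 'tags.txt')
```

One `Read` call per tag is exactly why this version is slow: each call is a full request/response round trip to the PLC.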

Threading to help with speed — two times faster to 200ms

To make it faster we can use threading in Python. As a rule of thumb: if the bottleneck is the CPU, performance improves with multiprocessing; if the limitation is I/O while fetching data from external sources, threading is usually the better approach.

In the code ab-mqtt-multithread-part01.py the major changes are:

  • Defining how many threads we want to spin up
  • Splitting the addresses we want to read in chunks, for each thread one chunk
  • Starting the threads and, once all threads are done, merging the individual results from each thread using a Python queue.
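The chunk-split-and-merge flow above can be sketched like this. The worker takes a read function as a parameter, which in the real script would be a pylogix read call; the helper names are my own:

```python
import queue
import threading


def split_chunks(items, n):
    """Split a list into n roughly equal chunks, one per thread."""
    size = (len(items) + n - 1) // n
    return [items[i:i + size] for i in range(0, len(items), size)]


def worker(chunk, results, read_fn):
    """Read every tag in the chunk and put the result list on the shared queue."""
    results.put([read_fn(tag) for tag in chunk])


def read_threaded(tags, read_fn, n_threads=4):
    """Spin up one thread per chunk, wait for all, then merge the queue contents."""
    results = queue.Queue()
    threads = [
        threading.Thread(target=worker, args=(chunk, results, read_fn))
        for chunk in split_chunks(tags, n_threads)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    merged = []
    while not results.empty():
        merged.extend(results.get())
    return merged
```

`queue.Queue` is thread-safe, so the workers can push their result lists without extra locking, and the main thread merges them after `join()`.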

With this approach we can read the addresses about twice as fast: with 4 threads we are now at about 200 ms instead of the 400 ms without threading.

Reading from PLC in batches — final speed optimization to 40ms

To get down to about 40 ms we need one more major change. So far we have been reading each address individually, so each cycle creates 600 read requests to the PLC, and each read request is time consuming. The last optimization is to read the addresses in batches instead of with individual read requests.

To achieve this, we split the addresses into batches inside each thread and make one read request per batch. The flow in our example is:

  • We create 4 threads, so we split the 600 addresses into 4 chunks of 150 each.
  • Each thread takes its 150 addresses and splits them into batches of 50.
  • Each batch of 50 addresses is read in a single read request from the PLC.

So instead of 600 read requests, we are down to 600 / 50 = 12 read requests. This version also includes the logic to publish the read values to MQTT and store them in InfluxDB.
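pylogix accepts a list of tag names in a single Read call, which is what makes a batch request possible. A sketch of the per-thread batching, where `read_many` stands in for that pylogix call and the helper names are mine:

```python
def batch(items, size):
    """Yield successive batches of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]


def read_in_batches(chunk, read_many, batch_size=50):
    """Issue one read request per batch instead of one per tag."""
    results = []
    for b in batch(chunk, batch_size):
        # with pylogix this would be: results.extend(plc.Read(b))
        results.extend(read_many(b))
    return results
```

With 4 threads holding 150 addresses each and a batch size of 50, each thread issues 3 requests, for 12 requests per cycle in total.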

This final code can be found here: ab-mqtt-multithread.py

The screenshot below shows the final result: reading the 600 addresses in about 40 ms.

Data volume optimization — check if value changed since previous read

To reduce data volume, one more optimization is included in the example code below. We are reading the values quite fast, but not all of them change between read cycles; some assets produce parts slower, some faster. To make sure we only store data in InfluxDB when a part counter changes, we compare each value with the previous one, and only when it has changed do we issue a write request to InfluxDB. One open item in the code is that we write each value with an individual write call; here, too, a batch approach would speed things up.
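The change detection can be sketched as a small helper that keeps the last-seen value per tag; the function and variable names here are my own, not necessarily those in the repository:

```python
def changed_values(current, last):
    """Return only the tag/value pairs that differ from the previous cycle,
    updating the last-seen snapshot in place."""
    changed = {}
    for tag, value in current.items():
        if last.get(tag) != value:
            changed[tag] = value
            last[tag] = value
    return changed
```

On the first cycle every value counts as changed (there is no previous value), and from then on only the deltas come through.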

A similar approach is used for publishing the values to MQTT. In the example below we publish the values to two different topics:

  • One topic, “ab_all”, gets all values, whether or not they changed since the last read. For some streaming analytics jobs it is easier to receive all values in one message to run their algorithms.
  • For other consumers, like a time series database, it is better to receive values only when they change; for these use cases the code below publishes only the changed values to a topic called “ab_changed”.
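The two-topic publishing can be sketched like this, assuming a connected paho-mqtt client and a JSON payload format; both are assumptions on my part, and the repository may encode the messages differently:

```python
import json


def build_payloads(all_values, changed):
    """Serialize the full snapshot and the changed subset as JSON messages."""
    all_msg = json.dumps(all_values)
    changed_msg = json.dumps(changed) if changed else None
    return all_msg, changed_msg


def publish_cycle(client, all_values, changed):
    """Publish every value to 'ab_all' and only changed values to 'ab_changed'."""
    all_msg, changed_msg = build_payloads(all_values, changed)
    client.publish('ab_all', all_msg)              # full snapshot every cycle
    if changed_msg is not None:
        client.publish('ab_changed', changed_msg)  # deltas only, skipped when empty
```

Keeping the serialization in its own function makes it easy to swap JSON for another encoding without touching the publishing logic.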

Below is a screenshot of what it looks like when subscribing to the MQTT topic “ab_changed”, which includes only the values that changed since the previous read. As you can see, assets 3, 6, 8 to 11, and others are missing, because their values did not change.

All these changes are included in this code version: ab-mqtt-multithread.py.

InfluxDB and Grafana into the mix

The values are also stored in InfluxDB, and with Grafana we can easily visualize and check the data. Below are the plot of the parts counter for asset 113 and a screenshot showing multiple part counters from the test PLC.
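For the InfluxDB side, here is a sketch of building the write points, assuming the InfluxDB 1.x Python client; the measurement and field names are placeholders of mine:

```python
def to_points(values, measurement='plc'):
    """Convert a dict of tag address -> value into the point dicts expected by
    the influxdb 1.x client's write_points()."""
    return [
        {
            'measurement': measurement,
            'tags': {'address': tag},
            'fields': {'value': value},
        }
        for tag, value in values.items()
    ]


# usage with the 1.x client (hypothetical host and database name):
#   from influxdb import InfluxDBClient
#   client = InfluxDBClient(host='localhost', port=8086, database='plcdata')
#   client.write_points(to_points(changed))  # one call per cycle, not per value
```

Writing a whole cycle in one `write_points` call would also address the per-value write calls mentioned earlier as an open item.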

With this last step the weekend project is done; let's see what the next one will be.

If you have any questions or feedback, feel free to leave a comment.

Thank you!

Written by Thomas Schwank

After studying Cybernetics Engineering at the University of Stuttgart/Germany I worked in different areas around manufacturing and IT, Industry 4.0, IoT.
