Category Archives: Automation

We Need More Functional Programming in our Ladder Logic

Imagine a machine with some number of pumps. We have the logic for each pump in its own routine: Pump1, Pump2, etc.

Somewhere in our program we want to know if any of the pumps are running. You write a rung of logic like this:

[Figure: Any Pump Running Rung]

This machine gets changed a lot and the number of pumps changes frequently. Any time we add or remove a pump we need to remember to revisit this rung and modify it. By definition, this rung is separate from the Pump logic (it comes after it).

If we remove Pump2 and remove the tags or variables associated with Pump2, then hopefully the editor will be smart enough to tell us that we have a contact referencing a tag or variable that no longer exists.

But what if we’re adding a pump? When we add Pump4, what prompts us to revisit this rung? What if I’m not even familiar with this code because it’s been a long time, or I wasn’t the original programmer who wrote it? Maybe it’s not pumps, but steps, or missions, or so on. There could be hundreds. The fact is this is a very common problem that pops up often in ladder logic programming, and we just live with it. We shouldn’t. We should ask our PLC vendors for better tools.

Now, there are hacky ways to solve this problem to make sure that when I copy a pump routine that I don’t have to remember to go update another routine.

For one, I could put all the Pump Running bits in an array from 1 to a maximum number of pumps, and replace my pleasant little Any Pump Running rung with a FOR loop. That only works if my pumps are numbered sequentially, of course, and it’s really ugly and makes it difficult to look at the AnyPumpRunning coil and follow it back to see which pump is running. But I could do it. If I had no self-respect.

Another hacky solution is to use a Reset (unlatch) instruction on a coil at the beginning of the scan, before any of the pump logic… let’s call it “tempAnyPumpRunning”. Then in parallel with each Pump Running coil we could use a Set (latch) instruction on the tempAnyPumpRunning coil. Finally, after all the pump logic you could use the tempAnyPumpRunning coil to drive the real AnyPumpRunning coil. I mean, that would work. It’s less ugly than a FOR loop, but still a bit ugly, and it suffers from the same cross-referencing problem as the FOR loop. I’m embarrassed to say I’ve done this. It turns out I have no self-respect.
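Sketched in C# just to make the scan order explicit (tempAnyPumpRunning comes from the description above; the pump bits are made-up names), the hack amounts to this:

tempAnyPumpRunning = false;                    // Reset (unlatch) at the top of the scan

// ... Pump1 logic runs here ...
if (pump1Running) tempAnyPumpRunning = true;   // Set (latch) in parallel with the Pump1 Running coil
// ... Pump2 logic runs here ...
if (pump2Running) tempAnyPumpRunning = true;   // Set (latch) in parallel with the Pump2 Running coil

anyPumpRunning = tempAnyPumpRunning;           // after all the pump logic, drive the real coil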

Each time I did it, I cursed ladder logic for not being expressive enough.

I’d like to pause for a moment and talk about how this problem is solved in a programming language with any kind of functional programming features. I’ll use C# as an example, because I’m familiar with it. In C#, if you have a collection of objects, let’s say of type Pump, and each pump has a property called Running, then to find out if any of them are running you can do this:

bool anyPumpRunning = pumps.Any(p => p.Running);

The expression in the parentheses, p => p.Running, is a function with one input (a Pump p) that returns a boolean. The .Any(...) extension method evaluates this function on every object in the pumps collection, and if any one of them returns true, then .Any(...) returns true. Even if we change the number of pumps, this line of code never has to change.

Similarly, C# has many such useful features:

bool allPumpsRunning = pumps.All(p => p.Running);

int howManyPumpsRunning = pumps.Count(p => p.Running);

double totalLitersPumped = pumps.Sum(p => p.LitersPumped);

double maxPumpRuntime = pumps.Max(p => p.Runtime);

In the PLC world, we presumably created a UDT or a structure called Pump, and we created variables or tags of type Pump called Pump1, Pump2, etc. The PLC knows where all these variables are located in memory (or it could). It should be possible to create new PLC instructions that act on all variables of a given type, like for Any Pump Running:

[Figure: Imaginary PLC Instruction]

You can get close to this now. You could create an AOI (or custom function block) but it would have to take an array of pumps as a parameter, and use a FOR loop inside to evaluate it.
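Roughly, that AOI would have to do something like this inside (sketched here in C# since that’s our example language; the Pump type and Running property mirror the earlier snippet, and the array parameter is hypothetical):

bool AnyPumpRunning(Pump[] pumps)
{
    // the FOR loop the AOI or custom function block needs internally
    for (int i = 0; i < pumps.Length; i++)
    {
        if (pumps[i].Running)
            return true;
    }
    return false;
}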

The AOI solution still has a cross-referencing problem. It’s not obvious which pumps are on. But the new instruction I described above could have a feature where you click a button and it pops up a cross reference of all the Pump Running bits, sorted by which ones are on, and double-clicking could take you directly to where that coil is set. It would be EPIC!

It’s just a thought. Take it or leave it.

The PLC is Hallucinating

If you’ve been anywhere around YouTube or any TED talks recently, you’ll probably have heard the idea that your brain hallucinates your conscious reality. Simply put, your senses aren’t good enough to give you a perfect picture of reality as it is now, so your brain fills in the gaps by creating a model of the world in your mind and it uses your senses to direct that model, to keep it grounded in reality (whatever that is). This internal model of the world is what you use to make decisions.

Probably the most famous example of the fallibility of our senses is the blind spot we all have in our eyes, where the optic nerve enters the eye. Yet we don’t experience this blind spot. What we experience is an internal model of the world, and the lack of visual information at the eye’s blind spot simply has no effect on our internal model. Certainly, something happening in our environment can be obscured by the blind spot, but even with one eye closed we don’t perceive a big hole in our visual field.

Have you ever seen something out of the corner of your eye, or at a distance, and thought it was something else? I routinely experience this when I’m driving to work in the dark and passing a forested area beside the road. A weirdly shaped bush or a branch can be recognized as a deer about to jump out on the road. I instinctively tap the brakes, only to realize it’s not a deer, but a bush. When this happens I’m inclined to say I “thought it was a deer, but I was mistaken.” In truth, I literally saw a deer, and in my internal model of the world it was a deer, fully formed, ready to leap onto the road, and I reacted accordingly. Occasionally this happens, and as I get closer, I confirm it really is a deer.

A while ago, I wrote an article on this blog about part tracking, and if you don’t remember reading it, I suggest you go back and take a look (it’s a rather short read). No, honestly, go back and read it.

Sound familiar?

For the most part, in the world of industrial automation we’re blessed with very reliable sensors. Thru-beam and proximity sensors are remarkably reliable. But as I discussed in the part tracking post, even our best sensors can sometimes send us erroneous data. There are blind spots in our industrial sensors.

Novice PLC programmers are obsessed with inputs and outputs. One of the earliest exercises I like to give to a new PLC programmer is to program a light to flash continuously. More than half will start their program with an input (usually a pushbutton). Clearly this is incorrect… it’s a flashing light, so the only input is time. The solution requires an internal model of time (in our case the on-delay timer). But to someone fresh out of their first PLC programming course, inputs and outputs are king.
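To make the point concrete, here’s a rough C# sketch of the same idea (the names and the 500 ms period are just for illustration): the logic has no physical input at all, only an internal model of elapsed time, evaluated every scan.

var period = TimeSpan.FromMilliseconds(500);
var lastToggle = DateTime.Now;
var light = false;

// called once per scan, like a PLC program
void Scan()
{
    if (DateTime.Now - lastToggle >= period)
    {
        light = !light;              // toggle the output
        lastToggle = DateTime.Now;   // restart the "on-delay timer"
    }
}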

In my part tracking post, I literally ask you to “focus on part tracking”. Your first job, as a PLC programmer, is to create a simplified model of the world in the PLC’s internal state and maintain the accuracy of this model using the inputs. I call this “part tracking”, but it’s literally the PLC’s hallucination of its world. The internal model of the world stored in the PLC is the proper foundation of the decisions the PLC will make. This is the core idea behind the patterns of ladder logic programming. The mission pattern, step pattern, and five-rung pattern are all on the output side of the program. They are the actuators of a decision that was made in the mission controller based on the model (a.k.a. part tracking).

Now here’s a bonus thought… is the PLC’s internal model an accurate representation of reality? Well, yes, your goal is to make it more accurate than your sensors alone. But is it a complete representation of reality? Clearly not. We pare down the model to the simplest, most fundamental level. We only include the bare minimum of what we need for the machine to do its job.

So… what could that mean about our own internal model of reality? If we evolved this consciousness (or I guess if it was designed like we designed the PLC program), there’s a good point to be made that accuracy has a lot of value. Our internal model of reality is likely more accurate than our senses can perceive. But is it complete? Almost certainly not. Modeling a part of reality that we don’t need in order to survive is just wasteful.

Fundamental vs. Incidental Sameness

When a programmer sees duplicated code, our gut reaction is to recoil in disgust. After all, the Once and Only Once principle is at the very core of the programmer’s thought process, but I would argue that this caveat is even more fundamental to the concept:

Beware of introducing unnecessary coupling when refactoring for Once and Only Once.

What does “unnecessary coupling” mean? That’s what I want to look at here.

When you apply the Once and Only Once principle, you’re defining everything that uses that code to work the same way. For a simple example, consider the code to reverse a string (in place):

int n = str.length();
for (int i = 0; i < n / 2; i++)
    std::swap(str[i], str[n - i - 1]);

If I advocated for writing out that code every time you had to reverse a string, you’d rightly conclude I was being ridiculous.

Fundamental Sameness

The code to reverse a string clearly belongs in a function because reversing a string is something that’s Fundamentally the Same no matter what string we’re trying to reverse anywhere in our software. If we discover a better way to do it, we want to be able to change it in one place.

Strings are also fundamentally the same. We can define them in strict terms. They don’t change, and even if the programming language designers decided to change the implementation of a string, all strings in our program have to change at the same time.

As the programmer we have the power to shape the internals of the program however we want, and this is powerful. We can define how strings are stored, and build upon that by defining how to reverse them.

However, we rarely get to define the real world that we’re interacting with.

Incidental Sameness

Anyone who has ever created business logic in their software will understand the pain of making rigidly defined software that represents the… less rigidly defined… real-life business rules of a company.

Let’s say your company has account managers separated into two groups. One group of account managers handles industrial clients and the other group handles commercial clients. The head of the industrial account manager group comes to you and says they need a new feature in the CRM software: their new group policy says we need to review accounts monthly, and they need a reminder alert generated if any account hasn’t been reviewed in 28 days. You go and check with the head of the commercial accounts manager group and they agree that this is a good idea.

You write a function to tell you when to generate alerts for an account:

bool generateAlert(Account acct) {
    return DateTime.Today.Subtract(acct.LastReviewedDate).TotalDays >= 28;
}

Do you see the problem? The two groups may both have the same policy: “accounts must be reviewed monthly”. However, those policies are only Incidentally the Same. There’s only one function and it defines alerts on all accounts to be generated the same way regardless of which group handles the account. If the head of the commercial group later comes to you and says they want an alert if it’s been 14 days since the account has been reviewed, then you’ll need to rewrite this feature.

Yes, it’s a contrived example and changing it is unlikely to be a big deal, but what you’ve done is created unnecessary coupling between the two groups. You can’t change the code for one group without affecting the other. In fact, tomorrow, the commercial group may not want alerts at all, or they may want alerts generated in different formats or sent to different people. Depending on the structure of your company, the two groups may need to operate completely differently due to differing client needs. It’s unlikely that management would have created two different groups unless they recognized the need to operate independently.
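One way out (sticking with the C# example; ReviewPolicy and AlertAfterDays are made-up names, not a prescription) is to give each group its own policy, so one group’s rule can change, or disappear entirely, without touching the other:

class ReviewPolicy
{
    public int AlertAfterDays { get; set; }

    public bool GenerateAlert(Account acct) =>
        DateTime.Today.Subtract(acct.LastReviewedDate).TotalDays >= AlertAfterDays;
}

// each group owns its own policy and can diverge independently
var industrialPolicy = new ReviewPolicy { AlertAfterDays = 28 };
var commercialPolicy = new ReviewPolicy { AlertAfterDays = 14 };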

Application to Industrial Automation

As Engineers we love to copy tried and true designs. That gripper design works well, so let’s use the same gripper on robot 1 and robot 3. Those VFD drives seem to be really robust, so I think we should use them on all the pumps and conveyor drives of this new machine.

However, industrial machines are constantly modified for a variety of reasons: changing product and process needs, temporarily bypassing failed components, or replacing components with different parts because a better product is available, or the old model is no longer being manufactured.

When programming an industrial machine, you need to treat identical components as only Incidentally the Same. Your program needs to allow for the complexities of real life in a manufacturing environment. Components that are only incidentally the same will often need to be changed, and they usually won’t all change at the same time. Even if you want to change all the pumps on your machine, you’ll likely only change one this week, make sure it works, and then change the other three during some scheduled downtime next month.

That doesn’t mean you shouldn’t follow the Once and Only Once principle. Make a function block (or an Add-on Instruction in Allen-Bradley) for communicating with your specific model of VFD drive and use it everywhere you use that drive. That’s because VFD drives with the same part # are Fundamentally the Same. Make a function block for logging events to your plant-wide data collection system, because you’ve created an explicit definition of an event, and you can control it. Write common functions for calculating the distance between two points, or calculating voltage given current and resistance, because those are defined and won’t change (or, if they do change, you’ll want to change them everywhere).

On the other hand, the chance that the grippers on robot 1 and robot 3 are going to be identical two years from now is vanishingly small. And those pumps? One of them will have a bypassed flow sensor for a week next July. Plus, someone’s got a crazy idea for controlling the second conveyor section based on the measurement from a laser thickness gauge.

Be thoughtful about code re-use. Use it for things that are Fundamentally the Same, but beware of components that are only Incidentally the Same. Don’t create unnecessary coupling between incidentally similar components.

Focus on Part Tracking

There are many ways to visualize your program at a higher level. Take a look at this model for a second:

Inputs -> Part Tracking -> Outputs

I tend to focus on Part Tracking as a core part of my programming work. You can think of part tracking as the PLC’s internal model of the outside world: the parts of the world that can’t be sensed directly by the inputs. In some cases the part tracking information can be used in spite of the current state of the inputs. Let me explain.

Imagine a work cell where a robot places a part in a fixture, a nut feeder feeds a nut onto the part, and then a weld gun extends from the top, and welds the part.

There are various places where we may place sensors in this situation, and you sometimes don’t have much control over it. Perhaps there’s a sensor in the fixture detecting that the part is there, but sometimes the mechanical engineers can’t find a place to fit one in. Sometimes the robot gripper has a sensor (or sometimes we use vacuum sensors to detect a part we’re picking up with vacuum). Perhaps neither is the case and we just know the robot has a part because we knew there was one in the infeed fixture and we know the robot went there and gripped, so we just assume it’s there.

Part Tracking gives you a way to remove some of this uncertainty. Create a memory bit (M1) to indicate there’s a part in the robot gripper. If the robot gripper moves to the fixture position and opens, then clear memory bit M1 and set memory bit (M2) indicating there’s a part in the fixture.
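A rough sketch of that hand-off, written in C# instead of as a rung (partInGripper and partInFixture stand in for M1 and M2; the other names are made up):

if (robotAtFixture && gripperOpened && partInGripper)
{
    partInGripper = false;   // clear M1: the part has left the gripper
    partInFixture = true;    // set M2: the part is now in the fixture
}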

Keep this part tracking logic separate from your fault logic (so your part tracking logic is predictable and similar from project to project). If you happen to have a sensor in the fixture, create a separate rung that seals in a fault if the M2 bit is on and the fixture part present sensor is off, and use a delay timer of 100 ms or so just to allow for a sensor blip. Make sure the fault stops the cell. In this case I typically require the operator to go into the HMI and manually clear the “ghost” part in the fixture (i.e. clearing the M2 bit) before they can reset the fault. That way I know that they know the part really is missing. Likewise, a second fault rung seals in if M2 is off, but the sensor indicates a part.
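And a rough sketch of that first fault rung, again in C# evaluated once per scan; the scan counter stands in for the 100 ms on-delay timer, and every name here is made up:

// seal in a "ghost part" fault if the model says a part is present but the sensor disagrees
if (partInFixture && !fixturePartSensor)
    mismatchScans++;                       // stands in for the 100 ms on-delay timer
else
    mismatchScans = 0;

// the fault can only be reset after the operator clears the ghost part (M2) from the HMI
ghostPartFault = (mismatchScans >= debounceScans || ghostPartFault)
                 && !(faultResetRequest && !partInFixture);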

Now as I said, sensors can sometimes be unreliable, and nowhere is this more common than in a weld cell. The high magnetic field generated by the weld current and the weld expulsion splattering around are a surefire way to mess with both inductive proxes and optical sensors. Separating part tracking from sensors gives you a way to deal with this. For instance, when the welder is firing, mute the faults so they don’t trigger if a sensor suddenly reads incorrectly.

To put it another way, use your part tracking to make decisions, and use the sensors to validate the part tracking, but only at times when you’re sure of their validity.

Furthermore, you can use inputs to initiate actions without requiring them to be on for the entire duration of the action. For instance, you can require that the fixture sensor is on to indicate a part is present before you fire the welder, but you can seal in the Fire_Welder bit around the sensor contact so the sensor can flicker after the output turns on without causing the output itself to flicker on and off.
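In C# shorthand, with made-up names, that seal-in might read:

fireWelder = ((fixturePartSensor && weldRequested) || fireWelder) && !weldComplete;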

Idiomatic Ladder Logic

I want to talk about the concept of “idioms”, or the idea of “idiomatic”, as it applies to programming languages.

Python is said to have “strong idioms”:

One reason for the high readability of Python code is its relatively complete set of Code Style guidelines and “Pythonic” idioms.

When a veteran Python developer (a Pythonista) calls portions of code not “Pythonic”, they usually mean that these lines of code do not follow the common guidelines and fail to express its intent in what is considered the best (hear: most readable) way.

Many years ago I did some programming in Perl. A main philosophy of Perl is the acronym TMTOWTDI (There’s more than one way to do it). That’s an example of a language with “weak idioms”. This goes hand-in-hand with Perl’s apparent lack of focus on readability. In fact, some detractors of the language claim it’s a “write-once, read-never” language.

When I started writing the Patterns of Ladder Logic Programming page, I wasn’t thinking of it at the time, but in retrospect I was trying to document “Idiomatic Ladder Logic.” Yes, there are many ways to accomplish the same task in a PLC, but you should stick to idioms when they help communicate the meaning of your code.

When you deviate from these idioms, you’re communicating to the reader that something about this case is different. If I expect a fault coil to be sealed in, and you make it a set-reset, you’re communicating to me that this fault condition needs to survive a power outage. That’s useful information. If your fault coil isn’t sealed in, that must mean you intend it to be self-clearing.

Even your deviations should be idiomatic. Don’t make fault coils self-clearing by putting a Reset instruction for the same coil somewhere else in your code. That won’t be obvious to the reader, and is the reason for the idiom “don’t use a coil more than one place in your logic.”

Most new PLCs can be programmed in multiple languages from the IEC-61131-3 specification. These languages are very different, and the idiomatic way to do something in one language isn’t necessarily the way to do it in another. For instance, any kind of loop (for, while) is non-idiomatic in ladder logic, but is certainly an idiomatic construct in structured text. That means part of our job as programmers is to pick the correct language to express our intent.

Data collection, string parsing, and math are naturally expressed in structured text (ST), but control logic is naturally expressed in ladder diagram (LD). Part tracking logic can go either way. Sequential function chart (SFC) is perfect for expressing a sequence, ladder diagram is a good runner-up using the Step Pattern, and structured text requires that you define a state machine, which is the least expressive option.

I once wrote a bubble-sort routine in ladder logic for a SLC500 PLC. I wish I’d had structured text back then. First pick the right language, and then pick the right idiom.

General Principles of PLC Programming

The PLC tutorials on this site focus on specific principles, but I’d like to point out some general principles too. In fact, the general principles of PLC programming are the same as those of PC programming, though the way we satisfy those principles can vary widely between the two domains.

Principle 1: Readability

This is number 1 because it trumps everything (short of functional correctness, of course). Many of the principles focus on making the code easy to change, but before you can change it you need to understand how it works at a deep level. The easier it is to understand, the easier it is to change, so readability drives most of my PLC programming tutorials and explanations. Also remember your audience. Readability means someone with only an electrical background and no C/C++/Java/C# experience should still be able to walk up and understand your logic. That is the entire point of PLCs.

Principle 2: Keep Things that Change Together Close Together

If you know that changing one piece of code will require you to change another piece of code, and you can’t somehow combine them into a single piece of code, then at least put them next to each other. The more related they are, the closer they should be in your program. Do whatever you can to help your future self see the trap you’ve laid.

Principle 3: Once and Only Once

This is an ideal form of Principle 2. If you have an array that has 10 elements in it, and you need to reference the number of elements in lots of places, then declare a constant and use that everywhere. Then you only need to change it in one place.
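A trivial sketch (the names are made up):

const int PUMP_COUNT = 10;                    // declared once
var pumpPressures = new double[PUMP_COUNT];

for (int i = 0; i < PUMP_COUNT; i++)          // referenced everywhere
    pumpPressures[i] = 0.0;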

However, don’t get too carried away. Remember not to let this trump readability. I had a thought-provoking discussion with my colleague recently. All of our devices use millimeters, but we have to display our measurements in inches. We have lots of places where we multiply or divide by 25.4, which is the conversion factor between millimeters and inches. If you follow principle 3, then you should define a constant, e.g., MILLIMETERS_PER_INCH = 25.4 and use that everywhere. On the other hand, the conversion factor between millimeters and inches is unlikely to change anytime soon, and even if it were, you could find all the instances of 25.4 in our program and replace them in less than 60 seconds with a search and replace tool. Plus the constant is just longer. Concise is good. Furthermore, the context in which it’s used explains the value (because it’s typically used like this: distance_mm = distance_inches * 25.4). For these reasons I’m OK, in this particular case, using 25.4 instead of a named constant.

My point is that these ideas aren’t always black and white. Don’t apply these blindly and assume you’ve done your job. Make sure you use your brain too.

Principle 4: Isolate Things That Change Separately

This is the corollary of Principle 2. Just because your system has three identical pumps now doesn’t mean it’ll have three identical pumps 5 years from now. You might feel like a genius because you made a function block to control those pumps, but when the feedback on one of them malfunctions and you need to go in and bypass that one feedback without messing up the feedback logic on the other pumps, you’ve just made your life more complicated. Plus, if it’s the electrician that has to put that bypass in, how comfortable will they be modifying your function block vs. modifying a rung that only affects one pump? The best 2 am support call is the one you never get.

Principle 5: Use Patterns for Consistency

You are not the first person to program a PLC. Those who’ve come before you and learned through trial and error have settled on some useful patterns of ladder logic programming that perform specific functions in well understood ways. These are the nouns, verbs, and adjectives of our field. You can expect someone reading your logic to recognize these patterns quickly and understand what you’re doing.

You’ll also start coming up with patterns that are specific to your machine or facility. Patterns have the advantage that once you learn one, you understand it whenever you see it. Consistency is good.

Principle 6: Build a Domain-Specific Language

Whether you’re programming a machine or a family of similar machines, you’re likely to find that the same problems arise again and again. Use function blocks to create a short-hand notation for the nouns, verbs, and adjectives that are specific to the problems you’re solving. Look for repeated logic. I don’t mean repeated because the hardware repeats (because hardware changes) but look for cases where your ideas are repeated.

For a simple example, we have a lot of mechanical presses in our facility and we often want to know if a press is in a certain “window” such as from 90 to 180 degrees, or a more complicated case from 350 to 10 degrees (because it passes through zero). I created a Window function to handle these simple tests and encapsulate the more complicated logic of when the window includes the 360 to 0 rollover point. The function is readable at the point of use, it’s used widely, and it’s unlikely to change in a way that might break all the places it’s used, so it’s a good candidate for a function block.
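A rough C# sketch of that test (the real thing is a function block; the name and the degree convention here are just illustrative):

bool Window(double angle, double start, double end)
{
    if (start <= end)
        return angle >= start && angle <= end;   // simple case, e.g. 90 to 180
    return angle >= start || angle <= end;       // wraps through 360/0, e.g. 350 to 10
}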

Another good candidate is a function block that logs an event to your plant-wide event logging system.

Conclusion

Remember, readability and correctness trump everything else. Being concise is good but not at the expense of being cryptic. A simple 3-rung pattern repeated 10 times is easier to understand than a single complicated 10-rung block, even if the latter is a third the size. Ask yourself at the outset, “what’s likely to change?” and be honest with yourself. Let the answer guide your decisions. And above all, think, “why am I doing it this way?”

What is “AI” Anymore?

When I was growing up we had many examples of Artificial Intelligence (AI) in the movies. Of course we had R2-D2 and C-3PO in Star Wars, and HAL in 2001: A Space Odyssey. It was clear to anyone that these machines were actually intelligent.

These days the media is calling anything and everything “AI” with little evidence of any intelligence whatsoever. Here are some examples from the news:

An “AI” running on a $10 Raspberry Pi… to make your fridge “smart”? Give me a break! Scientists have been working on modelling what’s going on in the human brain, and according to this article:

It took 40 minutes with the combined muscle of 82,944 processors in K computer to get just 1 second of biological brain processing time. While running, the simulation ate up about 1PB of system memory as each synapse was modeled individually.

To be fair, that’s what it takes to simulate all the neurons in a human brain, and it’s not clear that this is a good analog for an artificial intelligence. Still, in 2015 the IEEE published an article saying that the human brain is 30 times faster than the world’s best supercomputers. Certainly you’re not doing that on a Raspberry Pi.

We’re only scratching the surface of AI right now. “Deep Learning” is the big new buzzword. It works like this: you feed it a big dataset, like a bunch of X-ray images, and you have an expert in the field, like a radiologist, pick examples from that dataset and categorize them (“cancer”, “not cancer”). You then let the deep learning program go at the dataset and try to build a model to categorize all the images into those two groups. The expert then looks over the result and corrects any mistakes. Over time and many cycles, the software gets better and better at building a model of detecting cancer in an X-ray image.

Clearly this is pattern matching, and it’s something we humans are particularly good at. However, I’d also note that most animals are good at pattern matching. Your dog can learn to pick up subtle clues about when you’re about to take her for a walk. Even birds can learn patterns and adapt to them.

If your job can be replaced by a pattern matching algorithm, isn’t it possible that your job doesn’t require that much intelligence? It’s more likely you relied on a lot of experience. When I walk out to a machine and the operator tells me that the motor’s making a weird sound when it powers up, chances are I’ve seen that pattern before, and I might be able to fix it in a few minutes. That’s pattern matching.

We hear a lot in the media about AI coming to take our jobs, but it’s more correct to say that Automated/Artificial Experience (AE) is really what’s about to eat our lunch. Lots of highly paid professions such as medical doctors, lawyers, engineers, programmers, and technicians are in danger of deep learning systems removing a lot of the “grunt work” from their profession. That doesn’t mean the entire profession will have nothing left to do. After all, these systems aren’t truly intelligent, but we can’t hide from the fact that in large teams, some of the employees are likely only doing “grunt work.”

So don’t worry about AI just yet. Just make sure you’re using your real intelligence, and you should be safe.

PLC Programming goes Imperative

Decades ago, computer science emerged from the dark ages of assembly language programming and created two new languages: Lisp and Fortran. These are two very important computer languages because they exist at opposite ends of an imagined spectrum in the eyes of computer scientists: functional languages vs. imperative languages.

Fortran “won” the first battle, not least because imperative languages are closer to how the CPU actually does things, so back in the day when every little CPU cycle mattered it was easier to understand the performance implications of a Fortran program than a Lisp program. Plus, if you were already programming in assembly, then you were already thinking about how the computer was executing your code. In fact, the next big imperative language, C, is often referred to as “portable assembly language.”

Fast forward to now, and modern languages like C#, Java, Python and Ruby have all grafted a lot of functional programming features onto their basic imperative syntax. In C#, for instance, LINQ is a direct rip-off of Lisp’s S-expressions, and the language now has closures and lambda functions. Functional languages provide ways to think at a higher level than imperative languages: in a functional program you describe what you want, and in an imperative program you describe how to do it.

Here’s an example in C#, using imperative programming:

var data = new int[] { 1, 2, 3, 4, 5 };
var sumOfSquares = 0;
for(var i = 0; i < data.Length; i++)
{
    sumOfSquares += data[i] * data[i];
}

…and the same thing done functionally:

var data = new int[] { 1, 2, 3, 4, 5 };
var sumOfSquares = data.Select(x => x * x).Sum();

In the second case, I’m taking the list of numbers, using Select to translate that into a list of their squares (also known as a Map operation) and then using Sum on the resulting list to compute an aggregate sum (also known as a Reduce operation). It has some interesting advantages. For instance, the original code can’t be split across multiple cores, but the latter can. Also, if you know both syntaxes, the latter is easier to read and understand.
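For example, the same aggregation can be spread across cores with PLINQ (just an illustration of that point, not something the original snippet needs):

var sumOfSquaresParallel = data.AsParallel().Select(x => x * x).Sum();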

Now take ladder logic. I’ve made the claim before that basic ladder logic (with contacts and coils) is actually a functional language. A simple example might be ANDing two inputs to get an output, which in C# would look like this:

var output = inputA && inputB;

That’s actually functional. If I wanted to write it imperatively I’d have to do something like:

var output = false;
if(inputA && inputB)
{
  output = true;
}

In ladder logic, that would be the equivalent of using an unlatch (or reset) instruction to turn off an output and then using a latch (or set) instruction to turn on the output if the A and B contacts were true. Clearly that’s not considered “good” ladder logic.

Similarly, a start/stop circuit goes like this:

run = (start || run) && !stop;

Now historically, mathematicians and physicists preferred functional languages because they just wanted to describe what they wanted, not how to do it. It’s worth noting that electricians, looking at ladder logic, prefer to see functional logic (with contacts and coils) rather than imperative logic (with sets, resets, and move instructions).

In recent years we’ve seen all major PLC brands start to include the full set of IEC-61131-3 languages, and the most popular alternative to ladder logic is structured text. Now that it’s available, there are a lot of newer automation programmers who only ever knew imperative programming and never took the time to learn ladder logic properly, and they just start writing all of their logic in structured text. That’s why we’re seeing automation programming slowly shift away from the functional language (ladder) towards the imperative language (structured text).

Now I’m not suggesting that structured text is bad. I prefer to have more tools at my disposal, and there are definitely times when structured text is the correct choice for automation programming. However, I’d like to point out that the history of computer science has been a progressive shift away from Fortran-like imperative languages towards Lisp-like functional languages. At the same time, we’re seeing automation programming move in the opposite direction, and I think alarm bells should be going off.

It’s up to each of us to make an intelligent decision about what language to choose. In that respect, I want everyone to think about how your brain is working when you program in an imperative style vs. a functional style.

When you’re doing imperative programming, you’re holding a model of the computer in your mind, with its memory locations and CPU, and you’re “playing computer” in your head, simulating the effect of each instruction on the overall state of the CPU and memory. It’s only your intimate knowledge of how computers work that allows you to do this, and it’s the average electrician’s inability to do this which makes them dislike structured text, sets, resets, and move instructions. They know how relays work, and they don’t know how CPUs work.

If you know how CPUs work, then I understand why you want to use structured text for everything. However, if you want electricians to read your logic, then you can’t wish-away the fact that they aren’t going to “get” it.

As always, be honest with yourself about who will read your logic, and choose your implementation appropriately.

Why good ladder logic looks like it was written by an 8 year old

When traditional PC programmers see ladder logic, they think ladder logic programmers are terrible programmers. Being both a .NET developer and a ladder logic programmer, this has caused me a lot of frustration and confusion over the years. I have one foot in each world, and yet I choose to write C# programs one way and ladder logic programs another. Why?

Let’s ignore the fact that most traditional programmers just don’t grok ladder logic at all, because their minds think about programs sequentially rather than in parallel. The real reason they hate ladder logic is because ladder logic programmers avoid things like loops, indexed addressing and subroutines. To them, this means you’re programming at the level of an 8 year old.

The thing is, I know how to use loops, arrays, and subroutines, not to mention object oriented programming and functional programming constructs like s-expressions, closures and delegates. Still, I choose to write simple and straightforward ladder logic. Why would I, an experienced programmer, choose to write programs like an 8 year old? Do I know something they don’t?

I spend a lot of time trying to get people to think about why they do things a certain way. Everyone wants that simple rule of thumb, but it’s far more valuable to understand the first principles so you can apply that rule intelligently. Decades of computer science has given us some amazing tools. Unfortunately, a carpenter with twice as many tools in her tool box is simply twice as likely to pick the wrong tool for the job if she doesn’t understand the problem the person who invented that tool was trying to solve.

The first time you show a new programmer a “for” loop, they think, “Amazing! Instead of typing the same line out a hundred times, I can just type 3 lines and the computer does the same thing! I can save so much typing!” They think this because they’re still an idiot. Don’t get me wrong, I was an idiot about this too, and I’m still an idiot about most things. What I do know, however, is that for loops solve a much more important problem than saving you keystrokes. For loops are one of many tools for following the Once and Only Once (OAOO) Principle of software development.

The OAOO principle focuses on removing duplication from software. This is one of the most fundamental principles of software development, to the point where it’s followed religiously. This principle is why PC programmers look at ladder logic and instantly feel disgust. Ladder logic is full of duplication. I mean, insanely full of duplication. So how can you blame them? God said, “let there not be duplication in software,” and ladder logic is full of duplication, thus ladder logic is the spawn of Satan.

That’s because programmers who believe the OAOO principle is about removing duplication are idiots too. Don’t they ever wonder, “why is it so important to remove duplication from our code?” Should we really worry about saving a few bytes or keystrokes? NO! We focus on:

  1. Making it do what it’s supposed to do
  2. Making it obvious to the reader what the program does
  3. Making it easy to make changes when the requirements change

… in that order.

In fact, #3 is the real kicker. First of all, satisfying #3 implies you must have satisfied #2, so ease of understanding is doubly important, and secondly, satisfying #3 implies you can predict what will change.

Imagine if you have to print the numbers from 1 to 5. If I asked a C# programmer to write this, they’d likely write something like this:

for(var i = 1; i <= 5; i++)
{
    Console.WriteLine("{0}", i);
}

… of course I could write this:

Console.WriteLine("1");
Console.WriteLine("2");
Console.WriteLine("3");
Console.WriteLine("4");
Console.WriteLine("5");

Why is the first way better? Is it because it uses fewer keystrokes? No. To answer this question, you need to know how the requirements of this piece of code might change in the future. The for loop is better because many things that might change are only expressed once. For instance:

  • The starting number (1)
  • The ending number (5)
  • What to repeat (write something to the screen)
  • What number to print
  • How to format the number it prints

If the requirements of any of these things change, it’s easy to change the software to meet the new requirements in the first case. If you wanted to change the code so it prints every number with one decimal place, the second way clearly requires 5 changes, where the first way only requires one change.

However, what if the requirements changed like this: print the numbers from 1 to 5, but for the number 2, spell out the number instead of printing the digit.

Okay, so here’s the first way:

for(var i = 1; i <= 5; i++)
{
    if(i == 2)
    {
        Console.WriteLine("two");
    }
    else
    {
        Console.WriteLine("{0}", i);
    }
}

… or if you wanted to be more concise (but not much more readable):

for(var i = 1; i <= 5; i++)
{
    Console.WriteLine(i == 2 ? "two" : i.ToString());
}

Here’s the change using the second way:

Console.WriteLine("1");
Console.WriteLine("two");
Console.WriteLine("3");
Console.WriteLine("4");
Console.WriteLine("5");

Here’s the thing… given the new requirements, the second way is actually more readable and more clearly highlights the “weirdness”. Does the code do what it’s supposed to do? Yes. Can you understand what it does? Yes. Would you be able to easily make changes to it in the future? Well, that depends what the changes are…

Now think about some real-life ladder logic examples. Let’s say you have a machine with some pumps… maybe a coolant pump and an oil pump. Your programmer mind immediately starts listing off the things that these pumps have in common… both have motor starters with an overload, and both likely have a pressure switch, and we might have filters with sensors to detect if the filters need changing, etc. Clearly we should just make a generic “pump” function block that can control both and use it twice, right?

NO!

Look, I admit that there might be some advantage to this approach during the design phase if you had a system with 25 identical coolant pumps and your purchasing guy says, “Hey, they don’t have the MCP-1250 model in stock so it’s going to be 8 weeks lead time, but they have the newer model 2100 in stock and he can give them to us for the same price.” Maybe it turns out the 2100 model has two extra sensors you have to monitor so having a common function block means it takes you… 20 minutes to make this change instead of an hour. We all know how much you hate repetitive typing and clicking.

On the other hand, when this system goes live, making an identical change to every single pump at exactly the same time is very rare. In fact, it’s so rare that it’s effectively never. And even if that were to ever actually happen, the amount of programming time it actually saves you is so tiny compared to the labor cost of actually physically modifying all those pumps that it’s effectively zero.

However, since these are physically different pumps, you’re very likely to have a problem with one pump. When your machine is down and you’re trying to troubleshoot that pump, do you want to be reading through some generic function block that’s got complicated conditional code in it for controlling all 50 different types of pumps you’ve ever used in your facility, or do you want to look at code that’s specific to that pump? And maybe the motor overload on that pump is acting up and you need to put a temporary bypass in to override that fault. Do you really want to modify a common function block that affects all the other pumps, or do you want to modify the logic that only deals with this one pump? What’s more likely to cause unintended consequences?

So this is why ladder logic written by experienced automation programmers looks like it was written by an 8 year old who just started learning Visual Basic .NET last week. Because it’s better and we actually know why.

How Automation is Shaping our Society

Try this on for size:

We’ve been automating for hundreds of years now. The industrial revolution caused a migration of workers from agriculture into the cities to work at factory jobs, and workers that are displaced by new technologies will find new work that didn’t even exist a few years ago.

I assume if you’re reading this article that you’re involved in automation in some way, so you’d actually want to believe that statement, and arguing against what someone wants to believe would be pointless. I’m going to do it anyway. That statement is wrong. This time it really is different.

To explain this I need you to consider what motivates people here in the “west”. Basically we have some form of regulated capitalism. To boil that down, it means you can own things, and you are allowed to keep some fraction of the proceeds that are generated from those things. This actually applies to almost all of us, even if most people don’t think of it that way. It’s obvious to a farmer: you own land, buildings, and equipment, you grow things and sell them, and after tax you hope to end up with some kind of a profit. Ok, perhaps farming isn’t a great example because there are so many government subsidies involved, but the principle is the same with small business owners, and even employees.

Employees? Most employees don’t think of themselves as capitalists because they can’t see the capital they’re using to generate a profit, but it’s right there in the mirror. You are your own capital. Ever since abolition, this has been capital that nobody can take away from you. It’s the first and primary social safety net. No matter how penniless you are, barring illness or infirmity, you have this basic nest egg of capital you can always draw from to bootstrap your life. Most people mistake capital for money, and that’s why they don’t see themselves as capital. After all, you can’t “spend yourself”, can you? Actually, going back hundreds of years, you could. Most slaves around the Mediterranean hundreds of years ago were slaves because they incurred debts that they couldn’t pay off, so they became the property of whoever they owed the debt to, until they could work off their debt.

If it helps, think of the human body as a machine that turns food into… various useful things more valuable than food. Farmers turn small amounts of food into larger amounts of food. Carpenters turn food and wood into houses or furniture. Quantum physicists turn food into transistors and lasers, and you, dear reader, perhaps you’re a machine that turns food into PLC programs. In turn, we trade these things for various useful things that other people have created. Capitalism.

Now I’m a fan of regulated capitalism because it’s an efficient way to organize lots of machines (us) into producing lots of valuable things like cars, houses and episodes of Game of Thrones.

Now here’s the weird part. There’s a huge incentive to use your capital to acquire more capital, which you can then use to acquire more capital, and so on, but very few people do this. You would think that someone who finished 13 years of schooling at the age of 18, worked 47 years and retired at the age of 65, making, let’s say, an average modest wage of $30,000 per year in present-day dollars would have had the foresight to save some of that $1.4 million for their retirement, but it’s clear that many don’t. In fact there are many people with an income far higher than that who not only don’t save any, but go into significant debt and either declare bankruptcy or become virtual slaves to credit card companies. It’s so incredibly common and has such a negative cost to society that governments actually force workers to save portions of their paycheque every week into a government pension program and then pay them a stipend when they retire. I’m not familiar with the way this works in the United States, but in Canada this is referred to as the Canada Pension Plan, and it’s supplemented by something called Old Age Security that kicks in a few years later. This is despite the fact that anyone who bothered to squirrel away 18% of their net paycheque for their entire career into a tax sheltered retirement savings plan and invested it in mutual funds would have a very comfortable retirement – much more comfortable than living on a government pension.

Now part of me thinks this is fine: you made your bed, now lie in it. But this affects everyone, even the wealthiest capitalists. The most basic of government services are the ones that wealthy people need most: military, police (criminal law) and the enforcement of contracts (civil law). These three services of government are what give people the ability to own things. The military protects it from external threats, the police protect it from people inside the country (thieves and vandals), and civil law settles disputes about who owns what.

We keep hearing that wealth inequality is a bad thing, but that can’t be absolutely true. If our system is working, it has to reward the people doing more valuable things with more money, so the only way there could be income equality is if everyone was doing something equally valuable, and we’re not. There should be a way for me to make more money by working harder, smarter, or differently than I am now. That’s the incentive to be more productive.

In fact, that’s what really matters: does the average person believe they can improve their standing? Because if they don’t, they get unruly and do wild and crazy things. Things that make wealthy people uneasy because in the west those unruly people can really mess with the government that’s providing all those military, police, and civil services they depend on.

Imagine you work in a factory in the Midwest U.S. that makes air conditioners. Chances are, you don’t think of yourself as a machine that turns food into air conditioners. You’re not thinking about how to make that machine more efficient, or more valuable. You’re already working 6 days a week, and your family never sees you. All you know is that sooner or later the guy who drives the fancy BMW is going to move your job to another country, or replace you with a robot, and since all the other plants around here have closed, you might not be able to send your kid to college. How would you feel? Maybe you’d be inclined to vote for a politician that promised to punish companies that moved their factories to Mexico.

I think the crux of the matter is that this worker has no idea what to do. The incentives are still there: learn a new skill, invest in yourself, be more productive. But few people do it, for the same reason that few people save for their own retirement.

I’ve spent a few years around people who’ve been running small businesses, and I’ve tried to pay attention. It took me years to really understand that there was nothing magical about running a business. That’s because, like almost everyone else, I was brought up with the idea that innovative geniuses come up with brilliant new ideas and start companies that make billions of dollars. Outside of a few small cases, that’s simply not true. Look hard enough and you can find an industry that’s in demand and growing. If the demand is high, there will always be companies in that industry that are poorly run but still make a profit. You can make money simply by doing the same thing as everyone else and simply not being the worst at it. That’s how capitalism works – it gives you incentives to provide products and services that are in demand.

I have a relative that got laid off many years ago. There was a jobs program where they gave him classes on how to start a small business. He learned how to keep books, write an invoice, and how to do his taxes. They hooked him up with a small business loan. A few months later he’s running his own business and a couple years after that he’s hired an employee. Now he has the opportunity to invest in himself, like buying better equipment and improving his skills.

Let’s say you’re a PLC programmer. Your company likely pays you upwards of $50,000 a year. How much did they spend on your computer? Did they cheap out? Does it make any sense to handicap a $50,000 a year resource with a cheap laptop? If you were in business for yourself, you’d quickly realize there aren’t many things you could invest in that would make you a more efficient or valuable PLC programmer, but a faster computer is a no-brainer.

Automation is increasing productivity, and with self-driving trucks and expert systems being developed, the rate of productivity increase is set to explode. However, these are expensive investments and there’s no way for displaced workers to take advantage of this automation. If I give a truck driver a bigger truck, they produce more value per mile driven, but if I replace the driver with a computer, the driver produces no value at all.

Increased productivity stopped producing higher wages back in the early 70s. A bank teller makes the same now as they did back then (adjusted for inflation) even though most of the drudgery has been offloaded to ATMs. In fact, ATMs allowed banks to open more, smaller branches, and the demand for tellers to staff those branches has actually increased the total number of tellers, but despite automating the simple tasks and increasing demand for tellers, they’re not making any more in wages.

The same people who are currently blaming immigration and outsourcing for their problems are soon going to realize that automation is what’s really eating their lunch. Unlike in the industrial revolution where displaced workers could participate in this new economy by switching from farming to factory work, during this transition workers will either lose their jobs and have to completely re-skill, or at best they’ll keep their jobs but not see a penny more for their increased productivity.

That’s because old automation made people more valuable, but new automation seems to make them less valuable. That means it’s devaluing the one bit of capital they have.

This is where someone usually suggests a universal basic income so everyone can share in the increased productivity without everyone contributing to it. I’m not convinced the numbers add up. What we really need is to encourage this idea of viewing yourself as capital, not as an employee. An incentive and a safety net for people starting a small business should be less expensive and more effective than paying people to sit at home. How about teaching this stuff in school (though I figure teachers are pretty clueless about starting a business)? How about making it easier to start a business than to go on social assistance? How about making in-demand skills training free?

I’m glad we’re talking about this because it does matter. A lot of this is tied in with what’s going on in the world right now. There’s a general sense that the next generation won’t be as well off as their parents’ generation, and that’s pretty much unprecedented. That promise that anyone could make something of themselves is slipping away, and we need that back.