Category Archives: Industrial Automation

Will TwinCAT 3 be Accepted by Automation Programmers?

Note that this is an old article; I now have more up-to-date TwinCAT 3 Reviews and a TwinCAT 3 Tutorial.

In the world of programming there are a lot of PC programmers and comparatively few PLC programmers, but I inhabit a smaller niche. I’m a PLC and a PC programmer. This is a dangerous combination.

If you come from the world of PLC programming, like I did, then you start out writing PC programs that are pretty reliable, but they don’t scale well. I came from an electrical background and I adhered to the Big Design Up Front (BDUF) methodology. Design changes late in the project are so expensive that BDUF is the economical model.

If you come from the world of PC programming, you probably eschew BDUF for the more popular “agile” and/or XP methodologies. If you follow agile principles, your goal is to get minimal working software in front of the customer as soon as possible, and as often as possible, and you keep doing this until you run out of budget. As yet there are no studies that prove Agile is more economical, but it’s generally accepted to be more sane. That’s because of the realization that the customer just doesn’t know what they want until they see what they don’t want.

It would be very difficult to apply agile principles to hardware design, and trying to apply BDUF (and the “waterfall” model) to software design caused the backlash that became Agile.

Being both a PLC and a PC programmer, I sometimes feel caught between these two worlds. People with electrical backgrounds tend to dislike the extra complexity that comes from the layers and layers of abstraction used in PC programming. Diving into a typical “line of business” application today means you’ll need to understand a dizzying array of abstract terminology like “Model”, “View”, “Domain”, “Presenter”, “Controller”, “Factory”, “Decorator”, “Strategy”, “Singleton”, “Repository”, or “Unit Of Work”. People from a PC programming background, however, tend to abhor the redundancy of PLC programs, not to mention the lack of good Separation of Concerns (and for that matter, source control, but I digress).

These two worlds exist separately, but for the same reason: programs are for communicating to other programmers as much as they’re for communicating to machines. The difference is that the reader, in the case of a PLC program, is likely to be someone with only an electrical background. Ladder diagram is the “lingua franca” of the electrical world. Both electricians and electrical engineers can understand it. This includes the guy who happens to be on the night shift at 2 am when your code stops working, and he can understand it well enough to get the machine running again, saving the company thousands of dollars per minute. On the other hand, PC programs are only ever read by other PC programmers.

I’m not really sure how unique my situation is. I’ve had two very different experiences working for two different Control System Integrators. At Patti Engineering, almost every technical employee had an electrical background but was also proficient in PLC, PC, and SQL Server database programming. On the other hand, at JMP Engineering, very few of us could do both; the rest specialized in one side or the other. In fact, I got the feeling that the pure PC programmers believed PLC programming was beneath them, and the people with the electrical backgrounds seemed to think PC programming was boring. As one of the few people who’ve tried both, I can assure you that both of these technical fields are fascinating and challenging. I also believe that innovation happens on the boundaries of well-established disciplines, where two fields collide. If I’m right, then both my former employers are well positioned to cash in on the upcoming fusion of data and automation technologies.

TwinCAT 3

I’ve been watching Beckhoff for a while because they’re sitting on an interesting intersection point.

On the one side, they have a huge selection of reasonably priced I/O and drive hardware covering pretty much every fieldbus you’d ever want to connect to. All of their communication technologies are built around EtherCAT, an industrial fieldbus of their own invention that then became an open standard. EtherCAT, for those who haven’t seen it, has two amazing properties: it’s extremely fast, compared with any other fieldbus, and it’s inexpensive, both for the cabling and the chip each device needs to embed for connectivity. It’s faster, better, and cheaper. When that happens, it’s pretty clear the old technologies are going to be obsolete.

On the other side, they’re a PC-based controls company. Their PLC and motion controllers are real-time industrial controllers, but you can run them on commodity PC hardware. As long as PCs continue to become more powerful, Beckhoff’s hardware gets faster, and they get those massive performance boosts for free. Not only that, but they get all the benefits of running their PLC on the same machine as the HMI, or other PC-based services like a local database. As more and more automation cells need industrial PCs anyway, integrators who can deliver a solution that combines the various automation modules on a single industrial PC will be more competitive.

Next year Beckhoff is planning to release TwinCAT 3, a serious upgrade from their existing TwinCAT 2.11. The biggest news (next to support for multiple cores) is that the IDE (integrated development environment) is going to be built around Microsoft’s Visual Studio IDE. That’s a pretty big nod to the PC programmers… yes you can still write in all the IEC-61131-3 languages, like ladder, function block, etc., but you can also write code in C/C++ that gets compiled down and run in the real-time engine.

Though it hasn’t been hyped as much, I’m pretty excited that you can have a single project (technically it’s called a “solution”) that includes both automation programming and programming in .NET languages like C# or VB.Net. While you can’t write real-time code in the .NET languages, you can communicate between the .NET and real-time parts of your system over the free ADS communication protocol that TwinCAT uses internally. That means your system can now take advantage of tons of functionality in the .NET framework, not to mention the huge number of 3rd-party libraries that can be pulled in. In fact, did you know that Visual Studio has a Code Generation Engine built in? It would be pretty cool to auto-generate automation code, like ladder logic, from templates. You’d get the readability of ladder logic without the tedious copy/paste/search/replace. (Plus, Visual Studio has integrated source control, but I digress…)
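To give you an idea, here’s a rough sketch of what the .NET side of that conversation might look like, assuming Beckhoff’s TwinCAT.Ads library; the port number and the “MAIN.PartCount” symbol are just placeholders for whatever your real-time program declares:

```csharp
// Minimal sketch, assuming Beckhoff's TwinCAT.Ads .NET library.
// The ADS port and the symbol name are illustrative, not prescriptive.
using System;
using TwinCAT.Ads;

class AdsExample
{
    static void Main()
    {
        using (var client = new TcAdsClient())
        {
            // Connect to the local PLC runtime over ADS
            // (851 is assumed here; TwinCAT 2 runtimes used 801).
            client.Connect(851);

            // "MAIN.PartCount" is a hypothetical DINT in the real-time program.
            int handle = client.CreateVariableHandle("MAIN.PartCount");
            try
            {
                int partCount = (int)client.ReadAny(handle, typeof(int));
                Console.WriteLine("Parts produced: " + partCount);
            }
            finally
            {
                client.DeleteVariableHandle(handle);
            }
        }
    }
}
```

From there it’s a short hop to logging that value to a database, or feeding it to any 3rd-party .NET library you like.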

Will anyone take advantage?

With such a split between PC and PLC programmers, exactly who is Beckhoff targeting with TwinCAT 3? They’ve always been winners with the OEM market, where the extra learning curve can be offset by lower commodity hardware prices in the long term. I think TwinCAT 3 is going to be a huge win in the OEM market, but I really can’t say where it’s going to land as far as integrators are concerned. Similar to OEMs, I think it’s a good fit for integrators that are product focused because the potential for re-use pays for your ramp-up time quickly.

It’s definitely a good fit for my projects. I’m interested to see how it turns out.

The Controls Engineer

An unbelievable buzz at quarter past eight
disturbed my deep thoughts; It’s my phone on vibrate.

It crawls ‘cross the desk, two inches or more
If I leave it, I wonder, will it fall on the floor?

I answer it finally, it’s a privilege you see
to have this fine gilded leash fastened to me.

It turns out it’s Mike in the maintenance shack
He says they’ve been fighting the dispenser out back.

“No problem,” I say, “I’ll come have a look-see”
then closing my phone I gulp back my coffee.

What do I need? My laptop, for sure,
a null modem cable, three adapters, or four?

I’ve got TWO gender benders, that should be enough.
I used to have more; I keep losing this stuff.

I glance at my tool kit, haven’t used it since June-and
I won’t use it again since we got this union.

My battery gives me ’bout ten minutes power
I’ll take my adapter; driving back here’s an hour.

Then out to my car, on my way to Plant 2
they phone me again, three text messages too.

I’m over there fast but no parking in sight.
The overflow lot’s one block down on the right.

Up to the entrance and in through the door,
Remember to sign at the desk, nine-oh-four.

My old ID badge doesn’t work with this scanner
I wonder when she will be back from the can, or

should I just get someone else to come get me?
Mike doesn’t answer, I try Mark… how ’bout Jenny?

“Hi Jenny… never mind, the receptionist’s back.”
The door latch, it closes behind me, click-clack.

Out on the floor passing blue and white panels
Watch out for things painted caution-tape-yellow.

On the right is that cell with the new network NIC,
It didn’t work well with that 5/05 SLC.

To the left is the line I commissioned in May.
It’s sat idle so far; warranty’s up next Friday.

Two more aisles down this way, a left then a right.
Hey! Now I see the dispenser in sight.

“Good morning, Mike,” I said, “How can I help?”
Mike says, “Don’t worry mate, it was just a loose belt.”

When to use a Sealed Coil vs. a Latch/Unlatch?

I just realized something I didn’t learn until at least a year into programming PLCs, and thought it would be a great thing to share for newer ladder logic programmers: when should you use a sealed-in coil vs. a latch/unlatch?

On the surface of it, a latch/unlatch instruction is sometimes frowned upon by experienced programmers because it’s correlated with bad programming form: that is, modifying program state in more than one location in the program. If you have one memory bit that you’re latching and unlatching all over the place, it really hinders readability, and I pity the fool that has to troubleshoot that code. Of course, most PLCs let you use the same memory bit in a coil instruction as much as you want, and that’s equally bad form, so I don’t take too strict of a stance on this. If you are going to use latch/unlatch instructions, make sure you only use one of each (for a given memory bit), and keep them very close together (preferably on adjacent rungs, or even in different branches of the same rung). Don’t make the user scroll, or worse yet, do a cross reference.

As you can imagine, if you’re going to use Latch/Unlatch instructions and keep them very close together, it’s trivial to convert that to a rung with a sealed-in coil, so what, if anything, is the difference? Why have two sets of instructions that do the same thing?

It turns out (depending on the PLC hardware you’re using) that they act differently. On Allen-Bradley hardware, at least, an OTE instruction (coil) will always be reset (cleared to off) during the pre-scan. The pre-scan happens any time you restart the program, most importantly after a loss of power. If you’re using a sealed-in coil to remember you have a pallet present in a zone, you’ll be in for a big surprise when you cycle power. All your zones will be unblocked, and you could end up with a bunch of crashes! On the other hand, OTL and OTU instructions don’t do anything during a pre-scan, so the state remains the same as it was before the power was removed.

For that reason, a latch/unlatch is a great indication of long term program state. If you have to track physical state about the real world, use a latch/unlatch instruction.

On the other hand, a sealed-in coil is a great way to implement a motion command (e.g. “attempting to advance axis A”). In that case you want your motion command to reset if the power drops out.
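If it helps, here’s a toy model of that pre-scan behavior, written in C# rather than ladder. This isn’t vendor code, just an illustration of the rule described above:

```csharp
// Toy model of the Allen-Bradley pre-scan behavior described above:
// OTE (coil) bits are cleared on pre-scan, OTL/OTU bits are left alone.
using System;

class PreScanDemo
{
    static bool palletPresent; // driven by OTL/OTU: retentive state
    static bool advanceAxisA;  // driven by OTE (sealed-in coil): a command

    static void PreScan()
    {
        // On pre-scan, OTE outputs are reset to off...
        advanceAxisA = false;
        // ...while palletPresent keeps whatever value it had before power loss.
    }

    static void Main()
    {
        palletPresent = true;  // a pallet was latched into the zone
        advanceAxisA = true;   // a move was in progress

        PreScan();             // power is cycled; the runtime runs its pre-scan

        Console.WriteLine($"Pallet present: {palletPresent}"); // True: state survives
        Console.WriteLine($"Advance axis A: {advanceAxisA}");  // False: command safely reset
    }
}
```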

I hope that clears it up a bit. I avoided latch/unlatch instructions entirely until I understood these concepts.


Narrowing the Problem Domain

One of the ongoing tasks in industrial automation is troubleshooting. It’s not glamorous, and it’s a quadrant one activity (urgent and important, in Covey’s terms), but it’s necessary. Like all quadrant one activities, the goal is to get it done as fast as possible so you can get back to quadrant two (important but not urgent).

Troubleshooting is a process of narrowing the problem domain. The problem domain is all the possible things that could be causing the problem. Let’s say you have a problem getting your computer on the network. The problem can be any one of these things:

  • Physical network cable
  • Network switch(es)
  • Network card
  • Software driver
  • etc.

In order to troubleshoot as quickly as possible, you want to eliminate possibilities fast (or at least determine which ones are more likely and which are unlikely). If you don’t have much experience, your best bet is to figure out where the middle point is, then isolate the two halves and determine which half seems to be working right and which isn’t. This is guaranteed to reduce the problem domain by 50% (assuming there’s only one failure…). So, in the network problem, the physical cable is kind of in the middle. If you unplug it from the back of the computer and plug it into your laptop, can the laptop get on the internet? If yes, the problem’s in your computer; otherwise it’s upstream. Rinse and repeat.
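If you like to think in code, here’s a little C# sketch of that halving strategy. The component names and checks are made up, and it assumes exactly one failure along the chain:

```csharp
// Hypothetical sketch: binary search over an ordered chain of suspects.
// Each check returns true if everything up to and including that component
// works. Assumes exactly one failure exists somewhere in the chain.
using System;
using System.Collections.Generic;

class Troubleshooter
{
    // Returns the index of the first failing component.
    static int FindFault(IList<(string Name, Func<bool> WorksUpTo)> chain)
    {
        int lo = 0, hi = chain.Count - 1;
        while (lo < hi)
        {
            int mid = (lo + hi) / 2;
            if (chain[mid].WorksUpTo())
                lo = mid + 1;  // everything up to mid is fine; fault is downstream
            else
                hi = mid;      // fault is at mid or upstream
        }
        return lo;
    }

    static void Main()
    {
        var chain = new List<(string, Func<bool>)>
        {
            ("Network card", () => true),   // e.g. card's self-test passes
            ("Cable",        () => true),   // laptop works on the same cable
            ("Switch",       () => false),  // no link light at the switch
            ("Upstream",     () => false),
        };
        Console.WriteLine("First failing component: " + chain[FindFault(chain)].Item1);
    }
}
```

Each test you run cuts the list of suspects in half, which is exactly the 50% reduction described above.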

As you start to gain experience, you start to get faster because you can start to assign relative probabilities of failure to each component. Maybe you’ve had a rash of bad network cards recently, so you might start by checking that.

In industrial automation, I’ve seen a pattern that pops up again and again that helps me narrow the problem domain, so I thought I’d share. Consider this scenario: someone comes to you with a problem: “the machine works fine for a long time, and then it starts throwing fault XYZ (motion timeout), and then after ten minutes of clearing faults, it’s working again.” These annoying intermittent problems can be a real pain, because it’s sometimes hard to reproduce the problem, and it’s hard to know if you’ve fixed it.

However, if you ask yourself one more question, you can easily narrow it down. “Is the sensor that detects the motion complete condition a discrete or analog sensor?” If it’s a discrete sensor, the chance that the problem is in the logic is almost nil. I know our first temptation is always to break out the laptop, and a lot of people have this unrealistic expectation that we can fix stuff like this with a few timers here or there, but that’s not going to help. If you have discrete logic that runs perfectly for a long time and then suddenly has problems, it’s unlikely there’s a problem in the logic. There’s a 99% certainty that it’s a physical problem. Start looking for physical abnormalities. Does the sensor sense material or a part? If yes, is the sensor position sensitive to normal fluctuations in the material specifications? Is the sensor affected by ambient light? Is the sensor mount loose? Is the air pressure marginal? Is the axis slowing down due to wear?

The old adage, “when all you have is a hammer, every problem is a nail”, is just as true when the only tool you have is a laptop. Don’t break out the laptop when all you need is a wrench.

Book Review: The Lights in the Tunnel

I was paging through the Amazon store on my Kindle when I came across a book that caught my eye: The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future (Volume 1)

It’s not every day you come across a book about Automation, and for $6, I figured, “what the heck?”

The author, Martin Ford, is a Computer Engineer from California. To summarize, he’s basically saying the following:

  • Within 80 years, we will have created machines that will displace significantly more than half of the current workforce

This is a topic that interests me. Not only do I have a career in automation, but I’ve previously wondered about exactly the same question that Ford poses. What happens if we create machines advanced enough that a certain segment of the population will become permanently unemployed?

The title of the book comes from Ford’s “Tunnel Analogy”. He models the economy as a tunnel of lights, each light representing a person, its brightness indicating that person’s wealth; the tunnel is lined with other patches of light: businesses. The lights float around interacting with the businesses. Some businesses grow larger and stronger while others shrink and die off, but ultimately the brightness of the tunnel (the sum of the lights) appears to be increasing.

I found the analogy to be a bit odd myself. Actually, I wasn’t quite sure why an analogy was necessary. We’re all pretty familiar with how the free market works. If you don’t get it, I don’t think the tunnel analogy is going to help you. In fact, one excerpt from his description of the tunnel makes me wonder if Ford himself even “gets” the concept of how the economy works:

As we continue to watch the lights, we can now see that they are attracted to the various panels. We watch as thousands of lights steam toward a large automaker’s panels, softly make contact and then bounce back toward the center of the tunnel. As the lights touch the panel, we notice that they dim slightly while the panel itself pulses with new energy. New cars have been purchased, and a transfer of wealth has taken place.

That particular statement irked me during the rest of the book. That’s not a good illustration of a free market; that’s an illustration of a feudal system. In a free market, we take part in mutually beneficial transactions. The automaker has a surplus of cars and wants to exchange them for other goods that it values more, and the consumer needs a car and wants to exchange his/her goods (or promise of debt) in exchange for the car. When the transaction takes place, presumably the automaker has converted a car into something they wanted more than the car, and the consumer has converted monetary instruments into something they wanted more: a car. Both the automaker and the consumer should shine brighter as a result of the transaction.

Ford has confused money with wealth, and that’s pretty dangerous. As Paul Graham points out in his excellent essay on wealth:

Money Is Not Wealth

If you want to create wealth, it will help to understand what it is. Wealth is not the same thing as money. Wealth is as old as human history. Far older, in fact; ants have wealth. Money is a comparatively recent invention.

Wealth is the fundamental thing. Wealth is stuff we want: food, clothes, houses, cars, gadgets, travel to interesting places, and so on. You can have wealth without having money. If you had a magic machine that could on command make you a car or cook you dinner or do your laundry, or do anything else you wanted, you wouldn’t need money. Whereas if you were in the middle of Antarctica, where there is nothing to buy, it wouldn’t matter how much money you had.

There are actually two ways to create wealth: you can make it yourself (grow your own food, fix your house, paint a picture, etc.), or you can trade something you value less for something you value more. In fact, most of us combine these two methods: we go to work and create something that someone else wants so we can trade it for stuff that we want (food, cars, houses, etc.).

Later in the book, Ford makes a distinction between labor-intensive and capital-intensive industries. He uses YouTube as an example of a capital-intensive business because it was purchased (by Google) for $1.65B and doesn’t have very many employees. I can’t believe he’s using YouTube as an example of a capital-intensive industry. The new crop of online companies are extremely low-overhead endeavors; Facebook was started in a dorm room. Again, Ford seems to miss the fact that money is not equal to wealth. Google didn’t buy YouTube for its capital; they bought its audience. Google’s bread and butter is online advertising, so they purchased YouTube because users are worth more to Google than they were to the shareholders of YouTube who sold out. Wealth was created during the transaction because all parties feel they have something more than they started with.

Back to Ford’s premise for a moment: is it possible that we could create machines advanced enough that the average person would have no place in a future economy? I don’t find it hard to believe that we could eventually create machines capable of doing most of the work that we do right now. We’ve certainly already created machines that do most of the work that the population did only decades ago. The question is, can we get to the point where the average person has no value to add?

Let’s continue Ford’s thought experiment for a moment. You and I and half the population are now out of work and nobody will hire us. Presumably the applicable constitutional elements are in place so we’re still “free”. What do we do? Well, I don’t know about you, but if I had no job and I was surrounded by a bunch of other people with no job, I’d be out foraging for food. When I found some, I’d probably trade a bit of it to someone who could sew, in exchange for patching up my shirt. If I had a bit of surplus, I’d probably plant a few extra seeds the next spring and get a decent harvest to get me through the next winter.

I’m not trying to be sarcastic here. I’m trying to point out the obvious flaw in the idea that a large percentage of the population couldn’t participate in the economy. If that were the case, the large part of the population would, out of necessity, form their own economy. In fact, if we’re still playing in Ford’s dreamland here, where technology is so advanced that machines can think and perhaps even nanotechnology is real, I’d probably hang around the local dump and forage for a bit of technology there. The treasures I’d find there would probably put me in more luxury than I currently have in 2010.

So, if you take the thought experiment to the extreme, it breaks down. Given a free society divided into haves and have-nots, where the haves don’t have any use for the have-nots, then what you really have is two separate and distinct societies, each with its own bustling economy. Whether or not there is trade between those two economies, one thing is certain: almost everyone still has a job.

Of course, it’s not like we’re going to wake up tomorrow and technology will suddenly throw us all out of our jobs. The shift in technology will happen gradually over time. As technology improves, people will need to adapt (as we do every day). As I’ve said before, I think a major shift away from the mass consumption of identical items is already underway. As the supply of generic goods goes up, our perceived value of them goes down.

Ford doesn’t seem to participate in automation on a daily basis, so I think he lacks first-hand experience of what automation really does. Automation drives down the cost, but it also increases the supply and reduces the novelty at the same time. Automated manufacturing makes products less valuable, and by contrast it makes people more valuable.

There’s a company out there called Best Made Co. that sells $200 hand-made axes, with a three-week waiting time. That wait is actually a feature: the axe is so valuable to people that they’ll happily wait three weeks for it. It’s made by hand, by people who are passionate about axes. Feature? I think so.

In Ford’s dystopia, when the robber-barons are sitting atop their mountains of widgets that they’ve produced in their lights-out factory, don’t you think one of them might want to buy a sincere story? Wouldn’t they be interested in seeing a movie, or going to church on Sunday, or reading a book? When all of your basic needs are met, these higher-level needs will all see more demand. They’re also hard to automate. Some things have value because they’re done by people. Some things would be worth less if you did automate them:

  • Relationships (with real people)
  • Religion
  • Sports
  • The Arts
  • “Home Cooked” or “Hand-Made”
  • Stories (of origins, extremes, rescues, journeys, relationships, redemption, and the future)

Do you recognize that list? That’s the list of things we do when we’re finished with the drudgery of providing for our survival. We cheer for our sports team on a Sunday afternoon, or go and see an emotional movie on Friday night. Some people buy $200 axes (or iPhones, anyone?) because they come with a fascinating story that they can re-tell to their friends. (Bonus points if it’ll get you laid.)

Ford scoffs at the idea of a transition to a service based economy. He suggests implementing heavy taxes on industry and redistributing that to people who otherwise would have been doing the job the robots are doing, just so they can buy the stuff the robots are producing. He can’t see anything but an economy based on the consumption of material goods. I say: go ahead and automate away the drudgery of daily existence, make the necessities of life so cheap they’re practically free, and let’s get on with building real wealth: strong relationships, a sense of purpose, and a society that values life-long self improvement (instead of life-long accumulation of crap). By making the unimportant stuff less valuable, automation is what will free us to focus more on what’s important.

Good Function Blocks, Bad Function Blocks

In case you’ve never read my blog before, let me bring you up to speed:

  • Write readable PLC logic.

Now, I’m a fan of ladder logic, because when you write it well, it’s readable by someone who isn’t a programmer, and (in North America, anyway) maintenance people frequently have to troubleshoot automation programs and most of them are not programmers.

That doesn’t mean I’m not a fan of other automation languages. I think structured text should be used when you’re parsing strings, and I like to use sequential function chart to describe my auto-mode logic. I’m also a fan of function block diagram (FBD), particularly when working with signal processing logic, like PID loops, etc.

What I’m not a fan of is hard-to-understand logic. Here’s FBD used wisely:

Here’s an example of FBD abuse:

I’m still reading Clean Code: A Handbook of Agile Software Craftsmanship by Robert C. Martin. He’s talking about traditional PC programming, but one of the “rules” he likes to use is that functions shouldn’t have many inputs. Ideally 0 inputs, maybe 1 or 2, possibly 3, but never more than 3. He says if you go over 3, you’re just being lazy. You should just break that up into multiple functions.

I think that applies equally well to FBD. The reader can easily reason about the first image above, but the second one is just a black box with far too many inputs and outputs. If it doesn’t work the way you expect (and it’s doubtful it does), you have to keep diving inside it to figure it out. Unfortunately, once you’re inside, all the variable names change, etc.
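To translate the idea into code terms, here’s a hypothetical C# sketch of the same refactoring: the “before” function takes eight inputs, like the FBD abuse example, and the “after” version groups related inputs into a small type and splits the logic. The valve-sizing math is just an illustration:

```csharp
using System;

class RefactorDemo
{
    // Before: one "block" with eight inputs. The reader can't hold
    // all of these in mind at once, just like the abused function block.
    static double ValveFlowBefore(double upstreamPsi, double downstreamPsi,
        double tempC, double specificGravity, double valveCv,
        bool bypassOpen, int controlMode, double pipeDiaInches)
    {
        // ...a tangle of logic using all eight inputs...
        return 0.0;
    }

    // After: group related inputs into a small type, and split the block
    // so each piece has only a couple of inputs.
    record ProcessConditions(double UpstreamPsi, double DownstreamPsi,
        double SpecificGravity);

    static double PressureDrop(ProcessConditions c) =>
        c.UpstreamPsi - c.DownstreamPsi;

    static double ValveFlow(ProcessConditions c, double valveCv)
    {
        // Standard liquid valve sizing, Q = Cv * sqrt(dP / SG); illustrative only.
        return valveCv * Math.Sqrt(PressureDrop(c) / c.SpecificGravity);
    }

    static void Main()
    {
        var conditions = new ProcessConditions(60.0, 45.0, 1.0);
        Console.WriteLine($"Flow: {ValveFlow(conditions, 12.0):F1} GPM");
    }
}
```

Each small piece is something the reader can verify at a glance, which is exactly what the second FBD example fails to offer.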

I understand the necessity of code re-use, but not code abuse. If you find yourself in the situation of example #2, ask yourself how you can refactor it into something more readable. After all, the most likely person who has to read this later is you.

Clean Ladder Logic

I’ve recently been reading Clean Code: A Handbook of Agile Software Craftsmanship. It’s written by Robert C. “Uncle Bob” Martin, of Agile software fame (among other things). The profession of computer programming sometimes struggles to be taken seriously as a profession, but programmers like Martin are true professionals. They’re dedicated to improving their craft and sharing their knowledge with others.

The book is all about traditional PC programming, but I always wonder how these same concepts could apply to my other obsession, ladder logic. I’m the first to admit that you don’t write ladder logic the same way you write PC programs. Still, the concepts always stem from a desire for Readability.

Martin takes many hard-lined opinions about programming, but I think he’d be the first to admit that his opinions are made to fit the tools of the time, and those same hard-and-fast rules are meant to be bent as technology marches on. For instance, while he admits that maintaining a change log at the top of every source file might have made sense “in the 60’s”, the rise of powerful source control systems makes this obsolete. The source control system will remember every change that was made, who made it, and when. Similarly, he advocates short functions, long descriptive names, and suggests frequently changing the names of things to fit since modern development environments make it so easy to rename and refactor your code.

My favorite gem is when Martin boldly states that code comments, while sometimes necessary, are actually a failure to express ourselves adequately in code. Sometimes this is a lack of expressiveness in the language, but more often laziness (or pressure to cut corners) is the culprit.

What would ladder logic look like if it was “clean”? I’ve been visiting this question during the development of SoapBox Snap. For instance, I think manually managing memory, tags, or symbols is a relic of older under-powered PLC technology. When you drop a coil on the page in SoapBox Snap, you don’t have to define a tag. The coil is the signal. Not only is it easier to write, it prevents one of the most common cardinal sins of beginner ladder logic programming: using a bit address in two coil instructions.

Likewise, SoapBox Snap places few if any restrictions on what you can name your coils. You don’t have to call it MTR1_Start – just call it Motor 1: Start. Neither do you need to explicitly manage the scope of your signals. SoapBox Snap knows where they are. If you drop a contact on a page and reference a coil on the same page, it just shows the name of the coil, but if you reference a contact on another page, it shows the “full name” of the other coil, including the folders and page names of your organization structure to find it. Non-local signals are obviously not local, but you still don’t have to go through any extraneous mapping procedure to hook them up.

While we’re on the topic of mapping, if you’ve read my RSLogix 5000 Tutorial then you know I spend a lot of time talking about mapping your inputs and your outputs. This is because RSLogix 5000 I/O isn’t synchronous. I think it’s a waste to make the programmer worry about such details, so SoapBox Snap uses a synchronous I/O scan, just like the old days. It scans the inputs, it solves the logic, and then it scans the outputs. Your inputs won’t change in the middle of the logic scan. To me, fewer surprises is clean.
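A synchronous scan is simple enough to sketch in a few lines of C#. This is a toy model, not SoapBox Snap’s actual implementation, but it shows why inputs can’t change mid-scan:

```csharp
// Toy model of a synchronous PLC scan: snapshot inputs, solve logic
// against the snapshot, then write outputs. Not SoapBox Snap internals.
using System;
using System.Threading;

class ScanLoop
{
    static bool[] ReadPhysicalInputs() => new bool[16];  // stand-in for real I/O
    static void WritePhysicalOutputs(bool[] o) { }       // stand-in for real I/O

    static void SolveLogic(bool[] i, bool[] o)
    {
        // Example rung: output 0 is a sealed-in coil -- set by input 0,
        // held in by itself, and dropped out by input 1 (stop).
        o[0] = (i[0] || o[0]) && !i[1];
    }

    static void Main()
    {
        var outputs = new bool[16];
        while (true)
        {
            bool[] inputs = ReadPhysicalInputs(); // 1. scan inputs (one snapshot)
            SolveLogic(inputs, outputs);          // 2. solve logic on the snapshot
            WritePhysicalOutputs(outputs);        // 3. scan outputs
            Thread.Sleep(10);                     // crude stand-in for a scan period
        }
    }
}
```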

I’ve gone a long way to make sure there are fewer surprises for someone reading a ladder logic program in SoapBox Snap. In some ladder logic systems, the runtime only executes one logic file, and that logic file has to “call” the other files. If you wanted to write a readable program, you generally wanted all of your logic files to execute in the same order that they were listed in the program. Unfortunately, on a platform like RSLogix 5000, the editor sorts them alphabetically, and to add insult to injury, it won’t let you start a routine name with a number, so you usually end up with routine names like A01_Main, A02_HMI, etc. If someone forgets to call a routine or changes the order in which they execute in the main routine, unexpected problems can surface. SoapBox Snap doesn’t have a “jump to page” or “jump to routine” instruction. It executes all logic in the order it appears in your application, and each routine is executed exactly once per scan. You can name the logic pages anything you want, including using spaces, and you can re-order them with a simple drag & drop.

Program organization plays a big role in readability, so SoapBox Snap lets you organize your logic pages into a hierarchy of folders, and it doesn’t limit the depth of this folder structure. Folders can contain folders, and so on. Folder names are also lenient. You can use spaces or special characters.

SoapBox Snap is really a place to try out some of these ideas. It’s open source. I really hope some of these innovative features find their way into industrial automation platforms too. Just think how much faster you could find your way around a new program if you knew there were no duplicated coil addresses, that all the logic was always being executed, and that it always executes in the order shown in the tree on the left. The productivity improvements are tangible.

Off-the-Shelf or Custom Automation?

If you’re like me, you’re a fan of customizing, and certainly in the automation industry you see a lot of custom control solutions. In fact, there’s always been a long-running debate over the value of custom solutions vs. the value of off-the-shelf “black box” products.

I’ve noticed this rule: the closer you get to the production line, the more custom things you’ll see. Just look at the two ends of this extreme: production lines are almost always run by PLCs with custom logic written specifically for that one line, but the accounting system is almost always an off-the-shelf product.

There’s a good reason for this. Accounting methodologies are supposed to be standardized across all companies. Businesses don’t claim that their value proposition is their unique accounting system (unless you’re talking about Enron, I suppose). Automation, however, is frequently part of your business process, and business processes are the fundamental value proposition of a company. FedEx should definitely have a custom logistical system, Amazon needs to have custom order fulfillment, and Google actually manufactures their own servers. These systems are part of their core business strengths.

So when should a company be buying off-the-shelf automation solutions? I say it’s any time that “good enough” is all you need. You have to sit down and decide how you’re going to differentiate yourself from the competition in the mind of your customers, and then you have to focus as much energy as possible on achieving that differentiation. Everything else needs to be “good enough”. Everything else is a cost centre.

If you follow that through logically, it means you should also seek to “commoditize” everything in the “everything else” category. That bears repeating: if it’s not a core differentiator for your company, you will benefit if it becomes a commodity. That means if you have any intellectual property sitting there in a non-critical asset, you should look for ways to disseminate that to the greater community. This is particularly important if it helps the industry catch up to a leading competitor.

There are lots of market differentiators that can depend on your automation: price, distribution, and quality all come to mind. On the other hand there are other market differentiators that don’t really depend on your automation, like customer service or user-friendly product designs. Ask yourself what category your company fits in, and then you’ll know whether custom automation makes sense for you.

Quick thoughts about Automation

I think that once you’ve been in this industry for a few years, you need to reach out to others to share some of the wisdom you’ve learned. Most of the knowledge we carry around can do other people a lot more good than it will do us again in the future, so sharing needs to be a cultural norm. With that thought in mind, here are some quick automation-related thoughts I’d like to share:

  1. Inexperienced engineers appear to work faster, but their solutions are less maintainable. [tweet this]
  2. Choose open systems over proprietary, when possible [tweet this]
  3. Automate your own job ruthlessly before you automate anything else. It pays back. [tweet this]
  4. Beware of employers who spend 30 minutes reprimanding you about a 15 minute line on your timesheet. [tweet this]
  5. If you can’t find a more powerful tool, make your own. [tweet this]
  6. When estimating a project, if you’re counting in hours, you’re not being realistic. Use half-days. [tweet this]
  7. Don’t take shortcuts writing a program if it’s at the expense of readability. It doesn’t pay off. [tweet this]
  8. Automation doesn’t help if you don’t understand the process you’re automating. [tweet this]
  9. Blame is reactive. “What can we do differently next time?” is proactive. [tweet this]
  10. Innovate is a verb. This is not a coincidence – it requires constant action. [tweet this]
  11. Make things of value, not emails. [tweet this]

Feel free to share your own nuggets of wisdom below.

The “Almost There” Paradox

We’re all probably familiar with the idea that it takes half the time to get to 90% done and the other half to finish the last 10%. This is a staple of project management.

I think there’s actually a narrower class of really dangerous solutions that you only become familiar with after you experience one. There’s a whole set of problems where the obvious solution gets you 95 to 98% of the way to your performance spec really quickly, but makes it almost impossible to reach 100% through incremental improvements. The reason I say they’re dangerous is that the feeling of being “almost there” prevents you from going back to the drawing board and coming up with a completely different solution.

I can remember a machine vision job from years ago where the spec was “100% read rate”. I only got it to about 94%, and someone else gave it a try. He got it up over 96%, but 100% was out of reach given the technology we had.

Experiences like that make you conservative. Now I unconsciously filter possible solutions by their apparent “flakiness”. I’m much more likely to add an extra prox to a solution to verify a position than to rely on timers or other kinds of internal state, because the latter are more prone to failure during system starts and stops. I press for mechanical changes when I used to bend under the pressure to “fix it in software”.

Still, you have to be careful. It’s easy to discount alternatives just because they bear some passing resemblance to a bad experience you had before. You have to keep re-checking your assumptions. Unfortunately, rapid prototyping usually fails to uncover the “almost there” situation I’m talking about. If you prototype something up fast, and it works in 97% of your lab tests, you’ll probably think you have a “proof of concept”, and go forward with it.

The best way to test new solutions is to put them into production on a low risk system. If you’re an integrator, this means having a really good relationship with your customer (chances are you need their equipment to run your tests). If you work for a manufacturer, you can usually find some out-of-the-way machine to test on before you go all-in.