Author Archives: Scott Whitlock

About Scott Whitlock

I'm Scott Whitlock, an "automation enthusiast". By day I'm a PLC and .NET programmer at ETBO Tool & Die Inc., a manufacturer.

Introduction to Coordinated Motion Control

Let’s assume you already know everything there is to know about motion control… you can jog a servo axis, home it, make it move to a position using trapezoidal or s-curve motion. Now what?

Sooner or later you’re going to find yourself with 2 or more axes and you’re going to want to do something fancy with them. Maybe you have an X/Y table and you want it to move at a perfect 45 degree angle, or you need it to follow a curved, but precise, path in the X/Y plane. Now you need coordinated motion.

Coordinated motion controllers are actually quite common. Every 3, 4, or 5 axis mill uses coordinated motion, as does every robot controller, and even those little RepRap 3D printers. What you may not know is that most integrated motion solutions you might encounter in the PLC world also offer coordinated motion (a.k.a. interpolated motion) control. If you’re from the Allen-Bradley world, the ControlLogix/CompactLogix line of PLCs lets you use the Motion Coordinated Linear Move (MCLM) and Motion Coordinated Circular Move (MCCM) instructions, along with a few others. If you’re from the Beckhoff world, you can purchase a license for their NC I product, which offers a full G-code interpreter (G-code being the language milling machines and 3D printers speak).

Under the hood, a coordinated motion control solution offers several features necessary for a workable multi-axis solution. The first is a path planner, the second is synchronization.

The job of the path planner is fairly complex. If you say that you need to move your X/Y table from point 5,2 to point 8,3, then it needs to take the maximum motion parameters of both axes into account to make sure that neither axis exceeds its torque, velocity, or acceleration/deceleration limits, and typically it will limit the “velocity vector” as well, meaning the actual speed of the point you’re moving in the X/Y plane. Furthermore, it must create a motion profile for each axis such that, when combined, they cause the tooling to move in a straight line between those points. After all, you may be trying to move a cutting tool along a precise path, and you need to cut a straight line. To make matters far more complicated, if the controller receives another command while the motion is already in progress (for instance, to move to point 10,5 after the initial move to 8,3), then it will “blend” the first move into the second, depending on rules you give it.
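
To make the straight-line part concrete, here’s a minimal Structured Text sketch of the idea (my own illustration, not any vendor’s planner, and all the names are made up): given a start point, an end point, and a path velocity limit, it generates X and Y setpoints that stay on the line and arrive together. A real planner also enforces per-axis torque and acceleration/deceleration limits, which this toy version ignores.

    FUNCTION_BLOCK FB_LinearInterpSketch
    VAR_INPUT
        StartX, StartY : LREAL;  // start point
        EndX, EndY     : LREAL;  // end point
        PathVel        : LREAL;  // velocity limit along the path (units/s)
        CycleTime      : LREAL;  // task cycle time in seconds
    END_VAR
    VAR_OUTPUT
        SetX, SetY : LREAL;      // per-axis position setpoints for this scan
        Done       : BOOL;
    END_VAR
    VAR
        Dist, Travelled, Ratio : LREAL;
    END_VAR

    Dist := SQRT((EndX - StartX) * (EndX - StartX) + (EndY - StartY) * (EndY - StartY));
    IF Dist <= 0.0 THEN
        SetX := EndX;
        SetY := EndY;
        Done := TRUE;
        RETURN;
    END_IF;

    IF NOT Done THEN
        Travelled := Travelled + PathVel * CycleTime;   // advance along the path
        IF Travelled >= Dist THEN
            Travelled := Dist;
            Done := TRUE;
        END_IF;
    END_IF;

    Ratio := Travelled / Dist;
    SetX  := StartX + Ratio * (EndX - StartX);   // both axes scaled by the same ratio,
    SetY  := StartY + Ratio * (EndY - StartY);   // so the commanded point stays on the line

Feed SetX and SetY to the individual axis controllers every scan and both axes ramp in proportion; the real planner layers acceleration shaping and move blending on top of this.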

For instance, let’s say you start at 0,0, then issue a move to 10,0 but then immediately issue a second move to 10,10. You have the option of specifying how that motion will move through the 10,0 point. If you issue a “fine” move, then the X axis has to decelerate to a complete stop before the Y axis starts its motion. However, you can also tell it that you only care that you get within 1 unit of the point, in which case the Y axis will start moving as soon as you get to point 9,0, and the tooling will follow a curved path through point 10,1 on its way to 10,10 without ever moving through point 10,0. This is actually useful if you’re more concerned with speed than accuracy. Another option you have is to issue a linear move to 9,0, followed by a circular move to 10,1 (with the center at 9,1), followed by a linear move to 10,10. That will cause the tooling to follow a similar path, but in this case you’re in precise control of the curve that it takes. In neither case will either axis stop until it gets to the final point.
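
If you want to see why that explicit linear-circular-linear version works, the geometry of the corner is easy to check yourself. Here’s a small Structured Text sketch (variable names are mine; a real controller does this for you) that computes a point on the quarter circle from 9,0 to 10,1 about center 9,1. The tangents at the two ends line up with the incoming and outgoing straight moves, which is why nothing has to stop:

    VAR
        Progress : LREAL;   // 0.0 at the start of the arc (9,0), 1.0 at the end (10,1)
        Angle    : LREAL;
        ArcX     : LREAL;
        ArcY     : LREAL;
        PI       : LREAL := 3.14159265358979;
    END_VAR

    // Quarter-circle blend: center (9,1), radius 1.
    // At Progress = 0 the tangent points in +X (matching the incoming linear move);
    // at Progress = 1 it points in +Y (matching the outgoing move), so the axes
    // never have to stop at the corner.
    Angle := (Progress - 1.0) * PI / 2.0;   // sweeps from -90 degrees up to 0 degrees
    ArcX  := 9.0 + COS(Angle);
    ArcY  := 1.0 + SIN(Angle);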

The other important feature of coordinated motion is synchronization of the axes. Typically the controller delegates lower level control of the axes to traditional axis controllers, and the coordinated motion controller just feeds the motion profiles to each axis. However, it’s imperative that each axis starts its motion at precisely the same time, or the path won’t be correct in the multi-dimensional space. That requires some kind of clock synchronization, and that’s the reason why you see options for things like Coordinated System Time Masters on ControlLogix and CompactLogix processors.

That was very brief, but I hope it was informative. If you do have to tackle coordinated motion on your next project, definitely allocate some time for reading your manufacturer’s literature on the subject: the learning curve is fairly steep, but it’s clearly worth climbing if your project demands it.

Where to Draw the Line(s)?

One of the most confusing things that new programmers face is how to break down their program into smaller pieces. Sometimes we call this architecture, but I’m not sure that gives the right feel to the process. Maybe it’s more like collecting insects…

Imagine you’re looking at your ladder logic program under a microscope, and you pick out two rungs of logic at random. Now ask yourself, in the collection of rungs that is your program, do these two rungs belong close together, or further apart? How do you make that decision? Obviously we don’t just put rungs that look the same next to each other, like we would if we were entomologists, but there’s clearly some measure of what belongs together, and what doesn’t.

Somehow this is related to the concept of Cohesion from computer science. Cohesion is this nebulous measure of how well the things inside of a single software “module” fit together. That, of course, makes you wonder how they defined a software module…

There are many different ways of structuring your ladder logic. One obvious restriction is order of execution. Sometimes you must execute one rung before another rung for your program to operate correctly, and that puts a one-way restriction on the location of these two rungs, but they could still be located a long way away.

Another obvious method of grouping rungs is by the physical concept of the machine. For instance, all the rungs for starting and stopping a motor, detecting a fault with that motor (failure to start), and summarizing the condition of that motor are typically together in one module (or file/program/function block/whatever).

Still we sometimes break that rule. We might, for instance, have the motor-failed-to-start-fault in the motor program, but likely want to map this fault to an alarm on the HMI, and that alarm will be driven in some rung under the HMI alarms program, potentially a long way away from the motor program. Why did we draw the line there? Why not put the alarm definition right beside the fault definition that drives it? In most cases it’s because the alarms, by necessity, are addressed by a number (or a bit position in a bit array) and we have to make sure we don’t double-allocate an alarm number. That’s why we put all the alarms in one file in numerical order, so we can see which numbers we’ve already used, and also so that when alarm # 153 comes up on the HMI, we can quickly find that alarm bit in the PLC by scrolling down to rung 153 (if we planned it right) in that alarms program.
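
For illustration, here’s what that alarms file boils down to in Structured Text terms (in ladder it’s one rung per alarm). The tag names are hypothetical, but the shape is the same: a global bit array the HMI polls, with one assignment per alarm number, kept in numerical order:

    VAR_GLOBAL
        Alarms : ARRAY[0..255] OF BOOL;   // polled by the HMI as a bit array
    END_VAR

    // In the alarms program, kept in numerical order so that alarm #153 on the
    // HMI is easy to find, even though the fault logic lives in the motor program.
    Alarms[152] := Conveyor1Overload;
    Alarms[153] := Motor1FailedToStart;
    Alarms[154] := Motor1Overload;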

I just want to point out that this is only a restriction of the HMI (and communication) technology. We group alarms into bit arrays for faster communication, but with PC-based control systems we’re nearing the day when we can just configure alarms without a numbering scheme. If you could configure your HMI software to alarm when any given tag in the PLC turned on, you wouldn’t even need an alarms file, let alone the necessity of separating the alarms from the fault logic.

Within a program, how should we order our rungs? Assuming you satisfied the execution order requirement I talked about above, I usually fall back on three rules of thumb: (1) tell a story, (2) keep things that change together grouped together, and (3) separate the “why” from the “how”.

What I mean by “tell a story” is that the rungs should be organized into a coherent thought process. The reader should be able to understand what you were thinking. First we calculate the whozit #1, then the whozit #2, then the whozit #3, and then we average the 3. That’s better than mixing the summing/averaging with the calculating of individual whozits.

Point #2 (keep things that change together grouped together) is a pragmatic rule. In general it would be nice if your code was structured in a way that you only had to make changes in one place, but as we know, that’s not always the case. If you do have a situation where changing one piece of logic means you likely have to change another piece of logic, consider putting those two rungs together, as close as you can manage.

Finally, “separate the why from the how”. Sometimes the “how” is as simple as turning on an output, and in that case this rule doesn’t apply, but sometimes you run across complicated logic just to, say, send a message to another controller, or search for something in an array using a loop (yikes). In that case, try to separate the complexity of the “how” from the upper-level flow of the “why”. Don’t interrupt the story with gory details; put them in an appendix. Either stuff that “how” code at the bottom of the same program, or, better yet, move it into its own program or function block.
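
For example (with made-up names), here’s the kind of “how” that belongs in its own function in Structured Text: the “why” level just asks which slot holds the part, and doesn’t care that a loop answers the question:

    FUNCTION F_FindPartSlot : INT
    VAR_IN_OUT
        Slots : ARRAY[1..20] OF INT;   // hypothetical table of part IDs by slot
    END_VAR
    VAR_INPUT
        PartId : INT;                  // the part we are looking for
    END_VAR
    VAR
        i : INT;
    END_VAR

    F_FindPartSlot := -1;              // -1 means not found
    FOR i := 1 TO 20 DO
        IF Slots[i] = PartId THEN
            F_FindPartSlot := i;
            EXIT;                      // stop at the first match
        END_IF;
    END_FOR;

Back in the main program, the story then reads as a single line: Slot := F_FindPartSlot(Slots := PartTable, PartId := CurrentPart);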

None of these are hard and fast rules, but they are generally accepted ways of managing the complexity of your program. You have to draw those lines somewhere, so give a little thought to it at first, and save your reader a boatload of confusion.

The TwinCAT 3.1 Review

Earlier this year I reviewed TwinCAT 3 and I admit that it was a less than stellar review. Up to now TwinCAT 3 has seemed like a beta all the way.

Well, I’m pleased to say that after using and deploying TwinCAT 3.1 for several weeks, it’s a significant improvement over its predecessor, and brings it into the realm of “release-quality” software. There are several improvements I’ll try to highlight.

64-bit! Finally! Yes, it’s so nice to get rid of the old 32-bit copy of Windows 7 and get back to the world of 64-bit computing.

The TwinSAFE safety editor is so much better. While the old one would take several minutes to verify my safety program before downloading it, and several minutes to even get online with the safety controller, this new version does it in seconds. It was so fast in comparison that I thought it didn’t work the first time. Thank goodness because that was a major shortcoming of the old version.

I’m pleased to report that Beckhoff has made changes to the source code file format to make them more friendly to source control applications (like Subversion or Mercurial). Doing a “diff” between 2 file versions now gives you a really good idea of what changed, rather than some obscure XML code. This is one of the first things I complained about and I didn’t expect it to get changed, so I’m really happy to see that. (Also remember that TwinCAT 2 files were completely unfriendly to Source Control; they were just a big binary blob!) Note that when you upgrade your TwinCAT 3.0 solution to TwinCAT 3.1, it has to do a conversion, and it’s one-way. Keep a backup of your old version just in case. In our case, the upgrade went well, except for references, but that was ok…

The old library manager in each PLC project has been replaced by a References folder, which you will be familiar with if you do any sort of .NET programming. You can now add and remove library references right in the solution tree, which is nice. That’s one thing that didn’t get upgraded automatically. I had to remove my old references to the TwinCAT 3.0 standard libraries and re-add them as TwinCAT 3.1 libraries. Not a big deal once I figured out what was causing the build error.

The scope viewer has been removed from the system tray icon menu and pulled into the Visual Studio interface. I think this is just part of their attempt to pull everything into one environment, which I applaud.

Unfortunately I still had one blue-screen-of-death when I tried to do an online change, but I couldn’t reproduce that problem. I’m using a commercial desktop computer, not an industrial PC for my testing, and Beckhoff says they won’t support the software if it isn’t installed on Beckhoff hardware. Let that be a warning to you: even though you pay significantly more for the software license in order to run it on non-Beckhoff hardware, they will not warranty software bugs in that case. That doesn’t mean you can’t get local support, but it does mean that if you have a legitimate bug report, they won’t help you unless it can be reproduced on Beckhoff hardware.

While upgrading, I also upgraded the operating system on the PC from Windows 7 32-bit to Windows 7 64-bit. That created a small problem because it caused the Beckhoff generated system ID to change, and the license is attached to the system ID. That means I have to go through generating a license file and applying the response file again. Hopefully there are no problems with duplicate licensing, as it is the same physical computer. I’m still waiting for a response though.

I did run into a couple of gotchas during the upgrade. First, TwinCAT 3.1, for whatever reason, requires Intel’s Virtualization Technology extensions (VT-x) to be enabled in the BIOS, and mine weren’t. It turns out that I needed to flash the BIOS to even get that option. Secondly, we use Kaspersky Endpoint Security version 8 for our enterprise anti-virus solution. It turns out that both version 8 and version 6 prevent TwinCAT 3.1 from booting the PLC runtime into Run Mode on startup. Without Kaspersky installed it worked fine. I eventually tried Microsoft Security Essentials (the free anti-virus solution from Microsoft) and that seems to work well. Some anti-virus is better than none, I figure.

The upgraded system has been in production for nearly 2 weeks, and seems to be working well (no problems that I can attribute directly to TwinCAT, at any rate).

In summary, if you’re looking to jump into TwinCAT 3, I know I said to wait in my last review, but I now think that TwinCAT 3.1 is a good solid base if you’re looking to get your feet wet.

Secure Design of Safety Critical Software-Controlled Devices

As a licensed P.Eng., I recently received a copy of a draft version of a document regarding engineering practices for the design of safety critical software. It seemed to be well thought out and written, but one of the last statements caught me by surprise. It basically said that your design should prevent Stuxnet-like attacks.

I think it’s great that the profession is starting to take this stuff seriously, but I was a bit shocked at the implication of this document. Primarily I was concerned because Stuxnet was nothing like an ordinary attack. Defending against it would be like defending your office building against a laser guided bomb. Stuxnet was a nation-state-backed attack with huge funding and resources, going after a military/industrial target.

What does that mean for us practicing engineering in, say, the automotive industry? Do we just get to brush the whole cybersecurity thing aside because a Stuxnet-like attack against our facility has absolutely zero chance of ever being launched? It doesn’t seem like that’s what the document was trying to say.

In fact, is Stuxnet even a relevant example for “Safety Critical Software”? Did Stuxnet actually affect any safety critical systems, like a Safety PLC? All it did was damage equipment. Was anyone hurt? Was anyone in danger of being hurt? I guess I just disagree with the use of Stuxnet as an example here.

That’s not to say cybersecurity doesn’t need some attention from our profession. In particular, consider the case of so-called “Safety PLCs”. Here you have a device, typically sitting on some kind of network, definitely responsible for safety critical systems. There are at least 2 pieces of software installed on a Safety PLC: the firmware (the operating system, communication system, and runtime of the device) and the user-configurable logic that you download when you’re programming the system. Compromise of either of these pieces of software certainly poses a grave risk to human safety.

The user-configurable logic is supposed to be password protected. This is one step up from normal PLCs which rarely offer any kind of authentication. Now, when TUV, or whoever, is certifying these devices, they are definitely checking that once it’s locked, you can’t update the user-configurable software without a password. In fact, TUV has access to the design, so I *hope* someone there is well versed in cyber security and knows what they’re talking about. For instance, if it was found that authentication was being done in the programming software (client application) and not on the device itself, it would be completely useless security, because someone could easily bypass it. But let’s assume that nobody building these things is doing anything that blatantly wrong.

So even if the client/device protocol is rock solid, is the firmware bulletproof? I think this is unlikely. Every time a security researcher runs even the most basic of security audits against PLCs they usually find tons of obvious exploits. You have to realize that most of these devices are just embedded computers running off-the-shelf real-time operating systems like VxWorks. Usually it’s using the standard VxWorks networking protocols, which are sometimes known to have vulnerabilities (in certain versions) and are unfortunately rarely updated in the field. That’s not even counting deliberate backdoors and debug code left in the software. Does TUV do a security audit of every design? Maybe, but I doubt it. Even if they do, are they doing repeated security audits against older devices? When new vulnerabilities are found in these embedded operating systems that impact existing devices, are they requiring them to be recalled? I’ve never heard of such things.

I’m pretty sure the only reason we haven’t heard about this yet is because even security researchers who’ve woken up to the concept of PLCs and SCADA systems in the past 2 years still have no idea that Safety PLCs actually exist. I don’t think it’ll be too long until they find out.

Designing a device like this that’s intrinsically secure against hacking seems almost impossible to me. Whatever software module is responsible for receiving the user-configurable safety program and storing it in the persistent memory of the device is necessarily network-facing. Any vulnerability in that communication module could be exploited, and then you have the ability to bypass all the security protocols on the device, and write whatever user-configurable safety program you want.

So if this document says we have to design our Safety Critical systems to withstand a Stuxnet-like attack, and all the Safety PLCs we could use are likely vulnerable via communication module software exploits, and even firewalls and airgaps don’t seem to be able to defend against this kind of attack (just ask the employees of the Natanz facility in Iran), then where does that leave us? Is a P.Eng. supposed to just throw up their hands and say “sorry, it can’t be done?” Not likely.

The TwinCAT 3 Review

Edit: Note that I have posted an updated TwinCAT 3 Review in 2014.

So back in 2010 I wrote about my first impression of TwinCAT 2 and later that same year I wondered if automation programmers would accept TwinCAT 3. I was lucky enough to be involved in the TwinCAT 3 beta, and now that the 32-bit version of TwinCAT 3 is available for general release we’ve deployed 2 production systems based on TwinCAT 3, and will likely deploy more in the future. What follows are my impressions of the current state of TwinCAT 3 based on our experiences with those 2 systems.

I think the best way I can describe TwinCAT 3 to the non-initiated is by comparing it with Allen-Bradley’s ControlLogix platform with their RSLogix 5000 programming environment. I say that because I’m familiar with that platform, and so are most of my North American readers (I assume). It speaks well of Allen-Bradley that they are the de-facto default control system platform around here.

We often fall back on car analogies, and I don’t want to break with tradition. If ControlLogix is the Ford Taurus of control systems (common, reliable, with lots of performance for most tasks, lots of room, and fairly maintenance free) then TwinCAT 3 is something like the Rally Fighter. That is, it’s road legal, fairly rare, requires lots of TLC and understanding and may not be as reliable, but will take you places you’ll never get to go in a Ford Taurus.

When it comes to speed, TwinCAT 3 with Beckhoff’s EtherCAT I/O is a beast. I can’t stress this enough. We’re running both TwinCAT 3 production systems with a 0.5 millisecond logic scan time and a 0.5 millisecond I/O bus scan time, and we’re only using about 10% to 15% of the available horsepower of each system. You can see and react to things in the TwinCAT 3 system that you’ll just miss in a ControlLogix processor. For instance, in one case we’re driving an output off of an absolute encoder, and the repeatability of turning on that output is much better than anything we’ve seen with any other controller.
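
There’s nothing exotic about that output logic; it’s just a position-window compare like the Structured Text sketch below (the names and numbers are made up). The repeatability comes from how often it runs and how fast the I/O gets there, not from the code:

    VAR
        EncoderPos  : DINT;            // absolute encoder position from the EtherCAT terminal
        WindowStart : DINT := 12000;   // hypothetical "on" position
        WindowEnd   : DINT := 12500;   // hypothetical "off" position
        CamOutput   : BOOL;            // mapped to a physical output
    END_VAR

    // Turn the output on while the encoder sits inside the target window.
    // In a 0.5 ms task with a 0.5 ms EtherCAT cycle, the on/off edges repeat
    // within a scan or two, which is where the repeatability comes from.
    CamOutput := (EncoderPos >= WindowStart) AND (EncoderPos < WindowEnd);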

Furthermore, data accessibility is light years beyond any traditional PLC. In one test I moved a 400 kB block of data from the real-time (ladder logic program) to a .NET program running under Windows on the same PC, and all I can say is that it’s nearly instantaneous. That’s an advantage of having the HMI and real-time executing on the same physical hardware.

That’s not even getting into the new C++ integration (which I haven’t used).

Any TwinCAT 3 vs. ControlLogix system comparison will also certainly favor the Beckhoff solution when it comes to price. At least I haven’t seen any case yet where that’s not true by a significant margin. I can’t quote exact prices, obviously, but I’m confident that’s generally a true statement.

Does that mean I think the competition is a hands-down blow-out in favor of TwinCAT 3? No. In fact if you’re considering trying TwinCAT 3 I can’t even go so far as to give you my blessing right now. It has problems.

TwinCAT 3 crashes with a blue screen. Regularly. There, I said it. That’s the dirty secret. Everyone’s fears about PC-based control on the factory floor were around stability, and in our experience TwinCAT 3 isn’t stable yet. This is odd to me because we have another system with TwinCAT 2 and it’s solid as a rock. Unfortunately our TwinCAT 3 system crashes with a blue screen regularly, and the crash report always shows that it’s some kind of memory violation in their Tc*.sys files, which are the system files responsible for running their real-time system under ring 0 of the OS (as far as I understand). I had never even seen a blue screen with Windows 7 until trying out TwinCAT 3. There are actually two different situations in which it crashes: (1) randomly, and (2) when I try to do an online change.

Beckhoff’s response was that we were running it on 3rd party hardware. They loaned us a Beckhoff industrial PC to try. We tried it for a week and we didn’t see any random crashes, but it still crashed the real-time when I tried to do an online change, and it also crashed the IDE almost every time I recompiled the program. In fact that’s the reason we had to stop the test with Beckhoff’s hardware after only one week. I needed to compile a change and couldn’t get it to compile. It did work on our 3rd party PC (an HP desktop PC).

Now, I don’t think these are insurmountable problems. Beckhoff is still coming out with new versions on a regular basis. Their new support for 64-bit windows operating systems is on the horizon. TwinCAT 2 seems stable and I’m sure TwinCAT 3 will get there eventually. However, for the moment, if you’re considering the plunge, I suggest waiting about a year before bothering to check it out. If you just need the speed, consider TwinCAT 2, as even though it doesn’t take advantage of multiple cores it will likely do what you need and is a much more mature product at this point.

As Technology Buffs, We’re Blinded to the Obvious Uses of Technology

Are you old enough to remember the early 90’s? Almost nobody was “online” then. It’s hard to believe how much the internet has changed modern life since then. I just went through my bookshelf and got rid of all the reference books (which I could have done years ago) because the internet makes them obsolete.

Do you remember having to look up some arcane technical detail in a book? Do you remember how painful that was? Do you remember connecting to a BBS using a phone line and a 2400 baud modem? I do.

Back then when we tried to imagine the future, we (as geeks) really got it wrong, and surprisingly so. We were the ones who were supposed to see the possibilities, yet we consistently imagined the things we wanted to do with the technology instead of the things the average person wants to do with it. You see, geeks aren’t average people, so we’re sometimes blind to the obvious.

As geeks, we imagined having our own robot to do our bidding, or some kind of virtual reality headset for driving cars around on Mars. In reality, the killer app for the masses was the ability to take a picture of something, write a short note about it, and send it to all your “friends”. The funny thing is, if you asked a geek in the early 90’s if that feature could be the basis of a company valued in the billions of dollars, they’d have laughed you out of the room. The idea just wasn’t that revolutionary. After all, we’d had fax machines for several years, and email/usenet were already out there. Geeks already had access to this technology, and thought nothing of it.

Average people, if they even saw this technology, didn’t see what was under their noses either, simply because it wasn’t sexy yet. Let me be clear, when I say “sexy” I mean it literally. The average young person isn’t like a geek. They don’t have robots and virtual reality sets on their mind, they want to know who’s going to be at the party on Saturday, who said what about who, and if so and so is available. Modern social media succeeds because it helps people satisfy the most basic of “normal” wants.

Where am I going with this? When you look at the bleeding edge of technological progress today it’s easy to spot the trends. Open source hardware has really taken off, specifically 3D printing and other “home” fabrication technologies. We’re also seeing an explosion of sensors, not just in your phone, but also plugged into consumer devices, with most of them uploading their data to central servers (e.g. the popular Nest Home Thermostat).

We all want to know what the next big thing will be, and we’re hopelessly bad at predicting it, but we still like to play the game. If we want to know where this new hardware renaissance is leading, we need to look at the space where these new technologies intersect the wants of average people. Only about 1% of users are the kind that participate by actually creating new content, so I definitely don’t see a future where everyone is designing new things in Google Sketchup and printing them out on the 3D printer they bought at Wal-mart. The thing about 3D printers is that they mostly print things we can already buy much more cheaply (little plastic trinkets), especially when it’s something a lot of people want, because then there’s an economy of scale to support manufacturing it.

There is something that everybody wants to create at home, and can’t be mass produced in a factory on the other side of the world: freshly prepared food. Gourmet food, even. People pay big bucks for fancy coffee makers that make fancy coffee from packets with barcodes containing individualized brewing instructions. It’s not a big stretch to imagine a machine that can whip up fancy cocktails, or dozens of fancy hors d’oeuvres for a party, or even print out your kids’ drawings on their Christmas cookies with icing from a 3D printer. Maybe a gizmo that carves an apple into the shape of their favorite cartoon character? (Of course, you know these will be re-purposed to make snacks for bachelorette parties, right?)

That’s the easy stuff. Honestly, Eggs Benedict seems like it’s well within the range of possibilities, and who wouldn’t want to wake up on Sunday to the sizzle of eggs and bacon, along with the usual coffee brewing?

Of course, if we automate the food preparation and we’re collecting all this data with sensors, it won’t be long until we’re automating the control of our caloric intake.

That’s my guess anyway: all this new technology’s going to end up in your kitchen. Which is cool with me (that doesn’t mean I can’t still have my 3D printer…)

Finding Inexpensive 24VDC Power Supplies

I happen to have been in the market for some switching 24VDC power supplies that could source some decent current and put up with a little abuse. It turns out there’s no need to spend hundreds of dollars for one of the fancy DIN-rail mounted models when all you need is a box to sit on your test bench.

You can order them directly from Chinese suppliers, but it’s more convenient to order from someone domestic. I’ve found a good place to look is Amazon (a close second is eBay, but there you’re really buying from China in about 50% of cases). On Amazon you can get a 15A 24VDC power supply for well under $30:

Obviously this is a “buyer beware” type of situation, but I’ve purchased 3 similar power supplies so far, and haven’t had any problems. In one case the equipment under test shorted out due to a blown capacitor, and the power supply turned itself off until I cycled the power to the supply. It’s still working fine.

You may think that a $300 brand name power supply is what you need, but if you don’t need five 9’s of reliability then I’d like to point out you can buy ten of these cheap supplies for the same price, so buy 3 and keep 2 on the shelf, and spend the other $210 on other cool gadgets! 🙂

The Trap of Enterprise Software

Do you know how ERP software is sold? It’s fairly straightforward. Every CEO has a similar problem… they want to know how much everything in their company costs, vs. how much value it brings to the company. ERP vendors meet with executives and show them pretty graphs.

Then, of course, the company signs a contract with the ERP vendor for hundreds of thousands of dollars to install, configure, and support the system for one year. By the time the project is complete it costs twice what it was supposed to and all the end users are frustrated because it’s so tedious to use.

How does this happen? Why did the CEO think they were buying into a low-risk off-the-shelf product?

The CEO was lulled into a false sense of security by the pretty graphs. Any programmer can tell you that charts are a solved problem. If you give any programmer a database full of rich and meaningful data, they can whip up pretty reports, even very complicated ones, in a matter of days. The hard part is filling the database with accurate and complete information! If you buy an off-the-shelf ERP application, getting the data into their system is the hard part. You either need to write tons of custom code to copy the data from existing systems, or you need to change your business workflows to conform to this new software.

Consider another favorite of enterprise software applications: OEE (Overall Equipment Effectiveness). In order to calculate your true OEE you need the following:

  1. Availability
  2. Performance
  3. Quality

It’s that first item, Availability, that’s really interesting. Basically it’s the ratio of actual equipment uptime vs. scheduled production time. Let’s say you’re buying an “automated OEE solution,” so it’s going to pull the status of the equipment (running or not running) from the PLC. That alone doesn’t get you very far, because it isn’t enough information to calculate your Availability. You need a production schedule to begin with. Who enters this data? Does it already exist? In what form? Excel? A proprietary ERP or MES system? What custom code has to be written to get this production schedule into a format that the “shrink-wrapped” OEE software can understand? Does the company have to convert to using the OEE software’s proprietary scheduling system? Do we have to enter the schedule twice?

If you have to do all that work, why are you buying an OEE software package? If the production schedule is already in your ERP system, and you have in-house expertise to get the running status out of the equipment, why bother converting the schedule into some other format? The equations for OEE are grade 5 math. Any programmer can make pretty OEE graphs if they already have the data they need. What value does the OEE software actually bring?
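
To underline how simple the math is, here is the whole calculation as a Structured Text function (the variable names are made up; filling them with trustworthy numbers is the actual project):

    FUNCTION F_OEE : LREAL
    VAR_INPUT
        RunTime       : LREAL;  // actual uptime during scheduled production
        ScheduledTime : LREAL;  // scheduled production time
        TotalCount    : LREAL;  // total parts produced during RunTime
        IdealCount    : LREAL;  // parts an ideal-rate machine would have produced
        GoodCount     : LREAL;  // parts that passed quality
    END_VAR
    VAR
        Availability, Performance, Quality : LREAL;
    END_VAR

    IF (ScheduledTime <= 0.0) OR (IdealCount <= 0.0) OR (TotalCount <= 0.0) THEN
        F_OEE := 0.0;           // guard against divide-by-zero on bad inputs
        RETURN;
    END_IF;

    Availability := RunTime / ScheduledTime;
    Performance  := TotalCount / IdealCount;
    Quality      := GoodCount / TotalCount;
    F_OEE        := Availability * Performance * Quality;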

When you think about enterprise software, remember that most of the real value is in the accuracy and meaningfulness of the data in the database. Money should be invested in acquiring, storing, curating, and documenting that data. Once you’ve accomplished that, then you can add your whiz-bang charts. Don’t go putting the cart before the horse.

VB6 Line Numbering Build Tool

If you’ve ever had to do any VB6 programming, you know one (of many) shortcomings is VB6’s pitiful error handling support. The problem is that you want to know what line number an error occurred on, but the ERL function only gives you that information if you have line numbers on your entire project, and nobody wants to work with a project full of line numbers.

There are tools like MZ-Tools that allow you to add and remove line numbers with a couple of mouse clicks, but that’s not ideal. What you really want is the ability to add line numbers as an automated step during the build process. Thankfully someone went and created such a tool (written in VB6, with the source code provided). Unfortunately I’ve lost the original download location and the name of the original author. Until I can find that page, here is a zipped up copy of that tool:

Tools_ZIP Code_csLineNumber

It provides both a GUI and command line interface. It also has instructions on how to integrate it into the Windows shell so you can just right click on the .vbp file and run the tool. Note that on Windows 7 you likely have to run it as an Administrator.

More Control Systems Found Attached to the Internet

Back in November I published a blog post about Finding Internet-Connected Industrial Automation Devices and one of the scariest things I found was a wind turbine in Oklahoma with no apparent authentication.

Recently Dan Tentler took this several steps further and posted his video from the LayerOne 2012 security conference, where he shows a vast array of non-secure devices connected to the internet, many of which can interact physically with the real world, including control systems. Here’s his extremely fascinating video, and it’s worth watching all 45 minutes (note that he also has a screenshot of the Endurance Wind Turbine interface that I found in my original post):