Category Archives: Automation

What Motivates Makers

A couple weeks ago I wrote a post about budgets, and Ken from over at Robot Shift replied:

My experience when compensation is set up to be win-win it’s a good thing. If someone’s working hard, and doing the right things often their impact far exceeds that of just their hourly contribution. It impacts teams, groups and profitability. If the organization profits from this impact, the employee should as well.

On the surface I think that’s true – we all appreciate profit sharing – but I think Ken’s talking about performance-metric-based pay, and I don’t agree with tying creative work to performance pay. The inner workings of the human mind continue to astound us. An article in New Scientist says:

…it may come as a shock to many to learn that a large and growing body of evidence suggests that in many circumstances, paying for results can actually make people perform badly, and that the more you pay, the worse they perform.

“In many circumstances” – I’m going to talk about one such circumstance shortly. For more of this surprising science, there’s the classic Dan Pink talk on the surprising science of motivation, and Clay Shirky’s talk on Human Motivation.

There’s a big difference between intrinsic motivation and extrinsic motivation. If you have to choose, intrinsic motivation is more powerful. It costs less, and it’s a stronger motive force. I believe that’s due to the phenomenon of “flow”. If you work in any type of creative field, you know the power of flow – that feeling you get when you’re 100% immersed in your work, the outside world seems to drift away, you lose track of time, but you’re incredibly productive. Without distractions you can focus your full attention on solving the problem at hand. Once you’ve experienced this a few times, it’s like a drug.

For someone like me, assuming I make enough money to pay the bills, I would trade any additional money to spend more time in “flow”. Yes, I’m Scott Whitlock and I’m addicted to flow. (I’m not the only one.) If you have one of those lucky days where you spend a full 8 hours without distractions, wrestling with big problems, you’ll come home mentally exhausted but exhilarated. At night you truly enjoy your family time, not because your job sucks and family time is an escape, but because you’re content. The next morning you’re excited on your drive to work. Your life has balance.

Paul Graham wrote an excellent essay about the Maker’s Schedule, Manager’s Schedule. The Maker blocks out their time in 4 hour chunks because they need to get into “flow” to do their best work. Managers, on the other hand, schedule their days in 30 or 60 minute chunks.

Do you see the problem? If a Manager never experiences flow then they can’t understand what motivates their Makers. That’s why management keeps pushing incentive pay: a lot of Managers are extrinsically motivated and they can’t get inside the head of someone who isn’t.

In case you happen to be a Manager, I feel it’s my duty to help you understand. It’s like this: if you’re addicted to flow, then being a Maker is an end in itself. If you still don’t get it, please read the story of The Fisherman and the Businessman. Ok, got it? Good.

So, if you’re managing people who are intrinsically motivated, here’s how you should set up their pay:

  • Make sure their salary or wage is competitive. If it’s not, they’ll feel cheated and resentful.
  • Profit sharing is ok if you don’t tell them how you arrived at the numbers.

Yeah, that’s crazy, isn’t it? But it’s true. Do you want to know why? I’ve spent most of my career paid a straight wage without incentive pay. I did get bonuses, but they were just offered to me as, “good work this year, here’s X dollars, we hope you’re happy here, thanks very much.” Under that scheme, when someone came to me with a problem, I relished the challenge. I dove into the problem, made sure I fully understood it, found the root cause, and fixed it. The result was happy customers and higher-quality work.

For a short time I was bullied into accepting performance-based pay. My metrics were “project gross margin” and “percent billable hours vs. target”. Then when someone came to me with a problem unrelated to my current project, my first reaction was “this is a distraction – I’m not being measured on this.” When I paused for half an hour to help a co-worker on another team with their project, my boss reminded me that I wasn’t helping our team’s metrics. My demeanor with customers seemed to change. Work that used to seem challenging became extremely stressful. I lost all intrinsic motivation. I was no longer working to help the customer – I was working to screw the customer out of more money. It was as if, by dangling the carrot in front of my nose, I could no longer see the garden.

It’s hard for me to admit that. When you’re intrinsically motivated, you’re proud of it. It makes you feel good to do the work for its own sake. For Makers, doing work for performance pay is a hollow substitute. It demoralizes you.

I guess my point is, if you manage Makers, how do you not know this? It’s your job!

Ladder Logic vs. C#

PC programming and PLC programming are radically different paradigms. I know I’ve talked about this before, but I wanted to explore something that perplexes me… why do so many PC programmers hate ladder logic when they are first introduced to it? Ladder logic programmers don’t seem to have the same reaction when they’re introduced to a language like VB or C.

I mean, PC programmers really look down their noses at ladder logic. Here’s one typical quote:

Relay Ladder Logic is a fairly primitive language. It’s hard to be as productive. Most PLC programmers don’t use subroutines; it’s almost as if the PLC world is one that time and software engineering forgot. You can do well by applying simple software engineering methods as a consequence, e.g., define interfaces between blocks of code, even if abstractly.

I’m sorry, but I don’t buy that. Ladder logic and, say, C# are designed for solving problems in two very different domains. In industrial automation, we prefer logic that’s easy to troubleshoot without taking down the system; in the world of C#, troubleshooting is usually done offline, in a development environment.

My opinion is that ladder logic looks a lot like “polling”, and every PC programmer knows polling is bad because it’s an inefficient use of processor power. PC programmers prefer event-driven programming, which is how all modern GUI frameworks react to user-initiated input. They want to see something that says, “when input A turns on, turn on output B.” If you’re familiar with control systems, your first reaction to that statement is, “sure, but what if B depends on inputs C, D, and E as well?” You’re right – it doesn’t scale, and that’s the first mistake most people make when starting with event-driven programming: they put all their logic in the event handlers (yeah, I did that too).
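To make that mistake concrete, here’s a minimal, self-contained C# sketch (the Input and Output classes are hypothetical stand-ins, not any real framework). The naive handler implements “when A turns on, turn on B”, and the moment B also depends on C, the same expression has to be duplicated into every input’s handler:

    using System;

    // Hypothetical input/output types, just for illustration.
    class Input
    {
        private bool val;
        public event EventHandler Changed = delegate { };
        public bool Value
        {
            get { return val; }
            set { val = value; Changed(this, EventArgs.Empty); }
        }
    }

    class Output
    {
        public void Set(bool on) { Console.WriteLine("Output B = " + on); }
    }

    class Program
    {
        static void Main()
        {
            var a = new Input();
            var c = new Input();
            var b = new Output();

            // The naive version: "when input A turns on, turn on output B".
            a.Changed += (s, e) => b.Set(a.Value);

            // Once B also depends on C, the full expression has to be
            // re-evaluated in every input's handler, and any handler you
            // forget to update now disagrees with the rest:
            a.Changed += (s, e) => b.Set(a.Value && c.Value);
            c.Changed += (s, e) => b.Set(a.Value && c.Value);

            a.Value = true; // fires both of A's handlers: B flickers on, then off
        }
    }

The fix, which the DController class later in this post arrives at, is one method that recomputes the whole expression, subscribed to every input’s event.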

Still, there are lots of situations where ladder logic is much more concise than, say, C# at implementing the same functionality, so I just don’t buy all the hate directed at it. Let me illustrate with an example. Take this relatively simple ladder logic rung:
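(The rung appeared as an image in the original post. Here’s an ASCII sketch of it, reconstructed from the boolean expression discussed below, D = ((A && B) || D) && C; it’s the classic start/seal-in pattern.)

    |      A       B             C        D     |
    |-----] [-----] [----+------] [------( )----|
    |                    |                      |
    |      D             |                      |
    |-----] [------------+                      |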

What would it take to implement the same logic in C#? You could say all you really need to write is D = ((A && B) || D) && C; but that’s not exactly true. When you’re writing an object-oriented program, you have to follow the SOLID principles. We need to separate our concerns. Any experienced C# programmer will say that we need to encapsulate this logic in a class (let’s call it “DController” – things that contain business logic in C# applications are frequently called Controller or Manager). We also have to make sure that DController only depends on abstract interfaces. In this case, the logic depends on access to three inputs and one output. I’ve gone ahead and defined those interfaces:

    public interface IDiscreteInput
    {
        bool GetValue();
        event EventHandler InputChanged;
    }

    public interface IDiscreteOutput
    {
        void SetValue(bool value);
    }

Simple enough. Our controller needs to be able to get the value of an input, and be notified when any input changes. It needs to be able to change the value of the output.

To follow the D in the SOLID principles (dependency inversion), we have to inject the dependencies into the DController class, so it has to look something like this:

    internal class DController
    {
        public DController(IDiscreteInput inputA, 
            IDiscreteInput inputB, IDiscreteInput inputC, 
            IDiscreteOutput outputD)
        {
        }
    }

That’s a nice little stub of a class. Now, as an experienced C# developer, I follow test-driven development, or TDD. Before I can write any actual logic, I have to write a test that fails. I break open my unit test suite, and write my first test:

        [TestMethod]
        public void Writes_initial_state_of_false_to_outputD_when_initial_inputs_are_all_false()
        {
            var mockInput = MockRepository.GenerateStub<IDiscreteInput>();
            mockInput.Expect(i => i.GetValue()).Return(false);
            var mockOutput = MockRepository.GenerateStrictMock<IDiscreteOutput>();
            mockOutput.Expect(o => o.SetValue(false));

            var test = new DController(mockInput, mockInput, mockInput, mockOutput);

            mockOutput.VerifyAllExpectations();
        }

Ok, so what’s going on here? First, I’m using a mocking framework called Rhino Mocks to generate “stub” and “mock” objects that implement the two dependency interfaces I defined earlier. This first test just checks that the first thing my class does when it starts up is to write a value to output D (in this case, false, because all the inputs are false). When I run my test it fails, because my DController class doesn’t actually call the SetValue method on my output object. That’s easy enough to remedy:

    internal class DController
    {
        public DController(IDiscreteInput inputA, IDiscreteInput inputB, 
            IDiscreteInput inputC, IDiscreteOutput outputD)
        {
            if (outputD == null) throw new ArgumentNullException("outputD");
            outputD.SetValue(false);
        }
    }

That’s the simplest logic I can write to make the test pass. I always set the value of the output to false when I start up. Since I’m calling a method on a dependency, I also have to include a guard clause to check for null, or else tools like ReSharper will start complaining at me.

Now that my tests pass, I need to add some more tests. My second test validates when my output should turn on (only when all three inputs are on). In order to write this test, I had to write a helper class called MockDiscreteInputPatternGenerator. I won’t go into the details of that class, but I’ll just say it’s over 100 lines long, just so that I can write a reasonably fluent test:

        [TestMethod]
        public void Inputs_A_B_C_must_all_be_true_for_D_to_turn_on()
        {
            MockDiscreteInput inputA;
            MockDiscreteInput inputB;
            MockDiscreteInput inputC;
            MockDiscreteOutput outputD;

            var tester = new MockDiscreteInputPatternGenerator()
                .InitialCondition(out inputA, false)
                .InitialCondition(out inputB, false)
                .InitialCondition(out inputC, false)
                .CreateSimulatedOutput(out outputD)
                .AssertThat(outputD).ShouldBe(false)

                .Then(inputA).TurnsOn()
                .AssertThat(outputD).ShouldBe(false)

                .Then(inputB).TurnsOn()
                .AssertThat(outputD).ShouldBe(false)

                .Then(inputA).TurnsOff()
                .AssertThat(outputD).ShouldBe(false)

                .Then(inputC).TurnsOn()
                .AssertThat(outputD).ShouldBe(false)

                .Then(inputB).TurnsOff()
                .AssertThat(outputD).ShouldBe(false)

                .Then(inputA).TurnsOn()
                .AssertThat(outputD).ShouldBe(false)

                .Then(inputB).TurnsOn()
                .AssertThat(outputD).ShouldBe(true); // finally turns on

            var test = new DController(inputA, inputB, inputC, outputD);

            tester.Execute();
        }

This test cycles through combinations of inputs that shouldn’t cause the output to turn on, then finally turns all three on and verifies that the output turns on only in that last case.

I’ll spare you the other two tests. One checks that the output initializes to on when all the inputs are on initially, and the last one checks the conditions that turn the output off (only C turning off matters; A and B have no effect). To get all of these tests to pass, here’s my final version of the DController class:

    internal class DController
    {
        private readonly IDiscreteInput inputA;
        private readonly IDiscreteInput inputB;
        private readonly IDiscreteInput inputC;
        private readonly IDiscreteOutput outputD;

        private bool D; // holds last state of output D

        public DController(IDiscreteInput inputA, IDiscreteInput inputB, 
            IDiscreteInput inputC, IDiscreteOutput outputD)
        {
            if (inputA == null) throw new ArgumentNullException("inputA");
            if (inputB == null) throw new ArgumentNullException("inputB");
            if (inputC == null) throw new ArgumentNullException("inputC");
            if (outputD == null) throw new ArgumentNullException("outputD");

            this.inputA = inputA;
            this.inputB = inputB;
            this.inputC = inputC;
            this.outputD = outputD;

            inputA.InputChanged += (s, e) => setOutputDValue();
            inputB.InputChanged += (s, e) => setOutputDValue();
            inputC.InputChanged += (s, e) => setOutputDValue();

            setOutputDValue();
        }

        private void setOutputDValue()
        {
            bool A = inputA.GetValue();
            bool B = inputB.GetValue();
            bool C = inputC.GetValue();

            bool newValue = ((A && B) || D) && C;
            outputD.SetValue(newValue);
            D = newValue;
        }
    }

So if you’re just counting the DController class itself, that’s approaching 40 lines of code, and the only really important line is this:

    bool newValue = ((A && B) || D) && C;

It’s true that as you wrote more logic, you’d refactor more and more of the repetitive code out of the Controller classes, but ultimately most of the overhead never really goes away. The best you’re going to do is develop some kind of domain-specific language, which might look like this:

    var dController = new OutputControllerFor(outputD)
        .WithInputs(inputA, inputB, inputC)
        .DefinedAs((A, B, C, D) => ((A && B) || D) && C);

…or maybe…

    var dController = new OutputControllerFor(outputD)
        .WithInputs(inputA, inputB, inputC)
        .TurnsOnWhen((A, B, C) => A && B && C)
        .StaysOnWhile((A, B, C) => C);

…and how is that any better than the original ladder logic? That’s not even getting into the fact that you can’t use breakpoints to troubleshoot a running machine, so this code would be a real pain to debug online if the sensor connected to inputA were getting flaky. With ladder logic, you can just glance at the rung and see the current values of A, B, C, and D.

Testing: the C# code is complex enough that it needs tests to prove that it works right, but the ladder logic is so simple, so declarative, that it’s obvious to any Controls Engineer or Electrician exactly what it does: turn on when A, B, and C are all on, and then stay on until C turns off. It doesn’t need a test!

Time-wise: it took me about a minute to get the ladder editor open and write that ladder logic, but about an hour to put together this C# example in Visual Studio.

So I think that when someone gets a hate on for ladder logic, it really comes down to fear of the unfamiliar. Ladder logic is a great tool to have in your toolbox. Certainly don’t use ladder logic to write an ERP system, but do use it for discrete control.

On Budgets

What function do budgets serve?

In a case where there are well-known basic requirements and highly variable solutions, a budget is essential to curb the tendency to put wants ahead of needs. Take, for example, an automobile purchase. All you need is four wheels. Maybe you need four doors if you have two kids, or a minivan if you have three kids, but you certainly don’t need a sunroof, or a hemi, or a self-parking car. These are luxuries. Maybe you can afford to splurge a bit, but the budget knows there are other needs that take priority, like food, mortgage payments, and utilities.

When we try to take this concept of a budget to the workplace it breaks down. In the workplace, budgets serve one of two purposes: (1) flagging corruption, and (2) investment payback.

Purpose 1: Flagging Corruption

When we know about how much money it should take to do a job, we give the responsible person a budget. “Marty, we’ve been attending conferences in Las Vegas for 30 years. We know it costs about $X, so you have a budget of $X and $Y for contingency.” As long as Marty stays within that budget, keeps his receipts, and can avoid having the escort service charges showing up on his company Amex, he’s good to go. If his expense report comes in at twice the expected amount, then he has to justify it.

When you know how much something should cost, this is an efficient model. It delegates spending to the individual and only requires minor oversight from the accounting department.

Note that the budget wasn’t set based on how much money was in the bank; it was based on historical data, and the company decided there was a payback to sending Marty to Las Vegas. That’s a fundamental difference between home and workplace budgets: at home we frequently buy stuff with no perceived payback, but at a company every expenditure is an investment. That’s why the functions of budgets at home and at work are fundamentally different.

Purpose 2: Investment Paybacks

When you don’t have good historical data on a planned expenditure, accounting still needs an estimate. Estimates feed back expected expenditures to whoever is managing the cash flow. Budgets are the present cost of the expected payback, with interest.

When a company looks at starting a new project, whether it’s done internally or subcontracted, first they get an estimate. Then they compare the estimate to the expected payback, and if it pays back within a certain time-frame (usually a couple of years, but sometimes as short as a few months), they go ahead with the project. However, since projects are, by definition, something you haven’t actually done before, everyone acknowledges that the estimate won’t necessarily be accurate. They will generally approve a budget with an added contingency. They’ve done the calculations to make sure that if the project comes in under that budget, the payback will make the company (and the investors) money.
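To make that concrete, here’s the arithmetic with made-up numbers (a minimal sketch; the estimate, savings, discount rate, and approval window are all hypothetical):

    using System;

    class PaybackCheck
    {
        static void Main()
        {
            // Hypothetical project: $100,000 estimate plus 25% contingency,
            // expected to save $60,000 per year.
            double budget = 100000 * 1.25;
            double annualSavings = 60000;

            // Simple payback period, ignoring the time value of money:
            Console.WriteLine("Simple payback: {0:F1} years", budget / annualSavings);

            // Discounted check: the present value of the savings over the
            // approval window has to exceed the budget for the project to
            // make the company (and the investors) money.
            double rate = 0.08; // assumed cost of capital
            double presentValue = 0;
            for (int year = 1; year <= 3; year++)
                presentValue += annualSavings / Math.Pow(1 + rate, year);
            Console.WriteLine("PV of 3 years of savings: {0:C0}", presentValue);
        }
    }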

As the project progresses, the project manager needs to track the actual expenditures against the expected expenditures, and provide updated estimates to accounting. If the estimate goes higher than the approved budget, management has to re-evaluate the viability of the project. They might increase the budget if the payback is good enough, or they might scrap the project.

Tying Incentives to Budgets

For home budgets, it’s implied: if you stay within your budget, you’ll make sure to satisfy all your needs, and you’re less likely to find yourself in financial hardship. You have an incentive to stay on budget.

At work, lots of people have their bonuses tied to “performance-to-budget”. If we’re talking about the first purpose of workplace budgets (flagging corruption), where historical data provides a solid prediction of what something should cost, then it makes sense to evaluate someone on their performance against that budget. On the other hand, if we accept that project budgets are based on rather inaccurate estimates, then measuring someone against a project budget isn’t very effective.

In fact, it leads to all kinds of behaviors we want to avoid. First, people might tend to over-estimate projects because that will raise the budget. This might prevent the company from pursuing projects that could have been profitable. Secondly, project managers tend to play a “numbers game” – using resources from one project that’s going well to pad another project that’s going over. This destroys the accuracy of your project reports; now you won’t know which initiatives really made money. Third, the cost will be evaluated at the end of the project, but the payback continues over a longer time-frame. The project manager will tend to make choices that favor lower cost at the end of the project at the expense of choices that offer higher long-term payback.

Everything I’ve read suggests that (a) project managers have very little influence over the actual success or failure of a project and (b) in any task that requires creativity and out-of-the-box problem solving, offering a performance-incentive reduces performance because you’re replacing an intrinsic motivation with an extrinsic one. The intrinsic one is more powerful.

So why do we tie performance incentives to project budgets? Is there really any research that suggests this works, or are we just extrapolating what we know about purchasing a car, or taking a trip to Las Vegas? As someone who is intrinsically motivated, I’ve found budget performance-incentives to be extremely distracting. Surely I’m not the only one, am I?

On the Pathetic State of Automation Security

To start with, even PC security is pretty bad. Most programmers don’t seem to know the basic concepts for securely handling passwords (as the recent Sony data breach shows us). At least there are some standards, like the Payment Card Industry Data Security Standard.
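For the record, the basic concept isn’t complicated: never store the password itself, only a salted, iterated hash of it. Here’s a minimal sketch using .NET’s built-in PBKDF2 implementation (the iteration count and key size are illustrative, not a recommendation):

    using System;
    using System.Security.Cryptography;

    class PasswordHashing
    {
        // Derive a salted, iterated hash instead of storing the password.
        static byte[] HashPassword(string password, byte[] salt)
        {
            // Rfc2898DeriveBytes implements PBKDF2; the iteration count
            // makes brute-forcing stolen hashes expensive.
            using (var kdf = new Rfc2898DeriveBytes(password, salt, 10000))
            {
                return kdf.GetBytes(32); // 256-bit derived key
            }
        }

        static void Main()
        {
            var salt = new byte[16];
            new RNGCryptoServiceProvider().GetBytes(salt); // unique random salt per user

            byte[] hash = HashPassword("correct horse battery staple", salt);

            // Store only (salt, hash). To verify a login attempt, re-derive
            // with the stored salt and compare the results.
            Console.WriteLine(Convert.ToBase64String(hash));
        }
    }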

Unfortunately, if PC security is a leaky bucket, then automation system security is about as watertight as a pasta strainer. Here are some pretty standard problems you’re likely to find if you audit any small to medium sized manufacturer (and most likely any municipal facility, like, perhaps, a water treatment plant):

  • Windows PCs without up-to-date virus protection
  • USB and CD-ROM (removable media) ports enabled
  • Windows PCs not set to auto-update
  • Remote access services like RDP or Webex always running
  • Automation PCs connected to the office network
  • Unsecured wireless access points attached to the network
  • Networking equipment like firewalls with the default password still set
  • PLCs on the office network, or even accessible from the outside!

All of these security issues have one thing in common: they’re done for convenience. It’s the same reason people don’t check the air in their tires or “forget” to change their engine oil. People are just really bad at doing things that are in their long term best interest.

Unfortunately, this is becoming a matter of national security. Some have said there’s a “cyber-cold-war” brewing. After the news about Stuxnet, it’s pretty clear the war has turned “hot”.

I’m usually not a fan of regulations and over-reaching standards, but the fact is the Japanese didn’t build earthquake resistant buildings by individual choice. They did it because the building code required it. Likewise, I’ve seen a lot of resistance to the OSHA Machine Guarding standards because it imposes a lot of extra work on Control System Designers, and the companies buying automation, but I’m certain that we’re better off now that the standards are being implemented.

It’s time for an automation network security standard. Thankfully there’s one under development. ISA99 is the Industrial Automation and Control System Security Committee of ISA. A couple of sections of the new standard have already been published, but it’s not done yet. Also, you have to pay ISA a ransom fee to read it. I don’t think that’s the best way to get a standard out there and get people using it. I also think it’s moving very slowly. We all need to start improving security immediately, not after the committee gets around to meeting a few dozen more times.

I wonder if you could piece together a creative-commons licensed standard based on the general security knowledge already available on the internet…

Insteon and X10 Home Automation from .NET

I’ve been playing with my new Smarthome 2413U PowerLinc Modem plus some Smarthome 2456S3 ApplianceLinc modules and two old X10 modules I had sitting around.

Insteon is a vast improvement over the X10 technology for home automation. X10 always had problems with messages getting “lost”, and it was really slow, sometimes taking up to a second for the light to actually turn on. Insteon really seems to live up to its name; the signals seem to get there immediately (to a human, anyway). Insteon also offers “dual-band” technology, meaning the signals are sent both over the electrical wiring of the house, and over a wireless network. On top of this, Insteon implements “mesh networking”, acknowledgements, and retries. The mesh networking means that even if two devices can’t directly communicate, if an intermediate device can see both, it will relay the signal.

Now, where Insteon seems to have improved by leaps and bounds on the hardware, the software support is abysmal. That’s not because there’s anything wrong with the API, but because they’ve put the Software Development Kit (SDK) behind a hefty license fee, not to mention a rather evil license agreement. Basically it would preclude you from using any of their examples or source code in an open source project. Plus, they only offer support if you’ve purchased their SDK.

So, I’ve decided to offer free technical support to anyone using a 2413U for non-commercial purposes. If you want help with this thing, by all means, email me, post comments at the end of this post, whatever. I’ll be glad to help you.

Let’s start by linking to all the useful information about Insteon that they haven’t completely wiped off the internet (yet):

Now how did I find all this information? Google. SmartHome (the Insteon people) don’t seem to provide links to any of this information from their home or (non-walled) support pages, but they either let Google crawl them, or other companies or organizations have posted them on their sites (I first found the modem developer’s guide on Aartech’s site, for instance). Once you get one document, they tend to make references to the titles of other documents, so you could start to Google for the other ones by title. Basically, it was a pain, but that’s how it was done.

Now, whether you buy the 2413S (serial) or 2413U (USB), they’re both using the 2412S internally, which is an RS232 device. The 2413U just includes an FTDI USB-to-Serial converter, and you can get the drivers for this for free (you want the VCP driver). It just ends up making the 2413U look like another COM port on your PC (in my case, COM4).

So, assuming you know how to open a serial port from .NET, and once you’ve read all that documentation, you’d realize that if you wanted to turn on a light (say you had a switched lamp module at Insteon address “AA.BB.CC”), you’d want to send it this sequence of bytes (where 0x means hex):

  • 0x02 – start of message to PLM
  • 0x62 – send Insteon message over the network
  • 0xAA – high byte of Insteon ID
  • 0xBB – middle byte
  • 0xCC – low byte of Insteon ID
  • 0x0F – Flags (meaning: direct message, max hops)
  • 0x12 – Command byte 1 – means “turn on lighting device”
  • 0xFF – Command byte 2 – intensity level – full on

… after which the 2413U should respond with:

0x02, 0x62, 0xAA, 0xBB, 0xCC, 0x0F, 0x12, 0xFF, 0x06

… which is essentially just echoing back what it received, and adding a 0x06, which means “acknowledge”.
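Putting that first exchange into code, here’s a minimal sketch (assuming the FTDI VCP driver mapped the modem to COM4 as it did on my machine; the serial settings are the commonly reported 19200 baud, 8N1, but verify against the developer’s guide):

    using System;
    using System.IO.Ports;

    class InsteonLightOn
    {
        static void Main()
        {
            using (var port = new SerialPort("COM4", 19200, Parity.None, 8, StopBits.One))
            {
                port.ReadTimeout = 2000; // milliseconds
                port.Open();

                // 0x02 0x62 <ID hi> <ID mid> <ID lo> <flags> <cmd1> <cmd2>
                var turnOn = new byte[] { 0x02, 0x62, 0xAA, 0xBB, 0xCC, 0x0F, 0x12, 0xFF };
                port.Write(turnOn, 0, turnOn.Length);

                // Read the 9-byte echo; the last byte should be 0x06 ("acknowledge").
                var echo = new byte[9];
                for (int i = 0; i < echo.Length; i++)
                    echo[i] = (byte)port.ReadByte();
                Console.WriteLine(echo[8] == 0x06
                    ? "PLM accepted the command"
                    : "PLM did not acknowledge");

                // Next you'd keep reading to catch the 11-byte 0x02 0x50
                // response from the device itself, described below.
            }
        }
    }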

At that point, the 2413U has started transmitting the message over the Insteon network, so now you have to wait for the device itself to reply (if it does… someone might have unplugged it, after all). If you do get a response, it will look like this:

  • 0x02 – start of message from 2413U
  • 0x50 – means received Insteon message
  • 0xAA – high byte of peer Insteon ID
  • 0xBB – middle byte
  • 0xCC – low byte of peer Insteon ID
  • 0x?? – high byte of your 2413U Insteon ID
  • 0x?? – middle byte of your 2413U Insteon ID
  • 0x?? – low byte of your 2413U Insteon ID
  • 0x20 – Flags – means Direct Message Acknowledgement
  • 0x12 – Command 1 echoed back
  • 0xFF – Command 2 echoed back

If you get all that back, you have one successful transaction, and your light should now be on! Whew, that’s a lot of overhead though, and that’s just the code to turn on a light! There are dozens of other commands you can send and receive. I didn’t want to be bit-twiddling for hours on end, so I created a little helper library called FluentDwelling, so now you can write code like this:

    var plm = new Plm("COM4"); // manages the 2413U
    DeviceBase device;
    if(plm.TryConnectToDevice("AA.BB.CC", out device))
    {
        // The library will return an instance of a 
        // SwitchedLightingControl because it connects 
        // to it and asks it what it is
        var light = device as SwitchedLightingControl;
        light.TurnOn();
    }

I think that’s a little simpler. FluentDwelling is free to download, open-sourced under the GPLv3, and includes a full unit test suite.

It also supports the older X10 protocol, in case you have some of those lying around:

    plm.Network.X10
        .House("A")
        .Unit(2)
        .Command(X10Command.On);

There are quite a few Insteon-compatible devices out there. In addition to lighting controls, you can get a Sprinkler Controller, Discrete I/O Modules, a Rain Sensor, and even a Pool and Spa Controller. That’s just getting started!

Questions to Ask your Employer When Applying for an Automation Job

If you’re going to interview for a control systems job in a plant, they’ll ask you a lot of questions, but you should also have some questions for them. To me, these are the minimum questions you need to ask to determine if a future employer is worth pursuing:

  1. Do you have up-to-date electrical drawings in every electrical panel? – When the line is down, you don’t have time to go digging.
  2. Do you have a wireless network throughout the plant? – It should go without saying: having good, reliable wireless connectivity all over your facility really helps when you’re troubleshooting issues. Got a problem with a sensor? Just set up your laptop next to the sensor, go online, look at the logic, and flag the sensor. You don’t have time to walk all over.
  3. Does every PC (including on-machine PCs) have virus protection that updates automatically? – We’re living in a post Stuxnet world. Enough said.
  4. Have you separated the office network from the industrial network? – Protection and security are applied in layers. There’s no need for Jack and Jill in accounting to be pinging your PLCs.
  5. What is your backup (and restore) policy? – Any production-critical machine must always have up-to-date code stored in a known location (on a server or in a source control system), it must be backed up regularly, and you have to test your backups by doing regular restores.
  6. Are employees compensated for working extra hours? – Nothing raises a red flag about a company’s competency more than expecting 60+ hour weeks but not paying you overtime. It means they’re reactive, not proactive. It means they don’t value experience (experienced employees have families and can’t spend as much time at the office). It probably means they scored poorly in the previous questions.

You don’t have to find a company that gets a perfect score on this test, but if they miss more than one or two, that’s a warning sign. If they do well, they’re a proactive company, and proactive companies are sane places to work.

Good luck!

More about Stuxnet, on TED

I’ve been enjoying a lot more TED recently now that I can stream it directly to our living room HDTV on our Boxee Box. Today it surprised me with a talk by Ralph Langner called “Cracking Stuxnet: A 21st Century Cyber Weapon”. I talked about Stuxnet before, but this video has even more juicy details. If you work with PLCs, beware; this is the new reality of industrial automation security.

What can I do about our global resource problems?

On Saturday I posted Hacking the Free Market. You may have noticed the “deep-thoughts” tag I attached… that’s just a catch-all tag I use to warn readers that I’m headed off-topic in some kind of meandering way. In this post, I want to follow along with that post, but bring the discussion back to the topic of automation and engineering.

To summarize my previous post:

  • We haven’t solved the problem of how to manage global resources like the atmosphere and the oceans
  • The market isn’t factoring in the risk of future problems into the cost of the products we derive from these resources
  • I can’t think of a solution

Just to put some background around it, I wrote that post after reading an article titled “Engineers: It’s Time to Work Together and Save the World” by Joshua M. Pearce, PhD, in the March/April 2011 issue of Engineering Dimensions. In the article, Dr. Pearce is asking all of Ontario’s engineers to give up one to four hours of their spare time every week to tackle the problem of climate change in small ways. His example is to retrofit the pop machines in your office with microcontrollers hooked to motion sensors that will turn off the lights and turn off the compressor at times when nobody is around. He offers spreadsheets on his website which let you calculate if there’s a payback.

Now I’m not an economist, but I’m pretty sure that not only will this not help the problem of dwindling resources, but unless we also start factoring the future cost of fossil fuel usage into the price of energy, the actions Dr. Pearce is suggesting will actually make the situation worse. When we use technology to utilize a resource more efficiently, we increase demand for that resource. This is a fundamental principle (economists know it as the Jevons paradox, or the rebound effect).

I’m not saying it isn’t a good thing to do. It’s a great thing to do for the economy. Finding more efficient ways to utilize resources is what drives the expansion of the economy, but it’s also driving more demand for our resources. The fact that we’re mass marketing energy conservation as the solution to our resource problems is a blatant lie, and yet it’s a lie I hear more and more.

That’s where I was at when I wrote the previous article. “How can I, a Professional Engineer and Automation Enthusiast, do something that can make a real difference for our global resource problems?”

I’m afraid the answer I’ve come up with is, “nothing”. My entire job, my entire career, and all of my training… indeed my entire psychology… is driven towards optimizing systems. I make machines that transform raw materials into finished products, and I make them faster, more efficient, and more powerful. I don’t know who is going to solve our global resource problems, but I don’t think it’s going to be someone in my line of work. It’s like asking a fish to climb Mt. Everest.

I think the solution lies somewhere in the hands of politicians, lawyers, and voters. We do a so-so job of managing resources on a national scale, but we’d better extend that knowledge to a global scale, and do it quick. There might even be technical solutions that will help, but I think these will come from the fields of biotechnology, nanotechnology, and material science, not automation.

In the meantime, I’m going to continue blogging, contributing to online Q&A sites, writing tutorials, and writing open source software and releasing it for free, because I believe these activities contribute a net positive value to the world. If you’re an automation enthusiast or engineer reading this, I urge you to consider doing something similar. It is rewarding.

From Automation to Fabrication?

Here’s my simplified idea of what factories do: they make a whole lot of copies of one thing really cheap.

The “really cheap” part only comes with scale. Factories don’t make “a few” of anything. They’re dependent on a mass market economy. Things need to be cheap enough for the mass market to buy them, but they also need to change constantly. As humans, we have an appetite for novelty, and as the speed of innovation increases, factories spend more and more time retooling.

The result is more demand for flexibility in automation. Just look at the rise in Flexible Automation and, more recently, Robotic Machining.

Where does this trend point? We’ve already seen low cost small scale fabrication machines popping up, like MakerBot and CandyFab. These are specialized versions of 3D Printers. Digital sculptors can design their sculpture in software, press print, and voila, the machine prints a copy of their object.

Now imagine a machine that’s part 3D Printer, 6-axis robot, laser cutter/etcher, and circuit board fabricator all-in-one. Imagine our little machine has conveyors feeding it stock parts from a warehouse out back, just waiting for someone to download a new design.

That kind of “fabrication” machine would be a designer’s dream. In fact, I don’t think our current pool of designers could keep up with demand. Everyone could take part in design and expression.

I don’t see any reason why this fictional machine is actually beyond our technological capabilities. It would certainly be very expensive to develop (I’m going to take a stab and say it’s roughly as complex as building an auto plant), but once you’ve made one, we have the capability to make many more.

For more ideas, take a look at what MIT’s Fab Lab is working on.

Renaming “Best Practices”

Ok, so I’ve complained about “Best Practices” before, but I want to revisit the topic and talk about another angle. I think the reason we go astray with “Best Practices” is the name. Best. That’s pretty absolute. How can you argue with that? How can any other way of doing it be better than the “Best” way?

Of course there are always better ways to do things. If we don’t figure them out, our competitors will. We should call these standards Baseline Practices. They represent a process for performing a task with a known performance curve. What we should be telling employees is, “I don’t care what process you use, as long as it performs at least as well as this.” That will encourage innovation. When we find better ways, that new way becomes the new baseline.

In case you haven’t read Zen and the Art of Motorcycle Maintenance and its sequel, Lila: Pirsig describes two forms of quality, static and dynamic. Static quality is things like procedures and cultural norms – the way we pass information from generation to generation, or just between peers on the factory floor. Dynamic quality is the creativity that drives change. Together they form a ratchet-like mechanism: dynamic quality moves us from point A to point B, and static quality filters the B points, throwing out the ones that fall below the baseline.

I’ve heard more than one person say that we need to get everyone doing things the same way, and they use this as an argument in favour of best practices. I think that’s wrong. We have baseline practices to facilitate knowledge sharing. They get new employees up to speed fast. They allow one person to go on vacation while another person fills in for them. They are the safety net. But we always need to encourage people to go beyond the baseline. It needs to be stated explicitly: “we know there are better ways of doing this, and it’s your job to figure out what those ways are.”