Category Archives: Software

Whole Home Energy Monitoring with the Brultech ECM-1240

This Christmas I asked Santa for a Brultech ECM-1240 whole home energy monitoring system, specifically the DUO-100 package. After reviewing various products, I really liked that this one was priced quite reasonably for the hardware, and that they published the communication protocol.

You (or a licensed electrician) install the ECM-1240 at your main electrical panel. Each ECM-1240 has 7 channels. In my case, the first channel on the first unit measures the incoming main line current. For the other 13 channels you are free to choose various circuits from your panel that you want to monitor. You can gang various circuits together into a single channel if you like (all lighting loads, for example). The device uses current transformers on each circuit for the monitoring. These are installed inside the panel. The hot wire coming out of each breaker has to go through a current transformer, so this isn’t a simple plug-in installation; there is wiring to be done.

The company distributes software for the device for free, or you can choose from various 3rd party software. Alternatively you can configure the device to send data to websites.

I’m not a fan of sending my home energy data to someone else’s server. Apart from being a huge privacy concern (it’s pretty easy to see when you’re home, and when you went to bed), I don’t want to pay a monthly fee, and I don’t want to worry about how to get my data from their server if they go out of business or decide to discontinue their product. For those reasons, I installed their software. It’s basically a Flash website that I hosted on our Windows Home Server, which is conveniently also the computer hooked up to the ECM-1240s.

At first their software worked quite well, but over time it started to show problems. After just under 2 months of logging, the Flash program was so sluggish that it took over 2 minutes to refresh a screen. Admittedly I was running it on an older PC. Additionally, in that amount of time it had already logged about 680 MB of data, which seemed excessive. It also logged to a SQLite database (a single-user, file-based database), and unfortunately kept the database file locked all the time. The website would end up locking the database, and packets would get buffered in the logging software until you closed down the website and released the lock.

I decided I’d just write my own software:

Power Cruncher Viewer

I’m a C# developer, so that was my language of choice. If you’re a .NET developer too, I’ll include a link to my source code at the end of this post. Note that this software isn’t commercial quality. It’s semi-configurable (channel names, and so on) but it assumes you have a 14 channel system where the first channel is the main panel input. If your system is different, you’ll have to make some modifications.

The devices I purchased use a serial port for communication, but they include an RS232 splitter cable so you only have to use up one serial port on your PC. Their software relies on putting the device into an automatic send mode… the device itself chooses when to send packets. If the load changes significantly on one of the circuits, it triggers an immediate packet send. Unfortunately with 2 devices on the same RS232 port, you can sometimes get a collision. Their software deals with this by detecting it and ignoring the data, but I had a sneaking suspicion that once in a rare while it ended up with a corrupt packet; at one point their software logged an extremely high energy reading that didn’t make any sense. Therefore I wrote my software to use a polling mode. It requests data from the first device, waits about 5 seconds, requests it from the other device, waits 5 seconds, and repeats. On average we get one reading about every 10 seconds from each device. Testing seemed to indicate this was about as fast as I could go, because there is a reset period after you talk to one device during which you have to wait for it to time out before you can address the other one. In retrospect, if you just hooked them up to 2 separate serial ports you could probably poll the data faster.
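In simplified form, the polling loop looks something like this sketch (the actual serial request/response is hidden behind a stand-in delegate here, since the real packet format comes from Brultech’s published protocol; `EcmPoller` and `PollOnce` are hypothetical names):

```csharp
using System;
using System.Threading;

class EcmPoller
{
    // readPacket stands in for the real serial request/response;
    // delayMs is 5000 in production (0 makes testing easier).
    public static void PollOnce(Action<int> readPacket, int delayMs)
    {
        readPacket(1);          // request a packet from the first ECM-1240
        Thread.Sleep(delayMs);  // wait out the reset period before addressing the other unit
        readPacket(2);          // request a packet from the second ECM-1240
        Thread.Sleep(delayMs);
        // one full cycle takes about 10 seconds, so each device
        // ends up being read roughly every 10 seconds
    }
}
```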

Next I had to choose how to log the data. I really wanted the data format to be compact, but I still wanted decently small time periods for each “time slice”. I also didn’t want to be locking the file constantly, so I just wanted to be able to write the data for each time slice and be done with it. I settled on a binary format with fixed-length records. Here’s how it works: each day’s data is stored in a separate file, so the data for April 1st is stored in a file called 2015-04-01.dat. The device generates values in watt-seconds. I calculated that at a maximum current of 200A (100A panel x 2 legs) x 120V, a 10 second time slice should log a maximum of 240,000 watt-seconds. A 20-bit number (2.5 bytes) can store a maximum value of 1,048,575, and I didn’t want to go smaller than half-byte increments. So, 14 channels at 2.5 bytes per channel gave me 35 bytes for each 10 second time slice, and since the devices also report voltage I figured I’d log the voltage as a 36th byte just to make it an even number. Using those numbers, running 24 hours a day for 365 days a year, this will use up under 110 MB/year. Not bad. A flat binary file like this is also fast for data access. The data is small, and seeking a time slice can be done in O(1) time. After you find the first time slice you just keep reading bytes sequentially until you get to the end of the file or your last time slice. Hopefully no more 2 minute load times.
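The arithmetic above translates into very little code. Here’s a sketch (simplified from the real thing; `EnergyLog` is a hypothetical name) of the O(1) offset calculation and the 20-bit packing:

```csharp
using System;

static class EnergyLog
{
    public const int ChannelCount = 14;
    public const int RecordBytes  = 36;   // 14 channels × 2.5 bytes + 1 voltage byte
    public const int SliceSeconds = 10;

    // O(1): the byte offset within a day's file for a given time of day
    public static long OffsetFor(TimeSpan timeOfDay)
    {
        long slice = (long)timeOfDay.TotalSeconds / SliceSeconds;
        return slice * RecordBytes;
    }

    // Pack two 20-bit watt-second values into 5 bytes (2.5 bytes each)
    public static byte[] PackPair(uint a, uint b)
    {
        if (a > 0xFFFFF || b > 0xFFFFF)
            throw new ArgumentOutOfRangeException("values must fit in 20 bits");
        return new byte[]
        {
            (byte)(a >> 12),                       // a bits 19..12
            (byte)(a >> 4),                        // a bits 11..4
            (byte)(((a & 0xF) << 4) | (b >> 16)),  // a bits 3..0 | b bits 19..16
            (byte)(b >> 8),                        // b bits 15..8
            (byte)b                                // b bits 7..0
        };
    }
}
```

At 8,640 slices per day, that works out to 8,640 × 36 = 311,040 bytes per day, or a little under 110 MB per year, matching the numbers above.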

I broke up my application into multiple programs. The first is a utility program called Uploader.exe. This is a command line program that reads the data from a device and just spits it out to the screen. Useful for testing if it works.

The second is called Logger.exe and it does just that. It uses Uploader.exe to read values from the two ECM-1240 units at 5 second offsets and writes the data to XML files into a PacketLog folder. Each file has the timestamp of when it was read, and the device number to indicate which device it came from.

The third program is called PacketProcessor.exe. This program monitors the PacketLog folder and waits until it has at least one packet from each device after the end of the current time slice it’s trying to build. If it does, it calculates the values for the current time slice and appends it to the end of the current day’s data file. It then deletes any unnecessary packets from the PacketLog folder (it keeps at most one packet from each device before the end of the most recently written time slice).
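The decision of when a slice can be finalized boils down to a small check, something like this sketch (simplified; `SliceBuilder` is a hypothetical name and the two device IDs are assumed to be 1 and 2):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class SliceBuilder
{
    // A time slice can be finalized once we've seen at least one packet
    // from each device timestamped at or after the end of the slice;
    // everything the slice needs has then already arrived.
    public static bool CanCloseSlice(DateTime sliceEnd,
                                     IEnumerable<(int Device, DateTime Time)> packets)
    {
        var seen = packets.Where(p => p.Time >= sliceEnd)
                          .Select(p => p.Device)
                          .Distinct()
                          .ToList();
        return seen.Contains(1) && seen.Contains(2);
    }
}
```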

To host Logger.exe and PacketProcessor.exe, I used nssm a.k.a. “the Non-Sucking Service Manager”. It’s a small command line program you can use to run another command line program as a service. Very handy.

The fourth program is called Viewer.exe. It’s the graphical interface for viewing the power data. Here’s a screenshot of data from one day (just the main panel power, click on it to make it bigger):

Main Panel Power over one day

That’s a rather typical day. The big spike around 5 pm is the oven, and you can see the typical TV-watching time is after 8 pm once the kids are in bed. On the right you can see it breaks out the power usage in kWh for each channel. In our area we have time-of-use billing, so it also breaks the power usage out into off-peak, mid-peak, and on-peak amounts.

The viewer program uses a line graph for all time periods under 24 hours, but switches to an hourly bar graph for periods longer than that. Here is a 48-hour time period, and I’ve changed from just the Main Panel to all the branch circuit channels instead:

Branch circuit power over 48 hours

If you ask for a time period of more than a week it switches into daily bars:

Branch circuit power by day

Pulling up two weeks of data like that was actually very fast, just a few seconds.

One final feature I added was email alerts. One of the channels monitors just my backup sump pump circuit. If it ever turns on, that’s because my main sump pump has failed, and I want to know. Through a configuration file, I set up an alarm to email me if that circuit ever records more than 5 W of power in a 10 second time slice. I did a little test and it works (it does require that you have a Gmail account though).
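The alarm check itself is trivial: 5 W sustained over a 10 second slice is 50 watt-seconds of energy, so it’s just a threshold comparison, roughly like this (hypothetical names):

```csharp
using System;

static class SumpAlarm
{
    // True if the average power over the slice exceeds the threshold.
    // 5 W over a 10 s slice corresponds to 50 watt-seconds of energy.
    public static bool ShouldAlert(uint wattSeconds, int sliceSeconds = 10,
                                   double thresholdWatts = 5.0)
    {
        double averageWatts = (double)wattSeconds / sliceSeconds;
        return averageWatts > thresholdWatts;
    }
}
```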

Here’s the source code, if you’re interested:

As always, feel free to email me if you have questions or comments.

The Trap of Enterprise Software

Do you know how ERP software is sold? It’s fairly straightforward. Every CEO has a similar problem… they want to know how much everything in their company costs, vs. how much value it brings to the company. ERP vendors meet with executives and show them pretty graphs.

Then, of course, the company signs a contract with the ERP vendor for hundreds of thousands of dollars to install, configure, and support the system for one year. By the time the project is complete it costs twice what it was supposed to and all the end users are frustrated because it’s so tedious to use.

How does this happen? Why did the CEO think they were buying into a low-risk off-the-shelf product?

The CEO was lulled into a false sense of security by the pretty graphs. Any programmer can tell you that charts are a solved problem. If you give any programmer a database full of rich and meaningful data, they can whip up pretty reports, even very complicated ones, in a matter of days. The hard part is filling the database with accurate and complete information! If you buy an off-the-shelf ERP application, getting the data into their system is the hard part. You either need to write tons of custom code to copy the data from existing systems, or you need to change your business workflows to conform to this new software.

Consider another favorite of enterprise software applications: OEE. In order to calculate your true OEE you need the following:

  1. Availability
  2. Performance
  3. Quality

It’s that first item, Availability, that’s really interesting. Basically it’s the ratio of actual equipment uptime to scheduled production time. Let’s say you’re buying an “automated OEE solution” that pulls the status of the equipment (running or not running) from the PLC. Even so, that isn’t enough information to calculate your Availability. You need a production schedule to begin with. Who enters this data? Does it already exist? In what form? Excel? A proprietary ERP or MES system? What custom code has to be written to get this production schedule into a format that the “shrink-wrapped” OEE software can understand? Does the company have to convert to using the OEE software’s proprietary scheduling system? Do we have to enter the schedule twice?

If you have to do all that work, why are you buying an OEE software package? If the production schedule is already in your ERP system, and you have in-house expertise to get the running status out of the equipment, why bother converting the schedule into some other format? The equations for OEE are grade 5 math. Any programmer can make pretty OEE graphs if they already have the data they need. What value does the OEE software actually bring?
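To make the point concrete, here’s essentially the entire OEE calculation; everything else is the hard work of data collection:

```csharp
using System;

static class Oee
{
    // Availability: actual equipment uptime vs. scheduled production time
    public static double Availability(TimeSpan uptime, TimeSpan scheduled)
        => uptime.TotalSeconds / scheduled.TotalSeconds;

    // OEE is just the product of the three ratios
    public static double Calculate(double availability, double performance, double quality)
        => availability * performance * quality;
}
```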

When you think about enterprise software, remember that most of the real value is in the accuracy and meaningfulness of the data in the database. Money should be invested in acquiring, storing, curating, and documenting that data. Once you’ve accomplished that, then you can add your whiz-bang charts. Don’t go putting the cart before the horse.

VB6 Line Numbering Build Tool

If you’ve ever had to do any VB6 programming, you know one (of many) shortcomings is VB6’s pitiful error handling support. The problem is that you want to know what line number an error occurred on, but the ERL function only gives you that information if you have line numbers on your entire project, and nobody wants to work with a project full of line numbers.

There are tools like MZ-Tools that allow you to add and remove line numbers with a couple of mouse clicks, but that’s not ideal. What you really want is the ability to add line numbers as an automated step during the build process. Thankfully someone went and created such a tool (written in VB6 with the source code provided). Unfortunately I’ve lost the original location that I downloaded it from, and the original author. Until I can find that page, here is a zipped up copy of that tool:

Tools_ZIP Code_csLineNumber

It provides both a GUI and command line interface. It also has instructions on how to integrate it into the Windows shell so you can just right click on the .vbp file and run the tool. Note that on Windows 7 you likely have to run it as an Administrator.

On Managing Complexity

There’s a recurring theme on this blog when I talk about programming: how to manage complexity. It’s a skill you don’t learn in school, but it’s what separates recent graduates from employees with 5 years of experience.

The reason you never learned how to manage complexity in nearly 20 years of school is that you never actually encountered anything of sufficient complexity that it needed managing. That’s because at the beginning of each term/semester/year you start with a blank slate. If you only ever spend up to 500 hours on a school project, you’ll never understand what it feels like to look at a system with 10,000 engineering hours logged to it.

That’s when the rules of the game change. You’ve only ever had to deal with projects where you could hold all the parts and variables in your head at one time and make sense of it. Once a project grows beyond the capability of one person to simultaneously understand every part of it, everything starts to unravel. When you change A you’ll forget that you had to change B too. Suddenly your productivity and quality start to take a nosedive. You’ve reached a tipping point, and there’s no going back.

It isn’t until you reach that point that you understand why interfaces are useful and why global variables are bad (not to mention their object-oriented cousins, the singleton).

The funny thing is that you learn about all the tools for managing complexity in school. When I was in high school I was taught about structured programming and introduced to modular programming. In university my first programming class used C++ and went through all the features like classes, inheritance and polymorphism. However, since we’d never seen a system big enough to need these tools, most of it fell on deaf ears. Now that I think about it, I wonder if our professors had ever worked on a system big enough to need those tools.

The rules of managing complexity are:

  • Remove unnecessary dependencies between parts of your system. Do this by designing each module against an abstract interface that only includes the functionality it needs, nothing else.
  • Make necessary dependencies explicit. If one module depends on another, state it explicitly, preferably in a way that can be verified for correctness by an automated checker (compiler, etc.). In object-oriented programming this is typically done with constructor injection.
  • Be consistent. Humans are pattern-recognition machines. Guide your reader’s expectations by doing similar things the same way everywhere. (Five-rung logic, anyone?)
  • Automate! Don’t make a person remember any more than they have to. Are there 3 steps to make some change? Can we make it 2, or better yet, 1?
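As a small illustration of the second rule, here’s what constructor injection looks like in C# (the names here are hypothetical; the point is that the dependency is stated once, explicitly, against an abstract interface, and the compiler checks it):

```csharp
using System;

public interface ITemperatureSensor
{
    double ReadCelsius();
}

public class Thermostat
{
    private readonly ITemperatureSensor _sensor;  // dependency is explicit and abstract

    // The dependency is declared in the constructor, so it can't be forgotten
    public Thermostat(ITemperatureSensor sensor)
    {
        _sensor = sensor ?? throw new ArgumentNullException(nameof(sensor));
    }

    public bool HeatShouldBeOn(double setpointCelsius)
        => _sensor.ReadCelsius() < setpointCelsius;
}
```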

It would be interesting if, in university, your second year project built on your first year project, and your third year project built on your second year project, etc. Maybe by your fourth year you’d have a better appreciation for managing complexity.

OWI-535 Robot Arm with USB Controller from C# and .NET

I got the OWI-535 “Robot Arm Edge” 5-Axis robot arm and USB Controller add-on for Christmas:

The robot arm comes as a kit that you assemble yourself, and my 3 year old and I had lots of fun putting it together (it helped to have some tiny fingers around, honestly). It comes with a manual controller that allows you to rotate all 4 joints, plus the gripper. It’s fun to play around with, but let’s be honest, everyone wants to hook it up to their computer.

Unfortunately the software that comes with the USB controller works on Windows 7, but “32-bit only”. That was a little annoying, but hey, I didn’t really want to stick with their canned software anyway. I figured I would see if I could get it to work from C#. I was surprised to find a great site by Dr. Andrew Davison in Thailand who wrote some Java code as part of his site for his book Killer Game Programming in Java (chapter NUI-6 Controlling a Robot Arm, which doesn’t appear in the book). Surprisingly he went as far as to do the inverse kinematic equations, so you can give the arm a set of X,Y,Z co-ordinates (in mm) in “world frame” and it will calculate all the joint angles needed to get to that location, and then it uses timed moves to get the arm into that position.

His Java library uses libusb-win32, and that library has a .NET equivalent called LibUsbDotNet. The APIs don’t appear to be the same, but thankfully I managed to find a thread on SourceForge about controlling the OWI-535 using LibUsbDotNet. So, over the course of a couple of nights after the kids went to bed, I ported Dr. Davison’s Java code over to C# (quite easy, actually) and replaced the libusb-win32 calls with LibUsbDotNet API calls. It worked!

Here is the .NET solution that I wrote, called TestOwi535. I used Visual C# 2010 Express Edition to write it, so that’s free. Also, you must download and install LibUsbDotNet first, run its USB InfWizard to generate a .inf file (you have to do this with the robot arm plugged in and turned on), and then use that .inf file to install LibUsbDotNet as the driver (technically you’re installing libusb-win32 as the driver; LibUsbDotNet is just the C# wrapper).

If you right click on the C# project in the solution explorer, you’ll find 3 options for startup object: MainClass is the original code I found in the SourceForge thread, but it’s just proof of concept code and only moves one joint in one direction for a second. The ArmCommunicator class is an interactive console application that allows you to move all joints (and control the gripper and light) by typing in keyboard commands. Finally the RobotArm class is the full inverse kinematics thing. In the last case, make sure you start with the arm at the zero position (base pointing away from the battery compartment, and all joints in-line straight up in the air, gripper pointing directly up, and gripper open). It will do a move to the table, to the front right of the arm, close the gripper, and then move back to zero and open the gripper.

Unfortunately that’s where you start to see the obvious deficiency of the arm: it has no position feedback. That means even though it tracks its position in the code, the physical position eventually drifts away from the internal position tracking, and the arm eventually doesn’t know where it is (it’s just using timed moves). One of the biggest areas where you could improve it is to change the joint rates so that it knows that the joints move faster when going down than when going up.
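One way to compensate would be direction-dependent joint rates in the timed-move calculation, something like this sketch (the degrees-per-second numbers here are made up; you’d have to measure your own arm):

```csharp
using System;

static class TimedMoves
{
    // Gravity assists downward moves, so they complete faster than upward ones.
    // These rates are placeholders, not measured values.
    public static int MoveMilliseconds(double degrees, bool movingDown,
                                       double upDegPerSec = 20.0,
                                       double downDegPerSec = 26.0)
    {
        double rate = movingDown ? downDegPerSec : upDegPerSec;
        return (int)Math.Round(Math.Abs(degrees) / rate * 1000.0);
    }
}
```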

Still, it’s a neat little toy for the price. I’m going to start hunting around for a way to add joint position feedback, as that would really improve the performance. I also want to rewrite a new module from the ground up that allows concurrent joint moves (this one only moves one joint at a time). Ideally you want to control this thing in “gripper frame” instead of “world frame”.

Happy hacking!

Google’s Self-Driving Car

Google has been developing a self-driving car.

It’s abnormal for a company to go completely outside of its comfort zone when designing new products, so why would Google go into the automotive control system industry? They claim it’s to improve safety. They’ve also offered the pragmatic assertion that “people just want it”.

Google is a web company. It makes huge gobs of money from advertising, and advertising scales with web traffic. Google’s designing a self-driving car so that drivers can spend their commute time surfing the web on mobile devices. That’s the only explanation that makes sense.

Not that I mind, of course… I’d love to be able to read on the way to work. When can I buy my self-driving car? Can I get one that’s subsidized by ads?

The Skyscraper, the Great Wall, and the Tree

One World Trade Center will be a 105 story super-skyscraper when completed. Its estimated total cost is $3.1 billion.

When it’s complete, how much would it cost to add a 106th story? Would it be 1/105th of $3.1 billion? No. The building was constructed from the ground up to be a certain height. The maximum height depends on the ground it’s sitting on, the foundation, the first floor, and so on. Each of those would need to be strengthened to add one more story.

It’s possible to engineer a building so that you can add more floors later, but that means over-engineering the foundation and the structure.

Contrast the building of a skyscraper with the building of the Great Wall in China. The Great Wall is infinitely scalable. It stretches for many thousands of kilometers, and each additional kilometer probably cost less than the one before because the builders became more skilled as they worked.

The difference between the skyscraper and the Great Wall is all about dependencies. In the case of the skyscraper, every story depends on everything below it. In the case of the Great Wall, each section of wall only depends on the ground underneath and the sections of wall on either side. If one section is faulty, you can easily demolish it and rebuild it. If one section is in the wrong place, you only have to rebuild a couple of sections to go around the obstacle.

When you’re building the Great Wall, you can probably take an experienced stonemason and have him start laying stone on the first day, without much thought about the design. With a skyscraper, it’s the opposite. You spend years designing before you ever break ground.

When people first started to write software, they built it like the Great Wall. Just start writing code. The program gets longer and longer. Unfortunately, complex programs don’t have linear dependencies. Each part of the code tends to have dependencies on every other part. Out of this came two things: the Waterfall model, and Modular programming. While modular programming was a success at reducing dependencies, and eventually led to object-oriented (and message-oriented) programming, the Waterfall model was eventually supplanted by Agile methodologies.

The software developed by Agile methodologies is neither like the Great Wall (there are still many inter-dependencies) nor like the traditional skyscraper (because the many dependencies are loosely coupled).

Agile software grows organically, like a tree. You can cut off a limb that’s no longer useful, or grow in a new direction when you must.

Many programmers seem to think that Agile means you don’t have to design. This isn’t true. A tree still has an architecture. It’s just a more flexible architecture than a skyscraper. Instead of building each new layer on top of the layers beneath it, you build a scaffolding structure that supports each module independently.

The scaffolding is made up of the SOLID principles like the single responsibility principle, and dependency injection, but it also consists of external structures like automated unit tests. If Agile software is like a tree, you spend more time on the trunk than the branches and more time on the branches than the leaves. The advantage is that none of the leaves are directly connected, so you can move them. That’s what it means to be Agile.

What are the trade-offs? Agile enthusiasts seem to think that every other methodology is dead, but that’s not true. You can still go higher, faster, with a skyscraper, or get started sooner with the Great Wall.

Embedded control systems for automobiles are still built like a skyscraper. That’s because the requirements are very well known, and future changes are unlikely. On the other hand, you’d be surprised how many businesses run significant portions of their operations on top of Excel spreadsheets. That’s typical Great Wall methodology. It works as long as complexity stays low.

Use Agile when you need complexity and flexibility.

Editing Logic is Business Logic

If you’re writing an “n-tier” application then you typically split your application into layers, like the “GUI layer”, “business logic layer”, and “data layer”. Even within those layers we typically have more divisions.

One of the advantages of this architecture is that you can swap out your outer layers. You should be able to switch from SQL Server to MySQL and all your changes should be confined to the data layer. You should be able to run a web interface in parallel with your desktop GUI just by writing another set of modules in the GUI layer.

This all sounds great in principle, so you start building your application and you run into problems. Here’s the typical scenario:

Business rule: one salesperson can be associated with one or more regions.

Business rule: each salesperson has exactly one “home region”, which must be one of their associated regions.

Now you go and build your entity model, and it probably looks like a SalesPerson class that inherits from the Employee class. There’s a many-to-many association between SalesPerson and Region. The relationship class (SalesPersonRegion) might have a boolean property called HomeRegion (there are other ways to model it, but that’s a typical and simple way).

Now, you can enforce these constraints on your entity model by making sure that you provide at least one Region in your SalesPerson constructor, and you make that the home region by default. You provide a SetHomeRegion method that throws an exception if you pass in a non-associated region, and you throw an exception if you try to dissociate the home region. If you do all this well, then your model is always consistent, and you can reason about your model knowing certain “invariant” conditions (as they’re called) are always true.
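In code, the invariants might be enforced like this sketch (simplified: I’ve left out the Employee base class and modeled the home region as a property on SalesPerson rather than a flag on the relationship class):

```csharp
using System;
using System.Collections.Generic;

public class Region
{
    public string Name { get; }
    public Region(string name) { Name = name; }
}

public class SalesPerson
{
    private readonly HashSet<Region> _regions = new HashSet<Region>();
    public Region HomeRegion { get; private set; }

    public SalesPerson(Region initialRegion)
    {
        if (initialRegion == null) throw new ArgumentNullException(nameof(initialRegion));
        _regions.Add(initialRegion);
        HomeRegion = initialRegion;  // invariant: the home region is always set and associated
    }

    public void Associate(Region region) => _regions.Add(region);

    public void SetHomeRegion(Region region)
    {
        if (!_regions.Contains(region))
            throw new InvalidOperationException("Home region must be an associated region.");
        HomeRegion = region;
    }

    public void Dissociate(Region region)
    {
        if (region == HomeRegion)
            throw new InvalidOperationException("Cannot dissociate the home region.");
        _regions.Remove(region);
    }
}
```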

All of that, by the way, lives in the Domain Model, which is part of your Business Logic Layer.

Now let’s build our GUI where we can edit a SalesPerson. Obviously the GUI stuff goes in the GUI layer. Depending on your architecture, you might split up your GUI layer into “View” and “ViewModel” layers (if you use WPF) or “View” and “Controller” layers (if you’re using ASP.NET MVC).

Now unless you really don’t care about your users, this edit screen is going to have a Save and a Cancel button. The user is typically free to manipulate data on the screen in ways that don’t necessarily make sense to our business rules, but that’s OK as long as we don’t let them save until they get it right. Plus, they can just cancel and throw away all their edits.

My first mistake when I ran into this situation was assuming that I could just use the entities as my “scratch-pad” objects, letting the user manipulate them. In fact, the powerful data binding features of WPF make you think you can get away with this. Everyone wants to just bind their WPF controls directly to their entity model. It doesn’t work. OK, let me be a little more specific… it works great for read-only screens, but it doesn’t work for editing screens.

To do this right, you need another model of the data during the editing process. This other model has slightly different business rules than the entity model you already wrote. For instance, in our example above about editing a SalesPerson, maybe you want to display a list of possible regions on the right, let the user drag associated regions into a list on the left, drag out regions to dissociate them, and perhaps put a radio button next to each region on the left allowing you to select the home region. During the editing process it might be perfectly legitimate to have no associated regions, or no home region selected. Unfortunately, if you try to use your existing entity model to hold the data in the background, you can’t model this accurately.

There’s more than one way to solve this. Some programmers move the business rules out of the entity model and into a Validation service. Therefore your entity model allows you to build an invalid model, but you can’t commit those changes into the model unless it passes the validation. This approach has two problems: one, your entity model no longer enforces the “invariant” conditions that make it a little easier to work with (you have to test for validity every time someone hands you an entity), and two, certain data access layers, like NHibernate, automatically default to a mode where all entity changes are immediately candidates for flushing to the database. Using an entity you loaded from the database as a scratch pad for edits gets complicated. Technically you can disconnect it from the database, but then it gets confusing… is this entity real or a scratchpad?

Some other programmers move this logic into the ViewModel or Controller (both of which are in the GUI layer). This is where it gets ugly, for me. It’s true that the “edit model” is better separated from the “entity model”, but somewhere there has to be logic that maps from one to the other. When we load up our screen for editing a SalesPerson, we need to map the SalesPerson entity and relationships into the edit model, and when the user clicks Save, first we have to validate that the edit model can be successfully mapped into the entity model, and if it can’t then give the user a friendly hint about what needs to be fixed. If it can be mapped, then we do it. Do you see the problem? The sticky point is the Validation step. Validation logic is business logic. Most MVVM implementations I’ve seen essentially make the ViewModel the de-facto edit model, and they stuff the validation logic into the ViewModel as well. I think this is wrong. Business logic doesn’t belong in the GUI layer.

All of the following belongs in the business logic layer: the entity model, the edit model, the mapping logic from one to the other and back, and the validation logic.
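Here’s a sketch of what an edit model with its validation logic might look like, living in the business logic layer (region names are plain strings here just to keep the sketch short; the class name is hypothetical):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Scratch-pad edit model: temporarily allows states the entity model forbids
public class SalesPersonEditModel
{
    public List<string> AssociatedRegions { get; } = new List<string>();
    public string HomeRegion { get; set; }  // may legitimately be null mid-edit

    // Validation logic is business logic, so it lives here, not in the GUI layer
    public IEnumerable<string> Validate()
    {
        if (AssociatedRegions.Count == 0)
            yield return "Select at least one region.";
        if (HomeRegion == null)
            yield return "Choose a home region.";
        else if (!AssociatedRegions.Contains(HomeRegion))
            yield return "The home region must be one of the associated regions.";
    }

    public bool IsValid => !Validate().Any();
}
```

Only when `IsValid` is true does the mapping code commit the edit model back into the entity model; until then the user gets the validation messages as friendly hints.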

If you follow this method, then WPF’s data binding starts to make sense again. You can map directly to the entity model if you have a read-only screen, and you can map to the edit model for your editing screens. (In my opinion, many situations call for both an “add model” and an “edit model” because the allowable actions when you’re adding something might be different than when you’re editing one that already exists… but that’s not a big deal.)

I apologize if this is all obvious to you, but I really wish someone would have explained this to me sooner. 🙂

Ladder Logic vs. C#

PC programming and PLC programming are radically different paradigms. I know I’ve talked about this before, but I wanted to explore something that perplexes me… why do so many PC programmers hate ladder logic when they are first introduced to it? Ladder logic programmers don’t seem to have the same reaction when they’re introduced to a language like VB or C.

I mean, PC programmers really look down their noses at ladder logic. Here’s one typical quote:

Relay Ladder Logic is a fairly primitive language. It’s hard to be as productive. Most PLC programmers don’t use subroutines; it’s almost as if the PLC world is one that time and software engineering forgot. You can do well by applying simple software engineering methods as a consequence, e.g., define interfaces between blocks of code, even if abstractly.

I’m sorry, but I don’t buy that. Ladder logic and, say, C# are designed for solving problems in two very different domains. In industrial automation, we prefer logic that’s easy to troubleshoot without taking down the system.

In the world of C#, troubleshooting is usually done in an offline environment.

My opinion is that Ladder Logic looks a lot like “polling” and every PC programmer knows that polling is bad, because it’s an inefficient use of processor power. PC programmers prefer event-driven programming, which is how all modern GUI frameworks react to user-initiated input. They want to see something that says, “when input A turns on, turn on output B”. If you’re familiar with control systems, your first reaction to that statement is, “sure, but what if B depends on inputs C, D, and E as well”? You’re right – it doesn’t scale, and that’s the first mistake most people make when starting with event-driven programming: they put all their logic in the event handlers (yeah, I did that too).

Still, there are lots of situations where ladder logic is so much more concise than, say, C# at implementing the same functionality that I just don’t buy all the hate directed at ladder logic. I decided to illustrate it with an example. Take this relatively simple ladder logic rung:

What would it take to implement the same logic in C#? You could say all you really need to write is D = ((A && B) || D) && C; but that’s not exactly true. When you’re writing an object oriented program, you have to follow the SOLID principles. We need to separate our concerns. Any experienced C# programmer will say that we need to encapsulate this logic in a class (let’s call it “DController” – things that contain business logic in C# applications are frequently called Controller or Manager). We also have to make sure that DController only depends on abstract interfaces. In this case, the logic depends on access to three inputs and one output. I’ve gone ahead and defined those interfaces:

    public interface IDiscreteInput
    {
        bool GetValue();
        event EventHandler InputChanged;
    }

    public interface IDiscreteOutput
    {
        void SetValue(bool value);
    }

Simple enough. Our controller needs to be able to get the value of an input, and be notified when any input changes. It needs to be able to change the value of the output.

In order to follow the D in the SOLID principles, we have to inject the dependencies into the DController class, so it has to look something like this:

    internal class DController
    {
        public DController(IDiscreteInput inputA, 
            IDiscreteInput inputB, IDiscreteInput inputC, 
            IDiscreteOutput outputD)
        {
        }
    }

That’s a nice little stub of a class. Now, as an experienced C# developer, I follow test-driven development, or TDD. Before I can write any actual logic, I have to write a test that fails. I break open my unit test suite, and write my first test:

        [Test]
        public void Writes_initial_state_of_false_to_outputD_when_initial_inputs_are_all_false()
        {
            var mockInput = MockRepository.GenerateStub<IDiscreteInput>();
            mockInput.Expect(i => i.GetValue()).Return(false);
            var mockOutput = MockRepository.GenerateStrictMock<IDiscreteOutput>();
            mockOutput.Expect(o => o.SetValue(false));

            var test = new DController(mockInput, mockInput, mockInput, mockOutput);

            mockOutput.VerifyAllExpectations();
        }
Ok, so what’s going on here? First, I’m using a mocking framework called Rhino Mocks to generate “stub” and “mock” objects that implement the two dependency interfaces I defined earlier. This first test just checks that the first thing my class does when it starts up is to write a value to output D (in this case, false, because all the inputs are false). When I run my test it fails, because my DController class doesn’t actually call the SetValue method on my output object. That’s easy enough to remedy:

    internal class DController
    {
        public DController(IDiscreteInput inputA, IDiscreteInput inputB, 
            IDiscreteInput inputC, IDiscreteOutput outputD)
        {
            if (outputD == null) throw new ArgumentOutOfRangeException("outputD");
            outputD.SetValue(false);
        }
    }

That’s the simplest logic I can write to make the test pass. I always set the value of the output to false when I start up. Since I’m calling a method on a dependency, I also have to include a guard clause in there to check for null, or else my tools like ReSharper might start complaining at me.

Now that my tests pass, I need to add some more tests. My second test validates when my output should turn on (only when all three inputs are on). In order to write this test, I had to write a helper class called MockDiscreteInputPatternGenerator. I won’t go into the details of that class, but I’ll just say it’s over 100 lines long, just so that I can write a reasonably fluent test:

        [Test]
        public void Inputs_A_B_C_must_all_be_true_for_D_to_turn_on()
        {
            MockDiscreteInput inputA;
            MockDiscreteInput inputB;
            MockDiscreteInput inputC;
            MockDiscreteOutput outputD;

            var tester = new MockDiscreteInputPatternGenerator()
                .InitialCondition(out inputA, false)
                .InitialCondition(out inputB, false)
                .InitialCondition(out inputC, false)
                .CreateSimulatedOutput(out outputD)

                // ... the pattern steps that toggle A, B, and C through
                // every combination that should NOT turn the output on
                // are elided here ...

                .AssertThat(outputD).ShouldBe(true); // finally turns on

            var test = new DController(inputA, inputB, inputC, outputD);
            // (the tester drives the recorded input pattern against the
            // controller and checks the assertion above)
        }
What this does is cycle through all the combinations of inputs that don’t cause the output to turn on, and then I finally turn them all on, and verify that it did turn on in that last case.

I’ll spare you the other two tests. One checks that the output initializes to on when all the inputs are on initially, and the last checks the conditions that turn the output off (only C turning off; A and B have no effect). In order to get all of these tests to pass, here’s my final version of the DController class:

    internal class DController
    {
        private readonly IDiscreteInput inputA;
        private readonly IDiscreteInput inputB;
        private readonly IDiscreteInput inputC;
        private readonly IDiscreteOutput outputD;

        private bool D; // holds last state of output D

        public DController(IDiscreteInput inputA, IDiscreteInput inputB, 
            IDiscreteInput inputC, IDiscreteOutput outputD)
        {
            if (inputA == null) throw new ArgumentOutOfRangeException("inputA");
            if (inputB == null) throw new ArgumentOutOfRangeException("inputB");
            if (inputC == null) throw new ArgumentOutOfRangeException("inputC");
            if (outputD == null) throw new ArgumentOutOfRangeException("outputD");

            this.inputA = inputA;
            this.inputB = inputB;
            this.inputC = inputC;
            this.outputD = outputD;

            inputA.InputChanged += new EventHandler((s, e) => setOutputDValue());
            inputB.InputChanged += new EventHandler((s, e) => setOutputDValue());
            inputC.InputChanged += new EventHandler((s, e) => setOutputDValue());

            setOutputDValue(); // write the initial state of the output
        }

        private void setOutputDValue()
        {
            bool A = inputA.GetValue();
            bool B = inputB.GetValue();
            bool C = inputC.GetValue();

            bool newValue = ((A && B) || D) && C;
            D = newValue;
            outputD.SetValue(newValue);
        }
    }

So if you’re just counting the DController class itself, that’s approaching 40 lines of code, and the only really important line is this:

    bool newValue = ((A && B) || D) && C;

It’s true that as you wrote more logic, you’d refactor more and more repetitive code out of the Controller classes, but ultimately most of the overhead never really goes away. The best you’re going to do is develop some kind of domain specific language which might look like this:

    var dController = new OutputControllerFor(outputD)
        .WithInputs(inputA, inputB, inputC)
        .DefinedAs((A, B, C, D) => ((A && B) || D) && C);

…or maybe…

    var dController = new OutputControllerFor(outputD)
        .WithInputs(inputA, inputB, inputC)
        .TurnsOnWhen((A, B, C) => A && B && C)
        .StaysOnWhile((A, B, C) => C);

…and how is that any better than the original ladder logic? That’s not even getting into the fact that you wouldn’t be able to use breakpoints in C# when doing online troubleshooting. This code would be a real pain to troubleshoot if the sensor connected to inputA was becoming flaky. With ladder logic, you can just glance at it and see the current values of A, B, C, and D.

Testing: the C# code is complex enough that it needs tests to prove that it works right, but the ladder logic is so simple, so declarative, that it’s obvious to any Controls Engineer or Electrician exactly what it does: turn on when A, B, and C are all on, and then stay on until C turns off. It doesn’t need a test!

Time-wise: it took me about a minute to get the ladder editor open and write that ladder logic, but about an hour to put together this C# example in Visual Studio.

So, I think when someone gets a hate on for ladder logic, it just has to be fear. Ladder logic is a great tool to have in your toolbox. Certainly don’t use ladder logic to write an ERP system, but do use it for discrete control.

The Payback on Automated Unit Tests

I’m a Test-Driven Development (TDD) convert now. All of the “business logic” (aka “domain logic”), and more than 95% of my framework logic is covered by automated unit tests because I write the test before I write the code, and I only write enough code to pass the failing test.

It’s really hard to find anyone talking about a measurable ROI for unit testing, but it does happen. This study says it took, on average, 16% longer to develop the initial application using TDD than it did using normal development methods. This one reported “management estimates” of 15% to 35% longer development times using test-driven development. Both studies reported very significant reduction in defects. The implication is that the payback comes somewhere in the maintenance phase.

From personal experience I would say that as I gain experience with TDD, I get much faster at it. At the beginning it was probably doubling my development time, but now I’m closer to the estimates in the studies above. I’ve also shifted a bit from “whitebox testing” (where you test every little inner function) to more “blackbox testing”/”integration testing” where you test at a much higher level. I find that writing your tests at a higher level means you write fewer tests, and they’re more resilient to refactoring (when you change the design of your software later to accommodate new features).

A Long Term Investment

It’s hard to justify TDD because the extra investment of effort seems a lot more real and substantial than the rather flimsy value of quality. We can measure hours easily, but quality, not so much. That means we have a bias in our measurements.

Additionally, if and when TDD pays us back, it’s in the future. It’s probably not in this fiscal year. Just like procrastination, avoiding TDD pays off now. As humans, we’re wired to value immediate value over long term value. Sometimes that works against us.

A Theory of TDD ROI

I’m going to propose a model of how the ROI works in TDD. This is scientific, in that you’ll be able to make falsifiable predictions based on this model.

Start out with your software and draw out the major modules that are relatively separate from each other. Let’s say you’re starting with a simple CRUD application that just shows you data from a database and lets you Create, Read, Update, and Delete that data. Your modules might look like this:

  • Contact Management
  • Inventory Management

If you go and implement this using TDD vs. not using TDD, I suspect you’ll see a typical 15% to 35% increase in effort using the TDD methodology. That’s because the architecture is relatively flat and there’s minimal interaction. Contact Management and Inventory Management don’t have much to do with each other. Now let’s implement two more modules:

  • Orders
  • Purchasing

These two new modules are also relatively independent, but they both depend on the Contact Management and Inventory Management modules. That just added 4 dependency relationships. The software is getting more complex, and it’s becoming more difficult to understand the effect of small changes. The latter modules can still be changed relatively safely because nothing much depends on them, but the first two can start to cause trouble.

Now let’s add a Permissions module. Obviously this is a “cross cutting” concern – everything depends on the permissions module. Since we had 4 existing modules, we’ve just added another 4 dependency relationships.

Ok, now we’ll add a Reporting module. It depends on the 4 original modules, and it also needs Permissions information, so we’ve added another 5 dependency relationships.

Are you keeping count? We’re at 13 relationships now with just 6 modules.

Now let’s say we have to add a function that will find all customers (Contact Module) who have a specific product on order (Orders) that came from some manufacturer (Purchasing and Contact Management) and a certain Lot # (Inventory) and print a report (Reporting Module). Obviously this will only be available to certain people (Permissions).

That means you have to touch all 6 modules to make this change. Perhaps while you’re messing around in the Inventory Management module you notice that the database structure isn’t going to support this new feature. Maybe you have a many-to-one relationship where you realize we really should have used a many-to-many relationship. You change the database schema, and you change the Inventory Module, but instead of just re-testing that module, you now have to fully re-test all the modules that depend on it: Orders, Purchasing, and Reports. It’s likely we made assumptions about that relationship in those modules. What if we need to change those? Does the effect cascade to all the modules in the software? Likely.

It doesn’t take long to get to the point where you need to do a 100% regression test of your entire application. How many new features potentially touch all modules? How long does it take to do a full regression test? That’s your payback.

You can measure the regression test time, and if you use a tool like NDepend you can measure and graph the dependencies of an existing application. Using your source control history, you can go back and determine how many different modules were touched by each new feature and bug fix since the beginning of time. You should be able to calculate:

  • How much time it takes to regression test each module
  • Probability of each module changing during an “average” change
  • The set of modules to regression test for any given module changing

Given that, you can figure out the average time to regression test the average change.
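Assuming you’ve mined those three numbers from your test plans and source control history, the calculation itself is just a probability-weighted sum. Here’s a sketch (the module names and figures in the test below are invented for illustration):

```csharp
using System.Linq;

public static class RegressionCostModel
{
    // Expected regression-test time for the "average" change:
    // the sum, over all modules, of
    //   P(this module is touched by an average change)
    //   * hours to re-test the module plus everything that depends on it.
    public static double ExpectedRegressionHours(
        (string Name, double PChange, double RegressionHours)[] modules)
    {
        return modules.Sum(m => m.PChange * m.RegressionHours);
    }
}
```

For example, if Contact Management is touched by 30% of changes and costs 12 hours to re-test along with its dependents, it contributes 3.6 expected hours; summing every module’s contribution gives the average regression cost per change.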

Obviously, the average regression test time must be longer than 15% to 35% of the time it took to write the feature (assuming you’ll keep following TDD practices during the maintenance phase). The amount of time it takes to test in excess of that is payback against the initial 15% to 35% extra you spend developing the application in the first place.

What kind of numbers are we talking about?

Let’s run some numbers. A lot of places say 30 to 50% of software development time is spent testing. Let’s assume 50% is for the apps with “very interconnected dependencies”. Also, let’s say our team spends an extra 33% premium to use a TDD methodology.

Now take a project that would originally take 6 months to develop and test; with TDD, development takes about 33% longer, so add 2 months. The average change takes 3 days to code and test, or 4 days with TDD. From personal experience (a 3-month project had a regression test plan that took about 1 day to run through), the full regression test on something that took 6 months to develop would be about 2 days.

Without TDD, a feature would take 3 days to write and test, and then 2 days to do a full regression test. Using TDD, it would take 4 days to write and test, but zero time to regression test, so you gain a day.

Therefore, since you had to invest an extra 2 months (40 days assuming one developer) in the first place to do TDD, you’d see a break-even once you were in the maintenance phase and had implemented about 40 changes, each taking 4 days, which means 160 days. That’s about 8 months. That ignores the fact that regression test time keeps increasing as you add more features.
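The break-even arithmetic above can be captured in a few lines. This is only a sketch; the parameter values in the usage note are the ones assumed in this section, and you should plug in your own:

```csharp
public static class TddBreakEven
{
    // Returns the number of maintenance changes needed to recoup the
    // extra up-front TDD investment, and the elapsed working days
    // spent implementing those changes.
    public static (double Changes, double ElapsedDays) Compute(
        double tddUpFrontDays,    // extra days spent up front because of TDD
        double changeDaysNoTdd,   // days per change, incl. manual regression test
        double changeDaysTdd)     // days per change with TDD, no manual regression
    {
        double savedPerChange = changeDaysNoTdd - changeDaysTdd;
        double changes = tddUpFrontDays / savedPerChange;
        return (changes, changes * changeDaysTdd);
    }
}
```

With the numbers above (40 extra days up front, 5 days per change without TDD versus 4 days with), that gives 40 changes and 160 working days, or roughly 8 months, matching the estimate in the text.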

Obviously your numbers will vary. The biggest factor is the ratio of regression test time vs. the time it takes you to implement a new feature. More dependencies means more regression testing.


If you have a very flat architecture with few dependencies, then the TDD payback is longer (if there even is a payback). On the other hand, if you have highly interdependent software (modules built on top of other modules), TDD pays you back quickly.