Tag Archives: innovation

Automation and the Guaranteed Minimum Income

In recent years I’ve been interested in the effects of automation on our economy and our society. Throughout history every advance in technology has brought more wealth, health, and opportunity to pretty much everyone. With every revolution people changed jobs but their lives got significantly better. When farms mechanized, workers moved into the city and got factory jobs, and Henry Ford’s assembly lines made use of this labor to great effect.

Early factories needed labor in great quantities, and as industrial processes became more efficient at utilizing labor, the value of human labor rose and the demand kept increasing. So did pay. Factory workers up through the 70’s could afford a nice house to raise a family in, a big car, and even a boat or nice vacations. Since the 70’s, however, the purchasing power of a factory worker or even a bank teller has been pretty flat. These are the two professions that have seen the most automation in the last 30 years, thanks to industrial robots and automated teller machines. If automation makes workers more productive, why isn’t that translating into purchasing power?

There are two types of technological improvements at work here. A farmer with a tractor is very productive compared to one with a horse and plow. The displaced farm workers who went to the city were given the tools of the industrial revolution: steam engines, motors, pumps, hydraulics, and so forth. These technologies amplified the value of human labor. That’s the first kind of technological improvement. The second kind is the automated teller or the welding robot. The older technologies added value even to the lowest-skilled employees, but the new ones reduce their value, and the new jobs require significantly higher skill levels. There’s something about this new revolution that’s just… different. The demand for low-skill labor is drying up.

The increasing divide between the “haves” and the “have-nots” has been documented extensively. Some divide is good and promotes the economy and productivity. Too much separation is a recipe for significant problems.

I’m not the only one worrying about this issue, and as I’ve followed it over the last few years I’ve been surprised by the amount of interest in a Guaranteed Minimum Income or some such plan. Basically it involves getting rid of every low-income assistance plan such as social security, welfare, minimum wage laws, etc., and creating a single universal monthly benefit that everyone is entitled to. Some people are talking about a number as high as $24,000 per year per adult. Considering that the 2015 federal poverty level in the US is just below $12,000 for a single adult, you can see that $24,000 per adult is no trifling amount.

For comparison, a little Googling tells me that the US GDP per capita is around $55,000. Think about that for a second. You’re talking about guaranteeing almost 45% of the productivity output of the country to be distributed evenly across all adults. One presumes you would also provide some extra money per child in a household, but to be fair the “per capita” figure includes kids too. It’s possible. Sure seems a bit crazy though.
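The arithmetic is easy to sanity-check. Here's a quick sketch using the post's round numbers (2015-era estimates, not authoritative data):

```python
# Rough sanity check of the figures above (2015-era round numbers, USD).
gdp_per_capita = 55_000    # approximate US GDP per capita
proposed_benefit = 24_000  # proposed guaranteed income per adult
poverty_level = 12_000     # approximate 2015 federal poverty level, single adult

share_of_output = proposed_benefit / gdp_per_capita
print(f"{share_of_output:.0%}")          # -> 44% of per-capita output
print(proposed_benefit / poverty_level)  # -> 2.0, twice the poverty level
```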

Is it practical? Won’t some people choose not to work? Will productivity go down? It turns out this type of program has already been tried: a Canadian experiment called Mincome. The results were generally positive. There was a small drop in hours worked by certain people, mostly new mothers and teenagers. These costs were offset in other areas: “in the period that Mincome was administered, hospital visits dropped 8.5 percent, with fewer incidents of work-related injuries, and fewer emergency room visits from car accidents and domestic abuse.” More teenagers graduated. There was less mental illness.

I’m fiscally conservative, but I’m mostly pragmatic. It’s only my years of exposure to automation, technology and working in factories that makes me ask these questions. Not only do I believe that people should contribute, I believe that people need to contribute for their own happiness and well-being. That’s why I don’t think paying people to sit at home is the ultimate solution.

The elephant in the room is this: as technology improves, a greater proportion of the population will simply be unemployable. There, I said it. I know it’s a disturbing thought. Our society is structured around the opposite of that idea. Men are particularly under pressure to work. The majority of the status afforded to men in our society comes from their earning potential. The social pressure would still be there to work, even as a supplement to a guaranteed minimum income, so we still need to find something for those people to do. Perhaps if we expand the accepted role of men in society then we can fill that need with volunteer work. Maybe.

What’s the right answer? I don’t know. For lack of a better term, the “American Dream” was accessible to anyone if you were willing to work hard and reinvest that effort into yourself. Not everyone did that, but many people created significant fortunes for themselves after starting in the stockroom and working their way up. That security gave people a willingness to take risks and be entrepreneurial. Proponents of the idea say that a minimum income would bring back that innovative edge. Entrepreneurs could try new ideas repeatedly until they found one that worked, and not worry about their family starving. With your basic necessities met, you could start to realize your potential.

I do know that as we continue down this road of increasing automation, we can’t be leaving a greater and greater proportion of the populace without the basic resources they need to survive. Do we expect them to grow their own food? On what land? Do we expect them to do a job that I could program a robot to do, if the robot’s average cost is only $10,000/year? Do you have some valuable job we can retrain them to do? One that pays enough to support a family?

Look, I don’t like the alternatives either, but it’s better than an armed revolt.

What can I do about our global resource problems?

On Saturday I posted Hacking the Free Market. You may have noticed the “deep-thoughts” tag I attached… that’s just a catch-all tag I use to warn readers that I’m headed off-topic in some kind of meandering way. In this post, I want to follow up on that discussion, but bring it back to the topic of automation and engineering.

To summarize my previous post:

  • We haven’t solved the problem of how to manage global resources like the atmosphere and the oceans
  • The market isn’t factoring in the risk of future problems into the cost of the products we derive from these resources
  • I can’t think of a solution

Just to put some background around it, I wrote that post after reading an article titled “Engineers: It’s Time to Work Together and Save the World” by Joshua M. Pearce, PhD, in the March/April 2011 issue of Engineering Dimensions. In the article, Dr. Pearce asks all of Ontario’s engineers to give up one to four hours of their spare time every week to tackle the problem of climate change in small ways. His example is to retrofit the pop machines in your office with microcontrollers hooked to motion sensors that turn off the lights and the compressor when nobody is around. He offers spreadsheets on his website which let you calculate if there’s a payback.
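I haven't seen Dr. Pearce's spreadsheets, but the payback calculation presumably looks something like this back-of-the-envelope sketch (every number below is my own made-up illustration, not his):

```python
# Hypothetical payback estimate for a vending-machine occupancy-sensor retrofit.
# All figures are invented for illustration; use real measurements in practice.
retrofit_cost = 150.0    # microcontroller + motion sensor + labour, USD
idle_power_kw = 0.4      # lights + compressor draw while idle, kW
idle_hours_per_day = 14  # unoccupied hours, averaged over the week
electricity_rate = 0.10  # USD per kWh

annual_savings = idle_power_kw * idle_hours_per_day * 365 * electricity_rate
payback_years = retrofit_cost / annual_savings
print(f"saves ${annual_savings:.0f}/yr, payback in {payback_years:.1f} years")
```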

Now I’m not an economist, but I’m pretty sure that not only would this not help the problem of dwindling resources, but unless we also start factoring the future cost of fossil fuel usage into the cost of the energy, the actions Dr. Pearce is suggesting will make the situation worse. When we use technology to utilize a resource more efficiently, we increase the demand for that resource. This is a well-documented effect, known to economists as the Jevons paradox.

I’m not saying it isn’t a good thing to do. It’s a great thing to do for the economy. Finding more efficient ways to utilize resources is what drives the expansion of the economy, but it’s also driving more demand for our resources. Mass-marketing energy conservation as the solution to our resource problems is a blatant lie, and yet it’s a lie I hear more and more.

That’s where I was at when I wrote the previous article. “How can I, a Professional Engineer and Automation Enthusiast, do something that can make a real difference for our global resource problems?”

I’m afraid the answer I’ve come up with is, “nothing”. My entire job, my entire career, and all of my training… indeed my entire psychology… is driven towards optimizing systems. I make machines that transform raw materials into finished products, and I make them faster, more efficient, and more powerful. I don’t know who is going to solve our global resource problems, but I don’t think it’s going to be someone in my line of work. It’s like asking a fish to climb Mt. Everest.

I think the solution lies somewhere in the hands of politicians, lawyers, and voters. We do a so-so job of managing resources on a national scale, but we’d better extend that knowledge to a global scale, and do it quickly. There might even be technical solutions that will help, but I think those will come from the fields of biotechnology, nanotechnology, and material science, not automation.

In the meantime, I’m going to continue blogging, contributing to online Q&A sites, writing tutorials, and writing open source software and releasing it for free, because I believe these activities contribute a net positive value to the world. If you’re an automation enthusiast or engineer reading this, I urge you to consider doing something similar. It is rewarding.

From Automation to Fabrication?

Here’s my simplified idea of what factories do: they make a whole lot of copies of one thing really cheap.

The “really cheap” part only comes with scale. Factories don’t make “a few” of anything. They’re dependent on a mass-market economy. Things need to be cheap enough for the mass market to buy them, but they also need to change constantly. As humans, we have an appetite for novelty, and as the speed of innovation increases, factories spend more time retooling.

The result is more demand for flexibility in automation. Just look at the rise in Flexible Automation and, more recently, Robotic Machining.

Where does this trend point? We’ve already seen low cost small scale fabrication machines popping up, like MakerBot and CandyFab. These are specialized versions of 3D Printers. Digital sculptors can design their sculpture in software, press print, and voila, the machine prints a copy of their object.

Now imagine a machine that’s part 3D Printer, 6-axis robot, laser cutter/etcher, and circuit board fabricator all-in-one. Imagine our little machine has conveyors feeding it stock parts from a warehouse out back, just waiting for someone to download a new design.

That kind of “fabrication” machine would be a designer’s dream. In fact, I don’t think our current pool of designers could keep up with demand. Everyone could take part in design and expression.

I don’t see any reason why this fictional machine is actually beyond our technological capabilities. It would certainly be very expensive to develop (I’m going to take a stab and say it’s roughly as complex as building an auto plant), but once we’ve made one, we have the capability to make many more.

For more ideas, take a look at what MIT’s Fab Lab is working on.

Renaming “Best Practices”

Ok, so I’ve complained about “Best Practices” before, but I want to revisit the topic and talk about another angle. I think the reason we go astray with “Best Practices” is the name. Best. That’s pretty absolute. How can you argue with that? How can any other way of doing it be better than the “Best” way?

Of course there are always better ways to do things. If we don’t figure them out, our competitors will. We should call these standards Baseline Practices. They represent a process for performing a task with a known performance curve. What we should be telling employees is, “I don’t care what process you use, as long as it performs at least as well as this.” That will encourage innovation. When we find better ways, that new way becomes the new baseline.

In case you haven’t read Zen and the Art of Motorcycle Maintenance and its sequel, Lila: Pirsig describes two forms of quality, static and dynamic. Static quality is things like procedures and cultural norms. They are a way that we pass information from generation to generation, or just between peers on the factory floor. Dynamic quality is the creativity that drives change. Together they form a ratchet-like mechanism: dynamic quality moves us from point A to point B, and static quality filters the B points, throwing out the ones that fall below the baseline.

I’ve heard more than one person say that we need to get everyone doing things the same way, and they use this as an argument in favour of best practices. I think that’s wrong. We have baseline practices to facilitate knowledge sharing. They get new employees up to speed fast. They allow one person to go on vacation while another person fills in for them. They are the safety net. But we always need to encourage people to go beyond the baseline. It needs to be stated explicitly: “we know there are better ways of doing this, and it’s your job to figure out what those ways are.”

Overengineering

“Overengineering” is a word that gets thrown around a lot. It’s used with a negative connotation, but I have a hard time defining it.

It’s not the same as Premature Optimization. That’s when you add complexity in order to improve performance, at the expense of readability, but the payoff isn’t worth the cost.

If “to engineer” is synonymous with “to design”, then overengineering is spending too much time designing, and not enough time… what? Implementing?

Let’s say you and I need to travel across the continent. You despise overengineering, so you set off on foot immediately, gaining a head start. I go and buy some rollerblades, and easily pass you before the end of the day. Seeing me whiz past, you head to the nearest sporting goods store and buy a ten-speed. You overtake me not long after breakfast on the second day. “Hmm,” I think. I don’t have much money, but I rollerblade on over to a junk yard and get an old beater car. It doesn’t run though. I do some trouble-shooting… the electrical system is fine, and we have spark, but we’re just not getting ignition. I might be able to fix it, and I might not. Should I go and buy a faster bike than yours, and try to catch up, or should I take my chances and see if I can fix this car? I’m pretty sure I can fix it, and if I can, I can easily win; but if I can’t, I’ve given up the lower-risk (though lower-payoff) option of catching up on a faster bike.

It’s this last type of choice that we’re faced with as engineers. You have a project with an 8-week timespan. We estimated that it will take 10 weeks at 50 hours per week using standard practices, so the project manager just wants everyone to work 60+ hour weeks using the same practices because from their point of view, that’s the “safe” bet. As an engineer, you might be able to spend 3 weeks building something with a 90% chance of making you 2 times more efficient at building this type of project: 3 weeks spent building the tool, and then it would only take 5 weeks to complete the project, so you’re done in 8 weeks. Not only that, but then you’ve got a tool you can re-use the next time.

If every time we had this choice, we chose to make the tool first, then on average we’ll end up much further ahead. Every time we succeed (90% of the time), we’ll greatly improve our capabilities. We’ll out-innovate the competition, with lower costs and faster time to market. However, a manager is much more likely not to build the tool because they can’t tolerate the risk. The larger the company, the worse this is, and the typical excuse leveled at the “tool” solution is that it’s “overengineering.”
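The expected-value math behind that argument is worth writing down. Using the numbers from the scenario above (the calculation framing is mine):

```python
# Expected durations for the "build the tool first" gamble described above.
p_success = 0.9    # chance the tool actually works
tool_weeks = 3     # time to build the tool
normal_weeks = 10  # project duration with standard practices
tooled_weeks = 5   # project duration once the tool doubles your efficiency

# Single project: succeed -> 3 + 5 weeks; fail -> 3 wasted + 10 weeks.
expected = p_success * (tool_weeks + tooled_weeks) \
         + (1 - p_success) * (tool_weeks + normal_weeks)
print(expected)  # ~8.5 weeks expected, vs. a guaranteed 10 the standard way

# Over repeated projects, the 3-week cost is paid once but the savings compound:
n_projects = 5
print(tool_weeks + n_projects * tooled_weeks)  # 28 weeks with the tool
print(n_projects * normal_weeks)               # 50 weeks without it
```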

Imagine we’re back in the cross-continent scenario, and I’ve decided to fix the car. Two days later I’ve got car parts all over the ground, and I haven’t moved an inch. Meanwhile, you’re a hundred miles away from me on your bike. Who’s winning the race? You can clearly point to your progress… it’s progress that anyone can clearly see. I, on the other hand, can only show someone a car that’s seemingly in worse shape than it started in, plus my inability to move over the last few days. The pressure starts to mount. It’s surprising how quickly people will start to criticize the solution they don’t understand. They’ll call me a fool for even attempting it, and applaud you on your straightforward approach.

Of course you realize that if I can get the car working, the game’s over. By the time you see me pass you, it’ll be too late to pull the same trick yourself. I’ll already be travelling fast enough that you can’t catch me. If there’s a 90% chance I can fix the car, I’d say that’s the logical choice.

So is fixing the car “overengineering”? If the race was from here to the corner, then yes, I’d say so. The effort needs to be matched to the payback. Even if the race were from here to the next town, it wouldn’t give you a payback. But what if we were going to race from here to the next town once every day for the rest of the year? Wouldn’t it make sense to spend the first week getting myself a car, and then win the next 51 weeks of races?

In business, we’re in it for the long haul. It makes sense to spend time making ourselves more efficient. Why, then, do so many companies have systems that encourage drastic short term decision making at the expense of long term efficiencies and profit? How do we allow for the reasonable risk of failure in order to collect the substantial reward of innovation?

You start by finding people who can see the inefficiencies — the ones who can see what could easily be improved, streamlined, and automated. Then you need to take those people out of the environment where every minute they’re being pushed for another inch of progress. Accept that failure is a possible outcome once in a while. Yes, there’s risk, but there are also rewards. One doesn’t come without the other.

Book Review: The Lights in the Tunnel

I was paging through the Amazon store on my Kindle when I came across a book that caught my eye: The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future (Volume 1)

It’s not every day you come across a book about Automation, and for $6, I figured, “what the heck?”

The author, Martin Ford, is a Computer Engineer from California. To summarize, he’s basically saying the following:

  • Within 80 years, we will have created machines that will displace significantly more than half of the current workforce

This is a topic that interests me. Not only do I have a career in automation, but I’ve previously wondered about exactly the same question that Ford poses. What happens if we create machines advanced enough that a certain segment of the population will become permanently unemployed?

The title of the book comes from Ford’s “Tunnel Analogy”. He tries to model the economy as a tunnel of lights, with each light representing a person and its brightness indicating that person’s wealth; the tunnel is lined with other patches of light: businesses. The lights float around interacting with the businesses. Some businesses grow larger and stronger while others shrink and die off, but ultimately the brightness of the tunnel (the sum of the lights) appears to be increasing.

I found the analogy to be a bit odd myself. Actually, I wasn’t quite sure why an analogy was necessary. We’re all pretty familiar with how the free market works. If you don’t get it, I don’t think the tunnel analogy is going to help you. In fact, one excerpt from his description of the tunnel makes me wonder if Ford himself even “gets” the concept of how the economy works:

As we continue to watch the lights, we can now see that they are attracted to the various panels. We watch as thousands of lights steam toward a large automaker’s panels, softly make contact and then bounce back toward the center of the tunnel. As the lights touch the panel, we notice that they dim slightly while the panel itself pulses with new energy. New cars have been purchased, and a transfer of wealth has taken place.

That particular statement irked me during the rest of the book. That’s not a good illustration of a free market; that’s an illustration of a feudal system. In a free market, we take part in mutually beneficial transactions. The automaker has a surplus of cars and wants to exchange them for other goods that it values more, and the consumer needs a car and is willing to trade his/her goods (or a promise of debt) for it. When the transaction takes place, presumably the automaker has converted a car into something they wanted more than the car, and the consumer has converted monetary instruments into something they wanted more: a car. Both the automaker and the consumer should shine brighter as a result of the transaction.

Ford has confused money with wealth, and that’s pretty dangerous. As Paul Graham points out in his excellent essay on wealth:

Money Is Not Wealth

If you want to create wealth, it will help to understand what it is. Wealth is not the same thing as money. Wealth is as old as human history. Far older, in fact; ants have wealth. Money is a comparatively recent invention.

Wealth is the fundamental thing. Wealth is stuff we want: food, clothes, houses, cars, gadgets, travel to interesting places, and so on. You can have wealth without having money. If you had a magic machine that could on command make you a car or cook you dinner or do your laundry, or do anything else you wanted, you wouldn’t need money. Whereas if you were in the middle of Antarctica, where there is nothing to buy, it wouldn’t matter how much money you had.

There are actually two ways to create wealth. First, you can make it yourself (grow your own food, fix your house, paint a picture, etc.), or secondly you can trade something you value less for something you value more. In fact, most of us combine these two methods: we go to work and create something that someone else wants so we can trade it for stuff that we want (food, cars, houses, etc.).

Later in the book, Ford makes a distinction between labor-intensive and capital-intensive industries. He uses YouTube as an example of a capital-intensive business because it was purchased (by Google) for $1.65B and doesn’t have very many employees. I can’t believe he’s using YouTube as an example of a capital-intensive industry. The new crop of online companies is extremely low-overhead; Facebook was started in a dorm room. Again, Ford seems to miss the fact that money is not equal to wealth. Google didn’t buy YouTube for its capital; it bought the audience. Google’s bread and butter is online advertising, so it purchased YouTube because those users are worth more to Google than they were to the YouTube shareholders who sold out. Wealth was created during the transaction because all parties feel they have something more than they started with.

Back to Ford’s premise for a moment: is it possible that we could create machines advanced enough that the average person would have no place in a future economy? I don’t find it hard to believe that we could eventually create machines capable of doing most of the work that we do right now. We’ve certainly already created machines that do most of the work that the population did only decades ago. The question is, can we get to the point where the average person has no value to add?

Let’s continue Ford’s thought experiment for a moment. You and I and half the population are now out of work and nobody will hire us. Presumably the applicable constitutional elements are in place so we’re still “free”. What do we do? Well, I don’t know about you, but if I had no job and I was surrounded by a bunch of other people with no job, I’d be out foraging for food. When I found some, I’d probably trade a bit of it to someone who could sew, in exchange for patching up my shirt. If I had a bit of surplus, I’d probably plant a few extra seeds the next spring and get a decent harvest to get me through the next winter.

I’m not trying to be sarcastic here. I’m trying to point out the obvious flaw in the idea that a large percentage of the population couldn’t participate in the economy. If that were the case, the large part of the population would, out of necessity, form their own economy. In fact, if we’re still playing in Ford’s dreamland here, where technology is so advanced that machines can think and perhaps even nanotechnology is real, I’d probably hang around the local dump and forage for a bit of technology there. The treasures I’d find there would probably put me in more luxury than I currently have in 2010.

So, if you take the thought experiment to the extreme, it breaks down. Given a free society divided into haves and have-nots, where the haves don’t have any use for the have-nots, then what you really have is two separate and distinct societies, each with its own bustling economy. Whether or not there is trade between those two economies, one thing is certain: almost everyone still has a job.

Of course, it’s not like we’re going to wake up tomorrow and technology will suddenly throw us all out of our jobs. The shift in technology will happen gradually over time. As technology improves, people will need to adapt (as we do every day). As I’ve said before, I think a major shift away from the mass consumption of identical items is already underway. As the supply of generic goods goes up, our perceived value of them goes down.

Ford doesn’t seem to participate in automation on a daily basis, so I think he lacks the experience of what automation really does. Automation drives down cost, but it also increases supply and reduces novelty at the same time. Automated manufacturing makes products less valuable, and by contrast it makes people, and the things only people can do, more valuable.

There’s a company out there called Best Made Co. that sells $200 hand-made axes. There’s a three-week waiting time. That’s actually a feature: the axes are in such demand that you have to wait three weeks for one. It’s made by hand, by people who are passionate about axes. Feature? I think so.

In Ford’s dystopia, when the robber-barons are sitting atop their mountains of widgets that they’ve produced in their lights-out factory, don’t you think one of them might want to buy a sincere story? Wouldn’t they be interested in seeing a movie, or going to church on Sunday, or reading a book? When all of your basic needs are met, these higher-level needs will all see more demand. They’re also hard to automate. Some things have value because they’re done by people. Some things would be worth less if you did automate them:

  • Relationships (with real people)
  • Religion
  • Sports
  • The Arts
  • “Home Cooked” or “Hand-Made”
  • Stories (of origins, extremes, rescues, journeys, relationships, redemption, and the future)

Do you recognize that list? That’s the list of things we do when we’re finished with the drudgery of providing for our survival. We cheer for our sports team on a Sunday afternoon, or go and see an emotional movie on Friday night. Some people buy $200 axes (or iPhones, anyone?) because they come with a fascinating story that they can re-tell to their friends. (Bonus points if it’ll get you laid.)

Ford scoffs at the idea of a transition to a service based economy. He suggests implementing heavy taxes on industry and redistributing that to people who otherwise would have been doing the job the robots are doing, just so they can buy the stuff the robots are producing. He can’t see anything but an economy based on the consumption of material goods. I say: go ahead and automate away the drudgery of daily existence, make the necessities of life so cheap they’re practically free, and let’s get on with building real wealth: strong relationships, a sense of purpose, and a society that values life-long self improvement (instead of life-long accumulation of crap). By making the unimportant stuff less valuable, automation is what will free us to focus more on what’s important.

Clean Ladder Logic

I’ve recently been reading Clean Code: A Handbook of Agile Software Craftsmanship. It’s written by Robert C. “Uncle Bob” Martin of Agile software (among other things) fame. Computer programming sometimes struggles to be taken seriously as a profession, but programmers like Martin are true professionals. They’re dedicated to improving their craft and sharing their knowledge with others.

The book is all about traditional PC programming, but I always wonder how these same concepts could apply to my other obsession, ladder logic. I’m the first to admit that you don’t write ladder logic the same way you write PC programs. Still, the concepts always stem from a desire for Readability.

Martin takes many hard-line opinions about programming, but I think he’d be the first to admit that his opinions are made to fit the tools of the time, and those same hard-and-fast rules are meant to be bent as technology marches on. For instance, while he admits that maintaining a change log at the top of every source file might have made sense “in the 60’s”, the rise of powerful source control systems makes this obsolete. The source control system will remember every change that was made, who made it, and when. Similarly, he advocates short functions and long descriptive names, and suggests frequently changing the names of things to fit, since modern development environments make it so easy to rename and refactor your code.

My favorite gem is when Martin boldly states that code comments, while sometimes necessary, are actually a failure to express ourselves adequately in code. Sometimes this is a lack of expressiveness in the language, but more often laziness (or pressure to cut corners) is the culprit.

What would ladder logic look like if it was “clean”? I’ve been visiting this question during the development of SoapBox Snap. For instance, I think manually managing memory, tags, or symbols is a relic of older under-powered PLC technology. When you drop a coil on the page in SoapBox Snap, you don’t have to define a tag. The coil is the signal. Not only is it easier to write, it prevents one of the most common cardinal sins of beginner ladder logic programming: using a bit address in two coil instructions.

Likewise, SoapBox Snap places few if any restrictions on what you can name your coils. You don’t have to call it MTR1_Start – just call it Motor 1: Start. Neither do you need to explicitly manage the scope of your signals. SoapBox Snap knows where they are. If you drop a contact on a page and reference a coil on the same page, it just shows the name of the coil, but if you reference a coil on another page, it shows that coil’s “full name”, including the folder and page names of your organization structure. Non-local signals are obvious at a glance, but you still don’t have to go through any extraneous mapping procedure to hook them up.

While we’re on the topic of mapping: if you’ve read my RSLogix 5000 Tutorial then you know I spend a lot of time talking about mapping your inputs and your outputs. This is because RSLogix 5000 I/O isn’t synchronous. I think it’s pointless to make the programmer worry about such details, so SoapBox Snap uses a synchronous I/O scan, just like the old days. It scans the inputs, it solves the logic, and then it scans the outputs. Your inputs won’t change in the middle of the logic scan. To me, fewer surprises is clean.
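The synchronous scan cycle can be sketched in a few lines of Python. This is a conceptual illustration only, not SoapBox Snap's actual implementation; the start/stop seal-in example and all the names are mine:

```python
# A minimal synchronous PLC scan: snapshot inputs, solve, then write outputs.
# Conceptual sketch only -- not SoapBox Snap's real code.

def scan_cycle(read_inputs, solve_logic, write_outputs):
    image = read_inputs()         # 1. snapshot every input at once
    results = solve_logic(image)  # 2. logic sees a frozen copy; no mid-scan changes
    write_outputs(results)        # 3. all outputs updated together

# Toy logic: a classic start/stop seal-in circuit for a motor.
state = {"motor": False}

def solve(inputs):
    run = (inputs["start"] or state["motor"]) and not inputs["stop"]
    return {"motor": run}

scan_cycle(lambda: {"start": True, "stop": False}, solve, state.update)
print(state["motor"])  # True: motor started
scan_cycle(lambda: {"start": False, "stop": False}, solve, state.update)
print(state["motor"])  # True: held in by the seal-in contact
scan_cycle(lambda: {"start": False, "stop": True}, solve, state.update)
print(state["motor"])  # False: stop always wins
```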

I’ve gone a long way to make sure there are fewer surprises for someone reading a ladder logic program in SoapBox Snap. In some ladder logic systems, the runtime only executes one logic file, and that logic file has to “call” the other files. If you wanted to write a readable program, you generally wanted all of your logic files to execute in the same order that they were listed in the program. Unfortunately on a platform like RSLogix 5000, the editor sorts them alphabetically, and to add insult to injury, it won’t let you start a routine name with a number, so you usually end up with routine names like A01_Main, A02_HMI, etc. If someone forgets to call a routine or changes the order that they execute in the main routine, unexpected problems can surface. SoapBox Snap doesn’t have a “jump to page” or “jump to routine” instruction. It executes all logic in the order it appears in your application and each routine is executed exactly once per scan. You can name the logic pages anything you want, including using spaces, and you can re-order them with a simple drag & drop.

Program organization plays a big role in readability, so SoapBox Snap lets you organize your logic pages into a hierarchy of folders, and it doesn’t limit the depth of this folder structure. Folders can contain folders, and so on. Folder naming is just as lenient: you can use spaces or special characters.

SoapBox Snap is really a place to try out some of these ideas, and it’s open source. I really hope some of these innovative features find their way into industrial automation platforms too. Just think how much faster you could find your way around a new program if you knew there were no duplicated coil addresses, that all the logic executes on every scan, and that it executes in the order shown in the tree on the left. The productivity improvements are tangible.

Off-the-Shelf or Custom Automation?

If you’re like me, you’re a fan of customizing:

…and certainly in the automation industry you see a lot of custom control solutions. In fact there’s always been this long-running debate over the value of custom solutions vs. the value of off-the-shelf “black box” products.

I’ve noticed this rule: the closer you get to the production line, the more custom things you’ll see. Just look at the two ends of the spectrum: production lines are almost always run by PLCs with custom logic written specifically for that one line, but the accounting system is almost always an off-the-shelf product.

There’s a good reason for this. Accounting methodologies are supposed to be standardized across all companies. Businesses don’t claim that their value proposition is their unique accounting system (unless you’re talking about Enron, I suppose). Automation, however, is frequently part of your business process, and business processes are the fundamental value proposition of a company. FedEx should definitely have a custom logistics system, Amazon needs to have custom order fulfillment, and Google actually manufactures its own servers. These systems are part of their core business strengths.

So when should a company be buying off-the-shelf automation solutions? I say it’s any time that “good enough” is all you need. You have to sit down and decide how you’re going to differentiate yourself from the competition in the mind of your customers, and then you have to focus as much energy as possible on achieving that differentiation. Everything else needs to be “good enough”. Everything else is a cost centre.

If you follow that through logically, it means you should also seek to “commoditize” everything in the “everything else” category. That bears repeating: if it’s not a core differentiator for your company, you will benefit if it becomes a commodity. That means if you have any intellectual property sitting there in a non-critical asset, you should look for ways to disseminate that to the greater community. This is particularly important if it helps the industry catch up to a leading competitor.

There are lots of market differentiators that can depend on your automation: price, distribution, and quality all come to mind. On the other hand there are other market differentiators that don’t really depend on your automation, like customer service or user-friendly product designs. Ask yourself what category your company fits in, and then you’ll know whether custom automation makes sense for you.

“Best Practices,” Indeed

I’ve just been reading Ken McLaughlin’s recent post Top Ten Signs an Integrator is the Real Deal #7: Best Practices and Standards and I have to say, my initial reaction is one of skepticism. I think Ken’s thinking is a little too narrow on this one. Let me explain…

This isn’t the first time I’ve considered the “problem of standards” on this blog. In an earlier post, Standards for the Sake of Standards, I explained how most corporate standards eventually end up being out-of-date and absurd, mostly because nobody making the standard ever thinks to write down why the standard exists, which would allow future policy-makers to understand the reasons and change the standard when it no longer applied. Instead, it becomes gospel.

However, that isn’t to say you could run a large organization without best practices and standards. That’s the point, isn’t it? In order to become large, you need built-in efficiency, and you buy that at the expense of innovation. Big companies don’t innovate (the one notable exception is Apple, and the rebuttal is always, “fine, so give one example other than Apple”). Almost all innovation happens in small companies, by a tightly knit group of superstars where the chains have been removed. Best Practices are, in fact, put in place to clamp down on innovation, because innovation is risky, and investors hate risk. It’s better to make lots of average product for average people than exceptional products for a few people (hence McDonald’s). Paul Graham, as usual, has something insightful to add to this:

Within large organizations, the phrase used to describe this approach is “industry best practice.” Its purpose is to shield the pointy-haired boss from responsibility: if he chooses something that is “industry best practice,” and the company loses, he can’t be blamed. He didn’t choose, the industry did.

I believe this term was originally used to describe accounting methods and so on. What it means, roughly, is don’t do anything weird. And in accounting that’s probably a good idea. The terms “cutting-edge” and “accounting” do not sound good together. But when you import this criterion into decisions about technology, you start to get the wrong answers.

The reason small companies are innovative is that innovative people can’t stand corporate environments. Imagine if you were an inspired chef… could you stand working at McDonald’s? Could McDonald’s even stand to employ you? You’d be too much trouble! You’d have to work in that nice one-off restaurant called “Maison d’here” where the manager puts up with your off-beat attitude because ultimately you make good food, and you keep their small but devoted clientèle coming back. But you can’t be franchised. The manager of the restaurant can’t scale you up without making what you do into a procedure.

So back to Ken’s topic… if you are choosing a systems integrator, you need to decide if you’re buying an accounting system (i.e. something that’s generic to all companies, and not a competitive advantage), or something that is a competitive advantage to you. When you’re automating your core business processes, you must build competitive advantage into it, and it must be innovative. If that’s the case, stay away from larger integrators with miles and miles of red tape and bureaucracy. Go for the “boutique” integrator (somewhere in the 7-to-25-person range, under $10 million per year in revenue) that can show you good references. You’re looking for a small group of passionate people. Buzzwords are a warning sign; small companies don’t have time for corporate-speak.

I’m not saying you should use the two guys in their garage. These guys are ok for your basic maintenance tasks, small changes, and local support, but you do want someone who has been around for a few years and has at least a couple of backup engineers they can pull in if there’s a problem. Make sure they have a server, with backups, and all that.

On the other hand, if what you’re automating is very large and very standard, that’s when you want to go with Ken’s approach. If you need to integrate a welding line, paint line, or whatever, there’s nothing new or innovative in that, so you want to lower the risk. You know all the big integration companies can do this, so go and get three bids, and choose the one that’s hungriest for the work. Make sure they have standards and best practices. The reduction in risk is worth it if you don’t need the innovative solution.

You can do a hybrid approach. Identify the parts of your process that could be key competitive advantages if you could find a better way to do it. This is where innovation pays off. Go out and consult with some boutique integrators ahead of time and get them working on those “point solutions”. Then go to the bigger companies to farm out the rest of your automation needs. How’s that for a “best practice”?