
Questions to Ask your Employer When Applying for an Automation Job

If you’re going to interview for a control systems job in a plant, they’ll ask you a lot of questions, but you should also have some questions for them. To me, these are the minimum questions you need to ask to determine if a future employer is worth pursuing:

  1. Do you have up-to-date electrical drawings in every electrical panel? – When the line is down, you don’t have time to go digging.
  2. Do you have a wireless network throughout the plant? – It should go without saying that having good, reliable wireless connectivity all over your facility really helps when you’re troubleshooting issues. Got a problem with a sensor? Just set up your laptop next to the sensor, go online, look at the logic, and flag the sensor. You don’t have time to walk all over the plant.
  3. Does every PC (including on-machine PCs) have virus protection that updates automatically? – We’re living in a post-Stuxnet world. Enough said.
  4. Have you separated the office network from the industrial network? – Protection and security are applied in layers. There’s no need for Jack and Jill in accounting to be pinging your PLCs.
  5. What is your backup (and restore) policy? – Any production-critical machine must always have up-to-date code stored in a known location (on a server or in a source control system), that location must be backed up regularly, and the backups must be tested by doing regular restores.
  6. Are employees compensated for working extra hours? – Nothing raises a red flag about a company’s competency more than expecting 60+ hour weeks but not paying you overtime. It means they’re reactive, not proactive. It means they don’t value experience (experienced employees have families and can’t spend as much time at the office). It probably means they scored poorly in the previous questions.

You don’t have to find a company that gets a perfect score on this test, but if they miss more than one or two, that’s a warning sign. If they do well, they’re a proactive company, and proactive companies are sane places to work.

Good luck!

Upgrading a Legacy VB6 Program to .NET

There is a lot of code out there written in VB6, running just fine. If you’re someone who has to maintain it, then at some point you’ll ask yourself, “should we just bite the bullet and upgrade this to .NET?”

There is, so far, no end-of-life issue on the horizon. VB6 applications will run on Windows 7, and Microsoft has vowed to support the VB6 runtime through the life of Windows 7. That will be a while, so there’s no hurry.

First, you need to do a cost-benefit analysis to determine if it’s worth upgrading. That’s a pretty big task right there. What do you gain by moving to .NET? Certainly you gain a much richer ecosystem of utilities, libraries, persistence layers, test frameworks, etc. You’ll also find it easier to hire developers who have .NET on their resume. It’s pretty hard to find a copy of Visual Studio 6 these days unless you have an MSDN subscription. .NET features like lambda expressions, LINQ, and reflection are also big productivity boosters if you spend the time to become proficient with them. These are all valid points, but they’re hard to measure.

You’re going to need to do some ballpark estimates. I’ve actually been doing some conversions lately, so I have some real experience to throw at it. Take any VB6 application, and it’ll take you 1/3 to 1/2 of the original development time to rewrite it in .NET with the same feature set (using test-driven development). That’s my estimate… do what you will with it. So, how much maintenance work are you doing, and how much more efficient would you be after the conversion?

So let’s take an application that took one programmer 6 months to write, and that you’ve then spent 50% of your time maintaining for the last year. That means there are 12 months of development in the existing application. By my estimate, you’ll need to spend 4 to 6 months rewriting it. Let’s say you’re twice as fast after the conversion (if you didn’t have unit tests before and you use test-driven development during the conversion, the unit tests alone should make you this much more productive, not to mention the improvements in the IDE and the full object-oriented support). In that case, the payback period is 8 to 12 months of actual planned development. If you have that much work ahead of you, and you can afford to put off working on new features entirely for half that time, you’ll break even.
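For what it’s worth, here’s that back-of-the-envelope arithmetic spelled out as a throwaway snippet (the numbers are just the ones from the example above, not a general rule):

    Module PaybackEstimate
        Sub Main()
            Dim existingMonths As Double = 12                ' 6 months initial development + ~6 months of maintenance
            Dim rewriteLow As Double = existingMonths / 3    ' = 4 months
            Dim rewriteHigh As Double = existingMonths / 2   ' = 6 months
            Dim savedPerPlannedMonth As Double = 0.5         ' twice as fast = half a month saved per planned month
            Console.WriteLine("Payback: {0} to {1} months of planned development", _
                              rewriteLow / savedPerPlannedMonth, rewriteHigh / savedPerPlannedMonth)
            ' Prints: Payback: 8 to 12 months of planned development
        End Sub
    End Module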

That’s still a really big investment. The problem is that you won’t have anything to show for it for half that time. It’s surprising how quickly management could lose faith in your endeavor if they a) don’t really understand what you’re doing and b) don’t see any tangible results for months.

There are alternatives to the all-or-nothing rewrite. First, you can use a conversion tool to convert the VB6 to VB.NET. The one that comes with Visual Studio 2005 is notoriously bad, but some of the commercially developed ones are apparently much better. Still, given VB6’s laughably bad support for the object-oriented programming paradigm, the code you get out of the conversion is going to smell more like VB6 than .NET. It will get you done faster, probably more than twice as fast, so it’s still an option. However, you won’t get a chance to re-architect the software, normalize the database, etc., in the process.

The other alternative to the “big rewrite” is to do the upgrade in an “agile” manner. Take some time to break the software into smaller modules, each of which can be upgraded in about one month or less. This will significantly lengthen the total time it takes to finish the project, but you’ll have something tangible to show after each month. Most managers can wait that long. This approach has its problems too: you need to write a lot of interop code to bridge the VB6 and .NET sides, and it can be tricky.
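To make the interop part a bit more concrete, here’s a minimal sketch of one common approach: exposing a rewritten .NET module back to the remaining VB6 code through COM interop. The class, member, and assembly names here are made up for illustration; the real shape depends on which module you carve out first.

    Imports System.Runtime.InteropServices

    ' Hypothetical example: a module rewritten in .NET, exposed back to VB6 via COM.
    <ComVisible(True)> _
    <ClassInterface(ClassInterfaceType.AutoDual)> _
    Public Class OrderCalculator
        ' Once the assembly is registered (e.g. "regasm /codebase MyNewCode.dll"),
        ' VB6 can call this with CreateObject("MyNewCode.OrderCalculator").
        Public Function ExtendedPrice(ByVal quantity As Integer, ByVal unitPrice As Double) As Double
            Return quantity * unitPrice
        End Function
    End Class

Going the other direction (calling surviving VB6 COM components from .NET) is usually just a matter of adding a COM reference and letting Visual Studio generate the interop assembly for you.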

Normalizing a Database

If you’re in a position where you have a database as a backing store, and you need to make major database structure changes, this must affect your decision. The “big rewrite” is the most friendly to database changes: you just write a single conversion script that upgrades the existing database in-place, and you write your new version against the new schema. You have a clean slate, so you can clean up all the crufty problems in the old schema.

On the other hand, if you’re just using a conversion tool to automatically convert from VB6 to .NET, you can’t change the schema.

If you take the middle road (“agile”), you can change the database structure at the same time, but it’s much more difficult than in the “big rewrite”. As you upgrade each module, it makes sense to modify the database structure underlying that module, but unless you’re really lucky, you’ll have parts of other modules left in VB6-land that are dependent upon database tables that are changing. That means you’ll have the same problem anyone without a really good data access layer (or object-relational persistence layer) has when they go to change the database schema:

You have a whole bunch of code that looks like this: sql = "SELECT MY_COL1, MY_COL2 FROM MY_TABLE JOIN..."

Assuming you don’t have unit test coverage, how do you find all the places in your code that need to be changed when you normalize MY_COL2 out of one table into another? Of course you can start with a search and replace, but if you really have a database normalization problem, then you probably have duplicate column names all over the place. How many tables have a column called CODE or STATUS? There are many pathological cases where a simple text search is going to find too many matches and you’ll spend hours tracking down all the places where the code might change just because of one column being moved or renamed.

The most pathological case is where you have, for instance, two columns like CONTACT1 and CONTACT2 in the same table, and somewhere in the code it says sql = "UPDATE MY_TABLE SET CONTACT" & ContactNumber & " = '" & SomeValue & "'". You’re going to have a hard time finding that column name, no matter what you do.

You need to develop a smarter system. I’ve tried a couple of different approaches. I tried one system where I auto-generated unique constants for all of my table and column names in my database, and then I wrote a script that went through my source code and literally replaced all of the instances of table or column names inside of strings with the constants. When I changed the database, I regenerated the list of constants, and the compiler was able to catch all the dependencies. Unfortunately, this method has some deficiencies: the resulting SQL statements are more difficult to read, and when you go and make changes to these statements you have to be disciplined enough to use the generated constants for the table and column names, or you break the system. Overall, it saves a lot of time if you have a lot of database changes to make, but costs extra time if you have to write new code.
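As a rough sketch of that approach (the table, column, and module names here are made up), the generated constants and a hand-written query built from them might look something like this:

    ' Hypothetical auto-generated module: one constant per table and column name,
    ' regenerated from the schema whenever the database changes.
    Public Module DbNames
        Public Const MY_TABLE As String = "MY_TABLE"
        Public Const MY_TABLE_MY_COL1 As String = "MY_COL1"
        Public Const MY_TABLE_MY_COL2 As String = "MY_COL2"
    End Module

    Public Module Queries
        ' Hand-written SQL built entirely from the constants. If MY_COL2 is renamed
        ' or normalized into another table, the regenerated DbNames module no longer
        ' defines MY_TABLE_MY_COL2 and this function fails to compile, which is
        ' exactly the dependency check we're after.
        Public Function BuildSelect() As String
            Return "SELECT " & DbNames.MY_TABLE_MY_COL1 & ", " & DbNames.MY_TABLE_MY_COL2 & _
                   " FROM " & DbNames.MY_TABLE
        End Function
    End Module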

I tried a different variation of the system where instead of replacing the table and column names in the string directly, I added auxiliary statements nearby that used the constants for the table and column names, and these would generate compile errors if a dependency changed. This made the code easier to read, but had problems of its own.
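Here’s a sketch of that second variation (again, all names are made up): the SQL string is left alone and readable, and a do-nothing call next to it carries the compile-time dependency on the generated constants.

    Public Module SchemaMarkers
        ' No-op marker; its only purpose is to reference the generated constants so
        ' the compiler flags this spot when the schema (and DbNames) changes.
        Public Sub DependsOn(ByVal ParamArray names() As String)
            ' Intentionally empty.
        End Sub

        Public Function SelectExample() As String
            ' The SQL stays readable; the marker below breaks the build if either
            ' column is renamed or moved out of MY_TABLE.
            DependsOn(DbNames.MY_TABLE, DbNames.MY_TABLE_MY_COL1, DbNames.MY_TABLE_MY_COL2)
            Return "SELECT MY_COL1, MY_COL2 FROM MY_TABLE"
        End Function
    End Module

One obvious risk is that the marker and the string can silently drift out of sync if you’re not disciplined about updating both.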

I don’t have a perfect answer for this problem, but if you have any SQL strings embedded in your legacy VB6 application, and you want to do big changes to your database, I can tell you that you must build a tool for yourself.

Summary

If you really must convert your application from VB6 to .NET then make sure you go into it with your eyes wide open. Engage management in a frank discussion. Make sure you get a strong commitment. If they waffle at all, walk away. The last thing anyone wants is a half-converted piece of software.

Still, I’m here to tell you that it is possible, and if you do your homework, there can be a real payback. Good luck!

Hacking the Free Market

Like any system, the free market works very well within certain bounds, but it breaks down when you try to use it outside of those constraints.

The free market works when all of the parties in the system are “intelligent agents”. For the purposes of definition, we’ll call them “adult humans”. An adult human is free to enter into transactions within the system with other adult humans. The system works because the only transactions that are permitted are ones in which both parties to the transaction benefit. So, a plumber can fix an electrician’s drain, and an electrician can fix the plumber’s wiring, and they both benefit from the transaction. The introduction of currency makes this work even better.

In fact, if one party in a transaction ends up with less, we usually make that transaction illegal. It usually falls into the category of coercion, extortion, or theft. Nobody can put a gun to another person’s head and demand money “in exchange for their life” because that person had their life before the transaction started.

Still, we find ways to hack the system. Debt is one obvious circumvention. Normally debt is a transaction involving you, someone else, and your future selves. If you take out a student loan, go to school, pay back the student loan, and get a better job, then everyone benefits, and it’s an example of “good debt”. Likewise, if you need a car loan to get a car to get a better job, it’s “good debt” (depending on how much you splurged on the car). Mortgages are similarly structured. Consumer debt (aka “bad debt”), on the other hand, is typically a circumvention of the free market. However, the person who gets screwed is your future self, so it’s morally ambiguous at worst.

The free market can also be hacked through the exploitation of common resources. For instance, if I started a business that sucked all the oxygen from the air, liquefied it, and then I sold it back to the general public for their personal use, I doubt I’d be in business very long. I might as well be building a giant laser on the moon. Similarly, the last few decades were filled with stories of large companies being sued in class action lawsuits for dumping toxic chemicals into streams, rivers or lakes and poisoning the local water supply. Using up a common resource for your own benefit is a kind of “free market hack”. If a third party can prove they were the targets of a “free market hack”, the courts have ruled that they are entitled to compensation.

Still, we hack the free market all the time. The latest credit scandal is just one example. The general public was out of pocket because of a transaction many of them weren’t a party to. It really is criminal.

A larger concern is the management of natural resources. This includes everything from fossil fuels to timber, fresh water, fish stocks, and, most recently, the atmosphere. The last of these opens a new set of problems. All of the other resources are (or can be) nationally managed. Canada, for instance, while it has allowed over-fishing on the Grand Banks, has put systems in place to reduce quotas in an attempt to manage the dwindling resource. This makes those resources more expensive, so the free market can adjust to their real cost. The atmosphere, on the other hand, is a globally shared resource with no global body capable of regulating it.

I don’t want to get into some climate change debate here, so let’s look at it from a higher level. Whenever we have a common resource we always over-exploit it until, as a society, we put regulations in place for managing it. In cases where we didn’t (Easter Island comes to mind), we use up the resource completely with catastrophic results.

I realize the atmosphere is vast, but it’s not limitless. While everyone’s very concerned with fossil fuel use, if it were only about dwindling reserves of fossil fuels, it wouldn’t be a problem. The free market would take care of making fossil fuels more expensive as they started to run out, and other energy sources would take their place. However, when we burn fossil fuels, the energy we get comes from a reaction between the fuel and oxygen in the atmosphere. The simple model is that oxygen is converted into carbon dioxide. Some of the potential energy of the reaction comes from the atmosphere, not just the fuel. We need to run that carbon dioxide through plants again to get the energy (and oxygen) back. Of course, doing that takes much, much longer, and consumes far more energy, than we ever got out of the reaction in the first place.

If climate scientists are right, then we’re also causing a serious amount of harm to the climate at the same time. This is a debt that will continue accruing interest long after we’re dead. I recognize the uncertainty of the future, but as any good risk manager knows, you shouldn’t gamble more than you can afford to lose.

This is a free market hack because we treat it like a perpetual motion machine, but we’re really just sapping energy from a big flywheel. Like debt, whether this is “good” or “bad” depends on whether we can turn that consumption into something even more valuable for the future. In general (though not in every case), every generation before us left a better world than the one it inhabited. Most of the value we have now is knowledge passed from generation to generation. Passing information forward costs comparatively little next to the value we gain from having access to it. Therefore, even if our ancestors used up resources, they couldn’t do it on a scale big enough to offset the value of the knowledge they were passing forward. It’s ironic if that knowledge let us build a big enough lever to tip the scales.

It seems pretty certain that if we fail to leave future generations more value than we’re taking from them, they’ll make us pay. They’ll turn on the companies and families that profited at their expense, and they’ll either sue for damages or drag the bodies of the CEOs through the streets behind camels, depending on which area of the world they live in.

Personally I’d prefer prevention over retribution. The problem is that if there really is a future cost to our actions, the market isn’t factoring it into the price. Even though companies are accruing the risk of a large future liability, they don’t have to account for this risk on their balance sheets. That’s because while future generations have a stake in this situation, they don’t have a voice now. Nobody’s appointed to represent their interests. That’s why the free market continues to be hacked, at their expense.

How could you structure such a system? Should companies be forced to account for estimated future liabilities, so it shows up on their P&L statements? Do we allow them to account for true value that they’re passing forward, like new technologies they’ve developed, which can help to offset the liabilities they’re incurring? Obviously that’s impractical. Not only does it become a bureaucratic nightmare, but there’s still no international body to oversee the principles of the accounting system.

Could an appointed legal representative of future generations sue us for the present value of future damages we’re risking? Can they spend the proceeds of the lawsuits on restorative projects? (Investments with a big dividend but which don’t pay back for 100 years, like reforesting the Sahara.) I doubt a non-existent group of people can sue anybody, so I doubt that’s a workable solution either.

I’m afraid the solution lies in the hands of politicians, and that makes me sad. We need a global body that can manage natural resources, including the atmosphere. At this point, a political solution seems just as impossible.

I’m still looking for ideas and solutions. I want to contribute to a workable solution, but my compass isn’t working. If you know the way, I’m listening.

(Hint: the solution isn’t energy efficiency.)

To be honest, we could still be producing knowledge at such a huge rate that we’re still passing forward more value than we’re taking from future generations. But I don’t think anyone’s keeping track, and that’s pretty scary.

Anyway, Earth Hour’s about to start. While it’s a really silly gesture to turn your lights out for an hour in “support” of a planet that we continue to rape and pillage the other 8759 hours of the year, I think I’ll give this stuff another hour of thought. It’s literally the least I can do.

Edit: I posted a follow-up to this article called What can I do about our global resource problems?

The “Almost There” Paradox

We’re all probably familiar with the idea that it takes half the time to get to 90% done and the other half to finish the last 10%. This is a staple of project management.

I think there’s actually a narrower class of really dangerous solutions that you only become familiar with after you’ve experienced one. There’s a whole set of problems where the obvious solution gets you 95 to 98% of the way to your performance spec really quickly, but it’s almost impossible to reach 100% through incremental improvements. The reason I say they’re dangerous is that the feeling of being “almost there” prevents you from going back to the drawing board and coming up with a completely different solution.

I can remember a machine vision job from years ago where the spec was “100% read rate”. I only got it to about 94%, and someone else gave it a try. He got it up over 96%, but 100% was out of reach given the technology we had.

Experiences like that make you conservative. Now I unconsciously filter possible solutions by their apparent “flakiness”. I’m much more likely to add an extra prox to a solution to verify a position than to rely on timers or other kinds of internal state, because the latter are more prone to failure during system starts and stops. I press for mechanical changes when I used to bend under the pressure to “fix it in software”.

Still, you have to be careful. It’s easy to discount alternatives just because they bear some passing resemblance to a bad experience you had before. You have to keep re-checking your assumptions. Unfortunately, rapid prototyping usually fails to uncover the “almost there” situation I’m talking about. If you prototype something up fast, and it works in 97% of your lab tests, you’ll probably think you have a “proof of concept” and go forward with it.

The best way to test new solutions is to put them into production on a low-risk system. If you’re an integrator, this means having a really good relationship with your customer (chances are you need their equipment to run your tests). If you work for a manufacturer, you can usually find some out-of-the-way machine to test on before you go all-in.