Naked Security

Programmer from hell plants logic bombs to guarantee future work

At some dark moment, have you ever wondered: what if the programmers are adding the bugs deliberately?

If you’ve spent any time working with computer programmers then you’ve probably been part of a project that, for one reason or another, just seems to have too many bugs. No matter what you do, you can’t make progress: there’s always more bugs, more rework and more bugs.

At some dark moment, as frustration at the lack of progress gnaws away at you, you may wonder: what if the programmers are adding the bugs deliberately?

If that’s occurred to you then you can bet that the programmers, who tend to be an intelligent bunch, have let their minds wander there too. Mine certainly has. Like me, they will have noticed that the incentives often stack up in favour of mischief: the work is often thankless, the code unsupervised and the money only good for the length of the project.

Thankfully, most of us are too morally upstanding to go there, but every barrel has its iffy apple.

In this story the barrel bears a Siemens logo and our apple is contractor David Tinley, who recently pleaded guilty to one count of intentional damage to a protected computer.

According to filings by the United States District Court for the Western District of Pennsylvania:

TINLEY, intentionally and without Siemens’ knowledge and authorization, inserted logic bombs into computer programs that he designed for Siemens. These logic bombs caused the programs to malfunction after the expiration of a certain date. As a result, Siemens was unaware of the cause of the malfunctions and required TINLEY to fix these malfunctions.

The logic bombs left by Tinley were bugs designed to cause problems in the future, rather than at the time he added them. He might have done this to avoid looking like the cause of the kind of grinding, bug-riddled non-progress I described at the beginning. Or perhaps he thought Siemens was less likely to give up on buggy code that’s been deployed than on code that’s still in development.
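To see why this kind of sabotage is so hard to spot, it helps to know how little code it takes. The court filings don’t include Tinley’s code, so here’s a minimal, entirely hypothetical sketch: the function name, date and values are invented, but the mechanism — a date check that quietly corrupts otherwise-correct output — is all a date-triggered logic bomb needs to be.

```python
from datetime import date

# Hypothetical trigger date. "Resetting" the bomb, as Tinley reportedly
# did, just means pushing this value further into the future.
TRIGGER_DATE = date(2016, 5, 1)

def calculate_order_total(quantities, unit_price):
    """Looks like an ordinary billing helper -- and it is, until the
    trigger date passes, after which it silently returns wrong totals."""
    total = sum(quantities) * unit_price
    if date.today() >= TRIGGER_DATE:
        # The "malfunction": quietly corrupt the result so the program
        # misbehaves and the author gets called back in to "fix" it.
        total *= 0.9
    return total
```

Before the trigger date this code passes every test you throw at it, which is exactly why time-delayed bugs evade release testing and only surface in production.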

Law360 reports that he would fix the bugs by resetting the date the logic bombs were due to go off, and that his attorney argued he did this to guard his proprietary code rather than to make money.

It goes on to describe how Tinley was exposed after being forced to give others access to his code while he was on vacation. Siemens, it says, had to fix the buggy system without him in order to put a time-sensitive order through it.

According to court filings, Tinley worked as a contractor for Siemens for fourteen years, between 2002 and 2016, and engaged in his unorthodox income protection scheme for the last two.

He faces sentencing in November.

What to do?

I suggest that if a contractor is refusing to let you see their code, or doesn’t trust you enough to give you access, that should raise a red flag for one of you. And if somebody is making themselves a single point of failure, you have a problem, even if they aren’t doing anything malicious.

In my experience, though, programmers are vastly more interested in fixing things than breaking them, and most projects have a plentiful enough supply of accidentally introduced faults.

That said, programmers and their code both get better with peer review, and modern development practices like continuous test and build cycles are designed to surface bad code as quickly as possible.

So, while I don’t think you should do either of those things to root out bad apples, there are good reasons to do them anyway, and if you do you’ll stand more chance of catching saboteurs.


“his attorney argued he did this to guard his proprietary code rather than to make money.”
Where I come from, if you are working for a company (even as a contractor) ***the company*** owns all the rights to code or software (or invention) developed on their time, or property.
Is this not the case???
In my opinion this is akin to holding the company at ransom to hire him back to fix his code.


Pssst, Siemens, I have two words for you: “code review”. Especially useful for mission critical applications.


This is why you should always have a code review/approval process, with dual control required for any code being checked into a repository before it is built and deployed to production.


Been there done that. Got held hostage by programmer who was also a thief. Lost a ton of money.


I’d prefer to help you recover your money instead of offering you a mere thumbs-up, but

I’m sorry Dave; I’m afraid I can’t do that.


I am surprised to hear a company like Siemens has not put a secure SDLC in place. I hope they have done it by now.


I’ve recently learned about a thing called code escrow. Basically it is a trusted third party which holds the code for the developed system safely. Things stay as they are if all is OK, but if the vendor cannot maintain the software, goes out of business or dies, the buyer can get hold of the code. Just another strategy to make sure you are not as severely at risk with contractors.


One vital thing: having ‘the source code’ alone is rarely enough to get you out of trouble if the vendor that originally wrote it implodes. Make sure that what goes into escrow includes everything you need to build the source code, including repeatable build instructions and the required build tools plus other libraries and dependencies. (There may be licensing complications in making sure you have ongoing access to tools such as compilers, linkers and firmware loaders for specialised hardware such as industrial control boards or esoteric chipsets. Your third-party coders may have used development tools bought from a fourth party, and either or both of them could go bust.)


Code escrow: I helped negotiate a deal in which code escrow was discussed. The buyer (a Fortune 25 company) refused code escrow based on prior experiences in which the transfer had been held up for months for court hearings and such, leaving customers in the lurch. Instead, the buyer insisted on an internal repository staffed by vetted employees which would have no contact with development employees.


Sounds a bit of a mixture of planned obsolescence and a healthcare Angel of Death. Planned obsolescence, because someone on his team knew about this, or they wouldn’t have known to use his secret key while he vacationed. Angel of Death, because he may have taken satisfaction in bringing his patient to the brink of malfunction and then rescuing. I suppose if he pled guilty, he wasn’t smart enough to keep copies of memos proving his team knew about the logic bombs. Either that, or he’s like a deadly nut.


> someone on his team knew about this, or they wouldn’t have known to use his secret key while he vacationed

I understood it as, he was caught having painted himself into a corner.
Abroad when the code failed, unable to fix it remotely, he was forced to give someone a password to a code repository or something (maybe send them previously undisclosed source?).
ASSUMING along those lines: He either
1) displayed amateur planning in double-booking vacation with sabotage (as xyonofcalhoun stated), or
2) got complacent, assuming that he could let them squirm for a bit while he leisurely returned to “rescue” them, once again becoming the (paid) hero.


That’s how I read it – he’d cooked his goose one way or another… couldn’t reasonably refuse to hand over the password without giving the game away, so decided to roll the dice and hope he could get away with the dodgy code.

Maybe someone would work around the bug some other way without modifying the “bomb” bit, or see the “bomb” code and just try deleting it in a big lump and being satisfied with the “fix” without realising what had just been fixed? Maybe he just gambled definitely getting into trouble against maybe getting away with it. If so, he crapped out.


So the real issue here was clearly the programmer double booking himself. Why schedule a holiday and the bug to trigger at the same time?!


The company can always copy his coding as he does it [keylogger]. My concern is companies such as Microsoft, which seem to design in buggy programs.


Also, keep all source code under version control. It should be trivial to pull the “fixed” and “unfixed” versions of the source code, and compare them, to see exactly what the programmer changed to fix the problem. If he “fix[ed] the bugs by resetting the date the logic bombs were due to go off,” that will be obvious.

Of course, that’s an extreme case. But even if you’re sure that the programmer is trustworthy, you still should inspect his fixes.

When you inspect a programmer’s fixes, you SHOULD find detailed, understandable comments describing the fix: what went wrong, why it went wrong, and how the fix fixes it. If you don’t, if all you find is an uncommented change of “x” to “x-1”, then you’re dealing with a programmer who does not do excellent work. You should make sure that he either shapes up, or ships out.
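The diff-the-two-versions idea above can be sketched with git. Everything here is invented for illustration — the repo, file name, tag names and dates have nothing to do with the actual Siemens case — but it shows how version control makes a “fix” that merely moves a trigger date stand out:

```shell
# Hypothetical demo: build a tiny repo with two vendor "fixes" and diff them.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "reviewer@example.com"
git config user.name "Reviewer"

# First vendor "fix": the program works again... until December.
echo 'TRIGGER_DATE = 2015-12-01' > order_report.src
git add order_report.src
git commit -qm 'vendor fix, June 2015'
git tag fix-june

# Second vendor "fix", six months later.
echo 'TRIGGER_DATE = 2016-06-01' > order_report.src
git commit -qam 'vendor fix, December 2015'
git tag fix-december

# Comparing the two "fixes" shows nothing but a moved date -- a red flag:
git diff fix-june fix-december -- order_report.src
```

If every “fix” in the history is a one-line change to the same date constant, no detailed comment in the world will make it look legitimate.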


I guess one confounding factor here is that this wasn’t really software development as you and I know it – we’re talking about a much less formal environment that was more like “IT keeping Excel working reliably and sales workflow moving along smoothly”. Sure, you can and should use a version control and change tracking system even if you’re working alone on something small and self-contained like the ill-fated Excel system here…

…but it’s easy to see how this slipped through the official “development lifecycle” management cracks.

(Side note: if you need a source code system for home use or for your own small projects, and you don’t want to hassle with cloud servers or git commands – check out Fossil, by the same guy who produces SQLite. In fact, SQLite is developed using Fossil, and so is Fossil, natch. Try it, and then wonder, “How did I not know about this before?” FWIW Fossil is free. I get nothing for mentioning it. I’m just a happy user myself.)


these programming bros not understanding how the real world works…

not everyone has a dev team, not everyone has the controls in place… but they do have a guy that took a C++ class 14 years ago in college so they ask them to write X code for X project and it ends up living for the next 26 years in your production plant.

Not what happened here but the hard truth is there is A LOT of code out there not written by true devs… not being reviewed. No rev tracking… just “make it work” and then the poor IT Admin with 47 hats moves onto the next ticket. That is how the real world works.



“If we knew how to read your code we would have written it in the first place. You don’t hire a dog and bark yourself.”


Only 47 hats!?

Why, back in my day we’d have killed for only 47 hats.
And we were grateful to have those 71 hats!


Coming from a programmer’s background, it saddens me to have to accept the fact that even in that field there are some who lack integrity and a work ethic. Corruption exists everywhere. Minds used to destroy for the greed of money. What a waste.

