The best way to defend against software flaws is to find them before the attackers do.
This is the unshakeable security orthodoxy challenged by a radical new study from researchers at New York University. The study argues that a better approach might be to fill software with so many false flaws that black hats get bogged down working out which ones are real and which aren’t.
Granted, it’s an idea likely to get you a few incredulous stares if suggested across the water cooler, but let’s do it the justice of trying to explain the concept.
The authors’ summary is disarmingly simple:
Rather than eliminating bugs, we instead add large numbers of bugs that are provably (but not obviously) non-exploitable.
By carefully constraining the conditions under which these bugs manifest and the effects they have on the program, we can ensure that chaff bugs are non-exploitable and will only, at worst, crash the program.
Each of these bugs is called a ‘chaff’, presumably in honour of the British WW2 tactic of confusing German aircraft radar by filling the sky with clouds of aluminium strips, which also used this name.
Arguably, it’s a distant version of the security by obscurity principle which holds that something can be made more secure by embedding a secret design element that only the defenders know about.
In the case of software flaws and aluminium chaff clouds, the defenders know where and what they are but the attackers don’t. As long as that holds true, the theory goes, the enemy is at a disadvantage.
The concept has its origins in something called LAVA, co-developed by one of the study’s authors to inject flaws into C/C++ software to test the effectiveness of the automated flaw-finding tools widely used by developers.
Of course, attackers also hunt for flaws, which is why the idea of deliberately putting flaws into software to consume their resources must have seemed like a logical jump.
To date, the researchers have managed to inject thousands of non-exploitable flaws into real software using a prototype setup, which shows that the tricky engineering of adding flaws that don’t muck up programs is at least possible.
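To make the idea more concrete, here’s a minimal, purely hypothetical sketch in C – not code from the paper – of the kind of thing being described: something an automated bug-finder would flag as an off-by-one buffer overflow, but whose out-of-bounds write is constrained to a harmless range of values and a field the program never reads, so the worst it can do is crash.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical illustration only -- not taken from the NYU study.
 * The "bug": parse_record() writes one byte past the end of buf.
 * The constraints: the stray byte can only hold values 0x00-0x0f and
 * it lands in a field the program never reads, so at worst it crashes. */
struct record {
    char buf[8];
    uint8_t pad;      /* dead field: hit by the overflow, never read */
    uint32_t length;  /* real field: untouched by the overflow       */
};

static void parse_record(struct record *r, const uint8_t *input, size_t n)
{
    /* Looks like a classic off-by-one: the loop allows i == sizeof(buf). */
    for (size_t i = 0; i < n && i <= sizeof(r->buf); i++) {
        /* The stored value is masked, so even the out-of-bounds write
         * can't carry a value an attacker gets to choose freely. */
        ((uint8_t *)r->buf)[i] = input[i] & 0x0f;
    }
    r->length = (uint32_t)n;
}

int main(void)
{
    struct record r = {0};
    uint8_t oversized[9];
    memset(oversized, 'A', sizeof(oversized));   /* 9 bytes: triggers the bug */
    parse_record(&r, oversized, sizeof(oversized));
    printf("stray byte after overflow: 0x%02x\n", r.pad);
    return 0;
}

A bug-finding tool sees the overflow; an attacker who digs into it eventually discovers it leads nowhere – which is exactly the time-wasting the technique is counting on.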
Good idea, mad idea?
Now to the matter of whether this idea would work in what humans loosely refer to as the real world.
The standout objection is that the concept is a non-starter for the growing volume of the world’s software that is open source (secret code and open source are incompatible ideas).
The next biggie is that even applied to proprietary software, adding bogus flaws would tie down legitimate researchers who take the time to find and report serious security vulnerabilities.
While it’s true that attackers would also be bogged down, adding the same layer of inconvenience to the job of the good guys might negate this benefit.
The worst-case scenario is that attackers eventually fine-tune their flaw-hunting rigs to spot the bogus code and you end up back at square one. In that world, injecting new chaff to defeat them would become a full-time job.
It’s not as if the fact that chaff had been added would be hard for anyone to discover – all anyone would have to do is compare the size of a new version with an old one and make an educated guess about how much of the difference is new features and how much is chaff.
More likely, developers would run a mile for fear that the process of injecting chaff would itself risk creating new and possibly real flaws, even if those were simply denial-of-service conditions caused by a program crashing.
In the end, intriguing though the chaff concept is, the best way to cope with security flaws remains the proven method – find them, and efficiently mitigate or patch them, before the attackers do.
Anonymous
I think developers are busy enough as it is without having to invent pretend-exploitable trap code.
Jeff
Agreed. And I also agree with the article’s note about concerns of introducing an actual flaw into the program, either one that could crash or otherwise affect the proper functioning of the program, or actually be exploitable. The bad guys, unfortunately, aren’t dumb!
Bryan
developers are busy enough as it is without having to invent pretend-exploitable trap code
Agreed, was thinking the same.
…though the part of me who thoroughly enjoyed the saga of “Albert Fred(the)” finds this concept potentially amusing.
I doubt the benefits will ever surpass overhead (before even considering the wasted white-hat resources).
Epic_Null
Ah, the humble honeypot. Too many, though, and you can wind up with a sticky mess…
Jim Gersetich
This could partly work. Unfortunately, it would only work for a short time. As soon as the black hatters find one, that one’s usefulness disappears.
However, a hybrid of this might prove really interesting. If the “flaw” allowed access to a REAL database, but that database contained nothing but false information, it could help in the long run.
Specifically, this idea applies to databases of credit card numbers and people’s names. The database becomes two databases: one with the real stuff (that’s fully protected), and one behind the exploitable flaws. The data the flaw would allow access to would be deliberately contrived data that helps law enforcement. Give out credit card numbers that pass the Luhn-type checksum algorithms but are false (a quick sketch of such a check appears after this comment). Their use indicates stolen information.
I used the heart of this idea a decade or two ago. It was back when malware first figured out how to read your contact list and send infected emails to your contacts. We created a bogus email address that was added to everybody’s contact list. (The first one was aaaaa AT example DOT com). We also created a real mailbox behind that address.
Whenever that email account received an email, we knew it was likely from malware, and we sent our anti-malware forces to the person who sent it.
Well, most of the time. Later we had to change the name to something like “ThisEmailAddressShouldNotBeUsed” instead of “aaaaa”, because … people were curious.
The same trick could be used with credit card numbers. The number is valid, but its use trips a giant red flag to contact law enforcement.
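For anyone unfamiliar with the check mentioned above, here’s a small illustrative sketch in C – nothing from the article or the study – of the Luhn checksum that such decoy card numbers would need to pass; the sample number is a widely published test value, not a real card.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Illustrative only: the standard Luhn checksum that payment card
 * numbers must satisfy. A decoy number would be generated to pass
 * this check while mapping to no real account. */
static bool luhn_valid(const char *digits)
{
    int sum = 0;
    size_t len = strlen(digits);
    for (size_t i = 0; i < len; i++) {
        int d = digits[len - 1 - i] - '0';
        if (d < 0 || d > 9)
            return false;        /* reject non-digit input */
        if (i % 2 == 1) {        /* double every second digit from the right */
            d *= 2;
            if (d > 9)
                d -= 9;
        }
        sum += d;
    }
    return len > 0 && sum % 10 == 0;
}

int main(void)
{
    const char *decoy = "4111111111111111";   /* classic test number, not a real card */
    printf("%s -> %s\n", decoy, luhn_valid(decoy) ? "passes Luhn" : "fails Luhn");
    return 0;
}

Passing the checksum is the easy part; the real work in this scheme is making sure any attempt to use one of those numbers trips the alarm.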
Anonymous
Hahahahahahahahahahahahaha. Hahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahahaha.
That’s like plastering your car with bucketloads of cruddy rust-coloured paint in the hope you’ll sneak the real rust holes through the annual safety check.
Laughter Is The Best Medicine
Yeah, and all that fake rust paint will end up increasing the weight, plugging up drainage holes, getting into places it shouldn’t (like the suspension and brakes), gumming up the lock on the trunk, getting smeared on the windows, obscuring the lights and getting you pulled over by the cops all the time.
Horatiu Petrescu
Maybe this worked in WWII, but existing code in an application is not as clear as a blue sky. Code is complex, and if you add more complexity you are defeating the KISS rule. Another problem is that every software developer working on that application code would need to know how to efficiently distinguish between chaff and useful code when they’re fixing bugs, debugging or doing other code-related work. That’s creating more confusion than is necessary, for the sake of an allegedly more secure method of protecting your code. You just need one confused developer making a mistake and you end up with more work than you need or, even worse, a serious security risk in your code. As Dr Eric Cole from SANS always says, if security is impacting business, security is done wrong.
John
I can’t help feeling we are missing an obvious approach to dealing with bugs and security flaws.
Use a technique of certification that is already in place for so many products regarding health, safety and quality. The security researchers are already in place; all that is needed is to create the standards body, which could easily be an extension of an existing one (the IEEE, maybe). Then any software application is submitted to the researchers for analysis and, if it passes, gets the seal of approval of the standard. This could help create a revenue stream for the researchers, allowing them to grow and improve.
We could even have a couple of levels, from trivial apps – games, utilities etc. – up to serious-risk apps like banking.
Whilst no standard is infallible it would at least give consumers some assurance that a given app has passed some level of scrutiny for flaws.
A similar approach could be applied to websites, and even organisations, having a certification for passing some level of penetration testing and malware scanning.
I for one would certainly be much more comfortable knowing that my apps, the websites I use regularly, and companies I deal with have at least passed some level of inspection of their security resilience.
Bentham
Each of these bugs is called a ‘chaff’, presumably in honour of the British WW2 tactic of confusing German aircraft radar by filling the sky with clouds of aluminium strips
Better known as Window – but perhaps that is too provocative
njorl
Don’t mention the War!
It’s simply the old winnowing metaphor. See, for example, Matthew 3:12:
he will … gather his wheat into the garner; but he will burn up the chaff with unquenchable fire.