Is “fuzzing” software to find security vulnerabilities using huge robot clusters an idea whose time has come?
The latest numbers to emerge from Google’s OSS-Fuzz, a beta launched last December to automatically search for flaws in open source software, look encouraging.
The system has found 264 potential security vulnerabilities in the 47 open-source projects it has assessed, including 10 in FreeType2, 17 in FFmpeg, 33 in LibreOffice, eight in SQLite 3, 10 in GnuTLS, 25 in PCRE2, nine in gRPC, and seven in Wireshark.
The list sounds dull but worthy until you realise that the FreeType2 library alone sits unobtrusively on around a billion devices, including Android, Apple’s iOS and macOS, and Sony’s PlayStation. Finding vulnerabilities in something that common is surely good news.
Said Google:
We believe that user and internet security as a whole can benefit greatly if more open source projects include fuzzing in their development process.
We hope to see more projects integrated into OSS-Fuzz, and greater adoption of fuzzing as standard practice when developing software.
Google is keen enough that it has added OSS-Fuzz projects to its open source rewards scheme, the Patch Rewards program, which pays up to $20,000 for anyone willing to meet its ideal integration guidelines.
Fuzzing has been around for donkeys’ years and can best be described as robotically bombarding software with random and malformed data, mimicking the unpredictable input programs meet in the real world, in an attempt to provoke the sort of unusual crashes and errors that point to underlying flaws.
For efficiency and speed, this is best done on a “white box” basis with access to source code, the mode used by OSS-Fuzz. The alternative is “black box” fuzzing, the approach black hats use to find flaws in software from the outside.
This is slower, because it leaves the attacker trawling large numbers of error logs to find exploitable bugs, but run it for long enough with enough resources and you have one piece of the business model that powers cybercrime.
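To make that concrete, here is a minimal sketch of the outside-in loop in C++. It is illustrative only: ./parse_image is a hypothetical target binary, and real fuzzers such as AFL or libFuzzer add coverage feedback, input mutation and crash triage on top of this brute-force core.

```cpp
// Minimal black-box fuzzing loop (POSIX-only sketch, not Google's code).
// Target: a hypothetical ./parse_image binary that takes a file argument.
#include <cstdio>
#include <fstream>
#include <random>
#include <string>
#include <sys/wait.h>
#include <unistd.h>

int main() {
    std::mt19937 rng{std::random_device{}()};
    std::uniform_int_distribution<int> byte_dist(0, 255);
    std::uniform_int_distribution<int> len_dist(1, 4096);

    for (int iter = 0; iter < 100000; ++iter) {
        // Write a buffer of random bytes to disk as the next test case.
        std::string input(static_cast<size_t>(len_dist(rng)), '\0');
        for (char &c : input) c = static_cast<char>(byte_dist(rng));
        std::ofstream("testcase.bin", std::ios::binary) << input;

        // Run the target on the test case in a child process.
        pid_t pid = fork();
        if (pid == 0) {
            freopen("/dev/null", "w", stdout);  // silence the target
            freopen("/dev/null", "w", stderr);
            execl("./parse_image", "parse_image", "testcase.bin",
                  static_cast<char *>(nullptr));
            _exit(127);  // exec failed
        }

        // A crash shows up as the child dying on a signal (SIGSEGV,
        // SIGABRT and so on); save the input that caused it for triage.
        int status = 0;
        waitpid(pid, &status, 0);
        if (WIFSIGNALED(status)) {
            std::printf("crash: signal %d on iteration %d\n",
                        WTERMSIG(status), iter);
            std::rename("testcase.bin",
                        ("crash_" + std::to_string(iter) + ".bin").c_str());
        }
    }
    return 0;
}
```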
The obvious counter is to find those vulnerabilities before the black hats do by deploying the same idea, which Google began doing internally in 2012 for its Chrome browser, using a VM architecture called ClusterFuzz.
OSS-Fuzz makes this system available to open-source developers who might not have the resources to run it themselves (Google says it runs 10 trillion tests each week), while also combining a range of fuzzing engines in a single infrastructure.
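In practice, integrating a project means writing small fuzz targets that the infrastructure can build and drive. The sketch below shows the general shape of a libFuzzer-style entry point of the kind OSS-Fuzz runs; parse_document() is a hypothetical stand-in for whatever library API a project wants exercised.

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical library function under test; in a real project this would be
// the parsing or decoding API the maintainers want covered.
extern "C" int parse_document(const uint8_t *data, size_t size);

// libFuzzer-style entry point: the fuzzing engine calls this repeatedly with
// mutated inputs and reports any crash or sanitizer error it provokes.
extern "C" int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    parse_document(data, size);
    return 0;  // the fuzz target convention is to return 0
}
```

The engine provides main() and generates the inputs, and building the target with a sanitizer such as AddressSanitizer turns memory errors into detectable crashes, which is roughly what the figures above are counting.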
While no magic wand – finding a flaw isn’t the same as fixing it – fuzzing at this scale clearly has merit. A small confirmation is that it uncovered the security vulnerability CVE-2017-3732, separately documented by a professional security researcher.
But perhaps what is most interesting about OSS-Fuzz is that it pushes fuzzing towards becoming a mainstream expectation for larger open-source projects. This is a culture change, not a technology change: for a start, projects using OSS-Fuzz must sign up to Google’s arguably tough 90-day disclosure deadline.
Ultimately, the success of big fuzzing will be decided by the open-source projects it attracts as much as the vulnerabilities it finds.