If you’re a Splunk admin, take note: the company has issued a critical warning about a showstopping Y2K-style date bug in one of the platform’s configuration files that needs urgent attention.
According to this week’s advisory, from 1 January 2020 (00:00 UTC) unpatched instances of Splunk will be unable to extract and recognise timestamps submitted to them in a two-digit year format.
In effect, unpatched instances understand two-digit years up to 31 December 2019, but as soon as the date rolls over to 1 January 2020 they will treat the year as invalid, either defaulting back to a 2019 date or substituting their own incorrect, misinterpreted date.
In addition, beginning on 13 September 2020 at 12:26:39 PM UTC, unpatched Splunk instances will no longer be able to recognise timestamps for events with dates based on Unix time (which began at 00:00 UTC on 1 January 1970).
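That second date looks oddly precise, but the arithmetic behind it is simple: it’s the point at which the Unix epoch counter (seconds elapsed since 1 January 1970) crosses from 1,599,999,999 into 1,600,000,000, changing the leading digits of epoch timestamps from ‘15’ to ‘16’. The quick Python check below confirms the arithmetic; connecting that rollover to Splunk’s parsing behaviour is our inference from the advisory’s timing rather than a quote from it.

```python
from datetime import datetime, timezone

# The moment cited in the advisory sits right where the Unix epoch
# counter crosses from 1,599,999,999 to 1,600,000,000 seconds.
cutoff = datetime(2020, 9, 13, 12, 26, 39, tzinfo=timezone.utc)

print(int(cutoff.timestamp()))      # 1599999999
print(int(cutoff.timestamp()) + 1)  # 1600000000
```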
Left unpatched, the effect on customers could be far-reaching.
What platforms like Splunk do is one of the internet’s best-kept secrets – turning screeds of machine-generated log data (from applications, websites, sensors, Internet of Things devices, etc) into something humans can make sense of.
There was probably a time when sysadmins could do this job by hand, but there are now so many devices spewing so much data that automated systems have become a must.
This big data must also be stored somewhere, hence the arrival of cloud platforms designed to do the whole job, including generating alerts when something’s going awry or simply analysing how well everything’s humming along.
Bad timing
As with any computing system, however, Splunk depends on events having accurate time and date stamps. Without that, it has no way of ordering events, or of dealing meaningfully with the world in real time.
According to Splunk, in addition to inaccurate event timestamping, this could result in:
- Incorrect rollover of data buckets due to the incorrect timestamping
- Incorrect retention of data overall
- Incorrect search results due to data ingested with incorrect timestamps
- Incorrect timestamping of incoming data
It gets worse:
There is no method to correct the timestamps after the Splunk platform has ingested the data. If you ingest data with an un-patched Splunk platform instance, you must patch the instance and re-ingest the data for timestamps to be correct.
In short, there’s no quick way to back out of a problem which will only grow with every passing hour, day and week that it’s allowed to continue.
The problem lies with a file called datetime.xml, which Splunk uses to extract incoming timestamps using regular expression syntax. The pattern recognises two-digit years up to and including 19, but not 20 onwards.
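To see the failure mode concretely, here’s a tiny illustration in Python – not the actual expression shipped in datetime.xml, just a pattern written the same short-sighted way, accepting two-digit years from 00 to 19 only:

```python
import re

# Illustrative only - not the real pattern from datetime.xml.
# A year group limited to 00-19 silently stops matching once the
# two-digit year field reads "20".
narrow_year = re.compile(r"^(?P<day>\d{2})/(?P<month>\d{2})/(?P<year>0\d|1\d)$")

print(bool(narrow_year.match("31/12/19")))  # True  - '19' is accepted
print(bool(narrow_year.match("01/01/20")))  # False - '20' falls outside the pattern
```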
What to do
Leaving aside Splunk Cloud customers, who should receive the update automatically, there are three ways to patch the bug for all operating systems, the company said.
- Download an updated version of datetime.xml and apply it to each of your Splunk platform instances
- Make manual modifications to the existing datetime.xml on your Splunk platform instances
- Upgrade Splunk platform instances to a version with an updated version of datetime.xml
The complication is that applying the new file, or editing it manually, requires customers to stop and restart Splunk, a disruptive process when applied to more than one Splunk instance. Editing datetime.xml should also be done with great care.
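For anyone taking the manual route, it’s worth checking any edited year pattern against sample dates from both sides of the cut-off before restarting anything. The sketch below shows that kind of sanity check – the pattern is illustrative, not the actual contents of datetime.xml:

```python
import re

# Conceptual check only - the real datetime.xml is far larger and its
# expressions differ. The idea is to verify that a hand-widened year
# group still accepts 19 and now also accepts 20 onwards.
widened_year = re.compile(r"^(?P<day>\d{2})/(?P<month>\d{2})/(?P<year>[012]\d)$")

for sample in ["31/12/19", "01/01/20", "15/06/21"]:
    print(sample, "->", "recognised" if widened_year.match(sample) else "NOT recognised")
```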
Although reminiscent of the famous Y2K ‘millennium bug’ predicted to affect computer systems on 1 January 2000, this class of bug has popped up on other occasions since then.
A recent example is the GPS date issue that hit older satellite navigation systems earlier this year.
A variation on the same date/GPS problem affected Apple iPhone 5 and iPhone 4s in October, which meant that owners had to update their devices by 3 November 2019 or suffer app synchronisation problems.