
Applying threat intelligence to Iranian cyberattack risk

As geopolitical interest increases, so do discussions of threat intelligence, along with pressure on security operations teams to provide answers to customers and to senior leadership.

With geopolitical events changing daily, discussions and questions about threat intelligence and strategies for defending against possible cyberattacks from Iran are front and center. Security operations teams also face increased pressure to provide answers and reassurance to customers and their organization’s senior leadership that “yes, we’re covered.”

But how does anyone validate this kind of claim? How should SOC teams sift through voluminous, sometimes out-of-date threat intelligence and use it effectively to respond to stakeholder concerns?

This article provides guidance for applying threat intelligence to Iranian cyberattack risk. An article on collecting threat intelligence can be found here.

Qualification and Use of Threat Intelligence

It’s important to understand and communicate the limitations of any threat intelligence you share with stakeholders, as those limitations affect the conclusions you can reach. Relevant factors include your confidence in the sources; the completeness of the information; the age of the artifacts; the investigative method used to produce the intelligence; the interpretation of its meaning; and the qualification of your conclusions.

Source Confidence

The quality of the initial artifact directly affects the quality of the analysis and its output. Likewise, the reliability of the individual, team, or vendor that created the intelligence directly affects the confidence you can place in it.

How reliable is the source of the information you are using as a starting point? Simplify this with a low, medium, or high qualifier. For example:

  • Git repository from an unknown owner: low reliability
  • Blog post from a known company with known contributors: medium reliability
  • 1:1 communication of first-hand information from a peer you have worked with on past cases: high reliability
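
As a minimal sketch, assuming a simple in-house pipeline, that qualifier can be recorded alongside each artifact as it is ingested. The class and field names below are illustrative, not taken from any particular platform:

    from dataclasses import dataclass
    from enum import Enum

    class SourceReliability(Enum):
        LOW = 1     # e.g., Git repository from an unknown owner
        MEDIUM = 2  # e.g., blog post from a known company with known contributors
        HIGH = 3    # e.g., first-hand information from a trusted peer

    @dataclass
    class Indicator:
        value: str                      # e.g., an IP address or a file hash
        reliability: SourceReliability  # confidence in the originating source

    ioc = Indicator("203.0.113.7", SourceReliability.LOW)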

Completeness

Completeness is nearly immeasurable, but it’s important to keep in mind when qualifying the intelligence and its output. An intelligence artifact can be incomplete because of the method used to source it.

For example, how confident can you be that an unverified public dump of tools and information can be attributed to a group? Additionally, the methods you follow to review the content can result in different intelligence artifacts being produced.

An analyst at an endpoint security company, for example, may review the contents for strings that can be searched for on an endpoint tool. Another analyst at a different company may extract network indicators that can be identified through their UTM. And yet another analyst at a third company may look only at the executable files and identify hashes that can be blocked programmatically.

A list of known IOCs for an adversary group is likely incomplete by the very nature of intelligence harvesting, which amounts to piecing together individual artifacts, each with its own confidence, method, and harvesting technique.

Begin with the assumption that the indicator list you are using is incomplete, regardless of the source.
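
One hedged way to work with that assumption is to merge the partial lists you do have and track which sources corroborate each indicator. The feed names and indicator values below are hypothetical:

    from collections import defaultdict

    # Hypothetical feeds: each source contributes a partial indicator list.
    feeds = {
        "public_dump": {"203.0.113.7", "d41d8cd98f00b204e9800998ecf8427e"},
        "vendor_blog": {"203.0.113.7", "evil-updates.example.com"},
        "peer_report": {"198.51.100.23"},
    }

    # Union the partial lists, noting corroboration; the merged result is
    # still assumed to be incomplete.
    sources_for = defaultdict(set)
    for source, indicators in feeds.items():
        for ioc in indicators:
            sources_for[ioc].add(source)

    # sources_for["203.0.113.7"] == {"public_dump", "vendor_blog"}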

Artifact Aging

While it could be argued that intelligence has no expiration date, its confidence and usability decrease with age, especially when it is not corroborated by ongoing confirmation of applicability.

IP addresses are a common indicator that teams ingest to conduct threat hunts. An IP address known, via a high-confidence source, to have been used in an attack within the past week would score higher for applicability and urgency than the same indicator reported at an earlier date.

As such, if you’re working through a list of 500 indicators that lack discovery dates, it’s important to understand how that changes the way you interpret the resulting signals.
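
A minimal sketch of an age-decayed urgency score under those assumptions follows; the half-life and the discount for missing dates are arbitrary illustrative choices, not an established formula:

    from datetime import date, timedelta

    def urgency_score(reliability: int, discovered: date | None,
                      half_life_days: int = 30) -> float:
        """Decay source reliability (1=low .. 3=high) by indicator age.
        Indicators missing a discovery date are discounted conservatively."""
        if discovered is None:
            return reliability * 0.5   # unknown age: assume partially stale
        age_days = (date.today() - discovered).days
        return reliability * 0.5 ** (age_days / half_life_days)

    # An IP seen last week from a high-confidence source outranks the same
    # indicator reported three months earlier.
    recent = urgency_score(3, date.today() - timedelta(days=7))   # ~2.55
    stale = urgency_score(3, date.today() - timedelta(days=90))   # ~0.38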

Investigative Method

Once you have a list of indicators and have considered the confidence, completeness, and age of that data, it’s time to put the indicators to use. Every method has merits and demerits, but the method you use to interpret signals, and how you package and communicate the outcome, matters when drawing conclusions.

If inbound intelligence is limited to file hashes believed to be associated with a certain APT group, for instance, then the investigative procedures will be limited to checking for the existence of those hashes. If indicators span network, behavior, and file hashes, but the internal knowledge or tooling to apply them across the technology in place (UTM, logging, and EDR tools, respectively) is limited, then the results will need to be qualified accordingly.

The investigative method needs to align to the source indicators, the technology in place, the knowledge and abilities of the analyst, and the manner in which results are communicated.
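
As a sketch of that alignment, assuming a simple inventory of indicator types and tooling (the names are hypothetical), coverage gaps can be surfaced up front so they can be qualified in the read-out:

    # Hypothetical mapping of indicator types to the tooling that can apply them.
    TOOLING = {
        "file_hash": "EDR",
        "network": "UTM",
        "behavior": "Logging",
    }

    def plan_hunt(indicator_types: set[str], available_tools: set[str]):
        """Split indicators into those we can apply and gaps to qualify later."""
        applicable = {t for t in indicator_types if TOOLING.get(t) in available_tools}
        return applicable, indicator_types - applicable

    applicable, gaps = plan_hunt({"file_hash", "network", "behavior"},
                                 {"EDR", "Logging"})
    # gaps == {"network"}: any conclusions must note that network indicators
    # were never checked.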

Signal Interpretation

Not all signals are equal. While operators inherently know this to be true, applying that concept in practice depends on the components described above.

One of the top challenges operators face is having intelligence (and the resulting detections) presented at the right time, with the right context, within their threat hunting and case investigation workflows. During threat hunts, these indicators can serve as trailheads that help orient toward confirming or eliminating a hunting thesis.

The existence of an indicator should be weighed against the confidence of the inputs. If the hit is on an IP address, consider that while actors may reuse IP space, they may also treat an IP address much like a burner phone.

A hit on an indicator known to be associated with an APT group that is in turn assumed to be linked to a nation-state actor chains several assumptions together. Interpreting that hit needs to take those factors into account, and the resulting reliance on it, and communication about it, must be qualified appropriately.
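
A sketch of weighting a hit that way, assuming the same 1-3 reliability scale as above; the volatility discounts are invented for illustration:

    # Hypothetical volatility discounts: how readily an actor abandons an indicator.
    VOLATILITY = {
        "ip": 0.4,         # often treated like a burner phone
        "domain": 0.6,
        "file_hash": 0.9,  # ties the hit to a specific artifact
    }

    def hit_weight(indicator_type: str, source_reliability: int) -> float:
        """Weight a hit by source reliability (1=low .. 3=high) and volatility."""
        return (source_reliability / 3) * VOLATILITY.get(indicator_type, 0.5)

    hit_weight("file_hash", 3)  # 0.9: strong signal
    hit_weight("ip", 1)         # ~0.13: a trailhead at best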

Read-out and Qualification of Conclusions

When reporting out on the results of threat hunts and investigations, resist the temptation to fall into inductive fallacy traps, or to use resolute statements to describe the observations.

You should always qualify intelligence based on your confidence in it. As the preceding sections show, there is quite a journey from sourcing intelligence to using, interpreting, and reporting on it.

The Sophos MTR team follows a confidence model where we avoid conclusions like “Iranian APT group 34 is in your environment.” Instead, we might state, “Indicators reportedly associated with APT group 34 have been observed on [assetname].”

If pressed on whether or not it was APT 34, we would emphasize our high confidence that an adversary compromised the environment, was detected, and that the threat was neutralized. We would leave attribution of the indicators to a nation state to the global intelligence and law enforcement community.

In general, existence of an indicator is absolute and binary, but the conclusions about the actors and origins should be qualified based on confidence.
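
A trivial sketch of templating that qualified language into hunt reporting; the function name and hostname are hypothetical:

    def qualified_readout(group: str, asset: str) -> str:
        """State the observation without asserting attribution."""
        return (f"Indicators reportedly associated with {group} "
                f"have been observed on {asset}.")

    print(qualified_readout("APT group 34", "FILESERVER01"))
    # Indicators reportedly associated with APT group 34 have been
    # observed on FILESERVER01.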

