In the first of this two-part series, Director of Product Shimon Modi identifies the most common IoC scoring mistakes. Check out PART 2 to learn how TruSTAR calculates scores that help analysts determine IoC relevance to their triage process.
When it comes to risk triage, we’re always looking for efficiency hacks. Each new tool we integrate into our SOC brings more threat indicators to resolve, and IoC scoring has become a default way to prioritize investigation.
In theory, severity scoring of threat indicators sounds like a perfectly rational way of making the analysis process more efficient: focus on the indicators with a higher severity score and you are making a prioritized and risk-based decision. But IoC scoring can create more work than it's worth, and worse yet, some scoring systems may be misleading. A number of problems can crop up when you blindly embrace IoC scoring to prioritize your analysis.
Here are the top five most common pitfalls we see in today’s IoC scoring systems.
1. Scoring is not transparent: The methods used to generate scores are not always known or consistent. Some scores are assigned subjectively by human threat intel analysts, some by automated machine learning algorithms, and others through crowdsourced community feedback. Consumers of these ratings see only a quantified score on an arbitrary scale of 0-100 or green-yellow-red.
2. It’s difficult to justify variance between scores: The source data used to generate scores varies depending on who is generating them. When the same indicator receives different severity scores from different sources, there is no way to explain the discrepancy.
3. Questionable effectiveness: Without disclosure of which data sources are used to generate scores, how different variables are weighted and interpreted, and how valid the assumptions for inclusion are, it’s very difficult to judge the impact of severity scores. Presuming that endpoints in certain countries are more likely to be malicious, for example, can lead to IoC scores that provide a false sense of confidence in your analysis. Even if a high-scoring indicator helps one investigation, it is difficult to predict whether it will be consistently effective.
4. The Global Relevance fallacy: Severity scores should help analysts understand the contribution of the IoC to the overall context of the event they are analyzing at that time. Established maliciousness of an IoC does not necessarily translate to relevance. One analyst’s 95% confidence rating in a malicious IoC could be another analyst’s 5% rating.
5. Death by a thousand cuts: When severity scores generate false positives, you become numb to them and start tuning them out (e.g., how often are you alarmed by car alarms?). In the face of incomplete information, humans rely on their biases to fill knowledge gaps, further exacerbating the problem.
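The variance problem in pitfall 2 is easy to demonstrate. The sketch below is a hypothetical illustration, not any vendor's real output: three invented feeds score the same indicator on different native scales, and even after naive normalization to a common 0-100 range, the verdicts diverge widely with no way to explain why.

```python
# Hypothetical illustration of pitfall 2: the same indicator scored by
# three feeds on different, opaque scales. Feed names and scores are
# invented for demonstration.

def normalize(score, scale_max):
    """Map a feed's native score onto a common 0-100 scale."""
    return round(score / scale_max * 100)

# Same IP address, three different vendor verdicts.
indicator = "203.0.113.7"
feed_scores = {
    "feed_a": normalize(8, 10),    # 8 out of 10  -> 80
    "feed_b": normalize(35, 100),  # 35 out of 100 -> 35
    "feed_c": normalize(3, 5),     # 3 out of 5   -> 60
}

# A 45-point spread on a 100-point scale, with no shared methodology
# to tell the analyst which verdict to trust.
spread = max(feed_scores.values()) - min(feed_scores.values())
print(indicator, feed_scores, "spread:", spread)
```

Without visibility into each feed's inputs and weighting, an analyst has no principled way to reconcile an 80 against a 35 for the same IP.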
Severity scores are important, and hugely valuable to analysts when calculated in a context-aware way. But we don’t have to settle for black-box calculations and secret-sauce techniques. Scientific research methodology has an evaluative criterion for this. It’s called internal validity - essentially, how well a methodology allows you to choose among alternative explanations of a phenomenon. In the cybersecurity context, this means we need a way of consistently determining IoC severity scores with a high level of confidence.
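To make the contrast with black-box scoring concrete, here is a minimal sketch of what a transparent, auditable score could look like: every input factor, every weight, and the final arithmetic are visible, so an analyst can trace exactly why an IoC received its number. The factor names and weights are assumptions for illustration only, not TruSTAR's actual methodology.

```python
# A transparent severity score: a weighted sum of normalized factors,
# returned together with a per-factor breakdown for auditability.
# Factors and weights below are illustrative assumptions.

WEIGHTS = {
    "sightings_in_own_events": 0.5,  # how often the IoC appears in your own incident data
    "source_corroboration": 0.3,     # fraction of independent feeds that flag it
    "recency": 0.2,                  # 1.0 = seen today, decaying toward 0
}

def severity(factors):
    """Score an IoC from factors, each normalized to [0, 1].
    Returns (score on a 0-100 scale, per-factor contribution breakdown)."""
    breakdown = {name: factors[name] * weight for name, weight in WEIGHTS.items()}
    return round(sum(breakdown.values()) * 100), breakdown

score, why = severity({
    "sightings_in_own_events": 0.8,
    "source_corroboration": 0.5,
    "recency": 1.0,
})
print(score, why)  # the score is fully explainable from the breakdown
```

Because the breakdown is returned alongside the score, two sources scoring the same indicator differently can be compared factor by factor instead of argued over as opaque numbers.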
Interested in learning more about how TruSTAR can help you contextualize your IoCs? Download our Product Sheet.
Click HERE to read Part 2 of our IoC Scoring Series.