Releasing Dettectinator

It has been almost 4 years since Ruben and Marcus released DeTT&CT, a great tool for managing your detection capabilities. Within Sirius Security we use DeTT&CT at many of our clients and it has proven to be a very useful tool. On the other hand, we noticed that as the detection capabilities of our clients steadily grew, the time spent managing them in DeTT&CT also increased. Because we're both a bit lazy, Ruben and I started developing Python scripts to automate some of the manual labour involved in the process. When we noticed that we were both basically creating the same functionality, we decided it would be a better plan to create one new tool to rule them all. Before we even decided what the tool should do, we already agreed on the name: Dettectinator. Ruben and I have been creating tools together for a long time, and the names of those tools always ended in "nator". So we had tools like the "Threatinator", the "Phishinator", the "Indicatornator" and many more.

Functionality

After deciding upon the important issue of the tool's name, we started thinking about what its functionality should be. The goal is to create a "one-stop-tool" for automating all the daunting tasks in the DeTT&CT process. We both agreed that one of the most time-intensive parts of the process is typing all the detection names and selecting their techniques in the DeTT&CT Editor. Ruben and I had already created some Python scripts to reduce that burden at our clients. A lot of detection systems nowadays have the ability to add the MITRE ATT&CK technique ID to the metadata of a detection rule, and some have already done so for the detections that come out of the box as well. Most of these systems allow you to access that metadata in some way, either through config files, APIs or file exports. Our scripts read that metadata and turn it into DeTT&CT YAML. The same applies, to some extent, to data sources, but we decided to focus on detections first.
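The idea of turning rule metadata into DeTT&CT YAML can be sketched as follows. The input format and field names here ("name", "attack_techniques") are illustrative assumptions, not the metadata shape of any specific system, and the output is a simplified approximation of a DeTT&CT technique-administration file:

```python
# Sketch: group detection names per ATT&CK technique ID and shape the result
# roughly like a (simplified) DeTT&CT techniques YAML. The input rule format
# is a hypothetical example; real systems expose this metadata via config
# files, APIs or file exports.

def rules_to_techniques(rules):
    """Group detection names per ATT&CK technique ID."""
    techniques = {}
    for rule in rules:
        for technique_id in rule.get("attack_techniques", []):
            techniques.setdefault(technique_id, []).append(rule["name"])
    return {
        "file_type": "technique-administration",
        "techniques": [
            {"technique_id": tid, "detection": {"location": sorted(names)}}
            for tid, names in sorted(techniques.items())
        ],
    }

rules = [
    {"name": "Suspicious PowerShell", "attack_techniques": ["T1059.001"]},
    {"name": "LSASS memory dump", "attack_techniques": ["T1003.001"]},
    {"name": "Encoded command line", "attack_techniques": ["T1059.001", "T1027"]},
]

yaml_data = rules_to_techniques(rules)
```

In practice the resulting structure would be serialized with a YAML library; the point is that the mapping itself is mechanical, which is exactly why it lends itself to automation.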

The nice thing about already having created scripts for this functionality is that you get a good view of what is good and bad about them. So we quickly came up with a few requirements for the tool:

  • It should support both data sources and detections.

  • It should support multiple source systems and should be easily extensible for new ones.

  • It should be able to run as part of a pipeline.

  • It should be possible to use it as a command line tool and as a library that could be integrated into a custom solution.

  • It should be possible to merge the data from the API with an existing YAML file.

We have been working on the tool and documentation for the last couple of weeks and can now proudly release version 1.0 of Dettectinator to the public. To use it as a library, install it from PyPI using `pip install dettectinator`, or download it from our GitHub page to use the CLI: https://github.com/siriussecurity/dettectinator.

Data IMPORT plugins

Currently Dettectinator has plugins to read detection rules from the following source systems:

  • Microsoft Sentinel: Analytics Rules (API)

  • Microsoft Defender: Alerts (API)

  • Microsoft Defender: Custom Detection Rules (API, under construction)

  • Microsoft Defender for Identity: Detection Rules (loaded from MS Github)

  • Tanium: Signals (API)

  • Elastic Security: Rules (API)

  • Suricata: Rules (file)

  • Sigma: Rules (folder)

  • CSV: any CSV file with detections and ATT&CK technique IDs (file)

  • Excel: any Excel file with detections and ATT&CK technique IDs (file)

The API to access the Custom Detection Rules in Microsoft Defender is currently under private preview, so we unfortunately cannot yet release that plugin.
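Conceptually, the simplest of these plugins, the CSV import, boils down to mapping each detection to its ATT&CK technique IDs. The column names used below ("detection", "technique_id") are assumptions for this sketch; the format the actual plugin expects is described in the README:

```python
# Sketch of the CSV import idea: read a CSV with one row per
# detection/technique pair and collect the technique IDs per detection.
# Column names are illustrative, not Dettectinator's actual format.
import csv
import io

CSV_DATA = """detection,technique_id
Suspicious PowerShell,T1059.001
Suspicious PowerShell,T1027
LSASS memory dump,T1003.001
"""

def read_detections(fh):
    detections = {}
    for row in csv.DictReader(fh):
        detections.setdefault(row["detection"], []).append(row["technique_id"])
    return detections

detections = read_detections(io.StringIO(CSV_DATA))
# detections maps each detection name to its list of technique IDs
```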

If these plugins don't suffice for your situation, you can easily write your own. We've created an extensive README file on our GitHub page that describes this process.

Working with data sources is a little less straightforward than working with detections. We have a few ideas to turn those into plugins, but that will be for the next release. The Dettectinator library does, however, already support creating and updating data source YAML files, so you can build your own solution for this if required.

Workflow

We're currently using Dettectinator at some of our customers as part of their detection engineering workflow. As mentioned, the purpose of Dettectinator is to automate the repetitive and boring parts of the flow, so that the analyst can focus on adding intelligence to the picture. The workflow we implement looks something like this:

The first stage of the flow is for Dettectinator. The trigger can be manual, scheduled or based on a CI/CD pipeline. It produces or updates the YAML file with the new "raw" items added to it. Dettectinator reports which items have been added, updated or deleted, and also annotates this in the YAML file. The analyst can then add extra information such as scoring to the techniques, edit the new items with the DeTT&CT Editor, and use the DeTT&CT framework to create a new ATT&CK Navigator layer. The next time Dettectinator runs it can use the YAML file as input again, preserving all history and added information on every run.
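The added/updated/deleted reporting described above is essentially a diff between the items coming from the source system and those already in the YAML file. A minimal sketch of that comparison, using plain dicts rather than Dettectinator's actual internal representation:

```python
# Sketch of the merge step: compare incoming detections (from a source
# system) against the existing ones (from the YAML file) and report what
# changed. The dict-based shape is illustrative, not Dettectinator's API.

def diff_items(existing, incoming):
    """Return (added, updated, deleted) detection names."""
    added = sorted(incoming.keys() - existing.keys())
    deleted = sorted(existing.keys() - incoming.keys())
    updated = sorted(
        name for name in incoming.keys() & existing.keys()
        if incoming[name] != existing[name]
    )
    return added, updated, deleted

existing = {
    "Suspicious PowerShell": ["T1059.001"],
    "Old rule": ["T1105"],
}
incoming = {
    "Suspicious PowerShell": ["T1059.001", "T1027"],  # technique list changed
    "LSASS memory dump": ["T1003.001"],               # new detection
}
added, updated, deleted = diff_items(existing, incoming)
```

Because the existing YAML is used as the baseline, manually added information (such as scores) survives each run: only the raw detection/technique mappings are reconciled.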

You could also consider combining steps 1 and 3 to automatically create the most recent version of the layer file as well. If you run it frequently, you will always have layer files containing the most recent detections.
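For context on what that layer step produces: an ATT&CK Navigator layer is a JSON file that scores techniques on the matrix. In the workflow above DeTT&CT generates it for you; the sketch below only illustrates the general shape of such a file (heavily simplified, omitting the version and metadata fields a real layer carries):

```python
# Sketch of a minimal ATT&CK Navigator layer structure (simplified).
# Real layers are generated by DeTT&CT and contain additional required
# fields (versions, metadata, colors, etc.).
import json

def build_layer(name, technique_ids):
    return {
        "name": name,
        "domain": "enterprise-attack",
        "techniques": [
            {"techniqueID": tid, "score": 1} for tid in sorted(technique_ids)
        ],
    }

layer = build_layer("Detections", ["T1059.001", "T1003.001"])
layer_json = json.dumps(layer, indent=2)
```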

Using Dettectinator in this workflow saves the analyst the time involved in typing all the detection and technique information into the DeTT&CT Editor, and it improves the accuracy of the registration. The analyst can now focus on adding intelligence to the overviews.

Give it a try!

We think that Dettectinator is a very useful addition to the DeTT&CT workflow, and we hope that this post has sparked some enthusiasm to give the tool a try. We've written a large README file that contains a lot of information on how to use the tool, both as a library and as a command-line tool. If you have any improvements, questions or ideas, just create a GitHub issue and we'll try to address it as soon as possible.

Martijn Veken