How to upload a non-standard log into Splunk + Fortinet logs

Do we generate a lot of data with our information systems every day? A huge amount! But do we know all the ways to work with that data? Definitely not! In this article we will describe what types of data can be loaded into Splunk for further operational analysis, and show how to set up the ingestion of Fortinet logs and of non-standard log files whose fields need to be extracted manually.
 
 
Splunk can index data from various sources: the logs may be stored locally on the same machine as the Splunk indexer, or on a remote device. To collect data from remote machines, a special agent, the Splunk Universal Forwarder, is installed on them; it sends the data to the indexer.
 
 
Splunk offers many ready-made applications and add-ons with pre-configured parameters for loading a certain type of data, for example, add-ons for data generated by Windows, Linux, Cisco, Check Point, etc. In total, more than 800 add-ons have been created so far; they can be found on the SplunkBase site.
 
 


Types of data sources


 
All incoming data can be divided into several groups according to their sources:
 
 
Files and directories
 
 
Most data comes into Splunk directly from files and directories. You just need to specify the path to the directory from which you want to collect data; Splunk will then monitor it continuously, and as soon as new data appears it will immediately be loaded. Later in this article we will show how this is done.
 
 
Network events
 
 
Splunk can also index data from any network port, for example remote syslog data or data from other applications that transmit it over a TCP or UDP port. This type of data source is covered in the Fortinet example below.
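
For reference, such a network input can also be described directly in inputs.conf instead of through the web interface. Below is a minimal sketch of a UDP syslog input; the port and index name here are just examples, not values from this article:

# inputs.conf: listen for syslog on UDP 5514 (example values)
[udp://5514]
sourcetype = syslog
index = network
connection_host = ip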
 
 
Windows sources
 
 
Splunk allows you to configure the loading of a variety of Windows data, for example event log data, the registry, WMI, Active Directory, and performance monitoring data. We covered loading data from Windows into Splunk in more detail in the previous article. (reference)
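
As an illustration, such Windows inputs are usually described by WinEventLog and perfmon stanzas in inputs.conf on the Windows machine or its forwarder. A minimal sketch; the channel, counter, and index names are assumptions:

# inputs.conf: Windows Security event log (example)
[WinEventLog://Security]
disabled = 0
index = windows

# inputs.conf: CPU performance counters every 60 seconds (example)
[perfmon://CPU]
object = Processor
counters = % Processor Time
instances = _Total
interval = 60
index = windows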
 
 
Other data sources
 
 
 
Metrics
 
Scripts
 
Custom data loading modules
 
HTTP Event Collector
 
 
Many tools have already been implemented that can load almost any data you have, but even if none of them suits you, you can create your own scripts or modules; we will talk about this in one of the following articles.
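
For example, an HTTP Event Collector token and a scripted input can also be described in inputs.conf. A rough sketch, where the token value, script path, sourcetype, and index are placeholders (HEC itself must additionally be enabled under Settings - Data Inputs - HTTP Event Collector):

# inputs.conf: an HTTP Event Collector token (placeholder values)
[http://my_hec_input]
token = <your-generated-token>
index = main
disabled = 0

# inputs.conf: a scripted input run every 60 seconds (placeholder values)
[script://./bin/my_script.sh]
interval = 60
sourcetype = my_script_output
index = main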
 
 


Fortinet


 
In this section, we will discuss how to implement the loading of Fortinet logs.
 
 
1. First you need to download the add-on from the SplunkBase site.
 
 
2. Next, install it on your Splunk indexer (Apps - Manage Apps - Install app from file).
 
 
3. Then configure the reception of data on a UDP port. To do this, go to: Settings - Data Inputs - UDP - New. Specify the port; by default it is port 514.
 
 
 
 
Choose the sourcetype fgt_log, and select the required index or create a new one.
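
The input created through the web interface corresponds roughly to a stanza like this in inputs.conf; the index name here is just an example:

# inputs.conf: UDP input for FortiGate syslog (index name is an example)
[udp://514]
sourcetype = fgt_log
index = fortinet
connection_host = ip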
 
 
 
 
4. Configure the sending of syslog data over UDP on the Fortinet device itself, specifying the same port as in Splunk.
 
5. Get the data and build the analytics.
 
 
 
 
 
 


Non-standard log


 
By a non-standard log we mean a log whose sourcetype is unknown to Splunk and which therefore has no predefined field parsing rules. To get the field values, you will first need to perform a few simple manipulations.
 
 
Using this log as an example, in addition to parsing we will show how to load data from directories. There are two scenarios, depending on where the data resides: on the indexer's local machine or on a remote machine.
 
 

Local machine


 
If your data is stored on the Splunk indexer's local machine, then loading it is very easy:
 
Settings - Add data - Monitor - Files & Directories
 
 
 
 
Select the desired directory; if necessary, you can set a whitelist or blacklist.
 
Choose an index or create a new one; leave the rest at the defaults.
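
What the wizard creates corresponds approximately to a monitor stanza in inputs.conf. A sketch with an assumed directory path, whitelist, and index:

# inputs.conf: monitor a local directory, only .log files (example values)
[monitor:///var/log/myapp]
whitelist = \.log$
index = main
disabled = 0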
 
 
 
 
 

Remote machine


 
If the directory with the desired data is on a remote machine, the procedure is somewhat different. To collect the data, we need an agent on the target machine (Splunk Universal Forwarder), Forwarder Management configured on the Splunk indexer, a sendtoindexer application, and an application that specifies which directories we will be monitoring.
 
 
We described in detail how to install the agent and configure data collection from a remote machine in the previous article, so we will not repeat it here and will assume that you already have all of those settings in place.
 
 
We will create a special application that is responsible for making the agent send data from the specified directories.
 
 
 
 
The application is automatically saved in the splunk/etc/apps folder; you need to move it to the splunk/etc/deployment-apps folder.
 
 
In the monitor/local folder you need to put the configuration file inputs.conf, in which we specify which folders should be forwarded.
 
We want to monitor the test folder in the root directory.
 
[monitor:///test]
index = test
disabled = 0

 
 
More information about inputs.conf files can be found on the official site.
 
 
We add the application to the Server Class that is associated with the target machine. We also explained what this is needed for in the previous article.
 
 
Recall that this can be done via Settings - Forwarder Management.
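
On the deployment server this mapping can also be expressed in serverclass.conf. A minimal sketch in which the app name ("monitor") follows this article's example, while the server class name and host pattern are assumptions:

# serverclass.conf: deploy the "monitor" app to matching forwarders (example names)
[serverClass:monitor_class]
whitelist.0 = target-host*

[serverClass:monitor_class:app:monitor]
stateOnClient = enabled
restartSplunkd = true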
 
 
For the data to be loaded, the index specified in inputs.conf must exist; if it does not, create a new one.
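
An index can be created through Settings - Indexes, or described in indexes.conf on the indexer. A minimal sketch for the test index used above, assuming the usual $SPLUNK_DB layout:

# indexes.conf: define the "test" index (standard path layout assumed)
[test]
homePath = $SPLUNK_DB/test/db
coldPath = $SPLUNK_DB/test/colddb
thawedPath = $SPLUNK_DB/test/thaweddb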
 
 

Data parsing


 
After loading, we got data that is not divided into fields. To extract the fields, go to the Extract Fields menu (All Fields - Extract New Fields).
 
 
 
 
You can extract the fields using the built-in toolkit, which generates regular expressions to pick out the fields you mark up. Or you can write a regular expression yourself if the result of the automatic extraction does not suit you.
 
 
Step 1. Choose the field
 
 
 
 
Step 2. Choose the extraction method.
 
We will use regular expressions.
 
 
 
 
Step 3. Select the values that belong to one field and give it a name.
 
 
 
 
Step 4. Check whether the field is highlighted in other events; if not, add that event to the selection as well.
 
 
 
 
Step 5. Select the field in all the differently structured events.
 
 
 
 
Step 6. Check whether anything extra was extracted, for example in events where there is no such field.
 
 
 
 
Then we save the field extraction, and from now on, when data with the same sourcetype is loaded, this rule will be applied to extract the field value.
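
Under the hood, such a saved extraction is stored as an EXTRACT rule in props.conf, tied to the sourcetype. A hypothetical sketch, where the sourcetype name, field name, and regular expression are invented for illustration:

# props.conf: search-time field extraction for a custom sourcetype (hypothetical example)
[my_custom_log]
EXTRACT-user = user=(?<user>\S+)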
 
 
Next, we create all the other fields that we need, and the data is ready for further analysis.
 
 
 
 


Conclusion


 
So, we have described which sources you can load data into Splunk from, shown how to configure receiving data from network ports, and how to load and parse a non-standard log.
 
 
We hope that this information will be useful to you.
 
 
We are happy to answer all your questions and comments on this topic. Also, if you are interested in something specific in this area, or in the field of machine data analysis as a whole, we are ready to adapt existing solutions to your specific task. You can write about it in the comments or simply send us a request through the form on our site.
 
 
We are an official Splunk Premier Partner.
 
 