log messages – Incorporating Drupal 8 logs into Splunk

In Drupal 8 you may define your own logger implementation to do whatever you want with log messages. The default logger provided by core Drupal saves these messages in the Drupal database and makes them available in the UI at /admin/reports/dblog. This default logger is implemented in the core dblog module by the logger class Drupal\dblog\Logger\DbLog, and that's a great example to use when you write your own.

A logger is a class that implements LoggerInterface and is used as a service with the ‘logger’ service tag. All registered loggers are used for every channel. You don’t need to ‘trigger’ a logger or do anything special to have that logger automatically used by core Drupal, other than tagging the service.

The service definition for the core DbLog logger looks like this (from dblog.services.yml):

services:
  logger.dblog:
    class: Drupal\dblog\Logger\DbLog
    arguments: ['@database', '@logger.log_message_parser']
    tags:
      - { name: logger }
      - { name: backend_overridable }

You can see that the logger.dblog service is implemented by the DbLog class, and this service is tagged as a logger. That tag is how Drupal knows to send log messages to this service. Without the tag, Drupal wouldn't know this service was a destination for log messages.

Another good example in core is the core syslog module, which provides a logger that uses the PHP syslog() function to send messages to an operating-system-dependent location (probably a flat text file shared by all other programs that log messages on that operating system).

To make your own 'Splunk' logger, you would create a module that defines a logger service, for example logger.splunk. (Services are defined in the module's <modulename>.services.yml file.) You must have a class, for example 'Splunk', that implements LoggerInterface. In that 'Splunk' class you may use the Splunk API to send log messages to Splunk. The details of how to do that are up to you, as they have nothing to do with Drupal at this point. If you have code that sends log messages to Splunk from a standalone program, then I think it is pretty clear from the examples provided by core how to use that code in your logger class.

Your module will consist of a <modulename>.info.yml file, a <modulename>.services.yml file, and a <modulename>/src/Logger/Splunk.php file. Nothing more is needed. If you’ve done it right, then when you enable your module all messages should be logged to Splunk.
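Here is a minimal sketch of the two interesting files (the info.yml is boilerplate), assuming a module named splunk_logger; the sendToSplunk() call is a hypothetical stand-in for whatever Splunk client code you use (for example, an HTTP Event Collector client), not a real API:

splunk_logger.services.yml:

services:
  logger.splunk:
    class: Drupal\splunk_logger\Logger\Splunk
    arguments: ['@logger.log_message_parser']
    tags:
      - { name: logger }

src/Logger/Splunk.php:

<?php

namespace Drupal\splunk_logger\Logger;

use Drupal\Core\Logger\LogMessageParserInterface;
use Drupal\Core\Logger\RfcLoggerTrait;
use Psr\Log\LoggerInterface;

class Splunk implements LoggerInterface {
  use RfcLoggerTrait;

  protected $parser;

  public function __construct(LogMessageParserInterface $parser) {
    $this->parser = $parser;
  }

  public function log($level, $message, array $context = []) {
    // Replace the PSR-3 placeholders so the message is self-contained,
    // following the same pattern the core syslog logger uses.
    $placeholders = $this->parser->parseMessagePlaceholders($message, $context);
    $rendered = empty($placeholders) ? $message : strtr($message, $placeholders);
    // Hypothetical: hand the rendered message to your Splunk client here,
    // for example by POSTing it to the HTTP Event Collector.
    // sendToSplunk($context['channel'], $level, $rendered);
  }
}

Once the module is enabled, every message sent through any logger channel (for example \Drupal::logger('mymodule')->error('It broke')) will reach this log() method, with the channel name available in $context['channel'].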

splunk – Got malformed error when doing replace string manipulation

I was following the string manipulation docs from Splunk itself:

  1. SPL2 example: Returns the "body" field with phone numbers redacted.

...| eval body=replace(cast(body, "string"), /[0-9]{3}(-|.)[0-9]{3}(-|.)[0-9]{4}/, "<redacted>");

But when I tried to do query

... | eval hostname=replace(cast(hostname, "string"), /cron*/, ""); | ..

I got error
Error in 'eval' command: The expression is malformed. An unexpected character is reached at '/cron*/, "a");'.

I'm confused. What did I do wrong?
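For reference, the /…/ regex-literal syntax in that documentation example is SPL2-only; in classic SPL, eval's replace() takes the regular expression as a quoted string, and cast() does not exist either. A sketch of the same redaction written for classic SPL (assuming the same body field):

... | eval body=replace(body, "[0-9]{3}(-|.)[0-9]{3}(-|.)[0-9]{4}", "<redacted>")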

splunk – Best method to keep lookup file values fresh

Say, I have to monitor users’ activities from 3 specific departments: Science, History, and Math.

The goal is to send an alert if any of the users in any of those departments download a file from site XYZ.

Currently, I have a lookup file for all the users from those three departments.

users
----------------------
user1@organization.edu
user2@organization.edu
user3@organization.edu
user4@organization.edu
user5@organization.edu

One problem: users can join, leave, or transfer to another department anytime.

Fortunately, those activities (join and leave) are tracked and they are Splunk-able.

index=directory status=*
-----------------------------------------------
{
"username":"user1@organization.edu",
"department":"Science",
"status":"added"
}
{
"username":"user1@organization.edu",
"department":"Science",
"status":"removed"
}
{
"username":"user2@organization.edu",
"department":"History",
"status":"added"
}
{
"username":"user3@organization.edu",
"department":"Math",
"status":"added"
}
{
"username":"MRROBOT@organization.edu",
"department":"Math",
"status":"added"
}

In this example, assuming I forgot to update the lookup file, I won’t get an alert when MRROBOT@organization.edu downloads a file, and at the same time, I will still get an alert when user1@organization.edu downloads a file.

One solution that I could think of is to update the lookup manually using the inputlookup and outputlookup method, like:

| inputlookup users.csv | where users!="user1@organization.edu" | outputlookup users.csv

But I don't think this is an efficient method, especially since it's highly likely I might miss a user or two.

Is there a better way to keep the lookup file up to date? I googled around, and one suggestion is to use a cron job with curl to update the list. But I was wondering if there's a simpler or better alternative than that.
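One pattern worth sketching (assuming the directory events shown above, and that the most recent status per user is authoritative): a scheduled saved search that rebuilds the lookup from the index, so the CSV is regenerated on a timer instead of hand-edited:

index=directory status=*
| stats latest(status) as status by username
| where status="added"
| fields username
| rename username as users
| outputlookup users.csv

Saved on a short schedule (for example, every 15 minutes), this keeps the lookup in step with the directory events: users whose most recent event is "removed" drop out automatically, and new users like MRROBOT@organization.edu appear without any manual edit.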

siem – In Splunk Enterprise Security Intelligence Downloads portion, what exactly does the “Fields” portion mean?

Trying to configure a download of MISP IoCs in Splunk ES, under Intelligence Downloads. It's working for IPs, but I can't figure out how to tell Splunk that the feed contains more than just IPs, for example domains and hashes. From the documentation found here: https://docs.splunk.com/Documentation/ES/6.1.1/Admin/Downloadgenericintellfeed it looks like I should be using the "Fields" option under "Parsing Options". However, I can't figure out how to actually make it work, because the only example has just one type of IoC in it. Additionally, the IP type only parses IPv4; is there an option for IPv6?

java – Logging an exception in Splunk

I am working on a production application and would like to log a failure scenario correctly for Splunk. Should I add the LOGGER.error call or not, given that the exception message is already emitted, just without the stack trace?

Validate.isTrue( !CollectionUtils.isEmpty(filteredGames), "No games found for (customerId={})", customer.getCustomerId());
LOGGER.error("No games found for (customerId={})", customer.getCustomerId());

The Validate.isTrue implementation:

    public static void isTrue(boolean expression, String message, Object value) {
        if (!expression) {
            // Note: the message and value are simply concatenated here, so an
            // SLF4J-style "{}" placeholder in the message is never substituted.
            throw new IllegalArgumentException(message + value);
        }
    }
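If the goal is to get the stack trace into Splunk, a common SLF4J pattern is to log once, at the point where the exception is actually handled, and pass the throwable as the final argument so the logger emits the full trace. A sketch under that assumption (names reused from the snippet above; catching IllegalArgumentException here purely for illustration):

    try {
        Validate.isTrue(!CollectionUtils.isEmpty(filteredGames),
                "No games found for (customerId=)", customer.getCustomerId());
    } catch (IllegalArgumentException e) {
        // With SLF4J, a trailing Throwable argument makes the logger print the
        // stack trace, so Splunk indexes it alongside the message.
        LOGGER.error("No games found for customerId={}", customer.getCustomerId(), e);
        throw e;
    }

Logging at the throw site and again wherever the exception is caught produces duplicate Splunk entries for a single failure, so picking one place (usually the handler) is the cleaner choice.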

siem – Splunk join search with time issues

What I'm looking for:

A join search between two sources (IPS and DHCP logs).

IPS log fields: threat, IP, hostname

DHCP log fields: IP, hostname

Goal: identify the host behind the IP that triggered the IPS alert, keeping in mind that DHCP hands the same IP to multiple hosts over time.

index=ips | join IP type=inner (search index=dhcp | fields _time,IP,HOSTNAME) | stats count by Threat,IP,Hostname

Problem: only the last value is retrieved from my DHCP index.
Say IP x.x.x.x was used by three hosts during the day: Host A, Host B, and Host C.
Host B is the host that triggered the IPS alert at 12:00, but Host C is the last host to use the IP, at 4:00 PM.

If I run my search at 17:00, it will say the IPS threat was triggered at 12:00 with the hostname of Host C, which is incorrect. It should show Host B.

Can I fix this somehow so that the right host for the IPS threat is displayed?
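One approach worth sketching avoids join entirely: interleave both indexes in time order and use streamstats to carry the most recent DHCP lease forward, so each IPS event picks up the hostname that held the IP at that moment. This assumes the DHCP field is named HOSTNAME as in the query above, and that the IPS events do not carry a field with that exact name:

(index=dhcp) OR (index=ips)
| sort 0 _time
| streamstats last(HOSTNAME) as lease_host by IP
| search index=ips
| stats count by Threat, IP, lease_host

Because the events are sorted ascending and streamstats ignores null values, each IPS event sees the lease that was active at or before its own timestamp, so the 12:00 alert is attributed to Host B even though Host C took over the IP at 16:00. (sort 0 lifts sort's default 10,000-event limit.)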

forensics – Can you provide feedback on my IR Splunk app?

In my own experience as an analyst, I was able to process cases faster and more accurately when I had some important incident information in advance. So I've created a Splunk app that surfaces this information, hoping that it can help other analysts save time, and I've been invited to talk about it at Splunk's .conf19 conference.

In preparation for my presentation, I wanted to reach out to security professionals in the community to get additional feedback on my app. I know from experience that analysts are very busy, so my goal is for the app to save analysts more time investigating incidents on day one than its setup requires. To make this possible, I have created an automated deployment wizard that installs and configures it in minutes. If you want to try my app, please let me know and I will provide you with the deployment package. I would greatly appreciate your feedback.

If you're on the fence or just want to learn more, I've created a series of 60-second videos that show the Perseus Splunk app in action so you can learn a little more about it: Perseus in 60 seconds.

Thanks a lot!