IN THIS ARTICLE
Outlines how to use audit logging in Qumulo Core
REQUIREMENTS
- Admin privileges required
- Cluster running Qumulo Core 2.12.0 and above
- A syslog server, running any of the numerous programs that implement the syslog protocol, configured to receive audit logs over TCP on the port specified in your cluster's audit logging configuration
TIP! Qumulo Core generates the audit logs but does not parse, analyze, index, or visualize the data. To capture and correlate real-time data from the logs, and to search, monitor, and examine the results in a web-style interface, check out the Qumulo Core Audit Logging with Splunk, Qumulo Core Audit Logging with Elasticsearch, and Qumulo in AWS: Audit Logging with CloudWatch (3.1.1 and above) articles.
DETAILS
Audit logging in Qumulo Core provides a mechanism for tracking filesystem operations. As connected clients issue requests to the cluster, log messages are generated describing each attempted operation. These log messages are then sent over the network to the remote syslog instance specified by the current audit configuration in compliance with RFC 5424.
Each audit log message body consists of a few fields in CSV format. Note that the user id and both file path fields are quoted since they may contain characters that need to be escaped in CSV (i.e., quotes and commas). In addition to escaping characters, we also strip all newline characters (“\n” or “\r”) from these three fields.
The fields are described below in the order in which they appear within the log message body:
192.168.1.10,"AD\alice",nfs,fs_read_data,ok,123,"/.snapshot/1_snapshot1225/dir/",""
User IP: the IP address of the user in IPv4/IPv6 format
User ID: the user that performed the action
- AD username
- Qumulo local username
- POSIX UID
- Windows SID
- Qumulo auth ID (if we fail to resolve the user ID to any of the previous types)
Logins (2.12.4 and above): any successful or unsuccessful login attempt by the user for the operations below
- Session login via the Web UI
- Session login via qq CLI
- SMB login
- NFS mount
- FTP login
Protocol: the protocol that the user request came through
- nfs
- smb
- ftp
- api
File System Operation: the operation that the user attempted
- fs_create_directory
- fs_create_file
- fs_create_hard_link
- fs_create_symlink
- fs_create (a filetype other than one of the types captured above was created)
- fs_delete
- fs_fsstat
- fs_read_metadata
- fs_list_directory
- fs_open
- fs_read_data
- fs_read_link
- fs_rename
- fs_write_data
- fs_write_metadata
Management Operation (2.12.2 and above): any operation (common examples below) that modifies cluster configuration
- auth_create_user
- smb_create_share
- smb_login
- nfs_create_export
- nfs_mount
- snapshot_create_snapshot
- replication_create_source_relationship
Error Status: “ok” if the operation succeeded, or a Qumulo-specific error status code (see table below) if the operation failed
- Keep in mind that error status codes are subject to change with new releases of Qumulo Core and may differ depending on the version you have installed on your cluster.
| Error Status | Details |
| --- | --- |
| ok | The operation was successful. |
| fs_no_such_path_error | The directory portion of the path contains a name that doesn't exist. |
| fs_no_space_error | The file system had no available space. Your cluster is 100% full. |
| fs_invalid_file_type_error | The operation isn't valid for this filetype. |
| fs_not_a_file_error | The operation (e.g., read) is only valid for a file. |
| fs_sharing_violation_error | The file or directory is opened by another party in an exclusive manner. |
| fs_no_such_entry_error | The directory, file, or inode name doesn't exist in the file system. |
| fs_access_denied_error | The user does not have access to perform the operation. |
| fs_access_perm_not_owner_error | The user would need superuser or owner access to perform the operation. |
| fs_entry_exists_error | A file system object with the given name already exists. |
| fs_directory_not_empty_error | The directory cannot be removed because it is not empty. |
| fs_no_such_inode_error | The file system object does not exist. |
| http_unauthorized_error | The user does not have access to perform the management operation. |
| share_fs_path_doesnt_exist_error | The directory does not exist on the Qumulo cluster. |
| decode_error | Invalid JSON was passed to the API. |
File ID: the ID of the file that the operation targeted
File path: the path of the file that the operation targeted
- When accessing a file through a snapshot, the path is prefixed with “/.snapshot/<snapshot-directory-name>”, which is the same path prefix used to access snapshotted files via NFS and SMB.
Secondary file path: populated only for rename and move operations
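Because the message body is CSV with quoted fields, a standard CSV parser handles the escaping for you. The sketch below splits the example line from above into its fields; the field labels are descriptive names for this example, not official identifiers:

```python
import csv
import io

# The example audit log message body from above.
line = '192.168.1.10,"AD\\alice",nfs,fs_read_data,ok,123,"/.snapshot/1_snapshot1225/dir/",""'

# Labels follow the field order described in this article.
FIELDS = [
    "user_ip", "user_id", "protocol", "operation",
    "status", "file_id", "file_path", "secondary_file_path",
]

# The csv module applies the CSV quoting and escaping rules for us.
row = next(csv.reader(io.StringIO(line)))
record = dict(zip(FIELDS, row))

print(record["operation"], record["status"])  # fs_read_data ok
```

The same approach scales to whole log files: pass the open file object to `csv.reader` and iterate over the rows.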
IMPORTANT! In order to keep the volume of audit log messages to a minimum, similar operations performed in quick succession will be deduplicated. For example, if a user reads the same file 100,000 times in one minute, only one message, corresponding to the first read, will be generated.
Configure Audit Logging via the Web UI (2.12.5 and above)
- Hover over the Cluster menu and click Audit.
- Fill in the following fields:
Remote Syslog Address: the IP address or URL of your audit data repository
Port Number: the port number of your audit data repository (default is 514)
- Click Save.
- Verify that the Status shows as "Connected" and that all configuration details are correct on the Audit page.
To change the audit configuration, click the Edit button to modify the settings or select Delete to disable audit logging on your cluster.
Configure Audit Logging via QQ CLI
Run the following qq command, including a specific IP address (or hostname) and port number, to enable Audit Logging:
qq audit_set_syslog_config --enable --server-address <syslog-server-hostname> --server-port <port-number>
To disable audit logging, use the following command:
qq audit_set_syslog_config --disable
To review the current configuration for audit logging, run the following:
qq audit_get_syslog_config
Sample Output:
qq audit_get_syslog_config
{
    "enabled": true,
    "server_address": "syslog-server-hostname",
    "server_port": 514
}
Use the qq command below to check the current state of the connection with the remote syslog instance:
qq audit_get_syslog_status
The connection_status included in the output will be one of the following:
- "AUDIT_LOG_CONNECTED": the connection with the remote syslog instance has been established and all log messages should be successfully transmitted.
- "AUDIT_LOG_DISCONNECTED": there is no active connection to the remote syslog instance. The cluster buffers outgoing log messages until the connection is reestablished, at which point the buffered messages are sent to the remote syslog instance. If a power outage or reboot occurs, all unsent messages are lost. If the message buffer fills up, new messages are discarded.
- "AUDIT_LOG_DISABLED": audit logging has been explicitly disabled.
Check out the sample output of an established syslog instance below:
qq audit_get_syslog_status
{
    "connection_status": "AUDIT_LOG_CONNECTED"
}
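If you monitor the cluster with scripts, you can feed the JSON from this command into a small check. The sketch below is illustrative; the `is_connected` helper is not part of the qq CLI, and capturing the command's stdout is left to the reader:

```python
import json

def is_connected(status_json: str) -> bool:
    """Return True only when audit logging reports an established connection."""
    status = json.loads(status_json)["connection_status"]
    return status == "AUDIT_LOG_CONNECTED"

# Example: output captured from `qq audit_get_syslog_status`.
sample = '{"connection_status": "AUDIT_LOG_CONNECTED"}'
print(is_connected(sample))  # True
```

A cron job could run `qq audit_get_syslog_status`, pass its stdout to `is_connected`, and alert when the result is False.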
NOTE: The qq commands for audit logging configuration may differ slightly depending on which version of Qumulo Core you are running. Use the command below to verify the supported qq audit commands on your cluster:
qq -h
Configuration Example with Rsyslog
Let’s walk through one example where we configure the cluster to send audit log messages to an Rsyslog process on a client machine. For this example our client will be running Ubuntu 18.04.
First, we need to update the global Rsyslog configuration to allow receiving syslog messages over TCP connections. Uncomment the following lines in /etc/rsyslog.conf to configure Rsyslog to listen for TCP connections on port 514.
# provides TCP syslog reception
module(load="imtcp")
input(type="imtcp" port="514")
Now configure Rsyslog to capture Qumulo audit log messages in their own dedicated log file and format. Create a new file called /etc/rsyslog.d/10-qumulo-audit.conf with the following contents, making sure to replace the placeholder directory path with your own log directory:
# Log file name: "/directory-storing-the-logs/node-hostname.log"
template(name="QumuloFileName" type="list") {
    constant(value="/directory-storing-the-logs/")
    property(name="hostname")
    constant(value=".log")
}

# Log message format: "timestamp,audit-msg-csv-fields..."
template(name="QumuloAuditFormat" type="list") {
    property(name="timestamp" dateFormat="rfc3339")
    constant(value=",")
    property(name="msg")
    constant(value="\n")
}

# Filter to catch all Qumulo audit log messages.
if ($app-name startswith "qumulo") then {
    action(type="omfile" dynaFile="QumuloFileName" template="QumuloAuditFormat")
    stop
}
NOTE: The syslog user must have write permission to the directory storing the logs, or an error will occur. If you do encounter an error, check your local Rsyslog log directory for details.
To apply the new Rsyslog configuration, run the following command on the client:
systemctl restart rsyslog
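Before pointing the cluster at the server, you can confirm that Rsyslog is actually accepting TCP connections. The sketch below sends one throwaway RFC 5424-style message over TCP; the hostname, app-name, and message text are placeholders for this example:

```python
import socket
from datetime import datetime, timezone

def send_test_syslog(host: str, port: int = 514, timeout: float = 5.0) -> None:
    """Open a TCP connection to the syslog server and send one test message."""
    ts = datetime.now(timezone.utc).isoformat()
    # PRI <14> = facility user (1) * 8 + severity info (6).
    msg = f"<14>1 {ts} test-client audit-check - - - connectivity test\n"
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(msg.encode("utf-8"))
```

Calling `send_test_syslog("my-rsyslog-host")` should return without raising; a `ConnectionRefusedError` or timeout means Rsyslog is not listening on that port.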
Lastly, run the following qq command to start logging audit messages to the client:
qq audit_set_syslog_config --enable --server-address <syslog-server-hostname> --server-port <port-number>
Audit log messages should now appear in a dedicated log file from each node in the cluster within the directory you specified.
Considerations
- If the connection to the remote syslog server is lost, or the audit configuration is modified, some audit log messages may be lost.
- Depending on your workflow and configuration, audit logging may impact performance on your cluster.
- For operations that rename or move files, only the new path is displayed in the audit log.
- Management operations include the specific IDs related to the change in cluster configuration (snapshot ID, replication relationship ID, SMB share ID, etc.).
TIP! Audit logs can generate a lot of data. Use logrotate (or something similar) to manage the log files on your syslog server.
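As a starting point, a minimal logrotate policy for the per-node log files might look like the sketch below; the path matches the placeholder directory from the Rsyslog file-name template above, so adjust it to your setup:

```
/directory-storing-the-logs/*.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
}
```

Placed in /etc/logrotate.d/, this keeps two weeks of compressed daily logs per node and skips missing or empty files.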
RESOLUTION
You should now be able to successfully use audit logging in Qumulo Core.
ADDITIONAL RESOURCES
Qumulo in AWS: Audit Logging with CloudWatch
Qumulo Core Audit Logging with Splunk
Qumulo Core Audit Logging with Elasticsearch
QQ CLI: Comprehensive List of Commands