|
|
# Version 9.2.2.20240415
|
|
|
##################################################################
|
|
|
# Pseudo-BNF Definitions for Search Language
|
|
|
#########################
|
|
|
#
|
|
|
#########################
|
|
|
# FORMATTING
|
|
|
#########################
|
|
|
# - Adjacent tokens implicitly allow no whitespace.
|
|
|
# - All literals are case-insensitive.
|
|
|
# - Aside from reserved characters ("<>()|?*+") and <tokens>, everything else is taken literally.
|
|
|
# Those characters need to be quoted. Use \" to represent a quote.
|
|
|
# - Examples are now broken into separate values -- example1,
|
|
|
# example2, example3. If there's a text comment that goes with
|
|
|
# exampleN then it should be in the commentN attribute. For example:
|
|
|
# "example2 = 5d" "comment2 = 5 days"
|
|
|
# - The only reserved characters are "<>()|?*+",
|
|
|
# - Whitespace (including newlines) matches \s+
|
|
|
# - Regex-like grouping
|
|
|
# (): grouping
|
|
|
# <term> : <term> is required
|
|
|
# (<term>)? : <term> is optional
|
|
|
# (<term>)* : <term> is optional and repeated 0 or more times
|
|
|
# (<term>)+ : <term> is required and repeated 1 or more times
|
|
|
# - <terms> can be named for readability with a colon and a default value
|
|
|
# "move <field:fromfield=localhost> to <field:tofield=localhost>"
|
|
|
#
|
|
|
#########################
|
|
|
# COMMAND STRUCTURE
|
|
|
#########################
|
|
|
#
|
|
|
# [command_name-command]
|
|
|
# simplesyntax = Use only if the command is complex. See SIMPLE SYNTAX section below.
|
|
|
# syntax = command_name (parameter_name=<datatype>), for example (maxlines=<int>)?
|
|
|
# See FORMATTING section above for more details
|
|
|
# alias = alias_name Optional. Only supply an alias if it is ABSOLUTELY necessary. Otherwise omit.
|
|
|
# shortdesc = Public commands only. Provide a one sentence description.
|
|
|
# description = Provide a full description. This should match the description in the Search Reference.
|
|
|
# See DESCRIPTION FORMATTING below.
|
|
|
# note = (optional) Add special notes of operation
|
|
|
# example1 = ... |command_name parameter parameter (NOTE: include common examples with 1 or more parameters)
|
|
|
# comment1 = The explanation for what the command is doing
|
|
|
# internal category = category for internal (legacy reference only)
|
|
|
# external category = category for external use (See CATEGORIES section below)
|
|
|
# usage = specify public, internal, deprecated, or fun (for easter eggs only)
|
|
|
# related = list of SPL commands related to this command
|
|
|
# tags = other words that users might search on that are similar to the command_name
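#
# - Putting these attributes together, a minimal hypothetical stanza
#   (all names and values below are illustrative only) might read:
#
#       [foo-command]
#       syntax = foo (maxrows=<int>)?
#       shortdesc = One-sentence description of foo.
#       description = Full description of foo, matching the Search Reference.
#       comment1 = Run foo, returning at most 10 rows.
#       example1 = ... | foo maxrows=10
#       category = reporting
#       usage = public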
|
|
|
#
|
|
|
#########################
|
|
|
# SIMPLE SYNTAX
|
|
|
#########################
|
|
|
# - To simplify the bnf for the end user, an optional "simplesyntax" can be specified.
|
|
|
# - Use it to remove things like (m|min|mins|minute|minutes) so that the bnf becomes understandable.
|
|
|
# - It can also be used to remove obscure/rarely used features
|
|
|
# of search commands, but that should be done sparingly and with some thought.
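#
# - For instance, a stanza could pair the full alternation
#       syntax = m|min|mins|minute|minutes
#   with the single canonical form
#       simplesyntax = minutes
#   (this is exactly what the timescale stanzas later in this file do).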
|
|
|
#########################
|
|
|
# DESCRIPTION FORMATTING
|
|
|
#########################
|
|
|
# - For a command's DESCRIPTION, when automatically converted to html:
|
|
|
# - multiple whitespace are removed
|
|
|
# - for convenience, \p\ will cause a paragraph break and \i\ a newline and indent (<br> )
|
|
|
# - <terms> are italicized, UPPERCASETERMS and quoted terms are put into <code/>
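#
# - As an illustrative snippet (attribute values are hypothetical), a
#   description written as
#       description = Summary paragraph.\p\\
#           * <maxlines> accepts values from 1 - 500. \i\\
#   renders the \p\ as a paragraph break, the \i\ as a newline plus
#   indent, and italicizes <maxlines>.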
|
|
|
#########################
|
|
|
# CATEGORIES
|
|
|
#########################
|
|
|
# The external categories are:
|
|
|
# - correlation
|
|
|
# - data::managing, data::viewing
|
|
|
# - fields::adding, fields::extracting, fields::modifying
|
|
|
# - find_anomalies
|
|
|
# - geographic_location
|
|
|
# - indexes::manage_summary
|
|
|
# - predicting_trending
|
|
|
# - reporting
|
|
|
# - results::alerting, results::appending, results::filtering, results::formatting
|
|
|
# results::generating, results::grouping, results::reading, results::reordering
|
|
|
# results::writing
|
|
|
# - search
|
|
|
# - subsearch
|
|
|
# - time
|
|
|
#
|
|
|
#########################
|
|
|
# conventions in attributes
|
|
|
#
|
|
|
# fields that begin with "_" are considered Splunk internal fields
|
|
|
# and are not used in many of the commands that by default operate
|
|
|
# on all the fields
|
|
|
#
|
|
|
# any stanza that ends with "-command" is considered a command.
|
|
|
#
|
|
|
#########################
|
|
|
# common term definitions
|
|
|
#########################
|
|
|
#
|
|
|
# field is any field, non-wildcarded
|
|
|
# wc-field represents wildcarded fields
|
|
|
# wc-string is any wildcarded string
|
|
|
#
|
|
|
#########################
|
|
|
#
|
|
|
# <field-list> ::= <field> | <field-list> <ws> <field>
|
|
|
# <field> ::= <string>
|
|
|
# <wc-field-list> ::= <wc-field> | <wc-field-list> <ws> <wc-field>
|
|
|
# <wc-field> ::= <wc-string>
|
|
|
# <wc-string> ::= <string>
|
|
|
#
|
|
|
# <field-and-value> ::= ? a field and value separated by a :: e.g. host::localhost ?
|
|
|
# <field-and-value-list> ::= <field-and-value> | <field-and-value-list> <ws> <field-and-value>
|
|
|
#
|
|
|
# <tag> ::= <string>
|
|
|
# <tag-list> ::= <tag> | <tag-list> <ws> <tag>
|
|
|
#
|
|
|
# <bool> ::= <true> | <false>
|
|
|
# <true> ::= T | TRUE
|
|
|
# <false> ::= F | FALSE
|
|
|
#
|
|
|
# <string> ::= ? <unquoted-string> | <double-quoted-string> ?
|
|
|
# <unquoted-string> ::= ? any unbroken sequence of alphanumeric characters plus underscore ?
|
|
|
# <double-quoted-string> ::= ? any string enclosed by double quotes ?
|
|
|
#
|
|
|
# <int> ::= ? any integer ?
|
|
|
#
|
|
|
# We do not support the 0th and 100th percentiles
|
|
|
# with the perc aggregator; to achieve those, use the min() and max() aggregators.
|
|
|
# <percentile> ::= ? any integer between 1 and 99 ?
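#
# For example (hypothetical field name), instead of a 0th or 100th
# percentile of "delay", use:
#   ... | stats min(delay) max(delay)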
|
|
|
# <num> ::= ? any real number ?
|
|
|
#
|
|
|
# <filename> ::= ? a valid path to a file on the server ?
|
|
|
#
|
|
|
# [filename]
|
|
|
# syntax = <string>
|
|
|
# description = A path on the filesystem of the server
|
|
|
#
|
|
|
# [term]
|
|
|
# syntax = [a-zA-Z0-9_-]+
|
|
|
#
|
|
|
# [command-pipeline]
|
|
|
# syntax = <generating-command> (| <command>)*
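# comment = Illustrative pipeline only: a generating command (search)
#           piped into another command.
# example = search error | top source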
|
|
|
#
|
|
|
# [boolean-operator]
|
|
|
# syntax = <boolean-operator-not>|<boolean-operator-and>|<boolean-operator-or>
|
|
|
#
|
|
|
# [boolean-operator-not]
|
|
|
# syntax = NOT
|
|
|
# description = Case-sensitive boolean operator.
|
|
|
#
|
|
|
# [boolean-operator-and]
|
|
|
# syntax = AND
|
|
|
# description = Case-sensitive boolean operator.
|
|
|
#
|
|
|
# [boolean-operator-or]
|
|
|
# syntax = OR
|
|
|
# description = Case-sensitive boolean operator.
|
|
|
#
|
|
|
###########################################################################
|
|
|
# COMMAND LIST:
|
|
|
#
|
|
|
##################
|
|
|
# NOTE:
|
|
|
# pre* commands are not included in this list as all pre* commands are the
|
|
|
# map portion of the original command with exactly the same syntax and cannot be
|
|
|
# accessed externally (via CLI or UI)
|
|
|
# (e.g. prestats / stats, prediscretize / discretize, prededup / dedup)
|
|
|
##################
|
|
|
#
|
|
|
# !!PLEASE DEFINE THESE UNDEFINED STANZAS:
|
|
|
#
|
|
|
# Ignoring undefined stanza: search-pipeline
|
|
|
# Ignoring undefined stanza: search-directive
|
|
|
# Ignoring undefined stanza: quoted-str
|
|
|
|
|
|
|
|
|
##################
|
|
|
# abstract/excerpt
|
|
|
##################
|
|
|
[abstract-command]
|
|
|
syntax = abstract (maxterms=<int>)? (maxlines=<int>)?
|
|
|
alias = excerpt
|
|
|
shortdesc = Shortens the text of results to a brief summary representation.
|
|
|
description = Produce an abstract -- a summary or brief representation -- of the text of search results. The original text is replaced by the summary, which is produced by a scoring mechanism. If the event is larger than the selected maxlines, lines with more terms and with more terms on adjacent lines are preferred over lines with fewer terms. If a line has a search term, its neighboring lines also partially match and may be returned to provide context. When there are gaps between the selected lines, lines are prefixed with "...". \p\\
|
|
|
If the text of a result has no more lines than maxlines, no change will occur.\i\\
|
|
|
* <maxlines> accepts values from 1 - 500. \i\\
|
|
|
* <maxterms> accepts values from 1 - 1000.
|
|
|
commentcheat = Show a summary of up to 5 lines for each search result.
|
|
|
examplecheat = ... |abstract maxlines=5
|
|
|
category = formatting
|
|
|
usage = public
|
|
|
related = highlight
|
|
|
tags = condense summarize summary outline pare prune shorten skim snip sum trim
|
|
|
|
|
|
|
|
|
##################
|
|
|
# accum
|
|
|
##################
|
|
|
[accum-command]
|
|
|
syntax = accum <field> (AS <field>)?
|
|
|
shortdesc = Keeps a running total of a specified numeric field.
|
|
|
description = For each event where <field> is a number, keep a running total of the sum of this number and write it out to either the same field, or a new field if specified.
|
|
|
comment1 = Save the running total of "count" in a field called "total_count".
|
|
|
example1 = ... | accum count AS total_count
|
|
|
usage = public
|
|
|
category = fields::add
|
|
|
tags = total sum accumulate
|
|
|
related = autoregress, delta, streamstats, trendline
|
|
|
|
|
|
##################
|
|
|
# addcoltotals
|
|
|
##################
|
|
|
|
|
|
[addcoltotals-command]
|
|
|
syntax = addcoltotals (labelfield=<field>)? (label=<string>)? <field-list>?
|
|
|
shortdesc = Appends a new result to the end of the search result set.
|
|
|
description = Appends a new result to the end of the search result set.\
|
|
|
The result contains the sum of each numeric field or you can specify which fields\
|
|
|
to summarize. Results are displayed on the Statistics tab. If the labelfield argument\
|
|
|
is specified, a column is added to the statistical results table with the name\
|
|
|
specified.
|
|
|
comment1 = Compute the sums of all the fields, and put the sums in a summary event called "change_name".
|
|
|
example1 = ... | addcoltotals labelfield=change_name label=ALL
|
|
|
comment2 = Add a column total for two specific fields in a table.
|
|
|
example2 = sourcetype=access_* | table userId bytes avgTime duration | addcoltotals bytes duration
|
|
|
comment3 = Augment a chart with a total of the values present.
|
|
|
example3 = index=_internal source=*metrics.log group=pipeline |stats avg(cpu_seconds) by processor |addcoltotals labelfield=processor
|
|
|
category = reporting
|
|
|
usage = public
|
|
|
related = stats
|
|
|
tags = total add calculate sum
|
|
|
|
|
|
##################
|
|
|
# addinfo
|
|
|
##################
|
|
|
[addinfo-command]
|
|
|
syntax = addinfo
|
|
|
shortdesc = Add fields that contain common information about the current search.
|
|
|
description = Adds global information about the search to each event. The addinfo command is primarily an internal component of summary indexing. \i\\
|
|
|
Currently the following fields are added: \i\\
|
|
|
"info_min_time" - the earliest time bound for the search \i\\
|
|
|
"info_max_time" - the latest time bound for the search \i\\
|
|
|
"info_search_id" - query id of the search that generated the event \i\\
|
|
|
"info_search_time" - time when the search was executed.
|
|
|
comment = Add information about the search to each event.
|
|
|
example = ... | addinfo
|
|
|
usage = public
|
|
|
tags = search info
|
|
|
category = fields::add
|
|
|
related = search
|
|
|
|
|
|
##################
|
|
|
# addtotals
|
|
|
##################
|
|
|
|
|
|
[addtotals-command]
|
|
|
syntax = addtotals (row=<bool>)? (col=<bool>)? (labelfield=<field>)? (label=<string>)? (fieldname=<field>)? <field-list>
|
|
|
shortdesc = Computes the sum of all numeric fields for each result.
|
|
|
description = If "row=t" (default if invoked as 'addtotals') for each result, computes the arithmetic sum of all\
|
|
|
numeric fields that match <field-list> (wildcarded field list).\
|
|
|
If the list is empty, all fields are considered.\
|
|
|
The sum is placed in the specified field or "Total" if none was specified.\
|
|
|
If "col=t" (default if invoked as 'addcoltotals'), adds a new result at the end that represents the sum of each field.\
|
|
|
LABELFIELD, if specified, is a field that will be added to this summary \
|
|
|
event with the value set by the 'label' option.
|
|
|
comment1 = Compute the sums of the numeric fields of each result.
|
|
|
example1 = ... | addtotals
|
|
|
comment2 = Compute the sums of the numeric fields that match the given list, and save the sums in the field "sum".
|
|
|
example2 = ... | addtotals fieldname=sum foobar* *baz*
|
|
|
comment3 = Compute the sums of all the fields, and put the sums in a summary event called "change_name".
|
|
|
example3 = ... | addtotals col=t labelfield=change_name label=ALL
|
|
|
commentcheat = Calculate the sums of the numeric fields of each result, and put the sums in the field "sum".
|
|
|
examplecheat = ... | addtotals fieldname=sum
|
|
|
category = reporting
|
|
|
usage = public
|
|
|
related = stats
|
|
|
tags = total add calculate sum
|
|
|
|
|
|
##################
|
|
|
# analyzefields
|
|
|
##################
|
|
|
[analyzefields-command]
|
|
|
syntax = analyzefields classfield=<field>
|
|
|
shortdesc = Finds degree of correlation between a target discrete field and other numerical fields.
|
|
|
description = Using <field> as a discrete random variable, analyze all *numerical* fields to determine the ability for each of those fields to "predict" the value of the classfield.\
|
|
|
In other words, analyzefields determines the stability of the relationship between values in the target classfield and numeric values in other fields. \i\\
|
|
|
As a reporting command, analyzefields consumes all input results, and generates one output result per identified numeric field. \i\\
|
|
|
For best results, classfield should have 2 distinct values, although multi-class analysis is possible.
|
|
|
comment1 = Analyze the numerical fields to predict the value of "is_activated".
|
|
|
example1 = ... | analyzefields classfield=is_activated
|
|
|
usage = public beta
|
|
|
alias = af
|
|
|
tags = analyze predict
|
|
|
category = reporting
|
|
|
related = anomalousvalue
|
|
|
|
|
|
|
|
|
##################
|
|
|
# anomalies
|
|
|
##################
|
|
|
|
|
|
[anomalies-command]
|
|
|
syntax = anomalies (threshold=<num>)? (labelonly=<bool>)? (normalize=<bool>)? (maxvalues=<int>)? (field=<field>)? (denylist=<filename>)? (denylistthreshold=<num>)? (<by-clause>)?
|
|
|
shortdesc = Computes an "unexpectedness" score for an event.
|
|
|
description = Determines the degree of "unexpectedness" of an event's field \
|
|
|
value, based on the previous MAXVALUES events. By default it \
|
|
|
removes events that are well-expected, keeping only events whose \
|
|
|
unexpectedness exceeds THRESHOLD. The default THRESHOLD is 0.01. If LABELONLY is true, \
|
|
|
no events are removed, and the "unexpectedness" attribute is set \
|
|
|
on all events. The FIELD analyzed by default is "_raw". \
|
|
|
By default, NORMALIZE is true, which normalizes numerics. For cases \
|
|
|
where FIELD contains numeric data that should not be normalized, but \
|
|
|
treated as categories, set NORMALIZE=false. The \
|
|
|
DENYLIST is a name of a csv file of events in \
|
|
|
$SPLUNK_HOME/var/run/splunk/<DENYLIST>.csv, such that any incoming \
|
|
|
events that are similar to the denylisted events are treated as \
|
|
|
not anomalous (i.e., uninteresting) and given an unexpectedness \
|
|
|
score of 0.0. Events that match denylisted events with a \
|
|
|
similarity score above DENYLISTTHRESHOLD (defaulting to 0.05) are \
|
|
|
treated as matches. The inclusion of a 'by' clause allows the \
|
|
|
specification of a list of fields to segregate results for anomaly \
|
|
|
detection. For each combination of values for the specified \
|
|
|
field(s), events with those values are treated entirely separately. \
|
|
|
Therefore, 'anomalies by source' will look for anomalies in each \
|
|
|
source separately -- a pattern in one source will not affect whether \
|
|
|
it is anomalous in another source.
|
|
|
comment1 = Return only anomalous events.
|
|
|
example1 = ... | anomalies
|
|
|
comment2 = Show most interesting events first, ignoring any in the denylist 'boringevents'.
|
|
|
example2 = ... | anomalies denylist=boringevents | sort -unexpectedness
|
|
|
comment3 = Use with transactions to find regions of time that look unusual.
|
|
|
example3 = ... | transaction maxpause=2s | anomalies
|
|
|
usage = public
|
|
|
related = anomalousvalue, cluster, kmeans, outlier
|
|
|
tags = anomaly unusual odd irregular dangerous unexpected outlier
|
|
|
category = results::filter
|
|
|
|
|
|
##################
|
|
|
# anomalousvalue
|
|
|
##################
|
|
|
|
|
|
[anomalousvalue-command]
|
|
|
syntax = anomalousvalue <av-option>* <anovalue-action-option>? <anovalue-pthresh-option>? <field-list>?
|
|
|
shortdesc = Finds and summarizes irregular, or uncommon, search results.
|
|
|
description = Identifies or summarizes the values in the data that are anomalous either by frequency of occurrence \
|
|
|
or number of standard deviations from the mean. If a field-list is given, only those fields are \
|
|
|
considered. Otherwise, all non-internal fields are considered. \p\\
|
|
|
For fields that are considered anomalous, a new field is added with the following scheme. \
|
|
|
If the field is numeric, e.g. \"size\", the new field will be \"Anomaly_Score_Num(size)\". \
|
|
|
If the field is non-numeric, e.g. \"name\", the new field will be \"Anomaly_Score_Cat(name)\".
|
|
|
comment1 = Return only uncommon values.
|
|
|
example1 = ... | anomalousvalue
|
|
|
commentcheat = Return events with uncommon values.
|
|
|
examplecheat = ... | anomalousvalue action=filter pthresh=0.02
|
|
|
category = reporting
|
|
|
usage = public
|
|
|
related = af, anomalies, cluster, kmeans, outlier
|
|
|
tags = anomaly unusual odd irregular dangerous unexpected
|
|
|
|
|
|
[av-option]
|
|
|
syntax=(minsupcount=<int>)|(maxanofreq=<num>)|(minsupfreq=<num>)|(minnormfreq=<num>)
|
|
|
description = Parameters to the anomalousvalue command. \
|
|
|
minsupcount is the minimum number of rows that must contain a field in order to consider the field at all. \
|
|
|
maxanofreq is the maximum frequency (as a decimal) for a value to be considered anomalous. \
|
|
|
minsupfreq is the minimum support frequency. A field must be in at least this fraction of overall events to be considered. \
|
|
|
minnormfreq is the minimum normal frequency. A field's values must be considered normal at least this fraction of times \
|
|
|
or else the field is not considered for determining if the event is anomalous.
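# Illustrative use of these options (field and values are hypothetical):
# require a field to appear in at least 5 events, and never flag values
# that occur in more than 20% of events:
#   ... | anomalousvalue minsupcount=5 maxanofreq=0.2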
|
|
|
|
|
|
|
|
|
[anovalue-action-option]
|
|
|
syntax = action=(annotate|filter|summary)
|
|
|
description = If action is ANNOTATE, new fields will be added to the event containing anomalous values that \
|
|
|
indicate the anomaly scores of the values. \
|
|
|
If action is FILTER, events with anomalous value(s) are retained while events without anomalous values are dropped. \
|
|
|
If action is SUMMARY, a table summarizing the anomaly statistics for each field is generated.
|
|
|
default = "action=filter"
|
|
|
|
|
|
[anovalue-pthresh-option]
|
|
|
syntax = pthresh=<num>
|
|
|
description = Probability threshold (as a decimal) that has to be met for a value to be deemed anomalous
|
|
|
default = "pthresh=0.01"
|
|
|
|
|
|
##################
|
|
|
# anomalydetection
|
|
|
##################
|
|
|
|
|
|
[anomalydetection-command]
|
|
|
syntax = anomalydetection <anoma-method-option>? <anoma-action-option>? <anoma-pthresh-option>? <anoma-cutoff-option>? <field-list>?
|
|
|
shortdesc = Find anomalous events in a set of search results.
|
|
|
description = Identify anomalous events by computing a probability for each event and then detecting unusually small probabilities. \
|
|
|
The probability is defined as the product of the frequencies of each individual field value in the event. \
|
|
|
For categorical fields, the frequency of a value X is the number of times X occurs divided by the total number of events. \
|
|
|
For numerical fields, we first build a histogram for all the values, then compute the frequency of a value X \
|
|
|
as the size of the bin that contains X divided by the number of events. \
|
|
|
Missing values are treated by adding a special value and updating its count just like a normal value. \
|
|
|
Histograms are built using the standard Scott's rule to determine the bin width. \
|
|
|
The way probabilities are computed is called the Naive Bayes method, which means the individual fields are considered independent. \
|
|
|
This is a simplification to make the command reasonably fast.
|
|
|
example1 = ... | anomalydetection
|
|
|
comment1 = Return only anomalous events.
|
|
|
example2 = ... | anomalydetection action=summary
|
|
|
comment2 = Return a short summary of how many anomalous events there are, plus other statistics such as the threshold value used to detect them.
|
|
|
category = streaming, reporting
|
|
|
usage = public
|
|
|
related = anomalies, anomalousvalue, outlier, cluster, kmeans
|
|
|
tags = anomaly unusual odd irregular dangerous unexpected Bayes
|
|
|
|
|
|
[anoma-method-option]
|
|
|
syntax = method=(histogram|zscore|iqr)
|
|
|
description = There are three methods instead of one because we've combined two older commands, anomalousvalue and outlier, with the new method. \
|
|
|
The new method is invoked by choosing the 'histogram' option. The anomalousvalue command is invoked by choosing 'zscore', \
|
|
|
and the outlier command is invoked by choosing 'iqr'. \
|
|
|
Below we will describe other options associated with the histogram method. For the other two methods, the associated options are \
|
|
|
exactly the same as before. That is, the queries '...| anomalousvalue ...' and '...| anomalydetection method=zscore ...' where the '...' are \
|
|
|
exactly the same in the two queries will produce exactly the same outputs. The same scheme applies to outlier.
|
|
|
example1 = ... | anomalydetection method=zscore action=filter pthresh=0.05
|
|
|
comment1 = This query returns the same output as that returned by the query '... | anomalousvalue action=filter pthresh=0.05'.
|
|
|
example2 = ... | anomalydetection method=iqr action=tf param=4 uselower=true mark=true
|
|
|
comment2 = This query returns the same output as that returned by the query '...| outlier action=tf param=4 uselower=true mark=true'.
|
|
|
default = "method=histogram"
|
|
|
|
|
|
[anoma-action-option]
|
|
|
syntax = action=(filter|annotate|summary) if the method is histogram or zscore \
|
|
|
action=(transform|tf|remove|rm) if the method is iqr
|
|
|
description = If the method is zscore or iqr, then the actions have the same meaning as in the anomalousvalue and outlier commands. \
|
|
|
If the method is histogram, then the meanings are:\
|
|
|
If action is FILTER, anomalous events are retained while others are dropped. \
|
|
|
If action is ANNOTATE, new fields will be added to anomalous events that indicate the probability of the event as well as which field \
|
|
|
may be the cause of the anomaly. \
|
|
|
If the action is SUMMARY, a table summarizing the anomaly statistics for the search results is generated.
|
|
|
default = "action=filter"
|
|
|
comment = This is the default action when method=histogram. If method is zscore or iqr, then the default action is the default ones for those commands, \
|
|
|
i.e., 'filter' for zscore and 'transform' for iqr
|
|
|
|
|
|
[anoma-pthresh-option]
|
|
|
syntax = pthresh=<num>
|
|
|
description = First, this option only applies when the method is either histogram or zscore. An invalid argument error will be returned if the method is iqr. \
|
|
|
In the histogram case, it means the probability (as a decimal) that has to be met for an event to be deemed anomalous. \
|
|
|
In the zscore case, it means the same as in the anomalousvalue command.
|
|
|
default = If the method is zscore, then the default is 0.01 (as in the anomalousvalue command). If the method is histogram, the default is not any fixed value \
|
|
|
but is instead calculated for each data set during the analysis.
|
|
|
|
|
|
[anoma-cutoff-option]
|
|
|
syntax = cutoff=<bool>
|
|
|
description = This option applies to histogram method only. If the cutoff is false, the algorithm uses the formula threshold = 1st-quartile - 1.5*iqr without \
|
|
|
modification. If the cutoff is true, the algorithm modifies the above formula in order to come up with a smaller number of anomalies.
|
|
|
default = "cutoff=true"
|
|
|
|
|
|
|
|
|
##################
|
|
|
# append
|
|
|
##################
|
|
|
[append-command]
|
|
|
syntax = append (<subsearch-options>)? <subsearch>
|
|
|
shortdesc = Appends the results of a subsearch to the current results.
|
|
|
description = Append the results of a subsearch as additional results at the end of the current results.
|
|
|
comment = Append the tabular results of errors to the current results.
|
|
|
example = ... | chart count by category1 | append [search error | chart count by category2]
|
|
|
|
|
|
usage = public
|
|
|
tags = append join combine unite
|
|
|
category = results::append
|
|
|
related = appendcols, join, set
|
|
|
|
|
|
[subsearch-options]
|
|
|
syntax = (extendtimerange=<bool>)? (maxtime=<int>)? (maxout=<int>)? (timeout=<int>)?
|
|
|
description = You can specify one or more of these options.\
|
|
|
The extendtimerange option specifies whether to include the subsearch time range \
|
|
|
in the time range for the entire search. The default is false. \
|
|
|
The maxtime option specifies the maximum number of seconds to run the subsearch before finalizing. \
|
|
|
The maxout option specifies the maximum number of results to return from the subsearch. \
|
|
|
The timeout option specifies the maximum amount of time, in seconds, to cache the subsearch results.
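# Illustrative combination of these options (limits are hypothetical):
# cap the subsearch at 60 seconds and 1000 results before appending:
#   ... | append maxtime=60 maxout=1000 [search error | chart count by category2]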
|
|
|
|
|
|
##################
|
|
|
# appendcols
|
|
|
##################
|
|
|
[appendcols-command]
|
|
|
syntax = appendcols (override=<bool> | <subsearch-options>)? <subsearch>
|
|
|
shortdesc = Appends the fields of the subsearch results to the current results, first result to first result, second to second, and so on.
|
|
|
description = Appends fields of the results of the subsearch into the input search results by combining the external fields of the subsearch (fields that do not start with '_') into the current results. The first subsearch result is merged with the first main result, the second with the second, and so on. If the override option is false (the default) and a field is present in both a subsearch result and the main result, the main result's value is used. If override is true, the subsearch result's value for that field is used.
|
|
|
comment = Search for "404" events and append the fields in each event to the previous search results.
|
|
|
example = ... | appendcols [search 404]
|
|
|
usage = public
|
|
|
tags = append join combine unite
|
|
|
category = fields::add
|
|
|
related = append, join, set
|
|
|
|
|
|
##################
|
|
|
# appendpipe
|
|
|
##################
|
|
|
[appendpipe-command]
|
|
|
syntax = appendpipe (run_in_preview=<bool>)? [<subpipeline>]
|
|
|
description = Appends the results of the subpipeline, applied to the current result set, to the current results.
|
|
|
comment = Append subtotals for each action across all users
|
|
|
example = index=_audit | stats count by action user | appendpipe [stats sum(count) as count by action | eval user = "ALL USERS"] | sort action
|
|
|
usage = public
|
|
|
tags = append join combine unite
|
|
|
category = results::append
|
|
|
related = append appendcols join set
|
|
|
|
|
|
|
|
|
##################
|
|
|
# arules
|
|
|
##################
|
|
|
|
|
|
[arules-command]
|
|
|
syntax = arules (<arules-option> )* <field-list>
|
|
|
shortdesc = Finds the association rules between field values.
|
|
|
description = Finds association rules between values. This is the algorithm behind most online \
|
|
|
shopping websites. When a customer buys an item, these sites are able to recommend \
|
|
|
related items that other customers also buy when they buy the first one. Arules finds such relationships not only for \
|
|
|
shopping items but for any kind of field. Note that strictly speaking, arules does not find relationships between fields, but rather \
|
|
|
between the values of the fields.
|
|
|
usage = public
|
|
|
example1 = ... | arules field1 field2 field3
|
|
|
comment1 = Running arules with default support (=3) and confidence (=.5) \
|
|
|
The minimum number of fields is 2. There is no maximum restriction.
|
|
|
example2 = ... | arules sup=3 conf=.6 field1 field2 field3
|
|
|
comment2 = The sup option must be a positive integer \
|
|
|
The conf option must be a float between 0 and 1 \
|
|
|
In general, the higher the support, the less noisy the output will be. However, setting the support too high may exclude too much \
|
|
|
useful data in some circumstances. The conf option should be at least 0.5, otherwise the associations will not be significant. The higher \
|
|
|
the conf, the more significant the associations will be, but at the expense of retaining fewer associations.
|
|
|
|
|
|
category = streaming, reporting
|
|
|
related = associate, correlate
|
|
|
tags = associate contingency correlate correspond dependence independence
|
|
|
|
|
|
[arules-option]
|
|
|
syntax = (sup=<int>)|(conf=<num>)
|
|
|
default = ("sup=3" | "conf=.5")
|
|
|
description = The sup option specifies a required level of support, or computed level of association between fields. Support is expressed as the output Support and Implied Support fields. The conf option specifies a measure of how certain the algorithm is about that association. Confidence is expressed as the output Strength field. (For example a small number of datapoints that are entirely equivalent would have high support but low confidence.) For either option, associations which are below the limits will not be included in output results.
|
|
|
|
|
|
|
|
|
##################
|
|
|
# require
|
|
|
##################
|
|
|
|
|
|
[require-command]
|
|
|
syntax = require
|
|
|
shortdesc = Causes a search failure if the preceding search returned zero events or results.
|
|
|
description = Causes a search failure if the preceding search returns zero \
|
|
|
events or results. This command prevents the Splunk platform \
|
|
|
from running a zero-result search when continuing to do so might \
|
|
|
have negative side effects, such as generating false positives, \
|
|
|
running a custom search command that makes costly API calls, or \
|
|
|
creating an empty search filter via a subsearch. This command \
|
|
|
cannot be used in real-time searches.
|
|
|
usage = public
|
|
|
comment1 = Stop executing the preceding search if it returns zero events or results.
|
|
|
example1 = ... | require
|
|
|
comment2 = Raise an exception if the subsearch returns zero events or results, stopping execution of the parent search.
|
|
|
example2 = ... [ search index=other_index NOSUCHVALUE | require ]
|
|
|
|
|
|
|
|
|
##################
|
|
|
# associate
|
|
|
##################
|
|
|
|
|
|
[associate-command]
|
|
|
syntax = associate (<associate-option> )* <field-list>?
|
|
|
shortdesc = Identifies correlations between fields.
|
|
|
description = Searches for relationships between pairs of fields. More specifically, this command tries to identify \
|
|
|
cases where the entropy of field1 decreases significantly based on the condition of field2=value2. \
|
|
|
field1 is known as the target key and field2 the reference key and value2 the reference value. \
|
|
|
If a list of fields is provided, analysis will be restricted to only those fields. By default, all fields \
|
|
|
are used.
|
|
|
usage = public
|
|
|
comment1 = Analyze all fields to find a relationship.
|
|
|
example1 = ... | associate
|
|
|
comment2 = Analyze all events from host "reports" and return results associated with each other.
|
|
|
example2 = host="reports" | associate supcnt=50 supfreq=0.2 improv=0.5
|
|
|
commentcheat = Return results associated with each other (that have at least 3 references to each other).
|
|
|
examplecheat = ... | associate supcnt=3
|
|
|
category = reporting
|
|
|
related = correlate, contingency
|
|
|
tags = associate contingency correlate connect link correspond dependence independence
|
|
|
|
|
|
[associate-option]
|
|
|
syntax = <associate-supcnt-option>|<associate-supfreq-option>|<associate-improv-option>
|
|
|
description = Associate command options
|
|
|
|
|
|
[associate-supcnt-option]
|
|
|
syntax = supcnt=<int>
|
|
|
description = Minimum number of times the reference key=reference value combination must appear. \
|
|
|
Must be a non-negative integer.
|
|
|
default = "supcnt=100"
|
|
|
|
|
|
[associate-supfreq-option]
|
|
|
syntax = supfreq=<num>
|
|
|
description = Minimum frequency of reference key=reference value combination, as a fraction of the number of total events.
|
|
|
default = "supfreq=0.1"
|
|
|
|
|
|
[associate-improv-option]
|
|
|
syntax = improv=<num>
|
|
|
description = Minimum entropy improvement for target key. That is, \
|
|
|
entropy(target key) - entropy(target key given reference key/value) \
|
|
|
must be greater than or equal to this.
|
|
|
default = "improv=0.5"
|
|
|
|
|
|
##################
|
|
|
# autoregress
|
|
|
##################
|
|
|
[autoregress-command]
|
|
|
syntax = autoregress <field> (AS <field:newfield>)? (p=<int:p_start>("-"<int:p_end>)?)?
|
|
|
shortdesc = Prepares events or results for calculating the moving average.
|
|
|
description = Sets up data for auto-regression (e.g. moving average) by copying one or more of the previous values for <field> into each event. If <newfield> is provided, one prior value will be copied into <newfield> from a count of 'p' events prior. In this case, 'p' must be a single integer. If <newfield> is not provided, the single or multiple values will be copied into fields named '<field>_p<p-val>'. In this case 'p' may be a single integer, or a range <p_start>-<p_end>. For a range, the values will be copied from 'p_start' events prior to 'p_end' events prior. If the 'p' option is unspecified, it defaults to 1 (i.e., copy only the single previous value of <field> into <field>_p1). The first few events will lack previous values, since they do not exist.
|
|
|
comment1 = Calculate a moving average of event size; the first N average numbers are omitted by eval since summing null fields results in null.
|
|
|
example1 = ... | eval rawlen=len(_raw) | autoregress rawlen p=1-4 | eval moving_average = (rawlen + rawlen_p1 + rawlen_p2 + rawlen_p3 + rawlen_p4) / 5
|
|
|
comment2 = For each event, copy the 2nd, 3rd, 4th, and 5th previous values of the 'count' field into the respective fields 'count_p2', 'count_p3', 'count_p4', and 'count_p5'.
|
|
|
example2 = ... | autoregress count p=2-5
|
|
|
comment3 = For each event, copy the 3rd previous value of the 'foo' field into the field 'oldfoo'.
|
|
|
example3 = ... | autoregress foo AS oldfoo p=3
|
|
|
usage = public
|
|
|
tags = average mean
|
|
|
alias = ar
|
|
|
category = reporting
|
|
|
related = accum, delta, streamstats, trendline
|
|
|
|
|
|
|
|
|
##################
|
|
|
# bin
|
|
|
##################
|
|
|
|
|
|
[bin-command]
|
|
|
syntax = bin (<bin-options> )* <field> (as <field>)?
|
|
|
alias = bucket, discretize
|
|
|
shortdesc = Puts continuous numerical values into discrete sets.
|
|
|
description = Puts continuous numerical field values into discrete sets, or bins. Adjusts the value of 'field', so that all items in the set have the same value for 'field'. Note: Bin is called by chart and timechart automatically and is only needed for statistical operations that timechart and chart cannot process.
|
|
|
usage = public
|
|
|
commentcheat1 = Separate search results into 10 bins, and return the count of raw events for each bin.
|
|
|
examplecheat1 = ... | bin size bins=10 | stats count(_raw) by size
|
|
|
commentcheat2 = Return the average "thruput" of each "host" for each 5 minute time span.
|
|
|
examplecheat2 = ... | bin _time span=5m | stats avg(thruput) by _time host
|
|
|
category = reporting
|
|
|
related = chart, timechart
|
|
|
tags = bucket band bracket bin round chunk lump span
|
|
|
|
|
|
[bin-options]
|
|
|
syntax = (<bin-bins> <bin-minspan>?)|<bin-span>|<bin-start-end>|<bin-aligntime>
|
|
|
description = Discretization options.
|
|
|
|
|
|
[bin-minspan]
|
|
|
syntax = minspan=(<span-length>)
|
|
|
description = Specifies the smallest span granularity to use when automatically inferring span from the data time range.
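# Illustrative use (values are hypothetical): allow at most 24 bins on
# _time, but with a span no finer than 5 minutes:
#   ... | bin _time bins=24 minspan=5m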
|
|
|
|
|
|
[bin-bins]
|
|
|
syntax = bins=<int>
|
|
|
description = Sets the maximum number of bins to discretize into. \
|
|
|
Given this upper-bound guidance, the bins will snap to \
|
|
|
human sensible bounds.
|
|
|
note = The actual number of bins will almost certainly be smaller than the given number.
|
|
|
example1 = bins=10
|
|
|
|
|
|
[bin-span]
|
|
|
syntax = span=(<span-length>|<log-span>)
|
|
|
description = Sets the size of each bin.
|
|
|
comment1 = set span to 10 seconds
|
|
|
example1 = span=10
|
|
|
comment2 = set span to 2 days
|
|
|
example2 = span=2d
|
|
|
comment3 = set span to 5 minutes
|
|
|
example3 = span=5m
|
|
|
|
|
|
[bin-aligntime]
|
|
|
syntax = aligntime=(earliest|latest|<bin-time-specifier>)
|
|
|
description = Align the bin times to something other than base UTC time (epoch 0). \
|
|
|
Only valid when doing a time based discretization. \
|
|
|
Ignored if span is in days/months/years.
|
|
|
example1 = aligntime=earliest
|
|
|
comment1 = Align time bins to the earliest time of the search.
|
|
|
example2 = aligntime=@d+3h
|
|
|
comment2 = Align time bins to 3am (local time). If span=12h, this means the bins \
|
|
|
will represent 3am - 3pm, then 3pm - 3am (next day), etc.
|
|
|
example3 = aligntime=1500567890
|
|
|
comment3 = Align to the specific UTC time of 1500567890.
|
|
|
|
|
|
[log-span]
|
|
|
syntax = (<num>)?log(<num>)?
|
|
|
description = Sets a log-based span; the first number is the coefficient, the second number is the base. \
|
|
|
The coefficient, if supplied, must be a real number >= 1.0 and < base. \
|
|
|
The base, if supplied, must be a real number > 1.0 (strictly greater than 1).
|
|
|
example1 = log
|
|
|
comment1 = set log span of base 10, coeff 1.0, e.g. ...,0.1,1,10,100,...
|
|
|
example2 = 2log5
|
|
|
comment2 = set log span of base 5, coeff 2.0, e.g. ...,0.4,2,10,50,250,1250,...
|
|
|
|
|
|
[bin-start-end]
|
|
|
syntax = (start|end)=<num>
|
|
|
description = Sets the minimum and maximum extents for numerical bins.
|
|
|
note = Data outside of the [start, end] range is discarded.
|
|
|
|
|
|
[bin-time-specifier]
|
|
|
syntax = (<iso8601-timestamp>|<epoch>|<relative-time-modifier>)
|
|
|
description = An ISO8601 timestamp, epoch timestamp, or Splunk relative time modifier.
|
|
|
|
|
|
[epoch]
|
|
|
syntax = <num>
|
|
|
description = An epoch timestamp.
|
|
|
example1 = 872835240
|
|
|
|
|
|
[iso8601-timestamp]
|
|
|
syntax = <string>
|
|
|
description = An ISO8601 timestamp.
|
|
|
example1 = 2012-04-05T11:52:43-07:00
|
|
|
|
|
|
[iso8601-msecs-timestamp]
|
|
|
syntax = <string>
|
|
|
description = An ISO8601 timestamp with milliseconds.
|
|
|
example1 = 2012-04-05T11:52:43.123-07:00
|
|
|
|
|
|
[relative-time-modifier]
|
|
|
syntax = <string>
|
|
|
description = A Splunk relative time modifier.
|
|
|
example1 = 1d@d
|
|
|
|
|
|
[span-length]
|
|
|
syntax = <int>(<timescale>)?
|
|
|
description = Span of each bin. \
|
|
|
If using a timescale, this is used as a time range.\
|
|
|
If not, this is an absolute bin "length."
|
|
|
comment1 = set span to 10 seconds
|
|
|
example1 = 10
|
|
|
comment2 = set span to 2 days
|
|
|
example2 = 2d
|
|
|
comment3 = set span to 5 minutes
|
|
|
example3 = 5m
|
|
|
|
|
|
[timescale]
|
|
|
syntax = <ts-sec>|<ts-min>|<ts-hr>|<ts-day>|<ts-month>|<ts-subseconds>|<ts-quarter>|<ts-year>
|
|
|
description = Time scale units.
|
|
|
|
|
|
[ts-subseconds]
|
|
|
syntax = ms|cs|ds
|
|
|
description = Time scale in milliseconds("ms"), \
|
|
|
centiseconds("cs"), or deciseconds("ds")
|
|
|
|
|
|
[ts-sec]
|
|
|
syntax = s|sec|secs|second|seconds
|
|
|
simplesyntax = seconds
|
|
|
description = Time scale in seconds.
|
|
|
|
|
|
[ts-min]
|
|
|
syntax = m|min|mins|minute|minutes
|
|
|
simplesyntax = minutes
|
|
|
description = Time scale in minutes.
|
|
|
|
|
|
[ts-hr]
|
|
|
syntax = h|hr|hrs|hour|hours
|
|
|
simplesyntax = hours
|
|
|
description = Time scale in hours.
|
|
|
|
|
|
[ts-day]
|
|
|
syntax = d|day|days
|
|
|
simplesyntax = days
|
|
|
description = Time scale in days.
|
|
|
|
|
|
[ts-month]
|
|
|
syntax = mon|month|months
|
|
|
simplesyntax = months
|
|
|
description = Time scale in months.
|
|
|
|
|
|
[ts-quarter]
|
|
|
syntax = q|qtr|qtrs|quarter|quarters
|
|
|
simplesyntax = quarter
|
|
|
description = Time scale in quarters.
|
|
|
|
|
|
[ts-year]
|
|
|
syntax = y|yr|yrs|year|years
|
|
|
simplesyntax = year
|
|
|
description = Time scale in years.
|
|
|
|
|
|
##################
|
|
|
# bucketdir
|
|
|
##################
|
|
|
[bucketdir-command]
|
|
|
syntax = bucketdir pathfield=<field> sizefield=<field> (maxcount=<int>)? (countfield=<field>)? (sep=<char>)?
|
|
|
shortdesc = Replaces PATHFIELD with higher-level grouping, such as replacing filenames with directories.
|
|
|
description = Returns at most MAXCOUNT events by taking the incoming events and rolling up multiple sources into directories, preferring directories that have many files but few events. The field with the path is PATHFIELD (e.g., source), and strings are broken up by a SEP character. The defaults are pathfield=source, sizefield=totalCount, maxcount=20, countfield=totalCount, and sep="/" or "\\" depending on the OS.
|
|
|
usage = public
|
|
|
comment1 = get 10 best sources and directories
|
|
|
example1 = ... | top source | bucketdir pathfield=source sizefield=count maxcount=10
|
|
|
category = results::group
|
|
|
tags = cluster group collect gather
|
|
|
related = cluster dedup
|
|
|
|
|
|
|
|
|
##################
|
|
|
# chart
|
|
|
##################
|
|
|
|
|
|
[chart-command]
|
|
|
simplesyntax = chart (agg=<stats-agg-term>)? ( <stats-agg-term> | ( "(" <eval-expression> ")" ) )+ ( BY <field> (<bin-options> )* (<split-by-clause>)? )? | ( OVER <field> (<bin-options>)* (BY <split-by-clause>)? )? \
|
|
|
(<dedup_splitvals>)?
|
|
|
syntax = chart <chart-command-arguments>
|
|
|
shortdesc = Returns results in a tabular output for charting.
|
|
|
description = Creates a table of statistics suitable for charting. Whereas timechart generates a \
|
|
|
chart with _time as the x-axis, chart lets you select an arbitrary field as the \
|
|
|
x-axis with the "by" or "over" keyword. If necessary, the x-axis field is converted \
|
|
|
to discrete numerical quantities.\p\\
|
|
|
When chart includes a split-by-clause, the columns in the output table represent a \
|
|
|
distinct value of the split-by-field. (With stats, each row represents a single \
|
|
|
unique combination of values of the group-by-field.) The table displays ten columns \
|
|
|
by default, but you can specify a where clause to adjust the number of columns.\p\\
|
|
|
When a where clause is not provided, you can use limit and agg options to specify \
|
|
|
series filtering. If limit=0, there is no series filtering. \p\\
|
|
|
The limit option starts with "top" or "bottom" to determine which series to select. \
|
|
|
A number without a prefix means the same thing as starting with top. Default is 10. \p\\
|
|
|
When specifying multiple data series with a split-by-clause, you can use sep and \
|
|
|
format options to construct output field names.
|
|
|
commentcheat1 = Return the average (mean) "size" for each distinct "host".
|
|
|
examplecheat1 = ... | chart avg(size) by host
|
|
|
commentcheat2 = Return the maximum "delay" by "size", where "size" is broken down into a maximum of 10 equal-sized buckets.
|
|
|
examplecheat2 = ... | chart max(delay) by size bins=10
|
|
|
commentcheat3 = Return the ratio of the average (mean) "size" to the maximum "delay" for each distinct "host" and "user" pair.
|
|
|
examplecheat3 = ... | chart eval(avg(size)/max(delay)) by host user
|
|
|
commentcheat4 = Return max(delay) for each value of foo split by the value of bar.
|
|
|
examplecheat4 = ... | chart max(delay) over foo by bar
|
|
|
commentcheat5 = Return max(delay) for each value of foo.
|
|
|
examplecheat5 = ... | chart max(delay) over foo
|
|
|
category = reporting
|
|
|
usage = public
|
|
|
supports-multivalue = true
|
|
|
related = timechart, bucket, sichart
|
|
|
tags = chart graph report sparkline count dc mean avg stdev var min max mode median
|
|
|
|
|
|
[chart-limit-opt]
|
|
|
syntax = (top|bottom)?<int>
|
|
|
description = Limits the displayed series to the top or bottom <int>.
|
|
|
|
|
|
[chart-command-arguments]
|
|
|
syntax = (sep=<string>)? (format=<string>)? (cont=<bool>)? (limit=<chart-limit-opt>)? (agg=<stats-agg-term>)? ( <stats-agg-term> | <sparkline-agg-term> | ( "(" <eval-expression> ")" ) )+ \
|
|
|
( BY <field> (<bin-options> )* (<split-by-clause>)? )? | \
|
|
|
( OVER <field> (<bin-options>)* (BY <split-by-clause>)? )? (<dedup_splitvals>)?
|
|
|
description = See chart-command description. See the bin command for details about the <bin-options>.
|
|
|
|
|
|
|
|
|
##################
|
|
|
# cofilter
|
|
|
##################
|
|
|
|
|
|
[cofilter-command]
|
|
|
syntax = cofilter field1 field2
|
|
|
shortdesc = Find how many times field1 and field2 values occurred together.
|
|
|
description = For this command, we think of field1 values as "users" and field2 values as "items". \
|
|
|
The goal of the command is to compute, for each pair of items (i.e., field2 values), how many \
|
|
|
users (i.e., field1 values) used them both (i.e., occurred with each of them).
|
|
|
usage = public
|
|
|
example1 = ... | cofilter field1 field2
|
|
|
comment1 = user field must be specified first and item field second
|
|
|
category = streaming, reporting
|
|
|
related = associate, correlate
|
|
|
tags = arules associate contingency correlate correspond dependence independence
|
|
|
|
|
|
|
|
|
##################
|
|
|
# collapse
|
|
|
##################
|
|
|
[collapse-command]
|
|
|
syntax = collapse (chunksize=<num>)? (force=<bool>)?
|
|
|
description = Purely internal operation that condenses multi-file results into as few files as the chunksize option will allow (default chunksize=50000). This operation is automatically invoked by output* operators. If force=true and the results are entirely in memory, re-divide the results into appropriately chunked files (this option is new for 5.0).
|
|
|
example1 = ... | collapse
|
|
|
usage = internal
|
|
|
|
|
|
##################
|
|
|
# concurrency
|
|
|
##################
|
|
|
|
|
|
[concurrency-command]
|
|
|
syntax = concurrency duration=<field> (start=<field>)? (output=<field>)?
|
|
|
shortdesc = Given a duration field, finds the number of "concurrent" events for each event.
|
|
|
description = If each event represents something that occurs over a span of time, where that \
|
|
|
span is specified in the duration field, calculate the number of concurrent events \
|
|
|
for each event start time. An event X is concurrent with event Y if \
|
|
|
the X start time, X.start, lies between Y.start and (Y.start + Y.duration). \
|
|
|
In other words, the concurrent set of events is calculated for each event start time, \
|
|
|
and that number is attached to the event. \
|
|
|
The units of start and duration are assumed to be the same. If you have different \
|
|
|
units, you will need to convert them to corresponding units prior to using the concurrency \
|
|
|
command. \
|
|
|
Unless specified, the start field is assumed to be _time and the output field will \
|
|
|
be 'concurrency' \
|
|
|
Limits: If concurrency exceeds limits.conf [concurrency] max_count \
|
|
|
(Defaults to 10 million), results will not be accurate.
|
|
|
usage = public
|
|
|
comment1 = Calculate the number of concurrent events for each event start time and emit as field 'foo'
|
|
|
example1 = ... | concurrency duration=total_time output=foo
|
|
|
commentcheat = Calculate the number of concurrent events using the 'et' field as the start time \
|
|
|
and 'length' as the duration.
|
|
|
examplecheat = ... | concurrency duration=length start=et
|
|
|
comment2 = Calculate the number of ongoing http requests at the start time of each http request in a splunk access log
|
|
|
example2 = ... | eval spent_in_seconds = spent / 1000 | concurrency duration=spent_in_seconds
|
|
|
category = reporting
|
|
|
related = timechart
|
|
|
tags = concurrency
|
|
|
|
|
|
##################
|
|
|
# contingency
|
|
|
##################
|
|
|
|
|
|
[contingency-command]
|
|
|
syntax = contingency (<contingency-option> )* <field> <field>
|
|
|
alias = counttable, ctable
|
|
|
shortdesc = Builds a contingency table for two fields.
|
|
|
description = In statistics, contingency tables are used to record \
|
|
|
and analyze the relationship between two or more (usually categorical) variables. Many metrics of \
|
|
|
association or independence can be calculated based on contingency tables, such as the phi \
|
|
|
coefficient or Cramér's V.
|
|
|
usage = public
|
|
|
comment1 = Build a contingency table for fields "host" and "sourcetype".
|
|
|
example1 = ... | contingency host sourcetype
|
|
|
commentcheat = Build a contingency table of "datafields" from all events.
|
|
|
examplecheat = ... | contingency datafield1 datafield2 maxrows=5 maxcols=5 usetotal=F
|
|
|
category = reporting
|
|
|
related = associate, correlate
|
|
|
tags = associate contingency correlate connect link correspond dependence independence
|
|
|
|
|
|
[contingency-option]
|
|
|
syntax = <contingency-maxopts>|<contingency-mincover>|<contingency-usetotal>|<contingency-totalstr>
|
|
|
description = Options for the contingency table
|
|
|
|
|
|
[contingency-maxopts]
|
|
|
syntax = (maxrows|maxcols)=<int>
|
|
|
description = Maximum number of rows or columns. If the number of distinct values of the field exceeds this maximum, \
|
|
|
the least common values will be ignored. There is a ceiling on the values permitted for maxrows and maxcols\
|
|
|
from limits.conf, [ctable] stanza maxvalues. This limit defaults to 1000. Values over this will be rejected \
|
|
|
and values of 0 for these settings mean that this maxvalues setting is used.
|
|
|
default = ("maxrows=0" | "maxcols=0")
|
|
|
|
|
|
[contingency-mincover]
|
|
|
syntax = (mincolcover|minrowcover)=<num>
|
|
|
description = Cover only this percentage of values for the row or column field. If the number of entries needed to \
|
|
|
cover the required percentage of values exceeds maxrows or maxcols, maxrows or maxcols takes precedence.
|
|
|
default = ("mincolcover=1.0" | "minrowcover=1.0")
|
|
|
|
|
|
[contingency-usetotal]
|
|
|
syntax = usetotal=<bool>
|
|
|
description = Add row and column totals
|
|
|
default = "usetotal=true"
|
|
|
|
|
|
[contingency-totalstr]
|
|
|
syntax = totalstr=<field>
|
|
|
description = Field name for the totals row/column
|
|
|
default = "totalstr=TOTAL"
|
|
|
|
|
|
|
|
|
##################
|
|
|
# convert
|
|
|
##################
|
|
|
|
|
|
[convert-command]
|
|
|
simplesyntax = convert (timeformat=<string>)? ( (auto|dur2sec|mstime|memk|none|num|rmunit|rmcomma|ctime|mktime) "(" <field>? ")" (as <field>)?)+
|
|
|
syntax = convert (timeformat=<string>)? (<convert-function> (as <wc-field>)?)+
|
|
|
shortdesc = Converts field values into numerical values.
|
|
|
description = Converts the values of fields into numerical values. When renaming a field using "as", the original field is left intact. The timeformat option is used by ctime and mktime conversions. Default = "%m/%d/%Y %H:%M:%S".
|
|
|
commentcheat1 = Convert every field value to a number value except for values in the field "foo" (use the "none" argument to specify fields to ignore).
|
|
|
examplecheat1 = ... | convert auto(*) none(foo)
|
|
|
commentcheat2 = Change all memory values in the "virt" field to Kilobytes.
|
|
|
examplecheat2 = ... | convert memk(virt)
|
|
|
commentcheat3 = Change the sendmail syslog duration format (D+HH:MM:SS) to seconds. For example, if "delay="00:10:15"", the resulting value will be "delay="615"".
|
|
|
examplecheat3 = ... | convert dur2sec(delay)
|
|
|
commentcheat4 = Convert values of the "duration" field into number value by removing string values in the field value. For example, if "duration="212 sec"", the resulting value will be "duration="212"".
|
|
|
examplecheat4 = ... | convert rmunit(duration)
|
|
|
category = fields::convert
|
|
|
usage = public
|
|
|
tags = interchange transform translate convert ctime mktime dur2sec mstime memk
|
|
|
related = eval
|
|
|
|
|
|
[convert-function]
|
|
|
syntax = <convert-auto>|<convert-dur2sec>|<convert-mstime>|<convert-memk>|<convert-none>|<convert-num>|<convert-rmunit>|<convert-rmcomma>|<convert-ctime>|<convert-mktime>
|
|
|
|
|
|
[convert-auto]
|
|
|
syntax = auto("(" (<wc-field>)? ")")?
|
|
|
description = Automatically convert the field(s) to a number using the best conversion. \
|
|
|
Note that if not all values of a particular field can be converted using a known conversion type, \
|
|
|
the field is left untouched and no conversion at all is done for that field.
|
|
|
example1 = ... | convert auto(*)
|
|
|
example2 = ... | convert auto
|
|
|
example3 = ... | convert auto()
|
|
|
example4 = ... | convert auto(delay) auto(xdelay)
|
|
|
example5 = ... | convert auto(delay) as delay_secs
|
|
|
example6 = ... | convert auto(*delay) as *delay_secs
|
|
|
example7 = ... | convert auto(*) as *_num
|
|
|
|
|
|
[convert-ctime]
|
|
|
syntax = ctime"("<wc-field>?")"
|
|
|
description = Convert an epoch time to an ASCII human-readable time. Use the timeformat option to specify the exact format to convert to.
|
|
|
example1 = ... | convert timeformat="%H:%M:%S" ctime(_time) as timestr
|
|
|
|
|
|
[convert-mktime]
|
|
|
syntax = mktime"("<wc-field>?")"
|
|
|
description = Convert a human-readable time string to an epoch time. Use the timeformat option to specify the exact format to convert from.
|
|
|
example1 = ... | convert mktime(timestr)
|
|
|
|
|
|
[convert-dur2sec]
|
|
|
syntax = dur2sec"("<wc-field>?")"
|
|
|
description = Convert a duration format "[D+]HH:MM:SS" to seconds.
|
|
|
example1 = ... | convert dur2sec(xdelay)
|
|
|
example2 = ... | convert dur2sec(*delay)
|
|
|
|
|
|
[convert-mstime]
|
|
|
syntax = mstime"(" <wc-field>? ")"
|
|
|
description = Convert a MM:SS.SSS format to seconds.
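# Illustrative use (field name is hypothetical):
#   ... | convert mstime(duration) as duration_secs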
|
|
|
|
|
|
[convert-memk]
|
|
|
syntax = memk"(" <wc-field>? ")"
|
|
|
description = Convert a {KB, MB, GB} denominated size quantity into KB.
|
|
|
example1 = ... | convert memk(VIRT)
|
|
|
|
|
|
[convert-none]
|
|
|
syntax = none"(" <wc-field>? ")"
|
|
|
description = In the presence of other wildcards, indicates that the matching fields should not be converted.
|
|
|
example1 = ... | convert auto(*) none(foo)
|
|
|
|
|
|
[convert-num]
|
|
|
syntax = num"("<wc-field>? ")"
|
|
|
description = Like auto(), except non-convertible values are removed.
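# Illustrative use (field name is hypothetical):
#   ... | convert num(delay)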
|
|
|
|
|
|
[convert-rmcomma]
|
|
|
syntax = rmcomma"("<wc-field>? ")"
|
|
|
description = Removes all commas from value, e.g. '1,000,000.00' -> '1000000.00'
|
|
|
|
|
|
[convert-rmunit]
|
|
|
syntax = rmunit"(" <wc-field>? ")"
|
|
|
description = Looks for numbers at the beginning of the value and removes trailing text.
|
|
|
example1 = ... | convert rmunit(duration)
|
|
|
|
|
|
##################
|
|
|
# copyresults
|
|
|
##################
|
|
|
|
|
|
[copyresults-command]
|
|
|
syntax = copyresults <copyresults-dest-option> <copyresults-sid-option>
|
|
|
description = Copies the results of a search to a specified location within the config directory structure. This command is primarily used to populate lookup tables.
|
|
|
usage = internal
|
|
|
example1 = ... | copyresults dest=etc/system/local/lookups/myLookupTable.csv
|
|
|
|
|
|
[copyresults-dest-option]
|
|
|
syntax = dest=<string>
|
|
|
description = The destination file to copy the results to. The string is interpreted as a path relative \
|
|
|
to SPLUNK_HOME and (1) should point to a .csv file and (2) the file should be located either \
|
|
|
in etc/system/lookups/ or etc/apps/<app-name>/lookups/
|
|
|
|
|
|
[copyresults-sid-option]
|
|
|
syntax = sid=<string>
|
|
|
description = The search id of the job whose results are to be copied. Note, the user who is running this \
|
|
|
command should have permission to access the job pointed to by this id.
|
|
|
|
|
|
##################
|
|
|
# correlate
|
|
|
##################
|
|
|
|
|
|
[correlate-command]
|
|
|
syntax = correlate
|
|
|
shortdesc = Calculates the correlation between different fields.
|
|
|
description = Calculates a co-occurrence matrix, which contains the percentage of times that two \
|
|
|
fields exist in the same events. The RowField field contains the name of the field considered \
|
|
|
for the row, while the other column names (fields) are the fields it is being compared against. \
|
|
|
Values are the ratio of occurrences when both fields appeared to occurrences when only one field appeared.
|
|
|
usage = public
|
|
|
example1 = ... | correlate
|
|
|
comment1 = Calculate the correlation between all fields.
|
|
|
commentcheat = Calculate the co-occurrence correlation between all fields.
|
|
|
examplecheat = ... | correlate
|
|
|
category = reporting
|
|
|
related = associate, contingency
|
|
|
tags = associate contingency correlate connect link correspond dependence independence
|
|
|
|
|
|
##################
|
|
|
# createrss
|
|
|
##################
|
|
|
[createrss-command]
|
|
|
syntax = createrss path=<string> name=<string> link=<string> descr=<string> count=<int> (graceful=<bool>)?
|
|
|
shortdesc = Adds the RSS item into the specified RSS feed.
|
|
|
description = If the RSS feed does not exist, it creates one. The arguments are as follows: \i\\
|
|
|
PATH - the path of the rss feed (no ../ allowed) can be accessed via http://splunk/rss/path \i\\
|
|
|
NAME - the name/title of the rss item to add \i\\
|
|
|
LINK - link where the rss item points to \i\\
|
|
|
DESCR - the description field of the rss item \i\\
|
|
|
COUNT - the maximum number of items in the rss feed; when this count is reached, the last item is dropped \i\\
|
|
|
GRACEFUL - (optional) controls whether an error raises an exception or is simply logged - this is \i\\
|
|
|
useful when you don't want createrss to break the search pipeline
|
|
|
usage = deprecated, internal
|
|
|
related = sendemail
|
|
|
category = alerting
|
|
|
|
|
|
##################
|
|
|
# datamodel
|
|
|
##################
|
|
|
|
|
|
[datamodel-command]
|
|
|
syntax = datamodel (<modelName>)? (<objectName>)? (<dm-search-mode>)? (allow_old_summaries=<bool>)? (summariesonly=<bool>)? (strict_fields=<bool>)?
|
|
|
shortdesc = Allows users to examine data models and search data model datasets.
|
|
|
description = Must be the first command in a search. When used with no \
|
|
|
arguments, returns the JSON for all data models available in the \
|
|
|
current context. When used with just a modelName, returns the \
|
|
|
JSON for a single data model. When used with a modelName and \
|
|
|
objectName, returns the JSON for a single data model dataset. \
|
|
|
When used with modelName, objectName and 'dm-search-mode', runs \
|
|
|
the search for the specified search mode.\
|
|
|
"allow_old_summaries": Only applies when you use 'datamodel' to \
|
|
|
search an accelerated data model. Defaults to false. \
|
|
|
When allow_old_summaries=false, the Splunk software only \
|
|
|
provides results from TSIDX data model summary directories that \
|
|
|
are up-to-date. In other words, if the data model definition has \
|
|
|
changed, the Splunk software does not use data model summary \
|
|
|
directories that are older than the current definition when it \
|
|
|
returns 'datamodel' command output. This default ensures that \
|
|
|
the output from the datamodel search command always reflects \
|
|
|
your current configuration. When allow_old_summaries=true, the \
|
|
|
'datamodel' command uses both current summary data and \
|
|
|
summary data that was generated prior to a change to the data \
|
|
|
model definition. This is an advanced performance feature for \
|
|
|
cases where the old summaries are "good enough".\
|
|
|
"summariesonly": Only applies when you use 'datamodel' to \
|
|
|
search an accelerated data model. Defaults to false. When \
|
|
|
summariesonly = false, the search generates results from both \
|
|
|
summarized and unsummarized data. For unsummarized data, the \
|
|
|
search runs against the original indexed data. When \
|
|
|
summariesonly = true, the search runs only against data model \
|
|
|
summaries. It does not generate results from unsummarized data.\
|
|
|
"strict_fields": Only applies when you use 'datamodel' to search\
|
|
|
a data model. Defaults to true. When 'strict_fields = false', \
|
|
|
the command returns all fields, rather than just the set of \
|
|
|
fields that are defined within the constraints for the data \
|
|
|
model. This includes fields inherited from parent data models \
|
|
|
and fields that are derived through search-time processes such \
|
|
|
as field extraction, lookup matching, and field calculation.
|
|
|
example1 = | datamodel
|
|
|
comment1 = Return JSON for all data models available in the current app context.
|
|
|
example2 = | datamodel internal_server
|
|
|
comment2 = Return JSON for the internal_server data model.
|
|
|
example3 = | datamodel internal_server scheduler
|
|
|
comment3 = Return JSON for the scheduler dataset within the internal_server \
|
|
|
data model.
|
|
|
example4 = | datamodel internal_server scheduler search
|
|
|
comment4 = Run the search represented by the scheduler dataset within the \
|
|
|
internal_server data model.
|
|
|
category = results::filter
|
|
|
usage = public
|
|
|
related = from, pivot
|
|
|
tags = datamodel model pivot
|
|
|
|
|
|
[modelName]
|
|
|
syntax = <string>
|
|
|
description = A data model name.
|
|
|
|
|
|
[objectName]
|
|
|
syntax = <string>
|
|
|
description = A data model object name.
|
|
|
|
|
|
[dm-search-mode]
|
|
|
syntax = <dm-search-execute-mode>|<dm-search-string-mode>
|
|
|
description = The available commands for searching on a defined data model \
|
|
|
and data model object.
|
|
|
|
|
|
[dm-search-execute-mode]
|
|
|
syntax = search|flat|acceleration_search
|
|
|
shortdesc = The available commands for executing a search on a defined \
|
|
|
data model and data model object.
|
|
|
description = The 'search' mode runs the data model search exactly how it is \
|
|
|
defined. The 'flat' mode runs the data model search exactly \
|
|
|
like 'search' mode, with the exception that it strips the \
|
|
|
hierarchical names from the fields in the results. For example, \
|
|
|
the 'search' mode result 'dmObject.fieldname' is output simply \
|
|
|
as 'fieldname' when the same search is run in 'flat' mode. The \
|
|
|
'acceleration_search' mode runs the same search that is \
|
|
|
executed when the data model is accelerated. The \
|
|
|
'acceleration_search' mode only works on root event objects and \
|
|
|
root search objects that use only streaming commands.
|
|
|
|
|
|
[dm-search-string-mode]
|
|
|
syntax = search_string|flat_string|acceleration_search_string
|
|
|
shortdesc = The available commands for inspecting the search string that \
|
|
|
is run by the defined data model and data model object.
|
|
|
description = Each of these modes simply outputs the search string that is \
|
|
|
run when using the corresponding dm-search-execute-mode.
|
|
|
#############
|
|
|
# debug
|
|
|
#############
|
|
|
|
|
|
[debug-command]
|
|
|
syntax = debug cmd=<debug-method> param1=<string> param2=<string> <index-specifier>
|
|
|
shortdesc = Performs a debug command.
|
|
|
description = This search command can be used to issue debug commands to the system.
|
|
|
example1 = | debug cmd=roll index=_internal
|
|
|
usage = debug
|
|
|
tags = debug roll
|
|
|
|
|
|
[debug-method]
|
|
|
syntax = optimize|roll|logchange|validate|delete|sync|sleep|rescan
|
|
|
description = The available commands for debug command
|
|
|
|
|
|
##################
|
|
|
# dedup
|
|
|
##################
|
|
|
|
|
|
[dedup-command]
|
|
|
syntax = dedup (<int>)? <field-list> (<dedup-keepevents>)? (<dedup-keepempty>)? (<dedup-consecutive>)? (sortby <sort-by-clause>)?
|
|
|
shortdesc = Removes events which contain an identical combination of values for selected fields.
|
|
|
description = Keep the first N (where N > 0) results for each combination of values for the specified field(s). \
|
|
|
The first argument, if a number, is interpreted as N. If this number is absent, N is assumed to be 1. \
|
|
|
The optional sortby clause is equivalent to performing a sort command before the dedup command, except that it is executed more efficiently. The keepevents flag will keep all events, but for events with duplicate values, remove those field values instead of the entire event. \p\\
|
|
|
Normally, events with a null value in any of the fields are dropped. The keepempty \
|
|
|
flag will retain all events with a null value in any of the fields.
|
|
|
usage = public beta
|
|
|
example1 = ... | dedup 3 source
|
|
|
comment1 = For events that have the same 'source' value, keep the first 3 that occur and remove all subsequent events.
|
|
|
example2 = ... | dedup source sortby +_time
|
|
|
comment2 = Remove duplicates of results with the same source value and sort the events by the '_time' field in ascending order.
|
|
|
example3 = ... | dedup group sortby -_size
|
|
|
comment3 = Remove duplicates of results with the same 'group' value and sort the events by the '_size' field in descending order.
|
|
|
commentcheat = Remove duplicates of results with the same host value.
|
|
|
examplecheat = ... | dedup host
|
|
|
category = results::filter
|
|
|
tags = duplicate redundant extra
|
|
|
related = uniq
|
|
|
|
|
|
[dedup-keepempty]
|
|
|
syntax = keepempty=<bool>
|
|
|
description = If an event contains a null value for one or more of the specified fields, the event is either \
|
|
|
retained (if keepempty=true) or discarded (default).
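# Illustrative example (reuses the 'source' field from the dedup examples above):
# example1 = ... | dedup source keepempty=true
# comment1 = Deduplicate on 'source', but retain events where 'source' is null instead of dropping them.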
|
|
|
default = "keepempty=f"
|
|
|
|
|
|
[dedup-consecutive]
|
|
|
syntax = consecutive=<bool>
|
|
|
description = Only eliminate events that are consecutive duplicates.
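# Illustrative example (reuses the 'source' field from the dedup examples above):
# example1 = ... | dedup source consecutive=true
# comment1 = Remove an event only when its 'source' value duplicates that of the immediately preceding event.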
|
|
|
default = "consecutive=f"
|
|
|
|
|
|
[dedup-keepevents]
|
|
|
syntax = keepevents=<bool>
|
|
|
description = Keep all events, remove the fields from field-list in the duplication case instead
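# Illustrative example (reuses the 'source' field from the dedup examples above):
# example1 = ... | dedup source keepevents=true
# comment1 = Keep every event; for duplicate events, blank out the 'source' value instead of removing the event.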
|
|
|
default = "keepevents=f"
|
|
|
|
|
|
#############
|
|
|
# delete
|
|
|
#############
|
|
|
|
|
|
[delete-command]
|
|
|
syntax = delete
|
|
|
shortdesc = Deletes (makes irretrievable) events from Splunk indexes.
|
|
|
description = Piping a search to the delete operator marks all the events returned by that search so that they are never returned by any later search. No user (even with admin permissions) will be able to see this data using Splunk. \
|
|
|
The delete operator can only be accessed by a user with the "delete_by_keyword" capability. By default, Splunk ships with a special role, "can_delete" that has this capability (and no others). The admin role does not have this capability by default. Splunk recommends you create a special user that you log into when you intend to delete index data. \
|
|
|
To use the delete operator, run a search that returns the events you want deleted. Make sure that this search ONLY returns events you want to delete, and no other events. Once you've confirmed that this is the data you want to delete, pipe that search to delete. \
|
|
|
Note: The delete operator will trigger a roll of hot buckets to warm in the affected index(es).
|
|
|
usage = public
|
|
|
tags = delete hide
|
|
|
# Examples are commented out until an issue with the Jenkins tests is resolved.
|
|
|
# example1 = index=imap invalid | delete
|
|
|
# comment1 = Delete events from the "imap" index that contain the word "invalid".
|
|
|
# example2 = index=insecure | regex _raw = "\d{3}-\d{2}-\d{4}" | delete
|
|
|
# comment2 = Delete events from the "insecure" index that contain strings that look like Social Security numbers.
|
|
|
# category = index::delete
|
|
|
|
|
|
|
|
|
[metadata-delete-restrict]
|
|
|
syntax = (host::|source::|sourcetype::)<string>
|
|
|
description = Restrict the deletion to the specified host, source, or sourcetype.
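# Illustrative value (the host name is hypothetical):
# example1 = host::myhost
# comment1 = Restrict the deletion to events from the host "myhost".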
|
|
|
|
|
|
##################
|
|
|
# delta
|
|
|
##################
|
|
|
[delta-command]
|
|
|
syntax = delta <field> (as <field:newfield>)? (p=<int>)?
|
|
|
shortdesc = Computes the difference in field value between nearby results.
|
|
|
description = For each event where <field> is a number, compute the difference, in search order, between the current event's value of <field> and a previous event's value of <field>, and write this difference into <field:newfield>. If <newfield> is not specified, it defaults to "delta(<field>)". If p is unspecified, the default is 1, meaning that the immediately previous value is used. p=2 means that the value before the previous value is used, and so on.
|
|
|
note = Historical search order is from new events to old events, so values ascending over time will show negative deltas, and vice versa. Realtime search is in the incoming data order, so delta can produce odd values for data that arrives out of order relative to the original data order (e.g. when files are acquired out-of-order on forwarders).
|
|
|
example1 = ... | delta count as countdiff
|
|
|
comment1 = For each event where 'count' exists, compute the difference between count and its previous value and store the result in 'countdiff'.
|
|
|
example2 = ... | delta count p=3
|
|
|
comment2 = Compute the difference between the current value of count and the 3rd previous value of count, and store the result in 'delta(count)'.
|
|
|
usage = public
|
|
|
tags = difference delta change distance
|
|
|
category = fields::add
|
|
|
related = accum, autoregress, streamstats, trendline
|
|
|
|
|
|
|
|
|
##################
|
|
|
# diff
|
|
|
##################
|
|
|
[diff-command]
|
|
|
syntax = diff (position1=<int>)? (position2=<int>)? (attribute=<string>)? (diffheader=<bool>)? (context=<bool>)? (maxlen=<int>)?
|
|
|
shortdesc = Returns the difference between two search results.
|
|
|
description = Compares a field from two search results, returning the line-by-line 'diff' of the two. \
|
|
|
The two search results to compare are specified by the two position values (position1 and position2), \
|
|
|
which default to 1 and 2 (i.e., compare the first two results). \p\\
|
|
|
By default, the text of the two search results (i.e., the "_raw" field) is compared, \
|
|
|
but other fields can be compared, using 'attribute'. \p\\
|
|
|
If 'diffheader' is true, the traditional diff headers are created using the source keys \
|
|
|
of the two events as filenames. 'diffheader' defaults to false. \p\\
|
|
|
If 'context' is true, the output is generated in context-diff format. Otherwise, unified diff format is used.\
|
|
|
'context' defaults to false (unified). \p\\
|
|
|
If 'maxlen' is provided, it controls the maximum content in bytes diffed from the two events. \
|
|
|
It defaults to 100000 (100KB); if maxlen=0, there is no limit.
|
|
|
default = diff position1=1 position2=2 attribute=_raw diffheader=f context=f
|
|
|
comment1 = Compare the 9th search result to the 10th.
|
|
|
example1 = ... | diff position1=9 position2=10
|
|
|
commentcheat = Compare the "ip" values of the first and third search results.
|
|
|
examplecheat = ... | diff pos1=1 pos2=3 attribute=ip
|
|
|
category = formatting
|
|
|
usage = public
|
|
|
tags = diff differentiate distinguish contrast
|
|
|
related = set
|
|
|
|
|
|
##################
|
|
|
# dispatch
|
|
|
##################
|
|
|
|
|
|
[dispatch-command]
|
|
|
syntax = dispatch (ttl=<num>)? (maxresults=<num>)? (maxtime=<num>)? (id=<string>)? (spawn_process=<bool>)? (label=<string>)? (start_time=<num>)? (end_time=<num>)? <server-list> [<search-pipeline>]
|
|
|
description = Encapsulates long running, streaming reports. \i\\
|
|
|
"id" is the directory in which to place the results relative to $SPLUNK_HOME/var/run/splunk/dispatch.\i\\
|
|
|
"maxresults" is the maximum number of final results to return from the search-pipeline\i\\
|
|
|
"maxtime" is the maximum time (in seconds) to spend on the search before finalizing\i\\
|
|
|
"ttl" represents the number of the seconds the results of the dispatched search-pipeline will live on\i\\
|
|
|
disk before being cleaned up\i\\
|
|
|
"spawn_process" controls if the search should run in a separate spawned process ( defaults to true ).\i\\
|
|
|
"start_time" set the search's start/earliest time\i\\
|
|
|
"end_time" set the search's end/latest time\i\\
|
|
|
"label" set the search's label
|
|
|
usage = internal
|
|
|
example1 = | dispatch [search | stats count]
|
|
|
example2 = | dispatch id=foo [search | top source]
|
|
|
example3 = | dispatch server1 server2 [search | top host]
|
|
|
|
|
|
[server-list]
|
|
|
syntax = (<string> )*
|
|
|
description = A list of possibly wildcarded servers.
|
|
|
default = "*"
|
|
|
|
|
|
|
|
|
##################
|
|
|
# editinfo
|
|
|
##################
|
|
|
[editinfo-command]
|
|
|
syntax = editinfo ((keyset|starttime|endtime|msg_error|msg_warn|msg_info|msg_debug)=<string>)*
|
|
|
description = Edit information in SearchResultsInfo.
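# Illustrative example (the message text is hypothetical; editinfo is internal, so exact usage may vary):
# example1 = ... | editinfo msg_info="annotated by maintenance search"
# comment1 = Attach an informational message to the SearchResultsInfo for this search.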
|
|
|
category = formatting
|
|
|
usage = internal
|
|
|
related = editinfo
|
|
|
tags = editinfo
|
|
|
|
|
|
|
|
|
##################
|
|
|
# erex
|
|
|
##################
|
|
|
|
|
|
[erex-command]
|
|
|
syntax = erex <field> examples=<erex-examples> (counterexamples=<erex-examples>)? (fromfield=<field>)? (maxtrainers=<int>)?
|
|
|
shortdesc = Automatically extracts field values similar to the example values.
|
|
|
description = Example-based regular expression \
|
|
|
extraction. Automatically extracts field values from FROMFIELD \
|
|
|
(defaults to _raw) that are similar to the EXAMPLES \
|
|
|
(comma-separated list of example values) and puts them in FIELD. \
|
|
|
An informational message is output with the resulting regular expression. \
|
|
|
That expression can then be used with the REX command for \
|
|
|
more efficient extraction. To learn the extraction rule for \
|
|
|
pulling out example values, it learns from at most MAXTRAINERS \
|
|
|
(defaults to 100, must be between 1-1000).
|
|
|
comment1 = Extracts out values like "7/01", putting them into the "monthday" attribute.
|
|
|
example1 = ... | erex monthday examples="7/01"
|
|
|
comment2 = Extracts out values like "7/01" and "7/02", but not patterns like "99/2", putting extractions into the "monthday" attribute.
|
|
|
example2 = ... | erex monthday examples="7/01, 07/02" counterexamples="99/2"
|
|
|
category = fields::add
|
|
|
usage = public
|
|
|
related = extract, kvform, multikv, regex, rex, xmlkv
|
|
|
tags = regex regular expression extract
|
|
|
|
|
|
[erex-examples]
|
|
|
syntax = ""<string>(, <string> )*""
|
|
|
comment1 = examples are foo and bar
|
|
|
example1 = "foo, bar"
|
|
|
|
|
|
|
|
|
#################
|
|
|
# eval
|
|
|
#################
|
|
|
|
|
|
[eval-command]
|
|
|
syntax = eval <eval-field>=<eval-expression> ("," <eval-field>=<eval-expression>)*
|
|
|
shortdesc = Calculates an expression and puts the resulting value into a field. You can specify to calculate more than one expression.
|
|
|
description = Performs an arbitrary expression evaluation, providing mathematical, string, and boolean operations. The results of eval are written to a specified destination field, which can be a new or existing field. If the destination field exists, the values of the field are replaced by the results of eval. The syntax of the expression is checked before running the search, and an exception will be thrown for an invalid expression. For example, the result of an eval statement is not allowed to be boolean. If search time evaluation of the expression is unsuccessful for a given event, eval erases the value in the result field.
|
|
|
commentcheat = Set velocity to distance / time.
|
|
|
examplecheat = ... | eval velocity=distance/time
|
|
|
comment1 = Set full_name to the concatenation of first_name, a space, and last_name.\
|
|
|
Lowercase full_name. An example of multiple eval expressions, separated by a comma.
|
|
|
example1 = ... | eval full_name = first_name." ".last_name, low_name = lower(full_name)
|
|
|
comment2 = Set sum_of_areas to be the sum of the areas of two circles
|
|
|
example2 = ... | eval sum_of_areas = pi() * pow(radius_a, 2) + pi() * pow(radius_b, 2)
|
|
|
comment3 = Set status to some simple http error codes.
|
|
|
example3 = ... | eval error_msg = case(error == 404, "Not found", error == 500, "Internal Server Error", error == 200, "OK")
|
|
|
comment4 = Set status to OK if error is 200; otherwise, Error.
|
|
|
example4 = ... | eval status = if(error == 200, "OK", "Error")
|
|
|
comment5 = Set lowuser to the lowercase version of username.
|
|
|
example5 = ... | eval lowuser = lower(username)
|
|
|
category = fields::add
|
|
|
related = where
|
|
|
usage = public
|
|
|
tags = evaluate math string bool formula calculate compute abs avg case cidrmatch coalesce commands exact exp floor if ifnull ipmask isbool isint isnotnull isnull isnum isstr len like ln log lower match max md5 min mvappend mvcount mvindex mvfilter mvjoin mvmap mvsort mvdedup now null nullif pi pow random relative_time replace round searchmatch sigfig split sqrt strftime strptime substr sum time tostring trim ltrim rtrim typeof upper urldecode validate
|
|
|
|
|
|
[eval-field]
|
|
|
syntax = <field>
|
|
|
description = A field name for your evaluated value.
|
|
|
example = velocity
|
|
|
tags = eval evaluate calculate add subtract sum count measure multiply divide
|
|
|
|
|
|
[eval-expression]
|
|
|
syntax = <eval-math-exp> | <eval-concat-exp> | <eval-compare-exp> | <eval-bool-exp> | <eval-function-call>
|
|
|
description = A combination of literals, fields, operators, and functions that represent the value of your destination field. The following are the basic operations you can perform with eval. For these evaluations to work, your values need to be valid for the type of operation. For example, with the exception of addition, arithmetic operations may not produce valid results if the values are not numerical. For addition, Splunk can concatenate the two operands if they are both strings. When concatenating values with '.', Splunk treats both values as strings regardless of their actual type.
|
|
|
tags = eval evaluate calculate add subtract sum count measure multiply divide where
|
|
|
|
|
|
[eval-math-exp]
|
|
|
syntax = (<field>|<num>) ((+|-|*|/|%) <eval-expression>)*
|
|
|
example = pi() * pow(radius_a, 2) + pi() * pow(radius_b, 2)
|
|
|
|
|
|
[eval-concat-exp]
|
|
|
syntax = ((<field>|<string>|<num>) (. <eval-expression>)*)|((<field>|<string>) (+ <eval-expression>)*)
|
|
|
description = concatenate fields and strings
|
|
|
comment = create a new field by concatenating the field first_name, a space character, and the field last_name.
|
|
|
example = first_name." ".last_name
|
|
|
|
|
|
[eval-compare-exp]
|
|
|
syntax = (<field>|<string>|<num>) ("<"|">"|"<"=|">"=|!=|=|==|LIKE) <eval-expression>
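# Illustrative expression (reuses the 'error' field from the eval examples above):
# example = error != 200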
|
|
|
|
|
|
[eval-bool-exp]
|
|
|
syntax = (NOT|!)? (<eval-compare-exp>|<eval-function-call>) ((AND|OR|XOR) <eval-expression>)*
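# Illustrative expression (the field names are hypothetical):
# example = isnotnull(user) AND status >= 500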
|
|
|
|
|
|
[eval-function-call]
|
|
|
syntax = <eval-function> "(" <eval-expression> ("," <eval-expression>)* ")"
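# Illustrative call (reuses the 'username' field from the eval examples above):
# example = lower(username)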
|
|
|
|
|
|
[eval-function]
|
|
|
syntax = abs|avg|case|ceiling|ceil|cidrmatch|coalesce|commands|exact|exp|false|floor|if|ifnull|ipmask|isbool|isint|isnotnull|isnull|isnum|isstr|len|like|ln|log|lookup|lower|match|max|md5|min|mvappend|mvcount|mvdedup|mvindex|mvfilter|mvfind|mvjoin|mvmap|mvrange|mvreverse|mvsort|mvzip|now|null|nullif|pi|pow|random|relative_time|replace|round|searchmatch|sha1|sha256|sha512|sigfig|spath|split|sqrt|strftime|strptime|substr|sum|time|tostring|trim|ltrim|rtrim|true|typeof|upper|urldecode|validate|tonumber|acos|acosh|asin|asinh|atan|atan2|atanh|cos|cosh|hypot|sin|sinh|tan|tanh|json_array_to_mv|mv_to_json_array|json_append|json_delete|json_extend|json_extract_exact|json_set_exact|json_object|json_array|json_extract|json_keys|json_set|json_valid|json|bit_and|bit_or|bit_xor|bit_not|bit_shift_left|bit_shift_right
|
|
|
description = Function used by eval.
|
|
|
example1 = abs(number)
|
|
|
comment1 = Takes a number and returns its absolute value.
|
|
|
example2 = case(error == 404, "Not found", error == 500, "Internal Server Error", error == 200, "OK")
|
|
|
comment2 = Takes an even number of arguments with arguments 1,3,5, etc. being boolean expressions. The function returns the argument following the first expression that evaluates to true, defaulting to NULL if none are true.
|
|
|
example3 = ceiling(1.2)
|
|
|
comment3 = This function takes a number and rounds it up to the next highest integer, which in this example is 2.
|
|
|
example4 = cidrmatch("123.132.32.0/25", ip)
|
|
|
comment4 = Takes two arguments, the first being the subnet to match and the second being an ip address. This boolean function returns true if the ip matches the valid subnet, or false otherwise.
|
|
|
example5 = coalesce(null(), "Returned value", null())
|
|
|
comment5 = Takes any number of arguments and returns the first value that is not null. The ifnull function does the exact same thing, so both names are acceptable.
|
|
|
example6 = exact(3.14 * num)
|
|
|
comment6 = Takes a number as its argument, and returns the exact value of the result without truncating for significant figures or precision. Uses double precision.
|
|
|
example7 = exp(3)
|
|
|
comment7 = Takes a number x and returns e^x.
|
|
|
example8 = false()
|
|
|
comment8 = This function enables you to specify a conditional that is obviously false, for example 1==0. You do not specify a field with this function.
|
|
|
example9 = floor(1.9)
|
|
|
comment9 = Takes a number x and returns the floor of x, which in this example is 1.
|
|
|
example10 = if(error == 200, "OK", "Error")
|
|
|
comment10 = Takes three arguments, the first being a boolean expression. The function returns the second argument if it evaluates to true, and the third otherwise.
|
|
|
example11 = isbool(field)
|
|
|
comment11 = Takes one argument, returning true iff the argument is boolean. There are corresponding functions for numbers (isnum), integers (isint), strings (isstr), and null (isnull).
|
|
|
example12 = isnotnull(field)
|
|
|
comment12 = Takes one argument, returning true iff the field is not null. A useful check for whether the field contains a value.
|
|
|
example13 = len(field)
|
|
|
comment13 = Takes one string argument, returning the length of the string. In this example it would return the length of the string in field.
|
|
|
example14 = like(field, "foo%")
|
|
|
comment14 = Takes two string arguments, returning true iff the first argument is like the SQLite pattern in the second argument. This example returns true if the field value starts with foo.
|
|
|
example15 = ln(bytes)
|
|
|
comment15 = Takes a number and returns its natural log.
|
|
|
example16 = log(number, 2)
|
|
|
comment16 = Takes either one or two numeric arguments, returning the log of the first argument using the base provided by the second argument, in this case log of number base 2. The default base is 10 if the second argument is omitted.
|
|
|
example17 = lower(username)
|
|
|
comment17 = Takes one string argument and returns the lowercase version, in this example lowercasing the value provided by the field username. The upper function also exists for returning the uppercase version.
|
|
|
example18 = match(field, "^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$")
|
|
|
comment18 = Takes two string arguments, and returns true iff the first argument matches the regex provided by the second argument. This example returns true iff field matches the basic pattern of an ip address. Note that the example used ^ and $ to perform a full match.
|
|
|
example19 = max(1, 3, 6, 7, "foo", field)
|
|
|
comment19 = Takes an arbitrary number of number and string arguments and returns the max, with strings being greater than numbers. So this example will return either "foo" or field, depending on the value of field. The corresponding min function also exists.
|
|
|
example20 = md5(field)
|
|
|
comment20 = Takes one string argument and returns its md5 hash value as a string, in this case returning the hash of field's value.
|
|
|
example21 = mvcount(multifield)
|
|
|
comment21 = Takes a field argument, returning the number of values if the field is multivalued, one if the field is single valued and has a value, and null otherwise.
|
|
|
example22 = mvindex(multifield, 2)
|
|
|
comment22 = Takes two or three arguments, the first being a field and the remaining one or two being numbers, and returns a subset of the multivalued field using the indexes provided. For mvindex(mvfield, startindex, [endindex]) endindex is inclusive and optional. Both startindex and endindex can be negative, where -1 = last element, and if endindex is not specified, it returns just the value at startindex. If the indexes are out of range or invalid, the result is null. Since indexes start at zero, this example returns the third value in multifield if it exists.
|
|
|
example23 = mvfilter(match(email, "\.net$") OR match(email, "\.org$"))
|
|
|
comment23 = Takes one argument which is a boolean expression that references EXACTLY one field. It returns that multivalued field filtered by the given expression, so this example will return all values of the field email that end in .net or .org.
|
|
|
example24 = mvfind(mymvfield, "err\d+")
|
|
|
comment24 = This function tries to find a value in multivalue field X that matches the regular expression REGEX. If a match exists, the index of the first matching value is returned (beginning with zero). If no values match, NULL is returned.
|
|
|
example25 = now()
|
|
|
comment25 = Takes no arguments and returns the time that the search was started, in unix time (seconds since epoch).
|
|
|
example26 = null()
|
|
|
comment26 = Takes no arguments and returns null, which is how the evaluation engine represents no value. Setting a field to null will clear its value.
|
|
|
example27 = nullif(fielda, fieldb)
|
|
|
comment27 = Takes two arguments, returning the first argument if the arguments are different, and null otherwise.
|
|
|
example28 = pi()
|
|
|
comment28 = Takes no arguments, returning the pi constant to 11 digits of precision.
|
|
|
example29 = pow(x, y)
|
|
|
comment29 = Takes two numeric arguments, returning x^y
|
|
|
example30 = random()
|
|
|
comment30 = Takes no arguments, returns a pseudo-random number.
|
|
|
example31 = replace(date, "^(\d{1,2})/(\d{1,2})/", "\2/\1/")
|
|
|
comment31 = Takes three arguments - replace(input_string, regex_pattern_to_replace, replacement_string) returns the input string with all instances of the regex pattern replaced with the replacement string. You can also include matched groups in the third argument by escaping the group number matched. The example returns date with the month and day numbers switched, so if the input was 1/12/2009 the return value would be 12/1/2009.
|
|
|
example32 = round(3.5)
|
|
|
comment32 = Takes one or two numeric arguments, returning the first argument rounded to the amount of decimal places specified by the second. The default is to round to an integer, so the example would return 4. round(2.555, 2) would return 2.56.
|
|
|
example33 = searchmatch("foo AND bar")
|
|
|
comment33 = Takes one argument, which is a search string. The function returns true iff the event matches the search string.
|
|
|
example34 = sha1(field)
|
|
|
comment34 = Takes one string argument and returns its sha1 hash value based on the FIPS compliant SHA-1 hash function as a string, in this case returning the hash of field's value.
|
|
|
example35 = sha256(field)
|
|
|
comment35 = Takes one string argument and returns its sha256 hash value based on the FIPS compliant SHA-256 hash function as a string, in this case returning the hash of field's value.
|
|
|
example36 = sha512(field)
|
|
|
comment36 = Takes one string argument and returns its sha512 hash value based on the FIPS compliant SHA-512 hash function as a string, in this case returning the hash of field's value.
|
|
|
example37 = sqrt(9)
|
|
|
comment37 = Takes one numeric argument and returns its square root, in this example it would be 3.
|
|
|
example38 = substr("string", 1, 3) + substr("string", -3)
|
|
|
comment38 = Takes either two or three arguments - the first is a string and the last two are numeric. The function returns a substring of the first argument starting at the index specified by the second argument with the number of characters specified by the third. If a third argument is not given, it returns the rest of the string. The indexes follow SQLite semantics in that they start at 1 and negative indices can be used, which start from the end of the string. The example concatenates "str" and "ing" together, returning "string"
|
|
|
example39 = tostring(1==1) + " " + tostring(15, "hex") + " " + tostring(12345.6789, "commas")
|
|
|
comment39 = Takes one or two arguments, returning a string representation of the first argument with optional formatting for numbers. For numbers you can specify "hex" or "commas" as a second argument to format the number accordingly. The return value for this example is "True 0xF 12,345.68".
|
|
|
example40 = trim(" ZZZZabcZZ ", " Z")
|
|
|
comment40 = Takes one or two string arguments, and returns the first string with the characters in the second argument trimmed from both sides. If no second argument is specified, spaces and tabs are trimmed. This example returns "abc". There are also ltrim and rtrim functions for trimming only the left and right sides, respectively.
|
|
|
example41 = true()
|
|
|
comment41 = This function enables you to specify a conditional that is obviously true, for example 1==1. You do not specify a field with this function.
|
|
|
example42 = typeof(12) + typeof("string") + typeof(1==2) + typeof(badfield)
|
|
|
comment42 = Takes one argument and returns a string representation of its type. The example result is NumberStringBoolInvalid.
|
|
|
example43 = urldecode("http%3A%2F%2Fwww.splunk.com%2Fdownload%3Fr%3Dheader")
|
|
|
comment43 = Takes one string argument and returns the url decoded. The example result is "http://www.splunk.com/download?r=header".
|
|
|
example44 = validate(isint(port), "ERROR: Port is not an integer", port >= 1 AND port <= 65535, "ERROR: Port is out of range")
|
|
|
comment44 = Takes an even number of arguments like case(), with odd arguments being boolean expressions. The even arguments are strings, and the function returns the string corresponding to the first expression that evaluates to false, or null if all checks pass. The example runs a simple check for valid ports.
|
|
|
example45 = commands(searchstr_field)
|
|
|
comment45 = Takes a Splunk search string and returns a multivalued field containing a list of the commands used in that search.
|
|
|
example46 = relative_time(now(), "-1d@d")
|
|
|
comment46 = Takes a UTC time as the first argument and a relative time specifier as the second argument, and returns the UTC time of that relative time applied to the first argument.
|
|
|
example47 = strftime(_time, "%H:%M")
|
|
|
comment47 = Takes a UTC time as the first argument and renders it as a string using the format specified by the second argument.
|
|
|
example48 = strptime(timeStr, "%H:%M")
|
|
|
comment48 = Takes a string representing a time and parses it using the format specified by the second argument, returning a UTC time.
|
|
|
example49 = time()
|
|
|
comment49 = Returns the current wall-clock time with microsecond resolution. Will be different for each event based on when that event was processed by eval.
|
|
|
example50 = mvjoin(foo, ";")
|
|
|
comment50 = Join together individual values of a multi-valued field foo using a semicolon as the delimiter
|
|
|
example51 = mvappend(foo, "bar", baz)
|
|
|
comment51 = Append the value "bar" and the values of field baz to the values of field foo and return as multi-valued. foo and baz could either be multi or single valued field.
|
|
|
example52 = split(foo, ";")
|
|
|
comment52 = Split the value(s) of field foo on the delimiter ';' and returns as multi-valued
|
|
|
example53 = mvfind(mymvfield, "err\d+")
|
|
|
comment53 = Try to find a value in the multivalued field "mymvfield" matching the regex "err\d+". The index of the first matching value is returned (zero indexed). If no values match, null is returned.
|
|
|
example54 = sigfig(number)
|
|
|
comment54 = Display number with the correct number of significant figures.
|
|
|
example55 = spath(input, path)
|
|
|
comment55 = Extract the data at path "path" from "input". May result in a multivalued field.
|
|
|
example56 = mvzip(hosts, ports)
|
|
|
comment56 = Combine 2 multivalue fields by stitching together the first value of one field with the first value of another field, then the second with the second, and so forth. Similar to Python's zip function.
|
|
|
example57 = mvrange(1,11,2)
|
|
|
comment57 = Create a multivalued field with the values 1,3,5,7,9. The first argument is the starting number, the second is the ending number (exclusive), and the optional third argument is the step increment.
|
|
|
example58 = mvdedup(mvfield)
|
|
|
comment58 = Takes a multivalued field as input and returns a multivalued field with its duplicate values removed
|
|
|
example59 = mvsort(mvfield)
|
|
|
comment59 = Takes a multivalued field as input and returns a multivalued field with its values sorted lexicographically
|
|
|
example60 = acos(0)
|
|
|
comment60 = This function computes the arc cosine of X, in the interval [0,pi] radians.
|
|
|
example61 = acosh(2)
|
|
|
comment61 = This function computes the arc hyperbolic cosine of X, in radians.
|
|
|
example62 = asin(1)
|
|
|
comment62 = This function computes the arc sine of X, in the interval [-pi/2,+pi/2] radians.
|
|
|
example63 = asinh(1)
|
|
|
comment63 = This function computes the arc hyperbolic sine of X, in radians.
|
|
|
example64 = atan(0.50)
|
|
|
comment64 = This function computes the arc tangent of X, in the interval [-pi/2,+pi/2] radians.
|
|
|
example65 = atan2(0.50, 0.75)
|
|
|
comment65 = This function computes the arc tangent of Y, X in the interval [-pi,+pi] radians. Y is a value that represents the proportion of the y-coordinate. X is the value that represents the proportion of the x-coordinate. To compute the value, the function takes into account the sign of both arguments to determine the quadrant.
|
|
|
example66 = atanh(0.500)
|
|
|
comment66 = This function computes the arc hyperbolic tangent of X, in radians.
|
|
|
example67 = cos(-1)
|
|
|
comment67 = This function computes the cosine of an angle of X radians.
|
|
|
example68 = cosh(1)
|
|
|
comment68 = This function computes the hyperbolic cosine of X radians.
|
|
|
example69 = hypot(3,4)
|
|
|
comment69 = This function computes the hypotenuse of a right-angled triangle whose legs are X and Y. The function returns the square root of the sum of the squares of X and Y, as described in the Pythagorean theorem.
|
|
|
example70 = sin(1)
|
|
|
comment70 = This function computes the sine.
|
|
|
example71 = sinh(1)
|
|
|
comment71 = This function computes the hyperbolic sine.
|
|
|
example72 = tan(1)
|
|
|
comment72 = This function computes the tangent.
|
|
|
example73 = tanh(1)
|
|
|
comment73 = This function computes the hyperbolic tangent.
|
|
|
example74 = mvmap(foo,foo*10)
|
|
|
comment74 = Multiply each value of foo by 10
|
|
|
example75 = mvmap(mvindex(foo,1,2),foo*bar)
|
|
|
comment75 = Multiply the 2nd and 3rd values of foo by the value of bar. bar should be a single value field.
|
|
|
example76 = mvmap(foo,foo*foo)
|
|
|
comment76 = Square each value of foo
|
|
|
example77 = mvmap(mvappend(mv1, mv2),x*x,"x")
|
|
|
comment77 = Combine 2 multivalued fields and square each value using "x" as a replacement variable.
|
|
|
example78 = lookup("http_status.csv", json_object("status", status_code, "status_type", status_type), json_array("status_description"))
|
|
|
comment78 = Lookup value of status_code and status_type in a lookup table called http_status.csv and output a field called status_description.
|
|
|
example79 = sum(foo,bar)
|
|
|
comment79 = Add up values of the 'foo' and 'bar' fields. When applied to a multi-value field, each value of the field is included in the total.
|
|
|
example80 = avg(foo,bar)
|
|
|
comment80 = Get the numerical average (mean) of the values of the 'foo' and 'bar' fields. This is equivalent to 'sum(foo,bar)/(mvcount(foo) + mvcount(bar))'.
|
|
|
example81 = json_array_to_mv(json_array("hello", "world"))
|
|
|
comment81 = Convert the JSON array containing string "hello" and "world" into a multivalue field.
|
|
|
example82 = mv_to_json_array(mvappend("hello", "world"))
|
|
|
comment82 = Convert the multivalue field containing elements "hello" and "world" into a JSON array.
|
|
|
example83 = json_append(my_array, "foo", "bar")
|
|
|
comment83 = Append the value "bar" to the array stored in the "foo" key in the JSON object titled "my_array".
|
|
|
example84 = json_extend(my_object, "my_array", json_array(3, 4))
|
|
|
comment84 = Similar to the python 'extend' function: locates 'my_array' stored under the JSON object titled 'my_object', then flattens and appends the array [3,4] to it.
|
|
|
example85 = json_extract_exact(json_object("john.smith", 1), "john.smith")
|
|
|
comment85 = Extracts the value 1 from the key "john.smith" in the provided JSON object. The function treats the character "." in "john.smith" as a string literal, as opposed to a nested object in JSON format.
|
|
|
example86 = json_set_exact(my_object, "name.first", "maria")
|
|
|
comment86 = Adds the key "name.first" with associated value "maria" to the JSON object titled "my_object". The function treats the character '.' in "name.first" as a string literal, as opposed to a nested object in JSON format.
|
|
|
example87 = json_object("name", json_array("john", "arun"))
|
|
|
comment87 = Creates a JSON object with key as "name" and value as ["john", "arun"].
|
|
|
example88 = json_array("john", "arun")
|
|
|
comment88 = Creates a JSON array using list of values ["john", "arun"].
|
|
|
example89 = json_extract(jobs, "chris")
|
|
|
comment89 = If "chris" is present in the JSON object "jobs", fetch its value. Otherwise, output "null".
|
|
|
example90 = json_keys(bridges)
|
|
|
comment90 = Return a JSON array of keys from JSON object titled "bridges".
|
|
|
example91 = json_set(jobs, "chris", "teacher")
|
|
|
comment91 = If a key "chris" is present in the JSON object "jobs", update its value to "teacher". Otherwise, add a new key called "chris" with a value of "teacher" under the JSON object "jobs".
|
|
|
example92 = json_valid(inventory)
|
|
|
comment92 = Returns "true" if "inventory" can be parsed as JSON. Otherwise, the function returns "false".
|
|
|
example93 = json(occupations)
|
|
|
comment93 = Takes a value "occupations" and returns the value if it can be parsed as JSON. Otherwise, the function returns null.
|
|
|
example94 = ipmask("255.255.255.0","123.234.10.20")
|
|
|
comment94 = Takes two valid IPv4 addresses as arguments. The first argument is an IP mask, and the second is an IP address. This function generates a new masked IP address by applying the mask to the IP address through a bitwise AND operation. You can use this function to simplify isolation of an octet without splitting the given IP address. A valid IPv4 address is a quad-dotted notation of four decimal integers, each ranging from 0 to 255.
|
|
|
example95 = json_delete(object, "age")
|
|
|
comment95 = Removes the key "age" and its associated value from the JSON object under the 'object' field.
|
|
|
example96 = bit_shift_left(2, 1)
|
|
|
comment96 = Takes two valid non negative integers as arguments. The first argument is a value and the second the amount it must be shifted left by. Both arguments are restricted to be in the range [0, (2^53 -1)], a failure of which returns NULL. All answers are masked to stay below the (2^53 -1) limit in the event of overflows. Shifting left drops the 53rd bit and appends a 0 to the binary representation of the input.
|
|
|
example97 = bit_shift_right(2, 1)
|
|
|
comment97 = Takes two valid non negative integers as arguments. The first argument is a value and the second the amount it must be shifted right by. Both arguments are restricted to be in the range [0, (2^53 -1)], a failure of which returns NULL. All answers are masked to stay below the (2^53 -1) limit in the event of overflows. Shifting right drops the rightmost bit and prepends a 0 to the binary representation of the input.
|
|
|
example98 = bit_and(4, 6) + " " + bit_and(10, 12, 17)
|
|
|
comment98 = Takes two or more non-negative integers as arguments and performs logical bitwise and on them
|
|
|
example99 = bit_or(4, 6) + " " + bit_or(10, 12, 17)
|
|
|
comment99 = Takes two or more non-negative integers as arguments and performs logical bitwise or on them
|
|
|
example100 = bit_xor(4, 6) + " " + bit_xor(10, 19, 17)
|
|
|
comment100 = Takes two or more non-negative integers as arguments and performs logical bitwise xor on them
|
|
|
example101 = bit_not(2) + " " + bit_not(2, 7)
|
|
|
comment101 = Takes a non-negative integer as argument and inverts every bit in the binary representation of the number. It also takes an optional second argument with a default value of (2^53)-1. Both arguments are restricted to be in the range [0, (2^53 -1)], a failure of which returns NULL.
|
|
|
example102 = mvreverse(a)
|
|
|
comment102 = Reverses the order of the values in the multivalue field "a".
|
|
|
example103 = mvreverse(mvappend("1", "2", "3"))
|
|
|
comment103 = Reverses the order of the values in the multivalue field by changing "1", "2", "3" to "3", "2", "1".
|
|
|
example104 = | makeresults | eval b = mvappend("1","2","3"), a=mvreverse(b)
|
|
|
comment104 = Reverses the order of the values in multivalue field "b" to "3", "2", "1" in multivalue field "a".
|
|
|
|
|
|
|
|
|
##################
|
|
|
# extract
|
|
|
##################
|
|
|
|
|
|
[extract-command]
|
|
|
syntax = extract <extract-options>* <extractor-name>*
|
|
|
alias = kv
|
|
|
shortdesc = Extracts field-value pairs from search results.
|
|
|
description = Forces field-value extraction on the result set.
|
|
|
note = Use pairdelim & kvdelim to select how to extract data.
|
|
|
comment1 = Extract field/value pairs that are defined in the transforms.conf stanza 'access-extractions'.
|
|
|
example1 = ... | extract access-extractions
|
|
|
commentcheat1 = Extract field/value pairs and reload field extraction settings from disk.
|
|
|
examplecheat1 = ... | extract reload=true
|
|
|
commentcheat2 = Extract field/value pairs that are delimited by '|' or ';', and values of fields that are delimited by '=' or ':'.
|
|
|
examplecheat2 = ... | extract pairdelim="|;", kvdelim="=:", auto=f
|
|
|
category = fields::add
|
|
|
usage = public
|
|
|
related = kvform, multikv, rex, xmlkv
|
|
|
tags = extract kv field extract
|
|
|
|
|
|
[extract-options]
|
|
|
syntax = (segment=<bool>)|(reload=<bool>)|(kvdelim=<string>)|(pairdelim=<string>)|(limit=<int>)|(maxchars=<int>)|(mv_add=<bool>)|(clean_keys=<bool>)|(keep_empty_vals=<bool>)
|
|
|
description = Extraction options. \
|
|
|
"segment" specifies whether to note the locations of key/value pairs with the results (internal, false). \
|
|
|
"reload" specifies whether to force reloading of props.conf and transforms.conf (false). \
|
|
|
"kvdelim" string specifying a list of character delimiters that separate the key from the value \
|
|
|
"pairdelim" string specifying a list of character delimiters that separate the key-value pairs from each other \
|
|
|
"maxchars" specifies how many characters to look into the event (10240). \
|
|
|
"mv_add" whether to create multivalued fields. Overrides MV_ADD from transforms.conf \
|
|
|
"clean_keys" whether to clean keys. Overrides CLEAN_KEYS from transforms.conf \
|
|
|
"keep_empty_vals" whether to keep KV pairs with empty values. Overrides KEEP_EMPTY_VALS from transforms.conf
|
|
|
example1 = reload=true
|
|
|
|
|
|
[extractor-name]
|
|
|
syntax = <string>
|
|
|
description = A stanza that can be found in transforms.conf
|
|
|
note = This is used when props.conf did not explicitly cause an extraction \
|
|
|
for this source, sourcetype or host.
|
|
|
example1 = access-extractions
|
|
|
|
|
|
#################
|
|
|
# fieldformat
|
|
|
#################
|
|
|
[fieldformat-command]
|
|
|
syntax = fieldformat <field> = <eval-expression>
|
|
|
shortdesc = Specifies how to display field values.
|
|
|
description = Expresses how to render a field at output time without changing the underlying value.
|
|
|
example1 = ... | fieldformat start_time = strftime(start_time, "%H:%M:%S")
|
|
|
comment1 = Specify that start_time should be rendered by taking the value of start_time (assuming it is an epoch number) and rendering it to display just the hours, minutes, and seconds corresponding to that epoch time.
|
|
|
usage = public
|
|
|
tags = field format
|
|
|
category = formatting
|
|
|
|
|
|
##################
|
|
|
# fields
|
|
|
##################
|
|
|
|
|
|
[fields-command]
|
|
|
syntax = fields ("+"|"-")? <wc-field-list>
|
|
|
shortdesc = Keeps or removes fields from search results.
|
|
|
description = Keeps or removes fields based on the field list criteria. \
|
|
|
If "+" is specified, only the fields that match one of the fields in the list are kept. \
|
|
|
If "-" is specified, only the fields that match one of the fields in the list are removed.
|
|
|
note = "_*" is the wildcard pattern for Splunk internal fields.\
|
|
|
This is similar to an SQL SELECT statement.
|
|
|
comment1 = Keep only the fields 'source', 'sourcetype', 'host', and all fields beginning with 'error'.
|
|
|
example1 = ... | fields source, sourcetype, host, error*
|
|
|
commentcheat1 = Keep only the "host" and "ip" fields, and display them in the order: "host", "ip".
|
|
|
examplecheat1 = ... | fields host, ip
|
|
|
commentcheat2 = Remove the "host" and "ip" fields.
|
|
|
examplecheat2 = ... | fields - host, ip
|
|
|
category = fields::filter
|
|
|
usage = public
|
|
|
tags = fields select columns
|
|
|
related = rename
|
|
|
|
|
|
##################
|
|
|
# fieldsummary
|
|
|
##################
|
|
|
|
|
|
[fieldsummary-command]
|
|
|
syntax = fieldsummary (maxvals=<num>)? <wc-field-list>?
|
|
|
shortdesc = Generates summary information for all or a subset of the fields.
|
|
|
description = Generates summary information for all or a subset of the fields. Emits a maximum of maxvals distinct values for each field (default = 100).
|
|
|
comment1 = Return summaries for all fields
|
|
|
example1 = ... | fieldsummary
|
|
|
comment2 = Return summaries only for fields that start with date_, and return only the top 10 values for each field.
|
|
|
example2 = ... | fieldsummary maxvals=10 date_*
|
|
|
category = reporting
|
|
|
usage = public
|
|
|
related = af, anomalies, anomalousvalue, stats
|
|
|
|
|
|
##################
|
|
|
# findkeywords
|
|
|
##################
|
|
|
|
|
|
[findkeywords-command]
|
|
|
syntax = findkeywords labelfield=<field> (dedup=<bool>)?
|
|
|
shortdesc = Given some integer labeling of events into groups, finds searches to generate those groups.
|
|
|
description = Typically run after the "cluster" command or similar. Takes a set of results with a field \
|
|
|
("labelfield") that supplies a partition of the results into a set of groups. The command \
|
|
|
then derives a search to generate each of these groups, which may be saved as an event type\
|
|
|
if applicable.
|
|
|
example1 = ... | findkeywords labelfield=foo
|
|
|
example2 = ... | findkeywords labelfield=foo dedup=true
|
|
|
category = reporting
|
|
|
usage = internal
|
|
|
related = cluster findtypes
|
|
|
tags = findkeywords cluster patterns findtypes
|
|
|
|
|
|
##################
|
|
|
# filldown
|
|
|
##################
|
|
|
[filldown-command]
|
|
|
syntax = filldown (<wc-field-list>)?
|
|
|
shortdesc = Replace null values with the last non-null value.
|
|
|
description = Replace null values with the last non-null value for a field or set of fields. \
|
|
|
If no list of fields is given, filldown will be applied to all fields. \
|
|
|
If there is no previous value for a field, it is left blank (null).
|
|
|
comment = Filldown null values for all fields
|
|
|
example = ... | filldown
|
|
|
comment1 = Filldown null values for the count field only
|
|
|
example1 = ... | filldown count
|
|
|
comment2 = Filldown null values for the count field and any field that starts with 'score'
|
|
|
example2 = ... | filldown count score*
|
|
|
usage = public
|
|
|
tags = empty default
|
|
|
category = fields::modify
|
|
|
related = fillnull
|
|
|
|
|
|
##################
|
|
|
# fillnull
|
|
|
##################
|
|
|
|
|
|
[fillnull-command]
|
|
|
syntax = fillnull (value=<string>)? (<field-list>)?
|
|
|
shortdesc = Replaces null values with a specified value.
|
|
|
description = Replaces null values with a user specified value (default "0"). \
|
|
|
Null values are those missing in a particular result, but \
|
|
|
present for some other result. If a field-list is provided, fillnull \
|
|
|
is applied only to fields in the given list (including any field that \
|
|
|
does not exist at all). Otherwise, it applies to all existing fields.
|
|
|
comment = Build a time series chart of web events by host and fill all empty fields with NULL.
|
|
|
example = sourcetype="web" | timechart count by host | fillnull value=NULL
|
|
|
comment1 = For the current search results, fill all empty fields with zero.
|
|
|
example1 = ... | fillnull
|
|
|
comment2 = For the current search results, fill all empty fields with NULL.
|
|
|
example2 = ... | fillnull value=NULL
|
|
|
comment3 = For the current search results, fill all empty field values of "foo" and "bar" with NULL.
|
|
|
example3 = ... | fillnull value=NULL foo bar
|
|
|
usage = public
|
|
|
tags = empty default
|
|
|
category = fields::modify
|
|
|
related = eval
|
|
|
|
|
|
##################
|
|
|
# folderize
|
|
|
##################
|
|
|
[folderize-command]
|
|
|
syntax = folderize attr=<string> (sep=<string>)? (size=<string>)? (minfolders=<int>)? (maxfolders=<int>)?
|
|
|
shortdesc = Replaces "attr" with higher-level grouping, such as replacing filenames with directories.
|
|
|
description = Replaces the "attr" attribute value with a more generic value, which is the result of grouping it with other values from other results, where grouping happens via tokenizing the attr value on the sep separator value. For example, it can group search results, such as those used on the Splunk homepage to list hierarchical buckets (e.g. directories or categories). Rather than listing 200 sources on the Splunk homepage, folderize breaks the source strings by a separator (e.g. "/"), and determines if looking at just directories results in the number of results requested. The default "sep" separator is "::"; the default size attribute is "totalCount"; the default "minfolders" is 2; and the default "maxfolders" is 20.
|
|
|
example1 = | metadata type=sources | folderize maxfolders=20 attr=source sep="/"| sort totalCount d
|
|
|
usage = deprecated
|
|
|
category = results::group
|
|
|
tags = cluster group collect gather
|
|
|
related = bucketdir
|
|
|
|
|
|
##################
|
|
|
# foreach
|
|
|
##################
|
|
|
[foreach-command]
|
|
|
syntax = foreach (<wc-field>)+ (mode=<string>)? (fieldstr=<string>)? (matchstr=<string>)? (matchseg1=<string>)? (matchseg2=<string>)? (matchseg3=<string>)? (itemstr=<string>)? (iterstr=<string>)? <subsearch>
|
|
|
shortdesc = Run a streaming subsearch that uses a template to iterate over each field in a wildcarded field list, or over each value in a single multivalue field or in a single field representing a JSON array.
|
|
|
description = Run a templated streaming subsearch for each field in a wildcarded field list. \i\\
|
|
|
For each field that is matched, the following patterns will be replaced in the templated subsearch: \i\\
|
|
|
option default replacement \i\\
|
|
|
fieldstr <<FIELD>> whole field name \i\\
|
|
|
matchstr <<MATCHSTR>> part of the field name that matches wildcards ("*") in the specifier \i\\
|
|
|
matchseg1 <<MATCHSEG1>> part of field name that matches first wildcard \i\\
|
|
|
matchseg2 <<MATCHSEG2>> part of field name that matches second wildcard \i\\
|
|
|
matchseg3 <<MATCHSEG3>> part of field name that matches third wildcard \i\\
|
|
|
itemstr <<ITEM>> placeholder for elements in the multivalued field / JSON array. \i\\
|
|
|
iterstr <<ITER>> optional placeholder for a zero-based iterator in the multivalued field / JSON array. \i\\
|
|
|
The Splunk software supports iterating over the contents of a JSON array or multivalue field. Specify whether iteration is occurring over JSON arrays or a multivalue field by setting 'mode=multivalue' or 'mode=json_array' and write your subsearch with the term '<<ITEM>>' as a reference to the iterable object. \i\\
|
|
|
The default mode is 'multifield'. When mode is set to 'multifield', iteration takes place over all fields specified in the wildcarded field list provided.
|
|
|
example1= ... | eval total=0 | eval test1=1 | eval test2=2 | eval test3=3 | foreach test* [eval total=total + <<FIELD>>]
|
|
|
comment1= add together all fields with a name that starts with "test" into a total field (result should be total=6)
|
|
|
example2= ... | foreach foo* [eval new_<<MATCHSTR>> = <<FIELD>> + bar<<MATCHSTR>>]
|
|
|
comment2= for each field that matches foo*, add it to the corresponding bar* field and write to a new_* field (e.g. new_X = fooX + barX)
|
|
|
example3= ... | foreach foo bar baz [eval <<FIELD>> = "<<FIELD>>"]
|
|
|
comment3= equivalent to: eval foo="foo" | eval bar="bar" | eval baz="baz"
|
|
|
example4= ... | foreach foo*bar* fieldstr="#field#" matchseg2="#matchseg2#" [eval #field# = "#matchseg2#"]
|
|
|
comment4= for the field fooAbarX, this would be equivalent to: eval fooAbarX = "X"
|
|
|
example5= ... eval mv=mvappend("1", "2", "3"), total = 0 | foreach mode=multivalue mv [eval total = total + <<ITEM>>]
|
|
|
comment5= sum each element of the multivalued field 'mv' and accumulate it under the 'total' variable.
|
|
|
example6= ... eval mv=mvappend("a", "b", "c"), total = 0 | foreach mode=multivalue mv [eval field_<<ITER>> = <<ITEM>>]
|
|
|
comment6= creates field_0, field_1 and field_2 setting their values to "a", "b" and "c" respectively.
|
|
|
usage=public
|
|
|
related=eval
|
|
|
tags=subsearch eval computation wildcard fields
|
|
|
category=search::subsearch
|
|
|
|
|
|
##################
|
|
|
# format
|
|
|
##################
|
|
|
[format-command]
|
|
|
syntax = format (quote=<bool>)? (mvsep="<mv separator>")? (maxresults=<int>)? (<row-prefix> <column-prefix> <column-separator> <column-end> <row-separator> <row end>)?
|
|
|
shortdesc = Takes the results of a subsearch and formats them into a single result.
|
|
|
description = This command is used implicitly by subsearches. This command takes the results \
|
|
|
of a subsearch, formats the results into a single result and places that result \
|
|
|
into a new field called search. \
|
|
|
When 'quote=true', the output search string is surrounded by double quotes and the \
|
|
|
existing double quotes within the search are escaped. This makes the output \
|
|
|
of the subsearch suitable for use within the eval/where searchmatch() function. \
|
|
|
The default for the quote argument is false. \
|
|
|
The mvsep argument is the separator for multivalue fields. The default \
|
|
|
separator is OR. \
|
|
|
The maxresults argument is the maximum number of results to return. The default \
|
|
|
is 0, which means no limit on the number returned. \
|
|
|
The six row and column arguments default to: "(" "(" "AND" ")" "OR" ")"
|
|
|
example1 = ... | head 2 | fields source, sourcetype, host | format "[" "[" "&&" "]" "||" "]"
|
|
|
comment1 = Get the top 2 results. Create a search from the host, source and sourcetype fields. Use the specified format values. \
|
|
|
[ [ host="mylaptop" && source="syslog.log" && sourcetype="syslog" ] || [ host="bobslaptop" && source="bob-syslog.log" && sourcetype="syslog" ] ]
|
|
|
example2 = ... | head 2 | fields source, sourcetype, host | format
|
|
|
comment2 = Get the top 2 results. Create a search from the host, source and sourcetype fields. Use the default format values. The result is a single result in a new field called "search": \
|
|
|
( ( host="mylaptop" AND source="syslog.log" AND sourcetype="syslog" ) OR ( host="bobslaptop" AND source="bob-syslog.log" AND sourcetype="syslog" ) )
|
|
|
example3 = ... | format maxresults = <int>
|
|
|
comment3 = Change the number of results inline with your search by appending the format command to the end of your subsearch.
|
|
|
example4 = ... | eval is_top5 = if(searchmatch([inputcsv test.csv | top 5 user | fields user | format quote=true]),"true","false")
|
|
|
comment4 = Use searchmatch() to create a field to indicate if each user is one of the 5 most common values of the user field from test.csv
|
|
|
usage = public
|
|
|
tags = format query subsearch
|
|
|
category = search::subsearch
|
|
|
related = search
|
|
|
|
|
|
[row-prefix]
|
|
|
syntax = <double-quoted-string>
|
|
|
description = The value to use for the row prefix.
|
|
|
default = "("
|
|
|
|
|
|
[column-prefix]
|
|
|
syntax = <double-quoted-string>
|
|
|
description = The value to use for the column prefix.
|
|
|
default = "("
|
|
|
|
|
|
[column-separator]
|
|
|
syntax = <double-quoted-string>
|
|
|
description = The value to use for the column separator.
|
|
|
default = "AND"
|
|
|
|
|
|
[column-end]
|
|
|
syntax = <double-quoted-string>
|
|
|
description = The value to use for the column end.
|
|
|
default = ")"
|
|
|
|
|
|
[row-separator]
|
|
|
syntax = <double-quoted-string>
|
|
|
description = The value to use for the row separator.
|
|
|
default = "OR"
|
|
|
|
|
|
[row-end]
|
|
|
syntax = <double-quoted-string>
|
|
|
description = The value to use for the row end.
|
|
|
default = ")"
|
|
|
|
|
|
################
|
|
|
# from
|
|
|
################
|
|
|
[from-command]
|
|
|
syntax = from <dataset-type>(<ws>|:)<dataset-name>
|
|
|
shortdesc = Retrieves data from a named dataset, saved search, report, or \
|
|
|
lookup file. Must be the first command in a search.
|
|
|
description = The from command retrieves data from a named dataset, saved \
|
|
|
search, report, CSV lookup file, or KV store lookup file. The \
|
|
|
from command is a generating command and should be the first \
|
|
|
command in the search. Generating commands use a leading pipe \
|
|
|
character.
|
|
|
comment1 = Search a built-in data model that is an internal server log for \
|
|
|
REST API calls.
|
|
|
example1 = | from datamodel:"internal_server.splunkdaccess"
|
|
|
comment2 = Retrieve data using a saved search.
|
|
|
example2 = | from savedsearch:mysecurityquery
|
|
|
comment3 = Specify a dataset name that contains spaces.
|
|
|
example3 = | from savedsearch:"Top five sourcetypes"
|
|
|
comment4 = Retrieve data from a lookup file. Search the contents of the KV \
|
|
|
store collection kvstorecoll that have a CustID value greater \
|
|
|
than 500 and a CustName value that begins with the letter P.
|
|
|
example4 = | from inputlookup:kvstorecoll_lookup | where (CustID>500) AND (CustName="P*") | stats count
|
|
|
category = results::filtering
|
|
|
usage = public
|
|
|
tags = dataset
|
|
|
related = savedsearch, inputlookup, datamodel
|
|
|
|
|
|
##################
|
|
|
# fromjson
|
|
|
##################
|
|
|
[fromjson-command]
|
|
|
syntax = fromjson (<string>) (<fromjson-prefix-opt>)?
|
|
|
description = When given a single field name that points to proper JSON objects, 'fromjson' expands the JSON objects into the Splunk schema, outputting keys as fields and key values as field values.
|
|
|
shortdesc = Extract key-value pairs from JSON data.
|
|
|
usage = public
|
|
|
example1 = | makeresults | eval object=json_object("name", "John", "age", 25) | fromjson object
|
|
|
comment1 = Expand the 'object' field to create two new fields, 'name' and 'age', and output the values in the search result.
|
|
|
|
|
|
[fromjson-prefix-opt]
|
|
|
syntax = prefix=<string>
|
|
|
description = String to prepend to the names of fields extracted via 'fromjson'.
|
|
|
example1 = | fromjson object prefix=json_
|
|
|
comment1 = Expand the 'object' field seen in the first 'fromjson' example, but output the fields as 'json_name' and 'json_age'.
|
|
|
|
|
|
|
|
|
##################
|
|
|
# fsdiscover
|
|
|
##################
|
|
|
|
|
|
##################
|
|
|
# gauge
|
|
|
##################
|
|
|
[gauge-command]
|
|
|
syntax = gauge (<num>|<field>) ((<num>|<field>)+)?
|
|
|
shortdesc = Transforms results into a format that can be displayed by the Gauge chart types.
|
|
|
description = Transforms results into a format suitable for display by the Gauge chart types. Each argument must be a real number or the name of a numeric field. Numeric field values are taken from the first input result; the remainder are ignored. The first argument is the gauge value and is required. Each argument after that is optional and defines a range for different sections of the gauge. If no range values are provided, the gauge starts at 0 and ends at 100. If two or more range values are provided, the gauge begins at the first range value and ends at the final range value. Intermediate range values are used to split the total range into visually distinct subranges. A single range value is meaningless and is treated the same as providing no range values.
|
|
|
example1 = ... | gauge count 0 25 50 75 100
|
|
|
comment1 = Use the value of the count field as the gauge value and have 4 regions to the gauge (0-25,25-50,50-75,75-100)
|
|
|
usage = public
|
|
|
tags = stats format display chart dial
|
|
|
category = reporting
|
|
|
related = eval stats
|
|
|
|
|
|
##################
|
|
|
# gentimes
|
|
|
##################
|
|
|
|
|
|
[gentimes-command]
|
|
|
syntax = gentimes start=<timestamp> (end=<timestamp>)? (increment=<increment>)?
|
|
|
shortdesc = Generates time range results.
|
|
|
description = Generates time range results. This command is useful in conjunction with the 'map' command.
|
|
|
comment1 = All daily time ranges from oct 25 till today
|
|
|
example1 = | gentimes start=10/25/07
|
|
|
comment2 = All daily time ranges from 30 days ago until 27 days ago
|
|
|
example2 = | gentimes start=-30 end=-27
|
|
|
comment3 = All daily time ranges from oct 1 till oct 5
|
|
|
example3 = | gentimes start=10/1/07 end=10/5/07
|
|
|
comment4 = All HOURLY time ranges from oct 1 till oct 5
|
|
|
example4 = | gentimes start=10/1/07 end=10/5/07 increment=1h
|
|
|
usage = public beta
|
|
|
tags = time timestamp subsearch range timerange
|
|
|
generating = true
|
|
|
category = results::generate
|
|
|
related = map
|
|
|
|
|
|
[timestamp]
|
|
|
# the current bnf format would be a royal pain to represent this. perhaps we should support regex
|
|
|
# perhaps some token to indicate no-ws is implied. -- <allws> ....
|
|
|
# syntax = "(\d{1,2})/(\d{1,2})(?:/(\d{2,4}))?(?::(\d{1,2}):(\d{2}):(\d{2}))?"|<int>
|
|
|
syntax = MM/DD/YYYY(:HH:MM:SS)?|<int>
|
|
|
example1 = 10/1/2007:12:34:56
|
|
|
comment2 = 5 days ago
|
|
|
example2 = -5
|
|
|
|
|
|
[increment]
|
|
|
syntax = <int:increment>(s|m|h|d)?
|
|
|
comment1 = 1 hour
|
|
|
example1 = 1h
|
|
|
|
|
|
##################
|
|
|
# geostats
|
|
|
##################
|
|
|
[geostats-command]
|
|
|
syntax = geostats (translatetoxy=<bool>)? (latfield=<string>)? (longfield=<string>)? (outputlatfield=<string>)? (outputlongfield=<string>)? (globallimit=<int>)? (locallimit=<int>)? (binspanlat=<float> binspanlong=<float>)? (maxzoomlevel=<int>)? (<stats-agg-term>)* (<by-clause>)?
|
|
|
shortdesc = Generate statistics which are clustered into geographical bins to be rendered on a world map.
|
|
|
description = Use the geostats command to compute statistical functions suitable for rendering on \
|
|
|
a world map. First, the events will be clustered based on latitude and longitude \
|
|
|
fields in the events. Then, the statistics will be evaluated on the generated \
|
|
|
clusters, optionally grouped or split by fields using a by-clause.\p\\
|
|
|
For map rendering and zooming efficiency, geostats generates clustered stats at a \
|
|
|
variety of zoom levels in one search, the visualization selecting among them. The \
|
|
|
quantity of zoom levels can be controlled by the options \
|
|
|
binspanlat/binspanlong/maxzoomlevel. The initial granularity is selected by \
|
|
|
binspanlat together with binspanlong. At each level of zoom, the number of bins \
|
|
|
will be doubled in both dimensions (a total of 4x as many bins for each zoom-in).
|
|
|
example1= ... | geostats latfield=eventlat longfield=eventlong avg(rating) by gender
|
|
|
comment1= compute the average rating for each gender after clustering/grouping the events by "eventlat" and "eventlong" values.
|
|
|
example2= ... | geostats count
|
|
|
comment2= cluster events by the default latitude and longitude fields "lat" and "lon" respectively. Calculate the count of such events.
|
|
|
example3= sourcetype = access_combined_wcookie | iplocation clientip | geostats count by date_hour
|
|
|
comment3 = take events from apache logs, use iplocation to geocode the ip addresses of the client, and then cluster the events based on how many are happening in each hour of the day.
|
|
|
usage=public
|
|
|
related=stats, xyseries, chart
|
|
|
tags = stats statistics
|
|
|
category = reporting
|
|
|
|
|
|
[binspanlat]
|
|
|
syntax = binspanlat=<float>
|
|
|
description = The size of the bins in latitude degrees at the lowest zoom level. Defaults to 22.5. \
|
|
|
With default binspanlong=45.0, leads to a grid size of 8x8.
|
|
|
|
|
|
[binspanlong]
|
|
|
syntax = binspanlong=<float>
|
|
|
description = The size of the bins in longitude degrees at the lowest zoom level. Defaults to \
|
|
|
45.0. With default binspanlat=22.5, leads to a grid size of 8x8.
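# Illustrative only (not from the reference): per the geostats syntax above, binspanlat and binspanlong
# are specified together; halving both defaults doubles the initial grid in each dimension (16x16):
# ... | geostats binspanlat=11.25 binspanlong=22.5 count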
|
|
|
|
|
|
[globallimit]
|
|
|
syntax = globallimit=<int>
|
|
|
description = Controls the number of named categories to add to each pie-chart. When used with \
|
|
|
count and additive statistics, there will be one additional category called "OTHER" \
|
|
|
which groups all other split-by values. Setting globallimit=0 removes all limits and \
|
|
|
renders all categories. Defaults to 10.
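# Illustrative only (not from the reference): keep the 5 most common host values per pie chart and
# fold the remaining split-by values into "OTHER":
# ... | geostats globallimit=5 count by host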
|
|
|
|
|
|
[latfield]
|
|
|
syntax = latfield=<field>
|
|
|
description = Specify a field from the pre-search that represents the latitude coordinates to use \
|
|
|
in your analysis. Defaults to "lat".
|
|
|
|
|
|
[longfield]
|
|
|
syntax = longfield=<field>
|
|
|
description = Specify a field from the pre-search that represents the longitude coordinates to use \
|
|
|
in your analysis. Defaults to "lon".
|
|
|
|
|
|
[maxzoomlevel]
|
|
|
syntax = maxzoomlevel=<int>
|
|
|
description = The maximum level to be created in the quad tree. Defaults to 9, which specifies \
|
|
|
that 10 zoom levels will be created: 0-9.
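# Illustrative only (not from the reference): create 6 zoom levels (0-5) instead of the default 10:
# ... | geostats maxzoomlevel=5 count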
|
|
|
|
|
|
[outputlatfield]
|
|
|
syntax = outputlatfield=<string>
|
|
|
description = Specify a name for the latitude field in your geostats output data. \
|
|
|
Defaults to "latitude".
|
|
|
|
|
|
[outputlongfield]
|
|
|
syntax = outputlongfield=<string>
|
|
|
description = Specify a name for the longitude field in your geostats output data. \
|
|
|
Defaults to "longitude".
|
|
|
|
|
|
[translatetoxy]
|
|
|
syntax = translatetoxy=<bool>
|
|
|
description = If true, geostats produces one result per each binned location for rendering on a \
|
|
|
map. If false, geostats produces one result per category per binned location and \
|
|
|
cannot be rendered on a map. Defaults to true.
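# Illustrative only (not from the reference): emit one result per host per binned location, as a
# table rather than a map-renderable result set:
# ... | geostats translatetoxy=false count by host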
|
|
|
|
|
|
##################
|
|
|
# geom
|
|
|
##################
|
|
|
[geom-command]
|
|
|
syntax = geom (<featureCollection>)? (<allFeatures>)? (<featureIdField>)? (gen=<num>)? <min_x>? <min_y>? <max_x>? <max_y>?
|
|
|
shortdesc = Used for choropleth map's UI visualization.
|
|
|
description = The geom command generates polygon geometry in JSON format for UI visualization. This command requires a lookup that has been installed with external_type=geo.
|
|
|
example1 = ...| geom
|
|
|
comment1 = When no arguments are provided, the geom command looks for a column named "featureCollection" and a column named "featureId" in the event. These columns are present in the default output from a lookup on the given geo index.
|
|
|
example2 = ...| geom "geo_us_states"
|
|
|
comment2 = This case sets the spatial index name to "geo_us_states".
|
|
|
example3 = ...| geom "geo_us_states" featureIdField="state"
|
|
|
comment3 = This case specifies that the featureId is found in the "state" field of the event.
|
|
|
example4 = ...| geom "geo_us_states" allFeatures=true
|
|
|
comment4 = When allFeatures is used, additional rows are appended for each feature that is not already present in the search results.
|
|
|
|
|
|
usage = public
|
|
|
related = geomfilter, lookup
|
|
|
tags = choropleth map
|
|
|
category = reporting
|
|
|
|
|
|
[featureCollection]
|
|
|
syntax = <string>
|
|
|
description = This option is used to specify the spatial index; the provided string is the index name.
|
|
|
|
|
|
[allFeatures]
|
|
|
syntax = allFeatures=<bool>
|
|
|
description = This option specifies that the output include every geometric feature in the feature collection. When a shape has no values, any aggregate fields, such as "average" or "count", display zero.
|
|
|
|
|
|
[featureIdField]
|
|
|
syntax = featureIdField=<string>
|
|
|
description = This option is used to specify the field name when the event contains the featureId in a field named something other than "featureId".
|
|
|
|
|
|
[min_x]
|
|
|
syntax = min_x=<num>
|
|
|
description = X coordinate of bounding box's bottom-left corner, range [-180, 180].
|
|
|
default = "min_x=-180"
|
|
|
|
|
|
[min_y]
|
|
|
syntax = min_y=<num>
|
|
|
description = Y coordinate of bounding box's bottom-left corner, range [-90, 90].
|
|
|
default = "min_y=-90"
|
|
|
|
|
|
[max_x]
|
|
|
syntax = max_x=<num>
|
|
|
description = X coordinate of bounding box's upper-right corner, range [-180, 180].
|
|
|
default = "max_x=180"
|
|
|
|
|
|
[max_y]
|
|
|
syntax = max_y=<num>
|
|
|
description = Y coordinate of bounding box's upper-right corner, range [-90, 90].
|
|
|
default = "max_y=90"
|
|
|
|
|
|
##################
|
|
|
# geomfilter
|
|
|
##################
|
|
|
[geomfilter-command]
|
|
|
syntax = geomfilter <min_x>? <min_y>? <max_x>? <max_y>?
|
|
|
shortdesc = Provides the clipping feature for choropleth maps.
|
|
|
description = The geomfilter command accepts two points that specify a bounding box for clipping a choropleth map; points that fall outside the bounding box are filtered out.
|
|
|
default = "min_x=-180 min_y=-90 max_x=180 max_y=90"
|
|
|
example1 = ...| geomfilter
|
|
|
comment1 = This case uses the default bounding box, which retains the whole map.
|
|
|
example2 = ...| geomfilter min_x=-90 min_y=-90 max_x=90 max_y=90
|
|
|
comment2 = This case clips the map to half of its full extent.
|
|
|
usage = public
|
|
|
related = geom
|
|
|
tags = choropleth map
|
|
|
category = reporting
|
|
|
|
|
|
|
|
|
##################
|
|
|
# head
|
|
|
##################
|
|
|
|
|
|
[head-command]
|
|
|
syntax = head ((<int>)|("("<eval-expression>")"))? (limit=<int>)? (null=<bool>)? (keeplast=<bool>)?
|
|
|
shortdesc = Returns the first n number of specified results.
|
|
|
description = Returns the first n results, or 10 if no integer is specified.\
|
|
|
As of 4.0, you can provide a boolean eval expression, in which case events are returned until that expression evaluates to false.
|
|
|
commentcheat = Return the first 20 results.
|
|
|
examplecheat = ... | head 20
|
|
|
example1 = ... | streamstats range(_time) as timerange | head (timerange<100)
|
|
|
comment1 = Return events until the time span of the data is >= 100 seconds
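# Illustrative only (not from the reference): with the eval-expression form, keeplast=true is believed
# to also retain the event on which the expression first evaluated to false or NULL:
# ... | streamstats range(_time) as timerange | head (timerange<100) keeplast=true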
|
|
|
category = results::order
|
|
|
usage = public beta
|
|
|
related = reverse, tail
|
|
|
tags = head first top leading latest
|
|
|
|
|
|
##################
|
|
|
# tail
|
|
|
##################
|
|
|
|
|
|
[tail-command]
|
|
|
syntax = tail (<int>)?
|
|
|
shortdesc = Returns the last n number of specified results.
|
|
|
description = Returns the last n results, or 10 if no integer is specified. The events\
|
|
|
are returned in reverse order, starting at the end of the result set.
|
|
|
commentcheat = Return the last 20 results (in reverse order).
|
|
|
examplecheat = ... | tail 20
|
|
|
category = results::order
|
|
|
usage = public beta
|
|
|
related = head, reverse
|
|
|
tags = tail last bottom trailing earliest
|
|
|
|
|
|
##################
|
|
|
# reverse
|
|
|
##################
|
|
|
|
|
|
[reverse-command]
|
|
|
syntax = reverse
|
|
|
shortdesc = Reverses the order of the results.
|
|
|
description = Reverses the order of the results.
|
|
|
commentcheat = Reverse the order of a result set.
|
|
|
examplecheat = ... | reverse
|
|
|
category = results::order
|
|
|
usage = public
|
|
|
related = head, sort, tail
|
|
|
tags = reverse flip invert inverse upsidedown
|
|
|
|
|
|
##################
|
|
|
# history
|
|
|
##################
|
|
|
|
|
|
[history-command]
|
|
|
syntax = history (events=<bool>)?
|
|
|
shortdesc = Returns a history of searches, either as events or as non-event results (default).
|
|
|
description = Returns information about searches that the current user has run. \
|
|
|
By default, the search strings are presented as a field called "search". \
|
|
|
If events=true, then the search strings are presented as the text of the \
|
|
|
events, as the _raw field.
|
|
|
comment = Returns a history of searches as a table
|
|
|
example = | history
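# Illustrative only (not from the reference): per the description above, events=true returns the
# search strings as the text of events, in the _raw field:
# | history events=true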
|
|
|
usage = public
|
|
|
tags = history search
|
|
|
category = results::read
|
|
|
related = search
|
|
|
generating = true
|
|
|
|
|
|
#################
|
|
|
# dbinspect
|
|
|
#################
|
|
|
[dbinspect-command]
|
|
|
syntax = dbinspect (<index-opt>)* (<bin-span>|<timeformat>)? (corruptonly=<bool>)? (bucketstate=<bucketstate-type>)?
|
|
|
shortdesc = Returns information about the buckets in the Splunk Enterprise index.
|
|
|
description = Returns information about the buckets in the Splunk Enterprise index. \
|
|
|
The Splunk Enterprise index is the repository for data from Splunk Enterprise. \
|
|
|
As incoming data is indexed, or transformed into events, Splunk Enterprise \
|
|
|
creates files of rawdata and metadata (index files). The files reside in sets \
|
|
|
of directories organized by age. These directories are called buckets. \
|
|
|
When invoked without the bin-span option, information about the buckets \
|
|
|
is returned in the following fields: \
|
|
|
bucketId, endEpoch, eventCount, guID, hostCount, id, index, modTime, path, \
|
|
|
rawSize, sizeOnDiskMB, sourceCount, sourceTypeCount, splunk_server, startEpoch, state, \
|
|
|
corruptReason. The corruptReason field only appears when corruptonly=true. \p\\
|
|
|
When invoked with a bin span, a table of the spans of each bucket is returned.
|
|
|
comment1 = Display a chart with the span size of 1 day.
|
|
|
example1 = | dbinspect index=_internal span=1d
|
|
|
usage = public beta
|
|
|
tags = inspect index bucket
|
|
|
generating = true
|
|
|
category = administrative
|
|
|
related = metadata
|
|
|
|
|
|
[bucketstate-type]
|
|
|
syntax = all|good|corrupt
|
|
|
description = Specifies which type of bucket state. Defaults to "all".\
|
|
|
"good" only returns good (non-corrupt) buckets.\
|
|
|
"corrupt" only returns corrupted buckets that have a corruptReason.\
|
|
|
"all" returns all buckets.
|
|
|
##################
|
|
|
# iconify
|
|
|
##################
|
|
|
[iconify-command]
|
|
|
syntax = iconify <field-list>
|
|
|
description = Causes the UI to make a unique icon for each value of the fields listed.
|
|
|
comment1 = Displays a different icon for each eventtype.
|
|
|
example1 = ... | iconify eventtype
|
|
|
comment2 = Displays a different icon for each process id.
|
|
|
example2 = ... | iconify pid
|
|
|
comment3 = Displays a different icon for each url and ip combination.
|
|
|
example3 = ... | iconify url ip
|
|
|
category = formatting
|
|
|
usage = public
|
|
|
related = highlight, abstract
|
|
|
tags = ui search icon image
|
|
|
|
|
|
##################
|
|
|
# inputcsv
|
|
|
##################
|
|
|
|
|
|
[inputcsv-command]
|
|
|
syntax = inputcsv (dispatch=<bool>)? (append=<bool>)? (strict=<bool>)? (start=<int>)? (max=<int>)? (events=<bool>)? <filename> (WHERE <string:search-query>)?
|
|
|
shortdesc = Loads search results from the specified CSV file.
|
|
|
description = Populates the results data structure using the given CSV file, \
|
|
|
which is not modified. The filename must refer to a relative \
|
|
|
path in $SPLUNK_HOME/var/run/splunk/csv (if dispatch=true, the \
|
|
|
filename refers to a file in the job directory in \
|
|
|
$SPLUNK_HOME/var/run/splunk/dispatch/<job id>/). If the \
|
|
|
specified file does not exist and the filename does not have an \
|
|
|
extension, the search processor assumes it has a ".csv" \
|
|
|
extension. \
|
|
|
The optional argument 'start' controls the 0-based offset of the \
|
|
|
first event to be read. Defaults to 0. \
|
|
|
The optional argument 'max' controls the maximum number of \
|
|
|
events to be read from the file. Defaults to 1000000000 \
|
|
|
(1 billion), which effectively places no limit on the \
|
|
|
number of events that can be read. \
|
|
|
When set to true, the optional argument 'events' allows imported \
|
|
|
results to be treated as events, so that a proper timeline \
|
|
|
and fields picker are displayed. Otherwise, the results are \
|
|
|
treated as a table of search results with field names as column \
|
|
|
headings. If you set 'events=true' the imported CSV data must \
|
|
|
have _time and _raw fields. Defaults to false. \
|
|
|
If the optional argument 'append' is set to true, the data from \
|
|
|
the CSV file is appended to the current set of results instead \
|
|
|
of replacing it. Defaults to false. \
|
|
|
The optional argument 'strict' forces the search to fail \
|
|
|
completely if the command raises an error. Defaults to false. \
|
|
|
|
|
|
note = 'keeptempdir' is a debugging option that, if true, retains the temporary directory that the given file is copied into for manipulation by the search pipeline. This option should not be mentioned in the external documentation or typeahead.
|
|
|
usage = public
|
|
|
comment1 = Read in events from the CSV file: "$SPLUNK_HOME/var/run/splunk/csv/foo.csv".
|
|
|
example1 = | inputcsv foo.csv
|
|
|
comment2 = Read in events 101 to 600 from either the file 'bar' (if it exists) or 'bar.csv'.
|
|
|
example2 = | inputcsv start=100 max=500 bar
|
|
|
comment3 = Same as example1 except that the events are filtered to where foo is greater than 2 or bar equals 5
|
|
|
example3 = | inputcsv foo.csv where foo>2 OR bar=5
|
|
|
commentcheat = Read in results from the CSV file: "$SPLUNK_HOME/var/run/splunk/csv/all.csv", keep any that contain the string "error", and save the results to the file: "$SPLUNK_HOME/var/run/splunk/csv/error.csv"
|
|
|
examplecheat = | inputcsv all.csv | search error | outputcsv errors.csv
|
|
|
category = results::read
|
|
|
related = outputcsv
|
|
|
tags = input csv load read
|
|
|
generating = true
|
|
|
|
|
|
##################
|
|
|
# inputlookup
|
|
|
##################
|
|
|
|
|
|
[inputlookup-command]
|
|
|
syntax = inputlookup (append=<bool>)? (strict=<bool>)? (start=<int>)? (max=<int>)? (<filename>|<string:tablename>) (where <string:search-query>)?
|
|
|
shortdesc = Loads search results from a specified static lookup table.
|
|
|
description = Reads in a lookup table as specified by a filename (must end with .csv or .csv.gz) \
|
|
|
or a table name (as specified by a stanza name in transforms.conf). \
|
|
|
If 'append' is set to true, the search processor appends the \
|
|
|
data from the lookup file to the current set of results instead \
|
|
|
of replacing it. Defaults to false. \
|
|
|
If 'strict' is set to true, the search fails completely if the \
|
|
|
command raises an error (such as the provision of a nonexistent \
|
|
|
filename). Defaults to false.
|
|
|
usage = public
|
|
|
example1 = | inputlookup users.csv
|
|
|
example2 = | inputlookup usertogroup
|
|
|
example3 = | inputlookup append=t usertogroup
|
|
|
example4 = | inputlookup usertogroup where foo>2 OR bar=5
|
|
|
example5 = | inputlookup geo_us_states
|
|
|
comment1 = Read in "users.csv" lookup file (under $SPLUNK_HOME/etc/system/lookups or $SPLUNK_HOME/etc/apps/*/lookups).
|
|
|
comment2 = Read in "usertogroup" lookup table (as specified in transforms.conf).
|
|
|
comment3 = Same as example2 except that the data from the lookup table is appended to any current results.
|
|
|
comment4 = Same as example2 except that the data from the lookup table is filtered to where foo is greater than 2 or bar equals 5 before being returned.
|
|
|
comment5 = Read in a geospatial lookup table. This can be used to show all geographic features on a Choropleth map.
|
|
|
generating = true
|
|
|
related = inputcsv, join, lookup, outputlookup
|
|
|
tags = lookup input table
|
|
|
category = results::read
|
|
|
|
|
|
##################
|
|
|
# join
|
|
|
##################
|
|
|
[join-command]
|
|
|
syntax = join (<join-options>)* <join-constraints> <dataset>
|
|
|
shortdesc = Use to combine the results of a subsearch with the results of a \
|
|
|
main search.
|
|
|
description = You can perform an inner or left join. Use either 'outer' or \
|
|
|
'left' to specify a left outer join. One or more of the fields \
|
|
|
must be common to each result set. If no fields are specified, \
|
|
|
all of the fields that are common to both result sets are used. \
|
|
|
Limitations on the join subsearch are specified in the \
|
|
|
limits.conf.spec file. Note: Another command, such as append or \
|
|
|
lookup, in combination with either stats or transaction might \
|
|
|
be a better alternative to the join command for flexibility and \
|
|
|
performance. \p\\
|
|
|
The arguments 'left' and 'right' allow for specifying aliases \
|
|
|
in order to preserve the lineage of the fields in both result \
|
|
|
sets. The 'where' argument specifies the aliased fields to join \
|
|
|
on, where the fields are no longer required to be common to both \
|
|
|
result sets.
|
|
|
usage = public
|
|
|
example1 = ... | join product_id [search vendors]
|
|
|
comment1 = Joins previous result set with results from 'search vendors', on \
|
|
|
the product_id field common to both result sets.
|
|
|
example2 = ... | join product_id [search vendors | rename pid AS product_id]
|
|
|
comment2 = Joins previous result set with results from 'search vendors', on \
|
|
|
the product_id field forced to be common to both result sets.
|
|
|
example3 = ... | join left=L right=R WHERE L.product_id=R.pid [search vendors]
|
|
|
comment3 = Joins previous result set with results from 'search vendors', on \
|
|
|
the product id field, represented by field names that do not match in \
|
|
|
the two result sets.
|
|
|
example4 = ... | join datamodel:"internal_server.splunkdaccess"
|
|
|
comment4 = Joins previous result set with results from a built-in data model \
|
|
|
that is an internal server log for REST API calls.
|
|
|
related = append, appendcols, lookup, selfjoin, transaction
|
|
|
tags = join combine unite append csv lookup inner outer left
|
|
|
category = results::append
|
|
|
|
|
|
[join-constraints]
|
|
|
syntax = <field-list> | (left=<leftalias>)? (right=<rightalias>)? WHERE <join-equalities>
|
|
|
description = List of fields to join on with optional aliasing.
|
|
|
|
|
|
[join-equalities]
|
|
|
syntax = <leftalias>.<field>=<rightalias>.<field> (<leftalias>.<field>=<rightalias>.<field>)*
|
|
|
description = Join on aliased fields from corresponding result sets.
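# Illustrative only (echoing join example3 above): with left=L and right=R, a valid
# join-equalities clause is: L.product_id=R.pid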
|
|
|
|
|
|
[leftalias]
|
|
|
syntax = <string>
|
|
|
description = Lineage of fields in left result set.
|
|
|
|
|
|
[rightalias]
|
|
|
syntax = <string>
|
|
|
description = Lineage of fields in right result set.
|
|
|
|
|
|
[join-options]
|
|
|
syntax = type=(inner|outer|left) | usetime=<bool> | earlier=<bool> | overwrite=<bool> | max=<int> | return_multivalue=<bool>
|
|
|
description = Options to the join command. In both inner and left joins, \
|
|
|
events that match are joined. The results of an inner join do \
|
|
|
not include events from the main search that have no matches in \
|
|
|
the subsearch. The results of a left (or outer) join include \
|
|
|
all of the events from the main search, and any of the events \
|
|
|
from the subsearch that have matching values in the main \
|
|
|
search. The usetime option specifies whether to limit matches \
|
|
|
to subresults that are earlier or later than the main result \
|
|
|
to join with. The earlier option is only valid when \
|
|
|
usetime=true. The default for usetime is false. The overwrite \
|
|
|
option indicates if fields from the subresults should overwrite \
|
|
|
those from the main result if they have the same field name. \
|
|
|
The default for overwrite is true. The max option specifies \
|
|
|
the maximum number of subresults each main result can join \
|
|
|
with. The default for max is 1. Specify 0 to indicate there is \
|
|
|
no limit. The return_multivalue option specifies if the result should \
|
|
|
be multi-value or single-value. The default for return_multivalue is false. \p\\
|
|
|
default = type=inner | usetime=false | earlier=true | overwrite=true | max=1 | return_multivalue=false
|
|
|
example1 = type=outer
|
|
|
example2 = usetime=t
|
|
|
example3 = usetime=t earlier=f
|
|
|
example4 = overwrite=f
|
|
|
example5 = max=3
|
|
|
example6 = return_multivalue=t
|
|
|
|
|
|
##################
|
|
|
# selfjoin
|
|
|
##################
|
|
|
[selfjoin-command]
|
|
|
syntax = selfjoin (<selfjoin-options>)* <field-list>
|
|
|
shortdesc = Joins a result set with itself.
|
|
|
description = Joins a result set with itself, based on a specified field or list of fields to join on.
|
|
|
usage = public
|
|
|
example1 = ... | selfjoin id
|
|
|
comment1 = Join results with one another on the 'id' field.
|
|
|
related = join
|
|
|
tags = join combine unite
|
|
|
category = results::filter
|
|
|
|
|
|
[selfjoin-options]
|
|
|
syntax = overwrite=<bool> | max=<int> | keepsingle=<bool>
|
|
|
description = The selfjoin joins each result with other results that have the same value for the join fields. 'overwrite' controls if fields from these 'other' results should overwrite fields of the result used as the basis for the join (default=true). max indicates the maximum number of 'other' results each main result can join with. (default = 1, 0 means no limit). 'keepsingle' controls whether or not results with a unique value for the join fields (and thus no other results to join with) should be retained. (default = false)
|
|
|
default = overwrite=true | max=1 | keepsingle=false
|
|
|
example1 = overwrite=f
|
|
|
example2 = max=3
|
|
|
example3 = keepsingle=t
|
|
|
|
|
|
|
|
|
##################
|
|
|
# kmeans
|
|
|
##################
|
|
|
|
|
|
[kmeans-command]
|
|
|
syntax = kmeans (<kmeans-options>)* (<field-list>)?
|
|
|
shortdesc = Performs k-means clustering on selected fields.
|
|
|
description = Performs k-means clustering on the selected fields (or all numerical fields if none are specified). Events in the same cluster are \
|
|
|
moved next to each other. You have the option to display the cluster number for each event. The centroid of each cluster is \
|
|
|
also displayed (with an option to disable it).
|
|
|
usage = public
|
|
|
comment1 = Group results into 2 clusters based on the values of all numerical fields.
|
|
|
example1 = ... | kmeans
|
|
|
commentcheat = Group search results into 4 clusters based on the values of the "date_hour" and "date_minute" fields.
|
|
|
examplecheat = ... | kmeans k=4 date_hour date_minute
|
|
|
category = results::group
|
|
|
related = anomalies, anomalousvalue, cluster, outlier
|
|
|
tags = cluster group collect gather
|
|
|
|
|
|
[kmeans-options]
|
|
|
syntax = <kmeans-reps>|<kmeans-iters>|<kmeans-t>|<kmeans-k>|<kmeans-cnumfield>|<kmeans-distype>|<kmeans-centroids>
|
|
|
description = Options for kmeans command
|
|
|
|
|
|
[kmeans-reps]
|
|
|
syntax = reps=<int>
|
|
|
description = Number of times to repeat kmeans using random starting clusters
|
|
|
default = "reps=10"
|
|
|
|
|
|
[kmeans-iters]
|
|
|
syntax = maxiters=<int>
|
|
|
description = Maximum number of iterations allowed before failing to converge
|
|
|
default = "maxiters=10000"
|
|
|
|
|
|
[kmeans-t]
|
|
|
syntax = t=<num>
|
|
|
description = Algorithm convergence tolerance
|
|
|
default = "t=0"
|
|
|
|
|
|
[kmeans-k]
|
|
|
syntax = k=<int>(-<int>)?
|
|
|
description = Number of initial clusters to use. If specified as a range, clustering will be performed for each \
|
|
|
count of clusters in the range, and a summary of the result of each run will be provided expressing \
|
|
|
the size of the clusters and the 'distortion', a measure of how poorly the data fit their clusters. \
|
|
|
Distortion is the sum of the squared distances between each item and its cluster center.
|
|
|
default = "k=2"
|
|
|
|
|
|
[kmeans-cnumfield]
|
|
|
syntax = cfield=<field>
|
|
|
description = Controls the field name for the cluster number for each event
|
|
|
default = "cfield=CLUSTERNUM"
|
|
|
|
|
|
[kmeans-distype]
|
|
|
syntax = dt=(l1|l1norm|cityblock|cb|l2|l2norm|sq|sqeuclidean|cos|cosine)
|
|
|
simplesyntax = dt=(l1norm|l2norm|cityblock|sqeuclidean|cosine)
|
|
|
description = Distance metric to use. L1/L1NORM is equivalent to CITYBLOCK; L2NORM is equivalent to SQEUCLIDEAN.
|
|
|
default = "dt=L2NORM"
|
|
|
|
|
|
[kmeans-centroids]
|
|
|
syntax = showcentroid=<bool>
|
|
|
description = Expose the centroid centers in the search results if showcentroid is true; suppress them if false.
|
|
|
default = "showcentroid=true"
|
|
|
|
|
|
##################
|
|
|
# kvform
|
|
|
##################
|
|
|
|
|
|
[kvform-command]
|
|
|
syntax = kvform (form=<string>)? (field=<field>)?
|
|
|
shortdesc = Extracts values from search results, using a form template.
|
|
|
description = Extracts key/value pairs from events based on a form\
|
|
|
template that describes how to extract the values. If FORM is specified,\
|
|
|
it uses an installed <FORM>.form file found in the splunk configuration form directory.\
|
|
|
For example, if "form=sales_order", would look for a "sales_order.form"\
|
|
|
file in the 'forms' subdirectory in all apps, e.g. $SPLUNK_HOME$/etc/apps/*/forms/. \
|
|
|
All the events processed would\
|
|
|
be matched against that form, trying to extract values.\p\\
|
|
|
If no FORM is specified, then the FIELD value determines the name of the field to\
|
|
|
extract. For example, if "field=error_code", then an event that has an error_code=404,\
|
|
|
would be matched against a "404.form" file.\p\\
|
|
|
The default value for FIELD is "sourcetype", thus by default kvform will look for \
|
|
|
<SOURCETYPE>.form files to extract values.\p\\
|
|
|
A .form file is essentially a text file of all the static parts of a form, \
|
|
|
interspersed with named references to regular expressions, of the type found in\
|
|
|
transforms.conf. A .form file might look like this:\i\\
|
|
|
Students Name: [[string:student_name]] \i\\
|
|
|
Age: [[int:age]] Zip: [[int:zip]] .
|
|
|
note = Runs of whitespace, including blank lines, are ignored during matching. This command could be merged into kv and invoked automatically, at no cost if the needed .form file is not found.
|
|
|
comment1 = Extract values from "eventtype.form" if the file exists.
|
|
|
example1 = ... | kvform field=eventtype
|
|
|
usage = public/experimental
|
|
|
related = extract, multikv, rex, xmlkv
|
|
|
tags = form extract template
|
|
|
category = fields::add
|
|
|
|
|
|
|
|
|
##################
|
|
|
# localize
|
|
|
##################
|
|
|
[localize-command]
|
|
|
syntax = localize <lmaxpause-opt>? <after-opt>? <before-opt>?
|
|
|
shortdesc = Returns a list of time ranges in which the search results were found.
|
|
|
description = Generates a list of time contiguous event regions \
|
|
|
defined as: a period of time in which consecutive events \
|
|
|
are separated by at most 'maxpause' time. The found regions \
|
|
|
can be expanded using the 'timeafter' and 'timebefore' modifiers \
|
|
|
to expand the range after/before the last/first event in \
|
|
|
the region, respectively. The regions are returned in time-descending \
|
|
|
order, just as search results (time of region is start time).\
|
|
|
The regions discovered by localize are meant to be fed into \
|
|
|
the MAP command, which will use a different region for each iteration. \
|
|
|
Localize also reports: (a) number of events in the range, (b) range \
|
|
|
duration in seconds and (c) region density defined as (#of events in range) \
|
|
|
divided by (range duration) - events per second.
|
|
|
comment1 = As an example, searching for "error" and then calling localize finds good regions around \
|
|
|
where error occurs, and passes each on to the search inside the map command, so \
|
|
|
that each iteration works with a specific timerange to find promising transactions
|
|
|
example1 = error | localize | map search="search starttimeu::$starttime$ endtimeu::$endtime$ |transaction uid,qid maxspan=1h"
|
|
|
commentcheat = Search the time range of each previous result for "failure".
|
|
|
examplecheat = ... | localize maxpause=5m | map search="search failure starttimeu=$starttime$ endtimeu=$endtime$"
|
|
|
category = search::subsearch
|
|
|
usage = public beta
|
|
|
related = map, transaction
|
|
|
tags = time timestamp subsearch range timerange
|
|
|
|
|
|
[lmaxpause-opt]
|
|
|
syntax = maxpause=<int>(s|m|h|d)?
|
|
|
description = the maximum (inclusive) time between two consecutive events in a contiguous time region
|
|
|
default = "maxpause=1m"
|
|
|
|
|
|
[after-opt]
|
|
|
syntax = timeafter=<int>(s|m|h|d)?
|
|
|
description = the amount of time to add to endtime (i.e., expand the time region forward in time)
|
|
|
default = "timeafter=30s"
|
|
|
|
|
|
[before-opt]
|
|
|
syntax = timebefore=<int>(s|m|h|d)?
|
|
|
description = the amount of time to subtract from starttime (i.e., expand the time region backward in time)
|
|
|
default = "timebefore=30s"
|
|
|
|
|
|
|
|
|
##################
|
|
|
# localop
|
|
|
##################
|
|
|
[localop-command]
|
|
|
syntax = localop
|
|
|
shortdesc = Prevents subsequent commands from being executed on remote peers.
|
|
|
description = Prevents subsequent commands from being executed on remote peers, i.e. forces subsequent commands to be part of the reduce step.
|
|
|
example1 = FOO BAR | localop | iplocation clientip
|
|
|
comment1 = The iplocation command in this case will never be run on remote peers. All events from remote peers from the initial search for the terms FOO and BAR will be forwarded to the search head where the iplocation command will be run.
|
|
|
tags = debug distributed
|
|
|
usage = public unsupported/beta
|
|
|
category = search::search
|
|
|
|
|
|
|
|
|
##################
|
|
|
# loadjob
|
|
|
##################
|
|
|
[loadjob-command]
|
|
|
syntax = loadjob (<sid-opt>|<savedsearch-identifier>) <result-event-opt>? <delegate-opt>? <artifact-offset-opt>? <ignore-running-opt>? <wait-opt>? <wait-timeout-opt>?
|
|
|
shortdesc = Loads events or results of a previously completed search job.
|
|
|
description = The artifacts to load are identified either by the search job id or by a scheduled search name and the time range of the current search. If a savedsearch name is provided and multiple artifacts are found within that range, the latest artifacts are loaded.
|
|
|
example1 = | loadjob 1233886270.2 events=t
|
|
|
comment1 = Loads the events that were generated by the search job with id=1233886270.2
|
|
|
example2 = | loadjob savedsearch="admin:search:MySavedSearch"
|
|
|
comment2 = Loads the results of the latest scheduled execution of savedsearch MySavedSearch in the 'search' application owned by admin
|
|
|
related = inputcsv, file
|
|
|
usage = public
|
|
|
tags = artifacts
|
|
|
generating = true
|
|
|
category = results::generate
|
|
|
|
|
|
[sid-opt]
|
|
|
syntax = <string>
|
|
|
description = The search id of the job whose artifacts need to be loaded.
|
|
|
example = 1233886270.2
|
|
|
|
|
|
[savedsearch-identifier]
|
|
|
syntax = savedsearch="<user-string>:<application-string>:<search-name-string>"
|
|
|
description = The unique identifier of a savedsearch whose artifacts need to be loaded. A savedsearch \
|
|
|
is uniquely identified by the triplet {user, application, savedsearch name}.
|
|
|
example = savedsearch="admin:search:my saved search"
|
|
|
|
|
|
[user-string]
|
|
|
syntax = <string>
|
|
|
description = The user name of the owner of the saved search.
|
|
|
example = admin
|
|
|
|
|
|
[application-string]
|
|
|
syntax = <string>
|
|
|
description = The name of the application in which the saved search is defined.
|
|
|
example = search
|
|
|
|
|
|
[search-name-string]
|
|
|
syntax = <string>
|
|
|
description = The name of the saved search.
|
|
|
example = mysavedsearch
|
|
|
|
|
|
[result-event-opt]
|
|
|
syntax = events=<bool>
|
|
|
description = events=true loads events, while events=false loads results. Defaults to false.
|
|
|
example = events=true
|
|
|
|
|
|
[delegate-opt]
|
|
|
syntax = job_delegate=<string>
|
|
|
description = When specifying a savedsearch, this option selects jobs that were started by the given user. \
|
|
|
Scheduled jobs will be run by the delegate "scheduler". Dashboard-embedded searches will be \
|
|
|
run in accordance with the dispatchAs parameter (typically the owner of the search) for the \
|
|
|
savedsearch. \
|
|
|
Defaults to scheduler.
|
|
|
example = job_delegate=scheduler
|
|
|
|
|
|
[artifact-offset-opt]
|
|
|
syntax = artifact_offset=<int>
|
|
|
description = Select a search artifact other than the most recent one, based on search start time. For example \
|
|
|
if artifact_offset=1, the second most recent will be loaded; if artifact_offset=2, the third most recent \
|
|
|
will be loaded. Attempting to load an offset past the last available artifact will result in an error.\
|
|
|
Defaults to 0, or the most recent.
|
|
|
example = artifact_offset=1
|
|
|
|
|
|
[ignore-running-opt]
|
|
|
syntax = ignore_running=<bool>
|
|
|
description = Skip over artifacts whose search is still running (default: true)
|
|
|
example = ignore_running=false
|
|
|
|
|
|
[wait-opt]
|
|
|
syntax = wait_until_finished=<bool>
|
|
|
description = Specifies whether to wait for the job to finish running before loading any of the job artifacts \
|
|
|
or to start loading job artifacts while the job is still running. \
|
|
|
Default: false
|
|
|
example = wait_until_finished=true
|
|
|
|
|
|
[wait-timeout-opt]
|
|
|
syntax = wait_timeout=<int>
|
|
|
description = Specifies the amount of time, in seconds, to wait for the job to finish. Setting this option \
|
|
|
without setting "wait_until_finished=true" has no effect on the loadjob command. \
|
|
|
The command will run as if 'wait_until_finished' is set to "false". \
|
|
|
Default: 60
|
|
|
example = wait_timeout=120
|
|
|
|
|
|
##################
|
|
|
# lookup
|
|
|
##################
|
|
|
[lookup-command]
|
|
|
syntax = lookup (local=<bool>)? (update=<bool>)? (event_time_field=<string>)? <string:lookup-table-name> (<field:lookup> (as <field:local>)? )+ (OUTPUT|OUTPUTNEW (<field:dest> (as <field:local-dest>)? )+ )?
|
|
|
shortdesc = Explicitly invokes field value lookups.
|
|
|
description = Manually invokes field value lookups from an existing lookup table or external \
|
|
|
script. Lookup tables must be located in the lookups directory of \
|
|
|
$SPLUNK_HOME/etc/system/lookups or $SPLUNK_HOME/etc/apps/<app-name>/lookups. \
|
|
|
External scripts must be located in $SPLUNK_HOME/etc/apps/<app_name>/bin.\p\\
|
|
|
Specify a lookup field to match to a field in the events and, optionally, \
|
|
|
destination fields to add to the events. If you do not specify destination fields, \
|
|
|
the command adds all fields in the lookup table to events that have the match field. You can \
|
|
|
also overwrite fields in the events with fields in the lookup table, if they have \
|
|
|
the same field name.
|
|
|
example1 = ... | lookup usertogroup user as local_user OUTPUT group as user_group
|
|
|
comment1 = There is a lookup table specified in a stanza named 'usertogroup' in transforms.conf. This lookup table contains (at least) two fields, 'user' and 'group'. For each event, we look up the value of the field 'local_user' in the table and, for any entries that match, the value of the 'group' field in the lookup table is written to the field 'user_group' in the event.
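# Illustrative only (not from the reference): OUTPUTNEW behaves like OUTPUT except that it is believed
# to write the destination fields only when they do not already exist in the event:
# ... | lookup usertogroup user as local_user OUTPUTNEW group as user_group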
|
|
|
usage = public
|
|
|
related = appendcols inputlookup outputlookup
|
|
|
tags = join combine append lookup table
|
|
|
category = fields::read
|
|
|
|
|
|
|
|
|
##################
|
|
|
# makecontinuous
|
|
|
##################
|
|
|
|
|
|
[makecontinuous-command]
|
|
|
syntax = makecontinuous (<field>)? (<bin-options>)*
|
|
|
shortdesc = Makes a field that is supposed to be the x-axis continuous (invoked by chart/timechart).
|
|
|
description = Makes a field that is supposed to be the x-axis continuous (invoked by chart/timechart).
|
|
|
usage = public
|
|
|
comment1 = Make "_time" continuous with a span of 10 minutes.
|
|
|
example1 = ... | makecontinuous _time span=10m
|
|
|
category = reporting
|
|
|
tags = continuous
|
|
|
related = chart timechart
|
|
|
|
|
|
##################
|
|
|
# tojson #
|
|
|
##################
|
|
|
|
|
|
[tojson-command]
|
|
|
simplesyntax = tojson ( (auto|str|num|bool|json|none) "(" <wc-field>? ")" )* <tojson-command-arguments>
|
|
|
syntax = tojson (<tojson-function>?)* <tojson-command-arguments>
|
|
|
shortdesc = Converts event data into JSON format.
|
|
|
description = Converts events into JSON objects. Specify which fields get converted by identifying them through exact match or \
|
|
|
through wildcard expressions. Apply datatypes to field values with datatype functions. \
|
|
|
In the JSON object, the key is the name of the field. The value is the corresponding field value. \
|
|
|
If a field contains a multivalue, the 'tojson' processor converts it into a JSON array. \
|
|
|
If you do not specify any fields, the 'tojson' processor creates JSON objects for each event that include all \
|
|
|
available fields. In other words, 'tojson' applies the function 'none(*)' to the query. \
|
|
|
This command is non-generating.\
|
|
|
comment1 = Convert each event in the index "_internal" into a JSON representation.
|
|
|
example1 = index=_internal | tojson
|
|
|
comment2 = Convert events into JSON objects that have only the 'date_*' fields from each event, where the field 'date_hour' \
|
|
|
is interpreted as a numeric datatype and the other date fields are interpreted as string datatypes.
|
|
|
example2 = index=_internal | tojson num(date_hour) str(date_*)
|
|
|
usage = public
|
|
|
|
|
|
[tojson-function]
|
|
|
syntax = <tojson-auto>|<tojson-num>|<tojson-str>|<tojson-bool>|<tojson-json>|<tojson-none>
|
|
|
|
|
|
[tojson-auto]
|
|
|
syntax = auto("(" (<wc-field>)? ")")?
|
|
|
description = Convert all values of the specified field into JSON-formatted output. Automatically determine the field datatypes. \
|
|
|
If the value is numeric, the JSON output has a numeric type and includes a literal numeric. \
|
|
|
If the value is the string 'true' or 'false', the JSON output has a boolean type. \
|
|
|
If the value is a literal "null", the JSON output has a null type and includes a null value. \
|
|
|
If the value is a string, the 'tojson' processor examines the string. If it is proper JSON, the 'tojson' processor \
|
|
|
outputs a nested JSON object. If it is not proper JSON, the JSON output includes the string. \
|
|
|
When a field contains multivalues, the 'tojson' processor outputs a JSON array where the preceding criteria are applied \
|
|
|
to each element of the array.\
|
|
|
comment1 = Create JSON output limited to the fields 'name', 'age', and 'isRegistered' that automatically applies types to each \
|
|
|
of the fields.
|
|
|
example1 = ... | tojson auto(name) auto(age) auto(isRegistered)
|
|
|
comment2 = Convert all events into JSON output. Apply appropriate types to all fields.
|
|
|
example2 = ... | tojson auto(*)
|
|
|
|
|
|
[tojson-num]
|
|
|
syntax = num("(" (<wc-field>)? ")")?
|
|
|
description = Convert all values of the specified field into the numeric type. \
|
|
|
If the value is already a number, the 'tojson' processor outputs that value and gives it the numeric type. If the \
|
|
|
value is a string, the 'tojson' processor attempts to parse that string as a number -- if it can't, it skips that value. \
|
|
|
When a field is multivalued, 'tojson' processor outputs a JSON array where each element of the array has the \
|
|
|
numeric type. \
|
|
|
comment = Convert the field 'count' into JSON output. Apply the numeric type to that field.
|
|
|
example = ... | tojson num(count)
|
|
|
|
|
|
[tojson-str]
|
|
|
syntax = str("(" (<wc-field>)? ")")?
|
|
|
description = Convert all values of the specified field into the string type. \
|
|
|
The 'tojson' processor gives all values of the specified field the string type, even if they are numbers, \
|
|
|
booleans, and so on. \
|
|
|
When a field contains multivalues, 'tojson' processor outputs a JSON array where each element of the array has the \
|
|
|
string type. \
|
|
|
comment = Convert the field 'students' into a JSON output. Apply the string type to that field.
|
|
|
example = ... | tojson str(students)
|
|
|
|
|
|
[tojson-bool]
|
|
|
syntax = bool("(" (<wc-field>)? ")")?
|
|
|
description = Convert all values of the specified field into the boolean type. \
|
|
|
If the value is a number, the 'tojson' processor outputs 'false' only if that value is '0'. Otherwise, the \
|
|
|
'tojson' processor outputs 'true'. \
|
|
|
If the value is a string, the 'tojson' processor outputs 'false' only if the value is 'false', 'f', or 'no'. The \
|
|
|
'tojson' processor outputs 'true' if the value is 'true', 't', or 'yes'. If the string is none of the above, it is skipped. \
|
|
|
When a field contains multivalues, the 'tojson' processor outputs a JSON array where 'true' and 'false' values are \
|
|
|
applied according to the preceding criteria. \
|
|
|
comment = Convert the field 'isInternal' into a JSON output. Apply the boolean type to that field.
|
|
|
example = ... | tojson bool(isInternal)
|
|
|
|
|
|
[tojson-json]
|
|
|
syntax = json("(" (<wc-field>)? ")")?
|
|
|
description = Convert all values for the specified field into the JSON type, using string validation. \
|
|
|
If the value is a number, the 'tojson' processor outputs that number. If the value is a string, the 'tojson' \
|
|
|
processor examines the string. If the string is valid JSON, the 'tojson' processor outputs the string as a JSON \
|
|
|
block. If the field is invalid JSON, the 'tojson' processor skips it. \
|
|
|
If the field contains multivalues, the 'tojson' processor outputs a JSON array where each element is evaluated and typed \
|
|
|
according to the preceding criteria. \
|
|
|
comment = Convert the field 'test_inputs' into a JSON object. Apply the JSON type to that field.
|
|
|
example = ... | tojson json(test_inputs)
|
|
|
|
|
|
[tojson-none]
|
|
|
syntax = none("(" (<wc-field>)? ")")?
|
|
|
description = Output the values for the specified field in the JSON type, without string validation. \
|
|
|
If the value is a number, the 'tojson' processor outputs a numeric type in the JSON block. If the value is a \
|
|
|
string, the 'tojson' processor outputs a string. \
|
|
|
If the field contains multivalues, the 'tojson' processor outputs a JSON array where each element of the array is \
|
|
|
either a string or a number. \
|
|
|
comment = Convert the fields 'name' and 'age' into a JSON representation. Apply the none type to both fields.
|
|
|
example = ... | tojson none(name) none(age)
|
|
|
|
|
|
[tojson-command-arguments]
|
|
|
syntax = (fill_null=<bool>)? (include_internal=<bool>)? (output_field=<string>)? (default_type=<string>)?
|
|
|
description = 'fill_null' is a boolean argument that outputs a literal 'null' value when the 'tojson' processor skips a value. \
|
|
|
For example, if the 'json' function is used on a field that does not have proper json, the 'tojson' processor \
|
|
|
normally skips the field. However, when 'fill_null=true', the 'tojson' processor outputs a 'null' value for the \
|
|
|
field. Defaults to false. \
|
|
|
'include_internal' is a boolean argument that includes internal fields in the JSON output when it is set to true. \
|
|
|
Defaults to false. \
|
|
|
'output_field' specifies the field that 'tojson' processor should write the output JSON to. Defaults to '_raw'. \
|
|
|
'default_type' specifies the function that the 'tojson' processor should apply to fields that don't explicitly \
|
|
|
specify a function. Defaults to 'none'. \
|
|
|
comment1 = Convert the fields 'age','height', and 'weight' into number types, convert 'name' into a string type, and write the JSON to the field 'my_json_field'.
|
|
|
example1 = ... | tojson age height weight str(name) default_type=num output_field=my_json_field
|
|
|
comment2 = Convert all fields, including internal fields, into JSON format. Assign a 'null' value to fields that are skipped.
|
|
|
example2 = ... | tojson include_internal=true fill_null=true
|
|
|
|
|
|
##################
|
|
|
# makeresults
|
|
|
##################
|
|
|
|
|
|
[makeresults-command]
|
|
|
syntax = makeresults (<makeresults-count-option>)? (<makeresults-annotate-option>)? (<makeresults-splunk-server-option>)? (<makeresults-splunk-server-group-option>)* (<makeresults-format-option>)? (<makeresults-data-option>)?
|
|
|
shortdesc = Create a specified number of empty results.
|
|
|
description = Creates a specified number of empty search results. This command will run only on the local machine \
|
|
|
by default and will generate one unannotated empty result. It may be used in conjunction with the eval command to \
|
|
|
generate an empty result for the eval command to operate on. \
|
|
|
Events can also be generated from inlined CSV/JSON strings via the provided `format` and `data` arguments.
|
|
|
note = If a search begins with an eval command, it will return no results. makeresults is implicitly injected at the \
|
|
|
beginning of such searches.
|
|
|
example1 = makeresults | eval foo="foo"
|
|
|
example2 = index=_internal _indextime > [makeresults | eval it=now()-60 | return $it]
|
|
|
usage = public
|
|
|
category = results::generate
|
|
|
|
|
|
[makeresults-count-option]
|
|
|
syntax = count=<int>
|
|
|
description = The number of empty results to generate
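|
|
|
# Illustrative example, not part of the official reference; the eval expression is arbitrary:
|
|
|
example1 = | makeresults count=5 | eval n=random() % 10
|
|
|
comment1 = Generate 5 empty results, then use eval to add a random single-digit field 'n' to each result.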
|
|
|
|
|
|
[makeresults-annotate-option]
|
|
|
syntax = annotate=<bool>
|
|
|
description = If set to true, the results will contain the splunk_server, splunk_server_group, and _time fields indicating when they were created. \
|
|
|
These may be used to compute aggregates, etc. Certain order-sensitive processors may also fail if the internal _time field is absent. \
|
|
|
False by default.
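|
|
|
# Illustrative example, not part of the official reference:
|
|
|
example1 = | makeresults count=3 annotate=true
|
|
|
comment1 = Generate 3 empty results that include the splunk_server, splunk_server_group, and _time fields.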
|
|
|
|
|
|
[makeresults-splunk-server-option]
|
|
|
syntax = splunk_server=<string>
|
|
|
description = Optional. Limits results to one specific server. Use "local" to refer to the search head.
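|
|
|
# Illustrative example, not part of the official reference:
|
|
|
example1 = | makeresults count=1 splunk_server=local
|
|
|
comment1 = Generate a single empty result on the search head only.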
|
|
|
|
|
|
[makeresults-splunk-server-group-option]
|
|
|
syntax = splunk_server_group=<string>
|
|
|
description = Optional. Limits results to one specific server_group.
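|
|
|
# Illustrative example, not part of the official reference; 'my_indexer_group' is a hypothetical group name:
|
|
|
example1 = | makeresults count=1 splunk_server_group=my_indexer_group
|
|
|
comment1 = Generate a single empty result on members of the specified server group.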
|
|
|
|
|
|
[makeresults-format-option]
|
|
|
syntax = format=<string>
|
|
|
description = Specify the format of data used to generate events. Can be one of ('csv', 'json'). \
|
|
|
If 'format' is specified, the 'data' argument must specify properly-formatted inline \
|
|
|
JSON or CSV. Furthermore, no other 'makeresults' arguments (such as 'count' or \
|
|
|
'annotate') can be specified when 'format' is specified. \
|
|
|
For more information about what constitutes valid JSON/CSV inlined data, see the \
|
|
|
following 'makeresults-data-option' stanza. \
|
|
|
|
|
|
[makeresults-data-option]
|
|
|
syntax = data=<string>
|
|
|
description = Inline data that 'makeresults' converts into Splunk events. The data must have a JSON \
|
|
|
or CSV format. \
|
|
|
Inline data in JSON format must be provided as a series of JSON objects within a single \
|
|
|
JSON array. Each JSON object corresponds to a separate event. For each JSON object, \
|
|
|
keys become fields, and values become field values. Each JSON key must be escape-quoted. \
|
|
|
The full JSON array must be enclosed in normal quotes. \
|
|
|
Inline data in CSV format consists of a set of lines. The first line contains the schema, \
|
|
|
or headers, for the CSV table. This first line consists of a comma-separated list of \
|
|
|
strings where each string corresponds to a field name. The schema ends when a newline \
|
|
|
character is reached. Each line that follows the schema line represents a single event, \
|
|
|
with comma-separated field values. Use newlines to indicate the end of one event and the \
|
|
|
beginning of another. Inline data searches cannot exceed a threshold of 30000 characters. \
|
|
|
If you specify the 'data' argument, 'makeresults' ignores all other arguments (such as \
|
|
|
'count' and 'annotate'). \
|
|
|
comment1 = Generate two events for 35-year-old John and 39-year-old Sarah from the provided inlined JSON array.
|
|
|
example1 = | makeresults format=json data="[{\"name\":\"John\", \"age\":35}, {\"name\":\"Sarah\", \"age\":39}]"
|
|
|
comment2 = Generate the same two events, but from an inlined CSV snippet. (The backslash characters in the following example are not part of the example.)
|
|
|
example2 = | makeresults format=csv data="name,age \
|
|
|
John,35 \
|
|
|
Sarah,39"
|
|
|
|
|
|
|
|
|
##################
|
|
|
# map
|
|
|
##################
|
|
|
|
|
|
[map-command]
|
|
|
syntax = map (<searchoption>|<savedsplunkoption>) <maxsearchesoption>?
|
|
|
shortdesc = Looping operator, performs a search over each search result.
|
|
|
description = For each input search result, takes the field-values\
|
|
|
from that result and substitutes their value for the $variable$ in the\
|
|
|
search argument. The value of variables surrounded in quotes (e.g. text="$_raw$") will be quote escaped. \
|
|
|
The search argument can either be a search string to run\
|
|
|
or the name of a savedsearch. The following metavariables are \
|
|
|
also supported: \
|
|
|
1. $_serial_id$ - 1-based serial number within map of the search being executed.
|
|
|
usage = public beta
|
|
|
example1 = error | localize | map mytimebased_savedsearch
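|
|
|
# Illustrative example, not part of the official reference; assumes each input result carries 'start' and 'end' fields:
|
|
|
example2 = ... | map search="search starttimeu::$start$ endtimeu::$end$" maxsearches=10
|
|
|
comment2 = For each input result, run the templated search with $start$ and $end$ replaced by that result's field values, running at most 10 searches.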
|
|
|
tags = map loop savedsearch
|
|
|
category = results::generate
|
|
|
related = gentimes, search
|
|
|
|
|
|
[searchoption]
|
|
|
syntax = search=\"<string>\"
|
|
|
description = Search to run map on.
|
|
|
example1 = search="search starttimeu::$start$ endtimeu::$end$"
|
|
|
default = none
|
|
|
|
|
|
[savedsplunkoption]
|
|
|
syntax = <string>
|
|
|
description = Name of saved search
|
|
|
example1 = mysavedsearch
|
|
|
default = none
|
|
|
|
|
|
[maxsearchesoption]
|
|
|
syntax = maxsearches=<int>
|
|
|
description = The maximum number of searches to run. Will generate warning if \
|
|
|
there are more search results.
|
|
|
example1 = maxsearches=42
|
|
|
default = maxsearches=10
|
|
|
|
|
|
##################
|
|
|
# multikv
|
|
|
##################
|
|
|
[multikv-command]
|
|
|
syntax = multikv (conf=<stanza_name>)? (<multikv-option>)*
|
|
|
shortdesc = Extracts field-values from table-formatted events.
|
|
|
description = Extracts fields from events with information in a tabular format (e.g. top, netstat, ps, ... etc). \
|
|
|
A new event is created for each table row. Field names are derived from the title row of the table.
|
|
|
usage = public
|
|
|
comment1 = Extract the "pid" and "command" fields.
|
|
|
example1 = ... | multikv fields pid command
|
|
|
commentcheat = Extract the "COMMAND" field when it occurs in rows that contain "splunkd".
|
|
|
examplecheat = ... | multikv fields COMMAND filter splunkd
|
|
|
category = fields::add
|
|
|
related = extract, kvform, rex, xmlkv
|
|
|
tags = extract table tabular column
|
|
|
|
|
|
[multikv-option]
|
|
|
syntax = <multikv-copyattrs>|<multikv-fields>|<multikv-filter>|<multikv-forceheader>|<multikv-multitable>|<multikv-noheader>|<multikv-rmorig>
|
|
|
description = Multikv available options
|
|
|
|
|
|
[multikv-copyattrs]
|
|
|
syntax = copyattrs=<bool>
|
|
|
description = When true, multikv copies all fields from the original event to the events generated from that event. \
|
|
|
When false, no fields are copied from the original event. \
|
|
|
This means there will be no _time field and you will not be able to see the events in the UI. (default = true)
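|
|
|
# Illustrative example, not part of the official reference:
|
|
|
example1 = ... | multikv copyattrs=false fields pid command
|
|
|
comment1 = Extract the "pid" and "command" fields without copying any fields from the original events.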
|
|
|
|
|
|
[multikv-fields]
|
|
|
syntax = fields <field-list>
|
|
|
description = Limit the fields set by multikv to this list. \
|
|
|
Any fields in the table which are not on this list will be ignored.
|
|
|
|
|
|
[multikv-filter]
|
|
|
syntax = filter <field-list>
|
|
|
description = If specified, multikv will skip over table rows that do not contain at least one of the strings in the filter list. \
|
|
|
Quoted expressions are permitted such as "multiple words" or "trailing_space ".
|
|
|
|
|
|
[multikv-forceheader]
|
|
|
syntax = forceheader=<int>
|
|
|
description = Forces the use of the given line number (1 based) as the table's header. \
|
|
|
Empty lines are not included in the count. \
|
|
|
By default, multikv attempts to determine the header line automatically.
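|
|
|
# Illustrative example, not part of the official reference:
|
|
|
example1 = ... | multikv forceheader=2
|
|
|
comment1 = Treat the second non-empty line of each event as the table header.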
|
|
|
|
|
|
[multikv-multitable]
|
|
|
syntax = multitable=<bool>
|
|
|
description = Controls whether or not there can be multiple tables in a single _raw in the original events. (default = true)
|
|
|
|
|
|
[multikv-noheader]
|
|
|
syntax = noheader=<bool>
|
|
|
description = Handle a table without header row identification. \
|
|
|
The size of the table will be inferred from the first row, and fields will be named Column_1, Column_2, etc. \
|
|
|
noheader=true implies multitable=false (default = false)
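|
|
|
# Illustrative example, not part of the official reference:
|
|
|
example1 = ... | multikv noheader=true
|
|
|
comment1 = Extract rows from a table without a header; fields are named Column_1, Column_2, and so on.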
|
|
|
|
|
|
[multikv-rmorig]
|
|
|
syntax = rmorig=<bool>
|
|
|
description = When true, the original events will not be included in the output results. \
|
|
|
When false, the original events are retained in the output results, with each original event \
|
|
|
emitted after the batch of generated results from that original. (default=true)
|
|
|
|
|
|
###########################
|
|
|
# multisearch
|
|
|
###########################
|
|
|
|
|
|
[multisearch-command]
|
|
|
syntax = multisearch <subsearch> <subsearch> <subsearch> ...
|
|
|
shortdesc = Runs multiple searches at the same time.
|
|
|
description = Runs multiple *streaming* searches at the same time. You must specify at least 2 subsearches, and only purely streaming operations are allowed in each subsearch (e.g. search, eval, where, fields, rex, ...).
|
|
|
example = | multisearch [search index=a | eval type = "foo"] [search index=b | eval mytype = "bar"]
|
|
|
comment = Search for events from both index a and index b, and add different fields using eval in each case.
|
|
|
usage = public
|
|
|
tags = append join combine unite
|
|
|
category = results::append
|
|
|
related = append, join
|
|
|
|
|
|
|
|
|
##################
|
|
|
# mvcombine
|
|
|
##################
|
|
|
|
|
|
[mvcombine-command]
|
|
|
syntax = mvcombine (delim=<string>)? <field>
|
|
|
shortdesc = Combines events in the search results that have a single differing field value into one result with a multi-value field of the differing field.
|
|
|
description = For each group of results that are identical except for the given field, combine them into a single result where the given field is a multivalue field. DELIM controls how values are combined, defaulting to a space character (' ').
|
|
|
usage = public
|
|
|
comment = Combine the values of "foo" with ":" delimiter.
|
|
|
example = ... | mvcombine delim=":" foo
|
|
|
related = makemv, mvexpand, nomv
|
|
|
tags = combine merge join unite multivalue
|
|
|
category = results::filter
|
|
|
|
|
|
##################
|
|
|
# mvexpand
|
|
|
##################
|
|
|
|
|
|
[mvexpand-command]
|
|
|
syntax = mvexpand <field> (limit=<int>)?
|
|
|
shortdesc = Expands the values of a multi-value field into separate events for each value of the multi-value field.
|
|
|
description = For each result with the specified field, create a new result for each value of that field in that result if it is a multivalue field.
|
|
|
usage = public
|
|
|
comment1 = Create new events for each value of multi-value field, "foo".
|
|
|
example1 = ... | mvexpand foo
|
|
|
comment2 = Create new events for the first 100 values of multi-value field, "foo".
|
|
|
example2 = ... | mvexpand foo limit=100
|
|
|
related = makemv, mvcombine, nomv
|
|
|
tags = separate divide disconnect multivalue
|
|
|
category = results::generate
|
|
|
|
|
|
##################
|
|
|
# makemv
|
|
|
##################
|
|
|
|
|
|
[makemv-command]
|
|
|
syntax = makemv (delim=<string> |tokenizer=<string>)? (allowempty=<bool>)? (setsv=<bool>)? <field>
|
|
|
shortdesc = Changes a specified field into a multi-value field during a search.
|
|
|
description = Treat specified field as multi-valued, using either a simple string delimiter (can be multicharacter), or a regex tokenizer. If neither is provided, a default delimiter of " " (single space) is assumed. \
|
|
|
The allowempty=<bool> option controls if consecutive delimiters should be treated as one (default = false).\
|
|
|
The setsv boolean option controls if the original value of the field should be kept for the single valued version. It is kept if setsv = false, and it is false by default.
|
|
|
usage = public
|
|
|
comment1 = Separate the value of "foo" into multiple values.
|
|
|
example1 = ... | makemv delim=":" allowempty=t foo
|
|
|
comment2 = For sendmail search results, separate the values of "senders" into multiple values. Then, display the top values.
|
|
|
example2 = eventtype="sendmail" | makemv delim="," senders | top senders
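|
|
|
# Illustrative tokenizer example, not part of the official reference; the regex captures each run of non-comma characters:
|
|
|
comment3 = Use a regex tokenizer instead of a delimiter to separate the values of "senders".
|
|
|
example3 = eventtype="sendmail" | makemv tokenizer="([^,]+),?" senders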
|
|
|
related = mvcombine, mvexpand, nomv
|
|
|
tags = multivalue convert
|
|
|
category = fields::convert
|
|
|
|
|
|
##################
|
|
|
# nomv
|
|
|
##################
|
|
|
|
|
|
[nomv-command]
|
|
|
syntax = nomv <field>
|
|
|
shortdesc = Changes a specified multi-value field into a single-value field at search time.
|
|
|
description = Converts values of the specified multi-valued field into one single value (overrides multi-value field configurations set in fields.conf).
|
|
|
usage = public
|
|
|
comment = For sendmail events, combine the values of the senders field into a single value; then, display the top 10 values.
|
|
|
example = eventtype="sendmail" | nomv senders | top senders
|
|
|
related = makemv, mvcombine, mvexpand, convert
|
|
|
tags = single multivalue
|
|
|
category = fields::convert
|
|
|
|
|
|
##################
|
|
|
# newseriesfilter
|
|
|
##################
|
|
|
|
|
|
[newseriesfilter-command]
|
|
|
syntax = newseriesfilter <string>
|
|
|
description = Used by timechart.
|
|
|
usage = internal
|
|
|
example =
|
|
|
|
|
|
##################
|
|
|
# nokv
|
|
|
##################
|
|
|
|
|
|
[nokv-command]
|
|
|
syntax = nokv
|
|
|
description = Tells the search pipeline not to perform any automatic key/value extraction.
|
|
|
usage = internal
|
|
|
example1 = ... | nokv
|
|
|
|
|
|
##################
|
|
|
# outlier
|
|
|
##################
|
|
|
|
|
|
[outlier-command]
|
|
|
syntax = outlier (<outlier-option> )* (<field-list>)?
|
|
|
alias = outlierfilter
|
|
|
shortdesc = Removes outlying numerical values.
|
|
|
description = Removes or truncates outlying numerical values in selected fields. If no fields are specified, then outlier will attempt to process all fields.
|
|
|
comment1 = Remove all outlying numerical values.
|
|
|
example1 = ... | outlier
|
|
|
comment2 = For a timechart of webserver events, transform the outlying average CPU values.
|
|
|
example2 = 404 host="webserver" | timechart avg(cpu_seconds) by host | outlier action=tf
|
|
|
usage = public
|
|
|
related = anomalies, anomalousvalue, cluster, kmeans
|
|
|
tags = outlier anomaly unusual odd irregular dangerous unexpected
|
|
|
category = reporting
|
|
|
|
|
|
[outlier-option]
|
|
|
syntax = <outlier-action-opt>|<outlier-param-opt>|<outlier-uselower-opt>|<outlier-mark-opt>
|
|
|
description = Outlier options
|
|
|
|
|
|
[outlier-action-opt]
|
|
|
syntax = action=(rm|remove|tf|transform)
|
|
|
simplesyntax = action=(remove|transform)
|
|
|
description = What to do with outlying events. RM | REMOVE removes the field from events containing outlying numerical values. \
|
|
|
TF | TRANSFORM truncates the outlying value to the threshold for outliers and, if mark=true, prefixes the value with "000"
|
|
|
default = "action=transform"
|
|
|
|
|
|
[outlier-param-opt]
|
|
|
syntax = param=<num>
|
|
|
description = Parameter controlling the threshold of outlier detection. An outlier is defined as \
|
|
|
a numerical value that is outside of param multiplied by the inter-quartile range.
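|
|
|
# Illustrative example, not part of the official reference:
|
|
|
example1 = ... | outlier param=3 action=remove
|
|
|
comment1 = Remove fields containing values that fall outside of 3 times the inter-quartile range.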
|
|
|
default = "param=2.5"
|
|
|
|
|
|
[outlier-uselower-opt]
|
|
|
syntax = uselower=<bool>
|
|
|
description = Controls whether to look for outliers for values below the median in addition to above it
|
|
|
default = "uselower=false"
|
|
|
|
|
|
[outlier-mark-opt]
|
|
|
syntax = mark=<bool>
|
|
|
description = If action=transform and mark=true, prefixes the outlying values with "000". If action=remove, the mark argument has no effect.
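|
|
|
# Illustrative example, not part of the official reference; 'cpu_seconds' is a hypothetical field:
|
|
|
example1 = ... | outlier action=transform mark=true cpu_seconds
|
|
|
comment1 = Truncate outlying values of "cpu_seconds" and prefix the truncated values with "000".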
|
|
|
default = "mark=false"
|
|
|
|
|
|
##################
|
|
|
# dump
|
|
|
##################
|
|
|
|
|
|
[dump-command]
|
|
|
syntax = dump basefilename=<string> (rollsize=<num>)? (compress=<int>)? (format=<string>)? (fields=<comma-delimited-string>)?
|
|
|
shortdesc = Runs a given search query and exports events to a set of chunk files on local disk.
|
|
|
description = Runs a given search query and exports events to a set of chunk files on local disk. \
|
|
|
This command runs a specified search query and performs a one-shot export of the search results to local disk \
|
|
|
at "$SPLUNK_HOME/var/run/splunk/dispatch/<sid>/dump". It recognizes a special field \
|
|
|
in the input events, _dstpath, which, if set, will be used as a path that is appended to the destination directory \
|
|
|
to compute the final destination path. \i\\
|
|
|
"basefilename" - prefix of the export filename. \i\\
|
|
|
"rollsize" - minimum file size at which point no more events are written to the file and \i\\
|
|
|
it becomes a candidate for HDFS transfer, unit is "MB", default "64MB". \i\\
|
|
|
"compress" - gzip compression level from 0 to 9, 0 means no compression, higher number \i\\
|
|
|
means more compression and slower writing speed, default 2. \i\\
|
|
|
"format" - output data format, supported values are raw | csv | tsv | json | xml \i\\
|
|
|
"fields" - list of splunk event fields exported to export data, invalid fields will be ignored
|
|
|
|
|
|
usage = internal
|
|
|
comment1 = Export all events from the index "bigdata" to the location "YYYYmmdd/HH/host" \
|
|
|
at the "$SPLUNK_HOME/var/run/splunk/dispatch/<sid>/dump/" directory on local disk with "MyExport" \
|
|
|
as the prefix of export filenames. Partitioning of the export data is achieved by eval preceding the dump command.
|
|
|
example1 = index=bigdata | eval _dstpath=strftime(_time, "%Y%m%d/%H") + "/" + host | dump basefilename=MyExport
|
|
|
comment2 = Export all events from the index "bigdata" to the location "/myexport/host/source" on \
|
|
|
local disk with "MyExport" as the prefix of export filenames
|
|
|
example2 = index=bigdata | dump basefilename=MyExport
|
|
|
category = exporting
|
|
|
|
|
|
##################
|
|
|
# outputcsv
|
|
|
##################
|
|
|
|
|
|
[outputcsv-command]
|
|
|
syntax = outputcsv (append=<bool>)? (create_empty=<bool>)? (override_if_empty=<bool>?) (dispatch=<bool>)? (usexml=<bool>)? (singlefile=<bool>)? (<filename>)?
|
|
|
shortdesc = Outputs search results to the specified CSV file.
|
|
|
description = If no filename is specified, rewrites the contents of each result as a CSV row into the "_xml" field. \
|
|
|
Otherwise writes into file (appends ".csv" to filename if filename has no existing extension). \
|
|
|
If singlefile is set to true and output spans multiple files, collapses it into a single file. \
|
|
|
The option usexml=[t|f] specifies whether or not to encode the csv output into xml and has effect \
|
|
|
only when no filename is specified. This option should not be specified when invoking outputcsv from \
|
|
|
the UI. If dispatch option is set to true, filename refers to a file in the job directory in \
|
|
|
$SPLUNK_HOME/var/run/splunk/dispatch/<job id>/ \
|
|
|
If 'create_empty' is true and no results are passed to outputcsv, a 0-length file is created. \
|
|
|
When false (the default) no file is created and the file is deleted if it previously existed. \
|
|
|
If 'override_if_empty' is set to its default of true and no results are passed to outputcsv, the \
|
|
|
command deletes the output file if it exists. If set to false, the command does not delete the \
|
|
|
existing output file. \
|
|
|
If 'append' is true, we will attempt to append to an existing csv file if it exists or create a \
|
|
|
file if necessary. If there is an existing file that has a csv header already, we will only emit \
|
|
|
the fields that are referenced by that header. (Defaults to false.) .gz files cannot be appended to.
|
|
|
usage = public
|
|
|
comment1 = Output search results to the CSV file 'mysearch.csv'.
|
|
|
example1 = ... | outputcsv mysearch
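|
|
|
# Illustrative example, not part of the official reference:
|
|
|
comment2 = Append results to 'mysearch.csv', emitting only the fields referenced by the existing CSV header, and create no file if there are no results.
|
|
|
example2 = ... | outputcsv append=true create_empty=false mysearch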
|
|
|
related = inputcsv
|
|
|
tags = output csv save write
|
|
|
category = results::write
|
|
|
|
|
|
##################
|
|
|
# outputlookup
|
|
|
##################
|
|
|
|
|
|
[outputlookup-command]
|
|
|
syntax = outputlookup (append=<bool>)? (create_empty=<bool>)? (override_if_empty=<bool>)? (max=<int>)? (key_field=<field>)? (allow_updates=<bool>)? (createinapp=<bool>)? (create_context=<string>)? (output_format=<string>)? (<filename>|<string:tablename>)
|
|
|
shortdesc = Saves search results to the specified static lookup table.
|
|
|
description = Saves results to a lookup table as specified by a filename \
|
|
|
(must end with .csv or .gz) or a table name (as specified \
|
|
|
by a stanza name in transforms.conf). \p\\
|
|
|
If the lookup file does not yet exist, where 'outputlookup' \
|
|
|
creates the file is determined by the existence of an \
|
|
|
application context and the values of the 'createinapp' and \
|
|
|
'create_context' arguments, as well as the value of the \
|
|
|
'create_context' setting in limits.conf. \p\\
|
|
|
The 'createinapp' argument defaults to true. \p\\
|
|
|
When there is a current application context and neither \
|
|
|
the 'createinapp' nor the 'create_context' argument is set in \
|
|
|
the search, the 'create_context' argument defaults to the value \
|
|
|
of the 'create_context' setting in 'limits.conf'. The \
|
|
|
'create_context' setting defaults to 'app'. \p\\
|
|
|
The 'outputlookup' command creates the lookup file in the \
|
|
|
lookups directory of the current application \
|
|
|
(etc/apps/<app>/lookups) when there is a current application \
|
|
|
context and EITHER of the following arguments are applied to \
|
|
|
'outputlookup'. \p\\
|
|
|
* createinapp=t \
|
|
|
* create_context=app \
|
|
|
The 'outputlookup' command creates the file in the lookups \
|
|
|
directory of the user's current application \
|
|
|
(etc/users/<user>/<app>/lookups) when there is a current \
|
|
|
application context, 'createinapp' is not set, and \
|
|
|
'create_context=user' is set for 'outputlookup'. \p\\
|
|
|
The 'outputlookup' command creates the file in the system \
|
|
|
lookups directory (etc/system/local/lookups) when there is not \
|
|
|
a current application context OR when EITHER of the following \
|
|
|
arguments are set for 'outputlookup': \
|
|
|
* createinapp=f \
|
|
|
* create_context=system \
|
|
|
Note: When the 'createinapp' argument is used, 'outputlookup' \
|
|
|
ignores the 'create_context' argument. \p\\
|
|
|
If 'create_empty' is true (the default) and no results are \
|
|
|
passed to 'outputlookup', 'outputlookup' creates a 0-length \
|
|
|
file. \p\\
|
|
|
When 'create_empty=f', 'outputlookup' does not create a file \
|
|
|
and the file is deleted if it previously existed. \p\\
|
|
|
If 'override_if_empty' is set to its default of 'true' and no \
|
|
|
results are passed to 'outputlookup', 'outputlookup' deletes \
|
|
|
the lookup file if it exists. If 'override_if_empty=f', \
|
|
|
'outputlookup' does not delete the existing lookup file. \p\\
|
|
|
If 'key_field' is set to a valid field name and this is a \
|
|
|
KV store lookup, 'outputlookup' attempts to use the \
|
|
|
specified field as the key to a value and replace that \
|
|
|
value. \p\\
|
|
|
If 'append' is true, 'outputlookup' attempts to append to an \
|
|
|
existing csv file if it exists or create a file if necessary. \
|
|
|
If there is an existing file that has a csv header already, \
|
|
|
'outputlookup' adds only the fields that are referenced by \
|
|
|
that header. (Defaults to false). .gz files cannot be appended \
|
|
|
to. \
|
|
|
'allow_updates' is true by default if either 'append' is set to \
|
|
|
"true" or if 'key_field' is set to a valid field name. If \
|
|
|
'allow_updates' is true, 'outputlookup' can update existing \
|
|
|
records and insert new records. If 'allow_updates' is set to \
|
|
|
"false," 'outputlookup' can only insert records. \
|
|
|
The 'output_format' argument controls the output data format. \
|
|
|
Its supported values are 'splunk_mv_csv' and 'splunk_sv_csv'. \
|
|
|
It defaults to 'splunk_sv_csv'. Use 'splunk_mv_csv' for \
|
|
|
multivalue fields.\p\\
|
|
|
|
|
|
usage = public
|
|
|
example1 = | outputlookup users.csv
|
|
|
example2 = | outputlookup usertogroup
|
|
|
comment1 = Write to "users.csv" lookup file (under $SPLUNK_HOME/etc/system/lookups or $SPLUNK_HOME/etc/apps/*/lookups).
|
|
|
comment2 = Write to "usertogroup" lookup table (as specified in transforms.conf).
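|
|
|
# Illustrative example, not part of the official reference:
|
|
|
example3 = | outputlookup append=true override_if_empty=false users.csv
|
|
|
comment3 = Append results to the existing "users.csv" lookup file, and keep the existing file even if the search returns no results.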
|
|
|
tags = output csv save write lookup table
|
|
|
category = results::write
|
|
|
related = inputlookup, lookup, outputcsv
|
|
|
|
|
|
##################
|
|
|
# outputraw
|
|
|
##################
|
|
|
[outputraw-command]
|
|
|
syntax = outputraw
|
|
|
shortdesc = Outputs search results in a simple, raw text-based format.
|
|
|
description = Outputs search results in a simple, raw text-based format, with each attribute value on a separate text line. Useful for commandline searches.
|
|
|
example1 = ... | outputraw
|
|
|
usage = deprecated
|
|
|
related = outputcsv, outputtext
|
|
|
tags = output
|
|
|
category = formatting
|
|
|
|
|
|
|
|
|
##################
|
|
|
# outputtext
|
|
|
##################
|
|
|
[outputtext-command]
|
|
|
syntax = outputtext (usexml=<bool>)?
|
|
|
shortdesc = Outputs the raw text (_raw) of results into the _xml field.
|
|
|
description = Rewrites the _raw field of the result into the "_xml" field. \
|
|
|
If usexml is set to true (the default), the _raw field is \
|
|
|
XML escaped.
|
|
|
usage = public beta
|
|
|
comment1 = Output the "_raw" field of your current search into "_xml".
|
|
|
example1 = ... | outputtext
|
|
|
related = outputcsv, outputraw
|
|
|
tags = output
|
|
|
category = formatting
|
|
|
|
|
|
##################
|
|
|
# overlap
|
|
|
##################
|
|
|
|
|
|
[overlap-command]
|
|
|
syntax = overlap
|
|
|
shortdesc = Finds events in a summary index that overlap in time, or gaps in time during which a scheduled saved search may have missed events.
|
|
|
description = Find events in a summary index that overlap in time, or\
|
|
|
find gaps in time during which a scheduled saved search may have\
|
|
|
missed events. Note: If you find a gap, run the search over the period\
|
|
|
of the gap and summary index the results (using | collect). If you\
|
|
|
find overlapping events, manually delete the overlaps from the summary\
|
|
|
index by using the search language. Invokes an external python script\
|
|
|
(in etc/apps/search/bin/sumindexoverlap.py), which expects input events\
|
|
|
from the summary index and finds any time overlaps and gaps between\
|
|
|
events with the same 'info_search_name' but different\
|
|
|
'info_search_id'. Input events are expected to have the following\
|
|
|
fields: 'info_min_time', 'info_max_time' (inclusive and exclusive,\
|
|
|
respectively), 'info_search_id', and 'info_search_name'.
|
|
|
usage = public
|
|
|
comment = Find overlapping events in "summary".
|
|
|
example = index=summary | overlap
|
|
|
related = collect, sistats, sitop, sirare, sichart, sitimechart
|
|
|
tags = collect overlap index summary summaryindex
|
|
|
category = index::summary
|
|
|
|
|
|
|
|
|
##################
|
|
|
# pivot
|
|
|
##################
|
|
|
|
|
|
[pivot-command]
|
|
|
syntax = pivot <datamodel-name> <object-name> <pivot-element>
|
|
|
shortdesc = Allows users to run pivot searches against a particular datamodel object.
|
|
|
description = Must be the first command in a search. You must specify the model, object, \
|
|
|
and the pivot element to run. The command will expand and run the specified \
|
|
|
pivot element.
|
|
|
# Examples are commented out until an issue with the Jenkins tests is resolved.
|
|
|
# example1 = | pivot myModel myObject count(myObject)
|
|
|
# example2 = | pivot Tutorial HTTP_requests count(HTTP_requests) AS "Count of HTTP requests"
|
|
|
# example3 = | pivot Tutorial HTTP_requests count(HTTP_requests) AS "Count" SPLITROW host AS "Server" SORT 100 host
|
|
|
category = reporting
|
|
|
usage = public
|
|
|
related = datamodel
|
|
|
tags = datamodel model pivot
|
|
|
|
|
|
##################
|
|
|
# predict
|
|
|
##################
|
|
|
[predict-command]
|
|
|
syntax = predict <field-list> <pd-as-option>? <pd-algo-option>? <pd-correlate-option>? <pd-future_timespan-option>? <pd-holdback-option>? <pd-period-option>? <pd-upper-option>? <pd-lower-option>? <pd-suppress-option>?
|
|
|
shortdesc = Forecasts future values for one or more sets of time-series data.
|
|
|
description = The predict command must be preceded by the timechart command.\
|
|
|
The command can also fill in missing data in a time-series and \
|
|
|
provide predictions for the next several time steps. \p\\
|
|
|
The predict command provides confidence intervals for all of its estimates. \
|
|
|
The command adds a predicted value and an upper and lower 95th (by default) \
|
|
|
percentile range to each event in the time-series.
|
|
|
example1 = ... | timechart span="1m" count AS foo1 | predict foo1
|
|
|
comment1 = Predict foo1 using the default LLP5 algorithm (an algorithm that combines the LLP and LLT algorithms).
|
|
|
example2 = ... | timechart span="1m" count AS foo | predict foo AS foobar algorithm=LL upper90=high lower97=low future_timespan=10 holdback=20
|
|
|
comment2 = Upper and lower confidence intervals do not have to match
|
|
|
example3 = ... | timechart span="1m" count(x) AS foo1 count(y) AS foo2 | predict foo2 AS foobar algorithm=LLB correlate=foo1 holdback=100
|
|
|
comment3 = Illustrates the LLB algorithm. The foo2 field is predicted by correlating it with the foo1 field.
|
|
|
example4 = ... | timechart span="1m" count AS foo1 avg(sales) AS foo2 sum(sales) AS foo3 | predict foo1 foo2 foo3
|
|
|
comment4 = Predict multiple fields using the same algorithm. The default algorithm LLP5 is used in this example.
|
|
|
example5 = ... timechart span="1m" count AS foo1 avg(sales) AS foo2 sum(sales) AS foo3 | predict foo1 foo2 foo3 algorithm=LLT future_timespan=15 holdback=5
|
|
|
comment5 = Predict multiple fields using the same algorithm, future_timespan, and holdback.
|
|
|
example6 = ... timechart span="1m" count AS foo1 avg(sales) AS foo2 sum(sales) AS foo3 | predict foo1 AS foobar1 foo2 AS foobar2 foo3 AS foobar3 algorithm=LLT future_timespan=15 holdback=5
|
|
|
comment6 = Use aliases for the fields by specifying the AS keyword for each field.
|
|
|
example7 = ... timechart span="1m" count AS foo1 avg(sales) AS foo2 sum(sales) | predict foo1 algorithm=LL future_timespan=15 foo2 algorithm=LLP period=7 future_timespan=7
|
|
|
comment7 = Predict multiple fields using different algorithms and different options for each field.
|
|
|
example8 = ... timechart span="1m" count AS foo1 avg(sales) AS foo2 sum(sales) | predict foo1 foo2 algorithm=BiLL future_timespan=10
|
|
|
comment8 = Predict foo1 and foo2 together using the bivariate algorithm BiLL.
|
|
|
usage = public
|
|
|
category = reporting
|
|
|
related = trendline, x11
|
|
|
tags = forecast predict univariate bivariate kalman
|
|
|
|
|
|
[pd-algo-option]
|
|
|
syntax = algorithm=(LL|LLT|LLP|LLP5|LLB|BiLL)
|
|
|
description = LL, LLT, LLP, and LLP5 are univariate algorithms. LLB and BiLL are bivariate algorithms.\
|
|
|
LL is the simplest algorithm and computes the levels of the time series, i.e. each new state\
|
|
|
equals the previous state plus a Gaussian noise.\
|
|
|
LLT computes the levels plus the trend.\
|
|
|
LLP takes into account the data's periodicity if it exists. You can set the \
|
|
|
period using the "period" option. You should set the period if you know it \
|
|
|
because it will likely be more accurate than letting the command estimate the period.\
|
|
|
If you do not set the period, LLP will try to calculate it.\
|
|
|
LLP5 combines LLT and LLP. If the time series is periodic, LLP5 computes two \
|
|
|
predictions, one using LLT and the other using LLP. Then LLP5 takes a weighted \
|
|
|
average of the two values and outputs that as its prediction. The confidence \
|
|
|
interval is also based on a weighted average of the variances of the \
|
|
|
LLT and LLP algorithms.\
|
|
|
LLB and BiLL are both bivariate local level algorithms. LLB predicts \
|
|
|
one time series off the other. BiLL predicts both time series simultaneously. \
|
|
|
The key here is that the covariance of the two series is taken into account.
|
|
|
default = LLP5
|
|
|
|
|
|
[pd-as-option]
|
|
|
syntax = as <field>
|
|
|
description = Sets the aliases for the predicted fields.
|
|
|
example1 = ... | predict foo1 as foobar1 foo2 as foobar2
|
|
|
comment1 = here predictions for foo1 and foo2 will be named foobar1 and foobar2, respectively.
|
|
|
|
|
|
[pd-correlate-option]
|
|
|
syntax = correlate=<field>
|
|
|
description = Used with only the LLB algorithm, and is required with that algorithm. \
|
|
|
Specifies the time series LLB uses to predict the other time series. \
|
|
|
See example3 in [predict-command] section.
|
|
|
default = None
|
|
|
|
|
|
[pd-future_timespan-option]
|
|
|
syntax = future_timespan=<num>
|
|
|
description = Specifies how many future predictions the predict command will compute.
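|
|
|
# Illustrative example, not part of the official reference:
|
|
|
example1 = ... | timechart span="1m" count AS foo | predict foo future_timespan=20
|
|
|
comment1 = Compute 20 future predictions for foo instead of the default 5.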
|
|
|
default = 5
|
|
|
|
|
|
[pd-holdback-option]
|
|
|
syntax = holdback=<num>
|
|
|
description = Specifies the number of data points from the end that are NOT to be used by predict.\
|
|
|
Use this option to compare the predictions with the observed data.
|
|
|
example1 = ... | predict foo holdback=5 future_timespan=5
|
|
|
comment1 = The last 5 data points are not used. 5 predictions are made which correspond to \
|
|
|
the last 5 values in the data. You can then judge how good the predictions are\
|
|
|
by checking whether the given (or observed) values fall into the predicted \
|
|
|
confidence intervals.
|
|
|
|
|
|
[pd-period-option]
|
|
|
syntax = period=<num>
|
|
|
description = Specifies the periodicity of the time series. It must be at least 2. \
|
|
|
If you do not specify a value, LLP and its variants attempt to compute the period.
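|
|
|
# Illustrative example, not part of the official reference; assumes hourly data with a daily cycle:
|
|
|
example1 = ... | timechart span="1h" count AS foo | predict foo algorithm=LLP period=24
|
|
|
comment1 = Predict foo with LLP, declaring a known 24-point periodicity instead of letting the command estimate it.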
|
|
|
default = None
|
|
|
|
|
|
[pd-upper-option]
|
|
|
syntax = upper<int>=<field>
|
|
|
description = Specifies the name for the upper confidence interval curve. \
|
|
|
The <int> is a number between 0 and 100, and specifies the confidence level.
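|
|
|
# Illustrative example, not part of the official reference:
|
|
|
example1 = ... | predict foo upper80=high80 lower80=low80
|
|
|
comment1 = Name the 80% confidence interval curves "high80" and "low80".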
|
|
|
default = upper95(prediction(<field>)) where <field> is the field to predict.
|
|
|
|
|
|
[pd-lower-option]
|
|
|
syntax = lower<int>=<field>
|
|
|
description = Specifies the name for the lower confidence interval curve. \
|
|
|
The <int> is a number between 0 and 100, and specifies the confidence level.
|
|
|
default = lower95(prediction(<field>)) where <field> is the field to predict.
|
|
|
|
|
|
[pd-suppress-option]
|
|
|
syntax = suppress=<field>
|
|
|
description = Used with the multivariate algorithms. Specifies one of the \
|
|
|
predicted fields to hide from the output. Use when it is \
|
|
|
difficult to look at all the predicted visualizations at the \
|
|
|
same time.
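|
|
|
# Illustrative example, not part of the official reference; built from example8 of the predict command:
|
|
|
example1 = ... | timechart span="1m" count(x) AS foo1 count(y) AS foo2 | predict foo1 foo2 algorithm=BiLL suppress=foo2
|
|
|
comment1 = Predict foo1 and foo2 together with BiLL, but hide the foo2 predictions from the output.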
|
|
|
default = None
|
|
|
|
|
|
##################
|
|
|
# preview
|
|
|
##################
|
|
|
|
|
|
[preview-command]
|
|
|
syntax = preview
|
|
|
shortdesc = See what events from a file will look like when indexed without actually indexing the file.
|
|
|
description = Given a source file and a set of props.conf settings in \
|
|
|
$SPLUNK_HOME/var/run/splunk/dispatch/<job_id>/indexpreview.csv, \
|
|
|
generate the events that the file would yield if it were indexed.
|
|
|
usage = internal
|
|
|
category = results::generate
|
|
|
tags = index preview
|
|
|
|
|
|
##################
|
|
|
# rare
|
|
|
##################
|
|
|
|
|
|
[rare-command]
|
|
|
syntax = rare <rare-command-arguments>
|
|
|
shortdesc = Displays the least common values of a field.
|
|
|
description = Finds the least frequent tuple of values of all fields in the field list. \
|
|
|
If optional by-clause is specified, this command will return rare tuples of values for\
|
|
|
each distinct tuple of values of the group-by fields.
|
|
|
comment1 = Find the least common "user" value for a "host".
|
|
|
example1 = ... | rare user by host
|
|
|
commentcheat = Return the least common values of the "url" field.
|
|
|
examplecheat = ... | rare url
|
|
|
category = reporting
|
|
|
usage = public
|
|
|
supports-multivalue = true
|
|
|
related = top, stats, sirare
|
|
|
tags = rare few occasional scarce sparse uncommon unusual
|
|
|
|
|
|
[rare-command-arguments]
|
|
|
syntax = <top-opt>* <field-list> (<by-clause>)?
|
|
|
description = See rare-command description.
|
|
|
|
|
|
##################
|
|
|
# redistribute
|
|
|
##################
|
|
|
|
|
|
[redistribute-command]
|
|
|
syntax = redistribute (num_of_reducers=<int>)? (<by-clause>)?
|
|
|
shortdesc = Speeds up search runtime of a set of supported SPL commands, in a \
|
|
|
distributed search environment.
|
|
|
description = This command divides the search results \
|
|
|
among a pool of intermediate reducers in the indexer layer. The reducers \
|
|
|
perform intermediary reduce operations in parallel on the search results \
|
|
|
before pushing them up to the search head, where a final reduction \
|
|
|
operation is performed. This parallelization of reduction work that \
|
|
|
would otherwise be done entirely by the search head can result in \
|
|
|
faster completion times for high-cardinality searches that \
|
|
|
aggregate large numbers of search results. \p\\
|
|
|
Set num_of_reducers to control the number of intermediate reducers \
|
|
|
used from the pool. num_of_reducers defaults to a fraction of the \
|
|
|
indexer pool size, according to the 'winningRate' setting, and is \
|
|
|
limited by the 'maxReducersPerPhase' setting, both of which are \
|
|
|
specified on the search head in the [parallelreduce] stanza of \
|
|
|
limits.conf. \p\\
|
|
|
The redistribute command divides events into partitions on the \
|
|
|
intermediate reducers according to the fields specified with the \
|
|
|
by-clause. If no by-clause fields are specified, the search \
|
|
|
processor uses the fields that work best with the commands that \
|
|
|
follow the redistribute command in the search. \p\\
|
|
|
The prjob command provides the same functionality as the redistribute \
|
|
|
command but with a simpler interface as it uses the default \
|
|
|
parameter values for num_of_reducers and the by-clause. Consider \
|
|
|
using the prjob command when you do not need to specify the \
|
|
|
parameter values. \p\\
|
|
|
The redistribute command requires a distributed search environment \
|
|
|
with a pool of intermediate reducers at the indexer level. You can \
|
|
|
use the redistribute command only once in a search. \p\\
|
|
|
The redistribute command supports streaming commands and the \
|
|
|
following nonstreaming commands: stats, tstats, streamstats, \
|
|
|
eventstats, sichart, and sitimechart. The redistribute command also \
|
|
|
supports transaction on a single field. \p\\
|
|
|
The redistribute command moves the processing of a search string \
|
|
|
from the intermediate reducers to the search head when it \
|
|
|
encounters a nonstreaming command that it does not support or that \
|
|
|
does not include a by-clause. The redistribute command also moves \
|
|
|
processing to the search head when it detects that a command has \
|
|
|
modified values of the fields specified in the redistribute by-clause. \p\\
|
|
|
Note: When results are aggregated from the intermediate reducers at \
|
|
|
the search head, a sort order is imposed on the result rows only \
|
|
|
when an order-sensitive command such as 'sort' is in place to \
|
|
|
consume the reducer output.
|
|
|
example1 = ... | redistribute by ip | stats count by ip
|
|
|
comment1 = Speeds up a stats search that aggregates a large number of results. \
|
|
|
The "| stats count by ip" portion of the search is processed on the \
|
|
|
intermediate reducers. The search head just aggregates the results.
|
|
|
example2 = ... | redistribute | eventstats count by user, source | where count>10 | sitimechart max(count) by source | timechart max(count) by source
|
|
|
comment2 = Speeds up a search that includes eventstats and which uses \
|
|
|
sitimechart to perform the statistical calculations for a timechart \
|
|
|
operation. The intermediate reducers process eventstats, where, and \
|
|
|
sitimechart. The search head runs timechart to turn the reduced \
|
|
|
sitimechart statistics into sorted, visualization-ready results. \
|
|
|
Because the redistribute split-by field is unidentified, the system \
|
|
|
selects "source" as the redistribute field.
|
|
|
example3 = | tstats prestats=t count BY _time span=1d | redistribute by _time | sitimechart span=1d count | timechart span=1d count
|
|
|
comment3 = Speeds up a search that uses tstats to generate events. The \
|
|
|
tstats command must be placed at the start of the search pipeline, \
|
|
|
and here it uses prestats=t to work with the timechart command. \
|
|
|
sitimechart is processed on the reducers and timechart is processed on \
|
|
|
the search head.
|
|
|
example4 = ... | redistribute | eventstats count by user, source | where count >10 | sort 0 -num(count) | fields count, user, source
|
|
|
comment4 = In this example, the eventstats and where commands are processed \
|
|
|
in parallel on the reducers, while the sort command and any commands \
|
|
|
following it are processed on the search head. This happens because \
|
|
|
sort is a nonstreaming command that is not supported by redistribute.
|
|
|
category = data::managing
|
|
|
usage = public
|
|
|
supports-multivalue = true
|
|
|
tags = partition re-partition repartition shuffle collocate
|
|
|
|
|
|
##################
|
|
|
# regex
|
|
|
##################
|
|
|
|
|
|
[regex-command]
|
|
|
syntax = regex (<field>("="|"!="))?<regex-expression>
|
|
|
shortdesc = Removes results that do not match the specified regular expression.
|
|
|
description = Removes results that do not match the specified regular expression. You can specify that the regex keep results that match the expression, or keep those that do not match. Note: if you want to use the "or" ("|") operator in a regex argument, the whole regex expression must be surrounded by quotes (i.e. regex "expression"). Matches the value of the field against the unanchored regex, keeping only those events that match in the case of '=' or that do not match in the case of '!='. If no field is specified, the match is against "_raw".
|
|
|
example1 = ... | regex _raw="complicated|regex(?=expression)"
|
|
|
example2 = ... | regex _raw="(?<!\d)10.\d{1,3}\.\d{1,3}\.\d{1,3}(?!\d)"
|
|
|
commentcheat = Keep only search results whose "_raw" field contains IP addresses in the non-routable class A (10.0.0.0/8).
|
|
|
examplecheat = ... | regex _raw="(?<!\d)10.\d{1,3}\.\d{1,3}\.\d{1,3}(?!\d)"
|
|
|
category = results::filter
|
|
|
usage = public
|
|
|
related = rex, search
|
|
|
tags = regex regular expression filter where
|
|
|
|
|
|
[regex-expression]
|
|
|
syntax = (\")?<string>(\")?
|
|
|
description = A Perl Compatible Regular Expression supported by the pcre library.
|
|
|
comment1 = Selects events whose _raw field contains ip addresses in the non-routable class A (10.0.0.0/8).
|
|
|
example1 = ... | regex _raw="(?<!\d)10.\d{1,3}\.\d{1,3}\.\d{1,3}(?!\d)"
|
|
|
|
|
|
##################
|
|
|
# rename
|
|
|
##################
|
|
|
|
|
|
[rename-command]
|
|
|
syntax = rename (<wc-field> as <wc-field>)+
|
|
|
shortdesc = Renames a specified field (wildcards can be used to specify multiple fields).
|
|
|
description = Renames a field. If both the source and destination fields are \
|
|
|
wildcard expressions with the same number of wildcards, \
|
|
|
the renaming will carry over the wildcarded portions to the \
|
|
|
destination expression.
|
|
|
comment1 = Rename the "count" field.
|
|
|
example1 = ... | rename count as "Count of Events"
|
|
|
comment2 = Rename fields beginning with "foo".
|
|
|
example2 = ... | rename foo* as bar*
|
|
|
commentcheat = Rename the "_ip" field as "IPAddress".
|
|
|
examplecheat = ... | rename _ip as IPAddress
|
|
|
category = fields::modify
|
|
|
usage = public
|
|
|
tags = rename alias name as aka
|
|
|
related = fields
|
|
|
|
|
|
##################
|
|
|
# replace
|
|
|
##################
|
|
|
|
|
|
[replace-command]
|
|
|
syntax = replace (<wc-str> with <wc-str>)+ (in <field-list>)?
|
|
|
shortdesc = Replaces values of specified fields with a specified new value.
|
|
|
description = Replaces a single occurrence of the first string with the second \
|
|
|
within the specified fields (or all fields if none were specified). \
|
|
|
Non-wildcard replacements specified later take precedence over those specified earlier. \
|
|
|
For wildcard replacement, fuller matches take precedence over lesser matches.\
|
|
|
To guarantee precedence relationships, split the replace into \
|
|
|
two separate invocations. \
|
|
|
When using wildcarded replacements, the result must have the same number \
|
|
|
of wildcards, or none at all. \
|
|
|
Wildcards (*) can be used to specify many values to replace, or replace values with.
|
|
|
example1 = ... | replace 127.0.0.1 with localhost
|
|
|
example2 = ... | replace 127.0.0.1 with localhost in host
|
|
|
example3 = ... | replace 0 with Critical, 1 with Error in msg_level
|
|
|
example4 = ... | replace aug with August in start_month end_month
|
|
|
example5 = ... | replace *localhost with localhost in host
|
|
|
example6 = ... | replace "* localhost" with "localhost *" in host
|
|
|
commentcheat = Change any host value that ends with "localhost" to "localhost".
|
|
|
examplecheat = ... | replace *localhost with localhost in host
|
|
|
category = fields::modify
|
|
|
usage = public
|
|
|
tags = replace change set
|
|
|
related = fillnull, setfields, rename
|
|
|
|
|
|
##################
|
|
|
# rex
|
|
|
##################
|
|
|
|
|
|
[rex-command]
|
|
|
syntax = rex (field=<field>)? ( ( <regex-expression> (max_match=<int>)? (offset_field=<string>)? ) | mode=sed <sed-expression>)
|
|
|
shortdesc = Uses Perl regular expression named groups to extract fields while you search.
|
|
|
description = Matches the value of the field against the unanchored regex and extracts \
|
|
|
the perl regex named groups into fields of the corresponding names. If \
|
|
|
mode is set to 'sed' the given sed expression will be applied to the value \
|
|
|
of the chosen field (or to _raw if a field is not specified). \
|
|
|
max_match controls the number of times the regex is matched; if greater than one, \
|
|
|
the resulting fields will be multivalued. Defaults to 1; use 0 to mean unlimited.
|
|
|
comment1 = Anonymize data matching pattern
|
|
|
example1 = ... | rex mode=sed "s/(\\d{4}-){3}/XXXX-XXXX-XXXX-/g"
|
|
|
commentcheat = Extract "from" and "to" fields using regular expressions. If a raw event contains "From: Susan To: Bob", then from=Susan and to=Bob.
|
|
|
examplecheat = ... | rex field=_raw "From: (?<from>.*) To: (?<to>.*)"
|
|
|
category = fields::add
|
|
|
usage = public
|
|
|
related = extract, kvform, multikv, xmlkv, regex
|
|
|
tags = regex regular expression extract
|
|
|
|
|
|
##################
|
|
|
# rtorder
|
|
|
##################
|
|
|
[rtorder-command]
|
|
|
syntax = rtorder (discard=<bool>)? (buffer_span=<span-length>)? (max_buffer_size=<int>)?
|
|
|
shortdesc = Buffers events from real-time search to emit them in ascending time order when possible.
|
|
|
description = The rtorder command creates a streaming event buffer that takes input events and stores them \
|
|
|
in the buffer in ascending time order. The events are emitted in that order from the \
|
|
|
buffer only after the current time reaches at least the span of time given by buffer_span \
|
|
|
after the timestamp of the event. The buffer_span is by default 10 seconds. \
|
|
|
Events are emitted from the buffer if the maximum size of the buffer is exceeded.\
|
|
|
The default max_buffer_size is 50000, or the max_result_rows setting of the [search] \
|
|
|
stanza in the limits.conf file. If an event is received as input that is earlier \
|
|
|
than an event that has been emitted previously, that out of order event is emitted \
|
|
|
immediately unless the discard option is set to true (it is false by default). \
|
|
|
When discard is set to true, out of order events are discarded, assuring that the \
|
|
|
output is always strictly in time ascending order.
|
|
|
example1 = ... | rtorder discard=t buffer_span=5m
|
|
|
comment1 = Keep a buffer of the last 5 minutes of events, emitting events in ascending time order after \
|
|
|
the events are more than 5 minutes old. Newly received events that are older than 5 minutes \
|
|
|
are discarded if an event after that time has already been emitted.
|
|
|
usage = public
|
|
|
related = sort
|
|
|
tags = realtime sort order
|
|
|
|
|
|
[select-arg]
|
|
|
syntax = <string>
|
|
|
description = Any valid sql select arguments, per the syntax found at\
|
|
|
http://www.sqlite.org/lang_select.html. If no "from results" is\
|
|
|
specified in the select-arg, it will be inserted automatically.\
|
|
|
Runs a SQL Select query against passed in search\
|
|
|
results. All fields referenced in the select statement must be\
|
|
|
prefixed with an underscore. Therefore, "ip" should be referenced as\
|
|
|
"_ip" and "_raw" should be referenced as "__raw". Before the select\
|
|
|
command is executed, the previous search results are put into a\
|
|
|
temporary database table called "results". If a row has no values,\
|
|
|
"select" ignores it to prevent blank search results.
|
|
|
|
|
|
##################
|
|
|
# script
|
|
|
##################
|
|
|
[script-command]
|
|
|
syntax = script <script-name-arg> (<script-arg> )* (<maxinputs-opt>)?
|
|
|
alias = run
|
|
|
shortdesc = Runs an external Python-implemented search command.
|
|
|
description = Calls an external python program that can modify or generate search results. \
|
|
|
Scripts must be declared in commands.conf and be located in "$SPLUNK_HOME/etc/apps/app_name/bin". \
|
|
|
The scripts are run with "$SPLUNK_HOME/bin/python".
|
|
|
comment1 = Run the Python script "myscript" with arguments, myarg1 and myarg2; then, email the results.
|
|
|
example1 = ... | script python myscript myarg1 myarg2 | sendemail to=david@splunk.com
|
|
|
usage = public
|
|
|
tags = script run python perl custom
|
|
|
category = search::external
|
|
|
|
|
|
[script-name-arg]
|
|
|
syntax = <string>
|
|
|
description = The name of the scripted search command to execute, as defined in commands.conf
|
|
|
example1 = sendemail
|
|
|
|
|
|
[maxinputs-opt]
|
|
|
syntax = maxinputs=<int>
|
|
|
description = Determines the maximum number of input results passed to the script.
|
|
|
example1 = maxinputs=1000
|
|
|
default = maxinputs=50000
|
|
|
|
|
|
[script-arg]
|
|
|
syntax = <string>
|
|
|
description = An argument passed to the script.
|
|
|
example1 = to=bob@mycompany.com
|
|
|
|
|
|
|
|
|
##################
|
|
|
# savedsearch
|
|
|
##################
|
|
|
[savedsearch-command]
|
|
|
syntax = savedsearch <string> (<savedsearch-opt> )*
|
|
|
alias = macro, savedsplunk
|
|
|
shortdesc = Runs a saved search by name.
|
|
|
description = Runs a saved search. \
|
|
|
If the search contains replacement terms, string replacement is performed. \
|
|
|
For example, if the search were something like "index=$indexname$", then \
|
|
|
the indexname term can be provided at invocation time of the savedsearch command.
|
|
|
usage = public
|
|
|
comment1 = Run the searchindex saved search with an index provided (as per above)
|
|
|
example1 = | savedsearch searchindex index=main
|
|
|
commentcheat = Run the "mysecurityquery" saved search.
|
|
|
examplecheat = | savedsearch mysecurityquery
|
|
|
category = results::generate
|
|
|
tags = search macro saved bookmark
|
|
|
related = search
|
|
|
|
|
|
[savedsearch-opt]
|
|
|
syntax = <savedsearch-macro-opt>|<savedsearch-replacement-opt>
|
|
|
|
|
|
[savedsearch-macro-opt]
|
|
|
syntax = nosubstitution=<bool>
|
|
|
description = If true, no string substitution replacements are made.
|
|
|
default = nosubstitution=false
|
|
|
|
|
|
[savedsearch-replacement-opt]
|
|
|
syntax = <string>=<string>
|
|
|
description = A key value pair to be used in string substitution replacement.
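|
|
|
# Illustrative example, not part of the official reference; mirrors the "index=$indexname$" substitution described in [savedsearch-command]:
|
|
|
example1 = indexname=main
|
|
|
comment1 = Replaces the $indexname$ term in the saved search string with "main".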
|
|
|
|
|
|
##################
|
|
|
# sendalert
|
|
|
##################
|
|
|
|
|
|
[sendalert-command]
|
|
|
syntax = sendalert <alert_action_name> (results_link=<url>)? (results_path=<path>)? (param.<param-name>=<value>)*
|
|
|
shortdesc = Triggers the given custom alert action.
|
|
|
description = Triggers the given alert action according to the custom alert actions framework. The command\
|
|
|
gathers the configuration for the alert action (from alert_actions.conf, the saved search \
|
|
|
and custom parameters passed via the command arguments) and performs token replacement. Then\
|
|
|
the command determines the alert action script and arguments to run, creates the alert action\
|
|
|
payload and executes the script, handing over the payload via STDIN to the script process.\
|
|
|
When running the custom script the sendalert command also honors the maxtime setting from\
|
|
|
alert_actions.conf and forcefully terminates the process if it's running longer than the\
|
|
|
configured threshold (by default this is set to 5 minutes).
|
|
|
usage = internal
|
|
|
example1 = ... | sendalert hipchat param.room="SecOps" param.message="There is a security problem!"
|
|
|
comment1 = Trigger the hipchat custom alert action and pass in room and message as custom parameters.
|
|
|
tags = custom alert
|
|
|
category = alerting
|
|
|
|
|
|
[alert_action_name]
|
|
|
syntax = <string>
|
|
|
description = Name of the custom alert action.
|
|
|
example = hipchat
|
|
|
|
|
|
[param-name]
|
|
|
syntax = <string>
|
|
|
description = Name of the parameter to pass to the custom alert action.
|
|
|
example = message
|
|
|
|
|
|
##################
|
|
|
# sendemail
|
|
|
##################
|
|
|
|
|
|
[sendemail-command]
|
|
|
syntax = sendemail <to-option> <from-option>? <cc-option>? <bcc-option>? <subject-option>? <message-option>? <footer-option>? <sendresults-option>? <inline-option>? <format-option>? <sendcsv-option>? <sendpdf-option>? <sendpng-option>? <pdfview-option>? <paperorientation-option>? <papersize-option>? <priority-option>? <server-option>? <graceful-option>? <content_type-option>? <width_sort_columns-option>? <use_ssl-option>? <use_tls-option>? <maxinputs-option>? <maxtime-option>?
|
|
|
shortdesc = Emails search results to the specified email addresses.
|
|
|
description = Emails search results to the specified email addresses.
|
|
|
usage = public
|
|
|
comment1 = Send search results to the specified email.
|
|
|
example1 = ... | sendemail to="elvis@splunk.com"
|
|
|
comment2 = Send search results in HTML format with the subject "myresults".
|
|
|
example2 = ... | sendemail to="elvis@splunk.com,john@splunk.com" content_type=html subject=myresults server=mail.splunk.com
|
|
|
tags = email mail alert
|
|
|
category = alerting
|
|
|
|
|
|
[to-option]
|
|
|
syntax = to=<email_list>
|
|
|
description = List of email addresses to send search results to.
|
|
|
|
|
|
[from-option]
|
|
|
syntax = from=<email_list>
|
|
|
description = Email address from line.
|
|
|
default = splunk@<hostname>
|
|
|
|
|
|
[cc-option]
|
|
|
syntax = cc=<email_list>
|
|
|
description = Cc line; comma-separated quoted list of valid email addresses.
|
|
|
|
|
|
[bcc-option]
|
|
|
syntax = bcc=<email_list>
|
|
|
description = Blind cc line; comma-separated quoted list of valid email addresses.
|
|
|
|
|
|
[email_list]
|
|
|
syntax = <email_address> (, <email_address> )*
|
|
|
example1 = "bob@smith.com, elvis@presley.com"
|
|
|
|
|
|
[email_address]
|
|
|
# if we supported regex, perhaps: [A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}
|
|
|
syntax = <string>
|
|
|
example1 = bob@smith.com
|
|
|
|
|
|
[subject-option]
|
|
|
syntax = subject=<string>
|
|
|
description = Specifies the subject line.
|
|
|
default = Splunk Results
|
|
|
|
|
|
[message-option]
|
|
|
syntax = message=<string>
|
|
|
description = Specifies the message sent in the email.
|
|
|
default = If sendresults=false: Search complete. \
|
|
|
If sendresults=true, inline=true, and either sendpdf=false or sendcsv=false: Search results. \
|
|
|
If sendpdf=true or sendcsv=true: Search results attached.
|
|
|
|
|
|
[footer-option]
|
|
|
syntax = footer=<string>
|
|
|
description = Specify an alternate email footer.
|
|
|
default = 'If you believe you've received this email in error, \
|
|
|
please see your Splunk administrator.\r\n\r\nsplunk>'
|
|
|
|
|
|
[sendresults-option]
|
|
|
syntax = sendresults=<bool>
|
|
|
description = Determines whether the results should be included with the email.
|
|
|
default = Refer to the email.sendresults in the alert_actions.conf file.
|
|
|
|
|
|
[inline-option]
|
|
|
syntax = inline=<bool>
|
|
|
description = Specifies whether to send the results in the message body or as an attachment.
|
|
|
default = Refer to the email.inline in the alert_actions.conf file.
|
|
|
|
|
|
[format-option]
|
|
|
syntax = format=(csv|table|raw)
|
|
|
description = Specifies how to format inline results.
|
|
|
default = Refer to the email.format in the alert_actions.conf file.
|
|
|
|
|
|
[sendcsv-option]
|
|
|
syntax = sendcsv=<bool>
|
|
|
description = Specify whether to send the results with the email as an attached CSV file or not.
|
|
|
default = Refer to the email.sendcsv in the alert_actions.conf file.
|
|
|
|
|
|
[sendpdf-option]
|
|
|
syntax = sendpdf=<bool>
|
|
|
description = Specify whether to send the results with the email as an attached PDF or not.
|
|
|
default = Refer to the email.sendpdf in the alert_actions.conf file.
|
|
|
|
|
|
[sendpng-option]
|
|
|
syntax = sendpng=<bool>
|
|
|
description = Specify whether or not to send Dashboard Studio results with the email as an attached PNG.
|
|
|
default = Refer to the email.sendpng in the alert_actions.conf file.
|
|
|
|
|
|
[pdfview-option]
|
|
|
syntax = pdfview=<string>
|
|
|
description = Name of view to send as a PDF.
|
|
|
|
|
|
[paperorientation-option]
|
|
|
syntax = paperorientation=(portrait|landscape)
|
|
|
description = Paper orientation: portrait or landscape.
|
|
|
default = portrait
|
|
|
|
|
|
[papersize-option]
|
|
|
syntax = papersize=(letter|legal|ledger|a2|a3|a4|a5)
|
|
|
description = Default paper size for PDFs. Acceptable values: letter, legal, ledger, a2, a3, a4, a5.
|
|
|
default = letter
|
|
|
|
|
|
[priority-option]
|
|
|
syntax = priority=(highest|high|normal|low|lowest)
|
|
|
description = Set the priority of the email as it appears in the email client: highest or 1, high or 2, normal or 3, low or 4, lowest or 5.
|
|
|
default = 3
|
|
|
|
|
|
[server-option]
|
|
|
syntax = server=<string>
|
|
|
description = If the SMTP server is not local, use this to specify it.
|
|
|
default = localhost
|
|
|
|
|
|
[graceful-option]
|
|
|
syntax = graceful=<bool>
|
|
|
description = If set to true, no error is thrown if email sending fails, and the search pipeline continues execution as if the sendemail command were not there.
|
|
|
default = false
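#
# A minimal sketch of graceful=true: if sending fails, no error is thrown and
# the rest of the pipeline still runs (address and index are illustrative):
#   index=main error | sendemail to="admin@example.com" graceful=true | stats count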
|
|
|
|
|
|
[content_type-option]
|
|
|
syntax = content_type=(html|plain)
|
|
|
description = The content type of the email. Plain sends the email as plain text and html sends the email as a multipart email that includes both text and html.
|
|
|
default = Refer to the email.content_type in the alert_actions.conf file.
|
|
|
|
|
|
[width_sort_columns-option]
|
|
|
syntax = width_sort_columns=<bool>
|
|
|
description = This is only valid for plain text emails. Specifies whether the columns should be sorted by their width.
|
|
|
default = true
|
|
|
|
|
|
[use_ssl-option]
|
|
|
syntax = use_ssl=<bool>
|
|
|
description = Whether to use SSL when communicating with the SMTP server. When true, you must also specify both the server name or IP address and the TCP port in the 'mailserver' attribute.
|
|
|
default = false
|
|
|
|
|
|
[use_tls-option]
|
|
|
syntax = use_tls=<bool>
|
|
|
description = Specify whether to use TLS when communicating with the SMTP server.
|
|
|
default = false
|
|
|
|
|
|
[maxinputs-option]
|
|
|
syntax = maxinputs=<int>
|
|
|
description = Set the maximum number of search results sent via alerts.
|
|
|
default = 50000
|
|
|
|
|
|
[maxtime-option]
|
|
|
syntax = maxtime=<int>(m|s|h|d)
|
|
|
description = The maximum amount of time that the execution of an action is allowed to take before the action is aborted.
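#
# A minimal sketch capping the send action at five minutes (recipient illustrative):
#   ... | sendemail to="ops@example.com" maxtime=5m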
|
|
|
|
|
|
##################
|
|
|
# setfields
|
|
|
##################
|
|
|
[setfields-command]
|
|
|
syntax = setfields <setfields-arg>(, <setfields-arg>)*
|
|
|
shortdesc = Sets the field values for all results to a common value.
|
|
|
description = Sets the value of the given fields to the specified values for each event in the result set. \
|
|
|
Missing fields are added, present fields are overwritten.
|
|
|
usage = deprecated
|
|
|
note = use 'eval field="value"'
|
|
|
example1 = ... | setfields ip="10.10.10.10", foo="foo bar"
|
|
|
category = fields::add
|
|
|
related = fillnull setfields rename
|
|
|
tags = annotate set note
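#
# Per the note above, the deprecated setfields in example1 is equivalent to
# this eval form:
#   ... | eval ip="10.10.10.10", foo="foo bar"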
|
|
|
|
|
|
[setfields-arg]
|
|
|
syntax = <string>="<string>"
|
|
|
description = a key-value pair with a quoted value. Standard key cleaning will be performed, i.e., all non-alphanumeric \
|
|
|
characters will be replaced with '_' and leading '_' will be removed.
|
|
|
|
|
|
##################
|
|
|
# spath
|
|
|
##################
|
|
|
|
|
|
[spath-command]
|
|
|
syntax = spath (output=<field>)? (path=<datapath> | <datapath>)? (input=<field>)?
|
|
|
shortdesc = Extracts values from structured data (XML or JSON) and stores them in a field or fields.
|
|
|
description = When called with no path argument, spath extracts all fields from the \
|
|
|
first 5000 characters (the limit is configurable via limits.conf), with the produced fields named by their path. \
|
|
|
If a path is provided, the value of this path is extracted to a field \
|
|
|
named by the path by default, or to a field specified by the output \
|
|
|
argument if it is provided.\
|
|
|
Paths are of the form 'foo.bar.baz'. Each level can also have an \
|
|
|
optional array index, delimited by curly brackets, e.g. 'foo{1}.bar'. \
|
|
|
All array elements can be represented by empty curly brackets e.g. 'foo{}'. \
|
|
|
The final level for XML queries can also include an attribute name, \
|
|
|
also enclosed by curly brackets, e.g. 'foo.bar{@title}'. \
|
|
|
By default, spath takes the whole event as its input. The input \
|
|
|
argument can be used to specify a different field for the input source.
|
|
|
example1 = ... | spath output=myfield path=foo.bar.baz
|
|
|
example2 = ... | spath input=oldfield output=newfield path=catalog.book{@id}
|
|
|
example3 = ... | spath server.name
|
|
|
category = fields::add
|
|
|
usage = public
|
|
|
related = rex, regex
|
|
|
tags = spath xpath json xml extract
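#
# A sketch of the path forms described above, against a hypothetical field
# 'payload' (the {@id} attribute form applies to XML input only):
#   ... | spath input=payload path=catalog.book{}.title output=titles
#   ... | spath input=payload path=catalog.book{1}.price
#   ... | spath input=payload path=catalog.book{@id}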
|
|
|
|
|
|
##################
|
|
|
# table
|
|
|
##################
|
|
|
[table-command]
|
|
|
syntax = table <wc-field-list>
|
|
|
shortdesc = Returns a table formed by only the fields specified in the arguments.
|
|
|
description = Returns a table formed by only the fields specified in the arguments. Columns are \
|
|
|
displayed in the same order that fields are specified. Column headers are the field \
|
|
|
names. Rows are the field values. Each row represents an event.
|
|
|
usage = public
|
|
|
example1 = ... | table foo bar baz*
|
|
|
comment1 = Resulting table has field foo then bar then all fields that start with 'baz'
|
|
|
tags = fields
|
|
|
related = fields
|
|
|
category = results::filter
|
|
|
|
|
|
##################
|
|
|
# transpose
|
|
|
##################
|
|
|
[transpose-command]
|
|
|
syntax = transpose (<int>)? (column_name=<string>)? (header_field=<field>)? (include_empty=<bool>)?
|
|
|
shortdesc = Turns rows into columns.
|
|
|
description = Turns rows into columns (each row becomes a column). Takes an optional integer argument that limits the number of rows to transpose (default = 5). 'column_name' is the name of the output field that holds the names of the input fields (default = "column"). 'header_field', if provided, uses the value of that field in each input row as the name of the output field for that column (default = no field provided; output fields are named "row 1", "row 2", ...). 'include_empty' is an optional boolean that, if false, excludes any field/column in the input that had no values for any row (default = true).
|
|
|
usage = public
|
|
|
example1 = ... | transpose
|
|
|
comment1 = Turns the first five rows into columns
|
|
|
example2 = ... | transpose 20
|
|
|
comment2 = Turns the first 20 rows into columns
|
|
|
example3 = ... | transpose column_name="Test Name" header_field=sourcetype include_empty=false
|
|
|
comment3 = Turns the first five rows into columns, where the input field names are put into the output field called "Test Name", and the input row values for the sourcetype field will be used as the output field names.
|
|
|
tags = fields, stats
|
|
|
related = fields, stats
|
|
|
category = reporting
|
|
|
|
|
|
##################
|
|
|
# uniq
|
|
|
##################
|
|
|
[uniq-command]
|
|
|
syntax = uniq
|
|
|
shortdesc = Filters out repeated adjacent results.
|
|
|
description = Removes any search result that is an exact duplicate of the result immediately preceding it.
|
|
|
usage = public
|
|
|
comment1 = For the current search, keep only unique results.
|
|
|
example1 = ... | uniq
|
|
|
related = dedup
|
|
|
tags = uniq unique duplicate redundant extra
|
|
|
category = results::filter
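#
# Because uniq removes only adjacent duplicates, sorting first makes it act
# like a whole-result dedup (a sketch; the dedup command is usually preferable):
#   ... | sort _raw | uniq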
|
|
|
|
|
|
##################
|
|
|
# metasearch
|
|
|
##################
|
|
|
|
|
|
[metasearch-command]
|
|
|
simplesyntax = metasearch <logical-expression>?
|
|
|
syntax = metasearch <logical-expression>?
|
|
|
shortdesc = Retrieves event metadata from indexes based on terms in the <logical-expression>.
|
|
|
description = Retrieves event metadata from indexes based on terms in the <logical-expression>. Metadata fields include source, sourcetype, host, _time, index, and splunk_server.
|
|
|
usage = public
|
|
|
comment1 = Return metadata for events with "404" and from host "webserver1"
|
|
|
example1 = 404 host="webserver1"
|
|
|
category = search::search
|
|
|
tags = search query find
|
|
|
related = search metadata
|
|
|
|
|
|
##################
|
|
|
# search
|
|
|
##################
|
|
|
|
|
|
[search-command]
|
|
|
simplesyntax = search <logical-expression>?
|
|
|
syntax = search <logical-expression>?
|
|
|
shortdesc = Filters results using keywords, quoted phrases, wildcards, and key/value expressions.
|
|
|
description = If the first search command, retrieve events from the indexes, using keywords, quoted phrases, wildcards, and key/value expressions; if not the first, filter results.
|
|
|
usage = public
|
|
|
comment1 = Search for events with "404" and from host "webserver1"
|
|
|
example1 = 404 host="webserver1"
|
|
|
comment2 = Search for events with either codes 10 or 29, and a host that isn't "localhost" and an xqp that is greater than 5
|
|
|
example2 = (code=10 OR code=29) host!="localhost" xqp>5
|
|
|
commentcheat1 = Keep only search results that have the specified "src" or "dst" values.
|
|
|
examplecheat1 = src="10.9.165.*" OR dst="10.9.165.8"
|
|
|
category = search::search
|
|
|
tags = search query find where filter daysago enddaysago endhoursago endminutesago endmonthsago endtime endtime eventtype eventtypetag host hosttag hoursago minutesago monthsago searchtimespandays searchtimespanhours searchtimespanminutes searchtimespanmonths source sourcetype startdaysago starthoursago startminutesago startmonthsago starttime starttimeu tag
|
|
|
|
|
|
[logical-expression]
|
|
|
simplesyntax = (NOT)? <logical-expression>|<comparison-expression>|(<logical-expression> OR? <logical-expression>)
|
|
|
syntax = ("(" <logical-expression> ")")|<time-opts>|<search-modifier>|(<boolean-operator-not>? <logical-expression>)|<index-expression>|<comparison-expression>|(<logical-expression> (<boolean-operator-or>|<boolean-operator-and>)? <logical-expression>)
|
|
|
|
|
|
[index-expression]
|
|
|
syntax = \"<string>\"|<term>|<search-modifier>
|
|
|
|
|
|
[comparison-expression]
|
|
|
syntax = <field><cmp><value>|<field> IN <value-list>
|
|
|
|
|
|
[value-list]
|
|
|
syntax = "("<value>(,<value>)*")"
|
|
|
|
|
|
[cmp]
|
|
|
syntax = =|!=|"<"|"<"=|">"|">"=
|
|
|
|
|
|
[value]
|
|
|
syntax = <lit-value>
|
|
|
|
|
|
[lit-value]
|
|
|
syntax = <string>|<num>
|
|
|
|
|
|
[index-specifier]
|
|
|
syntax = index(=|!=)<string>
|
|
|
description = Search the specified index instead of the default index
|
|
|
|
|
|
[time-opts]
|
|
|
syntax = (<timeformat>)? (<time-modifier> )*
|
|
|
|
|
|
[search-modifier]
|
|
|
syntax = <index-specifier>|<sourcetype-specifier>|<host-specifier>|<source-specifier>|<savedsplunk-specifier>|<eventtype-specifier>|<eventtypetag-specifier>|<hosttag-specifier>|<tag-specifier>
|
|
|
|
|
|
[time-modifier]
|
|
|
syntax = <earliesttime>|<indexearliest>|<starttime>|<startdaysago>|<startminutesago>|<starthoursago>|<startmonthsago>|<starttimeu>|<latesttime>|<indexlatest>|<endtime>|<enddaysago>|<endminutesago>|<endhoursago>|<endmonthsago>|<endtimeu>|<searchtimespanhours>|<searchtimespanminutes>|<searchtimespandays>|<searchtimespanmonths>|<daysago>|<minutesago>|<hoursago>|<monthsago>
|
|
|
|
|
|
[timeformat]
|
|
|
syntax = timeformat=<string>
|
|
|
description = Set the time format for starttime and endtime terms.
|
|
|
example1 = timeformat=%m/%d/%Y:%H:%M:%S
|
|
|
default = timeformat=%m/%d/%Y:%H:%M:%S
|
|
|
|
|
|
[sourcetype-specifier]
|
|
|
syntax = sourcetype(=|!=)<string>
|
|
|
description = Search for events from the specified sourcetype
|
|
|
|
|
|
[host-specifier]
|
|
|
syntax = host(=|!=)<string>
|
|
|
description = Search for events from the specified host
|
|
|
|
|
|
[source-specifier]
|
|
|
syntax = source(=|!=)<string>
|
|
|
description = Search for events from the specified source
|
|
|
|
|
|
[savedsplunk-specifier]
|
|
|
syntax = (savedsearch|savedsplunk)=<string>
|
|
|
description = Search for events that would be found by specified search/splunk
|
|
|
|
|
|
[eventtype-specifier]
|
|
|
syntax = eventtype(=|!=)<string>
|
|
|
description = Search for events that match the specified eventtype
|
|
|
|
|
|
[eventtypetag-specifier]
|
|
|
syntax = eventtypetag(=|!=)<string>
|
|
|
description = Search for events that would match all eventtypes tagged by the string
|
|
|
|
|
|
[hosttag-specifier]
|
|
|
syntax = hosttag(=|!=)<string>
|
|
|
description = Search for events that have hosts that are tagged by the string
|
|
|
|
|
|
[tag-specifier]
|
|
|
syntax = tag(=|!=)<field>::<string>
|
|
|
description = Search for all events that have their specified field tagged by string
|
|
|
usage = internal
|
|
|
|
|
|
[earliesttime]
|
|
|
syntax = earliest=<time_modifier>
|
|
|
description = Specify the earliest _time for the time range of your search. You can specify an exact time (earliest="11/5/2016:20:00:00") or a relative time (earliest=-h or earliest=@w0).
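#
# A sketch combining relative earliest and latest modifiers (index and term illustrative):
#   search index=web error earliest=-24h latest=-1h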
|
|
|
|
|
|
[indexearliest]
|
|
|
syntax = _index_earliest=<time_modifier>
|
|
|
description = Specify the earliest _indextime for the time range of your search. You can specify an exact time (_index_earliest="11/5/2016:20:00:00") or a relative time (_index_earliest=-h or _index_earliest=@w0).
|
|
|
|
|
|
[starttime]
|
|
|
syntax = starttime=<string>
|
|
|
description = Events must be later than or equal to this time. Must match the time format.
|
|
|
|
|
|
[startdaysago]
|
|
|
syntax = startdaysago=<int>
|
|
|
description = A shortcut to set the start time. starttime = now - (N days)
|
|
|
|
|
|
[startminutesago]
|
|
|
syntax = startminutesago=<int>
|
|
|
description = A shortcut to set the start time. starttime = now - (N minutes)
|
|
|
|
|
|
[starthoursago]
|
|
|
syntax = starthoursago=<int>
|
|
|
description = A shortcut to set the start time. starttime = now - (N hours)
|
|
|
|
|
|
[startmonthsago]
|
|
|
syntax = startmonthsago=<int>
|
|
|
description = A shortcut to set the start time. starttime = now - (N months)
|
|
|
|
|
|
[starttimeu]
|
|
|
syntax = starttimeu=<num>
|
|
|
description = Set the start time to N seconds since the epoch (Unix time).
|
|
|
|
|
|
[latesttime]
|
|
|
syntax = latest=<time_modifier>
|
|
|
description = Specify the latest time for the _time range of your search. You can specify an exact time (latest="11/12/2016:20:00:00") or a relative time (latest=-30m or latest=@w6).
|
|
|
|
|
|
[indexlatest]
|
|
|
syntax = _index_latest=<time_modifier>
|
|
|
description = Specify the latest _indextime for the time range of your search. You can specify an exact time (_index_latest="11/5/2016:20:00:00") or a relative time (_index_latest=-h or _index_latest=@w0).
|
|
|
|
|
|
[endtime]
|
|
|
syntax = endtime=<string>
|
|
|
description = All events must be earlier than or equal to this time.
|
|
|
|
|
|
[enddaysago]
|
|
|
syntax = enddaysago=<int>
|
|
|
description = A shortcut to set the end time. endtime = now - (N days)
|
|
|
|
|
|
[endminutesago]
|
|
|
syntax = endminutesago=<int>
|
|
|
description = A shortcut to set the end time. endtime = now - (N minutes)
|
|
|
|
|
|
[endhoursago]
|
|
|
syntax = endhoursago=<int>
|
|
|
description = A shortcut to set the end time. endtime = now - (N hours)
|
|
|
|
|
|
[endmonthsago]
|
|
|
syntax = endmonthsago=<int>
|
|
|
description = A shortcut to set the end time. endtime = now - (N months)
|
|
|
|
|
|
[endtimeu]
|
|
|
syntax = endtimeu=<num>
|
|
|
description = Set the end time to N seconds since the epoch (Unix time).
|
|
|
|
|
|
[searchtimespanhours]
|
|
|
syntax = searchtimespanhours=<int>
|
|
|
description = The time span operators are always applied from the last time boundary set. Therefore, if an endtime operator is closest to the left of a timespan operator, it will be applied to the starttime. If you had 'enddaysago=1 searchtimespanhours=5', it would be equivalent to 'starthoursago=29 enddaysago=1'.
|
|
|
|
|
|
[searchtimespanminutes]
|
|
|
syntax = searchtimespanminutes=<int>
|
|
|
|
|
|
[searchtimespandays]
|
|
|
syntax = searchtimespandays=<int>
|
|
|
|
|
|
[searchtimespanmonths]
|
|
|
syntax = searchtimespanmonths=<int>
|
|
|
|
|
|
[daysago]
|
|
|
syntax = daysago=<int>
|
|
|
description = Search the last N days. (equivalent to startdaysago)
|
|
|
|
|
|
[minutesago]
|
|
|
syntax = minutesago=<int>
|
|
|
description = Search the last N minutes. (equivalent to startminutesago)
|
|
|
|
|
|
[hoursago]
|
|
|
syntax = hoursago=<int>
|
|
|
description = Search the last N hours. (equivalent to starthoursago)
|
|
|
|
|
|
[monthsago]
|
|
|
syntax = monthsago=<int>
|
|
|
description = Search the last N months. (equivalent to startmonthsago)
|
|
|
|
|
|
|
|
|
|
|
|
##################
|
|
|
# set
|
|
|
##################
|
|
|
[set-command]
|
|
|
syntax = set (union|diff|intersect) <subsearch> <subsearch>
|
|
|
shortdesc = Performs set operations on subsearches.
|
|
|
description = Performs two subsearches and then executes the specified set operation on the two sets of search results.
|
|
|
usage = public
|
|
|
comment1 = Return all URLs that have 404 errors and 303 errors.
|
|
|
example1 = | set intersect [search 404 | fields url] [search 303 | fields url]
|
|
|
commentcheat = Return values of "URL" that contain the string "404" or "303" but not both.
|
|
|
examplecheat = | set diff [search 404 | fields url] [search 303 | fields url]
|
|
|
category = search::subsearch
|
|
|
generating = true
|
|
|
related = append, appendcols, join, diff
|
|
|
tags = diff union join intersect append
|
|
|
|
|
|
[subsearch]
|
|
|
syntax = [<string>]
|
|
|
description = Specifies a subsearch.
|
|
|
example1 = [search 404 | fields url]
|
|
|
tags = set union diff intersect
|
|
|
|
|
|
#################
|
|
|
# cluster
|
|
|
#################
|
|
|
|
|
|
[cluster-command]
|
|
|
syntax = cluster (<slc-option> )*
|
|
|
alias = slc
|
|
|
shortdesc = Clusters similar events together.
|
|
|
description = Fast and simple clustering method designed to operate on event text (_raw field). With default options, a single representative event is retained for each cluster.
|
|
|
usage = public
|
|
|
comment = Cluster syslog events together.
|
|
|
example = sourcetype=syslog | cluster
|
|
|
commentcheat = Cluster events together, sort them by their "cluster_count" values, and then return the 20 largest clusters (in data size).
|
|
|
examplecheat = ... | cluster t=0.9 showcount=true | sort - cluster_count | head 20
|
|
|
category = results::group
|
|
|
related = anomalies, anomalousvalue, cluster, kmeans, outlier
|
|
|
tags = cluster group collect gather
|
|
|
|
|
|
[slc-option]
|
|
|
syntax = ((t=<num>)|(delims=<string>)|(showcount=<bool>)|(countfield=<field>)|(labelfield=<field>)|(field=<field>)|(labelonly=<bool>)|(match=(termlist|termset|ngramset)))
|
|
|
description = Options for configuring the simple log clusters. \
|
|
|
"T=" sets the threshold which must be > 0.0 and < 1.0. The closer the threshold is to 1, the more similar events have to be in order to be considered in the same cluster. Default is 0.8 \
|
|
|
"delims" configures the set of delimiters used to tokenize the raw string. By default everything except 0-9, A-Z, a-z, and '_' are delimiters. \
|
|
|
"showcount" if yes, this shows the size of each cluster (default = false) \
|
|
|
"countfield" name of field to write cluster size to if showcount=true , default = "cluster_count" \
|
|
|
"labelfield" name of field to write cluster number to, default = "cluster_label" \
|
|
|
"field" name of field to analyze, default = _raw \
|
|
|
"labelonly" if true, instead of reducing each cluster to a single event, keeps all original events and merely labels with them their cluster number\
|
|
|
"match" determines the similarity method used, defaulting to termlist. termlist requires the exact \
|
|
|
same ordering of terms, termset allows for an unordered set of terms, and ngramset compares sets of \
|
|
|
trigrams (3-character substrings). ngramset is significantly slower on large field values and is most useful for short non-textual fields, like 'punct'.
|
|
|
example1 = t=0.9 delims=" ;:" showcount=true countfield="SLCCNT" labelfield="LABEL" field=_raw labelonly=true
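#
# A sketch of the match methods described above; ngramset suits short
# non-textual fields such as 'punct':
#   ... | cluster match=ngramset field=punct showcount=true | sort - cluster_count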
|
|
|
|
|
|
##################
|
|
|
# showargs
|
|
|
##################
|
|
|
|
|
|
[showargs-command]
|
|
|
syntax = showargs <subsearch>
|
|
|
description = Treats the given string as a subsearch, executes that subsearch \
|
|
|
and renders the results as an event. This is useful for debugging \
|
|
|
subsearches.
|
|
|
usage = internal
|
|
|
example1 = ... | showargs [search * | top source | fields source | format]
|
|
|
generating = true
|
|
|
|
|
|
##################
|
|
|
# sort
|
|
|
##################
|
|
|
|
|
|
[sort-command]
|
|
|
syntax = sort (<int>)? <sort-by-clause>+ (d|desc)?
|
|
|
simplesyntax = sort (<int:count>)? <sort-by-clause>+ desc?
|
|
|
shortdesc = Sorts search results by the specified fields.
|
|
|
description = Sorts by the given list of fields. If more than one field is specified, \
|
|
|
the first denotes the primary sort order, the second denotes the secondary, etc. \
|
|
|
If the fieldname is immediately (no space) preceded by "+", the sort is ascending (default). \
|
|
|
If the fieldname is immediately (no space) preceded by "-", the sort is descending. \
|
|
|
If white space follows "+/-", the sort order is applied to all following fields without a different explicit sort order. \
|
|
|
Also a trailing "d" or "desc" causes the results to be reversed. \
|
|
|
Results missing a given field are treated as having the smallest or largest \
|
|
|
possible value of that field if the order is descending or ascending, respectively. \
|
|
|
If the field takes on numeric values, the collating sequence is numeric. \
|
|
|
If the field takes on IP address values, the collating sequence is for IPs. \
|
|
|
Otherwise, the collating sequence is lexicographic ordering. \
|
|
|
If the first term is a number, then at most that many results are returned (in order). \
|
|
|
If no number is specified, the default limit of 10000 is used. If number is 0, all results will be returned.
|
|
|
example1 = ... | sort _time, -host
|
|
|
comment1 = Sort results by the "_time" field in ascending order and then by the "host" value in descending order.
|
|
|
example2 = ... | sort 100 -size, +source
|
|
|
comment2 = Sort first 100 results in descending order of the "size" field and then by the "source" value in ascending order.
|
|
|
commentcheat = Sort results by "ip" value in ascending order and then by "url" value in descending order.
|
|
|
examplecheat = ... | sort ip, -url
|
|
|
category = results::order
|
|
|
usage = public
|
|
|
related = reverse
|
|
|
tags = arrange, order, rank, sort
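#
# A sketch of the leading count and typed sort-fields: 0 returns all results,
# and ip() forces IP-address collation (field name illustrative):
#   ... | sort 0 ip(clientip), -_time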
|
|
|
|
|
|
[sort-by-clause]
|
|
|
syntax = ("-"|"+")?( )?<sort-field> ","?
|
|
|
simplesyntax = ("-"|"+")<sort-field> ","
|
|
|
description = List of fields to sort by and their sort order (ascending or descending)
|
|
|
example1 = _time, -host
|
|
|
example2 = - time, host
|
|
|
example3 = -size, +source
|
|
|
|
|
|
[sort-field]
|
|
|
syntax = <field> | ((auto|str|ip|num) "(" <field> ")")
|
|
|
description = a sort field may be a field or a sort-type and field. sort-type can be "ip" to interpret \
|
|
|
the field's values as IP addresses, "num" to treat them as numbers, "str" to order lexicographically, \
|
|
|
and "auto" to make the determination automatically. If no type is specified, it is assumed to be "auto"
|
|
|
example1 = auto(size)
|
|
|
example2 = ip(source_addr)
|
|
|
example3 = str(pid)
|
|
|
example4 = host
|
|
|
example5 = _time
|
|
|
|
|
|
|
|
|
##################
|
|
|
# collect
|
|
|
##################
|
|
|
|
|
|
[collect-command]
|
|
|
syntax = collect <collect-index> (<collect-arg>)*
|
|
|
alias = stash, summaryindex, sumindex
|
|
|
shortdesc = Puts search results into a summary index.
|
|
|
description = Adds the results of the search into the specified index. Behind the scenes, the events are written \
|
|
|
to a file whose name format is: "<random-num>_events.stash", unless overridden, in a directory \
|
|
|
which is watched for new events by Splunk. If the events contain a _raw field then the raw field \
|
|
|
is saved; if they don't, a _raw field is constructed by concatenating all the fields into a \
|
|
|
comma-separated list of key="value" pairs.
|
|
|
usage = public
|
|
|
comment1 = Put "download" events into an index named "downloadcount".
|
|
|
example1 = eventtypetag="download" | collect index=downloadcount
|
|
|
related = overlap, sichart, sirare, sistats, sitop, sitimechart
|
|
|
tags = collect summary overlap summary index summaryindex
|
|
|
category = index::summary
|
|
|
|
|
|
[collect-arg]
|
|
|
syntax = <collect-addtime> | <collect-index> | <collect-file> | <collect-spool> | <collect-marker> | <collect-testmode> | <collect-run-in-preview> | <collect-host> | <collect-source> | <collect-sourcetype> | <collect-uselb> | <collect-format> | <collect-timeformat>
|
|
|
|
|
|
[collect-addtime]
|
|
|
syntax = addtime=<bool>
|
|
|
description = whether to prepend a time to each event if the event does not contain a _raw field. \
|
|
|
The first of the following fields that is found is used: info_min_time, _time, now(). \
|
|
|
Defaults to true for output_format=raw. Not a valid option for output_format=hec.
|
|
|
|
|
|
[collect-addinfo]
|
|
|
syntax = addinfo=<bool>
|
|
|
description = If true, writes the search time and time bounds into the text \
|
|
|
of each summary index event in the format \
|
|
|
info_min_time=<search_earliest_time>, info_max_time=<search_latest_time>, \
|
|
|
info_search_time=<search_exec_time>. Defaults to true for output_format=raw
|
|
|
|
|
|
[collect-index]
|
|
|
syntax = index=<string>
|
|
|
description = name of the index where Splunk should add the events. Note: the index must exist \
|
|
|
for events to be added to it, the index is NOT created automatically.
|
|
|
|
|
|
[collect-file]
|
|
|
syntax = file=<string>
|
|
|
description = name of the file to write the events to. Optional; default "<random-num>_events.stash". \
|
|
|
The following placeholders can be used in the file name: $timestamp$ and $random$. They will be \
|
|
|
replaced with a timestamp and a random number, respectively.
|
|
|
|
|
|
[collect-format]
|
|
|
syntax = output_format=("raw"|"hec")
|
|
|
description = If set to "hec", outputs a HEC JSON formatted output, which allows all fields to be \
|
|
|
automatically indexed when the stash file is indexed. HEC formatted stash files \
|
|
|
will end with a .stash_hec suffix instead of .stash. \
|
|
|
If set to "raw", uses the traditional non-structured log style \
|
|
|
summary indexing stash output format. Default is "raw".
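#
# A minimal sketch of HEC-formatted output (index name illustrative):
#   ... | collect index=my_summary output_format=hec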
|
|
|
|
|
|
[collect-spool]
|
|
|
syntax = spool=<bool>
|
|
|
description = If set to true (default is true), the summary indexing file will be written to \
|
|
|
Splunk's spool directory, where it will be indexed automatically. \
|
|
|
If set to false, file will be written to $SPLUNK_HOME/var/run/splunk, where it will remain \
|
|
|
until further administrative actions are taken.
|
|
|
|
|
|
[collect-marker]
|
|
|
syntax = marker=<string>
|
|
|
description = a string, usually of key-value pairs, to append to each event written out. \
|
|
|
Optional, default is empty string. Not a valid option for output_format=hec.
|
|
|
|
|
|
[collect-testmode]
|
|
|
syntax = testmode=<bool>
|
|
|
description = toggle between testing and real mode. In testing mode the results are not written \
|
|
|
into the new index but the search results are modified to appear as they would if \
|
|
|
sent to the index. (defaults to false)
|
|
|
|
|
|
[collect-run-in-preview]
|
|
|
syntax = run_in_preview=<bool>
|
|
|
description = controls whether this command is enabled during preview generation. Generally you do not \
|
|
|
want to insert preview results into the summary index - that is why this defaults to false. \
|
|
|
However, in some rare cases, such as when a custom search command is used as part of the search \
|
|
|
to ensure correct summary-indexable previews are generated, this flag can be turned on. \
|
|
|
(defaults to false)
|
|
|
|
|
|
[collect-host]
|
|
|
syntax = host=<string>
|
|
|
description = The name of the host that you want to specify for the events. \
|
|
|
Not a valid option for output_format=hec.
|
|
|
|
|
|
[collect-source]
|
|
|
syntax = source=<string>
|
|
|
description = The name of the source that you want to specify for the events. \
|
|
|
Not a valid option for output_format=hec.
|
|
|
|
|
|
[collect-sourcetype]
|
|
|
syntax = sourcetype=<string>
|
|
|
description = The name of the source type that you want to specify for the events. \
|
|
|
By specifying a sourcetype outside of stash, you will incur license usage. \
|
|
|
Not a valid option for output_format=hec.
|
|
|
|
|
|
[collect-uselb]
|
|
|
syntax = uselb=<bool>
|
|
|
description = When set to "true", the Splunk software splits the data it \
|
|
|
ingests via the 'collect' command into individual events, using \
|
|
|
a string identical to the LINE_BREAKER setting defined for the \
|
|
|
'stash_new' source type in props.conf. When set to 'false', the \
|
|
|
Splunk software uses a simple line break to split events. Do not \
|
|
|
use this setting unless you are intentionally generating events \
|
|
|
with the 'collect' command in a line-oriented format. Defaults \
|
|
|
to "true". Not a valid option for output_format=hec. \
|
|
|
NOTE: While the default behavior of the 'collect' command is to \
|
|
|
use a LINE_BREAKER setting identical to that used in props.conf, \
|
|
|
the collect command's default LINE_BREAKER is hardcoded. Changes \
|
|
|
to props.conf do NOT affect the behavior of the 'collect' \
|
|
|
command.
|
|
|
|
|
|
[collect-timeformat]
|
|
|
syntax = timeformat=<string>
|
|
|
description = Controls the format of the timestamp that is written to the \
|
|
|
stash file before it is indexed. The 'addtime' argument must \
|
|
|
be set to "true" for the same invocation of the command in \
|
|
|
order to take advantage of this functionality. Use this \
|
|
|
argument only if you need precise control over the format \
|
|
|
of output files that the 'collect' command generates. \
|
|
|
This option is not valid when 'output_format=hec'.
|
|
|
example1 = timeformat="%m/%d/%Y:%H:%M:%S"
|
|
|
default = timeformat="%m/%d/%Y %H:%M:%S %z"
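#
# A sketch of timeformat; addtime must be true for it to apply (index name illustrative):
#   ... | collect index=my_summary addtime=true timeformat="%Y-%m-%d %H:%M:%S"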
|
|
|
|
|
|
##################
|
|
|
# mcatalog
|
|
|
##################
|
|
|
[mcatalog-command]
|
|
|
syntax = mcatalog (prestats=<bool>)? (append=<bool>)? ((values "(" <field> ")") (as <field>)?)+ (WHERE <logical-expression>)? ((BY|GROUPBY) <field-list>)?
|
|
|
shortdesc = Performs values aggregation on metric_name and dimensions.
|
|
|
description = Returns the list of values for the metric_name or dimension fields from all metric indexes, \
|
|
|
unless an index name is specified in the WHERE clause. The '_values' field is not allowed. \
|
|
|
Supports GROUPBY on the metric_name or dimension fields; however, you cannot specify a time \
|
|
|
span with this command. \i\\
|
|
|
Arguments: \i\\
|
|
|
"prestats": Returns the results in prestats format. You can pipe the results into another command that takes prestats output, such as chart or timechart. \i\\
|
|
|
This is useful for creating graphs. Default is "prestats=false". \i\\
|
|
|
"append": Valid only when "prestats=true". This argument runs the mstats command and adds \i\\
|
|
|
the results to an existing set of results instead of generating new results. Default is "append=false".
|
|
|
usage = internal
|
|
|
category = reporting
|
|
|
comment1 = Return all of the metric names in a specific metric index
|
|
|
example1 = | mcatalog values(metric_name) WHERE index=new-metric-idx
|
|
|
comment2 = Return all of the metric names in all metric indexes
|
|
|
example2 = | mcatalog values(metric_name)
|
|
|
comment3 = Return all IP addresses when the metric name matches 'login.failure'
|
|
|
example3 = | mcatalog values(ip) where metric_name=login.failure
|
|
|
|
|
|
|
|
|
##################
|
|
|
# mcollect
|
|
|
##################
|
|
|
[mcollect-command]
|
|
|
syntax = mcollect (index=<string>) (file=<string>)? (split=<true|false|allnums>)? (spool=<bool>)? (prefix_field=<string>)? (host=<string>)? (source=<string>)? (sourcetype=<string>)? (marker=<string>)? (<field-list>)?
|
|
|
shortdesc = Puts search results into a metric index on the search head.
|
|
|
description = Converts search results into metric data and inserts the data into a metric index \
|
|
|
on the search head. If each result contains only one metric_name field \
|
|
|
and one numeric _value field, the result is already a normalized metrics data point; \
|
|
|
the result does not need to be split and can be consumed directly. \
|
|
|
Otherwise, each result is split into multiple metric data points based on the specified \
|
|
|
list of dimension fields. \
|
|
|
If the '_time' field is present in the results, it is used as the timestamp of the \
|
|
|
metric datapoint. If the '_time' field is not present, the current time is used. \
|
|
|
Arguments: \
|
|
|
“index”: The index where the collected metric data are placed. This argument is required. \
|
|
|
“file”: The file name where you want the collected metrics data to be written. \
|
|
|
The default file name is a random filename. You can use a timestamp or a random number \
|
|
|
for the file name by specifying either file=$timestamp$ or file=$random$. \
|
|
|
Defaults to $random$_metrics.csv \
|
|
|
“split”: Determines how mcollect identifies the measures in an event. Defaults to false. \
|
|
|
When split=true, you use <field-list> to identify the dimensions in your search. The \
|
|
|
mcollect command converts any field in your search that is not part of the <field-list> \
|
|
|
into a measurement. \
|
|
|
When split=false, the measure field or fields need to be explicitly specified by the \
|
|
|
search. \
|
|
|
* If you have single-metric events, the search must produce results with a \
|
|
|
'metric_name' field for the name of the measure and a '_value' field for the \
|
|
|
measure's numeric value. \
|
|
|
* If you have multiple-metric events, the search must produce results that include \
|
|
|
measures that follow this syntax: 'metric_name:<metric_name>=<numeric_value>'. \
|
|
|
When split=allnums, mcollect treats all numeric fields as metric measures and all \
|
|
|
non-numeric fields as dimensions. You can optionally use <field-list> to declare that \
|
|
|
certain numeric fields in the events should be treated as dimensions. \
|
|
|
“spool”: If spool=true (which is the default setting), the metrics data file is written \
|
|
|
to the Splunk spool directory, $SPLUNK_HOME/var/spool/splunk, where the file is indexed \
|
|
|
automatically. If spool=false, the file is written to the $SPLUNK_HOME/var/run/splunk \
|
|
|
directory. The file remains in this directory unless some form of further automation \
|
|
|
or administration is done. \
|
|
|
“prefix_field”: Is applicable only when split=true. If specified, any event with that \
|
|
|
field missing is ignored. Otherwise, the field value is prefixed to the metric name. \
|
|
|
"host": The name of the host that you want to specify for the collected metrics data. \
|
|
|
Only applicable when spool=true. \
|
|
|
"source": The name of the source that you want to specify for the collected metrics data. \
|
|
|
Defaults to the name of the search. \
|
|
|
"sourcetype": The name of the source type that is specified for the collected metrics \
|
|
|
data. This setting defaults to mcollect_stash. License usage is not calculated for \
|
|
|
data indexed with the mcollect_stash source type. If you change to a different \
|
|
|
source type, the Splunk platform calculates license usage for any data indexed \
|
|
|
by the mcollect command. NOTE: Do not change this setting without \
|
|
|
assistance from Splunk Professional Services or Splunk Support. Changing the \
|
|
|
source type requires a change to the props.conf file. \
|
|
|
“marker”: Optional. A string of one or more comma-separated key-value pairs. The Splunk \
|
|
|
software adds these key-value pairs to the metric data points in the summary \
|
|
|
metric index, so you can use them as markers for easy identification in future \
|
|
|
searches of the metric index where they reside. Defaults to empty. \
|
|
|
“field-list”: A list of dimension fields. Optional if split=false (the default), required \
|
|
|
if split=true. If “field-list” is not specified, all fields are treated as dimensions \
|
|
|
for the data point except for the “prefix_field” and internal fields (fields with an \
|
|
|
underscore ’_’ prefix). If “field-list” is specified, the list must be specified \
|
|
|
at the end of the mcollect command arguments. If “field-list” is specified, all \
|
|
|
fields are treated as metric values, except for fields in “field-list”, \
|
|
|
the “prefix_field”, and internal fields. \
|
|
|
The name of each metric value is the field name prefixed with the “prefix_field” value. \
|
|
|
Effectively, one metric data point is returned for each qualifying field that \
|
|
|
contains a numerical value. If one search result contains multiple qualifying \
|
|
|
metric name/value pairs, the result is split into multiple metric data points.
|
|
|
usage = public
|
|
|
comment1 = Generate a count of error events as metric data points.
|
|
|
example1 = ERROR | stats count BY type | rename count AS _value type AS metric_name | mcollect index=my_metric_index
|
|
|
comment2 = Generate multiple metrics on max and average CPU time as metric data points.
|
|
|
example2 = index=syslog cputime | stats max(cputime) AS max_cpu avg(cputime) AS avg_cpu BY host | rename max_cpu AS metric_name:max_cpu avg_cpu AS metric_name:avg_cpu | mcollect index=my_metric_index
|
|
|
comment3 = Generate multiple metrics with 'split=true' as metric data points.
|
|
|
example3 = index=_internal kb!=NULL max_age>0 earliest=-15m | stats sum(kb) AS total_volume, sum(ev) AS total_events by group, series, max_age | mcollect index=my_metric_index split=t max_age group series
|
|
|
comment4 = Generate multiple metrics on numerical fields with 'split=allnums' as metric data points.
|
|
|
example4 = index=_internal kb!=NULL max_age>0 earliest=-15m | stats sum(kb) AS total_volume, sum(ev) AS total_events by group, series, max_age | mcollect index=my_metric_index split=allnums marker="report=\"metrics on events and volume\"" max_age
|
|
|
related = collect meventcollect
|
|
|
tags = collect summary summaryindex metrics
|
|
|
category = index::summary
|
|
|
|
|
|
##################
|
|
|
# meventcollect
|
|
|
##################
|
|
|
[meventcollect-command]
|
|
|
syntax = meventcollect (index=<string>) (split=<bool>)? (spool=<bool>)? (prefix_field=<string>)? (host=<string>)? (source=<string>)? (sourcetype=<string>)? (<field-list>)?
|
|
|
shortdesc = Puts search results into a metric index on the indexers.
|
|
|
description = Converts search results into metric data and inserts the data into a metric index \
|
|
|
on the indexers. If each result contains only one metric_name field and one \
|
|
|
numeric _value field, the result is already a normalized metrics data point; \
|
|
|
the result does not need to be split and can be consumed directly. \
|
|
|
Otherwise, each result is split into multiple metric data points based on the specified \
|
|
|
list of dimension fields. \
|
|
|
Only purely streaming commands can precede the meventcollect command so that results can be \
|
|
|
directly ingested on the indexers. \
|
|
|
Arguments: \
|
|
|
“index”: The index where the collect metric data are placed. This argument is required. \
|
|
|
“split”: Determines how meventcollect identifies the measures in an event. Defaults to false. \
|
|
|
When split=true, you use <field-list> to identify the dimensions in your search. The \
|
|
|
meventcollect command converts any field in your search that is not part of the <field-list> \
|
|
|
into a measurement. \
|
|
|
When split=false, the measure field or fields need to be explicitly specified by the \
|
|
|
search. \
|
|
|
* If you have single-metric events, the search must produce results with a \
|
|
|
'metric_name' field for the name of the measure and a '_value' field for the \
|
|
|
measure's numeric value. \
|
|
|
* If you have multiple-metric events, the search must produce results that include \
|
|
|
measures that follow this syntax: 'metric_name:<metric_name>=<numeric_value>'. \
|
|
|
“spool”: If spool=true (which is the default setting), the metrics data file is written \
|
|
|
to the Splunk spool directory, $SPLUNK_HOME/var/spool/splunk, where the file is indexed \
|
|
|
automatically. If spool=false, the file is written to the $SPLUNK_HOME/var/run/splunk \
|
|
|
directory. The file remains in this directory unless some form of further automation \
|
|
|
or administration is done. \
|
|
|
“prefix_field”: Is applicable only when split=true. If specified, any event with that \
|
|
|
field missing is ignored. Otherwise, the field value is prefixed to the metric name. \
|
|
|
"host": The name of the host that you want to specify for the collected metrics data. \
|
|
|
Only applicable when spool=true. \
|
|
|
"source": The name of the source that you want to specify for the collected metrics data. \
|
|
|
Defaults to the name of the search. \
|
|
|
"sourcetype": The name of the source type that is specified for the collected metrics \
|
|
|
data. This setting defaults to mcollect_stash. License usage is not calculated for \
|
|
|
data indexed with the mcollect_stash source type. If you change to a different \
|
|
|
source type, the Splunk platform calculates license usage for any data indexed \
|
|
|
by the meventcollect command. NOTE: Do not change this setting \
|
|
|
without assistance from Splunk Professional Services or Splunk Support. Changing \
|
|
|
the source type requires a change to the props.conf file. \
|
|
|
“field-list”: A list of dimension fields. Optional if split=false (the default), required \
|
|
|
if split=true. If “field-list” is not specified, all fields are treated as dimensions \
|
|
|
for the data point except for the “prefix_field” and internal fields (fields with an \
|
|
|
underscore ’_’ prefix). If “field-list” is specified, the list must be specified \
|
|
|
at the end of the meventcollect command arguments. If “field-list” is specified, all \
|
|
|
fields are treated as metric values, except for fields in “field-list”, \
|
|
|
the “prefix_field”, and internal fields. \
|
|
|
The name of each metric value is the field name prefixed with the “prefix_field” value. \
|
|
|
Effectively, one metric data point is returned for each qualifying field that \
|
|
|
contains a numerical value. If one search result contains multiple qualifying \
|
|
|
metric name/value pairs, the result is split into multiple metric data points.
|
|
|
usage = public
|
|
|
comment1 = collect metrics.log data into a metrics index
|
|
|
example1 = index=_internal source=*/metrics.log | eval prefix = group + "." + name | meventcollect index=my_metric_index split=true prefix_field=prefix name group
|
|
|
related = collect mcollect
|
|
|
tags = collect summary summaryindex metrics
|
|
|
category = index::summary
|
|
|
|
|
|
##################
|
|
|
# mrollup
|
|
|
##################
|
|
|
[mrollup-command]
|
|
|
syntax = mrollup (source=<string>) (target=<string>) (file=<string>)? (span=<string:timespan>) (aggregate=(<mrollup-aggregate-func>("#"<mrollup-aggregate-func>)?)*)? (dimension-list=(<string>,<string>))? (dimension-list-type=(excluded|included))? (metric-list=(<string>,<string>))? (metric-list-type=(excluded|included))? (metric-overrides=(<string>;(<mrollup-aggregate-func>("#"<mrollup-aggregate-func>)?)*))? (app=<string>)?
|
|
|
shortdesc = Rolls up or summarizes data from a source index into a target index.
|
|
|
description = Roll up metric data into another index for storage and search performance improvements. \
|
|
|
Arguments: \
|
|
|
“source”: The index where the metric data are placed. This is the source index \
|
|
|
holding the raw data which needs to be summarized into a target index. \
|
|
|
This argument is required. \
|
|
|
“target”: The index where the summary is written to. This argument is required. \
|
|
|
“file”: The spool file name where you want the collected metrics data to be written. \
|
|
|
“span”: Rollup span as a timespan string. The span in minutes has to be a factor \
|
|
|
of 60 and in hours should be a factor of 24. Max allowed is 24h or 1d and minimum \
|
|
|
is controlled by limits.conf [rollup]/minSpanAllowed setting. A span less than a \
|
|
|
minute is not allowed. \
|
|
|
“aggregate”: The aggregate stats function to be used to perform rollup. \
|
|
|
Valid aggregations supported are avg, count, max, perc<int>, median, min and sum. \
|
|
|
This argument can take single stats function or a "#" separated stats function list. \
|
|
|
“dimension-list”: The list of dimensions that was part of the source index metric \
|
|
|
data that need to be either excluded or included, as defined by dimension-list-type. \
|
|
|
Reducing the dimensions reduces the amount of data stored, leading to better storage \
|
|
|
and faster search response. Default is all included or nothing excluded. \
|
|
|
“dimension-list-type”: The type of input dimension list. Used along with \
|
|
|
dimension-list argument. Possible values include "excluded" or \
|
|
|
"included". Default is "excluded". \
|
|
|
“metric-list”: The list of metrics that was part of the source index metric \
|
|
|
data that need to be either excluded or included, as defined by metric-list-type. \
|
|
|
Reducing the metrics reduces the amount of data stored, leading to better storage \
|
|
|
and faster search response. Default is all included or nothing excluded. \
|
|
|
“metric-list-type”: The type of input metric list. Used along with \
|
|
|
metric-list argument. Possible values include "excluded" or \
|
|
|
"included". Default is "excluded". \
|
|
|
“metric-overrides”: Comma-separated list of metric name and \
|
|
|
aggregate pairs. Each pair is delimited by a semicolon character. \
|
|
|
If unspecified, all the metric names take on the 'aggregate' stats function \
|
|
|
by default. If overridden, those metrics use the specified \
|
|
|
aggregation stats function. Valid aggregations supported are avg, count, \
|
|
|
max, perc<int>, median, min and sum. The aggregation stats function can be \
|
|
|
a single stats function or a "#" separated stats function list. \
|
|
|
“app”: Optional argument to add a 'rollup_app' dimension to the summary data \
|
|
|
in the target index. The 'rollup_app' dimension gets the value provided for \
|
|
|
this argument. Ideally the 'app' value is the name of the app to which \
|
|
|
the related metric rollup policy belongs. If this argument is missing, 'mrollup' \
|
|
|
will not add the dimension.
|
|
|
usage = internal
|
|
|
comment1 = Roll up an hourly summary of all metrics in metrics_idx into index metrics_idx_1hr
|
|
|
example1 = | mrollup source=metrics_idx target=metrics_idx_1h span=1h aggregate=avg
|
|
|
comment2 = Roll up hourly summary metrics in metrics_idx into index metrics_idx_1hr, with cpu.usage aggregated by max and sum while other metrics are aggregated by avg and min
|
|
|
example2 = | mrollup source=metrics_idx target=metrics_idx_1h span=1h aggregate=avg#min metric-overrides="cpu.usage;max#sum"
|
|
|
comment3 = Roll up an hourly summary of only the error.count metric in metrics_idx into index metrics_idx_1hr
|
|
|
example3 = | mrollup source=metrics_idx target=metrics_idx_1h span=1h aggregate=avg metric-list="error.count" metric-list-type=included
|
|
|
related = mcollect mcatalog
|
|
|
tags = rollup summary summaryindex metrics
|
|
|
category = index::summary
|
|
|
|
|
|
##################
|
|
|
# streamstats
|
|
|
##################
|
|
|
[streamstats-command]
|
|
|
syntax = streamstats (reset_on_change=<bool>)? (reset_before="("<eval-expression>")")? (reset_after="("<eval-expression>")")? (current=<bool>)? (window=<int>)? (time_window=<span-length>)? (global=<bool>)? (allnum=<bool>)? (<stats-agg-term>)* (<by-clause>)?
|
|
|
shortdesc = Adds summary statistics to all search results in a streaming manner.
|
|
|
description = Similar to the 'eventstats' command except that only events seen before \
|
|
|
the current event (plus that event itself if current=t, which is the default) \
|
|
|
are used to compute the aggregate statistics that are applied to each event. \p\\
|
|
|
The 'window' option specifies the window size, based on number of events, to \
|
|
|
use in computing the statistics. If set to 0, the default, all previous events \
|
|
|
and the current event are used. If the 'global' option is set to false \
|
|
|
(default is true) and 'window' is set to a non-zero value, a separate window \
|
|
|
is used for each group of values of the group by fields. \p\\
|
|
|
The 'allnum' option has the same effect as for the stats and eventstats \
|
|
|
commands. \p\\
|
|
|
If the reset_on_change option is set to true (default is false), all \
|
|
|
accumulated information is reset (as if no previous events have been seen) \
|
|
|
whenever the group by fields change. Events that do not have all of the \
|
|
|
group by fields are ignored and will not cause a reset. \p\\
|
|
|
The reset_before and reset_after arguments use boolean eval expressions. \
|
|
|
When one of these expressions evaluates to true on an event, either before or after \
|
|
|
(respectively) the streamstats calculation is applied, the \
|
|
|
accumulated information is reset. The reset_after condition may reference \
|
|
|
fields emitted by the streamstats operation itself, whereas the reset_before \
|
|
|
condition may not. When the reset options are combined with the 'window' \
|
|
|
option, the window is also reset (to as if no previous events have been seen) \
|
|
|
whenever the accumulated statistics are reset.\p\\
|
|
|
If 'time_window' is specified, the window size is limited by the range of _time \
|
|
|
values in a window. A maximum number of events in a window still applies for \
|
|
|
a time-based window. The default maximum is set in the max_stream_window \
|
|
|
attribute in the limits.conf file. You can lower the maximum by specifying \
|
|
|
the 'window' option. The time_window option requires \
|
|
|
the input events be sorted in either ascending or descending time order.
|
|
|
example1 = ... | streamstats count
|
|
|
comment1 = For each event, add a count field that represent the number of events seen so far (including that event). For example, 1 for the first event, 2 for the second, and so on.
|
|
|
example2 = ... | streamstats avg(foo) window=5
|
|
|
comment2 = For each event, compute the average of field foo over the last 5 events (including the current event). Similar to doing trendline sma5(foo)
|
|
|
example3 = ... | streamstats count current=f
|
|
|
comment3 = Same as example1, except that the current event is not included in the count
|
|
|
example4 = ... | streamstats avg(foo) by bar window=5 global=f
|
|
|
comment4 = Compute the average value of foo for each value of bar, including only the last 5 events with that value of bar.
|
|
|
usage = public
|
|
|
related = accum, autoregress, delta, eventstats, stats, streamstats, trendline
|
|
|
tags = stats statistics event
|
|
|
category = reporting
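#
# A sketch of the reset and window options described above: restart the
# running count whenever host changes, over at most 10 events (field name illustrative):
#   ... | streamstats reset_on_change=true window=10 count by host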
|
|
|
|
|
|
|
|
|
##################
|
|
|
# eventstats
|
|
|
##################
|
|
|
[eventstats-command]
|
|
|
syntax = eventstats (allnum=<bool>)? (<stats-agg-term>)* (<by-clause>)?
|
|
|
shortdesc = Adds summary statistics to all search results.
|
|
|
description = Generate summary statistics of all existing fields in your search results and save them as values in new fields. Specify a new field name for the statistics results by using the as argument. If you don't specify a new field name, the default field name is the statistical operator and the field it operated on (for example: stat-operator(field)). Just like the 'stats' command, except that the aggregation results are added inline to each event, and each event receives only the aggregation results that pertain to it. The 'allnum' option has the same meaning as that option in the stats command. See stats-command for detailed descriptions of syntax.
|
|
|
example1 = ... | eventstats avg(duration) as avgdur
|
|
|
comment1 = Compute the overall average duration and add 'avgdur' as a new field to each event where the 'duration' field exists
|
|
|
example2 = ... | eventstats avg(duration) as avgdur by date_hour
|
|
|
comment2 = Same as example1 except that averages are calculated for each distinct value of date_hour and the aggregate value that is added to each event is the aggregate that pertains to the value of date_hour in that event.
|
|
|
usage = public
|
|
|
related = stats
|
|
|
tags = stats statistics event
|
|
|
category = reporting
|
|
|
|
|
|
##################
|
|
|
# stats
|
|
|
##################
|
|
|
|
|
|
[stats-command]
|
|
|
simplesyntax = stats (((c|count|dc|distinct_count|estdc|estdc_error|earliest|latest|avg|stdev|stdevp|var|varp|sum|min|max|mode|median|first|last|percint|list|values|range) "(" <field> ")") (as <field>)? )+ (by <field-list>)? (<dedup_splitvals>)?
|
|
|
syntax = stats <stats-command-arguments>
|
|
|
shortdesc = Provides statistics, grouped optionally by field.
|
|
|
description = Calculate aggregate statistics over the dataset, optionally grouped by a list of fields.\
|
|
|
Aggregate statistics include: \i\\
|
|
|
* count, distinct count \i\\
|
|
|
* mean, median, mode \i\\
|
|
|
* min, max, range, percentiles \i\\
|
|
|
* standard deviation, variance \i\\
|
|
|
* sum \i\\
|
|
|
* earliest and latest occurrence \i\\
|
|
|
* first and last (according to input order into stats command) occurrence \p\\
|
|
|
Similar to SQL aggregation. \
|
|
|
If called without a by-clause, one row is produced, which represents the \
|
|
|
aggregation over the entire incoming result set. If called with a \
|
|
|
by-clause, one row is produced for each distinct value of the by-clause. \
|
|
|
The 'partitions' option, if specified, allows stats to partition the \
|
|
|
input data based on the split-by fields for multithreaded reduce. \
|
|
|
The 'allnum' option, if true (default = false), computes numerical statistics on each \
|
|
|
field if and only if all of the values of that field are numerical. \
|
|
|
The 'delim' option is used to specify how the values in the 'list' or 'values' aggregation are delimited. (default is a single space)\
|
|
|
When called with the name "prestats", it will produce intermediate results (internal).
|
|
|
note = When called without any arguments, stats assumes the argument "default(*)".\
|
|
|
This produces a table with the cross-product of aggregator and field as columns,\
|
|
|
and a single row with the value of that aggregator applied to that field across all data.
|
|
|
example1 = sourcetype=access* | stats avg(kbps) by host
|
|
|
example2 = sourcetype=access* | top limit=100 referer_domain | stats sum(count)
|
|
|
commentcheat1 = Remove duplicates of results with the same "host" value and return the total count of the remaining results.
|
|
|
examplecheat1 = ... | stats distinct_count(host)
|
|
|
commentcheat2 = Return the average for each hour, of any unique field that ends with the string "lay" (for example, delay, xdelay, relay, etc).
|
|
|
examplecheat2 = ... | stats avg(*lay) BY date_hour
|
|
|
commentcheat3 = Search the access logs, and return the number of hits from the top 100 values of "referer_domain".
|
|
|
examplecheat3 = sourcetype=access_combined | top limit=100 referer_domain | stats sum(count)
|
|
|
category = reporting
|
|
|
usage = public
|
|
|
supports-multivalue = true
|
|
|
related = eventstats, rare, sistats, streamstats, top
|
|
|
tags = stats statistics event sparkline count dc mean avg stdev var min max mode median
|
|
|
|
|
|
[stats-command-arguments]
|
|
|
syntax = (partitions=<num>)? (allnum=<bool>)? (delim=<string>)? (<stats-agg-term> | <sparkline-agg-term>)* (<by-clause>)? (<dedup_splitvals>)?
|
|
|
description = See stats-command description.
|
|
|
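# Editor's illustrative example (not from the original file); 'uri' and 'host' are hypothetical field names.
|
|
|
example1 = ... | stats allnum=f delim="," list(uri) as uris by host
|
|
|
comment1 = List all 'uri' values seen for each host, delimiting the collected values with "," instead of the default single space.
|
|
|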
|
|
|
[sparkline-agg-term]
|
|
|
syntax = <sparkline-agg> (as <wc-field>)?
|
|
|
description = A sparkline specifier optionally renamed to a new field name.
|
|
|
example1 = sparkline(count(user))
|
|
|
example2 = sparkline(dc(device)) AS numdevices
|
|
|
|
|
|
[sparkline-agg]
|
|
|
syntax = sparkline "(" count ("," <span-length> )? ")" | sparkline "(" <sparkline-func> "(" <wc-field> ")" ( "," <span-length> )? ")"
|
|
|
description = A sparkline specifier, which takes as its first argument an aggregation function on a field, \
|
|
|
optionally followed by a timespan specifier. If no timespan specifier is used, an appropriate \
|
|
|
timespan is chosen based on the time range of the search. If the sparkline is not scoped to a field, \
|
|
|
only the count aggregator is permitted.
|
|
|
example1 = sparkline(count)
|
|
|
example2 = sparkline(count(source))
|
|
|
example3 = sparkline(dc(source)) by sourcetype
|
|
|
example4 = sparkline(dc(source),5m) by sourcetype
|
|
|
|
|
|
[sparkline-func]
|
|
|
syntax = c|count|dc|mean|avg|stdev|stdevp|var|varp|sum|sumsq|min|max|range
|
|
|
description = Aggregation function to use to generate sparkline values. Each sparkline value is produced by applying \
|
|
|
this aggregation to the events that fall into each particular time bucket
|
|
|
|
|
|
[mrollup-aggregate-func]
|
|
|
syntax = avg|count|max|median|min|perc|sum
|
|
|
description = Aggregation functions supported by mrollup command.
|
|
|
|
|
|
[stats-agg-term]
|
|
|
syntax = <stats-agg> (as <wc-field>)?
|
|
|
description = A statistical specifier optionally renamed to a new field name.
|
|
|
example1 = avg(kbps)
|
|
|
example2 = count(device) AS numdevices
|
|
|
|
|
|
[stats-agg]
|
|
|
syntax = <stats-func>( "(" ( <evaled-field> | <wc-field> )? ")" )?
|
|
|
description = A specifier formed by an aggregation function applied to a field or set of fields. \
|
|
|
As of 4.0, it can also be an aggregation function applied to an arbitrary eval expression. \
|
|
|
The eval expression must be wrapped by "eval(" and ")". \
|
|
|
If no field is specified in the parentheses, \
|
|
|
the aggregation is applied independently to all fields, \
|
|
|
and is equivalent to specifying a field value of '*'. \
|
|
|
When a numeric aggregator is applied to a not-completely-numeric field, \
|
|
|
no column is generated for that aggregation.
|
|
|
example1 = avg(kbps)
|
|
|
example2 = max(size)
|
|
|
comment3 = applies to both delay and xdelay
|
|
|
example3 = stdev(*delay)
|
|
|
example4 = count(eval(sourcetype="splunkd"))
|
|
|
comment4 = Count of events where sourcetype has the value "splunkd". The "eval(" must immediately follow the aggregator's "(".
|
|
|
|
|
|
[evaled-field]
|
|
|
syntax = eval "("<eval-expression>")"
|
|
|
description = A dynamically evaled field
|
|
|
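# Editor's illustrative example (not from the original file); 'status' is a hypothetical field.
|
|
|
example1 = eval(status>=500)
|
|
|
comment1 = Used inside an aggregator, for example count(eval(status>=500)), to count events with a server-error status.
|
|
|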
|
|
|
[stats-func]
|
|
|
syntax = <stats-c>|<stats-dc>|<stats-mean>|<stats-stdev>|<stats-var>|<stats-sum>|<stats-sumsq>|<stats-min>|<stats-max>|<stats-mode>|<stats-median>|<stats-earliest>|<stats-first>|<stats-last>|<stats-latest>|<stats-perc>|<stats-list>|<stats-values>|<stats-range>|<stats-estdc>|<stats-estdc-error>|<stats-earliest-time>|<stats-latest-time>|<stats-rate>
|
|
|
description = Statistical aggregators.
|
|
|
|
|
|
[stats-estdc]
|
|
|
syntax = estdc
|
|
|
description = The estimated count of the distinct values of the field.
|
|
|
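# Editor's illustrative example (not from the original file); 'clientip' is a hypothetical field.
|
|
|
example1 = estdc(clientip)
|
|
|
comment1 = Approximate number of distinct clientip values; typically cheaper than distinct_count on high-cardinality fields.
|
|
|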
|
|
|
[stats-estdc-error]
|
|
|
syntax = estdc_error
|
|
|
description = The theoretical error of the estimated count of the distinct values of the field, where the error represents a ratio of abs(estimate_value - real_value)/real_value
|
|
|
|
|
|
[stats-c]
|
|
|
syntax = c|count
|
|
|
simplesyntax = count
|
|
|
description = The count of the occurrences of the field.
|
|
|
|
|
|
[stats-dc]
|
|
|
syntax = dc|distinct_count
|
|
|
simplesyntax = distinct_count
|
|
|
description = The count of distinct values of the field.
|
|
|
|
|
|
[stats-mean]
|
|
|
syntax = mean|avg
|
|
|
simplesyntax = avg
|
|
|
description = The arithmetic mean of the field.
|
|
|
|
|
|
[stats-stdev]
|
|
|
syntax = stdev|stdevp
|
|
|
description = The {sample, population} standard deviation of the field.
|
|
|
|
|
|
[stats-var]
|
|
|
syntax = var|varp
|
|
|
description = The {sample, population} variance of the field.
|
|
|
|
|
|
[stats-sum]
|
|
|
syntax = sum
|
|
|
description = The sum of the values of the field.
|
|
|
|
|
|
[stats-sumsq]
|
|
|
syntax = sumsq
|
|
|
description = The sum of the square of the values of the field.
|
|
|
|
|
|
[stats-min]
|
|
|
syntax = min
|
|
|
description = The minimum value of the field (lexicographic, if non-numeric).
|
|
|
|
|
|
[stats-max]
|
|
|
syntax = max
|
|
|
description = The maximum value of the field (lexicographic, if non-numeric).
|
|
|
|
|
|
[stats-range]
|
|
|
syntax = range
|
|
|
description = The difference between max and min (numeric fields only).
|
|
|
|
|
|
[stats-mode]
|
|
|
syntax = mode
|
|
|
description = The most frequent value of the field.
|
|
|
|
|
|
[stats-median]
|
|
|
syntax = median
|
|
|
description = The middle-most value of the field.
|
|
|
|
|
|
[stats-earliest]
|
|
|
syntax = earliest
|
|
|
description = Returns the chronologically earliest seen occurrence of a value of the field.
|
|
|
|
|
|
[stats-earliest-time]
|
|
|
syntax = earliest_time
|
|
|
description = Returns the epoch time of the chronologically earliest seen occurrence of a value of the field.
|
|
|
|
|
|
[stats-first]
|
|
|
syntax = first
|
|
|
description = The first seen value of the field.
|
|
|
note = Because events are generally searched in reverse time order, the first seen value of the field is \
|
|
|
typically the chronologically most recent instance of the field.
|
|
|
|
|
|
[stats-last]
|
|
|
syntax = last
|
|
|
description = The last seen value of the field.
|
|
|
|
|
|
[stats-latest]
|
|
|
syntax = latest
|
|
|
description = Returns the chronologically latest seen occurrence of a value of the field.
|
|
|
|
|
|
[stats-latest-time]
|
|
|
syntax = latest_time
|
|
|
description = Returns epoch time of the chronologically latest seen occurrence of a value of the field.
|
|
|
|
|
|
[stats-rate]
|
|
|
syntax = rate
|
|
|
description = Returns the per-second rate of change of the value of the field. Requires the earliest and latest values of the field to be numerical and the earliest_time and latest_time of the field to be different.
|
|
|
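# Editor's illustrative example (not from the original file); 'bytes' is a hypothetical numeric counter field.
|
|
|
example1 = ... | stats rate(bytes) by host
|
|
|
comment1 = Per-second rate of change of the 'bytes' counter for each host.
|
|
|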
|
|
|
[stats-perc]
|
|
|
syntax = (perc|p|exactperc|upperperc)<num>
|
|
|
simplesyntax = perc<num>
|
|
|
description = The n-th percentile value of this field. perc<num>, p<num>, and upperperc<num> give approximate values for the integer percentile requested. The approximation algorithm used provides a strict bound on the actual value for any percentile. perc<num> and p<num> return a single number that represents the lower end of that range, while upperperc<num> gives the approximate upper bound. exactperc<num> provides the exact value, but can be very expensive for high-cardinality fields.
|
|
|
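# Editor's illustrative examples (not from the original file); 'response_time' is a hypothetical field.
|
|
|
example1 = perc95(response_time)
|
|
|
comment1 = Approximate 95th percentile of response_time; p95(response_time) is equivalent, and upperperc95(response_time) gives the approximate upper bound.
|
|
|
example2 = exactperc99(response_time)
|
|
|
comment2 = Exact 99th percentile; precise, but potentially expensive on high-cardinality fields.
|
|
|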
|
|
|
[stats-list]
|
|
|
syntax = list
|
|
|
description = List of all values of this field as a multi-value entry. Order of values reflects order of input events.
|
|
|
|
|
|
[stats-values]
|
|
|
syntax = values
|
|
|
description = List of all distinct values of this field as a multi-value entry. Order of values is lexicographical.
|
|
|
|
|
|
[by-clause]
|
|
|
syntax = by <field-list>
|
|
|
description = Fields to group by.
|
|
|
example1 = BY host
|
|
|
example2 = BY addr, port
|
|
|
|
|
|
[dedup_splitvals]
|
|
|
syntax = dedup_splitvals=<bool>
|
|
|
description = Changes the default behavior of the command to count each unique \
|
|
|
value of multivalued fields only once for a given event (when set to true). \
|
|
|
This argument applies to the stats, chart, and timechart commands. \
|
|
|
Defaults to false.
|
|
|
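# Editor's illustrative example (not from the original file); 'tags' is a hypothetical multivalued field.
|
|
|
example1 = ... | stats count by tags dedup_splitvals=true
|
|
|
comment1 = Count each distinct value of the multivalued 'tags' field at most once per event.
|
|
|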
|
|
|
##################
|
|
|
# strcat
|
|
|
##################
|
|
|
[strcat-command]
|
|
|
syntax = strcat (allrequired=<bool>)? <srcfields> <field>
|
|
|
shortdesc = Concatenates string values.
|
|
|
description = Stitch together fields and/or strings to create a new field. \
|
|
|
Quoted tokens are assumed to be literals and the rest are field names. \
|
|
|
The destination field name is always at the end. \
|
|
|
If allrequired=t, for each event the destination field is only \
|
|
|
written to if all source fields exist. If allrequired=f (default) \
|
|
|
the destination field is always written and any source fields \
|
|
|
that do not exist are treated as empty strings.
|
|
|
comment1 = Add a field, address, which combines the host and port values into the format <host>::<port>.
|
|
|
example1 = ... | strcat host "::" port address
|
|
|
comment2 = Add the field, comboIP, and then create a chart of the number of occurrences of the field values.
|
|
|
example2 = host="mailserver" | strcat sourceIP "/" destIP comboIP | chart count by comboIP
|
|
|
commentcheat = Add the field, comboIP, which combines the source and destination IP addresses and separates them with a front slash character.
|
|
|
examplecheat = ... | strcat sourceIP "/" destIP comboIP
|
|
|
category = fields::add
|
|
|
usage = public
|
|
|
tags = strcat concat string append
|
|
|
related = eval
|
|
|
|
|
|
[srcfields]
|
|
|
syntax = (<field>|<double-quoted-string>) (<field>|<double-quoted-string>) (<field>|<double-quoted-string> )*
|
|
|
description = Fields should be either field names or quoted literals.
|
|
|
|
|
|
##################
|
|
|
# streamedcsv
|
|
|
##################
|
|
|
[streamedcsv-command]
|
|
|
syntax = streamedcsv (chunk=<int>)? <string>
|
|
|
description = Internal command to test dispatch.
|
|
|
example1 = | streamedcsv
|
|
|
usage = internal
|
|
|
|
|
|
#########################
|
|
|
# summary indexing stats
|
|
|
#########################
|
|
|
[sistats-command]
|
|
|
syntax = sistats <stats-command-arguments>
|
|
|
shortdesc = Summary indexing friendly version of the stats command.
|
|
|
description = Summary indexing friendly version of the stats command, using the same syntax. Does not require you to know in advance which statistics must be stored in the summary index in order to generate a report.
|
|
|
example1 = ... | sistats avg(foo) by bar
|
|
|
comment1 = Compute the necessary information to later do 'stats avg(foo) by bar' on summary indexed results
|
|
|
usage = public
|
|
|
tags = stats summary index summaryindex
|
|
|
related = collect, overlap, sichart, sirare, sitop, sitimechart
|
|
|
category = index::summary
|
|
|
|
|
|
#########################
|
|
|
# summary indexing top
|
|
|
#########################
|
|
|
[sitop-command]
|
|
|
syntax = sitop <top-command-arguments>
|
|
|
shortdesc = Summary indexing friendly version of the top command.
|
|
|
description = Summary indexing friendly version of the top command, using the same syntax. Does not require you to know in advance which statistics must be stored in the summary index in order to generate a report.
|
|
|
example1 = ... | sitop foo bar
|
|
|
comment1 = Compute the necessary information to later do 'top foo bar' on summary indexed results.
|
|
|
usage = public
|
|
|
tags = top summary index summaryindex
|
|
|
related = collect, overlap, sichart, sirare, sistats, sitimechart
|
|
|
category = index::summary
|
|
|
|
|
|
#########################
|
|
|
# summary indexing rare
|
|
|
#########################
|
|
|
[sirare-command]
|
|
|
syntax = sirare <rare-command-arguments>
|
|
|
shortdesc = Summary indexing friendly version of the rare command.
|
|
|
description = Summary indexing friendly version of the rare command, using the same syntax. Does not require you to know in advance which statistics must be stored in the summary index in order to generate a report.
|
|
|
example1 = ... | sirare foo bar
|
|
|
comment1 = Compute the necessary information to later do 'rare foo bar' on summary indexed results.
|
|
|
usage = public
|
|
|
tags = rare summary index summaryindex
|
|
|
related = collect, overlap, sichart, sistats, sitimechart, sitop
|
|
|
category = index::summary
|
|
|
|
|
|
#########################
|
|
|
# summary indexing chart
|
|
|
#########################
|
|
|
[sichart-command]
|
|
|
syntax = sichart <chart-command-arguments>
|
|
|
shortdesc = Summary indexing friendly version of the chart command.
|
|
|
description = Summary indexing friendly version of the chart command, using the same syntax. Does not require you to know in advance which statistics must be stored in the summary index in order to generate a report.
|
|
|
example1 = ... | sichart avg(foo) by bar
|
|
|
comment1 = Compute the necessary information to later do 'chart avg(foo) by bar' on summary indexed results.
|
|
|
usage = public
|
|
|
tags = chart summary index summaryindex
|
|
|
related = collect, overlap, sirare, sistats, sitimechart, sitop
|
|
|
category = index::summary
|
|
|
|
|
|
#############################
|
|
|
# summary indexing timechart
|
|
|
#############################
|
|
|
[sitimechart-command]
|
|
|
syntax = sitimechart <timechart-command-arguments>
|
|
|
shortdesc = Summary indexing friendly version of the timechart command.
|
|
|
description = Summary indexing friendly version of the timechart command, using the same syntax. Does not require you to know in advance which statistics must be stored in the summary index in order to generate a report.
|
|
|
example1 = ... | sitimechart avg(foo) by bar
|
|
|
comment1 = Compute the necessary information to later do 'timechart avg(foo) by bar' on summary indexed results.
|
|
|
usage = public
|
|
|
tags = timechart summary index summaryindex
|
|
|
related = collect, overlap, sichart, sirare, sistats, sitop
|
|
|
category = index::summary
|
|
|
|
|
|
##################
|
|
|
# tags
|
|
|
##################
|
|
|
[tags-command]
|
|
|
syntax = tags (outputfield=<field>)? (inclname=<bool>)? (inclvalue=<bool>)? (allowed_tags=<string>)? (<field>)*
|
|
|
shortdesc = Annotates specified fields in your search results with tags.
|
|
|
description = Annotates the search results with tags. If there are specified \
|
|
|
fields, the command annotates tags only for those fields. Otherwise, it looks \
|
|
|
for tags for all fields. If 'outputfield' is specified, the command writes \
|
|
|
the tags for all fields to this field. Otherwise, it writes the tags for \
|
|
|
each field to a field named tag::<field>. If you specify 'outputfield', \
|
|
|
'inclname' and 'inclvalue' control whether the field name and field values \
|
|
|
are added to the output field. By default only the tag itself is written to \
|
|
|
the outputfield. E.g.: (<field>::)?(<value>::)?tag \
|
|
|
If 'allowed_tags' is specified, the command returns only the tags in the \
|
|
|
'allowed_tags' argument. You can specify multiple tags in the 'allowed_tags' \
|
|
|
argument as a comma-separated, double-quoted string.
|
|
|
example1 = ... | tags host eventtype
|
|
|
comment1 = write tags for host and eventtype fields into tag::host and tag::eventtype
|
|
|
example2 = ... | tags outputfield=test
|
|
|
comment2 = write new field test that contains tags for all fields
|
|
|
example3 = ... | tags outputfield=test inclname=t host sourcetype
|
|
|
comment3 = write tags for host and sourcetype into field test in the format host::<tag> or sourcetype::<tag>
|
|
|
example4 = ... | tags outputfield=test inclname=t allowed_tags=error host
|
|
|
comment4 = write the "error" tag for the host field into the field test in the format host::<tag>
|
|
|
example5 = ... | tags outputfield=test inclname=t allowed_tags="error,group" host
|
|
|
comment5 = write the "error" and "group" tags for the host field into the field test in the format host::<tag>
|
|
|
usage = public
|
|
|
tags = tags
|
|
|
related = eval
|
|
|
category = fields::add
|
|
|
|
|
|
##################
|
|
|
# trendline
|
|
|
##################
|
|
|
[trendline-command]
|
|
|
syntax = trendline (<trend_type>"("<field>")" (as <field>)?)+
|
|
|
shortdesc = Computes the moving averages of fields.
|
|
|
description = Computes the moving averages of fields. Currently supported trend_types include \
|
|
|
simple moving average (sma), exponential moving average (ema), and weighted moving average (wma). \
|
|
|
The output is written to a new field whose name can be specified explicitly, \
|
|
|
or which defaults to the trend_type concatenated with the field name.
|
|
|
example1 = ... | trendline sma5(foo) AS smoothed_foo ema10(bar)
|
|
|
comment1 = Computes a 5-event simple moving average for field 'foo' and writes the result to the new field 'smoothed_foo'. \
|
|
|
Also computes a 10-event exponential moving average for field 'bar'; because no AS clause is \
|
|
|
specified, the result is written to the field 'ema10(bar)'.
|
|
|
usage = public
|
|
|
category = reporting
|
|
|
related = accum, autoregress, delta, streamstats
|
|
|
tags = average mean
|
|
|
|
|
|
[trend_type]
|
|
|
syntax = (sma|ema|wma)<int>
|
|
|
description = The type of trend to compute, which consists of a trend type and a trend period (an integer between 2 and 10000).
|
|
|
example1 = sma10
|
|
|
|
|
|
|
|
|
##################
|
|
|
# timechart
|
|
|
##################
|
|
|
|
|
|
[timechart-command]
|
|
|
syntax = timechart <timechart-command-arguments>
|
|
|
shortdesc = Creates a time series chart with corresponding table of statistics.
|
|
|
description = Creates a chart for a statistical aggregation applied to a field against time. When \
|
|
|
the data is split by a field, each distinct value of this split-by field is a series. \
|
|
|
If used with an eval-expression, the split-by-clause is required. \p\\
|
|
|
When a where clause is not provided, you can use limit and agg options to specify \
|
|
|
series filtering. If limit=0, there is no series filtering. \p\\
|
|
|
When specifying multiple data series with a split-by-clause, you can use sep and \
|
|
|
format options to construct output field names.\p\\
|
|
|
When called without any bin-options, timechart defaults to bins=300. This finds \
|
|
|
the smallest bucket size that results in no more than three hundred distinct buckets.
|
|
|
example1 = ... | timechart span=5m avg(delay) by host
|
|
|
example2 = sourcetype=access_combined | timechart span=1m count(_raw) by product_id usenull=f
|
|
|
example3 = sshd failed OR failure | timechart span=1m count(eventtype) by source_ip usenull=f where count>10
|
|
|
commentcheat1 = Graph the average "thruput" of hosts over time.
|
|
|
examplecheat1 = ... | timechart span=5m avg(thruput) by host
|
|
|
commentcheat2 = Create a timechart of average "cpu_seconds" by "host", and remove data (outlying values) that may distort the timechart's axis.
|
|
|
examplecheat2 = ... | timechart avg(cpu_seconds) by host | outlier action=tf
|
|
|
commentcheat3 = Calculate the average value of "CPU" each minute for each "host".
|
|
|
examplecheat3 = ... | timechart span=1m avg(CPU) by host
|
|
|
commentcheat4 = Create a timechart of the count of events from "web" sources by "host"
|
|
|
examplecheat4 = ... | timechart count by host
|
|
|
commentcheat5 = Create a timechart of the count by host for the top 50 hosts (by total count)
|
|
|
examplecheat5 = ... | timechart count by host limit=50
|
|
|
commentcheat6 = Create a timechart of the count by host for the bottom 10 hosts (by minimum count in a time span)
|
|
|
examplecheat6 = ... | timechart count by host agg=min limit=bottom10
|
|
|
#there's a functional test that expects this exact example as the last example so don't add any after this
|
|
|
commentcheat7 = Compute the product of the average "CPU" and average "MEM" each minute for each "host"
|
|
|
examplecheat7 = ... | timechart span=1m eval(avg(CPU) * avg(MEM)) by host
|
|
|
category = reporting
|
|
|
usage = public
|
|
|
supports-multivalue = true
|
|
|
related = bucket, chart, sitimechart
|
|
|
tags = chart graph report count dc mean avg stdev var min max mode median per_second per_minute per_hour per_day
|
|
|
|
|
|
[timechart-command-arguments]
|
|
|
|
|
|
syntax = (sep=<string>)? (format=<string>)? (fixedrange=<bool>)? (partial=<bool>)? (cont=<bool>)? (limit=<chart-limit-opt>)? (<stats-agg-term>)? (<bin-options> )* ( <single-agg> | <timechart-single-agg> | ( "(" <eval-expression> ")" ) )+ (by <split-by-clause>)? (<dedup_splitvals>)?
|
|
|
description = See timechart-command description.
|
|
|
|
|
|
[sep]
|
|
|
syntax = sep=<string>
|
|
|
description = Specify the separator to use for output field names when multiple data series are \
|
|
|
used along with a split-by field.
|
|
|
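# Editor's illustrative example (not from the original file).
|
|
|
example1 = sep=":"
|
|
|
comment1 = With avg(delay) split by host, output field names take the form avg(delay):<host value>.
|
|
|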
|
|
|
[format]
|
|
|
syntax = format=<string>
|
|
|
description = Specify a parameterized expression with $AGG$ and $VAL$ to construct the output \
|
|
|
field names when multiple data series are used along with a split-by field. Replaces \
|
|
|
$AGG$ with the stats aggregation function and $VAL$ with the value of the \
|
|
|
split-by field. Takes precedence over sep.
|
|
|
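# Editor's illustrative example (not from the original file).
|
|
|
example1 = format=$VAL$-$AGG$
|
|
|
comment1 = With avg(delay) split by host, produces output field names of the form <host value>-avg(delay).
|
|
|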
|
|
|
[partial]
|
|
|
syntax = partial=<bool>
|
|
|
description = Controls if partial time buckets should be retained (true) or not (false). \
|
|
|
Only the first and last bucket can be partial. Defaults to true.
|
|
|
|
|
|
[single-agg]
|
|
|
syntax = count|c|<stats-func>"("<field>|<evaled-field>")"
|
|
|
simplesyntax = count|<stats-func>(<field>)
|
|
|
description = A single aggregation applied to a single field (can be evaled field). No wildcards are allowed. \
|
|
|
The field must be specified, except when using the special 'count' aggregator that applies to events as a whole.
|
|
|
note = C and COUNT are explicitly specified because they may be called without a field. \
|
|
|
In this mode, the result is the count of results overall, or in the by-clause if specified.
|
|
|
example1 = count
|
|
|
example2 = avg(delay)
|
|
|
example3 = sum(eval(date_hour * date_minute))
|
|
|
|
|
|
[timechart-single-agg]
|
|
|
syntax = (per_second|per_minute|per_hour|per_day) "(" <field>|<evaled-field> ")"
|
|
|
description = Same as single-agg except that additional per_* functions are allowed for computing rates over time
|
|
|
example1 = per_second(drop_count)
|
|
|
|
|
|
[split-by-clause]
|
|
|
syntax = <field> (<tc-option> )* (<where-clause>)?
|
|
|
description = Specifies a field to split by. If field is numerical, default discretization is applied.
|
|
|
note = Discretization is specified by the tc-option.
|
|
|
|
|
|
[tc-option]
|
|
|
syntax = <bin-options>|(usenull=<bool>)|(useother=<bool>)|(nullstr=<string>)|(otherstr=<string>)
|
|
|
description = Timechart options for controlling the behavior of splitting by a field. \
|
|
|
See the bin command for details about the <bin-options>. \
|
|
|
In addition to the <bin-options>: \
|
|
|
The usenull option controls whether or not a series is \
|
|
|
created for events that do not contain the split-by field. \
|
|
|
This series is labeled by the value of the nullstr option, and defaults to NULL. \
|
|
|
The useother option specifies if a series should be added for data series not \
|
|
|
included in the graph because they did not meet the criteria of the <where-clause>. \
|
|
|
This series is labeled by the value of the otherstr option, and defaults to OTHER.
|
|
|
example1 = bins=10
|
|
|
example2 = usenull=f
|
|
|
example3 = otherstr=OTHERFIELDS
|
|
|
|
|
|
[snap-to-time]
|
|
|
syntax = (+|-)?(<time_integer>)?<relative_time_unit>@<snap_to_time_unit>
|
|
|
description = In addition to the standard bin-options, the timechart command includes another \
|
|
|
bin-option called <snap-to-time>. The <snap-to-time> is a span of each bin, based \
|
|
|
on a relative time unit and a snap to time unit. The <snap-to-time> must include a \
|
|
|
relative_time_unit, the @ symbol, and a snap_to_time_unit. The offset, represented \
|
|
|
by the plus (+) or minus (-) is optional. If the <time_integer> is not specified, \
|
|
|
1 is the default. For example if you specify w as the relative_time_unit, \
|
|
|
1 week is assumed. This option is used only with week time units. It cannot be \
|
|
|
used with other time units such as minutes or quarters.
|
|
|
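# Editor's illustrative example (not from the original file).
|
|
|
example1 = 1w@w
|
|
|
comment1 = A one-week span where each bin is snapped to the beginning of the week.
|
|
|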
|
|
|
[where-clause]
|
|
|
syntax = where <single-agg> <where-comp>
|
|
|
description = Specifies the criteria for including particular data series when a field is given in the tc-by-clause. \
|
|
|
If omitted, this optional clause defaults to "where sum in top10". \
|
|
|
The aggregation term is applied to each data series and the result of \
|
|
|
these aggregations is compared to the criteria. \
|
|
|
The most common use of this option is to select for spikes rather than overall \
|
|
|
mass of distribution in series selection. The default value finds the \
|
|
|
top ten series by area under the curve. Alternatively, one could replace sum with \
|
|
|
max to find the series with the ten highest spikes.
|
|
|
note = This has no relation to the where command or to SQL's WHERE clause.
|
|
|
example1 = where sum in top5
|
|
|
example2 = where count notin bottom10
|
|
|
example3 = where avg > 100
|
|
|
example4 = where max < 10
|
|
|
|
|
|
[where-comp]
|
|
|
syntax = <wherein-comp>|<wherethresh-comp>
|
|
|
description = A criteria for the where clause.
|
|
|
|
|
|
[wherein-comp]
|
|
|
syntax = (in|notin) (top|bottom)<int>
|
|
|
description = A where-clause criteria that requires the aggregated series value be in or not in some top or bottom grouping.
|
|
|
example1 = in top5
|
|
|
example2 = in bottom10
|
|
|
example3 = notin top2
|
|
|
|
|
|
[wherethresh-comp]
|
|
|
syntax = ("<"|">")( )?<num>
|
|
|
description = A where-clause criteria that requires the aggregated series value be greater than or less than some numeric threshold.
|
|
|
example1 = > 2.5
|
|
|
example2 = < 100
|
|
|
|
|
|
##################
|
|
|
# timewrap
|
|
|
##################
|
|
|
[timewrap-command]
|
|
|
syntax = timewrap <timewrap-span> (align=(now|end))? (series=(relative|exact|short))? (time_format=<string>)?
|
|
|
shortdesc = Displays the output of timechart so that every period of time is a different series.
|
|
|
description = Displays, or wraps, the output of timechart so that every period of time is \
|
|
|
a different series. Use the timewrap command to compare data over a specific \
|
|
|
time period, such as day-over-day or month-over-month. You can also compare \
|
|
|
multiple time periods, such as a two-week period over another \
|
|
|
two-week period. The <timewrap-span> can be any length of time, including \
|
|
|
weeks and quarters. You must use the timechart command in the search before \
|
|
|
you use the timewrap command.
|
|
|
comment1 = Display a timechart that has a span of 1 day for each count in a week over \
|
|
|
week comparison table. Each table column, which is the series, is 1 week of time.
|
|
|
example1 = ... | timechart count span=1d | timewrap 1week
|
|
|
usage = public
|
|
|
tags = timechart
|
|
|
category = reporting
|
|
|
related = timechart
|
|
|
|
|
|
[timewrap-span]
|
|
|
syntax = (<int>)?<timewrap-timescale>
|
|
|
description = A span of each bin, based on time. If <int> is not specified, 1 is assumed. \
|
|
|
For example, if day is specified for the <timescale>, 1day is assumed.
|
|
|
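# Editor's illustrative example (not from the original file).
|
|
|
example1 = 2week
|
|
|
comment1 = Wrap the timechart results into two-week periods.
|
|
|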
|
|
|
[timewrap-timescale]
|
|
|
syntax = sec|min|hr|day|week|month|quarter|year
|
|
|
description = Time scale units. You can use abbreviations for the units. See the \
|
|
|
documentation for the timewrap command in the Search Reference.
|
|
|
|
|
|
[align]
|
|
|
syntax = now|end
|
|
|
description = Specifies if the wrapping should be aligned to the current time or the end \
|
|
|
time of the search. Defaults to end.
|
|
|
|
|
|
[series]
|
|
|
syntax = relative|exact|short
|
|
|
description = Specifies how the data series are named. If series=relative and \
|
|
|
<timewrap-span> is set to week, the series names look like "latest_week", \
|
|
|
"1week_before", "2weeks_before", and so forth. If series=exact, use the time_format \
|
|
|
argument to specify a custom format for the series names. \
|
|
|
Defaults to relative.
|
|
|
|
|
|
[time-format]
|
|
|
syntax = time_format=<string>
|
|
|
description = Use with series=exact to specify a custom name for the series.
|
|
|
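# Editor's illustrative example (not from the original file); assumes an strftime-style format string.
|
|
|
example1 = series=exact time_format="week of %d/%m/%y"
|
|
|
comment1 = Name each series with the given format instead of the relative names.
|
|
|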
|
|
|
##################
|
|
|
# top
|
|
|
##################
|
|
|
|
|
|
[top-command]
|
|
|
syntax = top <top-command-arguments>
|
|
|
alias = common
|
|
|
shortdesc = Displays the most common values of a field.
|
|
|
description = Finds the most frequent tuple of values of all fields in the field list, along with a count and percentage.\
|
|
|
If the optional by-clause is provided, finds the most frequent values \
|
|
|
for each distinct tuple of values of the group-by fields.
|
|
|
comment1 = Return top URL values.
|
|
|
example1 = ... | top url
|
|
|
comment2 = Return top "user" values for each "host".
|
|
|
example2 = ... | top user by host
|
|
|
commentcheat = Return the 20 most common values of the "url" field.
|
|
|
examplecheat = ... | top limit=20 url
|
|
|
category = reporting
|
|
|
usage = public
|
|
|
supports-multivalue = true
|
|
|
related = rare, sitop, stats
|
|
|
tags = top popular common many frequent typical
|
|
|
|
|
|
[top-command-arguments]
|
|
|
syntax = (<int>)? (<top-opt>)* <field-list> (<by-clause>)?
|
|
|
description = See top-command description.
|
|
|
|
|
|
[top-opt]
|
|
|
syntax = (showcount=<bool>)|(showperc=<bool>)|(limit=<int>)|(countfield=<string>)|(percentfield=<string>)|(useother=<bool>)|(otherstr=<string>)
|
|
|
description = Top arguments:\
|
|
|
showcount: Whether to create a field called "count" (see countfield option) with the count of that tuple. (T) \
|
|
|
showperc: Whether to create a field called "percent" (see percentfield option) with the relative prevalence of that tuple. (T) \
|
|
|
limit: Specifies how many tuples to return, 0 returns all values. (10) \
|
|
|
countfield: Name of new field to write count to (default is "count") \
|
|
|
percentfield: Name of new field to write percentage to (default is "percent") \
|
|
|
useother: If true, adds a row, if necessary, to represent all values not included \
|
|
|
due to the limit cutoff. (default is false) \
|
|
|
otherstr: If useother is true, the value that is written into the row representing \
|
|
|
all other values (default is "OTHER")
|
|
|
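# Editor's illustrative example (not from the original file); 'url' is a hypothetical field.
|
|
|
example1 = ... | top limit=5 useother=t otherstr="ALL OTHERS" url
|
|
|
comment1 = Show the 5 most common url values, plus one row labeled "ALL OTHERS" that aggregates the remaining values.
|
|
|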
|
|
|
##################
|
|
|
# tscollect
|
|
|
##################
|
|
|
[tscollect-command]
|
|
|
syntax = tscollect (namespace=<string>)? (squashcase=<bool>)? (keepresults=<bool>)?
|
|
|
shortdesc = Writes the result table into *.tsidx files using indexed fields format.
|
|
|
description = Writes the result table into *.tsidx files, for later use by tstats command. \
|
|
|
Only non-internal fields and values are written to the tsidx files. \
|
|
|
squashcase is false by default; if true, the field *values* are converted \
|
|
|
to lowercase when writing them to the *.tsidx files. If namespace is provided, \
|
|
|
the tsidx files are written to a directory of that name under the main tsidx \
|
|
|
stats directory. These namespaces can be written to multiple times to add new \
|
|
|
data. If namespace is not provided, the files are written to a directory within \
|
|
|
the job directory of that search, and will live as long as the job does. \
|
|
|
If keepresults is set to true, tscollect will output the same results it received \
|
|
|
as input. By default this is false, and only emits a count of results processed (this \
|
|
|
is more efficient as we do not need to store as many results). \
|
|
|
The 'indexes_edit' capability is required to run this command.
|
|
|
related = tstats
|
|
|
tags = tscollect tsidx projection
|
|
|
usage = internal
|
|
|
category = reporting
|
|
|
example1 = ... | tscollect namespace=foo
|
|
|
comment1 = Write the results table to tsidx files in namespace foo.
|
|
|
example2 = index=main | fields foo | tscollect
|
|
|
comment2 = Write the values of field foo for the events in the main index to tsidx files in the job directory.
|
|
|
|
|
|
##################
|
|
|
# mstats
|
|
|
##################
|
|
|
[mstats-command]
|
|
|
syntax = mstats (chart=<bool>)? (chart.limit=<chart-limit-opt>)? (chart.agg=<stats-agg-term>)? (chart.usenull=<bool>)? (chart.useother=<bool>)? (chart.nullstr=<string>)? (chart.otherstr=<string>)? (prestats=<bool>)? (append=<bool>)? (backfill=<bool>)? (update_period=<int>)? (fillnull_value=<string>)? (chunk_size=<int>)? (((<stats-func>|<mstats-specific-func>)"(" <metric_name> ")" (as <string>)?)+|(<stats-func-value>)+) (WHERE (<logical-expression>)*)? ((BY|GROUPBY) <field-list> (span=<string:timespan> (every=<string:timespan>)? )? )?
|
|
|
shortdesc = Performs statistics on the measurement, metric_name and dimension fields in metric indexes. Supports historical and real-time search.
|
|
|
description = Performs statistics on the measurement, metric_name, and \
|
|
|
dimension fields in metric indexes. The mstats command is \
|
|
|
optimized for searches over one or more metric_name values, \
|
|
|
rather than searches over all metric_name values. It supports \
|
|
|
both historical and real-time searches. For a real-time search \
|
|
|
with a time window, mstats runs a historical search first that \
|
|
|
backfills the data.\p\\
|
|
|
The mstats command is a generating command, except when it is \
|
|
|
in 'append=t' mode. As such, it must be the first command in a \
|
|
|
search.\p\\
|
|
|
In 'chart=t' mode, the output is formatted in a format suitable for \
|
|
|
charting, similar to the chart and timechart commands. \
|
|
|
Charting mode is not compatible with append mode or prestats mode.\p\\
|
|
|
If the <stats-func> based syntax is used, the filter specified after \
|
|
|
the WHERE clause cannot filter on metric_name. Any metric_name \
|
|
|
filtering is performed based on the metric_name fields specified \
|
|
|
by the <stats-func> argument. If the <stats-func-value> syntax \
|
|
|
is used, the WHERE clause *must* filter on metric_name (wildcards are ok). \
|
|
|
It is recommended to use the <stats-func> syntax when possible. \
|
|
|
The <stats-func-value> syntax is needed for cases where a single metric may be \
|
|
|
represented by several different metric names (e.g. "cpu.util" and "cpu.utilization"). \p\\
|
|
|
You cannot blend the <stats-func> syntax with the <stats-func-value> syntax in a single mstats command. \p\\
|
|
|
Arguments: \p\\
|
|
|
"<stats-func>": A list of stats functions to compute for given \
|
|
|
metric_names. These are written as \
|
|
|
<function1>(metric_name1) <function2>(metric_name2) ... \p\\
|
|
|
"<stats-func-value>": A list of stats functions to compute on \
|
|
|
metric values (_value). These are written as \
|
|
|
<function1>(_value) <function2>(_value) ... \p\\
|
|
|
"<logical-expression>": An expression describing the filters \
|
|
|
that are applied to your search. Includes time and \
|
|
|
search modifiers, comparison expressions, and index \
|
|
|
expressions. This expression cannot filter on \
|
|
|
metric_name if the <stats-func> syntax is used, but \
|
|
|
must filter on metric_name if the <stats-func-value> \
|
|
|
syntax is used.\p\\
|
|
|
"<field-list>": Specifies one or more fields to group the \
|
|
|
results by. Required when using the 'BY' or \
|
|
|
'GROUPBY' clause. \p\\
|
|
|
"prestats": Returns the results in prestats format. You can pipe \
|
|
|
the results into commands that consume the prestats \
|
|
|
formatted data, such as chart or timechart, and \
|
|
|
output aggregate calculations. This is useful for \
|
|
|
creating graphs. Default is prestats=false. \p\\
|
|
|
"append": Valid only when "prestats=true". This argument adds \
|
|
|
the results of the mstats run to an existing set of \
|
|
|
results instead of generating new results. Default is \
|
|
|
"append=false". \p\\
|
|
|
"chart": Valid only when "prestats=false". When set to "true", \
|
|
|
the mstats command emits output in a format similar to \
|
|
|
chart (when no span given) and timechart (when span is given). \
|
|
|
When no span is provided, chart mode requires specifying \
|
|
|
one or two grouping fields, the first of which specifies \
|
|
|
the x-axis field, and the latter the series split field. \
|
|
|
When a span is provided, chart mode supports at most one \
|
|
|
groupby field, which would be used as the series splitting \
|
|
|
field. Default is "chart=false". \p\\
|
|
|
The options that start with "chart." are only valid in charting mode (chart=true). \p\\
|
|
|
"chart.limit": Limits for the number of series \
|
|
|
(columns) generated. Same behavior as the chart/timechart \
|
|
|
limit option. 0 means no limit. Optionally prefixed by "top" or \
|
|
|
"bottom" to determine which series to select. A number without a prefix \
|
|
|
means the same thing as having "top" as the prefix. Default is 10. \p\\
|
|
|
"chart.agg": Specifies the aggregation \
|
|
|
functions used to select which series to show. Same behavior \
|
|
|
as the chart/timechart agg option. Default is "sum". \p\\
|
|
|
chart.("usenull"/"nullstr"/"useother"/"otherstr"): Same behavior as \
|
|
|
chart/timechart options of the same name. \p\\
|
|
|
"backfill": Valid only with windowed real-time searches. When \
|
|
|
set to "true", the mstats command runs a historical \
|
|
|
search to backfill the on-disk indexed data before \
|
|
|
searching the in-memory real-time data. Default is \
|
|
|
"backfill=true".\p\\
|
|
|
"update_period": Valid only with real-time searches. Specifies \
|
|
|
how frequently, in milliseconds, the real-time \
|
|
|
summary for the mstats command is updated. By \
|
|
|
default, update_period=0, which is 1 second. A \
|
|
|
larger number means less frequent reads of the \
|
|
|
summary and less impact on index processing.\p\\
|
|
|
"fillnull_value": This argument sets a user-specified value \
|
|
|
that the mstats command substitutes for null values \
|
|
|
for any field within its group-by field list. Null \
|
|
|
values include field values that are missing from \
|
|
|
a subset of the returned events as well as field \
|
|
|
values that are missing from all of the returned \
|
|
|
events. If you do not provide a 'fillnull_value' \
|
|
|
argument, the mstats command omits rows for events \
|
|
|
with one or more null field values from its search \
|
|
|
results. \p\\
|
|
|
"chunk_size": Advanced option. This argument controls how many timeseries \
|
|
|
are retrieved at a time within a single metric TSIDX file when \
|
|
|
answering queries. The default is 10000000. Only consider \
|
|
|
supplying a lower value for this if you find a particular \
|
|
|
query is using too much memory. The case that could cause \
|
|
|
this would be an excessively high cardinality split-by, such \
|
|
|
as grouping by several fields that have a very large number of \
|
|
|
distinct values. Setting this value too low, however, can \
|
|
|
negatively impact the overall runtime of your query. If set \
|
|
|
below 10000, the value will be defaulted to 10000000. \p\\
|
|
|
"every": This argument controls how often the Splunk software computes \
|
|
|
an aggregation point using the aggregation window specified \
|
|
|
by the 'span' argument. When you use 'every' along with 'span', \
|
|
|
you can skip values by searching discrete time intervals. \
|
|
|
The 'every' argument is valid only when 'span' is set to a \
|
|
|
valid value other than 'auto'. Set 'every' to a valid timespan \
|
|
|
that is greater than the 'span' timespan. \p\\
|
|
|
related = tstats
|
|
|
tags = mstats metric tsidx projection
|
|
|
category = reporting
|
|
|
example1 = | mstats count(foo) WHERE index=mymetrics
|
|
|
comment1 = Get the count of all measurements in the "mymetrics" index where \
|
|
|
the metric name is "foo".
|
|
|
example2 = | mstats count(foo) avg(foo) WHERE index=mymetrics
|
|
|
comment2 = Get the count and average of all measurements in the "mymetrics" \
|
|
|
index where the metric_name value is "foo".
|
|
|
example3 = | mstats count(foo) avg(bar) WHERE index=mymetrics
|
|
|
comment3 = Get the count of all measurements in the "mymetrics" index where \
|
|
|
the metric_name value is "foo" and the average of all measurements \
|
|
|
in the "mymetrics" index where the metric_name value is "bar".
|
|
|
example4 = | mstats count(foo) WHERE index=mymetrics bar=value2
|
|
|
comment4 = Return the count of all measurements in mymetrics, where bar is \
|
|
|
"value2" and metric_name is "foo".
|
|
|
example5 = | mstats count(foo) WHERE host=x BY app
|
|
|
comment5 = Get the count by app for data points with host "x" and \
|
|
|
metric_name "foo" from the default metrics indexes.
|
|
|
example6 = | mstats prestats=t count(foo) span=1d | timechart span=1d count(foo)
|
|
|
comment6 = Get a timechart of all the data in your metrics indexes for \
|
|
|
metric_name "foo" with a granularity of one day.
|
|
|
example7 = | mstats median(foo)
|
|
|
comment7 = Get the median of all measurements from the default metrics indexes, \
|
|
|
where metric name is "foo".
|
|
|
example8 = | mstats count(_value) WHERE metric_name=foo AND index=mymetrics
|
|
|
comment8 = Alternative syntax for getting the count of all measurements in \
|
|
|
the "mymetrics" index for a metric_name called "foo".
|
|
|
example9 = | mstats avg(_value) WHERE metric_name=cpu.util OR metric_name=cpu.utilization AND index=mymetrics
|
|
|
comment9 = Get the average of all values of a metric named either "cpu.util" or "cpu.utilization".
|
|
|
example10 = | mstats count(foo) WHERE index=mymetrics span=1m every=1h
|
|
|
comment10 = Get the count of all measurements for the first minute of each hour in the "mymetrics" index, where \
|
|
|
the metric name is "foo".
|
|
|
usage = public
|
|
|
|
|
|
[metric_name]
|
|
|
syntax = [a-zA-Z][a-zA-Z0-9:._]+
|
|
|
description = The name of a thing being measured. Metric names cannot begin \
|
|
|
with numbers or underscores, cannot include spaces, and cannot \
|
|
|
include the reserved word "metric_name."
|
|
|
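# Editor's illustrative example (not from the original file).
|
|
|
example1 = cpu.utilization
|
|
|
comment1 = A valid metric name: begins with a letter and contains only letters, digits, colons, periods, and underscores.
|
|
|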
|
|
|
[stats-func-value]
|
|
|
syntax = (<stats-func> "(_value)" (as <string>)?)+
|
|
|
description = A list of statistical aggregation functions to perform for all\
|
|
|
measurements found for the metrics named in the WHERE clause.
|
|
|
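# Editor's illustrative example (not from the original file); 'avg_util' is a hypothetical output name.
|
|
|
example1 = avg(_value) as avg_util
|
|
|
comment1 = Average of all measurement values for the metric names selected by the WHERE clause.
|
|
|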
|
|
|
[mstats-specific-func]
|
|
|
syntax = <mstats-rate-avg>|<mstats-rate-sum>
|
|
|
description = Statistical aggregators specific for mstats.
|
|
|
|
|
|
[mstats-rate-avg]
|
|
|
syntax = rate_avg
|
|
|
description = Returns the average across the individual time series rates. Requires at least one rate value of the field to calculate the rate average. Valid for mstats only.
|
|
|
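# Editor's illustrative example (not from the original file); 'io.bytes' is a hypothetical metric name.
|
|
|
example1 = | mstats rate_avg(io.bytes) WHERE index=mymetrics span=1m
|
|
|
comment1 = Average of the per-series rates of the io.bytes metric; rate_sum is used the same way to sum them.
|
|
|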
|
|
|
[mstats-rate-sum]
|
|
|
syntax = rate_sum
|
|
|
description = Returns the sum across the individual time series rates. Requires at least one rate value of the field to calculate the rate sum. Valid for mstats only.
|
|
|
|
|
|
##################
|
|
|
# tstats
|
|
|
##################
|
|
|
[tstats-command]
|
|
|
syntax = tstats (prestats=<bool>)? (local=<bool>)? (append=<bool>)? (summariesonly=<bool>)? (include_reduced_buckets=<bool>)? (allow_old_summaries=<bool>)? (use_summary_index_values=<bool>)? (chunk_size=<int>)? (fillnull_value=<string>)? (<stats-func> ("(" ((PREFIX"(" <field> ")") | <field>) ")")? (as <string>)?)+ (FROM <string:namespace> | sid=<string:tscollect-job-id> | datamodel=<string:datamodel-name>)? (WHERE <logical-expression>)? ((by|GROUPBY) (<field> | (PREFIX"(" <field>")" ))+ (span=<string:timespan>)? )?
|
|
|
shortdesc = Performs statistics on indexed fields in tsidx files, which could come from normal index data, tscollect data, or accelerated datamodels.
|
|
|
description = Performs statistical queries on indexed fields in tsidx files. You can select from TSIDX data in several different ways: \p\\
|
|
|
1. Normal index data: If you do not supply a FROM clause, we will select from index data in the same way as search. You are restricted to selecting \
|
|
|
from your allowed indexes by role, and you can control exactly which indexes you select from in the WHERE clause. If no indexes are mentioned \
|
|
|
in the WHERE clause search, we will use your default set of indexes. By default, role-based search filters are applied, but can be turned off in limits.conf. \
|
|
|
ADVANCED: When the Splunk software indexes data, it segments each event into raw tokens using rules specified in segmenters.conf. \
|
|
|
You may end up with raw tokens that are actually key-value pairs separated by an arbitrary delimiter such as a '=' \
|
|
|
symbol. The following search uses the 'walklex' command to find the raw tokens in your index, along with their count: \
|
|
|
'| walklex index=<target-index> | where NOT like(term, "%::%") | stats sum(count) by term' \
|
|
|
You can use the PREFIX directive in conjunction with 'tstats' to aggregate and group-by these values within the raw \
|
|
|
tokens in your index. For example, say you have a set of raw tokens that feature a numeric value prefixed by 'kbps=', \
|
|
|
such as kbps=10 or kbps=333. If you run <stats-func>(PREFIX(kbps)) on those tokens, the processor returns '=10' and \
|
|
|
'=333', which isn't exactly what you want, because many tstats aggregation functions require purely numeric values. \
|
|
|
So you adjust your search to include the delimiter and run <stats-func>(PREFIX(kbps=)) on those tokens. This returns \
|
|
|
values of '10' and '333', which are perfect for tstats aggregation functions. \p\\
|
|
|
2. Data manually collected with 'tscollect': Select from your namespace with 'FROM <namespace>'. If you supplied no namespace to tscollect, the data \
|
|
|
was collected into the dispatch directory of that job. In that case, you would select from that data with 'FROM sid=<tscollect-job-id>' \p\\
|
|
|
3. An accelerated datamodel: Select from this accelerated datamodel with 'FROM datamodel=<datamodel-name>' \
|
|
|
You can provide any number of aggregates to perform, and also have the option of providing a filtering query using the WHERE keyword. This query looks \
|
|
|
like a normal query you would use in the search processor. You can also provide any number of GROUPBY fields. If you are grouping by _time, you should \
|
|
|
supply a timespan with 'span' for grouping the time buckets. This timespan looks like any normal timespan in Splunk, like '1hr' or '3d'. It also supports 'auto'. \p\\
|
|
|
Arguments: \i\\
|
|
|
"prestats": This simply outputs the answer in prestats format, in case you want to pipe the results to a \i\\
|
|
|
different type of processor that takes prestats output, like chart or timechart. This is very useful for \i\\
|
|
|
creating graphs \i\\
|
|
|
"local": If you set this to true it forces the processor to only be run on the search head. \i\\
|
|
|
"append": Only valid in prestats mode, this allows tstats to be run to add results to an existing set of \i\\
|
|
|
results, instead of generating them. \i\\
|
|
|
"summariesonly": Only applies when selecting from an accelerated datamodel. When false (default), \i\\
|
|
|
Splunk will generate results from both summarized data, as well as for data that is not \i\\
|
|
|
summarized. For data not summarized as TSIDX data, the full search behavior will be used \i\\
|
|
|
against the original index data. If set to true, 'tstats' will only generate results from the \i\\
|
|
|
TSIDX data that has been automatically generated by the acceleration, and nonsummarized data \i\\
|
|
|
will not be provided. \i\\
|
|
|
"include_reduced_buckets": Only applies when TSIDX reduction is enabled by setting enableTsidxReduction = true in indexes.conf. \i\\
|
|
|
When include_reduced_buckets = false, Splunk generates results only from buckets that are not reduced. \i\\
|
|
|
The default for this setting is "false". \i\\
|
|
|
"allow_old_summaries": Only applies when selecting from an accelerated datamodel. When false \i\\
|
|
|
(default), Splunk only provides results from summary directories when those directories are up-to-date. \i\\
|
|
|
In other words, if the datamodel definition has changed, we do not use those summary directories \i\\
|
|
|
which are older than the new definition when producing output from tstats. This default ensures \i\\
|
|
|
that the output from tstats will always reflect your current configuration. If this is instead \i\\
|
|
|
set to true, then tstats will use both current summary data as well as summary data that was \i\\
|
|
|
generated prior to the definition change. Essentially this is an advanced performance \i\\
|
|
|
feature for cases where you know that the old summaries are "good enough". \i\\
|
|
|
"use_summary_index_values": When this argument is set to 'false' (default), 'tstats' interprets events in summary index \i\\
|
|
|
buckets that contain prestats-prefixed fields as literal fields. When this argument is set to 'true', \i\\
|
|
|
'tstats' treats those prestats-prefixed fields as partial aggregates (which is another way to refer \i\\
|
|
|
to prestats data). The 'tstats' command interprets these partial aggregates in a manner similar to \i\\
|
|
|
the way that the 'stats' command processes partial aggregates. The 'tstats' command then does a \i\\
|
|
|
fallback search for the buckets where the prestats fields are found, calling an equivalent 'stats' \i\\
|
|
|
search only for those buckets. Enable this setting when you want 'tstats' to return the same results \i\\
|
|
|
that an equivalent 'stats' search would return, such that it interprets values encoded as partial \i\\
|
|
|
aggregates. When 'use_summary_index_values=true', 'tstats' searches might perform slower, but their \i\\
|
|
|
result sets will have parity with the result sets of corresponding 'stats' searches. When this \i\\
|
|
|
argument is set to 'false', 'tstats' simply looks at the 'tsidx' file and does not perform additional \i\\
|
|
|
interpretation of partial aggregates. \p\\
|
|
|
NOTE: The 'replace_*_with_tstats' family of optimizers (such as 'replace_stats_cmds_with_tstats'), \i\\
|
|
|
automatically sets this argument to 'true' on a corresponding 'tstats' search so that the optimized \i\\
|
|
|
search is functionally equivalent to the original. \p\\
|
|
|
"chunk_size": Advanced option. This argument controls how many events are retrieved at a time within \i\\
|
|
|
a single TSIDX file when answering queries. The default is 10000000. Only consider supplying a lower \i\\
|
|
|
value for this if you find a particular query is using too much memory. The case that could cause this \i\\
|
|
|
would be an excessively high cardinality split-by, such as grouping by several fields that have a very \i\\
|
|
|
large amount of distinct values. Setting this value too low, however, can negatively impact the overall \i\\
|
|
|
runtime of your query. If set below 10000, the value will be defaulted to 10000000. \i\\
|
|
|
"fillnull_value": This argument sets a user-specified value that the tstats command substitutes for null values for \i\\
|
|
|
any field within its group-by field list. Null values include field values that are missing from \i\\
|
|
|
a subset of the returned events as well as field values that are missing from all of the returned \i\\
|
|
|
events. If you do not provide a 'fillnull_value' argument, the tstats command omits rows for events \i\\
|
|
|
with one or more null field values from its search results. \p\\
|
|
|
NOTE: Except in 'append=t' mode, this is a generating processor, so it must be the first command in a search.
|
|
|
related = tscollect
|
|
|
tags = tstats tsidx projection
|
|
|
category = reporting
|
|
|
example1 = | tstats count FROM mydata
|
|
|
comment1 = Gets the count of all events in the mydata namespace
|
|
|
example2 = | tstats avg(foo) from mydata where bar=value2 baz>5
|
|
|
comment2 = Returns the average of field foo in mydata where bar is specifically 'value2' and the value of baz is greater than 5.
|
|
|
example3 = | tstats count where host=x by source
|
|
|
comment3 = Gives the count by source for events with host=x
|
|
|
example4 = | tstats prestats=t count by _time span=1d | timechart span=1d count
|
|
|
comment4 = Gives a timechart of all the data in your default indexes with a day granularity
|
|
|
example5 = | tstats median(foo) from mydata
|
|
|
comment5 = Gives the median of field foo from mydata
|
|
|
example6 = | tstats prestats=t median(foo) from mydata | tstats prestats=t append=t median(bar) from otherdata | stats median(foo) median(bar)
|
|
|
comment6 = Uses prestats mode in conjunction with append to compute the median values of foo and bar, which are in different namespaces
|
|
|
example7 = | tstats count avg(PREFIX(kbps=)) where index=_internal by source PREFIX(group=)
|
|
|
comment7 = Gets the count, and the average of the non-indexed term with prefix 'kbps=', split by the indexed term 'source' and by the non-indexed term with prefix 'group='.
|
|
|
example8 = | tstats use_summary_index_values=true count where index=summary_index by source
|
|
|
comment8 = Gets the total count including interpreting the partial results stored in the summary index and split by the 'source' key.
|
|
|
usage = public
|
|
|
|
|
|
##################
|
|
|
# transaction
|
|
|
##################
|
|
|
[transaction-command]
|
|
|
syntax = transaction (<field-list>)? (name=<transaction-name>)? (<txn_definition-opt>)* (<memcontrol-opt>)* (<rendering-opt>)*
|
|
|
alias = transam
|
|
|
shortdesc = Groups events into transactions.
|
|
|
description = Groups events into transactions based on various constraints, such as the beginning \
|
|
|
and ending strings or time between events. Transactions are made up of the raw text \
|
|
|
(the _raw field) of each member, the time and date fields of the earliest member, as \
|
|
|
well as the union of all other fields of each member.\p\\
|
|
|
Adds two fields, duration and eventcount, to the raw events. The duration value \
|
|
|
is the difference between the timestamps for the first and last events in the \
|
|
|
transaction. The eventcount value is the number of events in the transaction.
|
|
|
|
|
|
comment1 = Collapse all events that share the same host and cookie value, occur within 30 seconds, and do not have a \
|
|
|
pause of more than 5 seconds between the events.
|
|
|
example1 = ... | transaction host,cookie maxspan=30s maxpause=5s
|
|
|
commentcheat = Group search results that have the same "host" and "cookie", occur within 30 seconds of each other, and do not have a pause greater than 5 seconds between each event into a transaction.
|
|
|
examplecheat = ... | transaction host cookie maxspan=30s maxpause=5s
|
|
|
comment2 = Group search results that share the same value of "from", with a maximum span of 30 seconds, and a pause between events no greater than 5 seconds into a transaction.
|
|
|
example2 = ... | transaction from maxspan=30s maxpause=5s
|
|
|
category = results::group
|
|
|
usage = public
|
|
|
supports-multivalue = true
|
|
|
tags = transaction group cluster collect gather
|
|
|
related = searchtxn
|
|
|
|
|
|
[transaction-name]
|
|
|
syntax = <string>
|
|
|
description = The name of a transaction definition from transactions.conf to be used for finding transactions. \
|
|
|
If other arguments (e.g., maxpause) are provided as arguments to transam, they overrule the value \
|
|
|
specified in the transaction definition.
|
|
|
default =
|
|
|
example1 = purchase_transaction
|
|
|
|
|
|
|
|
|
[txn_definition-opt]
|
|
|
syntax = <maxpause-opt> | <maxspan-opt> | <maxevents-opt> | <field-list> | <start-opt> | <end-opt> | <connected-opt> | <unify-ends-opt> | <keeporphans-opt>
|
|
|
description = One of the options that define how events are grouped into transactions.
|
|
|
|
|
|
[maxpause-opt]
|
|
|
syntax = maxpause=<int>(s|m|h|d)?
|
|
|
description = The maxpause constraint requires there to be no pause between a transaction's events greater than maxpause. \
|
|
|
If the value is negative, the maxpause constraint is disabled.
|
|
|
default = maxpause=-1 (no limit)
|
|
|
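# Editor's illustrative example (not from the original file).
|
|
|
example1 = maxpause=5s
|
|
|
comment1 = Allow at most 5 seconds between consecutive events in a transaction.
|
|
|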
|
|
|
[maxspan-opt]
|
|
|
syntax = maxspan=<int>(s|m|h|d)?
|
|
|
description = The maxspan constraint requires the transaction's events to span less than maxspan. \
|
|
|
If the value is negative, the maxspan constraint is disabled.
|
|
|
default = maxspan=-1 (no limit)
|
|
|
|
|
|
[maxevents-opt]
|
|
|
syntax = maxevents=<int>
|
|
|
description = The maximum number of events in a transaction. If the value is negative this constraint is disabled.
|
|
|
default = maxevents=1000
|
|
|
|
|
|
[fields-opt]
|
|
|
syntax = fields=<string>? (,<string>)*
|
|
|
description = DEPRECATED: The preferred usage of transaction is for list of fields to be specified directly as arguments. E.g. 'transaction foo bar' rather than 'transaction fields="foo,bar"' \
|
|
|
The 'fields' constraint takes a list of fields. For search results to be members of a transaction, for each \
|
|
|
field specified, if they have a value, it must have the same value as other members in that transaction. \
|
|
|
For example, a search result that has host=mylaptop can never be in the same transaction as a search result \
|
|
|
that has host=myserver, if host is one of the constraints. A search result that does not have a host value, \
|
|
|
however, can be in a transaction with another search result that has host=mylaptop, because they are not inconsistent.
|
|
|
example1 = fields=host,cookie
|
|
|
default =
|
|
|
|
|
|
[start-opt]
|
|
|
syntax = startswith=<transam-filter-string>
|
|
|
description = A search or eval filtering expression which, if satisfied by an event, marks the beginning of a new transaction
|
|
|
example1 = startswith="login"
|
|
|
example2 = startswith=(username=foobar)
|
|
|
example3 = startswith=eval(speed_field < max_speed_field)
|
|
|
example4 = startswith=eval(speed_field < max_speed_field/12)
|
|
|
default =
|
|
|
|
|
|
[end-opt]
|
|
|
syntax = endswith=<transam-filter-string>
|
|
|
description = A search or eval expression which, if satisfied by an event, marks the end of a transaction
|
|
|
example1 = endswith="logout"
|
|
|
example2 = endswith=(username=foobar)
|
|
|
example3 = endswith=eval(speed_field > max_speed_field)
|
|
|
example4 = endswith=eval(speed_field > max_speed_field/12)
|
|
|
default =
|
|
|
|
|
|
[transam-filter-string]
|
|
|
syntax = <transam-filter-search-noquotes> | <transam-filter-search-quotes> | <transam-filter-eval>
|
|
|
description = Alternatives for the search or eval expression strings used with the <transam-filter-string> argument
|
|
|
|
|
|
|
|
|
[transam-filter-search-noquotes]
|
|
|
syntax = "<logical-expression>"
|
|
|
description = Where <logical-expression> is a valid search expression that does not contain quotation marks inside the expression. The <logical-expression> must be surrounded by quotation marks.
|
|
|
example1 = "user=mildred"
|
|
|
default =
|
|
|
|
|
|
[transam-filter-search-quotes]
|
|
|
syntax = (<logical-expression>)
|
|
|
description = Where <logical-expression> is a valid search expression. The <logical-expression> can contain quotes as part of the expression. The <logical-expression> must be enclosed in parentheses.
|
|
|
example1 = (name="foo bar")
|
|
|
example2 = (username=foobar)
|
|
|
example3 = ("search literal")
|
|
|
default =
|
|
|
|
|
|
[transam-filter-eval]
|
|
|
syntax = eval(<eval-expression>)
|
|
|
description = Where <eval-expression> is a valid eval expression that evaluates to a boolean.
|
|
|
example1 = eval(distance/time < max_speed)
|
|
|
default =
|
|
|
|
|
|
[connected-opt]
|
|
|
syntax = connected=<bool>
|
|
|
description = Relevant only if fields is not empty. Controls whether an event that is neither consistent nor inconsistent\
|
|
|
with the fields of a transaction opens a new transaction (connected=t) or is added to the transaction. \
|
|
|
An event can be neither consistent nor inconsistent if it contains fields required by the transaction \
|
|
|
but none of those fields has yet been instantiated in the transaction (by a previous event addition).
|
|
|
default = connected=t
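# Illustrative example; the value follows the syntax above.
example1 = connected=f
comment1 = Add events that are neither consistent nor inconsistent with a transaction to that transaction rather than opening a new one.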
|
|
|
|
|
|
[keeporphans-opt]
|
|
|
syntax = keeporphans=<bool>
|
|
|
description = Whether the transaction command should output the results that are not part of any transactions. \
|
|
|
The results that are passed through as "orphans" can be distinguished from transactions by looking \
|
|
|
at the _txn_orphan field, which is set to 1 for orphan results.
|
|
|
default = keeporphans=f
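# Illustrative example; the value follows the syntax above.
example1 = keeporphans=t
comment1 = Also output results that belong to no transaction; they are marked with _txn_orphan=1.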
|
|
|
|
|
|
[unify-ends-opt]
|
|
|
syntax = unifyends=<bool>
|
|
|
description = Whether to force events that match the startswith/endswith constraints to also match at least one of the \
|
|
|
fields used to unify events into transactions. Defaults to the same value as the "connected" \
|
|
|
option. This means that if you set connected=false (it is true by default), then unifyends will \
|
|
|
default to false as well.
|
|
|
default = unifyends=t
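# Illustrative example; the value follows the syntax above.
example1 = unifyends=f
comment1 = Do not require startswith/endswith matches to also match one of the unifying fields.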
|
|
|
|
|
|
### memory constraint options ###
|
|
|
[memcontrol-opt]
|
|
|
syntax = <maxopentxn-opt> | <maxopenevents-opt> | <keepevicted-opt>
|
|
|
|
|
|
[maxopentxn-opt]
|
|
|
syntax = maxopentxn=<int>
|
|
|
description = Specifies the maximum number of not yet closed transactions to keep in the open pool before starting \
|
|
|
to evict transactions, using LRU policy.
|
|
|
default = the default value of this field is read from the transactions stanza in limits.conf
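# Illustrative example; the value is arbitrary.
example1 = maxopentxn=5000
comment1 = Keep at most 5000 open transactions before LRU eviction begins.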
|
|
|
|
|
|
[maxopenevents-opt]
|
|
|
syntax = maxopenevents=<int>
|
|
|
description = Specifies the maximum number of events (which are) part of open transactions before transaction \
|
|
|
eviction starts happening, using LRU policy.
|
|
|
default = the default value of this field is read from the transactions stanza in limits.conf
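# Illustrative example; the value is arbitrary.
example1 = maxopenevents=100000
comment1 = Start evicting transactions once open transactions hold 100000 events.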
|
|
|
|
|
|
[keepevicted-opt]
|
|
|
syntax = keepevicted=<bool>
|
|
|
description = Whether to output evicted transactions. Evicted transactions can be distinguished from non-evicted transactions by checking the value of the 'closed_txn' field, which is set to '0' for evicted transactions and '1' for closed ones. A transaction is evicted from memory when the memory limitations are reached.
|
|
|
default = keepevicted=f
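# Illustrative example; the value follows the syntax above.
example1 = keepevicted=t
comment1 = Also output evicted transactions; they carry closed_txn=0.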
|
|
|
### multivalue rendering options ###
|
|
|
[rendering-opt]
|
|
|
syntax = <delim-opt> | <mvlist-opt> | <nullstr-opt> | <mvraw-opt>
|
|
|
|
|
|
[mvlist-opt]
|
|
|
syntax = mvlist=<bool>|<field-list>
|
|
|
description = Flag controlling whether the multivalued fields of the transaction are (1) a list of the original \
|
|
|
events ordered in arrival order or (2) a set of unique field values ordered lexicographically. If a \
|
|
|
comma/space delimited list of fields is provided, only those fields are rendered as lists.
|
|
|
default = mvlist=f
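# Illustrative examples; the field names are hypothetical.
example1 = mvlist=t
comment1 = Render all multivalued fields as lists of the original values in arrival order.
example2 = mvlist=host,source
comment2 = Render only the "host" and "source" fields as lists.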
|
|
|
|
|
|
[delim-opt]
|
|
|
syntax = delim=<string>
|
|
|
description = A string used to delimit the original event values in the transaction event fields.
|
|
|
default = delim=" "
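# Illustrative example; the value follows the syntax above.
example1 = delim=","
comment1 = Delimit the original event values with a comma instead of the default single space.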
|
|
|
|
|
|
[nullstr-opt]
|
|
|
syntax = nullstr=<string>
|
|
|
description = A string value to use when rendering missing field values as part of mv fields in a transaction. \
|
|
|
This option applies only to fields that are rendered as lists.
|
|
|
default = nullstr="NULL"
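# Illustrative example; the value is arbitrary.
example1 = nullstr="N/A"
comment1 = Render missing values in list-rendered mv fields as "N/A" instead of "NULL".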
|
|
|
|
|
|
[mvraw-opt]
|
|
|
syntax = mvraw=<bool>
|
|
|
description = Whether the _raw field of the transaction search result should be a multivalued field
|
|
|
default = mvraw=f
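# Illustrative example; the value follows the syntax above.
example1 = mvraw=t
comment1 = Make the transaction's _raw field a multivalued field.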
|
|
|
|
|
|
##################
|
|
|
# typeahead
|
|
|
#################
|
|
|
[typeahead-command]
|
|
|
syntax = typeahead <prefix-opt> <count-opt> (<max-time-opt>)? (<index-opt>)? (<starttimeu>)? (<endtimeu>)? (<collapse-opt>)?
|
|
|
shortdesc = Returns typeahead on a specified prefix.
|
|
|
description = Returns typeahead on a specified prefix. It returns at most "count" results, and can be targeted to an index and restricted by time. \
|
|
|
If index specifiers are provided, they populate the set of indexes to search when no index specifiers are found in the prefix.
|
|
|
comment1 = Return typeahead information for sources in the "_internal" index.
|
|
|
example1 = | typeahead prefix="index=_internal source=" count=10
|
|
|
usage = public
|
|
|
tags = typeahead help terms
|
|
|
generating = true
|
|
|
category = administrative
|
|
|
|
|
|
[prefix-opt]
|
|
|
syntax = prefix=<string>
|
|
|
description = The full search string to do typeahead on.
|
|
|
example = prefix=source
|
|
|
example1 = prefix="index=_internal war"
|
|
|
|
|
|
[count-opt]
|
|
|
syntax = count=<int>
|
|
|
description = The maximum number of results to return
|
|
|
example = count=10
|
|
|
|
|
|
[max-time-opt]
|
|
|
syntax = max_time=<int>
|
|
|
description = The maximum time in seconds that typeahead can run (0 disables this limit).
|
|
|
example = max_time=3
|
|
|
|
|
|
[index-opt]
|
|
|
syntax = index=<string>
|
|
|
description = Search the specified index instead of the default index.
|
|
|
example = index=_internal
|
|
|
|
|
|
[collapse-opt]
|
|
|
syntax = collapse=<bool>
|
|
|
description = Whether to collapse terms that are a prefix of another term when the event counts are the same.
|
|
|
example = collapse=f
|
|
|
default = t
|
|
|
|
|
|
##################
|
|
|
# typelearner
|
|
|
##################
|
|
|
[typelearner-command]
|
|
|
syntax = typelearner (<grouping-field>)? (<grouping-maxlen>)?
|
|
|
shortdesc = Generates suggested eventtypes. Deprecated: preferred command is 'findtypes'.
|
|
|
description = Takes previous search results, and produces a list of promising searches that may be used as event-types.
|
|
|
commentcheat = Have Splunk automatically discover and apply event types to search results.
|
|
|
examplecheat = ... | typelearner
|
|
|
category = results::group
|
|
|
usage = deprecated
|
|
|
related = findtypes, typer
|
|
|
tags = eventtype typer discover search classify
|
|
|
|
|
|
[grouping-field]
|
|
|
syntax = <field>
|
|
|
description = By default, the typelearner initially groups events by the value of the grouping-field, and then further unifies and merges those groups, based on the keywords they contain. The default grouping field is "punct" (the punctuation seen in _raw).
|
|
|
default = punct
|
|
|
example1 = host
|
|
|
|
|
|
[grouping-maxlen]
|
|
|
syntax = maxlen=<int>
|
|
|
description = Determines how many characters of the grouping-field value to look at. If set to a negative value, the entire grouping-field value is used to initially group events.
|
|
|
default = 15
|
|
|
example1 = maxlen=30
|
|
|
|
|
|
|
|
|
##################
|
|
|
# typer
|
|
|
##################
|
|
|
[typer-command]
|
|
|
syntax = typer (<typer-eventtypes>)? (<typer-maxlen>)?
|
|
|
shortdesc = Calculates the event types for the search results.
|
|
|
description = Calculates the 'eventtype' field for search results that match a \
|
|
|
known event type.
|
|
|
commentcheat = Force Splunk to apply event types that you have configured (Splunk Web automatically does this when you view the "eventtype" field).
|
|
|
examplecheat = ... | typer
|
|
|
category = results::group
|
|
|
usage = public
|
|
|
related = typelearner
|
|
|
tags = eventtype typer discover search classify
|
|
|
|
|
|
[typer-eventtypes]
|
|
|
syntax = eventtypes=<string>
|
|
|
description = Comma-separated list of event types to return in the 'eventtype' \
|
|
|
field. If you provide an empty string, or if you provide a list that does not \
|
|
|
match any valid event types, the typer command is disabled and will not \
|
|
|
return any event types. If you provide an argument that matches valid event \
|
|
|
types, typer returns only the requested event types. The typer command \
|
|
|
accepts wildcards. If you do not provide the argument, the command returns \
|
|
|
all event types.
|
|
|
default = No default; by default, the command returns all event types.
|
|
|
comment1 = Returns the 'info' event type in the 'eventtype' field for events matching the eventtype.
|
|
|
example1 = ... | typer eventtypes="info"
|
|
|
comment2 = Returns event types beginning with the string 'doc' in the 'eventtype' field for events matching the event type.
|
|
|
example2 = ... | typer eventtypes="doc*"
|
|
|
comment3 = Returns both the 'info' and 'group' event types in the 'eventtype' field for events matching those event types.
|
|
|
example3 = ... | typer eventtypes="info,group"
|
|
|
|
|
|
[typer-maxlen]
|
|
|
syntax = maxlen=<int>
|
|
|
description = Restricts the typer command to using only the first N characters \
|
|
|
of any attribute (e.g., _raw), including individual tokens, when it \
|
|
|
determines event types. This parameter overrides the "maxlen" setting in \
|
|
|
limits.conf for the typer command.
|
|
|
default = 10000
|
|
|
comment1 = Restrict the typer to using only the first 50 characters of any token or attribute, including the _raw field.
|
|
|
example1 = ... | typer maxlen=50
|
|
|
|
|
|
#################
|
|
|
# where
|
|
|
#################
|
|
|
|
|
|
[where-command]
|
|
|
syntax = where <eval-expression>
|
|
|
shortdesc = Runs an eval expression to filter the results. The result of the expression must be Boolean.
|
|
|
description = Keeps only the results for which the evaluation was successful and the boolean result was true.
|
|
|
comment1 = Return "physicjobs" events with a speed is greater than 100.
|
|
|
example1 = sourcetype=physicsobjs | where distance/time > 100
|
|
|
comment2 = Return "CheckPoint" events that match the IP or is in the specified subnet.
|
|
|
example2 = host="CheckPoint" | where (src LIKE "10.9.165.%") OR cidrmatch("10.9.165.0/25", dst)
|
|
|
usage = public
|
|
|
tags = where filter search
|
|
|
category = results::filter
|
|
|
related = eval search regex
|
|
|
|
|
|
#################
|
|
|
# highlight
|
|
|
#################
|
|
|
|
|
|
[highlight-command]
|
|
|
syntax = highlight (<string>)+
|
|
|
simplesyntax = highlight (<string>)+
|
|
|
shortdesc = Causes UI to highlight selected strings.
|
|
|
description = Causes each of the space-separated or comma-separated strings provided to be highlighted by the Splunk Web UI. \
|
|
|
These strings are matched case insensitively.
|
|
|
commentcheat = Highlight the terms "login" and "logout".
|
|
|
examplecheat = ... | highlight login,logout
|
|
|
comment2 = Highlight the text sequence "access denied".
|
|
|
example2 = ... | highlight "access denied"
|
|
|
category = formatting
|
|
|
alias = hilite
|
|
|
usage = public
|
|
|
tags = ui search
|
|
|
related = iconify, abstract
|
|
|
|
|
|
##################
|
|
|
# xyseries
|
|
|
##################
|
|
|
|
|
|
[xyseries-command]
|
|
|
syntax = xyseries (grouped=<bool>)? <x-field> <y-name-field> (<y-data-field>)+ (sep=<string>)? (format=<string>)?
|
|
|
shortdesc = Converts results into a format suitable for graphing.
|
|
|
description = Converts results into a format suitable for graphing. If multiple \
|
|
|
y-data-fields are specified, each column name is the \
|
|
|
y-data-field name followed by the sep string (default is ": ") \
|
|
|
and then the value of the y-name-field it applies to. \
|
|
|
If the grouped option is set to true (false by default), \
|
|
|
then the input is assumed to be sorted by the value of the \
|
|
|
<x-field> and multi-file input is allowed.
|
|
|
comment1 = Reformat the search results.
|
|
|
example1 = ... | xyseries delay host_type host
|
|
|
usage = public
|
|
|
alias = maketable
|
|
|
related = untable
|
|
|
category = reporting
|
|
|
tags = convert graph
|
|
|
|
|
|
[x-field]
|
|
|
syntax = <field>
|
|
|
description = Field to be used as the x-axis
|
|
|
|
|
|
[y-name-field]
|
|
|
syntax = <field>
|
|
|
description = Field that contains the values to be used as data series labels
|
|
|
|
|
|
[y-data-field]
|
|
|
syntax = <field>
|
|
|
description = Field that contains the data to be charted
|
|
|
|
|
|
################
|
|
|
# untable
|
|
|
################
|
|
|
[untable-command]
|
|
|
syntax = untable <x-field> <y-name-field> <y-data-field>
|
|
|
shortdesc = Converts results from a tabular format to a format similar to stats output. Inverse of xyseries.
|
|
|
description = Converts results from a tabular format to a format similar to stats output. Inverse of xyseries.
|
|
|
comment1 = Reformat the search results.
|
|
|
example1 = ... | timechart avg(delay) by host | untable _time host avg_delay
|
|
|
usage = public
|
|
|
related = xyseries
|
|
|
category = reporting
|
|
|
tags = convert table
|
|
|
|
|
|
################
|
|
|
# rest
|
|
|
################
|
|
|
[rest-command]
|
|
|
syntax = rest <rest-uri> (count=<int>)? (strict=<bool>)? (<splunk-server-opt>)? (<splunk-server-group-opt>)* (<timeout-opt>)? (<get-arg-name>=<get-arg-value>)*
|
|
|
shortdesc = Reads a REST API endpoint and displays the returned entities as search results.
|
|
|
description = Access a REST endpoint and display the returned entities as search results. \
|
|
|
If 'strict' is set to true, the search fails completely if the \
|
|
|
command raises an error (such as the request of a nonexistent \
|
|
|
endpoint). Defaults to false.
|
|
|
comment1 = Access saved search jobs.
|
|
|
example1 = | rest /services/search/jobs count=0 splunk_server=local | search isSaved=1
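# Additional illustrative example; assumes the standard /services/server/info endpoint.
comment2 = Query server information, failing the entire search on any REST error and waiting up to 120 seconds for a response.
example2 = | rest /services/server/info strict=true timeout=120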
|
|
|
usage = public
|
|
|
category = rest
|
|
|
generating = yes
|
|
|
tags = rest endpoint access
|
|
|
|
|
|
[rest-uri]
|
|
|
syntax = <string>
|
|
|
description = Path to the REST endpoint of the local server. This command cannot be used to access a general URL.
|
|
|
|
|
|
[get-arg-name]
|
|
|
syntax = <string>
|
|
|
description = Optional, HTTP GET argument name
|
|
|
|
|
|
[get-arg-value]
|
|
|
syntax = <string>
|
|
|
description = Optional, HTTP GET argument value
|
|
|
|
|
|
[splunk-server-opt]
|
|
|
syntax = splunk_server=<string>
|
|
|
description = Optional. Limits results to one specific server. Use "local" to refer to the search head.
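# Illustrative example; the value follows the description above.
example = splunk_server=local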
|
|
|
|
|
|
[splunk-server-group-opt]
|
|
|
syntax = splunk_server_group=<string>
|
|
|
description = Optional. Limits results to one specific server group. Repeatable.
|
|
|
|
|
|
[timeout-opt]
|
|
|
syntax = timeout=<int>
|
|
|
description = Optional. Specifies the timeout, in seconds, when waiting for the REST endpoint to respond. Defaults to 60 seconds.
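# Illustrative example; the value is arbitrary.
example = timeout=120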
|
|
|
|
|
|
################
|
|
|
# surrounding
|
|
|
################
|
|
|
|
|
|
[surrounding-command]
|
|
|
syntax = surrounding id=<event-id> timebefore=<int> timeafter=<int> searchkeys=<key-list> <int:maxresults> readlevel=<readlevel-int> <index-specifier>
|
|
|
description = Finds events surrounding the event specified by event-id filtered by the search keys.
|
|
|
example1 = | surrounding id=0:0 timeBefore=3600 timeAfter=3600 searchKeys=source::foo host::bar maxresults::50 readlevel::2 index::default
|
|
|
usage = internal
|
|
|
generating = true
|
|
|
|
|
|
[event-id]
|
|
|
syntax = <int>:<int>
|
|
|
description = A Splunk internal event id.
|
|
|
|
|
|
[key-list]
|
|
|
syntax = (<string> )*
|
|
|
description = A list of keys that are ANDed to provide a filter for the surrounding command.
|
|
|
|
|
|
[readlevel-int]
|
|
|
syntax = 0|1|2|3
|
|
|
description = How deep to read the events; 0 : just source/host/sourcetype, 1 : 0 with _raw, 2 : 1 with kv, 3 : 2 with types (deprecated in 3.2)
|
|
|
|
|
|
###############
|
|
|
# xmlkv
|
|
|
##############
|
|
|
|
|
|
[xmlkv-command]
|
|
|
syntax = xmlkv <maxinputs-opt>
|
|
|
shortdesc = Extracts XML key-value pairs.
|
|
|
description = Finds key-value pairs of the form <foo>bar</foo>, where foo is the key and bar is the value, in the _raw field.
|
|
|
example1 = ... | xmlkv maxinputs=10000
|
|
|
commentcheat = Extract field/value pairs from XML formatted data. "xmlkv" automatically extracts values between XML tags.
|
|
|
examplecheat = ... | xmlkv
|
|
|
category = fields::add
|
|
|
usage = public
|
|
|
related = extract, kvform, multikv, rex, xpath
|
|
|
tags = extract xml
|
|
|
|
|
|
###############
|
|
|
# xmlunescape
|
|
|
###############
|
|
|
|
|
|
[xmlunescape-command]
|
|
|
syntax = xmlunescape <maxinputs-opt>
|
|
|
shortdesc = Un-escapes XML characters.
|
|
|
description = Un-escapes XML entity references (for &amp;, &lt;, and &gt;) back to their corresponding characters (e.g., "&amp;" -> "&").
|
|
|
commentcheat = Un-escape all XML characters.
|
|
|
examplecheat = ... | xmlunescape
|
|
|
category = formatting
|
|
|
usage = public
|
|
|
tags = unescape xml escape
|
|
|
|
|
|
|
|
|
###############
|
|
|
# xpath
|
|
|
###############
|
|
|
|
|
|
[xpath-command]
|
|
|
syntax = xpath <string:xpath> (field=<field>)? (outfield=<field>)? (default=<string>)?
|
|
|
shortdesc = Extracts the xpath value from FIELD and sets the OUTFIELD attribute.
|
|
|
description = Sets the value of OUTFIELD to the value of the XPATH applied to FIELD. If no value could be set, the DEFAULT value is used. FIELD defaults to "_raw"; OUTFIELD, to "xpath"; and DEFAULT, to not setting a default value. The field value is wrapped in "<data>...</data>" tags so that it is valid XML, even if it contains some non-XML content.
|
|
|
comment1 = pull out the name of a book from xml, using the relative path of //book
|
|
|
example1 = sourcetype="books.xml" | xpath "//book/@name" outfield=name
|
|
|
comment2 = pull out the name of a book from xml, using the full path of /data/book
|
|
|
example2 = sourcetype="books.xml" | xpath "/data/book/@name" outfield=name
|
|
|
usage = public
|
|
|
tags = xml extract
|
|
|
category = fields::add
|
|
|
related = extract, kvform, multikv, rex, xmlkv
|
|
|
|
|
|
###############
|
|
|
# iplocation
|
|
|
###############
|
|
|
|
|
|
[iplocation-command]
|
|
|
syntax = iplocation (prefix=<string>)? (allfields=<bool>)? (lang=<string>)? <ip-address-fieldname>
|
|
|
shortdesc = Extracts location information from IP addresses using 3rd-party databases.
|
|
|
description = The IP address in the field named by ip-address-fieldname is looked up in a database, and location \
|
|
|
information fields are added to the event. The fields are City, Continent, Country, MetroCode, \
|
|
|
Region, Timezone, lat (latitude), and lon (longitude). \
|
|
|
Not all of the information is available for all IP address ranges, so it is \
|
|
|
normal to have some of the fields empty. \
|
|
|
The Continent, MetroCode, and Timezone are only added if allfields=true (default is false). \
|
|
|
prefix=string will add a certain prefix to all fieldnames if you desire to uniquely qualify \
|
|
|
added field names and avoid name collisions with existing fields (default is NULL/empty string). \
|
|
|
The lang setting can be used to render strings in alternate languages (for example "lang=es" \
|
|
|
for Spanish). The set of languages depends on the geoip database in use. The special language \
|
|
|
"lang=code" will return fields as abbreviations where possible.
|
|
|
example1 = sourcetype = access_combined_* | iplocation clientip
|
|
|
example2 = sourcetype = access_combined_* | iplocation allfields=true clientip
|
|
|
example3 = sourcetype = access_combined_* | iplocation prefix=iploc_ allfields=true clientip
|
|
|
usage = public
|
|
|
tags = ip location city geocode
|
|
|
commentcheat = Add location information (based on IP address).
|
|
|
examplecheat = ... | iplocation clientip
|
|
|
category = fields::add
|
|
|
|
|
|
[ip-address-fieldname]
|
|
|
syntax = <field>
|
|
|
description = The name of the field that contains the IP address.
|
|
|
|
|
|
################
|
|
|
# rangemap
|
|
|
################
|
|
|
|
|
|
[rangemap-command]
|
|
|
syntax = rangemap field=<field> (<attrn>=<attrn-range>)+ (default=<string>)?
|
|
|
shortdesc = Sets RANGE field to the name of the ranges that match.
|
|
|
description = Sets RANGE field to the names of any ATTRN that the value of FIELD is within. If no range is matched, the RANGE is set to the DEFAULT value.
|
|
|
example1 = ... | rangemap field=date_second green=1-30 blue=31-39 red=40-59 default=gray
|
|
|
comment1 = Set RANGE to "green" if the date_second is between 1-30; "blue", if between 31-39; "red", if between 40-59; and "gray", if no range matches (e.g. "0").
|
|
|
example2 = ... | rangemap field=count low=0-0 elevated=1-100 default=severe
|
|
|
comment2 = Sets the value of each event's RANGE field to "low" if COUNT is 0, "elevated" if between 1-100, and "severe" otherwise.
|
|
|
usage = public
|
|
|
tags = colors stoplight range
|
|
|
category = fields::add
|
|
|
|
|
|
[attrn]
|
|
|
syntax = <string>
|
|
|
description = The value set for each event's RANGE field if the field value falls in the attrn range.
|
|
|
example = low
|
|
|
|
|
|
[attrn-range]
|
|
|
syntax = <num>-<num>
|
|
|
description = The numeric range to match against the value of the specified field.
|
|
|
example = 1-25
|
|
|
|
|
|
################
|
|
|
# rawstats
|
|
|
################
|
|
|
|
|
|
[rawstats-command]
|
|
|
syntax = rawstats
|
|
|
shortdesc = Returns statistics about the raw field.
|
|
|
description = Returns statistics about the raw field that might be useful for filtering/classifying events.
|
|
|
example1 = ... | rawstats | search rawstat_width_avg<30 linecount>30
|
|
|
comment1 = Get long, skinny events.
|
|
|
usage = internal
|
|
|
|
|
|
################
|
|
|
# reltime
|
|
|
################
|
|
|
|
|
|
[reltime-command]
|
|
|
|
|
|
syntax = reltime (timefield=<field-list>)? (prefix=<string>)?
|
|
|
shortdesc = Creates one or more relative time fields and adds them to returned events. The added fields have human-readable time values such as "5 days ago", "1 minute ago", and "2 years ago".
|
|
|
description = Creates one or more relative time fields and adds them to returned events. \
|
|
|
Each relative time field provides a human-readable value of the difference \
|
|
|
between 'now' and a value of "timefield". Human-readable values look like \
|
|
|
"5 days ago", "1 minute ago", and "2 years ago". If no arguments are provided, \
|
|
|
reltime adds a relative time field named 'reltime' to each event. The value \
|
|
|
of 'reltime' in this case is the difference between 'now' and '_time'. \p\\
|
|
|
Arguments: \
|
|
|
"timefield": A field in the event data with a valid timestamp value. You can \
|
|
|
provide multiple time fields as a comma-separated list bounded by double \
|
|
|
quotation marks. If the "timefield" argument specifies only one time field, \
|
|
|
reltime adds a relative time field named 'reltime' to the returned events, \
|
|
|
where the value of 'reltime' is the difference between 'now' and the \
|
|
|
specified time field. When "timefield" specifies multiple time fields, \
|
|
|
reltime creates relative time fields with the "timefield" field names, \
|
|
|
prefixed by the string set by "prefix". \p\\
|
|
|
"prefix": Sets a prefix string for relative time fields when you specify \
|
|
|
multiple time fields for "timefield". When you specify multiple values \
|
|
|
for "timefield" and you do not set a "prefix" string, reltime uses 'reltime_' \
|
|
|
as a default prefix. \p\\
|
|
|
comment1 = add a reltime field
|
|
|
example1 = ... | reltime
|
|
|
comment2 = add a custom reltime field
|
|
|
example2 = ... | reltime timefield=earliest_time
|
|
|
comment3 = add multiple custom relative time fields prefixed with 'reltime_'
|
|
|
example3 = ... | reltime timefield="latest_time,current_time"
|
|
|
comment4 = add multiple custom relative time fields with custom prefix
|
|
|
example4 = ... | reltime timefield="latest_time,current_time" prefix=new_reltime_field_
|
|
|
usage = public beta
|
|
|
tags = time ago
|
|
|
category = formatting
|
|
|
related = convert
|
|
|
|
|
|
################
|
|
|
# scrub
|
|
|
################
|
|
|
|
|
|
[scrub-command]
|
|
|
syntax = scrub (public-terms=<filename>)? (private-terms=<filename>)? (name-terms=<filename>)? (dictionary=<filename>)? (timeconfig=<filename>)? (namespace=<string>)?
|
|
|
shortdesc = Anonymizes the search results.
|
|
|
description = Anonymizes the search results by replacing identifying data - usernames, IP addresses, domain names, etc. - with fictional values that maintain the same word length. For example, it may turn the string user=carol@adalberto.com into user=aname@mycompany.com. This lets Splunk users share log data without revealing confidential or personal information. By default the dictionary and configuration files found in $SPLUNK_HOME/etc/anonymizer are used. These can be overridden by specifying arguments to the scrub command. The arguments exactly correspond to the settings in the stand-alone "splunk anonymize" command, and are documented there. Anonymizes all attributes, except those that start with "_" (except "_raw") or "date_", or the following attributes: "eventtype", "linecount", "punct", "sourcetype", "timeendpos", "timestartpos". \
|
|
|
When using alternative filenames, they must not contain paths and must refer to files located in $SPLUNK_HOME/etc/anonymizer, or the optional namespace="appname" must be used to specify an app supplying the files, in which case they are read from $SPLUNK_HOME/etc/apps/<appname>/anonymizer.
|
|
|
comment1 = Anonymize the current search results.
|
|
|
example1 = ... | scrub
|
|
|
usage = public beta
|
|
|
tags = anonymize scrub secure private obfuscate
|
|
|
category = formatting
|
|
|
|
|
|
##############
|
|
|
# metadata
|
|
|
##############
|
|
|
|
|
|
[metadata-command]
|
|
|
syntax = metadata type=<metadata-type> (<index-opt>)* (splunk_server=<wc-string>)? (splunk_server_group=<wc-string>)* (datatype=(metric|event))?
|
|
|
shortdesc = Returns a list of sources, sourcetypes, or hosts.
|
|
|
description = This search command generates a list of sources, sourcetypes, or hosts from the index. The optional splunk_server argument limits results to one specific server. The optional datatype argument specifies whether to search event indexes or metric indexes. If datatype is not specified, only event indexes are searched.
|
|
|
comment1 = Return the values of "host" for events in the "_internal" index.
|
|
|
example1 = | metadata type=hosts index=_internal
|
|
|
comment2 = Return values of "sourcetype" for events in the "_audit" index on server peer01
|
|
|
example2 = | metadata type=sourcetypes index=_audit splunk_server=peer01
|
|
|
comment3 = Return values of "sourcetype" for events in the "_audit" index on any server name that begins with "peer".
|
|
|
example3 = | metadata type=sourcetypes index=_audit splunk_server=peer*
|
|
|
comment4 = Return the values of "host" for data points in the "mymetrics" index.
|
|
|
example4 = | metadata type=hosts index=mymetrics datatype=metric
|
|
|
comment5 = Return the values of "source" for data points in all metrics indexes.
|
|
|
example5 = | metadata type=sources index=* datatype=metric
|
|
|
usage = public
|
|
|
tags = metadata host source sourcetype metric
|
|
|
category = administrative
|
|
|
related = dbinspect
|
|
|
|
|
|
[metadata-type]
|
|
|
syntax = hosts|sources|sourcetypes
|
|
|
description = Specifies which metadata type is returned
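# Illustrative example; the value follows the syntax above.
example = sourcetypes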
|
|
|
|
|
|
############
|
|
|
# eventcount
|
|
|
############
|
|
|
|
|
|
[eventcount-command]
|
|
|
syntax = eventcount (<index-opt>)* (summarize=<bool>)? (report_size=<bool>)? (list_vix=<bool>)?
|
|
|
shortdesc = Returns the number of events in an index.
|
|
|
description = Returns the number of events in an index. By default, it summarizes the events across all peers and indexes (summarize is True by default). If summarize is False, it splits the event count by index and search peer. If report_size is True (it defaults to False), then it will also report the index size in bytes. If list_vix is False (it defaults to True) then virtual indexes will not be listed.
|
|
|
usage = public
|
|
|
tags = count eventcount
|
|
|
category = reporting
|
|
|
example1 = | eventcount index=_internal
|
|
|
comment1 = Return the number of events in the '_internal' index.
|
|
|
example2 = | eventcount summarize=false index=*
|
|
|
comment2 = Gives event count by each index/server pair.
|
|
|
example3 = | eventcount
|
|
|
comment3 = Displays event count over all search peers.
|
|
|
|
|
|
|
|
|
##################
|
|
|
# findtypes
|
|
|
##################
|
|
|
[findtypes-command]
|
|
|
syntax = findtypes max=<int> (notcovered)? (useraw)?
|
|
|
shortdesc = Generates suggested event types.
|
|
|
description = Takes previous search results, and produces a list of\
|
|
|
promising searches that may be used as event types. Returns up to MAX\
|
|
|
event types, defaulting to 10. If the "notcovered" keyword is\
|
|
|
specified, then event types that are already covered by other\
|
|
|
eventtypes are not returned. At most 5000 events are analyzed for\
|
|
|
discovering event types. If the "useraw" keyword is specified, then\
|
|
|
phrases in the _raw text of the events are used for generating event\
|
|
|
types.
|
|
|
commentcheat = Discover 50 common event types and add support for looking at text phrases
|
|
|
examplecheat = ... | findtypes max=50 useraw
|
|
|
category = results::group
|
|
|
usage = public
|
|
|
note = replacement for typelearner
|
|
|
related = typer,typelearner
|
|
|
tags = eventtype typer discover search classify
|
|
|
|
|
|
|
|
|
###############
|
|
|
# return
|
|
|
###############
|
|
|
|
|
|
[return-command]
|
|
|
syntax = return (<int:count>)? (<field:alias>=<field>)* (<field>)* ($<field>)*
|
|
|
shortdesc = A convenient way to return values from a subsearch.
|
|
|
description = Useful for passing values up from a subsearch. Replaces the incoming events with one event, with one attribute: "search". Automatically limits the incoming results with "head" and "fields", to improve performance. Allows convenient outputting of attr=value (e.g., "return source"), alias_attr=value (e.g. "return ip=srcip"), and value (e.g., "return $srcip"). Defaults to using just the first row of results handed to it. Multiple rows can be specified with COUNT (e.g. "return 2 ip"), and each row is ORd (e.g., output might be "(ip=10.1.11.2) OR (ip=10.2.12.3)"). Multiple values can be specified and are placed within OR clauses. So "return 2 user ip" might output "(user=bob ip=10.1.11.2) OR (user=fred ip=10.2.12.3)". Using "return" at the end of a subsearch removes the need, in the vast majority of cases, for "head", "fields", "rename", "format", and "dedup". If you encounter problems with 'return', try 'oldreturn'.
|
|
|
usage = public
|
|
|
comment1 = search for "error ip=<someip>", where someip is the most recent ip used by Amrit
|
|
|
example1 = error [ search user=amrit | return ip]
|
|
|
comment2 = search for "error (user=user1 ip=ip1) OR (user=user2 ip=ip2)", where users and IPs come from the two most-recent logins
|
|
|
example2 = error [ search login | return 2 user, ip]
|
|
|
#comment3 = return to eval the userid of the last user, and increment it by 1.
|
|
|
#example3 = ... | eval nextid = 1 + [ search user=* | return $id] | ...
|
|
|
tags = format query subsearch search
|
|
|
category = search::subsearch
|
|
|
related = search format
|
|
|
|
|
|
[oldreturn-command]
|
|
|
syntax = oldreturn (<int:count>)? (<field:alias>=<field>)* (<field>)* ($<field>)*
|
|
|
shortdesc = A convenient way to return values from a subsearch.
|
|
|
description = The 'oldreturn' command is a deprecated version of the 'return' command. Use it only if you are having problems with 'return'. Go to 'return' for the full description of this command.
|
|
|
usage = public
|
|
|
tags = format query subsearch search
|
|
|
category = search::subsearch
|
|
|
related = search format
|
|
|
|
|
|
###############
|
|
|
# runshellscript
|
|
|
###############
|
|
|
|
|
|
[runshellscript-command]
|
|
|
syntax = runshellscript <script-filename> <result-count> <search-terms> <search-string> <savedsearch-name> <description> <results-url> <deprecated-arg> <search-id>
|
|
|
shortdesc = Internal command used to execute scripted alerts.
|
|
|
description = Internal command used to execute scripted alerts. The script file needs to be located \
|
|
|
in either $SPLUNK_HOME/etc/system/bin/scripts OR $SPLUNK_HOME/etc/apps/<app-name>/bin/scripts. \
|
|
|
The search id is used to create a path to the search's results. All other args are passed to the \
|
|
|
script (unvalidated) as follows: \i\\
|
|
|
$0 = scriptname \i\\
|
|
|
$1 = number of events returned \i\\
|
|
|
$2 = search terms \i\\
|
|
|
$3 = fully qualified query string \i\\
|
|
|
$4 = name of saved search \i\\
|
|
|
$5 = trigger reason (i.e. "The number of events was greater than 1") \i\\
|
|
|
$6 = link to saved search \i\\
|
|
|
$7 = DEPRECATED - empty string argument \i\\
|
|
|
$8 = file where the results for this search are stored (contains raw results)
|
|
|
usage = internal
|
|
|
category = search::external
|
|
|
related = script
|
|
|
|
|
|
|
|
|
##################
|
|
|
# searchtxn
|
|
|
##################
|
|
|
[searchtxn-command]
|
|
|
syntax = searchtxn <transaction-name> (max_terms=<int>)? (use_disjunct=<bool>)? (eventsonly=<bool>)? <search-string>
|
|
|
shortdesc = Finds transaction events given search constraints.
|
|
|
description = Retrieves events matching the transactiontype\
|
|
|
TRANSACTION-NAME with events transitively discovered by the initial\
|
|
|
event constraint of the SEARCH-STRING. \p\\
|
|
|
For example, given an 'email'\
|
|
|
transactiontype with fields="qid pid" and with a search attribute of\
|
|
|
'sourcetype="sendmail_syslog"', and a SEARCH-STRING of "to=root", searchtxn will\
|
|
|
find all the events that match 'sourcetype="sendmail_syslog" to=root'.\p\\
|
|
|
From those results, all the qid's and pid's are transitively used to\
|
|
|
further search for relevant events. When no more qid or pid\
|
|
|
values are found, the resulting search is run\i\\
|
|
|
'sourcetype="sendmail_syslog" ((qid=val1 pid=val1) OR ...\
|
|
|
....(qid=valn pid=valm) | transaction name=email | search to=root'.\p\\
|
|
|
Options:\p\\
|
|
|
max_terms -- integer between 1-1000 which determines how many unique field values all fields can use (default=1000). Using smaller values will speed up search, favoring more recent values\p\\
|
|
|
use_disjunct -- determines if each term in SEARCH-STRING should be OR'd on the initial search (default=true)\p\\
|
|
|
eventsonly -- if true, only the relevant events are retrieved, but the "|transaction" command is not run (default=false)
|
|
|
comment1 = find all email transactions to root from david smith
|
|
|
example1 = | searchtxn email to=root from="david smith"
|
|
|
usage = public
|
|
|
category = results::group
|
|
|
tags = transaction group cluster collect gather needle winnow
|
|
|
related = transaction
|
|
|
|
|
|
##########################
|
|
|
# walklex
|
|
|
##########################
|
|
|
[walklex-command]
|
|
|
syntax = walklex type=<walklex-type> (prefix=<string>|pattern=<wc-string>)? (<index-opt>)* (splunk_server=<wc-string>)? (splunk_server_group=<wc-string>)*
|
|
|
shortdesc = Returns a list of terms from the tsidx lexicon of each event index bucket.
|
|
|
description = This search command generates a list of terms or indexed fields from the \
|
|
|
tsidx lexicon of each event index bucket. \
|
|
|
Applies only to buckets with a merged_lexicon file or a single tsidx file. \
|
|
|
This means that "hot" buckets are generally not usually included. \
|
|
|
The optional splunk_server and splunk_server_group arguments specify whether to limit \
|
|
|
results to a subset of search peers.\
|
|
|
The optional prefix and pattern options limit results to terms \
|
|
|
that match a specific pattern or prefix. Either prefix or pattern can be specified \
|
|
|
but not both.
|
|
|
comment1 = Returns all terms in each bucket of the "_internal" index and find the total count for each term
|
|
|
example1 = | walklex index=_internal | stats sum(count) by term
|
|
|
comment2 = Returns all terms starting with "foo" in each bucket of the "_internal" and "_audit" indexes.
|
|
|
example2 = | walklex prefix=foo index=_internal index=_audit
|
|
|
comment3 = Returns all indexed field terms ending with "bar" in the "_internal" index, per bucket.
|
|
|
example3 = | walklex pattern=*bar type=fieldvalue index=_internal
|
|
|
comment4 = Return all fieldnames of indexed fields in each bucket of the "_audit" index
|
|
|
example4 = | walklex type=field index=_audit
|
|
|
usage = public
|
|
|
tags = metadata tstats lexicon index buckets
|
|
|
related = metadata tstats
|
|
|
|
|
|
[walklex-type]
|
|
|
syntax = all|field|fieldvalue|term
|
|
|
description = Specifies which type of terms in the lexicon to return. Defaults to "all". \
|
|
|
"term" excludes all indexed field terms of the form "<field>::<value>". \
|
|
|
"fieldvalue" includes only indexed field terms.\
|
|
|
"field" returns only the unique field names in each index bucket.
|
|
|
|
|
|
##########################
|
|
|
# mpreview
|
|
|
##########################
|
|
|
[mpreview-command]
|
|
|
syntax = mpreview (filter=<string>)? (<index-opt>)* (splunk_server=<wc-string>)? (splunk_server_group=<wc-string>)* (earliest=<mpreview-time-specifier>)? (latest=<mpreview-time-specifier>)? (chunk_size=<int>)? (target_per_timeseries=<int>)?
|
|
|
alias = msearch
|
|
|
shortdesc = Returns a list of the individual metric data points in a specified metric index that match a provided filter.
|
|
|
description = This search command generates a list of individual metric data points \
|
|
|
from a specified metric index that match a provided filter. The \
|
|
|
filter can be any arbitrary boolean expression over the \
|
|
|
dimensions or metric_name. earliest and latest, if specified, \
|
|
|
will override time range picker settings. The \
|
|
|
mpreview command is designed to display individual metric data points. \
|
|
|
To aggregate metric data points, use the mstats command. \p\\
|
|
|
To use mpreview, you must have a role with the 'run_msearch' capability. \p\\
|
|
|
Arguments: \i\\
|
|
|
"filter": An arbitrary boolean expression over the dimension or metric_name. \i\\
|
|
|
"index-opt": Limits the search to results from one or more indexes. You can use wildcard characters (*). \i\\
|
|
|
To match non-internal indexes, use index=*. To match internal indexes, use index=_*. \i\\
|
|
|
"splunk_server": Specifies the distributed search peer from which to return results. If you are using \i\\
|
|
|
Splunk Enterprise, you can specify only one splunk_server argument. However, you can use a \i\\
|
|
|
wildcard when you specify the server name to indicate multiple servers. For example, you can \i\\
|
|
|
specify splunk_server=peer01 or splunk_server=peer*. Use local to refer to the search head. \i\\
|
|
|
"splunk_server_group": Limits the results to one or more server groups. If you are using Splunk Cloud, \i\\
|
|
|
omit this parameter. You can specify a wildcard character in the string to indicate multiple \i\\
|
|
|
server groups. \i\\
|
|
|
"earliest": Specify the earliest _time for the time range of your search. You can specify an exact time \i\\
|
|
|
(earliest="11/5/2016:20:00:00") or a relative time (earliest=-h or earliest=@w0). \i\\
|
|
|
"latest": Specify the latest time for the _time range of your search. You can specify an exact time \i\\
|
|
|
(latest="11/12/2016:20:00:00") or a relative time (latest=-30m or latest=@w6). \i\\
|
|
|
"chunk_size": Advanced option. When you run an 'mpreview' search, the search head returns batches of metric \i\\
|
|
|
time series until the search results are complete. The 'chunk_size' argument specifies a \i\\
|
|
|
limit for the number of metric time series that the search head can gather in a single batch \i\\
|
|
|
from a single MSIDX file. For example, when 'chunk_size=100', the search head can return \i\\
|
|
|
100 metric time series worth of metric data points in batches until the search is complete. \i\\
|
|
|
Lower this value when 'mpreview' searches use too much memory, or when they infrequently \i\\
|
|
|
return events. Larger 'chunk_size' values can improve search performance, with the tradeoff \i\\
|
|
|
of using more memory per search. Smaller 'chunk_size' values can use less memory per search, \i\\
|
|
|
with the tradeoff of reducing search performance. This argument cannot be set lower than 10. \i\\
|
|
|
Defaults to 1000. \i\\
|
|
|
"target_per_timeseries": Specifies the maximum number of metric data points to retrieve per tsidx file \i\\
|
|
|
associated with an 'mpreview' query. When set to 0, this setting returns all data \i\\
|
|
|
points available within the given time range for each time series. \i\\
|
|
|
Defaults to 5. \p\\
|
|
|
comment1 = Returns individual data points from the _metrics index that match the specified filter.
|
|
|
example1 = | mpreview index=_metrics filter="group=queue name=indexqueue metric_name=*.current_size"
|
|
|
comment2 = Returns individual data points from the _metrics index.
|
|
|
example2 = | mpreview index=_metrics
|
|
|
comment3 = Returns 100 metric time series worth of metric data points in batches from TSIDX files belonging to the _metrics index.
|
|
|
example3 = | mpreview index=_metrics chunk_size=100
|
|
|
comment4 = Return 5 metric data points per metric time series for each TSIDX file searched in the _metrics index.
|
|
|
example4 = | mpreview index=_metrics target_per_timeseries=5
|
|
|
usage = public
|
|
|
tags = mstats mcollect msearch tstats index
|
|
|
related = mstats mcollect mcatalog
|
|
|
|
|
|
[mpreview-time-specifier]
|
|
|
syntax = (<iso8601-msecs-timestamp>|<epoch>|<relative-time-modifier>)
|
|
|
description = An ISO8601 timestamp with milliseconds, an epoch time value, or a Splunk relative time modifier.
|
|
|
|
|
|
##################
|
|
|
# x11
|
|
|
##################
|
|
|
[x11-command]
|
|
|
syntax = x11 <x11-func>"("<field>")" (as <field>)?
|
|
|
shortdesc = Remove seasonal fluctuations in fields.
|
|
|
description = Remove seasonal fluctuations in fields. This command has a similar purpose to the\
|
|
|
trendline command, but is more sophisticated as it uses the X11 method popular in industry.\
|
|
|
The type option can be either 'mult' (for multiplicative) or 'add' (for additive). By default,\
|
|
|
it's 'mult'. The period option should be specified if known; otherwise it is automatically computed.
|
|
|
example1 = ... | x11 foo as fubar
|
|
|
example2 = ... | x11 24(foo) as fubar
|
|
|
example3 = ... | x11 add12(foo) as fubar
|
|
|
usage = public
|
|
|
category = reporting
|
|
|
related = trendline
|
|
|
tags = x11 deseasonal seasonal
|
|
|
|
|
|
[x11-func]
|
|
|
syntax = <x11-type>?<x11-period>?
|
|
|
example1 = mult12
|
|
|
example2 = 12
|
|
|
example3 = add
|
|
|
|
|
|
[x11-type]
|
|
|
syntax = (mult|add)
|
|
|
description = Type option 'mult' (for multiplicative) or 'add' (for additive).
|
|
|
default = mult
|
|
|
|
|
|
[x11-period]
|
|
|
syntax = <int>
|
|
|
description = An integer between 5 and 1000. The period option should be specified if known; otherwise it is automatically computed.
|
|
|
|
|
|
###########################
|
|
|
# union
|
|
|
###########################
|
|
|
|
|
|
[union-command]
|
|
|
syntax = union (<subsearch-options>)? <dataset> (<dataset>)*
|
|
|
shortdesc = Merge multiple datasets.
|
|
|
description = Merges the results from two or more datasets into one dataset.
|
|
|
example1 = | union [search index=a | eval type = "foo"] [search index=b | eval mytype = "bar"]
|
|
|
comment1 = Merge events from index a and b and add different fields using eval in each case.
|
|
|
example2 = ... | chart count by category1 | union [search error | chart count by category2]
|
|
|
comment2 = Append the current results with the tabular results of errors.
|
|
|
example3 = | union datamodel:"internal_server.splunkdaccess" [search index=a]
|
|
|
comment3 = Search a built-in data model that is an internal server log for REST API calls and the events from index a.
|
|
|
usage = public
|
|
|
tags = multisearch append
|
|
|
category = results::append
|
|
|
related = multisearch, append
|
|
|
|
|
|
[dataset]
|
|
|
syntax = <named-dataset> | <unnamed-dataset>
|
|
|
|
|
|
[named-dataset]
|
|
|
syntax = <dataset-type>:<dataset-name>
|
|
|
|
|
|
[dataset-type]
|
|
|
syntax = (datamodel|savedsearch|inputlookup)
|
|
|
description = Type of a named dataset.
|
|
|
|
|
|
[unnamed-dataset]
|
|
|
syntax = <subsearch>
|
|
|
|
|
|
[dataset-name]
|
|
|
syntax = <string>
|
|
|
description = dataset name
|
|
|
|
|
|
[jsontxn-command]
|
|
|
syntax = jsontxn
|
|
|
usage = internal
|
|
|
|
|
|
########################
|
|
|
# ingestpreview command
|
|
|
########################
|
|
|
|
|
|
[ingestpreview-command]
|
|
|
syntax = ingestpreview (meta_mode=<string>)? (show_inputs=<bool>)? (ingest_processor=<string>)? (generate_helper_fields=<bool>)? (transforms:<string>=<string> | props:<string>=<string>)*
|
|
|
shortdesc = Helps preview ingest-time configuration settings without having to ingest or import data.
|
|
|
description = This search command takes incoming search results, \
|
|
|
generates mock ingestion events from those results, \
|
|
|
and supplies those mock events to the specified \
|
|
|
ingestion processor, which then outputs the processed \
|
|
|
events. This lets you quickly author ingest-time \
|
|
|
configurations without having to upload or index \
|
|
|
real data. For example, you can iterate or debug an \
|
|
|
'INGEST_EVAL' or 'REGEX' transform, as well as troubleshoot \
|
|
|
configurations in props.conf and transforms.conf. \p\\
|
|
|
Arguments: \i\\
|
|
|
"transforms:<key>=<value>" OR "props:<key>=<value>": Supply \
|
|
|
one or more settings for props /transforms using this \
|
|
|
syntax. For example, to configure the REGEX setting \
|
|
|
in transforms.conf, specify transforms:REGEX=<your regex> \
|
|
|
NOTE: If field values contain spaces or special characters \
|
|
|
you can wrap the values in parentheses or \
|
|
|
double quotes. The command strips the outer set of these \
|
|
|
characters before processing the arguments. \i\\
|
|
|
"meta_mode": controls how the command displays the resulting _meta key. The _meta \
|
|
|
key contains the map of indexed time field/value pairs. \
|
|
|
The command will always emit an '_meta' field if it is present in the results. \
|
|
|
However, Splunk Web will not show this by default since it is a field \
|
|
|
that starts with '_'. Set this to: \
|
|
|
"unhide" - Creates an alias to the _meta field named "META" \
|
|
|
so it's visible in Splunk Web. Equivalent to '|eval META=_meta' \
|
|
|
"expand" - if you want to have each indexed time field/value pair become \
|
|
|
a separate field. Each field will be prefixed with "META." \
|
|
|
"all" - performs both EXPAND and UNHIDE behavior \
|
|
|
"none" - performs neither EXPAND or UNHIDE behavior \
|
|
|
Defaults to "unhide". \i\\
|
|
|
"show_inputs": If set to true, the command generates INPUT.* fields for each input field with the original \
|
|
|
value before transformation. This is helpful to compute the difference between input and output \
|
|
|
for a particular field. \
|
|
|
Defaults to "false". \i\\
|
|
|
"ingest_processor": The target ingest-time processor, \
|
|
|
accepts one of the following values: \
|
|
|
'regexreplacement', 'metrics', 'metricschema'. \
|
|
|
Use 'metrics' for statsd/collectd data. \
|
|
|
Use 'metricschema' for logs to metrics. \
|
|
|
Defaults to 'regexreplacement'. \i\\
|
|
|
"generate_helper_fields": emits three extra fields: 'TRANSFORMS.CONF', 'PROPS.CONF' \
|
|
|
and 'WARNS.ERRS'. \
|
|
|
The 'TRANSFORMS.CONF' and 'PROPS.CONF' fields contain the exact \
|
|
|
settings you can copy and paste into props/transforms.conf. \
|
|
|
These settings might differ from the settings you supply \
|
|
|
to this search command because of various character \
|
|
|
escaping rule discrepancies between the search language \
|
|
|
and configuration files. \
|
|
|
The 'WARNS.ERRS' field displays any errors or warnings reported \
|
|
|
by the processor, which helps further troubleshoot settings. \
|
|
|
Defaults to true. \i\\
|
|
|
comment1 = Run INGEST_EVAL that creates a meta field, 'myfield' and \
|
|
|
sets it to "Hello World"
|
|
|
example1 = | makeresults count | fields - count | ingestpreview transforms:INGEST_EVAL=(myfield="Hello World" )
|
|
|
comment2 = Run a REGEX transform that changes 'myfield' if _raw matches, \
|
|
|
note it uses surrounding double quotes on REGEX param to deal w/ spaces
|
|
|
example2 = | makeresults count| fields - count | eval _raw="raw with open(parenthesis)close" | eval myfield="original_value" | ingestpreview transforms:REGEX="with open\(parenthesis\)close" transforms:WRITE_META=true transforms:FORMAT="$0 myfield::new_value"
|
|
|
comment3 = Uses the ingest_processor='metrics' to test dimension extraction \
|
|
|
(ipv4) for statsd data
|
|
|
example3 = | makeresults count| fields - count | eval _raw="cpu.idle.10.3.4.134:1.2342|g" | ingestpreview ingest_processor=metrics props:METRICS_PROTOCOL=statsd props:NO_BINARY_CHECK=true props:SHOULD_LINEMERGE=false transforms:REGEX=((?<ipv4>\d{1,3}.\d{1,3}.\d{1,3}.\d{1,3})) transforms:REMOVE_DIMS_FROM_METRIC_NAME=true | eval metric_value=_value
|
|
|
comment4 = Uses the ingest 'metricschema' processor to build a metrics event out of \
|
|
|
sample data. This search first builds a raw event, then mimics the metadata from the \
|
|
|
field extractions, and then runs the "ingestpreview" command to display the mock \
|
|
|
metrics event.
|
|
|
example4 = | makeresults | eval _raw="2021-01-03T10:35:12-0800 dns_name=contrarian.local severity=informational http_status=200 response_ms=244" | extract auto=f field_extraction | eval _meta="dns_name::contrarian.local severity::informational http_status::200 response_ms::244" | ingestpreview ingest_processor=metricschema generate_helper_fields=true meta_mode=all transforms:METRIC-SCHEMA-MEASURES="NUMS_EXCEPT http_status"
|
|
|
usage = public
|
|
|
|
|
|
################
|
|
|
# prjob
|
|
|
################
|
|
|
|
|
|
[prjob-command]
|
|
|
syntax = prjob [<subsearch>]
|
|
|
shortdesc = Enables use of parallel reduce search processing to speed up search \
|
|
|
runtime of a set of supported SPL commands, in a distributed search \
|
|
|
environment.
|
|
|
description = The prjob command tells Splunk to run a search using parallel \
|
|
|
reduce search processing if possible. This is particularly useful to \
|
|
|
speed up high-cardinality searches that aggregate large numbers of \
|
|
|
search results. \p\\
|
|
|
The prjob command provides the same functionality as the \
|
|
|
redistribute command. However, it has a simpler syntax and is \
|
|
|
easier to use. It uses the default values for the redistribute \
|
|
|
command arguments. Use the prjob command when you want to run a \
|
|
|
simple parallel reduce job without managing the by-clause field and \
|
|
|
the number of intermediate reducers. \p\\
|
|
|
The prjob command requires a distributed search environment with a \
|
|
|
pool of intermediate reducers at the indexer level. \p\\
|
|
|
The prjob command must be the first command in a search. It supports \
|
|
|
streaming commands and the following nonstreaming commands: stats, tstats, \
|
|
|
streamstats, eventstats, sistats, sichart, and sitimechart. \
|
|
|
The prjob command also supports transaction on a single field.
|
|
|
example1 = | prjob [ | search index=main | stats count by ip]
|
|
|
comment1 = Speeds up a stats search that aggregates a large number of results. \
|
|
|
The "| stats count by ip" portion of the search is processed on the \
|
|
|
intermediate reducers. The search head just aggregates the results.
|
|
|
example2 = | prjob [ | search index=main | eventstats count by user, source | where count>10 | sitimechart max(count) by source | timechart max(count) by source]
|
|
|
comment2 = Speeds up a search that includes eventstats and which uses \
|
|
|
sitimechart to perform the statistical calculations for a timechart \
|
|
|
operation. The intermediate reducers process eventstats, where, and \
|
|
|
sitimechart. The search head runs timechart to turn the reduced \
|
|
|
sitimechart statistics into sorted, visualization-ready results.
|
|
|
example3 = | prjob [ | tstats prestats=t count by _time span=1d | sitimechart span=1d count | timechart span=1d count]
|
|
|
comment3 = Speeds up a search that uses tstats to generate events. The \
|
|
|
tstats command must be placed at the start of the subsearch, \
|
|
|
and here it uses prestats=t to work with the timechart command. \
|
|
|
sitimechart is processed on the reducers and timechart is processed on \
|
|
|
the search head.
|
|
|
example4 = | prjob [ | search index=main | eventstats count by user, source | where count >10 | sort 0 -num(count) ]
|
|
|
comment4 = In this example, the eventstats and where commands are processed \
|
|
|
in parallel on the reducers, while the sort command and any commands \
|
|
|
following it are processed on the search head. This happens because \
|
|
|
sort is a nonstreaming command that is not supported by prjob.
|
|
|
category = data::managing
|
|
|
usage = public
|