We can walk the source from our sample event through the transforms specified inside transforms.conf. The source looks like this:
source=/logs/myapp.session_foo-jA5MDkyMjEwMTIK.log
Stepping through the transforms, we have:
myapp_session: Reading from the metadata field source, creates the indexed field session with the value foo-jA5MDkyMjEwMTIK
myapp_flatten_source: Resets the metadata field source to /logs/myapp.session_x.log
session_type: Reading from our newly indexed field session, creates the field session_type with the value foo
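For context, a minimal sketch of what these three stanzas could look like in transforms.conf follows. The regular expressions and FORMAT values are assumptions reconstructed from the sample source path, not necessarily the exact definitions used earlier:
# Assumed reconstruction: creates the indexed field session from source
[myapp_session]
SOURCE_KEY = MetaData:Source
REGEX = session_(.*)\.log
FORMAT = session::$1
WRITE_META = true
# Assumed reconstruction: rewrites source to /logs/myapp.session_x.log
[myapp_flatten_source]
SOURCE_KEY = MetaData:Source
REGEX = (.*session_).*\.log
DEST_KEY = MetaData:Source
FORMAT = source::$1x.log
# Assumed reconstruction: creates session_type from the indexed field session
[session_type]
SOURCE_KEY = session
REGEX = (.*?)-
FORMAT = session_type::$1
WRITE_META = true
Note that session_type reads SOURCE_KEY = session, which is why it must run after myapp_session has created that field.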
This same ordering logic can be applied at search time using the EXTRACT and
REPORT stanzas. In this particular case, though, the fields need to be created as
indexed fields if we want to search for these values, since the values are part of a metadata field.
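As an illustration of the search-time form, an EXTRACT stanza in props.conf can read from a field other than _raw using the "in <field>" syntax. The stanza below is only a sketch; the source type name and regular expression are assumptions:
[mysourcetype]
# Assumed example: extracts session at search time from the source field
EXTRACT-session = session_(?<session>[^.]+)\.log in source
This would make session available at search time, but as noted above, these values must be indexed fields if we want to search for them.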
Dropping events
Some events are simply not worth indexing. The hard part is figuring out which ones
these are and making very sure you're not wrong. Dropping too many events can
make you blind to real problems at critical times and can cause more trouble than
simply tuning Splunk to deal with the greater volume of data would have in the first place.
With that warning stated, if you know what events you do not need, the procedure
for dropping events is pretty simple. Say we have an event such as this one:
2012-02-02 12:24:23 UTC TRACE Database call 1 of 1,000. [...]
I know absolutely that, in this case and for this particular source type, I do not want
to index TRACE-level events.
In props.conf, I create a stanza for my source type, thus:
[mysourcetype]
TRANSFORMS-droptrace=droptrace
Then, I create the following transform in transforms.conf:
[droptrace]
REGEX=^\d{4}-\d{2}-\d{2}\s+\d{1,2}:\d{2}:\d{1,2}\s+[A-Z]+\sTRACE
DEST_KEY=queue
FORMAT=nullQueue
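Before relying on a transform like this, it is worth confirming exactly which events the regular expression matches. One simple check is to run the same expression as a search against data that has already been indexed; the index name here is only an example:
index=main sourcetype=mysourcetype
| regex _raw="^\d{4}-\d{2}-\d{2}\s+\d{1,2}:\d{2}:\d{1,2}\s+[A-Z]+\sTRACE"
Any event this search returns is an event the transform would route to nullQueue.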
 