This tells us that each hour there are an average of 3,367 unique values of req_time. So, if we stored every count of every req_time for a week, we would store 3,367 * 24 * 7 = 565,656 values. How many values would we have to store per hour to get the same answer we received before?
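As a quick sanity check of that arithmetic, and to see what a top-50-per-hour approach would cost instead (my own extrapolation from the query below, not a figure from the text):

```python
# Storage estimate for hourly req_time counts over one week.
unique_per_hour = 3367            # average unique req_time values per hour (from the text)
hours_per_week = 24 * 7           # 168 hourly buckets in a week

full_counts = unique_per_hour * hours_per_week
print(full_counts)                # 565656 -- every count of every req_time

# Hypothetical alternative: keep only the top 50 counts per hour.
top_n = 50
pruned_counts = top_n * hours_per_week
print(pruned_counts)              # 8400 -- two orders of magnitude fewer rows
```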
The following is a query that attempts to answer that question:
source="impl_splunk_gen"
| bucket span=1h _time
| stats count by _time req_time
| sort 0 _time -count
| streamstats count as place by _time
| where place<50
| stats sum(count) as count by req_time
| sort 0 -count
| head 10
Breaking this query down, we have:
• source="impl_splunk_gen": This finds the events.
• | bucket span=1h _time: This floors our _time field to the beginning of the hour. We will use this to simulate hourly summary queries.
• | stats count by _time req_time: This generates a count per req_time per hour.
• | sort 0 _time -count: This sorts and keeps all events (that's what the 0 means), first ascending by _time, then descending by count.
• | streamstats count as place by _time: This loops over the events, incrementing place, and restarting the count when _time changes. Remember that we flattened _time to the beginning of each hour.
• | where place<50: This keeps rows where place is less than 50 — that is, the first 49 events per hour, since streamstats starts counting at 1. These will be the largest values of count per hour, since we sorted descending by count.
• | stats sum(count) as count by req_time: This adds up what we have left across all hours.
• | sort 0 -count: This sorts the events in descending order by count.
• | head 10: This shows the first 10 results.
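The steps above can be sketched in Python (a simulation of the SPL logic, not Splunk itself; note that place<50 keeps 49 rows per hour, so the sketch defaults to 49):

```python
from collections import Counter, defaultdict

def top_req_times(events, top_n=49, head=10):
    """Simulate the SPL pipeline: count req_time per hour, keep the
    top_n largest counts in each hour, sum what is left across hours,
    and return the head largest totals.

    events is an iterable of (epoch_seconds, req_time) pairs."""
    # | bucket span=1h _time | stats count by _time req_time
    hourly = defaultdict(Counter)
    for ts, req_time in events:
        hour = ts - ts % 3600              # floor _time to the hour
        hourly[hour][req_time] += 1

    # | sort 0 _time -count | streamstats ... | where place<50
    # Counter.most_common(top_n) takes the place of sorting descending
    # by count and keeping the first top_n rows per hour.
    totals = Counter()
    for counts in hourly.values():
        for req_time, count in counts.most_common(top_n):
            totals[req_time] += count      # | stats sum(count) by req_time

    # | sort 0 -count | head 10
    return totals.most_common(head)
```

This is only a model of the approach: any req_time that never makes an hour's top-N loses its counts, which is exactly the approximation the original query accepts in exchange for storing far fewer values.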