According to this data set, our top three IP addresses are 4.4.4.4, 5.5.5.5, and
2.2.2.2. The actual largest value belonged to 1.1.1.1, but it was missed because it
never made the top three in any single slice of time.
To tackle this problem, we need to keep track of more data points for each slice of
time. But how many?
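For instance, one way to keep more data points is to retain the top N values for each hour instead of only three, and then roll those per-hour counts back up. The query below is only a rough sketch of that idea, not the approach we will settle on; it assumes an ip field in the generator data and an arbitrary cutoff of 20 values per hour:
source="impl_splunk_gen"
| bucket span=1h _time
| stats count by _time, ip
| sort 0 _time -count
| streamstats count as rank by _time
| where rank<=20
| stats sum(count) as count by ip
| sort -count
Here stats count by _time, ip counts each IP address per hour, streamstats ranks the addresses within each hour, where keeps the 20 most common per hour, and the final stats and sort roll those partial counts back up. The open question is what that cutoff should be.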
Using our generator data, let's run top against a field with fairly random values,
req_time, and see what kind of results we get. In my data set, that is the following query:
source="impl_splunk_gen" | top req_time
When run over a week, this query gives me the following results:
How many unique values were there? The following query will tell us that:
source="impl_splunk_gen" | stats dc(req_time)
This tells us there are 12,239 unique values of req_time. How many different
values are there per hour? The following query will calculate the average number of
unique values per hour:
source="impl_splunk_gen"
| bucket span=1h _time
| stats dc(req_time) as dc by _time
| stats avg(dc)
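Since the average alone can hide hours that contain far more unique values, it can also help to look at the maximum and a high percentile of the hourly distinct count. This is a small optional variation on the query above:
source="impl_splunk_gen"
| bucket span=1h _time
| stats dc(req_time) as dc by _time
| stats avg(dc) max(dc) perc95(dc)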
 