engine had to scan 4,308,303 documents to fulfill the query. Now, quickly run a
count on the values collection:
db.values.count()
4308303
The number of documents scanned is the same as the total number of documents in
the collection. So you've performed a complete collection scan. If your query were
expected to return every document in the collection, then this wouldn't be a bad
thing. But since you're returning one document, as indicated by the explain value n,
this is problematic. Generally speaking, you want the values of n and nscanned to be as
close together as possible. When doing a collection scan, this is almost never the case.
The cursor field tells you that you've been using a BasicCursor, which only confirms
that you're scanning the collection itself and not an index.
A second datum here further explains the slowness of the query: the scanAndOrder
field. This indicator appears when the query optimizer can't use an index to return a
sorted result set. Therefore, in this case, not only does the query engine have to scan
the collection, it also has to sort the result set manually.
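Pulling those indicators together, the relevant fields of the unindexed explain() output
described above look roughly like this (a sketch showing only the fields just discussed;
the real output contains several others):
{
    "cursor" : "BasicCursor",
    "nscanned" : 4308303,
    "n" : 1,
    "scanAndOrder" : true
}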
The poor performance is unacceptable, but fortunately the fix is simple. All you
need to do is build an index on the close field. Go ahead and do that now (note that
building the index may take a few minutes) and then reissue the query:
db.values.ensureIndex({close: 1})
db.values.find({}).sort({close: -1}).limit(1).explain()
{
    "cursor" : "BtreeCursor close_1 reverse",
    "nscanned" : 1,
    "nscannedObjects" : 1,
    "n" : 1,
    "millis" : 0,
    "nYields" : 0,
    "nChunkSkips" : 0,
    "indexBounds" : {
        "close" : [
            [
                {
                    "$maxElement" : 1
                },
                {
                    "$minElement" : 1
                }
            ]
        ]
    }
}
What a difference! The query now takes less than a millisecond to process. You can see
from the cursor field that you're using a BtreeCursor on the index named close_1.
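As a closing aside, the foreground build used above blocks other operations on the
database until the index is finished, which matters on a collection of this size. Here is
a minimal sketch, assuming the same MongoDB 2.x-era shell used throughout: getIndexes()
confirms which indexes exist, and ensureIndex() accepts a background option if you'd
rather not block (the build runs more slowly that way, and it's unnecessary here since
the index already exists):
// list the indexes on the collection: close_1 should appear alongside the default _id index
db.values.getIndexes()
// alternative form of the build that doesn't block other operations
db.values.ensureIndex({close: 1}, {background: true})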