10.1 Deployment
To deploy MongoDB for success, you need to choose the right hardware and the appropriate server topology. If you have preexisting data, then you need to know how to effectively import (and export) it. And finally, you need to make sure that your deployment is secure. We'll address all of these issues in the sections to come.
10.1.1 Deployment environment
Here I'll present considerations for choosing good deployment environments for MongoDB. I'll discuss specific hardware requirements, such as CPU, RAM, and disks, and provide recommendations for optimizing the operating system environment. I'll also provide some advice for deploying in the cloud.
ARCHITECTURE
Two notes on hardware architecture are in order.
First, because MongoDB maps all data files to a virtual address space, all production deployments should be run on 64-bit machines. As stated elsewhere, a 32-bit architecture limits MongoDB to about 2 GB of storage. With journaling enabled, the limit is reduced to around 1.5 GB. This is dangerous in production because, if these limits are ever surpassed, MongoDB will behave unpredictably. Feel free to run on 32-bit machines for unit testing and staging, but in production and for load testing, stick to 64-bit architectures.
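Before provisioning, you can verify that a candidate host meets both requirements discussed here and below (a 64-bit address space and little-endian byte order). This is a minimal Python sketch, not an official MongoDB check:

```python
import struct
import sys

# Pointer size is 8 bytes on a 64-bit platform; MongoDB's memory-mapped
# files need the larger virtual address space in production.
is_64_bit = struct.calcsize("P") * 8 == 64

# The core server requires a little-endian CPU.
is_little_endian = sys.byteorder == "little"

print(f"64-bit: {is_64_bit}, little-endian: {is_little_endian}")
if not (is_64_bit and is_little_endian):
    print("Warning: this host is unsuitable for a production mongod.")
```

Run this on the target machine itself, since it inspects the interpreter and CPU it is running on.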
Next, MongoDB must be run on little-endian machines. This usually isn't difficult to comply with, but users running SPARC, PowerPC, PA-RISC, and other big-endian architectures will have to hold off.¹ Most of the drivers support both little- and big-endian byte orderings, so clients of MongoDB can usually run on either architecture.
CPU
MongoDB isn't particularly CPU-intensive; database operations are rarely CPU-bound. Your first priority when optimizing for MongoDB is to ensure that operations aren't I/O-bound (see the next two sections on RAM and disks).
But once your indexes and working set fit entirely in RAM, you may see some CPU-boundedness. If you have a single MongoDB instance serving tens (or hundreds) of thousands of queries per second, you can realize performance increases by providing more CPU cores. For reads that don't use JavaScript, MongoDB can utilize all available cores.
If you do happen to see CPU saturation on reads, check your logs for slow query warnings. You may be lacking the proper indexes, thereby forcing table scans. If you have a lot of open clients and each client is running table scans, then the scanning plus the resultant context switching will be CPU-intensive. The solution to this problem is to add the necessary indexes.
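Slow operations in the mongod log end with a millisecond duration (for example, `... 240ms`). As a rough illustration of scanning a log for such warnings, here is a Python sketch; the sample lines and the 100 ms threshold are hypothetical, though 100 ms matches mongod's default slow-operation cutoff:

```python
import re

SLOW_MS = 100  # hypothetical threshold; mongod logs ops slower than "slowms"

# Hypothetical log excerpts in the style of mongod's query log lines.
sample_log = [
    "[conn12] query app.users ntoreturn:1 nscanned:204502 240ms",
    "[conn13] query app.users ntoreturn:1 nscanned:12 0ms",
]

# Match a trailing duration like "240ms".
slow_re = re.compile(r"(\d+)ms\s*$")

for line in sample_log:
    m = slow_re.search(line)
    if m and int(m.group(1)) >= SLOW_MS:
        # A high nscanned relative to documents returned suggests a table
        # scan; the usual fix is an index on the queried fields.
        print("slow op:", line)
```

In practice you'd point this at the real log file and then add the missing index in the shell, for example `db.users.ensureIndex({username: 1})` for a query on `username`.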
¹ If you're interested in big-endian support for the core server, see https://jira.mongodb.org/browse/SERVER-1625.