protocol.⁴ This TCP business is fairly low-level and not so germane to the concerns of
most application developers. What's relevant here is an understanding of when the
drivers wait for responses from the server and when they “fire and forget” instead.
I've already spoken about how queries work, and obviously, every query requires a
response. To recap, a query is initiated when a cursor object's next method is invoked.
At that point, the query is sent to the server, and the response is a batch of documents.
If that batch satisfies the query, no further round trips to the server will be necessary.
But if there happen to be more query results than can fit in the first server response, a
so-called getmore directive will be sent to the server to fetch the next set of query
results. As the cursor is iterated, successive getmore calls will be made until the query
is complete.
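To make the batching behavior concrete without a running server, here's a pure-Ruby sketch of a lazily fetched cursor. It is a simulation, not the actual driver implementation: the first `fetch_batch` stands in for the initial query's response, and each subsequent one stands in for a getmore round trip triggered by iteration.

```ruby
# Hypothetical simulation of cursor batching: batches are fetched
# from the "server" only as iteration drains the local buffer.
class SimulatedCursor
  include Enumerable

  attr_reader :round_trips

  def initialize(server_docs, batch_size: 2)
    @server_docs = server_docs
    @batch_size  = batch_size
    @buffer      = []
    @offset      = 0
    @round_trips = 0
  end

  def each
    loop do
      fetch_batch if @buffer.empty?   # initial query, then getmore calls
      break if @buffer.empty?         # server has no more results
      yield @buffer.shift
    end
  end

  private

  # Stands in for one network round trip to the server.
  def fetch_batch
    return if @offset >= @server_docs.size
    @buffer.concat(@server_docs[@offset, @batch_size])
    @offset      += @batch_size
    @round_trips += 1
  end
end

docs   = (1..5).map { |n| { "n" => n } }
cursor = SimulatedCursor.new(docs, batch_size: 2)
cursor.to_a.size       # => 5
cursor.round_trips     # => 3 (one initial batch plus two getmore fetches)
```

Five documents with a batch size of two require three round trips, which is exactly the pattern of one query response followed by successive getmore calls.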
There's nothing surprising about the network behavior for queries just described,
but when it comes to database writes (inserts, updates, and removes), the default
behavior may seem unorthodox. That's because, by default, the drivers don't wait for a
response from the server when writing to the server. So when you insert a document,
the driver writes to the socket and assumes that the write has succeeded. One tactic
that makes this possible is client-side object ID generation: since you already have the
document's primary key, there's no need to wait for the server to return it.
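The following sketch shows how an object ID can be built entirely on the client, in the spirit of BSON ObjectIds: 4 bytes of Unix timestamp, a 3-byte machine identifier, a 2-byte process ID, and a 3-byte incrementing counter. The class name and details here are illustrative assumptions, not the real driver's code; the point is only that no server round trip is needed to learn a new document's primary key.

```ruby
require 'digest'
require 'socket'

# Hypothetical client-side id generator modeled on the 12-byte
# BSON ObjectId layout: timestamp | machine | pid | counter.
class SketchObjectId
  @@counter  = rand(0xFFFFFF)
  MACHINE_ID = Digest::MD5.digest(Socket.gethostname)[0, 3]

  # Returns the raw 12-byte id.
  def self.generate
    @@counter = (@@counter + 1) % 0xFFFFFF
    [Time.now.to_i].pack('N') +            # 4-byte timestamp (big-endian)
      MACHINE_ID +                         # 3-byte machine hash
      [Process.pid % 0xFFFF].pack('n') +   # 2-byte process id
      [@@counter].pack('N')[1, 3]          # 3-byte counter
  end

  # Returns the id as the familiar 24-character hex string.
  def self.generate_hex
    generate.unpack1('H*')
  end
end

SketchObjectId.generate_hex.length  # => 24
```

Because the timestamp occupies the leading bytes, ids generated this way also sort roughly by creation time, a useful side effect the real ObjectId shares.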
This fire-and-forget write strategy puts a lot of users on edge; fortunately, this
behavior is configurable. All of the drivers implement a write safety mode that can be
enabled for any write (insert, update, or delete). In Ruby, you can issue a safe insert
like so:
@users.insert({"last_name" => "james"}, :safe => true)
When writing in safe mode, the driver appends a special command called getlasterror to the insert message. This accomplishes two things. First, because getlasterror is a command, and thus requires a round trip to the server, it ensures that the write has arrived. Second, the command verifies that the server hasn't thrown any
errors on the current connection. If an error has been thrown, the drivers will raise an
exception, which can be handled gracefully. You can use safe mode to guarantee that
application-critical writes reach the server, but you might also employ safe mode when
you expect an explicit error. For instance, you'll often want to enforce the uniqueness
of a value. If you're storing user data, you'll maintain a unique index on the username
field. The unique index will cause the insert of a document with a duplicate username
to fail, but the only way to know that it has failed at insert time is to use safe mode.
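The duplicate-username scenario can be illustrated without a running server. The following is a pure-Ruby simulation (not the actual driver): a fire-and-forget insert silently swallows the duplicate-key failure, while a safe-mode insert surfaces it as an exception the application can rescue. The class and error names are invented for the sketch.

```ruby
# Simulation of a collection with a unique index on one field.
class DuplicateKeyError < StandardError; end

class SimulatedCollection
  def initialize(unique_key)
    @unique_key = unique_key
    @docs = []
  end

  def insert(doc, opts = {})
    if @docs.any? { |d| d[@unique_key] == doc[@unique_key] }
      # In safe mode the getlasterror round trip reports the failure;
      # without it, the client never hears about the error.
      raise DuplicateKeyError, "duplicate #{@unique_key}" if opts[:safe]
      return nil
    end
    @docs << doc
    doc
  end

  def count
    @docs.size
  end
end

users = SimulatedCollection.new("username")
users.insert({ "username" => "james" })
users.insert({ "username" => "james" })              # silently dropped
begin
  users.insert({ "username" => "james" }, :safe => true)
rescue DuplicateKeyError
  # safe mode turns the failure into a catchable exception
end
users.count  # => 1
```

With the real driver, the same pattern applies: wrap the safe insert in a begin/rescue block and handle the driver's operation-failure exception when the unique index rejects the write.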
For most purposes, it's prudent to enable safe mode by default. You may then opt to
disable safe mode for the parts of an application that write lower-value data requiring
higher throughput. Weighing this trade-off isn't always easy, and there are several more
safe mode options to consider. We'll discuss these in much more detail in chapter 8.
By now, you should be feeling more comfortable with how the drivers work, and
you're probably itching to build a real application. In the next section, we'll put it all
together, using the Ruby driver to construct a basic Twitter monitoring app.
⁴ A few drivers also support communication over Unix domain sockets.