Suppose you need to display a user's name, and you know the user's ID. You can
build a get_name_from_id() function and add caching to it like this:
<?php
function get_name_from_id($user_id) {
    static $names = array(); // static makes the cache persist across calls
    if ( !isset($names[$user_id]) ) {
        // Fetch name from database into $names[$user_id]
    }
    return $names[$user_id];
}
?>
If you're using Perl, the Memoize module is the standard way to cache the results of
function calls:
use Memoize qw(memoize);
memoize('get_name_from_id');

sub get_name_from_id {
    my ( $user_id ) = @_;
    my $name;
    # Get name from database into $name
    return $name;
}
These techniques are simple, but they can save your application a lot of work.
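The same pattern exists in other languages; for instance, Python's standard library provides it through functools.lru_cache, which plays the same role as Perl's Memoize. A minimal sketch (the in-memory dictionary here is a hypothetical stand-in for the database lookup):

```python
from functools import lru_cache

# Hypothetical stand-in for a database table of users.
FAKE_DB = {1: "alice", 2: "bob"}

@lru_cache(maxsize=None)  # cache one result per distinct user_id
def get_name_from_id(user_id):
    # In a real application, this body would query the database.
    return FAKE_DB.get(user_id)

get_name_from_id(1)  # first call does the lookup
get_name_from_id(1)  # second call is served from the cache
```

The decorator keys the cache on the function's arguments, so each distinct user_id is fetched only once per process.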
Local shared-memory caches
These caches are medium-sized (a few GB), fast, and hard to synchronize across multiple machines. They're good for small, semi-static bits of data. Examples include lists of the cities in each state, the partitioning function (mapping table) for a sharded data store, or data that you can invalidate with time-to-live (TTL) policies. The biggest benefit of shared memory is that accessing it is very fast, usually much faster than accessing any type of remote cache.
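The TTL-based invalidation mentioned above can be sketched as a small in-process cache (the loader function, keys, and five-minute TTL are illustrative assumptions, not from the text):

```python
import time

class TTLCache:
    """A tiny process-local cache whose entries expire after ttl seconds."""
    def __init__(self, ttl):
        self.ttl = ttl
        self.store = {}  # key -> (value, expiry timestamp)

    def get(self, key, loader):
        entry = self.store.get(key)
        now = time.time()
        if entry is None or entry[1] < now:
            value = loader(key)  # recompute on a miss or after expiry
            self.store[key] = (value, now + self.ttl)
            return value
        return entry[0]

cache = TTLCache(ttl=300)  # hypothetical 5-minute policy
# Placeholder loader standing in for a database query:
cities = cache.get("cities:NY", lambda k: ["Albany", "Buffalo"])
```

Within the TTL window, repeated gets for the same key never call the loader again, which is exactly the kind of semi-static data this cache tier suits.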
Distributed memory caches
The best-known example of a distributed memory cache is memcached. Distributed caches are much larger than local shared-memory caches and are easy to grow. Only one copy of each bit of cached data is created, so you don't waste memory or introduce consistency problems by caching the same data in many places. Distributed memory is great for storing shared objects, such as user profiles, comments, and HTML snippets.
These caches have much higher latency than local shared-memory caches, though,
so the most efficient way to use them is with multiple get operations (i.e., getting
many objects in a single round-trip). They also require you to plan how you'll add
more nodes, and what to do if one of the nodes dies. In both cases, the application
needs to decide how to distribute or redistribute cached objects across the nodes.
Consistent hashing is important to avoid performance problems when you add a
server to or remove a server from your cache cluster. There's a consistent hashing
library for memcached at http://www.audioscrobbler.net/development/ketama/ .
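The idea behind ketama-style consistent hashing can be sketched as follows. Each node is hashed onto a ring many times, and a key maps to the first node clockwise from the key's own hash; the node names and replica count below are illustrative, and real clients use tuned hash functions and weights:

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Map keys to nodes so that adding or removing a node
    remaps only a small fraction of the keys."""
    def __init__(self, nodes, replicas=100):
        self.replicas = replicas
        self.ring = []  # sorted list of (hash, node) points
        for node in nodes:
            self.add_node(node)

    def _hash(self, key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, node):
        # Place several virtual points per node for smoother distribution.
        for i in range(self.replicas):
            bisect.insort(self.ring, (self._hash("%s:%d" % (node, i)), node))

    def get_node(self, key):
        # Walk clockwise to the first node point at or after the key's hash.
        idx = bisect.bisect(self.ring, (self._hash(key), "")) % len(self.ring)
        return self.ring[idx][1]

ring = ConsistentHashRing(["cache1", "cache2", "cache3"])  # hypothetical hosts
node = ring.get_node("user:1234")  # the same key always maps to the same node
```

Because only the ring points belonging to an added or removed node change, most keys keep mapping to the same server, which is what protects the cluster's hit rate during membership changes.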