presented with all the fields; however, when it is deserialized by Version 1 of App1 to
provide updates in the old application, POF does not discard the extra pieces of data
(fields 4, 5, and so on). Instead, it packs them, without reading or parsing them, into a
separate bucket as an opaque byte array. When the object is serialized again, this buffer
is appended back to the stream, so the Version 2 object survives the round trip intact and
backward compatibility is maintained.
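
In Coherence, this round-trip behavior is exposed through the Evolvable contract. The
sketch below is a minimal illustration (the Trade class and its fields are assumptions
made for the example, not taken from the text): a Version 1 POF type extends
AbstractEvolvable, so any higher-index fields written by Version 2 are buffered by the
POF runtime as an opaque byte array and replayed on the next serialization.

import com.tangosol.io.AbstractEvolvable;
import com.tangosol.io.pof.EvolvablePortableObject;
import com.tangosol.io.pof.PofReader;
import com.tangosol.io.pof.PofWriter;
import java.io.IOException;

// Version 1 of a POF type. Fields with indexes this version does not know
// about are captured in the Evolvable's future-data buffer and re-emitted
// on serialization, preserving round-trip compatibility with Version 2.
public class Trade extends AbstractEvolvable implements EvolvablePortableObject {

    public static final int IMPL_VERSION = 1;

    private String symbol;   // POF index 0
    private int    quantity; // POF index 1

    @Override
    public int getImplVersion() {
        return IMPL_VERSION;
    }

    @Override
    public void readExternal(PofReader in) throws IOException {
        symbol   = in.readString(0);
        quantity = in.readInt(1);
        // Higher-index fields (4, 5, and so on) are not read here; the POF
        // runtime stores the unread remainder as the future-data byte array.
    }

    @Override
    public void writeExternal(PofWriter out) throws IOException {
        out.writeString(0, symbol);
        out.writeInt(1, quantity);
        // The buffered Version 2 remainder is appended back to the stream
        // by the POF runtime when this object is serialized again.
    }
}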
So now let's see how Coherence addresses two other core tasks of a data fabric: partitioning
and distribution. Coherence supports several distribution models, and the most obvious one
is direct replication (fast read / slow write), which is the first thing that comes to mind.
We must remember that to keep data objects consistent, all our replication must be
synchronous. If we have two grid nodes, each holding the same two replicated data objects
(the objects on a single node differ, but the nodes are identical; this is complete
replication), we have to synchronize each object between the two nodes every time it is
updated. The problem with this method becomes obvious as we add more nodes and more
objects: synchronous object replication will soon consume all our hardware resources.
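
To put rough, illustrative numbers on this: with full replication across N nodes, every
single write has to be applied synchronously on all N nodes, so the write cost grows
linearly with the size of the grid. A partitioned cache with b backup copies, described
next, touches only 1 + b nodes per write, however large the grid becomes.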
In fact, lots of distributed caches work on that model. With Coherence, we have a better
option: the partitioned cache (fast write / slow read). Instead of keeping all the objects
on every node, we partition them. If we have four data objects, we split them equally
between two nodes: Obj1 and Obj2 on Node1, which is the primary for these objects, and
Obj3 and Obj4 on Node2 under the same rules. For resiliency, backup copies of objects
1 and 2 are stored on Node2, and vice versa; they are synchronized whenever the master
object's data is updated. Thus, we have considerably reduced the number of synchronous
replications in the Coherence fabric.
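
Importantly, this routing is transparent to the client. Here is a minimal sketch of the
access pattern (the cache name dist-orders is hypothetical and assumed to be mapped to a
distributed scheme in the cache configuration):

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class PartitionedCacheDemo {
    public static void main(String[] args) {
        // The client API is the same regardless of where partitions live;
        // Coherence hashes the key to a partition and routes the call to
        // that partition's primary owner.
        NamedCache cache = CacheFactory.getCache("dist-orders");

        // The put goes to the primary owner of the key's partition and is
        // synchronously copied to the configured number of backup nodes.
        cache.put("Obj1", "payload-1");

        // The get is answered by the primary owner; no grid-wide broadcast.
        Object value = cache.get("Obj1");
        System.out.println(value);

        CacheFactory.shutdown();
    }
}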
When Node1 goes down, Node2 is promoted to primary for all objects, which effectively
takes us back to the first model with a single node. The extreme implementation of this
method would be one node per single primary and backup object. Coherence allows you to
configure this replication model according to your realities; it is always a trade-off
between performance, resilience, and cost. The good news is that Coherence takes care of
the proxy layers between task submitters and task processors, of data indexing, and of
internal bucket synchronization. Of course, Coherence is also responsible for promoting
backup nodes to primary when the master node(s) become unavailable.
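
To make the trade-off concrete (illustrative arithmetic rather than Coherence specifics):
with a backup count of b, the grid keeps 1 + b copies of every partition, so memory
consumption is (1 + b) times the raw data size, every write is synchronously copied b
extra times, and the cluster can survive up to b simultaneous node failures without data
loss. A single backup is the usual balance point.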
So, we have a highly performant replicated cache and a very scalable partitioned cache.
There is a third model, devised to combine the best sides of both: the Coherence near
cache, which gives the fastest possible access to MRU (most recently used) and MFU (most
frequently used) data. In this approach, every node has a local cache store of limited
size in front of a large backend store.
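
A minimal sketch of this two-tier arrangement, built programmatically for clarity (a near
cache is normally declared in the cache configuration; the cache name dist-orders and the
size limit are illustrative assumptions):

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.net.cache.LocalCache;
import com.tangosol.net.cache.NearCache;

public class NearCacheDemo {
    public static void main(String[] args) {
        // Back tier: the partitioned (distributed) cache spread across the grid.
        NamedCache back = CacheFactory.getCache("dist-orders");

        // Front tier: a small local store that keeps recently used entries.
        LocalCache front = new LocalCache(1000); // at most 1000 local entries

        // The near cache serves repeat reads from the local front map and
        // falls through to the distributed back cache on a miss.
        NearCache near = new NearCache(front, back);

        near.put("Obj1", "payload-1"); // written through to the back cache
        Object v = near.get("Obj1");   // cached locally; repeat gets are local
        System.out.println(v);

        near.release();
        CacheFactory.shutdown();
    }
}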
Imagine that a submitter