A load balancer is a sprayer that makes intelligent decisions about where to route a request based on the load on a given system at any point in time. Among the popular sprayers available are Cisco's LocalDirector and IBM's Network Dispatcher. Many other vendors have solutions in this space as well.
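A minimal sketch of that routing decision, assuming a hypothetical Server class that exposes an activeConnections count as its load metric (an illustration of the idea, not any vendor's implementation):

import java.util.Comparator;
import java.util.List;

// Illustrative least-loaded sprayer: each request goes to the back-end
// box that currently carries the fewest active connections.
public class LeastLoadedSprayer {

    // A back-end box as the sprayer sees it: an address plus a load metric.
    public static class Server {
        final String address;
        int activeConnections;

        Server(String address) {
            this.address = address;
        }
    }

    private final List<Server> pool;

    public LeastLoadedSprayer(List<Server> pool) {
        this.pool = pool;
    }

    // Route the current request to the least-loaded server in the pool.
    public Server route() {
        return pool.stream()
                   .min(Comparator.comparingInt((Server s) -> s.activeConnections))
                   .orElseThrow(() -> new IllegalStateException("empty pool"));
    }
}

A hardware sprayer makes the same kind of decision, but with load metrics gathered from the network rather than from an in-memory object.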
Static caching. Many vendors have boxes that can cache static content upstream of even the static web servers. These static caches can be independent boxes, or they can be included in other edge-server features. The combination of a proxy firewall and a cache for outbound requests (called a caching proxy) is fairly common. Because caching proxies bypass the communication that would otherwise take place inside the firewall and free the web servers and web application servers from serving those requests, they can have a significant impact on performance.
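The following sketch shows the basic check a caching proxy performs; the OriginFetcher hook is hypothetical and simply stands in for forwarding a missed request to the web servers inside the firewall:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative caching proxy: cache hits are served at the edge and never
// reach the web servers behind the firewall, which is where the
// performance win comes from.
public class CachingProxy {

    // Hypothetical hook that forwards a cache miss to the origin web server.
    public interface OriginFetcher {
        byte[] fetch(String url);
    }

    private final Map<String, byte[]> cache = new ConcurrentHashMap<>();
    private final OriginFetcher origin;

    public CachingProxy(OriginFetcher origin) {
        this.origin = origin;
    }

    // Serve from the cache when possible; go to the origin only on a miss.
    public byte[] get(String url) {
        return cache.computeIfAbsent(url, origin::fetch);
    }
}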
Dynamic caching. Increasingly, vendors are providing innovative caching solutions for dynamic content and pushing those functions up closer to the edge server. JSP fragments and specialized EJB caches are becoming more important as more pages are created dynamically.
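As a rough sketch of the idea (the class and keys here are illustrative, not a particular vendor's API), a dynamically generated fragment can be cached under the parameters that produced it and reused until it expires, unlike static content, which can simply be cached by URL:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative fragment cache: rendered output for a dynamic fragment is
// stored under the parameters that produced it, with a time-based expiry.
public class FragmentCache {

    private static class Entry {
        final String html;
        final long expiresAt;

        Entry(String html, long expiresAt) {
            this.html = html;
            this.expiresAt = expiresAt;
        }
    }

    private final Map<String, Entry> entries = new ConcurrentHashMap<>();
    private final long timeToLiveMillis;

    public FragmentCache(long timeToLiveMillis) {
        this.timeToLiveMillis = timeToLiveMillis;
    }

    // Returns the cached fragment, or null if it is missing or stale
    // and must be regenerated by the JSP or EJB layer.
    public String get(String key) {
        Entry e = entries.get(key);
        if (e == null || e.expiresAt < System.currentTimeMillis()) {
            return null;
        }
        return e.html;
    }

    public void put(String key, String html) {
        entries.put(key, new Entry(html, System.currentTimeMillis() + timeToLiveMillis));
    }
}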
An organization's security policy might specify that edge services satisfy requests via a cache. Other requests could be passed on to a web server, which would be responsible strictly for static content, and still others could be passed to the application server to resolve any dynamic content. In a subtle configuration variation, application server software could be deployed jointly with the web server to handle the view, the controller, and a thin model wrapper (such as the command layer in chapter 3 or the facade in chapter 8). Alternatively, the web server might be deployed alone, with the model, view, and controller deployed on the application server.
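As a minimal sketch of the thin-wrapper variation, a facade like the following could be deployed alongside the web server; the RemoteAccountModel interface is a hypothetical stand-in for whatever remote component (a session EJB, for example) implements the real model on the application server:

// Illustrative thin model wrapper deployed with the web server: the
// controller calls this local facade, which delegates the real model work
// to the application server behind the inner firewall.
public class AccountFacade {

    // Hypothetical remote interface implemented on the application server.
    public interface RemoteAccountModel {
        double balanceFor(String accountId);
    }

    private final RemoteAccountModel model;

    public AccountFacade(RemoteAccountModel model) {
        this.model = model;
    }

    // The view and controller never see the remote plumbing.
    public double lookUpBalance(String accountId) {
        return model.balanceFor(accountId);
    }
}

Deploying the wrapper with the web server keeps the remote call out of the controller; deploying the web server alone pushes that boundary back to the application server.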
The web application server houses the server-side model. EJB containers
would be deployed here, as would wrapping technologies for legacy systems.
The application server is usually deployed inside the innermost firewall for
additional security, and for performance reasons that we will explore next.
10.1.1 Layering hardware in homogeneous groups
In our configuration, we achieve scalability through independent servers that do one thing well. When we have a performance problem, we can simply increase capacity by adding another system that performs the needed function to the existing network. The key is that the configurations of the individual boxes must be identical.