Clustered Configuration with GeoServer Geospatial Software Part 1

Recently, we helped one of our clients who wanted to set up a GeoServer cluster. There are different ways to accomplish clustering depending on your specific needs, but we thought it would be illustrative to show what we did in this particular situation. Keep in mind this is a specific treatment and fairly tailored. We encourage you all to experiment with the newest features, but remember to do so in your testing environment!

We’ll start with some clustering theory and tips before launching into the actual details of how to do it.


A computing cluster consists of two or more machines working together to provide a higher level of availability, reliability, and scalability than can be obtained from a single node. Nodes in a cluster are positioned behind a proxy server and/or load balancer that delegates requests to cluster members based on any one member’s ability/availability to handle load.

Similar to other applications with long-running in-memory states and high data I/O, GeoServer sees performance gains with two (or more) nodes clustered behind a load balancer—even with the slight overhead of the load balancer that sits in front of the cluster.

Generally, there are two complementary purposes for clustering GeoServer:

  • To provide high-performance and/or throughput
  • To achieve high availability

In the most demanding situations, GeoServer can be deployed in combinations of high-performance and high-availability instances.

High-Performance Clusters

A high-performance GeoServer configuration deploys several instances of GeoServer on a single machine.

High-performance cluster

Each GeoServer instance is deployed into its own servlet container (Tomcat, Jetty, etc.). Individual servlet containers are configured independently and spin up their own JVMs, each with its own memory and processor allocations (borrowed from the pool of resources on the host machine). With such a deployment, GeoServer’s memory and CPU footprint can be tuned for high throughput under heavy concurrency, but keep in mind that these deployed units still compete for the physical server’s resources. To find the best balance we recommend, as always, testing for your particular scenario.
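As an illustration of the independent per-instance tuning (paths and heap sizes here are placeholders, not recommendations), each Tomcat’s memory allocation can be pinned in its own `bin/setenv.sh` so the cluster members divide host memory predictably:

```shell
# Illustrative only: give this Tomcat instance a fixed heap so cluster
# members divide host memory predictably. Sizes are placeholders; tune
# them against your own load tests.
CATALINA_OPTS="$CATALINA_OPTS -Xms2g -Xmx2g"
export CATALINA_OPTS
```

Repeating this in each container’s `setenv.sh` (with sizes that sum to less than the host’s physical memory) keeps the instances from starving one another.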

A load balancer or proxy fronts the cluster, and directs traffic to the member of the cluster most able to handle the current request. In this case, nodes will likely share the same server name or IP address, but listen for requests on different ports. For example:

Load Balancer @ http://<server>:80/<alias> forwards to one of:

  • GeoServer 1 in Tomcat 1 @ http://<server>:8081/geoserver
  • GeoServer 2 in Tomcat 2 @ http://<server>:8082/geoserver
  • GeoServer 3 in Tomcat 3 @ http://<server>:8083/geoserver
  • GeoServer 4 in Tomcat 4 @ http://<server>:8084/geoserver
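As a sketch, assuming nginx as the front-end load balancer (server names and ports mirror the example above; adapt to your proxy of choice), the forwarding might look like:

```nginx
# Hypothetical nginx front end for the high-performance cluster above:
# one host, four Tomcat instances on different ports.
upstream geoserver_cluster {
    server localhost:8081;
    server localhost:8082;
    server localhost:8083;
    server localhost:8084;
}

server {
    listen 80;
    location /geoserver/ {
        proxy_pass http://geoserver_cluster;
    }
}
```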

Deploying multiple instances of GeoServer into the same servlet container is not recommended. Because all instances then share a common JVM, host resources cannot be partitioned as cleanly, and competition for those resources limits the benefits of clustering.

Users might also consider the built-in clustering capabilities found in enterprise application servers (such as Oracle WebLogic or JBoss); however, that is beyond the scope of this discussion.

High-Availability Clusters

A high-availability implementation will spread several GeoServer instances across several machines (nodes) in a cluster. These nodes can be physical or virtual machines.

High-availability cluster

Nodes are normally located behind a load balancer that redirects traffic to any single GeoServer based on traffic volume and availability. In this case, nodes will likely be on different servers or IP addresses and listen for requests on the same port. For example:

Load Balancer @ http://<server>:80/<alias> forwards to one of:

  • GeoServer 1 in Tomcat 1 @ http://<server1>:8080/geoserver
  • GeoServer 2 in Tomcat 2 @ http://<server2>:8080/geoserver
  • GeoServer 3 in Tomcat 3 @ http://<server3>:8080/geoserver
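Again assuming nginx as the front end (host names are placeholders matching the example), the high-availability variant balances across hosts instead of ports:

```nginx
# Hypothetical nginx front end for the high-availability cluster above:
# same port on each node, different hosts.
upstream geoserver_ha {
    server server1:8080;
    server server2:8080;
    server server3:8080;
}

server {
    listen 80;
    location /geoserver/ {
        proxy_pass http://geoserver_ha;
    }
}
```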

Data directory location and catalog reloads

When clustering several instances of GeoServer, two important considerations are the location of the GeoServer data directory and the strategy for reloading each cluster member’s data catalog.

The GeoServer data directory is the location in the file system where GeoServer stores its configuration information. The configuration defines things such as what data is served by GeoServer, where it is stored, and how services such as WFS and WMS interact with and serve the data. The data directory also contains a number of support files used by GeoServer for various purposes.

The spatial data accessed by GeoServer doesn’t need to reside within the GeoServer data directory; the directory need only contain pointers to the data locations. This is obvious for data stored in spatial databases, which live in different locations (on disk) and often on different machines, but the same is true for file-based spatial data. (Read more about the GeoServer data directory.)

GeoServer’s catalog is an in-memory representation of the configuration in the data directory. Storing the configuration in memory means that GeoServer can access this information faster than by reading it off disk. However, this sometimes requires that the in-memory catalog be refreshed when configuration changes are made to the disk-based GeoServer data directory, or to the actual data served by GeoServer.

Unless the catalog configuration is largely static, or some amount of catalog discrepancy or temporary unavailability is acceptable, a common GeoServer data directory shared by all clustered instances is highly recommended.

The location of the GeoServer data directory is stored in the GEOSERVER_DATA_DIR variable. It can be configured in one of three ways: in each instance’s web.xml file (/webapps/geoserver/WEB-INF), through a common environment variable, or through a parameter passed to the JVM in the container start-up command.
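For example (the path here is a placeholder for your shared mount), the latter two options might look like this on a Tomcat host:

```shell
# Illustrative only; the shared path is a placeholder.
# Option 1: set GEOSERVER_DATA_DIR as an environment variable visible
# to the Tomcat process.
GEOSERVER_DATA_DIR=/mnt/shared/geoserver_data
export GEOSERVER_DATA_DIR

# Option 2: pass it as a JVM system property, e.g. in Tomcat's bin/setenv.sh.
CATALINA_OPTS="$CATALINA_OPTS -DGEOSERVER_DATA_DIR=/mnt/shared/geoserver_data"
export CATALINA_OPTS
```

The third option, a `context-param` in each instance’s web.xml, achieves the same thing but must be edited per instance, so the environment-variable or JVM-property routes are usually easier to keep consistent across a cluster.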

Some implementations have clustered GeoServer instances using separate data directories that are synchronized either manually (where change frequency is low) or automatically (e.g. using rsync), but neither approach is as common or as recommended as a shared data directory.

Regardless of the synchronization mechanism, changes to the data directory and the in-memory catalog should normally be directed through a single master GeoServer. This can be enforced by disabling the GeoServer user interface on all “slave” GeoServers, or by configuring the front-end load balancer to direct user interface requests (to /geoserver/web) only to the master GeoServer.
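As a sketch in nginx syntax (assuming a hypothetical upstream of all cluster members, named `geoserver_cluster` here, is defined elsewhere in the config), the admin UI can be pinned to the master while service traffic stays balanced:

```nginx
# Hypothetical: route the admin UI only to the master node. nginx picks
# the longest matching prefix, so /geoserver/web/ wins over /geoserver/.
location /geoserver/web/ {
    proxy_pass http://master-node:8080;
}
location /geoserver/ {
    proxy_pass http://geoserver_cluster;  # upstream of all members, defined elsewhere
}
```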

Changes to the master GeoServer’s data catalog must be explicitly refreshed on slave instances. This can be accomplished manually through the GeoServer Admin web UI (/geoserver/web), or with some measure of automation (on a schedule, or after a trigger is fired) using GeoServer’s REST API (e.g. by sending a POST/PUT request to /geoserver/rest/reload?recurse=true).
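A minimal sketch of that automation, with placeholder hostnames and the actual `curl` call left commented out (it would POST to each slave’s REST endpoint in a live cluster):

```shell
# Sketch: build the catalog-reload URL for a node, then trigger it on
# each slave. Hostnames and credentials are placeholders.
reload_url() {
    echo "http://$1:8080/geoserver/rest/reload"
}

for host in server2 server3; do
    # Against a live cluster you would run:
    #   curl -s -u admin:geoserver -X POST "$(reload_url "$host")"
    echo "would POST $(reload_url "$host")"
done
```

A cron entry or a post-deploy hook calling a script like this keeps the slaves’ catalogs in step with the master without manual UI visits.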

Clustering Enhancements

Enhancements to our clustering story are coming! In future releases of GeoServer, the data directory will have the option to be database-backed. A central configuration store can be queried more efficiently than its file-based counterpart and doesn’t need to be read into memory in its entirety.

In the next post, we’ll go into the details on setting up a clustered instance. Remember, Enterprise: Platform clients and higher get custom clustering and deployment advice included in their maintenance agreements.

Have you been looking at deploying GeoServer in a clustered environment? Tell us about it!