Boundless Connections: Ann Johnson, CEO


Over the past year, Boundless has grown tremendously: becoming an independent company, expanding and enriching our offering of Spatial IT solutions and services, and deepening and broadening our team with experts and leaders from across the geospatial and enterprise IT fields. This marks a new chapter for our company, one designed to capitalize on what we see as a major inflection point in this industry. As sensors and data sources proliferate in number and type, geospatial data collaboration is becoming the key problem facing the Spatial IT industry.

With that in mind, we are accelerating our transformation with the addition of Ann Johnson as our new CEO. This new chapter requires an additional set of leadership skills, and Ann delivers them in spades with her strong background in cloud computing, security, and enterprise IT software broadly. She joins us from leading security software companies, where she served in a range of executive positions and developed her expertise in infrastructure, storage, and security. Her knowledge and executive experience will greatly strengthen our leadership and team as we continue to enhance solutions like OpenGeo Suite, develop new offerings, and maximize the significant opportunity ahead of us.

Welcome to the team! What was your role before you joined us?

Thank you! I’m excited to join the team, and have spent the last few weeks developing a better understanding of the company, solutions and industry.  My background maps very well to the changes taking place in the Spatial IT industry and the transformation at Boundless. My last fourteen years have been dedicated to the cloud computing and security software industry, most recently working with the talented team at Qualys. I spent the majority of that time at RSA, leading the global fraud team, which was a very rewarding experience as the company has an excellent vision and strategy and truly allowed me to grow my career in a meaningful way.

What are you most proud of from your time at RSA?

Team building! The most rewarding part of any leadership role is the ability to recognize talent and work with individuals to help them realize their career goals. Providing the guidance, coaching and support along the way as individuals grow and develop is what I enjoy most, by far. I was fortunate to be surrounded by amazing professionals at RSA and was able to function in a true leadership role by enabling talent to make an immediate and meaningful impact.

What made you interested in working at Boundless?

Boundless has an impeccable reputation amongst customers and the wider industry for employing highly talented and passionate geospatial professionals and developing high quality, innovative solutions. The software we develop has the ability to transform the GIS industry and help it pivot into a new Spatial IT industry built on proven open source tools. The depth of talent at Boundless, the quality of the software, and the feedback I received from customers made the opportunity at Boundless too compelling to pass up.

What excites you most about the tools and products that Boundless is currently developing?

Timing is everything! My arrival at Boundless coincided with the release of OpenGeo Suite 4.1, which adds support for deployments on AWS and VMware along with a new MongoDB connector and a QGIS package. These enhancements continue to differentiate Boundless from the competition as a true leader in the Spatial IT sector.

What do you see as some of the benefits of using open source tools?

I come from a very strong infrastructure and software background, so I’ve experienced many different technologies, solutions, services, and tools throughout my career. The most compelling case for open source is the richness of the various communities, and the speed at which they can iterate and deliver highly innovative solutions. The passion of developers who are committed to open source is evidenced by both the quality of code they develop and the creativity and dedication they use in solving problems. It is not uncommon for a problem or idea to lead to an all-night development effort with the entire community contributing in an inspiring way. This leads to an energizing culture and collaborative dynamic for companies, like Boundless, that contribute to the community and address customer challenges faster and more affordably than any proprietary solution.

Where do you see the geospatial industry moving in the future?

We’re at a major inflection point in our industry. We’re on the cusp of Spatial IT becoming truly ubiquitous. As the “Internet of Things” continues to proliferate, the ability to provide contextual mapping for all manner of objects and devices, whether used by consumers or professionals, will become increasingly relevant. It will simply become what people expect – an integral part of how we live, work and play. We need tools that are highly scalable, flexible, easy and affordable to deploy, and easy to integrate. Geospatial is not just about producing the best or prettiest maps; it is also about providing context and producing geospatial data that provides real value. We are solving a major “big data” problem and looking toward the future. The company that can balance strength in the core geospatial market with a vision for the future that takes into account the larger contextual global data opportunity will be the next generation of leaders in the Spatial IT segment.

What are your plans for the future of Boundless?

That’s a great question! You can expect to see us continue to honor our open source heritage while developing the highest quality solutions for both on-premise and platform-as-a-service delivery, allowing our current and future customers to develop their geospatial-enabled applications with the utmost confidence. We have a strong reputation to maintain for industry leadership, quality, and vision. The opportunity to continue to grow the company with our current offerings while bringing unique and disruptive solutions to market is endless. We will focus on where we are strong and continue to lead the industry in the next evolution of geospatial data collaboration.

What’s an interesting fact about yourself (that you haven’t already said)?

Outside of work, I am actually quite a homebody, which often comes as a surprise to people. My non-work life revolves around my family. I am passionate about gardening, cooking, reading, and anything related to water sports. I am also a large supporter of animal rescue organizations such as Best Friends Animal Society and take pride in playing “Mom” to a houseful of rescue mutts.


Improved Mapmeter Integration in OpenGeo Suite 4.1


Mapmeter helps developers, system administrators, and managers better understand and use their geospatial deployments. Included exclusively with supported OpenGeo Suite instances, Mapmeter helps reduce costs, optimize applications during development, diagnose critical issues, and make decisions about production deployments.

OpenGeo Suite 4.1 makes getting started easier than ever. A single click is all that is needed to begin a free two-week trial of monitoring your OpenGeo Suite instance. A chart on the GeoServer administration interface will show the daily total number of requests, offering a bird’s-eye view of the usage of your instance. Behind the scenes, the trial will create an anonymous Mapmeter user account and configure the GeoServer instance with a new API key.

Installation Instructions

First, install the Mapmeter extension. The specific steps will depend on your particular environment, but packages exist for all supported operating systems. Then navigate to the Mapmeter menu item in the GeoServer administration interface to access the monitoring extension.

Finally, click on the “Activate Mapmeter Trial” button.

That’s it! Your GeoServer instance is now being monitored.

Next Steps

Once the free trial has been activated, your account will be anonymous, so you won’t be able to log in and view more details in the Mapmeter web application. You’ll want to choose a username and password to access the richer analytics provided by the web application.

Existing Account

Already have a Mapmeter account? Great! All you need to do is set the API key in the GeoServer administration interface. The instance will then start being monitored. If you configure a username and password, the Mapmeter chart will start appearing on the GeoServer home page as well.

Automation

All Mapmeter configuration will be stored in <data-dir>/monitoring/mapmeter.properties. Fresh installations need only have this file configured properly on startup. Additionally, there is a new REST API that allows dynamic configuration of the GeoServer instance. Enterprise customers can contact our support desk for more information.

It’s never been easier to get started with Mapmeter! For more information about Mapmeter, please visit: http://boundlessgeo.com/solutions/mapmeter/

Automated clustering in OpenGeo Suite 4.1


The newest release of OpenGeo Suite continues to improve deploying GeoServer in a cluster. Clustering, for those who are not familiar, allows multiple servers to handle more connections simultaneously. This provides many benefits for OpenGeo Suite in the form of increased scalability, redundancy, and load balancing. We’ve mentioned how to run GeoServer in a clustered configuration (part 1, part 2) before, but these improvements make spinning up clusters even easier for systems administrators. While setting up a cluster typically involves getting servers into place and then installing and configuring software, enterprise customers can quickly create GeoServer clusters using new automation options in OpenGeo Suite 4.1.

Using best practices developed and refined with help from clients such as the FCC and NOAA, this new release uses the Ansible automation language to simply and efficiently deploy GeoServer in a clustered configuration, either locally or in the cloud. The software installs and configures GeoServer and ancillary software, resulting in a cluster that is ready to have data imported. At a minimum, each cluster contains two GeoServer instances backed by either Amazon RDS or two Postgres instances. Whether deploying on Amazon or hosted virtual machines, this release makes it easier for system administrators to scale up GeoServer instances in less time and with less effort.

Cluster on Amazon

Deployments on Amazon automatically spin up all the necessary instances, install and configure the software, and create a Relational Database Service (RDS) backend as well as an Elastic Load Balancer (ELB) instance. The RDS and ELB instances give the cluster load balancing on the frontend and redundancy and scalability on the backend.

Cluster with Hosted Virtual Machines

For enterprises that prefer to host their own clusters using tools such as VMware, this release provides automatic installation and configuration of GeoServer as well as the necessary Postgres instances for data replication. Unlike offerings on Amazon, self-hosting allows for the use of third-party load balancers of the system administrator’s choice.

Contact us for information about automated clustering with OpenGeo Suite.

OpenGeo Suite 4.1 Released!

Boundless is proud to announce the release of OpenGeo Suite 4.1! Each new version of OpenGeo Suite includes numerous fixes and component upgrades, and this release brings many new features and improvements to the platform, including:

  • Improved enterprise packages. Whether developing applications on OS X or deploying a Windows production system, our packages are designed to help you use our software more effectively. Now enterprise customers also benefit from typical IT deployment options like supported deployments for Amazon Web Services and VMware.

  • Automated clustering. We’re continually improving our clustering features for those needing high availability or better scaling under load. These improvements make spinning up clusters even easier for systems administrators by providing new automation options.

  • New data sources. OpenGeo Suite can now publish data from MongoDB and GeoPackage.

  • New Boundless SDK templates that use a production-ready version of OpenLayers 3. These free templates make it easier and faster to deploy enterprise applications built using OpenGeo Suite.

  • QGIS installer. Dramatically increase the ease and affordability of desktop analysis and editing tools with our bundled QGIS that includes the OpenGeo Explorer plugin to publish maps and data directly to OpenGeo Suite.

  • Mapmeter now included! Our monitoring service helps developers, system administrators, and managers better understand and use their geospatial deployments so we’re including the service for all OpenGeo Suite enterprise customers.

Try it!

Download OpenGeo Suite 4.1 and try our census map tutorial or heat map tutorial to learn more. Details about this release are included in the release notes and, as always, we strongly advise you to read the upgrade instructions and back up your data before installing.

Mapping #WorldCup with OpenGeo Suite and MongoDB

The upcoming release of OpenGeo Suite 4.1 supports the popular MongoDB document-oriented database with a new connector that allows GeoServer to publish geospatial data stored in MongoDB. We’ve partnered with MongoDB and use their software to power services such as Mapmeter, so we’re pleased to provide enterprise customers with more choices for databases to power their geo-enabled applications.

Support for a document-oriented database like MongoDB is a first for GeoServer, which has historically supported only structured data formats. The flexibility of MongoDB coupled with support for geospatial data paves the way for some very interesting applications.

One example well suited to storing data in MongoDB is ingesting results from the Twitter search API, which supports geolocation (see the sketch after this list):

  • The Twitter API provides JSON output, which makes import into MongoDB trivial, since MongoDB uses BSON (Binary JSON) as its internal structure.
  • The Twitter “firehose” returns data at high frequency, so inserts have to be fast. MongoDB is designed to provide high throughput for inserts and could even be scaled horizontally for this scenario.
  • The flexibility of schemaless storage means we don’t really need to care too much about how the data is structured in the database and can change it in the future.
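To make this concrete, here is a minimal sketch in the mongo shell; the collection name, document fields, and coordinates are illustrative, with the coordinates field mirroring the GeoJSON point that the Twitter API returns for geotagged tweets:

// Insert one geotagged tweet; MongoDB stores the JSON document as-is.
db.tweets.insert({
  text: "Goooooal! #WorldCup",
  created_at: new Date("2014-06-12T17:00:00Z"),
  coordinates: { type: "Point", coordinates: [-43.23, -22.91] }  // [lon, lat]
});

// A 2dsphere index enables geospatial queries against the tweets.
db.tweets.ensureIndex({ coordinates: "2dsphere" });

// Example query: tweets within roughly 10 km of a given point.
db.tweets.find({
  coordinates: {
    $near: {
      $geometry: { type: "Point", coordinates: [-43.23, -22.91] },
      $maxDistance: 10000
    }
  }
});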

#WorldCup

The FIFA 2014 World Cup makes the hashtag #WorldCup a great source of geolocated tweets. With some moderate data wrangling and some geocoding, it’s possible to import tens of thousands of tweets. With these “geotweets” in hand, it is possible to display them with GeoServer. In a few clicks and a couple of keystrokes, it’s easy to create a new layer from the MongoDB store.

And voila!

Here are some ten thousand tweets mentioning #WorldCup visualized on a map:

[Map: tweets mentioning #WorldCup]

Editing in OpenLayers 3 using WFS-T


We’ve talked about building applications with OpenLayers 3 before, but mostly focused on how to display maps. OpenLayers 3 includes all the building blocks needed to create an editing application as well, such as support for feature editing (insert, update and delete) and writing back the changes to the server through a transactional Web Feature Service (WFS-T).

The OpenLayers vector-wfs example demonstrates how to use a vector layer with a BBOX strategy served up by a WFS such as GeoServer, so make sure to look at (the source of) this example before reading this post if you are not familiar with it already.

The parts of OpenLayers 3 that we need to support feature editing through WFS-T are:

  • ol.interaction.Draw
  • ol.interaction.Modify
  • ol.format.WFS

Insert

Let’s start with the case of inserting a new feature. Make sure to configure the Draw interaction with the correct geometry type (config option named type) and the correct geometryName. Also have it point to the vector layer’s source through the source config option. The Draw interaction fires an event called drawend when the user finishes drawing. We can use this event to create a WFS transaction and send it to the WFS server. In order to do this we’ll create an ol.format.WFS instance and call its writeTransaction method. Since we only have a single feature to insert, we’ll pass in an array of that single feature as the first argument, and null for the updates and deletes arguments. For the options argument we need to pass in:

  • gmlOptions: an object with an srsName property which uses the map’s view projection
  • featureNS: the URI of the featureType we are using in the application
  • featureType: the name of the featureType

To find out the value of featureNS, you can use a WFS DescribeFeatureType request. The writeTransaction method gives us back a node which we can then serialise to a string using the browser’s XMLSerializer object. Use jQuery ajax or the browser’s built-in XMLHttpRequest to send the string to the WFS server using HTTP POST. When the result comes back, we parse it with ol.format.WFS’s readTransactionResponse method, obtain the feature id assigned by the WFS server, and set it on the feature using its setId method.
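Putting the pieces together, a rough sketch of the insert path might look like the following (the feature type name, namespace URI, WFS endpoint, and the vectorSource and map variables are placeholders for your own setup):

var wfsFormat = new ol.format.WFS();

var draw = new ol.interaction.Draw({
  source: vectorSource,   // the vector layer's source
  type: 'Polygon',        // must match the layer's geometry type
  geometryName: 'geom'    // must match the feature type's geometry attribute
});
map.addInteraction(draw);

draw.on('drawend', function(evt) {
  // Build an insert-only transaction: one feature, no updates, no deletes.
  var node = wfsFormat.writeTransaction([evt.feature], null, null, {
    gmlOptions: {srsName: map.getView().getProjection().getCode()},
    featureNS: 'http://example.com/myworkspace',  // from DescribeFeatureType
    featureType: 'mylayer'
  });
  var xhr = new XMLHttpRequest();
  xhr.open('POST', '/geoserver/wfs');
  xhr.setRequestHeader('Content-Type', 'text/xml');
  xhr.onload = function() {
    // Read back the id assigned by the server and set it on the feature.
    var response = wfsFormat.readTransactionResponse(xhr.responseXML);
    evt.feature.setId(response.insertIds[0]);
  };
  xhr.send(new XMLSerializer().serializeToString(node));
});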

Update

For updating the geometry of an existing feature, we use ol.interaction.Modify in conjunction with an ol.interaction.Select instance: users first select a feature by clicking, and can then modify it. When we create the Modify interaction, we pass in the feature collection of the Select interaction. We also keep a hash of modified features, which we use to determine whether we need to send a WFS Update transaction. We listen for the add and remove events of the Select interaction’s feature collection. In the listener for the add event, we listen for the change event on the feature; once change fires, we add an entry to the modified features hash keyed on the feature’s id. In the listener for the remove event (which fires when the feature gets unselected), we check if the feature is in the modified hash, and if it is, we use ol.format.WFS again to write out the Transaction XML (now passing an array of a single feature as the updates argument) and send it to the WFS server. When the result comes back, we parse it using the readTransactionResponse method from ol.format.WFS and check whether totalUpdated is set to 1, which indicates that our transaction was processed successfully by the WFS server.

Delete

For deleting a feature, we use the select interaction. If a feature is selected, and the user presses the Delete button, we’ll use ol.format.WFS again to write out the Transaction XML, but only passing in an array of a single feature to the deletes argument. We will then again transform this into a string and send it to the WFS using AJAX.
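Both paths reuse the serialization from the insert sketch above; only the arguments to writeTransaction change. As a hedged sketch, where options is the same gmlOptions/featureNS/featureType object as before and modifiedFeatures is the hash described earlier:

// Update: modified features go in the second argument.
var updateNode = wfsFormat.writeTransaction(null, [feature], null, options);

// Delete: features to remove go in the third argument.
var deleteNode = wfsFormat.writeTransaction(null, null, [feature], options);

// After POSTing the serialized node, confirm the server applied the change.
var response = wfsFormat.readTransactionResponse(xhr.responseXML);
if (response.transactionSummary.totalUpdated === 1) {
  delete modifiedFeatures[feature.getId()];  // the feature is clean again
}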

Boundless SDK

While this whole process may sound pretty cumbersome, the upcoming OpenGeo Suite 4.1 release will make your life a lot easier: the Boundless SDK will ship with OpenLayers 3 templates that facilitate creating an editing application based on OpenLayers 3. So stay tuned.

[Screenshot: OL3 editing application]

Our new GeoServer certification course is now available!

As a core component of our flagship OpenGeo Suite product, GeoServer is one of the most important projects we work on. We helped start the GeoServer project over a decade ago and continue to be active members of the vibrant community that has sprung up around it. We’ve also provided training from the beginning. Whether as interactive sessions like those at the FOSS4G conference, online webinars, or introductory workshops, those who want to learn about GeoServer know to come to us.

Many of you have expressed an interest in going deeper with GeoServer, learning the fundamentals inside and out, in a way that a half-day workshop just can’t accomplish. We’re happy to announce that we’ve got your answer.

A (big!) new training course

Over the past few months, we’ve developed an entirely new GeoServer curriculum that starts with spatial basics and web services, then moves on to administration and deployment. This course provides attendees with solid knowledge of how to use and administer GeoServer.

OpenGeo Suite: GeoServer I contains lots of multimedia and interactive content, including over one hundred exercises to follow along with and then do yourself. As tested in a classroom environment, the course is four days in length, but the online version allows you to work at your own pace and on your own schedule.

To help make learning easier, we’ve included over three hours’ worth of video material to take you through the course. We are also available via email or during weekly office hours to answer any questions that arise during your progress.

Certification and credit

Finally, no course would be complete without a final exam, which will offer a Boundless GeoServer Certification for those who qualify. We offer the only GeoServer certification on the market, so you will definitely want this if GeoServer is part of your skill set. Also, for those GIS Professionals out there, this course (along with our entire training catalog) is eligible for GISP certification credit through the GIS Certification Institute. And the best part? You can enroll today.

Start right now!

The course is available online for anyone who wants to advance their knowledge of GeoServer. No need to schedule an on-site visit to one of our offices or wait for us to come to an event near you. No need to contend with time zone differences either. You can work on your own time and at your own pace.

If you’re already a GeoServer expert, then you can also skip the course and sign up for the certification exam.

We’re really excited about this new offering, and are happy to be able to open it up to you right now.

Want to see more? Great!

Spanning the Globe

Eddie Pickle

It’s rather interesting (and really fun) to be a part of both the “traditional” GIS/BI community and the newer geoweb community. Because our geospatial software makes this possible, I attended two radically different events in the past week that span this divide. One, the Location Intelligence Summit, is an established event held on corporate turf at the Washington DC Convention Center. The other, the GeoWeb Summit, is a newer meetup held in the grittier (and yet somehow more expensive) Dumbo neighborhood of Brooklyn. I felt right at home in each.

The Location Intelligence Summit, which Jody Garnett blogged about in great detail, did a great job representing the Old Guard. The event pre-dates Google Earth, harking back to a time when “location intelligence” came from geocoding customer records while Navteq and TeleAtlas drove the streets. Back then, “spatial was special” and getting and managing data in a GIS was a major effort.

GeoWeb Summit came later, when geospatial data started to come from everything (mobile devices, cameras, sensors everywhere — even my dog has a chip now!). Now spatial is context for data and applications, not necessarily more special than any other aspect of data. We call this Spatial IT, and Paul Ramsey presented on it at length.

Fast forward to today, and at both events there was a strong drive to get to the “now” — to capture, analyze, and visualize real-time data; leverage the crowd; track things indoors and out; and so much more. At each conference it was clear that there is no way the applications enterprises seek for modern workflows, monitoring, analysis, application serving, and more could happen without the interoperable, scalable capabilities of open source. In the case of the Location Intelligence world, I was part of a whole new LocationTech track for high-performance, location-aware technology. Here, open source is a way out of the historical trap of proprietary software’s locked doors. For the GeoWeb Summit, that’s a starting assumption. In both cases, it’s great to be a foundation for the future.


Thoughts from Location Intelligence 2014

Jody Garnett

For two and a half days last week, Location Intelligence 2014 took place in Washington DC. The conference was hosted by Directions Magazine, Oracle, Here (a Nokia business) and the Eclipse Foundation’s LocationTech initiative. The result was a diverse mix representing our industry and a strong outreach to the Business Intelligence community.

Workshops

The first ‘half day’ of the conference started with workshops.


David Winslow and Juan Marin from Boundless offered a tag-team introduction to GeoGig (formerly GeoGit) called Redefining Geospatial Data Versioning: The GeoGig Approach. The highlight of this workshop was a live demonstration of the GeoGig plugin for QGIS. My contribution was a series of illustrations showing each step of the workshop and running around the room to assist as needed.


Next up was a workshop with Ivan Lucena from the Oracle Raster team called Developing Geospatial Applications with uDig and Oracle Spatial. Ivan has been working on integrating Oracle Raster with GeoTools and uDig. I was quite pleased with the workshop participation, and there was a great Q&A session with Xavier Lopez covering Oracle’s involvement in open source.

LocationTech Summit

The second day featured the LocationTech Summit, primarily moderated by Andrew Ross with help from Geoff Zeiss. Presentations were recorded, and can be found on this YouTube playlist.


Modern workflow for managing Spatial data: Eddie Pickle was on hand to introduce us as a company and to share his own take on where the industry needs to be next. Watch on YouTube.

High performance computing at LocationTech: Xavier Lopez went over Oracle’s interest in open source spatial technologies and their support of LocationTech to facilitate this direction.

21st Century Geospatial Data Processing: Robert Cheetham (Azavea) from the GeoTrellis project provided context for his work. Watch on YouTube.

Open Data straight from the Source: Andrew Turner of Esri provided a straightforward perspective on his recent open data efforts. I quite enjoyed the cross-linking from GeoJSON files on GitHub to browsable map and spreadsheet display. Watch on YouTube.

Redefining Geospatial data versioning: The GeoGig approach: Juan Marin from Boundless provided a well-received GeoGig presentation, resulting in much discussion, some of which is creeping out onto the internet in the form of a blog post by Geoff Zeiss — thanks Geoff! Watch on YouTube.

Collaborative Mapping using GeoGig: Scott Clark from LMN Solutions shared a GeoGig success story. I was impressed with how far they have come using technology from all over the OpenGeo stack. Scott previewed a live demo server if you would like to see how they are doing. One aspect of Scott’s presentation I appreciated was the emphasis that you can use GeoGig today by making use of the tools (and desktop apps) you are familiar with. Watch on YouTube.

GeoMesa: Scalable Geospatial Analytics: Anthony Fox of CCRi offered an introduction to one of LocationTech’s big cloud powerhouses. Watch on YouTube.

Real-time Raster Processing with GeoTrellis: Robert Cheetham was back with the details this time around. I was a bit taken by surprise by the scope of GeoTrellis as it moves beyond cloud raster work and starts to look into network analysis. They have a demo up if you would like to take a look. Watch on YouTube.

Fusing Structured and Unstructured Data for Geospatial Insights in Lumify: A polished presentation from Altamira. The open source project clavin.io is responsible for figuring out the location information from otherwise innocent text documents. Watch on YouTube.

The last “from data to action” session provided a series of inspirational stories:

  • Erek Dyskant from Bluelabs described using the usual suspects (PostGIS, GeoServer, OpenLayers, QGIS) in their talk. The take-home for me was the reduction in labor from a 150-person analytics team in 2012 down to 8 in 2013. How? By focusing on up-to-date data, rather than being distracted by a BI tool.

  • The talk on Red Hook Wifi was great at a technical buzzword bingo level (wifi mesh network!) and human level (enabling communication after Hurricane Sandy hit).

All in all, the LocationTech Summit was a great addition to the event. It offered a wide range of technology, data, and human stories for those who attended. On a technical front, I was quite pleased with the number of teams using GeoServer and happy to see GeoServer WPS being used in production.

Wrapping Up

The final day was devoted to the Oracle community, resulting in some great conversations at the Boundless booth.


I was pleased to meet up with a fellow Australian, Simon Greener, who was (as always) focused on Oracle performance.

Thanks to the conference organisers for an entertaining venue to talk about the technologies we know and love.

Citi Bike Analysis and Automated Workflows with QGIS

Victor Olaya

Citi Bike, the bike share system in New York City, provides some interesting data that can be analyzed in many different ways. We liked this analysis from Ben Wellington that was performed using IPython and presented using QGIS. Since we wanted to demonstrate the power of QGIS, we decided to replicate his analysis entirely in QGIS and go a little bit further by adding some extra tasks. We automated the whole process, making it easy to add new layers corresponding to more recent data in the future.

Data can be found on the Citi Bike website and is provided on a monthly basis. We will show how to process one of these files (February 2014) and discuss how to automate processing for the whole set of files. We will also use a layer with the borough boundaries from the NYC Department of City Planning.

Getting started

Open both the table and the layer in QGIS. Tables open as vector layers and are handled much in the same way except for the fact that they lack geometries. This is what the table with trip data looks like.

[Screenshot: attribute table of the trip data]

We have to process the data in order to create a new points layer, with each point representing the location of an available bike station and having the following associated values:

  • Median rider age
  • Median trip duration
  • Percentage of male users
  • Percentage of subscribed users
  • Total number of outgoing trips

Computing a new layer

As we did in a previous bike share post, we can use a script to compute this new layer. We can add a new algorithm to the Processing framework by writing a Python script. This will make the algorithm available for later use and, as we will see later, will integrate it in all the Processing components.

You can find the script here. Add it to your collection of scripts and you should see a new algorithm named “Summarize Citi Bike data” in the toolbox.

Double click on the algorithm name to execute it and enter the table with the trips data as input.

[Screenshot: parameter dialog of the script]

Running the algorithm will create a new layer of stations, containing the computed statistics for each of them in its attribute table.

[Screenshot: resulting stations layer]

To create an influence zone around each point, we can use the Voronoi Polygons algorithm.

[Screenshot: parameter dialog of the Voronoi Polygons algorithm]

We are using a buffer zone to cover all of Manhattan. Otherwise, the polygons would just include the area within the convex hull of the station points. Here is the output layer.

[Screenshot: Voronoi polygons output]

The final step is to clip the polygons with the borough boundaries layer, to remove areas overlapping with water. We will use the Clip algorithm, resulting in this:

[Screenshot: clipped polygons]

Visualizing the data

We can now change the style and set a color ramp based on any of the variables that we have computed. Here is one based on the median trip time.

[Screenshot: color ramp based on median trip time]

Up to this point, we have replicated the result of the original blog entry, but we can go a bit further rather easily.

Creating a model

For instance, let’s suppose that you want to do the same calculation for other months. The simplest alternative would be to open all the corresponding tables and re-run all the above steps for each of them. However, it would be better if we could put all of those steps in a single algorithm that computes the final polygons from the input table. We can do that by creating a model.

Models are created by opening the graphical modeler and adding inputs and algorithms to define a workflow. A model defining the workflow that we have followed would look like this.

[Screenshot: the model in the graphical modeler]

In case you want to try it yourself, you can download it here. Now, for a new set of data (a new input table), you just have to run a single algorithm (the model that we have just created) in order to get the corresponding polygons. The parameter dialog of the model looks like this:

[Screenshot: parameter dialog of the model]


Batch processing and other options

This, however, might be a lengthy operation once we have a few tables to process, but we can add some additional automation. The model we have created can be used just like any other algorithm in Processing, meaning that we can use it in the Processing batch interface. Right-clicking on the model and selecting “Execute as batch process” will open the batch processing dialog.

[Screenshot: batch processing dialog]

Just select the layers to process and the output filenames in the corresponding columns, and the full set of resulting layers will be computed automatically in a single execution.

With a bit of additional work, these layers can be used, for instance, to run the TimeManager plugin and create an animation, which will help show how bike system usage varies over the course of the year.

Other improvements can also be added. One of them would be to write a short script that creates an SLD file for each layer based on its data, adjusting the boundaries of the color ramp to the min and max values in the layer, or using some other criteria. That would allow us to create data-driven symbology, and we could add the algorithm created with that script as a new step in our model.

Publishing to OpenGeo Suite

Another improvement that we can add is to link our model to the publishing capabilities of the OpenGeo Suite plugin. If we want to publish the layers that we create to a GeoServer instance, we can also do it from QGIS. Furthermore, we can call that functionality from Processing, so publishing a layer and setting a style can be included as additional steps of our model, automating the whole workflow.

First, we need a script to upload the layer and a style to GeoServer, calling the OpenGeo Suite Plugin API. The code of this script will look like this:

##Boundless=group
##Import styled layer to GeoServer=name
##Layer=vector
##SLD_style_file=file
##url=string
##user=string
##password=string
##workspace=string

from qgis.core import *
from PyQt4.QtCore import *
import processing
from opengeo.qgis.catalog import createGeoServerCatalog

layer = processing.getObject(Layer)  # resolve the input layer from the parameter value
layer.loadSldStyle(SLD_style_file)  # apply the chosen SLD style to the layer locally
catalog = createGeoServerCatalog(url, user, password)  # connect via the OpenGeo Suite plugin API
ws = catalog.catalog.get_workspace(workspace)  # look up the target workspace
catalog.publishLayer(layer, ws)  # publish the styled layer to GeoServer

You can copy that code and create a new script or you can install this script file.

A new algorithm is now available in your toolbox: “Import styled layer to GeoServer”.

[Screenshot: parameter dialog of the import script]

Right-click on the name of the model that we created and select “Edit model” to edit it.

You can now add the import script algorithm to the model, so it takes the resulting layer and imports it.

[Screenshot: extended model with the import step]

Notice that, although our script takes the URL and other needed parameters for the import operation, these are not requested of the user when running the model, since we have hardcoded them assuming that we will always upload to the same server. This can of course be changed easily to adapt to a given scenario.

Running the algorithm will now compute the polygons from the input table, import the result, and set a given style, all in a single operation.

The style is selected as another input of the model, and it has to be entered as an SLD file. You can easily generate an SLD file from QGIS, just by defining it in the properties of the layer and then exporting as SLD. The SLD produced by QGIS is not fully compatible with GeoServer, but the OpenGeo Suite plugin will take care of that before actually sending it to GeoServer.

Although we have left the style as an input that has to be selected in each execution, we could hardcode it in the model, or even in the script dialog used for importing. Again, there are several alternatives to build our scripts and models.

Of course, this improved model can also be run in batch processing mode, like we did with the original one.

Conclusion

QGIS is an ideal tool for working with and analyzing spatial data, and with our plugin it is also an easy interface for publishing data to OpenGeo Suite. Integrating Processing with the OpenGeo Suite plugin enables all sorts of automated analysis and publishing workflows, making it possible to execute complete workflows from a single application.