A Desktop Analyst’s Guide to QGIS – Part 1: The Basics

QGIS has received a lot of press and activity lately. The community just wrapped up its 2015 User Conference in Denmark and announced that QGIS 2.10 is in beta testing. At Boundless, Victor Olaya, my colleague and QGIS core contributor, just finished a post describing how QGIS can support MGRS coordinates. Another colleague, Aaron Miller, posted a blog about how you can use QGIS to perform logistics routing. Late last year, Boundless responded to numerous market questions with a four-part blog series on how QGIS compares to proprietary desktop GIS software. We pointed out that QGIS is easy to install, integrates with OpenGeo Suite, and has reliable support offerings, making it a very viable alternative to ArcGIS for Desktop. We then went into detail on how to use QGIS to perform visualization, cartography, analysis, and editing.
calamito_qgis1

There is a good reason we are taking the time to highlight QGIS. You may recall I blogged about how more and more customers are asking how OpenGeo Suite can work as part of a hybrid architecture of both proprietary and open source software. Customers appreciate that hybrid migration strategies targeted at non-power users can quickly realize significant savings, and replacing proprietary (not to mention expensive) desktop GIS applications with QGIS is a great place to start.

In my experience, the vast majority of desktop GIS users fit the ‘light to moderate’ use case – what most of us know as the 80/20 rule. That is, 80% of users only use about 20% of the software’s total functionality: plotting dots on a map, exporting simple overlays as JPEGs, digitizing vector content from imagery, and simple analysis such as point-in-polygon filtering. Of course, there is still the 20% who perform more in-depth analytical functions and complex cartographic work, but they are by far the minority. So you have to ask yourself: is it worth spending all that money on proprietary desktop GIS software if you aren’t utilizing its full potential? Is there not a better option for the majority of users who simply wish to geo-enable their content? Put another way, why are you paying for the Ferrari when all you need is the Vespa? The market is validating that QGIS satisfies this need – don’t take my word for it, do a Twitter search for “qgis” – and it is a great first step toward achieving a hybrid GIS architecture.

The legacy gripe about QGIS is that it is not as intuitive as users would like. I’ll admit, older versions of the user interface were a bit clunky and made the transition from proprietary desktop GIS too big a leap for most analysts (myself included). But QGIS has come a long way, and if you haven’t played with it in a while (or at all), I highly recommend you give it a try. It runs on Windows, Linux, and OS X, and there is even a new version that runs on Android. The 2.0 release focused largely on the look and feel of the application, and it is now more intuitive than ever. Even still, many folks are hesitant to make the jump to QGIS simply because they don’t want to take the time to learn the new ‘buttonology’.

So I decided to write a two-part series to showcase just how simple it is for a typical desktop analyst to migrate to QGIS. Part 1 illustrates how to accomplish the ten functions and workflows desktop analysts perform most often, and Part 2 will focus on extending QGIS through plugins. QGIS really is simple to use, and it will satisfy the needs of 80% of users without the high license cost of proprietary desktop software. So without further ado, and in no particular order…

  1. Overlay of Data

You can add all types of data to your QGIS project by using the Add Data buttons on the Layers toolbar. QGIS handles most open and proprietary spatial data formats right out of the box, including shapefiles, file geodatabase layers, commercial imagery, and data stored inside relational databases. Those layers show up in your project’s Layers window in the familiar table of contents (TOC) style display that most desktop GIS users are accustomed to.

calamito_qgis2

You can re-order your layers, change symbology and labeling, set scale dependencies, and group layers to help you organize your TOC. Of course, you can also open the attribute table for each vector layer. It really is just that simple.
calamito_qgis3
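
If you prefer scripting, the same layers can be added from the QGIS Python console. Here is a minimal sketch using standard PyQGIS calls (the file paths are hypothetical examples):

# Run from the QGIS Python console; the paths below are hypothetical examples.
from qgis.utils import iface

# Add a shapefile as a vector layer (the 'ogr' provider handles most vector formats)
roads = iface.addVectorLayer('/data/roads.shp', 'roads', 'ogr')

# Add a GeoTIFF as a raster layer
imagery = iface.addRasterLayer('/data/imagery.tif', 'imagery')

if not roads or not roads.isValid():
    print('Layer failed to load - check the path and format')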

  2. Zoom to a Point

The ability to zoom to a point on the earth is pretty fundamental to any desktop GIS use. QGIS makes this easy with a simple Zoom to Coordinate tool right on the main interface. Simply click the tool to activate the window, then enter the X/Y coordinate of the location you are interested in.
calamito_qgis4
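
You can do the same thing from the Python console; a small sketch along these lines works as well (the coordinate values are arbitrary and assumed to be in the project CRS):

# Center the map canvas on an X/Y coordinate (assumed to be in the project CRS).
from qgis.utils import iface
from qgis.core import QgsRectangle

x, y = -77.0365, 38.8977   # arbitrary example coordinate
half_width = 0.01          # half the width/height of the new extent, in map units

canvas = iface.mapCanvas()
canvas.setExtent(QgsRectangle(x - half_width, y - half_width,
                              x + half_width, y + half_width))
canvas.refresh()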

  3. Creating Points from a File

Creating points from a delimited file of coordinates is very simple inside QGIS. In fact, it’s actually easier to do in QGIS than it is in ArcGIS for Desktop! All you need to do is click the Add Delimited Text Layer icon, fill out the input parameters, and hit OK. The tool can handle text files, comma- and tab-separated files, and Excel spreadsheets.
calamito_qgis5
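
The same kind of layer can also be built programmatically with the delimited text provider. A sketch, assuming a CSV with lon/lat columns (the file path and field names are made up):

# Build a point layer from a CSV of coordinates using the 'delimitedtext' provider.
# The file path and the xField/yField column names are hypothetical.
from qgis.utils import iface

uri = ('file:///data/sightings.csv'
       '?delimiter=,&xField=lon&yField=lat&crs=EPSG:4326')
layer = iface.addVectorLayer(uri, 'sightings', 'delimitedtext')

if not layer or not layer.isValid():
    print('Check the delimiter and the coordinate field names')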

  4. Point in Polygon Selection

The ability to select which points fall within a polygon is made easy through the Points in Polygon tool, found under the Vector dropdown > Analysis tools > Points in polygon. Once the selection is made, you can export the selected features to a new file and/or add that new file to your QGIS map.
calamito_qgis6
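
For scripted workflows, a simple point-in-polygon selection can also be done directly against the layers with the QGIS 2.x Python API. A rough sketch (the layer names are hypothetical, and both layers are assumed to share the same CRS):

# Select every point that falls inside any polygon of an area-of-interest layer.
# Layer names are hypothetical; both layers are assumed to use the same CRS.
from qgis.core import QgsMapLayerRegistry

registry = QgsMapLayerRegistry.instance()
points = registry.mapLayersByName('wells')[0]
polygons = registry.mapLayersByName('study_area')[0]

selected_ids = []
for point_feature in points.getFeatures():
    for polygon_feature in polygons.getFeatures():
        if point_feature.geometry().within(polygon_feature.geometry()):
            selected_ids.append(point_feature.id())
            break

points.setSelectedFeatures(selected_ids)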

  5. Buffering a Point/Line/Polygon

In much the same way you can buffer vector features in ArcGIS for Desktop, QGIS gives you a very simple method for creating buffers around points, lines, and polygons. You may enter a buffer distance manually or choose a value from a field in the attribute table. The Buffer tool can be found under the Vector dropdown > Geoprocessing Tools > Buffer(s).
calamito_qgis7

  6. Clip/Extract Data

The function I probably used most as an analyst was clipping subsets of data from a larger dataset. I used it to filter data by area of interest, or simply to cut unmanageable datasets into smaller, more manageable ones. QGIS makes this easy by exposing it as another tool under the Vector dropdown > Geoprocessing Tools > Clip.
calamito_qgis8

  7. Export as JPEG

One of the most common ways analysts and cartographers distribute their results is by exporting their map as a graphic. The resulting graphic can then be plotted for a wall map, attached to an email, or included as part of a larger PowerPoint briefing. Exporting a graphic from QGIS is as easy as selecting Export as Image from the File dropdown, choosing a name and format for your exported graphic, and deciding where on disk to save the image.

calamito_qgis9

  8. Define a Layer’s Coordinate System

Sometimes an analyst is given data with no coordinate system defined. This is problematic for any measurements, analysis, or coordinate notation that dataset needs to support. Luckily, QGIS makes defining a coordinate system pretty simple. Just select the layer you want to define in the table of contents, then use the Layer dropdown to select the Set CRS of Layer(s) option. Next, choose the correct coordinate system for your layer and hit OK to apply.
calamito_qgis10

  9. Reproject a Vector Layer

Oftentimes it is necessary to change the projection of a vector layer to ensure accuracy in measurements and analysis. QGIS exposes this functionality in the Save As… function for a layer, right from the table of contents. Right-click the layer you want to reproject, select Save As…, and under the CRS section select Selected CRS. This enables the Browse button, which gives you a range of coordinate systems to choose from. Just hit OK when you are done and QGIS will export a new layer with the updated projection.
calamito_qgis11
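
The same Save As… reprojection can be scripted. A sketch using the QGIS 2.x API (the output path and target EPSG code are just examples):

# Export the active layer to a new shapefile in a different CRS (QGIS 2.x API).
from qgis.utils import iface
from qgis.core import QgsCoordinateReferenceSystem, QgsVectorFileWriter

layer = iface.activeLayer()
target_crs = QgsCoordinateReferenceSystem(3857, QgsCoordinateReferenceSystem.EpsgCrsId)

error = QgsVectorFileWriter.writeAsVectorFormat(
    layer, '/data/roads_web_mercator.shp', 'utf-8', target_crs, 'ESRI Shapefile')

if error == QgsVectorFileWriter.NoError:
    print('Reprojected layer written')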

  10. Convert Raster to Vector

One of the more common actions to perform against a raster dataset is converting it to polygons. This can be useful when you want to use a raster dataset in a weighted overlay analysis, site selection, point-in-polygon analysis, and more. QGIS calls this function Polygonize, and it can be found under the Raster dropdown > Conversion > Polygonize.
calamito_qgis12
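
The Raster menu entry is a thin wrapper around GDAL, so the same conversion can also be run outside the GUI with the standard gdal_polygonize.py utility. A sketch (the file names and attribute field are examples):

# Convert a classified raster to polygons by shelling out to GDAL's gdal_polygonize.py,
# the same utility the QGIS Raster menu wraps. Names below are examples.
import subprocess

subprocess.check_call([
    'gdal_polygonize.py',
    '/data/landcover.tif',           # input raster
    '-f', 'ESRI Shapefile',
    '/data/landcover_polygons.shp',  # output vector dataset
    'landcover_polygons',            # output layer name
    'class',                         # attribute field that receives the raster value
])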

Here is the really cool part: all of the tools listed above are also available inside the QGIS Processing framework. This means those individual tools can be orchestrated into seamless workflows, used from the Python console, or run in batch. So if you are like me, and spent copious amounts of time encapsulating tradecraft in ModelBuilder models and Python scripts using ArcPy, you will be happy to know you can do similar things inside QGIS as well.
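
As a taste of what that looks like, here is a rough sketch of chaining two of the tools above from the QGIS 2.x Python console. Algorithm IDs and parameter order can differ between versions, so treat this as illustrative and confirm the signatures with processing.alghelp() first (the paths are hypothetical):

# QGIS 2.x Python console sketch: clip a layer to an area of interest, then buffer
# the result. Verify algorithm parameters with processing.alghelp('qgis:clip') and
# processing.alghelp('qgis:fixeddistancebuffer') before relying on this ordering.
import processing

clipped = processing.runalg('qgis:clip',
                            '/data/roads.shp',          # input layer
                            '/data/study_area.shp',     # clip (overlay) layer
                            '/data/roads_clipped.shp')  # output

buffered = processing.runalg('qgis:fixeddistancebuffer',
                             clipped['OUTPUT'],          # feed the clip output forward
                             500,                        # buffer distance, in layer units
                             25,                         # segments used to approximate curves
                             False,                      # dissolve the buffers?
                             '/data/roads_buffered.shp') # output
print(buffered['OUTPUT'])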
calamito_qgis13

My goal here is not for you to replace all of your desktop GIS instances with QGIS. As I mentioned before, there are still use cases where more fully featured GIS applications make sense. But for the large majority of users – the 80% – I truly believe there is a better, more cost-effective answer. And if your organization is thinking about transitioning to a hybrid GIS architecture, you should consider adopting QGIS as a core part of your migration strategy.

Stay tuned for part 2 of this series, which will focus on some of the more advanced capabilities exposed as plugins for QGIS.

An INSPIRE Refresher

As some of my colleagues here at Boundless prepare to board planes and trains to travel to Geospatial World Forum next week in Lisbon, Portugal, it seems like an opportune time to review INSPIRE compliance within OpenGeo Suite.

  • As announced with the launch of OpenGeo Suite 4.6, INSPIRE-required metadata can now be filled into WMS and WFS GetCapabilities documents to comply with the View and Download Service specs
  • We’ve been integrating INSPIRE compliance into WMS for some time now. In 2012, we added Harmonized Layer Name support to existing WMS 1.3.0 functionality at the request of EU customers
  • Speaking of users of OpenGeo Suite, Ordnance Survey has contributed to the INSPIRE cause to the benefit of the whole EU and continues to build applications on top of open source components today. In addition, the Greek Regulatory Authority for Energy (RAE) provides public access to geospatial data based on OpenGeo Suite and other open source components
  • For readers with an interest in technical details, the GeoServer community has posted valuable documentation here

As always, Boundless provides support for the INSPIRE and app-schema extensions as part of OpenGeo Suite Enterprise, and we invite anyone interested in INSPIRE compliance to reach out to contact@boundlessgeo.com. Our Sales, Services, and Support teams can help answer any questions you may have about your own projects and goals.

LiDAR and OpenGeo Suite: Redux

(NOTE: Update at end of this blog)

A few weeks ago, a colleague sent me an article explaining Esri’s decision to keep their Optimized LAS format proprietary. The article notes a January 2014 blog post from Esri claiming its ‘Optimized LAS’ boasted faster access and smaller file sizes, akin to the performance and functionality of the open source LASzip format. Esri then promised the world that the product would be offered free of charge and independently of the company’s market-leading ArcGIS platform. Esri’s blog post went on to promise an open application programming interface (API) for external software developers. Well, apparently Esri has publicly recanted that promise and has elected to introduce further vendor lock-in for anyone wanting to use its Optimized LAS format.

The open source community said that after more than 12 months of attempting to work with Esri on a mutually beneficial solution, it is now going public with its concerns. In an open letter to Esri, pro-interoperability geospatial developers argued the company was abusing its market position to compromise established open spatial standards:

“The Optimised LAS format is neither published, nor available under any open license, which provides both technical as well as legal barriers for other applications reading and/or writing to this proprietary format,” the letter stated. “This is of grave concern given that fragmentation of the LAS format will reduce interoperability between applications and organisations, and introduce vendor lock-in.”

Not long after this made its way around the Internet, I received a phone call from a friend of mine who asked what Boundless’ response to the news was. I pointed him to a series of blogs about how the LiDAR industry faces a choice for advanced formats, and reviewed how open formats can offer significant advantages over a proprietary vendor path. And in response to his original question, I affirmed that Boundless will continue to work with the libLAS, PDAL and LASzip communities to bring LiDAR access and compression into the open source world, rather than supporting yet another proprietary format.

I also took the liberty of illustrating how OpenGeo Suite currently supports LiDAR, the highlights of which are below:

Within PostgreSQL

OpenGeo Suite integrates support for storing and analyzing LiDAR data within a PostgreSQL database. The PostgreSQL pointcloud extension provides the following features:

  • Random access of small working areas from large LiDAR collections
  • Aggregation of point cloud rows into large return blocks
  • Filtering of point cloud data using any dimension as a filter condition
  • Lossless compression of LiDAR data with compression ratios between 3:1 and 4:1

lidar_1
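
As a rough illustration of what working with the extension looks like, here is a hypothetical query run from Python with psycopg2. The table and column names are made up; PC_Explode() and PC_AsText() are functions provided by the pointcloud extension:

# Sketch: read a handful of LiDAR points out of a pointcloud table with psycopg2.
# The table (lidar_patches) and patch column (pa) are hypothetical names.
import psycopg2

conn = psycopg2.connect(dbname='lidar', user='opengeo')
cur = conn.cursor()

# Explode stored patches back into individual points and return them as JSON text.
cur.execute("""
    SELECT PC_AsText(PC_Explode(pa))
    FROM lidar_patches
    LIMIT 10;
""")
for (point_as_json,) in cur.fetchall():
    print(point_as_json)

cur.close()
conn.close()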
Within PostGIS

In combination with the PostGIS extension, you also get:

  • Spatial indexing for high-speed spatial search and retrieval
  • Clipping and masking of point clouds to arbitrary clipping polygons
  • Conversion of point cloud data to and from PostGIS geometry types for advanced spatial analysis

To efficiently load LiDAR and other point cloud data into the database, OpenGeo Suite 4.0+ includes the PDAL point cloud processing tools. PDAL supports reading and writing multiple formats, including LAS, LASzip, Oracle point cloud, PostgreSQL pointcloud, and text/CSV. In addition, PDAL allows for in-translation processing of data, including reprojection, rescaling, calculation of new dimensions, removal of dimensions, and raster gridding.
lidar_2

Within QGIS

Currently, QGIS does not have direct support for LiDAR data, meaning you cannot open a LAS or LASzip file and add it to your project as you would a vector layer or image. But with some help from Martin Isenburg (creator of LASzip), QGIS can utilize LAStools and FUSION to manage and process LiDAR data in the QGIS Processing framework.
lidar_3

This allows QGIS users to easily process their data from the Processing Toolbox, create complex workflows that incorporate LiDAR data using the Processing graphical modeler, or automate LiDAR processing routines using the batch processing interface.

So if you are interested in using LiDAR within OpenGeo Suite, I encourage you to try out the very well-written LiDAR tutorial (complete with sample data) on our website. Walkthrough tutorials like this are a great example of the enhanced experience that comes with supported open source. Another benefit is being able to engage Boundless staff, many of whom are core committers for the community software code bases. As an example, a customer has asked us to document best practices for storing LiDAR using the PostgreSQL pointcloud extension, which should be released to the community in the near future. So don’t hesitate to contact us with any questions, or to share feedback about how to better support LiDAR within OpenGeo Suite.

UPDATE: Since my original authoring of this post, there have been multiple updates on this topic within the OSGeo discussion lists, including a response from Esri’s founder and president, Jack Dangermond. Jack reiterated his desire that the community use the OGC to organize an open process leading to an open standard for storing LiDAR. He went on to say that Esri would be willing to offer up its engineering work on Optimized LAS for discussion as part of any open solution.

It should be noted that the OSGeo community views this as a positive step, and their pragmatism about working with Esri is noted and understood. A willingness to consider the work of an OGC committee and a commitment to ASPRS is certainly a good first step. But we hope Esri continues to consider releasing the Optimized LAS format as an open standard as originally promised back in their January 2014 blog post. At Boundless, we will continue to focus on our commitment to open access and open standards in the area of manipulating and storing LiDAR.

We will all continue to watch developments in this area going forward.

INSPIRE Support in OpenGeo Suite

Yesterday we announced the availability of OpenGeo Suite Version 4.6 with the latest fixes, features and performance improvements for the leading open source geospatial software stack. For our EU customers I’d like to highlight two key additions enabling OpenGeo Suite 4.6 to publish INSPIRE services.

OpenGeo Suite 4.6 adds support for the GeoServer INSPIRE extension allowing required metadata to be filled into Web Map Service (WMS) and Web Feature Service (WFS) GetCapabilities documents. This additional metadata enables compliance with the INSPIRE View and Download Service specifications. More information can be found in our OpenGeo Suite INSPIRE documentation.

The second key addition is the popular GeoServer app-schema extension allowing the mapping of your information to predefined application schemas. This extension is used to create INSPIRE-compliant Geographic Markup Language (GML) output. More information about app-schema can be found in our documentation.

Boundless provides support for the INSPIRE and app-schema extensions as part of OpenGeo Suite Enterprise, and we invite anyone interested in INSPIRE compliance to reach out to contact@boundlessgeo.com. Our Sales, Services, and Support teams can help answer any questions you may have about your own projects and goals.

Announcing the Release of OpenGeo Suite 4.6

Today Boundless announced the availability of OpenGeo Suite 4.6, our latest advancement in providing a complete open-source geospatial software stack so organizations of all sizes can build great maps and applications.

The press release, as PR typically does, only skims the highlights of what’s included in the release, but this forum offers the opportunity to go into somewhat greater detail:

  • Customers of OpenGeo Suite Enterprise will have access to the latest version of OpenGeo Suite Composer, our tool for creating, styling, and publishing maps for the web. We profiled Composer when it was released with OpenGeo Suite 4.5, and we encourage you to review our original entry for details. Boundless has continued to invest in Composer, as there is a compelling need in the market for alternative web-enabled mapping tools, so 4.6 includes a lot of focus on Composer. Users will discover a number of updates, including multiple additions to layer management as well as improved visualization of SLD, making it easier to troubleshoot YSLD syntax. YSLD makes it much easier to style maps, so we believe developers will find this addition helpful.
  • As always with our releases, we’ve incorporated a lot of the latest and greatest contributions from the various community projects into the unified and supported OpenGeo Suite. The individual community project release notes are great places to start for deeper details on what the communities have been working on.

The release of 4.6 means we incorporated a lot of the latest and best features into the unified OpenGeo stack to the benefit of our customers:

  • Greater control for how overlapping layers are merged together with color composition and blending
  • A number of updates to Web Processing Service (WPS), including clustering, process controls, and security
  • Significant PostGIS performance improvements
  • Numerous updates to OpenLayers 3 – OL3 is quickly maturing into a very powerful library for web-ready maps, able to handle requirements of all sorts. We’re frequently updating the Boundless blog with the latest on OL3, so check back often.

OpenGeo Suite 4.6 is now available to all here. OpenGeo Suite Enterprise customers have access to Composer as well as Boundless Support for a wide variety of installers and complex environments. To learn more, please don’t hesitate to contact us.

Google & GeoServer Support Geospatial Big Data in the Cloud

Our friends over at CCRi released an exciting announcement today describing their collaboration with Google on the initial release of GeoMesa for Google Cloud Bigtable, creating a vastly scalable platform for geospatial analysis that leverages the cost effectiveness and management ease of the cloud.

If you aren’t familiar with GeoMesa, it’s an open-source extension that quickly stores, indexes, and queries hundreds of billions of geospatial features in a distributed database built on Apache Accumulo.  GeoMesa leverages GeoServer for its spatial processing, and we’ve been working with CCRi for a while to combine the data management and publishing capabilities of OpenGeo Suite with the big data analytics capabilities of GeoMesa.

At the same time, Google today announced Google Cloud Bigtable: a fully managed, high-performance, extremely scalable NoSQL database service accessible through the industry-standard, open-source Apache HBase API. Under the hood, this new service is powered by Bigtable, the same database that drives nearly all of Google’s largest applications.

CCRi’s announcement means that GeoMesa is now supported on Google Cloud Bigtable. As noted in CCRi’s blog post, when using Google Cloud Bigtable to back GeoMesa, developers and IT professionals are freed from the need to stand up and maintain complex cloud computing environments. These environments are not only expensive to build, but they require highly-trained DevOps Engineers to maintain them and grow them as the data accumulates.  Because GeoMesa supports Open Geospatial Consortium (OGC) standards, developers can easily migrate existing systems or build new systems on top of GeoMesa. Developers familiar with GeoServer or the OpenGeo Suite can use the GeoMesa plugin to add new data stores backed by Google Cloud Bigtable.

Let’s think for a moment about the opportunity here. Across the industry, organizations like CCRi are continuing to advance how spatial processing can be applied to big data stores (NoSQL, key-value, graph), and GeoMesa is a great example of this. I have also seen examples of OpenGeo Suite spatially enabling content in the speed layer of a Lambda architecture leveraging Apache Spark or Apache Storm. And while these advancements do illustrate added value, the infrastructure and knowledge needed to set up these architectures are not trivial. Leveraging capabilities like GeoMesa for Google Cloud Bigtable makes geospatial analytics with big data accessible to a much wider audience.

Considering a Hybrid Proprietary/Open-Source Architecture

A discussion I find myself having more and more with customers is how best to migrate to a hybrid architecture based on a combination of both proprietary and open-source technologies.  Customers have realized that building a platform with both proprietary and open-source tools can help an organization reduce risk and add value in several ways:

  • Avoiding Single Vendor Lock-in
  • Reducing Costs Associated with Licensing
  • Promoting Interoperability with Existing Software and Architecture

In your typical proprietary environment, beyond just software license costs there are additional, sometimes hidden, costs captured in the graphic below. While individual costs may be nominal, they can add up and ultimately affect the total cost of ownership of a solely proprietary solution.

notional-costs

Customers are also realizing that hybrid architectures allow for more gradual, risk-appropriate migration strategies. In other words, you do not have to rip and replace all of your existing proprietary software with open-source software. Many times this is impossible due to specific feature limitations, a steep learning curve, or simply being too cost prohibitive. So I encourage customers to consider implementing only portions of their architecture at a time, and only where it makes sense to do so.

Anthony_2

Remember that OpenGeo Suite includes software at the database, application server, and user interface tiers, and these components do not have strict dependencies on each other. This means you can focus on integrating open source one tier at a time without interrupting the entire enterprise. I see many customers start at the database tier because the changes are largely ‘hidden’ from the end user. They are still using the same user interface they are accustomed to, but are in many cases unknowingly connecting to a different endpoint to retrieve their data. Everybody wins.

Still other organizations have realized that the best migration point is at the user interface tier.  They are not leveraging the value of expensive proprietary applications because their users require and use only a fraction of the potential capabilities.  In other words, they are paying for a Ferrari, when they could easily use a Vespa.  Hybrid migration strategies targeted at non-power users can quickly realize significant savings in license costs.

It is worth adding that adopting this hybrid approach early on in the evolution of your architecture ensures more choices for migration and an overall cost savings. The old FRAM oil filter commercials of the 1970s just popped into my head: “You can pay me now, or you can pay me (a lot more) later”.

While the why of migrating to a hybrid architecture is generally understood, I tend to get a lot of questions from customers regarding the how.  Boundless architects will happily sit one-on-one with you to discuss the specifics of your migration, but this is also one area where Boundless Professional Services can greatly help.  Our expert technologists will work side-by-side with your team to guarantee that best practices are met at every phase of your project, and that you make the most of your investment in OpenGeo Suite and Boundless.  We’ve handled engagements of all sizes, and can tailor them to meet the needs of your organization.  Consider the following packages to help your migration to a hybrid architecture:

  • Migration Assessment: Most organizations we see are not starting from scratch. This package captures details about your as-is state and where you want to go. Perhaps you are looking to migrate your database from Oracle to PostGIS, or migrate from ArcGIS Server to GeoServer. To ensure comprehensive coverage, we will document details about your current missions and business goals, legacy systems, users and workflows, the present costs of your software inventory, and any indirect infrastructure and software costs. Finally, as a best practice to ensure quality of communication, we will prepare a comprehensive report containing an executive summary, findings, a plan for incremental migration, and any relevant risk mitigation strategies.
  • OpenGeo Suite Pilot: Customers don’t always know the art of the possible and what they can actually achieve with OpenGeo Suite. Getting up and running with OpenGeo Suite can be as simple as running an installer, but what do you do from there? This package accelerates your understanding of Boundless capabilities and provides a picture of your future solution via hands-on activity. Whether you already have a geospatial technology legacy or are starting from scratch, we help you stand up a working demo, gain experience using your own data, and plan next steps.
  • Architecture & Design Review: The Architecture and Design Review is a tool for your team to discover your solution’s strengths and areas for improvement. During this engagement, our senior engineers review your requirements and will answer any questions you have at this critical phase. You will benefit from improved solution architecture, improved infrastructure design, and the best practices most relevant to your solution, giving your architects and developers confidence to embark on the implementation.
  • Scale Up & Out: Many customers are getting ready for the cloud or are looking to optimize OpenGeo Suite in an elastic environment. We can review and benchmark your spatial IT infrastructure and give you the advice you need on how to parallelize, set up high availability, and configure your services for maximum performance and fault tolerance. This package is for those getting ready to run GeoServer and/or PostGIS in parallel clusters, and for those looking to squeeze more performance from their existing infrastructure. We will measure and benchmark the as-is performance of your OpenGeo Suite deployment, diagnose and resolve performance bottlenecks, and help you migrate to an improved configuration.

There are additional Professional Services packages available for review on our website http://boundlessgeo.com/solutions/professional-services/, and Boundless can work to customize or combine these packages to best fit your organization’s needs.

One final thought worth mentioning. Many customers I have talked to think this migration will happen quickly, in a matter of weeks or even months. But the reality I’ve witnessed is that, depending on the complexity of your data, your current architecture, the availability of resources, and the end user applications involved, the process can take significantly longer. This is not necessarily a bad thing, and you can use it to your advantage. By completing your migration in phases you won’t shock your end users with a big change all at once. It also gives you plenty of room to adjust and adapt as challenges arise. You can see an example of a notional timeline below.
Anthony_3

Bottom line: a hybrid platform built on both proprietary and open source tools can help an organization reduce risk and add value in several ways. Boundless has experience implementing hybrid architectures of all sizes, and has staff that can help assess your migration path as well. Packages from Boundless Professional Services are a great way to kick-start that migration and point you in the right direction from the start.

Advanced Styling with OpenLayers 3

As we see growth in adoption of OpenLayers 3, we get a lot of feedback from the community on how we can enhance its usefulness and functionality. We’ve been pleased to release iterations of OL3 on a monthly basis in 2015 – but I’m going to highlight some great new functionality added by my colleague Andreas Hocevar late last year.

While styling in OpenLayers normally uses the geometry of a feature, when you’re doing visualizations it can be beneficial for a style to provide its own geometry. What this means is you can use OL3 to easily provide additional context within visualizations based on the style itself. As a very simple example, you can take a polygon and use this new feature to show all the vertices of the polygon in addition to the polygon geometry itself, or even show the interior point of a polygon.

As a visual:

Bart_1

In order to achieve the above effect, you can pass a constructor option called geometry to ol.style.Style. This option can take a function that receives the feature as an argument, a geometry instance, or the name of a feature attribute. If it’s a function, we can – for example – get all the vertices of the polygon and transform them into a multipoint geometry that is then used for rendering with the corresponding style.

You can see sample code for this OpenLayers polygon-styles example at http://openlayers.org/en/master/examples/polygon-styles.html.

[Side note: Since the OpenLayers development team got together for a codesprint in Schladming, Austria, the polygon-styles example page now has a “Create JSFiddle” button (above the example code) which will allow you to experiment quickly with the code from the OpenLayers examples. Thanks to the sprint team for adding this convenient functionality!]

Another example to connect this with more practical use cases: you can use this functionality to show arrows at the segments of a line string.
Bart_2

As before with the polygon-styles example, you can see what’s behind this line-arrows example at http://openlayers.org/en/master/examples/line-arrows.html.

Lastly, we’ve provided an earthquake-clusters example (reviewable at http://openlayers.org/en/master/examples/earthquake-clusters.html) showing off this new functionality with a slightly different twist. When you hover over an earthquake cluster, you’ll see the individual earthquake locations styled by their magnitude as a regular shape (star):
Bart_3

Please don’t hesitate to let Boundless know if you have any questions about how we did this in OL3, or any other questions you may have about OpenLayers or OpenGeo Suite!

 

MGRS Coordinates in QGIS

One of the main characteristics of QGIS, and one of the reasons developers like myself appreciate it so much, is its extensibility. Using its Python API, new functionality can be added by writing plugins, and those plugins can be shared with the community. The ability to share scripts and plugins in an open-source medium has caused QGIS functionality to grow exponentially.  The Python API lowers the barrier to entry for programmers, who can now contribute to the project without having to work with the much more intimidating core C++ QGIS codebase.

At Boundless, we have created plugins for QGIS such as the OpenGeo Explorer plugin, which allows QGIS users to interact with Suite elements such as PostGIS and GeoServer and provides an easy and intuitive interface for managing them.

Boundless is also involved in the development and improvement of core plugins (plugins that, due to their importance, are distributed by default with QGIS instead of installed optionally by the user). For instance, Boundless is the main contributor to the Processing framework, where most of the analysis capabilities of QGIS reside.

Although both Processing and the OpenGeo Explorer are rather large plugins, most of the plugins available for QGIS (of which there are currently more than a hundred) are smaller, adding just some simple functionality. That is the case with one of our latest developments, the mgrs-tools plugin, which adds support for using MGRS coordinates when working with a QGIS map.

The military grid reference system (MGRS) is a geocoordinate standard that permits points on the earth to be expressed as alphanumeric strings. QGIS has no native support for MGRS coordinates, so the need for a plugin to support users of the standard has grown significantly.

Unlike other coordinate systems supported by QGIS, MGRS coordinates are not composed of a pair of values (i.e., lat/lon or x/y) but of a single string value. For this reason, implementing support required a different approach.

We created a small plugin that has two features: centering the view on a given MGRS coordinate, and showing the MGRS coordinate at the current mouse position.

The coordinates to zoom to are entered in a panel at the top of the map view, which accepts MGRS coordinates of any degree of precision. The view is moved to that point and a marker is added to the map canvas.
Olaya_1

 

When the MGRS coordinates map tool is selected, the MGRS coordinates corresponding to the current mouse position in the map will be displayed in the QGIS status bar.
Olaya_2

Both of these features make use of the Python mgrs library, using it to convert coordinates from the QGIS map canvas into MGRS coordinates or the other way around.
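
For the curious, here is a minimal sketch of how that library is typically used on its own (the coordinate values are arbitrary):

# Convert between latitude/longitude and MGRS strings with the Python mgrs library.
import mgrs

converter = mgrs.MGRS()

# Latitude/longitude (WGS84) to an MGRS string; the values are arbitrary examples.
mgrs_string = converter.toMGRS(38.8977, -77.0365)
print(mgrs_string)

# ...and back again: MGRS string to latitude/longitude.
lat, lon = converter.toLatLon(mgrs_string)
print(lat, lon)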

In spite of its simplicity, this plugin is of great use to anyone working with MGRS coordinates, who until now had no way of using them in QGIS. New routines can be added to extend the functionality, and we plan to do so in the near future.

As you can see, creating Python plugins is the easiest and most practical way of adding new functionality to QGIS or customizing it. The QGIS community has reduced barriers to solving challenges by adding extensibility. At Boundless, we use our extensive experience creating and maintaining QGIS plugins to provide effective solutions to our QGIS customers. Also, we provide training for those wanting to learn how to do it themselves through workshops and training programs. Let us know your needs and we will help you get the most out of your QGIS.

(Note: The mgrs-tools plugin is currently available at https://github.com/boundlessgeo/mgrs-tools)

 

Connecting The “Dots” of Your Supply Chain With OpenGeo Suite

The value of leveraging GIS in your supply chain is well known. This includes the ability to more effectively communicate the current state and relationships of your supply chain, detect events, model changes, etc. OpenGeo Suite can readily enable organizations with supply chain requirements to use their data to visualize and analyze relationships between supply chain participants.  As a sample proof exercise, I’m going to use OpenGeo Suite to identify supply lines for visualization and analysis between Production Plants and Suppliers. 

To begin, I have two separate shapefiles, one for the production plants and one for the suppliers. The production_plants layer contains a field that houses the supplier ID, which will be used to join the two sets of data.

Next we need to create the lines themselves. There are multiple methods available within QGIS to create the supply lines. For the purpose of this exercise, we will leverage the MMGIS plugin to create these supply lines using the Hub Lines tool.


In the dialog, choose the production_plants and suppliers layers along with the supplier_id field that joins the two. In this case it is the supplier_id field in the Production Plants layer and the id field in the Suppliers layer. Choose the location where you would like the shapefile to be stored and click OK.

The tool generates the lines between plants and suppliers and adds the result to our layer list.

From here we can publish the shapefile to GeoServer or import it into our DB. 

This is a good methodology if the dataset is static, or if you are processing this for another user and sending them the data for further desktop analysis. However, your needs may not be so straightforward. What if we want to make this a more dynamic process? Maybe the dataset and its relationships change on a regular basis, or the relationship is defined in another system of record (BI, production management, etc.).

One choice would be to script this process and run it on a regular interval that corresponds to the data update cycle (e.g., quarterly or yearly), as sketched below.
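
As a sketch of that approach, the script below rebuilds a supply-line table from the same plant/supplier join used later in this post and could be run from cron (or any scheduler) on the data update cycle. The database name and table/column names follow the example data:

# Sketch: regenerate a supply_lines table from the plant/supplier join on a schedule.
# Database, table, and column names follow the example layers used in this post.
import psycopg2

conn = psycopg2.connect(dbname='supply_chain', user='opengeo')
cur = conn.cursor()

cur.execute("""
    DROP TABLE IF EXISTS supply_lines;
    CREATE TABLE supply_lines AS
    SELECT p.supplier_id,
           s.id,
           ST_MakeLine(p.geom, s.geom) AS geom
    FROM production_plants AS p
    JOIN suppliers AS s ON p.supplier_id = s.id;
""")

conn.commit()
cur.close()
conn.close()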

That works well for slowly changing data, but if we want to automate the line generation and see the changes each time we refresh our map, we can use the power of the database to perform this task for us.

Let’s look at one way to do this using a view. 

First, I’ve already imported my production_plants and suppliers layers into the DB. Next we’ll create a view that generates our supply lines for us, and then register the view with GeoServer.

The SQL below joins the tables on supplier_id and id, just like we did in QGIS.

CREATE OR REPLACE VIEW plant_supplier_lines AS
SELECT p.supplier_id,
    s.id,
    ST_MakeLine(p.geom, s.geom) AS geom  -- name the geometry column so GeoServer can register it
FROM production_plants AS p
    JOIN suppliers AS s ON p.supplier_id = s.id;


In addition to the automation, this method allows us to easily incorporate other attributes into the line. For example, if we update the view to include the amount of shipments in transit, we can use that in the symbology.

CREATE OR REPLACE VIEW plant_supplier_lines_attribs AS
SELECT p.supplier_id,
   s.id,
   s.shipments_in_transit,
   ST_MakeLine(p.geom, s.geom) AS geom
FROM production_plants AS p
   JOIN suppliers AS s ON p.supplier_id = s.id;


Supply lines with a low volume of goods in transit are represented by a thin green line, moderate-volume lines by a medium yellow line, and high-volume supply lines by a thick red line.

This example was fairly straightforward in that our plant-supplier relationship is 1-to-1. If your data is 1-to-many or many-to-many, a similar DB view based on a relationship table could be used.

If you would like to know more about using DB views, including parameterized views, see http://boundlessgeo.com/2015/03/support-story-getting-sql-views/.

Creating the supply lines has helped us visualize the connection between our plants and suppliers as well as provide us with more data for future analysis. The next step in driving efficiencies into our supply chain is adding data for events that could adversely impact  our supply chain; this includes weather, transportation outages, disasters, etc. In the next blog post we will look at ways to incorporate some of these data feeds into the system and build towards automated alerting.