INSPIRE Support in OpenGeo Suite

Yesterday we announced the availability of OpenGeo Suite Version 4.6 with the latest fixes, features and performance improvements for the leading open source geospatial software stack. For our EU customers I’d like to highlight two key additions enabling OpenGeo Suite 4.6 to publish INSPIRE services.

OpenGeo Suite 4.6 adds support for the GeoServer INSPIRE extension, which allows the required metadata to be included in Web Map Service (WMS) and Web Feature Service (WFS) GetCapabilities documents. This additional metadata enables compliance with the INSPIRE View and Download Service specifications. More information can be found in our OpenGeo Suite INSPIRE documentation.

The second key addition is the popular GeoServer app-schema extension, which allows you to map your information to predefined application schemas. This extension is used to create INSPIRE-compliant Geography Markup Language (GML) output. More information about app-schema can be found in our documentation.

Boundless provides support for the INSPIRE and app-schema extensions as part of OpenGeo Suite Enterprise, and we invite anyone interested in INSPIRE compliance to reach out to contact@boundlessgeo.com. Our Sales, Services, and Support teams can help answer any questions you may have about your own projects and goals.

Announcing the Release of OpenGeo Suite 4.6

Today Boundless announced the availability of OpenGeo Suite 4.6, our latest advancement in providing a complete open-source geospatial software stack so organizations of all sizes can build great maps and applications.

The press release, as press releases typically do, only skims the highlights of what’s included in the release, but this forum offers the opportunity to go into somewhat greater detail:

  • Customers of OpenGeo Suite Enterprise will have access to the latest version of OpenGeo Suite Composer, our tool for creating, styling, and publishing maps for the web. We profiled Composer when it was released with OpenGeo Suite 4.5, and we encourage you to review our original entry for details. Boundless has continued to invest in Composer because there is a compelling need in the market for alternative web-enabled mapping tools, so 4.6 includes a strong focus on Composer. Users will discover a number of updates, including multiple additions to layer management as well as improved SLD visualization that makes YSLD syntax easier to troubleshoot. YSLD makes it much easier to style maps, so we believe developers will find this addition helpful.
  • As always with our releases, we’ve incorporated many of the latest and greatest contributions from the various community projects into the unified and supported OpenGeo Suite. The community project sites are a great place to start for deeper details on what the communities have been working on.

With the release of 4.6, we’ve incorporated many of the latest and best features into the unified OpenGeo stack, to the benefit of our customers:

  • Greater control for how overlapping layers are merged together with color composition and blending
  • A number of updates to Web Processing Service (WPS), including clustering, process controls, and security
  • Significant PostGIS performance improvements
  • Numerous updates to OpenLayers 3 – OL3 is quickly maturing into a very powerful library for web-ready maps, able to handle requirements of all sorts. We’re frequently updating the Boundless blog with the latest on OL3, so check back often.

OpenGeo Suite 4.6 is now available to all here. OpenGeo Suite Enterprise customers have access to Composer as well as Boundless Support for a wide variety of installers and complex environments. To learn more, please don’t hesitate to contact us.

Google & GeoServer Support Geospatial Big Data in the Cloud

Our friends over at CCRi released an exciting announcement today describing their collaboration with Google on the initial release of GeoMesa for Google Cloud Bigtable, creating a vastly scalable platform for geospatial analysis that leverages the cost effectiveness and management ease of the cloud.

If you aren’t familiar with GeoMesa, it’s an open-source extension that quickly stores, indexes, and queries hundreds of billions of geospatial features in a distributed database built on Apache Accumulo.  GeoMesa leverages GeoServer for its spatial processing, and we’ve been working with CCRi for a while to combine the data management and publishing capabilities of OpenGeo Suite with the big data analytics capabilities of GeoMesa.

At the same time, Google today announced Google Cloud Bigtable: a fully managed, high-performance, extremely scalable NoSQL database service accessible through the industry-standard, open-source Apache HBase API. Under the hood this new service is powered by Bigtable, the same database that drives nearly all of Google’s largest applications.

CCRi’s announcement means that GeoMesa is now supported on Google Cloud Bigtable. As noted in CCRi’s blog post, when using Google Cloud Bigtable to back GeoMesa, developers and IT professionals are freed from the need to stand up and maintain complex cloud computing environments. These environments are not only expensive to build, but they require highly trained DevOps engineers to maintain them and grow them as the data accumulates. Because GeoMesa supports Open Geospatial Consortium (OGC) standards, developers can easily migrate existing systems or build new systems on top of GeoMesa. Developers familiar with GeoServer or the OpenGeo Suite can use the GeoMesa plugin to add new data stores backed by Google Cloud Bigtable.

Let’s think for a moment about the opportunity here. As an industry, organizations like CCRi are continuing to advance how spatial processing can be applied to big data stores (NoSQL, key-value, graph), and GeoMesa is a great example of this. I have also seen examples of OpenGeo Suite spatially enabling content in the speed layer of a Lambda architecture leveraging Apache Spark or Apache Storm. And while these advancements do add value, the infrastructure and knowledge needed to set up these architectures is not trivial. Leveraging capabilities like GeoMesa for Google Cloud Bigtable makes geospatial analytics with big data accessible to a much wider audience.

Considering a Hybrid Proprietary/Open-Source Architecture

A discussion I find myself having more and more with customers is how best to migrate to a hybrid architecture based on a combination of both proprietary and open-source technologies.  Customers have realized that building a platform with both proprietary and open-source tools can help an organization reduce risk and add value in several ways:

  • Avoiding Single Vendor Lock-in
  • Reducing Costs Associated with Licensing
  • Promoting Interoperability with Existing Software and Architecture

In your typical proprietary environment, beyond just software license costs there are additional, sometimes hidden, costs captured in the graphic below. While individual costs may be nominal, they can add up and ultimately affect the total cost of ownership of a solely proprietary solution.

Customers are also realizing that hybrid architectures allow for more gradual, risk-appropriate migration strategies. In other words, you do not have to rip and replace all of your existing proprietary software with open-source software. Many times this is impossible due to specific feature limitations, a steep learning curve, or prohibitive cost. So I encourage customers to consider implementing only portions of their architecture at a time, and only where it makes sense to do so.

Remember that OpenGeo Suite includes software at the database, application server, and user interface tiers that does not have strict dependencies between tiers. This means you can focus on integrating open source one tier at a time without interrupting the entire enterprise. I see many customers start at the database tier because changes there are largely ‘hidden’ from the end user. They are still using the same user interface they are accustomed to, but are in many cases unknowingly connecting to a different endpoint to retrieve their data. Everybody wins.

Still other organizations have realized that the best migration point is at the user interface tier. They are not leveraging the value of expensive proprietary applications because their users require and use only a fraction of the potential capabilities. In other words, they are paying for a Ferrari when they could easily use a Vespa. Hybrid migration strategies targeted at non-power users can quickly realize significant savings in license costs.

It is worth adding that adopting this hybrid approach early in the evolution of your architecture ensures more choices for migration and an overall cost savings. The old FRAM oil filter commercials of the 1970s just popped into my head: “You can pay me now, or you can pay me (a lot more) later.”

While the why of migrating to a hybrid architecture is generally understood, I tend to get a lot of questions from customers regarding the how.  Boundless architects will happily sit one-on-one with you to discuss the specifics of your migration, but this is also one area where Boundless Professional Services can greatly help.  Our expert technologists will work side-by-side with your team to guarantee that best practices are met at every phase of your project, and that you make the most of your investment in OpenGeo Suite and Boundless.  We’ve handled engagements of all sizes, and can tailor them to meet the needs of your organization.  Consider the following packages to help your migration to a hybrid architecture:

  • Migration Assessment: Most organizations we see are not starting from scratch. This package captures details about your as-is state and identifies where you want to go. Perhaps you are looking to migrate your database from Oracle to PostGIS, or migrate from ArcGIS Server to GeoServer. To ensure comprehensive coverage, we will document details about your current missions and business goals, legacy systems, users and workflows, present costs of your software inventory, as well as any indirect infrastructure and software costs. Finally, as a best practice to ensure quality of communication, we will prepare a comprehensive report containing an executive summary, findings, a plan for incremental migration, and any relevant risk mitigation strategies.
  • OpenGeo Suite Pilot: Customers don’t always know the art of the possible and what they can actually achieve with OpenGeo Suite. Getting up and running with OpenGeo Suite can be as simple as running an installer, but what do you do from there? This package accelerates your understanding of Boundless capabilities and provides a picture of your future solution via hands-on activity. Whether you already have a geospatial technology legacy or are starting from scratch, we help you stand up a working demo, get necessary experience using your own data, and plan next steps.
  • Architecture & Design Review: The Architecture and Design Review is a tool for your team to discover your solution’s strengths and areas for improvement. During this engagement, our senior engineers review your requirements and answer any questions you have at this critical phase. You will benefit from improved solution architecture, improved infrastructure design, and the best practices most relevant to your solution, giving your architects and developers confidence to embark on the implementation.
  • Scale Up & Out: Many customers are getting ready for the cloud or are looking to optimize OpenGeo Suite in an elastic environment. We can review and benchmark your spatial IT infrastructure, and advise you on how to parallelize, set up high availability, and configure your services for maximum performance and fault tolerance. This package is for those getting ready to run GeoServer and/or PostGIS in parallel clusters, and for those looking to squeeze more performance from their existing infrastructure. We will measure and benchmark the as-is performance of your OpenGeo Suite deployment, diagnose and resolve performance bottlenecks, and help you migrate to an improved configuration.

There are additional Professional Services packages available for review on our website http://boundlessgeo.com/solutions/professional-services/, and Boundless can work to customize or combine these packages to best fit your organization’s needs.

One final thought worth mentioning. Many customers I have talked to think this migration will happen quickly, in a matter of weeks or even months. But the reality I’ve witnessed is that, depending on the complexity of your data, your current architecture, the availability of resources, and your end-user applications, the process could take significantly longer. This is not necessarily a bad thing, and you can use it to your advantage. By completing your migration in phases you won’t shock your end users with one big change all at once. It also gives you plenty of room to adjust and adapt as challenges arise. You can see an example of a notional timeline below.
Bottom line: a hybrid platform built on both proprietary and open-source tools can help an organization reduce risk and add value in several ways. Boundless has experience implementing hybrid architectures of all sizes, and has a staff that can help assess your migration path as well. Packages from Boundless Professional Services are a great way to kick-start that migration and point you in the right direction from the start.

Advanced Styling with OpenLayers 3

As we see growth in adoption of OpenLayers 3, we get a lot of feedback from the community about how we can enhance its usefulness and functionality. We’ve been pleased in 2015 to release iterations of OL3 on a monthly basis, but I’m going to highlight some great new functionality added by my colleague Andreas Hocevar late last year.

While styling in OpenLayers normally uses the geometry of a feature, when you’re doing visualizations it can be beneficial for a style to provide its own geometry. This means you can use OL3 to easily provide additional context within visualizations based on the style itself. As a very simple example, you can take a polygon and use this new feature to show all the vertices of the polygon in addition to the polygon geometry itself, or even show the interior point of a polygon.

To achieve the effect shown above, you can add a constructor option called geometry to ol.style.Style. This option can take a function that receives the feature as an argument, a geometry instance, or the name of a feature attribute. If it’s a function, we can, for example, get all the vertices of the polygon and transform them into a multipoint geometry that is then used for rendering with the corresponding style.
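Stripped of OL3 specifics, that geometry function is just a transform from the feature’s own geometry to whatever geometry you want rendered. Here’s a hedged, purely illustrative sketch of the vertices-to-multipoint idea in Python (the real OL3 API is JavaScript, so treat this as an analogy only):

```python
# Illustrative only: the real OL3 geometry option is a JavaScript function
# passed to ol.style.Style; this mimics the transform it would perform.

def vertices_as_multipoint(polygon_rings):
    """Collect every vertex of a polygon (a list of rings, each a list of
    (x, y) tuples) into one flat 'multipoint' list for rendering."""
    return [vertex for ring in polygon_rings for vertex in ring]

# A square with one ring: the style would render these five vertices
square = [[(0, 0), (4, 0), (4, 4), (0, 4), (0, 0)]]
print(vertices_as_multipoint(square))
```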

You can see sample code for this OpenLayers polygon-styles example at http://openlayers.org/en/master/examples/polygon-styles.html.

[Side note: Since the OpenLayers development team got together for a codesprint in Schladming, Austria, the polygon-styles example page now has a “Create JSFiddle” button (above the example code) which will allow you to experiment quickly with the code from the OpenLayers examples. Thanks to the sprint team for adding this convenient functionality!]

Another example to connect this with more practical use cases: you can use this functionality to show arrows at the segments of a line string.
As before with the polygon-styles example, you can see what’s behind this line-arrows example at http://openlayers.org/en/master/examples/line-arrows.html.

Lastly, we’ve provided an earthquake-clusters example (reviewable at http://openlayers.org/en/master/examples/earthquake-clusters.html) showing off this new functionality with a slightly different twist. When you hover over an earthquake cluster, you’ll see the individual earthquake locations styled by their magnitude as a regular shape (star):

Please don’t hesitate to let Boundless know if you have any questions about how we did this in OL3, or any other questions you may have about OpenLayers or OpenGeo Suite!


MGRS Coordinates in QGIS

One of the main characteristics of QGIS, and one of the reasons developers like myself appreciate it so much, is its extensibility. Using its Python API, new functionality can be added by writing plugins, and those plugins can be shared with the community. The ability to share scripts and plugins in an open-source medium has caused QGIS functionality to grow exponentially.  The Python API lowers the barrier to entry for programmers, who can now contribute to the project without having to work with the much more intimidating core C++ QGIS codebase.
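To give a sense of how low that barrier is, a QGIS Python plugin boils down to a module exposing a classFactory(iface) entry point whose return value provides initGui() and unload() hooks. The sketch below shows just that shape; everything beyond those three names is simplified illustration, not a complete plugin:

```python
# Minimal shape of a QGIS Python plugin module (sketch, not a full plugin):
# QGIS imports the package and calls classFactory(iface) when the plugin is
# enabled, then initGui()/unload() as it is loaded and removed.

class MinimalPlugin:
    def __init__(self, iface):
        # iface is the QgisInterface object giving access to the running app
        self.iface = iface
        self.active = False

    def initGui(self):
        # A real plugin would create QAction menu/toolbar entries here
        self.active = True

    def unload(self):
        # Undo everything initGui() did
        self.active = False

def classFactory(iface):
    # Required entry point: QGIS calls this with its interface object
    return MinimalPlugin(iface)
```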

At Boundless, we have created plugins for QGIS such as the OpenGeo Explorer plugin, which allows QGIS users to interact with Suite elements such as PostGIS and GeoServer and provides an easy and intuitive interface for managing these elements.

Boundless is also involved in the development and improvement of core plugins (plugins that, due to their importance, are distributed by default with QGIS instead of installed optionally by the user). For instance, Boundless is the main contributor to the Processing framework, where most of the analysis capabilities of QGIS reside.

Although both Processing and the OpenGeo Explorer are rather large plugins, most of the plugins available for QGIS (of which there are currently more than a hundred) are smaller, adding just some simple functionality. That is the case with one of our latest developments, the mgrs-tools plugin, which adds support for using MGRS coordinates when working with a QGIS map.

The military grid reference system (MGRS) is a geocoordinate standard which permits points on the earth to be expressed as alphanumeric strings. QGIS has no native support for MGRS coordinates, so the need for a plugin like mgrs-tools to support users of the standard has grown significantly.

Unlike other coordinate systems that are supported by QGIS, MGRS coordinates are not composed of a pair of values (e.g., lat/lon or x/y) but of a single value. For this reason, implementing support required a different approach.

We created a small plugin that has two features: centering the view on a given MGRS coordinate, and showing the MGRS coordinate at the current mouse position.

The coordinates to zoom to are entered in a panel at the top of the map view, which accepts MGRS coordinates of any degree of precision. The view is moved to that point and a marker is added to the map canvas.
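To make the “single value, any degree of precision” point concrete, here’s a rough sketch of how an MGRS string breaks into its parts. A real implementation (such as the Python mgrs library the plugin relies on) handles many edge cases this toy version ignores:

```python
import re

def split_mgrs(ref):
    """Split an MGRS string like '4QFJ1234567890' into its parts: grid
    zone (zone number + latitude band), 100 km square ID, and the
    easting/northing digits. Precision grows with digit count."""
    m = re.match(r'^(\d{1,2})([C-HJ-NP-X])([A-HJ-NP-Z]{2})(\d*)$', ref)
    if not m or len(m.group(4)) % 2 != 0:
        raise ValueError("not a well-formed MGRS reference: %r" % ref)
    zone, band, square, digits = m.groups()
    half = len(digits) // 2  # digits split evenly into easting/northing
    return {
        'zone': int(zone),
        'band': band,
        'square': square,
        'easting': digits[:half],
        'northing': digits[half:],
    }

# The same 100 km square referenced at two precisions
print(split_mgrs('4QFJ16'))          # two digits -> 10 km precision
print(split_mgrs('4QFJ1234567890'))  # ten digits -> 1 m precision
```

Note how one string carries everything from a coarse 10 km reference down to a 1 m point simply by appending digits.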


When the MGRS coordinates map tool is selected, the MGRS coordinates corresponding to the current mouse position in the map will be displayed in the QGIS status bar.
Both of these features make use of the Python mgrs library, using it to convert the coordinates of the QGIS map canvas into MGRS coordinates or the other way around.

In spite of its simplicity, this plugin is of great use to all those working with MGRS coordinates, who had no way of using them in QGIS until now. New routines can be added to extend the functionality, and we plan to do that in the near future.

As you can see, creating Python plugins is the easiest and most practical way of adding new functionality to QGIS or customizing it. By making QGIS extensible, the community has lowered the barrier to solving new challenges. At Boundless, we use our extensive experience creating and maintaining QGIS plugins to provide effective solutions to our QGIS customers. We also provide training for those wanting to learn how to do it themselves, through workshops and training programs. Let us know your needs and we will help you get the most out of QGIS.

(Note: The mgrs-tools plugin is currently available at https://github.com/boundlessgeo/mgrs-tools)


Connecting The “Dots” of Your Supply Chain With OpenGeo Suite

The value of leveraging GIS in your supply chain is well known: the ability to more effectively communicate the current state and relationships of your supply chain, detect events, model changes, and more. OpenGeo Suite can readily enable organizations with supply chain requirements to use their data to visualize and analyze relationships between supply chain participants. As a sample proof-of-concept exercise, I’m going to use OpenGeo Suite to identify supply lines between production plants and suppliers for visualization and analysis.

To begin, I have two separate shapefiles, one for the production plants and one for the suppliers. The production_plants layer contains a field that houses the supplier ID, which will be used to join the two sets of data.

Next we need to create the lines themselves. There are multiple methods available within QGIS to create the supply lines. For the purpose of this exercise, we will leverage the MMQGIS plugin to create these supply lines using its Hub Lines tool.


In the dialog, choose the production_plants and suppliers layers along with the field that joins the two. In this case it is the supplier_id field in the Production Plants layer and the id field in the Suppliers layer. Choose the location where you would like the shapefile to be stored and click OK.

The tool generates the lines between plants and suppliers and adds them to our layer list.
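The join the Hub Lines tool performs is conceptually simple. Here’s a hedged sketch of the same logic in plain Python, with made-up sample data (the field names mirror the shapefiles above, but the values are invented for illustration):

```python
# Sketch of the hub-lines join: pair each plant with its supplier by ID
# and emit a line segment between the two point locations.
plants = [
    {'name': 'Plant A', 'supplier_id': 1, 'xy': (0.0, 0.0)},
    {'name': 'Plant B', 'supplier_id': 2, 'xy': (5.0, 1.0)},
]
suppliers = {
    1: (2.0, 3.0),   # supplier id -> point location
    2: (6.0, 4.0),
}

def hub_lines(plants, suppliers):
    """Return one (start, end) segment per plant-supplier match."""
    return [(p['xy'], suppliers[p['supplier_id']])
            for p in plants if p['supplier_id'] in suppliers]

for start, end in hub_lines(plants, suppliers):
    print(start, '->', end)
```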

From here we can publish the shapefile to GeoServer or import it into our DB. 

This is a good methodology if the dataset is static, or if you are processing this for another user and sending them the data for further desktop analysis. However, your needs may not be so straightforward. What if we want to make this a more dynamic process? Maybe this dataset and the relationships change on a regular basis, or the relationship is defined in another system of record (BI, production management, etc.)?

One choice would be to script this process and run it on a regular interval that corresponds to the data update cycle (e.g., quarterly or yearly).

That works well for slow change data, but if we want to automate the line generation and see the changes each time we refresh our map we can use the power of the database to perform this task for us. 

Let’s look at one way to do this using a view. 

First, I’ve imported my production_plants and suppliers layers into the DB. Next we’ll create a view that generates our supply lines for us, and then register the view with GeoServer.

The SQL below is joining the tables based on the supplier_id and id just like we did in QGIS. 

CREATE OR REPLACE VIEW plant_supplier_lines AS
SELECT p.supplier_id,
    s.id,
    ST_MakeLine(p.geom, s.geom) AS geom
FROM production_plants AS p
JOIN suppliers AS s
    ON p.supplier_id = s.id;


In addition to the automation, this method allows us to easily incorporate other attributes into the line. For example, if we update the view to include the amount of shipments in transit, we can use that in the symbology.

CREATE OR REPLACE VIEW plant_supplier_lines_attribs AS
SELECT p.supplier_id,
   s.id,
   s.shipments_in_transit,
   ST_MakeLine(p.geom, s.geom) AS geom
FROM production_plants AS p
JOIN suppliers AS s
   ON p.supplier_id = s.id;


Supply lines with a low volume of goods in transit are represented by a thin green line, those with a moderate volume by a medium yellow line, and high-volume supply lines by a thick red line.

This example was fairly straightforward in that our plant-supplier relationship is 1-to-1. If your data is 1-to-many or many-to-many, a similar DB view based on a relationship table could be used.

If you would like to know more about using DB views, including parameterized views, see http://boundlessgeo.com/2015/03/support-story-getting-sql-views/.

Creating the supply lines has helped us visualize the connection between our plants and suppliers as well as provided us with more data for future analysis. The next step in driving efficiencies into our supply chain is adding data for events that could adversely impact it: weather, transportation outages, disasters, etc. In the next blog post we will look at ways to incorporate some of these data feeds into the system and build towards automated alerting.


Pinpointing Disasters with OpenGeo Suite

Boundless’ Victoria, British Columbia office sits at the southern tip of Vancouver Island, a region which is used to mild earthquake activity. So when a colleague back east asked if we’d felt “the earthquake near Port Hardy”, we weren’t particularly surprised that there had been one or that it had gone unnoticed locally.

We were a little surprised, however, when we saw that the epicentre was well off the west coast of the island, while Port Hardy sits on the east coast. Looking more closely at the USGS map showing the location of the earthquake, one wonders why Port Hardy was chosen as a reference point in news reports and not, say, Tofino, which is roughly due east of the epicentre.

Like many OpenGeo Suite users, PostGIS is my go-to tool for quick spatial analysis, so I loaded the Natural Earth populated places data set into a fresh database and whipped up a quick query to see what was happening within 250 kilometres of the quake.

$ shp2pgsql -W LATIN1 -I -s 4326 ne_10m_populated_places.shp | psql earthquakes

WITH constants AS 
(SELECT ST_SetSRID(ST_MakePoint(-128.15, 49.42), 4326)::geography AS quake)
(
  SELECT name, pop_max, ST_Distance(geom::geography, quake) / 1000 AS distance
  FROM ne_10m_populated_places, constants
  WHERE ST_DWithin(quake, geom::geography, 250*1000)
  ORDER BY DISTANCE
);

And sure enough:

      name      | pop_max |     distance
----------------+---------+------------------
 Port Hardy     |    2295 | 151.591959991648
 Tofino         |    1655 | 170.322296453086
 Campbell River |   33430 | 219.404018781354
 Courtenay      |   32793 | 229.792897985687

Port Hardy just edges out Tofino as the closest settlement in my data set. So do the USGS and other organisations simply use a naive algorithm to find a reference point for earthquakes? It sure looks like it!

[Note: if you’re wondering about the WITH clause above, that’s just an easy way to effectively define a constant that can be referenced throughout the main part of the query. Remember this syntax because we’ll be using it again below.]

Google went with Port Hardy, based on USGS data (although they calculate the distance differently than their source):

Natural Resources Canada’s website on the other hand referenced the small village of Port Alice, which is closer to the action but wouldn’t make the population threshold in most data sets:

Writing a better algorithm

If we agree that Port Hardy probably isn’t the best reference point for an event in the Pacific Ocean, then we are left with the question: can we design a better algorithm? A simple but effective improvement would be to calculate the distance from each nearby settlement to the epicentre, but double the distance for the bits that cross over land.

So from Port Hardy to the epicentre is about 150 kilometres, but we’ll need to add about 75 km extra because about half of that is overland. From Tofino, however, it’s 170 km from point to point and only a smidgen extra for crossing Vargas Island on the way out to sea. That’s 225 km to 175: Tofino wins!
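That back-of-the-envelope arithmetic is the whole algorithm: straight-line distance plus the overland stretch counted again as a penalty. As a quick sketch using the rounded figures above:

```python
def weighted_distance(total_km, overland_km):
    """Double-count the overland stretch: total plus an overland penalty."""
    return total_km + overland_km

# Rounded figures from the discussion above
candidates = {
    'Port Hardy': weighted_distance(150, 75),  # about half the path is overland
    'Tofino': weighted_distance(170, 5),       # only Vargas Island in the way
}
print(min(candidates, key=candidates.get))  # -> Tofino
```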


We’re going to build up our query in three parts, which isn’t strictly necessary but does make things a little easier to read and maintain.

The first step is to get a list of populated places that are within a reasonable range of the earthquake (we’ll use 250 km again) and which have more than 1000 inhabitants. Additionally, we’ll use PostGIS to create a line from that point out to the quake and pass up all this information to the next step in our query.

SELECT
  name, pop_max, ST_MakeLine(geom, quake) AS line
FROM constants, ne_10m_populated_places
WHERE
  pop_max > 1000 AND
  ST_DWithin(quake::geography, geom::geography, 250*1000)

The sub-query above will be referenced as the places table in the query below, which is where most of the hard work actually happens.

We take the results from the places query and join them to another Natural Earth data set which contains states, provinces, and other subdivisions of countries around the world (you can load it with the same shp2pgsql command I used above). Basically, this new table tells us what parts of the world are covered by land and what parts are not. By finding the intersection between our line and any land polygons, we can calculate a new line_land geometry for each of the places we found above.

SELECT
  places.name, places.pop_max, places.line,
  ST_Collect(ST_Intersection(places.line, land.geom)) AS line_land,
  ST_Length(line::geography) / 1000 AS distance
FROM
  ne_10m_admin_1_states_provinces_shp AS land, places
WHERE ST_Intersects(places.line, land.geom)
GROUP BY places.name, places.pop_max, places.line

We’ll add this new geometry and its length to the data we’ve collected about each place and pass it all up to the final part of our query, referring to this as the places_land table.

SELECT
  name, 
  pop_max,
  distance,
  distance + ST_Length(line_land::geography) / 1000 AS weighted_distance
FROM places_land
ORDER BY weighted_distance

This is where we wrap everything up by calculating the weighted_distance, which is just the regular distance plus the distance of the part that crossed over land (dividing by 1000 since the length of the PostGIS geography data type is measured in meters).

Pulling these together we get this final, three-step query:

WITH constants AS 
(SELECT ST_SetSRID(ST_MakePoint(-128.15, 49.42), 4326) AS quake)
(
  SELECT 
    name, 
    pop_max, 
    distance, 
    distance + ST_Length(line_land::geography) / 1000 AS weighted_distance
  FROM
  (
    SELECT
      places.name, 
      places.pop_max,
      places.line,
      ST_Collect(ST_Intersection(places.line, land.geom)) AS line_land,
      ST_Length(line::geography) / 1000 AS distance
    FROM
      ne_10m_admin_1_states_provinces_shp AS land,
    (
    SELECT
      name, pop_max, ST_MakeLine(geom, quake) AS line
    FROM constants, ne_10m_populated_places
    WHERE
      pop_max > 1000 AND
      ST_DWithin(quake::geography, geom::geography, 250*1000)
    ) AS places
    WHERE ST_Intersects(places.line, land.geom)
    GROUP BY places.name, places.pop_max, places.line
  ) AS places_land
  ORDER BY weighted_distance
);

All that’s left is to run the query and see what we get:

      name      | pop_max |     distance     | weighted_distance
----------------+---------+------------------+-------------------
 Tofino         |    1655 | 170.322296453086 |  170.996532624216
 Port Hardy     |    2295 | 151.591959991648 |  213.424215448539
 Campbell River |   33430 | 219.404018781354 |  336.005258086551
 Courtenay      |   32793 | 229.792897985687 |  344.417265701763

It works: this earthquake is best described as being 170 km from Tofino by our reckoning!

The query above is only really suitable for points at sea, but you can adapt this code for cases where points are on land as well … and of course the exercise is not limited to earthquakes, but can be applied to any kind of disaster or event. With some additional creativity, we could also tune our algorithm to prefer places with more inhabitants over those with fewer. And of course, you can always change the search radius of 250 km or the population threshold of 1000 inhabitants.
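As one entirely illustrative way to fold population into the ranking (the formula below is made up for the sake of example, not something from the query above), we could shrink each weighted distance by a factor that grows slowly with population:

```python
import math

def population_adjusted(weighted_km, pop):
    """Hypothetical scoring tweak: divide the weighted distance by a
    slowly growing function of population so larger places rank a bit
    better. The log base and formula are illustrative, not canonical."""
    return weighted_km / math.log10(max(pop, 10))

# Weighted distances (km) and populations from the query results above
places = [
    ('Tofino', 171.0, 1655),
    ('Port Hardy', 213.4, 2295),
    ('Campbell River', 336.0, 33430),
    ('Courtenay', 344.4, 32793),
]
ranked = sorted(places, key=lambda p: population_adjusted(p[1], p[2]))
print([name for name, _, _ in ranked])
```

With these particular numbers the ordering happens to be unchanged, but a large city just beyond a small village could now overtake it.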

Finally, if you want to pack this all up and create an application with OpenGeo Suite, I suggest checking out our recent blog post on publishing SQL queries like this in GeoServer and passing parameters to make an interactive and dynamic layer to show on a map!


My First FOSS4G

A Whole New World

Being a new developer at Boundless, fresh out of university with my degree in Computer Science, I haven’t had much practical experience yet. While I have a decent understanding of programming and awareness of popular languages and frameworks, I am still building my knowledge of geospatial software while working on GeoServer at Boundless. Open source communities, too, were something I knew about but had never been a part of.

So, with FOSS4G NA 2015, I was really diving into a whole new world. I didn’t know what to expect out of an open source conference, or if it would prove friendly to newbies.

Here is what I found.

Diversity

Something that impressed me at the conference was the diversity. I found a wide range of people from varying backgrounds and levels of experience. Some were highly academic and studying the sciences, while others were masterful cartographers. Then there were quite a few developers like myself who weren’t veterans of geospatial. Notably, a sizable portion of attendees were women, many of whom were presenters.

The topics were diverse too. The first day had some beginner-friendly sessions, including introductions to GeoServer and QGIS. Then there were “theme days”, a concept I thought was awesome: Tuesday was PostgreSQL day, Wednesday was Big Data day, and Thursday was Web Mapping day. Other talks ran at the same time, but I found the theme talks were especially popular. Plenty of beginner material was presented along with more advanced topics, so there was something for everyone.

Further, EclipseCon was hosted jointly with FOSS4G NA this year, providing even more diversity of people and backgrounds. Everyone who registered for one conference could attend sessions at the other. This created an interesting overlap, letting developers and scientists mix and talk to each other. Seeing how people from different fields approach their work can provide interesting insights and expand our knowledge.

Learning GIS and Software Development

Above all, my goal going into FOSS4G was to learn. A few sessions stood out in particular as great resources to me.

On the geospatial side, the list is too long to include everything, so I’ll highlight the sessions I found most helpful. For those interested in scripting with Python, Intro to Spatial Data Analysis in Python by Jenny Palomino provided a lot of background on the many available libraries and frameworks for working with spatial data. Paul Ramsey’s PostGIS Feature Frenzy was a great introduction to the power of PostGIS. Philip Davis presented a whole educational “training academy” in Building a Sustainable Open Source Training Academy Using QGIS. Finally, the Birds of a Feather session for GeoServer had the Boundless team, as well as Andrea Aime from GeoSolutions, on hand to answer questions and help people with GeoServer.

From the Eclipse side of things, I thought Shay Shmeltzer’s presentation on iOS Development with Eclipse and Java was particularly interesting, especially because a lot of people assumed that wasn’t possible! Another fascinating presentation was Think Before You Code by Lizzi Slivinski which promoted a good discussion about user experience (UX) and design in general. Finally, Katy DeCorah provided guidelines and considerations for writing in her presentation, Writing For Everyone.

Code Sprints and Hackathons

For the developers, there were plenty of opportunities to write some code and receive help from experienced members of the community. Tuesday night had a room dedicated to a hackathon, where I was able to meet with Andrea Aime to do some much-needed bug fixing for GeoServer. Boundless also hosted an additional code sprint on Friday. Torben Barsballe, another new developer at Boundless working on GeoServer, and I got some help from Andrea getting started with CITE tests. The time went by fast, but we got quite a bit done in only a few hours. Thanks to the organizers for providing the hackathon space, and to Boundless for hosting the additional code sprint.

Conclusion

FOSS4G really broadened my perspective. There are a lot of exciting things going on, especially with advances in web mapping and a greater desire to move to the cloud and process big data. This aligns well with our work on GeoServer, making sure it’s ready to scale up and out to meet client needs.

I think the best part about FOSS4G was that it felt welcoming. People were very open and friendly. Experts were happy to talk and share what they know, even with newbies. All the knowledge felt available for anyone who wished to pursue it. Regardless of who you are or where you stand, FOSS4G is a great experience.

Thank you to everyone who organized this year’s FOSS4G NA, and a big thanks to Boundless for sponsoring the event and giving me the opportunity to attend. Looking forward to next year.

 

Embarking on a Journey with FOSS4G-NA 2015

I’m relatively new to Boundless as I build my career as a Java Developer, so it was timely that during the week of March 9 I was able to attend FOSS4G-NA in San Francisco. As someone new to software conferences like this, I’d like to offer some reflections to show how these events can be as positive an experience for the new guy as they are for the veteran.

Here’s what I found – the conference schedule was well paced, with a good variety of presenters and presentations covering numerous topics in the FOSS4G space. There were also a number of events outside of the presentation schedule where I got to interact with other people who were there for the conference – in many ways, this was the most eye-opening part of the experience for me. Real work gets done at conferences outside of the sessions, not just in the auditoriums.

As noted, I am a Java Developer, so I also felt lucky the event was co-located with EclipseCon. It gave me the opportunity to address multiple interests within one event, and I can only hope other conferences offer me this breadth of information.

Some of the highlights of my experiences at FOSS4G were:

PlanetLabs

PlanetLabs presented a number of talks about their project to image the entire planet at 3-5 m resolution using a fleet of satellites. These satellites are built in-house using an “agile aerospace” approach, which entails lots of small, cheap satellites with fast turnover. This is a novel change from the conventional monolithic aerospace development strategy, and allows PlanetLabs to deploy and test improvements quickly and cheaply. Since each satellite represents a relatively small investment, individual failures are not catastrophic. PlanetLabs also hosted a social event at their office/lab, and showed us where they build and program their satellites and mapping software.

Mapping Drones

One of the themes of Wednesday was drones. I saw presentations on how to build your own drone, and on OpenDroneMap, a piece of software for rendering drone-collected imagery in three dimensions. I also attended a presentation about kites as an alternative to drones: they are stronger, cheaper, and can stay up longer, and people don’t feel threatened by kites the way they do by drones. This is especially relevant with all the discussion about drones and privacy these days, and provides an interesting look into human psychology, and how we are more accepting of what is familiar than of what is new.

Java 9 and Beyond

As part of EclipseCon, I attended a keynote on upcoming Java features. This included a discussion of a major feature of Java 9, the modular JVM. Even more interesting were some of the plans for future Java releases. These include “value” classes, which are essentially classes that behave like primitives, as well as primitive support for Java generics. These future additions have been a long time coming. Primitive support for generics will be especially nice, as it will eliminate the need to null-check every simple list of numbers and greatly enhance memory efficiency as well.

Cesium

As most GIS people probably already know, Cesium is a JavaScript globe and mapping library. The Cesium team made a strong showing at FOSS4G with a demonstration of 3D temporal data visualization of GPS traces using Cesium. They also presented a number of cool demo applications (that you should totally check out), which are available online.


Overall, I found FOSS4G-NA to be a valuable experience, and I would be interested in attending future conferences given the opportunity. For this FOSS4G, I tried to go to talks on a wide variety of topics to explore what was out there. While this was definitely valuable for me as a beginner, some things went over my head. At similar events in the future, I feel I could focus more strongly on topics that would broaden and develop my skill-set as a GIS Java Developer.