Editing in OpenLayers 3 using WFS-T


We’ve talked about building applications with OpenLayers 3 before, but mostly focused on how to display maps. OpenLayers 3 includes all the building blocks needed to create an editing application as well, such as support for feature editing (insert, update and delete) and writing back the changes to the server through a transactional Web Feature Service (WFS-T).

The OpenLayers vector-wfs example demonstrates how to use a vector layer with a BBOX strategy served up by a WFS such as GeoServer, so make sure to look at (the source of) this example before reading this post if you are not familiar with it already.

The parts of OpenLayers 3 that we need to support feature editing through WFS-T are:

  • ol.interaction.Draw
  • ol.interaction.Modify
  • ol.format.WFS

Insert

Let’s start with the case of inserting a new feature. Make sure to configure the Draw interaction with the correct geometry type (config option named type) and the correct geometryName. Also have it point to the vector layer’s source through the source config option. The Draw interaction fires an event called drawend when the user finishes drawing. We can use this event to create a WFS transaction and send it to the WFS server. In order to do this we’ll create an ol.format.WFS instance and call its writeTransaction method. Since we only have a single feature to insert, we’ll pass in an array of that single feature as the first argument, and null for the updates and deletes arguments. For the options argument we need to pass in:

  • gmlOptions: an object with an srsName property which uses the map’s view projection
  • featureNS: the URI of the featureType we are using in the application
  • featureType: the name of the featureType

To find out the value of featureNS, you can use a WFS DescribeFeatureType request. The writeTransaction method gives us back a node which we can then serialise to a string using the browser’s XMLSerializer object. Use jQuery ajax or the browser’s built-in XMLHttpRequest to send the string to the WFS server using HTTP POST. When the result comes back, we parse it with ol.format.WFS’s readTransactionResponse method to obtain the feature id assigned by the WFS server, which we then set on the feature using its setId method.
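Putting the pieces together, a minimal sketch of the insert flow might look like the following. The namespace URI ('http://example.com/myns'), the feature type name ('buildings'), the geometry name and the WFS endpoint ('/geoserver/wfs') are placeholders to replace with your own values; the API calls are the OpenLayers 3 ones described above.

var wfsFormat = new ol.format.WFS();

var draw = new ol.interaction.Draw({
    source: vectorLayer.getSource(),  // draw into the vector layer's source
    type: 'Polygon',                  // match the layer's geometry type
    geometryName: 'the_geom'          // match the geometry attribute on the server
});
map.addInteraction(draw);

draw.on('drawend', function(evt) {
    var feature = evt.feature;
    // Insert-only transaction: updates and deletes are null
    var node = wfsFormat.writeTransaction([feature], null, null, {
        gmlOptions: {srsName: map.getView().getProjection().getCode()},
        featureNS: 'http://example.com/myns',  // placeholder namespace URI
        featureType: 'buildings'               // placeholder feature type
    });
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/geoserver/wfs');        // placeholder WFS endpoint
    xhr.setRequestHeader('Content-Type', 'text/xml');
    xhr.onload = function() {
        var response = wfsFormat.readTransactionResponse(xhr.responseText);
        // assign the fid handed back by the server to the local feature
        feature.setId(response.insertIds[0]);
    };
    xhr.send(new XMLSerializer().serializeToString(node));
});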

Update

For updating the geometry of an existing feature, we use ol.interaction.Modify in conjunction with an ol.interaction.Select instance: users first select a feature by clicking, and can then modify it. When we create the Modify interaction, we pass in the feature collection of the Select interaction. We also keep a hash of modified features, which we use to determine whether we need to send a WFS Update transaction. We listen for the add and remove events of the Select interaction’s feature collection. In the listener for the add event, we listen for the change event on the feature, and once change fires, we add an entry to the modified features hash keyed by the feature’s id. In the listener for the remove event (which fires when the feature gets unselected), we check if the feature is in the modified hash, and if it is, we use ol.format.WFS again to write out the Transaction XML (now passing an array with the single feature as the updates argument) and send it to the WFS server. When the result comes back, we parse it with the readTransactionResponse method of ol.format.WFS and check whether totalUpdated in the transaction summary is 1, which indicates that the transaction was processed successfully by the WFS server.
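A sketch of that bookkeeping, reusing the wfsFormat object and the placeholder featureNS/featureType values from the insert example above:

var select = new ol.interaction.Select();
var modify = new ol.interaction.Modify({
    features: select.getFeatures()  // modify whatever is currently selected
});
map.addInteraction(select);
map.addInteraction(modify);

var dirty = {};  // hash of modified features, keyed by feature id

select.getFeatures().on('add', function(evt) {
    // once the selected feature changes, flag it as modified
    evt.element.on('change', function(e) {
        dirty[e.target.getId()] = true;
    });
});

select.getFeatures().on('remove', function(evt) {
    var feature = evt.element;
    if (!dirty[feature.getId()]) {
        return;  // nothing changed, no transaction needed
    }
    // Update-only transaction for the single modified feature
    var node = wfsFormat.writeTransaction(null, [feature], null, {
        gmlOptions: {srsName: map.getView().getProjection().getCode()},
        featureNS: 'http://example.com/myns',
        featureType: 'buildings'
    });
    // serialise and POST exactly as in the insert case, then inspect
    // readTransactionResponse(...).transactionSummary.totalUpdated
    delete dirty[feature.getId()];
});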

Delete

For deleting a feature, we use the Select interaction as well. If a feature is selected and the user presses the Delete button, we use ol.format.WFS again to write out the Transaction XML, this time passing an array with the single feature as the deletes argument. We then transform this into a string and send it to the WFS server using AJAX.
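The corresponding sketch for the delete case, reusing the select interaction and wfsFormat object from the sketches above (deleteSelected is a hypothetical handler wired to the Delete button):

function deleteSelected() {
    var feature = select.getFeatures().item(0);
    if (!feature) {
        return;  // nothing selected
    }
    // Delete-only transaction: inserts and updates are null
    var node = wfsFormat.writeTransaction(null, null, [feature], {
        gmlOptions: {srsName: map.getView().getProjection().getCode()},
        featureNS: 'http://example.com/myns',
        featureType: 'buildings'
    });
    // serialise and POST as before, then remove the feature locally
    vectorLayer.getSource().removeFeature(feature);
}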

Boundless SDK

While this whole process may sound pretty cumbersome, in the upcoming OpenGeo Suite 4.1 release we will make your life a lot easier by shipping the Boundless SDK with OpenLayers 3 templates that facilitate creating an editing application based on OpenLayers 3. So stay tuned.

OL3 Editing App

Our new GeoServer certification course is now available!

As a core component of our flagship OpenGeo Suite product, GeoServer is one of the most important projects we work on. We helped start the GeoServer project over a decade ago and continue to be active members of the vibrant community that has sprung up around it. We’ve also provided training from the beginning. Whether as interactive sessions like those at the FOSS4G conference, online webinars, or introductory workshops, those who want to learn about GeoServer know to come to us.

Many of you have expressed an interest in going deeper with GeoServer, learning the fundamentals inside and out, in a way that a half-day workshop just can’t accomplish. We’re happy to announce that we’ve got your answer.

A (big!) new training course

Over the past few months, we’ve developed an entirely new GeoServer curriculum that starts with spatial basics and web services, then moves on to administration and deployment. This course provides attendees with solid knowledge of how to use and administer GeoServer.

OpenGeo Suite: GeoServer I contains lots of multimedia and interactive content, including over one hundred exercises to follow along with and then do yourself. As tested in a classroom environment, the course is four days in length, but the online version allows you to work at your own pace and on your own schedule.

To help make learning easier, we’ve included over three hours’ worth of video material, like the example above, to take you through the course. We are also available via email or during weekly office hours to answer any questions that arise during your progress.

Certification and credit

Finally, no course would be complete without a final exam, which will offer a Boundless GeoServer Certification for those who qualify. We offer the only GeoServer certification on the market, so you will definitely want this if GeoServer is part of your skill-set. Also, for those GIS Professionals out there, this course (along with our entire training catalog) is eligible for GISP certification credit through the GIS Certification Institute. And the best part? You can enroll today.

Start right now!

The course is available online for anyone who wants to advance their knowledge of GeoServer. No need to schedule an on-site visit to one of our offices or wait for us to come to an event near you. No need to contend with time zone differences either. You can work on your own time and at your own pace.

If you’re already a GeoServer expert, then you can also skip the course and sign up for the certification exam.

We’re really excited about this new offering, and are happy to be able to open it up to you right now.

Want to see more? Great!

Spanning the Globe

Eddie Pickle

It’s rather interesting (and really fun) to be a part of both the “traditional” GIS/BI community and the newer geoweb community. Because our geospatial software makes this possible, I attended two radically different events in the past week that span this divide. One, the Location Intelligence Summit is an established event held on corporate turf  at the Washington DC Convention Center. The other, the GeoWeb Summit, is a newer meetup held in the grittier (and yet somehow more expensive) Dumbo neighborhood of Brooklyn. I felt right at home in each.

The Location Intelligence Summit, which Jody Garnett blogged about in great detail, did a great job representing the Old Guard. The event pre-dates Google Earth, harking back to a time when “location intelligence” came from geocoding customer records while Navteq and TeleAtlas drove the streets. Back then, “spatial was special” and getting and managing data in a GIS was a major effort.

GeoWeb Summit came later, when geospatial data started to come from everything (mobile devices, cameras, sensors everywhere — even my dog has a chip now!). Now spatial is context for data and applications, not necessarily more special than any other aspect of data. We call this Spatial IT, and Paul Ramsey presented on it at length.

Fast forward to today, and at both events there was a strong drive to get to the “now” — to capture, analyze, and visualize real-time data; leverage the crowd; track things indoors and out; and so much more. At each conference it was clear that the applications enterprises seek for modern workflows, monitoring, analysis, application serving, and more could not happen without the interoperable, scalable capabilities of open source. In the case of the Location Intelligence world, I was part of a whole new LocationTech track for high performance, location-aware technology. Here, open source is a way out of the historical trap of proprietary software and locked doors. For the GeoWeb Summit, that’s a starting assumption. In both cases, it’s great to be a foundation for the future.

 

Thoughts from Location Intelligence 2014

Jody Garnett

For two and a half days last week, Location Intelligence 2014 took place in Washington DC. The conference was hosted by Directions Magazine, Oracle, Here (a Nokia business) and the Eclipse Foundation’s LocationTech initiative. The result was a diverse mix representing our industry and a strong outreach to the Business Intelligence community.

Workshops

The first ‘half day’ of the conference started with workshops.

DSC04349.jpg

David Winslow and Juan Marin from Boundless offered a tag-team introduction to GeoGig (formerly GeoGit) called Redefining Geospatial Data Versioning: The GeoGig Approach. The highlight of this workshop was a live demonstration of the GeoGig plugin for QGIS. My contribution was a series of illustrations showing each step of the workshop and running around the room to assist as needed.

DSC04351.jpg

Next up was a workshop with Ivan Lucena from the Oracle Raster team called Developing Geospatial Applications with uDig and Oracle Spatial. Ivan has been working on integrating Oracle Raster with GeoTools and uDig. I was quite pleased with the workshop participation, and there was a great Q&A session with Xavier Lopez covering Oracle’s involvement in open source.

LocationTech Summit

The second day featured the LocationTech Summit, primarily moderated by Andrew Ross with help from Geoff Zeiss. Presentations were recorded, and can be found on this YouTube playlist.

DSC04364.jpg

Modern workflow for managing Spatial data: Eddie Pickle was on hand to introduce us as a company and to share his own take on where the industry needs to be next. Watch on YouTube.

High performance computing at LocationTech: Xavier Lopez went over Oracle’s interest in open source spatial technologies and their support of LocationTech to facilitate this direction.

21st Century Geospatial Data Processing: Robert Cheetham (Azavea) from the GeoTrellis project provided context for his work. Watch on YouTube.

Open Data straight from the Source: Andrew Turner of Esri provided a straightforward perspective on his recent open data efforts. I quite enjoyed the cross-linking from GeoJSON files on GitHub to browsable map and spreadsheet display. Watch on YouTube.

Redefining Geospatial data versioning: The GeoGig approach: Juan Marin from Boundless provided a well-received GeoGig presentation, resulting in much discussion, some of which is creeping out onto the internet in the form of a blog post by Geoff Zeiss — thanks Geoff! Watch on YouTube.

Collaborative Mapping using GeoGig: Scott Clark from LMN Solutions shared a GeoGig success story. I was impressed with how far they have come using technology from all over the OpenGeo stack. Scott previewed a live demo server if you would like to see how they are doing. One aspect of Scott’s presentation I appreciated was the emphasis that you can use GeoGig today by making use of the tools (and desktop apps) you are familiar with. Watch on YouTube.

GeoMesa: Scalable Geospatial Analytics: Anthony Fox of CCRi offered an introduction to one of LocationTech’s big cloud power houses. Watch on YouTube.

Real-time Raster Processing with GeoTrellis: Robert Cheetham was back with the details this time around.  I was a bit taken by surprise with the scope of GeoTrellis as it moves beyond cloud raster work, and starts to look into network analysis. They have a demo up if you would like to take a look. Watch on YouTube.

Fusing Structured and Unstructured Data for Geospatial Insights in Lumify: A polished presentation from Altamira. The open source project clavin.io is responsible for figuring out the location information from otherwise innocent text documents. Watch on YouTube.

The last “from data to action” session provided a series of inspirational stories:

  • Erek Dyskant from Bluelabs described using the usual suspects (PostGIS, GeoServer, OpenLayers, QGIS) in their talk. The take-home for me was the reduction in labor from a 150-person analytics team in 2012 down to 8 in 2013. How? By focusing on up-to-date data, rather than being distracted by a BI tool.

  • The talk on Red Hook Wifi was great at a technical buzzword bingo level (wifi mesh network!) and human level (enabling communication after Hurricane Sandy hit).

All in all, the LocationTech Summit was a great addition to the event. It offered a wide range of technology, data, and human stories for those who attended. On a technical front, I was quite pleased with the number of teams using GeoServer and happy to see GeoServer WPS being used in production.

Wrapping Up

The final day was devoted to the Oracle community, resulting in some great conversations at the Boundless booth.

DSC04338.jpg

I was pleased to meet up with a fellow Australian, Simon Greener, who was (as always) focused on Oracle performance.

Thanks to the conference organisers for an entertaining venue to talk about the technologies we know and love.

Citi Bike Analysis and Automated Workflows with QGIS

Victor Olaya

Citi Bike, the bike share system in New York City, provides some interesting data that can be analyzed in many different ways. We liked this analysis from Ben Wellington that was performed using IPython and presented using QGIS. Since we wanted to demonstrate the power of QGIS, we decided to replicate his analysis entirely in QGIS and go a little bit further by adding some extra tasks. We automated the whole process, making it easy to add new layers corresponding to more recent data in the future.

Data can be found on the Citi Bike website and is provided on a monthly basis. We will show how to process one of the monthly files (February 2014) and discuss how to automate processing for the whole set of files. We will also use a layer with the borough boundaries from the NYC Department of City Planning.

Getting started

Open both the table and the layer in QGIS. Tables open as vector layers and are handled in much the same way, except for the fact that they lack geometries. This is what the table with the trip data looks like.

attribute_table

We have to process the data in order to create a new points layer, with each point representing the location of an available bike station and having the following associated values:

  • Median value of age
  • Median value of trip duration
  • Percentage of male users
  • Percentage of subscribed users
  • Total number of outgoing trips

Computing a new layer

As we did in a previous bike share post, we can use a script to compute this new layer. We can add a new algorithm to the Processing framework by writing a Python script. This will make the algorithm available for later use and, as we will see later, will integrate it into all the Processing components.

You can find the script here. Add it to your collection of scripts and you should see a new algorithm named “Summarize Citi Bike data” in the toolbox.

Double click on the algorithm name to execute it and enter the table with the trips data as input.

param_dialog_script

Running the algorithm will create a new layer with stations, containing the computed statistics for each of them in the attributes table.

points

To create an influence zone around each point, we can use the Voronoi Polygons algorithm.

param_dialog_voronoi

We are using a buffer zone to cover all of Manhattan. Otherwise, the polygons would just include the area within the convex hull of the station points. Here is the output layer.

voronoi

The final step is to clip the polygons with the borough boundaries layer, to remove areas overlapping with water. We will use the Clip algorithm, resulting in this:

output_clip

Visualizing the data

We can now change the style and set a color ramp based on any of the variables that we have computed. Here is one based on the median trip time.

trip_time

Up to this point, we have replicated the result of the original blog entry, but we can go a bit further rather easily.

Creating a model

For instance, let’s suppose that you want to do the same calculation for other months. The simplest alternative would be to open all the corresponding tables and re-run all the above steps for each of them. However, it would be better if we could put all of those steps in a single algorithm that computes the final polygons from the input table. We can do that by creating a model.

Models are created by opening the graphical modeler and adding inputs and algorithms to define a workflow. A model defining the workflow that we have followed would look like this.

model

In case you want to try it yourself, you can download it here. Now, for a new set of data (a new input table), you just have to run a single algorithm (the model that we have just created) in order to get the corresponding polygons. The parameter dialog of the model looks like this:

param_dialog_model

 

Batch processing and other options

This, however, might be a lengthy operation once we have a few tables to process, but we can add some additional automation. The model we have created can be used just like any other algorithm in Processing, meaning that we can use it in the Processing batch interface. Right-clicking on the model and selecting “Execute as batch process” will open the batch processing dialog.

batch

Just select the layers to process and the output filenames in the corresponding columns, and the set of resulting layers will be computed automatically in a single execution.

With a bit of additional work, these layers can be used, for instance, to run the TimeManager plugin and create an animation, which will help us understand how bike system usage varies over the course of the year.

Other improvements can also be added. One would be to write a short script that creates an SLD file for each layer based on its data, adjusting the boundaries of the color ramp to the min and max values in the layer, or using some other criteria. That would give us a data-driven symbology, and we could add the algorithm created with that script as a new step in our model.

Publishing to OpenGeo Suite

Another improvement that we can add is to link our model to the publishing capabilities of the OpenGeo Suite plugin. If we want to publish the layers that we create to a GeoServer instance, we can also do it from QGIS. Furthermore, we can call that functionality from Processing, so publishing a layer and setting a style can be included as additional steps of our model, automating the whole workflow.

First, we need a script to upload the layer and a style to GeoServer, calling the OpenGeo Suite Plugin API. The code of this script will look like this:

##Boundless=group
##Import styled layer to GeoServer=name
##Layer=vector
##SLD_style_file=file
##url=string
##user=string
##password=string
##workspace=string

from qgis.core import *
from PyQt4.QtCore import *
import processing
from opengeo.qgis.catalog import createGeoServerCatalog

layer = processing.getObject(Layer)  # resolve the selected input layer
layer.loadSldStyle(SLD_style_file)   # apply the chosen SLD style to the layer
catalog = createGeoServerCatalog(url, user, password)  # connect to GeoServer
ws = catalog.catalog.get_workspace(workspace)          # look up the target workspace
catalog.publishLayer(layer, ws)      # upload the styled layer to that workspace

You can copy that code and create a new script or you can install this script file.

A new algorithm is now available in your toolbox: “Import styled layer to GeoServer”.

param_dialog_geoserver

Select the model that we created, right-click on its name and select “Edit model” to edit it.

You can now add the import script algorithm to the model, so it takes the resulting layer and imports it.

model_extended

Notice that, although our script takes the URL and other needed parameters for the import operation, these are not requested of the user when running the model, since we have hardcoded them assuming that we will always upload to the same server. This can of course be changed easily to adapt to a given scenario.

Running the algorithm will now compute the polygons from the input table, import the resulting layer, and set a given style, all in a single operation.

The style is selected as another input of the model, and it has to be entered as an SLD file. You can easily generate an SLD file from QGIS, just by defining it in the properties of the layer and then exporting as SLD. The SLD produced by QGIS is not fully compatible with GeoServer, but the OpenGeo Suite plugin will take care of that before actually sending it to GeoServer.

Although we have left the style as an input that has to be selected in each execution, we could hardcode it in the model, or even in the script dialog used for importing. Again, there are several alternatives to build our scripts and models.

Of course, this improved model can also be run in batch processing mode, like we did with the original one.

Conclusion

QGIS is an ideal tool for working with and analyzing spatial data, and with our plugin it is also an easy interface for publishing data to OpenGeo Suite. Integrating Processing with the OpenGeo Suite plugin enables all sorts of automated analysis and publishing workflows, allowing complete workflows to be executed from a single application.

Boundless recognized as a TiE50 2014 finalist!

Tie50 2014 Finalist

Congratulations to our Boundless team: TiECon, the world’s largest conference for entrepreneurs, has named us a TiE50 Finalist at its 2014 award ceremony last week.

For the last five years, TiECon has recognized promising startups like ours, and I am not surprised that the work of our team has been noticed by others in the startup community. It is great to be recognized for our accomplishments after such a short time in the commercial market, and I believe this is validation that we are making a difference by offering a powerful alternative to proprietary systems.

Boundless’ mission is to develop and maintain the best open source geospatial software, something that is central to all of our efforts. Given the power and growing use of geospatial data and applications, our platform is a natural fit in support of rapidly changing needs across various industries.  To paraphrase Eric Raymond, we provide freedom from “unhackability,” lock-in, and amnesia harms, meaning that enterprises manage their own timetables for software lifespan, migration and/or obsolescence. Essentially, open source software like ours provides businesses with substantially greater control than closed source alternatives. This is critical given the pace of innovation in every industry using geospatial data.

We pride ourselves on providing the only viable open source mapping software platform, one that solves the most complex geospatial challenges.  As we look toward the future, we will continue to work with our developer community to build the best software for today’s mapping and geospatial needs and to grow and develop all of our services, mapping the way to a strong future for enterprise geospatial apps.

Boundless Connections: Joshua S. Campbell (Part 2)

Yesterday we posted about what Josh Campbell, our new Vice President for Product Development, has accomplished during his tenure at the US Department of State. This second half of the interview focuses on his efforts with the future direction of data collaboration, creation, and editing products at Boundless. 

Your online persona is “Disruptive Geo”. What does that come from?

It comes from the coursework for my PhD in geography. My research was mainly in geographic information science, and beyond that I needed to take additional research coursework in computer science and entrepreneurship, eventually taking five classes in each. One of the business courses was about “Disruptive Innovation”, a conceptual model for predicting the outcomes of competitive battles in different circumstances. This framework has become incredibly important to me: How do you evaluate ideas in a competitive market? Which ideas will succeed? How do you structure organizations to utilize a disruptive strategy? This approach resonated so strongly because I believe that the combined mix of geographic data, spatial analysis tools, and open source methodologies can be highly disruptive to many industries.

What will you be working on here at Boundless?

I’ll be looking at what we can do with data collaboration tools, distributed versioning, and crowdsourcing methods to unlock value in traditional data management workflows. Our new approach to distributed versioning is incredibly powerful, potentially a game changer in geographic information science, and I’ll be looking at a cross section of industries and applications where we can use it to increase efficiencies and enable new workflows.

One of the critical challenges that organizations are going to face going forward is how to manage the interface between authoritative and crowdsourced data. The power of the crowd is undeniable, but how can that power be harnessed in the context of authoritative datasets, either public or private? With distributed versioning and new workflows, we can bring those worlds together while maintaining data provenance and lineage and granular-level data security, and providing methods for incorporating edits made outside the organization. If we do it right, we can introduce a new level of collaboration in spatial data.

Where does your interest in this come from?

While at the HIU I led our involvement with the ROGUE project, committing to help expand its use in the humanitarian and development space. I thought the approach to distributed versioning outlined in the project (which became the GeoGit library) addressed many of the long standing challenges of geospatial data management, and was a good investment for the government to make, as it was designed to increase spatial data sharing during humanitarian operations.  Having watched the evolution of GeoGit from a front-row seat, my opinion of that hasn’t changed.

How are you attacking this problem?

First, I’ll be looking at individual industries to find use cases where versioned editing is important. I have several in mind already based on my experience, and am looking to branch into the utilities, energy, and telecommunications industries. Many of these already use some form of spatial versioning, and can become more efficient using a GeoGit approach. There are likely existing workflows where there is already internal collaboration (different teams, field work, etc.), and GeoGit could reduce the overhead cost of business by making that collaboration better.

Second, I’m going to take a deeper look at OpenStreetMap and its versioning process. With GeoGit there are potentially better ways to structure the changesets, making it easier to keep data updated and linked with non-OSM data. There are many industries that could benefit from the inclusion of OSM data as a supplemental dataset, but that simultaneously need to keep their data internal. I’d like to work on using GeoGit to maintain the provenance and lineage of data, so that organizations get the benefit of OSM, while also being sure they are maintaining the security and integrity of their internal data. Hopefully one benefit of this will be to actually open up more data and harness the network effects that arise when multiple users can easily collaborate.

Boundless Connections: Joshua S. Campbell (Part 1)

Say hello to Josh Campbell, our new Vice President for Product Development! Josh brings vast experience building geographic computing infrastructures in academia and government. I sat down to talk to Josh about what he’s done, where he’s been, and where he’s going to take us.

Welcome to the team! What was your role before you joined us?

I was a Geographer and GIS Architect at the Humanitarian Information Unit (HIU), a division of the Office of the Geographer and Global Issues in the Bureau of Intelligence and Research at the State Department. At the HIU we utilized geographic technology and spatial analysis to do research and analysis of complex humanitarian emergencies. Additionally, as part of the Office of the Geographer, we were trying to demonstrate to the Department the value of leveraging the geographic dimension of their data. Given the humanitarian mission of the HIU, the global footprint of the State Department, and the relative lack of legacy GIS, we advocated for the adoption of open source geographic tools, and built several innovative applications from them.

What was your experience as an open source advocate at the Department of State?

I pushed for, and was ultimately successful in, getting a range of new geographic software approved for use on the department’s network, both open source and proprietary. The first package that we got approved was OpenGeo Suite 3.0.2, but I also pushed for getting Google Earth, Mapbox, GeoIQ, and Metacarta approved. Today, OpenGeo Suite 4.0.2 and GeoNode 2.0.1 are in review and on their way towards being approved. It’s a big point of pride for me that we were able to succeed in bringing this new range of tools into the Department.

Where does your passion for open source come from?

On the campus of the University of Kansas. As part of my PhD coursework, I took my first WebGIS class in 2006 and learned the basics of putting geographic data and maps on the web, utilizing mostly ArcIMS and MapServer. I then attended FOSS4G in 2007 and got my first real introduction to the broad ecosystem of open source geospatial tools and the folks who build them. Between 2007 and 2010, I designed and built a comprehensive WebGIS for the Kansas Applied Remote Sensing program, a research group responsible for mapping and analyzing the biological and ecological diversity in Kansas and the Great Plains. Over three years, I led a small team that deployed a WebGIS infrastructure composed of a hybrid open/proprietary solution that used ArcGIS Server, GeoNetwork, and Django.

After KARS, I joined the Humanitarian Information Unit at the Department of State and brought with me the experience of building a brand new GIS system from scratch. Since I had just come from building a hybrid solution, I was tasked with doing something similar for State. Unlike KARS, however, State had no legacy GIS systems and no existing site licenses, so we started with a clean slate. Pursuing a proprietary strategy would have been incredibly expensive, as State has a global network of embassies and consulates, so we decided to take an open source approach. By 2010, the OpenGeo Suite and other open source tools had become robust enough for enterprise deployment, and I chose open source solutions instead of proprietary ones.

The Department of State is a customer of Boundless. What was your experience with OpenGeo Suite support like?

Boundless (at the time still called OpenGeo) formed the software foundation of our approach to bringing a geographic computing infrastructure to State. The first thing we had to do to make this happen was get the software accredited for use on the internal network (not a trivial task in the government). Typically, open source tools are hindered from getting network accreditation because there is no corporate or organizational body that can assemble the required documents or provide support and maintenance. We had to push to get that done, and I remember Eddie Pickle actually wrote the documents himself! Juan Marin helped with that process as well, discussing the architecture and system design of the OpenGeo Suite. Having an organization to rely on for that was very helpful. We also encountered some bugs along the way, as well as challenges with deployments on multiple operating systems, and the support team was key to overcoming those challenges and keeping the project’s momentum.

Did State contribute any modifications back to the software?

We contributed in three ways. First, we put resources into core development, helping to get PostGIS 2.0 completed, and now with GeoNode. All of these advancements went directly into the software core. Second, we published our customized tweaks and projects via HIU’s GitHub account. This took a bit longer than originally planned, but some good stuff is up there now, and more should become publicly available over the next couple of months. Third, we worked across the interagency to support the development of open source tools. For example, the HIU is part of the management team of the ROGUE project that built the GeoGit library.

What are you most proud of from your time at the Department of State?

Two things: Imagery to the Crowd, and increasing the appreciation of geography in the Department. As we demonstrated the applications we were building, more and more people saw the power of maps and geographic data. We quickly realized there was more demand than we could answer within the humanitarian context. Working alongside colleagues in State’s IT Bureau, we established a geographic development team within the eDiplomacy division to help build geographic applications. They have already done some great work, all with open source tools, and more is coming. I’m proud to have left that legacy.

What drove the creation of Imagery to the Crowd and MapGive?

The inspiration for Imagery to the Crowd, and now MapGive, was the 2010 Haiti earthquake response, where the power of online volunteers, OpenStreetMap (OSM) and easily-accessible commercial satellite imagery combined to change humanitarian response forever. Imagery to the Crowd was our attempt to combine several technological and societal trends to create a repeatable, open, crowdsourced process to support humanitarian response through free and open geographic data. And to do it from inside the government  — innovation at its hardest! We took commercial satellite data that the government had already purchased, processed and hosted it as Tiled Map Services in the Amazon cloud, then made it available to online mapping volunteers who created free and open geographic data stored in the OpenStreetMap database, ensuring it’s free for anyone’s use. The return on investment here is something that I think everyone can get behind.

The first test we did was to map refugee camps in the Horn of Africa during the 2011 famine. While there was plenty of data about the camp populations – the UN was tracking it in real-time – the data in OpenStreetMap was all but nonexistent. So we knew 50,000 refugees were living in a location, but it was a blank area in the OSM database. Using the Imagery to the Crowd methodology, the Department of State worked with the Humanitarian OpenStreetMap Team, using the OSM Tasking Manager to source volunteers to improve the data. We put out a call and had over 40 volunteers mapping the areas and putting 600k people living in the camps on the map. Over the past two years, this process was deployed 15 other times, supporting disaster response, community resilience, disaster risk reduction and sustainable development.

Check back tomorrow for the second half of the interview, which will  focus on his efforts with the future direction of data collaboration, creation, and editing products at Boundless. 

GeoScript in Action: Part 3

Soumya Sengupta

In the first part of this series we explored GeoScript basics, sliced and diced some solar intensity data, and created several visualizations. In the second part, we scripted a Web Processing Service (WPS) using the Python implementation of GeoScript. This script suggests the sunniest possible drive inside your current state based on your location and the number of stops you want to make. In this post, we will put together a simple web application that allows you to interact with the WPS service.

Goals of the Web Application

Using the web application, a user should be able to specify a starting point (either by clicking on a map or by using the browser’s geolocation capabilities) and the maximum number of sunny stops to make on the journey. The web application should then interact with the WPS to determine sunny spots based on the inputs provided. Finally, as an added feature, the web application will determine a route for the journey that starts from the origin and stops at the sunny spots.

User Interface

The web application user interface (UI) is designed to be pretty simple. The following figure shows the major components:

GSBlog3UI.png

The steps the user has to follow are provided on the right-hand side. The results are shown in the different sections below that and on the map. The map itself has typical controls like a pan-zoom bar, a pan button, a drag-box based zoom, a mouse-position based coordinate notifier, and layer attributions.

Application Design

The web application is built using HTML5, JavaScript, and CSS. Major aspects of the application, like the interaction with the map and the WPS client, are implemented using OpenLayers 2 (version 2.13.1). The routing is implemented using the MapQuest Open Directions Service. The UI was stitched together using jQuery, Font Awesome, and elements from the Map Icons Collection. The base map used in this application is provided by OpenStreetMap.

Implementation

The code of the web application can be found here and consists of an index.html page, a JavaScript file (js/gsblog3.js), and a CSS file (css/gsblog3.css). As expected, the single web page defines the UI elements while the CSS file makes them look good and the JavaScript file controls all the interactions.

The critical part of the JavaScript code is the interaction with the WPS server. The following code snippet, specifically the parts surrounding the OpenLayers.WPSClient class, shows how it is done:

// The 2 inputs to the WPS process.
var origin = new OpenLayers.Geometry.Point(startingPoint.lon, startingPoint.lat);
var waypoints = $('#stops').val();

// The WPS server needs to be specified. CORS restrictions apply.
// Bypassed by using ProxyPass directives on local Apache HTTPD.
var wpsClient = new OpenLayers.WPSClient({
    servers: {
         wpsserver: 'http://localhost/geoserver/wps'
    }
});

// Details of the WPS interaction.
wpsClient.execute({
    server: "wpsserver",
    process: "py:solar_wps",
    inputs: {
        origin: origin,
        waypoints: waypoints
    },
    success: function(outputs) {
        $.event.trigger({
            type: 'wpsClientExecuted',
            output: outputs
        });
    }
});

Using WPS

As you might recall, the WPS process that we created in the previous post requires two inputs: the origin (what we call the starting point in the UI) and the number of waypoints (the number of sunny stops in the UI). After collecting those, the code proceeds to define the WPS client to the WPS server. In this case, the WPS server is hosted locally by a GeoServer instance with the WPS plugin. The WPS client then executes the request against the WPS process and handles the response.

If the web application is viewed in Google Chrome with Developer Tools activated, the WPS request and response can be seen. The following figures show a sample request-response combination.

Sample Request

request.png

Sample Response (truncated)

response.png

Another thing to note is that the points returned by the WPS may contain duplicates. So, if the user asks for five waypoints, the WPS will return five points, but there might be only two unique points. To make the output more meaningful, the JavaScript code provided only works with unique points.
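A small helper along those lines might look like this; uniqueWaypoints is a hypothetical name (it is not part of the published code), and it assumes the WPS output has already been parsed into OpenLayers.Geometry.Point objects:

function uniqueWaypoints(points) {
    var seen = {};
    return points.filter(function(point) {
        var key = point.x + ',' + point.y;
        if (seen[key]) {
            return false;  // drop duplicates of a point we already kept
        }
        seen[key] = true;
        return true;
    });
}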

External Services

As is typical in many modern web applications, delivering different functionalities within the application requires the ability to communicate with external services. In this case, the application (running off an Apache HTTPD server) needs to communicate with the WPS server and (later on) the MapQuest routing service. In such situations, the web application is generally subject to strict Cross Origin Resource Sharing (CORS) policies. There are various ways to address the restrictions, but in this case we chose to configure our local Apache HTTPD with a few ProxyPass/ProxyPassReverse directives.

Web Application in Action

Here are a few screenshots showing the site in action.

After selecting the starting point by clicking on the map:

screenshot1.png

After running the WPS process and generating the route by clicking the Create Journey button:

screenshot2.png

Our Professional Services team works side-by-side with your team to help you make the most of your technology investment. Contact us to learn more!

 

GeoNYC April: Getting a grasp, from statistical analysis to real-time monitoring

Last month, GeoNYC brought together a trio of mapping projects that focused on analysis and real-time monitoring as Raz Schwartz, Jane Stewart Adam, and Ekene Ijeoma presented their projects to the GeoNYC community at our April event held at Cornell’s NYC Tech campus.

If you couldn’t make it to the event and want to check out what you missed, visit the treasure trove of past presentations. Use the hashtag #geonyc on Twitter or follow the conversations at Storify for a more complete picture of the event.

Don’t forget to attend GeoNYC tonight as Mauricio Giraldo (@mgiraldo), Kevin Webb (@kvnweb), Sharai Lewis-Gruss (@LoveRaiRai), and Dr. Raj Singh (@opengeospatial) discuss the changing world of open source geospatial software.
 

Raz Schwartz

Raz (@razsc), a post-doctoral researcher at Cornell Tech and a Magic Tech fellow, presented CityBeat. It’s a real-time event detection and city-wide statistics application that sources, monitors, and analyzes hyper-local information from multiple social media platforms. What can you use it for?

CityBeat uses the massive amount of live geotagged information that is available from various social media platforms. It can be used to better understand the pulse of the city using the real-time streams of geo-tagged information coming from Instagram, Twitter, and Foursquare.

 

Jane Stewart Adam

Jane (@thejunglejane), a grad student at NYU CUSP and apparently the only geostatistician for miles, presented KrigPy, a spatial interpolation library for Python that was refactored from the R gstat package. KrigPy has the major functionalities of the gstat package: variogram modeling; simple, ordinary, and universal point or block Kriging; sequential Gaussian or indicator (co)simulation; and variogram and variogram map plotting utility functions. The project will be available on GitHub soon.

 

Ekene Ijeoma

Ekene (@ekeneijeoma) talked about The Refugee Project, a collaboration with Hyperakt, which reveals the ebb and flow of global refugee migration over the last four decades based on data from the UN and UNHCR. It expands and reflects on the data, telling stories about socio-political events which evolved into mass migrations. It was agreed that it was quite beautiful:

 

Thanks!

A big thanks to Raz Schwartz at Cornell Tech for providing the space. And a special thanks to GeoNYC sponsors — Boundless, CartoDB and Esri — for supporting the event.