Spanning the Globe

Eddie Pickle

It’s rather interesting (and really fun) to be a part of both the “traditional” GIS/BI community and the newer geoweb community. Because our geospatial software makes this possible, I attended two radically different events in the past week that span this divide. One, the Location Intelligence Summit, is an established event held on corporate turf at the Washington DC Convention Center. The other, the GeoWeb Summit, is a newer meetup held in the grittier (and yet somehow more expensive) Dumbo neighborhood of Brooklyn. I felt right at home in each.

The Location Intelligence Summit, which Jody Garnett blogged about in great detail, did a great job representing the Old Guard. The event pre-dates Google Earth, reaching back to a time when “location intelligence” came from geocoding customer records while Navteq and TeleAtlas drove the streets. Back then, “spatial was special” and getting and managing data in a GIS was a major effort.

GeoWeb Summit came later, when geospatial data started to come from everything (mobile devices, cameras, sensors everywhere — even my dog has a chip now!). Now spatial is context for data and applications, not necessarily more special than any other aspect of data. We call this Spatial IT, and Paul Ramsey presented on it at length.

Fast forward to today, and at both events there was a strong drive to get to the “now” — to capture, analyze, and visualize real-time data; leverage the crowd; track things indoors and out; and so much more. At each conference it was clear that the applications enterprises seek for modern workflows, monitoring, analysis, application serving, and more could not happen without the interoperable, scalable capabilities of open source. In the case of the Location Intelligence world, I was part of a whole new LocationTech track for high performance, location-aware technology. Here, open source is a way out of the historical trap of proprietary lock-in. For the GeoWeb Summit, that’s a starting assumption. In both cases, it’s great to be a foundation for the future.

 

Thoughts from Location Intelligence 2014

Jody Garnett

For two and a half days last week, Location Intelligence 2014 took place in Washington DC. The conference was hosted by Directions Magazine, Oracle, Here (a Nokia business) and the Eclipse Foundation’s LocationTech initiative. The result was a diverse mix representing our industry and a strong outreach to the Business Intelligence community.

Workshops

The first ‘half day’ of the conference started with workshops.


David Winslow and Juan Marin from Boundless offered a tag team introduction to GeoGig (formerly GeoGit) called Redefining Geospatial Data Versioning: The GeoGig Approach. The highlight of this workshop was a live demonstration of the GeoGig plugin for QGIS. My contribution was a series of illustrations showing each step of the workshop and running around the room to assist as needed.


Next up was a workshop with Ivan Lucena from the Oracle Raster team called Developing Geospatial Applications with uDig and Oracle Spatial. Ivan has been working on integrating Oracle Raster with GeoTools and uDig. I was quite pleased with the workshop participation, and there was a great Q&A session with Xavier Lopez covering Oracle’s involvement in open source.

LocationTech Summit

The second day featured the LocationTech Summit, primarily moderated by Andrew Ross with help from Geoff Zeiss. Presentations were recorded, and can be found on this YouTube playlist.


Modern workflow for managing Spatial data: Eddie Pickle was on hand to introduce us as a company and offer his own take on where the industry needs to go next. Watch on YouTube.

High performance computing at LocationTech: Xavier Lopez went over Oracle’s interest in open source spatial technologies and their support of LocationTech to facilitate this direction.

21st Century Geospatial Data Processing: Robert Cheetham (Azavea) from the GeoTrellis project provided context for his work. Watch on YouTube.

Open Data straight from the Source: Andrew Turner of Esri provided a straightforward perspective on his recent open data efforts. I quite enjoyed the cross-linking from GeoJSON files on GitHub to browsable map and spreadsheet display. Watch on YouTube.

Redefining Geospatial data versioning: The GeoGig approach: Juan Marin from Boundless provided a well received GeoGig presentation, resulting in much discussion, some of which is creeping out onto the internet in the form of a blog post by Geoff Zeiss — thanks Geoff! Watch on YouTube.

Collaborative Mapping using GeoGig: Scott Clark from LMN Solutions presented a GeoGig success story. I was impressed with how far they have come using technology from all over the OpenGeo stack. Scott previewed a live demo server if you would like to see how they are doing. One aspect of Scott’s presentation I appreciated was the emphasis that you can use GeoGig today by making use of the tools (and desktop apps) you are already familiar with. Watch on YouTube.

GeoMesa: Scalable Geospatial Analytics: Anthony Fox of CCRi offered an introduction to one of LocationTech’s big cloud powerhouses. Watch on YouTube.

Real-time Raster Processing with GeoTrellis: Robert Cheetham was back with the details this time around. I was a bit taken by surprise by the scope of GeoTrellis as it moves beyond cloud raster work and starts to look into network analysis. They have a demo up if you would like to take a look. Watch on YouTube.

Fusing Structured and Unstructured Data for Geospatial Insights in Lumify: A polished presentation from Altamira. The open source project clavin.io is responsible for extracting location information from otherwise innocent text documents. Watch on YouTube.

The last “from data to action” session provided a series of inspirational stories:

  • Erek Dyskant from Bluelabs described using the usual suspects (PostGIS, GeoServer, OpenLayers, QGIS) in their talk. The take-home for me was the reduction in labor from a 150-person analytics team in 2012 down to eight in 2013. How? By focusing on up-to-date data, rather than being distracted by a BI tool.

  • The talk on Red Hook Wifi was great at a technical buzzword bingo level (wifi mesh network!) and human level (enabling communication after Hurricane Sandy hit).

All in all, the LocationTech Summit was a great addition to the event. It offered a wide range of technology, data, and human stories for those who attended. On a technical front, I was quite pleased with the number of teams using GeoServer and happy to see GeoServer WPS being used in production.

Wrapping Up

The final day was devoted to the Oracle community, resulting in some great conversations at the Boundless booth.


I was pleased to meet up with a fellow Australian, Simon Greener, who was (as always) focused on Oracle performance.

Thanks to the conference organisers for an entertaining venue to talk about the technologies we know and love.

Citi Bike Analysis and Automated Workflows with QGIS

Victor Olaya

Citi Bike, the bike share system in New York City, provides some interesting data that can be analyzed in many different ways. We liked this analysis from Ben Wellington that was performed using IPython and presented using QGIS. Since we wanted to demonstrate the power of QGIS, we decided to replicate his analysis entirely in QGIS and go a little bit further by adding some extra tasks. We automated the whole process, making it easy to add new layers corresponding to more recent data in the future.

Data can be found on the Citi Bike website and is provided on a monthly basis. We will show how to process one of these monthly files (February 2014) and discuss how to automate processing for the whole set of files. We will also use a layer with the borough boundaries from the NYC Department of City Planning.

Getting started

Open both the trip data table and the borough boundaries layer in QGIS. Tables open as vector layers and are handled in much the same way, except that they lack geometries. This is what the table with trip data looks like.

attribute_table

We have to process the data in order to create a new points layer, with each point representing the location of an available bike station and having the following associated values:

  • Median value of age
  • Median value of trip duration
  • Percentage of male users
  • Percentage of subscribed users
  • Total number of outgoing trips

Computing a new layer

As we did in a previous bike share post, we can use a script to compute this new layer. We can add a new algorithm to the Processing framework by writing a Python script. This will make the algorithm available for later use and, as we will see later, will integrate it in all the Processing components.

You can find the script here. Add it to your collection of scripts and you should see a new algorithm named “Summarize Citi Bike data” in the toolbox.
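For the curious, here is a rough sketch of what such a script computes, written for the QGIS 2.x Python console rather than as a Processing script. The layer name ('trips') and the field names ('tripduration', 'start station id', and so on) are assumptions based on the Citi Bike CSV headers, so the actual script linked above may differ in the details.

# Console sketch (QGIS 2.x / Python 2): group trips by start station and
# build an in-memory point layer with the five per-station statistics.
from collections import defaultdict
from qgis.core import (QgsVectorLayer, QgsFeature, QgsField, QgsGeometry,
                       QgsPoint, QgsMapLayerRegistry)
from PyQt4.QtCore import QVariant

def median(values):
    values = sorted(values)
    n = len(values)
    return values[n // 2] if n % 2 else (values[n // 2 - 1] + values[n // 2]) / 2.0

trips = QgsMapLayerRegistry.instance().mapLayersByName('trips')[0]
by_station = defaultdict(list)
for f in trips.getFeatures():
    by_station[f['start station id']].append(f)

out = QgsVectorLayer('Point?crs=EPSG:4326', 'stations', 'memory')
provider = out.dataProvider()
provider.addAttributes([QgsField(name, QVariant.Double) for name in
                        ('median_age', 'median_duration', 'pct_male',
                         'pct_subscriber', 'trips')])
out.updateFields()

features = []
for station_id, rows in by_station.iteritems():
    first = rows[0]
    ages = [2014 - int(r['birth year']) for r in rows
            if str(r['birth year']).isdigit()]
    feat = QgsFeature()
    feat.setGeometry(QgsGeometry.fromPoint(QgsPoint(
        float(first['start station longitude']),
        float(first['start station latitude']))))
    feat.setAttributes([
        median(ages) if ages else 0,
        median([float(r['tripduration']) for r in rows]),
        100.0 * sum(1 for r in rows if str(r['gender']) == '1') / len(rows),
        100.0 * sum(1 for r in rows if r['usertype'] == 'Subscriber') / len(rows),
        len(rows)])
    features.append(feat)

provider.addFeatures(features)
out.updateExtents()
QgsMapLayerRegistry.instance().addMapLayer(out)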

Double click on the algorithm name to execute it and enter the table with the trips data as input.

param_dialog_script

Running the algorithm will create a new layer with stations, containing the computed statistics for each of them in the attribute table.

points

To create an influence zone around each point, we can use the Voronoi Polygons algorithm.

param_dialog_voronoi

We are using a buffer zone to cover all of Manhattan. Otherwise, the polygons would just include the area within the convex hull of the station points. Here is the output layer.

voronoi

The final step is to clip the polygons with the borough boundaries layer, to remove areas overlapping with water. We will use the Clip algorithm, resulting in this:

output_clip
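If you prefer to script these two steps rather than run them from the toolbox, they can also be chained from the QGIS Python console. This is a minimal sketch assuming the QGIS 2.x Processing algorithm ids ('qgis:voronoipolygons' and 'qgis:clip') and the layer names used above; adjust the buffer percentage and names as needed.

import processing
from qgis.utils import iface

stations = processing.getObject('stations')   # point layer created by the script above
boroughs = processing.getObject('nybb')       # borough boundaries layer (assumed name)

# Voronoi polygons, expanded beyond the stations' convex hull so that the
# influence zones cover all of Manhattan.
voronoi = processing.runalg('qgis:voronoipolygons', stations, 100, None)

# Clip the polygons with the borough boundaries to drop the areas over water.
clipped = processing.runalg('qgis:clip', voronoi['OUTPUT'], boroughs, None)
iface.addVectorLayer(clipped['OUTPUT'], 'influence zones', 'ogr')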

Visualizing the data

We can now change the style and set a color ramp based on any of the variables that we have computed. Here is one based on the median trip duration.

trip_time

Up to this point, we have replicated the result of the original blog entry, but we can go a bit further rather easily.

Creating a model

For instance, let’s suppose that you want to do the same calculation for other months. The simplest alternative would be to open all the corresponding tables and re-run all the above steps for each of them. However, it would be better if we could put all of those steps in a single algorithm that computes the final polygons from the input table. We can do that by creating a model.

Models are created by opening the graphical modeler and adding inputs and algorithms to define a workflow. A model defining the workflow that we have followed would look like this.

model

In case you want to try it yourself, you can download it here. Now, for a new set of data (a new input table), you just have to run a single algorithm (the model that we have just created) in order to get the corresponding polygons. The parameter dialog of the model looks like this:

param_dialog_model

 

Batch processing and other options

This, however, might be a lengthy operation once we have a few tables to process, but we can add some additional automation. The model we have created can be used just like any other algorithm in Processing, meaning that we can use it in the Processing batch interface. Right-clicking on the model and selecting “Execute as batch process” will open the batch processing dialog.

batch

Just select the layers to process and the output filenames in the corresponding columns, and the set of resulting layers will be computed automatically in a single execution.
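The same kind of automation can also be scripted. The sketch below loops over the monthly CSV files and runs the model on each one from the Python console; the model's algorithm id ('modeler:citibike') and its parameter list are placeholders, so check the real id with processing.alglist() after saving your model.

import glob
import processing

boroughs = processing.getObject('nybb')   # borough boundaries layer (assumed name)
for csv_file in glob.glob('/data/citibike/*-citibike-tripdata.csv'):
    out_file = csv_file.replace('.csv', '-zones.shp')
    # Hypothetical model id and inputs: trip table, borough layer, output polygons.
    processing.runalg('modeler:citibike', csv_file, boroughs, out_file)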

With a bit of additional work, these layers can be used, for instance, to run the TimeManager plugin and create an animation, which will help us understand how bike system usage varies over the course of the year.

Other improvements can also be added. One of them would be to write a short script that creates an SLD file for each layer based on the layer data, adjusting the boundaries of the color ramp to the minimum and maximum values in the layer, or using some other criteria. That would allow us to create a data-driven symbology, and we could add the algorithm created with that script as a new step in our model.
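Here is a rough sketch of what such a script could look like from the QGIS 2.x console: it reads the minimum and maximum of a field from the layer and writes a small graduated SLD. The layer name, field name, colors, and class breaks are all assumptions for illustration; the real script could be registered with Processing using the same “##” parameter declarations as the GeoServer import script shown in the next section.

from qgis.core import QgsMapLayerRegistry

RULE = """    <Rule>
      <ogc:Filter>
        <ogc:PropertyIsBetween>
          <ogc:PropertyName>{field}</ogc:PropertyName>
          <ogc:LowerBoundary><ogc:Literal>{lo}</ogc:Literal></ogc:LowerBoundary>
          <ogc:UpperBoundary><ogc:Literal>{hi}</ogc:Literal></ogc:UpperBoundary>
        </ogc:PropertyIsBetween>
      </ogc:Filter>
      <PolygonSymbolizer>
        <Fill><CssParameter name="fill">{color}</CssParameter></Fill>
      </PolygonSymbolizer>
    </Rule>"""

SLD = """<StyledLayerDescriptor version="1.0.0"
    xmlns="http://www.opengis.net/sld" xmlns:ogc="http://www.opengis.net/ogc">
  <NamedLayer><Name>{name}</Name><UserStyle><FeatureTypeStyle>
{rules}
  </FeatureTypeStyle></UserStyle></NamedLayer>
</StyledLayerDescriptor>"""

def write_sld(layer, field, colors, path):
    # Class breaks are spread evenly between the field's min and max values.
    idx = layer.fieldNameIndex(field)
    lo, hi = float(layer.minimumValue(idx)), float(layer.maximumValue(idx))
    step = (hi - lo) / len(colors)
    rules = [RULE.format(field=field, lo=lo + i * step, hi=lo + (i + 1) * step,
                         color=c) for i, c in enumerate(colors)]
    with open(path, 'w') as f:
        f.write(SLD.format(name=layer.name(), rules='\n'.join(rules)))

zones = QgsMapLayerRegistry.instance().mapLayersByName('influence zones')[0]
write_sld(zones, 'median_duration', ['#fee8c8', '#fdbb84', '#e34a33'], '/tmp/zones.sld')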

Publishing to OpenGeo Suite

Another improvement that we can add is to link our model to the publishing capabilities of the OpenGeo Suite plugin. If we want to publish the layers that we create to a GeoServer instance, we can also do it from QGIS. Furthermore, we can call that functionality from Processing, so publishing a layer and setting a style can be included as additional steps of our model, automating the whole workflow.

First, we need a script to upload the layer and a style to GeoServer, calling the OpenGeo Suite Plugin API. The code of this script will look like this:

##Boundless=group
##Import styled layer to GeoServer=name
##Layer=vector
##SLD_style_file=file
##url=string
##user=string
##password=string
##workspace=string
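# The '##' lines above declare the script's inputs to Processing: its group,
# its display name, and its parameters.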

from qgis.core import *
from PyQt4.QtCore import *
import processing
from opengeo.qgis.catalog import createGeoServerCatalog

layer = processing.getObject(Layer)
layer.loadSldStyle(SLD_style_file)
catalog = createGeoServerCatalog(url, user, password)
ws = catalog.catalog.get_workspace(workspace)
catalog.publishLayer(layer, ws)

You can copy that code and create a new script or you can install this script file.

A new algorithm is now available in your toolbox: “Import styled layer to GeoServer”.

param_dialog_geoserver

Select the model that we created, right-click on its name and select “Edit model” to edit it.

You can now add the import script algorithm to the model, so it takes the resulting layer and imports it.

model_extended

Notice that, although our script takes the URL and other needed parameters for the import operation, these are not requested of the user when running the model, since we have hardcoded them assuming that we will always upload to the same server. This can of course be changed easily to adapt to a given scenario.

Running the algorithm will now compute the polygons from the input table, import them, and set a given style, all in a single operation.

The style is selected as another input of the model, and it has to be entered as an SLD file. You can easily generate an SLD file from QGIS, just by defining it in the properties of the layer and then exporting as SLD. The SLD produced by QGIS is not fully compatible with GeoServer, but the OpenGeo Suite plugin will take care of that before actually sending it to GeoServer.

Although we have left the style as an input that has to be selected in each execution, we could hardcode it in the model, or even in the script dialog used for importing. Again, there are several alternatives to build our scripts and models.

Of course, this improved model can also be run in batch processing mode, like we did with the original one.

Conclusion

QGIS is an ideal tool for working with spatial data and analyzing it, and with our plugin, it is also an easy interface for publishing data to OpenGeo Suite. Integrating Processing with the OpenGeo Suite plugin enables all sorts of automated analysis and publishing workflows, all executed from a single application.

Boundless recognized as a TiE50 2014 finalist!

Tie50 2014 Finalist

Congratulations to our Boundless team: TiECon, the world’s largest conference for entrepreneurs, has named us a TiE50 Finalist at its 2014 award ceremony last week.

For the last five years, TiECon has recognized promising startups like ours, and I am not surprised that the work of our team has been noticed by others in the startup community. It is great to be recognized for our accomplishments after such a short time in the commercial market, and I believe this is validation that we are making a difference by offering a powerful alternative to proprietary systems.

Boundless’ mission is to develop and maintain the best open source geospatial software, something that is central to all of our efforts. Given the power and growing use of geospatial data and applications, our platform is a natural fit in support of rapidly changing needs across various industries. To paraphrase Eric Raymond, we provide freedom from the harms of “unhackability,” lock-in, and amnesia, meaning that enterprises manage their own timetables for software lifespan, migration, and obsolescence. Essentially, open source software like ours provides businesses with substantially greater control than closed source alternatives. This is critical given the pace of innovation in every industry using geospatial data.

We pride ourselves on providing the only viable open source mapping software platform, one that solves the most complex geospatial challenges.  As we look toward the future, we will continue to work with our developer community to build the best software for today’s mapping and geospatial needs and to grow and develop all of our services, mapping the way to a strong future for enterprise geospatial apps.

Boundless Connections: Joshua S. Campbell (Part 2)

Yesterday we posted about what Josh Campbell, our new Vice President for Product Development, has accomplished during his tenure at the US Department of State. This second half of the interview focuses on his efforts with the future direction of data collaboration, creation, and editing products at Boundless. 

Your online persona is “Disruptive Geo”. Where does that come from?

It comes from the coursework for my PhD in geography. My research was mainly in geographic information science, and beyond that I needed to take additional research coursework in computer science and entrepreneurship, eventually taking five classes in each. One of the business courses was about “Disruptive Innovation”, a conceptual model for predicting the outcomes of competitive battles in different circumstances. This framework has become incredibly important to me: How do you evaluate ideas in a competitive market? Which ideas will succeed? How do you structure organizations to utilize a disruptive strategy? This approach resonated so strongly because I believe that the combined mix of geographic data, spatial analysis tools, and open source methodologies can be highly disruptive to many industries.

What will you be working on here at Boundless?

I’ll be looking at what we can do with data collaboration tools, distributed versioning, and crowdsourcing methods to unlock value in traditional data management workflows. Our new approach to distributed versioning is incredibly powerful, potentially a game changer in geographic information science, and I’ll be looking at a cross section of industries and applications where we can use it to increase efficiencies and enable new workflows.

One of the critical challenges that organizations are going to face going forward is how to manage the interface between authoritative and crowdsourced data. The power of the crowd is undeniable, but how can that power be harnessed in the context of authoritative datasets, either public or private? With distributed versioning and new workflows, we can bring those worlds together while maintaining data provenance and lineage and granular data security, and provide methods for incorporating edits made outside the organization. If we do it right, we can introduce a new level of collaboration in spatial data.

Where does your interest in this come from?

While at the HIU I led our involvement with the ROGUE project, committing to help expand its use in the humanitarian and development space. I thought the approach to distributed versioning outlined in the project (which became the GeoGit library) addressed many of the long standing challenges of geospatial data management, and was a good investment for the government to make, as it was designed to increase spatial data sharing during humanitarian operations.  Having watched the evolution of GeoGit from a front-row seat, my opinion of that hasn’t changed.

How are you attacking this problem?

First, I’ll be looking at individual industries to find use cases where versioned editing is important.  I have several in mind already based on my experience, and am looking to branch into the utilities, energy, and telecommunications industries. Many of these already use some form of spatial versioning, and can become more efficient using a GeoGit approach. There are likely existing workflows where there is already internal collaboration (different teams, field work, etc), and GeoGit could reduce the overhead cost of business by making internal collaboration better.

Second, I’m going to take a deeper look at OpenStreetMap and its versioning process. With GeoGit there are potentially better ways to structure the changesets, making it easier to keep data updated and linked with non-OSM data. There are many industries that could benefit from the inclusion of OSM data as a supplemental dataset, but that simultaneously need to keep their data internal. I’d like to work on using GeoGit to maintain the provenance and lineage of data, so that organizations get the benefit of OSM while also being sure they are maintaining the security and integrity of their internal data. Hopefully one benefit of this will be to actually open up more data and harness the network effects that arise when multiple users can easily collaborate.

Boundless Connections: Joshua S. Campbell (Part 1)

Say hello to Josh Campbell, our new Vice President for Product Development! Josh brings vast experience building geographic computing infrastructures in academia and government. I sat down to talk to Josh about what he’s done, where he’s been, and where he’s going to take us.

Welcome to the team! What was your role before you joined us?

I was a Geographer and GIS Architect at the Humanitarian Information Unit (HIU), a division of the Office of the Geographer and Global Issues in the Bureau of Intelligence and Research at the State Department. At the HIU we utilized geographic technology and spatial analysis to do research and analysis of complex humanitarian emergencies. Additionally, as part of the Office of the Geographer, we were trying to demonstrate to the Department the value of leveraging the geographic dimension of their data. Given the humanitarian mission of the HIU, the global footprint of the State Department, and the relative lack of legacy GIS, we advocated for the adoption of open source geographic tools, and built several innovative applications from them.

What was your experience as an open source advocate at the Department of State?

I pushed for, and was ultimately successful in, getting a range of new geographic software approved for use on the department’s network, both open source and proprietary. The first package that we got approved was OpenGeo Suite 3.0.2, but I also pushed for getting Google Earth, Mapbox, GeoIQ, and Metacarta approved. Today, OpenGeo Suite 4.0.2 and GeoNode 2.0.1 are in review and on their way towards being approved. It’s a big point of pride for me that we were able to succeed in bringing this new range of tools into the Department.

Where does your passion for open source come from?

On the campus of the University of Kansas. As part of my PhD coursework, I took my first WebGIS class in 2006 and learned the basics of putting geographic data and maps on the web, utilizing mostly ArcIMS and MapServer. I then attended FOSS4G in 2007 and got my first real introduction to the broad ecosystem of open source geospatial tools and the folks who build them. Between 2007 and 2010, I designed and built a comprehensive WebGIS for the Kansas Applied Remote Sensing program, a research group responsible for mapping and analyzing the biological and ecological diversity in Kansas and the Great Plains. Over three years, I led a small team that deployed a WebGIS infrastructure comprised of a hybrid open/proprietary solution that used ArcGIS Server, GeoNetwork, and Django.

After KARS, I joined the Humanitarian Information Unit at the Department of State and brought with me the experience of building a brand new GIS system from scratch. Since I had just come from building a hybrid solution, I was tasked with doing something similar for State. Unlike KARS, however, State had no legacy GIS systems and no existing site licenses, so we started with a clean slate. Pursuing a proprietary strategy would have been incredibly expensive, as State has a global network of embassies and consulates, so we decided to take an open source approach. By 2010, the OpenGeo Suite and other open source tools had become robust enough for enterprise deployment, and I chose open source solutions instead of proprietary ones.

The Department of State is a customer of Boundless. What was your experience with OpenGeo Suite support like?

Boundless (at the time still called OpenGeo) formed the software foundation of our approach to bringing a geographic computing infrastructure to State. The first thing we had to do to make this happen was get the software accredited for use on the internal network (not a trivial task in the government). Typically, open source tools are hindered from getting network accreditation because there is no corporate or organizational body that can assemble the required documents or provide support and maintenance. We had to push to get that done and I remember Eddie Pickle actually wrote the documents himself! Juan Marin helped with that process as well, discussing the architecture and system design of the OpenGeo Suite. Having an organization to rely on for that was very helpful. We also encountered some bugs along the way and had to deploy on multiple operating systems, and the support team was key to overcoming those challenges and keeping the project’s momentum.

Did State contribute any modifications back to the software?

We contributed in three ways. First, we put resources on core development, helping to get PostGIS 2.0 completed, and now with GeoNode. All of these advancements went directly into the software core. Second, we published our customized tweaks and projects via HIU’s GitHub account. This took a bit longer than originally planned, but some good stuff is up there now, and more should become publicly available over the next couple of months. Third, we worked across the interagency to support the development of open source tools. For example, the HIU is part of the management team of the ROGUE project that built the GeoGit library.

What are you most proud of from your time at the Department of State?

Two things: Imagery to the Crowd and increasing the appreciation of geography in the Department. As we demonstrated the applications we were building, more and more people saw the power of maps and geographic data. We quickly realized there was more demand than we could answer within the humanitarian context. Working alongside colleagues in State’s IT Bureau, we established a geographic development team within the eDiplomacy division to help build geographic applications. They have already done some great work, all with open source tools, and more is coming. I’m proud to have left that legacy.

What drove the creation of Imagery to the Crowd and MapGive?

The inspiration for Imagery to the Crowd, and now MapGive, was the 2010 Haiti earthquake response, where the power of online volunteers, OpenStreetMap (OSM) and easily-accessible commercial satellite imagery combined to change humanitarian response forever. Imagery to the Crowd was our attempt to combine several technological and societal trends to create a repeatable, open, crowdsourced process to support humanitarian response through free and open geographic data. And to do it from inside the government  — innovation at its hardest! We took commercial satellite data that the government had already purchased, processed and hosted it as Tiled Map Services in the Amazon cloud, then made it available to online mapping volunteers who created free and open geographic data stored in the OpenStreetMap database, ensuring it’s free for anyone’s use. The return on investment here is something that I think everyone can get behind.

The first test we did was to map refugee camps in the Horn of Africa during the 2011 famine. While there was plenty of data about the camp populations – the UN was tracking it in real-time – the data in OpenStreetMap was all but nonexistent. So we knew 50,000 refugees were living in a location, but it was a blank area in the OSM database. Using the Imagery to the Crowd methodology, the Department of State worked with the Humanitarian OpenStreetMap Team, using the OSM Tasking Manager to recruit volunteers to improve the data. We put out a call and had over 40 volunteers mapping the areas, putting camps housing some 600,000 people on the map. Over the past two years, this process was deployed 15 other times, supporting disaster response, community resilience, disaster risk reduction, and sustainable development.

Check back tomorrow for the second half of the interview, which will focus on his efforts with the future direction of data collaboration, creation, and editing products at Boundless.

GeoScript in Action: Part 3

Soumya Sengupta

In the first part of this series we explored GeoScript basics, sliced and diced some solar intensity data, and created several visualizations. In the second part, we scripted a Web Processing Service (WPS) using the Python implementation of GeoScript. This script suggests the sunniest possible drive inside your current state based on your location and the number of stops you want to make. In this post, we will put together a simple web application that allows you to interact with the WPS service.

Goals of the Web Application

Using the web application, a user should be able to specify a starting point (either by clicking on a map or by using the browser’s geolocation capabilities) and the maximum number of sunny stops the user wants to make on the journey. The web application should then interact with the WPS to determine sunny spots based on the inputs provided. Finally, as an added feature, the web application will determine a route for the journey that starts from the origin and stops at the sunny spots.

User Interface

The web application user interface (UI) is designed to be pretty simple. The following figure shows the major components:

GSBlog3UI.png

The steps the user has to follow are provided on the right-hand side. The results are shown in the different sections below that and on the map. The map itself has typical controls like a pan-zoom bar, a pan button, a drag-box zoom, a mouse-position coordinate readout, and layer attributions.

Application Design

The web application is built using HTML5, JavaScript, and CSS. Major aspects of the application, like the interaction with the map and the WPS client, are implemented using OpenLayers 2 (version 2.13.1). The routing is implemented using the MapQuest Open Directions Service. The UI was stitched together using jQuery, Font Awesome, and elements from the Map Icons Collection. The base map used in this application is provided by OpenStreetMap.

Implementation

The code of the web application can be found here and consists of an index.html page, a JavaScript file (js/gsblog3.js), and a CSS file (css/gsblog3.css). As expected, the single web page defines the UI elements while the CSS file makes them look good and the JavaScript file controls all the interactions.

The critical part of the JavaScript code is the interaction with the WPS server. The following code snippet, particularly the parts involving the OpenLayers.WPSClient class, shows how it is done:

// The 2 inputs to the WPS process.
var origin = new OpenLayers.Geometry.Point(startingPoint.lon, startingPoint.lat);
var waypoints = $('#stops').val();

// The WPS server needs to be specified. CORS restrictions apply.
// Bypassed by using ProxyPass directives on local Apache HTTPD.
var wpsClient = new OpenLayers.WPSClient({
    servers: {
         wpsserver: 'http://localhost/geoserver/wps'
    }
});

// Details of the WPS interaction.
wpsClient.execute({
    server: "wpsserver",
    process: "py:solar_wps",
    inputs: {
        origin: origin,
        waypoints: waypoints
    },
    success: function(outputs) {
        $.event.trigger({
            type: 'wpsClientExecuted',
            output: outputs
        });
    }
});

Using WPS

As you might recall, the WPS process that we created in the previous post requires two inputs: the origin (what we call the starting point in the UI) and the number of waypoints (the number of sunny stops in the UI). After collecting those, the code proceeds to define the WPS client for the WPS server. In this case, the WPS server is hosted locally by a GeoServer instance with the WPS plugin. The WPS client then executes the request against the WPS process and handles the response.

If the web application is viewed in Google Chrome with Developer Tools activated, the WPS request and the response can be seen. The following figures show a sample request-response combination.

Sample Request

request.png

Sample Response (truncated)

response.png

Another thing to note is that the points returned by the WPS may contain duplicates. So, if the user asks for five waypoints, the WPS will return five points but there might be only two unique points. To make the output more meaningful, the JavaScript code provided only works with unique points.

External Services

As is typical in many modern web applications, delivering different functionalities within the application requires the ability to communicate with external services. In this case, the application (running off an Apache HTTPD server) needs to communicate with the WPS server and (later on) the MapQuest routing service. In such situations, the web application is generally subject to strict Cross Origin Resource Sharing (CORS) policies. There are various ways to address the restrictions, but in this case we chose to configure our local Apache HTTPD with a few ProxyPass/ProxyPassReverse directives.
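For reference, the proxy configuration might look something like the following; the paths, port, and upstream hosts are assumptions for a local setup and should be adapted to your own deployment.

# Requires mod_proxy and mod_proxy_http; place in httpd.conf or a VirtualHost.
# GeoServer (and its WPS endpoint) is assumed to be running locally on port 8080.
ProxyPass        /geoserver http://localhost:8080/geoserver
ProxyPassReverse /geoserver http://localhost:8080/geoserver

# MapQuest Open Directions Service, proxied to avoid CORS restrictions.
ProxyPass        /mapquest  http://open.mapquestapi.com
ProxyPassReverse /mapquest  http://open.mapquestapi.com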

Web Application in Action

Here are a few screenshots showing the site in action.

After selection of starting point by clicking on the map:

screenshot1.png

After running the WPS process and computing the route by clicking the Create Journey button:

screenshot2.png

Our Professional Services team works side-by-side with your team to help you make the most of your technology investment. Contact us to learn more!

 

GeoNYC April: Getting a grasp, from statistical analysis to real-time monitoring

Last month, GeoNYC brought together a trio of mapping projects that focused on analysis and real-time monitoring as Raz Schwartz, Jane Stewart Adam and Ekene Ijeoma presented their projects to the GeoNYC community at our April event held at the Cornell Tech campus.

If you couldn’t make it to the event and want to check out what you missed, visit the treasure trove of past presentations. Use the hashtag #geonyc on Twitter or follow the conversations at Storify for a more complete picture of the event.

Don’t forget to attend GeoNYC tonight as Mauricio Giraldo (@mgiraldo), Kevin Webb (@kvnweb), Sharai Lewis-Gruss (@LoveRaiRai), and Dr. Raj Singh (@opengeospatial) discuss the changing world of open source geospatial software.
 

Raz Schwartz

Raz (@razsc), a post-doctoral researcher at Cornell Tech and a Magic Tech fellow, presented CityBeat. It’s a real-time event detection and city-wide statistics application that sources, monitors, and analyzes hyper-local information from multiple social media platforms. What can you use it for?

CityBeat uses the massive amount of live geotagged information that is available from various social media platforms. It can be used to better understand the pulse of the city using the real-time streams of geotagged information coming from Instagram, Twitter, and Foursquare.

 

Jane Stewart Adam

Jane (@thejunglejane), a grad student at NYU CUSP and apparently the only geostatistician for miles, presented KrigPy, a spatial interpolation library for Python that was refactored from the R gstat package. KrigPy has the major functionalities of the gstat package: variogram modeling; simple, ordinary, and universal point or block kriging; sequential Gaussian or indicator (co)simulation; and variogram and variogram map plotting utility functions. The project will be available on GitHub soon.

 

Ekene Ijeoma

Ekene (@ekeneijeoma) talked about The Refugee Project, a collaboration with Hyperakt, which reveals the ebb and flow of global refugee migration over the last four decades based on data from the UN and UNHCR. It expands and reflects on the data, telling stories about socio-political events which evolved into mass migrations. The audience agreed that it was quite beautiful.

 

Thanks!

A big thanks to Raz Schwartz at Cornell Tech for providing the space. And a special thanks to GeoNYC sponsors — Boundless, CartoDB and Esri — for supporting the event.

Thoughts from the Women in GIS Meetup

Alyssa Wright

Earlier this month, Boundless sponsored a panel discussion at our new DC office on the role of women in the GIS industry and the challenges they face. We framed the discussion with the purposely provocative question: “Is your map sexist?”

Our panelists came from a variety of backgrounds: Nadine Alameh of the Open Geospatial Consortium, Bonnie Bogle of Mapbox, Kate Chapman of the Humanitarian OpenStreetMap Team, and Liz Lyon of the US Army Corps of Engineers. Eddie Pickle, CEO of Boundless, opened the discussion with a recap of his experiences during his thirty-year career in geospatial.

Women in GeoSpatial MeetUp from Boundless on Vimeo.

Broad Agreement

There was a general consensus among the panel that women are in the minority when it comes to digital mapmaking and that the geospatial industry presents the same hardships for women joining the workforce — a general lack of introduction and mentorship for women, maternity and child care issues, company culture clashes, and some issues starting as early as college.

The panel also agreed that the field suffers due to a lack of inclusion and advocated for an industry that is served by multiple perspectives. Several panelists noted that being in a minority is not only challenging for the women themselves but also impedes the outputs and potential progress of the industry: it makes it harder for women to work, and it stifles innovation and the quality of work we want to produce as an industry.

From there, the discussion moved on to ways we can overcome these challenges.

Individual Observations

Nadine felt that you don’t see a conscious effort to support other women. She advocated focusing on the next generation of mapmakers. We have to accept them and guide their professional growth into the industry. What can we provide to the next generation, and how? Mentorship and education were noted as some ways to create change. Larger dialogues about the general structure of working in the industry would be useful for all, namely: how we treat each other, how we promote, what constitutes constructive ideas and dialogue, and who gets invited to conferences and how. For women who find themselves working in unaccepting companies, how do they advance? How are women and their issues integrated into the culture?

Kate recommended that we frame these issues within the larger open source context. Other open source companies have taken steps to see women succeed and we should all look at the models used in those communities. Kate recommended “Free as in Sexist: Free culture and the gender gap”.

Liz recognized the role bias and perspective play in mapmaking. She thinks it is important for women to write more and document their voice and experience in the space. In general, women need to improve their own visibility, and that of women in general, within the geospatial industry.

I think that because geospatial has always been about place, the geospatial industry is well-positioned to lead the conversation about creating equity and gender balance in our digital world. The quality of this conversation and the openness of the audience really made me optimistic about women entering both geospatial and open source communities. There has been a lot of movement toward welcoming a diverse community, and I hope that this becomes an ongoing event to which more people can contribute.

Keep the conversation going!

Do you want to contribute and keep the conversation going?  Join us at #geoladies on Twitter!

The new home of Boundless

Anthony Denaro

Boundless is officially on the map at 38º 53′ 46.8632″, -77º 4′ 22.6549″. After a year of co-working, co-locating and telecommuting, Boundless has moved into a new permanent headquarters in the Rosslyn area of Arlington, Virginia. It is just blocks from Metro, has a bike room, commuter showers, a big kitchen and, yes, it even has a ping pong table. Most importantly, it’s ours.


Our endgame was to find an office in a neighborhood that was close to transit, bikeable, and rich in walkable amenities, and to design the office to encourage collaboration among our teams, keep them happy, and play host to the larger open source community.


We began our search along the Rosslyn-Ballston Corridor of Arlington County and Alexandria in Northern Virginia with our real estate broker, Greg Miele of Broad Street Realty. After months of hunting, our broker found this space, which was perfect on almost all counts. The office was designed in-house with collaboration from the DC office staff. OTJ Architects was the architect of record and Monday Properties Construction built out the space.


Our executive team wanted the office to be a collaborative effort and address the needs and desires of everyone working there. The process of designing the space began by asking people what they wanted to have in our office. I created an idea board that collected several dozen articles and photos from staff.


We have a variety of Boundless staff located in the DC area: our executive team and project management team are here alongside members of our design, engineering, and sales teams. They all have different needs that needed to be addressed: Our executive team needs privacy but must remain available. Our sales team needs acoustical privacy and areas to meet with clients. Our design team needs space to brainstorm and collaborate on ideas for our new products. Our project managers need comfortable spaces to make phone calls and hold meetings with the engineering teams. Our software engineers need a place to focus but be available to each other. All need private space to make phone calls or to just work alone. And everyone wants whiteboards!


We included a variety of spaces to work: common meeting spaces, lounge chairs, big tables, phone rooms, clustered desks, multi-person offices, a lounge area, and an area to demo software to clients, with comfortable seats and a projector. A variety of modular furniture was purchased or built: staff had a choice between shared desks, standing desks, or regular solo desks. I kept the purchasing open so staff could specify the type of furniture they wanted to work at. The result is a comfortable, functional office for our developers, designers, and management to collaborate and work together in while they’re developing and improving the Boundless software stack.


The color scheme is bright. I found some knock-out wallpaper from Flavor Paper. The floors were sealed with shiny varnish to increase the reflection of natural light. We used energy-efficient, daylight-spectrum lighting fixtures. I chose soft-colored woods and glossy laminates to enhance the feeling of openness and brightness. All the whiteboards (over 400 square feet of them) also keep the light bouncing and lively. Hint: here’s a great way to make large whiteboards on the cheap.


Exposure to natural light was the most requested amenity but the space has just about twenty linear feet of floor-to-ceiling glass along about a hundred feet of exterior wall. I maximized the amount of light entering the space by removing a private office that took up about ten of those linear feet and kept the majority of the space open. The other two offices with windows were opened up with large interior facing windows. A series of smaller offices along an interior wall are divided by glass partitions. All of the offices have a visual connection with each other, allowing not only natural light to flow through but also encouraging visual contact among staff.


The space also has a large event facility that will play host to meetups and events, like the recent Women in Geospatial event.