I recently spent a thought-provoking few days at DGI 2018 in London, learning about the latest advancements in geospatial intelligence (GEOINT) from our mission partners around the globe. For those who have never been, DGI is essentially the European version of the GEOINT Symposium hosted every year in the US. The event is chock-full of quality speakers and exhibitors, and offers a chance to network and get up to speed with the latest advancements in GEOINT.
There were many insightful presentations from defense and intelligence experts from around the globe. A talk that really stood out to me was titled “The Data Opportunity: How to Gain Even More Insights from Your Geospatial Data,” given by Terry Busch, Chief Data Officer at the Defense Intelligence Agency (DIA). His talk was all about descriptive, predictive, and prescriptive datasets, and understanding how to get the most out of the massive data collections organizations like DIA currently have:
- Descriptive datasets are the most commonly discussed, and make up most of the GIS data used by analysts.
- Predictive data are often the result of using descriptive data in a GIS, and reflect derivatives of analysis.
- Prescriptive datasets enable us to find the best course of action for a given situation.
Terry talked specifically about using artificial intelligence (AI) and machine learning (ML) to assist analysts in finding answers to tough questions. However, analysts still need a method to measure and assess the accuracy and precision of the data used inside AI and ML. DIA’s plan was to create a visual confidence assessment, in the form of a spider diagram, for every new record entered into the database.
Using location as a key factor, DIA assesses each dataset and assigns a confidence rating across several measures: whether the dataset was crowdsourced, whether it is used by others (e.g., does Apple use the dataset too?), whether it is a “legacy” dataset used previously, whether the data scientists and analysts had confidence in it, and whether the dataset was correlated with imagery. The resulting graph is a visual representation of the team’s confidence that a dataset is ready for automated analysis. Terry noted that this was particularly effective when assessing non-native geospatial data, the kind often derived from open sources. I found this to be an innovative method for understanding the usefulness of a dataset, and for quantifying just how ready it is for analysis.
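To make the idea concrete, here is a minimal sketch of how such a per-measure confidence profile might be computed before plotting it on a spider diagram. This is my own illustration, not DIA’s implementation: the measure names, the boolean scoring, and the unweighted average are all assumptions for the sake of the example.

```python
# Hypothetical confidence measures, loosely based on the ones Terry described.
# Names and scoring are illustrative assumptions, not DIA's actual scheme.
MEASURES = (
    "crowd_sourced",       # was the dataset crowdsourced?
    "used_by_others",      # do outside organizations use it too?
    "legacy",              # is it an established "legacy" dataset?
    "analyst_confidence",  # do analysts and data scientists trust it?
    "imagery_correlated",  # has it been correlated with imagery?
)

def confidence_profile(dataset: dict) -> dict:
    """Map each measure to a 0.0-1.0 score; missing measures score 0.0.

    The resulting profile is what you would plot as one axis per
    measure on a spider (radar) diagram.
    """
    return {m: float(bool(dataset.get(m, False))) for m in MEASURES}

def aggregate_confidence(profile: dict) -> float:
    """Collapse the profile to a single readiness score (unweighted mean)."""
    return sum(profile.values()) / len(profile)

# Example record: a hypothetical crowdsourced road network.
road_network = {
    "crowd_sourced": True,
    "used_by_others": True,
    "legacy": False,
    "analyst_confidence": True,
    "imagery_correlated": True,
}

profile = confidence_profile(road_network)
print(f"readiness: {aggregate_confidence(profile):.2f}")  # 4 of 5 -> 0.80
```

A real system would likely weight the measures (imagery correlation presumably counts for more than legacy status) and use graded rather than boolean scores, but the shape of the output is the same: one value per axis, ready to draw as a spider diagram.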
Geospatial data accumulates quickly and can be efficiently managed with modern technology, but it’s often possible to derive even more value from data by examining it from a different perspective. Learn more about the applications for GIS within the defense and intelligence community, or reach out to speak to an expert.