AI in Space

At SkyWatch, we believe in simplicity. We’ve taken a very challenging data engineering problem - obtaining, processing, and standardizing satellite data - and built a platform that abstracts away the complexity of delivering data from space.

Artificial intelligence is something we get asked about a lot. People ask us if we are building machine learning and computer vision algorithms to count cars, count buildings, or estimate how much oil is stored in floating-roof tanks. There are a huge number of use cases to explore, and there’s lots of innovation happening in this area that will significantly lower costs for many industries.

The answer we give them is that we have lots of customers who do these exciting, cutting-edge things. Because we want to stay laser-focused on the problem we are trying to solve - aggregating the world’s Earth observation satellite data - we tend to focus our AI and machine learning efforts on pragmatic AI within our own systems. These conceptually simple, value-driven functions are aimed at removing humans from decision-making and optimizing the delivery of data from space. We’ve built computer vision models to determine things like image quality and cloud classification that allow our customers to receive data without humans in the loop. Additionally, we believe there is a huge opportunity to optimize space assets by orchestrating demand to the most efficient collection vehicles.

On June 1, 2020, I was lucky enough to participate in the AIxSpace webinar, exploring how AI is being used in various space-related industries. Here are some of the questions that were asked and my thoughts on each topic:

What are real case examples (today) of AI applied in the space environment?

We believe that AI can be a function within many of our operations, as opposed to it being the center of our operation. While many companies focus on the “last mile”, we focus on removing humans from the demand and fulfillment pipelines. 

In terms of examples - we use a deep learning model based on a convolutional neural network (CNN) - an architecture that is very good at picking out edges and textures - to predict clouds and haze across multiple data sources. Some key issues we thought about when developing this model:

  • As roughly 70% of the Earth is covered by clouds on any given day, detecting clouds is a key part of any EO product
  • Satellite operators treat cloud masking as a cost of doing business, a non-value-added activity
  • The trend in the industry is to optimize cloud prediction models for a specific data source
  • Developing models that work on low, medium, and high resolution data with potentially different band combinations is very challenging
  • Developing a model that works with the massive size of high resolution images is very challenging; most published models use smaller, open-data images
  • We also needed the ability to predict clouds at specific locations within non-georeferenced preview images in archive catalogs

Our current model is showing 90%+ accuracy thus far, which has been great. There are a huge number of corner cases to optimize for and we’ll continue to evolve this model to accommodate many of them in the future. 

Side-by-side comparison images showing results of the SkyWatch EarthCache™ machine-learning cloud detection model. Earth observation images containing clouds are on the left; the cloud mask extracted for each image is on the right.
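
For readers curious what such a model looks like in code, here is a minimal sketch of a CNN patch classifier in PyTorch. It is illustrative only - not the EarthCache™ production model - and the band count, patch size, and class labels are assumptions:

    # Illustrative sketch only - not the SkyWatch production model.
    import torch
    import torch.nn as nn

    class CloudPatchClassifier(nn.Module):
        """Classify a fixed-size image patch as clear, haze, or cloud."""
        def __init__(self, in_bands=4, n_classes=3):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(in_bands, 32, kernel_size=3, padding=1),  # edge/texture filters
                nn.ReLU(),
                nn.MaxPool2d(2),                                    # 64x64 -> 32x32
                nn.Conv2d(32, 64, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),                                    # 32x32 -> 16x16
                nn.Conv2d(64, 128, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),                            # global average pool
            )
            self.head = nn.Linear(128, n_classes)

        def forward(self, x):
            return self.head(self.features(x).flatten(1))

    # Scoring a grid of patches yields a coarse cloud mask for a whole scene.
    model = CloudPatchClassifier()
    patch = torch.rand(1, 4, 64, 64)            # one 4-band 64x64 patch
    probs = torch.softmax(model(patch), dim=1)  # P(clear), P(haze), P(cloud)

Scoring a grid of patches, rather than segmenting pixel by pixel, is one way to keep inference tractable on the very large high resolution images mentioned above.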

We also built an image quality prediction model (again based on a CNN) to determine if an image contains problems that will prevent it from being useful to our customer. Historically, humans have provided Quality Assurance (QA) on EO data delivery, but in a machine-to-machine environment there cannot be any human QA in the loop. Some items we discussed during design of this model included:

  • If we are to live in an autonomous environment of demand-driven Earth observation collections and near-real-time delivery, we must ensure high-quality results.
  • Defining a “high quality” result in Earth observation is also very challenging, as the world is made up of many landscapes.
  • It is prudent to initially focus on sensor or processing anomalies.

Our current image quality prediction model has shown it can reduce the amount of clearly faulty data that reaches our customers. We’ll be adding more and more components to this model and iterating on its performance in the coming months. 
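
Mechanically, removing human QA from the loop means the delivery decision itself has to be automated. The sketch below shows one way such a gate could work: tile the scene, score each tile with a trained anomaly classifier, and deliver only if almost nothing looks faulty. The function names and thresholds here are hypothetical, not our production logic:

    # Hypothetical QA gate - illustrative only. `anomaly_prob` stands in for a
    # trained model that maps a tile to P(sensor/processing anomaly).
    import numpy as np

    def tiles(scene, size=256):
        """Yield non-overlapping size x size tiles from an (H, W, bands) array."""
        h, w = scene.shape[:2]
        for y in range(0, h - size + 1, size):
            for x in range(0, w - size + 1, size):
                yield scene[y:y + size, x:x + size]

    def passes_qa(scene, anomaly_prob, threshold=0.5, max_faulty_fraction=0.02):
        """Deliver automatically only if the fraction of suspect tiles is tiny."""
        flags = [anomaly_prob(t) > threshold for t in tiles(scene)]
        return float(np.mean(flags)) <= max_faulty_fraction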

Where do you see the greatest upside opportunity or largest impact area for AI in space?

We believe there is a huge opportunity in optimizing space assets, both within a single constellation and across multiple constellations. For space agencies such as the Canadian Space Agency, there is no need to collect data via the RADARSAT Constellation Mission (RCM) if an equivalent product is going to be made available by another SAR satellite like Sentinel-1 or by commercial SAR providers. Predicting these alternate collections will save millions of dollars. To be effective, the data needs to be aggregated in a way that makes it easily accessible to the participants - which is where SkyWatch sits in the value chain.

Multi-source coregistration is another massive opportunity that has been extremely hard to deliver, due to radiometric calibration deviations between satellites as well as varying collection angles. As computer vision evolves and more input variables become available, this type of processing will become more feasible.

On-board cloud detection, and on-board processing more generally, is another large opportunity: it will require highly efficient, low-power models, but it will save downlink costs and allow rapid rescheduling.

And of course, object and change detection is a very well-discussed use case.

© 2018 Planet.
Satellite images showing the changes to Panama City Beach in 2018 before and after Hurricane Michael. Several buildings are destroyed and roofs are visibly damaged. Images were taken by SkySat satellites from Planet (80 cm resolution).

AI and space are two very broad areas. Where do they converge and diverge? What can the space industry learn from early adopters of AI?

We believe AI is most successful when it’s pragmatic. When you break down current processes and workflows, you start to see all sorts of opportunities to apply prediction and classification to judgement calls, whether it’s predicting when compute resources need to scale up, predicting the best day and time to collect over an area, or predicting the likelihood that a customer will agree a result is of high quality. We truly believe that AI will take hold through this “ground up” approach, as opposed to “top down”. AI projects will fail to gain traction if they are a solution looking for a problem.

How will the Covid-19 pandemic impact the adoption of AI in space?

There has been an increase in interest in EO since the pandemic started, as you might expect when everyone is under lockdown. A great example is a customer who certifies organic farms - with no one able to go onsite to do inspections, they turned to remote sensing data to monitor the farms. There is a wide range of other use cases, including smart cities and the monitoring of oil and gas pipelines, that typically require humans to observe and collect data. Many of these areas are seeing this pandemic kickstart their adoption of EO data, and they are finding that it may also significantly lower their costs.

What are considered the major bottlenecks to transition from change detection to predictive analytics?

Change detection is just one of the building blocks as the industry moves further down the path of prediction. Once a change is detected, the next refinement is to predict what that change actually is. An example - one of our customers has built an application to monitor oil and gas pipeline corridors. They use open data (Sentinel-1 SAR) to detect activities in and near the corridor of a pipeline and subsequently request a high-resolution optical image only for those locations that are flagged. This allows them, in a cost-effective way, to enhance their detection classification and to serve their users with before-and-after images for relevant locations. Currently, human eyes are more efficient than models, but that’s rapidly changing. Soon, more false positives will be excluded automatically, and operators will only have a handful of high-quality changes that actually need boots on the ground to inspect.

Contains Copernicus data 2019. © 2019 Planet.
A side-by-side comparison of the same area imaged by Sentinel-1 (left), a Synthetic Aperture Radar satellite, and Planet’s SkySat (right), a high-resolution satellite constellation (80 cm resolution at the time of capture, now 50 cm).
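
The workflow described above is a classic “tip and cue” pattern: screen cheap, frequent open data, and spend money on high-resolution tasking only where a detector is confident. Here is a minimal sketch, with hypothetical helper names standing in for the customer’s actual system:

    # Minimal "tip and cue" sketch - helper names are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Detection:
        lat: float
        lon: float
        score: float  # detector confidence that real activity occurred

    def monitor_corridor(sar_scenes, detect_activity, task_optical, min_score=0.8):
        """Screen open SAR scenes; task high-res optical only on strong detections."""
        for scene in sar_scenes:                 # e.g. Sentinel-1 acquisitions
            for d in detect_activity(scene):     # change detection on the SAR scene
                if d.score >= min_score:         # drop likely false positives
                    task_optical(d.lat, d.lon)   # order a high-res optical image

Raising min_score trades missed events for fewer wasted optical taskings, which is exactly the false-positive economics at play here.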

There has to be a clear ROI on any AI project. Generally, value is driven by removing a large number of human judgement points within a system or process. Humans used to manually switch telephone lines to connect calls - it’s hard to even imagine in today’s age. However, to achieve that value, the accuracy of these automated judgements is paramount. With computer vision, incorrectly predicting the species of a tree may seem like a low-cost failure, but if that prediction is incorrect more than 20% of the time, it would have a major impact on the downstream conversion costs of that lumber. The threshold for tolerance of these incorrect predictions varies widely by use case, but the relative cost of an incorrect prediction can be a pretty significant barrier to adoption.
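
To make that threshold argument concrete, here is a toy calculation - the quantities are invented for illustration, not taken from a real deployment:

    # Toy numbers, invented for illustration.
    n_items = 10_000
    error_rate = 0.20          # the 20% misclassification rate cited above
    cost_per_error = 15.0      # hypothetical downstream cost per bad call ($)
    cost_human_check = 2.0     # hypothetical cost of a human review per item ($)

    model_cost = n_items * error_rate * cost_per_error  # $30,000 in rework
    human_cost = n_items * cost_human_check             # $20,000 in labour
    # At this error rate the model is the more expensive option; automation only
    # pays off once error_rate * cost_per_error < cost_human_check.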

There is a tremendous opportunity to leverage artificial intelligence and machine learning in the space industry. We’ve been fortunate to partner with the Canadian Space Agency on prototypes in this area, and will continue to invest in pragmatic AI. We look forward to working with other companies in this area who can take advantage of the EarthCache™ API to skip the massive amount of data preparation required and simply focus on their models. Our goal is to make our customers look like heroes.
