Published August 31, 2020 | Version v1
Conference paper

Generative adversarial networks as a novel approach for tectonic fault and fracture extraction in high resolution satellite and airborne optical images

Description

We develop a novel method based on deep convolutional networks (DCNs) to automate the identification and mapping of fracture and fault traces in optical images. The method pits two DCNs against each other in a two-player game: the first network, called the Generator, learns to segment images so that its outputs resemble the ground truth; the second network, called the Discriminator, measures the differences between the ground-truth image and each segmented image and feeds its score back to the Generator, which uses these scores to progressively improve its segmentations. Because both networks are conditioned on the ground-truth images, the method is called a Conditional Generative Adversarial Network (CGAN). We propose a new loss function for both the Generator and the Discriminator to improve their accuracy. Using two criteria and a manually annotated optical image, we compare the generalization performance of the proposed method to that of a classical DCN architecture, U-Net. The comparison demonstrates the suitability of the proposed CGAN architecture, although further work is needed to improve its efficiency.
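The adversarial game described above can be made concrete with the loss terms of a standard conditional GAN for segmentation. The sketch below is a minimal NumPy illustration of the *generic* CGAN objective (pix2pix-style: adversarial binary cross-entropy plus an L1 term tying the segmentation to the ground truth); the paper proposes its own modified loss functions, which are not specified in this abstract, so the functions, names, and the `lam` weight here are illustrative assumptions, not the authors' method.

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    # Binary cross-entropy: the standard adversarial loss on
    # Discriminator scores in (0, 1).
    pred = np.clip(pred, eps, 1.0 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def discriminator_loss(d_real, d_fake):
    # The Discriminator learns to score (image, ground-truth mask)
    # pairs as 1 and (image, generated mask) pairs as 0.
    return bce(d_real, np.ones_like(d_real)) + bce(d_fake, np.zeros_like(d_fake))

def generator_loss(d_fake, gen_mask, true_mask, lam=100.0):
    # The Generator tries to fool the Discriminator (score -> 1);
    # an L1 term keeps its segmentation close to the ground truth.
    # lam is an assumed weighting, as in pix2pix-style training.
    adv = bce(d_fake, np.ones_like(d_fake))
    l1 = np.mean(np.abs(gen_mask - true_mask))
    return adv + lam * l1
```

In training, the two losses are minimized alternately: one gradient step on the Discriminator, then one on the Generator, so each network improves against the current state of the other.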


Additional details

Identifiers

URL
https://hal.inria.fr/hal-02982510
URN
urn:oai:HAL:hal-02982510v1

Origin repository

UNICA