FD-1: Mathematical Morphology in Processing and Analysis of the Digital Elevation Models (DEMs)
Sun, 17 Jul, 09:30 - 17:30 Malaysia Time (UTC +8)
Sun, 17 Jul, 03:30 - 11:30 Central European Summer Time (UTC +2)
Sun, 17 Jul, 01:30 - 09:30 Coordinated Universal Time
Sat, 16 Jul, 21:30 - 05:30 Eastern Daylight Time (UTC -4)
FD-2: Hands-on Copernicus Sentinel-1 Persistent Scattering Interferometry for Ground motion
FD-3: Machine Learning in Remote Sensing - Theory and Applications for Earth Observation
FD-4: GRSS ESI HDCRS End-to-End Machine Learning with High Performance and Cloud Computing
FD-5: Using open data, platform, and API from NASA-ESA-JAXA Earth Observation Dashboard
FD-6: Physics Guided and Quantum Artificial Intelligence for Earth Observation: Towards the Digital Twin Earth
HD-1: Earth Datacubes: From Simplified Access to ML Analytics (IEEE GRSS ESI Tutorial)
Sun, 17 Jul, 09:30 - 12:45 Malaysia Time (UTC +8)
Sun, 17 Jul, 03:30 - 06:45 Central European Summer Time (UTC +2)
Sun, 17 Jul, 01:30 - 04:45 Coordinated Universal Time
Sat, 16 Jul, 21:30 - 00:45 Eastern Daylight Time (UTC -4)
HD-2: SAR Polarimetry: A tour from Physics to Applications
HD-3: The ARTMO toolbox for analyzing and processing of remote sensing data into biophysical variables
HD-4: Natural disasters and hazards monitoring using Earth Observation data
HD-5: Sparse Sampling and Reconstruction in SAR
Sun, 17 Jul, 14:15 - 17:30 Malaysia Time (UTC +8)
Sun, 17 Jul, 08:15 - 11:30 Central European Summer Time (UTC +2)
Sun, 17 Jul, 06:15 - 09:30 Coordinated Universal Time
Sun, 17 Jul, 02:15 - 05:30 Eastern Daylight Time (UTC -4)
HD-6: Remote sensing of cloud microphysical properties and surface radiation parameters
HD-7: Aiding active landscape fire detection from space with ASI PRISMA: unlocking the complementary value of hyperspectral PRISMA data
HD-8: Remote Sensing with Reflected Global Navigation Satellite System (GNSS-R) and other Signals of Opportunity (SoOp)
Presented by B. S. Daya Sagar
Understanding the organizational complexity of terrestrial surfaces and associated features across spatial and temporal scales is of interest to the geoscience and remote sensing communities. Data, in particular Digital Elevation Models (DEMs) derived from remotely sensed satellite data at multiple spatial/temporal scales, pose numerous challenges to spatial data scientists. DEMs are second-level data derived from primary satellite- and/or aircraft-based remotely sensed surficial data. Generation and acquisition of these DEMs cost the engineering community billions of dollars, yet they remain underutilized in many scientific disciplines. DEMs are rich in geometric, morphologic, and topologic information, but such information is hidden from visual inspection, so it is appropriate for researchers to employ relevant mathematical ideas to retrieve it. To understand the dynamical behavior of terrestrial surfaces or processes, a good spatiotemporal model is essential, and well-analyzed, well-reasoned information retrieved from spatial and/or temporal data is an important ingredient of such a model. Mathematical Morphology, co-founded by Georges Matheron and Jean Serra, is one of the better choices for dealing with these intertwined topics. Mathematical Morphology offers numerous operators and transformations for retrieving information from DEMs, quantitatively characterizing DEMs, quantitatively reasoning about the information retrieved from DEMs, modeling and simulating various surficial processes that involve DEMs, and spatiotemporally visualizing various surficial phenomena and processes.
Three ways of understanding the complexity involved in the spatiotemporal behavior of terrestrial surfaces and associated phenomena are to consider (i) topographic depressions and their relationships with the rest of the surface, (ii) unique topologic networks, and (iii) terrestrial surfaces themselves. These three significant features (surfaces, planes, and networks) are represented in mathematical terms as functions, sets, and skeletons, respectively. In the context of terrestrial science, examples of such functions, sets, and networks include DEMs, water bodies, and river networks. Various original morphology-based algorithms and techniques have been developed and demonstrated. This tutorial would be useful to those with research interests in image processing and analysis, remote sensing and geosciences, geographical information sciences, spatial data sciences, mathematical morphology, mapping of Earth-like planetary surfaces, etc. The content of this tutorial will be offered in two parts. The first part covers basic morphological transformations; the second part gives an overview of the applications of those transformations to the processing and analysis of DEMs and the associated features mapped from them, with several case studies.
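As a minimal sketch of the basic transformations covered in the first part, the code below implements grayscale erosion and dilation with a flat 3x3 structuring element in pure NumPy and composes them into a closing, the operation classically used to fill topographic depressions in a DEM. The toy DEM and window size are illustrative.

```python
import numpy as np

def grey_erode(dem, size=3):
    """Grayscale erosion with a flat square structuring element:
    each cell becomes the minimum elevation in its neighborhood."""
    pad = size // 2
    padded = np.pad(dem, pad, mode="edge")
    out = np.empty_like(dem)
    for i in range(dem.shape[0]):
        for j in range(dem.shape[1]):
            out[i, j] = padded[i:i + size, j:j + size].min()
    return out

def grey_dilate(dem, size=3):
    """Grayscale dilation: each cell becomes the neighborhood maximum."""
    pad = size // 2
    padded = np.pad(dem, pad, mode="edge")
    out = np.empty_like(dem)
    for i in range(dem.shape[0]):
        for j in range(dem.shape[1]):
            out[i, j] = padded[i:i + size, j:j + size].max()
    return out

# Closing (dilation followed by erosion) fills pits; the difference between
# the closed surface and the DEM isolates topographic depressions.
dem = np.array([[5., 5., 5.],
                [5., 1., 5.],
                [5., 5., 5.]])
closed = grey_erode(grey_dilate(dem))
depression_depth = closed - dem
```

Here the single-cell pit is filled by the closing, and `depression_depth` is nonzero exactly at the depression, which is the principle behind morphology-based extraction of water bodies and sinks from DEMs.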
At the end of the tutorial, participants will have learned not only the fundamentals of binary and grayscale mathematical morphology but also the robustness of morphology-based algorithms for the processing and analysis of Digital Elevation Models.
Any undergraduate degree either in engineering or in sciences would suffice.
Presented by Dinh Ho Tong Minh
Persistent Scattering Interferometry (PSI) has proved its ability to track millimeter-scale deformation of Earth's surface over long periods. PSI is well known because it uses point-like targets, often associated with stable human structures such as buildings, poles, and gratings, which yield very good signals. Even though the topic can be challenging, this tutorial makes it much easier to understand. In detail, this tutorial will explain how to use PSI techniques on real-world Copernicus Sentinel-1 images, with user-oriented (no coding skills required!) open-source software. After a quick summary of the theory, the tutorial presents how to apply TOPS Sentinel-1 SAR data and processing technology to identify and monitor ground deformation. After one full day of training, participants will gain an intuitive understanding of the background of radar interferometry and be able to produce time series of ground motion from a stack of SAR images.
After the full-day training, participants will be able to: access SAR data and preprocess it for use in the practical exercises; understand the theory of InSAR processing; form interferograms and interpret the ground motions they reveal; and understand the theory of extracting time series of ground motion from a stack of SAR images.
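As a small worked example of the interferometric principle behind PSI: an unwrapped interferometric phase difference maps linearly to line-of-sight motion, with one full 2π fringe corresponding to half the radar wavelength. The sketch below applies this to the Sentinel-1 C-band wavelength (about 5.55 cm).

```python
import numpy as np

# Sentinel-1 operates at C band; its wavelength is about 5.55 cm.
WAVELENGTH = 0.0555  # meters

def phase_to_los_displacement(unwrapped_phase_rad):
    """Convert unwrapped interferometric phase (radians) to line-of-sight
    displacement in meters: one 2*pi fringe equals half a wavelength."""
    return unwrapped_phase_rad * WAVELENGTH / (4 * np.pi)

# One full fringe of phase corresponds to lambda/2 of line-of-sight motion
one_fringe = phase_to_los_displacement(2 * np.pi)
```

For C band this is roughly 2.8 cm per fringe, which is why stacks of interferograms over persistent scatterers can resolve millimeter-scale motion once atmospheric and orbital contributions are separated out.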
Presented by Ronny Hänsch, Devis Tuia, Andrea Marinoni, Ribana Roscher
Despite the wide and often successful application of machine learning techniques to analyse and interpret remotely sensed data, the complexity, special requirements, and selective applicability of these methods often hinder their use to full potential. The gap between sensor- and application-specific expertise on the one hand, and deep insight into and understanding of existing machine learning methods on the other, often leads to suboptimal results, unnecessary or even harmful optimizations, and biased evaluations. The aim of this tutorial is threefold: first, to provide insights into and a deep understanding of the algorithmic principles behind state-of-the-art machine learning approaches, including Random Forests and convolutional networks, feature learning, regularization priors, explainable AI, and multimodal data fusion; second, to illustrate the benefits and limitations of machine learning with practical examples, including recommendations about proper preprocessing, initialization, and sampling, available sources of data and benchmarks, and human-machine interaction for generating training data; third, to inspire new ideas by discussing unusual applications from remote sensing and other domains.
Suitable for PhD students, research engineers, and scientists. Basic knowledge of statistics is required.
Presented by Gabriele Cavallaro, Rocco Sedona, Manil Maskey, Iksha Gurung
Recent advances in remote sensors with higher spectral, spatial, and temporal resolutions have significantly increased data volumes, which pose a challenge to processing and analyzing the resulting massive data in a timely fashion to support practical applications. Meanwhile, the development of computationally demanding Machine Learning (ML) and Deep Learning (DL) techniques (e.g., deep neural networks with massive amounts of tunable parameters) demands parallel algorithms with high scalability. Therefore, data-intensive computing approaches have become indispensable tools to deal with the challenges posed by applications from geoscience and Remote Sensing (RS). In recent years, high-performance and distributed computing have advanced rapidly in terms of both hardware architectures and software. For instance, the popular graphics processing unit (GPU) has evolved into a highly parallel many-core processor with tremendous computing power and high memory bandwidth. Moreover, recent High Performance Computing (HPC) architectures and parallel programming models have been influenced by the rapid advancement of DL and of hardware accelerators such as modern GPUs.
ML and DL have already brought crucial achievements in solving RS data classification problems. State-of-the-art results have been achieved by deep networks such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Generative Adversarial Networks (GANs). Their hierarchical architecture, composed of stacked repetitive operations, enables the extraction of useful informative features from raw data and the modelling of high-level semantic content of RS data. On the one hand, DL can lead to more accurate results when networks are trained over large annotated datasets. On the other hand, deep networks pose challenges in terms of training time: training a DL model on a large dataset requires non-negligible time and resources.
In the case of supervised ML/DL problems, a hybrid approach is required in which training is performed on HPC and inference on new data is performed in a cloud computing environment. This hybrid approach minimizes the cost of training and optimizes real-time inference. It also allows for transitioning a research ML model from HPC to a production ML model in a cloud computing environment, a pipeline that simplifies the complexity of putting ML models into practice.
The theoretical parts of the tutorial provide a complete overview of the latest developments in HPC systems and cloud computing services. The participants will understand how the parallelization and scalability potential of HPC systems is fertile ground for the development and enhancement of ML and DL methods. The audience will also learn how high-throughput computing (HTC) systems make computing resources accessible and affordable via the Internet (cloud computing) and how they represent a scalable and efficient alternative to HPC systems for particular ML tasks.
For the practical parts of the tutorial, the attendees will receive access credentials to work with the HPC systems of the Jülich Supercomputing Centre and AWS Cloud Computing resources. To avoid waste of time during the tutorial (e.g., setting of environments from scratch, installation of packages) the selected resources and tools will be set up in advance by the course organizers. The participants will be able to start working on the exercises directly with our implemented algorithms and data.
The participants will work through an end-to-end ML project where they will train a model and optimize it for a data science use case. They will first understand how to speed-up the training phase through state-of-the-art HPC distributed DL frameworks. Finally, they will use cloud computing resources to create a pipeline to push the model into the production environment and evaluate the model against new and real-time data.
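As a pure-NumPy illustration of the synchronous data-parallel scheme that distributed DL frameworks (e.g., Horovod or PyTorch DDP) implement at scale, the sketch below trains a toy linear model: each simulated worker computes a gradient on its own data shard, and the gradients are averaged (the "all-reduce" step) before every update of the shared weights. The model, data, and worker count are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_workers, lr = 4, 0.1
w = np.zeros(3)                          # shared model weights
X = rng.normal(size=(32, 3))
y = X @ np.array([1.0, -2.0, 0.5])       # synthetic regression targets

# Each worker holds its own shard of the training data
shards_X = np.array_split(X, n_workers)
shards_y = np.array_split(y, n_workers)

for step in range(200):
    # local step: every worker computes the MSE gradient on its shard
    grads = [2.0 * xs.T @ (xs @ w - ys) / len(ys)
             for xs, ys in zip(shards_X, shards_y)]
    # "all-reduce": average the workers' gradients, then update shared weights
    w -= lr * np.mean(grads, axis=0)
```

With equal shard sizes the averaged gradient equals the full-batch gradient, so the parallel run follows the same trajectory as single-node training while the per-worker compute cost drops, which is exactly the property that makes synchronous data parallelism the default distribution strategy.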
A machine learning and deep learning background is expected: advanced knowledge of classical machine learning algorithms, Convolutional Neural Networks (CNNs), Python programming with basic packages (Numpy, Scikit-learn, Matplotlib), and DL packages (PyTorch and/or TensorFlow).
Each participant has to bring a laptop (with Windows, Mac, or Linux).
Presented by Anca Anghelea, Manil Maskey, Shinichi Sobue, Stephan Meissl
To understand the impacts of the COVID-19 pandemic, NASA, ESA, and JAXA joined forces to develop a trilateral dashboard: a situational awareness tool backed by Earth-observation-derived indicators. The collaboration between the three agencies included data, science, and technology experts working together to develop indicators, set up infrastructure, provide data management, develop content, and communicate the results.
In June 2020, the “Trilateral COVID-19 Earth Observation Dashboard” and “NASA COVID-19 Dashboard” were successfully released, offering user-friendly tracking of changes in indicators that include air and water quality, climate, economic activity, and agriculture. Since then, the dashboard team has added new indicators and new areas of interest, and has updated the data.
This tutorial aims to provide an overview of the Earth observation dashboard, the catalog of datasets from all three agencies, application programming interfaces, stories, and visualization widgets. The hands-on part of the tutorial will include utilizing the data and platform to tell novel stories about the changing Earth and present them in a dashboard environment. Given the growing interest within the scientific community in dashboard tools, this tutorial will help participants get started with discovering data and bringing new information into the platform, as well as performing analytics and extracting insights from the data, to finally make science data interactive and dashboard-ready. Furthermore, the tutorial will demonstrate how open data, open code, an open platform, and an open API can advance scientific research and applications.
The theoretical parts of the tutorial will provide an overview of the Earth observation dashboard architecture and the cloud computing services.
Participants will also be guided through the various Earth Observation scenarios included in the dashboard, the Earth observations employed to capture and characterise the various dynamics of the pandemic and the EO data science approaches to derive the indicators from the EO and geospatial open data.
The practical part of the tutorial will demonstrate end-to-end story development using the platform and data. The participants will receive access credentials upon registration, to interact with the cloud computing platform and available data and APIs.
It is expected that participants have basic knowledge in at least some of the following domains:
Pre-registration to the platform (Euro Data Cube) is also required prior to the tutorial, to ensure the participants workspaces are properly configured.
Presented by Mihai Datcu
Digital and sensing technologies, i.e. Big Data, are revolutionary developments massively impacting the Earth Observation (EO) domains, while Artificial Intelligence (AI) now provides the methods to valorize this Big Data. The accepted trend assumes that the more data we analyze, the smarter the analysis paradigms will become. However, the data deluge, data diversity, and the broad range of specialized applications pose major new challenges. From the perspective of data valorization and applications, the use of multi-mission and related data for global applications still needs more effort. On the methodological side, the challenges relate to reproducibility, trustworthiness, physics awareness, and, above all, the explainability of the methods and results.
The tutorial introduces and explains a solution based on the concept of Digital Twins. A Digital Twin is the convergence of remote sensing physical mechanisms, tightly connected to, communicating with, and continuously learning from and with mathematical models, data analytics, simulations, and user interaction.
The presentation covers the major developments in hybrid, physics-aware AI paradigms at the convergence of forward modelling, inverse problems, and machine learning, used to discover causalities and make predictions that maximize the information extracted from EO and related non-EO data. The majority of EO applications and services require complementary EO multi-sensor and non-EO data, i.e., sensor fusion and multitemporal observations. The tutorial explains how to automatize the entire chain from multi-sensor EO and non-EO data to the physical parameters required in applications, by filling the gaps and generating relevant, understandable layers of information.
Digital Twins are technologies addressing the evolution of EO over at least the next two decades. The present explosive advance of AI methods was obtained thanks mainly to two factors: the advancement of theoretical bases and the performance evolution of IT, i.e. computation, storage, and communication. Today we are at the edge of a Quantum revolution impacting technologies in communication, computing, sensing, and metrology. Quantum computers and simulators are becoming ever more widely accessible, and will thus definitely impact the EO domains. In this context, the tutorial lays the bases of information processing from the perspective of quantum computing, algorithms, and sensing. The presentation will cover an introduction to quantum information theory, quantum algorithms, and quantum computers, with first results analysing the main perspectives for EO applications.
The tutorial addresses MS and doctoral students, researchers, and practitioners in all AI for EO data domains. The first part of the tutorial is expected to bring a joint understanding of EO multispectral and SAR imaging principles, the main sensor physical signature models, and their embedding in data analytics, inference methods, active learning, and machine and deep learning as integrated optimal solutions in complex EO applications. This also includes the choice of benchmark data sets and the influence of biases in validation, and covers relevant skills including process-flow design/tuning, training algorithms, dataset preparation, toolbox usage, and result analysis. The focus will be on methods for explaining physical phenomena from multi-modal observations, and on uncertainty estimation.
The second part of the tutorial will introduce the notions of quantum information processing, the main types of quantum computers, and basic applications for EO. Relevant elements of quantum sensing and imaging will be also presented.
Basic elements of physics of multispectral and SAR EO, machine / deep learning, calculus, linear algebra.
Presented by Peter Baumann, Otoniel Jose Campos Escobar
Datacubes are recognized as an enabling paradigm for serving massive spatio-temporal Earth data in an analysis- (and visualization-) ready way. In practice, 1-D sensor data, 2-D imagery, 3-D x/y/t image timeseries and x/y/z geophysics voxel data, and 4-D x/y/z/t climate and weather data can be analysed and combined. In standardization, "coverages" provide the unifying concept for spatio-temporal datacubes, ranging from the (tentatively) simple Web Coverage Service (WCS) to the high-end Web Coverage Processing Service (WCPS) datacube analytics language. A large, continuously growing number of open-source and proprietary tools supports the coverage standards.
In this tutorial, which is suitable for newcomers and experts alike, we present the concept of datacubes modeled as OGC/ISO coverages and give an overview of the current standards landscape. Based on WCPS, we show simple and advanced space/time analytics capabilities. We extend this with recent research results on service capabilities for Machine Learning (ML) on datacubes.
Based on the OGC reference implementation, rasdaman, live demos accessing existing services and real-life examples are presented which participants can recap and modify on their Internet-connected laptop.
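For a flavor of what such datacube analytics look like in practice, the sketch below embeds a minimal WCPS query (the average of a temperature datacube at one point over a time interval) in a ProcessCoverages request URL. The coverage name `AvgLandTemp`, the axis labels, and the endpoint are illustrative, not tied to any particular deployment.

```python
from urllib.parse import urlencode

# A minimal WCPS query: yearly average of a temperature datacube at one
# geographic point. Coverage name, axis labels, and endpoint are illustrative.
query = """
for $c in (AvgLandTemp)
return avg($c[Lat(53.08), Long(8.80), ansi("2014-01":"2014-12")])
""".strip()

# WCPS queries are typically sent to a WCS endpoint as a ProcessCoverages request
url = "https://example.org/rasdaman/ows?" + urlencode({
    "service": "WCS",
    "version": "2.0.1",
    "request": "ProcessCoverages",
    "query": query,
})
```

The query language handles the subsetting, aggregation, and encoding server-side, so the client only ever transfers the (here scalar) result rather than the underlying datacube.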
After this workshop, participants will be able to
Basic knowledge of geo raster data, an Internet-connected laptop.
Presented by Carlos López-Martínez, Avik Bhattacharya, Alejandro C. Frery
This tutorial aims to present a comprehensive overview of the basics and advances in Synthetic Aperture Radar (SAR) Polarimetry and its diverse applications for Earth and planetary observations. Starting with an introduction to polarimetric SAR systems, it covers three key aspects: physical, mathematical, and statistical properties of these data. We will conclude the tutorial by focusing on applications with state-of-the-art machine learning techniques. Finally, each block will devote 15 minutes to groundbreaking applications and promising research topics ("Highlights and Future Directions").
This tutorial is timely and imperative, as there were no tutorials on basic and advanced PolSAR techniques and their applications at IGARSS in 2020 and 2021. This proposal is innovative because it focuses not only on the fundamental aspects of polarimetric SAR but also on innovative applications, and addresses future research opportunities.
This tutorial aims at providing an overview of the potential of polarimetry and polarimetric SAR data in the different ways they are available to final users. It spans three main aspects of this kind of data: physical, mathematical, and statistical properties. The description starts with fully polarimetric SAR data and also devotes special attention to dual and compact formats. The tutorial discusses some of the most relevant applications of this data, enhancing the uniqueness of the information they provide. We include freely available data sources and discuss the primary abilities and limitations of free and open-source software. The tutorial concludes with a discussion of the future of SAR Polarimetry for remote sensing and Earth observation and the role of these data in artificial intelligence and machine learning.
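As a small concrete example of the physical and mathematical description of fully polarimetric data, the sketch below computes the Pauli decomposition of a 2x2 scattering matrix; its three components are commonly associated with odd-bounce, even-bounce, and cross-polarized (volume) scattering. The trihedral example matrix is illustrative.

```python
import numpy as np

def pauli(S):
    """Pauli decomposition of a 2x2 scattering matrix [[S_hh, S_hv],
    [S_vh, S_vv]], assuming reciprocity (S_hv == S_vh)."""
    k1 = (S[0, 0] + S[1, 1]) / np.sqrt(2)   # odd-bounce (surface) scattering
    k2 = (S[0, 0] - S[1, 1]) / np.sqrt(2)   # even-bounce (double-bounce)
    k3 = np.sqrt(2) * S[0, 1]               # cross-polarized (volume) term
    return np.array([k1, k2, k3])

# An ideal trihedral corner reflector: S_hh == S_vv and no cross-polarization,
# so all power lands in the first (odd-bounce) Pauli channel.
S_trihedral = np.array([[1.0, 0.0],
                        [0.0, 1.0]], dtype=complex)
k = pauli(S_trihedral)
```

Applied pixel-wise, the squared magnitudes of these three components give the familiar Pauli RGB composite used to visualize PolSAR scenes.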
This lecture is intended for scientists, engineers, and students engaged in the fields of radar remote sensing and interested in polarimetric SAR image analysis and applications. Some background in SAR processing techniques and microwave scattering would be an advantage, and familiarity with matrix algebra is required.
Presented by Jochem Verrelst, Jorge Vicent
This tutorial will focus on the use of the ARTMO (Automated Radiative Transfer Models Operator) and ALG (Atmospheric Look-up table Generator) radiative transfer models (RTMs), retrieval toolboxes, and post-processing tools (https://artmotoolbox.com/) for the generation and interpretation of hyperspectral data. ARTMO and ALG bring together a diverse collection of leaf, canopy, and atmosphere RTMs in a synchronized, user-friendly GUI environment. Essential tools are provided to create all kinds of look-up tables (LUTs), which can subsequently be used for mapping applications from optical images. A LUT, or user-collected field data, can then be fed into three types of mapping toolboxes: (1) parametric regression (e.g. vegetation indices), (2) nonparametric methods (e.g. machine learning methods), or (3) LUT-based inversion strategies. Each of these toolboxes provides various optimization algorithms so that the best-performing strategy can be applied for mapping applications. When coupled with an atmosphere RTM, retrieval can take place directly from top-of-atmosphere radiance data.
Further, ARTMO’s RTM post-processing tools include: (1) global sensitivity analysis, (2) emulation, i.e. approximating RTMs through machine learning, and (3) synthetic scene generation. Here we plan to present ARTMO’s mapping capabilities at bottom and top of atmosphere level using coupled leaf-canopy-atmosphere RTMs.
The proposed tutorial will consist of a brief theoretical session and a practical session, where the following topics will be addressed:
Basics of leaf, canopy and atmosphere RTMs: generation of RTM simulations
Overview of retrieval methods: parametric, nonparametric, inversion, and hybrid methods. Coupling of top-of-canopy simulations with simulations from atmospheric RTMs for the generation of top-of-atmosphere radiance data.
Practical exercise for the retrieval of vegetation biophysical properties from bottom-of-atmosphere and top-of-atmosphere Sentinel-2 data.
In the practical session we will learn to work with the ARTMO toolboxes. They provide practical solutions dealing with the abovementioned topics. Step-by-step tutorials, demonstration cases and demo data will be provided.
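To give a flavor of strategy (3), LUT-based inversion, the sketch below uses a toy forward model standing in for a real RTM such as PROSAIL: spectra are simulated over a grid of LAI values, and the retrieved LAI is the grid value whose spectrum best matches the (here synthetic) measurement in the least-RMSE sense. The forward model and all numbers are purely illustrative.

```python
import numpy as np

def toy_rtm(lai, wavelengths):
    """Hypothetical forward model (stand-in for a real RTM): canopy
    reflectance saturating with leaf area index (LAI)."""
    return 0.5 * (1.0 - np.exp(-0.4 * lai)) * np.ones_like(wavelengths)

wavelengths = np.linspace(400, 2400, 50)       # nm, illustrative sampling
lai_grid = np.linspace(0.0, 8.0, 200)          # parameter grid for the LUT
lut = np.array([toy_rtm(lai, wavelengths) for lai in lai_grid])

measured = toy_rtm(3.0, wavelengths)           # synthetic "observation"
rmse = np.sqrt(((lut - measured) ** 2).mean(axis=1))
lai_retrieved = lai_grid[np.argmin(rmse)]      # best-matching LUT entry
```

Real inversion schemes refine this idea with noise models, multiple-solution averaging, and alternative cost functions, which is what the optimization options in the mapping toolboxes expose.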
Tutorial Learning Objectives:
Students will be able to use ARTMO toolboxes for RTM running and mapping applications. Eventually, students will be acquainted with how to develop hybrid retrieval models for vegetation properties mapping from bottom-of-atmosphere and top-of-atmosphere data.
No prior knowledge is needed; however, ARTMO requires Matlab on Windows and MySQL. If Matlab is not available, students will be asked to team up in small groups. More information about ARTMO and ALG can be found at https://artmotoolbox.com/. Students are also recommended to install and compile the 6SV (http://6s.ltdri.org/) or libRadtran (http://www.libradtran.org/doku.php) atmospheric models.
Presented by Ramona Pelich, Marco Chini, Wataru Takeuchi, Young-Joo Kwak
In recent years, natural disasters, i.e., hydro-geo-meteorological hazards and risks, have been frequently experienced by both developing and developed countries. 2021 was another year in which numerous devastating water-related disasters hit many regions across the globe. For example, in July 2021, deadly floods swept through western Germany and parts of Belgium, while flooding caused by the remnants of hurricane Ida, which struck New Orleans first, killed dozens of people in New York, New Jersey, Pennsylvania, and Connecticut. This tutorial comprises basic theoretical and experimental information essential for an emergency hazard and risk mapping process focused on advanced satellite Earth Observation (EO) data, including both SAR and optical data. Firstly, this tutorial gives a better understanding of disaster risk in the early stage by means of EO data available immediately after a disaster occurs. Then, after several comprehensive lectures focused on floods and landslides, a hands-on session will give all participants the opportunity to learn more about the practical EO tools available for rapid-response information.
This half-day tutorial will demonstrate the implementation of disaster risk reduction and sustainable monitoring for effective emergency response and management between decision and action activities.
The aim of this tutorial is to provide a series of substantial and balanced presentations on the use of Earth Observation (EO) data in disaster monitoring. Firstly, a comprehensive introduction along with several illustrative examples will demonstrate the use of both space-borne radar and optical sensors for mapping different types of disasters. We will then focus our attention on floods and landslides, as particular types of disasters with important consequences at a global scale. For flood monitoring, a detailed presentation on the use of space-borne Synthetic Aperture Radar (SAR) data will be given, covering both theoretical aspects and experimental results, with several illustrations from Sentinel-1A&B and ALOS/ALOS-2 SAR images. The next lecture will present several methodologies employed for optical flood monitoring, along with illustrative results. The landslide lecture, after first presenting the landslide types, will give details about EO-based landslide monitoring methodologies along with experimental results containing on-site data. In addition to the detailed lectures on floods and landslides, several EO-based platforms that allow rapid disaster mapping will be presented.
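As a minimal illustration of why SAR is well suited to flood mapping: calm open water acts as a specular reflector and appears dark in backscatter images, so even a simple threshold separates flooded from non-flooded pixels. The tiny backscatter tile and the fixed -15 dB threshold below are illustrative; operational methods estimate the threshold from the image itself, e.g. with hierarchical split-based approaches.

```python
import numpy as np

# Calibrated SAR backscatter (sigma0) in dB for a small toy tile; smooth
# open water reflects the radar beam away, giving very low backscatter.
sigma0_db = np.array([[-6.0, -18.0, -20.0],
                      [-7.5, -19.5, -8.0],
                      [-5.0, -16.5, -7.0]])

# Illustrative fixed threshold; operational algorithms derive it from the
# image histogram rather than hard-coding a value.
flood_mask = sigma0_db < -15.0
flooded_pixels = int(flood_mask.sum())
```

In practice this core idea is combined with region growing, change detection against a pre-event image, and exclusion masks for permanent water and radar shadow.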
This lecture is intended for scientists, engineers, and students with basic knowledge in the fields of radar and optical remote sensing imagery who are interested in image analysis and applications focused on disaster monitoring. Some background in SAR processing techniques and multi-spectral optical data handling would be an advantage.
Presented by Kumar Vijay Mishra, Raghu G. Raj
Recently, several novel approaches to radar signal processing have been introduced which allow the radar to perform signal detection and parameter estimation from much fewer measurements than that required by Nyquist sampling. These reduced-rate radars exploit the fact that the target scene is sparse in time, frequency or other domains, facilitating the use of compressed sensing methods in signal recovery. These techniques may also be applied on full-rate samples to facilitate reduced-rate processing on the sampled signal.
Recent developments in reduced-rate sampling break the link between common radar design trade-offs such as range resolution and transmit bandwidth; dwell time and Doppler resolution; spatial resolution and number of antenna elements; continuous-wave radar sweep time and range resolution. Several pulse-Doppler radar systems are based on these principles. The temporal sub-Nyquist processing estimates the target locations using less bandwidth than conventional systems. This also paves the way to cognitive radars that share their transmit spectrum with other communication services, thereby providing a robust solution for coexistence in spectrally crowded environments. Without impairing Doppler resolution, these systems also reduce the dwell time by transmitting interleaved radar pulses sparsely within a coherent processing interval or "slow time". Extensions to the spatial domain have been proposed in the context of multiple-input-multiple-output array radars, where few antenna elements are used without degradation in angular resolution. For each setting, state-of-the-art hardware prototypes have also been designed to demonstrate the real-time feasibility of sub-Nyquist radars.
Recently, these concepts have also been applied to imaging systems such as synthetic aperture radar (SAR), inverse SAR (ISAR), and interferometric SAR (InSAR). In fact, SAR was one of the first applications of compressed sensing (CS) methods. SAR imaging data are not naturally sparse in the range-time domain; however, they are often sparse in other domains, such as wavelets. The motivation for applying sub-Nyquist methods is to address the challenge of oversampled data in SAR processing.
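A minimal sketch of the kind of sparse-recovery solver underlying compressed-sensing radar is Orthogonal Matching Pursuit (OMP): given measurements y = Ax of a scene x that is sparse in some domain, OMP greedily selects the dictionary atoms that best explain the current residual. The scene, measurement matrix, and dimensions below are illustrative.

```python
import numpy as np

# Orthogonal Matching Pursuit: greedily pick the atom of A most correlated
# with the current residual, then re-fit the selected atoms by least squares.
rng = np.random.default_rng(1)
n, m, k = 64, 40, 3                       # scene size, measurements, sparsity
A = rng.normal(size=(m, n)) / np.sqrt(m)  # random measurement matrix
x_true = np.zeros(n)
x_true[[5, 20, 41]] = [3.0, -3.0, 3.0]    # sparse "target scene"
y = A @ x_true                            # m < n compressed measurements

support, residual = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coeffs

x_hat = np.zeros(n)
x_hat[support] = coeffs  # with enough measurements, typically recovers x_true
```

The same greedy-plus-least-squares structure carries over to radar, where the dictionary columns correspond to candidate delay-Doppler or pixel hypotheses rather than random vectors.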
This tutorial introduces the audience to reduced-rate sampling methods with a focus on SAR, ISAR, and InSAR systems. It will provide an overview of the signal processing theory needed to apply reduced-rate sampling to conventional radars, followed by its recent applications to imaging radars.
We will present this three-hour tutorial in four parts. The first part will focus on introducing the audience to the fundamentals of finite-rate-of-innovation (FRI) theory, compressed sensing, and sub-Nyquist processing. The second part will take the audience deep into reduced-rate sampling for pulse-Doppler radar systems in temporal, Doppler, and spatial domains. The third part will focus on the basics of sparsity in radar imaging. The fourth part covers the applications of similar concepts to SAR and other allied systems.
The target audience includes radar practitioners at various levels: graduate students, researchers, and engineers with general interests in reduced-rate sampling in radars, SAR, spectrum sharing, and signal processing. The tutorial assumes no specific technical expertise on the part of the audience except basic knowledge of radar concepts and statistical signal processing. The topic is relevant for scientists at government laboratories – defense, space, and civilian – wishing to integrate more functionalities in their current state-of-the-art. The tutorial is useful to industry participants from remote sensing, computational imaging, and satellite systems who are looking to incorporate the latest mathematical tools into their products. The members of academia looking for the latest applications in the area of radar signal processing will find the tutorial highly valuable.
Presented by Takashi Nakajima, Yu Xie, Takashi M. Nagao, Anthony J. Baran, Tao He
Clouds typically cover about 50% - 70% of Earth's surface and strongly affect the Earth's energy budget and climate system, mainly through their optical and microphysical properties. Cloud properties retrieved from satellite measurements are widely applied to calculations of the Earth's energy budget, numerical weather prediction, and the forecasting of meteorological disasters. Optical and microphysical properties of clouds retrieved from satellite measurements are the most important inputs for calculating surface downward radiation (SDR), including its shortwave and longwave components, which are of great significance for studies of the energy cycle and climate change. However, existing satellite-based products are greatly limited by their relatively coarse spatiotemporal resolutions in resolving the diurnal cycle of these key variables. Because of unrealistic assumptions in radiative transfer simulations, large uncertainties remain, especially in the retrieved properties of clouds of different phases. Recently, the successful launch of new-generation geostationary satellites such as GOES-R, FY-4, and Himawari-8, together with the continuous development of machine learning techniques and three-dimensional radiative transfer models, provides unprecedented opportunities for high-precision monitoring of cloud properties and surface radiation budget parameters. This session aims to promote international collaborations as well as in-depth exchanges of knowledge and expertise on various aspects of cloud remote sensing, particularly advances in radiative transfer research, solar energy applications, and global monitoring.
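As a toy illustration (ours, not the session's methodology) of how cloud optical properties couple to surface radiation, the sketch below applies the Beer-Lambert law to the direct solar beam; a real SDR calculation must also model the diffuse component scattered by the cloud, which dominates under thick clouds:

```python
import numpy as np

def direct_surface_irradiance(toa_irradiance, optical_thickness, sza_deg):
    """Beer-Lambert direct-beam transmission through a cloud layer.
    Only the direct component is modeled; the diffuse (scattered)
    component, which dominates under thick clouds, is ignored here."""
    mu0 = np.cos(np.radians(sza_deg))  # cosine of the solar zenith angle
    return toa_irradiance * mu0 * np.exp(-optical_thickness / mu0)

# Thin cirrus (tau = 0.5) vs. thick stratus (tau = 10) at 30 deg solar zenith,
# with a nominal solar constant of 1361 W/m^2
for tau in (0.5, 10.0):
    print(tau, direct_surface_irradiance(1361.0, tau, 30.0))
```

Even this crude model shows why retrieved cloud optical thickness is the first-order control on the shortwave part of SDR.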
Presented by Stefania Amici, Dario Spiller
Landscape fires are a phenomenon with effects at both local and global scales. Despite their positive role in maintaining terrestrial ecosystems, climate change and rising global temperatures are likely to increase their frequency. Wildfires are expected to become more intense and more frequent, and to ignite in areas not previously affected (e.g., Greenland). Gas emissions from combustion, soil erosion, social impacts (e.g., infrastructure damage), and health issues related to exposure to the emitted gases are of great concern. In this framework, fire localization from space plays a relevant role in providing actionable information.
This tutorial will focus on the localization of active wildfires using hyperspectral data delivered by the Italian PRISMA sensor, flown on the PRISMA mission of the Italian Space Agency. A demo on how to access the PRISMA catalogue will be provided. A hands-on session will address how to process and display the data and derive detection maps using distinctive PRISMA spectral features and/or an open-source AI-based software package.
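As a rough illustration of spectral-feature-based detection on a hyperspectral cube (our own sketch, not the tutorial's actual PRISMA workflow), the code below computes a normalized-difference index between two SWIR bands, exploiting the fact that flaming pixels radiate strongly in the long-SWIR region around 2430 nm; the band pair and threshold are illustrative assumptions:

```python
import numpy as np

def nearest_band(wavelengths, target_nm):
    """Index of the band whose center wavelength is closest to target_nm."""
    return int(np.argmin(np.abs(np.asarray(wavelengths) - target_nm)))

def fire_index_map(cube, wavelengths, lo_nm=2060.0, hi_nm=2430.0):
    """Normalized-difference SWIR index: flaming pixels emit strongly
    near 2430 nm relative to 2060 nm, pushing the index positive.
    Band choices are illustrative, not an official PRISMA product."""
    lo = cube[..., nearest_band(wavelengths, lo_nm)]
    hi = cube[..., nearest_band(wavelengths, hi_nm)]
    return (hi - lo) / (hi + lo + 1e-9)

# Synthetic 10x10 scene with ~230 bands spanning 400-2500 nm (PRISMA-like range)
wl = np.linspace(400.0, 2500.0, 230)
cube = np.full((10, 10, wl.size), 0.1)
cube[4, 5, nearest_band(wl, 2430.0)] = 0.8  # simulated hot pixel

detections = fire_index_map(cube, wl) > 0.5
print(np.argwhere(detections))  # one detection, at row 4, column 5
```

Real detection chains add atmospheric correction, false-alarm screening (e.g., sun glint, bright soils), and, as in this tutorial, learned classifiers on the full spectrum.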
After attending the tutorial, participants will be able to:
Basic knowledge of remote sensing from space and fundamentals of Python are an advantage; however, the necessary background information will be provided so that attendees feel confident.
Presented by James L. Garrison, Adriano Camps, Estel Cardellach
Although originally designed for navigation, signals from Global Navigation Satellite Systems (GNSS), i.e., GPS, GLONASS, Galileo, and COMPASS, exhibit strong reflections from the Earth's land and ocean surfaces. Rough-surface scattering modifies the properties of the reflected signals. Several methods have been developed to invert these effects and retrieve geophysical data such as ocean surface roughness (winds), soil moisture, above-ground vegetation, sea ice extent and type, and permafrost active-layer status, among others.
GNSS reflectometry (GNSS-R) methods enable the use of small, low-power, passive instruments. The power and mass of GNSS-R instruments can be made low enough to enable deployment on small satellites, balloons, and UAVs. The first satellite-based GNSS-R research datasets were collected by the UK-DMC satellite (2003), TechDemoSat-1 (2014), and the 8-satellite CYGNSS constellation (2016). Currently, in addition to the NASA CYGNSS constellation, the ESA FSSCat 2x6U CubeSat mission, the two Chinese BuFeng-1 satellites (A/B), and several commercial CubeSats from Spire Global, Inc. carry GNSS-R payloads. Future approved missions include ESA's HydroGNSS, focused on hydrology monitoring, China's FY-3E, and Taiwan's FS-7R.
Recently, methods of GNSS-R have been applied to satellite transmissions in other frequencies, ranging from P-band (230 MHz) to K-band (18.5 GHz). So-called “Signals of Opportunity” (SoOp) methods enable microwave remote sensing outside of protected bands, using frequencies allocated to satellite communications. Measurements of sea surface height, wind speed, snow water equivalent, and soil moisture have been demonstrated with SoOp.
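The core measurement in both GNSS-R and SoOp receivers is the correlation of the received reflected signal with a local replica of the transmitted code, producing a delay waveform whose peak locates the specular reflection. A minimal NumPy sketch of this computation (our illustration, with a synthetic pseudo-random code standing in for a real GNSS PRN sequence):

```python
import numpy as np

rng = np.random.default_rng(1)

# Pseudo-random ranging code (toy stand-in for a 1023-chip GNSS PRN sequence)
code = rng.choice([-1.0, 1.0], size=1023)

# Reflected signal: the code delayed by `true_delay` samples, attenuated, noisy
true_delay = 137
received = 0.3 * np.roll(code, true_delay) + 0.05 * rng.normal(size=code.size)

# Delay waveform: circular cross-correlation of the received signal with a
# clean local replica, evaluated at every candidate lag via the FFT
spectrum = np.fft.fft(received) * np.conj(np.fft.fft(code))
delay_waveform = np.real(np.fft.ifft(spectrum))

print(int(np.argmax(delay_waveform)))  # -> 137, the reflection delay
```

An actual receiver repeats this correlation across a grid of Doppler offsets as well, yielding the delay-Doppler map from which geophysical parameters are retrieved.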
This course will summarize the fundamental principles of physical modeling, signal processing, and applications of GNSS and SoOp reflectometry measurements, with a focus on satellite-based applications and methodologies. Students who, after the course, seek a more detailed picture of these techniques and a review of the state of the art may attend the full-day IGARSS tutorials on GNSS+R, should those tutorials remain in the program.
After attending this tutorial, participants should have an understanding of:
Basic concepts of linear systems and electrical signals. Some understanding of random variables would be useful.