Are you seeking hyperspectral data for tasks like classification, super-resolution, image fusion, anomaly detection, or unmixing? Interested in Earth and ocean insights from our hyperspectral technology in space? Need ground-truth labels for hyperspectral classification? If any of these align with your needs, you're in the right place! Scroll down to learn about our dataset from the HYPSO-1 mission.
Hyperspectral imaging, employed on remote-sensing satellites such as HYPSO-1, is constrained by the scarcity of labeled datasets, which limits the training of AI models that require ground-truth annotations. In this work, we introduce The HYPSO-1 Sea-Land-Cloud-Labeled Dataset, an open dataset of 200 diverse hyperspectral images from the HYPSO-1 mission, available in both raw and calibrated forms for scientific research in Earth observation. Moreover, 38 of these images, captured over different countries, include pixel-level ground-truth labels, totaling about 25 million spectral signatures labeled across the sea, land, and cloud categories.
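For readers who want to work with the labeled subset programmatically, below is a minimal sketch of loading one scene and reshaping it into per-pixel spectral signatures. The file names, the NumPy file format, and the array layout are assumptions for illustration; consult the dataset documentation for the actual download formats.

```python
# A minimal sketch for loading one labeled scene. The file names, the .npy
# format, and the (lines, samples, bands) layout are assumptions for
# illustration, not the documented dataset format.
import numpy as np

def load_labeled_scene(image_path: str, labels_path: str):
    """Load a hyperspectral cube and its pixel-level label mask."""
    cube = np.load(image_path)     # assumed shape: (lines, samples, bands)
    labels = np.load(labels_path)  # assumed shape: (lines, samples),
                                   # with e.g. 0=sea, 1=land, 2=cloud
    # Flatten to one spectral signature per row for pixel-wise models.
    signatures = cube.reshape(-1, cube.shape[-1])
    targets = labels.reshape(-1)
    return signatures, targets

# Hypothetical usage:
# X, y = load_labeled_scene("capture_001-radiance.npy", "capture_001-labels.npy")
```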
To demonstrate the potential of the dataset and its labeled subset, we have additionally optimized a deep learning model (a 1D fully convolutional network) that achieves performance superior to the current state of the art. The complete dataset, ground-truth labels, deep learning model, and software code are openly accessible for download at this website, which we provide as supplementary material for the paper "An Open Hyperspectral Dataset with Sea-Land-Cloud Ground-Truth from the HYPSO-1 Satellite", submitted to arXiv on 25 August 2023. The manuscript is available here (also as a PDF). We hope our hyperspectral dataset benefits the HSI community and the broader field of Earth and ocean observation.
If you encounter any issues accessing the dataset or any other supplementary resource, or if you have questions, inquiries, or feedback, we welcome you to contact us. Click here to learn more about us.
Read our additional article on Sea-Land-Cloud Segmentation in Satellite Hyperspectral Imagery by Deep Learning: click here for more details (also available as a PDF).