

Title:
Infer Thermal Information from Visual Information: A Cross Imaging Modality Edge Learning (CIMEL) Framework.
Authors:
Wang S; School of Aerospace, Transport and Manufacturing, Cranfield University, Bedford MK43 0AL, UK.
Mei J; School of Electronic Engineering, Tianjin University of Technology and Education, Tianjin 300222, China.
Yang L; School of Aerospace, Transport and Manufacturing, Cranfield University, Bedford MK43 0AL, UK.
Zhao Y; School of Aerospace, Transport and Manufacturing, Cranfield University, Bedford MK43 0AL, UK.
Source:
Sensors (Basel, Switzerland) [Sensors (Basel)] 2021 Nov 10; Vol. 21 (22). Date of Electronic Publication: 2021 Nov 10.
Publication Type:
Journal Article
Language:
English
Imprint Name(s):
Original Publication: Basel, Switzerland : MDPI, c2000-
MeSH Terms:
Image Processing, Computer-Assisted*
Thermography*
Reproducibility of Results
References:
Psychophysiology. 2014 Oct;51(10):951-63. (PMID: 24961292)
Infrared Phys Technol. 2012 Jul;55(4):221-235. (PMID: 32288544)
Entropy (Basel). 2019 Mar 04;21(3):. (PMID: 33266959)
Contributed Indexing:
Keywords: deep learning; edge detection; image enhancement; thermography
Entry Date(s):
Date Created: 20211127 Date Completed: 20211130 Latest Revision: 20211130
Update Code:
20240104
PubMed Central ID:
PMC8617953
DOI:
10.3390/s21227471
PMID:
34833548
Academic Journal
Abstract:
The measurement accuracy and reliability of thermography are largely limited by the relatively low spatial resolution of infrared (IR) cameras compared with digital cameras. Using a high-end IR camera to achieve high spatial resolution can be costly, or sometimes infeasible due to the high sample rate required. There is therefore a strong demand to improve the quality of IR images, particularly at edges, without upgrading the hardware in surveillance and industrial inspection systems. This paper proposes a novel Conditional Generative Adversarial Network (CGAN)-based framework that enhances IR edges by learning high-frequency features from corresponding visual images. A dual discriminator, focusing on edges and on content/background respectively, is introduced to guide the cross imaging modality learning procedure of the U-Net generator in the high- and low-frequency bands. Results demonstrate that the proposed framework effectively enhances barely visible edges in IR images without introducing artefacts, while content information is well preserved. Unlike most similar studies, this method requires only IR images at test time, which broadens its applicability to scenarios where only one imaging modality is available, such as active thermography.
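The dual-discriminator design rests on splitting an image into the high-frequency band (edges) and the low-frequency band (content/background), so that each discriminator supervises one band. As a minimal illustrative sketch of that frequency split (not the paper's implementation; the choice of a Sobel operator for the edge map and a mean filter for the content map is an assumption here), the two inputs the discriminators would see can be approximated as:

```python
import numpy as np

def sobel_edges(img):
    """High-frequency component: Sobel gradient magnitude of a 2-D array."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)  # horizontal gradient
            gy[i, j] = np.sum(patch * ky)  # vertical gradient
    return np.hypot(gx, gy)  # gradient magnitude = edge map

def box_blur(img, k=3):
    """Low-frequency component: k x k mean filter as a crude content map."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

# A synthetic step image: flat on the left, bright on the right.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edge_map = sobel_edges(img)   # strong response only at the step
content_map = box_blur(img)   # smoothed version, step softened
```

In the CIMEL setting, the edge discriminator would score edge maps like `edge_map` (learned from the visual modality) while the content discriminator scores the smoothed content, letting the generator be corrected in each band independently.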
