
Item title:

PEg TRAnsfer Workflow recognition challenge report: Do multimodal data improve recognition?

Authors:
Huaulmé A; Univ Rennes, INSERM, LTSI - UMR 1099, Rennes, F35000, France.
Harada K; Department of Mechanical Engineering, the University of Tokyo, Tokyo 113-8656, Japan.
Nguyen QM; Univ Rennes, INSERM, LTSI - UMR 1099, Rennes, F35000, France.
Park B; VisionAI hutom, Seoul, Republic of Korea.
Hong S; VisionAI hutom, Seoul, Republic of Korea.
Choi MK; VisionAI hutom, Seoul, Republic of Korea.
Peven M; Johns Hopkins University, Baltimore, USA.
Li Y; Zhejiang University, Hangzhou, China.
Long Y; Department of Computer Science & Engineering, The Chinese University of Hong Kong, Hong Kong.
Dou Q; Department of Computer Science & Engineering, The Chinese University of Hong Kong, Hong Kong.
Kumar S; Netaji Subhas University of Technology, Delhi, India.
Lalithkumar S; National University of Singapore, Singapore, Singapore.
Hongliang R; National University of Singapore, Singapore, Singapore; The Chinese University of Hong Kong, Hong Kong, Hong Kong.
Matsuzaki H; National Cancer Center Japan East Hospital, Tokyo 104-0045, Japan.
Ishikawa Y; National Cancer Center Japan East Hospital, Tokyo 104-0045, Japan.
Harai Y; National Cancer Center Japan East Hospital, Tokyo 104-0045, Japan.
Kondo S; Muroran Institute of Technology, Hokkaido, Japan.
Mitsuishi M; Department of Mechanical Engineering, the University of Tokyo, Tokyo 113-8656, Japan.
Jannin P; Univ Rennes, INSERM, LTSI - UMR 1099, Rennes, F35000, France.
Source:
Computer methods and programs in biomedicine [Comput Methods Programs Biomed] 2023 Jun; Vol. 236, pp. 107561. Date of Electronic Publication: 2023 Apr 20.
Publication type:
Journal Article
Language:
English
Imprint Name(s):
Publication: Limerick : Elsevier Scientific Publishers
Original Publication: Amsterdam : Elsevier Science Publishers, c1984-
MeSH Terms:
Algorithms*
Robotic Surgical Procedures*/methods
Humans ; Workflow
Contributed Indexing:
Keywords: Multimodal; OR of the future; Surgical process model; Workflow recognition
Entry Date(s):
Date Created: 20230429 Date Completed: 20230522 Latest Revision: 20230522
Update Code:
20240105
DOI:
10.1016/j.cmpb.2023.107561
PMID:
37119774
Scientific journal
Background and Objective: To be context-aware, computer-assisted surgical systems require accurate, real-time automatic surgical workflow recognition. In recent years, surgical video has been the most commonly used modality for surgical workflow recognition. However, with the democratization of robot-assisted surgery, new modalities, such as kinematics, are now accessible. Some previous methods use these new modalities as input for their models, but their added value has rarely been studied. This paper presents the design and results of the "PEg TRAnsfer Workflow recognition" (PETRAW) challenge, whose objective was to develop surgical workflow recognition methods based on one or more modalities and to study their added value.
Methods: The PETRAW challenge included a data set of 150 peg transfer sequences performed on a virtual simulator. This data set included videos, kinematic data, semantic segmentation data, and annotations describing the workflow at three levels of granularity: phase, step, and activity. Five tasks were proposed to the participants: three addressed recognition at all granularities simultaneously using a single modality, and two addressed recognition using multiple modalities. The mean application-dependent balanced accuracy (AD-Accuracy) was used as the evaluation metric because it accounts for class imbalance and is more clinically relevant than a frame-by-frame score.
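The challenge's application-dependent variant of the metric is defined in the paper itself; as a reference point, the generic balanced-accuracy core it builds on is simply the mean of per-class recalls, so rare phases weigh as much as frequent ones. A minimal sketch, using hypothetical frame-level phase labels:

```python
def balanced_accuracy(y_true, y_pred):
    """Mean per-class recall: each class contributes equally,
    regardless of how many frames it covers."""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        total = sum(1 for t in y_true if t == c)
        hits = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        recalls.append(hits / total)
    return sum(recalls) / len(recalls)

# Hypothetical labels: 8 "transfer" frames recognized perfectly,
# 2 "idle" frames all missed.
y_true = ["transfer"] * 8 + ["idle"] * 2
y_pred = ["transfer"] * 10
print(balanced_accuracy(y_true, y_pred))  # 0.5, vs. 0.8 frame-by-frame accuracy
```

A plain frame-by-frame score would report 80% here despite the minority class being entirely missed, which illustrates why a class-balanced metric was preferred.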
Results: Seven teams participated in at least one task, with four participating in every task. The best results were obtained by combining video and kinematic data (AD-Accuracy between 90% and 93% for the four teams that participated in all tasks).
Conclusion: For all teams, surgical workflow recognition improved significantly when multiple modalities were used rather than one. However, the longer execution time of video/kinematic-based methods (compared with kinematic-only methods) must be considered: one must ask whether it is wise to increase computing time by 2,000 to 20,000% for an accuracy gain of only 3%. The PETRAW data set is publicly available at www.synapse.org/PETRAW to encourage further research in surgical workflow recognition.
Competing Interests: Declaration of Competing Interest The authors declare that they have no conflict of interest.
(Copyright © 2023. Published by Elsevier B.V.)
