Zoom Link


Introduction


The workshop will focus on various areas of the future impact of AI. Its aim is to bring researchers and scientists from academia and the medical field together with engineers from industry to discuss the impact of cutting-edge AI technologies on future society. The workshop will take a deep dive into the capabilities of Edge Insights for academia and industry via a tutorial built on real-world AI applications. For example, breast cancer can be screened at home with a smartphone-level infrared camera that detects lesions and target masses. Despite the existence of commercial AI systems such as autonomous vehicles, we are at the beginning of a long research pathway towards a future generation of deep AI. The workshop focuses on the numerical and computational aspects of the future impact of AI and on their relation to various AI techniques.


Call for papers

We welcome submissions on the following topics, including but not limited to:

  • Image generation and translation
  • Semantic segmentation / instance segmentation
  • Recognition: detection, tracking, anomaly detection, localization
  • Image processing: denoising, enhancement, super-resolution
  • 3D computer vision, stereo matching
  • Natural language processing (NLP)
  • Speech recognition: STT (speech-to-text)
  • Reinforcement learning
  • Sensor fusion with AI
  • Games with AI

  Important Dates


    Paper Submission Deadline: October 10, 2021 (23:59 Pacific Time)
    Notification to Authors: October 17, 2021
    Camera-Ready Deadline: October 24, 2021
    Workshop Date: November 12, 2021 (afternoon)

    Submission


    Extended Abstracts: Participants are encouraged to submit preliminary ideas that have not been previously published in conferences or journals. We also invite papers published in other conferences and journals (2021 only) to facilitate new collaborations. Submissions may consist of a one-page abstract plus one additional page for references (using the template described above). Extended abstracts will be posted on the website only during the workshop period.

    All papers should be submitted to the workshop chairs, Prof. Lee (segeberg@kmu.ac.kr) and Prof. Ko (niceko@kmu.ac.kr).


    Workshop Schedule


    # Time Item
    1 13:00 - 13:05 Opening Remarks (Prof. Jong-Ha Lee)
    2 13:05 - 13:40 Keynote Talk: Prof. Soo Hyung Kim (Chonnam National University)
    3 13:40 - 14:10 Keynote Talk: Prof. Chang-Hee Won (Temple University)
      14:10 - 14:30 Coffee Break
    4 14:30 - 15:05 Keynote Talk: Prof. Suha Kwak (POSTECH)
    5 15:05 - 15:35 Keynote Talk: Prof. Chih-Chung Hsu (National Cheng Kung University)
      15:35 - 15:45 Coffee Break
    6 15:45 - 16:55 Oral Presentations
      15:45 - 15:55 G2CN: Geometric Graph Convolutional Network for Facial Expression Recognition
      15:55 - 16:05 Histological Image Segmentation and Classification Using Entropy-Based Convolutional Module
      16:05 - 16:15 Vision Transformer Based Dynamic Facial Emotion Recognition
      16:15 - 16:25 Development of a system capable of diagnosing and treating Alzheimer's disease: a technique experiment using cadaver
      16:25 - 16:35 Remote Bio Vision: Perfusion Imaging Based Non-Contact Biosignal Measurement Method
      16:35 - 16:45 Calibrating a Multiple-View Thermal Camera
      16:45 - 16:55 Research on EfficientNet architecture-based systems and algorithms that can predict complex emotions in humans
    7 16:55 - 17:00 Closing Remarks (Prof. Byoung Chul Ko)

    Invited Keynote Speakers

    Prof. SooHyung Kim
    Chonnam National University

    Survival Time Prediction for Cancer Patients based on Multi-Modal Medical Data

    A deep learning approach to survival time prediction is introduced, which utilizes clinomics, radiomics, and pathomics data of a cancer patient. Clinical examples from lung cancer cases show that the multi-modal approach is promising for precision medicine.


    Prof. Chang-Hee Won
    Temple University

    Bimodal Imaging of Breast Cancer Using Profile Diagrams and Convolution Neural Network

    PDF

    In this talk, we will discuss a bimodal imaging system, which consists of tactile and spectral sensors. A Tactile Profile Diagram is a pictorial representation of the mechanical properties of the imaged tumor. A Multispectral Profile Diagram is a representative pattern image of the breast tissue’s spectral properties. To classify the profile diagrams, we employ the Convolutional Neural Network method. The human experimental results demonstrate the ability of the developed method to classify and quantify breast cancer. Finally, we describe a method to calculate Multimodal Index for the malignancy risk assessment using profile diagrams and health records.


    Prof. Suha Kwak
    POSTECH

    Loss Functions for Deep Metric Learning

    PDF

    Understanding semantic similarity between images plays an essential role in many areas of computer vision. Deep metric learning aims to achieve this by training a deep neural network that embeds images onto a manifold on which semantically similar images are grouped closely together; the semantic similarity between two examples is then estimated directly on the manifold using known distance metrics such as the Euclidean and cosine distances. This quality of the network is determined mainly by the loss function used for training. Most losses for deep metric learning are based on binary supervision indicating whether a pair of images belong to the same class or not, which is readily available from existing large-scale datasets for image classification. The first part of this talk will introduce a new loss function that allows embedding networks to achieve state-of-the-art performance and to converge quickly in the binary supervision setting. Although binary supervision is readily available from many existing datasets, it covers only a limited subset of image relations and is not sufficient to represent semantic similarity between images described by continuous and structured labels such as object poses, image captions, and scene graphs. Motivated by this, the second part of this talk will present a novel loss for deep metric learning using continuous labels.
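    The binary-supervision setting described in this abstract can be illustrated with the classic pairwise contrastive loss. The sketch below is a minimal NumPy illustration of that standard formulation, not the new loss proposed in the talk; the function name, margin, and sample values are all illustrative.

    ```python
    import numpy as np

    def contrastive_loss(emb_a, emb_b, same_class, margin=1.0):
        """Classic pairwise contrastive loss on embedding pairs.

        Same-class pairs (same_class == 1) are pulled together; different-class
        pairs (same_class == 0) are pushed apart until they are at least
        `margin` away in Euclidean distance.
        """
        d = np.linalg.norm(emb_a - emb_b, axis=1)              # distance per pair
        pos = same_class * d ** 2                              # attract positives
        neg = (1 - same_class) * np.maximum(margin - d, 0) ** 2  # repel close negatives
        return 0.5 * np.mean(pos + neg)

    # Two pairs of 2-D embeddings: the first labeled same-class, the second not.
    a = np.array([[1.0, 0.0], [0.0, 1.0]])
    b = np.array([[1.0, 0.0], [0.0, 1.0]])
    y = np.array([1, 0])  # 1 = same class, 0 = different class
    loss = contrastive_loss(a, b, y)  # the identical negative pair is penalized
    ```

    The identical positive pair contributes zero, while the identical negative pair sits at distance 0, well inside the margin, and is penalized; this is the binary supervision signal the talk's first part builds on.
    
    
    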


    Prof. Chih-Chung Hsu
    National Cheng Kung University

    Multilinear Data Super-Resolution: 2D to ND

    PDF

    With the rapid growth of deep learning applications, conventional image restoration tasks such as super-resolution have made significant progress in recent years. In particular, convolutional neural network (CNN)-based super-resolution has achieved excellent performance, and many super-resolution networks have been proposed to improve fidelity and visual quality. In this talk, I will introduce multi-perspective image super-resolution, from structured facial (2-D) to hyperspectral (172-D) image super-resolution, by exploring image priors, the correlation between spectra, and special tricks in CNNs for super-resolution tasks. I will also raise some open issues for further research on super-resolution.
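    Fidelity in super-resolution is conventionally reported as PSNR between the reconstructed and ground-truth images. The sketch below shows that metric alongside a naive nearest-neighbor upscaler, the kind of trivial baseline CNN-based methods are compared against; it is a minimal illustration only, not the methods discussed in the talk.

    ```python
    import numpy as np

    def upscale_nearest(img, scale):
        """Naive nearest-neighbor upsampling: a trivial super-resolution baseline."""
        return np.repeat(np.repeat(img, scale, axis=0), scale, axis=1)

    def psnr(ref, est, peak=255.0):
        """Peak signal-to-noise ratio in dB, the standard fidelity metric for SR."""
        mse = np.mean((ref.astype(np.float64) - est.astype(np.float64)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

    # A 2x2 low-resolution "image" upscaled by a factor of 2 to 4x4.
    lr = np.array([[0, 255], [255, 0]], dtype=np.uint8)
    sr = upscale_nearest(lr, 2)
    ```

    A learned SR network replaces `upscale_nearest` with a trained mapping; its output is then scored with the same PSNR (and perceptual metrics for visual quality) against the ground-truth high-resolution image.
    
    
    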



    Accepted Papers

    G2CN: Geometric Graph Convolutional Network for Facial Expression Recognition (Hyung Jin Kim, Byoung Chul Ko)
    Vision Transformer Based Dynamic Facial Emotion Recognition (Dasom Ahn, Sangwon Kim, Byoung Chul Ko)
    Histological Image Segmentation and Classification Using Entropy-Based Convolutional Module (Hwa-Rang Kim, Kwang-Ju Kim, Kil-Taek Lim, Doo-Hyun Choi)
    Development of a system capable of diagnosing and treating Alzheimer's disease: a technique experiment using cadaver (Eun Bin Park, Jong-Ha Lee)
    Remote Bio Vision: Perfusion Imaging Based Non-Contact Biosignal Measurement Method (Chan Il Kim, Jong-Ha Lee)
    Calibrating a Multiple-View Thermal Camera (Ju O Kim, Ji Eun Kim, Deokwoo Lee)
    Research on EfficientNet architecture-based systems and algorithms that can predict complex emotions in humans (Minyoung Kim, HyunChung Cho, Jong-Ha Lee)

    Organizers


    Jong-Ha Lee
    Keimyung University
    Byoung Chul Ko
    Keimyung University

    Program Committee


    Yo Han Park
    Keimyung University
    Deokwoo Lee
    Keimyung University
    Djamila Aouada
    Université du Luxembourg
    Hyung Jin Chang
    University of Birmingham
    Chang-Hee Won
    Temple University
    Shivendra Panwar
    New York University
    Youngjung Uh
    Yonsei University
    Changsu Lee
    Youngnam University
    SooYoung Kwak
    Hanbat University
    Inkyu Park
    Inha University

    Acknowledgments


    This workshop is proudly sponsored by the KMU (Keimyung University) Research Institute of AI Fusion.

    Contact


    For any related questions, please contact Prof. Jong-Ha Lee (+82-10-8968-8769, segeberg@kmu.ac.kr).

