
    Call for Paper

    The Second Facial Micro-Expression Grand Challenge (MEGC): Spotting and Recognition

    Facial micro-expressions (MEs) are involuntary movements of the face that occur spontaneously when a person experiences an emotion but attempts to suppress or repress the facial expression, typically in high-stakes environments. As such, the duration of MEs is very short, generally not more than 500 milliseconds (ms), and this brevity is the telltale sign that distinguishes them from normal facial expressions. Computational analysis and automation of tasks on micro-expressions is an emerging area in face research, with strong interest appearing as recently as 2014. Only recently has the availability of a few spontaneously induced facial micro-expression datasets provided the impetus to advance further on the computational front. Particularly comprehensive are two state-of-the-art FACS-coded datasets: the Chinese Academy of Sciences Micro-Expression Database II (CASME II) with 247 MEs at 200 fps and the Spontaneous Facial Micro-Movement Dataset (SAMM) with 159 MEs at 200 fps. In addition, there is recent interest in acquiring "in-the-wild" datasets to further introduce real-world scenarios. While much research has been done on these datasets individually, there have been few attempts to introduce a more rigorous and realistic evaluation of work in this domain. This is the second edition of this workshop, which aims to promote interactions between researchers and scholars not only from within this niche area of facial micro-expression research, but also from the broader areas of expression and psychology research.

    This workshop has two main agenda items: to organize the Second Grand Challenge for facial micro-expression research, involving cross-database recognition and spotting of micro-expressions, and to solicit original works that address a variety of challenges in ME research, including but not limited to:

  • ME spotting/detection
  • ME recognition
  • ME feature representation and computational analysis
  • Unified ME spot-and-recognize schemes
  • Deep learning techniques for ME spotting and recognition
  • ME data analysis and synthesis
  • New datasets for MEs
  • Psychology of MEs

    Important Dates

    Submission deadline: 27 January 2019

    Notification: 12 February 2019

    Camera-ready: 15 February 2019



    Team member 1

    Moi Hoon Yap

    Manchester Metropolitan University, UK, m.yap@mmu.ac.uk

    Team member 2

    Sujing Wang

    Chinese Academy of Sciences, China, wangsujing@psych.ac.cn

    Team member 3

    John See

    Multimedia University, Malaysia, johnsee@mmu.edu.my

    Team member 4

    Xiaopeng Hong

    University of Oulu, Finland, hongxiaopeng.cn@gmail.com

    Advisory Panel:

    Xiaolan Fu, Chinese Academy of Sciences, China

    Guoying Zhao, University of Oulu, Finland

    Keynote Speakers



    Adrian Keith Davison, University of Manchester, UK

    Daniel Leightley, King’s College London, UK

    Anh Cat Le Ngo, University of Nottingham, UK

    Sze Teng Liong, Feng Chia University, Taiwan

    Walied Merghani, Sudan University of Science and Technology, Sudan

    Xiaoyi Feng, Northwestern Polytechnical University, Xi’an, China

    Ruiping Wang, Institute of Computing Technology, Chinese Academy of Sciences, China

    Xiaobai Li, University of Oulu, Finland

    Wenjing Yan, Wenzhou University, China

    Choon-Ching Ng, PRDCSG, Singapore

    Hongying Meng, Brunel University, UK

    Zhen Cui, Nanjing University of Science and Technology, China

    Zhaoqiang Xia, Northwestern Polytechnical University, China

    Yannick Benezeth, University of Bourgogne Franche-Comté, France