A digital-human facial micro-expression generation suite constructs the basic CG model for film and can be used to upgrade a basic CG digital human into an intelligent digital human. Digital humans often lack realism in their micro-expressions, so the intended effect is lost when they are applied in films or interactive games. This paper introduces a facial plug-in based on a soft-mod manipulator, accurately calculates expression feature points using blend shape technology, and proposes a method for producing CG digital-human micro-expressions through markerless feature-point capture. The study shows that this micro-expression technology can be applied to realistic digital humans. Finally, this method of micro-expression production can deliver a satisfying, immersive experience to audiences on virtual studio platforms and in CG films and interactive games. The study demonstrates both the feasibility and the importance of this technology.
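As a minimal illustration of the blend shape idea referred to above, the sketch below (Python/NumPy; all names, shapes, and weight values are assumptions for illustration, not the paper's actual pipeline) combines a neutral face mesh with weighted expression targets. In practice, weights of this kind would be driven by the tracked expression feature points.

```python
# Minimal sketch of blend-shape-based expression synthesis (illustrative only).
# Array names and shapes are assumptions, not the paper's implementation.
import numpy as np

def blend_expression(neutral, blendshapes, weights):
    """Combine a neutral face mesh with weighted blend shape targets.

    neutral:     (V, 3) array of neutral-pose vertex positions
    blendshapes: (K, V, 3) array of K expression targets with the same topology
    weights:     (K,) array of per-target weights, typically in [0, 1]
    """
    deltas = blendshapes - neutral             # per-target vertex offsets from neutral
    return neutral + np.tensordot(weights, deltas, axes=1)

# Toy example: two expression targets on a 4-vertex mesh.
neutral = np.zeros((4, 3))
targets = np.random.rand(2, 4, 3) * 0.01       # small expression offsets
weights = np.array([0.6, 0.2])                 # e.g. estimated from captured feature points
expressive = blend_expression(neutral, targets, weights)
print(expressive.shape)                        # (4, 3)
```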