He failed the test, by the way; she was going back to her fiancé, not her husband.
This is wild, well done!
Just curious why after 6:49 you decided not to show a continuous sentiment or emotion label, or overall analytics at the end showing how many times each label appeared? It would have been nice to see what the overall clip was estimating: was this a fight or argument, was there deception going on, did this person seem to be in danger, or something else ❤. Ya know. Anyway, absolutely amazing work, congratulations 🎉🎉 I love all the work you and your team do, I follow and read everything that comes out of your faculty, wish I was there 😮😢, 😂🎉😂
Super question. We were more focused on using emotion to get accurate 3D than on the emotion recognition per se. But you are right that we could just show this, even though the processing is all single-frame and doesn't take the temporal nature into account. Emotions really evolve over time, and so I think it is important to model that. My very first work on facial expressions, with Yaser Yacoob, used a very simple parametric model of face motion. From the parameters of the model over time, we recognized expressions surprisingly well for 1995! Here's the old video: th-cam.com/video/ZnCiZWNnNC4/w-d-xo.html
Thank you for the amazing work! I'm wondering if there's a way to apply this code to create a lively animated face, similar to Apple's Memoji, to replace a person's head in a video?
Like, I would create a 3D animated character by analyzing the features of a person's face in a video. Using your code, I'd then map the appropriate facial expressions onto this 3D character and replace the person's face with the animated figure. Does this sound feasible to you? Thanks in advance!
Hello guys, do you maybe have a video on how to install the code if you're not a programmer? I'm a 3D character animator in Maya and very interested in trying it! Or maybe detailed instructions? Thank you!
1. Can the model be exported without cropping to the ROI box?
2. What can be done to improve the temporal stability / shakiness?
Thank you!
1. The result is a full 3D FLAME head model; the cropping is only for display here. 2. EMOCA v2 is more stable (github.com/radekd91/emoca), and you can always run a 1-Euro filter if you still want more smoothing, but it's pretty stable already. 3. Also check out MICA, which is very stable: justusthies.github.io/posts/mica/
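In case the 1-Euro filter suggestion is useful to anyone: here is a minimal sketch of that filter (Casiez et al., CHI 2012) in plain Python, which you could run over each per-frame FLAME parameter independently. The parameter names and defaults here are illustrative, not from the EMOCA code:

```python
import math

class OneEuroFilter:
    """Minimal 1-Euro filter for smoothing one per-frame scalar,
    e.g. a single FLAME expression coefficient over a video.
    Lower min_cutoff = smoother when nearly still; higher beta =
    less lag during fast motion."""

    def __init__(self, freq, min_cutoff=1.0, beta=0.0, d_cutoff=1.0):
        self.freq = freq              # frame rate in Hz
        self.min_cutoff = min_cutoff
        self.beta = beta
        self.d_cutoff = d_cutoff
        self.x_prev = None
        self.dx_prev = 0.0

    def _alpha(self, cutoff):
        # Smoothing factor of an exponential low-pass at this cutoff.
        tau = 1.0 / (2.0 * math.pi * cutoff)
        return 1.0 / (1.0 + tau * self.freq)

    def __call__(self, x):
        if self.x_prev is None:       # first sample passes through
            self.x_prev = x
            return x
        dx = (x - self.x_prev) * self.freq          # raw derivative
        a_d = self._alpha(self.d_cutoff)
        dx_hat = a_d * dx + (1.0 - a_d) * self.dx_prev
        # Speed-adaptive cutoff: smooth when slow, responsive when fast.
        cutoff = self.min_cutoff + self.beta * abs(dx_hat)
        a = self._alpha(cutoff)
        x_hat = a * x + (1.0 - a) * self.x_prev
        self.x_prev, self.dx_prev = x_hat, dx_hat
        return x_hat
```

You would create one filter per parameter and feed it the per-frame values in order; constant inputs pass through unchanged, while frame-to-frame jitter is damped.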
Hello, thank you very much for your amazing work. Just asking: is there a way to apply this code and try this mocap system on my 3D characters? Thank you very much.
In principle it's possible, yes, but our code does not have this functionality. You would have to attach the FLAME face model (which is what we use) to your characters in place of the character's head. This is not trivial, as there would probably be discontinuities around the neck, which would then also have to be taken care of. Btw, if you're interested in full-body capture, be sure to check out projects such as PIXIE or SMPLify-X.
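One common trick for hiding the kind of neck discontinuity mentioned above is to blend the two meshes in a narrow band around the seam. This is just an illustrative sketch (not from the released code); `neck_y` and `band` are made-up names in the mesh's own units:

```python
import numpy as np

def neck_blend_weights(vertex_heights, neck_y, band=0.02):
    """Per-vertex blend factor: 1 for vertices well above the neck
    line, 0 below it, with a smoothstep ramp inside a narrow band
    to hide the seam between the FLAME head and the body mesh."""
    t = np.clip((vertex_heights - neck_y) / band, 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)   # smoothstep: C1-continuous ramp

# Blend positions: FLAME head above the seam, character body below:
#   w = neck_blend_weights(verts[:, 1], neck_y)
#   blended = w[:, None] * flame_verts + (1.0 - w[:, None]) * char_verts
```

The smoothstep ramp keeps the transition tangent-continuous, which avoids a visible crease where the two surfaces meet.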
@ Thank you very much, really appreciate it.
To me it looks like it totally loses the identity compared to DECA. The expressions also look exaggerated and not like in the original image.
It would have been interesting to see the rendered deformed mesh with the extracted textures.
Is this just DECA + an extra emotion detection model-based loss term?
Basically, yes. We take the DECA loss and add a term that says that the emotional content of the rendered image should match that of the original image. This is enough to improve the 3D realism of the mesh, without any explicit 3D training. This is what I find exciting. Emotion is a form of semantic "side information" (ie weak supervision) that is easy to get and can improve 3D shape estimation.
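In pseudocode terms, the extra term described above might look roughly like this. Note this is a conceptual sketch only: a fixed random projection stands in for the frozen, pretrained emotion-recognition network, and all names here are hypothetical, not from the EMOCA codebase:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 64))   # toy stand-in for a frozen emotion network

def emotion_features(img):
    # In the real method this would be a deep emotion-recognition
    # network applied to the image; a linear map stands in here.
    return W @ img.ravel()

def emotion_consistency_loss(rendered, original):
    # Extra term added on top of the DECA loss: the emotion content
    # of the rendered face should match that of the input image.
    d = emotion_features(rendered) - emotion_features(original)
    return float(np.mean(d ** 2))
```

The key point is that only image-level emotion features are compared, so the 3D reconstruction improves without any explicit 3D supervision.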
@@MichaelBlackMPI thanks for the response! I was having a read through the supplementary material and it seems this was not nearly as simple as my initial comment perhaps made it out to be :D
Appreciate you open-sourcing the code too!
@@liam9519 no worries. Happy to help.