Timestamps:
* 00:00 Set up VRCFury and Jerry's templates
* 02:31 Apply Jerry's template prefabs for the different blendshape standards
* 03:34 Check the params limit: your current params plus 161 for blendshapes must be under 256
* 04:10 Customize the animator if needed, e.g. if something like automatic blinking is included in your current setup
* 10:41 Testing with GestureManager: enable face tracking via the VRChat hand control menu
* 11:35 Testing with GestureManager: simulate user-generated input from your face-tracking device ((!) face tracking must be enabled via the VRChat hand control menu)
* 16:59 The blendshape object is NOT named "Body": Jerry's template source must be adapted via "Rewrite Animation Clips"
* 17:49 Blendshapes are split across multiple objects: add a VRCFury Blendshape Link component on the avatar
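The params-limit check at 03:34 is simple arithmetic; here is a rough sketch, assuming the 161-bit template cost quoted in the video and VRChat's 256-bit synced expression parameter budget (note the budget is counted in bits, not parameter count: a Bool costs 1 bit, an Int or Float costs 8):

```python
LIMIT = 256          # VRChat's synced expression parameter budget, in bits
TEMPLATE_COST = 161  # bits the face-tracking template adds (figure from the video)

def remaining_budget(current_bits: int) -> int:
    """Bits left over after adding the template to an avatar."""
    return LIMIT - current_bits - TEMPLATE_COST

# An avatar already using 80 bits still fits; one using 100 does not.
print(remaining_budget(80))   # 15 bits to spare
print(remaining_budget(100))  # negative: over budget, something must be trimmed
```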
I wanna kiss you. Thank GOD
Thanks
Excellent stuff.
Just to help anyone passing by, here's a few things I bumped into while applying these:
1. Face Tracking doesn't seem to work at all on local "Build & Test" avatars. (Or maybe it's just my setup...).
2. If the sliders are working in the Unity Editor (see the manual), but FT is still not working in VRC: make sure OSC is on in VRChat, and do a "Reset OSC Config" in the VRC radial menu.
3. If you've got two or more avatars with a "Body" root in the same scene, make sure scripts are picking up the correct one.
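To help rule things out when debugging point 2, you can also push a test value into VRChat directly over OSC from outside Unity. A minimal standard-library sketch is below; 9000 is VRChat's documented default OSC input port, but the exact parameter path (`JawOpen` here) is a placeholder that depends on which template standard you set up:

```python
import socket
import struct

def osc_pad(b: bytes) -> bytes:
    # OSC strings are null-terminated and padded to a 4-byte boundary.
    b += b"\x00"
    while len(b) % 4:
        b += b"\x00"
    return b

def build_osc_float(address: str, value: float) -> bytes:
    # An OSC message: padded address, padded type-tag string ",f",
    # then the value as a big-endian 32-bit float.
    return osc_pad(address.encode()) + osc_pad(b",f") + struct.pack(">f", value)

# Hypothetical parameter path; substitute the one your template actually uses.
msg = build_osc_float("/avatar/parameters/JawOpen", 0.5)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(msg, ("127.0.0.1", 9000))  # VRChat's default OSC input port
```

If the avatar reacts to this but not to your face-tracking software, the problem is on the tracking-software side rather than in VRChat or the template.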
Can't read anything on your screen when you've got it at 4K, mate.
Like, you're going through menus and I literally cannot see the letters even at max render. Do you have a 50-inch monitor or something?
Think you need to get your eyes checked. I can read everything perfectly fine (this isn't an insult, I just think you might actually need to check your eyes)
@TITANBT1464 I mean, @loafbreed7246 isn't wrong. The res is a little high for those who don't have larger displays, because our monitors aren't natively built for that scale; it looks fine on your end because your display is meant for that resolution.
I'm not complaining, I have workarounds to see it myself, but he is technically right.
It's not your fault, OBS isn't great at detecting it and warning you about it :P
(if you wanna fix it, just change your output scale resolution to 1920x1080 in OBS, under the Video tab in Settings)
also thank you for the template 🙏🙏 saved me a billion years of manual work
Saving this for when my Quest Pro arrives! Thank you so much for making this alongside the Github stuff! ♥
"I usually keep it all the way to the left so I can actually read things" And thus, youtube viewer obtains text for ants. D=
Tips for seeing what's being talked about: render the video in 4K, even if your monitor is 1080p. That forces your hardware to downsample locally which is way clearer than the compressed "1080p" labeled resolution.
If your bools aren't working: VRC changed how they sync variables, and the bools can only be viewed on a non-local clone. Create one at runtime (it will auto-destroy when stopped) and control the expression bool parameters on the original avatar. More info can be found on the "Template Debug Tutorial" on the channel.
I have to mirror the below sentiment that, as someone who doesn't have a 4k monitor, reading any of the text in this video is very difficult even at fullscreen.
Very informative, thank you very much. Just getting into this as I've just got a quest pro and I'm hoping to rig the fem Nardo up. :)
Most people still have monitors with half that resolution, so reading text that small is extremely difficult, if not impossible, at times. Idk if there is a way to fix it, but if there is, it would be helpful for future videos.
So I'm very new to the world of face tracking, just got myself a Quest Pro! I want to add the CiCi head model's face tracking to an existing avatar (Max by Missymod) and I am beyond lost in the process. My brain is fogging trying to read the screen and follow along thoroughly myself. Is there any way to get a clearer video, or maybe some assistance on my end with some extra elaboration?
Please add chapters in the video so we don't have to watch all the stuff every time we come here. Great video btw, still watching; will leave feedback after I have done everything. ^_^
this
I don't know what is happening but it will not let me add the repository. Any suggestions?
I'm trying to use face tracking for Kyoko by Foxipaws, but her face doesn't move at all even though the debug menu says I'm moving my face. Am I supposed to also drag the SR runtime prefab into her hierarchy? Because I tried that, but it didn't let me upload her because of the parameters.
If you noticed, in the beginning when he mentioned Unified Expressions, he briefly showed his model's blendshapes. Well, his model was specifically set up with face tracking in mind. So if you don't already have those specific blendshapes, you are not gonna get anything out of this. He fails to mention this entirely.
nice
so how do we know what blendshapes our avatar is using? @jerrysmod2735