It’s been a week with my iPhone X, and I am drilling into the technical details of what iOS 11 does with it. One of the big features is facial recognition. Curious, I watched an Apple Developer video on “Face Tracking with ARKit”: https://developer.apple.com/fall17/601
Everyone focuses on the first one: facial recognition and doing things like Animoji.
With expression tracking, you could use Face Tracking as UI input.
This is where ARFaceAnchor comes in: it gives you the position and orientation of the user’s face, its 3D topology, and the facial expression. Everyone is a bit different in how they react, but people are generally consistent in their facial expressions.
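Here’s a minimal sketch of what that looks like in code. The class name and setup are my own for illustration, not from the video; the ARKit pieces (ARFaceTrackingConfiguration, ARFaceAnchor, its transform, geometry, and blendShapes) are the real API:

```swift
import UIKit
import ARKit

// Minimal sketch: start a face-tracking session and read each ARFaceAnchor update.
class FaceViewController: UIViewController, ARSessionDelegate {
    let session = ARSession()

    override func viewDidLoad() {
        super.viewDidLoad()
        session.delegate = self

        // Face tracking needs the TrueDepth camera (iPhone X and later).
        guard ARFaceTrackingConfiguration.isSupported else { return }
        session.run(ARFaceTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let faceAnchor as ARFaceAnchor in anchors {
            // Position and orientation of the face.
            let transform = faceAnchor.transform

            // 3D topology: a triangle mesh of the face.
            let vertexCount = faceAnchor.geometry.vertices.count

            // Facial expression: blend shape coefficients from 0.0 to 1.0.
            let jawOpen = faceAnchor.blendShapes[.jawOpen]?.floatValue ?? 0

            print("vertices: \(vertexCount), jawOpen: \(jawOpen)", transform)
        }
    }
}
```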
If you don’t think you can do this, check out this list of facial tracking blend shapes.
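To turn an expression into UI input, you only need to watch one of those blend shapes and fire when it crosses a threshold. The 0.6 threshold and the triggerAction() hook below are assumptions on my part, just to show the idea:

```swift
import ARKit

// Sketch: treat an eyebrow raise like a button tap.
final class ExpressionInput: NSObject, ARSessionDelegate {
    private var wasRaised = false

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        guard let face = anchors.compactMap({ $0 as? ARFaceAnchor }).first else { return }

        let browInnerUp = face.blendShapes[.browInnerUp]?.floatValue ?? 0
        let isRaised = browInnerUp > 0.6

        // Only trigger on the transition from "not raised" to "raised".
        if isRaised && !wasRaised {
            triggerAction()
        }
        wasRaised = isRaised
    }

    private func triggerAction() {
        print("Eyebrow raise detected: treat it like a tap.")
    }
}
```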
In operations, lighting has a direct impact on the quality, accuracy, and speed of work. And you could use face tracking to get a reading of the lighting in the environment.
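In a face-tracking session, each frame carries an ARDirectionalLightEstimate, which effectively uses the face as a light probe. A small sketch, assuming you already have an ARFrame from a running session:

```swift
import ARKit

// Sketch: read the lighting estimate that ARKit derives from the face.
func logLighting(from frame: ARFrame) {
    guard let light = frame.lightEstimate as? ARDirectionalLightEstimate else { return }

    // Ambient intensity in lumens (around 1000 is roughly neutral) and color temperature in Kelvin.
    print("ambient: \(light.ambientIntensity) lm, \(light.ambientColorTemperature) K")

    // Direction and intensity of the primary light source inferred from the face.
    print("primary direction: \(light.primaryLightDirection), intensity: \(light.primaryLightIntensity)")
}
```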
And last, with the microphone support for Animoji, you can use the same method to capture audio as well.
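ARKit can hand you the microphone audio through the same session: set providesAudioData on the configuration and implement the audio delegate callback. The class name and what you do with the buffers are my assumptions; the properties and delegate method are the real API:

```swift
import ARKit
import AVFoundation

// Sketch: capture face tracking and microphone audio from one ARSession.
final class FaceAndAudioCapture: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        let configuration = ARFaceTrackingConfiguration()
        configuration.providesAudioData = true   // requires microphone permission

        session.delegate = self
        session.run(configuration)
    }

    // Called with microphone audio while the session runs.
    func session(_ session: ARSession, didOutputAudioSampleBuffer audioSampleBuffer: CMSampleBuffer) {
        // Hand the buffer to an AVAssetWriter, a speech recognizer, etc.
        print("got \(CMSampleBufferGetNumSamples(audioSampleBuffer)) audio samples")
    }
}
```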