Models
At sync, we’re building foundational models to understand and manipulate humans in video. Our suite of lipsyncing models allows you to edit the lip movements of any speaker in any video to match a target audio. Explore and compare the capabilities of the different models below.
All models are available in both Studio and API.
Advanced Options
- Occlusion Detection: For challenging video content where faces may be partially hidden by objects, hands, or other elements, you can enable occlusion detection. This improves face detection accuracy in complex scenes at the cost of slower generation. Enable this option when faces are frequently obscured or when standard generation produces suboptimal results due to occlusion (see the request sketch below).
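Below is a minimal sketch of a generation request with occlusion detection turned on. The occlusion_detection option name comes from this page; the endpoint URL, payload shape, header names, and file URLs are illustrative assumptions, so check the API reference for the exact schema.

```python
import requests

# Sketch only: endpoint, payload fields, and auth header are assumptions,
# not the confirmed API schema -- consult the API reference before use.
API_URL = "https://api.sync.so/v2/generate"  # assumed endpoint

payload = {
    "model": "lipsync-2",  # latest model mentioned in these docs
    "input": [
        {"type": "video", "url": "https://example.com/input.mp4"},
        {"type": "audio", "url": "https://example.com/target.wav"},
    ],
    "options": {
        "occlusion_detection": True,  # better face detection in complex scenes, slower generation
    },
}

response = requests.post(
    API_URL,
    headers={"x-api-key": "YOUR_API_KEY", "Content-Type": "application/json"},
    json=payload,
)
response.raise_for_status()
print(response.json())
```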
Caveats
- Still Frame Limitation: Our lipsync models require natural speaking motion in the input video to function properly. If your video contains segments with still frames (where the speaker is not actively moving or speaking), lipsync will not work during those portions, even if audio is present.
This occurs because our models run inference on independent 2-second chunks and need to detect a natural speaking style to generate appropriate lip movements. Static or still video segments don’t provide the visual cues the model needs to create realistic lip synchronization.
Recommendation: For best results, ensure your input video shows the speaker actively talking throughout the duration you want to lipsync.
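If you are unsure whether a clip contains still segments, a rough pre-check like the one below can help. This is a hypothetical helper using OpenCV frame differencing, not part of the product; the 2-second window mirrors the chunking described above, and the motion threshold is an arbitrary value to tune per source footage.

```python
import cv2
import numpy as np

def find_still_segments(video_path: str, chunk_seconds: float = 2.0, motion_threshold: float = 2.0):
    """Flag ~2-second chunks whose average frame-to-frame difference is very low."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    frames_per_chunk = max(1, int(round(fps * chunk_seconds)))

    still_chunks = []
    prev_gray = None
    diffs, chunk_index, frame_index = [], 0, 0

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is not None:
            # Mean absolute pixel difference as a crude motion measure
            diffs.append(float(np.mean(cv2.absdiff(gray, prev_gray))))
        prev_gray = gray
        frame_index += 1

        if frame_index % frames_per_chunk == 0:
            if diffs and np.mean(diffs) < motion_threshold:
                start = chunk_index * chunk_seconds
                still_chunks.append((start, start + chunk_seconds))
            diffs, chunk_index = [], chunk_index + 1

    cap.release()
    return still_chunks

# Example: list (start, end) times in seconds of near-still chunks
print(find_still_segments("input.mp4"))
```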
FAQs
Why isn't my AI-generated character's lip movement working properly?
The character in the input video needs to look like they are talking. Our models mimic the speaking style of the input video, so if the character is completely static, the model may not generate moving lips either.
Solution: When creating your AI-generated video, add the text prompt “person is speaking naturally” to your generation. This will create characters with lips that are already moving, which will work much better with our platform.
Can I use sync to lipsync a song to a character?
Absolutely. Our latest model is your best bet for this. For best results, isolate and upload the vocals track, as instrumental sounds can sometimes interfere with lipsync quality.
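One way to isolate the vocals before uploading is an off-the-shelf source-separation tool. The sketch below uses Spleeter purely as an example (Demucs or a DAW stem export would work just as well); it is not affiliated with sync, and the file paths are placeholders.

```python
# Split a mixed track into vocals + accompaniment with Spleeter,
# then upload only the vocal stem as the target audio.
from spleeter.separator import Separator

separator = Separator("spleeter:2stems")           # 2-stem model: vocals / accompaniment
separator.separate_to_file("song.mp3", "stems/")   # writes stems/song/vocals.wav and accompaniment.wav
```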
Does sync work with animal faces or non-human characters?
You can lipsync human-like faces, but our models don’t currently support animals or non-humanoid characters.
Why is the lipsync quality poor or non-existent in certain parts of my video?
Please check if the problematic segments have:
- Multiple speakers in the frame
- Faces that are too small or in profile view
- Segments where the speaker in the input video is not speaking
- Faces that are partially occluded by objects, hands, or other elements
For multiple people, try masking or cropping out some faces using external tools (a masking sketch follows this answer). For occlusion issues, consider enabling the occlusion_detection option in your generation request, which provides better face detection in complex scenes (though it will slow down processing). We’re working to resolve these issues in future model releases.
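For the masking approach, here is a rough OpenCV sketch that covers one face region before upload. The rectangle coordinates and file names are placeholders you would replace for your own footage, and cv2.VideoWriter does not carry audio, so supply the target audio separately or re-mux it afterwards.

```python
import cv2

# Placeholder region covering the face you want to hide
x, y, w, h = 800, 100, 300, 300

cap = cv2.VideoCapture("multi_speaker.mp4")
fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("masked.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame[y:y + h, x:x + w] = 0  # black out the unwanted face
    out.write(frame)

cap.release()
out.release()
```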
Why does my video have poor quality lipsync when faces are in profile view?
Extreme profile view faces can lead to sub-par results. Please try with our latest model (lipsync-2), which has improved pose robustness and is your best option for challenging angles.
Why does the generated face appear to be lower resolution than my input video?
Our latest model generates faces at 512×512 resolution, which is sufficient for most 1080p videos. If the face in your input video is very large in the frame, you may notice some resolution difference. For the highest possible quality, lipsync-2 offers the best resolution handling among our models.