Quick video to see the playback in action (each figure is a different clip of me in a motion capture suit):
This project was a doozy. One of my PhD projects involves analyzing motion capture data. The loop of capturing data, running analysis algorithms over it, and quickly playing back the results to give life to the statistics had to be fast and efficient. However, I had never built an animation system from scratch in OpenGL before. This took some effort.
Step 1. Parse Vicon data
The first hurdle to cross was parsing the Vicon data so that I could play it back. Vicon exports its motion capture data as FBX files. That was the first bad news: the FBX SDK is a beast of its own, full of legacy functions and dead code. So, with a lot of effort, I created an FBX converter that transforms FBX files into my engine's native mesh, skeleton, and animation formats. Here's what it looks like:
So now I have a chain from Vicon -> FBX -> DayDream Engine.
Step 2. Build Animation System
It’s not possible for me to post bit by bit how I accomplished this, but I can lay out the road map I used to get a robust (knock on wood) animation system running:
- Render meshes that can be animated
- Create a skinned mesh class with extra VBOs (vertex buffer objects) for vertex-to-joint indices and vertex blend weights
- Create a vertex shader that animates meshes using an array of final joint matrices
- Implement an SSBO (shader storage buffer object) to hold the joint matrices
- Store a structure or pointer to skeleton data per animated mesh
- Store a structure or pointer to animation data (frames of animation) and animation state (i.e. blending, masks, playback speed, etc.) per animated mesh
- Calculate the final pose for each animated mesh per frame
- Iterate through all active animation states and calculate the final pose matrices
Step 3. User Experience
I’m not the only one on this project. My lab mate uses my engine to preview the results of his analysis and needs an easy way to load animations and manipulate them. My game engine already supports a terminal that can pipe commands into the event queue. Since we’re both comfortable with a command line interface, I created a slew of commands for him to load skeletons and animations, create animated agents, and pop up an ImGui interface to control spawned agents and their animations at a granular level.
He even created a pull request on my GitHub repo so that he could add scripting to the terminal. Feed it a file with one command per line, and my engine will parse the commands straight into the event queue. Any entity subscribing to those commands will execute them in that frame.
Agent manipulation, as mentioned before, is done through an ImGui window. Animations can be paused, sped up, rewound, played back uninterpolated (for debugging), stepped forwards or backwards, and moved around the environment. This allows a fast iteration loop from analysis to review to re-analysis.