Releases: pau1o-hs/Learned-Motion-Matching
v0.3.0 Release Notes
Stable version
Learned from and updated some things based on the author's version.
Plans for next updates
Well, this system isn't very useful without a relevant database of animations, so I think my hands are tied unless I find an easy way to animate stylized characters. ;-;
v0.2.0 Release Notes
Almost there!
Hi guys! There have been good improvements since the last release; the projector is working well. I am still not following the Decompressor loss functions to the letter, because the ForwardKinematics method is really slow and I still need to understand those losses a bit more.
Besides that, I am making the system more user-friendly, in case this project becomes relevant for gamedevs. Buttons were added to simplify the LMM pipeline and debugging.
Currently, to use this system, the user needs to follow these steps:
- Add the desired animation clips in the character's Animator tab;
- Add and set up the Gameplay script on the desired character;
- Hit the "Extract data from animator" button, located in the Inspector of the Gameplay script;
- Export the previously generated "XData", "YData" and "HierarchyData" to the PyTorch "/database" folder;
- Run decompressor.py, followed by stepper.py and projector.py (these last two can be run in parallel);
- Export the ONNX files generated in the PyTorch environment to Unity's "/Assets/Motion Matching/ONNX" folder;
- Export the "QData.txt" file generated in the PyTorch environment to Unity's "/Assets/Motion Matching/Database" folder;
- Hit the "Play" button and play.
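The PyTorch half of the steps above can be scripted. Here is a minimal sketch assuming the three training scripts sit in the current directory; only the script names come from the steps above, and the `run_stage` helper is hypothetical:

```python
import subprocess

def run_stage(*commands):
    """Launch a group of commands concurrently and wait for all of them."""
    procs = [subprocess.Popen(cmd) for cmd in commands]
    for p in procs:
        if p.wait() != 0:
            raise RuntimeError(f"{p.args} exited with code {p.returncode}")

# Stage 1: the decompressor must finish first (its outputs feed the others).
# Stage 2: stepper and projector are independent, so they can run in parallel.
# run_stage(["python", "decompressor.py"])
# run_stage(["python", "stepper.py"], ["python", "projector.py"])
```

Running the two independent scripts as one stage roughly halves the wall-clock time of that part of the pipeline on a machine with enough resources for both.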
Additionally, there are some buttons in the Game tab to debug the neural network results.
Important notes
If you try to use it with your own character and animations, there are some details to keep in mind:
- All your character's bone scales must be (1, 1, 1) for the ForwardKinematics method to work properly;
- Every animation clip must have at least 60 frames;
- The last 59 frames of every animation clip must have the same trajectory directions, because the neural networks receive the next 60 frames as input; so I made the last 59 frames equal to frame TotalFrames - 60.
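The last constraint amounts to a padding rule: every frame fed to the networks needs a full 60-frame future window, so the tail of the clip just repeats frame TotalFrames - 60. A minimal NumPy sketch of that rule (the function name and per-frame array layout are assumptions, not the project's actual exporter code):

```python
import numpy as np

def pad_clip_tail(frames, window=60):
    """Overwrite the last (window - 1) frames with frame TotalFrames - window,
    so every remaining frame still has a full future window to look at."""
    frames = np.asarray(frames)
    total = len(frames)
    if total < window:
        raise ValueError(f"clip needs at least {window} frames, got {total}")
    out = frames.copy()
    out[-(window - 1):] = frames[total - window]
    return out
```

For a 100-frame clip this leaves frames 0..40 untouched and copies frame 40 into frames 41..99, which also explains the minimum-length requirement: shorter clips have no valid window at all.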
Plans for next updates
- Create and add more animation clips to the database;
- Refactor the Gameplay script;
- (Try to) Improve Stepper performance;
- Learn and use the BVH file format to extract data from the rig, instead of using Unity.
v0.1.1 Release Notes
Hello World!
Hi guys, nice to meet you!
I started this project some months ago and now I'm able to show you an almost working version. I've just started learning about neural networks, animation programming and even GitHub! So there is still a lot to understand and improve!
How to run my sample
Since I'm still adjusting the neural network models, it's not usable yet. But you can take a look at it by following these steps:
- Download the lmm-unity-0.1.1.zip file below
- Open the project in Unity
- Open the Assets/Scenes/Motion Matching.unity scene
- Press the Play button
- Press W, A, S or D to (try to) move
* Barracuda package required
I've made some changes in the projector model and it is resetting the pose to default; I'm investigating that right now. But you can take a look at the MotionMatching.cs > Matching method to understand a bit more of the project. If you uncomment #region COMPRESSOR and comment out #region PROJECTOR, you can see it working without the player queries.
Plans for v0.2.0
Once the projector model is fixed, the system will be usable: not fast and high quality, but usable.
Besides that, CustomFunctions.py > ForwardKinematics is really slow, so it is currently unused in training and needs to be fixed.
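For context, the core of a forward-kinematics pass is just accumulating each joint's local transform down the hierarchy; the usual performance trap is doing it in a per-joint Python loop for every frame instead of vectorizing over the whole batch. Below is a minimal single-pose sketch, not the actual CustomFunctions.py code; the (w, x, y, z) quaternion convention and the parent-index layout are assumptions:

```python
import numpy as np

def quat_mul(q, r):
    # Hamilton product of two (w, x, y, z) quaternions.
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_rotate(q, v):
    # Rotate vector v by unit quaternion q via q * (0, v) * q^-1.
    qv = np.array([0.0, v[0], v[1], v[2]])
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mul(quat_mul(q, qv), q_conj)[1:]

def forward_kinematics(local_pos, local_rot, parents):
    """Accumulate local joint transforms into global ones.
    parents[j] is the parent joint index, -1 for the root;
    joints must be ordered so parents come before children."""
    n = len(parents)
    global_pos = np.zeros((n, 3))
    global_rot = np.zeros((n, 4))
    for j in range(n):
        p = parents[j]
        if p < 0:
            global_pos[j] = local_pos[j]
            global_rot[j] = local_rot[j]
        else:
            global_rot[j] = quat_mul(global_rot[p], local_rot[j])
            global_pos[j] = global_pos[p] + quat_rotate(global_rot[p], local_pos[j])
    return global_pos, global_rot
```

A batched version would carry an extra frames dimension through the same recurrence, which is typically what makes FK fast enough to sit inside a training loss. Note the sketch ignores bone scale entirely, which is consistent with the (1, 1, 1) scale requirement above.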
Hope you enjoy and understand a bit of this mess.