Releases: pau1o-hs/Learned-Motion-Matching

v0.3.0 Release Notes

26 Nov 16:58

Stable version

LMM2

Learned and updated some things based on the author's version.

Plans for the next updates

Well, this system isn't very useful without a relevant database of animations, so I think my hands are tied unless I find an easy way to animate stylized characters. ;-;

v0.2.0 Release Notes

02 Nov 12:12
Pre-release

Almost there!

LMM

Hi guys! There have been good improvements since the last release, and the projector is working well. I am still not following the Decompressor loss functions to the letter, because the ForwardKinematics method is really slow and I still need to understand those losses a bit more.
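
For readers curious about what those losses involve, here is a heavily simplified sketch, assuming rotation-matrix joint transforms and illustrative loss weights; the helper names and loss terms are assumptions chosen for illustration, not the exact formulation from the paper or from this repo.

```python
import torch

def fk_positions(local_pos, local_rot, parents):
    """Naive forward kinematics: accumulate parent transforms to get global
    joint positions. local_pos: (J, 3) joint offsets, local_rot: (J, 3, 3)
    rotation matrices, parents[j] = parent joint index (root listed first)."""
    global_pos = [local_pos[0]]
    global_rot = [local_rot[0]]
    for j in range(1, local_pos.shape[0]):
        p = parents[j]
        global_rot.append(global_rot[p] @ local_rot[j])
        global_pos.append(global_pos[p] + global_rot[p] @ local_pos[j])
    return torch.stack(global_pos)

def decompressor_loss(pred_pos, pred_rot, tgt_pos, tgt_rot, parents,
                      w_local=1.0, w_world=1.0):
    """Illustrative reconstruction loss: a local-space term plus a
    world-space term computed through forward kinematics."""
    loss_local = torch.mean(torch.abs(pred_pos - tgt_pos))
    loss_world = torch.mean(torch.abs(fk_positions(pred_pos, pred_rot, parents)
                                      - fk_positions(tgt_pos, tgt_rot, parents)))
    return w_local * loss_local + w_world * loss_world
```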

Besides that, I am making the system more user-friendly, in case this project becomes relevant for gamedevs. Buttons were added to facilitate the LMM pipeline and debugging.

(Screenshot: Inspector buttons for the LMM pipeline and debugging)

Currently, to use this system, the user needs to follow these steps:

  1. Add the desired animation clips in the character's Animator tab;
  2. Add and set up the Gameplay script on the desired character;
  3. Hit the "Extract data from animator" button, located in the Inspector of the Gameplay script;
  4. Export the previously generated "XData", "YData" and "HierarchyData" to the PyTorch "/database" folder;
  5. Run decompressor.py, followed by stepper.py and projector.py (these last two can be run in parallel; see the sketch after this list);
  6. Export the ONNX files generated in the PyTorch environment to Unity's "/Assets/Motion Matching/ONNX" folder;
  7. Export the "QData.txt" file generated in the PyTorch environment to Unity's "/Assets/Motion Matching/Database" folder;
  8. Hit the "Play" button and play.
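
As a minimal sketch of step 5, assuming the three training scripts are launched from the PyTorch project root, the run order could look like this (the launcher itself is not part of the repo):

```python
# Illustrative runner for step 5: decompressor first, then stepper and
# projector in parallel, since those two do not depend on each other.
import subprocess

subprocess.run(["python", "decompressor.py"], check=True)

stepper = subprocess.Popen(["python", "stepper.py"])
projector = subprocess.Popen(["python", "projector.py"])

for proc in (stepper, projector):
    if proc.wait() != 0:
        raise RuntimeError("a training script failed")
```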

Additionally, there are some buttons in the Game tab to debug the neural network results.

Important notes

If you want to use it with your own character and animations, there are some details to keep in mind:

  • All of your character's bone scales must be (1, 1, 1) for the ForwardKinematics method to work properly;
  • Every animation clip must have at least 60 frames;
  • The last 59 frames of every animation clip must have the same trajectory directions, because the next 60 frames are passed as input to the neural networks; since those frames cannot provide a full future window, I made the last 59 frames equal to frame TotalFrames - 60 (see the sketch after this list).
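
A minimal sketch of those clip constraints, with hypothetical function names and data layout chosen just for illustration:

```python
# Every clip needs at least 60 frames, and frames in the last 59 frames of a
# clip reuse the trajectory of frame (total_frames - 60), because a full
# 60-frame future window is no longer available there.
def check_clip_length(total_frames, min_frames=60):
    if total_frames < min_frames:
        raise ValueError(f"clip has {total_frames} frames, needs at least {min_frames}")

def trajectory_frame(frame, total_frames, lookahead=60):
    """Frame whose trajectory data is used for `frame`, clamped near the end
    of the clip so the last (lookahead - 1) frames all reuse
    frame total_frames - lookahead."""
    return min(frame, total_frames - lookahead)
```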

Plans for the next updates

  • Create and add more animation clips to the database;
  • Refactor Gameplay script;
  • (Try to) Improve Stepper performance;
  • Learn and use the BVH file format to extract data from the rig, instead of using Unity.

v0.1.1 Release Notes

15 Oct 03:18
Pre-release

Hello World!

Hi guys, nice to meet you!

I started this project some months ago, and now I'm able to show you an almost working version. I just started learning about neural networks, animation programming, and even GitHub, so there is still a lot to understand and improve!

How to run my sample

Since I'm still adjusting the neural network models, it's not usable yet. But you can take a look at it by doing the following steps:

  1. Download the lmm-unity-0.1.1.zip file below
  2. Open the project in Unity
  3. Open the Assets/Scenes/Motion Matching.unity scene
  4. Press Play button
  5. Press W, A, S or D to (try to) move

* Barracuda package required

I've made some changes to the projector model and it is resetting the pose to default; I'm investigating it right now. But you can take a look at the MotionMatching.cs > Matching method to understand a bit more of the project. If you uncomment #region COMPRESSOR and comment #region PROJECTOR, you can see it working without the player queries.

Plans for v0.2.0

When the projector model is fixed, the system will be usable: not fast or high quality, but usable.
Besides that, CustomFunctions.py > ForwardKinematics is really slow, so it's currently unused in training and needs to be fixed.
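
One common way to speed up this kind of function is to keep the per-joint loop but batch all frames together with tensor operations. The sketch below is only an illustration under those assumptions (rotation-matrix transforms, parents listed before their children); it is not the actual CustomFunctions.py code.

```python
import torch

def fk_batched(local_pos, local_rot, parents):
    """Forward kinematics over all frames at once.
    local_pos: (F, J, 3) joint offsets, local_rot: (F, J, 3, 3) rotation
    matrices, parents[j] = parent joint index (root first, parents before
    children). Only the joint loop stays in Python; every frame is handled
    by batched tensor ops, which tends to be much faster than a per-frame loop."""
    num_joints = local_pos.shape[1]
    global_pos = [local_pos[:, 0]]
    global_rot = [local_rot[:, 0]]
    for j in range(1, num_joints):
        p = parents[j]
        global_rot.append(global_rot[p] @ local_rot[:, j])
        global_pos.append(global_pos[p] +
                          (global_rot[p] @ local_pos[:, j].unsqueeze(-1)).squeeze(-1))
    return torch.stack(global_pos, dim=1), torch.stack(global_rot, dim=1)
```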

Hope you enjoy and understand a bit of this mess.