- Create a Python virtual environment.
- Activate the venv.
- Install all the libraries from `requirements.txt` using `pip install -r requirements.txt`.
- Plug in a camera.
- Run `python live_detection.py`.
- Stand in a well-lit room.
- Position yourself so that you are completely in frame.
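Before running `live_detection.py`, it can help to confirm that the required packages actually resolve inside the activated venv. A minimal standard-library sketch — the package names below are examples, not necessarily the project's actual requirements:

```python
import importlib.util

def missing_packages(names):
    """Return the subset of package names that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Example names -- substitute the ones listed in requirements.txt.
missing = missing_packages(["cv2", "mediapipe", "numpy"])
if missing:
    print("Missing packages:", ", ".join(missing))
else:
    print("All packages found; ready to run live_detection.py")
```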
Preferably, the images should be JPG/JPEG and the image names should follow the pattern `[number].jpg` (e.g. `1.jpg`, `2.jpg`).
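If your source images are not already named this way, a small helper can renumber them. This is a sketch, not part of the project; the directory path in the usage comment is hypothetical:

```python
from pathlib import Path

def number_images(directory):
    """Rename every .jpg/.jpeg file in `directory` to 1.jpg, 2.jpg, ..."""
    images = sorted(p for p in Path(directory).iterdir()
                    if p.suffix.lower() in {".jpg", ".jpeg"})
    # Two-phase rename so an existing "2.jpg" is never clobbered
    # while another file is being renamed to that name.
    temps = []
    for i, path in enumerate(images, start=1):
        tmp = path.with_name(f".tmp_{i}.jpg")
        path.rename(tmp)
        temps.append(tmp)
    for i, tmp in enumerate(temps, start=1):
        tmp.rename(tmp.with_name(f"{i}.jpg"))

# Example (path is hypothetical):
# number_images("./poses_dataset/Images/warrior_pose")
```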
- Create a new directory in `./poses_dataset/Images` (the name can be anything, but I recommend using the name of the pose) and populate it with the pose images.
- Create another directory in `./poses_dataset/angles` (again, the name can be anything) and put one image of the pose in it. The image in this directory will be used as the 'known good' pose (so the pose should be perfect): during live detection, the user's pose angles will be compared against it to make recommendations.
- Run `create_poses_csv.ipynb` in the virtual env. This will create a CSV file (you can name it whatever you like) containing the x, y, z, and visibility values of all the desired landmark points for every pose in the `./poses_dataset/Images` directory. Note that the pose column in the generated CSV holds an integer label.
- Then run `create_angles_csv.ipynb`. This will create another CSV containing the 'known good' pose angles.
- Then run `rfc_model.ipynb`, which uses the CSV generated in step 3 as its input file to train/test on. It will then create a `.model` file.
- Finally, change these variables in `live_detection.py` to whatever you have created in steps 4 and 5.
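The comparison against the 'known good' angles can be pictured roughly as follows. This is a hedged sketch, not the project's actual code: it computes the angle at a joint from three (x, y) landmark points and reports how the user deviates from the reference angle. The function names, the tolerance, and the feedback strings are illustrative assumptions.

```python
import math

def joint_angle(a, b, c):
    """Angle ABC in degrees, where a, b, c are (x, y) landmark
    points and b is the joint vertex."""
    ang = math.degrees(
        math.atan2(c[1] - b[1], c[0] - b[0])
        - math.atan2(a[1] - b[1], a[0] - b[0])
    )
    ang = abs(ang)
    return 360 - ang if ang > 180 else ang

def angle_feedback(user_angle, good_angle, tolerance=10.0):
    """Suggest a correction when the user's joint angle deviates
    from the 'known good' angle by more than `tolerance` degrees."""
    diff = user_angle - good_angle
    if abs(diff) <= tolerance:
        return "OK"
    return "bend more" if diff > 0 else "straighten"

# A right angle at the elbow, compared against a reference of 170 degrees:
elbow = joint_angle((0, 1), (0, 0), (1, 0))
print(angle_feedback(elbow, 170.0))  # prints "straighten"
```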