
LeGR implementation #501

Merged (33 commits) Jul 5, 2021

Conversation

@mkaglins (Contributor)

To run an experiment with the LeGR algorithm, the "compression" config section should look like:

  "compression": {
        "algorithm": "filter_pruning",
        "params": {
            "pruning_flops_target": 0.3, // final FLOPs pruning target that will be fine-tuned
            "weight_importance": "legr",
            "all_weights": true,

            "prune_downsample_convs": true, // recommended, so the algorithm can decide the pruning level for every node
            "prune_first_conv": true,       // recommended, so the algorithm can decide the pruning level for every node
            "prune_last_conv": true,        // recommended, so the algorithm can decide the pruning level for every node
            "legr_params":
            {
                "generations": 400, // number of generations in the evolution algorithm (400 is the default); can be lower in simpler cases
                "max_pruning": 0.8  // pruning level to train the LeGR algorithm with; can be equal to the target pruning level, but the paper recommends a higher potential pruning rate for the model
            }
        }
    }
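As a rough illustration of what the `generations` knob controls, a LeGR-style evolutionary search over per-layer ranking coefficients can be sketched as follows (a toy hill-climbing sketch, not the NNCF implementation; `evaluate` stands in for validating a sampled pruned model's accuracy):

```python
import random

def evolutionary_search(num_layers, evaluate, generations=400, seed=0):
    """Toy LeGR-style search: evolve per-layer ranking coefficients and
    keep the candidate whose (hypothetical) pruned model scores best."""
    rng = random.Random(seed)
    best = [1.0] * num_layers          # one scaling coefficient per layer
    best_score = evaluate(best)
    for _ in range(generations):
        candidate = list(best)
        idx = rng.randrange(num_layers)      # mutate one coefficient
        candidate[idx] *= rng.uniform(0.5, 1.5)
        score = evaluate(candidate)          # e.g. validation accuracy
        if score > best_score:               # keep the fitter candidate
            best, best_score = candidate, score
    return best, best_score

# usage with a toy objective whose optimum is at coefficient 0.7
best, score = evolutionary_search(
    3, lambda c: -sum((x - 0.7) ** 2 for x in c), generations=100)
```

More generations simply give the search more mutation attempts, which is why simpler cases can get away with fewer.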

Settings in the "optimizer" section will be used as settings for the final fine-tuning optimizer.
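For illustration only, such an "optimizer" section might look like this (hypothetical example values in the style of the NNCF example configs, not recommendations from this PR):

```
    "optimizer": {
        "type": "SGD",           // hypothetical example values
        "base_lr": 0.01,
        "weight_decay": 1e-4
    }
```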

@mkaglins changed the title from "Mkaglins/legr impl" to "WIP: LeGR implementation" on Feb 11, 2021
@mkaglins (Contributor, Author)

@AlexKoff88, please take a look.

@MaximProshin (Collaborator)

cc @AlexanderDokuchaev

(Resolved review threads on examples/classification/main.py, nncf/config/schema.py, nncf/pruning/filter_pruning/global_ranking/RL_evolution.py)
@AlexKoff88 (Contributor)

General comments:

  • We need to parallelize computations (the model forward passes) in the LeGR initialization step, namely the validation of sampled models.
  • I also noticed that the LeGR initialization output is dumped to nncf_output.log, while most of the logs go to output.log. I wonder why we need the former at all.

@lzrvch (Contributor) commented Mar 30, 2021:

> General comments:
>
>   • We need to parallelize computations (the model forward passes) in the LeGR initialization step, namely the validation of sampled models.
>   • I also noticed that the LeGR initialization output is dumped to nncf_output.log, while most of the logs go to output.log. I wonder why we need the former at all.

One possible scenario for parallelization is to manually wrap the model with DataParallel during the LeGR search inside the algorithm and unwrap it after initialization, so that the user can later use whatever training mode they want.
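A minimal sketch of that wrap/unwrap scenario (hypothetical helper names, not the code in this PR):

```python
import torch
import torch.nn as nn

def run_legr_search_data_parallel(model: nn.Module, validate_fn):
    """Temporarily wrap `model` in DataParallel so the validation forward
    passes of sampled pruned models can run on multiple GPUs, then hand
    the original unwrapped module back to the caller."""
    wrapped = nn.DataParallel(model)   # falls back to a plain forward on CPU
    score = validate_fn(wrapped)       # stand-in for the LeGR search loop
    return wrapped.module, score       # .module is the original model

# usage with a tiny model and a dummy "validation" function
model = nn.Linear(4, 2)
unwrapped, score = run_legr_search_data_parallel(
    model, lambda m: m(torch.randn(8, 4)).sum().item())
assert unwrapped is model              # user gets the unwrapped model back
```

With no GPUs available, DataParallel simply calls the wrapped module directly, so the pattern is safe to apply unconditionally; the user's later choice of DistributedDataParallel or plain training is unaffected.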

(Resolved review threads on examples/classification/main.py, nncf/config/schema.py, nncf/pruning/filter_pruning/algo.py, nncf/pruning/filter_pruning/global_ranking/LeGR.py, nncf/pruning/filter_pruning/global_ranking/RL_evolution.py)
@vshampor (Contributor) left a comment:

General comment: use lowercase for all .py filenames; Windows won't be able to discern capitalization anyway.

(Resolved review threads on examples/common/execution.py, nncf/structures.py, nncf/utils.py, tests/quantization/test_hawq_precision_init.py)
@mkaglins marked this pull request as ready for review and requested a review from a team, April 28, 2021 09:27
(Resolved review threads on examples/classification/main.py, nncf/pruning/filter_pruning/global_ranking/LeGR.py, nncf/pruning/filter_pruning/algo.py)
@0de554K (Contributor) commented Jun 29, 2021:

Jenkins please stop all builds

@0de554K (Contributor) commented Jun 29, 2021:

Jenkins please retry a build

@0de554K (Contributor) commented Jun 29, 2021:

Jenkins please stop all builds

@0de554K (Contributor) commented Jun 29, 2021:

Jenkins please retry a build

1 similar comment

@0de554K (Contributor) commented Jun 29, 2021:

Jenkins please retry a build

@0de554K (Contributor) commented Jun 29, 2021:

Jenkins please stop all builds

@0de554K (Contributor) commented Jun 29, 2021:

Jenkins please retry a build

@mkaglins (Contributor, Author)

Jenkins please retry a build

@0de554K (Contributor) commented Jul 1, 2021:

Jenkins please retry a build

1 similar comment
@mkaglins (Contributor, Author) commented Jul 1, 2021:

Jenkins please retry a build

@mkaglins (Contributor, Author) commented Jul 1, 2021:

Jenkins please stop all builds

@mkaglins (Contributor, Author) commented Jul 2, 2021:

Ticket for documentation update: 59183
Ticket for tests of distribution callbacks in case of multiple-use: 59185

(Resolved review threads on nncf/torch/pruning/base_algo.py, nncf/torch/pruning/filter_pruning/algo.py, nncf/torch/pruning/filter_pruning/global_ranking/legr.py, nncf/torch/structures.py)
@vshampor merged commit b520a1f into openvinotoolkit:develop on Jul 5, 2021
@mkaglins (Contributor, Author) commented Jul 5, 2021:

Results of reproducing this algo after all merges:

CIFAR-100:

MobileNetV2 (CIFAR-100), original acc = 68.61%

Algo \ PR        0.4      0.6      0.8
FP + geomedian   68.279   61.388   54.647
LeGR (200)       68.149   68.980   68.109

ResNet-50 (CIFAR-100), original acc = 78.51%

Algo \ PR        0.4      0.6      0.8
FP + geomean     77.885   77.294   74.038
LeGR (200)       78.466   77.143   76.112

ImageNet:

ResNet-18, PR = 30%:

Original   FP + geomean   LeGR
69.76      68.72          69.648

@MaximProshin (Collaborator)

@mkaglins, can you please update the table above with:

  • the FP32 accuracy
  • the pruning rate for ResNet-18 (I assume it's 30%)?

@mkaglins (Contributor, Author) commented Jul 7, 2021:

> @mkaglins, can you please update the table above with:
>
>   • the FP32 accuracy
>   • the pruning rate for ResNet-18 (I assume it's 30%)?

Done.

@@ -136,11 +141,50 @@ def __init__(self, target_model: NNCFNetwork,

self.weights_normalizer = tensor_l2_normalizer # for all weights in common case
self.filter_importance = FILTER_IMPORTANCE_FUNCTIONS.get(params.get('weight_importance', 'L2'))
Review comment (Contributor):

(Issue 59470) @mkaglins, BTW, torch still expects the weight_importance attribute
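The `FILTER_IMPORTANCE_FUNCTIONS.get(params.get('weight_importance', 'L2'))` lookup in the diff above is a dict-based registry with a default key. A minimal sketch of the pattern (the importance functions here are simplified stand-ins, not NNCF's actual implementations):

```python
import math

# Simplified stand-ins for per-filter importance functions.
FILTER_IMPORTANCE_FUNCTIONS = {
    'L1': lambda filt: sum(abs(x) for x in filt),
    'L2': lambda filt: math.sqrt(sum(x * x for x in filt)),
}

params = {}  # "params" config section with no "weight_importance" key set
importance = FILTER_IMPORTANCE_FUNCTIONS.get(
    params.get('weight_importance', 'L2'))  # falls back to the 'L2' default

print(importance([3.0, 4.0]))  # -> 5.0 (L2 norm of the filter weights)
```

Note that `.get` on the outer dict returns None for a name that is not registered, so callers need to handle unknown keys explicitly.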

7 participants