
Mkaglins/pruning masks applying #985

Conversation

mkaglins
Contributor

@mkaglins mkaglins commented Oct 17, 2021

Changes

  • FilterPruningBlock was changed to apply masks to the weights, not only store the mask
  • Masks are now applied only on the forward pass; all zeroing of the original model weights (in apply_mask) was removed
  • To apply masks to Batch/Group Norm layers, these layers are wrapped as NNCF modules and a FilterPruningBlock is added to them. Masks are propagated to these layers from the convolutions.
  • The UpdateWeightAndBiasPruning pre-layer operation was added to consistently prune weights and biases; see the sketch after this list (@vshampor please take a look at these changes)
  • The zero_grad attribute was deleted as it is no longer needed
    @AlexanderDokuchaev @daniil-lyakhov Please take a look at these changes in pruning.
    cc @evgeniya-egupova
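
A minimal sketch of the mask-on-forward idea (illustrative only, not the actual NNCF code; the class name, the pre-op signature and the buffer name are assumptions):

    import torch
    from torch import nn

    class WeightAndBiasMaskingOp(nn.Module):  # hypothetical name, illustrative only
        """Applies a binary per-filter mask to weight and bias on every forward pass."""
        def __init__(self, num_filters: int):
            super().__init__()
            # The mask is a buffer, not a parameter: it is updated by the pruning
            # algorithm, never by the optimizer, and the stored weights stay intact.
            self.register_buffer('binary_filter_mask', torch.ones(num_filters))

        def forward(self, weight: torch.Tensor, bias: torch.Tensor):
            # Broadcast the per-filter mask over the output-channel dimension.
            mask = self.binary_filter_mask.view(-1, *([1] * (weight.dim() - 1)))
            masked_weight = weight * mask
            masked_bias = bias * self.binary_filter_mask if bias is not None else None
            return masked_weight, masked_bias

Since only the tensors used by the forward pass are masked, resetting or changing the mask later makes the original filters usable again, which is what enables re-pruning.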

Reason for changes

The new mask application mechanism allows re-pruning of the model, since the weights are no longer pruned permanently.

Related tickets

Ticket ID 53524

Tests

A couple of tests in test_algo were changed to test the new mechanism (instead of checking that the weights are zeroed, zeroing of the convolution outputs is checked).
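
For illustration, a check of this kind could look roughly as follows (a sketch under assumptions, not the actual test_algo code; it reuses the masking op sketched above):

    import torch
    import torch.nn.functional as F

    def check_pruned_output_channels(conv: torch.nn.Conv2d, masking_op, x: torch.Tensor):
        # Apply the mask on the fly, exactly as the forward pass would.
        masked_weight, masked_bias = masking_op(conv.weight, conv.bias)
        out = F.conv2d(x, masked_weight, masked_bias,
                       stride=conv.stride, padding=conv.padding)
        pruned = (masking_op.binary_filter_mask == 0).nonzero().flatten()
        if pruned.numel() > 0:
            # Outputs of pruned filters are zero...
            assert torch.all(out[:, pruned] == 0)
            # ...while the stored weights themselves are not zeroed in place.
            assert not torch.all(conv.weight[pruned] == 0)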

nncf/torch/module_operations.py (outdated review comments, resolved)
nncf/torch/pruning/base_algo.py (outdated review comments, resolved)


-    def _conv_forward_proxy(self, input_, weight, padding_value):
+    def _conv_forward_proxy(self, input_, weight, bias, padding_value):
Collaborator

Is it really needed to use proxy_weight and proxy_bias in _custom_forward_fn, instead of self in _conv_forward_proxy?

Contributor Author

Interesting question, I will discuss it with @vshampor.

Contributor

@ljaljushkin ljaljushkin Oct 21, 2021

I think it's really needed; otherwise it would take the original weight/bias directly from the conv module, not from the proxy one (ProxyModule) with the compressed params.
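
To make the point concrete, a simplified sketch of the flow (the pre_ops container, padding_value attribute and exact call order are assumptions, not copied from nncf/torch/layers.py):

    def _custom_forward_fn(self, input_):
        proxy_weight, proxy_bias = self.weight, self.bias
        # Pre-ops (e.g. the pruning mask op) produce the compressed weight/bias.
        for op in self.pre_ops.values():  # hypothetical pre-op container
            proxy_weight, proxy_bias = op(proxy_weight, proxy_bias)
        # The processed tensors are passed explicitly; reading self.weight/self.bias
        # inside _conv_forward_proxy would silently bypass the pre-ops.
        return self._conv_forward_proxy(input_, proxy_weight, proxy_bias, self.padding_value)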

nncf/torch/layers.py (outdated review comments, resolved)
nncf/torch/pruning/filter_pruning/layers.py (outdated review comments, resolved)
Comment on lines +658 to 661
# 2. Set the masks for Batch/Group Norms
types_to_apply_mask = ['group_norm']
if self.prune_batch_norms:
types_to_apply_mask.append('batch_norm')
Collaborator

@daniil-lyakhov daniil-lyakhov Oct 19, 2021

Can we keep the nodes we wrapped during _prune_weights in a way that conv layers and batch/group norms are kept separately? Then here we could just take these nodes by id, for example, and update the masks after the mask propagation algorithm.

Contributor Author

Actually, Batch/Group Norm nodes are already stored separately in self._pruned_norms_operators.
So potentially we can iterate over self._pruned_norms_operators and update only these nodes.
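A rough sketch of what that update could look like (everything here except _pruned_norms_operators is an assumption, not verified against the actual code):

    # For every norm that was wrapped as an NNCF module, copy the mask that was
    # propagated to it from the preceding convolution into its pruning operand.
    for node, pruning_op in self._pruned_norms_operators:
        propagated_mask = node.data.get('output_mask')  # hypothetical storage of the propagated mask
        if propagated_mask is not None:
            pruning_op.binary_filter_pruning_mask = propagated_mask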

Contributor Author

@daniil-lyakhov do you think that would be the clearer solution?

mkaglins and others added 4 commits October 19, 2021 19:18
Co-authored-by: Daniil Lyakhov <daniil.lyakhov@intel.com>
Co-authored-by: Daniil Lyakhov <daniil.lyakhov@intel.com>
@daniil-lyakhov daniil-lyakhov self-assigned this Oct 25, 2021
@daniil-lyakhov
Collaborator

Moved to #994
