
Make PyTorch code-base clang-tidy compliant #56892

Closed

Conversation

malfet (Contributor) commented Apr 25, 2021

This is an automatic change generated by the following script:

```
#!/usr/bin/env python3
from subprocess import check_output, check_call
import os

def get_compiled_files_list():
    # Collect the files that are actually compiled, as recorded in the
    # CMake-generated compilation database.
    import json
    with open("build/compile_commands.json") as f:
        data = json.load(f)
    files = [os.path.relpath(node['file']) for node in data]
    for idx, fname in enumerate(files):
        # Map arch-default kernel copies generated under build/ back to their sources.
        if fname.startswith('build/') and fname.endswith('.DEFAULT.cpp'):
            files[idx] = fname[len('build/'):-len('.DEFAULT.cpp')]
    return files


def run_clang_tidy(fname):
    # Run clang-tidy on a single file and commit any NOLINT stubs it inserted.
    check_call(["python3", "tools/clang_tidy.py", "-c", "build", "-x", fname, "-s"])
    changes = check_output(["git", "ls-files", "-m"])
    if len(changes) == 0:
        return
    check_call(["git", "commit", "--all", "-m", f"NOLINT stubs for {fname}"])


def main():
    git_files = check_output(["git", "ls-files"]).decode("ascii").split("\n")
    compiled_files = get_compiled_files_list()
    for idx, fname in enumerate(git_files):
        # Skip files that are not part of the build or live under caffe2/contrib/aten/.
        if fname not in compiled_files:
            continue
        if fname.startswith("caffe2/contrib/aten/"):
            continue
        print(f"[{idx}/{len(git_files)}] Processing {fname}")
        run_clang_tidy(fname)

if __name__ == "__main__":
    main()
```
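
For reference, here is a minimal illustration (not part of the PR; the path below is only an example) of the rewrite `get_compiled_files_list` performs: entries in `compile_commands.json` that live under `build/` and end in `.DEFAULT.cpp` are generated arch-default copies of CPU kernels, and stripping the prefix and suffix recovers the original source path.

```
# Hypothetical example of the build-path -> source-path rewrite used above.
fname = "build/aten/src/ATen/native/cpu/Activation.cpp.DEFAULT.cpp"
if fname.startswith("build/") and fname.endswith(".DEFAULT.cpp"):
    fname = fname[len("build/"):-len(".DEFAULT.cpp")]
assert fname == "aten/src/ATen/native/cpu/Activation.cpp"
```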

malfet requested a review from a team on April 25, 2021 18:37
facebook-github-bot (Contributor) commented Apr 25, 2021

💊 CI failures summary and remediations

As of commit bb6a28a (more details on the Dr. CI page):


  • 1/1 failures possibly* introduced in this PR
    • 1/1 non-scanned failure(s)

ci.pytorch.org: 1 failed


This comment was automatically generated by Dr. CI.

facebook-github-bot (Contributor) commented

@malfet has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

codecov bot commented Apr 26, 2021

Codecov Report

Merging #56892 (ede758a) into master (e97c17a) will increase coverage by 0.00%.
The diff coverage is n/a.

❗ Current head ede758a differs from the pull request's most recent head 31b3968. Consider uploading reports for commit 31b3968 to get more accurate results.

```
@@           Coverage Diff           @@
##           master   #56892   +/-   ##
=======================================
  Coverage   77.20%   77.21%
=======================================
  Files        1948     1948
  Lines      193573   193572    -1
=======================================
+ Hits       149455   149467   +12
+ Misses      44118    44105   -13
```

malfet force-pushed the malfet/make-all-cpp-files-clang-tidy-compliant branch from 31b3968 to c970a5f on April 27, 2021 19:02
facebook-github-bot pushed a commit that referenced this pull request Apr 28, 2021
Summary:
Attempts to call clang-tidy on any source file in
`aten/src/ATen/native/cpu` would fail with a series of errors like
```
/Users/nshulga/git/pytorch-worktree/aten/src/ATen/native/cpu/Activation.cpp:637:1: warning: variable 'REGISTER_DISPATCH' is non-const and globally accessible, consider making it const [cppcoreguidelines-avoid-non-const-global-variables]
/Users/nshulga/git/pytorch-worktree/aten/src/ATen/native/cpu/Activation.cpp:638:1: error: C++ requires a type specifier for all declarations [clang-diagnostic-error]
REGISTER_DISPATCH(log_sigmoid_backward_cpu_stub, &log_sigmoid_backward_cpu_kernel);
```
because those macros are only defined for the CPU-arch-specific compilation of the above-mentioned files.
Fix this by introducing a `map_filename` function that maps a source
file to its copy in the `build` folder, runs clang-tidy over the copy, and
then maps the results back (a rough sketch of such a mapping follows this commit message).

Find it while working on #56892

Pull Request resolved: #57037

Reviewed By: walterddr

Differential Revision: D28033760

Pulled By: malfet

fbshipit-source-id: b67cd007000574ecc165ab4b285c0c102cbceadd
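
For illustration only, a minimal sketch of what such a mapping could look like. This is an assumption about the approach, not the code actually added in #57037: the function names and the `.DEFAULT.cpp` convention below mirror the script in the PR description rather than the real `tools/clang_tidy.py` implementation.

```
import os

def map_filename(build_folder, fname):
    # Hypothetical: the REGISTER_DISPATCH macros for CPU kernels are only defined
    # in the arch-specific copies under build/, so point clang-tidy at that copy.
    if fname.startswith("aten/src/ATen/native/cpu/"):
        return os.path.join(build_folder, fname + ".DEFAULT.cpp")
    return fname

def map_filename_back(build_folder, fname):
    # Reverse mapping: strip the build prefix and the .DEFAULT.cpp suffix so any
    # reported diagnostics refer to the original source file.
    prefix = build_folder.rstrip("/") + "/"
    suffix = ".DEFAULT.cpp"
    if fname.startswith(prefix) and fname.endswith(suffix):
        return fname[len(prefix):-len(suffix)]
    return fname
```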
malfet force-pushed the malfet/make-all-cpp-files-clang-tidy-compliant branch from 604cb3c to 0448cd7 on April 28, 2021 16:22
malfet force-pushed the malfet/make-all-cpp-files-clang-tidy-compliant branch from 0448cd7 to 1595f90 on April 28, 2021 18:11
facebook-github-bot (Contributor) commented

@malfet has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

facebook-github-bot (Contributor) commented

@malfet merged this pull request in 4cb534f.

malfet deleted the malfet/make-all-cpp-files-clang-tidy-compliant branch on April 28, 2021 21:17
krshrimali pushed a commit to krshrimali/pytorch that referenced this pull request May 19, 2021
(Same commit summary and metadata as the #57037 commit above.)
krshrimali pushed a commit to krshrimali/pytorch that referenced this pull request May 19, 2021
Summary:
This is an automatic change generated by the script shown in this pull request's description above.

Pull Request resolved: pytorch#56892

Reviewed By: H-Huang

Differential Revision: D27991944

Pulled By: malfet

fbshipit-source-id: 5415e1eb2c1b34319a4f03024bfaa087007d7179