Fix problems associated with analyzer timing #24564
Conversation
LGTM
Who would have thought timing is so hard. Nice find!
@@ -43,7 +43,7 @@ internal class AnalyzerExecutor
     private readonly Func<DiagnosticAnalyzer, bool> _shouldSkipAnalysisOnGeneratedCode;
     private readonly Func<Diagnostic, DiagnosticAnalyzer, Compilation, CancellationToken, bool> _shouldSuppressGeneratedCodeDiagnostic;
     private readonly Func<SyntaxTree, TextSpan, bool> _isGeneratedCodeLocation;
-    private readonly ConcurrentDictionary<DiagnosticAnalyzer, TimeSpan> _analyzerExecutionTimeMapOpt;
+    private readonly ConcurrentDictionary<DiagnosticAnalyzer, StrongBox<long>> _analyzerExecutionTimeMapOpt;
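The switch from `TimeSpan` values to `StrongBox<long>` values lets the executor accumulate elapsed ticks into a shared mutable box (e.g. with `Interlocked.Add`) instead of replacing an immutable dictionary value on every measurement, which under concurrency requires a compare-and-swap loop over entries. The following is a minimal sketch of the same pattern in Java, not the Roslyn code itself; the class and method names are hypothetical, with `AtomicLong` playing the role of the boxed `long`:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: accumulate per-analyzer execution time as raw ticks
// in a mutable boxed counter, analogous to the PR's
// ConcurrentDictionary<DiagnosticAnalyzer, StrongBox<long>> in C#.
// Replacing an immutable value (TimeSpan) per measurement would force a
// compare-and-swap update of the map entry; a shared mutable long needs
// only a single atomic add on the box.
class AnalyzerTimeMap {
    private final ConcurrentHashMap<String, AtomicLong> elapsedTicks =
            new ConcurrentHashMap<>();

    // Record elapsed ticks for one analyzer invocation.
    void addElapsed(String analyzerName, long ticks) {
        // The box is created once per analyzer; afterwards every update is
        // a lock-free atomic add with no allocation.
        elapsedTicks.computeIfAbsent(analyzerName, k -> new AtomicLong())
                    .addAndGet(ticks);
    }

    // Read the accumulated total (0 if the analyzer never ran).
    long getElapsed(String analyzerName) {
        AtomicLong box = elapsedTicks.get(analyzerName);
        return box == null ? 0L : box.get();
    }
}
```

A usage note: because the box stores a raw `long` rather than a duration type, callers must agree on the unit (ticks here), which is exactly the concern raised in the review comment below.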
Nit: add a comment saying what unit this is in, now that we've lost the TimeSpan. (I normally hate any non-TimeSpan, but obviously in this case you're doing it for a Good Reason™.)
I normally hate any non-TimeSpan...
Me too.
Approved to merge.
Customer scenario
A customer runs the compiler with analyzer timing enabled (either through the API or with a compiler flag on the command line). The timing operations incur substantial overhead on concurrent builds, and when one or more analyzers allocate large amounts of data, the timing results are inaccurate.
Bugs this fixes
DevDiv 560657 (internal mirror of this pull request)
Workarounds, if any
Risk
Low.
Performance impact
Performance in analyzer-intensive scenarios improves by up to 20%. The change also corrects timing accuracy: before the fix, timing error rates of over 150% were observed for some analyzers.
Is this a regression from a previous update?
No, but the work to reduce overhead in this scenario revealed another problem at the same location. This bug was limiting our ability to observe the expected gains from previous work.
Root cause analysis
The previous work in this area was focused on allocation reductions (#23582), which were achieved as expected by #23621. The problem corrected by this pull request is a CPU time problem revealed by profiling execution times (#23583).
How was the bug found?
Profiling per #23583.
Test documentation updated?
N/A