[Return-types #5] Finite-difference update #2966
Conversation
Hello. You may have forgotten to update the changelog!
Codecov Report
@@           Coverage Diff            @@
##           master    #2966    +/-   ##
========================================
  Coverage   99.67%   99.67%
========================================
  Files         273      273
  Lines       23349    23450    +101
========================================
+ Hits        23273    23374    +101
  Misses         76       76
Should we use qml.enable_return() in tests?
Co-authored-by: Albert Mitjans <a.mitjanscoma@gmail.com>
…ennylane into return_finite_diff
Some logic may be specific only to qubits. Maybe worth adding a TODO there.
Overall looks great! 💯 Approving with the condition that the CV case is checked and if found to be relevant it's resolved either via a TODO or via a test case.
Great work @rmoyard 💯 ! I still have a couple of suggestions on how we can clean up the code a bit. In particular, it looks like we can reuse a lot of the code from the new parameter-shift transform here.
I'm giving my approval now but this is conditioned on these comments being addressed.
…ennylane into return_finite_diff
👏
Co-authored-by: antalszava <antalszava@gmail.com>
Context:
Update the finite-diff gradient transform to reflect the new return-type changes, specifically the return format for multiple measurements.
Description of the Change:
Instead of returning a single ragged array containing the results for all measurements, multiple measurements now return a tuple of results (m0, m1, ...). The derivative therefore has the form (m0: (p0, p1, ...), m1: (p0, p1, ...), ...), where each tuple of parameters corresponds to a specific measurement.
This PR updates the finite_diff gradient transform.
Example:
((array(-0.5167068), array(5.55111512e-10)), (array([-0.23169865, -0.02665475, 0.02665475, 0.23169865]), array([ 0.28230648, -0.28230648, -0.02187647, 0.02187647])))
where we obtain the same results as with JAX and backpropagation:
((DeviceArray(-0.5167068, dtype=float32, weak_type=True), DeviceArray(-2.9802322e-08, dtype=float32, weak_type=True)), (DeviceArray([-0.23169865, -0.02665475, 0.02665475, 0.23169865], dtype=float32, weak_type=True), DeviceArray([ 0.28230646, -0.2823065 , -0.02187647, 0.02187647], dtype=float32, weak_type=True)))
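To make the new return structure concrete, here is a minimal self-contained sketch (plain NumPy, not PennyLane itself; the `circuit` function is a hypothetical stand-in for a QNode with one scalar and one array-valued measurement) of a central finite-difference transform that returns a tuple over measurements, each entry itself a tuple over trainable parameters:

```python
import numpy as np

def circuit(params):
    """Stand-in for a QNode with two measurements (m0, m1)."""
    x, y = params
    m0 = np.cos(x) * np.cos(y)             # scalar-valued measurement
    m1 = np.array([np.sin(x), np.sin(y)])  # array-valued measurement
    return m0, m1

def finite_diff(f, params, h=1e-6):
    """Central finite differences with the new return structure:
    (m0: (p0, p1, ...), m1: (p0, p1, ...), ...)."""
    num_measurements = len(f(params))
    jac = []
    for m in range(num_measurements):
        per_param = []
        for i in range(len(params)):
            shift = np.zeros_like(params)
            shift[i] = h
            plus = f(params + shift)[m]
            minus = f(params - shift)[m]
            per_param.append((plus - minus) / (2 * h))
        jac.append(tuple(per_param))
    return tuple(jac)

params = np.array([0.5, 0.1])
jac = finite_diff(circuit, params)
# jac[0] holds d(m0)/d(p0) and d(m0)/d(p1); jac[1] the same for m1,
# so each inner entry keeps the shape of its measurement.
```

The key point is that no ragged array is ever built: each measurement's derivatives keep their natural shape inside the outer tuple, which is what makes the structure compatible across interfaces.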
Benefits:
Clear and intuitive return system for multiple measurements.
TODO
[sc-25812]