
[Return-types #3] QNode integration, enable and disable functions #2860

Merged 131 commits on Aug 5, 2022

Commits
a38477a
draft
antalszava Jul 13, 2022
f3c7f9c
more
antalszava Jul 14, 2022
f9c8767
Merge branch 'master' into refactor_return_shot_vector
antalszava Jul 14, 2022
61b932e
First working version
rmoyard Jul 14, 2022
d6125df
fix a bug; shot vector Counts returns tuple; Autograd interface update
antalszava Jul 14, 2022
6e3b347
Merge branch 'master' into refactor_return_shot_vector
antalszava Jul 14, 2022
7d32c4e
test file
antalszava Jul 14, 2022
8cee2e1
single_measurement
antalszava Jul 14, 2022
e1b7fb1
more tests
antalszava Jul 14, 2022
966cf99
test
rmoyard Jul 15, 2022
39865d4
first multi-meas attempt
antalszava Jul 15, 2022
e5d49b6
multi_measures
antalszava Jul 16, 2022
903e8b3
Update
rmoyard Jul 18, 2022
06579b2
Update
rmoyard Jul 18, 2022
61040ba
Merge branch 'return_types' into refactor_return_shot_vector
rmoyard Jul 18, 2022
8f5053a
merge
rmoyard Jul 18, 2022
005a89f
Update
rmoyard Jul 18, 2022
bf55fe4
Merge branch 'master' into return_types
rmoyard Jul 18, 2022
941da8b
Merge branch 'return_types' into refactor_return_shot_vector
rmoyard Jul 18, 2022
2a615b5
update
rmoyard Jul 19, 2022
9eb4d32
Merge branch 'refactor_return_shot_vector' of https://github.com/Penn…
rmoyard Jul 19, 2022
b86f752
Update
rmoyard Jul 19, 2022
08d9c65
Merge branch 'return_types' of https://github.com/PennyLaneAI/pennyla…
rmoyard Jul 19, 2022
c934fa9
Remove autograd_changes
rmoyard Jul 19, 2022
7f3608c
Comment
rmoyard Jul 20, 2022
465e3d4
Update
rmoyard Jul 20, 2022
57a22d3
Coverage
rmoyard Jul 20, 2022
d31b786
Typo
rmoyard Jul 20, 2022
b5f510c
Add tests
rmoyard Jul 20, 2022
9a1e963
Merge branch 'return_types' into refactor_return_shot_vector
rmoyard Jul 20, 2022
8d8d456
Move tests
rmoyard Jul 20, 2022
8521d50
Revert autograd
rmoyard Jul 20, 2022
48d481a
Unecessary change
rmoyard Jul 20, 2022
b28fdfd
Solve issue
rmoyard Jul 20, 2022
47fcae9
Add statistics new
rmoyard Jul 20, 2022
392dcd7
Merge branch 'master' into return_types
rmoyard Jul 21, 2022
5a15ee1
Merge branch 'master' into return_types
rmoyard Jul 21, 2022
2f0797f
Update tests/test_new_return_types.py
rmoyard Jul 22, 2022
bec1648
Update pennylane/_qubit_device.py
rmoyard Jul 22, 2022
b3cb3a2
:memo: Update from review
rmoyard Jul 22, 2022
8a08be1
:white_check_mark: Update test
rmoyard Jul 22, 2022
2ec1f53
Merge branch 'return_types' of https://github.com/PennyLaneAI/pennyla…
rmoyard Jul 22, 2022
eb471cc
:sparkles: QNode integration
rmoyard Jul 25, 2022
8e0d25f
:recycle: Remove global
rmoyard Jul 25, 2022
9b49eed
Merge branch 'return_types' into refactor_return_shot_vector
antalszava Jul 25, 2022
c6fcf73
[skip-ci]
antalszava Jul 25, 2022
fbda016
:white_check_mark: Add tests for QNode integration
rmoyard Jul 25, 2022
7d344c0
more test cases for probs and sample multi (proj; tensor product)
antalszava Jul 25, 2022
bca05b7
more test cases for probs and sample multi (proj; tensor product)
antalszava Jul 25, 2022
5c04ab0
multi-measure: Counts tests
antalszava Jul 25, 2022
0a4f288
test docstrings
antalszava Jul 25, 2022
6f20785
:sparkles: Support Autograd Jacobian
rmoyard Jul 26, 2022
3623a98
Merge branch 'refactor_return_shot_vector' into return_types_qnode
rmoyard Jul 26, 2022
2d59341
adjust test
antalszava Jul 26, 2022
c5d842d
resolve
antalszava Jul 26, 2022
05982e3
Merge branch 'master' into return_types
antalszava Jul 26, 2022
07e94d7
Merge branch 'return_types' into refactor_return_shot_vector
antalszava Jul 26, 2022
8d43f30
no more is_sampled; probs sample with obs test
antalszava Jul 26, 2022
2dd9717
probs and sample test
antalszava Jul 26, 2022
bc9009b
:sparkles: All interface backprop Jacobian
rmoyard Jul 26, 2022
4b8b579
:white_check_mark: Update tests
rmoyard Jul 26, 2022
74a356d
more tests
antalszava Jul 26, 2022
1de79a2
probs tests
antalszava Jul 26, 2022
e7f9a2c
more more tests
antalszava Jul 27, 2022
fb394bf
refactor
antalszava Jul 27, 2022
bfe3a65
refactor tests
antalszava Jul 27, 2022
0a30c53
update cov; add vanilla counts test (xfail) for finite shots
antalszava Jul 27, 2022
3993fbd
restore statistics docstring
antalszava Jul 27, 2022
5c249de
Merge branch 'refactor_return_shot_vector' into return_types_qnode
rmoyard Jul 27, 2022
3759fd6
Update tests/test_new_return_types.py
rmoyard Jul 27, 2022
8ece6e0
process counts for finite shots
antalszava Jul 27, 2022
82e42ac
create shot_vec_statistics aux method
antalszava Jul 27, 2022
7219640
Apply suggestions from code review
antalszava Jul 27, 2022
0d86242
fix
antalszava Jul 27, 2022
4a5224e
suggestion
antalszava Jul 27, 2022
77286a3
refactors
antalszava Jul 27, 2022
ae35bc5
revert to have squeeze in expval
antalszava Jul 27, 2022
85a8a80
docstring and more tests
antalszava Jul 27, 2022
5f50c11
Update pennylane/interfaces/execution.py
rmoyard Jul 27, 2022
66b7dfc
:white_check_mark: Add default mixed to tests
rmoyard Jul 27, 2022
73e5e3c
Merge branch 'master' into return_types
rmoyard Jul 27, 2022
3a13558
Merge branch 'refactor_return_shot_vector' into return_types_qnode
rmoyard Jul 27, 2022
fbd3120
:white_check_mark: Upddate test due to merge issues
rmoyard Jul 27, 2022
3dfd26e
:bug: Bug introduced due to checking backprop
rmoyard Jul 27, 2022
a656083
:bug: Bug introduced due to checking backprop
rmoyard Jul 27, 2022
3541fb1
Merge branch 'master' into return_types
rmoyard Jul 28, 2022
72f7757
Merge branch 'master' into return_types
rmoyard Jul 28, 2022
84ab21f
:white_check_mark: Separate the tape and qnode tests
rmoyard Jul 28, 2022
badddc1
:recycle: Refactor after review
rmoyard Jul 29, 2022
2544b7a
Merge branch 'master' into return_types
antalszava Jul 29, 2022
7a32588
[Return-types #2] Refactor return types (shot vector cases) (#2815)
antalszava Jul 29, 2022
9dccc8c
resolve
antalszava Jul 29, 2022
80a7d63
Merge branch 'master' into return_types_qnode
rmoyard Aug 2, 2022
828e7e3
Merge branch 'master' into return_types_qnode
rmoyard Aug 2, 2022
dae1e8d
Update
rmoyard Aug 2, 2022
d738dec
:white_check_mark: Update counts
rmoyard Aug 2, 2022
c6a6ca6
Merge branch 'master' into return_types_qnode
rmoyard Aug 2, 2022
7e4819d
:sparkles: Support Tf autograph
rmoyard Aug 3, 2022
5fd794a
:white_check_mark: Update test
rmoyard Aug 3, 2022
8205c80
Merge branch 'master' into return_types_qnode
rmoyard Aug 3, 2022
e3bf149
:sparkles: Fix one element list
rmoyard Aug 3, 2022
64f4ede
Merge branch 'return_types_qnode' of https://github.com/PennyLaneAI/p…
rmoyard Aug 3, 2022
9c488b0
Typo
rmoyard Aug 3, 2022
aa88ef0
Merge branch 'master' into return_types_qnode
rmoyard Aug 3, 2022
3ff4208
Default mixed asarray
rmoyard Aug 3, 2022
0793e99
Merge branch 'return_types_qnode' of https://github.com/PennyLaneAI/p…
rmoyard Aug 3, 2022
fcc12cd
Trigger CI
rmoyard Aug 3, 2022
11b5e86
Trigger CI
rmoyard Aug 4, 2022
c5ef086
:white_check_mark: More tests
rmoyard Aug 4, 2022
f58f2ef
Typo
rmoyard Aug 4, 2022
af935d1
Trigger CI
rmoyard Aug 4, 2022
cb331d6
Update tests/math/test_functions.py
rmoyard Aug 4, 2022
cd4d135
Update tests/math/test_functions.py
rmoyard Aug 4, 2022
3cb7630
:white_check_mark: Rework test suite
rmoyard Aug 4, 2022
d12b3e3
:white_check_mark: Test interfaces forward part
rmoyard Aug 5, 2022
690e7d9
Merge branch 'return_types_qnode' of https://github.com/PennyLaneAI/p…
rmoyard Aug 5, 2022
6bb3591
Update tests/math/test_functions.py
rmoyard Aug 5, 2022
e295776
Typo
rmoyard Aug 5, 2022
7dfcc89
Merge branch 'return_types_qnode' of https://github.com/PennyLaneAI/p…
rmoyard Aug 5, 2022
526c831
Merge branch 'master' into return_types_qnode
rmoyard Aug 5, 2022
ebec1d7
More
rmoyard Aug 5, 2022
0147c49
Merge branch 'master' into return_types_qnode
rmoyard Aug 5, 2022
a1e3d3b
Add runner
rmoyard Aug 5, 2022
efc51c2
Merge branch 'return_types_qnode' of https://github.com/PennyLaneAI/p…
rmoyard Aug 5, 2022
167e717
Return marker and pragma
rmoyard Aug 5, 2022
dd09548
Merge branch 'master' into return_types_qnode
rmoyard Aug 5, 2022
33a3e40
Update .github/workflows/tests.yml
rmoyard Aug 5, 2022
5111af2
Merge branch 'master' into return_types_qnode
rmoyard Aug 5, 2022
a42e31d
Trigger CI
rmoyard Aug 5, 2022
c0c11f8
Merge branch 'return_types_qnode' of https://github.com/PennyLaneAI/p…
rmoyard Aug 5, 2022
45d40da
Trigger CI
rmoyard Aug 5, 2022
52 changes: 51 additions & 1 deletion .github/workflows/tests.yml
@@ -193,6 +193,56 @@ jobs:
name: qcut-coverage
path: ./coverage.xml

return-tests:
runs-on: ubuntu-latest

steps:
- name: Cancel Previous Runs
uses: styfle/cancel-workflow-action@0.4.1
with:
access_token: ${{ github.token }}

- uses: actions/checkout@v2
with:
fetch-depth: 2

- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: 3.8

- name: Install dependencies
run: |
python -m pip install --upgrade pip && pip install wheel --upgrade
pip install -r requirements-ci.txt --upgrade
pip install -r requirements-dev.txt --upgrade

- name: Install PyTorch
run: pip3 install torch==$TORCH_VERSION -f https://download.pytorch.org/whl/torch_stable.html

- name: Install TensorFlow
run: pip3 install tensorflow~=$TF_VERSION keras~=$TF_VERSION

- name: Install JAX
run: pip3 install jax jaxlib

- name: Install PennyLane
run: |
python setup.py bdist_wheel
pip install dist/PennyLane*.whl

- name: Run tests
run: |
python -m pytest tests/returntypes --cov=pennylane $COVERAGE_FLAGS -n auto

- name: Adjust coverage file for Codecov
run: bash <(sed -i 's/filename=\"/filename=\"pennylane\//g' coverage.xml)

- uses: actions/upload-artifact@v2
with:
name: return-coverage
path: ./coverage.xml

qchem-tests:
runs-on: ubuntu-latest

@@ -305,7 +355,7 @@ jobs:
path: ./coverage.xml

upload-to-codecov:
needs: [core-and-interface-tests, all-interfaces-tests, qcut-tests, qchem-tests, device-tests]
needs: [core-and-interface-tests, all-interfaces-tests, qcut-tests, return-tests, qchem-tests, device-tests]
runs-on: ubuntu-latest
steps:
- name: Checkout
12 changes: 6 additions & 6 deletions pennylane/interfaces/execution.py
@@ -598,12 +598,12 @@ def cost_fn(params, x):

if gradient_fn is None:
# don't unwrap if it's an interface device
# if "passthru_interface" in device.capabilities():
# return batch_fn(
# qml.interfaces.cache_execute(
# batch_execute, cache, return_tuple=False, expand_fn=expand_fn
# )(tapes)
# )
if "passthru_interface" in device.capabilities() or device.short_name == "default.mixed":
return batch_fn(
qml.interfaces.cache_execute(
batch_execute, cache, return_tuple=False, expand_fn=expand_fn
)(tapes)
)
with qml.tape.Unwrap(*tapes):
res = qml.interfaces.cache_execute(
batch_execute, cache, return_tuple=False, expand_fn=expand_fn
Expand Down
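The change above turns the previously commented-out short-circuit into live code: backprop ("passthru") devices, plus `default.mixed`, skip the tape-unwrapping step so the device's own autodiff framework can trace the parameters. A minimal sketch of the predicate being added (the helper name is ours, not PennyLane's API; `capabilities` and `short_name` mirror the device attributes used in the diff):

```python
# Hedged sketch of the branch added in pennylane/interfaces/execution.py:
# decide whether tape parameters should be left wrapped for the device's
# own autodiff, instead of being unwrapped to plain NumPy.
def should_skip_unwrap(capabilities: dict, short_name: str) -> bool:
    # Passthru devices trace parameters themselves, and default.mixed
    # is special-cased by this PR.
    return "passthru_interface" in capabilities or short_name == "default.mixed"
```

For example, `should_skip_unwrap({}, "default.mixed")` is true, while a plain `default.qubit` device with no passthru capability falls through to the `qml.tape.Unwrap` path.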
5 changes: 3 additions & 2 deletions pennylane/math/utils.py
@@ -404,7 +404,8 @@ def requires_grad(tensor, interface=None):

def in_backprop(tensor, interface=None):
"""Returns True if the tensor is considered to be in a backpropagation environment; it works for Autograd,
Tensorflow and Jax.
Tensorflow and Jax. Unlike :func:`~.requires_grad`, it does not merely check whether the tensor is differentiable, but
whether a gradient is actually being computed for it.

Args:
tensor (tensor_like): input tensor
@@ -420,7 +421,7 @@ def in_backprop(tensor, interface=None):
... print(requires_grad(x))
True

.. seealso:: :func:`~~requires_grad`
.. seealso:: :func:`~.requires_grad`
"""
interface = interface or get_interface(tensor)

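The distinction the updated docstring draws — static trainability versus an active gradient pass — can be illustrated with a plain-Python toy. This is not PennyLane code; every name below is illustrative:

```python
import contextlib

_BACKPROP_ACTIVE = False  # toy stand-in for "a gradient pass is running"


class Tensor:
    """Minimal tensor carrying a static requires_grad attribute."""

    def __init__(self, data, requires_grad=False):
        self.data = data
        self.requires_grad = requires_grad


def requires_grad(t):
    # Static check: is the tensor marked as differentiable?
    return t.requires_grad


def in_backprop(t):
    # Dynamic check: is a gradient actually being computed right now?
    return _BACKPROP_ACTIVE and t.requires_grad


@contextlib.contextmanager
def backprop_pass():
    # Toy analogue of being inside tf.GradientTape / qml.grad.
    global _BACKPROP_ACTIVE
    _BACKPROP_ACTIVE = True
    try:
        yield
    finally:
        _BACKPROP_ACTIVE = False
```

With this toy, a trainable tensor always satisfies `requires_grad`, but `in_backprop` only holds inside the `backprop_pass()` context — mirroring how the real helper behaves inside a TensorFlow `GradientTape` or an Autograd backwards pass.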
17 changes: 7 additions & 10 deletions pennylane/return_types.py
@@ -14,25 +14,22 @@
"""
Class and functions for activating, deactivating and checking the new return types system
"""
# pylint: disable=too-few-public-methods


class ReturnType:
"""Class to store the attribute `activated` which indicates if the new return type system is on. Default=False."""

activated = False
# pylint: disable=global-statement
__activated = False


def enable_return():
"""Function that turns on the new return type system."""
ReturnType.activated = True
global __activated
__activated = True


def disable_return():
"""Function that turns off the new return type system."""
ReturnType.activated = False
global __activated
__activated = False # pragma: no cover


def active_return():
"""Function that returns if the new return types system is activated."""
return ReturnType.activated
return __activated
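The refactor above replaces a one-attribute `ReturnType` class with a module-level flag guarded by `global`. The pattern, reproduced standalone (a sketch of the idea rather than the actual `pennylane/return_types.py` module; the flag is renamed here to avoid name-mangling confusion):

```python
# Hedged sketch of the module-level flag pattern adopted in the diff.
_activated = False  # default: new return type system is off


def enable_return():
    """Turn on the new return type system."""
    global _activated
    _activated = True


def disable_return():
    """Turn off the new return type system."""
    global _activated
    _activated = False


def active_return():
    """Report whether the new return type system is on."""
    return _activated
```

A module-level flag avoids the `too-few-public-methods` lint suppression the class version needed, at the cost of requiring `global` statements in the mutators.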
6 changes: 5 additions & 1 deletion tests/conftest.py
@@ -213,13 +213,17 @@ def pytest_collection_modifyitems(items, config):
if "qchem" in rel_path.parts:
mark = getattr(pytest.mark, "qchem")
item.add_marker(mark)
if "returntypes" in rel_path.parts:
mark = getattr(pytest.mark, "return")
item.add_marker(mark)

# Tests that do not have a specific suite marker are marked `core`
for item in items:
markers = {mark.name for mark in item.iter_markers()}
if (
not any(
elem in ["autograd", "torch", "tf", "jax", "qchem", "qcut", "all_interfaces"]
elem
in ["autograd", "torch", "tf", "jax", "qchem", "qcut", "all_interfaces", "return"]
for elem in markers
)
or not markers
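The collection hook above can be summarized as a pure function over a test's path and its existing markers (the helper name and return shape are ours, not pytest's API):

```python
from pathlib import PurePath

# Suite markers recognized by the conftest logic in the diff.
SUITE_MARKERS = {"autograd", "torch", "tf", "jax", "qchem", "qcut", "all_interfaces", "return"}


def markers_for(path, existing=()):
    """Return the suite markers a collected test should carry."""
    markers = set(existing)
    # Anything under a returntypes/ directory joins the "return" suite.
    if "returntypes" in PurePath(path).parts:
        markers.add("return")
    # Tests with no suite-specific marker fall back to "core".
    if not markers & SUITE_MARKERS:
        markers.add("core")
    return markers
```

So a file under `tests/returntypes/` is collected into the `return` suite, while an unmarked test elsewhere defaults to `core`.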
19 changes: 10 additions & 9 deletions tests/math/test_functions.py
@@ -1105,7 +1105,7 @@ class TestInBackprop:

@pytest.mark.slow
def test_jax(self):
"""JAX DeviceArrays differentiability depends on the argnums argument"""
"""The value of in_backprop for JAX DeviceArrays depends on the argnums argument"""
res = None

def cost_fn(t, s):
@@ -1126,21 +1126,22 @@ def cost_fn(t, s):
assert res == [True, True]

def test_autograd_backwards(self):
"""Autograd trainability corresponds to the requires_grad attribute during the backwards pass."""
"""The value of in_backprop for Autograd tensors corresponds to the requires_grad attribute during the backwards pass."""
res = None

def cost_fn(t, s):
nonlocal res
res = [fn.in_backprop(t), fn.in_backprop(s)]
return np.sum(t * s)

t = np.array([1.0, 2.0, 3.0])
s = np.array([-2.0, -3.0, -4.0])
t = np.array([1.0, 2.0, 3.0], requires_grad=True)
s = np.array([-2.0, -3.0, -4.0], requires_grad=True)

qml.grad(cost_fn)(t, s)
assert res == [True, True]

t.requires_grad = False
s.requires_grad = True
qml.grad(cost_fn)(t, s)
assert res == [False, True]

@@ -1156,7 +1157,7 @@ def cost_fn(t, s):
assert res == [False, False]

def test_tf(self):
"""TensorFlow tensors will True *if* they are being watched by a gradient tape"""
"""The value of in_backprop for TensorFlow tensors is True *if* they are being watched by a gradient tape"""
t1 = tf.Variable([1.0, 2.0])
t2 = tf.constant([1.0, 2.0])
assert not fn.in_backprop(t1)
@@ -1178,8 +1179,8 @@ def test_tf_autograph(self):
"""in_backprop is True for TensorFlow tensors *if* they are being watched by a gradient tape with Autograph."""
t1 = tf.Variable([1.0, 2.0])
t2 = tf.constant([1.0, 2.0])
assert not fn.requires_grad(t1)
assert not fn.requires_grad(t2)
assert not fn.in_backprop(t1)
assert not fn.in_backprop(t2)

@tf.function
def f_pow(x):
@@ -1188,16 +1189,16 @@ def f_pow(x):
with tf.GradientTape():
# variables are automatically watched within a context,
# but constants are not
y = f_pow(t1)
assert fn.in_backprop(t1)
assert not fn.in_backprop(t2)
y = f_pow(t1)

with tf.GradientTape() as tape:
# watching makes all tensors trainable
tape.watch([t1, t2])
y = f_pow(t1)
assert fn.in_backprop(t1)
assert fn.in_backprop(t2)
y = f_pow(t1)

@pytest.mark.torch
def test_unknown_interface_in_backprop(self):
1 change: 1 addition & 0 deletions tests/pytest.ini
@@ -10,6 +10,7 @@ markers =
gpu: marks tests run on a GPU (deselect with '-m "not gpu"')
qchem: marks tests for the QChem module (deselect with '-m "not qchem"')
qcut: marks tests for the QCut transform (deselect with '-m "not qcut"')
return: marks tests for the new return types (deselect with '-m "not return"')
filterwarnings =
ignore::DeprecationWarning:autograd.numpy.numpy_wrapper
ignore:Casting complex values to real::autograd.numpy.numpy_wrapper
9 changes: 9 additions & 0 deletions tests/returntypes/conftest.py
@@ -0,0 +1,9 @@
import pennylane as qml


def pytest_sessionstart(session):
qml.enable_return()


def pytest_sessionfinish(session, exitstatus):
qml.disable_return()