DataReuploading: Complete Feature Demonstration¶
Library: encoding-atlas
Version: 0.2.0
Encoding: DataReuploading — A quantum feature map that repeatedly encodes classical data throughout the circuit, interleaved with entangling layers.
Overview¶
Data re-uploading is a foundational technique in quantum machine learning. The key idea is to encode classical data multiple times throughout the circuit, interleaved with entangling layers. This creates quantum states with rich Fourier spectra, enabling high expressivity.
$$|\psi(x)\rangle = [U_{\text{ent}} \cdot U_{\text{data}}(x)]^L |0\rangle^{\otimes n}$$
where $U_{\text{data}}(x)$ encodes features via RY rotations and $U_{\text{ent}}$ provides entanglement via a CNOT ladder.
Key advantages:
- High Expressivity — Multiple layers increase accessible Fourier frequencies
- Universal Approximation — With trainable parameters, can approximate any continuous function
- Hardware-Friendly — Only requires RY + CNOT gates with linear connectivity
- Efficient Depth-Width Trade-off — Uses fewer qubits through cyclic feature mapping
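As a library-independent illustration of the state $[U_{\text{ent}} \cdot U_{\text{data}}(x)]^L |0\rangle^{\otimes n}$, the circuit can be simulated directly with NumPy. The helpers below (`apply_1q`, `apply_cnot`, `reupload_state`) are illustrative sketches of the structure, not encoding-atlas APIs.

```python
import numpy as np

def ry(t):
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def apply_1q(state, gate, q, n):
    """Apply a single-qubit gate to qubit q of an n-qubit statevector."""
    state = state.reshape([2] * n)
    state = np.moveaxis(np.tensordot(gate, state, axes=([1], [q])), 0, q)
    return state.reshape(-1)

def apply_cnot(state, c, t, n):
    """CNOT: swap the target's amplitudes on the control = |1> subspace."""
    state = state.reshape([2] * n).copy()
    i0 = [slice(None)] * n; i0[c], i0[t] = 1, 0
    i1 = [slice(None)] * n; i1[c], i1[t] = 1, 1
    state[tuple(i0)], state[tuple(i1)] = state[tuple(i1)].copy(), state[tuple(i0)].copy()
    return state.reshape(-1)

def reupload_state(x, n_layers, n_qubits):
    """|psi(x)> = [U_ent U_data(x)]^L |0...0>: RY data rotations, then a CNOT ladder."""
    state = np.zeros(2 ** n_qubits, dtype=complex)
    state[0] = 1.0
    for _ in range(n_layers):
        for i, xi in enumerate(x):                 # cyclic feature mapping
            state = apply_1q(state, ry(xi), i % n_qubits, n_qubits)
        for q in range(n_qubits - 1):              # entangling ladder
            state = apply_cnot(state, q, q + 1, n_qubits)
    return state

psi = reupload_state(np.array([0.1, 0.2, 0.3, 0.4]), n_layers=3, n_qubits=4)
assert np.isclose(np.linalg.norm(psi), 1.0)        # unitary evolution preserves the norm
```

Repeating the data rotations each layer is what distinguishes re-uploading from a single encoding layer followed by a fixed ansatz.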
References:
- Pérez-Salinas et al. (2020). "Data re-uploading for a universal quantum classifier." Quantum, 4, 226.
- Schuld, Sweke & Meyer (2021). "Effect of data encoding on the expressive power of variational quantum-machine-learning models." Physical Review A, 103(3), 032430.
- Goto et al. (2021). "Universal approximation property of quantum machine learning models in quantum-enhanced feature spaces." Physical Review Letters, 127(9), 090506.
This notebook demonstrates every feature of the DataReuploading encoding in the encoding-atlas library.
1. Installation & Setup¶
# Install the library (uncomment if not already installed)
# !pip install encoding-atlas
import numpy as np
import warnings
import pickle
import hashlib
import logging
import threading
import time
from concurrent.futures import ThreadPoolExecutor
from math import ceil
import encoding_atlas
print(f"encoding-atlas version: {encoding_atlas.__version__}")
encoding-atlas version: 0.2.0
# Check which backends are available
backends_available = {}
try:
import pennylane as qml
backends_available['pennylane'] = qml.__version__
except ImportError:
backends_available['pennylane'] = None
try:
import qiskit
backends_available['qiskit'] = qiskit.__version__
except ImportError:
backends_available['qiskit'] = None
try:
import cirq
backends_available['cirq'] = cirq.__version__
except ImportError:
backends_available['cirq'] = None
print("Backend availability:")
for backend, version in backends_available.items():
status = f"v{version}" if version else "NOT INSTALLED"
print(f" {backend}: {status}")
Backend availability:
  pennylane: v0.42.3
  qiskit: v2.3.0
  cirq: v1.5.0
2. Creating a DataReuploading Encoding¶
The DataReuploading constructor accepts three parameters:
- n_features (int, required): Number of classical features to encode
- n_layers (int, default=3): Number of re-uploading layers
- n_qubits (int or None, default=None): Number of qubits (defaults to n_features)
from encoding_atlas import DataReuploading
# Basic creation with defaults
enc = DataReuploading(n_features=4)
print(f"Encoding: {enc}")
print(f" n_features: {enc.n_features}")
print(f" n_layers: {enc.n_layers}")
print(f" n_qubits: {enc.n_qubits}")
print(f" depth: {enc.depth}")
Encoding: DataReuploading(n_features=4, n_layers=3, n_qubits=4)
  n_features: 4
  n_layers: 3
  n_qubits: 4
  depth: 12
# Custom number of layers
enc_deep = DataReuploading(n_features=4, n_layers=5)
print(f"Deep encoding: {enc_deep}")
print(f" n_layers: {enc_deep.n_layers}")
print(f" depth: {enc_deep.depth}")
Deep encoding: DataReuploading(n_features=4, n_layers=5, n_qubits=4)
  n_layers: 5
  depth: 20
# Custom number of qubits (fewer qubits than features = cyclic mapping)
enc_compact = DataReuploading(n_features=8, n_qubits=4)
print(f"Compact encoding: {enc_compact}")
print(f" n_features: {enc_compact.n_features}")
print(f" n_qubits: {enc_compact.n_qubits}")
Compact encoding: DataReuploading(n_features=8, n_layers=3, n_qubits=4)
  n_features: 8
  n_qubits: 4
# More qubits than features
enc_wide = DataReuploading(n_features=2, n_qubits=4)
print(f"Wide encoding: {enc_wide}")
print(f" n_features: {enc_wide.n_features}")
print(f" n_qubits: {enc_wide.n_qubits}")
Wide encoding: DataReuploading(n_features=2, n_layers=3, n_qubits=4)
  n_features: 2
  n_qubits: 4
# Single-qubit encoding (no entanglement)
enc_single = DataReuploading(n_features=1, n_layers=5)
print(f"Single-qubit: {enc_single}")
print(f" n_qubits: {enc_single.n_qubits}")
Single-qubit: DataReuploading(n_features=1, n_layers=5, n_qubits=1)
  n_qubits: 1
# String representation (__repr__)
enc = DataReuploading(n_features=4, n_layers=3)
print(repr(enc))
# Contains class name and all parameters
assert "DataReuploading" in repr(enc)
assert "n_features=4" in repr(enc)
assert "n_layers=3" in repr(enc)
assert "n_qubits=4" in repr(enc)
print("repr format verified!")
DataReuploading(n_features=4, n_layers=3, n_qubits=4)
repr format verified!
3. Constructor Validation¶
The constructor validates all parameters strictly. Let's verify each validation rule.
# --- Invalid n_features ---
print("=== n_features validation ===")
for invalid_val in [0, -1, 1.5, "four", None]:
try:
DataReuploading(n_features=invalid_val)
print(f" n_features={invalid_val!r}: NO ERROR (unexpected)")
except (ValueError, TypeError) as e:
print(f" n_features={invalid_val!r}: {type(e).__name__}: {e}")
=== n_features validation ===
  n_features=0: ValueError: n_features must be a positive integer, got 0
  n_features=-1: ValueError: n_features must be a positive integer, got -1
  n_features=1.5: ValueError: n_features must be a positive integer, got 1.5
  n_features='four': ValueError: n_features must be a positive integer, got four
  n_features=None: ValueError: n_features must be a positive integer, got None
# --- Invalid n_layers ---
print("=== n_layers validation ===")
for invalid_val in [0, -1, True, False, 1.5, "three"]:
try:
DataReuploading(n_features=4, n_layers=invalid_val)
print(f" n_layers={invalid_val!r}: NO ERROR (unexpected)")
except (ValueError, TypeError) as e:
print(f" n_layers={invalid_val!r}: {type(e).__name__}: {e}")
# Note: bool is rejected explicitly (True is not treated as 1)
print("\nBool rejection is important because bool is a subclass of int in Python.")
=== n_layers validation ===
  n_layers=0: ValueError: n_layers must be a positive integer, got 0
  n_layers=-1: ValueError: n_layers must be a positive integer, got -1
  n_layers=True: ValueError: n_layers must be a positive integer, got True
  n_layers=False: ValueError: n_layers must be a positive integer, got False
  n_layers=1.5: ValueError: n_layers must be a positive integer, got 1.5
  n_layers='three': ValueError: n_layers must be a positive integer, got 'three'

Bool rejection is important because bool is a subclass of int in Python.
# --- Invalid n_qubits ---
print("=== n_qubits validation ===")
for invalid_val in [0, -1, True, False, 1.5]:
try:
DataReuploading(n_features=4, n_qubits=invalid_val)
print(f" n_qubits={invalid_val!r}: NO ERROR (unexpected)")
except (ValueError, TypeError) as e:
print(f" n_qubits={invalid_val!r}: {type(e).__name__}: {e}")
# None is valid (means "use n_features")
enc = DataReuploading(n_features=4, n_qubits=None)
print(f"\n n_qubits=None: OK (defaults to n_features={enc.n_qubits})")
=== n_qubits validation ===
  n_qubits=0: ValueError: n_qubits must be a positive integer, got 0
  n_qubits=-1: ValueError: n_qubits must be a positive integer, got -1
  n_qubits=True: ValueError: n_qubits must be a positive integer, got True
  n_qubits=False: ValueError: n_qubits must be a positive integer, got False
  n_qubits=1.5: ValueError: n_qubits must be a positive integer, got 1.5

  n_qubits=None: OK (defaults to n_features=4)
4. Core Properties¶
Each encoding exposes its configuration through read-only properties and a config dict.
enc = DataReuploading(n_features=4, n_layers=3)
print("Core Properties:")
print(f" n_features: {enc.n_features} (number of classical features)")
print(f" n_qubits: {enc.n_qubits} (number of qubits in the circuit)")
print(f" n_layers: {enc.n_layers} (number of re-uploading layers)")
print(f" depth: {enc.depth} (circuit depth)")
Core Properties:
  n_features: 4 (number of classical features)
  n_qubits: 4 (number of qubits in the circuit)
  n_layers: 3 (number of re-uploading layers)
  depth: 12 (circuit depth)
# The config property returns a copy of the configuration dict
config = enc.config
print(f"Config: {config}")
print(f"Type: {type(config)}")
# It's a defensive copy - modifying it doesn't affect the encoding
config['n_layers'] = 999
print(f"Modified copy: {config}")
print(f"Original still: {enc.config}")
Config: {'n_layers': 3, 'n_qubits_override': None}
Type: <class 'dict'>
Modified copy: {'n_layers': 999, 'n_qubits_override': None}
Original still: {'n_layers': 3, 'n_qubits_override': None}
5. Depth Formula Deep Dive¶
The circuit depth is computed exactly:
$$\text{depth} = n_{\text{layers}} \times \left(\lceil \frac{n_{\text{features}}}{n_{\text{qubits}}} \rceil + (n_{\text{qubits}} - 1)\right)$$
- Encoding sublayer depth: $\lceil n_{\text{features}} / n_{\text{qubits}} \rceil$ (cyclic feature mapping)
- Entangling sublayer depth: $n_{\text{qubits}} - 1$ (CNOT ladder)
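The same formula can be spelled out as a plain-Python helper (a sketch mirroring the expression above, not library code); the expected values below match the verification table in the next cell.

```python
from math import ceil

def depth_formula(n_features, n_layers, n_qubits):
    encoding_depth = ceil(n_features / n_qubits)   # cyclic feature sublayer
    entangling_depth = max(0, n_qubits - 1)        # CNOT ladder
    return n_layers * (encoding_depth + entangling_depth)

assert depth_formula(4, 3, 4) == 12   # standard
assert depth_formula(8, 2, 4) == 10   # cyclic: two encoding sublayers per layer
assert depth_formula(1, 5, 1) == 5    # single qubit: no ladder
```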
# Verify depth formula for various configurations
test_cases = [
# (n_features, n_layers, n_qubits, expected_description)
(4, 3, 4, "standard: 4 features, 4 qubits, 3 layers"),
(8, 2, 4, "cyclic: 8 features, 4 qubits, 2 layers"),
(1, 5, 1, "single qubit: 1 feature, 1 qubit, 5 layers"),
(2, 3, 2, "two qubits: 2 features, 2 qubits, 3 layers"),
(6, 2, 4, "non-divisible: 6 features, 4 qubits, 2 layers"),
(3, 4, 5, "wide: 3 features, 5 qubits, 4 layers"),
]
print(f"{'Configuration':<50} {'Formula':>10} {'Actual':>8} {'Match':>6}")
print("-" * 80)
for n_f, n_l, n_q, desc in test_cases:
enc = DataReuploading(n_features=n_f, n_layers=n_l, n_qubits=n_q)
encoding_depth = ceil(n_f / n_q)
entangling_depth = max(0, n_q - 1)
expected = n_l * (encoding_depth + entangling_depth)
actual = enc.depth
match = "OK" if expected == actual else "FAIL"
print(f" {desc:<48} {expected:>8} {actual:>6} {match:>5}")
Configuration                                      Formula   Actual  Match
--------------------------------------------------------------------------------
  standard: 4 features, 4 qubits, 3 layers              12       12     OK
  cyclic: 8 features, 4 qubits, 2 layers                10       10     OK
  single qubit: 1 feature, 1 qubit, 5 layers             5        5     OK
  two qubits: 2 features, 2 qubits, 3 layers             6        6     OK
  non-divisible: 6 features, 4 qubits, 2 layers         10       10     OK
  wide: 3 features, 5 qubits, 4 layers                  20       20     OK
6. EncodingProperties¶
The .properties attribute returns a frozen dataclass with comprehensive encoding metadata. It's computed lazily and cached (thread-safe).
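The lazy, thread-safe caching described above can be sketched with the standard double-checked locking pattern. `Props` and `Encoding` below are illustrative stand-ins, not encoding-atlas classes.

```python
import threading
from dataclasses import dataclass

@dataclass(frozen=True)
class Props:
    n_qubits: int
    depth: int

class Encoding:
    def __init__(self, n_qubits, depth):
        self._n_qubits, self._depth = n_qubits, depth
        self._props = None
        self._lock = threading.Lock()

    @property
    def properties(self):
        if self._props is None:                  # fast path: no lock once cached
            with self._lock:
                if self._props is None:          # double-checked locking
                    self._props = Props(self._n_qubits, self._depth)
        return self._props

enc = Encoding(4, 12)
assert enc.properties is enc.properties          # computed once, then reused
```

Freezing the dataclass is what makes the cached object safe to share across threads without further locking.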
enc = DataReuploading(n_features=4, n_layers=3)
props = enc.properties
print(f"Type: {type(props).__name__}")
print()
# All fields
print("Encoding Properties:")
print(f" n_qubits: {props.n_qubits}")
print(f" depth: {props.depth}")
print(f" gate_count: {props.gate_count}")
print(f" single_qubit_gates: {props.single_qubit_gates}")
print(f" two_qubit_gates: {props.two_qubit_gates}")
print(f" parameter_count: {props.parameter_count}")
print(f" is_entangling: {props.is_entangling}")
print(f" simulability: {props.simulability}")
print(f" trainability_estimate: {props.trainability_estimate}")
print(f" expressibility: {props.expressibility}")
print(f" entanglement_capability: {props.entanglement_capability}")
print(f" noise_resilience_est: {props.noise_resilience_estimate}")
print(f" notes: {props.notes}")
Type: EncodingProperties

Encoding Properties:
  n_qubits: 4
  depth: 12
  gate_count: 21
  single_qubit_gates: 12
  two_qubit_gates: 9
  parameter_count: 12
  is_entangling: True
  simulability: not_simulable
  trainability_estimate: 0.75
  expressibility: None
  entanglement_capability: None
  noise_resilience_est: None
  notes: Data re-uploading feature map with 3 layers. High expressivity via repeated encoding; add trainable parameters for universal approximation.
# Properties as a dictionary (useful for logging, serialization)
props_dict = props.to_dict()
print("Properties dict keys:", list(props_dict.keys()))
print()
for k, v in props_dict.items():
print(f" {k}: {v}")
Properties dict keys: ['n_qubits', 'depth', 'gate_count', 'single_qubit_gates', 'two_qubit_gates', 'parameter_count', 'is_entangling', 'simulability', 'expressibility', 'entanglement_capability', 'trainability_estimate', 'noise_resilience_estimate', 'notes']

  n_qubits: 4
  depth: 12
  gate_count: 21
  single_qubit_gates: 12
  two_qubit_gates: 9
  parameter_count: 12
  is_entangling: True
  simulability: not_simulable
  expressibility: None
  entanglement_capability: None
  trainability_estimate: 0.75
  noise_resilience_estimate: None
  notes: Data re-uploading feature map with 3 layers. High expressivity via repeated encoding; add trainable parameters for universal approximation.
# Properties are frozen (immutable)
try:
props.n_qubits = 99
except AttributeError as e:
print(f"Cannot modify frozen properties: {e}")
Cannot modify frozen properties: cannot assign to field 'n_qubits'
# Compare properties for different configurations
configs = [
("Standard", dict(n_features=4, n_layers=3)),
("Single qubit", dict(n_features=1, n_layers=5)),
("Deep", dict(n_features=4, n_layers=8)),
("Compact", dict(n_features=8, n_layers=3, n_qubits=4)),
]
print(f"{'Config':<16} {'Qubits':>6} {'Depth':>6} {'Gates':>6} {'1Q':>5} {'2Q':>5} {'Entangling':>11} {'Trainability':>13}")
print("-" * 85)
for name, kwargs in configs:
e = DataReuploading(**kwargs)
p = e.properties
print(f" {name:<14} {p.n_qubits:>6} {p.depth:>6} {p.gate_count:>6} {p.single_qubit_gates:>5} {p.two_qubit_gates:>5} {str(p.is_entangling):>11} {p.trainability_estimate:>13.2f}")
Config           Qubits  Depth  Gates    1Q    2Q  Entangling  Trainability
-------------------------------------------------------------------------------------
  Standard            4     12     21    12     9        True          0.75
  Single qubit        1      5      5     5     0       False          0.65
  Deep                4     32     56    32    24        True          0.50
  Compact             4     15     33    24     9        True          0.75
7. Gate Count Breakdown¶
The gate_count_breakdown() method returns a detailed dictionary of gate counts by type.
enc = DataReuploading(n_features=4, n_layers=3)
breakdown = enc.gate_count_breakdown()
print("Gate Count Breakdown:")
for key, value in breakdown.items():
print(f" {key}: {value}")
Gate Count Breakdown:
  ry_gates: 12
  cnot_gates: 9
  total_single_qubit: 12
  total_two_qubit: 9
  total: 21
  ry_per_layer: 4
  cnot_per_layer: 3
  gates_per_layer: 7
# Verify gate count formulas
# RY gates = n_layers * n_features
# CNOT gates = n_layers * max(0, n_qubits - 1)
enc = DataReuploading(n_features=4, n_layers=3)
b = enc.gate_count_breakdown()
assert b['ry_gates'] == 3 * 4, f"Expected 12 RY gates, got {b['ry_gates']}"
assert b['cnot_gates'] == 3 * 3, f"Expected 9 CNOT gates, got {b['cnot_gates']}"
assert b['total'] == b['ry_gates'] + b['cnot_gates']
assert b['total_single_qubit'] == b['ry_gates']
assert b['total_two_qubit'] == b['cnot_gates']
assert b['ry_per_layer'] == 4
assert b['cnot_per_layer'] == 3
assert b['gates_per_layer'] == 7
print("All gate count formulas verified!")
print(f" Total gates: {b['total']} = {b['ry_gates']} RY + {b['cnot_gates']} CNOT")
print(f" Per layer: {b['gates_per_layer']} = {b['ry_per_layer']} RY + {b['cnot_per_layer']} CNOT")
All gate count formulas verified!
  Total gates: 21 = 12 RY + 9 CNOT
  Per layer: 7 = 4 RY + 3 CNOT
# Gate count scaling with layers
print(f"{'Layers':>6} {'RY':>6} {'CNOT':>6} {'Total':>6}")
print("-" * 30)
for n_layers in [1, 2, 3, 5, 8, 10]:
e = DataReuploading(n_features=4, n_layers=n_layers)
b = e.gate_count_breakdown()
print(f" {n_layers:>4} {b['ry_gates']:>4} {b['cnot_gates']:>4} {b['total']:>4}")
Layers RY CNOT Total
------------------------------
1 4 3 7
2 8 6 14
3 12 9 21
5 20 15 35
8 32 24 56
10 40 30 70
8. Resource Summary¶
The resource_summary() method provides a comprehensive resource analysis including circuit structure, gate counts, encoding characteristics, hardware requirements, and recommendations.
enc = DataReuploading(n_features=4, n_layers=3)
summary = enc.resource_summary()
print("=== Resource Summary ===")
print(f"\nCircuit Structure:")
print(f" n_qubits: {summary['n_qubits']}")
print(f" n_features: {summary['n_features']}")
print(f" n_layers: {summary['n_layers']}")
print(f" depth: {summary['depth']}")
print(f"\nEncoding Characteristics:")
print(f" is_entangling: {summary['is_entangling']}")
print(f" simulability: {summary['simulability']}")
print(f" trainability_estimate: {summary['trainability_estimate']}")
print(f" fourier_frequencies: {summary['fourier_frequencies']}")
print(f"\nGate Counts:")
for k, v in summary['gate_counts'].items():
print(f" {k}: {v}")
print(f"\nHardware Requirements:")
for k, v in summary['hardware_requirements'].items():
print(f" {k}: {v}")
print(f"\nRecommendations:")
for rec in summary['recommendations']:
print(f" - {rec}")
=== Resource Summary ===

Circuit Structure:
  n_qubits: 4
  n_features: 4
  n_layers: 3
  depth: 12

Encoding Characteristics:
  is_entangling: True
  simulability: not_simulable
  trainability_estimate: 0.75
  fourier_frequencies: 3

Gate Counts:
  ry_gates: 12
  cnot_gates: 9
  total_single_qubit: 12
  total_two_qubit: 9
  total: 21
  ry_per_layer: 4
  cnot_per_layer: 3
  gates_per_layer: 7

Hardware Requirements:
  connectivity: linear
  native_gates: ['RY', 'CNOT']
  min_qubit_count: 4
  estimated_circuit_time_us: 10.5

Recommendations:
  - Configuration looks good for typical quantum ML tasks.
# Trainability estimate formula: max(0.4, 0.9 - 0.05 * n_layers)
print(f"{'Layers':>6} {'Trainability':>13} {'Formula':>10}")
print("-" * 35)
for n_layers in [1, 2, 3, 5, 8, 10, 15, 20]:
if n_layers > 10:
with warnings.catch_warnings():
warnings.simplefilter("ignore")
enc = DataReuploading(n_features=4, n_layers=n_layers)
else:
enc = DataReuploading(n_features=4, n_layers=n_layers)
s = enc.resource_summary()
expected = max(0.4, 0.9 - 0.05 * n_layers)
print(f" {n_layers:>4} {s['trainability_estimate']:>11.2f} {expected:>8.2f}")
Layers Trainability Formula
Deep circuit configuration: n_layers=15 exceeds threshold=10, total_gates=105, estimated_trainability=0.40
Deep circuit configuration: n_layers=20 exceeds threshold=10, total_gates=140, estimated_trainability=0.40
-----------------------------------
1 0.85 0.85
2 0.80 0.80
3 0.75 0.75
5 0.65 0.65
8 0.50 0.50
10 0.40 0.40
15 0.40 0.40
20 0.40 0.40
9. Entanglement Pairs¶
The get_entanglement_pairs() method returns the qubit pairs connected by CNOT gates. DataReuploading uses a linear (ladder) topology: (0,1), (1,2), ..., (n-2, n-1).
# Standard multi-qubit configuration
enc = DataReuploading(n_features=4, n_layers=3)
pairs = enc.get_entanglement_pairs()
print(f"4 qubits: {pairs}")
assert pairs == [(0, 1), (1, 2), (2, 3)]
# Various qubit counts
for n_q in [1, 2, 3, 4, 6, 8]:
enc = DataReuploading(n_features=max(1, n_q), n_qubits=n_q)
pairs = enc.get_entanglement_pairs()
print(f" {n_q} qubits: {pairs} ({len(pairs)} pairs)")
4 qubits: [(0, 1), (1, 2), (2, 3)]
  1 qubits: [] (0 pairs)
  2 qubits: [(0, 1)] (1 pairs)
  3 qubits: [(0, 1), (1, 2)] (2 pairs)
  4 qubits: [(0, 1), (1, 2), (2, 3)] (3 pairs)
  6 qubits: [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)] (5 pairs)
  8 qubits: [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7)] (7 pairs)
# The returned list is a defensive copy
enc = DataReuploading(n_features=4)
pairs1 = enc.get_entanglement_pairs()
pairs1.append((99, 100)) # Modify the copy
pairs2 = enc.get_entanglement_pairs() # Get fresh copy
print(f"Modified copy: {pairs1}")
print(f"Fresh copy: {pairs2}")
assert (99, 100) not in pairs2, "Internal state should not be affected"
print("Defensive copy verified!")
Modified copy: [(0, 1), (1, 2), (2, 3), (99, 100)]
Fresh copy: [(0, 1), (1, 2), (2, 3)]
Defensive copy verified!
10. Circuit Generation — PennyLane Backend¶
The PennyLane backend returns a callable function that applies gates when invoked within a QNode context.
import pennylane as qml
enc = DataReuploading(n_features=4, n_layers=3)
x = np.array([0.1, 0.2, 0.3, 0.4])
# Generate circuit
circuit_fn = enc.get_circuit(x, backend='pennylane')
print(f"Type: {type(circuit_fn)}")
print(f"Callable: {callable(circuit_fn)}")
Type: <class 'function'>
Callable: True
# Execute the circuit in a QNode to get statevector
dev = qml.device('default.qubit', wires=enc.n_qubits)
@qml.qnode(dev)
def run_circuit():
circuit_fn()
return qml.state()
state = run_circuit()
print(f"Statevector shape: {state.shape}")
print(f"Statevector (first 8 amplitudes):\n {state[:8]}")
print(f"Norm: {np.linalg.norm(state):.10f}")
assert np.isclose(np.linalg.norm(state), 1.0), "State must be normalized"
print("State normalization verified!")
Statevector shape: (16,)
Statevector (first 8 amplitudes):
  [0.76217187+0.j 0.47988336+0.j 0.23221717+0.j 0.27541393+0.j 0.03179354+0.j 0.14300327+0.j 0.147863  +0.j 0.06760851+0.j]
Norm: 1.0000000000
State normalization verified!
# Different inputs produce different states
x1 = np.array([0.1, 0.2, 0.3, 0.4])
x2 = np.array([0.5, 0.6, 0.7, 0.8])
@qml.qnode(dev)
def get_state(x_input):
enc.get_circuit(x_input, backend='pennylane')()
return qml.state()
state1 = get_state(x1)
state2 = get_state(x2)
fidelity = np.abs(np.vdot(state1, state2))**2
print(f"Fidelity between different inputs: {fidelity:.6f}")
assert fidelity < 1.0, "Different inputs should produce different states"
print("Different inputs => different states confirmed!")
Fidelity between different inputs: 0.397168
Different inputs => different states confirmed!
11. Circuit Generation — Qiskit Backend¶
The Qiskit backend returns a QuantumCircuit built from RY and CX gates.
try:
from qiskit import QuantumCircuit
enc = DataReuploading(n_features=4, n_layers=2)
x = np.array([0.5, 1.0, 1.5, 2.0])
qc = enc.get_circuit(x, backend='qiskit')
print(f"Type: {type(qc).__name__}")
print(f"Num qubits: {qc.num_qubits}")
print(f"Circuit name: {qc.name}")
print(f"Depth: {qc.depth()}")
print(f"Gate counts: {dict(qc.count_ops())}")
print()
print(qc.draw(output='text'))
except ImportError:
print("Qiskit not installed — skipping this cell.")
print("Install with: pip install qiskit")
Type: QuantumCircuit
Num qubits: 4
Circuit name: DataReuploading
Depth: 7
Gate counts: {'ry': 8, 'cx': 6}
┌─────────┐ ┌─────────┐
q_0: ┤ Ry(0.5) ├──■──┤ Ry(0.5) ├──────────────■───────────────
└┬───────┬┘┌─┴─┐└─────────┘┌───────┐ ┌─┴─┐
q_1: ─┤ Ry(1) ├─┤ X ├─────■─────┤ Ry(1) ├───┤ X ├─────■───────
┌┴───────┴┐└───┘ ┌─┴─┐ └───────┘┌──┴───┴──┐┌─┴─┐
q_2: ┤ Ry(1.5) ├────────┤ X ├───────■────┤ Ry(1.5) ├┤ X ├──■──
└┬───────┬┘ └───┘ ┌─┴─┐ └┬───────┬┘└───┘┌─┴─┐
q_3: ─┤ Ry(2) ├───────────────────┤ X ├───┤ Ry(2) ├──────┤ X ├
└───────┘ └───┘ └───────┘ └───┘
try:
from qiskit import QuantumCircuit
# Verify gate types match expectations
enc = DataReuploading(n_features=4, n_layers=3)
x = np.array([0.1, 0.2, 0.3, 0.4])
qc = enc.get_circuit(x, backend='qiskit')
ops = dict(qc.count_ops())
expected_ry = enc.n_layers * enc.n_features
expected_cx = enc.n_layers * (enc.n_qubits - 1)
print(f"Expected RY gates: {expected_ry}, Got: {ops.get('ry', 0)}")
print(f"Expected CX gates: {expected_cx}, Got: {ops.get('cx', 0)}")
assert ops.get('ry', 0) == expected_ry
assert ops.get('cx', 0) == expected_cx
print("Gate counts match!")
except ImportError:
print("Qiskit not installed — skipping this cell.")
Expected RY gates: 12, Got: 12
Expected CX gates: 9, Got: 9
Gate counts match!
12. Circuit Generation — Cirq Backend¶
The Cirq backend returns a cirq Circuit.
try:
import cirq
enc = DataReuploading(n_features=3, n_layers=2)
x = np.array([0.5, 1.0, 1.5])
circ = enc.get_circuit(x, backend='cirq')
print(f"Type: {type(circ).__name__}")
print(f"Num qubits: {len(circ.all_qubits())}")
print()
print(circ)
except ImportError:
print("Cirq not installed — skipping this cell.")
print("Install with: pip install cirq-core")
Type: Circuit
Num qubits: 3
0: ───Ry(0.159π)───@───Ry(0.159π)────────────────@───────
│ │
1: ───Ry(0.318π)───X───@────────────Ry(0.318π)───X───@───
│ │
2: ───Ry(0.477π)───────X────────────Ry(0.477π)───────X───
13. Batch Circuit Generation¶
The get_circuits() method generates circuits for multiple data samples, with optional parallel processing.
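The sequential/parallel split can be sketched with a `ThreadPoolExecutor`. `get_circuits_sketch` below is a hypothetical illustration of the pattern, not the library's implementation; the toy builder stands in for a per-sample circuit factory.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def get_circuits_sketch(build_one, X, parallel=False, max_workers=None):
    """Build one circuit per row of X, sequentially or on a thread pool."""
    X = np.atleast_2d(np.asarray(X))      # promote a single 1D sample to a batch of one
    if not parallel:
        return [build_one(x) for x in X]
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(build_one, X))

# Toy "circuit builder". Threads pay off for I/O-bound or GIL-releasing work;
# for cheap closures the pool overhead can dominate, as the timing cell below shows.
circuits = get_circuits_sketch(lambda x: float(x.sum()),
                               np.arange(8.0).reshape(4, 2), parallel=True)
assert circuits == [1.0, 5.0, 9.0, 13.0]
```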
enc = DataReuploading(n_features=4, n_layers=3)
X = np.random.default_rng(42).uniform(0, 2*np.pi, size=(10, 4))
# Sequential batch
circuits = enc.get_circuits(X, backend='pennylane')
print(f"Generated {len(circuits)} circuits (sequential)")
print(f"Each circuit is callable: {callable(circuits[0])}")
Generated 10 circuits (sequential)
Each circuit is callable: True
# Parallel batch processing
circuits_par = enc.get_circuits(X, backend='pennylane', parallel=True)
print(f"Generated {len(circuits_par)} circuits (parallel)")
# Verify parallel and sequential produce same results
dev = qml.device('default.qubit', wires=enc.n_qubits)
@qml.qnode(dev)
def eval_circ(circ_fn):
circ_fn()
return qml.state()
state_seq = eval_circ(circuits[0])
state_par = eval_circ(circuits_par[0])
print(f"States match: {np.allclose(state_seq, state_par)}")
Generated 10 circuits (parallel)
States match: True
# Parallel with custom max_workers
import os
circuits_custom = enc.get_circuits(X, backend='pennylane', parallel=True, max_workers=2)
print(f"Generated {len(circuits_custom)} circuits with max_workers=2")
Generated 10 circuits with max_workers=2
# Timing comparison (sequential vs parallel) with a larger batch
enc = DataReuploading(n_features=4, n_layers=3)
X_large = np.random.default_rng(42).uniform(0, 2*np.pi, size=(200, 4))
start = time.time()
_ = enc.get_circuits(X_large, backend='pennylane', parallel=False)
t_seq = time.time() - start
start = time.time()
_ = enc.get_circuits(X_large, backend='pennylane', parallel=True)
t_par = time.time() - start
print(f"Sequential: {t_seq:.4f}s")
print(f"Parallel: {t_par:.4f}s")
print(f"Note: Parallel may not be faster for PennyLane's lightweight closures.")
print(f"Parallel processing benefits more with Qiskit/Cirq backends on large batches.")
Sequential: 0.0010s
Parallel: 0.0272s
Note: Parallel may not be faster for PennyLane's lightweight closures.
Parallel processing benefits more with Qiskit/Cirq backends on large batches.
# Single sample as 1D array (auto-handled)
x_1d = np.array([0.1, 0.2, 0.3, 0.4])
circuits_1d = enc.get_circuits(x_1d, backend='pennylane')
print(f"1D input -> {len(circuits_1d)} circuit(s)")
1D input -> 1 circuit(s)
14. Input Validation¶
The encoding validates all inputs comprehensively, catching errors early with clear messages.
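The checks demonstrated in the cells below follow a common shape. `validate_features` is an illustrative sketch of that logic (mirroring the error messages shown in this section), not the library's actual code.

```python
import numpy as np

def validate_features(x, n_features):
    """Illustrative input validation: type, shape, and finiteness checks."""
    x = np.asarray(x)
    if x.dtype.kind in "US":                      # str / bytes arrays
        raise TypeError("Input contains string values. Expected numeric data.")
    if np.iscomplexobj(x):
        raise TypeError("Input contains complex values. Use real-valued data only.")
    x = x.astype(float).ravel()
    if x.size != n_features:
        raise ValueError(f"Expected {n_features} features, got {x.size}")
    if not np.all(np.isfinite(x)):                # rejects both NaN and +/-inf
        raise ValueError("Input contains NaN or infinite values")
    return x

out = validate_features([0.1, 0.2, 0.3, 0.4], 4)  # lists are auto-converted
assert out.shape == (4,)
```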
enc = DataReuploading(n_features=4)
# Valid input (numpy array)
x_valid = np.array([0.1, 0.2, 0.3, 0.4])
circuit = enc.get_circuit(x_valid)
print(f"Valid numpy array: OK (callable={callable(circuit)})")
# Valid input (Python list — auto-converted)
circuit_list = enc.get_circuit([0.1, 0.2, 0.3, 0.4])
print(f"Valid Python list: OK (callable={callable(circuit_list)})")
Valid numpy array: OK (callable=True)
Valid Python list: OK (callable=True)
# Wrong number of features
try:
enc.get_circuit(np.array([0.1, 0.2, 0.3])) # 3 instead of 4
except ValueError as e:
print(f"Wrong shape: {e}")
Wrong shape: Expected 4 features, got 3
# NaN values
try:
enc.get_circuit(np.array([0.1, np.nan, 0.3, 0.4]))
except ValueError as e:
print(f"NaN: {e}")
NaN: Input contains NaN or infinite values
# Infinite values
try:
enc.get_circuit(np.array([0.1, np.inf, 0.3, 0.4]))
except ValueError as e:
print(f"Inf: {e}")
Inf: Input contains NaN or infinite values
# Complex values
try:
enc.get_circuit(np.array([0.1+1j, 0.2, 0.3, 0.4]))
except TypeError as e:
print(f"Complex: {e}")
Complex: Input contains complex values (dtype: complex128). Complex numbers are not supported. Use real-valued data only.
# String values
try:
enc.get_circuit(["0.1", "0.2", "0.3", "0.4"])
except TypeError as e:
print(f"String list: {e}")
String list: Input contains string values. Expected numeric data, got str. Convert strings to floats before encoding.
# Invalid backend
try:
enc.get_circuit(np.array([0.1, 0.2, 0.3, 0.4]), backend='tensorflow')
except ValueError as e:
print(f"Invalid backend: {e}")
Invalid backend: Unknown backend 'tensorflow'. Supported backends: 'pennylane', 'qiskit', 'cirq'
15. Cyclic Feature Mapping¶
When n_features > n_qubits, features are mapped cyclically: feature $i$ maps to qubit $i \mod n_{\text{qubits}}$.
For example, with 6 features on 3 qubits:
- Features 0, 3 → qubit 0
- Features 1, 4 → qubit 1
- Features 2, 5 → qubit 2
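The mapping rule itself is plain modular arithmetic, as this short sketch shows:

```python
n_features, n_qubits = 6, 3
mapping = [(f, f % n_qubits) for f in range(n_features)]   # (feature, qubit) pairs
assert mapping == [(0, 0), (1, 1), (2, 2), (3, 0), (4, 1), (5, 2)]

# Grouped per qubit, matching the bullets above:
per_qubit = {q: [f for f, q2 in mapping if q2 == q] for q in range(n_qubits)}
assert per_qubit == {0: [0, 3], 1: [1, 4], 2: [2, 5]}
```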
from encoding_atlas.analysis import simulate_encoding_statevector
# Cyclic mapping: 6 features on 3 qubits
enc_cyclic = DataReuploading(n_features=6, n_qubits=3, n_layers=2)
print(f"Encoding: {enc_cyclic}")
print(f" n_features: {enc_cyclic.n_features}")
print(f" n_qubits: {enc_cyclic.n_qubits}")
print(f" depth: {enc_cyclic.depth}")
# Depth = n_layers * (ceil(6/3) + (3-1)) = 2 * (2 + 2) = 8
expected_depth = 2 * (ceil(6/3) + 2)
print(f" Expected depth: {expected_depth}")
assert enc_cyclic.depth == expected_depth
Encoding: DataReuploading(n_features=6, n_layers=2, n_qubits=3)
  n_features: 6
  n_qubits: 3
  depth: 8
  Expected depth: 8
# Simulate to verify valid quantum states
x = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6])
state = simulate_encoding_statevector(enc_cyclic, x)
print(f"Statevector dim: {len(state)} (2^{enc_cyclic.n_qubits} = {2**enc_cyclic.n_qubits})")
print(f"Norm: {np.linalg.norm(state):.10f}")
assert np.isclose(np.linalg.norm(state), 1.0)
print("Valid quantum state from cyclic mapping!")
Statevector dim: 8 (2^3 = 8)
Norm: 1.0000000000
Valid quantum state from cyclic mapping!
16. Single-Qubit Mode¶
With n_qubits=1, DataReuploading creates a non-entangling but still expressive feature map. Even a single qubit can represent functions with $L$ Fourier frequencies when using $L$ layers — this is the key insight of data re-uploading.
enc_1q = DataReuploading(n_features=1, n_layers=5)
print(f"Encoding: {enc_1q}")
print(f" n_qubits: {enc_1q.n_qubits}")
print(f" depth: {enc_1q.depth}")
print(f" Entangling: {enc_1q.properties.is_entangling}")
print(f" Simulable: {enc_1q.properties.simulability}")
# No entanglement pairs
pairs = enc_1q.get_entanglement_pairs()
print(f" Entanglement pairs: {pairs}")
assert pairs == []
# Gate counts: only RY gates, no CNOTs
b = enc_1q.gate_count_breakdown()
print(f" RY gates: {b['ry_gates']}, CNOT gates: {b['cnot_gates']}")
assert b['cnot_gates'] == 0
Encoding: DataReuploading(n_features=1, n_layers=5, n_qubits=1)
  n_qubits: 1
  depth: 5
  Entangling: False
  Simulable: simulable
  Entanglement pairs: []
  RY gates: 5, CNOT gates: 0
# Single qubit with multiple features (cyclic mapping onto 1 qubit)
enc_1q_multi = DataReuploading(n_features=3, n_qubits=1, n_layers=2)
print(f"Encoding: {enc_1q_multi}")
print(f" depth: {enc_1q_multi.depth}")
# depth = 2 * (ceil(3/1) + 0) = 2 * 3 = 6
assert enc_1q_multi.depth == 6
x = np.array([0.5, 1.0, 1.5])
state = simulate_encoding_statevector(enc_1q_multi, x)
print(f" State: {state}")
print(f" Norm: {np.linalg.norm(state):.10f}")
Encoding: DataReuploading(n_features=3, n_layers=2, n_qubits=1)
  depth: 6
  State: [-0.9899925 +0.j 0.14112001+0.j]
  Norm: 1.0000000000
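The Fourier-frequency claim can be checked numerically: an L-layer single-qubit model with non-commuting trainable rotations between data uploads is a trigonometric polynomial of degree at most L in the input. The RZ trainable layer below is an illustrative choice, not the library's ansatz.

```python
import numpy as np

def rz(t):
    return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

def ry(t):
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def model(x, thetas):
    """<Z> after applying RZ(theta_l) RY(x) for each layer l, starting from |0>."""
    psi = np.array([1.0, 0.0], dtype=complex)
    for t in thetas:
        psi = rz(t) @ ry(x) @ psi
    return float(np.abs(psi[0]) ** 2 - np.abs(psi[1]) ** 2)

L, N = 5, 64
thetas = np.random.default_rng(0).uniform(0, 2 * np.pi, L)
xs = np.linspace(0, 2 * np.pi, N, endpoint=False)
spec = np.fft.fft([model(x, thetas) for x in xs]) / N
freqs = np.fft.fftfreq(N, d=1 / N)                       # integer frequencies
# All spectral weight sits at |k| <= L: L layers give at most L Fourier frequencies.
assert np.all(np.abs(spec[np.abs(freqs) > L]) < 1e-10)
```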
17. Deep Circuit Warning¶
When n_layers > 10, the constructor emits a UserWarning about potential trainability challenges (barren plateaus).
# Capture the deep circuit warning
with warnings.catch_warnings(record=True) as w:
warnings.simplefilter("always")
enc_deep = DataReuploading(n_features=4, n_layers=15)
# Check that a warning was emitted
user_warnings = [x for x in w if issubclass(x.category, UserWarning)]
print(f"Number of warnings: {len(user_warnings)}")
if user_warnings:
print(f"Warning message:\n {user_warnings[0].message}")
print(f"\nEncoding still created: {enc_deep}")
print(f" Trainability estimate: {enc_deep.properties.trainability_estimate}")
Deep circuit configuration: n_layers=15 exceeds threshold=10, total_gates=105, estimated_trainability=0.40
Number of warnings: 1
Warning message:
  DataReuploading with 15 layers creates a deep circuit (105 total gates, depth=60). Very deep circuits may face trainability challenges due to barren plateaus (estimated trainability: 0.40). Consider: (1) reducing n_layers if task permits, (2) using gradient-free optimization, or (3) layer-wise training strategies.

Encoding still created: DataReuploading(n_features=4, n_layers=15, n_qubits=4)
  Trainability estimate: 0.4
# No warning for n_layers <= 10
with warnings.catch_warnings(record=True) as w:
warnings.simplefilter("always")
enc_ok = DataReuploading(n_features=4, n_layers=10)
user_warnings = [x for x in w if issubclass(x.category, UserWarning)]
print(f"n_layers=10: {len(user_warnings)} warnings (expected: 0)")
# Warning at n_layers=11
with warnings.catch_warnings(record=True) as w:
warnings.simplefilter("always")
enc_warn = DataReuploading(n_features=4, n_layers=11)
user_warnings = [x for x in w if issubclass(x.category, UserWarning)]
print(f"n_layers=11: {len(user_warnings)} warning(s) (expected: 1)")
Deep circuit configuration: n_layers=11 exceeds threshold=10, total_gates=77, estimated_trainability=0.40
n_layers=10: 0 warnings (expected: 0) n_layers=11: 1 warning(s) (expected: 1)
18. Statevector Simulation & Analysis¶
The encoding_atlas.analysis module provides tools for simulating encodings and analyzing the resulting quantum states.
from encoding_atlas.analysis import (
simulate_encoding_statevector,
simulate_encoding_statevectors_batch,
compute_fidelity,
compute_purity,
compute_linear_entropy,
compute_von_neumann_entropy,
validate_encoding_for_analysis,
validate_statevector,
generate_random_parameters,
create_rng,
)
enc = DataReuploading(n_features=4, n_layers=3)
# Validate encoding is suitable for analysis
validate_encoding_for_analysis(enc)
print("Encoding validated for analysis!")
# Simulate a single statevector
x = np.array([0.5, 1.0, 1.5, 2.0])
state = simulate_encoding_statevector(enc, x)
print(f"\nStatevector shape: {state.shape}")
print(f"Norm: {np.linalg.norm(state):.10f}")
# Validate the statevector
validated = validate_statevector(state, expected_qubits=4)
print(f"Statevector validated!")
Encoding validated for analysis! Statevector shape: (16,) Norm: 1.0000000000 Statevector validated!
# Batch simulation
X = np.random.default_rng(42).uniform(0, 2*np.pi, size=(5, 4))
states = simulate_encoding_statevectors_batch(enc, X)
print(f"Batch simulated {len(states)} states")
for i, s in enumerate(states):
print(f" State {i}: norm={np.linalg.norm(s):.10f}, dim={len(s)}")
Batch simulated 5 states State 0: norm=1.0000000000, dim=16 State 1: norm=1.0000000000, dim=16 State 2: norm=1.0000000000, dim=16 State 3: norm=1.0000000000, dim=16 State 4: norm=1.0000000000, dim=16
# Fidelity between states
f01 = compute_fidelity(states[0], states[1])
f00 = compute_fidelity(states[0], states[0])
print(f"Fidelity(state0, state0) = {f00:.6f} (self-fidelity = 1.0)")
print(f"Fidelity(state0, state1) = {f01:.6f} (different inputs < 1.0)")
assert np.isclose(f00, 1.0)
assert f01 < 1.0
Fidelity(state0, state0) = 1.000000 (self-fidelity = 1.0) Fidelity(state0, state1) = 0.062235 (different inputs < 1.0)
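For pure states, fidelity is the squared overlap $|\langle\psi|\phi\rangle|^2$. A NumPy one-liner equivalent (a sketch of what `compute_fidelity` presumably evaluates for statevectors):

```python
import numpy as np

def pure_state_fidelity(psi, phi):
    """|<psi|phi>|^2 for normalized statevectors."""
    return float(np.abs(np.vdot(psi, phi)) ** 2)

# Orthogonal states -> 0, identical states -> 1, partial overlap in between
e0 = np.array([1.0, 0.0])
e1 = np.array([0.0, 1.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2)
print(pure_state_fidelity(e0, e1))    # 0.0
print(pure_state_fidelity(e0, e0))    # 1.0
print(pure_state_fidelity(e0, plus))  # 0.5
```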
# Generate random parameters for sampling
from encoding_atlas.analysis import generate_random_parameters
params = generate_random_parameters(enc, n_samples=5, seed=42)
print(f"Random parameters shape: {params.shape}")
print(f"Range: [{params.min():.4f}, {params.max():.4f}]")
# Custom range
params_custom = generate_random_parameters(4, n_samples=3, param_min=-np.pi, param_max=np.pi, seed=42)
print(f"Custom range params: [{params_custom.min():.4f}, {params_custom.max():.4f}]")
Random parameters shape: (5, 4) Range: [0.4010, 6.1300] Custom range params: [-2.5499, 2.9884]
19. Partial Traces & Entanglement Inspection¶
Examine the entanglement structure by tracing out qubits and computing local state properties.
from encoding_atlas.analysis import partial_trace_single_qubit, partial_trace_subsystem
enc = DataReuploading(n_features=4, n_layers=3)
x = np.array([0.5, 1.0, 1.5, 2.0])
state = simulate_encoding_statevector(enc, x)
# Reduced density matrix for each qubit
print("Single-qubit reduced density matrices:")
for q in range(enc.n_qubits):
rho = partial_trace_single_qubit(state, enc.n_qubits, keep_qubit=q)
purity = compute_purity(rho)
entropy = compute_linear_entropy(rho)
print(f" Qubit {q}: purity={purity:.4f}, linear_entropy={entropy:.4f}")
# Purity < 1 means the qubit is entangled with others
if purity < 0.999:
print(f" -> Qubit {q} is entangled with the rest!")
Single-qubit reduced density matrices:
Qubit 0: purity=0.5221, linear_entropy=0.4779
-> Qubit 0 is entangled with the rest!
Qubit 1: purity=0.8319, linear_entropy=0.1681
-> Qubit 1 is entangled with the rest!
Qubit 2: purity=0.7820, linear_entropy=0.2180
-> Qubit 2 is entangled with the rest!
Qubit 3: purity=0.5182, linear_entropy=0.4818
-> Qubit 3 is entangled with the rest!
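The single-qubit reduced density matrix can also be obtained directly with NumPy by reshaping the statevector into a rank-$n$ tensor and contracting the other qubit indices. A minimal sketch of the operation `partial_trace_single_qubit` performs (the index convention here, qubit 0 as the most significant axis, is an assumption):

```python
import numpy as np

def reduced_density_matrix(state, n_qubits, keep):
    """Trace out all qubits except `keep` from a pure statevector."""
    psi = state.reshape([2] * n_qubits)
    psi = np.moveaxis(psi, keep, 0).reshape(2, -1)  # kept qubit's axis first
    return psi @ psi.conj().T                        # 2x2 reduced rho

# Bell state: each qubit is maximally mixed, so its purity is 1/2
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho0 = reduced_density_matrix(bell, 2, keep=0)
print(np.trace(rho0 @ rho0).real)  # 0.5
```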
# Von Neumann entropy for deeper analysis (reported in bits, so max = 1 per qubit)
print("Von Neumann entropy per qubit:")
for q in range(enc.n_qubits):
rho = partial_trace_single_qubit(state, enc.n_qubits, keep_qubit=q)
vn_entropy = compute_von_neumann_entropy(rho)
print(f" Qubit {q}: S = {vn_entropy:.4f} (max = {np.log2(2):.4f})")
Von Neumann entropy per qubit: Qubit 0: S = 0.9678 (max = 1.0000) Qubit 1: S = 0.4452 (max = 1.0000) Qubit 2: S = 0.5421 (max = 1.0000) Qubit 3: S = 0.9735 (max = 1.0000)
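Since the entropy is reported in bits (log base 2), the single-qubit maximum is 1. A sketch of the computation from the reduced density matrix's eigenvalues:

```python
import numpy as np

def von_neumann_entropy_bits(rho):
    """S(rho) = -sum_i p_i * log2(p_i) over nonzero eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop numerical zeros
    return float(-np.sum(evals * np.log2(evals)))

rho_mixed = np.eye(2) / 2                 # maximally mixed qubit -> 1 bit
rho_pure = np.array([[1.0, 0.0], [0.0, 0.0]])
print(von_neumann_entropy_bits(rho_mixed))  # 1.0
print(von_neumann_entropy_bits(rho_pure))   # effectively zero
```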
# Partial trace of a subsystem (keep 2 qubits)
rho_01 = partial_trace_subsystem(state, enc.n_qubits, keep_qubits=[0, 1])
print(f"Subsystem (qubits 0,1) density matrix shape: {rho_01.shape}")
print(f"Purity: {compute_purity(rho_01):.4f}")
Subsystem (qubits 0,1) density matrix shape: (4, 4) Purity: 0.4749
20. Expressibility Analysis¶
Expressibility measures how well an encoding covers the Hilbert space. The library reports a normalized score in [0, 1] (higher = closer to Haar-random), derived from the KL divergence between the encoding's fidelity distribution and the Haar baseline (lower KL divergence = higher expressibility).
from encoding_atlas.analysis import compute_expressibility
enc = DataReuploading(n_features=4, n_layers=3)
# Quick expressibility score (normalized, in [0, 1])
expr = compute_expressibility(enc, n_samples=300, seed=42)
print(f"Expressibility score: {expr:.6f}")
print(f"Higher = more expressive (closer to Haar-random)")
Expressibility score: 0.974895 Higher = more expressive (closer to Haar-random)
# Detailed expressibility with distributions
result = compute_expressibility(enc, n_samples=300, seed=42, return_distributions=True)
print("Expressibility Result keys:", list(result.keys()))
print(f" expressibility: {result['expressibility']:.6f}")
print(f" kl_divergence: {result['kl_divergence']:.6f}")
print(f" n_samples: {result['n_samples']}")
print(f" n_bins: {result['n_bins']}")
print(f" mean_fidelity: {result['mean_fidelity']:.6f}")
print(f" std_fidelity: {result['std_fidelity']:.6f}")
print(f" convergence: {result['convergence_estimate']:.6f}")
Expressibility Result keys: ['expressibility', 'kl_divergence', 'fidelity_distribution', 'haar_distribution', 'bin_edges', 'n_samples', 'n_bins', 'convergence_estimate', 'mean_fidelity', 'std_fidelity'] expressibility: 0.974895 kl_divergence: 0.251054 n_samples: 300 n_bins: 75 mean_fidelity: 0.061086 std_fidelity: 0.086151 convergence: 0.050273
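Under the hood, expressibility analysis compares the distribution of pairwise state fidelities against the Haar-random baseline, which for an $N$-dimensional Hilbert space has density $P_{\text{Haar}}(F) = (N-1)(1-F)^{N-2}$. A hedged sketch of the KL-divergence part of the pipeline (the binning choices here are illustrative, not necessarily the library's exact ones):

```python
import numpy as np

def haar_fidelity_pdf(f, dim):
    """Haar-random fidelity density: (N-1)(1-F)^(N-2)."""
    return (dim - 1) * (1 - f) ** (dim - 2)

def kl_to_haar(fidelities, dim, n_bins=75):
    """KL divergence of an empirical fidelity histogram vs. the Haar pdf."""
    hist, edges = np.histogram(fidelities, bins=n_bins, range=(0, 1), density=True)
    centers = (edges[:-1] + edges[1:]) / 2
    p = hist + 1e-12                       # empirical density (regularized)
    q = haar_fidelity_pdf(centers, dim) + 1e-12
    return float(np.sum(p * np.log(p / q)) / n_bins)  # Riemann sum, bin width 1/n_bins

# Samples drawn from the Haar fidelity law itself should give a small KL value
rng = np.random.default_rng(0)
dim = 16
samples = 1 - rng.random(20000) ** (1 / (dim - 1))  # inverse-CDF sampling
print(kl_to_haar(samples, dim))
```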
# Compare expressibility: more layers = more expressive
print(f"{'Layers':>6} {'Expressibility':>16}")
print("-" * 25)
for n_layers in [1, 2, 3, 5]:
enc = DataReuploading(n_features=4, n_layers=n_layers)
expr = compute_expressibility(enc, n_samples=200, seed=42)
print(f" {n_layers:>4} {expr:>14.6f}")
Layers Expressibility
-------------------------
1 0.932827
2 0.935242
3 0.971232
5 0.977173
21. Entanglement Capability¶
Measures the encoding's ability to generate entanglement, using the Meyer-Wallach measure (0 = product state, 1 = maximally entangled).
from encoding_atlas.analysis import (
compute_entanglement_capability,
compute_meyer_wallach,
compute_meyer_wallach_with_breakdown,
)
enc = DataReuploading(n_features=4, n_layers=3)
# Quick entanglement capability score
ent = compute_entanglement_capability(enc, n_samples=200, seed=42)
print(f"Entanglement capability (Meyer-Wallach): {ent:.6f}")
Entanglement capability (Meyer-Wallach): 0.573903
# Detailed result
result = compute_entanglement_capability(enc, n_samples=200, seed=42, return_details=True)
print("Entanglement Result keys:", list(result.keys()))
print(f" entanglement_capability: {result['entanglement_capability']:.6f}")
print(f" std_error: {result['std_error']:.6f}")
print(f" n_samples: {result['n_samples']}")
print(f" measure: {result['measure']}")
print(f" per_qubit_entanglement: {result['per_qubit_entanglement']}")
Entanglement Result keys: ['entanglement_capability', 'entanglement_samples', 'std_error', 'n_samples', 'per_qubit_entanglement', 'measure', 'scott_k'] entanglement_capability: 0.573903 std_error: 0.015439 n_samples: 200 measure: meyer_wallach per_qubit_entanglement: [0.22482305 0.28642685 0.2997739 0.33678132]
# Meyer-Wallach for a single state
x = np.array([0.5, 1.0, 1.5, 2.0])
state = simulate_encoding_statevector(enc, x)
mw = compute_meyer_wallach(state, enc.n_qubits)
print(f"Meyer-Wallach for single state: {mw:.6f}")
# With per-qubit breakdown
mw_val, per_qubit = compute_meyer_wallach_with_breakdown(state, enc.n_qubits)
print(f"Overall: {mw_val:.6f}")
for q, eq in enumerate(per_qubit):
print(f" Qubit {q}: {eq:.6f}")
Meyer-Wallach for single state: 0.672851 Overall: 0.672851 Qubit 0: 0.477866 Qubit 1: 0.168123 Qubit 2: 0.217956 Qubit 3: 0.481756
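The Meyer-Wallach measure is $Q = 2(1 - \overline{\text{purity}})$, averaging the purity of every single-qubit reduced state: 0 for product states, 1 when every qubit is maximally mixed. A NumPy sketch of the computation (index convention, qubit 0 as the most significant axis, is an assumption):

```python
import numpy as np

def meyer_wallach(state, n_qubits):
    """Q = 2 * (1 - average purity of single-qubit reduced states)."""
    purities = []
    for q in range(n_qubits):
        psi = np.moveaxis(state.reshape([2] * n_qubits), q, 0).reshape(2, -1)
        rho = psi @ psi.conj().T
        purities.append(np.trace(rho @ rho).real)
    return float(2 * (1 - np.mean(purities)))

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
product = np.kron([1, 0], [0, 1]).astype(float)   # |01>, no entanglement
print(meyer_wallach(bell, 2))     # 1.0 (maximally entangled)
print(meyer_wallach(product, 2))  # 0.0 (product state)
```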
# Single-qubit encoding: entanglement requires >= 2 qubits
enc_1q = DataReuploading(n_features=1, n_layers=5)
try:
compute_entanglement_capability(enc_1q, n_samples=100, seed=42)
except ValueError as e:
print(f"Single-qubit entanglement raises ValueError (expected):")
print(f" {e}")
print("\nThis is correct — entanglement is undefined for a single qubit.")
print(f" is_entangling property: {enc_1q.properties.is_entangling}")
Single-qubit entanglement raises ValueError (expected): Entanglement requires at least 2 qubits. Encoding 'DataReuploading' has 1 qubit(s). Consider using an encoding with more features or a different encoding type that uses multiple qubits. This is correct — entanglement is undefined for a single qubit. is_entangling property: False
22. Trainability Analysis¶
Detects barren plateau risk by analyzing gradient variance. Higher variance = better trainability.
from encoding_atlas.analysis import (
estimate_trainability,
compute_gradient_variance,
detect_barren_plateau,
)
enc = DataReuploading(n_features=4, n_layers=3)
# Quick trainability score
train = estimate_trainability(enc, n_samples=100, seed=42)
print(f"Trainability score: {train:.6f}")
Trainability score: 0.050956
# Detailed trainability result
result = estimate_trainability(enc, n_samples=100, seed=42, return_details=True)
print("Trainability Result keys:", list(result.keys()))
print(f" trainability_estimate: {result['trainability_estimate']:.6f}")
print(f" gradient_variance: {result['gradient_variance']:.8f}")
print(f" barren_plateau_risk: {result['barren_plateau_risk']}")
print(f" effective_dimension: {result['effective_dimension']}")
print(f" n_samples: {result['n_samples']}")
print(f" n_successful_samples: {result['n_successful_samples']}")
print(f" n_failed_samples: {result['n_failed_samples']}")
print(f" per_parameter_variance: {result['per_parameter_variance']}")
Trainability Result keys: ['trainability_estimate', 'gradient_variance', 'barren_plateau_risk', 'effective_dimension', 'n_samples', 'n_successful_samples', 'per_parameter_variance', 'n_failed_samples'] trainability_estimate: 0.050956 gradient_variance: 0.00344948 barren_plateau_risk: low effective_dimension: 4.0 n_samples: 100 n_successful_samples: 100 n_failed_samples: 0 per_parameter_variance: [0.00261167 0.00370518 0.00420536 0.0032757 ]
# Gradient variance directly
gv = compute_gradient_variance(enc, n_samples=100, seed=42)
print(f"Gradient variance: {gv:.8f}")
# Barren plateau detection from variance
# Requires: gradient_variance, n_qubits, n_params
n_params = enc.properties.parameter_count
risk = detect_barren_plateau(gv, n_qubits=enc.n_qubits, n_params=n_params)
print(f"Barren plateau risk: {risk}")
Gradient variance: 0.00344948 Barren plateau risk: low
# Trainability with different observables
for obs in ["computational", "pauli_z", "global_z"]:
t = estimate_trainability(enc, n_samples=100, seed=42, observable=obs)
print(f" Observable '{obs}': trainability = {t:.6f}")
Observable 'computational': trainability = 0.050956 Observable 'pauli_z': trainability = 0.526270 Observable 'global_z': trainability = 0.495743
23. Classical Simulability¶
Check whether the encoding admits efficient classical simulation (Clifford, matchgate, or bounded-entanglement tensor-network classes).
from encoding_atlas.analysis import (
check_simulability,
get_simulability_reason,
is_clifford_circuit,
is_matchgate_circuit,
)
# Multi-qubit: NOT simulable (entangling)
enc_multi = DataReuploading(n_features=4, n_layers=3)
result = check_simulability(enc_multi, detailed=True)
print("=== Multi-qubit DataReuploading ===")
print(f" is_simulable: {result['is_simulable']}")
print(f" simulability_class: {result['simulability_class']}")
print(f" reason: {result['reason']}")
print(f" Clifford: {is_clifford_circuit(enc_multi)}")
print(f" Matchgate: {is_matchgate_circuit(enc_multi)}")
print(f"\nRecommendations:")
for rec in result['recommendations']:
print(f" - {rec}")
=== Multi-qubit DataReuploading === is_simulable: False simulability_class: conditionally_simulable reason: Linear entanglement structure may allow tensor network simulation if entanglement entropy is bounded Clifford: False Matchgate: False Recommendations: - Statevector simulation feasible (4 qubits, ~256 bytes memory) - Consider MPS (Matrix Product State) simulation - May be efficient if entanglement entropy is bounded - Tensor network methods scale with bond dimension
# Single-qubit: simulable
enc_1q = DataReuploading(n_features=1, n_layers=5)
result_1q = check_simulability(enc_1q, detailed=True)
print("=== Single-qubit DataReuploading ===")
print(f" is_simulable: {result_1q['is_simulable']}")
print(f" simulability_class: {result_1q['simulability_class']}")
print(f" reason: {result_1q['reason']}")
# Quick reason string
reason = get_simulability_reason(enc_1q)
print(f"\nQuick reason: {reason}")
=== Single-qubit DataReuploading === is_simulable: True simulability_class: simulable reason: Encoding produces only product states (no entanglement) Quick reason: Simulable: Encoding produces only product states (no entanglement)
24. Resource Counting & Comparison¶
Compare DataReuploading with other encodings using the analysis module's resource tools.
from encoding_atlas.analysis import (
count_resources,
compare_resources,
estimate_execution_time,
)
from encoding_atlas import AngleEncoding, IQPEncoding
enc_dr = DataReuploading(n_features=4, n_layers=3)
# Count resources
resources = count_resources(enc_dr)
print("Resource count:")
for k, v in resources.items():
print(f" {k}: {v}")
Resource count: n_qubits: 4 depth: 12 gate_count: 21 single_qubit_gates: 12 two_qubit_gates: 9 parameter_count: 12 cnot_count: 0 cz_count: 0 t_gate_count: 0 hadamard_count: 0 rotation_gates: 0 two_qubit_ratio: 0.42857142857142855 gates_per_qubit: 5.25 encoding_name: DataReuploading is_data_dependent: False
# Estimate execution time
times = estimate_execution_time(enc_dr)
print("Estimated execution times:")
for k, v in times.items():
# Only timing keys carry microsecond units; parallelization_factor is a ratio
if k.endswith('_us') and isinstance(v, float):
print(f" {k}: {v:.4f} μs")
else:
print(f" {k}: {v}")
# With custom gate times (e.g., trapped ions)
times_ion = estimate_execution_time(
enc_dr,
single_qubit_gate_time_us=1.0,
two_qubit_gate_time_us=100.0,
)
print(f"\nTrapped ion estimated time: {times_ion['estimated_time_us']:.2f} μs")
Estimated execution times: serial_time_us: 3.0400 μs estimated_time_us: 3.4000 μs single_qubit_time_us: 0.2400 μs two_qubit_time_us: 1.8000 μs measurement_time_us: 1.0000 μs parallelization_factor: 0.5 Trapped ion estimated time: 1201.00 μs
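The printed numbers are consistent with a simple model (the gate times below are inferred from the output, not documented library constants): serial time sums every gate plus measurement, while the estimated time assumes each layer runs in parallel, i.e. depth × slowest gate time + measurement:

```python
# Inferred defaults: 0.02 us single-qubit, 0.2 us two-qubit, 1.0 us measurement,
# applied to the circuit above: 12 RY gates, 9 CNOTs, depth 12.
t1, t2, t_meas = 0.02, 0.2, 1.0
serial = 12 * t1 + 9 * t2 + t_meas        # 0.24 + 1.8 + 1.0 = 3.04 us
estimated = 12 * t2 + t_meas              # depth * two-qubit time + meas = 3.40 us
print(serial, estimated)

# Trapped-ion parameters from the example above (two-qubit gate: 100 us)
estimated_ion = 12 * 100.0 + 1.0          # 1201.0 us, matching the output
print(estimated_ion)
```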
# Compare resources across encodings
encodings = [
DataReuploading(n_features=4, n_layers=3),
AngleEncoding(n_features=4),
IQPEncoding(n_features=4),
]
comparison = compare_resources(encodings)
print("Resource Comparison:")
for key, values in comparison.items():
print(f" {key}: {values}")
Resource Comparison: n_qubits: [4, 4, 4] depth: [12, 1, 6] gate_count: [21, 4, 52] single_qubit_gates: [12, 4, 28] two_qubit_gates: [9, 0, 24] parameter_count: [12, 4, 20] two_qubit_ratio: [0.42857142857142855, 0.0, 0.46153846153846156] gates_per_qubit: [5.25, 1.0, 13.0] encoding_name: ['DataReuploading', 'AngleEncoding', 'IQPEncoding']
25. Capability Protocols¶
The library uses runtime-checkable protocols to expose optional capabilities. This follows the Interface Segregation Principle — encodings only implement what makes sense for them.
from encoding_atlas.core.protocols import (
ResourceAnalyzable,
EntanglementQueryable,
DataDependentResourceAnalyzable,
DataTransformable,
is_resource_analyzable,
is_entanglement_queryable,
is_data_dependent_resource_analyzable,
is_data_transformable,
)
enc = DataReuploading(n_features=4, n_layers=3)
print("Protocol checks for DataReuploading:")
print(f" ResourceAnalyzable: {isinstance(enc, ResourceAnalyzable)}")
print(f" EntanglementQueryable: {isinstance(enc, EntanglementQueryable)}")
print(f" DataDependentResourceAnalyzable: {isinstance(enc, DataDependentResourceAnalyzable)}")
print(f" DataTransformable: {isinstance(enc, DataTransformable)}")
print(f"\nType guard functions:")
print(f" is_resource_analyzable(enc): {is_resource_analyzable(enc)}")
print(f" is_entanglement_queryable(enc): {is_entanglement_queryable(enc)}")
Protocol checks for DataReuploading: ResourceAnalyzable: True EntanglementQueryable: True DataDependentResourceAnalyzable: False DataTransformable: False Type guard functions: is_resource_analyzable(enc): True is_entanglement_queryable(enc): True
# Use protocols for generic analysis
def analyze_encoding(enc):
"""Generic encoding analysis using protocols."""
print(f"Analyzing: {enc}")
if isinstance(enc, ResourceAnalyzable):
summary = enc.resource_summary()
breakdown = enc.gate_count_breakdown()
print(f" Total gates: {breakdown['total']}")
print(f" Depth: {summary['depth']}")
if isinstance(enc, EntanglementQueryable):
pairs = enc.get_entanglement_pairs()
print(f" Entanglement pairs: {len(pairs)}")
print()
# Works with DataReuploading
analyze_encoding(DataReuploading(n_features=4, n_layers=3))
# Works with other encodings too
analyze_encoding(AngleEncoding(n_features=4))
analyze_encoding(IQPEncoding(n_features=4))
Analyzing: DataReuploading(n_features=4, n_layers=3, n_qubits=4) Total gates: 21 Depth: 12 Entanglement pairs: 3 Analyzing: AngleEncoding(n_features=4, rotation='Y', reps=1) Total gates: 4 Depth: 1 Analyzing: IQPEncoding(n_features=4, reps=2, entanglement='full') Total gates: 52 Depth: 6 Entanglement pairs: 6
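This capability pattern is standard `typing.Protocol` with `@runtime_checkable`: `isinstance` performs a structural check for the required methods rather than an inheritance check. A minimal sketch (the class names here are illustrative, not the library's actual definitions, which live in `encoding_atlas.core.protocols`):

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class SupportsResourceSummary(Protocol):
    def resource_summary(self) -> dict: ...

class Toy:
    def resource_summary(self) -> dict:
        return {"depth": 1}

class Empty:
    pass

# No inheritance needed: the check is purely structural
print(isinstance(Toy(), SupportsResourceSummary))    # True
print(isinstance(Empty(), SupportsResourceSummary))  # False
```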
26. Registry System¶
Encodings are registered under string names and can be listed or instantiated via the registry.
from encoding_atlas import get_encoding, list_encodings
# List all registered encodings
all_encodings = list_encodings()
print(f"Registered encodings ({len(all_encodings)}):")
for name in all_encodings:
print(f" - {name}")
Registered encodings (26): - amplitude - angle - angle_ry - basis - covariant - covariant_feature_map - cyclic_equivariant - cyclic_equivariant_feature_map - data_reuploading - hamiltonian - hamiltonian_encoding - hardware_efficient - higher_order_angle - iqp - pauli_feature_map - qaoa - qaoa_encoding - so2_equivariant - so2_equivariant_feature_map - swap_equivariant - swap_equivariant_feature_map - symmetry_inspired - symmetry_inspired_feature_map - trainable - trainable_encoding - zz_feature_map
# Create DataReuploading via registry
enc_reg = get_encoding("data_reuploading", n_features=4, n_layers=5)
print(f"Created via registry: {enc_reg}")
print(f" Type: {type(enc_reg).__name__}")
assert isinstance(enc_reg, DataReuploading)
print("Registry creation verified!")
Created via registry: DataReuploading(n_features=4, n_layers=5, n_qubits=4) Type: DataReuploading Registry creation verified!
# Registry error for unknown encoding
from encoding_atlas.core.exceptions import RegistryError
try:
get_encoding("nonexistent_encoding", n_features=4)
except RegistryError as e:
print(f"RegistryError: {e}")
RegistryError: Unknown encoding 'nonexistent_encoding'. Available encodings: amplitude, angle, angle_ry, basis, covariant, covariant_feature_map, cyclic_equivariant, cyclic_equivariant_feature_map, data_reuploading, hamiltonian, hamiltonian_encoding, hardware_efficient, higher_order_angle, iqp, pauli_feature_map, qaoa, qaoa_encoding, so2_equivariant, so2_equivariant_feature_map, swap_equivariant, swap_equivariant_feature_map, symmetry_inspired, symmetry_inspired_feature_map, trainable, trainable_encoding, zz_feature_map
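A dict-backed registry like this is straightforward to build; a minimal sketch of the pattern (not the library's actual implementation):

```python
_REGISTRY = {}

def register(name):
    """Class decorator that records an encoding class under a string key."""
    def wrap(cls):
        _REGISTRY[name] = cls
        return cls
    return wrap

def get(name, **kwargs):
    """Instantiate a registered encoding, with a helpful error for unknown names."""
    if name not in _REGISTRY:
        raise KeyError(f"Unknown encoding '{name}'. "
                       f"Available: {', '.join(sorted(_REGISTRY))}")
    return _REGISTRY[name](**kwargs)

@register("toy")
class ToyEncoding:
    def __init__(self, n_features):
        self.n_features = n_features

enc = get("toy", n_features=4)
print(type(enc).__name__, enc.n_features)  # ToyEncoding 4
```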
27. Equality, Hashing & Collections¶
Encodings support equality comparison and hashing, so they work in sets and as dictionary keys.
# Equality
enc1 = DataReuploading(n_features=4, n_layers=3)
enc2 = DataReuploading(n_features=4, n_layers=3)
enc3 = DataReuploading(n_features=4, n_layers=5)
enc4 = DataReuploading(n_features=3, n_layers=3)
print("Equality tests:")
print(f" enc1 == enc2 (same params): {enc1 == enc2}")
print(f" enc1 == enc3 (diff n_layers): {enc1 == enc3}")
print(f" enc1 == enc4 (diff n_features): {enc1 == enc4}")
print(f" enc1 == 'string': {enc1 == 'not an encoding'}")
Equality tests: enc1 == enc2 (same params): True enc1 == enc3 (diff n_layers): False enc1 == enc4 (diff n_features): False enc1 == 'string': False
# Hashing
h1 = hash(enc1)
h2 = hash(enc2)
h3 = hash(enc3)
print(f"hash(enc1) == hash(enc2): {h1 == h2}")
print(f"hash(enc1) == hash(enc3): {h1 == h3}")
# Use in sets
encoding_set = {enc1, enc2, enc3, enc4}
print(f"\nSet size (4 added, duplicates removed): {len(encoding_set)}")
# Use as dictionary keys
encoding_dict = {enc1: "standard", enc3: "deep"}
print(f"Dict lookup enc1: {encoding_dict[enc1]}")
print(f"Dict lookup enc2 (same as enc1): {encoding_dict[enc2]}")
hash(enc1) == hash(enc2): True hash(enc1) == hash(enc3): False Set size (4 added, duplicates removed): 3 Dict lookup enc1: standard Dict lookup enc2 (same as enc1): standard
28. Serialization (Pickle)¶
Encodings support pickle serialization for saving/loading. The thread lock is excluded during pickling and recreated during unpickling.
import pickle
enc = DataReuploading(n_features=4, n_layers=3)
# Access properties to ensure they're cached
_ = enc.properties
# Serialize
data = pickle.dumps(enc)
print(f"Serialized size: {len(data)} bytes")
# Deserialize
enc_loaded = pickle.loads(data)
print(f"Loaded: {enc_loaded}")
# Verify state is preserved
print(f"\nState verification:")
print(f" n_features: {enc_loaded.n_features} == {enc.n_features}: {enc_loaded.n_features == enc.n_features}")
print(f" n_layers: {enc_loaded.n_layers} == {enc.n_layers}: {enc_loaded.n_layers == enc.n_layers}")
print(f" n_qubits: {enc_loaded.n_qubits} == {enc.n_qubits}: {enc_loaded.n_qubits == enc.n_qubits}")
print(f" depth: {enc_loaded.depth} == {enc.depth}: {enc_loaded.depth == enc.depth}")
print(f" equality: {enc_loaded == enc}")
# Properties are preserved (no recomputation needed)
print(f" properties preserved: {enc_loaded.properties == enc.properties}")
Serialized size: 712 bytes Loaded: DataReuploading(n_features=4, n_layers=3, n_qubits=4) State verification: n_features: 4 == 4: True n_layers: 3 == 3: True n_qubits: 4 == 4: True depth: 12 == 12: True equality: True properties preserved: True
# Pickle round-trip produces identical circuits
x = np.array([0.5, 1.0, 1.5, 2.0])
state_orig = simulate_encoding_statevector(enc, x)
state_loaded = simulate_encoding_statevector(enc_loaded, x)
print(f"States match after pickle: {np.allclose(state_orig, state_loaded)}")
States match after pickle: True
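Excluding a lock from pickling is typically done with `__getstate__`/`__setstate__`; a sketch of the pattern on a hypothetical class (not the library's code):

```python
import pickle
import threading

class LazyCached:
    def __init__(self):
        self._lock = threading.Lock()   # thread locks are not picklable
        self._cache = None

    def __getstate__(self):
        state = self.__dict__.copy()
        del state["_lock"]              # drop the unpicklable lock
        return state

    def __setstate__(self, state):
        self.__dict__.update(state)
        self._lock = threading.Lock()   # recreate a fresh lock on load

obj = LazyCached()
obj._cache = {"depth": 12}
clone = pickle.loads(pickle.dumps(obj))
print(clone._cache, isinstance(clone._lock, type(threading.Lock())))
```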
29. Thread Safety¶
DataReuploading is designed to be thread-safe:
- Properties use double-checked locking for lazy initialization
- Input validation creates defensive copies
- Circuit generation is stateless
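Double-checked locking for a lazily computed property looks like this (a sketch of the pattern, not the library's code): the unlocked fast path avoids contention once the value exists, and the second check inside the lock ensures only one thread performs the computation.

```python
import threading

class Lazy:
    def __init__(self, compute):
        self._compute = compute
        self._value = None
        self._lock = threading.Lock()

    @property
    def value(self):
        if self._value is None:            # fast path: no lock once cached
            with self._lock:
                if self._value is None:    # re-check under the lock
                    self._value = self._compute()
        return self._value

calls = []
lazy = Lazy(lambda: calls.append(1) or "props")
threads = [threading.Thread(target=lambda: lazy.value) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(lazy.value, len(calls))  # props 1 -- computed exactly once
```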
# Concurrent property access
enc = DataReuploading(n_features=4, n_layers=3)
results = []
errors = []
def access_properties(idx):
try:
props = enc.properties
results.append((idx, props.n_qubits, props.depth))
except Exception as e:
errors.append((idx, e))
threads = [threading.Thread(target=access_properties, args=(i,)) for i in range(10)]
for t in threads:
t.start()
for t in threads:
t.join()
print(f"Thread results ({len(results)} threads):")
print(f" All got same n_qubits: {all(r[1] == 4 for r in results)}")
print(f" All got same depth: {all(r[2] == enc.depth for r in results)}")
print(f" Errors: {len(errors)}")
Thread results (10 threads): All got same n_qubits: True All got same depth: True Errors: 0
# Concurrent circuit generation
enc = DataReuploading(n_features=4, n_layers=3)
X = np.random.default_rng(42).uniform(0, 2*np.pi, size=(20, 4))
# Use get_circuits with parallel=True
circuits = enc.get_circuits(X, backend='pennylane', parallel=True, max_workers=4)
print(f"Generated {len(circuits)} circuits in parallel")
print(f"All callable: {all(callable(c) for c in circuits)}")
Generated 20 circuits in parallel All callable: True
30. Logging & Debugging¶
The module supports Python's standard logging for debugging circuit generation.
import logging
# Enable debug logging for the data_reuploading module
logger = logging.getLogger('encoding_atlas.encodings.data_reuploading')
logger.setLevel(logging.DEBUG)
# Create a handler to capture log output
handler = logging.StreamHandler()
handler.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(levelname)s: %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
# Now operations will produce debug output
enc = DataReuploading(n_features=4, n_layers=3)
x = np.array([0.1, 0.2, 0.3, 0.4])
_ = enc.get_circuit(x, backend='pennylane')
_ = enc.gate_count_breakdown()
# Clean up
logger.removeHandler(handler)
logger.setLevel(logging.WARNING)
print("\n(Debug logging demonstrated above)")
DEBUG: DataReuploading initialized: n_features=4, n_layers=3, n_qubits=4, n_entanglement_pairs=3 DEBUG: Gate breakdown: RY=12, CNOT=9, total=21 (per layer: 7)
(Debug logging demonstrated above)
31. Comparison with Other Encodings¶
Compare DataReuploading side-by-side with AngleEncoding and IQPEncoding.
from encoding_atlas import AngleEncoding, IQPEncoding
# Create comparable encodings
enc_dr = DataReuploading(n_features=4, n_layers=3)
enc_angle = AngleEncoding(n_features=4)
enc_iqp = IQPEncoding(n_features=4)
encodings = {
"DataReuploading": enc_dr,
"AngleEncoding": enc_angle,
"IQPEncoding": enc_iqp,
}
print(f"{'Property':<25} {'DataReuploading':>16} {'AngleEncoding':>16} {'IQPEncoding':>16}")
print("-" * 78)
p_dr = enc_dr.properties
p_angle = enc_angle.properties
p_iqp = enc_iqp.properties
props_to_show = [
("n_qubits", p_dr.n_qubits, p_angle.n_qubits, p_iqp.n_qubits),
("depth", p_dr.depth, p_angle.depth, p_iqp.depth),
("gate_count", p_dr.gate_count, p_angle.gate_count, p_iqp.gate_count),
("single_qubit_gates", p_dr.single_qubit_gates, p_angle.single_qubit_gates, p_iqp.single_qubit_gates),
("two_qubit_gates", p_dr.two_qubit_gates, p_angle.two_qubit_gates, p_iqp.two_qubit_gates),
("is_entangling", p_dr.is_entangling, p_angle.is_entangling, p_iqp.is_entangling),
("simulability", p_dr.simulability, p_angle.simulability, p_iqp.simulability),
]
for prop_name, dr_val, angle_val, iqp_val in props_to_show:
print(f" {prop_name:<23} {str(dr_val):>16} {str(angle_val):>16} {str(iqp_val):>16}")
Property DataReuploading AngleEncoding IQPEncoding ------------------------------------------------------------------------------ n_qubits 4 4 4 depth 12 1 6 gate_count 21 4 52 single_qubit_gates 12 4 28 two_qubit_gates 9 0 24 is_entangling True False True simulability not_simulable simulable not_simulable
# Compare entanglement capability
print("Entanglement capability comparison:")
for name, enc in encodings.items():
if enc.n_qubits > 1:
ent = compute_entanglement_capability(enc, n_samples=200, seed=42)
print(f" {name}: {ent:.6f}")
else:
print(f" {name}: N/A (single qubit)")
Entanglement capability comparison: DataReuploading: 0.573903 AngleEncoding: 0.000000 IQPEncoding: 0.748934
32. Edge Cases & Numerical Robustness¶
Verify the encoding handles extreme and degenerate inputs gracefully.
# Zero-valued inputs
enc = DataReuploading(n_features=4, n_layers=3)
x_zeros = np.zeros(4)
state = simulate_encoding_statevector(enc, x_zeros)
print(f"Zero input state norm: {np.linalg.norm(state):.10f}")
assert np.isclose(np.linalg.norm(state), 1.0)
print("Zero-valued inputs: OK")
Zero input state norm: 1.0000000000 Zero-valued inputs: OK
# Very small values (near machine epsilon)
x_tiny = np.array([1e-15, 1e-16, 1e-17, 1e-18])
state = simulate_encoding_statevector(enc, x_tiny)
print(f"Tiny input state norm: {np.linalg.norm(state):.10f}")
assert np.isclose(np.linalg.norm(state), 1.0)
print("Very small values: OK")
Tiny input state norm: 1.0000000000 Very small values: OK
# Large values (RY rotations are 2π-periodic up to a global phase, so still valid)
x_large = np.array([1e5, 2e5, 3e5, 4e5])
state = simulate_encoding_statevector(enc, x_large)
print(f"Large input state norm: {np.linalg.norm(state):.10f}")
assert np.isclose(np.linalg.norm(state), 1.0)
print("Large values: OK (RY is 2π-periodic up to a global phase)")
Large input state norm: 1.0000000000 Large values: OK (RY is 2π-periodic up to a global phase)
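Strictly, the RY matrix is 4π-periodic: $RY(\theta + 2\pi) = -RY(\theta)$, which differs only by a global phase of $-1$ and is therefore physically equivalent. A quick NumPy check:

```python
import numpy as np

def ry(theta):
    """RY rotation matrix: [[cos(t/2), -sin(t/2)], [sin(t/2), cos(t/2)]]."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

theta = 1.2345
print(np.allclose(ry(theta + 2 * np.pi), -ry(theta)))  # True: global phase -1
print(np.allclose(ry(theta + 4 * np.pi), ry(theta)))   # True: exact period is 4*pi
```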
# Values near π multiples
x_pi = np.array([np.pi, 2*np.pi, np.pi/2, 3*np.pi/2])
state = simulate_encoding_statevector(enc, x_pi)
print(f"Pi-multiple input state norm: {np.linalg.norm(state):.10f}")
assert np.isclose(np.linalg.norm(state), 1.0)
print("Values near π multiples: OK")
Pi-multiple input state norm: 1.0000000000 Values near π multiples: OK
# Single feature encoding
enc_1 = DataReuploading(n_features=1, n_layers=3)
x1 = np.array([1.5])
state = simulate_encoding_statevector(enc_1, x1)
print(f"Single feature: state={state}, norm={np.linalg.norm(state):.10f}")
print("Single feature: OK")
Single feature: state=[-0.62817362+0.j 0.7780732 +0.j], norm=1.0000000000 Single feature: OK
# Many features
enc_32 = DataReuploading(n_features=32, n_layers=2, n_qubits=4)
x32 = np.random.default_rng(42).uniform(0, 2*np.pi, 32)
state = simulate_encoding_statevector(enc_32, x32)
print(f"32 features on 4 qubits:")
print(f" depth: {enc_32.depth}")
print(f" gate count: {enc_32.properties.gate_count}")
print(f" state norm: {np.linalg.norm(state):.10f}")
print("Many features with cyclic mapping: OK")
32 features on 4 qubits: depth: 22 gate count: 70 state norm: 1.0000000000 Many features with cyclic mapping: OK
# Mixed extreme magnitudes
x_mixed = np.array([1e-10, 1e5, 1e-8, 1e3])
state = simulate_encoding_statevector(enc, x_mixed)
print(f"Mixed extreme magnitudes state norm: {np.linalg.norm(state):.10f}")
assert np.isclose(np.linalg.norm(state), 1.0)
print("Mixed extreme magnitudes: OK")
Mixed extreme magnitudes state norm: 1.0000000000 Mixed extreme magnitudes: OK
33. Gradient Computation¶
The analysis module provides parameter-shift rule based gradient computation.
from encoding_atlas.analysis import compute_parameter_gradient, compute_all_parameter_gradients
enc = DataReuploading(n_features=4, n_layers=3)
x = np.array([0.5, 1.0, 1.5, 2.0])
# Gradient for a single parameter
grad_0 = compute_parameter_gradient(enc, x, param_index=0)
print(f"Gradient w.r.t. parameter 0: {grad_0:.8f}")
# Gradients for all parameters
all_grads = compute_all_parameter_gradients(enc, x)
print(f"\nAll gradients ({len(all_grads)} parameters):")
for i, g in enumerate(all_grads):
print(f" param {i}: {g:+.8f}")
Gradient w.r.t. parameter 0: 0.00281666 All gradients (4 parameters): param 0: +0.00281666 param 1: +0.07958085 param 2: -0.08236216 param 3: +0.02910466
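The parameter-shift rule gives exact gradients for rotation gates: $\partial_\theta \langle O \rangle = \tfrac{1}{2}\left[f(\theta + \pi/2) - f(\theta - \pi/2)\right]$. A toy check on a single RY, where $\langle Z \rangle = \cos\theta$ and the analytic derivative is $-\sin\theta$:

```python
import numpy as np

def expval_z(theta):
    """<Z> after RY(theta) applied to |0>: equals cos(theta)."""
    return np.cos(theta)

def parameter_shift(f, theta):
    """Parameter-shift gradient: [f(t + pi/2) - f(t - pi/2)] / 2."""
    return (f(theta + np.pi / 2) - f(theta - np.pi / 2)) / 2

theta = 0.7
grad = parameter_shift(expval_z, theta)
print(grad, -np.sin(theta))  # identical: the rule is exact for rotation gates
```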
34. Exception Hierarchy¶
All library errors derive from EncodingError, so a single except clause can catch everything.
from encoding_atlas.core.exceptions import (
EncodingError,
ValidationError,
BackendError,
RegistryError,
AnalysisError,
SimulationError,
ConvergenceError,
NumericalInstabilityError,
InsufficientSamplesError,
)
# Show the hierarchy
print("Exception Hierarchy:")
print(" EncodingError (base)")
print(" ├── ValidationError")
print(" ├── BackendError")
print(" ├── RegistryError")
print(" └── AnalysisError")
print(" ├── SimulationError")
print(" ├── ConvergenceError")
print(" ├── NumericalInstabilityError")
print(" └── InsufficientSamplesError")
# All are catchable as EncodingError
print("\nInheritance verification:")
print(f" SimulationError -> AnalysisError: {issubclass(SimulationError, AnalysisError)}")
print(f" AnalysisError -> EncodingError: {issubclass(AnalysisError, EncodingError)}")
print(f" ValidationError -> EncodingError: {issubclass(ValidationError, EncodingError)}")
Exception Hierarchy:
EncodingError (base)
├── ValidationError
├── BackendError
├── RegistryError
└── AnalysisError
├── SimulationError
├── ConvergenceError
├── NumericalInstabilityError
└── InsufficientSamplesError
Inheritance verification:
SimulationError -> AnalysisError: True
AnalysisError -> EncodingError: True
ValidationError -> EncodingError: True
# AnalysisError has a details dict
try:
raise AnalysisError("Test error", details={"param": 42, "context": "demo"})
except AnalysisError as e:
print(f"Message: {e}")
print(f"Details: {e.details}")
# SimulationError has backend info
try:
raise SimulationError("Sim failed", backend="pennylane", details={"n_qubits": 20})
except SimulationError as e:
print(f"\nSimulationError: {e}")
print(f" Backend: {e.backend}")
print(f" Details: {e.details}")
Message: Test error
Details: {'param': 42, 'context': 'demo'}
SimulationError: Sim failed
Backend: pennylane
Details: {'n_qubits': 20}
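The pattern above — a common base class plus structured payloads (`details`, `backend`) on specific subclasses — is easy to replicate in your own code. A minimal sketch of how such a hierarchy could be structured; the actual class definitions in `encoding_atlas.core.exceptions` may differ:

```python
class EncodingError(Exception):
    """Base class: one `except EncodingError` catches everything below."""

class AnalysisError(EncodingError):
    def __init__(self, message, details=None):
        super().__init__(message)
        self.details = details or {}  # structured context for debugging

class SimulationError(AnalysisError):
    def __init__(self, message, backend=None, details=None):
        super().__init__(message, details=details)
        self.backend = backend  # which backend raised the failure

try:
    raise SimulationError("Sim failed", backend="pennylane", details={"n_qubits": 20})
except EncodingError as e:  # single clause covers the whole hierarchy
    print(type(e).__name__, e.backend, e.details)
```

Keeping machine-readable context out of the message string lets callers branch on `e.details` without parsing text.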
35. Summary & Best Practices¶
Features Covered¶
This notebook demonstrated every feature of the DataReuploading encoding:
| Feature | Section |
|---|---|
| Constructor & Parameters | §2-3 |
| Core Properties | §4 |
| Depth Formula | §5 |
| EncodingProperties | §6 |
| Gate Count Breakdown | §7 |
| Resource Summary | §8 |
| Entanglement Pairs | §9 |
| PennyLane Backend | §10 |
| Qiskit Backend | §11 |
| Cirq Backend | §12 |
| Batch Processing | §13 |
| Input Validation | §14 |
| Cyclic Feature Mapping | §15 |
| Single-Qubit Mode | §16 |
| Deep Circuit Warning | §17 |
| Statevector Simulation | §18 |
| Partial Traces | §19 |
| Expressibility | §20 |
| Entanglement Capability | §21 |
| Trainability | §22 |
| Simulability | §23 |
| Resource Comparison | §24 |
| Capability Protocols | §25 |
| Registry System | §26 |
| Equality & Hashing | §27 |
| Serialization | §28 |
| Thread Safety | §29 |
| Logging | §30 |
| Encoding Comparison | §31 |
| Edge Cases | §32 |
| Gradient Computation | §33 |
| Exception Hierarchy | §34 |
Best Practices¶
- Layer Selection: Start with `n_layers=3` (default). Increase for complex functions, decrease if trainability is poor.
- Qubit Efficiency: Use `n_qubits < n_features` with cyclic mapping to reduce hardware requirements.
- Backend Choice: Use PennyLane for simulation, Qiskit for IBM hardware, Cirq for Google hardware.
- Input Scaling: Scale features to $[0, 2\pi]$ or $[-\pi, \pi]$ for best results.
- Trainability Monitoring: Keep `n_layers` $\leq 10$ to avoid barren plateaus. Use the `estimate_trainability()` tool to check.
- Universal Approximation: Add trainable parameters between data uploads for full universal approximation capability.
- Parallel Processing: Use `parallel=True` in `get_circuits()` for large batches (>100 samples).
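The input-scaling recommendation is a plain min-max rescale per feature. A small helper sketch (the function name is illustrative, not part of the library):

```python
import numpy as np

def scale_to_interval(X, lo=-np.pi, hi=np.pi):
    # Min-max scale each feature column into [lo, hi] for angle encoding
    X = np.asarray(X, dtype=float)
    mn, mx = X.min(axis=0), X.max(axis=0)
    span = np.where(mx > mn, mx - mn, 1.0)  # guard against constant columns
    return lo + (X - mn) / span * (hi - lo)

X = np.array([[0.0, 10.0],
              [5.0, 20.0],
              [10.0, 30.0]])
print(scale_to_interval(X))
```

Fit the min/max on the training set only and reuse them at inference time, so unseen data maps consistently into the encoding range.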
Mathematical Summary¶
$$\text{Circuit: } |\psi(x)\rangle = [U_{\text{CNOT}} \cdot \prod_i RY(x_i)]^L |0\rangle^{\otimes n}$$
$$\text{Depth: } d = L \times \left(\lceil n_f / n_q \rceil + (n_q - 1)\right)$$
$$\text{Gates: } G = L \times (n_f + \max(0, n_q - 1))$$
$$\text{Fourier frequencies: } \omega \in \{-L, \ldots, L\}$$
$$\text{Trainability: } T \approx \max(0.4, 0.9 - 0.05L)$$
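The depth and gate-count formulas above can be evaluated directly for resource planning. A quick sketch with illustrative helper names (not library API):

```python
from math import ceil

def reupload_depth(n_features, n_qubits, n_layers):
    # d = L * (ceil(n_f / n_q) + (n_q - 1)):
    # each layer needs ceil(n_f / n_q) RY sub-layers (cyclic feature mapping)
    # plus a CNOT ladder of depth n_q - 1
    return n_layers * (ceil(n_features / n_qubits) + (n_qubits - 1))

def reupload_gate_count(n_features, n_qubits, n_layers):
    # G = L * (n_f + max(0, n_q - 1)): n_f RY gates plus the CNOTs per layer
    return n_layers * (n_features + max(0, n_qubits - 1))

print(reupload_depth(4, 4, 3))       # 3 * (1 + 3) = 12
print(reupload_gate_count(4, 4, 3))  # 3 * (4 + 3) = 21
```

Note how shrinking `n_qubits` below `n_features` trades width for depth: the CNOT ladder shortens while the RY sub-layer count grows.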
Generated for encoding-atlas v0.2.0