TrainableEncoding: Complete Feature Demonstration¶
Quantum Encoding Atlas | encoding-atlas v0.2.0
This notebook provides an exhaustive demonstration of TrainableEncoding — a parameterized quantum encoding that combines classical data encoding with learnable (trainable) parameters. Unlike fixed encodings, trainable encodings have variational parameters that can be optimized through training to adapt the encoding to specific tasks.
What Makes TrainableEncoding Special?¶
The quantum state is prepared using:
$$|\psi(x, \theta)\rangle = \left[U_{\text{ent}} \cdot U_{\text{trainable}}(\theta) \cdot U_{\text{data}}(x)\right]^L |0\rangle^{\otimes n}$$
where:
- $U_{\text{data}}(x)$: Encodes classical features using rotation gates
- $U_{\text{trainable}}(\theta)$: Applies learnable rotations with parameters $\theta$
- $U_{\text{ent}}$: Provides entanglement via CNOT gates
- $L$: Number of layers
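As a sanity check on the formula, here is a minimal NumPy sketch of one layer for $n = 2$ qubits. The gate ordering and qubit conventions here are assumptions and may differ from the library's internals:

```python
import numpy as np

def ry(angle):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(angle / 2), np.sin(angle / 2)
    return np.array([[c, -s], [s, c]])

# CNOT with qubit 0 as control, qubit 1 as target
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

x = np.array([0.3, 0.7])       # classical features
theta = np.array([0.1, -0.2])  # trainable parameters for one layer

U_data = np.kron(ry(x[0]), ry(x[1]))           # U_data(x): RY(x_i) on qubit i
U_train = np.kron(ry(theta[0]), ry(theta[1]))  # U_trainable(theta)
U_ent = CNOT                                   # entangling block for n = 2

# One layer (L = 1) applied to |00>
state = U_ent @ U_train @ U_data @ np.array([1.0, 0.0, 0.0, 0.0])
print(np.sum(np.abs(state) ** 2))  # unitaries preserve the norm
```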
Key advantages:
- Task-Specific Adaptation — Parameters optimize for specific datasets
- Enhanced Expressivity — Interleaving data & trainable layers increases representation capacity
- Noise Absorption — Trainable parameters can partially absorb systematic hardware errors
- Flexibility — Configurable rotation gates, entanglement patterns, and initialization strategies
Table of Contents¶
- Setup & Imports
- Instantiation & Constructor Validation
- Core Properties
- Trainable Parameter Management
- Initialization Strategies
- Entanglement Patterns
- Circuit Generation — PennyLane Backend
- Circuit Generation — Qiskit Backend
- Circuit Generation — Cirq Backend
- Batch Circuit Generation & Parallel Processing
- Input Validation & Edge Cases
- Resource Analysis
- Mathematical Verification
- Reproducibility & Determinism
- Copy & Serialization
- Equality & Hashing
- Protocol Compliance
- Integration with Analysis Tools
- Visualization & Comparison
- Registry System
- Logging & Debugging
- Practical Workflow: Variational QML Training Loop
- Summary
1. Setup & Imports¶
# Install the library (uncomment if not already installed)
# !pip install encoding-atlas
import numpy as np
import warnings
import pickle
from pprint import pprint
# Core library imports
from encoding_atlas import TrainableEncoding
from encoding_atlas import __version__
print(f"encoding-atlas version: {__version__}")
print(f"NumPy version: {np.__version__}")
encoding-atlas version: 0.2.0
NumPy version: 2.2.6
# Check which quantum backends are available
HAS_PENNYLANE = False
HAS_QISKIT = False
HAS_CIRQ = False
try:
import pennylane as qml
HAS_PENNYLANE = True
print(f"PennyLane: {qml.__version__}")
except ImportError:
print("PennyLane: not installed")
try:
import qiskit
from qiskit import QuantumCircuit
HAS_QISKIT = True
print(f"Qiskit: {qiskit.__version__}")
except ImportError:
print("Qiskit: not installed")
try:
import cirq
HAS_CIRQ = True
print(f"Cirq: {cirq.__version__}")
except ImportError:
print("Cirq: not installed")
PennyLane: 0.42.3
Qiskit: 2.3.0
Cirq: 1.5.0
2. Instantiation & Constructor Validation¶
2.1 Default Parameters¶
TrainableEncoding accepts the following constructor parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| n_features | int | (required) | Number of classical features (= number of qubits) |
| n_layers | int | 2 | Number of encoding layer repetitions |
| data_rotation | {"X", "Y", "Z"} | "Y" | Rotation axis for data-encoding gates |
| trainable_rotation | {"X", "Y", "Z"} | "Y" | Rotation axis for trainable gates |
| entanglement | {"linear", "circular", "full", "none"} | "linear" | CNOT entanglement topology |
| initialization | {"xavier", "he", "zeros", "random", "small_random"} | "xavier" | Parameter initialization strategy |
| seed | int \| None | None | Random seed for reproducibility |
# Create with defaults
enc = TrainableEncoding(n_features=4)
print(f"n_features: {enc.n_features}")
print(f"n_qubits: {enc.n_qubits}")
print(f"n_layers: {enc.n_layers}")
print(f"data_rotation: {enc.data_rotation}")
print(f"trainable_rotation: {enc.trainable_rotation}")
print(f"entanglement: {enc.entanglement}")
print(f"initialization: {enc.initialization}")
print(f"n_trainable_params: {enc.n_trainable_parameters}")
print(f"depth: {enc.depth}")
n_features: 4
n_qubits: 4
n_layers: 2
data_rotation: Y
trainable_rotation: Y
entanglement: linear
initialization: xavier
n_trainable_params: 8
depth: 10
# Create with all parameters customized
enc_custom = TrainableEncoding(
n_features=6,
n_layers=3,
data_rotation="X",
trainable_rotation="Z",
entanglement="circular",
initialization="he",
seed=42,
)
print(repr(enc_custom))
TrainableEncoding(n_features=6, n_layers=3, data_rotation='X', trainable_rotation='Z', entanglement='circular', initialization='he')
2.2 The config Property¶
The config property returns a copy of the encoding-specific parameters stored at construction; mutating the returned dict does not affect the encoding.
config = enc_custom.config
pprint(config)
# It's a copy — modifying it does NOT affect the encoding
config['n_layers'] = 999
print(f"\nEncoding n_layers is still: {enc_custom.n_layers}")
{'data_rotation': 'X',
'entanglement': 'circular',
'initialization': 'he',
'n_layers': 3,
'seed': 42,
'trainable_rotation': 'Z'}
Encoding n_layers is still: 3
2.3 Constructor Validation¶
TrainableEncoding validates all parameters strictly. Let's verify every error case.
# --- Invalid n_features ---
for bad_n in [0, -1, -10]:
try:
TrainableEncoding(n_features=bad_n)
except ValueError as e:
print(f"n_features={bad_n}: {e}")
n_features=0: n_features must be a positive integer, got 0
n_features=-1: n_features must be a positive integer, got -1
n_features=-10: n_features must be a positive integer, got -10
# --- Invalid n_layers ---
for bad_layers in [0, -1, True, False]:
try:
TrainableEncoding(n_features=4, n_layers=bad_layers)
except ValueError as e:
print(f"n_layers={bad_layers!r}: {e}")
n_layers=0: n_layers must be a positive integer, got 0
n_layers=-1: n_layers must be a positive integer, got -1
n_layers=True: n_layers must be a positive integer, got True
n_layers=False: n_layers must be a positive integer, got False
# --- Invalid rotation axes ---
for bad_rot in ["W", "x", "RY", ""]:
try:
TrainableEncoding(n_features=4, data_rotation=bad_rot)
except ValueError as e:
print(f"data_rotation={bad_rot!r}: {e}")
print()
for bad_rot in ["W", "y"]:
try:
TrainableEncoding(n_features=4, trainable_rotation=bad_rot)
except ValueError as e:
print(f"trainable_rotation={bad_rot!r}: {e}")
data_rotation='W': data_rotation must be one of ['X', 'Y', 'Z'], got 'W'
data_rotation='x': data_rotation must be one of ['X', 'Y', 'Z'], got 'x'
data_rotation='RY': data_rotation must be one of ['X', 'Y', 'Z'], got 'RY'
data_rotation='': data_rotation must be one of ['X', 'Y', 'Z'], got ''

trainable_rotation='W': trainable_rotation must be one of ['X', 'Y', 'Z'], got 'W'
trainable_rotation='y': trainable_rotation must be one of ['X', 'Y', 'Z'], got 'y'
# --- Invalid entanglement ---
for bad_ent in ["ring", "all", "nearest"]:
try:
TrainableEncoding(n_features=4, entanglement=bad_ent)
except ValueError as e:
print(f"entanglement={bad_ent!r}: {e}")
entanglement='ring': entanglement must be one of ['circular', 'full', 'linear', 'none'], got 'ring'
entanglement='all': entanglement must be one of ['circular', 'full', 'linear', 'none'], got 'all'
entanglement='nearest': entanglement must be one of ['circular', 'full', 'linear', 'none'], got 'nearest'
# --- Invalid initialization ---
for bad_init in ["uniform", "normal", "kaiming"]:
try:
TrainableEncoding(n_features=4, initialization=bad_init)
except ValueError as e:
print(f"initialization={bad_init!r}: {e}")
initialization='uniform': initialization must be one of ['he', 'random', 'small_random', 'xavier', 'zeros'], got 'uniform'
initialization='normal': initialization must be one of ['he', 'random', 'small_random', 'xavier', 'zeros'], got 'normal'
initialization='kaiming': initialization must be one of ['he', 'random', 'small_random', 'xavier', 'zeros'], got 'kaiming'
# --- Invalid seed ---
for bad_seed in [-1, -100]:
try:
TrainableEncoding(n_features=4, seed=bad_seed)
except ValueError as e:
print(f"seed={bad_seed}: {e}")
seed=-1: seed must be a non-negative integer, got -1
seed=-100: seed must be a non-negative integer, got -100
2.4 Warnings for Potential Issues¶
TrainableEncoding issues warnings for configurations that may cause problems.
# Deep circuit warning (n_layers > 8)
with warnings.catch_warnings(record=True) as w:
warnings.simplefilter("always")
enc_deep = TrainableEncoding(n_features=4, n_layers=10)
for warning in w:
print(f"WARNING: {warning.message}")
Deep trainable circuit: n_layers=10 exceeds threshold=8, n_trainable_params=40
WARNING: TrainableEncoding with 10 layers creates a deep circuit with 40 trainable parameters. Very deep trainable circuits may face trainability challenges due to barren plateaus. Consider: (1) reducing n_layers if task permits, (2) using layer-wise training, or (3) specialized initialization strategies.
# Large parameter count warning (> 100 trainable parameters)
with warnings.catch_warnings(record=True) as w:
warnings.simplefilter("always")
enc_large = TrainableEncoding(n_features=20, n_layers=6)
for warning in w:
print(f"WARNING: {warning.message}")
print(f"\nTotal trainable parameters: {enc_large.n_trainable_parameters}")
WARNING: TrainableEncoding has 120 trainable parameters, which may lead to overfitting with small training sets. Consider reducing n_layers or n_features if experiencing poor generalization.

Total trainable parameters: 120
3. Core Properties¶
3.1 Basic Properties¶
enc = TrainableEncoding(n_features=4, n_layers=2, entanglement="linear", seed=42)
print("=== Basic Properties ===")
print(f"n_features: {enc.n_features}")
print(f"n_qubits: {enc.n_qubits} (always equals n_features)")
print(f"n_layers: {enc.n_layers}")
print(f"depth: {enc.depth}")
print(f"n_trainable_parameters: {enc.n_trainable_parameters} (= n_layers * n_features)")
# Depth formula: n_layers * (2 + entangling_depth)
# For linear entanglement: entangling_depth = n_qubits - 1 = 3
# So: 2 * (2 + 3) = 10
expected_depth = 2 * (2 + (4 - 1))
print(f"\nDepth verification: n_layers*(2 + (n_qubits-1)) = {expected_depth}")
assert enc.depth == expected_depth
=== Basic Properties ===
n_features: 4
n_qubits: 4 (always equals n_features)
n_layers: 2
depth: 10
n_trainable_parameters: 8 (= n_layers * n_features)

Depth verification: n_layers*(2 + (n_qubits-1)) = 10
3.2 Depth Calculation for All Entanglement Patterns¶
n_features = 4
n_layers = 2
for ent in ["none", "linear", "circular", "full"]:
enc = TrainableEncoding(n_features=n_features, n_layers=n_layers, entanglement=ent)
print(f"entanglement={ent!r:12s} depth={enc.depth:3d} "
f"pairs={len(enc.get_entanglement_pairs())}")
entanglement='none'       depth=  4 pairs=0
entanglement='linear'     depth= 10 pairs=3
entanglement='circular'   depth= 12 pairs=4
entanglement='full'       depth= 10 pairs=6
3.3 EncodingProperties Dataclass¶
The properties attribute returns a frozen EncodingProperties dataclass with comprehensive metrics. It uses thread-safe lazy initialization with double-checked locking.
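The double-checked locking pattern mentioned above can be sketched in a few lines. This is an illustrative stand-in, not the library's actual implementation; attribute names are assumptions:

```python
import threading

class LazyProps:
    """Thread-safe lazy initialization with double-checked locking."""

    def __init__(self):
        self._props = None
        self._lock = threading.Lock()

    @property
    def properties(self):
        if self._props is None:            # first check: lock-free fast path
            with self._lock:
                if self._props is None:    # second check: under the lock
                    self._props = self._compute()
        return self._props

    def _compute(self):
        # Stand-in for the expensive metrics computation
        return {"n_qubits": 4, "depth": 10}

obj = LazyProps()
assert obj.properties is obj.properties    # cached: same object every time
```

The first check keeps repeated accesses cheap; the second check ensures only one thread ever runs the computation.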
from encoding_atlas.core.properties import EncodingProperties
enc = TrainableEncoding(n_features=4, n_layers=2, entanglement="linear", seed=42)
props = enc.properties
assert isinstance(props, EncodingProperties)
print("=== EncodingProperties ===")
print(f"n_qubits: {props.n_qubits}")
print(f"depth: {props.depth}")
print(f"gate_count: {props.gate_count}")
print(f"single_qubit_gates: {props.single_qubit_gates}")
print(f"two_qubit_gates: {props.two_qubit_gates}")
print(f"parameter_count: {props.parameter_count}")
print(f"is_entangling: {props.is_entangling}")
print(f"simulability: {props.simulability}")
print(f"trainability_estimate: {props.trainability_estimate}")
print(f"notes: {props.notes}")
# Gate count identity
assert props.single_qubit_gates + props.two_qubit_gates == props.gate_count
print("\nGate count identity verified: single + two_qubit == total")
=== EncodingProperties ===
n_qubits: 4
depth: 10
gate_count: 22
single_qubit_gates: 16
two_qubit_gates: 6
parameter_count: 16
is_entangling: True
simulability: not_simulable
trainability_estimate: 0.79
notes: Trainable encoding with 2 layers, 8 trainable parameters, Y data rotation, Y trainable rotation, and linear entanglement.

Gate count identity verified: single + two_qubit == total
# Properties are cached — same object returned on repeated access
props1 = enc.properties
props2 = enc.properties
assert props1 is props2
print("Properties are cached (same object returned).")
# Convert to dictionary
props_dict = props.to_dict()
print(f"\nProperties as dict keys: {list(props_dict.keys())}")
Properties are cached (same object returned).

Properties as dict keys: ['n_qubits', 'depth', 'gate_count', 'single_qubit_gates', 'two_qubit_gates', 'parameter_count', 'is_entangling', 'simulability', 'expressibility', 'entanglement_capability', 'trainability_estimate', 'noise_resilience_estimate', 'notes']
3.4 String Representation¶
enc = TrainableEncoding(
n_features=4, n_layers=3, data_rotation="X",
trainable_rotation="Z", entanglement="circular",
initialization="he"
)
print(repr(enc))
TrainableEncoding(n_features=4, n_layers=3, data_rotation='X', trainable_rotation='Z', entanglement='circular', initialization='he')
4. Trainable Parameter Management¶
The core differentiator of TrainableEncoding is its learnable parameters. These are managed through three methods:
- get_trainable_parameters() — get a copy of the current parameters
- set_trainable_parameters(params) — update parameters (e.g., after an optimization step)
- reset_parameters(seed=None) — reinitialize to fresh parameters
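These methods are the hooks an optimizer uses. The sketch below shows a finite-difference gradient step in pure NumPy; ToyEncoding is a hypothetical stand-in so that nothing beyond the getter/setter interface above is assumed, and the quadratic loss stands in for a real circuit loss:

```python
import numpy as np

class ToyEncoding:
    """Stand-in exposing the same getter/setter interface."""

    def __init__(self, n_layers, n_features, seed=0):
        rng = np.random.default_rng(seed)
        self._params = rng.normal(0.0, 0.1, (n_layers, n_features))

    def get_trainable_parameters(self):
        return self._params.copy()              # safe copy, like the real API

    def set_trainable_parameters(self, p):
        self._params = np.asarray(p, dtype=float).reshape(self._params.shape)

def loss(params):
    return float(np.sum(params ** 2))           # stand-in for a circuit loss

enc = ToyEncoding(2, 4)
lr, eps = 0.1, 1e-6
for step in range(50):
    p = enc.get_trainable_parameters()
    grad = np.zeros_like(p)
    for idx in np.ndindex(p.shape):             # central finite differences
        dp = p.copy(); dp[idx] += eps
        dm = p.copy(); dm[idx] -= eps
        grad[idx] = (loss(dp) - loss(dm)) / (2 * eps)
    enc.set_trainable_parameters(p - lr * grad)

print(loss(enc.get_trainable_parameters()))     # loss shrinks toward 0
```

In a real workflow the finite-difference loop would typically be replaced by the parameter-shift rule or autodifferentiation on the backend side.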
enc = TrainableEncoding(n_features=4, n_layers=2, seed=42)
# Get current trainable parameters
params = enc.get_trainable_parameters()
print(f"Shape: {params.shape} (n_layers, n_features)")
print(f"Values:\n{params}")
print(f"\nDtype: {params.dtype}")
Shape: (2, 4) (n_layers, n_features)
Values:
[[ 0.15235854 -0.51999205  0.3752256   0.47028236]
 [-0.97551759 -0.65108975  0.0639202  -0.1581213 ]]

Dtype: float64
# Returns a COPY — modifying it does NOT affect the encoding
params_copy = enc.get_trainable_parameters()
params_copy[0, 0] = 999.0
original = enc.get_trainable_parameters()
assert original[0, 0] != 999.0
print("Confirmed: get_trainable_parameters() returns a safe copy.")
Confirmed: get_trainable_parameters() returns a safe copy.
# Set new parameters (2D array)
new_params = np.ones((2, 4)) * 0.5
enc.set_trainable_parameters(new_params)
print(f"After setting:\n{enc.get_trainable_parameters()}")
After setting:
[[0.5 0.5 0.5 0.5]
 [0.5 0.5 0.5 0.5]]
# Set with a FLAT array — auto-reshaped if size matches
flat_params = np.arange(8, dtype=float) * 0.1
enc.set_trainable_parameters(flat_params)
print(f"Set from flat array (size 8):\n{enc.get_trainable_parameters()}")
Set from flat array (size 8):
[[0.  0.1 0.2 0.3]
 [0.4 0.5 0.6 0.7]]
# Validation errors
print("--- Wrong shape ---")
try:
enc.set_trainable_parameters(np.ones((3, 4)))
except ValueError as e:
print(f" {e}")
print("\n--- Wrong flat size ---")
try:
enc.set_trainable_parameters(np.ones(10))
except ValueError as e:
print(f" {e}")
print("\n--- NaN values ---")
try:
bad = np.ones((2, 4)); bad[0, 0] = np.nan
enc.set_trainable_parameters(bad)
except ValueError as e:
print(f" {e}")
print("\n--- Inf values ---")
try:
bad = np.ones((2, 4)); bad[1, 2] = np.inf
enc.set_trainable_parameters(bad)
except ValueError as e:
print(f" {e}")
--- Wrong shape ---
 Expected parameters with shape (2, 4), got (3, 4)

--- Wrong flat size ---
 Cannot reshape flat array of size 10 to expected shape (2, 4)

--- NaN values ---
 Parameters contain NaN or infinite values

--- Inf values ---
 Parameters contain NaN or infinite values
# Skip validation for performance in hot optimization loops
fast_params = np.random.randn(2, 4) * 0.1
enc.set_trainable_parameters(fast_params, validate=False)
print("Set with validate=False (no shape/NaN checks).")
print(f"Parameters:\n{enc.get_trainable_parameters()}")
Set with validate=False (no shape/NaN checks).
Parameters:
[[-0.09021411  0.05062536  0.17580124  0.09099612]
 [-0.08445855  0.05284374  0.1508526  -0.227738  ]]
# Cache invalidation: setting parameters clears cached properties
enc = TrainableEncoding(n_features=4, n_layers=2, seed=42)
_ = enc.properties # Force cache
enc.set_trainable_parameters(np.zeros((2, 4)))
new_props = enc.properties # Should be recomputed (cache was invalidated)
print(f"Properties recomputed after param update: gate_count={new_props.gate_count}")
Properties recomputed after param update: gate_count=22
4.1 Reset Parameters¶
enc = TrainableEncoding(n_features=4, n_layers=2, seed=42)
original_params = enc.get_trainable_parameters().copy()
print(f"Original params:\n{original_params}")
# Modify
enc.set_trainable_parameters(np.zeros((2, 4)))
print(f"\nAfter zeroing:\n{enc.get_trainable_parameters()}")
# Reset to original (uses the original seed)
enc.reset_parameters()
reset_params = enc.get_trainable_parameters()
print(f"\nAfter reset:\n{reset_params}")
assert np.allclose(original_params, reset_params)
print("\nConfirmed: reset restores original initialization.")
Original params:
[[ 0.15235854 -0.51999205  0.3752256   0.47028236]
 [-0.97551759 -0.65108975  0.0639202  -0.1581213 ]]

After zeroing:
[[0. 0. 0. 0.]
 [0. 0. 0. 0.]]

After reset:
[[ 0.15235854 -0.51999205  0.3752256   0.47028236]
 [-0.97551759 -0.65108975  0.0639202  -0.1581213 ]]

Confirmed: reset restores original initialization.
# Reset with a NEW seed
enc.reset_parameters(seed=99999)
new_params = enc.get_trainable_parameters()
print(f"After reset with new seed:\n{new_params}")
assert not np.allclose(original_params, new_params)
print("\nConfirmed: different seed produces different parameters.")
After reset with new seed:
[[ 0.40843831 -0.12555367  0.51549377 -0.24007871]
 [ 0.13079829  0.28521847  0.18286741  0.44152514]]

Confirmed: different seed produces different parameters.
5. Initialization Strategies¶
TrainableEncoding supports five parameter initialization strategies, each suited for different scenarios.
| Strategy | Formula | Range | Use Case |
|---|---|---|---|
| xavier | $\sigma = \sqrt{2/(n_{in}+n_{out})}$ | Small, centered | Default, balanced learning |
| he | $\sigma = \sqrt{2/n_{in}}$ | Slightly larger | Deeper networks |
| zeros | All zeros | 0.0 | Theoretical analysis |
| random | $\text{Uniform}[-\pi, \pi]$ | Full rotation | Maximum variance |
| small_random | $\text{Uniform}[-0.1, 0.1]$ | Very small | Careful initialization |
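The table's formulas can be sketched as below. The fan convention ($n_{in} = n_{out} = $ n_features) is an assumption, so the drawn values only approximate what the library produces:

```python
import numpy as np

def init_params(strategy, n_layers, n_features, rng):
    """Draw a (n_layers, n_features) parameter array per the table's formulas."""
    shape = (n_layers, n_features)
    if strategy == "xavier":
        sigma = np.sqrt(2.0 / (n_features + n_features))  # assumed fan-in/out
        return rng.normal(0.0, sigma, shape)
    if strategy == "he":
        return rng.normal(0.0, np.sqrt(2.0 / n_features), shape)
    if strategy == "zeros":
        return np.zeros(shape)
    if strategy == "random":
        return rng.uniform(-np.pi, np.pi, shape)
    if strategy == "small_random":
        return rng.uniform(-0.1, 0.1, shape)
    raise ValueError(f"unknown strategy: {strategy}")

rng = np.random.default_rng(42)
for s in ["xavier", "he", "zeros", "random", "small_random"]:
    p = init_params(s, 4, 8, rng)
    print(f"{s:12s} std={p.std():.3f}")
```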
strategies = ["xavier", "he", "zeros", "random", "small_random"]
print(f"{'Strategy':<15} {'Mean':>8} {'Std':>8} {'Min':>8} {'Max':>8}")
print("-" * 55)
for strategy in strategies:
enc = TrainableEncoding(
n_features=8, n_layers=4,
initialization=strategy, seed=42
)
params = enc.get_trainable_parameters()
print(f"{strategy:<15} {np.mean(params):>8.4f} {np.std(params):>8.4f} "
f"{np.min(params):>8.4f} {np.max(params):>8.4f}")
Strategy            Mean      Std      Min      Max
-------------------------------------------------------
xavier            0.0247   0.2938  -0.6898   0.7572
he                0.0350   0.4155  -0.9755   1.0708
zeros             0.0000   0.0000   0.0000   0.0000
random            0.4882   1.8350  -2.8664   2.9884
small_random      0.0155   0.0584  -0.0912   0.0951
# Zeros initialization: all parameters exactly zero
enc_zeros = TrainableEncoding(n_features=4, initialization="zeros")
params = enc_zeros.get_trainable_parameters()
assert np.all(params == 0)
print(f"Zeros initialization:\n{params}")
print("All zero confirmed.")
Zeros initialization:
[[0. 0. 0. 0.]
 [0. 0. 0. 0.]]
All zero confirmed.
6. Entanglement Patterns¶
TrainableEncoding supports four entanglement topologies that determine CNOT gate connectivity.
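Independently of the library, the four topologies can be reproduced in a few lines of plain Python. The pair ordering here matches the outputs shown below but is otherwise an assumption:

```python
from itertools import combinations

def entanglement_pairs(n, pattern):
    """CNOT (control, target) pairs for each topology on n qubits."""
    if n < 2 or pattern == "none":
        return []
    if pattern == "linear":
        return [(i, i + 1) for i in range(n - 1)]
    if pattern == "circular":
        pairs = [(i, i + 1) for i in range(n - 1)]
        if n > 2:                      # skip the wrap-around duplicate at n = 2
            pairs.append((n - 1, 0))
        return pairs
    if pattern == "full":
        return list(combinations(range(n), 2))
    raise ValueError(f"unknown pattern: {pattern}")

print(entanglement_pairs(4, "circular"))  # [(0, 1), (1, 2), (2, 3), (3, 0)]
```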
n = 4
for ent in ["linear", "circular", "full", "none"]:
enc = TrainableEncoding(n_features=n, entanglement=ent)
pairs = enc.get_entanglement_pairs()
print(f"{ent:10s} pairs={pairs}")
linear     pairs=[(0, 1), (1, 2), (2, 3)]
circular   pairs=[(0, 1), (1, 2), (2, 3), (3, 0)]
full       pairs=[(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
none       pairs=[]
# Verify pair count formulas
print(f"{'n_features':<12} {'linear':<10} {'circular':<10} {'full':<10} {'none':<10}")
print("-" * 52)
for n in [2, 3, 4, 5, 6, 8]:
counts = {}
for ent in ["linear", "circular", "full", "none"]:
enc = TrainableEncoding(n_features=n, entanglement=ent)
counts[ent] = len(enc.get_entanglement_pairs())
print(f"{n:<12} {counts['linear']:<10} {counts['circular']:<10} "
f"{counts['full']:<10} {counts['none']:<10}")
# Verify formulas
assert counts['linear'] == n - 1
assert counts['circular'] == (n if n > 2 else n - 1)
assert counts['full'] == n * (n - 1) // 2
assert counts['none'] == 0
print("\nFormulas verified: linear=n-1, circular=n (n>2), full=n(n-1)/2, none=0")
n_features   linear     circular   full       none
----------------------------------------------------
2            1          1          1          0
3            2          3          3          0
4            3          4          6          0
5            4          5          10         0
6            5          6          15         0
8            7          8          28         0

Formulas verified: linear=n-1, circular=n (n>2), full=n(n-1)/2, none=0
# Single qubit: no entanglement possible regardless of setting
enc_single = TrainableEncoding(n_features=1, entanglement="full")
print(f"n_features=1, entanglement='full': pairs={enc_single.get_entanglement_pairs()}")
print(f"is_entangling: {enc_single.properties.is_entangling}")
assert len(enc_single.get_entanglement_pairs()) == 0
assert enc_single.properties.is_entangling is False
n_features=1, entanglement='full': pairs=[]
is_entangling: False
# Entanglement affects simulability
enc_ent = TrainableEncoding(n_features=4, entanglement="linear")
enc_sep = TrainableEncoding(n_features=4, entanglement="none")
print(f"With entanglement: simulability={enc_ent.properties.simulability}, "
f"is_entangling={enc_ent.properties.is_entangling}")
print(f"Without entanglement: simulability={enc_sep.properties.simulability}, "
f"is_entangling={enc_sep.properties.is_entangling}")
With entanglement: simulability=not_simulable, is_entangling=True
Without entanglement: simulability=simulable, is_entangling=False
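A quick NumPy check of why entanglement="none" stays classically simulable: with no CNOTs, each qubit evolves independently, so the state is a tensor product with Schmidt rank 1. This is an illustrative sketch, not the library's internal test:

```python
import numpy as np

def ry(a):
    c, s = np.cos(a / 2), np.sin(a / 2)
    return np.array([[c, -s], [s, c]])

x = [0.3, 0.7]
# Without entangling gates, each qubit's state is computed independently
q0 = ry(x[0]) @ np.array([1.0, 0.0])
q1 = ry(x[1]) @ np.array([1.0, 0.0])
product_state = np.kron(q0, q1)

# Schmidt rank = number of nonzero singular values of the 2x2 amplitude matrix
svals = np.linalg.svd(product_state.reshape(2, 2), compute_uv=False)
print(int(np.sum(svals > 1e-12)))  # rank 1 -> separable, simulable per qubit
```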
7. Circuit Generation — PennyLane Backend¶
The PennyLane backend returns a callable function (closure) that applies the encoding gates when called inside a QNode.
if HAS_PENNYLANE:
enc = TrainableEncoding(n_features=4, n_layers=2, seed=42)
x = np.array([0.1, 0.2, 0.3, 0.4])
# Generate circuit function
circuit_fn = enc.get_circuit(x, backend="pennylane")
print(f"Type: {type(circuit_fn)}")
print(f"Callable: {callable(circuit_fn)}")
# Execute in a QNode to get the quantum state
dev = qml.device("default.qubit", wires=4)
@qml.qnode(dev)
def full_circuit():
circuit_fn()
return qml.state()
state = full_circuit()
print(f"\nState vector dimension: {len(state)}")
print(f"State norm: {np.sum(np.abs(state)**2):.10f}")
print(f"First 4 amplitudes: {state[:4]}")
else:
print("PennyLane not installed, skipping.")
Type: <class 'function'>
Callable: True

State vector dimension: 16
State norm: 1.0000000000
First 4 amplitudes: [0.65448575+0.j 0.3704331 +0.j 0.32396779+0.j 0.20355197+0.j]
if HAS_PENNYLANE:
# All rotation combinations produce valid quantum states
x = np.array([0.5, 1.0, 1.5, 2.0])
for d_rot in ["X", "Y", "Z"]:
for t_rot in ["X", "Y", "Z"]:
enc = TrainableEncoding(
n_features=4, n_layers=1,
data_rotation=d_rot, trainable_rotation=t_rot, seed=42
)
circuit_fn = enc.get_circuit(x, backend="pennylane")
dev = qml.device("default.qubit", wires=4)
@qml.qnode(dev)
def circuit():
circuit_fn()
return qml.state()
state = circuit()
norm = float(np.sum(np.abs(state)**2))
assert np.isclose(norm, 1.0, atol=1e-10)
print(f"R{d_rot}(data) + R{t_rot}(trainable): norm={norm:.10f} OK")
print("\nAll 9 rotation combinations produce valid normalized states.")
else:
print("PennyLane not installed, skipping.")
RX(data) + RX(trainable): norm=1.0000000000 OK
RX(data) + RY(trainable): norm=1.0000000000 OK
RX(data) + RZ(trainable): norm=1.0000000000 OK
RY(data) + RX(trainable): norm=1.0000000000 OK
RY(data) + RY(trainable): norm=1.0000000000 OK
RY(data) + RZ(trainable): norm=1.0000000000 OK
RZ(data) + RX(trainable): norm=1.0000000000 OK
RZ(data) + RY(trainable): norm=1.0000000000 OK
RZ(data) + RZ(trainable): norm=1.0000000000 OK

All 9 rotation combinations produce valid normalized states.
8. Circuit Generation — Qiskit Backend¶
The Qiskit backend returns a QuantumCircuit object with the encoding gates applied.
if HAS_QISKIT:
enc = TrainableEncoding(n_features=4, n_layers=2, seed=42)
x = np.array([0.1, 0.2, 0.3, 0.4])
qc = enc.get_circuit(x, backend="qiskit")
print(f"Type: {type(qc).__name__}")
print(f"Num qubits: {qc.num_qubits}")
print(f"Circuit depth: {qc.depth()}")
print(f"Gate counts: {dict(qc.count_ops())}")
# Draw the circuit
print("\n" + qc.draw(output='text').single_string())
else:
print("Qiskit not installed, skipping.")
Type: QuantumCircuit
Num qubits: 4
Circuit depth: 9
Gate counts: {'ry': 16, 'cx': 6}
┌─────────┐┌─────────────┐ ┌─────────┐┌──────────────┐»
q_0: ┤ Ry(0.1) ├┤ Ry(0.15236) ├───■──┤ Ry(0.1) ├┤ Ry(-0.97552) ├»
├─────────┤├─────────────┴┐┌─┴─┐└─────────┘└─┬─────────┬──┘»
q_1: ┤ Ry(0.2) ├┤ Ry(-0.51999) ├┤ X ├─────■───────┤ Ry(0.2) ├───»
├─────────┤├─────────────┬┘└───┘ ┌─┴─┐ └─────────┘ »
q_2: ┤ Ry(0.3) ├┤ Ry(0.37523) ├─────────┤ X ├──────────■────────»
├─────────┤├─────────────┤ └───┘ ┌─┴─┐ »
q_3: ┤ Ry(0.4) ├┤ Ry(0.47028) ├──────────────────────┤ X ├──────»
└─────────┘└─────────────┘ └───┘ »
«
«q_0: ───────────────────────■──────────────────
« ┌──────────────┐ ┌─┴─┐
«q_1: ┤ Ry(-0.65109) ├─────┤ X ├────────■───────
« └─┬─────────┬──┘┌────┴───┴────┐ ┌─┴─┐
«q_2: ──┤ Ry(0.3) ├───┤ Ry(0.06392) ├─┤ X ├──■──
« ├─────────┤ ├─────────────┴┐└───┘┌─┴─┐
«q_3: ──┤ Ry(0.4) ├───┤ Ry(-0.15812) ├─────┤ X ├
« └─────────┘ └──────────────┘ └───┘
if HAS_QISKIT:
# All entanglement patterns produce valid Qiskit circuits
for ent in ["linear", "circular", "full", "none"]:
enc = TrainableEncoding(n_features=4, entanglement=ent, seed=42)
qc = enc.get_circuit(x, backend="qiskit")
ops = dict(qc.count_ops())
cx_count = ops.get('cx', 0)
print(f"entanglement={ent!r:12s} "
f"qubits={qc.num_qubits} "
f"CX gates={cx_count} "
f"ops={ops}")
else:
print("Qiskit not installed, skipping.")
entanglement='linear' qubits=4 CX gates=6 ops={'ry': 16, 'cx': 6}
entanglement='circular' qubits=4 CX gates=8 ops={'ry': 16, 'cx': 8}
entanglement='full' qubits=4 CX gates=12 ops={'ry': 16, 'cx': 12}
entanglement='none' qubits=4 CX gates=0 ops={'ry': 16}
9. Circuit Generation — Cirq Backend¶
The Cirq backend returns a Circuit object built on line qubits.
if HAS_CIRQ:
enc = TrainableEncoding(n_features=4, n_layers=2, seed=42)
x = np.array([0.1, 0.2, 0.3, 0.4])
cirq_circuit = enc.get_circuit(x, backend="cirq")
print(f"Type: {type(cirq_circuit).__name__}")
print(f"Num qubits: {len(cirq_circuit.all_qubits())}")
print(f"Num operations: {len(list(cirq_circuit.all_operations()))}")
print(f"Num moments: {len(cirq_circuit.moments)}")
print(f"\n{cirq_circuit}")
else:
print("Cirq not installed, skipping.")
Type: Circuit
Num qubits: 4
Num operations: 22
Num moments: 9
0: ───Ry(0.032π)───Ry(0.048π)────@───Ry(0.032π)───Ry(-0.311π)─────────────────@────────────────────
│ │
1: ───Ry(0.064π)───Ry(-0.166π)───X───@────────────Ry(0.064π)────Ry(-0.207π)───X────────────@───────
│ │
2: ───Ry(0.095π)───Ry(0.119π)────────X────────────@─────────────Ry(0.095π)────Ry(0.02π)────X───@───
│ │
3: ───Ry(0.127π)───Ry(0.15π)──────────────────────X─────────────Ry(0.127π)────Ry(-0.05π)───────X───
# Invalid backend raises clear error
x = np.array([0.1, 0.2, 0.3, 0.4])
enc = TrainableEncoding(n_features=4, seed=42)
try:
enc.get_circuit(x, backend="invalid")
except ValueError as e:
print(f"Invalid backend error: {e}")
Invalid backend error: Unknown backend 'invalid'. Supported backends: 'pennylane', 'qiskit', 'cirq'
10. Batch Circuit Generation & Parallel Processing¶
enc = TrainableEncoding(n_features=4, n_layers=2, seed=42)
# Batch of 5 samples
X = np.random.RandomState(42).randn(5, 4)
if HAS_QISKIT:
# Sequential batch
circuits = enc.get_circuits(X, backend="qiskit")
print(f"Sequential: {len(circuits)} circuits generated")
assert all(isinstance(c, QuantumCircuit) for c in circuits)
# Parallel batch
circuits_par = enc.get_circuits(X, backend="qiskit", parallel=True, max_workers=2)
print(f"Parallel: {len(circuits_par)} circuits generated")
elif HAS_PENNYLANE:
circuits = enc.get_circuits(X, backend="pennylane")
print(f"Sequential: {len(circuits)} circuits generated")
assert all(callable(c) for c in circuits)
circuits_par = enc.get_circuits(X, backend="pennylane", parallel=True)
print(f"Parallel: {len(circuits_par)} circuits generated")
else:
print("No backend installed, skipping.")
Sequential: 5 circuits generated
Parallel: 5 circuits generated
# Single sample via get_circuits (1D input)
if HAS_QISKIT:
x_single = np.array([0.1, 0.2, 0.3, 0.4])
circuits = enc.get_circuits(x_single, backend="qiskit")
print(f"1D input produces {len(circuits)} circuit(s)")
else:
print("Qiskit not installed, skipping.")
1D input produces 1 circuit(s)
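The batch path above can be sketched as a map over rows, with the parallel branch delegating to a thread pool. The executor choice and the get_circuit stand-in below are assumptions about the library's internals:

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def get_circuit(x):
    """Stand-in for a backend circuit built from one sample."""
    return {"angles": list(x)}

def get_circuits(X, parallel=False, max_workers=2):
    X = np.atleast_2d(np.asarray(X, dtype=float))   # 1D input -> one sample
    if parallel:
        # pool.map preserves input order, so circuits line up with samples
        with ThreadPoolExecutor(max_workers=max_workers) as pool:
            return list(pool.map(get_circuit, X))
    return [get_circuit(x) for x in X]

X = np.random.default_rng(0).normal(size=(5, 4))
assert len(get_circuits(X)) == 5
assert len(get_circuits(X, parallel=True)) == 5
assert len(get_circuits(np.zeros(4))) == 1          # 1D input -> 1 circuit
```

Threads (rather than processes) make sense here if circuit construction releases the GIL or is I/O-bound; for pure-Python construction the sequential path is often just as fast.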
11. Input Validation & Edge Cases¶
TrainableEncoding validates inputs thoroughly to catch common mistakes early.
enc = TrainableEncoding(n_features=4, seed=42)
# Accepts various input types
inputs = {
"numpy array": np.array([0.1, 0.2, 0.3, 0.4]),
"python list": [0.1, 0.2, 0.3, 0.4],
"python tuple": (0.1, 0.2, 0.3, 0.4),
"integer list": [1, 2, 3, 4],
}
backend = "qiskit" if HAS_QISKIT else ("pennylane" if HAS_PENNYLANE else None)
if backend:
for name, x in inputs.items():
circuit = enc.get_circuit(x, backend=backend)
print(f"{name:15s} -> OK")
else:
print("No backend installed, skipping.")
numpy array     -> OK
python list     -> OK
python tuple    -> OK
integer list    -> OK
enc = TrainableEncoding(n_features=4, seed=42)
# Wrong feature count
print("--- Wrong number of features ---")
try:
enc._validate_input(np.array([0.1, 0.2]))
except ValueError as e:
print(f" {e}")
# NaN values
print("\n--- NaN in input ---")
try:
enc._validate_input(np.array([0.1, np.nan, 0.3, 0.4]))
except ValueError as e:
print(f" {e}")
# Inf values
print("\n--- Inf in input ---")
try:
enc._validate_input(np.array([0.1, np.inf, 0.3, 0.4]))
except ValueError as e:
print(f" {e}")
# String input
print("\n--- String input ---")
try:
enc._validate_input(["0.1", "0.2", "0.3", "0.4"])
except TypeError as e:
print(f" {e}")
# Complex input
print("\n--- Complex input ---")
try:
enc._validate_input(np.array([1+2j, 3+4j, 5+6j, 7+8j]))
except TypeError as e:
print(f" {e}")
# 3D input
print("\n--- 3D input ---")
try:
enc._validate_input(np.ones((2, 2, 4)))
except ValueError as e:
print(f" {e}")
--- Wrong number of features ---
 Expected 4 features, got 2

--- NaN in input ---
 Input contains NaN or infinite values

--- Inf in input ---
 Input contains NaN or infinite values

--- String input ---
 Input contains string values. Expected numeric data, got str. Convert strings to floats before encoding.

--- Complex input ---
 Input contains complex values (dtype: complex128). Complex numbers are not supported. Use real-valued data only.

--- 3D input ---
 Input must be 1D or 2D array, got 3D
# Extreme input values still produce valid circuits
backend = "pennylane" if HAS_PENNYLANE else ("qiskit" if HAS_QISKIT else None)
if backend:
enc = TrainableEncoding(n_features=4, seed=42)
extreme_inputs = {
"zeros": np.zeros(4),
"large values": np.array([100.0, 200.0, 300.0, 400.0]),
"negative": np.array([-0.5, -1.0, -1.5, -2.0]),
"very small": np.array([1e-15, 1e-16, 1e-17, 1e-18]),
"near pi": np.array([np.pi-1e-14, np.pi+1e-14, 2*np.pi-1e-14, 2*np.pi+1e-14]),
}
for name, x in extreme_inputs.items():
circuit = enc.get_circuit(x, backend=backend)
if HAS_PENNYLANE and backend == "pennylane":
dev = qml.device("default.qubit", wires=4)
@qml.qnode(dev)
def run():
circuit()
return qml.state()
state = run()
norm = float(np.sum(np.abs(state)**2))
assert np.isclose(norm, 1.0, atol=1e-10)
print(f"{name:15s} -> norm={norm:.10f} OK")
else:
print(f"{name:15s} -> circuit generated OK")
else:
print("No backend installed, skipping.")
zeros           -> norm=1.0000000000 OK
large values    -> norm=1.0000000000 OK
negative        -> norm=1.0000000000 OK
very small      -> norm=1.0000000000 OK
near pi         -> norm=1.0000000000 OK
12. Resource Analysis¶
12.1 Gate Count Breakdown¶
enc = TrainableEncoding(
n_features=4, n_layers=2,
data_rotation="Y", trainable_rotation="Y",
entanglement="linear", seed=42
)
breakdown = enc.gate_count_breakdown()
print("=== Gate Count Breakdown ===")
for key, value in breakdown.items():
print(f" {key:25s}: {value}")
# Manual verification:
# - data_ry: 2 layers * 4 features = 8
# - trainable_ry: 2 * 4 = 8
# - cnot_gates: 2 layers * 3 pairs (linear, 4 qubits) = 6
# - total_single_qubit: 8 + 8 = 16
# - total: 16 + 6 = 22
assert breakdown['data_ry'] == 8
assert breakdown['trainable_ry'] == 8
assert breakdown['cnot_gates'] == 6
assert breakdown['total'] == 22
print("\nAll gate counts verified.")
=== Gate Count Breakdown ===
 data_rx                  : 0
 data_ry                  : 8
 data_rz                  : 0
 trainable_rx             : 0
 trainable_ry             : 8
 trainable_rz             : 0
 cnot_gates               : 6
 total_single_qubit       : 16
 total_two_qubit          : 6
 total                    : 22
 gates_per_layer          : 11

All gate counts verified.
# Gate breakdown depends on rotation axes
enc_xz = TrainableEncoding(
n_features=4, n_layers=1,
data_rotation="X", trainable_rotation="Z", seed=42
)
bd = enc_xz.gate_count_breakdown()
print(f"RX(data): data_rx={bd['data_rx']}, data_ry={bd['data_ry']}, data_rz={bd['data_rz']}")
print(f"RZ(train): trainable_rx={bd['trainable_rx']}, trainable_ry={bd['trainable_ry']}, trainable_rz={bd['trainable_rz']}")
assert bd['data_rx'] == 4 and bd['data_ry'] == 0 and bd['data_rz'] == 0
assert bd['trainable_rx'] == 0 and bd['trainable_ry'] == 0 and bd['trainable_rz'] == 4
RX(data): data_rx=4, data_ry=0, data_rz=0
RZ(train): trainable_rx=0, trainable_ry=0, trainable_rz=4
12.2 Resource Summary¶
enc = TrainableEncoding(n_features=4, n_layers=2, seed=42)
summary = enc.resource_summary()
print("=== Resource Summary ===")
for key, value in summary.items():
    if isinstance(value, dict):
        print(f"\n {key}:")
        for k, v in value.items():
            print(f" {k}: {v}")
    elif isinstance(value, list):
        print(f" {key}: {value}")
    else:
        print(f" {key}: {value}")
=== Resource Summary ===
n_qubits: 4
n_features: 4
n_layers: 2
depth: 10
gate_counts:
data_rx: 0
data_ry: 8
data_rz: 0
trainable_rx: 0
trainable_ry: 8
trainable_rz: 0
cnot_gates: 6
total_single_qubit: 16
total_two_qubit: 6
total: 22
gates_per_layer: 11
data_rotation: Y
trainable_rotation: Y
entanglement: linear
initialization: xavier
n_trainable_parameters: 8
parameter_statistics:
mean: -0.15536674993630278
std: 0.4831012536447966
min: -0.9755175943269182
max: 0.47028235819560693
is_entangling: True
simulability: not_simulable
trainability_estimate: 0.79
hardware_requirements:
connectivity: linear
native_gates: ['RY', 'RY', 'CNOT']
min_qubit_count: 4
n_entanglement_pairs: 3
entanglement_pairs: [(0, 1), (1, 2), (2, 3)]
recommendations: ['Configuration looks good for typical QML tasks.']
# Resource comparison across configurations
configs = [
    {"n_features": 4, "n_layers": 1, "entanglement": "linear"},
    {"n_features": 4, "n_layers": 2, "entanglement": "linear"},
    {"n_features": 4, "n_layers": 2, "entanglement": "circular"},
    {"n_features": 4, "n_layers": 2, "entanglement": "full"},
    {"n_features": 4, "n_layers": 2, "entanglement": "none"},
    {"n_features": 8, "n_layers": 2, "entanglement": "linear"},
]
print(f"{'Config':<45} {'Depth':>6} {'Gates':>6} {'1Q':>5} {'2Q':>5} {'Params':>7}")
print("-" * 80)
for cfg in configs:
    enc = TrainableEncoding(**cfg, seed=42)
    bd = enc.gate_count_breakdown()
    label = f"n={cfg['n_features']}, L={cfg['n_layers']}, ent={cfg['entanglement']}"
    print(f"{label:<45} {enc.depth:>6} {bd['total']:>6} "
          f"{bd['total_single_qubit']:>5} {bd['cnot_gates']:>5} "
          f"{enc.n_trainable_parameters:>7}")
Config                                         Depth  Gates    1Q    2Q  Params
--------------------------------------------------------------------------------
n=4, L=1, ent=linear                               5     11     8     3       4
n=4, L=2, ent=linear                              10     22    16     6       8
n=4, L=2, ent=circular                            12     24    16     8       8
n=4, L=2, ent=full                                10     28    16    12       8
n=4, L=2, ent=none                                 4     16    16     0       8
n=8, L=2, ent=linear                              18     46    32    14      16
13. Mathematical Verification¶
Verify that the encoding produces mathematically correct quantum states.
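Before exercising the library, the single-qubit building block itself can be checked with plain NumPy. This sketch assumes the standard convention $RY(\theta) = e^{-i\theta Y/2}$, under which $RY(\theta)|0\rangle = \cos(\theta/2)|0\rangle + \sin(\theta/2)|1\rangle$; the `ry` helper below is illustrative and not part of the library.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation matrix under the e^{-i*theta*Y/2} convention."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

theta = 0.7
state = ry(theta) @ np.array([1.0, 0.0])  # RY(theta)|0>

# Amplitudes match cos(theta/2) and sin(theta/2), and the state stays normalized
assert np.isclose(state[0], np.cos(theta / 2))
assert np.isclose(state[1], np.sin(theta / 2))
assert np.isclose(np.sum(np.abs(state) ** 2), 1.0)
print("RY building block verified.")
```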
if HAS_PENNYLANE:
    # Zero input + zero params: CNOT on |0> control does nothing -> |0000>
    enc = TrainableEncoding(n_features=4, n_layers=1, initialization="zeros", seed=42)
    x = np.zeros(4)
    circuit_fn = enc.get_circuit(x, backend="pennylane")
    dev = qml.device("default.qubit", wires=4)

    @qml.qnode(dev)
    def circuit():
        circuit_fn()
        return qml.state()

    state = circuit()
    print(f"|0000> amplitude: {state[0]:.6f}")
    print(f"Probability of |0000>: {np.abs(state[0])**2:.6f}")
    print(f"State norm: {np.sum(np.abs(state)**2):.10f}")
    # With zero rotations, CNOT on |0> control is identity
    assert np.isclose(np.abs(state[0])**2, 1.0, atol=1e-10)
    print("\nConfirmed: zero rotations produce |0000> state.")
else:
    print("PennyLane not installed, skipping.")
|0000> amplitude: 1.000000+0.000000j
Probability of |0000>: 1.000000
State norm: 1.0000000000

Confirmed: zero rotations produce |0000> state.
if HAS_PENNYLANE:
    # Different inputs -> different states
    enc = TrainableEncoding(n_features=4, n_layers=2, seed=42)
    dev = qml.device("default.qubit", wires=4)
    x1 = np.array([0.0, 0.0, 0.0, 0.0])
    x2 = np.array([np.pi, np.pi, np.pi, np.pi])
    fn1 = enc.get_circuit(x1, backend="pennylane")
    fn2 = enc.get_circuit(x2, backend="pennylane")

    @qml.qnode(dev)
    def c1():
        fn1()
        return qml.state()

    @qml.qnode(dev)
    def c2():
        fn2()
        return qml.state()

    s1, s2 = c1(), c2()
    fidelity = np.abs(np.vdot(s1, s2))**2
    print(f"Fidelity between x=[0,0,0,0] and x=[pi,pi,pi,pi]: {fidelity:.6f}")
    assert fidelity < 0.99
    print("Confirmed: different inputs produce different quantum states.")
else:
    print("PennyLane not installed, skipping.")
Fidelity between x=[0,0,0,0] and x=[pi,pi,pi,pi]: 0.000000
Confirmed: different inputs produce different quantum states.
if HAS_PENNYLANE:
    # Different parameters -> different states (same input)
    enc = TrainableEncoding(n_features=4, n_layers=2, seed=42)
    x = np.array([0.1, 0.2, 0.3, 0.4])
    dev = qml.device("default.qubit", wires=4)
    fn1 = enc.get_circuit(x, backend="pennylane")

    @qml.qnode(dev)
    def c1():
        fn1()
        return qml.state()

    s1 = c1()
    # Change trainable parameters
    enc.set_trainable_parameters(np.random.randn(2, 4) * 2)
    fn2 = enc.get_circuit(x, backend="pennylane")

    @qml.qnode(dev)
    def c2():
        fn2()
        return qml.state()

    s2 = c2()
    fidelity = np.abs(np.vdot(s1, s2))**2
    print(f"Fidelity between different parameter settings: {fidelity:.6f}")
    assert fidelity < 0.99
    print("Confirmed: different trainable parameters produce different states.")
else:
    print("PennyLane not installed, skipping.")
Fidelity between different parameter settings: 0.023055
Confirmed: different trainable parameters produce different states.
14. Reproducibility & Determinism¶
# Same seed -> identical parameters
enc1 = TrainableEncoding(n_features=4, n_layers=2, seed=42)
enc2 = TrainableEncoding(n_features=4, n_layers=2, seed=42)
np.testing.assert_array_equal(
    enc1.get_trainable_parameters(),
    enc2.get_trainable_parameters()
)
print("Same seed produces identical parameters.")
# Different seeds -> different parameters
enc3 = TrainableEncoding(n_features=4, n_layers=2, seed=43)
assert not np.allclose(enc1.get_trainable_parameters(), enc3.get_trainable_parameters())
print("Different seeds produce different parameters.")
Same seed produces identical parameters.
Different seeds produce different parameters.
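The seeded behavior demonstrated above is what NumPy's `Generator` API provides out of the box. The standalone sketch below illustrates the mechanism; the library's internals may differ in detail.

```python
import numpy as np

# Two generators with the same seed produce identical draws;
# a different seed diverges.
a = np.random.default_rng(42).standard_normal((2, 4))
b = np.random.default_rng(42).standard_normal((2, 4))
c = np.random.default_rng(43).standard_normal((2, 4))

assert np.array_equal(a, b)
assert not np.allclose(a, c)
print("Seeded generators are reproducible.")
```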
if HAS_PENNYLANE:
    # Same input always produces the same state (deterministic)
    enc = TrainableEncoding(n_features=4, n_layers=2, seed=42)
    x = np.array([0.1, 0.2, 0.3, 0.4])
    dev = qml.device("default.qubit", wires=4)
    states = []
    for i in range(5):
        fn = enc.get_circuit(x, backend="pennylane")

        @qml.qnode(dev)
        def c():
            fn()
            return qml.state()

        states.append(c())
    for i in range(1, len(states)):
        assert np.allclose(states[0], states[i], atol=1e-10)
    print(f"5 circuit executions all produce identical states (max diff: "
          f"{max(np.max(np.abs(states[0] - s)) for s in states[1:]):.2e})")
else:
    print("PennyLane not installed, skipping.")
5 circuit executions all produce identical states (max diff: 0.00e+00)
15. Copy & Serialization¶
15.1 Copying¶
enc = TrainableEncoding(n_features=4, n_layers=2, seed=42)
enc_copy = enc.copy()
# Independent objects
assert enc is not enc_copy
# Same parameters
np.testing.assert_array_almost_equal(
    enc.get_trainable_parameters(),
    enc_copy.get_trainable_parameters()
)
print("Copy has same parameters.")
# Modifying copy doesn't affect original
original_params = enc.get_trainable_parameters().copy()
enc_copy.set_trainable_parameters(np.zeros((2, 4)))
np.testing.assert_array_almost_equal(enc.get_trainable_parameters(), original_params)
print("Modifying copy does NOT affect original.")
Copy has same parameters.
Modifying copy does NOT affect original.
15.2 Pickle Serialization¶
enc = TrainableEncoding(
    n_features=4, n_layers=3,
    data_rotation="Z", trainable_rotation="X",
    entanglement="circular", seed=42
)
# Pickle roundtrip
pickled = pickle.dumps(enc)
restored = pickle.loads(pickled)
print(f"Pickled size: {len(pickled)} bytes")
print(f"\nOriginal: {repr(enc)}")
print(f"Restored: {repr(restored)}")
# Full state preserved
assert restored.n_features == enc.n_features
assert restored.n_layers == enc.n_layers
assert restored.data_rotation == enc.data_rotation
assert restored.trainable_rotation == enc.trainable_rotation
assert restored.entanglement == enc.entanglement
np.testing.assert_array_almost_equal(
    restored.get_trainable_parameters(),
    enc.get_trainable_parameters()
)
print("\nAll state preserved after pickle roundtrip.")
Pickled size: 1012 bytes

Original: TrainableEncoding(n_features=4, n_layers=3, data_rotation='Z', trainable_rotation='X', entanglement='circular', initialization='xavier')
Restored: TrainableEncoding(n_features=4, n_layers=3, data_rotation='Z', trainable_rotation='X', entanglement='circular', initialization='xavier')

All state preserved after pickle roundtrip.
# Circuit generation works after deserialization
if HAS_QISKIT:
    x = np.array([0.1, 0.2, 0.3, 0.4])
    qc = restored.get_circuit(x, backend="qiskit")
    print(f"Post-pickle circuit: {qc.num_qubits} qubits, {qc.depth()} depth")
    assert qc.num_qubits == 4
    print("Circuit generation works after deserialization.")
else:
    print("Qiskit not installed, skipping circuit test.")
# Equality preserved
assert enc == restored
print("Equality preserved after pickle roundtrip.")
Post-pickle circuit: 4 qubits, 18 depth
Circuit generation works after deserialization.
Equality preserved after pickle roundtrip.
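For archiving just the learned parameters rather than the whole object, NumPy's `.npz` format is a lighter-weight alternative to pickle. The sketch below uses a plain array as a stand-in for `get_trainable_parameters()`; in the notebook you would save that array and later feed the loaded copy to `set_trainable_parameters()`.

```python
import io
import numpy as np

# Stand-in for enc.get_trainable_parameters() (shape: n_layers x n_features)
params = np.random.default_rng(42).standard_normal((3, 4))

# Save to an .npz archive (in-memory here; pass a filename to persist to disk)
buf = io.BytesIO()
np.savez(buf, params=params)
buf.seek(0)

# Load the array back under the same key
restored = np.load(buf)["params"]
np.testing.assert_array_equal(restored, params)
print("Parameters survive an .npz roundtrip.")
```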
16. Equality & Hashing¶
# Same config + same seed = equal
enc1 = TrainableEncoding(n_features=4, seed=42)
enc2 = TrainableEncoding(n_features=4, seed=42)
assert enc1 == enc2
print(f"Same config + seed: {enc1 == enc2}")
# Same config, different seed = not equal (different params)
enc3 = TrainableEncoding(n_features=4, seed=43)
assert enc1 != enc3
print(f"Same config, diff seed: {enc1 == enc3}")
# After setting same params, they become equal
enc3.set_trainable_parameters(enc1.get_trainable_parameters())
assert enc1 == enc3
print(f"After syncing params: {enc1 == enc3}")
# Hash consistency
assert hash(enc1) == hash(enc2)
print(f"\nHash(enc1) == Hash(enc2): {hash(enc1) == hash(enc2)}")
# Set membership
s = {enc1, enc2, TrainableEncoding(n_features=8, seed=42)}
print(f"Set size (enc1, enc2, enc_8feat): {len(s)} (enc1 and enc2 deduplicated)")
assert len(s) == 2
Same config + seed: True
Same config, diff seed: False
After syncing params: True

Hash(enc1) == Hash(enc2): True
Set size (enc1, enc2, enc_8feat): 2 (enc1 and enc2 deduplicated)
# Equality with non-TrainableEncoding returns NotImplemented
result = enc1.__eq__("not an encoding")
print(f"Comparison with string: {result}")
assert result is NotImplemented
Comparison with string: NotImplemented
17. Protocol Compliance¶
TrainableEncoding implements the ResourceAnalyzable and EntanglementQueryable capability protocols.
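For readers unfamiliar with structural typing, capability protocols like these are typically built on `typing.Protocol` with `@runtime_checkable`, which makes `isinstance()` check for the presence of the required methods. The sketch below is illustrative only, not the library's actual definitions (`SupportsResourceSummary` and `Dummy` are hypothetical names).

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class SupportsResourceSummary(Protocol):
    """Illustrative capability protocol (hypothetical, not the library's)."""
    def resource_summary(self) -> dict: ...

class Dummy:
    def resource_summary(self) -> dict:
        return {"depth": 1}

# isinstance() checks are structural: Dummy never subclasses the protocol,
# but it provides the required method, so the check passes.
assert isinstance(Dummy(), SupportsResourceSummary)
assert not isinstance(42, SupportsResourceSummary)
print("Structural isinstance checks work via @runtime_checkable.")
```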
from encoding_atlas.core.protocols import (
    ResourceAnalyzable,
    EntanglementQueryable,
    DataTransformable,
    DataDependentResourceAnalyzable,
    is_resource_analyzable,
    is_entanglement_queryable,
    is_data_transformable,
)
from encoding_atlas.core.base import BaseEncoding
enc = TrainableEncoding(n_features=4, seed=42)
# TrainableEncoding IS:
print(f"isinstance(enc, BaseEncoding): {isinstance(enc, BaseEncoding)}")
print(f"isinstance(enc, ResourceAnalyzable): {isinstance(enc, ResourceAnalyzable)}")
print(f"isinstance(enc, EntanglementQueryable): {isinstance(enc, EntanglementQueryable)}")
# TrainableEncoding IS NOT:
print(f"isinstance(enc, DataTransformable): {isinstance(enc, DataTransformable)}")
print(f"isinstance(enc, DataDependentResourceAnalyzable): {isinstance(enc, DataDependentResourceAnalyzable)}")
# Type guard functions
print(f"\nis_resource_analyzable(enc): {is_resource_analyzable(enc)}")
print(f"is_entanglement_queryable(enc): {is_entanglement_queryable(enc)}")
print(f"is_data_transformable(enc): {is_data_transformable(enc)}")
isinstance(enc, BaseEncoding): True
isinstance(enc, ResourceAnalyzable): True
isinstance(enc, EntanglementQueryable): True
isinstance(enc, DataTransformable): False
isinstance(enc, DataDependentResourceAnalyzable): False

is_resource_analyzable(enc): True
is_entanglement_queryable(enc): True
is_data_transformable(enc): False
# Generic function that works with any ResourceAnalyzable encoding
def analyze_encoding(enc):
    """Generic analysis using protocol-based dispatch."""
    results = {"name": enc.__class__.__name__}
    if isinstance(enc, ResourceAnalyzable):
        summary = enc.resource_summary()
        breakdown = enc.gate_count_breakdown()
        results["total_gates"] = breakdown["total"]
        results["depth"] = summary["depth"]
    if isinstance(enc, EntanglementQueryable):
        pairs = enc.get_entanglement_pairs()
        results["entanglement_pairs"] = len(pairs)
    return results
result = analyze_encoding(enc)
pprint(result)
{'depth': 10,
'entanglement_pairs': 3,
'name': 'TrainableEncoding',
'total_gates': 22}
18. Integration with Analysis Tools¶
18.1 Resource Counting¶
from encoding_atlas.analysis import (
    count_resources,
    get_resource_summary,
    get_gate_breakdown,
    compare_resources,
)
enc = TrainableEncoding(n_features=4, n_layers=2, seed=42)
resources = count_resources(enc)
print("=== count_resources ===")
pprint(resources)
=== count_resources ===
{'cnot_count': 0,
'cz_count': 0,
'depth': 10,
'encoding_name': 'TrainableEncoding',
'gate_count': 22,
'gates_per_qubit': 5.5,
'hadamard_count': 0,
'is_data_dependent': False,
'n_qubits': 4,
'parameter_count': 16,
'rotation_gates': 0,
'single_qubit_gates': 16,
't_gate_count': 0,
'two_qubit_gates': 6,
'two_qubit_ratio': 0.2727272727272727}
# Compare with other encoding types
from encoding_atlas import AngleEncoding, IQPEncoding, HardwareEfficientEncoding
encodings = [
    AngleEncoding(n_features=4),
    IQPEncoding(n_features=4, reps=2),
    HardwareEfficientEncoding(n_features=4, reps=2),
    TrainableEncoding(n_features=4, n_layers=2, seed=42),
]
comparison = compare_resources(encodings)
print("=== Resource Comparison ===")
pprint(comparison)
=== Resource Comparison ===
{'depth': [1, 6, 4, 10],
'encoding_name': ['AngleEncoding',
'IQPEncoding',
'HardwareEfficientEncoding',
'TrainableEncoding'],
'gate_count': [4, 52, 14, 22],
'gates_per_qubit': [1.0, 13.0, 3.5, 5.5],
'n_qubits': [4, 4, 4, 4],
'parameter_count': [4, 20, 8, 16],
'single_qubit_gates': [4, 28, 8, 16],
'two_qubit_gates': [0, 24, 6, 6],
'two_qubit_ratio': [0.0,
0.46153846153846156,
0.42857142857142855,
0.2727272727272727]}
18.2 Simulability Analysis¶
from encoding_atlas.analysis import check_simulability, get_simulability_reason
# Entangling encoding: not simulable
enc_ent = TrainableEncoding(n_features=4, entanglement="linear", seed=42)
sim_ent = check_simulability(enc_ent)
print(f"With entanglement: {sim_ent}")
print(f" Reason: {get_simulability_reason(enc_ent)}")
# Non-entangling encoding: simulable
enc_sep = TrainableEncoding(n_features=4, entanglement="none", seed=42)
sim_sep = check_simulability(enc_sep)
print(f"\nWithout entanglement: {sim_sep}")
print(f" Reason: {get_simulability_reason(enc_sep)}")
With entanglement: {'is_simulable': False, 'simulability_class': 'conditionally_simulable', 'reason': 'Linear entanglement structure may allow tensor network simulation if entanglement entropy is bounded', 'details': {'is_entangling': True, 'is_clifford': False, 'is_matchgate': False, 'entanglement_pattern': 'linear', 'two_qubit_gate_count': 6, 'n_qubits': 4, 'n_features': 4, 'declared_simulability': 'not_simulable', 'encoding_name': 'TrainableEncoding', 'has_non_clifford_gates': True, 'has_t_gates': False, 'has_parameterized_rotations': True}, 'recommendations': ['Statevector simulation feasible (4 qubits, ~256 bytes memory)', 'Consider MPS (Matrix Product State) simulation', 'May be efficient if entanglement entropy is bounded', 'Tensor network methods scale with bond dimension']}
Reason: Not simulable: Linear entanglement structure may allow tensor network simulation if entanglement entropy is bounded
Without entanglement: {'is_simulable': True, 'simulability_class': 'simulable', 'reason': 'Encoding produces only product states (no entanglement)', 'details': {'is_entangling': False, 'is_clifford': False, 'is_matchgate': False, 'entanglement_pattern': 'none', 'two_qubit_gate_count': 0, 'n_qubits': 4, 'n_features': 4, 'declared_simulability': 'simulable', 'encoding_name': 'TrainableEncoding', 'has_non_clifford_gates': True, 'has_t_gates': False, 'has_parameterized_rotations': True}, 'recommendations': ['Can be simulated as independent single-qubit systems', 'Classical computation scales linearly with qubit count O(n)', 'Use standard numerical linear algebra for efficient simulation']}
Reason: Simulable: Encoding produces only product states (no entanglement)
18.3 Trainability Analysis (Barren Plateau Detection)¶
This is particularly relevant for TrainableEncoding since it has learnable parameters.
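As background, the well-known barren-plateau result states that for sufficiently deep, randomly initialized parameterized circuits (those approaching unitary 2-designs), the variance of cost-function gradients vanishes exponentially in the qubit count $n$:

$$\mathrm{Var}\left[\partial_{\theta_k} C\right] \in O\!\left(b^{-n}\right), \qquad b > 1,$$

where $C$ is the cost and $\theta_k$ any trainable parameter. This is why shallow, structured encodings tend to remain trainable while deep random ones do not, and it motivates the numerical gradient-variance estimates in this section.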
from encoding_atlas.analysis import estimate_trainability
if HAS_PENNYLANE:
    # Compare trainability across layer depths
    print(f"{'Config':<40} {'Trainability':>13} {'Estimate':>10}")
    print("-" * 65)
    for n_layers in [1, 2, 3, 4]:
        enc = TrainableEncoding(n_features=4, n_layers=n_layers, seed=42)
        # Theoretical estimate from properties
        theory = enc.properties.trainability_estimate
        # Numerical estimate (using fewer samples for demo speed)
        numerical = estimate_trainability(enc, n_samples=50, seed=42)
        label = f"n_features=4, n_layers={n_layers}"
        print(f"{label:<40} {numerical:>13.4f} {theory:>10.4f}")
else:
    print("PennyLane not installed. Showing theoretical estimates only.")
    for n_layers in [1, 2, 3, 4, 8, 15]:
        enc = TrainableEncoding(n_features=4, n_layers=n_layers, seed=42)
        print(f"n_layers={n_layers}: trainability_estimate={enc.properties.trainability_estimate:.4f}")
Config                                    Trainability   Estimate
-----------------------------------------------------------------
n_features=4, n_layers=1                        0.0650     0.8200
n_features=4, n_layers=2                        0.0262     0.7900
n_features=4, n_layers=3                        0.0396     0.7600
n_features=4, n_layers=4                        0.0910     0.7300
if HAS_PENNYLANE:
    # Detailed trainability analysis with barren plateau risk
    enc = TrainableEncoding(n_features=4, n_layers=2, seed=42)
    result = estimate_trainability(
        enc, n_samples=50, seed=42, return_details=True
    )
    print("=== Detailed Trainability Analysis ===")
    print(f"Trainability estimate: {result['trainability_estimate']:.4f}")
    print(f"Gradient variance: {result['gradient_variance']:.2e}")
    print(f"Barren plateau risk: {result['barren_plateau_risk']}")
    print(f"Effective dimension: {result['effective_dimension']:.1f}")
    print(f"Samples used: {result['n_successful_samples']}/{result['n_samples']}")
    print(f"Failed samples: {result['n_failed_samples']}")
    print(f"Per-parameter variance: {result['per_parameter_variance']}")
else:
    print("PennyLane not installed, skipping numerical analysis.")
=== Detailed Trainability Analysis ===
Trainability estimate: 0.0262
Gradient variance: 1.56e-03
Barren plateau risk: low
Effective dimension: 4.0
Samples used: 50/50
Failed samples: 0
Per-parameter variance: [0.00134702 0.00022044 0.00358347 0.00107941]
19. Visualization & Comparison¶
from encoding_atlas.visualization import compare_encodings
# ASCII comparison including trainable encoding
result = compare_encodings(
    ['angle', 'iqp', 'hardware_efficient', 'trainable'],
    n_features=4
)
┌────────────────────────────────────────────────────────────────────────────┐
│ ENCODING COMPARISON (n_features=4)                                         │
├────────────────────────────────────────────────────────────────────────────┤
│                                                                            │
│  QUBITS                                  CIRCUIT DEPTH                     │
│  ──────                                  ─────────────                     │
│  angle               ███████████████ 4   angle               █             │
│  iqp                 ███████████████ 4   iqp                 █████████     │
│  hardware_efficient  ███████████████ 4   hardware_efficient  ██████        │
│  trainable           ███████████████ 4   trainable           █████████████ │
│                                                                            │
│  GATE COUNT                              TWO-QUBIT GATES                   │
│  ──────────                              ───────────────                   │
│  angle               █               4   angle                             │
│  iqp                 ███████████████ 52  iqp                 █████████████ │
│  hardware_efficient  ████            14  hardware_efficient  ███           │
│  trainable           ██████          22  trainable           ███           │
│                                                                            │
│  PROPERTIES                                                                │
│  ──────────                                                                │
│  Encoding             Entangling  Simulability    Trainability             │
│  ───────────────────────────────────────────────────────────────────────   │
│  angle                ✗ No        Simulable       ███████ 0.9              │
│  iqp                  ✓ Yes       Not Simulable   █████ 0.7                │
│  hardware_efficient   ✓ Yes       Not Simulable   ██████ 0.8               │
│  trainable            ✓ Yes       Not Simulable   ██████ 0.8               │
│                                                                            │
└────────────────────────────────────────────────────────────────────────────┘
# Matplotlib comparison (if matplotlib is available)
try:
    import matplotlib
    matplotlib.use('Agg')  # non-interactive backend for notebooks
    fig = compare_encodings(
        ['angle', 'iqp', 'hardware_efficient', 'trainable'],
        n_features=4,
        output='matplotlib'
    )
    print(f"Figure created: {fig.get_size_inches()} inches")
except ImportError:
    print("matplotlib not installed, skipping graphical comparison.")
Figure created: [12. 10.] inches
20. Registry System¶
TrainableEncoding is registered in the global encoding registry and can be accessed by name.
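Such name-based registries are commonly implemented with a class decorator that maps one or more aliases to the same class. The sketch below is illustrative only (the `register`, `create`, `_REGISTRY`, and `DemoEncoding` names are hypothetical, not the library's internals).

```python
# Minimal name-based registry sketch (illustrative; not the library's code)
_REGISTRY = {}

def register(*names):
    """Class decorator that registers a class under one or more aliases."""
    def decorator(cls):
        for name in names:
            _REGISTRY[name] = cls
        return cls
    return decorator

def create(name, **kwargs):
    """Look up a registered class by name and instantiate it."""
    try:
        return _REGISTRY[name](**kwargs)
    except KeyError:
        raise ValueError(f"Unknown encoding: {name!r}") from None

@register("demo", "demo_encoding")
class DemoEncoding:
    def __init__(self, n_features=4):
        self.n_features = n_features

# Both aliases resolve to the same class
enc = create("demo", n_features=8)
assert enc.n_features == 8
assert _REGISTRY["demo_encoding"] is DemoEncoding
print("Registry aliases resolve correctly.")
```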
from encoding_atlas import list_encodings, get_encoding
# All available encodings
all_encodings = list_encodings()
print(f"Registered encodings ({len(all_encodings)}):")
for name in all_encodings:
    print(f" - {name}")
# TrainableEncoding is registered under two names
assert 'trainable' in all_encodings
assert 'trainable_encoding' in all_encodings
Registered encodings (26):
 - amplitude
 - angle
 - angle_ry
 - basis
 - covariant
 - covariant_feature_map
 - cyclic_equivariant
 - cyclic_equivariant_feature_map
 - data_reuploading
 - hamiltonian
 - hamiltonian_encoding
 - hardware_efficient
 - higher_order_angle
 - iqp
 - pauli_feature_map
 - qaoa
 - qaoa_encoding
 - so2_equivariant
 - so2_equivariant_feature_map
 - swap_equivariant
 - swap_equivariant_feature_map
 - symmetry_inspired
 - symmetry_inspired_feature_map
 - trainable
 - trainable_encoding
 - zz_feature_map
# Create via registry
enc_from_registry = get_encoding('trainable', n_features=4, n_layers=3, seed=42)
print(f"Type: {type(enc_from_registry).__name__}")
print(f"Config: {repr(enc_from_registry)}")
# Also works with the longer name
enc2 = get_encoding('trainable_encoding', n_features=4, n_layers=3, seed=42)
assert enc_from_registry == enc2
print("\nBoth registry names produce identical encodings.")
Type: TrainableEncoding
Config: TrainableEncoding(n_features=4, n_layers=3, data_rotation='Y', trainable_rotation='Y', entanglement='linear', initialization='xavier')

Both registry names produce identical encodings.
21. Logging & Debugging¶
TrainableEncoding uses Python's standard logging module for optional debug output.
import logging
# Enable debug logging for the trainable module
logger = logging.getLogger('encoding_atlas.encodings.trainable')
logger.setLevel(logging.DEBUG)
# Add a handler to see the output
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter('%(levelname)s: %(message)s'))
logger.addHandler(handler)
# Now operations will produce debug output
print("--- Creating encoding ---")
enc = TrainableEncoding(n_features=4, n_layers=2, seed=42)
print("\n--- Setting parameters ---")
enc.set_trainable_parameters(np.ones((2, 4)) * 0.5)
print("\n--- Resetting parameters ---")
enc.reset_parameters()
print("\n--- Gate count breakdown ---")
_ = enc.gate_count_breakdown()
# Clean up: remove handler and reset level
logger.removeHandler(handler)
logger.setLevel(logging.WARNING)
DEBUG: Computed entanglement pairs for linear topology: 3 pairs
DEBUG: Initialized parameters: shape=(2, 4), strategy=xavier, mean=-0.1554, std=0.4831
DEBUG: TrainableEncoding initialized: n_features=4, n_layers=2, data_rotation='Y', trainable_rotation='Y', entanglement='linear', initialization='xavier', n_trainable_params=8, seed=42
DEBUG: Updated trainable parameters: mean=0.5000, std=0.0000
DEBUG: Initialized parameters: shape=(2, 4), strategy=xavier, mean=-0.1554, std=0.4831
DEBUG: Reset trainable parameters with seed=42
DEBUG: Gate breakdown: data=8, trainable=8, CNOT=6, total=22
--- Creating encoding ---

--- Setting parameters ---

--- Resetting parameters ---

--- Gate count breakdown ---
22. Practical Workflow: Variational QML Training Loop¶
22.1 Training Loop¶
if HAS_PENNYLANE:
    # Simple VQC classification task
    enc = TrainableEncoding(
        n_features=2, n_layers=2,
        data_rotation="Y", trainable_rotation="Y",
        entanglement="linear",
        initialization="xavier", seed=42
    )
    dev = qml.device("default.qubit", wires=2)

    # Simple cost function: probability of measuring |00>
    def cost_function(params, x):
        """Compute cost for given parameters and input."""
        enc.set_trainable_parameters(params, validate=False)
        circuit_fn = enc.get_circuit(x, backend="pennylane")

        @qml.qnode(dev)
        def circuit():
            circuit_fn()
            return qml.probs(wires=[0, 1])

        probs = circuit()
        return float(probs[0])  # Probability of |00>

    # Gradient via parameter-shift rule
    def compute_gradient(params, x, shift=np.pi/2):
        """Compute gradient using parameter-shift rule."""
        grad = np.zeros_like(params)
        for i in range(params.shape[0]):
            for j in range(params.shape[1]):
                params_plus = params.copy()
                params_plus[i, j] += shift
                params_minus = params.copy()
                params_minus[i, j] -= shift
                grad[i, j] = (cost_function(params_plus, x) -
                              cost_function(params_minus, x)) / 2
        return grad

    # Training loop
    x_train = np.array([0.5, 1.0])
    params = enc.get_trainable_parameters().copy()
    # Save the starting cost now: cost_function mutates enc's parameters,
    # so it cannot be recomputed from enc's state after training.
    initial_cost = cost_function(params, x_train)
    learning_rate = 0.1
    n_steps = 15
    print(f"{'Step':>4} {'Cost':>8} {'Param Mean':>10} {'Param Std':>10}")
    print("-" * 40)
    for step in range(n_steps):
        cost = cost_function(params, x_train)
        if step % 3 == 0:
            print(f"{step:>4} {cost:>8.4f} {np.mean(params):>10.4f} {np.std(params):>10.4f}")
        grad = compute_gradient(params, x_train)
        params = params + learning_rate * grad  # Maximize P(|00>)
    final_cost = cost_function(params, x_train)
    print(f"{n_steps:>4} {final_cost:>8.4f} {np.mean(params):>10.4f} {np.std(params):>10.4f}")
    print(f"\nTraining improved cost from {initial_cost:.4f} to {final_cost:.4f}")
else:
    print("PennyLane not installed, skipping training demo.")
Step     Cost Param Mean  Param Std
----------------------------------------
   0   0.3311     0.1690     0.5470
   3   0.4297     0.1119     0.5848
   6   0.5331     0.0509     0.6277
   9   0.6320    -0.0113     0.6726
  12   0.7186    -0.0716     0.7164
  15   0.7887    -0.1278     0.7567

Training improved cost from 0.3311 to 0.7887
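The `compute_gradient` routine above is an instance of the parameter-shift rule: for a gate generated by a Pauli operator, i.e. $U(\theta) = e^{-i\theta P/2}$ as for the RX/RY/RZ rotations used here, the derivative of an expectation-value cost is exact rather than a finite-difference approximation:

$$\frac{\partial f}{\partial \theta} = \frac{f(\theta + \pi/2) - f(\theta - \pi/2)}{2},$$

which matches the `shift=np.pi/2` and the division by 2 in the code. This is the standard way to obtain gradients on real quantum hardware, where analytic backpropagation through the circuit is not available.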
22.2 Hyperparameter Comparison¶
# Compare encodings with different hyperparameters
print(f"{'Configuration':<50} {'Depth':>6} {'Gates':>6} {'Params':>7} {'Trainability':>13}")
print("=" * 90)
hyperparams = [
    {"n_features": 4, "n_layers": 1, "entanglement": "linear"},
    {"n_features": 4, "n_layers": 2, "entanglement": "linear"},
    {"n_features": 4, "n_layers": 3, "entanglement": "linear"},
    {"n_features": 4, "n_layers": 2, "entanglement": "circular"},
    {"n_features": 4, "n_layers": 2, "entanglement": "full"},
    {"n_features": 4, "n_layers": 2, "entanglement": "none"},
    {"n_features": 4, "n_layers": 2, "entanglement": "linear", "data_rotation": "X", "trainable_rotation": "Z"},
]
for hp in hyperparams:
    enc = TrainableEncoding(**hp, seed=42)
    bd = enc.gate_count_breakdown()
    trainability = enc.properties.trainability_estimate
    label = ", ".join(f"{k}={v}" for k, v in hp.items())
    print(f"{label:<50} {enc.depth:>6} {bd['total']:>6} "
          f"{enc.n_trainable_parameters:>7} {trainability:>13.4f}")
Configuration                                       Depth  Gates  Params  Trainability
==========================================================================================
n_features=4, n_layers=1, entanglement=linear           5     11       4        0.8200
n_features=4, n_layers=2, entanglement=linear          10     22       8        0.7900
n_features=4, n_layers=3, entanglement=linear          15     33      12        0.7600
n_features=4, n_layers=2, entanglement=circular        12     24       8        0.7900
n_features=4, n_layers=2, entanglement=full            10     28       8        0.7900
n_features=4, n_layers=2, entanglement=none             4     16       8        0.7900
n_features=4, n_layers=2, entanglement=linear, data_rotation=X, trainable_rotation=Z     10     22       8        0.7900
22.3 Transfer Learning Pattern¶
Pre-trained parameters can be saved and transferred to related tasks.
# "Train" an encoding (simulated)
enc_pretrained = TrainableEncoding(n_features=4, n_layers=2, seed=42)
trained_params = enc_pretrained.get_trainable_parameters() + 0.3 # Simulate training
enc_pretrained.set_trainable_parameters(trained_params)
# Save for later
saved_state = pickle.dumps(enc_pretrained)
# Load and transfer to a new task
enc_transfer = pickle.loads(saved_state)
np.testing.assert_array_almost_equal(
    enc_transfer.get_trainable_parameters(),
    trained_params
)
print("Pre-trained parameters transferred successfully.")
print(f"Transferred params mean: {np.mean(enc_transfer.get_trainable_parameters()):.4f}")
Pre-trained parameters transferred successfully.
Transferred params mean: 0.1446
22.4 Thread Safety¶
from concurrent.futures import ThreadPoolExecutor, as_completed

if HAS_QISKIT:
    enc = TrainableEncoding(n_features=4, n_layers=2, seed=42)
    n_threads = 8
    n_per_thread = 20
    errors = []

    def generate_circuits(thread_id):
        results = []
        for i in range(n_per_thread):
            x = np.random.randn(4)
            circuit = enc.get_circuit(x, backend="qiskit")
            results.append(circuit.num_qubits)
        return results

    with ThreadPoolExecutor(max_workers=n_threads) as executor:
        futures = [executor.submit(generate_circuits, i) for i in range(n_threads)]
        all_results = [f.result() for f in as_completed(futures)]

    total = sum(len(r) for r in all_results)
    print(f"Generated {total} circuits across {n_threads} threads.")
    assert total == n_threads * n_per_thread
    print("Thread safety confirmed: no errors during concurrent circuit generation.")
else:
    print("Qiskit not installed, skipping thread safety demo.")
Generated 160 circuits across 8 threads.
Thread safety confirmed: no errors during concurrent circuit generation.
22.5 Encoding Recommendation¶
from encoding_atlas.guide import recommend_encoding
rec = recommend_encoding(n_features=4, n_samples=500, task="classification")
print(f"Recommended encoding: {rec.encoding_name}")
print(f"Explanation: {rec.explanation}")
print(f"Alternatives: {rec.alternatives}")
print(f"Confidence: {rec.confidence}")
Recommended encoding: iqp
Explanation: IQP encoding creates highly entangled states with provable classical simulation hardness, well-suited for kernel methods
Alternatives: ['data_reuploading', 'zz_feature_map', 'pauli_feature_map']
Confidence: 0.74
23. Summary¶
This notebook demonstrated every feature of TrainableEncoding in the Quantum Encoding Atlas library:
Core Features¶
- Constructor: 7 parameters with full validation (n_features, n_layers, data_rotation, trainable_rotation, entanglement, initialization, seed)
- Properties: n_qubits, depth, n_trainable_parameters, config, properties (EncodingProperties)
- Parameter Management: get/set/reset with validation, flat array auto-reshape, cache invalidation
- 5 Initialization Strategies: xavier, he, zeros, random, small_random
- 4 Entanglement Patterns: linear, circular, full, none
- 3 Rotation Axes: X, Y, Z (for both data and trainable gates, 9 combinations)
Circuit Generation¶
- 3 Backends: PennyLane (callable), Qiskit (QuantumCircuit), Cirq (cirq.Circuit)
- Batch Processing: Sequential and parallel (ThreadPoolExecutor)
- Input Validation: Shape, NaN/Inf, type checking (strings, complex), defensive copying
Analysis & Integration¶
- Resource Analysis: gate_count_breakdown(), resource_summary()
- Protocol Compliance: ResourceAnalyzable, EntanglementQueryable
- Analysis Tools: trainability estimation, simulability checking, resource comparison
- Visualization: compare_encodings (ASCII and matplotlib)
- Registry: Accessible via get_encoding('trainable', ...)
Robustness¶
- Reproducibility: Deterministic with seed
- Serialization: Full pickle roundtrip support
- Copy: Deep independent copies via .copy()
- Equality & Hashing: Configuration + parameter equality, set-compatible hashing
- Thread Safety: Concurrent circuit generation, thread-safe property caching
- Numerical Stability: Handles extreme input values (very small, very large, near boundaries)
- Warnings: Deep circuit and large parameter count alerts
- Logging: Full debug logging support
Practical Workflows¶
- Variational QML: Training loop with parameter-shift gradients
- Hyperparameter Search: Systematic configuration comparison
- Transfer Learning: Save/load pre-trained parameters
- Encoding Selection: Guided recommendations via recommend_encoding()
print("Notebook complete. All TrainableEncoding features demonstrated.")
Notebook complete. All TrainableEncoding features demonstrated.