Amplitude Encoding - Comprehensive Guide¶
Part of the Quantum Encoding Atlas library
This notebook provides an exhaustive, hands-on guide to AmplitudeEncoding — the encoding that maps classical feature vectors directly into the amplitudes of a quantum state, achieving exponential data compression.
Table of Contents¶
- Overview & Mathematical Background
- Installation & Imports
- Creating an AmplitudeEncoding Instance
- Core Properties
- Circuit Generation — Single Sample
- 5.1 PennyLane Backend
- 5.2 Qiskit Backend
- 5.3 Cirq Backend
- Batch Circuit Generation
- Normalization Behavior
- Data Transformation (transform_input)
- Zero-Padding for Non-Power-of-2 Features
- Resource Analysis
- 10.1 resource_summary()
- 10.2 gate_count_breakdown()
- 10.3 properties (EncodingProperties)
- Encoding Registry & Discovery
- Capability Protocols
- Cross-Backend State Verification
- Statevector Simulation & Quantum State Inspection
- Quantum Information Measures
- Analysis Tools
- 16.1 Simulability Analysis
- 16.2 Resource Counting (Analysis Module)
- 16.3 Expressibility
- 16.4 Entanglement Capability
- 16.5 Trainability & Barren Plateaus
- Compression Ratio Scaling
- Equality, Hashing & Collections
- Serialization (Pickle)
- Thread Safety & Concurrency
- Edge Cases & Error Handling
- Qubit Ordering Conventions
- Logging & Debugging
- Comparison with Other Encodings
- Summary & Best Practices
1. Overview & Mathematical Background¶
Amplitude encoding embeds an $n$-dimensional classical vector $\mathbf{x} = (x_0, x_1, \ldots, x_{n-1})$ into the amplitudes of a quantum state:
$$|\psi(\mathbf{x})\rangle = \frac{1}{\|\mathbf{x}\|} \sum_{i=0}^{n-1} x_i \, |i\rangle$$
where $|i\rangle$ denotes the computational basis state corresponding to the binary representation of index $i$.
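The mapping can be previewed with a few lines of NumPy — a sketch of the math only, not the library's implementation: normalize to unit L2 norm, zero-pad to the next power of two, and read the result off as amplitudes.

```python
import numpy as np

def amplitude_encode(x):
    """Classical preview of |psi(x)>: normalized, zero-padded amplitudes."""
    x = np.asarray(x, dtype=float)
    n_qubits = max(1, int(np.ceil(np.log2(len(x)))))
    amps = np.zeros(2 ** n_qubits)
    amps[: len(x)] = x / np.linalg.norm(x)  # amplitude of |i> is x_i / ||x||
    return amps

print(amplitude_encode([3.0, 4.0, 0.0]))  # 3 features -> 2 qubits -> [0.6, 0.8, 0.0, 0.0]
```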
Key characteristics¶
| Property | Value |
|---|---|
| Qubits required | $\lceil \log_2 n \rceil$ |
| Compression | Exponential — $n$ features in $\lceil \log_2 n \rceil$ qubits |
| Circuit depth | $O(2^q)$ for $q = \lceil \log_2 n \rceil$ qubits — linear in the feature count $n$, but exponential in the qubit count |
| Entangling | Yes — general state preparation requires multi-qubit entanglement |
| Classically simulable | No — exponential cost for classical simulation |
| NISQ feasibility | Limited — deep circuits accumulate noise on near-term hardware |
Use cases¶
- HHL algorithm (quantum linear solvers)
- Quantum kernel methods / QSVM
- Quantum PCA
- Quantum random access memory (QRAM)
- Quantum neural networks (input layer)
- Quantum sampling / Monte Carlo
Fundamental trade-off¶
Amplitude encoding maximizes compression (exponential) at the cost of circuit depth (also exponential). This makes it ideal for fault-tolerant quantum computing but challenging for NISQ devices.
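The trade-off is easy to quantify. The sketch below tabulates qubit count against a theoretical state-preparation depth taken as $2^q$, per the table above (an illustration, not a library call):

```python
import math

# Qubits shrink logarithmically with the feature count, while the
# (theoretical) state-preparation depth ~ 2**qubits grows linearly with it.
print(f"{'features':>9} {'qubits':>7} {'~depth':>8}")
for n in (16, 256, 4096, 2 ** 20):
    q = max(1, math.ceil(math.log2(n)))
    print(f"{n:>9} {q:>7} {2 ** q:>8}")
```

A million features fit in 20 qubits, but the depth column grows right back to roughly a million operations.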
2. Installation & Imports¶
# Install the library (uncomment if not already installed)
# !pip install encoding-atlas
# For all three backends:
# !pip install pennylane qiskit qiskit-aer cirq-core
import numpy as np
import warnings
# Core imports
from encoding_atlas import (
AmplitudeEncoding,
BaseEncoding,
EncodingProperties,
get_encoding,
list_encodings,
)
# Capability protocols
from encoding_atlas.core.protocols import (
ResourceAnalyzable,
DataTransformable,
EntanglementQueryable,
DataDependentResourceAnalyzable,
)
# Analysis tools
from encoding_atlas.analysis import (
# Simulation
simulate_encoding_statevector,
simulate_encoding_statevectors_batch,
validate_encoding_for_analysis,
validate_statevector,
# Quantum operations
compute_fidelity,
compute_purity,
compute_linear_entropy,
compute_von_neumann_entropy,
partial_trace_single_qubit,
partial_trace_subsystem,
# Resource analysis
count_resources,
get_resource_summary,
get_gate_breakdown,
compare_resources,
estimate_execution_time,
# Simulability
check_simulability,
get_simulability_reason,
is_clifford_circuit,
is_matchgate_circuit,
# Expressibility
compute_expressibility,
compute_fidelity_distribution,
compute_haar_distribution,
# Entanglement
compute_entanglement_capability,
compute_meyer_wallach,
compute_meyer_wallach_with_breakdown,
compute_scott_measure,
# Trainability
estimate_trainability,
compute_gradient_variance,
detect_barren_plateau,
# Utilities
generate_random_parameters,
create_rng,
)
print("All imports successful!")
print(f"encoding-atlas version: {__import__('encoding_atlas').__version__}")
All imports successful! encoding-atlas version: 0.2.0
3. Creating an AmplitudeEncoding Instance¶
The constructor takes two parameters:
- n_features (int, required): Number of classical features to encode. Must be a positive integer.
- normalize (bool, default=True): Whether to automatically normalize input vectors to unit L2 norm.
# Basic instantiation
enc = AmplitudeEncoding(n_features=4)
print(f"Encoding: {enc}")
print(f" n_features = {enc.n_features}")
print(f" n_qubits = {enc.n_qubits}")
print(f" depth = {enc.depth}")
print(f" normalize = {enc.normalize}")
Encoding: AmplitudeEncoding(n_features=4, normalize=True)
 n_features = 4
 n_qubits = 2
 depth = 4
 normalize = True
# Instantiation with normalization disabled
enc_no_norm = AmplitudeEncoding(n_features=4, normalize=False)
print(f"Encoding: {enc_no_norm}")
print(f" normalize = {enc_no_norm.normalize}")
Encoding: AmplitudeEncoding(n_features=4, normalize=False) normalize = False
# Various feature counts
print("Feature count -> Qubits (compression ratio):")
print("-" * 50)
for n in [1, 2, 3, 4, 5, 8, 16, 32, 64, 128, 256, 1024]:
e = AmplitudeEncoding(n_features=n)
ratio = n / e.n_qubits
print(f" n_features={n:>5d} -> n_qubits={e.n_qubits:>3d} "
f"(compression: {ratio:.1f}x)")
Feature count -> Qubits (compression ratio):
--------------------------------------------------
 n_features=    1 -> n_qubits=  1 (compression: 1.0x)
 n_features=    2 -> n_qubits=  1 (compression: 2.0x)
 n_features=    3 -> n_qubits=  2 (compression: 1.5x)
 n_features=    4 -> n_qubits=  2 (compression: 2.0x)
 n_features=    5 -> n_qubits=  3 (compression: 1.7x)
 n_features=    8 -> n_qubits=  3 (compression: 2.7x)
 n_features=   16 -> n_qubits=  4 (compression: 4.0x)
 n_features=   32 -> n_qubits=  5 (compression: 6.4x)
 n_features=   64 -> n_qubits=  6 (compression: 10.7x)
 n_features=  128 -> n_qubits=  7 (compression: 18.3x)
 n_features=  256 -> n_qubits=  8 (compression: 32.0x)
 n_features= 1024 -> n_qubits= 10 (compression: 102.4x)
4. Core Properties¶
AmplitudeEncoding exposes several read-only properties inherited from BaseEncoding plus its own.
enc = AmplitudeEncoding(n_features=8)
# --- Inherited from BaseEncoding ---
print("=== Inherited Properties ===")
print(f"n_features : {enc.n_features}")
print(f"config : {enc.config}")
# config returns a COPY (safe to modify without affecting the encoding)
cfg = enc.config
cfg['normalize'] = 'tampered'
print(f"config after external modification: {enc.config}") # unchanged
print()
print("=== AmplitudeEncoding-Specific ===")
print(f"n_qubits : {enc.n_qubits} (ceil(log2({enc.n_features})) = {int(np.ceil(np.log2(enc.n_features)))})")
print(f"depth : {enc.depth} (2^{enc.n_qubits} = {2**enc.n_qubits})")
print(f"normalize : {enc.normalize}")
=== Inherited Properties ===
n_features : 8
config : {'normalize': True}
config after external modification: {'normalize': True}
=== AmplitudeEncoding-Specific ===
n_qubits : 3 (ceil(log2(8)) = 3)
depth : 8 (2^3 = 8)
normalize : True
# The `properties` attribute is lazily computed and thread-safe
props = enc.properties
print(f"Type: {type(props).__name__} (frozen dataclass)")
print(f" n_qubits : {props.n_qubits}")
print(f" depth : {props.depth}")
print(f" gate_count : {props.gate_count}")
print(f" single_qubit_gates : {props.single_qubit_gates}")
print(f" two_qubit_gates : {props.two_qubit_gates}")
print(f" parameter_count : {props.parameter_count}")
print(f" is_entangling : {props.is_entangling}")
print(f" simulability : {props.simulability}")
print(f" trainability_est. : {props.trainability_estimate}")
print(f" notes : {props.notes}")
Type: EncodingProperties (frozen dataclass)
 n_qubits : 3
 depth : 8
 gate_count : 14
 single_qubit_gates : 8
 two_qubit_gates : 6
 parameter_count : 8
 is_entangling : True
 simulability : not_simulable
 trainability_est. : 0.5
 notes : Exponential compression: 8 features in 3 qubits. Circuit depth O(2^n) limits NISQ applicability.
# Properties can be converted to a dictionary
props_dict = props.to_dict()
print("Properties as dict:")
for k, v in props_dict.items():
print(f" {k}: {v}")
Properties as dict:
 n_qubits: 3
 depth: 8
 gate_count: 14
 single_qubit_gates: 8
 two_qubit_gates: 6
 parameter_count: 8
 is_entangling: True
 simulability: not_simulable
 expressibility: None
 entanglement_capability: None
 trainability_estimate: 0.5
 noise_resilience_estimate: None
 notes: Exponential compression: 8 features in 3 qubits. Circuit depth O(2^n) limits NISQ applicability.
5. Circuit Generation — Single Sample¶
The get_circuit() method generates a quantum circuit for a single input vector.
It supports three backends: PennyLane, Qiskit, and Cirq.
5.1 PennyLane Backend¶
Returns a callable (closure) that applies the amplitude embedding when called within a PennyLane QNode context.
import pennylane as qml
enc = AmplitudeEncoding(n_features=4)
x = np.array([1.0, 2.0, 3.0, 4.0])
# Generate PennyLane circuit
circuit_fn = enc.get_circuit(x, backend='pennylane')
print(f"Type: {type(circuit_fn)}")
print(f"Callable: {callable(circuit_fn)}")
# Use it in a QNode to get the statevector
dev = qml.device('default.qubit', wires=enc.n_qubits)
@qml.qnode(dev)
def get_state():
circuit_fn()
return qml.state()
state = get_state()
print(f"\nStatevector: {state}")
print(f"Probabilities: {np.abs(state)**2}")
print(f"Sum of probabilities: {np.sum(np.abs(state)**2):.10f}")
Type: <class 'function'>
Callable: True
Statevector: [0.18257419+0.j 0.36514837+0.j 0.54772256+0.j 0.73029674+0.j]
Probabilities: [0.03333333 0.13333333 0.3        0.53333333]
Sum of probabilities: 1.0000000000
5.2 Qiskit Backend¶
Returns a QuantumCircuit object. Note that Qiskit uses LSB (least significant bit) qubit ordering internally, but the library handles the conversion transparently.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector
enc = AmplitudeEncoding(n_features=4)
x = np.array([1.0, 2.0, 3.0, 4.0])
# Generate Qiskit circuit
qc = enc.get_circuit(x, backend='qiskit')
print(f"Type: {type(qc).__name__}")
print(f"Num qubits: {qc.num_qubits}")
print(f"Circuit name: {qc.name}")
# Draw the circuit
print("\nCircuit diagram:")
print(qc.draw())
Type: QuantumCircuit
Num qubits: 2
Circuit name: AmplitudeEncoding
Circuit diagram:
┌─────────────────────────────────────────────┐
q_0: ┤0 ├
│ Initialize(0.18257,0.54772,0.36515,0.7303) │
q_1: ┤1 ├
└─────────────────────────────────────────────┘
# Get the statevector from Qiskit
sv = Statevector(qc)
# Note: Qiskit uses LSB ordering internally, so the raw statevector
# indices may differ from PennyLane/Cirq. The analysis module handles
# this conversion automatically.
print(f"Qiskit statevector (raw): {sv.data}")
print(f"Probabilities: {sv.probabilities()}")
Qiskit statevector (raw): [0.18257419+0.j 0.54772256+0.j 0.36514837+0.j 0.73029674+0.j] Probabilities: [0.03333333 0.3 0.13333333 0.53333333]
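What that conversion amounts to can be sketched by hand: reversing the bit order of each basis-state index maps Qiskit's little-endian statevector onto the MSB convention used by the other backends (an illustration of the endianness swap, not the library's code).

```python
import numpy as np

def reverse_bit_order(state, n_qubits):
    """Permute a statevector by reversing each index's bit string
    (little-endian <-> big-endian; the permutation is its own inverse)."""
    dim = 2 ** n_qubits
    perm = [int(format(i, f"0{n_qubits}b")[::-1], 2) for i in range(dim)]
    return state[perm]

# The raw Qiskit amplitudes from the cell above, for x = [1, 2, 3, 4]:
qiskit_raw = np.array([1.0, 3.0, 2.0, 4.0]) / np.sqrt(30.0)
msb = reverse_bit_order(qiskit_raw, n_qubits=2)
print(msb)  # matches the PennyLane/Cirq ordering [0.1826, 0.3651, 0.5477, 0.7303]
```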
5.3 Cirq Backend¶
Returns a cirq.Circuit object. The Cirq backend constructs a custom unitary gate via QR decomposition.
Memory Note: At 12 qubits the dense unitary matrix already occupies 256 MB, and it quadruples with each additional qubit. A warning is issued automatically.
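The memory figure is simple arithmetic, assuming the unitary is stored densely as complex128 (16 bytes per entry) with $(2^q)^2$ entries:

```python
import numpy as np

# Dense unitary on q qubits: (2**q) x (2**q) complex128 entries, 16 bytes each.
for q in (8, 10, 12, 14):
    dim = 2 ** q
    nbytes = dim * dim * np.dtype(np.complex128).itemsize
    print(f"{q:>2} qubits: {nbytes / 2**20:10.1f} MiB")
```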
import cirq
enc = AmplitudeEncoding(n_features=4)
x = np.array([1.0, 2.0, 3.0, 4.0])
# Generate Cirq circuit
cirq_circuit = enc.get_circuit(x, backend='cirq')
print(f"Type: {type(cirq_circuit).__name__}")
print(f"Qubits: {sorted(cirq_circuit.all_qubits())}")
print(f"Num qubits: {len(cirq_circuit.all_qubits())}")
# Circuit diagram
print(f"\nCircuit diagram:\n{cirq_circuit}")
Type: Circuit
Qubits: [cirq.LineQubit(0), cirq.LineQubit(1)]
Num qubits: 2
Circuit diagram:
0: ───AmpEnc───
│
1: ───#────────
# Simulate with Cirq
simulator = cirq.Simulator()
result = simulator.simulate(cirq_circuit)
state = result.final_state_vector
print(f"Cirq statevector: {state}")
print(f"Probabilities: {np.abs(state)**2}")
Cirq statevector: [0.18257418+0.j 0.36514837+0.j 0.5477226 +0.j 0.73029673+0.j] Probabilities: [0.03333333 0.13333333 0.3 0.5333333 ]
6. Batch Circuit Generation¶
The get_circuits() method generates circuits for multiple data samples at once. It supports optional parallel processing via ThreadPoolExecutor.
enc = AmplitudeEncoding(n_features=4)
# Create a batch of samples
X = np.array([
[1.0, 0.0, 0.0, 0.0],
[0.0, 1.0, 0.0, 0.0],
[1.0, 1.0, 0.0, 0.0],
[1.0, 1.0, 1.0, 1.0],
])
# Sequential processing (default)
circuits = enc.get_circuits(X, backend='pennylane')
print(f"Generated {len(circuits)} circuits (sequential)")
print(f"All callable: {all(callable(c) for c in circuits)}")
Generated 4 circuits (sequential) All callable: True
# Parallel processing (useful for large batches or Cirq backend)
circuits_parallel = enc.get_circuits(X, backend='pennylane', parallel=True)
print(f"Generated {len(circuits_parallel)} circuits (parallel)")
# With custom worker count
import os
circuits_custom = enc.get_circuits(
X, backend='pennylane', parallel=True, max_workers=2
)
print(f"Generated {len(circuits_custom)} circuits (parallel, 2 workers)")
Generated 4 circuits (parallel) Generated 4 circuits (parallel, 2 workers)
# 1D input is treated as a single sample
x_single = np.array([1.0, 2.0, 3.0, 4.0])
circuits_single = enc.get_circuits(x_single, backend='pennylane')
print(f"1D input produces {len(circuits_single)} circuit(s)")
1D input produces 1 circuit(s)
# Batch with Qiskit backend
qiskit_circuits = enc.get_circuits(X, backend='qiskit')
print(f"Qiskit batch: {len(qiskit_circuits)} circuits")
for i, qc in enumerate(qiskit_circuits):
print(f" Circuit {i}: {qc.num_qubits} qubits")
Qiskit batch: 4 circuits
 Circuit 0: 2 qubits
 Circuit 1: 2 qubits
 Circuit 2: 2 qubits
 Circuit 3: 2 qubits
7. Normalization Behavior¶
Quantum states must have unit norm, $\sum_i |\psi_i|^2 = 1$. AmplitudeEncoding handles this through its normalize parameter.
normalize=True (default)¶
Input vectors are automatically divided by their L2 norm. Only relative magnitudes (ratios) between features are preserved.
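Because the state is divided by $\|\mathbf{x}\|$, any positive rescaling of the input yields the same state — a two-line check in plain NumPy:

```python
import numpy as np

x = np.array([3.0, 4.0, 0.0, 0.0])
a = x / np.linalg.norm(x)                  # encode x
b = (2.5 * x) / np.linalg.norm(2.5 * x)    # encode a rescaled copy
print(np.allclose(a, b))  # True: the overall scale is lost, only ratios survive
```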
normalize=False¶
The user must provide pre-normalized data. The encoding validates that the L2 norm is within 1% of 1.0.
# normalize=True: automatic normalization
enc_norm = AmplitudeEncoding(n_features=4, normalize=True)
x = np.array([3.0, 4.0, 0.0, 0.0]) # norm = 5.0
circuit_fn = enc_norm.get_circuit(x, backend='pennylane')
dev = qml.device('default.qubit', wires=enc_norm.n_qubits)
@qml.qnode(dev)
def get_state_norm():
circuit_fn()
return qml.state()
state = get_state_norm()
print("normalize=True")
print(f" Input: {x} (norm={np.linalg.norm(x):.2f})")
print(f" Statevector: {state}")
print(f" Expected: {x / np.linalg.norm(x)}")
print(f" State norm: {np.linalg.norm(state):.10f}")
normalize=True
 Input: [3. 4. 0. 0.] (norm=5.00)
 Statevector: [0.6+0.j 0.8+0.j 0. +0.j 0. +0.j]
 Expected: [0.6 0.8 0. 0. ]
 State norm: 1.0000000000
# normalize=False: pre-normalized data
enc_no_norm = AmplitudeEncoding(n_features=4, normalize=False)
# Manually normalize the data
x_raw = np.array([3.0, 4.0, 0.0, 0.0])
x_normalized = x_raw / np.linalg.norm(x_raw)
print(f"Pre-normalized input: {x_normalized} (norm={np.linalg.norm(x_normalized):.10f})")
circuit_fn = enc_no_norm.get_circuit(x_normalized, backend='pennylane')
@qml.qnode(dev)
def get_state_no_norm():
circuit_fn()
return qml.state()
state = get_state_no_norm()
print(f"Statevector: {state}")
Pre-normalized input: [0.6 0.8 0. 0. ] (norm=1.0000000000) Statevector: [0.6+0.j 0.8+0.j 0. +0.j 0. +0.j]
# normalize=False with unnormalized data -> raises ValueError
try:
enc_no_norm.get_circuit(np.array([1.0, 2.0, 3.0, 4.0]), backend='pennylane')
except ValueError as e:
print(f"ValueError caught (expected):\n{e}")
ValueError caught (expected):
normalize=False but input vector has L2 norm 5.477226, which differs from 1.0 by more than 1%.
When normalization is disabled, input data must be pre-normalized to satisfy the quantum state constraint |ψ|² = 1.
Options:
 1. Set normalize=True to enable automatic normalization
 2. Pre-normalize your data: x = x / np.linalg.norm(x)
# normalize=False tolerates small numerical deviations (within 1%)
x_almost_normalized = np.array([0.6, 0.8, 0.0, 0.0]) # norm = 1.0 exactly
x_slightly_off = x_almost_normalized * 1.005 # norm = 1.005 (0.5% off)
print(f"Slightly off norm: {np.linalg.norm(x_slightly_off):.6f}")
# This should succeed because it's within 1% tolerance
circuit_fn = enc_no_norm.get_circuit(x_slightly_off, backend='pennylane')
print("Accepted (within 1% tolerance)")
Slightly off norm: 1.005000 Accepted (within 1% tolerance)
8. Data Transformation (transform_input)¶
The transform_input() method exposes the internal preprocessing pipeline (normalization + padding) without generating a circuit. This implements the DataTransformable protocol.
enc = AmplitudeEncoding(n_features=4, normalize=True)
x = np.array([3.0, 4.0, 0.0, 0.0])
transformed = enc.transform_input(x)
print(f"Input: {x} (norm={np.linalg.norm(x):.4f})")
print(f"Transformed: {transformed} (norm={np.linalg.norm(transformed):.10f})")
print(f"Length: {len(transformed)} (= 2^{enc.n_qubits})")
Input: [3. 4. 0. 0.] (norm=5.0000)
Transformed: [0.6 0.8 0. 0. ] (norm=1.0000000000)
Length: 4 (= 2^2)
# Non-power-of-2 features: padding + normalization
enc_3 = AmplitudeEncoding(n_features=3, normalize=True)
x = np.array([1.0, 2.0, 2.0])
transformed = enc_3.transform_input(x)
print(f"Input (3 features): {x} (norm={np.linalg.norm(x):.4f})")
print(f"Transformed (padded to 4): {transformed} (norm={np.linalg.norm(transformed):.10f})")
print(f" -> The 4th element is zero-padding")
Input (3 features): [1. 2. 2.] (norm=3.0000)
Transformed (padded to 4): [0.33333333 0.66666667 0.66666667 0.        ] (norm=1.0000000000)
 -> The 4th element is zero-padding
# normalize=False: only padding, no normalization
enc_3_no_norm = AmplitudeEncoding(n_features=3, normalize=False)
x_pre_norm = np.array([1.0, 0.0, 0.0]) # already unit norm
transformed = enc_3_no_norm.transform_input(x_pre_norm)
print(f"Input (pre-normalized): {x_pre_norm}")
print(f"Transformed (padded only): {transformed}")
Input (pre-normalized): [1. 0. 0.] Transformed (padded only): [1. 0. 0. 0.]
# transform_input only accepts single samples
try:
batch = np.array([[1, 0, 0, 0], [0, 1, 0, 0]])
enc.transform_input(batch)
except ValueError as e:
print(f"ValueError (expected): {e}")
ValueError (expected): transform_input expects a single sample, got shape (2, 4). For batch processing, iterate over samples: [enc.transform_input(xi) for xi in X]
# 2D input with shape (1, n_features) is accepted
x_2d = np.array([[3.0, 4.0, 0.0, 0.0]])
transformed = enc.transform_input(x_2d)
print(f"2D single-sample input accepted: {transformed}")
2D single-sample input accepted: [0.6 0.8 0. 0. ]
9. Zero-Padding for Non-Power-of-2 Features¶
Quantum states require $2^n$ amplitudes for $n$ qubits. When n_features is not a power of 2, the input is automatically padded with zeros.
print("Padding behavior:")
print(f"{'n_features':>10} {'n_qubits':>8} {'state_dim':>9} {'padding_zeros':>13}")
print("-" * 45)
for n in [1, 2, 3, 4, 5, 6, 7, 8, 9, 15, 16, 17]:
e = AmplitudeEncoding(n_features=n)
state_dim = 2 ** e.n_qubits
padding = state_dim - n
print(f"{n:>10} {e.n_qubits:>8} {state_dim:>9} {padding:>13}")
Padding behavior:
n_features n_qubits state_dim padding_zeros
---------------------------------------------
1 1 2 1
2 1 2 0
3 2 4 1
4 2 4 0
5 3 8 3
6 3 8 2
7 3 8 1
8 3 8 0
9 4 16 7
15 4 16 1
16 4 16 0
17 5 32 15
# Demonstrate: 5 features -> 3 qubits -> 8 amplitudes (3 zeros padded)
enc_5 = AmplitudeEncoding(n_features=5)
x = np.array([1.0, 1.0, 1.0, 1.0, 1.0])
transformed = enc_5.transform_input(x)
print(f"Input ({enc_5.n_features} features): {x}")
print(f"Transformed ({2**enc_5.n_qubits} amplitudes): {transformed}")
print(f"Zero-padded positions: indices {enc_5.n_features} to {2**enc_5.n_qubits - 1}")
Input (5 features): [1. 1. 1. 1. 1.]
Transformed (8 amplitudes): [0.4472136 0.4472136 0.4472136 0.4472136 0.4472136 0. 0. 0. ]
Zero-padded positions: indices 5 to 7
# Special edge case: n_features=1
# log2(1) = 0, but minimum 1 qubit is enforced
enc_1 = AmplitudeEncoding(n_features=1)
print(f"n_features=1: n_qubits={enc_1.n_qubits}, depth={enc_1.depth}")
x = np.array([5.0]) # single feature
transformed = enc_1.transform_input(x)
print(f"Input: {x} -> Transformed: {transformed}")
print(f"This is |0> state (feature padded to [x, 0], normalized to [1, 0])")
n_features=1: n_qubits=1, depth=2
Input: [5.] -> Transformed: [1. 0.]
This is |0> state (feature padded to [x, 0], normalized to [1, 0])
10. Resource Analysis¶
AmplitudeEncoding provides three methods for understanding circuit resource requirements.
10.1 resource_summary()¶
A comprehensive dictionary with all resource metrics, including memory estimates for Cirq.
enc = AmplitudeEncoding(n_features=16)
summary = enc.resource_summary()
print("=== Resource Summary (n_features=16) ===")
for key, value in summary.items():
if key != 'backend_notes':
print(f" {key:>35s} : {value}")
print("\n Backend notes:")
for backend, note in summary['backend_notes'].items():
print(f" {backend:>10s} : {note}")
=== Resource Summary (n_features=16) ===
n_features : 16
n_qubits : 4
state_dimension : 16
compression_ratio : 4.0
padding_zeros : 0
depth : 16
normalize : True
theoretical_gate_count : 30
theoretical_single_qubit_gates : 16
theoretical_two_qubit_gates : 14
is_entangling : True
simulability : not_simulable
cirq_unitary_memory_bytes : 4096
cirq_unitary_memory_human : 4.0 KB
Backend notes:
pennylane : Uses qml.AmplitudeEmbedding with optimized decomposition
qiskit : Uses QuantumCircuit.initialize() with automatic synthesis
cirq : Constructs 16×16 unitary matrix (~4.0 KB memory)
# Memory scaling for Cirq backend
print("Cirq unitary memory requirements:")
print(f"{'n_features':>10} {'n_qubits':>8} {'memory':>12}")
print("-" * 35)
for n in [4, 8, 16, 64, 256, 1024, 4096]:
e = AmplitudeEncoding(n_features=n)
s = e.resource_summary()
print(f"{n:>10} {s['n_qubits']:>8} {s['cirq_unitary_memory_human']:>12}")
Cirq unitary memory requirements:
n_features n_qubits memory
-----------------------------------
4 2 256 bytes
8 3 1.0 KB
16 4 4.0 KB
64 6 64.0 KB
256 8 1.0 MB
1024 10 16.0 MB
4096 12 256.0 MB
10.2 gate_count_breakdown()¶
Theoretical gate count estimates based on the Möttönen et al. state preparation algorithm.
enc = AmplitudeEncoding(n_features=8)
breakdown = enc.gate_count_breakdown()
print("=== Gate Count Breakdown (n_features=8, n_qubits=3) ===")
for key, value in breakdown.items():
print(f" {key:>20s} : {value}")
print(f"\nNote: is_estimate={breakdown['is_estimate']} because amplitude encoding")
print("creates data-dependent circuits; actual counts may vary by backend.")
=== Gate Count Breakdown (n_features=8, n_qubits=3) ===
rotation_gates : 8
cnot : 6
total_single_qubit : 8
total_two_qubit : 6
total : 14
state_dimension : 8
is_estimate : True
Note: is_estimate=True because amplitude encoding
creates data-dependent circuits; actual counts may vary by backend.
# Gate count scaling
print("Gate count scaling with feature count:")
print(f"{'n_features':>10} {'n_qubits':>8} {'rotations':>10} {'CNOT':>6} {'total':>6}")
print("-" * 45)
for n in [1, 2, 4, 8, 16, 32, 64]:
e = AmplitudeEncoding(n_features=n)
b = e.gate_count_breakdown()
print(f"{n:>10} {e.n_qubits:>8} {b['rotation_gates']:>10} {b['cnot']:>6} {b['total']:>6}")
Gate count scaling with feature count:
n_features n_qubits rotations CNOT total
---------------------------------------------
1 1 2 0 2
2 1 2 0 2
4 2 4 2 6
8 3 8 6 14
16 4 16 14 30
32 5 32 30 62
64 6 64 62 126
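The scaling in the table is consistent with simple closed forms for Möttönen-style state preparation on $q \geq 1$ qubits: $2^q$ rotation gates and $2^q - 2$ CNOTs (zero at $q = 1$). These formulas are inferred from the numbers above, not quoted from the library:

```python
# Reproduce the gate-count table from the (inferred) closed forms.
for q in range(1, 7):
    rotations = 2 ** q
    cnots = max(0, 2 ** q - 2)
    print(f"q={q}: rotations={rotations:>3}, cnot={cnots:>3}, total={rotations + cnots:>4}")
```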
10.3 properties (EncodingProperties)¶
The properties attribute is a frozen (immutable) dataclass that is lazily computed on first access with thread-safe initialization.
enc = AmplitudeEncoding(n_features=16)
props = enc.properties
# Immutability demonstration
try:
props.n_qubits = 99
except AttributeError as e:
print(f"Cannot modify frozen dataclass: {e}")
# Same object returned on subsequent access (cached)
props2 = enc.properties
print(f"Same object (cached): {props is props2}")
Cannot modify frozen dataclass: cannot assign to field 'n_qubits' Same object (cached): True
11. Encoding Registry & Discovery¶
The library provides a registry system for discovering and instantiating encodings by name.
# List all registered encodings
all_encodings = list_encodings()
print(f"Registered encodings ({len(all_encodings)}):")
for name in all_encodings:
print(f" - {name}")
Registered encodings (26):
 - amplitude
 - angle
 - angle_ry
 - basis
 - covariant
 - covariant_feature_map
 - cyclic_equivariant
 - cyclic_equivariant_feature_map
 - data_reuploading
 - hamiltonian
 - hamiltonian_encoding
 - hardware_efficient
 - higher_order_angle
 - iqp
 - pauli_feature_map
 - qaoa
 - qaoa_encoding
 - so2_equivariant
 - so2_equivariant_feature_map
 - swap_equivariant
 - swap_equivariant_feature_map
 - symmetry_inspired
 - symmetry_inspired_feature_map
 - trainable
 - trainable_encoding
 - zz_feature_map
# Instantiate by name via the registry
enc_from_registry = get_encoding('amplitude', n_features=8)
print(f"Created via registry: {enc_from_registry}")
print(f"Type: {type(enc_from_registry).__name__}")
print(f"n_qubits: {enc_from_registry.n_qubits}")
Created via registry: AmplitudeEncoding(n_features=8, normalize=True)
Type: AmplitudeEncoding
n_qubits: 3
12. Capability Protocols¶
AmplitudeEncoding implements several capability protocols from the Layered Contract Architecture. These allow writing generic code that adapts to encoding capabilities.
enc = AmplitudeEncoding(n_features=4)
print("Protocol support:")
print(f" ResourceAnalyzable : {isinstance(enc, ResourceAnalyzable)}")
print(f" DataTransformable : {isinstance(enc, DataTransformable)}")
print(f" EntanglementQueryable : {isinstance(enc, EntanglementQueryable)}")
print(f" DataDependentResourceAnalyzable : {isinstance(enc, DataDependentResourceAnalyzable)}")
print(f" BaseEncoding : {isinstance(enc, BaseEncoding)}")
Protocol support:
 ResourceAnalyzable : True
 DataTransformable : True
 EntanglementQueryable : False
 DataDependentResourceAnalyzable : False
 BaseEncoding : True
# Generic function using ResourceAnalyzable protocol
def analyze_encoding(enc):
"""Analyze any encoding that supports ResourceAnalyzable."""
if isinstance(enc, ResourceAnalyzable):
summary = enc.resource_summary()
breakdown = enc.gate_count_breakdown()
print(f"{type(enc).__name__}:")
print(f" Qubits: {summary.get('n_qubits', 'N/A')}")
print(f" Gates: {breakdown.get('total', 'N/A')}")
else:
print(f"{type(enc).__name__}: ResourceAnalyzable not supported")
analyze_encoding(AmplitudeEncoding(n_features=8))
AmplitudeEncoding:
 Qubits: 3
 Gates: 14
# Generic function using DataTransformable protocol
def inspect_transformation(enc, x):
"""Inspect how any DataTransformable encoding preprocesses data."""
if isinstance(enc, DataTransformable):
original_norm = np.linalg.norm(x)
transformed = enc.transform_input(x)
new_norm = np.linalg.norm(transformed)
print(f"{type(enc).__name__}:")
print(f" Original: {x} (norm={original_norm:.4f})")
print(f" Transformed: {transformed} (norm={new_norm:.10f})")
else:
print(f"{type(enc).__name__}: DataTransformable not supported")
inspect_transformation(AmplitudeEncoding(n_features=4), np.array([3.0, 4.0, 0.0, 0.0]))
AmplitudeEncoding:
 Original: [3. 4. 0. 0.] (norm=5.0000)
 Transformed: [0.6 0.8 0. 0. ] (norm=1.0000000000)
# Type guard functions
from encoding_atlas.core.protocols import (
is_resource_analyzable,
is_data_transformable,
is_entanglement_queryable,
)
enc = AmplitudeEncoding(n_features=4)
print(f"is_resource_analyzable: {is_resource_analyzable(enc)}")
print(f"is_data_transformable: {is_data_transformable(enc)}")
print(f"is_entanglement_queryable: {is_entanglement_queryable(enc)}")
is_resource_analyzable: True
is_data_transformable: True
is_entanglement_queryable: False
13. Cross-Backend State Verification¶
A key design goal is that all three backends produce equivalent quantum states. Let's verify this using the analysis module's simulation utilities, which handle qubit ordering differences automatically.
enc = AmplitudeEncoding(n_features=4)
x = np.array([1.0, 2.0, 3.0, 4.0])
# Simulate on all three backends using the analysis module
state_pl = simulate_encoding_statevector(enc, x, backend='pennylane')
state_qk = simulate_encoding_statevector(enc, x, backend='qiskit')
state_cq = simulate_encoding_statevector(enc, x, backend='cirq')
print("Statevectors (all in MSB ordering, handled automatically):")
print(f" PennyLane: {state_pl}")
print(f" Qiskit: {state_qk}")
print(f" Cirq: {state_cq}")
# Compute fidelities
fid_pl_qk = compute_fidelity(state_pl, state_qk)
fid_pl_cq = compute_fidelity(state_pl, state_cq)
fid_qk_cq = compute_fidelity(state_qk, state_cq)
print(f"\nCross-backend fidelities:")
print(f" PennyLane vs Qiskit: {fid_pl_qk:.10f}")
print(f" PennyLane vs Cirq: {fid_pl_cq:.10f}")
print(f" Qiskit vs Cirq: {fid_qk_cq:.10f}")
print(f"\nAll fidelities ~1.0: backends produce equivalent states!")
Statevectors (all in MSB ordering, handled automatically):
 PennyLane: [0.18257419+0.j 0.36514837+0.j 0.54772256+0.j 0.73029674+0.j]
 Qiskit:    [0.18257419+0.j 0.36514837+0.j 0.54772256+0.j 0.73029674+0.j]
 Cirq:      [0.18257419+0.j 0.36514837+0.j 0.54772256+0.j 0.73029674+0.j]

Cross-backend fidelities:
 PennyLane vs Qiskit: 1.0000000000
 PennyLane vs Cirq: 1.0000000000
 Qiskit vs Cirq: 1.0000000000

All fidelities ~1.0: backends produce equivalent states!
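For pure states, the quantity being compared is just the squared overlap $|\langle\phi|\psi\rangle|^2$. A one-liner reproduces it from scratch (a sketch of the standard pure-state definition, which compute_fidelity is assumed to implement):

```python
import numpy as np

def pure_state_fidelity(a, b):
    """|<a|b>|^2 for normalized pure-state vectors a and b."""
    return abs(np.vdot(a, b)) ** 2

v = np.array([1.0, 2.0, 3.0, 4.0]) / np.sqrt(30.0)
print(pure_state_fidelity(v, v))                      # ~1.0 for identical states
print(pure_state_fidelity(v, np.array([1, 0, 0, 0])))  # overlap with |00> is 1/30
```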
14. Statevector Simulation & Quantum State Inspection¶
The analysis module provides utilities to simulate encoding circuits and inspect the resulting quantum states.
enc = AmplitudeEncoding(n_features=4)
# Validate encoding is suitable for analysis
validate_encoding_for_analysis(enc)
print("Encoding validated for analysis")
# Simulate a single input
x = np.array([1.0, 0.0, 0.0, 0.0]) # basis state |00>
state = simulate_encoding_statevector(enc, x)
print(f"\nBasis state |00>: {state}")
# Uniform superposition
x_uniform = np.array([1.0, 1.0, 1.0, 1.0])
state_uniform = simulate_encoding_statevector(enc, x_uniform)
print(f"Uniform superposition: {state_uniform}")
print(f"All amplitudes equal: {np.allclose(state_uniform, 0.5)}")
Encoding validated for analysis
Basis state |00>: [1.+0.j 0.+0.j 0.+0.j 0.+0.j]
Uniform superposition: [0.5+0.j 0.5+0.j 0.5+0.j 0.5+0.j]
All amplitudes equal: True
# Batch simulation
X = np.array([
[1.0, 0.0, 0.0, 0.0], # |00>
[0.0, 1.0, 0.0, 0.0], # |01>
[0.0, 0.0, 1.0, 0.0], # |10>
[0.0, 0.0, 0.0, 1.0], # |11>
])
states = simulate_encoding_statevectors_batch(enc, X)
print("Computational basis states:")
labels = ['|00>', '|01>', '|10>', '|11>']
for label, state in zip(labels, states):
print(f" {label}: {state}")
Computational basis states:
 |00>: [1.+0.j 0.+0.j 0.+0.j 0.+0.j]
 |01>: [0.+0.j 1.+0.j 0.+0.j 0.+0.j]
 |10>: [0.+0.j 0.+0.j 1.+0.j 0.+0.j]
 |11>: [0.+0.j 0.+0.j 0.+0.j 1.+0.j]
# Validate statevector
state = simulate_encoding_statevector(enc, np.array([1.0, 1.0, 1.0, 1.0]))
validated = validate_statevector(state, expected_qubits=2)
print(f"Statevector validated: length={len(validated)}, norm={np.linalg.norm(validated):.10f}")
Statevector validated: length=4, norm=1.0000000000
15. Quantum Information Measures¶
The analysis module provides fundamental quantum information measures that can be applied to amplitude-encoded states.
enc = AmplitudeEncoding(n_features=4)
# Create an entangled state using amplitude encoding
x_entangled = np.array([1.0, 0.0, 0.0, 1.0]) # Bell-like: |00> + |11>
state = simulate_encoding_statevector(enc, x_entangled)
print(f"Entangled state: {state}")
# Fidelity with ideal Bell state
bell_state = np.array([1, 0, 0, 1]) / np.sqrt(2)
fid = compute_fidelity(state, bell_state)
print(f"Fidelity with |Bell>: {fid:.10f}")
Entangled state: [0.70710678+0.j 0. +0.j 0. +0.j 0.70710678+0.j] Fidelity with |Bell>: 1.0000000000
# Partial trace: reduced density matrix of qubit 0
rho_0 = partial_trace_single_qubit(state, n_qubits=2, keep_qubit=0)
print(f"Reduced density matrix (qubit 0):\n{rho_0}")
# Purity of the reduced state
purity = compute_purity(rho_0)
print(f"\nPurity: {purity:.6f}")
print(f" (1.0 = pure state, 0.5 = maximally mixed for single qubit)")
# Linear entropy
lin_entropy = compute_linear_entropy(rho_0)
print(f"Linear entropy: {lin_entropy:.6f}")
# Von Neumann entropy
vn_entropy = compute_von_neumann_entropy(rho_0)
print(f"Von Neumann entropy: {vn_entropy:.6f} bits")
Reduced density matrix (qubit 0): [[0.5+0.j 0. +0.j] [0. +0.j 0.5+0.j]] Purity: 0.500000 (1.0 = pure state, 0.5 = maximally mixed for single qubit) Linear entropy: 0.500000 Von Neumann entropy: 1.000000 bits
# Compare: product state vs entangled state
x_product = np.array([1.0, 0.0, 0.0, 0.0]) # |00> - product state
state_product = simulate_encoding_statevector(enc, x_product)
rho_product = partial_trace_single_qubit(state_product, n_qubits=2, keep_qubit=0)
print("Product state |00>:")
print(f" Purity: {compute_purity(rho_product):.6f} (pure)")
print(f" Von Neumann entropy: {compute_von_neumann_entropy(rho_product):.6f} bits")
print("\nEntangled state (|00> + |11>)/sqrt(2):")
print(f" Purity: {purity:.6f} (maximally mixed)")
print(f" Von Neumann entropy: {vn_entropy:.6f} bits (maximum for 1 qubit)")
Product state |00>: Purity: 1.000000 (pure) Von Neumann entropy: -0.000000 bits Entangled state (|00> + |11>)/sqrt(2): Purity: 0.500000 (maximally mixed) Von Neumann entropy: 1.000000 bits (maximum for 1 qubit)
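The measures above can be hand-checked with only NumPy. For the maximally mixed single-qubit state $\rho = I/2$, purity is $\mathrm{Tr}(\rho^2)$, linear entropy is $1 - \mathrm{Tr}(\rho^2)$, and the von Neumann entropy follows from the eigenvalues (a minimal, library-independent sketch):

```python
import numpy as np

# Hand-computed check for rho = I/2, the reduced state of one Bell qubit.
rho = np.array([[0.5, 0.0], [0.0, 0.5]], dtype=complex)

purity = np.real(np.trace(rho @ rho))        # Tr(rho^2)
lin_entropy = 1.0 - purity                   # linear entropy
eigvals = np.linalg.eigvalsh(rho)
eigvals = eigvals[eigvals > 1e-12]           # drop numerical zeros before log
vn = -np.sum(eigvals * np.log2(eigvals))     # von Neumann entropy, in bits

print(purity, lin_entropy, vn)  # 0.5 0.5 1.0
```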
# Partial trace of a subsystem (multi-qubit)
enc_8 = AmplitudeEncoding(n_features=8) # 3 qubits
x = np.array([1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0]) # |000> + |111>
state = simulate_encoding_statevector(enc_8, x)
# Keep qubits 0 and 1, trace out qubit 2
rho_01 = partial_trace_subsystem(state, n_qubits=3, keep_qubits=[0, 1])
print(f"Reduced density matrix (qubits 0,1):\n{np.round(rho_01, 4)}")
print(f"Shape: {rho_01.shape}")
print(f"Purity: {compute_purity(rho_01):.6f}")
Reduced density matrix (qubits 0,1): [[0.5+0.j 0. +0.j 0. +0.j 0. +0.j] [0. +0.j 0. +0.j 0. +0.j 0. +0.j] [0. +0.j 0. +0.j 0. +0.j 0. +0.j] [0. +0.j 0. +0.j 0. +0.j 0.5+0.j]] Shape: (4, 4) Purity: 0.500000
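For intuition, the partial trace above can be sketched in a few NumPy lines (assuming MSB ordering, with qubit 0 as the first reshape axis; this is an illustrative sketch, not the library's implementation):

```python
import numpy as np

# Partial trace of qubit 2 from the GHZ-like state (|000> + |111>)/sqrt(2).
state = np.zeros(8, dtype=complex)
state[0] = state[7] = 1 / np.sqrt(2)

psi = state.reshape(2, 2, 2)  # one axis per qubit, MSB first
# Keep qubits 0 and 1; sum over qubit 2 (the last axis):
rho_01 = np.einsum('abk,cdk->abcd', psi, psi.conj()).reshape(4, 4)

print(np.round(rho_01.real, 2))
print(round(float(np.real(np.trace(rho_01 @ rho_01))), 6))  # purity ~0.5
```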
16. Analysis Tools¶
The encoding_atlas.analysis module provides comprehensive tools for characterizing encodings.
16.1 Simulability Analysis¶
Determines whether an encoding's circuit can be efficiently simulated classically.
enc = AmplitudeEncoding(n_features=8)
# Quick check
reason = get_simulability_reason(enc)
print(f"Simulability reason: {reason}")
Simulability reason: Not simulable: Entangling circuit with 6 two-qubit gates and non-Clifford operations
# Detailed simulability analysis
sim_result = check_simulability(enc)
print("=== Simulability Analysis ===")
print(f" Is simulable: {sim_result['is_simulable']}")
print(f" Class: {sim_result['simulability_class']}")
print(f" Reason: {sim_result['reason']}")
print(f"\n Details:")
for k, v in sim_result['details'].items():
print(f" {k}: {v}")
print(f"\n Recommendations:")
for rec in sim_result['recommendations']:
print(f" - {rec}")
=== Simulability Analysis ===
Is simulable: False
Class: not_simulable
Reason: Entangling circuit with 6 two-qubit gates and non-Clifford operations
Details:
is_entangling: True
is_clifford: False
is_matchgate: False
entanglement_pattern: unknown
two_qubit_gate_count: 6
n_qubits: 3
n_features: 8
declared_simulability: not_simulable
encoding_name: AmplitudeEncoding
has_non_clifford_gates: True
has_t_gates: False
has_parameterized_rotations: True
Recommendations:
- Statevector simulation feasible (3 qubits, ~128 bytes memory)
- Brute-force statevector simulation is feasible at this circuit size (3 qubits, ~128 bytes memory)
- Use statevector simulation for instances with < 20 qubits
- Consider tensor network methods for structured entanglement
- May require quantum hardware for large instances
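The "~128 bytes" figure in the recommendations is the standard statevector cost: $2^{n}$ complex128 amplitudes at 16 bytes each. A quick back-of-envelope check (`statevector_bytes` is a hypothetical helper for illustration):

```python
# Statevector memory: 2**n_qubits complex128 amplitudes, 16 bytes each.
# Reproduces the ~128-byte figure for 3 qubits and shows why ~20-30 qubits
# is the practical ceiling for brute-force statevector simulation.
def statevector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

for n in (3, 20, 30):
    print(n, statevector_bytes(n))
# 3  -> 128 bytes
# 20 -> ~16 MB
# 30 -> ~16 GB
```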
# Clifford and matchgate checks
print(f"Is Clifford circuit: {is_clifford_circuit(enc)}")
print(f"Is matchgate circuit: {is_matchgate_circuit(enc)}")
Is Clifford circuit: False Is matchgate circuit: False
16.2 Resource Counting (Analysis Module)¶
The analysis module provides resource counting that works across all encoding types.
enc = AmplitudeEncoding(n_features=8)
# Basic resource count
resources = count_resources(enc)
print("=== Resource Count ===")
for k, v in resources.items():
print(f" {k:>20s} : {v}")
=== Resource Count ===
n_qubits : 3
depth : 8
gate_count : 14
single_qubit_gates : 8
two_qubit_gates : 6
parameter_count : 8
cnot_count : 6
cz_count : 0
t_gate_count : 0
hadamard_count : 0
rotation_gates : 0
two_qubit_ratio : 0.42857142857142855
gates_per_qubit : 4.666666666666667
encoding_name : AmplitudeEncoding
is_data_dependent : False
# Quick summary from cached properties
quick = get_resource_summary(enc)
print("Quick resource summary:")
for k, v in quick.items():
print(f" {k}: {v}")
Quick resource summary: n_qubits: 3 depth: 8 gate_count: 14 single_qubit_gates: 8 two_qubit_gates: 6 parameter_count: 8 cnot_count: 6 cz_count: 0 t_gate_count: 0 hadamard_count: 0 rotation_gates: 8 two_qubit_ratio: 0.42857142857142855 gates_per_qubit: 4.666666666666667 encoding_name: AmplitudeEncoding is_data_dependent: False
# Detailed gate breakdown
detailed = get_gate_breakdown(enc)
print("Detailed gate breakdown:")
for k, v in detailed.items():
print(f" {k}: {v}")
Detailed gate breakdown: rx: 0 ry: 0 rz: 0 h: 0 x: 0 y: 0 z: 0 s: 0 t: 0 cnot: 6 cx: 6 cz: 0 swap: 0 total_single_qubit: 8 total_two_qubit: 6 total: 14 encoding_name: AmplitudeEncoding
# Estimate execution time
timing = estimate_execution_time(enc)
print("Estimated execution time:")
for k, v in timing.items():
    if k.endswith('_us'):
        print(f" {k}: {v:.4f} µs")
    else:
        print(f" {k}: {v}")
Estimated execution time: serial_time_us: 2.3600 µs estimated_time_us: 2.6000 µs single_qubit_time_us: 0.1600 µs two_qubit_time_us: 1.2000 µs measurement_time_us: 1.0000 µs parallelization_factor: 0.5000
# Compare resources across multiple encodings
from encoding_atlas import AngleEncoding, IQPEncoding
encodings = [
AngleEncoding(n_features=4),
AmplitudeEncoding(n_features=4),
IQPEncoding(n_features=4),
]
comparison = compare_resources(
encodings,
metrics=['n_qubits', 'depth', 'gate_count', 'two_qubit_ratio']
)
print("=== Resource Comparison ===")
for metric, values in comparison.items():
print(f" {metric}: {values}")
=== Resource Comparison === n_qubits: [4, 2, 4] depth: [1, 4, 6] gate_count: [4, 6, 52] two_qubit_ratio: [0.0, 0.3333333333333333, 0.46153846153846156] encoding_name: ['AngleEncoding', 'AmplitudeEncoding', 'IQPEncoding']
16.3 Expressibility¶
Measures how well the encoding covers the Hilbert space. Higher expressibility means the encoding can produce a wider variety of quantum states.
enc = AmplitudeEncoding(n_features=4)
# Compute expressibility (scalar)
expr = compute_expressibility(enc, n_samples=500, seed=42)
print(f"Expressibility: {expr:.6f} (range: 0 to 1, higher = more expressive)")
Expressibility: 0.831419 (range: 0 to 1, higher = more expressive)
# Detailed expressibility with distribution data
expr_result = compute_expressibility(
enc, n_samples=500, seed=42, return_distributions=True
)
print("=== Expressibility Result ===")
print(f" Expressibility: {expr_result['expressibility']:.6f}")
print(f" KL divergence: {expr_result['kl_divergence']:.6f}")
print(f" Mean fidelity: {expr_result['mean_fidelity']:.6f}")
print(f" Std fidelity: {expr_result['std_fidelity']:.6f}")
print(f" Convergence estimate: {expr_result['convergence_estimate']:.6f}")
print(f" N samples: {expr_result['n_samples']}")
print(f" N bins: {expr_result['n_bins']}")
=== Expressibility Result === Expressibility: 0.831419 KL divergence: 1.685811 Mean fidelity: 0.634452 Std fidelity: 0.222274 Convergence estimate: 0.087255 N samples: 500 N bins: 75
# Fidelity distribution
fidelities = compute_fidelity_distribution(enc, n_samples=500, seed=42)
print(f"Fidelity distribution: shape={fidelities.shape}")
print(f" Mean: {np.mean(fidelities):.6f}")
print(f" Std: {np.std(fidelities):.6f}")
print(f" Min: {np.min(fidelities):.6f}")
print(f" Max: {np.max(fidelities):.6f}")
Fidelity distribution: shape=(500,) Mean: 0.634452 Std: 0.222052 Min: 0.044585 Max: 0.994245
# Haar-random distribution for comparison
fidelity_values = np.linspace(0, 1, 100)
haar_dist = compute_haar_distribution(n_qubits=enc.n_qubits, fidelity_values=fidelity_values)
print(f"Haar distribution: shape={haar_dist.shape}")
print(f" Sum (approximate integral): {np.sum(haar_dist) * (fidelity_values[1] - fidelity_values[0]):.6f}")
Haar distribution: shape=(100,) Sum (approximate integral): 0.010101
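For reference, the Haar fidelity distribution for an $N$-dimensional Hilbert space has the closed form $P_{\text{Haar}}(F) = (N-1)(1-F)^{N-2}$. A library-independent check that this density integrates to 1 for the 2-qubit case ($N=4$):

```python
import numpy as np

# Analytic Haar fidelity density: P(F) = (N - 1) * (1 - F)**(N - 2).
N = 4
F = np.linspace(0.0, 1.0, 1000)
p_haar = (N - 1) * (1.0 - F) ** (N - 2)

# Trapezoid rule over [0, 1]; a proper density integrates to ~1.
dx = F[1] - F[0]
integral = dx * (p_haar[0] / 2 + p_haar[1:-1].sum() + p_haar[-1] / 2)
print(round(float(integral), 4))  # ~1.0
```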
16.4 Entanglement Capability¶
Measures the average amount of entanglement the encoding generates across random inputs.
enc = AmplitudeEncoding(n_features=4)
# Meyer-Wallach entanglement measure (scalar)
ent_cap = compute_entanglement_capability(enc, n_samples=200, seed=42)
print(f"Entanglement capability (Meyer-Wallach): {ent_cap:.6f}")
print(f" Range: 0 (product states) to 1 (maximally entangled)")
Entanglement capability (Meyer-Wallach): 0.162052 Range: 0 (product states) to 1 (maximally entangled)
# Detailed entanglement result
ent_result = compute_entanglement_capability(
enc, n_samples=200, seed=42, return_details=True
)
print("=== Entanglement Result ===")
print(f" Capability: {ent_result['entanglement_capability']:.6f}")
print(f" Std error: {ent_result['std_error']:.6f}")
print(f" N samples: {ent_result['n_samples']}")
print(f" Measure: {ent_result['measure']}")
print(f" Per-qubit ent.: {ent_result['per_qubit_entanglement']}")
=== Entanglement Result === Capability: 0.162052 Std error: 0.014222 N samples: 200 Measure: meyer_wallach Per-qubit ent.: [0.08102589 0.08102589]
# Meyer-Wallach measure on a specific state
x = np.array([1.0, 0.0, 0.0, 1.0]) # Bell-like state
state = simulate_encoding_statevector(enc, x)
mw = compute_meyer_wallach(state, n_qubits=2)
print(f"Meyer-Wallach for Bell-like state: {mw:.6f}")
# With per-qubit breakdown
mw_val, per_qubit = compute_meyer_wallach_with_breakdown(state, n_qubits=2)
print(f"Meyer-Wallach: {mw_val:.6f}")
print(f"Per-qubit entanglement: {per_qubit}")
Meyer-Wallach for Bell-like state: 1.000000 Meyer-Wallach: 1.000000 Per-qubit entanglement: [0.5 0.5]
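The Meyer-Wallach measure can be written as $Q = 2\,(1 - \frac{1}{n}\sum_k \mathrm{Tr}\,\rho_k^2)$, where $\rho_k$ is the single-qubit reduced state of qubit $k$. A from-scratch NumPy sketch (the `meyer_wallach` helper here is illustrative, not the library function) reproduces the values above:

```python
import numpy as np

# Meyer-Wallach from first principles: Q = 2 * (1 - mean_k Tr(rho_k^2)).
def meyer_wallach(state: np.ndarray, n_qubits: int) -> float:
    psi = state.reshape([2] * n_qubits)
    purities = []
    for k in range(n_qubits):
        # Bring qubit k's axis to the front, flatten the rest, form rho_k.
        m = np.moveaxis(psi, k, 0).reshape(2, -1)
        rho_k = m @ m.conj().T
        purities.append(np.real(np.trace(rho_k @ rho_k)))
    return 2.0 * (1.0 - float(np.mean(purities)))

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
print(round(meyer_wallach(bell, 2), 6))                              # 1.0
print(round(meyer_wallach(np.array([1, 0, 0, 0], dtype=complex), 2), 6))  # 0.0
```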
# Scott measure (generalization of Meyer-Wallach)
enc_8 = AmplitudeEncoding(n_features=8) # 3 qubits
x = np.array([1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0]) # GHZ-like
state = simulate_encoding_statevector(enc_8, x)
# k=1 reduces to Meyer-Wallach
scott_k1 = compute_scott_measure(state, n_qubits=3, k=1)
mw = compute_meyer_wallach(state, n_qubits=3)
print(f"Scott(k=1) = {scott_k1:.6f} (should match Meyer-Wallach = {mw:.6f})")
# k=2 captures higher-order entanglement
scott_k2 = compute_scott_measure(state, n_qubits=3, k=2)
print(f"Scott(k=2) = {scott_k2:.6f}")
Scott(k=1) = 1.000000 (should match Meyer-Wallach = 1.000000) Scott(k=2) = 0.666667
16.5 Trainability & Barren Plateaus¶
Estimates trainability by analyzing gradient variance. Low gradient variance indicates barren plateaus — a major challenge for variational quantum algorithms.
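The intuition behind barren plateaus can be reproduced with a tiny NumPy experiment (an illustrative sketch, not the library's estimator; `random_state` is a hypothetical helper): overlaps of Haar-random states concentrate around $1/2^n$, so cost-function signals shrink exponentially with qubit count.

```python
import numpy as np

# Fidelity of two Haar-random states has mean 1/dim = 1/2**n -- the
# concentration effect that drives barren plateaus at larger qubit counts.
def random_state(dim, rng):
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

rng = np.random.default_rng(0)
means = {}
for n in (2, 4, 6, 8):
    dim = 2 ** n
    fids = [abs(np.vdot(random_state(dim, rng), random_state(dim, rng))) ** 2
            for _ in range(2000)]
    means[n] = float(np.mean(fids))
    print(n, round(means[n], 4), "expected ~", round(1 / dim, 4))
```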
enc = AmplitudeEncoding(n_features=4)
# Quick trainability estimate
trainability = estimate_trainability(enc, n_samples=100, seed=42)
print(f"Trainability: {trainability:.6f} (range: 0 to 1, higher = more trainable)")
Trainability: 0.036496 (range: 0 to 1, higher = more trainable)
# Detailed trainability result
train_result = estimate_trainability(
enc, n_samples=100, seed=42, return_details=True
)
print("=== Trainability Result ===")
print(f" Trainability estimate: {train_result['trainability_estimate']:.6f}")
print(f" Gradient variance: {train_result['gradient_variance']:.8f}")
print(f" Barren plateau risk: {train_result['barren_plateau_risk']}")
print(f" Effective dimension: {train_result['effective_dimension']:.4f}")
print(f" N samples: {train_result['n_samples']}")
print(f" N successful: {train_result['n_successful_samples']}")
print(f" N failed: {train_result['n_failed_samples']}")
print(f" Per-param variance: {train_result['per_parameter_variance']}")
=== Trainability Result === Trainability estimate: 0.036496 Gradient variance: 0.00230838 Barren plateau risk: low Effective dimension: 4.0000 N samples: 100 N successful: 100 N failed: 0 Per-param variance: [0.00344869 0.00195357 0.00176649 0.00206475]
# Gradient variance directly
grad_var = compute_gradient_variance(enc, n_samples=100, seed=42)
print(f"Gradient variance: {grad_var:.8f}")
# Barren plateau detection
bp_risk = detect_barren_plateau(
gradient_variance=grad_var,
n_qubits=enc.n_qubits,
n_params=enc.n_features
)
print(f"Barren plateau risk: {bp_risk}")
Gradient variance: 0.00230838 Barren plateau risk: low
17. Compression Ratio Scaling¶
Amplitude encoding's key advantage is exponential compression. Let's visualize how compression scales.
print("Exponential compression scaling:")
print(f"{'n_features':>12} {'n_qubits':>10} {'compression':>14} {'depth':>8} {'gate_count':>12}")
print("=" * 60)
for n in [2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]:
e = AmplitudeEncoding(n_features=n)
s = e.resource_summary()
print(f"{n:>12} {s['n_qubits']:>10} {s['compression_ratio']:>13.1f}x {s['depth']:>8} {s['theoretical_gate_count']:>12}")
print("\nNote: compression ratio grows, but so does circuit depth!")
Exponential compression scaling:
n_features n_qubits compression depth gate_count
============================================================
2 1 2.0x 2 2
4 2 2.0x 4 6
8 3 2.7x 8 14
16 4 4.0x 16 30
32 5 6.4x 32 62
64 6 10.7x 64 126
128 7 18.3x 128 254
256 8 32.0x 256 510
512 9 56.9x 512 1022
1024 10 102.4x 1024 2046
Note: compression ratio grows, but so does circuit depth!
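The compression column in the table is simply $n / \lceil \log_2 n \rceil$, which a few lines of stdlib Python confirm:

```python
import math

# Compression ratio = n_features / ceil(log2(n_features)),
# matching the table above (8 -> 2.7x, 128 -> 18.3x, 1024 -> 102.4x).
for n in (8, 128, 1024):
    q = math.ceil(math.log2(n))
    print(f"{n:>5} features -> {q} qubits, {n / q:.1f}x compression")
```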
18. Equality, Hashing & Collections¶
AmplitudeEncoding instances support equality comparison and hashing, making them usable in sets and as dictionary keys.
enc1 = AmplitudeEncoding(n_features=4, normalize=True)
enc2 = AmplitudeEncoding(n_features=4, normalize=True)
enc3 = AmplitudeEncoding(n_features=4, normalize=False)
enc4 = AmplitudeEncoding(n_features=8, normalize=True)
# Equality
print("Equality:")
print(f" enc1 == enc2 (same params): {enc1 == enc2}")
print(f" enc1 == enc3 (diff normalize): {enc1 == enc3}")
print(f" enc1 == enc4 (diff features): {enc1 == enc4}")
# Hashing
print(f"\nHashing:")
print(f" hash(enc1) == hash(enc2): {hash(enc1) == hash(enc2)}")
print(f" hash(enc1) == hash(enc3): {hash(enc1) == hash(enc3)}")
# Use in sets
encoding_set = {enc1, enc2, enc3, enc4}
print(f"\nSet of {{enc1, enc2, enc3, enc4}}: {len(encoding_set)} unique encodings")
# Use as dictionary keys
encoding_dict = {enc1: "encoder A", enc3: "encoder B"}
print(f"Dict lookup: encoding_dict[enc2] = {encoding_dict[enc2]}") # enc2 == enc1
Equality:
enc1 == enc2 (same params): True
enc1 == enc3 (diff normalize): False
enc1 == enc4 (diff features): False
Hashing:
hash(enc1) == hash(enc2): True
hash(enc1) == hash(enc3): False
Set of {enc1, enc2, enc3, enc4}: 3 unique encodings
Dict lookup: encoding_dict[enc2] = encoder A
# Cross-type comparison: __eq__ returns NotImplemented, so == evaluates to False
print(f"enc1 == 'not_an_encoding': {enc1 == 'not_an_encoding'}")
print(f"enc1 == 42: {enc1 == 42}")
enc1 == 'not_an_encoding': False enc1 == 42: False
19. Serialization (Pickle)¶
AmplitudeEncoding fully supports pickle serialization, including proper handling of the thread lock.
import pickle
enc = AmplitudeEncoding(n_features=8, normalize=False)
# Access properties to ensure they're cached
_ = enc.properties
# Serialize
pickled = pickle.dumps(enc)
print(f"Serialized size: {len(pickled)} bytes")
# Deserialize
enc_restored = pickle.loads(pickled)
print(f"Restored: {enc_restored}")
print(f"Equal: {enc == enc_restored}")
print(f"n_qubits preserved: {enc_restored.n_qubits == enc.n_qubits}")
print(f"normalize preserved: {enc_restored.normalize == enc.normalize}")
print(f"properties preserved: {enc_restored.properties == enc.properties}")
Serialized size: 599 bytes Restored: AmplitudeEncoding(n_features=8, normalize=False) Equal: True n_qubits preserved: True normalize preserved: True properties preserved: True
# The restored encoding is fully functional
x = np.array([1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]) # pre-normalized
circuit = enc_restored.get_circuit(x, backend='pennylane')
print(f"Circuit from restored encoding: callable={callable(circuit)}")
Circuit from restored encoding: callable=True
20. Thread Safety & Concurrency¶
AmplitudeEncoding is designed for thread-safe operation:
- properties: double-checked locking pattern
- get_circuits(parallel=True): thread-safe via stateless circuit generation
- Input validation: creates defensive copies
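For reference, the double-checked locking pattern named above looks roughly like this (an illustrative sketch with a placeholder `LazyProps` class, not the library's actual code):

```python
import threading

# Double-checked locking: the fast path skips the lock once the value is
# cached; the second check under the lock prevents computing it twice.
class LazyProps:
    def __init__(self):
        self._props = None
        self._lock = threading.Lock()

    @property
    def properties(self):
        if self._props is None:            # fast path: no lock once cached
            with self._lock:
                if self._props is None:    # re-check under the lock
                    self._props = {"n_qubits": 2}  # placeholder computation
        return self._props

lp = LazyProps()
print(lp.properties is lp.properties)  # True: computed once, then cached
```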
import threading
enc = AmplitudeEncoding(n_features=4)
results = {}
def access_properties(thread_id):
"""Access properties from multiple threads simultaneously."""
props = enc.properties
results[thread_id] = props
# Launch multiple threads accessing properties simultaneously
threads = [threading.Thread(target=access_properties, args=(i,)) for i in range(10)]
for t in threads:
t.start()
for t in threads:
t.join()
# All threads should get the same properties object
all_same = all(v == results[0] for v in results.values())
print(f"All 10 threads got identical properties: {all_same}")
All 10 threads got identical properties: True
# Concurrent circuit generation
enc = AmplitudeEncoding(n_features=4)
circuit_results = {}
def generate_circuit(thread_id, x):
"""Generate circuit from a separate thread."""
circuit = enc.get_circuit(x, backend='pennylane')
circuit_results[thread_id] = circuit
inputs = [np.random.randn(4) for _ in range(10)]
threads = [
threading.Thread(target=generate_circuit, args=(i, inputs[i]))
for i in range(10)
]
for t in threads:
t.start()
for t in threads:
t.join()
print(f"All 10 threads generated circuits: {len(circuit_results) == 10}")
print(f"All callable: {all(callable(c) for c in circuit_results.values())}")
All 10 threads generated circuits: True All callable: True
21. Edge Cases & Error Handling¶
AmplitudeEncoding performs rigorous validation. Let's explore all the error scenarios.
# --- Constructor validation ---
# Invalid normalize types
invalid_normalize_values = [
("True", "string"),
(1, "integer"),
(0, "integer zero"),
(None, "None"),
([True], "list"),
(1.0, "float"),
]
print("=== normalize type validation ===")
for val, desc in invalid_normalize_values:
try:
AmplitudeEncoding(n_features=4, normalize=val)
print(f" {desc} ({val!r}): ACCEPTED (unexpected!)")
except TypeError as e:
print(f" {desc} ({val!r}): TypeError")
=== normalize type validation ===
string ('True'): TypeError
integer (1): TypeError
integer zero (0): TypeError
None (None): TypeError
list ([True]): TypeError
float (1.0): TypeError
# n_features validation (from parent class)
print("=== n_features validation ===")
for n in [0, -1, -100]:
    try:
        AmplitudeEncoding(n_features=n)
        print(f" n_features={n}: ACCEPTED (unexpected!)")
    except Exception as e:  # ValueError expected
        print(f" n_features={n}: {type(e).__name__}")
=== n_features validation === n_features=0: ValueError n_features=-1: ValueError n_features=-100: ValueError
# --- Input validation ---
enc = AmplitudeEncoding(n_features=4)
print("=== Input validation ===")
# Wrong number of features
try:
enc.get_circuit(np.array([1.0, 2.0, 3.0]), backend='pennylane') # 3 != 4
except ValueError as e:
print(f"Wrong shape: ValueError")
# NaN values
try:
enc.get_circuit(np.array([1.0, float('nan'), 3.0, 4.0]), backend='pennylane')
except ValueError as e:
print(f"NaN input: ValueError")
# Infinite values
try:
enc.get_circuit(np.array([1.0, float('inf'), 3.0, 4.0]), backend='pennylane')
except ValueError as e:
print(f"Inf input: ValueError")
# Zero vector (cannot form valid quantum state)
try:
enc.get_circuit(np.array([0.0, 0.0, 0.0, 0.0]), backend='pennylane')
except ValueError as e:
print(f"Zero vector: ValueError")
# Complex numbers
try:
enc.get_circuit(np.array([1+2j, 3+4j, 5+6j, 7+8j]), backend='pennylane')
except TypeError as e:
print(f"Complex input: TypeError")
# String input
try:
enc.get_circuit(["0.5", "0.3", "0.1", "0.1"], backend='pennylane')
except TypeError as e:
print(f"String input: TypeError")
# Unknown backend
try:
enc.get_circuit(np.array([1.0, 2.0, 3.0, 4.0]), backend='tensorflow')
except ValueError as e:
print(f"Unknown backend: ValueError")
=== Input validation === Wrong shape: ValueError NaN input: ValueError Inf input: ValueError Zero vector: ValueError Complex input: TypeError String input: TypeError Unknown backend: ValueError
# Near-zero but non-zero vector: should succeed
x_tiny = np.array([1e-14, 0.0, 0.0, 0.0]) # norm = 1e-14 > 1e-15 threshold
circuit = enc.get_circuit(x_tiny, backend='pennylane')
print(f"Near-zero vector (norm={np.linalg.norm(x_tiny):.2e}): accepted")
# Below threshold: should fail
x_too_small = np.array([1e-16, 0.0, 0.0, 0.0]) # norm = 1e-16 < 1e-15
try:
enc.get_circuit(x_too_small, backend='pennylane')
except ValueError:
print(f"Too-small vector (norm={np.linalg.norm(x_too_small):.2e}): rejected")
Near-zero vector (norm=1.00e-14): accepted Too-small vector (norm=1.00e-16): rejected
# Negative values are perfectly valid (after normalization, they become negative amplitudes)
x_negative = np.array([-1.0, -2.0, 3.0, 4.0])
transformed = enc.transform_input(x_negative)
print(f"Negative values accepted: {transformed}")
print(f"Norm: {np.linalg.norm(transformed):.10f}")
Negative values accepted: [-0.18257419 -0.36514837 0.54772256 0.73029674] Norm: 1.0000000000
# Very large values: normalization handles scaling
x_large = np.array([1e10, 2e10, 3e10, 4e10])
transformed = enc.transform_input(x_large)
print(f"Large values: {x_large}")
print(f"After normalization: {transformed}")
print(f"Norm: {np.linalg.norm(transformed):.10f}")
# Very small values: normalization preserves ratios
x_small = np.array([1e-10, 2e-10, 3e-10, 4e-10])
transformed = enc.transform_input(x_small)
print(f"\nSmall values: {x_small}")
print(f"After normalization: {transformed}")
print(f"Ratios preserved: {np.allclose(transformed, x_large / np.linalg.norm(x_large))}")
Large values: [1.e+10 2.e+10 3.e+10 4.e+10] After normalization: [0.18257419 0.36514837 0.54772256 0.73029674] Norm: 1.0000000000 Small values: [1.e-10 2.e-10 3.e-10 4.e-10] After normalization: [0.18257419 0.36514837 0.54772256 0.73029674] Ratios preserved: True
22. Qubit Ordering Conventions¶
The library adopts MSB (most significant bit) ordering. This is important for understanding how amplitude indices map to qubit states.
| Index | Binary | MSB meaning | Qiskit (LSB) |
|---|---|---|---|
| 0 | 00 | q0=0, q1=0 | q0=0, q1=0 |
| 1 | 01 | q0=0, q1=1 | q0=1, q1=0 |
| 2 | 10 | q0=1, q1=0 | q0=0, q1=1 |
| 3 | 11 | q0=1, q1=1 | q0=1, q1=1 |
The library handles the MSB-to-LSB conversion for Qiskit transparently.
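The MSB-to-LSB conversion amounts to a bit-reversal permutation of the amplitudes, which can be sketched by reversing the qubit axes of the reshaped statevector (an illustrative helper, not the library's API):

```python
import numpy as np

# Reversing the qubit axes sends amplitude index i to the index with
# reversed binary digits, e.g. for 2 qubits: 10 (index 2) <-> 01 (index 1).
def reverse_qubit_order(state: np.ndarray, n_qubits: int) -> np.ndarray:
    axes = list(range(n_qubits - 1, -1, -1))
    return state.reshape([2] * n_qubits).transpose(axes).ravel()

msb = np.array([0, 0, 1, 0], dtype=complex)  # |10> under MSB ordering
print(reverse_qubit_order(msb, 2).real)      # [0. 1. 0. 0.] -> index 1
```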
enc = AmplitudeEncoding(n_features=4)
# Encode a state where only amplitude[2] is non-zero
# Index 2 in binary = 10, so in MSB: qubit 0 = 1, qubit 1 = 0
x = np.array([0.0, 0.0, 1.0, 0.0]) # |10> state
# All backends should produce the same state
state_pl = simulate_encoding_statevector(enc, x, backend='pennylane')
state_qk = simulate_encoding_statevector(enc, x, backend='qiskit')
state_cq = simulate_encoding_statevector(enc, x, backend='cirq')
print("Encoding |10> (index 2):")
print(f" PennyLane: {state_pl}")
print(f" Qiskit: {state_qk}")
print(f" Cirq: {state_cq}")
print(f" All match: {np.allclose(state_pl, state_qk) and np.allclose(state_pl, state_cq)}")
print(f"\nAmplitude at index 2 is 1.0 in all backends (MSB convention)")
Encoding |10> (index 2): PennyLane: [0.+0.j 0.+0.j 1.+0.j 0.+0.j] Qiskit: [0.+0.j 0.+0.j 1.+0.j 0.+0.j] Cirq: [0.+0.j 0.+0.j 1.+0.j 0.+0.j] All match: True Amplitude at index 2 is 1.0 in all backends (MSB convention)
23. Logging & Debugging¶
AmplitudeEncoding uses Python's logging module for debug information. No output is produced by default.
import logging
# Enable debug logging for amplitude encoding
logger = logging.getLogger('encoding_atlas.encodings.amplitude')
logger.setLevel(logging.DEBUG)
# Add handler to see output
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter('%(name)s - %(levelname)s - %(message)s'))
logger.addHandler(handler)
# Now circuit generation will emit debug messages
enc = AmplitudeEncoding(n_features=4)
circuit = enc.get_circuit(np.array([1.0, 2.0, 3.0, 4.0]), backend='pennylane')
encoding_atlas.encodings.amplitude - DEBUG - AmplitudeEncoding initialized: n_features=4, n_qubits=2, normalize=True, state_dim=4 encoding_atlas.encodings.amplitude - DEBUG - get_circuit called: backend='pennylane', input_shape=(4,) encoding_atlas.encodings.amplitude - DEBUG - Input normalized: original_norm=5.4772255751, normalized_norm=1.0 encoding_atlas.encodings.amplitude - DEBUG - Dispatching to backend: 'pennylane', state_vector_len=4, n_qubits=2 encoding_atlas.encodings.amplitude - INFO - PennyLane circuit generated: n_qubits=2
# Clean up logging
logger.removeHandler(handler)
logger.setLevel(logging.WARNING)
print("Logging reset to WARNING level")
Logging reset to WARNING level
24. Comparison with Other Encodings¶
Let's compare AmplitudeEncoding with other encodings to understand the trade-offs.
from encoding_atlas import AngleEncoding, IQPEncoding, BasisEncoding
n_features = 4
encodings = [
("AngleEncoding", AngleEncoding(n_features=n_features)),
("AmplitudeEncoding", AmplitudeEncoding(n_features=n_features)),
("IQPEncoding", IQPEncoding(n_features=n_features)),
("BasisEncoding", BasisEncoding(n_features=n_features)),
]
print(f"{'Encoding':<22} {'Qubits':>6} {'Depth':>6} {'Entangling':>10} {'Simulability':>20}")
print("=" * 70)
for name, enc in encodings:
p = enc.properties
print(f"{name:<22} {p.n_qubits:>6} {p.depth:>6} {str(p.is_entangling):>10} {p.simulability:>20}")
Encoding Qubits Depth Entangling Simulability ====================================================================== AngleEncoding 4 1 False simulable AmplitudeEncoding 2 4 True not_simulable IQPEncoding 4 6 True not_simulable BasisEncoding 4 1 False simulable
# Resource comparison using analysis module
enc_list = [enc for _, enc in encodings]
comparison = compare_resources(
enc_list,
metrics=['n_qubits', 'depth', 'gate_count', 'two_qubit_ratio', 'gates_per_qubit']
)
print("\n=== Detailed Resource Comparison ===")
names = comparison.get('encoding_name', [e.__class__.__name__ for e in enc_list])
for metric in ['n_qubits', 'depth', 'gate_count', 'two_qubit_ratio', 'gates_per_qubit']:
if metric in comparison:
print(f"\n{metric}:")
for name, val in zip(names, comparison[metric]):
print(f" {name:<22}: {val}")
=== Detailed Resource Comparison === n_qubits: AngleEncoding : 4 AmplitudeEncoding : 2 IQPEncoding : 4 BasisEncoding : 4 depth: AngleEncoding : 1 AmplitudeEncoding : 4 IQPEncoding : 6 BasisEncoding : 1 gate_count: AngleEncoding : 4 AmplitudeEncoding : 6 IQPEncoding : 52 BasisEncoding : 4 two_qubit_ratio: AngleEncoding : 0.0 AmplitudeEncoding : 0.3333333333333333 IQPEncoding : 0.46153846153846156 BasisEncoding : 0.0 gates_per_qubit: AngleEncoding : 1.0 AmplitudeEncoding : 3.0 IQPEncoding : 13.0 BasisEncoding : 1.0
# Key takeaway: Amplitude encoding uses fewest qubits but deepest circuits
print("\n=== Key Trade-off ===")
print(f"AngleEncoding: {AngleEncoding(n_features=16).n_qubits} qubits for 16 features (linear)")
print(f"AmplitudeEncoding: {AmplitudeEncoding(n_features=16).n_qubits} qubits for 16 features (logarithmic)")
print(f"\nBut depth is {AmplitudeEncoding(n_features=16).depth} for amplitude vs ~1 for angle!")
=== Key Trade-off === AngleEncoding: 16 qubits for 16 features (linear) AmplitudeEncoding: 4 qubits for 16 features (logarithmic) But depth is 16 for amplitude vs ~1 for angle!
25. Summary & Best Practices¶
When to use AmplitudeEncoding¶
- You need maximum data compression (exponential)
- Working with fault-tolerant quantum computers or simulators
- Implementing algorithms that require amplitude-encoded inputs (HHL, QSVM, QPCA)
- Small feature counts (n_features ≤ 16, i.e., ≤ 4 qubits)
When NOT to use AmplitudeEncoding¶
- On NISQ devices (circuit too deep for noisy hardware)
- When you need fast circuit execution (exponential depth)
- For real-time applications (state preparation is slow)
- When magnitude information matters (normalization destroys absolute scale)
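The last point is easy to verify: any positive multiple of an input vector normalizes to the same amplitudes, so the overall magnitude is unrecoverable from the encoded state.

```python
import numpy as np

# x and 1000*x normalize to identical amplitude vectors -- the same
# quantum state -- so absolute scale is lost under amplitude encoding.
x = np.array([1.0, 2.0, 3.0, 4.0])
a = x / np.linalg.norm(x)
b = (1000.0 * x) / np.linalg.norm(1000.0 * x)
print(np.allclose(a, b))  # True
```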
Best practices¶
- Use normalize=True (the default) unless you have a specific reason to handle normalization yourself
- Check resource_summary() before using the Cirq backend with large feature counts
- Use parallel=True in get_circuits() for large batches, especially with the Cirq backend
- Use transform_input() to inspect how your data is preprocessed before encoding
- Use the analysis module (check_simulability, compute_expressibility, etc.) to characterize encoding behavior
- Prefer the PennyLane or Qiskit backends over Cirq for memory efficiency with many qubits
- Use the seed parameter in analysis functions for reproducible results
# Final summary: everything in one cell
enc = AmplitudeEncoding(n_features=8)
print(f"AmplitudeEncoding Summary")
print(f"========================")
print(f"Representation: {enc}")
print(f"Features: {enc.n_features}")
print(f"Qubits: {enc.n_qubits}")
print(f"Depth: {enc.depth}")
print(f"Normalize: {enc.normalize}")
print(f"Config: {enc.config}")
print(f"Entangling: {enc.properties.is_entangling}")
print(f"Simulability: {enc.properties.simulability}")
print(f"Compression ratio: {enc.resource_summary()['compression_ratio']:.1f}x")
print(f"Gate count (est.): {enc.gate_count_breakdown()['total']}")
print(f"Cirq memory: {enc.resource_summary()['cirq_unitary_memory_human']}")
print(f"\nSupported backends: pennylane, qiskit, cirq")
print(f"Protocols: ResourceAnalyzable, DataTransformable")
print(f"Thread-safe: Yes")
print(f"Serializable: Yes (pickle)")
AmplitudeEncoding Summary
========================
Representation: AmplitudeEncoding(n_features=8, normalize=True)
Features: 8
Qubits: 3
Depth: 8
Normalize: True
Config: {'normalize': True}
Entangling: True
Simulability: not_simulable
Compression ratio: 2.7x
Gate count (est.): 14
Cirq memory: 1.0 KB
Supported backends: pennylane, qiskit, cirq
Protocols: ResourceAnalyzable, DataTransformable
Thread-safe: Yes
Serializable: Yes (pickle)