GPU-accelerated Joint Time-Frequency Scattering Transform. Formally grounded perceptual fingerprinting at industrial throughput.
JTFS recovers inter-band phase correlations lost to the standard WST modulus non-linearity; in our benchmarks this cuts hash collision rates under phase-shift attacks by 34%.
The depth-m scattering cascade is provably Lipschitz continuous with constant Lₘ ≤ (‖ψ‖₁)ᵐ, which decays exponentially with depth whenever ‖ψ‖₁ < 1, so minor signal perturbations cannot catastrophically alter fingerprints.
Dual-stream CUDA double-buffering hides ≥95% of PCIe transfer latency. Pinned memory allocations sustain >15 GB/s host-to-device bandwidth.
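The overlap principle behind the dual-stream scheme can be illustrated host-side without a GPU: while one buffer is being consumed, the next is already being staged. A minimal sketch using plain threads and a bounded queue (`double_buffered_pipeline` is a hypothetical helper for illustration, not part of the omni_wst_core API), modelling the H2D copy and the kernel as ordinary Python callables:

```python
import threading
import queue
import numpy as np

def double_buffered_pipeline(chunks, transfer, compute):
    """Overlap 'transfer' of chunk i+1 with 'compute' on chunk i.

    Host-side analogue of dual-stream double-buffering: the bounded
    queue holds at most two in-flight buffers, so one is always being
    refilled while the other is being computed on.
    """
    staged = queue.Queue(maxsize=2)  # two in-flight buffers

    def producer():
        for c in chunks:
            staged.put(transfer(c))   # models the H2D copy on stream A
        staged.put(None)              # sentinel: no more chunks

    t = threading.Thread(target=producer)
    t.start()
    results = []
    while (buf := staged.get()) is not None:
        results.append(compute(buf))  # models the kernel on stream B
    t.join()
    return results

# Toy usage: "transfer" is a copy, "compute" is an energy reduction.
frames = [np.full(1024, i, dtype=np.float32) for i in range(8)]
out = double_buffered_pipeline(frames, np.copy, lambda b: float(b.sum()))
```

The real pipeline replaces the queue hand-off with CUDA events on two streams, but the scheduling invariant is the same.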
The standard Wavelet Scattering Transform defines a deep convolutional representation through alternating wavelet filtering and pointwise modulus operators:

Sₘx(t) = (Uₘx) ∗ φ(t),  where  Uₘx = |Uₘ₋₁x ∗ ψ_λₘ|  and  U₀x = x.
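A minimal NumPy sketch of such a depth-2 cascade, with crude Gaussian band-pass filters standing in for the library's optimised bank (`morlet_bank` and `scatter` are illustrative names, not omni_wst_core API):

```python
import numpy as np

def morlet_bank(n, Q=4, J=4):
    """Crude analytic band-pass bank on a geometric frequency grid."""
    freqs = np.fft.fftfreq(n)
    bank = []
    for j in range(J * Q):
        xi = 0.4 * 2 ** (-j / Q)      # centre frequency, geometric spacing
        sigma = xi / (2 * Q)
        bank.append(np.exp(-((freqs - xi) ** 2) / (2 * sigma ** 2)))
    return bank  # frequency-domain filters, positive frequencies only

def scatter(x, bank):
    """Depth-2 cascade: U1 = |x * psi1|, S2 = avg(||x * psi1| * psi2|)."""
    X = np.fft.fft(x)
    S1, S2 = [], []
    for i, psi1 in enumerate(bank):
        u1 = np.abs(np.fft.ifft(X * psi1))        # first modulus layer
        S1.append(u1.mean())                      # phi = global averaging
        U1 = np.fft.fft(u1)
        for psi2 in bank[i + 1:]:                 # only slower modulations
            S2.append(np.abs(np.fft.ifft(U1 * psi2)).mean())
    return np.array(S1), np.array(S2)

x = np.random.randn(2048)
S1, S2 = scatter(x, morlet_bank(2048))
```

The production path computes the same cascade with FFT batching on the GPU; the nesting of modulus and convolution is what matters here.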
To prevent informational collapse, the analytic filter bank is strictly constrained to form a Parseval frame, satisfying the energy conservation identity:

‖x ∗ φ‖² + Σ_λ ‖x ∗ ψ_λ‖² = ‖x‖²,  equivalently  |φ̂(ω)|² + Σ_λ |ψ̂_λ(ω)|² = 1 for almost every ω.
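The identity is easy to check numerically: normalise any band-pass bank by its Littlewood–Paley sum and filtering conserves energy exactly. A NumPy sketch (the 12 Gaussian bands are arbitrary stand-ins, not the shipped filter bank):

```python
import numpy as np

n = 1024
freqs = np.fft.fftfreq(n)
raw = [np.exp(-((np.abs(freqs) - f0) ** 2) / (2 * 0.03 ** 2))
       for f0 in np.linspace(0.0, 0.5, 12)]
lp = np.sqrt(sum(h ** 2 for h in raw))   # Littlewood-Paley sum of the raw bank
bank = [h / lp for h in raw]             # normalised: sum of |g_k|^2 is 1 everywhere

# The tiling condition holds pointwise over frequency ...
tiling = sum(g ** 2 for g in bank)
# ... so filtering conserves energy (Plancherel):
x = np.random.randn(n)
X = np.fft.fft(x)
subbands = [np.fft.ifft(X * g) for g in bank]
energy = sum(np.sum(np.abs(s) ** 2) for s in subbands)
```

Here `tiling` is identically 1 and `energy` matches the input energy Σx² to floating-point precision.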
This energy-preserving construction mathematically guarantees deformation stability via the Lipschitz continuity theorem: for a deformation x_τ(t) = x(t − τ(t)),

‖Sx_τ − Sx‖ ≤ C ‖∇τ‖_∞ ‖x‖,

so the representation error is bounded linearly by the size of the deformation gradient.
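The practical consequence can be demonstrated numerically: a small time-warp that changes the raw waveform drastically moves a scattering output only slightly. A toy NumPy sketch with a single hand-built first-order channel (illustrative, not the library's cascade):

```python
import numpy as np

n = 4096
t = np.arange(n) / n
x = np.sin(2 * np.pi * 220 * t)               # pure tone
tau = 0.002 * np.sin(2 * np.pi * t)           # smooth, small time-warp
x_tau = np.sin(2 * np.pi * 220 * (t - tau))   # deformed signal

def s1(sig):
    """One first-order scattering channel: |sig * psi| averaged by phi."""
    freqs = np.fft.fftfreq(n)
    psi = np.exp(-((freqs - 220 / n) ** 2) / (2 * (40 / n) ** 2))
    u = np.abs(np.fft.ifft(np.fft.fft(sig) * psi))       # envelope
    phi = np.exp(-(freqs ** 2) / (2 * (4 / n) ** 2))     # wide lowpass
    return np.real(np.fft.ifft(np.fft.fft(u) * phi))

raw_gap = np.linalg.norm(x_tau - x) / np.linalg.norm(x)
scat_gap = np.linalg.norm(s1(x_tau) - s1(x)) / np.linalg.norm(s1(x))
```

The warp shifts the carrier phase by up to ~2.8 rad, so `raw_gap` is of order one, while `scat_gap` stays small in proportion to ‖∇τ‖_∞, exactly as the bound predicts.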
Finally, to recover critical phase-coupling lost during the nonlinear modulus cascade, our Joint Time-Frequency Scattering (JTFS) engine applies a fully separable 2D wavelet Ψ_{α,β}(t, log λ) = ψ_α(t) ψ_β(log λ) across both the temporal and log-frequency axes of the scalogram:

S_Jx = ||x ∗ ψ_λ| ⊛ Ψ_{α,β}| ∗ φ,

where ⊛ denotes joint convolution over (t, log λ) and (α, β) index the temporal modulation rate and frequential scale.
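Separability means the joint 2D convolution factorises into a 1D pass along time followed by a 1D pass along log-frequency. A NumPy sketch of one (rate, scale) channel (`scalogram` and `jtfs_channel` are illustrative names, not omni_wst_core API):

```python
import numpy as np

def scalogram(x, n_bands=16):
    """First-order modulus layer, stacked into a (log-frequency, time) image."""
    n = len(x)
    freqs = np.fft.fftfreq(n)
    X = np.fft.fft(x)
    rows = []
    for j in range(n_bands):
        xi = 0.4 * 2 ** (-j / 4)
        psi = np.exp(-((freqs - xi) ** 2) / (2 * (xi / 8) ** 2))
        rows.append(np.abs(np.fft.ifft(X * psi)))
    return np.stack(rows)

def jtfs_channel(U, alpha, beta):
    """Separable joint wavelet: filter the time axis, then the log-frequency axis."""
    n_f, n_t = U.shape
    ft = np.fft.fftfreq(n_t)
    ff = np.fft.fftfreq(n_f)
    psi_t = np.exp(-((ft - alpha) ** 2) / (2 * (alpha / 4) ** 2))
    psi_f = np.exp(-((ff - beta) ** 2) / (2 * (abs(beta) / 4) ** 2))
    Y = np.fft.ifft(np.fft.fft(U, axis=1) * psi_t, axis=1)            # temporal pass
    Y = np.fft.ifft(np.fft.fft(Y, axis=0) * psi_f[:, None], axis=0)   # frequential pass
    return np.abs(Y)

x = np.random.randn(2048)
U = scalogram(x)
C = jtfs_channel(U, alpha=0.05, beta=0.25)   # one (rate, scale) channel
```

Oriented (rate, scale) pairs like this are what distinguish rising from falling frequency patterns, the phase-coupling information a time-only cascade discards.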
Zero-copy pipeline from Python NumPy → GPU VRAM. No intermediate heap allocations after initialisation.
Detect near-duplicate audio and copyright violations robustly by extracting JTFS signatures that remain invariant to MP3 compression, equalisation, and adversarial time-stretching.
WST provides a deformation-stable feature extractor for identifying compact binary coalescence (CBC) signals embedded in non-stationary broadband noise at the LIGO/Virgo detectors.
Encode read-depth signals into a stable translation-invariant representation, accurately characterising transcription factor binding regardless of minor genomic position shifts.
Continuously embed high-frequency sensor streams (e.g. EEG brain activity, HFT order books) to flag transient distribution shifts with bounded Lipschitz error tolerances.
import omni_wst_core as wst
import numpy as np
# Initialize configuration
cfg = wst.WSTConfig(J=8, Q=16, depth=2, jtfs=True)
# Mock frame: one second of 44.1 kHz audio
signal = np.random.randn(44100).astype(np.float32)
# Forward pass
fingerprint = wst.fingerprint(signal, cfg)
print(f"Fingerprint shape: {fingerprint.shape}")
print(f"CUDA available: {wst.cuda_available()}")
WST + JTFS
CPU & GPU targets
Apache 2.0 License
Unlimited academic use
Production deployment rights
Priority developer support
Strict SLA options
Custom CUDA hardware targets
Full SaaS integration
Rust orchestration layer
Ed25519 licensing pipeline
Dedicated Kubernetes namespace
Academic and government research institutions qualify for free commercial waivers — include institutional affiliation in your request.