🧮 Fixed-point quantization
On real hardware, the update must eventually be quantized to a fixed-point format. This can be modeled as:
$$\theta_{t+1} = Q_\Theta\!\left[\theta_t - Q_\Delta\!\left(\alpha_t \eta\, G_t\right)\right],$$
where:
- $Q_\Theta$ quantizes/clips the weights,
- $Q_\Delta$ quantizes/clips the updates,
- $\alpha_t$ is the global throttle.
This introduces quantization error:
$$\xi_t = \theta_{t+1} - \left(\theta_t - \alpha_t \eta\, G_t\right).$$
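As a concrete illustration, here is a minimal NumPy sketch of this update. The fixed-point formats (a $2^{-12}$ weight quantum, a $2^{-16}$ update quantum) and the clipping ranges are illustrative assumptions, not values from the analysis; `quantize` models both $Q_\Theta$ and $Q_\Delta$ as round-to-nearest followed by clipping.

```python
import numpy as np

# Illustrative fixed-point parameters (assumptions, not from the analysis):
Q_THETA = 2.0 ** -12   # weight quantum of Q_Theta
Q_DELTA = 2.0 ** -16   # update quantum q_Delta of Q_Delta
W_MAX = 8.0            # weight clipping range
U_MAX = 1.0            # update clipping range

def quantize(x, q, limit):
    """Round to the nearest multiple of q, then clip to [-limit, limit]."""
    return np.clip(np.round(x / q) * q, -limit, limit)

def quantized_step(theta, grad, alpha, eta):
    """theta_{t+1} = Q_Theta[ theta_t - Q_Delta(alpha_t * eta * G_t) ]"""
    update = quantize(alpha * eta * grad, Q_DELTA, U_MAX)
    return quantize(theta - update, Q_THETA, W_MAX)

rng = np.random.default_rng(0)
theta = quantize(rng.normal(size=4), Q_THETA, W_MAX)  # start on the weight grid
grad = rng.normal(size=4)
alpha, eta = 0.5, 1e-2

theta_next = quantized_step(theta, grad, alpha, eta)
xi = theta_next - (theta - alpha * eta * grad)   # quantization error xi_t
print("||xi_t||_inf =", np.max(np.abs(xi)))
```

With round-to-nearest and no clipping, each component of $\xi_t$ is bounded by roughly $(q_\Delta + q_\Theta)/2$: half a quantum from each of the two quantizers.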
Stability analysis (Lyapunov) under quantization
The Lyapunov/descent condition becomes roughly:
$$L_{t+1} - L_t \;\lesssim\; -\,\alpha_t \eta \left(1 - \frac{\alpha_t \eta L_t}{2}\right)\|G_t\|^2 \;+\; \text{quantization error terms}.$$
This tells us two things. First, stability requires an upper bound:
$$\alpha_t \le \frac{\chi}{\eta\, C_t^{\mathrm{ctrl}}}.$$
Second, for a fixed-point update to be useful, it must not underflow. If the update quantum is $q_\Delta$, then approximately:
$$\alpha_t \eta \|G_t\| \gtrsim q_\Delta.$$
So:
$$\alpha_t \gtrsim \frac{q_\Delta}{\eta \|G_t\|}.$$
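To see the underflow failure mode concretely, the following sketch shows how round-to-nearest kills any update whose magnitude falls below $q_\Delta/2$. The values of $\eta$, $\|G_t\|$, and $q_\Delta$ here are illustrative assumptions, not numbers from the analysis.

```python
import numpy as np

Q_DELTA = 2.0 ** -16                      # assumed update quantum q_Delta
eta, grad_norm = 1e-2, 1e-3               # illustrative values

for alpha in [1e-4, 1e-2, 1.0]:
    raw = alpha * eta * grad_norm         # ideal update magnitude
    quantized = np.round(raw / Q_DELTA) * Q_DELTA
    status = "underflows to zero" if quantized == 0 else "survives"
    print(f"alpha={alpha:>6}: raw update {raw:.2e} -> {status}")
```

For the first two throttle values the raw update is far below $q_\Delta/2$, so the hardware applies exactly zero and no learning happens, no matter how many steps are taken.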
Therefore, useful, stable fixed-point learning requires a nonempty interval:
$$\frac{q_\Delta}{\eta \|G_t\| + \epsilon} \;\lesssim\; \alpha_t \;\le\; \frac{\chi}{\eta\, C_t^{\mathrm{ctrl}} + \epsilon}.$$
Key Insight: Fixed-point precision gives a minimum useful update size. Stability gives a maximum safe update size. Online learning is possible only when these bounds overlap.
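A quick way to sanity-check this window numerically is to evaluate both bounds and compare them. In the sketch below, `chi`, `c_ctrl`, `eta`, and `q_delta` are hypothetical stand-ins for the constants above; the point it illustrates is that small gradients push the underflow bound up until it crosses the stability bound and the window empties.

```python
import numpy as np

def alpha_window(grad_norm, c_ctrl, eta=1e-2, q_delta=2.0**-16,
                 chi=1.0, eps=1e-12):
    """Return (lower, upper) bounds on alpha_t from the two conditions.
    All numeric defaults are illustrative assumptions."""
    lower = q_delta / (eta * grad_norm + eps)   # no-underflow bound
    upper = chi / (eta * c_ctrl + eps)          # stability bound
    return lower, upper

for g in [1e-4, 1e-2, 1.0]:
    lo, hi = alpha_window(grad_norm=g, c_ctrl=50.0)
    verdict = "learnable" if lo <= hi else "EMPTY (no usable alpha_t)"
    print(f"||G_t||={g:>6}: alpha_t in [{lo:.3g}, {hi:.3g}] -> {verdict}")
```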