# PrecisionDict Module Reference

Source: `kappa/precision.py`
`PrecisionDict` is the layer-indexed precision map used by quantized ablation experiments. It is deliberately explicit because future work will compare different precision assignments across layers.
## Imports

```python
from collections.abc import Mapping
from typing import Any

import tensorflow as tf

from .dtypes import HLSDataType
```
The only package-level dependency is HLSDataType, which parses strings and validates dtype objects.
## Reserved Names

```python
RESERVED_NAMES = {"loss", "input", "__default__"}
```
These names are semantic precision scopes, not ordinary Keras layers.
Meaning:

| Name | Purpose |
|---|---|
| `input` | Precision for raw input tensors. |
| `loss` | Precision for the loss value or loss-side signals. |
| `__default__` | Optional fallback precision fields. |
No Keras layer should be named `input` or `loss`.
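As a quick illustration, a guard like the following catches the clash early. This is a minimal sketch: `assert_no_reserved_layer_names` is a hypothetical helper, and it assumes `RESERVED_NAMES` is importable from `kappa.precision`. The same rule is what `validate_model()` later enforces.

```python
import tensorflow as tf

from kappa.precision import RESERVED_NAMES

def assert_no_reserved_layer_names(model: tf.keras.Model) -> None:
    # A layer reusing a reserved name would shadow the input/loss scopes.
    clashes = [layer.name for layer in model.layers if layer.name in RESERVED_NAMES]
    if clashes:
        raise ValueError(f"layers use reserved precision names: {clashes}")
```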
## PrecisionDict

```python
class PrecisionDict(dict[str, dict[str, HLSDataType | None]]):
    ...
```
The top-level key is a layer or semantic scope:

- `dense0`
- `activation0`
- `input`
- `loss`
The second-level key is a tensor family:

- `weight`
- `bias`
- `activation`
- `accumulator`
- `gradient`
- `update`
- `value`
Example:

```python
from kappa import dtypes, PrecisionDict

precisions = PrecisionDict({
    "input": {
        "value": "ap_fixed<12,4,AP_RND,AP_SAT>",
    },
    "dense0": {
        "weight": dtypes.ap_fixed(12, 4, "AP_RND", "AP_SAT"),
        "gradient": dtypes.ap_fixed(16, 6, "AP_RND", "AP_SAT"),
        "update": dtypes.ap_fixed(16, 4, "AP_RND", "AP_SAT"),
    },
    "loss": {
        "value": "ap_fixed<24,12,AP_RND,AP_SAT>",
    },
})
```
Strings and dtype objects can be mixed freely. `None` marks a field as explicitly float, i.e. not quantized.
### `__init__(data)`

```python
def __init__(self, data: Mapping[str, Mapping[str, Any]] | None = None):
    ...
```
Builds the nested dictionary and parses all dtype entries.
Input:

```python
{
    "dense0": {
        "weight": "ap_fixed<12,4,AP_RND,AP_SAT>",
        "bias": None,
    }
}
```

Stored internally as:

```python
{
    "dense0": {
        "weight": ap_fixed(...),
        "bias": None,
    }
}
```
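A minimal sketch of the constructor's behavior, consistent with the input/stored example above (the shipped implementation may differ in details):

```python
def __init__(self, data: Mapping[str, Mapping[str, Any]] | None = None):
    super().__init__()
    for scope, fields in (data or {}).items():
        # Parse eagerly so later lookups never see raw strings.
        self[scope] = {name: self._parse_dtype(dt) for name, dt in fields.items()}
```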
### `_parse_dtype(dtype)`

```python
@staticmethod
def _parse_dtype(dtype: Any) -> HLSDataType | None:
    ...
```
Rules:

- `None` stays `None`,
- dtype objects pass through,
- strings are parsed with `HLSDataType.from_dtype()`.
This is why notebook configs can be concise:

```python
"weight": "ap_fixed<12,4,AP_RND,AP_SAT>"
```
### `dtype(layer_name, field, default=None)`

```python
def dtype(self, layer_name: str, field: str, default=None):
    ...
```
Main lookup method.
Lookup order:

1. Exact layer and field: `precisions["dense0"]["weight"]`
2. The `__default__` fallback: `precisions["__default__"]["weight"]`
3. The provided `default`, usually `None`.
Usage:

```python
weight_t = precisions.dtype("dense0", "weight")
update_t = precisions.dtype("dense0", "update")
loss_t = precisions.dtype("loss", "value")
```
Missing fields return `None`, so the training loop falls back to floating point for them.
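A sketch of the lookup chain, following the order documented above (illustrative; the shipped method may handle edge cases differently):

```python
def dtype(self, layer_name: str, field: str, default=None):
    # 1. Exact layer/field entry (an explicit None is returned as-is).
    if layer_name in self and field in self[layer_name]:
        return self[layer_name][field]
    # 2. The __default__ fallback scope.
    if "__default__" in self and field in self["__default__"]:
        return self["__default__"][field]
    # 3. Caller-provided default, usually None.
    return default
```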
### `has(layer_name, field)`

```python
def has(self, layer_name: str, field: str) -> bool:
    return self.dtype(layer_name, field) is not None
```
Boolean convenience method.
Usage:

```python
if precisions.has("dense0", "update"):
    ...
```
### `layers()`

```python
def layers(self) -> list[str]:
    return [name for name in self.keys() if name != "__default__"]
```
Returns the non-default scopes.
Example:

```python
["input", "dense0", "loss"]
```
### `fields(layer_name)`

```python
def fields(self, layer_name: str) -> list[str]:
    return list(self.get(layer_name, {}).keys())
```
Useful for debugging:

```python
print(precisions.fields("dense0"))
```
### `validate_model(model, allow_missing=True)`

```python
def validate_model(self, model: tf.keras.Model, *, allow_missing: bool = True) -> None:
    ...
```
Checks that:

- no Keras layer is named `loss`,
- no Keras layer is named `input`,
- non-reserved precision entries match model layer names,
- optionally, every model layer has a precision entry.
Usage:

```python
precisions.validate_model(model.model)
```
For ablations, keep `allow_missing=True`, because staged experiments intentionally quantize only selected paths. Use `allow_missing=False` only when you want a fully specified hardware-style precision map.
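A sketch of the checks listed above (illustrative, not the exact shipped code):

```python
def validate_model(self, model: tf.keras.Model, *, allow_missing: bool = True) -> None:
    layer_names = {layer.name for layer in model.layers}
    clashes = layer_names & {"input", "loss"}
    if clashes:
        raise ValueError(f"model layers use reserved names: {sorted(clashes)}")
    unknown = set(self.layers()) - layer_names - {"input", "loss"}
    if unknown:
        raise ValueError(f"precision entries match no layer: {sorted(unknown)}")
    if not allow_missing:
        missing = layer_names - set(self.layers())
        if missing:
            raise ValueError(f"layers without a precision entry: {sorted(missing)}")
```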
### `describe()`

```python
def describe(self) -> str:
    ...
```
Returns a readable multi-line summary:

```python
print(precisions.describe())
```
Useful in notebooks so the exact precision map appears next to the plots.
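One plausible shape for the output; a sketch, assuming each stored dtype has a readable repr (the real formatting may differ):

```python
def describe(self) -> str:
    lines = ["PrecisionDict:"]
    for scope in self:
        lines.append(f"  {scope}:")
        for field, dt in self[scope].items():
            shown = dt if dt is not None else "float (unquantized)"
            lines.append(f"    {field}: {shown}")
    return "\n".join(lines)
```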
## `ensure_precision_dict(precision)`

```python
def ensure_precision_dict(precision):
    ...
```
Accepts:

- `None`,
- an existing `PrecisionDict`,
- a nested mapping.

Returns either `None` or a `PrecisionDict`.
This is used by `train_instrumented()` so callers can pass either:

```python
precision_dict=precisions
```

or:

```python
precision_dict={
    "dense0": {
        "weight": "ap_fixed<12,4,AP_RND,AP_SAT>",
    },
}
```
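The normalization is small; a sketch of the likely behavior, given the accepted inputs above:

```python
def ensure_precision_dict(precision):
    if precision is None or isinstance(precision, PrecisionDict):
        return precision             # already normalized, or intentionally absent
    return PrecisionDict(precision)  # nested mapping: parse and wrap
```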
## Extension Notes

Add new fields freely when the trainer supports them:

- `"momentum"`
- `"adam_m"`
- `"adam_v"`
- `"batchnorm_mean"`
- `"batchnorm_variance"`
The object is intentionally schema-light. Enforcement should live in the training/firmware path, not here, because different optimizers and layers require different fields.
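For example, a hypothetical Adam-state experiment could extend a scope like this (the `adam_m`/`adam_v` widths are illustrative, and only matter once the trainer actually looks them up):

```python
precisions = PrecisionDict({
    "dense0": {
        "weight": "ap_fixed<12,4,AP_RND,AP_SAT>",
        # Hypothetical optimizer-state fields; ignored until the training
        # loop queries them via precisions.dtype("dense0", "adam_m"), etc.
        "adam_m": "ap_fixed<18,6,AP_RND,AP_SAT>",
        "adam_v": "ap_fixed<18,2,AP_RND,AP_SAT>",
    },
})
```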