Dtypes Module Reference

Source: kappa/dtypes.py

This page documents the fixed-point dtype descriptors used by the quantized software ablations. These classes are intentionally small. They describe HLS-style numeric formats and provide NumPy quantization behavior.

TensorFlow fake quantization lives in kappa/quantization.py.

Constants

VALID_QMODES

VALID_QMODES = {
    "AP_RND",
    "AP_TRN",
    "AP_RND_CONV",
    "AP_RND_ZERO",
    "AP_RND_MIN_INF",
    "AP_RND_INF",
}

Supported rounding modes. These names mirror HLS ap_fixed conventions.

VALID_OMODES

VALID_OMODES = {
    "AP_WRAP",
    "AP_SAT",
    "AP_SAT_ZERO",
    "AP_SAT_SYM",
}

Supported overflow modes. The first quantization ablations should use AP_SAT because saturation is easier to interpret than wraparound.

Helper Functions

These helpers are internal but important if you need to extend dtype behavior.

_fraction_bits(WL, IWL)

def _fraction_bits(WL: int, IWL: int) -> int:
    return max(WL - IWL, 0)

Computes:

F = WL - IWL

If IWL >= WL, the type has no fractional bits.

_scale(F)

def _scale(F: int) -> float:
    return float(1 << F) if F > 0 else 1.0

Returns:

2^F

Used to move between real values and integer fixed-point storage.

_round_by_qmode(x_scaled, qmode)

def _round_by_qmode(x_scaled: np.ndarray, qmode: str) -> np.ndarray:
    ...

Applies the selected HLS-style rounding mode in the scaled integer domain.

Examples:

AP_TRN -> truncate toward zero
AP_RND -> round to nearest
AP_RND_CONV -> convergent (banker's) rounding, as in NumPy rint
AP_RND_ZERO -> round to nearest, ties toward zero

If a new rounding mode is needed, add it here and mirror it in quantization.py for TensorFlow tensors.

_clip_int(v, WL, signed)

def _clip_int(v: np.ndarray, WL: int, signed: bool) -> np.ndarray:
    ...

Clips integer-domain values to the representable integer range.

Signed:

[-2^(WL-1), 2^(WL-1) - 1]

Unsigned:

[0, 2^WL - 1]

_wrap_int(v, WL, signed)

def _wrap_int(v: np.ndarray, WL: int, signed: bool) -> np.ndarray:
    ...

Applies modulo wraparound in the integer domain. For signed types, values at or above 2^(WL-1), i.e. with the sign bit set, are reinterpreted as negative two's-complement values.

_quantize_np(...)

def _quantize_np(
    value,
    WL,
    IWL,
    QMODE,
    OMODE,
    *,
    signed,
    return_int=False,
):
    ...

Shared NumPy quantization backend. Steps:

  1. Convert input to float NumPy array.
  2. Scale by 2^F.
  3. Round according to QMODE.
  4. Overflow according to OMODE.
  5. Return either raw integer storage or dequantized real values.
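The five steps can be sketched end to end. This is a simplified stand-in for _quantize_np, hard-coded to AP_RND rounding and AP_SAT overflow; quantize_sketch is a hypothetical name, not the in-tree function:

```python
import numpy as np

def quantize_sketch(value, WL, IWL, *, signed=True, return_int=False):
    x = np.asarray(value, dtype=np.float64)        # 1. float NumPy array
    F = max(WL - IWL, 0)
    scale = float(1 << F) if F > 0 else 1.0
    x = x * scale                                  # 2. scale by 2^F
    x = np.floor(x + 0.5)                          # 3. AP_RND rounding
    if signed:
        lo, hi = -(1 << (WL - 1)), (1 << (WL - 1)) - 1
    else:
        lo, hi = 0, (1 << WL) - 1
    x = np.clip(x, lo, hi)                         # 4. AP_SAT overflow
    return x if return_int else x / scale          # 5. raw ints or real values

# ap_fixed<8,3>: 0.1 quantizes to 0.09375; 12.0 saturates to 3.96875.
quantize_sketch([0.1, 12.0], 8, 3)
```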

Usage through a dtype object:

t = dtypes.ap_fixed(8, 3, "AP_RND", "AP_SAT")
real_values = t([0.1, 0.2, 12.0])
raw_ints = t([0.1, 0.2, 12.0], return_int=True)

HLSDataType

@dataclass(frozen=True)
class HLSDataType(ABC):
    dtype: str = field(init=False)

Abstract base class for all dtype descriptors.

from_dtype(dtype, **kwargs)

@staticmethod
def from_dtype(dtype: Any, **kwargs) -> "HLSDataType":
    ...

Factory method. Accepts either an existing dtype object or an HLS-style string.

Examples:

from kappa.dtypes import HLSDataType

t0 = HLSDataType.from_dtype("ap_fixed<12,4,AP_RND,AP_SAT>")
t1 = HLSDataType.from_dtype("ap_ufixed<10,3>")
t2 = HLSDataType.from_dtype("ap_int<16>")

Used by PrecisionDict, so YAML/string configs can become dtype objects automatically.
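One plausible way to parse such strings into constructor arguments; parse_ap_string is a hypothetical helper for illustration, not the actual in-tree parser:

```python
import re

def parse_ap_string(s: str):
    # Hypothetical parser for strings like "ap_fixed<12,4,AP_RND,AP_SAT>".
    m = re.fullmatch(r"(ap_u?fixed|ap_u?int)<([^>]+)>", s.strip())
    if m is None:
        raise ValueError(f"not an HLS dtype string: {s!r}")
    name = m.group(1)
    parts = [p.strip() for p in m.group(2).split(",")]
    if name in ("ap_int", "ap_uint"):
        return name, {"WL": int(parts[0])}
    kwargs = {"WL": int(parts[0]), "IWL": int(parts[1])}
    if len(parts) > 2:
        kwargs["QMODE"] = parts[2]   # rounding mode is optional in the string
    if len(parts) > 3:
        kwargs["OMODE"] = parts[3]   # overflow mode is optional in the string
    return name, kwargs

parse_ap_string("ap_fixed<12,4,AP_RND,AP_SAT>")
```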

Abstract Methods

Every dtype must implement:

value_range()
double_precision()
signed()
unsigned()
__call__(value, return_int=False)

These methods let the rest of the package query rails, construct related dtypes, and quantize values without caring about the concrete dtype class.

ap_fixed

@dataclass(frozen=True)
class ap_fixed(HLSDataType):
    WL: int = 16
    IWL: int = 6
    QMODE: str = "AP_TRN"
    OMODE: str = "AP_WRAP"
    SAT_BITS: int = 0

Signed fixed-point type.

__post_init__()

Validates:

  • IWL <= WL,
  • QMODE is supported,
  • OMODE is supported.

fractional_bits

@property
def fractional_bits(self) -> int:
    return self.WL - self.IWL

quantum

@property
def quantum(self) -> float:
    return 2.0 ** (-self.fractional_bits)

This is the minimum representable spacing.

value_range()

def value_range(self) -> Tuple[float, float]:
    ...

Returns:

[-2^(IWL-1), 2^(IWL-1) - 2^(-F)]
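For example, ap_fixed(8, 3) has F = 5, so the rails work out to:

```python
IWL, F = 3, 5                       # ap_fixed<8,3>
lo = -(2.0 ** (IWL - 1))            # -4.0
hi = 2.0 ** (IWL - 1) - 2.0 ** -F   # 4 - 1/32 = 3.96875
print(lo, hi)   # -4.0 3.96875
```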

double_precision()

Returns a wider signed fixed-point dtype with doubled word length and integer length.
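By this description, the default ap_fixed(16, 6) widens to ap_fixed(32, 12). Both the integer and fractional bit counts double, so every value representable in the original type stays exactly representable. A quick check of the arithmetic, not the actual method body:

```python
WL, IWL = 16, 6                # default ap_fixed fields
WL2, IWL2 = 2 * WL, 2 * IWL    # what double_precision() is described to return
print(WL2, IWL2)               # 32 12
print(WL - IWL, WL2 - IWL2)    # fractional bits grow from 10 to 20
```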

signed() and unsigned()

signed() returns itself. unsigned() returns the corresponding ap_ufixed.

__call__(value, return_int=False)

Quantizes NumPy-compatible values.

t = dtypes.ap_fixed(12, 4, "AP_RND", "AP_SAT")
xq = t([0.1, 0.2, 32.0])

ap_ufixed

Unsigned fixed-point type. Same fields as ap_fixed, but the range is:

[0, 2^IWL - 2^(-F)]

Use this for nonnegative values such as ReLU activations when the sign bit is not needed.

Example:

act_t = dtypes.ap_ufixed(10, 4, "AP_RND", "AP_SAT")

ap_int

@dataclass(frozen=True)
class ap_int(HLSDataType):
    WL: int = 16

Signed integer type. Range:

[-2^(WL-1), 2^(WL-1) - 1]

Use this for integer counters or raw integer-domain tests.

ap_uint

Unsigned integer type. Range:

[0, 2^WL - 1]

Extension Checklist

When extending this file:

  1. Add NumPy behavior in _round_by_qmode() or _quantize_np().
  2. Add matching TensorFlow behavior in quantization.py.
  3. Add value_range() if a new dtype class is introduced.
  4. Keep dtype objects immutable with frozen=True.
  5. Avoid putting experiment-specific policy in this file. This module should describe numeric formats only.