larq.context
Context managers that configure global behaviour of Larq.
get_training_metrics
larq.context.get_training_metrics()
Retrieves a live reference to the set of training metrics in the current scope. Updating and clearing training metrics via larq.context.metrics_scope is preferred, but get_training_metrics can be used to access them directly.
Example
get_training_metrics().clear()
get_training_metrics().add("flip_ratio")
Returns
The set of training metrics active in the current scope.
metrics_scope
larq.context.metrics_scope(metrics=[])
A context manager to set the training metrics to be used in quantizers.
Example
with larq.context.metrics_scope(["flip_ratio"]):
    model = tf.keras.models.Sequential(
        [larq.layers.QuantDense(3, kernel_quantizer="ste_sign", input_shape=(32,))]
    )
    model.compile(loss="mse", optimizer="sgd")
Arguments
- metrics: Iterable of metrics to add to quantizers defined inside this context. Currently only the flip_ratio metric is available.
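A scope of this kind can be modelled as a module-level set that the context manager populates on entry and restores on exit, which is why get_training_metrics hands back a live reference. The following is a minimal sketch of that pattern, not Larq's actual implementation; the internal variable _training_metrics is a stand-in name.

```python
import contextlib

# Stand-in for Larq's internal state: the metrics active in the current scope.
_training_metrics = set()


def get_training_metrics():
    """Return a live reference to the current metric set."""
    return _training_metrics


@contextlib.contextmanager
def metrics_scope(metrics=[]):
    """Add `metrics` for the duration of the `with` block, then restore."""
    backup = set(_training_metrics)
    _training_metrics.update(metrics)
    try:
        yield
    finally:
        # Restore in place so live references stay valid after the scope exits.
        _training_metrics.clear()
        _training_metrics.update(backup)


with metrics_scope(["flip_ratio"]):
    inside = set(get_training_metrics())   # contains "flip_ratio"
outside = set(get_training_metrics())      # empty again outside the scope
```

Because the set is mutated in place rather than rebound, any reference obtained via get_training_metrics inside the scope continues to reflect the current state after the scope exits.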
quantized_scope
larq.context.quantized_scope(quantize)
A context manager to define the behaviour of QuantizedVariable.
Example
model.save("full_precision_model.h5") # save full precision latent weights
fp_weights = model.get_weights() # get latent weights
with larq.context.quantized_scope(True):
    model.save("binary_model.h5")  # save binarized weights
    weights = model.get_weights()  # get binarized weights
Arguments
- quantize: If should_quantize is True, QuantizedVariables will return their quantized values in the forward pass. If False, they will act as latent variables.
should_quantize
larq.context.should_quantize()
Returns the current quantized scope.
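The quantized scope follows the same pattern with a single boolean flag that should_quantize reads back. Here is a minimal sketch of that behaviour, assuming the semantics described above; it is not Larq's actual code, and the internal flag _quantized is a stand-in name.

```python
import contextlib

_quantized = False  # default scope: variables expose their latent values


def should_quantize():
    """Return the current quantized scope."""
    return _quantized


@contextlib.contextmanager
def quantized_scope(quantize):
    """Set the quantized flag inside the `with` block, restoring it on exit."""
    global _quantized
    previous = _quantized
    _quantized = quantize
    try:
        yield
    finally:
        _quantized = previous  # scopes nest and unwind correctly


assert not should_quantize()
with quantized_scope(True):
    assert should_quantize()        # quantized values in the forward pass
    with quantized_scope(False):    # nested scopes override, then restore
        assert not should_quantize()
    assert should_quantize()
assert not should_quantize()
```

In this model, a QuantizedVariable would simply consult should_quantize at read time to decide whether to return its quantized or latent value.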