Larq Zoo Pretrained Models
Larq Zoo provides reference implementations of deep neural networks with extremely low precision weights and activations that are made available alongside pre-trained weights. These models can be used for prediction, feature extraction, and fine-tuning.
The code for all models, including a reproducible training pipeline, is available at larq/zoo.
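For example, making a prediction with QuickNet looks roughly like this (a minimal sketch following the Keras-applications-style API; `elephant.jpg` is a placeholder image path):

```python
import numpy as np
import tensorflow as tf
import larq_zoo as lqz

# Load and preprocess a single image (placeholder path).
img = tf.keras.preprocessing.image.load_img("elephant.jpg", target_size=(224, 224))
x = tf.keras.preprocessing.image.img_to_array(img)
x = lqz.preprocess_input(x)
x = np.expand_dims(x, axis=0)

# Instantiating the model downloads the pre-trained ImageNet weights.
model = lqz.sota.QuickNet(weights="imagenet")
preds = model.predict(x)
print(lqz.decode_predictions(preds, top=5))
```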
Larq Zoo consists of a `literature` and a `sota` submodule.
The `literature` submodule contains replications from research papers (all current models). These models are intended to provide a stable reference for ideas presented in specific papers. The model implementations will be maintained, but we will not attempt to improve these models over time by applying new training strategies or architectural innovations.
The `sota` submodule contains top models for various scenarios. These models are intended to be used in a Software 2.0-like fashion. We will do our best to continuously improve the models, which means that their weights and even details of their architectures may change from release to release.
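Both submodules expose models through the same Keras-applications-style interface. As a small sketch (assuming the usual `input_shape`, `weights`, and `include_top` keyword arguments), a `literature` model can be used as a feature extractor:

```python
import larq_zoo as lqz

# Drop the ImageNet classification head to obtain a feature extractor.
base = lqz.literature.BinaryResNetE18(
    input_shape=(224, 224, 3),
    weights="imagenet",
    include_top=False,
)
base.summary()
```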
If you have developed or reimplemented a Binarized or other Extremely Quantized Neural Network and want to share it with the community so that future papers can build on top of your work, please add it to Larq Zoo, or get in touch with us if you need any help.
Larq Zoo is part of a family of libraries for BNN development; you can also check out Larq for building and training BNNs and Larq Compute Engine for optimized deployment.
Available models
The following models are trained on the ImageNet dataset. Top-1 and Top-5 accuracy refer to the model's performance on the ImageNet validation dataset; model size refers to the memory footprint after quantization of the weights. Models were benchmarked using Larq Compute Engine on a Pixel 1 phone (2016), single-threaded.
The model definitions and training loops are available in the Larq Zoo repository.
The `sota` submodule contains these models:
| Model | Top-1 Accuracy | Top-5 Accuracy | Model size | Latency (Pixel 1, single thread) |
| --- | --- | --- | --- | --- |
| QuickNet | 58.6 % | 81.0 % | 3.18 MB | 18.4 ms |
| QuickNetLarge | 62.7 % | 84.0 % | 4.49 MB | 27.6 ms |
| QuickNetXL | 67.0 % | 87.3 % | 6.22 MB | 47.9 ms |
The `literature` submodule contains the following models:
| Model | Top-1 Accuracy | Top-5 Accuracy | Model size | Latency (Pixel 1, single thread) |
| --- | --- | --- | --- | --- |
| RealToBinaryNet | 65.0 % | 85.7 % | 5.13 MB | 51.3 ms |
| BinaryDenseNet45 | 64.6 % | 85.2 % | 7.35 MB | 138.5 ms |
| BinaryDenseNet37Dilated | 64.3 % | 85.2 % | 5.13 MB | 182.9 ms |
| BinaryDenseNet37 | 62.9 % | 84.2 % | 5.13 MB | 102.2 ms |
| MeliusNet22 | 62.4 % | 83.9 % | 3.88 MB | 117.7 ms |
| BinaryDenseNet28 | 60.9 % | 82.8 % | 4.04 MB | 90.0 ms |
| BinaryResNetE18 | 58.3 % | 80.8 % | 4.00 MB | 43.6 ms |
| Bi-Real Net | 57.5 % | 79.8 % | 4.00 MB | 43.4 ms |
| DoReFaNet | 53.4 % | 76.5 % | 22.80 MB | Unsupported |
| XNOR-Net | 45.0 % | 69.2 % | 22.77 MB | 34.9 ms |
| Binary AlexNet | 36.3 % | 61.5 % | 7.45 MB | 44.3 ms |
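The latency numbers above were measured on converted models. As a hedged sketch of that deployment path (assuming Larq Compute Engine's `convert_keras_model` converter), a model can be turned into an LCE-compatible TFLite flatbuffer like so:

```python
import larq_compute_engine as lce
import larq_zoo as lqz

model = lqz.sota.QuickNet(weights="imagenet")

# Convert the Keras model into a TFLite flatbuffer containing the
# binary ops that the LCE interpreter can execute on device.
flatbuffer_bytes = lce.convert_keras_model(model)
with open("quicknet.tflite", "wb") as f:
    f.write(flatbuffer_bytes)
```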
Installation
Larq Zoo is not included in Larq by default. To start using it, you can install it with Python's pip package manager:

```shell
pip install larq-zoo
```
Weights can be downloaded automatically when instantiating a model. They are stored at `~/.larq/models/`.
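Since instantiation triggers the weight download, a typical fine-tuning setup looks roughly like the following sketch (the frozen backbone and the 10-class head are illustrative assumptions, not a prescribed recipe):

```python
import tensorflow as tf
import larq_zoo as lqz

# Instantiating the model downloads and caches the ImageNet weights.
base = lqz.sota.QuickNet(
    input_shape=(224, 224, 3),
    weights="imagenet",
    include_top=False,
)
base.trainable = False  # freeze the pre-trained backbone

# Hypothetical 10-class head for transfer learning.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
```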
Training Models from Scratch
Larq Zoo ships with a command-line interface powered by `zookeeper`, allowing you to reproduce the entire training process. If you want to improve an existing model or implement your own, we recommend installing Larq Zoo in development mode.
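A development-mode install is a standard editable pip install of the cloned repository (a sketch; see the repository's contribution guidelines for the exact steps):

```shell
git clone https://github.com/larq/zoo.git
cd zoo
pip install -e .
```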
For example, to reproduce the training of Binary AlexNet, run:

```shell
lqz TrainBinaryAlexNet dataset=ImageNet
```
To experiment with different hyperparameters, you can either edit the task for this model or override them from the command line, e.g.:

```shell
lqz TrainBinaryAlexNet dataset=ImageNet epochs=100 batch_size=64
```
For all available commands and options, run `lqz --help`, or check out the documentation of `zookeeper` if you want to implement your own model for Larq Zoo.