Generating INT8 Calibration Table


According to the documentation, we need a calibration table to export the .onnx file as an INT8-optimized .plan file:

export model.onnx engine.plan INT8CalibrationTable
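For background, TensorRT generates that calibration table by running an INT8 calibrator over batches of representative images while it builds the engine. Below is a minimal Python sketch of such a calibrator, only to illustrate where the table comes from; the load_and_preprocess helper and all file names are hypothetical, and odtk ships its own implementation, so this is not the project's actual code.

# Sketch of a TensorRT INT8 calibrator (assumed helper and paths).
import os
import numpy as np
import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
import pycuda.driver as cuda
import tensorrt as trt

class ImageCalibrator(trt.IInt8EntropyCalibrator2):
    def __init__(self, image_dir, batch_size, input_shape, cache_file):
        super().__init__()
        self.batch_size = batch_size
        self.cache_file = cache_file
        self.files = sorted(os.path.join(image_dir, f) for f in os.listdir(image_dir))
        self.index = 0
        # Device buffer large enough for one batch of CHW float32 images.
        self.device_input = cuda.mem_alloc(
            batch_size * int(np.prod(input_shape)) * np.dtype(np.float32).itemsize)

    def get_batch_size(self):
        return self.batch_size

    def get_batch(self, names):
        if self.index + self.batch_size > len(self.files):
            return None  # no batches left; calibration is finished
        # Hypothetical helper: decode, resize, and normalize a list of images
        # into a contiguous (N, C, H, W) float32 array.
        batch = load_and_preprocess(self.files[self.index:self.index + self.batch_size])
        cuda.memcpy_htod(self.device_input, np.ascontiguousarray(batch))
        self.index += self.batch_size
        return [int(self.device_input)]

    def read_calibration_cache(self):
        # Reuse a previously written table so rebuilds skip calibration.
        if os.path.exists(self.cache_file):
            with open(self.cache_file, "rb") as f:
                return f.read()

    def write_calibration_cache(self, cache):
        # TensorRT hands back the finished table as bytes; persist it.
        with open(self.cache_file, "wb") as f:
            f.write(cache)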

I’m assuming that in order to generate the calibration table, I need to run the export on the host machine.

My machine has an RTX 3090, and the 20.03 release is running in a Docker container. However, every time I try to run the export on the host machine, the program aborts. I’ve tried varying --size and --calibration-batches, but still no luck.

Here is an example of the output I usually get:

odtk export /models/retinanet-kitti-rn50fpn_fp.pth /models/retinanet-kitti-rn50fpn_8int.plan --int8 --calibration-images /dataset/Kitti-Coco/validate/data --calibration-batches 12  --size 640
Loading model from retinanet-kitti-rn50fpn_fp.pth...
     model: RetinaNet
  backbone: ResNet50FPN
   classes: 8, anchors: 9
Exporting to ONNX...
Building INT8 core model...
Building accelerated plugins...
Applying optimizations and building TRT CUDA engine...
Half2 support requested on hardware without native FP16 support, performance will be negatively affected.
Int8 support requested on hardware without native Int8 support, performance will be negatively affected.
Assertion failed: Unsupported SM.
../rtSafe/cuda/caskUtils.cpp:80
Aborting...

../rtSafe/cuda/caskUtils.cpp (80) - Assertion Error in trtSmToCask: 0 (Unsupported SM.)
Writing to /models/retinanet-kitti-rn50fpn_8int.plan...
Segmentation fault (core dumped)
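The "Unsupported SM" assertion usually means the TensorRT build predates the GPU architecture: an RTX 3090 is Ampere (SM 8.6), which the TensorRT release shipped in the 20.03 container does not know about. A quick sanity check, assuming pycuda and tensorrt are importable in the failing environment:

# Sketch: compare the GPU architecture with the TensorRT build in use.
import pycuda.autoinit  # noqa: F401  (initializes a CUDA context)
import pycuda.driver as cuda
import tensorrt as trt

major, minor = cuda.Device(0).compute_capability()
print(f"GPU compute capability (SM): {major}.{minor}")  # RTX 3090 -> 8.6
print(f"TensorRT version: {trt.__version__}")           # must support that SM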

Any help would be greatly appreciated.

Issue Analytics

  • State: closed
  • Created: 2 years ago
  • Comments: 9

Top GitHub Comments

1 reaction
yashnv commented, Apr 29, 2021

Can you pull the master branch again, and pip install --no-cache-dir -e retinanet-examples/ and try again?

0 reactions
daynauth commented, Apr 29, 2021

That worked. Thank you so much.
