Apart from training/testing scripts, we provide lots of useful tools under the `tools/` directory.
## Get the FLOPs and params (experimental)
We provide a script adapted from flops-counter.pytorch to compute the FLOPs and params of a given model.
```shell
python tools/get_flops.py ${CONFIG_FILE} [--shape ${INPUT_SHAPE}]
```
You will get the result like this.
```none
==============================
Input shape: (3, 2048, 1024)
Flops: 1429.68 GMac
Params: 48.98 M
==============================
```
**Note**: This tool is still experimental and we do not guarantee that the number is correct. You may use the result for simple comparisons, but double-check it before you adopt it in technical reports or papers.

(1) FLOPs are related to the input shape while parameters are not. The default input shape is (1, 3, 2048, 1024).
(2) Some operators are not counted in FLOPs, such as GN and custom operators.
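For example, with the Cityscapes PSPNet config used later in this document (the config path and shape here are only illustrative; substitute your own):

```shell
python tools/get_flops.py configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py --shape 2048 1024
```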
## Publish a model
Before you upload a model to AWS, you may want to (1) convert model weights to CPU tensors, (2) delete the optimizer states and (3) compute the hash of the checkpoint file and append the hash id to the filename.
```shell
python tools/publish_model.py ${INPUT_FILENAME} ${OUTPUT_FILENAME}
```
E.g.,
```shell
python tools/publish_model.py work_dirs/pspnet/latest.pth psp_r50_hszhao_200ep.pth
```
The final output filename will be `psp_r50_hszhao_200ep-{hash id}.pth`.
## Convert to ONNX (experimental)
We provide a script to convert a model to the ONNX format. The converted model can be visualized by tools like Netron. Besides, we also support comparing the output results between the PyTorch and ONNX models.
```shell
python tools/pytorch2onnx.py \
    ${CONFIG_FILE} \
    --checkpoint ${CHECKPOINT_FILE} \
    --output-file ${ONNX_FILE} \
    --input-img ${INPUT_IMG} \
    --shape ${INPUT_SHAPE} \
    --rescale-shape ${RESCALE_SHAPE} \
    --show \
    --verify \
    --dynamic-export \
    --cfg-options \
      model.test_cfg.mode="whole"
```
Description of arguments:
- `config`: The path of a model config file.
- `--checkpoint`: The path of a model checkpoint file.
- `--output-file`: The path of the output ONNX model. If not specified, it will be set to `tmp.onnx`.
- `--input-img`: The path of an input image for conversion and visualization.
- `--shape`: The height and width of the input tensor to the model. If not specified, it will be set to the `img_scale` of the test pipeline.
- `--rescale-shape`: The rescale shape of the output; set this value to avoid OOM. It only works in `slide` mode.
- `--show`: Determines whether to print the architecture of the exported model. If not specified, it will be set to `False`.
- `--verify`: Determines whether to verify the correctness of an exported model. If not specified, it will be set to `False`.
- `--dynamic-export`: Determines whether to export the ONNX model with dynamic input and output shapes. If not specified, it will be set to `False`.
- `--cfg-options`: Update config options.
**Note**: This tool is still experimental. Some customized operators are not supported for now.
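For example, the Cityscapes PSPNet model referenced later in this document could be exported and verified with a command like the following (the checkpoint path is taken from the TorchScript example below and the ONNX filename is arbitrary; adjust both to your own files):

```shell
python tools/pytorch2onnx.py configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py \
    --checkpoint checkpoints/pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pth \
    --output-file pspnet_r50-d8_512x1024_40k_cityscapes.onnx \
    --shape 512 1024 \
    --verify \
    --cfg-options model.test_cfg.mode="whole"
```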
## Evaluate ONNX model with ONNXRuntime
We provide `tools/ort_test.py` to evaluate ONNX models with the ONNXRuntime backend.
### Prerequisite
- Install onnx and onnxruntime-gpu

  ```shell
  pip install onnx onnxruntime-gpu
  ```
### Usage
```shell
python tools/ort_test.py \
    ${CONFIG_FILE} \
    ${ONNX_FILE} \
    --out ${OUTPUT_FILE} \
    --eval ${EVALUATION_METRICS} \
    --show \
    --show-dir ${SHOW_DIRECTORY} \
    --options ${CFG_OPTIONS} \
    --eval-options ${EVALUATION_OPTIONS} \
    --opacity ${OPACITY}
```
Description of all arguments:
- `config`: The path of a model config file.
- `model`: The path of an ONNX model file.
- `--out`: The path of the output result file in pickle format.
- `--format-only`: Format the output results without performing evaluation. It is useful when you want to format the result to a specific format and submit it to the test server. If not specified, it will be set to `False`. Note that this argument is mutually exclusive with `--eval`.
- `--eval`: Evaluation metrics, which depend on the dataset, e.g., "mIoU" for generic datasets, and "cityscapes" for Cityscapes. Note that this argument is mutually exclusive with `--format-only`.
- `--show`: Show results flag.
- `--show-dir`: Directory where painted images will be saved.
- `--options`: Override some settings in the used config file; the key-value pairs in `xxx=yyy` format will be merged into the config file.
- `--eval-options`: Custom options for evaluation; the key-value pairs in `xxx=yyy` format will be passed as kwargs to the `dataset.evaluate()` function.
- `--opacity`: Opacity of the painted segmentation map, in the (0, 1] range.
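For instance, assuming the ONNX file produced by the conversion example above, the model could be evaluated on Cityscapes with mIoU like this (paths are illustrative):

```shell
python tools/ort_test.py configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py \
    pspnet_r50-d8_512x1024_40k_cityscapes.onnx \
    --eval mIoU
```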
### Results and Models
| Model | Config | Dataset | Metric | PyTorch | ONNXRuntime |
| --- | --- | --- | --- | --- | --- |
| FCN | fcn_r50-d8_512x1024_40k_cityscapes.py | cityscapes | mIoU | 72.2 | 72.2 |
| PSPNet | pspnet_r50-d8_769x769_40k_cityscapes.py | cityscapes | mIoU | 78.2 | 78.1 |
| deeplabv3 | deeplabv3_r50-d8_769x769_40k_cityscapes.py | cityscapes | mIoU | 78.5 | 78.3 |
| deeplabv3+ | deeplabv3plus_r50-d8_769x769_40k_cityscapes.py | cityscapes | mIoU | 78.9 | 78.7 |
## Convert to TorchScript (experimental)
We also provide a script to convert a model to the TorchScript format. You can use the PyTorch C++ API LibTorch to run inference with the trained model. The converted model can be visualized by tools like Netron. Besides, we also support comparing the output results between the PyTorch and TorchScript models.
```shell
python tools/pytorch2torchscript.py \
    ${CONFIG_FILE} \
    --checkpoint ${CHECKPOINT_FILE} \
    --output-file ${OUTPUT_FILE} \
    --shape ${INPUT_SHAPE} \
    --verify \
    --show
```
Description of arguments:
- `config`: The path of a PyTorch model config file.
- `--checkpoint`: The path of a PyTorch model checkpoint file.
- `--output-file`: The path of the output TorchScript model. If not specified, it will be set to `tmp.pt`.
- `--input-img`: The path of an input image for conversion and visualization.
- `--shape`: The height and width of the input tensor to the model. If not specified, it will be set to `512 512`.
- `--show`: Determines whether to print the traced graph of the exported model. If not specified, it will be set to `False`.
- `--verify`: Determines whether to verify the correctness of an exported model. If not specified, it will be set to `False`.
**Note**: It only supports PyTorch>=1.8.0 for now.

**Note**: This tool is still experimental. Some customized operators are not supported for now.
Examples:
- Convert the Cityscapes PSPNet PyTorch model.

  ```shell
  python tools/pytorch2torchscript.py configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py \
      --checkpoint checkpoints/pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pth \
      --output-file checkpoints/pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pt \
      --shape 512 1024
  ```
## Miscellaneous

### Print the entire config
`tools/print_config.py` prints the whole config verbatim, expanding all its imports.
```shell
python tools/print_config.py \
    ${CONFIG} \
    --graph \
    --options ${OPTIONS [OPTIONS...]}
```
Description of arguments:
- `config`: The path of a PyTorch model config file.
- `--graph`: Determines whether to print the model's graph.
- `--options`: Custom options to override some settings in the used config file.
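For example, to print the fully expanded config used in the earlier examples (the `--graph` flag can be appended to also print the model graph):

```shell
python tools/print_config.py configs/pspnet/pspnet_r50-d8_512x1024_40k_cityscapes.py
```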
### Plot training logs
`tools/analyze_logs.py` plots loss/mIoU curves given a training log file. Run `pip install seaborn` first to install the dependency.
```shell
python tools/analyze_logs.py xxx.log.json [--keys ${KEYS}] [--legend ${LEGEND}] [--backend ${BACKEND}] [--style ${STYLE}] [--out ${OUT_FILE}]
```
Examples:
- Plot the mIoU, mAcc, aAcc metrics.

  ```shell
  python tools/analyze_logs.py log.json --keys mIoU mAcc aAcc --legend mIoU mAcc aAcc
  ```
- Plot the loss metric.

  ```shell
  python tools/analyze_logs.py log.json --keys loss --legend loss
  ```
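- Save the loss plot to a file instead of displaying it, using the `--out` option from the usage line above (the output filename here is arbitrary).

  ```shell
  python tools/analyze_logs.py log.json --keys loss --legend loss --out losses.pdf
  ```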