TAO v5.5.0

MLRecogNet with TAO Deploy

To generate an optimized TensorRT engine, the MLRecogNet .onnx file (generated with tao export) is taken as input to tao-deploy. Currently, MLRecogNet supports the FP32, FP16, and INT8 data types.

For more information about training an MLRecogNet model, see the MLRecogNet training documentation.

The following is a sample spec, $TRT_GEN_SPEC, for generating a TensorRT engine from an exported MLRecogNet ONNX model.

trt_config

The trt_config parameter provides options related to TensorRT generation.

results_dir: /path/to/results/dir
dataset:
  val_dataset:
    reference: /path/to/reference/set
    query: /path/to/query/set
  pixel_mean: [0.485, 0.456, 0.406]
  pixel_std: [0.226, 0.226, 0.226]
model:
  input_channel: 3
  input_width: 224
  input_height: 224
gen_trt_engine:
  gpu_id: 0
  onnx_file: /path/to/exported/onnx/file
  trt_engine: /path/to/trt/engine/to/generate
  tensorrt:
    data_type: int8
    workspace_size: 1024
    min_batch_size: 1
    opt_batch_size: 10
    max_batch_size: 10
    calibration:
      cal_cache_file: /path/to/calibration/cache/file/to/generate
      cal_batch_size: 16
      cal_batches: 100
      cal_image_dir:
        - /path/to/calibration/image/folder

Parameter      | Data Type    | Default | Description                                              | Supported Values
data_type      | string       | FP32    | The precision to be used for the TensorRT engine         | FP32/FP16/INT8
workspace_size | unsigned int | 1024    | The maximum workspace size (in MiB) for the TensorRT engine | >1024
min_batch_size | unsigned int | 1       | The minimum batch size of the optimization profile shape | >0
opt_batch_size | unsigned int | 1       | The optimal batch size of the optimization profile shape | >0
max_batch_size | unsigned int | 1       | The maximum batch size of the optimization profile shape | >0
calibration    | dict config  | –       | The configuration for INT8 calibration                   | –

Calibration Config

Parameter      | Data Type       | Default | Description                                              | Supported Values
cal_cache_file | string          | –       | The path to the calibration cache file. If there is no calibration cache file at this path, one is generated from the other calibration config parameters. | –
cal_batch_size | unsigned int    | 1       | The batch size of the calibration dataset                | >0
cal_batches    | unsigned int    | 1       | The number of batches used for calibration. A total of cal_batches x cal_batch_size calibration images are used. | >0
cal_image_dir  | list of strings | –       | The directories containing the calibration images        | –
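
The pixel_mean and pixel_std values in the spec above follow the common ImageNet-style normalization applied to each 224x224 RGB input. The following is a minimal sketch of that preprocessing, not the exact TAO pipeline: it assumes pixels are scaled to [0, 1] before per-channel normalization, and that the layout is channel-first to match the engine's (N, 3, 224, 224) input.

import numpy as np
from PIL import Image

# Values taken from the spec above; assumed to apply after scaling pixels to [0, 1]
PIXEL_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
PIXEL_STD = np.array([0.226, 0.226, 0.226], dtype=np.float32)

def preprocess(path):
    """Resize to 224x224, normalize, and return a (1, 3, 224, 224) float32 tensor."""
    img = Image.open(path).convert("RGB").resize((224, 224))
    arr = np.asarray(img, dtype=np.float32) / 255.0  # HWC layout, values in [0, 1]
    arr = (arr - PIXEL_MEAN) / PIXEL_STD             # per-channel normalization
    return arr.transpose(2, 0, 1)[None]              # NCHW batch of one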

Use the following command to run MLRecogNet engine generation:

tao deploy ml_recog gen_trt_engine -e /path/to/spec.yaml \
    gen_trt_engine.onnx_file=/path/to/onnx/file \
    gen_trt_engine.trt_engine=/path/to/engine/file \
    gen_trt_engine.tensorrt.data_type=<data_type>

Required Arguments

  • -e, --experiment_spec: The experiment spec file to set up TensorRT engine generation. This should be the same as the export spec file.

  • gen_trt_engine.onnx_file: The .onnx model to be converted.

  • gen_trt_engine.trt_engine: The path where the generated engine will be stored.

  • gen_trt_engine.tensorrt.data_type: MLRecogNet supports FP32, FP16, and INT8 TensorRT engine generation. When generating an INT8 engine, you must provide a calibration dataset or a calibration cache file (an INT8 example follows the FP16 sample below).

Sample Usage

The following is an example of using the gen_trt_engine command to generate an FP16 TensorRT engine:

tao deploy ml_recog gen_trt_engine -e $TRT_GEN_SPEC \
    gen_trt_engine.onnx_file=$ONNX_FILE \
    gen_trt_engine.trt_engine=$ENGINE_FILE \
    gen_trt_engine.tensorrt.data_type=FP16
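
Similarly, an INT8 engine can be built from the same spec by overriding only the data type. As noted above, this assumes the calibration block of $TRT_GEN_SPEC points at a calibration image folder or an existing calibration cache:

tao deploy ml_recog gen_trt_engine -e $TRT_GEN_SPEC \
    gen_trt_engine.onnx_file=$ONNX_FILE \
    gen_trt_engine.trt_engine=$ENGINE_FILE \
    gen_trt_engine.tensorrt.data_type=int8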

The following is an example of the output $RESULTS_DIR/status.json:

{"date": "6/22/2023", "time": "18:17:11", "status": "STARTED", "verbosity": "INFO", "message": "Starting ml_recog gen_trt_engine."} {"date": "6/22/2023", "time": "18:17:30", "status": "SUCCESS", "verbosity": "INFO", "message": "Gen_trt_engine finished successfully."}

The following is a sample output log:

Starting ml_recog gen_trt_engine.
[06/22/2023-18:17:12] [TRT] [I] [MemUsageChange] Init CUDA: CPU +318, GPU +0, now: CPU 356, GPU 1003 (MiB)
[06/22/2023-18:17:14] [TRT] [I] [MemUsageChange] Init builder kernel library: CPU +443, GPU +116, now: CPU 853, GPU 1119 (MiB)
[06/22/2023-18:17:14] [TRT] [W] CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvda.net.cn/cuda/cuda-c-programming-guide/index.html#env-vars
Parsing ONNX model
[06/22/2023-18:17:14] [TRT] [W] The NetworkDefinitionCreationFlag::kEXPLICIT_PRECISION flag has been deprecated and has no effect. Please do not use this flag when creating the network.
[06/22/2023-18:17:15] [TRT] [W] onnx2trt_utils.cpp:377: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
Network Description
Input 'input' with shape (-1, 3, 224, 224) and dtype DataType.FLOAT
Output 'fc_pred' with shape (-1, 256) and dtype DataType.FLOAT
dynamic batch size handling
TensorRT engine build configurations:
  OptimizationProfile:
    "input": (1, 3, 224, 224), (10, 3, 224, 224), (10, 3, 224, 224)
  BuilderFlag.TF32
  Note: max representabile value is 2,147,483,648 bytes or 2GB.
  MemoryPoolType.WORKSPACE = 1073741824 bytes
  MemoryPoolType.DLA_MANAGED_SRAM = 0 bytes
  MemoryPoolType.DLA_LOCAL_DRAM = 1073741824 bytes
  MemoryPoolType.DLA_GLOBAL_DRAM = 536870912 bytes
  Tactic Sources = 31
[06/22/2023-18:17:17] [TRT] [I] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +854, GPU +362, now: CPU 1800, GPU 1481 (MiB)
[06/22/2023-18:17:17] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +126, GPU +58, now: CPU 1926, GPU 1539 (MiB)
[06/22/2023-18:17:17] [TRT] [I] Local timing cache in use. Profiling results in this builder pass will not be stored.
[06/22/2023-18:17:22] [TRT] [I] Some tactics do not have sufficient workspace memory to run. Increasing workspace size will enable more tactics, please check verbose output for requested sizes.
[06/22/2023-18:17:30] [TRT] [I] Total Activation Memory: 1565556736
[06/22/2023-18:17:30] [TRT] [I] Detected 1 inputs and 1 output network tensors.
[06/22/2023-18:17:30] [TRT] [I] Total Host Persistent Memory: 132192
[06/22/2023-18:17:30] [TRT] [I] Total Device Persistent Memory: 140288
[06/22/2023-18:17:30] [TRT] [I] Total Scratch Memory: 134217728
[06/22/2023-18:17:30] [TRT] [I] [MemUsageStats] Peak memory usage of TRT CPU/GPU memory allocators: CPU 9 MiB, GPU 658 MiB
[06/22/2023-18:17:30] [TRT] [I] [BlockAssignment] Started assigning block shifts. This will take 91 steps to complete.
[06/22/2023-18:17:30] [TRT] [I] [BlockAssignment] Algorithm ShiftNTopDown took 1.66392ms to assign 5 blocks to 91 nodes requiring 184394240 bytes.
[06/22/2023-18:17:30] [TRT] [I] Total Activation Memory: 184394240
[06/22/2023-18:17:30] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +0, GPU +10, now: CPU 2491, GPU 1889 (MiB)
[06/22/2023-18:17:30] [TRT] [I] [MemUsageChange] TensorRT-managed allocation in building engine: CPU +0, GPU +101, now: CPU 0, GPU 101 (MiB)
Export finished successfully.
Gen_trt_engine finished successfully.

Evaluation uses the same spec file as TAO evaluation. The following is a sample spec file, $EVAL_SPEC:

results_dir: /path/to/output_dir
evaluate:
  trt_engine: /path/to/generated/trt_engine
  batch_size: 8
  topk: 5
dataset:
  val_dataset:
    reference: /path/to/reference/set
    query: /path/to/query/set

Use the following command to run MLRecogNet engine evaluation:

tao deploy ml_recog evaluate -e /path/to/spec.yaml \
    evaluate.trt_engine=/path/to/engine/file \
    results_dir=/path/to/outputs

Required Arguments

  • -e, --experiment_spec: The experiment spec file for evaluation. This should be the same as the tao evaluate spec file.

  • evaluate.trt_engine: The engine file to run evaluation on.

  • results_dir: The directory where the evaluation results will be stored. If this is not provided, the results are stored in evaluate.results_dir, so at least one of the two must be set.

Sample Usage

In the following example, the evaluate command is used to run evaluation with the TensorRT engine:

tao deploy ml_recog evaluate -e $EVAL_SPEC \
    evaluate.trt_engine=$ENGINE_FILE \
    results_dir=$RESULTS_DIR

The following is an example of the output $RESULTS_DIR/status.json:

{"date": "3/30/2023", "time": "6:7:14", "status": "STARTED", "verbosity": "INFO", "message": "Starting ml_recog evaluation."} {"date": "3/30/2023", "time": "6:7:24", "status": "SUCCESS", "verbosity": "INFO", "message": "Evaluation finished successfully."}

The following is a sample output log:

Starting ml_recog evaluation.
[06/22/2023-20:41:53] [TRT] [W] CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvda.net.cn/cuda/cuda-c-programming-guide/index.html#env-vars
[06/22/2023-20:41:53] [TRT] [W] The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
[06/22/2023-20:41:53] [TRT] [W] CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvda.net.cn/cuda/cuda-c-programming-guide/index.html#env-vars
Loading gallery dataset...
...
Top 1 scores: 0.9958333333333333
Top 5 scores: 1.0
Confusion Matrix
[[ 34   0   0   0   0]
 [  0 106   0   0   0]
 [  0   0  29   0   0]
 [  0   0   0  31   0]
 [  0   0   0   1  47]]
Classification Report
              precision    recall  f1-score   support

     c000001       1.00      1.00      1.00        34
     c000002       1.00      1.00      1.00       106
     c000003       1.00      1.00      1.00        29
     c000004       0.97      1.00      0.98        31
     c000005       1.00      0.98      0.99        48

    accuracy                           1.00       248
   macro avg       0.99      1.00      0.99       248
weighted avg       1.00      1.00      1.00       248

Finished evaluation.
Evaluation finished successfully.
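
For context, the Top-1/Top-5 scores above come from nearest-neighbor retrieval in the learned embedding space: each query embedding is matched against the reference (gallery) embeddings, and a query counts as correct if its true class appears among the k closest gallery items. The following is a minimal NumPy sketch of that metric, an illustration rather than TAO's internal implementation; it assumes L2-normalized embeddings so that a dot product equals cosine similarity:

import numpy as np

def topk_accuracy(query_emb, query_labels, gallery_emb, gallery_labels, k=5):
    """Fraction of queries whose true label appears among the k nearest gallery items."""
    sims = query_emb @ gallery_emb.T              # (num_query, num_gallery) similarities
    topk_idx = np.argsort(-sims, axis=1)[:, :k]   # indices of the k most similar gallery items
    topk_labels = gallery_labels[topk_idx]        # (num_query, k) retrieved labels
    hits = (topk_labels == query_labels[:, None]).any(axis=1)
    return hits.mean()

# Toy example with random 256-d embeddings (MLRecogNet's 'fc_pred' output is 256-d)
rng = np.random.default_rng(0)
g = rng.normal(size=(100, 256)); g /= np.linalg.norm(g, axis=1, keepdims=True)
q = rng.normal(size=(20, 256));  q /= np.linalg.norm(q, axis=1, keepdims=True)
print(topk_accuracy(q, rng.integers(0, 5, 20), g, rng.integers(0, 5, 100), k=5))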

Inference uses the same spec file as TAO inference. The following is a sample spec file, $INFERENCE_SPEC:

results_dir: "/path/to/output_dir" model: input_channels: 3 input_width: 224 input_height: 224 inference: trt_engine: "/path/to/generated/trt_engine" batch_size: 10 inference_input_type: classification_folder topk: 5 dataset: val_dataset: reference: "/path/to/reference/set" query: ""

Use the following command to run MLRecogNet engine inference:

tao deploy ml_recog inference -e /path/to/spec.yaml \
    inference.trt_engine=/path/to/engine/file \
    results_dir=/path/to/outputs

Required Arguments

  • -e, --experiment_spec: The experiment spec file for inference. This should be the same as the tao inference spec file.

  • inference.trt_engine: The engine file to run inference on.

  • results_dir: The directory where the inference results will be stored.

Sample Usage

In the following example, the inference command is used to run inference with the TensorRT engine:

tao deploy ml_recog inference -e $INFERENCE_SPEC \
    inference.trt_engine=$ENGINE_FILE \
    results_dir=$RESULTS_DIR

The results in JSON format will be stored under $RESULTS_DIR/trt_inference.
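
Beyond the tao deploy entry points, the generated engine can also be embedded in your own pipeline, for example to compute embeddings directly. The following is a minimal sketch using the TensorRT Python API, written against TensorRT 8.x with pycuda; exact API names vary across TensorRT versions, and real inputs must be preprocessed with the pixel_mean/pixel_std from the spec:

import numpy as np
import tensorrt as trt
import pycuda.autoinit  # noqa: F401 -- creates a CUDA context on import
import pycuda.driver as cuda

logger = trt.Logger(trt.Logger.WARNING)
with open("/path/to/engine/file", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()
context.set_binding_shape(0, (1, 3, 224, 224))  # concrete shape within the optimization profile

# Dummy pre-processed input; substitute a normalized image tensor in practice
inp = np.ascontiguousarray(np.random.rand(1, 3, 224, 224).astype(np.float32))
out = np.empty((1, 256), dtype=np.float32)  # 'fc_pred' embedding output

d_inp, d_out = cuda.mem_alloc(inp.nbytes), cuda.mem_alloc(out.nbytes)
cuda.memcpy_htod(d_inp, inp)
context.execute_v2([int(d_inp), int(d_out)])
cuda.memcpy_dtoh(out, d_out)
print(out.shape)  # (1, 256) embedding to compare against gallery embeddings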

The following is an example of the output $RESULTS_DIR/status.json:

{"date": "6/22/2023", "time": "20:46:38", "status": "STARTED", "verbosity": "INFO", "message": "Starting ml_recog inference."} {"date": "6/22/2023", "time": "20:46:53", "status": "SUCCESS", "verbosity": "INFO", "message": "Inference finished successfully."}

The following is a sample output log:

Starting ml_recog inference.
[06/22/2023-20:46:39] [TRT] [W] CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvda.net.cn/cuda/cuda-c-programming-guide/index.html#env-vars
[06/22/2023-20:46:39] [TRT] [W] The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
[06/22/2023-20:46:39] [TRT] [W] CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvda.net.cn/cuda/cuda-c-programming-guide/index.html#env-vars
Loading gallery dataset...
...
Finished inference.
Inference finished successfully.
