A Rust library integrated with ONNXRuntime, providing a collection of Computer Vision and Vision-Language models, including YOLOv8, YOLOv9, RT-DETR, CLIP, DINOv2, FastSAM, YOLO-World, BLIP, PaddleOCR and others. Many execution providers are supported, such as CUDA, TensorRT and CoreML.
| Model | Task / Type | Example | CUDA f32 | CUDA f16 | TensorRT f32 | TensorRT f16 |
|---|---|---|---|---|---|---|
| YOLOv8-detection | Object Detection | demo | ✅ | ✅ | ✅ | ✅ |
| YOLOv8-pose | Keypoint Detection | demo | ✅ | ✅ | ✅ | ✅ |
| YOLOv8-classification | Classification | demo | ✅ | ✅ | ✅ | ✅ |
| YOLOv8-segmentation | Instance Segmentation | demo | ✅ | ✅ | ✅ | ✅ |
| YOLOv9 | Object Detection | demo | ✅ | ✅ | ✅ | ✅ |
| RT-DETR | Object Detection | demo | ✅ | ✅ | ✅ | ✅ |
| FastSAM | Instance Segmentation | demo | ✅ | ✅ | ✅ | ✅ |
| YOLO-World | Object Detection | demo | ✅ | ✅ | ✅ | ✅ |
| DINOv2 | Vision-Self-Supervised | demo | ✅ | ✅ | ✅ | ✅ |
| CLIP | Vision-Language | demo | ✅ | ✅ | ✅ visual, ❌ textual | ✅ visual, ❌ textual |
| BLIP | Vision-Language | demo | ✅ | ✅ | ✅ visual, ❌ textual | ✅ visual, ❌ textual |
| DB | Text Detection | demo | ✅ | ❌ | ✅ | ✅ |
| SVTR | Text Recognition | demo | ✅ | ❌ | ✅ | ✅ |
| RTMO | Keypoint Detection | demo | ✅ | ✅ | ❌ | ❌ |
Additionally, this repo provides some solution models.
| Model | Example |
|---|---|
| text detection (PPOCR-det v3, v4), general text detection | demo |
| text recognition (PPOCR-rec v3, v4), Chinese-English text recognition | demo |
| face & landmark detection | demo |
| head detection | demo |
| fall detection | demo |
| trash detection | demo |
Run a demo:

```shell
cargo run -r --example yolov8   # fastsam, yolov9, blip, clip, dinov2, yolo-world...
```
1. Install ort

   Check the ort guide for details.

   For Linux or macOS users:

   - First, download the latest release from ONNXRuntime Releases.
   - Then link the dynamic library:

     ```shell
     export ORT_DYLIB_PATH=/Users/qweasd/Desktop/onnxruntime-osx-arm64-1.17.1/lib/libonnxruntime.1.17.1.dylib
     ```
2. Add usls as a dependency

   ```shell
   cargo add --git https://github.com/jamjamjon/usls
   ```
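   This registers a git dependency in your Cargo.toml; the resulting entry should look roughly like the sketch below (assumed from how `cargo add --git` behaves, not copied from the repo):

   ```toml
   [dependencies]
   usls = { git = "https://github.com/jamjamjon/usls" }
   ```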
3. Build a model with Options

   ```rust
   let options = Options::default()
       .with_model("../models/yolov8m-seg-dyn-f16.onnx");
   let mut model = YOLO::new(&options)?;
   ```
   - If you want to run your model with TensorRT or CoreML:

     ```rust
     let options = Options::default()
         .with_trt(0) // using CUDA by default
         // .with_coreml(0)
     ```

   - If your model has dynamic shapes:

     ```rust
     let options = Options::default()
         .with_i00((1, 2, 4).into())       // dynamic batch
         .with_i02((416, 640, 800).into()) // dynamic height
         .with_i03((416, 640, 800).into()) // dynamic width
     ```

   - If you want to set a confidence threshold for each category:

     ```rust
     let options = Options::default()
         .with_confs(&[0.4, 0.15]) // person: 0.4, others: 0.15
     ```

   - Check Options for more model options.
4. Load images and run

   - Build a DataLoader to load images:

     ```rust
     let dl = DataLoader::default()
         .with_batch(model.batch.opt as usize)
         .load("./assets/")?;

     for (xs, _paths) in dl {
         let _y = model.run(&xs)?;
     }
     ```
   - Or simply read one image:

     ```rust
     let x = vec![DataLoader::try_read("./assets/bus.jpg")?];
     let y = model.run(&x)?;
     ```
5. Annotate and save the results

   ```rust
   let annotator = Annotator::default().with_saveout("YOLOv8");
   annotator.annotate(&x, &y);
   ```
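Putting the pieces together, a minimal end-to-end example might look like the sketch below. It only rearranges the fragments shown above; the exact import paths and the use of `anyhow::Result` for error handling are assumptions, not something this README prescribes.

```rust
// NOTE: import paths are assumed; adjust to the crate's actual module layout.
use usls::{Annotator, DataLoader, Options, YOLO};

fn main() -> anyhow::Result<()> {
    // Build the model (the same example model used above).
    let options = Options::default()
        .with_model("../models/yolov8m-seg-dyn-f16.onnx");
    let mut model = YOLO::new(&options)?;

    // Read one image, run the model, and save the annotated output.
    let x = vec![DataLoader::try_read("./assets/bus.jpg")?];
    let y = model.run(&x)?;

    let annotator = Annotator::default().with_saveout("YOLOv8");
    annotator.annotate(&x, &y);
    Ok(())
}
```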
To convert an f32 ONNX model to f16, you can use a script like this:

```python
import onnx
from pathlib import Path
from onnxconverter_common import float16

# Load the f32 model and convert its weights to f16.
model_f32 = "onnx_model.onnx"
model_f16 = float16.convert_float_to_float16(onnx.load(model_f32))

# Save next to the original with an "-f16" suffix.
saveout = Path(model_f32).with_name(Path(model_f32).stem + "-f16.onnx")
onnx.save(model_f16, saveout)
```
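The converted model can then be loaded like any other (a sketch reusing the Options API above; the file name is the one produced by the script):

```rust
let options = Options::default()
    .with_model("onnx_model-f16.onnx"); // file written by the conversion script
let mut model = YOLO::new(&options)?;
```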