# Saqr PPE Detection - Deployment Guide

## Unitree G1 Robot + Intel RealSense D435I

---

## Robot Details

| Item | Value |
|------|-------|
| Robot | Unitree G1 Humanoid |
| IP | `192.168.123.164` |
| User | `unitree` |
| OS | Ubuntu 20.04 (aarch64 / Jetson) |
| Python | 3.10 (conda env: `teleimager`) |
| Camera | Intel RealSense D435I |
| Serial | `243622073459` |
| Port | USB 3.2 @ `/dev/video0` |

---
## Step 1: Train the Model (Dev Machine)

```bash
cd ~/Robotics_workspace/AI/Saqr
conda activate AI_MSI_yolo
python train.py --dataset dataset --epochs 100 --batch 16
```

Verify the model exists:

```bash
ls -lh models/saqr_best.pt
# Expected: ~5.3 MB
```

---
## Step 2: Deploy to Robot (Dev Machine)

### Option A: Auto deploy

```bash
cd ~/Robotics_workspace/AI/Saqr
./deploy.sh
```

### Option B: Manual SCP

```bash
# Create folders
ssh unitree@192.168.123.164 "mkdir -p ~/Saqr/{models,captures/{SAFE,PARTIAL,UNSAFE},Config,Logs}"

# Copy project files
scp saqr.py saqr_g1_bridge.py controller.py detect.py manager.py logger.py gui.py requirements.txt deploy.sh DEPLOY.md \
    unitree@192.168.123.164:~/Saqr/

# Copy config
scp Config/logging.json unitree@192.168.123.164:~/Saqr/Config/

# Copy trained model (5.3 MB)
scp models/saqr_best.pt unitree@192.168.123.164:~/Saqr/models/
```

---
## Step 3: Install Dependencies (Robot)

```bash
ssh unitree@192.168.123.164
```

### Fix system clock (required for SSL/pip):

```bash
sudo date -s "2026-04-10 15:00:00"
```

### Install into the teleimager conda env:

```bash
conda activate teleimager
python -m pip install ultralytics opencv-python-headless numpy PyYAML
```

If pip fails with SSL errors, download the wheels on the dev machine and install offline:

```bash
# On dev machine:
mkdir -p /tmp/saqr_pkgs
pip download ultralytics opencv-python-headless numpy PyYAML \
    -d /tmp/saqr_pkgs --python-version 3.10 --platform manylinux2014_aarch64 --only-binary=:all:
scp -r /tmp/saqr_pkgs unitree@192.168.123.164:/tmp/saqr_pkgs

# On robot:
conda activate teleimager
python -m pip install --no-index --find-links=/tmp/saqr_pkgs ultralytics opencv-python-headless numpy PyYAML
```
### Install Jetson GPU PyTorch (for CUDA acceleration):

```bash
# Remove pip PyTorch (wrong CUDA version)
python -m pip uninstall torch torchvision -y

# Install Jetson-specific PyTorch for JetPack 5.1 / CUDA 11.4
python -m pip install --no-cache-dir \
    https://developer.download.nvidia.com/compute/redist/jp/v51/pytorch/torch-2.1.0a0+41361538.nv23.06-cp310-cp310-linux_aarch64.whl

python -m pip install --no-cache-dir \
    https://developer.download.nvidia.com/compute/redist/jp/v51/pytorch/torchvision-0.16.1a0+5e8e2f1-cp310-cp310-linux_aarch64.whl
```
### Fix Qt / Display (choose one):

**A) At the robot's physical terminal (monitor connected):**

```bash
xhost +local:
export DISPLAY=:0
export QT_QPA_PLATFORM=xcb
```

**B) Via SSH with X11 forwarding:**

```bash
# From dev machine:
ssh -X unitree@192.168.123.164
export QT_QPA_PLATFORM=xcb
```

**C) Headless / no display (SSH without -X):**

```bash
export QT_QPA_PLATFORM=offscreen
# Always add the --headless flag when running saqr.py
```

**Make permanent (only if the robot usually runs headless):**

```bash
echo 'export QT_QPA_PLATFORM=offscreen' >> ~/.bashrc
source ~/.bashrc
```

**Common error:** `Invalid MIT-MAGIC-COOKIE-1 key` or `could not connect to display :0`. This means you are in an SSH session without X11 authorization. Either use `ssh -X`, run `xhost +local:` at the physical terminal, or switch to headless mode.
### Verify install:

```bash
python -c "from ultralytics import YOLO; print('ultralytics OK')"
python -c "import torch; print('CUDA:', torch.cuda.is_available())"
python -c "import cv2; print('opencv OK')"
```

---
## Step 4: Run PPE Detection (Robot)

### Option A: OpenCV + RealSense RGB (recommended, no pyrealsense2 needed):

```bash
conda activate teleimager
cd ~/Saqr

# === WITH DISPLAY (physical monitor on robot) ===
xhost +
export DISPLAY=:0
python saqr.py --source /dev/video2 --model models/saqr_best.pt

# === HEADLESS via SSH (no display, saves captures + CSV) ===
export QT_QPA_PLATFORM=offscreen
python saqr.py --source /dev/video2 --model models/saqr_best.pt --headless
```

**Note:** `/dev/video2` is the RealSense D435I RGB camera accessed directly via OpenCV V4L2. No pyrealsense2 SDK needed: pure OpenCV frames (640x480 BGR).
### Option B: RealSense SDK (pyrealsense2):

```bash
python saqr.py --source realsense --model models/saqr_best.pt --headless
python saqr.py --source realsense:243622073459 --model models/saqr_best.pt --headless
```

### Option C: GUI (dev machine only, not on robot):

```bash
# On your dev machine (not the robot):
python gui.py --source 0 --model models/saqr_best.pt
```

**Note:** gui.py requires PySide6 and a display. It will NOT work on the headless Jetson robot.
### With OpenCV camera index:

```bash
python saqr.py --source 0 --model models/saqr_best.pt --headless
```

### With V4L2 device path:

```bash
python saqr.py --source /dev/video0 --model models/saqr_best.pt --headless
```

### With GUI (dev machine with a display connected; see Option C):

```bash
python gui.py --source realsense --model models/saqr_best.pt
```

### Simple detection (no tracking):

```bash
python detect.py --source realsense --model models/saqr_best.pt
```

---
## Step 4b: Run with G1 TTS + Reject Action (Bridge)

`saqr_g1_bridge.py` is the production entry point. It does **not** run Saqr itself — it sits idle, watches the G1 wireless remote, spawns `saqr.py` as a subprocess on demand, and drives the G1 onboard TTS + arm action client from Saqr's event stream.

### Wireless-remote workflow

| Press | Action |
|-------|--------|
| **R2 + X** | Start `saqr.py`, robot says **"Saqr activated."** |
| **R2 + Y** | Stop `saqr.py` (SIGINT → SIGTERM → SIGKILL escalation), robot says **"Saqr deactivated."** Bridge stays running, ready for the next R2+X. |
| **Ctrl+C** in the terminal | Stop saqr (if running) and exit the bridge cleanly. |

Each press is rising-edge debounced (release-wait), so holding the button only fires once. Track IDs and per-id status state are reset on every start, so a leftover SAFE from one session never suppresses an UNSAFE in the next.
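The release-wait debounce described above can be sketched as a tiny edge detector (hypothetical `EdgeTrigger` class for illustration only; the real logic lives in `saqr_g1_bridge.py` / `controller.py`):

```python
class EdgeTrigger:
    """Rising-edge trigger with release-wait: fires once per press,
    no matter how long the combo is held down."""

    def __init__(self, action):
        self.action = action
        self.was_down = False

    def poll(self, is_down: bool) -> bool:
        """Call at ~50 Hz with the current combo state.
        Returns True iff the action fired on this poll."""
        fired = False
        if is_down and not self.was_down:  # rising edge only
            self.action()
            fired = True
        self.was_down = is_down  # must observe a release before re-arming
        return fired
```

Holding the combo keeps `is_down` True on every poll, so the action fires exactly once and re-arms only after a release.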
### Per-detection behavior (while saqr is running)

| Transition | TTS (speaker_id=2, English) | Arm action |
|------------|------------------------------|------------|
| → UNSAFE | "Please stop. Wear your proper safety equipment. You are missing **{items}**." (or generic text if no items reported) | `reject` (id=13) + auto `release arm` |
| → SAFE | "Safe to enter. Have a good day." | — |
| → PARTIAL | — | — |

The missing-item list is parsed live from Saqr's event line and joined in natural English: `"vest"`, `"helmet and vest"`, `"helmet, vest, and gloves"`.
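The natural-English join follows the usual one/two/many pattern; a minimal sketch (hypothetical helper name `join_items` — the bridge's actual formatting code may differ):

```python
def join_items(items):
    """Join missing-PPE item names in natural English:
    one item plain, two joined with 'and', three or more with
    an Oxford comma before the final 'and'."""
    if not items:
        return ""
    if len(items) == 1:
        return items[0]
    if len(items) == 2:
        return f"{items[0]} and {items[1]}"
    return ", ".join(items[:-1]) + f", and {items[-1]}"
```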
### Architecture

- One `ChannelFactoryInitialize(0, eth0)` is shared by **all** DDS clients:
  - `G1ArmActionClient` — runs the `reject` arm action.
  - `G1 AudioClient` — `TtsMaker(text, speaker_id)` for English speech.
  - `ChannelSubscriber("rt/lowstate", LowState_)` — receives the wireless remote button bits.
- `controller.py` exposes `LowStateHub` + `UnitreeRemote`, parses the `wireless_remote` byte field, and provides `combo_r2x()` / `combo_r2y()`.
- A 50 Hz daemon thread polls the hub for rising edges and calls `Bridge.start_saqr()` / `Bridge.stop_saqr()`.

Requires `unitree_sdk2py` installed on the robot and a reachable DDS bus on `eth0`. Deploy `controller.py` alongside `saqr_g1_bridge.py` — without it the trigger loop is skipped and the bridge falls back to legacy auto-start mode.
### Recommended: production run with R2+X / R2+Y

```bash
conda activate marcus   # or teleimager — whichever env has unitree_sdk2py
cd ~/Saqr
python3 saqr_g1_bridge.py --iface eth0 --source realsense --headless -- --stream 8080
```

Boot output should include:

```
[BRIDGE] G1ArmActionClient ready (iface=eth0)
[BRIDGE] G1 AudioClient ready (speaker_id=2)
[BRIDGE] Subscribed to rt/lowstate (wireless remote)
[BRIDGE] trigger loop ready — press R2+X to start, R2+Y to stop.
```

Then press **R2+X** to begin, **R2+Y** to stop. The MJPEG stream is at `http://192.168.123.164:8080` (only while saqr is running).

If `pyrealsense2` reports `No device connected`, fall back to the V4L2 path:

```bash
python3 saqr_g1_bridge.py --iface eth0 --source /dev/video2 --headless -- --stream 8080
```
### Live OpenCV window on the robot's physical monitor

```bash
xhost +local: >/dev/null 2>&1
DISPLAY=:0 python3 saqr_g1_bridge.py --iface eth0 --source realsense
```

Pressing `q` in the OpenCV window quits the current saqr session (same as R2+Y).
### Legacy / dev mode (no controller, no trigger)

`--no-trigger` skips the wireless-remote subscription entirely and starts saqr immediately. Use this on the workstation, or when you want the old "always running" behavior.

```bash
# On the workstation, no robot, no SDK:
python3 saqr_g1_bridge.py --no-trigger --dry-run --source 0 --headless

# On the robot, but skipping the trigger:
python3 saqr_g1_bridge.py --no-trigger --iface eth0 --source realsense --headless
```

`--dry-run` automatically implies `--no-trigger` (no SDK = no LowState).
### Bridge CLI flags

| Flag | Default | Description |
|------|---------|-------------|
| `--iface` | *(default DDS)* | DDS network interface, e.g. `eth0` |
| `--timeout` | `10.0` | Arm/Audio/LowState client timeout (seconds) |
| `--cooldown` | `8.0` | Per-(track_id, status) seconds before re-triggering TTS/arm |
| `--release-after` | `2.0` | Seconds before auto `release arm` (0 = never) |
| `--speaker-id` | `2` | G1 `TtsMaker` speaker_id (2 = English on current firmware) |
| `--dry-run` | off | Parse events but never call the SDK; implies `--no-trigger` |
| `--no-trigger` | off | Skip the R2+X/R2+Y trigger loop and start saqr immediately |
| `--source` | — | Pass through to saqr (`0` / `realsense` / `/dev/video2` / path) |
| `--headless` | off | Pass `--headless` to saqr |
| `--saqr-conf` | — | Pass `--conf` to saqr |
| `--imgsz` | — | Pass `--imgsz` to saqr |
| `--device` | — | Pass `--device` to saqr (`cpu` / `0` / `cuda:0`) |
| `-- <extra>` | — | Everything after `--` is forwarded raw to saqr (use this for `--stream 8080`, `--half`, etc.) |
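The `-- <extra>` convention can be implemented by splitting `sys.argv` at the first bare `--` before parsing; a minimal sketch (hypothetical, assuming `argparse` and a subset of the flags above — the bridge's actual parsing may differ):

```python
import argparse

def parse_bridge_args(argv):
    """Split argv at the first bare '--': everything after it is
    forwarded verbatim to the saqr.py subprocess; everything before
    it belongs to the bridge itself."""
    if "--" in argv:
        cut = argv.index("--")
        own, extra = argv[:cut], argv[cut + 1:]
    else:
        own, extra = argv, []
    parser = argparse.ArgumentParser()
    parser.add_argument("--iface")
    parser.add_argument("--source")
    parser.add_argument("--headless", action="store_true")
    return parser.parse_args(own), extra
```

With `--iface eth0 --headless -- --stream 8080`, the bridge sees `iface="eth0"` and `headless=True`, and `["--stream", "8080"]` is appended raw to the saqr command line.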
### Speaker-id reference

speaker_ids are **locked to a language** — they do NOT auto-detect the input text. On current G1 firmware, `speaker_id=0` is Chinese regardless of what you feed it. Speaker 2 was confirmed English by running Sanad mode 6 (`voice_example.py 6`). If the robot's firmware changes, re-scan:

```bash
# On the robot (in a conda env with unitree_sdk2py):
python3 ~/Sanad/voice_example.py 6
```

and pass the new id with `--speaker-id N`.
### What a successful run looks like

```
[BRIDGE] G1ArmActionClient ready (iface=eth0)
[BRIDGE] G1 AudioClient ready (speaker_id=2)
[BRIDGE] Subscribed to rt/lowstate (wireless remote)
[BRIDGE] trigger loop ready — press R2+X to start, R2+Y to stop.
[BRIDGE] R2+X pressed -> start saqr
[BRIDGE] starting saqr: /.../python3 -u /home/unitree/Saqr/saqr.py --source realsense --headless --stream 8080
[BRIDGE] tts -> 'Saqr activated.'
...
ID 0002 | NEW | UNSAFE | wearing: none | missing: vest | ...
[BRIDGE] tts -> 'Please stop. Wear your proper safety equipment. You are missing vest.'
[BRIDGE] -> reject
[BRIDGE] -> release arm
ID 0003 | STATUS_CHANGE | SAFE | wearing: helmet, vest | missing: none | ...
[BRIDGE] tts -> 'Safe to enter. Have a good day.'
[BRIDGE] R2+Y pressed -> stop saqr
[BRIDGE] stopping saqr (SIGINT)
[BRIDGE] saqr exited rc=-2
[BRIDGE] tts -> 'Saqr deactivated.'
```

> **Note on the SIGINT traceback:** when R2+Y stops saqr, you may see a
> Python `KeyboardInterrupt` traceback unwinding from inside YOLO. This is
> expected — saqr.py doesn't catch SIGINT explicitly, so Python prints the
> stack on its way out. The bridge correctly detects the exit, announces
> "Saqr deactivated.", and stays alive, ready for the next R2+X.
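The SIGINT → SIGTERM → SIGKILL escalation the bridge performs on stop can be sketched as follows (simplified, with a hypothetical `grace` timeout; not the bridge's exact code):

```python
import signal
import subprocess

def stop_process(proc: subprocess.Popen, grace: float = 2.0) -> int:
    """Stop a child process by escalating SIGINT -> SIGTERM -> SIGKILL,
    waiting `grace` seconds after each signal before escalating.
    Returns the child's exit code (negative = killed by that signal)."""
    for sig in (signal.SIGINT, signal.SIGTERM, signal.SIGKILL):
        if proc.poll() is not None:
            break  # already exited
        proc.send_signal(sig)
        try:
            proc.wait(timeout=grace)
            break  # exited in response to this signal
        except subprocess.TimeoutExpired:
            continue  # still alive: escalate
    return proc.wait()  # reap and return the exit code
```

A well-behaved saqr.py exits on the first SIGINT (hence `rc=-2` in the log above); the later signals are only sent if it hangs.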
---

## Step 5: Check Results (Robot)

### Live status:

```bash
cat ~/Saqr/captures/result.csv
```

### Event history (audit log):

```bash
cat ~/Saqr/captures/events.csv
```

### Captured photos:

```bash
ls ~/Saqr/captures/SAFE/
ls ~/Saqr/captures/PARTIAL/
ls ~/Saqr/captures/UNSAFE/
```

### Export CSV report:

```bash
cd ~/Saqr
python manager.py --export
```

### Download results to dev machine:

```bash
# From dev machine
scp -r unitree@192.168.123.164:~/Saqr/captures/ ./captures_from_robot/
scp unitree@192.168.123.164:~/Saqr/captures/events.csv ./events_robot.csv
```

---
## Camera Source Options

| Source | Command | Description |
|--------|---------|-------------|
| `/dev/video2` | `--source /dev/video2` | **RGB camera via OpenCV (recommended)** |
| `realsense` | `--source realsense` | RealSense D435I via pyrealsense2 SDK |
| `realsense:SERIAL` | `--source realsense:243622073459` | Specific RealSense by serial |
| `/dev/video4` | `--source /dev/video4` | Second RGB stream (if available) |
| `0` | `--source 0` | First OpenCV camera index |
| `video.mp4` | `--source video.mp4` | Video file |
| `image.jpg` | `--source image.jpg` | Single image |

### G1 Robot V4L2 Device Map (RealSense D435I):

```
/dev/video0 - Stereo module (infrared) - won't open with OpenCV
/dev/video1 - Stereo metadata
/dev/video2 - RGB camera (640x480)  ← USE THIS
/dev/video3 - RGB metadata
/dev/video4 - RGB camera (secondary stream)
```
### Detect cameras on robot:

```bash
# Find working RGB cameras
python -c "
import cv2
for i in range(10):
    cap = cv2.VideoCapture(f'/dev/video{i}', cv2.CAP_V4L2)
    if cap.isOpened():
        ret, frame = cap.read()
        if ret and frame is not None:
            print(f'/dev/video{i}: {frame.shape} OK')
        else:
            print(f'/dev/video{i}: opened but no frame')
        cap.release()
"

# RealSense devices
rs-enumerate-devices | grep "Serial Number"
```

---
## Tuning Parameters

| Parameter | Default | Flag | Description |
|-----------|---------|------|-------------|
| Confidence | 0.35 | `--conf 0.35` | Lower = more detections; higher = fewer false positives |
| Max Missing | 90 | `--max-missing 90` | Frames before a track is deleted (~3 s at 30 fps) |
| Match Distance | 250 | `--match-distance 250` | Max pixel distance for track matching |
| Confirm Frames | 5 | `--status-confirm-frames 5` | Consecutive frames needed to confirm a status change |

### Recommended for G1 patrol:

```bash
python saqr.py --source realsense --model models/saqr_best.pt --headless \
    --conf 0.30 --max-missing 120 --match-distance 300 --status-confirm-frames 7
```

---
## Compliance Rules

| Status | Condition | Color |
|--------|-----------|-------|
| SAFE | Helmet AND vest detected, no violations | Green |
| PARTIAL | Only helmet OR only vest detected | Yellow |
| UNSAFE | `no-helmet` or `no-vest` detected, or nothing detected | Red |
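The table maps to a small decision function; a hypothetical sketch of the rule (the production logic lives in `saqr.py` and may differ in detail):

```python
def classify(labels):
    """Map the set of detected class labels for one person to
    SAFE / PARTIAL / UNSAFE per the compliance table: explicit
    violations win, then both items, then one item, then nothing."""
    if "no-helmet" in labels or "no-vest" in labels:
        return "UNSAFE"  # explicit violation detected
    has_helmet = "helmet" in labels
    has_vest = "vest" in labels
    if has_helmet and has_vest:
        return "SAFE"
    if has_helmet or has_vest:
        return "PARTIAL"
    return "UNSAFE"  # nothing detected
```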
---
## Output Files

| File | Location | Description |
|------|----------|-------------|
| `result.csv` | `captures/result.csv` | Current state of all tracked persons |
| `events.csv` | `captures/events.csv` | Audit log (NEW / STATUS_CHANGE events) |
| Person crops | `captures/SAFE/*.jpg` | Cropped images of compliant workers |
| Person crops | `captures/PARTIAL/*.jpg` | Workers with incomplete PPE |
| Person crops | `captures/UNSAFE/*.jpg` | Workers violating PPE rules |
| Logs | `Logs/Inference/saqr.log` | Runtime log |
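For post-processing the CSVs on the dev machine, Python's stdlib `csv` module is enough; a generic sketch (the column names are whatever the header row declares — none are assumed here, so `tally` takes the column name as an argument):

```python
import csv
from collections import Counter

def tally(path, column):
    """Read a header-row CSV (e.g. events.csv) and tally rows by
    one named column, such as the status column. Returns the total
    row count and a Counter of that column's values."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    return len(rows), Counter(r[column] for r in rows)
```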
---

## Project Files

| File | Purpose |
|------|---------|
| `saqr.py` | Main PPE tracking + detection (RealSense + OpenCV) |
| `saqr_g1_bridge.py` | Saqr → G1 bridge (R2+X/R2+Y trigger, onboard TTS + `reject` arm action on UNSAFE/SAFE transitions) |
| `controller.py` | G1 wireless-remote DDS reader (`LowStateHub`, `combo_r2x()`, `combo_r2y()`); required by the bridge for trigger keys |
| `detect.py` | Simple detection without tracking |
| `gui.py` | PySide6 desktop GUI |
| `manager.py` | Photo management CLI + CSV export |
| `train.py` | YOLO model training |
| `logger.py` | Centralized logging |
| `deploy.sh` | One-command deploy to robot |
| `Config/logging.json` | Log settings |

---
## Troubleshooting

### RealSense not detected

```bash
# Check USB connection
lsusb | grep Intel

# Re-enumerate
rs-enumerate-devices | head -10

# Reset USB if needed (use the bus/device numbers reported by lsusb)
sudo usbreset /dev/bus/usb/002/002
```

### Camera not opening

```bash
# Test RealSense directly
python -c "
import pyrealsense2 as rs
pipe = rs.pipeline()
cfg = rs.config()
cfg.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipe.start(cfg)
frames = pipe.wait_for_frames()
print('Frame:', frames.get_color_frame().get_width(), 'x', frames.get_color_frame().get_height())
pipe.stop()
"

# Test OpenCV fallback
python -c "import cv2; c=cv2.VideoCapture(0); print('OK' if c.isOpened() else 'FAIL'); c.release()"

# Try a different source
python saqr.py --source /dev/video0 --model models/saqr_best.pt --headless
```

### ModuleNotFoundError: ultralytics

```bash
# Check you're in the right conda env
which python
# Should show: /home/unitree/miniconda3/envs/teleimager/bin/python

# Install into the correct env
python -m pip install ultralytics
```

### System clock wrong (SSL errors)

```bash
sudo date -s "2026-04-10 15:00:00"
```

### Model not found

```bash
ls ~/Saqr/models/
# Should show: saqr_best.pt (~5.3 MB)
```

### Low FPS on Jetson

```bash
# Raise the confidence threshold so fewer boxes are tracked and drawn
python saqr.py --source realsense --conf 0.5 --headless

# Or run fully headless (no display, no drawing)
export DISPLAY=
python saqr.py --source realsense --headless
```

### Too many duplicate track IDs

```bash
# Increase tracker tolerance
python saqr.py --source realsense --max-missing 150 --match-distance 300 --headless
```