Releases · aisingapore/PeekingDuck
1.3.0.post1
Documentation
- Add `typeguard` to macOS installation instructions (#695)
1.3.0
Process
- Change `--verify_install` from a runtime option to a CLI command `verify-install` (see the examples below).
- Add `--viewer` to activate the new PeekingDuck Viewer (see User Interface below).
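A usage sketch of the two new entry points, using only the commands named in this release (no other options are implied):

```shell
# Verify the installation (replaces: peekingduck run --verify_install)
peekingduck verify-install

# Run a pipeline in the new PeekingDuck Viewer GUI
peekingduck run --viewer
```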
Features
- Add new `augment.undistort` node to remove distortion from a wide-angle camera image.
- Add new `dabble.camera_calibration` node to calculate the camera coefficients for removing image distortion, used by `augment.undistort`.
- Add new instance segmentation models, with new model nodes `model.mask_rcnn` and `model.yolact_edge`. `model.mask_rcnn` supports ResNet50 and ResNet101 backbones; `model.yolact_edge` supports ResNet50, ResNet101, and MobileNetV2 backbones.
- Add new `draw.mask` node to draw instance segmentation masks on the image; it can be used to mask out objects or the background (see the pipeline sketch after this list).
- Add new TensorRT-optimized models for `model.movenet` and `model.yolox`.
- Add node config type checking for all pipeline nodes with user-configurable parameters; a wrong config type now raises a runtime error.
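As an illustration, the new nodes slot into a pipeline config like any other node. This is a minimal sketch only: the webcam source value is an assumption, and `augment.undistort` would normally rely on camera coefficients computed beforehand with `dabble.camera_calibration` (its config keys are omitted here).

```yaml
nodes:
  - input.visual:
      source: 0            # assumed webcam source; any video/image source works
  - augment.undistort      # new: remove wide-angle lens distortion
  - model.mask_rcnn        # new: instance segmentation (ResNet50/ResNet101 backbones)
  - draw.mask              # new: draw the predicted instance masks on the image
  - output.screen
```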
Deprecations and Removals
- `peekingduck run --verify_install` is deprecated and replaced by the `peekingduck verify-install` command.
Documentation
- Add new use case Privacy Protection (People & Screens) using instance segmentation and blurring.
- Add new Edge AI documentation on how to install and run TensorRT models, including performance benchmark charts.
Dependencies
- Add the `typeguard` library (`typeguard ≥ 2.13.3`).
User Interface
- Add the PeekingDuck Viewer, a GUI built on Tkinter that supports a playlist of multiple pipelines, callable via `peekingduck run --viewer`. Upon completion of a pipeline, the user may replay the output video or scrub to a specific frame of interest for analysis.
Refactor
- Streamline `peekingduck/cli.py` by encapsulating the source code for the CLI commands `nodes`, `init`, `run`, and `create-node` under the `peekingduck/commands/` folder.
- Add `PeekingDuckLogo.png` to `setup.cfg` and add `setup.py` to support older pip versions.
- Add new model config field `model_format` for `model.movenet` and `model.yolox` to allow selection between the original models and the new TensorRT models (see the sketch after this list).
- Refactor `model.movenet` and `model.yolox` inference code to work with TensorRT models.
- Define a `_get_config_types()` method for all nodes with user-configurable parameters; this applies to most nodes under the `peekingduck/pipeline/nodes/` folder.
- Use `ThresholdCheckerMixin` to check bounds in `dabble.tracking`.
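A sketch of the new `model_format` field; the `tensorrt` value shown here is an assumption, so check the node documentation for the exact accepted values.

```yaml
nodes:
  - model.yolox:
      model_format: tensorrt   # assumed value; selects the TensorRT-optimized model
```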
1.3.0rc3
1.3.0rc2
1.3.0rc1
1.2.3
1.2.3rc1
1.2.2
Process
- Rename `aimakerspace` to `aisingapore` (#684)
Features
- Precompute `draw.legend` legend box width to fit content and add `box_opacity` config option (#660, #677) (see the example below)
- `--config_path` now supports absolute paths (#665)
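For example, the new option can be set in the pipeline config; the 0.5 value and the 0–1 opacity range are assumptions.

```yaml
nodes:
  - draw.legend:
      box_opacity: 0.5   # assumed 0-1 range; controls legend box transparency
```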
Deprecations and Removals
- Model node config option `detect_ids` has been changed to `detect` (#653) (see the sketch after this list)
Bug Fixes
- Fix MOSSE tracker in `dabble.tracking` to only be initialized when there is a detection (#662)
- Ensure `saved_video_fps` in the `input.visual` node is within the range of values supported by `cv2.VideoWriter` (#664)
- Fix `output.screen` window size to adopt the input video frame size when `do_resizing=False` (#669)
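A sketch of the renamed option referenced in the first item above; `model.yolo` and the class-name values are illustrative assumptions, not part of this release note.

```yaml
nodes:
  - model.yolo:
      detect: ["person", "car"]   # previously configured via detect_ids
```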
Dependencies
- Tighten `opencv-contrib-python` version to satisfy linting requirements (#657)
1.2.2rc1
1.2.1
Process
- Models now download individual weights files for each model type instead of an entire folder of all model types
- Limit custom node directories to be created within `src/`
Bug Fixes
- Fix `input.visual` issue where `filename` data type is not set if the source is a single image file
- Fix `input.visual` issue where approximate progress shows 10% even when the folder has been fully processed
Improved Documentation
- Add tutorial on using your own models
Refactor
- Redirect `tqdm` (used when downloading weights) to use `stdout` instead of `stderr`
- Implement mixins for threshold checking and weights downloading