DENK SDK API Reference
- Python
- C#
Python
Overview
The DENK SDK provides a set of tools for running neural network inference locally and offline. It supports both Python and C# and is designed to work with models created on the DENK Vision AI Hub.
Classes
InferencePipeline
The InferencePipeline class is the main entry point for setting up and running inference with the DENK SDK.
Constructor
InferencePipeline(device: Device = Device.CPU, token: Union[str, None] = None)
- Parameters:
  - device: The device to use for inference. Defaults to Device.CPU.
  - token: The authentication token. If not provided, the SDK will look for a USB dongle.
Methods
- get_backend_library() -> str: Returns the type and version of the backend library being used.
- add_model(model_path: str) -> Model: Loads a model from the specified path and adds it to the pipeline.
- get_models() -> List[Model]: Returns a list of models currently loaded in the pipeline.
- run(image: np.ndarray) -> InferencePipeline: Runs inference on the provided image.
- get_results(): Retrieves the results of the last inference run.
- get_result_image(object_opacity: float = 0.5, pil_format: bool = False): Returns the result image with the specified object opacity and format.
Model
Represents a model loaded into the InferencePipeline.
Constructor
Model(parent_pipeline: InferencePipeline, model_path: str)
- Parameters:
  - parent_pipeline: The InferencePipeline instance to which this model belongs.
  - model_path: The path to the model file.
Attributes
- pre: An instance of the Pre class for pre-processing settings.
- post: An instance of the Post class for post-processing settings.
Pre
Handles pre-processing settings for a model.
Methods
- set_evaluation_size(width: int, height: int) -> Pre: Sets the evaluation size for the model.
- set_image_partitioning(partitions_in_width: int, partitions_in_height: int) -> Pre: Sets the image partitioning for the model.
- set_moving_window(active: bool) -> Pre: Enables or disables the moving window feature.
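To build intuition for what image partitioning does, the standalone sketch below (illustrative only, not SDK code) computes the pixel regions a partition grid would split an image into; the even-tiling math is an assumption about how partitioning typically works, not a statement about the SDK's internals.

```python
def partition_regions(width, height, parts_w, parts_h):
    """Compute (left, top, right, bottom) pixel regions for a
    parts_w x parts_h partitioning of a width x height image."""
    regions = []
    for row in range(parts_h):
        for col in range(parts_w):
            left = col * width // parts_w
            right = (col + 1) * width // parts_w
            top = row * height // parts_h
            bottom = (row + 1) * height // parts_h
            regions.append((left, top, right, bottom))
    return regions

# A 2x2 partitioning of a 100x80 image yields four 50x40 tiles.
print(partition_regions(100, 80, 2, 2))
```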
Post
Handles post-processing settings for a model.
Methods
- filter_by_confidence(minimum_confidence: float) -> Post: Filters results by a minimum confidence threshold.
- set_segmentation_threshold(segmentation_threshold: float) -> Post: Sets the segmentation threshold for the model.
- filter_overlapping_bounding_boxes(overlap_threshold: float) -> Post: Filters overlapping bounding boxes based on the specified threshold.
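The SDK does not document the exact algorithm behind overlap filtering; a common approach is to keep the higher-confidence box of any pair whose intersection-over-union (IoU) exceeds the threshold. The sketch below shows that conventional technique in plain Python, as an assumption about the behavior rather than the SDK's actual implementation.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (left, top, right, bottom)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def filter_overlapping(boxes, overlap_threshold):
    """Greedy suppression: visit boxes in descending confidence order and
    keep one only if it overlaps no kept box above the threshold.
    Each entry is (confidence, (left, top, right, bottom))."""
    kept = []
    for conf, box in sorted(boxes, reverse=True):
        if all(iou(box, k) <= overlap_threshold for _, k in kept):
            kept.append((conf, box))
    return kept
```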
Enums
Device
An enumeration of available devices for running inference.
CPU = -1
GPU1 = 0
GPU2 = 1
GPU3 = 2
GPU4 = 3
GPU5 = 4
GPU6 = 5
GPU7 = 6
GPU8 = 7
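The values map CPU to -1 and each GPUn to the zero-based device index n - 1. A standalone IntEnum replica (illustrative, not the SDK's own class) makes the mapping explicit:

```python
from enum import IntEnum

class Device(IntEnum):
    """Mirrors the SDK's Device values: CPU is -1, GPUn is index n - 1."""
    CPU = -1
    GPU1 = 0
    GPU2 = 1
    GPU3 = 2
    GPU4 = 3
    GPU5 = 4
    GPU6 = 5
    GPU7 = 6
    GPU8 = 7

# Selecting the third GPU corresponds to device index 2.
print(Device.GPU3.value)
```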
Example Usage
import denk_sdk
pipeline = denk_sdk.InferencePipeline(token="your_token", device=denk_sdk.Device.CPU)
model = pipeline.add_model("path/to/model.denk")
results = pipeline.run(image).get_results()
print(results)
Return Overview
The returned protobuf object contains the following fields:
- classification_models: List of classification models and their results.
  - model_id: Identifier of the model.
  - model_name: Name of the model.
  - run_time_ms: Time taken to run the model in milliseconds.
  - classes: List of classification results.
    - class_id: Identifier of the class.
    - class_name: Name of the class.
    - class_color: Color associated with the class.
    - classifier: Confidence score of the classification.
- segmentation_models: List of segmentation models and their results.
  - model_id: Identifier of the model.
  - model_name: Name of the model.
  - run_time_ms: Time taken to run the model in milliseconds.
  - classes: List of segmentation results.
    - class_id: Identifier of the class.
    - class_name: Name of the class.
    - class_color: Color associated with the class.
    - objects: List of segmented objects.
      - bounding_box: Bounding box of the object.
      - oriented_bounding_box: Oriented bounding box of the object.
      - confidence: Confidence score of the segmentation.
      - segmentation: Segmentation mask.
- instance_segmentation_models: List of instance segmentation models and their results. Same structure as segmentation_models.
- object_detection_models: List of object detection models and their results.
  - model_id: Identifier of the model.
  - model_name: Name of the model.
  - run_time_ms: Time taken to run the model in milliseconds.
  - classes: List of object detection results.
    - class_id: Identifier of the class.
    - class_name: Name of the class.
    - class_color: Color associated with the class.
    - objects: List of detected objects.
      - bounding_box: Bounding box of the object.
      - oriented_bounding_box: Oriented bounding box of the object.
      - confidence: Confidence score of the detection.
- anomaly_detection_models: List of anomaly detection models and their results.
  - model_id: Identifier of the model.
  - model_name: Name of the model.
  - run_time_ms: Time taken to run the model in milliseconds.
  - classes: List of anomaly detection results.
    - class_id: Identifier of the class.
    - class_name: Name of the class.
    - class_color: Color associated with the class.
    - anomaly_score: Anomaly score.
    - objects: List of detected anomalies.
      - bounding_box: Bounding box of the anomaly.
      - oriented_bounding_box: Oriented bounding box of the anomaly.
      - confidence: Confidence score of the anomaly.
      - segmentation: Segmentation mask.
- optical_character_recognition_models: List of OCR models and their results.
  - model_id: Identifier of the model.
  - model_name: Name of the model.
  - run_time_ms: Time taken to run the model in milliseconds.
  - text: Recognized text.
  - confidence: Confidence score of the OCR.
- barcode_reading_models: List of barcode reading models and their results.
  - model_id: Identifier of the model.
  - model_name: Name of the model.
  - run_time_ms: Time taken to run the model in milliseconds.
  - classes: List of barcode reading results.
    - class_id: Identifier of the class.
    - class_name: Name of the class.
    - class_color: Color associated with the class.
    - barcode_type: Type of the barcode.
    - objects: List of detected barcodes.
      - bounding_box: Bounding box of the barcode.
      - oriented_bounding_box: Oriented bounding box of the barcode.
      - text: Decoded text from the barcode.
      - bytes: Raw bytes of the barcode.
Example Return
{
"classification_models": [
{
"model_id": "model_1",
"model_name": "Classification Model 1",
"run_time_ms": 123.45,
"classes": [
{
"class_id": "class_1",
"class_name": "Class 1",
"class_color": {"red": 255, "green": 0, "blue": 0},
"classifier": 0.95
}
]
}
],
"segmentation_models": [
{
"model_id": "model_2",
"model_name": "Segmentation Model 1",
"run_time_ms": 234.56,
"classes": [
{
"class_id": "class_2",
"class_name": "Class 2",
"class_color": {"red": 0, "green": 255, "blue": 0},
"objects": [
{
"bounding_box": {"top_left": {"x": 10, "y": 20}, "bottom_right": {"x": 30, "y": 40}},
"oriented_bounding_box": {"by_center": {"center": {"x": 20, "y": 30}, "width": 20, "height": 20, "angle": 0, "full_orientation": True}},
"confidence": 0.85,
"segmentation": b'\x89PNG\r\n\x1a\n...'
}
]
}
]
}
]
}
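Since the result object nests lists of models, classes, and objects, a typical consumer walks all three levels. The sketch below does this over a plain dict shaped like the example above (rather than the real protobuf object, which exposes the same field names):

```python
# A minimal stand-in for the SDK's result object, shaped like the example return.
results = {
    "segmentation_models": [
        {
            "model_name": "Segmentation Model 1",
            "classes": [
                {
                    "class_name": "Class 2",
                    "objects": [
                        {"confidence": 0.85},
                        {"confidence": 0.40},
                    ],
                }
            ],
        }
    ]
}

# Collect (model, class, confidence) for every detected object.
detections = [
    (model["model_name"], cls["class_name"], obj["confidence"])
    for model in results["segmentation_models"]
    for cls in model["classes"]
    for obj in cls["objects"]
]
print(detections)
```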
C#
Classes
InferencePipeline
The InferencePipeline class is the main entry point for setting up and running inference with the DENK SDK.
Constructor
InferencePipeline(Device device = Device.CPU, string token = "")
- Parameters:
  - device: The device to use for inference. Defaults to Device.CPU.
  - token: The authentication token. If not provided, the SDK will look for a USB dongle.
Methods
- GetBackendLibrary() -> string: Returns the type and version of the backend library being used.
- AddModel(string modelPath) -> Model: Loads a model from the specified path and adds it to the pipeline.
- GetModels() -> List<Model>: Returns a list of models currently loaded in the pipeline.
- Run(string imagePath) -> InferencePipeline: Runs inference on the image at the provided path.
- GetResults() -> SimpleResults: Retrieves the results of the last inference run.
- GetResultImage(double objectOpacity) -> Bitmap: Returns the result image with the specified object opacity.
Model
Represents a model loaded into the InferencePipeline.
Constructor
Model(InferencePipeline parentPipeline, string modelPath)
- Parameters:
  - parentPipeline: The InferencePipeline instance to which this model belongs.
  - modelPath: The path to the model file.
Attributes
- Pre: An instance of the Pre class for pre-processing settings.
- Post: An instance of the Post class for post-processing settings.
Pre
Handles pre-processing settings for a model.
Methods
- SetEvaluationSize(int width, int height) -> Pre: Sets the evaluation size for the model.
- SetImagePartitioning(int partitionsInWidth, int partitionsInHeight) -> Pre: Sets the image partitioning for the model.
- SetMovingWindow(bool active) -> Pre: Enables or disables the moving window feature.
Post
Handles post-processing settings for a model.
Methods
- FilterByConfidence(double minimumConfidence) -> Post: Filters results by a minimum confidence threshold.
- SetSegmentationThreshold(double segmentationThreshold) -> Post: Sets the segmentation threshold for the model.
- FilterOverlappingBoundingBoxes(double overlapThreshold) -> Post: Filters overlapping bounding boxes based on the specified threshold.
Enums
Device
An enumeration of available devices for running inference.
CPU = -1
GPU1 = 0
GPU2 = 1
GPU3 = 2
GPU4 = 3
GPU5 = 4
GPU6 = 5
GPU7 = 6
GPU8 = 7
Example Usage
using DenkSdk;
InferencePipeline pipeline = new InferencePipeline(Device.CPU, "your_token");
Model model = pipeline.AddModel("path/to/model.denk");
var results = pipeline.Run("path/to/image.jpg").GetResults();
Console.WriteLine($"Results: {results}");
Return Overview
The returned SimpleResults object contains the same fields as the protobuf object described in the Return Overview of the Python section above.
Example Return
var results = new SimpleResults
{
ClassificationModels = {
new ClassificationModel {
ModelId = "model_1",
ModelName = "Classification Model 1",
RunTimeMs = 123.45,
Classes = {
new ClassificationClass {
ClassId = "class_1",
ClassName = "Class 1",
ClassColor = new Color { Red = 255, Green = 0, Blue = 0 },
Classifier = 0.95
}
}
}
},
SegmentationModels = {
new SegmentationModel {
ModelId = "model_2",
ModelName = "Segmentation Model 1",
RunTimeMs = 234.56,
Classes = {
new SegmentationClass {
ClassId = "class_2",
ClassName = "Class 2",
ClassColor = new Color { Red = 0, Green = 255, Blue = 0 },
Objects = {
new SegmentationObject {
BoundingBox = new BoundingBox { TopLeft = new Point2i { X = 10, Y = 20 }, BottomRight = new Point2i { X = 30, Y = 40 } },
OrientedBoundingBox = new OrientedBoundingBox { ByCenter = new OrientedBoundingBoxByCenter { Center = new Point2d { X = 20, Y = 30 }, Width = 20, Height = 20, Angle = 0, FullOrientation = true } },
Confidence = 0.85,
Segmentation = ByteString.CopyFrom(new byte[] { 0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A })
}
}
}
}
}
}
};