DENK SDK API Reference

Overview

The DENK SDK provides a set of tools for running neural network inference locally and offline. It supports both Python and C# and is designed to work with models created on the DENK Vision AI Hub.

Classes

InferencePipeline

The InferencePipeline class is the main entry point for setting up and running inference with the DENK SDK.

Constructor

InferencePipeline(device: Device = Device.CPU, token: Union[str, None] = None)
  • Parameters:
    • device: The device to be used for inference. Defaults to Device.CPU.
    • token: The authentication token. If not provided, the SDK will look for a USB dongle instead.

Methods

  • get_backend_library() -> str

    • Returns the type and version of the backend library being used.
  • add_model(model_path: str) -> Model

    • Loads a model from the specified path and adds it to the pipeline.
  • get_models() -> List[Model]

    • Returns a list of models currently loaded in the pipeline.
  • run(image: np.ndarray) -> InferencePipeline

    • Runs inference on the provided image.
  • get_results()

    • Retrieves the results of the last inference run.
  • get_result_image(object_opacity: float = 0.5, pil_format: bool = False)

    • Returns the result image with objects drawn at the given object_opacity; set pil_format to True to receive the image in PIL format.
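
A brief sketch of how these methods might be combined; the token, model path, and image below are placeholders, not values supplied by the SDK:

import numpy as np
import denk_sdk

pipeline = denk_sdk.InferencePipeline(device=denk_sdk.Device.CPU, token="your_token")
print(pipeline.get_backend_library())   # backend type and version

model = pipeline.add_model("path/to/model.denk")
print(len(pipeline.get_models()))       # number of loaded models

image = np.zeros((480, 640, 3), dtype=np.uint8)  # replace with a real image
pipeline.run(image)
result_image = pipeline.get_result_image(object_opacity=0.5, pil_format=False)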

Model

Represents a model loaded into the InferencePipeline.

Constructor

Model(parent_pipeline: InferencePipeline, model_path: str)
  • Parameters:
    • parent_pipeline: The InferencePipeline instance to which this model belongs.
    • model_path: The path to the model file.

Attributes

  • pre: An instance of the Pre class for pre-processing settings.
  • post: An instance of the Post class for post-processing settings.

Pre

Handles pre-processing settings for a model.

Methods

  • set_evaluation_size(width: int, height: int) -> Pre

    • Sets the evaluation size for the model.
  • set_image_partitioning(partitions_in_width: int, partitions_in_height: int) -> Pre

    • Sets the image partitioning for the model.
  • set_moving_window(active: bool) -> Pre

    • Enables or disables the moving window feature.
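
Each setter returns the Pre instance, so calls can be chained. A minimal sketch, with a placeholder token, model path, and settings:

import denk_sdk

pipeline = denk_sdk.InferencePipeline(token="your_token")
model = pipeline.add_model("path/to/model.denk")

# Chain pre-processing settings; each call returns the same Pre instance.
model.pre.set_evaluation_size(1024, 768) \
    .set_image_partitioning(2, 2) \
    .set_moving_window(True)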

Post

Handles post-processing settings for a model.

Methods

  • filter_by_confidence(minimum_confidence: float) -> Post

    • Filters results by a minimum confidence threshold.
  • set_segmentation_threshold(segmentation_threshold: float) -> Post

    • Sets the segmentation threshold for the model.
  • filter_overlapping_bounding_boxes(overlap_threshold: float) -> Post

    • Filters overlapping bounding boxes based on the specified threshold.
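
As with Pre, each setter returns the Post instance and can be chained. A minimal sketch with placeholder thresholds:

# Assuming `model` was obtained via pipeline.add_model(...) as above.
model.post.filter_by_confidence(0.5) \
    .set_segmentation_threshold(0.5) \
    .filter_overlapping_bounding_boxes(0.3)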

Enums

Device

An enumeration of available devices for running inference.

  • CPU = -1
  • GPU1 = 0
  • GPU2 = 1
  • GPU3 = 2
  • GPU4 = 3
  • GPU5 = 4
  • GPU6 = 5
  • GPU7 = 6
  • GPU8 = 7
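
For example, to run inference on the first GPU instead of the CPU (the token is a placeholder):

import denk_sdk

pipeline = denk_sdk.InferencePipeline(device=denk_sdk.Device.GPU1, token="your_token")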

Example Usage

import denk_sdk
import numpy as np

pipeline = denk_sdk.InferencePipeline(token="your_token", device=denk_sdk.Device.CPU)
model = pipeline.add_model("path/to/model.denk")

image = np.zeros((480, 640, 3), dtype=np.uint8)  # replace with your own image as a NumPy array
results = pipeline.run(image).get_results()
print(results)

Return Overview

The protobuf object returned by get_results() contains the following fields:

  • classification_models: List of classification models and their results.

    • model_id: Identifier of the model.
    • model_name: Name of the model.
    • run_time_ms: Time taken to run the model in milliseconds.
    • classes: List of classification results.
      • class_id: Identifier of the class.
      • class_name: Name of the class.
      • class_color: Color associated with the class.
      • classifier: Confidence score of the classification.
  • segmentation_models: List of segmentation models and their results.

    • model_id: Identifier of the model.
    • model_name: Name of the model.
    • run_time_ms: Time taken to run the model in milliseconds.
    • classes: List of segmentation results.
      • class_id: Identifier of the class.
      • class_name: Name of the class.
      • class_color: Color associated with the class.
      • objects: List of segmented objects.
        • bounding_box: Bounding box of the object.
        • oriented_bounding_box: Oriented bounding box of the object.
        • confidence: Confidence score of the segmentation.
        • segmentation: Segmentation mask.
  • instance_segmentation_models: List of instance segmentation models and their results.

    • Same structure as segmentation_models.
  • object_detection_models: List of object detection models and their results.

    • model_id: Identifier of the model.
    • model_name: Name of the model.
    • run_time_ms: Time taken to run the model in milliseconds.
    • classes: List of object detection results.
      • class_id: Identifier of the class.
      • class_name: Name of the class.
      • class_color: Color associated with the class.
      • objects: List of detected objects.
        • bounding_box: Bounding box of the object.
        • oriented_bounding_box: Oriented bounding box of the object.
        • confidence: Confidence score of the detection.
  • anomaly_detection_models: List of anomaly detection models and their results.

    • model_id: Identifier of the model.
    • model_name: Name of the model.
    • run_time_ms: Time taken to run the model in milliseconds.
    • classes: List of anomaly detection results.
      • class_id: Identifier of the class.
      • class_name: Name of the class.
      • class_color: Color associated with the class.
      • anomaly_score: Anomaly score.
      • objects: List of detected anomalies.
        • bounding_box: Bounding box of the anomaly.
        • oriented_bounding_box: Oriented bounding box of the anomaly.
        • confidence: Confidence score of the anomaly.
        • segmentation: Segmentation mask.
  • optical_character_recognition_models: List of OCR models and their results.

    • model_id: Identifier of the model.
    • model_name: Name of the model.
    • run_time_ms: Time taken to run the model in milliseconds.
    • text: Recognized text.
    • confidence: Confidence score of the OCR.
  • barcode_reading_models: List of barcode reading models and their results.

    • model_id: Identifier of the model.
    • model_name: Name of the model.
    • run_time_ms: Time taken to run the model in milliseconds.
    • classes: List of barcode reading results.
      • class_id: Identifier of the class.
      • class_name: Name of the class.
      • class_color: Color associated with the class.
      • barcode_type: Type of the barcode.
      • objects: List of detected barcodes.
        • bounding_box: Bounding box of the barcode.
        • oriented_bounding_box: Oriented bounding box of the barcode.
        • text: Decoded text from the barcode.
        • bytes: Raw bytes of the barcode.

Example Return

{
    "classification_models": [
        {
            "model_id": "model_1",
            "model_name": "Classification Model 1",
            "run_time_ms": 123.45,
            "classes": [
                {
                    "class_id": "class_1",
                    "class_name": "Class 1",
                    "class_color": {"red": 255, "green": 0, "blue": 0},
                    "classifier": 0.95
                }
            ]
        }
    ],
    "segmentation_models": [
        {
            "model_id": "model_2",
            "model_name": "Segmentation Model 1",
            "run_time_ms": 234.56,
            "classes": [
                {
                    "class_id": "class_2",
                    "class_name": "Class 2",
                    "class_color": {"red": 0, "green": 255, "blue": 0},
                    "objects": [
                        {
                            "bounding_box": {"top_left": {"x": 10, "y": 20}, "bottom_right": {"x": 30, "y": 40}},
                            "oriented_bounding_box": {"by_center": {"center": {"x": 20, "y": 30}, "width": 20, "height": 20, "angle": 0, "full_orientation": True}},
                            "confidence": 0.85,
                            "segmentation": b'\x89PNG\r\n\x1a\n...'
                        }
                    ]
                }
            ]
        }
    ]
}
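
A sketch of how a result like the one above might be traversed in Python, assuming the fields listed in the Return Overview are exposed as attributes on the returned protobuf messages (pipeline and image as in the Example Usage section):

results = pipeline.run(image).get_results()

for model_result in results.segmentation_models:
    print(model_result.model_name, model_result.run_time_ms, "ms")
    for cls in model_result.classes:
        for obj in cls.objects:
            box = obj.bounding_box
            print(f"  {cls.class_name}: confidence {obj.confidence:.2f}, "
                  f"box ({box.top_left.x}, {box.top_left.y}) - "
                  f"({box.bottom_right.x}, {box.bottom_right.y})")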