
Intelligence Coordination Module

Overview

The Intelligence Coordination Module is a key component of the ICOS platform, designed to enable AI-driven optimization, predictive analytics, and collaborative model sharing across the edge-cloud continuum. Acting as the interface between the Meta-Kernel and User layers, it orchestrates the full lifecycle of machine learning models - from training and inference to monitoring and explainability.

Key features of the module include:

  • Real-time forecasting of CPU and memory usage for ICOS agents.
  • Support for multivariate prediction using LSTM models (proof of concept).
  • Drift detection and model explainability (SHAP integration).
  • Confidence intervals and scores with every prediction.
  • Model compression through quantization and knowledge distillation.
  • Federated learning for privacy-preserving distributed training.
  • Integration with telemetry systems like Prometheus, Thanos, and Grafana.
  • Seamless interoperability via the Export Metrics API and Intelligence API backend.
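To make the "confidence intervals with every prediction" feature concrete, here is a minimal, illustrative sketch of the idea. The module's actual forecasts come from its LSTM models; this example instead uses a naive persistence forecast and derives a 95% interval from the spread of one-step residuals. The function name and the sample data are illustrative, not part of the ICOS API.

```python
import statistics

def forecast_with_interval(history, z=1.96):
    """Naive one-step forecast (predict the last observed value) plus a
    confidence interval derived from one-step residuals."""
    # Residuals of the persistence forecast: how much each value
    # deviated from the previous one.
    residuals = [b - a for a, b in zip(history, history[1:])]
    sigma = statistics.stdev(residuals)
    prediction = history[-1]
    return prediction, (prediction - z * sigma, prediction + z * sigma)

# Example: recent CPU utilisation samples (percent) for an ICOS agent.
cpu_history = [41.0, 43.5, 42.0, 44.2, 43.1, 45.0, 44.4]
pred, (low, high) = forecast_with_interval(cpu_history)
print(f"predicted CPU: {pred:.1f}% (95% interval: {low:.1f}%..{high:.1f}%)")
```

A production forecaster would replace the persistence model with the trained LSTM, but the interval-reporting pattern is the same: every prediction is returned together with its uncertainty bounds.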

Deployed alongside the ICOS Controller, the module can be installed via Docker or Helm, with secure access managed through Keycloak. All services are exposed as RESTful APIs with OpenAPI specifications and a Swagger UI.
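Since access is Keycloak-protected, clients attach a Keycloak-issued bearer token to every REST call. The sketch below shows only that authentication pattern; the endpoint URL, token value, and request payload are placeholders, and the real paths and parameters come from your deployment's OpenAPI spec.

```python
import json
import urllib.request

# Placeholders: substitute your deployment's Intelligence API URL and a
# token obtained from Keycloak beforehand.
API_URL = "http://localhost:8080/predict"
ACCESS_TOKEN = "example-token"

def build_request(url, token, payload):
    """Build an authenticated JSON POST request carrying a bearer token,
    as expected by a Keycloak-protected REST service."""
    body = json.dumps(payload).encode("utf-8")
    req = urllib.request.Request(url, data=body, method="POST")
    req.add_header("Authorization", f"Bearer {token}")
    req.add_header("Content-Type", "application/json")
    return req

req = build_request(API_URL, ACCESS_TOKEN, {"metric": "cpu_usage"})
# urllib.request.urlopen(req) would send it; omitted here because it
# requires a running Intelligence API instance.
```

The same header pattern applies whether the call is made from the ICOS CLI, JupyterHub, or a custom client.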


Documentation Index

Below is a list of related documentation sections to help you navigate the Intelligence Layer and its capabilities:

  1. API Documentation
    Describes the REST API endpoints, their parameters, expected inputs/outputs, and usage for training, prediction, and model management.

  2. Backend Services
    Details the Intelligence API endpoints provided by the backend, including how to trigger model training, inference, drift detection, and launch MLFlow.

  3. Deployment Guide
    Explains how to deploy the Intelligence Layer using Docker, including example commands, environment variables, and dependencies.

  4. Usage
    Offers usage instructions for interacting with the Intelligence Layer via the ICOS CLI or JupyterHub, including commands like train metrics and predict metrics.

  5. Development & Contribution
    Provides an overview of the project structure, folder layout, and how to extend or contribute to the Intelligence Layer - especially within the oasis directory (e.g., adding new models or analytics).


  • To interact with the Intelligence Layer, refer to Usage.
  • To understand and run the API locally, refer to API Docs.
  • To launch the Docker container and access MLFlow or Jupyter, see Deployment.
  • To contribute to the codebase, follow the Development Guide.

For an architectural overview of the ICOS Intelligence Layer, refer to the ICOS Concepts.