Backend¶
Intelligence Coordination – Backend Endpoints¶
The backend of the Intelligence Coordination module exposes a set of RESTful APIs for model training, inference, drift detection, anomaly retrieval, and experiment tracking. These endpoints allow full parameterization and automation of machine learning workflows, making them a critical part of the Intelligence Layer design. The APIs can be accessed via Swagger UI or curl commands.
AI Analytics API Endpoints¶
These endpoints are exposed through the Intelligence Layer backend (port 3000) and are used to manage the lifecycle of AI models deployed within the ICOS system.
| Endpoint | Method | Description | Port |
|---|---|---|---|
| `/train` | POST | Triggers model training with JSON-formatted input. | 3000 |
| `/predict` | POST | Generates predictions based on the provided data. | 3000 |
| `/detect_drift` | POST | Identifies and returns detected data drifts. | 3000 |
| `/get_anomalies` | POST | Detects and returns anomalies in the dataset. | 3000 |
| `/launch_mlflow_ui` | POST | Provides the URL for accessing the MLflow UI for experiment tracking. | 5000 |
🔍 Note: The `/get_anomalies` endpoint connects the Intelligence Layer with the Security Layer (LOMOS), allowing anomaly detection results to be retrieved through the Intelligence API.
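Beyond Swagger UI and curl, the endpoints can be exercised from any HTTP client. The sketch below builds a POST request against `/train` in Python; the host and the payload fields are illustrative assumptions, since the actual request schema depends on the model configuration deployed in the Intelligence Layer.

```python
import json
import urllib.request

# Hypothetical backend address; replace with the actual Intelligence Layer host.
BASE_URL = "http://localhost:3000"

# Illustrative JSON-formatted training input; field names are assumptions,
# not the documented schema.
payload = json.dumps({"model_name": "example", "data": [[0.1, 0.2], [0.3, 0.4]]}).encode()

# Build the POST request for the /train endpoint.
req = urllib.request.Request(
    f"{BASE_URL}/train",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Sending it requires a running Intelligence Layer backend:
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())
```

The other endpoints (`/predict`, `/detect_drift`, `/get_anomalies`) follow the same pattern, differing only in the path and request body.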
AI Support Container – Additional Endpoints¶
The AI Support Container is designed for edge devices and includes all the endpoints listed above. Additionally, it provides services for running and managing JupyterLab sessions.
AI Support Container Endpoints¶
| Endpoint | Method | Description | Port |
|---|---|---|---|
| `/core/analytics_jupyterlab_service/` | POST | Launches a JupyterLab session inside the container. | 8080 |
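A session launch can be sketched in the same way as the analytics endpoints above. The host is a placeholder, and the empty request body is an assumption; any real launch parameters are deployment-specific.

```python
import urllib.request

# Hypothetical edge-device address; port 8080 serves the AI Support Container.
BASE_URL = "http://localhost:8080"

# Build the POST request that launches a JupyterLab session.
req = urllib.request.Request(
    f"{BASE_URL}/core/analytics_jupyterlab_service/",
    data=b"",  # empty body; real launch parameters, if any, are deployment-specific
    method="POST",
)

# Sending it requires a running AI Support Container:
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())
```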
⚠️ Note: A JupyterHub service is also available in the AI Support container. It allows user logins and multi-user notebook access. However, admin access is required to create and manage user accounts. Detailed setup instructions are provided in the following section.