Usage¶
User Interaction via ICOS Shell¶
The Intelligence Layer provides CLI-based user interaction through the Export Metrics API, integrated within the ICOS Shell. Users can access the intelligence functionality through two primary commands:

- `train metrics`
- `predict metrics`
These commands issue POST requests from the ICOS Shell backend to the following Export Metrics API endpoints:

- `/train_model_metric`
- `/create_model_metric`
Each request includes an authentication token (validated by Keycloak) and a JSON payload with all parameters required for training or prediction. The CLI displays the outcome once the process completes. Future releases will also integrate these capabilities into the ICOS GUI.
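As an illustration, below is a minimal Python sketch of such a request. Only the endpoint path and the Keycloak bearer token come from the description above; the base URL and payload fields are hypothetical placeholders.

```python
import requests

API_BASE = "http://localhost:3000"        # assumption: local deployment
TOKEN = "<keycloak-access-token>"         # obtained from Keycloak

payload = {
    "model_name": "cpu_usage_forecaster",  # illustrative parameter names only
    "steps_back": 12,
}

resp = requests.post(
    f"{API_BASE}/train_model_metric",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
    timeout=600,
)
print(resp.status_code, resp.json())
```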
First-Time Usage¶
To get started with DataClay and the Intelligence Layer backend:
```bash
docker compose restart dataclay-backend
docker compose up    # Starts all services
docker compose down  # Stops services
```
Then build and containerize the model service:
```bash
bentoml build -f ./bentofile.yaml
bentoml containerize analytics:ID  # Replace ID with your specific model tag
```
To serve the container with GPU support:
```bash
docker run --network host -it --rm -p 3000:3000 -p 5000:5000 \
  --cpus 7.5 --memory 14g \
  -e BENTOML_CONFIG_OPTIONS='api_server.traffic.timeout=600 runners.resources.cpu=0.5 runners.resources."nvidia.com/gpu"=0' \
  analytics:ID serve  # Replace ID with the actual container tag
```
Refer to the Deployment section for more detailed setup instructions.
Configuring JupyterHub (AI Support Container)¶
To use JupyterHub inside the AI Support container:
1. Access the container.
2. Create a user.
3. Launch JupyterHub.
4. Log in via browser: navigate to the appropriate address and sign in with the credentials you created.
Trustworthy AI Module¶
Explainable AI¶
The Explainable AI component uses SHAP (SHapley Additive exPlanations) for model interpretability. Example usage:
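A minimal SHAP sketch on a tree-based model; the random-forest regressor and synthetic data are hypothetical stand-ins for the actual ICOS models and metrics.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Synthetic features/target standing in for ICOS metrics data.
rng = np.random.default_rng(0)
X = rng.random((200, 4))
y = 2.0 * X[:, 0] + X[:, 1]

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:20])

# Per-feature contribution summary for the explained samples.
shap.summary_plot(shap_values, X[:20])
```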
Use a consistent MLFlow tag to group all experiment artifacts.

Prediction Confidence Scores¶
Each model prediction includes confidence scores and intervals to quantify reliability and support better decision-making.
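One way such scores can be derived is sketched below: the spread of predictions across a hypothetical ensemble is turned into an interval and a simple confidence score. This is an illustration only, not the module's actual method.

```python
import numpy as np

# Hypothetical predictions for one input from 20 ensemble members.
member_preds = np.random.default_rng(1).normal(loc=0.8, scale=0.05, size=20)

point = member_preds.mean()                              # point forecast
lower, upper = np.percentile(member_preds, [2.5, 97.5])  # 95% interval
confidence = 1.0 / (1.0 + member_preds.std())            # illustrative score in (0, 1]

print(f"prediction={point:.3f}  95% CI=[{lower:.3f}, {upper:.3f}]  confidence={confidence:.3f}")
```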
Model Monitoring¶
The monitoring component tracks model performance in production using NannyML. Drift detection runs automatically and may trigger model retraining.
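A minimal NannyML drift-detection sketch, assuming a single numeric metric column; the data frames and column name are hypothetical stand-ins for training-time and live metrics.

```python
import numpy as np
import pandas as pd
import nannyml as nml

rng = np.random.default_rng(0)
# Reference = metrics seen at training time; analysis = live metrics (shifted).
reference = pd.DataFrame({"cpu_load": rng.normal(0.5, 0.1, 1000)})
analysis = pd.DataFrame({"cpu_load": rng.normal(0.7, 0.1, 1000)})

calc = nml.UnivariateDriftCalculator(
    column_names=["cpu_load"],
    continuous_methods=["kolmogorov_smirnov"],
    chunk_size=250,
)
calc.fit(reference)
results = calc.calculate(analysis)

# Per-chunk drift flags; a positive flag could trigger retraining.
print(results.to_df())
```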
Federated Learning¶
Federated Learning is supported using the Flower framework to enable privacy-preserving training across distributed nodes. Raw data remains local, ensuring compliance with privacy standards.
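A minimal Flower client sketch, assuming a Flower server at 127.0.0.1:8080; the model (a single weight vector) and the local training step are placeholders.

```python
import flwr as fl
import numpy as np

class MetricsClient(fl.client.NumPyClient):
    """Toy client: one weight vector stands in for a real model."""

    def __init__(self):
        self.weights = [np.zeros(4)]

    def get_parameters(self, config):
        return self.weights

    def fit(self, parameters, config):
        self.weights = parameters           # receive the global model
        # ... train locally on this node's private data here ...
        return self.weights, 100, {}        # updated weights, n_examples, metrics

    def evaluate(self, parameters, config):
        return 0.0, 100, {"accuracy": 1.0}  # loss, n_examples, metrics

# Raw data never leaves the node; only model parameters travel to the server.
fl.client.start_numpy_client(server_address="127.0.0.1:8080", client=MetricsClient())
```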
AI Analytics¶
The AI Analytics module powers training, inference, model compression, and logging. It supports:
- Univariate/Multivariate Forecasting using LSTM models
- Experiment Tracking via MLFlow
- Model Compression via quantization and distillation (a quantization sketch follows below)
These features help optimize resource usage while maintaining model accuracy and transparency.
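As one example of the compression step, here is a hedged sketch of post-training dynamic quantization in PyTorch; the `Forecaster` architecture is hypothetical and stands in for the module's LSTM models.

```python
import torch
import torch.nn as nn

class Forecaster(nn.Module):
    """Hypothetical LSTM forecaster standing in for the module's models."""

    def __init__(self, n_features: int = 1, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # next-step prediction from last state

model = Forecaster()

# Post-training dynamic quantization: int8 weights shrink the model and
# speed up CPU inference with little accuracy loss.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.LSTM, nn.Linear}, dtype=torch.qint8
)

x = torch.randn(8, 12, 1)  # batch of 8 windows, 12 timesteps, 1 feature
print(quantized(x).shape)  # torch.Size([8, 1])
```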