Deploy Creatio.ai on-site
To deploy Creatio.ai on-site, deploy and configure generative AI services for Creatio using Docker Compose or Kubernetes. Use Docker Compose for simple deployments on a single Linux server. Use Kubernetes for production-grade deployments with scalability and high availability. This procedure enables Creatio to use AI-powered features through a connected generative AI service.
Creatio.ai is pre-configured out of the box in Creatio cloud.
Before you start deployment, ensure you have access to the following:
- Linux machine or Kubernetes cluster
- Creatio instance with administrator rights
- valid credentials for registry.creatio.com where all service images are stored
- OpenAI or Azure OpenAI API keys
- open firewall ports: 5006 for Docker Compose, 30082 for Kubernetes NodePort, or the port of your custom Ingress
Service overview
Deployment of Creatio.ai involves multiple core services managed via Docker Compose. Each service plays a key role in the functioning of the generative AI system.
- enrichment-service. Web service responsible for handling generative AI requests such as chat completions. It acts as the main REST API endpoint, processing input text from clients, for example, Creatio, and returning responses generated by an underlying language model.
- litellm. Service that provides an abstraction layer for large language model (LLM) APIs. Enables easy integration with multiple LLM providers, for example, OpenAI and Azure OpenAI. Offers a unified API interface for making chat, completion, and embedding requests. Simplifies model management, routing, and provider switching with minimal configuration changes.
- postgres. PostgreSQL database that stores operational statistics, for example, request logs.
Deploy services using Docker Compose
Use Docker Compose for simple deployments on a single Linux server. The setup requires actions on your server as well as in Creatio.
Setup on the server
- Install Docker on a physical or virtual Linux machine. Learn more: Install Docker Engine on Debian (official vendor documentation).
- Verify the installation. To do this, run the following command at the command prompt:

  ```
  docker --version
  ```

- Install Docker Compose. Learn more: Overview of installing Docker Compose (official vendor documentation).
- Verify the installation. To do this, run the following command at the command prompt:

  ```
  docker-compose --version
  ```

- Download and unpack the archive that contains Docker Compose files. Download.
- Go to the docker-compose directory → edit the *.env file to configure the required environment variables. Keep the file secure as it contains sensitive API keys. Fill out the environment variables depending on the LLM service you are using:
Environment variables for OpenAI LLM

| Variable | Description |
|---|---|
| OPENAI_MODEL | OpenAI provider and model ID, for example, "openai/gpt-4o" |
| OPENAI_EMBEDDING_MODEL | OpenAI embedding model ID, for example, "openai/text-embedding-3-small" |
| OPENAI_API_KEY | API key for OpenAI authentication |
| OPENAI_API_KEY_TEXT_EMBEDDING | Separate API key for embeddings. Optional |
Environment variables for Azure OpenAI LLM

| Variable | Description |
|---|---|
| AZURE_MODEL | Azure provider and model ID, for example, "azure/gpt-4o-2024-11-20" |
| AZURE_EMBEDDING_MODEL | Azure embedding model ID, for example, "azure/text-embedding-3-small" |
| AZURE_API_KEY | Azure subscription key for authentication |
| AZURE_API_TEXT_EMBEDDING | API key for the Azure embedding service. Optional |
| AZURE_DEPLOYMENTID | Deployment ID of the Azure OpenAI model |
| AZURE_RESOURCENAME | Name of the Azure resource instance |
| AZURE_API_BASE | Base URL of the Azure OpenAI endpoint |
| AZURE_EMBEDDING_API_BASE | Base URL of the Azure embedding endpoint |
| AZURE_API_VERSION | Azure API version, for example, "2023-07-01-preview" |
Default models

| Variable | Description |
|---|---|
| GenAI__DefaultModel | ID or name of the default language generation model. Take it from the model_name key of the "/etc/litellm-config.yaml" file |
| GenAI__EmbeddingsModel | ID or name of the default text embeddings model. Take it from the model_name key of the "/etc/litellm-config.yaml" file |
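Putting the variables together, a minimal `.env` sketch for the OpenAI case might look like the following. Every value is a placeholder, the embedding model name is an assumption, and the default-model names should be copied from the model_name entries in your own litellm-config.yaml:

```shell
# .env — example values only; replace every placeholder with your own data
OPENAI_MODEL="openai/gpt-4o"
OPENAI_EMBEDDING_MODEL="openai/text-embedding-3-small"
OPENAI_API_KEY="sk-your-openai-api-key"
# OPENAI_API_KEY_TEXT_EMBEDDING is optional; set it only if embeddings use a separate key

# Default models: copy these from the model_name entries in litellm-config.yaml
GenAI__DefaultModel="gpt-4o"
GenAI__EmbeddingsModel="text-embedding-3-small"
```

Because the file contains API keys, restrict its permissions, for example with `chmod 600 .env`.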
- Log in to the Docker registry. To do this, run the following command at the command prompt:

  ```
  docker login registry.creatio.com -u your-username -p your-password
  ```

- Open a terminal and go to the docker-compose directory.
- Run the following command at the command prompt:

  ```
  docker-compose up -d
  ```
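Before bringing the stack up, it can help to confirm the `.env` file actually defines the variables the services expect. The following POSIX shell sketch checks only the OpenAI variables from the tables above; extend the list for the Azure case:

```shell
# check_env FILE — report any required variable missing from a .env file
check_env() {
    file="$1"; missing=0
    # Variable list covers the OpenAI case from the tables above
    for var in OPENAI_MODEL OPENAI_API_KEY GenAI__DefaultModel; do
        grep -q "^${var}=" "$file" || { echo "missing: $var"; missing=1; }
    done
    [ "$missing" -eq 0 ] && echo "env ok"
}

# Example: validate a sample file
cat > /tmp/sample.env <<'EOF'
OPENAI_MODEL=openai/gpt-4o
OPENAI_API_KEY=sk-placeholder
GenAI__DefaultModel=gpt-4o
EOF
check_env /tmp/sample.env
```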
Setup in Creatio
- Open the System Designer.
- Go to the System setup block → System settings.
- Open the "Account enrichment service url" (AccountEnrichmentServiceUrl code) system setting.
- Enter the following value:

  ```
  http://[your_server_ip_address]:5006
  ```

  Replace [your_server_ip_address] with the actual host. Ensure the firewall allows inbound traffic on port 5006.
- Save the changes.
- Test the configuration (optional):
  - Open Creatio.ai chat.
  - Send a test message.
  - Verify the assistant responds.
As a result, the generative AI service will be deployed and connected to your Creatio instance. Creatio connects to the service through the Creatio.ai chat and the enrichment service settings, enabling AI-powered functionality.
Deploy services using Kubernetes
Use Kubernetes for production-grade deployments with scalability and high availability. The setup requires actions on your cluster as well as in Creatio.
Before you perform the setup, check the following:
- Confirm the Kubernetes cluster is version 1.19 or later.
- Confirm Helm is version 3.0 or later.
- Confirm kubectl is configured.

To do this, run the following commands at the command prompt:

```
kubectl cluster-info
kubectl get nodes
helm version
```

If all commands complete without errors, proceed with the setup.
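The version requirements above can also be checked in a script. The sketch below compares dot-separated version strings with `sort -V` (GNU coreutils); feed it the versions reported by `kubectl version` and `helm version`:

```shell
# version_ge A B — succeeds when version A >= version B
# (dot-separated numeric components, compared via GNU sort -V)
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Example checks against the minimums this guide requires
version_ge "1.27" "1.19" && echo "kubernetes version ok"
version_ge "3.14" "3.0" && echo "helm version ok"
```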
Setup on the cluster
- Download and unpack the archive that contains Helm files. Download.
- Go to the helm/genai directory → edit the "values.onsite.yaml" file.
- Configure the Docker registry credentials:

  ```
  dockerRegistry:
    username: <your-username>
    password: <your-password>
    email: <your-email>
  ```
Configure the LLM provider:
- OpenAI
- Azure OpenAI
appConfig:
genAI:
llmProviders:
models:
openai:
- name: gpt-4o
model: gpt-4o
api_key: sk-your-openai-api-key # Replace with your actual OpenAI API key
# - name: text-embedding-3-small # Optional embedding model (disabled by default)
# model: text-embedding-3-small
# api_key: sk-your-openai-api-key # Replace with your actual OpenAI API key
defaultModel: gpt-4o
# embeddingsModel: text-embedding-3-small # Optional - uncomment if using embeddingsappConfig:
genAI:
llmProviders:
models:
azure:
- name: azure-gpt-4o
model: gpt-4o-2024-11-20 # Replace with your actual Azure model deployment
resource_name: your-azure-resource # Replace with your actual Azure resource name
api_key: your-azure-api-key # Replace with your actual Azure API key
# - name: azure-text-embedding # Optional embedding model (disabled by default)
# model: text-embedding-3-small # Replace with your actual Azure embedding model
# resource_name: your-azure-resource # Replace with your actual Azure resource name
# api_key: your-azure-api-key # Replace with your actual Azure API key
defaultModel: azure-gpt-4o
# embeddingsModel: azure-text-embedding # Optional - uncomment if using embeddingsThe
defaultModelandembeddingsModelvalues must match thenamekey from the model definitions above verbatim. -
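A mismatch between defaultModel and the model name keys is an easy mistake to make, and a quick grep-based check can catch it before deployment. This is a rough sketch run against a trimmed sample file; the indentation of your actual values.onsite.yaml may differ, and a YAML-aware tool such as yq would be more robust:

```shell
# Trimmed sample of a values file, for illustration only
cat > /tmp/values.sample.yaml <<'EOF'
appConfig:
  genAI:
    llmProviders:
      models:
        openai:
          - name: gpt-4o
            model: gpt-4o
    defaultModel: gpt-4o
EOF

# Extract defaultModel, then confirm a model definition with that name exists
default=$(grep 'defaultModel:' /tmp/values.sample.yaml | awk '{print $2}')
grep -q "name: ${default}\$" /tmp/values.sample.yaml && echo "defaultModel ok"
```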
- Configure service access. Out of the box, the service is accessible via NodePort on port 30082. You can leave it as is, change the port, or enable Ingress.

  To change the port:

  ```
  service:
    type: NodePort
    nodePort: 30082
  ```

  To enable Ingress instead of NodePort:

  ```
  ingress:
    enabled: true
    hosts:
      - genai.example.com # Replace with your DNS name
  ```
- Configure PostgreSQL storage in the "values.onsite.yaml" file (optional). Out of the box, PostgreSQL runs without persistent storage. This is normal behavior since the database only stores non-critical information such as request counts and token usage statistics. If you require persistent storage, add this to your configuration:

  ```
  postgresql:
    primary:
      persistence:
        enabled: true
  ```
- Deploy the generative AI service using Helm. To do this, go to the directory that contains the "values.onsite.yaml" file → run the following command at the command prompt:

  ```
  helm upgrade --install genai . -f values.onsite.yaml
  ```

- Verify the deployment. To do this, run the following commands at the command prompt:

  ```
  kubectl get pods
  kubectl get services
  ```

  All pods must be in Running state except flyway. The service must show NodePort, which is 30082 out of the box.
- Determine the service URL.

  To find your Kubernetes node IP, run the following command at the command prompt:

  ```
  kubectl get nodes -o wide
  ```

  For NodePort:

  ```
  http://<kubernetes_node_ip>:30082
  ```

  For Ingress:

  ```
  http://genai.example.com
  ```
- Check readiness. To do this, run one of the following commands at the command prompt:

  For NodePort:

  ```
  curl -X GET http://192.168.1.100:30082/readiness # Replace 192.168.1.100 with your actual node IP
  ```

  For Ingress:

  ```
  curl -X GET http://genai.yourdomain.com/readiness # Replace with your actual domain
  ```

  The expected result is HTTP 200 and readiness status information.
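Pods can take a while to become ready right after the Helm release is installed, so a first readiness probe may fail. A small retry helper, sketched below, keeps polling until the command succeeds; the commented example URL is a placeholder:

```shell
# retry N cmd... — run cmd up to N times, pausing 1 second between attempts
retry() {
    attempts="$1"; shift
    i=0
    while [ "$i" -lt "$attempts" ]; do
        "$@" && return 0
        i=$((i + 1))
        sleep 1
    done
    return 1
}

# Example against the readiness endpoint (replace the URL with yours):
# retry 30 curl -fsS http://<kubernetes_node_ip>:30082/readiness
retry 3 true && echo "retry helper works"
```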
Setup in Creatio
- Open the System Designer.
- Go to the System setup block → System settings.
- Open the "Account enrichment service url" (AccountEnrichmentServiceUrl code) system setting.
- Set the value of the setting to the URL you got from the deployment.

  For NodePort:

  ```
  http://192.168.1.100:30082 # Replace 192.168.1.100 with your actual node IP
  ```

  For Ingress:

  ```
  http://genai.yourdomain.com # Replace with your actual domain
  ```

- Save the changes.
- Test the configuration (optional):
  - Open Creatio.ai chat.
  - Send a test message.
  - Verify the assistant responds.
As a result, the generative AI service will be deployed and connected to your Creatio instance. Creatio connects to the service through the Creatio.ai chat and the enrichment service settings, enabling AI-powered functionality.