Version: 8.3

Deploy Creatio.ai on-site

To deploy Creatio.ai on-site, set up and configure the generative AI services for Creatio using Docker Compose or Kubernetes. Use Docker Compose for simple deployments on a single Linux server. Use Kubernetes for production-grade deployments with scalability and high availability. This procedure enables Creatio to use AI-powered features through a connected generative AI service.

Creatio.ai is pre-configured out of the box in Creatio cloud.

Before you start deployment, ensure you have access to the following:

  • Linux machine or Kubernetes cluster
  • Creatio instance with administrator rights
  • valid credentials for registry.creatio.com where all service images are stored
  • OpenAI or Azure OpenAI API keys
  • firewall rules that allow traffic on the required ports: 5006 for Docker Compose, 30082 for the Kubernetes NodePort, or the ports of your custom Ingress

Service overview

Deployment of Creatio.ai involves multiple core services, managed via Docker Compose or Kubernetes. Each service plays a key role in the functioning of the generative AI system.

  • enrichment-service. Web service responsible for handling generative AI requests such as chat completions. It acts as the main REST API endpoint, processing input text from clients, for example, Creatio, and returning responses generated by an underlying language model.
  • litellm. Service that provides an abstraction layer for large language model (LLM) APIs. Enables easy integration with multiple LLM providers, for example, OpenAI and Azure OpenAI. Offers a unified API interface for making chat, completion, and embedding requests. Simplifies model management, routing, and provider switching with minimal configuration changes.
  • postgres. PostgreSQL database that stores operational statistics, for example, request logs.

Deploy services using Docker Compose

Use Docker Compose for simple deployments on a single Linux server. The setup requires actions on your server as well as in Creatio.

Setup on the server

  1. Install Docker on a physical or virtual Linux machine. Learn more: Install Docker Engine on Debian (official vendor documentation).

  2. Verify the installation. To do this, run the following command at the command prompt:

    docker --version
  3. Install Docker Compose. Learn more: Overview of installing Docker Compose (official vendor documentation).

  4. Verify the installation. To do this, run the following command at the command prompt:

    docker-compose --version
  5. Download and unpack the archive that contains Docker Compose files. Download.

  6. Go to the docker-compose directory → edit the .env file to configure the required environment variables. Keep the file secure as it contains sensitive API keys.

    Fill out environment variables depending on the LLM service you are using:

    Environment variables for OpenAI LLM

      • OPENAI_MODEL. OpenAI provider and model ID, for example, "openai/gpt-4o".
      • OPENAI_EMBEDDING_MODEL. OpenAI embedding model ID, for example, "openai/text-embedding-3-small".
      • OPENAI_API_KEY. API key for OpenAI authentication.
      • OPENAI_API_KEY_TEXT_EMBEDDING. Separate API key for embeddings. Optional.

    Environment variables for Azure OpenAI LLM

      • AZURE_MODEL. Azure provider and model ID, for example, "azure/gpt-4o-2024-11-20".
      • AZURE_EMBEDDING_MODEL. Azure embedding model ID, for example, "azure/text-embedding-ada-002".
      • AZURE_API_KEY. Azure subscription key for authentication.
      • AZURE_API_TEXT_EMBEDDING. API key for the Azure embedding service. Optional.
      • AZURE_DEPLOYMENTID. Deployment ID of the Azure OpenAI model.
      • AZURE_RESOURCENAME. Name of the Azure resource instance.
      • AZURE_API_BASE. Base URL of the Azure OpenAI endpoint.
      • AZURE_EMBEDDING_API_BASE. Base URL of the Azure embedding endpoint.
      • AZURE_API_VERSION. Azure API version, for example, "2023-07-01-preview".

    Default models

      • GenAI__DefaultModel. ID or name of the default language generation model. Take it from the model_name key in the "./etc/litellm-config.yaml" file.
      • GenAI__EmbeddingsModel. ID or name of the default text embeddings model. Take it from the model_name key in the "./etc/litellm-config.yaml" file.
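For reference, a minimal .env fragment for the OpenAI provider might look as follows. The variable names come from the tables above; the key and model values are placeholders you must replace, and the model names must match the model_name values defined in the litellm configuration file:

```shell
# Example .env fragment for the OpenAI provider (placeholder values).
OPENAI_MODEL=openai/gpt-4o
OPENAI_EMBEDDING_MODEL=openai/text-embedding-3-small
OPENAI_API_KEY=sk-your-openai-api-key
# OPENAI_API_KEY_TEXT_EMBEDDING is optional; omit it to reuse OPENAI_API_KEY.

# Default models; take the values from model_name in the litellm config file.
GenAI__DefaultModel=gpt-4o
GenAI__EmbeddingsModel=text-embedding-3-small
```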

  7. Log in to the Docker registry. To do this, run the following command at the command prompt:

    docker login registry.creatio.com -u your-username -p your-password
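Passing the password with -p exposes it in shell history and process lists. A safer variant, assuming the password is stored in an environment variable, pipes it through the --password-stdin flag:

```shell
# Read the registry password from an environment variable instead of the command line.
echo "$REGISTRY_PASSWORD" | docker login registry.creatio.com -u your-username --password-stdin
```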
  8. Open a terminal and go to the docker-compose directory.

  9. Run the following command at the command prompt:

    docker-compose up -d
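To confirm the stack started, you can inspect container status and logs. The enrichment-service name matches the service overview above; the /readiness path is an assumption for the Docker deployment, mirroring the Kubernetes readiness check later in this article:

```shell
# List containers and their state for this compose project.
docker-compose ps

# Tail recent logs of the main API service.
docker-compose logs --tail=50 enrichment-service

# Probe the service from the host; expect HTTP 200 when the service is ready.
curl -i http://localhost:5006/readiness
```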

Setup in Creatio

  1. Click the System Designer button to open the System Designer.

  2. Go to the System setup block → System settings.

  3. Open the "Account enrichment service url" (AccountEnrichmentServiceUrl code) system setting.

  4. Enter the following value:

    http://[your_server_ip_address]:5006

    Replace [your_server_ip_address] with the actual host. Ensure the firewall allows inbound traffic on port 5006.

  5. Save the changes.

  6. Test the configuration (optional):

    1. Open Creatio.ai chat.
    2. Send a test message.
    3. Verify the assistant responds.

As a result, the generative AI service is deployed and connected to your Creatio instance. Creatio reaches the generative AI service through Creatio.ai chat and the enrichment service settings, enabling AI-powered functionality.

Deploy services using Kubernetes

Use Kubernetes for production-grade deployments with scalability and high availability. The setup requires actions on your cluster as well as in Creatio.

Before you perform the setup, check the following:

  • Confirm Kubernetes cluster is version 1.19 or later.
  • Confirm Helm is version 3.0 or later.
  • Confirm kubectl is configured.

To do this, run the following commands at the command prompt:

kubectl cluster-info
kubectl get nodes
helm version

If all commands complete without errors, proceed with the setup.

Setup on the cluster

  1. Download and unpack the archive that contains Helm files. Download.

  2. Go to the helm/genai directory → edit the "values.onsite.yaml" file.

  3. Configure the Docker registry credentials:

    dockerRegistry:
      username: <your-username>
      password: <your-password>
      email: <your-email>
  4. Configure the LLM provider:

    appConfig:
      genAI:
        llmProviders:
          models:
            openai:
              - name: gpt-4o
                model: gpt-4o
                api_key: sk-your-openai-api-key # Replace with your actual OpenAI API key
              # - name: text-embedding-3-small # Optional embedding model (disabled by default)
              #   model: text-embedding-3-small
              #   api_key: sk-your-openai-api-key # Replace with your actual OpenAI API key
          defaultModel: gpt-4o
          # embeddingsModel: text-embedding-3-small # Optional - uncomment if using embeddings

    The defaultModel and embeddingsModel values must match the name key from the model definitions above verbatim.
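For Azure OpenAI, the models block references the azure provider instead. The following is a sketch under the assumption that the chart accepts an azure model list analogous to the openai one; the deployment name and API key are placeholders you must replace:

```yaml
appConfig:
  genAI:
    llmProviders:
      models:
        azure:
          - name: gpt-4o-2024-11-20
            model: azure/gpt-4o-2024-11-20
            api_key: your-azure-api-key # Replace with your actual Azure subscription key
      defaultModel: gpt-4o-2024-11-20
```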

  5. Configure service access. Out of the box, the service is accessible via NodePort on port 30082. You can leave it as is, change the port, or enable Ingress.

    To change the port:

    service:
      type: NodePort
      nodePort: 30082

    To enable Ingress instead of NodePort:

    ingress:
      enabled: true
      hosts:
        - genai.example.com # Replace with your DNS name
  6. Configure PostgreSQL storage in the "values.onsite.yaml" file (optional). Out of the box, PostgreSQL runs without persistent storage. This is normal behavior since the database only stores non-critical information such as request counts and token usage statistics. If you require persistent storage, add this to your configuration:

    postgresql:
      primary:
        persistence:
          enabled: true
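If you enable persistence, you can also pin the volume size and storage class. These keys follow the common Bitnami PostgreSQL chart layout, which is an assumption about the bundled chart; verify them against your "values.onsite.yaml" file:

```yaml
postgresql:
  primary:
    persistence:
      enabled: true
      size: 8Gi              # Adjust to your retention needs
      storageClass: standard # Replace with a storage class available in your cluster
```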
  7. Deploy generative AI service using Helm. To do this, go to the directory that contains the "values.onsite.yaml" file → run the following command at the command prompt:

    helm upgrade --install genai . -f values.onsite.yaml
  8. Verify deployment. To do this, run the following command at the command prompt:

    kubectl get pods
    kubectl get services

    All pods must be in the Running state except flyway, which runs database migrations once and then completes. The service must show NodePort, which is 30082 out of the box.

  9. Determine the service URL. To find your Kubernetes node IP, run the following command at the command prompt:

    kubectl get nodes -o wide

    The service URL has the following format:

    http://<kubernetes_node_ip>:30082
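The node IP and port can also be combined into the URL in one step. The snippet below is a sketch; it assumes the first node's InternalIP is reachable from Creatio, and uses a hard-coded example IP in place of the kubectl lookup shown in the comment:

```shell
# One way to look up the internal IP automatically (requires cluster access):
#   NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
# For illustration, assume kubectl reported this IP:
NODE_IP="192.168.1.100"

# Compose the service URL for the default NodePort.
echo "http://${NODE_IP}:30082"
```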
  10. Check readiness. To do this, run the following command at the command prompt:

    curl -X GET http://192.168.1.100:30082/readiness # Replace 192.168.1.100 with your actual node IP

    The expected result is HTTP 200 and readiness status information.

Setup in Creatio

  1. Click the System Designer button to open the System Designer.

  2. Go to the System setup block → System settings.

  3. Open the "Account enrichment service url" (AccountEnrichmentServiceUrl code) system setting.

  4. Set the value of the setting to the URL you got from the deployment.

    http://192.168.1.100:30082 # Replace 192.168.1.100 with your actual node IP
  5. Save the changes.

  6. Test the configuration (optional):

    1. Open Creatio.ai chat.
    2. Send a test message.
    3. Verify the assistant responds.

As a result, the generative AI service is deployed and connected to your Creatio instance. Creatio reaches the generative AI service through Creatio.ai chat and the enrichment service settings, enabling AI-powered functionality.


See also

Creatio.ai architecture

Develop Creatio.ai Skill

AI Skill development recommendations

AI Skill list

Creatio.ai system actions

Data privacy in Creatio.ai

Creatio AI (developer documentation)