Version: 8.2

AI Skill development recommendations

Level: beginner

Developing a Creatio AI Skill involves several steps. This article covers best practices for each of them. You can use these practices to get the most out of Creatio AI functionality.

Recommendations for development steps

In general, Creatio AI Skill development consists of the following steps:

  1. Define user needs. Read more >>>
  2. Design the AI Skill. Read more >>>
  3. Develop actions. Read more >>>
  4. Test and refine the AI Skill. Read more >>>

Step 1. Define user needs

The first step in developing a Creatio AI Skill is to identify the challenges and repetitive tasks faced by Creatio users that Creatio AI can streamline. For example, consider these areas where Creatio AI's generative AI capabilities can offer significant value:

  • Data analysis and insights. Users often struggle to extract meaningful insights from large datasets. Creatio AI scenarios can assist with tasks like identifying trends, generating reports, and summarizing key findings based on user requests.
  • Content creation and summarization. Creatio AI can automate repetitive tasks like writing descriptions, generating emails, or summarizing documents based on user requests and relevant data points within Creatio.
  • Workflow design and automation. Complex workflows can be time-consuming to build. Creatio AI scenarios can leverage user requests and data context to suggest appropriate actions or even generate basic workflow structures.
  • Personalized user assistance. Leverage Creatio AI's ability to analyze user behavior and context to provide personalized suggestions and recommendations within the Creatio UI. Identify opportunities for LLM Integration.

Once you identify user pain points, explore how you can leverage Creatio AI LLM (Large Language Model) capabilities to address them:

  • Utilize NLP (Natural language processing). Enable users to describe their needs and goals in natural language. The LLM can interpret these requests and translate them into actionable steps within the Creatio AI scenario.
  • Harness text generation. Leverage the LLM's ability to generate text to automate tasks like writing reports, summarizing data points, or suggesting relevant information based on user requests and context.
  • Explore data analysis capabilities. Utilize the LLM to analyze data sets within Creatio and generate insights, identify trends, or answer user questions based on the retrieved information.

Step 2. Design the AI Skill

For the AI Skill to work correctly, you need to fill out several fields. View our recommendations on filling them out below.

Name

A human-readable name for your AI Skill. The name must reflect the main goal of the AI Skill and be short and easy to read.
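For example, a hypothetical AI Skill that summarizes support cases might be named "Summarize case".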

Code

Creatio and the LLM use the code to execute the AI Skill. The code must start with a prefix; out of the box, the prefix is "Usr".
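For example, the hypothetical "Summarize case" AI Skill might use the code "UsrSummarizeCase".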

Description

A detailed plaintext description of what the AI Skill does, when to use it, and how it behaves. The LLM uses this description to determine which AI Skill to trigger, when, and for what purpose. The description is crucial for the correct operation of the AI Skill and must contain a complete description of your AI Skill.
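For example, a description for the hypothetical "Summarize case" AI Skill might read as follows:

Summarizes the case the user is currently viewing. Trigger this AI Skill when the user asks for a brief overview of a case, its status, or its history. Do not trigger this AI Skill when the user wants to modify case data.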

Status

Whether to trigger the AI Skill. Available values:

  • Active. An active AI Skill, triggered for all users.
  • In development. An AI Skill in development. Triggered only for users that have access to the "CanDevelopCopilotIntents" system operation.
  • Deactivated. A deactivated AI Skill. Not triggered.

Prompt

The prompt serves as the initial instruction for the LLM, setting the stage for the entire AI Skill. Here is how to craft an effective prompt:

  • Be specific and actionable. The prompt must be specific enough to clearly articulate the user goal and provide sufficient context for the LLM and subsequent actions. It must clearly define what the user wants to achieve and include any relevant details.
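    For example, an illustrative specific instruction might be:

    Create a report on cases created in the last 30 days. Group the cases by priority and highlight cases that have had no response for more than 7 days.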

  • Use actionable verbs. Use strong action verbs that convey the desired action. This guides the LLM towards the appropriate course of action.
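    For example, start instructions with verbs such as "Summarize", "Retrieve", "Generate", or "Open" rather than describing the desired outcome indirectly.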

  • Include data and context. If the AI Skill requires access to specific data or operates within a particular context, incorporate that information into the prompt. This ensures the LLM has the necessary information to understand the user's request.
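    For example, an illustrative instruction that names its data sources might be:

    Summarize the case the user is currently viewing. Base the summary on the case subject, status, owner, and description available on the page.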

  • Be straightforward. Clarify expectations. Provide direct and clear instructions and avoid vague wording.

    Vague: Summarize the case description.

    Clear: Summarize the case in no more than 200 characters. Use 3 to 5 sentences. Use professional language without jargon.

  • Structure the prompt. Highlight parameters, action names, and important information using blocks like [], <>, "", etc.

    Open the email template by executing the action [Open Email Page] with <Subject>, <email text> in HTML format and <ID> of Contact or Account.
  • Use capital letters for emphasis. You can highlight especially important information using capital letters. The LLM pays special attention to this text.
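    For example, an illustrative instruction that emphasizes a restriction might be:

    Prepare the email draft, but DO NOT send it until the user explicitly confirms the text.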

  • Give examples of what you expect from the model (few-shot prompting). Provide a couple of examples.

    The example response must be no longer than 150 characters. Here are the expected examples: 
    [Response #1] Total score: 17. Errors: Informal greeting, Repeatedly requested already provided client information.
    [Response #2] Total score: 23. Errors: No follow-up on additional questions after 5 minutes.
  • Specify the steps required to complete a task. Divide a large prompt into tasks that the LLM will perform.

    Your task is to create a summary of no more than 400 words based on the case for the support team. To generate the summary, follow these steps: 
    1. Use the current user's context to retrieve the main information about the Case.
    2. Retrieve information about the case owner.
    3. Retrieve information about the case subject.
    4. Retrieve information about the case status.
    5. Generate the case summary. Use no more than 200 characters. Use 3 to 5 sentences. Use professional language without jargon.
  • Describe the parameters of actions and their purposes. If necessary, describe the parameters, their possible values, and properties in the prompt. This provides a more predictable result and more accurate AI Skill handling.

    Execute the function [Check email settings] that has the following parameters: 
    <ContactID>: a unique ID of the Contact to whom the email can be sent (usually this ID is called Contact and is located on the page where the user is working).
    <AccountID>: a unique ID of the account to which the email can be sent (usually this ID is called Account and is located on the page where the user is working).
    One of these parameters can be empty. In response, you will receive a signal indicating whether you can proceed and with which ID.
  • Use field names in objects. The platform works with objects and object fields. If you use field names in the prompt, specify them as they appear in the object, especially if they differ from the field labels on the page.
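    For example, if the page displays a field labeled "Full name" that is stored as <Name> in the "Contact" object, refer to <Name> in the prompt (the field and label here are illustrative).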

Learn more about writing effective prompts: Strategy: write clear instructions (Official OpenAI documentation), Best practices for prompt engineering with the OpenAI API (Official OpenAI documentation), Prompting guide 101 (Official Google documentation).

Step 3. Develop actions

Actions are the building blocks that translate the user's request into concrete steps in Creatio. They represent the individual actions that Creatio AI needs to execute sequentially to achieve the desired outcome.

You can use actions to do the following:

  • Interact with the platform: The main purpose of actions is interacting with Creatio, for example, opening a page, populating fields using data, obtaining information, starting a process, forecasting data, sending a request to a web service, etc.
  • Obtain additional context: If you need to request data that the user does not see on the page, you can use a data retrieval action that returns data as outbound parameters.
  • Check platform data before proceeding: If you need to verify whether further operations can be performed in Creatio, for example, check if a mailbox is configured, you can use an action for this purpose.

Here is how to define effective actions:

  • Break down the task. Deconstruct the user goal into a series of smaller, more manageable actions. Aim for a granular level of detail, ensuring each action contributes directly to the overall objective.
  • Establish logical sequence. Establish a clear and logical sequence for executing the goal actions. Consider any dependencies between actions and ensure they are addressed in the design. Some actions might need to be completed before others can be initiated.
  • Be comprehensive. Ensure the set of goal actions comprehensively covers all the steps required to fulfill the user's request.

The success of Creatio AI actions hinges on clear and accurate descriptions, input, and output parameters for each action. This data acts as the bridge between the no-code creators defining the actions and the Creatio AI LLM responsible for executing them.
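
For example, a hypothetical action might be described in the prompt as follows (the action name and parameters are illustrative, not taken from the product):

Execute the action [Get Case Details] that has the following parameters:
<CaseID>: the unique ID of the case, usually available on the page where the user is working.
The action returns the <Subject>, <Status>, and <OwnerName> of the case as outbound parameters.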

Step 4. Test and refine the AI Skill

Testing and refinement are crucial steps in ensuring that your AI Skills function as intended and deliver a seamless user experience. This iterative process lets you identify and address any potential issues before deploying the AI Skill to users.

Follow these testing strategies:

  • Thorough testing. Rigorously test the AI Skill using various user inputs and edge cases.
  • Focus areas. Pay close attention to the accuracy of LLM outputs, the execution logic of goal actions, and the overall functionality of the AI Skill.
  • User testing. Involve potential users in the testing process to gather feedback on the clarity of the prompt, the ease of use, and the overall usefulness of the AI Skill.

Refine your AI Skill in the following ways based on testing:

  • Prompt adjustments. Refine the prompt to improve clarity, address ambiguities, or provide more specific instructions.
  • Action optimization. Review the actions to ensure they are executed in the correct sequence, handle potential errors gracefully, and produce the desired outcome.

Treat testing and refinement as an ongoing process. As user needs evolve and new features are introduced, revisit your AI Skills and adjust them to maintain their effectiveness.

General best practices

Keep instructions concise

When designing instructions for prompts, aim to be clear and concise. Instructions longer than 200-300 words or those containing more than 3-5 sequential steps can lead to misunderstandings or cause certain steps to be overlooked. Focus on essential information and prioritize clarity to ensure that the LLM can interpret and execute the instructions effectively.

Long instructions increase the cognitive load on the LLM and might result in incomplete outputs or misinterpretations. Additionally, users interacting with the LLM might find it challenging to grasp overly complex instructions, leading to confusion or lower engagement.

Follow these recommendations to maintain clarity:

  • Break down complex steps. If your process requires multiple steps, consider breaking them into smaller sub-processes or sub-prompts. For example, if the task involves creating a report, splitting the instructions into "collecting data," "analyzing results," and "formatting output" can make it easier for the LLM to follow.
  • Use short sentences. Keep sentences short and direct to minimize the potential for confusion. Avoid compound sentences that combine multiple actions.
  • Highlight critical information. Use formatting (e.g., numbered points) to emphasize key parts of the instructions. This helps the LLM focus on the most important details.
  • Reduce ambiguity. Avoid vague terms or open-ended instructions like "handle as needed" or "use your discretion." Instead, provide specific criteria or clear guidance on what is expected.
  • Consider logical groupings. Organize instructions into logical groups that the LLM can process independently. For example, instead of listing 10 steps in one block, group related actions together and indicate transitions between different stages of the task.

For example, you have a long prompt:


Create a report on user activity. Include details on login frequency, session duration, and account status. Organize the data by month and highlight any irregular patterns. Ensure all columns are formatted correctly. Add a summary at the end, and use colors to differentiate active and inactive users.

You can break it down as follows:


Collect user activity data: Include login frequency, session duration, and account status.

Organize data: Group by month and identify any irregular patterns.

Format data: Ensure columns are labeled correctly and visually highlight active and inactive users.

Create summary: Add a summary at the end with key observations.

This approach is clearer, less overwhelming, and makes it easier for the LLM to interpret and produce accurate results.

LLMs perform better using step-by-step instructions that align closely with their training data. Even minor adjustments in phrasing or structure can significantly impact how the LLM interprets the input. Testing and refining prompts regularly will help ensure that your instructions remain clear, concise, and effective in all scenarios.

Understand LLM limitations

LLMs are creative and can generate sophisticated outputs, but they are not always precise or logical. For tasks that can be easily solved with traditional algorithms or no-code tools, preprocess the data first. This ensures that LLMs are used for what they do best: creativity and interpretation.

Appropriate use: using LLMs to create a memo, summarize content, or explain data usually produces good results.

Inappropriate use: avoid using LLMs for tasks that require precise calculations, such as performing arithmetic operations or executing a series of 4-5 or more strict logical steps with no tolerance for errors. While LLMs might perform well in some cases, they cannot guarantee consistent accuracy compared to algorithmic or no-code solutions.

LLMs are not search engines, but can interpret search results

LLMs generate answers based on patterns learned from similar data, not by searching databases. They can analyze, interpret, and suggest, but they must not be used as a primary data source. Always cross-check critical information with a reliable database or source.

However, LLMs can be used as interpreters to analyze and summarize search results. In such cases, you are responsible for extracting the necessary data and formatting it properly before passing it to the LLM. Use Creatio process tools and the web services module to collect external data, then utilize the LLM to generate well-structured outputs for the user. View the examples below.

Calendar integration. To work with a user's calendar, use built-in actions for generic data searches or prepare a specific process to extract relevant information. Pass the results to the LLM along with instructions on how to present the data. Note that LLMs do not have built-in calendar awareness and might miss days of the week for specific dates. You must provide calendar tools and other details to ensure accurate responses.
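
For example, an illustrative instruction that passes pre-extracted calendar data to the LLM might look like this (the parameter name is hypothetical):

Here is the list of the user's meetings for the current week, with the day of the week already resolved for each date: <MeetingList>. Present the meetings as a list grouped by day of the week and flag any overlapping time slots.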

Using user profile data. If your AI Skill requires data from the current user profile, use the "GetCurrentUserInfo" action or create your own process to extract the required data. Once this data is available, you can pass it to the LLM with further instructions on how to use and present it.
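
For example, an illustrative follow-up instruction might look like this (the parameter names are hypothetical outputs of the data retrieval step):

Use the <UserName>, <UserRole>, and <UserTimeZone> values retrieved in the previous step to greet the user by name and suggest meeting times in their time zone.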

Provide relevant context

The context you provide in the prompt has a significant impact on the output. Pay close attention to data that is present on the page, as this can be used by the LLM to provide more accurate responses. If additional data is needed, consider using search actions or prepare a process to retrieve and format data from relevant records before passing it to the LLM.

Use clear and consistent naming conventions for parameters. For example, when creating a new record in a Creatio process, use parameter names that directly relate to the object structure to avoid misunderstandings in edge cases. For instance, if your process creates a new meeting record in the "Activity" object, name the parameter as CreatedActivityID instead of using vague terms like MeetingID or TaskID. This helps the LLM better correlate data and generate accurate responses that have correct links to the records.

Test AI Skills in multiple scenarios

Testing your AI Skill in different scenarios is crucial to developing a robust solution. Because LLM outputs are highly dependent on the exact data provided, the behavior can vary significantly across different contexts. Consider the following scenarios when testing:

Page data availability

  • The record page is open, and all data from the page is used.
  • No related record page is open, and data is read solely from search actions or other sources.
  • A combination of search actions and page data (even if some of the page data is irrelevant to the AI Skill) is used.

Data volume and completeness

  • A page is filled with all fields completed and includes many linked items in lists.
  • The record is almost empty, with minimal data provided or access restrictions limiting visibility to key fields.
  • The user has access to certain fields but not others, creating a partial view of the data.

AI Skill execution method

  • All required data is provided in a single message when starting the AI Skill.
  • The AI Skill is initiated, and data is provided step-by-step as requested by the LLM. For example, the LLM asks for some parts of the data and attempts to guess other information.

Additional scenarios

  • AI Skills that depend on specific types of records, for example, meetings, activities, contacts, might behave differently based on data types or formats provided.
  • AI Skills that use predefined templates or dynamically created instructions for specific data types can produce inconsistent results based on variations in the input data structure or content.

Testing across these scenarios helps identify edge cases and ensures the LLM produces consistent, high-quality outputs in various contexts.

Use all available debug tools while testing

To identify and resolve issues effectively during testing, it is essential to fully understand the dialogue flow and observe how data is passed to the LLM at each step. Analyzing what information is being processed and how it influences LLM behavior is crucial for ensuring consistent outputs. Additionally, pay close attention to logical procedures, such as Creatio processes, as they can significantly impact LLM performance.

We recommend following these debugging best practices:

  • Use process trace and logs. Utilize the process trace tool in Creatio to verify if data is being output correctly at every step. This helps identify if there are gaps in data transfer or if any logic steps are not functioning as expected.
  • Monitor data flow. Keep track of what data is passed into the LLM at each stage, especially when using complex or nested processes. Ensure that all required parameters are being provided, and that there are no discrepancies in data values.
  • Test logical processes independently. Before focusing on prompt adjustments, confirm that the logical components, for example, data queries, record updates, API calls, work well within the scenario. This will prevent misinterpretations and ensure that the LLM is operating on reliable data.
  • Analyze response consistency. Observe how the LLM responds to identical inputs in different scenarios. If there are significant variations, investigate whether the input data, process state, or prompt text could be contributing to the inconsistency.
  • Utilize error logs and debug outputs. Review error logs and debug outputs from Creatio or any integrated tools to identify if there are system-side issues affecting the LLM's performance. This can help distinguish between LLM-related errors and those caused by backend processes or misconfigurations.

Example

Imagine a scenario where you built an AI Skill to extract data from a customer profile and present it in a structured summary. You notice that the LLM occasionally omits certain details, such as the customer’s address.

To identify the issue:

  1. Check process trace. Verify that the "GetContactProfileData" process correctly retrieves all fields, including the address, and that this data is output as expected.
    • Scenario. During a test run, you notice that the Address field is missing from the output log of the "GetContactProfileData" process.
    • Resolution. Investigate the process and discover that the address field was not included in the data mapping step. Correct the mapping and retest.
  2. Analyze LLM input and output logs. Review what data was actually passed to the LLM and how it responded. Use debugging tools to compare data available on the page against data passed in the prompt.
    • Scenario. The logs show that although the address was included in data passed to the LLM, the prompt text did not explicitly mention or prioritize it.
    • Resolution. Adjust the prompt text to include a direct instruction like "Include the customer’s address if available, and make sure it is displayed in the summary."
  3. Review prompt adjustments. Once the logical components are validated, test the LLM prompt in different scenarios, for example, with complete data, partial data, and missing fields, to see if it includes all relevant details consistently.
    • Scenario. The LLM performs well with complete data but struggles when the address field is missing or partially filled.
    • Resolution. Modify the prompt to handle edge cases more gracefully:
    If the customer’s address is missing, note it as "Address not provided." If only a city is available, display the city name.

Before making changes to the prompt, ensure that data and logic components work flawlessly in your scenario. By confirming that the logical procedures are sound, you can confidently adjust the prompt text to refine the LLM’s output. Additionally, regularly use process trace tools and debug logs to validate the data flow, catch errors early, and streamline the troubleshooting process.


See also

Creatio AI overview

Creatio AI architecture

Develop Creatio AI Skills

AI Skill list

System actions

Data privacy in Creatio AI

Strategy: write clear instructions (Official OpenAI documentation)

Best practices for prompt engineering with the OpenAI API (Official OpenAI documentation)

Prompting guide 101 (Official Google documentation)