Version: 8.2

Best practices for AI Skill development

Level: beginner

Keep instructions concise

When designing instructions for prompts, aim to be clear and concise. Instructions longer than 200-300 words or those containing more than 3-5 sequential steps can lead to misunderstandings or cause certain steps to be overlooked. Focus on essential information and prioritize clarity to ensure that the LLM can interpret and execute the instructions effectively.

Long instructions increase the cognitive load on the LLM and might result in incomplete outputs or misinterpretations. Additionally, users interacting with the LLM might find it challenging to grasp overly complex instructions, leading to confusion or lower engagement.

Follow these recommendations to maintain clarity:

  • Break down complex steps. If your process requires multiple steps, consider breaking them into smaller sub-processes or sub-prompts. For example, if the task involves creating a report, splitting the instructions into "collecting data," "analyzing results," and "formatting output" can make it easier for the LLM to follow.
  • Use short sentences. Keep sentences short and direct to minimize the potential for confusion. Avoid compound sentences that combine multiple actions.
  • Highlight critical information. Use formatting (e.g., numbered points) to emphasize key parts of the instructions. This helps the LLM focus on the most important details.
  • Reduce ambiguity. Avoid vague terms or open-ended instructions like "handle as needed" or "use your discretion." Instead, provide specific criteria or clear guidance on what is expected.
  • Consider logical groupings. Organize instructions into logical groups that the LLM can process independently. For example, instead of listing 10 steps in one block, group related actions together and indicate transitions between different stages of the task.

For example, suppose you have the following long prompt:


Create a report on user activity. Include details on login frequency, session duration, and account status. Organize the data by month and highlight any irregular patterns. Ensure all columns are formatted correctly. Add a summary at the end, and use colors to differentiate active and inactive users.

You can break it down as follows:


Collect user activity data: Include login frequency, session duration, and account status.

Organize data: Group by month and identify any irregular patterns.

Format data: Ensure columns are labeled correctly and visually highlight active and inactive users.

Create summary: Add a summary at the end with key observations.

This approach is clearer, less overwhelming, and makes it easier for the LLM to interpret and produce accurate results.

LLMs perform better using step-by-step instructions that align closely with their training data. Even minor adjustments in phrasing or structure can significantly impact how the LLM interprets the input. Testing and refining prompts regularly will help ensure that your instructions remain clear, concise, and effective in all scenarios.
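
To make such iteration concrete, here is a minimal Python sketch that runs two phrasings of the same instruction against identical input data. The llm_complete helper is a hypothetical stand-in for whatever LLM call your environment provides.

# Minimal sketch for comparing prompt variants against the same input data.
def llm_complete(prompt: str) -> str:
    # Stub; replace with the actual LLM call available in your environment.
    return f"[LLM output for a prompt of {len(prompt)} characters]"

PROMPT_VARIANTS = [
    "Summarize the user activity data below in 3 bullet points.",
    "Read the user activity data below. Write exactly 3 bullet points: one on "
    "login frequency, one on session duration, one on account status.",
]

user_activity = "logins: 14/month; avg session: 22 min; status: active"

for i, variant in enumerate(PROMPT_VARIANTS, start=1):
    print(f"--- Variant {i} ---")
    print(llm_complete(f"{variant}\n\nData:\n{user_activity}"))

Comparing the outputs side by side makes it easier to see which phrasing yields more complete and consistent results before you commit it to the AI Skill.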

Understand LLM limitations

LLMs are creative and can generate sophisticated outputs, but they are not always precise or logical. For tasks that can be easily solved with traditional algorithms or no-code tools, preprocess the data first. This ensures that LLMs are used for what they do best: creativity and interpretation.

Appropriate use: using an LLM to create a memo, summarize content, or explain data will usually produce good results.

Inappropriate use: using an LLM for tasks that require precise calculations, such as performing arithmetic operations or executing a series of 4-5+ strict logical steps with no tolerance for errors. While LLMs might perform well in some cases, they cannot guarantee consistent accuracy compared to algorithmic or no-code solutions.
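
The division of labor can be sketched as follows. In this hedged Python example, the arithmetic is done deterministically in code, and the LLM is only asked to word the verified result; all names and values are illustrative.

# Sketch: exact arithmetic stays in code; the LLM only words the result.
durations_minutes = [34, 12, 58, 41, 7]

total = sum(durations_minutes)             # deterministic, always exact
average = total / len(durations_minutes)   # never delegated to the LLM

prompt = (
    "Write a one-sentence summary for a manager: total session time was "
    f"{total} minutes across {len(durations_minutes)} sessions, "
    f"averaging {average:.1f} minutes per session."
)
print(prompt)  # pass this finished prompt to your LLM integration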

LLMs are not search engines, but can interpret search results

LLMs generate answers based on patterns learned from similar data, not by searching databases. They can analyze, interpret, and suggest, but they must not be used as a primary data source. Always cross-check critical information with a reliable database or source.

However, LLMs can be used as interpreters to analyze and summarize search results. In such cases, you are responsible for extracting the necessary data and formatting it properly before passing it to the LLM. Use Creatio process tools and the web services module to collect external data, then utilize the LLM to generate well-structured outputs for the user. View the examples below.
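
For illustration only, the following Python sketch shows the general pattern of collecting data first and prompting second. The endpoint is hypothetical; in Creatio itself, the data collection would be done with process tools and the web services module rather than Python.

import json
import urllib.request

# Hypothetical REST endpoint; in Creatio, collect this data with process
# tools and the web services module instead of direct Python calls.
SEARCH_URL = "https://example.com/api/search?q=user+activity"

with urllib.request.urlopen(SEARCH_URL) as response:
    results = json.load(response)

# Extract and format only the fields the LLM needs.
lines = [f"- {item['title']}: {item['snippet']}" for item in results.get("items", [])]

prompt = (
    "Summarize the search results below in two sentences. "
    "Do not add facts that are not present in the results.\n\n" + "\n".join(lines)
)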

Calendar integration. To work with a user's calendar, use built-in actions for generic data searches or prepare a specific process to extract relevant information. Pass the results to the LLM along with instructions on how to present the data. Note that LLMs do not have built-in calendar awareness and might misidentify the day of the week for a specific date. You must provide calendar tools and other details to ensure accurate responses.
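
Day-of-week resolution, for example, is deterministic and should happen before the prompt is built. A minimal Python sketch of the idea (the names are illustrative, not a Creatio API):

from datetime import date

# Resolve calendar facts deterministically; never ask the LLM to infer weekdays.
meeting_date = date(2025, 3, 14)
weekday = meeting_date.strftime("%A")  # "Friday"

prompt = (
    f"The user has a meeting on {meeting_date.isoformat()} ({weekday}). "
    "Draft a short reminder that mentions the day of the week."
)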

Using user profile data. If your AI Skill requires data from the current user profile, use the "GetCurrentUserInfo" action or create your own process to extract the required data. Once this data is available, you can pass it to the LLM with further instructions on how to use and present it.
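
A small sketch of the pattern, where the dictionary stands in for whatever "GetCurrentUserInfo" or your custom process returns (the field names are illustrative, not a fixed schema):

# Illustrative stand-in for the output of the "GetCurrentUserInfo" action
# or a custom process; actual field names depend on your configuration.
user_info = {
    "name": "Jane Doe",
    "role": "Sales Manager",
    "timezone": "Europe/Berlin",
}

prompt = (
    "Address the user by name and keep the tone appropriate for their role.\n"
    f"Name: {user_info['name']}\n"
    f"Role: {user_info['role']}\n"
    f"Timezone: {user_info['timezone']}\n"
    "Task: draft the greeting line for the daily summary email."
)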

Provide relevant context

The context you provide in the prompt has a significant impact on the output. Pay close attention to data that is present on the page, as this can be used by the LLM to provide more accurate responses. If additional data is needed, consider using search actions or prepare a process to retrieve and format data from relevant records before passing it to the LLM.

Use clear and consistent naming conventions for parameters. When creating a new record in a Creatio process, use parameter names that directly relate to the object structure to avoid misunderstandings in edge cases. For example, if your process creates a new meeting record in the "Activity" object, name the parameter CreatedActivityID instead of using vague terms like MeetingID or TaskID. This helps the LLM correlate data and generate accurate responses with correct links to the records.
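
The principle can be illustrated with a short Python sketch (the parameter names and the ID value are examples, not a fixed Creatio schema):

# Vague name: the LLM cannot tell which object this ID belongs to.
output_vague = {"MeetingID": "f3a1c2d4-0000-0000-0000-000000000001"}

# Clear name: it encodes the action ("Created") and the target object
# ("Activity"), so the LLM can link the ID to the correct record type.
output_clear = {"CreatedActivityID": "f3a1c2d4-0000-0000-0000-000000000001"}

prompt = f"Link the summary to record {output_clear['CreatedActivityID']} in the Activity object."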

Test AI Skills in multiple scenarios

Testing your AI Skill in different scenarios is crucial to developing a robust solution. Because LLM outputs are highly dependent on the exact data provided, the behavior can vary significantly across different contexts. Consider the following scenarios when testing:

Page data availability

  • The record page is open, and all data from the page is used.
  • No related record page is open, and data is read solely from search actions or other sources.
  • A combination of search actions and page data (even if some of the page data is irrelevant to the AI Skill) is used.

Data volume and completeness

  • A page is filled with all fields completed and includes many linked items in lists.
  • The record is almost empty, with minimal data provided or access restrictions limiting visibility to key fields.
  • The user has access to certain fields but not others, creating a partial view of the data.

AI Skill execution method

  • All required data is provided in a single message when starting the AI Skill.
  • The AI Skill is initiated, and data is provided step-by-step as requested by the LLM. For example, the LLM asks for some parts of the data and attempts to guess other information.

Additional scenarios

  • AI Skills that depend on specific types of records (for example, meetings, activities, or contacts) might behave differently based on the data types or formats provided.
  • AI Skills that use predefined templates or dynamically created instructions for specific data types can produce inconsistent results based on variations in the input data structure or content.

Testing across these scenarios helps identify edge cases and ensures the LLM produces consistent, high-quality outputs in various contexts.
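
One lightweight way to organize such coverage is a scenario matrix. The Python sketch below assumes a hypothetical run_ai_skill helper, since the actual execution happens inside Creatio; the scenario data is illustrative.

# Sketch: a scenario matrix for one AI Skill, run through a hypothetical helper.
def run_ai_skill(page, search_results):
    # Stand-in for triggering the AI Skill with the given inputs.
    return f"[output for page={page!r}, search={search_results!r}]"

scenarios = [
    {"name": "full page data", "page": {"Name": "Acme", "Address": "Main St 1"}, "search": []},
    {"name": "no page open", "page": None, "search": [{"Name": "Acme"}]},
    {"name": "partial field access", "page": {"Name": "Acme"}, "search": []},
]

for scenario in scenarios:
    print(scenario["name"], "->", run_ai_skill(scenario["page"], scenario["search"]))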

Use all available debug tools while testing

To identify and resolve issues effectively during testing, it is essential to fully understand the dialogue flow and observe how data is passed to the LLM at each step. Analyzing what information is being processed and how it influences LLM behavior is crucial for ensuring consistent outputs. Additionally, pay close attention to logical procedures, such as Creatio processes, as they can significantly impact LLM performance.

We recommend following these debugging best practices:

  • Use process trace and logs. Utilize the process trace tool in Creatio to verify if data is being output correctly at every step. This helps identify if there are gaps in data transfer or if any logic steps are not functioning as expected.
  • Monitor data flow. Keep track of what data is passed into the LLM at each stage, especially when using complex or nested processes. Ensure that all required parameters are provided and that there are no discrepancies in data values (see the sketch after this list).
  • Test logical processes independently. Before focusing on prompt adjustments, confirm that the logical components (for example, data queries, record updates, API calls) work well within the scenario. This will prevent misinterpretations and ensure that the LLM is operating on reliable data.
  • Analyze response consistency. Observe how the LLM responds to identical inputs in different scenarios. If there are significant variations, investigate whether the input data, process state, or prompt text could be contributing to the inconsistency.
  • Utilize error logs and debug outputs. Review error logs and debug outputs from Creatio or any integrated tools to identify if there are system-side issues affecting the LLM's performance. This can help distinguish between LLM-related errors and those caused by backend processes or misconfigurations.
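
As a simple illustration of the "Monitor data flow" recommendation, the following Python sketch logs the exact input and output of every LLM call so that runs can be compared later. The llm_complete helper is a hypothetical stub.

import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
log = logging.getLogger("ai_skill_debug")

def llm_complete(prompt: str) -> str:
    # Stub; replace with your actual LLM integration.
    return "[LLM response]"

def call_llm(step: str, prompt: str) -> str:
    # Record the exact input and output of every stage for later comparison.
    log.debug("step=%s prompt=%r", step, prompt)
    response = llm_complete(prompt)
    log.debug("step=%s response=%r", step, response)
    return response

call_llm("profile_summary", "Summarize the contact profile data.")
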
Example

Imagine a scenario where you built an AI Skill to extract data from a customer profile and present it in a structured summary. You notice that the LLM occasionally omits certain details, such as the customer’s address.

To identify the issue:

  1. Check process trace. Verify that the "GetContactProfileData" process correctly retrieves all fields, including the address, and that this data is output as expected.
    • Scenario. During a test run, you notice that the Address field is missing from the output log of the "GetContactProfileData" process.
    • Resolution. Investigate the process and discover that the address field was not included in the data mapping step. Correct the mapping and retest.
  2. Analyze LLM input and output logs. Review what data was actually passed to the LLM and how it responded. Use debugging tools to compare data available on the page against data passed in the prompt.
    • Scenario. The logs show that although the address was included in the data passed to the LLM, the prompt text did not explicitly mention or prioritize it.
    • Resolution. Adjust the prompt text to include a direct instruction like "Include the customer’s address if available, and make sure it is displayed in the summary."
  3. Review prompt adjustments. Once the logical components are validated, test the LLM prompt in different scenarios (for example, with complete data, partial data, and missing fields) to see if it includes all relevant details consistently.
    • Scenario. The LLM performs well with complete data but struggles when the address field is missing or partially filled.
    • Resolution. Modify the prompt to handle edge cases more gracefully, for example: "If the customer’s address is missing, note it as 'Address not provided.' If only a city is available, display the city name." A data-side sketch of the same safeguard follows this list.
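
The same resilience can also be built in before the prompt is sent, by normalizing missing fields in the data-preparation step. A hedged Python sketch (field names and values are illustrative):

# Normalize incomplete profile data before building the prompt so the LLM
# never has to guess how to describe a missing address.
profile = {"name": "John Smith", "city": "Boston", "address": None}

if profile.get("address"):
    address_line = profile["address"]
elif profile.get("city"):
    address_line = profile["city"]         # only a city is available
else:
    address_line = "Address not provided"  # explicit placeholder per the prompt rules

prompt = (
    "Summarize the customer profile. Always include the address line.\n"
    f"Name: {profile['name']}\n"
    f"Address: {address_line}"
)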

Before making changes to the prompt, ensure that data and logic components work flawlessly in your scenario. By confirming that the logical procedures are sound, you can confidently adjust the prompt text to refine the LLM’s output. Additionally, regularly use process trace tools and debug logs to validate the data flow, catch errors early, and streamline the troubleshooting process.


See also

Creatio AI overview

Creatio AI architecture

Develop Creatio AI Skill

AI Skill list

Creatio AI system actions

Data privacy in Creatio AI

Prompt engineering (Official OpenAI documentation)

Best practices for prompt engineering with the OpenAI API (Official OpenAI documentation)

Prompting guide 101 (Official Google documentation)