Testing strategy: from your first interactions to feedback from beta testers

Testing is an important and exciting phase of your AI Expert's development. It's an opportunity not only to "test" its knowledge, but also to better understand its "personality," fine-tune its communication, and see how it can best serve your clients.

14.0 Important Mindset: Work with AI as a living system, not a flawless machine

Before you get into the actual testing, it's helpful to understand a few specifics about working with AI:

  • AI is not traditional software: Unlike traditional programs that behave exactly according to predefined code, AI (and therefore your BuddyPro Expert) is a dynamic and somewhat unpredictable system. It learns, interprets and generates answers based on huge amounts of data and complex algorithms. This gives it amazing capabilities, but it also means that its behavior is not always 100% predictable or controllable down to the last detail.
  • Accept "imperfection" as part of the process: your goal is not to create AI that never makes a mistake or deviates from your expectations. Such an AI does not yet exist. Instead, view testing as a process of incremental tuning and refinement. Every "mistake" or unexpected response is valuable information that will help you better "nurture" the AI.
  • Keep perfectionism in check: many of us tend to strive for perfection. With AI, however, it's important to strike a balance. If you spend a disproportionate amount of time chasing absolutely flawless behavior in every conceivable situation, you'll end up frustrated and unnecessarily slowed down. Focus on the key features, a correct understanding of your know-how, and an overall positive user experience.
  • Feeling "I'm not in control": because AI has a certain amount of autonomy in its responses, you may sometimes feel like you don't have the outcome fully in your hands. Instead of striving for absolute control, focus on:
    • The right "assignment": the quality of your know-how in sources, a well thought-out system_prompt.gdoc and well-defined roles are essential; together they give the AI clear direction.
    • Iterative improvement: test, identify the problem, modify the assignment (know-how, prompt, role), test again. It's a cycle.
    • User education: Later on, you'll educate your clients on how best to work with AI, what to expect from it, and that it's still "just" AI.
  • When is AI "good enough"? Don't expect absolute perfection. AI is "good enough" for beta testing, or even for the first run when:
    • It understands and applies key aspects of your know-how.
    • It communicates in a way that is consistent with your brand and expectations.
    • It doesn't provide misleading or harmful information (especially on sensitive topics).
    • The overall user experience is positive and delivers value to the user.

Remember, your AI Expert is like a new member of the team - it needs time to learn and your help to become a real asset. Testing is how you help it grow.


14.1 Why is testing important?

Testing will help you:

  • Verify understanding of your know-how: Does your AI Expert really understand, interpret and apply the information you have provided in the sources folder correctly?
  • Detect "hallucinations" and inaccuracies: even the most advanced AI can sometimes "make up" information or provide inaccurate or misleading answers. Testing helps to minimize these cases and identify areas where know-how needs to be added or refined.
  • Fine-tune tone, communication style and personality: does your AI's communication style match the personality defined in system_prompt.gdoc? Is the chosen register (e.g., informal vs. formal address) consistent? Is the AI's tone friendly, expert, empathetic - exactly what you expect?
  • Ensure a smooth and intuitive user experience: does the onboarding process (messages from onboarding.gdoc) work as you expect? Is the interaction with the AI natural, understandable and easy even for less tech-savvy users?
  • Identify "weak spots" and knowledge gaps: where does the AI need to add knowledge, clarify instructions, or where its answers are not deep or specific enough?
  • Check all settings for functionality: are disclaimers for sensitive topics set correctly? Do the links work if the AI provides them?
  • Understand the cost of operation: Testing will also help you better gauge which types of interactions, and how complex the responses, consume more AI credits.

14.2 Phase 1: Your internal testing (you, the creator, as the first user)

Before you show AI Expert to the world, become its first user.

A) Testing as a "New User" - what will the first impression be?

Try out how your AI interacts with someone it doesn't know yet.

  1. Put yourself in the role of a newcomer: in a chat with your BuddyPro instance, enter:

    /test:{ID_tester}

    Instead of ID_tester, you can use any name for your test profile, such as /test:John1. This tells the AI to treat you as a new user.

  2. Activate the test profile by invitation: After switching to /test:{ID_tester}, type (use your generated code):

    /start TESTCODE1
  3. Verify onboarding: review the welcome messages from onboarding.gdoc. Are they easy to understand? Do the links work? Is the first impression positive?

  4. Return to admin mode: When you're done, enter:

    /untest
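
Put together, a complete test session uses the commands from the steps above in this order (John1 and TESTCODE1 are the example values used earlier):

```
/test:John1          (switch into a fresh test profile)
/start TESTCODE1     (redeem your invitation code as that profile)
... chat with the AI as a brand-new user ...
/untest              (return to admin mode)
```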

B) Exploring AI knowledge and behavior

Now just talk to your AI - explore what it can do and how it reacts.

  • Ask questions on a variety of topics from your know-how, from simple questions to more complex scenarios.
  • See what it does when you phrase a question vaguely or ask something completely out of scope.
  • Note whether the answers are clear, whether they match the defined personality, and whether the AI uses disclaimers correctly for sensitive topics.

Don't be afraid to experiment. The goal is to understand how your AI "thinks."


14.3 Phase 2: Testing with a small, select group of external beta testers

When you feel the AI is working well internally, invite a few more people to give you valuable feedback from a real user's perspective.

A) Selecting testers

Reach out to 5-15 people who represent your target audience (e.g., loyal clients, colleagues, active community members).

B) Generate tester invitations

Create a unique invitation for each tester (or small group) to give them controlled access. Use the command:

/generateBuddyProInvite:{trialMessages}:{code}:{usersLimit}

Example: /generateBuddyProInvite:150:TESTER01:1 creates an invitation for one tester with 150 messages and the code TESTER01. Then send the generated link (t.me/VaseUserNameBot?start=TESTER01) to the testers.
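
If you're inviting several testers, a small helper script can keep the codes and links consistent. This is only an illustrative sketch: the command and link formats are taken from the example above, while the helper functions themselves and the bot name "VaseUserNameBot" are placeholders, not part of BuddyPro.

```python
# Illustrative helper: build /generateBuddyProInvite commands and the
# matching t.me links for a list of tester codes. The formats follow the
# example above; "VaseUserNameBot" is a placeholder bot username.

def invite_command(trial_messages: int, code: str, users_limit: int) -> str:
    """Format one /generateBuddyProInvite command."""
    return f"/generateBuddyProInvite:{trial_messages}:{code}:{users_limit}"

def invite_link(bot_username: str, code: str) -> str:
    """Format the deep link that a tester clicks to redeem the code."""
    return f"t.me/{bot_username}?start={code}"

tester_codes = ["TESTER01", "TESTER02", "TESTER03"]
for code in tester_codes:
    print(invite_command(150, code, 1))   # paste this into your admin chat
    print(invite_link("VaseUserNameBot", code))  # send this to the tester
```

Running it for `TESTER01` yields `/generateBuddyProInvite:150:TESTER01:1` and `t.me/VaseUserNameBot?start=TESTER01`, matching the example above.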

C) Simple assignment for testers

Let them know what you expect from them. It doesn't have to be anything complicated - ask them to try talking to the AI about topics that interest them, noting what was clear, any bugs, and their overall impression. At the same time, remind them that they are testing an AI that is still learning, and that their feedback is important.

D) Easy feedback collection

Arrange a way for them to share their insights (e.g., a shared Google Doc, email, or a private Telegram group). Whenever they run into a problem, specific examples and ideally screenshots are important.

Tip

Remember that testing is not a one-time event, but a cycle of trying, learning, and improving. Any feedback, whether your own or from testers, is a valuable resource for improving your AI Expert. Have fun with this process of discovery!


Frequently Asked Questions (FAQ)

What should I do if AI Expert keeps "hallucinating" or responding incorrectly after my internal testing?

This is a normal part of the process. If you run into problems:

  1. Refine the instructions in system_prompt.gdoc. Maybe the AI needs clearer rules or boundaries.
  2. Add missing know-how to sources. AI often "hallucinates" when it doesn't have enough specific information on a given topic.
  3. Check the role definitions: if the error relates to a specific area, look at the definition of that role. The /lastRole command will tell you which role was used. You may need to modify the role or regenerate it after adding know-how.

Don't be afraid to experiment with modifications - it's a debugging process.