Module 1

Prompt Engineering Masterclass

If you’ve ever tried to use AI only to give up after receiving generic, off-brand, or inaccurate outputs, you’re not alone.

Prompting is a skill that seems simple on the surface, but has hidden nuances that prevent you from getting the best out of AI.

That’s why this module isn’t simply a basic prompting guide. It’s an in-depth masterclass that will cover:

  • A simple 5-part framework that you can apply to immediately see better results

  • 10+ ready-to-use prompts you can apply to your role right away

  • A step-by-step guide on how to build your own specialized Customer Success Coaches

  • Iteration techniques that actually work instead of sending you round in circles.

Introduction

If you’ve scrolled on LinkedIn recently, you’ve probably seen everyone bragging about how AI has made them 10X more efficient. But when you tried using ChatGPT, or Google Gemini, or whatever LLM you prefer, did you feel like something was… missing? Like the outputs were just not as polished, accurate, or relevant as you expected? 

That’s exactly what this module is designed to fix. We’ve got tons of practical advice on how you can immediately improve your prompts. And unlike generic AI prompting courses, this module is specifically tailored for Customer Success professionals. Every example, every tweak, and every strategy is meant for the work you actually do. 

Some of the things we'll cover include:

  • Examples of prompts fine-tuned for Customer Success Managers
  • Optimizing prompts for customer data analysis 
  • The fundamentals of using agents for Customer Success

The best part is, despite how much of an impact these tips can have on the quality of your outputs, the actual tweaks to your prompts aren’t too complex. With some small (but smart) changes in the right places, you’ll have the AI delivering exactly what you need for your CS function. So by the end of this module, you’ll be able to:

  • Write and refine clear, actionable prompts that get AI to deliver exactly what you need for CS tasks.
  • Apply advanced prompting techniques, such as prompt chaining and tree of thought, for more complex tasks.
  • Experiment with AI agents to simulate scenarios and get expert feedback on meetings, flows, or customer interactions.

Let's find out how great prompts can make AI your most powerful tool for Customer Success!

Chapter 1: Setting Up AI for Success

Emails that need a dozen rewrites, reports that skip the details you actually care about, or insights that are just vague. If this sounds like the kind of output you get when you ask AI for help, it’s because your prompts are lacking some core components. We’ll outline the simplest fixes for this issue in this chapter. 

The one thing to do before you offer a single prompt

AI has lots of exciting possibilities, but we don’t want to run before we can walk. AI works best when it is fed with data. Training the AI with information about your product, industry, and customers makes sure it understands your business environment, terminology, and customer needs. Without this training, even the clearest prompts will only get you generic or incomplete responses. 

Practical Exercise: Training your LLM 

There’s no better way to learn how to set up your AI than to actually have a go at it! Go open up your favourite LLM like ChatGPT or Gemini, and stick to this easy-to-follow checklist:

Step 1: Gather Core Materials

Collect the foundational documents your AI will need to understand your business context. Examples include:

  • Product guides or feature documentation
  • Onboarding playbooks or account workflows
  • Customer personas and account tiers
  • Past email templates, QBR decks, or reporting formats

Step 2: Feed Iteratively

Upload these materials into your AI platform in small batches. Make sure to tell the AI what the materials are as you upload them. Start with the most critical resources (like the documents in Step 1). Then, to keep your AI up-to-date, gradually add:

  • Recent release notes or feature updates
  • Updated account-level data or customer health metrics
  • Emerging best practices for Customer Success

Step 3: Customize AI Writing Style

You can teach your AI to match your style so it communicates the way you would. 

Most LLMs let you customize writing preferences right inside their settings. For example, in ChatGPT you can open Settings → Personalization → Custom instructions (the exact menu names vary by version) to tell it how you like your writing to sound. This is where you can specify things like:

  • Avoiding em dashes or long, complex sentences
  • Using a friendly but professional tone
  • Keeping responses concise or expanding on detail when needed

If you want to take it a step further, you can even create a custom GPT (or the equivalent in other LLMs). Upload a few examples of your own writing like past emails, reports, or summaries. Then describe the kind of tone and structure you want it to replicate. This helps the AI learn your style, vocabulary, and rhythm. 

Step 4: Test AI Outputs

Run sample prompts using the uploaded content. Test the AI’s responses against real examples, and check if the outputs:

  • Use correct product terminology
  • Reflect an accurate account or customer context
  • Follow your company’s communication style

Step 5: Refine and Add Context

If outputs are off-target, provide clarifications or additional materials. Specify the purpose of each document to help the AI internalize the context for future prompts. 

Example:
Use this document to understand the structure of our accounts and common onboarding challenges for new customers.

By completing this task, your AI will have the “map” it needs to start navigating. The more trained it is to your specific use cases, the faster and more accurately it can deliver quality responses. 

Since your AI is now ready with the right context, it’s time to start crafting prompts. How you ask the AI to use the content you’ve provided really determines whether you get useful results or vague responses. In Chapter 2, we’ll introduce the 5-part framework for prompting, a structured approach that ensures every prompt you write is clear and precise.

A note on data privacy and security

Just a quick reminder before you continue with the course that it’s important to keep your company’s data policies in mind when using AI tools. Make sure you’re only using data that’s approved for external tools.

We recommend using an LLM provider officially approved by your company, or a trusted internal platform where AI is securely embedded, like Velaris. 

This course is designed to help you understand how to use AI effectively, but where and with what data you apply these skills should follow your organization’s compliance guidelines.

Chapter 2: The 5-Part Framework to Effective Prompting

After you’ve trained your AI, the next fundamental step is sticking to a framework when prompting. This framework is based on the official prompting framework developed by Google:

T - Task

C - Context

R - References

E - Evaluate

I - Iterate

You can use this mnemonic to remember it: Thoughtfully Create Really Excellent Inputs. But if you find this hard to remember, use this version we developed that might be easier for a CS professional to recall: Today's Customers Really Expect Insights. 

T - Task

Most people go straight into writing a prompt without a clear vision of what they want the AI to do. This usually results in the AI guessing, and you ending up with a vague or generic output.

Take a moment to define the task. Be explicit about what you want the AI to produce. Are you asking for an email draft, a product recommendation, or an analysis of customer data? What length are you expecting the response to be? Do you want the format of the response to be in bullets, tables, or paragraphs?

The clearer you are about the task, the better the AI can understand and provide a response you like. 

Example:

Write a 200-word email to a customer explaining a delayed rollout of a new feature in their subscription plan, and outline the steps your team is taking to resolve it in bullets afterwards.

Pro Tip: Use action verbs, like “summarize”, “list”, or “outline”, to describe the required action. You’ll get a response more specific to the task you have in mind. 

C - Context

If you’re clear about the task, you’ll get a pretty relevant response. But it won’t be as polished as it would be if you gave the AI context. Without context, responses will usually be technically correct but missing nuance, lacking relevance, or failing to emphasize what matters most to the customer. 

Try adding relevant background information so that the AI has all the details it needs to make informed decisions. Context can look like customer history, product details, tone preferences, or any other relevant facts that help narrow down the scope of the AI’s response.

Example:
The customer is on a Premium subscription plan and was scheduled to receive Feature X last week. The rollout was delayed due to technical updates. The tone should be professional yet empathetic and reassuring. 

Pro Tip: Use personas to guide the AI. For example, starting a prompt with “You are an Operations Analyst” gives the AI context for tone, style, and focus. 

The richer the context, the more tailored and actionable the AI's output will be.

R - References

We often assume the AI is already familiar with the right style, tone, or company standards required in our work. But acting on that assumption means the AI might produce content that’s inconsistent with your brand voice, misses key phrasing conventions, or doesn’t follow your internal processes. 

A fantastic way to guide the AI in the right direction is providing references. 

These could be articles, documents, or specific guidelines. This step is especially helpful when you want the AI to align with your company's tone, style, or specific industry knowledge.

Example:
Provide the AI with a link to the company’s Premium subscription rollout email template or internal communication guidelines for feature delays.

E - Evaluate

Once you've written your prompt, take a moment to evaluate it. A common mistake is waiting until after the output is generated to notice gaps in the prompt, which forces rework and wastes time. 

The goal here is to make sure that your initial prompts are good enough that the AI has everything it needs to provide useful outputs from the get-go. Without a solid initial prompt as a base, you’re going to be stuck in a long back-and-forth with the AI, gradually adjusting the prompt each time it gives an inadequate response. 

To evaluate, there are a few questions you can ask yourself. Does the content of the prompt actually address the intended task? Does the prompt include enough details to guide its response? Is the prompt clear enough that the AI won’t misinterpret it? 

Example:
Evaluate the prompt to check if it requires the AI to highlight the key steps your team is taking to resolve a delayed feature rollout, reflect the account’s Premium subscription status, and maintain a professional yet empathetic tone.

I - Iterate

AI isn’t perfect, which means that even after you get good at evaluating your prompts and refining them, the first response you get might not fully meet your expectations. So the final step you can take is iteration.

This is where you can adjust your prompt based on the output you receive. If the AI missed a key detail or didn’t quite capture the tone you wanted, you can rephrase your prompt or provide more context. Think of this as an ongoing conversation with the AI, where you improve the quality of the results with each iteration.

Example:
Rewrite the email keeping the content professional and informative, but make the tone friendlier and more conversational. Focus more on highlighting the steps our team is taking and the reassurance about next steps.

Pro Tip:

You can even ask the AI for help on how to prompt! Just ask it to suggest or refine prompts for your task, then tweak them based on context and desired output.

Practical Exercise: Apply the 5-Part Framework

Let’s get some practice in creating a fully structured prompt using this framework. We’ll give you a scenario, and you can try applying the framework to a prompt to get the best possible output. Bonus points if you use the mnemonic to remember the framework while doing this exercise!

After you’re done, check out our example prompt to see if you wrote something similar. It doesn’t have to be exactly the same, as long as you’ve followed the framework. 

Scenario: Draft a welcome email series for new customers joining a SaaS platform.

Example Solution

  • Task (T): Generate three email options introducing the platform, explaining key features, and guiding the customer on first steps. Each email should be around 150–200 words.

  • Context (C): You are a Brand Copywriter. The audience is new users; tone should be friendly, approachable, and professional. Highlight value without overwhelming the reader.

  • References (R): Include supporting material like customer persona summaries, previous high-performing welcome emails, and onboarding playbook excerpts that highlight key points to emphasize in the emails.

  • Evaluate (E): Check that each email clearly communicates next steps, aligns with the friendly tone, and differentiates the three options for A/B testing.

  • Iterate (I): Adjust the prompt if the tone is off, content feels repetitive, or the instructions for next steps are unclear.
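If you prompt through an API or keep a shared library of prompts for your team, the framework above can even be encoded as a small helper. Here’s a minimal sketch in Python; the function names and the checks inside `evaluate_prompt` are our own illustration, not part of any specific tool:

```python
def build_prompt(task, context, references=None):
    """Assemble a TCREI-style prompt from its Task, Context, and
    References parts. Evaluate and Iterate happen outside the prompt
    text itself, so only T, C, and R are rendered here."""
    sections = [f"Task: {task}", f"Context: {context}"]
    if references:
        refs = "\n".join(f"- {r}" for r in references)
        sections.append(f"References:\n{refs}")
    return "\n\n".join(sections)


def evaluate_prompt(prompt):
    """A lightweight 'Evaluate' step: flag anything missing before you
    send the prompt to the model."""
    issues = []
    if "Task:" not in prompt:
        issues.append("No explicit task")
    if "Context:" not in prompt:
        issues.append("No background context")
    if len(prompt) < 80:
        issues.append("Prompt may be too thin to guide the model")
    return issues


prompt = build_prompt(
    task="Write a 200-word email explaining a delayed feature rollout, "
         "with the resolution steps in bullets afterwards.",
    context="Premium-plan customer; tone professional yet empathetic.",
    references=["Internal feature-delay email template"],
)
print(evaluate_prompt(prompt))  # → [] (nothing missing, ready to send)
```

The Iterate step then becomes a matter of editing the `task` or `context` arguments and re-running, rather than rewriting the whole prompt from scratch.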

By completing this exercise, you now have hands-on experience in applying the 5-Part Framework to a real Customer Success scenario. Try to make it a habit to follow this structure when prompting.

Now that you’ve learned the foundation of effective prompting with the 5-part framework, it’s time to go more in-depth. In the next chapter, let's take the iteration process we mentioned, and look at some different practical strategies to refine and tweak your prompts. 

Chapter 3: Iteration Methods

AI responses are not always perfect on the first try, and maybe not even on the second. So refinement of prompts is an essential skill.

Iteration allows you to adjust your approach based on the AI’s output. Create a feedback loop, where each time you see the AI’s response, you observe what you like and what you don’t like. Now you can improve your prompt to make the next output closer to what you actually want.

In this chapter, we’re going to look in detail at the different iteration methods you can use to take your prompting skills to the next level. 

1. Clarification of Ambiguities

Sometimes AI is like the coworker who doesn’t get it but nods anyway. Vague or unclear instructions are quite likely to be misinterpreted, and the response you get can vary from slightly off-base to completely irrelevant. 

If the response isn’t what you expected, it’s important to identify where the ambiguity lies and clarify it in your prompt. For instance, if you ask the AI to "create a report on customer usage data," the AI might not know what exactly you're interested in.

Example:

  • Initial Prompt: Create a report on customer usage data.

  • Revised Prompt: Create a report on customer usage data for the past month, focusing on the most used features and breaking it down by customer segment.

In this case, the original prompt is ambiguous because it doesn’t specify the time frame or the specific data points needed. 

Pro Tip: When a prompt feels unclear, try reading it out loud. If it sounds vague to you, it will be vague to the AI too. Adding one or two clarifying instructions in your prompt can dramatically improve precision.

2. Clarify Incorrect Information

Every now and then, the AI will very confidently give you the wrong answer, which is why you have to be prepared to research and fact-check anything that feels off. 

If the AI gives an incorrect or unclear example, asking it to correct itself can lead to more accurate outputs. Point out mistakes directly, so that the AI can recheck information and offer a better response in the next iteration. If possible, it can help to tell the AI exactly what is wrong about its output. 

Example:

  • Initial Prompt: What is the difference between gross and net retention rates?
  • Revised Prompt: You incorrectly defined gross retention as the total revenue retained, without factoring in lost revenue due to churn or downgrades. Try again.

3. Changing the Order of Content in Your Prompt

AI models process information in sequence, so the order in which you present details can influence how the AI understands and prioritizes the information. If the initial response isn't quite right, consider experimenting with the order of the elements in your prompt and see how it affects the output.

For instance, if you provide a background context before specifying the task, the AI might interpret the context as the main focus and give a response that's too broad. On the other hand, starting with a clear task before adding context may help the AI focus on the action you're asking for.

Example:

  • Initial Prompt: A customer has been facing issues with their recent order, and we want to offer a refund. I need to write an email to a customer. The tone should be empathetic and apologetic.
  • Revised Prompt: Write an empathetic and apologetic email offering a refund to a customer who has been facing issues with their recent order.

4. Request More Examples

If the AI provides a generic response, asking for more examples can help the AI elaborate further and offer more targeted insights. This is useful when you need deeper exploration or a variety of options for a given task. You can also add a modifier that specifies what sort of examples you’re looking for.

Example:

  • Initial Prompt: What should I include in my customer success toolkit?
  • Revised Prompt: Give me more examples of tools for customer success, especially for managing customer health.

5. Providing Feedback to AI

After receiving an output, giving feedback on what you liked and didn’t like can help the AI refine its next response. For instance, you can ask it to “be more concise,” “include more examples,” or “adjust the tone to be more formal.”

Example:

  • Initial Output: The AI generates a list of suggestions but the explanations are too long.
  • Revised Prompt: Shorten the explanations and focus only on the key points for each suggestion.

Pro Tip: Use analogies to give the AI more specific feedback. For example, you can say: You described Feature X wrong. Think of Feature X as a gym membership. It encourages regular “workouts” (usage) and builds habits over time.

By iterating based on feedback, you're making the AI more aligned with your preferences and needs.

Applying Iteration to Your Workflows

These iteration methods are particularly useful in scenarios that require a high degree of precision and personalization, such as:

  • Customer Success Communications: Crafting customer emails or responses where tone and empathy matter.
  • Data Analysis and Reporting: Refining summaries of customer health data or usage metrics to highlight the most relevant insights.
  • Strategic Planning: Generating ideas for QBRs, success plans, or upsell opportunities where the output must be aligned with specific business goals.

Pro Tip: Keep a “prompt journal.” Track the prompts you’ve tried, what worked, and what didn’t. Over time, you’ll build a library of effective prompts tailored for your SaaS accounts and CSM workflows.
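A prompt journal can be as simple as a JSON-lines file you append to. Here’s one possible sketch; the field names and the 1–5 quality score are just a suggested convention, not part of any tool:

```python
import datetime
import json


def log_prompt(journal_path, prompt, quality, notes):
    """Append one journal entry as a JSON line. 'quality' is your own
    1-5 rating of the output, so the journal stays easy to filter."""
    entry = {
        "date": datetime.date.today().isoformat(),
        "prompt": prompt,
        "quality": quality,
        "notes": notes,
    }
    with open(journal_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


def best_prompts(journal_path, min_quality=4):
    """Reload only the entries that are worth reusing."""
    with open(journal_path, encoding="utf-8") as f:
        entries = [json.loads(line) for line in f]
    return [e for e in entries if e["quality"] >= min_quality]
```

Over time, `best_prompts` gives you a ready-made library of what has actually worked for your accounts.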

With time, you’ll develop an intuition for how to quickly refine prompts and get the most accurate responses. But even when you feel familiar with the iteration process, don’t forget to keep experimenting and try new ways of adjusting prompts! 

Chapter 4: Multimodal Prompting

The most obvious way to interact with LLMs is through text. You type a question, you get a response. But the data we use in CS comes in all sorts of formats, so text alone isn’t enough if you want richer, context-aware outputs.

That’s where multimodality comes in. In simple terms, multimodality means making prompts that combine different types of inputs, like text, images, tables, and even charts. 

It’s actually a game-changer for CS once you get the hang of it. For example, you could feed the AI your QBR slides, product usage charts, and key account notes all at once, and it could generate an email, summary, or report that takes everything into account. 

Unfortunately, dumping multiple data types without giving the AI any guidance is like giving someone a stack of spreadsheets and saying, “figure it out.” It usually doesn't end well. So let’s talk about the efficient way of doing multimodal prompts. 

How to Do It Right

  1. Combine Inputs Thoughtfully: Provide text and visuals together, but guide the AI on what matters most. For example, attach a usage chart and explain, Focus on the trends for high-touch accounts over the past quarter.

It helps to start small. Combine just two input types first (like text + chart) before adding more. This reduces confusion and helps you see how the AI interprets multimodal data.

Pro Tip: Label each input clearly (e.g., Chart 1: Feature adoption by month) so the AI knows what to reference.

  2. Describe Before Analyzing: Ask the AI to describe the visual first before moving on to the core task. This lets you know if the AI is actually seeing and interpreting the data correctly. 

Example:

Describe the attached product usage chart. Highlight the most active features, any notable drops in usage, and key trends over the past quarter. Then summarize actionable insights for the account.

  3. Specify Relationships: If you provide multiple data sources, clarify how they relate to each other. Tell the AI how they should be used together to produce its output. Missing a clear link between multiple data sources might make the AI treat each input independently and miss important connections. 

Example:

Use the table of login frequency and the chart of feature adoption to highlight which features drive the most engagement.

  4. Ask the AI to Explain Its Reasoning: Sometimes, when the AI seems off the mark with its outputs, you need to hear it “think out loud”. For complex or nuanced analysis of multimodal inputs, have the AI explain its interpretation before generating outputs. This gives you transparency and helps catch misinterpretations early.

Example:

Based on the attached dashboard and usage table, explain your reasoning for identifying which features are underutilized and which show growth.

  5. Guide the Output: Multimodal inputs are only useful if you tell the AI what to produce. Are you summarizing? Drafting an email? Preparing slides? Be explicit, and don’t assume the AI will know which input is more important: tell it!

Example: 

Summarize this data focusing primarily on the chart trends, and use the table to add context.
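If you assemble multimodal prompts programmatically rather than in a chat window, the same labeling advice applies. A small sketch, where the labels and wording are purely illustrative:

```python
def build_multimodal_prompt(instruction, labeled_inputs):
    """Prefix each attachment with a clear label so the model knows
    what it is looking at and which input to prioritize."""
    parts = [instruction]
    for label, description in labeled_inputs:
        parts.append(f"{label}: {description}")
    return "\n\n".join(parts)


payload = build_multimodal_prompt(
    "Summarize this data focusing primarily on the chart trends, "
    "and use the table to add context.",
    [
        ("Chart 1", "feature adoption by month (attached image)"),
        ("Table 1", "login frequency by account (attached CSV)"),
    ],
)
print(payload)
```

The attachments themselves are uploaded through whatever mechanism your LLM platform provides; the point is that the text of the prompt names each one explicitly, so the model never has to guess which input is which.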

Multimodal Prompt Examples in Customer Success

You can find more prompts for CS in Chapter 5, but here are a few examples to get you started on multimodal prompts:

  • Analyzing Feature Adoption: Using the attached product usage table and dashboard screenshot, create a 1-page summary of feature adoption trends for this account over the past quarter. Highlight the top 5 most used features. Keep the summary concise and focused on insights that would be relevant for a QBR.

  • QBR Preparation: Review the attached QBR slides, previous meeting notes, and recent customer comments. Produce a concise QBR report for this account that highlights key wins and suggests actionable expansion opportunities. Organize the report into clear sections: Wins, Risks, Recommendations, and Next Steps.

  • Customer Emails: Using the attached charts showing feature usage trends, and supplementing it with data from the account notes provided, draft a personalized email to the customer. Celebrate their achievements with the most-used features, and gently encourage adoption of underused features. Make the email friendly, professional, and empathetic, and include a clear call-to-action for the next steps.

Multimodality is a powerful way to bring together data from different sources instead of wasting hours piecing it together manually. Eventually the outputs you get will feel like they “get it” in a way text-only prompts never could.

In our next chapter, we’ll look at when human intervention is required, and what that interaction looks like in keeping your AI outputs accurate while still saving time.

Chapter 5: Human-in-the-loop 

AI can do a lot of heavy lifting in Customer Success like drafting emails, summarizing account data, or generating QBR insights, but it’s not perfect. Sometimes it misinterprets context, misses key details, or even introduces errors. That’s why a Human-in-the-Loop (HITL) system is essential: you guide, review, and validate AI outputs to ensure accuracy, relevance, and professionalism.

Understanding AI Limitations

Before diving into HITL, it’s important to acknowledge some common issues:

  • Hallucinations: AI may generate information that isn’t true or doesn’t exist.
  • Biases: AI outputs can reflect biases in the data it was trained on.
  • Context gaps: Without careful input, AI may misinterpret nuanced customer scenarios.

Knowing these limitations helps you apply HITL effectively and avoid sending flawed outputs to customers.

Core HITL Practices

Instead of focusing on step-by-step prompt iteration like in Chapter 3, HITL is about oversight and decision-making. Here’s how to structure it:

  1. Review and Verify: Treat the AI output as a draft. Check for accuracy, tone, and alignment with customer context.
  2. Focus on High-Impact Areas: Prioritize reviewing metrics, key insights, and communication tone over minor wording. Small phrasing changes can be done afterwards.
  3. Provide Feedback: Note mistakes directly in follow-up prompts so the AI improves over time. Telling the AI to “remember” certain preferences you have for certain types of outputs can help speed things along in the future.
  4. Use Structured Checklists: A checklist ensures nothing important slips through. Good news: we’ve got one pre-made that you can adjust to your liking. 

HITL Checklist for CSMs

When using AI in Customer Success workflows, keep this checklist in mind:

  • Ensure AI is suitable for the task. (Yes to summarizing usage data, no to highly strategic or nuanced decisions). 
  • Don’t expose sensitive customer or internal data. Never feed sensitive account details, PII, or proprietary data into AI without safeguards.
  • Always double-check AI outputs before sharing with the customer or your team.
  • Get internal company approval before using AI-generated outputs for client-facing communications or reports.
  • When appropriate, let customers or internal teams know which parts of your work were AI-assisted. 

This checklist helps maintain trust and compliance while still making the most out of AI.
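If you want to make the checklist operational, one option is to encode it as a simple review gate that blocks an output until a human has signed off on every item. A sketch, with the item wording condensed from the list above:

```python
# Condensed from the HITL checklist above; extend for your own workflow.
HITL_CHECKLIST = [
    "Task is suitable for AI",
    "No sensitive customer or internal data exposed",
    "Output double-checked for accuracy and tone",
    "Internal approval obtained for client-facing use",
    "AI assistance disclosed where appropriate",
]


def review_gate(human_answers):
    """human_answers maps each checklist item to True once a person has
    verified it. Returns the unmet items; an empty list means the
    output is cleared to ship."""
    return [item for item in HITL_CHECKLIST
            if not human_answers.get(item, False)]


# Example: everything verified except the disclosure item.
unmet = review_gate({item: True for item in HITL_CHECKLIST[:-1]})
print(unmet)  # → ['AI assistance disclosed where appropriate']
```

The key design choice is that every check defaults to False: an output is never "cleared" by omission, only by an explicit human confirmation.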

HITL Examples in Customer Success

  • Email Review: AI drafts a personalized email to a Premium account. You check the tone, verify that the usage data matches the charts, and ensure the messaging aligns with the account’s context.
  • QBR Summaries: AI creates a 1-page account summary. You confirm highlighted metrics are accurate, clarify insights, and adjust phrasing to emphasize wins.
  • Adoption Insights: AI analyzes feature usage across multiple accounts. You validate trends, add context from customer notes, and correct any misinterpretations.

Pro Tips

  • Keep a reusable HITL template for recurring tasks to streamline reviews.
  • Focus review effort on the outputs that have the highest customer impact.

You’ve learned a lot about the foundations of good prompting. Now it’s time to put it into practice. In the next chapter, we’ll dive into real-world prompt examples for CSMs. Consider it your “prompt toolkit”: everything you’ve learned so far, ready to turn into outputs that actually make your life easier (and your customers happier).

Chapter 6: Prompt Examples for CSMs

It’s time to look at some real examples of prompts tailored to Customer Success workflows. Below, we’ve organized prompts by key CS functions so you can plug them into your AI tools and start generating actionable outputs right away. Each prompt includes a persona, context, and expected output format to make it precise and ready-to-use.

These are prompts designed for tasks that you’ll probably come across in your day-to-day as a CS professional, so they’re likely to be immediately applicable. But you can always tweak them to your liking. 

⚠️ Before you prompt

Remember, AI works best when you have maximized the context you feed it. Before using any of the prompts below, make sure you have trained your LLM by giving it documentation about your product, industry, and customers. Refer back to Chapter 1 for a step-by-step guide on training your LLM.

Onboarding

1. Onboarding Plan Generator

You are an Onboarding Specialist. Create a comprehensive onboarding plan for a new customer account. The account is [insert tier], and the customer’s goals are [insert goals]. Include relevant milestones and ownership details. Follow typical CSM onboarding templates with clear deadlines and responsible owners for each task. Provide a table with three columns: Task/Milestone, Owner, Due Date. Ensure the plan is actionable and easy to follow for the team. Highlight dependencies between tasks where necessary, and make it clear which milestones are critical for a successful onboarding.

References you can provide:

  • Previous onboarding plan templates
  • Sample customer journey maps
  • Project management calendars or Gantt charts

2. Welcome Email Variants

You are a Brand Copywriter. Draft three variations of a welcome email for a new customer in the [insert industry] sector. The customer is new to the platform, and the desired tone is [insert tone, e.g., friendly, professional, energetic]. Include references to their account tier or goals if relevant. Follow previous high-performing onboarding email templates for structure and personalization. Provide three distinct email drafts, each concise (150–200 words), with subtle differences in tone or phrasing suitable for A/B testing. Ensure the emails are clear, welcoming, and actionable, and highlight the next steps the customer should take.

References you can provide:

  • Past successful welcome email examples
  • Brand voice guidelines or style guides
  • Customer persona documents

3. Onboarding Health Score Rubric

I want to create a health score rubric for my customers. Their industry is [industry] and our product helps them [product value proposition]. Create a health score rubric to assess the onboarding progress of new customer accounts. The key onboarding metrics we track are [your metrics]. Follow best practices for weighted scoring models used in Customer Success to define thresholds for Healthy, At Risk, and Critical accounts. Provide a weighted formula for the health score, define thresholds for each health level, and explain how each metric contributes to the overall score. Ensure the rubric is actionable, clearly highlights risk areas, and can be applied across multiple customer tiers consistently.

References you can provide:

  • Existing health score frameworks
  • Historical onboarding performance data
  • Account tier definitions and benchmarks
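To make the weighted-scoring idea concrete, here is what the rubric this prompt asks for might look like once you implement it. The metrics, weights, and thresholds below are placeholders to replace with your own:

```python
# Illustrative metrics and weights; swap in the ones your team tracks.
WEIGHTS = {
    "setup_completion": 0.40,  # % of onboarding tasks completed (0-100)
    "feature_adoption": 0.35,  # % of key features in use (0-100)
    "engagement":       0.25,  # % of invited seats active (0-100)
}


def health_score(metrics):
    """Weighted average of 0-100 metric values."""
    return sum(metrics[name] * weight for name, weight in WEIGHTS.items())


def health_level(score, healthy=75, at_risk=50):
    """Placeholder thresholds; tune them against historical data."""
    if score >= healthy:
        return "Healthy"
    if score >= at_risk:
        return "At Risk"
    return "Critical"


score = health_score({"setup_completion": 90,
                      "feature_adoption": 60,
                      "engagement": 40})
# 90*0.40 + 60*0.35 + 40*0.25 = 36 + 21 + 10 = 67, i.e. "At Risk"
```

Asking the AI for the rubric first, then sanity-checking its weights and thresholds against a simple calculation like this, is a quick way to verify the formula it proposes actually behaves as described.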

Adoption & Engagement

1. Use-Case Mapping Workshop Plan

I am a CSM who needs to run a use-case mapping workshop. Design a use-case mapping workshop plan based on the jobs-to-be-done (JTBD) framework for our customer accounts. Produce a structured outline that can be directly imported into a Miro board, including each workshop activity, its purpose, and the sequence of exercises. Highlight key discussion points and deliverables for each session to ensure participants can leave with actionable insights. Make the plan detailed enough to run the workshop smoothly, but clear enough for easy adaptation.

References you can provide:

  • Previous workshop agendas or Miro board templates
  • JTBD framework documentation or examples
  • Customer personas or historical use cases
2. Champion Enablement Pack

Create a comprehensive champion enablement pack that includes slide decks, FAQ content, and email snippets for customer champions. The materials should support adoption and engagement, clearly explaining key features, best practices, and actionable next steps. Organize the output so each component can be used independently or together, making it easy for customer champions to onboard their teams and drive engagement effectively.

References you can provide:

  • Previous champion enablement materials (slides, FAQs)
  • Product feature documentation or internal guides
  • Example customer communications for reference
3. Quarterly Adoption Campaign

Develop a quarterly adoption campaign plan for our product, including a timeline, the communication channels to use, and measurable KPIs for tracking success. Make it actionable, showing which activities happen when, who is responsible for execution, and how progress will be measured. Include recommendations for maximizing feature adoption and engagement, tailored to high-priority accounts.

References you can provide:

  • Past quarterly adoption campaign plans
  • Product usage data or engagement reports
  • Marketing channel guides and KPI benchmarks

Support & Operations

1. Post-Incident Customer Notes 

You are an Account Manager. Summarize the recent customer incident described in the attached reference materials by producing a detailed root cause analysis (RCA) and a prevention plan. Include a clear description of what went wrong, contributing factors, and immediate corrective actions taken. Then outline recommendations to prevent similar issues in the future, including process improvements, monitoring steps, and any communication actions needed with the customer. Make the output structured and actionable so it can be shared directly with internal teams and referenced for future incidents.

References you can provide:

  • Incident logs, support tickets, emails, and Slack messages detailing the incident and subsequent events
  • Internal incident response guidelines
  • Previous RCA reports
2. Support Trend Digest

You are an Operations Analyst. Analyze the support tickets from the past week based on the attached tags and categories, and generate a digest highlighting the top five shifts or trends in customer issues. For each trend, include a brief description, potential causes, and recommended actions for the support team to address recurring problems. Make the digest concise, actionable, and easy for the team to review during weekly operations meetings.

References you can provide:

  • Historical support ticket data
  • Tagging or categorization guidelines
  • Previous trend analysis reports
3. Premium Support Entitlement Matrix

Create a premium support entitlement matrix that clearly maps each support tier to its corresponding benefits and service levels. Include details such as response times, dedicated support channels, and any additional perks for higher-tier customers. Present the matrix in a table format that can be used internally for account planning and externally to communicate entitlements to customers.

References you can provide:

  • Current support tier definitions and benefits
  • SLA or service-level documentation
  • Previous entitlement matrices or internal templates

Renewals & Expansions

1. Renewal Runway Plan

You are a Revenue CSM. Create a detailed renewal runway plan for the account, covering the timeline from T–120 to T–30 days before contract renewal. Include tasks for each stage, assign owners, and specify deadlines. Highlight critical actions such as customer check-ins, contract reviews, and risk mitigation steps. Make the output structured and actionable so the team can follow it step-by-step to ensure a smooth renewal process.

References you can provide:

  • Previous renewal runway plans
  • Account tier and contract details
  • Past renewal communications or templates
2. Commercial Proposal Explainer

Write a 150-word rationale explaining the commercial proposal for this account. Clearly outline the reasoning behind pricing decisions, value considerations, and any customizations for the customer. Ensure the explanation is concise, persuasive, and suitable for internal review or sharing with stakeholders.

References you can provide:

  • Sample commercial proposals or pricing notes
  • Pricing guidelines or frameworks
  • Account-specific context (tier, usage, previous agreements)
3. Upsell Hypothesis Builder

You are a Product Advisor. Using the list of modules the customer has not yet adopted, generate an upsell hypothesis assessment. Identify which modules are the best fit for the customer based on usage patterns and potential business impact. Prioritize opportunities and include reasoning for each recommendation so the account team can make informed decisions on next steps.

References you can provide:

  • Customer usage data and module adoption history
  • Product documentation for each module
  • Past upsell campaign results or templates

Voice of Customer & Surveys 

1. Survey Question Bank

You are a Research Ops specialist. Create a set of ready-to-use survey questions for customer feedback. All questions should be closed-ended, use a Likert scale, and avoid bias. Include questions that cover product satisfaction, adoption, and engagement. Ensure the questions are clear, concise, and actionable, so they can be deployed directly in a customer survey.

References you can provide:

  • Previous survey question banks
  • Customer personas or segment definitions
  • Survey design guidelines or best practices
2. Thematic Clustering of Comments

Analyze the provided customer comments and cluster them into themes. For each theme, provide the number of mentions, a brief description, and representative verbatim examples. Ensure the output clearly highlights the key areas of concern, praise, or feature requests to inform actionable insights for the Customer Success team.

References you can provide:

  • Raw customer feedback or comment datasets
  • Previous thematic analysis reports
  • Categorization or tagging guidelines
3. NPS Follow-Up Drafts

You are a Relationship Manager. Draft follow-up emails for customers based on their NPS scores. Create three sets of emails tailored for detractors, passives, and promoters. Each email should be personalized, professional, and aligned with the company’s tone, encouraging engagement, feedback, or further action as appropriate for each group.

References you can provide:

  • Previous NPS follow-up email examples
  • Brand voice and tone guidelines
  • Customer segment information (tier, account type, industry)
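The three groups in this prompt follow the standard NPS cutoffs (0–6 detractors, 7–8 passives, 9–10 promoters). If you ever need to route respondents to the right draft programmatically, the logic is tiny; the account names below are made up:

```python
def nps_segment(score: int) -> str:
    """Standard NPS cutoffs: 0-6 detractor, 7-8 passive, 9-10 promoter."""
    if score <= 6:
        return "detractor"
    if score <= 8:
        return "passive"
    return "promoter"

# Route each respondent to the matching email draft (accounts are invented).
responses = {"Acme": 9, "Globex": 4, "Initech": 7}
segments = {name: nps_segment(s) for name, s in responses.items()}
# segments == {"Acme": "promoter", "Globex": "detractor", "Initech": "passive"}
```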

QBR/EBR 

1. QBR Outline by Persona

Create a QBR agenda tailored for executive stakeholders, including CFO, CTO, and VP-level participants. Provide agenda variants for each persona that highlight metrics, achievements, risks, and strategic recommendations relevant to their focus areas. Ensure each agenda is concise, actionable, and aligned with best practices for executive meetings.

References you can provide:

  • Previous QBR agendas
  • Company reporting templates or slide decks
  • Executive stakeholder notes or role-specific insights
2. QBR Follow-Up Email

You are an Account Manager. Draft a follow-up email after a QBR meeting. Include a recap of the key discussion points, agreed-upon mutual actions, and links to relevant reference materials or reports. Make the email clear, professional, and actionable, so the recipient can quickly understand next steps and priorities.

References you can provide:

  • Meeting notes or recordings
  • Previous follow-up email examples
  • Reference links to QBR dashboards or reports
3. EBR Template Personalization

Using the customer’s industry as context, personalize an Executive Business Review (EBR) template. Include tailored proof points, industry-specific insights, and messaging that aligns with the customer’s strategic priorities. Ensure the output is ready for use in an executive presentation or report.

References you can provide:

  • Previous EBR templates
  • Industry benchmarks or market research
  • Customer-specific performance data or case studies

Product Feedback & Roadmap

1. Feedback De-dup & Merge

You are a Product Operations specialist. Review the provided customer feedback and merge duplicate requests into a single canonical request. Include relevant metadata such as request type, number of mentions, priority, and source. Ensure the output is clean, structured, and ready to be used for product planning and decision-making.

References you can provide:

  • Raw customer feedback datasets
  • Previous canonical request templates
  • Product roadmap or prioritization guidelines
2. Impact vs. Effort Matrix

You are a Product Manager. Analyze the provided list of potential product initiatives and create an impact vs. effort matrix. Rank each initiative based on its expected business impact and implementation effort, and assign scores for both dimensions. Present the output in a clear, structured list that can guide prioritization decisions.

References you can provide:

  • Historical impact vs. effort matrices
  • Initiative descriptions and business impact estimates
  • Company prioritization frameworks or scoring guidelines
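If you want to double-check the AI’s ranking, the underlying math is simple: score each initiative on both dimensions, then sort by the impact-to-effort ratio. A sketch with invented initiatives and assumed 1–5 scores:

```python
# Impact vs. effort ranking sketch. Initiatives and 1-5 scores are invented.
initiatives = [
    {"name": "SSO integration", "impact": 5, "effort": 4},
    {"name": "In-app tips", "impact": 3, "effort": 1},
    {"name": "Custom reports", "impact": 4, "effort": 3},
]

# Rank by impact-to-effort ratio so quick wins float to the top.
ranked = sorted(initiatives, key=lambda i: i["impact"] / i["effort"], reverse=True)
# Order here: In-app tips (3.0), Custom reports (~1.33), SSO integration (1.25)
```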
3. Roadmap Note to Customer

You are a Product Manager. Draft a roadmap update note for the customer, clearly communicating progress on features, upcoming releases, and any changes in timelines. Be transparent about expectations, highlight key updates, and keep the tone professional and informative. Ensure the note is concise, actionable, and suitable for sharing directly with the customer.

References you can provide:

  • Previous roadmap update notes
  • Product release schedules or timelines
  • Customer account details and context

Documentation & Content

1. Help-Center Article Drafts

You are a Tech Writer. Draft a help-center article that clearly explains [insert topic or feature]. Structure the article with headings, subheadings, and step-by-step instructions. Include GIFs or placeholders for visuals where appropriate to illustrate each step. Ensure the content is concise, easy to follow, and ready for publication on the help center.

References you can provide:

  • Existing help-center articles or templates
  • Product documentation and feature guides
  • Style guides or branding guidelines
2. Role-Based Quickstart Sheets

You are an Enablement specialist. Create quickstart sheets tailored to different roles: Admin, Analyst, and End-User. Each sheet should provide step-by-step instructions for getting started, key tips, and best practices specific to the role. Format the output so that each role’s sheet can be used independently, with clear headings and concise, actionable content.

References you can provide:

  • Existing quickstart guides or templates
  • Role-specific process documentation
  • Product usage instructions or screenshots
3. Release Notes Summary

You are a Product Marketing Manager (PMM). Summarize the latest release notes into scannable bullet points suitable for internal and external stakeholders. Highlight new features, improvements, and bug fixes. Keep the content concise, clear, and structured so readers can quickly understand the key updates and their impact.

References you can provide:

  • Full release notes documents
  • Previous release note summaries
  • Product feature documentation and change logs

We’ve covered a wide range of prompts tailored to Customer Success workflows. With all the valuable outputs you can generate using these prompts, you have the potential to stand out in your organization. But that potential will only be noticed if you can communicate your work clearly. Chapter 7 will show you how to take that AI-generated analysis and turn it into polished, compelling presentations and slide decks. You’ll learn how to highlight what matters most, and make sure your data has real influence across teams so your work gets noticed and acted on.

Chapter 7: Presentations

Insights you uncover are much more valuable if you can communicate them clearly and persuasively to customers, internal teams, and executives. Luckily, AI can help you out with that too. 

You can create accurate and professional-looking slide decks very quickly with AI, as long as you know the right instructions and structure to provide. In this chapter, we’ll show you how to get AI to help you create presentations that are clear, to the point, and actually useful for making decisions.

How to generate presentations correctly

1. Be Explicit About the Audience

You wouldn’t want to bombard the average customer with high-level technical details on your slides, and AI outputs will vary depending on the intended reader. So specify who will consume the presentation and their level of expertise, and indicate the appropriate level of detail and tone.

Example:

Create slides for a VP-level audience focusing on adoption trends and churn risk. Don’t focus on technical implementation details. Keep the tone conversational and easy to understand.

2. Attach supporting information

Provide the AI with relevant context such as datasets, business context, and your business goals. Without this, the AI might produce generic statements or insights that don’t align with the story you want to tell. Attaching context also helps the AI prioritize the right metrics, select meaningful visuals, and structure the presentation logically from summary to recommendations.

Example Prompt:

You are a CSM creating a QBR slide deck for a VP of Customer Success. Using the attached dataset of account adoption metrics and churn risk, along with our business goals for increasing upsell in the next quarter, generate a slide deck outline. Include slide titles, key bullet points, tables, and charts, making sure the content focuses on the metrics that matter most for decision-making.

3. Specify the Format

If you just say, “give me a presentation”, you’ll probably get an ugly block of text on all your slides. Be clear about the format you want the slides to use to convey information. Think about how you want the presentation structured, and relay that to the AI.

  • Do you want bullet points, tables, charts, or diagrams?
  • Do you prefer summary or detailed slides?
  • What should be the sequence of slides?

Example:

You are a CSM creating a QBR slide deck for a VP of Customer Success. Using the attached account data, generate a slide deck outline with the following structure:

  • Each slide should have a clear title
  • Include 3–5 concise bullet points per slide
  • Include tables for metrics like feature adoption and renewal risk
  • Include charts for trends over time
  • The deck should start with a summary slide and follow with detailed slides for each account or feature
  • Suggest a logical flow from overview to insights to recommendations
4. Iterate and Review

Treat the initial AI output as a draft, because the deck will probably have a fair number of imperfections. Start by checking for clarity and accuracy; make sure each slide communicates its intended message and presents correct data.

Next, assess the relevance of each slide. Remove any content that doesn’t contribute to the key insights or decision points. Don’t hesitate to ask the AI to improve or fill in gaps. 

Iteratively refining the deck with AI saves time compared to starting from scratch and ensures that your final presentation is polished and actionable. 

Here are some prompts that can come in handy when iterating:

  • Simplifying Complex Slides:
    The slides have multiple overlapping metrics. Revise the deck so each slide focuses on one key insight, simplify complex charts, and highlight the top three KPIs per slide.

  • Tailoring Tone and Messaging:
    Adjust the slides to make the tone more executive-friendly: concise bullet points, clear headlines, and actionable recommendations. Remove overly technical language and jargon.
  • Refining Visual Emphasis and Insights:
    Review the slides and revise them to emphasize the most important trends and metrics. Adjust the charts so the key insights are immediately clear, reorder slides to tell a logical story from summary to recommendations, and add short annotations explaining why each metric matters for decision-making.

Pro Tip:

  • AI has a tendency to generate multiple slides with overlapping content, so ask the AI to merge slides to reduce redundancy. 

You’re now well on your way to understanding the basics of AI prompting. But what if you could push your AI even further? Can you get it to reason, plan, and handle more complex tasks like a seasoned CSM? In Chapter 8, we’ll dive into Advanced Prompting Techniques, exploring approaches that help you squeeze more precision and creativity out of your AI. 

Chapter 8: Advanced prompting techniques

Most of your daily tasks can be handled with the fundamentals you learned in the previous chapters. But sometimes your tasks are more complex, like summarizing dense call transcripts, planning multi-step initiatives, or reasoning through abstract problems. 

In these situations, your standard prompting methods may not be enough to get what you want. But AI doesn’t have "Intelligence" in the name for nothing. With some lesser-known but highly effective advanced prompting methods, you can achieve high-value outputs even for complex tasks. Let’s run through some high-level prompting techniques in this chapter.

Prompt Chaining

Occasionally, when you give the AI a prompt that is long or complex, it’ll address some parts of the input and ignore others when generating its response. The answer to this is prompt chaining.

Prompt chaining involves breaking a complex task into smaller, sequential prompts. Instead of asking the AI to do everything in one go, you feed outputs from one prompt into the next. This allows the AI to focus on one step at a time and helps avoid missing details. It also has the added benefit of giving you the output of each smaller step separately, making the process easier to follow.

Examples:

Example 1:

  • Step 1: Summarize key points from the QBR transcript by account.
  • Step 2: Identify risks or opportunities mentioned in each account summary.
  • Step 3: Generate a list of recommended next steps for the account team based on the identified risks and opportunities.

Example 2:

  • Step 1: Identify accounts with upcoming renewals.
  • Step 2: Analyze usage, health scores, and engagement for each account.
  • Step 3: Suggest personalized actions and messaging for each account to maximize renewal likelihood.
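Conceptually, a chain is just a small pipeline in which each model call consumes the previous call’s output. A sketch of the renewal chain, with `llm` as a stand-in stub for whatever model API you actually use:

```python
# Prompt chaining sketch: each step's output feeds the next prompt, so the
# model handles one focused task at a time. `llm` is a placeholder for
# whatever model API you actually use; here it just echoes for illustration.
def llm(prompt: str) -> str:
    return f"<model response to: {prompt.splitlines()[0]}>"  # stub

def renewal_chain(account_data: str) -> str:
    step1 = llm(f"Identify accounts with upcoming renewals.\n{account_data}")
    step2 = llm(f"Analyze usage, health scores, and engagement for each account.\n{step1}")
    return llm(f"Suggest personalized actions to maximize renewal likelihood.\n{step2}")
```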

Chain-of-Thought Prompting

Ever had a situation where AI gives you an answer so weird, you have no idea how it came up with it? Or a time when the output seemed logical, but you just wanted to make sure the AI hadn’t missed anything important?

Say hello to chain-of-thought prompting, where you ask the AI to explain its reasoning step by step. The explanation itself is useful to check for logical gaps, but just asking it to do this has been shown to improve the AI’s ability to deal with reasoning-based problems. 

Examples:

  • Analyze the attached feature adoption data and explain step by step if these accounts are at significant risk of churn. If they are, explain why that is true. Include the metrics considered, thresholds used, and reasoning behind each risk assignment.
  • Segment accounts based on adoption, support tickets, and NPS scores. Explain your reasoning for each segment and highlight any potential anomalies or exceptions that require manual review.

Tree-of-Thought Prompting

Tree-of-thought prompting is the AI version of a human method of problem solving we subconsciously use all the time. It involves considering several partial solutions, evaluating them, and backtracking if a path is unlikely to lead to a useful outcome. 

This is helpful for abstract or multi-branch problems, like planning success programs, creating help documentation, or structuring playbooks. It encourages the AI to explore multiple possible approaches before converging on a solution. And there are sure to be insights you can glean from the reasons the AI gives for rejecting or choosing a particular solution.

Examples:

  • Develop a success plan for a customer that includes onboarding, adoption campaigns, and upsell opportunities. First, outline three possible approaches for each area. Then, evaluate the pros and cons of each approach and recommend the most effective combination.
  • Plan a quarterly adoption campaign for mid-tier accounts. Suggest three alternative campaign strategies with timelines, communication channels, and key metrics. Then analyze which strategy is most likely to drive adoption given resource constraints and historical data.

With these advanced prompting strategies under your belt, you’ll be much better prepared to tackle complex, multi-step, or abstract tasks.

Speaking of advanced prompting, it’s time to take your skills to the next level of AI: agents. In Chapter 9, we’ll learn how to set up AI agents that can simulate roles, provide expert feedback, and automate tasks for Customer Success teams. 

Chapter 9: Agent Basics

The traditional idea of AI as just a bot that replies to your questions is outdated, thanks to one of the most exciting developments in AI: the introduction of agents. Agents are essentially virtual teammates, and in the fast-moving function of Customer Success, which often operates with limited resources, they’re an absolute godsend.

Unlike the standard process of interacting with AI, where you give a prompt and get an output based on it, agents can simulate roles, provide ongoing feedback, and handle multi-step processes autonomously. Learning how to get the most out of agents will help you refine your strategies, fine-tune your work, and make smarter decisions. 

In this chapter, we’ll cover four types of agents that are easy to set up, but incredibly useful for CS.

1. Simulation Agents

Simulation agents are designed to act as a stand-in for a human role, letting you practice, plan, or test different approaches in a risk-free environment. They are especially useful for preparing for meetings, strategizing account plans, or rehearsing customer interactions.

The big advantage of simulation agents is scalability: they let you explore multiple scenarios quickly and efficiently while gathering feedback on your approach in each situation. This helps you prepare for a variety of situations without extra manual effort.

How to Implement:
  1. Define the agent’s persona: Decide the role it should simulate, e.g., customer, sales coach, or onboarding specialist.

  2. Provide context: Give the agent relevant customer history, account data, or scenario details.

  3. Set clear instructions: Specify the type of advice, tone, and outputs expected.

  4. Iterate: Test different prompts and refine them to get outputs that match your real-world expectations.

Example Prompts:

  • You are a customer considering renewing your subscription. Using the attached account and feature usage data, simulate a conversation where you express concerns about feature adoption. Respond naturally to my answers and include possible objections and questions.
  • You are a recruiter interviewing candidates for a Customer Success Manager role. Using the attached job description and candidate profiles, simulate a live interview. Ask realistic questions about experience, problem-solving, and account management skills, and provide feedback on how well each candidate’s answers demonstrate fit for the role.
  • You are a customer being considered for an upsell. Using the attached account and usage data, simulate a conversation with me, the CSM. Express realistic concerns, ask questions about features, and respond naturally to my suggestions. Highlight objections, interest areas, and potential barriers to purchasing additional modules.

Practical Exercise: Simulation Agent

Creating an agent that works well for a situation can be tricky, so let’s give it some practice. Try designing an agent for the following scenario, and check the example prompt to see if you’re on the right track. 

Scenario: You are a new CSM managing a portfolio of mid-market accounts. One of your customers is approaching renewal, and you want to prepare for a conversation to encourage them to adopt a new premium feature. You want to anticipate potential objections, questions, and concerns from the customer, and test different ways to present the upsell.

Example Prompt:

Step 1: Define Persona
You are a customer considering renewing your subscription.

Step 2: Provide Context
The customer has been using the platform for 12 months, has moderate adoption of existing features, and has shown interest in but not yet adopted Feature X.

Step 3: Set Instructions
Simulate a conversation where you express concerns about adopting Feature X. Respond naturally to my answers, raise realistic objections, ask clarifying questions, and provide feedback as a cautious but interested customer. Include potential objections and interest areas, and highlight any barriers to purchasing additional modules.

Step 4: Iterate

  • Adjust the prompt if the agent is too easy-going or unrealistic.

  • Add new context or constraints (e.g., budget limitations, internal approvals needed).

  • Test multiple simulations to see how different approaches affect the customer’s responses.

2. Expert Feedback Agents

Expert feedback agents function as a virtual advisor, reviewing your work and providing guidance or corrections. They are perfect for QBR decks, email drafts, success plans, or operational workflows.

It’s like having a seasoned teammate looking over your shoulder (minus the judgement). They make sure your slides, emails, and reports meet company standards while catching gaps, errors, or opportunities you might have missed. No more spending hours revising. On top of that, agents can advise on your approach and strategy, helping you capitalize on every opportunity that presents itself to you. 

How to Implement:
  1. Define the agent’s expertise: Decide whether it should act as a CSM coach, product advisor, or workflow analyst.

  2. Provide reference materials: Share templates, style guides, prior examples, or playbooks so the agent understands standards.

  3. Set expectations: Clearly define what you want reviewed: strategy, tone, structure, metrics, clarity, etc.

  4. Iterate feedback loops: Ask the agent to provide recommendations, adjust outputs, and refine until the results meet your needs.
Example Prompts:
  • You are an experienced CSM. Using the attached account list and health metrics, advise me, a new CSM managing 50 mid-market accounts, on how to prioritize accounts, organize my daily workflow, and ensure nothing falls through the cracks. Highlight tools, strategies, and systems that help manage multiple accounts efficiently, and suggest routines for tracking progress and risks.
  • You are a sales coach. Review the attached pitch deck for an upsell and suggest improvements. Highlight any objections or questions the customer may have that I have missed. 
  • You are a conversion copywriter. Review this upsell recommendation email and provide feedback to make the messaging more persuasive and aligned with customer context.
  • You are an operations analyst. Review this adoption report and flag any inconsistencies in the data or presentation format. Suggest ways to make insights more actionable for the account team.

3. Solutions Engineer Agents

Sometimes, in addition to role-playing and reviewing, you need the AI to build with you. That’s what the Solutions Engineer Agent is for. Technical teams can get swamped with work, and waiting on them for every setup or configuration can create a lot of delays. 

This agent acts like a technically skilled teammate who understands both your product and your customer’s goals. You can use it to design workflows, configure features, or troubleshoot complex setups directly inside your CS tools.

Using this agent helps you prototype solutions faster, translate customer needs into product workflows, validate ideas before involving engineering, and overall feel more confident in technical conversations with clients. 

How to Implement

1. Feed product information
Give the agent all the context it needs to “learn” your system. There’s no need to share proprietary code; provide the kinds of materials a skilled solutions engineer would reference:

  • Use cases and feature overviews
  • Help desk articles or how-to guides
  • Miro board diagrams, solution flows, and configuration checklists
  • Sheets or tables showing workflows or logic steps

2. Add customer context
Once the agent understands your product, feed it details about your customer:

  • Their business goals and success criteria
  • What they’re trying to achieve with your product
  • Publicly available context such as case studies or customer stories

3. Ask it to build with you
Now you can start using the agent as a technical partner. Ask it to:

  • Recommend how to configure a feature based on customer goals
  • Design a workflow in your product that solves a specific pain point
  • Suggest automation setups or integration options
  • Create diagrams or outlines for internal solution documentation
Example Prompts

Prompt 1:
You are a Solutions Engineer familiar with [Product Name]. Using the attached product setup guides, Miro workflows, and customer use case summary, recommend the best way to configure Feature X for a customer whose goal is to automate reporting and reduce manual data entry. Include steps, dependencies, and potential risks.

Prompt 2:
You are a Solutions Engineer helping a CSM design a solution for a customer in the finance industry. The customer’s goal is to improve visibility into account performance. Using the attached documentation and customer background, suggest a dashboard setup and integration plan that would best meet their needs.

4. Communications Feedback Agent

Every great CSM also has to be a good storyteller; they need to be someone who can communicate clearly, confidently, and with impact. The Communications Feedback Agent is a more specialized agent that helps you build that skill, like a coach sitting in on every customer call.

This agent reviews your calls, notes, or transcripts and gives you practical, personalized feedback on your communication style. It can help you spot filler words, repetitive phrasing, or areas where your message loses focus.

If you want to sound more concise, credible, and confident in every customer conversation, give this agent a go. 

How to Implement

1. Feed communication data
Upload your own call transcripts, meeting summaries, or customer call recordings. You can also add:

  • Email threads or Slack messages
  • Meeting notes or follow-up summaries

This gives the agent real examples of how you communicate, so it can analyze your tone, clarity, and habits.

2. Define what “great” looks like
Feed the agent frameworks or communication standards that align with your role. Examples would be:

  • Soft skill benchmarks like listening and pacing. Do you interrupt or dominate the conversation, or do you allow space for the customer to share? 
  • Customer Success communication standards like value articulation through linking product benefits to the customer’s business goals. 
  • Internal communication guidelines like tone of voice or customer empathy standards.
  • Selling frameworks like MEDDPICC, SPIN Selling, or Challenger to analyze whether you are collecting the necessary information.

3. Set feedback goals
Tell the agent what you want to improve on. It could be reducing filler words (“um,” “like,” “you know”), improving pacing, or strengthening how you summarize customer value.

4. Iterate and gamify
Have the agent score each call, track improvement over time, and suggest a small challenge for your next meeting. Try challenging yourself to reduce filler words by 20% or keep your talk ratio below 50% on calls. 

You can even ask it to create a personal communication dashboard or habit tracker.
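If you're curious what the agent is actually measuring, the core metrics are simple to compute yourself. Below is a minimal, hypothetical Python sketch that counts filler words and estimates talk ratio from a transcript. The "Speaker: utterance" line format and the filler-word list are assumptions; adjust both to match your own transcripts and speech habits.

```python
import re
from collections import Counter

# Hypothetical filler-word list; tune it to your own speech habits.
FILLERS = {"um", "uh", "like", "you know", "basically", "actually"}

def analyze_transcript(transcript: str, speaker: str = "CSM") -> dict:
    """Count filler words and estimate talk ratio for one speaker.

    Assumes each transcript line is formatted as 'Speaker: utterance'.
    """
    my_words, total_words = [], 0
    for line in transcript.strip().splitlines():
        if ":" not in line:
            continue
        who, text = line.split(":", 1)
        words = re.findall(r"[a-z']+", text.lower())
        total_words += len(words)
        if who.strip() == speaker:
            my_words.extend(words)
    # Single-word fillers are a direct membership check.
    fillers = Counter(w for w in my_words if w in FILLERS)
    # Multi-word fillers like "you know" need a phrase-level search.
    phrase = " ".join(my_words)
    fillers["you know"] = len(re.findall(r"\byou know\b", phrase))
    talk_ratio = len(my_words) / total_words if total_words else 0.0
    return {"talk_ratio": round(talk_ratio, 2), "fillers": dict(fillers)}
```

A talk ratio above 0.5 means you spoke more than the customer did, which maps directly onto the "keep your talk ratio below 50%" challenge above.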
Example Prompts

Prompt 1:
You are a Communications Coach for Customer Success Managers. Using the attached call transcript, analyze my speech patterns. Count filler words, repetitive phrases, and talk-to-listen ratio. Provide an improvement plan with specific goals for the next 30 days and suggestions for more confident phrasing.

Prompt 2:
You are a CSM communications expert trained in the MEDDPICC framework. Review the attached discovery call transcript and highlight where I successfully covered the MEDDPICC elements and where I could have gone deeper. Suggest 3 specific questions I could ask in future calls to uncover more value.

Prompt 3:
Using the attached meeting transcript, identify moments where my explanations were too long or unclear. Recommend simpler or more engaging alternatives for key parts of the conversation, and list power phrases I can use to sound more confident.

By now, you’ve learned a lot about prompting, like how to structure prompts, apply advanced techniques, and set up AI agents. But before we wrap up, we’ve got a final bonus chapter to check out. In Chapter 10, we’ll cover LLM selection, showing you how to choose the right model for different Customer Success tasks, what to consider when evaluating options, and how to match a model’s capabilities to your team’s needs. Think of it as the cherry on top; everything you’ve learned so far works even better when paired with the right AI engine.

Bonus Chapter 1 - LLM Selection

Choosing the right Large Language Model (LLM) shapes the quality, speed, and reliability of your outputs, so it’s worth understanding what makes one LLM better suited than another for specific Customer Success tasks.

Not all LLMs are created equal. Some excel at reasoning and multi-step planning, while others are faster and more cost-efficient for straightforward tasks. Selecting the wrong model can lead to outputs that are inconsistent, less accurate, or require more iterative refining. Choosing the right LLM ensures smoother workflows, faster turnarounds, and higher-quality insights you can confidently share with customers, executives, or cross-functional teams.

Key Considerations When Choosing an LLM

  1. Task Fit: Reasoning models are ideal for complex, multi-step tasks, like planning success programs or analyzing dense call transcripts. GPT-style models are faster and more cost-efficient, but benefit from very explicit instructions.
  2. Model Size & Trade-offs: Large models are better at understanding complex prompts and problem-solving across domains, while smaller models are faster and cheaper but less capable with intricate tasks.
  3. Consistency: Sticking to the same model for recurring workflows ensures consistent behavior and reliable outputs.
  4. Steerability & Instructions: GPT-5 in particular responds best to well-specified prompts, especially when they clearly include the logic, data, and constraints required to complete the task.

Example LLM Choices for Common CSM Tasks

Table 1

  CS Task                              | Recommended LLM                                                          | Why It Fits
  Customer Communication & Templates   | GPT-style models (fast, cost-efficient)                                  | Clear, natural language for emails, follow-ups, and success stories, but requires explicit instructions to keep tone and messaging aligned.
  QBR & Adoption Reports               | Models with strong analytical reasoning, like Claude                     | Handles complex multi-step analysis, dense transcripts, and multi-account comparisons.
  Multimodal Workflows                 | Google Gemini and Gemini Flash, known for strong multimodal support      | Combines text with tables, charts, or screenshots for accurate insights.
  Simulation & Feedback Agents         | Claude + GPT-5 steerable models; Claude Sonnet can build more complex agents | Supports role simulation, multi-step guidance, and iterative feedback loops.
  Data Analysis & Insights             | Claude + GPT-5 steerable models                                          | Extracts patterns, flags anomalies, and provides actionable next steps across datasets.

Making Your Choice Actionable

Experimenting with each LLM is well worth your time, since it can have a huge impact on your workflow. Here’s a structured method you can follow:

  1. Start with your most common tasks: Focus on workflows where AI will have the biggest impact.
  2. Test multiple models: Run the same prompt across reasoning and GPT-style models to see which produces the clearest, most actionable output.
  3. Iterate with feedback: Refine prompts and provide context to see how each model responds, particularly on multi-step tasks or simulations.
  4. Document model use: Record which LLM is used for each workflow, the reasoning behind the choice, and any instructions or templates for future consistency.
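The four steps above can be sketched as a tiny "bake-off" harness. In this hypothetical Python sketch, the model functions are placeholders standing in for real API calls (to an OpenAI, Anthropic, or Gemini client, for example), and the model names are illustrative; the point is the structure of running one prompt across several models and logging the results for later review.

```python
from datetime import date

# Placeholder "models": in practice these would wrap real API clients.
def fast_model(prompt: str) -> str:
    return f"[fast draft] {prompt[:40]}..."

def reasoning_model(prompt: str) -> str:
    return f"[step-by-step analysis] {prompt[:40]}..."

def run_bakeoff(prompt: str, models: dict) -> list:
    """Run the same prompt across several models (step 2) and log
    each result for later documentation (step 4)."""
    log = []
    for name, call in models.items():
        log.append({
            "date": date.today().isoformat(),
            "model": name,
            "prompt": prompt,
            "output": call(prompt),
            "notes": "",  # fill in after human review (step 3)
        })
    return log

records = run_bakeoff(
    "Summarise this QBR transcript into 3 risks and 3 wins.",
    {"gpt-style": fast_model, "reasoning": reasoning_model},
)
```

Even a shared spreadsheet built from logs like these is enough to keep the whole team consistent about which model runs which workflow.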

We’ve covered an overview of the best available LLMs in this chapter. Check out our second bonus chapter to learn in-depth how to use one particular tool, Gemini, in your workspace.

Bonus Chapter 2 - Gemini Workspace Mastery

Unlike a separate chatbot window or AI tool, Gemini can be integrated right where you work, like in Gmail, Docs, Sheets, and Meet. This lets you turn your daily workflow into a faster, smarter version of itself. 

In this bonus chapter, we’ll walk through how to use Gemini (and its reusable automations, GEMs) to save hours every week while elevating how you prep, analyse, and communicate.

1. Overview of Gemini in Google Workspace

Gemini is available as an add-on for Google Workspace Business or Enterprise plans. If you don’t see it, ask your admin to enable it in the Workspace Admin Console.

Since Gemini is directly available inside your Workspace apps, it already understands the context of the file or conversation you’re working on. The benefit of this is that, instead of starting from scratch, Gemini can read and reason based on your open document, spreadsheet, or email thread.

Here’s what that looks like across tools:

  • Gmail: Summarise long email threads, draft customer updates, or rewrite messages in a specific tone.
  • Docs: Generate QBR outlines, recap meeting notes, or create renewal summaries directly inside a doc.
  • Sheets: Analyse adoption data, highlight risk accounts, and generate visual summaries or tables in seconds.
  • Meet: Get instant post-meeting summaries and action items without needing to transcribe manually.

2. Building GEMs for Recurring CS Tasks

GEMs are reusable prompt templates inside Google Workspace that automate repetitive workflows. 

Here are three powerful GEMs every CSM should build:

  • Onboarding Assistant GEM
    Input: customer goals and timelines from a shared Doc or Sheet.
    Output: a structured onboarding or success plan with milestones, owners, and due dates.

    Example:
    Using this customer onboarding data, generate a success plan with 3 phases: setup, training, and adoption. Assign responsibilities to team members.

  • Renewal Prep GEM
    Input: adoption and billing data from Sheets.
    Output: a concise health summary highlighting risk factors and renewal blockers.

    Example:
    Summarise this account’s health for renewal prep using metrics in this Sheet. Focus on renewal likelihood, last-touch date, and key usage patterns.

  • Feature-Request Summariser
    Input: product feedback or NPS responses in a Doc.
    Output: patterns and themes for roadmap syncs.

    Example:
    Cluster these feature requests into categories and highlight the top 3 recurring themes. Include example quotes for each.

Once built, you can share GEMs with your CS team and have a shared library of automation for daily tasks.
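To make the Feature-Request Summariser concrete: in practice the GEM's LLM does the clustering for you, but the underlying logic looks something like this hypothetical Python sketch. The category names and keywords are invented for illustration; a real request list would need categories drawn from your own product.

```python
# Hypothetical categories and keywords; a real GEM lets the LLM
# infer these. This sketch only shows the clustering logic.
CATEGORIES = {
    "reporting": ["dashboard", "report", "export", "chart"],
    "integrations": ["slack", "salesforce", "api", "webhook"],
    "usability": ["ui", "slow", "confusing", "navigation"],
}

def cluster_requests(requests: list) -> dict:
    """Group feature requests by keyword match; unmatched go to 'other'."""
    clusters = {name: [] for name in CATEGORIES}
    clusters["other"] = []
    for req in requests:
        text = req.lower()
        for name, keywords in CATEGORIES.items():
            if any(kw in text for kw in keywords):
                clusters[name].append(req)
                break
        else:
            clusters["other"].append(req)
    return clusters

def top_themes(clusters: dict, n: int = 3) -> list:
    """Return the n most-requested categories, as the GEM example asks."""
    ranked = sorted(clusters, key=lambda k: len(clusters[k]), reverse=True)
    return [k for k in ranked if clusters[k]][:n]
```

A request matching two categories lands in the first match only; that keeps the counts simple, at the cost of some nuance the LLM version would capture.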

3. Integrating Gemini Into Your Daily Flow

Once you’ve mastered the basics, you can make Gemini part of your everyday workflow without even leaving your Google Workspace tools.

Examples:

  • From Gmail, trigger a renewal prep GEM right from the side panel to summarise the account before your email reply.
  • In Sheets, run a GEM to pull metrics, then generate a QBR deck or monthly performance summary automatically in Docs.
  • During Meet, use Gemini summaries to pre-populate action items and assign owners after calls.

Small automations like these can shave off hours each week, and make you look consistently prepared.

With the right model powering your prompts and agents, all the techniques you’ve learned, from prompt chaining to advanced reasoning and agent simulations, work together to help you operate more efficiently and accurately, giving you a real edge in Customer Success.

Conclusion

Great work! You’ve now got a solid foundation in prompt engineering for CS. Here’s a quick recap of what we covered:

  • Setting up AI: How to personalize your AI with material and context
  • The 5-Part Prompting Framework: Task, Context, References, Evaluate, Iterate
  • Iteration Methods: How to refine prompts, clarify ambiguities, adjust tone, reorder content, and request additional examples.
  • Multimodal Prompting: Using text, tables, charts, and images together to generate richer outputs.
  • Human-in-the-Loop (HITL): Best practices for reviewing AI outputs, spotting hallucinations, and managing biases.
  • Prompt Examples for CSMs: Ready-to-use prompts for onboarding, adoption, engagement, support, and more.
  • Presentations: How to turn AI-generated insights into polished, clear, and decision-ready slides, emails, and reports.
  • Advanced Prompting Techniques: Prompt chaining, chain-of-thought, and tree-of-thought approaches for complex, multi-step, or abstract tasks.
  • Agent Basics: Simulation and expert feedback agents to scale strategic thinking, scenario planning, and review processes.
  • LLM Selection: Choosing the right Large Language Model for each CSM task to maximize efficiency, reasoning, and output quality.

But this is just the start; there’s much more to come in our AI in CS series, where you’ll explore advanced workflows, analytics, and strategies to amplify your impact even further. Keep experimenting, iterating, and applying what you’ve learned!

Want to see Velaris in action?

Discover the difference it can make for your team.
