If you’ve ever tried to use AI only to give up after receiving generic, off-brand, or inaccurate outputs, you’re not alone.
Prompting is a skill that seems simple on the surface, but has hidden nuances that prevent you from getting the best out of AI.
That’s why this module isn’t simply a basic prompting guide. It’s an in-depth masterclass that will cover:
A simple 5-part framework that you can apply to immediately see better results
10+ ready-to-use prompts you can apply to your role right away
A step-by-step guide on how to build your own specialized Customer Success Coaches
Iteration techniques that actually work instead of sending you round in circles

If you’ve scrolled on LinkedIn recently, you’ve probably seen everyone bragging about how AI has made them 10X more efficient. But when you tried using ChatGPT, or Google Gemini, or whatever LLM you prefer, did you feel like something was… missing? Like the outputs were just not as polished, accurate, or relevant as you expected?
That’s exactly what this module is designed to fix. We’ve got tons of practical advice on how you can immediately improve your prompts. And unlike generic AI prompting courses, this module is specifically tailored for Customer Success professionals. Every example, every tweak, and every strategy is meant for the work you actually do.
Some of the things we'll cover include:
The best part is, despite how much of an impact these tips can have on the quality of your outputs, the actual tweaks to your prompts aren’t too complex. With some small (but smart) changes in the right places, you’ll have the AI delivering exactly what you need for your CS function. So by the end of this module, you’ll be able to:
Let's find out how great prompts can make AI your most powerful tool for Customer Success!
Emails that need a dozen rewrites, reports that skip the details you actually care about, or insights that are just vague. If this sounds like the kind of output you get when you ask AI for help, it’s because your prompts are lacking some core components. We’ll outline the simplest fixes for this issue in this chapter.
AI has lots of exciting possibilities, but we don’t want to run before we can walk. AI works best when it is fed with data. Training the AI with information about your product, industry, and customers makes sure it understands your business environment, terminology, and customer needs. Without this training, even the clearest prompts will only get you generic or incomplete responses.
There’s no better way to learn how to set up your AI than to actually have a go at it! Open up your favorite LLM, like ChatGPT or Gemini, and follow this easy checklist:
Step 1: Gather Core Materials
Collect the foundational documents your AI will need to understand your business context. Examples include:
Step 2: Feed Iteratively
Upload these materials into your AI platform in small batches. Make sure to tell the AI what the materials are as you upload them. Start with the most critical resources (like the documents in Step 1). Then, to keep your AI up-to-date, gradually add:
Step 3: Customize AI Writing Style
You can teach your AI to match your style so it communicates the way you would.
Most LLMs let you customize writing preferences right inside their settings. For example, in ChatGPT you can open Settings → Personalization → Custom instructions to tell it how you like your writing to sound. This is where you can specify things like:
If you want to take it a step further, you can even create a custom GPT (or the equivalent in other LLMs). Upload a few examples of your own writing like past emails, reports, or summaries. Then describe the kind of tone and structure you want it to replicate. This helps the AI learn your style, vocabulary, and rhythm.
Step 4: Test AI Outputs
Run sample prompts using the uploaded content. Test the AI’s responses against real examples, and check if the outputs:
Step 5: Refine and Add Context
If outputs are off-target, provide clarifications or additional materials. Specify the purpose of each document to help the AI internalize the context for future prompts.
Example:
Use this document to understand the structure of our accounts and common onboarding challenges for new customers.
By completing this task, your AI will have the “map” it needs to start navigating. The more it’s trained on your specific use cases, the faster and more accurately it can deliver quality responses.
Now that your AI is ready with the right context, it’s time to start crafting prompts. How you ask the AI to use the content you’ve provided really determines whether you get useful results or vague responses. In Chapter 2, we’ll introduce the 5-part framework for prompting, a structured approach that ensures every prompt you write is clear and precise.
Just a quick reminder before you continue with the course that it’s important to keep your company’s data policies in mind when using AI tools. Make sure you’re only using data that’s approved for external tools.
We recommend using an LLM provider officially approved by your company, or a trusted internal platform where AI is securely embedded, like Velaris.
This course is designed to help you understand how to use AI effectively, but where and with what data you apply these skills should follow your organization’s compliance guidelines.
After you’ve trained your AI, the next fundamental step is sticking to a framework when prompting. This framework is based on the TCREI prompting framework developed by Google:
T - Task
C - Context
R - Reference
E - Evaluate
I - Iterate
You can use this mnemonic to remember it: Thoughtfully Create Really Excellent Inputs. But if you find this hard to remember, use this version we developed that might be easier for a CS professional to recall: Today's Customers Really Expect Insights.
Most people go straight into writing a prompt without a clear vision of what they want the AI to do. This usually results in the AI guessing, and you ending up with a vague or generic output.
Take a moment to define the task. Be explicit about what you want the AI to produce. Are you asking for an email draft, a product recommendation, or an analysis of customer data? What length are you expecting the response to be? Do you want the format of the response to be in bullets, tables, or paragraphs?
The clearer you are about the task, the better the AI can understand and provide a response you like.
Example:
Write a 200-word email to a customer explaining a delayed rollout of a new feature in their subscription plan, and outline the steps your team is taking to resolve it in bullets afterwards.
Pro Tip: Use action verbs, like “summarize”, “list”, or “outline”, to describe the required action. You’ll get a response more specific to the task you have in mind.
If you’re clear about the task, you’ll get a pretty relevant response. But it won’t be as polished as it would be if you gave the AI context. Without context, responses will usually be technically correct but missing nuance, lacking relevance, or failing to emphasize what matters most to the customer.
Try adding relevant background information so that the AI has all the details it needs to make informed decisions. Context can look like customer history, product details, tone preferences, or any other relevant facts that help narrow down the scope of the AI’s response.
Example:
The customer is on a Premium subscription plan and was scheduled to receive Feature X last week. The rollout was delayed due to technical updates. The tone should be professional yet empathetic and reassuring. 
Pro Tip: Use personas to guide the AI. For example, starting a prompt with “You are an Operations Analyst” gives the AI context for tone, style, and focus.
The richer the context, the more tailored and actionable the AI's output will be.
We often assume the AI is already familiar with the style, tone, or company standards our work requires. But that assumption means the AI might produce content that’s inconsistent with your brand voice, misses key phrasing conventions, or doesn’t follow your internal processes.
A fantastic way to guide the AI in the right direction is providing references.
These could be articles, documents, or specific guidelines. This step is especially helpful when you want the AI to align with your company's tone, style, or specific industry knowledge.
Example:
Provide the AI with a link to the company’s Premium subscription rollout email template or internal communication guidelines for feature delays.
Once you've written your prompt, take a moment to evaluate it. A common mistake is to wait until after generating the output to realize there were gaps in the prompt, which forces rework and wastes time.
The goal here is to make sure your initial prompts are good enough that the AI has everything it needs to provide useful outputs from the get-go. Without a solid initial prompt as a base, you’re going to be stuck in a long back-and-forth with the AI as you gradually adjust the prompt each time it gives an inadequate response.
To evaluate, there are a few questions you can ask yourself. Does the content of the prompt actually address the intended task? Does the prompt include enough details to guide its response? Is the prompt clear enough that the AI won’t misinterpret it?
Example:
Evaluate the prompt to check if it requires the AI to highlight the key steps your team is taking to resolve a delayed feature rollout, reflect the account’s Premium subscription status, and maintain a professional yet empathetic tone.
AI isn’t perfect, which means that even after you get good at evaluating your prompts and refining them, the first response you get might not fully meet your expectations. So the final step you can take is iteration.
This is where you can adjust your prompt based on the output you receive. If the AI missed a key detail or didn’t quite capture the tone you wanted, you can rephrase your prompt or provide more context. Think of this as an ongoing conversation with the AI, where you improve the quality of the results with each iteration.
Example:
Rewrite the email keeping the content professional and informative, but make the tone friendlier and more conversational. Focus more on highlighting the steps our team is taking and the reassurance about next steps.
Pro Tip:
You can even ask the AI for help on how to prompt! Just ask it to suggest or refine prompts for your task, then tweak them based on context and desired output.
Let’s get some practice in creating a fully structured prompt using this framework. We’ll give you a scenario, and you can try applying the framework to a prompt to get the best possible output. Bonus points if you use the mnemonic to remember the framework while doing this exercise!
After you’re done, check out our example prompt to see if you wrote something similar. It doesn’t have to be exactly the same, as long as you’ve followed the framework.
Scenario: Draft a welcome email series for new customers joining a SaaS platform.
Example:
By completing this exercise, you now have hands-on experience in applying the 5-Part Framework to a real Customer Success scenario. Try to make it a habit to follow this structure when prompting.
Now that you’ve learned the foundation of effective prompting with the 5-part framework, it’s time to go more in-depth. In the next chapter, let's take the iteration process we mentioned, and look at some different practical strategies to refine and tweak your prompts.
AI responses are not always perfect on the first try, and maybe not even on the second, so refining your prompts is an essential skill.
Iteration allows you to adjust your approach based on the AI’s output. Create a feedback loop, where each time you see the AI’s response, you observe what you like and what you don’t like. Now you can improve your prompt to make the next output closer to what you actually want.
In this chapter, we’re going to look in detail at the different iteration methods you can use to take your prompting skills to the next level.
Sometimes AI is like the coworker who doesn’t get it but nods anyway. Vague or unclear instructions are quite likely to be misinterpreted, and the response you get can vary from slightly off-base to completely irrelevant.
If the response isn’t what you expected, it’s important to identify where the ambiguity lies and clarify it in your prompt. For instance, if you ask the AI to "create a report on customer usage data," the AI might not know what exactly you're interested in.
Example:
In this case, the original prompt is ambiguous because it doesn’t specify the time frame or the specific data points needed.
Pro Tip: When a prompt feels unclear, try reading it out loud. If it sounds vague to you, it will be vague to the AI too. Adding one or two clarifying instructions in your prompt can dramatically improve precision.
Every now and then, the AI will very confidently give you the wrong answer, which is why you have to be prepared to do your own research and fact-check anything that feels off.
If the AI gives an incorrect or unclear example, asking it to correct itself can lead to more accurate outputs. Point out mistakes directly, so that the AI can recheck information and offer a better response in the next iteration. If possible, it can help to tell the AI exactly what is wrong about its output.
Example:
AI models process information in sequence, so the order in which you present details can influence how the AI understands and prioritizes the information. If the initial response isn't quite right, consider experimenting with the order of the elements in your prompt and see how it affects the output.
For instance, if you provide a background context before specifying the task, the AI might interpret the context as the main focus and give a response that's too broad. On the other hand, starting with a clear task before adding context may help the AI focus on the action you're asking for.
Example:
If the AI provides a generic response, asking for more examples can help the AI elaborate further and offer more targeted insights. This is useful when you need deeper exploration or a variety of options for a given task. You can also add a modifier that specifies what sort of examples you’re looking for.
Example:
After receiving an output, giving feedback on what you liked and didn’t like can help the AI refine its next response. For instance, you can ask it to “be more concise,” “include more examples,” or “adjust the tone to be more formal.”
Example:
Pro Tip: Use analogies to give the AI more specific feedback. For example, you can say: You described Feature X wrong. Think of Feature X as a gym membership. It encourages regular “workouts” (usage) and builds habits over time.
By iterating based on feedback, you're making the AI more aligned with your preferences and needs.
These iteration methods are particularly useful in scenarios that require a high degree of precision and personalization, such as:
Pro Tip: Keep a “prompt journal.” Track the prompts you’ve tried, what worked, and what didn’t. Over time, you’ll build a library of effective prompts tailored for your SaaS accounts and CSM workflows.
With time, you’ll develop an intuition for how to quickly refine prompts and get the most accurate responses. But even when you feel familiar with the iteration process, don’t forget to keep experimenting and try new ways of adjusting prompts!
The most obvious way to interact with LLMs is through text. You type a question, you get a response. But the data we use in CS comes in all sorts of formats, so text isn’t enough if you want richer and context-aware outputs.
That’s where multimodality comes in. In simple terms, multimodality means making prompts that combine different types of inputs, like text, images, tables, and even charts.
It’s actually a game-changer for CS once you get the hang of it. For example, you could feed the AI your QBR slides, product usage charts, and key account notes all at once, and it could generate an email, summary, or report that takes everything into account.
Unfortunately, dumping multiple data types without giving the AI any guidance is like giving someone a stack of spreadsheets and saying, “figure it out.” It usually doesn't end well. So let’s talk about the efficient way of doing multimodal prompts.
It helps to start small. Combine just two input types first (like text + chart) before adding more. This reduces confusion and helps you see how the AI interprets multimodal data.
Pro Tip: Label each input clearly (e.g., Chart 1: Feature adoption by month) so the AI knows what to reference.
Example:
Describe the attached product usage chart. Highlight the most active features, any notable drops in usage, and key trends over the past quarter. Then summarize actionable insights for the account.
Example:
Use the table of login frequency and the chart of feature adoption to highlight which features drive the most engagement.
Example:
Based on the attached dashboard and usage table, explain your reasoning for identifying which features are underutilized and which show growth.
Example:
Summarize this data focusing primarily on the chart trends, and use the table to add context. 
You can find more prompts for CS in Chapter 5, but here are a few examples to get you started on multimodal prompts:
Multimodality is a powerful way to let the AI piece together data from different sources, instead of you wasting hours doing it manually. Eventually the outputs you get will feel like they “get it” in a way text-only prompts never could.
In our next chapter, we’ll look at when human intervention is required, and what that interaction looks like so you can keep your AI outputs accurate while still saving time.
AI can do a lot of heavy lifting in Customer Success like drafting emails, summarizing account data, or generating QBR insights, but it’s not perfect. Sometimes it misinterprets context, misses key details, or even introduces errors. That’s why a Human-in-the-Loop (HITL) system is essential: you guide, review, and validate AI outputs to ensure accuracy, relevance, and professionalism.
Before diving into HITL, it’s important to acknowledge some common issues:
Knowing these limitations helps you apply HITL effectively and avoid sending flawed outputs to customers.
Instead of focusing on step-by-step prompt iteration like in Chapter 2, HITL is about oversight and decision-making. Here’s how to structure it:
When using AI in Customer Success workflows, keep this checklist in mind:
This checklist helps maintain trust and compliance while still making the most out of AI.
You’ve learned a lot about the foundations of good prompting. Now it’s time to put it into practice. In the next chapter, we’ll dive into real-world prompt examples for CSMs. Consider it your “prompt toolkit”: everything you’ve learned so far, ready to turn into outputs that actually make your life easier (and your customers happier).
It’s time to look at some real examples of prompts tailored to Customer Success workflows. Below, we’ve organized prompts by key CS functions so you can plug them into your AI tools and start generating actionable outputs right away. Each prompt includes a persona, context, and expected output format to make it precise and ready-to-use.
These are prompts designed for tasks that you’ll probably come across in your day-to-day as a CS professional, so they’re likely to be immediately applicable. But you can always tweak them to your liking.
Remember, AI works best when you have maximized the context you feed it. Before using any of the prompts below, make sure you have trained your LLM by giving it documentation about your product, industry, and customers. Read more about how to train your LLM here.
You are an Onboarding Specialist. Create a comprehensive onboarding plan for a new customer account. The account is [insert tier], and the customer’s goals are [insert goals]. Include relevant milestones and ownership details. Follow typical CSM onboarding templates with clear deadlines and responsible owners for each task. Provide a table with three columns: Task/Milestone, Owner, Due Date. Ensure the plan is actionable and easy to follow for the team. Highlight dependencies between tasks where necessary, and make it clear which milestones are critical for a successful onboarding.
References you can provide:
You are a Brand Copywriter. Draft three variations of a welcome email for a new customer in the [insert industry] sector. The customer is new to the platform, and the desired tone is [insert tone, e.g., friendly, professional, energetic]. Include references to their account tier or goals if relevant. Follow previous high-performing onboarding email templates for structure and personalization. Provide three distinct email drafts, each concise (150–200 words), with subtle differences in tone or phrasing suitable for A/B testing. Ensure the emails are clear, welcoming, and actionable, and highlight the next steps the customer should take.
References you can provide:
I want to create a health score rubric for my customers. Their industry is [industry] and our product helps them [product value proposition]. Create a health score rubric to assess the onboarding progress of new customer accounts. The key onboarding metrics we track are [your metrics]. Follow best practices for weighted scoring models used in Customer Success to define thresholds for Healthy, At Risk, and Critical accounts. Provide a weighted formula for the health score, define thresholds for each health level, and explain how each metric contributes to the overall score. Ensure the rubric is actionable, clearly highlights risk areas, and can be applied across multiple customer tiers consistently.
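To sanity-check the rubric the AI proposes, it helps to know what a weighted scoring model looks like under the hood. Here’s a minimal sketch: the three metrics, their weights, and the thresholds are hypothetical placeholders, not recommendations — swap in whatever your AI generates for your own accounts.

```python
# Illustrative weighted health score. Metrics, weights, and thresholds
# below are hypothetical placeholders for your own rubric.
WEIGHTS = {
    "login_frequency": 0.40,   # how often users sign in (scored 0-100)
    "feature_adoption": 0.35,  # share of key features in use (0-100)
    "support_sentiment": 0.25, # tone of recent tickets (0-100)
}

def health_score(metrics):
    """Weighted average of metric scores, each on a 0-100 scale."""
    return sum(metrics[name] * weight for name, weight in WEIGHTS.items())

def health_level(score):
    """Map a score onto Healthy / At Risk / Critical bands."""
    if score >= 75:
        return "Healthy"
    if score >= 50:
        return "At Risk"
    return "Critical"

account = {"login_frequency": 80, "feature_adoption": 60, "support_sentiment": 70}
score = health_score(account)  # 80*0.40 + 60*0.35 + 70*0.25 = 70.5
print(round(score, 1), health_level(score))
```

Once you understand the mechanics, you can push back on the AI’s rubric with specifics, like asking it to justify a weight or move a threshold.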
References you can provide:
I am a CSM who needs to run a use-case mapping workshop. Design a use-case mapping workshop plan based on the jobs-to-be-done (JTBD) framework for our customer accounts. Produce a structured outline that can be directly imported into a Miro board, including each workshop activity, its purpose, and the sequence of exercises. Highlight key discussion points and deliverables for each session to ensure participants can leave with actionable insights. Make the plan detailed enough to run the workshop smoothly, but clear enough for easy adaptation.
References you can provide:
Create a comprehensive champion enablement pack that includes slide decks, FAQ content, and email snippets for customer champions. The materials should support adoption and engagement, clearly explaining key features, best practices, and actionable next steps. Organize the output so each component can be used independently or together, making it easy for customer champions to onboard their teams and drive engagement effectively.
References you can provide:
Develop a quarterly adoption campaign plan for our product, including a timeline, the communication channels to use, and measurable KPIs for tracking success. Make it actionable, showing which activities happen when, who is responsible for execution, and how progress will be measured. Include recommendations for maximizing feature adoption and engagement, tailored to high-priority accounts.
References you can provide:
You are an Account Manager. Summarize the recent customer incident described in materials attached as references by producing a detailed root cause analysis (RCA) and a prevention plan. Include a clear description of what went wrong, contributing factors, and immediate corrective actions taken. Then outline recommendations to prevent similar issues in the future, including process improvements, monitoring steps, and any communication actions needed with the customer. Make the output structured and actionable so it can be shared directly with internal teams and referenced for future incidents.
References you can provide:
You are an Operations Analyst. Analyze the support tickets from the past week based on the attached tags and categories, and generate a digest highlighting the top five shifts or trends in customer issues. For each trend, include a brief description, potential causes, and recommended actions for the support team to address recurring problems. Make the digest concise, actionable, and easy for the team to review during weekly operations meetings.
References you can provide:
Create a premium support entitlement matrix that clearly maps each support tier to its corresponding benefits and service levels. Include details such as response times, dedicated support channels, and any additional perks for higher-tier customers. Present the matrix in a table format that can be used internally for account planning and externally to communicate entitlements to customers.
References you can provide:
You are a Revenue CSM. Create a detailed renewal runway plan for the account, covering the timeline from T–120 to T–30 days before contract renewal. Include tasks for each stage, assign owners, and specify deadlines. Highlight critical actions such as customer check-ins, contract reviews, and risk mitigation steps. Make the output structured and actionable so the team can follow it step-by-step to ensure a smooth renewal process.
References you can provide:
Write a 150-word rationale explaining the commercial proposal for this account. Clearly outline the reasoning behind pricing decisions, value considerations, and any customizations for the customer. Ensure the explanation is concise, persuasive, and suitable for internal review or sharing with stakeholders.
References you can provide:
You are a Product Advisor. Using the list of modules the customer has not yet adopted, generate an upsell hypothesis assessment. Identify which modules are the best fit for the customer based on usage patterns and potential business impact. Prioritize opportunities and include reasoning for each recommendation so the account team can make informed decisions on next steps.
References you can provide:
You are a Research Ops specialist. Create a set of ready-to-use survey questions for customer feedback. All questions should be close-ended, use a Likert scale, and avoid bias. Include questions that cover product satisfaction, adoption, and engagement. Ensure the questions are clear, concise, and actionable, so they can be deployed directly in a customer survey.
References you can provide:
Analyze the provided customer comments and cluster them into themes. For each theme, provide the number of mentions, a brief description, and representative verbatim examples. Ensure the output clearly highlights the key areas of concern, praise, or feature requests to inform actionable insights for the Customer Success team.
References you can provide:
You are a Relationship Manager. Draft follow-up emails for customers based on their NPS scores. Create three sets of emails tailored for detractors, passives, and promoters. Each email should be personalized, professional, and aligned with the company’s tone, encouraging engagement, feedback, or further action as appropriate for each group.
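If you want to route each customer to the right email set before prompting, the standard NPS bands (0–6 detractors, 7–8 passives, 9–10 promoters) are simple to encode. A minimal sketch, using hypothetical customer names and scores:

```python
# Standard NPS bands: 0-6 detractor, 7-8 passive, 9-10 promoter.
def nps_segment(score):
    if not 0 <= score <= 10:
        raise ValueError("NPS scores run from 0 to 10")
    if score >= 9:
        return "promoter"
    if score >= 7:
        return "passive"
    return "detractor"

# Group a hypothetical batch of survey responses by segment, so each
# group can be fed into the matching follow-up email prompt.
responses = [("Acme Corp", 9), ("Globex", 6), ("Initech", 7)]
segments = {}
for customer, score in responses:
    segments.setdefault(nps_segment(score), []).append(customer)
print(segments)
```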
References you can provide:
Create a QBR agenda tailored for executive stakeholders, including CFO, CTO, and VP-level participants. Provide agenda variants for each persona that highlight metrics, achievements, risks, and strategic recommendations relevant to their focus areas. Ensure each agenda is concise, actionable, and aligned with best practices for executive meetings.
References you can provide:
You are an Account Manager. Draft a follow-up email after a QBR meeting. Include a recap of the key discussion points, agreed-upon mutual actions, and links to relevant reference materials or reports. Make the email clear, professional, and actionable, so the recipient can quickly understand next steps and priorities.
References you can provide:
Using the customer’s industry as context, personalize an Executive Business Review (EBR) template. Include tailored proof points, industry-specific insights, and messaging that aligns with the customer’s strategic priorities. Ensure the output is ready for use in an executive presentation or report.
References you can provide:
You are a Product Operations specialist. Review the provided customer feedback and merge duplicate requests into a single canonical request. Include relevant metadata such as request type, number of mentions, priority, and source. Ensure the output is clean, structured, and ready to be used for product planning and decision-making.
References you can provide:
You are a Product Manager. Analyze the provided list of potential product initiatives and create an impact vs. effort matrix. Rank each initiative based on its expected business impact and implementation effort, and assign scores for both dimensions. Present the output in a clear, structured list that can guide prioritization decisions.
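One common way to rank such a matrix is by the impact-to-effort ratio, so quick wins (high impact, low effort) float to the top. A minimal sketch, assuming 1–5 scores and hypothetical initiative names:

```python
# Illustrative impact-vs-effort ranking. The initiatives and their 1-5
# scores are hypothetical placeholders; in practice you'd use the
# scores the AI assigns to your own initiative list.
initiatives = [
    {"name": "Usage-alert emails", "impact": 5, "effort": 2},
    {"name": "Custom dashboards",  "impact": 4, "effort": 5},
    {"name": "CSV export",         "impact": 2, "effort": 1},
]

# Quick wins first: sort by impact-to-effort ratio, highest on top.
ranked = sorted(initiatives, key=lambda i: i["impact"] / i["effort"], reverse=True)
for item in ranked:
    print(f'{item["name"]}: ratio {item["impact"] / item["effort"]:.2f}')
```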
References you can provide:
You are a Product Manager. Draft a roadmap update note for the customer, clearly communicating progress on features, upcoming releases, and any changes in timelines. Be transparent about expectations, highlight key updates, and keep the tone professional and informative. Ensure the note is concise, actionable, and suitable for sharing directly with the customer.
References you can provide:
You are a Tech Writer. Draft a help-center article that clearly explains [insert topic or feature]. Structure the article with headings, subheadings, and step-by-step instructions. Include GIFs or placeholders for visuals where appropriate to illustrate each step. Ensure the content is concise, easy to follow, and ready for publication on the help center.
References you can provide:
You are an Enablement specialist. Create quickstart sheets tailored to different roles: Admin, Analyst, and End-User. Each sheet should provide step-by-step instructions for getting started, key tips, and best practices specific to the role. Format the output so that each role’s sheet can be used independently, with clear headings and concise, actionable content.
References you can provide:
You are a Product Marketing Manager (PMM). Summarize the latest release notes into scannable bullet points suitable for internal and external stakeholders. Highlight new features, improvements, and bug fixes. Keep the content concise, clear, and structured so readers can quickly understand the key updates and their impact.
References you can provide:
We’ve covered a wide range of prompts tailored to Customer Success workflows. With all the valuable outputs you can generate using these prompts, you have the potential to stand out in your organization. But that potential will only be noticed if you can communicate your work clearly. Chapter 7 will show you how to take that AI-generated analysis and turn it into polished, compelling presentations and slide decks. You’ll learn how to highlight what matters most, and make sure your data has real influence across teams so your work gets noticed and acted on.
Insights you uncover are much more valuable if you can communicate them clearly and persuasively to customers, internal teams, and executives. Luckily, AI can help you out with that too.
You can create accurate and professional-looking slide decks very quickly with AI, as long as you know the right instructions and structure to provide. In this chapter, we’ll show you how to get AI to help you create presentations that are clear, to the point, and actually useful for making decisions.
You wouldn’t want to bombard the average customer with dense technical details on your slides. AI outputs will vary depending on the intended reader, which means you have to specify who will consume the presentation and their level of expertise. You can also indicate the level of detail and tone that are appropriate.
Example:
Create slides for a VP-level audience focusing on adoption trends and churn risk. Don’t focus on technical implementation details. Keep the tone conversational and easy to understand.
Provide the AI with relevant context such as datasets, business context, and your business goals. Without this, the AI might produce generic statements or insights that don’t align with the story you want to tell. Attaching context also helps the AI prioritize the right metrics, select meaningful visuals, and structure the presentation logically from summary to recommendations.
Example Prompt:
You are a CSM creating a QBR slide deck for a VP of Customer Success. Using the attached dataset of account adoption metrics and churn risk, along with our business goals for increasing upsell in the next quarter, generate a slide deck outline. Include slide titles, key bullet points, tables, and charts, making sure the content focuses on the metrics that matter most for decision-making.
If you just say, “give me a presentation”, you’ll probably get an ugly block of text on every slide. You have to be clear about the format you want the slides to use to convey information. Think about how you want the presentation to be structured, and relay that to the AI.
Example:
You are a CSM creating a QBR slide deck for a VP of Customer Success. Using the attached account data, generate a slide deck outline with the following structure:
Treat the initial AI output as a draft, because the deck will probably have its fair share of imperfections. Start by checking for clarity and accuracy: make sure each slide communicates its intended message and uses correct data.
Next, assess the relevance of each slide. Remove any content that doesn’t contribute to the key insights or decision points. Don’t hesitate to ask the AI to improve or fill in gaps.
Iteratively refining the deck with AI saves time compared to starting from scratch and ensures that your final presentation is polished and actionable.
Here are some prompts that can come in handy when iterating:
Review the slides and revise them to emphasize the most important trends and metrics. Adjust the charts so the key insights are immediately clear, reorder slides to tell a logical story from summary to recommendations, and add short annotations explaining why each metric matters for decision-making.
Pro Tip:
You’re now well on your way to understanding the basics of AI prompting. But what if you could push your AI even further? Can you get it to reason, plan, and handle more complex tasks like a seasoned CSM? In Chapter 8, we’ll dive into Advanced Prompting Techniques, exploring approaches that help you squeeze more precision and creativity out of your AI.
Most of your daily tasks can be handled with the fundamentals you learned in the previous chapters. But sometimes your tasks are more complex, like summarizing dense call transcripts, planning multi-step initiatives, or reasoning through abstract problems.
In these situations, your standard prompting methods may not be enough to get what you want. But AI doesn’t have "Intelligence" in the name for nothing. With some lesser-known but highly effective advanced prompting methods, you can achieve high-value outputs even for complex tasks. Let’s run through some high-level prompting techniques in this chapter.
Occasionally, when you give the AI a prompt that is long or complex, it’ll just acknowledge some parts of the input and ignore others when generating its response. The answer to this is prompt chaining.
Prompt chaining involves breaking a complex task into smaller, sequential prompts. Instead of asking the AI to do everything in one go, you feed the output of one prompt into the next. This lets the AI focus on one step at a time and helps avoid missed details. It also gives you the output of each smaller task separately, making the whole process easier to follow.
Examples:
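If you’re calling an LLM from code rather than a chat window, the same idea is easy to wire up. This is a minimal sketch of prompt chaining, assuming a hypothetical `ask_llm()` helper that wraps whatever LLM API you use (stubbed out here so the flow is easy to follow):

```python
def ask_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g. an OpenAI or Gemini API request)."""
    return f"[model response to: {prompt[:40]}...]"

def chain_prompts(steps, first_input):
    """Run each prompt in sequence, feeding the previous output into the next."""
    result = first_input
    outputs = []
    for template in steps:
        prompt = template.format(previous=result)
        result = ask_llm(prompt)
        outputs.append(result)
    return outputs  # one output per step, so each stage is easy to review

steps = [
    "Summarize the key themes in this call transcript:\n{previous}",
    "From these themes, list the top 3 customer risks:\n{previous}",
    "Draft a follow-up email addressing these risks:\n{previous}",
]
outputs = chain_prompts(steps, "…raw transcript text…")
```

Because each stage’s output is kept, you can inspect or correct any step before the next one runs, which is exactly the advantage chaining gives you over one giant prompt.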
Ever had a situation where AI gives you an answer so weird, you have no idea how it came up with it? Or a time when the output seems logical, but you just want to make sure the AI hasn't missed anything important?
Say hello to chain-of-thought prompting, where you ask the AI to explain its reasoning step by step. The explanation itself is useful to check for logical gaps, but just asking it to do this has been shown to improve the AI’s ability to deal with reasoning-based problems.
Examples:
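The technique boils down to a small addition to your prompt. Here’s a sketch of a chain-of-thought wrapper, again assuming a hypothetical `ask_llm()` stand-in for your real LLM call:

```python
def ask_llm(prompt: str) -> str:
    """Placeholder for a real LLM call."""
    return "Step 1: ... Step 2: ... Final answer: ..."

def with_chain_of_thought(question: str) -> str:
    """Wrap a question so the model explains its reasoning step by step."""
    return (
        f"{question}\n\n"
        "Think through this step by step, numbering each step, "
        "then give your final answer on a separate line starting with 'Final answer:'."
    )

prompt = with_chain_of_thought(
    "Given a 60% drop in logins for Account X last month, is this account a churn risk?"
)
response = ask_llm(prompt)
```

Asking for a labeled "Final answer:" line also makes the conclusion easy to pull out of the response, while the numbered steps above it let you audit the reasoning.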
Tree-of-thought prompting is the AI version of a human method of problem solving we subconsciously use all the time. It involves considering several partial solutions, evaluating them, and backtracking if a path is unlikely to lead to a useful outcome.
This is helpful for abstract or multi-branch problems, like planning success programs, creating help documentation, or structuring playbooks. It encourages the AI to explore multiple possible approaches before converging on a solution. And there’s sure to be some insights you can get from the reasons the AI uses to reject or choose a particular solution.
Examples:
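To make the propose–evaluate–backtrack loop concrete, here’s a toy sketch of tree-of-thought in code. The `propose()`, `score()`, and `expand()` functions are placeholders for separate LLM calls and return canned values purely for illustration:

```python
def propose(task):
    """Placeholder: ask the LLM for several candidate approaches."""
    return ["Approach A: webinar series", "Approach B: in-app guides", "Approach C: 1:1 training"]

def score(candidate):
    """Placeholder: ask the LLM to rate a candidate from 0 to 10."""
    ratings = {
        "Approach A: webinar series": 6,
        "Approach B: in-app guides": 9,
        "Approach C: 1:1 training": 4,
    }
    return ratings[candidate]

def expand(candidate):
    """Placeholder: ask the LLM to develop the surviving branch in detail."""
    return f"Detailed plan based on {candidate}"

def tree_of_thought(task, keep=1, threshold=5):
    candidates = propose(task)
    # Backtrack: discard branches unlikely to lead to a useful outcome.
    viable = [c for c in candidates if score(c) >= threshold]
    best = sorted(viable, key=score, reverse=True)[:keep]
    return [expand(c) for c in best]

plans = tree_of_thought("Plan an onboarding program for low-adoption accounts")
```

The scores themselves are worth reading: the reasons a branch gets rejected often carry insight of their own, as noted above.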
With these advanced prompting strategies under your belt, you'll be much better prepared to tackle complex, multi-step, or abstract tasks.
Speaking of advanced prompting, it’s time to take your skills to the next level of AI: agents. In Chapter 9, we’ll learn how to set up AI agents that can simulate roles, provide expert feedback, and automate tasks for Customer Success teams.
The traditional idea of AI as just a bot that replies to your questions is outdated, thanks to one of the most exciting developments in AI: the introduction of agents. Agents are essentially virtual teammates, and in the fast-moving function of Customer Success, which often operates with limited resources, they’re an absolute windfall.
Unlike the standard process of interacting with AI, where you give a prompt and get a single output, agents can simulate roles, provide ongoing feedback, and handle multi-step processes autonomously. Learning how to get the most out of agents will help you refine your strategies, fine-tune your work, and make smarter decisions.
In this chapter, we’ll cover the two types of agents that are easy to set up, but incredibly useful for CS.
Simulation agents are designed to act as a stand-in for a human role, letting you practice, plan, or test different approaches in a risk-free environment. They are especially useful for preparing for meetings, strategizing account plans, or rehearsing customer interactions.
The big advantage of simulation agents is scalability: they let you explore multiple scenarios quickly and efficiently while gathering feedback on your approach in each situation. This helps you prepare for a variety of situations without extra manual effort.
Example Prompts:
Practical Exercise: Simulation Agent
Creating an agent that works well for a situation can be tricky, so let’s give it some practice. Try designing an agent for the following scenario, and check the example prompt to see if you’re on the right track.
Scenario: You are a new CSM managing a portfolio of mid-market accounts. One of your customers is approaching renewal, and you want to prepare for a conversation to encourage them to adopt a new premium feature. You want to anticipate potential objections, questions, and concerns from the customer, and test different ways to present the upsell.
Example Prompt:
Step 1: Define Persona
You are a customer considering renewing your subscription.
Step 2: Provide Context
The customer has been using the platform for 12 months, has moderate adoption of existing features, and has shown interest in but not yet adopted Feature X.
Step 3: Set Instructions
Simulate a conversation where you express concerns about adopting Feature X. Respond naturally to my answers, raise realistic objections, ask clarifying questions, and provide feedback as a cautious but interested customer. Include potential objections and interest areas, and highlight any barriers to purchasing additional modules.
Step 4: Iterate
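If you’re building the simulation programmatically, the four steps above map neatly onto a chat loop. This is a minimal sketch assuming a hypothetical `chat_llm()` wrapper around a chat-style API (the role/content message format used by most chat LLM APIs), stubbed here so the structure is clear:

```python
def chat_llm(messages):
    """Placeholder for a real chat-completion API call."""
    return "As the customer: I'm not sure Feature X is worth the extra cost..."

# Steps 1-3: persona, context, and instructions become the system prompt.
system_prompt = (
    "You are a customer considering renewing your subscription. "          # persona
    "You have used the platform for 12 months, with moderate adoption, "   # context
    "and are interested in but have not yet adopted Feature X. "
    "Raise realistic objections, ask clarifying questions, and respond "   # instructions
    "naturally as a cautious but interested customer."
)

messages = [{"role": "system", "content": system_prompt}]

def say(user_turn):
    """Step 4: iterate. Each exchange is appended so the agent keeps context."""
    messages.append({"role": "user", "content": user_turn})
    reply = chat_llm(messages)
    messages.append({"role": "assistant", "content": reply})
    return reply

reply = say("Hi! I'd love to walk you through what Feature X could do for your team.")
```

Keeping the full message history is what lets the simulated customer stay in character and remember earlier objections across turns.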
Expert feedback agents function as a virtual advisor, reviewing your work and providing guidance or corrections. They are perfect for QBR decks, email drafts, success plans, or operational workflows.
It’s like having a seasoned teammate looking over your shoulder (minus the judgement). They make sure your slides, emails, and reports meet company standards while catching gaps, errors, or opportunities you might have missed. No more spending hours revising. On top of that, agents can advise on your approach and strategy, helping you capitalize on every opportunity that presents itself to you.
Sometimes, in addition to role-playing and reviewing, you need the AI to build with you. That’s what the Solutions Engineer Agent is for. Technical teams can get swamped with work, and waiting on them for every setup or configuration can create a lot of delays.
This agent acts like a technically skilled teammate who understands both your product and your customer’s goals. You can use it to design workflows, configure features, or troubleshoot complex setups directly inside your CS tools.
Using this agent helps you prototype solutions faster, translate customer needs into product workflows, validate ideas before involving engineering, and overall feel more confident in technical conversations with clients.
1. Feed product information
Give the agent all the context it needs to “learn” your system. You don’t need to share proprietary code, just the kind of materials a skilled solutions engineer would reference:
2. Add customer context
Once the agent understands your product, feed it details about your customer:
3. Ask it to build with you
Now you can start using the agent as a technical partner. Ask it to:
Prompt 1:
You are a Solutions Engineer familiar with [Product Name]. Using the attached product setup guides, Miro workflows, and customer use case summary, recommend the best way to configure Feature X for a customer whose goal is to automate reporting and reduce manual data entry. Include steps, dependencies, and potential risks.
Prompt 2:
 You are a Solutions Engineer helping a CSM design a solution for a customer in the finance industry. The customer’s goal is to improve visibility into account performance. Using the attached documentation and customer background, suggest a dashboard setup and integration plan that would best meet their needs.
Every great CSM also has to be a good storyteller; they need to be someone who can communicate clearly, confidently, and with impact. The Communications Feedback Agent is a more specialized agent that helps you build that skill like a coach sitting in on every customer call.
This agent reviews your calls, notes, or transcripts and gives you practical, personalized feedback on your communication style. It can help you spot filler words, repetitive phrasing, or areas where your message loses focus.
If you want to sound more concise, credible, and confident in every customer conversation, give this agent a go.
1. Feed communication data
Upload your own call transcripts, meeting summaries, or customer call recordings. You can also add:
This gives the agent real examples of how you communicate, so it can analyze your tone, clarity, and habits.
2. Define what “great” looks like
Feed the agent frameworks or communication standards that align with your role. Examples would be:
3. Set feedback goals
Tell the agent what you want to improve on. It could be reducing filler words (“um,” “like,” “you know”), improving pacing, or strengthening how you summarize customer value.
4. Iterate and gamify
Have the agent score each call, track improvement over time, and suggest a small challenge for your next meeting. Try challenging yourself to reduce filler words by 20% or keep your talk ratio below 50% on calls. 
You can even ask it to create a personal communication dashboard or habit tracker.
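To show what "count filler words" and "talk ratio" mean in practice, here’s a rough sketch of that scoring implemented directly. The speaker-prefixed transcript format is an assumption for illustration; the agent itself would do this analysis from your uploaded transcript:

```python
import re

FILLERS = {"um", "uh", "like", "basically"}

def score_transcript(transcript, me="CSM"):
    """Count my filler words and compute my share of the total words spoken."""
    my_words, other_words, filler_count = 0, 0, 0
    for line in transcript.splitlines():
        speaker, _, text = line.partition(":")
        words = re.findall(r"[a-z']+", text.lower())
        if speaker.strip() == me:
            my_words += len(words)
            filler_count += sum(w in FILLERS for w in words)
        else:
            other_words += len(words)
    total = my_words + other_words
    talk_ratio = my_words / total if total else 0.0
    return {"fillers": filler_count, "talk_ratio": round(talk_ratio, 2)}

transcript = "CSM: Um, so like the dashboard shows adoption\nCustomer: Great, show me"
stats = score_transcript(transcript)
```

Scores like these are easy to track call over call, which is what makes the gamified challenges (20% fewer fillers, talk ratio under 50%) measurable rather than a vague feeling.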
Prompt 1:
You are a Communications Coach for Customer Success Managers. Using the attached call transcript, analyze my speech patterns. Count filler words, repetitive phrases, and talk-to-listen ratio. Provide an improvement plan with specific goals for the next 30 days and suggestions for more confident phrasing.
Prompt 2:
You are a CSM communications expert trained in the MEDDPICC framework. Review the attached discovery call transcript and highlight where I successfully covered the MEDDPICC elements and where I could have gone deeper. Suggest 3 specific questions I could ask in future calls to uncover more value.
Prompt 3:
Using the attached meeting transcript, identify moments where my explanations were too long or unclear. Recommend simpler or more engaging alternatives for key parts of the conversation, and list power phrases I can use to sound more confident.
By now, you’ve learned a lot about prompting, like how to structure prompts, apply advanced techniques, and set up AI agents. But before we wrap up, we’ve got a final bonus chapter to check out. In Chapter 10, we’ll cover LLM selection, showing you how to choose the right model for different Customer Success tasks, what to consider when evaluating options, and how to match a model’s capabilities to your team’s needs. Think of it as the cherry on top; everything you’ve learned so far works even better when paired with the right AI engine.
Choosing the right Large Language Model (LLM) shapes the quality, speed, and reliability of your outputs, so it’s worth understanding what makes one LLM better suited than another for specific Customer Success tasks.
Not all LLMs are created equal. Some excel at reasoning and multi-step planning, while others are faster and more cost-efficient for straightforward tasks. Selecting the wrong model can lead to outputs that are inconsistent, less accurate, or require more iterative refining. Choosing the right LLM ensures smoother workflows, faster turnarounds, and higher-quality insights you can confidently share with customers, executives, or cross-functional teams.
Experimenting with each LLM is well worth your time, since it can have a huge impact on your workflow. Here’s a structured method you can follow:
We’ve covered an overview of the best available LLMs in this chapter. Check out our second bonus chapter to learn in-depth how to use one particular tool, Gemini, in your workspace.
Unlike a separate chatbot window or AI tool, Gemini can be integrated right where you work, like in Gmail, Docs, Sheets, and Meet. This lets you turn your daily workflow into a faster, smarter version of itself.
In this bonus chapter, we’ll walk through how to use Gemini (and its reusable automations, GEMs) to save hours every week while elevating how you prep, analyse, and communicate.
Gemini is available as an add-on for Google Workspace Business or Enterprise plans. If you don’t see it, ask your admin to enable it in the Workspace Admin Console.
Since Gemini is directly available inside your Workspace apps, it already understands the context of the file or conversation you’re working on. The benefit of this is that, instead of starting from scratch, Gemini can read and reason based on your open document, spreadsheet, or email thread.
Here’s what that looks like across tools:
GEMs are reusable prompt templates inside Google Workspace that automate repetitive workflows.
Here are three powerful GEMs every CSM should build:
Once built, you can share GEMs with your CS team and have a shared library of automation for daily tasks.
Once you’ve mastered the basics, you can make Gemini part of your everyday workflow without even leaving your Google Workspace tools.
Examples:
Small automations like these can shave off hours each week, and make you look consistently prepared.
With the right model powering your prompts and agents, and all the techniques you’ve learned, from prompt chaining to advanced reasoning and agent simulations, you can work more efficiently and more accurately, giving you a real edge in Customer Success.
Great work! You’ve now got a solid foundation in prompt engineering for CS. Here’s a quick recap of what we covered:
But this is just the start; there’s much more to come in our AI in CS series, where you’ll explore advanced workflows, analytics, and strategies to amplify your impact even further. Keep experimenting, iterating, and applying what you’ve learned!