How to Run Better Deal Reviews with AI Meeting Data
Deal reviews are among the most important meetings in any sales organization. They're also among the most broken.
The standard deal review goes like this: a rep walks through their pipeline, narrates the status of each deal, and the manager asks probing questions. The rep answers from memory - or from the CRM notes they updated five minutes before the meeting. The manager makes a judgment call about whether the deal is real based on... the rep's confidence level and storytelling ability.
This is not a data-driven process. It's a narrative-driven process. And narratives are unreliable.
The Problem With Narrative-Based Deal Reviews
When deal reviews rely on rep narratives, several failure modes emerge:
Confirmation bias. Reps who are emotionally invested in a deal unconsciously filter what they report. They emphasize positive signals and minimize red flags. This isn't dishonesty - it's human nature.
Recency bias. The last call carries outsized weight. If the most recent conversation was positive, the entire deal looks good. If it was negative, the deal looks at risk. Neither snapshot tells the full story.
Memory gaps. By the time a deal review happens, the rep may have had six conversations with the prospect over three months. Nobody accurately remembers what the economic buyer said on call two about their budget approval process.
Inconsistent evaluation criteria. Without a standard framework applied to every deal, each rep presents information differently. One rep talks about timeline, another talks about relationships, another focuses on product fit. The manager can't compare apples to apples.
What AI Meeting Data Changes
When every sales call is transcribed and analyzed automatically, deal reviews can shift from "tell me about this deal" to "let's look at what actually happened in the conversations."
Here's what that looks like in practice:
MEDDIC Fields Extracted From Actual Conversations
Instead of asking "Who's the economic buyer?" and accepting whatever the rep says, you can look at AI-extracted MEDDIC data from the actual calls:
| MEDDIC Element | What AI Extracts | Example |
|---|---|---|
| Metrics | Specific numbers and KPIs the prospect mentioned | "They said they're losing 40 hours/month on manual data entry" |
| Economic Buyer | Who has budget authority based on conversation evidence | "VP of Ops said she needs to get CFO sign-off for anything over $50K" |
| Decision Criteria | What the prospect said matters to them | "They specifically asked about SOC 2 compliance and SSO support" |
| Decision Process | Steps they described for making a purchase | "Legal review, then security assessment, then pilot with IT team" |
| Identify Pain | Pain points expressed in the prospect's own words | "Their current tool goes down twice a month and they lose customer data" |
| Champion | Who is actively advocating internally | "Director of Engineering volunteered to run the internal pilot" |
IceCubes extracts these automatically after every sales call. By the time a deal review happens, you have MEDDIC data populated from what the prospect actually said - not from the rep's interpretation.
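To make "structured data from conversations" concrete, here's a minimal sketch of what call-level MEDDIC extraction could look like, plus a gap check a manager might run before a review. The class, field names, and helper are illustrative assumptions, not IceCubes' actual schema or API:

```python
from dataclasses import dataclass, field

# Hypothetical shape for AI-extracted MEDDIC data; illustrative only,
# not IceCubes' actual schema or API.
@dataclass
class MeddicSnapshot:
    metrics: list[str] = field(default_factory=list)            # "losing 40 hours/month on manual data entry"
    economic_buyer: str | None = None                           # "CFO sign-off needed over $50K"
    decision_criteria: list[str] = field(default_factory=list)  # "SOC 2 compliance", "SSO support"
    decision_process: list[str] = field(default_factory=list)   # "legal review", "security assessment", "IT pilot"
    identified_pain: list[str] = field(default_factory=list)    # "current tool goes down twice a month"
    champion: str | None = None                                 # "Director of Engineering runs the internal pilot"

def meddic_gaps(snapshot: MeddicSnapshot) -> list[str]:
    """Return the MEDDIC elements with no conversation evidence yet."""
    evidence = {
        "Metrics": snapshot.metrics,
        "Economic Buyer": snapshot.economic_buyer,
        "Decision Criteria": snapshot.decision_criteria,
        "Decision Process": snapshot.decision_process,
        "Identify Pain": snapshot.identified_pain,
        "Champion": snapshot.champion,
    }
    return [element for element, value in evidence.items() if not value]
```

An empty result means every element has at least some evidence behind it; a result like `["Economic Buyer"]` tells the manager exactly which question to ask in the review.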
Objection Tracking Across All Conversations
Smart Tags in IceCubes let you track objections across every call in a deal cycle. Set up a Smart Tag for objections, and you'll see:
- What objections were raised and when
- Whether objections were addressed or left unresolved
- Whether the same objection keeps coming up (a sign it wasn't truly handled)
- New objections that emerged in the latest call
A manager can walk into a deal review knowing that the prospect raised pricing concerns in calls two, three, and five, and that the rep's discounting conversation in call four didn't resolve it. That's a materially different conversation than "how's the pricing discussion going?"
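For teams that export this data, the recurrence check is mechanical to express. A minimal sketch, assuming a hypothetical record shape for tagged objections rather than an actual IceCubes export format:

```python
from collections import defaultdict

# Hypothetical tagged-objection records, one per mention; the dict shape
# is invented for illustration.
objections = [
    {"call": 2, "topic": "pricing", "resolved": False},
    {"call": 3, "topic": "pricing", "resolved": False},
    {"call": 4, "topic": "security", "resolved": True},
    {"call": 5, "topic": "pricing", "resolved": False},
]

def recurring_unresolved(events: list[dict]) -> dict[str, list[int]]:
    """Flag objection topics raised in two or more calls and never marked resolved."""
    calls_by_topic = defaultdict(set)
    resolved_topics = set()
    for event in events:
        calls_by_topic[event["topic"]].add(event["call"])
        if event["resolved"]:
            resolved_topics.add(event["topic"])
    return {
        topic: sorted(calls)
        for topic, calls in calls_by_topic.items()
        if len(calls) >= 2 and topic not in resolved_topics
    }

print(recurring_unresolved(objections))  # {'pricing': [2, 3, 5]}
```

The output mirrors the pricing example above: the topic surfaced in calls two, three, and five and was never marked resolved.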
Competitor Mentions Over Time
Another Smart Tag use case: tracking when and how competitors come up in conversations. You can see:
- Which competitors the prospect is evaluating
- What specific features or capabilities the prospect associates with each competitor
- Whether competitor mentions are increasing or decreasing over time (see the sketch after this list)
- Whether the prospect is using competitor quotes as negotiation leverage
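The trend signal in particular is simple arithmetic once mentions are tagged per call. A minimal sketch over hypothetical per-call counts; the competitor names and data shape are invented for illustration:

```python
# Hypothetical competitor mention counts per call, oldest call first;
# illustrative data, not an actual IceCubes export.
mentions_per_call = {"Acme CRM": [0, 1, 1, 3], "PipeRival": [2, 1, 0, 0]}

def mention_trend(counts: list[int]) -> str:
    """Compare the back half of the deal cycle to the front half."""
    mid = len(counts) // 2
    first, second = sum(counts[:mid]), sum(counts[mid:])
    if second > first:
        return "increasing"
    if second < first:
        return "decreasing"
    return "flat"

for competitor, counts in mentions_per_call.items():
    print(competitor, mention_trend(counts))
# Acme CRM increasing
# PipeRival decreasing
```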
Next Steps and Action Items With Accountability
Every call generates action items with assignees and due dates. When you review a deal, you can see:
- What the rep committed to doing after each call
- What the prospect committed to doing
- Which action items were completed and which were dropped
- Whether there's a pattern of the prospect not following through (a disengagement signal; see the sketch after this list)
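Here's a minimal sketch of that accountability check, assuming a hypothetical action-item record with an owner, due date, and completion flag (illustrative, not an actual IceCubes export):

```python
from datetime import date

# Hypothetical action items pulled from call summaries; shape is illustrative.
action_items = [
    {"owner": "rep",      "task": "send security docs",      "due": date(2024, 5, 3),  "done": True},
    {"owner": "rep",      "task": "schedule pilot kickoff",  "due": date(2024, 5, 6),  "done": True},
    {"owner": "prospect", "task": "intro to CFO",            "due": date(2024, 5, 6),  "done": False},
    {"owner": "prospect", "task": "share procurement steps", "due": date(2024, 5, 10), "done": False},
]

def overdue_by_owner(items: list[dict], today: date | None = None) -> dict[str, list[str]]:
    """Group open, past-due items by owner; overdue prospect items hint at disengagement."""
    today = today or date.today()
    overdue: dict[str, list[str]] = {}
    for item in items:
        if not item["done"] and item["due"] < today:
            overdue.setdefault(item["owner"], []).append(item["task"])
    return overdue

print(overdue_by_owner(action_items, today=date(2024, 5, 15)))
# {'prospect': ['intro to CFO', 'share procurement steps']}
```

When the rep's side is clean but everything on the prospect's side is overdue, that's the stalled-deal signal worth raising in the review.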
A Better Deal Review Format
Here's a practical format for running deal reviews with AI meeting data:
1. Pre-Review Preparation (5 Minutes Per Deal)
Before the deal review meeting, the manager reviews:
- AI-extracted MEDDIC data from all calls in the deal
- Smart Tag outputs for objections and competitor mentions
- Action items from the last three calls
- The most recent meeting summary
This replaces the manager going in cold and relying on the rep's oral update.
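Mechanically, that prep amounts to assembling one brief per deal from the extracted artifacts. A sketch, assuming a hypothetical `deal` dict of AI-extracted data rather than any actual IceCubes API object:

```python
def pre_review_brief(deal: dict) -> dict:
    """Assemble the five-minute prep view for one deal.

    `deal` is a hypothetical dict of AI-extracted artifacts (MEDDIC fields,
    tagged objections, action items, call summaries); the keys and shapes
    are invented for illustration.
    """
    return {
        "meddic_gaps": [k for k, v in deal["meddic"].items() if not v],
        "open_objections": [o["topic"] for o in deal["objections"] if not o["resolved"]],
        "open_action_items": [a["task"] for a in deal["action_items"] if not a["done"]],
        "latest_summary": deal["summaries"][-1],
    }
```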
2. During the Review: Evidence-Based Discussion
Instead of "Walk me through this deal," try:
"I see the prospect mentioned [competitor] in three of your last four calls. What's your competitive strategy?" The rep can't dodge this - the data shows the competitor is a real factor.
"The MEDDIC extraction shows we haven't identified the economic buyer. Who controls the budget?" If the rep says it's the VP they've been talking to, but the AI extracted a comment from that VP saying "I'll need to run this up the chain," you have a coaching moment.
"There were four action items from the last call. Two on our side are done, but the prospect's two are both overdue. What's happening?" This surfaces stalled deals before they go dark.
"The prospect's stated decision criteria include SOC 2 compliance. Have we addressed that?" Pulling directly from what the prospect said keeps the conversation grounded.
3. Post-Review: Track What Changed
After the deal review, any commitments the rep makes can be checked against future call data. Did the rep address the unresolved objection? Did the prospect complete their overdue action items? AI meeting data creates a closed loop.
Common Objections to Data-Driven Deal Reviews
"This feels like micromanagement." It's not about monitoring reps - it's about giving them better tools. Reps benefit from having accurate MEDDIC data they didn't have to manually enter. It saves them CRM data entry time and helps them identify gaps in their own deals.
"Reps will push back on being recorded." Most reps prefer transcription over manual note-taking. The key is positioning it as a tool that helps them, not surveillance. With IceCubes, there's no bot joining the call, so there's no prospect-facing friction either.
"We don't use MEDDIC." The principle applies to any sales framework. Whether you use MEDDIC, BANT, SPICED, or your own custom methodology, the point is extracting structured data from unstructured conversations. IceCubes' Smart Tags let you define whatever criteria matter to your sales process.
Making the Transition
You don't have to overhaul your entire deal review process overnight. A practical starting point:
- Have reps install IceCubes and run it on all external sales calls for two weeks. No bot joins the call, so there's zero friction with prospects.
- Review the MEDDIC extraction on three to five active deals. Compare what the AI extracted with what the rep has in the CRM.
- Run one deal review using the AI data alongside the rep's narrative. See if the data surfaces anything the narrative missed.
- Refine your Smart Tags based on what signals matter most to your team's deal qualification process.
Most managers who try this find at least one deal where the AI data contradicts the rep's narrative - usually a deal that's further from closing than the rep believes. That single insight often pays for the tool.
Get Started
IceCubes offers 50 free AI credits with no credit card required. Install the extension, run it on your team's next sales calls, and see what MEDDIC extraction looks like when it's based on what actually happened in the conversation.