Dealer Groups: How to Benchmark Lead Response Across Stores
Your Mississauga store responds in 47 seconds. Your Oakville store takes 3 hours. Same brand, same CRM, same pay plan. The difference isn’t talent or market conditions. It’s that nobody’s been comparing the two stores using the same scorecard. For multi-rooftop dealer groups, speed-to-lead isn’t a single-store problem. It’s a group-wide visibility problem, and fixing it starts with a monthly benchmark that every GM can see.
You’ve probably suspected for a while that some stores are faster than others, but you’ve never had the numbers side by side. Each GM reports their own version of “response time,” measured differently, and nobody wants to be the one who asks for a real apples-to-apples comparison. Meanwhile the revenue gap between your best and worst store keeps growing, and you can’t quantify it.
Why Most Dealer Groups Don’t Know Their Worst Store
On paper, this is a solved problem. You’ve got a CRM at every location. Every internet lead is timestamped. But most dealer groups never compare stores head-to-head because each GM reports their own numbers, and those numbers are measured differently at every rooftop.
One store reports “average time to first response.” That includes auto-emails sent at 0:03, which pulls the average down to a flattering 4 minutes. Another store reports time to first phone attempt, which looks worse but tells you more. A third store doesn’t report response time at all because their CRM doesn’t make it easy to extract.
Pied Piper’s 2024 study of over 4,000 dealerships found the average response time exceeded 90 minutes. But that’s across the industry. Within a single dealer group, the spread between best and worst store is often 50x or more. A 2024 Fullpath study measured the business-hours average at 47 minutes, which means your slowest store is likely well past that. Ringlead’s Dealer Response Index found similar patterns: only the top 10% of 200 dealerships achieved sub-60-second live calls.
The first step is getting every store on the same measurement.
Measure Median, Not Average
The most important decision in your benchmark is which number you track. Average response time is the wrong one.
Want to see the response path on your own phone? Try the live demo and watch how Ringlead handles an internet lead before the customer shops the dealer across town.
Here’s why. A store handles 200 internet leads in a month. 190 of them get a call within 60 seconds. 10 leads arrive overnight and sit until 9 AM, clocking 8 to 14 hours each. The average response time for that store: roughly 35 minutes. The median: 52 seconds. The average suggests the store has a serious problem. The median tells the real story: this store is fast, and they need after-hours coverage.
Average gets pulled by outliers. Median gives you the experience of the typical customer. When you’re comparing five, ten, or twenty rooftops, median is the only number that produces a fair ranking.
Velocify research shows leads contacted within 60 seconds convert at roughly 4x the rate of leads contacted at the industry average. That 4x multiplier applies to the median customer experience, not the average. Track what matters.
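The average-versus-median gap is easy to demonstrate with a few lines of Python. This is a minimal sketch using hypothetical numbers matching the 200-lead example above: 190 fast responses and 10 overnight sits of 8 to 14 hours.

```python
from statistics import mean, median

# Hypothetical month: 190 leads answered in ~52 seconds,
# 10 overnight leads that sat 8-14 hours (all times in seconds).
fast = [52] * 190
overnight = [h * 3600 for h in (8, 9, 9, 10, 11, 11, 12, 13, 14, 14)]
responses = fast + overnight

avg_min = mean(responses) / 60
med_sec = median(responses)

print(f"average: {avg_min:.0f} min")   # outliers drag this past half an hour
print(f"median:  {med_sec:.0f} sec")   # the typical customer's experience
```

Ten slow leads out of 200 push the average past 30 minutes while the median stays under a minute, which is exactly why the average misranks a fast store with a coverage gap.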
Normalize for Volume Differences
Your flagship store in Toronto pulls 400 internet leads a month. Your rural location outside Barrie pulls 90. Comparing raw response times across these two stores is misleading, because the Toronto store has more staff, more shifts, and more infrastructure to handle the volume.
Instead of raw times, use a ratio: contact-rate-60. That’s the percentage of internet leads that received a live phone call within 60 seconds of submission.
| Store | Monthly Leads | Contact-Rate-60 | Median Response |
|---|---|---|---|
| Toronto (Yonge) | 412 | 74% | 38 sec |
| Mississauga | 287 | 81% | 29 sec |
| Oakville | 193 | 22% | 11 min 47 sec |
| Barrie | 88 | 68% | 44 sec |
Now you can compare. Mississauga is your best store despite lower volume. Oakville is your problem, and the gap isn’t subtle. You don’t need a statistician to read this table. Every GM in the room can see it.
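Contact-rate-60 is a one-line calculation once you have per-lead response times. Here is a minimal sketch with hypothetical per-store data; the store names and times are illustrative, not real exports.

```python
from statistics import median

# Hypothetical response times in seconds for one month, per store.
stores = {
    "Mississauga": [29, 31, 25, 40, 33, 2100, 28],
    "Oakville":    [707, 650, 820, 45, 55, 1200, 900],
}

def contact_rate_60(times):
    """Share of leads reached by a live call within 60 seconds."""
    return sum(t <= 60 for t in times) / len(times)

# Rank best to worst by contact-rate-60, the scorecard's headline number.
ranked = sorted(stores, key=lambda s: contact_rate_60(stores[s]), reverse=True)
for name in ranked:
    t = stores[name]
    print(f"{name}: {contact_rate_60(t):.0%} in 60s, median {median(t):.0f}s")
```

Because the metric is a ratio, a 90-lead store and a 400-lead store land on the same scale, which is the whole point of the normalization.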
The 78% of buyers who purchase from the first dealer to make live contact (automotive industry research) don’t care how many leads your store gets. They care whether your store called them before anyone else did.
Build the Monthly Scorecard
A benchmark only works if it arrives on the same day every month, formatted the same way, with no room for interpretation. Here’s what yours should include.
Five columns, no more:
- Store name. Sorted by contact-rate-60, best to worst.
- Total internet leads received. So GMs can’t claim low volume as an excuse for bad response.
- Contact-rate-60. Percentage of leads contacted by live phone within 60 seconds. This is your headline number.
- Median response time. In seconds for stores under 2 minutes, in minutes for everyone else. The unit difference is intentional. Seeing “11 min 47 sec” next to “29 sec” hits harder than seeing two numbers in the same unit.
- Month-over-month trend. An arrow or percentage change from last month. GMs respond to trajectories as much as absolutes.
Optional sixth column: Internet lead close rate. This connects speed to revenue and makes the business case impossible to ignore. If your 81% contact-rate-60 store closes at 22% and your 22% contact-rate-60 store closes at 9%, the conversation shifts from “we should answer faster” to “Oakville is leaving close to a million dollars a year on the table.”
Track business-hours and after-hours separately. 40-45% of leads arrive outside business hours, and lumping them together hides whether a store has a staffing problem or a coverage problem. Two rows per store: one for 8 AM to 7 PM, one for everything else.
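Splitting each store into business-hours and after-hours rows is a simple grouping step. This sketch assumes an 8 AM to 7 PM window, per the article, and uses hypothetical lead records; adapt the field names to whatever your CRM export actually produces.

```python
from datetime import datetime

# Hypothetical CRM export rows: (store, submitted_at, seconds_to_live_call)
leads = [
    ("Oakville", datetime(2024, 6, 3, 10, 15), 45),
    ("Oakville", datetime(2024, 6, 3, 21, 40), 31000),   # after hours
    ("Oakville", datetime(2024, 6, 4, 14, 5), 710),
    ("Oakville", datetime(2024, 6, 8, 2, 30), 26000),    # after hours
]

def bucket(submitted):
    # Business hours assumed to be 8 AM - 7 PM; adjust to your floor schedule.
    return "business" if 8 <= submitted.hour < 19 else "after-hours"

rows = {}
for store, submitted, secs in leads:
    rows.setdefault((store, bucket(submitted)), []).append(secs)

for (store, period), times in sorted(rows.items()):
    rate = sum(t <= 60 for t in times) / len(times)
    print(f"{store} ({period}): {rate:.0%} contact-rate-60 on {len(times)} leads")
```

A store showing 50% during business hours and 0% after hours has a coverage problem, not a staffing problem, and the two rows make that distinction visible at a glance.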
Use the Data in GM Meetings
The scorecard only works if it creates consequences. Here’s how to present it without turning the meeting into a blame session.
See the lead response flow live
Drop your number and see the same phone flow Ringlead uses for dealership internet leads.
Try the Live Demo
Lead with the winner. Don’t start by calling out Oakville. Start by asking the Mississauga GM what they changed last month that pushed their contact-rate-60 from 73% to 81%. Let them explain their process. Let the other GMs hear it. The message lands harder coming from a peer than from the group VP.
Show the revenue gap. Take the top store’s close rate on internet leads and apply it to the bottom store’s lead volume. “If Oakville closed internet leads at Mississauga’s rate, that’s 14 more deals a month at $3,200 front gross. That’s $44,800 your store is missing every month.” At $537,600 annualized, it gets attention fast.
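The revenue-gap pitch above is simple enough to script so every GM meeting uses the same math. In this sketch, the 16% benchmark close rate is an assumed figure chosen to reproduce the 14-deal example; swap in your top store’s actual rate.

```python
# Hypothetical inputs: apply a benchmark close rate to the
# bottom store's lead volume to size the monthly revenue gap.
bottom_leads = 193     # Oakville's monthly internet leads
bottom_close = 0.09    # Oakville's current internet close rate
top_close = 0.16       # assumed benchmark close rate (use your top store's)
front_gross = 3200     # average front gross per deal

extra_deals = round(bottom_leads * (top_close - bottom_close))
monthly_gap = extra_deals * front_gross
print(f"{extra_deals} deals/mo -> ${monthly_gap:,}/mo, ${monthly_gap * 12:,}/yr")
# -> 14 deals/mo -> $44,800/mo, $537,600/yr
```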
Set a floor, not a ceiling. Don’t ask every store to hit 81%. Set a minimum contact-rate-60 of 50% and give the bottom stores 90 days. That’s achievable with basic lead routing changes and doesn’t require new technology or headcount.
Publish the rankings. Every GM should see every store’s numbers. Not to shame anyone, but because competitive pressure works. When a GM sees their store ranked 7th out of 8, they’ll ask their internet manager why before the meeting is over. This is the same psychology behind sales leaderboards, applied to process instead of production.
The Competitive Effect Is Real
Something happens when stores see each other’s numbers for the first time. The reaction is predictable and it’s useful.
The top-performing store gets public recognition, which reinforces their behavior. The middle stores immediately want to know what the top store is doing differently. They’ll call each other. They’ll copy the process. This is exactly what you want.
The bottom store gets uncomfortable. That discomfort is the point. A GM who’s been telling the group VP that “we’re working on response time” can’t hide behind vague progress when the scorecard shows they’re dead last for the third month running. Why a 90-minute response kills close rate stops being theory when the P&L proves it at your own rooftop.
Within 60 to 90 days of starting cross-store benchmarking, most dealer groups see their worst store improve by 30-50%. Not because they hired more staff. Not because they bought new software. Because somebody finally showed them the scoreboard.
One group we’ve worked with saw their slowest store drop from a 14-minute median to under 90 seconds in 11 weeks. The GM didn’t change the routing or the CRM. He changed who was responsible for answering internet leads during which shift, and he checked the scorecard every Friday morning.
What to Measure Beyond Speed
Speed is the starting point, not the finish line. Once your group has contact-rate-60 above 50% across every store, add these to the scorecard:
Appointment set rate on first call. A 40-second response followed by a weak phone call doesn’t produce deals. Pair speed data with call scoring to find stores that are fast but not effective.
Lead source response time. Break contact-rate-60 by source: website forms, third-party (AutoTrader, CarGurus), OEM leads. Some stores are fast on website leads but ignore third-party leads because the CRM puts them in a different queue.
Day-of-week and time-of-day patterns. Your Barrie store might be at 90% contact-rate-60 Tuesday through Friday but drop to 15% on Saturday afternoons when the floor is packed with walk-ins. Ringlead data shows Saturday between 11 AM and 3 PM is the highest-volume lead window for most stores and the worst response window. That’s a scheduling problem, not a motivation problem.
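Both breakdowns above are the same grouping operation applied to different keys. This sketch uses hypothetical lead records with a source, weekday, and hour attached; it slices contact-rate-60 by source and isolates the Saturday 11 AM to 3 PM window.

```python
from collections import defaultdict

# Hypothetical leads: (source, weekday, hour, seconds_to_live_call)
leads = [
    ("website", "Tue", 10, 38), ("website", "Sat", 13, 900),
    ("AutoTrader", "Wed", 11, 50), ("AutoTrader", "Sat", 12, 1400),
    ("CarGurus", "Thu", 15, 41), ("OEM", "Sat", 14, 2100),
]

by_source = defaultdict(list)
saturday_midday = []
for source, day, hour, secs in leads:
    by_source[source].append(secs)
    if day == "Sat" and 11 <= hour < 15:   # the high-volume danger window
        saturday_midday.append(secs)

for source, times in by_source.items():
    rate = sum(t <= 60 for t in times) / len(times)
    print(f"{source}: {rate:.0%} contact-rate-60")

sat_rate = sum(t <= 60 for t in saturday_midday) / len(saturday_midday)
print(f"Saturday 11-3: {sat_rate:.0%} contact-rate-60")
```

A store that is fast overall but at 0% in the Saturday window has a scheduling gap the headline number would hide.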
How to Start This Month
You don’t need a new platform to start benchmarking. You need three things.
First: Run a speed audit at every store. Submit test leads at different times and days. Record time to live phone contact, not time to auto-email. This gives you a baseline.
Second: Pull 30 days of CRM data and calculate median response time and contact-rate-60 for each store. If your CRM can’t export this easily, pull call logs and match them to lead submission timestamps. It takes a few hours per store the first time. After that, it’s a repeatable report.
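When the CRM won’t export response times directly, the call-log matching described above can be done in a short script. This is a simplified sketch with hypothetical timestamps; a real version should pair calls to leads by phone number rather than taking the first outbound call after submission, as shown here.

```python
from datetime import datetime, timedelta

# Hypothetical exports: lead submission times and outbound call log entries.
lead_times = [datetime(2024, 6, 3, 10, 15, 0), datetime(2024, 6, 3, 14, 2, 0)]
call_times = [datetime(2024, 6, 3, 10, 15, 41), datetime(2024, 6, 3, 16, 30, 0)]

def first_response_seconds(lead, calls, window=timedelta(hours=24)):
    """Seconds from lead submission to the first outbound call in the window."""
    later = [c for c in calls if lead <= c <= lead + window]
    return (min(later) - lead).total_seconds() if later else None

for lead in lead_times:
    secs = first_response_seconds(lead, call_times)
    print(f"{lead:%H:%M} lead -> {secs:.0f}s to first call")
```

Run this once per store over 30 days of exports and you have the raw per-lead times that median and contact-rate-60 are computed from.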
Third: Put it on the next GM meeting agenda. Show the table. Lead with the winner. Set the floor at 50% contact-rate-60. Schedule the next scorecard for 30 days out.
That’s it. No new tools, no budget request, no six-month rollout. A spreadsheet, a meeting, and a willingness to show every store where they stand.
Frequently Asked Questions
Why should I use median instead of average for lead response benchmarking?
Average response time gets inflated by a handful of leads that sit for hours or overnight. A store with 190 leads answered in under a minute and 10 leads that sat for 12 hours will show an “average” of roughly 37 minutes, which doesn’t reflect the typical customer’s experience. Median gives you the response time of the middle lead, and that’s the number that predicts close rate.
How do I compare stores that get very different lead volumes?
Use contact-rate-60: the percentage of internet leads that got a live phone call within 60 seconds. This ratio normalizes across volume. A store handling 400 leads at 74% and a store handling 90 leads at 68% can be compared fairly because you’re measuring consistency, not raw count.
How often should I publish the scorecard?
Monthly. Weekly data is too volatile because one snowstorm or holiday throws the numbers off. Monthly gives enough volume to be statistically meaningful while keeping GMs accountable on a tight feedback loop.
What belongs on a multi-store lead response scorecard?
Five columns: store name (sorted by performance), total leads received, contact-rate-60, median response time in seconds, and month-over-month trend. Add internet lead close rate as an optional sixth column to connect speed to revenue.
What’s a good contact-rate-60 target?
Top-performing stores hit 80% or above. The industry average sits below 10%. For most dealer groups, setting a floor of 50% and giving underperforming stores 90 days to reach it is a realistic starting target that still produces measurable revenue improvement.
How do I present the data without making GMs defensive?
Lead with the top-performing store. Ask that GM what they’re doing right and let them explain it to the group. The bottom store sees the gap without being singled out. Competitive pressure from peers works better than top-down criticism.
Does cross-store benchmarking actually change behavior?
Yes. Most dealer groups see their worst store improve by 30-50% within 60 to 90 days of starting monthly scorecards. Visibility creates accountability. Nobody wants to be last on a board their peers can see.
Should I include after-hours leads in the scorecard?
Yes, but separate them. Show business-hours contact-rate-60 and after-hours contact-rate-60 as distinct rows. This tells you whether a store has a speed problem during operating hours, a coverage gap after hours, or both.
20 appointments in 30 days
See the live phone demo and how Ringlead turns the internet leads you already have into more booked appointments.
Try the Demo