
What good looks like: The maintenance teams that stay ahead of winter conduct a simple vulnerability assessment before the season starts. They pull CMMS reports showing which units had multiple service calls last winter, which units are over ten years old, and which locations skipped scheduled PM cycles. This becomes their preparation target list.
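A minimal sketch of that pull in Python, assuming the CMMS export carries fields like the ones below (field names and thresholds are illustrative, not a specific CMMS schema):

```python
from datetime import date

# Minimal sketch: filter a CMMS export down to a winter preparation target list.
# Field names and thresholds are illustrative assumptions, not a specific CMMS schema.
def build_target_list(units, season_start=date(2024, 11, 1)):
    targets = []
    for u in units:
        repeat_offender = u["service_calls_last_winter"] >= 2   # multiple calls last winter
        aging = (season_start.year - u["install_year"]) > 10    # over ten years old
        missed_pm = u["pm_cycles_missed"] > 0                   # skipped scheduled PM
        if repeat_offender or aging or missed_pm:
            targets.append(u["unit_id"])
    return targets

units = [
    {"unit_id": "RTU-101", "service_calls_last_winter": 3, "install_year": 2016, "pm_cycles_missed": 0},
    {"unit_id": "RTU-102", "service_calls_last_winter": 0, "install_year": 2021, "pm_cycles_missed": 0},
    {"unit_id": "RTU-103", "service_calls_last_winter": 1, "install_year": 2009, "pm_cycles_missed": 2},
]
print(build_target_list(units))  # ['RTU-101', 'RTU-103']
```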
What good looks like: The best operations create a simple priority matrix before winter arrives. Tier 1 sites get responses within 2 hours. Tier 2 sites get responses within 4 hours. Tier 3 sites get responses within 8 hours. The criteria for each tier are documented and understood by coordinators and contractors alike.
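The matrix itself is small enough to live as a lookup table. A sketch, using the response hours above and a hypothetical site-to-tier mapping:

```python
# Sketch of the tier matrix as a lookup table. The response hours come from the
# text; the site-to-tier assignments are hypothetical examples.
RESPONSE_SLA_HOURS = {1: 2, 2: 4, 3: 8}

SITE_TIERS = {"Store 114": 1, "Store 205": 2, "Store 310": 3}

def required_response_hours(site):
    return RESPONSE_SLA_HOURS[SITE_TIERS[site]]

print(required_response_hours("Store 114"))  # 2
```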
What good looks like: Operations managing winter effectively review their service history from the previous two years and identify the ten most common failure points across their HVAC equipment. They purchase strategic inventory of these components and stage them at high-priority locations or regional hubs where contractors can access them quickly.
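Ranking failure points is a simple tally over closed work orders. A sketch, assuming each record carries a consistent failure-point code (the component names are hypothetical):

```python
from collections import Counter

# Sketch: rank failure points across two winters of closed work orders.
# "failure_point" is an assumed field; the component names are hypothetical.
work_orders = [
    {"failure_point": "inducer motor"}, {"failure_point": "ignitor"},
    {"failure_point": "inducer motor"}, {"failure_point": "condensate trap"},
    {"failure_point": "ignitor"}, {"failure_point": "inducer motor"},
]
top_failures = Counter(wo["failure_point"] for wo in work_orders).most_common(10)
print(top_failures)  # [('inducer motor', 3), ('ignitor', 2), ('condensate trap', 1)]
```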
What good looks like: The operations that document weather impact effectively have simple data capture built into their work order closure process. When coordinators or contractors close winter service work orders, they document relevant conditions in standardized fields: travel delays (Y/N and duration), hazardous access conditions (Y/N and description), extreme weather impact on repair execution (Y/N and specifics).
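Those fields translate directly into a record structure. A sketch of the closure form as data, with illustrative field names rather than any specific CMMS configuration:

```python
from dataclasses import dataclass
from typing import Optional

# Sketch of the standardized weather fields as a record; field names are
# illustrative, not a specific CMMS configuration.
@dataclass
class WinterClosureFields:
    travel_delay: bool
    travel_delay_minutes: Optional[int]       # filled in when travel_delay is True
    hazardous_access: bool
    hazardous_access_notes: Optional[str]     # filled in when hazardous_access is True
    weather_impacted_repair: bool
    weather_impact_details: Optional[str]     # filled in when weather_impacted_repair is True

record = WinterClosureFields(
    travel_delay=True, travel_delay_minutes=45,
    hazardous_access=False, hazardous_access_notes=None,
    weather_impacted_repair=True,
    weather_impact_details="Rooftop iced over; repair finished after de-icing",
)
```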
What good looks like: The best operations don't ask "should we invest in winter preparation?" They ask "what will happen if we don't?" They calculate last year's reactive costs, project similar patterns for the upcoming season, and demonstrate that preparation spending avoids significantly larger emergency spending.
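The arithmetic is deliberately simple. A worked sketch, with hypothetical placeholder figures:

```python
# Worked example of the "what will happen if we don't" framing.
# All dollar figures and the reduction rate are hypothetical placeholders.
last_year_reactive_cost = 180_000   # emergency calls, overtime, expedited parts
expected_reduction = 0.60           # assumed share of emergencies preparation avoids
preparation_budget = 45_000         # assessments, repairs, staged inventory

avoided_cost = last_year_reactive_cost * expected_reduction
net_benefit = avoided_cost - preparation_budget
print(f"Avoided: ${avoided_cost:,.0f}; net benefit: ${net_benefit:,.0f}")
# Avoided: $108,000; net benefit: $63,000
```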
What good looks like: Operations with robust winter protocols can hand someone their documented procedures and have them managing emergency situations within days, not after surviving a full season. The protocols specify decision authority at each level, escalation paths when normal processes break down, and criteria for making judgment calls when perfect information isn't available.
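Decision authority and escalation paths can be written down as data rather than tribal knowledge. A sketch, with hypothetical roles, dollar limits, and timeouts:

```python
# Sketch of decision authority written down as data rather than tribal knowledge.
# Roles, dollar limits, and escalation timeouts are hypothetical.
ESCALATION_PATH = [
    {"level": "coordinator",      "approval_limit_usd": 2_500,  "escalate_after_min": 30},
    {"level": "regional manager", "approval_limit_usd": 10_000, "escalate_after_min": 60},
    {"level": "director",         "approval_limit_usd": None,   "escalate_after_min": None},
]

def authority_for(cost_estimate):
    for step in ESCALATION_PATH:
        limit = step["approval_limit_usd"]
        if limit is None or cost_estimate <= limit:
            return step["level"]

print(authority_for(6_000))  # regional manager
```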

Gap identified: Interview mentioned "get Jim to go check them out" but didn't specify which contractor (Jim's company name not provided), what the assessment should include beyond basic inspection, or how findings get documented and prioritized for repair vs. monitor vs. replace decisions.
Action needed: Define complete assessment scope, documentation requirements, and decision criteria for action items coming out of assessments.


Metric to track: Pre-season assessment completion rate, tracked weekly starting October 1st. If you're under 50% complete by October 15th, you know immediately that you'll miss the November 1st deadline. That gives you two weeks to add contractor resources or adjust scope.
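The checkpoint itself is one line of arithmetic. A sketch of the October 15th check, with hypothetical counts:

```python
# Sketch of the October 15th checkpoint; the counts are hypothetical.
def checkpoint_ok(completed, total):
    return completed / total >= 0.50   # under 50% here puts the November 1st deadline at risk

completed, total = 50, 120
if not checkpoint_ok(completed, total):
    print(f"{completed}/{total} assessed: behind pace, add contractor capacity or trim scope")
```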
Metrics to track: Service calls per location, cost per location, repeat call rate by location. This identifies your problem children—the 20 locations generating 60% of your service calls. Those locations need equipment replacement, not more repairs.
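Finding the problem children is a cumulative sort over the call tallies. A sketch using the 60%-of-calls cutoff from the text and hypothetical counts:

```python
from collections import Counter

# Sketch: find the small set of locations driving most service calls.
# The call counts per location are hypothetical.
calls_by_location = Counter({"Store 12": 18, "Store 7": 15, "Store 31": 11,
                             "Store 4": 3, "Store 19": 2, "Store 22": 1})
total = sum(calls_by_location.values())

running, problem_children = 0, []
for location, calls in calls_by_location.most_common():
    problem_children.append(location)
    running += calls
    if running / total >= 0.60:   # the 60%-of-calls cutoff from the text
        break
print(problem_children)  # ['Store 12', 'Store 7']
```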
Metrics to track: Contractor response time by priority tier, SLA compliance rate, time-stamped work order data. Objective data settles disputes. Either contractors are meeting response times (store expectations need adjustment) or they're not (contractor performance needs addressing).
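Compliance falls straight out of the timestamps. A sketch with hypothetical work orders, using the tier hours from the priority matrix:

```python
from datetime import datetime

# Sketch: compute tiered SLA compliance from time-stamped work orders.
# The work orders below are hypothetical; tier hours match the priority matrix.
SLA_HOURS = {1: 2, 2: 4, 3: 8}

work_orders = [
    {"tier": 1, "opened": datetime(2024, 1, 8, 6, 0),  "on_site": datetime(2024, 1, 8, 9, 10)},
    {"tier": 2, "opened": datetime(2024, 1, 8, 7, 30), "on_site": datetime(2024, 1, 8, 11, 0)},
]

def compliance_rate(orders):
    met = sum(
        (wo["on_site"] - wo["opened"]).total_seconds() / 3600 <= SLA_HOURS[wo["tier"]]
        for wo in orders
    )
    return met / len(orders)

print(f"{compliance_rate(work_orders):.0%}")  # 50%
```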
Metrics: Pre-season assessment completion rate, failure rate in assessed vs. non-assessed equipment. These metrics prove whether your vulnerability identification is working.
Metrics: Contractor response time by priority tier, time from failure to restoration by priority level. These metrics show whether priority decisions translate to differentiated service delivery.
Metrics: Parts availability rate, first-time fix rate. These metrics reveal whether your inventory strategy actually supports rapid repairs.
Metrics: Documentation completion rate for weather factors, percentage of delayed service with documented weather justification. These metrics show whether your team captures context systematically.
Metrics: Emergency service cost percentage, total cost per location year-over-year trend, failure rate by equipment age. These metrics prove (or disprove) preparation ROI.
Metrics: Coordinator decision time on priority calls, escalation frequency, SLA compliance rate. These metrics reveal whether your documented processes work under pressure.
Action taken: Immediately contracted with a backup provider to complete the remaining 30 assessments. Split the list by geographic proximity to minimize contractor travel time. Both contractors finished the remaining work by November 5th, one week late but still ahead of serious cold weather.
Impact: Avoided entering winter with the majority of vulnerable equipment unassessed. The 12 units that were assessed and repaired had zero failures during winter. The 30 units assessed late by the backup contractor had two failures, both minor repairs handled through scheduled service. Success came from knowing in mid-October that the plan wasn't working, not discovering it in November when weather would have eliminated the options.
Action taken: Presented the data to the contractor in a mid-December performance review. The contractor claimed the times were skewed by a few extreme weather days; the data showed the problem was consistent across all conditions. Gave the contractor two weeks to demonstrate improvement or the contract would be terminated mid-season. The contractor added a dedicated technician for this account, and response times improved to a 2.2-hour average for Tier 1 and 4.5 hours for Tier 2 by early January. Still slightly over SLA, but an acceptable improvement trajectory.
Impact: Avoided a full season of poor performance by addressing the problem mid-season with objective data. Stores noticed the faster response. More importantly, the coordinator had documented performance data to support contract renegotiation for next season: they negotiated 12% lower rates based on the mid-season performance issues, even though the contractor had improved.
Action taken: Built a business case for an equipment replacement program focused on these eight highest-cost locations. The ROI calculation showed a payback period of 4-5 years based on eliminating chronic emergency repair costs, improving energy efficiency, and reducing operational disruption. Secured capital funding to replace six of the eight units before the next winter (a budget limitation); the two remaining units were scheduled for replacement in Year 2.
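The payback arithmetic behind that business case is straightforward. A sketch with hypothetical per-unit figures (the actual numbers are not given here) that lands in the 4-to-5-year range described:

```python
# Hypothetical per-unit figures illustrating the payback arithmetic; the actual
# business case numbers are not given in this section.
install_cost = 22_000             # replacement unit, installed
annual_emergency_savings = 3_800  # chronic emergency repairs eliminated
annual_energy_savings = 1_200     # efficiency gain from new equipment

payback_years = install_cost / (annual_emergency_savings + annual_energy_savings)
print(f"{payback_years:.1f} years")  # 4.4 years
```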
Impact: The six locations with new equipment had zero emergency service calls the following winter, saving approximately $25,000 in emergency repair costs in Year 1 alone. The two locations with old equipment still generated five emergency calls between them, reinforcing that replacement was the right solution.