MTTA MTTR


This page offers a comprehensive view of critical incident management metrics, allowing users to monitor, analyze, and improve operational performance. The metrics displayed include Mean Time to Acknowledge (MTTA) and Mean Time to Resolve (MTTR), alongside other related statistics and trends, providing actionable insights into incident response efficiency.

Components

1. Filters Section

The Filters section enables users to customize the data view based on specific criteria, ensuring that the metrics presented are relevant to their focus. A minimal filtering sketch follows this list.

  • Teams:

    • A dropdown menu allows users to filter the metrics by a specific team.

    • The default selection is "All Teams," which provides a comprehensive view across all teams.

  • Services:

    • This dropdown narrows the metrics down to incidents related to a specific service.

    • The default option shows metrics for all services.

  • Date Range:

    • Calendar pickers let users specify the start and end dates for the analysis period.
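
For illustration, the sketch below shows the kind of filtering these controls apply, using a hypothetical in-memory list of incidents. The field names (team, service, reported_at, and so on) are assumptions for the example, not the Temperstack data model or API.

```python
from datetime import datetime, timezone

# Hypothetical incident records; field names are assumptions, not Temperstack's schema.
incidents = [
    {"team": "Payments", "service": "checkout-api",
     "reported_at": datetime(2024, 6, 3, 10, 0, tzinfo=timezone.utc),
     "acknowledged_at": datetime(2024, 6, 3, 10, 4, tzinfo=timezone.utc),
     "resolved_at": datetime(2024, 6, 3, 11, 30, tzinfo=timezone.utc)},
    {"team": "Platform", "service": "auth-service",
     "reported_at": datetime(2024, 6, 5, 9, 0, tzinfo=timezone.utc),
     "acknowledged_at": datetime(2024, 6, 5, 9, 20, tzinfo=timezone.utc),
     "resolved_at": datetime(2024, 6, 5, 12, 0, tzinfo=timezone.utc)},
]

def filter_incidents(records, team=None, service=None, start=None, end=None):
    """Keep incidents matching the selected team, service, and date range.
    None means 'All', mirroring the dashboard's default selections."""
    kept = []
    for inc in records:
        if team and inc["team"] != team:
            continue
        if service and inc["service"] != service:
            continue
        if start and inc["reported_at"] < start:
            continue
        if end and inc["reported_at"] > end:
            continue
        kept.append(inc)
    return kept

# Example: Payments incidents reported in the first week of June 2024.
selected = filter_incidents(
    incidents, team="Payments",
    start=datetime(2024, 6, 1, tzinfo=timezone.utc),
    end=datetime(2024, 6, 7, tzinfo=timezone.utc))
print(len(selected))  # -> 1
```

Leaving a filter unset corresponds to the "All Teams" / "All Services" defaults described above.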

2. Statistics

The statistics section provides an overview of key metrics in an easy-to-read format, with comparisons to previous periods for trend analysis.

Metrics Displayed (a minimal computation sketch follows this list):

  1. Incidents:

  • Displays the total number of incidents recorded in the selected time frame.

  • Includes a percentage indicator showing how the count has changed compared to the previous period (e.g., "79% lower").

  • Purpose: Helps users understand the incident volume and track reductions over time.

  2. MTTA (Mean Time to Acknowledge):

  • Represents the average time taken to acknowledge an incident after it is reported.

  • Includes a percentage change indicator to highlight improvements or declines (e.g., "89% lower").

  • Purpose: Tracks responsiveness to incoming incidents.

  3. 95th Percentile TTA (Time to Acknowledge):

  • Displays the time within which 95% of incidents were acknowledged, indicating performance consistency.

  • Includes a percentage change indicator to show improvements or declines compared to the previous period (e.g., "96% lower").

  • Purpose: Helps evaluate acknowledgment efficiency, particularly for high-severity incidents or outliers.

  4. MTTR (Mean Time to Resolve):

  • Represents the average time taken to resolve incidents within the selected period.

  • Highlights the percentage improvement or decline compared to the previous period (e.g., "99% lower").

  • Purpose: Provides insights into the efficiency of resolution processes.

  5. 95th Percentile TTR (Time to Resolve):

  • Shows the resolution time for the 95th percentile of incidents, highlighting outliers or particularly challenging cases.

  • Includes a percentage indicator to show changes compared to the previous period (e.g., "99% lower").

  • Purpose: Helps identify the efficiency of resolving high-severity or complex incidents.
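
As a rough illustration of how these headline figures relate to the underlying incidents, the sketch below computes MTTA and MTTR as simple means of per-incident durations, the 95th percentile values with a nearest-rank percentile, and the percentage indicator as the change versus an assumed previous period. The sample numbers are invented and the formulas are a plausible reading of the metrics, not Temperstack's implementation.

```python
from math import ceil

def percentile(values, pct):
    """Nearest-rank percentile: the value at or below which pct% of samples fall."""
    ordered = sorted(values)
    rank = max(ceil(pct / 100.0 * len(ordered)), 1)
    return ordered[rank - 1]

def percent_change(current, previous):
    """Change versus the previous period; negative values read as 'lower'."""
    return (current - previous) / previous * 100.0

# Invented per-incident durations (in minutes) for the selected period.
tta_minutes = [4, 6, 3, 12, 5]       # time to acknowledge each incident
ttr_minutes = [90, 45, 60, 240, 75]  # time to resolve each incident

mtta = sum(tta_minutes) / len(tta_minutes)   # Mean Time to Acknowledge
mttr = sum(ttr_minutes) / len(ttr_minutes)   # Mean Time to Resolve
tta_p95 = percentile(tta_minutes, 95)        # 95th percentile TTA
ttr_p95 = percentile(ttr_minutes, 95)        # 95th percentile TTR

# Percentage indicator shown next to each statistic, e.g. "89% lower".
previous_mtta = 55.0                         # assumed MTTA from the prior period
change = percent_change(mtta, previous_mtta)
direction = "lower" if change < 0 else "higher"

print(f"Incidents: {len(tta_minutes)}")
print(f"MTTA: {mtta:.1f} min ({abs(change):.0f}% {direction})")
print(f"95th percentile TTA: {tta_p95} min")
print(f"MTTR: {mttr:.1f} min, 95th percentile TTR: {ttr_p95} min")
```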

3. Trend Chart

The trend chart provides a visual representation of MTTA and MTTR metrics over time, making it easier to identify patterns, trends, and outliers. A minimal per-day aggregation sketch follows the graph details below.

  • Graph Details:

    • X-Axis: Represents the dates within the selected time range.

    • Y-Axis: Displays time in minutes for both acknowledgement and resolution.

    • Data Lines:

      • Two distinct lines indicate MTTA and MTTR metrics.

      • Lines are color-coded for clarity (e.g., orange for MTTA and purple for MTTR).
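
To make the chart's structure concrete, the sketch below aggregates hypothetical per-incident samples into the per-day MTTA and MTTR series such a chart would plot. It is only an illustration of the aggregation, not how the dashboard itself builds the graph.

```python
from collections import defaultdict
from datetime import date

# Hypothetical (reported date, TTA minutes, TTR minutes) samples; the chart
# plots one MTTA point and one MTTR point per day in the selected range.
samples = [
    (date(2024, 6, 1), 4, 90),
    (date(2024, 6, 1), 8, 120),
    (date(2024, 6, 2), 3, 45),
    (date(2024, 6, 3), 10, 200),
]

by_day = defaultdict(lambda: {"tta": [], "ttr": []})
for day, tta, ttr in samples:
    by_day[day]["tta"].append(tta)
    by_day[day]["ttr"].append(ttr)

# X-axis: date; Y-axis: minutes; two series: daily MTTA (orange) and MTTR (purple).
for day in sorted(by_day):
    mtta = sum(by_day[day]["tta"]) / len(by_day[day]["tta"])
    mttr = sum(by_day[day]["ttr"]) / len(by_day[day]["ttr"])
    print(f"{day}  MTTA={mtta:.1f} min  MTTR={mttr:.1f} min")
```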

This dashboard is an essential tool for teams managing incidents, helping them improve response times, optimize resources, and enhance overall service reliability.

MTTA MTTR - Dashboard