Alerts App

The Alerts App is your early warning system. It monitors equipment conditions in real time and notifies the right people the moment an asset enters a problematic state – whether that's a pressure threshold breach, a temperature spike, a low fuel level, or a diagnostic fault code. Catch issues before they become failures.


📊 The Dashboard

The Dashboard gives you immediate visibility into all active alert activity across your fleet.

Summary Charts

The top of the Dashboard contains four charts.

Alerts Priority Statistics – A donut chart showing the current count of Low, Medium, and High priority alerts across all your accessible assets. Hover over any segment to see the exact count.

Alerts Status Statistics – A donut chart showing how many alerts are currently Active, Acknowledged, or Snoozed. Use this to assess how well your team is keeping up with alert volume.

Alerts Priority Trend – A line chart showing how High, Medium, and Low priority alert counts have changed over the last 7 days. Hover over any point to see exact counts for a specific date. Use this to identify whether fleet conditions are improving or deteriorating.

Alerts Status Trend – A line chart showing Active, Acknowledged, and Cleared alert counts over the last 7 days. A rising Acknowledged count alongside a rising Active count may indicate your team is aware of issues but unable to resolve them – a signal worth investigating.

Active Alerts List

Below the charts, the Alerts List shows every active alert across all assets you have access to.

What does each column mean?

| Column | Description |
| --- | --- |
| Asset | The equipment generating the alert – click to navigate directly to that asset's detail page |
| Serial Number | Asset serial number |
| Priority | Color-coded badge – High (red), Medium (orange), Low (green) |
| Status | Current alert status – Active or Acknowledged |
| Event | The sensor or condition that triggered the alert |
| Type | Alert or Fault |
| Description | What condition occurred and why the alert fired |
| Created | When the alert was first triggered |
| Acknowledge / Clear | Action icons for managing the alert workflow |
| Add Comment | Log a note against the asset's timeline |

Available tools: Search by asset name, serial number, or status · Filter by priority · Request Data to export the current list to CSV.

Managing Active Alerts

Acknowledging an Alert

Click the 👍 thumbs up icon on an alert row to acknowledge it. This signals to your team that someone is aware of the condition and investigating – preventing duplicate response efforts.

Clearing an Alert

Click the ✕ clear icon on an acknowledged alert to close it once the issue has been resolved. The alert moves to Historical Alerts and is no longer visible on the active Dashboard.

⚠️ Clear alerts only after the underlying issue has been addressed. Clearing an alert does not resolve the equipment condition – it only removes it from the active view.

Adding a Comment

Click the comment icon on any alert row to attach a note to the asset's record. In the dialog, select the note type (Service Technician, Customer, or Distributor) and enter your note.

📌 Comments added from the Alerts Dashboard are permanently recorded in the asset's Timeline tab on the Asset Detail Page – creating a traceable link between the alert event and your response actions.

Navigating to the Asset

Click the asset name in any alert row to go directly to that asset's detail page. The ALERTS tab on the asset will show complete information about all current and historical alerts for that equipment.


📋 Alerts vs. Faults

The platform distinguishes between two types of alert conditions:

| Type | Description |
| --- | --- |
| Alert | A threshold-based condition triggered when a sensor value crosses a configured limit (e.g., temperature exceeds 200°F) |
| Fault | A diagnostic trouble code from the asset's onboard controller – automatically appears on the Dashboard with Medium priority, even without a specific alert configuration |

📌 When a fault is detected, it automatically appears on the Dashboard – no separate alert configuration is required for fault codes.


🚦 Alert Priority Levels

Every alert is assigned a priority that determines both urgency and how the asset's map pin is displayed.

| Priority | Badge Color | Map Pin Color | Meaning |
| --- | --- | --- | --- |
| 🔴 High | Red | Red | Serious condition requiring immediate response – potential equipment damage or safety risk |
| 🟡 Medium | Orange | Yellow | Important issue to be addressed promptly during normal operations |
| 🟢 Low | Green | Green (unchanged) | Informational – monitor but no urgent action required |

💡 If an asset has multiple active alerts simultaneously, the map pin displays the highest priority color across all active conditions.
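The highest-priority-wins rule can be sketched as a small helper. This is illustrative only: the priority names and colors come from the table above, while the ranking logic itself is an assumption about the platform's behavior, not its actual implementation.

```python
# Sketch: choosing a map pin color when an asset has multiple active alerts.
# Priority names and colors follow the table above; the ranking is an
# illustrative assumption.

PIN_COLOR = {"High": "red", "Medium": "yellow", "Low": "green"}
RANK = {"High": 3, "Medium": 2, "Low": 1}

def pin_color(active_alert_priorities):
    """Return the map pin color for the highest-priority active alert."""
    if not active_alert_priorities:
        return "green"  # no active alerts: pin stays in its normal state
    top = max(active_alert_priorities, key=RANK.__getitem__)
    return PIN_COLOR[top]
```

For example, an asset with one Low and one High alert would show a red pin until the High alert is cleared.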


πŸ” Alert Status Lifecycle​

Every alert moves through a defined workflow from trigger to resolution.

StatusMeaningTypical Next Action
πŸ”΄ ActiveAlert has triggered β€” no action taken yetInvestigate and acknowledge
πŸ‘ AcknowledgedA team member has noted the alert and is awareInvestigate and resolve, then clear
βœ… ClearedAlert has been resolved and closedMoves to Historical Alerts

πŸ“Œ Cleared alerts are removed from the active Dashboard and are only visible in the Historical Alerts page.
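The lifecycle above reads as a tiny state machine: acknowledging moves Active to Acknowledged, and clearing moves Acknowledged to Cleared (matching the rule that only acknowledged alerts can be cleared). A minimal sketch, where the function and action names are illustrative assumptions rather than platform APIs:

```python
# Illustrative state machine for the alert lifecycle described above.
# Action names ("acknowledge", "clear") are assumptions for this sketch.

VALID_TRANSITIONS = {
    ("Active", "acknowledge"): "Acknowledged",
    ("Acknowledged", "clear"): "Cleared",
}

def transition(status: str, action: str) -> str:
    """Return the next status, or raise if the action is not allowed."""
    next_status = VALID_TRANSITIONS.get((status, action))
    if next_status is None:
        raise ValueError(f"cannot {action} an alert in state {status}")
    return next_status
```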


πŸ“ Historical Alerts​

Click Historical Alerts in the sidebar to review alerts that have been cleared or have returned to normal operating conditions.

Searching Historical Alerts​

Historical Alerts requires search criteria before results are shown. Apply the following filters to retrieve records:

FilterDescription
Date RangeSet a From and To date. Maximum range is 31 days. Data available up to 13 months back.
StatusFilter by Cleared or Returned to Normal
PriorityFilter by High, Medium, or Low
AssetNarrow to a specific asset

Click GO to retrieve matching records.
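The date-range limits (a 31-day maximum window, data kept roughly 13 months back) can be checked before submitting a search. A hedged sketch: the helper and the 395-day approximation of 13 months are ours, not part of the product.

```python
# Sketch: validating a Historical Alerts search window against the stated
# limits. MAX_WINDOW mirrors the 31-day rule; RETENTION approximates
# 13 months as 395 days, which is an assumption, not a platform constant.
from datetime import date, timedelta

MAX_WINDOW = timedelta(days=31)
RETENTION = timedelta(days=395)  # ~13 months

def validate_range(start: date, end: date, today: date) -> None:
    if end < start:
        raise ValueError("'To' date must not be before 'From' date")
    if end - start > MAX_WINDOW:
        raise ValueError("date range exceeds the 31-day maximum")
    if today - start > RETENTION:
        raise ValueError("data is only available about 13 months back")
```

To scan a longer period than 31 days, run several consecutive searches and combine the exported CSVs.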

Historical Alerts Table

What does each column mean?

| Column | Description |
| --- | --- |
| Asset | Equipment that generated the alert |
| Serial Number | Asset serial number |
| Priority | Alert priority at time of trigger |
| Event | The condition that triggered the alert |
| Type | Alert or Fault |
| Status | Cleared or Returned to Normal |
| Created | When the alert originally fired |
| Cleared / Resolved | When it was closed |

Click any row to view the full alert chronology – when it became active, who was notified, when it was acknowledged, and when it was cleared or returned to normal.


βš™οΈ Alert Configurations​

Alert Configurations are the rules that define what conditions trigger alerts, which assets they apply to, and who gets notified. Click Configuration in the sidebar to view and manage all configurations.

Configuration List​

Each configuration in the list shows its name, the number of assets assigned to it, and the number of trigger conditions it contains.

Creating an Alert Configuration​

Click βž• on the Configuration list to create a new alert rule. Setup involves four steps.

Step 1 β€” Name and Notifications

What does each field mean?
FieldDescription
Configuration NameA clear name describing what condition this configuration monitors (e.g., "High Discharge Pressure β€” Plant A")
Notification ContactsEmail addresses and platform users who receive notifications when the alert fires. Type @ to select platform users or enter any external email. SMS notification is also available.

πŸ“Œ Tip: If you include the word "fault" in the configuration name or the sensor name, the platform will classify resulting alerts as Faults β€” which are displayed and reported separately from standard threshold alerts.
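The tip above implies a simple naming rule. A sketch of how that classification might look, assuming a plain substring check – the function is illustrative, not a platform API:

```python
# Illustrative classification: alerts from configurations or sensors whose
# name contains "fault" are treated as Faults; everything else is an Alert.
# The case-insensitive substring check is an assumption based on the tip above.

def classify(configuration_name: str, sensor_name: str) -> str:
    combined = f"{configuration_name} {sensor_name}".lower()
    return "Fault" if "fault" in combined else "Alert"
```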

Step 2 – Assign Assets

Select the assets this configuration applies to from the dropdown. When any assigned asset meets a trigger condition, it enters an alerting state and appears on the Dashboard.

⚠️ An alert configuration must have at least one asset assigned. The configuration will not activate without an associated asset.

Step 3 – Configure Triggers

Triggers define the specific conditions that cause alerts to fire. A single configuration can have multiple triggers, allowing one set of assets to be monitored for several different conditions simultaneously.

⚠️ Do not create multiple triggers for the same sensor within a single configuration. If you need additional trigger conditions for the same sensor, create a separate alert configuration.

For each trigger, configure the following:

What does each field mean?

| Field | Description |
| --- | --- |
| Sensor Group | The sensor group on the selected assets to monitor |
| Sensor | The specific data point to watch (e.g., DischargePressure, FuelLevel, CoolantTemp) |
| Condition | The comparison operator – equals, does not equal, less than, greater than, greater than or equal to, less than or equal to |
| Threshold Value | The value at which the condition triggers the alert |
| Priority | High, Medium, or Low – determines map pin color and notification urgency |
| Time Delay | How long the condition must persist before the alert fires (see Data Types section below) |
| Auto Clear | Whether the alert clears automatically when the condition returns to normal – choose Immediate or after a 24-hour delay |
| Notify on Return | Whether to send a notification when the asset returns to a normal state |
| Description | A plain-language explanation of what this trigger monitors and why |
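The Condition operators in the table map directly onto standard comparisons. A minimal sketch of evaluating one trigger against a sensor reading; the evaluator itself is an assumption for illustration, not the platform's implementation:

```python
# Illustrative evaluation of a trigger condition. The operator labels match
# the Condition row above; the evaluator function is a sketch only.
import operator

CONDITIONS = {
    "equals": operator.eq,
    "does not equal": operator.ne,
    "less than": operator.lt,
    "greater than": operator.gt,
    "greater than or equal to": operator.ge,
    "less than or equal to": operator.le,
}

def condition_met(value, condition, threshold):
    """True when the sensor value satisfies the configured comparison."""
    return CONDITIONS[condition](value, threshold)
```

For instance, a "greater than 200" trigger on CoolantTemp is met by a reading of 210 but not by 195.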

Step 4 – Save

Submit the configuration. Alert rules are immediately active and begin monitoring all assigned assets.

Understanding Data Types and Time Delays

Alert behavior depends on whether the incoming sensor data is periodic or change-of-state. Setting the right time delay for each type prevents false alerts while ensuring genuine problems are caught quickly.

| Data Type | How It Works | Recommended Delay Approach |
| --- | --- | --- |
| Periodic | Operational snapshots sent on a fixed schedule – typically every 15 minutes. Examples: run hours, RPM, coolant temperature, pressure readings. | Set a delay of at least 15 minutes so the system can receive and confirm the next scheduled reading before firing an alert. A momentary anomaly in one reading will not trigger a notification. |
| Change of State | Transmitted immediately when a status changes – engine on/off, switch positions, diagnostic fault codes. | Use a short delay (a few minutes) to prevent excessive notifications if the condition toggles rapidly around its threshold. |

💡 A well-calibrated time delay reduces alert fatigue – the condition where too many non-critical notifications cause teams to begin ignoring alerts. Longer delays reduce false positives; shorter delays provide faster awareness. Tune based on the criticality of the monitored condition.
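The delay behavior described above can be modeled simply: start a timer when a reading breaches the threshold, reset it when a reading comes back in range, and fire only once the breach has persisted for the full delay. A sketch under those assumptions (a "value below threshold" breach and minute-based timestamps), not the platform's actual algorithm:

```python
# Illustrative time-delay gating for a periodic "value below threshold"
# trigger. readings: list of (minute, value) pairs in chronological order.

def should_fire(readings, threshold, delay_minutes):
    breach_started = None
    for minute, value in readings:
        if value < threshold:
            if breach_started is None:
                breach_started = minute  # first out-of-range reading
            if minute - breach_started >= delay_minutes:
                return True  # condition persisted for the full delay
        else:
            breach_started = None  # back in range: reset the delay timer
    return False
```

With 15-minute periodic data and a 20-minute delay, a single low reading never fires on its own – the alert needs a confirming out-of-range reading, which is exactly why the table recommends a delay of at least one transmission interval.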

Editing an Alert Configuration

Open any configuration from the Configuration list and click EDIT. All fields – name, notifications, assets, and triggers – can be updated. Changes take effect immediately after saving.

Deleting an Alert Configuration

Click the delete icon on the configuration's row in the Configuration list. Deleting a configuration removes all associated alert rules and notification subscriptions. Historical alert records for assets under this configuration are preserved.


🔄 Common Workflows

Workflow 1 – Daily Alert Review

  1. Open the Alerts Dashboard.
  2. Review the Priority Statistics chart – note any High priority alerts.
  3. Check the Priority Trend – is alert volume rising or falling over the past 7 days?
  4. Work through the Active Alerts list, starting with High priority items.
  5. Acknowledge each alert you are investigating.
  6. Navigate to the asset detail page for any alert requiring deeper inspection.
  7. Clear alerts once the underlying condition has been resolved.

✅ Result: Your team has a clear, shared understanding of which alerts are being handled and which still require attention – preventing duplicate effort and missed issues.


Workflow 2 – Investigating and Closing an Alert

  1. Identify the alert in the Dashboard list.
  2. Click the 👍 Acknowledge icon to signal you are investigating.
  3. Click the asset name to navigate to the Asset Detail Page.
  4. Open the ALERTS tab to review the full alert history for this asset.
  5. Investigate the condition using the SENSORS and TIMELINE tabs.
  6. Add a comment documenting your findings and actions taken.
  7. Return to the Dashboard and click ✕ Clear once the issue is resolved.

✅ Result: A complete audit trail from alert trigger through investigation to resolution, all linked to the asset's permanent record.


Workflow 3 – Setting Up a New Alert Configuration

  1. Create a new alert configuration:
    • Name it to clearly describe the monitored condition.
    • Add notification contacts (users and/or external emails).
    • Assign the relevant assets.
    • Add triggers:
      • Select sensor group and sensor.
      • Set condition type and threshold value.
      • Set priority (High / Medium / Low).
      • Configure time delay appropriate for the data type.
      • Set auto-clear behavior.
    • Save.

✅ Result: The platform begins monitoring all assigned assets against the configured conditions and sends immediate notifications when any threshold is crossed.


Workflow 4 – Reviewing Historical Alert Patterns

  1. Click Historical Alerts in the sidebar.
  2. Set a date range (up to 31 days at a time, up to 13 months back).
  3. Filter by a specific asset, priority, or status if needed.
  4. Click GO to retrieve records.
  5. Look for assets or conditions that appear frequently – repeated alerts on the same asset for the same condition may indicate a maintenance issue or a miscalibrated threshold.
  6. Export to CSV using Request Data for trend analysis or management reporting.

✅ Result: Pattern analysis across historical data identifies root causes rather than repeatedly responding to symptoms.


✅ Best Practices

  • 🔔 Check the Dashboard at the start of every shift. Unacknowledged alerts accumulate quickly on active fleets. Regular review prevents High priority items from aging unnoticed.

  • 👍 Acknowledge before investigating. Acknowledging an alert as soon as you begin looking into it signals to the rest of your team that it is being handled – preventing two people from independently investigating the same issue.

  • 📝 Use comments to document your response. Notes added from the Alerts Dashboard are recorded permanently in the asset's Timeline. Future team members – and future you – will benefit from knowing what was done and why.

  • ⏱️ Set time delays thoughtfully. A delay that is too short generates false alerts from momentary fluctuations. A delay that is too long may let a genuine problem go unnoticed. Match the delay to the data transmission interval and the criticality of the condition.

  • 🎯 Reserve High priority for genuine emergencies. If too many conditions are flagged High, teams begin to treat all alerts with the same level of urgency – which is none. High priority should mean "respond now, regardless of time."

  • 📊 Export and review monthly. Regular exports of alert data allow you to identify recurring conditions, track fleet health trends over time, and make a data-driven case for maintenance investment or equipment replacement.


💡 Tips & Shortcuts

| Tip | How |
| --- | --- |
| Jump to the asset from an alert | Click the asset name in any alert row to go directly to the Asset Detail Page |
| Export active alerts | Click Request Data on the Dashboard to download the current alert list to CSV |
| Search for a specific alert | Use the search field on the Dashboard to filter by asset name, serial number, or status |
| View full historical detail | Click any row in Historical Alerts to see the complete alert chronology |
| Keep the Dashboard clean | Clear alerts promptly once resolved – a cluttered Dashboard makes it harder to spot new issues |


📖 Alert Configuration Examples

Alerts depend on the type of data you receive:

  • Periodic data – operational snapshots sent on schedule (typically every 15 min). Examples: Engine Run Hours, RPM, Coolant Temp.
  • Change-of-state data – status changes transmitted immediately. Examples: engine on/off, low fuel switch, J1939 faults.

The delay workflow prevents excessive notifications from conditions that toggle around their threshold.

Example 1: Low-Fuel Alert (Periodic Data)

  • Config: If Fuel Level ≤ 20% for 0 minutes → High priority, auto-clear after 24h.
  • Fuel level reports 19% – below threshold. With a zero-minute delay, the alert appears immediately.

Example 2: Low-Temperature Alert (Periodic Data)

  • Config: If Fluid Temp < 10°C for 20 minutes → Medium priority, auto-clear after 24h.
  • 9°C is reported; the next value arrives in roughly 15 minutes. If that next report is above 10°C, no alert fires. If it is still below threshold (e.g., 8°C), the alert fires at the 20-minute mark.

Example 3: High-Pressure Alert (Change-of-State)

  • Config: If High Pressure Alarm = active for 5 minutes → High priority, auto-clear after 24h.
  • The 5-minute delay prevents notification floods from a toggling condition. If an inactive message arrives during the delay, the pending alert is cancelled. If the condition is still active when the delay expires, the alert fires.
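Example 3's cancellation behavior can be sketched for change-of-state data: an alert becomes pending when the condition goes active, an inactive message during the delay cancels it, and it fires once the active state has persisted for the full delay. Illustrative only, under the same minute-based assumptions as the examples above:

```python
# Illustrative pending/cancel logic for a change-of-state trigger, following
# Example 3 above. events: chronological (minute, state) pairs, where state
# is "active" or "inactive".

def pending_alert_fires(events, delay_minutes):
    pending_since = None
    for minute, state in events:
        if state == "active":
            if pending_since is None:
                pending_since = minute  # start the delay window
        else:
            pending_since = None  # inactive arrived: cancel the pending alert
        if pending_since is not None and minute - pending_since >= delay_minutes:
            return True  # condition stayed active for the full delay
    return False
```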