Troubleshoot an Activity to Find a Common Thread (Commonalities Analysis for an Activity)

The Commonalities Analysis dashboard (for a single activity) performs automatic, intelligent troubleshooting to find the common elements of a seemingly random problem with that activity. It checks hundreds of possible culprits (such as location, time of day, or laptop model) and displays only the highest concentrations of poor performers. This is a powerful tool for isolating a problem and offering clues to resolving it.


There are two types of Commonalities Analysis dashboards: one for troubleshooting the overall performance of an application, and one for investigating a single user activity within an application. This article covers the dashboard for investigating a single user activity.

For example, if you receive random reports of slow response times from some users on a specific activity, you would usually cycle through endless possible attributes, looking for the common thread of the problem so you can troubleshoot it. Perhaps the affected users all use a six-inch tablet, or perhaps it happens only when people connect to a particular server, or between 5 and 7 AM on weekdays, or only on computers with more than four CPU cores, and so on. Commonalities Analysis checks all of this automatically in seconds and lists the most obvious contenders in a simple, intuitive view, so that you can troubleshoot further.


This dashboard is most effective at isolating a single thread common to all occurrences of a problem. However, if your issue is a complex combination of narrow criteria (for example, it happens only in Munich, on Lenovo laptops with 8-core CPUs, during peak hours), looking at any one of these elements may not be enough to show a significant change in performance. To investigate complex patterns of data correlations, use the Analyze dashboards.

The Commonalities Analysis dashboard (for an activity)

Look for entries whose measurement (horizontal bar) departs significantly from the average value (vertical bar), either in the Performance Score column (for acute problems) or in the Activity Response column (for chronic problems).


Automatic correlations still require a human to discard senseless or obvious suggestions. Any dependent attributes should be discounted.

For example, if the Legal department opens Word documents much more slowly than other departments, it may simply be that contracts are much larger than other documents in the company. See the step below on removing obvious attributes.

Isolate the most likely common themes

For example, if several entries show much slower performance than others, and they refer to a location or to time slots in the day, focus your troubleshooting on what may be happening at that site or at that time that could cause a slowdown in performance. Alternatively, if a specific department's devices respond more slowly to an activity, focus your troubleshooting there.

For chronic (long-term) problems which have been continuing for some time, sort the criteria by their slowest response times (Sort By > Activity Response in the top bar). For recent problems, use the Performance Score. For more information, see below.

After you find a likely culprit, you can view the users most impacted by the slow performance and (if relevant) drill down to see more information about a specific user's device.


  1. Step 1 Open a browser and sign in to Aternity.
  2. Step 2 You can access the Commonalities Analysis dashboard for activities by drilling down on an activity with poor performance from any of the following dashboards:
  3. Step 3 For acute (recent) problems, select Sort By > Score in the top bar. For chronic (long-term) problems, select Sort By > Activity Response.

    If you suspect that a particular part of the response time is the culprit, you can sort by client time, network time or server time. For example, if you are looking for activities suffering from network congestion, select Sort By > Network Time.

    Sort depending on chronic or acute problem

    Use the score to measure short-term (acute) sudden changes in performance, as scores rely on recent baseline measurements. A recent change is reflected in the score because the response time differs significantly from the established baseline response times. For example, if a mail usually opens in 1.5s (the baseline response time), the system derives a minor baseline (a small departure from the baseline) and a major baseline (a significant departure). If performance suddenly (acutely) becomes much slower, like 5s, it falls beyond the major baseline, and therefore has a red status with a low score.

    Use the actual response times (not scores) to check the performance of chronic (long-term) problems. You cannot rely on measurements based on the recent baselines, as those responses would have been chronically slow for some time, thereby skewing the baselines to make those times look normal. Continuing the example, if opening a mail has taken 5s for several weeks, the system adjusts its baselines to 5s, so this now looks normal and therefore has a green status with a good score, which is misleading.
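The score logic above can be illustrated with a small sketch. This is not Aternity's actual algorithm; the threshold multipliers and the 1.5s baseline are assumed values, used only to show why an acute slowdown scores red while a chronic one drifts back to green.

```python
def classify_response(response_s, baseline_s, minor_factor=1.5, major_factor=2.0):
    """Return a status based on the departure from the baseline response time."""
    minor = baseline_s * minor_factor   # minor baseline (small departure)
    major = baseline_s * major_factor   # major baseline (significant departure)
    if response_s <= minor:
        return "green"    # normal performance, good score
    if response_s <= major:
        return "yellow"   # minor departure from the baseline
    return "red"          # beyond the major baseline: low score

# A mail that usually opens in 1.5s suddenly takes 5s:
print(classify_response(5.0, 1.5))   # red

# The chronic-problem pitfall: after weeks at 5s the baseline drifts to 5s,
# so the same response time now looks normal:
print(classify_response(5.0, 5.0))   # green, which is misleading
```

The second call shows exactly why chronic problems should be judged by actual response times rather than scores.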

  4. Step 4 Find the elements (attributes) which are common to all instances of poor performance in this activity, by viewing the Attributes section.
    Isolate a common thread in slow performers
    Field Description
    Attribute and Value (columns)

    Displays a common attribute and value. Each row is dedicated to a single value of an attribute, like a specific number of CPU cores, or a specific time slot in the day. For example:

    • The performance on a device with 8 CPU cores would have an Attribute called CPU Cores and a Value of 8.

    • The performance at three o'clock in the morning would have an Attribute called Hour in Day and a Value of 3 AM.

    Total Activities (column)

    Displays the impact of this problem, by showing the total number of times this activity was performed when the Attribute had this Value. For example, if only a handful of people use email on a device with 8 CPU cores, you can determine the urgency for solving this problem accordingly.

    Performance Score

    Displays the average performance activity score of this activity when performed by devices which have this entry's Attribute and Value. For example, it shows the score for anyone who opened mail on a device with 8 CPU cores during the dashboard's timeframe.

    The vertical bar represents the average of this measurement during the dashboard's timeframe, so you can see if this is a mild or severe departure from the norm. In this example, it shows the average regardless of their CPU cores.

    Activity Response

    Displays the average response time of this activity when performed by devices which have this entry's Attribute and Value. The response times of activities are split into client time (dark blue) and the combined server time (light blue) and network time (blue). For example, it could show the average delay time for anyone who opened mail on a device with Windows 7 Enterprise.

    The vertical bar represents the average of this measurement during the dashboard's timeframe, so you can see if this is a mild or severe departure from the norm. In this example, it shows the average regardless of the time of day.
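Conceptually, each row of the Attributes section averages the response times of all activity instances sharing one attribute value, then compares that to the overall average (the vertical bar). A minimal sketch of that aggregation, using hypothetical activity records and illustrative numbers:

```python
from collections import defaultdict

# Hypothetical activity records; each instance carries all of its attributes
# plus the measured response time in seconds. Values are illustrative only.
records = [
    {"CPU Cores": 8, "Hour in Day": "3 AM", "response_s": 6.0},
    {"CPU Cores": 8, "Hour in Day": "10 AM", "response_s": 5.5},
    {"CPU Cores": 4, "Hour in Day": "10 AM", "response_s": 1.5},
    {"CPU Cores": 4, "Hour in Day": "3 AM", "response_s": 1.8},
]

# Overall average response: the vertical bar drawn on every row
overall_avg = sum(r["response_s"] for r in records) / len(records)

# Average response per attribute-value pair: the horizontal bars
groups = defaultdict(list)
for rec in records:
    for attribute, value in rec.items():
        if attribute != "response_s":
            groups[(attribute, str(value))].append(rec["response_s"])

for (attribute, value), times in sorted(groups.items()):
    avg = sum(times) / len(times)
    flag = "  <-- investigate" if avg > overall_avg else ""
    print(f"{attribute} = {value}: {avg:.2f}s over {len(times)} activities{flag}")
```

In this toy data, CPU Cores = 8 stands out clearly, which is the kind of signal the dashboard surfaces automatically.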

  5. Step 5 To manually remove clutter, hide attributes which are dependent on each other or correlations based on only a few examples, using the Attributes and Total Activities drop-down menus at the top of this section of the window.

    For example, if the Tokyo office uses Japanese Windows, and this dashboard shows a strong correlation between slow Japanese Windows and the Tokyo office, you cannot yet say if it is due to the OS or the location, or some other issue, since any problem in that location would obviously be associated with that operating system. So you must find other distinguishing attributes, like looking for that OS in other locations, or focusing on entirely different attributes to determine the culprit.

    Clear some attributes from the list of common elements
    Field Description
    Attributes (drop-down menu)

    Deselect the attributes to hide in the Attributes column, and select Apply.

    For example, if you definitely know that the data center is not the determining factor of the problem, you can remove all data centers listed in the Attributes column by deselecting this item from the menu.

    For a full list of monitored attributes, see View Data Monitored by Aternity.

    Total Activities (drop-down menu)

    Select the minimum number of instances where this activity was performed, which all had this attribute in common.

    For example, if a problem is widely reported for sending mails, dozens or hundreds of times per hour, but your list of attributes is cluttered with items whose Total Activities column is low (like 3, 10, 20, 6 and so on), you know these attributes cannot be the common thread for this problem, because it is much more widespread, and these attributes are only common for a few users. You can remove the clutter by selecting At Least 50, to hide attributes with less than 50 instances of this activity.
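The At Least filter amounts to dropping rows whose instance count falls below a threshold. A minimal sketch, using hypothetical (attribute, value, count) rows:

```python
# Hypothetical rows from the Attributes section:
# (attribute, value, total_activities)
rows = [
    ("CPU Cores", "8", 420),
    ("Hour in Day", "3 AM", 6),
    ("Location", "Munich", 187),
    ("OS", "Windows 7 Enterprise", 12),
]

MIN_ACTIVITIES = 50  # the "At Least 50" menu choice

# Attribute values seen on only a handful of instances cannot explain a
# widespread problem, so hide them.
filtered = [row for row in rows if row[2] >= MIN_ACTIVITIES]
for attribute, value, count in filtered:
    print(f"{attribute} = {value} ({count} activities)")
```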

  6. Step 6 To view the impacted users of a single common attribute (like all those performing this activity on Windows 10), select that row from the Attributes section, and view the Users with Worst Performance section.

    You can show the worst or best performance at the top by selecting View > Worst or View > Best from the top bar. The differences between good and poor user or device performance could give you clues to improving performance.

    View the users suffering most from those with this common theme

    At a glance, you may be able to spot additional common attribute values in the columns of the Users section, which can help isolate more common themes of poor performance.

    Field Description

    Username

    Displays the username of the person accessing each device.

    Device Name

    Displays the hostname of the monitored device. View it in the Windows Control Panel > System > Computer Name, or on Apple Macs in System Preferences > Sharing > Computer Name.

    Device Type

    Displays the type of device reporting performance to Aternity.

    Operating System

    Displays the generic name and version of the operating system (like MS Windows 10, MS Windows Server 2008 R2, MacOS 10.3, iOS 10 or Android 6).

    CPU Cores

    (Desktops, laptops and mobile devices only) Displays the number of CPU cores of the device.


    Displays the size of RAM of the device.


    Department

    Displays the name of the department to which the user or the device belongs.

    (Windows) Agent 9.x or later queries Windows network user information, accessing the Active Directory user > Properties > Department.

    (Mobile) Mobile apps can set this manually in the Aternity Mobile SDK.

    (Business) Location

    Displays the current geographic location of the device.

    If Aternity uses site-based location mapping, it reports the location as Off-site when the device is not connected to the Microsoft Active Directory. For legacy location mapping, if it cannot determine the location name, it reports it as Not Mapped. A mobile device with no location name reports as Off-site if it is on 3G or 4G/LTE, or Not Mapped if it is on WiFi.
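The reporting rules above can be condensed into a small decision function. This is an illustrative sketch only: the function name, arguments and input shapes are assumptions, and the real agent logic is more involved.

```python
def reported_location(mapped_name, mapping, device_kind,
                      connectivity=None, on_domain=False):
    """Return the location name reported, per the rules described above.

    mapped_name  -- location resolved by the mapping, or None if unresolved
    mapping      -- "site" (site-based) or "legacy" location mapping
    device_kind  -- "desktop" or "mobile"
    connectivity -- for mobile devices: "wifi", "3g" or "4g"
    on_domain    -- whether the device is connected to Active Directory
    """
    if mapped_name:
        return mapped_name
    if device_kind == "mobile":
        # Mobile device with no location name
        return "Off-site" if connectivity in ("3g", "4g") else "Not Mapped"
    if mapping == "site" and not on_domain:
        # Site-based mapping: device not connected to Active Directory
        return "Off-site"
    # Legacy mapping with no determinable location name
    return "Not Mapped"

print(reported_location(None, "site", "desktop"))       # Off-site
print(reported_location(None, "legacy", "desktop"))     # Not Mapped
print(reported_location(None, "site", "mobile", "4g"))  # Off-site
print(reported_location("Munich", "site", "desktop"))   # Munich
```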

    On virtual deployments (virtual applications like Citrix XenApp and virtual desktops like Citrix XenDesktop), Aternity always tries to report the location of the end user's front-end device by detecting its subnet.

    Max. Response

    (For managed applications only) Displays the longest response time for this activity from this device within the dashboard's timeframe.

    Total Activities

    (For managed applications only) Displays the number of times someone performed this activity during the timeframe, thereby adding weight to the impact of this problem. If the same user performs the same activity twice, it counts as two.

    Performance Score

    (For managed applications only) Displays the overall activity score for this application, calculated by condensing all the activity statuses into a single value. Use this for acute (recent) problems in performance.

    Activity Response

    (For managed applications only) Displays the response time of the activity. The response times of activities are split into client time (dark blue) and the combined server time (light blue) and network time (blue).

  7. Step 7 To obtain more information on the details of one of the devices impacted by this issue, hover over a device's Total Activities, Performance Score or Activity Response bar.
    View more details of a device impacted by this issue

    You can drill down to:

  8. Step 8 You can limit the display of this dashboard using the menus at the top of the window.

    You can zoom in on a specific timeframe, or limit the list of attributes, or display the best performances instead of the worst ones.

    Select the data to display in the dashboard
    Field Description

    Timeframe

    Choose the start time of the data displayed in this dashboard.

    This dashboard retains data going back up to 14 days, and its data refreshes every 10 minutes.


    View

    Select from this menu to display the best or the worst performances in the lower pane of the dashboard.

    Sort By

    Select to determine the contents of the rightmost column.