In my current role, we’ve created something we internally call Minimum Regression Test Runs. It’s basically a minimum set of test cases that must be executed for every release, no matter how big or small that release is. More on this in a future post, but while working on this task, I struggled to figure out which configurations (i.e., platforms, devices, and OS versions) the tests should be run against. This is where I thought analytics would be super helpful.
One of the most important tools we have lets us quickly see the following through a dashboard (note that this information does not map back to individual users; privacy is important):
- % of our users on a specific app version and build
- % of our users on a specific operating system
- % of our users on a specific device
- crash-free rate, with a breakdown for each point above
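To make this concrete, here is a minimal sketch of how that kind of aggregated breakdown can drive a coverage decision. The OS names, percentages, and the 90% target below are all invented for illustration; they are not our actual numbers.

```python
# Hypothetical snapshot of the kind of aggregated, anonymized data such a
# dashboard exposes: percentage of users on each OS version (numbers made up).
os_share = {
    "iOS 17": 62.0,
    "iOS 16": 24.0,
    "iOS 15": 9.0,
    "iOS 14": 5.0,
}

def versions_for_coverage(share, target=90.0):
    """Take OS versions, highest user share first, until the target
    percentage of users is covered."""
    covered, chosen = 0.0, []
    for version, pct in sorted(share.items(), key=lambda kv: kv[1], reverse=True):
        if covered >= target:
            break
        chosen.append(version)
        covered += pct
    return chosen, covered

print(versions_for_coverage(os_share))  # → (['iOS 17', 'iOS 16', 'iOS 15'], 95.0)
```

With data like this, the cut-off becomes an explicit, defensible choice rather than a guess: you can say exactly what share of users the chosen versions cover.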
I also did an online search for the most popular operating system versions on both iOS and Android. Not surprisingly, the data was very close to what our internal analytics showed. For Mac and Windows, we didn’t have internal data readily available, so I relied on online data about the most popular versions of each.
I used the above information to map our current list of in-house devices as closely as possible to the set that covers the highest percentage of our users. This is what I had to work with at the time, but you could use any available source of information to aid your decision making.
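That mapping step can also be sketched in code. Assuming a device-share breakdown like the one above plus a list of devices physically on hand (all names and numbers here are hypothetical), a simple approach is to pick, from the in-house inventory, the devices that jointly cover the largest share of users:

```python
# Hypothetical user share per device model (invented for illustration).
device_share = {
    "Pixel 8": 18.0,
    "Galaxy S23": 15.0,
    "Pixel 6": 11.0,
    "Galaxy A54": 9.0,
    "Moto G Power": 4.0,
}

# Hypothetical in-house inventory: devices we can actually test on.
in_house = {"Pixel 8", "Pixel 6", "Galaxy A54"}

def pick_devices(share, inventory, max_devices=2):
    """From the devices we physically have, choose up to max_devices,
    highest user share first, and report the share they jointly cover."""
    available = [(d, p) for d, p in share.items() if d in inventory]
    available.sort(key=lambda dp: dp[1], reverse=True)
    chosen = available[:max_devices]
    return [d for d, _ in chosen], sum(p for _, p in chosen)

print(pick_devices(device_share, in_house))  # → (['Pixel 8', 'Pixel 6'], 29.0)
```

The gap between the covered share and 100% is also useful: it tells you which popular devices you don’t have in-house and might want to acquire or cover via a device cloud.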
Analytics can be used to prioritize bugs during a release, to assess risk, or even to decide which features to add. It’s an extremely useful tool for everyone, including Test Engineers, to deliver a higher-quality, superior product to the user.