QA Testing Strategies for Analytical Dashboards

My team is building a dashboard for a SaaS / multi-tenant application using Cognos. The problem I'm facing is settling on the right testing strategy.

Right now we are testing a single report with start and end date filters (at month/year granularity), one dimensional filter, and two controls for selecting a measure (there are 7 measures, each of which can be shown as a sum or as distinct values).

In addition, users can drill through on data points in the report to see the underlying transactional detail.

There is also the implicit requirement that a report for one tenant never displays data belonging to another tenant.
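That isolation requirement is one thing worth automating rather than eyeballing. A minimal sketch of such a check, assuming the report's backing query can be run directly against the data store and that each fact row carries a `tenant_id` column (both are assumptions here, using sqlite3 as a stand-in for the real warehouse):

```python
import sqlite3

# Stand-in data store; in practice this would be the warehouse the report queries.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (tenant_id TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("acme", 10.0), ("acme", 5.0), ("globex", 99.0)])

def report_rows(tenant):
    # Stand-in for the report's data source query, filtered the same way
    # the dashboard's tenant context would filter it (hypothetical schema).
    return conn.execute(
        "SELECT tenant_id, SUM(amount) FROM sales "
        "WHERE tenant_id = ? GROUP BY tenant_id", (tenant,)).fetchall()

rows = report_rows("acme")
# Every returned row must belong to the requesting tenant.
assert all(t == "acme" for t, _ in rows)
print(rows)  # [('acme', 15.0)]
```

A check like this can run once per tenant per release instead of being re-verified by hand inside every filter combination.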

So here's the problem: this one simple report takes two weeks to test, with hundreds of test cases covering the huge set of filter and measure combinations. That seems like gross overkill to me.

Is there a strategy that can reliably reduce this search space and avoid redundant retesting?
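(For concreteness, the combinatorial blow-up above can be tamed with pairwise testing: instead of every full combination, generate a suite in which every pair of parameter values appears together at least once. The sketch below is a greedy all-pairs reduction; the parameter names and values are illustrative assumptions, not the actual Cognos filter names.)

```python
from itertools import combinations, product

# Hypothetical filter/measure values for the report under test.
parameters = {
    "start_month": ["2023-01", "2023-06", "2023-12"],
    "end_month":   ["2023-06", "2023-12"],
    "dimension":   ["region", "product", "channel"],
    "measure":     [f"measure_{i}" for i in range(1, 8)],
    "aggregation": ["sum", "distinct"],
}

def pairs_of(case):
    # All (parameter, value) pairs this single test case exercises together.
    return set(combinations(sorted(case.items()), 2))

names = list(parameters)
all_cases = [dict(zip(names, vals))
             for vals in product(*(parameters[n] for n in names))]

def pairwise_cases(cases):
    # Greedy all-pairs reduction: repeatedly pick the case that covers
    # the most value pairs not yet covered by the chosen suite.
    uncovered = set().union(*(pairs_of(c) for c in cases))
    chosen = []
    while uncovered:
        best = max(cases, key=lambda c: len(pairs_of(c) & uncovered))
        chosen.append(best)
        uncovered -= pairs_of(best)
    return chosen

suite = pairwise_cases(all_cases)
print(f"exhaustive: {len(all_cases)} cases; pairwise: {len(suite)} cases")
```

For these sizes the exhaustive space is 3 × 2 × 3 × 7 × 2 = 252 combinations, while the pairwise suite needs only a few dozen cases, yet still puts every pair of filter/measure choices together at least once.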

+2




3 answers


Good question! When we publish (or plan to publish) new reports in Tableau that hit our SSAS cube, we usually ask a specific group of people to act as a superuser group and use the report as if it were already in production. This may not fit a fixed testing window — say you only have 2 days allotted — and it can run for several weeks. In the meantime, bug fixes and changes can be made and re-released to the same group, without the overhead of stopping the testers, making them wait for fixes, and then resuming.



Don't get me wrong, having a defined start and end date for testing is still ideal, but actually putting the report in front of a small group often moves things along faster than working through every scripted test case.

+1




You seem to be trying to test "all possible combinations" in which the report might be used. It may be wise to do that for a select few reports that best represent your typical or most critical ones; that will help flush out major flaws in the design, architecture, or implementation.

But trying to test every possible combination for every report in the hope of finding every error is infeasible. ajdams' suggestion makes sense and is typical of the trade-offs involved: it all comes down to time, resources, and what matters most in your situation.



So I would suggest a hybrid of the two: pick a small subset of reports for exhaustive testing, focusing on bugs that are likely to be common to the other reports as well. Then vet each individual report using a technique like the one ajdams describes.

+1




What would really help the speed and reliability of your testing is to prepare test cases up front that cover the required reporting functionality. As suggested earlier, a beta user group will then help you catch and fix bugs during beta testing.
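One way to make those prepared test cases cheap to re-run after every fix is a table-driven harness. A minimal sketch, where `run_report` is a hypothetical stub standing in for the real report and the expected totals are illustrative assumptions:

```python
# Stand-in for executing the report with a given measure and aggregation;
# in practice this would call the real reporting backend.
def run_report(measure, aggregation):
    data = {"revenue": [10, 5], "units": [2, 3]}
    values = data[measure]
    return sum(values) if aggregation == "sum" else len(values)

# Each tuple is one prepared test case: inputs plus the expected result.
CASES = [
    ("revenue", "sum", 15),
    ("revenue", "count", 2),
    ("units", "sum", 5),
]

def run_suite():
    # Returns the list of failing cases; empty means the suite passed.
    failures = []
    for measure, agg, expected in CASES:
        got = run_report(measure, agg)
        if got != expected:
            failures.append((measure, agg, expected, got))
    return failures

print(run_suite())  # [] when every case passes
```

Adding a newly found bug as one more row in `CASES` turns each beta-group discovery into a permanent regression check.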

0








