Multiple applications in multiple languages on different OSes. Should I try to use a single test harness?

I am the newest member of a project that is an amalgam of various applications written in different programming languages on both Unix and Windows. I have been given the "honor" of figuring out how to implement a nightly regression build/test across all of these applications.

Unfortunately, these applications were not built using TDD principles and have no significant unit test infrastructure. My instinct is screaming at me to avoid reinventing the wheel and to find a way to reuse as much code as possible in this nightly testing architecture.

How would anyone write test cases that share as much code as possible when faced with multiple languages on different operating systems, compounded by the fact that not all of the applications are web services or even web applications?

My only conclusion so far is that the test drivers and test cases will need to be specific to each application, and that I cannot achieve any significant code reuse.

Any suggestions, or a quick Kick In The Head for even asking this question, would be greatly appreciated :)


3 answers


This is a tough one, and I've seen it before. I think you will eventually have to make a judgment call on this, but you may need a slightly different approach to get started. It sounds like these applications have been around for a while, so there should be one or more bug databases you can mine to find the most common types of defects. Applications usually have one aspect that is most susceptible to defects, and that is where I would start with some test cases. You essentially write regression tests for the most productive bug reports first, scripting them in whatever way works, and stitch those scripts together later.



Once you know the applications - and you will know them very well soon after doing the above - you can come up with something grander that is easier to maintain and reuse across the applications for testing. Hope this helps.



Just my 2 cents' worth...

For testing to succeed it cannot be the tester versus the developers; as far as I understand it, you need all of development involved in writing the test code.



Perhaps if you can facilitate a common interface to the different applications and services, that would give you some traction.
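For what it's worth, here is a minimal sketch of what such a common interface might look like, assuming every application can at least be launched from a command line. The AppAdapter class, the app names, and the launch commands are all hypothetical:

    # Minimal sketch: hide per-app, per-OS launch differences behind one
    # uniform "run and report" contract. Names and paths are invented.
    import subprocess
    import sys

    class AppAdapter:
        """Uniform driver for one application, whatever language it's in."""

        def __init__(self, name, unix_cmd, windows_cmd):
            self.name = name
            # Pick the right launcher for the current OS.
            self.cmd = windows_cmd if sys.platform.startswith("win") else unix_cmd

        def run(self, args, stdin_text=""):
            """Run the app with args; return (exit_code, stdout, stderr)."""
            proc = subprocess.run(
                self.cmd + list(args),
                input=stdin_text,
                capture_output=True,
                text=True,
            )
            return proc.returncode, proc.stdout, proc.stderr

    # Hypothetical registry: every app, regardless of implementation
    # language, is driven through the same run() contract.
    APPS = {
        "billing": AppAdapter("billing", ["./billing"], ["billing.exe"]),
        "reports": AppAdapter("reports",
                              ["java", "-jar", "reports.jar"],
                              ["java", "-jar", "reports.jar"]),
    }

The point is that the nightly harness above this layer never needs to know what language an application is written in, only how to launch it and what it should produce.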



It's hard to say how much of this is possible in your case... but it would be great if you could come up with a declarative mechanism for describing your test cases, perhaps using text files or XML to detail the parameters, expected outputs, expected return codes, etc. for the different cases. That way, if the test cases are valid across multiple operating systems/environments, you implement the code that executes the test cases once per environment, but you reuse all of the test cases.
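As a rough illustration (the file format, element names, and the sample "myapp" command below are just one possible shape I'm assuming, not a prescription), the cases could live in a small XML file and be executed by a single cross-platform runner:

    # Sketch of a data-driven runner: test cases are pure data in XML,
    # so the same cases run unchanged on Unix and Windows. The element
    # names and the "myapp" command are invented for illustration.
    import subprocess
    import sys
    import xml.etree.ElementTree as ET

    SAMPLE = """
    <tests>
      <test name="version-flag">
        <command>myapp</command>
        <arg>--version</arg>
        <expect-exit>0</expect-exit>
        <expect-stdout-contains>version</expect-stdout-contains>
      </test>
    </tests>
    """

    def run_case(test):
        cmd = [test.findtext("command")] + [a.text for a in test.findall("arg")]
        try:
            proc = subprocess.run(cmd, capture_output=True, text=True)
        except OSError as exc:
            print("ERROR launching %s: %s" % (cmd, exc))
            return False
        exp_exit = test.findtext("expect-exit")
        if exp_exit is not None and proc.returncode != int(exp_exit):
            return False
        needle = test.findtext("expect-stdout-contains")
        if needle is not None and needle not in proc.stdout:
            return False
        return True

    def main(path=None):
        # Parse a real case file if one is given, else use the sample.
        root = ET.parse(path).getroot() if path else ET.fromstring(SAMPLE)
        failures = 0
        for test in root.findall("test"):
            ok = run_case(test)
            print("%s %s" % ("PASS" if ok else "FAIL", test.get("name")))
            failures += 0 if ok else 1
        sys.exit(1 if failures else 0)

    if __name__ == "__main__":
        main(sys.argv[1] if len(sys.argv) > 1 else None)

Each environment only needs its own copy of the runner (or its own launch commands); the XML case files themselves are shared across all of them.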

Of course, your mileage may vary depending on the complexity of the interfaces/scripts/applications under test, and on how easily the test cases can be expressed as data.

As for coming up with the test cases themselves, I too was once responsible for writing tests for old, "legacy" code that was not built with testability in mind. I like Andrew's suggestion: use the historical bug/regression data to find which tests will give you the most bang for your buck. It would also be a good idea to introduce a new engineering process on your team - for every new bug/problem/regression fixed from now on, add a test case that catches the problem. That will help you build up a set of test cases that are known to be relevant...







