There are two ways to test software. The first and most obvious is simply to let users test functionality by using the software as it was intended; this method is the likeliest to produce useful, practical results. The other method is automated testing, which requires a second piece of software designed to provide input to the original application and analyze its output. Automated testing is especially useful when a large volume of output must be examined to determine whether the application is performing properly.
For web sites, testing can be performed in exactly the same two ways. Users can operate and explore the site to see if it does what it is supposed to do, or software can be written to provide input to the site and analyze its output automatically.
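A protocol-level check of this kind can be sketched in a few lines of Python using only the standard library. Everything here is a hypothetical stand-in: the local server plays the role of the real site, and the page content is invented purely for illustration.

```python
import http.server
import threading
import urllib.request

# A stand-in for the real site: a local server that returns a fixed page.
# (The page content is hypothetical, for illustration only.)
class DemoPage(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body><h1>Welcome</h1></body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), DemoPage)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The automated "test": send a request, then inspect the text that comes back.
url = f"http://127.0.0.1:{server.server_port}/"
with urllib.request.urlopen(url) as resp:
    status = resp.status
    html = resp.read().decode()

assert status == 200
assert "<h1>Welcome</h1>" in html
print("automated check passed")

server.shutdown()
```

The point is not the specific assertions but the shape of the process: input goes in as a request, output comes back as text, and a program, not a person, decides whether the text is correct.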
The difference is easiest to visualize by imagining a test drive of a car. In “real browser” testing, an actual browser is used to operate and test a web site. In automated testing, various protocol requests are sent to a site and the resulting output is checked to see whether it includes the right combinations of data and presentation.
The “real browser” method for a car would be to put a robot in the driver’s seat and have it physically operate the controls to maneuver the vehicle. The automated method would be to operate the vehicle by remote control. After a few moments, it should become clear which version is the more revealing test: the remote control may move the car, but only the robot in the driver’s seat exercises the controls the way an actual driver would.
The key problem with virtual browsers is that they have no way of testing or evaluating functionality in the client. The only way they can exercise a web application or web site is by testing the code residing on the server. If they attempt to automate the process of running and testing the client-side code, user interface controls, and presentation logic, they end up replicating functionality already found in browsers, which calls into question the decision to exclude the browser in the first place.
Client-side code can make a huge difference in whether a web site or web application functions properly. For example, if a user can’t find a button, or misreads the user interface and can’t work out the proper sequence of clicks, selections, or menu choices to perform a task, that failure cannot be simulated in software. Unless the testing programs are deliberately written to ignore certain user interface elements, or to malfunction on purpose, they can’t simulate confusion or mistakes.
What most web users don’t know is that the World Wide Web is accessible not only through browsers but also through a command-line interface. All of the popular Internet protocols are text-based, meaning that with simple or complex typed commands it is possible to “navigate” the web, e-mail, Usenet, FTP, and various other services. It isn’t necessarily practical, because software exists to put an understandable front end on these operations, but it is possible.
Text-based commands are designed to be collected into sequences, which means that with a little organization it becomes possible to operate a web site just by sending it text commands. Since the resulting output is also text, it can be parsed for accuracy. This is the basis for what is called a “virtual” browser.
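This idea can be made concrete: the “command” a virtual browser sends is nothing more than a few lines of typed text, and the reply is text that can be parsed. A minimal sketch, again using a hypothetical local server as a stand-in for a real site:

```python
import http.server
import socket
import threading

# Local stand-in for a web site (invented content, for illustration only).
class Page(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b'<html><body><a href="/next">Next page</a></body></html>'
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Page)
threading.Thread(target=server.serve_forever, daemon=True).start()

# "Typing" an HTTP request: the command is nothing but lines of text.
request = (
    "GET / HTTP/1.1\r\n"
    f"Host: 127.0.0.1:{server.server_port}\r\n"
    "Connection: close\r\n"
    "\r\n"
)
with socket.create_connection(("127.0.0.1", server.server_port)) as sock:
    sock.sendall(request.encode())
    reply = b""
    while chunk := sock.recv(4096):
        reply += chunk
text = reply.decode()

# The reply is text too, so it can be parsed for accuracy.
status_line = text.splitlines()[0]
assert status_line.startswith("HTTP/") and "200" in status_line
assert 'href="/next"' in text

server.shutdown()
```

Chain a few of these request/parse steps together (fetch a page, find a link, request it, check the result) and the outcome is exactly the “virtual browser” described above.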
While it is possible to automate the act of operating a graphical interface, it can’t be done without either using a browser or writing software that mimics the functionality of a browser. This is the key flaw in the idea of testing a web application purely through automation. Software that is designed to take input and produce output can be fully automated, because providing “input” is something software can do on its own. In fact, the concept of “pipes” in the UNIX operating system takes full advantage of this by turning the output of one program into the input of another.
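The pipe idea can be sketched with Python’s subprocess module, connecting two ordinary UNIX commands so that the first program’s output becomes the second program’s input (the text being piped is arbitrary):

```python
import subprocess

# Shell equivalent:  printf 'get /index.html\n' | tr 'a-z' 'A-Z'
# One process's stdout is wired directly into the next process's stdin.
producer = subprocess.Popen(
    ["printf", "get /index.html\\n"], stdout=subprocess.PIPE
)
consumer = subprocess.Popen(
    ["tr", "a-z", "A-Z"], stdin=producer.stdout, stdout=subprocess.PIPE
)
producer.stdout.close()  # let the consumer see EOF when the producer exits
output, _ = consumer.communicate()
print(output.decode())   # GET /INDEX.HTML
```

No human intervenes between the two programs; that is exactly the sense in which providing “input” is something software can do entirely on its own.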
But when the task is not just providing input but deciding how that input should be organized, automation becomes much harder. If that problem is sidestepped, some of the benefits are still available, but the testing will exclude everything related to the client’s user interface.
Automated software testing definitely has its place, and it can yield enormous cost savings. But two of the most important metrics for web applications and sites are ease of use and user satisfaction: qualities that can only be measured by working with real users and real browsers.