About The Browser User Controversy
Key Idea: In creating realistic load for performance testing of AJAX applications, it's critical to simulate users realistically, and for this reason the trend is toward using Browser Users (BUs). Why is this so important? And how many BUs can you get on one machine? On 10 machines? On 100 machines?
The question often comes up: What is all this ruckus about "browser users"? Why is it so important?
Introduction
The general discussion you're probably referring to is about server loading methods. The goal in such projects is to simulate large numbers of users to do "web performance testing" in general, and, more specifically, "AJAX application performance testing" in particular. The claim is made that the usual methods -- which typically are based on simulating HTTP/S traffic -- don't work with AJAX.
There are [at least] two dimensions to this discussion:
- The simulation of users needs to be realistic if you are to believe the results; and,
- It is very difficult to get good scaling factors so that your realistic sessions impose realistic load.
About AJAX Applications
AJAX applications ARE stateful and typically involve complex interactions between the JavaScript (JScript) programs in the browser and various programs running on the server that cooperate with each other. And, because it is "A"JAX, it is "A"synchronous by nature.
Consequently, the big issue with testing an AJAX application is timing.
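To see why, consider a toy illustration (in Python, standing in for the browser's JavaScript; the request names and delays are invented for this sketch): an AJAX page fires several requests at once, and the order and moment of their completion depend entirely on server latency.

    import asyncio
    import random

    async def ajax_request(name):
        # Server latency varies with load; completion order is unpredictable.
        delay = random.uniform(0.1, 2.0)
        await asyncio.sleep(delay)
        print(f"{name} finished after {delay:.2f}s")

    async def page_load():
        # An AJAX page fires several requests at once and updates its own
        # state as each one lands -- the page is "done" only when all are.
        await asyncio.gather(*(ajax_request(f"request-{i}") for i in range(3)))

    asyncio.run(page_load())

No fixed schedule of protocol traffic can know in advance when that final request will land.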
Now, the use of HTTP/S protocol traffic simulation is insufficient for AJAX applications because HTTP/S is "memoryless" -- whereas AJAX applications do have internal state. A "memoryless" simulation can't remember any state, and therefore can't wait for download work to be completed.
The way most HTTP/S loading approaches overcome that is to put in user "think time" or "waits" -- so that the traffic is slow enough to account for typical delays. This is good practice, but...what happens when the server slows down?
No matter how long a Wait you put in, there is ALWAYS a load level at which the Wait times are "not long enough" and the test fails. When your AJAX tests "fail to sync" you are out of luck. But an eValid script playback, given that it has been adapted to be self-synchronizing, won't have these timing issues. You'll never "fail to sync" -- because the script has programming in it that directs the eValid browser to wait in just the right ways.
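To make the contrast concrete, here is a minimal sketch (again in Python, not eValid's actual scripting language; the function and constant names are illustrative assumptions) of the two waiting strategies:

    import time

    FIXED_THINK_TIME = 2.0   # seconds -- a guess that fails when the server slows down
    POLL_INTERVAL = 0.25
    TIMEOUT = 60.0

    def fixed_wait():
        # HTTP/S-style approach: sleep a fixed "think time" and hope the
        # AJAX work has finished. Under heavy enough load it eventually hasn't.
        time.sleep(FIXED_THINK_TIME)

    def synchronized_wait(page_is_ready):
        # Self-synchronizing approach: poll an actual readiness condition
        # (e.g. a DOM element appearing) until it holds, up to a timeout.
        deadline = time.time() + TIMEOUT
        while time.time() < deadline:
            if page_is_ready():
                return True          # in sync, regardless of server speed
            time.sleep(POLL_INTERVAL)
        return False                 # a genuine failure, not a timing artifact

The first strategy bakes a guess about server speed into the script; the second waits on what the browser actually sees.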
Creating Realistic Load
There are two main ways to create load on a server:
- Virtual Users (VUs) -- This method uses simulated HTTP/S protocol traffic to produce the effect of a user running a specific application.
This approach scales well -- because all that is needed is to generate HTTP/S traffic, and this can be done with simple HTTP GET commands. On the other hand, the use of a stateless protocol sacrifices end-user realism when you have an AJAX application under scrutiny. (A minimal sketch of this kind of protocol-level load generator appears after this list.)
- Browser Users (BUs) -- Running a test playback with an actual browser instance, which is what eValid does.
The scaling issue here is more complex, because running 10, or 100, or 1,000 BUs requires quite a lot of compute power. Typically, with eValid BUs, we find that you run out of RAM first, and so on a typical XP box you will max out at about 100 BUs per user account. (Interestingly, it is RAM that is exhausted first; we don't usually have problems with I/O capacity even at higher loading levels.)
To get to the 1,000 BU level -- or to even higher BU counts -- you will need a fairly strong machine (RAM is the scarce resource) and multiple user accounts.
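As promised above, here is a bare-bones sketch of a VU-style load generator (the URL and counts are placeholders, not a real target). It shows why the VU approach scales so cheaply -- each simulated user is just a thread issuing GETs -- and also why it is blind to browser-side state:

    import threading
    import urllib.request

    URL = "http://example.com/app"   # placeholder target
    NUM_VUS = 100                    # each "virtual user" is just a thread
    REQUESTS_PER_VU = 10

    def virtual_user():
        # A VU only replays protocol traffic: no JavaScript runs and no DOM
        # exists, so there is no AJAX state to wait on -- cheap, but blind.
        for _ in range(REQUESTS_PER_VU):
            with urllib.request.urlopen(URL) as resp:
                resp.read()

    threads = [threading.Thread(target=virtual_user) for _ in range(NUM_VUS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

A BU, by contrast, replaces each of these lightweight threads with a full browser instance executing the application's JavaScript -- which is exactly where the RAM cost, and the realism, come from.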
At the end of the day, if you imagine a 100-BU loading scenario, what you have is a group of eValid instances, each running a separate playback script and each independently acting like an AJAX application user. Any question about realism is moot: you're actually running real browsers. And your load, performance, and testing results show you, accurately and easily, what the end users are experiencing.