Tuesday, May 31, 2011

How eValid Scales

There has been a lot of discussion in various forums -- including in our own eValid User Forum -- about how to scale tests up from singleton runs to large load tests. In every case the goal is the same: preserve the precision and accuracy of an eValid playback while running enough eValid instances in parallel to generate a realistic load.


The natural questions are: as you increase the number of eValid instances, which resources are used up first, and what limits do you hit along the way? Here are the main factors that contribute to this discussion, in the order in which they typically become limits:



  1. Initial RAM Usage
    When eValid finishes loading -- even when the initial page is about:blank -- its footprint is about 12 MBytes. That means an initial load of 100 BUs will consume about 1.2 GBytes of RAM, which is certainly feasible on a 3 GByte machine. (See the capacity sketch just after this list.)

  2. RAM Usage Growth
    As each browser continues working, its footprint grows, because the browser stores in its workspace all of the JavaScript files and a lot of other material it picks up along the way. The longer the test, the greater the growth: we have seen footprints of 100+ MBytes per BU on very long tests. (This behavior argues for smaller tests and/or more frequent restarts of the BUs during your load test run.)

  3. Cache Space
    As your tests run on, more and more pages are stored in the cache. In LoadTest runs, though, this matters less, because you typically don't use the cache at all. (Periodic DeleteCache commands are a good idea.)

  4. Virtual Memory Size
    eValid ships with special instructions for adjusting the Virtual Memory size and related settings. This is not a fixed number; on most Windows systems the "machine adjustments" tell Windows to size the paging files up to the maximum available.

  5. GDI Object Count
    You can't control the GDI object count directly, but we know (from our conversations with Microsoft technical support) that you may run past the built-in Windows limit, which is configurable between 256 and 65,536 GDI objects per process.

  6. Input/Output [I/O] Bandwidth
    This depends a lot on the web application you are testing. Some are very economical with I/O capacity; others are very hungry. The only way to make sure this is not a limit is to watch the PerfMon data or the Task Manager networking display while the test runs.
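
To make points 1 and 2 concrete, here is a back-of-the-envelope capacity estimate. It is a minimal Python sketch, not part of eValid; the per-BU figures are the illustrative numbers from the list above, and you should substitute your own measurements before planning a run.

    # Back-of-the-envelope estimate of how many eValid Browser Users (BUs)
    # fit in RAM. The figures are illustrative assumptions drawn from the
    # points above -- measure your own before planning a run.
    INITIAL_MB_PER_BU = 12        # footprint right after load (about:blank)
    LONG_RUN_MB_PER_BU = 100      # observed footprint on very long tests
    AVAILABLE_RAM_MB = 3 * 1024   # a 3 GByte machine

    def max_bus(ram_mb, per_bu_mb):
        """Largest whole number of BUs that fit in the given RAM."""
        return int(ram_mb // per_bu_mb)

    print(max_bus(AVAILABLE_RAM_MB, INITIAL_MB_PER_BU))   # 256 freshly loaded BUs
    print(max_bus(AVAILABLE_RAM_MB, LONG_RUN_MB_PER_BU))  # only 30 long-running BUs

The same 3 GByte machine that can launch about 256 fresh BUs can sustain only about 30 of them through a very long test -- which is exactly why shorter tests and periodic BU restarts pay off.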

There are some other tricks to maximize the number of BUs per run -- tricks we use to get over 10,000 BUs out of one machine. That will be the subject of a subsequent blog post.

Wednesday, May 25, 2011

Selected User Forum Posts

Beginning in mid-2010 we have directed all technical support questions to the eValid User Forum. We have learned that when one user has an issue, all users can profit from the answer.

Here is an additional selection of some of the posts that we think would be of general interest.

Wednesday, May 18, 2011

Selected User Forum Posts

Beginning in mid-2010 we have directed all technical support questions to the eValid User Forum. We have learned that when one user has an issue, all users can profit from the answer.

Here is an additional selection of some of the posts that we think would be of general interest.

Friday, May 13, 2011

Results From A Small Experiment

Previous test reports pointed out how eValid can pretend to be a particular smartphone, looked at how several different websites behaved in delivering material to that smartphone client browser, and studied the differences in the Amazon website's responses to different smartphones.

Next, we tried a simple experiment with a smartphone application that uses AJAX. We wanted to see how the use of AJAX in an application affects smartphone behavior. For our test, we chose a beta-status application of a regional transportation system that offered real-time schedule information delivery to smartphones.

The experiment involved a simple test: with eValid pretending to be an iPhone 4 browser, request schedule data from the mobile application, synchronize on final delivery, and measure how long that delivery took. Static runs of the script showed that the data download was about 330 KBytes per test.

The LoadTest script was set to start Browser Users (BUs) at a constant rate until a total of 100 was reached. The repetition count for each BU was set high enough that, by the end of the test, all 100 BUs were repeating the test independently. The timing results were very interesting: the LoadTest Data Report suggests that the response time for this application degrades by a factor of about 5:1 when the number of simultaneous requests grows above 75.
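
For readers who want to reproduce the shape of this measurement without eValid, here is a minimal Python sketch of the same idea: issue identical requests at increasing concurrency levels and watch the mean response time climb. The URL is a placeholder assumption, and plain Python threads are only an illustration of the measurement -- this is not eValid's LoadTest engine.

    # Time the same request at several concurrency levels.
    import time
    from concurrent.futures import ThreadPoolExecutor

    import requests

    URL = "https://example.com/schedule"  # placeholder for the AJAX endpoint

    def timed_fetch(_):
        """Fetch URL once and return the elapsed wall-clock seconds."""
        start = time.perf_counter()
        requests.get(URL, timeout=30)
        return time.perf_counter() - start

    for concurrency in (1, 25, 50, 75, 100):
        with ThreadPoolExecutor(max_workers=concurrency) as pool:
            times = list(pool.map(timed_fetch, range(concurrency)))
        print("%3d simultaneous requests: mean %.2f s"
              % (concurrency, sum(times) / len(times)))

Note that at about 330 KBytes per test, a wave of 100 simultaneous requests moves roughly 33 MBytes, so client-side I/O bandwidth needs watching as well.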

(Please note that this experiment was done to illustrate eValid capability; we're not making any recommendations to the agency actually responsible for the application implementation.)

You can read about our short experiment in detail at this page: Mobile Agent Test Page -- Loading Experiment #1.

Thursday, May 12, 2011

Selected User Forum Posts

Beginning in mid-2010 we have directed all technical support questions to the eValid User Forum. We have learned that when one user has an issue, all users can profit from the answer.

Here is an additional selection of some of the posts that we think would be of general interest.

Tuesday, May 10, 2011

More about Mobile Phone Response

A previous item pointed out how eValid could pretend to be a particular smartphone, and looked at how several different websites behaved in delivering material to that smartphone client browser.

A companion experiment that we did tried out five different smartphones on the Amazon website. (We chose Amazon because it seemed to be one that had very good smartphone options.) The results are shown in Mobile Agent Test Page -- Multiple Phones.

What is interesting about these data points is how much the size of the download varies from phone to phone: a ratio of 6.15 or more from the largest to the smallest download volume.

If you are concerned with optimizing an application for a range of different smartphones, this comparison method will be of great interest.

Monday, May 9, 2011

Selected User Forum Posts

Beginning in mid-2010 we have directed all technical support questions to the eValid User Forum. We have learned that when one user has an issue, all users can profit from the answer.

Here is an additional selection of some of the posts that we think would be of general interest.

Thursday, May 5, 2011

eValid Applied To Mobile Devices

In response to some users' questions, we recently did some experimentation with eValid imitating different mobile devices. The question was: can eValid show the differences in server responses across a variety of mobile devices?

We used the SetUserAgent command to make eValid identify itself as a different user agent. We chose the user agent string for an iPhone -- not for any particular reason, and certainly not as an endorsement -- just to see what would happen when imitating that popular device.

Next, we chose five popular sites to run a simple test on: Amazon, Meebo, Samsung, Sony, and Verizon. (Again, no endorsement implied).
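
Outside eValid, you can get a rough feel for the same effect with a few lines of Python: fetch a page once with a desktop User-Agent header and once with an iPhone one, then compare the delivered sizes. The user agent strings below are illustrative examples; they are not necessarily the strings eValid's SetUserAgent command sends.

    # Compare the size of the page a server delivers to two user agents.
    import requests

    AGENTS = {
        "desktop": "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36",
        "iphone": ("Mozilla/5.0 (iPhone; U; CPU iPhone OS 4_0 like Mac OS X; "
                   "en-us) AppleWebKit/532.9 (KHTML, like Gecko) "
                   "Version/4.0.5 Mobile/8A293 Safari/6531.22.7"),
    }

    def page_size(url, user_agent):
        """Return the size in bytes of the body served for this User-Agent."""
        resp = requests.get(url, headers={"User-Agent": user_agent}, timeout=30)
        return len(resp.content)

    for site in ("https://www.amazon.com", "https://www.sony.com"):
        sizes = {name: page_size(site, ua) for name, ua in AGENTS.items()}
        print(site, sizes, "mobile/desktop = %.0f%%"
              % (100.0 * sizes["iphone"] / sizes["desktop"]))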

The outcome of our experiment is shown in this simple summary: Mobile Agent Test Page

As you can see from that page, in most cases the servers responded differently -- as you would expect -- when asked to feed data to a mobile device. The side-by-side comparisons of the single response page show how different things are: for three of the five applications, the size of the delivered data was less than 20% of the full-size file. (The page rendering itself appeared quite normal, apart from the total screen area; we noted no essential differences.)

But not in every case. The Sony and Verizon mobile downloads were around 90% of the full-size download. We checked the actual content and found that, for these sites, some Flash items and some movie items appear in what is sent to the mobile device.

One obvious implication: this technique may be very valuable when using eValid to drive substantial loads into a server that is serving mobile applications. eValid's inherent scalability is what does the trick.