Tuesday, July 21, 2009
eValid is increasingly being used for monitoring applications because it is a stateful, realistic, efficient, and reliable engine for collecting user-oriented response-time data.
In monitoring mode, the Windows scheduler launches one or more eValid playbacks via the batch interface, and the data that is generated feeds into the network monitoring environment. We have standard Perl-based integrations with Nagios, Nagvis, GroundWork, Springsoft, Hyperic, Zenoss, etc.
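To make that concrete, here is a minimal sketch (in Python, standing in for the Perl glue) of what such an integration amounts to: run one playback, time it, and report in the standard Nagios plugin format. The eValid command line, the script name, and the thresholds are assumptions to adapt to your installation; the exit codes and output format are standard Nagios conventions.

```python
#!/usr/bin/env python
# Minimal sketch of a Nagios-style plugin wrapping one eValid playback.
# ASSUMPTIONS: the "evalid.exe --batch <script>" command line, the .evs
# script name, and the thresholds are hypothetical -- adapt to your setup.
# Exit codes (0=OK, 1=WARNING, 2=CRITICAL) and the "STATUS - text|perfdata"
# output line are the standard Nagios plugin conventions.
import subprocess
import sys
import time

WARN_SECS, CRIT_SECS = 5.0, 10.0   # hypothetical response-time thresholds

start = time.time()
rc = subprocess.call(["evalid.exe", "--batch", "monitor_login.evs"])
elapsed = time.time() - start

if rc != 0 or elapsed >= CRIT_SECS:
    print("CRITICAL - playback rc=%d time=%.1fs|rt=%.1fs" % (rc, elapsed, elapsed))
    sys.exit(2)
elif elapsed >= WARN_SECS:
    print("WARNING - playback time=%.1fs|rt=%.1fs" % (elapsed, elapsed))
    sys.exit(1)
print("OK - playback time=%.1fs|rt=%.1fs" % (elapsed, elapsed))
sys.exit(0)
```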
A common question is: How do I size my machine to get the most eValid output?
Here's a case study that gives you some idea of what you might get, based on recent inputs from one of our customers.
The machine being used to drive the monitoring is a 2.8 GHz dual-core Pentium with 4 GB of RAM running Windows Server 2003 (SP2). The machine has been "adjusted" to modify the OS internal performance priorities, adjust the heap space, and adjust the virtual memory sizing parameters.
The scripts being run by the eValid instance are fairly typical of functional tests used in monitoring mode. They break down as follows: (a) ~20% "simple", being just visits to one or two URLs; (b) ~60% "typical", involving a login and post-authentication visits to several pages; and (c) ~20% "heavy", meaning they involve playback into detailed AJAX pages that exercise many of eValid's more-advanced features. Most of the tests can be run in parallel -- as many as 8 in parallel, in practice -- but some of the tests have interactions that require them to be run serially.
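For illustration only, here is a rough sketch (plain Python, not the actual batch interface) of how a mixed workload like this might be driven: up to eight playbacks run concurrently, while the tests with known interactions share a lock so they execute one at a time. The command line and the .evs script names are hypothetical.

```python
# Sketch of a mixed parallel/serial test load. Up to 8 playbacks run at
# once; tests flagged serial=True share a lock so they never overlap
# each other. The "evalid.exe --batch" invocation is an assumption.
import subprocess
import threading
from concurrent.futures import ThreadPoolExecutor

PLAYBACK = ["evalid.exe", "--batch"]    # hypothetical invocation
serial_lock = threading.Lock()          # serializes the interacting tests

def run_script(script, serial=False):
    """Launch one playback; hold the lock if the test must run alone."""
    if serial:
        with serial_lock:
            return subprocess.call(PLAYBACK + [script])
    return subprocess.call(PLAYBACK + [script])

with ThreadPoolExecutor(max_workers=8) as pool:   # 8 parallel playbacks
    futures = [pool.submit(run_script, "simple_visit.evs"),
               pool.submit(run_script, "login_and_browse.evs"),
               pool.submit(run_script, "ajax_heavy_a.evs", serial=True),
               pool.submit(run_script, "ajax_heavy_b.evs", serial=True)]
    results = [f.result() for f in futures]
```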
CPU utilization runs at about 40% when this one machine is executing about 600 tests per hour, with every test re-run at a 5-minute interval. This totals out to about a half-million tests per month.
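The arithmetic behind that half-million figure is simple:

```python
# Back-of-the-envelope check of the monthly test volume quoted above.
tests_per_hour = 600
tests_per_month = tests_per_hour * 24 * 30   # hours/day * days/month
print(tests_per_month)                       # 432,000 -- about half a million
```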
At 600 tests per hour and 40% utilization we think this machine is pretty much at its limit. Pushing it any harder risks not leaving enough headroom in case one or more of the tests start to take longer.
All in all, it is a pretty impressive result.
Monday, July 20, 2009
SJSU Course on Software Test Automation Features eValid
Students in Prof. Jerry Gao's course on Software Quality and Testing (CMPE 296X), scheduled for Summer 2009, will have access to eValid as part of the regular coursework.
All in all, some 35 students in the course will be using eValid to test a particular tree-processing example program's GUI component.
We're pleased to be able to support this coursework in this way and we hope to continue to provide eValid support to SJSU and other academic institutions in the future.
Thursday, July 16, 2009
Webinar: Regression Testing for AJAX/Web 2.0
Run Regression Tests on Complex, Dynamic Applications
Make Tests Resilient and Robust, Tolerant Of Application Changes
Thursday, 23 July 2009 — 2:00 PM Eastern Time / 11:00 AM Pacific Time
Peace of mind means knowing that your web application is performing as you expect. Yet web applications change in subtle ways, and you want your tests to tolerate just the right amount of change without costing you more in maintenance.
eValid capabilities such as Adaptive Playback, Index/Motion (Algorithmic/Structural) commands, and DOM Checking capabilities can make tests very reliable -- even when page structure and details, but not intent and effect, change drastically -- and even for dynamic Web 2.0 applications.
Talk about saving time, money and getting more work done with less energy. Build reliable, robust tests once, and you won't have to worry about them again.
Outline
- eValid architecture and structure: How eValid works.
- Functional test creation: "What you see is what you record is what eValid reproduces".
- Adaptive Playback: Automated tolerance of basic page changes.
- DOM Synchronization: How to keep asynchronous (AJAX) operations from spoiling your tests (a generic sketch follows this outline).
- Index/Motion (Algorithmic/Structural) commands: New ways to bullet-proof tests aimed at Web 2.0 applications.
- eV.Manager Operation: Put your tests into a test suite with central execution control.
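As a taste of the DOM Synchronization item above, the control flow behind it can be sketched generically (this is plain Python, not eValid's command syntax): poll the page for a condition instead of sleeping for a fixed, hoped-for interval.

```python
# Generic polling wait illustrating the idea behind DOM synchronization:
# keep testing a page condition until it holds or a timeout expires.
# eValid's own commands do this inside the browser; this only shows
# the shape of the technique.
import time

def wait_until(condition, timeout=30.0, interval=0.25):
    """Poll `condition` (a zero-argument callable) until it returns True."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if condition():
            return True
        time.sleep(interval)
    raise TimeoutError("page never reached the expected state")

# Usage idea: wait_until(lambda: suggestion_count() > 0)
```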
Wednesday, July 15, 2009
DOM-based Commands - "index/motion" Commands
We have been getting a lot of interest in eValid's ability to process pages in a high-level way using some of its DOM-based commands.
We've been calling these "index/motion" commands because they have to do with identifying objects on the page by their element index and then moving around on the page so you can take action on a particular page feature or element value.
Here is a description of this process, which in eValid is used to convert a recorded script into one that is less dependent on dynamic page variations.
See: Manual Script Creation Process
The set of commands available really gives a tester a kind of algorithmic or structural way of doing the testing, and we thought it would be valuable to outline how that capability plays out.
This description of eValid's Algorithmic/Structural Testing capability makes the role these commands have very clear.
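To give a feel for the idea, here is a rough sketch in plain Python (not eValid's script syntax) of the two steps the name suggests: index the page's elements in document order, locate an anchor element, then "move" by index to the element you actually want to act on. The page fragment and names are made up for the example.

```python
# Illustration of the "index/motion" idea: flatten the DOM into document
# order, find an anchor element by content, then step by index to a
# neighboring element. This is a concept sketch, not eValid syntax.
from html.parser import HTMLParser

class FlatDOM(HTMLParser):
    """Collect the page's elements as a flat list in document order."""
    def __init__(self):
        super().__init__()
        self.elements = []          # one entry per tag, in page order
    def handle_starttag(self, tag, attrs):
        self.elements.append({"index": len(self.elements),
                              "tag": tag, "attrs": dict(attrs), "text": ""})
    def handle_data(self, data):
        if self.elements:
            self.elements[-1]["text"] += data.strip()

page = FlatDOM()
page.feed("<table><tr><td>Price</td><td>42.00</td></tr></table>")

# "Index" step: locate the anchor element by its content.
anchor = next(e for e in page.elements if e["text"] == "Price")
# "Motion" step: move one element forward to reach the value cell.
value_cell = page.elements[anchor["index"] + 1]
print(value_cell["text"])       # -> 42.00
```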
Tuesday, July 7, 2009
Scripts, Synchronization, Index/Motion vs. AJAX Text Box Tricks
It seems that AJAX is sneaking in everywhere you look these days, improving the user experience, just as it is supposed to!
It's probably not news to you that Google's venerable main search page now has an "autosuggest" feature -- it feeds you possible targets of your search, based on the keystrokes that appear. It's a powerful, user-friendly feature, and it's done with AJAX methods that continuously interrogate the server with what you've typed as you type it and dynamically modify the page in your browser -- all as you continue to type in your search.
The main question is: How do you test this kind of thing? Here are some answers:
- If you want to make a recording manually, the regular steps don't work, because the page is AJAX-driven. Here is how to make a good recording using out-of-the-box methods: see Recording of Autosuggest Text Boxes.
- But if you want to do the same test and are willing to spend the time to create the commands using eValid Index/Motion commands, here is an example of Autosuggest Text Box Processing with Index/Motion Commands.
- A rather more complex example of the technique is this scripted solution, which exercises the ICEfaces Autocompleter Function; like the one above, the script is entirely self-synchronizing and tolerant of all but drastic page structure changes.
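For readers who want to see the shape of such a self-synchronizing test outside of eValid, here is a rough equivalent sketched with Selenium in Python. It is only an illustration of the flow, not eValid's method, and the CSS selector for Google's suggestion list is a guess that will change as the page changes.

```python
# Self-synchronizing autosuggest test, sketched with Selenium: type one
# keystroke at a time and poll the DOM until suggestions appear, so the
# test never depends on fixed delays. The suggestion-list selector is
# an assumption about the page's current markup.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Firefox()
driver.get("https://www.google.com")

box = driver.find_element(By.NAME, "q")
for ch in "evalid":
    box.send_keys(ch)
    # Synchronize on the DOM, not the clock: wait until at least one
    # suggestion element is present before typing the next character.
    WebDriverWait(driver, 10).until(
        lambda d: len(d.find_elements(By.CSS_SELECTOR, "ul[role='listbox'] li")) > 0)

driver.quit()
```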