Thursday, August 30, 2012

Monitoring Data Collected By eValid

The use of eValid as a "smart" monitoring agent continues to grow. Customers are finding that eValid can monitor complex AJAX pages involving complicated interactive transactions with very high reliability and repeatability.

The key to this is the "structural testing" capability in eValid, which works by directly accessing the DOM with eValid's DOM manipulation commands. Here is an outline of how Structural/Algorithmic Testing compares with regular record/play testing. In addition, as the chart shows, eValid's Structural/Algorithmic tests can be extended to a fully programmatic mode using the eValid Programmatic Interface (EPI) for C++.
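To give a flavor of the approach, here is a minimal, self-contained C++ sketch of the core idea behind DOM-based structural testing: walk the document tree, locate an element by its attributes, and assert on its content. The Node type and findById function below are simplified stand-ins invented for this illustration; they are not eValid's actual EPI classes or signatures.

    #include <iostream>
    #include <memory>
    #include <string>
    #include <vector>

    // Simplified stand-in for a DOM node; not eValid's actual EPI types.
    struct Node {
        std::string tag;
        std::string id;
        std::string text;
        std::vector<std::unique_ptr<Node>> children;
    };

    // Depth-first search by id: the essence of a structural test is to
    // locate content by DOM position and attributes rather than by screen
    // coordinates or a fixed sequence of recorded events.
    const Node* findById(const Node& root, const std::string& id) {
        if (root.id == id) return &root;
        for (const auto& child : root.children)
            if (const Node* hit = findById(*child, id)) return hit;
        return nullptr;
    }

    int main() {
        // Toy document: <html><body><div id="status">OK</div></body></html>
        Node html{"html"};
        auto body = std::make_unique<Node>();
        body->tag = "body";
        body->children.push_back(
            std::make_unique<Node>(Node{"div", "status", "OK"}));
        html.children.push_back(std::move(body));

        // Structural check: the element must exist and carry the expected
        // text, however an AJAX update may have redrawn the page around it.
        const Node* status = findById(html, "status");
        if (status && status->text == "OK") {
            std::cout << "PASS: #status reads OK\n";
            return 0;
        }
        std::cout << "FAIL: #status missing or changed\n";
        return 1;
    }

In a real installation the same pattern runs against the live browser DOM through eValid's DOM manipulation commands or the EPI, which is what makes the checks robust against page redraws.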

Here is a short description of how well eValid works in this kind of monitoring application, taken from the website of our Swiss monitoring partner, RealStuff Informatik AG. Their installations at companies in Switzerland have relied on the advanced Structural/Algorithmic approach for much of the monitoring they perform.

Monday, August 13, 2012

Latest User Forum Posts

Beginning in mid-2010 we have directed all technical support questions to the eValid User Forum. We have learned that when one user has an issue, all users can profit from the answer.

Here is an additional selection of some of the posts that we think would be of general interest.

Thursday, August 9, 2012

Selected User Forum Posts

Beginning in mid-2010 we have directed all technical support questions to the eValid User Forum. We have learned that when one user has an issue, all users can profit from the answer.

Here is an additional selection of some of the posts that we think would be of general interest.

Monday, August 6, 2012

Academic Users Review eValid After Real-World Project Use

Since 2009, under the leadership of Dr. Barry Boehm and CSSE Assistant Director Prof. A. Winsor Brown, eValid has been part of student projects at the Center for Systems and Software Engineering (CSSE) at the University of Southern California (USC). For the new academic year, Dr. Sue Koolmanojwong has decided to continue using eValid in her two-semester course.

In CSCI577a/b, a two-semester, team-project-based course sequence, students learn to use best software engineering practices to develop software systems for real clients. They adopt the Incremental Commitment Model (ICM) to develop their projects, and in most cases the students have little to no industry experience and little project-management knowledge.

During the second half of the course sequence, the student teams implement and test the systems they have designed. These systems are then put to use in real operational scenarios by real clients, so they require a level of testing sufficient to verify and validate that the systems meet their quality-of-service, or level-of-service, goals. For the past few years, some of the project teams have used eValid to automate their testing process and to demonstrate achievement of their level-of-service goals, especially for load and performance testing. At the end of each project life cycle, students get the chance to reflect on and review the entire development process they went through, including the testing process.

PhD student P. Aroonvatanaporn came to this conclusion: "Despite certain areas that eValid may be lacking, it has been proven that the eValid tool has played significant roles in help[ing] creat[e] successful projects in the software engineering course at USC. The projects developed and deployed as a result of this course have achieved roughly 90% on-time delivery and client satisfact[ion]. Part of this success comes from the use of good testing suites, such as eValid, to ensure product quality before they are delivered to the clients."

Here are some of the students' specific observations:
  • "Our team found it [eValid] very easy to use as we had to check for large amount of data in form of documents, images and digitized documents for load testing. Moreover it performs line-by-line checking and catches any loose ends that have been overlooked." -- A. Paradkar (Team 13).
  • "This tool is very useful and powerful; it not only can perform functional testing but also can perform pressure [load] testing." -- Qi Li, (Team 12).
  • "I used eValid to test 10 concurrent users. I first recorded a playback of one of our test cases. I then created a load test that ran the saved playback 10 times, with a wait period of 0 between each call. When I ran the load test, eValid opened 9 more instances of itself and ran the playback in each instance" -- W. Halverson (Team 5).
  • "eValid provides the user with an interface that is simple and intuitive enough that it does not require users to high technical knowledge to use the tool." -- P. Aroonvatanaporn (Team Administrator)
  • "[eValid] could be improved by making the Load Test Functionality automated. Modification of a Test Script should be minimum and support [the] change [of] number of users to any amount should be configurable through the GUI. Similarly, for Performance Testing one should be able to set the timer at any given point in the sequence of action/activity recorded." -- V. Bhaskar Rao (Team 3). (Our observation is that this student did not read the manual, as both of his requirements are actually implemented in the eValid Product.)
We are very happy to provide the CSSE students with the opportunity to make good use of our product and to have hands-on learning experience with real-world applications.

We certainly appreciate their work and the work of everyone at USC/CSSE, and we especially thank Mr. Aroonvatanaporn for this very positive review.

Wednesday, August 1, 2012

Do you ever take on "test suite development" work?

In our User Forum, Kobep wrote:
Do you ever take on "test suite development" work?

Our primary goal is to support the eValid product suite, but yes, on occasion we do take on the job of building a test suite for a customer. It's important to note, though, that we are not primarily a service organization, and we do NOT send any script development work offshore.

A typical test suite that we would develop would be for a relatively stable product for which there is reasonably stable documentation. Our goal would be to build up a suite of tests -- a nominal count of 100 tests, ideally arranged in ten groups of ten. While we welcome the chance to make tests powerful and effective -- and we generally succeed in this goal -- the main purpose of such a project is to accomplish a "technology transfer" to the customer.

We've found over and over that when a customer has a set of tests in the customer's own context -- they run on the customer's machine and exercise a web application that is familiar to all of the staff involved on the customer side -- the suite works very well as a vehicle for the customer to take over long-term maintenance. Having the context, a working sample, and ownership of the actual tests all serve to make it easy for customer staff members to understand and maintain/modify/upgrade the tests.

In cases where structural-testing script modifications have to be made later, the examples in the nominal 100-test suite also serve as the basis for those future changes.

The good news is that we like to do these projects on a very short-term basis -- in some cases it has taken less than a week or two to flesh out the tests for an application. And, yes, some tests take a LOT longer than others; there always seems to be a rough one in the bunch.

If you're interested in the details, give us a call.

eValid Support