What we discovered in our early engineering work was that the candidate solutions differed vastly in overall performance -- and differed just as vastly in how much each testbed trial solution interacted with the behavior of the web application under test. It was the potential for "bad interaction" that pointed the way for the team, because the underlying assumption was that, to be a good test driver, the driver could not get in the way of the thing being tested.
Of all the alternatives that were considered, we leaned strongly toward the one prototype product that was built on an open-source browser. Having that kind of direct, immediate, "intimate" control over all of the signals, events, transfers, and other activity while the browser fetches the page components, assembles them into the internal document object model, and finally renders them to the user -- that level of interaction, we felt, was "perfect" for what we envisioned eValid to do.
We still had to deal with the question of "overhead." No matter how efficiently you build a test driver, it is software, so it consumes some fraction of the CPU as it executes. We settled on a target of no more than 0.1% interference with the application. The solution we ultimately chose, which has become the eValid architecture, has met that goal in every regard.
eValid Tech Support