Wednesday, November 28, 2012

Testing Mobile Device Web Applications

According to recent trade press reports, by 2016 there will be more mobile devices than people! Really! More than one per person, worldwide, one article said.

The uses to which these devices are put -- telecommunications (making phone calls), texting, social networking, shopping, photography, e-book reading, etc. -- often involve very significant web interaction. Web "apps" also often involve a great deal of website interaction, even though they run natively on the device.

Our primary interest is in the quality control testing and analysis of applications that run on mobile devices using some form of browser to interact with a website. This is a fairly broad area, and involves tens of thousands of applications ranging from bus-system arrival time updates to online shopping. Given this focus on quality, what are the components of web application behavior that are the most important? Here's a starting list -- what most users would consider important -- the key performance indicators:
  • Appearance: How does the web application appear to the user on this device? Are all the essential items of information present?
  • [Relative] Performance: How fast is the application -- when delivered to the device? Does the speed vary with the device? How much?
  • Availability: What is the uptime percentage? How often is the application served incorrectly, or not at all?
  • Data Volume: How much data does the server deliver to this device? Does the amount of data vary, device to device? What are the biggest items being sent over the internet to the device? How essential are they?
  • Response Validity: Is the information presented to the user correct? Are the answers the ones you expect? Is anything left out that should be there but isn't?
No single test solution can address 100% of these factors -- that's beyond the state of the art. But we believe that the eValid test engine can cover a majority of these concerns by providing quantitative measurements of key quality factors. It does this by imitating devices realistically and by making detailed measurements "as if" the web pages were being sent to the actual device, even though the device is only imitated.
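
To make the measurement idea concrete, here is a minimal sketch of how a few of these indicators might be gathered for a single page fetch. This is illustrative Python, not eValid's engine or script language, and the URL and expected text are stand-ins:

    # Illustrative sketch only (not eValid): gather a few of the key
    # performance indicators above for one page fetch.
    import time
    import requests

    def measure_page(url, expected_text):
        start = time.monotonic()
        response = requests.get(url, timeout=30)
        elapsed = time.monotonic() - start
        return {
            "status": response.status_code,           # availability: served correctly?
            "seconds": round(elapsed, 3),             # relative performance
            "bytes": len(response.content),           # data volume delivered
            "valid": expected_text in response.text,  # response validity (crude check)
        }

    # Stand-in URL and marker text -- substitute the application under test.
    print(measure_page("https://www.example.com/", "Example Domain"))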

Imitating Devices
The way eValid imitates devices is through the User Agent String (UAS) (if you click this link you'll see the user agent string that applies to your browser). This sets the identification string that the browser reports to the server to whatever you specify in your settings. eValid has a special SetUserAgent command that sets the string for a newly launched sub-window, and a similar command line switch for a batch run.
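
Independent of eValid, the mechanism itself is simple: the UAS is just a header the client reports, so imitating a device amounts to sending that device's string. Here is a minimal sketch assuming Python's requests library; the iPhone string shown illustrates the iOS 5 Safari format and is not an authoritative value:

    # Sketch of the UAS mechanism, independent of eValid. The UA string
    # below is an example of the iOS 5 Safari format, not canonical.
    import requests

    IPHONE_UA = ("Mozilla/5.0 (iPhone; CPU iPhone OS 5_0 like Mac OS X) "
                 "AppleWebKit/534.46 (KHTML, like Gecko) "
                 "Version/5.1 Mobile/9A334 Safari/7534.48.3")

    # The server sees only the reported header, so it responds as if an
    # actual iPhone had requested the page.
    response = requests.get("https://www.example.com/",
                            headers={"User-Agent": IPHONE_UA})
    print(len(response.content), "bytes delivered to the imitated iPhone")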

Once the UAS is set, it stays that way for the duration of the run (or the duration of the sub-window). The server responds to every request as if it were responding to a browser on that particular device. In many cases the material the server sends differs based on which UAS you're using: devices with smaller screens, or with limited bandwidth, tend to be sent smaller volumes of data.
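
That persistence can be sketched the same way: set the string once and it is reported on every subsequent request, much as the UAS persists for the life of an eValid sub-window. Again this is illustrative Python with requests, and the URLs are stand-ins:

    # Sketch: a Session keeps the spoofed UA for the whole "run",
    # analogous to the UAS persisting for an eValid sub-window.
    import requests

    session = requests.Session()
    session.headers["User-Agent"] = (  # set once...
        "Mozilla/5.0 (iPhone; CPU iPhone OS 5_0 like Mac OS X) "
        "AppleWebKit/534.46 (KHTML, like Gecko) "
        "Version/5.1 Mobile/9A334 Safari/7534.48.3")

    # Stand-in URLs; every request below reports the same device string.
    for url in ["https://www.example.com/", "https://www.example.com/help"]:
        reply = session.get(url)
        print(url, "->", len(reply.content), "bytes")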

The main advantage of this approach is that eValid gives you a single point of measurement for all of the main quality factors of a web application: content, size, and speed. Because it is a single-point solution, measurements can be made in parallel for comparison purposes if you want: you could have 10 sub-windows open on your screen, each imitating a different device. eValid's regular reporting of HTML download speed and volume is available -- just as it is when eValid "imitates" the IE browser, of which it is an identical twin.
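
To make the parallel-comparison idea concrete, here is a small sketch (again illustrative Python, not eValid's sub-window mechanism) that fetches the same page once per device string concurrently and reports speed and volume side by side; the device strings are example formats:

    # Sketch of side-by-side device comparison: one concurrent fetch per
    # imitated device, reporting speed and volume. UA strings are examples.
    import time
    from concurrent.futures import ThreadPoolExecutor
    import requests

    DEVICE_UAS = {
        "PC (IE9)": "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)",
        "iPhone": ("Mozilla/5.0 (iPhone; CPU iPhone OS 5_0 like Mac OS X) "
                   "AppleWebKit/534.46 (KHTML, like Gecko) "
                   "Version/5.1 Mobile/9A334 Safari/7534.48.3"),
        "iPad": ("Mozilla/5.0 (iPad; CPU OS 5_0 like Mac OS X) "
                 "AppleWebKit/534.46 (KHTML, like Gecko) "
                 "Version/5.1 Mobile/9A334 Safari/7534.48.3"),
    }

    def fetch_as(device, ua, url="https://www.example.com/"):  # stand-in URL
        start = time.monotonic()
        reply = requests.get(url, headers={"User-Agent": ua})
        return device, round(time.monotonic() - start, 3), len(reply.content)

    with ThreadPoolExecutor(max_workers=len(DEVICE_UAS)) as pool:
        for device, seconds, size in pool.map(fetch_as, DEVICE_UAS.keys(),
                                              DEVICE_UAS.values()):
            print(f"{device:10s} {seconds:6.3f}s {size:8d} bytes")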

Examples
Here are some examples of how this kind of analysis works and the kinds of results you can get.
  • In this One Phone, Multiple Sites example you see eValid imitating an iPhone (a currently preeminent smartphone) as it navigates to six different website home pages. What is noteworthy is the data volume ratio: the HTML sent to the imitated device, compared with that delivered to the PC, ranges from 5% to 93% (but is never 100%). A sketch of this ratio computation follows the list.
  • In this One Site, Multiple Devices example we've set eValid up to imitate five different mobile tablet devices, including an Apple iPad and a current Samsung Galaxy. In this case all of the devices are navigated to the Amazon.com home page. As before, the variance in what is displayed -- as well as the wide range in the data volume downloaded -- is striking.
  • As a final example, here is a run where we had 20 different mobile devices look at (read and render) the same Amazon.com landing page. The Selected Device Screenshots show how easy it is to confirm operation of a website with many different mobile devices at the same time. The total byte count for the downloaded page size is shown for each screenshot. This data was generated automatically with built-in capabilities of eValid, as described in the Single Platform Testing page, which shows the parametric script we used to collect the data.
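
For reference, the data-volume ratio from the first example can be sketched as follows. This is illustrative Python, not the eValid script that produced the results above, and it measures only the base HTML document rather than a full-page download; the site list and UA strings are stand-ins:

    # Sketch of the device/PC data-volume ratio computation described
    # above. URLs and UA strings are illustrative stand-ins, and only
    # the base HTML document is measured (not images, CSS, etc.).
    import requests

    PC_UA = "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)"
    PHONE_UA = ("Mozilla/5.0 (iPhone; CPU iPhone OS 5_0 like Mac OS X) "
                "AppleWebKit/534.46 (KHTML, like Gecko) "
                "Version/5.1 Mobile/9A334 Safari/7534.48.3")

    def page_bytes(url, ua):
        """HTML bytes the server delivers when we report this UA."""
        return len(requests.get(url, headers={"User-Agent": ua}).content)

    for site in ["https://www.example.com/", "https://www.amazon.com/"]:
        ratio = page_bytes(site, PHONE_UA) / page_bytes(site, PC_UA)
        print(f"{site}: device/PC volume ratio = {ratio:.0%}")
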
Summary
eValid can imitate and test any kind of mobile device reliably and with little technical difficulty, with full realism in terms of download timing and volume, and with the capability to inspect page layout and other aesthetics.
