Otero - Open Test Robot Platform

Authors: Antti Jääskeläinen (TUT), Matti Vuori (TUT), Kimmo Jokinen (OptoFidelity), Tommi Toropainen (Intel), Heikki Virtanen (TUT), Arttu Lämsä (VTT), Miska Seppänen (VTT)

Introduction

This document describes the Open Test Robot Platform, Otero. It is an open specification and open-source platform intended for test robot system developers, especially those who would like to build a low-end system to learn about test robotics and prepare for a professional system acquisition. It is also a tool for test education and training, and for researchers working in this area. The platform is currently software only, but we give some guidance on adapting it to inexpensive robots.

This is a high-level document that does not show all the details. Here we describe the general ideas of the platform so that potential users and developers understand what it is all about -- what it offers and what it contains. Detailed descriptions of the platform architecture and the APIs are provided in separate documents.

The platform has been developed in collaboration between the Finnish partners Tampere University of Technology, Intel, OptoFidelity and VTT in the RATA research project (https://wiki.tut.fi/RATA/WebHome). The participants are grateful to Tekes for the financial support without which this project would not have been possible.

1 Open Test Robot Platform in a nutshell

1.1 Why bother with this?

Test automation is meeting new challenges all the time. Two essential phenomena are the need to perform accurate device behaviour measurements that are not affected by any additional software components during testing, and the need to rapidly test systems when internal agents cannot be used to drive the software under test.

Using testing robots is a great solution to both of these challenges. Currently the possibilities are not widely known in the industry, as there are no "learner grade" systems that companies could use to become familiar with the technology and to assess its suitability for their testing processes.

This platform offers companies a great opportunity to try, at minimal cost, what robot-assisted testing could offer them and to prepare for the acquisition of a professional test robot system. Systems built around the platform and a low-cost robot can be used for trying out the testing technology and for training testers and test automation engineers -- a company can arrange a small training robot for each unit and each team if they so wish. Testing tool developers and researchers can use the system to develop new, advanced testing systems. One leading idea in the platform is that it can scale: its components can be changed to more professional ones, and it can be compatible with robots at many levels of capability and robustness, giving companies a growth path to the future.

In general, robot-assisted testing is at its best in:
  • Validation testing of touch panels -- you can really trust the results.
  • Device measurements -- power consumption figures etc. are not affected by instrumentation.
  • Functional testing of device applications -- removing test agents for production will not cause nasty surprises.
  • Testing of safety-critical systems -- the devices under test can match the production versions 100%.
  • In general, anywhere where non-intrusiveness is a benefit -- the device under test is not affected by any additional software.
  • Testing of multi-platform products -- the robot system is fully OS and hardware independent, making tests comparable and reusable across platforms.
  • Testing where device HW keys or buttons need to be operated -- a simple robot can press HW keys on the panel and a more complex robot can press buttons on the side of the device -- even turning power off and on whenever the tester wishes, without manual intervention.
The alternative is to use traditional software agents in the device to run the tests, and those can be the best solution in many situations. Likewise, manual testing can be used, but nowadays it is best applied in exploratory testing, and companies like to automate other types of testing. Remember that robot-assisted testing is not a silver bullet: it can be an addition to a company's test assets, but it usually will not replace other types of testing or test automation, at least not completely.

We'll tell more about all this in later chapters.

1.2 Capabilities of this platform vs. professional systems

The context of this capability assessment is DUT-level functional and performance testing. Touch module / device-level touch performance testing is out of scope.

Otero (Open Test Robot Platform) can carry out functional testing quite extensively. Most of the use scenarios include tapping and swiping the display with one finger. Otero's inexpensive robot recommendation contains only one finger. It also includes an inexpensive camera for detecting shapes and text on the display. Due to the low cost, the camera may not be able to extract shapes and text from the highest-resolution displays. From the software point of view, the algorithms related to the camera are 1) shape matching and 2) optical character recognition (OCR). Shape analysis (aka icon detection) can be expected to work quite close to the capabilities of professional systems, but OCR may perform less well, especially when it comes to different language dictionaries.

The working space of Otero's inexpensive robot recommendation is naturally quite small: a maximum of two phones can be fitted into the robot's working area, while professional SCARA/multi-joint robots can handle several devices in their working space.

Carrying out some of the UI performance test cases requires quite fast movements and high acceleration, which are not necessarily possible with Otero's inexpensive robot mechanics.

Some test cases require that fingers are changed automatically. Otero does not include this functionality as standard.

1.3 Special issues: when to go for a commercial robot system

The Otero Open Test Robot Platform is adequate for quite extensive testing. In particular, its software architecture and interface -- the Open Robot API -- have been designed by professionals, which ensures that flexibility and scalability are built-in features. The selected hardware (robot, camera, finger implementation) plays quite a decisive role. By using industrial-grade robotics with the Open Robot API, it is possible to implement a semi-professional system with quite a moderate investment.

Based on the analysis in the previous chapter, the following requirements may indicate a need to move to a commercial robot system:
  • Time to market. Otero is not a ready product, so if the intention is to have the system up and running in production as soon as possible, it is sensible to use commercial off-the-shelf systems.
  • Need to work with multiple DUTs, and thus a large working space.
  • Need to work with multiple fingers (at least two).
  • Automatic tooltip (finger) changer.
  • Support for multi-language OCR dictionaries.

2 Contents of the platform

2.1 Scope

Otero includes all the systems required to observe and control a system under test through a touch screen. These systems are divided into a number of discrete modules with well-defined interfaces, which allows a test system to be assembled to match the requirements of testing and the available resources. For most of the modules -- the core of Otero -- we also provide reference implementations that enable quick adoption. Some modules must be acquired from third parties: physical devices such as the robot itself, DUT-specific software such as device control APIs, and modules whose implementation requires deep technical expertise, such as text recognition.

As Otero is focused specifically on testing the DUT through a touch screen, it does not include potential other methods for handling the DUT or its environment. A fully functional test system naturally also requires some form of test automation to produce test steps and an actual system to be tested, but these are likewise not considered part of Otero itself.

2.2 Architecture

[Figure: components.png -- the Otero architecture and its modules]

2.2.1 Core modules

These modules form the core of the Otero architecture. They have fixed interfaces and reference implementations will be provided for them. Included modules: user interaction, vision, camera control, gesture, robot control.

The user interaction module implements high-level commands that describe general user operations on the DUT. The implementation transforms the commands into individual gestures and text & image lookups from the DUT screen.

The vision module provides processed information on the current state of the DUT. This information is usually based on the contents of its display, but other sources may be used as well. The vision module (along with the camera control and text & image recognition) may also be replaced with a programmatic API that has programmatic access to GUI components on the DUT.

The camera control module produces images of the DUT display, corrected and cropped as necessary. It can be implemented with a physical camera taking pictures of the DUT, or with programmatic access to DUT screenshots. If a programmatic API with access to GUI components is used, this module is in practice merged with the vision module.
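
As a rough illustration of the correction and cropping this module performs, the sketch below rectifies a camera frame of the DUT screen with OpenCV, assuming the four screen corners have already been located (e.g. with calibration markers). The function name and the corner-detection step are our assumptions, not part of the Otero API.

    import cv2
    import numpy as np

    def rectify_screen(frame, corners, out_w=1080, out_h=1920):
        """Crop and perspective-correct the DUT screen region of a camera frame.

        corners: four (x, y) points in the frame, ordered top-left, top-right,
        bottom-right, bottom-left. Returns an upright out_w x out_h image.
        """
        src = np.float32(corners)
        dst = np.float32([(0, 0), (out_w, 0), (out_w, out_h), (0, out_h)])
        matrix = cv2.getPerspectiveTransform(src, dst)
        return cv2.warpPerspective(frame, matrix, (out_w, out_h))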

The gesture module provides an interface for executing individual gestures on the DUT. It also checks that the attempted gestures can be performed safely. Depending on the implementation, it may either translate the gestures into instructions for the robot module, directly control a robot that provides a higher-level API, or control the DUT through a programmatic API.

In most cases, the robot control module is not a separate component but is merged into the gesture module, which handles robot control directly. This is because there is too much variation and no generally accepted semantics for robot control. For example, some robot manufacturers have implemented important high-level gestures in firmware.
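
To make the layering concrete, here is a minimal sketch of how the gesture and user interaction modules might stack. The class and method names are illustrative only; the authoritative interfaces are the API definitions in the Otero repository.

    class GestureModule:
        """Translates individual gestures into motions of a robot driver."""

        def __init__(self, robot):
            self.robot = robot  # a robot driver, or a programmatic DUT control API

        def tap(self, x, y):
            # Safety check: refuse to move outside the calibrated screen area.
            if not self.robot.in_workspace(x, y):
                raise ValueError("tap outside the safe workspace")
            self.robot.move_to(x, y)
            self.robot.press()
            self.robot.release()

    class UserInteractionModule:
        """Implements high-level user operations as gestures plus vision lookups."""

        def __init__(self, gestures, vision):
            self.gestures = gestures
            self.vision = vision

        def tap_text(self, text):
            # Ask the vision module where the text is, then tap its centre.
            x, y = self.vision.find_text(text)
            self.gestures.tap(x, y)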

2.2.2 Third-party modules

These modules are an important part of a fully functional test robot platform, but they are not provided as part of the Otero implementation. Included modules: text recognition, image recognition, camera, device screenshot API, robot, device control API.

The text & image recognition modules are tools for extracting information from the DUT images. One is responsible for finding text, the other for locating sub-images.

A physical camera is used to take pictures of the DUT screen.

Alternatively, a device screenshot API can be used to request current screenshots from the DUT.

A physical robot is used to operate the DUT controls.

Alternatively, a device control API can be used to control the DUT programmatically.

2.2.3 Associated modules

Associated modules are not part of the Otero architecture, meaning that their interfaces and implementation are left open. However, they are still necessary for the use of Otero in automated testing, either controlling core modules or being controlled by them. Included modules: test control, device under test.

Test control is the module that controls the test run, whether a script, model-based test generator or something else. The tester may fill this slot by whatever test automation they are already using.

The device under test is whatever is to be tested or otherwise controlled. It is assumed to be some variety of a touchscreen device.

2.2.4 External modules

The larger test automation system may include external modules such as those outlined here. However, they are not necessary for the use of the Otero architecture. Included modules: environmental control & measurement.

The environmental control & measurement module (or a collection of such modules) can be used to control the test environment and thereby indirectly affect the DUT, as well as collect various data about the DUT and environment. In practice, it may have an architecture similar to the robot/camera structure of the Otero architecture. While some features belonging to the purview of this module are often needed in testing, the exact requirements vary so much that there is no sense in providing a fixed interface, architecture or implementation.

2.3 Availability

At the time of writing, Otero is still in the early phases of development, and the complete platform is not yet available. In-progress API definitions, reference implementations and documentation are available in the project's GitHub repository at https://github.com/teroc/Otero.

2.4 Getting hardware to run it

Apart from the DUT itself, there are two pieces of hardware required by Otero: the robot and the camera. Either or both may be replaced with software emulation if necessary.

Simple affordable robots are available from various vendors such as Marginally Clever (https://www.marginallyclever.com/shop/), RepRap (http://reprap.org/wiki/Robot_arm), or Tapster (http://www.thingiverse.com/thing:123420). Similar ones may also be produced and assembled from 3D-printed parts and cheap servos. Such robots are likely too flimsy and unreliable for production work, but are suitable for trying out the platform itself and possibly for teaching and research purposes.

Logitech HD webcams (http://www.logitech.com/en-us/webcam-communications/webcams) are an affordable camera option. Their products should provide good enough images to document a test run, but depending on the DUT may not meet the requirements of text and image recognition.

Hardware suitable for production work is available from vendors such as OptoFidelity (http://www.optofidelity.com/), who provide integrated robot/camera systems specifically designed for testing purposes. Other vendors of quality robots include Epson (http://robots.epson.com/), Janome (http://www.janomeie.com/) and Mitsubishi (http://mitsubishirobotics.com/). These high-quality robot systems are more expensive, but provide greater precision and durability, and typically come with support and service plans.

If top-quality images of the DUT screen are required, potential vendors for suitable cameras include Allied Vision Technologies (http://www.alliedvisiontec.com/emea/home.html), IDS (http://en.ids-imaging.com/), Red Digital Cinema (http://www.red.com/), and Ximea (http://www.ximea.com/). The following table presents machine vision camera vendors and models divided into a few categories based on their pricing.

Price category | Example cameras
Budget machine vision (~100-500 EUR) | Basler Ace-series; IDS uEYE; The Imaging Source
Semi-quality machine vision (~500-5000 EUR) | Basler (other models); IDS (other models); JAI a/s; Baumer; Dalsa; Imperx; Point Grey Research; SVS-Vistek
High-quality / special cameras (~5000 EUR and above) | RED cameras; Redlake / Princeton Instruments

It is good to understand that it is not only the camera's cost that matters: high-quality cameras also require quality optics, whose cost is usually at least half the price of the camera, sometimes even more.

2.5 Expansion possibilities

Otero can be expanded in several directions. First, new implementations may be created for any of the defined APIs, with support for additional parameters for existing methods. The Otero APIs list some parameters (along with suggested semantics) that specific implementations may support. Others may naturally be added as needed.

Second, the APIs can be expanded by creating new methods. For example, the basic version of the platform only supports one-finger robots, so an API expansion is required in order to take full advantage of multi-finger models. Such expansions may be included in Otero in the future.
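
As a rough illustration of such an expansion, a multi-finger implementation could subclass the one-finger gesture module sketched in section 2.2.1 and add new methods; the names below, including the two-finger driver call, are hypothetical.

    class MultiFingerGestureModule(GestureModule):
        """Hypothetical expansion: two-finger gestures on top of the base API."""

        def pinch(self, cx, cy, start_dist, end_dist):
            # Both fingers move symmetrically along the x-axis around (cx, cy).
            finger1 = [(cx - d / 2, cy) for d in (start_dist, end_dist)]
            finger2 = [(cx + d / 2, cy) for d in (start_dist, end_dist)]
            self.robot.move_fingers(finger1, finger2)  # assumed two-finger driver call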

Third, entirely new modules may be added to the Otero architecture to perform tasks other than controlling the DUT through a touch screen and reading its display. For example, a new module could be added to send IP packets to the DUT as testing requires, or to log events at different parts of the test system.

3 Example usage

3.1 Functional Testing

Functional testing typically takes place several times in the development life cycle (e.g. the release cycle). The same tests may be executed by the development team, the integration team and system validation. Furthermore, these test cases are executed hundreds or thousands of times during the product development life cycle to guard against regressions in the product software.

A typical (good) functional test is short, its steps are precisely described, and it has clear pre-conditions and pass/fail criteria. As an example, a simplified camera test case for Android (error recovery and some of the verification points omitted):
Step | Description | Expected outcome
Pre-condition | Device up and in home screen. Device camera points to imaging target. |
Step 1 | Robot locates the application grid icon and taps it. | Application grid opens (verified with machine vision).
Step 2 | Robot locates the Google Camera icon and taps it. | Camera application opens (verified).
Step 3 | Robot locates the imaging button and taps it (picture captured). | Successful image capture verified (small preview window visible in the upper right corner of the screen).
Post-condition | Robot taps the back button until the device is back in the home screen. |
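
Expressed against a hypothetical Otero user interaction API, the same test case could look roughly like this (all method and image names are illustrative, not the actual API):

    def test_camera_capture(ui):
        """Camera test case sketch; ui is a user interaction module instance."""
        ui.tap_icon("application_grid.png")
        assert ui.wait_for_icon("google_camera.png"), "application grid did not open"
        ui.tap_icon("google_camera.png")
        assert ui.wait_for_icon("shutter_button.png"), "camera app did not open"
        ui.tap_icon("shutter_button.png")
        # Verify capture: a preview thumbnail appears in the upper right corner.
        assert ui.wait_for_icon("preview_thumbnail.png"), "image capture not verified"
        # Post-condition: tap back until the home screen marker is visible again.
        while not ui.icon_visible("home_screen_marker.png"):
            ui.press_back()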

3.2 Power consumption measurements

An advantage of robotised testing is that it is fully OS independent and non-intrusive. There is no need to modify hardware or install any additional software on the device under test. This is very important when measuring the power characteristics of the device under test. Any additional SW installed on the DUT has a negative impact on power measurement results, as it increases load, which increases the consumed power. The device may also behave differently from a functional point of view. For example, connecting a USB cable to a phone in order to control it with software causes the device to begin charging, and it may not be able to enter sleep states.

The impact of power management on modern portable devices is significant. To keep power consumption at a bare minimum, functional blocks are powered only when needed. As a consequence, the clock tree is complex. To verify the correct behaviour of the clock tree, specific test cases may be defined that verify power consumption characteristics in defined use cases. A simplified video recording test case as an example:

Step | Description | Expected outcome
Pre-condition | Device up and in home screen. Device camera points to imaging target. |
Step 1 | Platform locates the application grid icon and taps it. | Application grid opens (verified with machine vision).
Step 2 | Platform locates the Google Camera icon and taps it. | Camera application opens (verified).
Step 3 | Platform locates the icon to switch from still capture to video mode and taps it. | Mode change verified.
Step 4 | Platform locates the video record button and taps it. | Start of video capture verified (record indicator lit).
Step 5 | Power consumption measurement started. | Power data acquired.
Step 6 | Video recorded for a specified time. |
Step 7 | After the defined time has elapsed, video recording is stopped. Power measurement data analysed and results stored. | Stop of video capture verified (record indicator not lit). Power data validated.
Post-condition | Robot taps the back button until the device is back in the home screen. |
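
The measurement itself is done with external equipment, but the analysis in step 7 can be as simple as averaging the sampled current; a minimal sketch, assuming the power analyser delivers (timestamp, current) pairs and a nominal supply voltage:

    def average_power(samples, voltage=3.8):
        """Average power in watts from (timestamp_s, current_a) samples.

        voltage is the nominal battery voltage; a real analyser would sample
        the voltage too, as it varies under load.
        """
        currents = [current for _, current in samples]
        return voltage * sum(currents) / len(currents)

    # Example: a 0.25 A average at 3.8 V gives about 0.95 W.
    print(average_power([(0.0, 0.24), (0.1, 0.26)]))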

3.3 User Interface Performance measurements

User experience is an important factor in modern portable devices. The device should react promptly to end-user actions, and the reactions should be consistent. User experience can be measured using performance hooks in the SW stack, or by using a high-speed camera to record UI activity and post-processing the images to determine application behaviour. Using timestamps from instrumented SW blocks has a negative impact on device performance (see chapter 4). UX performance characteristics are determined per defined use case. A simplified camera application opening time test case as an example:

Step | Description | Expected outcome
Pre-condition | Device up and in home screen. Device camera points to imaging target. High-speed camera points to the DUT screen, with optics adjusted accordingly. |
Step 1 | Platform locates the application grid icon and taps it. | Application grid opens (verified with machine vision).
Step 2 | High-speed camera set to measurement state. |
Step 3 | Platform locates the Google Camera icon and taps it. High-speed camera triggered to capture the screen. |
Step 4 | Camera application opens. Device screen stabilised. Measurement stopped. | Camera application opening verified. Post-processing started and results analysed.
Post-condition | Robot taps the back button until the device is back in the home screen. |
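
The post-processing in step 4 boils down to finding when the screen stops changing after the tap; a minimal sketch, assuming grayscale frames from the high-speed camera and using mean absolute frame difference as the stability criterion:

    import numpy as np

    def opening_time(frames, tap_frame, fps, still_threshold=2.0, still_frames=30):
        """Seconds from the tap until the screen stabilises.

        frames: grayscale images (2-D numpy arrays); tap_frame: index where the
        tap was observed; the screen counts as stable after still_frames
        consecutive near-identical frames.
        """
        still = 0
        for i in range(tap_frame + 1, len(frames)):
            diff = np.mean(np.abs(frames[i].astype(float) - frames[i - 1].astype(float)))
            still = still + 1 if diff < still_threshold else 0
            if still >= still_frames:
                return (i - still_frames + 1 - tap_frame) / fps
        return None  # the screen never stabilised within the recording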

3.4 Aging

Aging is a common phenomenon in modern mobile devices. When an end user buys a new device, it usually works fine for half a year or so, but after that problems may start to arise: the device gets slower, it has more instabilities, and the battery runs out faster. Batteries in particular age due to their physical characteristics, whose impact increases slowly over time, but aging in the power management system also has a major effect. All of these have an impact on end-user satisfaction. Thus, finding issues that cause unacceptably fast aging is very important. A testing method called aging testing has been developed for this purpose.

Aging testing, or accelerated testing, uses aggravated usage conditions to speed up the aging process of the device under test. It is used to help determine the long-term effects of expected levels of stress within a shorter time, usually in a laboratory using controlled standard test methods (see http://en.wikipedia.org/wiki/Accelerated_aging for more information).

The purpose of aging tests is to validate that performance and reliability remain at target levels throughout the expected lifespan of the product while it is exposed to heavy and diverse end-user-like usage. The following figure depicts a typical aging test cycle.

[Figure: AgingTesCycle.png -- a typical aging test cycle]

The DUT is used in a versatile manner, as an end user would use it, for a defined period of time. This period typically represents a period of calendar time (e.g. 6 months). During the test cycle, selected performance / power consumption tests are performed at defined intervals. If there is an aging effect, it can be seen in the data series obtained from the performance and power tests. An increased incidence of instabilities as a function of time is also suggestive of an aging effect.
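
In pseudo-Python, the cycle in the figure amounts to a loop that interleaves end-user-like usage with periodic measurement rounds; the two functions passed in are placeholders for test scripts like those in sections 3.1-3.3.

    def aging_test(run_usage_hour, run_measurements, total_hours=500, interval_hours=24):
        """Alternate simulated end-user usage with periodic measurement rounds."""
        results = []
        for hour in range(total_hours):
            run_usage_hour()  # one hour of mixed, end-user-like device usage
            if (hour + 1) % interval_hours == 0:
                results.append((hour + 1, run_measurements()))  # one trend data point
        return results  # plot these series over time to expose aging effects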

3.5 Utilizing real-life user data

Device usage carried out by humans can still be utilized in robot testing. During the RATA project, software components were developed to collect user data and to utilize it in automated testing. Touch events (coordinates, timestamps) were recorded from the users' devices, and after a certain period of time a Markovian model was generated from the data by applying statistical methods. The model contained information about the user interface views that the users visited, the probabilities of transitions between them, and the transition paths, e.g. which positions of the screen were touched or swiped and when. The model can be created for a single user, or by combining data from multiple users. The generated model was then used to create robot control scripts. These scripts control the robot by imitating the actions that the users performed in the recording phase, taking the user interface transition probabilities into account.

This approach could be used, for example, when measuring the effects of device aging.
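
A minimal sketch of the idea: estimate view-to-view transition probabilities from recorded sessions, then random-walk the model to drive the robot. The data format and function names are our assumptions, not the actual RATA tooling.

    import random
    from collections import Counter, defaultdict

    def build_model(sessions):
        """sessions: lists of UI view names in the order each user visited them."""
        counts = defaultdict(Counter)
        for session in sessions:
            for src, dst in zip(session, session[1:]):
                counts[src][dst] += 1
        return {src: {dst: n / sum(c.values()) for dst, n in c.items()}
                for src, c in counts.items()}

    def random_walk(model, start, steps):
        """Sample a usage path; each step would map to a recorded tap or swipe."""
        view, path = start, [start]
        for _ in range(steps):
            nxt = model.get(view)
            if not nxt:
                break  # a view with no recorded outgoing transitions
            view = random.choices(list(nxt), weights=list(nxt.values()))[0]
            path.append(view)
        return path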

3.6 Conclusions

These conclusions are based on observations about the feasibility of robotised testing for modern mobile platforms. Based on these observations, robotised testing is very well suited for devices with the following characteristics:
  • Battery operated
  • Expected use time is several hours
  • Feature set is reasonably large
  • Complex SW architecture and power management system (clock tree controllable at a detailed level)
Robotised testing follows the same main principles as any automated testing: there needs to be a reasonable return on investment when robotising testing. Test cases suitable for robotisation:
  • Are executed often
  • Are simple
  • Are not automatable by other means (not possible, or using internal agents makes the test results unreliable)
Utilising robotics and machine vision in testing introduces capabilities that supplement conventional testing methods such as internal-agent-based automation and manual testing:
  • Man-machine collaboration that improves productivity
  • An end-user point of view in the testing focus, enabling higher end-user satisfaction
  • Accurate automated measurements with high repeatability
Robotics should be seen as a testing method that supplements conventional testing methods and as a way to increase the productivity and quality of testing.

4 Technical and monetary benefits

The technical benefits can be divided into two categories: robot testing versus human testing, and robot testing versus instrumented testing.

4.1 Robot testing versus human testing

From a technical point of view, a robot offers unquestionable accuracy and repeatability compared to manual testing carried out by humans. Humans are prone to errors, so exact adherence to the test specification cannot be guaranteed.

4.2 Robot testing versus instrumented testing

Compared to instrumentation, the benefit of robot testing is that there is no need for a specific API -- the device is used as end users use it. With instrumentation, testing might even necessitate the installation of a specific testing API that does not exist in production. An instrumented API may slow down the DUT, thus affecting the results. In contrast, a robot does not affect the device under test.

Instrumented testing, or at least data collection, can however complement testing by robots. Data collection may also be a lighter operation in terms of instrumentation load. Instrumented testing could also be used as a tool to configure robot testing, as explained in the previous chapter.

4.3 Business benefits

The Open Test Robot Platform enables easy, hands-on access to robot-assisted testing for parties who are interested in familiarizing themselves with the possibilities that robot-assisted testing can bring, without major investments and without big sacrifices to the business process. Only low-cost investments are needed compared to commercial, industrial-grade solutions.

Notable direct and indirect business benefits can be achieved by using robot-assisted testing compared to manual and instrumented testing. Direct business benefits of robot-assisted test automation:
  • Wider test option coverage brings more business opportunities
  • Improved production process speed lowers production costs
  • Faster test cycles -> delivering products to the business quicker and in better shape (time to market)
  • Possibility to increase the efficiency and satisfaction of employees working with QA and testing, by shifting testers away from monotonous duties and allocating higher-level tasks to them
  • Costs from robot-assisted test automation are more fixed and easier to predict than labour-related costs (improved cost control)
Indirect business benefits of robot-assisted test automation:
  • Even a small competitive advantage can be turned into high revenue in an extremely competitive and fast-paced high-technology market (competitive advantage)
  • Possibility to expand the testing-related offering with robot-assisted test automation (enables the business to penetrate wider markets and capture a bigger market share)
  • More flexibility and agility in the testing process (improved business agility)
Like instrumented testing, robot-based testing scales up nicely when testing volumes need to be increased. One industrial robot can handle several devices under test, which decreases the cost per tested device.

Robots do not sleep; they can work 24/7. They do not need expensive office space or furniture. Robots are mechanical devices, and they require some maintenance and care to get the most out of them. Well and timely maintained robots provide reliable and accurate test results, and the best possible return on investment is achieved.

5 Future plans

Currently, the platform is in its infancy, and making it really beneficial and usable for practitioners will require plenty of work. The testing APIs need further development, and so does the hardware adaptation, especially to entry-level robots -- while improving API compatibility with higher-level robot hardware in order to provide a growth path for companies. See the discussion about the limitations of the platform earlier in this document.

For the development we have planned two main lines of action:

First, the current consortium will plan and try to find funding for a new development / research project on robot testing. That project would allocate some of its resources to the development of this open platform. During that project we aim to fix some of the current limitations and, for example, enhance the platform to support new product technologies and features, including support for new gestures.

But the development of a platform such as this requires more than that. Large-scale collaboration is the key to speeding up the platform's development, guiding it in the right directions, growing the robot testing culture, and making this technology familiar to companies and the testing community. Therefore, we will open a global community for the platform development that welcomes all kinds of interested people and parties. The community should provide an interesting environment for robotics hackers, test automation professionals, robot hardware vendors and companies who are looking into this kind of testing solution. It should provide a space for developing both the software platform and its robot hardware adaptations -- and whatever innovations start from those. Growing a community will take time and effort, and because of that we need some external funding from the planned development project to get the community started and active.

However, even if those plans do not work out for some reason, the platform will receive smaller-scale development at least for education and research purposes, and the platform source code and other resources will remain in open repositories for any practitioner to use.

APPENDIX: Links to documents about robot assisted testing

For more information about our developments in the testing robot area, see the publications page of the RATA project at https://wiki.tut.fi/RATA/PublicationsAndDownloads. There you will find reports about the application of robot-assisted testing, the contents and design principles of such systems, and also support for test modeling.