
TaaaS.net – Evaluation

2013 June 13
by jonathon.wright

A framework for evaluating automation-based solutions against the objectives and goals of automation:

Automation Goals

  • Maintainable – reduce test maintenance effort through the use of a self-maintaining test asset loader/scraper;
  • Effective – self-validating test assets achieved using natural language with context-sensitive validation against business and testing rules, workflows and data;
  • Relevant – clear traceability of the business value of Automation through the visualisation of tests via Business Process Modelling (BPMN v2.2 compliant);
  • Reusable – a unified platform on which non-domain experts can use natural language to represent business processes and user-story acceptance criteria;
  • Manageable – reports on system under test (SUT) health, including ratings such as percentage availability since build/release, reported errors over time and traffic-to-error ratio;
  • Accessible – to enable collaboration on concurrent design and development;
  • Robust – object/event/error handling with built-in fault tolerance to report and continue, based on different levels of fuzzy matching combined with a non-technology-specific test definition language (see the sketch after this list);
  • Portable – technology agnostic (platform, client/component, browser, version & language) and test-type agnostic (smoke, regression, integration & performance);
  • Reliable – to provide fault tolerance over a number of scalable test agents;
  • Diagnosable – actionable defects provided by environment under test (EUT) live pause/playback, supported by dynamic data adapters (DDA) for accelerated defect investigation and resolution;
  • Measurable – provide a testing dashboard along with customisable reporting.
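
As an illustration of the Robust and Maintainable goals, the following minimal Python sketch shows how fuzzy matching can locate a UI object whose caption has changed between builds, reporting a warning and continuing rather than failing the run. The candidate objects, property names and the 0.75 threshold are assumptions made purely for this example, not part of any specific tool.

```python
from difflib import SequenceMatcher

# Candidate UI objects as exposed by a hypothetical test asset loader/scraper
# (the property names and values here are illustrative only).
candidates = [
    {"id": "btnSubmitOrder", "caption": "Submit Order", "class": "Button"},
    {"id": "btnCancel",      "caption": "Cancel",       "class": "Button"},
    {"id": "txtCustomer",    "caption": "Customer",     "class": "Edit"},
]

def similarity(a, b):
    """Similarity ratio in [0, 1] between two property values."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def find_object(descriptor, threshold=0.75):
    """Return the best-matching candidate above the threshold, or None.

    Each property in the descriptor contributes equally to the score, so a
    renamed caption or a tweaked identifier degrades the match gracefully
    instead of breaking the test outright.
    """
    best, best_score = None, 0.0
    for obj in candidates:
        scores = [similarity(str(obj.get(key, "")), str(value))
                  for key, value in descriptor.items()]
        score = sum(scores) / len(scores)
        if score > best_score:
            best, best_score = obj, score
    return (best, best_score) if best_score >= threshold else (None, best_score)

if __name__ == "__main__":
    # The caption changed from "Submit Order" to "Submit order now" in a new
    # build; exact matching would fail, fuzzy matching reports and continues.
    target, score = find_object({"caption": "Submit order now", "class": "Button"})
    if target is None:
        print(f"WARN: no object matched (best score {score:.2f}); step skipped")
    else:
        print(f"Matched {target['id']} with score {score:.2f}")
```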

Score Card

  • Platform Support – support for multiple operating systems, tablets & mobile devices;
  • Technology Support – “multi-compiler” vs. “compiler-specific” test tools;
  • Browser Support – Internet Explorer, Firefox, Google Chrome or any other browser based on web browser controls;
  • Data Source Support – obtain data from text and XML files, Excel worksheets and databases such as SQL Server, Oracle and MySQL (see the sketch after this list);
  • Multi-Language Support – localized solutions supporting Unicode;
  • Test Type Support – functional, non-functional and unit (e.g. NUnit & MSTest);
  • Test Approach Support – e.g. Hybrid-Keyword/Data-Driven testing;
  • Results & Reporting Integration – including images, files, databases, XML documents;
  • Test Asset / Object Management – map an object not only by its caption or identifier;
  • Class Identification – gap analysis of object classes (generic / custom) and associated method capabilities based on complexity, usage, risk, feasibility and re-usability;
  • Test Scenario Maintenance – manual effort (XPath/regular expressions), self-maintaining (descriptive programming/fuzzy logic) or scriptless (DSLs);
  • Continuous Build & Integration / Delivery Integration – integration with the build & delivery solution;
  • Future-proofing – external encapsulation of test assets & associated metadata (XAML/XPDL), expandability (API/DLL/.NET), HTTP/WCF/COM/WSD and OCR/IR;
  • License, Support & Maintenance Costs – pricing policy along with any hidden costs.
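
As a rough illustration of the Data Source Support criterion, the sketch below hides CSV, XML and SQLite sources behind a single generator, so a data-driven test can switch sources without changing the test itself. The file names, the XML layout and the test_data table name are assumptions made for the example; Excel, SQL Server, Oracle and MySQL adapters would follow the same pattern.

```python
import csv
import sqlite3
import xml.etree.ElementTree as ET
from pathlib import Path

def load_rows(source, query=None):
    """Yield test-data rows as dictionaries from a CSV file, an XML file or a
    SQLite database, so the test layer never cares where the data lives."""
    path = Path(source)
    if path.suffix == ".csv":
        with path.open(newline="") as handle:
            yield from csv.DictReader(handle)
    elif path.suffix == ".xml":
        # Expects <rows><row attr="value" .../></rows>-style documents.
        for element in ET.parse(path).getroot():
            yield dict(element.attrib)
    elif path.suffix in (".db", ".sqlite"):
        connection = sqlite3.connect(str(path))
        connection.row_factory = sqlite3.Row
        try:
            for row in connection.execute(query or "SELECT * FROM test_data"):
                yield dict(row)
        finally:
            connection.close()
    else:
        raise ValueError(f"Unsupported data source: {source}")

if __name__ == "__main__":
    # 'orders.csv' is a placeholder file name used purely for illustration.
    for row in load_rows("orders.csv"):
        print(row)
```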

Test Approach Support

This can also be referred to as the test automation framework, or the generation of testware, that is going to be used; a minimal sketch is given at the end of this section.


The Test Approach cross-reference chart is shown below:

[Figure: Test Approach cross-reference chart]

Taken from the “Hybrid Keyword Data Driven Framework” presented at ANZTB in 2010, updated with “Test Automation As A Service” presented at STARWest in 2012.
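
To make the hybrid keyword/data-driven approach concrete, here is a minimal Python sketch rather than the framework presented above: test cases live as rows of data, and a small interpreter maps each keyword to an action, reporting and continuing on failure. The keyword names, the in-memory CSV table and the print-only actions are assumptions made purely for illustration; in practice the rows would come from an Excel sheet or database and the actions would drive a UI or API.

```python
import csv
from io import StringIO

# Keyword library: each keyword maps to a plain Python action.  In a real
# framework these would drive a UI or API; here they only print, so the
# control flow of the interpreter is the point, not the actions themselves.
def open_application(target, _value=None):
    print(f"open application '{target}'")

def enter_text(target, value):
    print(f"type '{value}' into '{target}'")

def click(target, _value=None):
    print(f"click '{target}'")

KEYWORDS = {
    "OpenApplication": open_application,
    "EnterText": enter_text,
    "Click": click,
}

# Keyword test cases stored as data (normally an Excel sheet or database
# table; an in-memory CSV keeps the sketch self-contained).
TEST_TABLE = """keyword,target,value
OpenApplication,OrderEntry,
EnterText,CustomerName,ACME Ltd
EnterText,Quantity,3
Click,SubmitOrder,
"""

def run(table):
    """Interpret each row of the keyword table and report pass/fail per step."""
    results = []
    for step in csv.DictReader(StringIO(table)):
        action = KEYWORDS.get(step["keyword"])
        if action is None:
            results.append((step["keyword"], "FAIL: unknown keyword"))
            continue
        try:
            action(step["target"], step["value"] or None)
            results.append((step["keyword"], "PASS"))
        except Exception as error:  # report and continue on failure
            results.append((step["keyword"], f"FAIL: {error}"))
    return results

if __name__ == "__main__":
    for keyword, outcome in run(TEST_TABLE):
        print(f"{keyword:<16} {outcome}")
```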