Overview
Over the past couple of decades, product manufacturers have increasingly added instrumentation to everyday items: coffee makers, light switches, and garage door openers, to name a few. Alongside the addition of digital instrumentation to household items, there has also been a recent explosion of small, internet connected smart devices introduced to the consumer market, collectively known as the 'internet of things'. Together, these two trends have contributed to a massive increase in the amount of software written for deployment on embedded systems. Recent studies have found that over 60 percent of embedded software projects overrun their schedules, even when more than 50 percent of their overall time is dedicated to testing. (Saini, 2012) This is troubling given the advancements made in software testing over the last decade, and it is clear that significant effort is needed to address the deficiencies of software testing as it pertains to the specific difficulties encountered during embedded software development.
Embedded systems, by their nature, impose many constraints, such as being highly memory and performance constrained, which make more traditional software design and testing approaches less effective in producing software of guaranteed quality. As well as being a less suitable target for many testing and design methodologies, a significant portion of embedded systems are deployed in life or safety critical situations; for instance, embedded software can be found in motor vehicle control systems as well as in safety devices such as smoke detectors. In these applications embedded systems are required to eliminate any single points of failure and to minimize multipoint failures, as such failures can be the direct cause of loss of life or critical injury. (Sangiovanni-Vincentelli & Martin, 2001) This paper provides a brief overview of six methodologies and strategies that can be used to address these particular shortcomings. Although most of the ideas presented here can be applied to software development in general, this document analyzes them specifically as they apply to embedded software development.
Tools & Strategies
Several tools and strategies may be used throughout the embedded software lifecycle to facilitate verification and testing of the system. Each tool or strategy has inherent strengths and weaknesses, which are discussed in this section.
Hardware Debugging
Most modern embedded system-on-chip (SoC) devices support some form of hardware based debugging. Generally, hardware debugging allows a host development machine to interface directly with the SoC hardware to control the software currently deployed on the device. This allows the software programmer or system tester to control the execution of the software in ways similar to traditional software debugging, providing the ability to pause, step, or resume execution at specified points. (Tizer & Palsberg, 2005) Devices that support more advanced hardware debugging features also allow values stored within the device's SRAM or working registers to be modified from the host machine during execution. Hardware debugging therefore allows programmers and system testers to perform white box testing on the software running in its target environment. White box testing on the target environment is important because many software issues, particularly ones involving hardware interfacing, may be difficult if not impossible to detect during testing outside of the target hardware environment. Using hardware debugging early in the development cycle as a software testing tool also helps to verify the correct operation of the hardware platform, and may uncover design errors earlier in the product development cycle that would otherwise go uncaught until much later, when they are much more costly to rectify.
Although hardware debugging is a useful tool to aid programmers with white box testing, it has several major drawbacks. First, for hardware debugging to be possible, the developer must have access to the finalized hardware platform throughout the software development cycle, which in many cases is not possible because the target hardware for the project is designed in parallel with the software that will ultimately run on it. The second major issue with hardware debugging is that it is prohibitively time consuming. Hardware debugging requires the developer to compile the software using the manufacturer's tool chain, upload the resulting binary to the device, initiate execution of the software using a proprietary hardware peripheral, and connect a software debugger backend such as GDB. This adds several time consuming steps to the develop-debug cycle, with the compile and upload steps potentially taking several minutes to complete, depending on complexity. Along with the additional time required to complete a code, compile, upload, debug, change cycle, hardware debugging typically requires the programmer to maintain two separate builds of the software, as most compilers do not preserve the complete debugging information necessary for hardware debugging when the software is compiled at the highest optimization settings. (Tizer & Palsberg, 2005) As a result, the programmer must generally maintain two builds of the software: an optimized production version and an un-optimized debug version.
Another issue that tends to arise during hardware debugging is the so called 'Heisen-bug' problem. While testing and debugging, it is common for the developer to insert numerous printf function calls so that the state of the software can be observed externally on a connected terminal device. Interlacing printf calls throughout the source code has a tendency to modify the software's timing and behavior enough that it may induce bugs in the system that appear and disappear depending upon the number and timing of the printf function calls. (Tizer & Palsberg, 2005)
As well as enabling direct debugging of software running on the target hardware, hardware debugging can also be used to instrument and test other facets of the system. Hardware debugging can be used to profile the entire system, or individual software components, in aspects such as performance, timing, loop iterations, or variable access counts. This type of software testing can be very useful to ensure that the system meets its performance requirements in 'real-world' scenarios.
Hardware Simulation
Hardware simulation provides an alternative to executing the software directly on the target hardware by providing a software simulation of that hardware. Hardware simulation provides similar functionality to hardware debugging, but it has its own set of strengths and weaknesses. First, hardware simulation removes the requirement for complete physical hardware to be available to the developers during development and testing of the software. Although the system hardware need not be physically available to the developer, the hardware platform for the project must still be selected, as hardware simulators are specific to a platform or family of hardware devices. Because simulators are closely tied to the hardware specific features and instruction sets of the devices they simulate, vendor provided simulators typically exist only for popular off-the-shelf devices or device families. Based on this restriction alone, hardware simulation is best suited for projects that utilize off-the-shelf devices, as projects that employ either custom or non-mainstream devices would require the developer to implement a simulator for the device as well as the project software. Another inherent issue with hardware simulation lies in the fact that embedded systems are typically not standalone devices; rather, they are usually part of a larger system that requires interactions and inputs from external sensors and devices, which are difficult, if not impossible, to simulate.
Cross-Compilation
Software written for embedded devices is typically written in a C-derived language, such as C++ or plain C. Cross-compilation is both a set of tools and a strategy that exploits the fact that C-derived languages are very portable and can be compiled for nearly any hardware architecture or operating environment. Cross-compilation requires that compilers be available that produce native binaries for both the target hardware and the developer's host machine architecture; this allows the software to be compiled for, and run natively on, the developer's host machine. Compiling the software for the host machine allows it to be tested in an environment where memory and processor resources are not constrained, and as a result a full featured test harness can be used to implement a comprehensive set of tests that could not be implemented to run on the target hardware itself.
Compiling software intended to run on a very specific device or family of devices requires mindful planning and programming, as device specific features and function calls are not likely to be available on the developer's host machine. In order for the software to compile and run cleanly on the host machine, the software must be constructed such that all low level hardware specific interfacing and function calls are abstracted into isolated modules. Abstracting the hardware functions allows mock objects to stand in for the hardware specific modules when compiling for testing on the host machine. Using mock objects also allows further instrumentation of the system under test, as all calls to the modules replaced by mock objects can be monitored to ensure that calls made to the hardware interfaces occur when expected and are provided with the expected values.
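As a minimal sketch of this idea, assuming a hypothetical GPIO interface (none of the names below are taken from a real project, and the target-side implementation is omitted), the hardware specific functionality can be hidden behind an abstract C++ class; the target build supplies a thin wrapper around the vendor's registers, while the host build substitutes a mock that records every interaction:

    #include <utility>
    #include <vector>

    // Hypothetical hardware abstraction; application code depends only on this.
    class IGpio {
    public:
        virtual ~IGpio() {}
        virtual void setPin(int pin, bool high) = 0;
        virtual bool readPin(int pin) = 0;
    };

    // Host-side mock: records every write so tests can verify interactions.
    class MockGpio : public IGpio {
    public:
        std::vector<std::pair<int, bool>> writes; // every (pin, level) written
        bool nextRead = false;                    // value readPin() will return

        void setPin(int pin, bool high) override { writes.push_back({pin, high}); }
        bool readPin(int) override { return nextRead; }
    };

    // Application logic: compiles unmodified for host or target because it
    // never touches hardware registers directly.
    void pulsePin(IGpio& gpio, int pin) {
        gpio.setPin(pin, true);
        gpio.setPin(pin, false);
    }

A host test can then call pulsePin with a MockGpio and assert that exactly two writes were made with the expected pin number and levels, something that would be far harder to observe on the target itself.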
A beneficial side effect of designing the system to be cross-compilable is that the hardware specific code is entirely abstracted from the application logic, producing code that is much more modular and, by extension, highly portable. Producing portable code is desirable for several reasons; most notably, it allows the software to be much more easily ported to run on a different device or device family than the one(s) for which it was originally intended. Although cross-compilation benefits the developer and the quality of the software produced, it does have several drawbacks. Like hardware debugging, cross-compilation may suffer from the 'Heisen-bug' effect, where bugs that are present on either the target hardware or the developer's machine may not be present on the other. In this case the 'Heisen-bug' issue can be attributed to any number of differences between the compilers for each platform, as well as the standard libraries that the binary is linked against. (Saini, 2012) One way to mitigate this issue is to ensure that the software is compiled often with the target device's compiler. (Kim, 2008)
Due to its numerous benefits to the developer, the development process, the testing process, and the level of modularity it induces in the software being developed, cross-compilation is a strategy that should be utilized in every project that is eligible to do so. It should be noted, however, that while cross-compilation is useful for testing the application logic, it cannot be used to directly test any hardware specific interfaces.
Methodologies
Platform-Based Design
Platform-based design builds upon the ideas surrounding modularization presented in the Cross-Compilation section of this document. Platform-based design defines a set of levels of abstraction, each of which encompasses a specific layer of system development and defines mappings to and from the layers above and below it. The goal in defining these platforms is to provide a deterministic way to map a set of system constraints and requirements to a specific set of reusable software and hardware instances or components, helping to ensure the validity of system design choices. Platform-based design also serves to ensure that individual system concerns are orthogonalized, so that implementation and communication between layers are cleanly decoupled from each other. (Sangiovanni-Vincentelli & Martin, 2001) Ensuring that each component is completely separated from all others helps to ensure that each platform is entirely modular and that each platform instance can be easily switched out for another without affecting the overall system. As it applies to embedded systems, platform-based design defines two generic platforms, the architecture platform and the API platform; combined, these two are known as the system platform. The architecture platform comprises the set of hardware components, either discrete or integrated, such as programmable cores, instruction sets, memory, and I/O subsystems, and their characteristics, such as size, power consumption, and performance. The API platform wraps the architecture platform in a high level abstraction upon which the application software can be built. It is worth noting that the higher the level of abstraction the API platform provides over the architecture platform, the greater the number of architecture platform instances it will encompass. Although increasing the number of underlying platform instances covered by the API platform can be advantageous in some cases, it does complicate choosing a single optimal instance. (Sangiovanni-Vincentelli & Martin, 2001)
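As a purely illustrative sketch (the service names below are hypothetical and not drawn from the cited work), the API platform can be thought of as a small, stable interface that the application layer is written against, while each candidate architecture platform instance supplies its own implementation of that interface in a separate file:

    #include <cstdint>

    // Hypothetical API platform: the only surface the application layer sees.
    // Each architecture platform instance (an ARM Cortex-M part, an 8-bit
    // microcontroller, or a host simulator) provides its own definitions of
    // these functions; the application never changes when the instance does.
    namespace api_platform {
        void sleep_ms(std::uint32_t ms);                  // timing service
        std::uint16_t read_sensor(std::uint8_t channel);  // analog input service
        void send(const std::uint8_t* buf, std::uint32_t len); // comms service
    }

    // Application code written purely against the API platform.
    void sampleAndReport() {
        std::uint16_t value = api_platform::read_sensor(0);
        std::uint8_t msg[2] = { static_cast<std::uint8_t>(value >> 8),
                                static_cast<std::uint8_t>(value & 0xFF) };
        api_platform::send(msg, sizeof msg);
        api_platform::sleep_ms(1000);
    }

Choosing a concrete architecture platform instance then amounts to linking a different implementation of these three functions, leaving the application layer, and any tests written against it, untouched.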
Although platform-based design fosters a mindset of complete modularization during the design of a system, and provides a deterministic way of choosing underlying platform components from higher level platform abstractions that verifies design choices, it does not directly facilitate the testing of software code. Platform-based design does, however, through its enforcement of the modularization and orthogonalization of individual software components, facilitate thorough application software testing; without these strict levels of modularization, the application code could be deeply intertwined with pieces of code from the API platform, which is very difficult to test.
Object Oriented Design
When designing a software system, one of many paradigms for structuring code could be adopted, such as functional, procedural, or object oriented. If the software to be developed is targeted to run on a standard x86 architecture processor, the system designer may choose from many languages and supported paradigms to best suit the requirements of the system. However, for software targeted to run on an embedded device, object oriented design is the best approach to utilize. (Saini, 2012) Object oriented design is best suited for use in embedded environments for several reasons. First, object orientation allows the hardware dependent code necessary for the application software to interface with the embedded device's peripherals and processor components to be cleanly and completely isolated from the application code. The advantages of this include greatly improved application code reusability and cleaner design and implementation of application code, but most importantly it improves testability. By adopting an object oriented design methodology and isolating application code from hardware dependent interface code, the application software can be tested directly, independent of the hardware on which it is intended to run. If object oriented design were not used, the application code would have hardware dependent code woven throughout it, forcing all testing to be done on the target hardware, which, as indicated in previous sections of this document, can be slow, cumbersome, and error prone; it also forces hardware and software to be debugged in tandem, further complicating testing and bug fixing.
The second way that adopting object oriented design benefits software testing is that any device, other than the main hardware processor itself, can at any time be replaced in code with a mock object. A mock object is an object that adopts the same interface as the object it is replacing, or mocking, but whose behavior can be precisely controlled and whose interactions can be easily monitored. This behavior enables object and module interactions to be tested and verified in a rather simple, non-intrusive way. Mock objects can also be used during the development process to stand in for devices external to the main processor that may not yet be available, or that have behaviors that are particularly difficult to control; hence the use of object oriented design, and mock objects in particular, can help the system be developed at a much higher rate than it otherwise would be. (Cordeiro, et al., 2008)
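The following hypothetical example (the class names and the 65 degree setpoint are invented for illustration) shows how a small piece of application logic can be verified entirely off-target by handing it a mock in place of the real heater driver:

    #include <cassert>

    // Hypothetical heater interface; the real object would drive an output pin.
    class IHeater {
    public:
        virtual ~IHeater() {}
        virtual void enable(bool on) = 0;
    };

    class MockHeater : public IHeater {
    public:
        int calls = 0;          // how many times the controller touched the heater
        bool lastState = false; // last commanded state
        void enable(bool on) override { ++calls; lastState = on; }
    };

    // Application logic under test: keep the heater on below the setpoint.
    class TemperatureController {
        IHeater& heater_;
        double setpoint_;
    public:
        TemperatureController(IHeater& h, double setpoint)
            : heater_(h), setpoint_(setpoint) {}
        void update(double measuredC) { heater_.enable(measuredC < setpoint_); }
    };

    int main() {
        MockHeater heater;
        TemperatureController ctrl(heater, 65.0);

        ctrl.update(60.0); // below setpoint: heater must turn on
        assert(heater.calls == 1 && heater.lastState);

        ctrl.update(70.0); // above setpoint: heater must turn off
        assert(heater.calls == 2 && !heater.lastState);
        return 0;
    }

Because the mock counts calls and remembers the last commanded state, the test verifies not just the controller's outputs but that it interacted with the heater interface exactly as expected.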
A case study presented by Lewis Sykalski directly compared two software systems designed to complete the same task, one implemented procedurally and the other using object oriented principles. The case study compares several facets of the systems, including performance, memory efficiency, reliability, modifiability, and testability. In his findings, Sykalski reports the cyclomatic complexity of the procedural system as 147, whereas the cyclomatic complexity of the object oriented system was found to be only 17. (Sykalski, 2014) Since cyclomatic complexity relates directly to basis path testing, we can infer that the procedural code would require, at minimum, 147 test cases to cover a complete basis path set, whereas the object oriented system would require only 17 test cases. This result reflects an approximately 8.6-fold improvement in general system testability.
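For reference, cyclomatic complexity is computed from a routine's control flow graph as V(G) = E - N + 2P, where E is the number of edges, N the number of nodes, and P the number of connected components (1 for a single routine); V(G) equals the number of linearly independent paths through the code, which is why it also gives the minimum number of test cases required to exercise a complete basis path set.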
We can conclude that object oriented design should be utilized in all software projects targeted to run on embedded platforms, as it promotes modularization and code reuse, through which it enables more thorough testing of object interactions, also known as integration testing.
Aspect Oriented Design
Aspect orientation is an architectural design approach that enables the modularization of aspects of a system that would otherwise be intertwined throughout numerous system modules. (Kim, 2008) In traditional object oriented design, system components are modularized into units known as objects, which contain all of the functionality and data required for that component; however, it is typical for some properties, or aspects, of a system, such as logging or resource management, to be required by multiple modules, and because they are required by numerous modules they cannot be cleanly modularized using a pure object oriented approach. Aspect oriented design extends object oriented design by providing a methodology that enables these system wide aspects to be modularized. To achieve this, aspect orientation defines four new constructs: aspects, join points, pointcuts, and advice. An aspect is the container that wraps the functionality of a property that 'crosscuts' several code modules; an aspect is akin to an object in object oriented programming. A join point is a well-defined point in the execution of a method, whereas a pointcut is a collection of join points that match a set of predefined conditions. Lastly, advice is simply the functionality to be inserted at each of the join points matched by the pointcut; depending on the aspect framework being utilized, advice can typically be inserted either before or after the matched join point.
Using aspect orientation, we can improve testing at the unit, integration, and system levels by creating a trace aspect that unobtrusively observes preconditions and postconditions within the target testing level during software execution and generates a log file of observed events. This log of observed events and conditions can then be used, after execution of the software has completed, to compare the observed order and conditions of execution at each join point against those expected, as defined in the test cases within the software test suite. Although testing based on this methodology is very similar to the developer simply lacing the source code with printf statements and manually verifying the output, aspect orientation provides several key advantages. First, aspect orientation does not require the developer to manually insert printf statements throughout the code; instead, a single logging or tracing aspect can be implemented, which can match any number of join points automatically, greatly simplifying code maintenance. Second, aspects can be used in conjunction with a unit testing framework to automatically execute tests and determine the validity of the system under test, or SUT, based on the defined system requirements.
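A minimal sketch of such a trace aspect in AspectC++ syntax might look like the following; the TemperatureController class being traced is hypothetical, and the exact join point API can vary between versions of the weaver:

    #include <cstdio>

    // Trace aspect: matches execution of every member function of the
    // (hypothetical) TemperatureController class and logs entry and exit
    // without modifying that class's source code.
    aspect TraceAspect {
        pointcut traced() = execution("% TemperatureController::%(...)");

        advice traced() : before() {
            std::printf("enter: %s\n", JoinPoint::signature());
        }
        advice traced() : after() {
            std::printf("exit:  %s\n", JoinPoint::signature());
        }
    };

The ac++ weaver inserts the advice bodies at every matched join point, and the resulting log can then be compared, after the run, against the execution order expected by the test cases.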
Although aspect oriented programming shows promise as a method to help modularize system code and unobtrusively test it, it does suffer from a few drawbacks. First, aspect orientation itself is a relatively immature approach, especially when compared with object orientation, and the toolsets used to implement aspects reflect this fact. Pesonen et al. note that when evaluating AspectC++, a C++ aspect orientation framework, as a tool to help test the Symbian embedded OS, weaving a single tracing aspect into their code nearly doubled the size of the compiled binary. (Pesonen, Katara, & Mikkonen, 2006) They also noted that the AspectC++ compiler, or weaver, had severe performance issues, such that weaving even a small amount of source code took a significant amount of time on a regular developer workstation. It is worth noting that the Symbian OS is a software system orders of magnitude larger than the average modern embedded system; at the time of writing, the Symbian OS has been open sourced by Nokia Inc. (the source dump can be found on SourceForge at http://sourceforge.net/projects/symbiandump/), and the zipped source code of the OS weighs in at over 500 MB. Both of these issues can be directly attributed to the relative immaturity of the methodology and its toolsets. (Pesonen, Katara, & Mikkonen, 2006)
More recent case studies, such as the one presented by Metsa et al., show that aspect oriented testing can be successfully used to reduce the time and effort required to complete testing of an embedded system. The system they discuss is used on assembly lines of embedded devices: the software is deployed to each device in order to verify the quality of manufacture of that particular device. The overall system software in this case was approximately 200K lines of code, with a compiled size of 100KB. (Metsa, Maoz, Katara, & Mikkonen, 2013) Using aspects to test one of the major subsystems of the software, which comprised approximately 25% of the overall system, they were able to implement complete, robust testing of the subsystem with only 174 lines of test code. (Metsa, Maoz, Katara, & Mikkonen, 2013) This amounts to approximately 174 lines of test code to 50,000 lines of application code, or a ratio of 1:287 (0.0034 in decimal). This is a significant achievement, as according to TDD testing guidelines from the European Organization for Nuclear Research (CERN), used in the development of software for their Large Hadron Collider (LHC) project, the target ratio should be approximately 1:1 (or 1.0 in decimal) to achieve acceptable code coverage. (Lambert, 2014)
Based on the information available on the subject, we can conclude that aspect oriented design will most certainly become commonplace at some point in the future, once the toolsets used to implement designs based on this paradigm mature enough to deliver all of the features promised by the methodology itself. However, aspect oriented tools are mature enough at this point in time that we can recommend aspect orientation for the implementation of crosscutting concerns and testing on smaller embedded software projects, such as those that employ small 8-bit microcontrollers and low powered 32-bit ARM type microprocessors. Due to its nature, aspect oriented testing is best suited for black box type testing; more specifically, it can serve as a direct replacement for black box testing that would otherwise require the developer to manually instrument the source code.
Conclusions
Developing quality software for an embedded device presents a unique set of challenges over and above those normally faced when developing software for a more robust, higher powered system. The constraints placed on both the software and its testing process require special consideration in order to achieve a thoroughly tested, quality end product. We have presented and analyzed six different approaches to overcoming specific shortcomings experienced during both the development and the testing of software targeted at low power embedded devices, each with its own set of merits and shortcomings. Overall, it is clear that no single approach or method alone is capable of ensuring that quality software is produced; rather, each individual project should be analyzed in detail to determine the requirements of the particular product, and a combination of the discussed techniques employed that enables the necessary level of test coverage. In general, we can recommend that some methods, like object orientation, always be utilized, that other strategies, like cross-compilation, be employed in every project where it is possible to do so, and that others, like aspect oriented design, be used only in small projects or when they have the potential to vastly improve the design, implementation, modularization, and maintenance of the software code.
Topic Motivation
The motivation for the choice to analyze this topic comes from two areas. First is the recent proliferation of devices collectively known as the 'internet of things' into our daily lives. These devices have quickly become commonplace, everyday items for the majority of people living in the developed world. Some of these devices are relatively benign, such as an activity tracking wristband (e.g., the Fitbit product line), but a large portion of them can unquestionably be considered safety critical, such as devices capable of controlling and monitoring AC power in our homes (e.g., Belkin Wemo devices) or smart coffee/espresso makers, some of which control several pounds of high temperature steam, enough to severely burn or injure a human operator. These safety critical 'internet of things' devices are a great achievement in the pursuit of having technology empower and better our daily lives; at the same time, the software that powers these devices must be thoroughly tested in order to prevent serious human injury or death due to a simple coding error.
The second, and largest, part of the motivation for this topic is my own personal experience with, and contributions to, an ambitious open source project. The project, called BrewTroller (www.oscsys.com/projects/brewtroller), is an open source software project that utilizes a custom built, Arduino derived embedded processor to instrument and automate beer brewery equipment. I am myself an avid home-brewer with a relatively complicated set of equipment, all driven by BrewTroller. In the last couple of years I have attempted to add a large amount of new functionality to the project, allowing for external control of the system using either a web based application or an iOS based touch device. It was during this time that I began to realize how poorly architected the software was (almost the entire code base was designed procedurally, with nearly all variables declared at global scope), and, in large part due to this architecture, the software was completely untested. This is troubling because the software is very much safety critical: many of the systems it controls have upwards of 20 pounds of steam pressure, as well as hundreds of gallons of boiling liquids, large natural gas or propane fired heaters, and high voltage electrical heaters and components. Over the last half a year or so, I have been working towards rebuilding the entire software system from the ground up to solve these issues, and this paper was the perfect excuse to continue my in-depth research on how to solve them.