Bringing the power of machine learning to product design

Published on: 20th November 2017

Machine learning is now being used to help construct smarter algorithms to control everyday products.
Over the past few years, DCA has expanded its use of Matlab (a tool for mathematics, data analysis, simulation and modelling) to explore and inform our work. As this machine learning example shows, our use of Matlab is not only improving the quality of the designed product but also optimising the design process.

INTERPRETING SENSOR DATA IN CONNECTED DEVICES

In recent years, we have seen a plethora of low cost sensing technologies become more readily available to the electronics hardware engineer. The demand for such technologies is linked to the demand for connected devices, facilitated by the rapid proliferation of low power Bluetooth, WiFi, and NFC. These wireless technologies can be combined with microprocessors and sensors in integrated circuit packages, making ultra-compact, ultra-low power Internet of Things products a reality.  

The attractive volume price, low power consumption and compact packaging of these systems-on-chip make it possible to add significant value and functionality to existing devices. The temptation with such technology is to connect everything to everything else, which can create a wealth of problems including security issues and spiralling over-complexity. It is down to us, as designers of electronic products, to ensure that such products are not burdened with unnecessary features and frustrating user experiences.

Thus, efficiently managing and interpreting the large quantities of data available to a sensor-enabled device can be a major challenge. DCA has employed machine learning techniques on recent projects to develop algorithms that run on low power embedded microcontrollers, where the precise relationship between the raw sensor data and the required operation of the device is not clear-cut. Key to this is the ability of modern low power microcontrollers to do large amounts of floating-point arithmetic very quickly.

A ‘SMART’ EXAMPLE

Handheld tools are just one example where the addition of some form of ‘intelligence’ to the tool can improve overall system performance. In specialist cases, it is advantageous if the function and performance of precision handheld tools change based on their position, motion and orientation (think surgical tools or scientific instruments). However, given complex contexts of use, it is typically not possible to infer the correct tool performance and function from a simple algorithm. The user experience may be greatly improved by increasing the number of sensory inputs and by developing more sophisticated algorithms.

As the number of sensors increases, however, so does the difficulty of writing an algorithm that responds correctly to all of their inputs, and this very quickly becomes an incredibly time-intensive task. A single device may have accelerometers, pressure sensors, temperature sensors and more, all of which need to be interpreted together.

Machine learning is one way not only of speeding up the algorithm-writing process, but also of creating algorithms that go beyond what would be possible without such an approach. A machine learning process can capture nuances of a particular system that a more traditional method would miss.

There are also pitfalls. One is over-fitting, where the algorithm learns the noise and quirks of the training data rather than the underlying pattern, so it performs well on the data it was trained on but poorly on anything new. With some types of algorithm it is also possible to overlook the best solution by settling in what is known as a local maximum or minimum: one algorithm configuration is homed in on too early, and another, better, solution elsewhere in the parameter space is missed.

In these circumstances, the machine learning capabilities of Matlab can be utilised. Matlab’s Statistics & Machine Learning Toolbox allows us to explore the parameters of a classification algorithm, manage our data to mitigate the aforementioned pitfalls and train an algorithm, before embedding the result in our product.

In essence, the process starts in reverse – the sensors are all activated and the product is placed in the location and orientation (with the correct motion) for a given activity. Data is collected, recorded and labelled to indicate the situation in which it was captured. The process is then repeated for each of the different tasks that the device will be required to distinguish between, creating training data for the classification algorithm.
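As a minimal sketch of this labelling step (the file names, sensor set and activity label here are invented for illustration, not taken from a real project), each recording session can be reduced to a matrix of feature vectors tagged with the activity being performed:

    % Sketch: build a labelled training set in Matlab.
    % File names, sensors and labels are hypothetical examples.
    accel    = readmatrix('session1_accel.csv');    % N x 3 accelerometer samples
    pressure = readmatrix('session1_pressure.csv'); % N x 1 pressure samples
    temp     = readmatrix('session1_temp.csv');     % N x 1 temperature samples

    X = [accel, pressure, temp];           % one 1x5 feature vector per row
    Y = repmat("cutting", size(X, 1), 1);  % label every row with the activity

    % Repeat for each activity, then concatenate into one training set:
    % X_all = [X_cutting; X_polishing; ...];
    % Y_all = [Y_cutting; Y_polishing; ...];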

The next step is to use Matlab’s Statistics & Machine Learning Toolbox to determine the parameters of the classification algorithm, using the collected training data as input. Picking these parameters would normally be a labour-intensive process, but the toolbox typically allows different algorithms and adjustments to be compared in a matter of minutes.
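A hedged sketch of what that comparison can look like in code, continuing from the X_all and Y_all assembled above (the particular classifiers and options are illustrative; the toolbox’s Classification Learner app offers a similar comparison interactively):

    % Compare candidate classifiers with five-fold cross-validation,
    % which also helps guard against the over-fitting described earlier.
    tree = fitctree(X_all, Y_all, 'CrossVal', 'on', 'KFold', 5);
    knn  = fitcknn(X_all, Y_all, 'NumNeighbors', 5, ...
                   'CrossVal', 'on', 'KFold', 5);
    svm  = fitcecoc(X_all, Y_all, 'CrossVal', 'on', 'KFold', 5);  % multiclass SVM

    % Cross-validated misclassification rate for each candidate.
    fprintf('tree: %.3f  knn: %.3f  svm: %.3f\n', ...
            kfoldLoss(tree), kfoldLoss(knn), kfoldLoss(svm));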

The classifier must now be validated. In the first instance, the training data collected at the start of the process can be used, but success here only confirms that the algorithm has fitted that data. The real test is to repeat the process with new data that played no part in training the algorithm.
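As an illustrative sketch of that final check (X_test and Y_test are assumed to be a fresh recording session, unseen during training):

    % Train a final model on the full training set, then test it on
    % held-out data that was never used during training.
    model  = fitctree(X_all, Y_all);
    Y_pred = predict(model, X_test);

    accuracy = mean(Y_pred == Y_test);
    confusionmat(Y_test, Y_pred)   % which classes get confused with which?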

Validating the model in a simulation running on a PC is one thing, but ensuring the algorithm is suitable for use on a small, embedded target is quite another. Using Matlab’s Embedded Coder Toolbox, it is possible to generate a version of the algorithm in embedded C, which can be run on an embedded microcontroller such as an STM32. This can be used directly in the final software, or it can be hand-optimised to fit in with the rest of the software environment and the specific target functionality.
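One common pattern for this step is to save the trained model, wrap prediction in a code-generation entry point, and invoke the coder (a sketch only – function names follow recent releases of the toolbox and vary by Matlab version, and the 1x5 input size matches the hypothetical feature vector above):

    % Save the trained model so the generated C code can load it.
    saveLearnerForPredict(compact(model), 'toolClassifier');

    % --- classifySensors.m: entry point for code generation ---
    function label = classifySensors(x) %#codegen
    mdl = loadLearnerForPredict('toolClassifier');
    label = predict(mdl, x);
    end

    % Generate a C library from the entry point, for a 1x5 feature vector.
    codegen classifySensors -args {zeros(1, 5)} -config:lib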

The Matlab simulation toolbox, Simulink, can be used to capture real-time sensor data into the PC and send it to the algorithm running on the microcontroller. The algorithm’s output can then be displayed on the PC. This facilitates live demonstrations of the solution with key stakeholders, allowing them to gain a more intuitive understanding of the progress that has been made and to communicate potential limitations of the algorithm.
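The Simulink setup itself is graphical, but the gist of the demo loop can be sketched in plain Matlab (recent releases only; the port name and message format here are invented for the example):

    % Rough illustration of the live-demo loop: read the classifier's
    % output from the microcontroller over a serial link and display it.
    port = serialport("COM3", 115200);   % link to the microcontroller
    while true
        msg = readline(port);            % e.g. "class=cutting conf=0.94"
        disp(msg);                       % show the live classification
    end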

Because Matlab allows machine learning development to proceed at this pace, the demonstrator and the measured accuracy of the algorithm can be used early to support the decision to continue product development. This saves development time, and has the potential to produce a more advanced solution at an earlier stage of the project than would be possible with more traditional methods. It also, of course, lays the groundwork for incorporating a version of the algorithm into the final product.

Jonathan Storey, Tom Evans