Software for sensors has evolved from simply reading out and evaluating sensor data to making intelligent decisions based on that data. This transformation has been enabled by new technologies in the areas of software synthesis and artificial intelligence (AI). They help us actualize new features that make consumer devices smarter, dramatically improving the user experience through greater interactivity and higher levels of automated personalization.
Stefan Finkbeiner, CEO & General Manager, Bosch Sensortec, will explore the topic during his keynote, How Software Makes MEMS Sensors into Smart Systems, at this month’s MEMS & Sensors Executive Congress (October 22-24, 2019, Coronado Island Marriott Resort & Spa, Coronado, Calif.). Finkbeiner shared his views ahead of MEMS & Sensors Executive Congress (MSEC).
What is the relationship between MEMS sensor suppliers and specialized software synthesis providers?
Collaboration is a key driver for innovation in sensor software. There are already several fruitful collaborations between MEMS sensor suppliers and specialized software providers, which are mostly startups. Collaborations with providers of simulation and evaluation tools, as well as with well-known universities in the field of AI, are starting to show positive results.
Domain expertise is also critical for developing smart sensor software, and it will remain essential to future sensing solutions.
How does software synthesis relate to sensor fusion?
Put simply, software synthesis refers to ways of automatically generating code based on domain knowledge and given constraints for specific product versions. Sensor fusion combines sensor data from different kinds of sources in order to improve the results.
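The idea of sensor fusion can be illustrated with a minimal sketch. The example below is a classic complementary filter that blends a gyroscope's fast but drifting angle estimate with an accelerometer's noisy but drift-free one; the function name, coefficients, and sample values are illustrative assumptions, not any vendor's actual algorithm.

```python
# Minimal sensor-fusion sketch: a complementary filter blending two
# imperfect sources into one better estimate. Values are illustrative.

def complementary_filter(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    """Fuse gyro angular rates (deg/s) with accelerometer-derived
    angles (deg) into a single smoothed angle estimate."""
    angle = accel_angles[0]          # initialize from the drift-free source
    fused = []
    for rate, acc_angle in zip(gyro_rates, accel_angles):
        # Trust the integrated gyro short-term, the accelerometer long-term.
        angle = alpha * (angle + rate * dt) + (1 - alpha) * acc_angle
        fused.append(angle)
    return fused

# Example: a device held at a true angle of 10 deg, with a small
# constant gyro bias that would cause pure integration to drift away.
rates = [0.5] * 100                  # biased gyro readings (deg/s)
accels = [10.0] * 100                # accelerometer reads the true angle
estimate = complementary_filter(rates, accels)
```

Neither source alone is sufficient: integrating the biased gyro drifts without bound, while the raw accelerometer is too noisy in practice. The fused estimate stays close to the true angle, which is the essential payoff of combining sources.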
Software synthesis techniques enable a level of automation that creates new opportunities for more complex sensor fusion, for example scenarios involving big data and a large number of potential data sources, which were formerly out of reach with traditional, hand-written approaches.
The traditional sensor fusion toolset can now be further extended by machine learning techniques that help to determine which sources are more reliable than others and how to combine data streams. This topic and others are still active areas of research. A wearable device with motion detection is a case in point. With unsupervised learning, the device could identify short vs. long cyclically repeating motions, and could treat them differently from other types of motion.
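The wearable example above can be sketched in a few lines: a simple 1-D k-means clustering (k=2) separates short from long cyclically repeating motion periods without any labels, much as an unsupervised on-device learner might. The period values and helper name are hypothetical, chosen only for illustration.

```python
# Hedged sketch: unsupervised separation of short vs. long repeating
# motion periods (in seconds) via 1-D k-means with two clusters.

def kmeans_1d(values, iters=20):
    """Return the two cluster centroids for a list of 1-D values."""
    lo, hi = min(values), max(values)     # initialize centroids at extremes
    for _ in range(iters):
        # Assign each value to its nearest centroid, then recompute means.
        short = [v for v in values if abs(v - lo) <= abs(v - hi)]
        long_ = [v for v in values if abs(v - lo) > abs(v - hi)]
        lo = sum(short) / len(short)
        hi = sum(long_) / len(long_)
    return lo, hi

# Illustrative motion periods: quick steps vs. slow arm swings.
periods = [0.5, 0.6, 0.55, 2.9, 3.1, 0.52, 3.0]
short_c, long_c = kmeans_1d(periods)
```

Once the two motion classes are separated, the device can apply different processing to each, without a human ever labeling the data.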
How is the new software synthesis-AI approach different from previous approaches? To what degree will the new approach open up new applications?
Traditionally, technology companies have used cloud computing for data storage and machine learning on aggregated user data. In that model, MEMS sensors generate large amounts of data that power-hungry hardware (such as digital signal processors) must process; in addition, machine learning generally requires lots of power-hungry cloud nodes with GPUs. This traditional model, however, is not the best option for many users. Just think for a moment about all the scenarios in battery-powered devices where frequent battery charging frustrates users.
Leveraging both software synthesis and AI techniques in MEMS sensors is therefore a very promising approach because it supports improved recognition and learning inside the sensor. This means that user-specific data isn’t transferred to the cloud; it remains private inside the sensor. This improves existing applications that learn “all the time” and opens up new opportunities for applications, e.g., smart clothing, predicting a product’s lifespan, detecting whether a window or door is open or closed—all without server connectivity.
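The window/door example hints at how little logic in-sensor inference can require. Below is a minimal sketch of open/closed detection from a hinge-angle reading, using hysteresis (two thresholds) so the state does not flicker when the reading hovers near a boundary; all thresholds and readings are assumptions for illustration, and no server connectivity is involved.

```python
# Hedged sketch: in-sensor open/closed classification with hysteresis.
# Thresholds (degrees) and readings are illustrative placeholders.

def classify_stream(angles, open_th=15.0, close_th=5.0):
    """Return 'open' or 'closed' per sample. Using two thresholds
    (hysteresis) prevents rapid toggling near a single cutoff."""
    state = "closed"
    states = []
    for a in angles:
        if state == "closed" and a > open_th:
            state = "open"
        elif state == "open" and a < close_th:
            state = "closed"
        states.append(state)
    return states

# Simulated hinge-angle samples: the window opens, drifts, then closes.
readings = [0.0, 2.0, 16.0, 14.0, 10.0, 4.0, 1.0]
states = classify_stream(readings)
```

Because the decision runs entirely inside the sensor, the raw angle stream never needs to leave the device, which is exactly the privacy benefit described above.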
How will such software adapt to the individual user?
Devices will offer much more personalized information to users. For example, optimizing a step counter to match the height, age or Body Mass Index (BMI) of a user—or to adapt to a user’s environment (is the person running on a beach, hiking up a mountain or strolling in a park?)—will provide more accurate information on calories burned. Not every step is created equal, and both pre-loaded personal data as well as real-world environmental data will prove that some steps consume a lot more energy than others.
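A personalized step counter of this kind can be sketched as a calorie estimate scaled by user profile and terrain. The coefficients below (stride-from-height rule, kcal-per-km factor, terrain multipliers) are illustrative placeholders, not a validated physiological model.

```python
# Hedged sketch: "not every step is created equal" as code. All
# coefficients are illustrative assumptions, not calibrated values.

TERRAIN_FACTOR = {"park": 1.0, "beach": 1.5, "mountain": 2.0}

def calories_for_steps(steps, weight_kg, height_cm, terrain="park"):
    """Estimate calories burned, personalized by height, weight,
    and the environment the user is moving through."""
    stride_m = height_cm * 0.0041              # rough stride-length rule
    distance_km = steps * stride_m / 1000
    base_kcal = 0.5 * weight_kg * distance_km  # flat-ground walking estimate
    return base_kcal * TERRAIN_FACTOR[terrain]

# Same 10,000 steps, same user: terrain changes the answer.
flat = calories_for_steps(10000, 70, 175, "park")
beach = calories_for_steps(10000, 70, 175, "beach")
```

The same step count yields a 50% higher estimate on sand than in a park, showing how pre-loaded personal data combined with environmental context produces more accurate feedback than a raw count.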
What would you like MSEC attendees to take away from your presentation?
I want to introduce the journey of software development by illustrating specific use-case examples. I would also like to offer my outlook on the role of software and AI in MEMS sensors, to increase their adoption in current and new applications. Ultimately, I think it’s important to raise awareness in our industry on why we should embrace the use of software and AI.
Join us at MSEC to meet with Bosch Sensortec and other industry influencers driving innovation in the MEMS & sensors industry. Registration is open.
About Stefan Finkbeiner
Stefan Finkbeiner will present How Software Makes MEMS Sensors into Smart Systems on October 23, 2019 at MSEC. Connect with him at MSEC or via LinkedIn. You can get more information on Bosch Sensortec’s products and solutions online: https://www.bosch-sensortec.com/
Stefan Finkbeiner, Ph.D., CEO & General Manager, Bosch Sensortec, was appointed CEO of Bosch Sensortec in 2012. He joined Robert Bosch GmbH in 1995 and has been working in different positions related to the research, development, manufacturing, and marketing of sensors for more than 20 years. Senior positions at Bosch have included director of marketing for sensors, director of corporate research in microsystems technology, and vice president of engineering for sensors.
Finkbeiner received his Diploma in Physics from the University of Karlsruhe in 1992. He then studied at the Max-Planck-Institute in Stuttgart and there received his Ph.D. in Physics in 1995. In 2015, Finkbeiner received
Disclaimer: The views expressed here are those of the interviewee and do not necessarily represent the views of AZoM.com Limited (T/A) AZoNetwork, the owner and operator of this website. This disclaimer forms part of the Terms and Conditions of use of this website.