Bob Bouthillier
Director, Electrical Engineering, Veranex
In this three-part series, Bob Bouthillier discusses the state of artificial intelligence (AI) and machine learning (ML) today, their potential across a variety of applications, the challenges faced by a leading mobile ECG company when implementing ML for ECG waveform analysis, and the impact of ML implementations on power consumption and latency.
Part 1: Purpose-Built Applications
A venture capitalist recently joked that to fund a startup, all one must do is choose a URL that ends in ‘.ai’.
Although he was not serious, the joke acknowledges that companies pursuing Artificial Intelligence (AI) are getting much attention, and that there is a fear of missing out (FOMO) in the investment community should one of these AI startups develop a ‘killer app’ that achieves unicorn status without them.
As someone who has been developing products for 30 years, I find that one of the most frequent questions clients ask is how to leverage AI in their products to keep them relevant in a world where AI seems to be growing into almost everything. There is also a belief that firms like Google, Apple, Amazon, Meta, and Microsoft have all of the data, so how can others ever compete in AI?
The good news about data is that every company has deep knowledge in their domain, and has key data related to their business that is different than the data held by the five big technology companies listed above. The real question is whether your data is in a form that is accessible to build models that will be useful in your business.
Many AI applications focus on a classification task where the data must be labeled to make it useful. A classic example is the collection and labeling of images undertaken by Fei-Fei Li to create the ImageNet database, which is organized around more than 100,000 synonym sets known as ‘synsets’, with a goal of roughly 1,000 images for each synset. This labeled database of images has been instrumental in advancing the task of object recognition in machine-learning applications since the ImageNet effort began in 2009, and it is largely responsible for how well an AI algorithm can recognize cats, dogs, and other objects in images.
While the terms Artificial Intelligence (AI) and Machine Learning (ML) are often used almost interchangeably, ML is the process of building a model that either classifies data into sub-groups or, for continuous data such as temperature predictions, uses regression to represent data along a continuum. Artificial Intelligence is the term we assign to the resulting output of the machine-learning model.
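The distinction between the two ML tasks above can be made concrete with a minimal sketch. The function names and toy data below are illustrative, not from any specific library: a nearest-neighbor rule stands in for a classifier, and a least-squares line fit stands in for a regression model.

```python
# Minimal sketch contrasting the two core ML tasks: classification
# (assign a discrete label) vs. regression (predict a continuous value).

def nearest_neighbor_classify(samples, labels, query):
    """Classification: give the query the label of its closest sample."""
    best = min(range(len(samples)), key=lambda i: abs(samples[i] - query))
    return labels[best]

def fit_line(xs, ys):
    """Regression: least-squares fit of y = a*x + b for continuous data."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Classification: label a temperature reading as 'cold' or 'warm'
label = nearest_neighbor_classify([5.0, 25.0], ["cold", "warm"], 8.0)  # cold

# Regression: predict tomorrow's temperature along a continuum
a, b = fit_line([1, 2, 3, 4], [10.0, 12.0, 14.0, 16.0])
predicted = a * 5 + b  # 18.0
```

Real projects would use a library such as scikit-learn for both tasks, but the division of labor is the same: classifiers emit categories, regressors emit numbers.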
It has been said that current AI models function more like a purpose-built appliance than like a human brain, because each application gains expertise in a very narrow space that does not generalize well to other spaces. To continue the analogy, a dishwasher and a washing machine are both purpose-built appliances, and one would not be pleased with the results of putting dishes into the clothes washer; the same is true for most AI models.
As an example, Amazon’s Alexa excels at natural language processing to play music, answer questions, set timers, and even tell jokes. However, Alexa would require additional training to recognize cats or dogs if a camera were connected to it. While these AI applications offer convenience for users within narrow spaces, none of them approach the generalized intelligence of a three-year-old child.
In the spirit of purpose-built appliances, let’s look at an example of a practical device that tracks activity for a patient who wishes to improve their health and wellness. This activity tracker is a coin-sized sensor-tile device with BLE and a multi-axis IMU sensor, carried in a user’s pocket. Consider how to develop the software for this product by traditional methods versus an ML approach. Under a traditional model, a programmer would first write a function to determine the orientation of the activity tracker, then develop functions to capture data generated by a variety of users while walking, running, jumping, etc. This effort requires a substantial amount of time: the programmer must inspect the IMU data for each user, recognize how that data translates into labeled motion sequences, and then tune the algorithm to recognize each data sequence as an activity.
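The traditional, hand-tuned approach might look like the sketch below. The accelerometer magnitudes (in g) and the threshold values are hypothetical, chosen only to illustrate the kind of rules a programmer would derive by inspecting IMU traces by hand.

```python
# Hand-tuned sketch of the traditional approach: fixed thresholds encoded
# by a programmer after manually inspecting IMU data. All cutoff values
# here are illustrative assumptions, not calibrated figures.

def classify_activity(accel_magnitudes):
    """Label a window of accelerometer magnitudes (in g) by fixed rules."""
    peak = max(accel_magnitudes)
    mean = sum(accel_magnitudes) / len(accel_magnitudes)
    if peak < 1.1:                    # barely above gravity: no activity
        return "resting"
    if peak < 1.8 and mean < 1.3:     # moderate, regular impacts
        return "walking"
    return "running"                  # strong impacts; jumping, skipping,
                                      # etc. would each need more rules

print(classify_activity([1.0, 1.02, 1.01]))      # resting
print(classify_activity([1.2, 1.4, 1.1, 1.3]))   # walking
print(classify_activity([2.3, 1.1, 2.6, 1.2]))   # running
```

Every new activity, user gait, or pocket orientation forces another round of manual threshold tuning, which is exactly the labor the ML approach below avoids.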
When the same activity tracker is built using ML, users are given a mobile app and the sensor-tile device to put in their pockets. When the sensor-tile detects motion, it notifies the mobile app to ask the user what activity they are doing. The user’s response “labels” each activity, and as this is repeated for all users, a trove of labeled data is easily collected from a group of sensor-tile devices. At the end of each day, the sensor data from all ‘tiles’ is fed back into an ML model along with the labeled activities from the mobile apps, and the revised model is downloaded into all users’ sensor-tile units. Each sensor tile now recognizes more activities and users continue to select their activity to label the data. As the model becomes more mature, the mobile app may suggest the activity being detected and allow the user to ‘confirm’ or ‘correct’ the activity as needed. This is essentially crowd-sourcing the labeling of data and it results in a more robust activity sensor as more users participate in its use.
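The daily collect-label-retrain cycle described above can be sketched as follows. The functions are hypothetical placeholders for the real mobile app and server-side training pipeline, and the “model” is a trivial majority-vote stand-in for an actual classifier.

```python
# Sketch of the crowd-sourced labeling cycle: users' responses label the
# sensor windows during the day, and a nightly retrain pools all tiles'
# data. The majority-vote 'model' is a deliberate simplification.

from collections import Counter

labeled_data = []                  # (imu_window, activity) pairs, all tiles
model = {"top_activity": None}     # stand-in for the deployed classifier

def record_activity(imu_window, user_label):
    """Mobile app: the user's response labels the sensor-tile window."""
    labeled_data.append((imu_window, user_label))

def nightly_retrain():
    """Server: rebuild the model from the day's pooled labeled data."""
    counts = Counter(label for _, label in labeled_data)
    model["top_activity"] = counts.most_common(1)[0][0]
    return model

# Several users confirm their activities during the day
record_activity([1.2, 1.4], "walking")
record_activity([2.1, 2.5], "running")
record_activity([1.1, 1.3], "walking")
nightly_retrain()
print(model["top_activity"])  # walking
```

The key design point is that labeling cost is spread across the user base: each tap in the app adds one labeled sample, and the retrained model is pushed back to every tile.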
This is an example of a supervised machine-learning application with a classifier that learns to correlate IMU data patterns with activities like walking, running, skipping, etc. If it were trained on only one user, it might work reliably for that user but would probably do a poor job of identifying activities for different users, because the training data comes from a single person. This is a classic case of ‘overfitting’, where a machine-learning model does not generalize well to other users’ data. For this reason, it is important to collect data from a large-enough cross-section of users. Most of this data (typically 80%) becomes the training set for your model; the remaining 20% is held out as the “test set” used during qualification and testing of the ML algorithm to verify its capabilities on previously unseen data.
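The 80/20 split described above is straightforward to implement. A minimal sketch, with shuffling so that no single user's data dominates either set (the function name and seed are illustrative):

```python
# Sketch of the 80/20 train/test split: shuffle the pooled labeled
# samples, then hold out the last 20% as the test set.

import random

def train_test_split(samples, train_fraction=0.8, seed=42):
    """Shuffle labeled samples and split into training and test sets."""
    shuffled = samples[:]                     # don't mutate the caller's list
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

# 100 labeled samples from many users (toy data)
data = [(i, "walking" if i % 2 else "running") for i in range(100)]
train, test = train_test_split(data)
print(len(train), len(test))  # 80 20
```

In practice a library helper such as scikit-learn's `train_test_split` does the same job, often with stratification so each activity class appears in both sets.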
While this was a fairly simple example, there are many more applications for AI and ML. In my next blog, we will review challenges faced by a key player in the mobile ECG market when using ML to analyze ECG waveforms.
Frequently Asked Questions About AI/ML
The AI Hype: Fact or Fiction?
There is a current fascination with AI, with some companies potentially overemphasizing its role to attract investment. However, AI can be a valuable tool when strategically applied.
How can medical device companies leverage AI without being outshined by tech giants?
The good news is that every company possesses unique domain knowledge and data specific to its field. This data, while distinct from the vast datasets held by major tech companies, can be instrumental in building ML models that enhance your products. The key lies in ensuring your data is accessible and usable for model development.
Understanding AI and ML Terminology
The terms “Artificial Intelligence” (AI) and “Machine Learning” (ML) are often used interchangeably. For clarification, ML refers to the process of building models that classify data or predict continuous values (e.g., temperature). AI encompasses the intelligent output generated by these ML models.
AI as a Purpose-Built Appliance
Current AI models excel in specific tasks but struggle with broader applications. An analogy can be drawn to purpose-built appliances like dishwashers or washing machines: each excels in its designated function but wouldn’t perform well if used for the other’s purpose. Similarly, an AI model trained for music playback wouldn’t be adept at image recognition without additional training.
Traditional vs. ML Approach: An Activity Tracker Example
Let’s consider an activity tracker worn by a user to monitor their health and wellness. This coin-sized device uses sensors to track movement. Here’s how we could develop the software for this tracker using traditional and ML-based methods:
- Traditional Approach: A programmer would write functions to determine the tracker’s orientation and capture movement data from various user activities (walking, running, jumping, etc.). This process requires significant time and effort, as the programmer needs to analyze sensor data from multiple users, identify patterns corresponding to specific activities, and then refine algorithms to recognize these patterns.
- ML Approach: Users would be provided with a mobile app and the activity tracker. Whenever motion is detected, the mobile app prompts the user to identify the activity they’re performing. These user responses serve as “labels” for the data. As more users participate, a vast amount of labeled data is collected. This data is then fed back into an ML model, along with the labeled activities, to refine the model’s ability to recognize different activities. Over time, the model becomes more sophisticated, potentially suggesting activities to the user through the mobile app, with options to confirm or correct suggestions.
This is a practical example of supervised machine learning with a classifier that learns to associate specific movement patterns (IMU data) with activities like walking, running, or skipping. However, training the model on data from a single user might lead to poor performance when used by others (overfitting). To address this, data from a diverse group of users is crucial. Typically, 80% of the collected data is used to train the model, while the remaining 20% serves as a “test set” to evaluate the model’s ability to handle unseen data during qualification and testing phases.
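The overfitting symptom described above can be demonstrated in a few lines. A 1-nearest-neighbor “model” on synthetic accelerometer peaks stands in for a real classifier; the magnitudes and user behaviors are hypothetical.

```python
# Sketch of overfitting: a model trained on one user's IMU data memorizes
# that user's movement signature and misreads a new user whose gait differs.

def nn_predict(train, query):
    """Return the label of the training sample closest to the query peak."""
    return min(train, key=lambda s: abs(s[0] - query))[1]

# Training data from a single user: their walking peaks cluster near 1.3 g
user_a = [(1.25, "walking"), (1.35, "walking"), (2.4, "running")]

# Same user: the model looks accurate
print(nn_predict(user_a, 1.3))   # walking

# User B walks with heavier impacts (peaks near 1.9 g) and is misclassified
print(nn_predict(user_a, 1.9))   # running
```

Holding out a test set drawn from *different* users than the training set is what exposes this failure during qualification, rather than in the field.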
While this is a simplified example, it demonstrates the potential of AI and ML in various applications. In our next blog post, we’ll explore the challenges faced by a leading mobile ECG company when implementing ML for ECG waveform analysis.