When are traditional algorithms a better choice than Machine Learning (ML)?
Many healthcare applications involve data classification tasks, such as interpreting sensor data to determine patient activity (sitting, walking, sleeping) or analyzing test results. Traditionally, these tasks are handled by pre-programmed algorithms. However, ML offers an alternative approach. The key factor influencing this choice is the availability of labeled data. Supervised learning, the most common ML approach, requires a large dataset where each data point is labeled with the desired outcome. For instance, an ML model designed to detect heart rhythm abnormalities from ECG data needs a vast collection of ECG recordings, each labeled as normal or abnormal. If such labeled data is limited or unavailable, a traditional, rule-based algorithm might be a more suitable solution.
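To make the labeled-data requirement concrete, here is a minimal sketch of a supervised classifier built with scikit-learn. The feature values, labels, and the query point are synthetic placeholders standing in for summarized ECG recordings, not real data:

```python
# Minimal sketch: supervised learning depends entirely on labeled examples.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is one recording summarized as two features (placeholders, e.g. mean RR
# interval and QRS width); each label is the clinician-assigned outcome:
# 0 = normal rhythm, 1 = abnormal.
features = np.array([
    [0.82, 0.09],
    [0.79, 0.10],
    [0.55, 0.14],
    [0.51, 0.15],
])
labels = np.array([0, 0, 1, 1])

# Without the `labels` array there is nothing for the model to learn from --
# that is the data requirement a rule-based algorithm does not have.
model = LogisticRegression().fit(features, labels)
print(model.predict([[0.53, 0.13]]))  # expected to be flagged as abnormal (1)
```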
Real-world example: Why did initial attempts at AliveCor’s ML model fail?
Dr. Eric Topol, in his book “Deep Medicine,” presents a case study from AliveCor’s development of an AI-powered ECG device for detecting high potassium levels. Initial attempts using a limited dataset of ECG T-waves and outpatient lab results proved unsuccessful: the ML model could not find a reliable relationship between the T-wave data and the potassium levels. However, when AliveCor expanded the data to include full ECGs and lab results from hospitalized patients (collected closer in time to the ECG readings), the model identified a clear correlation between the complete ECG data and elevated potassium. This highlights the importance of using comprehensive, high-quality data for training ML models.
The success story: How did AliveCor leverage ML to create a valuable product?
By providing the ML model with a more comprehensive dataset (full ECGs and near-time lab results), AliveCor successfully trained a model to detect high potassium levels. The model was then validated on a separate, held-out dataset, demonstrating its accuracy and reliability, and as a supervised learning classifier it received FDA clearance to be marketed for detecting elevated potassium.
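As a rough illustration of what that kind of hold-out validation looks like, the sketch below computes sensitivity and specificity on a separate labeled set. The label and prediction arrays are hypothetical stand-ins for a real validation dataset:

```python
# Minimal sketch of evaluating a trained classifier on a held-out dataset.
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical held-out labels and model predictions; in practice these come from
# recordings that were never used during training.
holdout_labels = np.array([0, 0, 0, 1, 1, 1])
predictions    = np.array([0, 0, 1, 1, 1, 1])

tn, fp, fn, tp = confusion_matrix(holdout_labels, predictions).ravel()
sensitivity = tp / (tp + fn)   # fraction of truly abnormal cases that were caught
specificity = tn / (tn + fp)   # fraction of normal cases correctly passed
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```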
Key takeaways: When is supervised learning with ML classifiers a good choice?
The AliveCor example showcases the effectiveness of supervised learning with ML classifiers in various medical device applications. Several cloud-based software tools simplify the creation of ML models that can be deployed on embedded devices with minimal performance loss.
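As one illustration of the kind of workflow those tools automate, the sketch below converts a small Keras model to TensorFlow Lite with post-training quantization, one common route to an embedded-friendly artifact. The model architecture and output file name are placeholders, not a specific vendor’s pipeline:

```python
# Minimal sketch of shrinking a trained model for an embedded target with
# TensorFlow Lite. The tiny network here is a stand-in for a real classifier.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # post-training quantization
tflite_model = converter.convert()

with open("classifier.tflite", "wb") as f:
    f.write(tflite_model)  # this compact artifact is what gets flashed to the device
```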
When might a traditional algorithm be preferable to an ML model?
While ML offers a powerful toolset, it’s not always the optimal solution. Here are some factors to consider:
- Complexity: ML models introduce additional complexity to the system design.
- Data Requirements: Training and testing ML models often necessitate significant data volumes.
- Well-defined tasks: For well-defined tasks with clear steps (e.g., heart rate detection from ECG signals), a traditional algorithm can be simpler, more efficient, and require no training data at all (see the sketch after this list).
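For example, a rule-based heart rate estimator can be written in a few lines with hand-chosen rules and no training data. The sketch below applies an amplitude threshold and a minimum peak spacing to a synthetic signal that stands in for a real single-lead ECG trace:

```python
# Minimal sketch of a rule-based approach: estimate heart rate by finding R-peaks
# with a simple amplitude threshold and minimum spacing -- no labeled data needed.
import numpy as np
from scipy.signal import find_peaks

fs = 250                                  # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)              # 10 seconds of signal
ecg = 0.1 * np.sin(2 * np.pi * 1.0 * t)   # stand-in for baseline wander
ecg[::fs] += 1.0                          # synthetic R-peaks once per second (60 bpm)

# The "algorithm" is two hand-chosen rules: peaks must exceed a height threshold
# and be at least 0.4 s apart (a physiologically plausible minimum spacing).
peaks, _ = find_peaks(ecg, height=0.5, distance=int(0.4 * fs))
rr_intervals = np.diff(peaks) / fs        # seconds between successive beats
heart_rate = 60.0 / rr_intervals.mean()   # beats per minute
print(f"Estimated heart rate: {heart_rate:.0f} bpm")
```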
Just as a skilled musician understands when to play and when to hold back, effectively designing medical devices involves choosing the right tools for the job. Carefully evaluate your specific needs and consider traditional algorithms before jumping to an ML approach. Our team is always here to help. In my next blog, we will compare the latency of an ML implementation with that of a traditional algorithm performing the same function in a tech-enabled version of a familiar everyday medical product.