UX for Lung Cancer Detection
Overview:
Lung cancer is the leading cause of cancer-related deaths worldwide. In 2022, approximately 2.5 million new cases were diagnosed globally, resulting in over 1.8 million deaths.
Siemens Healthineers stands at the forefront of medical imaging and diagnostics, significantly enhancing early cancer detection. In fiscal year 2024, the company reported revenues of €22.36 billion, marking a 3.2% increase from the previous year. Their advanced technologies play a crucial role in early-stage cancer detection, thereby improving survival rates and providing healthcare professionals with precise tools for critical decision-making.
Problem Statement:
Despite advancements in early detection, the process of diagnosing lung cancer is still highly complex and time-sensitive, placing significant strain on both medical professionals and patients. The challenge lies in integrating diagnostic tools seamlessly into the existing healthcare workflow without adding unnecessary complexity to an already demanding process.
A solution was required that addressed this issue through a user-centric design simplifying the detection process. The goal was to build a system that achieved at least 90% accuracy in lung cancer detection, minimized user friction, and improved the usability of the detection models.
Roles:
Research, Usability Testing, Machine Learning Development, Deep Learning Development, Stakeholder Management, Iteration
Duration:
Jan 2022 - Apr 2022
Tools:
MS Office, Python, Visual Studio Code, Neural Networks.
Procedure
Identify Issues and Difficulties: Conducted user research through surveys and interviews with patients, doctors, and engineers to identify the challenges and pain points in lung cancer detection.
Model Development:
Built an initial machine learning (ML) model for lung cancer detection, using traditional ML techniques to analyze patient data.
Developed a deep learning (DL) model, employing advanced neural networks for more accurate and efficient detection of lung cancer patterns in medical imaging.
Created a hybrid model, combining the strengths of both ML and DL.
Implementation and Testing: Integrated the models into the diagnostic workflow, testing them to identify any friction between the different stages of the process affecting UX.
Refinement Based on Feedback: Gathered feedback from healthcare professionals and patients, using this data to refine and optimize the models for better performance and user experience, ensuring the solution aligned with practical needs and clinical requirements.
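The hybrid model described above can be sketched as a simple probability-level ensemble: the ML model scores structured patient data, the DL model scores the scan, and their outputs are blended. Everything here (feature names, weights, the blending factor alpha) is an illustrative assumption, not the production pipeline.

```python
import numpy as np

def ml_predict_proba(tabular_features):
    """Stand-in for the traditional ML model scoring structured
    patient data (a logistic-regression-style linear score).
    The weights are illustrative, not trained values."""
    weights = np.array([0.8, 0.5, 1.2])
    bias = -1.0
    score = tabular_features @ weights + bias
    return 1.0 / (1.0 + np.exp(-score))  # sigmoid -> probability

def dl_predict_proba(image):
    """Stand-in for the deep learning model scoring a scan.
    Mean intensity plays the role of a toy 'network output'."""
    score = 4.0 * (image.mean() - 0.5)
    return 1.0 / (1.0 + np.exp(-score))

def hybrid_predict_proba(tabular_features, image, alpha=0.6):
    """Weighted average of the two model probabilities;
    alpha is an assumed blending weight."""
    return (alpha * dl_predict_proba(image)
            + (1 - alpha) * ml_predict_proba(tabular_features))

patient = np.array([1.0, 0.0, 0.5])  # e.g. age band, smoking flag, nodule size
scan = np.random.default_rng(0).random((64, 64))
p = hybrid_predict_proba(patient, scan)
print(round(float(p), 3))
```

In practice the blending weight would itself be tuned on validation data; the fixed alpha above only illustrates the structure of the ensemble.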
- Understand the problem, its contributing factors, its symptoms, and the candidate solutions.
- Implement the solutions, check parameters, and test across conditions.
- Review the solution based on feedback to identify areas for improvement and remaining problems.
- Repeat until the design satisfies all criteria.
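The iterate-until-satisfied process above can be expressed as a generic loop; the function names and the toy accuracy numbers are illustrative assumptions, not project data.

```python
def design_loop(implement, evaluate, criteria, max_rounds=10):
    """Sketch of the design cycle: build/update the solution,
    measure it, and stop once all criteria are met."""
    metrics = None
    for round_num in range(1, max_rounds + 1):
        solution = implement()
        metrics = evaluate(solution)
        if criteria(metrics):
            return round_num, metrics
    return max_rounds, metrics

# Toy usage: accuracy improves each round until it crosses the target.
history = iter([0.85, 0.92, 0.97])
rounds, final = design_loop(
    implement=lambda: None,
    evaluate=lambda s: next(history),
    criteria=lambda m: m >= 0.96,
)
print(rounds, final)  # stops on the round that first meets the criterion
```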
Why use both Machine Learning and Deep Learning?
Machine Learning (ML)
Great for handling smaller, structured data (like patient history).
Faster and less resource-intensive.
Efficient for making quick predictions.
Deep Learning (DL)
Excels at analyzing complex, unstructured data (like medical images).
Achieves high accuracy, especially for tasks like detecting patterns in scans.
Learns directly from data without needing explicit instructions.
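To make the ML side of this comparison concrete, here is a toy logistic regression trained by plain gradient descent on synthetic structured data: it is small, fast, and resource-light, which is exactly why classical ML suits tabular inputs like patient history. The data and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic structured "patient" data: two features, separable by x0 + x1 > 0.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Logistic regression trained with plain gradient descent.
w = np.zeros(2)
b = 0.0
lr = 0.5
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probabilities
    grad_w = X.T @ (p - y) / len(y)          # gradient of log-loss w.r.t. w
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

accuracy = np.mean((p > 0.5) == (y == 1.0))
print(round(float(accuracy), 2))
```

A deep model would be overkill here; the trade-off flips once the input is an unstructured scan, as the next sections illustrate.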
Working of Residual Neural Networks
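The defining idea of a residual network is the skip connection: each block learns a correction F(x) and adds the input back, y = relu(F(x) + x), which keeps gradients flowing through deep stacks. A minimal NumPy sketch of one block (the weights and sizes are illustrative, not the model used in this project):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    """y = relu(F(x) + x), where F(x) is two small linear
    transforms with a ReLU in between. The '+ x' is the
    skip connection that defines a residual block."""
    fx = relu(x @ w1) @ w2   # F(x)
    return relu(fx + x)      # skip connection adds the input back

rng = np.random.default_rng(1)
x = rng.normal(size=(4,))
w1 = rng.normal(size=(4, 4)) * 0.1
w2 = rng.normal(size=(4, 4)) * 0.1
y = residual_block(x, w1, w2)

# With zero weights F(x) = 0, so the block reduces to relu(x):
# the skip connection makes the identity easy to represent.
y_identity = residual_block(x, np.zeros((4, 4)), np.zeros((4, 4)))
print(np.allclose(y_identity, relu(x)))  # True
```

That identity behavior is the reason residual networks train reliably even when very deep: a block only has to learn the deviation from "do nothing".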
Working of Convolutional Neural Networks
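The core operation of a convolutional network is sliding a small kernel over the image and summing elementwise products at each position, so that local patterns (edges, textures, nodule-like blobs) produce strong responses. A minimal valid-mode sketch in NumPy, with an assumed toy image and edge kernel rather than real scan data:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation, the operation a CNN layer
    applies: slide the kernel over the image and take the sum of
    elementwise products at each position."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    oh, ow = ih - kh + 1, iw - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge kernel applied to an image that is dark on the
# left and bright on the right: the response peaks along the edge.
image = np.zeros((5, 5))
image[:, 3:] = 1.0
edge_kernel = np.array([[-1.0, 1.0]])
response = conv2d(image, edge_kernel)
print(response)
```

A real CNN stacks many such filters (with learned weights, nonlinearities, and pooling), but every layer reduces to this sliding-window sum.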
Outcome:
All models achieved at least 96% accuracy in lung cancer detection.
Three distinct models were created, each optimized for specific scenarios, offering flexibility based on requirements.
The integration into the existing design system was seamless, with minimal complications and a low learning curve, ensuring smooth adoption.