Background
In the healthcare industry, timely and accurate information can greatly impact patient care, particularly for senior populations with specific health-related questions. To make reliable health information more accessible, I developed a Generative AI-powered healthcare chatbot using Retrieval-Augmented Generation (RAG). The chatbot leverages AWS services to process, clean, and integrate data from a curated Senior Health Q&A dataset. Designed with reliability and data quality as core pillars, the application aims to deliver accurate, contextually relevant answers to senior healthcare questions through a user-friendly interface.
This exercise was part of an AWS Workshop Studio hands-on lab.
The Challenge
Building a healthcare chatbot for senior citizens came with several challenges:
- Data Integrity and Quality: Ensuring that the health-related question-and-answer dataset maintained high standards for data quality was critical, as errors in health information could lead to adverse consequences.
- Data Transformation and Accessibility: Converting raw healthcare data from XML files into a format suitable for vector embedding required meticulous processing.
- Embedding and Storage: Mapping the healthcare data into vector embeddings for rapid, context-aware responses while maintaining data integrity.
- Seamless User Interaction: Developing an intuitive, responsive interface that allows users to easily query the chatbot while ensuring consistent back-end functionality.
- Scalability and Accuracy: Ensuring the system could scale to handle a high volume of user queries while consistently delivering accurate information.
The Solution
To address these challenges, I designed a generative AI application using AWS services for data ingestion, quality assurance, embedding, and retrieval. Below is a step-by-step outline of my approach:
Data Exploration and Preparation
- Data Cataloging with AWS Glue Crawlers: I began by using AWS Glue Crawlers to scan and catalog the Senior Health Q&A dataset stored in an Amazon S3 bucket. This provided a structured view of the dataset for easy access and processing.
- Data Quality Check with AWS Glue Data Quality Jobs: To ensure that only reliable data was embedded in the vector database, I used AWS Glue Data Quality jobs to separate high-quality records from potentially flawed ones. Records flagged for quality issues were isolated for review and correction by the data source owners via Athena queries; a sketch of the cataloging and review steps follows this list.
- Data Transformation: An AWS Glue processing job converted the cleaned XML data into text files, preparing it for the vector embedding stage and ensuring a consistent format flowing into the AI model (a simplified sketch of this transformation also follows below).
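The cataloging and review steps above can be scripted with boto3. The sketch below is illustrative only: the crawler name, Glue database, flagged-records table, column names, and Athena output location are hypothetical placeholders, not values from the actual workshop environment.

```python
import time

import boto3

# Hypothetical resource names -- substitute the ones created in your environment.
CRAWLER_NAME = "senior-health-qa-crawler"
GLUE_DATABASE = "senior_health_db"
ATHENA_OUTPUT = "s3://my-athena-results-bucket/queries/"

glue = boto3.client("glue")
athena = boto3.client("athena")

# 1) Catalog the raw Q&A dataset sitting in S3.
glue.start_crawler(Name=CRAWLER_NAME)

# 2) After the Glue Data Quality job has flagged records, pull the failing rows
#    so the data source owners can review and correct them.
query = f"""
    SELECT question, answer, dq_failure_reason
    FROM {GLUE_DATABASE}.qa_records_flagged
    LIMIT 100
"""
run = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": GLUE_DATABASE},
    ResultConfiguration={"OutputLocation": ATHENA_OUTPUT},
)

# Poll until the query finishes, then print the rows needing review.
query_id = run["QueryExecutionId"]
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows[1:]:  # first row is the header
        print([col.get("VarCharValue", "") for col in row["Data"]])
```

Whatever table or flag column the Data Quality job actually writes, the idea is the same: Athena gives the data source owners a plain SQL view of the records that failed validation.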
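The XML-to-text conversion itself ran as an AWS Glue job, but its core logic is roughly the local sketch below. The element names (QAPair, Question, Answer) and folder paths are assumptions about the dataset layout, which is not shown here.

```python
import xml.etree.ElementTree as ET
from pathlib import Path


def xml_to_text(xml_path: Path, out_dir: Path) -> None:
    """Flatten one XML file of Q&A pairs into a plain-text file, one pair per block."""
    out_dir.mkdir(parents=True, exist_ok=True)
    root = ET.parse(xml_path).getroot()

    blocks = []
    # Assumed schema: <QAPair><Question>...</Question><Answer>...</Answer></QAPair>
    for pair in root.iter("QAPair"):
        question = (pair.findtext("Question") or "").strip()
        answer = (pair.findtext("Answer") or "").strip()
        if question and answer:  # drop incomplete pairs before embedding
            blocks.append(f"Question: {question}\nAnswer: {answer}\n")

    (out_dir / f"{xml_path.stem}.txt").write_text("\n".join(blocks), encoding="utf-8")


if __name__ == "__main__":
    for xml_file in Path("raw_xml").glob("*.xml"):
        xml_to_text(xml_file, Path("clean_text"))
```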
Vector Embedding and Storage
- Embedding with Sentence Transformers: Using the all-MiniLM-L6-v2 model, each question and answer in the dataset was chunked and converted into a 384-dimensional dense vector. I chose this sentence-transformers model to maintain contextual continuity across question-and-answer pairs, keeping the chatbot's responses coherent and contextually relevant (see the sketch after this list).
- Storing Embeddings in Amazon OpenSearch: Amazon SageMaker Processing jobs used LangChain to convert the text chunks into vector embeddings, which were then stored in Amazon OpenSearch Service for efficient retrieval of relevant answers based on user queries (the sketch after this list also covers indexing and retrieval).
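Both steps above can be sketched end to end: chunk the cleaned text, embed it with all-MiniLM-L6-v2, and load the vectors into a k-NN index. This is a simplified local sketch rather than the actual SageMaker Processing job; the chunk sizes, index name, domain endpoint, and credentials are placeholders, and the LangChain text-splitter import path may differ across library versions.

```python
from pathlib import Path

from langchain.text_splitter import RecursiveCharacterTextSplitter
from opensearchpy import OpenSearch, helpers
from sentence_transformers import SentenceTransformer

# all-MiniLM-L6-v2 produces 384-dimensional dense vectors.
model = SentenceTransformer("all-MiniLM-L6-v2")

# Chunk the cleaned text; sizes are illustrative and should be tuned so that
# each question-and-answer pair stays within a single chunk.
splitter = RecursiveCharacterTextSplitter(chunk_size=512, chunk_overlap=64)
chunks = []
for text_file in Path("clean_text").glob("*.txt"):
    chunks.extend(splitter.split_text(text_file.read_text(encoding="utf-8")))

# Encode all chunks in one batch; the result is a (num_chunks, 384) array.
embeddings = model.encode(chunks, show_progress_bar=True)

# Placeholder OpenSearch domain and credentials -- a real Amazon OpenSearch Service
# domain would typically be reached with SigV4 or fine-grained access control.
client = OpenSearch(
    hosts=[{"host": "my-domain.us-east-1.es.amazonaws.com", "port": 443}],
    http_auth=("admin", "admin-password"),
    use_ssl=True,
)

INDEX = "senior-health-qa"
index_body = {
    "settings": {"index": {"knn": True}},  # enable k-NN search on this index
    "mappings": {
        "properties": {
            "text": {"type": "text"},
            "embedding": {"type": "knn_vector", "dimension": 384},
        }
    },
}
if not client.indices.exists(index=INDEX):
    client.indices.create(index=INDEX, body=index_body)

# Bulk-load each chunk together with its embedding.
helpers.bulk(
    client,
    (
        {"_index": INDEX, "_source": {"text": chunk, "embedding": vec.tolist()}}
        for chunk, vec in zip(chunks, embeddings)
    ),
)

# Retrieval at query time: nearest neighbours of the embedded user question.
query_vector = model.encode("What causes COPD?").tolist()
hits = client.search(
    index=INDEX,
    body={"size": 3, "query": {"knn": {"embedding": {"vector": query_vector, "k": 3}}}},
)
for hit in hits["hits"]["hits"]:
    print(hit["_source"]["text"][:120])
```

The one setting that is not negotiable is the knn_vector dimension: it must match the embedding model's output size, 384 for all-MiniLM-L6-v2.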
Interactive Chatbot Development
- Building the Front-End with Streamlit: To provide an accessible user experience, I used Streamlit to develop the chatbot's interface: a simple search bar where users enter questions, with real-time responses generated by the backend RAG functionality (a minimal front-end sketch follows this list).
- Connecting via API Gateway and Lambda: User queries from the Streamlit app triggered a REST API in API Gateway, which invoked a Lambda function connected to the SageMaker endpoint hosting the RAG model. The chatbot then retrieved the relevant response from the embedded data, ensuring users received accurate, context-specific answers (a sketch of the Lambda handler also follows).
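The Streamlit front end needs only a few widgets. The sketch below assumes a hypothetical API Gateway URL and a JSON contract of {"question": ...} in and {"answer": ...} out; the real endpoint and payload shape may differ.

```python
import requests
import streamlit as st

# Hypothetical API Gateway endpoint for the RAG backend.
API_URL = "https://example.execute-api.us-east-1.amazonaws.com/prod/ask"

st.title("Senior Health Q&A Chatbot")
question = st.text_input("Ask a health question", placeholder="What causes COPD?")

if st.button("Ask") and question:
    with st.spinner("Retrieving an answer..."):
        response = requests.post(API_URL, json={"question": question}, timeout=30)
        response.raise_for_status()
        st.markdown(response.json().get("answer", "Sorry, no answer was returned."))
```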
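On the other side of API Gateway, the Lambda function parses the question, invokes the SageMaker endpoint hosting the RAG model, and returns the generated answer. This is a minimal sketch that assumes a Lambda proxy integration and a hypothetical endpoint name and JSON contract.

```python
import json
import os

import boto3

sagemaker_runtime = boto3.client("sagemaker-runtime")

# Hypothetical endpoint name, normally supplied through an environment variable.
ENDPOINT_NAME = os.environ.get("RAG_ENDPOINT_NAME", "senior-health-rag-endpoint")


def lambda_handler(event, context):
    """Handle an API Gateway (Lambda proxy) request with a JSON body {"question": ...}."""
    body = json.loads(event.get("body") or "{}")
    question = body.get("question", "").strip()
    if not question:
        return {"statusCode": 400, "body": json.dumps({"error": "A question is required."})}

    # Invoke the SageMaker endpoint that performs retrieval and generation.
    response = sagemaker_runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/json",
        Body=json.dumps({"question": question}),
    )
    answer = json.loads(response["Body"].read())  # payload shape depends on the model container

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"answer": answer}),
    }
```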
The Results
The RAG-powered healthcare chatbot delivered the following results:
- Improved Data Quality and Integrity: By isolating and addressing low-quality records before embedding, the system maintained a high standard for reliable responses, building user trust in the chatbot's answers.
- Faster, Contextually Relevant Responses: Embedding healthcare data allowed the chatbot to provide timely and relevant answers, reducing the delay between question and response while preserving contextual accuracy.
- User-Friendly Interaction: The Streamlit interface allowed users to interact seamlessly with the chatbot, enabling an intuitive, conversational experience for health queries such as "What causes COPD?"
- Scalability and Responsiveness: With Amazon OpenSearch storing vector embeddings, the system scaled effectively, handling a growing volume of health-related questions without compromising speed or accuracy.
Conclusion
This case study demonstrates the powerful impact of a RAG-powered Generative AI solution in healthcare, especially for senior care. By combining AWS Glue for data quality and transformation, SageMaker for embedding and model hosting, and Streamlit for a responsive user interface, I developed an accessible and reliable healthcare chatbot. This approach highlights the importance of data integrity, privacy, and user-centric design in building responsible AI applications in healthcare. The scalable nature of this architecture also allows me to expand this solution to address a broader range of medical topics, providing future benefits for patients, caregivers, and healthcare providers alike.
References:
- AWS Workshop Studio. Join hands-on events and workshops. Retrieved from https://catalog.workshops.aws/
- Streamlit. Retrieved from https://streamlit.io/