
100Minds.ai

A practice-based leadership and power skills training platform powered by AI. I built the core AI infrastructure — a voice-enabled tutor, an interactive AI avatar, and RAG pipelines grounded in curated learning content.

AI Engineer
2024
AI / EdTech
100Minds.ai overview
RAG · Grounded AI Responses
< 1s · Voice Response Latency
Live · In Production
2 · AI Interfaces Built

The Challenge

Leadership training typically lives in static courses, PDFs, or expensive in-person workshops. The team at 100Minds wanted to make leadership skills genuinely interactive — where learners could practice real scenarios with an AI that responds intelligently, retains context, and grounds its answers in verified content rather than hallucinating advice.

The Solution

I built a RAG pipeline that ingests and chunks the platform's curated training content into a vector database, so every AI response is anchored to real material. On top of that, I integrated a voice-enabled tutor for spoken interaction and an AI avatar for immersive scenario practice — making the experience feel closer to a coaching session than a course.
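The ingest → chunk → embed → retrieve flow described above can be sketched end to end. This is a minimal, self-contained illustration, not the production code: the hash-based `embed` function and the in-memory `VectorStore` are toy stand-ins for a real embedding model and vector database, and the sample content is invented.

```python
import hashlib
import math


def embed(text: str, dim: int = 64) -> list[float]:
    """Toy stand-in for a real embedding model: hashes each word into a
    fixed-size vector, then L2-normalizes. Illustrative only."""
    vec = [0.0] * dim
    for word in text.lower().split():
        idx = int(hashlib.md5(word.encode()).hexdigest(), 16) % dim
        vec[idx] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]


def chunk(text: str, size: int = 40, overlap: int = 10) -> list[str]:
    """Split text into overlapping windows of `size` words."""
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]


class VectorStore:
    """Minimal in-memory store standing in for a real vector database."""

    def __init__(self) -> None:
        self.items: list[tuple[list[float], str]] = []

    def add(self, text: str) -> None:
        self.items.append((embed(text), text))

    def search(self, query: str, k: int = 2) -> list[str]:
        # Cosine similarity reduces to a dot product on normalized vectors.
        q = embed(query)
        scored = sorted(self.items,
                        key=lambda it: -sum(a * b for a, b in zip(q, it[0])))
        return [text for _, text in scored[:k]]


# Ingest curated training content, then retrieve grounding for a question.
store = VectorStore()
for piece in chunk("Active listening means giving the speaker your full "
                   "attention and reflecting back what you heard."):
    store.add(piece)
context = store.search("How do I practice active listening?")
```

Every generated answer is then conditioned on `context`, which is what keeps responses anchored to curated material instead of free-floating model output.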

What I Built

01

Designed and implemented the RAG pipeline — ingestion, chunking, embedding, and retrieval — grounding all AI responses in curated training content

02

Built the voice-enabled AI tutor with real-time streaming, low-latency response, and natural interruption handling

03

Integrated an interactive AI avatar for immersive scenario-based learning, synchronized with live audio output

04

Architected the LangChain/LangGraph workflow orchestrating context retrieval, response generation, and session memory

05

Optimized chunking strategy and vector retrieval to maximize factual precision without losing conversational context
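The orchestration step (item 04) can be pictured as a small state machine that threads shared session state through retrieval, generation, and memory. The sketch below is a plain-Python analogy to the LangChain/LangGraph workflow, not its actual API: the `retrieve` and `generate` nodes are placeholders for the real vector-database query and LLM call, and all names are illustrative.

```python
from dataclasses import dataclass, field


@dataclass
class SessionState:
    """Shared state threaded through each turn, analogous to a graph state."""
    question: str
    context: list[str] = field(default_factory=list)
    answer: str = ""
    history: list[tuple[str, str]] = field(default_factory=list)


def retrieve(state: SessionState) -> SessionState:
    # Placeholder: in production this queries the vector database.
    state.context = [f"[doc matching: {state.question}]"]
    return state


def generate(state: SessionState) -> SessionState:
    # Placeholder: in production this calls the LLM with context + history.
    state.answer = f"Grounded answer using {len(state.context)} source(s)."
    return state


def remember(state: SessionState) -> SessionState:
    # Session memory lets follow-up questions build on earlier turns.
    state.history.append((state.question, state.answer))
    return state


PIPELINE = [retrieve, generate, remember]


def run_turn(state: SessionState) -> SessionState:
    """Run one conversational turn through the node pipeline."""
    for step in PIPELINE:
        state = step(state)
    return state
```

Keeping each node a pure function of the state is what makes the flow easy to reorder, test, and extend with new nodes (e.g. a guardrail or reranking step).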

100Minds.ai architecture

The Story

The hardest part wasn't the RAG pipeline — it was making the voice interaction feel natural. Early versions had noticeable latency and awkward turn-taking. I had to tune the streaming pipeline and interruption handling to get it to a point where users forgot they were talking to an AI. The avatar added another layer of complexity: synchronizing lip movement and expression with live audio output required careful orchestration between the voice model and the avatar rendering layer.
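The two fixes above (streaming to cut latency, interruption handling for turn-taking) reduce to one loop: emit audio chunk by chunk as it arrives rather than waiting for the full response, and poll a voice-activity signal between chunks so the user can barge in mid-answer. A hedged sketch under stated assumptions: `user_is_speaking` stands in for a real VAD/ASR signal, and appending to `played` stands in for writing to the audio device.

```python
from typing import Callable, Iterable


def speak_streaming(
    tts_chunks: Iterable[bytes],
    user_is_speaking: Callable[[], bool],
) -> list[bytes]:
    """Play synthesized audio chunk by chunk, checking for barge-in
    between chunks so the assistant never talks over the user."""
    played: list[bytes] = []
    for chunk in tts_chunks:
        if user_is_speaking():  # barge-in detected: stop playback at once
            break
        played.append(chunk)    # stand-in for writing to the audio device
    return played


# The user starts speaking while the third chunk is queued; playback stops.
vad_signals = iter([False, False, True, True])
played = speak_streaming([b"a", b"b", b"c", b"d"], lambda: next(vad_signals))
```

Cutting playback at chunk granularity, rather than sentence granularity, is what makes the turn-taking feel natural: the assistant yields within a fraction of a second of the user speaking.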

What I Learned

RAG quality is entirely dependent on chunking strategy — chunking too coarsely loses precision, too finely loses context. I spent more time on this than any other part of the system. I also learned that voice AI UX is its own discipline: latency tolerance, interruption handling, and conversational pacing matter as much as the model quality underneath.
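The trade-off reads well in code: on a fixed corpus, coarse chunks give retrieval nothing to pinpoint, fine chunks strip each hit of its surroundings, and overlap carries context across chunk boundaries. A toy illustration with arbitrary sizes, not the tuned production values.

```python
def chunk_words(text: str, size: int, overlap: int) -> list[str]:
    """Fixed-size word windows with optional overlap between neighbours."""
    words = text.split()
    step = max(size - overlap, 1)
    return [" ".join(words[i:i + size]) for i in range(0, len(words), step)]


doc = " ".join(f"w{i}" for i in range(120))

coarse = chunk_words(doc, size=120, overlap=0)    # one giant chunk: can't pinpoint facts
fine = chunk_words(doc, size=10, overlap=0)       # tiny chunks: hits arrive without context
balanced = chunk_words(doc, size=40, overlap=10)  # overlap preserves boundary context
```

With `overlap=10`, the last ten words of each chunk reappear at the start of the next, so a sentence that straddles a boundary is still retrievable whole from at least one chunk.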

Technologies Used

LangChain · LangGraph · RAG · Vector Database · Voice AI · OpenAI API · Python · FastAPI
