A full-stack AI-powered customer support chat application built with TypeScript, React, and OpenAI. This application provides intelligent, context-aware responses for an e-commerce store using GPT-3.5-turbo.
- 💬 Real-time AI Chat: Instant responses from an OpenAI-powered support agent
- 💾 Conversation Persistence: Messages stored in SQLite with session management
- 📱 Responsive UI: Clean, modern interface with a professional light theme
- 🛡️ Robust Error Handling: Graceful handling of API failures, timeouts, and invalid inputs
- 📚 Domain Knowledge: Pre-configured with an e-commerce store knowledge base
- ⚡ Input Validation: Client- and server-side validation with sensible limits
- 🎨 Great UX: Auto-scroll, typing indicators, message timestamps in local time
- 🧪 Mock Mode: Test without API keys using pattern-matched responses
Backend:

- Runtime: Node.js 20+
- Language: TypeScript
- Framework: Express.js
- Database: SQLite (better-sqlite3)
- LLM: OpenAI GPT-3.5-turbo
- Dev Tools: tsx (hot reload)

Frontend:

- Framework: React 18
- Build Tool: Vite
- Language: TypeScript
- Styling: CSS3 (custom, no frameworks)
```
spur-ai-agent/
├── backend/
│   ├── src/
│   │   ├── database/
│   │   │   ├── connection.ts   # SQLite connection & initialization
│   │   │   ├── migrate.ts      # Database schema migrations
│   │   │   └── models.ts       # Data models & database queries
│   │   ├── middleware/
│   │   │   └── validation.ts   # Request validation & error handling
│   │   ├── routes/
│   │   │   └── chat.ts         # Chat API endpoints
│   │   ├── services/
│   │   │   ├── knowledge.ts    # E-commerce knowledge base
│   │   │   └── llmService.ts   # OpenAI integration & mock mode
│   │   └── index.ts            # Express server entry point
│   ├── data/                   # SQLite database (auto-created)
│   ├── package.json
│   ├── tsconfig.json
│   └── .env.example            # Environment variables template
│
└── frontend/
    ├── src/
    │   ├── api/
    │   │   └── chat.ts         # Backend API client
    │   ├── components/
    │   │   ├── ChatInput.tsx   # Message input component
    │   │   ├── ChatInput.css
    │   │   ├── Message.tsx     # Message bubble component
    │   │   └── Message.css
    │   ├── hooks/
    │   │   └── useChat.ts      # Chat state management hook
    │   ├── App.tsx             # Main application component
    │   ├── App.css
    │   └── main.tsx            # React entry point
    ├── index.html
    ├── package.json
    ├── tsconfig.json
    └── vite.config.ts
```
- Node.js 20+ installed
- npm
- OpenAI API key - optional; mock mode can be used instead
```bash
# Clone and navigate to project
cd spur-ai-agent

# Run automated installation (installs all dependencies)
bash install.sh

# Configure environment
cd backend
cp .env.example .env
# Edit .env and add your OpenAI API key (or use LLM_PROVIDER=mock for testing)

# Start backend (in terminal 1)
cd ..
bash start-backend.sh

# Start frontend (in terminal 2)
bash start-frontend.sh
```

Open http://localhost:5173 in your browser!
```bash
# Install backend dependencies
cd backend
npm install

# Install frontend dependencies
cd ../frontend
npm install

# Configure backend
cd ../backend
cp .env.example .env
# Edit .env with your API key or use mock mode

# Start backend
npm run dev

# In a new terminal, start frontend
cd frontend
npm run dev
```

Edit backend/.env:
```bash
# LLM Provider: 'openai' or 'mock'
LLM_PROVIDER=openai

# OpenAI API key (only needed if LLM_PROVIDER=openai)
OPENAI_API_KEY=sk-your-key-here

# Server settings
PORT=3000
NODE_ENV=development
DATABASE_PATH=./data/chat.db
CORS_ORIGIN=http://localhost:5173
```

Mock Mode: Set `LLM_PROVIDER=mock` to test without an API key. It provides intelligent pattern-matched responses for common questions.
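For illustration, these variables might be read and validated at startup along these lines (`AppConfig` and `loadConfig` are hypothetical names, not the project's actual code):

```typescript
// Hedged sketch of startup config loading; names are illustrative.
type AppConfig = {
  llmProvider: "openai" | "mock";
  openaiApiKey?: string;
  port: number;
  databasePath: string;
  corsOrigin: string;
};

function loadConfig(env: Record<string, string | undefined>): AppConfig {
  const llmProvider = env.LLM_PROVIDER === "mock" ? "mock" : "openai";
  // An API key is only required when the real provider is selected.
  if (llmProvider === "openai" && !env.OPENAI_API_KEY) {
    throw new Error("OPENAI_API_KEY is required when LLM_PROVIDER=openai");
  }
  return {
    llmProvider,
    openaiApiKey: env.OPENAI_API_KEY,
    port: Number(env.PORT ?? 3000),
    databasePath: env.DATABASE_PATH ?? "./data/chat.db",
    corsOrigin: env.CORS_ORIGIN ?? "http://localhost:5173",
  };
}
```

Failing fast on a missing key (rather than at the first chat request) makes misconfiguration obvious immediately.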
```
┌──────────────────────────────────────────────────────────────┐
│                        React Frontend                        │
│                       (localhost:5173)                       │
│   ┌──────────┐    ┌──────────┐    ┌──────────┐               │
│   │ App.tsx  │──▶│ useChat  │──▶│   API    │               │
│   │          │    │   Hook   │    │  Client  │               │
│   └──────────┘    └──────────┘    └─────┬────┘               │
│        │                                │                    │
│   ┌──────────┐    ┌──────────┐          │                    │
│   │ Message  │    │ChatInput │          │                    │
│   └──────────┘    └──────────┘          │                    │
└─────────────────────────────────────────┼────────────────────┘
                                          │ HTTP POST
                                          │ /api/chat/message
                                          ▼
┌──────────────────────────────────────────────────────────────┐
│                       Express Backend                        │
│                       (localhost:3000)                       │
│  ┌────────────────────────────────────────────────────────┐  │
│  │      Middleware: CORS, Validation, Error Handler       │  │
│  └───────────────────────────┬────────────────────────────┘  │
│                              │                               │
│  ┌───────────────────────────▼────────────────────────────┐  │
│  │      Routes: /api/chat/message, /api/chat/history      │  │
│  └───────────────────────────┬────────────────────────────┘  │
│                              │                               │
│  ┌───────────────────────────▼────────────────────────────┐  │
│  │         LLM Service: OpenAI GPT-3.5 / Mock Mode        │  │
│  └───────────────────────────┬────────────────────────────┘  │
│                              │                               │
│  ┌───────────────────────────▼────────────────────────────┐  │
│  │      Database: SQLite (conversations + messages)       │  │
│  └────────────────────────────────────────────────────────┘  │
└──────────────────────────────────────────────────────────────┘
```
1. Layered Backend Architecture
- Routes: HTTP request/response handling
- Middleware: Validation, CORS, error handling
- Services: Business logic and LLM integration
- Data: Database models and queries
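The layering above can be sketched as a route handler that only orchestrates injected dependencies (interface and function names here are illustrative; the real `routes/chat.ts`, `services/llmService.ts`, and `database/models.ts` may differ):

```typescript
// Hedged sketch of the layered design with hypothetical names.
type Message = { sender: "user" | "ai"; text: string };

interface LlmService {
  reply(history: Message[], text: string): Promise<string>;
}
interface MessageStore {
  append(sessionId: string, msg: Message): void;
  history(sessionId: string): Message[];
}

// The route layer orchestrates: persist user message, call the
// service layer, persist the AI reply. No business logic lives here.
async function handleChatMessage(
  deps: { llm: LlmService; store: MessageStore },
  sessionId: string,
  text: string,
): Promise<string> {
  deps.store.append(sessionId, { sender: "user", text });
  const reply = await deps.llm.reply(deps.store.history(sessionId), text);
  deps.store.append(sessionId, { sender: "ai", text: reply });
  return reply;
}
```

Because dependencies are injected, the handler can be unit-tested with in-memory stubs instead of a live database or LLM.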
2. SQLite Database
- Zero configuration required
- Perfect for prototypes and demos
- File-based, portable
- Easy migration path to PostgreSQL
3. Session Management
- UUID-based session tracking
- Sessions stored in localStorage (frontend)
- No authentication required (as per requirements)
4. Frontend State Management
- Custom `useChat` hook encapsulates all chat logic
- Optimistic UI updates for better UX
- Auto-scroll to latest messages
- Clean separation between UI and logic
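The optimistic-update idea can be expressed as pure state transitions, framework-free (a sketch; the actual `useChat` hook may structure this differently):

```typescript
// Hedged sketch of optimistic updates as pure functions over message state.
type ChatMsg = { id: string; sender: "user" | "ai"; text: string; pending?: boolean };

// Show the user's message immediately, marked pending.
function addOptimistic(msgs: ChatMsg[], text: string, tempId: string): ChatMsg[] {
  return [...msgs, { id: tempId, sender: "user", text, pending: true }];
}

// On server confirmation, clear the pending flag and append the AI reply.
function confirmReply(msgs: ChatMsg[], tempId: string, reply: ChatMsg): ChatMsg[] {
  return [...msgs.map((m) => (m.id === tempId ? { ...m, pending: false } : m)), reply];
}

// On failure, roll the optimistic message back.
function rollback(msgs: ChatMsg[], tempId: string): ChatMsg[] {
  return msgs.filter((m) => m.id !== tempId);
}
```

Keeping these transitions pure makes the hook's behavior easy to test without rendering any components.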
5. Error Handling Strategy
- Multiple validation layers
- Graceful LLM failure handling
- User-friendly error messages
- Global error boundary
POST /api/chat/message

Send a message and get an AI response.
Request:

```json
{
  "message": "What's your return policy?",
  "sessionId": "optional-session-uuid"
}
```

Response:
```json
{
  "reply": "We accept returns within 30 days...",
  "sessionId": "abc-123-def",
  "messageId": "msg-789",
  "timestamp": "2025-12-29T10:30:00.000Z"
}
```

Validation:

- message: Required, 1-2000 characters
- sessionId: Optional UUID string
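These validation rules could be implemented roughly as follows (a sketch; `validateChatRequest` is a hypothetical name and the actual `middleware/validation.ts` may differ):

```typescript
// Hedged sketch of the stated request validation rules.
const UUID_RE =
  /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

// Returns an error string, or null when the body is valid.
function validateChatRequest(body: { message?: unknown; sessionId?: unknown }): string | null {
  if (typeof body.message !== "string" || body.message.trim().length < 1) {
    return "message is required";
  }
  if (body.message.length > 2000) {
    return "message must be at most 2000 characters";
  }
  if (
    body.sessionId !== undefined &&
    (typeof body.sessionId !== "string" || !UUID_RE.test(body.sessionId))
  ) {
    return "sessionId must be a UUID string";
  }
  return null;
}
```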
GET /api/chat/history

Retrieve conversation history for a session.
Response:

```json
{
  "sessionId": "abc-123-def",
  "messages": [
    {
      "id": "msg-1",
      "sender": "user",
      "text": "Hello!",
      "timestamp": "2025-12-29T10:29:00.000Z"
    }
  ],
  "conversationInfo": {
    "created_at": "2025-12-29T10:29:00.000Z",
    "updated_at": "2025-12-29T10:30:00.000Z"
  }
}
```

Health check endpoint.
Response:

```json
{
  "status": "ok",
  "timestamp": "2025-12-29T10:30:00.000Z"
}
```

The application uses OpenAI's GPT-3.5-turbo model with:
- System prompt with e-commerce store context
- Conversation history for context-aware responses
- Knowledge base with FAQs embedded in prompts
- Error handling with fallback messages
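Assembling those pieces into the prompt might look like this (a sketch; `buildMessages` is a hypothetical helper, and the real `llmService.ts` may differ):

```typescript
// Hedged sketch of prompt assembly for the chat-completions request.
type ChatRole = "system" | "user" | "assistant";
type ChatTurn = { role: ChatRole; content: string };

function buildMessages(
  knowledgeBase: string,
  history: { sender: "user" | "ai"; text: string }[],
  userMessage: string,
): ChatTurn[] {
  return [
    // System prompt carries the store context and FAQ knowledge base.
    {
      role: "system",
      content: `You are a support agent for an e-commerce store.\n${knowledgeBase}`,
    },
    // Prior turns give the model conversational context.
    ...history.map<ChatTurn>((m) => ({
      role: m.sender === "user" ? "user" : "assistant",
      content: m.text,
    })),
    { role: "user", content: userMessage },
  ];
}
```

The resulting array would then be passed as the `messages` field of a chat-completions request with model `gpt-3.5-turbo`.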
For testing without API costs:
- Pattern-matched responses for common queries
- Covers shipping, returns, tracking, payments, warranty
- Fallback to generic helpful responses
- Perfect for development and demos
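In spirit, the mock responder is a first-match pattern table (a sketch with illustrative patterns and replies; the real knowledge base in `services/knowledge.ts` is richer):

```typescript
// Hedged sketch of pattern-matched mock replies.
const MOCK_PATTERNS: [RegExp, string][] = [
  [/ship/i, "Standard shipping takes 3-5 business days."],
  [/return|refund/i, "We accept returns within 30 days of delivery."],
  [/track/i, "You can track your order from the link in your confirmation email."],
  [/pay/i, "We accept major credit cards and PayPal."],
];

function mockReply(message: string): string {
  for (const [pattern, reply] of MOCK_PATTERNS) {
    if (pattern.test(message)) return reply;
  }
  // Fallback to a generic helpful response.
  return "Thanks for reaching out! Could you tell me a bit more so I can help?";
}
```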
Configuration:

```bash
LLM_PROVIDER=mock     # Use mock mode
LLM_PROVIDER=openai   # Use OpenAI (requires API key)
```

Database schema:

```sql
CREATE TABLE conversations (
  id TEXT PRIMARY KEY,
  created_at TEXT NOT NULL,
  updated_at TEXT NOT NULL
);

CREATE TABLE messages (
  id TEXT PRIMARY KEY,
  conversation_id TEXT NOT NULL,
  sender TEXT NOT NULL CHECK(sender IN ('user', 'ai')),
  text TEXT NOT NULL,
  timestamp TEXT NOT NULL,
  FOREIGN KEY (conversation_id) REFERENCES conversations(id)
);
```

- Professional light theme with blue gradient accents
- Responsive design works on mobile and desktop
- Message timestamps displayed in local timezone
- Auto-scroll to latest messages
- Loading states with typing indicators
- Error states with user-friendly messages
- Smooth animations and hover effects
```bash
# Backend: start server with hot reload (tsx watch)
cd backend
npm run dev

# Frontend: start Vite dev server with hot reload
cd frontend
npm run dev
```

```bash
# Build backend
cd backend
npm run build
npm start

# Build frontend
cd frontend
npm run build
npm run preview
```

```bash
# In backend/.env
LLM_PROVIDER=mock

# Start servers and test various queries:
# - "How does shipping work?"
# - "What's your return policy?"
# - "Track my order"
# - "Payment methods?"
```

```bash
# In backend/.env
LLM_PROVIDER=openai
OPENAI_API_KEY=sk-your-key

# Start servers and test natural language queries
```

- Check Node version: `node --version` (requires 20+)
- Verify .env file exists in backend/
- Check if port 3000 is available
- Ensure backend is running on port 3000
- Check CORS_ORIGIN in backend/.env matches frontend URL
- Verify no firewall blocking localhost connections
- Delete `backend/data/chat.db` and restart (auto-recreates)
- Check write permissions in backend/data/ folder
- Verify API key is valid
- Check account has credits
- Switch to `LLM_PROVIDER=mock` for testing
- SQLite vs PostgreSQL: Chose SQLite for simplicity, would use PostgreSQL for production
- No Authentication: Simplified per requirements, but easy to add JWT auth
- Client-side Sessions: Sessions in localStorage, would use server-side sessions in production
- No Rate Limiting: Would add rate limiting for production API
- Simple Error Logging: Would integrate structured logging (e.g., Winston) for production
- User Authentication: JWT-based auth with login/register
- Real-time Updates: WebSocket for live chat updates
- Message Search: Full-text search across conversation history
- File Uploads: Support for image/document uploads
- Admin Dashboard: Analytics and conversation management
- Multi-language: i18n support for multiple languages
- Voice Input: Speech-to-text integration
- Sentiment Analysis: Track customer satisfaction
- Export Conversations: PDF/CSV export functionality
- Canned Responses: Quick reply templates for common questions
MIT
This is a home assignment project. Not accepting contributions at this time.
Built with ❤️ using TypeScript, React, and OpenAI