Chatbot Interface Pages¶
Purpose: User-facing chatbot interaction interfaces
Category: Chatbot UIs
Pages: 3 pages
Complexity: Interactive chat interfaces with real-time communication
Overview¶
Pages where end-users interact with the chatbots. Each type (3D, Text, Voice) has unique UI/UX optimized for that modality.
1. 3D Chatbot Interface¶
File: src/app/3d-chatbot/page.tsx (524 lines)
Route: /3d-chatbot
Type: Protected
Purpose: Interactive 3D avatar chatbot with tab-based admin interface
⭐ Most Complex UI Page - Three.js + Multi-Tab System
[Previously documented in detail - see earlier documentation]
2. Text Chatbot Interface¶
File: src/app/text-chatbot/page.tsx (225 lines)
Route: /text-chatbot
Type: Public (no auth required)
Purpose: Clean text-only chat interface
🎯 Simplest Chat UI - Pure Text Messaging
State Variables (5)¶
- messages (Message[]) - Chat history with timestamps
- inputMessage (string) - Current text input
- isLoading (boolean) - Bot response loading state
- messagesEndRef (useRef<HTMLDivElement>) - Auto-scroll target
- chatContainerRef (useRef<HTMLDivElement>) - Chat container
Message Interface¶
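The interface itself isn't reproduced in this excerpt; a minimal sketch, inferred from the fields used in the code below (the real declaration may differ):
// Inferred Message shape for the text chatbot (assumption, not the verbatim source)
type Message = {
  id: number;               // Date.now()-based identifier
  text: string;             // Message body
  sender: "user" | "bot";   // Message author
  timestamp?: string;       // HH:MM display string (omitted by the error fallback below)
};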
Initial State¶
Default Bot Message:
const [messages, setMessages] = useState<Message[]>([
{
id: 1,
text: "Hi, How can I help you today?",
sender: "bot",
timestamp: formatMessageTime(),
},
]);
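formatMessageTime() is not shown in this excerpt; assuming it produces the HH:MM display string used for timestamps, a plausible sketch:
// Hypothetical helper: current local time as HH:MM (the real implementation may differ)
const formatMessageTime = (): string =>
  new Date().toLocaleTimeString([], { hour: "2-digit", minute: "2-digit", hour12: false });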
Message Flow¶
1. User Submits:
const userMessage: Message = {
id: Date.now(),
text: inputMessage,
sender: "user",
timestamp: formatMessageTime(),
};
setMessages((prev) => [...prev, userMessage]);
2. API Call:
const formData = new FormData();
formData.append("user_id", userId || "default_user");
formData.append("project_id", projectId || "");
formData.append("session_id", generateSessionId());
formData.append("question", userMessage.text);
const response = await fetch(
`${process.env.NEXT_PUBLIC_API_BASE_URL}/v2/get-response-text-chatbot`,
{ method: "POST", body: formData }
);
3. Bot Response:
const data = await response.json();
const botResponse: Message = {
id: Date.now() + 1,
text: data.text || "Sorry, I couldn't process that request.",
sender: "bot",
timestamp: formatMessageTime(),
};
setMessages((prev) => [...prev, botResponse]);
Session Management¶
Random Session ID Generation:
const generateSessionId = (): string => {
return (
Math.random().toString(36).substring(2, 15) +
Math.random().toString(36).substring(2, 15)
);
};
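Math.random() makes no uniqueness guarantee; if collision resistance mattered, crypto.randomUUID() would be a drop-in alternative (a suggestion, not what the page currently does):
// Alternative sketch: collision-resistant session IDs via the Web Crypto API
const generateSessionId = (): string => crypto.randomUUID();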
Session Storage:
- user_id - From sessionStorage (or "default_user")
- selected_project_id - Project context
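A sketch of how these values are presumably read before the API call (key names per the list above; the exact code may differ):
// Assumed sessionStorage reads for the identity/context sent with each request
const userId = sessionStorage.getItem("user_id");
const projectId = sessionStorage.getItem("selected_project_id");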
Auto-Scroll Feature¶
useEffect(() => {
scrollToBottom();
}, [messages]);
const scrollToBottom = (): void => {
messagesEndRef.current?.scrollIntoView({ behavior: "smooth" });
};
Scroll Target:
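The target is an empty element rendered after the last message bubble, roughly:
{/* Invisible anchor that scrollIntoView() scrolls to */}
<div ref={messagesEndRef} />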
UI Components¶
Header:
- Orange gradient background (orange-500 to orange-600)
- AI Assistant icon (chat bubble SVG)
- Green "Online" status indicator
Chat Messages:
- User messages: Orange gradient bubble, right-aligned
- Bot messages: White bubble, left-aligned
- Avatar icons for both user and bot
- Timestamp display (HH:MM format)
Loading Indicator:
{isLoading && (
  <div className="bg-white px-4 py-3 rounded-2xl">
    <div className="flex space-x-2">
      <div className="w-2 h-2 bg-orange-500 rounded-full animate-bounce"></div>
      <div
        className="w-2 h-2 bg-orange-500 rounded-full animate-bounce"
        style={{ animationDelay: "0.2s" }}
      ></div>
      <div
        className="w-2 h-2 bg-orange-500 rounded-full animate-bounce"
        style={{ animationDelay: "0.4s" }}
      ></div>
    </div>
  </div>
)}
Input Form:
- Rounded input field
- Send button with SVG icon
- Disabled state when empty or loading
- "Powered by Askgalore" footer
Styling Details¶
Container:
- Max width: 512px (max-w-md)
- White rounded card
- Shadow-2xl for depth
- Centered on gradient background (gray-900 to black)
Message Bubbles:
- User: bg-gradient-to-r from-orange-500 to-orange-600
- Bot: bg-white border border-gray-100
- Rounded corners (rounded-2xl)
- Tail removed on sender side (rounded-br-none / rounded-bl-none)
Responsive:
- Padding adjustments
- Mobile-optimized spacing
- Smooth scrolling
Error Handling¶
API Failure:
catch (error) {
const errorMessage: Message = {
id: Date.now() + 1,
text: "Sorry, there was an error processing your request.",
sender: "bot"
};
setMessages(prev => [...prev, errorMessage]);
}
API Integration¶
Endpoint: POST /v2/get-response-text-chatbot
Request:
- user_id (string)
- project_id (string)
- session_id (string)
- question (string)
Response:
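Per the API comparison table below, the body carries a single text field; an approximate shape (the type name is illustrative, not from the source):
// Approximate response shape for the text chatbot endpoint
type TextChatbotResponse = {
  text: string; // Bot answer; the UI falls back to an apology string when missing
};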
3. Voice Chatbot Interface¶
File: src/app/voice-chatbot/page.tsx (384 lines)
Route: /voice-chatbot
Type: Public
Purpose: Voice-first chatbot with speech recognition and audio playback
🎤 Voice-Enabled UI - Web Speech API + Audio Playback
State Variables (8)¶
- messages (Message[]) - Chat history with audio
- inputMessage (string) - Text input
- isLoading (boolean) - Bot processing
- isListening (boolean) - Speech recognition active
- currentlyPlaying (number | null) - Playing message ID
- messagesEndRef (useRef) - Auto-scroll
- audioRef (useRef<HTMLAudioElement>) - Audio player
- recognitionRef (useRef<any>) - Speech recognition instance
Enhanced Message Interface¶
type Message = {
id: number;
text: string;
sender: "user" | "bot";
audio?: string; // Base64 audio for bot responses
};
Web Speech API Integration¶
Initialization:
useEffect(() => {
const SpeechRecognition =
window.SpeechRecognition || window.webkitSpeechRecognition;
if (SpeechRecognition) {
recognitionRef.current = new SpeechRecognition();
recognitionRef.current.continuous = false;
recognitionRef.current.interimResults = false;
recognitionRef.current.lang = "en-US";
recognitionRef.current.onresult = (event: any) => {
const transcript = event.results[0][0].transcript;
setInputMessage(transcript);
// Auto-submit after 500ms
setTimeout(() => handleSubmitVoiceInput(transcript), 500);
};
recognitionRef.current.onend = () => setIsListening(false);
recognitionRef.current.onerror = (event: any) => {
console.error("Speech recognition error", event.error);
setIsListening(false);
};
}
}, []);
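window.SpeechRecognition and window.webkitSpeechRecognition are not part of the default TypeScript DOM typings, so the page presumably relies on a cast or an ambient declaration along these lines (an assumption, shown for completeness):
// Hypothetical ambient declaration so the vendor-prefixed constructor type-checks
declare global {
  interface Window {
    SpeechRecognition?: any;
    webkitSpeechRecognition?: any;
  }
}
export {}; // keep the file a module so `declare global` is allowed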
Toggle Voice Input:
const toggleVoiceInput = () => {
if (!recognitionRef.current) {
alert("Speech recognition is not supported in your browser.");
return;
}
if (isListening) {
recognitionRef.current.stop();
} else {
setInputMessage("");
recognitionRef.current.start();
setIsListening(true);
}
};
Audio Playback System¶
Convert Base64 to Audio:
const playAudio = (messageId: number, base64Audio: string) => {
if (!audioRef.current) return;
// Convert base64 to Blob
const binaryString = atob(base64Audio);
const len = binaryString.length;
const bytes = new Uint8Array(len);
for (let i = 0; i < len; i++) {
bytes[i] = binaryString.charCodeAt(i);
}
const blob = new Blob([bytes], { type: "audio/mpeg" });
const audioUrl = URL.createObjectURL(blob);
audioRef.current.src = audioUrl;
audioRef.current
.play()
.then(() => setCurrentlyPlaying(messageId))
.catch((e) => console.error("Audio playback failed:", e));
};
Toggle Playback:
const toggleAudioPlayback = (messageId: number, base64Audio: string) => {
if (currentlyPlaying === messageId) {
audioRef.current?.pause();
setCurrentlyPlaying(null);
} else {
playAudio(messageId, base64Audio);
}
};
Audio Element Events:
useEffect(() => {
if (audioRef.current) {
audioRef.current.onended = () => setCurrentlyPlaying(null);
audioRef.current.onpause = () => setCurrentlyPlaying(null);
}
}, []);
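Note that playAudio() above never revokes the blob URLs it creates, so each playback leaves one behind until the page unloads; if that mattered, the ended handler could release it (a suggested refinement, not present in the source):
// Suggested refinement: free the blob URL once playback finishes
audioRef.current.onended = () => {
  setCurrentlyPlaying(null);
  if (audioRef.current?.src.startsWith("blob:")) {
    URL.revokeObjectURL(audioRef.current.src);
  }
};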
API Integration¶
Endpoint: POST /v2/get-response-voice-chatbot
Request:
- user_id (string)
- project_id (string)
- session_id (string)
- question (string)
Response:
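Per the API comparison table below, the body carries the answer text plus base64 audio; an approximate shape (the type name is illustrative, not from the source):
// Approximate response shape for the voice chatbot endpoint
type VoiceChatbotResponse = {
  text: string;   // Bot answer
  audio?: string; // Base64-encoded MP3, auto-played when present
};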
Handle Response:
const data = await response.json();
const botResponse: Message = {
id: Date.now() + 1,
text: data.text || "Sorry, I couldn't process that request.",
sender: "bot",
audio: data.audio, // Include audio
};
setMessages((prev) => [...prev, botResponse]);
// Auto-play audio if available
if (data.audio) {
playAudio(botResponse.id, data.audio);
}
UI Components¶
Header:
- Purple-blue gradient (purple-600 to blue-500)
- Microphone icon
- "Voice Assistant" title
- "Active now" status
Voice Input Button:
<button
type="button"
className={isListening ? "text-red-500 animate-pulse" : "text-gray-400 hover:text-blue-500"}
onClick={toggleVoiceInput}
>
<svg><!-- Microphone icon --></svg>
</button>
Message with Audio:
{message.sender === "bot" && message.audio && (
<button
onClick={() => toggleAudioPlayback(message.id, message.audio!)}
className={currentlyPlaying === message.id
? "bg-purple-100 text-purple-800"
: "bg-blue-100 text-blue-800"}
>
{currentlyPlaying === message.id ? (
<>
<svg><!-- Pause icon --></svg>
<span>Pause</span>
</>
) : (
<>
<svg><!-- Play icon --></svg>
<span>Play</span>
</>
)}
</button>
)}
Loading Indicator:
- 3 animated dots (blue-purple gradient)
- Animation delay: 0s, 0.2s, 0.4s
Styling Details¶
Theme: Purple-blue gradient
- User messages: bg-gradient-to-r from-blue-500 to-purple-600
- Bot messages: White with border
- Icons: Purple-blue gradient
Input Field:
- Microphone button (left)
- Text input (center)
- Send button (right)
- Disabled during voice recognition
Placeholder:
- Normal: "Type your message..."
- Listening: "Listening..."
Browser Compatibility¶
Speech Recognition:
- Chrome 25+
- Edge 79+
- Safari 14.1+ (with webkit prefix)
- Firefox: Not supported (shows alert)
Audio Playback:
- All modern browsers (HTML5 audio)
- MP3 format support required
Cleanup on Unmount¶
return () => {
if (recognitionRef.current) {
recognitionRef.current.abort();
}
if (audioRef.current) {
audioRef.current.pause();
audioRef.current.src = "";
}
};
Summary Comparison¶
| Feature | 3D Chatbot | Text Chatbot | Voice Chatbot |
|---|---|---|---|
| Lines | 524 | 225 | 384 |
| Authentication | Protected | Public | Public |
| 3D Avatar | ✅ Three.js | ❌ N/A | ❌ N/A |
| Voice Input | Optional | ❌ No | ✅ Web Speech API |
| Voice Output | ✅ Yes | ❌ No | ✅ Base64 MP3 |
| Auto-play Audio | Possible | ❌ No | ✅ Yes |
| Speech Recognition | No | ❌ No | ✅ Yes |
| Audio Playback | Yes | ❌ No | ✅ Play/Pause Toggle |
| Theme Color | Orange | Orange | Purple-Blue |
| Session ID | Required | Generated | Generated |
| API Endpoint | Custom | /v2/get-response-text-chatbot | /v2/get-response-voice-chatbot |
| Complexity | ⭐⭐⭐⭐⭐ | ⭐⭐ | ⭐⭐⭐⭐ |
Common Features (All 3)¶
- Message History - Array-based chat state
- Auto-Scroll - Scroll to bottom on new message
- Loading Indicator - Animated dots
- Timestamps - HH:MM format
- Error Handling - Fallback error messages
- Responsive Design - Mobile-optimized
- Real-time Chat - Async API calls
- Session Storage - user_id, project_id
API Comparison¶
| Chatbot Type | Endpoint | Request Body | Response |
|---|---|---|---|
| Text | /v2/get-response-text-chatbot | FormData (user_id, project_id, session_id, question) | { text } |
| Voice | /v2/get-response-voice-chatbot | FormData (same) | { text, audio } |
| 3D | Custom (multiple endpoints) | Various | Complex object |
Total Interface Code: ~1,133 lines (3D: 524 + Text: 225 + Voice: 384)
"From simple text to rich voice, every conversation documented."