Chat - AI Conversation

Converse with your knowledge base. An AI conversation tool from Alchemy Lab.

Core Features

  • 💬 Multi-model Support: Gemini Flash / LongCat Flash / LongCat Thinking
  • 📎 File Upload: Supports images and documents (PDF, code, etc.)
  • 💾 Session Management: Auto-save conversation history, grouped by time
  • 🎯 Smart Suggestions: Preset prompts (summarize, extract, explain, outline)
  • 🔒 Access Control: Optional password protection

Quick Start

Web UI Usage

  1. Visit alchemy.izoa.fun/chat
  2. Select model (Gemini 3 Flash Preview recommended)
  3. Type message or upload files
  4. Start conversation

Preset Prompts:

  • Summarize key points: Apply 4 principles to refine information
  • Extract reusable insights: Extract problem essence and solutions
  • Create structured outline: Organize content hierarchically
  • Explain in simple terms: Explain complex concepts in plain language

API Call

curl -X POST https://api.alchemy.izoa.fun/api/chat \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Explain quantum computing simply"}
    ],
    "model": "gemini-3-flash-preview",
    "stream": true
  }'

Response (Streaming):

data: {"choices":[{"delta":{"content":"Quantum"}}]}
data: {"choices":[{"delta":{"content":" computing"}}]}
...
data: [DONE]
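
The stream can be consumed in the browser with plain fetch. A minimal sketch, assuming the endpoint emits OpenAI-style SSE lines exactly as shown above (error handling is kept to the essentials):

// Sketch: read the SSE stream and accumulate content deltas as they arrive.
async function streamChat(prompt: string): Promise<string> {
  const res = await fetch("https://api.alchemy.izoa.fun/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      messages: [{ role: "user", content: prompt }],
      model: "gemini-3-flash-preview",
      stream: true,
    }),
  });
  if (!res.ok || !res.body) throw new Error(`Request failed: ${res.status}`);

  const reader = res.body.pipeThrough(new TextDecoderStream()).getReader();
  let buffer = "";
  let full = "";
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    buffer += value;
    // SSE events are newline-delimited; keep any trailing partial line in the buffer.
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? "";
    for (const line of lines) {
      if (!line.startsWith("data: ")) continue;
      const payload = line.slice("data: ".length).trim();
      if (payload === "[DONE]") return full;
      full += JSON.parse(payload).choices?.[0]?.delta?.content ?? "";
    }
  }
  return full;
}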

API Reference

POST /api/chat - Send Message

Request Body:

{
  messages: Array<{
    role: "user" | "assistant";
    content: string;
    attachments?: {
      images?: Array<{id: string; preview: string}>;
      files?: Array<{name: string; size: number; type: string; content?: string}>;
    };
  }>;
  model: "gemini-3-flash-preview" | "LongCat-Flash-Chat" | "LongCat-Flash-Thinking";
  stream?: boolean;  // Default true
}
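
For example, a request that attaches an extracted text file alongside the question could look like this (a sketch; the attachment shape follows the schema above, and the file content is illustrative):

{
  "messages": [
    {
      "role": "user",
      "content": "Summarize this document",
      "attachments": {
        "files": [
          { "name": "notes.md", "size": 2048, "type": "text/markdown", "content": "# Meeting notes..." }
        ]
      }
    }
  ],
  "model": "gemini-3-flash-preview",
  "stream": false
}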

Response: OpenAI-compatible; streamed as SSE when stream is true, returned as a single JSON object when stream is false

// stream=true (default)
data: {"id":"chatcmpl-xxx","choices":[{"delta":{"content":"text"}}]}
data: [DONE]

// stream=false
{
  "id": "chatcmpl-xxx",
  "choices": [{
    "message": {
      "role": "assistant",
      "content": "complete response"
    }
  }]
}
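
With stream set to false, the full reply can be read in one call. A short sketch against the same endpoint, parsing the non-streaming shape shown above:

const res = await fetch("https://api.alchemy.izoa.fun/api/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    messages: [{ role: "user", content: "Explain quantum computing simply" }],
    model: "gemini-3-flash-preview",
    stream: false,
  }),
});
const data = await res.json();
console.log(data.choices[0].message.content);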

POST /api/chat/verify - Verify Password

If the Chat feature has password protection enabled, verify the password first:

curl -X POST https://api.alchemy.izoa.fun/api/chat/verify \
  -H "Content-Type: application/json" \
  -d '{"password": "your-password"}'

Response:

{
  "success": true
}
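
A sketch of the verification gate in client code, assuming only the success flag shown above (how the verified state is carried into later requests, e.g. a cookie or token, depends on the deployment):

// Sketch: verify the password before calling POST /api/chat.
const verifyRes = await fetch("https://api.alchemy.izoa.fun/api/chat/verify", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ password: "your-password" }),
});
const { success } = await verifyRes.json();
if (!success) {
  throw new Error("Password verification failed");
}
// Proceed to POST /api/chat once verification succeeds.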

Supported Models

| Model | Description | Features |
|-------|-------------|----------|
| Gemini 3 Flash Preview | Google's latest fast model | Fast, versatile, recommended |
| LongCat Flash Chat | Optimized conversation model | Chat-optimized |
| LongCat Flash Thinking | Reasoning model | Deep thinking, logical reasoning |

Attachment Support

Image Upload

  • Formats: JPEG, PNG, WebP, GIF
  • Size: Max 10MB per image
  • Quantity: Max 10 images/message
  • Preview: Thumbnail and fullscreen view support

Example:

// Browser upload: send the image as multipart form data and check the result.
const formData = new FormData();
formData.append('file', imageFile);

const res = await fetch('/api/chat/upload', {
  method: 'POST',
  body: formData,
});
if (!res.ok) throw new Error(`Upload failed: ${res.status}`);

File Upload

  • Formats: PDF, TXT, Markdown, code files, etc.
  • Size: Max 5MB per file
  • Quantity: Max 5 files/message
  • Processing: Auto-extract text content

Supported File Types:

  • Documents: .pdf, .txt, .md
  • Code: .js, .ts, .py, .go, .java, .cpp, etc.
  • Config: .json, .yaml, .toml, .xml
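
A sketch of preparing file attachments on the client, enforcing the documented limits and reading text-like files inline via the content field from the request schema (PDF text extraction itself happens server-side, per the processing note above):

// Sketch: validate files against the documented limits and build attachment objects.
const MAX_FILE_BYTES = 5 * 1024 * 1024; // 5MB per file
const MAX_FILES = 5;                    // 5 files per message

async function toAttachments(files: File[]) {
  if (files.length > MAX_FILES) throw new Error(`At most ${MAX_FILES} files per message`);
  return Promise.all(
    files.map(async (file) => {
      if (file.size > MAX_FILE_BYTES) throw new Error(`${file.name} exceeds the 5MB limit`);
      // Text-like files can be read directly in the browser; PDFs are extracted server-side.
      const isTextLike = /\.(txt|md|js|ts|py|go|java|cpp|json|yaml|toml|xml)$/i.test(file.name);
      return {
        name: file.name,
        size: file.size,
        type: file.type,
        content: isTextLike ? await file.text() : undefined,
      };
    })
  );
}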

Session Management

Auto-save

All conversations are saved automatically to the browser's local storage (IndexedDB):

  • Grouped by time (Today, Yesterday, Last 7 days, Last 30 days, Older)
  • Conversation titles are auto-generated from the first message
  • Sessions can be renamed and deleted
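
The time grouping described above can be reproduced with a small helper like this (a sketch; the session shape is assumed, and only an updatedAt timestamp matters for grouping):

// Sketch: bucket sessions into the sidebar groups by age.
type Session = { id: string; title: string; updatedAt: number };

function groupLabel(updatedAt: number, now = Date.now()): string {
  const days = Math.floor((now - updatedAt) / 86_400_000);
  if (days === 0) return "Today";
  if (days === 1) return "Yesterday";
  if (days <= 7) return "Last 7 days";
  if (days <= 30) return "Last 30 days";
  return "Older";
}

function groupSessions(sessions: Session[]): Map<string, Session[]> {
  const groups = new Map<string, Session[]>();
  for (const s of sessions) {
    const label = groupLabel(s.updatedAt);
    const bucket = groups.get(label) ?? [];
    bucket.push(s);
    groups.set(label, bucket);
  }
  return groups;
}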

Export Conversation

To export a conversation as Markdown, click the "Export" button in the top-right corner of the chat interface.

Export format:

# Conversation: Explain quantum computing

**User** (2026-01-27 14:30)
Explain quantum computing simply

**Assistant** (2026-01-27 14:30)
Quantum computing uses quantum bits...
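
A sketch of producing that Markdown on the client (the message and timestamp fields are assumed; the output mirrors the example above):

// Sketch: serialize a conversation to the export Markdown format.
type ExportMessage = { role: "user" | "assistant"; content: string; createdAt: number };

function exportConversation(title: string, messages: ExportMessage[]): string {
  const lines = [`# Conversation: ${title}`, ""];
  for (const m of messages) {
    const label = m.role === "user" ? "User" : "Assistant";
    // Timestamp formatted as UTC for simplicity; swap in a local formatter if needed.
    const time = new Date(m.createdAt).toISOString().slice(0, 16).replace("T", " ");
    lines.push(`**${label}** (${time})`, m.content, "");
  }
  return lines.join("\n");
}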

Tech Stack

  • Frontend: Next.js 15 + React 18 + TypeScript
  • AI SDK: Vercel AI SDK (OpenAI-compatible)
  • State Management: Zustand + IndexedDB
  • UI Components: shadcn/ui + Tailwind CSS
  • Markdown Rendering: react-markdown + remark-gfm
  • Code Highlighting: Shiki
  • Backend Proxy: new-api (unified interface)

Use Cases

1. Document Summarization

Upload technical docs → Use "Summarize key points" prompt → Get structured summary

2. Code Review

Upload code file → Ask "What's wrong with this code?" → Get improvement suggestions

3. Knowledge Extraction

Upload meeting notes → Use "Extract reusable insights" prompt → Get action items and decisions

4. Learning Assistant

Upload paper PDF → Use "Explain in simple terms" prompt → Get plain-language explanation

Configuration

Environment Variables

# .env
CHAT_PASSWORD=your-secure-password  # Optional, leave empty to disable password protection
NEW_API_URL=https://your-new-api-instance.com
NEW_API_KEY=sk-xxx

Model Configuration

Configure available models in ui/src/config/models.ts:

export const CHAT_MODELS = [
  {
    id: "gemini-3-flash-preview",
    name: "Gemini 3 Flash (Preview)",
    description: "Fast & Versatile",
  },
  // ...more models
];

Performance Metrics

| Metric | Value |
|--------|-------|
| First Response Latency | ~200-500 ms |
| Streaming Speed | ~50-100 tokens/s |
| Image Upload Speed | ~1-3 s per image |
| Session Load Time | ~50-200 ms |
| Memory Usage | ~100-300 MB |

Limits and Constraints

  • Message Length: Max 32k tokens per message
  • Context Window: Varies by model (Gemini ~1M tokens)
  • Image Quantity: Max 10 images/message
  • File Size: PDF max 5MB, images max 10MB
  • Session Quantity: Unlimited (stored locally)

Privacy and Security

  • Local Storage: All conversation history is stored locally in the browser and is not uploaded to the server
  • Password Protection: Optional password check to prevent unauthorized access
  • Data Transmission: All traffic is encrypted over HTTPS
  • File Processing: Uploaded files are processed only temporarily during the session

Troubleshooting

Issue: Cannot Send Message

Cause: The model API is unavailable or the API key is invalid

Solution:

  1. Check backend logs: docker logs alchemy-api
  2. Verify NEW_API_KEY and NEW_API_URL
  3. Try switching to another model

Issue: Image Upload Failed

Cause: File too large or format not supported

Solution:

  • Ensure image <10MB
  • Use supported formats (JPEG, PNG, WebP, GIF)
  • Try compressing image

Issue: Response Stops Midway

Cause: Network interruption or timeout

Solution:

  • Refresh the page and retry
  • Check your network connection
  • Click the "Stop" button, then resend the message

Related Resources

Feedback and Support