Features
A complete overview of everything Private Chat Hub can do.
Multi-Backend AI Support
Private Chat Hub supports four distinct AI backend types. Switch between them from the settings screen at any time:
- Ollama – Connect to any self-hosted Ollama server over your local network. Supports the full Ollama model library: Llama 3, Mistral, Gemma, Qwen, DeepSeek, Phi, and more. Full tool-calling and vision support for compatible models.
- LM Studio – Connect to LM Studio's local server running any GGUF model. Vision and multimodal input supported where the loaded model allows.
- OpenCode – Route requests through a self-hosted opencode server to cloud providers (OpenAI GPT-4o, Anthropic Claude, Google Gemini, and more). You supply your own API keys; traffic goes through your own server.
- On-Device (LiteRT) – Run Gemma 3 or Gemini Nano locally on the Android device with Google's LiteRT runtime. Completely offline; no network requests are made.
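The self-hosted backends expose well-known local HTTP endpoints: Ollama serves its native API on port 11434, and LM Studio serves an OpenAI-compatible API on port 1234. A minimal sketch of how a client might map a backend choice to its chat endpoint; the `Backend` type and `BACKENDS` table are illustrative, not the app's actual code:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Backend:
    name: str
    base_url: str   # server root, e.g. http://localhost:11434
    chat_path: str  # path of the chat endpoint

# Default local endpoints: Ollama's native API on port 11434,
# LM Studio's OpenAI-compatible server on port 1234.
BACKENDS = {
    "ollama": Backend("ollama", "http://localhost:11434", "/api/chat"),
    "lmstudio": Backend("lmstudio", "http://localhost:1234", "/v1/chat/completions"),
}

def chat_url(backend: str) -> str:
    """Resolve the full chat URL for a configured backend."""
    b = BACKENDS[backend]
    return b.base_url + b.chat_path
```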
Real-Time Streaming Responses
Responses appear token-by-token as the model generates them, just like a human typing. The app handles streaming for all supported backends:
- Ollama – newline-delimited JSON stream
- LM Studio – Server-Sent Events (SSE)
- OpenCode – SSE event stream
- On-Device – live inference callbacks
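The difference between the two wire formats is small; a sketch of parsing both into a plain token stream, as pure functions over already-received lines (function names are illustrative, not the app's API):

```python
import json
from typing import Iterable, Iterator

def tokens_from_ndjson(lines: Iterable[str]) -> Iterator[str]:
    """Ollama-style stream: one complete JSON object per line."""
    for line in lines:
        if not line.strip():
            continue
        obj = json.loads(line)
        if obj.get("done"):      # final object signals end of stream
            break
        yield obj["message"]["content"]

def tokens_from_sse(lines: Iterable[str]) -> Iterator[str]:
    """OpenAI-compatible SSE stream, as served by LM Studio."""
    for line in lines:
        if not line.startswith("data:"):
            continue             # skip comments and blank keep-alives
        data = line[len("data:"):].strip()
        if data == "[DONE]":     # sentinel terminating the stream
            break
        delta = json.loads(data)["choices"][0]["delta"]
        yield delta.get("content", "")
```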
Tool Calling / Function Calling
AI models can invoke built-in tools during a conversation to retrieve real-world information or perform actions. Tool calling is supported on Ollama (and cloud models via OpenCode).
Built-in tools:
- Current Date & Time β Returns the current timestamp
- Get Current Location β Returns the device's approximate location via IP geolocation
- Web Search β Searches the web using Jina AI and returns results with clickable source links
- Fetch URL β Retrieves and returns the contents of any web page or URL
- Show Notification β Sends a local Android notification from the AI response
Tools can be individually enabled or disabled per conversation in Chat Settings → Tools. The app uses an agentic loop: if the model requests a tool call, the app executes it and sends the result back, continuing until a final text response is produced.
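The agentic loop described above can be sketched as follows. The `model_step` interface (returning either a tool request or final text) is a simplification for illustration, not the app's internal API; a real loop would also record the assistant's tool-call message:

```python
def run_agent_loop(model_step, tools, user_msg, max_turns=5):
    """Repeatedly call the model; execute requested tools and feed the
    results back until the model produces a final text answer."""
    messages = [{"role": "user", "content": user_msg}]
    for _ in range(max_turns):
        reply = model_step(messages)
        if "tool" in reply:
            # Execute the requested built-in tool and append its result.
            result = tools[reply["tool"]](**reply["args"])
            messages.append({"role": "tool", "content": str(result)})
            continue
        return reply["content"]
    raise RuntimeError("tool loop did not converge")
```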
Web Search Integration
The built-in web search tool uses Jina AI to fetch live search results. When the AI uses web search:
- Results include titles, snippets, and URLs
- Source links are rendered as tappable links in the response
- Works on any model that supports tool calling
- No Jina API key required for basic usage
Vision & Image Input
Attach images and documents directly to your chat messages:
- Pick photos from the gallery or take a new photo with the camera
- Attach files (PDFs, text, documents) for document-based Q&A
- Vision models (e.g. llava, gemma3, llama3.2-vision) can describe, analyze, and reason about attached images
- Images are encoded locally and sent to the model; they never leave your network
- Works with Ollama vision models and LM Studio vision models
- Receive shared images from other Android apps via Share intent
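For Ollama, attached images travel as base64 strings in the message's `images` field; a minimal sketch of building such a message (the helper name is illustrative):

```python
import base64

def vision_message(text: str, image_bytes: bytes) -> dict:
    """Build an Ollama chat message with an attached image.
    Ollama accepts images as base64 strings in the 'images' list."""
    return {
        "role": "user",
        "content": text,
        "images": [base64.b64encode(image_bytes).decode("ascii")],
    }
```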
Text-to-Speech (TTS)
Have AI responses read aloud using your device's built-in TTS engine:
- Tap the speaker icon on any message to start playback
- Streaming TTS: speech begins playing before the full response is complete
- Tap again to stop playback at any time
- Uses Android's native TTS engine (no additional apps required)
- Works with all backends and all models
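Streaming TTS works by cutting the token stream at sentence boundaries so playback can start before generation finishes; a simplified sketch of that chunking (the app's actual rules may differ):

```python
import re

# A sentence ends at ., !, or ? followed by whitespace.
SENTENCE_END = re.compile(r"(?<=[.!?])\s+")

def sentences_for_tts(token_stream):
    """Yield complete sentences from an incremental token stream so
    each one can be handed to the TTS engine as soon as it is ready."""
    buf = ""
    for tok in token_stream:
        buf += tok
        parts = SENTENCE_END.split(buf)
        for sent in parts[:-1]:   # all but the last part are complete
            yield sent
        buf = parts[-1]           # keep the unfinished tail
    if buf.strip():
        yield buf                 # flush whatever remains at the end
```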
Model Comparison
Run the same prompt against two or more models simultaneously and compare responses side-by-side:
- Open the comparison screen from the chat menu
- Select any two models (can mix backends, e.g. Ollama vs LM Studio)
- Responses appear in parallel columns, streaming in real time
- Quickly evaluate quality, speed, and style differences between models
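Side-by-side comparison amounts to fanning one prompt out to several backends concurrently; an asyncio sketch with illustrative names, where each model is modeled as an async callable:

```python
import asyncio

async def compare(prompt, backends):
    """Send the same prompt to every backend concurrently and collect
    the replies keyed by backend name for side-by-side display."""
    names = list(backends)
    replies = await asyncio.gather(*(backends[n](prompt) for n in names))
    return dict(zip(names, replies))
```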
System Prompt per Conversation
Each conversation can have its own system prompt (AI persona / instructions):
- Set or edit the system prompt in Chat Settings
- The prompt is sent to the model at the start of every request
- Persisted with the conversation so it survives app restarts
- Examples: "You are a Python expert", "Respond only in French", "Be concise"
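Sending the system prompt at the start of every request is a small transform over the stored history; a sketch (the helper name is illustrative):

```python
def with_system_prompt(history, system_prompt):
    """Prepend the conversation's system prompt, if any, to the message
    history before each request to the model."""
    msgs = [{"role": "system", "content": system_prompt}] if system_prompt else []
    return msgs + list(history)
```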
Project Workspaces
Organise conversations into named projects or topics:
- Create projects from the conversation list screen
- Assign conversations to projects for easy navigation
- Filter the conversation list by project
- Ideal for separating work, personal, and experimental chats
Thinking / Extended Reasoning
Support for models that expose their internal reasoning steps:
- Thinking traces are displayed in a collapsible section above the final answer
- Works with Ollama models that support the think parameter (e.g. deepseek-r1, qwq)
- Helps you verify the model's reasoning process
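Some reasoning models emit their trace inline, wrapped in `<think>` tags, rather than as a separate structured field. A sketch of splitting such a response into trace and final answer; the tag format is an assumption for illustration, not a statement of how the app parses it:

```python
import re

THINK_BLOCK = re.compile(r"<think>(.*?)</think>\s*", re.DOTALL)

def split_thinking(text):
    """Split a response into (thinking_trace, final_answer) for models
    that wrap their reasoning in <think> tags."""
    m = THINK_BLOCK.search(text)
    if not m:
        return "", text
    return m.group(1).strip(), THINK_BLOCK.sub("", text, count=1).strip()
```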
Offline & Message Queuing
- On-device models work completely offline; no network required
- Network connectivity is monitored in real time
- Messages sent while offline are queued locally and delivered automatically when reconnected
- Connection status is displayed in the app's status bar
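The queuing behavior amounts to a small state machine: buffer outgoing messages while offline, then flush them in order on reconnect. A sketch with an illustrative class name:

```python
class MessageQueue:
    """Buffer outgoing messages while offline; flush on reconnect."""

    def __init__(self, send):
        self.send = send      # callable that actually delivers one message
        self.online = False
        self.pending = []

    def submit(self, msg):
        if self.online:
            self.send(msg)
        else:
            self.pending.append(msg)   # queue locally until reconnected

    def set_online(self, online):
        self.online = online
        if online:
            while self.pending:        # deliver queued messages in order
                self.send(self.pending.pop(0))
```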
Conversation Management
- Full conversation history with search
- Edit the conversation title at any time
- Delete individual messages or entire conversations
- Export conversations as JSON or Markdown
- Import previously exported conversations
- Conversations are stored entirely on-device (SQLite / shared preferences)
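A Markdown export can be as simple as one section per message; a sketch of the general shape (the app's exact export format may differ):

```python
def export_markdown(title, messages):
    """Render a conversation as Markdown: a top-level heading for the
    title, then each message labeled with its role."""
    lines = [f"# {title}", ""]
    for m in messages:
        lines.append(f"**{m['role'].capitalize()}:**")
        lines.append("")
        lines.append(m["content"])
        lines.append("")
    return "\n".join(lines)
```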
Rich Rendering: Markdown & LaTeX
- Responses are rendered with full Markdown support: headings, bold, italic, lists, tables, code blocks with syntax highlighting
- LaTeX / MathJax expressions (inline and block) are rendered as proper math notation
- Code blocks include a copy button
- Links in responses are tappable
Privacy & Data Ownership
- All conversation data stored locally on your device
- No telemetry, analytics, or crash reporting sent anywhere
- Cloud API keys (for OpenCode) stored encrypted on-device
- Network traffic goes only to your own servers (or directly on-device)
- Open-source (MIT License) – audit the code yourself
- Export and own your full conversation history at any time
Android Integration
- Receive text and images shared from other apps via Android's Share intent
- Share conversations or responses to other Android apps
- Local push notifications for long-running AI tasks
- Material Design 3 adaptive theming (dark mode always on)
- Background task support with notification actions
- Clipboard integration for quick copy/paste
Learn More
Installation Guide • API Comparison (Ollama vs LM Studio vs OpenCode) • GitHub Repository