Build Your Own AI Chatbot Client with lobe-chat

lobe-chat is an open-source project for building an AI chatbot client. It supports multiple AI providers, such as OpenAI, Claude 3, Gemini, and more. It offers several useful features, including local Large Language Model (LLM) support, model visual recognition, TTS & STT voice conversation, text-to-image generation, a plugin system (function calling), an agent market (GPTs), Progressive Web App (PWA) support, mobile device adaptation, and custom themes.

How to Deploy

Deploying with Docker

# Always pull the latest Docker image before running
docker pull lobehub/lobe-chat
docker run -d \
  --name lobe-chat \
  --restart always \
  -p 3210:3210 \
  -e OPENAI_API_KEY=sk-xxxx \
  lobehub/lobe-chat
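If you prefer Docker Compose, the same deployment can be sketched as a compose file. This is a minimal sketch based on the docker run command above; the file layout is an assumption, not an official template, and sk-xxxx remains a placeholder for your real key.

```yaml
# docker-compose.yml -- minimal sketch mirroring the docker run command above
services:
  lobe-chat:
    image: lobehub/lobe-chat
    container_name: lobe-chat
    restart: always
    ports:
      - "3210:3210"
    environment:
      - OPENAI_API_KEY=sk-xxxx   # placeholder, use your own key
```

Start it with `docker compose up -d`; Compose makes it easier to keep the port mapping and environment variables under version control.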

Deploying to Vercel

You can also fork the lobe-chat project and deploy it to Vercel.

Setting Up lobe-chat

Required Settings

An OpenAI API key is required before the chatbot can be used.

If you set the OPENAI_API_KEY environment variable when you start the project, you can use the chatbot application directly; lobe-chat will not show an error or prompt you to set an API key. If you also want to restrict access with a password, you can set the ACCESS_CODE environment variable.

If you don’t set the OPENAI_API_KEY and ACCESS_CODE environment variables when you start the project, lobe-chat will show an error on the web page and prompt you to provide an API key. You can also enter an API key on the settings page before using the chatbot.
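For example, to supply the key at startup and require a password at login, pass both variables when starting the container. This is a sketch based on the docker run command from the deployment section; the access code value is just a placeholder.

```shell
docker run -d \
  --name lobe-chat \
  --restart always \
  -p 3210:3210 \
  -e OPENAI_API_KEY=sk-xxxx \      # placeholder API key
  -e ACCESS_CODE=your-password \   # placeholder access code
  lobehub/lobe-chat
```

With ACCESS_CODE set, visitors must enter the code before they can chat, which is useful when the instance is exposed to the public internet.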

Optional Settings

Set Default Agent

Model Settings

  • Model: Choose your preferred language model, such as GPT-4.

Set an API proxy

If you need to use the OpenAI service through a proxy, you can configure the proxy address using the OPENAI_PROXY_URL environment variable:


# If you want to use a localhost proxy

-e OPENAI_PROXY_URL=http://localhost:18080/v1 \
--network="host" \


# connect to proxy Docker container
-e OPENAI_PROXY_URL=http://{containerName}:{containerAppPort}/v1 \
--network {someNetwork} \
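Putting the fragments above together, a full container-to-container setup might look like the following. The network name proxy-net, the proxy container name openai-proxy, its image, and its port are all assumptions for illustration; substitute the values from your own proxy deployment.

```shell
# Create a user-defined network so containers can reach each other by name
docker network create proxy-net

# Start your proxy container on that network
# (image name and internal port are placeholders)
docker run -d --name openai-proxy --network proxy-net your-proxy-image

# Start lobe-chat on the same network, addressing the proxy by container name
docker run -d \
  --name lobe-chat \
  --restart always \
  -p 3210:3210 \
  -e OPENAI_API_KEY=sk-xxxx \
  -e OPENAI_PROXY_URL=http://openai-proxy:8080/v1 \
  --network proxy-net \
  lobehub/lobe-chat
```

On a user-defined Docker network, the embedded DNS resolves container names, so lobe-chat can reach the proxy at http://openai-proxy:8080/v1 without publishing the proxy's port to the host.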