Build Your Own AI Chatbot Client with ChatGPT-Next-Web

ChatGPT-Next-Web is an open-source project for building an AI chatbot client. It is designed to be cross-platform: it can run as a web or PWA application, or as a desktop application on Linux, Windows, or macOS. It also supports several AI providers, including OpenAI and Google AI.

How ChatGPT-Next-Web Works

ChatGPT-Next-Web stores your API keys locally in the browser. When you send a message in the chat box, ChatGPT-Next-Web sends a request to the AI provider configured in your settings and renders the response message.
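Under the hood this is a call to the provider's chat-completions API. A minimal sketch of the kind of request the client issues to OpenAI (the key below is a placeholder; substitute your own):

```shell
# Placeholder API key; the endpoint and payload follow OpenAI's
# chat-completions API, which ChatGPT-Next-Web targets by default.
curl https://api.openai.com/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer sk-YOUR_KEY" \
-d '{
  "model": "gpt-3.5-turbo",
  "messages": [{"role": "user", "content": "Hello"}]
}'
```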

How to Deploy

Deploying with Docker

# Always pull the latest Docker image before running
docker pull yidadaa/chatgpt-next-web
docker run -d \
--name chatgpt-next-web \
--restart always \
-p 3000:3000 \
yidadaa/chatgpt-next-web
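If you deploy the client for other people, you can also configure the key on the server instead of entering it in each browser. A sketch assuming the environment variables documented by the yidadaa/chatgpt-next-web image (OPENAI_API_KEY for the key, CODE for an optional access password):

```shell
# OPENAI_API_KEY and CODE are environment variables supported by the
# yidadaa/chatgpt-next-web image; CODE sets an access password so that
# only people who know it can use your deployment.
docker run -d \
--name chatgpt-next-web \
--restart always \
-p 3000:3000 \
-e OPENAI_API_KEY=sk-YOUR_KEY \
-e CODE=your-access-password \
yidadaa/chatgpt-next-web
```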

Deploying to Vercel

You can also fork the ChatGPT-Next-Web project and deploy it to Vercel.
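If you prefer the command line to the one-click flow, a sketch using the Vercel CLI, assuming you have already forked the repository on GitHub (YOUR_GITHUB_NAME is a placeholder):

```shell
# Assumes a fork of ChatGPT-Next-Web under your GitHub account and
# Node.js installed locally.
git clone https://github.com/YOUR_GITHUB_NAME/ChatGPT-Next-Web.git
cd ChatGPT-Next-Web
npm install -g vercel   # install the Vercel CLI
vercel --prod           # follow the prompts to link and deploy the project
```

Environment variables such as the OpenAI API key can then be added in the Vercel project dashboard.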

Setting Up ChatGPT-Next-Web

Click the settings button in the lower left corner to open the settings.

Required Settings

OpenAI API Key

Before using ChatGPT-Next-Web, you must set your OpenAI API Key in the Settings -> Custom Endpoint -> OpenAI API Key section.

Optional Settings

OpenAI Endpoint

If you run a self-deployed AI service with an OpenAI-compatible API, set this value to its address, for example http://localhost:18080.


Model

You can set your preferred model, such as gpt-4-0125-preview.


Self-deployed AI services

You can use copilot-gpt4-service to build a self-deployed AI service. To start the service, run the following command:

docker run -d \
--name copilot-gpt4-service \
--restart always \
-p 18080:8080 \
aaamoon/copilot-gpt4-service:latest
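Once the container is up, you can verify that the service responds before pointing ChatGPT-Next-Web at it. A sketch assuming it exposes an OpenAI-compatible chat-completions route on the mapped port (the token is a placeholder for your own credential):

```shell
# copilot-gpt4-service exposes an OpenAI-compatible API on the mapped
# port; the bearer token below is a placeholder.
curl http://localhost:18080/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_TOKEN" \
-d '{"model": "gpt-4", "messages": [{"role": "user", "content": "ping"}]}'
```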

If ChatGPT-Next-Web itself runs in Docker on the same host, you can attach both containers to a shared network so they can reach each other by container name:

docker network create chatgpt

docker run -d \
--name copilot-gpt4-service \
--restart always \
-p 18080:8080 \
--network chatgpt \
aaamoon/copilot-gpt4-service:latest
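With both containers on the chatgpt network, ChatGPT-Next-Web can reach the service by its container name. A sketch assuming the BASE_URL environment variable supported by the yidadaa/chatgpt-next-web image (note the container's internal port 8080, not the host-mapped 18080):

```shell
# BASE_URL points the client's server side at the self-deployed service;
# containers on the same Docker network resolve each other by name.
docker run -d \
--name chatgpt-next-web \
--restart always \
-p 3000:3000 \
--network chatgpt \
-e BASE_URL=http://copilot-gpt4-service:8080 \
yidadaa/chatgpt-next-web
```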

OpenAI Proxy

openai-scf-proxy: uses Tencent Cloud Serverless to set up an OpenAI proxy in one minute.