Taogen's Blog

Stay hungry, stay foolish.

java.util.Date has been part of the core Java API since Java 1.0, but it is no longer recommended. If you are using Java 8 or later, prefer java.time.LocalDate and java.time.Instant, which were introduced in Java 8.
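As a minimal sketch of the java.time replacements (assuming Java 8 or later): LocalDate models a calendar date without a time zone, and Instant models a point on the UTC timeline.

```java
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneOffset;

public class DateTimeExample {
    public static void main(String[] args) {
        // LocalDate: a date without a time zone, suited to calendar arithmetic
        LocalDate date = LocalDate.parse("2024-01-31");
        System.out.println(date.plusDays(1)); // 2024-02-01

        // Instant: a machine timestamp on the UTC timeline
        Instant epoch = Instant.ofEpochSecond(0);
        System.out.println(epoch.atOffset(ZoneOffset.UTC).getYear()); // 1970
    }
}
```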

In this post, I will cover common operations related to datetime.


Add watermark

// Approach 1: tile an inline SVG watermark as the page's background image
document.getElementsByTagName('body')[0].style.backgroundImage = 'url("data:image/svg+xml;utf8,<svg xmlns=\'http://www.w3.org/2000/svg\' version=\'1.1\' height=\'100px\' width=\'100px\'><text transform=\'translate(20, 100) rotate(-30)\' fill=\'rgba(128,128,128, 0.3)\' font-size=\'20\' >watermark</text></svg>")';

// Approach 2: overlay a fixed, full-page div tiled with the SVG watermark;
// pointer-events: none keeps the overlay click-through
const div = document.createElement("div");
div.innerHTML = `<div id="watermark" style="position: fixed; top: 0px; left: 0px; width: 100%; height: 100%; background-image: url(&quot;data:image/svg+xml;utf8,<svg xmlns='http://www.w3.org/2000/svg' version='1.1' width='440px' height='293.3333333333333px'><text transform='translate(5, 100) rotate(-20)' fill='rgba(128,128,128, 0.3)' font-size='20' >watermark</text></svg>&quot;); background-repeat: repeat; background-size: 300px 200px; pointer-events: none; z-index: 9999; opacity: 0.5;"></div>`;
document.body.appendChild(div);

lobe-chat is an open-source project for building an AI chat client. It supports multiple AI providers, such as OpenAI, Claude 3, and Gemini. It offers several useful features, including Local Large Language Model (LLM) support, Model Visual Recognition, TTS & STT Voice Conversation, Text to Image Generation, Plugin System (Function Calling), Agent Market (GPTs), Progressive Web App (PWA), Mobile Device Adaptation, and Custom Themes.

How to Deploy

Deploying with Docker

# Always pull the latest Docker image before running
docker pull lobehub/lobe-chat
docker run -d \
--name lobe-chat \
--restart always \
-p 3210:3210 \
-e OPENAI_API_KEY=sk-xxxx \
-e ACCESS_CODE=YOUR_PASSWORD \
lobehub/lobe-chat

Deploying to Vercel

You can also fork the lobe-chat project and deploy it to Vercel.

Setting Up lobe-chat

Required Settings

The API key is a required property that must be set.

If you set the OPENAI_API_KEY environment variable when you start the project, you can use the chatbot application directly. lobe-chat will not show an error or prompt you to set an API key. If you want to authenticate users, you can set the ACCESS_CODE environment variable.

If you don’t set the environment variables OPENAI_API_KEY and ACCESS_CODE when you start the project, lobe-chat will show an error on the web page and prompt you to set an API key. You can also set an API key in the settings page before using the chatbot.

Optional Settings

Set Default Agent

Model Settings

  • Model: Choose your preferred language model, such as GPT-4.

Set an API Proxy

If you need to use the OpenAI service through a proxy, you can configure the proxy address using the OPENAI_PROXY_URL environment variable:

-e OPENAI_PROXY_URL=https://my-api-proxy.com/v1

If you want to use a proxy running on localhost, add:

-e OPENAI_PROXY_URL=http://localhost:18080/v1 \
--network="host" \

or

# connect to proxy Docker container
-e OPENAI_PROXY_URL=http://{containerName}:{containerAppPort}/v1 \
--network {someNetwork} \
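Conceptually, the proxy setting just replaces the API base URL that requests are sent to. A hypothetical sketch of that fallback logic (illustrative only, not lobe-chat's actual code):

```java
public class ProxyConfig {
    // Hypothetical helper: fall back to the official endpoint when no proxy is set
    static String resolveBaseUrl(String proxyUrl) {
        return (proxyUrl == null || proxyUrl.isEmpty())
                ? "https://api.openai.com/v1"
                : proxyUrl;
    }

    public static void main(String[] args) {
        // With OPENAI_PROXY_URL unset, requests go to the official API
        System.out.println(resolveBaseUrl(System.getenv("OPENAI_PROXY_URL")));
        // With a proxy configured, all requests are routed through it
        System.out.println(resolveBaseUrl("https://my-api-proxy.com/v1"));
    }
}
```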

ChatGPT-Next-Web is an open-source project for building an AI chatbot client. This project is designed to be cross-platform, allowing it to be used on various operating systems. It currently can be used as a web or PWA application, or as a desktop application on Linux, Windows, or macOS. Additionally, it supports several AI providers, including OpenAI and Google AI.

How ChatGPT-Next-Web Works

ChatGPT-Next-Web manages your API keys locally in the browser. When you send a message in the chat box, ChatGPT-Next-Web will, based on your settings, send a request to the AI provider and render the response message.
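Under the hood, each chat message becomes an HTTP request to the provider's chat completions endpoint. A rough sketch of that request shape, shown here in Java for concreteness (the real client is a web app; the field names follow the OpenAI chat completions API):

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class ChatRequestSketch {
    public static void main(String[] args) {
        String apiKey = "sk-xxxx"; // stored locally by the client, never on a server
        // Minimal OpenAI-style chat completions payload
        String body = "{\"model\": \"gpt-4\", "
                + "\"messages\": [{\"role\": \"user\", \"content\": \"Hello\"}]}";

        // The request the client builds and sends, per your settings
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.openai.com/v1/chat/completions"))
                .header("Authorization", "Bearer " + apiKey)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        System.out.println(request.method()); // POST
        System.out.println(request.uri());
    }
}
```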

How to Deploy

Deploying with Docker

# Always pull the latest Docker image before running
docker pull yidadaa/chatgpt-next-web
docker run -d \
--name chatgpt-next-web \
--restart always \
-p 3000:3000 \
yidadaa/chatgpt-next-web

Deploying to Vercel

You can also fork the ChatGPT-Next-Web project and deploy it to Vercel.

Setting Up ChatGPT-Next-Web

Click the settings button in the lower left corner to open the settings.

Required Settings

OpenAI API Key

Before using ChatGPT-Next-Web, you must set your OpenAI API Key in the Settings -> Custom Endpoint -> OpenAI API Key section.

Optional Settings

OpenAI Endpoint

If you have a self-deployed AI service API, you can set the value to something like http://localhost:18080.

Model

You can set your preferred model, such as gpt-4-0125-preview.

Others

Self-deployed AI services

You can use the copilot-gpt4-service to build a self-deployed AI service. To start an AI service, run the following command:

docker run -d \
--name copilot-gpt4-service \
--restart always \
-p 18080:8080 \
aaamoon/copilot-gpt4-service:latest

or

docker network create chatgpt

docker run -d \
--name copilot-gpt4-service \
--restart always \
-p 18080:8080 \
--network chatgpt \
aaamoon/copilot-gpt4-service:latest

OpenAI Proxy

openai-scf-proxy: use Tencent Cloud Serverless to set up an OpenAI proxy in one minute.
