lobe-chat is an open-source project for building an AI chatbot client. It supports multiple AI providers, such as OpenAI, Claude 3, Gemini, and more. It offers several useful features, including local Large Language Model (LLM) support, model visual recognition, TTS & STT voice conversation, text-to-image generation, a plugin system (function calling), an agent market (GPTs), Progressive Web App (PWA) support, mobile device adaptation, and custom themes.
You can also fork the lobe-chat project and deploy it to Vercel.
Setting Up lobe-chat
Required Settings
The API key is a required property that must be set.
If you set the OPENAI_API_KEY environment variable when you start the project, you can use the chatbot application directly. lobe-chat will not show an error or prompt you to set an API key. If you want to authenticate users, you can set the ACCESS_CODE environment variable.
If you don’t set the environment variables OPENAI_API_KEY and ACCESS_CODE when you start the project, lobe-chat will show an error on the web page and prompt you to set an API key. You can also set an API key in the settings page before using the chatbot.
Optional Settings
Set Default Agent
Model Settings
Model: Choose your preferred language model, such as GPT-4.
Set an API proxy
If you need to use the OpenAI service through a proxy, you can configure the proxy address using the OPENAI_PROXY_URL environment variable:
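The original snippet is not shown in the extracted text; as a sketch, on a Docker deployment you might pass the variable like this (the proxy address is a placeholder, and the image name and port are assumptions based on the lobe-chat project):

```shell
# Pass the proxy address to the container (placeholder URL)
docker run -d -p 3210:3210 \
  -e OPENAI_API_KEY=sk-xxxx \
  -e OPENAI_PROXY_URL=https://your-proxy-domain/v1 \
  lobehub/lobe-chat
```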
ChatGPT-Next-Web is an open-source project for building an AI chatbot client. This project is designed to be cross-platform, allowing it to be used on various operating systems. It currently can be used as a web or PWA application, or as a desktop application on Linux, Windows, or macOS. Additionally, it supports several AI providers, including OpenAI and Google AI.
How ChatGPT-Next-Web Works
ChatGPT-Next-Web manages your API keys locally in the browser. When you send a message in the chat box, ChatGPT-Next-Web will, based on your settings, send a request to the AI provider and render the response message.
How to Deploy
Deploying with Docker
```shell
# Always pull the latest Docker image before running
docker pull yidadaa/chatgpt-next-web
docker run -d \
  --name chatgpt-next-web \
  --restart always \
  -p 3000:3000 \
  yidadaa/chatgpt-next-web
```
Deploying to Vercel
You can also fork the ChatGPT-Next-Web project and deploy it to Vercel.
Setting Up ChatGPT-Next-Web
Click the settings button in the lower left corner to open the settings.
Required Settings
OpenAI API Key
Before using ChatGPT-Next-Web, you must set your OpenAI API Key in the Settings -> Custom Endpoint -> OpenAI API Key section.
Optional Settings
OpenAI Endpoint
If you have a self-deployed AI service API, you can set the value to something like http://localhost:18080.
Model
You can set your preferred model, such as gpt-4-0125-preview.
Others
Self-deployed AI services
You can use the copilot-gpt4-service to build a self-deployed AI service. To start an AI service, run the following command:
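The command itself is missing from the extracted text; as a sketch, assuming the project's published Docker image is aaamoon/copilot-gpt4-service and it listens on port 8080, it would look something like:

```shell
# Image name and port are assumptions; check the project's README
docker run -d \
  --name copilot-gpt4-service \
  --restart always \
  -p 8080:8080 \
  aaamoon/copilot-gpt4-service:latest
```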
Gradle is a build automation tool for multi-language software development. It controls the development process, from compilation and packaging to testing, deployment, and publishing.
In this post, I will introduce the basic use of Gradle. This post is based on Gradle 8.6 and Kotlin DSL.
Initialize a Gradle Project
You can run the following commands to initialize a Java project with Gradle:
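The commands are missing from the extracted text; a typical invocation that matches the defaults described below (assuming Gradle is installed) is:

```shell
$ mkdir demo && cd demo
$ gradle init --type java-application --use-defaults
```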
The --use-defaults option applies default values for options that were not explicitly configured:
Default dsl: kotlin.
Default package: org.example.
Default test-framework: junit-jupiter.
Default project-name: the same as the root directory name.
If you want to create a Spring Boot application, you can use Spring Initializr.
Configurations
There are two configuration files in Gradle: build.gradle and settings.gradle. They are both important configuration files in a Gradle project, but they serve different purposes.
build.gradle is a script file that defines the configuration of a project. It’s written in the Groovy or Kotlin programming languages, and it specifies how tasks are executed, dependencies are managed, and artifacts are built. This file typically resides in the root directory of your project.
settings.gradle is focused on configuring the structure of a multi-project build and managing the relationships between different projects within it.
Plugins
You can add plugins to configuration file build.gradle.kts like this:
```kotlin
plugins {
    java
    id("org.springframework.boot") version "3.2.3"
    id("io.spring.dependency-management") version "1.1.4"
}
```
Gradle core plugins:
java: Provides support for building any type of Java project.
application: Provides support for building JVM-based, runnable applications.
Spring Boot plugins:
org.springframework.boot: Spring Boot Gradle Plugin
io.spring.dependency-management: A Gradle plugin that provides Maven-like dependency management functionality. It will control the versions of your project’s direct and transitive dependencies.
In Gradle, dependencies can be classified into several types based on where they come from and how they are managed. Here are the main dependency types:
Compile Dependencies:
These are dependencies required for compiling and building your project. They typically include libraries and frameworks that your code directly depends on to compile successfully.
Dependencies declared with compile are visible to all modules, including downstream consumers. This means that if Module A has a compile dependency on a library, and Module B depends on Module A, then Module B also has access to that library transitively. However, this also exposes the implementation details of Module A to Module B, potentially causing coupling between modules. In Gradle 3.4 and later, compile is deprecated in favor of implementation.
Implementation Dependencies:
Introduced in Gradle 3.4, these dependencies are similar to compile dependencies but have a more restricted visibility.
They are not exposed to downstream consumers of your library or module. This allows for better encapsulation and prevents leaking implementation details. This means that if Module A has an implementation dependency on a library, Module B, depending on Module A, does not have access to that library transitively. This enhances encapsulation and modularity by hiding implementation details of a module from its consumers. It allows for better dependency management and reduces coupling between modules in multi-module projects.
Runtime Dependencies: Dependencies that are only required at runtime, not for compilation. They are needed to execute your application but not to build it.
Test Dependencies: Dependencies required for testing your code. These include testing frameworks, libraries, and utilities used in unit tests, integration tests, or other testing scenarios.
Optional Dependencies: Dependencies that are not strictly required for your project to function but are nice to have. Gradle does not include optional dependencies by default, but you can specify them if needed.
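As a sketch, the dependency types above map onto a build.gradle.kts dependencies block like this (the artifact coordinates and versions are illustrative, not from the original post):

```kotlin
dependencies {
    // Compile-time and runtime, hidden from downstream consumers
    implementation("org.apache.commons:commons-lang3:3.14.0")
    // Runtime only: needed to execute the application, not to compile it
    runtimeOnly("org.postgresql:postgresql:42.7.1")
    // Compile only: available at compile time, not packaged at runtime
    compileOnly("org.projectlombok:lombok:1.18.30")
    // Test code only
    testImplementation("org.junit.jupiter:junit-jupiter:5.10.2")
}
```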
Tasks
You can configure tasks in the build.gradle.kts file. For example, to make all Test tasks use the JUnit Platform:

```kotlin
tasks.withType<Test> {
    useJUnitPlatform()
}
```
Run Tasks
To list all the available tasks in the project:
$ gradle tasks
Build Java
Before building a Java project, ensure that the java plugin is added to the configuration file build.gradle.kts.
plugins { java }
Run the following command to build the project:
$ gradle build
Run Java main class
To run a Java project, ensure that the application plugin and the mainClass configuration are added to the configuration file build.gradle.kts. The application plugin makes code runnable.
```kotlin
plugins {
    // Apply the application plugin to add support for building a CLI application in Java.
    application
}

application {
    // Define the main class for the application.
    mainClass = "org.example.App"
}
```
Run the following command to run the main method of a Java project:
$ gradle run
Gradle Wrapper
The Gradle Wrapper is the preferred way of starting a Gradle build. It consists of a batch script for Windows and a shell script for OS X and Linux. These scripts allow you to run a Gradle build without requiring that Gradle be installed on your system.
The Wrapper is a script that invokes a declared version of Gradle, downloading it beforehand if necessary. As a result, developers can get up and running with a Gradle project quickly.
Gradle Wrapper files:
gradle/wrapper/gradle-wrapper.jar: The Wrapper JAR file containing code for downloading the Gradle distribution.
gradle/wrapper/gradle-wrapper.properties: A properties file responsible for configuring the Wrapper runtime behavior, e.g. the Gradle version compatible with this version of the Wrapper.
gradlew, gradlew.bat: A shell script and a Windows batch script for executing the build with the Wrapper.
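For reference, a typical gradle-wrapper.properties file pinning the Gradle version looks roughly like this (the distribution version here is illustrative):

```properties
distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-8.6-bin.zip
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists
```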
If the project you are working on does not contain those Wrapper files, you can generate them.
$ gradle wrapper
Run tasks with gradlew:
```shell
$ ./gradlew tasks
$ ./gradlew build
$ ./gradlew run
$ ./gradlew test
```
Configuration Examples
An example of Gradle configuration for Java projects
```kotlin
dependencies {
    // This dependency is used by the application.
    implementation(libs.guava)
    // Logging
    implementation("org.slf4j:slf4j-api:${Versions.slf4jVersion}")
    implementation("ch.qos.logback:logback-classic:${Versions.logbackVersion}")
    // dotenv
    implementation("io.github.cdimascio:dotenv-java:${Versions.dotenvVersion}")
    // Lombok
    compileOnly("org.projectlombok:lombok:${Versions.lombokVersion}")
    annotationProcessor("org.projectlombok:lombok:${Versions.lombokVersion}")
    testCompileOnly("org.projectlombok:lombok:${Versions.lombokVersion}")
    testAnnotationProcessor("org.projectlombok:lombok:${Versions.lombokVersion}")
    // Some more dependencies
    // ...
}

// Apply a specific Java toolchain to ease working on different environments.
java {
    toolchain {
        languageVersion = JavaLanguageVersion.of(21)
    }
}

application {
    // Define the main class for the application.
    mainClass = "com.taogen.App"
}

tasks.named<Test>("test") {
    // Use JUnit Platform for unit tests.
    useJUnitPlatform()
}
```
An example of Gradle configuration for Spring Boot projects
```kotlin
plugins {
    java
    id("org.springframework.boot") version "3.5.3"
    id("io.spring.dependency-management") version "1.1.7"
}

group = "com.taogen"
version = "0.0.1-SNAPSHOT"
```
Logback is a popular logging framework for Java applications, designed as a successor to the well-known Apache Log4j framework. It’s known for its flexibility, performance, and configurability. Logback is extensively used in enterprise-level Java applications for logging events and messages.
In this post, I will cover various aspects of using Logback with Spring Boot.
Dependencies
Before we can use Logback in a Spring Boot application, we need to add its library dependencies to the project.
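Spring Boot's starters already pull in Logback transitively through spring-boot-starter-logging, so an explicit dependency is usually unnecessary. For a plain Java project, a sketch of the Gradle coordinates (versions illustrative) is:

```kotlin
dependencies {
    implementation("org.slf4j:slf4j-api:2.0.12")
    implementation("ch.qos.logback:logback-classic:1.4.14")
}
```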
Appenders
Appenders define the destination and, optionally, the formatting for log messages.
Types of Logback Appenders:
ConsoleAppender: Writes log messages to the console window (standard output or error).
FileAppender: Writes log messages to a specified file.
RollingFileAppender: Similar to FileAppender, but it creates new log files based on size or time intervals, preventing a single file from growing too large.
SocketAppender: Sends log messages over a network socket to a remote logging server.
SMTPAppender: Sends log messages as email notifications.
```xml
<configuration>
    <appender name="ROLLING_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${file.log.dir}/${file.log.filename}.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!-- Log file will roll over daily -->
            <fileNamePattern>${file.log.pathPattern}</fileNamePattern>
            <!-- Keep 30 days' worth of logs -->
            <maxHistory>30</maxHistory>
        </rollingPolicy>
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <!-- Log messages greater than or equal to the level -->
            <level>INFO</level>
        </filter>
        <encoder>
            <pattern>${file.log.pattern}</pattern>
        </encoder>
    </appender>
</configuration>
```
RollingPolicy
A rollingPolicy is a component attached to specific appenders that dictates how and when log files are automatically managed, primarily focusing on file size and archiving. Its primary function is to prevent log files from becoming excessively large, improving manageability and performance.
Purpose:
Prevents large log files: By periodically rolling over (rotating) log files, you avoid single files growing too large, which can be cumbersome to manage and slow down access.
Archiving logs: Rolling policies can archive rolled-over log files, allowing you to retain historical logs for analysis or auditing purposes.
Functionality:
Triggers rollover: Based on the defined policy, the rollingPolicy determines when to create a new log file and potentially archive the existing one. Common triggers include exceeding a certain file size or reaching a specific time interval (e.g., daily, weekly).
Defines archive format: The policy can specify how archived log files are named and organized. This helps maintain a clear structure for historical logs.
Benefits of using rollingPolicy:
Manageability: Keeps log files at a manageable size, making them easier to handle and access.
Performance: Prevents performance issues associated with excessively large files.
Archiving: Allows you to retain historical logs for later use.
Common types of rollingPolicy in Logback:
SizeBasedTriggeringPolicy: Rolls over the log file when it reaches a specific size limit (e.g., 10 MB).
TimeBasedRollingPolicy: Rolls over the log file based on a time interval (e.g., daily, weekly, monthly).
SizeAndTimeBasedRollingPolicy: Combines size and time-based triggers, offering more control over rolling behavior.
TimeBasedRollingPolicy
```xml
<configuration>
    <appender name="ROLLING_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        ...
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!-- Log file will roll over daily -->
            <fileNamePattern>${file.log.dir}/${file.log.filename}-%d{yyyy-MM-dd}.log</fileNamePattern>
            <!-- Keep 30 days' worth of logs -->
            <maxHistory>30</maxHistory>
        </rollingPolicy>
        ...
    </appender>
</configuration>
```
SizeAndTimeBasedRollingPolicy
```xml
<configuration>
    <appender name="ROLLING_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        ...
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <fileNamePattern>${file.log.dir}/${file.log.filename}-%d{yyyy-MM-dd}.%i.log.gz</fileNamePattern>
            <!-- Each archived file's size will be at most 10MB -->
            <maxFileSize>10MB</maxFileSize>
            <!-- Keep 30 days' worth of logs -->
            <maxHistory>30</maxHistory>
            <!-- Total size cap for all archived files; above 100GB, old archives are deleted -->
            <totalSizeCap>100GB</totalSizeCap>
        </rollingPolicy>
        ...
    </appender>
</configuration>
```
Filter
A filter attached to an appender allows you to control which log events are ultimately written to the defined destination (file, console, etc.) by the appender.
Commonly used filters:
ThresholdFilter: This filter allows log events whose level is greater than or equal to the specified level to pass through. For example, if you set the threshold to INFO, then only log events with level INFO, WARN, and ERROR will pass through.
LevelFilter: Similar to ThresholdFilter, but it allows more fine-grained control. You can specify both the level to match and whether to accept or deny log events at that level.
A logger in logback.xml represents a category or source for log messages within your application.
There are two types of logger tags in Logback: <root> and <logger>. They form a hierarchy: every <logger> is a child of <root>, and loggers inherit their parent logger's configuration. <root> represents the top level of the logger hierarchy and receives log messages from all packages, while <logger> receives log messages from a specified package.
<root>
level="INFO": Defines the default logger level as INFO for all loggers.
<appender-ref>: Sends messages to the CONSOLE and ROLLING_FILE appenders.
<logger>
name="com.taogen": It receives log messages from the com.taogen package.
level="DEBUG": It overrides the logger level to DEBUG.
additivity="false": If a message has already been sent to an appender by the parent logger, the current logger will not send the message to the same appender again.
<appender-ref>: Sends messages to the CONSOLE and ROLLING_FILE appenders.
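Putting these attributes together, a minimal sketch of the logger section of logback.xml (using the CONSOLE and ROLLING_FILE appender names from above) might look like this:

```xml
<configuration>
    <root level="INFO">
        <appender-ref ref="CONSOLE"/>
        <appender-ref ref="ROLLING_FILE"/>
    </root>
    <logger name="com.taogen" level="DEBUG" additivity="false">
        <appender-ref ref="CONSOLE"/>
        <appender-ref ref="ROLLING_FILE"/>
    </logger>
</configuration>
```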
```java
@Test
void test1() {
    log.debug("This is a debug message");
    log.info("This is an info message");
    log.warn("This is a warn message");
    log.error("This is an error message");
    logger.debug("This is a debug message");
    customLogger.debug("This is a debug message");
    customLogger.info("This is an info message");
    customLogger.warn("This is a warn message");
    customLogger.error("This is an error message");
}
```
@Slf4j is a Lombok annotation that automatically creates a private static final field named log of type org.slf4j.Logger. This log field is initialized with an instance of the SLF4J logger for the current class.
The commonly used Logback levels (in order of increasing severity):
TRACE: Captures the most detailed information.
DEBUG: detailed diagnostic information useful for debugging.
INFO: general application events and progress.
WARN: potential problems that might not cause immediate failures.
ERROR: errors that prevent the program from functioning correctly.
Relationships between Logger object and <logger> in logback.xml
A <logger> defined in logback.xml usually uses a package path as its name; otherwise, it uses a custom name.
If you use Logback to print log messages in Java code, you first need to pass a class or a string to the LoggerFactory.getLogger() method to get a Logger object, and then call the logger's methods, such as debug().
If the Logger object is obtained through a class, Logback looks for <logger> from logback.xml using the object’s package or parent package path. If the Logger object is obtained through a string, Logback uses the string to find a custom <logger> from logback.xml.
More Configurations
Custom Loggers
You can create a custom logger by setting a name instead of using a package path as its name.
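For example, a custom logger with a non-package name might be declared like this (the name CUSTOM_LOG is hypothetical):

```xml
<configuration>
    <logger name="CUSTOM_LOG" level="INFO" additivity="false">
        <appender-ref ref="ROLLING_FILE"/>
    </logger>
</configuration>
```

In Java code, you would then obtain it with LoggerFactory.getLogger("CUSTOM_LOG").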
Write for novices. Assume that most readers of the document are beginners; this way your writing will be more understandable, in-depth, detailed, and readable.
Structure
Overall logic: what, why, how, when, where.
Try to break content into detailed subsections, so readers can quickly locate what they want to see.
Details
The steps should be clear. Label steps 1, 2, and 3.
Try to add links to nouns that you can give links to. E.g. official website, explanation of specialized terms.
Mark code terms with code formatting, e.g. `code`.
Use tables as much as possible for structured information.
Use pictures wherever an illustration can make things clearer and more visual, for example UML diagrams or flow charts. Don't mind the hassle; a picture is more visual and easier to read.
Give a link to the reference content at the end.
Others
After writing, read it through at least once to promptly catch and revise statement errors, incoherence, inaccuracy, and unclear expression.
Before writing the code, you can write the statistical SQL queries first, because the core of a statistical API is its SQL. As the saying goes: "first, solve the problem; then, write the code".
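For instance, a statistics query might first be sketched directly in SQL (the table and column names here are hypothetical, for illustration only):

```sql
-- Count records per type within a time range (hypothetical table)
SELECT type, COUNT(*) AS total
FROM some_table
WHERE create_time BETWEEN :beginTime AND :endTime
GROUP BY type;
```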
Define Parameter and Response VOs
A VO (value object) is typically used for data transfer between business layers and contains only data.
Parameter VOs
```java
@Data
public class SomeStatParam {
    private Date beginTime;
    private Date endTime;
    private Integer type;
    private Integer userId;
    // ...
}
```
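A corresponding response VO might look like the following plain-Java sketch (the class and field names are hypothetical; with Lombok you would annotate the class with @Data instead of writing the constructor and getters by hand):

```java
// A hypothetical response VO for a statistical API.
// With Lombok, @Data would generate the constructor arguments' accessors.
public class SomeStatResult {
    private final String groupName; // e.g. a date or category label
    private final long count;       // the aggregated value for the group

    public SomeStatResult(String groupName, long count) {
        this.groupName = groupName;
        this.count = count;
    }

    public String getGroupName() { return groupName; }

    public long getCount() { return count; }
}
```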