Performance Optimization for the Entire Process of Web Applications
In this post, I will cover performance optimization for the entire process of web applications, including the frontend, backend, networking, and web server.
Add watermark
// Set a tiled inline SVG as the page background to act as a watermark
document.getElementsByTagName('body')[0].style.backgroundImage = 'url("data:image/svg+xml;utf8,<svg xmlns=\'http://www.w3.org/2000/svg\' version=\'1.1\' height=\'100px\' width=\'100px\'><text transform=\'translate(20, 100) rotate(-30)\' fill=\'rgba(128,128,128, 0.3)\' font-size=\'20\' >watermark</text></svg>")';
// An alternative DOM-based approach starts by creating an overlay element
const div = document.createElement("div");
1. Meaningful Names.
2. Don’t use magic literals (numbers and strings). Use variables or constants with descriptive names instead. For example, int MAX_LEN = 100; (see the sketch after this list).
3. Every function does only one thing.
4. Classes should be small.
5. Keep it simple, stupid. Don’t optimize prematurely.
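To make points 1 to 4 concrete, here is a minimal Java sketch (the class, constant, and method names are illustrative, not from the original list):
public class OrderValidator {
    // A named constant instead of the magic number 100
    private static final int MAX_ITEMS_PER_ORDER = 100;

    // A small method that does only one thing: validate the item count
    public boolean isItemCountValid(int itemCount) {
        return itemCount > 0 && itemCount <= MAX_ITEMS_PER_ORDER;
    }
}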
Make a robust, well-developed solution. Do a detailed design. First, solve the problem. Then, write the code.
Unit tests. Reduce potential errors.
Error handling.
Consider boundary values.
Check for null values.
Avoid hard-coded database or other credentials in code. Use environment variables or a dotenv file instead (see the sketch below).
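A minimal Java sketch of reading credentials from the environment instead of hard-coding them (the DB_USER and DB_PASSWORD variable names are assumptions, not a fixed convention):
import java.util.Objects;

public class DatabaseConfig {
    // Read credentials from environment variables; never commit them to source control
    public static String dbUser() {
        return Objects.requireNonNull(System.getenv("DB_USER"), "DB_USER is not set");
    }

    public static String dbPassword() {
        return Objects.requireNonNull(System.getenv("DB_PASSWORD"), "DB_PASSWORD is not set");
    }
}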
Make a maintainable and extensible system design and solution. For example:
Reduce or eliminate duplicate code. Don’t Repeat Yourself.
Place global properties in the project configuration file, not in the source code.
Decouple modules, classes, and functions.
SOLID design principles. (Single Responsibility Principle, Open-Closed Principle, Liskov Substitution Principle, Interface Segregation Principle, and Dependency Inversion Principle; see the sketch after this list.)
Write documentation.
Use consistent styles. The same form items on different pages should use the same UI component styles.
Client/web pages should display correctly on different devices and browsers and at different resolutions.
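As a sketch of one of the SOLID principles listed above, here is a minimal Java example of the Dependency Inversion Principle (the interface and class names are invented for illustration):
// High-level code depends on an abstraction, not on a concrete sender
interface MessageSender {
    void send(String message);
}

class EmailSender implements MessageSender {
    @Override
    public void send(String message) {
        System.out.println("Email: " + message);
    }
}

class NotificationService {
    private final MessageSender sender;

    // The concrete implementation is injected, so it can be swapped without changing this class
    NotificationService(MessageSender sender) {
        this.sender = sender;
    }

    void notifyUser(String message) {
        sender.send(message);
    }
}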
Docker has been blocked in China since June 6, 2024. In this post, I will cover how to use Docker in China.
lobe-chat is an open-source project for building an AI client. It supports multiple AI providers, such as OpenAI, Claude 3, Gemini, and more. It offers several useful features, including Local Large Language Model (LLM) support, Model Visual Recognition, TTS & STT Voice Conversation, Text to Image Generation, a Plugin System (Function Calling), an Agent Market (GPTs), Progressive Web App (PWA) support, Mobile Device Adaptation, and Custom Themes.
# Always pull the latest Docker image before running
docker pull lobehub/lobe-chat
You can also fork the lobe-chat project and deploy it to Vercel.
The API key is a required property that must be set.
If you set the OPENAI_API_KEY environment variable when you start the project, you can use the chatbot application directly. lobe-chat will not show an error or prompt you to set an API key. If you want to authenticate users, you can set the ACCESS_CODE environment variable.
If you don’t set the environment variables OPENAI_API_KEY and ACCESS_CODE when you start the project, lobe-chat will show an error on the web page and prompt you to set an API key. You can also set an API key in the settings page before using the chatbot.
Set Default Agent
Model Settings
Set an API proxy
If you need to use the OpenAI service through a proxy, you can configure the proxy address using the OPENAI_PROXY_URL environment variable:
-e OPENAI_PROXY_URL=https://my-api-proxy.com/v1
If you want to use a localhost proxy:
-e OPENAI_PROXY_URL=http://localhost:18080/v1 \
or
# connect to proxy Docker container
ChatGPT-Next-Web is an open-source project for building an AI chatbot client. This project is designed to be cross-platform, allowing it to be used on various operating systems. It currently can be used as a web or PWA application, or as a desktop application on Linux, Windows, or macOS. Additionally, it supports several AI providers, including OpenAI and Google AI.
ChatGPT-Next-Web manages your API keys locally in the browser. When you send a message in the chat box, ChatGPT-Next-Web will, based on your settings, send a request to the AI provider and render the response message.
# Always pull the latest Docker image before running
docker pull yidadaa/chatgpt-next-web
You can also fork the ChatGPT-Next-Web project and deploy it to Vercel.
Click the settings button in the lower left corner to open the settings.
OpenAI API Key
Before using ChatGPT-Next-Web, you must set your OpenAI API Key in the Settings -> Custom Endpoint -> OpenAI API Key section.
OpenAI Endpoint
If you have a self-deployed AI service API, you can set the value to something like http://localhost:18080.
Model
You can set your preferred model, such as gpt-4-0125-preview.
Self-deployed AI services
You can use the copilot-gpt4-service to build a self-deployed AI service. To start an AI service, run the following command:
docker run -d \
or
docker network create chatgpt
OpenAI Proxy
openai-scf-proxy: Use Tencent Cloud Serverless to set up an OpenAI proxy in one minute.
Gradle is a build automation tool for multi-language software development. It controls the development process, from compilation and packaging to testing, deployment, and publishing.
In this post, I will introduce the basic use of Gradle. This post is based on Gradle 8.6 and Kotlin DSL.
You can run the following commands to initialize a Java project with Gradle:
$ gradle init --use-defaults --type java-application
or
$ gradle init \
If you want to create a Spring Boot application, you can use Spring Initializr.
There are two configuration files in Gradle: build.gradle and settings.gradle. They are both important configuration files in a Gradle project, but they serve different purposes.
build.gradle is a script file that defines the configuration of a project. It's written in the Groovy or Kotlin programming languages, and it specifies how tasks are executed, dependencies are managed, and artifacts are built. This file typically resides in the root directory of your project.
settings.gradle is focused on configuring the structure of a multi-project build and managing the relationships between different projects within it.
You can add plugins to the configuration file build.gradle.kts like this:
plugins {
    java
    application
}
Gradle core plugins:
java: Provides support for building any type of Java project.
application: Provides support for building JVM-based, runnable applications.
Spring Boot plugins:
org.springframework.boot: The Spring Boot Gradle Plugin.
io.spring.dependency-management: A Gradle plugin that provides Maven-like dependency management functionality. It will control the versions of your project's direct and transitive dependencies.
More plugins can be found on the Gradle Plugin Portal.
The following properties are the common properties for Java projects.
group = "com.example"
java {
    sourceCompatibility = JavaVersion.VERSION_17 // example Java version
}
application {
    mainClass.set("com.example.App") // example main class
}
A repository is a source for third-party libraries.
repositories {
    mavenCentral()
}
You can declare dependencies in build.gradle.kts like this:
dependencies {
    implementation("com.google.guava:guava:33.0.0-jre") // example dependency
}
In Gradle, dependencies can be classified into several types based on where they come from and how they are managed. Here are the main dependency types:
compile: Dependencies declared with the compile configuration are visible to all modules, including downstream consumers. This means that if Module A has a compile dependency on a library, and Module B depends on Module A, then Module B also has access to that library transitively. However, this also exposes the implementation details of Module A to Module B, potentially causing coupling between modules. In Gradle 3.4 and later, compile is deprecated in favor of implementation.
Test tasks are typically configured to use the JUnit Platform:
tasks.withType<Test> {
    useJUnitPlatform()
}
To list all the available tasks in the project:
$ gradle tasks
Before building a Java project, ensure that the java plugin is added to the configuration file build.gradle.kts.
plugins {
    java
}
Run the following command to build the project:
$ gradle build
To run a Java project, ensure that the application plugin and the mainClass configuration are added to the configuration file build.gradle.kts. The application plugin makes the code runnable.
plugins {
    application
}
Run the following command to run the main method of a Java project:
$ gradle run
The Gradle Wrapper is the preferred way of starting a Gradle build. It consists of a batch script for Windows and a shell script for OS X and Linux. These scripts allow you to run a Gradle build without requiring that Gradle be installed on your system.
The Wrapper is a script that invokes a declared version of Gradle, downloading it beforehand if necessary. As a result, developers can get up and running with a Gradle project quickly.
Gradle Wrapper files:
gradle/wrapper/gradle-wrapper.jar: The Wrapper JAR file containing code for downloading the Gradle distribution.
gradle/wrapper/gradle-wrapper.properties: A properties file responsible for configuring the Wrapper runtime behavior, e.g. the Gradle version compatible with this version.
gradlew, gradlew.bat: A shell script and a Windows batch script for executing the build with the Wrapper.
If the project you are working on does not contain those Wrapper files, you can generate them:
$ gradle wrapper
Run tasks with gradlew:
$ ./gradlew tasks
Logback is a popular logging framework for Java applications, designed as a successor to the well-known Apache Log4j framework. It’s known for its flexibility, performance, and configurability. Logback is extensively used in enterprise-level Java applications for logging events and messages.
In this post, I will cover various aspects of using Logback with Spring Boot.
Before we can use Logback in a Spring Boot application, we need to add its library dependencies to the project.
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-logging</artifactId>
</dependency>
It contains ch.qos.logback:logback-classic and org.slf4j:slf4j-api.
Spring Boot projects use logback-spring.xml or logback.xml in the resources directory as the Logback configuration file by default.
Priority of the Logback default configuration file: logback.xml > logback-spring.xml.
If you want to use a custom filename, you can specify the log configuration file path in application.properties or application.yml. For example:
logging:
  config: classpath:logback-custom.xml # the file name here is just an example
You can define some properties that can be referenced in strings. Common properties: log message pattern, log file path, etc.
<configuration>
Appenders define the destination and formatting (optional) for log messages.
Types of Logback Appenders:
ConsoleAppender
<configuration>
FileAppender
<configuration>
RollingFileAppender
<configuration>
A rollingPolicy is a component attached to specific appenders that dictates how and when log files are automatically managed, primarily focusing on file size and archiving. Its primary function is to prevent log files from becoming excessively large, improving manageability and performance.
Purpose:
Functionality:
Benefits of using rollingPolicy:
Common types of rollingPolicy in Logback:
TimeBasedRollingPolicy
<configuration>
SizeAndTimeBasedRollingPolicy
<configuration>
A filter attached to an appender allows you to control which log events are ultimately written to the defined destination (file, console, etc.) by the appender.
Commonly used filters:
Filter only INFO level log messages.
<configuration>
Filter level greater than INFO
<configuration>
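Besides configuring the built-in filters in XML, Logback also lets you write a filter in Java by extending ch.qos.logback.core.filter.Filter. A minimal sketch, with an invented class name and matching rule:
import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.filter.Filter;
import ch.qos.logback.core.spi.FilterReply;

// Accept only INFO events whose message mentions "order"; deny everything else
public class OrderInfoFilter extends Filter<ILoggingEvent> {
    @Override
    public FilterReply decide(ILoggingEvent event) {
        if (event.getLevel() == Level.INFO && event.getFormattedMessage().contains("order")) {
            return FilterReply.ACCEPT;
        }
        return FilterReply.DENY;
    }
}
Such a filter is attached to an appender with a <filter class="..."> element that points at the fully qualified class name.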
A logger in logback.xml represents a category or source for log messages within your application.
There are two types of logger tags in Logback: <root> and <logger>. They have a hierarchical relationship: every <logger> is a child of <root>, and loggers can inherit their parent logger's configuration. <root> represents the top level in the logger hierarchy and receives log messages from all packages. <logger> receives log messages from a specified package.
<configuration>
<root>: It receives log messages from all packages.
level="INFO": Defines the default logger level as INFO for all loggers.
<appender-ref>: Sends messages to the CONSOLE and ROLLING_FILE appenders.
<logger> name="com.taogen": It receives log messages from the com.taogen package.
level="DEBUG": It overrides the logger level to DEBUG.
additivity="false": If a message has already been sent to an appender by the parent logger, the current logger will not send the message to the same appender again.
<appender-ref>: Sends messages to the CONSOLE and ROLLING_FILE appenders.
package com.taogen.commons.boot.mybatisplus;
@Slf4j is a Lombok annotation that automatically creates a private static final field named log of type org.slf4j.Logger. This log field is initialized with an instance of the SLF4J logger for the current class. It is equivalent to the following declaration:
private static Logger log = LoggerFactory.getLogger(LogTest.class);
The commonly used Logback levels, in order of increasing severity, are TRACE, DEBUG, INFO, WARN, and ERROR.
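A minimal sketch of logging at these levels with the @Slf4j annotation described above (the class name and messages are illustrative):
package com.example.demo;

import lombok.extern.slf4j.Slf4j;

@Slf4j
public class LevelDemo {
    public void handle(String orderId) {
        // SLF4J parameterized messages avoid manual string concatenation
        log.trace("Entering handle() with orderId={}", orderId);
        log.debug("Loaded order {}", orderId);
        log.info("Order {} processed", orderId);
        log.warn("Order {} took longer than expected", orderId);
        log.error("Failed to process order {}", orderId, new RuntimeException("example"));
    }
}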
Relationships between the Logger object and <logger> in logback.xml:
A <logger> defined in logback.xml usually uses a package path as its name. Otherwise, it uses a custom name.
Use the LoggerFactory.getLogger() method to get a Logger object, then call the logger's methods, such as debug().
If the Logger object is obtained through a class, Logback finds the matching <logger> in logback.xml using the object's package or parent package path. If the Logger object is obtained through a string, Logback uses the string to find a custom <logger> in logback.xml.
You can create a custom logger by setting a name instead of using a package path as its name.
<configuration>
Output log messages:
2024-03-07 09:20:43 [main] INFO my-custom-log - Hello!
Note that if you use a custom logger, you can’t get class information from the log message.
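A minimal Java sketch contrasting the two ways of obtaining a logger (the class name is invented; "my-custom-log" matches the custom logger above):
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggerLookupDemo {
    // Class-based logger: matched against <logger> elements by the class's package path
    private static final Logger classLogger = LoggerFactory.getLogger(LoggerLookupDemo.class);

    // Name-based (custom) logger: matched by the exact name defined in logback.xml
    private static final Logger namedLogger = LoggerFactory.getLogger("my-custom-log");

    public static void main(String[] args) {
        classLogger.info("Hello from the class-based logger");
        namedLogger.info("Hello!"); // class information is not available in the custom logger's output
    }
}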
<springProfile>
<configuration>
You can dynamically set the log configuration file path in application.yml, so that different Spring Boot environments use different log configuration files.
application.yml
logging:
  config: classpath:logback-${spring.profiles.active}.xml # assumes the active profile names (dev, prod) match the file suffixes below
logback-dev.xml
<configuration>
logback-prod.xml
<configuration>
Goals
<?xml version="1.0" encoding="UTF-8"?>