
Web Servers

Nginx

Error Information

Status Code: 504

Response:

<html>
<head><title>504 Gateway Time-out</title></head>
<body bgcolor="white">
<center><h1>504 Gateway Time-out</h1></center>
<hr><center>nginx</center>
</body>
</html>

Default Timeout

The default value of Nginx's proxy timeouts (proxy_connect_timeout, proxy_send_timeout, proxy_read_timeout) is 60 seconds.

Settings

Update proxy timeout to 180 seconds:

http {
    proxy_connect_timeout 180s;
    proxy_send_timeout 180s;
    proxy_read_timeout 180s;
    ...
}

Java HTTP Client

Spring RestTemplate

Default Timeout

The default timeout is infinite.

By default, RestTemplate uses SimpleClientHttpRequestFactory, which in turn uses HttpURLConnection.

By default, the timeout for HttpURLConnection is 0, i.e. infinite, unless it has been set by these properties:

-Dsun.net.client.defaultConnectTimeout=TimeoutInMilliSec
-Dsun.net.client.defaultReadTimeout=TimeoutInMilliSec

Settings

@Bean
public RestTemplate restTemplate() {
    SimpleClientHttpRequestFactory factory = new SimpleClientHttpRequestFactory();
    // Time to establish a connection to the server from the client-side. Set to 20s.
    factory.setConnectTimeout(20000);
    // Time to finish reading data from the socket. Set to 300s.
    factory.setReadTimeout(300000);
    return new RestTemplate(factory);
}
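For reference, a minimal usage sketch (the endpoint URL and class names here are placeholders): if the server takes longer than the configured read timeout, the call throws a ResourceAccessException wrapping a SocketTimeoutException.

import org.springframework.stereotype.Service;
import org.springframework.web.client.ResourceAccessException;
import org.springframework.web.client.RestTemplate;

@Service
public class SlowApiClient {

    private final RestTemplate restTemplate;

    public SlowApiClient(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    public String fetch() {
        try {
            // Placeholder URL; replace with your real endpoint.
            return restTemplate.getForObject("https://example.com/slow-endpoint", String.class);
        } catch (ResourceAccessException e) {
            // Thrown when the connect or read timeout configured in the bean above is exceeded.
            throw new IllegalStateException("Upstream request timed out", e);
        }
    }
}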

JavaScript HTTP Client

axios

Default Timeout

The default timeout is 0 (no timeout).

Settings

const instance = axios.create({
  baseURL: 'https://some-domain.com/api/',
  // `timeout` specifies the number of milliseconds before the request times out.
  // If the request takes longer than `timeout`, the request will be aborted.
  timeout: 60000,
  // ...
});

Plugins

Browser plugins

Chrome extensions

IDE plugins

Web, Mobile and Desktop Application

Management System

Website

Website categories

  1. E-commerce: Websites that facilitate online buying and selling of goods and services, such as Amazon or eBay.
    1. Shopping mall
  2. Social Networking: Websites that connect people and allow them to interact and share information, such as Facebook or LinkedIn.
    1. IM
    2. Forum/BBS
  3. News and Media: Websites that provide news articles, videos, and other multimedia content, such as CNN or BBC.
  4. Blogs and Personal Websites: Websites where individuals or organizations publish articles and personal opinions, such as WordPress or Blogger.
  5. Educational: Websites that provide information, resources, and learning materials for educational purposes, such as Khan Academy or Coursera.
  6. Entertainment: Websites that offer various forms of entertainment, such as games, videos, music, or movies, such as Netflix or YouTube.
  7. Government and Nonprofit: Websites belonging to government institutions or nonprofit organizations, providing information, services, and resources, such as whitehouse.gov or Red Cross.
  8. Business and Corporate: Websites representing businesses and corporations, providing information about products, services, and company details, such as Apple or Coca-Cola.
  9. Sports: Websites dedicated to sports news, scores, analysis, and related information, such as ESPN or NBA.
  10. Travel and Tourism: Websites that provide information and services related to travel planning, accommodations, and tourist attractions, such as TripAdvisor or Booking.com.

Mobile Software

Desktop Software

  • Instant messaging. E.g. Telegram.
  • Email client. E.g. Mozilla Thunderbird.
  • Web browser. E.g. Google Chrome.
  • Office software. E.g. Microsoft Office, Typora, XMind.
  • Note-taking software. E.g. Notion, Evernote.
  • PDF reader. E.g. SumatraPDF.
  • File processing. E.g. 7-Zip.
  • Media player. E.g. VLC.
  • Media processing. E.g. FFmpeg, HandBrake, GIMP.
  • Flashcard app. E.g. Anki.
  • Streaming media. E.g. Spotify.
  • HTTP proxy. E.g. V2rayN.

Libraries, Tools, Services

Libraries

  • General-purpose libraries for programming language. E.g. Apache Commons Lang.
  • File processing. E.g. Apache POI.
  • Data parser. E.g. org.json.
  • Chart, Report, Graph.
  • Logging.
  • Testing.
  • HTTP Client.

Developer Tools

  • Editor
  • IDE
  • Service Client.

Services

  • Web servers. E.g. Nginx, Apache Tomcat.
  • Databases. E.g. MySQL.
  • Cache. E.g. Redis.
  • Search engines. E.g. Elasticsearch.
  • Software delivery / containers. E.g. Docker.
  • Other services. E.g. Gotenberg, Aliyun services (media, ai).

Operating Systems

Programming Languages

Apache PDFBox is a Java tool for working with PDF documents. In this post, we’ll introduce how to use Apache PDFBox to handle PDF files. The code examples in this post are based on pdfbox v2.0.29.

<dependency>
    <groupId>org.apache.pdfbox</groupId>
    <artifactId>pdfbox</artifactId>
    <version>2.0.29</version>
</dependency>

Extract Text

Extract all page text

String inputFilePath = "your/pdf/filepath";
// Load PDF document
PDDocument document = PDDocument.load(new File(inputFilePath));
// Create PDFTextStripper instance
PDFTextStripper pdfStripper = new PDFTextStripper();
// Extract text from PDF
String text = pdfStripper.getText(document);
// Print extracted text
System.out.println(text);
// Close the document
document.close();

Extract page by page

String inputFilePath = "your/pdf/filepath";
// Load the PDF document
PDDocument document = PDDocument.load(new File(inputFilePath));
// Create an instance of PDFTextStripper
PDFTextStripper stripper = new PDFTextStripper();
// Iterate through each page and extract the text
for (int pageNumber = 1; pageNumber <= document.getNumberOfPages(); pageNumber++) {
    stripper.setStartPage(pageNumber);
    stripper.setEndPage(pageNumber);

    String text = stripper.getText(document);
    System.out.println("Page " + pageNumber + ":");
    System.out.println(text);
}
// Close the PDF document
document.close();

Split and Merge

Split

private static void splitPdf(String inputFilePath, String outputDir) throws IOException {
    File file = new File(inputFilePath);
    // Load the PDF document
    PDDocument document = PDDocument.load(file);
    // Create a PDF splitter object
    Splitter splitter = new Splitter();
    // Split the document
    List<PDDocument> splitDocuments = splitter.split(document);
    // Get an iterator for the split documents
    Iterator<PDDocument> iterator = splitDocuments.iterator();
    // Iterate through the split documents and save them
    int i = 1;
    while (iterator.hasNext()) {
        PDDocument splitDocument = iterator.next();
        String outputFilePath = new StringBuilder().append(outputDir)
                .append(File.separator)
                .append(file.getName().replaceAll("[.](pdf|PDF)", ""))
                .append("_split_")
                .append(i)
                .append(".pdf")
                .toString();
        splitDocument.save(outputFilePath);
        splitDocument.close();
        i++;
    }
    // Close the source document
    document.close();
    System.out.println("PDF split successfully!");
}

Merge PDF files

private static void mergePdfFiles(List<String> inputFilePaths, String outputFilePath) throws IOException {
    PDFMergerUtility merger = new PDFMergerUtility();
    // Add as many files as you need
    for (String inputFilePath : inputFilePaths) {
        merger.addSource(new File(inputFilePath));
    }
    merger.setDestinationFileName(outputFilePath);
    merger.mergeDocuments();
    System.out.println("PDF files merged successfully!");
}

Insert and remove pages

Insert pages

public static void insertPage(String sourceFile, String targetFile, int pageIndex) throws IOException {
    // Load the existing PDF document
    PDDocument sourceDoc = PDDocument.load(new File(sourceFile));
    Integer sourcePageCount = sourceDoc.getNumberOfPages();
    // Validate the requested page index
    if (pageIndex < 0 || pageIndex > sourcePageCount) {
        throw new IllegalArgumentException("Invalid page index");
    }
    // Create a new blank page
    PDPage newPage = new PDPage();
    // Insert the new page at the requested index
    if (sourcePageCount.equals(pageIndex)) {
        sourceDoc.getPages().add(newPage);
    } else {
        sourceDoc.getPages().insertBefore(newPage, sourceDoc.getPages().get(pageIndex));
    }
    // Save the modified PDF document to a target file
    sourceDoc.save(targetFile);
    // Close the document
    sourceDoc.close();
}

Remove pages

private static void removePage(String inputFilePath, String outputFilePath, int pageIndex) throws IOException {
    PDDocument sourceDoc = PDDocument.load(new File(inputFilePath));
    Integer sourcePageCount = sourceDoc.getNumberOfPages();
    // Validate the requested page index
    if (pageIndex < 0 || pageIndex >= sourcePageCount) {
        throw new IllegalArgumentException("Invalid page index");
    }
    sourceDoc.getPages().remove(pageIndex);
    sourceDoc.save(outputFilePath);
    sourceDoc.close();
}

private static void removePage2(String inputFilePath, String outputFilePath, int pageIndex) throws IOException {
    PDDocument sourceDoc = PDDocument.load(new File(inputFilePath));
    Integer sourcePageCount = sourceDoc.getNumberOfPages();
    // Validate the requested page index
    if (pageIndex < 0 || pageIndex >= sourcePageCount) {
        throw new IllegalArgumentException("Invalid page index");
    }
    Splitter splitter = new Splitter();
    List<PDDocument> pages = splitter.split(sourceDoc);
    pages.remove(pageIndex);
    PDDocument outputDocument = new PDDocument();
    for (PDDocument page : pages) {
        outputDocument.addPage(page.getPage(0));
    }
    outputDocument.save(outputFilePath);
    sourceDoc.close();
    outputDocument.close();
}

Encryption

Encrypt

public static void encryptPdf(String inputFilePath, String outputFilePath, String password) throws IOException {
    PDDocument doc = PDDocument.load(new File(inputFilePath));

    AccessPermission ap = new AccessPermission();
    // Disable printing
    ap.setCanPrint(false);
    // Disable copying
    ap.setCanExtractContent(false);
    // Disable other things if needed...

    // Owner password (to open the file with all permissions)
    // User password (to open the file but with restricted permissions)
    StandardProtectionPolicy spp = new StandardProtectionPolicy(password, password, ap);
    // Define the length of the encryption key.
    // Possible values are 40, 128 or 256.
    int keyLength = 256;
    spp.setEncryptionKeyLength(keyLength);

    // Apply protection
    doc.protect(spp);

    doc.save(outputFilePath);
    doc.close();
}

Update password

public static void updatePdfPassword(String inputFilePath, String outputFilePath,
                                     String oldPassword, String newPassword) throws IOException {
    PDDocument doc = PDDocument.load(new File(inputFilePath), oldPassword);

    AccessPermission ap = new AccessPermission();
    // Disable printing
    ap.setCanPrint(false);
    // Disable copying
    ap.setCanExtractContent(false);
    // Disable other things if needed...

    // Owner password (to open the file with all permissions)
    // User password (to open the file but with restricted permissions)
    StandardProtectionPolicy spp = new StandardProtectionPolicy(newPassword, newPassword, ap);
    // Define the length of the encryption key.
    // Possible values are 40, 128 or 256.
    int keyLength = 256;
    spp.setEncryptionKeyLength(keyLength);

    // Apply protection
    doc.protect(spp);

    doc.save(outputFilePath);
    doc.close();
}

Remove password

public static void removePdfPassword(String inputFilePath, String outputFilePath,
                                     String password) throws IOException {
    PDDocument doc = PDDocument.load(new File(inputFilePath), password);
    // Set the document access permissions
    doc.setAllSecurityToBeRemoved(true);
    // Save the unprotected PDF document
    doc.save(outputFilePath);
    // Close the document
    doc.close();
}

Convert to Image

PDF to Image

public static void pdfToImage(String pdfFilePath, String imageFileDir) throws IOException {
    File file = new File(pdfFilePath);
    PDDocument document = PDDocument.load(file);
    // Create PDFRenderer object to render each page as an image
    PDFRenderer pdfRenderer = new PDFRenderer(document);
    // Iterate over all the pages and convert each page to an image
    for (int pageIndex = 0; pageIndex < document.getNumberOfPages(); pageIndex++) {
        // Render the page as an image
        // 100 DPI: general-quality
        // 300 DPI: high-quality
        // 600 DPI: pristine-quality
        BufferedImage image = pdfRenderer.renderImageWithDPI(pageIndex, 300);
        // Save the image to a file
        String imageFilePath = new StringBuilder()
                .append(imageFileDir)
                .append(File.separator)
                .append(file.getName().replaceAll("[.](pdf|PDF)", ""))
                .append("_")
                .append(pageIndex + 1)
                .append(".png")
                .toString();
        ImageIO.write(image, "PNG", new File(imageFilePath));
    }
    // Close the document
    document.close();
}

Image to PDF

private static void imageToPdf(String imagePath, String pdfPath) throws IOException {
    try (PDDocument doc = new PDDocument()) {
        PDPage page = new PDPage();
        doc.addPage(page);
        // createFromFile is the easiest way with an image file
        // if you already have the image in a BufferedImage,
        // call LosslessFactory.createFromImage() instead
        PDImageXObject pdImage = PDImageXObject.createFromFile(imagePath, doc);
        // draw the image at full size at (x=0, y=0)
        try (PDPageContentStream contents = new PDPageContentStream(doc, page)) {
            // to draw the image at PDF width
            int scaledWidth = 600;
            if (pdImage.getWidth() < 600) {
                scaledWidth = pdImage.getWidth();
            }
            contents.drawImage(pdImage, 0, 0, scaledWidth, pdImage.getHeight() * scaledWidth / pdImage.getWidth());
        }
        doc.save(pdfPath);
    }
}

Create PDFs

String outputFilePath = "output/pdf/filepath";

PDDocument document = new PDDocument();
PDPage page = new PDPage(PDRectangle.A4);
document.addPage(page);
// Create content stream to draw on the page
PDPageContentStream contentStream = new PDPageContentStream(document, page);
contentStream.setFont(PDType1Font.HELVETICA, 12);
// Insert text
contentStream.beginText();
contentStream.newLineAtOffset(100, 700);
contentStream.showText("Hello, World!");
contentStream.endText();
// Load the image
String imageFilePath = "C:\\Users\\Taogen\\Pictures\\icon.jpg";
PDImageXObject image = PDImageXObject.createFromFile(imageFilePath, document);
// Set the scale and position of the image on the page
float scale = 0.5f; // adjust the scale as needed
float x = 100; // x-coordinate of the image
float y = 500; // y-coordinate of the image
// Draw the image on the page
contentStream.drawImage(image, x, y, image.getWidth() * scale, image.getHeight() * scale);
contentStream.close();
document.save(outputFilePath);
document.close();

Compress (TODO)

Watermark (TODO)

I. Basic concepts

Package Management on Operating Systems

Debian/Ubuntu Package Management

Advanced Packaging Tool – APT

apt-get is a command line tool for interacting with the Advanced Package Tool (APT) library (a package management system for Linux distributions). It allows you to search for, install, manage, update, and remove software.

Configuration of the APT system repositories is stored in the /etc/apt/sources.list file and the /etc/apt/sources.list.d directory. You can add additional repositories in a separate file in the /etc/apt/sources.list.d directory, for example, redis.list, docker.list.

dpkg

dpkg is a package manager for Debian-based systems. It can install, remove, and build packages, but unlike other package management systems, it cannot automatically download and install packages – or their dependencies. APT and Aptitude are newer, and layer additional features on top of dpkg.

# install
sudo dpkg -i package_file.deb
# uninstall
sudo apt-get remove package_name

CentOS/RHEL Package Management

Yellow Dog Updater, Modified (YUM)

YUM is the primary package management tool for installing, updating, removing, and managing software packages in Red Hat Enterprise Linux. YUM performs dependency resolution when installing, updating, and removing software packages. YUM can manage packages from installed repositories in the system or from .rpm packages. The main configuration file for YUM is at /etc/yum.conf, and all the repos are at /etc/yum.repos.d.

# add repos config to /etc/yum.repos.d
...
# clear repo cache
yum clean all
# create repo cache
yum makecache
# search package
yum search {package_name}

# upgrade package
yum update

# install package
yum install {package_name}

# uninstall package
yum remove {package_name}

RPM (RPM Package Manager)

RPM is a popular package management tool in Red Hat Enterprise Linux-based distros. Using RPM, you can install, uninstall, and query individual software packages. Still, it cannot manage dependency resolution the way YUM can. RPM does provide useful output, including a list of required packages. An RPM package consists of an archive of files and metadata. Metadata includes helper scripts, file attributes, and information about packages.

# install
sudo rpm -i package_file.rpm
sudo rpm --install package_file.rpm
# reinstall
sudo rpm --reinstall package_file.rpm
# uninstall
sudo rpm -e package_name
sudo rpm --erase package_name

Windows Package Management

Chocolatey

winget

Microsoft Store

MacOS Package Management

brew

Mac App Store

Service Manager

Systemd

II. Software Installation

JDK/JRE

Headless version

The headless version is the same as the regular version but without support for keyboard, mouse, and display systems. Hence it has fewer dependencies, which makes it more suitable for server applications.

Debian/Ubuntu/Deepin

Install openjdk from Official APT Repositories

Supported Operating Systems

  • Ubuntu/Debian
  • Deepin

Installing

# install
sudo apt-get install openjdk-8-jdk
# verify. If the installation was successful, you can see the Java version.
java -version

Installing Options

  • openjdk-8/11/17-jdk
  • openjdk-8/11/17-jdk-headless
  • openjdk-8/11/17-jre
  • openjdk-8/11/17-jre-headless

Maven

CentOS/RHEL

Install from the EPEL YUM repository

# Add the EPEL repository, and update YUM to confirm your change
sudo yum install epel-release
sudo yum update
# install
sudo yum install maven -y
# verify
mvn -v

Add the Aliyun mirror. Add the following lines inside the <mirrors> tag in /etc/maven/settings.xml:

<mirror>
    <id>alimaven</id>
    <name>aliyun maven</name>
    <url>http://maven.aliyun.com/nexus/content/groups/public/</url>
    <mirrorOf>central</mirrorOf>
</mirror>

Python

Debian/Ubuntu

Install from the official APT repository

# Update the environment
sudo apt update
# install
sudo apt install python3 -y
# verify
python3 -V

CentOS/RHEL

Install from the Official YUM repository

# Update the environment. Make sure that we are working with the most up to date environment possible in terms of our packages
sudo yum update -y
# install
sudo yum install -y python3
# verify
python3 -V

Node.js

CentOS/RHEL

Install from the EPEL YUM repository

# Add the EPEL repository, and update YUM to confirm your change
sudo yum install epel-release
sudo yum update
# install
sudo yum install nodejs
# verify
node --version

Redis

Linux

Install from Snapcraft

The Snapcraft store provides Redis packages that can be installed on platforms that support snap. Snap is supported and available on most major Linux distributions.

sudo snap install redis

If your Linux does not currently have snap installed, install it using the instructions described in Installing snapd.

Debian/Ubuntu/Deepin

Install from the official APT repositories

sudo apt-get update
sudo apt-get install redis-server

Update config

sudo vim /etc/redis/redis.conf

Uncomment the following line

# supervised auto

to

supervised auto

Enable and restart Redis service

sudo systemctl enable redis.service
sudo systemctl restart redis.service

Verify

systemctl status redis
redis-cli ping

Install from the Redis APT repository

Most major Linux distributions provide packages for Redis.

# prerequisites
sudo apt install lsb-release curl gpg
# Add the repository to the APT index, update it, and then install Redis.

curl -fsSL https://packages.redis.io/gpg | sudo gpg --dearmor -o /usr/share/keyrings/redis-archive-keyring.gpg

echo "deb [signed-by=/usr/share/keyrings/redis-archive-keyring.gpg] https://packages.redis.io/deb $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/redis.list

sudo apt-get update
sudo apt-get install redis

# verify
systemctl status redis
redis-cli ping

CentOS/RHEL

Install from the EPEL YUM repository

  1. Add the EPEL repository, and update YUM to confirm your change:

    sudo yum install epel-release
    sudo yum update
  2. Install Redis:

    sudo yum install redis
  3. Start Redis:

    sudo systemctl start redis

    Optional: To automatically start Redis on boot:

    sudo systemctl enable redis

Verify the Installation

Verify that Redis is running with redis-cli:

redis-cli ping

If Redis is running, it will return:

PONG

Windows

Redis is not officially supported on Windows.

Install from Source

Supported Operating Systems

  • All Linux distros (distributions)
  • MacOS

You can compile and install Redis from source on a variety of platforms and operating systems, including Linux and macOS. Redis has no dependencies other than a C compiler and libc.

# Download source files
wget https://download.redis.io/redis-stable.tar.gz
# Compiling
tar -xzvf redis-stable.tar.gz
cd redis-stable
make
# make sure the build is correct
make test

If the compilation succeeds, you’ll find several Redis binaries in the src directory, including:

  • redis-server: the Redis server itself
  • redis-cli: the command line interface utility to talk with Redis

Starting and stopping Redis

cd redis-stable
# starting redis server
./src/redis-server &
# starting redis server with config
./src/redis-server redis.conf &
# stopping redis server
ps -ef | grep redis-server | awk '{print $2}' | head -1 | xargs kill -9
# connect to redis
./src/redis-cli
# auth
127.0.0.1:6379> auth YOUR_PASSWORD

update password in redis.conf

# requirepass foobared

to

requirepass YOUR_STRONG_PASSWORD

Manage Redis service using systemd

Create the /etc/systemd/system/redis.service file, and add the following line to the file

[Unit]
Description=Redis
After=network.target

[Service]
ExecStart=/usr/local/bin/redis-server /etc/redis/redis.conf
ExecStop=/usr/local/bin/redis-cli shutdown
Restart=always

[Install]
WantedBy=multi-user.target

copy the files

cd /path/to/redis/source/dir
sudo mkdir /etc/redis
sudo cp redis.conf /etc/redis/
sudo cp src/redis-server /usr/local/bin/
sudo cp src/redis-cli /usr/local/bin/

Edit config

sudo vim /etc/redis/redis.conf

Uncomment the following line

# supervised auto

to

supervised auto

Enable and start Redis service

systemctl enable redis
systemctl start redis
systemctl status redis

To verify Redis is up and running, run the following command:

redis-cli PING

MySQL

Linux

Install from binary distributions

Aim: create a MySQL service that starts automatically when the computer boots.

Download generic Unix/Linux binary package

Linux - Generic (glibc 2.12) (x86, 64-bit), Compressed TAR Archive. For example: mysql-5.7.44-linux-glibc2.12-x86_64.tar.gz

Installing

# Install dependency `libaio` library
yum search libaio
yum install libaio

# Create a mysql User and Group
groupadd mysql
useradd -r -g mysql -s /bin/false mysql

# Obtain and Unpack the Distribution
cd /usr/local
tar zxvf /path/to/mysql-VERSION-OS.tar.gz
# This enables you to refer more easily to it as /usr/local/mysql.
ln -s full-path-to-mysql-VERSION-OS mysql
# add the `/usr/local/mysql/bin` directory to your `PATH` variable
cp /etc/profile /etc/profile.bak.$(date '+%Y-%m-%d_%H-%M-%S')
echo 'export PATH=$PATH:/usr/local/mysql/bin' >> /etc/profile
cat /etc/profile
source /etc/profile

# Creating a Safe Directory For Import and Export Operations
cd /usr/local/mysql
mkdir mysql-files
chown mysql:mysql mysql-files
chmod 750 mysql-files

# Initialize the data directory.
bin/mysqld --initialize --user=mysql # A temporary password is generated for root@localhost: Trbgylojs1!w
bin/mysql_ssl_rsa_setup

# Start mysql server
bin/mysqld_safe --user=mysql &

# Next command is optional
cp support-files/mysql.server /etc/init.d/mysql.server

Note: This procedure assumes that you have root (administrator) access to your system. Alternatively, you can prefix each command using the sudo (Linux) or pfexec (Solaris) command.

Managing MySQL Server with systemd

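A minimal unit file sketch, modeled on the Redis unit shown earlier in this post and assuming the binary layout from the steps above (/usr/local/mysql) with the data directory owned by the mysql user. Save it as /etc/systemd/system/mysql.service:

[Unit]
Description=MySQL Server
After=network.target

[Service]
User=mysql
Group=mysql
ExecStart=/usr/local/mysql/bin/mysqld
Restart=always

[Install]
WantedBy=multi-user.target

Then run systemctl daemon-reload, systemctl enable mysql, and systemctl start mysql.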

Create a user for remote access

  1. Enable the MySQL server port in the firewall

If the firewall on Linux is managed with ufw, you can run the following command to open the MySQL server port.

ufw allow 3306/tcp

  2. Update bind-address in /etc/my.cnf

Change 127.0.0.1 to a local IP like 192.168.1.100

bind-address=192.168.1.100

  3. Create a MySQL user for remote login

mysql> SELECT user,authentication_string,plugin,host FROM mysql.user;
mysql> CREATE USER 'root'@'%' IDENTIFIED BY 'password';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
mysql> FLUSH PRIVILEGES;
mysql> SELECT user,authentication_string,plugin,host FROM mysql.user;

  4. Verify

Connect the remote MySQL server from your local computer

# testing the port is open
$ telnet {server_ip} 3306
# test MySQL connection
$ mysql -h {server_ip} -u root -p
Enter password:

Errors

Error: mysql: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory

When you run mysql -u root -p.

Solutions

# centos
yum install ncurses-compat-libs

Error: ERROR 1820 (HY000): You must reset your password using ALTER USER statement before executing this statement.

To set up your password for the first time:

mysql> SET PASSWORD = PASSWORD('new password');

Windows

Docker

docker run --name={mysql_container_name} -d -p {exposed_port}:3306 \
-e MYSQL_ROOT_HOST='%' -e MYSQL_ROOT_PASSWORD='{your_password}' \
--restart unless-stopped \
-v mysql_data:/var/lib/mysql \
mysql/mysql-server:{version}

Elasticsearch

Kibana

CentOS/RHEL

Install from the elastic YUM repository

Download and install the public signing key:

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Create a file called kibana.repo in the /etc/yum.repos.d/ directory for RedHat based distributions, or in the /etc/zypp/repos.d/ directory for OpenSuSE based distributions, containing:

[kibana-8.x]
name=Kibana repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

You can now install Kibana with one of the following commands:

# older Red Hat based distributions
sudo yum install kibana
# Fedora and other newer Red Hat distributions
sudo dnf install kibana
# OpenSUSE based distributions
sudo zypper install kibana

Install by downloading RPM file

Downloading the Kibana RPM file

wget https://artifacts.elastic.co/downloads/kibana/kibana-8.8.2-x86_64.rpm
wget https://artifacts.elastic.co/downloads/kibana/kibana-8.8.2-x86_64.rpm.sha512
shasum -a 512 -c kibana-8.8.2-x86_64.rpm.sha512
sudo rpm --install kibana-8.8.2-x86_64.rpm

Start Elasticsearch and generate an enrollment token for Kibana

When you start Elasticsearch for the first time, the following security configuration occurs automatically:

  • Authentication and authorization are enabled, and a password is generated for the elastic built-in superuser.
  • Certificates and keys for TLS are generated for the transport and HTTP layer, and TLS is enabled and configured with these keys and certificates.

The password and certificate and keys are output to your terminal.

Run Kibana with systemd

To configure Kibana to start automatically when the system starts, run the following commands:

sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable kibana.service

Kibana can be started and stopped as follows:

sudo systemctl start kibana.service
sudo systemctl stop kibana.service

Log information can be accessed via journalctl -u kibana.service.

Configure Kibana via the config file

Kibana loads its configuration from the /etc/kibana/kibana.yml file by default. The format of this config file is explained in Configuring Kibana.

Nginx

Apache Tomcat

Docker

Debian/Ubuntu/Deepin

Install from the Docker APT repository

Set up the repository

  1. Update the apt package index and install packages to allow apt to use a repository over HTTPS
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg
  2. Add Docker’s official GPG key:

sudo install -m 0755 -d /etc/apt/keyrings

# USTC mirror, docker - debian/deepin
curl -fsSL https://mirrors.ustc.edu.cn/docker-ce/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
# Official docker - debian
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
# Official docker - ubuntu
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

sudo chmod a+r /etc/apt/keyrings/docker.gpg

  3. Use the following command to set up the repository

# USTC mirror, docker - debian/deepin
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://mirrors.ustc.edu.cn/docker-ce/linux/debian buster stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Official docker - debian
echo \
"deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian \
"$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Official docker - ubuntu
echo \
"deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
"$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Install Docker Engine

  1. Update the apt package index:

sudo apt-get update

  2. Install Docker Engine, containerd, and Docker Compose.

sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

  3. Verify that the Docker Engine installation is successful by running the hello-world image

sudo docker run hello-world

View Docker version and status

docker version
systemctl status docker

References

Linux package management with YUM and RPM

Package management - Ubuntu

systemd.unit — Unit configuration

Installing Redis

MySQL

In this post, I will cover how to configure CORS in a Spring Boot project. If you want to understand how CORS works, you can check out the article Understanding CORS.

Configuring HTTP Request CORS

Controller CORS Configuration

Use @CrossOrigin annotation

Add a @CrossOrigin annotation to the controller class

// no credentials
@CrossOrigin
@RestController
@RequestMapping("/my")
public class MyController {
    @GetMapping
    public String testGet() {
        return "hello \n" + new Date();
    }
}

Add a @CrossOrigin annotation to the controller method

@RestController
@RequestMapping("/my")
public class MyController {
    // no credentials
    @CrossOrigin
    @GetMapping
    public String testGet() {
        return "hello \n" + new Date();
    }
}

// with credentials
@CrossOrigin(origins = {"http://localhost"}, allowCredentials = "true")
// or
@CrossOrigin(originPatterns = {"http://localhost:[*]"}, allowCredentials = "true")

Properties of CrossOrigin

  • origins: by default, it’s *. You can specify allowed origins like @CrossOrigin(origins = {"http://localhost"}). You also can specify allowed origins by patterns like @CrossOrigin(originPatterns = {"http://*.taogen.com:[*]"}).

Add a @CrossOrigin annotation to the controller method or the controller class. It is equivalent to

  1. responding a successful result to the preflight request. For example

    HTTP/1.1 204 No Content
    Connection: keep-alive
    Access-Control-Allow-Origin: https://foo.bar.org
    Access-Control-Allow-Methods: POST, GET, OPTIONS, DELETE, PUT
    Access-Control-Max-Age: 86400
  2. adding the following headers to the HTTP response headers

    Access-Control-Allow-Origin: *
    Vary: Access-Control-Request-Headers
    Vary: Access-Control-Request-Method
    Vary: Origin

Update HTTP response headers

This works only for GET, POST, and HEAD requests without custom headers. In other words, it does not work for requests that trigger a preflight.

@RestController
@RequestMapping("/my")
public class MyController {

    @GetMapping
    public String testGet(HttpServletResponse response) {
        response.setHeader("Access-Control-Allow-Origin", "*");
        response.setHeader("Access-Control-Max-Age", "86400");
        return "test get\n" + new Date();
    }

    @PostMapping
    public String testPost(HttpServletResponse response) {
        response.setHeader("Access-Control-Allow-Origin", "*");
        response.setHeader("Access-Control-Max-Age", "86400");
        return "test post\n" + new Date();
    }
}

// with credentials
response.setHeader("Access-Control-Allow-Origin", "{your_host}"); // e.g. http://localhost or reqs.getHeader("Origin")
response.setHeader("Access-Control-Allow-Credentials", "true");
response.setHeader("Access-Control-Max-Age", "86400");

For ‘DELETE + Preflight’ or ‘PUT + Preflight’ requests, adding header ‘Access-Control-Allow-Origin: *’ to HttpServletResponse does not enable CORS. This will result in the following error

Access to XMLHttpRequest at 'http://localhost:8080/my' from origin 'http://localhost' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.

For requests with custom headers, adding header ‘Access-Control-Allow-Origin: *’ to HttpServletResponse does not enable CORS. This will result in the following error

Access to XMLHttpRequest at 'http://localhost:8080/my' from origin 'http://localhost' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.

Global CORS configuration

WebMvcConfigurer.addCorsMappings

The WebMvcConfigurer.addCorsMappings has the same function as the @CrossOrigin annotation.

@Configuration
public class CorsConfiguration {
    @Bean
    public WebMvcConfigurer corsConfigurer() {
        return new WebMvcConfigurer() {
            @Override
            public void addCorsMappings(CorsRegistry registry) {
                // no credentials
                registry.addMapping("/**")
                        .allowedOrigins("*")
                        .allowedMethods("GET", "POST", "HEAD", "PUT", "DELETE", "PATCH");
            }
        };
    }
}

// with credentials
registry.addMapping("/**")
        .allowedOrigins("{your_host}") // e.g. http://localhost
        .allowCredentials(true)
        .allowedMethods("GET", "POST", "HEAD", "PUT", "DELETE", "PATCH");
  • pathPattern: /myRequestMapping, /**, /myRequestMapping/**, /*
  • allowedOrigins: By default, all origins are allowed. Its default value is *. You can specify allowed origins like "http://localhost".
  • allowedOriginPatterns: for example, http://localhost:[*], http://192.168.0.*:[*], https://demo.com
  • allowedMethods: By default, GET, HEAD, and POST methods are allowed. You can enable all methods by setting its value to "GET", "POST", "HEAD", "PUT", "DELETE", "PATCH".

Filters

@Component
public class CorsFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain) throws IOException, ServletException {
        HttpServletResponse response = (HttpServletResponse) res;
        HttpServletRequest reqs = (HttpServletRequest) req;
        // no credentials
        response.setHeader("Access-Control-Allow-Origin", "*");
        response.setHeader("Access-Control-Allow-Methods", "POST, GET, PATCH, DELETE, PUT");
        response.setHeader("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept");
        response.setHeader("Access-Control-Max-Age", "86400");
        chain.doFilter(req, res);
    }
}

// with credentials
response.setHeader("Access-Control-Allow-Origin", "{your_host}"); // e.g. http://localhost or reqs.getHeader("Origin")
response.setHeader("Access-Control-Allow-Credentials", "true");
response.setHeader("Access-Control-Allow-Methods", "POST, GET, DELETE, PUT, PATCH");
response.setHeader("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept");
response.setHeader("Access-Control-Max-Age", "86400");

How to allow CORS requests

To allow CORS requests, you need to add the following headers to your HTTP response:

  • Access-Control-Allow-Origin: * or {specific_host}
  • Access-Control-Allow-Methods: POST, GET, PATCH, DELETE, PUT
  • Access-Control-Allow-Headers: Origin, X-Requested-With, Content-Type, Accept, {your_custom_header_name}

If you request with cookie, you need to add another header Access-Control-Allow-Credentials: true to the HTTP response, and the value of Access-Control-Allow-Origin cannot be *.

Optional HTTP response headers for CORS requests:

  • Access-Control-Max-Age: 86400: tell the browser to cache the preflight response.

Note

  • The wildcard * is not supported for the Access-Control-Allow-Headers value.
  • If the value of Access-Control-Allow-Credentials is true, the value of Access-Control-Allow-Origin cannot be *. Access-Control-Allow-Credentials: true means the request is sent with cookies.

Preflight requests

A CORS preflight request is a CORS request that checks whether the CORS protocol is understood and whether the server is aware of it, using specific methods and headers. It is an OPTIONS request using three HTTP request headers: Access-Control-Request-Method, Access-Control-Request-Headers, and the Origin header. A preflight request is automatically issued by the browser. If the server allows it, it will respond to the preflight request with an Access-Control-Allow-Methods response header that lists the allowed methods, such as DELETE or PUT.
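For illustration, a typical preflight exchange looks like the following (the origin and endpoint here are hypothetical):

OPTIONS /my HTTP/1.1
Host: localhost:8080
Origin: http://localhost
Access-Control-Request-Method: PUT
Access-Control-Request-Headers: Content-Type

HTTP/1.1 204 No Content
Access-Control-Allow-Origin: http://localhost
Access-Control-Allow-Methods: POST, GET, OPTIONS, DELETE, PUT
Access-Control-Allow-Headers: Content-Type
Access-Control-Max-Age: 86400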

Situations that require a preflight request

  • DELETE and PUT requests.
  • Requests with custom headers.

The preflight response can optionally be cached for requests to the same URL using the Access-Control-Max-Age header. While the response is cached, the browser will not issue another preflight request.

What is SSL Certificate

An SSL Certificate is essentially an X.509 certificate. X.509 is a standard that defines the structure of the certificate. It defines the data fields that should be included in the SSL certificate. X.509 uses a formal language called Abstract Syntax Notation One (ASN.1) to express the certificate’s data structure.

There are different formats of X.509 certificates such as PEM, DER, PKCS#7 and PKCS#12. PEM and PKCS#7 formats use Base64 ASCII encoding while DER and PKCS#12 use binary encoding. The certificate files have different extensions based on the format and encoding they use.

The X.509 Certificate’s encoding formats and file extensions

Web Servers and SSL certificate formats

Tomcat: Keystore (.jks) with PKCS#7 (.p7b) Format

Apache: PEM (.crt+.key)

Nginx: PEM (.pem+.key)

IIS: PKCS#12 (.pfx)

JKS: Keystore

Generate a Self-Signed Certificate

OpenSSL

# interactive
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -sha256 -days 365
Enter PEM pass phrase: 
Country Name (2 letter code) [AU]:
State or Province Name (full name) [Some-State]:
Locality Name (eg, city) []:
Organization Name (eg, company) [Internet Widgits Pty Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:
Email Address []:
# non-interactive and 10 years expiration
openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -sha256 -days 3650 -nodes -subj "/C=XX/ST=StateName/L=CityName/O=CompanyName/OU=CompanySectionName/CN=CommonNameOrHostname"

openssl req

PKCS#10 X.509 Certificate Signing Request (CSR) Management.

Required options

  • -x509: Output a x509 structure instead of a cert request (Required by some CA’s)
  • -newkey val: Specify as type:bits. (key algorithm and key size). For example, -newkey rsa:4096
  • -keyout outfile: File to send the key to (private key)
  • -out outfile: Output file (certificate)
  • -days +int: Number of days cert is valid for
  • -*: Any supported digest. For example, -sha256

Optional options

  • -nodes: Don’t encrypt the output key.
  • -subj val: Set or modify request subject. (non-interactive). For example, -subj "/C=XX/ST=StateName/L=CityName/O=CompanyName/OU=CompanySectionName/CN=CommonNameOrHostname"

Extract Public Key From SSL Certificate

OpenSSL

openssl x509 -pubkey -in cert.pem -noout -out public_key.pem

openssl x509

X.509 Certificate Data Management.

Options

  • -pubkey: Output the public key
  • -in infile: Input file - default stdin
  • -out outfile: Output file - default stdout
  • -noout: No output, just status. (Don’t append certificate to output public key file.)

Verify public key and private key

Creating a signed digest of a file:

openssl dgst -sha512 -sign private_key.pem -out digest.sha512 file.txt

Verify a signed digest:

openssl dgst -sha512 -verify public_key.pem -signature digest.sha512 file.txt

Convert SSL certificate formats

OpenSSL

OpenSSL Convert PEM

Convert PEM(.pem) to DER(.der)

openssl x509 -outform der -in certificate.pem -out certificate.der

Convert PEM(.cer) to PKCS#7(.p7b)

openssl crl2pkcs7 -nocrl -certfile certificate.cer -out certificate.p7b -certfile CACert.cer

Convert PEM(.crt) to PKCS#12(.pfx)

openssl pkcs12 -export -out certificate.pfx -inkey privateKey.key -in certificate.crt -certfile CACert.crt

OpenSSL Convert DER

Convert DER(.cer) to PEM(.pem)

openssl x509 -inform der -in certificate.cer -out certificate.pem

OpenSSL Convert P7B

Convert PKCS#7(.p7b) to PEM(.cer)

openssl pkcs7 -print_certs -in certificate.p7b -out certificate.cer

Convert PKCS#7(.p7b) to PKCS#12(.pfx)

openssl pkcs7 -print_certs -in certificate.p7b -out certificate.cer
openssl pkcs12 -export -in certificate.cer -inkey privateKey.key -out certificate.pfx -certfile CACert.cer

OpenSSL Convert PFX

Convert PKCS#12(.pfx) to PEM(.cer)

openssl pkcs12 -in certificate.pfx -out certificate.cer -nodes

Java Keystore

keytool -genkeypair: to generate a key pair and a self-signed certificate in a keystore file

keytool -genkeypair -keysize 1024 -alias herong_key \
  -keypass keypass -keystore herong.jks -storepass jkspass
What is your first and last name?
  [Unknown]: Herong Yang
What is the name of your organizational unit?
  [Unknown]: Herong Unit
What is the name of your organization?
  [Unknown]: Herong Company
What is the name of your City or Locality?
  [Unknown]: Herong City
What is the name of your State or Province?
  [Unknown]: Herong State
What is the two-letter country code for this unit?
  [Unknown]: CA
Is CN=Herong Yang, OU=Herong Unit, O=Herong Company, L=Herong City,
ST=Herong State, C=CA correct?
  [no]: yes

Import

-importcert/-import

// Installing the Self-Signed Certificate on the Client
keytool -importcert -alias alias_name -file path_to_certificate_file -keystore truststore_file

-importcert -trustcacerts

// Importing a CA-Signed Certificate
keytool -import -trustcacerts -alias alias_name -file certificate_file -keystore keystore_file

Export

-exportcert/-export: to export the certificate in DER format.

keytool -exportcert -alias herong_key -keypass keypass \
-keystore herong.jks -storepass jkspass -file keytool_crt.der

-exportcert -rfc: to export the certificate in PEM format.

keytool -exportcert -alias herong_key -keypass keypass \
-keystore herong.jks -storepass jkspass -rfc -file keytool_crt.pem

Copy

Move SSL Certificate to another JKS Keystore

"C:\Program Files\Java\jre7\bin\keytool.exe" -importkeystore -srckeystore "D:\source-keystore.jks" -destkeystore "D:\destination-keystore.jks" -srcstorepass password -deststorepass password -srcalias "www.mysecuresite.com"

References

OpenSSL

SSL Certificate

Conversion

Java Keystore

Nginx

Tomcat

Elasticsearch Java API

Java Low Level REST client

  • Since 5.6.x

    <dependency>
        <groupId>org.elasticsearch.client</groupId>
        <artifactId>elasticsearch-rest-client</artifactId>
        <version>8.10.1</version>
    </dependency>

Java High Level REST Client

  • 5.6.x~7.17.x, Deprecated

    <dependency>
        <groupId>org.elasticsearch.client</groupId>
        <artifactId>elasticsearch-rest-high-level-client</artifactId>
        <version>7.17.13</version>
    </dependency>

Java Transport Client

  • 5.0.x~7.17.x, Deprecated

    <dependency>
        <groupId>org.elasticsearch.client</groupId>
        <artifactId>transport</artifactId>
        <version>7.17.13</version>
    </dependency>

Java API Client

  • Since 7.15.x

  • depends on Low Level REST Client

    <dependency>
        <groupId>co.elastic.clients</groupId>
        <artifactId>elasticsearch-java</artifactId>
        <version>8.10.0</version>
    </dependency>

Query

Basic query

Query DSL

{
  "from": 0,
  "size": 10,
  "sort": [
    {
      "pub_time": {
        "order": "desc"
      }
    }
  ],
  "query": {
    "bool": {
      "must": [],
      "must_not": [],
      "should": []
    }
  },
  "aggs": {
    "term_aggregation": {
      "terms": {
        "field": "category"
      }
    }
  }
}

Java Low Level REST Client

Response performRequest(String method, String endpoint, Map<String, String> params, HttpEntity entity, Header... headers)
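A minimal usage sketch (the index name and query are placeholders; recent versions of the client wrap the method, endpoint, and body in a Request object instead of passing them separately as in the older signature above):

import org.apache.http.HttpHost;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

RestClient restClient = RestClient.builder(new HttpHost("localhost", 9200, "http")).build();
// The low level client sends raw JSON and returns the raw response body.
Request request = new Request("GET", "/indexName/_search");
request.setJsonEntity("{\"query\": {\"match_all\": {}}}");
Response response = restClient.performRequest(request);
System.out.println(EntityUtils.toString(response.getEntity()));
restClient.close();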

Java High Level REST Client

SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
BoolQueryBuilder queryBuilder = QueryBuilders.boolQuery();
queryBuilder.must(xxx);
queryBuilder.mustNot(xxx);
queryBuilder.should(xxx);
searchSourceBuilder.from(0);
searchSourceBuilder.size(10);
searchSourceBuilder.sort("pub_time", SortOrder.DESC);
searchSourceBuilder.query(queryBuilder);
searchSourceBuilder.aggregation(
        AggregationBuilders.terms("term_aggregation")
                .field("category")
);
SearchRequest searchRequest = new SearchRequest("indexName");
searchRequest.source(searchSourceBuilder);
// Print DSL query
// System.out.println(searchRequest.source().toString())
SearchResponse searchResponse = restHighLevelClient.search(searchRequest);

Java API Client

// When your index contains semi-structured data or if you don’t have a domain object definition, you can also read the document as raw JSON data. You can use Jackson’s ObjectNode or any JSON representation that can be deserialized by the JSON mapper associated to the ElasticsearchClient.  
SearchResponse<ObjectNode> response = client.search(s -> s
        .index("indexName")
        .from(0)
        .size(10)
        .sort(so -> so
                .field(FieldSort.of(f -> f
                        .field("pub_time")
                        .order(SortOrder.Desc))
                )
        )
        .query(q -> q
                .bool(b -> b
                        .must(m -> m.term(t -> t
                                .field("name")
                                .value("value")
                        ))
                )
        )
        .aggregations("term_aggregation", a -> a
                .terms(t -> t.field("category"))
        ),
        ObjectNode.class
);

Specify query fields

Query DSL

{
  "_source": ["author", "host"],
  "query": {}
}

Java High Level REST Client

searchSourceBuilder.fetchSource(new String[]{"author", "host"}, null);

Query by id

Query DSL

GET /my_index/{document_id}
// or
GET /my_index/{doc_type}/{document_id}

Java High Level REST Client

GetRequest getRequest = new GetRequest(indexName).id(id);
GetResponse getResponse = restHighLevelClient.get(getRequest);

Query by ids

Query DSL

GET /my_index/_search
{
  "query": {
    "ids": {
      "values": ["202308227d464b3da5b01f966458cafa", "20230822dfc84f58b7c8243013da3063"]
    }
  }
}
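Java High Level REST Client

The same query expressed with the High Level REST Client; a sketch, assuming the restHighLevelClient from the basic query above:

SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
// idsQuery() matches documents by their _id values.
searchSourceBuilder.query(QueryBuilders.idsQuery()
        .addIds("202308227d464b3da5b01f966458cafa", "20230822dfc84f58b7c8243013da3063"));
SearchRequest searchRequest = new SearchRequest("my_index");
searchRequest.source(searchSourceBuilder);
SearchResponse searchResponse = restHighLevelClient.search(searchRequest);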

Conditions

wildcard

Query DSL

{
  "wildcard": {
    "ip_region": "*山东*"
  }
}

Java High Level REST Client

WildcardQueryBuilder ipRegionQuery = QueryBuilders.wildcardQuery("ip_region", "*山东*");

Logical Operation

must/must_not

{
  "bool": {
    "must": [
      {
        "match_phrase": {
          "title": "医院"
        }
      }
    ]
  }
}

Java High Level REST Client

BoolQueryBuilder boolQueryBuilder = QueryBuilders.boolQuery();
boolQueryBuilder.must(QueryBuilders.matchPhraseQuery("title", "医院"));

should

Query DSL

{
  "bool": {
    "should": [
      {
        "match_phrase": {
          "title": "医院"
        }
      },
      {
        "match_phrase": {
          "content": "医院"
        }
      }
    ],
    "minimum_should_match": 1
  }
}

Java High Level REST Client

BoolQueryBuilder boolQueryBuilder = QueryBuilders.boolQuery();
boolQueryBuilder.minimumShouldMatch(1);
boolQueryBuilder.should(QueryBuilders.matchPhraseQuery("title", "医院"));
boolQueryBuilder.should(QueryBuilders.matchPhraseQuery("content", "医院"));

Aggregation

terms

Query DSL

{
  "aggs": {
    "term_aggregation": {
      "terms": {
        "field": "category"
      }
    }
  }
}

Java High Level REST Client

searchSourceBuilder.aggregation(
AggregationBuilders.terms("term_aggregation")
.field("category")
);
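To read the buckets back, a sketch assuming the searchResponse from the basic query above:

// Retrieve the named aggregation from the response and iterate its buckets.
Terms terms = searchResponse.getAggregations().get("term_aggregation");
for (Terms.Bucket bucket : terms.getBuckets()) {
    System.out.println(bucket.getKeyAsString() + ": " + bucket.getDocCount());
}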

Elasticsearch no longer recommends using the scroll API for deep pagination. If you need to preserve the index state while paging through more than 10,000 hits, use the search_after parameter with a point in time (PIT).

In order to use scrolling, the initial search request should specify the scroll parameter in the query string, which tells Elasticsearch how long it should keep the “search context” alive (see Keeping the search context alive), eg ?scroll=1m.

The size parameter allows you to configure the maximum number of hits to be returned with each batch of results. Each call to the scroll API returns the next batch of results until there are no more results left to return, ie the hits array is empty.

POST /my-index-000001/_search?scroll=1m
{
  "size": 100,
  "query": {
    "match": {
      "message": "foo"
    }
  }
}

The result from the above request includes a _scroll_id, which should be passed to the scroll API in order to retrieve the next batch of results.

POST /_search/scroll
{
  "scroll": "1m",
  "scroll_id": "DXF1ZXJ5QW5kRmV0Y2gBAAAAAAAAAD4WYm9laVYtZndUQlNsdDcwakFMNjU1QQ=="
}
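With the Java High Level REST Client, the same scrolling flow looks roughly like this (a sketch; it assumes the restHighLevelClient from earlier, a newer client version that takes a RequestOptions argument, and a placeholder index name):

SearchRequest searchRequest = new SearchRequest("my-index-000001");
// Keep the search context alive for one minute.
searchRequest.scroll(TimeValue.timeValueMinutes(1L));
SearchSourceBuilder sourceBuilder = new SearchSourceBuilder();
sourceBuilder.size(100);
sourceBuilder.query(QueryBuilders.matchQuery("message", "foo"));
searchRequest.source(sourceBuilder);

SearchResponse searchResponse = restHighLevelClient.search(searchRequest, RequestOptions.DEFAULT);
String scrollId = searchResponse.getScrollId();
SearchHit[] hits = searchResponse.getHits().getHits();
while (hits != null && hits.length > 0) {
    // Process the current batch of hits here...
    SearchScrollRequest scrollRequest = new SearchScrollRequest(scrollId);
    scrollRequest.scroll(TimeValue.timeValueMinutes(1L));
    searchResponse = restHighLevelClient.scroll(scrollRequest, RequestOptions.DEFAULT);
    scrollId = searchResponse.getScrollId();
    hits = searchResponse.getHits().getHits();
}
// Release the search context when done.
ClearScrollRequest clearScrollRequest = new ClearScrollRequest();
clearScrollRequest.addScrollId(scrollId);
restHighLevelClient.clearScroll(clearScrollRequest, RequestOptions.DEFAULT);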

Update

Update by document ID

Using doc - to update multiple fields at once

POST /my_index/_doc/{document_id}/_update
{
  "doc": {
    "field1": "updated_value1",
    "field2": "updated_value2"
  }
}

Using script

POST /my_index/_doc/{document_id}/_update
{
  "script": {
    "source": "ctx._source.field_name = params.new_value",
    "lang": "painless",
    "params": {
      "new_value": "updated_value"
    }
  }
}

Java High Level REST Client

// Create an instance of the UpdateRequest class
UpdateRequest request = new UpdateRequest("your_index", "your_type", "your_id");

// Prepare the update request
Map<String, Object> updatedFields = new HashMap<>();
updatedFields.put("field1", "updated value");
updatedFields.put("field2", "another updated value");
request.doc(updatedFields);

// Execute the update request
UpdateResponse response = client.update(request, RequestOptions.DEFAULT);

// Check the response status
if (response.status() == RestStatus.OK) {
    System.out.println("Document updated successfully");
} else {
    System.out.println("Failed to update document: " + response.status().name());
}
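The script variant shown in the DSL above can be expressed with the same client; a sketch with placeholder index, type, and id:

// Update a single field with a painless script, mirroring the DSL example.
Map<String, Object> params = new HashMap<>();
params.put("new_value", "updated_value");
Script script = new Script(ScriptType.INLINE, "painless",
        "ctx._source.field_name = params.new_value", params);
UpdateRequest scriptRequest = new UpdateRequest("your_index", "your_type", "your_id");
scriptRequest.script(script);
UpdateResponse scriptResponse = client.update(scriptRequest, RequestOptions.DEFAULT);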

Update by document ids

Query DSL

POST /your-index/_update_by_query
{
  "query": {
    "ids": {
      "values": ["xxx", "xxx"]
    }
  },
  "script": {
    "source": "ctx._source.field_name = 'updated-value'"
  }
}

Java High Level REST Client

BulkRequest bulkRequest = new BulkRequest();
for (String id : ids) {
    XContentBuilder contentBuilder = XContentFactory.jsonBuilder()
            .startObject()
            .field("status", "0") // update status to "0"
            .endObject();

    UpdateRequest updateRequest = new UpdateRequest(indexName, "data", id)
            .doc(contentBuilder);

    bulkRequest.add(updateRequest);
}

BulkResponse bulkResponse = dxRestHighLevelClient.bulk(bulkRequest);

if (bulkResponse.hasFailures()) {
    System.out.println("has failures");
    // Handle failure cases
} else {
    // Handle success cases
}

Update by query

Query DSL

POST /your-index/_update_by_query
{
  "query": {
    "term": {
      "field": "value"
    }
  },
  "script": {
    "source": "ctx._source.field = 'updated-value'"
  }
}
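Java High Level REST Client

The equivalent call uses UpdateByQueryRequest; a sketch, assuming the same client and placeholder index and field names:

UpdateByQueryRequest request = new UpdateByQueryRequest("your-index");
request.setQuery(QueryBuilders.termQuery("field", "value"));
request.setScript(new Script(ScriptType.INLINE, "painless",
        "ctx._source.field = 'updated-value'", Collections.emptyMap()));
BulkByScrollResponse bulkResponse = client.updateByQuery(request, RequestOptions.DEFAULT);
System.out.println("Updated " + bulkResponse.getUpdated() + " documents");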

Development of new software products

Build the basic framework of the project.

Developing new features for existing applications. (refer to the corresponding section)

Note: Development of a new software product requires a lot of time to optimize the user experience and meet changing requirements. Therefore, a lot of code modifications are also required.

Developing new features for existing applications

Functional modules

Understand the software requirements.

Design data models and databases.

API design.

Detailed design.

Write unit tests and code.

Test and Fix bugs.

Modify the code due to the modification of the requirements.

Test and Fix bugs.

Data analysis modules

Understand the software requirements.

Write query statements (SQL, Elasticsearch DSL) for data statistics.

Merge query statements to reduce the number of queries.

API design.

Write the code. Define the data query and response objects and finish the code.

Test and Fix bugs.

Scheduled Tasks

Small functions

Understand the software requirements.

Modify data models and databases. (optional)

API design. (optional)

Detailed design.

Write unit tests and code.

Test and Fix bugs.

Modification of system functionality

Understand the software requirements.

Modify data models and databases. (optional)

Modify API. (optional)

Detailed design.

Modify unit tests and code.

Test and Fix bugs.

Performance Optimization

Locate the problem.

Try to find a tuning approach.

Software product customization (new features and modifications)

Developing new features for existing applications. (refer to the corresponding section)

Modification of system functionality. (refer to the corresponding section)

Maintain systems and miscellaneous

System troubleshooting and fixing errors.

Update data.

Import data.

Export data.

Third party service renewal.

Integrating Code Libraries

Integrating Third-Party Service API or SDK

Common third-party services

  • Cloud platform
    • Storage
      • OSS
    • AI + Machine Learning
      • OCR
    • Media
      • Intelligent Media Services
  • Payment. E.g. Alipay.
  • Mobile Push notifications. E.g. Jiguang, Getui.
  • Short Message Service (SMS)
  • Social. E.g. QQ, WeChat, and Dingtalk open platform, Twitter Developer Platform, Slack API.

Providing APIs to Third-Party

Sometimes we need to redirect to one of our other websites without logging in again. In addition to single sign-on, we can also add a URL parameter to achieve automatic login.

The Process of Login By URL Parameters

The frontend requests the backend API to get the loginSign string for setting the redirect URL parameter. The redirect URL looks like https://xxx.com/xxx?loginSign=xxx

The backend constructs the loginSign value

  • Query the redirected website username and password.
  • Generate a random string.
  • Get the current timestamp.
  • Use the RSA public key to encrypt the username, password, timestamp, randomStr.

Return the loginSign value to frontend.

The client user clicks the redirect URL.

When the target website's frontend detects the loginSign parameter in the page URL, it uses this parameter to request login automatically.

The target website's backend decrypts the loginSign value and checks the username and the password. If they are correct, it returns an access token; otherwise, it returns an error code.

Construct the URL Parameter loginSign

Add a newline \n (ASCII 0x0A) to the end of each parameter.

username\n
password\n
timestamp\n
randomStr\n
  • timestamp: the request timestamp.

Use the RSA public key to encrypt the string {username}\n{password}\n{timestamp}\n{randomStr}\n

Verify the URL Parameter loginSign

Use the RSA private key to decrypt the loginSign value.

Verify that the request timestamp is within 60 seconds of the current time.

Verify the username and password.
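A minimal Java sketch of both sides, assuming the RSA key pair is already distributed; the class and method names are placeholders, and the payload follows the {username}\n{password}\n{timestamp}\n{randomStr}\n format above. Note that plain RSA can only encrypt a payload smaller than the key size, which is enough for these four fields.

import java.nio.charset.StandardCharsets;
import java.security.PrivateKey;
import java.security.PublicKey;
import java.util.Base64;
import javax.crypto.Cipher;

public class LoginSignCodec {

    // Build and encrypt the loginSign payload with the RSA public key.
    public static String encrypt(PublicKey publicKey, String username, String password,
                                 long timestamp, String randomStr) throws Exception {
        String payload = username + "\n" + password + "\n" + timestamp + "\n" + randomStr + "\n";
        Cipher cipher = Cipher.getInstance("RSA");
        cipher.init(Cipher.ENCRYPT_MODE, publicKey);
        byte[] encrypted = cipher.doFinal(payload.getBytes(StandardCharsets.UTF_8));
        // URL-safe Base64 so the value can be used as a query parameter.
        return Base64.getUrlEncoder().encodeToString(encrypted);
    }

    // Decrypt the loginSign value with the RSA private key and check the timestamp.
    public static String[] decryptAndVerify(PrivateKey privateKey, String loginSign) throws Exception {
        Cipher cipher = Cipher.getInstance("RSA");
        cipher.init(Cipher.DECRYPT_MODE, privateKey);
        byte[] decrypted = cipher.doFinal(Base64.getUrlDecoder().decode(loginSign));
        String[] parts = new String(decrypted, StandardCharsets.UTF_8).split("\n");
        long timestamp = Long.parseLong(parts[2]);
        if (Math.abs(System.currentTimeMillis() - timestamp) > 60_000) {
            throw new IllegalArgumentException("loginSign expired");
        }
        return parts; // username, password, timestamp, randomStr
    }
}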
