Taogen's Blog

Stay hungry stay foolish.

Configuring DNS

Ensure that your Linux server’s DNS configuration file /etc/resolv.conf contains at least one DNS server.

If there are no DNS servers in /etc/resolv.conf, add some to the file:

vim /etc/resolv.conf

Add the following content to the file /etc/resolv.conf

nameserver 192.168.0.1
nameserver 8.8.8.8
nameserver 8.8.4.4
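
To confirm the file now lists at least one nameserver, you can extract the entries with awk. This is a small sketch; the sample file path below is hypothetical, and on a real system you would point awk at /etc/resolv.conf itself.

```shell
# Write a sample resolv.conf-style file (hypothetical path, for illustration only)
cat > /tmp/resolv.conf.sample <<'EOF'
# comment lines are ignored by the resolver
nameserver 192.168.0.1
nameserver 8.8.8.8
EOF

# Print only the nameserver IPs; on a real system, run this against /etc/resolv.conf
awk '/^nameserver/ {print $2}' /tmp/resolv.conf.sample
```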

You can try to restart your system networking to check whether the problem of being unable to ping domain names is resolved. See the section “Restart Networking” of this post.

Configuring Default Route Gateway

You need to check your route table and verify that the destination 0.0.0.0 is routed to the default gateway IP (e.g. 192.168.0.1). If it is not, you need to update the gateway IP.

Get Default Gateway IP

$ ip r | grep default
default via 192.168.0.1 dev eth0 proto dhcp metric 100

Some computers might have multiple default gateways. The gateway with the lowest metric is tried first and used as the default gateway.

My server’s default gateway IP is 192.168.0.1.

Check the Route Table

Print the route table:

$ sudo route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.0.1 0.0.0.0 UG 0 0 0 eth0
...
  • Destination: The destination network or destination host.
  • Gateway: The gateway address, or '*' if none set.
  • Genmask: The netmask for the destination net: 255.255.255.255 for a host destination and 0.0.0.0 for the default route.

What Is The Meaning of 0.0.0.0 In Routing Table?

Each network host has a default route for each network card, which creates a 0.0.0.0 route for that card. The address 0.0.0.0 generally means “any address”. If a packet’s destination doesn’t match any individual address in the table, it matches the 0.0.0.0 entry. In other words, the default gateway is always the gateway of the 0.0.0.0 route.
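
This matching behavior can be sketched with a tiny shell helper that ANDs a destination address with a genmask, octet by octet; the helper name and example addresses are illustrative, not part of any standard tool.

```shell
# Return the bitwise AND of an IP address and a netmask, octet by octet
mask_ip() {
  ip=$1 mask=$2
  oldIFS=$IFS; IFS=.
  set -- $ip; a=$1 b=$2 c=$3 d=$4
  set -- $mask
  IFS=$oldIFS
  echo "$((a & $1)).$((b & $2)).$((c & $3)).$((d & $4))"
}

# Any destination ANDed with genmask 0.0.0.0 yields 0.0.0.0, so every packet
# matches the default route's destination 0.0.0.0:
mask_ip 93.184.216.34 0.0.0.0        # -> 0.0.0.0
# With a real netmask, only addresses in that network match:
mask_ip 192.168.0.42 255.255.255.0   # -> 192.168.0.0
```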

Update the 0.0.0.0 Route to the Default Gateway

Point the destination 0.0.0.0 route to the default gateway IP, e.g. 192.168.0.1.

ip command

You can temporarily update the 0.0.0.0 route gateway with the ip command.

Add the default route:

ip route add default via 192.168.0.1

You can also delete the default route:

ip route delete default

route command

You can temporarily update the 0.0.0.0 route gateway with the route command.

Add the default route:

route add default gw 192.168.0.1

You can also delete the default route:

route del default gw 192.168.0.1

Update configuration file

You can permanently update the 0.0.0.0 route gateway by editing a system configuration file.

CentOS/RHEL

vim /etc/sysconfig/network

Add the following content to the file /etc/sysconfig/network

NETWORKING=yes
GATEWAY=192.168.0.1

Debian/Ubuntu

vim /etc/network/interfaces

Find your network interface section and add the following option:

... 
gateway 192.168.0.1
...

Restart Networking

After updating the gateway configuration file, you need to restart networking.

Restart the networking on CentOS/RHEL

sudo systemctl restart NetworkManager.service
# or
sudo systemctl stop NetworkManager.service
sudo systemctl start NetworkManager.service

Restart the networking on Debian/Ubuntu

sudo /etc/init.d/networking restart

Appendixes

Some public DNS servers

# OpenDNS
208.67.222.222
208.67.220.220

# Cloudflare
1.1.1.1
1.0.0.1

# Google
8.8.8.8
8.8.4.4

# Quad9
9.9.9.9

References

[1] Understanding Routing Table

[2] What Is The Meaning of 0.0.0.0 In Routing Table?

Introduction to Kylin OS

Kylin (银河麒麟) is a commercial operating system based on Linux, developed by China’s Kylinsoft Co., Ltd. Its community edition is Ubuntu Kylin. NeoKylin (中标麒麟) is another commercial Linux-based operating system from the same company.

Kylin Linux Advanced Server V10 (Tercel) is a new-generation server operating system aimed at enterprise-critical workloads. It addresses the reliability, security, performance, scalability, and real-time requirements of the virtualization, cloud computing, big data, and industrial internet era, and was developed to the CMMI Level 5 standard, offering built-in security, cloud-native support, deep optimization for domestic platforms, high performance, and easy management. Kylin is built from a single source tree and supports six domestic CPU platforms: Phytium (飞腾), Kunpeng (鲲鹏), Loongson (龙芯), Sunway (申威), Hygon (海光), and Zhaoxin (兆芯).

Check OS Information

# Check the Linux distribution
$ cat /etc/os-release
NAME="Kylin Linux Advanced Server"
VERSION="V10 (Tercel)"
ID="kylin"
VERSION_ID="V10"
PRETTY_NAME="Kylin Linux Advanced Server V10 (Tercel)"
ANSI_COLOR="0;31"
# Check the CPU architecture
$ lscpu
Architecture:                    aarch64
CPU op-mode(s): 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: HiSilicon
Model: 0
Model name: Kunpeng-920
Stepping: 0x1
BogoMIPS: 200.00
NUMA node0 CPU(s): 0-3
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma dcpop asimddp asimdfhm ssbs

Configure the yum Repository

Use the yum repository from China Standard Software (CS2C)

# Back up the original yum configuration
mv /etc/yum.repos.d/kylin_aarch64.repo /etc/yum.repos.d/kylin_aarch64.repo.bak
# Create a new yum repository configuration file
vi /etc/yum.repos.d/kylin_aarch64.repo

Add the following content to the file /etc/yum.repos.d/kylin_aarch64.repo

[ks10-adv-os]
name=Kylin-Linux-Advanced-Server-os
baseurl=http://update.cs2c.com.cn:8080/NS/V10/V10SP1/os/adv/lic/base/aarch64/
gpgcheck=0
enabled=1

The baseurl format is http://update.cs2c.com.cn:8080/NS/V10/{version}/os/adv/lic/base/{arch}/

Here the version is V10SP1 and the architecture is aarch64 (Kunpeng).

# Clear the old yum cache and build a new one
$ sudo yum clean all && sudo yum makecache
# Test
$ sudo yum update
$ sudo yum install java-1.8.0-openjdk
Is this ok [y/N]: N

Introduction to CTyunOS

CTyunOS, China Telecom Cloud’s operating system, is based on the open-source openEuler operating system, which Huawei developed from CentOS. In September 2019, Huawei announced that EulerOS would be open-sourced under the name openEuler. EulerOS supports AArch64 (Kunpeng) processors and container virtualization, and is a general-purpose enterprise server platform. Huawei’s HarmonyOS is a consumer-facing operating system for phones and watches, while openEuler is a business-facing operating system for servers.

CTyunOS was developed on top of openEuler 20.03 LTS. CTyunOS 3, released in April 2023, is the latest version; it uses the long-term stable release openEuler 22.03 LTS SP1 from the openEuler community as its baseline and is deeply customized for cloud computing and cloud-native scenarios. Elastic cloud hosts running this OS are already available on China Telecom Cloud.

Check OS Information

# Check the Linux distribution
$ cat /etc/os-release
NAME="ctyunos"
VERSION="2.0.1"
ID="ctyunos"
VERSION_ID="2.0.1"
PRETTY_NAME="ctyunos 2.0.1"
ANSI_COLOR="0;31"
# Check the CPU architecture
$ lscpu
Architecture:                    aarch64
CPU op-mode(s): 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: HiSilicon
Model: 0
Model name: Kunpeng-920
Stepping: 0x1
BogoMIPS: 200.00
NUMA node0 CPU(s): 0-3
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma dcpop asimddp asimdfhm ssbs

Configure the yum Repository

# Back up the original yum configuration
$ mv /etc/yum.repos.d/ctyunos.repo /etc/yum.repos.d/ctyunos.repo.bak
# Download Huawei Cloud's openEuler yum repository configuration file
$ curl -O https://repo.huaweicloud.com/repository/conf/openeuler_aarch64.repo
# Move it into the system's yum repository configuration directory
$ mv openeuler_aarch64.repo /etc/yum.repos.d
# Clear the old yum cache and build a new one
$ sudo yum clean all && sudo yum makecache
# Test
$ sudo yum update
$ sudo yum install java-1.8.0-openjdk
Is this ok [y/N]: N

Huawei Cloud Linux package repository configuration files: https://repo.huaweicloud.com/repository/conf/

References

[1] Configuring Yum and Yum Repositories - RedHat

[2] Configuring an openEuler Network yum Repository

[3] Three Ways to Configure a yum Repository on CentOS

File Operations

Archive Files

zip

# Create an archive file
zip -r archive.zip dir_name

# List files in the archive
unzip -l archive.zip

# Extract files
unzip archive.zip
unzip archive.zip -d /target/filepath

tar

# Create an archive file
tar -czvf archive.tar.gz dir_name
tar -cjvf archive.tar.bz2 dir_name
tar -cvf archive.tar dir_name

# List files in the archive
tar -tf archive.tar.gz

# Extract files
tar -xvf archive.tar.gz
tar -xvf archive.tar.gz -C /target/filepath
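
As a quick sanity check, the create/list/extract commands above can be exercised end to end on a throwaway directory (all paths below are hypothetical):

```shell
# Build a small sample directory
mkdir -p /tmp/tar_demo/dir_name
echo "hello" > /tmp/tar_demo/dir_name/a.txt

cd /tmp/tar_demo
# Create the archive, list its contents, then extract it into another directory
tar -czf archive.tar.gz dir_name
tar -tf archive.tar.gz
mkdir -p out
tar -xzf archive.tar.gz -C out
cat out/dir_name/a.txt   # -> hello
```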

Files and Directories

Directories

Directories Description
/bin binary or executable programs.
/boot It contains all the boot-related information files and folders such as conf, grub, etc.
/dev device files such as dev/sda1, dev/sda2, etc.
/lib kernel modules and shared libraries
/etc system configuration files.
/home home directories for users; a user's default working directory after login.
/media mount point for removable media devices
/mnt temporary mount point
/opt optional or third-party software.
/proc a virtual pseudo-filesystem that contains info about running processes, each under a specific process ID or PID.
/root root home directory
/run volatile runtime data; runtime variables.
/sbin binary executable programs for an administrator.
/sys a virtual filesystem on modern Linux distributions that exposes and allows modification of the devices connected to the system.
/tmp temporary space, typically cleared on reboot.
/usr User related programs.
/var variable data files. log files.
  • /usr/bin: Executable binary files. E.g. java, mvn, git, apt, kill.
  • /usr/local: To keep self-compiled or third-party programs.
  • /usr/sbin: This directory contains programs for administering a system, meant to be run by ‘root’. Like ‘/sbin’, it’s not part of a user’s $PATH. Examples of included binaries here are chroot, useradd, in.tftpd and pppconfig.
  • /usr/share: This directory contains ‘shareable’, architecture-independent files (docs, icons, fonts etc).

Configuration Files

Bash Configuration Files

File Description
/etc/profile This is a “System wide” initialization file that is executed during login. This file provides initial environment variables and initial “PATH” locations.
/etc/bashrc This again is a “System Wide” initialization file. This file is executed each time a Bash shell is opened by a user. Here you can define your default prompt and add alias information. Values in this file can be overridden by their local ~/.bashrc entry.
~/.bash_profile If this file exists, it is executed automatically after /etc/profile during the login process. This file can be used by each user to add individual entries. The file, however, is only executed once at login and normally then runs the user's .bashrc file.
~/.bash_login If the “.bash_profile” does not exist, then this file will be executed automatically at login.
~/.profile If the “.bash_profile” or “.bash_login” do not exist, then this file is executed automatically at login.
~/.bashrc This file contains individual specific configurations. This file is read at login and also each time a new Bash shell is started. Ideally, this is where you should place any aliases.
~/.bash_logout This file is executed automatically during logout.
~/.inputrc This file is used to customize key bindings/key strokes.

Most global config files are located in the /etc directory

File Description
/etc/X11/ xorg specific config files
/etc/cups/ sub-directory containing configuration for the Common UNIX Printing System
/etc/xdg/ global configs for applications following freedesktop.org specification
/etc/ssh/ used to configure OpenSSH server behavior for the whole system
/etc/apparmor.d/ contains config files for the AppArmor system
/etc/udev/ udev related configuration

Important Global Config Files

File Description
/etc/resolv.conf used to define the DNS server(s) to use
/etc/bash.bashrc used to define the commands to execute when a user launches the bash shell
/etc/profile the login shell executes the commands in .profile script during startup
/etc/dhcp/dhclient.conf stores network related info required by DHCP clients
/etc/fstab decides where to mount all the partitions available to the system
/etc/hostname set the hostname for the machine
/etc/hosts a file which maps IP addresses to their hostnames
/etc/hosts.deny the remote hosts listed here are denied access to the machine
/etc/mime.types lists MIME-TYPES and filename extensions associated with them
/etc/motd configure the text shown when a user logs in to the host
/etc/timezone set the local timezone
/etc/sudoers the sudoers file controls the sudo related permission for users
/etc/httpd/conf and /etc/httpd.conf.d configuration for the apache web server
/etc/default/grub contains configuration used by the update-grub for generating /boot/grub/grub.cfg
/boot/grub/grub.cfg the update-grub command auto-generates this file using the settings defined in /etc/default/grub

Important User-Specific Config Files

File Description
$HOME/.xinitrc this allows us to set the directives for starting a window manager when using the startx command
$HOME/.vimrc vim configuration
$HOME/.bashrc script executed by bash when the user starts a non-login shell
$XDG_CONFIG_HOME/nvim/init.vim neovim configuration
$HOME/.editor sets the default editor for the user
$HOME/.gitconfig sets the default name and e-mail address to use for git commits
$HOME/.profile the login shell executes the commands in the .profile script during startup
$HOME/.ssh/config ssh configuration for a specific user

System Settings

System Time

Time

# show date time
date

# date time format
date '+%Y-%m-%d'
date '+%Y-%m-%d %H:%M:%S'
date '+%Y-%m-%d_%H-%M-%S'

# update time and date from the internet
timedatectl set-ntp true
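
A common use of the underscore format above is building collision-free file names, e.g. for backups:

```shell
# Capture a timestamp suitable for file names, e.g. backup.2024-01-31_09-30-00.tar.gz
ts=$(date '+%Y-%m-%d_%H-%M-%S')
echo "backup.$ts.tar.gz"
```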

Timezone

# list timezones
timedatectl list-timezones

# set timezone
timedatectl set-timezone Asia/Shanghai

# show time settings
timedatectl status

hostname

The hostname distinguishes a device within a local network; it is the machine's human-friendly name. Other computers can find a machine by its hostname, which enables, for example, data exchange within a network. On the internet, hostnames are used as part of fully qualified domain names.

You can configure a computer's hostname:

# setting
$ hostnamectl set-hostname server1.example.com
# verify the setting
$ less /etc/hostname
# query your computer's hostname
$ hostname

hosts

The /etc/hosts file contains IP addresses and host names for the local host and other hosts on the network. It is used to resolve a name into an address (that is, to translate a host name into its IP address).

sudo vim /etc/hosts
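
Each line maps an IP address to one or more host names, separated by whitespace. A hosts-format file can be queried with awk; the sample file, path, and names below are made up for illustration:

```shell
# A sample hosts-format file (hypothetical path and names)
cat > /tmp/hosts.sample <<'EOF'
127.0.0.1    localhost
192.168.0.10 server1.example.com server1
EOF

# Print the IP mapped to a given name (checks every name column on each line)
awk -v name=server1 '{for (i = 2; i <= NF; i++) if ($i == name) print $1}' /tmp/hosts.sample
# -> 192.168.0.10
```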

Add Environment Variables

/etc/profile

Add environment variables

cp /etc/profile "/etc/profile.bak.$(date '+%Y-%m-%d_%H-%M-%S')"
echo "export name=value" >> /etc/profile
cat /etc/profile
source /etc/profile

Add to path

cp /etc/profile /etc/profile.bak.$(date '+%Y-%m-%d_%H-%M-%S')
echo 'export PATH=$PATH:/usr/local/mysql/bin' >> /etc/profile
source /etc/profile
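
The effect of such an entry can be verified in a subshell against a throwaway profile file (hypothetical path) before touching the real /etc/profile:

```shell
# Append the export line to a throwaway profile, then source it in a subshell
# to confirm the PATH entry without modifying the current environment
echo 'export PATH=$PATH:/usr/local/mysql/bin' > /tmp/profile.demo
sh -c '. /tmp/profile.demo; echo "$PATH"' | grep -o '/usr/local/mysql/bin'
```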

System Log

# print OOM killer log
dmesg -T | egrep -i 'killed process'
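
The grep pattern can be tried against a made-up log line in dmesg style (the message below is illustrative, not real kernel output):

```shell
# A hypothetical OOM-killer message in dmesg format
line="[Mon Jan  1 10:00:00 2024] Out of memory: Killed process 1234 (java) total-vm:204800kB"

# The same case-insensitive pattern used above matches it
echo "$line" | grep -Ei 'killed process'
```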

Upload File

scp

SCP always overwrites existing files. Thus, in the case of a clean upload, SCP should be slightly faster, as it doesn't have to wait for the target system to compare files.

# transfer a file
scp local_file remoteuser@remote_ip_address:/remote_dir
# transfer multiple files
scp local_file1 local_file2 remoteuser@remote_ip_address:/remote_dir

# transfer a directory
scp -r local_dir remoteuser@remote_ip_address:/remote_dir
# transfer a file from remote host to local
scp remoteuser@remote_ip_address:/remote_file local_dir
# transfer a file between two remote systems
scp remoteuser@remote_ip_address:/remote_file remoteuser@remote_ip_address:/remote_file
  • -P SSH_port: specify the SSH port of the remote host (note that scp uses an uppercase -P).

rsync over ssh

In the case of a synchronization of files that change, like log files or list of source files in a repository, rsync is faster.

Copy a File from a Local Server to a Remote Server with SSH

rsync -avzhe ssh backup.tar.gz root@192.168.0.141:/backups/
# Show Progress While Transferring Data with Rsync
rsync -avzhe ssh --progress backup.tar.gz root@192.168.0.141:/backups/

Copy a File from a Remote Server to a Local Server with SSH

rsync -avzhe ssh root@192.168.0.141:/root/backup.tar.gz /tmp

sftp

sftp [username]@[remote hostname or IP address]

# download a file to the local system's home directory
get [path to file]
# download to a specific local directory
get [path to file] [path to directory]
# download with a new filename
get [path to file] [new file name]

# upload a file from the local system's home directory to the remote server's current directory
put [path to file]
# upload to a specific remote directory
put [path to file] [path to directory]
# upload with a new filename
put [path to file] [new file name]

Application Data

Log file path: /var/log/{application_name}

Upload file path: /data/{application_name}/upload

Application build and runtime file paths: /var/java/{application_name}, /var/html/{application_name}

Reverse Proxy

HTTP

http {
    ...
    server {
        listen 80;
        server_name myserver.com;
        # The default root is /usr/share/nginx/www, /usr/share/nginx/html or /var/www/html
        root /var/www/your_domain/html;

        # Static files
        location / {
            # redefine the root
            root /var/www/your_domain/html;
            try_files $uri $uri/ /index.html;
        }

        # API
        location /api/ {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://localhost:8080/;
        }

        # Cache js, css, image, etc.
    }
    ...
}
  • Static file responses: {root path}/requestURI
  • Proxied requests: {proxy_pass path}/requestURI

Passing Request Headers

By default, NGINX redefines two header fields in proxied requests, “Host” and “Connection”, and eliminates the header fields whose values are empty strings. “Host” is set to the $proxy_host variable, and “Connection” is set to close.

HTTPS

http {
    # reuse SSL session parameters to avoid SSL handshakes for parallel and subsequent connections.
    # or "ssl_session_cache builtin:1000 shared:SSL:10m;"
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;

    server {
        listen 443 ssl;
        server_name myproject.com;

        ssl_certificate /etc/ssl/projectName/projectName.com.pem;
        ssl_certificate_key /etc/ssl/projectName/projectName.com.key;

        # Additional SSL configuration (if required)
        # enable keepalive connections to send several requests via one connection, and reuse SSL session parameters to avoid SSL handshakes for parallel and subsequent connections.
        keepalive_timeout 70;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
        ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
        ssl_prefer_server_ciphers on;

        # same as HTTP
        location / {
            ...
        }
    }
    ...
}

HTTP to HTTPS

http {
    ...
    server {
        listen 80;
        server_name myserver.com;
        return 301 https://myserver.com$request_uri;
    }
    ...
}

HTTP2

The ngx_http_v2_module module (1.9.5) provides support for HTTP/2. This module is not built by default, it should be enabled with the --with-http_v2_module configuration parameter.

http {
    server {
        # enable http2
        listen 443 ssl http2;

        ssl_certificate server.crt;
        ssl_certificate_key server.key;
    }
}

Settings

Timeout

http {
    ...
    # proxy_connect_timeout default 60s
    proxy_connect_timeout 180s;
    # proxy_send_timeout default 60s
    proxy_send_timeout 180s;
    # proxy_read_timeout default 60s
    proxy_read_timeout 180s;
    ...
}

Upload File Size

http {
    ...
    # client_max_body_size default 1M
    client_max_body_size 100M;
    ...
}

Cache

http {
    server {
        ...
        location /static {
            root /var/www/your_domain/static;
            # Disable the access log to avoid hitting the I/O limit
            access_log off;
            # or "expires max";
            expires 7d;
            add_header Cache-Control "public, must-revalidate, proxy-revalidate";
        }
        ...
    }
}
http {
    server {
        ...
        location ~* \.(js|css|png|jpg|jpeg|gif|svg|ico)$ {
            root /var/www/your_domain/static;
            # Disable the access log to avoid hitting the I/O limit
            access_log off;
            # or "expires max";
            expires 7d;
            add_header Cache-Control "public, must-revalidate, proxy-revalidate";
        }
        ...
    }
}

After setting the cache, the following header appears in the response headers:

Cache-Control: max-age=604800, public, must-revalidate, proxy-revalidate

Load Balancing

http {
    ...
    upstream backend-server {
        server xxx.xxx.xxx.xxx:8080 max_fails=1 fail_timeout=300s;
        server xxx.xxx.xxx.xxx:8080 max_fails=1 fail_timeout=300s;
        ...
    }

    server {
        ...
        location /api/ {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://backend-server/;
        }
    }
}

Test That the Nginx Configuration Is Updated

Add the following config to the Nginx configuration file. You can verify whether the configuration has been reloaded by changing the return status code (e.g. 403 Forbidden, 406 Not Acceptable, 423 Locked) of the /testConfig location and visiting the test URL http://yourDomain/testConfig.

location /testConfig {
    # 403 Forbidden, 406 Not Acceptable, 423 Locked
    return 403;
}

Appendixes

Embedded Variables

  • $proxy_host: name and port of a proxied server as specified in the proxy_pass directive;
  • $proxy_add_x_forwarded_for: the “X-Forwarded-For” client request header field with the $remote_addr variable appended to it, separated by a comma. If the “X-Forwarded-For” field is not present in the client request header, the $proxy_add_x_forwarded_for variable is equal to the $remote_addr variable.
  • $host: In this order of precedence: host name from the request line, or host name from the “Host” request header field, or the server name matching a request.
  • $remote_addr: Client address

Build Nginx From Source

# Download Nginx source code
wget http://nginx.org/download/nginx-{latest-stable-version}.tar.gz

You can know the latest version of Nginx by visiting the Nginx download page.

tar -zxvf nginx-{latest-stable-version}.tar.gz

cd nginx-{latest-stable-version}

# Configuring Nginx
./configure \
--with-pcre \
--with-http_ssl_module \
--with-http_image_filter_module=dynamic \
--modules-path=/etc/nginx/modules \
--with-http_v2_module \
--with-stream=dynamic \
--with-http_addition_module \
--with-http_mp4_module \
--with-http_gzip_static_module

More configuration

Common errors when running ./configure

1. ./configure: error: the HTTP rewrite module requires the PCRE library.

Solution

sudo apt update && sudo apt upgrade
sudo apt-get install libpcre3 libpcre3-dev

Successful Output

Configuration summary
+ using system PCRE2 library
+ using system OpenSSL library
+ using system zlib library

nginx path prefix: "/usr/local/nginx"
nginx binary file: "/usr/local/nginx/sbin/nginx"
nginx modules path: "/etc/nginx/modules"
nginx configuration prefix: "/usr/local/nginx/conf"
nginx configuration file: "/usr/local/nginx/conf/nginx.conf"
nginx pid file: "/usr/local/nginx/logs/nginx.pid"
nginx error log file: "/usr/local/nginx/logs/error.log"
nginx http access log file: "/usr/local/nginx/logs/access.log"
nginx http client request body temporary files: "client_body_temp"
nginx http proxy temporary files: "proxy_temp"
nginx http fastcgi temporary files: "fastcgi_temp"
nginx http uwsgi temporary files: "uwsgi_temp"
nginx http scgi temporary files: "scgi_temp"

You can add the following parameters to specify paths:

--prefix=/var/www/html \
--sbin-path=/usr/sbin/nginx \
--modules-path=/etc/nginx/modules \
--conf-path=/etc/nginx/nginx.conf \
--pid-path=/var/run/nginx.pid \
--error-log-path=/var/log/nginx/error.log \
--http-log-path=/var/log/nginx/access.log \
--lock-path=/var/lock/nginx.lock

# Build nginx
$ make
$ sudo make install
# Start nginx
$ cd /usr/local/nginx/sbin
$ ./nginx -V
$ ./nginx
# Verify
$ curl http://localhost

References

[1] Configuring HTTPS servers - Nginx

[2] Alphabetical index of variables - Nginx

[3] Serving Static Content - Nginx

[4] NGINX Reverse Proxy - Nginx

Web Servers

Nginx

Error Information

Status Code: 504

Response:

<html>
<head><title>504 Gateway Time-out</title></head>
<body bgcolor="white">
<center><h1>504 Gateway Time-out</h1></center>
<hr><center>nginx</center>
</body>
</html>

Default Timeout

The default timeout for Nginx is 60 seconds.

Settings

Update proxy timeout to 180 seconds:

http {
    proxy_connect_timeout 180s;
    proxy_send_timeout 180s;
    proxy_read_timeout 180s;
    ...
}

Java HTTP Client

Spring RestTemplate

Default Timeout

The default timeout is infinite.

By default RestTemplate uses SimpleClientHttpRequestFactory and that in turn uses HttpURLConnection.

By default the timeout for HttpURLConnection is 0, i.e. infinite, unless it has been set by these properties:

-Dsun.net.client.defaultConnectTimeout=TimeoutInMiliSec 
-Dsun.net.client.defaultReadTimeout=TimeoutInMiliSec

Settings

@Bean
public RestTemplate restTemplate() {
    SimpleClientHttpRequestFactory factory = new SimpleClientHttpRequestFactory();
    // Time to establish a connection to the server from the client-side. Set to 20s.
    factory.setConnectTimeout(20000);
    // Time to finish reading data from the socket. Set to 300s.
    factory.setReadTimeout(300000);
    return new RestTemplate(factory);
}

JavaScript HTTP Client

axios

Default Timeout

The default timeout is 0 (no timeout).

Settings

const instance = axios.create({
  baseURL: 'https://some-domain.com/api/',
  // `timeout` specifies the number of milliseconds before the request times out.
  // If the request takes longer than `timeout`, the request will be aborted.
  timeout: 60000,
  ...
});

Plugins

Browser plugins

Chrome extensions

IDE plugins

Web, Mobile and Desktop Application

Management System

Website

Website categories

  1. E-commerce: Websites that facilitate online buying and selling of goods and services, such as Amazon or eBay.
    1. Shopping mall
  2. Social Networking: Websites that connect people and allow them to interact and share information, such as Facebook or LinkedIn.
    1. IM
    2. Forum/BBS
  3. News and Media: Websites that provide news articles, videos, and other multimedia content, such as CNN or BBC.
  4. Blogs and Personal Websites: Websites where individuals or organizations publish articles and personal opinions, such as WordPress or Blogger.
  5. Educational: Websites that provide information, resources, and learning materials for educational purposes, such as Khan Academy or Coursera.
  6. Entertainment: Websites that offer various forms of entertainment, such as games, videos, music, or movies, such as Netflix or YouTube.
  7. Government and Nonprofit: Websites belonging to government institutions or nonprofit organizations, providing information, services, and resources, such as whitehouse.gov or Red Cross.
  8. Business and Corporate: Websites representing businesses and corporations, providing information about products, services, and company details, such as Apple or Coca-Cola.
  9. Sports: Websites dedicated to sports news, scores, analysis, and related information, such as ESPN or NBA.
  10. Travel and Tourism: Websites that provide information and services related to travel planning, accommodations, and tourist attractions, such as TripAdvisor or Booking.com.

Mobile Software

Desktop Software

  • Instant message. E.g. Telegram.
  • Email client. E.g. Mozilla Thunderbird.
  • Web browser. E.g. Google Chrome.
  • Office software. E.g. Microsoft Office, Typora, XMind.
  • Note-taking software. E.g. Notion, Evernote.
  • PDF reader. E.g. SumatraPDF.
  • File processing. E.g. 7-Zip
  • Media player. E.g. VLC.
  • Media processing. E.g. FFmpeg, HandBrake, GIMP.
  • Flashcard app. E.g. anki.
  • Stream Media. E.g. Spotify.
  • HTTP proxy. E.g. V2rayN.

Libraries, Tools, Services

Libraries

  • General-purpose libraries for programming language. E.g. Apache Commons Lang.
  • File processing. E.g. Apache POI.
  • Data parser. E.g. org.json.
  • Chart, Report, Graph.
  • Logging.
  • Testing.
  • HTTP Client.

Developer Tools

  • Editor
  • IDE
  • Service Client.

Services

  • Web servers. E.g. Nginx, Apache Tomcat.
  • Databases. E.g. MySQL.
  • Cache. E.g. Redis.
  • Search engines. E.g. Elasticsearch.
  • Software delivery / containers. E.g. Docker.
  • Other services. E.g. Gotenberg, Aliyun services (media, ai).

Operating Systems

Programming Languages

Apache PDFBox is a Java tool for working with PDF documents. In this post, we’ll introduce how to use Apache PDFBox to handle PDF files. The code examples in this post are based on pdfbox v2.0.29.

<dependency>
    <groupId>org.apache.pdfbox</groupId>
    <artifactId>pdfbox</artifactId>
    <version>2.0.29</version>
</dependency>

Extract Text

Extract all page text

String inputFilePath = "your/pdf/filepath";
// Load PDF document
PDDocument document = PDDocument.load(new File(inputFilePath));
// Create PDFTextStripper instance
PDFTextStripper pdfStripper = new PDFTextStripper();
// Extract text from PDF
String text = pdfStripper.getText(document);
// Print extracted text
System.out.println(text);
// Close the document
document.close();

Extract page by page

String inputFilePath = "your/pdf/filepath";
// Load the PDF document
PDDocument document = PDDocument.load(new File(inputFilePath));
// Create an instance of PDFTextStripper
PDFTextStripper stripper = new PDFTextStripper();
// Iterate through each page and extract the text
for (int pageNumber = 1; pageNumber <= document.getNumberOfPages(); pageNumber++) {
    stripper.setStartPage(pageNumber);
    stripper.setEndPage(pageNumber);

    String text = stripper.getText(document);
    System.out.println("Page " + pageNumber + ":");
    System.out.println(text);
}
// Close the PDF document
document.close();

Split and Merge

Split

private static void splitPdf(String inputFilePath, String outputDir) throws IOException {
    File file = new File(inputFilePath);
    // Load the PDF document
    PDDocument document = PDDocument.load(file);
    // Create a PDF splitter object
    Splitter splitter = new Splitter();
    // Split the document
    List<PDDocument> splitDocuments = splitter.split(document);
    // Get an iterator for the split documents
    Iterator<PDDocument> iterator = splitDocuments.iterator();
    // Iterate through the split documents and save them
    int i = 1;
    while (iterator.hasNext()) {
        PDDocument splitDocument = iterator.next();
        String outputFilePath = new StringBuilder().append(outputDir)
                .append(File.separator)
                .append(file.getName().replaceAll("[.](pdf|PDF)", ""))
                .append("_split_")
                .append(i)
                .append(".pdf")
                .toString();
        splitDocument.save(outputFilePath);
        splitDocument.close();
        i++;
    }
    // Close the source document
    document.close();
    System.out.println("PDF split successfully!");
}

Merge PDF files

private static void mergePdfFiles(List<String> inputFilePaths, String outputFilePath) throws IOException {
    PDFMergerUtility merger = new PDFMergerUtility();
    // Add as many files as you need
    for (String inputFilePath : inputFilePaths) {
        merger.addSource(new File(inputFilePath));
    }
    merger.setDestinationFileName(outputFilePath);
    merger.mergeDocuments();
    System.out.println("PDF files merged successfully!");
}

Insert and remove pages

Insert pages

public static void insertPage(String sourceFile, String targetFile, int pageIndex) throws IOException {
    // Load the existing PDF document
    PDDocument sourceDoc = PDDocument.load(new File(sourceFile));
    int sourcePageCount = sourceDoc.getNumberOfPages();
    // Validate the requested page index
    if (pageIndex < 0 || pageIndex > sourcePageCount) {
        throw new IllegalArgumentException("Invalid page index");
    }
    // Create a new blank page
    PDPage newPage = new PDPage();
    // Insert the new page at the requested index
    if (pageIndex == sourcePageCount) {
        sourceDoc.getPages().add(newPage);
    } else {
        sourceDoc.getPages().insertBefore(newPage, sourceDoc.getPages().get(pageIndex));
    }
    // Save the modified PDF document to a target file
    sourceDoc.save(targetFile);
    // Close the document
    sourceDoc.close();
}

Remove pages

private static void removePage(String inputFilePath, String outputFilePath, int pageIndex) throws IOException {
    PDDocument sourceDoc = PDDocument.load(new File(inputFilePath));
    int sourcePageCount = sourceDoc.getNumberOfPages();
    // Validate the requested page index
    if (pageIndex < 0 || pageIndex >= sourcePageCount) {
        throw new IllegalArgumentException("Invalid page index");
    }
    sourceDoc.getPages().remove(pageIndex);
    sourceDoc.save(outputFilePath);
    sourceDoc.close();
}

private static void removePage2(String inputFilePath, String outputFilePath, int pageIndex) throws IOException {
    PDDocument sourceDoc = PDDocument.load(new File(inputFilePath));
    int sourcePageCount = sourceDoc.getNumberOfPages();
    // Validate the requested page index
    if (pageIndex < 0 || pageIndex >= sourcePageCount) {
        throw new IllegalArgumentException("Invalid page index");
    }
    Splitter splitter = new Splitter();
    List<PDDocument> pages = splitter.split(sourceDoc);
    pages.remove(pageIndex);
    PDDocument outputDocument = new PDDocument();
    for (PDDocument page : pages) {
        outputDocument.addPage(page.getPage(0));
    }
    outputDocument.save(outputFilePath);
    sourceDoc.close();
    outputDocument.close();
}

Encryption

Encrypt

public static void encryptPdf(String inputFilePath, String outputFilePath, String password) throws IOException {
PDDocument doc = PDDocument.load(new File(inputFilePath));

AccessPermission ap = new AccessPermission();
// disable printing,
ap.setCanPrint(false);
//disable copying
ap.setCanExtractContent(false);
//Disable other things if needed...

// Owner password (to open the file with all permissions)
// User password (to open the file but with restricted permissions)
StandardProtectionPolicy spp = new StandardProtectionPolicy(password, password, ap);
// Define the length of the encryption key.
// Possible values are 40, 128 or 256.
int keyLength = 256;
spp.setEncryptionKeyLength(keyLength);

//Apply protection
doc.protect(spp);

doc.save(outputFilePath);
doc.close();
}

Update password

public static void updatePdfPassword(String inputFilePath, String outputFilePath,
String oldPassword, String newPassword) throws IOException {
PDDocument doc = PDDocument.load(new File(inputFilePath), oldPassword);

AccessPermission ap = new AccessPermission();
// disable printing,
ap.setCanPrint(false);
//disable copying
ap.setCanExtractContent(false);
//Disable other things if needed...

// Owner password (to open the file with all permissions)
// User password (to open the file but with restricted permissions)
StandardProtectionPolicy spp = new StandardProtectionPolicy(newPassword, newPassword, ap);
// Define the length of the encryption key.
// Possible values are 40, 128 or 256.
int keyLength = 256;
spp.setEncryptionKeyLength(keyLength);

//Apply protection
doc.protect(spp);

doc.save(outputFilePath);
doc.close();
}

Remove password

public static void removePdfPassword(String inputFilePath, String outputFilePath,
String password) throws IOException {
PDDocument doc = PDDocument.load(new File(inputFilePath), password);
// Set the document access permissions
doc.setAllSecurityToBeRemoved(true);
// Save the unprotected PDF document
doc.save(outputFilePath);
// Close the document
doc.close();
}

Convert to Image

PDF to Image

public static void pdfToImage(String pdfFilePath, String imageFileDir) throws IOException {
File file = new File(pdfFilePath);
PDDocument document = PDDocument.load(file);
// Create PDFRenderer object to render each page as an image
PDFRenderer pdfRenderer = new PDFRenderer(document);
// Iterate over all the pages and convert each page to an image
for (int pageIndex = 0; pageIndex < document.getNumberOfPages(); pageIndex++) {
// Render the page as an image
// 100 DPI: general-quality
// 300 DPI: high-quality
// 600 DPI: pristine-quality
BufferedImage image = pdfRenderer.renderImageWithDPI(pageIndex, 300);
// Save the image to a file
String imageFilePath = new StringBuilder()
.append(imageFileDir)
.append(File.separator)
.append(file.getName().replaceAll("[.](pdf|PDF)$", "")) // strip only the trailing extension
.append("_")
.append(pageIndex + 1)
.append(".png")
.toString();
ImageIO.write(image, "PNG", new File(imageFilePath));
}
// Close the document
document.close();
}
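The renderImageWithDPI call maps inches to pixels: pixels = inches × DPI. As a quick sanity check of what those DPI presets mean for an A4 page (the helper class below is illustrative, not part of PDFBox):

```java
public class DpiMath {
    // Pixel count for a page dimension rendered at a given DPI.
    static long pixels(double inches, int dpi) {
        return Math.round(inches * dpi);
    }

    public static void main(String[] args) {
        // An A4 page is 8.27 x 11.69 inches
        System.out.println(pixels(8.27, 300));  // 2481
        System.out.println(pixels(11.69, 300)); // 3507
    }
}
```

So a single A4 page rendered at 300 DPI is about 2481 x 3507 pixels; doubling the DPI quadruples the pixel count, which is why 600 DPI output gets large quickly.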

Image to PDF

private static void imageToPdf(String imagePath, String pdfPath) throws IOException {
try (PDDocument doc = new PDDocument()) {
PDPage page = new PDPage();
doc.addPage(page);
// createFromFile is the easiest way with an image file
// if you already have the image in a BufferedImage,
// call LosslessFactory.createFromImage() instead
PDImageXObject pdImage = PDImageXObject.createFromFile(imagePath, doc);
// draw the image at (x=0, y=0), scaled to at most 600 points wide
try (PDPageContentStream contents = new PDPageContentStream(doc, page)) {
// to draw the image at PDF width
int scaledWidth = 600;
if (pdImage.getWidth() < 600) {
scaledWidth = pdImage.getWidth();
}
contents.drawImage(pdImage, 0, 0, scaledWidth, pdImage.getHeight() * scaledWidth / pdImage.getWidth());
}
doc.save(pdfPath);
}
}

Create PDFs

String outputFilePath = "output/pdf/filepath";

PDDocument document = new PDDocument();
PDPage page = new PDPage(PDRectangle.A4);
document.addPage(page);
// Create content stream to draw on the page
PDPageContentStream contentStream = new PDPageContentStream(document, page);
contentStream.setFont(PDType1Font.HELVETICA, 12);
// Insert text
contentStream.beginText();
contentStream.newLineAtOffset(100, 700);
contentStream.showText("Hello, World!");
contentStream.endText();
// Load the image
String imageFilePath = "C:\\Users\\Taogen\\Pictures\\icon.jpg";
PDImageXObject image = PDImageXObject.createFromFile(imageFilePath, document);
// Set the scale and position of the image on the page
float scale = 0.5f; // adjust the scale as needed
float x = 100; // x-coordinate of the image
float y = 500; // y-coordinate of the image
// Draw the image on the page
contentStream.drawImage(image, x, y, image.getWidth() * scale, image.getHeight() * scale);
contentStream.close();
document.save(outputFilePath);
document.close();

Compress (TODO)

Watermark (TODO)

I. Basic concepts

Package Management on Operating Systems

Debian/Ubuntu Package Management

Advanced Packaging Tool – APT

apt-get is a command line tool for interacting with the Advanced Packaging Tool (APT) library (a package management system for Debian-based Linux distributions). It allows you to search for, install, manage, update, and remove software.

Configuration of the APT system repositories is stored in the /etc/apt/sources.list file and the /etc/apt/sources.list.d directory. You can add additional repositories in a separate file in the /etc/apt/sources.list.d directory, for example, redis.list, docker.list.
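Each line in these files describes one repository. A hypothetical /etc/apt/sources.list.d/example.list entry looks like this (the URL and keyring path are placeholders):

```
# /etc/apt/sources.list.d/example.list (hypothetical)
deb [signed-by=/usr/share/keyrings/example-archive-keyring.gpg] https://packages.example.com/deb stable main
```

The fields are the archive type (deb), optional options in brackets, the repository URL, the distribution codename, and one or more components.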

dpkg

dpkg is a package manager for Debian-based systems. It can install, remove, and build packages, but unlike other package management systems, it cannot automatically download and install packages – or their dependencies. APT and Aptitude are newer, and layer additional features on top of dpkg.

# install
sudo dpkg -i package_file.deb
# uninstall (apt-get remove package_name also works)
sudo dpkg -r package_name

CentOS/RHEL Package Management

Yellow Dog Updater, Modified (YUM)

YUM is the primary package management tool for installing, updating, removing, and managing software packages in Red Hat Enterprise Linux. YUM performs dependency resolution when installing, updating, and removing software packages. YUM can manage packages from installed repositories in the system or from .rpm packages. The main configuration file for YUM is at /etc/yum.conf, and all the repos are at /etc/yum.repos.d.

# add repos config to /etc/yum.repos.d
...
# clear repo cache
yum clean all
# create repo cache
yum makecache
# search package
yum search {package_name}

# upgrade package
yum update

# install package
yum install {package_name}

# uninstall package
yum remove {package_name}

RPM (RPM Package Manager)

RPM is a popular package management tool in Red Hat Enterprise Linux-based distros. Using RPM, you can install, uninstall, and query individual software packages, but it cannot resolve dependencies the way YUM can. RPM does provide useful output, including a list of required packages. An RPM package consists of an archive of files and metadata; the metadata includes helper scripts, file attributes, and information about the package.

# install
sudo rpm -i package_file.rpm
sudo rpm --install package_file.rpm
# reinstall
sudo rpm --reinstall package_file.rpm
# uninstall
sudo rpm -e package_name
sudo rpm --erase package_name

Windows Package Management

Chocolatey

winget

Microsoft Store

MacOS Package Management

brew

Mac App Store

Service Manager

Systemd

II. Software Installation

JDK/JRE

Headless version

The headless version is the same as the regular one, but without support for keyboards, mice, and display systems. It therefore has fewer dependencies, which makes it more suitable for server applications.

Debian/Ubuntu/Deepin

Install openjdk from Official APT Repositories

Supported Operating Systems

  • Ubuntu/Debian
  • Deepin

Installing

# install
sudo apt-get install openjdk-8-jdk
# verify. If the installation was successful, you can see the Java version.
java -version

Installing Options

  • openjdk-8/11/17-jdk
  • openjdk-8/11/17-jdk-headless
  • openjdk-8/11/17-jre
  • openjdk-8/11/17-jre-headless

Maven

CentOS/RHEL

Install from the EPEL YUM repository

# Add the EPEL repository, and update YUM to confirm your change
sudo yum install epel-release
sudo yum update
# install
sudo yum install maven -y
# verify
mvn -v

Add the Aliyun mirror: add the following lines inside the <mirrors> tag in /etc/maven/settings.xml

<mirror>
<id>alimaven</id>
<name>aliyun maven</name>
<url>http://maven.aliyun.com/nexus/content/groups/public/</url>
<mirrorOf>central</mirrorOf>
</mirror>

Python

Debian/Ubuntu

Install from the official APT repository

# Update the environment
sudo apt update
# install
sudo apt install python3 -y
# verify
python3 -V

CentOS/RHEL

Install from the Official YUM repository

# Update the environment. Make sure that we are working with the most up to date environment possible in terms of our packages
sudo yum update -y
# install
sudo yum install -y python3
# verify
python3 -V

Node.js

CentOS/RHEL

Install from the EPEL YUM repository

# Add the EPEL repository, and update YUM to confirm your change
sudo yum install epel-release
sudo yum update
# install
sudo yum install nodejs
# verify
node --version

Redis

Linux

Install from Snapcraft

The Snapcraft store provides Redis packages that can be installed on platforms that support snap. Snap is supported and available on most major Linux distributions.

sudo snap install redis

If your Linux does not currently have snap installed, install it using the instructions described in Installing snapd.

Debian/Ubuntu/Deepin

Install from the official APT repositories

sudo apt-get update
sudo apt-get install redis-server

Update config

sudo vim /etc/redis/redis.conf

Uncomment the following line

# supervised auto

to

supervised auto

Enable and restart Redis service

sudo systemctl enable redis.service
sudo systemctl restart redis.service

Verify

systemctl status redis
redis-cli ping

Install from the Redis APT repository

Most major Linux distributions provide packages for Redis.

# prerequisites
sudo apt install lsb-release curl gpg
# add the repository to the apt index, update it, and then install redis

curl -fsSL https://packages.redis.io/gpg | sudo gpg --dearmor -o /usr/share/keyrings/redis-archive-keyring.gpg

echo "deb [signed-by=/usr/share/keyrings/redis-archive-keyring.gpg] https://packages.redis.io/deb $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/redis.list

sudo apt-get update
sudo apt-get install redis

# verify
systemctl status redis
redis-cli ping

CentOS/RHEL

Install from the EPEL YUM repository

  1. Add the EPEL repository, and update YUM to confirm your change:

    sudo yum install epel-release
    sudo yum update
  2. Install Redis:

    sudo yum install redis
  3. Start Redis:

    sudo systemctl start redis

    Optional: To automatically start Redis on boot:

    sudo systemctl enable redis

Verify the Installation

Verify that Redis is running with redis-cli:

redis-cli ping

If Redis is running, it will return:

PONG

Windows

Redis is not officially supported on Windows.

Install from Source

Supported Operating Systems

  • All Linux distros (distributions)
  • MacOS

You can compile and install Redis from source on a variety of platforms and operating systems, including Linux and macOS. Redis has no dependencies other than a C compiler and libc.

# Download source files
wget https://download.redis.io/redis-stable.tar.gz
# Compiling
tar -xzvf redis-stable.tar.gz
cd redis-stable
make
# make sure the build is correct
make test

If the compile succeeds, you’ll find several Redis binaries in the src directory, including:

  • redis-server: the Redis server itself
  • redis-cli: the command line interface utility to talk with Redis

Starting and stopping Redis

cd redis-stable
# starting redis server
./src/redis-server &
# starting redis server with config
./src/redis-server redis.conf &
# stopping redis server
ps -ef | grep redis-server | awk '{print $2}' | head -1 | xargs kill -9
# connect to redis
./src/redis-cli
# auth
127.0.0.1:6379> auth YOUR_PASSWORD

Update the password in redis.conf

# requirepass foobared

to

requirepass YOUR_STRONG_PASSWORD

Manage Redis service using systemd

Create the /etc/systemd/system/redis.service file, and add the following lines to it

[Unit]
Description=Redis
After=network.target

[Service]
ExecStart=/usr/local/bin/redis-server /etc/redis/redis.conf
ExecStop=/usr/local/bin/redis-cli shutdown
Restart=always

[Install]
WantedBy=multi-user.target

Copy the files

cd /path/to/redis/source/dir
sudo mkdir /etc/redis
sudo cp redis.conf /etc/redis/
sudo cp src/redis-server /usr/local/bin/
sudo cp src/redis-cli /usr/local/bin/

Edit config

sudo vim /etc/redis/redis.conf

Uncomment the following line

# supervised auto

to

supervised auto

Enable and start Redis service

systemctl enable redis
systemctl start redis
systemctl status redis

To verify Redis is up and running, run the following command:

redis-cli PING

MySQL

Linux

Install from binary distributions

Aim: create a MySQL service that starts automatically when the computer boots.

Download generic Unix/Linux binary package

Linux - Generic (glibc 2.12) (x86, 64-bit), Compressed TAR Archive. For example: mysql-5.7.44-linux-glibc2.12-x86_64.tar.gz

Installing

# Install dependency `libaio` library
yum search libaio
yum install libaio

# Create a mysql User and Group
groupadd mysql
useradd -r -g mysql -s /bin/false mysql

# Obtain and Unpack the Distribution
cd /usr/local
tar zxvf /path/to/mysql-VERSION-OS.tar.gz
# This enables you to refer more easily to it as /usr/local/mysql.
ln -s full-path-to-mysql-VERSION-OS mysql
# add the `/usr/local/mysql/bin` directory to your `PATH` variable
cp /etc/profile /etc/profile.bak.$(date '+%Y-%m-%d_%H-%M-%S')
echo 'export PATH=$PATH:/usr/local/mysql/bin' >> /etc/profile
cat /etc/profile
source /etc/profile

# Creating a Safe Directory For Import and Export Operations
cd /usr/local/mysql
mkdir mysql-files
chown mysql:mysql mysql-files
chmod 750 mysql-files

# Initialize the data directory.
bin/mysqld --initialize --user=mysql # A temporary password is generated for root@localhost: Trbgylojs1!w
bin/mysql_ssl_rsa_setup

# Start mysql server
bin/mysqld_safe --user=mysql &

# Next command is optional
cp support-files/mysql.server /etc/init.d/mysql.server

Note: This procedure assumes that you have root (administrator) access to your system. Alternatively, you can prefix each command using the sudo (Linux) or pfexec (Solaris) command.

Managing MySQL Server with systemd
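A minimal unit file, following the same pattern as the Redis service earlier in this post, can be saved as /etc/systemd/system/mysqld.service. This is a sketch: the paths below assume the /usr/local/mysql layout used above, so adjust them for your installation.

```
[Unit]
Description=MySQL Server
After=network.target

[Service]
User=mysql
Group=mysql
ExecStart=/usr/local/mysql/bin/mysqld --datadir=/usr/local/mysql/data
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After creating the file, run sudo systemctl daemon-reload, then sudo systemctl enable mysqld and sudo systemctl start mysqld.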


Create a user for remote access

  1. Enable MySQL server port in the firewall

If your Linux firewall is managed with ufw, you can run the following command to open the MySQL server port.

ufw allow 3306/tcp
  2. Update bind-address in /etc/my.cnf

Change 127.0.0.1 to the server's LAN IP, e.g. 192.168.1.100

bind-address=192.168.1.100
  3. Create a MySQL user for remote login
mysql> SELECT user,authentication_string,plugin,host FROM mysql.user;
mysql> CREATE USER 'root'@'%' IDENTIFIED BY 'password';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
mysql> FLUSH PRIVILEGES;
mysql> SELECT user,authentication_string,plugin,host FROM mysql.user;
  4. Verify

Connect the remote MySQL server from your local computer

# testing the port is open
$ telnet {server_ip} 3306
# test MySQL connection
$ mysql -h {server_ip} -u root -p
Enter password:

Errors

Error: mysql: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory

This error occurs when you run mysql -u root -p.

Solutions

# centos
yum install ncurses-compat-libs

Error: ERROR 1820 (HY000): You must reset your password using ALTER USER statement before executing this statement.

To set up your password for the first time:

mysql> SET PASSWORD = PASSWORD('new password');

Windows

Docker

docker run --name={mysql_container_name} -d -p {exposed_port}:3306 \
-e MYSQL_ROOT_HOST='%' -e MYSQL_ROOT_PASSWORD='{your_password}' \
--restart unless-stopped \
-v mysql_data:/var/lib/mysql \
mysql/mysql-server:{version}

Elasticsearch

Kibana

CentOS/RHEL

Install from the elastic YUM repository

Download and install the public signing key:

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Create a file called kibana.repo in the /etc/yum.repos.d/ directory for RedHat based distributions, or in the /etc/zypp/repos.d/ directory for OpenSuSE based distributions, containing:

[kibana-8.x]
name=Kibana repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

You can now install Kibana with one of the following commands:

# older Red Hat based distributions
sudo yum install kibana
# Fedora and other newer Red Hat distributions
sudo dnf install kibana
# OpenSUSE based distributions
sudo zypper install kibana

Install by downloading RPM file

Downloading the Kibana RPM file

wget https://artifacts.elastic.co/downloads/kibana/kibana-8.8.2-x86_64.rpm
wget https://artifacts.elastic.co/downloads/kibana/kibana-8.8.2-x86_64.rpm.sha512
shasum -a 512 -c kibana-8.8.2-x86_64.rpm.sha512
sudo rpm --install kibana-8.8.2-x86_64.rpm

Start Elasticsearch and generate an enrollment token for Kibana

When you start Elasticsearch for the first time, the following security configuration occurs automatically:

  • Authentication and authorization are enabled, and a password is generated for the elastic built-in superuser.
  • Certificates and keys for TLS are generated for the transport and HTTP layer, and TLS is enabled and configured with these keys and certificates.

The password, certificates, and keys are output to your terminal.

Run Kibana with systemd

To configure Kibana to start automatically when the system starts, run the following commands:

sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable kibana.service

Kibana can be started and stopped as follows:

sudo systemctl start kibana.service
sudo systemctl stop kibana.service

Log information can be accessed via journalctl -u kibana.service.

Configure Kibana via the config file

Kibana loads its configuration from the /etc/kibana/kibana.yml file by default. The format of this config file is explained in Configuring Kibana.

Nginx

Apache Tomcat

Docker

Debian/Ubuntu/Deepin

Install from the Docker APT repository

Set up the repository

  1. Update the apt package index and install packages to allow apt to use a repository over HTTPS
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg
  2. Add Docker’s official GPG key:
sudo install -m 0755 -d /etc/apt/keyrings

# USTC mirror: docker - debian/deepin
curl -fsSL https://mirrors.ustc.edu.cn/docker-ce/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
# Official docker - debian
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
# Official docker - ubuntu
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

sudo chmod a+r /etc/apt/keyrings/docker.gpg
  3. Use the following command to set up the repository
# USTC mirror: docker - debian/deepin
echo \
"deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://mirrors.ustc.edu.cn/docker-ce/linux/debian buster stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Official docker - debian
echo \
"deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian \
"$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Official docker - ubuntu
echo \
"deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
"$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Install Docker Engine

  1. Update the apt package index:
sudo apt-get update
  2. Install Docker Engine, containerd, and Docker Compose.
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
  3. Verify that the Docker Engine installation is successful by running the hello-world image
sudo docker run hello-world

View Docker version and status

docker version
systemctl status docker


In this post, we’ll cover how to configure CORS in a Spring Boot project. If you want to understand how CORS works, you can check out the article Understanding CORS.

Configuring HTTP Request CORS

Controller CORS Configuration

Use @CrossOrigin annotation

Add a @CrossOrigin annotation to the controller class

// no credentials
@CrossOrigin
@RestController
@RequestMapping("/my")
public class MyController {
@GetMapping
public String testGet() {
return "hello \n" + new Date();
}
}

Add a @CrossOrigin annotation to the controller method

@RestController
@RequestMapping("/my")
public class MyController {
// no credentials
@CrossOrigin
@GetMapping
public String testGet() {
return "hello \n" + new Date();
}
}
// with credentials
@CrossOrigin(origins = {"http://localhost"}, allowCredentials = "true")
// or
@CrossOrigin(originPatterns = {"http://localhost:[*]"}, allowCredentials = "true")

Properties of CrossOrigin

  • origins: by default, it’s *. You can specify allowed origins like @CrossOrigin(origins = {"http://localhost"}). You also can specify allowed origins by patterns like @CrossOrigin(originPatterns = {"http://*.taogen.com:[*]"}).

Add a @CrossOrigin annotation to the controller method or the controller class. It is equivalent to

  1. responding a successful result to the preflight request. For example

    HTTP/1.1 204 No Content
    Connection: keep-alive
    Access-Control-Allow-Origin: https://foo.bar.org
    Access-Control-Allow-Methods: POST, GET, OPTIONS, DELETE, PUT
    Access-Control-Max-Age: 86400
  2. adding the following headers to the HTTP response headers

    Access-Control-Allow-Origin: *
    Vary: Access-Control-Request-Headers
    Vary: Access-Control-Request-Method
    Vary: Origin
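To see this exchange outside the browser, the preflight request can be reproduced by hand. The sketch below only builds the OPTIONS request using the JDK's built-in java.net.http client (JDK 12 or later, where the Origin header may be set on client requests); the URL and origin values are placeholders:

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class PreflightDemo {
    // Builds the OPTIONS request a browser would send before a cross-origin DELETE.
    public static HttpRequest buildPreflight(String url, String origin) {
        return HttpRequest.newBuilder(URI.create(url))
                .method("OPTIONS", HttpRequest.BodyPublishers.noBody())
                .header("Origin", origin)
                .header("Access-Control-Request-Method", "DELETE")
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = buildPreflight("http://localhost:8080/my", "http://localhost");
        System.out.println(req.method()); // OPTIONS
        System.out.println(req.headers().firstValue("Origin").orElse(""));
    }
}
```

Sending this request to an endpoint annotated with @CrossOrigin should return the 204 response with the Access-Control-Allow-* headers shown above.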

Update HTTP response headers

This only works for GET, POST, and HEAD requests without custom headers. In other words, it does not work for requests that trigger a preflight.

@RestController
@RequestMapping("/my")
public class MyController {

@GetMapping
public String testGet(HttpServletResponse response) {
response.setHeader("Access-Control-Allow-Origin", "*");
response.setHeader("Access-Control-Max-Age", "86400");
return "test get\n" + new Date();
}

@PostMapping
public String testPost(HttpServletResponse response) {
response.setHeader("Access-Control-Allow-Origin", "*");
response.setHeader("Access-Control-Max-Age", "86400");
return "test post\n" + new Date();
}
}
// with credentials
response.setHeader("Access-Control-Allow-Origin", "{your_host}"); // e.g. http://localhost or reqs.getHeader("Origin")
response.setHeader("Access-Control-Allow-Credentials", "true");
response.setHeader("Access-Control-Max-Age", "86400");

For ‘DELETE + Preflight’ or ‘PUT + Preflight’ requests, adding header ‘Access-Control-Allow-Origin: *’ to HttpServletResponse does not enable CORS. This will result in the following error

Access to XMLHttpRequest at 'http://localhost:8080/my' from origin 'http://localhost' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.

For requests with custom headers, adding header ‘Access-Control-Allow-Origin: *’ to HttpServletResponse does not enable CORS. This will result in the following error

Access to XMLHttpRequest at 'http://localhost:8080/my' from origin 'http://localhost' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.
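The rule behind both errors: a request is "simple" (sent without a preflight) only if its method is GET, POST, or HEAD and it carries no custom headers. A simplified check of that rule (a hypothetical helper, not part of Spring; it ignores the Content-Type value restrictions of the real CORS spec):

```java
import java.util.List;
import java.util.Set;

public class SimpleRequestCheck {
    private static final Set<String> SIMPLE_METHODS = Set.of("GET", "POST", "HEAD");
    private static final Set<String> SIMPLE_HEADERS =
            Set.of("accept", "accept-language", "content-language", "content-type");

    // Returns true if the browser would send the request without a preflight.
    public static boolean isSimple(String method, List<String> headers) {
        if (!SIMPLE_METHODS.contains(method)) return false;
        for (String h : headers) {
            if (!SIMPLE_HEADERS.contains(h.toLowerCase())) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isSimple("GET", List.of("Accept")));            // true
        System.out.println(isSimple("DELETE", List.of()));                 // false: preflight
        System.out.println(isSimple("POST", List.of("X-Requested-With"))); // false: custom header
    }
}
```

Any request for which this returns false needs the preflight itself to be answered with the proper Access-Control-Allow-* headers; setting them only on the actual response is not enough.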

Global CORS configuration

WebMvcConfigurer.addCorsMappings

WebMvcConfigurer.addCorsMappings has the same function as the @CrossOrigin annotation, but applies globally to all matched mappings.

@Configuration
public class CorsConfiguration {
@Bean
public WebMvcConfigurer corsConfigurer() {
return new WebMvcConfigurer() {
@Override
public void addCorsMappings(CorsRegistry registry) {
// no credentials
registry.addMapping("/**")
.allowedOrigins("*")
.allowedMethods("GET", "POST", "HEAD", "PUT", "DELETE", "PATCH");
}
};
}
}
// with credentials
registry.addMapping("/**")
.allowedOrigins("{your_host}") // e.g. http://localhost
.allowCredentials(true)
.allowedMethods("GET", "POST", "HEAD", "PUT", "DELETE", "PATCH");
  • pathPattern: /myRequestMapping, /**, /myRequestMapping/**, /*
  • allowedOrigins: By default, all origins are allowed. Its default value is *. You can specify allowed origins like "http://localhost".
  • allowedOriginPatterns: for example, http://localhost:[*], http://192.168.0.*:[*], https://demo.com
  • allowedMethods: By default, GET, HEAD, and POST methods are allowed. You can enable all methods by setting its value to "GET", "POST", "HEAD", "PUT", "DELETE", "PATCH".

Filters

@Component
public class CorsFilter implements Filter {

@Override
public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain) throws IOException, ServletException {
HttpServletResponse response = (HttpServletResponse) res;
HttpServletRequest reqs = (HttpServletRequest) req;
// no credentials
response.setHeader("Access-Control-Allow-Origin", "*");
response.setHeader("Access-Control-Allow-Methods", "POST, GET, PUT, PATCH, DELETE");
response.setHeader("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept");
response.setHeader("Access-Control-Max-Age", "86400");
chain.doFilter(req, res);
}
}
// with credentials
response.setHeader("Access-Control-Allow-Origin", "{your_host}"); // e.g. http://localhost or reqs.getHeader("Origin")
response.setHeader("Access-Control-Allow-Credentials", "true");
response.setHeader("Access-Control-Allow-Methods", "POST, GET, PUT, PATCH, DELETE");
response.setHeader("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept");
response.setHeader("Access-Control-Max-Age", "86400");