SSH (Secure Shell) is an encrypted protocol used to manage and communicate with servers. There are several ways to log in to an SSH server. Public key authentication is one of those authentication methods, and it allows you to access a server over SSH without typing a password.
Creating SSH keys
List supported algorithms of SSH keys on your client and server:
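One way to do this is with `ssh -Q`, which queries the algorithms a given OpenSSH build supports (run it on each machine):

```shell
# List the key types this OpenSSH installation supports
ssh -Q key
```

On recent OpenSSH versions the output includes entries such as `ssh-ed25519` and `ssh-rsa`.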
We recommend the ed25519 algorithm for generating your SSH key. Ed25519 was introduced in OpenSSH 6.5. It uses elliptic-curve cryptography, which offers better security and faster performance than DSA, ECDSA, or RSA. RSA is even considered unsafe when generated with a key length smaller than 2048 bits.
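To generate such a key pair, run `ssh-keygen` (the email address below is a placeholder comment; you will be prompted for the file path and passphrase):

```shell
# Generate an Ed25519 key pair; -C attaches a comment to the public key
ssh-keygen -t ed25519 -C "your_email@example.com"
```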
```
Enter file in which to save the key (/Users/taogen/.ssh/id_ed25519):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in (unknown)
Your public key has been saved in (unknown).pub
```
You can specify your SSH key’s filename. If you don’t change it, by default the private key is named id_{algorithm} and the public key is named id_{algorithm}.pub.
For security reasons, it’s best to set a passphrase for your SSH keys.
Copying the SSH public key to your server
Copying Your Public Key Using ssh-copy-id
The simplest way to copy your public key to an existing server is to use a utility called ssh-copy-id.
```shell
ssh-copy-id -i public_key_filepath username@remote_host
# or use a specific SSH port
ssh-copy-id -i public_key_filepath -p ssh_port username@remote_host
```
mkdir -p ~/.ssh: creates the ~/.ssh directory on the remote host if it doesn’t exist.
cat >> ~/.ssh/authorized_keys: appends the standard output of the previous command in the pipeline to the file ~/.ssh/authorized_keys on the remote host.
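The steps above can be combined into a single pipeline; a sketch, assuming the default ed25519 key filename (adjust the key path and host to your setup):

```shell
cat ~/.ssh/id_ed25519.pub | ssh username@remote_host "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"
```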
Configuring SSH
If there are multiple SSH keys on your local system, you need to configure which destination server uses which SSH key. For example, there is an SSH key for GitHub and another SSH key for a remote server.
Creating the SSH configuration file ~/.ssh/config if it doesn’t exist.
vim ~/.ssh/config
Add a configuration like the following:
```
# GitHub
Host github.com
  User git
  Port 22
  Hostname github.com
  IdentityFile "~/.ssh/{your_private_key}"
  TCPKeepAlive yes
  IdentitiesOnly yes

# Remote server
Host {remote_server_ip_address}
  User {username_for_ssh}
  Port {remote_server_ssh_port}
  IdentityFile "~/.ssh/{your_private_key}"
  TCPKeepAlive yes
  IdentitiesOnly yes
```
SSH login with the SSH private key
If you have copied your SSH public key to the server, SSH login will automatically use your private key. Otherwise, you will need to enter the password of the remote server’s user to login.
```shell
$ ssh username@remote_host
# or use a specific port
$ ssh -p ssh_port username@remote_host
```
Disabling password authentication on your server
Using password-based authentication exposes your server to brute-force attacks. You can disable password authentication by updating the configuration file /etc/ssh/sshd_config.
Before disabling password authentication, make sure that you either have SSH key-based authentication configured for the root account on this server, or preferably, that you have SSH key-based authentication configured for an account on this server with sudo access.
sudo vim /etc/ssh/sshd_config
Uncomment the following line by removing the # at the beginning of the line, and set its value to no:

```
PasswordAuthentication no
```
Save and close the file when you are finished. To actually implement the changes we just made, you must restart the service.
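The service name varies by distribution; a sketch (assuming systemd, with sshd on RHEL-family systems and ssh on Debian/Ubuntu):

```shell
sudo systemctl restart sshd
# on Debian/Ubuntu:
# sudo systemctl restart ssh
```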
You can try restarting your system’s networking to check whether the problem of being unable to ping domain names is resolved. See the section “Restart Networking” of this post.
Configuring Default Route Gateway
You need to check your route table and verify that the destination 0.0.0.0 is routed to the default gateway IP (e.g. 192.168.0.1). If not, you need to update the gateway IP.
Get Default Gateway IP
```shell
$ ip r | grep default
default via 192.168.0.1 dev eth0 proto dhcp metric 100
```
Some computers might have multiple default gateways. The gateway with the lowest Metric is searched first and used as the default gateway.
My server’s default gateway IP is 192.168.0.1.
Check the Route Table
Print the route table:
```shell
$ sudo route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.0.1     0.0.0.0         UG    0      0        0 eth0
...
```
Destination: The destination network or destination host.
Gateway: The gateway address, or '*' if none is set.
Genmask: The netmask for the destination net: 255.255.255.255 for a host destination and 0.0.0.0 for the default route.
What Is The Meaning of 0.0.0.0 In Routing Table?
Each network host has a default route for each network card, which creates a 0.0.0.0 route for that card. The address 0.0.0.0 generally means “any address”. If a packet’s destination doesn’t match any individual address in the table, it matches the 0.0.0.0 entry. In other words, the default gateway is always the route whose destination is 0.0.0.0.
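You can see this fall-through behavior with `ip route get`, which asks the kernel which route it would pick for a given destination; an address with no more specific route resolves via the default gateway (8.8.8.8 below is just an example destination):

```shell
ip route get 8.8.8.8
```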
Update the 0.0.0.0 Route to the Default Gateway
Update the destination 0.0.0.0 route to the default gateway IP, e.g. 192.168.0.1.
ip command
You can temporarily update the 0.0.0.0 route’s gateway with the ip command.

Add the default route:

```shell
ip route add default via 192.168.0.1
```

You can also delete the default route:

```shell
ip route delete default
```
route command
You can temporarily update the 0.0.0.0 route’s gateway with the route command.

Add the default route:

```shell
route add default gw 192.168.0.1
```

You can also delete the default route:

```shell
route del default gw 192.168.0.1
```
Update configuration file
You can permanently update the 0.0.0.0 route’s gateway in a system configuration file.
CentOS/RHEL
vim /etc/sysconfig/network
Add the following content to the file /etc/sysconfig/network:

```
NETWORKING=yes
GATEWAY=192.168.0.1
```
Debian/Ubuntu
vim /etc/network/interfaces
Find the network interface and add the following option:

```
...
gateway 192.168.0.1
...
```
Restart Networking
After updating the gateway configuration file, you need to restart networking.
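The exact command depends on the distribution and which network manager is in use; a sketch (service names are assumptions, check your system):

```shell
# Debian/Ubuntu (ifupdown)
sudo systemctl restart networking
# CentOS/RHEL (network-scripts)
sudo systemctl restart network
# systems using NetworkManager
sudo systemctl restart NetworkManager
```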
Kylin Linux Advanced Server V10 (Tercel) is a new-generation domestic server operating system aimed at enterprise-critical workloads. Developed in accordance with the CMMI Level 5 standard, it targets the reliability, security, performance, scalability, and real-time requirements that virtualization, cloud computing, big data, and the industrial internet place on host systems, and provides built-in security, cloud-native support, deep optimization for domestic platforms, high performance, and easy management. Kylin is built from a single source tree that supports six domestic CPU platforms (Phytium, Kunpeng, Loongson, Sunway, Hygon, and Zhaoxin); all components are built from the same set of source code.
Viewing operating system information
```shell
# Show the Linux distribution
$ cat /etc/os-release
NAME="Kylin Linux Advanced Server"
VERSION="V10 (Tercel)"
ID="kylin"
VERSION_ID="V10"
PRETTY_NAME="Kylin Linux Advanced Server V10 (Tercel)"
ANSI_COLOR="0;31"
```
```shell
# Show the CPU architecture
$ lscpu
Architecture:        aarch64
CPU op-mode(s):      64-bit
Model name:          Kunpeng-920
...
```
Details
```
Architecture:                    aarch64
CPU op-mode(s):                  64-bit
Byte Order:                      Little Endian
CPU(s):                          4
On-line CPU(s) list:             0-3
Thread(s) per core:              1
Core(s) per socket:              4
Socket(s):                       1
NUMA node(s):                    1
Vendor ID:                       HiSilicon
Model:                           0
Model name:                      Kunpeng-920
Stepping:                        0x1
BogoMIPS:                        200.00
NUMA node0 CPU(s):               0-3
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1:        Mitigation; __user pointer sanitization
Vulnerability Spectre v2:        Not affected
Vulnerability Tsx async abort:   Not affected
Flags:                           fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma dcpop asimddp asimdfhm ssbs
```
```shell
# Create an archive file
tar -czvf archive.tar.gz dir_name
tar -cjvf archive.tar.bz2 dir_name
tar -cvf archive.tar dir_name

# List files of the archive file
tar -tf archive.tar.gz

# Extract files
tar -xvf archive.tar.gz
tar -xvf archive.tar.gz -C /target/filepath
```
Get information of file
Real path
```shell
# Print the real path of a file
realpath path/to/file
```
Architecture of software
```shell
# Architecture of a software binary
cat `which {your_software}` | file -
# file type
file path/to/file
```
Files and Directories
Directories
| Directory | Description |
| --- | --- |
| /bin | Binary or executable programs. |
| /boot | Contains all the boot-related files and folders, such as conf, grub, etc. |
| /dev | Device files, such as dev/sda1, dev/sda2, etc. |
| /lib | Kernel modules and shared libraries. |
| /etc | System configuration files. |
| /home | Users’ home directories. It is the default current directory after login. |
| /media | Mount point for removable media devices. |
| /mnt | Temporary mount point. |
| /opt | Optional or third-party software. |
| /proc | A virtual, pseudo file system that contains info about running processes, each with a specific process ID (PID). |
| /root | The root user’s home directory. |
| /run | Stores volatile runtime data, such as run-time variables. |
| /sbin | Binary executable programs for an administrator. |
| /sys | A virtual file system in modern Linux distributions that exposes, and allows modification of, the devices connected to the system. |
| /tmp | Temporary space, typically cleared on reboot. |
| /usr | User-related programs. |
| /var | Variable data files, such as log files. |
/lib: The place for essential standard libraries, i.e. the libraries required for your system to run. If something in /bin or /sbin needs a library, that library is likely in /lib.
/usr
/usr/bin: Executables binary files. E.g. java, mvn, git, apt, kill.
/usr/local: To keep self-compiled or third-party programs.
/usr/sbin: This directory contains programs for administering a system, meant to be run by ‘root’. Like ‘/sbin’, it’s not part of a user’s $PATH. Examples of included binaries here are chroot, useradd, in.tftpd and pppconfig.
/usr/lib: the /usr directory in general is, as it sounds, a user-based directory. Here you will find things used by the users on the system. So if you install an application that needs libraries, they might go to /usr/lib. If a binary in /usr/bin or /usr/sbin needs a library, it will likely be in /usr/lib.
/var: variable data files. log files.
/var/lib: the /var directory is the writable counterpart to the /usr directory which is often required to be read-only. So /var/lib would have a similar purpose as /usr/lib but with the ability to write to them.
Configuration Files
Bash Configuration Files
| File | Description |
| --- | --- |
| /etc/profile | A “system wide” initialization file that is executed during login. It provides initial environment variables and initial “PATH” locations. |
| /etc/bashrc | Another “system wide” initialization file, executed each time a Bash shell is opened by a user. Here you can define your default prompt and add alias information. Values in this file can be overridden by the user’s local ~/.bashrc entry. |
| ~/.bash_profile | If this file exists, it is executed automatically after /etc/profile during the login process. Each user can use it to add individual entries. It is only executed once at login and normally then runs the user’s .bashrc file. |
| ~/.bash_login | If .bash_profile does not exist, this file is executed automatically at login. |
| ~/.profile | If neither .bash_profile nor .bash_login exists, this file is executed automatically at login. |
| ~/.bashrc | Contains user-specific configurations. It is read at login and also each time a new Bash shell is started. Ideally, this is where you should place any aliases. |
| ~/.bash_logout | Executed automatically during logout. |
| ~/.inputrc | Used to customize key bindings/keystrokes. |
Most global config files are located in the /etc directory
| File | Description |
| --- | --- |
| /etc/X11/ | xorg specific config files |
| /etc/cups/ | sub-directory containing configuration for the Common UNIX Printing System |
| /etc/xdg/ | global configs for applications following the freedesktop.org specification |
| /etc/ssh/ | used to configure OpenSSH server behavior for the whole system |
| /etc/apparmor.d/ | contains config files for the AppArmor system |
| /etc/udev/ | udev related configuration |
Important Global Config Files
| File | Description |
| --- | --- |
| /etc/resolv.conf | used to define the DNS server(s) to use |
| /etc/bash.bashrc | used to define the commands to execute when a user launches the bash shell |
| /etc/profile | the login shell executes the commands in this script during startup |
| /etc/dhcp/dhclient.conf | stores network related info required by DHCP clients |
| /etc/fstab | decides where to mount all the partitions available to the system |
| /etc/hostname | sets the hostname for the machine |
| /etc/hosts | a file which maps IP addresses to their hostnames |
| /etc/hosts.deny | the remote hosts listed here are denied access to the machine |
| /etc/mime.types | lists MIME types and the filename extensions associated with them |
| /etc/motd | configures the text shown when a user logs in to the host |
| /etc/timezone | sets the local timezone |
| /etc/sudoers | controls the sudo related permissions for users |
| /etc/httpd/conf and /etc/httpd.conf.d | configuration for the Apache web server |
| /etc/default/grub | contains configuration used by update-grub for generating /boot/grub/grub.cfg |
| /boot/grub/grub.cfg | the update-grub command auto-generates this file using the settings defined in /etc/default/grub |
Important User-Specific Config Files
| File | Description |
| --- | --- |
| $HOME/.xinitrc | allows us to set the directives for starting a window manager when using the startx command |
| $HOME/.vimrc | vim configuration |
| $HOME/.bashrc | script executed by bash when the user starts a non-login shell |
| $XDG_CONFIG_HOME/nvim/init.vim | neovim configuration |
| $HOME/.editor | sets the default editor for the user |
| $HOME/.gitconfig | sets the default name and e-mail address to use for git commits |
| $HOME/.profile | the login shell executes the commands in this script during startup |
| $HOME/.ssh/config | ssh configuration for a specific user |
System Settings
System Time
Time
```shell
# show date time
date

# date time format
date '+%Y-%m-%d'
date '+%Y-%m-%d %H:%M:%S'
date '+%Y-%m-%d_%H-%M-%S'

# update time and date from the internet
timedatectl set-ntp true
```
Timezone
```shell
# list timezones
timedatectl list-timezones

# set timezone
timedatectl set-timezone Asia/Shanghai

# show time settings
timedatectl status
```
hostname
The hostname is used to distinguish devices within a local network. It’s the machine’s human-friendly name. In addition, computers can be found by others through the hostname, which enables data exchange within a network, for example. Hostnames are used on the internet as part of the fully qualified domain name.
You can configure a computer’s hostname:
```shell
# setting
$ hostnamectl set-hostname server1.example.com
# verify the setting
$ less /etc/hostname
# query your computer's hostname
$ hostname
```
hosts
The /etc/hosts file contains the Internet Protocol (IP) host names and addresses for the local host and other hosts in the Internet network. This file is used to resolve a name into an address (that is, to translate a host name into its Internet address).
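A minimal /etc/hosts might look like this (the IP address and host names below are made-up examples):

```
127.0.0.1     localhost
192.168.0.10  server1.example.com  server1
```

Each line maps one IP address to one or more names; the kernel’s resolver consults this file before (or alongside) DNS, depending on /etc/nsswitch.conf.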
SCP will always overwrite existing files. Thus, in the case of a clean upload SCP should be slightly faster as it doesn’t have to wait for the server on the target system to compare files.
```shell
# transfer a file
scp local_file remoteuser@remote_ip_address:/remote_dir

# transfer multiple files
scp local_file1 local_file2 remoteuser@remote_ip_address:/remote_dir

# transfer a directory
scp -r local_dir remoteuser@remote_ip_address:/remote_dir

# transfer a file from remote host to local
scp remoteuser@remote_ip_address:/remote_file local_dir

# transfer files between two remote systems
scp remoteuser@remote_ip_address:/remote_file remoteuser@remote_ip_address:/remote_file
```

Use -P SSH_port to specify a custom SSH port.
rsync over ssh
In the case of a synchronization of files that change, like log files or list of source files in a repository, rsync is faster.
Copy a File from a Local Server to a Remote Server with SSH
```shell
rsync -avzhe ssh backup.tar.gz root@192.168.0.141:/backups/

# Show progress while transferring data with rsync
rsync -avzhe ssh --progress backup.tar.gz root@192.168.0.141:/backups/
```
Copy a File from a Remote Server to a Local Server with SSH
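The reverse direction works the same way; swap the source and destination (a sketch reusing the example host and paths from above):

```shell
rsync -avzhe ssh root@192.168.0.141:/backups/backup.tar.gz /local/backups/
```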
In an interactive SFTP session, use get and put:

```shell
# download a file to the local system's home directory
get [path to file]
# specify a target directory
get [path to file] [path to directory]
# change the filename
get [path to file] [new file name]

# upload a file from the local system's home directory to the remote server's current directory
put [path to file]
# specify a target directory
put [path to file] [path to directory]
# change the filename
put [path to file] [new file name]
```
Application Data
logging file path: /var/log/{application_name}
upload file path: /data/{application_name}/upload
application build and running file path: /var/java/{application_name}, /var/html/{application_name}
```
http {
    ...
    server {
        listen 80;
        server_name myserver.com;
        # The default root is /usr/share/nginx/www, /usr/share/nginx/html or /var/www/html
        root /var/www/your_domain/html;

        # Static files
        location / {
            # redefine the root
            root /var/www/your_domain/html;
            try_files $uri $uri/index.html =404;
            # if not found, redirect to the home page $root/index.html
            # try_files $uri $uri/ /index.html;
        }

        # API
        location /api/ {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            # Mapping http(s)://my-domain.com/api/xxx to http://localhost:8080/api/xxx.
            proxy_pass http://localhost:8080/api/;
            # Mapping http(s)://my-domain.com/api/xxx to http://localhost:8080/xxx.
            # proxy_pass http://localhost:8080/;
        }

        # Cache js, css, image, etc.
    }
    ...
}
```
Response static files: {root path}/requestURI
Proxy requests: {proxy_pass path}/requestURI
Passing Request Headers
By default, NGINX redefines two header fields in proxied requests, “Host” and “Connection”, and eliminates the header fields whose values are empty strings. “Host” is set to the $proxy_host variable, and “Connection” is set to close.
HTTPS
```
http {
    # reuse SSL session parameters to avoid SSL handshakes for parallel and subsequent connections.
    # or "ssl_session_cache builtin:1000 shared:SSL:10m;"
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    # enable keepalive connections to send several requests via one connection,
    # and reuse SSL session parameters to avoid SSL handshakes for parallel and subsequent connections.
    keepalive_timeout 70;

    server {
        listen 443 ssl;
        server_name myproject.com;
        ssl_certificate /etc/ssl/projectName/projectName.com.pem;
        ssl_certificate_key /etc/ssl/projectName/projectName.com.key;
        # Additional SSL configuration (if required)
        ssl_protocols TLSv1.3 TLSv1.2 TLSv1.1 TLSv1;
        ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
        ssl_prefer_server_ciphers on;
        ...
    }
}
```
When you build static assets with versioning/hashing mechanisms, adding a version/hash to the filename (cache-busting filenames) or query string is a good way to manage caching. In such a case, you can add a long max-age value and immutable because the content will never change.
```
http {
    server {
        ...
        location /static {
            root /var/www/your_domain/static;
            # Disable the access log to save disk I/O
            access_log off;
            # or "expires max;"
            expires 7d;
            # for static assets built with versioning/hashing mechanisms
            add_header Cache-Control "public, immutable";
            # revalidate
            # add_header Cache-Control "public, must-revalidate, proxy-revalidate";
        }
        ...
    }
}
```
```
http {
    server {
        ...
        location ~* \.(js|css|png|jpg|jpeg|gif|svg|ico|woff2|woff|ttf|eot|pdf|txt)$ {
            root /var/www/your_domain/static;
            # Disable the access log to avoid hitting the I/O limit
            access_log off;
            # or "expires max;"
            expires 7d;
            # for static assets built with versioning/hashing mechanisms
            add_header Cache-Control "public, immutable";
            # revalidate
            # add_header Cache-Control "public, must-revalidate, proxy-revalidate";
        }
        ...
    }
}
```
After setting cache, the following headers will be present in the response headers:
Warning: Cache-Control "public, immutable" for JS and CSS may cause program exceptions after updating the program.
Compression
Enable gzip
```
http {
    ...
    server {
        ...
        # gzip can be set in `http {}` or `server {}`
        # --with-http_gzip_static_module
        gzip on;
        # By default, NGINX compresses responses only with MIME type text/html.
        # To compress responses with other MIME types, include the gzip_types directive and list the additional types.
        gzip_types text/css application/json text/xml application/javascript;
        # To specify the minimum length of the response to compress, use the gzip_min_length directive. The default is 20 bytes.
        gzip_min_length 200;
        # Sets the number and size of buffers used to compress a response.
        gzip_buffers 32 4k;
        # Sets a gzip compression level of a response. Acceptable values are in the range from 1 to 9.
        gzip_comp_level 6;
        gzip_vary on;
        ...
    }
}
```
After enabling gzip, the following header will be present in the response headers:
content-encoding: gzip
Enable HTTP/2
The ngx_http_v2_module module (1.9.5) provides support for HTTP/2. This module is not built by default, it should be enabled with the --with-http_v2_module configuration parameter.
To enable HTTP/2, you must first enable SSL/TLS on your website. HTTP/2 requires the use of SSL/TLS encryption, which provides a secure connection between the web server and the client’s browser.
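A minimal sketch of enabling HTTP/2 on a TLS-enabled server block, reusing the certificate paths from the earlier example (on NGINX 1.25.1+ the standalone `http2 on;` directive replaces the listen parameter):

```
server {
    listen 443 ssl http2;
    server_name myproject.com;
    ssl_certificate /etc/ssl/projectName/projectName.com.pem;
    ssl_certificate_key /etc/ssl/projectName/projectName.com.key;
}
```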
Keepalive connections reduce the overhead of repeatedly establishing new connections for multiple requests from the same client (e.g., a web browser loading multiple assets from a single page). This saves time and resources associated with TCP handshake and SSL/TLS negotiation.
TCP Optimizations
```
http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
}
```
sendfile → efficient file transfer.
tcp_nopush → optimizes packet sending.
tcp_nodelay → reduces latency for small packets.
When sendfile on; and tcp_nopush on; are used together in Nginx for serving static files, Nginx will initially buffer data to send full TCP packets.
However, for the very last packet(s) of a file, which may not be full, Nginx will dynamically disable tcp_nopush (effectively removing TCP_CORK) and enable tcp_nodelay to ensure that these remaining partial packets are sent immediately without delay, thus completing the file transfer quickly.
ssl_session_cache and ssl_session_timeout: Reuse SSL sessions → fewer handshakes.
ssl_protocols: TLS 1.3 is faster than TLS 1.2.
Worker Processes and Connections
```
# main (top-level) context, outside the http block
worker_processes auto;
```
worker_processes: This directive determines the number of Nginx worker processes that will handle incoming requests. A common practice is to set this to auto, which automatically sets the number of worker processes to match the number of CPU cores on your server. This ensures that each core is utilized efficiently.
Logging
Disabling Access Logs
While access logs are valuable for monitoring and debugging, they can consume significant CPU and disk resources on high-traffic sites. If you don’t need detailed logging for every request, you can either buffer the logs or disable them entirely to reduce overhead.
To disable the access log for static files:

```
http {
    server {
        ...
        location ~* \.(js|css|png|jpg|jpeg|gif|svg|ico|woff2|woff|ttf|eot|pdf|txt)$ {
            # Disable the access log to save disk I/O
            access_log off;
        }
        ...
    }
}
```
Or disable all access logs:

```
http {
    # Disable the access log to save disk I/O
    access_log off;
}
```
Buffering Logs (Optional)
Instead of writing to the log file for every single request, you can configure Nginx to buffer log data and write it in larger chunks. This reduces the number of disk I/O operations and can improve performance.
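A sketch of buffered access logging; the log path, buffer size, and flush interval below are example values to tune for your workload:

```
http {
    # Write log entries to a 32 KB in-memory buffer, flushing to disk
    # when the buffer fills or every 5 seconds, whichever comes first.
    access_log /var/log/nginx/access.log combined buffer=32k flush=5s;
}
```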
Benefits of Log Buffering:
Reduced I/O Operations: Fewer, larger writes to disk instead of many small writes, improving performance.
Lower CPU Consumption: Less overhead associated with managing individual log entries.
Improved Disk Lifespan: Reduced wear and tear on storage devices.
Considerations:
Data Latency: Buffered log entries are not immediately written to disk, which can introduce a slight delay in log availability for real-time analysis.
Memory Usage: Buffering consumes a small amount of memory per worker process.
Out-of-Order Entries: In rare cases, if multiple worker processes are writing to the same log file with buffering enabled, log entries might appear slightly out of order if not flushed simultaneously. However, most log analysis systems can handle this by sorting based on timestamps.
```
server {
    # API
    location /api/ {
        ...
        # Attach CORS headers only if it's a valid origin ($cors should not be empty)
        if ($cors != "") {
            # use $cors for specified sites, or $http_origin for any site.
            proxy_hide_header 'Access-Control-Allow-Origin';
            add_header 'Access-Control-Allow-Origin' '$cors' always;
            proxy_hide_header 'Access-Control-Allow-Credentials';
            add_header 'Access-Control-Allow-Credentials' true always;
            proxy_hide_header 'Access-Control-Allow-Methods';
            add_header 'Access-Control-Allow-Methods' 'POST, GET, DELETE, PUT, PATCH' always;
            proxy_hide_header 'Access-Control-Allow-Headers';
            add_header 'Access-Control-Allow-Headers' 'Accept,Authorization,Cache-Control,Content-Type,DNT,If-Modified-Since,Keep-Alive,Origin,User-Agent,X-Requested-With' always;
        }

        # Check if it's a preflight request and "cache" it for 20 days
        if ($request_method = 'OPTIONS') {
            add_header 'Access-Control-Allow-Origin' '$cors' always;
            add_header 'Access-Control-Allow-Credentials' true always;
            add_header 'Access-Control-Allow-Headers' 'Accept,Authorization,Cache-Control,Content-Type,DNT,If-Modified-Since,Keep-Alive,Origin,User-Agent,X-Requested-With' always;
            add_header 'Access-Control-Max-Age' 1728000;
            add_header 'Content-Type' 'text/plain charset=UTF-8';
            add_header 'Content-Length' 0;
            return 204;
        }
        ...
    }
}
```
```
# Check if it's a preflight request and "cache" it for 20 days
if ($request_method = 'OPTIONS') {
    add_header 'Access-Control-Allow-Origin' '$http_origin' always;
    add_header 'Access-Control-Allow-Credentials' true always;
    add_header 'Access-Control-Allow-Headers' 'Accept,Authorization,Cache-Control,Content-Type,DNT,If-Modified-Since,Keep-Alive,Origin,User-Agent,X-Requested-With' always;
    add_header 'Access-Control-Max-Age' 1728000;
    add_header 'Content-Type' 'text/plain charset=UTF-8';
    add_header 'Content-Length' 0;
    return 204;
}
```
Load Balancing
```
http {
    ...
    upstream backend-server {
        server xxx.xxx.xxx.xxx:8080 max_fails=1 fail_timeout=300s;
        server xxx.xxx.xxx.xxx:8080 max_fails=1 fail_timeout=300s;
        ...
    }
}
```
Add the following config to the Nginx configuration file. You can verify that the configuration has been applied by changing the return status code (e.g. 403 Forbidden, 406 Not Acceptable, 423 Locked) of the test location and visiting the test URL http://yourDomain/testConfig.
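A sketch of such a test location; the path and status code are arbitrary choices:

```
server {
    location /testConfig {
        return 403;
    }
}
```

After a reload, a changed status code at http://yourDomain/testConfig confirms the new configuration is live.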
$proxy_host: name and port of a proxied server as specified in the proxy_pass directive;
$proxy_add_x_forwarded_for: the “X-Forwarded-For” client request header field with the $remote_addr variable appended to it, separated by a comma. If the “X-Forwarded-For” field is not present in the client request header, the $proxy_add_x_forwarded_for variable is equal to the $remote_addr variable.
$host: In this order of precedence: host name from the request line, or host name from the “Host” request header field, or the server name matching a request.
```shell
# Just remove the nginx binary file. Or completely remove nginx with `sudo apt-get purge nginx` or `yum remove package`
cd /usr/local/nginx/sbin
mv nginx nginx.bak

# configure
tar -zxvf nginx-{latest-stable-version}.tar.gz
cd nginx-{latest-stable-version}
# Configuring Nginx
./configure ...

# Build and install nginx
make
sudo make install
```
```java
@Bean
public RestTemplate restTemplate() {
    SimpleClientHttpRequestFactory factory = new SimpleClientHttpRequestFactory();
    // Time to establish a connection to the server from the client-side. Set to 20s.
    factory.setConnectTimeout(20000);
    // Time to finish reading data from the socket. Set to 300s.
    factory.setReadTimeout(300000);
    return new RestTemplate(factory);
}
```
JavaScript HTTP Client
axios
Default Timeout
The default timeout is 0 (no timeout).
Settings
```javascript
const instance = axios.create({
  baseURL: 'https://some-domain.com/api/',
  // `timeout` specifies the number of milliseconds before the request times out.
  // If the request takes longer than `timeout`, the request will be aborted.
  timeout: 60000,
  ...
});
```
E-commerce: Websites that facilitate online buying and selling of goods and services, such as Amazon or eBay.
Shopping mall
Social Networking: Websites that connect people and allow them to interact and share information, such as Facebook or LinkedIn.
IM
Forum/BBS
News and Media: Websites that provide news articles, videos, and other multimedia content, such as CNN or BBC.
Blogs and Personal Websites: Websites where individuals or organizations publish articles and personal opinions, such as WordPress or Blogger.
Educational: Websites that provide information, resources, and learning materials for educational purposes, such as Khan Academy or Coursera.
Entertainment: Websites that offer various forms of entertainment, such as games, videos, music, or movies, such as Netflix or YouTube.
Government and Nonprofit: Websites belonging to government institutions or nonprofit organizations, providing information, services, and resources, such as whitehouse.gov or Red Cross.
Business and Corporate: Websites representing businesses and corporations, providing information about products, services, and company details, such as Apple or Coca-Cola.
Sports: Websites dedicated to sports news, scores, analysis, and related information, such as ESPN or NBA.
Travel and Tourism: Websites that provide information and services related to travel planning, accommodations, and tourist attractions, such as TripAdvisor or Booking.com.
Mobile Software
Desktop Software
Instant message. E.g. Telegram.
Email client. E.g. Mozilla Thunderbird.
Web browser. E.g. Google Chrome.
Office software. E.g. Microsoft Office, Typora, XMind.
Note-taking software. E.g. Notion, Evernote.
PDF reader. E.g. SumatraPDF.
File processing. E.g. 7-Zip
Media player. E.g. VLC.
Media processing. E.g. FFmpeg, HandBrake, GIMP.
Flashcard app. E.g. anki.
Stream Media. E.g. Spotify.
HTTP proxy. E.g. V2rayN.
Libraries, Tools, Services
Libraries
General-purpose libraries for programming language. E.g. Apache Commons Lang.
File processing. E.g. Apache POI.
Data parser. E.g. org.json.
Chart, Report, Graph.
Logging.
Testing.
HTTP Client.
Developer Tools
Editor
IDE
Service Client.
Services
Web servers. E.g. Nginx, Apache Tomcat.
Databases. E.g. MySQL.
Cache. E.g. Redis.
Search engines. E.g. Elasticsearch.
Software delivery / containers. E.g. Docker.
Other services. E.g. Gotenberg, Aliyun services (media, ai).
Apache PDFBox is a Java tool for working with PDF documents. In this post, I will introduce how to use Apache PDFBox to handle PDF files. The code examples in this post are based on pdfbox v2.0.29.
```java
String inputFilePath = "your/pdf/filepath";
// Load PDF document
PDDocument document = PDDocument.load(new File(inputFilePath));
// Create PDFTextStripper instance
PDFTextStripper pdfStripper = new PDFTextStripper();
// Extract text from PDF
String text = pdfStripper.getText(document);
// Print extracted text
System.out.println(text);
// Close the document
document.close();
```
Extract page by page
```java
String inputFilePath = "your/pdf/filepath";
// Load the PDF document
PDDocument document = PDDocument.load(new File(inputFilePath));
// Create an instance of PDFTextStripper
PDFTextStripper stripper = new PDFTextStripper();
// Iterate through each page and extract the text
for (int pageNumber = 1; pageNumber <= document.getNumberOfPages(); pageNumber++) {
    stripper.setStartPage(pageNumber);
    stripper.setEndPage(pageNumber);

    String text = stripper.getText(document);
    System.out.println("Page " + pageNumber + ":");
    System.out.println(text);
}
// Close the PDF document
document.close();
```
Split and Merge
Split
```java
private static void splitPdf(String inputFilePath, String outputDir) throws IOException {
    File file = new File(inputFilePath);
    // Load the PDF document
    PDDocument document = PDDocument.load(file);
    // Create a PDF splitter object
    Splitter splitter = new Splitter();
    // Split the document
    List<PDDocument> splitDocuments = splitter.split(document);
    // Get an iterator for the split documents
    Iterator<PDDocument> iterator = splitDocuments.iterator();
    // Iterate through the split documents and save them
    int i = 1;
    while (iterator.hasNext()) {
        PDDocument splitDocument = iterator.next();
        String outputFilePath = new StringBuilder().append(outputDir)
                .append(File.separator)
                .append(file.getName().replaceAll("[.](pdf|PDF)", ""))
                .append("_split_")
                .append(i)
                .append(".pdf")
                .toString();
        splitDocument.save(outputFilePath);
        splitDocument.close();
        i++;
    }
    // Close the source document
    document.close();
    System.out.println("PDF split successfully!");
}
```
Merge PDF files
private static void mergePdfFiles(List<String> inputFilePaths, String outputFilePath) throws IOException {
    PDFMergerUtility merger = new PDFMergerUtility();
    // Add as many files as you need
    for (String inputFilePath : inputFilePaths) {
        merger.addSource(new File(inputFilePath));
    }
    merger.setDestinationFileName(outputFilePath);
    merger.mergeDocuments();
    System.out.println("PDF files merged successfully!");
}
Insert and remove pages
Insert pages
public static void insertPage(String sourceFile, String targetFile, int pageIndex) throws IOException {
    // Load the existing PDF document
    PDDocument sourceDoc = PDDocument.load(new File(sourceFile));
    int sourcePageCount = sourceDoc.getNumberOfPages();
    // Validate the requested page index
    if (pageIndex < 0 || pageIndex > sourcePageCount) {
        throw new IllegalArgumentException("Invalid page index");
    }
    // Create a new blank page
    PDPage newPage = new PDPage();
    // Insert the new page at the requested index
    if (pageIndex == sourcePageCount) {
        sourceDoc.getPages().add(newPage);
    } else {
        sourceDoc.getPages().insertBefore(newPage, sourceDoc.getPages().get(pageIndex));
    }
    // Save the modified PDF document to a target file
    sourceDoc.save(targetFile);
    // Close the document
    sourceDoc.close();
}
Remove pages
private static void removePage(String inputFilePath, String outputFilePath, int pageIndex) throws IOException {
    PDDocument sourceDoc = PDDocument.load(new File(inputFilePath));
    int sourcePageCount = sourceDoc.getNumberOfPages();
    // Validate the requested page index
    if (pageIndex < 0 || pageIndex >= sourcePageCount) {
        throw new IllegalArgumentException("Invalid page index");
    }
    sourceDoc.getPages().remove(pageIndex);
    sourceDoc.save(outputFilePath);
    sourceDoc.close();
}
private static void removePage2(String inputFilePath, String outputFilePath, int pageIndex) throws IOException {
    PDDocument sourceDoc = PDDocument.load(new File(inputFilePath));
    int sourcePageCount = sourceDoc.getNumberOfPages();
    // Validate the requested page index
    if (pageIndex < 0 || pageIndex >= sourcePageCount) {
        throw new IllegalArgumentException("Invalid page index");
    }
    // Split the document into single-page documents
    Splitter splitter = new Splitter();
    List<PDDocument> pages = splitter.split(sourceDoc);
    // Drop the page to remove, then reassemble the rest
    pages.remove(pageIndex);
    PDDocument outputDocument = new PDDocument();
    for (PDDocument page : pages) {
        outputDocument.addPage(page.getPage(0));
    }
    outputDocument.save(outputFilePath);
    sourceDoc.close();
    outputDocument.close();
}
Add password

AccessPermission ap = new AccessPermission();
// Disable printing
ap.setCanPrint(false);
// Disable copying
ap.setCanExtractContent(false);
// Disable other permissions if needed...

// The owner password opens the file with all permissions;
// the user password opens it with restricted permissions.
StandardProtectionPolicy spp = new StandardProtectionPolicy(password, password, ap);
// Define the length of the encryption key.
// Possible values are 40, 128 or 256.
int keyLength = 256;
spp.setEncryptionKeyLength(keyLength);
Change password

AccessPermission ap = new AccessPermission();
// Disable printing
ap.setCanPrint(false);
// Disable copying
ap.setCanExtractContent(false);
// Disable other permissions if needed...

// The owner password opens the file with all permissions;
// the user password opens it with restricted permissions.
StandardProtectionPolicy spp = new StandardProtectionPolicy(newPassword, newPassword, ap);
// Define the length of the encryption key.
// Possible values are 40, 128 or 256.
int keyLength = 256;
spp.setEncryptionKeyLength(keyLength);

// Apply protection
doc.protect(spp);
doc.save(outputFilePath);
doc.close();
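The snippets above omit the surrounding method. A minimal sketch of a complete add-password method might look like the following; the method name and parameters are illustrative, not from the original, and PDFBox must be on the classpath:

```java
import java.io.File;
import java.io.IOException;

import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.pdmodel.encryption.AccessPermission;
import org.apache.pdfbox.pdmodel.encryption.StandardProtectionPolicy;

public class PdfProtector {
    public static void addPdfPassword(String inputFilePath, String outputFilePath,
                                      String password) throws IOException {
        PDDocument doc = PDDocument.load(new File(inputFilePath));
        // Restrict what the user password allows
        AccessPermission ap = new AccessPermission();
        ap.setCanPrint(false);
        ap.setCanExtractContent(false);
        // Use the same value for the owner and user passwords here
        StandardProtectionPolicy spp = new StandardProtectionPolicy(password, password, ap);
        spp.setEncryptionKeyLength(256);
        // Apply protection and save
        doc.protect(spp);
        doc.save(outputFilePath);
        doc.close();
    }
}
```

Changing a password follows the same shape: load the document with the old password, then protect and save it with a policy built from the new one.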
Remove password
public static void removePdfPassword(String inputFilePath, String outputFilePath, String password) throws IOException {
    // Load the document with its password
    PDDocument doc = PDDocument.load(new File(inputFilePath), password);
    // Remove all security from the document
    doc.setAllSecurityToBeRemoved(true);
    // Save the unprotected PDF document
    doc.save(outputFilePath);
    // Close the document
    doc.close();
}
Convert to Image
PDF to Image
public static void pdfToImage(String pdfFilePath, String imageFileDir) throws IOException {
    File file = new File(pdfFilePath);
    PDDocument document = PDDocument.load(file);
    // Create a PDFRenderer object to render each page as an image
    PDFRenderer pdfRenderer = new PDFRenderer(document);
    // Iterate over all the pages and convert each page to an image
    for (int pageIndex = 0; pageIndex < document.getNumberOfPages(); pageIndex++) {
        // Render the page as an image
        // 100 DPI: general-quality
        // 300 DPI: high-quality
        // 600 DPI: pristine-quality
        BufferedImage image = pdfRenderer.renderImageWithDPI(pageIndex, 300);
        // Save the image to a file
        String imageFilePath = new StringBuilder()
                .append(imageFileDir)
                .append(File.separator)
                .append(file.getName().replaceAll("[.](pdf|PDF)", ""))
                .append("_")
                .append(pageIndex + 1)
                .append(".png")
                .toString();
        ImageIO.write(image, "PNG", new File(imageFilePath));
    }
    // Close the document
    document.close();
}
Image to PDF
private static void imageToPdf(String imagePath, String pdfPath) throws IOException {
    try (PDDocument doc = new PDDocument()) {
        PDPage page = new PDPage();
        doc.addPage(page);
        // createFromFile is the easiest way with an image file;
        // if you already have the image in a BufferedImage,
        // call LosslessFactory.createFromImage() instead
        PDImageXObject pdImage = PDImageXObject.createFromFile(imagePath, doc);
        // Draw the image at (x=0, y=0), scaled down to fit the PDF width
        try (PDPageContentStream contents = new PDPageContentStream(doc, page)) {
            int scaledWidth = 600;
            if (pdImage.getWidth() < 600) {
                scaledWidth = pdImage.getWidth();
            }
            contents.drawImage(pdImage, 0, 0, scaledWidth,
                    pdImage.getHeight() * scaledWidth / pdImage.getWidth());
        }
        doc.save(pdfPath);
    }
}
Create PDFs
String outputFilePath = "output/pdf/filepath";
PDDocument document = new PDDocument();
PDPage page = new PDPage(PDRectangle.A4);
document.addPage(page);
// Create a content stream to draw on the page
PDPageContentStream contentStream = new PDPageContentStream(document, page);
contentStream.setFont(PDType1Font.HELVETICA, 12);
// Insert text
contentStream.beginText();
contentStream.newLineAtOffset(100, 700);
contentStream.showText("Hello, World!");
contentStream.endText();
// Load the image
String imageFilePath = "C:\\Users\\Taogen\\Pictures\\icon.jpg";
PDImageXObject image = PDImageXObject.createFromFile(imageFilePath, document);
// Set the scale and position of the image on the page
float scale = 0.5f; // adjust the scale as needed
float x = 100; // x-coordinate of the image
float y = 500; // y-coordinate of the image
// Draw the image on the page
contentStream.drawImage(image, x, y, image.getWidth() * scale, image.getHeight() * scale);
contentStream.close();
document.save(outputFilePath);
document.close();