Before writing the code, you can write the statistical SQL first, because the core of a statistical API is its SQL. As the saying goes, "first, solve the problem, then write the code".
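For example, a daily-count statistic might start from a sketch like the following (the table and column names are assumptions for illustration, not from a real schema):

-- Count records per day within a time range (hypothetical table/columns)
SELECT DATE(create_time) AS stat_date, COUNT(*) AS total
FROM some_business_table
WHERE create_time BETWEEN '2024-01-01' AND '2024-01-31'
  AND type = 1
GROUP BY DATE(create_time)
ORDER BY stat_date;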
Define Parameter and Response VOs
VO (value object) is typically used for data transfer between business layers and only contains data.
Parameter VOs
@Data
public class SomeStatParam {
    private Date beginTime;
    private Date endTime;
    private Integer type;
    private Integer userId;
    ...
}
Before integrating third-party APIs in your web application, it's better to test the APIs with an API tool like Postman to ensure they work properly in your local environment. It also helps you get familiar with the APIs.
Add API Configurations to Your Project
There are some common configurations of third-party APIs you need to add to your project. For example, authorization information (appId, appSecret), API URL prefix, and so on.
Add Configurations
Add the configurations to your Spring Boot configuration file application.yaml:
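A minimal sketch of such a configuration (the property names below, such as third-party.some-api.app-id, are illustrative assumptions, not a fixed convention):

# application.yaml (hypothetical property names)
third-party:
  some-api:
    app-id: your_app_id
    app-secret: your_app_secret
    url-prefix: https://api.example.com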
public interface SocialLoginResultHandler {
    Vo handleFunc1Result(JSONObject result);
    Vo handleFunc2Result(JSONObject result);
}
@Component
public class Auth0SocialLoginResultHandler implements SocialLoginResultHandler {
    @Override
    public Vo handleFunc1Result(JSONObject result) {...}

    @Override
    public Vo handleFunc2Result(JSONObject result) {...}
}
Use Your Interface
@Autowired
@Qualifier("auth0SocialLoginService")
private SocialLoginService socialLoginService;

// or
@Autowired
private SocialLoginService auth0SocialLoginService;
// The SocialLoginService field name must be the same as the implementing class name, with the first character lowercased.
This post will cover how to create a static website using VitePress.
Before using VitePress you must install Node.js v18 or higher.
Initialize VitePress Project
Add vitepress dependency
$ mkdir my-site
$ cd my-site
# add vitepress to devDependencies
$ npm add -D vitepress
# or
$ yarn add -D vitepress
VitePress is used in the development process as a build tool. It converts your Markdown files to HTML files. You don't need VitePress at runtime.
Scaffold a basic project
$ npx vitepress init
npx can run an installed package directly, so you don't need to add any npm script to your package.json. Alternatively, you can add a script to package.json and run the same package command with npm run your_script.
You need to set some basic configuration for your website.
Set the VitePress config directory; ./ means the project root directory.
Set your site title.
Set your site description.
For the other options, just use the default settings.
┌  Welcome to VitePress!
│
◇  Where should VitePress initialize the config?
│  ./
│
◇  Site title:
│  My Awesome Project
│
◇  Site description:
│  A VitePress Site
│
◇  Theme:
│  Default Theme
│
◇  Use TypeScript for config and theme files?
│  Yes
│
◇  Add VitePress npm scripts to package.json?
│  Yes
│
└  Done! Now run npm run docs:dev and start writing.
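If you answered Yes to adding npm scripts, your package.json should now contain scripts similar to the following (the exact script names may differ depending on your answers):

{
  "scripts": {
    "docs:dev": "vitepress dev",
    "docs:build": "vitepress build",
    "docs:preview": "vitepress preview"
  }
}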
Running the project
Start a local server
npm run docs:dev
# or
yarn docs:dev
# or
npx vitepress dev
Visit http://localhost:5173 to access the website.
Git Configurations
Initialize the git repository
$ cd my-site
$ git init .
Configure .gitignore
Add the .vitepress/cache directory to .gitignore
.vitepress/cache
The resulting .gitignore:
.idea
# macOS
.DS_Store

# vitepress
.vitepress/cache

# Dependency directories
node_modules/

# build output
dist
VitePress Configurations
Site Config
The site configuration file is .vitepress/config.mts
import { defineConfig } from 'vitepress'

export default defineConfig({
  title: '{my_title}',
  description: '{my_description}',
  srcDir: 'src',
  srcExclude: [
    'someDir/**',
    'someFile',
  ],
  // Whether to get the last updated timestamp for each page using Git.
  lastUpdated: true,
  head: [
    ['link', {rel: 'shortcut icon', type: "image/jpeg", href: '/logo.jpeg'}],
    // These two are what you want to use by default
    ['link', {rel: 'apple-touch-icon', type: "image/jpeg", href: '/logo.jpeg'}],
    ['link', {rel: 'apple-touch-icon', type: "image/jpeg", sizes: "72x72", href: '/logo.jpeg'}],
    ['link', {rel: 'apple-touch-icon', type: "image/jpeg", sizes: "114x114", href: '/logo.jpeg'}],
    ['link', {rel: 'apple-touch-icon', type: "image/jpeg", sizes: "144x144", href: '/logo.jpeg'}],
    ['link', {rel: 'apple-touch-icon-precomposed', type: "image/jpeg", href: '/logo.jpeg'}],
    // This one works for anything below iOS 4.2
    ['link', {rel: 'apple-touch-icon-precomposed apple-touch-icon', type: "image/jpeg", href: '/logo.jpeg'}],
  ],
  themeConfig: {
    //...
  }
})
$ mkdir src/ && mv index.md src/
Files
src/public/logo.jpeg
srcDir
Move the home page index.md to src/index.md
$ mkdir src/
$ mv index.md src/
export default defineConfig({
  srcDir: 'src',
})
srcExclude
Optional config, used if you need to exclude directories or files from the build. For example:
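A minimal sketch, reusing the placeholder paths from the full config above:

import { defineConfig } from 'vitepress'

export default defineConfig({
  // these paths are placeholders; anything matched here is excluded from the build
  srcExclude: [
    'someDir/**',
    'someFile',
  ],
})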
SSH, or secure shell, is an encrypted protocol used to manage and communicate with servers. You can connect to your server via SSH. There are a few different ways to log in to an SSH server. Public key authentication is one of the SSH authentication methods. It allows you to access a server via SSH without a password.
Creating SSH keys
List the SSH key algorithms supported on your client and server:
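With OpenSSH, for example, you can query the supported key types like this:

# List key types supported by the local OpenSSH installation
$ ssh -Q key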
We recommend using the ed25519 algorithm to generate your SSH key. Ed25519 was introduced in OpenSSH version 6.5. It uses elliptic curve cryptography, which offers better security and faster performance than DSA, ECDSA, or RSA. RSA is even considered unsafe if it's generated with a key smaller than 2048 bits.
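To generate an ed25519 key pair, run something like the following (the comment is optional and the email address is a placeholder):

$ ssh-keygen -t ed25519 -C "your_email@example.com"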
Enter file in which to save the key (/Users/taogen/.ssh/id_ed25519):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in {filename}
Your public key has been saved in {filename}.pub
You can specify your SSH key's filename. If you don't specify one, by default the private key filename is id_{algorithm} and the public key filename is id_{algorithm}.pub.
For security reasons, it’s best to set a passphrase for your SSH keys.
Copying the SSH public key to your server
Copying Your Public Key Using ssh-copy-id
The simplest way to copy your public key to an existing server is to use a utility called ssh-copy-id.
ssh-copy-id -i public_key_filepath username@remote_host
# or use a specific SSH port
ssh-copy-id -i public_key_filepath -p ssh_port username@remote_host
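If ssh-copy-id is not available, you can achieve the same result with a pipeline like the one below (a sketch assuming your public key is ~/.ssh/id_ed25519.pub); the two commands it runs on the remote host are explained next:

# Append the local public key to the remote authorized_keys file
cat ~/.ssh/id_ed25519.pub | ssh username@remote_host "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"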
mkdir -p ~/.ssh: creates the ~/.ssh directory on the remote host if it doesn't exist.
cat >> ~/.ssh/authorized_keys: appends the standard output of the previous command in the pipeline to the file ~/.ssh/authorized_keys on the remote host.
Configuring SSH
If there are multiple SSH keys on your local system, you need to configure which destination server uses which SSH key. For example, there is an SSH key for GitHub and another SSH key for a remote server.
Create the SSH configuration file ~/.ssh/config if it doesn't exist:
vim ~/.ssh/config
Add a config like the following content:
# GitHub
Host github.com
    User git
    Port 22
    Hostname github.com
    IdentityFile "~/.ssh/{your_private_key}"
    TCPKeepAlive yes
    IdentitiesOnly yes
# Remote server
Host {remote_server_ip_address}
    User {username_for_ssh}
    Port {remote_server_ssh_port}
    IdentityFile "~/.ssh/{your_private_key}"
    TCPKeepAlive yes
    IdentitiesOnly yes
SSH login with the SSH private key
If you have copied your SSH public key to the server, SSH login will automatically use your private key. Otherwise, you will need to enter the password of the remote server's user to log in.
$ ssh username@remote_host
# or use a specific port
$ ssh -p ssh_port username@remote_host
Disabling password authentication on your server
Using password-based authentication exposes your server to brute-force attacks. You can disable password authentication by updating the configuration file /etc/ssh/sshd_config.
Before disabling password authentication, make sure that you either have SSH key-based authentication configured for the root account on this server, or preferably, that you have SSH key-based authentication configured for an account on this server with sudo access.
sudo vim /etc/ssh/sshd_config
Uncomment the following line by removing the # at the beginning of the line, and make sure its value is no:
PasswordAuthentication no
Save and close the file when you are finished. To actually implement the changes we just made, you must restart the service.
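On most systemd-based distributions, the SSH service can be restarted like this (the service is usually named sshd on CentOS/RHEL and ssh on Debian/Ubuntu):

$ sudo systemctl restart sshd
# or, on Debian/Ubuntu
$ sudo systemctl restart ssh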
You can try to restart your system networking to check whether the problem of being unable to ping domain names is resolved. See the section “Restart Networking” of this post.
Configuring Default Route Gateway
You need to check your route table and verify that the destination host 0.0.0.0 is routed to the default gateway IP (e.g. 192.168.0.1). If not, you need to update the gateway IP.
Get Default Gateway IP
$ ip r | grep default
default via 192.168.0.1 dev eth0 proto dhcp metric 100
Some computers might have multiple default gateways. The gateway with the lowest metric is searched first and used as the default gateway.
My server’s default gateway IP is 192.168.0.1.
Check the Route Table
Print the route table:
$ sudo route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.0.1     0.0.0.0         UG    0      0        0 eth0
...
Destination: The destination network or destination host.
Gateway: The gateway address or ’*’ if none set.
Genmask: The netmask for the destination net: 255.255.255.255 for a host destination and 0.0.0.0 for the default route.
What Is The Meaning of 0.0.0.0 In Routing Table?
Each network host has a default route for each network card, which creates a 0.0.0.0 route for that card. The address 0.0.0.0 generally means "any address". If a packet's destination doesn't match an individual address in the table, it matches the 0.0.0.0 entry. In other words, the default gateway is always the one pointed to by 0.0.0.0.
Update the 0.0.0.0 Route to the Default Gateway
Update the destination host 0.0.0.0 route to the default gateway IP, e.g. 192.168.0.1.
ip command
You can temporarily update the 0.0.0.0 route gateway with the ip command.
Add the default route:
ip route add default via 192.168.0.1
You can also delete the default route:
ip route delete default
route command
You can temporarily update the 0.0.0.0 route gateway with the route command.
Add the default route:
route add default gw 192.168.0.1
You can also delete the default route:
route del default gw 192.168.0.1
Update configuration file
You can permanently update the 0.0.0.0 route gateway in a system configuration file.
CentOS/RHEL
vim /etc/sysconfig/network
Add the following content to the file /etc/sysconfig/network
NETWORKING=yes
GATEWAY=192.168.0.1
Debian/Ubuntu
vim /etc/network/interfaces
Find the network interface section and add the following option:
...
gateway 192.168.0.1
...
Restart Networking
After updating the gateway configuration file, you need to restart networking.
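The exact command depends on the distribution and the network manager in use; for example, on systemd-based systems:

# Debian/Ubuntu (ifupdown)
$ sudo systemctl restart networking
# CentOS/RHEL with NetworkManager
$ sudo systemctl restart NetworkManager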
Kylin Linux Advanced Server V10 (Tercel) is a new-generation, independently developed server operating system aimed at enterprise-critical business. It is built to meet the reliability, security, performance, scalability, and real-time requirements placed on host systems in the era of virtualization, cloud computing, big data, and the industrial internet, and is developed according to the CMMI Level 5 standard, providing built-in security, cloud-native support, deep optimization for domestic platforms, high performance, and easy management. Kylin is built from a single source tree and supports six domestic CPU platforms (Phytium, Kunpeng, Loongson, Sunway, Hygon, and Zhaoxin); all components are built from the same source code.
Viewing Operating System Information
# View the Linux distribution
$ cat /etc/os-release
NAME="Kylin Linux Advanced Server" VERSION="V10 (Tercel)" ID="kylin" VERSION_ID="V10" PRETTY_NAME="Kylin Linux Advanced Server V10 (Tercel)" ANSI_COLOR="0;31"
# View the CPU architecture
$ lscpu
Architecture:        aarch64
CPU op-mode(s):      64-bit
Model name:          Kunpeng-920
...
Details
Architecture:                    aarch64
CPU op-mode(s):                  64-bit
Byte Order:                      Little Endian
CPU(s):                          4
On-line CPU(s) list:             0-3
Thread(s) per core:              1
Core(s) per socket:              4
Socket(s):                       1
NUMA node(s):                    1
Vendor ID:                       HiSilicon
Model:                           0
Model name:                      Kunpeng-920
Stepping:                        0x1
BogoMIPS:                        200.00
NUMA node0 CPU(s):               0-3
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1:        Mitigation; __user pointer sanitization
Vulnerability Spectre v2:        Not affected
Vulnerability Tsx async abort:   Not affected
Flags:                           fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma dcpop asimddp asimdfhm ssbs
# Create an archive file
tar -czvf archive.tar.gz dir_name
tar -cjvf archive.tar.bz2 dir_name
tar -cvf archive.tar dir_name

# List files of the archive file
tar -tf archive.tar.gz

# Extract files
tar -xvf archive.tar.gz
tar -xvf archive.tar.gz -C /target/filepath
Get information about a file
Real path
# Print the real path of a file
realpath path/to/file
Architecture of software
# Architecture of software
cat `which {your_software}` | file -
# file type
file path/to/file
Files and Directories
Directories
/bin: binary or executable programs.
/boot: contains all the boot-related files and folders such as conf, grub, etc.
/dev: device files such as /dev/sda1, /dev/sda2, etc.
/lib: kernel modules and shared libraries.
/etc: system configuration files.
/home: home directories. It is the default current directory after login.
/media: removable media devices are mounted here.
/mnt: temporary mount point.
/opt: optional or third-party software.
/proc: a virtual, pseudo file system that contains info about running processes, identified by process ID (PID).
/root: the root user's home directory.
/run: stores volatile runtime data.
/sbin: binary executable programs for the administrator.
/sys: a virtual file system for modern Linux distributions to store and allow modification of the devices connected to the system.
/tmp: temporary space, typically cleared on reboot.
/usr: user-related programs.
/var: variable data files, such as log files.
/lib: It is a place for the essential standard libraries. Think of libraries required for your system to run. If something in /bin or /sbin needs a library that library is likely in /lib.
/usr
/usr/bin: Executable binary files. E.g. java, mvn, git, apt, kill.
/usr/local: To keep self-compiled or third-party programs.
/usr/sbin: This directory contains programs for administering a system, meant to be run by ‘root’. Like ‘/sbin’, it’s not part of a user’s $PATH. Examples of included binaries here are chroot, useradd, in.tftpd and pppconfig.
/usr/lib: the /usr directory in general is as it sounds, a user based directory. Here you will find things used by the users on the system. So if you install an application that needs libraries they might go to /usr/lib. If a binary in /usr/bin or /usr/sbin needs a library it will likely be in /usr/lib.
/var: variable data files, such as log files.
/var/lib: the /var directory is the writable counterpart to the /usr directory which is often required to be read-only. So /var/lib would have a similar purpose as /usr/lib but with the ability to write to them.
Configuration Files
Bash Configuration Files
/etc/profile: a "system wide" initialization file that is executed during login. It provides initial environment variables and initial PATH locations.
/etc/bashrc: also a "system wide" initialization file, executed each time a user opens a Bash shell. Define your default prompt and alias information here. Values in this file can be overridden by a user's local ~/.bashrc entry.
~/.bash_profile: if this file exists, it is executed automatically after /etc/profile during the login process. Each user can use it to add individual entries. It is only executed once at login and normally then runs the user's .bashrc file.
~/.bash_login: if .bash_profile does not exist, this file is executed automatically at login.
~/.profile: if neither .bash_profile nor .bash_login exists, this file is executed automatically at login.
~/.bashrc: contains user-specific configurations. It is read at login and also each time a new Bash shell is started. Ideally, this is where you should place any aliases.
~/.bash_logout: executed automatically during logout.
~/.inputrc: used to customize key bindings/keystrokes.
Most global config files are located in the /etc directory
/etc/X11/: Xorg-specific config files.
/etc/cups/: sub-directory containing configuration for the Common UNIX Printing System.
/etc/xdg/: global configs for applications following the freedesktop.org specification.
/etc/ssh/: used to configure OpenSSH server behavior for the whole system.
/etc/apparmor.d/: contains config files for the AppArmor system.
/etc/udev/: udev-related configuration.
Important Global Config Files
/etc/resolv.conf: used to define the DNS server(s) to use.
/etc/bash.bashrc: used to define the commands to execute when a user launches the bash shell.
/etc/profile: the login shell executes the commands in the .profile script during startup.
/etc/dhcp/dhclient.conf: stores network-related info required by DHCP clients.
/etc/fstab: decides where to mount all the partitions available to the system.
/etc/hostname: sets the hostname for the machine.
/etc/hosts: a file which maps IP addresses to their hostnames.
/etc/hosts.deny: the remote hosts listed here are denied access to the machine.
/etc/mime.types: lists MIME types and the filename extensions associated with them.
/etc/motd: configures the text shown when a user logs in to the host.
/etc/timezone: sets the local timezone.
/etc/sudoers: controls the sudo-related permissions for users.
/etc/httpd/conf and /etc/httpd.conf.d: configuration for the Apache web server.
/etc/default/grub: contains configuration used by update-grub for generating /boot/grub/grub.cfg.
/boot/grub/grub.cfg: the update-grub command auto-generates this file using the settings defined in /etc/default/grub.
Important User-Specific Config Files
$HOME/.xinitrc: allows us to set the directives for starting a window manager when using the startx command.
$HOME/.vimrc: Vim configuration.
$HOME/.bashrc: script executed by bash when the user starts a non-login shell.
$XDG_CONFIG_HOME/nvim/init.vim: Neovim configuration.
$HOME/.editor: sets the default editor for the user.
$HOME/.gitconfig: sets the default name and e-mail address to use for git commits.
$HOME/.profile: the login shell executes the commands in the .profile script during startup.
$HOME/.ssh/config: ssh configuration for a specific user.
System Settings
System Time
Time
# show date time
date
# date time format
date '+%Y-%m-%d'
date '+%Y-%m-%d %H:%M:%S'
date '+%Y-%m-%d_%H-%M-%S'
# update time and date from the internet
timedatectl set-ntp true
Timezone
# list timezones
timedatectl list-timezones

# set timezone
timedatectl set-timezone Asia/Shanghai

# show time settings
timedatectl status
hostname
The hostname is used to distinguish devices within a local network. It’s the machine’s human-friendly name. In addition, computers can be found by others through the hostname, which enables data exchange within a network, for example. Hostnames are used on the internet as part of the fully qualified domain name.
You can configure a computer's hostname:
# setting
$ hostnamectl set-hostname server1.example.com
# verify the setting
$ less /etc/hostname
# query your computer's hostname
$ hostname
hosts
The /etc/hosts file contains the Internet Protocol (IP) host names and addresses for the local host and other hosts in the Internet network. This file is used to resolve a name into an address (that is, to translate a host name into its Internet address).
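A typical /etc/hosts file looks like the following (the second entry is an illustrative mapping, not a required one):

127.0.0.1       localhost
192.168.0.20    app-server.example.com app-server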
SCP will always overwrite existing files. Thus, in the case of a clean upload, SCP should be slightly faster, as it doesn't have to wait for the server on the target system to compare files.
# transfer a file
scp local_file remoteuser@remote_ip_address:/remote_dir
# transfer multiple files
scp local_file1 local_file2 remoteuser@remote_ip_address:/remote_dir

# transfer a directory
scp -r local_dir remoteuser@remote_ip_address:/remote_dir

# transfer a file from a remote host to local
scp remoteuser@remote_ip_address:/remote_file local_dir

# transfer files between two remote systems
scp remoteuser@remote_ip_address:/remote_file remoteuser@remote_ip_address:/remote_file
Use -P SSH_port to specify a non-default SSH port (note that scp uses an uppercase -P, unlike ssh).
rsync over ssh
In the case of a synchronization of files that change, like log files or list of source files in a repository, rsync is faster.
Copy a File from a Local Server to a Remote Server with SSH
rsync -avzhe ssh backup.tar.gz root@192.168.0.141:/backups/

# Show progress while transferring data with rsync
rsync -avzhe ssh --progress backup.tar.gz root@192.168.0.141:/backups/
Copy a File from a Remote Server to a Local Server with SSH
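In the same rsync-over-ssh style as the upload example above, a download would look like the following sketch (host and paths are placeholders):

rsync -avzhe ssh root@192.168.0.141:/backups/backup.tar.gz /local/backups/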
# download a file to the local system's home directory
get [path to file]
# download into a specific directory
get [path to file] [path to directory]
# download with a new filename
get [path to file] [new file name]
# upload a file from the local system's home directory to the remote server's current directory
put [path to file]
# upload into a specific directory
put [path to file] [path to directory]
# upload with a new filename
put [path to file] [new file name]
Application Data
logging file path: /var/log/{application_name}
upload file path: /data/{application_name}/upload
application build and running file path: /var/java/{application_name}, /var/html/{application_name}
http {
    ...
    server {
        listen 80;
        server_name myserver.com;
        # The default root is /usr/share/nginx/www, /usr/share/nginx/html or /var/www/html
        root /var/www/your_domain/html;

        # Static files
        location / {
            # redefine the root
            root /var/www/your_domain/html;
            try_files $uri $uri/index.html =404;
            # if not found, redirect to the home page $root/index.html
            # try_files $uri $uri/ /index.html;
        }

        # API
        location /api/ {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            # The URL of the back-end API has the /api prefix:
            # https://example.com/api/xxx is proxied to https://api.example.com/api/xxx.
            proxy_pass http://localhost:8080/api/;
            # Alternatively, the back-end API does not have the /api prefix; the /api prefix is only used to map to the API interface:
            # https://example.com/api/xxx is proxied to https://api.example.com/xxx.
            # proxy_pass http://localhost:8080/;
        }

        # Cache js, css, images, etc.
    }
    ...
}
Response static files: {root path}/requestURI
Proxy requests: {proxy_pass path}/requestURI
Passing Request Headers
By default, NGINX redefines two header fields in proxied requests, “Host” and “Connection”, and eliminates the header fields whose values are empty strings. “Host” is set to the $proxy_host variable, and “Connection” is set to close.
HTTPS
http {
    # reuse SSL session parameters to avoid SSL handshakes for parallel and subsequent connections.
    # or "ssl_session_cache builtin:1000 shared:SSL:10m;"
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;

    server {
        listen 443 ssl;
        server_name myproject.com;
        ssl_certificate /etc/ssl/projectName/projectName.com.pem;
        ssl_certificate_key /etc/ssl/projectName/projectName.com.key;
        # Additional SSL configuration (if required)
        # enable keepalive connections to send several requests via one connection, and reuse SSL session parameters to avoid SSL handshakes for parallel and subsequent connections.
        keepalive_timeout 70;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
        ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
        ssl_prefer_server_ciphers on;
The ngx_http_v2_module module (1.9.5) provides support for HTTP/2. This module is not built by default, it should be enabled with the --with-http_v2_module configuration parameter.
To enable HTTP/2, you must first enable SSL/TLS on your website. HTTP/2 requires the use of SSL/TLS encryption, which provides a secure connection between the web server and the client’s browser.
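A minimal sketch of enabling HTTP/2 on an SSL-enabled server block (assuming Nginx was built with --with-http_v2_module; certificate paths reuse the placeholders from the HTTPS example above):

server {
    # add "http2" to the listen directive of the SSL server block
    listen 443 ssl http2;
    server_name myproject.com;
    ssl_certificate /etc/ssl/projectName/projectName.com.pem;
    ssl_certificate_key /etc/ssl/projectName/projectName.com.key;
}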
When you build static assets with versioning/hashing mechanisms, adding a version/hash to the filename or query string is a good way to manage caching. In such a case, you can add a long max-age value and immutable because the content will never change.
http {
    server {
        ...
        location /static {
            root /var/www/your_domain/static;
            # Disable the access log to avoid hitting the I/O limit
            access_log off;
            # or "expires max;"
            expires 7d;
            # build static assets with versioning/hashing mechanisms
            add_header Cache-Control "public, immutable";
            # revalidate
            # add_header Cache-Control "public, must-revalidate, proxy-revalidate";
        }
        ...
    }
}
http {
    server {
        ...
        location ~* \.(js|css|png|jpg|jpeg|gif|svg|ico)$ {
            root /var/www/your_domain/static;
            # Disable the access log to avoid hitting the I/O limit
            access_log off;
            # or "expires max;"
            expires 7d;
            # build static assets with versioning/hashing mechanisms
            add_header Cache-Control "public, immutable";
            # revalidate
            # add_header Cache-Control "public, must-revalidate, proxy-revalidate";
        }
        ...
    }
}
After setting the cache directives, the following headers will be present in the response headers:
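For example (the exact values are illustrative; expires 7d produces an Expires date and a max-age of 604800 seconds):

Expires: Thu, 18 Jul 2024 08:00:00 GMT
Cache-Control: max-age=604800
Cache-Control: public, immutable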
Warning: Cache-Control "public, immutable" for JS and CSS files may cause application errors after you update the program, because browsers keep using the cached old files.
Compression
Enable gzip
http {
    ...
    server {
        ...
        # gzip can be set in `http {}` or `server {}`
        # --with-http_gzip_static_module
        gzip on;
        # By default, NGINX compresses responses only with MIME type text/html. To compress responses with other MIME types, include the gzip_types directive and list the additional types.
        gzip_types text/css text/xml application/javascript;
        # To specify the minimum length of the response to compress, use the gzip_min_length directive. The default is 20 bytes.
        gzip_min_length 200;
        # Sets the number and size of buffers used to compress a response.
        gzip_buffers 32 4k;
        # Sets the gzip compression level of a response. Acceptable values are in the range from 1 to 9.
        gzip_comp_level 6;
        gzip_vary on;
        ...
    }
}
After enabling gzip, the following headers will be present in the response headers:
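Typically (assuming the client sent Accept-Encoding: gzip, and gzip_vary is on):

Content-Encoding: gzip
Vary: Accept-Encoding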
server {
    # API
    location /api/ {
        ...
        # Attach CORS headers only if it's a valid origin ($cors should not be empty)
        if ($cors != "") {
            # use $cors for specified sites, or $http_origin for any site.
            proxy_hide_header 'Access-Control-Allow-Origin';
            add_header 'Access-Control-Allow-Origin' '$cors' always;
            proxy_hide_header 'Access-Control-Allow-Credentials';
            add_header 'Access-Control-Allow-Credentials' true always;
            proxy_hide_header 'Access-Control-Allow-Methods';
            add_header 'Access-Control-Allow-Methods' 'POST, GET, DELETE, PUT, PATCH' always;
            proxy_hide_header 'Access-Control-Allow-Headers';
            add_header 'Access-Control-Allow-Headers' 'Accept,Authorization,Cache-Control,Content-Type,DNT,If-Modified-Since,Keep-Alive,Origin,User-Agent,X-Requested-With' always;
        }

        # Check if it's a preflight request and "cache" it for 20 days
        if ($request_method = 'OPTIONS') {
            add_header 'Access-Control-Allow-Origin' '$cors' always;
            add_header 'Access-Control-Allow-Credentials' true always;
            add_header 'Access-Control-Allow-Headers' 'Accept,Authorization,Cache-Control,Content-Type,DNT,If-Modified-Since,Keep-Alive,Origin,User-Agent,X-Requested-With' always;
            add_header 'Access-Control-Max-Age' 1728000;
            add_header 'Content-Type' 'text/plain charset=UTF-8';
            add_header 'Content-Length' 0;
            return 204;
        }
        ...
    }
}
To allow any origin instead of a whitelist, use $http_origin in place of $cors:

# Check if it's a preflight request and "cache" it for 20 days
if ($request_method = 'OPTIONS') {
    add_header 'Access-Control-Allow-Origin' '$http_origin' always;
    add_header 'Access-Control-Allow-Credentials' true always;
    add_header 'Access-Control-Allow-Headers' 'Accept,Authorization,Cache-Control,Content-Type,DNT,If-Modified-Since,Keep-Alive,Origin,User-Agent,X-Requested-With' always;
    add_header 'Access-Control-Max-Age' 1728000;
    add_header 'Content-Type' 'text/plain charset=UTF-8';
    add_header 'Content-Length' 0;
    return 204;
}
Load Balancing
http {
    ...
    upstream backend-server {
        server xxx.xxx.xxx.xxx:8080 max_fails=1 fail_timeout=300s;
        server xxx.xxx.xxx.xxx:8080 max_fails=1 fail_timeout=300s;
        ...
    }
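To actually balance traffic across the group, reference it from a proxy_pass directive in a server block within the same http {} context; a minimal sketch (the location path is illustrative):

    server {
        location / {
            # requests are distributed across the servers defined in the upstream group above
            proxy_pass http://backend-server;
        }
    }
}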
Add the following config to the Nginx configuration file. You can verify whether the configuration has been applied by changing the return status code (e.g. 403 Forbidden, 406 Not Acceptable, 423 Locked) of the test location and visiting the test URL http://yourDomain/testConfig.
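A minimal sketch of such a test location (the path and status code are illustrative):

location /testConfig {
    # change this status code and reload Nginx to confirm the new config is in effect
    return 403;
}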
$proxy_host: name and port of a proxied server as specified in the proxy_pass directive;
$proxy_add_x_forwarded_for: the “X-Forwarded-For” client request header field with the $remote_addr variable appended to it, separated by a comma. If the “X-Forwarded-For” field is not present in the client request header, the $proxy_add_x_forwarded_for variable is equal to the $remote_addr variable.
$host: In this order of precedence: host name from the request line, or host name from the “Host” request header field, or the server name matching a request.
# Just remove the nginx binary file. Or completely remove nginx with `sudo apt-get purge nginx` or `yum remove package`
cd /usr/local/nginx/sbin
mv nginx nginx.bak
# configure
tar -zxvf nginx-{latest-stable-version}.tar.gz
cd nginx-{latest-stable-version}
# Configuring Nginx
./configure ...
# Build and install nginx
$ make
$ sudo make install