You can try to restart your system networking to check whether the problem of being unable to ping domain names is resolved. See the section “Restart Networking” of this post.
Configuring Default Route Gateway
You need to check your route table to see whether the destination 0.0.0.0 is routed to the default gateway IP (e.g. 192.168.0.1). If it is not, you need to update the gateway IP.
Get Default Gateway IP
$ ip r | grep default
default via 192.168.0.1 dev eth0 proto dhcp metric 100
Some computers might have multiple default gateways. The gateway with the lowest metric is searched first and used as the default gateway.
My server’s default gateway IP is 192.168.0.1.
Check the Route Table
Print the route table:
$ sudo route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.0.1     0.0.0.0         UG    0      0        0 eth0
...
Destination: The destination network or destination host.
Gateway: The gateway address or ’*’ if none set.
Genmask: The netmask for the destination net: 255.255.255.255 for a host destination and 0.0.0.0 for the default route.
What Is The Meaning of 0.0.0.0 In Routing Table?
Each network host has a default route for each network card, which creates a 0.0.0.0 route for that card. The address 0.0.0.0 generally means “any address”. If a packet's destination doesn't match an individual address in the table, it matches the 0.0.0.0 entry and is sent to that entry's gateway. In other words, the default gateway is always the gateway pointed to by the 0.0.0.0 route.
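You can ask the kernel which table entry a given destination would match with `ip route get` (part of iproute2); the external address below is just an example:

```shell
# loopback traffic matches the local/loopback route
ip route get 127.0.0.1
# an external address falls through to the 0.0.0.0/0 (default) route
ip route get 8.8.8.8 || echo "no default route configured"
```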
Update the 0.0.0.0 Route to the Default Gateway
Update the destination host 0.0.0.0 route to the default gateway IP, e.g. 192.168.0.1.
ip command
You can temporarily update the 0.0.0.0 route gateway with the ip command.
Add the default route:
ip route add default via 192.168.0.1
You can also delete the default route:
ip route delete default
route command
You can temporarily update the 0.0.0.0 route gateway with the route command.
Add the default route:
route add default gw 192.168.0.1
You can also delete the default route:
route del default gw 192.168.0.1
Update configuration file
You can permanently update the 0.0.0.0 route gateway in a system configuration file.
CentOS/RHEL
vim /etc/sysconfig/network
Add the following content to the file /etc/sysconfig/network
NETWORKING=yes
GATEWAY=192.168.0.1
Debian/Ubuntu
vim /etc/network/interfaces
Find network interface and add the following option
... gateway 192.168.0.1 ...
Restart Networking
After updating the gateway configuration file, you need to restart networking.
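The exact restart command depends on the distribution and which network manager is in use; these common variants assume systemd and root privileges, so pick the one that matches your system:

```shell
# Debian/Ubuntu with ifupdown
sudo systemctl restart networking
# CentOS/RHEL network scripts
sudo systemctl restart network
# systems managed by NetworkManager
sudo systemctl restart NetworkManager
```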
Kylin Linux Advanced Server V10 (Tercel) is a new-generation, independently developed server operating system aimed at enterprise-critical business. Developed in accordance with the CMMI level-5 standard to meet the reliability, security, performance, scalability, and real-time requirements that virtualization, cloud computing, big data, and the industrial internet place on host systems, it provides built-in security, cloud-native support, deep optimization for domestic platforms, high performance, and easy management. Kylin is built from a single source tree and supports six Chinese CPU platforms (Phytium, Kunpeng, Loongson, Sunway, Hygon, and Zhaoxin); all components are built from the same source code.
View operating system information
# Check the Linux distribution
$ cat /etc/os-release
NAME="Kylin Linux Advanced Server"
VERSION="V10 (Tercel)"
ID="kylin"
VERSION_ID="V10"
PRETTY_NAME="Kylin Linux Advanced Server V10 (Tercel)"
ANSI_COLOR="0;31"
# Check the CPU architecture
$ lscpu
Architecture: aarch64
CPU op-mode(s): 64-bit
Model name: Kunpeng-920
...
Details
Architecture: aarch64
CPU op-mode(s): 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
NUMA node(s): 1
Vendor ID: HiSilicon
Model: 0
Model name: Kunpeng-920
Stepping: 0x1
BogoMIPS: 200.00
NUMA node0 CPU(s): 0-3
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma dcpop asimddp asimdfhm ssbs
# Create an archive file
tar -czvf archive.tar.gz dir
tar -cjvf archive.tar.bz2 dir
tar -cvf archive.tar dir

# List files of the archive file
tar -tf archive.tar.gz

# Extract files
tar -xvf archive.tar.gz
tar -xvf archive.tar.gz -C /target/filepath
Files and Directories
Directories
/bin: binary or executable programs.
/boot: boot-related files and folders, such as conf, grub, etc.
/dev: device files, such as /dev/sda1, /dev/sda2, etc.
/lib: kernel modules and shared libraries.
/etc: system configuration files.
/home: home directories; the default working directory when a user logs in.
/media: mount points for removable media devices.
/mnt: temporary mount point.
/opt: optional or third-party software.
/proc: a virtual (pseudo) file system containing info about running processes, each under a specific process ID (PID).
/root: the root user's home directory.
/run: volatile runtime data (run-time variables).
/sbin: binary executable programs for the administrator.
/sys: a virtual file system on modern Linux distributions that exposes, and allows modification of, the devices connected to the system.
/tmp: temporary space, typically cleared on reboot.
/usr: user-related programs.
/var: variable data files, e.g. log files.
/usr/bin: Executable binary files. E.g. java, mvn, git, apt, kill.
/usr/local: To keep self-compiled or third-party programs.
/usr/sbin: This directory contains programs for administering a system, meant to be run by ‘root’. Like ‘/sbin’, it’s not part of a user’s $PATH. Examples of included binaries here are chroot, useradd, in.tftpd and pppconfig.
/etc/profile
This is a “system wide” initialization file that is executed during login. It provides initial environment variables and initial PATH locations.
/etc/bashrc
This again is a “System Wide” initialization file. This file is executed each time a Bash shell is opened by a user. Here you can define your default prompt and add alias information. Values in this file can be overridden by their local ~/.bashrc entry.
~/.bash_profile
If this file exists, it is executed automatically after /etc/profile during the login process. Each user can use it to add individual entries. The file is only executed once, at login, and normally then runs the user's .bashrc file.
~/.bash_login
If the “.bash_profile” does not exist, then this file will be executed automatically at login.
~/.profile
If the “.bash_profile” or “.bash_login” do not exist, then this file is executed automatically at login.
~/.bashrc
This file contains individual specific configurations. This file is read at login and also each time a new Bash shell is started. Ideally, this is where you should place any aliases.
~/.bash_logout
This file is executed automatically during logout.
~/.inputrc
This file is used to customize key bindings/key strokes.
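As an illustration, a user's ~/.bashrc often collects aliases and per-user environment settings; the entries below are arbitrary examples:

```shell
# Example ~/.bashrc entries (illustrative)
alias ll='ls -alF'      # detailed directory listing
alias gs='git status'   # shortcut for a frequent git command
export EDITOR=vim       # default editor for programs that honor $EDITOR
export HISTSIZE=10000   # keep a longer shell history
```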
Most global config files are located in the /etc directory
/etc/X11/: Xorg-specific config files.
/etc/cups/: sub-directory containing configuration for the Common UNIX Printing System.
/etc/xdg/: global configs for applications following the freedesktop.org specification.
/etc/ssh/: used to configure OpenSSH server behavior for the whole system.
/etc/apparmor.d/: config files for the AppArmor system.
/etc/udev/: udev-related configuration.
Important Global Config Files
/etc/resolv.conf: defines the DNS server(s) to use.
/etc/bash.bashrc: defines the commands to execute when a user launches the bash shell.
/etc/profile: the login shell executes the commands in this script during startup.
/etc/dhcp/dhclient.conf: stores network-related info required by DHCP clients.
/etc/fstab: decides where to mount all the partitions available to the system.
/etc/hostname: sets the hostname for the machine.
/etc/hosts: maps IP addresses to their hostnames.
/etc/hosts.deny: remote hosts listed here are denied access to the machine.
/etc/mime.types: lists MIME types and the filename extensions associated with them.
/etc/motd: configures the text shown when a user logs in to the host.
/etc/timezone: sets the local timezone.
/etc/sudoers: controls the sudo-related permissions for users.
/etc/httpd/conf and /etc/httpd/conf.d: configuration for the Apache web server.
/etc/default/grub: contains configuration used by update-grub for generating /boot/grub/grub.cfg.
/boot/grub/grub.cfg: the update-grub command auto-generates this file using the settings defined in /etc/default/grub.
Important User-Specific Config Files
$HOME/.xinitrc: sets the directives for starting a window manager when using the startx command.
$HOME/.vimrc: vim configuration.
$HOME/.bashrc: script executed by bash when the user starts a non-login shell.
$XDG_CONFIG_HOME/nvim/init.vim: neovim configuration.
$HOME/.editor: sets the default editor for the user.
$HOME/.gitconfig: sets the default name and e-mail address to use for git commits.
$HOME/.profile: the login shell executes the commands in this script during startup.
$HOME/.ssh/config: ssh configuration for a specific user.
System Settings
System Time
Time
# show date time
date

# date time format
date '+%Y-%m-%d'
date '+%Y-%m-%d %H:%M:%S'
date '+%Y-%m-%d_%H-%M-%S'

# update time and date from the internet
timedatectl set-ntp true
Timezone
# list timezones
timedatectl list-timezones

# set timezone
timedatectl set-timezone Asia/Shanghai

# show time settings
timedatectl status
hostname
The hostname is used to distinguish devices within a local network. It’s the machine’s human-friendly name. In addition, computers can be found by others through the hostname, which enables data exchange within a network, for example. Hostnames are used on the internet as part of the fully qualified domain name.
You can configure a computer's hostname:

# setting
$ hostnamectl set-hostname server1.example.com
# verify the setting
$ less /etc/hostname
# query your computer's hostname
$ hostname
hosts
The /etc/hosts file contains the Internet Protocol (IP) host names and addresses for the local host and other hosts in the Internet network. This file is used to resolve a name into an address (that is, to translate a host name into its Internet address).
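A typical /etc/hosts file pairs one IP address with one or more names per line; the 192.168.0.x entries below are illustrative:

```
127.0.0.1      localhost
192.168.0.10   db-server.example.com db-server
192.168.0.11   cache-server.example.com cache-server
```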
SCP will always overwrite existing files. Thus, in the case of a clean upload SCP should be slightly faster as it doesn’t have to wait for the server on the target system to compare files.
# transfer a file
scp local_file remoteuser@remote_ip_address:/remote_dir

# transfer multiple files
scp local_file1 local_file2 remoteuser@remote_ip_address:/remote_dir

# transfer a directory
scp -r local_dir remoteuser@remote_ip_address:/remote_dir

# transfer a file from remote host to local
scp remoteuser@remote_ip_address:/remote_file local_dir

# transfer files between two remote systems
scp remoteuser1@remote_ip_address1:/remote_file remoteuser2@remote_ip_address2:/remote_file
-P SSH_port: specify the SSH port of the remote host, e.g. scp -P 2222 local_file remoteuser@remote_ip_address:/remote_dir
rsync over ssh
In the case of a synchronization of files that change, like log files or list of source files in a repository, rsync is faster.
Copy a File from a Local Server to a Remote Server with SSH
rsync -avzhe ssh backup.tar.gz root@192.168.0.141:/backups/

# Show Progress While Transferring Data with Rsync
rsync -avzhe ssh --progress backup.tar.gz root@192.168.0.141:/backups/
Copy a File from a Remote Server to a Local Server with SSH

With sftp, use the get and put subcommands:

# download a file to the local system's home directory
get [path to file]
# download into a specific directory
get [path to file] [path to directory]
# download with a new filename
get [path to file] [new file name]

# upload a file from the local system's home directory to the remote server's current directory
put [path to file]
# upload into a specific directory
put [path to file] [path to directory]
# upload with a new filename
put [path to file] [new file name]
Application Data
logging file path: /var/log/{application_name}
upload file path: /data/{application_name}/upload
application build and running file path: /var/java/{application_name}, /var/html/{application_name}
http {
    ...
    server {
        listen 80;
        server_name myserver.com;
        # The default root is /usr/share/nginx/www, /usr/share/nginx/html or /var/www/html
        root /var/www/your_domain/html;
        ...
    }
}
By default, NGINX redefines two header fields in proxied requests, “Host” and “Connection”, and eliminates the header fields whose values are empty strings. “Host” is set to the $proxy_host variable, and “Connection” is set to close.
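If the upstream application needs the original request values rather than these defaults, they are commonly redefined with proxy_set_header. A sketch (the upstream address and location path are illustrative):

```nginx
location /api/ {
    proxy_pass http://127.0.0.1:8080;
    # pass the original Host header instead of the default $proxy_host
    proxy_set_header Host $host;
    # preserve the client address for the upstream application
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # use HTTP/1.1 and an empty Connection header to allow upstream keepalive
    proxy_http_version 1.1;
    proxy_set_header Connection "";
}
```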
HTTPS
http {
    # reuse SSL session parameters to avoid SSL handshakes for parallel and subsequent connections.
    # or "ssl_session_cache builtin:1000 shared:SSL:10m;"
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    server {
        listen 443 ssl;
        server_name myproject.com;
        ssl_certificate /etc/ssl/projectName/projectName.com.pem;
        ssl_certificate_key /etc/ssl/projectName/projectName.com.key;
        # Additional SSL configuration (if required)
        # enable keepalive connections to send several requests via one connection
        keepalive_timeout 70;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;
        ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
        ssl_prefer_server_ciphers on;
        ...
    }
}
The ngx_http_v2_module module (1.9.5) provides support for HTTP/2. This module is not built by default, it should be enabled with the --with-http_v2_module configuration parameter.
Add the following config to the Nginx configuration file. You can verify that a configuration change has been applied by changing the return status code (e.g. 403 Forbidden, 406 Not Acceptable, 423 Locked) of the /testConfig location and visiting the test URL http://yourDomain/testConfig.
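For example, a minimal test location for this check might be (the path and status code are arbitrary):

```nginx
location /testConfig {
    # change this code (e.g. to 406 or 423), reload Nginx, and revisit the URL;
    # the status code you see tells you which config is live
    return 403;
}
```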
$proxy_host: name and port of a proxied server as specified in the proxy_pass directive;
$proxy_add_x_forwarded_for: the “X-Forwarded-For” client request header field with the $remote_addr variable appended to it, separated by a comma. If the “X-Forwarded-For” field is not present in the client request header, the $proxy_add_x_forwarded_for variable is equal to the $remote_addr variable.
$host: In this order of precedence: host name from the request line, or host name from the “Host” request header field, or the server name matching a request.
@Bean
public RestTemplate restTemplate() {
    SimpleClientHttpRequestFactory factory = new SimpleClientHttpRequestFactory();
    // Time to establish a connection to the server from the client-side. Set to 20s.
    factory.setConnectTimeout(20000);
    // Time to finish reading data from the socket. Set to 300s.
    factory.setReadTimeout(300000);
    return new RestTemplate(factory);
}
JavaScript HTTP Client
axios
Default Timeout
The default timeout is 0 (no timeout).
Settings
const instance = axios.create({
  baseURL: 'https://some-domain.com/api/',
  // `timeout` specifies the number of milliseconds before the request times out.
  // If the request takes longer than `timeout`, the request will be aborted.
  timeout: 60000,
  ...
});
E-commerce: Websites that facilitate online buying and selling of goods and services, such as Amazon or eBay.
Shopping mall
Social Networking: Websites that connect people and allow them to interact and share information, such as Facebook or LinkedIn.
IM
Forum/BBS
News and Media: Websites that provide news articles, videos, and other multimedia content, such as CNN or BBC.
Blogs and Personal Websites: Websites where individuals or organizations publish articles and personal opinions, such as WordPress or Blogger.
Educational: Websites that provide information, resources, and learning materials for educational purposes, such as Khan Academy or Coursera.
Entertainment: Websites that offer various forms of entertainment, such as games, videos, music, or movies, such as Netflix or YouTube.
Government and Nonprofit: Websites belonging to government institutions or nonprofit organizations, providing information, services, and resources, such as whitehouse.gov or Red Cross.
Business and Corporate: Websites representing businesses and corporations, providing information about products, services, and company details, such as Apple or Coca-Cola.
Sports: Websites dedicated to sports news, scores, analysis, and related information, such as ESPN or NBA.
Travel and Tourism: Websites that provide information and services related to travel planning, accommodations, and tourist attractions, such as TripAdvisor or Booking.com.
Mobile Software
Desktop Software
Instant message. E.g. Telegram.
Email client. E.g. Mozilla Thunderbird.
Web browser. E.g. Google Chrome.
Office software. E.g. Microsoft Office, Typora, XMind.
Note-taking software. E.g. Notion, Evernote.
PDF reader. E.g. SumatraPDF.
File processing. E.g. 7-Zip
Media player. E.g. VLC.
Media processing. E.g. FFmpeg, HandBrake, GIMP.
Flashcard app. E.g. Anki.
Stream Media. E.g. Spotify.
HTTP proxy. E.g. V2rayN.
Libraries, Tools, Services
Libraries
General-purpose libraries for programming language. E.g. Apache Commons Lang.
File processing. E.g. Apache POI.
Data parser. E.g. org.json.
Chart, Report, Graph.
Logging.
Testing.
HTTP Client.
Developer Tools
Editor
IDE
Service Client.
Services
Web servers. E.g. Nginx, Apache Tomcat.
Databases. E.g. MySQL.
Cache. E.g. Redis.
Search engines. E.g. Elasticsearch.
Software delivery / containers. E.g. Docker.
Other services. E.g. Gotenberg, Aliyun services (media, ai).
Apache PDFBox is a Java tool for working with PDF documents. In this post, we’ll introduce how to use Apache PDFBox to handle PDF files. The code examples in this post are based on pdfbox v2.0.29.
String inputFilePath = "your/pdf/filepath";
// Load PDF document
PDDocument document = PDDocument.load(new File(inputFilePath));
// Create PDFTextStripper instance
PDFTextStripper pdfStripper = new PDFTextStripper();
// Extract text from PDF
String text = pdfStripper.getText(document);
// Print extracted text
System.out.println(text);
// Close the document
document.close();
Extract page by page
String inputFilePath = "your/pdf/filepath";
// Load the PDF document
PDDocument document = PDDocument.load(new File(inputFilePath));
// Create an instance of PDFTextStripper
PDFTextStripper stripper = new PDFTextStripper();
// Iterate through each page and extract the text
for (int pageNumber = 1; pageNumber <= document.getNumberOfPages(); pageNumber++) {
    stripper.setStartPage(pageNumber);
    stripper.setEndPage(pageNumber);
    String text = stripper.getText(document);
    System.out.println("Page " + pageNumber + ":");
    System.out.println(text);
}
// Close the PDF document
document.close();
Split and Merge
Split
private static void splitPdf(String inputFilePath, String outputDir) throws IOException {
    File file = new File(inputFilePath);
    // Load the PDF document
    PDDocument document = PDDocument.load(file);
    // Create a PDF splitter object
    Splitter splitter = new Splitter();
    // Split the document
    List<PDDocument> splitDocuments = splitter.split(document);
    // Get an iterator for the split documents
    Iterator<PDDocument> iterator = splitDocuments.iterator();
    // Iterate through the split documents and save them
    int i = 1;
    while (iterator.hasNext()) {
        PDDocument splitDocument = iterator.next();
        String outputFilePath = new StringBuilder().append(outputDir)
                .append(File.separator)
                .append(file.getName().replaceAll("[.](pdf|PDF)", ""))
                .append("_split_")
                .append(i)
                .append(".pdf")
                .toString();
        splitDocument.save(outputFilePath);
        splitDocument.close();
        i++;
    }
    // Close the source document
    document.close();
    System.out.println("PDF split successfully!");
}
Merge PDF files
private static void mergePdfFiles(List<String> inputFilePaths, String outputFilePath) throws IOException {
    PDFMergerUtility merger = new PDFMergerUtility();
    // Add as many files as you need
    for (String inputFilePath : inputFilePaths) {
        merger.addSource(new File(inputFilePath));
    }
    merger.setDestinationFileName(outputFilePath);
    merger.mergeDocuments();
    System.out.println("PDF files merged successfully!");
}
Insert and remove pages
Insert pages
public static void insertPage(String sourceFile, String targetFile, int pageIndex) throws IOException {
    // Load the existing PDF document
    PDDocument sourceDoc = PDDocument.load(new File(sourceFile));
    Integer sourcePageCount = sourceDoc.getNumberOfPages();
    // Validate the requested page index
    if (pageIndex < 0 || pageIndex > sourcePageCount) {
        throw new IllegalArgumentException("Invalid page index");
    }
    // Create a new blank page
    PDPage newPage = new PDPage();
    // Insert the new page at the requested index
    if (sourcePageCount.equals(pageIndex)) {
        sourceDoc.getPages().add(newPage);
    } else {
        sourceDoc.getPages().insertBefore(newPage, sourceDoc.getPages().get(pageIndex));
    }
    // Save the modified PDF document to a target file
    sourceDoc.save(targetFile);
    // Close the document
    sourceDoc.close();
}
Remove pages
private static void removePage(String inputFilePath, String outputFilePath, int pageIndex) throws IOException {
    PDDocument sourceDoc = PDDocument.load(new File(inputFilePath));
    Integer sourcePageCount = sourceDoc.getNumberOfPages();
    // Validate the requested page index
    if (pageIndex < 0 || pageIndex >= sourcePageCount) {
        throw new IllegalArgumentException("Invalid page index");
    }
    sourceDoc.getPages().remove(pageIndex);
    sourceDoc.save(outputFilePath);
    sourceDoc.close();
}
private static void removePage2(String inputFilePath, String outputFilePath, int pageIndex) throws IOException {
    PDDocument sourceDoc = PDDocument.load(new File(inputFilePath));
    Integer sourcePageCount = sourceDoc.getNumberOfPages();
    // Validate the requested page index
    if (pageIndex < 0 || pageIndex >= sourcePageCount) {
        throw new IllegalArgumentException("Invalid page index");
    }
    Splitter splitter = new Splitter();
    List<PDDocument> pages = splitter.split(sourceDoc);
    pages.remove(pageIndex);
    PDDocument outputDocument = new PDDocument();
    for (PDDocument page : pages) {
        outputDocument.addPage(page.getPage(0));
    }
    outputDocument.save(outputFilePath);
    sourceDoc.close();
    outputDocument.close();
}
AccessPermission ap = new AccessPermission();
// disable printing
ap.setCanPrint(false);
// disable copying
ap.setCanExtractContent(false);
// Disable other things if needed...

// Owner password (to open the file with all permissions)
// User password (to open the file but with restricted permissions)
StandardProtectionPolicy spp = new StandardProtectionPolicy(password, password, ap);
// Define the length of the encryption key.
// Possible values are 40, 128 or 256.
int keyLength = 256;
spp.setEncryptionKeyLength(keyLength);
AccessPermission ap = new AccessPermission();
// disable printing
ap.setCanPrint(false);
// disable copying
ap.setCanExtractContent(false);
// Disable other things if needed...

// Owner password (to open the file with all permissions)
// User password (to open the file but with restricted permissions)
StandardProtectionPolicy spp = new StandardProtectionPolicy(newPassword, newPassword, ap);
// Define the length of the encryption key.
// Possible values are 40, 128 or 256.
int keyLength = 256;
spp.setEncryptionKeyLength(keyLength);

// Apply protection
doc.protect(spp);
doc.save(outputFilePath);
doc.close();
}
Remove password
public static void removePdfPassword(String inputFilePath, String outputFilePath, String password) throws IOException {
    PDDocument doc = PDDocument.load(new File(inputFilePath), password);
    // Remove all security from the document
    doc.setAllSecurityToBeRemoved(true);
    // Save the unprotected PDF document
    doc.save(outputFilePath);
    // Close the document
    doc.close();
}
Convert to Image
PDF to Image
public static void pdfToImage(String pdfFilePath, String imageFileDir) throws IOException {
    File file = new File(pdfFilePath);
    PDDocument document = PDDocument.load(file);
    // Create PDFRenderer object to render each page as an image
    PDFRenderer pdfRenderer = new PDFRenderer(document);
    // Iterate over all the pages and convert each page to an image
    for (int pageIndex = 0; pageIndex < document.getNumberOfPages(); pageIndex++) {
        // Render the page as an image
        // 100 DPI: general-quality
        // 300 DPI: high-quality
        // 600 DPI: pristine-quality
        BufferedImage image = pdfRenderer.renderImageWithDPI(pageIndex, 300);
        // Save the image to a file
        String imageFilePath = new StringBuilder()
                .append(imageFileDir)
                .append(File.separator)
                .append(file.getName().replaceAll("[.](pdf|PDF)", ""))
                .append("_")
                .append(pageIndex + 1)
                .append(".png")
                .toString();
        ImageIO.write(image, "PNG", new File(imageFilePath));
    }
    // Close the document
    document.close();
}
Image to PDF
private static void imageToPdf(String imagePath, String pdfPath) throws IOException {
    try (PDDocument doc = new PDDocument()) {
        PDPage page = new PDPage();
        doc.addPage(page);
        // createFromFile is the easiest way with an image file
        // if you already have the image in a BufferedImage,
        // call LosslessFactory.createFromImage() instead
        PDImageXObject pdImage = PDImageXObject.createFromFile(imagePath, doc);
        // draw the image at full size at (x=0, y=0)
        try (PDPageContentStream contents = new PDPageContentStream(doc, page)) {
            // to draw the image at PDF width
            int scaledWidth = 600;
            if (pdImage.getWidth() < 600) {
                scaledWidth = pdImage.getWidth();
            }
            contents.drawImage(pdImage, 0, 0, scaledWidth,
                    pdImage.getHeight() * scaledWidth / pdImage.getWidth());
        }
        doc.save(pdfPath);
    }
}
Create PDFs
String outputFilePath = "output/pdf/filepath";

PDDocument document = new PDDocument();
PDPage page = new PDPage(PDRectangle.A4);
document.addPage(page);
// Create content stream to draw on the page
PDPageContentStream contentStream = new PDPageContentStream(document, page);
contentStream.setFont(PDType1Font.HELVETICA, 12);
// Insert text
contentStream.beginText();
contentStream.newLineAtOffset(100, 700);
contentStream.showText("Hello, World!");
contentStream.endText();
// Load the image
String imageFilePath = "C:\\Users\\Taogen\\Pictures\\icon.jpg";
PDImageXObject image = PDImageXObject.createFromFile(imageFilePath, document);
// Set the scale and position of the image on the page
float scale = 0.5f; // adjust the scale as needed
float x = 100; // x-coordinate of the image
float y = 500; // y-coordinate of the image
// Draw the image on the page
contentStream.drawImage(image, x, y, image.getWidth() * scale, image.getHeight() * scale);
contentStream.close();
document.save(outputFilePath);
document.close();
apt-get is a command line tool for interacting with the Advanced Package Tool (APT) library (a package management system for Linux distributions). It allows you to search for, install, manage, update, and remove software.
Configuration of the APT system repositories is stored in the /etc/apt/sources.list file and the /etc/apt/sources.list.d directory. You can add additional repositories in a separate file in the /etc/apt/sources.list.d directory, for example, redis.list, docker.list.
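For example, a file such as /etc/apt/sources.list.d/docker.list usually contains a single deb line per repository; the URL, key path, suite name (jammy), and component below are illustrative and vary by distribution release:

```
deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu jammy stable
```

After adding or editing a repository file, run apt-get update so APT refreshes its package index.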
dpkg
dpkg is a package manager for Debian-based systems. It can install, remove, and build packages, but unlike other package management systems, it cannot automatically download and install packages – or their dependencies. APT and Aptitude are newer, and layer additional features on top of dpkg.
YUM is the primary package management tool for installing, updating, removing, and managing software packages in Red Hat Enterprise Linux. YUM performs dependency resolution when installing, updating, and removing software packages. YUM can manage packages from installed repositories in the system or from .rpm packages. The main configuration file for YUM is at /etc/yum.conf, and all the repos are at /etc/yum.repos.d.
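Each repo is described by an INI-style .repo file. A hypothetical /etc/yum.repos.d/redis.repo might look like this (the URLs are illustrative):

```
[redis]
name=Redis repository
baseurl=https://packages.redis.io/rpm/rhel8
enabled=1
gpgcheck=1
gpgkey=https://packages.redis.io/gpg
```

After adding a repo file, rebuild the metadata cache with yum makecache (optionally after yum clean all).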
# add repos config to /etc/yum.repos.d
...
# clear repo cache
yum clean all
# create repo cache
yum makecache

# search package
yum search {package_name}

# upgrade package
yum update

# install package
yum install {package_name}

# uninstall package
yum remove {package_name}
RPM (RPM Package Manager)
RPM is a popular package management tool in Red Hat Enterprise Linux-based distros. Using RPM, you can install, uninstall, and query individual software packages. Still, it cannot manage dependency resolution like YUM. RPM does provide you useful output, including a list of required packages. An RPM package consists of an archive of files and metadata. Metadata includes helper scripts, file attributes, and information about packages.
The headless package is the same version as the full one but without support for keyboard, mouse, and display systems. Hence it has fewer dependencies, which makes it more suitable for server applications.
Debian/Ubuntu/Deepin
Install openjdk from Official APT Repositories
Supported Operating Systems
Ubuntu/Debian
Deepin
Installing
# install
sudo apt-get install openjdk-8-jdk
# verify. If the installation was successful, you can see the Java version.
java -version
# Update the environment. Make sure that we are working with the most up-to-date packages
sudo yum update -y
# install
sudo yum install -y python3
# verify
python3 -V
# Add the EPEL repository, and update YUM to confirm your change
sudo yum install epel-release
sudo yum update
# install
sudo yum install nodejs
# verify
node --version
Redis
Linux
Install from Snapcraft
The Snapcraft store provides Redis packages that can be installed on platforms that support snap. Snap is supported and available on most major Linux distributions.
sudo snap install redis
If your Linux does not currently have snap installed, install it using the instructions described in Installing snapd.
Add the EPEL repository, and update YUM to confirm your change:
sudo yum install epel-release
sudo yum update
Install Redis:
sudo yum install redis
Start Redis:
sudo systemctl start redis
Optional: To automatically start Redis on boot:
sudo systemctl enable redis
Verify the Installation
Verify that Redis is running with redis-cli:
redis-cli ping
If Redis is running, it will return:
PONG
Windows
Redis is not officially supported on Windows.
Install from Source
Supported Operating Systems
All Linux distros (distributions)
MacOS
You can compile and install Redis from source on a variety of platforms and operating systems, including Linux and macOS. Redis has no dependencies other than a C compiler and libc.
# Download source files
wget https://download.redis.io/redis-stable.tar.gz

# Compiling
tar -xzvf redis-stable.tar.gz
cd redis-stable
make

# make sure the build is correct
make test
If the compile succeeds, you’ll find several Redis binaries in the src directory, including:
redis-server: the Redis Server itself
redis-cli: the command line interface utility to talk with Redis
Starting and stopping Redis
cd redis-stable
# starting redis server
./src/redis-server &
# starting redis server with config
./src/redis-server redis.conf &
# stopping redis server
ps -ef | grep redis-server | awk '{print $2}' | head -1 | xargs kill -9
# connect to redis
./src/redis-cli
# auth
127.0.0.1:6379> auth YOUR_PASSWORD
update password in redis.conf
# requirepass foobared
to
requirepass YOUR_STRONG_PASSWORD
Manage Redis service using systemd
Create the /etc/systemd/system/redis.service file, and add the following line to the file
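A minimal unit file for a source-built Redis might look like the following sketch; the binary paths and config location are assumptions based on the source install above:

```
[Unit]
Description=Redis In-Memory Data Store
After=network.target

[Service]
ExecStart=/usr/local/bin/redis-server /etc/redis/redis.conf
ExecStop=/usr/local/bin/redis-cli shutdown
Restart=always

[Install]
WantedBy=multi-user.target
```

Then reload systemd and start the service: systemctl daemon-reload && systemctl enable --now redis.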
# Create a mysql User and Group
groupadd mysql
useradd -r -g mysql -s /bin/false mysql

# Obtain and Unpack the Distribution
cd /usr/local
tar zxvf /path/to/mysql-VERSION-OS.tar.gz
# This enables you to refer more easily to it as /usr/local/mysql.
ln -s full-path-to-mysql-VERSION-OS mysql
# add the /usr/local/mysql/bin directory to your PATH variable
cp /etc/profile /etc/profile.bak.$(date '+%Y-%m-%d_%H-%M-%S')
echo 'export PATH=$PATH:/usr/local/mysql/bin' >> /etc/profile
cat /etc/profile
source /etc/profile

# Creating a Safe Directory For Import and Export Operations
cd /usr/local/mysql
mkdir mysql-files
chown mysql:mysql mysql-files
chmod 750 mysql-files

# Initialize the data directory.
bin/mysqld --initialize --user=mysql
# A temporary password is generated for root@localhost: Trbgylojs1!w
bin/mysql_ssl_rsa_setup

# Start mysql server
bin/mysqld_safe --user=mysql &

# Next command is optional
cp support-files/mysql.server /etc/init.d/mysql.server
Note: This procedure assumes that you have root (administrator) access to your system. Alternatively, you can prefix each command using the sudo (Linux) or pfexec (Solaris) command.
Managing MySQL Server with systemd
Create a user for remote access
Enable MySQL server port in the firewall
If the firewall management on Linux uses ufw, you can run the following command to enable MySQL server port.
ufw allow 3306/tcp
Update bind-address in /etc/my.cnf
Change 127.0.0.1 to Local IP like 192.168.1.100
bind-address=192.168.1.100
Create a MySQL user for remote login
mysql> SELECT user,authentication_string,plugin,host FROM mysql.user;
mysql> CREATE USER 'root'@'%' IDENTIFIED BY 'password';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
mysql> FLUSH PRIVILEGES;
mysql> SELECT user,authentication_string,plugin,host FROM mysql.user;
Verify
Connect to the remote MySQL server from your local computer:
# testing the port is open
$ telnet {server_ip} 3306

# test MySQL connection
$ mysql -h {server_ip} -u root -p
Enter password:
Errors
Error: mysql: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory
This error occurs when you run mysql -u root -p.
Solutions
# centos
yum install ncurses-compat-libs
Error: ERROR 1820 (HY000): You must reset your password using ALTER USER statement before executing this statement.
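No solution is shown for this error in the original text. A common way to clear it — an assumption on my part, with 'new_password' as a placeholder — is to set a new root password with ALTER USER right after logging in with the temporary password:

```sql
-- run inside the mysql client after logging in as root
ALTER USER 'root'@'localhost' IDENTIFIED BY 'new_password';
```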
Create a file called kibana.repo in the /etc/yum.repos.d/ directory for RedHat based distributions, or in the /etc/zypp/repos.d/ directory for OpenSuSE based distributions, containing:
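The repo file content is missing here. Based on Elastic's standard RPM repository layout, a kibana.repo for the 8.x series might contain the following — the 8.x major version in the section name and baseurl is an assumption, so substitute the series you intend to install:

```ini
[kibana-8.x]
name=Kibana repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
```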
You can now install Kibana with one of the following commands:
# older Red Hat based distributions
sudo yum install kibana

# Fedora and other newer Red Hat distributions
sudo dnf install kibana

# OpenSUSE based distributions
sudo zypper install kibana
In this post, we’ll cover how to configure CORS in a Spring Boot project. If you want to understand how CORS works, you can check out the article Understanding CORS.
Configuring HTTP Request CORS
Controller CORS Configuration
Use @CrossOrigin annotation
Add a @CrossOrigin annotation to the controller class
// no credentials
@CrossOrigin
@RestController
@RequestMapping("/my")
public class MyController {

    @GetMapping
    public String testGet() {
        return "hello \n" + new Date();
    }
}
Add a @CrossOrigin annotation to the controller method
@RestController
@RequestMapping("/my")
public class MyController {

    // no credentials
    @CrossOrigin
    @GetMapping
    public String testGet() {
        return "hello \n" + new Date();
    }
}
// with credentials
@CrossOrigin(origins = {"http://localhost"}, allowCredentials = "true")
// or
@CrossOrigin(originPatterns = {"http://localhost:[*]"}, allowCredentials = "true")
Properties of CrossOrigin
origins: By default, it's *. You can specify allowed origins like @CrossOrigin(origins = {"http://localhost"}). You can also specify allowed origins by patterns, like @CrossOrigin(originPatterns = {"http://*.taogen.com:[*]"}).
Adding a @CrossOrigin annotation to the controller method or the controller class is equivalent to responding to the preflight request with a successful result, for example:
HTTP/1.1 204 No Content
Connection: keep-alive
Access-Control-Allow-Origin: https://foo.bar.org
Access-Control-Allow-Methods: POST, GET, OPTIONS, DELETE, PUT
Access-Control-Max-Age: 86400
and adding the following headers to the HTTP response:
// with credentials
response.setHeader("Access-Control-Allow-Origin", "{your_host}"); // e.g. http://localhost or request.getHeader("Origin")
response.setHeader("Access-Control-Allow-Credentials", "true");
response.setHeader("Access-Control-Max-Age", "86400");
For ‘DELETE + Preflight’ or ‘PUT + Preflight’ requests, adding the header ‘Access-Control-Allow-Origin: *’ to the HttpServletResponse does not enable CORS. This results in the following error:
Access to XMLHttpRequest at 'http://localhost:8080/my' from origin 'http://localhost' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.
For requests with custom headers, adding the header ‘Access-Control-Allow-Origin: *’ to the HttpServletResponse does not enable CORS. This results in the following error:
Access to XMLHttpRequest at 'http://localhost:8080/my' from origin 'http://localhost' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.
Global CORS configuration
WebMvcConfigurer.addCorsMappings
The WebMvcConfigurer.addCorsMappings has the same function as the @CrossOrigin annotation.
allowedOrigins: By default, all origins are allowed. Its default value is *. You can specify allowed origins like "http://localhost".
allowedOriginPatterns: for example, http://localhost:[*], http://192.168.0.*:[*], https://demo.com
allowedMethods: By default, only the GET, HEAD, and POST methods are allowed. You can enable all common methods by setting its value to "GET", "POST", "HEAD", "PUT", "DELETE", "PATCH".
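Putting the properties above together, a global configuration might look like the following sketch. The CorsConfig class name is my own, and the origin pattern and max age are illustrative values rather than requirements (allowedOriginPatterns requires Spring Framework 5.3 or later):

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.CorsRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

// global CORS configuration applied to all controllers
@Configuration
public class CorsConfig implements WebMvcConfigurer {

    @Override
    public void addCorsMappings(CorsRegistry registry) {
        registry.addMapping("/**")
                // with credentials, use origin patterns instead of "*"
                .allowedOriginPatterns("http://localhost:[*]")
                .allowedMethods("GET", "POST", "HEAD", "PUT", "DELETE", "PATCH")
                .allowCredentials(true)
                .maxAge(86400);
    }
}
```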