Taogen's Blog

Stay hungry stay foolish.

In this post, I will introduce the basic concepts of Docker. You'll learn what Docker is, and why and when to use it.

What is Docker

Docker is an open source project for building, shipping, and running programs. It is a command line program, a background process, and a set of remote services that take a logistical approach to solving common software problems and simplifying your experience installing, running, publishing, and removing software. It accomplishes this by using an operating system technology called containers.

Running the hello-world in a container

Before running the hello-world program in a container, you need to install Docker on your computer. You can download and install Docker Desktop from https://docs.docker.com/install/. If you want to use Docker on a Linux cloud server, you can download and install Docker Engine from https://docs.docker.com/engine/install/.

After Docker is up and running on your computer, you can enter the following command to run the hello-world program provided by Docker in a container:

docker run hello-world
# or
docker run library/hello-world

After executing the above command, you can see the output of the hello-world program:

Hello from Docker!
This message shows that your installation appears to be working correctly.
...

After printing the above text, the program exits and the container is marked as stopped. The running state of a container is directly tied to the state of the single running program inside the container. If the program is running, the container is running. If the program is stopped, the container is stopped. Restarting a container runs the program again.

The second time you want to run a container, you can use docker start <container> to start the existing container directly instead of creating another similar container from its image.
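
For example, you can find the stopped container and start it again (the <container> ID or name below is a placeholder; Docker assigns one automatically, and docker ps -a shows it):

docker ps -a              # list all containers, including stopped ones
docker start <container>  # run the container's program again
docker logs <container>   # print the program's output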

The process of the docker run command execution is roughly: Docker resolves the image name, looks for the image in the local cache, pulls it from a registry (Docker Hub by default) if it is not present, creates a new container from the image, and runs the program specified by the image.

The hello-world part is called the image or repository name. You can think of the image name as the name of the program you want to install or run. An image is a collection of files and metadata. The metadata includes the specific program to execute and other relevant configuration details.

Docker Hub is a public registry provided by Docker Inc. It is a cloud-based repository service where people push their Docker images and pull images published by others.
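
To see the naming in practice, you can pull the same image by its fully qualified name and list it in the local cache (a minimal sketch using standard Docker CLI commands):

docker pull docker.io/library/hello-world   # registry/repository (the tag defaults to "latest")
docker images hello-world                   # show the image stored locally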

Container

Historically, UNIX-style operating systems have used the term jail to describe a modified runtime environment that limits the scope of resources that a jailed program can access. Jail features go back to 1979 and have been evolving ever since. In 2005, with the release of Sun's Solaris 10 and Solaris Containers, container became the preferred term for such a runtime environment. The goal has expanded from limiting filesystem scope to isolating a process from all resources except where explicitly allowed.

Using containers has been a best practice for a long time. But manually building containers can be challenging and easy to do incorrectly. Docker uses existing container engines to provide consistent containers built according to best practices. This puts stronger security within reach for everyone.

Containers vs Virtual machines

Virtual machines

  • Every virtual machine has a whole operating system
  • Take a long time (often minutes) to create.
  • Require significant resource overhead.

Container

  • All Docker containers share an operating system.
  • Docker containers don’t use any hardware virtualization. Programs running inside Docker containers interface directly with the host’s Linux kernel.
  • Many programs can run in isolation without running redundant operating systems or suffering the delay of full boot sequences.

Running software in containers for isolation

Each containerized program runs as a child process of the Docker engine, wrapped in a container, and that process runs in its own memory subspace of the user space. Programs running inside a container can access only their own memory and resources as scoped by the container.

Shipping containers

Docker uses images to ship containers. A Docker image is a bundled snapshot of all the files that should be available to a program running inside a container. You can create as many containers from an image as you want. Images are the shippable units in the Docker ecosystem.

Docker provides a set of infrastructure components that simplify distributing Docker images. These components are registries and indexes. You can use publicly available infrastructure provided by Docker Inc., other hosting companies, or your own registries and indexes. You can store images in a registry and search for them through an index.

Why Use Docker

Docker makes it easy and simple to use the container and isolation features provided by operating systems.

Why use the container and isolation features

  • Dependency conflicts. Different applications may require conflicting versions of the same dependency; containers isolate them from each other.
  • Portability between operating systems. Docker runs natively on Linux and comes with a single virtual machine for macOS and Windows environments. You can run the same software on any system.
  • Protecting your computer. Docker prevents malicious program attacks through operating system resource access control.
  • Application removal. All program execution files and program-produced files are in a container. You can remove all of these files easily.

When to use Docker

Docker can run almost anywhere, but currently it can run only applications that can run on a Linux operating system, or Windows applications on Windows Server. If you want to run a macOS or Windows native application on your desktop, you can't yet do so with Docker.

References

[1] Nickoloff, Jeffrey, and Stephen Kuenzli. Docker in Action. 2nd ed., Manning Publications, 2019.

Pass data between a parent and a child component

Pass data from parent component to its child

Use props

Parent.js

<Child name={childName}></Child>

Child.js

<p>Receive from parent by props: {this.props.name}</p>

Pass data from child component to its parent

Pass a function from the parent component to its child.

The child component calls the function with parameters.

A complete example


Parent.js

import { PureComponent } from 'react';
import { Input } from 'antd';
import Child from './Child'; // the Child component defined in Child.js below

class Parent extends PureComponent {

  constructor(props) {
    super(props);
    this.state = {
      childName: "default child name",
      name: "default parent name"
    };
  }

  handleInputChange = (e) => {
    const name = e.target.value;
    this.setState({
      childName: name
    })
  }

  handleParentValueChange = (value) => {
    this.setState({
      name: value
    })
  }

  render() {
    return (
      <div>
        <h2>I'm the parent page</h2>
        <p>Receive from child: {this.state.name}</p>
        Enter data to pass to child: <Input onChange={this.handleInputChange}></Input>
        <Child name={this.state.childName} onParentNameChange={this.handleParentValueChange}></Child>
      </div>
    )
  }
}

export default Parent;

Child.js

import { PureComponent } from 'react';
import { Input } from 'antd';

class Child extends PureComponent {

  constructor(props) {
    super(props);
    this.state = {};
  }

  handleInputChange = (e) => {
    const name = e.target.value;
    this.props.onParentNameChange(name);
  }

  render() {
    return (
      <div style={{backgroundColor: "lightgray", padding: "10px 10px 10px 10px"}}>
        <h2>I'm the child page</h2>
        <p>Receive from parent by props: {this.props.name}</p>
        Enter data to pass to parent: <Input onChange={this.handleInputChange}></Input>
      </div>
    )
  }
}

export default Child;

Pass functions between a parent and a child component

Pass functions to child components

If you need to have access to the parent component in the handler, you also need to bind the function to the component instance (see below).

There are several ways to make sure functions have access to component attributes like this.props and this.state.

Bind in Constructor (ES2015)

class Foo extends Component {
  constructor(props) {
    super(props);
    this.handleClick = this.handleClick.bind(this);
  }
  handleClick() {
    console.log('Click happened');
  }
  render() {
    return <button onClick={this.handleClick}>Click Me</button>;
  }
}

Note: Make sure you aren’t calling the function when you pass it to the component

render() {
  // Wrong: handleClick is called instead of passed as a reference!
  // The function gets called every time the component renders.
  return <button onClick={this.handleClick()}>Click Me</button>
}

Class Properties (ES2022)

class Foo extends Component {
  handleClick = () => {
    console.log('Click happened');
  };
  render() {
    return <button onClick={this.handleClick}>Click Me</button>;
  }
}

Note: Make sure you aren’t calling the function when you pass it to the component

render() {
  // Wrong: handleClick is called instead of passed as a reference!
  // The function gets called every time the component renders.
  return <button onClick={this.handleClick()}>Click Me</button>
}

Bind in Render

class Foo extends Component {
  handleClick() {
    console.log('Click happened');
  }
  render() {
    return <button onClick={this.handleClick.bind(this)}>Click Me</button>;
  }
}

Pass a parameter to an event handler in the parent component's render method. Arguments passed by the child when it calls the handler are appended after the bound parameter (and are effectively ignored if the handler does not declare them).

<button onClick={this.handleClick.bind(this, id)} />

Note: Using Function.prototype.bind in render creates a new function each time the component renders, which may have performance implications.

Arrow Function in Render

class Foo extends Component {
  handleClick() {
    console.log('Click happened');
  }
  render() {
    return <button onClick={() => this.handleClick()}>Click Me</button>;
  }
}

Pass a parameter to an event handler

<button onClick={() => this.handleClick(id)} />

Note: Using an arrow function in render creates a new function each time the component renders, which may break optimizations based on strict identity comparison.

Call child functions in a parent component


refs

Previously, refs were only supported for Class-based components. With the advent of React Hooks, that’s no longer the case.

Modern React with Hooks (v16.8+)

Hook parent and hook child (Functional Component Solution)

const { forwardRef, useRef, useImperativeHandle } = React;

const Parent = () => {
  // In order to gain access to the child component instance,
  // you need to assign it to a `ref`, so we call `useRef()` to get one
  const childRef = useRef();

  return (
    <div>
      <Child ref={childRef} />
      <button onClick={() => childRef.current.getAlert()}>Click</button>
    </div>
  );
};

// We need to wrap component in `forwardRef` in order to gain
// access to the ref object that is assigned using the `ref` prop.
// This ref is passed as the second parameter to the function component.
const Child = forwardRef((props, ref) => {

  // The component instance will be extended
  // with whatever you return from the callback passed
  // as the second argument
  useImperativeHandle(ref, () => ({
    getAlert() {
      alert("getAlert from Child");
    }
  }));
  return <h1>Hi</h1>;
});

Legacy API using Class Components (>= react@16.4)

Class parent and class child (Class Component Solution)

class Parent extends React.Component {
  constructor(props) {
    super(props)
    this.myRef = React.createRef()
  }

  render() {
    return (<View>
      <Child ref={this.myRef}/>
      <Button title={'call me'}
              onPress={() => this.myRef.current.childMethod()}/>
    </View>)
  }
}

class Child extends React.Component {

  childMethod() {
    console.log('call me')
  }

  render() {
    return (<View><Text> I am a child</Text></View>)
  }
}

Class component and Hook

Class parent and hook child

class Parent extends React.Component {
  constructor(props) {
    super(props)
    this.myRef = React.createRef()
  }

  render() {
    return (<View>
      <Child ref={this.myRef}/>
      <Button title={'call me'}
              onPress={() => this.myRef.current.childMethod()}/>
    </View>)
  }
}

const Child = React.forwardRef((props, ref) => {

  useImperativeHandle(ref, () => ({
    childMethod() {
      childMethod()
    }
  }))

  function childMethod() {
    console.log('call me')
  }

  return (<View><Text> I am a child</Text></View>)
})

Hook parent and class child

function Parent(props) {

  const myRef = useRef()

  return (<View>
    <Child ref={myRef}/>
    <Button title={'call me'}
            onPress={() => myRef.current.childMethod()}/>
  </View>)
}

class Child extends React.Component {

  childMethod() {
    console.log('call me')
  }

  render() {
    return (<View><Text> I am a child</Text></View>)
  }
}

useEffect

Parent

const [refresh, doRefresh] = useState(0);
<Button onClick={() => doRefresh(prev => prev + 1)} />
<Children refresh={refresh} />

Children

useEffect(() => {
  performRefresh(); // children function of interest
}, [props.refresh]);

Others

class Parent extends Component {
  render() {
    return (
      <div>
        <Child setClick={click => this.clickChild = click}/>
        <button onClick={() => this.clickChild()}>Click</button>
      </div>
    );
  }
}

class Child extends Component {
  constructor(props) {
    super(props);
    this.getAlert = this.getAlert.bind(this);
  }
  componentDidMount() {
    this.props.setClick(this.getAlert);
  }
  getAlert() {
    alert('clicked');
  }
  render() {
    return (
      <h1 ref="hello">Hello</h1>
    );
  }
}

Share data between components with Redux

A basic example:

import { createStore } from 'redux'

/**
* This is a reducer - a function that takes a current state value and an
* action object describing "what happened", and returns a new state value.
* A reducer's function signature is: (state, action) => newState
*
* The Redux state should contain only plain JS objects, arrays, and primitives.
* The root state value is usually an object. It's important that you should
* not mutate the state object, but return a new object if the state changes.
*
* You can use any conditional logic you want in a reducer. In this example,
* we use a switch statement, but it's not required.
*/
function counterReducer(state = { value: 0 }, action) {
  switch (action.type) {
    case 'counter/incremented':
      return { value: state.value + 1 }
    case 'counter/decremented':
      return { value: state.value - 1 }
    default:
      return state
  }
}

// Create a Redux store holding the state of your app.
// Its API is { subscribe, dispatch, getState }.
let store = createStore(counterReducer)

// You can use subscribe() to update the UI in response to state changes.
// Normally you'd use a view binding library (e.g. React Redux) rather than subscribe() directly.
// There may be additional use cases where it's helpful to subscribe as well.

store.subscribe(() => console.log(store.getState()))

// The only way to mutate the internal state is to dispatch an action.
// The actions can be serialized, logged or stored and later replayed.
store.dispatch({ type: 'counter/incremented' })
// {value: 1}
store.dispatch({ type: 'counter/incremented' })
// {value: 2}
store.dispatch({ type: 'counter/decremented' })
// {value: 1}

Pass data to redirect component

React Router

Pass data with <Redirect>:

<Redirect to={{
  pathname: '/nav',
  state: { id: '123' }
}}
/>

Pass data with history.push():

history.push({
  pathname: '/about',
  search: '?the=search',
  state: { some: 'state' }
})

Access data:

this.props.location.state.id

UmiJS

Pass data with query string

import { history, router } from 'umi';

history.push('/path?field=value&field2=value2')
// or
history.push({
  pathname: '/path',
  query: {
    field: value
  }
})
// or
router.push('/path?field=value&field2=value2')
// or
router.push({
  pathname: '/path',
  query: {
    field: value
  }
})

Access query string

import { useLocation } from 'umi';

const location = useLocation();
console.log(location.query)

URL query string

Pass data to components using a URL query string: url?field=value&field2=value2

Get query string parameters

const params = new Proxy(new URLSearchParams(window.location.search), {
  get: (searchParams, prop) => searchParams.get(prop),
});
// Get the value of "some_key" in eg "https://example.com/?some_key=some_value"
let value = params.some_key; // "some_value"

References

Design

Creating good architecture design.

Creating good database design.

Creating good API design.

Unit test

Write unit tests or use test-driven development.

Keep tests clean.

Name

Creating good names

Don't use magic literals; extract them into named constants.

Functions

Keep functions small and make each function do one thing.

Write functions as if writing a story.

Code

Avoid repeated code. Extract repeated code into a common function or utility function.

Consider using better data structures and algorithms.

Classes

Keep classes small.

Consider creating a better class hierarchy

Consider using a better design pattern

Error Handling

Consider possible exceptions for each line of code.

In Elasticsearch, match and match_phrase queries do not work for partial searches. We can use wildcard, query_string, or regexp to match a partial string.

For example, search for gt3p in the content field.

wildcard

{
  "query": {
    "wildcard": {
      "content": "*gt3p*"
    }
  }
}

query_string

{
  "query": {
    "query_string": {
      "default_field": "content",
      "query": "*gt3p*"
    }
  }
}

regexp

{
  "query": {
    "regexp": {
      "content": ".*gt3p.*"
    }
  }
}

What does the shit code look like

Bad names for variables and functions. And there are a lot of literals.

Bad data structures and database schema design.

Very long functions that do a lot of things in one function, with chaotic business processing logic.

Algorithm implementations that are ugly and perform poorly.

There are some potential bugs and problems in the code.

What it’s like to read shit code

It's hard to understand. You need to read it carefully, line by line. This is very painful and time-consuming.

By the time I read the later code, I have forgotten the earlier code. It's hard to figure out the entire processing logic.

Code that is hard to read and understand, ugly implementations, and potential bugs drive me crazy.

How should we read the shit code

Reading shit code is known to be very painful, but there are some tips that may relieve your headaches.

  1. Write a description of the logic of the code in your own words. It helps you understand the code more easily.
  2. Do some work to modify the code slightly, such as renaming variables and reordering code. It makes the code easier to read.

Getting Started

Hello World

> print("Hello World")
Hello World

Get input data from console

input_string_var = input("Enter some data: ")

Comment

# Single line comments start with a number symbol.
""" Multiline strings can be written
using three "s, and are often used
as documentation.
"""

Variables and Data Types

Variables

There are no declarations, only assignments. Convention is to use lower_case_with_underscores.

some_var = 5

Data Types

Categories and types:

  • Text Type: str
  • Numeric Types: int, float, complex
  • Sequence Types: list, tuple, range
  • Mapping Type: dict
  • Set Types: set, frozenset
  • Boolean Type: bool
  • Binary Types: bytes, bytearray, memoryview
  • None Type: NoneType

> x = 5
> print(type(x))
<class 'int'>

Examples of each data type:

x = "Hello World"  # str
x = 20  # int
x = 20.5  # float
x = 1j  # complex
x = ["apple", "banana", "cherry"]  # list
x = ("apple", "banana", "cherry")  # tuple
x = range(6)  # range
x = {"name": "John", "age": 36}  # dict
x = {"apple", "banana", "cherry"}  # set
x = frozenset({"apple", "banana", "cherry"})  # frozenset
x = True  # bool
x = b"Hello"  # bytes
x = bytearray(5)  # bytearray
x = memoryview(bytes(5))  # memoryview
x = None  # NoneType

enumerate

seasons = ['Spring', 'Summer', 'Fall', 'Winter']
for index, ele in enumerate(seasons):
    print(index, ele)

Type Conversion

str to int: int()

num: int = int("123")
print(type(num)) # <class 'int'>

int to str: str()

a: str = str(123)
print(type(a)) # <class 'str'>

String and Array

String

Strings are created with “ or ‘

str1 = "This is a string."
str2 = 'This is also a string.'

Multiple line string

str1 = """hello
world"""
print(str1)

Properties of Strings

len("This is a string")

Lookup

charAt: the nth character of the string

"hello"[0] # h

indexOf, lastIndexOf

"hello world".find("o") # 4
"hello world".rfind("o") # 7

Both index() and find() return the index of the first occurrence of the substring in the string.
The main difference is that find() returns -1 if it cannot find the substring, whereas index() raises a ValueError exception.
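
A minimal sketch of the difference:

"hello world".find("z")  # -1
try:
    "hello world".index("z")
except ValueError as e:
    print(e)  # substring not found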

String check

equals

'hello' == 'hello' # True

isEmpty

s is None or s == ''

contains

s = 'abc'
'a' in s # True
'aa' not in s # True

startsWith, endsWith

s = 'hello world'
s.startswith('hello')
s.endswith('world')

String conversion

To lowercase

"HELLO".lower()

To uppercase

"hello".upper()

String Handling

String concatenation

"Hello " + "world!"

Substring

string_val[start:end:step]
s = "abcdef"
print(s[:2]) # ab

Replace

Replace

str1 = "Hello World"
new_string = str1.replace("Hello", "Good Bye")

Replace with regex

import re

original_string = "hello world"
replacement = '*'
new_string = re.sub(r'[aeiou]', replacement, original_string)

Trim

" abc ".strip()

Split

Split a string by delimiter: split()

print("hello world".split(" ")) # ['hello', 'world']

Split a string by regex: re.split()

import re

print(re.split(r"\s", "hello world"))

Join

Join string list

my_list = ['a', 'b', 'c', 'd']
my_string = ','.join(my_list)

String formatting

name = "Reiko"
format_str = f"She said her name is {name}."
format_str2 = f"{name} is {len(name)} characters long."
format_str = "She said her name is {}.".format("Reiko")
format_str = "She said her name is {name}.".format(name="Reiko")

Array / List

li = []
other_li = [4, 5, 6]
# Examine the length with "len()"
len(li)

Lookup

Access

# Access a list like you would any array
li[0]
# Look at the last element
li[-1]

indexOf

# Get the index of the first item found matching the argument
["a", "b", "c"].index("a") # 0

Contains

# Check for existence in a list with "in"
1 in [1,2,3] # => True

Operations

Insert / Append

# Add stuff to the end of a list with append
li.append(1)
# Insert an element at a specific index
li.insert(1, 2)

Update

li[1] = 11

Remove

# Remove from the end with pop
li.pop()
# Remove by index
del li[2] # delete the element at index 2
# Remove by value
li.remove(2) # Remove first occurrence of a value

Handling

Deep copy (one layer)

li2 = li[:]

Sublist / Slice

li[start:end:step]
li[1:3]   # Return list from index 1 to 3 (end index exclusive)
li[2:]    # Return list starting from index 2
li[:3]    # Return list from the beginning until index 3 (exclusive)
li[::2]   # Return list selecting every second entry
li[::-1]  # Return list in reverse order

Concatenate

li + other_li
li.extend(other_li)

Filter / Map / Reduce (sum, min, max) / Predicate (some, every)

Filter - list comprehension [x for x in X if P(f(x))] or [f(x) for x in X if P(f(x))]

list = [1, 2, 3]
new_list = [x for x in list if x > 1]
print(new_list)

Filter - lambda

list = [1, 2, 3]
filtered = filter(lambda x: x > 1, list)
for x in filtered:
    print(x)

Map - list comprehension [x.field for x in S if P(x)]

list = [{"id": 1, "name": "Tom"}, {"id": 2, "name": "Jack"}]
name_list = [x['name'] for x in list]

Map - lambda

list = [{"id": 1, "name": "Tom"}, {"id": 2, "name": "Jack"}]
map(lambda x: x.name, list)

Reduce

list = [1, 2, 3, 4, 5]
sum(list)

Reduce - lambda

import functools 

list = [1, 2, 3, 4, 5]
# sum
functools.reduce(lambda a, b: a + b, list)
# min
functools.reduce(lambda a, b: a if a < b else b, list)
# max
functools.reduce(lambda a, b: a if a > b else b, list)

Predicate

predicate - some

list = [{"id": 1, "name": "Tom"}, {"id": 2, "name": "Jack"}]
bool(next((x for x in list if x['id'] == 1), None)) # True
bool(next((x for x in list if x['id'] == 3), None)) # False

Join

Sorting
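
A minimal sketch using the built-in sorted() function and list.sort():

li = [3, 1, 2]
sorted(li)                 # [1, 2, 3], returns a new list
sorted(li, reverse=True)   # [3, 2, 1]
li.sort()                  # sorts the list in place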

Reversion
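
A minimal sketch using slicing, reversed(), and list.reverse():

li = [1, 2, 3]
li[::-1]            # [3, 2, 1], returns a new list
list(reversed(li))  # [3, 2, 1]
li.reverse()        # reverses the list in place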

Deduplication
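
A minimal sketch; dict.fromkeys() keeps the original order (Python 3.7+), while set() does not guarantee order:

li = [1, 2, 2, 3, 1]
list(dict.fromkeys(li))  # [1, 2, 3], keeps first-seen order
list(set(li))            # order not guaranteed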

Tuple

Tuples are like lists but are immutable. You can’t insert, update, remove elements.

tup = (1, 2, 3)
# Tuples are created by default if you leave out the parentheses
tup2 = 11, 22, 33
tup[0] # => 1
tup[0] = 3 # Raises a TypeError

Access

tup[0]
len(tup)

Lookup

1 in tup      # => True
tup.index(2)  # => 1

Slice

tup[:2]

Concatenate

tup + (4, 5, 6) 

Unpack tuples (or lists) into variables

a, b, c = (1, 2, 3)
d, e, f = 4, 5, 6
# swap two values
e, d = d, e

Dict

empty_dict = {}
filled_dict = {"one": 1, "two": 2, "three": 3}

Note keys for dictionaries have to be immutable types. This is to ensure that the key can be converted to a constant hash value for quick look-ups. Immutable types include ints, floats, strings, tuples.

invalid_dict = {[1,2,3]: "123"}  # => Raises a TypeError: unhashable type: 'list'
valid_dict = {(1,2,3):[1,2,3]} # Values can be of any type, however.

Access

filled_dict["one"]
# Looking up a non-existing key is a KeyError
filled_dict["four"] # KeyError
# Use "get()" method to avoid the KeyError
filled_dict.get("one")
# The get method supports a default argument when the value is missing
filled_dict.get("one", 4)

Put

# Adding to a dictionary
filled_dict.update({"four":4}) # => {"one": 1, "two": 2, "three": 3, "four": 4}
filled_dict["four"] = 4 # another way to add to dict
# "setdefault()" inserts into a dictionary only if the given key isn't present
filled_dict.update({"four":4}) # => {"one": 1, "two": 2, "three": 3, "four": 4}
filled_dict["four"] = 4 # another way to add to dict

Delete

# Remove keys from a dictionary with del
del filled_dict["one"] # Removes the key "one" from filled dict

Lookup

"one" in filled_dict
list(filled_dict.keys())
list(filled_dict.values())

Get all keys as an iterable with "keys()". We need to wrap the call in list() to turn it into a list. Note: for Python versions < 3.7, dictionary key ordering is not guaranteed, so your results might not match the example above exactly. As of Python 3.7, dictionary items maintain the order in which they are inserted into the dictionary.

Traverse

my_dict = {"key1": "value1", "key2": "value2"}
for key in my_dict:
print(f"{key}: {my_dict[key]}")

Set

empty_set = set()
# Initialize a set with a bunch of values.
some_set = {1, 1, 2, 2, 3, 4} # some_set is now {1, 2, 3, 4}
# Similar to keys of a dictionary, elements of a set have to be immutable.
invalid_set = {[1], 1} # => Raises a TypeError: unhashable type: 'list'
valid_set = {(1,), 1}

Insert

some_set.add(5)

Delete

some_set.remove(1)

Lookup

2 in filled_set

Intersection/union/difference/subset

filled_set = {1, 2, 3, 4, 5}
other_set = {3, 4, 5, 6}
# Do set intersection with &
filled_set & other_set # => {3, 4, 5}
# Do set union with |
filled_set | other_set # => {1, 2, 3, 4, 5, 6}
# Do set difference with -
{1, 2, 3, 4} - {2, 3, 5} # => {1, 4}
# Do set symmetric difference with ^
{1, 2, 3, 4} ^ {2, 3, 5} # => {1, 4, 5}
# Check if set on the left is a superset of set on the right
{1, 2} >= {1, 2, 3} # => False
# Check if set on the left is a subset of set on the right
{1, 2} <= {1, 2, 3} # => True

Copy

# Make a one layer deep copy
filled_set = some_set.copy() # filled_set is {1, 2, 3, 4}
filled_set is some_set # => False

Expressions

Arithmetic Operators

  • +: add
  • -: subtract
  • *: multiply
  • /: divide
  • //: integer division rounds down
  • %: modulo
  • **: exponentiation

Logical Operators

  • and
  • or
  • not

Note: "and" and "or" are case-sensitive keywords and must be lowercase.

Comparison operators

==, !=, >, <, >=, <=

Statements

Simple statements

Assignment

Call

return

Control Flow Statements

If Conditions

if…else

if some_var > 10:
    print("some_var is totally bigger than 10.")
elif some_var < 10:  # This elif clause is optional.
    print("some_var is smaller than 10.")
else:  # This is optional too.
    print("some_var is indeed 10.")

case/switch

For loop

for

for animal in ["dog", "cat", "mouse"]:
print("{} is a mammal".format(animal))
for i, value in enumerate(["dog", "cat", "mouse"]):
print(i, value)
# "range(number)" returns an iterable of numbers from zero up to (but excluding) the given number
for i in range(4):
print(i)
# "range(lower, upper)" returns an iterable of numbers
from the lower number to the upper number
for i in range(4, 8):
print(i)
# "range(lower, upper, step)"
for i in range(4, 8, 2):
print(i)

while

x = 0
while x < 4:
    print(x)
    x += 1

do…while

Exception handling

# Handle exceptions with a try/except block
try:
    # Use "raise" to raise an error
    raise IndexError("This is an index error")
except IndexError as e:
    pass  # Refrain from this, provide a recovery (next example).
except (TypeError, NameError):
    pass  # Multiple exceptions can be processed jointly.
else:  # Optional clause to the try/except block. Must follow all except blocks.
    print("All good!")  # Runs only if the code in try raises no exceptions
finally:  # Execute under all circumstances
    print("We can clean up resources here")

Functions

def add(x, y):
    print("x is {} and y is {}".format(x, y))
    return x + y

add(5, 6)

# Another way to call functions is with keyword arguments
add(y=6, x=5)  # Keyword arguments can arrive in any order.

# You can define functions that take a variable number of positional arguments
def varargs(*args):
    return args

varargs(1, 2, 3)

# You can define functions that take a variable number of keyword arguments, as well
def keyword_args(**kwargs):
    return kwargs

keyword_args(big="foot", loch="ness")

Expand arguments

all_the_args(*args)            # equivalent: all_the_args(1, 2, 3, 4)
all_the_args(**kwargs)         # equivalent: all_the_args(a=3, b=4)
all_the_args(*args, **kwargs)  # equivalent: all_the_args(1, 2, 3, 4, a=3, b=4)

# global scope
x = 5

def set_global_x(num):
    # global indicates that particular var lives in the global scope
    global x
    print(x)  # => 5
    x = num   # global var x is now set to 6
    print(x)

Nested function

def create_adder(x):
    def adder(y):
        return x + y
    return adder

add_10 = create_adder(10)
add_10(3) # => 13

Anonymous functions

# There are also anonymous functions
(lambda x: x > 2)(3) # => True
(lambda x, y: x ** 2 + y ** 2)(2, 1) # => 5

Modules

Python modules are just ordinary Python files. You can write your own, and import them. The name of the module is the same as the name of the file.

If you have a Python script named math.py in the same folder as your current script, the file math.py will be loaded instead of the built-in Python module. This happens because the local folder has priority over Python’s built-in libraries.

# You can import modules
import math
print(math.sqrt(16)) # => 4.0

# You can get specific functions from a module
from math import ceil, floor
print(ceil(3.7))   # => 4
print(floor(3.7))  # => 3

# You can import all functions from a module.
# Warning: this is not recommended
from math import *

# You can shorten module names
import math as m
math.sqrt(16) == m.sqrt(16)

Classes

Classes

Class members

  • Attributes
    • class attribute (set by class_name.class_attribute = value)
    • instance attribute (initialized by the initializer)
    • instance properties (properties are a special kind of attribute with getter, setter, and deleter methods)
  • Methods
    • initializer
    • instance method (called on instances)
    • class method (receives the class as the first argument; can be called on the class or on instances)
    • static method (called by class_name.static_method())
    • getter
    • setter

Note that double leading and trailing underscores denote objects or attributes that are used by Python but live in user-controlled namespaces. Methods (or objects or attributes) like __init__, __str__, and __repr__ are called special methods (sometimes called dunder methods). You should not invent such names on your own.

# We use the "class" statement to create a class
class Human:

# A class attribute. It is shared by all instances of this class
species = "H. sapiens"

# Basic initializer
def __init__(self, name):
# Assign the argument to the instance's name attribute
self.name = name

# Initialize property
self._age = 0

# An instance method. All methods take "self" as the first argument
def say(self, msg):
print("{name}: {message}".format(name=self.name, message=msg))

# Another instance method
def sing(self):
return 'yo... yo... microphone check... one two... one two...'

# A class method is shared among all instances
# They are called with the calling class as the first argument
@classmethod
def get_species(cls):
return cls.species

# A static method is called without a class or instance reference
@staticmethod
def grunt():
return "*grunt*"

# A property is just like a getter.
@property
def age(self):
return self._age

# This allows the property to be set
@age.setter
def age(self, age):
self._age = age

# This allows the property to be deleted
@age.deleter
def age(self):
del self._age
# Instantiate a class
i = Human(name="Ian")
# Call instance method
i.say("hi") # "Ian: hi"

j = Human("Joel")
j.say("hello")
# Call our class method
i.say(i.get_species()) # "Ian: H. sapiens"
# Change the class attribute (shared attribute)
Human.species = "H. neanderthalensis"
i.say(i.get_species()) # => "Ian: H. neanderthalensis"
j.say(j.get_species()) # => "Joel: H. neanderthalensis"
# Call the static method
print(Human.grunt()) # => "*grunt*"

# Static methods can be called by instances too
print(i.grunt())
# Update the property for this instance
i.age = 42
# Get the property
i.say(i.age) # => "Ian: 42"
j.say(j.age) # => "Joel: 0"
# Delete the property
del i.age
# i.age

Inheritance

# Define Batman as a child that inherits from both Superhero and Bat
class Batman(Superhero, Bat):

Standard Library

I/O Streams and Files

Read

# Instead of try/finally to cleanup resources you can use a with statement
with open("myfile.txt") as f:
    for line in f:
        print(line)

# Reading from a file
with open('myfile1.txt', "r+") as file:
    contents = file.read()  # reads a string from a file
    print(contents)

import json
with open('myfile2.txt', "r+") as file:
    contents = json.load(file)  # reads a json object from a file
    print(contents)

Read a text file as string

from pathlib import Path

content = Path('myfile.txt').read_text()

Write

# Writing to a file
contents = {"aa": 12, "bb": 21}
with open("myfile1.txt", "w+") as file:
    file.write(str(contents))

Advanced Topics

Regex

Match string with pattern

import re
pattern = re.compile(r"^([A-Z][0-9]+)+$")
bool(pattern.match("A1")) # True
bool(pattern.match("a1")) # False
# or
bool(re.match(r"^([A-Z][0-9]+)+$", "A1")) # True

Find first match substrings and groups

import re
s = "A1B2"
pattern = re.compile(r"[A-Z][0-9]")
pattern.search(s).group(0) # A1
# or
re.search(r"[A-Z][0-9]", s).group(0) # A1

Find all match substrings and groups

import re
s = "A1B2"
pattern = re.compile(r"[A-Z][0-9]")
for m in pattern.finditer(s):
    print(m.start(), m.end(), m.group(0))
# Output:
# 0 2 A1
# 2 4 B2

Replace group

import re

def replace_group(source: str, pattern, group_to_replace: int, replacement: str):
    length_adjust = 0
    result = source
    for m in pattern.finditer(source):
        result = replace(result, m.start(group_to_replace) + length_adjust,
                         m.end(group_to_replace) + length_adjust, replacement)
        length_adjust = length_adjust + len(replacement) - len(m.group(group_to_replace))
    return result

def replace(s, start, end, replacement):
    return s[:start] + replacement + s[end:]

group_to_replace = 1
s = "A1abc123B2"
pattern = re.compile(r"[A-Z]([0-9])")
replacement = '*'
print(replace_group(s, pattern, group_to_replace, replacement))
# A*abc123B*

Regex API

  • search(string[, pos[, endpos]]) -> Match: checks for a match anywhere in the string.
  • match(string[, pos[, endpos]]) -> Match: checks for a match only at the beginning of the string.
  • findall(string[, pos[, endpos]]) -> list[str]: returns all non-overlapping matches of the pattern in the string, as a list of strings.
  • finditer(string[, pos[, endpos]]) -> iterator: returns an iterator yielding Match objects over all non-overlapping matches of the pattern in the string. The string is scanned left-to-right.
  • groups([default]) -> tuple: returns a tuple containing all the subgroups of the match, from 1 up to however many groups are in the pattern.

References

[1] Learn Python in Y minutes

[2] Python Tutorial

IO Streams

Input Streams

Get an Input Stream From a Path

Get Input Stream from filepath

String filepath = "D:\\test.txt";
// Java IO
InputStream is = new FileInputStream(filepath);

// Java NIO
Path path = Paths.get(filepath);
System.out.println(path.normalize().toUri().toString()); // "file:///D:/test.txt"
InputStream is = new URL(path.toUri().toString()).openStream();

Get Input Stream from classpath

// Spring framework ClassPathResource
InputStream resourceAsStream = new ClassPathResource("application.yml").getInputStream();
// or
InputStream resourceAsStream = new ClassPathResource("/application.yml").getInputStream();

// Java ClassLoader
InputStream resourceAsStream = <CurrentClass>.class.getResourceAsStream("/application.yml");
// or
InputStream resourceAsStream = <CurrentClass>.class.getClassLoader().getResourceAsStream("application.yml");

Get Input Stream from file HTTP URL

// Java 8
InputStream input = new URL("http://xxx.xxx/fileUri").openStream();
// or
URLConnection connection = new URL(url + "?" + query).openConnection();
connection.setRequestProperty("Accept-Charset", charset);
InputStream response = connection.getInputStream();

// Java 11+ (java.net.http)
HttpClient client = HttpClient.newHttpClient();
HttpRequest request = HttpRequest.newBuilder(URI.create("http://xxx.xxx/fileUri"))
        .header("Foo", "foovalue")
        .header("Bar", "barvalue")
        .GET()
        .build();
InputStream is = client.send(request, HttpResponse.BodyHandlers.ofInputStream()).body();

// Spring Resource
Resource resource = new UrlResource("http://xxx.xxx/fileUri");
InputStream is = resource.getInputStream();

Read/Convert an Input Stream to a String

Using Stream API (Java 8)

new BufferedReader(new InputStreamReader(in)).lines().collect(Collectors.joining("\n"))

Using IOUtils.toString (Apache Commons IO API)

String result = IOUtils.toString(inputStream, StandardCharsets.UTF_8);

Using ByteArrayOutputStream and inputStream.read (JDK)

ByteArrayOutputStream result = new ByteArrayOutputStream();
byte[] buffer = new byte[1024];
for (int length; (length = inputStream.read(buffer)) != -1; ) {
    result.write(buffer, 0, length);
}
// On JDK 7+ you can use StandardCharsets.UTF_8.name() instead of the literal "UTF-8"
return result.toString("UTF-8");

Performance: ByteArrayOutputStream > IOUtils.toString > Stream API

Output Streams

Write data to file

Write string to file

String s = "hello world";
String outputFilePath = new StringBuilder()
        .append(System.getProperty("java.io.tmpdir"))
        .append(UUID.randomUUID())
        .append(".txt")
        .toString();
try (BufferedOutputStream out = new BufferedOutputStream(new FileOutputStream(outputFilePath))) {
    out.write(s.getBytes(StandardCharsets.UTF_8));
}
System.out.println("output file path: " + outputFilePath);

Read and write

Read From and Write to Files

Java IO

String inputFilePath = new StringBuilder()
        .append(System.getProperty("java.io.tmpdir"))
        .append("7d43f2b6-2145-4448-9c8f-c43f97ba4d9e.txt")
        .toString();
String outputFilePath = new StringBuilder()
        .append(System.getProperty("java.io.tmpdir"))
        .append(UUID.randomUUID())
        .append(".txt")
        .toString();
try (BufferedInputStream in = new BufferedInputStream(new FileInputStream(inputFilePath));
     BufferedOutputStream out = new BufferedOutputStream(new FileOutputStream(outputFilePath))) {
    int b;
    while ((b = in.read()) != -1) {
        out.write(b);
    }
}
System.out.println("output file path: " + outputFilePath);

For read and write, you can use the following two ways:

int b;
while ((b = in.read()) != -1) {
    out.write(b);
}

or

byte[] buffer = new byte[1024];
int lengthRead;
while ((lengthRead = in.read(buffer)) > 0) {
    out.write(buffer, 0, lengthRead);
    out.flush();
}

Java NIO.2 API

String inputFilePath = new StringBuilder()
        .append(System.getProperty("java.io.tmpdir"))
        .append("7d43f2b6-2145-4448-9c8f-c43f97ba4d9e.txt")
        .toString();
String outputFilePath = new StringBuilder()
        .append(System.getProperty("java.io.tmpdir"))
        .append(UUID.randomUUID())
        .append(".txt")
        .toString();
Path originalPath = new File(inputFilePath).toPath();
Path copied = Paths.get(outputFilePath);
Files.copy(originalPath, copied, StandardCopyOption.REPLACE_EXISTING);
System.out.println("output file path: " + outputFilePath);

Get a Path object by Paths.get(filePath) or new File(filePath).toPath()

By default, copying files and directories won’t overwrite existing ones, nor will it copy file attributes.

This behavior can be changed using the following copy options (see the example after the list):

  • REPLACE_EXISTING – replace a file if it exists
  • COPY_ATTRIBUTES – copy metadata to the new file
  • NOFOLLOW_LINKS – shouldn’t follow symbolic links
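
For example, to overwrite an existing target and also copy its file attributes, the options can be combined (a minimal sketch reusing the originalPath and copied variables from the snippet above):

Files.copy(originalPath, copied, StandardCopyOption.REPLACE_EXISTING, StandardCopyOption.COPY_ATTRIBUTES);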

Apache Commons IO API

FileUtils.copyFile(original, copied);

Files

Get File Path

get file path by class path

// Spring framework ClassPathResource
String filePath = new ClassPathResource(fileClassPath).getFile().getAbsolutePath();

// Java ClassLoader
URL url = FileUtils.class.getClassLoader()
.getResource(fileClassPath);
String filePath = Paths.get(url.toURI()).toFile().getAbsolutePath();

Creation

Create directory

File dir = new File(dirPath);
if (!dir.exists() || !dir.isDirectory()) {
    dir.mkdirs();
}
// or (Java NIO)
Files.createDirectories(new File(outputDir).toPath());

Delete

Delete a file

File file = new File(filePath);
file.delete();
// or
file.deleteOnExit();

Delete a directory

Java API

// function to delete subdirectories and files recursively
public static void deleteDirectory(File file)
{
    // iterate over all the files and folders present inside the directory
    for (File subfile : file.listFiles()) {

        // if it is a subfolder, recursively call the function to empty it
        if (subfile.isDirectory()) {
            deleteDirectory(subfile);
        }

        // delete files and empty subfolders
        subfile.delete();
    }
}

Apache Common IO API

FileUtils.deleteDirectory(new File(dir));

or

FileUtils.forceDelete(new File(dir));

Update

Traversal

Information

Java File Mime Type

// 1
String mimeType = Files.probeContentType(file.toPath());
// 2
String mimeType = URLConnection.guessContentTypeFromName(fileName);
// 3
FileNameMap fileNameMap = URLConnection.getFileNameMap();
String mimeType = fileNameMap.getContentTypeFor(file.getName());
// 4
MimetypesFileTypeMap fileTypeMap = new MimetypesFileTypeMap();
String mimeType = fileTypeMap.getContentType(file.getName());

Temporary Files and Directories

Temporary Directory

// java.io.tmpdir
System.getProperty("java.io.tmpdir")

Windows 10: C:\Users\{user}\AppData\Local\Temp\

Debian: /tmp

Temporary file

// If you don't specify the file suffix, the default file suffix is ".tmp".
File file = File.createTempFile("temp", null);
System.out.println(file.getAbsolutePath());
file.deleteOnExit();

// Java NIO: the first argument is the file name prefix
Path path = Files.createTempFile("temp", ".txt");
System.out.println(path.toString());

Problems

Character Encoding Problems

The one-argument constructors of FileReader always use the platform default encoding, which is generally a bad idea.

Since Java 11 FileReader has also gained constructors that accept an encoding: new FileReader(file, charset) and new FileReader(fileName, charset).

In earlier versions of Java, you need to use new InputStreamReader(new FileInputStream(pathToFile), charset).
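
A minimal sketch of both approaches (the file name is a placeholder):

// Java 11+
Reader reader = new FileReader("myfile.txt", StandardCharsets.UTF_8);
// Before Java 11
Reader legacyReader = new InputStreamReader(new FileInputStream("myfile.txt"), StandardCharsets.UTF_8);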

References

Junk files are unnecessary files produced by the operating system or software. Junk files keep increasing, but our computer disks are limited. So we need to delete junk files frequently. Otherwise, we may not have enough free space.

Types of Junk Files

Here are common types of junk files:

  • Files in the Recycle Bin.
  • Windows temporary files. These are junk files whose use is temporary and become redundant once the current task is complete.
  • Windows and third-party software leftovers. When you uninstall a program, not all the files associated with the software are deleted.
  • Software cache files.
  • Log files.
  • Downloads. The downloads folder usually takes a chunk of your storage space. Usually, it contains unwanted installers, images, videos, and other redundant documents that accumulate over time.

Empty the Recycle Bin

:: empty Recycle Bin from the disk C
(ECHO Y | rd /s /q %systemdrive%\$RECYCLE.BIN) > %USERPROFILE%\Desktop\delete_files.log 2>&1
:: empty Recycle Bin from the disk D
(ECHO Y | rd /s /q d:\$RECYCLE.BIN) > %USERPROFILE%\Desktop\delete_files.log 2>&1
:: empty Recycle Bin from all disk drives. if used inside a batch file, replace %i with %%i
(FOR %i IN (a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y,z) DO (rd /s /q %i:\$RECYCLE.BIN)) > %USERPROFILE%\Desktop\delete_files.log 2>&1

Delete Temporary Files

To view temporary files

%SystemRoot%\explorer.exe %temp%

Delete all temporary files

del /s /q %USERPROFILE%\AppData\Local\Temp\*.* > %USERPROFILE%\Desktop\delete_files.log 2>&1
:: or
del /s /q %temp%\*.* > %USERPROFILE%\Desktop\delete_files.log 2>&1

Delete all empty directories in the temporary files directory

:: if used inside a batch file, replace %d with %%d
for /f "delims=" %d in ('dir /s /b /ad %USERPROFILE%\AppData\Local\Temp ^| sort /r') do rd "%d"

Only delete temporary files that were last modified less than 7 days ago and empty directories

:: if used inside a batch file, replace %d with %%d
((echo Y | FORFILES /s /p "%USERPROFILE%\AppData\Local\Temp" /M "*" -d -7 -c "cmd /c del /q @path") && (for /f "delims=" %d in ('dir /s /b /ad %USERPROFILE%\AppData\Local\Temp ^| sort /r') do rd "%d")) > %USERPROFILE%\Desktop\delete_files.log 2>&1

Delete Windows and Third-Party Software Leftovers

Chrome old version leftovers

"C:\Program Files\Google\Chrome\Application\{old_version}\*.*"

Delete Software Cache Files

Browser

Chat Software

Delete WeChat cache files

del /s /q "%USERPROFILE%\Documents\WeChat Files\*.*" > %USERPROFILE%\Desktop\delete_files.log 2>&1

Delete Log Files

Only delete log files that were last modified less than 7 days ago

cd C:\
(ECHO Y | FORFILES /s /p "C:" /M "*.log" -d -7 -c "cmd /c del /q @path")

Delete all disk drives log files

:: print files to delete
type NUL > %USERPROFILE%\Desktop\delete_files.log
FOR %i IN (a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y,z) DO (%i: && (ECHO Y | FORFILES /s /p "%i:" /M "*.log" -d -7 -c "cmd /c echo @path")) >> %USERPROFILE%\Desktop\delete_files.log 2>&1

:: delete
type NUL > %USERPROFILE%\Desktop\delete_files.log
FOR %i IN (a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y,z) DO (%i: && (ECHO Y | FORFILES /s /p "%i:" /M "*.log" -d -7 -c "cmd /c del /q @path")) >> %USERPROFILE%\Desktop\delete_files.log 2>&1

WeChat Log Files

del /s /q %USERPROFILE%\AppData\Roaming\Tencent\WeChat\log\*.xlog > %USERPROFILE%\Desktop\delete_files.log 2>&1

Apache Tomcat Log Files

del /q "C:\Program Files\Apache Software Foundation\Tomcat 8.0\logs\major\run.out.*" > %USERPROFILE%\Desktop\delete_files.log 2>&1

Command Usage

del

Deletes one or more files.

Syntax

del <option> <filepath_or_file_pattern>

Parameters

  • /q - Specifies quiet mode. You are not prompted for delete confirmation.
  • /s - Deletes specified files from the current directory and all subdirectories. Displays the names of the files as they are being deleted.
  • /? - Displays help at the command prompt.

rd

Syntax

rd [<drive>:]<path> [/s [/q]]

Parameters

  • /s - Deletes a directory tree (the specified directory and all its subdirectories, including all files).
  • /q - Specifies quiet mode. Does not prompt for confirmation when deleting a directory tree. The /q parameter works only if /s is also specified.
  • /? - Displays help at the command prompt.

forfiles

Selects and runs a command on a file or set of files.

Syntax

forfiles [/P pathname] [/M searchmask] [/S] [/C command] [/D [+ | -] [{<date> | <days>}]]

Parameters

  • /P <pathname> - Specifies the path from which to start the search. By default, searching starts in the current working directory. For example, /p "C:"
  • /M <searchmask> - Searches files according to the specified search mask. The default searchmask is *. For example, /M "*.log".
  • /S - Instructs the forfiles command to search in subdirectories recursively.
  • /C <command> - Runs the specified command on each file. Command strings should be wrapped in double quotes. The default command is "cmd /c echo @file". For example, -c "cmd /c del /q @path"
  • /D [{+ | -}][{<date> | <days>}] - Selects files with a last modified date within the specified time frame. For example, -d -7.

Wildcard

  • * - Match zero or more characters
  • ? - Match one character in that position
  • [ ] - Match a range of characters. For example, [a-l]ook matches book, cook, and look.
  • [ ] - Match specific characters. For example, [bc]ook matches book and cook.
  • ` (backtick) - Match the character that follows it as a literal (not as a wildcard character)

Run batch file with Task Scheduler

  1. Open "Task Scheduler", or press Windows + R and input taskschd.msc.
  2. Right-click the "Task Scheduler Library" branch and select the New Folder option.
  3. Confirm a name for the folder, for example, MyScripts.
  4. Click the OK button.
  5. Expand the "Task Scheduler Library" branch.
  6. Right-click the MyScripts folder.
  7. Select the Create Basic Task option.
  8. In the "Name" field, confirm a name for the task, for example, ClearJunkBatch.
  9. (Optional) In the "Description" field, write a description for the task.
  10. Click the Next button.
  11. Select the Monthly option.
    • Quick note: Task Scheduler lets you choose from different triggers, including a specific date, during startup, or when a user logs in to the computer. In this example, I will select the option to run a task every month, but you may need to configure additional parameters depending on your selection.
  12. Click the Next button.
  13. Use the "Start" settings to confirm the day and time to run the task.
  14. Use the "Monthly" drop-down menu to pick the months of the year to run the task.
  15. Use the "Days" or "On" drop-down menu to confirm the days to run the task.
  16. Click the Next button.
  17. Select the Start a program option to run the batch file.
  18. In the "Program/script" field, click the Browse button.
  19. Select the batch file you want to execute.
  20. Click the Finish button.

References

Clear

Delete files

Auto Answer “Yes/No” to Prompt

Batch file

MySQL Server Configuration Files

Most MySQL programs can read startup options from option files (sometimes called configuration files). Option files provide a convenient way to specify commonly used options so that they need not be entered on the command line each time you run a program.

To determine whether a program reads option files, invoke it with the --help option. (For mysqld, use --verbose and --help.) If the program reads option files, the help message indicates which files it looks for and which option groups it recognizes.
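For example, on Linux you can list the option files a client program reads (a minimal sketch; the exact list varies by platform and build):

mysql --help | grep -A 1 "Default options"
# Default options are read from the following files in the given order:
# /etc/my.cnf /etc/mysql/my.cnf ~/.my.cnf
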

Configuration Files on Windows

On Windows, MySQL programs read startup options from the files shown in the following table, in the specified order (files listed first are read first, files read later take precedence).

File name and purpose:

  • %WINDIR%\my.ini, %WINDIR%\my.cnf - Global options
  • C:\my.ini, C:\my.cnf - Global options
  • BASEDIR\my.ini, BASEDIR\my.cnf - Global options
  • defaults-extra-file - The file specified with --defaults-extra-file, if any
  • %APPDATA%\MySQL\.mylogin.cnf - Login path options (clients only)
  • DATADIR\mysqld-auto.cnf - System variables persisted with SET PERSIST or SET PERSIST_ONLY (server only)

  • %WINDIR% represents the location of your Windows directory. This is commonly C:\WINDOWS. You can run echo %WINDIR% to view the location.
  • %APPDATA% represents the value of the Windows application data directory. C:\Users\{userName}\AppData\Roaming.
  • BASEDIR represents the MySQL base installation directory. When MySQL 8.0 has been installed using MySQL Installer, this is typically C:\PROGRAMDIR\MySQL\MySQL Server 8.0 in which PROGRAMDIR represents the programs directory (usually Program Files for English-language versions of Windows). Although MySQL Installer places most files under PROGRAMDIR, it installs my.ini under the C:\ProgramData\MySQL\MySQL Server 8.0\ directory (DATADIR) by default.
  • DATADIR represents the MySQL data directory. As used to find mysqld-auto.cnf, its default value is the data directory location built in when MySQL was compiled, but can be changed by --datadir specified as an option-file or command-line option processed before mysqld-auto.cnf is processed. By default, the datadir is set to C:/ProgramData/MySQL/MySQL Server 8.0/Data in the BASEDIR\my.ini (C:\ProgramData\MySQL\MySQL Server 8.0\my.ini). You also can get the DATADIR location by running the SQL statement SELECT @@datadir;.

After you install MySQL 8.0 on Windows, you have only one configuration file: BASEDIR\my.ini (actually C:\ProgramData\MySQL\MySQL Server 8.0\my.ini).

Configuration Files on Unix-Like Systems

On Unix and Unix-like systems, MySQL programs read startup options from the files shown in the following table, in the specified order (files listed first are read first, files read later take precedence).

Note: On Unix platforms, MySQL ignores configuration files that are world-writable. This is intentional as a security measure.

File name and purpose:

  • /etc/my.cnf - Global options
  • /etc/mysql/my.cnf - Global options
  • SYSCONFDIR/my.cnf - Global options
  • $MYSQL_HOME/my.cnf - Server-specific options (server only)
  • defaults-extra-file - The file specified with --defaults-extra-file, if any
  • ~/.my.cnf - User-specific options
  • ~/.mylogin.cnf - User-specific login path options (clients only)
  • DATADIR/mysqld-auto.cnf - System variables persisted with SET PERSIST or SET PERSIST_ONLY (server only)

  • SYSCONFDIR represents the directory specified with the SYSCONFDIR option to CMake when MySQL was built. By default, this is the etc directory located under the compiled-in installation directory.
  • MYSQL_HOME is an environment variable containing the path to the directory in which the server-specific my.cnf file resides. If MYSQL_HOME is not set and you start the server using the mysqld_safe program, mysqld_safe sets it to BASEDIR, the MySQL base installation directory.
  • DATADIR represents the MySQL data directory. As used to find mysqld-auto.cnf, its default value is the data directory location built in when MySQL was compiled, but it can be changed by --datadir specified as an option-file or command-line option processed before mysqld-auto.cnf is processed. By default, datadir is set to /var/lib/mysql in /etc/my.cnf or /etc/mysql/my.cnf. You can also get the DATADIR location by running the SQL statement SELECT @@datadir;.

After you install MySQL 8.0 on Linux, you have only one configuration file: /etc/my.cnf.

Configuration File Inclusions

It is possible to use !include directives in option files to include other option files and !includedir to search specific directories for option files. For example, to include the /home/mydir/myopt.cnf file, use the following directive:

!include /home/mydir/myopt.cnf

To search the /home/mydir directory and read option files found there, use this directive:

!includedir /home/mydir

MySQL makes no guarantee about the order in which option files in the directory are read.

Note: Any files to be found and included using the !includedir directive on Unix operating systems must have file names ending in .cnf. On Windows, this directive checks for files with the .ini or .cnf extension.

Why would you put some directives into separate files instead of just keeping them all in /etc/my.cnf? For modularity.

If you want to deploy some sets of config directives in a modular way, using a directory of individual files is a little easier than editing a single file. You might make a mistake in editing, and accidentally change a different line than you intended.

Also, removing a set of configuration options is easy if they are organized into individual files: just delete one of the files under /etc/my.cnf.d, restart mysqld, and it's done.
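
A minimal sketch of this modular layout (file paths and option values are examples):

# /etc/my.cnf
[mysqld]
datadir=/var/lib/mysql

!includedir /etc/my.cnf.d

# /etc/my.cnf.d/port.cnf
[mysqld]
port=13306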

Common Configurations

Change port

[client]
port=13306

[mysqld]
port=13306

References

[1] Using Option Files - MySQL Reference Manual

Git

Install Git

On Windows, download Git for Windows.

On Linux (Debian-based distributions), run the command sudo apt-get install git to install Git.

Verify that the installation was successful:

git --version

Git Settings

Setting your user name and email for git

git config --global user.name "taogen"
git config --global user.email "taogenjia@gmail.com"

Check your git settings

git config user.name
git config user.email

Checking for existing SSH keys

Before you generate an SSH key, you can check to see if you have any existing SSH keys.

  1. Open Terminal or Git Bash

  2. Enter ls -al ~/.ssh to see if existing SSH keys are present.

  3. Check the directory listing to see if you already have a public SSH key. By default, the filenames of supported public keys for GitHub are one of the following.

    • id_rsa.pub

    • id_ecdsa.pub

    • id_ed25519.pub

  4. Either generate a new SSH key or upload an existing key.

Generating a new SSH key and adding it to the ssh-agent

Generating a new SSH key

  1. Open Terminal or Git Bash

  2. Paste the text below, substituting in your GitHub email address.

    $ ssh-keygen -t ed25519 -C "your_email@example.com"

    Note: If you are using a legacy system that doesn’t support the Ed25519 algorithm, use:

    $ ssh-keygen -t rsa -b 4096 -C "your_email@example.com"

    After running the above command, you need to enter a file path or use the default file path and enter a passphrase or no passphrase.

    Generating public/private ALGORITHM key pair.
    Enter file in which to save the key (C:/Users/YOU/.ssh/id_ALGORITHM):
    Enter passphrase (empty for no passphrase):
    Enter same passphrase again:
    Your identification has been saved in C:/Users/YOU/.ssh/id_ALGORITHM.
    Your public key has been saved in C:/Users/YOU/.ssh/id_ALGORITHM.pub.
    The key fingerprint is:
    SHA256:24EfhoOdfZYXtdBt42wbDj7nnbO32F6TQsFejz95O/4 your_email@example.com
    The key's randomart image is:
    +--[ED25519 256]--+
    | .. o|
    | . .++|
    | o++.|
    | o = .ooB.|
    | . S = =o* +|
    | * =.+ =o|
    | . o .+=*|
    | +=O|
    | .oBE|
    +----[SHA256]-----+

Adding your SSH key to the ssh-agent

You can secure your SSH keys and configure an authentication agent so that you won’t have to reenter your passphrase every time you use your SSH keys.

  1. Ensure the ssh-agent is running.

Start it manually:

# start the ssh-agent in the background
$ eval "$(ssh-agent -s)"
> Agent pid 59566

Auto-launching the ssh-agent Configuration

You can run ssh-agent automatically when you open bash or Git shell. Copy the following lines and paste them into your ~/.profile or ~/.bashrc file in Git shell:

env=~/.ssh/agent.env

agent_load_env () { test -f "$env" && . "$env" >| /dev/null ; }

agent_start () {
    (umask 077; ssh-agent >| "$env")
    . "$env" >| /dev/null ; }

agent_load_env

# agent_run_state: 0=agent running w/ key; 1=agent w/o key; 2=agent not running
agent_run_state=$(ssh-add -l >| /dev/null 2>&1; echo $?)

if [ ! "$SSH_AUTH_SOCK" ] || [ $agent_run_state = 2 ]; then
agent_start
ssh-add
elif [ "$SSH_AUTH_SOCK" ] && [ $agent_run_state = 1 ]; then
ssh-add
fi

unset env

  2. Add your SSH private key to the ssh-agent.

If your private key is not stored in one of the default locations (like ~/.ssh/id_rsa), you’ll need to tell your SSH authentication agent where to find it. To add your key to ssh-agent, type ssh-add ~/path/to/my_key.

$ ssh-add ~/.ssh/id_ed25519

Adding a new SSH key to your GitHub account

  1. Open Terminal or Git Bash. Copy the SSH public key to your clipboard.

    $ pbcopy < ~/.ssh/id_ed25519.pub
    # Copies the contents of the id_ed25519.pub file to your clipboard

    or

    $ clip < ~/.ssh/id_ed25519.pub
    # Copies the contents of the id_ed25519.pub file to your clipboard

    or

    $ cat ~/.ssh/id_ed25519.pub
    # Then select and copy the contents of the id_ed25519.pub file
    # displayed in the terminal to your clipboard
  2. GitHub.com -> Settings -> Access - SSH and GPG keys -> New SSH key

Testing your SSH connection

After you’ve set up your SSH key and added it to your account on GitHub.com, you can test your connection.

  1. Open Terminal or Git Bash

  2. Enter the following command

    $ ssh -T git@github.com
    # Attempts to ssh to GitHub

    If you see the following message, you have successfully connected to GitHub with SSH.

    > Hi USERNAME! You've successfully authenticated, but GitHub does not
    > provide shell access.

References

[1] Connecting to GitHub with SSH
