Taogen's Blog

Stay hungry stay foolish.

Introduction to WeChat Work Applications

WeChat Work application development means building applications for the WeChat Work platform with the APIs that WeChat Work provides. Users find an application through the application entry in the WeChat Work client, which is available for iOS, Android, Windows, and other platforms and can be used on several devices at the same time.

By API type, WeChat Work applications fall into:

  • Internal (self-built) applications. Usable only by the WeChat Work users of the enterprise that develops the application.
  • Third-party applications. Usable by the WeChat Work users of any enterprise; they are listed in the WeChat Work third-party application marketplace.
  • Smart hardware applications. Aimed at smart-hardware vendors: built on the hardware SDK provided by WeChat Work, they upgrade device capabilities and deliver integrated software-and-hardware solutions. They are usable by the WeChat Work users of any enterprise and are also listed in the third-party application marketplace.

Most APIs are shared by these three application types; a small number differ or are available to only one type.

By usage scenario, applications fall into:

  • H5 applications. Web-page applications.
  • Mini-program applications.
  • Group-chat bots. Through a bot, an enterprise application can proactively send various types of messages into a group chat.
  • Management and auxiliary applications.

Basic usage of WeChat Work applications:

  1. H5 applications (used actively by the user). The user finds the application entry in the WeChat Work client, opens the application's home page, and uses the features on its pages.
  2. Management and auxiliary applications (used by non-users, or used passively), such as contact management and message push. The application's admin backend triggers the relevant features.

WeChat Work Application Development

Interactions involved in a WeChat Work application:

  • The WeChat Work client calls the application server's APIs.
  • The application server calls WeChat Work's APIs.
  • WeChat Work calls back the application server's APIs.

Development Flow for an Internal (Self-Built) Application

  1. Register a WeChat Work account and obtain the corpId (company ID).
  2. Create the application. Log in to the WeChat Work admin console and create a self-built application under application management; upload the app logo and enter the app name. This yields an AgentId and a Secret.
  3. Configure the application (not needed for non-H5 applications). On the application detail page of the admin console: 1) configure the application home page; 2) set the trusted domain.
  4. Develop the application by calling the WeChat Work server APIs to implement the business features.
  5. Deploy the application to your own server.
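The server-side part of step 4 usually begins with the access-token exchange: the corpId from step 1 and the Secret from step 2 are traded for an access_token via the gettoken endpoint, and that token is then passed to the business APIs. A minimal sketch of the URL building only (`buildTokenUrl` is a hypothetical helper; the HTTP call and JSON parsing are omitted):

```java
public class WeWorkTokenUrl {
    static final String BASE = "https://qyapi.weixin.qq.com/cgi-bin/gettoken";

    // Build the gettoken request URL from the corpId and application Secret.
    static String buildTokenUrl(String corpId, String corpSecret) {
        return BASE + "?corpid=" + corpId + "&corpsecret=" + corpSecret;
    }

    public static void main(String[] args) {
        System.out.println(buildTokenUrl("my-corp-id", "my-secret"));
    }
}
```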

Appendixes

Static files referenced by HTML pages (images, JavaScript, CSS, etc.) can be cached by the browser by setting cache-related HTTP response headers.

Two main types of cache headers, Cache-Control and Expires, define the caching characteristics of your resources. Cache-Control is the more modern and flexible approach, but both headers can be used simultaneously.

Cache headers are applied to resources at the server level – for example, in the .htaccess file on an Apache server, used by nearly half of all active websites – to set their caching characteristics. Caching is enabled by identifying a resource or type of resource, such as images or CSS files, and then specifying headers for the resource(s) with the desired caching options.

Stop using (HTTP 1.0) → Replaced with (HTTP 1.1, since 1999):

  • Expires: [date] → Cache-Control: max-age=[seconds]
  • Pragma: no-cache → Cache-Control: no-cache
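The difference can be seen by generating both headers for a 30-day lifetime: Expires carries an absolute HTTP-date, while Cache-Control expresses the same freshness as relative seconds. `cacheControlFor` and `expiresFor` below are hypothetical helpers for illustration:

```java
import java.time.ZoneOffset;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

public class CacheHeaderDemo {
    // Modern header: freshness lifetime as relative seconds.
    static String cacheControlFor(int days) {
        return "Cache-Control: max-age=" + (days * 24 * 3600);
    }

    // Legacy header: freshness deadline as an absolute RFC 1123 HTTP-date.
    static String expiresFor(ZonedDateTime expiry) {
        return "Expires: " + DateTimeFormatter.RFC_1123_DATE_TIME.format(expiry);
    }

    public static void main(String[] args) {
        System.out.println(cacheControlFor(30)); // Cache-Control: max-age=2592000
        System.out.println(expiresFor(ZonedDateTime.now(ZoneOffset.UTC).plusDays(30)));
    }
}
```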

Setting HTTP cache in Spring framework

Setting HTTP response cache-control header in Spring framework

return ResponseEntity.ok()
        .cacheControl(CacheControl.maxAge(365, TimeUnit.DAYS)
                .cachePrivate()
                .mustRevalidate())
        .contentType(MediaType.parseMediaType("application/octet-stream"))
        .header(HttpHeaders.CONTENT_DISPOSITION, "inline; filename=\"" + fileName + "\"")
        .body(resource);

Setting HTTP cache in Nginx Server

Set the HTTP Cache-Control header only for responses served directly by Nginx, not for responses proxied through proxy_pass. In other words, request static files via the server's file path. For example:

  • expires 30d;
  • add_header Cache-Control "public, no-transform";

conf/nginx.conf

server {
    listen       80;
    server_name  localhost;
    location / {
        autoindex on;
        root html;
        index index.html index.htm;
    }
    location ~* \.(js|css|png|jpg|jpeg|gif|svg|ico)$ {
        root /root/upload;
        expires 30d;
        add_header Cache-Control "public, no-transform";
    }
}

Cache-Control Directives

Cache

  • Cache types: private / public
  • Expiration time: max-age / s-maxage
  • Revalidation-related: no-cache, must-revalidate / proxy-revalidate, stale-while-revalidate, stale-if-error
  • no-transform
  • immutable

Examples:

Cache-Control: public, max-age=604800, must-revalidate

No Cache

  • no-store or max-age=0

Note that no-cache does not mean “don’t cache”. no-cache allows caches to store a response but requires them to revalidate it before reuse. If the sense of “don’t cache” that you want is actually “don’t store”, then no-store is the directive to use.

Examples

Cache-Control: no-store

Cache-Control Directive Details

Max Age for Files

  • ico/jpg/jpeg/png/gif: max-age=2592000 seconds (30 days) or max-age=31536000 (365 days)
  • pdf: max-age=2592000 seconds (30 days)
  • css/js: max-age=86400 seconds (1 day) or max-age=2592000 seconds (30 days)

Elements

View Event Listeners of HTML Elements

Element -> Event Listeners

  • Ancestors: unchecked
  • All/Passive/Blocking: Blocking
  • Framework listeners: checked

Edit HTML element

You can edit HTML on the fly and preview the changes by selecting any element, choosing a DOM element within the panel, and double clicking on the opening tag to edit it.

Edit CSS property

You can also change CSS in Chrome DevTools and preview what the result will look like. This is probably one of the most common uses of the tool. Simply select the element you want to edit, and under the Styles panel you can add or change any CSS property you want.

Change color format

You can toggle between RGBA, HSL, and hexadecimal formatting by pressing Shift + Click on the color block.

Console

Design Mode

You can freely make edits to the page as if it were a document.

Open design mode: document.designMode = "on"

Monitoring events on-page elements

monitorEvents($0, 'mouse')

Sources

Pretty print

You can easily change the formatting of your minimized code by clicking on {}.

Multiple cursors

You can easily add multiple cursors by pressing Cmd + Click (Ctrl + Click) and entering information on multiple lines at the same time.

Search source code

You can quickly search all of your source code by pressing Cmd + Opt + F (Ctrl + Shift + F).

Network

Load a Web Page Without the Cache

Check "Disable cache" in the Network panel.

Others

Dock Position

You can also change the Chrome DevTools dock position. You can either undock into a separate window, or dock it on the left, bottom, or right side of the browser. The dock position can be changed by pressing Cmd + Shift + D (Ctrl + Shift + D) or through the menu.


Spring Boot uses a very particular PropertySource order that is designed to allow sensible overriding of values. Properties are considered in the following order (with values from lower items overriding earlier ones):

  1. Default properties (specified by setting SpringApplication.setDefaultProperties).
  2. @PropertySource annotations on your @Configuration classes. Please note that such property sources are not added to the Environment until the application context is being refreshed. This is too late to configure certain properties such as logging.* and spring.main.* which are read before refresh begins.
  3. Config data (such as application.properties files).
    1. Application properties packaged inside your jar (application.properties and YAML variants).
    2. Profile-specific application properties packaged inside your jar (application-{profile}.properties and YAML variants).
    3. Application properties outside of your packaged jar (application.properties and YAML variants).
    4. Profile-specific application properties outside of your packaged jar (application-{profile}.properties and YAML variants).
  4. A RandomValuePropertySource that has properties only in random.*.
  5. OS environment variables.
  6. Java System properties (System.getProperties()).
  7. JNDI attributes from java:comp/env.
  8. ServletContext init parameters.
  9. ServletConfig init parameters.
  10. Properties from SPRING_APPLICATION_JSON (inline JSON embedded in an environment variable or system property).
  11. Command line arguments.
  12. properties attribute on your tests. Available on @SpringBootTest and the test annotations for testing a particular slice of your application.
  13. @TestPropertySource annotations on your tests.
  14. Devtools global settings properties in the $HOME/.config/spring-boot directory when devtools is active.
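The "values from later sources override earlier ones" rule can be sketched with plain maps. This is an illustration only; Spring's Environment implements it internally through its chain of PropertySource objects:

```java
import java.util.*;

public class PropertyOrderDemo {
    // Walk the sources in order; a later source that defines the key wins.
    static String resolve(List<Map<String, String>> sourcesInOrder, String key) {
        String value = null;
        for (Map<String, String> source : sourcesInOrder) {
            if (source.containsKey(key)) {
                value = source.get(key); // later sources override earlier ones
            }
        }
        return value;
    }

    public static void main(String[] args) {
        List<Map<String, String>> sources = Arrays.asList(
                Map.of("server.port", "8080"),  // e.g. application.properties in the jar
                Map.of("server.port", "9000")); // e.g. a command line argument
        System.out.println(resolve(sources, "server.port")); // 9000
    }
}
```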

Config data

Specify a default configuration file

spring.config.location: specify a default configuration file path or directory

java -jar myproject.jar --spring.config.location=\
optional:classpath:/default.properties,\
optional:classpath:/override.properties
mvn spring-boot:run -Dspring.config.location="file:///Users/home/jdbc.properties"
mvn spring-boot:run -Dspring.config.location="file:///D:/config/aliyun-oss-java/application.yml"
mvn spring-boot:run -Dspring.config.location="/Users/home/jdbc.properties"
mvn spring-boot:run -Dspring.config.location="D:/config/aliyun-oss-java/application.yml"

Importing Additional Configuration File

spring.config.additional-location: additional configuration files used in addition to the default locations; properties in them override those from the default files

Program arguments

java -jar your_app.jar --spring.config.additional-location=xxx

System Properties (VM Arguments)

java -jar -Dspring.config.additional-location=xxx your_app.jar

application.properties

spring.config.import=developer.properties

OS Environment Variables

If you add new OS environment variables on Windows, you must restart your processes (the Java process, IntelliJ IDEA) so that they pick up the new values.

For any other Windows executable, system-level changes to the environment variables are only propagated to the process when it is restarted.

Add User variables or System variables on Linux or Windows

Given an environment variable such as msg=hello, it can be read in several ways:

  1. Via the System class

     System.getenv("msg")

  2. Via an injected Environment object

     @Autowired
     private Environment environment;

     environment.getProperty("msg")

  3. By injecting the value directly

     @Value("${msg}")
     private String msg;

  4. By referencing it in application.properties

     msg=${msg}
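For comparison, the same lookup without Spring, using only the JDK. `readMsg` is a hypothetical helper; the fallback covers the case where `msg` is not set in the environment:

```java
import java.util.Optional;

public class EnvRead {
    // Read the "msg" environment variable, falling back to a default when unset.
    static String readMsg(String defaultValue) {
        return Optional.ofNullable(System.getenv("msg")).orElse(defaultValue);
    }

    public static void main(String[] args) {
        System.out.println(readMsg("default"));
    }
}
```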

JSON Application Properties

Environment variables and system properties often have restrictions that mean some property names cannot be used. To help with this, Spring Boot allows you to encode a block of properties into a single JSON structure.

When your application starts, any spring.application.json or SPRING_APPLICATION_JSON properties will be parsed and added to the Environment.

For example, the SPRING_APPLICATION_JSON property can be supplied on the command line in a UN*X shell as an environment variable:

$ SPRING_APPLICATION_JSON='{"my":{"name":"test"}}' java -jar myapp.jar

In the preceding example, you end up with my.name=test in the Spring Environment.

The same JSON can also be provided as a system property:

$ java -Dspring.application.json='{"my":{"name":"test"}}' -jar myapp.jar

Or you could supply the JSON by using a command line argument:

$ java -jar myapp.jar --spring.application.json='{"my":{"name":"test"}}'

If you are deploying to a classic Application Server, you could also use a JNDI variable named java:comp/env/spring.application.json.

Accessing Command Line Properties

By default, SpringApplication converts any command line option arguments (that is, arguments starting with --, such as --server.port=9000) to a property and adds them to the Spring Environment. As mentioned previously, command line properties always take precedence over file-based property sources.

If you do not want command line properties to be added to the Environment, you can disable them by using SpringApplication.setAddCommandLineProperties(false).
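What SpringApplication does with option arguments can be sketched as follows. This is an illustration only; the real parsing lives in Spring's SimpleCommandLinePropertySource:

```java
import java.util.*;

public class OptionArgs {
    // "--server.port=9000" becomes the property server.port=9000;
    // anything not of the "--key=value" shape is ignored here.
    static Map<String, String> toProperties(String[] args) {
        Map<String, String> props = new LinkedHashMap<>();
        for (String arg : args) {
            if (arg.startsWith("--") && arg.indexOf('=') > 2) {
                int eq = arg.indexOf('=');
                props.put(arg.substring(2, eq), arg.substring(eq + 1));
            }
        }
        return props;
    }

    public static void main(String[] args) {
        System.out.println(toProperties(new String[]{"--server.port=9000", "-Dignored"}));
    }
}
```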

Maven Command with application arguments:

mvn spring-boot:run -Dspring-boot.run.arguments="--arg1=value --arg2=value"
mvn spring-boot:run -D{arg.name}={value}
$ mvn spring-boot:run -Dspring-boot.run.arguments="--spring.profiles.active=production --server.port=8089"
$ mvn spring-boot:run -Dspring-boot.run.arguments="--spring.config.location=D:\config\aliyun-oss-java\application.yml"
# Spring-Boot 2.x
$ mvn spring-boot:run -Dspring-boot.run.profiles=local
# Spring-Boot 1.x and 2.x
$ mvn spring-boot:run -Dspring.profiles.active=local

java -jar with System Properties (VM Arguments)

java -jar -D{arg.name}={value} {jar-file}   # sometimes the value needs to be quoted
$ mvn clean install spring-boot:repackage -Dmaven.test.skip=true
$ java -jar -Dspring.config.location="D:\config\aliyun-oss-java\application.yml" your_application.jar
$ java -jar -Dspring.profiles.active=prod your_application.jar

java -jar with Program Arguments

java -jar <jar-file> --{arg.name}={value}   # sometimes the value needs to be quoted
$ mvn clean install spring-boot:repackage -Dmaven.test.skip=true
$ java -jar your_application.jar --spring.config.location="D:\config\aliyun-oss-java\application.yml"

Lambda

Optional

Checking value presence and conditional action

SysUser user = new SysUser();
SysDept dept = new SysDept();
dept.setDeptName("development");
user.setDept(dept);
Optional<String> optional = Optional.ofNullable(user)
        .map(SysUser::getDept)
        .map(SysDept::getDeptName);
optional.ifPresent(System.out::println);
// alternative ways to extract the value:
String deptName = optional.orElse("default");
String deptName2 = optional.orElseGet(() -> "to get default");
optional.orElseThrow(() -> new RuntimeException("to throw exception"));

Stream

Create Streams

  1. From a collection

     list.stream()

  2. From specified values, using Stream.of(T... t)

     Stream<Integer> stream = Stream.of(1, 2, 3, 4, 5);

  3. From an array

     Arrays.stream(arr);
     Stream.of(arr);

  4. An empty stream, using Stream.empty(). The empty() method is used to avoid returning null for streams with no elements.

     Stream<String> streamOfArray = Stream.empty();

  5. Using Stream.builder()

     Stream.Builder<String> builder = Stream.builder();
     Stream<String> stream = builder.add("a").add("b").add("c").build();

  6. An infinite stream, using Stream.iterate()

     Stream.iterate(seedValue, (Integer n) -> n * n)
             .limit(limitTerms)
             .forEach(System.out::println);

  7. An infinite stream, using Stream.generate()

     Stream.generate(Math::random)
             .limit(limitTerms)
             .forEach(System.out::println);

  8. From an Iterator

     Iterator<String> iterator = Arrays.asList("a", "b", "c").iterator();
     Spliterator<T> spitr = Spliterators.spliteratorUnknownSize(iterator, Spliterator.NONNULL);
     Stream<T> stream = StreamSupport.stream(spitr, false);

  9. From an Iterable

     Iterable<String> iterable = Arrays.asList("a", "b", "c");
     Stream<T> stream = StreamSupport.stream(iterable.spliterator(), false);
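Why Stream.empty() matters in practice: a method that returns an empty stream instead of null keeps caller pipelines safe to chain. `findNames` is a hypothetical finder used only for illustration:

```java
import java.util.stream.Stream;

public class EmptyStreamDemo {
    // Return an empty stream, never null, when there is nothing to stream.
    static Stream<String> findNames(boolean hasData) {
        return hasData ? Stream.of("a", "b") : Stream.empty();
    }

    public static void main(String[] args) {
        // Safe to chain directly: no null check needed before count().
        System.out.println(findNames(false).count());
    }
}
```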

Collections

Construction

Collection Element Type Conversion

String to Object

by assignment

// only array
String[] stringArray = new String[10];
Object[] objectArray = stringArray;

by constructor

// list
List<String> stringList = new ArrayList<>();
List<Object> objectList = new ArrayList<>(stringList);

// set
Set<String> stringSet = new HashSet<>();
Set<Object> objectSet = new HashSet<>(stringSet);

by for loop

// multiStringValueMap to multiObjectValueMap
Map<String, List<String>> multiStringValueMap = new HashMap<>();
multiStringValueMap.put("key1", Arrays.asList("taogen", "taogen2"));
multiStringValueMap.put(null, Arrays.asList(null, null, null));
multiStringValueMap.put("testNullValue", null);
Map<String, List<Object>> multiObjectValueMap = new HashMap<>();
multiStringValueMap.forEach((key, value) -> {
    List<Object> objectList = null;
    if (value != null) {
        objectList = value.stream()
                .collect(Collectors.toList());
    }
    multiObjectValueMap.put(key, objectList);
});
System.out.println(multiObjectValueMap);

Object to String

by Java Stream

List<Object> objectList = new ArrayList<>();
List<String> stringList = objectList.stream()
        .map(object -> Objects.toString(object, null))
        .collect(Collectors.toList());
for (String s : stringList) {
    System.out.println(s + ", isNull: " + Objects.isNull(s));
}

by for loop

List<Object> objectList = new ArrayList<>();
List<String> stringList = new ArrayList<>(objectList.size());
for (Object object : objectList) {
    stringList.add(Objects.toString(object, null));
}
for (String s : stringList) {
    System.out.println(s + ", isNull: " + Objects.isNull(s));
}

// multiObjectValueMap to multiStringValueMap
Map<String, List<Object>> multiObjectValueMap = new HashMap<>();
multiObjectValueMap.put("key1", Arrays.asList(1, 2, 3));
multiObjectValueMap.put(null, Arrays.asList(null, null, null));
multiObjectValueMap.put("testNullValue", null);
multiObjectValueMap.put("key2", Arrays.asList("taogen", "taogen2"));
Map<String, List<String>> multiStringValueMap = new HashMap<>();
multiObjectValueMap.forEach((key, value) -> {
    List<String> convertedList = null;
    if (value != null) {
        convertedList = value.stream()
                .map(object -> Objects.toString(object, null))
                .collect(Collectors.toList());
    }
    multiStringValueMap.put(key, convertedList);
});
System.out.println(multiStringValueMap);

Warning: to convert an object value to a string value, use Objects.toString(object, null) or object != null ? object.toString() : null, but not String.valueOf() or a bare toString() call. String.valueOf(null) returns the string "null" rather than null, and calling toString() on a null object throws a NullPointerException.
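A minimal demonstration of the difference:

```java
import java.util.Objects;

public class NullToStringDemo {
    public static void main(String[] args) {
        Object o = null;
        // String.valueOf on a null reference yields the 4-character string "null".
        System.out.println("null".equals(String.valueOf(o)));  // true
        // Objects.toString with a null default propagates the null instead.
        System.out.println(Objects.toString(o, null) == null); // true
        // o.toString() here would throw NullPointerException.
    }
}
```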

Collection Conversion

To Array

Object list to array

// use list.toArray()
User[] users = userList.toArray(new User[0]);
Integer[] integers = integerList.toArray(new Integer[0]);
String[] strings = stringList.toArray(new String[0]);
// use Java 8 stream
User[] users = userList.stream().toArray(User[]::new);
Integer[] integers = integerList.stream().toArray(Integer[]::new);
String[] strings = stringList.stream().toArray(String[]::new);
// Java 11
String[] strings = stringList.toArray(String[]::new);
// use for loop
int[] array = new int[list.size()];
for(int i = 0; i < list.size(); i++) array[i] = list.get(i);

To ArrayList

Convert Set to ArrayList

Set<String> set = new HashSet<>();
// via constructor
ArrayList<String> list = new ArrayList<>(set);
// or via addAll()
ArrayList<String> list2 = new ArrayList<>();
list2.addAll(set);
// Java 8
List<String> list3 = set.stream().collect(Collectors.toList());
// Java 10 (returns an immutable list)
var list4 = List.copyOf(set);

Convert Wrapper Type Array to ArrayList

String[] array = new String[10];
Integer[] array2 = new Integer[10];
ArrayList<String> list = new ArrayList(Arrays.asList(array));

Convert Primitive Array to ArrayList

// use Arrays.stream()
int[] input = new int[]{1,2,3,4};
List<Integer> output = Arrays.stream(input).boxed().collect(Collectors.toList());
// use IntStream.of()
int[] input = new int[]{1,2,3,4};
List<Integer> output = IntStream.of(input).boxed().collect(Collectors.toList());

To Set

Convert ArrayList to Set

ArrayList<String> list = new ArrayList<>();
// via constructor
Set<String> set = new HashSet<>(list);
// or via addAll()
Set<String> set2 = new HashSet<>();
set2.addAll(list);
// Java 8
Set<String> set = list.stream().collect(Collectors.toSet());
// Java 10
var set = Set.copyOf(list);

Convert Wrapper Type Array to Set

String[] array = new String[10];
Set<String> set = new HashSet(Arrays.asList(array));

Convert other set classes

// to LinkedHashSet
list.stream().collect(Collectors.toCollection(LinkedHashSet::new))

To Map

Convert Object Fields of List to Map

List<SysUser> sysUserList = getUserList();
Map<Long, String> idToName = sysUserList.stream()
        .collect(Collectors.toMap(SysUser::getId, SysUser::getName));
// or
Map<Long, String> idToName2 = sysUserList.stream()
        .collect(Collectors.toMap(item -> item.getId(), item -> item.getName()));

List<IdName> list = getIdNameList();
Map<Long, IdName> idToObjMap = list.stream()
        .collect(Collectors.toMap(IdName::getId, Function.identity()));
// or
Map<Long, IdName> idToObjMap2 = list.stream()
        .collect(Collectors.toMap(item -> item.getId(), item -> item));

Convert to other map classes

// to TreeMap
List<IdName> idNameList = new ArrayList<>();
Map<Integer,String> idToNameMap = idNameList.stream().collect(Collectors.toMap(IdName::getId, IdName::getName, (o1, o2) -> o1, TreeMap::new));
// to ConcurrentMap
List<IdName> idNameList = new ArrayList<>();
idNameList.stream().collect(Collectors.toConcurrentMap(IdName::getId, IdName::getName));

Merge

Merge byte[] array

use System.arraycopy

byte[] one = getBytesForOne();
byte[] two = getBytesForTwo();
byte[] combined = new byte[one.length + two.length];

System.arraycopy(one, 0, combined, 0, one.length);
System.arraycopy(two, 0, combined, one.length, two.length);

Use List

// Note: this approach requires boxed Byte[] arrays. Arrays.asList(byte[])
// would produce a single-element List<byte[]>, not a List<Byte>.
Byte[] one = getBoxedBytesForOne();
Byte[] two = getBoxedBytesForTwo();

List<Byte> list = new ArrayList<>(Arrays.asList(one));
list.addAll(Arrays.asList(two));

Byte[] combined = list.toArray(new Byte[0]);

Use ByteBuffer

byte[] one = getBytesForOne();
byte[] two = getBytesForTwo();
byte[] three = getBytesForThree();
byte[] allByteArray = new byte[one.length + two.length + three.length];

ByteBuffer buff = ByteBuffer.wrap(allByteArray);
buff.put(one);
buff.put(two);
buff.put(three);

byte[] combined = buff.array();

Convert List to Tree

convert list to tree with parentId

The data

[{
    id: 1,
    name: "a",
    parentId: 0
}, {
    id: 10,
    name: "b",
    parentId: 1
}, {
    id: 2,
    name: "c",
    parentId: 0
}]

The process of conversion

1. original list
a
b
c
2. link children and mark first level nodes
*a -> b
b
*c
3. get first level nodes
a -> b
c

Implementation

@Data
public class IdName {
    private String id;
    private String name;
    private String parentId;
    private List<IdName> children;

    public IdName(String id, String name, String parentId) {
        this.id = id;
        this.name = name;
        this.parentId = parentId;
    }

    private void putChildren(IdName idName) {
        if (this.children == null) {
            this.children = new ArrayList<>();
        }
        this.children.add(idName);
    }

    public static List<IdName> convertListToTree(List<IdName> list) {
        if (list == null || list.isEmpty()) {
            return Collections.emptyList();
        }
        Map<String, IdName> map = list.stream()
                .collect(Collectors.toMap(IdName::getId, Function.identity()));
        List<IdName> firstLevelNodeList = new ArrayList<>();
        for (IdName idName : list) {
            IdName parent = map.get(idName.getParentId());
            if (parent != null) {
                parent.putChildren(idName);
            } else {
                firstLevelNodeList.add(idName);
            }
        }
        return firstLevelNodeList;
    }
}

public static void main(String[] args) {
    List<IdName> idNames = new ArrayList<>();
    idNames.add(new IdName("1", "Jack", "0"));
    idNames.add(new IdName("2", "Tom", "0"));
    idNames.add(new IdName("3", "Jerry", "1"));
    System.out.println(IdName.convertListToTree(idNames));
}

Multi-level data stored in multiple tables

public List<AreaVo> getAreaVoList() {
    List<Province> provinces = iProvinceService.list(
            new LambdaQueryWrapper<Province>()
                    .select(Province::getId, Province::getName, Province::getCode));
    List<City> cities = iCityService.list(
            new LambdaQueryWrapper<City>()
                    .select(City::getId, City::getName, City::getCode, City::getProvinceCode));
    List<County> counties = iCountyService.list(
            new LambdaQueryWrapper<County>()
                    .select(County::getId, County::getName, County::getCode, County::getCityCode));
    List<AreaVo> resultList = new ArrayList<>();
    resultList.addAll(AreaVo.fromProvince(provinces));
    resultList.addAll(AreaVo.fromCity(cities));
    resultList.addAll(AreaVo.fromCounty(counties));
    return AreaVo.convertListToTree(resultList);
}

public class AreaVo {
    private String label;
    private String value;
    private String parentId;
    private List<AreaVo> children;

    public static List<AreaVo> fromProvince(List<Province> provinces) {
        if (provinces == null || provinces.isEmpty()) {
            return Collections.emptyList();
        }
        return provinces.stream()
                .map(item -> new AreaVo(item.getName(), item.getCode(), "0"))
                .sorted(Comparator.comparing(AreaVo::getValue))
                .collect(Collectors.toList());
    }

    public static List<AreaVo> convertListToTree(List<AreaVo> list) {}
}

Find Path of Node In a Tree

private void setAreaForList(List<User> records) {
    List<String> areaCodeList = records.stream()
            .map(User::getAreaId)
            .filter(Objects::nonNull)
            .map(String::valueOf)
            .collect(Collectors.toList());

    List<AreaItem> areaItemList = iAreaService.findSelfAndAncestors(areaCodeList);
    if (areaItemList == null || areaItemList.isEmpty()) {
        return;
    }
    Map<String, AreaItem> areaItemMap = areaItemList.stream()
            .collect(Collectors.toMap(AreaItem::getCode, Function.identity()));
    records.stream()
            .filter(entity -> entity.getAreaId() != null)
            .forEach(entity -> {
                List<AreaItem> areaPath = new ArrayList<>();
                String areaCode = entity.getAreaId().toString();
                String tempCode = areaCode;
                AreaItem areaItem = null;
                // walk up from the node to the root to build the path
                while ((areaItem = areaItemMap.get(tempCode)) != null) {
                    areaPath.add(0, areaItem);
                    tempCode = areaItem.getParentCode();
                }
                if (CollectionUtils.isEmpty(areaPath)) {
                    return;
                }
                int totalLevel = 2;
                if (areaPath.size() < totalLevel) {
                    int supplementSize = totalLevel - areaPath.size();
                    for (int i = 0; i < supplementSize; i++) {
                        areaPath.add(null);
                    }
                } else {
                    areaPath = areaPath.subList(0, totalLevel);
                }
                entity.setAreaPath(areaPath);
                entity.setAreaArray(areaPath.stream()
                        .map(areaPathItem -> areaPathItem == null ? null : areaPathItem.getCode())
                        .collect(Collectors.toList()));
            });
}

Multi-level data stored in multiple tables

public List<AreaItem> findSelfAndAncestors(List<String> areaCodeList) {
    if (CollectionUtils.isEmpty(areaCodeList)) {
        return Collections.emptyList();
    }
    List<String> tempCodeList = areaCodeList;
    List<AreaItem> resultAreaList = new ArrayList<>();
    List<AreaCounty> countyList = iAreaCountyService.list(
            new LambdaQueryWrapper<AreaCounty>()
                    .select(AreaCounty::getCode, AreaCounty::getName, AreaCounty::getCityCode)
                    .in(AreaCounty::getCode, tempCodeList));
    if (!CollectionUtils.isEmpty(countyList)) {
        AreaItem.fromAreaCounty(countyList)
                .forEach(areaItem -> {
                    resultAreaList.add(areaItem);
                    areaItem.setLevel(3);
                });
        tempCodeList = areaCodeList;
        tempCodeList.addAll(countyList.stream()
                .map(AreaCounty::getCityCode)
                .collect(Collectors.toList()));
    }
    List<AreaCity> cityList = iAreaCityService.list(
            new LambdaQueryWrapper<AreaCity>()
                    .select(AreaCity::getCode, AreaCity::getName, AreaCity::getProvinceCode)
                    .in(AreaCity::getCode, tempCodeList));
    if (!CollectionUtils.isEmpty(cityList)) {
        AreaItem.fromAreaCity(cityList)
                .forEach(areaItem -> {
                    resultAreaList.add(areaItem);
                    areaItem.setLevel(2);
                });
        tempCodeList = areaCodeList;
        tempCodeList.addAll(cityList.stream()
                .map(AreaCity::getProvinceCode)
                .collect(Collectors.toList()));
    }
    List<AreaProvince> provinceList = iAreaProvinceService.list(
            new LambdaQueryWrapper<AreaProvince>()
                    .select(AreaProvince::getCode, AreaProvince::getName)
                    .in(AreaProvince::getCode, tempCodeList));
    if (!CollectionUtils.isEmpty(provinceList)) {
        AreaItem.fromAreaProvince(provinceList)
                .forEach(areaItem -> {
                    resultAreaList.add(areaItem);
                    areaItem.setLevel(1);
                });
    }
    return resultAreaList;
}

Find Descendant Nodes in a Tree

Find self and descendant list

private List<User> findSelfAndDescendants(Integer parentId) {
    List<User> resultList = new ArrayList<>();
    // fetch self first, then walk down level by level via parent ids
    // (looping on getListByIds would re-fetch the same ids forever)
    resultList.addAll(getListByIds(Collections.singletonList(parentId)));
    List<Integer> tempIds = new ArrayList<>();
    tempIds.add(parentId);
    List<User> descendants = null;
    while (!CollectionUtils.isEmpty(descendants = getListByParentIds(tempIds))) {
        resultList.addAll(descendants);
        tempIds = descendants.stream().map(User::getId).collect(Collectors.toList());
    }
    return resultList;
}

public List<User> getListByIds(List<Integer> ids) {}
public List<User> getListByParentIds(List<Integer> parentIds) {}

Find descendant list

private List<User> findDescendants(Integer parentId) {
    List<User> resultList = new ArrayList<>();
    List<Integer> tempIds = new ArrayList<>();
    tempIds.add(parentId);
    List<User> descendants = null;
    while (!CollectionUtils.isEmpty(descendants = getListByParentIds(tempIds))) {
        resultList.addAll(descendants);
        tempIds = descendants.stream().map(User::getId).collect(Collectors.toList());
    }
    return resultList;
}

public List<User> getListByParentIds(List<Integer> parentIds) {}

Find self and descendant ids

private List<Integer> findSelfAndDescendantIds(Integer parentId) {
    List<Integer> resultIds = new ArrayList<>();
    resultIds.add(parentId);
    List<Integer> tempIds = new ArrayList<>();
    tempIds.add(parentId);
    List<Integer> childrenIds = null;
    while (!CollectionUtils.isEmpty(childrenIds = getChildrenIdsByParentIds(tempIds))) {
        resultIds.addAll(childrenIds);
        tempIds.clear();
        tempIds.addAll(childrenIds);
    }
    return resultIds;
}

public List<Integer> getChildrenIdsByParentIds(List<Integer> parentIds) {}

public Set<Integer> getDescendantIds(Integer deptId) {
    List<Object> descendantIds = new ArrayList<>();
    List<Object> childIds = this.baseMapper.selectObjs(new LambdaQueryWrapper<SysDept>()
            .select(SysDept::getDeptId)
            .eq(SysDept::getParentId, deptId));
    while (!CollectionUtils.isEmpty(childIds)) {
        descendantIds.addAll(childIds);
        childIds = this.baseMapper.selectObjs(new LambdaQueryWrapper<SysDept>()
                .select(SysDept::getDeptId)
                .in(SysDept::getParentId, childIds));
    }
    // filter nulls before toString(); otherwise nulls become the string "null"
    return descendantIds.stream()
            .filter(Objects::nonNull)
            .map(Objects::toString)
            .map(Integer::valueOf)
            .collect(Collectors.toSet());
}

Operation

Traversal

Array Traversal

  • for (int i = 0; i < array.length; i++) {...}
  • Arrays.stream(array).xxx

List Traversal

  • for loop: for (int i = 0; i < list.size(); i++) {...}
  • enhanced for loop: for (Object o : list) {...}
  • iterator or listIterator
  • list.forEach(consumer...)
  • list.stream().xxx

Handling List piece by piece

public static void main(String[] args) {
    List<Integer> list = Arrays.asList(1, 2, 3, 4, 5);
    int listSize = list.size();
    int handlingSize = 3;
    int startIndex = 0;
    int endIndex = handlingSize;
    while (startIndex < listSize) {
        if (endIndex > listSize) {
            endIndex = listSize;
        }
        handleList(list, startIndex, endIndex);
        startIndex = endIndex;
        endIndex = startIndex + handlingSize;
    }
}

private static void handleList(List<Integer> list, int start, int end) {
    for (int i = start; i < end; i++) {
        System.out.println(list.get(i));
    }
}
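An alternative sketch using List.subList(), which avoids the manual index bookkeeping; each piece is a view of the original list, not a copy:

```java
import java.util.Arrays;
import java.util.List;

public class ChunkDemo {
    public static void main(String[] args) {
        List<Integer> list = Arrays.asList(1, 2, 3, 4, 5);
        int chunk = 3;
        for (int start = 0; start < list.size(); start += chunk) {
            // Math.min clamps the final, possibly shorter, piece.
            List<Integer> piece = list.subList(start, Math.min(start + chunk, list.size()));
            System.out.println(piece);
        }
    }
}
```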

Map Traversal

  • for (String key : map.keySet()) {...}

  • for (Map.Entry entry : map.entrySet()) {...}

  • Iterator

    Iterator<Map.Entry<String, Integer>> iterator = map.entrySet().iterator();
    while (iterator.hasNext()) {
        Map.Entry<String, Integer> entry = iterator.next();
        System.out.println(entry.getKey() + ":" + entry.getValue());
    }
  • map.forEach(biConsumer...)

    map.forEach((k, v) -> System.out.println(k + ":" + v));
  • map.entrySet().stream()

forEach() vs stream()

  • If you only need to consume the elements, forEach() is the better choice; if you need to transform, filter, or collect them into a result, use stream().
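A minimal side-by-side sketch of the two cases:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class ForEachVsStream {
    public static void main(String[] args) {
        List<Integer> list = Arrays.asList(1, 2, 3);
        // forEach: just consuming each element
        list.forEach(System.out::println);
        // stream(): transforming and collecting into a new result
        List<Integer> doubled = list.stream()
                .map(n -> n * 2)
                .collect(Collectors.toList());
        System.out.println(doubled); // [2, 4, 6]
    }
}
```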

Array

int[] a = new int[]{1, 2, 3};
System.out.println(Arrays.toString(a));

List, Set, Map

System.out.println(list);
System.out.println(set);
System.out.println(map);

Join

Use stream

List<String> names = Arrays.asList("Tom", "Jack", "Lucy");
System.out.println(names.stream().map(Object::toString).collect(Collectors.joining(",")));
List<Integer> ids = Arrays.asList(1, 2, 3);
System.out.println(ids.stream().map(Object::toString).collect(Collectors.joining(",")));

Use String.join()

List<String> names = Arrays.asList("Tom", "Jack", "Lucy");
System.out.println(String.join(",", names));

Remove elements from collection

List<Book> books = new ArrayList<>();
books.add(new Book(new ISBN("0-201-63361-2")));
books.add(new Book(new ISBN("0-201-63361-3")));
books.add(new Book(new ISBN("0-201-63361-4")));

1. Collect the matching objects, then removeAll()

  • The extra space is the temporary list of matches; removeAll() then compacts the remaining elements.
  • T(n) = O(n), S(n) = O(n)

ISBN isbn = new ISBN("0-201-63361-2");
List<Book> found = new ArrayList<>();
for (Book book : books) {
    if (book.getIsbn().equals(isbn)) {
        found.add(book);
    }
}
books.removeAll(found);

1.2 Collect indexes and remove one by one

  • T(n) = O(n), S(n) = O(m * n)

1.3 Collect the matching objects and remove one by one

  • T(n) = O(n), S(n) = O(m * n)

2. Remove in a loop through an iterator

  • The iterator traverses the collection with an internal cursor; iterator.remove() keeps that cursor consistent, which is why it is safe where removing inside a plain for-each loop would throw ConcurrentModificationException. For an ArrayList, each removal shifts the following elements.
  • T(n) = O(n), S(n) = O(m * n)

ListIterator<Book> iter = books.listIterator();
while (iter.hasNext()) {
    if (iter.next().getIsbn().equals(isbn)) {
        iter.remove();
    }
}

3. removeIf() (JDK 8)

  • Removes all matching elements in a single pass, shifting the surviving elements toward the front and clearing the now-unused tail slots.
  • T(n) = O(n), S(n) = O(1)

ISBN other = new ISBN("0-201-63361-2");
books.removeIf(b -> b.getIsbn().equals(other));

4. filter() of the Stream API (JDK 8)

  • Creates a new collection; the source list itself is not modified. Note the negated predicate: filter() keeps the elements for which the predicate is true. "A stream does not store data and, in that sense, is not a data structure. It also never modifies the underlying data source."
  • T(n) = O(n), S(n) = O(n)

ISBN other = new ISBN("0-201-63361-2");
List<Book> filtered = books.stream()
        .filter(b -> !b.getIsbn().equals(other))
        .collect(Collectors.toList());

Recommended order: removeIf() > stream().filter() or parallelStream() > collect objects and removeAll() > iterator removal, or collecting indexes/objects and removing one by one.

Deduplication

Deduplicate values

  1. Deduplicate values by stream distinct()
List<Integer> list = Arrays.asList(1, 2, 3, 2, 3, 4);
list = list.stream().distinct().collect(Collectors.toList());
System.out.println(list);
  2. Deduplicate values by creating a set
List<Integer> list = new ArrayList<>(Arrays.asList(1, 2, 3, 2, 3, 4));
Set<Integer> set = new LinkedHashSet<>(list);
list.clear(); // note: calling clear() on an Arrays.asList list would throw UnsupportedOperationException
list.addAll(set);
System.out.println(list);

Or, more concisely:

List<Integer> list = Arrays.asList(1, 2, 3, 2, 3, 4);
list = new ArrayList<>(new LinkedHashSet<>(list));
System.out.println(list);

Deduplicate objects by property

  1. Deduplicate by stream
    // Map.values() returns a Collection, so wrap it in a new ArrayList
    List<User> deduped = new ArrayList<>(list.stream()
            .collect(Collectors.toMap(User::getName, Function.identity(),
                    (p, q) -> p, LinkedHashMap::new))
            .values());
  2. Deduplicate objects by removing them with an iterator
    List<User> userList = buildUserList();
    System.out.println("Before: " + userList);
    Iterator<User> i = userList.iterator();
    while (i.hasNext()) {
        User user = i.next();
        if (user.getUserName().contains("test")) {
            i.remove();
        }
    }
    System.out.println("After: " + userList);
  3. Deduplicate objects by collecting the duplicates and then removing them all
List<User> userList = buildUserList();
System.out.println("Before: " + userList);
List<User> toRemoveList = new ArrayList<>();
for (User user : userList) {
    if (user.getUserName().contains("test")) {
        toRemoveList.add(user);
    }
}
userList.removeAll(toRemoveList);
System.out.println("After: " + userList);
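The toMap approach in item 1 can be made runnable in isolation. This is a minimal sketch; the `User` class here is a hypothetical stand-in carrying only the fields the snippet needs:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

public class DedupByProperty {
    // Minimal stand-in for the User class assumed by the snippets above.
    static class User {
        final String name;
        final int age;
        User(String name, int age) { this.name = name; this.age = age; }
        String getName() { return name; }
        @Override public String toString() { return name + ":" + age; }
    }

    static List<User> dedupByName(List<User> list) {
        // toMap keeps the first User seen for each name ((p, q) -> p);
        // LinkedHashMap preserves encounter order.
        return new ArrayList<>(list.stream()
                .collect(Collectors.toMap(User::getName, Function.identity(),
                        (p, q) -> p, LinkedHashMap::new))
                .values());
    }

    public static void main(String[] args) {
        List<User> users = Arrays.asList(
                new User("a", 1), new User("b", 2), new User("a", 3));
        System.out.println(dedupByName(users)); // [a:1, b:2]
    }
}
```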

Retain only one of each run of consecutive repeated elements

List<IdName> list = new ArrayList<>();
list.add(new IdName(1, "a"));
list.add(new IdName(2, "a"));
list.add(new IdName(3, "a"));
list.add(new IdName(4, "b"));
list.add(new IdName(5, "b"));
list.add(new IdName(6, "c"));
List<Integer> indexToRemove = new ArrayList<>();
for (int i = 0; i < list.size(); i++) {
    if (i < list.size() - 1 && list.get(i).getName().equals(list.get(i + 1).getName())) {
        indexToRemove.add(i);
    }
}
// Remove from the highest index down so removals don't shift pending indexes.
for (int i = indexToRemove.size() - 1; i >= 0; i--) {
    list.remove(indexToRemove.get(i).intValue());
}
System.out.println(list);

Output

[IdName(id=3, name=a), IdName(id=5, name=b), IdName(id=6, name=c)]

Ordered Collections

  1. Sorted collection classes
  • TreeSet
  • TreeMap
  2. Insertion-order collection classes
  • LinkedList
  • LinkedHashSet
  • LinkedHashMap
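The difference between the two groups can be shown with a quick sketch comparing TreeMap (sorted keys) and LinkedHashMap (insertion order):

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.TreeMap;

public class OrderedCollections {
    public static void main(String[] args) {
        // TreeMap keeps its keys sorted; LinkedHashMap keeps insertion order.
        Map<String, Integer> sorted = new TreeMap<>();
        Map<String, Integer> insertion = new LinkedHashMap<>();
        for (String key : Arrays.asList("b", "a", "c")) {
            sorted.put(key, 1);
            insertion.put(key, 1);
        }
        System.out.println(sorted.keySet());    // [a, b, c]
        System.out.println(insertion.keySet()); // [b, a, c]
    }
}
```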

Sorting

  1. Using Collections.sort(list) to sort Comparable elements

It uses merge sort (the adaptive TimSort variant). T(n) = O(n log n)

  • sort(List<T> list)
  • sort(List<T> list, Comparator c)

Comparators

  • Comparator.naturalOrder()
  • Comparator.comparing(Function f)
  • Comparator.comparingInt(Function f)
  • Collections.reverseOrder()
  • Collections.reverseOrder(Comparator c)

(o1, o2) -> o1.getType().compareTo(o2.getType()) is equivalent to Comparator.comparing(User::getType)

Multiple fields with comparator

Comparator<Employee> compareByFirstName = Comparator.comparing(Employee::getFirstName);
Comparator<Employee> compareByLastName = Comparator.comparing(Employee::getLastName);
Comparator<Employee> compareByFullName = compareByFirstName.thenComparing(compareByLastName);

Comparator<Employee> compareByName = Comparator
        .comparing(Employee::getFirstName)
        .thenComparing(Employee::getLastName);

Comparator<Employee> c = (o1, o2) -> {
    int i = o1.getFirstName().compareTo(o2.getFirstName());
    if (i != 0) {
        return i;
    }
    return o1.getLastName().compareTo(o2.getLastName());
};

Comparators avoid NullPointerException

Comparator<Employee> compareByName = Comparator
        .comparing(Employee::getFirstName, Comparator.nullsLast(Comparator.naturalOrder()))
        .thenComparing(Employee::getLastName, Comparator.nullsLast(Comparator.naturalOrder()));

Comparator<IdName> c = (o1, o2) -> {
    // nullsLast (two nulls compare equal, keeping the comparator consistent)
    if (o1.getId() == null) {
        return o2.getId() == null ? 0 : 1;
    } else if (o2.getId() == null) {
        return -1;
    }
    int i = o1.getId().compareTo(o2.getId());
    if (i != 0) {
        return i;
    }
    // nullsLast
    if (o1.getName() == null) {
        return o2.getName() == null ? 0 : 1;
    } else if (o2.getName() == null) {
        return -1;
    }
    return o1.getName().compareTo(o2.getName());
};
  2. Stream.sorted()
List<Integer> list = new ArrayList<>(Arrays.asList(1, 3, 2, 6, 5, 4, 9, 7));
list.stream().sorted().forEachOrdered(System.out::print);
list.stream().sorted((o1, o2) -> o1 - o2).forEachOrdered(System.out::print);
list.stream().sorted(Comparator.comparingInt(o -> o)).forEachOrdered(System.out::print);

Summary: if you don’t need the collection to stay sorted at all times, just use Collections.sort() to sort it when needed.

Compare object list using Collections.sort(objectList)

public class Animal implements Comparable<Animal> {
    private String name;

    @Override
    public int compareTo(Animal o) {
        return this.name.compareTo(o.name);
    }
}
List<Animal> list = new ArrayList<>();
Collections.sort(list);
Collections.sort(list, Collections.reverseOrder());

Reversion

  1. Using void Collections.reverse(list)
List<String> list = Arrays.asList("a", "b", "c");
Collections.reverse(list);
System.out.println(list);
  2. Using for loop
List<String> list = Arrays.asList("a", "b", "c");
for (int i = 0; i < list.size() / 2; i++) {
    String temp = list.get(i);
    list.set(i, list.get(list.size() - i - 1));
    list.set(list.size() - i - 1, temp);
}
System.out.println(list);
  3. Using recursion
private static void reverse(List<String> list) {
    if (list == null || list.size() <= 1) {
        return;
    }
    String value = list.remove(0);
    reverse(list);
    list.add(value);
}

public static void main(String[] args) {
    List<String> list = new ArrayList<>(Arrays.asList("a", "b", "c", "d"));
    reverse(list);
    System.out.println(list);
}

Computation

Reduction

for loop

List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5);
int sum = 0;
for (int x : numbers) {
    sum += x;
}

stream

// in the first call, x is the identity value 0 and y is the first element 1
List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5);
int sum = numbers.stream().reduce(0, (x, y) -> x + y);

int sum2 = numbers.stream().reduce(0, Integer::sum);

Integer sum3 = numbers.stream().collect(Collectors.summingInt(Integer::intValue));

parallel stream (operations can run safely in parallel with almost no modification)

int sum = numbers.parallelStream().reduce(0, Integer::sum);
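The "almost no modification" claim rests on Integer::sum being associative and 0 being its identity; under those conditions the partial sums computed on different threads combine to the same result as the sequential run. A quick check:

```java
import java.util.Arrays;
import java.util.List;

public class ParallelSum {
    public static void main(String[] args) {
        List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5);
        // Integer::sum is associative and 0 is its identity, which is what
        // makes the parallel reduction safe.
        int sequential = numbers.stream().reduce(0, Integer::sum);
        int parallel = numbers.parallelStream().reduce(0, Integer::sum);
        System.out.println(sequential + " " + parallel); // 15 15
    }
}
```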

Group

Group

  • groupingBy()
  • partitioningBy()
// Group employees by department
Map<Department, List<Employee>> byDept = employees.stream()
        .collect(Collectors.groupingBy(Employee::getDepartment));

// Get each user's roleIds
Map<Integer, Set<Integer>> userToRoleIds = userRoles.stream()
        .collect(Collectors.groupingBy(UserRole::getUserId,
                Collectors.mapping(UserRole::getRoleId, Collectors.toSet())));

// Partition students into passing and failing
Map<Boolean, List<Student>> passingFailing = students.stream()
        .collect(Collectors.partitioningBy(s -> s.getGrade() >= PASS_THRESHOLD));

Aggregation

  • maxBy()
  • minBy()
  • averagingInt()
  • summingInt()
  • counting()
// Compute sum of salaries by department
Map<Department, Integer> totalByDept = employees.stream()
        .collect(Collectors.groupingBy(Employee::getDepartment,
                Collectors.summingInt(Employee::getSalary)));

List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5);
Integer sum = numbers.stream().collect(Collectors.summingInt(Integer::intValue));

Sort by grouped fields

// Keep keys sorted when grouping, using `TreeMap::new` or `() -> new TreeMap<>()`
Map<String, Double> averageAgeByType = userList.stream()
        .collect(Collectors.groupingBy(User::getType,
                TreeMap::new,
                Collectors.averagingInt(User::getAge)));

// Sort the list before grouping and keep insertion order when grouping
Map<String, Double> userAverageAgeMap2 = userList.stream()
        .sorted(Comparator.comparing(User::getType))
        .collect(Collectors.groupingBy(User::getType,
                LinkedHashMap::new,
                Collectors.averagingInt(User::getAge)));

Background

When I executed a SQL script from a dump of a database’s structure and data, I got the error “The user specified as a definer ('xxx'@'%') does not exist”.

Error Info

SQL Error (1449):The user specified as a definer ('xxx'@'%') does not exist

Solutions

This commonly occurs when exporting views/triggers/procedures from one database or server to another as the user that created that object no longer exists.

For example, the following is a trigger create statement:

CREATE
    DEFINER = `not_exist_user`
    TRIGGER your_trigger BEFORE INSERT ON your_table
    FOR EACH ROW SET new.create_time = NOW();

Solution 1: Change the DEFINER

This is easiest to do when initially importing your database objects: remove any DEFINER clauses (DEFINER=some_user) from the dump.

Changing the definer later is a little trickier. You can search for solutions to “How to change the definer for views/triggers/procedures”.
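For a trigger like the example above, the usual route (a sketch, not the only option) is to drop and recreate it with a definer that exists, since MySQL has no ALTER statement for a trigger’s definer:

```sql
-- Recreate the trigger under a definer that exists on this server.
DROP TRIGGER IF EXISTS your_trigger;
CREATE
    DEFINER = CURRENT_USER
    TRIGGER your_trigger BEFORE INSERT ON your_table
    FOR EACH ROW SET new.create_time = NOW();
```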

Solution 2: Create the missing user

If you’ve encountered the following error while using a MySQL database:

The user specified as a definer ('some_user'@'%') does not exist

Then you can solve it as follows:

CREATE USER 'some_user'@'%' IDENTIFIED BY 'complex-password';
GRANT ALL ON *.* TO 'some_user'@'%';
/* Before MySQL 8.0 the two statements could be combined:
   GRANT ALL ON *.* TO 'some_user'@'%' IDENTIFIED BY 'complex-password'; */
FLUSH PRIVILEGES;

Reasons

My exported trigger has a definer user that does not exist on the target server.

When you insert data into the table the trigger is attached to, MySQL raises the error “The user specified as a definer ('xxx'@'%') does not exist”.

References

[1] MySQL error 1449: The user specified as a definer does not exist

Max Upload File Size in Nginx

The default max upload file size in Nginx

The default maximum body size of a client request (in effect, the maximum upload file size) that Nginx allows is 1 MB. So when you try to upload something larger than 1 MB, you get the following error: 413 Request Entity Too Large.

When over the max upload file size

When uploading a file over max size, Nginx returns

  • status code: 413 Request Entity Too Large

  • Content-Type: text/html

  • response body:

    <html>
    <head><title>413 Request Entity Too Large</title></head>
    <body>
    <center><h1>413 Request Entity Too Large</h1></center>
    <hr><center>nginx/1.18.0</center>
    </body>
    </html>

Solutions

Add the following settings to your Nginx configuration file nginx.conf

http {
    client_max_body_size 100M;
    ...
}

Reload the Nginx configurations

$ nginx -s reload

Max Upload File Size in Spring Framework

The default max upload file size in Spring

MultipartProperties’ properties:

  • max-file-size specifies the maximum size permitted for uploaded files. The default is 1MB.
  • max-request-size specifies the maximum size allowed for multipart/form-data requests. The default is 10MB.

When over the max upload file size

The Java web project will throw an IllegalStateException:

 - UT005023: Exception handling request to /file/uploadFile
java.lang.IllegalStateException: io.undertow.server.handlers.form.MultiPartParserDefinition$FileTooLargeException: UT000054: The maximum size 1048576 for an individual file in a multipart request was exceeded
at io.undertow.servlet.spec.HttpServletRequestImpl.parseFormData(HttpServletRequestImpl.java:847)
at io.undertow.servlet.spec.HttpServletRequestImpl.getParameter(HttpServletRequestImpl.java:722)
at org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:85)
...

Solutions

Add the following settings to your Spring Boot configuration file application.yml:

spring:
  servlet:
    multipart:
      # max single file size
      max-file-size: 100MB
      # max request size
      max-request-size: 200MB

Background

When calling a backend API, the status code of the response is 500, but the backend did not throw any exception. The HTTP response message is “Proxy error: Could not proxy request”.

Error Info

Proxy error: Could not proxy request /captchaImage from localhost:8070 to http://10.0.0.74:8090 (ECONNREFUSED).

Solutions

  1. Make sure the config devServer.proxy.target is correct.

vue.config.js

devServer: {
  ...
  proxy: {
    [process.env.VUE_APP_BASE_API]: {
      target: `http://localhost:8090`,
      ...
    }
  },
}
  2. Make sure you can reach the backend server’s anonymous APIs from your PC, e.g. http://localhost:8090/captchaImage.

Reasons

The config devServer.proxy.target was missing the http:// prefix, e.g. target: localhost:8090.

Or the port number of the backend server was not correct.

JUnit 5 Improvements

  • JUnit 5 leverages features from Java 8 or later, such as lambda functions, making tests more powerful and easier to maintain.
  • JUnit 5 has added some very useful new features for describing, organizing, and executing tests. For instance, tests get better display names and can be organized hierarchically.
  • JUnit 5 is organized into multiple libraries, so only the features you need are imported into your project. With build systems such as Maven and Gradle, including the right libraries is easy.
  • JUnit 5 can use more than one extension at a time, which JUnit 4 could not (only one runner could be used at a time). This means you can easily combine the Spring extension with other extensions (such as your own custom extension).

Differences

Imports

  • org.junit.Test => org.junit.jupiter.api.Test
  • Assert => Assertions
    • org.junit.Assert => org.junit.jupiter.api.Assertions

Annotations

  • @Before => @BeforeEach

    • org.junit.Before => org.junit.jupiter.api.BeforeEach
  • @After => @AfterEach

  • @BeforeClass => @BeforeAll

  • @AfterClass => @AfterAll

  • @Ignore => @Disabled

    • org.junit.Ignore => org.junit.jupiter.api.Disabled
  • @Category => @Tag

  • @RunWith, @Rule, @ClassRule => @ExtendWith and @RegisterExtension

    • org.junit.runner.RunWith => org.junit.jupiter.api.extension.ExtendWith
    • @RunWith(SpringRunner.class) => @ExtendWith(SpringExtension.class)
    • org.springframework.test.context.junit4.SpringRunner => org.springframework.test.context.junit.jupiter.SpringExtension

Assertion Methods

JUnit 5 assertions are now in org.junit.jupiter.api.Assertions. Most of the common assertions, such as assertEquals() and assertNotNull(), look the same as before, but there are a few differences:

  • The error message is now the last argument, for example: assertEquals("my message", 1, 2) is now assertEquals(1, 2, "my message").
  • Most assertions now accept a lambda that constructs the error message, which is called only when the assertion fails.
  • assertTimeout() and assertTimeoutPreemptively() have replaced the @Timeout annotation (there is an @Timeout annotation in JUnit 5, but it works differently than in JUnit 4).
  • There are several new assertions, described below.

Note that you can continue to use assertions from JUnit 4 in a JUnit 5 test if you prefer.
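The lazy-message point above can be illustrated without JUnit itself. The assertEquals below is a hypothetical stand-in for JUnit 5’s assertEquals(expected, actual, messageSupplier) overload, showing why the message lambda costs nothing when the assertion passes:

```java
import java.util.function.Supplier;

public class LazyMessageDemo {
    // Hypothetical stand-in for JUnit 5's
    // assertEquals(Object expected, Object actual, Supplier<String> messageSupplier):
    // the supplier is invoked only when the assertion fails.
    static void assertEquals(Object expected, Object actual, Supplier<String> message) {
        if (!expected.equals(actual)) {
            throw new AssertionError(message.get());
        }
    }

    static String expensiveDescription() {
        throw new IllegalStateException("should never be built for a passing assertion");
    }

    public static void main(String[] args) {
        // Passes without ever building the message string.
        assertEquals(1, 1, () -> "details: " + expensiveDescription());
        System.out.println("passed");
    }
}
```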

Assumptions

Executes the supplied Executable, but only if the supplied assumption is valid.

JUnit 4

assumeThat("alwaysPasses", 1, is(1)); // passes
foo(); // will execute
assumeThat("alwaysFails", 0, is(1)); // assumption failure! test halts
int x = 1 / 0; // will never execute

JUnit 5

@Test
void testNothingInParticular() throws Exception {
    Assumptions.assumingThat("DEV".equals(System.getenv("ENV")), () -> {
        assertEquals(...);
    });
}

Extending JUnit

JUnit 4

@RunWith(SpringRunner.class) // SpringRunner is an alias for SpringJUnit4ClassRunner
//@RunWith(SpringJUnit4ClassRunner.class)
public class MyControllerTest {
    // ...
}

JUnit 5

@ExtendWith(SpringExtension.class)
class MyControllerTest {
    // ...
}

Expect Exceptions

JUnit 4

@Test(expected = Exception.class)
public void testThrowsException() throws Exception {
    // ...
}

JUnit 5

@Test
void testThrowsException() throws Exception {
    Assertions.assertThrows(Exception.class, () -> {
        //...
    });
}

Timeout

JUnit 4

@Test(timeout = 10)
public void testFailWithTimeout() throws InterruptedException {
    Thread.sleep(100);
}

JUnit 5

@Test
void testFailWithTimeout() throws InterruptedException {
    Assertions.assertTimeout(Duration.ofMillis(10), () -> Thread.sleep(100));
}

Converting a Test to JUnit 5

To convert an existing JUnit 4 test to JUnit 5, use the following steps, which should work for most tests:

  1. Update imports to remove JUnit 4 and add JUnit 5. For instance, update the package name for the @Test annotation, and update both the package and class name for assertions (from Assert to Assertions). Don’t worry yet if there are compilation errors, because completing the following steps should resolve them.
  2. Globally replace old annotations and class names with new ones. For example, replace all @Before with @BeforeEach and all Assert with Assertions.
  3. Update assertions; any assertions that provide a message need to have the message argument moved to the end (pay special attention when all three arguments are strings!). Also, update timeouts and expected exceptions (see above for examples).
  4. Update assumptions if you are using them.
  5. Replace any instances of @RunWith, @Rule, or @ClassRule with the appropriate @ExtendWith annotations. You may need to find updated documentation online for the extensions you’re using for examples.

New Features

Display Names

You can add the @DisplayName annotation to classes and methods. The name is used when generating reports, which makes it easier to describe the purpose of tests and track down failures, for example:

@DisplayName("Test MyClass")
class MyClassTest {
    @Test
    @DisplayName("Verify MyClass.myMethod returns true")
    void testMyMethod() throws Exception {
        // ...
    }
}

Assertion Methods

JUnit 5 introduced some new assertions, such as the following:

assertIterableEquals() performs a deep verification of two iterables using equals().

void assertIterableEquals(Iterable<?> expected, Iterable<?> actual)

assertLinesMatch() verifies that two lists of strings match; it accepts regular expressions in the expected argument.

void assertLinesMatch(List<String> expectedLines, List<String> actualLines)

assertAll() groups multiple assertions together. Asserts that all supplied executables do not throw exceptions. The added benefit is that all assertions are performed, even if individual assertions fail.

void assertAll(Executable... executables)

assertThrows() and assertDoesNotThrow() have replaced the expected property in the @Test annotation.

<T extends Throwable> T assertThrows(Class<T> expectedType, Executable executable)
void assertDoesNotThrow(Executable executable)

Nested tests

Test suites in JUnit 4 were useful, but nested tests in JUnit 5 are easier to set up and maintain, and they better describe the relationships between test groups.

Parameterized tests

Test parameterization existed in JUnit 4, with built-in libraries such as JUnit4Parameterized or third-party libraries such as JUnitParams. In JUnit 5, parameterized tests are completely built in and adopt some of the best features from JUnit4Parameterized and JUnitParams, for example:

@ParameterizedTest
@ValueSource(strings = {"foo", "bar"})
@NullAndEmptySource
void myParameterizedTest(String arg) {
    underTest.performAction(arg);
}

Conditional test execution

JUnit 5 provides the ExecutionCondition extension API to enable or disable a test or container (test class) conditionally. This is like using @Disabled on a test but it can define custom conditions. There are multiple built-in conditions, such as these:

  • @EnabledOnOs and @DisabledOnOs: Enables or disables a test only on specified operating systems
  • @EnabledOnJre and @DisabledOnJre: Specifies the test should be enabled or disabled for particular versions of Java
  • @EnabledIfSystemProperty: Enables a test based on the value of a JVM system property
  • @EnabledIf: Uses scripted logic to enable a test if scripted conditions are met

Test templates

Test templates are not regular tests; they define a set of steps to perform, which can then be executed elsewhere using a specific invocation context. This means that you can define a test template once, and then build a list of invocation contexts at runtime to run that test with. For details and examples, see the documentation.

Dynamic tests

Dynamic tests are like test templates; the tests to run are generated at runtime. However, while test templates are defined with a specific set of steps and run multiple times, dynamic tests use the same invocation context but can execute different logic. One use for dynamic tests would be to stream a list of abstract objects and perform a separate set of assertions for each based on their concrete types. There are good examples in the documentation.

Spring Boot Test With JUnit

Spring Boot Test With JUnit 4

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <scope>test</scope>
</dependency>
<!-- Starting with Spring Boot 2.4, JUnit 5’s vintage engine has been removed from spring-boot-starter-test. If we still want to write tests using JUnit 4, we need to add the following Maven dependency -->
<dependency>
    <groupId>org.junit.vintage</groupId>
    <artifactId>junit-vintage-engine</artifactId>
    <scope>test</scope>
    <exclusions>
        <exclusion>
            <groupId>org.hamcrest</groupId>
            <artifactId>hamcrest-core</artifactId>
        </exclusion>
    </exclusions>
</dependency>
@RunWith(SpringRunner.class)
@SpringBootTest
public class MyServiceTest {
    @Autowired
    private MyRepository myRepository;

    @org.junit.Test
    public void test() {
    }
}

Spring Boot Test With JUnit 5

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <scope>test</scope>
</dependency>
// Since Spring Boot 2.1, @SpringBootTest is itself meta-annotated with
// @ExtendWith(SpringExtension.class), so the explicit @ExtendWith is optional.
@ExtendWith(SpringExtension.class)
@SpringBootTest
public class MyServiceTest {
    @Autowired
    private MyRepository myRepository;

    @org.junit.jupiter.api.Test
    public void test() {
    }
}

Conclusion

Although you probably won’t need to convert your old JUnit 4 tests to JUnit 5 unless you want to use new JUnit 5 features, there are compelling reasons to switch to JUnit 5.

References

Migrating from JUnit 4 to JUnit 5: Important Differences and Benefits

JUnit 5

JUnit 4

Problem Description

Given two integers dividend and divisor, divide two integers without using multiplication, division, and mod operator.

The integer division should truncate toward zero, which means losing its fractional part. For example, 8.345 would be truncated to 8, and -2.7335 would be truncated to -2.

Return the quotient after dividing dividend by divisor.

Note: Assume we are dealing with an environment that could only store integers within the 32-bit signed integer range: [−2^31, 2^31 − 1]. For this problem, if the quotient is strictly greater than 2^31 - 1, then return 2^31 - 1, and if the quotient is strictly less than -2^31, then return -2^31.

Example 1:

Input: dividend = 10, divisor = 3
Output: 3
Explanation: 10/3 = 3.33333.. which is truncated to 3.

Example 2:

Input: dividend = 7, divisor = -3
Output: -2
Explanation: 7/-3 = -2.33333.. which is truncated to -2.

Constraints:

  • -2^31 <= dividend, divisor <= 2^31 - 1
  • divisor != 0

Related Topics

  • Math
  • Bit Manipulation

Analysis

set quotient = 0
n ∈ N
when divisor * 2^n <= dividend < divisor * 2^(n+1)
    quotient = quotient + 2^n
    dividend = dividend - (divisor * 2^n)
when divisor <= dividend < divisor * 2
    quotient = quotient + 1
    dividend = dividend - divisor
when dividend < divisor
    return quotient
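A minimal sketch of the analysis above, traced with long arithmetic so overflow stays out of the way (the actual solution below sticks to int and handles the overflow corner cases explicitly); it assumes positive inputs:

```java
public class DivideTrace {
    // Repeatedly find the largest n with divisor * 2^n <= dividend,
    // add 2^n to the quotient, and subtract divisor * 2^n from the dividend.
    static long divide(long dividend, long divisor) {
        long quotient = 0;
        while (dividend >= divisor) {
            long n = 0;
            while (divisor << (n + 1) <= dividend) {
                n++;
            }
            quotient += 1L << n;      // quotient = quotient + 2^n
            dividend -= divisor << n; // dividend = dividend - divisor * 2^n
        }
        return quotient;
    }

    public static void main(String[] args) {
        System.out.println(divide(10, 3)); // 3
        System.out.println(divide(7, 3));  // 2
    }
}
```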

Solution

public int divide(int dividend, int divisor) {
    // Corner case: -2^31 / -1 would be 2^31, which overflows int, so clamp it.
    if (dividend == Integer.MIN_VALUE && divisor == -1) return Integer.MAX_VALUE;

    // Logical XOR: the result is negative only if exactly one operand is negative.
    boolean negative = dividend < 0 ^ divisor < 0;

    dividend = Math.abs(dividend);
    divisor = Math.abs(divisor);
    int quotient = 0, subQuot = 0;

    while (dividend - divisor >= 0) {
        // Find the largest subQuot such that divisor * 2^(subQuot+1) still fits in dividend.
        for (subQuot = 0; dividend - (divisor << subQuot << 1) >= 0; subQuot++);
        quotient += 1 << subQuot;       // add 2^subQuot to the quotient
        dividend -= divisor << subQuot; // subtract divisor * 2^subQuot and start over with the remainder
    }
    return negative ? -quotient : quotient;
}

TODO

Demonstrate that the loop condition dividend - (divisor << (subQuot + 1)) >= 0 is always evaluated correctly (i.e. that the left shift never overflows into a wrong result)?

References

[1] Divide Two Integers - Java | 0ms | 100% faster | Obeys all conditions
