From 04574e20f337fc12d9a6ddfcc332cadfc78734b5 Mon Sep 17 00:00:00 2001 From: nicksxs Date: Sun, 27 Aug 2023 10:27:38 +0800 Subject: [PATCH] Site updated: 2023-08-27 10:27:36 --- .../index.html | 4 +- .../index.html | 4 +- .../index.html | 4 +- .../index.html | 4 +- .../index.html | 2 +- .../07/聊聊最近平淡的生活/index.html | 2 +- .../index.html | 2 +- .../index.html | 4 +- archives/2023/08/index.html | 2 +- archives/2023/index.html | 2 +- archives/index.html | 2 +- atom.xml | 6 +- baidusitemap.xml | 66 +- categories/Java/SpringBoot/index.html | 2 +- index.html | 2 +- leancloud.memo | 1 + leancloud_counter_security_urls.json | 2 +- search.xml | 18378 ++++++++-------- sitemap.xml | 1562 +- tags/Dubbo/index.html | 2 +- tags/Windows/index.html | 2 +- tags/php/index.html | 2 +- 22 files changed, 10029 insertions(+), 10028 deletions(-) diff --git a/2020/10/25/Leetcode-104-二叉树的最大深度-Maximum-Depth-of-Binary-Tree-题解分析/index.html b/2020/10/25/Leetcode-104-二叉树的最大深度-Maximum-Depth-of-Binary-Tree-题解分析/index.html index d856235cde..8c3856aa70 100644 --- a/2020/10/25/Leetcode-104-二叉树的最大深度-Maximum-Depth-of-Binary-Tree-题解分析/index.html +++ b/2020/10/25/Leetcode-104-二叉树的最大深度-Maximum-Depth-of-Binary-Tree-题解分析/index.html @@ -1,4 +1,4 @@ -Leetcode 104 二叉树的最大深度(Maximum Depth of Binary Tree) 题解分析 | Nicksxs's Blog

Nicksxs's Blog

What hurts more, the pain of hard work or the pain of regret?

Leetcode 104 二叉树的最大深度(Maximum Depth of Binary Tree) 题解分析

Problem Description

Given a binary tree, find its maximum depth.

The depth of a binary tree is the number of nodes along the longest path from the root node down to the farthest leaf node.

Note: a leaf is a node with no children.

Example:
Given the binary tree [3,9,20,null,null,15,7],

  3
+Leetcode 104 二叉树的最大深度(Maximum Depth of Binary Tree) 题解分析 | Nicksxs's Blog

Nicksxs's Blog

What hurts more, the pain of hard work or the pain of regret?

Leetcode 104 二叉树的最大深度(Maximum Depth of Binary Tree) 题解分析

Problem Description

Given a binary tree, find its maximum depth.

The depth of a binary tree is the number of nodes along the longest path from the root node down to the farthest leaf node.

Note: a leaf is a node with no children.

Example:
Given the binary tree [3,9,20,null,null,15,7],

  3
  / \
 9  20
   /  \
@@ -20,4 +20,4 @@
     }
     // after the recursive calls return, take the larger of left and right
     return Math.max(left + 1, right + 1);
-}

Analysis

For tree problems like this, a recursive form is usually the most convenient; the one thing to watch is the exit condition.

\ No newline at end of file +}

Analysis

For tree problems like this, a recursive form is usually the most convenient; the one thing to watch is the exit condition.
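For reference, the complete method reads as follows, a minimal sketch assuming Leetcode's standard TreeNode class (val, left, right); it matches the fragment shown above.

public int maxDepth(TreeNode root) {
    // exit condition: an empty subtree contributes depth 0
    if (root == null) {
        return 0;
    }
    int left = maxDepth(root.left);
    int right = maxDepth(root.right);
    // after the recursive calls return, take the larger of left and right
    return Math.max(left + 1, right + 1);
}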

\ No newline at end of file diff --git a/2020/11/01/Apollo-的-value-注解是怎么自动更新的/index.html b/2020/11/01/Apollo-的-value-注解是怎么自动更新的/index.html index fa88df689b..85114f7fc8 100644 --- a/2020/11/01/Apollo-的-value-注解是怎么自动更新的/index.html +++ b/2020/11/01/Apollo-的-value-注解是怎么自动更新的/index.html @@ -1,4 +1,4 @@ -Apollo 的 value 注解是怎么自动更新的 | Nicksxs's Blog

Nicksxs's Blog

What hurts more, the pain of hard work or the pain of regret?

Apollo 的 value 注解是怎么自动更新的

Both at my previous company and my current one, the config center in use is Apollo, a configuration management system that has been validated across the industry and is quite powerful; since 0.10 it has supported auto-updating configuration values injected through the @Value annotation. A colleague happened to ask me about this today, so I'm taking the chance to write it down. It actually leans on Spring's powerful bean lifecycle management: implement the BeanPostProcessor interface and use the postProcessBeforeInitialization method to examine a bean's fields and methods for the @Value annotation, and register any that carry it into a map. See this method: com.ctrip.framework.apollo.spring.annotation.SpringValueProcessor#processField

@Override
+Apollo 的 value 注解是怎么自动更新的 | Nicksxs's Blog

Nicksxs's Blog

What hurts more, the pain of hard work or the pain of regret?

Apollo 的 value 注解是怎么自动更新的

Both at my previous company and my current one, the config center in use is Apollo, a configuration management system that has been validated across the industry and is quite powerful; since 0.10 it has supported auto-updating configuration values injected through the @Value annotation. A colleague happened to ask me about this today, so I'm taking the chance to write it down. It actually leans on Spring's powerful bean lifecycle management: implement the BeanPostProcessor interface and use the postProcessBeforeInitialization method to examine a bean's fields and methods for the @Value annotation, and register any that carry it into a map. See this method: com.ctrip.framework.apollo.spring.annotation.SpringValueProcessor#processField

@Override
   protected void processField(Object bean, String beanName, Field field) {
     // register @Value on field
     Value value = field.getAnnotation(Value.class);
@@ -61,4 +61,4 @@
        updateSpringValue(val);
      }
    }
- }

The principle behind it is actually quite simple; it's just something you have to know about.

\ No newline at end of file + }

The principle behind it is actually quite simple; it's just something you have to know about.
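To make the flow concrete, here is a heavily simplified sketch of the idea, my own illustration rather than Apollo's actual classes (only the class and method names quoted above come from Apollo; SpringValueCollector, FieldRef, and the onConfigChanged hook are hypothetical): a BeanPostProcessor collects @Value fields into a map keyed by config key, and a config-change callback later pushes new values into those fields via reflection.

import org.springframework.beans.BeansException;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.beans.factory.config.BeanPostProcessor;

import java.lang.reflect.Field;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

public class SpringValueCollector implements BeanPostProcessor {

    // config key -> all (bean, field) pairs injected from that key
    private final Map<String, List<FieldRef>> registry = new ConcurrentHashMap<>();

    @Override
    public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException {
        for (Field field : bean.getClass().getDeclaredFields()) {
            Value value = field.getAnnotation(Value.class);
            if (value == null) {
                continue;
            }
            // naive parsing for the sketch: "${timeout:100}" -> "timeout"
            String key = value.value().replace("${", "").replace("}", "").split(":")[0];
            registry.computeIfAbsent(key, k -> new CopyOnWriteArrayList<>()).add(new FieldRef(bean, field));
        }
        return bean;
    }

    // invoked by some config-change listener (a hypothetical hook) with the new value
    public void onConfigChanged(String key, String newValue) {
        for (FieldRef ref : registry.getOrDefault(key, Collections.emptyList())) {
            try {
                ref.field.setAccessible(true);
                ref.field.set(ref.bean, newValue); // the real code converts types first
            } catch (IllegalAccessException e) {
                throw new IllegalStateException(e);
            }
        }
    }

    private static class FieldRef {
        final Object bean;
        final Field field;
        FieldRef(Object bean, Field field) { this.bean = bean; this.field = field; }
    }
}

Apollo's real registry additionally covers @Value on methods, holds the beans weakly, and converts types before assignment; the sketch only shows the registration idea.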

\ No newline at end of file diff --git a/2020/12/13/Leetcode-105-从前序与中序遍历序列构造二叉树-Construct-Binary-Tree-from-Preorder-and-Inorder-Traversal-题解分析/index.html b/2020/12/13/Leetcode-105-从前序与中序遍历序列构造二叉树-Construct-Binary-Tree-from-Preorder-and-Inorder-Traversal-题解分析/index.html index 1bca8f51f2..7b66dbfebc 100644 --- a/2020/12/13/Leetcode-105-从前序与中序遍历序列构造二叉树-Construct-Binary-Tree-from-Preorder-and-Inorder-Traversal-题解分析/index.html +++ b/2020/12/13/Leetcode-105-从前序与中序遍历序列构造二叉树-Construct-Binary-Tree-from-Preorder-and-Inorder-Traversal-题解分析/index.html @@ -1,4 +1,4 @@ -Leetcode 105 从前序与中序遍历序列构造二叉树(Construct Binary Tree from Preorder and Inorder Traversal) 题解分析 | Nicksxs's Blog

Nicksxs's Blog

What hurts more, the pain of hard work or the pain of regret?

Leetcode 105 从前序与中序遍历序列构造二叉树(Construct Binary Tree from Preorder and Inorder Traversal) 题解分析

Problem Description

Given preorder and inorder traversal of a tree, construct the binary tree.

Note

You may assume that duplicates do not exist in the tree. (PS: otherwise it simply could not be solved)

Example:

preorder = [3,9,20,15,7]
+Leetcode 105 从前序与中序遍历序列构造二叉树(Construct Binary Tree from Preorder and Inorder Traversal) 题解分析 | Nicksxs's Blog

Nicksxs's Blog

What hurts more, the pain of hard work or the pain of regret?

Leetcode 105 从前序与中序遍历序列构造二叉树(Construct Binary Tree from Preorder and Inorder Traversal) 题解分析

Problem Description

Given preorder and inorder traversal of a tree, construct the binary tree.

Note

You may assume that duplicates do not exist in the tree. (PS: otherwise it simply could not be solved)

Example:

preorder = [3,9,20,15,7]
 inorder = [9,3,15,20,7]

The binary tree to return:

  3
  / \
 9  20
@@ -32,4 +32,4 @@ inorder = [9,3,15,20,7]
\ No newline at end of file +}
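The solution code itself is elided by the diff context above, so for completeness here is my own sketch of the standard recursive construction, not necessarily the post's exact code: the first preorder element is the root, and its position in the inorder array splits the remaining elements into the left and right subtrees.

import java.util.HashMap;
import java.util.Map;

class TreeNode {
    int val;
    TreeNode left, right;
    TreeNode(int val) { this.val = val; }
}

class Solution {
    private final Map<Integer, Integer> inorderIndex = new HashMap<>();

    public TreeNode buildTree(int[] preorder, int[] inorder) {
        // value -> index lookup; valid because the values are guaranteed unique
        for (int i = 0; i < inorder.length; i++) {
            inorderIndex.put(inorder[i], i);
        }
        return build(preorder, 0, preorder.length - 1, 0);
    }

    // preLo..preHi is the current subtree's slice of preorder;
    // inLo is where that subtree starts in inorder
    private TreeNode build(int[] preorder, int preLo, int preHi, int inLo) {
        if (preLo > preHi) {
            return null;
        }
        TreeNode root = new TreeNode(preorder[preLo]);
        int rootIdx = inorderIndex.get(preorder[preLo]);
        int leftSize = rootIdx - inLo;
        root.left = build(preorder, preLo + 1, preLo + leftSize, inLo);
        root.right = build(preorder, preLo + leftSize + 1, preHi, rootIdx + 1);
        return root;
    }
}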
\ No newline at end of file diff --git a/2021/01/24/Leetcode-124-二叉树中的最大路径和-Binary-Tree-Maximum-Path-Sum-题解分析/index.html b/2021/01/24/Leetcode-124-二叉树中的最大路径和-Binary-Tree-Maximum-Path-Sum-题解分析/index.html index 461c5ca8d7..5cb4b02887 100644 --- a/2021/01/24/Leetcode-124-二叉树中的最大路径和-Binary-Tree-Maximum-Path-Sum-题解分析/index.html +++ b/2021/01/24/Leetcode-124-二叉树中的最大路径和-Binary-Tree-Maximum-Path-Sum-题解分析/index.html @@ -1,4 +1,4 @@ -Leetcode 124 二叉树中的最大路径和(Binary Tree Maximum Path Sum) 题解分析 | Nicksxs's Blog

Nicksxs's Blog

What hurts more, the pain of hard work or the pain of regret?

Leetcode 124 二叉树中的最大路径和(Binary Tree Maximum Path Sum) 题解分析

Problem Description

A path in a binary tree is a sequence of nodes where each pair of adjacent nodes in the sequence has an edge connecting them. A node can only appear in the sequence at most once. Note that the path does not need to pass through the root.

The path sum of a path is the sum of the node's values in the path.

Given the root of a binary tree, return the maximum path sum of any path. (The path must contain at least one node.)

Brief Analysis

This problem is easily mistaken for a simpler one: take the larger of the left subtree's maximum and the right subtree's maximum, or add the two sides up. Think it through and neither is right, because the maximum path may also arise entirely inside the left subtree or the right subtree, unrelated to the left or right subtree roots. That is not easy to grasp in words, so here is a diagram.

As the diagram shows, the maximum path sum is actually formed by the left subtree alone, with no relation to the root or the right subtree. Then consider another case: if the whole tree were just the left subtree in the diagram, the maximum path sum would be the left subtree plus the right subtree plus the root. So it is not as simple as I first assumed, and the implementation needs a few tricks too.

Code

int ansNew = Integer.MIN_VALUE;
+Leetcode 124 二叉树中的最大路径和(Binary Tree Maximum Path Sum) 题解分析 | Nicksxs's Blog

Nicksxs's Blog

What hurts more, the pain of hard work or the pain of regret?

Leetcode 124 二叉树中的最大路径和(Binary Tree Maximum Path Sum) 题解分析

Problem Description

A path in a binary tree is a sequence of nodes where each pair of adjacent nodes in the sequence has an edge connecting them. A node can only appear in the sequence at most once. Note that the path does not need to pass through the root.

The path sum of a path is the sum of the node's values in the path.

Given the root of a binary tree, return the maximum path sum of any path. (The path must contain at least one node.)

Brief Analysis

This problem is easily mistaken for a simpler one: take the larger of the left subtree's maximum and the right subtree's maximum, or add the two sides up. Think it through and neither is right, because the maximum path may also arise entirely inside the left subtree or the right subtree, unrelated to the left or right subtree roots. That is not easy to grasp in words, so here is a diagram.

As the diagram shows, the maximum path sum is actually formed by the left subtree alone, with no relation to the root or the right subtree. Then consider another case: if the whole tree were just the left subtree in the diagram, the maximum path sum would be the left subtree plus the right subtree plus the root. So it is not as simple as I first assumed, and the implementation needs a few tricks too.

Code

int ansNew = Integer.MIN_VALUE;
 public int maxPathSum(TreeNode root) {
         maxSumNew(root);
         return ansNew;
@@ -21,4 +21,4 @@
     int res = Math.max(left + right + root.val, currentSum);
     ans = Math.max(res, ans);
     return currentSum;
-}

The really important point here is that ansNew holds the final result, while the return value of maxSumNew has to represent a contiguous segment, since it is returned so the caller can keep accumulating the path sum; hence the function returns currentSum, and the final result is ansNew.

Result

For once it beat 100%, so here is a screenshot, haha.

\ No newline at end of file +}

The really important point here is that ansNew holds the final result, while the return value of maxSumNew has to represent a contiguous segment, since it is returned so the caller can keep accumulating the path sum; hence the function returns currentSum, and the final result is ansNew.
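Putting the fragments together, a self-contained version of this approach looks like the following, my own reconstruction around the fragments above (assuming the usual TreeNode class), so details may differ from the post's full code: maxSumNew returns the best downward path starting at a node, while ansNew tracks the best path seen anywhere.

int ansNew = Integer.MIN_VALUE;

public int maxPathSum(TreeNode root) {
    maxSumNew(root);
    return ansNew;
}

private int maxSumNew(TreeNode root) {
    if (root == null) {
        return 0;
    }
    // negative branches only drag the sum down, so clamp them to 0
    int left = Math.max(maxSumNew(root.left), 0);
    int right = Math.max(maxSumNew(root.right), 0);
    // a path may bend through this node and use both sides...
    ansNew = Math.max(ansNew, left + right + root.val);
    // ...but only one contiguous side can be extended up to the parent
    int currentSum = root.val + Math.max(left, right);
    return currentSum;
}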

Result

For once it beat 100%, so here is a screenshot, haha.

\ No newline at end of file diff --git a/2021/06/06/聊聊如何识别和意识到日常生活中的各类危险/index.html b/2021/06/06/聊聊如何识别和意识到日常生活中的各类危险/index.html index b0e5bac6d5..7b9ad570db 100644 --- a/2021/06/06/聊聊如何识别和意识到日常生活中的各类危险/index.html +++ b/2021/06/06/聊聊如何识别和意识到日常生活中的各类危险/index.html @@ -1 +1 @@ -聊聊如何识别和意识到日常生活中的各类危险 | Nicksxs's Blog

Nicksxs's Blog

What hurts more, the pain of hard work or the pain of regret?

聊聊如何识别和意识到日常生活中的各类危险

The inspiration for this post again came on my way from Shaoxing to Hangzhou. After we had entered the station and the escalator was almost at the top, the people in front suddenly stopped moving. On my right I could see someone whose suitcase would not lift up for a moment; behind, on the other side, a small child was crying and fussing, and a mother had stopped there to soothe him, perhaps a little at a loss. I started realizing a few years ago that this is an extremely dangerous scene, especially on a very long escalator up to the platform like the one at Shaoxing North. Because of the recent expansion and renovation the train services have been cut back a lot, so every departure carries a crowd, and the escalators from the ticket gates up to the platform run at full capacity. Picture that situation: had that mother lingered a little longer, the people behind could well have had nowhere to step off and been pushed over, and in a worse case it turns into a stampede. Yet few people truly realize this. The most obvious example is people with large, heavy suitcases who skip the elevator and are not ready ahead of reaching the top, perhaps busy on their phones; if the case will not lift and the escalator behind them is packed, the situation above can easily occur. And because this is not an emergency setting, most people are mentally unprepared, so once it happens the consequences can be severe. In fire or earthquake evacuations most people, or at least those guiding them, know to leave in an orderly way to prevent a crush, but on an ordinary escalator ride almost nobody has that awareness; if the blockage drags on and people can no longer keep their footing and start toppling backwards, it gets truly serious. So if you are traveling with a child or a heavy suitcase, please get ready in advance, and if you see someone ahead of you carrying either, better keep some distance as well.
There is also everyday walking, with cars parked alongside. The basics are to check whether the lights are on, and whether it is the reversing light that is lit; in that case take particular care to stay clear, or at least keep a distance rather than brushing right past. Many people, especially some elderly folks on fairly busy stretches, completely ignore the state of the cars beside them: I walk my road, who dares block me, never mind whether that car is moving. That is genuinely dangerous. A car has blind spots by itself, and then come the driver's habits and condition (those out to court death or stage a scam excepted). Some cars are special cases: the engine is running but there are no lights, perhaps because the lights are broken or the driver has switched them off somehow. These are harder to avoid, although a car that is running, or was just shut off, gives off a fair amount of heat. In any case, stay as far away as you can. Passing close in front of a sedan is a bit safer than passing close behind its rear, but best not to hug the car at all.
The last point is really that I am rather afraid of dying: I allow a longer anticipation distance for oncoming vehicles or vehicles coming out from the side, especially e-bikes, which generally do not yield to people. Take the delivery riders: they really do not have it easy, but it is truly dangerous, with life and death basically riding on the brakes. Brake in time and they got away with it; fail to, and it depends on whether the body can withstand the hit. I could say more here and get into how capital chases profit, always finding ways to squeeze out more gain, but I will not digress: keep away whenever you can.

\ No newline at end of file +聊聊如何识别和意识到日常生活中的各类危险 | Nicksxs's Blog

Nicksxs's Blog

What hurts more, the pain of hard work or the pain of regret?

聊聊如何识别和意识到日常生活中的各类危险

The inspiration for this post again came on my way from Shaoxing to Hangzhou. After we had entered the station and the escalator was almost at the top, the people in front suddenly stopped moving. On my right I could see someone whose suitcase would not lift up for a moment; behind, on the other side, a small child was crying and fussing, and a mother had stopped there to soothe him, perhaps a little at a loss. I started realizing a few years ago that this is an extremely dangerous scene, especially on a very long escalator up to the platform like the one at Shaoxing North. Because of the recent expansion and renovation the train services have been cut back a lot, so every departure carries a crowd, and the escalators from the ticket gates up to the platform run at full capacity. Picture that situation: had that mother lingered a little longer, the people behind could well have had nowhere to step off and been pushed over, and in a worse case it turns into a stampede. Yet few people truly realize this. The most obvious example is people with large, heavy suitcases who skip the elevator and are not ready ahead of reaching the top, perhaps busy on their phones; if the case will not lift and the escalator behind them is packed, the situation above can easily occur. And because this is not an emergency setting, most people are mentally unprepared, so once it happens the consequences can be severe. In fire or earthquake evacuations most people, or at least those guiding them, know to leave in an orderly way to prevent a crush, but on an ordinary escalator ride almost nobody has that awareness; if the blockage drags on and people can no longer keep their footing and start toppling backwards, it gets truly serious. So if you are traveling with a child or a heavy suitcase, please get ready in advance, and if you see someone ahead of you carrying either, better keep some distance as well.
There is also everyday walking, with cars parked alongside. The basics are to check whether the lights are on, and whether it is the reversing light that is lit; in that case take particular care to stay clear, or at least keep a distance rather than brushing right past. Many people, especially some elderly folks on fairly busy stretches, completely ignore the state of the cars beside them: I walk my road, who dares block me, never mind whether that car is moving. That is genuinely dangerous. A car has blind spots by itself, and then come the driver's habits and condition (those out to court death or stage a scam excepted). Some cars are special cases: the engine is running but there are no lights, perhaps because the lights are broken or the driver has switched them off somehow. These are harder to avoid, although a car that is running, or was just shut off, gives off a fair amount of heat. In any case, stay as far away as you can. Passing close in front of a sedan is a bit safer than passing close behind its rear, but best not to hug the car at all.
The last point is really that I am rather afraid of dying: I allow a longer anticipation distance for oncoming vehicles or vehicles coming out from the side, especially e-bikes, which generally do not yield to people. Take the delivery riders: they really do not have it easy, but it is truly dangerous, with life and death basically riding on the brakes. Brake in time and they got away with it; fail to, and it depends on whether the body can withstand the hit. I could say more here and get into how capital chases profit, always finding ways to squeeze out more gain, but I will not digress: keep away whenever you can.

\ No newline at end of file diff --git a/2021/11/07/聊聊最近平淡的生活/index.html b/2021/11/07/聊聊最近平淡的生活/index.html index 4f58195e94..437cabbcc2 100644 --- a/2021/11/07/聊聊最近平淡的生活/index.html +++ b/2021/11/07/聊聊最近平淡的生活/index.html @@ -1 +1 @@ -聊聊最近平淡的生活之又聊通勤 | Nicksxs's Blog

Nicksxs's Blog

What hurts more, the pain of hard work or the pain of regret?

聊聊最近平淡的生活之又聊通勤

I have always lived a particularly plain, ordinary life, though most people probably do; perhaps some can live an ordinary life with flair. The simplest illustration is WeChat Moments: although I do post, looking over my past year roughly 90% of it is running check-ins, with the occasional scattered likes. I go to work every day and do not like posting status updates, feeling nobody pays attention, so I simply do not post.

Yet even such a plain life has things that vex me, the traffic I mentioned before. Lately I seem to have noticed one more thing, and the truth always shatters expectations. I used to think I was just timid, which was why these e-bikes all seemed to charge at me so brazenly. Later I gradually borrowed a notion from the TV drama 读心神探, the safety distance, and figured most people are like me: e-bike riders still keep some safety distance, only it differs from person to person, and theirs is subconsciously very short, which is why they often brake only when extremely close. But that safety-distance theory has lately been overturned as well, because several times an e-bike has actually made physical contact with me, short of knocking me over. These riders seem to treat pedestrians on the footpath as thin air; a brush does not matter, so long as you do not block their way. I keep feeling that if I were not in front on my slow bicycle holding them up, they could take off and down an F-35 to liberate Taiwan;

Another problem is that we spread traffic rules far too little. Although we do not have the term right of way, the priority does in fact exist. Hangzhou, for instance, is known for buses yielding to pedestrians at zebra crossings, yet that civilized behavior is mostly limited to crossings in the middle of a straight stretch. At intersections, right-turning buses rarely yield to the pedestrian crossing even when the walk light is green, and a bus body is especially long, with a large blind spot on right turns; if the bus turns first, a pedestrian or cyclist is easily caught underneath, which is very dangerous. Private cars need no mention; however many people are on the crossing, a right-turning car will not wait even a second. So when I drive I try to wait on right turns until the pedestrians and riders on the crossing have passed, because I keep wondering whether I hold a double standard, hoping drivers yield to me by the rules when I walk or cycle, while wanting to hurry off when I drive; so when driving I do my best to yield to those walking and riding.

One more point came to mind while writing. When I turn left on my bicycle, I am done once I reach the diagonal corner, unlike those who go straight on after turning left. We presumably learned when studying for a license that you overtake on the left, yet e-bike riders turning left often overtake from my right and then swing over to the left. If they leave a wide margin it is fine, but some e-bikes act as if once the front has passed, the tail is not their concern; if I did not slow down, the bicycle would be clipped over. Perhaps it truly is that others do not count as people; a scrape does not matter as long as you are not knocked down, and to keep yourself from being knocked down you are bound to yield anyway.

\ No newline at end of file +聊聊最近平淡的生活之又聊通勤 | Nicksxs's Blog

Nicksxs's Blog

What hurts more, the pain of hard work or the pain of regret?

聊聊最近平淡的生活之又聊通勤

I have always lived a particularly plain, ordinary life, though most people probably do; perhaps some can live an ordinary life with flair. The simplest illustration is WeChat Moments: although I do post, looking over my past year roughly 90% of it is running check-ins, with the occasional scattered likes. I go to work every day and do not like posting status updates, feeling nobody pays attention, so I simply do not post.

Yet even such a plain life has things that vex me, the traffic I mentioned before. Lately I seem to have noticed one more thing, and the truth always shatters expectations. I used to think I was just timid, which was why these e-bikes all seemed to charge at me so brazenly. Later I gradually borrowed a notion from the TV drama 读心神探, the safety distance, and figured most people are like me: e-bike riders still keep some safety distance, only it differs from person to person, and theirs is subconsciously very short, which is why they often brake only when extremely close. But that safety-distance theory has lately been overturned as well, because several times an e-bike has actually made physical contact with me, short of knocking me over. These riders seem to treat pedestrians on the footpath as thin air; a brush does not matter, so long as you do not block their way. I keep feeling that if I were not in front on my slow bicycle holding them up, they could take off and down an F-35 to liberate Taiwan;

Another problem is that we spread traffic rules far too little. Although we do not have the term right of way, the priority does in fact exist. Hangzhou, for instance, is known for buses yielding to pedestrians at zebra crossings, yet that civilized behavior is mostly limited to crossings in the middle of a straight stretch. At intersections, right-turning buses rarely yield to the pedestrian crossing even when the walk light is green, and a bus body is especially long, with a large blind spot on right turns; if the bus turns first, a pedestrian or cyclist is easily caught underneath, which is very dangerous. Private cars need no mention; however many people are on the crossing, a right-turning car will not wait even a second. So when I drive I try to wait on right turns until the pedestrians and riders on the crossing have passed, because I keep wondering whether I hold a double standard, hoping drivers yield to me by the rules when I walk or cycle, while wanting to hurry off when I drive; so when driving I do my best to yield to those walking and riding.

One more point came to mind while writing. When I turn left on my bicycle, I am done once I reach the diagonal corner, unlike those who go straight on after turning left. We presumably learned when studying for a license that you overtake on the left, yet e-bike riders turning left often overtake from my right and then swing over to the left. If they leave a wide margin it is fine, but some e-bikes act as if once the front has passed, the tail is not their concern; if I did not slow down, the bicycle would be clipped over. Perhaps it truly is that others do not count as people; a scrape does not matter as long as you are not knocked down, and to keep yourself from being knocked down you are bound to yield anyway.

\ No newline at end of file diff --git a/2023/08/13/springboot-mappings-注册逻辑/index.html b/2023/08/13/springboot-mappings-注册逻辑/index.html index e0bbad8522..4b7cc92fda 100644 --- a/2023/08/13/springboot-mappings-注册逻辑/index.html +++ b/2023/08/13/springboot-mappings-注册逻辑/index.html @@ -118,4 +118,4 @@ finally { this.readWriteLock.writeLock().unlock(); } - }

The underlying storage is the mappingLookup mentioned in the previous post; that is the map that keeps the mapping information.

\ No newline at end of file + }

The underlying storage is the mappingLookup mentioned in the previous post; that is the map that keeps the mapping information.
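As a rough illustration of that pattern (a simplified sketch of my own, not Spring's actual registry class): registration happens under the write lock seen in the fragment above, and a map named mappingLookup, after the field mentioned in the post, is the store that lookups read from.

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class MappingRegistrySketch<T> {
    private final Map<T, Object> mappingLookup = new HashMap<>();
    private final ReentrantReadWriteLock readWriteLock = new ReentrantReadWriteLock();

    public void register(T mapping, Object handlerMethod) {
        this.readWriteLock.writeLock().lock();
        try {
            this.mappingLookup.put(mapping, handlerMethod);
        } finally {
            this.readWriteLock.writeLock().unlock();
        }
    }

    public Object lookup(T mapping) {
        this.readWriteLock.readLock().lock();
        try {
            return this.mappingLookup.get(mapping);
        } finally {
            this.readWriteLock.readLock().unlock();
        }
    }
}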

\ No newline at end of file diff --git a/2023/08/20/springboot-web-server-启动逻辑/index.html b/2023/08/20/springboot-web-server-启动逻辑/index.html index 3b11af7d98..f56c6495d9 100644 --- a/2023/08/20/springboot-web-server-启动逻辑/index.html +++ b/2023/08/20/springboot-web-server-启动逻辑/index.html @@ -1,4 +1,4 @@ -springboot web server 启动逻辑 - Java - SpringBoot | Nicksxs's Blog

Nicksxs's Blog

What hurts more, the pain of hard work or the pain of regret?

springboot web server 启动逻辑 - Java - SpringBoot

One convenience of springboot is that it bundles a web server in; following on from the previous post, let's look at how this web server gets started.
Based on springboot version 2.2.9.RELEASE.
The main line through the whole springboot system is the org.springframework.context.support.AbstractApplicationContext#refresh method,
and the web server is started from the onRefresh step inside it

try {
+springboot web server 启动逻辑 | Nicksxs's Blog

Nicksxs's Blog

What hurts more, the pain of hard work or the pain of regret?

springboot web server 启动逻辑

One convenience of springboot is that it bundles a web server in; following on from the previous post, let's look at how this web server gets started.
Based on springboot version 2.2.9.RELEASE.
The main line through the whole springboot system is the org.springframework.context.support.AbstractApplicationContext#refresh method,
and the web server is started from the onRefresh step inside it

try {
 				// Allows post-processing of the bean factory in context subclasses.
 				postProcessBeanFactory(beanFactory);
 
@@ -166,4 +166,4 @@
             this.server.addService(service);
             return this.server;
         }
-    }

Then it is just starting the server; later we can go on to look at the internal logic of starting this TomcatServer.

\ No newline at end of file + }

Then it is just starting the server; later we can go on to look at the internal logic of starting this TomcatServer.
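As a taste of what that startup boils down to, here is a minimal self-contained example that drives the embedded Tomcat API directly, my own illustration rather than Spring Boot's code (EmbeddedTomcatDemo and the servlet are made up for the demo): create a Tomcat instance, register a servlet, start the server, and block. In Spring Boot these steps are hidden behind TomcatServletWebServerFactory#getWebServer and the TomcatWebServer it returns, which is where the post is headed next.

import org.apache.catalina.Context;
import org.apache.catalina.startup.Tomcat;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.File;
import java.io.IOException;

public class EmbeddedTomcatDemo {
    public static void main(String[] args) throws Exception {
        Tomcat tomcat = new Tomcat();
        tomcat.setPort(8080);
        tomcat.getConnector(); // make sure the default connector is created
        Context ctx = tomcat.addContext("", new File(".").getAbsolutePath());
        Tomcat.addServlet(ctx, "hello", new HttpServlet() {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
                resp.getWriter().write("hello from embedded tomcat");
            }
        });
        ctx.addServletMappingDecoded("/", "hello");
        tomcat.start();             // roughly what WebServer#start ends up doing
        tomcat.getServer().await(); // block so the JVM keeps serving
    }
}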

\ No newline at end of file diff --git a/archives/2023/08/index.html b/archives/2023/08/index.html index 61294e3d35..0e550b3e13 100644 --- a/archives/2023/08/index.html +++ b/archives/2023/08/index.html @@ -1 +1 @@ -归档 | Nicksxs's Blog

Nicksxs's Blog

What hurts more, the pain of hard work or the pain of regret?

\ No newline at end of file +归档 | Nicksxs's Blog

Nicksxs's Blog

What hurts more, the pain of hard work or the pain of regret?

\ No newline at end of file diff --git a/archives/2023/index.html b/archives/2023/index.html index 5d81159cc2..96620492af 100644 --- a/archives/2023/index.html +++ b/archives/2023/index.html @@ -1 +1 @@ -归档 | Nicksxs's Blog

Nicksxs's Blog

What hurts more, the pain of hard work or the pain of regret?

\ No newline at end of file +归档 | Nicksxs's Blog

Nicksxs's Blog

What hurts more, the pain of hard work or the pain of regret?

\ No newline at end of file diff --git a/archives/index.html b/archives/index.html index 67127c6e17..df8e7bc84c 100644 --- a/archives/index.html +++ b/archives/index.html @@ -1 +1 @@ -归档 | Nicksxs's Blog

Nicksxs's Blog

What hurts more, the pain of hard work or the pain of regret?

\ No newline at end of file +归档 | Nicksxs's Blog

Nicksxs's Blog

What hurts more, the pain of hard work or the pain of regret?

\ No newline at end of file diff --git a/atom.xml b/atom.xml index bd603e8776..2d9e0d54b4 100644 --- a/atom.xml +++ b/atom.xml @@ -6,7 +6,7 @@ - 2023-08-20T09:38:56.588Z + 2023-08-27T02:25:44.332Z https://nicksxs.me/ @@ -17,11 +17,11 @@ Hexo - springboot web server 启动逻辑 - Java - SpringBoot + springboot web server 启动逻辑 https://nicksxs.me/2023/08/20/springboot-web-server-%E5%90%AF%E5%8A%A8%E9%80%BB%E8%BE%91/ 2023-08-20T09:38:56.000Z - 2023-08-20T09:38:56.588Z + 2023-08-27T02:25:44.332Z diff --git a/baidusitemap.xml b/baidusitemap.xml index 81df9c4edd..6e51fd303b 100644 --- a/baidusitemap.xml +++ b/baidusitemap.xml @@ -2,7 +2,7 @@ https://nicksxs.me/2023/08/20/springboot-web-server-%E5%90%AF%E5%8A%A8%E9%80%BB%E8%BE%91/ - 2023-08-20 + 2023-08-27 https://nicksxs.me/2023/08/13/springboot-mappings-%E6%B3%A8%E5%86%8C%E9%80%BB%E8%BE%91/ @@ -277,15 +277,15 @@ 2022-06-11 - https://nicksxs.me/2022/02/27/Disruptor-%E7%B3%BB%E5%88%97%E4%BA%8C/ + https://nicksxs.me/2022/02/13/Disruptor-%E7%B3%BB%E5%88%97%E4%B8%80/ 2022-06-11 - https://nicksxs.me/2020/08/22/Filter-Intercepter-Aop-%E5%95%A5-%E5%95%A5-%E5%95%A5-%E8%BF%99%E4%BA%9B%E9%83%BD%E6%98%AF%E5%95%A5/ + https://nicksxs.me/2022/02/27/Disruptor-%E7%B3%BB%E5%88%97%E4%BA%8C/ 2022-06-11 - https://nicksxs.me/2022/02/13/Disruptor-%E7%B3%BB%E5%88%97%E4%B8%80/ + https://nicksxs.me/2020/08/22/Filter-Intercepter-Aop-%E5%95%A5-%E5%95%A5-%E5%95%A5-%E8%BF%99%E4%BA%9B%E9%83%BD%E6%98%AF%E5%95%A5/ 2022-06-11 @@ -297,11 +297,11 @@ 2022-06-11 - https://nicksxs.me/2021/07/04/Leetcode-42-%E6%8E%A5%E9%9B%A8%E6%B0%B4-Trapping-Rain-Water-%E9%A2%98%E8%A7%A3%E5%88%86%E6%9E%90/ + https://nicksxs.me/2021/05/01/Leetcode-48-%E6%97%8B%E8%BD%AC%E5%9B%BE%E5%83%8F-Rotate-Image-%E9%A2%98%E8%A7%A3%E5%88%86%E6%9E%90/ 2022-06-11 - https://nicksxs.me/2021/05/01/Leetcode-48-%E6%97%8B%E8%BD%AC%E5%9B%BE%E5%83%8F-Rotate-Image-%E9%A2%98%E8%A7%A3%E5%88%86%E6%9E%90/ + https://nicksxs.me/2021/07/04/Leetcode-42-%E6%8E%A5%E9%9B%A8%E6%B0%B4-Trapping-Rain-Water-%E9%A2%98%E8%A7%A3%E5%88%86%E6%9E%90/ 2022-06-11 @@ -317,15 +317,15 @@ 2022-06-11 - https://nicksxs.me/2021/04/18/rust%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0-%E6%89%80%E6%9C%89%E6%9D%83%E4%BA%8C/ + https://nicksxs.me/2021/04/18/rust%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0/ 2022-06-11 - https://nicksxs.me/2022/01/30/spring-event-%E4%BB%8B%E7%BB%8D/ + https://nicksxs.me/2021/04/18/rust%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0-%E6%89%80%E6%9C%89%E6%9D%83%E4%BA%8C/ 2022-06-11 - https://nicksxs.me/2021/04/18/rust%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0/ + https://nicksxs.me/2022/01/30/spring-event-%E4%BB%8B%E7%BB%8D/ 2022-06-11 @@ -364,16 +364,12 @@ https://nicksxs.me/2021/10/17/%E8%81%8A%E4%B8%80%E4%B8%8B-RocketMQ-%E7%9A%84%E6%B6%88%E6%81%AF%E5%AD%98%E5%82%A8%E5%9B%9B/ 2022-06-11 - - https://nicksxs.me/2021/09/26/%E8%81%8A%E4%B8%80%E4%B8%8B-SpringBoot-%E4%B8%AD%E5%8A%A8%E6%80%81%E5%88%87%E6%8D%A2%E6%95%B0%E6%8D%AE%E6%BA%90%E7%9A%84%E6%96%B9%E6%B3%95/ - 2022-06-11 - https://nicksxs.me/2021/09/19/%E8%81%8A%E4%B8%80%E4%B8%8B-SpringBoot-%E4%B8%AD%E4%BD%BF%E7%94%A8%E7%9A%84-cglib-%E4%BD%9C%E4%B8%BA%E5%8A%A8%E6%80%81%E4%BB%A3%E7%90%86%E4%B8%AD%E7%9A%84%E4%B8%80%E4%B8%AA%E6%B3%A8%E6%84%8F%E7%82%B9/ 2022-06-11 - https://nicksxs.me/2020/11/22/%E8%81%8A%E8%81%8A-Dubbo-%E7%9A%84%E5%AE%B9%E9%94%99%E6%9C%BA%E5%88%B6/ + https://nicksxs.me/2021/09/26/%E8%81%8A%E4%B8%80%E4%B8%8B-SpringBoot-%E4%B8%AD%E5%8A%A8%E6%80%81%E5%88%87%E6%8D%A2%E6%95%B0%E6%8D%AE%E6%BA%90%E7%9A%84%E6%96%B9%E6%B3%95/ 2022-06-11 @@ -381,7 +377,7 @@ 2022-06-11 - 
https://nicksxs.me/2021/06/13/%E8%81%8A%E8%81%8A-Java-%E7%9A%84%E7%B1%BB%E5%8A%A0%E8%BD%BD%E6%9C%BA%E5%88%B6%E4%BA%8C/ + https://nicksxs.me/2020/11/22/%E8%81%8A%E8%81%8A-Dubbo-%E7%9A%84%E5%AE%B9%E9%94%99%E6%9C%BA%E5%88%B6/ 2022-06-11 @@ -389,15 +385,15 @@ 2022-06-11 - https://nicksxs.me/2021/03/28/%E8%81%8A%E8%81%8A-Linux-%E4%B8%8B%E7%9A%84-top-%E5%91%BD%E4%BB%A4/ + https://nicksxs.me/2021/12/12/%E8%81%8A%E8%81%8A-Sharding-Jdbc-%E7%9A%84%E7%AE%80%E5%8D%95%E4%BD%BF%E7%94%A8/ 2022-06-11 - https://nicksxs.me/2021/12/26/%E8%81%8A%E8%81%8A-Sharding-Jdbc-%E7%9A%84%E7%AE%80%E5%8D%95%E5%8E%9F%E7%90%86%E5%88%9D%E7%AF%87/ + https://nicksxs.me/2022/01/09/%E8%81%8A%E8%81%8A-Sharding-Jdbc-%E5%88%86%E5%BA%93%E5%88%86%E8%A1%A8%E4%B8%8B%E7%9A%84%E5%88%86%E9%A1%B5%E6%96%B9%E6%A1%88/ 2022-06-11 - https://nicksxs.me/2021/12/12/%E8%81%8A%E8%81%8A-Sharding-Jdbc-%E7%9A%84%E7%AE%80%E5%8D%95%E4%BD%BF%E7%94%A8/ + https://nicksxs.me/2021/06/13/%E8%81%8A%E8%81%8A-Java-%E7%9A%84%E7%B1%BB%E5%8A%A0%E8%BD%BD%E6%9C%BA%E5%88%B6%E4%BA%8C/ 2022-06-11 @@ -405,17 +401,21 @@ 2022-06-11 - https://nicksxs.me/2022/01/09/%E8%81%8A%E8%81%8A-Sharding-Jdbc-%E5%88%86%E5%BA%93%E5%88%86%E8%A1%A8%E4%B8%8B%E7%9A%84%E5%88%86%E9%A1%B5%E6%96%B9%E6%A1%88/ + https://nicksxs.me/2020/12/27/%E8%81%8A%E8%81%8A-mysql-%E7%B4%A2%E5%BC%95%E7%9A%84%E4%B8%80%E4%BA%9B%E7%BB%86%E8%8A%82/ 2022-06-11 - https://nicksxs.me/2020/12/27/%E8%81%8A%E8%81%8A-mysql-%E7%B4%A2%E5%BC%95%E7%9A%84%E4%B8%80%E4%BA%9B%E7%BB%86%E8%8A%82/ + https://nicksxs.me/2021/12/26/%E8%81%8A%E8%81%8A-Sharding-Jdbc-%E7%9A%84%E7%AE%80%E5%8D%95%E5%8E%9F%E7%90%86%E5%88%9D%E7%AF%87/ 2022-06-11 https://nicksxs.me/2021/05/30/%E8%81%8A%E8%81%8A%E4%BC%A0%E8%AF%B4%E4%B8%AD%E7%9A%84-ThreadLocal/ 2022-06-11 + + https://nicksxs.me/2021/03/28/%E8%81%8A%E8%81%8A-Linux-%E4%B8%8B%E7%9A%84-top-%E5%91%BD%E4%BB%A4/ + 2022-06-11 + https://nicksxs.me/2021/12/05/%E8%81%8A%E8%81%8A%E9%83%A8%E5%88%86%E5%85%AC%E4%BA%A4%E8%BD%A6%E7%9A%84%E8%AE%BE%E8%AE%A1bug/ 2022-06-11 @@ -788,10 +788,6 @@ https://nicksxs.me/2015/04/14/Add-Two-Number/ 2020-01-12 - - https://nicksxs.me/2014/12/24/MFC%20%E6%A8%A1%E6%80%81%E5%AF%B9%E8%AF%9D%E6%A1%86/ - 2020-01-12 - https://nicksxs.me/2019/12/10/Redis-Part-1/ 2020-01-12 @@ -805,11 +801,11 @@ 2020-01-12 - https://nicksxs.me/2017/05/09/ambari-summary/ + https://nicksxs.me/2014/12/24/MFC%20%E6%A8%A1%E6%80%81%E5%AF%B9%E8%AF%9D%E6%A1%86/ 2020-01-12 - https://nicksxs.me/2015/01/14/Two-Sum/ + https://nicksxs.me/2017/05/09/ambari-summary/ 2020-01-12 @@ -820,6 +816,10 @@ https://nicksxs.me/2016/08/14/docker-mysql-cluster/ 2020-01-12 + + https://nicksxs.me/2015/01/14/Two-Sum/ + 2020-01-12 + https://nicksxs.me/2016/10/11/minimum-size-subarray-sum-209/ 2020-01-12 @@ -833,15 +833,15 @@ 2020-01-12 - https://nicksxs.me/2020/01/10/redis%E6%95%B0%E6%8D%AE%E7%BB%93%E6%9E%84%E4%BB%8B%E7%BB%8D%E4%B8%89/ + https://nicksxs.me/2019/12/26/redis%E6%95%B0%E6%8D%AE%E7%BB%93%E6%9E%84%E4%BB%8B%E7%BB%8D/ 2020-01-12 - https://nicksxs.me/2020/01/04/redis%E6%95%B0%E6%8D%AE%E7%BB%93%E6%9E%84%E4%BB%8B%E7%BB%8D%E4%BA%8C/ + https://nicksxs.me/2020/01/10/redis%E6%95%B0%E6%8D%AE%E7%BB%93%E6%9E%84%E4%BB%8B%E7%BB%8D%E4%B8%89/ 2020-01-12 - https://nicksxs.me/2019/12/26/redis%E6%95%B0%E6%8D%AE%E7%BB%93%E6%9E%84%E4%BB%8B%E7%BB%8D/ + https://nicksxs.me/2020/01/04/redis%E6%95%B0%E6%8D%AE%E7%BB%93%E6%9E%84%E4%BB%8B%E7%BB%8D%E4%BA%8C/ 2020-01-12 @@ -857,19 +857,19 @@ 2020-01-12 - https://nicksxs.me/2015/01/04/Path-Sum/ + https://nicksxs.me/2015/03/11/Number-Of-1-Bits/ 2020-01-12 - https://nicksxs.me/2015/03/11/Number-Of-1-Bits/ + 
https://nicksxs.me/2015/01/04/Path-Sum/ 2020-01-12 - https://nicksxs.me/2015/06/22/invert-binary-tree/ + https://nicksxs.me/2014/12/23/my-new-post/ 2020-01-12 - https://nicksxs.me/2014/12/23/my-new-post/ + https://nicksxs.me/2015/06/22/invert-binary-tree/ 2020-01-12 diff --git a/categories/Java/SpringBoot/index.html b/categories/Java/SpringBoot/index.html index 4ec47fb734..d2e3618fb8 100644 --- a/categories/Java/SpringBoot/index.html +++ b/categories/Java/SpringBoot/index.html @@ -1 +1 @@ -分类: SpringBoot | Nicksxs's Blog

Nicksxs's Blog

What hurts more, the pain of hard work or the pain of regret?

\ No newline at end of file +分类: SpringBoot | Nicksxs's Blog

Nicksxs's Blog

What hurts more, the pain of hard work or the pain of regret?

\ No newline at end of file diff --git a/index.html b/index.html index 8e198672de..c3145b4ca2 100644 --- a/index.html +++ b/index.html @@ -1,4 +1,4 @@ -Nicksxs's Blog - What hurts more, the pain of hard work or the pain of regret?

Nicksxs's Blog

What hurts more, the pain of hard work or the pain of regret?

One convenience of springboot is that it bundles a web server in; following on from the previous post, let's look at how this web server gets started.
Based on springboot version 2.2.9.RELEASE.
The main line through the whole springboot system is the org.springframework.context.support.AbstractApplicationContext#refresh method,
and the web server is started from the onRefresh step inside it

try {
+Nicksxs's Blog - What hurts more, the pain of hard work or the pain of regret?

Nicksxs's Blog

What hurts more, the pain of hard work or the pain of regret?

One convenience of springboot is that it bundles a web server in; following on from the previous post, let's look at how this web server gets started.
Based on springboot version 2.2.9.RELEASE.
The main line through the whole springboot system is the org.springframework.context.support.AbstractApplicationContext#refresh method,
and the web server is started from the onRefresh step inside it

try {
 				// Allows post-processing of the bean factory in context subclasses.
 				postProcessBeanFactory(beanFactory);
 
diff --git a/leancloud.memo b/leancloud.memo
index 4627c43b2d..223222395e 100644
--- a/leancloud.memo
+++ b/leancloud.memo
@@ -228,4 +228,5 @@
 {"title":"java 中发起 http 请求时证书问题解决记录","url":"/2023/07/29/java-中发起-http-请求时证书问题解决记录/"},
 {"title":"springboot 获取 web 应用中所有的接口 url","url":"/2023/08/06/springboot-获取-web-应用中所有的接口-url/"},
 {"title":"springboot mappings 注册逻辑","url":"/2023/08/13/springboot-mappings-注册逻辑/"},
+{"title":"springboot web server 启动逻辑 - Java - SpringBoot","url":"/2023/08/20/springboot-web-server-启动逻辑/"},
 ]
\ No newline at end of file
diff --git a/leancloud_counter_security_urls.json b/leancloud_counter_security_urls.json
index b5af2840ae..f93e22693b 100644
--- a/leancloud_counter_security_urls.json
+++ b/leancloud_counter_security_urls.json
@@ -1 +1 @@
-[{"title":"村上春树《1Q84》读后感","url":"/2019/12/18/1Q84读后感/"},{"title":"2019年终总结","url":"/2020/02/01/2019年终总结/"},{"title":"2020年中总结","url":"/2020/07/11/2020年中总结/"},{"title":"2021 年中总结","url":"/2021/07/18/2021-年中总结/"},{"title":"2020 年终总结","url":"/2021/03/31/2020-年终总结/"},{"title":"2021 年终总结","url":"/2022/01/22/2021-年终总结/"},{"title":"34_Search_for_a_Range","url":"/2016/08/14/34-Search-for-a-Range/"},{"title":"AQS篇二 之 Condition 浅析笔记","url":"/2021/02/21/AQS-之-Condition-浅析笔记/"},{"title":"AQS篇一","url":"/2021/02/14/AQS篇一/"},{"title":"add-two-number","url":"/2015/04/14/Add-Two-Number/"},{"title":"AbstractQueuedSynchronizer","url":"/2019/09/23/AbstractQueuedSynchronizer/"},{"title":"Apollo 客户端启动过程分析","url":"/2022/09/18/Apollo-客户端启动过程分析/"},{"title":"Apollo 的 value 注解是怎么自动更新的","url":"/2020/11/01/Apollo-的-value-注解是怎么自动更新的/"},{"title":"Apollo 如何获取当前环境","url":"/2022/09/04/Apollo-如何获取当前环境/"},{"title":"Clone Graph Part I","url":"/2014/12/30/Clone-Graph-Part-I/"},{"title":"Comparator使用小记","url":"/2020/04/05/Comparator使用小记/"},{"title":"Disruptor 系列二","url":"/2022/02/27/Disruptor-系列二/"},{"title":"Filter, Interceptor, Aop, 啥, 啥, 啥? 这些都是啥?","url":"/2020/08/22/Filter-Intercepter-Aop-啥-啥-啥-这些都是啥/"},{"title":"Dubbo 使用的几个记忆点","url":"/2022/04/02/Dubbo-使用的几个记忆点/"},{"title":"Disruptor 系列一","url":"/2022/02/13/Disruptor-系列一/"},{"title":"G1收集器概述","url":"/2020/02/09/G1收集器概述/"},{"title":"JVM源码分析之G1垃圾收集器分析一","url":"/2019/12/07/JVM-G1-Part-1/"},{"title":"Leetcode 021 合并两个有序链表 ( Merge Two Sorted Lists ) 题解分析","url":"/2021/10/07/Leetcode-021-合并两个有序链表-Merge-Two-Sorted-Lists-题解分析/"},{"title":"Leetcode 028 实现 strStr() ( Implement strStr() ) 题解分析","url":"/2021/10/31/Leetcode-028-实现-strStr-Implement-strStr-题解分析/"},{"title":"2022 年终总结","url":"/2023/01/15/2022-年终总结/"},{"title":"Disruptor 系列三","url":"/2022/09/25/Disruptor-系列三/"},{"title":"Leetcode 053 最大子序和 ( Maximum Subarray ) 题解分析","url":"/2021/11/28/Leetcode-053-最大子序和-Maximum-Subarray-题解分析/"},{"title":"Leetcode 1115 交替打印 FooBar ( Print FooBar Alternately *Medium* ) 题解分析","url":"/2022/05/01/Leetcode-1115-交替打印-FooBar-Print-FooBar-Alternately-Medium-题解分析/"},{"title":"Leetcode 105 从前序与中序遍历序列构造二叉树(Construct Binary Tree from Preorder and Inorder Traversal) 题解分析","url":"/2020/12/13/Leetcode-105-从前序与中序遍历序列构造二叉树-Construct-Binary-Tree-from-Preorder-and-Inorder-Traversal-题解分析/"},{"title":"Leetcode 121 买卖股票的最佳时机(Best Time to Buy and Sell Stock) 题解分析","url":"/2021/03/14/Leetcode-121-买卖股票的最佳时机-Best-Time-to-Buy-and-Sell-Stock-题解分析/"},{"title":"Leetcode 104 二叉树的最大深度(Maximum Depth of Binary Tree) 题解分析","url":"/2020/10/25/Leetcode-104-二叉树的最大深度-Maximum-Depth-of-Binary-Tree-题解分析/"},{"title":"Leetcode 1260 二维网格迁移 ( Shift 2D Grid *Easy* ) 题解分析","url":"/2022/07/22/Leetcode-1260-二维网格迁移-Shift-2D-Grid-Easy-题解分析/"},{"title":"Leetcode 155 最小栈(Min Stack) 题解分析","url":"/2020/12/06/Leetcode-155-最小栈-Min-Stack-题解分析/"},{"title":"Leetcode 1862 向下取整数对和 ( Sum of Floored Pairs *Hard* ) 题解分析","url":"/2022/09/11/Leetcode-1862-向下取整数对和-Sum-of-Floored-Pairs-Hard-题解分析/"},{"title":"Leetcode 124 二叉树中的最大路径和(Binary Tree Maximum Path Sum) 题解分析","url":"/2021/01/24/Leetcode-124-二叉树中的最大路径和-Binary-Tree-Maximum-Path-Sum-题解分析/"},{"title":"Leetcode 16 最接近的三数之和 ( 3Sum Closest *Medium* ) 题解分析","url":"/2022/08/06/Leetcode-16-最接近的三数之和-3Sum-Closest-Medium-题解分析/"},{"title":"Leetcode 20 有效的括号 ( Valid Parentheses *Easy* ) 题解分析","url":"/2022/07/02/Leetcode-20-有效的括号-Valid-Parentheses-Easy-题解分析/"},{"title":"Leetcode 2 Add Two Numbers 题解分析","url":"/2020/10/11/Leetcode-2-Add-Two-Numbers-题解分析/"},{"title":"Leetcode 278 第一个错误的版本 ( First Bad Version *Easy* ) 
题解分析","url":"/2022/08/14/Leetcode-278-第一个错误的版本-First-Bad-Version-Easy-题解分析/"},{"title":"Leetcode 160 相交链表(intersection-of-two-linked-lists) 题解分析","url":"/2021/01/10/Leetcode-160-相交链表-intersection-of-two-linked-lists-题解分析/"},{"title":"Leetcode 234 回文链表(Palindrome Linked List) 题解分析","url":"/2020/11/15/Leetcode-234-回文联表-Palindrome-Linked-List-题解分析/"},{"title":"Leetcode 3 Longest Substring Without Repeating Characters 题解分析","url":"/2020/09/20/Leetcode-3-Longest-Substring-Without-Repeating-Characters-题解分析/"},{"title":"Leetcode 349 两个数组的交集 ( Intersection of Two Arrays *Easy* ) 题解分析","url":"/2022/03/07/Leetcode-349-两个数组的交集-Intersection-of-Two-Arrays-Easy-题解分析/"},{"title":"Leetcode 236 二叉树的最近公共祖先(Lowest Common Ancestor of a Binary Tree) 题解分析","url":"/2021/05/23/Leetcode-236-二叉树的最近公共祖先-Lowest-Common-Ancestor-of-a-Binary-Tree-题解分析/"},{"title":"Leetcode 42 接雨水 (Trapping Rain Water) 题解分析","url":"/2021/07/04/Leetcode-42-接雨水-Trapping-Rain-Water-题解分析/"},{"title":"Leetcode 4 寻找两个正序数组的中位数 ( Median of Two Sorted Arrays *Hard* ) 题解分析","url":"/2022/03/27/Leetcode-4-寻找两个正序数组的中位数-Median-of-Two-Sorted-Arrays-Hard-题解分析/"},{"title":"Leetcode 698 划分为k个相等的子集 ( Partition to K Equal Sum Subsets *Medium* ) 题解分析","url":"/2022/06/19/Leetcode-698-划分为k个相等的子集-Partition-to-K-Equal-Sum-Subsets-Medium-题解分析/"},{"title":"Leetcode 48 旋转图像(Rotate Image) 题解分析","url":"/2021/05/01/Leetcode-48-旋转图像-Rotate-Image-题解分析/"},{"title":"Leetcode 83 删除排序链表中的重复元素 ( Remove Duplicates from Sorted List *Easy* ) 题解分析","url":"/2022/03/13/Leetcode-83-删除排序链表中的重复元素-Remove-Duplicates-from-Sorted-List-Easy-题解分析/"},{"title":"Headscale初体验以及踩坑记","url":"/2023/01/22/Headscale初体验以及踩坑记/"},{"title":"leetcode no.3","url":"/2015/04/15/Leetcode-No-3/"},{"title":"Linux 下 grep 命令的一点小技巧","url":"/2020/08/06/Linux-下-grep-命令的一点小技巧/"},{"title":"MFC 模态对话框","url":"/2014/12/24/MFC 模态对话框/"},{"title":"Maven实用小技巧","url":"/2020/02/16/Maven实用小技巧/"},{"title":"Path Sum","url":"/2015/01/04/Path-Sum/"},{"title":"Number of 1 Bits","url":"/2015/03/11/Number-Of-1-Bits/"},{"title":"Redis_分布式锁","url":"/2019/12/10/Redis-Part-1/"},{"title":"Reverse Bits","url":"/2015/03/11/Reverse-Bits/"},{"title":"Leetcode 885 螺旋矩阵 III ( Spiral Matrix III *Medium* ) 题解分析","url":"/2022/08/23/Leetcode-885-螺旋矩阵-III-Spiral-Matrix-III-Medium-题解分析/"},{"title":"Reverse Integer","url":"/2015/03/13/Reverse-Integer/"},{"title":"ambari-summary","url":"/2017/05/09/ambari-summary/"},{"title":"two sum","url":"/2015/01/14/Two-Sum/"},{"title":"binary-watch","url":"/2016/09/29/binary-watch/"},{"title":"docker-mysql-cluster","url":"/2016/08/14/docker-mysql-cluster/"},{"title":"docker比一般多一点的初学者介绍三","url":"/2020/03/21/docker比一般多一点的初学者介绍三/"},{"title":"docker比一般多一点的初学者介绍二","url":"/2020/03/15/docker比一般多一点的初学者介绍二/"},{"title":"docker比一般多一点的初学者介绍四","url":"/2022/12/25/docker比一般多一点的初学者介绍四/"},{"title":"docker比一般多一点的初学者介绍","url":"/2020/03/08/docker比一般多一点的初学者介绍/"},{"title":"docker使用中发现的echo命令的一个小技巧及其他","url":"/2020/03/29/echo命令的一个小技巧/"},{"title":"dubbo 客户端配置的一个重要知识点","url":"/2022/06/11/dubbo-客户端配置的一个重要知识点/"},{"title":"gogs使用webhook部署react单页应用","url":"/2020/02/22/gogs使用webhook部署react单页应用/"},{"title":"dnsmasq的一个使用注意点","url":"/2023/04/16/dnsmasq的一个使用注意点/"},{"title":"invert-binary-tree","url":"/2015/06/22/invert-binary-tree/"},{"title":"Leetcode 747 至少是其他数字两倍的最大数 ( Largest Number At Least Twice of Others *Easy* ) 题解分析","url":"/2022/10/02/Leetcode-747-至少是其他数字两倍的最大数-Largest-Number-At-Least-Twice-of-Others-Easy-题解分析/"},{"title":"C++ 
指针使用中的一个小问题","url":"/2014/12/23/my-new-post/"},{"title":"minimum-size-subarray-sum-209","url":"/2016/10/11/minimum-size-subarray-sum-209/"},{"title":"mybatis 的 foreach 使用的注意点","url":"/2022/07/09/mybatis-的-foreach-使用的注意点/"},{"title":"mybatis 的 $ 和 # 是有啥区别","url":"/2020/09/06/mybatis-的-和-是有啥区别/"},{"title":"hexo 配置系列-接入Algolia搜索","url":"/2023/04/02/hexo-配置系列-接入Algolia搜索/"},{"title":"headscale 添加节点","url":"/2023/07/09/headscale-添加节点/"},{"title":"java 中发起 http 请求时证书问题解决记录","url":"/2023/07/29/java-中发起-http-请求时证书问题解决记录/"},{"title":"github 小技巧-更新 github host key","url":"/2023/03/28/github-小技巧-更新-github-host-key/"},{"title":"mybatis系列-connection连接池解析","url":"/2023/02/19/mybatis系列-connection连接池解析/"},{"title":"mybatis系列-sql 类的简单使用","url":"/2023/03/12/mybatis系列-sql-类的简单使用/"},{"title":"mybatis系列-foreach 解析","url":"/2023/06/11/mybatis系列-foreach-解析/"},{"title":"mybatis 的缓存是怎么回事","url":"/2020/10/03/mybatis-的缓存是怎么回事/"},{"title":"mybatis系列-dataSource解析","url":"/2023/01/08/mybatis系列-dataSource解析/"},{"title":"mybatis系列-mybatis是如何初始化mapper的","url":"/2022/12/04/mybatis是如何初始化mapper的/"},{"title":"nginx 日志小记","url":"/2022/04/17/nginx-日志小记/"},{"title":"openresty","url":"/2019/06/18/openresty/"},{"title":"mybatis系列-typeAliases系统","url":"/2023/01/01/mybatis系列-typeAliases系统/"},{"title":"pcre-intro-and-a-simple-package","url":"/2015/01/16/pcre-intro-and-a-simple-package/"},{"title":"php-abstract-class-and-interface","url":"/2016/11/10/php-abstract-class-and-interface/"},{"title":"mybatis系列-sql 类的简要分析","url":"/2023/03/19/mybatis系列-sql-类的简要分析/"},{"title":"mybatis系列-第一条sql的细节","url":"/2022/12/11/mybatis系列-第一条sql的细节/"},{"title":"mybatis系列-第一条sql的更多细节","url":"/2022/12/18/mybatis系列-第一条sql的更多细节/"},{"title":"rabbitmq-tips","url":"/2017/04/25/rabbitmq-tips/"},{"title":"redis 的 rdb 和 COW 介绍","url":"/2021/08/15/redis-的-rdb-和-COW-介绍/"},{"title":"redis数据结构介绍三-第三部分 整数集合","url":"/2020/01/10/redis数据结构介绍三/"},{"title":"redis数据结构介绍二-第二部分 跳表","url":"/2020/01/04/redis数据结构介绍二/"},{"title":"redis数据结构介绍-第一部分 SDS,链表,字典","url":"/2019/12/26/redis数据结构介绍/"},{"title":"redis数据结构介绍五-第五部分 对象","url":"/2020/01/20/redis数据结构介绍五/"},{"title":"mybatis系列-入门篇","url":"/2022/11/27/mybatis系列-入门篇/"},{"title":"redis淘汰策略复习","url":"/2021/08/01/redis淘汰策略复习/"},{"title":"redis数据结构介绍四-第四部分 压缩表","url":"/2020/01/19/redis数据结构介绍四/"},{"title":"redis系列介绍七-过期策略","url":"/2020/04/12/redis系列介绍七/"},{"title":"redis数据结构介绍六 快表","url":"/2020/01/22/redis数据结构介绍六/"},{"title":"redis过期策略复习","url":"/2021/07/25/redis过期策略复习/"},{"title":"redis系列介绍八-淘汰策略","url":"/2020/04/18/redis系列介绍八/"},{"title":"rust学习笔记-所有权二","url":"/2021/04/18/rust学习笔记-所有权二/"},{"title":"rust学习笔记-所有权三之切片","url":"/2021/05/16/rust学习笔记-所有权三之切片/"},{"title":"spark-little-tips","url":"/2017/03/28/spark-little-tips/"},{"title":"spring event 介绍","url":"/2022/01/30/spring-event-介绍/"},{"title":"springboot mappings 注册逻辑","url":"/2023/08/13/springboot-mappings-注册逻辑/"},{"title":"powershell 初体验","url":"/2022/11/13/powershell-初体验/"},{"title":"rust学习笔记-所有权一","url":"/2021/04/18/rust学习笔记/"},{"title":"springboot 获取 web 应用中所有的接口 url","url":"/2023/08/06/springboot-获取-web-应用中所有的接口-url/"},{"title":"summary-ranges-228","url":"/2016/10/12/summary-ranges-228/"},{"title":"springboot web server 启动逻辑 - Java - SpringBoot","url":"/2023/08/20/springboot-web-server-启动逻辑/"},{"title":"swoole-websocket-test","url":"/2016/07/13/swoole-websocket-test/"},{"title":"wordpress 忘记密码的一种解决方法","url":"/2021/12/05/wordpress-忘记密码的一种解决方法/"},{"title":"《垃圾回收算法手册读书》笔记之整理算法","url":"/2021/03/07/《垃圾回收算法手册读书》笔记之整理算法/"},{"title":"spring boot中的 http 接口返回 json 
形式的小注意点","url":"/2023/06/25/spring-boot中的-http-接口返回-json-形式的小注意点/"},{"title":"powershell 初体验二","url":"/2022/11/20/powershell-初体验二/"},{"title":"《长安的荔枝》读后感","url":"/2022/07/17/《长安的荔枝》读后感/"},{"title":"上次的其他 外行聊国足","url":"/2022/03/06/上次的其他-外行聊国足/"},{"title":"win 下 vmware 虚拟机搭建黑裙 nas 的小思路","url":"/2023/06/04/win-下-vmware-虚拟机搭建黑裙-nas-的小思路/"},{"title":"ssh 小技巧-端口转发","url":"/2023/03/26/ssh-小技巧-端口转发/"},{"title":"一个 nginx 的简单记忆点","url":"/2022/08/21/一个-nginx-的简单记忆点/"},{"title":"介绍下最近比较实用的端口转发","url":"/2021/11/14/介绍下最近比较实用的端口转发/"},{"title":"介绍一下 RocketMQ","url":"/2020/06/21/介绍一下-RocketMQ/"},{"title":"从丁仲礼被美国制裁聊点啥","url":"/2020/12/20/从丁仲礼被美国制裁聊点啥/"},{"title":"从清华美院学姐聊聊我们身边的恶人","url":"/2020/11/29/从清华美院学姐聊聊我们身边的恶人/"},{"title":"关于读书打卡与分享","url":"/2021/02/07/关于读书打卡与分享/"},{"title":"关于公共交通再吐个槽","url":"/2021/03/21/关于公共交通再吐个槽/"},{"title":"《寻羊历险记》读后感","url":"/2023/07/23/《寻羊历险记》读后感/"},{"title":"分享一次折腾老旧笔记本的体验-续续篇","url":"/2023/02/26/分享一次折腾老旧笔记本的体验-续续篇/"},{"title":"分享一次折腾老旧笔记本的体验","url":"/2023/02/05/分享一次折腾老旧笔记本的体验/"},{"title":"分享记录一下一个 git 操作方法","url":"/2022/02/06/分享记录一下一个-git-操作方法/"},{"title":"nas 中使用 tmm 刮削视频","url":"/2023/07/02/使用-tmm-刮削视频/"},{"title":"分享记录一下一个 scp 操作方法","url":"/2022/02/06/分享记录一下一个-scp-操作方法/"},{"title":"关于 npe 的一个小记忆点","url":"/2023/07/16/关于-npe-的一个小记忆点/"},{"title":"分享一次折腾老旧笔记本的体验-续篇","url":"/2023/02/12/分享一次折腾老旧笔记本的体验-续篇/"},{"title":"在老丈人家的小工记三","url":"/2020/09/13/在老丈人家的小工记三/"},{"title":"在老丈人家的小工记五","url":"/2020/10/18/在老丈人家的小工记五/"},{"title":"在老丈人家的小工记四","url":"/2020/09/26/在老丈人家的小工记四/"},{"title":"在 wsl 2 中开启 ssh 连接","url":"/2023/04/23/在-wsl-2-中开启-ssh-连接/"},{"title":"寄生虫观后感","url":"/2020/03/01/寄生虫观后感/"},{"title":"我是如何走上跑步这条不归路的","url":"/2020/07/26/我是如何走上跑步这条不归路的/"},{"title":"周末我在老丈人家打了天小工","url":"/2020/08/16/周末我在老丈人家打了天小工/"},{"title":"屯菜惊魂记","url":"/2022/04/24/屯菜惊魂记/"},{"title":"搬运两个 StackOverflow 上的 Mysql 编码相关的问题解答","url":"/2022/01/16/搬运两个-StackOverflow-上的-Mysql-编码相关的问题解答/"},{"title":"是何原因竟让两人深夜奔袭十公里","url":"/2022/06/05/是何原因竟让两人深夜奔袭十公里/"},{"title":"分享一次比较诡异的 Windows 下 U盘无法退出的经历","url":"/2023/01/29/分享一次比较诡异的-Windows-下-U盘无法退出的经历/"},{"title":"看完了扫黑风暴,聊聊感想","url":"/2021/10/24/看完了扫黑风暴-聊聊感想/"},{"title":"小工周记一","url":"/2023/03/05/小工周记一/"},{"title":"给小电驴上牌","url":"/2022/03/20/给小电驴上牌/"},{"title":"聊一下 RocketMQ 的 DefaultMQPushConsumer 源码","url":"/2020/06/26/聊一下-RocketMQ-的-Consumer/"},{"title":"聊一下 RocketMQ 的 NameServer 源码","url":"/2020/07/05/聊一下-RocketMQ-的-NameServer-源码/"},{"title":"聊一下 RocketMQ 的消息存储之 MMAP","url":"/2021/09/04/聊一下-RocketMQ-的消息存储/"},{"title":"聊一下 RocketMQ 的消息存储三","url":"/2021/10/03/聊一下-RocketMQ-的消息存储三/"},{"title":"聊一下 RocketMQ 的消息存储二","url":"/2021/09/12/聊一下-RocketMQ-的消息存储二/"},{"title":"深度学习入门初认识","url":"/2023/04/30/深度学习入门初认识/"},{"title":"聊一下 RocketMQ 的顺序消息","url":"/2021/08/29/聊一下-RocketMQ-的顺序消息/"},{"title":"聊一下 RocketMQ 的消息存储四","url":"/2021/10/17/聊一下-RocketMQ-的消息存储四/"},{"title":"聊一下 SpringBoot 中动态切换数据源的方法","url":"/2021/09/26/聊一下-SpringBoot-中动态切换数据源的方法/"},{"title":"聊一下 SpringBoot 设置非 web 应用的方法","url":"/2022/07/31/聊一下-SpringBoot-设置非-web-应用的方法/"},{"title":"聊在东京奥运会闭幕式这天","url":"/2021/08/08/聊在东京奥运会闭幕式这天/"},{"title":"聊在东京奥运会闭幕式这天-二","url":"/2021/08/19/聊在东京奥运会闭幕式这天-二/"},{"title":"聊聊 Dubbo 的 SPI 续之自适应拓展","url":"/2020/06/06/聊聊-Dubbo-的-SPI-续之自适应拓展/"},{"title":"聊聊 Dubbo 的 SPI","url":"/2020/05/31/聊聊-Dubbo-的-SPI/"},{"title":"聊一下 SpringBoot 中使用的 cglib 作为动态代理中的一个注意点","url":"/2021/09/19/聊一下-SpringBoot-中使用的-cglib-作为动态代理中的一个注意点/"},{"title":"聊聊 Dubbo 的容错机制","url":"/2020/11/22/聊聊-Dubbo-的容错机制/"},{"title":"聊聊 Java 中绕不开的 Synchronized 
关键字-二","url":"/2021/06/27/聊聊-Java-中绕不开的-Synchronized-关键字-二/"},{"title":"聊一下关于怎么陪伴学习","url":"/2022/11/06/聊一下关于怎么陪伴学习/"},{"title":"聊聊 Java 的类加载机制一","url":"/2020/11/08/聊聊-Java-的类加载机制/"},{"title":"聊聊 Java 中绕不开的 Synchronized 关键字","url":"/2021/06/20/聊聊-Java-中绕不开的-Synchronized-关键字/"},{"title":"聊聊 Java 的类加载机制二","url":"/2021/06/13/聊聊-Java-的类加载机制二/"},{"title":"聊聊 Java 自带的那些*逆天*工具","url":"/2020/08/02/聊聊-Java-自带的那些逆天工具/"},{"title":"聊聊 Java 的 equals 和 hashCode 方法","url":"/2021/01/03/聊聊-Java-的-equals-和-hashCode-方法/"},{"title":"聊聊 Linux 下的 top 命令","url":"/2021/03/28/聊聊-Linux-下的-top-命令/"},{"title":"聊聊 Sharding-Jdbc 的简单原理初篇","url":"/2021/12/26/聊聊-Sharding-Jdbc-的简单原理初篇/"},{"title":"聊聊 Sharding-Jdbc 的简单使用","url":"/2021/12/12/聊聊-Sharding-Jdbc-的简单使用/"},{"title":"聊聊 dubbo 的线程池","url":"/2021/04/04/聊聊-dubbo-的线程池/"},{"title":"聊聊 RocketMQ 的 Broker 源码","url":"/2020/07/19/聊聊-RocketMQ-的-Broker-源码/"},{"title":"聊聊 Sharding-Jdbc 分库分表下的分页方案","url":"/2022/01/09/聊聊-Sharding-Jdbc-分库分表下的分页方案/"},{"title":"聊聊 mysql 的 MVCC 续篇","url":"/2020/05/02/聊聊-mysql-的-MVCC-续篇/"},{"title":"聊聊 mysql 的 MVCC","url":"/2020/04/26/聊聊-mysql-的-MVCC/"},{"title":"聊聊Java中的单例模式","url":"/2019/12/21/聊聊Java中的单例模式/"},{"title":"聊聊 redis 缓存的应用问题","url":"/2021/01/31/聊聊-redis-缓存的应用问题/"},{"title":"聊聊 mysql 的 MVCC 续续篇之锁分析","url":"/2020/05/10/聊聊-mysql-的-MVCC-续续篇之加锁分析/"},{"title":"聊聊 mysql 索引的一些细节","url":"/2020/12/27/聊聊-mysql-索引的一些细节/"},{"title":"聊聊一次 brew update 引发的血案","url":"/2020/06/13/聊聊一次-brew-update-引发的血案/"},{"title":"聊聊 SpringBoot 自动装配","url":"/2021/07/11/聊聊SpringBoot-自动装配/"},{"title":"聊聊传说中的 ThreadLocal","url":"/2021/05/30/聊聊传说中的-ThreadLocal/"},{"title":"聊聊厦门旅游的好与不好","url":"/2021/04/11/聊聊厦门旅游的好与不好/"},{"title":"聊聊我刚学会的应用诊断方法","url":"/2020/05/22/聊聊我刚学会的应用诊断方法/"},{"title":"聊聊如何识别和意识到日常生活中的各类危险","url":"/2021/06/06/聊聊如何识别和意识到日常生活中的各类危险/"},{"title":"聊聊我的远程工作体验","url":"/2022/06/26/聊聊我的远程工作体验/"},{"title":"聊聊我理解的分布式事务","url":"/2020/05/17/聊聊我理解的分布式事务/"},{"title":"聊聊最近平淡的生活之又聊通勤","url":"/2021/11/07/聊聊最近平淡的生活/"},{"title":"聊聊最近平淡的生活之看《神探狄仁杰》","url":"/2021/12/19/聊聊最近平淡的生活之看《神探狄仁杰》/"},{"title":"聊聊给亲戚朋友的老电脑重装系统那些事儿","url":"/2021/05/09/聊聊给亲戚朋友的老电脑重装系统那些事儿/"},{"title":"聊聊这次换车牌及其他","url":"/2022/02/20/聊聊这次换车牌及其他/"},{"title":"聊聊那些加塞狗","url":"/2021/01/17/聊聊那些加塞狗/"},{"title":"聊聊部分公交车的设计bug","url":"/2021/12/05/聊聊部分公交车的设计bug/"},{"title":"聊聊最近平淡的生活之看看老剧","url":"/2021/11/21/聊聊最近平淡的生活之看看老剧/"},{"title":"聊聊最近平淡的生活之《花束般的恋爱》观后感","url":"/2021/12/31/聊聊最近平淡的生活之《花束般的恋爱》观后感/"},{"title":"记一个容器中 dubbo 注册的小知识点","url":"/2022/10/09/记一个容器中-dubbo-注册的小知识点/"},{"title":"记录一次折腾自组 nas 的失败经历-续续篇","url":"/2023/05/28/记录一次折腾自组-nas-的失败经历-续续篇/"},{"title":"记录一次折腾自组 nas 的失败经历-续篇","url":"/2023/05/14/记录一次折腾自组-nas-的失败经历-续篇/"},{"title":"记录下 Java Stream 的一些高效操作","url":"/2022/05/15/记录下-Java-Lambda-的一些高效操作/"},{"title":"记录一次折腾自组 nas 的失败经历","url":"/2023/05/07/记录一次折腾自组-nas-的失败经历/"},{"title":"记录下 phpunit 的入门使用方法之setUp和tearDown","url":"/2022/10/23/记录下-phpunit-的入门使用方法之setUp和tearDown/"},{"title":"记录一次折腾自组 nas 的失败经历-续续续篇","url":"/2023/06/18/记录一次折腾自组-nas-的失败经历-续续续篇/"},{"title":"记录下 zookeeper 集群迁移和易错点","url":"/2022/05/29/记录下-zookeeper-集群迁移/"},{"title":"解决 网络文件夹目前是以其他用户名和密码进行映射的 问题","url":"/2023/04/09/解决-网络文件夹目前是以其他用户名和密码进行映射的/"},{"title":"这周末我又在老丈人家打了天小工","url":"/2020/08/30/这周末我又在老丈人家打了天小工/"},{"title":"重看了下《蛮荒记》说说感受","url":"/2021/10/10/重看了下《蛮荒记》说说感受/"},{"title":"闲聊下乘公交的用户体验","url":"/2021/02/28/闲聊下乘公交的用户体验/"},{"title":"闲话篇-也算碰到了为老不尊和坏人变老了的典型案例","url":"/2022/05/22/闲话篇-也算碰到了为老不尊和坏人变老了的典型案例/"},{"title":"记录下 phpunit 
的入门使用方法","url":"/2022/10/16/记录下-phpunit-的入门使用方法/"},{"title":"闲话篇-路遇神逻辑骑车带娃爹","url":"/2022/05/08/闲话篇-路遇神逻辑骑车带娃爹/"},{"title":"难得的大扫除","url":"/2022/04/10/难得的大扫除/"},{"title":"记录下 redis 的一些使用方法","url":"/2022/10/30/记录下-redis-的一些使用方法/"},{"title":"记录下把小米路由器 4A 千兆版刷成 openwrt 的过程","url":"/2023/05/21/记录下把小米路由器-4A-千兆版刷成-openwrt-的过程/"}]
\ No newline at end of file
+[{"title":"2019年终总结","url":"/2020/02/01/2019年终总结/"},{"title":"2020 年终总结","url":"/2021/03/31/2020-年终总结/"},{"title":"村上春树《1Q84》读后感","url":"/2019/12/18/1Q84读后感/"},{"title":"2020年中总结","url":"/2020/07/11/2020年中总结/"},{"title":"2021 年终总结","url":"/2022/01/22/2021-年终总结/"},{"title":"34_Search_for_a_Range","url":"/2016/08/14/34-Search-for-a-Range/"},{"title":"2022 年终总结","url":"/2023/01/15/2022-年终总结/"},{"title":"AQS篇二 之 Condition 浅析笔记","url":"/2021/02/21/AQS-之-Condition-浅析笔记/"},{"title":"AbstractQueuedSynchronizer","url":"/2019/09/23/AbstractQueuedSynchronizer/"},{"title":"AQS篇一","url":"/2021/02/14/AQS篇一/"},{"title":"add-two-number","url":"/2015/04/14/Add-Two-Number/"},{"title":"Apollo 如何获取当前环境","url":"/2022/09/04/Apollo-如何获取当前环境/"},{"title":"Apollo 客户端启动过程分析","url":"/2022/09/18/Apollo-客户端启动过程分析/"},{"title":"Apollo 的 value 注解是怎么自动更新的","url":"/2020/11/01/Apollo-的-value-注解是怎么自动更新的/"},{"title":"Clone Graph Part I","url":"/2014/12/30/Clone-Graph-Part-I/"},{"title":"Comparator使用小记","url":"/2020/04/05/Comparator使用小记/"},{"title":"2021 年中总结","url":"/2021/07/18/2021-年中总结/"},{"title":"Disruptor 系列三","url":"/2022/09/25/Disruptor-系列三/"},{"title":"Disruptor 系列一","url":"/2022/02/13/Disruptor-系列一/"},{"title":"Disruptor 系列二","url":"/2022/02/27/Disruptor-系列二/"},{"title":"Dubbo 使用的几个记忆点","url":"/2022/04/02/Dubbo-使用的几个记忆点/"},{"title":"Filter, Interceptor, Aop, 啥, 啥, 啥? 这些都是啥?","url":"/2020/08/22/Filter-Intercepter-Aop-啥-啥-啥-这些都是啥/"},{"title":"Leetcode 021 合并两个有序链表 ( Merge Two Sorted Lists ) 题解分析","url":"/2021/10/07/Leetcode-021-合并两个有序链表-Merge-Two-Sorted-Lists-题解分析/"},{"title":"G1收集器概述","url":"/2020/02/09/G1收集器概述/"},{"title":"JVM源码分析之G1垃圾收集器分析一","url":"/2019/12/07/JVM-G1-Part-1/"},{"title":"Leetcode 105 从前序与中序遍历序列构造二叉树(Construct Binary Tree from Preorder and Inorder Traversal) 题解分析","url":"/2020/12/13/Leetcode-105-从前序与中序遍历序列构造二叉树-Construct-Binary-Tree-from-Preorder-and-Inorder-Traversal-题解分析/"},{"title":"Leetcode 053 最大子序和 ( Maximum Subarray ) 题解分析","url":"/2021/11/28/Leetcode-053-最大子序和-Maximum-Subarray-题解分析/"},{"title":"Leetcode 121 买卖股票的最佳时机(Best Time to Buy and Sell Stock) 题解分析","url":"/2021/03/14/Leetcode-121-买卖股票的最佳时机-Best-Time-to-Buy-and-Sell-Stock-题解分析/"},{"title":"Leetcode 1115 交替打印 FooBar ( Print FooBar Alternately *Medium* ) 题解分析","url":"/2022/05/01/Leetcode-1115-交替打印-FooBar-Print-FooBar-Alternately-Medium-题解分析/"},{"title":"Leetcode 028 实现 strStr() ( Implement strStr() ) 题解分析","url":"/2021/10/31/Leetcode-028-实现-strStr-Implement-strStr-题解分析/"},{"title":"Leetcode 124 二叉树中的最大路径和(Binary Tree Maximum Path Sum) 题解分析","url":"/2021/01/24/Leetcode-124-二叉树中的最大路径和-Binary-Tree-Maximum-Path-Sum-题解分析/"},{"title":"Leetcode 1260 二维网格迁移 ( Shift 2D Grid *Easy* ) 题解分析","url":"/2022/07/22/Leetcode-1260-二维网格迁移-Shift-2D-Grid-Easy-题解分析/"},{"title":"Leetcode 155 最小栈(Min Stack) 题解分析","url":"/2020/12/06/Leetcode-155-最小栈-Min-Stack-题解分析/"},{"title":"Leetcode 16 最接近的三数之和 ( 3Sum Closest *Medium* ) 题解分析","url":"/2022/08/06/Leetcode-16-最接近的三数之和-3Sum-Closest-Medium-题解分析/"},{"title":"Leetcode 160 相交链表(intersection-of-two-linked-lists) 题解分析","url":"/2021/01/10/Leetcode-160-相交链表-intersection-of-two-linked-lists-题解分析/"},{"title":"Leetcode 104 二叉树的最大深度(Maximum Depth of Binary Tree) 题解分析","url":"/2020/10/25/Leetcode-104-二叉树的最大深度-Maximum-Depth-of-Binary-Tree-题解分析/"},{"title":"Leetcode 2 Add Two Numbers 题解分析","url":"/2020/10/11/Leetcode-2-Add-Two-Numbers-题解分析/"},{"title":"Leetcode 20 有效的括号 ( Valid Parentheses *Easy* ) 题解分析","url":"/2022/07/02/Leetcode-20-有效的括号-Valid-Parentheses-Easy-题解分析/"},{"title":"Leetcode 234 回文链表(Palindrome Linked List) 
题解分析","url":"/2020/11/15/Leetcode-234-回文联表-Palindrome-Linked-List-题解分析/"},{"title":"Leetcode 1862 向下取整数对和 ( Sum of Floored Pairs *Hard* ) 题解分析","url":"/2022/09/11/Leetcode-1862-向下取整数对和-Sum-of-Floored-Pairs-Hard-题解分析/"},{"title":"Leetcode 236 二叉树的最近公共祖先(Lowest Common Ancestor of a Binary Tree) 题解分析","url":"/2021/05/23/Leetcode-236-二叉树的最近公共祖先-Lowest-Common-Ancestor-of-a-Binary-Tree-题解分析/"},{"title":"Leetcode 349 两个数组的交集 ( Intersection of Two Arrays *Easy* ) 题解分析","url":"/2022/03/07/Leetcode-349-两个数组的交集-Intersection-of-Two-Arrays-Easy-题解分析/"},{"title":"Leetcode 278 第一个错误的版本 ( First Bad Version *Easy* ) 题解分析","url":"/2022/08/14/Leetcode-278-第一个错误的版本-First-Bad-Version-Easy-题解分析/"},{"title":"Leetcode 3 Longest Substring Without Repeating Characters 题解分析","url":"/2020/09/20/Leetcode-3-Longest-Substring-Without-Repeating-Characters-题解分析/"},{"title":"Leetcode 4 寻找两个正序数组的中位数 ( Median of Two Sorted Arrays *Hard* ) 题解分析","url":"/2022/03/27/Leetcode-4-寻找两个正序数组的中位数-Median-of-Two-Sorted-Arrays-Hard-题解分析/"},{"title":"Leetcode 48 旋转图像(Rotate Image) 题解分析","url":"/2021/05/01/Leetcode-48-旋转图像-Rotate-Image-题解分析/"},{"title":"Leetcode 42 接雨水 (Trapping Rain Water) 题解分析","url":"/2021/07/04/Leetcode-42-接雨水-Trapping-Rain-Water-题解分析/"},{"title":"Leetcode 698 划分为k个相等的子集 ( Partition to K Equal Sum Subsets *Medium* ) 题解分析","url":"/2022/06/19/Leetcode-698-划分为k个相等的子集-Partition-to-K-Equal-Sum-Subsets-Medium-题解分析/"},{"title":"Leetcode 83 删除排序链表中的重复元素 ( Remove Duplicates from Sorted List *Easy* ) 题解分析","url":"/2022/03/13/Leetcode-83-删除排序链表中的重复元素-Remove-Duplicates-from-Sorted-List-Easy-题解分析/"},{"title":"Leetcode 885 螺旋矩阵 III ( Spiral Matrix III *Medium* ) 题解分析","url":"/2022/08/23/Leetcode-885-螺旋矩阵-III-Spiral-Matrix-III-Medium-题解分析/"},{"title":"Linux 下 grep 命令的一点小技巧","url":"/2020/08/06/Linux-下-grep-命令的一点小技巧/"},{"title":"Headscale初体验以及踩坑记","url":"/2023/01/22/Headscale初体验以及踩坑记/"},{"title":"leetcode no.3","url":"/2015/04/15/Leetcode-No-3/"},{"title":"Maven实用小技巧","url":"/2020/02/16/Maven实用小技巧/"},{"title":"Number of 1 Bits","url":"/2015/03/11/Number-Of-1-Bits/"},{"title":"Leetcode 747 至少是其他数字两倍的最大数 ( Largest Number At Least Twice of Others *Easy* ) 题解分析","url":"/2022/10/02/Leetcode-747-至少是其他数字两倍的最大数-Largest-Number-At-Least-Twice-of-Others-Easy-题解分析/"},{"title":"Path Sum","url":"/2015/01/04/Path-Sum/"},{"title":"Redis_分布式锁","url":"/2019/12/10/Redis-Part-1/"},{"title":"Reverse Bits","url":"/2015/03/11/Reverse-Bits/"},{"title":"Reverse Integer","url":"/2015/03/13/Reverse-Integer/"},{"title":"MFC 模态对话框","url":"/2014/12/24/MFC 模态对话框/"},{"title":"ambari-summary","url":"/2017/05/09/ambari-summary/"},{"title":"binary-watch","url":"/2016/09/29/binary-watch/"},{"title":"docker-mysql-cluster","url":"/2016/08/14/docker-mysql-cluster/"},{"title":"docker比一般多一点的初学者介绍","url":"/2020/03/08/docker比一般多一点的初学者介绍/"},{"title":"docker比一般多一点的初学者介绍三","url":"/2020/03/21/docker比一般多一点的初学者介绍三/"},{"title":"two sum","url":"/2015/01/14/Two-Sum/"},{"title":"docker比一般多一点的初学者介绍四","url":"/2022/12/25/docker比一般多一点的初学者介绍四/"},{"title":"dubbo 客户端配置的一个重要知识点","url":"/2022/06/11/dubbo-客户端配置的一个重要知识点/"},{"title":"docker使用中发现的echo命令的一个小技巧及其他","url":"/2020/03/29/echo命令的一个小技巧/"},{"title":"headscale 添加节点","url":"/2023/07/09/headscale-添加节点/"},{"title":"gogs使用webhook部署react单页应用","url":"/2020/02/22/gogs使用webhook部署react单页应用/"},{"title":"docker比一般多一点的初学者介绍二","url":"/2020/03/15/docker比一般多一点的初学者介绍二/"},{"title":"github 小技巧-更新 github host key","url":"/2023/03/28/github-小技巧-更新-github-host-key/"},{"title":"C++ 指针使用中的一个小问题","url":"/2014/12/23/my-new-post/"},{"title":"mybatis 的 $ 和 # 
是有啥区别","url":"/2020/09/06/mybatis-的-和-是有啥区别/"},{"title":"minimum-size-subarray-sum-209","url":"/2016/10/11/minimum-size-subarray-sum-209/"},{"title":"mybatis 的 foreach 使用的注意点","url":"/2022/07/09/mybatis-的-foreach-使用的注意点/"},{"title":"mybatis 的缓存是怎么回事","url":"/2020/10/03/mybatis-的缓存是怎么回事/"},{"title":"hexo 配置系列-接入Algolia搜索","url":"/2023/04/02/hexo-配置系列-接入Algolia搜索/"},{"title":"mybatis系列-dataSource解析","url":"/2023/01/08/mybatis系列-dataSource解析/"},{"title":"mybatis系列-sql 类的简单使用","url":"/2023/03/12/mybatis系列-sql-类的简单使用/"},{"title":"java 中发起 http 请求时证书问题解决记录","url":"/2023/07/29/java-中发起-http-请求时证书问题解决记录/"},{"title":"mybatis系列-sql 类的简要分析","url":"/2023/03/19/mybatis系列-sql-类的简要分析/"},{"title":"invert-binary-tree","url":"/2015/06/22/invert-binary-tree/"},{"title":"dnsmasq的一个使用注意点","url":"/2023/04/16/dnsmasq的一个使用注意点/"},{"title":"mybatis系列-mybatis是如何初始化mapper的","url":"/2022/12/04/mybatis是如何初始化mapper的/"},{"title":"mybatis系列-foreach 解析","url":"/2023/06/11/mybatis系列-foreach-解析/"},{"title":"mybatis系列-connection连接池解析","url":"/2023/02/19/mybatis系列-connection连接池解析/"},{"title":"mybatis系列-入门篇","url":"/2022/11/27/mybatis系列-入门篇/"},{"title":"nginx 日志小记","url":"/2022/04/17/nginx-日志小记/"},{"title":"openresty","url":"/2019/06/18/openresty/"},{"title":"pcre-intro-and-a-simple-package","url":"/2015/01/16/pcre-intro-and-a-simple-package/"},{"title":"mybatis系列-第一条sql的更多细节","url":"/2022/12/18/mybatis系列-第一条sql的更多细节/"},{"title":"php-abstract-class-and-interface","url":"/2016/11/10/php-abstract-class-and-interface/"},{"title":"mybatis系列-第一条sql的细节","url":"/2022/12/11/mybatis系列-第一条sql的细节/"},{"title":"rabbitmq-tips","url":"/2017/04/25/rabbitmq-tips/"},{"title":"redis 的 rdb 和 COW 介绍","url":"/2021/08/15/redis-的-rdb-和-COW-介绍/"},{"title":"redis数据结构介绍-第一部分 SDS,链表,字典","url":"/2019/12/26/redis数据结构介绍/"},{"title":"redis数据结构介绍三-第三部分 整数集合","url":"/2020/01/10/redis数据结构介绍三/"},{"title":"redis数据结构介绍二-第二部分 跳表","url":"/2020/01/04/redis数据结构介绍二/"},{"title":"redis数据结构介绍五-第五部分 对象","url":"/2020/01/20/redis数据结构介绍五/"},{"title":"redis数据结构介绍六 快表","url":"/2020/01/22/redis数据结构介绍六/"},{"title":"redis数据结构介绍四-第四部分 压缩表","url":"/2020/01/19/redis数据结构介绍四/"},{"title":"redis淘汰策略复习","url":"/2021/08/01/redis淘汰策略复习/"},{"title":"mybatis系列-typeAliases系统","url":"/2023/01/01/mybatis系列-typeAliases系统/"},{"title":"redis系列介绍七-过期策略","url":"/2020/04/12/redis系列介绍七/"},{"title":"redis系列介绍八-淘汰策略","url":"/2020/04/18/redis系列介绍八/"},{"title":"redis过期策略复习","url":"/2021/07/25/redis过期策略复习/"},{"title":"rust学习笔记-所有权一","url":"/2021/04/18/rust学习笔记/"},{"title":"rust学习笔记-所有权二","url":"/2021/04/18/rust学习笔记-所有权二/"},{"title":"spark-little-tips","url":"/2017/03/28/spark-little-tips/"},{"title":"rust学习笔记-所有权三之切片","url":"/2021/05/16/rust学习笔记-所有权三之切片/"},{"title":"spring event 介绍","url":"/2022/01/30/spring-event-介绍/"},{"title":"springboot mappings 注册逻辑","url":"/2023/08/13/springboot-mappings-注册逻辑/"},{"title":"powershell 初体验","url":"/2022/11/13/powershell-初体验/"},{"title":"springboot web server 启动逻辑","url":"/2023/08/20/springboot-web-server-启动逻辑/"},{"title":"powershell 初体验二","url":"/2022/11/20/powershell-初体验二/"},{"title":"summary-ranges-228","url":"/2016/10/12/summary-ranges-228/"},{"title":"swoole-websocket-test","url":"/2016/07/13/swoole-websocket-test/"},{"title":"wordpress 忘记密码的一种解决方法","url":"/2021/12/05/wordpress-忘记密码的一种解决方法/"},{"title":"win 下 vmware 虚拟机搭建黑裙 nas 的小思路","url":"/2023/06/04/win-下-vmware-虚拟机搭建黑裙-nas-的小思路/"},{"title":"《垃圾回收算法手册读书》笔记之整理算法","url":"/2021/03/07/《垃圾回收算法手册读书》笔记之整理算法/"},{"title":"spring boot中的 http 接口返回 json 
形式的小注意点","url":"/2023/06/25/spring-boot中的-http-接口返回-json-形式的小注意点/"},{"title":"springboot 获取 web 应用中所有的接口 url","url":"/2023/08/06/springboot-获取-web-应用中所有的接口-url/"},{"title":"《长安的荔枝》读后感","url":"/2022/07/17/《长安的荔枝》读后感/"},{"title":"一个 nginx 的简单记忆点","url":"/2022/08/21/一个-nginx-的简单记忆点/"},{"title":"上次的其他 外行聊国足","url":"/2022/03/06/上次的其他-外行聊国足/"},{"title":"介绍一下 RocketMQ","url":"/2020/06/21/介绍一下-RocketMQ/"},{"title":"介绍下最近比较实用的端口转发","url":"/2021/11/14/介绍下最近比较实用的端口转发/"},{"title":"ssh 小技巧-端口转发","url":"/2023/03/26/ssh-小技巧-端口转发/"},{"title":"从清华美院学姐聊聊我们身边的恶人","url":"/2020/11/29/从清华美院学姐聊聊我们身边的恶人/"},{"title":"从丁仲礼被美国制裁聊点啥","url":"/2020/12/20/从丁仲礼被美国制裁聊点啥/"},{"title":"关于公共交通再吐个槽","url":"/2021/03/21/关于公共交通再吐个槽/"},{"title":"《寻羊历险记》读后感","url":"/2023/07/23/《寻羊历险记》读后感/"},{"title":"关于读书打卡与分享","url":"/2021/02/07/关于读书打卡与分享/"},{"title":"nas 中使用 tmm 刮削视频","url":"/2023/07/02/使用-tmm-刮削视频/"},{"title":"分享一次折腾老旧笔记本的体验","url":"/2023/02/05/分享一次折腾老旧笔记本的体验/"},{"title":"关于 npe 的一个小记忆点","url":"/2023/07/16/关于-npe-的一个小记忆点/"},{"title":"分享记录一下一个 git 操作方法","url":"/2022/02/06/分享记录一下一个-git-操作方法/"},{"title":"分享记录一下一个 scp 操作方法","url":"/2022/02/06/分享记录一下一个-scp-操作方法/"},{"title":"周末我在老丈人家打了天小工","url":"/2020/08/16/周末我在老丈人家打了天小工/"},{"title":"分享一次折腾老旧笔记本的体验-续续篇","url":"/2023/02/26/分享一次折腾老旧笔记本的体验-续续篇/"},{"title":"在老丈人家的小工记五","url":"/2020/10/18/在老丈人家的小工记五/"},{"title":"在老丈人家的小工记三","url":"/2020/09/13/在老丈人家的小工记三/"},{"title":"在老丈人家的小工记四","url":"/2020/09/26/在老丈人家的小工记四/"},{"title":"小工周记一","url":"/2023/03/05/小工周记一/"},{"title":"分享一次折腾老旧笔记本的体验-续篇","url":"/2023/02/12/分享一次折腾老旧笔记本的体验-续篇/"},{"title":"寄生虫观后感","url":"/2020/03/01/寄生虫观后感/"},{"title":"屯菜惊魂记","url":"/2022/04/24/屯菜惊魂记/"},{"title":"我是如何走上跑步这条不归路的","url":"/2020/07/26/我是如何走上跑步这条不归路的/"},{"title":"是何原因竟让两人深夜奔袭十公里","url":"/2022/06/05/是何原因竟让两人深夜奔袭十公里/"},{"title":"看完了扫黑风暴,聊聊感想","url":"/2021/10/24/看完了扫黑风暴-聊聊感想/"},{"title":"分享一次比较诡异的 Windows 下 U盘无法退出的经历","url":"/2023/01/29/分享一次比较诡异的-Windows-下-U盘无法退出的经历/"},{"title":"聊一下 RocketMQ 的 DefaultMQPushConsumer 源码","url":"/2020/06/26/聊一下-RocketMQ-的-Consumer/"},{"title":"聊一下 RocketMQ 的 NameServer 源码","url":"/2020/07/05/聊一下-RocketMQ-的-NameServer-源码/"},{"title":"给小电驴上牌","url":"/2022/03/20/给小电驴上牌/"},{"title":"聊一下 RocketMQ 的消息存储之 MMAP","url":"/2021/09/04/聊一下-RocketMQ-的消息存储/"},{"title":"搬运两个 StackOverflow 上的 Mysql 编码相关的问题解答","url":"/2022/01/16/搬运两个-StackOverflow-上的-Mysql-编码相关的问题解答/"},{"title":"在 wsl 2 中开启 ssh 连接","url":"/2023/04/23/在-wsl-2-中开启-ssh-连接/"},{"title":"聊一下 RocketMQ 的消息存储三","url":"/2021/10/03/聊一下-RocketMQ-的消息存储三/"},{"title":"聊一下 RocketMQ 的消息存储二","url":"/2021/09/12/聊一下-RocketMQ-的消息存储二/"},{"title":"聊一下 RocketMQ 的消息存储四","url":"/2021/10/17/聊一下-RocketMQ-的消息存储四/"},{"title":"聊一下 RocketMQ 的顺序消息","url":"/2021/08/29/聊一下-RocketMQ-的顺序消息/"},{"title":"聊一下 SpringBoot 中使用的 cglib 作为动态代理中的一个注意点","url":"/2021/09/19/聊一下-SpringBoot-中使用的-cglib-作为动态代理中的一个注意点/"},{"title":"聊一下 SpringBoot 中动态切换数据源的方法","url":"/2021/09/26/聊一下-SpringBoot-中动态切换数据源的方法/"},{"title":"聊一下 SpringBoot 设置非 web 应用的方法","url":"/2022/07/31/聊一下-SpringBoot-设置非-web-应用的方法/"},{"title":"深度学习入门初认识","url":"/2023/04/30/深度学习入门初认识/"},{"title":"聊在东京奥运会闭幕式这天","url":"/2021/08/08/聊在东京奥运会闭幕式这天/"},{"title":"聊在东京奥运会闭幕式这天-二","url":"/2021/08/19/聊在东京奥运会闭幕式这天-二/"},{"title":"聊聊 Dubbo 的 SPI 续之自适应拓展","url":"/2020/06/06/聊聊-Dubbo-的-SPI-续之自适应拓展/"},{"title":"聊聊 Dubbo 的 SPI","url":"/2020/05/31/聊聊-Dubbo-的-SPI/"},{"title":"聊聊 Java 中绕不开的 Synchronized 关键字-二","url":"/2021/06/27/聊聊-Java-中绕不开的-Synchronized-关键字-二/"},{"title":"聊聊 Java 的类加载机制一","url":"/2020/11/08/聊聊-Java-的类加载机制/"},{"title":"聊聊 Dubbo 的容错机制","url":"/2020/11/22/聊聊-Dubbo-的容错机制/"},{"title":"聊聊 Java 
中绕不开的 Synchronized 关键字","url":"/2021/06/20/聊聊-Java-中绕不开的-Synchronized-关键字/"},{"title":"聊聊 Java 自带的那些*逆天*工具","url":"/2020/08/02/聊聊-Java-自带的那些逆天工具/"},{"title":"聊聊 Sharding-Jdbc 的简单使用","url":"/2021/12/12/聊聊-Sharding-Jdbc-的简单使用/"},{"title":"聊聊 Java 的 equals 和 hashCode 方法","url":"/2021/01/03/聊聊-Java-的-equals-和-hashCode-方法/"},{"title":"聊聊 Sharding-Jdbc 分库分表下的分页方案","url":"/2022/01/09/聊聊-Sharding-Jdbc-分库分表下的分页方案/"},{"title":"聊聊 Java 的类加载机制二","url":"/2021/06/13/聊聊-Java-的类加载机制二/"},{"title":"聊聊 dubbo 的线程池","url":"/2021/04/04/聊聊-dubbo-的线程池/"},{"title":"聊聊 mysql 的 MVCC 续篇","url":"/2020/05/02/聊聊-mysql-的-MVCC-续篇/"},{"title":"聊一下关于怎么陪伴学习","url":"/2022/11/06/聊一下关于怎么陪伴学习/"},{"title":"聊聊 mysql 的 MVCC 续续篇之锁分析","url":"/2020/05/10/聊聊-mysql-的-MVCC-续续篇之加锁分析/"},{"title":"聊聊 mysql 索引的一些细节","url":"/2020/12/27/聊聊-mysql-索引的一些细节/"},{"title":"聊聊Java中的单例模式","url":"/2019/12/21/聊聊Java中的单例模式/"},{"title":"聊聊 redis 缓存的应用问题","url":"/2021/01/31/聊聊-redis-缓存的应用问题/"},{"title":"聊聊 RocketMQ 的 Broker 源码","url":"/2020/07/19/聊聊-RocketMQ-的-Broker-源码/"},{"title":"聊聊 Sharding-Jdbc 的简单原理初篇","url":"/2021/12/26/聊聊-Sharding-Jdbc-的简单原理初篇/"},{"title":"聊聊 mysql 的 MVCC","url":"/2020/04/26/聊聊-mysql-的-MVCC/"},{"title":"聊聊传说中的 ThreadLocal","url":"/2021/05/30/聊聊传说中的-ThreadLocal/"},{"title":"聊聊一次 brew update 引发的血案","url":"/2020/06/13/聊聊一次-brew-update-引发的血案/"},{"title":"聊聊我刚学会的应用诊断方法","url":"/2020/05/22/聊聊我刚学会的应用诊断方法/"},{"title":"聊聊我的远程工作体验","url":"/2022/06/26/聊聊我的远程工作体验/"},{"title":"聊聊我理解的分布式事务","url":"/2020/05/17/聊聊我理解的分布式事务/"},{"title":"聊聊 Linux 下的 top 命令","url":"/2021/03/28/聊聊-Linux-下的-top-命令/"},{"title":"聊聊最近平淡的生活之《花束般的恋爱》观后感","url":"/2021/12/31/聊聊最近平淡的生活之《花束般的恋爱》观后感/"},{"title":"聊聊 SpringBoot 自动装配","url":"/2021/07/11/聊聊SpringBoot-自动装配/"},{"title":"聊聊最近平淡的生活之又聊通勤","url":"/2021/11/07/聊聊最近平淡的生活/"},{"title":"聊聊最近平淡的生活之看《神探狄仁杰》","url":"/2021/12/19/聊聊最近平淡的生活之看《神探狄仁杰》/"},{"title":"聊聊最近平淡的生活之看看老剧","url":"/2021/11/21/聊聊最近平淡的生活之看看老剧/"},{"title":"聊聊那些加塞狗","url":"/2021/01/17/聊聊那些加塞狗/"},{"title":"聊聊这次换车牌及其他","url":"/2022/02/20/聊聊这次换车牌及其他/"},{"title":"聊聊部分公交车的设计bug","url":"/2021/12/05/聊聊部分公交车的设计bug/"},{"title":"聊聊如何识别和意识到日常生活中的各类危险","url":"/2021/06/06/聊聊如何识别和意识到日常生活中的各类危险/"},{"title":"聊聊给亲戚朋友的老电脑重装系统那些事儿","url":"/2021/05/09/聊聊给亲戚朋友的老电脑重装系统那些事儿/"},{"title":"记一个容器中 dubbo 注册的小知识点","url":"/2022/10/09/记一个容器中-dubbo-注册的小知识点/"},{"title":"记录一次折腾自组 nas 的失败经历-续篇","url":"/2023/05/14/记录一次折腾自组-nas-的失败经历-续篇/"},{"title":"聊聊厦门旅游的好与不好","url":"/2021/04/11/聊聊厦门旅游的好与不好/"},{"title":"记录一次折腾自组 nas 的失败经历-续续篇","url":"/2023/05/28/记录一次折腾自组-nas-的失败经历-续续篇/"},{"title":"记录一次折腾自组 nas 的失败经历-续续续篇","url":"/2023/06/18/记录一次折腾自组-nas-的失败经历-续续续篇/"},{"title":"记录一次折腾自组 nas 的失败经历","url":"/2023/05/07/记录一次折腾自组-nas-的失败经历/"},{"title":"记录下 zookeeper 集群迁移和易错点","url":"/2022/05/29/记录下-zookeeper-集群迁移/"},{"title":"记录下把小米路由器 4A 千兆版刷成 openwrt 的过程","url":"/2023/05/21/记录下把小米路由器-4A-千兆版刷成-openwrt-的过程/"},{"title":"这周末我又在老丈人家打了天小工","url":"/2020/08/30/这周末我又在老丈人家打了天小工/"},{"title":"重看了下《蛮荒记》说说感受","url":"/2021/10/10/重看了下《蛮荒记》说说感受/"},{"title":"记录下 phpunit 的入门使用方法之setUp和tearDown","url":"/2022/10/23/记录下-phpunit-的入门使用方法之setUp和tearDown/"},{"title":"闲话篇-也算碰到了为老不尊和坏人变老了的典型案例","url":"/2022/05/22/闲话篇-也算碰到了为老不尊和坏人变老了的典型案例/"},{"title":"闲聊下乘公交的用户体验","url":"/2021/02/28/闲聊下乘公交的用户体验/"},{"title":"闲话篇-路遇神逻辑骑车带娃爹","url":"/2022/05/08/闲话篇-路遇神逻辑骑车带娃爹/"},{"title":"记录下 redis 的一些使用方法","url":"/2022/10/30/记录下-redis-的一些使用方法/"},{"title":"难得的大扫除","url":"/2022/04/10/难得的大扫除/"},{"title":"解决 网络文件夹目前是以其他用户名和密码进行映射的 问题","url":"/2023/04/09/解决-网络文件夹目前是以其他用户名和密码进行映射的/"},{"title":"记录下 phpunit 的入门使用方法","url":"/2022/10/16/记录下-phpunit-的入门使用方法/"},{"title":"记录下 Java 
Stream 的一些高效操作","url":"/2022/05/15/记录下-Java-Lambda-的一些高效操作/"}]
\ No newline at end of file
diff --git a/search.xml b/search.xml
index 4f184ab27d..ec78d93937 100644
--- a/search.xml
+++ b/search.xml
@@ -1,25 +1,5 @@
 
 
-  
-    村上春树《1Q84》读后感
-    /2019/12/18/1Q84%E8%AF%BB%E5%90%8E%E6%84%9F/
-    看完了村上春树的《1Q84》,这应该是第五本看的他的书了,继 跑步,挪威的森林,刺杀骑士团长,海边的卡夫卡之后,不是其中最长的,好像是海边的卡夫卡还是刺杀骑士团长比较长一点,都是在微信读书上看的,比较方便,最开始在上面看的是高晓松的《鱼羊野史》,不知道为啥取这个名字,但是还是满吸引我的,不过由于去年的种种,没有很多心思把它看完,而且本身的组织形式就是比较松散的,看到哪算哪,其实一些野史部分是我比较喜欢,有些谈到人物的就不太有兴趣,而且类似于大祥哥吃的东西,反正都是哇,怎么这么好吃,嗯,太爱(niu)你(bi)了,高晓松就是这个人是我最喜欢的 xxx 家,我也没去细究过他有没有说重复过,反正是不太爱,后来因为这书还一度对战争史有了浓厚的兴趣,然而事实告诉我,大部头的战争史,其实正史我是真的啃不下去,我可能只对其中 10%的内容感兴趣,不过终于也在今年把它看完了,好像高晓松的晓说也最终季了,貌似其中讲朝鲜战争的还被和谐了,看样子是说出了一些故事(truth)。

-

本来只是想把 《1Q84》的读后感写下,现在觉得还是把这篇当成我今年的读书总结吧,不过先从《1Q84》说起。

-

严格来讲,这不是很书面化的读后感,可能我想写的也只是像聊天一样的说下我读过的书,包括的技术博客其实也是类似的,以后或许会转变,但是目前水平如此吧,写多了可能会变好,也可能不会。

-

开始正文吧,这书有点类似于海边的卡夫卡,一开始是通过两条故事线,穿插着叙述,一条是青豆的,不算是个职业杀手的女杀手,要去解决一个经常家暴的斯文败类,穿着描述得比较性感吧,杀人方式是通过比较长的细针,从脖子后面一个精巧的位置插入,可以造成是未知原因死亡的假象,可能会推断成心梗之类的,这里有个前置的细节,就是青豆是乘坐一辆很高级的出租车,内饰什么的都非常有质感,有点不像一辆出租车,然后车里放了一首比较小众的歌,雅纳切克的《小交响曲》,但是青豆知道它,这跟后面的情节也有些许关系,这是女主人公青豆的出场;相应的男主的出场印象不是太深刻,男主叫天吾,是个不知名的作家,跟一个叫小松的编辑有比较好的关系,虽然天吾还没有拿到比较有分量的奖项,但是小松很看好他,也让他帮忙审校一个新作家奖的投稿文章,虽然天吾自身还没获得过这个奖,天吾还有个正式工作,是当数学老师,天吾在学生时代是个数学天才,但后面有对文学产生了兴趣,文学还不足以养活自己,靠着教课还是能保持温饱;

-

接下来是正式故事的起点了,就是小松收到了一部小说投稿,名叫《空气蛹》,是个叫深绘里的女孩子投的稿,小松对他赋予了很高的评价,这里好像记岔了,好像是天吾对这部小说很有好感,但是小松比较怀疑,然后小松看了之后也有了浓厚的兴趣,这里就是开端了,小松想让天吾来重写润色这部《空气蛹》,因为故事本身很有分量,但是描写手法叙事方式等都很拙劣,而天吾正好擅长这个,小松对天吾的评价是,描写技巧无可挑剔,就是故事主体的火花还没际遇迸发,需要一个导火索,这个就可以类比我们程序员,很多比较初中级的程序员主要擅长在原来的代码上修修改改或者给他分配个小功能,比较高级的程序员就需要能做一些项目的架构设计,核心的技术方案设计,以前我也觉得写文档这个比较无聊,但是当一个项目真的比较庞大,复杂的时候,整体和核心部分的架构设计和方案还是需要有文档沉淀的,不然别人不知道没法接受,自己过段时间也会忘记。

-

对于小松的这个建议,他的初衷是想搅一搅这个死气沉沉套路颇深的文坛,因为本身《空气蛹》这部小说的内容很吸引人,小松想通过天吾的润色补充让这部小说冲击新人奖,有种恶作剧的意图,天吾对此表示很多担心和顾虑,小松的这个建议其实也是一种文学作假,有两方面的担心,一方面是原作者深绘里是否同意如此操作,一方面是外界如果发现了这个事实会有什么样的后果,但是小松表示不用担心,前一步由小松牵线,让天吾跟原作者深绘里当面沟通这个代写是否被允许,结果当然是被允许了,这里有了对深绘里的初步描写,按我的理解是比较仙的感觉,然后语言沟通有些吃力,或者说有她自己的特色,当面沟通时貌似是让深绘里回去再考虑下,然后后面再由天吾去深绘里寄宿的戎野老师家沟通具体的细节。

-

2019年12月18日23:37:19 更新
去到戎野老师家之后,天吾知道了关于深绘里的一些事情,深绘里的父亲与戎野老师应该是老友,深绘里的父亲在当初成立了一个叫”先驱”的公社,一个独立运行的社会组织,以运营农场作为物资来源,追求更为松散的共同体,即不过分激进地公有制,进行松散的共同生活,承认私有财产,简而言之就是这样一个能稳定存活下来的独立社会组织,但是随着稳定运行,内部的激进派和稳健派开始出现分歧,不可磨合,后来两派就分裂了,深绘里的父亲,深田保留在了稳健派,但是此时其实深田保内心是矛盾的,以为一开始其实是他倡导的独立革命才组织起了这群人,然而现在他又认清了现实社会已经不太相信能通过革命来独立的可能性,后来激进派便开始越加封闭,而且进行军事训练和思想教育,而后这个先驱的激进派别便有了新的名字”黎明”,深绘里也是在此时从先驱逃离来投靠戎野老师
暂时先写到这,未完待续~

-]]>
- - 生活 - 读后感 - 村上春树 - - - 读后感 - -
2019年终总结 /2020/02/01/2019%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93/ @@ -50,70 +30,67 @@ - 2020年中总结 - /2020/07/11/2020%E5%B9%B4%E4%B8%AD%E6%80%BB%E7%BB%93/ - 很快2020 年就过了一半了,而且是今年这么特殊的一年,很多事情都发生的出乎意料,疫情这个绕不过去的话题,之前写了点比较愤青的文字,感觉不太适合发出来就烂在草稿箱里吧,这个目前一大影响估计是今年都没办法完全摘下口罩了,前面几个月来回杭州都开车,因为彭埠大桥不通行了,实在是非常不方便,每条路都灰常堵,心累,吐槽下杭州的交通规划和交警同志,工作实在做的不咋地。

-

另外一件是就是蜗壳,从前不知道黝黑蜗壳是啥意思,只是经常会在他的视频里看到,大学的时候在缘网下了一个集锦,炒鸡帅气,各种空接扣篮,越来越能明白那句“你永远不知道意外和明天不知道哪个会先来,且行且珍惜”的含义,只是听了很多道理,依然活不好这一生,知易行难,王阳明真的是这方面的大师,有空可以看看这方面的书,一直想写写我跟篮球跟蜗壳的这十几年,争取能早日写好吧,不过得找个静得下来的时候写。

-

正事方面上半年还是挺让人失望的,没有达成一些目标,应该还是能力不足吧,技术方面分析一下还是停留在看的表面层,有些实操的,或者结合业务场景的能力不太行,算是在坚持写写 blog,主要是被这个每周一篇的目标推着走,有时会比较焦虑,内容产出也还比较差,希望能在后面有些改善,可能会降低频率,只是觉得降低了也不一定能有比较好的提升,无法战胜自己的惰性,所以暂时还是坚持下这个目标吧,还有就是 coding 能力,有时候也应该刷刷题,提升思维敏捷度,大脑用太少可能生锈了,况且本来就不是很有优势,虽然失望也只能继续努力吧,日拱一卒,来日方长,加油吧~😔

-

还有就是跑步减肥了,截止今天,上半年跑了 136 公里了,因为疫情影响,农历年后是从 4 月 17 号开始跑的,去年跑到了 300 公里,奖励自己了一个手表(真的挺后悔的,还不如 200 块买个手表),今年希望可以能在这个基础上再进一步,一直跟领导说,跑步算是我坚持下来的唯一一个好习惯了,618 买了个跑步机,周末回家了可以不受天气影响的多跑跑,不过如果天气好可能还是会出去跑跑,跑步机跑道多少还是有点拘束,只是感觉可能是我还是吃得太多了🤦‍♂️,效果不是很明显,还在 80 这个坎徘徊,等于浪费了大半年,可能是年初的项目太费心力,压力比较大,吃得更多,是不是可以算工伤😄,这方面也需要好好调整,可以放得开一点,虽然不太可能一下子到位,但是总要去努力下,随着年龄成长总要承担更多,也要看得开一点,没法事事如愿,尽力就好了,减肥这个事情还在结合一些俯卧撑啥的,希望也能坚持下去,加油吧,不知道原话怎么说的,意思是人类最大的勇敢就是看透了人世间的苦难,仍然热爱生活。我当然没可能让内心变得这么强大,试着去努力吧,奥力给!

+ 2020 年终总结 + /2021/03/31/2020-%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93/ + 拖更原因

这篇年终总结本来应该在农历过完年就出来的,结果是对没有受疫情影响的春节放假时间空闲情况预估太良好,虽然公司调了几天假,但是因为春节期间疫情状况比较好,本来酒店都不让接待聚餐什么的,后来统统放开,结果就是从初一到初六每天要不就是去亲戚家,要不就是去酒店饭店吃饭,计划很丰满,现实很骨感,时间感觉一下就没了,然后年后感觉有点犯懒了,所以才拖到现在。

+

生活-健身跑步

去年(19 年)的时候跑步突破了 300 公里,然后20 年给自己定了个 400 公里的目标,结果意料之中的没成功,原因可能疫情算一点吧,后面买了跑步机之后,基本周末回家都能跑一下,但是最后还是只跑了300 多公里,总的keep 记录跑量也没超过 1000 公里,所以跑步这个目标还是没成功的,不过还算是比去年多跑一点,这样也算后面好突破点,后面的目标就不定的太高了,每年能比前一年多一点就好,其实跑步已经从一种减肥方式变成一种习惯了,一周一次的跑步已经比较难有效减重了,但是对于保持精力和身体状态还是很有效和重要的,只是对于目前的体重还是要多减下去一些跑步才好,太重了对膝盖负担太大了,可惜还是时间呐,游泳骑车什么的都需要更苛刻的条件和时间,饮食呢控制起来比较难(贪吃
终于在 3 月底之前跑到了 1000 公里,迟了三个月,不过也总算达到了,只是体重控制还是不行,有试着走走楼梯,但是感觉对膝盖负担比较大,得再想想用什么方式

+

+

技术成长

一直提不起笔来写这篇年终总结还有个比较大的原因是觉得20 年的成长不如预期,大小目标都没怎么完成,比如深入了解 jvm,是想能有些深入的见解,而不再是某些点的比较片面的理解,系统性的归纳总结也比较少,每个方向或多或少有些看法和理解,但是不全面,一些东西看过了也会忘记,需要温故而知新,比如 AQS 的内容,第一次读其实理解比较浅,后面就强迫自己去读,去写,才有了一些比之前更深入的理解,因为很多文章都是带有作者思路的引导,适不适合自己都要看是否能从他的思路把它看懂,有些就差别很大,这个跟看书也一样,有些书大众一致推荐,一般情况下大多是经典的好的,但是也有可能是不太适合自己的,可能有时候机缘巧合看到的反而让人茅塞顿开,在 todo 里已经积攒了好多的点和面需要去学习实践,一方面是自己懒,一方面是时间也相对偏少,看看 21 年能不能有所提升,加强“时间管理”,哈哈

+

技术上主要是看了 mysql 的 mvcc 相关内容,rocketmq 的,redis 的代码,还有 mybatis 等,其实每一个都能写很多,也有很多值得学习的,需要全面系统学习,之前想好好画一个思维导图,将整个技术体系都梳理下,还只做了一点点,方式也有点问题,应该从大到小,而不是深度优先,细节有很多,每一个方面都有自己比较熟悉擅长的,也有不太了解的,可以做一个评分,这个也是亟待改善的,希望今年能完成。

+

博客

博客方面 20 年一年整是写了 53 篇,差不多是一周一篇的节奏,这个还是不错的,虽然博客质量参差不齐,但是这个更新频率还是比较好的,并且也定了个潜规则,可以一周技术一周生活,这样能缓解水文的频率,提高些技术文章的质量,虽然结果并没有好多少,不过感觉还是可以这么坚持的,能提高一些技术文章的质量那就更好了

]]>
生活 - 年中总结 + 年终总结 + 2020 + 年终总结 2020 生活 + 年终总结 2020 - 年中总结 + 2021 + 拖更
- 2021 年中总结 - /2021/07/18/2021-%E5%B9%B4%E4%B8%AD%E6%80%BB%E7%BB%93/ - 又到半年总结时,第一次写总结类型的文章感觉挺好写的,但是后面总觉得这过去的一段时间所做的事情,能力上的成长低于预期,但是是需要总结下,找找问题,顺便展望下未来。

-

这一年做的最让自己满意的应该就是看了一些书,由折腾群洋总发起的读书打卡活动,到目前为止已经读完了这几本书,《cUrl 必知必会》,《古董局中局 1》,《古董局中局 2》,《算法图解》,《每天 5 分钟玩转 Kubernetes》《幸福了吗?》《高可用可伸缩微服务架构:基于 Dubbo、Spring Cloud和 Service Mesh》《Rust 权威指南》后面可以写个专题说说看的这些书,虽然每天打卡如果时间安排不好,并且看的书像 rust 这样比较难的话还是会有点小焦虑,不过也是个调整过程,一方面可以在白天就抽空看一会,然后也不必要每次都看很大一章,注重吸收。

-

技术上的成长的话,有一些比较小的长进吧,对于一些之前忽视的 synchronized,ThreadLocal 和 AQS 等知识点做了下查漏补缺了,然后多了解了一些 Java 垃圾回收的内容,但是在实操上还是比较欠缺,成型的技术方案,架构上所谓的优化也比较少,一些想法也还有考虑不周全的地方,还需要多花时间和心思去学习加强,特别是在目前已经有的基础上如何做系统深层次的优化,既不要是鸡毛蒜皮的,也不能出现一些不可接受的问题和故障,这是个很重要的课题,需要好好学习,后面考虑定一些周期性目标,两个月左右能有一些成果和总结。

-

另外一部分是自己的服务,因为 ucloud 的机器太贵就没续费了,所以都迁移到腾讯云的小机器上了,顺便折腾了一点点 traefik,但是还很不熟练,不太习惯这一套,一方面是 docker 还不习惯,这也加重了对这套环境的不适应,还是习惯裸机部署,另一方面就是 k8s 了,家里的机器还没虚拟化,没有很好的条件可以做实验,这也是读书打卡的一个没做好的点,整体的学习效果受限于深度和实操,后面是看都是用 traefik,也找到了一篇文章可以 traefik 转发到裸机应用,因为主仓库用的是裸机的 gogs。

-

还有就是运动减肥上,唉,这又是很大的一个痛点,基本没效果,只是还算稳定,昨天看到一个视频说还需要力量训练来增肌,以此可以提升基础代谢,打算往这个方向尝试下,因为今天没有疫情限制了,在 6 月底完成了 200 公里的跑步小目标,只是有些膝盖跟大腿根外侧不适,抽空得去看下医生,后面打算每天也能做点卷腹跟俯卧撑。

-

下半年还希望能继续多看看书,比很多网上各种乱七八糟的文章会好很多,结合豆瓣评分,找一些评价高一些的文章,但也不是说分稍低点的就不行,有些也看人是不是适合,一般 6 分以上评价比较多的就可以试试。

+ 村上春树《1Q84》读后感 + /2019/12/18/1Q84%E8%AF%BB%E5%90%8E%E6%84%9F/ + 看完了村上春树的《1Q84》,这应该是第五本看的他的书了,继 跑步,挪威的森林,刺杀骑士团长,海边的卡夫卡之后,不是其中最长的,好像是海边的卡夫卡还是刺杀骑士团长比较长一点,都是在微信读书上看的,比较方便,最开始在上面看的是高晓松的《鱼羊野史》,不知道为啥取这个名字,但是还是满吸引我的,不过由于去年的种种,没有很多心思把它看完,而且本身的组织形式就是比较松散的,看到哪算哪,其实一些野史部分是我比较喜欢,有些谈到人物的就不太有兴趣,而且类似于大祥哥吃的东西,反正都是哇,怎么这么好吃,嗯,太爱(niu)你(bi)了,高晓松就是这个人是我最喜欢的 xxx 家,我也没去细究过他有没有说重复过,反正是不太爱,后来因为这书还一度对战争史有了浓厚的兴趣,然而事实告诉我,大部头的战争史,其实正史我是真的啃不下去,我可能只对其中 10%的内容感兴趣,不过终于也在今年把它看完了,好像高晓松的晓说也最终季了,貌似其中讲朝鲜战争的还被和谐了,看样子是说出了一些故事(truth)。

+

本来只是想把 《1Q84》的读后感写下,现在觉得还是把这篇当成我今年的读书总结吧,不过先从《1Q84》说起。

+

严格来讲,这不是很书面化的读后感,可能我想写的也只是像聊天一样的说下我读过的书,包括的技术博客其实也是类似的,以后或许会转变,但是目前水平如此吧,写多了可能会变好,也可能不会。

+

开始正文吧,这书有点类似于海边的卡夫卡,一开始是通过两条故事线,穿插着叙述,一条是青豆的,不算是个职业杀手的女杀手,要去解决一个经常家暴的斯文败类,穿着描述得比较性感吧,杀人方式是通过比较长的细针,从脖子后面一个精巧的位置插入,可以造成是未知原因死亡的假象,可能会推断成心梗之类的,这里有个前置的细节,就是青豆是乘坐一辆很高级的出租车,内饰什么的都非常有质感,有点不像一辆出租车,然后车里放了一首比较小众的歌,雅纳切克的《小交响曲》,但是青豆知道它,这跟后面的情节也有些许关系,这是女主人公青豆的出场;相应的男主的出场印象不是太深刻,男主叫天吾,是个不知名的作家,跟一个叫小松的编辑有比较好的关系,虽然天吾还没有拿到比较有分量的奖项,但是小松很看好他,也让他帮忙审校一个新作家奖的投稿文章,虽然天吾自身还没获得过这个奖,天吾还有个正式工作,是当数学老师,天吾在学生时代是个数学天才,但后面有对文学产生了兴趣,文学还不足以养活自己,靠着教课还是能保持温饱;

+

接下来是正式故事的起点了,就是小松收到了一部小说投稿,名叫《空气蛹》,是个叫深绘里的女孩子投的稿,小松对他赋予了很高的评价,这里好像记岔了,好像是天吾对这部小说很有好感,但是小松比较怀疑,然后小松看了之后也有了浓厚的兴趣,这里就是开端了,小松想让天吾来重写润色这部《空气蛹》,因为故事本身很有分量,但是描写手法叙事方式等都很拙劣,而天吾正好擅长这个,小松对天吾的评价是,描写技巧无可挑剔,就是故事主体的火花还没际遇迸发,需要一个导火索,这个就可以类比我们程序员,很多比较初中级的程序员主要擅长在原来的代码上修修改改或者给他分配个小功能,比较高级的程序员就需要能做一些项目的架构设计,核心的技术方案设计,以前我也觉得写文档这个比较无聊,但是当一个项目真的比较庞大,复杂的时候,整体和核心部分的架构设计和方案还是需要有文档沉淀的,不然别人不知道没法接受,自己过段时间也会忘记。

+

对于小松的这个建议,他的初衷是想搅一搅这个死气沉沉套路颇深的文坛,因为本身《空气蛹》这部小说的内容很吸引人,小松想通过天吾的润色补充让这部小说冲击新人奖,有种恶作剧的意图,天吾对此表示很多担心和顾虑,小松的这个建议其实也是一种文学作假,有两方面的担心,一方面是原作者深绘里是否同意如此操作,一方面是外界如果发现了这个事实会有什么样的后果,但是小松表示不用担心,前一步由小松牵线,让天吾跟原作者深绘里当面沟通这个代写是否被允许,结果当然是被允许了,这里有了对深绘里的初步描写,按我的理解是比较仙的感觉,然后语言沟通有些吃力,或者说有她自己的特色,当面沟通时貌似是让深绘里回去再考虑下,然后后面再由天吾去深绘里寄宿的戎野老师家沟通具体的细节。

+

2019年12月18日23:37:19 更新
去到戎野老师家之后,天吾知道了关于深绘里的一些事情,深绘里的父亲与戎野老师应该是老友,深绘里的父亲在当初成立了一个叫”先驱”的公社,一个独立运行的社会组织,以运营农场作为物资来源,追求更为松散的共同体,即不过分激进地公有制,进行松散的共同生活,承认私有财产,简而言之就是这样一个能稳定存活下来的独立社会组织,但是随着稳定运行,内部的激进派和稳健派开始出现分歧,不可磨合,后来两派就分裂了,深绘里的父亲,深田保留在了稳健派,但是此时其实深田保内心是矛盾的,以为一开始其实是他倡导的独立革命才组织起了这群人,然而现在他又认清了现实社会已经不太相信能通过革命来独立的可能性,后来激进派便开始越加封闭,而且进行军事训练和思想教育,而后这个先驱的激进派别便有了新的名字”黎明”,深绘里也是在此时从先驱逃离来投靠戎野老师
暂时先写到这,未完待续~

]]>
生活 - 年中总结 - 2021 + 读后感 + 村上春树 - 生活 - 2021 - 年中总结 - 技术 - 读书 + 读后感
- 2020 年终总结 - /2021/03/31/2020-%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93/ - 拖更原因

这篇年终总结本来应该在农历过完年就出来的,结果是对没有受疫情影响的春节放假时间空闲情况预估太良好,虽然公司调了几天假,但是因为春节期间疫情状况比较好,本来酒店都不让接待聚餐什么的,后来统统放开,结果就是从初一到初六每天要不就是去亲戚家,要不就是去酒店饭店吃饭,计划很丰满,现实很骨感,时间感觉一下就没了,然后年后感觉有点犯懒了,所以才拖到现在。

-

生活-健身跑步

去年(19 年)的时候跑步突破了 300 公里,然后20 年给自己定了个 400 公里的目标,结果意料之中的没成功,原因可能疫情算一点吧,后面买了跑步机之后,基本周末回家都能跑一下,但是最后还是只跑了300 多公里,总的keep 记录跑量也没超过 1000 公里,所以跑步这个目标还是没成功的,不过还算是比去年多跑一点,这样也算后面好突破点,后面的目标就不定的太高了,每年能比前一年多一点就好,其实跑步已经从一种减肥方式变成一种习惯了,一周一次的跑步已经比较难有效减重了,但是对于保持精力和身体状态还是很有效和重要的,只是对于目前的体重还是要多减下去一些跑步才好,太重了对膝盖负担太大了,可惜还是时间呐,游泳骑车什么的都需要更苛刻的条件和时间,饮食呢控制起来比较难(贪吃
终于在 3 月底之前跑到了 1000 公里,迟了三个月,不过也总算达到了,只是体重控制还是不行,有试着走走楼梯,但是感觉对膝盖负担比较大,得再想想用什么方式

-

-

技术成长

一直提不起笔来写这篇年终总结还有个比较大的原因是觉得20 年的成长不如预期,大小目标都没怎么完成,比如深入了解 jvm,是想能有些深入的见解,而不再是某些点的比较片面的理解,系统性的归纳总结也比较少,每个方向或多或少有些看法和理解,但是不全面,一些东西看过了也会忘记,需要温故而知新,比如 AQS 的内容,第一次读其实理解比较浅,后面就强迫自己去读,去写,才有了一些比之前更深入的理解,因为很多文章都是带有作者思路的引导,适不适合自己都要看是否能从他的思路把它看懂,有些就差别很大,这个跟看书也一样,有些书大众一致推荐,一般情况下大多是经典的好的,但是也有可能是不太适合自己的,可能有时候机缘巧合看到的反而让人茅塞顿开,在 todo 里已经积攒了好多的点和面需要去学习实践,一方面是自己懒,一方面是时间也相对偏少,看看 21 年能不能有所提升,加强“时间管理”,哈哈

-

技术上主要是看了 mysql 的 mvcc 相关内容,rocketmq 的,redis 的代码,还有 mybatis 等,其实每一个都能写很多,也有很多值得学习的,需要全面系统学习,之前想好好画一个思维导图,将整个技术体系都梳理下,还只做了一点点,方式也有点问题,应该从大到小,而不是深度优先,细节有很多,每一个方面都有自己比较熟悉擅长的,也有不太了解的,可以做一个评分,这个也是亟待改善的,希望今年能完成。

-

博客

博客方面 20 年一年整是写了 53 篇,差不多是一周一篇的节奏,这个还是不错的,虽然博客质量参差不齐,但是这个更新频率还是比较好的,并且也定了个潜规则,可以一周技术一周生活,这样能缓解水文的频率,提高些技术文章的质量,虽然结果并没有好多少,不过感觉还是可以这么坚持的,能提高一些技术文章的质量那就更好了

+ 2020年中总结 + /2020/07/11/2020%E5%B9%B4%E4%B8%AD%E6%80%BB%E7%BB%93/ + 很快2020 年就过了一半了,而且是今年这么特殊的一年,很多事情都发生的出乎意料,疫情这个绕不过去的话题,之前写了点比较愤青的文字,感觉不太适合发出来就烂在草稿箱里吧,这个目前一大影响估计是今年都没办法完全摘下口罩了,前面几个月来回杭州都开车,因为彭埠大桥不通行了,实在是非常不方便,每条路都灰常堵,心累,吐槽下杭州的交通规划和交警同志,工作实在做的不咋地。

+

另外一件是就是蜗壳,从前不知道黝黑蜗壳是啥意思,只是经常会在他的视频里看到,大学的时候在缘网下了一个集锦,炒鸡帅气,各种空接扣篮,越来越能明白那句“你永远不知道意外和明天不知道哪个会先来,且行且珍惜”的含义,只是听了很多道理,依然活不好这一生,知易行难,王阳明真的是这方面的大师,有空可以看看这方面的书,一直想写写我跟篮球跟蜗壳的这十几年,争取能早日写好吧,不过得找个静得下来的时候写。

+

正事方面上半年还是挺让人失望的,没有达成一些目标,应该还是能力不足吧,技术方面分析一下还是停留在看的表面层,有些实操的,或者结合业务场景的能力不太行,算是在坚持写写 blog,主要是被这个每周一篇的目标推着走,有时会比较焦虑,内容产出也还比较差,希望能在后面有些改善,可能会降低频率,只是觉得降低了也不一定能有比较好的提升,无法战胜自己的惰性,所以暂时还是坚持下这个目标吧,还有就是 coding 能力,有时候也应该刷刷题,提升思维敏捷度,大脑用太少可能生锈了,况且本来就不是很有优势,虽然失望也只能继续努力吧,日拱一卒,来日方长,加油吧~😔

+

还有就是跑步减肥了,截止今天,上半年跑了 136 公里了,因为疫情影响,农历年后是从 4 月 17 号开始跑的,去年跑到了 300 公里,奖励自己了一个手表(真的挺后悔的,还不如 200 块买个手表),今年希望可以能在这个基础上再进一步,一直跟领导说,跑步算是我坚持下来的唯一一个好习惯了,618 买了个跑步机,周末回家了可以不受天气影响的多跑跑,不过如果天气好可能还是会出去跑跑,跑步机跑道多少还是有点拘束,只是感觉可能是我还是吃得太多了🤦‍♂️,效果不是很明显,还在 80 这个坎徘徊,等于浪费了大半年,可能是年初的项目太费心力,压力比较大,吃得更多,是不是可以算工伤😄,这方面也需要好好调整,可以放得开一点,虽然不太可能一下子到位,但是总要去努力下,随着年龄成长总要承担更多,也要看得开一点,没法事事如愿,尽力就好了,减肥这个事情还在结合一些俯卧撑啥的,希望也能坚持下去,加油吧,不知道原话怎么说的,意思是人类最大的勇敢就是看透了人世间的苦难,仍然热爱生活。我当然没可能让内心变得这么强大,试着去努力吧,奥力给!

]]>
生活 - 年终总结 - 2020 - 年终总结 + 年中总结 2020 生活 - 年终总结 2020 - 2021 - 拖更 + 年中总结
@@ -179,6 +156,22 @@ public: c++ + + 2022 年终总结 + /2023/01/15/2022-%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93/ + 一年又一年,时间匆匆,这一年过得不太容易,很多事情都是来得猝不及防,很多规划也照例是没有完成,今年更多了一些,又是比较丧的一篇总结
工作上的变化让我多理解了一些社会跟职场的现实吧,可能的确是我不够优秀,也可能是其他,说回我自身,在工作中今年应该是收获比较一般的一年,不能说没有,对原先不熟悉的业务的掌握程度有了比较大的提升,只是问题依旧存在,也挺难推动完全改变,只能尽自己所能,而这一点也主要是在团队中的定位因为前面说的一些原因,在前期不明确,限制比较大,虽然现在并没有完全解决,但也有了一些明显的改善,如果明年继续为这家公司服务,希望能有所突破,在人心沟通上的技巧总是比较反感,可也是不得不使用或者说被迫学习使用的,LD说我的对错观太强了,拗不过来,希望能有所改变。
长远的规划上没有什么明确的想法,很容易否定原来的各种想法,见识过各种现实的残酷,明白以前的一些想法不够全面或者比较幼稚,想有更上一层楼的机会,只是不希望是通过自己不认可的方式。比较能接受的是通过提升自己的技术和执行力,能够有更进一步的可能。
技术上是挺失败的去年跟前年还是能看一些书,学一些东西,今年少了很多,可能对原来比较熟悉的都有些遗忘,最近有在改善博客的内容,能更多的是系列化的,由浅入深,只是还很不完善,没什么规划,体系上也还不完整,不过还是以mybatis作为一个开头,后续新开始的内容或者原先写过的相关的都能做个整理,不再是想到啥就写点啥。最近的一个重点是在k8s上,学习方式跟一些特别优秀的人比起来还是会慢一些,不过也是自己的方法,能够更深入的理解整个体系,并讲解出来,可能会尝试采用视频的方式,对一些比较好的内容做尝试,看看会不会有比较好的数据和反馈,在22年还苟着周更的独立技术博客也算是比较稀有了的,其他站的发布也要勤一些,形成所谓的“矩阵”。
跑步减肥这个么还是比较惨,22年只跑了368公里,比21年少了85公里,有一些客观但很多是主观的原因,还是需要跑起来,只是减肥也很迫切,体重比较大跑步还是有些压力的,买了动感单车,就是时间稍长屁股痛这个目前比较难解决,骑还是每天在骑就是强度跟时间不太够,要保证每天30分钟的量可能会比较好。
加油吧,愿23年家人和自己都健康,顺遂。大家也一样。

+]]>
+ + 生活 + 年终总结 + + + 生活 + 年终总结 + 2022 + 2023 + +
AQS篇二 之 Condition 浅析笔记 /2021/02/21/AQS-%E4%B9%8B-Condition-%E6%B5%85%E6%9E%90%E7%AC%94%E8%AE%B0/ @@ -676,6 +669,72 @@ public: unlock + + AbstractQueuedSynchronizer + /2019/09/23/AbstractQueuedSynchronizer/ + 最近看了大神的 AQS 的文章,之前总是断断续续地看一点,每次都知难而退,下次看又从头开始,昨天总算硬着头皮看完了第一部分
首先 AQS 只要有这些属性

+
// 头结点,你直接把它当做 当前持有锁的线程 可能是最好理解的
+private transient volatile Node head;
+
+// 阻塞的尾节点,每个新的节点进来,都插入到最后,也就形成了一个链表
+private transient volatile Node tail;
+
+// 这个是最重要的,代表当前锁的状态,0代表没有被占用,大于 0 代表有线程持有当前锁
+// 这个值可以大于 1,是因为锁可以重入,每次重入都加上 1
+private volatile int state;
+
+// 代表当前持有独占锁的线程,举个最重要的使用例子,因为锁可以重入
+// reentrantLock.lock()可以嵌套调用多次,所以每次用这个来判断当前线程是否已经拥有了锁
+// if (currentThread == getExclusiveOwnerThread()) {state++}
+private transient Thread exclusiveOwnerThread; //继承自AbstractOwnableSynchronizer
+
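结合上面几行注释,可以补一段纯属示意的伪实现(不是 JDK 源码,方法名与细节都以真实源码为准),演示 state 和 exclusiveOwnerThread 是怎么配合实现可重入的:

protected boolean tryAcquireSketch(int acquires) {
    Thread current = Thread.currentThread();
    int c = getState();
    if (c == 0) {
        // 锁空闲:CAS 把 state 从 0 改成 acquires,成功即持有锁
        if (compareAndSetState(0, acquires)) {
            setExclusiveOwnerThread(current);
            return true;
        }
    } else if (current == getExclusiveOwnerThread()) {
        // 重入:当前线程已是持有者,直接累加 state
        setState(c + acquires);
        return true;
    }
    return false;
}

这基本就是 ReentrantLock 非公平获取的骨架。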

大概了解了 aqs 底层的双向等待队列,
结构是这样的

每个 node 里面主要的代码结构也比较简单

+
static final class Node {
+    // 标识节点当前在共享模式下
+    static final Node SHARED = new Node();
+    // 标识节点当前在独占模式下
+    static final Node EXCLUSIVE = null;
+
+    // ======== 下面的几个int常量是给waitStatus用的 ===========
+    /** waitStatus value to indicate thread has cancelled */
+    // 代表此线程取消了争抢这个锁
+    static final int CANCELLED =  1;
+    /** waitStatus value to indicate successor's thread needs unparking */
+    // 官方的描述是,其表示当前node的后继节点对应的线程需要被唤醒
+    static final int SIGNAL    = -1;
+    /** waitStatus value to indicate thread is waiting on condition */
+    // 本文不分析condition,所以略过吧,下一篇文章会介绍这个
+    static final int CONDITION = -2;
+    /**
+     * waitStatus value to indicate the next acquireShared should
+     * unconditionally propagate
+     */
+    // 同样的不分析,略过吧
+    static final int PROPAGATE = -3;
+    // =====================================================
+
+
+    // 取值为上面的1、-1、-2、-3,或者0(以后会讲到)
+    // 这么理解,暂时只需要知道如果这个值 大于0 代表此线程取消了等待,
+    //    ps: 半天抢不到锁,不抢了,ReentrantLock是可以指定timeout的。。。
+    volatile int waitStatus;
+    // 前驱节点的引用
+    volatile Node prev;
+    // 后继节点的引用
+    volatile Node next;
+    // 这个就是线程本尊
+    volatile Thread thread;
+
+}
+

其实可以主要关注这个 waitStatus 因为这个是后面的节点给前面的节点设置的,等于-1 的时候代表后面有节点等待,需要去唤醒,
这里使用了一个变种的 CLH 队列实现,CLH 队列相关内容可以查看这篇 自旋锁、排队自旋锁、MCS锁、CLH锁
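对应地,释放锁的一侧就是检查头结点的 waitStatus,需要时唤醒后继,JDK 里 release 的骨架大致如下(照 java.util.concurrent 源码摘的框架,细节从略):

public final boolean release(int arg) {
    if (tryRelease(arg)) {
        Node h = head;
        // waitStatus != 0(典型的就是 SIGNAL=-1)说明后面有节点在等,需要唤醒
        if (h != null && h.waitStatus != 0)
            unparkSuccessor(h);
        return true;
    }
    return false;
}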

+]]>
+ + java + + + java + aqs + +
AQS篇一 /2021/02/14/AQS%E7%AF%87%E4%B8%80/ @@ -1008,87 +1067,62 @@ public: - AbstractQueuedSynchronizer - /2019/09/23/AbstractQueuedSynchronizer/ - 最近看了大神的 AQS 的文章,之前总是断断续续地看一点,每次都知难而退,下次看又从头开始,昨天总算硬着头皮看完了第一部分
首先 AQS 只要有这些属性

-
// 头结点,你直接把它当做 当前持有锁的线程 可能是最好理解的
-private transient volatile Node head;
+    Apollo 如何获取当前环境
+    /2022/09/04/Apollo-%E5%A6%82%E4%BD%95%E8%8E%B7%E5%8F%96%E5%BD%93%E5%89%8D%E7%8E%AF%E5%A2%83/
+    在用 Apollo 作为配置中心的过程中踩到过几个坑,这边记录下,因为运行 java 服务的启动参数一般比较固定,所以我们在一个新环境里运行的时候没有特意去检查,然后突然发现业务上有一些数据异常,排查之后才发现 java 服务连接了测试环境的 apollo,而原因是环境变量传了-Denv=fat,而在我们的环境配置中 fat 就是代表测试环境, 其实应该是-Denv=pro,而 apollo 总共有这些环境

+
public enum Env{
+  LOCAL, DEV, FWS, FAT, UAT, LPT, PRO, TOOLS, UNKNOWN;
 
-// 阻塞的尾节点,每个新的节点进来,都插入到最后,也就形成了一个链表
-private transient volatile Node tail;
+  public static Env fromString(String env) {
+    Env environment = EnvUtils.transformEnv(env);
+    Preconditions.checkArgument(environment != UNKNOWN, String.format("Env %s is invalid", env));
+    return environment;
+  }
+}
+

而这些解释

+
/**
+ * Here is the brief description for all the predefined environments:
+ * <ul>
+ *   <li>LOCAL: Local Development environment, assume you are working at the beach with no network access</li>
+ *   <li>DEV: Development environment</li>
+ *   <li>FWS: Feature Web Service Test environment</li>
+ *   <li>FAT: Feature Acceptance Test environment</li>
+ *   <li>UAT: User Acceptance Test environment</li>
+ *   <li>LPT: Load and Performance Test environment</li>
+ *   <li>PRO: Production environment</li>
+ *   <li>TOOLS: Tooling environment, a special area in production environment which allows
+ * access to test environment, e.g. Apollo Portal should be deployed in tools environment</li>
+ * </ul>
+ */
+

那如果要在运行时知道 apollo 当前使用的环境可以用这个

+
Env apolloEnv = ApolloInjector.getInstance(ConfigUtil.class).getApolloEnv();
+

简单记录下。
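为了避免再踩一次 -Denv 传错的坑,可以在服务启动时加个校验,下面是纯属示意的写法(假设这个服务只允许跑在 PRO 环境):

Env env = ApolloInjector.getInstance(ConfigUtil.class).getApolloEnv();
if (env != Env.PRO) {
    // 环境传错时快速失败,免得悄悄连上测试环境的配置
    throw new IllegalStateException("Unexpected apollo env: " + env);
}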

+]]>
+ + Java + + + Java + Apollo + environment + + + + Apollo 客户端启动过程分析 + /2022/09/18/Apollo-%E5%AE%A2%E6%88%B7%E7%AB%AF%E5%90%AF%E5%8A%A8%E8%BF%87%E7%A8%8B%E5%88%86%E6%9E%90/ + 入口是可以在 springboot 的启动类上打上EnableApolloConfig 注解

+
@Import(ApolloConfigRegistrar.class)
+public @interface EnableApolloConfig {
+

这个 import 实现了

+
public class ApolloConfigRegistrar implements ImportBeanDefinitionRegistrar {
 
-// 这个是最重要的,代表当前锁的状态,0代表没有被占用,大于 0 代表有线程持有当前锁
-// 这个值可以大于 1,是因为锁可以重入,每次重入都加上 1
-private volatile int state;
+  private ApolloConfigRegistrarHelper helper = ServiceBootstrap.loadPrimary(ApolloConfigRegistrarHelper.class);
 
-// 代表当前持有独占锁的线程,举个最重要的使用例子,因为锁可以重入
-// reentrantLock.lock()可以嵌套调用多次,所以每次用这个来判断当前线程是否已经拥有了锁
-// if (currentThread == getExclusiveOwnerThread()) {state++}
-private transient Thread exclusiveOwnerThread; //继承自AbstractOwnableSynchronizer
-

大概了解了 aqs 底层的双向等待队列,
结构是这样的

每个 node 里面主要的代码结构也比较简单

-
static final class Node {
-    // 标识节点当前在共享模式下
-    static final Node SHARED = new Node();
-    // 标识节点当前在独占模式下
-    static final Node EXCLUSIVE = null;
-
-    // ======== 下面的几个int常量是给waitStatus用的 ===========
-    /** waitStatus value to indicate thread has cancelled */
-    // 代表此线程取消了争抢这个锁
-    static final int CANCELLED =  1;
-    /** waitStatus value to indicate successor's thread needs unparking */
-    // 官方的描述是,其表示当前node的后继节点对应的线程需要被唤醒
-    static final int SIGNAL    = -1;
-    /** waitStatus value to indicate thread is waiting on condition */
-    // 本文不分析condition,所以略过吧,下一篇文章会介绍这个
-    static final int CONDITION = -2;
-    /**
-     * waitStatus value to indicate the next acquireShared should
-     * unconditionally propagate
-     */
-    // 同样的不分析,略过吧
-    static final int PROPAGATE = -3;
-    // =====================================================
-
-
-    // 取值为上面的1、-1、-2、-3,或者0(以后会讲到)
-    // 这么理解,暂时只需要知道如果这个值 大于0 代表此线程取消了等待,
-    //    ps: 半天抢不到锁,不抢了,ReentrantLock是可以指定timeout的。。。
-    volatile int waitStatus;
-    // 前驱节点的引用
-    volatile Node prev;
-    // 后继节点的引用
-    volatile Node next;
-    // 这个就是线程本尊
-    volatile Thread thread;
-
-}
-

其实可以主要关注这个 waitStatus 因为这个是后面的节点给前面的节点设置的,等于-1 的时候代表后面有节点等待,需要去唤醒,
这里使用了一个变种的 CLH 队列实现,CLH 队列相关内容可以查看这篇 自旋锁、排队自旋锁、MCS锁、CLH锁

-]]>
- - java - - - java - aqs - -
- - Apollo 客户端启动过程分析 - /2022/09/18/Apollo-%E5%AE%A2%E6%88%B7%E7%AB%AF%E5%90%AF%E5%8A%A8%E8%BF%87%E7%A8%8B%E5%88%86%E6%9E%90/ - 入口是可以在 springboot 的启动类上打上EnableApolloConfig 注解

-
@Import(ApolloConfigRegistrar.class)
-public @interface EnableApolloConfig {
-

这个 import 实现了

-
public class ApolloConfigRegistrar implements ImportBeanDefinitionRegistrar {
-
-  private ApolloConfigRegistrarHelper helper = ServiceBootstrap.loadPrimary(ApolloConfigRegistrarHelper.class);
-
-  @Override
-  public void registerBeanDefinitions(AnnotationMetadata importingClassMetadata, BeanDefinitionRegistry registry) {
-    helper.registerBeanDefinitions(importingClassMetadata, registry);
-  }
-}
+ @Override + public void registerBeanDefinitions(AnnotationMetadata importingClassMetadata, BeanDefinitionRegistry registry) { + helper.registerBeanDefinitions(importingClassMetadata, registry); + } +}

然后就调用了

com.ctrip.framework.apollo.spring.spi.DefaultApolloConfigRegistrarHelper#registerBeanDefinitions
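先补一个最简的使用示意(类名纯属假设),也就是前面说的在 springboot 启动类上打注解,默认会加载 application 这个 namespace:

@EnableApolloConfig({"application"})
@SpringBootApplication
public class DemoApplication {
    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}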
@@ -1500,50 +1534,9 @@ public: Java Apollo + environment value 注解 - environment - -
- - Apollo 如何获取当前环境 - /2022/09/04/Apollo-%E5%A6%82%E4%BD%95%E8%8E%B7%E5%8F%96%E5%BD%93%E5%89%8D%E7%8E%AF%E5%A2%83/ - 在用 Apollo 作为配置中心的过程中才到过几个坑,这边记录下,因为运行 java 服务的启动参数一般比较固定,所以我们在一个新环境里运行的时候没有特意去检查,然后突然发现业务上有一些数据异常,排查之后才发现java 服务连接了测试环境的 apollo,而原因是因为环境变量传了-Denv=fat,而在我们的环境配置中 fat 就是代表测试环境, 其实应该是-Denv=pro,而 apollo 总共有这些环境

-
public enum Env{
-  LOCAL, DEV, FWS, FAT, UAT, LPT, PRO, TOOLS, UNKNOWN;
-
-  public static Env fromString(String env) {
-    Env environment = EnvUtils.transformEnv(env);
-    Preconditions.checkArgument(environment != UNKNOWN, String.format("Env %s is invalid", env));
-    return environment;
-  }
-}
-

而这些解释

-
/**
- * Here is the brief description for all the predefined environments:
- * <ul>
- *   <li>LOCAL: Local Development environment, assume you are working at the beach with no network access</li>
- *   <li>DEV: Development environment</li>
- *   <li>FWS: Feature Web Service Test environment</li>
- *   <li>FAT: Feature Acceptance Test environment</li>
- *   <li>UAT: User Acceptance Test environment</li>
- *   <li>LPT: Load and Performance Test environment</li>
- *   <li>PRO: Production environment</li>
- *   <li>TOOLS: Tooling environment, a special area in production environment which allows
- * access to test environment, e.g. Apollo Portal should be deployed in tools environment</li>
- * </ul>
- */
-

那如果要在运行时知道 apollo 当前使用的环境可以用这个

-
Env apolloEnv = ApolloInjector.getInstance(ConfigUtil.class).getApolloEnv();
-

简单记录下。

-]]>
- - Java - - - Java - Apollo - environment
@@ -1701,531 +1694,170 @@ Node *clone(Node *graph) { - Disruptor 系列二 - /2022/02/27/Disruptor-%E7%B3%BB%E5%88%97%E4%BA%8C/ - 这里开始慢慢深入的讲一下 disruptor,首先是 lock free , 相比于前面介绍的两个阻塞队列,
disruptor 本身是不直接使用锁的,因为本身的设计是单个线程去生产,通过 cas 来维护头指针,
不直接维护尾指针,这样就减少了锁的使用,提升了性能;第二个是这次介绍的重点,
减少 false sharing 的情况,也就是常说的 伪共享 问题,那么什么叫 伪共享 呢,
这里要扯到一些 cpu 缓存的知识,

譬如我在用的这个笔记本

这里就可能看到 L2 Cache 就是针对每个核的

这里可以看到现代 CPU 的结构里,分为三级缓存,越靠近 cpu 的速度越快,存储容量越小,
而 L1 跟 L2 是 CPU 核专属的每个核都有自己的 L1 和 L2 的,其中 L1 还分为数据和指令,
像我上面的图中显示的 L1 Cache 只有 64KB 大小,其中数据 32KB,指令 32KB,
而 L2 则有 256KB,L3 有 4MB,其中的 Line Size 是我们这里比较重要的一个值,
CPU 其实会就近地从 Cache 中读取数据,碰到 Cache Miss 就再往下一级 Cache 读取,
每次读取是按照缓存行 Cache Line 读取,并且也遵循了“就近原则”,
也就是相近的数据有可能也会马上被读取,所以以行的形式读取,然而这也造成了 false sharing
因为类似于 ArrayBlockingQueue,需要有 takeIndex , putIndex , count , 因为在同一个类中,
很有可能存在于同一个 Cache Line 中,但是这几个值会被不同的线程修改,
导致从 Cache 取出来以后立马就会被失效,所谓的就近原则也就没用了,
因为需要反复地标记 dirty 脏位,然后把 Cache 刷掉,就造成了false sharing这种情况
而在 disruptor 中则使用了填充的方式,让我的头指针能够不产生false sharing

-
class LhsPadding
-{
-    protected long p1, p2, p3, p4, p5, p6, p7;
-}
-
-class Value extends LhsPadding
-{
-    protected volatile long value;
-}
-
-class RhsPadding extends Value
-{
-    protected long p9, p10, p11, p12, p13, p14, p15;
-}
-
-/**
- * <p>Concurrent sequence class used for tracking the progress of
- * the ring buffer and event processors.  Support a number
- * of concurrent operations including CAS and order writes.
- *
- * <p>Also attempts to be more efficient with regards to false
- * sharing by adding padding around the volatile field.
- */
-public class Sequence extends RhsPadding
-{
-

通过代码可以看到,sequence 中其实真正有意义的是 value 字段,因为需要在多线程环境下可见也
使用了volatile 关键字,而 LhsPaddingRhsPadding 分别在value 前后填充了各
7 个 long 型的变量,long 型的变量在 Java 中是占用 8 bytes,这样就相当于不管怎么样,
value 都会单独使用一个缓存行,使得其不会产生 false sharing 的问题。

+ 2021 年中总结 + /2021/07/18/2021-%E5%B9%B4%E4%B8%AD%E6%80%BB%E7%BB%93/ + 又到半年总结时,第一次写总结类型的文章感觉挺好写的,但是后面总觉得这过去的一段时间所做的事情,能力上的成长低于预期,但是是需要总结下,找找问题,顺便展望下未来。

+

这一年做的最让自己满意的应该就是看了一些书,由折腾群洋总发起的读书打卡活动,到目前为止已经读完了这几本书,《cUrl 必知必会》,《古董局中局 1》,《古董局中局 2》,《算法图解》,《每天 5 分钟玩转 Kubernetes》《幸福了吗?》《高可用可伸缩微服务架构:基于 Dubbo、Spring Cloud和 Service Mesh》《Rust 权威指南》后面可以写个专题说说看的这些书,虽然每天打卡如果时间安排不好,并且看的书像 rust 这样比较难的话还是会有点小焦虑,不过也是个调整过程,一方面可以在白天就抽空看一会,然后也不必要每次都看很大一章,注重吸收。

+

技术上的成长的话,有一些比较小的长进吧,对于一些之前忽视的 synchronized,ThreadLocal 和 AQS 等知识点做了下查漏补缺了,然后多了解了一些 Java 垃圾回收的内容,但是在实操上还是比较欠缺,成型的技术方案,架构上所谓的优化也比较少,一些想法也还有考虑不周全的地方,还需要多花时间和心思去学习加强,特别是在目前已经有的基础上如何做系统深层次的优化,既不要是鸡毛蒜皮的,也不能出现一些不可接受的问题和故障,这是个很重要的课题,需要好好学习,后面考虑定一些周期性目标,两个月左右能有一些成果和总结。

+

另外一部分是自己的服务,因为 ucloud 的机器太贵就没续费了,所以都迁移到腾讯云的小机器上了,顺便折腾了一点点 traefik,但是还很不熟练,不太习惯这一套,一方面是 docker 还不习惯,这也加重了对这套环境的不适应,还是习惯裸机部署,另一方面就是 k8s 了,家里的机器还没虚拟化,没有很好的条件可以做实验,这也是读书打卡的一个没做好的点,整体的学习效果受限于深度和实操,后面是看都是用 traefik,也找到了一篇文章可以 traefik 转发到裸机应用,因为主仓库用的是裸机的 gogs。

+

还有就是运动减肥上,唉,这又是很大的一个痛点,基本没效果,只是还算稳定,昨天看到一个视频说还需要力量训练来增肌,以此可以提升基础代谢,打算往这个方向尝试下,因为今天没有疫情限制了,在 6 月底完成了 200 公里的跑步小目标,只是有些膝盖跟大腿根外侧不适,抽空得去看下医生,后面打算每天也能做点卷腹跟俯卧撑。

+

下半年还希望能继续多看看书,比很多网上各种乱七八糟的文章会好很多,结合豆瓣评分,找一些评价高一些的文章,但也不是说分稍低点的就不行,有些也看人是不是适合,一般 6 分以上评价比较多的就可以试试。

]]>
- Java + 生活 + 年中总结 + 2021 - Java - Disruptor + 生活 + 2021 + 年中总结 + 技术 + 读书
- Filter, Interceptor, Aop, 啥, 啥, 啥? 这些都是啥? - /2020/08/22/Filter-Intercepter-Aop-%E5%95%A5-%E5%95%A5-%E5%95%A5-%E8%BF%99%E4%BA%9B%E9%83%BD%E6%98%AF%E5%95%A5/ - 本来是想取个像现在那些公众号转了又转的文章标题,”面试官再问你xxxxx,就把这篇文章甩给他看”这种标题,但是觉得实在太 low 了,还是用一部我比较喜欢的电影里的一句台词,《人在囧途》里王宝强对着那张老板给他的欠条,看不懂字时候说的那句,这些都是些啥(第四声)
当我刚开始面 Java 的时候,其实我真的没注意这方面的东西,实话说就是不知道这些是啥,开发中用过 Interceptor和 Aop,了解 aop 的实现原理,但是不知道 Java web 中的 Filter 是怎么回事,知道 dubbo 的 filter,就这样,所以被问到了的确是回答不出来,可能就觉得这个渣渣,这么简单的都不会,所以还是花点时间来看看这个是个啥,为了避免我口吐芬芳,还是耐下性子来简单说下这几个东西
首先是 servlet,怎么去解释这个呢,因为之前是 PHPer,所以比较喜欢用它来举例子,在普通的 PHP 的 web 应用中一般有几部分组成,接受 HTTP 请求的是前置的 nginx 或者 apache,但是这俩玩意都是只能处理静态的请求,远古时代 PHP 和 HTML 混编是通过 apache 的 php module,跟后来 nginx 使用 php-fpm 其实道理类似,就是把请求中需要 PHP 处理的转发给 PHP,在 Java 中呢,是有个比较牛叉的叫 Tomcat 的,它可以把请求转成 servlet,而 servlet 其实就是一种实现了特定接口的 Java 代码,

-

-package javax.servlet;
+    Disruptor 系列三
+    /2022/09/25/Disruptor-%E7%B3%BB%E5%88%97%E4%B8%89/
+    原来一直有点被误导,
gatingSequences用来标识每个 processer 的操作位点,但是怎么记录更新有点搞不清楚
其实问题在于 gatingSequences 是个 Sequence 数组,首先要看下怎么加进去的,
可以看到是在 com.lmax.disruptor.RingBuffer#addGatingSequences 这个方法里添加
首先是 com.lmax.disruptor.dsl.Disruptor#handleEventsWith(com.lmax.disruptor.EventHandler<? super T>...)
然后执行 com.lmax.disruptor.dsl.Disruptor#createEventProcessors(com.lmax.disruptor.Sequence[], com.lmax.disruptor.EventHandler<? super T>[])

+
EventHandlerGroup<T> createEventProcessors(
+        final Sequence[] barrierSequences,
+        final EventHandler<? super T>[] eventHandlers)
+    {
+        checkNotStarted();
 
-import java.io.IOException;
+        final Sequence[] processorSequences = new Sequence[eventHandlers.length];
+        final SequenceBarrier barrier = ringBuffer.newBarrier(barrierSequences);
 
-/**
- * Defines methods that all servlets must implement.
- *
- * <p>
- * A servlet is a small Java program that runs within a Web server. Servlets
- * receive and respond to requests from Web clients, usually across HTTP, the
- * HyperText Transfer Protocol.
- *
- * <p>
- * To implement this interface, you can write a generic servlet that extends
- * <code>javax.servlet.GenericServlet</code> or an HTTP servlet that extends
- * <code>javax.servlet.http.HttpServlet</code>.
- *
- * <p>
- * This interface defines methods to initialize a servlet, to service requests,
- * and to remove a servlet from the server. These are known as life-cycle
- * methods and are called in the following sequence:
- * <ol>
- * <li>The servlet is constructed, then initialized with the <code>init</code>
- * method.
- * <li>Any calls from clients to the <code>service</code> method are handled.
- * <li>The servlet is taken out of service, then destroyed with the
- * <code>destroy</code> method, then garbage collected and finalized.
- * </ol>
- *
- * <p>
- * In addition to the life-cycle methods, this interface provides the
- * <code>getServletConfig</code> method, which the servlet can use to get any
- * startup information, and the <code>getServletInfo</code> method, which allows
- * the servlet to return basic information about itself, such as author,
- * version, and copyright.
- *
- * @see GenericServlet
- * @see javax.servlet.http.HttpServlet
- */
-public interface Servlet {
+        for (int i = 0, eventHandlersLength = eventHandlers.length; i < eventHandlersLength; i++)
+        {
+            final EventHandler<? super T> eventHandler = eventHandlers[i];
 
-    /**
-     * Called by the servlet container to indicate to a servlet that the servlet
-     * is being placed into service.
-     *
-     * <p>
-     * The servlet container calls the <code>init</code> method exactly once
-     * after instantiating the servlet. The <code>init</code> method must
-     * complete successfully before the servlet can receive any requests.
-     *
-     * <p>
-     * The servlet container cannot place the servlet into service if the
-     * <code>init</code> method
-     * <ol>
-     * <li>Throws a <code>ServletException</code>
-     * <li>Does not return within a time period defined by the Web server
-     * </ol>
-     *
-     *
-     * @param config
-     *            a <code>ServletConfig</code> object containing the servlet's
-     *            configuration and initialization parameters
-     *
-     * @exception ServletException
-     *                if an exception has occurred that interferes with the
-     *                servlet's normal operation
-     *
-     * @see UnavailableException
-     * @see #getServletConfig
-     */
-    public void init(ServletConfig config) throws ServletException;
+            // 这里将 handler 包装成一个 BatchEventProcessor
+            final BatchEventProcessor<T> batchEventProcessor =
+                new BatchEventProcessor<>(ringBuffer, barrier, eventHandler);
 
-    /**
-     *
-     * Returns a {@link ServletConfig} object, which contains initialization and
-     * startup parameters for this servlet. The <code>ServletConfig</code>
-     * object returned is the one passed to the <code>init</code> method.
-     *
-     * <p>
-     * Implementations of this interface are responsible for storing the
-     * <code>ServletConfig</code> object so that this method can return it. The
-     * {@link GenericServlet} class, which implements this interface, already
-     * does this.
-     *
-     * @return the <code>ServletConfig</code> object that initializes this
-     *         servlet
-     *
-     * @see #init
-     */
-    public ServletConfig getServletConfig();
+            if (exceptionHandler != null)
+            {
+                batchEventProcessor.setExceptionHandler(exceptionHandler);
+            }
 
-    /**
-     * Called by the servlet container to allow the servlet to respond to a
-     * request.
-     *
-     * <p>
-     * This method is only called after the servlet's <code>init()</code> method
-     * has completed successfully.
-     *
-     * <p>
-     * The status code of the response always should be set for a servlet that
-     * throws or sends an error.
-     *
-     *
-     * <p>
-     * Servlets typically run inside multithreaded servlet containers that can
-     * handle multiple requests concurrently. Developers must be aware to
-     * synchronize access to any shared resources such as files, network
-     * connections, and as well as the servlet's class and instance variables.
-     * More information on multithreaded programming in Java is available in <a
-     * href
-     * ="http://java.sun.com/Series/Tutorial/java/threads/multithreaded.html">
-     * the Java tutorial on multi-threaded programming</a>.
-     *
-     *
-     * @param req
-     *            the <code>ServletRequest</code> object that contains the
-     *            client's request
-     *
-     * @param res
-     *            the <code>ServletResponse</code> object that contains the
-     *            servlet's response
-     *
-     * @exception ServletException
-     *                if an exception occurs that interferes with the servlet's
-     *                normal operation
-     *
-     * @exception IOException
-     *                if an input or output exception occurs
-     */
-    public void service(ServletRequest req, ServletResponse res)
-            throws ServletException, IOException;
-
-    /**
-     * Returns information about the servlet, such as author, version, and
-     * copyright.
-     *
-     * <p>
-     * The string that this method returns should be plain text and not markup
-     * of any kind (such as HTML, XML, etc.).
-     *
-     * @return a <code>String</code> containing servlet information
-     */
-    public String getServletInfo();
+            consumerRepository.add(batchEventProcessor, eventHandler, barrier);
+            processorSequences[i] = batchEventProcessor.getSequence();
+        }
 
-    /**
-     * Called by the servlet container to indicate to a servlet that the servlet
-     * is being taken out of service. This method is only called once all
-     * threads within the servlet's <code>service</code> method have exited or
-     * after a timeout period has passed. After the servlet container calls this
-     * method, it will not call the <code>service</code> method again on this
-     * servlet.
-     *
-     * <p>
-     * This method gives the servlet an opportunity to clean up any resources
-     * that are being held (for example, memory, file handles, threads) and make
-     * sure that any persistent state is synchronized with the servlet's current
-     * state in memory.
-     */
-    public void destroy();
-}
-
-

重点看 servlet 的 service方法,就是接受请求,处理完了给响应,不说细节,不然光 Tomcat 的能说半年,所以呢再进一步去理解,其实就能知道,就是一个先后的问题,盗个图

filter 跟后两者最大的不一样其实是一个基于 servlet,在非常外层做的处理,然后是 interceptor 的 prehandle 跟 posthandle,接着才是我们常规的 aop,就这么点事情,做个小试验吧(还是先补段代码吧)

-

Filter

// ---------------------------------------------------- FilterChain Methods
+        updateGatingSequencesForNextInChain(barrierSequences, processorSequences);
 
-    /**
-     * Invoke the next filter in this chain, passing the specified request
-     * and response.  If there are no more filters in this chain, invoke
-     * the <code>service()</code> method of the servlet itself.
-     *
-     * @param request The servlet request we are processing
-     * @param response The servlet response we are creating
-     *
-     * @exception IOException if an input/output error occurs
-     * @exception ServletException if a servlet exception occurs
-     */
-    @Override
-    public void doFilter(ServletRequest request, ServletResponse response)
-        throws IOException, ServletException {
+        return new EventHandlerGroup<>(this, consumerRepository, processorSequences);
+    }
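对照上面的 createEventProcessors,一个最简的接线示意大概是这样(LongEvent 是假设的事件类,里面放个 long 字段即可):

Disruptor<LongEvent> disruptor = new Disruptor<>(
        LongEvent::new, 1024, DaemonThreadFactory.INSTANCE);
// handleEventsWith 内部就会走到上面的 createEventProcessors,
// 并把每个 handler 的 Sequence 注册成 gating sequence
disruptor.handleEventsWith((event, sequence, endOfBatch) ->
        System.out.println("consume " + sequence));
disruptor.start();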
- if( Globals.IS_SECURITY_ENABLED ) { - final ServletRequest req = request; - final ServletResponse res = response; - try { - java.security.AccessController.doPrivileged( - new java.security.PrivilegedExceptionAction<Void>() { - @Override - public Void run() - throws ServletException, IOException { - internalDoFilter(req,res); - return null; - } - } - ); - } catch( PrivilegedActionException pe) { - Exception e = pe.getException(); - if (e instanceof ServletException) - throw (ServletException) e; - else if (e instanceof IOException) - throw (IOException) e; - else if (e instanceof RuntimeException) - throw (RuntimeException) e; - else - throw new ServletException(e.getMessage(), e); +

BatchEventProcessor 在类内有个定义 sequence

+
private final Sequence sequence = new Sequence(Sequencer.INITIAL_CURSOR_VALUE);
+

然后在上面循环中的这一句取出来

+
processorSequences[i] = batchEventProcessor.getSequence();
+

调用com.lmax.disruptor.dsl.Disruptor#updateGatingSequencesForNextInChain 方法

+
private void updateGatingSequencesForNextInChain(final Sequence[] barrierSequences, final Sequence[] processorSequences)
+    {
+        if (processorSequences.length > 0)
+        {
+            // 然后在这里添加
+            ringBuffer.addGatingSequences(processorSequences);
+            for (final Sequence barrierSequence : barrierSequences)
+            {
+                ringBuffer.removeGatingSequence(barrierSequence);
             }
-        } else {
-            internalDoFilter(request,response);
+            consumerRepository.unMarkEventProcessorsAsEndOfChain(barrierSequences);
         }
-    }
-    private void internalDoFilter(ServletRequest request,
-                                  ServletResponse response)
-        throws IOException, ServletException {
-
-        // Call the next filter if there is one
-        if (pos < n) {
-            ApplicationFilterConfig filterConfig = filters[pos++];
-            try {
-                Filter filter = filterConfig.getFilter();
+    }
- if (request.isAsyncSupported() && "false".equalsIgnoreCase( - filterConfig.getFilterDef().getAsyncSupported())) { - request.setAttribute(Globals.ASYNC_SUPPORTED_ATTR, Boolean.FALSE); - } - if( Globals.IS_SECURITY_ENABLED ) { - final ServletRequest req = request; - final ServletResponse res = response; - Principal principal = - ((HttpServletRequest) req).getUserPrincipal(); +

而如何更新则是在处理器 com.lmax.disruptor.BatchEventProcessor#run

+
public void run()
+    {
+        if (running.compareAndSet(IDLE, RUNNING))
+        {
+            sequenceBarrier.clearAlert();
 
-                    Object[] args = new Object[]{req, res, this};
-                    SecurityUtil.doAsPrivilege ("doFilter", filter, classType, args, principal);
-                } else {
-                    filter.doFilter(request, response, this);
+            notifyStart();
+            try
+            {
+                if (running.get() == RUNNING)
+                {
+                    processEvents();
                 }
-            } catch (IOException | ServletException | RuntimeException e) {
-                throw e;
-            } catch (Throwable e) {
-                e = ExceptionUtils.unwrapInvocationTargetException(e);
-                ExceptionUtils.handleThrowable(e);
-                throw new ServletException(sm.getString("filterChain.filter"), e);
-            }
-            return;
-        }
-
-        // We fell off the end of the chain -- call the servlet instance
-        try {
-            if (ApplicationDispatcher.WRAP_SAME_OBJECT) {
-                lastServicedRequest.set(request);
-                lastServicedResponse.set(response);
             }
-
-            if (request.isAsyncSupported() && !servletSupportsAsync) {
-                request.setAttribute(Globals.ASYNC_SUPPORTED_ATTR,
-                        Boolean.FALSE);
+            finally
+            {
+                notifyShutdown();
+                running.set(IDLE);
             }
-            // Use potentially wrapped request from this point
-            if ((request instanceof HttpServletRequest) &&
-                    (response instanceof HttpServletResponse) &&
-                    Globals.IS_SECURITY_ENABLED ) {
-                final ServletRequest req = request;
-                final ServletResponse res = response;
-                Principal principal =
-                    ((HttpServletRequest) req).getUserPrincipal();
-                Object[] args = new Object[]{req, res};
-                SecurityUtil.doAsPrivilege("service",
-                                           servlet,
-                                           classTypeUsedInService,
-                                           args,
-                                           principal);
-            } else {
-                servlet.service(request, response);
+        }
+        else
+        {
+            // This is a little bit of guess work.  The running state could have changed to HALTED by
+            // this point.  However, Java does not have compareAndExchange which is the only way
+            // to get it exactly correct.
+            if (running.get() == RUNNING)
+            {
+                throw new IllegalStateException("Thread is already running");
             }
-        } catch (IOException | ServletException | RuntimeException e) {
-            throw e;
-        } catch (Throwable e) {
-            e = ExceptionUtils.unwrapInvocationTargetException(e);
-            ExceptionUtils.handleThrowable(e);
-            throw new ServletException(sm.getString("filterChain.servlet"), e);
-        } finally {
-            if (ApplicationDispatcher.WRAP_SAME_OBJECT) {
-                lastServicedRequest.set(null);
-                lastServicedResponse.set(null);
+            else
+            {
+                earlyExit();
             }
         }
-    }
-

注意看这一行
filter.doFilter(request, response, this);
是不是看懂了,就是个 filter 链,但是这个代码在哪呢,org.apache.catalina.core.ApplicationFilterChain#doFilter
然后是interceptor,

-
protected void doDispatch(HttpServletRequest request, HttpServletResponse response) throws Exception {
-        HttpServletRequest processedRequest = request;
-        HandlerExecutionChain mappedHandler = null;
-        boolean multipartRequestParsed = false;
-        WebAsyncManager asyncManager = WebAsyncUtils.getAsyncManager(request);
+    }
+

然后是

+
private void processEvents()
+    {
+        T event = null;
+        long nextSequence = sequence.get() + 1L;
 
-        try {
-            try {
-                ModelAndView mv = null;
-                Object dispatchException = null;
+        while (true)
+        {
+            try
+            {
+                final long availableSequence = sequenceBarrier.waitFor(nextSequence);
+                if (batchStartAware != null)
+                {
+                    batchStartAware.onBatchStart(availableSequence - nextSequence + 1);
+                }
 
-                try {
-                    processedRequest = this.checkMultipart(request);
-                    multipartRequestParsed = processedRequest != request;
-                    mappedHandler = this.getHandler(processedRequest);
-                    if (mappedHandler == null) {
-                        this.noHandlerFound(processedRequest, response);
-                        return;
-                    }
-
-                    HandlerAdapter ha = this.getHandlerAdapter(mappedHandler.getHandler());
-                    String method = request.getMethod();
-                    boolean isGet = "GET".equals(method);
-                    if (isGet || "HEAD".equals(method)) {
-                        long lastModified = ha.getLastModified(request, mappedHandler.getHandler());
-                        if ((new ServletWebRequest(request, response)).checkNotModified(lastModified) && isGet) {
-                            return;
-                        }
-                    }
-
-                    /** 
-                     * 看这里看这里‼️
-                     */
-                    if (!mappedHandler.applyPreHandle(processedRequest, response)) {
-                        return;
-                    }
-
-                    mv = ha.handle(processedRequest, response, mappedHandler.getHandler());
-                    if (asyncManager.isConcurrentHandlingStarted()) {
-                        return;
-                    }
-
-                    this.applyDefaultViewName(processedRequest, mv);
-                    /** 
-                     * 再看这里看这里‼️
-                     */
-                    mappedHandler.applyPostHandle(processedRequest, response, mv);
-                } catch (Exception var20) {
-                    dispatchException = var20;
-                } catch (Throwable var21) {
-                    dispatchException = new NestedServletException("Handler dispatch failed", var21);
+                while (nextSequence <= availableSequence)
+                {
+                    event = dataProvider.get(nextSequence);
+                    eventHandler.onEvent(event, nextSequence, nextSequence == availableSequence);
+                    nextSequence++;
                 }
-
-                this.processDispatchResult(processedRequest, response, mappedHandler, mv, (Exception)dispatchException);
-            } catch (Exception var22) {
-                this.triggerAfterCompletion(processedRequest, response, mappedHandler, var22);
-            } catch (Throwable var23) {
-                this.triggerAfterCompletion(processedRequest, response, mappedHandler, new NestedServletException("Handler processing failed", var23));
+                // 如果这一批都正常处理完,就把消费位点更新为 availableSequence
+                sequence.set(availableSequence);
             }
-
-        } finally {
-            if (asyncManager.isConcurrentHandlingStarted()) {
-                if (mappedHandler != null) {
-                    mappedHandler.applyAfterConcurrentHandlingStarted(processedRequest, response);
+            catch (final TimeoutException e)
+            {
+                notifyTimeout(sequence.get());
+            }
+            catch (final AlertException ex)
+            {
+                if (running.get() != RUNNING)
+                {
+                    break;
                 }
-            } else if (multipartRequestParsed) {
-                this.cleanupMultipart(processedRequest);
             }
-
+            catch (final Throwable ex)
+            {
+                handleEventException(ex, nextSequence, event);
+                // 如果是异常就只是 nextSequence
+                sequence.set(nextSequence);
+                nextSequence++;
+            }
         }
-    }
-

代码在哪呢,org.springframework.web.servlet.DispatcherServlet#doDispatch,然后才是我们自己写的 aop,是不是差不多明白了,嗯,接下来是例子
写个 filter

-
public class DemoFilter extends HttpServlet implements Filter {
-    @Override
-    public void init(FilterConfig filterConfig) throws ServletException {
-        System.out.println("==>DemoFilter启动");
-    }
-
-    @Override
-    public void doFilter(ServletRequest servletRequest, ServletResponse servletResponse, FilterChain filterChain) throws IOException, ServletException {
-        // 将请求转换成HttpServletRequest 请求
-        HttpServletRequest req = (HttpServletRequest) servletRequest;
-        HttpServletResponse resp = (HttpServletResponse) servletResponse;
-        System.out.println("before filter");
-        filterChain.doFilter(req, resp);
-        System.out.println("after filter");
-    }
-
-    @Override
-    public void destroy() {
-
-    }
-}
-

因为用的springboot,所以就不写 web.xml 了,写个配置类

-
@Configuration
-public class FilterConfiguration {
-    @Bean
-    public FilterRegistrationBean filterDemo4Registration() {
-        FilterRegistrationBean registration = new FilterRegistrationBean();
-        //注入过滤器
-        registration.setFilter(new DemoFilter());
-        //拦截规则
-        registration.addUrlPatterns("/*");
-        //过滤器名称
-        registration.setName("DemoFilter");
-        //是否自动注册 false 取消Filter的自动注册
-        registration.setEnabled(true);
-        //过滤器顺序
-        registration.setOrder(1);
-        return registration;
-    }
-
-}
-

然后再来个 interceptor 和 aop,以及一个简单的请求处理

-
public class DemoInterceptor extends HandlerInterceptorAdapter {
-    @Override
-    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
-        System.out.println("preHandle test");
-        return true;
-    }
-
-    @Override
-    public void postHandle(HttpServletRequest request, HttpServletResponse response, Object handler, ModelAndView modelAndView) throws Exception {
-        System.out.println("postHandle test");
-    }
-}
-@Aspect
-@Component
-public class DemoAspect {
-
-    @Pointcut("execution( public * com.nicksxs.springbootdemo.demo.DemoController.*())")
-    public void point() {
-
-    }
-
-    @Before("point()")
-    public void doBefore(){
-        System.out.println("==doBefore==");
-    }
-
-    @After("point()")
-    public void doAfter(){
-        System.out.println("==doAfter==");
-    }
-}
-@RestController
-public class DemoController {
-
-    @RequestMapping("/hello")
-    @ResponseBody
-    public String hello() {
-        return "hello world";
-    }
-}
-

好了,请求一下,看看 stdout,

搞定完事儿~
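(原文此处是一张 stdout 截图;按上面三层的包裹顺序推断,输出大致是 before filter → preHandle test → ==doBefore== → ==doAfter== → postHandle test → after filter,以实际运行为准)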

-]]>
- - Java - Filter - Interceptor - AOP - Spring - Servlet - Interceptor - AOP - - - Java - Filter - Interceptor - AOP - Spring - Tomcat - Servlet - Web - - - - Dubbo 使用的几个记忆点 - /2022/04/02/Dubbo-%E4%BD%BF%E7%94%A8%E7%9A%84%E5%87%A0%E4%B8%AA%E8%AE%B0%E5%BF%86%E7%82%B9/ - 因为后台使用的 dubbo 作为 rpc 框架,并且会有一些日常使用情景有一些小的技巧,在这里做下记录作笔记用

-

dubbo 只拉取不注册

<dubbo:registry address="zookeeper://127.0.0.1:2181" register="false" />
-

就是只要 register="false" 就可以了,这样比如我们在开发环境想运行服务,但又不想让开发环境正常的请求调用到本地来,当然这不是唯一的方式,通过 dubbo 2.7 以上的 tag 路由也可以实现或者自行改造拉取和注册服务的逻辑,因为注册到注册中心的其实是一串带参数的 url,还是比较方便改造的。相反的就是只注册,不拉取

-

dubbo 只注册不拉取

<dubbo:registry address="zookeeper://127.0.0.1:2181" subscribe="false" />
-

这个使用场景就是如果我这个服务只作为 provider,没有任何调用其他的服务,其实就可以这么设置

-

权重配置

<dubbo:provider loadbalance="random" weight="50"/>
-

首先这是在使用了随机的负载均衡策略的时候可以进行配置,并且是对于多个 provider 的情况下,这样其实也可以部分解决上面的只拉取不注册的问题,我把自己的权重调成 0 或者很低

+ }
]]>
Java - Dubbo Java - Dubbo - RPC - 负载均衡 + Disruptor
@@ -2318,412 +1950,1012 @@ Node *clone(Node *graph) { - G1收集器概述 - /2020/02/09/G1%E6%94%B6%E9%9B%86%E5%99%A8%E6%A6%82%E8%BF%B0/ - G1: The Garbage-First Collector, 垃圾回收优先的垃圾回收器,目标是用户多核 cpu 和大内存的机器,最大的特点就是可预测的停顿时间,官方给出的介绍是提供一个用户在大的堆内存情况下一个低延迟表现的解决方案,通常是 6GB 及以上的堆大小,有低于 0.5 秒稳定的可预测的停顿时间。

-

这里主要介绍这个比较新的垃圾回收器,在 G1 之前的垃圾回收器都是基于如下图的内存结构分布,有新生代,老年代和永久代(jdk8 之前),然后G1 往前的那些垃圾回收器都有个分代,比如 serial,parallel 等,一般有个应用的组合,最初的 serial 和 serial old,因为新生代和老年代的收集方式不太一样,新生代主要是标记复制,所以有 eden 跟两个 survival区,老年代一般用标记整理方式,而 G1 对这个不太一样。

看一下 G1 的内存分布

可以看到这有很大的不同,G1 通过将内存分成大小相等的 region,每个region是存在于一个连续的虚拟内存范围,对于某个 region 来说其角色是类似于原来的收集器的Eden、Survivor、Old Generation,这个具体在代码层面

-
// We encode the value of the heap region type so the generation can be
- // determined quickly. The tag is split into two parts:
- //
- //   major type (young, old, humongous, archive)           : top N-1 bits
- //   minor type (eden / survivor, starts / cont hum, etc.) : bottom 1 bit
- //
- // If there's need to increase the number of minor types in the
- // future, we'll have to increase the size of the latter and hence
- // decrease the size of the former.
- //
- // 00000 0 [ 0] Free
- //
- // 00001 0 [ 2] Young Mask
- // 00001 0 [ 2] Eden
- // 00001 1 [ 3] Survivor
- //
- // 00010 0 [ 4] Humongous Mask
- // 00100 0 [ 8] Pinned Mask
- // 00110 0 [12] Starts Humongous
- // 00110 1 [13] Continues Humongous
- //
- // 01000 0 [16] Old Mask
- //
- // 10000 0 [32] Archive Mask
- // 11100 0 [56] Open Archive
- // 11100 1 [57] Closed Archive
- //
- typedef enum {
-   FreeTag               = 0,
-
-   YoungMask             = 2,
-   EdenTag               = YoungMask,
-   SurvTag               = YoungMask + 1,
+    Disruptor 系列二
+    /2022/02/27/Disruptor-%E7%B3%BB%E5%88%97%E4%BA%8C/
+    这里开始慢慢深入的讲一下 disruptor,首先是 lock free , 相比于前面介绍的两个阻塞队列,
disruptor 本身是不直接使用锁的,因为本身的设计是单个线程去生产,通过 cas 来维护头指针,
不直接维护尾指针,这样就减少了锁的使用,提升了性能;第二个是这次介绍的重点,
减少 false sharing 的情况,也就是常说的 伪共享 问题,那么什么叫 伪共享 呢,
这里要扯到一些 cpu 缓存的知识,

譬如我在用的这个笔记本

这里就可能看到 L2 Cache 就是针对每个核的

这里可以看到现代 CPU 的结构里,分为三级缓存,越靠近 cpu 的速度越快,存储容量越小,
而 L1 跟 L2 是 CPU 核专属的每个核都有自己的 L1 和 L2 的,其中 L1 还分为数据和指令,
像我上面的图中显示的 L1 Cache 只有 64KB 大小,其中数据 32KB,指令 32KB,
而 L2 则有 256KB,L3 有 4MB,其中的 Line Size 是我们这里比较重要的一个值,
CPU 其实会就近地从 Cache 中读取数据,碰到 Cache Miss 就再往下一级 Cache 读取,
每次读取是按照缓存行 Cache Line 读取,并且也遵循了“就近原则”,
也就是相近的数据有可能也会马上被读取,所以以行的形式读取,然而这也造成了 false sharing
因为类似于 ArrayBlockingQueue,需要有 takeIndex , putIndex , count , 因为在同一个类中,
很有可能存在于同一个 Cache Line 中,但是这几个值会被不同的线程修改,
导致从 Cache 取出来以后立马就会被失效,所谓的就近原则也就没用了,
因为需要反复地标记 dirty 脏位,然后把 Cache 刷掉,就造成了false sharing这种情况
而在 disruptor 中则使用了填充的方式,让我的头指针能够不产生false sharing

+
class LhsPadding
+{
+    protected long p1, p2, p3, p4, p5, p6, p7;
+}
 
-   HumongousMask         = 4,
-   PinnedMask            = 8,
-   StartsHumongousTag    = HumongousMask | PinnedMask,
-   ContinuesHumongousTag = HumongousMask | PinnedMask + 1,
+class Value extends LhsPadding
+{
+    protected volatile long value;
+}
 
-   OldMask               = 16,
-   OldTag                = OldMask,
+class RhsPadding extends Value
+{
+    protected long p9, p10, p11, p12, p13, p14, p15;
+}
 
-   // Archive regions are regions with immutable content (i.e. not reclaimed, and
-   // not allocated into during regular operation). They differ in the kind of references
-   // allowed for the contained objects:
-   // - Closed archive regions form a separate self-contained (closed) object graph
-   // within the set of all of these regions. No references outside of closed
-   // archive regions are allowed.
-   // - Open archive regions have no restrictions on the references of their objects.
-   // Objects within these regions are allowed to have references to objects
-   // contained in any other kind of regions.
-   ArchiveMask           = 32,
-   OpenArchiveTag        = ArchiveMask | PinnedMask | OldMask,
-   ClosedArchiveTag      = ArchiveMask | PinnedMask | OldMask + 1
- } Tag;
- -

hotspot/share/gc/g1/heapRegionType.hpp

-

当执行垃圾收集时,G1以类似于CMS收集器的方式运行。 G1执行并发全局标记阶段,以确定整个堆中对象的存活性。标记阶段完成后,G1知道哪些region是基本空的。它首先收集这些region,通常会产生大量的可用空间。这就是为什么这种垃圾收集方法称为“垃圾优先”的原因。顾名思义,G1将其收集和压缩活动集中在可能充满可回收对象(即垃圾)的堆区域。 G1使用暂停预测模型来满足用户定义的暂停时间目标,并根据指定的暂停时间目标选择要收集的区域数。

-

由G1标识为可回收的区域是使用撤离的方式(Evacuation)。 G1将对象从堆的一个或多个区域复制到堆上的单个区域,并在此过程中压缩并释放内存。撤离是在多处理器上并行执行的,以减少暂停时间并增加吞吐量。因此,对于每次垃圾收集,G1都在用户定义的暂停时间内连续工作以减少碎片。这是优于前面两种方法的。 CMS(并发标记扫描)垃圾收集器不进行压缩。 ParallelOld垃圾回收仅执行整个堆压缩,这导致相当长的暂停时间。

-

需要重点注意的是,G1不是实时收集器。它很有可能达到设定的暂停时间目标,但并非绝对确定。 G1根据先前收集的数据,估算在用户指定的目标时间内可以收集多少个区域。因此,收集器具有收集区域成本的合理准确的模型,并且收集器使用此模型来确定要收集哪些和多少个区域,同时保持在暂停时间目标之内。

-

注意:G1同时具有并发(与应用程序线程一起运行,例如优化,标记,清理)和并行(多线程,例如stw)阶段。Full GC仍然是单线程的,但是如果正确调优,您的应用程序应该可以避免Full GC。

-

在前面那篇中在代码层面简单的了解了这个可预测时间的过程,这也是 G1 的一大特点。

+/** + * <p>Concurrent sequence class used for tracking the progress of + * the ring buffer and event processors. Support a number + * of concurrent operations including CAS and order writes. + * + * <p>Also attempts to be more efficient with regards to false + * sharing by adding padding around the volatile field. + */ +public class Sequence extends RhsPadding +{
+

As the code shows, the only field in Sequence that actually matters is value, which is declared
volatile so that it is visible across threads. LhsPadding and RhsPadding pad value with
7 long fields each, before and after; a long takes 8 bytes in Java, so no matter how the object is laid out,
value ends up on a cache line of its own and therefore cannot suffer from false sharing.
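
As a side note, since JDK 8 the JVM can do this padding for you with the @Contended annotation (user classes need -XX:-RestrictContended for it to take effect); a minimal sketch of the same idea, with my own hypothetical class name:

import sun.misc.Contended; // JDK 8; relocated to jdk.internal.vm.annotation in JDK 9+

public class PaddedCounter {
    // the JVM pads this field onto its own cache line when run with -XX:-RestrictContended
    @Contended
    private volatile long value;

    public long get() { return value; }
    public void set(long v) { value = v; } // plain volatile write; use CAS for multi-producer cases
}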

]]>
Java
- JVM
- GC
- C++
Java
- JVM
- C++
- G1
- GC
- Garbage-First Collector
+ Disruptor
- JVM Source Analysis: The G1 Garbage Collector, Part 1
- /2019/12/07/JVM-G1-Part-1/
- I'm quite interested in how Java implements GC. I used to rely mostly on Zhou Zhiming's book, but it explains the overall ideas and flow rather than the concrete GC source code,
especially the concrete implementation of G1.
The usual mental model of G1 is that it splits the previously monolithic young and old generations into small region-sized chunks of memory. In short, collecting the young or old generation used to involve that generation's entire heap space, while G1 turns it into finer-grained blocks.
This brings one obvious benefit and one obvious cost: collection becomes more flexible and pauses get shorter, but the overall bookkeeping complexity goes up.
So far I have read a little of the code related to G1's expected pause time.

-
HeapWord* G1CollectedHeap::do_collection_pause(size_t word_size,
-                                               uint gc_count_before,
-                                               bool* succeeded,
-                                               GCCause::Cause gc_cause) {
-  assert_heap_not_locked_and_not_at_safepoint();
-  VM_G1CollectForAllocation op(word_size,
-                               gc_count_before,
-                               gc_cause,
-                               false, /* should_initiate_conc_mark */
-                               g1_policy()->max_pause_time_ms());
-  VMThread::execute(&op);
+    A Few Dubbo Usage Notes
+    /2022/04/02/Dubbo-%E4%BD%BF%E7%94%A8%E7%9A%84%E5%87%A0%E4%B8%AA%E8%AE%B0%E5%BF%86%E7%82%B9/
+    Since our backend uses Dubbo as its RPC framework, and a few small tricks come up in day-to-day scenarios, I'm writing them down here as notes.

+

Dubbo: subscribe only, don't register

<dubbo:registry address="zookeeper://127.0.0.1:2181" register="false" />
+

Just set register="false" and that's it. This way, for example, we can run the service in the development environment without normal dev traffic being routed to the local machine. It is not the only way: tag routing in Dubbo 2.7+ can achieve the same, or you can customize the subscribe/register logic yourself, since what actually gets registered in the registry is just a URL with parameters, which is quite easy to rework. The opposite of this is register only, don't subscribe.

+

Dubbo: register only, don't subscribe

<dubbo:registry address="zookeeper://127.0.0.1:2181" subscribe="false" />
+

The use case here is a service that acts purely as a provider and never calls any other service; then you can simply configure it this way.

+

Weight configuration

<dubbo:provider loadbalance="random" weight="50"/>
+

This applies when the random load-balancing strategy is in use and there are multiple providers. It can also partially solve the subscribe-without-registering problem above: just turn your own instance's weight down to 0 or very low.
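
For reference, the same weight can also be set per service with annotations in Dubbo 2.7+; a minimal sketch (the service class is a made-up example, and I believe @DubboService exposes a weight attribute, so treat that as an assumption):

import org.apache.dubbo.config.annotation.DubboService;

// expose this provider with a very low weight so the random load balancer
// almost never routes traffic to it (handy for a local development instance)
@DubboService(weight = 1)
public class DemoServiceImpl implements DemoService {
    @Override
    public String hello(String name) {
        return "hello " + name;
    }
}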

+]]>
+
+ Java
+ Dubbo
+
+
+ Java
+ Dubbo
+ RPC
+ load balancing
+
+
+
+    Filter, Interceptor, AOP, what, what, what? What are all these?
+    /2020/08/22/Filter-Intercepter-Aop-%E5%95%A5-%E5%95%A5-%E5%95%A5-%E8%BF%99%E4%BA%9B%E9%83%BD%E6%98%AF%E5%95%A5/
+    I originally wanted to give this post one of those much-reposted clickbait titles, "If the interviewer asks you about xxxxx again, throw this article at him", but that felt way too cheap. So instead I'll borrow a line from a movie I like, Lost on Journey, where Wang Baoqiang stares at the IOU his boss handed him and, unable to read it, asks: what are all these?
When I first started interviewing for Java jobs, I honestly paid no attention to this area; frankly, I didn't know what these things were. I had used Interceptor and AOP in development and understood how AOP is implemented, but I had no idea what a Filter in Java web was; I only knew Dubbo's filters, and that was it. So naturally I couldn't answer when asked, and the interviewer probably thought, what a noob, he can't even handle something this basic. So it's worth spending a little time on what these actually are; to keep myself from cursing out loud, let me patiently walk through them.
First, the servlet. How to explain it? Since I used to be a PHPer, I like using PHP as the analogy. A typical PHP web application has several parts: nginx or apache in front accepts the HTTP requests, but both of those only handle static content. In ancient times PHP was embedded in HTML via apache's php module, and later nginx used php-fpm; the idea is the same, requests that need PHP are forwarded to PHP. In the Java world there is a rather capable thing called Tomcat, which turns requests into servlets, and a servlet is simply Java code implementing a particular interface,

+

+package javax.servlet;
 
-  HeapWord* result = op.result();
-  bool ret_succeeded = op.prologue_succeeded() && op.pause_succeeded();
-  assert(result == NULL || ret_succeeded,
-         "the result should be NULL if the VM did not succeed");
-  *succeeded = ret_succeeded;
+import java.io.IOException;
 
-  assert_heap_not_locked();
-  return result;
-}
-

This is the part of collection that requires a pause. VMThread::execute(&op); triggers the actual execution, and the real work is done in the VM_G1CollectForAllocation::doit method.

-
void VM_G1CollectForAllocation::doit() {
-  G1CollectedHeap* g1h = G1CollectedHeap::heap();
-  assert(!_should_initiate_conc_mark || g1h->should_do_concurrent_full_gc(_gc_cause),
-      "only a GC locker, a System.gc(), stats update, whitebox, or a hum allocation induced GC should start a cycle");
+/**
+ * Defines methods that all servlets must implement.
+ *
+ * <p>
+ * A servlet is a small Java program that runs within a Web server. Servlets
+ * receive and respond to requests from Web clients, usually across HTTP, the
+ * HyperText Transfer Protocol.
+ *
+ * <p>
+ * To implement this interface, you can write a generic servlet that extends
+ * <code>javax.servlet.GenericServlet</code> or an HTTP servlet that extends
+ * <code>javax.servlet.http.HttpServlet</code>.
+ *
+ * <p>
+ * This interface defines methods to initialize a servlet, to service requests,
+ * and to remove a servlet from the server. These are known as life-cycle
+ * methods and are called in the following sequence:
+ * <ol>
+ * <li>The servlet is constructed, then initialized with the <code>init</code>
+ * method.
+ * <li>Any calls from clients to the <code>service</code> method are handled.
+ * <li>The servlet is taken out of service, then destroyed with the
+ * <code>destroy</code> method, then garbage collected and finalized.
+ * </ol>
+ *
+ * <p>
+ * In addition to the life-cycle methods, this interface provides the
+ * <code>getServletConfig</code> method, which the servlet can use to get any
+ * startup information, and the <code>getServletInfo</code> method, which allows
+ * the servlet to return basic information about itself, such as author,
+ * version, and copyright.
+ *
+ * @see GenericServlet
+ * @see javax.servlet.http.HttpServlet
+ */
+public interface Servlet {
 
-  if (_word_size > 0) {
-    // An allocation has been requested. So, try to do that first.
-    _result = g1h->attempt_allocation_at_safepoint(_word_size,
-                                                   false /* expect_null_cur_alloc_region */);
-    if (_result != NULL) {
-      // If we can successfully allocate before we actually do the
-      // pause then we will consider this pause successful.
-      _pause_succeeded = true;
-      return;
-    }
-  }
+    /**
+     * Called by the servlet container to indicate to a servlet that the servlet
+     * is being placed into service.
+     *
+     * <p>
+     * The servlet container calls the <code>init</code> method exactly once
+     * after instantiating the servlet. The <code>init</code> method must
+     * complete successfully before the servlet can receive any requests.
+     *
+     * <p>
+     * The servlet container cannot place the servlet into service if the
+     * <code>init</code> method
+     * <ol>
+     * <li>Throws a <code>ServletException</code>
+     * <li>Does not return within a time period defined by the Web server
+     * </ol>
+     *
+     *
+     * @param config
+     *            a <code>ServletConfig</code> object containing the servlet's
+     *            configuration and initialization parameters
+     *
+     * @exception ServletException
+     *                if an exception has occurred that interferes with the
+     *                servlet's normal operation
+     *
+     * @see UnavailableException
+     * @see #getServletConfig
+     */
+    public void init(ServletConfig config) throws ServletException;
 
-  GCCauseSetter x(g1h, _gc_cause);
-  if (_should_initiate_conc_mark) {
-    // It's safer to read old_marking_cycles_completed() here, given
-    // that noone else will be updating it concurrently. Since we'll
-    // only need it if we're initiating a marking cycle, no point in
-    // setting it earlier.
-    _old_marking_cycles_completed_before = g1h->old_marking_cycles_completed();
+    /**
+     *
+     * Returns a {@link ServletConfig} object, which contains initialization and
+     * startup parameters for this servlet. The <code>ServletConfig</code>
+     * object returned is the one passed to the <code>init</code> method.
+     *
+     * <p>
+     * Implementations of this interface are responsible for storing the
+     * <code>ServletConfig</code> object so that this method can return it. The
+     * {@link GenericServlet} class, which implements this interface, already
+     * does this.
+     *
+     * @return the <code>ServletConfig</code> object that initializes this
+     *         servlet
+     *
+     * @see #init
+     */
+    public ServletConfig getServletConfig();
 
-    // At this point we are supposed to start a concurrent cycle. We
-    // will do so if one is not already in progress.
-    bool res = g1h->g1_policy()->force_initial_mark_if_outside_cycle(_gc_cause);
+    /**
+     * Called by the servlet container to allow the servlet to respond to a
+     * request.
+     *
+     * <p>
+     * This method is only called after the servlet's <code>init()</code> method
+     * has completed successfully.
+     *
+     * <p>
+     * The status code of the response always should be set for a servlet that
+     * throws or sends an error.
+     *
+     *
+     * <p>
+     * Servlets typically run inside multithreaded servlet containers that can
+     * handle multiple requests concurrently. Developers must be aware to
+     * synchronize access to any shared resources such as files, network
+     * connections, and as well as the servlet's class and instance variables.
+     * More information on multithreaded programming in Java is available in <a
+     * href
+     * ="http://java.sun.com/Series/Tutorial/java/threads/multithreaded.html">
+     * the Java tutorial on multi-threaded programming</a>.
+     *
+     *
+     * @param req
+     *            the <code>ServletRequest</code> object that contains the
+     *            client's request
+     *
+     * @param res
+     *            the <code>ServletResponse</code> object that contains the
+     *            servlet's response
+     *
+     * @exception ServletException
+     *                if an exception occurs that interferes with the servlet's
+     *                normal operation
+     *
+     * @exception IOException
+     *                if an input or output exception occurs
+     */
+    public void service(ServletRequest req, ServletResponse res)
+            throws ServletException, IOException;
 
-    // The above routine returns true if we were able to force the
-    // next GC pause to be an initial mark; it returns false if a
-    // marking cycle is already in progress.
-    //
-    // If a marking cycle is already in progress just return and skip the
-    // pause below - if the reason for requesting this initial mark pause
-    // was due to a System.gc() then the requesting thread should block in
-    // doit_epilogue() until the marking cycle is complete.
-    //
-    // If this initial mark pause was requested as part of a humongous
-    // allocation then we know that the marking cycle must just have
-    // been started by another thread (possibly also allocating a humongous
-    // object) as there was no active marking cycle when the requesting
-    // thread checked before calling collect() in
-    // attempt_allocation_humongous(). Retrying the GC, in this case,
-    // will cause the requesting thread to spin inside collect() until the
-    // just started marking cycle is complete - which may be a while. So
-    // we do NOT retry the GC.
-    if (!res) {
-      assert(_word_size == 0, "Concurrent Full GC/Humongous Object IM shouldn't be allocating");
-      if (_gc_cause != GCCause::_g1_humongous_allocation) {
-        _should_retry_gc = true;
-      }
-      return;
-    }
-  }
-
-  // Try a partial collection of some kind.
-  _pause_succeeded = g1h->do_collection_pause_at_safepoint(_target_pause_time_ms);
-
-  if (_pause_succeeded) {
-    if (_word_size > 0) {
-      // An allocation had been requested. Do it, eventually trying a stronger
-      // kind of GC.
-      _result = g1h->satisfy_failed_allocation(_word_size, &_pause_succeeded);
-    } else {
-      bool should_upgrade_to_full = !g1h->should_do_concurrent_full_gc(_gc_cause) &&
-                                    !g1h->has_regions_left_for_allocation();
-      if (should_upgrade_to_full) {
-        // There has been a request to perform a GC to free some space. We have no
-        // information on how much memory has been asked for. In case there are
-        // absolutely no regions left to allocate into, do a maximally compacting full GC.
-        log_info(gc, ergo)("Attempting maximally compacting collection");
-        _pause_succeeded = g1h->do_full_collection(false, /* explicit gc */
-                                                   true   /* clear_all_soft_refs */);
-      }
-    }
-    guarantee(_pause_succeeded, "Elevated collections during the safepoint must always succeed.");
-  } else {
-    assert(_result == NULL, "invariant");
-    // The only reason for the pause to not be successful is that, the GC locker is
-    // active (or has become active since the prologue was executed). In this case
-    // we should retry the pause after waiting for the GC locker to become inactive.
-    _should_retry_gc = true;
-  }
-}
-

Here you can see that the core is the G1CollectedHeap::do_collection_pause_at_safepoint method, which carries the target pause time.

-
G1CollectedHeap::do_collection_pause_at_safepoint(double target_pause_time_ms) {
-  assert_at_safepoint_on_vm_thread();
-  guarantee(!is_gc_active(), "collection is not reentrant");
+    /**
+     * Returns information about the servlet, such as author, version, and
+     * copyright.
+     *
+     * <p>
+     * The string that this method returns should be plain text and not markup
+     * of any kind (such as HTML, XML, etc.).
+     *
+     * @return a <code>String</code> containing servlet information
+     */
+    public String getServletInfo();
 
-  if (GCLocker::check_active_before_gc()) {
-    return false;
-  }
+    /**
+     * Called by the servlet container to indicate to a servlet that the servlet
+     * is being taken out of service. This method is only called once all
+     * threads within the servlet's <code>service</code> method have exited or
+     * after a timeout period has passed. After the servlet container calls this
+     * method, it will not call the <code>service</code> method again on this
+     * servlet.
+     *
+     * <p>
+     * This method gives the servlet an opportunity to clean up any resources
+     * that are being held (for example, memory, file handles, threads) and make
+     * sure that any persistent state is synchronized with the servlet's current
+     * state in memory.
+     */
+    public void destroy();
+}
+
+

Focus on the servlet's service method: it accepts a request and, once processing is done, writes a response. I'll skip the details, since Tomcat alone could fill half a year of posts. Taking it one step further, you realize the whole thing really comes down to ordering. Let me borrow a diagram.

The biggest difference between the filter and the other two is that it is servlet-based and runs at the very outer layer; then come the interceptor's preHandle and postHandle, and only then our usual AOP. That's all there is to it. Let's do a small experiment (but first, a bit more code).

+

Filter

// ---------------------------------------------------- FilterChain Methods
 
-  _gc_timer_stw->register_gc_start();
+    /**
+     * Invoke the next filter in this chain, passing the specified request
+     * and response.  If there are no more filters in this chain, invoke
+     * the <code>service()</code> method of the servlet itself.
+     *
+     * @param request The servlet request we are processing
+     * @param response The servlet response we are creating
+     *
+     * @exception IOException if an input/output error occurs
+     * @exception ServletException if a servlet exception occurs
+     */
+    @Override
+    public void doFilter(ServletRequest request, ServletResponse response)
+        throws IOException, ServletException {
 
-  GCIdMark gc_id_mark;
-  _gc_tracer_stw->report_gc_start(gc_cause(), _gc_timer_stw->gc_start());
+        if( Globals.IS_SECURITY_ENABLED ) {
+            final ServletRequest req = request;
+            final ServletResponse res = response;
+            try {
+                java.security.AccessController.doPrivileged(
+                    new java.security.PrivilegedExceptionAction<Void>() {
+                        @Override
+                        public Void run()
+                            throws ServletException, IOException {
+                            internalDoFilter(req,res);
+                            return null;
+                        }
+                    }
+                );
+            } catch( PrivilegedActionException pe) {
+                Exception e = pe.getException();
+                if (e instanceof ServletException)
+                    throw (ServletException) e;
+                else if (e instanceof IOException)
+                    throw (IOException) e;
+                else if (e instanceof RuntimeException)
+                    throw (RuntimeException) e;
+                else
+                    throw new ServletException(e.getMessage(), e);
+            }
+        } else {
+            internalDoFilter(request,response);
+        }
+    }
+    private void internalDoFilter(ServletRequest request,
+                                  ServletResponse response)
+        throws IOException, ServletException {
 
-  SvcGCMarker sgcm(SvcGCMarker::MINOR);
-  ResourceMark rm;
+        // Call the next filter if there is one
+        if (pos < n) {
+            ApplicationFilterConfig filterConfig = filters[pos++];
+            try {
+                Filter filter = filterConfig.getFilter();
 
-  g1_policy()->note_gc_start();
+                if (request.isAsyncSupported() && "false".equalsIgnoreCase(
+                        filterConfig.getFilterDef().getAsyncSupported())) {
+                    request.setAttribute(Globals.ASYNC_SUPPORTED_ATTR, Boolean.FALSE);
+                }
+                if( Globals.IS_SECURITY_ENABLED ) {
+                    final ServletRequest req = request;
+                    final ServletResponse res = response;
+                    Principal principal =
+                        ((HttpServletRequest) req).getUserPrincipal();
 
-  wait_for_root_region_scanning();
+                    Object[] args = new Object[]{req, res, this};
+                    SecurityUtil.doAsPrivilege ("doFilter", filter, classType, args, principal);
+                } else {
+                    filter.doFilter(request, response, this);
+                }
+            } catch (IOException | ServletException | RuntimeException e) {
+                throw e;
+            } catch (Throwable e) {
+                e = ExceptionUtils.unwrapInvocationTargetException(e);
+                ExceptionUtils.handleThrowable(e);
+                throw new ServletException(sm.getString("filterChain.filter"), e);
+            }
+            return;
+        }
 
-  print_heap_before_gc();
-  print_heap_regions();
-  trace_heap_before_gc(_gc_tracer_stw);
+        // We fell off the end of the chain -- call the servlet instance
+        try {
+            if (ApplicationDispatcher.WRAP_SAME_OBJECT) {
+                lastServicedRequest.set(request);
+                lastServicedResponse.set(response);
+            }
 
-  _verifier->verify_region_sets_optional();
-  _verifier->verify_dirty_young_regions();
+            if (request.isAsyncSupported() && !servletSupportsAsync) {
+                request.setAttribute(Globals.ASYNC_SUPPORTED_ATTR,
+                        Boolean.FALSE);
+            }
+            // Use potentially wrapped request from this point
+            if ((request instanceof HttpServletRequest) &&
+                    (response instanceof HttpServletResponse) &&
+                    Globals.IS_SECURITY_ENABLED ) {
+                final ServletRequest req = request;
+                final ServletResponse res = response;
+                Principal principal =
+                    ((HttpServletRequest) req).getUserPrincipal();
+                Object[] args = new Object[]{req, res};
+                SecurityUtil.doAsPrivilege("service",
+                                           servlet,
+                                           classTypeUsedInService,
+                                           args,
+                                           principal);
+            } else {
+                servlet.service(request, response);
+            }
+        } catch (IOException | ServletException | RuntimeException e) {
+            throw e;
+        } catch (Throwable e) {
+            e = ExceptionUtils.unwrapInvocationTargetException(e);
+            ExceptionUtils.handleThrowable(e);
+            throw new ServletException(sm.getString("filterChain.servlet"), e);
+        } finally {
+            if (ApplicationDispatcher.WRAP_SAME_OBJECT) {
+                lastServicedRequest.set(null);
+                lastServicedResponse.set(null);
+            }
+        }
+    }
+

Note this line:
filter.doFilter(request, response, this);
See it? It's just a filter chain. And where does this code live? org.apache.catalina.core.ApplicationFilterChain#doFilter
Next comes the interceptor,

+
protected void doDispatch(HttpServletRequest request, HttpServletResponse response) throws Exception {
+        HttpServletRequest processedRequest = request;
+        HandlerExecutionChain mappedHandler = null;
+        boolean multipartRequestParsed = false;
+        WebAsyncManager asyncManager = WebAsyncUtils.getAsyncManager(request);
 
-  // We should not be doing initial mark unless the conc mark thread is running
-  if (!_cm_thread->should_terminate()) {
-    // This call will decide whether this pause is an initial-mark
-    // pause. If it is, in_initial_mark_gc() will return true
-    // for the duration of this pause.
-    g1_policy()->decide_on_conc_mark_initiation();
-  }
+        try {
+            try {
+                ModelAndView mv = null;
+                Object dispatchException = null;
 
-  // We do not allow initial-mark to be piggy-backed on a mixed GC.
-  assert(!collector_state()->in_initial_mark_gc() ||
-          collector_state()->in_young_only_phase(), "sanity");
+                try {
+                    processedRequest = this.checkMultipart(request);
+                    multipartRequestParsed = processedRequest != request;
+                    mappedHandler = this.getHandler(processedRequest);
+                    if (mappedHandler == null) {
+                        this.noHandlerFound(processedRequest, response);
+                        return;
+                    }
 
-  // We also do not allow mixed GCs during marking.
-  assert(!collector_state()->mark_or_rebuild_in_progress() || collector_state()->in_young_only_phase(), "sanity");
+                    HandlerAdapter ha = this.getHandlerAdapter(mappedHandler.getHandler());
+                    String method = request.getMethod();
+                    boolean isGet = "GET".equals(method);
+                    if (isGet || "HEAD".equals(method)) {
+                        long lastModified = ha.getLastModified(request, mappedHandler.getHandler());
+                        if ((new ServletWebRequest(request, response)).checkNotModified(lastModified) && isGet) {
+                            return;
+                        }
+                    }
 
-  // Record whether this pause is an initial mark. When the current
-  // thread has completed its logging output and it's safe to signal
-  // the CM thread, the flag's value in the policy has been reset.
-  bool should_start_conc_mark = collector_state()->in_initial_mark_gc();
+                    /** 
+                     * Look here, look here ‼️
+                     */
+                    if (!mappedHandler.applyPreHandle(processedRequest, response)) {
+                        return;
+                    }
 
-  // Inner scope for scope based logging, timers, and stats collection
-  {
-    EvacuationInfo evacuation_info;
+                    mv = ha.handle(processedRequest, response, mappedHandler.getHandler());
+                    if (asyncManager.isConcurrentHandlingStarted()) {
+                        return;
+                    }
 
-    if (collector_state()->in_initial_mark_gc()) {
-      // We are about to start a marking cycle, so we increment the
-      // full collection counter.
-      increment_old_marking_cycles_started();
-      _cm->gc_tracer_cm()->set_gc_cause(gc_cause());
-    }
-
-    _gc_tracer_stw->report_yc_type(collector_state()->yc_type());
-
-    GCTraceCPUTime tcpu;
-
-    G1HeapVerifier::G1VerifyType verify_type;
-    FormatBuffer<> gc_string("Pause Young ");
-    if (collector_state()->in_initial_mark_gc()) {
-      gc_string.append("(Concurrent Start)");
-      verify_type = G1HeapVerifier::G1VerifyConcurrentStart;
-    } else if (collector_state()->in_young_only_phase()) {
-      if (collector_state()->in_young_gc_before_mixed()) {
-        gc_string.append("(Prepare Mixed)");
-      } else {
-        gc_string.append("(Normal)");
-      }
-      verify_type = G1HeapVerifier::G1VerifyYoungNormal;
-    } else {
-      gc_string.append("(Mixed)");
-      verify_type = G1HeapVerifier::G1VerifyMixed;
-    }
-    GCTraceTime(Info, gc) tm(gc_string, NULL, gc_cause(), true);
-
-    uint active_workers = AdaptiveSizePolicy::calc_active_workers(workers()->total_workers(),
-                                                                  workers()->active_workers(),
-                                                                  Threads::number_of_non_daemon_threads());
-    active_workers = workers()->update_active_workers(active_workers);
-    log_info(gc,task)("Using %u workers of %u for evacuation", active_workers, workers()->total_workers());
+                    this.applyDefaultViewName(processedRequest, mv);
+                    /** 
+                     * And again, look here ‼️
+                     */
+                    mappedHandler.applyPostHandle(processedRequest, response, mv);
+                } catch (Exception var20) {
+                    dispatchException = var20;
+                } catch (Throwable var21) {
+                    dispatchException = new NestedServletException("Handler dispatch failed", var21);
+                }
 
-    TraceCollectorStats tcs(g1mm()->incremental_collection_counters());
-    TraceMemoryManagerStats tms(&_memory_manager, gc_cause(),
-                                collector_state()->yc_type() == Mixed /* allMemoryPoolsAffected */);
+                this.processDispatchResult(processedRequest, response, mappedHandler, mv, (Exception)dispatchException);
+            } catch (Exception var22) {
+                this.triggerAfterCompletion(processedRequest, response, mappedHandler, var22);
+            } catch (Throwable var23) {
+                this.triggerAfterCompletion(processedRequest, response, mappedHandler, new NestedServletException("Handler processing failed", var23));
+            }
 
-    G1HeapTransition heap_transition(this);
-    size_t heap_used_bytes_before_gc = used();
+        } finally {
+            if (asyncManager.isConcurrentHandlingStarted()) {
+                if (mappedHandler != null) {
+                    mappedHandler.applyAfterConcurrentHandlingStarted(processedRequest, response);
+                }
+            } else if (multipartRequestParsed) {
+                this.cleanupMultipart(processedRequest);
+            }
 
-    // Don't dynamically change the number of GC threads this early.  A value of
-    // 0 is used to indicate serial work.  When parallel work is done,
-    // it will be set.
+        }
+    }
+

Where is this code? org.springframework.web.servlet.DispatcherServlet#doDispatch. And only after that comes the AOP we write ourselves. Starting to make sense? Good, on to the examples.
Let's write a filter:

+
public class DemoFilter extends HttpServlet implements Filter {
+    @Override
+    public void init(FilterConfig filterConfig) throws ServletException {
+        System.out.println("==>DemoFilter启动");
+    }
 
-    { // Call to jvmpi::post_class_unload_events must occur outside of active GC
-      IsGCActiveMark x;
+    @Override
+    public void doFilter(ServletRequest servletRequest, ServletResponse servletResponse, FilterChain filterChain) throws IOException, ServletException {
+        // cast the request to HttpServletRequest / HttpServletResponse
+        HttpServletRequest req = (HttpServletRequest) servletRequest;
+        HttpServletResponse resp = (HttpServletResponse) servletResponse;
+        System.out.println("before filter");
+        filterChain.doFilter(req, resp);
+        System.out.println("after filter");
+    }
 
-      gc_prologue(false);
+    @Override
+    public void destroy() {
 
-      if (VerifyRememberedSets) {
-        log_info(gc, verify)("[Verifying RemSets before GC]");
-        VerifyRegionRemSetClosure v_cl;
-        heap_region_iterate(&v_cl);
-      }
+    }
+}
+

Since this project uses Spring Boot, there is no web.xml to write; a configuration class does the job:

+
@Configuration
+public class FilterConfiguration {
+    @Bean
+    public FilterRegistrationBean filterDemo4Registration() {
+        FilterRegistrationBean registration = new FilterRegistrationBean();
+        //register the filter instance
+        registration.setFilter(new DemoFilter());
+        //URL patterns to intercept
+        registration.addUrlPatterns("/*");
+        //filter name
+        registration.setName("DemoFilter");
+        //enabled flag; false disables the filter's registration
+        registration.setEnabled(true);
+        //filter order
+        registration.setOrder(1);
+        return registration;
+    }
 
-      _verifier->verify_before_gc(verify_type);
+}
+

Then an interceptor and an aspect, plus a simple request handler; note the interceptor also needs to be registered, see the sketch after this listing:

+
public class DemoInterceptor extends HandlerInterceptorAdapter {
+    @Override
+    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
+        System.out.println("preHandle test");
+        return true;
+    }
 
-      _verifier->check_bitmaps("GC Start");
+    @Override
+    public void postHandle(HttpServletRequest request, HttpServletResponse response, Object handler, ModelAndView modelAndView) throws Exception {
+        System.out.println("postHandle test");
+    }
+}
+@Aspect
+@Component
+public class DemoAspect {
 
-#if COMPILER2_OR_JVMCI
-      DerivedPointerTable::clear();
-#endif
+    @Pointcut("execution( public * com.nicksxs.springbootdemo.demo.DemoController.*())")
+    public void point() {
 
-      // Please see comment in g1CollectedHeap.hpp and
-      // G1CollectedHeap::ref_processing_init() to see how
-      // reference processing currently works in G1.
+    }
 
-      // Enable discovery in the STW reference processor
-      _ref_processor_stw->enable_discovery();
+    @Before("point()")
+    public void doBefore(){
+        System.out.println("==doBefore==");
+    }
 
-      {
-        // We want to temporarily turn off discovery by the
-        // CM ref processor, if necessary, and turn it back on
-        // on again later if we do. Using a scoped
-        // NoRefDiscovery object will do this.
-        NoRefDiscovery no_cm_discovery(_ref_processor_cm);
+    @After("point()")
+    public void doAfter(){
+        System.out.println("==doAfter==");
+    }
+}
+@RestController
+public class DemoController {
 
-        // Forget the current alloc region (we might even choose it to be part
-        // of the collection set!).
-        _allocator->release_mutator_alloc_region();
+    @RequestMapping("/hello")
+    @ResponseBody
+    public String hello() {
+        return "hello world";
+    }
+}
+
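
One thing the snippets above leave out is that the interceptor still has to be registered with Spring MVC; a minimal sketch of that wiring (the config class name is mine):

import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.InterceptorRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

@Configuration
public class WebConfig implements WebMvcConfigurer {
    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        // apply DemoInterceptor to every request path
        registry.addInterceptor(new DemoInterceptor()).addPathPatterns("/**");
    }
}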

Alright, fire a request and watch stdout.

Done, that's it~

+]]>
+
+ Java
+ Filter
+ Interceptor
- AOP
+ Spring
+ Servlet
+ Interceptor
+ AOP
+
+
+ Java
+ Filter
+ Interceptor
+ AOP
+ Spring
+ Tomcat
+ Servlet
+ Web
+
+
+
+    Leetcode 021 Merge Two Sorted Lists: Solution Analysis
+    /2021/10/07/Leetcode-021-%E5%90%88%E5%B9%B6%E4%B8%A4%E4%B8%AA%E6%9C%89%E5%BA%8F%E9%93%BE%E8%A1%A8-Merge-Two-Sorted-Lists-%E9%A2%98%E8%A7%A3%E5%88%86%E6%9E%90/
+    Problem

Merge two sorted linked lists and return it as a sorted list. The list should be made by splicing together the nodes of the first two lists.

+

Merge the two ascending linked lists into one new ascending list and return it. The new list is formed by splicing together all the nodes of the two given lists.

+

Example 1

+
+

Input: l1 = [1,2,4], l2 = [1,3,4]
Output: [1,1,2,3,4,4]

+
+

Example 2

+

Input: l1 = [], l2 = []
Output: []

+
+

Example 3

+

Input: l1 = [], l2 = [0]
Output: [0]

+
+

Quick analysis

This one is rated Easy and does look simple: merge two linked lists, comparing values as you go. To be more frugal, the nicest version would merge in place within the two existing lists.

+

Solution code

public ListNode mergeTwoLists(ListNode l1, ListNode l2) {
+        // The two ifs below handle the boundary cases: if either list is null, just return the other
+        if (l1 == null) {
+            return l2;
+        }
+        if (l2 == null) {
+            return l1;
+        }
+        // allocate a head node for the merged list
+        ListNode merged = new ListNode();
+        // current points at the node being filled in
+        ListNode current = merged;
+        // I initially gave this while a "not both null" condition, then realized it is unnecessary,
+        // because the first two ifs inside are the exit conditions
+        while (true) {
+            if (l1 == null) {
+                // similar to the start, except the rest of l2 must be appended to merged;
+                // so it cannot simply be current = l2, which would just drop the remainder
+                current.val = l2.val;
+                current.next = l2.next;
+                break;
+            }
+            if (l2 == null) {
+                current.val = l1.val;
+                current.next = l1.next;
+                break;
+            }
+            // both lists are non-empty here, so compare the values
+            if (l1.val < l2.val) {
+                current.val = l1.val;
+                l1 = l1.next;
+            } else {
+                current.val = l2.val;
+                l2 = l2.next;
+            }
+            // allocate the next node; this could also go at the top of the loop
+            current.next = new ListNode();
+            current = current.next;
+        }
+        current = null;
+        // return the head node
+        return merged;
+    }
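
As the quick analysis above hints, you can also merge in place by splicing the existing nodes instead of allocating new ones; a minimal sketch of that variant (my own code, not from the original submission):

public ListNode mergeTwoListsInPlace(ListNode l1, ListNode l2) {
    ListNode dummy = new ListNode(); // placeholder head, discarded at the end
    ListNode tail = dummy;
    while (l1 != null && l2 != null) {
        if (l1.val <= l2.val) {
            tail.next = l1;          // splice the smaller node in, no allocation
            l1 = l1.next;
        } else {
            tail.next = l2;
            l2 = l2.next;
        }
        tail = tail.next;
    }
    tail.next = (l1 != null) ? l1 : l2; // append whatever remains
    return dummy.next;
}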
-        // This timing is only used by the ergonomics to handle our pause target.
-        // It is unclear why this should not include the full pause. We will
-        // investigate this in CR 7178365.
-        //
-        // Preserving the old comment here if that helps the investigation:
-        //
-        // The elapsed time induced by the start time below deliberately elides
-        // the possible verification above.
-        double sample_start_time_sec = os::elapsedTime();
+

Result

+]]>
+
+ Java
+ leetcode
+
+
+ leetcode
+ java
+ solution
+
+
+
+    Overview of the G1 Collector
+    /2020/02/09/G1%E6%94%B6%E9%9B%86%E5%99%A8%E6%A6%82%E8%BF%B0/
+    G1, the Garbage-First Collector, targets machines with multi-core CPUs and large amounts of memory. Its standout feature is predictable pause times: the official description is a solution that provides low-latency behavior on large heaps, typically 6GB and up, with stable, predictable pause times below 0.5 seconds.

+

This post mainly introduces this relatively new collector. Collectors before G1 were all based on the memory layout in the figure below, with a young generation, an old generation, and a permanent generation (before JDK 8). Pre-G1 collectors are all generational, e.g. Serial and Parallel, and usually come in pairs, originally Serial with Serial Old, because the young and old generations are collected differently: the young generation mainly uses mark-copy, hence one Eden and two Survivor spaces, while the old generation generally uses mark-compact. G1 handles this rather differently.

Take a look at G1's memory layout:

You can see it is quite different: G1 divides memory into equally sized regions, each occupying a contiguous range of virtual memory. A given region plays a role analogous to the Eden, Survivor, or Old Generation of the older collectors. Concretely, at the code level:

+
// We encode the value of the heap region type so the generation can be
+ // determined quickly. The tag is split into two parts:
+ //
+ //   major type (young, old, humongous, archive)           : top N-1 bits
+ //   minor type (eden / survivor, starts / cont hum, etc.) : bottom 1 bit
+ //
+ // If there's need to increase the number of minor types in the
+ // future, we'll have to increase the size of the latter and hence
+ // decrease the size of the former.
+ //
+ // 00000 0 [ 0] Free
+ //
+ // 00001 0 [ 2] Young Mask
+ // 00001 0 [ 2] Eden
+ // 00001 1 [ 3] Survivor
+ //
+ // 00010 0 [ 4] Humongous Mask
+ // 00100 0 [ 8] Pinned Mask
+ // 00110 0 [12] Starts Humongous
+ // 00110 1 [13] Continues Humongous
+ //
+ // 01000 0 [16] Old Mask
+ //
+ // 10000 0 [32] Archive Mask
+ // 11100 0 [56] Open Archive
+ // 11100 1 [57] Closed Archive
+ //
+ typedef enum {
+   FreeTag               = 0,
 
-        g1_policy()->record_collection_pause_start(sample_start_time_sec);
+   YoungMask             = 2,
+   EdenTag               = YoungMask,
+   SurvTag               = YoungMask + 1,
 
-        if (collector_state()->in_initial_mark_gc()) {
-          concurrent_mark()->pre_initial_mark();
-        }
+   HumongousMask         = 4,
+   PinnedMask            = 8,
+   StartsHumongousTag    = HumongousMask | PinnedMask,
+   ContinuesHumongousTag = HumongousMask | PinnedMask + 1,
 
-        g1_policy()->finalize_collection_set(target_pause_time_ms, &_survivor);
+   OldMask               = 16,
+   OldTag                = OldMask,
 
-        evacuation_info.set_collectionset_regions(collection_set()->region_length());
+   // Archive regions are regions with immutable content (i.e. not reclaimed, and
+   // not allocated into during regular operation). They differ in the kind of references
+   // allowed for the contained objects:
+   // - Closed archive regions form a separate self-contained (closed) object graph
+   // within the set of all of these regions. No references outside of closed
+   // archive regions are allowed.
+   // - Open archive regions have no restrictions on the references of their objects.
+   // Objects within these regions are allowed to have references to objects
+   // contained in any other kind of regions.
+   ArchiveMask           = 32,
+   OpenArchiveTag        = ArchiveMask | PinnedMask | OldMask,
+   ClosedArchiveTag      = ArchiveMask | PinnedMask | OldMask + 1
+ } Tag;
- // Make sure the remembered sets are up to date. This needs to be - // done before register_humongous_regions_with_cset(), because the - // remembered sets are used there to choose eager reclaim candidates. - // If the remembered sets are not up to date we might miss some - // entries that need to be handled. - g1_rem_set()->cleanupHRRS(); +

hotspot/share/gc/g1/heapRegionType.hpp

+

When performing garbage collection, G1 operates in a manner similar to the CMS collector. G1 runs a concurrent global marking phase to determine the liveness of objects throughout the heap. Once marking completes, G1 knows which regions are mostly empty. It collects those regions first, which usually yields a large amount of free space; this is why the approach is called Garbage-First. As the name suggests, G1 concentrates its collection and compaction activity on the heap regions likely to be full of reclaimable objects, i.e. garbage. G1 uses a pause prediction model to meet the user-defined pause-time goal, and picks the number of regions to collect based on that goal.

+

Regions identified by G1 as reclaimable are collected by evacuation: G1 copies objects from one or more regions of the heap into a single region, compacting and freeing memory in the process. Evacuation runs in parallel on multiprocessors to reduce pause times and increase throughput. Thus, with each collection G1 continuously works within the user-defined pause time to reduce fragmentation. This is an improvement over the previous two approaches: the CMS (Concurrent Mark Sweep) collector does not compact, and ParallelOld only performs whole-heap compaction, which leads to considerable pause times.

+

It is important to note that G1 is not a real-time collector. It meets the configured pause-time goal with high probability, but not with absolute certainty. Based on data from previous collections, G1 estimates how many regions can be collected within the user-specified target time. The collector therefore has a reasonably accurate model of the cost of collecting regions, and it uses this model to decide which and how many regions to collect while staying within the pause-time goal.
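
For reference, the pause-time goal described here is the one set with the standard HotSpot flag -XX:MaxGCPauseMillis; for example (the 200ms value is just an illustration):

java -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -jar app.jar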

+

Note: G1 has both concurrent phases (running alongside application threads, e.g. refinement, marking, cleanup) and parallel phases (multi-threaded, e.g. stop-the-world). Full GC is still single-threaded, but with proper tuning your application should be able to avoid full GCs.

+

The previous post took a brief look, at the code level, at this predictable-pause mechanism, which is one of G1's defining features.

+]]>
+
+ Java
+ JVM
+ GC
+ C++
+
+
+ Java
+ JVM
+ C++
+ G1
+ GC
+ Garbage-First Collector
+
+
+
+    JVM Source Analysis: The G1 Garbage Collector, Part 1
+    /2019/12/07/JVM-G1-Part-1/
+    I'm quite interested in how Java implements GC. I used to rely mostly on Zhou Zhiming's book, but it explains the overall ideas and flow rather than the concrete GC source code,
especially the concrete implementation of G1.
The usual mental model of G1 is that it splits the previously monolithic young and old generations into small region-sized chunks of memory. In short, collecting the young or old generation used to involve that generation's entire heap space, while G1 turns it into finer-grained blocks.
This brings one obvious benefit and one obvious cost: collection becomes more flexible and pauses get shorter, but the overall bookkeeping complexity goes up.
So far I have read a little of the code related to G1's expected pause time.

+
HeapWord* G1CollectedHeap::do_collection_pause(size_t word_size,
+                                               uint gc_count_before,
+                                               bool* succeeded,
+                                               GCCause::Cause gc_cause) {
+  assert_heap_not_locked_and_not_at_safepoint();
+  VM_G1CollectForAllocation op(word_size,
+                               gc_count_before,
+                               gc_cause,
+                               false, /* should_initiate_conc_mark */
+                               g1_policy()->max_pause_time_ms());
+  VMThread::execute(&op);
 
-        register_humongous_regions_with_cset();
+  HeapWord* result = op.result();
+  bool ret_succeeded = op.prologue_succeeded() && op.pause_succeeded();
+  assert(result == NULL || ret_succeeded,
+         "the result should be NULL if the VM did not succeed");
+  *succeeded = ret_succeeded;
 
-        assert(_verifier->check_cset_fast_test(), "Inconsistency in the InCSetState table.");
+  assert_heap_not_locked();
+  return result;
+}
+

This is the part of collection that requires a pause. VMThread::execute(&op); triggers the actual execution, and the real work is done in the VM_G1CollectForAllocation::doit method.

+
void VM_G1CollectForAllocation::doit() {
+  G1CollectedHeap* g1h = G1CollectedHeap::heap();
+  assert(!_should_initiate_conc_mark || g1h->should_do_concurrent_full_gc(_gc_cause),
+      "only a GC locker, a System.gc(), stats update, whitebox, or a hum allocation induced GC should start a cycle");
 
-        // We call this after finalize_cset() to
-        // ensure that the CSet has been finalized.
-        _cm->verify_no_cset_oops();
+  if (_word_size > 0) {
+    // An allocation has been requested. So, try to do that first.
+    _result = g1h->attempt_allocation_at_safepoint(_word_size,
+                                                   false /* expect_null_cur_alloc_region */);
+    if (_result != NULL) {
+      // If we can successfully allocate before we actually do the
+      // pause then we will consider this pause successful.
+      _pause_succeeded = true;
+      return;
+    }
+  }
 
-        if (_hr_printer.is_active()) {
-          G1PrintCollectionSetClosure cl(&_hr_printer);
-          _collection_set.iterate(&cl);
-        }
+  GCCauseSetter x(g1h, _gc_cause);
+  if (_should_initiate_conc_mark) {
+    // It's safer to read old_marking_cycles_completed() here, given
+    // that noone else will be updating it concurrently. Since we'll
+    // only need it if we're initiating a marking cycle, no point in
+    // setting it earlier.
+    _old_marking_cycles_completed_before = g1h->old_marking_cycles_completed();
 
-        // Initialize the GC alloc regions.
-        _allocator->init_gc_alloc_regions(evacuation_info);
+    // At this point we are supposed to start a concurrent cycle. We
+    // will do so if one is not already in progress.
+    bool res = g1h->g1_policy()->force_initial_mark_if_outside_cycle(_gc_cause);
 
-        G1ParScanThreadStateSet per_thread_states(this, workers()->active_workers(), collection_set()->young_region_length());
-        pre_evacuate_collection_set();
+    // The above routine returns true if we were able to force the
+    // next GC pause to be an initial mark; it returns false if a
+    // marking cycle is already in progress.
+    //
+    // If a marking cycle is already in progress just return and skip the
+    // pause below - if the reason for requesting this initial mark pause
+    // was due to a System.gc() then the requesting thread should block in
+    // doit_epilogue() until the marking cycle is complete.
+    //
+    // If this initial mark pause was requested as part of a humongous
+    // allocation then we know that the marking cycle must just have
+    // been started by another thread (possibly also allocating a humongous
+    // object) as there was no active marking cycle when the requesting
+    // thread checked before calling collect() in
+    // attempt_allocation_humongous(). Retrying the GC, in this case,
+    // will cause the requesting thread to spin inside collect() until the
+    // just started marking cycle is complete - which may be a while. So
+    // we do NOT retry the GC.
+    if (!res) {
+      assert(_word_size == 0, "Concurrent Full GC/Humongous Object IM shouldn't be allocating");
+      if (_gc_cause != GCCause::_g1_humongous_allocation) {
+        _should_retry_gc = true;
+      }
+      return;
+    }
+  }
 
-        // Actually do the work...
-        evacuate_collection_set(&per_thread_states);
+  // Try a partial collection of some kind.
+  _pause_succeeded = g1h->do_collection_pause_at_safepoint(_target_pause_time_ms);
 
-        post_evacuate_collection_set(evacuation_info, &per_thread_states);
+  if (_pause_succeeded) {
+    if (_word_size > 0) {
+      // An allocation had been requested. Do it, eventually trying a stronger
+      // kind of GC.
+      _result = g1h->satisfy_failed_allocation(_word_size, &_pause_succeeded);
+    } else {
+      bool should_upgrade_to_full = !g1h->should_do_concurrent_full_gc(_gc_cause) &&
+                                    !g1h->has_regions_left_for_allocation();
+      if (should_upgrade_to_full) {
+        // There has been a request to perform a GC to free some space. We have no
+        // information on how much memory has been asked for. In case there are
+        // absolutely no regions left to allocate into, do a maximally compacting full GC.
+        log_info(gc, ergo)("Attempting maximally compacting collection");
+        _pause_succeeded = g1h->do_full_collection(false, /* explicit gc */
+                                                   true   /* clear_all_soft_refs */);
+      }
+    }
+    guarantee(_pause_succeeded, "Elevated collections during the safepoint must always succeed.");
+  } else {
+    assert(_result == NULL, "invariant");
+    // The only reason for the pause to not be successful is that, the GC locker is
+    // active (or has become active since the prologue was executed). In this case
+    // we should retry the pause after waiting for the GC locker to become inactive.
+    _should_retry_gc = true;
+  }
+}
+

Here you can see that the core is the G1CollectedHeap::do_collection_pause_at_safepoint method, which carries the target pause time.

+
G1CollectedHeap::do_collection_pause_at_safepoint(double target_pause_time_ms) {
+  assert_at_safepoint_on_vm_thread();
+  guarantee(!is_gc_active(), "collection is not reentrant");
 
-        const size_t* surviving_young_words = per_thread_states.surviving_young_words();
-        free_collection_set(&_collection_set, evacuation_info, surviving_young_words);
+  if (GCLocker::check_active_before_gc()) {
+    return false;
+  }
 
-        eagerly_reclaim_humongous_regions();
+  _gc_timer_stw->register_gc_start();
 
-        record_obj_copy_mem_stats();
-        _survivor_evac_stats.adjust_desired_plab_sz();
-        _old_evac_stats.adjust_desired_plab_sz();
+  GCIdMark gc_id_mark;
+  _gc_tracer_stw->report_gc_start(gc_cause(), _gc_timer_stw->gc_start());
 
-        double start = os::elapsedTime();
-        start_new_collection_set();
-        g1_policy()->phase_times()->record_start_new_cset_time_ms((os::elapsedTime() - start) * 1000.0);
+  SvcGCMarker sgcm(SvcGCMarker::MINOR);
+  ResourceMark rm;
 
-        if (evacuation_failed()) {
-          set_used(recalculate_used());
-          if (_archive_allocator != NULL) {
-            _archive_allocator->clear_used();
-          }
-          for (uint i = 0; i < ParallelGCThreads; i++) {
-            if (_evacuation_failed_info_array[i].has_failed()) {
-              _gc_tracer_stw->report_evacuation_failed(_evacuation_failed_info_array[i]);
-            }
-          }
-        } else {
-          // The "used" of the the collection set have already been subtracted
-          // when they were freed.  Add in the bytes evacuated.
-          increase_used(g1_policy()->bytes_copied_during_gc());
+  g1_policy()->note_gc_start();
+
+  wait_for_root_region_scanning();
+
+  print_heap_before_gc();
+  print_heap_regions();
+  trace_heap_before_gc(_gc_tracer_stw);
+
+  _verifier->verify_region_sets_optional();
+  _verifier->verify_dirty_young_regions();
+
+  // We should not be doing initial mark unless the conc mark thread is running
+  if (!_cm_thread->should_terminate()) {
+    // This call will decide whether this pause is an initial-mark
+    // pause. If it is, in_initial_mark_gc() will return true
+    // for the duration of this pause.
+    g1_policy()->decide_on_conc_mark_initiation();
+  }
+
+  // We do not allow initial-mark to be piggy-backed on a mixed GC.
+  assert(!collector_state()->in_initial_mark_gc() ||
+          collector_state()->in_young_only_phase(), "sanity");
+
+  // We also do not allow mixed GCs during marking.
+  assert(!collector_state()->mark_or_rebuild_in_progress() || collector_state()->in_young_only_phase(), "sanity");
+
+  // Record whether this pause is an initial mark. When the current
+  // thread has completed its logging output and it's safe to signal
+  // the CM thread, the flag's value in the policy has been reset.
+  bool should_start_conc_mark = collector_state()->in_initial_mark_gc();
+
+  // Inner scope for scope based logging, timers, and stats collection
+  {
+    EvacuationInfo evacuation_info;
+
+    if (collector_state()->in_initial_mark_gc()) {
+      // We are about to start a marking cycle, so we increment the
+      // full collection counter.
+      increment_old_marking_cycles_started();
+      _cm->gc_tracer_cm()->set_gc_cause(gc_cause());
+    }
+
+    _gc_tracer_stw->report_yc_type(collector_state()->yc_type());
+
+    GCTraceCPUTime tcpu;
+
+    G1HeapVerifier::G1VerifyType verify_type;
+    FormatBuffer<> gc_string("Pause Young ");
+    if (collector_state()->in_initial_mark_gc()) {
+      gc_string.append("(Concurrent Start)");
+      verify_type = G1HeapVerifier::G1VerifyConcurrentStart;
+    } else if (collector_state()->in_young_only_phase()) {
+      if (collector_state()->in_young_gc_before_mixed()) {
+        gc_string.append("(Prepare Mixed)");
+      } else {
+        gc_string.append("(Normal)");
+      }
+      verify_type = G1HeapVerifier::G1VerifyYoungNormal;
+    } else {
+      gc_string.append("(Mixed)");
+      verify_type = G1HeapVerifier::G1VerifyMixed;
+    }
+    GCTraceTime(Info, gc) tm(gc_string, NULL, gc_cause(), true);
+
+    uint active_workers = AdaptiveSizePolicy::calc_active_workers(workers()->total_workers(),
+                                                                  workers()->active_workers(),
+                                                                  Threads::number_of_non_daemon_threads());
+    active_workers = workers()->update_active_workers(active_workers);
+    log_info(gc,task)("Using %u workers of %u for evacuation", active_workers, workers()->total_workers());
+
+    TraceCollectorStats tcs(g1mm()->incremental_collection_counters());
+    TraceMemoryManagerStats tms(&_memory_manager, gc_cause(),
+                                collector_state()->yc_type() == Mixed /* allMemoryPoolsAffected */);
+
+    G1HeapTransition heap_transition(this);
+    size_t heap_used_bytes_before_gc = used();
+
+    // Don't dynamically change the number of GC threads this early.  A value of
+    // 0 is used to indicate serial work.  When parallel work is done,
+    // it will be set.
+
+    { // Call to jvmpi::post_class_unload_events must occur outside of active GC
+      IsGCActiveMark x;
+
+      gc_prologue(false);
+
+      if (VerifyRememberedSets) {
+        log_info(gc, verify)("[Verifying RemSets before GC]");
+        VerifyRegionRemSetClosure v_cl;
+        heap_region_iterate(&v_cl);
+      }
+
+      _verifier->verify_before_gc(verify_type);
+
+      _verifier->check_bitmaps("GC Start");
+
+#if COMPILER2_OR_JVMCI
+      DerivedPointerTable::clear();
+#endif
+
+      // Please see comment in g1CollectedHeap.hpp and
+      // G1CollectedHeap::ref_processing_init() to see how
+      // reference processing currently works in G1.
+
+      // Enable discovery in the STW reference processor
+      _ref_processor_stw->enable_discovery();
+
+      {
+        // We want to temporarily turn off discovery by the
+        // CM ref processor, if necessary, and turn it back on
+        // on again later if we do. Using a scoped
+        // NoRefDiscovery object will do this.
+        NoRefDiscovery no_cm_discovery(_ref_processor_cm);
+
+        // Forget the current alloc region (we might even choose it to be part
+        // of the collection set!).
+        _allocator->release_mutator_alloc_region();
+
+        // This timing is only used by the ergonomics to handle our pause target.
+        // It is unclear why this should not include the full pause. We will
+        // investigate this in CR 7178365.
+        //
+        // Preserving the old comment here if that helps the investigation:
+        //
+        // The elapsed time induced by the start time below deliberately elides
+        // the possible verification above.
+        double sample_start_time_sec = os::elapsedTime();
+
+        g1_policy()->record_collection_pause_start(sample_start_time_sec);
+
+        if (collector_state()->in_initial_mark_gc()) {
+          concurrent_mark()->pre_initial_mark();
+        }
+
+        g1_policy()->finalize_collection_set(target_pause_time_ms, &_survivor);
+
+        evacuation_info.set_collectionset_regions(collection_set()->region_length());
+
+        // Make sure the remembered sets are up to date. This needs to be
+        // done before register_humongous_regions_with_cset(), because the
+        // remembered sets are used there to choose eager reclaim candidates.
+        // If the remembered sets are not up to date we might miss some
+        // entries that need to be handled.
+        g1_rem_set()->cleanupHRRS();
+
+        register_humongous_regions_with_cset();
+
+        assert(_verifier->check_cset_fast_test(), "Inconsistency in the InCSetState table.");
+
+        // We call this after finalize_cset() to
+        // ensure that the CSet has been finalized.
+        _cm->verify_no_cset_oops();
+
+        if (_hr_printer.is_active()) {
+          G1PrintCollectionSetClosure cl(&_hr_printer);
+          _collection_set.iterate(&cl);
+        }
+
+        // Initialize the GC alloc regions.
+        _allocator->init_gc_alloc_regions(evacuation_info);
+
+        G1ParScanThreadStateSet per_thread_states(this, workers()->active_workers(), collection_set()->young_region_length());
+        pre_evacuate_collection_set();
+
+        // Actually do the work...
+        evacuate_collection_set(&per_thread_states);
+
+        post_evacuate_collection_set(evacuation_info, &per_thread_states);
+
+        const size_t* surviving_young_words = per_thread_states.surviving_young_words();
+        free_collection_set(&_collection_set, evacuation_info, surviving_young_words);
+
+        eagerly_reclaim_humongous_regions();
+
+        record_obj_copy_mem_stats();
+        _survivor_evac_stats.adjust_desired_plab_sz();
+        _old_evac_stats.adjust_desired_plab_sz();
+
+        double start = os::elapsedTime();
+        start_new_collection_set();
+        g1_policy()->phase_times()->record_start_new_cset_time_ms((os::elapsedTime() - start) * 1000.0);
+
+        if (evacuation_failed()) {
+          set_used(recalculate_used());
+          if (_archive_allocator != NULL) {
+            _archive_allocator->clear_used();
+          }
+          for (uint i = 0; i < ParallelGCThreads; i++) {
+            if (_evacuation_failed_info_array[i].has_failed()) {
+              _gc_tracer_stw->report_evacuation_failed(_evacuation_failed_info_array[i]);
+            }
+          }
+        } else {
+          // The "used" of the the collection set have already been subtracted
+          // when they were freed.  Add in the bytes evacuated.
+          increase_used(g1_policy()->bytes_copied_during_gc());
         }
 
         if (collector_state()->in_initial_mark_gc()) {
@@ -3007,66 +3239,219 @@ Node *clone(Node *graph) {
       
   
   
-    Leetcode 021 Merge Two Sorted Lists: Solution Analysis
-    /2021/10/07/Leetcode-021-%E5%90%88%E5%B9%B6%E4%B8%A4%E4%B8%AA%E6%9C%89%E5%BA%8F%E9%93%BE%E8%A1%A8-Merge-Two-Sorted-Lists-%E9%A2%98%E8%A7%A3%E5%88%86%E6%9E%90/
-    Problem

Merge two sorted linked lists and return it as a sorted list. The list should be made by splicing together the nodes of the first two lists.

-

Merge the two ascending linked lists into one new ascending list and return it. The new list is formed by splicing together all the nodes of the two given lists.

-

Example 1

-
-

Input: l1 = [1,2,4], l2 = [1,3,4]
Output: [1,1,2,3,4,4]

-
-

Example 2

-

Input: l1 = [], l2 = []
Output: []

-
-

Example 3

-

Input: l1 = [], l2 = [0]
Output: [0]

-
-

Quick analysis

This one is rated Easy and does look simple: merge two linked lists, comparing values as you go. To be more frugal, the nicest version would merge in place within the two existing lists.

-

Solution code

public ListNode mergeTwoLists(ListNode l1, ListNode l2) {
-        // The two ifs below handle the boundary cases: if either list is null, just return the other
-        if (l1 == null) {
-            return l2;
+    Leetcode 105 Construct Binary Tree from Preorder and Inorder Traversal: Solution Analysis
+    /2020/12/13/Leetcode-105-%E4%BB%8E%E5%89%8D%E5%BA%8F%E4%B8%8E%E4%B8%AD%E5%BA%8F%E9%81%8D%E5%8E%86%E5%BA%8F%E5%88%97%E6%9E%84%E9%80%A0%E4%BA%8C%E5%8F%89%E6%A0%91-Construct-Binary-Tree-from-Preorder-and-Inorder-Traversal-%E9%A2%98%E8%A7%A3%E5%88%86%E6%9E%90/
+    Problem

Given preorder and inorder traversal of a tree, construct the binary tree.
Given the preorder and inorder traversal of a tree, construct the binary tree.

+

Note

You may assume that duplicates do not exist in the tree.
You may assume that duplicates do not exist in the tree. (PS: otherwise it simply couldn't be done)

+

Example:

preorder = [3,9,20,15,7]
+inorder = [9,3,15,20,7]
+

The resulting binary tree:

+
  3
+ / \
+9  20
+  /  \
+ 15   7
+ + +

Quick analysis

A fairly standard approach comes to mind for this one: recursively split the tree. Preorder is root-left-right, inorder is left-root-right. The root, already known from the preorder, is located in the inorder sequence, which then splits the left and right subtrees. In this example the root is 3; finding its position in the inorder tells us 9 is the left subtree and 15, 20, 7 the right subtree. The element counts of the subtrees then split the preorder the same way, and we just keep recursing.

+
class Solution {
+    public TreeNode buildTree(int[] preorder, int[] inorder) {
+      // grab the array length
+        int n = preorder.length;
+        // rule out bad input and boundary cases
+        if (n != inorder.length) {
+            return null;
         }
-        if (l2 == null) {
-            return l1;
+        if (n == 0) {
+            return null;
         }
-        // allocate a head node for the merged list
-        ListNode merged = new ListNode();
-        // current points at the node being filled in
-        ListNode current = merged;
-        // I initially gave this while a "not both null" condition, then realized it is unnecessary,
-        // because the first two ifs inside are the exit conditions
-        while (true) {
-            if (l1 == null) {
-                // similar to the start, except the rest of l2 must be appended to merged;
-                // so it cannot simply be current = l2, which would just drop the remainder
-                current.val = l2.val;
-                current.next = l2.next;
-                break;
-            }
-            if (l2 == null) {
-                current.val = l1.val;
-                current.next = l1.next;
+        if (n == 1) {
+            return new TreeNode(preorder[0]);
+        }
+        // the root node
+        TreeNode node = new TreeNode(preorder[0]);
+        int pos = 0;
+        // find the root's position in the inorder array
+        for (int i = 0; i < inorder.length; i++) {
+            if (node.val == inorder[i]) {
+                pos = i;
                 break;
             }
-            // both lists are non-empty here, so compare the values
-            if (l1.val < l2.val) {
-                current.val = l1.val;
-                l1 = l1.next;
-            } else {
-                current.val = l2.val;
-                l2 = l2.next;
-            }
-            // 这里是new个新的,其实也可以放在循环头上
-            current.next = new ListNode();
-            current = current.next;
         }
-        current = null;
-        // 返回这个头结点
-        return merged;
-    }
+        // 划分左右再进行递归,注意下`Arrays.copyOfRange`的用法
+        node.left = buildTree(Arrays.copyOfRange(preorder, 1, pos + 1), Arrays.copyOfRange(inorder, 0, pos));
+        node.right = buildTree(Arrays.copyOfRange(preorder, pos + 1, n), Arrays.copyOfRange(inorder, pos + 1, n));
+        return node;
+    }
+}
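
补一个最小的使用示例(示意性质,TreeNode 按力扣的标准定义,假设上面的 Solution 类可用):

public class BuildTreeDemo {
    public static void main(String[] args) {
        int[] preorder = {3, 9, 20, 15, 7};
        int[] inorder = {9, 3, 15, 20, 7};
        TreeNode root = new Solution().buildTree(preorder, inorder);
        // 根是 3,左子树只有 9,右子树的根是 20
        System.out.println(root.val);        // 3
        System.out.println(root.left.val);   // 9
        System.out.println(root.right.val);  // 20
    }
}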
]]>
+ + Java + leetcode + Binary Tree + java + Binary Tree + DFS + + + leetcode + java + Binary Tree + 二叉树 + 题解 + 递归 + Preorder Traversal + Inorder Traversal + 前序 + 中序 + +
+ + Leetcode 053 最大子序和 ( Maximum Subarray ) 题解分析 + /2021/11/28/Leetcode-053-%E6%9C%80%E5%A4%A7%E5%AD%90%E5%BA%8F%E5%92%8C-Maximum-Subarray-%E9%A2%98%E8%A7%A3%E5%88%86%E6%9E%90/ + 题目介绍

Given an integer array nums, find the contiguous subarray (containing at least one number) which has the largest sum and return its sum.

+

A subarray is a contiguous part of an array.

+

示例

Example 1:

+
+

Input: nums = [-2,1,-3,4,-1,2,1,-5,4]
Output: 6
Explanation: [4,-1,2,1] has the largest sum = 6.

+
+

Example 2:

+
+

Input: nums = [1]
Output: 1

+
+

Example 3:

+
+

Input: nums = [5,4,-1,7,8]
Output: 23

+
+

说起来这个题其实非常有渊源,大学数据结构课的第一个题就是这个,而最佳的算法就是传说中的 online 算法:只需要遍历一次就完了;最朴素的做法则是枚举所有的连续子数组,然后求出和最大的那个。

+

代码

public int maxSubArray(int[] nums) {
+        int max = nums[0];
+        int sum = nums[0];
+        for (int i = 1; i < nums.length; i++) {
+            // 这里最重要的就是这一行了,其实就是如果前面的 sum 是小于 0 的,那么就不需要前面的 sum,反正加上了还不如不加大
+            sum = Math.max(nums[i], sum + nums[i]);
+            // max 是用来承载最大值的
+            max = Math.max(max, sum);
+        }
+        return max;
+    }
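
按题目给的三个示例简单验证一下(示意代码,假设上面的方法放在一个 Solution 类里):

public class MaxSubArrayDemo {
    public static void main(String[] args) {
        Solution s = new Solution();
        // Example 1: [4,-1,2,1] 的和最大,等于 6
        System.out.println(s.maxSubArray(new int[]{-2, 1, -3, 4, -1, 2, 1, -5, 4})); // 6
        // Example 2: 只有一个元素
        System.out.println(s.maxSubArray(new int[]{1})); // 1
        // Example 3: 全部相加最大
        System.out.println(s.maxSubArray(new int[]{5, 4, -1, 7, 8})); // 23
    }
}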
]]>
+ + Java + leetcode + + + leetcode + java + 题解 + +
+ + Leetcode 121 买卖股票的最佳时机(Best Time to Buy and Sell Stock) 题解分析 + /2021/03/14/Leetcode-121-%E4%B9%B0%E5%8D%96%E8%82%A1%E7%A5%A8%E7%9A%84%E6%9C%80%E4%BD%B3%E6%97%B6%E6%9C%BA-Best-Time-to-Buy-and-Sell-Stock-%E9%A2%98%E8%A7%A3%E5%88%86%E6%9E%90/ + 题目介绍

You are given an array prices where prices[i] is the price of a given stock on the ith day.

+

You want to maximize your profit by choosing a single day to buy one stock and choosing a different day in the future to sell that stock.

+

Return the maximum profit you can achieve from this transaction. If you cannot achieve any profit, return 0.

+

给定一个数组 prices ,它的第 i 个元素 prices[i] 表示一支给定股票第 i 天的价格。

+

你只能选择 某一天 买入这只股票,并选择在 未来的某一个不同的日子 卖出该股票。设计一个算法来计算你所能获取的最大利润。

+

返回你可以从这笔交易中获取的最大利润。如果你不能获取任何利润,返回 0

+

简单分析

其实这个跟二叉树的最长路径和有点类似,需要找到整体的最大收益,但是在迭代过程中还要维护一个当前的买入值和卖出值

+
int maxSofar = 0;
+public int maxProfit(int[] prices) {
+    if (prices.length <= 1) {
+        return 0;
+    }
+    int maxIn = prices[0];
+    int maxOut = prices[0];
+    for (int i = 1; i < prices.length; i++) {
+        if (maxIn > prices[i]) {
+            // 当循环当前值小于之前的买入值时就当成买入值,同时卖出也要更新
+            maxIn = prices[i];
+            maxOut = prices[i];
+        }
+        if (prices[i] > maxOut) {
+            // 表示一个可卖出点,即比买入值高时
+            maxOut = prices[i];
+            // 需要设置一个历史值
+            maxSofar = Math.max(maxSofar, maxOut - maxIn);
+        }
+    }
+    return maxSofar;
+}
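
拿下文提到的 [2, 4, 1] 验证一下(示意,假设上面的方法和 maxSofar 字段放在一个 Solution 类里):

public class MaxProfitDemo {
    public static void main(String[] args) {
        // i=1: 4 > maxOut,maxOut=4,maxSofar = max(0, 4 - 2) = 2
        // i=2: 1 < maxIn,maxIn 和 maxOut 都被重置为 1,但 maxSofar 仍保留 2
        System.out.println(new Solution().maxProfit(new int[]{2, 4, 1})); // 2
    }
}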
-

结果

+

总结下

一开始看到是 Easy 就觉得很简单,最初没有 maxSofar,但是一提交就出现问题了:
对于 [2, 4, 1] 这种输入结果就会变成 0,所以还是需要一个变量来存放历史最大利润,这题有点动态规划的意思

+]]>
+ + Java + leetcode + java + DP + DP + + + leetcode + java + 题解 + DP + +
+ + Leetcode 1115 交替打印 FooBar ( Print FooBar Alternately *Medium* ) 题解分析 + /2022/05/01/Leetcode-1115-%E4%BA%A4%E6%9B%BF%E6%89%93%E5%8D%B0-FooBar-Print-FooBar-Alternately-Medium-%E9%A2%98%E8%A7%A3%E5%88%86%E6%9E%90/ + 无聊想去 roll 一题就看到了有并发题,就找到了这题,其实一眼看我的想法也是用信号量,但是用 condition 应该也是可以处理的,不过这类问题好像本地有点难调,因为它好像是抽取代码执行的,跟直观的逻辑比较不一样
Suppose you are given the following code:

+
class FooBar {
+  public void foo() {
+    for (int i = 0; i < n; i++) {
+      print("foo");
+    }
+  }
+
+  public void bar() {
+    for (int i = 0; i < n; i++) {
+      print("bar");
+    }
+  }
+}
+

The same instance of FooBar will be passed to two different threads:

+
  • thread A will call foo(), while
  • thread B will call bar().

Modify the given program to output "foobar" n times.
+

示例

Example 1:

+

Input: n = 1
Output: “foobar”
Explanation: There are two threads being fired asynchronously. One of them calls foo(), while the other calls bar().
“foobar” is being output 1 time.

+
+

Example 2:

+

Input: n = 2
Output: “foobarfoobar”
Explanation: “foobar” is being output 2 times.

+
+

题解

简析

其实用信号量是很直观的:让打印 foo 的线程先拥有信号量,打印完就把 bar 的信号量 + 1 然后等待;bar 线程打印时消耗 bar 的信号量,打印完再把 foo 的信号量 + 1,如此交替

+

code

import java.util.concurrent.Semaphore;

class FooBar {
+    
+    private final Semaphore foo = new Semaphore(1);
+    private final Semaphore bar = new Semaphore(0);
+    private int n;
+
+    public FooBar(int n) {
+        this.n = n;
+    }
+
+    public void foo(Runnable printFoo) throws InterruptedException {
+        
+        for (int i = 0; i < n; i++) {
+            foo.acquire();
+        	// printFoo.run() outputs "foo". Do not change or remove this line.
+        	printFoo.run();
+            bar.release();
+        }
+    }
+
+    public void bar(Runnable printBar) throws InterruptedException {
+        
+        for (int i = 0; i < n; i++) {
+            bar.acquire();
+            // printBar.run() outputs "bar". Do not change or remove this line.
+        	printBar.run();
+            foo.release();
+        }
+    }
+}
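
本地想验证的话,可以自己起两个线程模拟(示意代码,printFoo/printBar 这里简单用 System.out 代替):

public class FooBarDemo {
    public static void main(String[] args) throws InterruptedException {
        FooBar fooBar = new FooBar(2);
        // 线程 A 负责 foo,线程 B 负责 bar,输出应为 foobarfoobar
        Thread a = new Thread(() -> {
            try {
                fooBar.foo(() -> System.out.print("foo"));
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        Thread b = new Thread(() -> {
            try {
                fooBar.bar(() -> System.out.print("bar"));
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        a.start();
        b.start();
        a.join();
        b.join();
    }
}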
]]>
Java @@ -3076,6 +3461,7 @@ Node *clone(Node *graph) { leetcode java 题解 + Print FooBar Alternately
@@ -3154,194 +3540,128 @@ Output: 0 - 2022 年终总结 - /2023/01/15/2022-%E5%B9%B4%E7%BB%88%E6%80%BB%E7%BB%93/ - 一年又一年,时间匆匆,这一年过得不太容易,很多事情都是来得猝不及防,很多规划也照例是没有完成,今年更多了一些,又是比较丧的一篇总结
工作上的变化让我多理解了一些社会跟职场的现实吧,可能的确是我不够优秀,也可能是其他,说回我自身,在工作中今年应该是收获比较一般的一年,不能说没有,对原先不熟悉的业务的掌握程度有了比较大的提升,只是问题依旧存在,也挺难推动完全改变,只能尽自己所能,而这一点也主要是在团队中的定位因为前面说的一些原因,在前期不明确,限制比较大,虽然现在并没有完全解决,但也有了一些明显的改善,如果明年继续为这家公司服务,希望能有所突破,在人心沟通上的技巧总是比较反感,可也是不得不使用或者说被迫学习使用的,LD说我的对错观太强了,拗不过来,希望能有所改变。
长远的规划上没有什么明确的想法,很容易否定原来的各种想法,见识过各种现实的残酷,明白以前的一些想法不够全面或者比较幼稚,想有更上一层楼的机会,只是不希望是通过自己不认可的方式。比较能接受的是通过提升自己的技术和执行力,能够有更进一步的可能。
技术上是挺失败的去年跟前年还是能看一些书,学一些东西,今年少了很多,可能对原来比较熟悉的都有些遗忘,最近有在改善博客的内容,能更多的是系列化的,由浅入深,只是还很不完善,没什么规划,体系上也还不完整,不过还是以mybatis作为一个开头,后续新开始的内容或者原先写过的相关的都能做个整理,不再是想到啥就写点啥。最近的一个重点是在k8s上,学习方式跟一些特别优秀的人比起来还是会慢一些,不过也是自己的方法,能够更深入的理解整个体系,并讲解出来,可能会尝试采用视频的方式,对一些比较好的内容做尝试,看看会不会有比较好的数据和反馈,在22年还苟着周更的独立技术博客也算是比较稀有了的,其他站的发布也要勤一些,形成所谓的“矩阵”。
跑步减肥这个么还是比较惨,22年只跑了368公里,比21年少了85公里,有一些客观但很多是主观的原因,还是需要跑起来,只是减肥也很迫切,体重比较大跑步还是有些压力的,买了动感单车,就是时间稍长屁股痛这个目前比较难解决,骑还是每天在骑就是强度跟时间不太够,要保证每天30分钟的量可能会比较好。
加油吧,愿23年家人和自己都健康,顺遂。大家也一样。

+ Leetcode 124 二叉树中的最大路径和(Binary Tree Maximum Path Sum) 题解分析 + /2021/01/24/Leetcode-124-%E4%BA%8C%E5%8F%89%E6%A0%91%E4%B8%AD%E7%9A%84%E6%9C%80%E5%A4%A7%E8%B7%AF%E5%BE%84%E5%92%8C-Binary-Tree-Maximum-Path-Sum-%E9%A2%98%E8%A7%A3%E5%88%86%E6%9E%90/ + 题目介绍

A path in a binary tree is a sequence of nodes where each pair of adjacent nodes in the sequence has an edge connecting them. A node can only appear in the sequence at most once. Note that the path does not need to pass through the root.

+

The path sum of a path is the sum of the node’s values in the path.

+

Given the root of a binary tree, return the maximum path sum of any path.

+

路径 被定义为一条从树中任意节点出发,沿父节点-子节点连接,达到任意节点的序列。该路径 至少包含一个 节点,且不一定经过根节点。

+

路径和 是路径中各节点值的总和。

+

给你一个二叉树的根节点 root ,返回其 最大路径和

+

简要分析

其实这个题目容易被误解得比较简单:取左子树最大的,或者右子树最大的,或者两边加一下,仔细想想都不对。最大路径和其实有可能完全产生于左子树内部,或者右子树内部,跟左右子树的根甚至整棵树的根都没关系,这么说感觉不太容易理解,画个图

可以看到图里,其实最长路径和是由左边这个子树内部组成的,跟根节点还有右子树完全没关系;再想另一种情况,如果整棵树就是图中的左子树,那么最长路径和就是左子树加右子树加根节点了。所以不是我一开始想得那么简单,在代码实现中也需要一些技巧

+

代码

int ansNew = Integer.MIN_VALUE;
+public int maxPathSum(TreeNode root) {
+        maxSumNew(root);
+        return ansNew;
+    }
+    
+public int maxSumNew(TreeNode root) {
+    if (root == null) {
+        return 0;
+    }
+    // 这里是个简单的递归,就是去递归左右子树,但是这里其实有个概念,当这样处理时,其实相当于把子树的内部的最大路径和已经算出来了
+    int left = maxSumNew(root.left);
+    int right = maxSumNew(root.right);
+    // 这里前面我有点没想明白,但是看到 ansNew 的比较,其实相当于,返回的是三种情况里的最大值,一个是左子树+根,一个是右子树+根,一个是单独根节点,
+    // 这样这个递归的返回才会有意义,不然像原来的方法,它可能是跳着的,但是这种情况其实是借助于 ansNew 这个全局的最大值,因为原来我觉得要比较的是
+    // left, right, left + root , right + root, root, left + right + root 这些的最大值,这里是分成了两个阶段,left 跟 right 的最大值已经在上面的
+    // 调用过程中赋值给 ansNew 了    
+    int currentSum = Math.max(Math.max(root.val + left , root.val + right), root.val);
+    // 这边返回的是 currentSum,然后再用它跟 left + right + root 进行对比,然后再去更新 ans
+    // PS: 有个小点也是这边的破局点,就是这个 ansNew
+    int res = Math.max(left + right + root.val, currentSum);
+    ansNew = Math.max(res, ansNew);
+    return currentSum;
+}
+ +

这里非常重要的就是 ansNew 是最后的一个结果,而对于 maxSumNew 这个函数的返回值其实是需要包含了一个连续结果,因为要返回继续去算路径和,所以返回的是 currentSum,最终结果是 ansNew
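
拿一个小例子验证一下(示意,TreeNode 按力扣标准定义,方法放在 Solution 类里):树为 [-10, 9, 20, null, null, 15, 7],最大路径和产生在右子树内部。

public class MaxPathSumDemo {
    public static void main(String[] args) {
        TreeNode root = new TreeNode(-10);
        root.left = new TreeNode(9);
        root.right = new TreeNode(20);
        root.right.left = new TreeNode(15);
        root.right.right = new TreeNode(7);
        // 最大路径和是 15 + 20 + 7 = 42,跟根节点 -10 没关系
        System.out.println(new Solution().maxPathSum(root)); // 42
    }
}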

+

结果图

难得有个 100%,贴个图哈哈

]]>
- 生活 - 年终总结 + Java + leetcode + Binary Tree + java + Binary Tree - 生活 - 年终总结 - 2022 - 2023 - -
- - Disruptor 系列三 - /2022/09/25/Disruptor-%E7%B3%BB%E5%88%97%E4%B8%89/ - 原来一直有点被误导,
gatingSequences用来标识每个 processer 的操作位点,但是怎么记录更新有点搞不清楚
其实问题在于 gatingSequences 是个 Sequence 数组,首先要看下怎么加进去的,
可以看到是在 com.lmax.disruptor.RingBuffer#addGatingSequences 这个方法里添加
首先是 com.lmax.disruptor.dsl.Disruptor#handleEventsWith(com.lmax.disruptor.EventHandler<? super T>...)
然后执行 com.lmax.disruptor.dsl.Disruptor#createEventProcessors(com.lmax.disruptor.Sequence[], com.lmax.disruptor.EventHandler<? super T>[])

-
EventHandlerGroup<T> createEventProcessors(
-        final Sequence[] barrierSequences,
-        final EventHandler<? super T>[] eventHandlers)
-    {
-        checkNotStarted();
-
-        final Sequence[] processorSequences = new Sequence[eventHandlers.length];
-        final SequenceBarrier barrier = ringBuffer.newBarrier(barrierSequences);
-
-        for (int i = 0, eventHandlersLength = eventHandlers.length; i < eventHandlersLength; i++)
-        {
-            final EventHandler<? super T> eventHandler = eventHandlers[i];
-
-            // 这里将 handler 包装成一个 BatchEventProcessor
-            final BatchEventProcessor<T> batchEventProcessor =
-                new BatchEventProcessor<>(ringBuffer, barrier, eventHandler);
-
-            if (exceptionHandler != null)
-            {
-                batchEventProcessor.setExceptionHandler(exceptionHandler);
-            }
-
-            consumerRepository.add(batchEventProcessor, eventHandler, barrier);
-            processorSequences[i] = batchEventProcessor.getSequence();
-        }
-
-        updateGatingSequencesForNextInChain(barrierSequences, processorSequences);
-
-        return new EventHandlerGroup<>(this, consumerRepository, processorSequences);
-    }
- -

BatchEventProcessor 在类内有个定义 sequence

-
private final Sequence sequence = new Sequence(Sequencer.INITIAL_CURSOR_VALUE);
-

然后在上面循环中的这一句取出来

-
processorSequences[i] = batchEventProcessor.getSequence();
-

调用com.lmax.disruptor.dsl.Disruptor#updateGatingSequencesForNextInChain 方法

-
private void updateGatingSequencesForNextInChain(final Sequence[] barrierSequences, final Sequence[] processorSequences)
-    {
-        if (processorSequences.length > 0)
-        {
-            // 然后在这里添加
-            ringBuffer.addGatingSequences(processorSequences);
-            for (final Sequence barrierSequence : barrierSequences)
-            {
-                ringBuffer.removeGatingSequence(barrierSequence);
-            }
-            consumerRepository.unMarkEventProcessorsAsEndOfChain(barrierSequences);
-        }
-    }
- -

而如何更新则是在处理器 com.lmax.disruptor.BatchEventProcessor#run

-
public void run()
-    {
-        if (running.compareAndSet(IDLE, RUNNING))
-        {
-            sequenceBarrier.clearAlert();
-
-            notifyStart();
-            try
-            {
-                if (running.get() == RUNNING)
-                {
-                    processEvents();
-                }
-            }
-            finally
-            {
-                notifyShutdown();
-                running.set(IDLE);
-            }
-        }
-        else
-        {
-            // This is a little bit of guess work.  The running state could of changed to HALTED by
-            // this point.  However, Java does not have compareAndExchange which is the only way
-            // to get it exactly correct.
-            if (running.get() == RUNNING)
-            {
-                throw new IllegalStateException("Thread is already running");
-            }
-            else
-            {
-                earlyExit();
-            }
-        }
-    }
-

然后是

-
private void processEvents()
-    {
-        T event = null;
-        long nextSequence = sequence.get() + 1L;
-
-        while (true)
-        {
-            try
-            {
-                final long availableSequence = sequenceBarrier.waitFor(nextSequence);
-                if (batchStartAware != null)
-                {
-                    batchStartAware.onBatchStart(availableSequence - nextSequence + 1);
-                }
-
-                while (nextSequence <= availableSequence)
-                {
-                    event = dataProvider.get(nextSequence);
-                    eventHandler.onEvent(event, nextSequence, nextSequence == availableSequence);
-                    nextSequence++;
-                }
-                // 如果正常处理完,那就是会更新为 availableSequence,因为都处理好了
-                sequence.set(availableSequence);
-            }
-            catch (final TimeoutException e)
-            {
-                notifyTimeout(sequence.get());
-            }
-            catch (final AlertException ex)
-            {
-                if (running.get() != RUNNING)
-                {
-                    break;
-                }
-            }
-            catch (final Throwable ex)
-            {
-                handleEventException(ex, nextSequence, event);
-                // 如果是异常就只是 nextSequence
-                sequence.set(nextSequence);
-                nextSequence++;
-            }
-        }
-    }
-]]>
- - Java - - - Java - Disruptor + leetcode + java + Binary Tree + 二叉树 + 题解
- Leetcode 053 最大子序和 ( Maximum Subarray ) 题解分析 - /2021/11/28/Leetcode-053-%E6%9C%80%E5%A4%A7%E5%AD%90%E5%BA%8F%E5%92%8C-Maximum-Subarray-%E9%A2%98%E8%A7%A3%E5%88%86%E6%9E%90/ - 题目介绍

Given an integer array nums, find the contiguous subarray (containing at least one number) which has the largest sum and return its sum.

-

A subarray is a contiguous part of an array.

-

示例

Example 1:

+ Leetcode 1260 二维网格迁移 ( Shift 2D Grid *Easy* ) 题解分析 + /2022/07/22/Leetcode-1260-%E4%BA%8C%E7%BB%B4%E7%BD%91%E6%A0%BC%E8%BF%81%E7%A7%BB-Shift-2D-Grid-Easy-%E9%A2%98%E8%A7%A3%E5%88%86%E6%9E%90/ + 题目介绍

Given a 2D grid of size m x n and an integer k. You need to shift the grid k times.

+

In one shift operation:

+

Element at grid[i][j] moves to grid[i][j + 1].
Element at grid[i][n - 1] moves to grid[i + 1][0].
Element at grid[m - 1][n - 1] moves to grid[0][0].
Return the 2D grid after applying shift operation k times.

+

示例

Example 1:

-

Input: nums = [-2,1,-3,4,-1,2,1,-5,4]
Output: 6
Explanation: [4,-1,2,1] has the largest sum = 6.

+

Input: grid = [[1,2,3],[4,5,6],[7,8,9]], k = 1
Output: [[9,1,2],[3,4,5],[6,7,8]]

-

Example 2:

+

Example 2:

-

Input: nums = [1]
Output: 1

+

Input: grid = [[3,8,1,9],[19,7,2,5],[4,6,11,10],[12,0,21,13]], k = 4
Output: [[12,0,21,13],[3,8,1,9],[19,7,2,5],[4,6,11,10]]

-

Example 3:

-
-

Input: nums = [5,4,-1,7,8]
Output: 23

+

Example 3:

+

Input: grid = [[1,2,3],[4,5,6],[7,8,9]], k = 9
Output: [[1,2,3],[4,5,6],[7,8,9]]

-

说起来这个题其实非常有渊源,大学数据结构的第一个题就是这个,而最佳的算法就是传说中的 online 算法,就是遍历一次就完了,最基本的做法就是记下来所有的连续子数组,然后求出最大的那个。

-

代码

public int maxSubArray(int[] nums) {
-        int max = nums[0];
-        int sum = nums[0];
-        for (int i = 1; i < nums.length; i++) {
-            // 这里最重要的就是这一行了,其实就是如果前面的 sum 是小于 0 的,那么就不需要前面的 sum,反正加上了还不如不加大
-            sum = Math.max(nums[i], sum + nums[i]);
-            // max 是用来承载最大值的
-            max = Math.max(max, sum);
+

提示

  • m == grid.length
  • n == grid[i].length
  • 1 <= m <= 50
  • 1 <= n <= 50
  • -1000 <= grid[i][j] <= 1000
  • 0 <= k <= 100
+

解析

这个题主要是矩阵或者说数组的操作,并且题目要返回的是个 List,所以也不用原地操作,只需要找对位置就可以了,k 是多少就相当于让这个二维数组头尾衔接移动 k 个元素

+

代码

public List<List<Integer>> shiftGrid(int[][] grid, int k) {
+        // 行数
+        int m = grid.length;
+        // 列数
+        int n = grid[0].length;
+        // 偏移值,取下模
+        k = k % (m * n);
+        // 反向取下数量,因为我打算直接从头填充新的矩阵
+        /*
+         *    比如
+         *    1 2 3
+         *    4 5 6
+         *    7 8 9
+         *    需要变成
+         *    9 1 2
+         *    3 4 5
+         *    6 7 8
+         *    就要从 9 开始填充
+         */
+        int reverseK = m * n - k;
+        List<List<Integer>> matrix = new ArrayList<>();
+        // 这类就是两层循环
+        for (int i = 0; i < m; i++) {
+            List<Integer> line = new ArrayList<>();
+            for (int j = 0; j < n; j++) {
+                // 数量会随着循环迭代增长, 确认是第几个
+                int currentNum = reverseK + i * n +  (j + 1);
+                // 这里处理下到达矩阵末尾后减掉 m * n
+                if (currentNum > m * n) {
+                    currentNum -= m * n;
+                }
+                // 根据矩阵列数 n 算出在原来矩阵的位置
+                int last = (currentNum - 1) % n;
+                int passLine = (currentNum - 1) / n;
+
+                line.add(grid[passLine][last]);
+            }
+            matrix.add(line);
         }
-        return max;
-    }
+        return matrix;
+    }
]]>
+ +

结果数据


比较慢

+]]> Java leetcode @@ -3350,206 +3670,245 @@ Output: 0
-

有个别应用用的是这个

-
<dependency>
-    <groupId>com.101tec</groupId>
-    <artifactId>zkclient</artifactId>
-    <version>0.11</version>
-</dependency>
-

还有的应用是找不到相关的依赖,并且这些的使用没有个比较好的说明,为啥用前者,为啥用后者,有啥注意点,
首先在使用 2.6.5 的 alibaba 的 dubbo 的时候,只使用后者是会报错的,至于为啥会报错,其实就是这篇文章想说明的点
报错的内容其实很简单, 就是缺少这个 org.apache.curator.framework.CuratorFrameworkFactory
这个类看着像是依赖上面的配置,但是应该不需要两个配置一块用的,所以还是需要去看代码
通过找上面类被依赖的和 dubbo 连接注册中心相关的代码,看到了这段指点迷津的代码

-
@SPI("curator")
-public interface ZookeeperTransporter {
-
-    @Adaptive({Constants.CLIENT_KEY, Constants.TRANSPORTER_KEY})
-    ZookeeperClient connect(URL url);
-
-}
-

众所周知,dubbo 创造了叫自适应扩展点加载的神奇技术,这里的 adaptive 注解中的Constants.CLIENT_KEYConstants.TRANSPORTER_KEY 可以在配置 dubbo 的注册信息的时候进行配置,如果是通过 xml 配置的话,可以在 <dubbo:registry/> 这个 tag 中的以上两个 key 进行配置,
具体在 dubbo.xsd 中有描述

-
<xsd:element name="registry" type="registryType">
-        <xsd:annotation>
-            <xsd:documentation><![CDATA[ The registry config ]]></xsd:documentation>
-        </xsd:annotation>
-    </xsd:element>
- -


并且在 spi 的配置com.alibaba.dubbo.remoting.zookeeper.ZookeeperTransporter 中可以看到

-
zkclient=com.alibaba.dubbo.remoting.zookeeper.zkclient.ZkclientZookeeperTransporter
-curator=com.alibaba.dubbo.remoting.zookeeper.curator.CuratorZookeeperTransporter
-
-zkclient=com.alibaba.dubbo.remoting.zookeeper.zkclient.ZkclientZookeeperTransporter
-curator=com.alibaba.dubbo.remoting.zookeeper.curator.CuratorZookeeperTransporter
-
-zkclient=com.alibaba.dubbo.remoting.zookeeper.zkclient.ZkclientZookeeperTransporter
-curator=com.alibaba.dubbo.remoting.zookeeper.curator.CuratorZookeeperTransporter
-

而在上面的代码里默认的SPI 值是 curator,所以如果不配置,那就会报上面找不到类的问题,所以如果需要使用 zkclient 的,就需要在<dubbo:registry/> 配置中添加 client="zkclient"这个配置,所以有些地方还是需要懂一些更深层次的原理,但也不至于每个东西都要抠到每一行代码原理,除非就是专门做这一块的。
还有一点是发现有些应用是碰运气,刚好有个三方包把这个类带进来了,但是这个应用就没有单独配置这块,如果不了解或者后续忘了再来查问题就会很奇怪

+ headscale 添加节点 + /2023/07/09/headscale-%E6%B7%BB%E5%8A%A0%E8%8A%82%E7%82%B9/ + 添加节点

添加节点非常简单,比如 app store 或者官网可以下载 mac 的安装包,

+

安装包直接下载可以在这里,下载安装完后还需要做一些处理,才能让 Tailscale 使用 Headscale 作为控制服务器。当然,Headscale 已经给我们提供了详细的操作步骤,你只需要在浏览器中打开 URL:http://<HEADSCALE_PUB_IP>:<HEADSCALE_PUB_PORT>/apple,记得端口替换成自己的,就会看到这样的说明页

+

image

+

然后对于像我这样自己下载的客户端安装包,也就是standalone client,就可以用下面的命令

+

defaults write io.tailscale.ipn.macsys ControlURL http://<HEADSCALE_PUB_IP>:<HEADSCALE_PUB_PORT>

类似于 Windows 客户端需要写入注册表,就是把控制端的地址改成我们自己搭建的 headscale 的。设置完以后打开 tailscale 客户端,右键点击 login,就会弹出一个浏览器地址

+

image

+

按照这个里面的命令去 headscale 的机器上执行,注意要替换 namespace;对于最新的 headscale,namespace 已经废弃改成 user 了,这点要注意,其他客户端也同理。现在还有个好消息,安卓和 iOS 客户端也都可以用了,后面可以再介绍下局域网怎么部分打通和自建 derper。

]]>
- Java - Dubbo + headscale - Java - Dubbo + headscale
@@ -6071,120 +6101,73 @@ myusername ALL = resolv-file=/opt/homebrew/etc/dnsmasq.d/resolv.dnsmasq.conf
-

结果发现 dnsmasq 就起不来了,因为是 brew 服务的形式起来,发现日志也没有, dnsmasq 配置文件本身也没什么日志,这个是最讨厌的,网上搜了一圈也都没有, brew services 的服务如果启动状态是 error,并且服务本身没有日志的话就是一头雾水,并且对于 plist 来说,即使我手动加了标准输出和错误输出,brew services restart 的时候也是会被重新覆盖,
后来仔细看了下这个问题,发现它下面有这么一行配置

-
conf-dir=/opt/homebrew/etc/dnsmasq.d/,*.conf
-

想了一下发现这个问题其实很简单,dnsmasq 应该是不支持同一个配置文件被加载两次,
而我把 resolv 文件放在了同一个配置目录下,于是它被加载了两次,改掉目录就行了。但是目前看 dnsmasq 还不完全符合我的要求(也可能是我还没完全了解它的用法):我想要的是按特定的域名后缀来配置对应的 dns 服务器,这样就不太会被影响,可以试试 AdGuard 看

+ docker比一般多一点的初学者介绍二 + /2020/03/15/docker%E6%AF%94%E4%B8%80%E8%88%AC%E5%A4%9A%E4%B8%80%E7%82%B9%E7%9A%84%E5%88%9D%E5%AD%A6%E8%80%85%E4%BB%8B%E7%BB%8D%E4%BA%8C/ + 限制下 docker 的 cpu 使用率

这里我们开始玩一点有意思的,我们在容器里装下 vim 和 gcc,然后写这样一段 c 代码

+
#include <stdio.h>
+int main(void)
+{
+    int i = 0;
+    for(;;) i++;
+    return 0;
+}
+

就是一个最简单的死循环,然后在容器里跑起来

+
$ gcc 1.c 
+$ ./a.out
+

然后我们来看下系统资源占用(CPU)
Xs562iawhHyMxeO
上图是在容器里的,可以看到 cpu 已经 100%了
然后看看容器外面的
ecqH8XJ4k7rKhzu
可以看到一个核的 cpu 也被占满了,因为是个双核的机器,并且代码是单线程的
然后呢我们要做点啥
因为已经在这个 ubuntu 容器中装了 vim 和 gcc,考虑到国内的网络,所以我们先把这个容器 commit 一下,

+
docker commit -a "nick" -m "my ubuntu" f63c5607df06 my_ubuntu:v1
+

然后再运行起来

+
docker run -it --cpus=0.1 my_ubuntu:v1 bash
+


我们的代码跟可执行文件都还在,要的就是这个效果,然后再运行一下

结果是这个样子的,有点神奇是不,关键就在于 run 的时候的--cpus=0.1这个参数,它其实就是基于我前一篇说的 cgroup 技术,能将进程之间的cpu,内存等资源进行隔离

+

开始第一个 Dockerfile

上一面为了复用那个我装了 vim 跟 gcc 的容器,我把它提交到了本地,使用了docker commit命令,有点类似于 git 的 commit,但是这个不是个很好的操作方式,需要手动介入,这里更推荐使用 Dockerfile 来构建镜像

+
FROM ubuntu:latest
+MAINTAINER Nicksxs "nicksxs@hotmail.com"
+RUN  sed -i s@/archive.ubuntu.com/@/mirrors.aliyun.com/@g /etc/apt/sources.list
+RUN apt-get clean
+RUN apt-get update && apt install -y nginx
+RUN echo 'Hi, i am in container' \
+    > /usr/share/nginx/html/index.html
+EXPOSE 80
+

先解释下这是在干嘛,首先是这个From ubuntu:latest基于的 ubuntu 的最新版本的镜像,然后第二行是维护人的信息,第三四行么作为墙内人你懂的,把 ubuntu 的源换成阿里云的,不然就有的等了,然后就是装下 nginx,往默认的 nginx 的入口 html 文件里输入一行欢迎语,然后暴露 80 端口
然后我们使用sudo docker build -t="nicksxs/static_web" .命令来基于这个 Dockerfile 构建我们自己的镜像,过程中是这样的


可以看到图中,我的 Dockerfile 是 7 行,里面就执行了 7 步,并且每一步都有一个类似于容器 id 的层 id 出来,这里就是一个比较重要的东西,docker 在构建的时候其实是有这个层的概念,Dockerfile 里的每一行都会往上加一层,这里有还注意下命令后面的.,代表当前目录下会自行去寻找 Dockerfile 进行构建,构建完了之后我们再看下我们的本地镜像

我们自己的镜像出现啦
然后有个问题,如果这个构建中途报了错咋办呢,来试试看,我们把 nginx 改成随便的一个错误名,nginxx(不知道会不会运气好真的有这玩意),再来 build 一把

找不到 nginxx 包,是不是这个镜像就完全不能用呢,当然也不是,因为前面说到了,docker 是基于层去构建的,可以看到前面的 4 个 step 都没报错,那我们基于最后的成功步骤创建下容器看看
也就是sudo docker run -t -i bd26f991b6c8 /bin/bash
答案是可以的,只是没装成功 nginx

还有一点注意到没,前面的几个 step 都有一句 Using cache,说明 docker 在构建镜像的时候是有缓存的,这也更能说明 docker 是基于层去构建镜像,同样的底包,同样的步骤,这些层是可以被复用的,这就是 docker 的构建缓存,当然我们也可以在 build 的时候加上--no-cache去把构建缓存禁用掉。

]]>
- dns + Docker + 介绍 - dnsmasq + Docker + namespace + cgroup
- invert-binary-tree - /2015/06/22/invert-binary-tree/ - Invert a binary tree

-
     4
-   /   \
-  2     7
- / \   / \
-1   3 6   9
-
-

to

-
     4
-   /   \
-  7     2
- / \   / \
-9   6 3   1
-
-

Trivia:
This problem was inspired by this original tweet by Max Howell:

+ github 小技巧-更新 github host key + /2023/03/28/github-%E5%B0%8F%E6%8A%80%E5%B7%A7-%E6%9B%B4%E6%96%B0-github-host-key/ + 最近一次推送博客,发现报了个错推不上去,

+
WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!
+
+IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
+Someone could be eavesdropping on you right now (man-in-the-middle attack)!
+It is also possible that a host key has just been changed.
+

错误信息是这样,有点奇怪也没干啥,网上一搜发现是We updated our RSA SSH host key
简单翻一下就是

-

Google: 90% of our engineers use the software you wrote (Homebrew),
but you can’t invert a binary tree on a whiteboard so fuck off.

-
-
/**
- * Definition for a binary tree node.
- * struct TreeNode {
- *     int val;
- *     TreeNode *left;
- *     TreeNode *right;
- *     TreeNode(int x) : val(x), left(NULL), right(NULL) {}
- * };
- */
-class Solution {
-public:
-    TreeNode* invertTree(TreeNode* root) {
-        if(root == NULL) return root;
-        TreeNode* temp;
-        temp = invertTree(root->left);
-        root->left = invertTree(root->right);
-        root->right = temp;
-        return root;
-    }
-};
]]>
- - leetcode - - - leetcode - c++ - -
- - Leetcode 747 至少是其他数字两倍的最大数 ( Largest Number At Least Twice of Others *Easy* ) 题解分析 - /2022/10/02/Leetcode-747-%E8%87%B3%E5%B0%91%E6%98%AF%E5%85%B6%E4%BB%96%E6%95%B0%E5%AD%97%E4%B8%A4%E5%80%8D%E7%9A%84%E6%9C%80%E5%A4%A7%E6%95%B0-Largest-Number-At-Least-Twice-of-Others-Easy-%E9%A2%98%E8%A7%A3%E5%88%86%E6%9E%90/ - 题目介绍

You are given an integer array nums where the largest integer is unique.

-

Determine whether the largest element in the array is at least twice as much as every other number in the array. If it is, return the index of the largest element, or return -1 otherwise.
确认在数组中的最大数是否是其余任意数的两倍大及以上,如果是返回索引,如果不是返回-1

-

示例

Example 1:

-

Input: nums = [3,6,1,0]
Output: 1
Explanation: 6 is the largest integer.
For every other number in the array x, 6 is at least twice as big as x.
The index of value 6 is 1, so we return 1.

+

在3月24日协调世界时大约05:00时,出于谨慎,我们更换了用于保护 GitHub.com 的 Git 操作的 RSA SSH 主机密钥。我们这样做是为了保护我们的用户免受任何对手模仿 GitHub 或通过 SSH 窃听他们的 Git 操作的机会。此密钥不授予对 GitHub 基础设施或客户数据的访问权限。此更改仅影响通过使用 RSA 的 SSH 进行的 Git 操作。GitHub.com 和 HTTPS Git 操作的网络流量不受影响。

-

Example 2:

-

Input: nums = [1,2,3,4]
Output: -1
Explanation: 4 is less than twice the value of 3, so we return -1.

+

要解决也比较简单就是重置下 host key,

+
+

Host Key是服务器用来证明自己身份的一个永久性的非对称密钥

-

提示:

  • 2 <= nums.length <= 50
  • 0 <= nums[i] <= 100
  • The largest element in nums is unique.
-

简要解析

这个题是 Easy,题意也比较简单:找最大值,并且判断最大值是否是其他任意值的两倍及以上,其实就是找最大值跟次大值,比较一下就好了

-

代码

public int dominantIndex(int[] nums) {
-    int largest = Integer.MIN_VALUE;
-    int second = Integer.MIN_VALUE;
-    int largestIndex = -1;
-    for (int i = 0; i < nums.length; i++) {
-        // 如果有最大的就更新,同时更新最大值和第二大的
-        if (nums[i] > largest) {
-            second = largest;
-            largest = nums[i];
-            largestIndex = i;
-        } else if (nums[i] > second) {
-            // 没有超过最大的,但是比第二大的更大就更新第二大的
-            second = nums[i];
-        }
-    }
-
-    // 判断下是否符合题目要求,要是所有值的两倍及以上
-    if (largest >= 2 * second) {
-        return largestIndex;
-    } else {
-        return -1;
-    }
-}
-

通过图

第一次错了是把第二大的情况只考虑第一种,也有可能最大值完全没经过替换就变成最大值了

+

使用

+
ssh-keygen -R github.com
+

然后在首次建立连接的时候同意下就可以了

]]> - Java - leetcode + ssh + 技巧 - leetcode - java - 题解 + ssh + 端口转发 @@ -6200,110 +6183,6 @@ public: 博客,文章 - - minimum-size-subarray-sum-209 - /2016/10/11/minimum-size-subarray-sum-209/ - problem

Given an array of n positive integers and a positive integer s, find the minimal length of a subarray of which the sum ≥ s. If there isn’t one, return 0 instead.

-

For example, given the array [2,3,1,2,4,3] and s = 7,
the subarray [4,3] has the minimal length under the problem constraint.

-

题解

参考,滑动窗口,跟之前Data Structure课上的online算法有点像,链接

-

Code

class Solution {
-public:
-    int minSubArrayLen(int s, vector<int>& nums) {
-        int len = nums.size();
-        if(len == 0) return 0;
-        int minlen = INT_MAX;
-        int sum = 0;
-        
-        int left = 0;
-        int right = -1;
-        while(right < len)
-        {
-            while(sum < s && right < len)
-                sum += nums[++right];
-            if(sum >= s)
-            {
-                minlen = minlen < right - left + 1 ? minlen : right - left + 1;
-                sum -= nums[left++];
-            }
-        }
-        return minlen > len ? 0 : minlen;
-    }
-};
-]]>
- - leetcode - - - leetcode - c++ - -
- - mybatis 的 foreach 使用的注意点 - /2022/07/09/mybatis-%E7%9A%84-foreach-%E4%BD%BF%E7%94%A8%E7%9A%84%E6%B3%A8%E6%84%8F%E7%82%B9/ - mybatis 在作为轻量级 orm 框架,如果要使用类似于 in 查询的语句,除了直接替换字符串,还可以使用 foreach 标签
在mybatis的 dtd 文件中可以看到可以配置这些字段,

-
<!ELEMENT foreach (#PCDATA | include | trim | where | set | foreach | choose | if | bind)*>
-<!ATTLIST foreach
-collection CDATA #REQUIRED
-item CDATA #IMPLIED
-index CDATA #IMPLIED
-open CDATA #IMPLIED
-close CDATA #IMPLIED
-separator CDATA #IMPLIED
->
-

collection 表示需要使用 foreach 的集合,item 表示进行迭代的变量名,index 就是索引值,而 open 跟 close
代表拼接的起始和结束符号,一般就是左右括号,separator 则是每个 item 直接的分隔符

-

例如写了一个简单的 sql 查询

-
<select id="search" parameterType="list" resultMap="StudentMap">
-    select * from student
-    <where>
-        id in
-        <foreach collection="list" item="item" open="(" close=")" separator=",">
-            #{item}
-        </foreach>
-    </where>
-</select>
-

这里就发现了一个问题,collection 对应的这个值,如果传入的参数是个 HashMap,collection 的这个值就是以此作为
key 从这个 HashMap 获取对应的集合,但是这里有几个特殊的小技巧,
在上面的这个方法对应的接口方法定义中

-
public List<Student> search(List<Long> userIds);
-

我是这么定义的,而 collection 的值是list,这里就有一点不能理解了,但其实是 mybatis 考虑到使用的方便性,
帮我们做了一点点小转换,我们翻一下 mybatis 的DefaultSqlSession 中的代码可以看到

-
@Override
-public <E> List<E> selectList(String statement, Object parameter, RowBounds rowBounds) {
-  try {
-    MappedStatement ms = configuration.getMappedStatement(statement);
-    return executor.query(ms, wrapCollection(parameter), rowBounds, Executor.NO_RESULT_HANDLER);
-  } catch (Exception e) {
-    throw ExceptionFactory.wrapException("Error querying database.  Cause: " + e, e);
-  } finally {
-    ErrorContext.instance().reset();
-  }
-}
-// 就是在这帮我们做了转换
-  private Object wrapCollection(final Object object) {
-  if (object instanceof Collection) {
-    StrictMap<Object> map = new StrictMap<Object>();
-    map.put("collection", object);
-    if (object instanceof List) {
-      // 如果类型是list 就会转成以 list 为 key 的 map
-      map.put("list", object);
-    }
-    return map;
-  } else if (object != null && object.getClass().isArray()) {
-    StrictMap<Object> map = new StrictMap<Object>();
-    map.put("array", object);
-    return map;
-  }
-  return object;
-  }
]]>
- - Java - Mybatis - Mysql - - - Java - Mysql - Mybatis - -
mybatis 的 $ 和 # 是有啥区别 /2020/09/06/mybatis-%E7%9A%84-%E5%92%8C-%E6%98%AF%E6%9C%89%E5%95%A5%E5%8C%BA%E5%88%AB/ @@ -6400,150 +6279,574 @@ public class DynamicSqlSource implements SqlSource { - hexo 配置系列-接入Algolia搜索 - /2023/04/02/hexo-%E9%85%8D%E7%BD%AE%E7%B3%BB%E5%88%97-%E6%8E%A5%E5%85%A5Algolia%E6%90%9C%E7%B4%A2/ - 博客之前使用的是 local search,最开始感觉使用体验还不错,速度也不慢,最近自己搜了下觉得效果差了很多,不知道是啥原因,所以接入有 next 主题支持的 Algolia 搜索,next 主题的文档已经介绍的很清楚了,这边就记录下,
首先要去 Algolia 开通下账户,创建一个索引

创建好后要去找一下 api key 的配置,这个跟 next 主题的说明已经有些不一样了
在设置里可以找到

这里默认会有两个 key

一个是 search only,一个是 admin key,需要再创建一个自定义 key
这个 key 需要有这些权限,称为 High-privilege API key, 后面有用

然后就是到博客目录下安装

-
cd hexo-site
-npm install hexo-algolia
-

然后在 hexo 站点配置中添加

-
algolia:
-  applicationID: "Application ID"
-  apiKey: "Search-only API key"
-  indexName: "indexName"
-

包括应用 Id,只搜索的 api key(默认给创建好的那个),indexName 就是最开始创建的 index 名,

-
export HEXO_ALGOLIA_INDEXING_KEY=High-privilege API key # Use Git Bash
-# set HEXO_ALGOLIA_INDEXING_KEY=High-privilege API key # Use Windows command line
-hexo clean
-hexo algolia
-

然后再到 next 配置中开启 algolia_search

-
# Algolia Search
-algolia_search:
-  enable: true
-  hits:
-    per_page: 10
-

搜索的界面其实跟 local 的差不多,就是搜索效果会好一些

也推荐可以搜搜过往的内容,已经左边有个热度的,做了个按阅读量排序的榜单。

-]]>
- - hexo - 技巧 - - - hexo - -
- - headscale 添加节点 - /2023/07/09/headscale-%E6%B7%BB%E5%8A%A0%E8%8A%82%E7%82%B9/ - 添加节点

添加节点非常简单,比如 app store 或者官网可以下载 mac 的安装包,

-

安装包直接下载可以在这里,下载安装完后还需要做一些处理,才能让 Tailscale 使用 Headscale 作为控制服务器。当然,Headscale 已经给我们提供了详细的操作步骤,你只需要在浏览器中打开 URL:http://<HEADSCALE_PUB_IP>:<HEADSCALE_PUB_PORT>/apple,记得端口替换成自己的,就会看到这样的说明页

-

image

-

然后对于像我这样自己下载的客户端安装包,也就是standalone client,就可以用下面的命令

-

defaults write io.tailscale.ipn.macsys ControlURL http://<HEADSCALE_PUB_IP>:<HEADSCALE_PUB_PORT> 类似于 Windows 客户端需要写入注册表,就是把控制端的地址改成了我们自己搭建的 headscale 的,设置完以后就打开 tailscale 客户端右键点击 login,就会弹出一个浏览器地址

-

image

-

按照这个里面的命令去 headscale 的机器上执行,注意要替换 namespace,对于最新的 headscale 已经把 namespace 废弃改成 user 了,这点要注意了,其他客户端也同理,现在还有个好消息,安卓和 iOS 客户端也已经都可以用了,后面可以在介绍下局域网怎么部分打通和自建 derper。

+ minimum-size-subarray-sum-209 + /2016/10/11/minimum-size-subarray-sum-209/ + problem

Given an array of n positive integers and a positive integer s, find the minimal length of a subarray of which the sum ≥ s. If there isn’t one, return 0 instead.

+

For example, given the array [2,3,1,2,4,3] and s = 7,
the subarray [4,3] has the minimal length under the problem constraint.

+

题解

参考,滑动窗口,跟之前Data Structure课上的online算法有点像,链接

+

Code

class Solution {
+public:
+    int minSubArrayLen(int s, vector<int>& nums) {
+        int len = nums.size();
+        if(len == 0) return 0;
+        int minlen = INT_MAX;
+        int sum = 0;
+        
+        int left = 0;
+        int right = -1;
+        while(right < len)
+        {
+            while(sum < s && right < len)
+                sum += nums[++right];
+            if(sum >= s)
+            {
+                minlen = minlen < right - left + 1 ? minlen : right - left + 1;
+                sum -= nums[left++];
+            }
+        }
+        return minlen > len ? 0 : minlen;
+    }
+};
]]>
- headscale + leetcode - headscale + leetcode + c++
- java 中发起 http 请求时证书问题解决记录 - /2023/07/29/java-%E4%B8%AD%E5%8F%91%E8%B5%B7-http-%E8%AF%B7%E6%B1%82%E6%97%B6%E8%AF%81%E4%B9%A6%E9%97%AE%E9%A2%98%E8%A7%A3%E5%86%B3%E8%AE%B0%E5%BD%95/ - 再一次环境部署是发现了个问题,就是在请求微信 https 请求的时候,出现了个错误
No appropriate protocol (protocol is disabled or cipher suites are inappropriate)
一开始以为是环境问题,从 oracle 的 jdk 换成了基于 openjdk 的底包,没有 javax 的关系,
完整的提示包含了 javax 的异常
java.lang.RuntimeException: javax.net.ssl.SSLHandshakeException: No appropriate protocol (protocol is disabled or cipher suites are inappropriate)
后面再看了下,是不是也可能是证书的问题,然后就去找了下是不是证书相关的,
可以看到在 /usr/lib/jvm/java-8-openjdk-amd64/jre/lib/security 路径下的 java.security
jdk.tls.disabledAlgorithms=SSLv3, TLSv1, TLSv1.1, RC4, DES, MD5withRSA,
而正好在我们代码里 createSocketFactory 的时候使用了 TLSv1 这个证书协议

-
SSLContext sslContext = SSLContext.getInstance("TLS");
-sslContext.init(kmf.getKeyManagers(), null, new SecureRandom());
-return new SSLConnectionSocketFactory(sslContext, new String[]{"TLSv1"}, null, new DefaultHostnameVerifier());
-

所以就有两种方案,一个是使用更新版本的 TLS 协议,另一个就是换回较老的 jdk 小版本。这也说明即使都是 jdk8,不同小版本的差异还是会有些影响;有的时候对于这些错误还是需要更深入地学习,不能一概认为是用了 oracle jdk 还是 openjdk 的问题,不同的错误需要仔细确认具体原因。
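
对应第一种方案,一个最小的修改示意(假设服务端支持 TLSv1.2,kmf 等沿用上面代码里的上下文):

// 示意:把协议换成没有被 jdk.tls.disabledAlgorithms 禁用的 TLSv1.2
SSLContext sslContext = SSLContext.getInstance("TLSv1.2");
sslContext.init(kmf.getKeyManagers(), null, new SecureRandom());
return new SSLConnectionSocketFactory(sslContext, new String[]{"TLSv1.2"}, null, new DefaultHostnameVerifier());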

-]]>
+ mybatis 的 foreach 使用的注意点 + /2022/07/09/mybatis-%E7%9A%84-foreach-%E4%BD%BF%E7%94%A8%E7%9A%84%E6%B3%A8%E6%84%8F%E7%82%B9/ + mybatis 在作为轻量级 orm 框架,如果要使用类似于 in 查询的语句,除了直接替换字符串,还可以使用 foreach 标签
在mybatis的 dtd 文件中可以看到可以配置这些字段,

+
<!ELEMENT foreach (#PCDATA | include | trim | where | set | foreach | choose | if | bind)*>
+<!ATTLIST foreach
+collection CDATA #REQUIRED
+item CDATA #IMPLIED
+index CDATA #IMPLIED
+open CDATA #IMPLIED
+close CDATA #IMPLIED
+separator CDATA #IMPLIED
+>
+

collection 表示需要使用 foreach 的集合,item 表示进行迭代的变量名,index 就是索引值,而 open 跟 close
代表拼接的起始和结束符号,一般就是左右括号,separator 则是每个 item 直接的分隔符

+

例如写了一个简单的 sql 查询

+
<select id="search" parameterType="list" resultMap="StudentMap">
+    select * from student
+    <where>
+        id in
+        <foreach collection="list" item="item" open="(" close=")" separator=",">
+            #{item}
+        </foreach>
+    </where>
+</select>
+

这里就发现了一个问题,collection 对应的这个值,如果传入的参数是个 HashMap,collection 的这个值就是以此作为
key 从这个 HashMap 获取对应的集合,但是这里有几个特殊的小技巧,
在上面的这个方法对应的接口方法定义中

+
public List<Student> search(List<Long> userIds);
+

我是这么定义的,而 collection 的值是list,这里就有一点不能理解了,但其实是 mybatis 考虑到使用的方便性,
帮我们做了一点点小转换,我们翻一下 mybatis 的DefaultSqlSession 中的代码可以看到

+
@Override
+public <E> List<E> selectList(String statement, Object parameter, RowBounds rowBounds) {
+  try {
+    MappedStatement ms = configuration.getMappedStatement(statement);
+    return executor.query(ms, wrapCollection(parameter), rowBounds, Executor.NO_RESULT_HANDLER);
+  } catch (Exception e) {
+    throw ExceptionFactory.wrapException("Error querying database.  Cause: " + e, e);
+  } finally {
+    ErrorContext.instance().reset();
+  }
+}
+// 就是在这帮我们做了转换
+  private Object wrapCollection(final Object object) {
+  if (object instanceof Collection) {
+    StrictMap<Object> map = new StrictMap<Object>();
+    map.put("collection", object);
+    if (object instanceof List) {
+      // 如果类型是list 就会转成以 list 为 key 的 map
+      map.put("list", object);
+    }
+    return map;
+  } else if (object != null && object.getClass().isArray()) {
+    StrictMap<Object> map = new StrictMap<Object>();
+    map.put("array", object);
+    return map;
+  }
+  return object;
+  }
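
结合这段转换逻辑,可以用一小段自包含的示意代码验证这个转换(模仿 wrapCollection 的行为,并非 mybatis 源码):

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class WrapCollectionDemo {
    // 模仿 DefaultSqlSession#wrapCollection 对 List 参数的处理(示意)
    static Object wrapCollection(Object object) {
        if (object instanceof List) {
            Map<String, Object> map = new HashMap<>();
            map.put("collection", object);
            map.put("list", object);
            return map;
        }
        return object;
    }

    public static void main(String[] args) {
        List<Long> userIds = Arrays.asList(1L, 2L, 3L);
        Map<?, ?> wrapped = (Map<?, ?>) wrapCollection(userIds);
        // 所以 mapper XML 里 collection="list" 拿到的就是传入的那个 List
        System.out.println(wrapped.get("list")); // [1, 2, 3]
    }
}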
]]>
- java + Java + Mybatis + Mysql - java + Java + Mysql + Mybatis
- github 小技巧-更新 github host key - /2023/03/28/github-%E5%B0%8F%E6%8A%80%E5%B7%A7-%E6%9B%B4%E6%96%B0-github-host-key/ - 最近一次推送博客,发现报了个错推不上去,

-
WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!
+    mybatis 的缓存是怎么回事
+    /2020/10/03/mybatis-%E7%9A%84%E7%BC%93%E5%AD%98%E6%98%AF%E6%80%8E%E4%B9%88%E5%9B%9E%E4%BA%8B/
+    Java 真的是任何一个中间件,比较常用的那种,都有很多内容值得深挖,比如这个缓存,慢慢有过一些感悟,比如如何提升性能,缓存无疑是一大重要手段,最底层开始 CPU 就有缓存,而且又小又贵,再往上一点内存一般作为硬盘存储在运行时的存储,一般在代码里也会用内存作为一些本地缓存,譬如数据库,像 mysql 这种也是有innodb_buffer_pool来提升查询效率,本质上理解就是用更快的存储作为相对慢存储的缓存,减少查询直接访问较慢的存储,并且这个都是相对的,比起 cpu 的缓存,那内存也是渣,但是与普通机械硬盘相比,那也是两个次元的水平。

+

闲扯这么多来说说 mybatis 的缓存,mybatis 一般作为一个轻量级的 orm 使用,相对应的就是比较重量级的 hibernate,不过不在这次讨论范围,上一次是主要讲了 mybatis 在解析 sql 过程中,对于两种占位符的不同替换实现策略,这次主要聊下 mybatis 的缓存,前面其实得了解下前置的东西,比如 sqlsession,先当做我们知道 sqlsession 是个什么玩意,可能或多或少的知道 mybatis 是有两级缓存,

+

一级缓存

第一级的缓存是在 BaseExecutor 中的 PerpetualCache,它是个最基本的缓存实现类,使用了 HashMap 实现缓存功能,代码其实没几十行

+
public class PerpetualCache implements Cache {
 
-IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
-Someone could be eavesdropping on you right now (man-in-the-middle attack)!
-It is also possible that a host key has just been changed.
-

错误信息是这样,有点奇怪也没干啥,网上一搜发现是We updated our RSA SSH host key
简单翻一下就是

-
-

在3月24日协调世界时大约05:00时,出于谨慎,我们更换了用于保护 GitHub.com 的 Git 操作的 RSA SSH 主机密钥。我们这样做是为了保护我们的用户免受任何对手模仿 GitHub 或通过 SSH 窃听他们的 Git 操作的机会。此密钥不授予对 GitHub 基础设施或客户数据的访问权限。此更改仅影响通过使用 RSA 的 SSH 进行的 Git 操作。GitHub.com 和 HTTPS Git 操作的网络流量不受影响。

-
-

要解决也比较简单就是重置下 host key,

-
-

Host Key是服务器用来证明自己身份的一个永久性的非对称密钥

-
-

使用

-
ssh-keygen -R github.com
-

然后在首次建立连接的时候同意下就可以了

-]]>
+ private final String id; + + private final Map<Object, Object> cache = new HashMap<>(); + + public PerpetualCache(String id) { + this.id = id; + } + + @Override + public String getId() { + return id; + } + + @Override + public int getSize() { + return cache.size(); + } + + @Override + public void putObject(Object key, Object value) { + cache.put(key, value); + } + + @Override + public Object getObject(Object key) { + return cache.get(key); + } + + @Override + public Object removeObject(Object key) { + return cache.remove(key); + } + + @Override + public void clear() { + cache.clear(); + } + + @Override + public boolean equals(Object o) { + if (getId() == null) { + throw new CacheException("Cache instances require an ID."); + } + if (this == o) { + return true; + } + if (!(o instanceof Cache)) { + return false; + } + + Cache otherCache = (Cache) o; + return getId().equals(otherCache.getId()); + } + + @Override + public int hashCode() { + if (getId() == null) { + throw new CacheException("Cache instances require an ID."); + } + return getId().hashCode(); + } + +}
+

可以看一下BaseExecutor 的构造函数

+
protected BaseExecutor(Configuration configuration, Transaction transaction) {
+    this.transaction = transaction;
+    this.deferredLoads = new ConcurrentLinkedQueue<>();
+    this.localCache = new PerpetualCache("LocalCache");
+    this.localOutputParameterCache = new PerpetualCache("LocalOutputParameterCache");
+    this.closed = false;
+    this.configuration = configuration;
+    this.wrapper = this;
+  }
+

就是把 PerpetualCache 作为 localCache,然后怎么使用我们简单看一下,BaseExecutor 的查询首先是调用这个函数

+
@Override
+  public <E> List<E> query(MappedStatement ms, Object parameter, RowBounds rowBounds, ResultHandler resultHandler) throws SQLException {
+    BoundSql boundSql = ms.getBoundSql(parameter);
+    CacheKey key = createCacheKey(ms, parameter, rowBounds, boundSql);
+    return query(ms, parameter, rowBounds, resultHandler, key, boundSql);
+  }
+

可以看到首先是调用了 createCacheKey 方法,这个方法呢,先不看怎么写的,如果我们自己要实现这么个缓存,首先这个缓存 key 的设计也是个问题,如果是以表名加主键作为 key,那么分页查询,或者没有主键的时候就不行,来看看 mybatis 是怎么设计的

+
@Override
+  public CacheKey createCacheKey(MappedStatement ms, Object parameterObject, RowBounds rowBounds, BoundSql boundSql) {
+    if (closed) {
+      throw new ExecutorException("Executor was closed.");
+    }
+    CacheKey cacheKey = new CacheKey();
+    cacheKey.update(ms.getId());
+    cacheKey.update(rowBounds.getOffset());
+    cacheKey.update(rowBounds.getLimit());
+    cacheKey.update(boundSql.getSql());
+    List<ParameterMapping> parameterMappings = boundSql.getParameterMappings();
+    TypeHandlerRegistry typeHandlerRegistry = ms.getConfiguration().getTypeHandlerRegistry();
+    // mimic DefaultParameterHandler logic
+    for (ParameterMapping parameterMapping : parameterMappings) {
+      if (parameterMapping.getMode() != ParameterMode.OUT) {
+        Object value;
+        String propertyName = parameterMapping.getProperty();
+        if (boundSql.hasAdditionalParameter(propertyName)) {
+          value = boundSql.getAdditionalParameter(propertyName);
+        } else if (parameterObject == null) {
+          value = null;
+        } else if (typeHandlerRegistry.hasTypeHandler(parameterObject.getClass())) {
+          value = parameterObject;
+        } else {
+          MetaObject metaObject = configuration.newMetaObject(parameterObject);
+          value = metaObject.getValue(propertyName);
+        }
+        cacheKey.update(value);
+      }
+    }
+    if (configuration.getEnvironment() != null) {
+      // issue #176
+      cacheKey.update(configuration.getEnvironment().getId());
+    }
+    return cacheKey;
+  }
+
+

首先需要 id,这个 id 是 mapper 里方法的 id, 然后是偏移量跟返回行数,再就是 sql,然后是参数,基本上是会有影响的都加进去了,在这个 update 里面

+
public void update(Object object) {
+    int baseHashCode = object == null ? 1 : ArrayUtil.hashCode(object);
+
+    count++;
+    checksum += baseHashCode;
+    baseHashCode *= count;
+
+    hashcode = multiplier * hashcode + baseHashCode;
+
+    updateList.add(object);
+  }
+

其实是一个 hash 转换,具体不纠结,就是提高特异性,然后回来就是继续调用 query

+
@Override
+  public <E> List<E> query(MappedStatement ms, Object parameter, RowBounds rowBounds, ResultHandler resultHandler, CacheKey key, BoundSql boundSql) throws SQLException {
+    ErrorContext.instance().resource(ms.getResource()).activity("executing a query").object(ms.getId());
+    if (closed) {
+      throw new ExecutorException("Executor was closed.");
+    }
+    if (queryStack == 0 && ms.isFlushCacheRequired()) {
+      clearLocalCache();
+    }
+    List<E> list;
+    try {
+      queryStack++;
+      list = resultHandler == null ? (List<E>) localCache.getObject(key) : null;
+      if (list != null) {
+        handleLocallyCachedOutputParameters(ms, key, parameter, boundSql);
+      } else {
+        list = queryFromDatabase(ms, parameter, rowBounds, resultHandler, key, boundSql);
+      }
+    } finally {
+      queryStack--;
+    }
+    if (queryStack == 0) {
+      for (DeferredLoad deferredLoad : deferredLoads) {
+        deferredLoad.load();
+      }
+      // issue #601
+      deferredLoads.clear();
+      if (configuration.getLocalCacheScope() == LocalCacheScope.STATEMENT) {
+        // issue #482
+        clearLocalCache();
+      }
+    }
+    return list;
+  }
+

可以看到是先从 localCache 里取,取不到再 queryFromDatabase,其实比较简单,这是一级缓存,考虑到 sqlsession 跟 BaseExecutor 的关系,其实是随着 sqlsession 来保证这个缓存不会出现脏数据幻读的情况,当然事务相关的后面可能再单独聊。
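
用一小段示意代码感受下一级缓存的作用域(示意性质,sqlSessionFactory、StudentMapper、selectById 都是假设存在的):

// 一级缓存跟着 SqlSession 走:同一个 session 内的两次相同查询,第二次直接命中 localCache
try (SqlSession session = sqlSessionFactory.openSession()) {
    StudentMapper mapper = session.getMapper(StudentMapper.class);
    Student first = mapper.selectById(1L);  // 第一次:queryFromDatabase,结果写入 localCache
    Student second = mapper.selectById(1L); // 第二次:CacheKey 相同,直接从 localCache 取
    System.out.println(first == second);    // 通常为 true,拿到的是同一个缓存对象
}
// 换一个新的 SqlSession,对应新的 BaseExecutor 和新的 localCache,会再查一次数据库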

+

二级缓存

其实这个一级二级的叫法有点反过来:查询时先走的是二级缓存;当然二级缓存需要配置开启,默认不开,
需要通过

+
<setting name="cacheEnabled" value="true"/>
+

来开启,然后我们的查询就会走到

+
public class CachingExecutor implements Executor {
+
+  private final Executor delegate;
+  private final TransactionalCacheManager tcm = new TransactionalCacheManager();
+

这个 Executor 中,这里我贴了类里面的成员,可以发现没有一个 Cache 类型的字段,这就是一个特点了,往下看查询过程

+
@Override
+  public <E> List<E> query(MappedStatement ms, Object parameterObject, RowBounds rowBounds, ResultHandler resultHandler) throws SQLException {
+    BoundSql boundSql = ms.getBoundSql(parameterObject);
+    CacheKey key = createCacheKey(ms, parameterObject, rowBounds, boundSql);
+    return query(ms, parameterObject, rowBounds, resultHandler, key, boundSql);
+  }
+
+  @Override
+  public <E> List<E> query(MappedStatement ms, Object parameterObject, RowBounds rowBounds, ResultHandler resultHandler, CacheKey key, BoundSql boundSql)
+      throws SQLException {
+    Cache cache = ms.getCache();
+    if (cache != null) {
+      flushCacheIfRequired(ms);
+      if (ms.isUseCache() && resultHandler == null) {
+        ensureNoOutParams(ms, boundSql);
+        @SuppressWarnings("unchecked")
+        List<E> list = (List<E>) tcm.getObject(cache, key);
+        if (list == null) {
+          list = delegate.query(ms, parameterObject, rowBounds, resultHandler, key, boundSql);
+          tcm.putObject(cache, key, list); // issue #578 and #116
+        }
+        return list;
+      }
+    }
+    return delegate.query(ms, parameterObject, rowBounds, resultHandler, key, boundSql);
+  }
+

看到没,其实缓存是从 tcm 这个成员变量里取,而这个是什么呢,事务性缓存(直译下),因为这个其实是用 MappedStatement 里的 Cache 作为key 从 tcm 的 map 取出来的

+
public class TransactionalCacheManager {
+
+  private final Map<Cache, TransactionalCache> transactionalCaches = new HashMap<>();
+

MappedStatement是被全局使用的,所以其实二级缓存是跟着 mapper 的 namespace 走的,可以被多个 CachingExecutor 获取到,就会出现线程安全问题,线程安全问题可以用SynchronizedCache来解决,就是加锁,但是对于事务中的脏读,使用了TransactionalCache来解决这个问题,

+
public class TransactionalCache implements Cache {
+
+  private static final Log log = LogFactory.getLog(TransactionalCache.class);
+
+  private final Cache delegate;
+  private boolean clearOnCommit;
+  private final Map<Object, Object> entriesToAddOnCommit;
+  private final Set<Object> entriesMissedInCache;
+

在事务还没提交的时候,会把中间状态的数据放在 entriesToAddOnCommit 中,只有在提交后会放进共享缓存中,

+
public void commit() {
+    if (clearOnCommit) {
+      delegate.clear();
+    }
+    flushPendingEntries();
+    reset();
+  }
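
二级缓存的事务隔离可以用一段伪时序示意(示意性质,sqlSessionFactory、StudentMapper 同样是假设的,且假设该 namespace 已开启二级缓存):

// TransactionalCache 的效果:commit 之前,查询结果只暂存在 entriesToAddOnCommit
SqlSession session1 = sqlSessionFactory.openSession();
SqlSession session2 = sqlSessionFactory.openSession();
try {
    session1.getMapper(StudentMapper.class).selectById(1L); // 结果暂存在 session1 的 TransactionalCache
    session2.getMapper(StudentMapper.class).selectById(1L); // session2 命不中共享缓存,仍然查库,避免脏读
    session1.commit(); // flushPendingEntries,暂存数据真正进入共享的二级缓存
    session2.getMapper(StudentMapper.class).selectById(1L); // 这之后才可能命中二级缓存
} finally {
    session1.close();
    session2.close();
}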
]]>
- ssh - 技巧 + Java + Mybatis + Spring + Mybatis + 缓存 + Mybatis - ssh - 端口转发 + Java + Mysql + Mybatis + 缓存
- mybatis系列-connection连接池解析 - /2023/02/19/mybatis%E7%B3%BB%E5%88%97-connection%E8%BF%9E%E6%8E%A5%E6%B1%A0%E8%A7%A3%E6%9E%90/ - 连接池主要是两个逻辑,首先是获取连接的逻辑,结合代码来讲一讲

-
private PooledConnection popConnection(String username, String password) throws SQLException {
-    boolean countedWait = false;
-    PooledConnection conn = null;
-    long t = System.currentTimeMillis();
-    int localBadConnectionCount = 0;
-
-    // 循环获取连接
-    while (conn == null) {
-      // 加锁
-      lock.lock();
-      try {
-        // 如果闲置的连接列表不为空
+    hexo 配置系列-接入Algolia搜索
+    /2023/04/02/hexo-%E9%85%8D%E7%BD%AE%E7%B3%BB%E5%88%97-%E6%8E%A5%E5%85%A5Algolia%E6%90%9C%E7%B4%A2/
+    博客之前使用的是 local search,最开始感觉使用体验还不错,速度也不慢,最近自己搜了下觉得效果差了很多,不知道是啥原因,所以接入有 next 主题支持的 Algolia 搜索,next 主题的文档已经介绍的很清楚了,这边就记录下,
首先要去 Algolia 开通下账户,创建一个索引

创建好后要去找一下 api key 的配置,这个跟 next 主题的说明已经有些不一样了
在设置里可以找到

这里默认会有两个 key

一个是 search only,一个是 admin key,需要再创建一个自定义 key
这个 key 需要有这些权限,称为 High-privilege API key, 后面有用

然后就是到博客目录下安装

+
cd hexo-site
+npm install hexo-algolia
+

然后在 hexo 站点配置中添加

+
algolia:
+  applicationID: "Application ID"
+  apiKey: "Search-only API key"
+  indexName: "indexName"
+

包括应用 Id,只搜索的 api key(默认给创建好的那个),indexName 就是最开始创建的 index 名,

+
export HEXO_ALGOLIA_INDEXING_KEY=High-privilege API key # Use Git Bash
+# set HEXO_ALGOLIA_INDEXING_KEY=High-privilege API key # Use Windows command line
+hexo clean
+hexo algolia
+

然后再到 next 配置中开启 algolia_search

+
# Algolia Search
+algolia_search:
+  enable: true
+  hits:
+    per_page: 10
+

搜索的界面其实跟 local 的差不多,就是搜索效果会好一些

也推荐可以搜搜过往的内容,以及左边有个热度榜,做了个按阅读量排序的榜单。

+]]>
+ + hexo + 技巧 + + + hexo + + + + mybatis系列-dataSource解析 + /2023/01/08/mybatis%E7%B3%BB%E5%88%97-dataSource%E8%A7%A3%E6%9E%90/ + DataSource 作为数据库查询的最重要的数据源,在 mybatis 中也展开来说下
首先是解析的过程

+
SqlSessionFactory sqlSessionFactory = new SqlSessionFactoryBuilder().build(inputStream);
+ +

在构建 SqlSessionFactory 也就是 DefaultSqlSessionFactory 的时候,

+
public SqlSessionFactory build(InputStream inputStream) {
+    return build(inputStream, null, null);
+  }
+public SqlSessionFactory build(InputStream inputStream, String environment, Properties properties) {
+    try {
+      XMLConfigBuilder parser = new XMLConfigBuilder(inputStream, environment, properties);
+      return build(parser.parse());
+    } catch (Exception e) {
+      throw ExceptionFactory.wrapException("Error building SqlSession.", e);
+    } finally {
+      ErrorContext.instance().reset();
+      try {
+      	if (inputStream != null) {
+      	  inputStream.close();
+      	}
+      } catch (IOException e) {
+        // Intentionally ignore. Prefer previous error.
+      }
+    }
+  }
+

前面也说过,就是把 mybatis-config.xml 解析成 Configuration

+
public Configuration parse() {
+  if (parsed) {
+    throw new BuilderException("Each XMLConfigBuilder can only be used once.");
+  }
+  parsed = true;
+  parseConfiguration(parser.evalNode("/configuration"));
+  return configuration;
+}
+private void parseConfiguration(XNode root) {
+  try {
+    // issue #117 read properties first
+    propertiesElement(root.evalNode("properties"));
+    Properties settings = settingsAsProperties(root.evalNode("settings"));
+    loadCustomVfs(settings);
+    loadCustomLogImpl(settings);
+    typeAliasesElement(root.evalNode("typeAliases"));
+    pluginElement(root.evalNode("plugins"));
+    objectFactoryElement(root.evalNode("objectFactory"));
+    objectWrapperFactoryElement(root.evalNode("objectWrapperFactory"));
+    reflectorFactoryElement(root.evalNode("reflectorFactory"));
+    settingsElement(settings);
+    // read it after objectFactory and objectWrapperFactory issue #631
+    // -------------> 是在这里解析了DataSource
+    environmentsElement(root.evalNode("environments"));
+    databaseIdProviderElement(root.evalNode("databaseIdProvider"));
+    typeHandlerElement(root.evalNode("typeHandlers"));
+    mapperElement(root.evalNode("mappers"));
+  } catch (Exception e) {
+    throw new BuilderException("Error parsing SQL Mapper Configuration. Cause: " + e, e);
+  }
+}
+

环境(environments)解析的就是这一块的内容

+
<environments default="development">
+        <environment id="development">
+            <transactionManager type="JDBC"/>
+            <dataSource type="POOLED">
+                <property name="driver" value="${driver}"/>
+                <property name="url" value="${url}"/>
+                <property name="username" value="${username}"/>
+                <property name="password" value="${password}"/>
+            </dataSource>
+        </environment>
+    </environments>
+

解析也是自上而下的,

+
private void environmentsElement(XNode context) throws Exception {
+  if (context != null) {
+    if (environment == null) {
+      environment = context.getStringAttribute("default");
+    }
+    for (XNode child : context.getChildren()) {
+      String id = child.getStringAttribute("id");
+      if (isSpecifiedEnvironment(id)) {
+        TransactionFactory txFactory = transactionManagerElement(child.evalNode("transactionManager"));
+        DataSourceFactory dsFactory = dataSourceElement(child.evalNode("dataSource"));
+        DataSource dataSource = dsFactory.getDataSource();
+        Environment.Builder environmentBuilder = new Environment.Builder(id)
+            .transactionFactory(txFactory)
+            .dataSource(dataSource);
+        configuration.setEnvironment(environmentBuilder.build());
+        break;
+      }
+    }
+  }
+}
+

前面第一步是解析事务管理器元素

+
private TransactionFactory transactionManagerElement(XNode context) throws Exception {
+  if (context != null) {
+    String type = context.getStringAttribute("type");
+    Properties props = context.getChildrenAsProperties();
+    TransactionFactory factory = (TransactionFactory) resolveClass(type).getDeclaredConstructor().newInstance();
+    factory.setProperties(props);
+    return factory;
+  }
+  throw new BuilderException("Environment declaration requires a TransactionFactory.");
+}
+

而这里的 resolveClass 其实就使用了上一篇的 typeAliases 系统,这里是使用了 JdbcTransactionFactory 作为事务管理器,
后面就是 DataSourceFactory 的创建,也就是 DataSource 的创建

+
private DataSourceFactory dataSourceElement(XNode context) throws Exception {
+  if (context != null) {
+    String type = context.getStringAttribute("type");
+    Properties props = context.getChildrenAsProperties();
+    DataSourceFactory factory = (DataSourceFactory) resolveClass(type).getDeclaredConstructor().newInstance();
+    factory.setProperties(props);
+    return factory;
+  }
+  throw new BuilderException("Environment declaration requires a DataSourceFactory.");
+}
+

因为在config文件中设置了Pooled,所以对应创建的就是 PooledDataSourceFactory
但是这里其实有个比较需要注意的点,mybatis 这里的 PooledDataSourceFactory 其实是继承了 UnpooledDataSourceFactory,
把基础方法都放在了 UnpooledDataSourceFactory 里

+
public class PooledDataSourceFactory extends UnpooledDataSourceFactory {
+
+  public PooledDataSourceFactory() {
+    this.dataSource = new PooledDataSource();
+  }
+
+}
+

这里只保留了在构造方法里创建 DataSource
而这个 PooledDataSource 虽然没有直接继承 UnpooledDataSource,但其实
在构造方法里也是

+
public PooledDataSource() {
+  dataSource = new UnpooledDataSource();
+}
+

至于为什么这么做,应该也是考虑到能更多地复用代码,因为 Pooled 跟 Unpooled 最重要的差别就在于是不是每次都重开连接
使用连接池能够让应用在有大量查询的时候不用反复创建连接,省去了建联的网络等开销,Unpooled就是完成与数据库的连接,并可以获取该连接
主要的代码

+
@Override
+public Connection getConnection() throws SQLException {
+  return doGetConnection(username, password);
+}
+
+@Override
+public Connection getConnection(String username, String password) throws SQLException {
+  return doGetConnection(username, password);
+}
+private Connection doGetConnection(String username, String password) throws SQLException {
+  Properties props = new Properties();
+  if (driverProperties != null) {
+    props.putAll(driverProperties);
+  }
+  if (username != null) {
+    props.setProperty("user", username);
+  }
+  if (password != null) {
+    props.setProperty("password", password);
+  }
+  return doGetConnection(props);
+}
+private Connection doGetConnection(Properties properties) throws SQLException {
+  initializeDriver();
+  Connection connection = DriverManager.getConnection(url, properties);
+  configureConnection(connection);
+  return connection;
+}
+

而对于Pooled就会处理池化的逻辑

+
private PooledConnection popConnection(String username, String password) throws SQLException {
+    boolean countedWait = false;
+    PooledConnection conn = null;
+    long t = System.currentTimeMillis();
+    int localBadConnectionCount = 0;
+
+    while (conn == null) {
+      lock.lock();
+      try {
        if (!state.idleConnections.isEmpty()) {
          // Pool has available connection
-          // the pool has an idle connection we can hand out
          conn = state.idleConnections.remove(0);
          if (log.isDebugEnabled()) {
            log.debug("Checked out connection " + conn.getRealHashCode() + " from pool.");
          }
        } else {
          // Pool does not have available connection
-          // no idle connection, but the active count is still below poolMaximumActiveConnections, so a new connection can be created
          if (state.activeConnections.size() < poolMaximumActiveConnections) {
            // Can create new connection
-            // creating the connection itself works as covered earlier
            conn = new PooledConnection(dataSource.getConnection(), this);
            if (log.isDebugEnabled()) {
              log.debug("Created connection " + conn.getRealHashCode() + ".");
            }
          } else {
            // Cannot create new connection
-            // no new connection can be created either; poolMaximumCheckoutTime caps how long a single checkout may last, and a connection held longer than that is reclaimed and invalidated
            PooledConnection oldestActiveConnection = state.activeConnections.get(0);
            long longestCheckoutTime = oldestActiveConnection.getCheckoutTime();
            if (longestCheckoutTime > poolMaximumCheckoutTime) {
              // Can claim overdue connection
-              // bump the count of overdue connections claimed from the pool
              state.claimedOverdueConnectionCount++;
-              // add this connection's checkout time to the accumulated checkout time of overdue connections
              state.accumulatedCheckoutTimeOfOverdueConnections += longestCheckoutTime;
-              // and to the accumulated checkout time of all connections
              state.accumulatedCheckoutTime += longestCheckoutTime;
-              // remove it from the active list
              state.activeConnections.remove(oldestActiveConnection);
-              // if the connection is not auto-commit, try to roll it back
              if (!oldestActiveConnection.getRealConnection().getAutoCommit()) {
                try {
                  oldestActiveConnection.getRealConnection().rollback();
                } catch (SQLException e) {
                  log.debug("Bad connection. Could not roll back");
                }
              }
-              // wrap the real connection in a fresh PooledConnection and carry over its timestamps
              conn = new PooledConnection(oldestActiveConnection.getRealConnection(), this);
              conn.setCreatedTimestamp(oldestActiveConnection.getCreatedTimestamp());
              conn.setLastUsedTimestamp(oldestActiveConnection.getLastUsedTimestamp());
              conn.setCheckoutTimestamp(oldestActiveConnection.getCheckoutTimestamp());
              oldestActiveConnection.invalidate();
              if (log.isDebugEnabled()) {
                log.debug("Claimed overdue connection " + conn.getRealHashCode() + ".");
              }
            } else {
              // Must wait
-              // still no connection to hand out, so the only option is to wait
              try {
-                // mark that we waited and bump the wait counter
                if (!countedWait) {
                  state.hadToWaitCount++;
                  countedWait = true;
                }
                if (log.isDebugEnabled()) {
                  log.debug("Waiting as long as " + poolTimeToWait + " milliseconds for connection.");
                }
                long wt = System.currentTimeMillis();
-                // wait for at most poolTimeToWait
                condition.await(poolTimeToWait, TimeUnit.MILLISECONDS);
-                // record the time spent waiting
                state.accumulatedWaitTime += System.currentTimeMillis() - wt;
              } catch (InterruptedException e) {
                // set interrupt flag
                Thread.currentThread().interrupt();
                break;
              }
            }
          }
        }
-        // if a connection was obtained
        if (conn != null) {
          // ping to server and check the connection is valid or not
-          // check whether it is still valid
          if (conn.isValid()) {
            if (!conn.getRealConnection().getAutoCommit()) {
-              // roll back anything uncommitted
              conn.getRealConnection().rollback();
            }
            conn.setConnectionTypeCode(assembleConnectionTypeCode(dataSource.getUrl(), username, password));
-            // stamp the checkout and last-used times
            conn.setCheckoutTimestamp(System.currentTimeMillis());
            conn.setLastUsedTimestamp(System.currentTimeMillis());
-            // add it to the active connections
            state.activeConnections.add(conn);
            state.requestCount++;
            state.accumulatedRequestTime += System.currentTimeMillis() - t;
          } else {
            if (log.isDebugEnabled()) {
              log.debug("A bad connection (" + conn.getRealHashCode() + ") was returned from the pool, getting another connection.");
            }
-            // the connection is invalid, so count one more bad connection
            state.badConnectionCount++;
            localBadConnectionCount++;
            conn = null;
-            // if bad connections exceed the tolerance threshold, give up and throw
            if (localBadConnectionCount > (poolMaximumIdleConnections + poolMaximumLocalBadConnectionTolerance)) {
              if (log.isDebugEnabled()) {
                log.debug("PooledDataSource: Could not get a good connection to the database.");
              }
              throw new SQLException("PooledDataSource: Could not get a good connection to the database.");
            }
          }
        }
      } finally {
-        // release the lock
        lock.unlock();
      }

    }

    if (conn == null) {
-      // the connection is still null at this point
      if (log.isDebugEnabled()) {
        log.debug("PooledDataSource: Unknown severe error condition.  The connection pool returned a null connection.");
      }
-      // throw instead of returning null
      throw new SQLException("PooledDataSource: Unknown severe error condition.  The connection pool returned a null connection.");
    }
-    // return the connection
-    return conn;
-  }
-

Next, returning a connection to the pool:

-
protected void pushConnection(PooledConnection conn) throws SQLException {
-    // acquire the lock
-    lock.lock();
-    try {
-      // remove this connection from the active list
-      state.activeConnections.remove(conn);
-      if (conn.isValid()) {
-        // the idle count is still below poolMaximumIdleConnections
-        if (state.idleConnections.size() < poolMaximumIdleConnections && conn.getConnectionTypeCode() == expectedConnectionTypeCode) {
-          // record the checkout time
-          state.accumulatedCheckoutTime += conn.getCheckoutTime();
-          if (!conn.getRealConnection().getAutoCommit()) {
-            // likewise roll back anything uncommitted
-            conn.getRealConnection().rollback();
-          }
-          // wrap the real connection in a new PooledConnection
-          PooledConnection newConn = new PooledConnection(conn.getRealConnection(), this);
-          // and add it to the idle list
-          state.idleConnections.add(newConn);
-          newConn.setCreatedTimestamp(conn.getCreatedTimestamp());
-          newConn.setLastUsedTimestamp(conn.getLastUsedTimestamp());
-          // invalidate the old wrapper
-          conn.invalidate();
-          if (log.isDebugEnabled()) {
-            log.debug("Returned connection " + newConn.getRealHashCode() + " to pool.");
-          }
-          // wake up anyone waiting in popConnection
-          condition.signal();
-        } else {
-          // same bookkeeping as above, except the idle list is already full, so the real connection is closed
-          state.accumulatedCheckoutTime += conn.getCheckoutTime();
-          if (!conn.getRealConnection().getAutoCommit()) {
-            conn.getRealConnection().rollback();
-          }
-          conn.getRealConnection().close();
-          if (log.isDebugEnabled()) {
-            log.debug("Closed connection " + conn.getRealHashCode() + ".");
-          }
-          conn.invalidate();
-        }
-      } else {
-        if (log.isDebugEnabled()) {
-          log.debug("A bad connection (" + conn.getRealHashCode() + ") attempted to return to the pool, discarding connection.");
-        }
-        state.badConnectionCount++;
-      }
-    } finally {
-      lock.unlock();
-    }
-  }
+    return conn;
+  }
+

Its internal entry point is not a get method but a pop, which already conveys a different meaning:
org.apache.ibatis.datasource.pooled.PooledDataSource#getConnection()

+
@Override
+public Connection getConnection() throws SQLException {
+  return popConnection(dataSource.getUsername(), dataSource.getPassword()).getProxyConnection();
+}
+

How the underlying connection is actually obtained can be covered in detail in the next post.

]]>
Java @@ -6765,1330 +7007,1303 @@ WHERE (id = #{id})
-

The code path ends up here:

-
private void mapperElement(XNode parent) throws Exception {
-  if (parent != null) {
-    for (XNode child : parent.getChildren()) {
-      if ("package".equals(child.getName())) {
-        // what we parse here is not a package
-        String mapperPackage = child.getStringAttribute("name");
-        configuration.addMappers(mapperPackage);
-      } else {
-        // decide among resource, url and mapperClass
-        String resource = child.getStringAttribute("resource");
-        String url = child.getStringAttribute("url");
-        String mapperClass = child.getStringAttribute("class");
-        // when resource is set and the others are empty, read the resource as an input stream
-        if (resource != null && url == null && mapperClass == null) {
-          ErrorContext.instance().resource(resource);
-          try(InputStream inputStream = Resources.getResourceAsStream(resource)) {
-            // create an XMLMapperBuilder to parse the mapper
-            XMLMapperBuilder mapperParser = new XMLMapperBuilder(inputStream, configuration, resource, configuration.getSqlFragments());
-            mapperParser.parse();
-          }
-

Then comes the parse step:

-
public void parse() {
-  if (!configuration.isResourceLoaded(resource)) {
-    // parse the mapper node, i.e. the mapper element in the image below
-    configurationElement(parser.evalNode("/mapper"));
-    configuration.addLoadedResource(resource);
-    bindMapperForNamespace();
-  }
-
-  parsePendingResultMaps();
-  parsePendingCacheRefs();
-  parsePendingStatements();
-}
-

image

-

Continuing down:

-
private void configurationElement(XNode context) {
-    try {
-      String namespace = context.getStringAttribute("namespace");
-      if (namespace == null || namespace.isEmpty()) {
-        throw new BuilderException("Mapper's namespace cannot be empty");
+    Notes on fixing a certificate problem when making HTTP requests in Java
+    /2023/07/29/java-%E4%B8%AD%E5%8F%91%E8%B5%B7-http-%E8%AF%B7%E6%B1%82%E6%97%B6%E8%AF%81%E4%B9%A6%E9%97%AE%E9%A2%98%E8%A7%A3%E5%86%B3%E8%AE%B0%E5%BD%95/
+    During another environment deployment an error surfaced when making HTTPS requests to WeChat:
No appropriate protocol (protocol is disabled or cipher suites are inappropriate)
At first I assumed it was an environment issue, the base image having switched from Oracle's JDK to an OpenJDK-based one, and nothing to do with javax,
even though the full message does contain a javax exception:
java.lang.RuntimeException: javax.net.ssl.SSLHandshakeException: No appropriate protocol (protocol is disabled or cipher suites are inappropriate)
Looking again, it seemed it could also be a certificate or TLS configuration problem, so the next step was to check the security settings.
In java.security under /usr/lib/jvm/java-8-openjdk-amd64/jre/lib/security there is
jdk.tls.disabledAlgorithms=SSLv3, TLSv1, TLSv1.1, RC4, DES, MD5withRSA,
and it happens that our createSocketFactory code pinned the protocol to TLSv1 (a protocol version, not a certificate):

+
SSLContext sslContext = SSLContext.getInstance("TLS");
+sslContext.init(kmf.getKeyManagers(), null, new SecureRandom());
+return new SSLConnectionSocketFactory(sslContext, new String[]{"TLSv1"}, null, new DefaultHostnameVerifier());
+

So there are two options: move to a newer TLS version, or fall back to an older JDK. It also shows that even within JDK 8 different minor versions can behave differently; errors like this deserve a closer look at the actual cause rather than a blanket "Oracle JDK versus OpenJDK" explanation.
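A sketch of the first option, keeping the same socket-factory setup as above but allowing a protocol that java.security has not disabled (the surrounding kmf setup is assumed to be the same as in our code):

SSLContext sslContext = SSLContext.getInstance("TLS");
sslContext.init(kmf.getKeyManagers(), null, new SecureRandom());
// TLSv1.2 is not in jdk.tls.disabledAlgorithms, so the handshake can succeed
return new SSLConnectionSocketFactory(sslContext, new String[]{"TLSv1.2"}, null, new DefaultHostnameVerifier());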

+]]>
+
+    java
+
+    java
+
+
+    mybatis系列-sql 类的简要分析
+    /2023/03/19/mybatis%E7%B3%BB%E5%88%97-sql-%E7%B1%BB%E7%9A%84%E7%AE%80%E8%A6%81%E5%88%86%E6%9E%90/
+    Last time the basic usage was covered briefly. This part is also simple, since the wrapper is not complex; let's take SELECT as the entry point and look at the concrete implementation:

+
String selectSql = new SQL() {{
+            SELECT("id", "name");
+            FROM("student");
+            WHERE("id = #{id}");
+        }}.toString();
+

The implementation of the SELECT method:

+
public T SELECT(String... columns) {
+  sql().statementType = SQLStatement.StatementType.SELECT;
+  sql().select.addAll(Arrays.asList(columns));
+  return getSelf();
+}
+

statementType is an enum:

+
public enum StatementType {
+  DELETE, INSERT, SELECT, UPDATE
+}
+

So this marks the statement as a SELECT, and the column arguments are turned into a list and appended to the select field.
Next is FROM, which, as you can probably guess, just records the table name:

+
public T FROM(String table) {
+  sql().tables.add(table);
+  return getSelf();
+}
+

It adds the table to tables. So what is tables?
Here are all the fields for reference:

+
StatementType statementType;
+List<String> sets = new ArrayList<>();
+List<String> select = new ArrayList<>();
+List<String> tables = new ArrayList<>();
+List<String> join = new ArrayList<>();
+List<String> innerJoin = new ArrayList<>();
+List<String> outerJoin = new ArrayList<>();
+List<String> leftOuterJoin = new ArrayList<>();
+List<String> rightOuterJoin = new ArrayList<>();
+List<String> where = new ArrayList<>();
+List<String> having = new ArrayList<>();
+List<String> groupBy = new ArrayList<>();
+List<String> orderBy = new ArrayList<>();
+List<String> lastList = new ArrayList<>();
+List<String> columns = new ArrayList<>();
+List<List<String>> valuesList = new ArrayList<>();
+

As you can see, a set of Lists temporarily holds the SQL fragments, which are later assembled into the final statement,
because toString is overridden:

+
@Override
+public String toString() {
+  StringBuilder sb = new StringBuilder();
+  sql().sql(sb);
+  return sb.toString();
+}
+

The sql method it calls is:

+
public String sql(Appendable a) {
+      SafeAppendable builder = new SafeAppendable(a);
+      if (statementType == null) {
+        return null;
       }
-      builderAssistant.setCurrentNamespace(namespace);
-      // handle cache and cache-ref
-      cacheRefElement(context.evalNode("cache-ref"));
-      cacheElement(context.evalNode("cache"));
-      parameterMapElement(context.evalNodes("/mapper/parameterMap"));
-      resultMapElements(context.evalNodes("/mapper/resultMap"));
-      sqlElement(context.evalNodes("/mapper/sql"));
-      // ours is a SQL query, so the real work happens in here
-      buildStatementFromContext(context.evalNodes("select|insert|update|delete"));
-    } catch (Exception e) {
-      throw new BuilderException("Error parsing Mapper XML. The XML location is '" + resource + "'. Cause: " + e, e);
-    }
-  }
-

Then:

-
private void buildStatementFromContext(List<XNode> list) {
-  if (configuration.getDatabaseId() != null) {
-    buildStatementFromContext(list, configuration.getDatabaseId());
-  }
-  // with no databaseId configured it falls through to here
-  buildStatementFromContext(list, null);
-}
-

Continuing:

-
private void buildStatementFromContext(List<XNode> list, String requiredDatabaseId) {
-  for (XNode context : list) {
-    // create the statement parser
-    final XMLStatementBuilder statementParser = new XMLStatementBuilder(configuration, builderAssistant, context, requiredDatabaseId);
-    try {
-      // parse the node
-      statementParser.parseStatementNode();
-    } catch (IncompleteElementException e) {
-      configuration.addIncompleteStatement(statementParser);
-    }
-  }
-}
-

This method is fairly long, so it is trimmed here to only the relevant parts:

-
public void parseStatementNode() {
-    String id = context.getStringAttribute("id");
-    String databaseId = context.getStringAttribute("databaseId");
 
-    if (!databaseIdMatchesCurrent(id, databaseId, this.requiredDatabaseId)) {
-      return;
-    }
+      String answer;
 
-    String nodeName = context.getNode().getNodeName();
-    SqlCommandType sqlCommandType = SqlCommandType.valueOf(nodeName.toUpperCase(Locale.ENGLISH));
-    boolean isSelect = sqlCommandType == SqlCommandType.SELECT;
-    boolean flushCache = context.getBooleanAttribute("flushCache", !isSelect);
-    boolean useCache = context.getBooleanAttribute("useCache", isSelect);
-    boolean resultOrdered = context.getBooleanAttribute("resultOrdered", false);
+      switch (statementType) {
+        case DELETE:
+          answer = deleteSQL(builder);
+          break;
+
+        case INSERT:
+          answer = insertSQL(builder);
+          break;
 
+        case SELECT:
+          answer = selectSQL(builder);
+          break;
 
-    // surrounding code elided; the key step is creating the SqlSource
+        case UPDATE:
+          answer = updateSQL(builder);
+          break;
 
-    SqlSource sqlSource = langDriver.createSqlSource(configuration, context, parameterTypeClass);
-    
-
-

Then, per the LanguageDriver (here XMLLanguageDriver), initialization comes first:

-
  @Override
-  public SqlSource createSqlSource(Configuration configuration, XNode script, Class<?> parameterType) {
-    XMLScriptBuilder builder = new XMLScriptBuilder(configuration, script, parameterType);
-    return builder.parseScriptNode();
-  }
-// the constructor carries some logic
-  public XMLScriptBuilder(Configuration configuration, XNode context, Class<?> parameterType) {
-    super(configuration);
-    this.context = context;
-    this.parameterType = parameterType;
-    // note this in particular: a foreach was deliberately added to the mapper to walk through this parsing
-    initNodeHandlerMap();
-  }
-// register handlers for each node type
-  private void initNodeHandlerMap() {
-    nodeHandlerMap.put("trim", new TrimHandler());
-    nodeHandlerMap.put("where", new WhereHandler());
-    nodeHandlerMap.put("set", new SetHandler());
-    nodeHandlerMap.put("foreach", new ForEachHandler());
-    nodeHandlerMap.put("if", new IfHandler());
-    nodeHandlerMap.put("choose", new ChooseHandler());
-    nodeHandlerMap.put("when", new IfHandler());
-    nodeHandlerMap.put("otherwise", new OtherwiseHandler());
-    nodeHandlerMap.put("bind", new BindHandler());
-  }
-

Once the builder is initialized, parsing begins:

-
public SqlSource parseScriptNode() {
-  // first comes parseDynamicTags
-  MixedSqlNode rootSqlNode = parseDynamicTags(context);
-  SqlSource sqlSource;
-  if (isDynamic) {
-    sqlSource = new DynamicSqlSource(configuration, rootSqlNode);
+        default:
+          answer = null;
+      }
+
+      return answer;
+    }
+

Based on the statementType set above it decides what kind of SQL this is; ours is a SELECT, so the SELECT branch runs selectSQL:

+
private String selectSQL(SafeAppendable builder) {
+  if (distinct) {
+    sqlClause(builder, "SELECT DISTINCT", select, "", "", ", ");
   } else {
-    sqlSource = new RawSqlSource(configuration, rootSqlNode, parameterType);
+    sqlClause(builder, "SELECT", select, "", "", ", ");
   }
-  return sqlSource;
-}
-

This part, however, does quite a lot:

-
protected MixedSqlNode parseDynamicTags(XNode node) {
-    List<SqlNode> contents = new ArrayList<>();
-    // fetch the child nodes; the SELECT statement in my xml splits into three parts: the text from select up to in, the foreach element, and the trailing \n
-    NodeList children = node.getNode().getChildNodes();
-    for (int i = 0; i < children.getLength(); i++) {
-      XNode child = node.newXNode(children.item(i));
-      // the first node is pure text, so it takes this branch
-      if (child.getNode().getNodeType() == Node.CDATA_SECTION_NODE || child.getNode().getNodeType() == Node.TEXT_NODE) {
-        String data = child.getStringBody("");
-        TextSqlNode textSqlNode = new TextSqlNode(data);
-        if (textSqlNode.isDynamic()) {
-          contents.add(textSqlNode);
-          isDynamic = true;
-        } else {
-          // add this node to contents
-          contents.add(new StaticTextSqlNode(data));
+
+  sqlClause(builder, "FROM", tables, "", "", ", ");
+  joins(builder);
+  sqlClause(builder, "WHERE", where, "(", ")", " AND ");
+  sqlClause(builder, "GROUP BY", groupBy, "", "", ", ");
+  sqlClause(builder, "HAVING", having, "(", ")", " AND ");
+  sqlClause(builder, "ORDER BY", orderBy, "", "", ", ");
+  limitingRowsStrategy.appendClause(builder, offset, limit);
+  return builder.toString();
+}
+

The above shows that clauses are handled in the order we normally read SQL,
i.e. select ... from ... where ...
Now look at the sqlClause code:

+
private void sqlClause(SafeAppendable builder, String keyword, List<String> parts, String open, String close,
+                           String conjunction) {
+      if (!parts.isEmpty()) {
+        if (!builder.isEmpty()) {
+          builder.append("\n");
         }
-      } else if (child.getNode().getNodeType() == Node.ELEMENT_NODE) { // issue #628
-        // the second node is the one carrying foreach, an element node
-        String nodeName = child.getNode().getNodeName();
-        // look up the handler by nodeName
-        NodeHandler handler = nodeHandlerMap.get(nodeName);
-        if (handler == null) {
-          throw new BuilderException("Unknown element <" + nodeName + "> in SQL statement.");
+        builder.append(keyword);
+        builder.append(" ");
+        builder.append(open);
+        String last = "________";
+        for (int i = 0, n = parts.size(); i < n; i++) {
+          String part = parts.get(i);
+          if (i > 0 && !part.equals(AND) && !part.equals(OR) && !last.equals(AND) && !last.equals(OR)) {
+            builder.append(conjunction);
+          }
+          builder.append(part);
+          last = part;
         }
-        // delegate to the handler
-        handler.handleNode(child, contents);
-        isDynamic = true;
+        builder.append(close);
       }
-    }
-    // finally return the mixed sql node
-    return new MixedSqlNode(contents);
-  }
-

Now the handleNode logic:

-
    @Override
-    public void handleNode(XNode nodeToHandle, List<SqlNode> targetContents) {
-      // recursively calls parseDynamicTags again, nesting-doll style
-      MixedSqlNode mixedSqlNode = parseDynamicTags(nodeToHandle);
-      String collection = nodeToHandle.getStringAttribute("collection");
-      Boolean nullable = nodeToHandle.getBooleanAttribute("nullable");
-      String item = nodeToHandle.getStringAttribute("item");
-      String index = nodeToHandle.getStringAttribute("index");
-      String open = nodeToHandle.getStringAttribute("open");
-      String close = nodeToHandle.getStringAttribute("close");
-      String separator = nodeToHandle.getStringAttribute("separator");
-      ForEachSqlNode forEachSqlNode = new ForEachSqlNode(configuration, mixedSqlNode, collection, nullable, index, item, open, close, separator);
-      targetContents.add(forEachSqlNode);
-    }
-// 这里走的逻辑不一样了
-protected MixedSqlNode parseDynamicTags(XNode node) {
-    List<SqlNode> contents = new ArrayList<>();
-    // this is the content inside foreach, so it is a text node
-    NodeList children = node.getNode().getChildNodes();
-    for (int i = 0; i < children.getLength(); i++) {
-      XNode child = node.newXNode(children.item(i));
-      // a pure text node ends up here
-      if (child.getNode().getNodeType() == Node.CDATA_SECTION_NODE || child.getNode().getNodeType() == Node.TEXT_NODE) {
-        String data = child.getStringBody("");
-        TextSqlNode textSqlNode = new TextSqlNode(data);
-        // whether it is dynamic depends on the presence of ${} in the text
-        if (textSqlNode.isDynamic()) {
-          contents.add(textSqlNode);
-          isDynamic = true;
-        } else {
-          // so it still ends up here,
-          // adding the node to contents
-          contents.add(new StaticTextSqlNode(data));
-        }
-// finally it is wrapped into a MixedSqlNode again
-// back to here
-    @Override
-    public void handleNode(XNode nodeToHandle, List<SqlNode> targetContents) {
-      MixedSqlNode mixedSqlNode = parseDynamicTags(nodeToHandle);
-      // read the attributes of the foreach element
-      String collection = nodeToHandle.getStringAttribute("collection");
-      Boolean nullable = nodeToHandle.getBooleanAttribute("nullable");
-      String item = nodeToHandle.getStringAttribute("item");
-      String index = nodeToHandle.getStringAttribute("index");
-      String open = nodeToHandle.getStringAttribute("open");
-      String close = nodeToHandle.getStringAttribute("close");
-      String separator = nodeToHandle.getStringAttribute("separator");
-      ForEachSqlNode forEachSqlNode = new ForEachSqlNode(configuration, mixedSqlNode, collection, nullable, index, item, open, close, separator);
-      targetContents.add(forEachSqlNode);
-    }
-

Coming back:

-
public SqlSource parseScriptNode() {
-  MixedSqlNode rootSqlNode = parseDynamicTags(context);
-  SqlSource sqlSource;
-  // handling the foreach node set isDynamic straight to true
-  if (isDynamic) {
-    // so this becomes a DynamicSqlSource
-    sqlSource = new DynamicSqlSource(configuration, rootSqlNode);
-  } else {
-    sqlSource = new RawSqlSource(configuration, rootSqlNode, parameterType);
-  }
-  return sqlSource;
-}
-

That completes the pre-processing; at actual execution time further resolution is still needed.

-

Much of the earlier path has been covered already, so let's jump straight here:

-
  @Override
-  public <T> T selectOne(String statement, Object parameter) {
-    // Popular vote was to return null on 0 results and throw exception on too many.
-    // as we know, this is where it goes in
-    List<T> list = this.selectList(statement, parameter);
-    if (list.size() == 1) {
-      return list.get(0);
-    } else if (list.size() > 1) {
-      throw new TooManyResultsException("Expected one result (or null) to be returned by selectOne(), but found: " + list.size());
-    } else {
-      return null;
-    }
-  }
+    }
+

The assembly here still has to handle the AND / OR joining logic; otherwise there is nothing special. The only mystery is the lastList in the WHERE handling, which seems to only ever be added to and assigned; if anyone knows what it is for, please leave a comment.
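For reference, a small sketch of what the assembled statement looks like; the second WHERE condition is my addition, to show the AND joining:

String sql = new SQL() {{
    SELECT("id", "name");
    FROM("student");
    WHERE("id = #{id}");
    WHERE("name = #{name}"); // extra condition for illustration
}}.toString();
// sqlClause assembles:
// SELECT id, name
// FROM student
// WHERE (id = #{id} AND name = #{name})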

+]]>
+
+    Java
+    Mybatis
+
+    Java
+    Mysql
+    Mybatis
+
+
+    invert-binary-tree
+    /2015/06/22/invert-binary-tree/
+    Invert a binary tree

+
     4
+   /   \
+  2     7
+ / \   / \
+1   3 6   9
+
+

to

+
     4
+   /   \
+  7     2
+ / \   / \
+9   6 3   1
+
+

Trivia:
This problem was inspired by this original tweet by Max Howell:

+
+

Google: 90% of our engineers use the software you wrote (Homebrew),
but you can’t invert a binary tree on a whiteboard so fuck off.

+
+
/**
+ * Definition for a binary tree node.
+ * struct TreeNode {
+ *     int val;
+ *     TreeNode *left;
+ *     TreeNode *right;
+ *     TreeNode(int x) : val(x), left(NULL), right(NULL) {}
+ * };
+ */
+class Solution {
+public:
+    TreeNode* invertTree(TreeNode* root) {
+        if(root == NULL) return root;
+        TreeNode* temp;
+        temp = invertTree(root->left);
+        root->left = invertTree(root->right);
+        root->right = temp;
+        return root;
+    }
+};
]]>
+
+    leetcode
+
+    leetcode
+    c++
+
+
+    dnsmasq的一个使用注意点
+    /2023/04/16/dnsmasq%E7%9A%84%E4%B8%80%E4%B8%AA%E4%BD%BF%E7%94%A8%E6%B3%A8%E6%84%8F%E7%82%B9/
+    I run valet locally as a PHP development environment, since it supports custom domains and certificates. The office network has been quite poor lately, so I wanted to do something with custom DNS; we often need to point at an internal DNS server anyway, so dnsmasq looked like a possible answer,
but the very first step hit a silly problem. In dnsmasq's main configuration file
I set the path of the resolv file,
like this:

+
resolv-file=/opt/homebrew/etc/dnsmasq.d/resolv.dnsmasq.conf
+

The result: dnsmasq would not start. It runs as a brew service, and there were no logs; dnsmasq's configuration itself produces no log either, which is the most annoying part, and searching online turned up nothing. When a brew services entry is in error state and the service writes no logs you are completely in the dark; even manually adding stdout/stderr to the plist does not help, since brew services restart overwrites it again.
Looking at the problem more carefully, I noticed this line further down:

+
conf-dir=/opt/homebrew/etc/dnsmasq.d/,*.conf
+

After a moment's thought the problem turned out to be simple: dnsmasq apparently does not allow the same configuration file to be loaded twice,
and I had placed the resolv file inside that same conf-dir, so it was loaded twice; moving it to a different directory fixed the startup. For now, though, dnsmasq still does not quite meet my needs, or perhaps I have not fully understood how to use it: what I want is to send lookups for particular domain suffixes to particular DNS servers, so they are less affected by the network. AdGuard might be worth a try.
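Incidentally, dnsmasq's server directive does look like it can do suffix-based routing; a sketch (the suffix and addresses are made up, and I have not verified it fits this setup):

# route lookups for a made-up internal suffix to the corporate resolver
server=/internal.example.com/10.0.0.53
# everything else falls through to a public resolver
server=1.1.1.1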

+]]>
+
+    dns
+
+    dnsmasq
+
+
+    mybatis系列-mybatis是如何初始化mapper的
+    /2022/12/04/mybatis%E6%98%AF%E5%A6%82%E4%BD%95%E5%88%9D%E5%A7%8B%E5%8C%96mapper%E7%9A%84/
+    The previous post covered getting mybatis up and running. Reading that intro for the first time, the puzzling part is: you configure a mapper, and somehow selectOne plus a statement id executes SQL. So the first question is how the mapper gets parsed, where it is stored, and how it is looked up again.

+

Parsing the mappers

org.apache.ibatis.session.SqlSessionFactoryBuilder#build(java.io.InputStream)
+public SqlSessionFactory build(InputStream inputStream) {
+  return build(inputStream, null, null);
+}
-  @Override
-  public <E> List<E> selectList(String statement, Object parameter) {
-    return this.selectList(statement, parameter, RowBounds.DEFAULT);
-  }
-  @Override
-  public <E> List<E> selectList(String statement, Object parameter, RowBounds rowBounds) {
-    return selectList(statement, parameter, rowBounds, Executor.NO_RESULT_HANDLER);
-  }
-  private <E> List<E> selectList(String statement, Object parameter, RowBounds rowBounds, ResultHandler handler) {
+

The SqlSessionFactory is built by reading mybatis-config.xml:

+
public SqlSessionFactory build(InputStream inputStream, String environment, Properties properties) {
+  try {
+    // create the xml parser
+    XMLConfigBuilder parser = new XMLConfigBuilder(inputStream, environment, properties);
+    // parse first, then build
+    return build(parser.parse());
+  } catch (Exception e) {
+    throw ExceptionFactory.wrapException("Error building SqlSession.", e);
+  } finally {
+    ErrorContext.instance().reset();
     try {
-      // this was covered earlier too,
-      MappedStatement ms = configuration.getMappedStatement(statement);
-      return executor.query(ms, wrapCollection(parameter), rowBounds, handler);
-    } catch (Exception e) {
-      throw ExceptionFactory.wrapException("Error querying database.  Cause: " + e, e);
-    } finally {
-      ErrorContext.instance().reset();
-    }
-  }
-  // including here, where org.apache.ibatis.executor.CachingExecutor#query(org.apache.ibatis.mapping.MappedStatement, java.lang.Object, org.apache.ibatis.session.RowBounds, org.apache.ibatis.session.ResultHandler) is invoked
-  @Override
-  public <E> List<E> query(MappedStatement ms, Object parameterObject, RowBounds rowBounds, ResultHandler resultHandler) throws SQLException {
-    BoundSql boundSql = ms.getBoundSql(parameterObject);
-    CacheKey key = createCacheKey(ms, parameterObject, rowBounds, boundSql);
-    return query(ms, parameterObject, rowBounds, resultHandler, key, boundSql);
-  }
-// then the BoundSql is obtained
-  public BoundSql getBoundSql(Object parameterObject) {
-    BoundSql boundSql = sqlSource.getBoundSql(parameterObject);
-    List<ParameterMapping> parameterMappings = boundSql.getParameterMappings();
-    if (parameterMappings == null || parameterMappings.isEmpty()) {
-      boundSql = new BoundSql(configuration, boundSql.getSql(), parameterMap.getParameterMappings(), parameterObject);
+       if (inputStream != null) {
+         inputStream.close();
+       }
+    } catch (IOException e) {
+      // Intentionally ignore. Prefer previous error.
     }
+  }
-    // check for nested result maps in parameter mappings (issue #30)
-    for (ParameterMapping pm : boundSql.getParameterMappings()) {
-      String rmId = pm.getResultMapId();
-      if (rmId != null) {
-        ResultMap rm = configuration.getResultMap(rmId);
-        if (rm != null) {
-          hasNestedResultMaps |= rm.hasNestedResultMaps();
-        }
-      }
-    }
+

Creating the XMLConfigBuilder:

+
public XMLConfigBuilder(InputStream inputStream, String environment, Properties props) {
+    // --------> create the XPathParser
+  this(new XPathParser(inputStream, true, props, new XMLMapperEntityResolver()), environment, props);
+}
 
-    return boundSql;
+public XPathParser(InputStream inputStream, boolean validation, Properties variables, EntityResolver entityResolver) {
+    commonConstructor(validation, variables, entityResolver);
+    this.document = createDocument(new InputSource(inputStream));
   }
-// since, as shown earlier, a DynamicSqlSource was generated, its getBoundSql is the one invoked
-  @Override
-  public BoundSql getBoundSql(Object parameterObject) {
-    DynamicContext context = new DynamicContext(configuration, parameterObject);
-    // the key line to watch
-    rootSqlNode.apply(context);
-    SqlSourceBuilder sqlSourceParser = new SqlSourceBuilder(configuration);
-    Class<?> parameterType = parameterObject == null ? Object.class : parameterObject.getClass();
-    SqlSource sqlSource = sqlSourceParser.parse(context.getSql(), parameterType, context.getBindings());
-    BoundSql boundSql = sqlSource.getBoundSql(parameterObject);
-    context.getBindings().forEach(boundSql::setAdditionalParameter);
-    return boundSql;
+
+private XMLConfigBuilder(XPathParser parser, String environment, Properties props) {
+  super(new Configuration());
+  ErrorContext.instance().resource("SQL Mapper Configuration");
+  this.configuration.setVariables(props);
+  this.parsed = false;
+  this.environment = environment;
+  this.parser = parser;
+}
+ +

This mainly creates the Builder, which holds the Parser;
then the parse method is called:

+
public Configuration parse() {
+  if (parsed) {
+    throw new BuilderException("Each XMLConfigBuilder can only be used once.");
   }
-// continuing with this DynamicSqlNode's apply
-  public boolean apply(DynamicContext context) {
-    contents.forEach(node -> node.apply(context));
-    return true;
+  // mark it as parsed, though one may wonder whether this is thread-safe
+  parsed = true;
+  // --------> parse the configuration
+  parseConfiguration(parser.evalNode("/configuration"));
+  return configuration;
+}
+ +

The actual parsing handles each kind of tag separately:

+
private void parseConfiguration(XNode root) {
+  try {
+    // issue #117 read properties first
+    // parse properties; this is not provided out of the box, it needs extra configuration and should come first in the config file
+    propertiesElement(root.evalNode("properties"));
+    Properties settings = settingsAsProperties(root.evalNode("settings"));
+    loadCustomVfs(settings);
+    loadCustomLogImpl(settings);
+    typeAliasesElement(root.evalNode("typeAliases"));
+    pluginElement(root.evalNode("plugins"));
+    objectFactoryElement(root.evalNode("objectFactory"));
+    objectWrapperFactoryElement(root.evalNode("objectWrapperFactory"));
+    reflectorFactoryElement(root.evalNode("reflectorFactory"));
+    settingsElement(settings);
+    // read it after objectFactory and objectWrapperFactory issue #631
+    environmentsElement(root.evalNode("environments"));
+    databaseIdProviderElement(root.evalNode("databaseIdProvider"));
+    typeHandlerElement(root.evalNode("typeHandlers"));
+    // ----------> what we care about is the mapper handling
+    mapperElement(root.evalNode("mappers"));
+  } catch (Exception e) {
+    throw new BuilderException("Error parsing SQL Mapper Configuration. Cause: " + e, e);
   }
-// see the image below
-

image

-

Let's focus on the foreach logic:

-
@Override
-  public boolean apply(DynamicContext context) {
-    Map<String, Object> bindings = context.getBindings();
-    final Iterable<?> iterable = evaluator.evaluateIterable(collectionExpression, bindings,
-      Optional.ofNullable(nullable).orElseGet(configuration::isNullableOnForEach));
-    if (iterable == null || !iterable.iterator().hasNext()) {
-      return true;
-    }
-    boolean first = true;
-    // the opening token
-    applyOpen(context);
-    int i = 0;
-    for (Object o : iterable) {
-      DynamicContext oldContext = context;
-      if (first || separator == null) {
-        context = new PrefixedContext(context, "");
-      } else {
-        context = new PrefixedContext(context, separator);
-      }
-      int uniqueNumber = context.getUniqueNumber();
-      // Issue #709
-      if (o instanceof Map.Entry) {
-        @SuppressWarnings("unchecked")
-        Map.Entry<Object, Object> mapEntry = (Map.Entry<Object, Object>) o;
-        applyIndex(context, mapEntry.getKey(), uniqueNumber);
-        applyItem(context, mapEntry.getValue(), uniqueNumber);
+}
+ +

Then the mapperElement method is invoked:

+
private void mapperElement(XNode parent) throws Exception {
+  if (parent != null) {
+    for (XNode child : parent.getChildren()) {
+      if ("package".equals(child.getName())) {
+        String mapperPackage = child.getStringAttribute("name");
+        configuration.addMappers(mapperPackage);
       } else {
-        applyIndex(context, i, uniqueNumber);
-        applyItem(context, o, uniqueNumber);
-      }
-      // rewrite the variable names into this form: select * from student where id in
-      //   (
-      //  #{__frch_id_0}
-      //   )
-      contents.apply(new FilteredDynamicContext(configuration, context, index, item, uniqueNumber));
-      if (first) {
-        first = !((PrefixedContext) context).isPrefixApplied();
-      }
-      context = oldContext;
-      i++;
-    }
-    applyClose(context);
-    context.getBindings().remove(item);
-    context.getBindings().remove(index);
-    return true;
-  }
-// back in the outer layer, the parse method replaces each #{} fragment with ?
-public SqlSource parse(String originalSql, Class<?> parameterType, Map<String, Object> additionalParameters) {
-    ParameterMappingTokenHandler handler = new ParameterMappingTokenHandler(configuration, parameterType, additionalParameters);
-    GenericTokenParser parser = new GenericTokenParser("#{", "}", handler);
-    String sql;
-    if (configuration.isShrinkWhitespacesInSql()) {
-      sql = parser.parse(removeExtraWhitespaces(originalSql));
-    } else {
-      sql = parser.parse(originalSql);
-    }
-    return new StaticSqlSource(configuration, sql, handler.getParameterMappings());
-  }
-

image

-

You can see it here, before the substitution takes place

-

image

-

The real replacement of ? with concrete parameter values happens here:
org.apache.ibatis.executor.SimpleExecutor#doQuery
which calls:

-
private Statement prepareStatement(StatementHandler handler, Log statementLog) throws SQLException {
-    Statement stmt;
-    Connection connection = getConnection(statementLog);
-    stmt = handler.prepare(connection, transaction.getTimeout());
-    handler.parameterize(stmt);
-    return stmt;
-  }
-  @Override
-  public void parameterize(Statement statement) throws SQLException {
-    parameterHandler.setParameters((PreparedStatement) statement);
-  }
-    @Override
-  public void setParameters(PreparedStatement ps) {
-    ErrorContext.instance().activity("setting parameters").object(mappedStatement.getParameterMap().getId());
-    List<ParameterMapping> parameterMappings = boundSql.getParameterMappings();
-    if (parameterMappings != null) {
-      for (int i = 0; i < parameterMappings.size(); i++) {
-        ParameterMapping parameterMapping = parameterMappings.get(i);
-        if (parameterMapping.getMode() != ParameterMode.OUT) {
-          Object value;
-          String propertyName = parameterMapping.getProperty();
-          if (boundSql.hasAdditionalParameter(propertyName)) { // issue #448 ask first for additional params
-            value = boundSql.getAdditionalParameter(propertyName);
-          } else if (parameterObject == null) {
-            value = null;
-          } else if (typeHandlerRegistry.hasTypeHandler(parameterObject.getClass())) {
-            value = parameterObject;
-          } else {
-            MetaObject metaObject = configuration.newMetaObject(parameterObject);
-            value = metaObject.getValue(propertyName);
-          }
-          TypeHandler typeHandler = parameterMapping.getTypeHandler();
-          JdbcType jdbcType = parameterMapping.getJdbcType();
-          if (value == null && jdbcType == null) {
-            jdbcType = configuration.getJdbcTypeForNull();
+        String resource = child.getStringAttribute("resource");
+        String url = child.getStringAttribute("url");
+        String mapperClass = child.getStringAttribute("class");
+        if (resource != null && url == null && mapperClass == null) {
+          ErrorContext.instance().resource(resource);
+          try(InputStream inputStream = Resources.getResourceAsStream(resource)) {
+            XMLMapperBuilder mapperParser = new XMLMapperBuilder(inputStream, configuration, resource, configuration.getSqlFragments());
+            // --------> no package is specified in our setup, so it goes here
+            mapperParser.parse();
           }
-          try {
-            // -------------------------->
-            // substitute the parameter value
-            typeHandler.setParameter(ps, i + 1, value, jdbcType);
-          } catch (TypeException | SQLException e) {
-            throw new TypeException("Could not set parameters for mapping: " + parameterMapping + ". Cause: " + e, e);
+        } else if (resource == null && url != null && mapperClass == null) {
+          ErrorContext.instance().resource(url);
+          try(InputStream inputStream = Resources.getUrlAsStream(url)){
+            XMLMapperBuilder mapperParser = new XMLMapperBuilder(inputStream, configuration, url, configuration.getSqlFragments());
+            mapperParser.parse();
           }
+        } else if (resource == null && url == null && mapperClass != null) {
+          Class<?> mapperInterface = Resources.classForName(mapperClass);
+          configuration.addMapper(mapperInterface);
+        } else {
+          throw new BuilderException("A mapper element may only specify a url, resource or class, but not more than one.");
         }
       }
     }
-  }
-]]>
-
-    Java
-    Mybatis
-
-    Java
-    Mysql
-    Mybatis
-
-
-    mybatis 的缓存是怎么回事
-    /2020/10/03/mybatis-%E7%9A%84%E7%BC%93%E5%AD%98%E6%98%AF%E6%80%8E%E4%B9%88%E5%9B%9E%E4%BA%8B/
-    In Java, any widely used piece of middleware has plenty worth digging into, and caching is a good example. To improve performance, caching is without doubt a major technique: at the very bottom the CPU has caches, small and expensive; above that, memory acts as the runtime store in front of disk, and in code memory often serves as a local cache too. Databases do the same: MySQL has the innodb_buffer_pool to speed up queries. In essence it is always faster storage acting as a cache in front of relatively slower storage, cutting down direct access to the slow layer. And it is all relative: next to a CPU cache memory is sluggish, but next to a spinning disk it is in another dimension entirely.

-

Enough rambling; on to mybatis's caches. mybatis is generally used as a lightweight ORM, as opposed to the heavyweight hibernate, which is out of scope here. Last time the topic was the two placeholder-substitution strategies mybatis applies while parsing SQL; this time it is the caches. Some prerequisites such as sqlsession help, so let's take it as given that we know what a sqlsession is. You may already know, more or less, that mybatis has two cache levels.

-

First-level cache

The first-level cache is the PerpetualCache in BaseExecutor. It is the most basic Cache implementation, backed by a HashMap; the code is only a few dozen lines:

-
public class PerpetualCache implements Cache {
-
-  private final String id;
+  }
+}
-  private final Map<Object, Object> cache = new HashMap<>();
+

The core is this parse() method:

+
public void parse() {
+  if (!configuration.isResourceLoaded(resource)) {
+    // -------> then it arrives here to handle the mapper node of the config xml
+    configurationElement(parser.evalNode("/mapper"));
+    configuration.addLoadedResource(resource);
+    bindMapperForNamespace();
+  }
 
-  public PerpetualCache(String id) {
-    this.id = id;
-  }
+  parsePendingResultMaps();
+  parsePendingCacheRefs();
+  parsePendingStatements();
+}
- @Override - public String getId() { - return id; - } +

The concrete handling logic:

+
private void configurationElement(XNode context) {
+  try {
+    String namespace = context.getStringAttribute("namespace");
+    if (namespace == null || namespace.isEmpty()) {
+      throw new BuilderException("Mapper's namespace cannot be empty");
+    }
+    builderAssistant.setCurrentNamespace(namespace);
+    cacheRefElement(context.evalNode("cache-ref"));
+    cacheElement(context.evalNode("cache"));
+    parameterMapElement(context.evalNodes("/mapper/parameterMap"));
+    resultMapElements(context.evalNodes("/mapper/resultMap"));
+    sqlElement(context.evalNodes("/mapper/sql"));
+    // ------->  here: build statements from the context
+    buildStatementFromContext(context.evalNodes("select|insert|update|delete"));
+  } catch (Exception e) {
+    throw new BuilderException("Error parsing Mapper XML. The XML location is '" + resource + "'. Cause: " + e, e);
+  }
+}
- @Override - public int getSize() { - return cache.size(); - } +

The code is here; it builds statements from the context, only distinguishing by databaseId:

+
private void buildStatementFromContext(List<XNode> list) {
+  if (configuration.getDatabaseId() != null) {
+    buildStatementFromContext(list, configuration.getDatabaseId());
+  }
+  // -----> check the databaseId
+  buildStatementFromContext(list, null);
+}
- @Override - public void putObject(Object key, Object value) { - cache.put(key, value); - } +

Checking the databaseId:

+
private void buildStatementFromContext(List<XNode> list, String requiredDatabaseId) {
+  for (XNode context : list) {
+    final XMLStatementBuilder statementParser = new XMLStatementBuilder(configuration, builderAssistant, context, requiredDatabaseId);
+    try {
+      // -------> parse the statement node
+      statementParser.parseStatementNode();
+    } catch (IncompleteElementException e) {
+      configuration.addIncompleteStatement(statementParser);
+    }
+  }
+}
- @Override - public Object getObject(Object key) { - return cache.get(key); - } +

Next is the part that really processes the XML statement content, node by node:

+
public void parseStatementNode() {
+  String id = context.getStringAttribute("id");
+  String databaseId = context.getStringAttribute("databaseId");
 
-  @Override
-  public Object removeObject(Object key) {
-    return cache.remove(key);
-  }
+  if (!databaseIdMatchesCurrent(id, databaseId, this.requiredDatabaseId)) {
+    return;
+  }
 
-  @Override
-  public void clear() {
-    cache.clear();
-  }
+  String nodeName = context.getNode().getNodeName();
+  SqlCommandType sqlCommandType = SqlCommandType.valueOf(nodeName.toUpperCase(Locale.ENGLISH));
+  boolean isSelect = sqlCommandType == SqlCommandType.SELECT;
+  boolean flushCache = context.getBooleanAttribute("flushCache", !isSelect);
+  boolean useCache = context.getBooleanAttribute("useCache", isSelect);
+  boolean resultOrdered = context.getBooleanAttribute("resultOrdered", false);
 
-  @Override
-  public boolean equals(Object o) {
-    if (getId() == null) {
-      throw new CacheException("Cache instances require an ID.");
-    }
-    if (this == o) {
-      return true;
-    }
-    if (!(o instanceof Cache)) {
-      return false;
-    }
+  // Include Fragments before parsing
+  XMLIncludeTransformer includeParser = new XMLIncludeTransformer(configuration, builderAssistant);
+  includeParser.applyIncludes(context.getNode());
 
-    Cache otherCache = (Cache) o;
-    return getId().equals(otherCache.getId());
-  }
+  String parameterType = context.getStringAttribute("parameterType");
+  Class<?> parameterTypeClass = resolveClass(parameterType);
 
-  @Override
-  public int hashCode() {
-    if (getId() == null) {
-      throw new CacheException("Cache instances require an ID.");
-    }
-    return getId().hashCode();
-  }
+  String lang = context.getStringAttribute("lang");
+  LanguageDriver langDriver = getLanguageDriver(lang);
 
-}
-

Have a look at BaseExecutor's constructor:

-
protected BaseExecutor(Configuration configuration, Transaction transaction) {
-    this.transaction = transaction;
-    this.deferredLoads = new ConcurrentLinkedQueue<>();
-    this.localCache = new PerpetualCache("LocalCache");
-    this.localOutputParameterCache = new PerpetualCache("LocalOutputParameterCache");
-    this.closed = false;
-    this.configuration = configuration;
-    this.wrapper = this;
-  }
-

It simply installs a PerpetualCache as localCache. As for how it is used: a BaseExecutor query first calls this method:

-
@Override
-  public <E> List<E> query(MappedStatement ms, Object parameter, RowBounds rowBounds, ResultHandler resultHandler) throws SQLException {
-    BoundSql boundSql = ms.getBoundSql(parameter);
-    CacheKey key = createCacheKey(ms, parameter, rowBounds, boundSql);
-    return query(ms, parameter, rowBounds, resultHandler, key, boundSql);
-  }
-

You can see it first calls createCacheKey. Before reading how it is written, consider the design problem: if we implemented such a cache ourselves, the cache key itself would be an issue. Keying by table name plus primary key fails for paginated queries or tables without a primary key. Here is how mybatis designs it:

-
@Override
-  public CacheKey createCacheKey(MappedStatement ms, Object parameterObject, RowBounds rowBounds, BoundSql boundSql) {
-    if (closed) {
-      throw new ExecutorException("Executor was closed.");
-    }
-    CacheKey cacheKey = new CacheKey();
-    cacheKey.update(ms.getId());
-    cacheKey.update(rowBounds.getOffset());
-    cacheKey.update(rowBounds.getLimit());
-    cacheKey.update(boundSql.getSql());
-    List<ParameterMapping> parameterMappings = boundSql.getParameterMappings();
-    TypeHandlerRegistry typeHandlerRegistry = ms.getConfiguration().getTypeHandlerRegistry();
-    // mimic DefaultParameterHandler logic
-    for (ParameterMapping parameterMapping : parameterMappings) {
-      if (parameterMapping.getMode() != ParameterMode.OUT) {
-        Object value;
-        String propertyName = parameterMapping.getProperty();
-        if (boundSql.hasAdditionalParameter(propertyName)) {
-          value = boundSql.getAdditionalParameter(propertyName);
-        } else if (parameterObject == null) {
-          value = null;
-        } else if (typeHandlerRegistry.hasTypeHandler(parameterObject.getClass())) {
-          value = parameterObject;
-        } else {
-          MetaObject metaObject = configuration.newMetaObject(parameterObject);
-          value = metaObject.getValue(propertyName);
-        }
-        cacheKey.update(value);
-      }
-    }
-    if (configuration.getEnvironment() != null) {
-      // issue #176
-      cacheKey.update(configuration.getEnvironment().getId());
-    }
-    return cacheKey;
+  // Parse selectKey after includes and remove them.
+  processSelectKeyNodes(id, parameterTypeClass, langDriver);
+
+  // Parse the SQL (pre: <selectKey> and <include> were parsed and removed)
+  KeyGenerator keyGenerator;
+  String keyStatementId = id + SelectKeyGenerator.SELECT_KEY_SUFFIX;
+  keyStatementId = builderAssistant.applyCurrentNamespace(keyStatementId, true);
+  if (configuration.hasKeyGenerator(keyStatementId)) {
+    keyGenerator = configuration.getKeyGenerator(keyStatementId);
+  } else {
+    keyGenerator = context.getBooleanAttribute("useGeneratedKeys",
+        configuration.isUseGeneratedKeys() && SqlCommandType.INSERT.equals(sqlCommandType))
+        ? Jdbc3KeyGenerator.INSTANCE : NoKeyGenerator.INSTANCE;
   }
-
-

First it takes the id (the id of the mapper method), then the offset and row limit, then the sql, then the parameters; essentially everything that could affect the result goes in, via this update method:

-
public void update(Object object) {
-    int baseHashCode = object == null ? 1 : ArrayUtil.hashCode(object);
 
-    count++;
-    checksum += baseHashCode;
-    baseHashCode *= count;
+  // parse the main attributes of the statement
+  SqlSource sqlSource = langDriver.createSqlSource(configuration, context, parameterTypeClass);
+  StatementType statementType = StatementType.valueOf(context.getStringAttribute("statementType", StatementType.PREPARED.toString()));
+  Integer fetchSize = context.getIntAttribute("fetchSize");
+  Integer timeout = context.getIntAttribute("timeout");
+  String parameterMap = context.getStringAttribute("parameterMap");
+  String resultType = context.getStringAttribute("resultType");
+  Class<?> resultTypeClass = resolveClass(resultType);
+  String resultMap = context.getStringAttribute("resultMap");
+  String resultSetType = context.getStringAttribute("resultSetType");
+  ResultSetType resultSetTypeEnum = resolveResultSetType(resultSetType);
+  if (resultSetTypeEnum == null) {
+    resultSetTypeEnum = configuration.getDefaultResultSetType();
+  }
+  String keyProperty = context.getStringAttribute("keyProperty");
+  String keyColumn = context.getStringAttribute("keyColumn");
+  String resultSets = context.getStringAttribute("resultSets");
 
-    hashcode = multiplier * hashcode + baseHashCode;
+  // --------> add the mapped statement
+  builderAssistant.addMappedStatement(id, sqlSource, statementType, sqlCommandType,
+      fetchSize, timeout, parameterMap, parameterTypeClass, resultMap, resultTypeClass,
+      resultSetTypeEnum, flushCache, useCache, resultOrdered,
+      keyGenerator, keyProperty, keyColumn, databaseId, langDriver, resultSets);
+}
- updateList.add(object); - }
-

It is essentially a hash transformation; no need to dwell on the details, the point is to increase specificity. Back in the caller, query continues:

-
@Override
-  public <E> List<E> query(MappedStatement ms, Object parameter, RowBounds rowBounds, ResultHandler resultHandler, CacheKey key, BoundSql boundSql) throws SQLException {
-    ErrorContext.instance().resource(ms.getResource()).activity("executing a query").object(ms.getId());
-    if (closed) {
-      throw new ExecutorException("Executor was closed.");
-    }
-    if (queryStack == 0 && ms.isFlushCacheRequired()) {
-      clearLocalCache();
-    }
-    List<E> list;
-    try {
-      queryStack++;
-      list = resultHandler == null ? (List<E>) localCache.getObject(key) : null;
-      if (list != null) {
-        handleLocallyCachedOutputParameters(ms, key, parameter, boundSql);
-      } else {
-        list = queryFromDatabase(ms, parameter, rowBounds, resultHandler, key, boundSql);
-      }
-    } finally {
-      queryStack--;
-    }
-    if (queryStack == 0) {
-      for (DeferredLoad deferredLoad : deferredLoads) {
-        deferredLoad.load();
-      }
-      // issue #601
-      deferredLoads.clear();
-      if (configuration.getLocalCacheScope() == LocalCacheScope.STATEMENT) {
-        // issue #482
-        clearLocalCache();
-      }
-    }
-    return list;
-  }
-

You can see it tries localCache first and only falls back to queryFromDatabase on a miss; fairly simple. This is the first-level cache. Given the relationship between sqlsession and BaseExecutor, the cache lives and dies with the sqlsession, which keeps it from serving dirty or phantom reads. Transaction details may get a separate post later.
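A small sketch of what this means in practice (reusing the demo factory and mapper naming from this series; not code from the post):

try (SqlSession session = sqlSessionFactory.openSession()) {
    // same statement id, bounds, sql and parameters => same CacheKey,
    // so the second call is served from localCache without hitting the database
    StudentDO first = session.selectOne("com.nicksxs.mybatisdemo.StudentMapper.selectStudent", 1);
    StudentDO second = session.selectOne("com.nicksxs.mybatisdemo.StudentMapper.selectStudent", 1);
}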

-

Second-level cache

Calling them first and second level is actually a bit backwards, since a query consults the second-level cache first. It must be enabled explicitly (off by default)
via

-
<setting name="cacheEnabled" value="true"/>
-

after which our queries go through

-
public class CachingExecutor implements Executor {
 
-  private final Executor delegate;
-  private final TransactionalCacheManager tcm = new TransactionalCacheManager();
-

this Executor. Listing the class's fields above, note there is no Cache field at all, which is telling. Now the query path:

-
@Override
-  public <E> List<E> query(MappedStatement ms, Object parameterObject, RowBounds rowBounds, ResultHandler resultHandler) throws SQLException {
-    BoundSql boundSql = ms.getBoundSql(parameterObject);
-    CacheKey key = createCacheKey(ms, parameterObject, rowBounds, boundSql);
-    return query(ms, parameterObject, rowBounds, resultHandler, key, boundSql);
+

The add logic in detail:

+
public MappedStatement addMappedStatement(
+    String id,
+    SqlSource sqlSource,
+    StatementType statementType,
+    SqlCommandType sqlCommandType,
+    Integer fetchSize,
+    Integer timeout,
+    String parameterMap,
+    Class<?> parameterType,
+    String resultMap,
+    Class<?> resultType,
+    ResultSetType resultSetType,
+    boolean flushCache,
+    boolean useCache,
+    boolean resultOrdered,
+    KeyGenerator keyGenerator,
+    String keyProperty,
+    String keyColumn,
+    String databaseId,
+    LanguageDriver lang,
+    String resultSets) {
+
+  if (unresolvedCacheRef) {
+    throw new IncompleteElementException("Cache-ref not yet resolved");
   }
 
-  @Override
-  public <E> List<E> query(MappedStatement ms, Object parameterObject, RowBounds rowBounds, ResultHandler resultHandler, CacheKey key, BoundSql boundSql)
-      throws SQLException {
-    Cache cache = ms.getCache();
-    if (cache != null) {
-      flushCacheIfRequired(ms);
-      if (ms.isUseCache() && resultHandler == null) {
-        ensureNoOutParams(ms, boundSql);
-        @SuppressWarnings("unchecked")
-        List<E> list = (List<E>) tcm.getObject(cache, key);
-        if (list == null) {
-          list = delegate.query(ms, parameterObject, rowBounds, resultHandler, key, boundSql);
-          tcm.putObject(cache, key, list); // issue #578 and #116
-        }
-        return list;
-      }
-    }
-    return delegate.query(ms, parameterObject, rowBounds, resultHandler, key, boundSql);
-  }
-

See that? The cache is actually read through the tcm field. And what is that? A transactional cache manager (translating literally): entries are fetched from tcm's map using the Cache held by the MappedStatement as the key:

-
public class TransactionalCacheManager {
+  id = applyCurrentNamespace(id, false);
+  boolean isSelect = sqlCommandType == SqlCommandType.SELECT;
 
-  private final Map<Cache, TransactionalCache> transactionalCaches = new HashMap<>();
-

MappedStatement is used globally, so the second-level cache effectively follows the mapper's namespace and can be reached by multiple CachingExecutors, which raises thread-safety concerns. Thread safety can be addressed by SynchronizedCache, i.e. locking; for dirty reads within a transaction, TransactionalCache is used instead:

-
public class TransactionalCache implements Cache {
+  MappedStatement.Builder statementBuilder = new MappedStatement.Builder(configuration, id, sqlSource, sqlCommandType)
+      .resource(resource)
+      .fetchSize(fetchSize)
+      .timeout(timeout)
+      .statementType(statementType)
+      .keyGenerator(keyGenerator)
+      .keyProperty(keyProperty)
+      .keyColumn(keyColumn)
+      .databaseId(databaseId)
+      .lang(lang)
+      .resultOrdered(resultOrdered)
+      .resultSets(resultSets)
+      .resultMaps(getStatementResultMaps(resultMap, resultType, id))
+      .resultSetType(resultSetType)
+      .flushCacheRequired(valueOrDefault(flushCache, !isSelect))
+      .useCache(valueOrDefault(useCache, isSelect))
+      .cache(currentCache);
 
-  private static final Log log = LogFactory.getLog(TransactionalCache.class);
+  ParameterMap statementParameterMap = getStatementParameterMap(parameterMap, parameterType, id);
+  if (statementParameterMap != null) {
+    statementBuilder.parameterMap(statementParameterMap);
+  }
 
-  private final Cache delegate;
-  private boolean clearOnCommit;
-  private final Map<Object, Object> entriesToAddOnCommit;
-  private final Set<Object> entriesMissedInCache;
-

Before the transaction commits, in-flight data is parked in entriesToAddOnCommit and only moved into the shared cache after the commit:

-
public void commit() {
-    if (clearOnCommit) {
-      delegate.clear();
-    }
-    flushPendingEntries();
-    reset();
-  }
]]> + MappedStatement statement = statementBuilder.build(); + // ------> 正好是这里在configuration中添加了映射好的statement + configuration.addMappedStatement(statement); + return statement; +}
+ +

Inside, it just puts the statement into a map:

+
public void addMappedStatement(MappedStatement ms) {
+  mappedStatements.put(ms.getId(), ms);
+}
+ +

Retrieving the mapper

StudentDO studentDO = session.selectOne("com.nicksxs.mybatisdemo.StudentMapper.selectStudent", 1);
+
+ +

This calls org.apache.ibatis.session.defaults.DefaultSqlSession#selectOne(java.lang.String, java.lang.Object):

+
public <T> T selectOne(String statement, Object parameter) {
+  // Popular vote was to return null on 0 results and throw exception on too many.
+  List<T> list = this.selectList(statement, parameter);
+  if (list.size() == 1) {
+    return list.get(0);
+  } else if (list.size() > 1) {
+    throw new TooManyResultsException("Expected one result (or null) to be returned by selectOne(), but found: " + list.size());
+  } else {
+    return null;
+  }
+}
+ +

which delegates to the actual implementation:

+
public <E> List<E> selectList(String statement, Object parameter) {
+  return this.selectList(statement, parameter, RowBounds.DEFAULT);
+}
+ +

There is one more layer:

+
public <E> List<E> selectList(String statement, Object parameter, RowBounds rowBounds) {
+  return selectList(statement, parameter, rowBounds, Executor.NO_RESULT_HANDLER);
+}
+ + +

Fundamentally it fetches the MappedStatement from the configuration:

+
private <E> List<E> selectList(String statement, Object parameter, RowBounds rowBounds, ResultHandler handler) {
+  try {
+    // the lookup happens here
+    MappedStatement ms = configuration.getMappedStatement(statement);
+    return executor.query(ms, wrapCollection(parameter), rowBounds, handler);
+  } catch (Exception e) {
+    throw ExceptionFactory.wrapException("Error querying database.  Cause: " + e, e);
+  } finally {
+    ErrorContext.instance().reset();
+  }
+}
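Putting the whole flow together, a minimal sketch of the calling side (the config file name is an assumption; the statement id reuses the demo naming above):

try (InputStream in = Resources.getResourceAsStream("mybatis-config.xml")) {
    SqlSessionFactory sqlSessionFactory = new SqlSessionFactoryBuilder().build(in);
    try (SqlSession session = sqlSessionFactory.openSession()) {
        // "namespace.id" is the key into the configuration's mappedStatements map
        StudentDO studentDO = session.selectOne(
            "com.nicksxs.mybatisdemo.StudentMapper.selectStudent", 1);
    }
}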
+]]>
+
+    Java
+    Mybatis
+
+    Java
+    Mysql
+    Mybatis
+

    mybatis系列-dataSource解析
    /2023/01/08/mybatis%E7%B3%BB%E5%88%97-dataSource%E8%A7%A3%E6%9E%90/
    As the most important source of data for database queries, the DataSource deserves a closer look in mybatis as well.
First, the parsing process:

-
SqlSessionFactory sqlSessionFactory = new SqlSessionFactoryBuilder().build(inputStream);
- -

When building the SqlSessionFactory, i.e. the DefaultSqlSessionFactory,

-
public SqlSessionFactory build(InputStream inputStream) {
-    return build(inputStream, null, null);
+    mybatis系列-foreach 解析
+    /2023/06/11/mybatis%E7%B3%BB%E5%88%97-foreach-%E8%A7%A3%E6%9E%90/
+    Configuration parsing happens in org.apache.ibatis.builder.xml.XMLConfigBuilder#parseConfiguration, and this is the line that parses the mappers:

+
mapperElement(root.evalNode("mappers"));
+

The code path ends up here:

+
private void mapperElement(XNode parent) throws Exception {
+  if (parent != null) {
+    for (XNode child : parent.getChildren()) {
+      if ("package".equals(child.getName())) {
+        // what we parse here is not a package
+        String mapperPackage = child.getStringAttribute("name");
+        configuration.addMappers(mapperPackage);
+      } else {
+        // decide among resource, url and mapperClass
+        String resource = child.getStringAttribute("resource");
+        String url = child.getStringAttribute("url");
+        String mapperClass = child.getStringAttribute("class");
+        // when resource is set and the others are empty, read the resource as an input stream
+        if (resource != null && url == null && mapperClass == null) {
+          ErrorContext.instance().resource(resource);
+          try(InputStream inputStream = Resources.getResourceAsStream(resource)) {
+            // create an XMLMapperBuilder to parse the mapper
+            XMLMapperBuilder mapperParser = new XMLMapperBuilder(inputStream, configuration, resource, configuration.getSqlFragments());
+            mapperParser.parse();
+          }
+

Then comes the parse step:

+
public void parse() {
+  if (!configuration.isResourceLoaded(resource)) {
+    // parse the mapper node, i.e. the mapper element in the image below
+    configurationElement(parser.evalNode("/mapper"));
+    configuration.addLoadedResource(resource);
+    bindMapperForNamespace();
   }
-public SqlSessionFactory build(InputStream inputStream, String environment, Properties properties) {
+
+  parsePendingResultMaps();
+  parsePendingCacheRefs();
+  parsePendingStatements();
+}
+

image

+

Continuing down:

+
private void configurationElement(XNode context) {
     try {
-      XMLConfigBuilder parser = new XMLConfigBuilder(inputStream, environment, properties);
-      return build(parser.parse());
-    } catch (Exception e) {
-      throw ExceptionFactory.wrapException("Error building SqlSession.", e);
-    } finally {
-      ErrorContext.instance().reset();
-      try {
-      	if (inputStream != null) {
-      	  inputStream.close();
-      	}
-      } catch (IOException e) {
-        // Intentionally ignore. Prefer previous error.
+      String namespace = context.getStringAttribute("namespace");
+      if (namespace == null || namespace.isEmpty()) {
+        throw new BuilderException("Mapper's namespace cannot be empty");
       }
+      builderAssistant.setCurrentNamespace(namespace);
+      // handle cache and cache-ref
+      cacheRefElement(context.evalNode("cache-ref"));
+      cacheElement(context.evalNode("cache"));
+      parameterMapElement(context.evalNodes("/mapper/parameterMap"));
+      resultMapElements(context.evalNodes("/mapper/resultMap"));
+      sqlElement(context.evalNodes("/mapper/sql"));
+      // ours is a select query, so the logic we care about is in here
+      buildStatementFromContext(context.evalNodes("select|insert|update|delete"));
+    } catch (Exception e) {
+      throw new BuilderException("Error parsing Mapper XML. The XML location is '" + resource + "'. Cause: " + e, e);
     }
-  }
-

As mentioned earlier, this parses mybatis-config.xml into the Configuration

-
public Configuration parse() {
-  if (parsed) {
-    throw new BuilderException("Each XMLConfigBuilder can only be used once.");
-  }
-  parsed = true;
-  parseConfiguration(parser.evalNode("/configuration"));
-  return configuration;
-}
-private void parseConfiguration(XNode root) {
-  try {
-    // issue #117 read properties first
-    propertiesElement(root.evalNode("properties"));
-    Properties settings = settingsAsProperties(root.evalNode("settings"));
-    loadCustomVfs(settings);
-    loadCustomLogImpl(settings);
-    typeAliasesElement(root.evalNode("typeAliases"));
-    pluginElement(root.evalNode("plugins"));
-    objectFactoryElement(root.evalNode("objectFactory"));
-    objectWrapperFactoryElement(root.evalNode("objectWrapperFactory"));
-    reflectorFactoryElement(root.evalNode("reflectorFactory"));
-    settingsElement(settings);
-    // read it after objectFactory and objectWrapperFactory issue #631
-    // -------------> this is where the DataSource is parsed
-    environmentsElement(root.evalNode("environments"));
-    databaseIdProviderElement(root.evalNode("databaseIdProvider"));
-    typeHandlerElement(root.evalNode("typeHandlers"));
-    mapperElement(root.evalNode("mappers"));
-  } catch (Exception e) {
-    throw new BuilderException("Error parsing SQL Mapper Configuration. Cause: " + e, e);
+  }
+

Then comes

+
private void buildStatementFromContext(List<XNode> list) {
+  if (configuration.getDatabaseId() != null) {
+    buildStatementFromContext(list, configuration.getDatabaseId());
   }
-}
-

The environments parsing covers this chunk of the config

-
<environments default="development">
-        <environment id="development">
-            <transactionManager type="JDBC"/>
-            <dataSource type="POOLED">
-                <property name="driver" value="${driver}"/>
-                <property name="url" value="${url}"/>
-                <property name="username" value="${username}"/>
-                <property name="password" value="${password}"/>
-            </dataSource>
-        </environment>
-    </environments>
-

Parsing here also proceeds top-down,

-
private void environmentsElement(XNode context) throws Exception {
-  if (context != null) {
-    if (environment == null) {
-      environment = context.getStringAttribute("default");
-    }
-    for (XNode child : context.getChildren()) {
-      String id = child.getStringAttribute("id");
-      if (isSpecifiedEnvironment(id)) {
-        TransactionFactory txFactory = transactionManagerElement(child.evalNode("transactionManager"));
-        DataSourceFactory dsFactory = dataSourceElement(child.evalNode("dataSource"));
-        DataSource dataSource = dsFactory.getDataSource();
-        Environment.Builder environmentBuilder = new Environment.Builder(id)
-            .transactionFactory(txFactory)
-            .dataSource(dataSource);
-        configuration.setEnvironment(environmentBuilder.build());
-        break;
-      }
+  // and with no databaseId set, we end up here
+  buildStatementFromContext(list, null);
+}
+

Continuing

+
private void buildStatementFromContext(List<XNode> list, String requiredDatabaseId) {
+  for (XNode context : list) {
+    // create the statement parser
+    final XMLStatementBuilder statementParser = new XMLStatementBuilder(configuration, builderAssistant, context, requiredDatabaseId);
+    try {
+      // parse the statement node
+      statementParser.parseStatementNode();
+    } catch (IncompleteElementException e) {
+      configuration.addIncompleteStatement(statementParser);
     }
   }
-}
-

Earlier, the first step was parsing the transactionManager element

-
private TransactionFactory transactionManagerElement(XNode context) throws Exception {
-  if (context != null) {
-    String type = context.getStringAttribute("type");
-    Properties props = context.getChildrenAsProperties();
-    TransactionFactory factory = (TransactionFactory) resolveClass(type).getDeclaredConstructor().newInstance();
-    factory.setProperties(props);
-    return factory;
-  }
-  throw new BuilderException("Environment declaration requires a TransactionFactory.");
-}
-

The resolveClass here goes through the typeAliases system covered in the previous post; a JdbcTransactionFactory is used as the transaction manager.
Next comes the creation of the DataSourceFactory, and with it the DataSource

-
private DataSourceFactory dataSourceElement(XNode context) throws Exception {
-  if (context != null) {
-    String type = context.getStringAttribute("type");
-    Properties props = context.getChildrenAsProperties();
-    DataSourceFactory factory = (DataSourceFactory) resolveClass(type).getDeclaredConstructor().newInstance();
-    factory.setProperties(props);
-    return factory;
-  }
-  throw new BuilderException("Environment declaration requires a DataSourceFactory.");
-}
-

Since the config file specifies POOLED, the factory created is PooledDataSourceFactory.
One thing worth noting, though: in mybatis it actually extends UnpooledDataSourceFactory,
which holds all the base functionality

-
public class PooledDataSourceFactory extends UnpooledDataSourceFactory {
+}
+

The method is fairly long, so it is trimmed down to the relevant parts

+
public void parseStatementNode() {
+    String id = context.getStringAttribute("id");
+    String databaseId = context.getStringAttribute("databaseId");
 
-  public PooledDataSourceFactory() {
-    this.dataSource = new PooledDataSource();
-  }
+    if (!databaseIdMatchesCurrent(id, databaseId, this.requiredDatabaseId)) {
+      return;
+    }
 
-}
-

Only the DataSource creation in the constructor is kept here.
And although PooledDataSource does not extend UnpooledDataSource directly,
its constructor still does this

-
public PooledDataSource() {
-  dataSource = new UnpooledDataSource();
-}
-

As for why it is built this way, the point is presumably code reuse: the essential difference between Pooled and Unpooled is whether a fresh connection is opened every time.
A connection pool saves an application with heavy query traffic from re-creating connections over and over, avoiding the network cost of establishing them; Unpooled simply opens a connection to the database and hands it back.
The main code

-
@Override
-public Connection getConnection() throws SQLException {
-  return doGetConnection(username, password);
-}
+    String nodeName = context.getNode().getNodeName();
+    SqlCommandType sqlCommandType = SqlCommandType.valueOf(nodeName.toUpperCase(Locale.ENGLISH));
+    boolean isSelect = sqlCommandType == SqlCommandType.SELECT;
+    boolean flushCache = context.getBooleanAttribute("flushCache", !isSelect);
+    boolean useCache = context.getBooleanAttribute("useCache", isSelect);
+    boolean resultOrdered = context.getBooleanAttribute("resultOrdered", false);
 
-@Override
-public Connection getConnection(String username, String password) throws SQLException {
-  return doGetConnection(username, password);
-}
-private Connection doGetConnection(String username, String password) throws SQLException {
-  Properties props = new Properties();
-  if (driverProperties != null) {
-    props.putAll(driverProperties);
+
+    // surrounding code elided; the key part here creates the SqlSource
+
+    SqlSource sqlSource = langDriver.createSqlSource(configuration, context, parameterTypeClass);
+    
+
+

Then, per the LanguageDriver (here the XMLLanguageDriver), initialization comes first

+
  @Override
+  public SqlSource createSqlSource(Configuration configuration, XNode script, Class<?> parameterType) {
+    XMLScriptBuilder builder = new XMLScriptBuilder(configuration, script, parameterType);
+    return builder.parseScriptNode();
   }
-  if (username != null) {
-    props.setProperty("user", username);
+// the constructor carries some logic
+  public XMLScriptBuilder(Configuration configuration, XNode context, Class<?> parameterType) {
+    super(configuration);
+    this.context = context;
+    this.parameterType = parameterType;
+    // note this call; a foreach was added to the mapper deliberately, to walk through how this part is parsed
+    initNodeHandlerMap();
   }
-  if (password != null) {
-    props.setProperty("password", password);
+// register a handler for each dynamic tag type
+  private void initNodeHandlerMap() {
+    nodeHandlerMap.put("trim", new TrimHandler());
+    nodeHandlerMap.put("where", new WhereHandler());
+    nodeHandlerMap.put("set", new SetHandler());
+    nodeHandlerMap.put("foreach", new ForEachHandler());
+    nodeHandlerMap.put("if", new IfHandler());
+    nodeHandlerMap.put("choose", new ChooseHandler());
+    nodeHandlerMap.put("when", new IfHandler());
+    nodeHandlerMap.put("otherwise", new OtherwiseHandler());
+    nodeHandlerMap.put("bind", new BindHandler());
+  }
+
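For context, the mapper statement this post keeps referring to looks roughly like this (a sketch reconstructed from the expanded SQL shown further down; the statement id and the ids collection name are assumptions):

<select id="selectStudents" resultType="com.nicksxs.mybatisdemo.StudentDO">
    select * from student where id in
    <foreach collection="ids" item="id" open="(" separator="," close=")">
        #{id}
    </foreach>
</select>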

With the builder initialized, parsing begins

+
public SqlSource parseScriptNode() {
+  // first, parseDynamicTags
+  MixedSqlNode rootSqlNode = parseDynamicTags(context);
+  SqlSource sqlSource;
+  if (isDynamic) {
+    sqlSource = new DynamicSqlSource(configuration, rootSqlNode);
+  } else {
+    sqlSource = new RawSqlSource(configuration, rootSqlNode, parameterType);
   }
-  return doGetConnection(props);
-}
-private Connection doGetConnection(Properties properties) throws SQLException {
-  initializeDriver();
-  Connection connection = DriverManager.getConnection(url, properties);
-  configureConnection(connection);
-  return connection;
-}
-
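Unpooled behavior is easy to observe directly (a minimal sketch; the driver, url and credentials are placeholders, not from this project):

UnpooledDataSource ds = new UnpooledDataSource("com.mysql.cj.jdbc.Driver",
        "jdbc:mysql://localhost:3306/test", "root", "123456");
try (Connection c1 = ds.getConnection(); Connection c2 = ds.getConnection()) {
    // two calls open two physical connections; nothing is cached or reused
    System.out.println(c1 == c2); // false
}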

Pooled, on the other hand, layers the pooling logic on top

-
private PooledConnection popConnection(String username, String password) throws SQLException {
-    boolean countedWait = false;
-    PooledConnection conn = null;
-    long t = System.currentTimeMillis();
-    int localBadConnectionCount = 0;
-
-    while (conn == null) {
-      lock.lock();
-      try {
-        if (!state.idleConnections.isEmpty()) {
-          // Pool has available connection
-          conn = state.idleConnections.remove(0);
-          if (log.isDebugEnabled()) {
-            log.debug("Checked out connection " + conn.getRealHashCode() + " from pool.");
-          }
+  return sqlSource;
+}
+

This method does quite a lot, though

+
protected MixedSqlNode parseDynamicTags(XNode node) {
+    List<SqlNode> contents = new ArrayList<>();
+    // fetch the child nodes; they split my SELECT statement into three parts: the text from select up to in, then the foreach element, and finally the trailing newline
+    NodeList children = node.getNode().getChildNodes();
+    for (int i = 0; i < children.getLength(); i++) {
+      XNode child = node.newXNode(children.item(i));
+      // a pure text node, like the first one, lands here
+      if (child.getNode().getNodeType() == Node.CDATA_SECTION_NODE || child.getNode().getNodeType() == Node.TEXT_NODE) {
+        String data = child.getStringBody("");
+        TextSqlNode textSqlNode = new TextSqlNode(data);
+        if (textSqlNode.isDynamic()) {
+          contents.add(textSqlNode);
+          isDynamic = true;
         } else {
-          // Pool does not have available connection
-          if (state.activeConnections.size() < poolMaximumActiveConnections) {
-            // Can create new connection
-            conn = new PooledConnection(dataSource.getConnection(), this);
-            if (log.isDebugEnabled()) {
-              log.debug("Created connection " + conn.getRealHashCode() + ".");
-            }
-          } else {
-            // Cannot create new connection
-            PooledConnection oldestActiveConnection = state.activeConnections.get(0);
-            long longestCheckoutTime = oldestActiveConnection.getCheckoutTime();
-            if (longestCheckoutTime > poolMaximumCheckoutTime) {
-              // Can claim overdue connection
-              state.claimedOverdueConnectionCount++;
-              state.accumulatedCheckoutTimeOfOverdueConnections += longestCheckoutTime;
-              state.accumulatedCheckoutTime += longestCheckoutTime;
-              state.activeConnections.remove(oldestActiveConnection);
-              if (!oldestActiveConnection.getRealConnection().getAutoCommit()) {
-                try {
-                  oldestActiveConnection.getRealConnection().rollback();
-                } catch (SQLException e) {
-                  /*
-                     Just log a message for debug and continue to execute the following
-                     statement like nothing happened.
-                     Wrap the bad connection with a new PooledConnection, this will help
-                     to not interrupt current executing thread and give current thread a
-                     chance to join the next competition for another valid/good database
-                     connection. At the end of this loop, bad {@link @conn} will be set as null.
-                   */
-                  log.debug("Bad connection. Could not roll back");
-                }
-              }
-              conn = new PooledConnection(oldestActiveConnection.getRealConnection(), this);
-              conn.setCreatedTimestamp(oldestActiveConnection.getCreatedTimestamp());
-              conn.setLastUsedTimestamp(oldestActiveConnection.getLastUsedTimestamp());
-              oldestActiveConnection.invalidate();
-              if (log.isDebugEnabled()) {
-                log.debug("Claimed overdue connection " + conn.getRealHashCode() + ".");
-              }
-            } else {
-              // Must wait
-              try {
-                if (!countedWait) {
-                  state.hadToWaitCount++;
-                  countedWait = true;
-                }
-                if (log.isDebugEnabled()) {
-                  log.debug("Waiting as long as " + poolTimeToWait + " milliseconds for connection.");
-                }
-                long wt = System.currentTimeMillis();
-                condition.await(poolTimeToWait, TimeUnit.MILLISECONDS);
-                state.accumulatedWaitTime += System.currentTimeMillis() - wt;
-              } catch (InterruptedException e) {
-                // set interrupt flag
-                Thread.currentThread().interrupt();
-                break;
-              }
-            }
-          }
+          // add this node to contents
+          contents.add(new StaticTextSqlNode(data));
         }
-        if (conn != null) {
-          // ping to server and check the connection is valid or not
-          if (conn.isValid()) {
-            if (!conn.getRealConnection().getAutoCommit()) {
-              conn.getRealConnection().rollback();
-            }
-            conn.setConnectionTypeCode(assembleConnectionTypeCode(dataSource.getUrl(), username, password));
-            conn.setCheckoutTimestamp(System.currentTimeMillis());
-            conn.setLastUsedTimestamp(System.currentTimeMillis());
-            state.activeConnections.add(conn);
-            state.requestCount++;
-            state.accumulatedRequestTime += System.currentTimeMillis() - t;
-          } else {
-            if (log.isDebugEnabled()) {
-              log.debug("A bad connection (" + conn.getRealHashCode() + ") was returned from the pool, getting another connection.");
-            }
-            state.badConnectionCount++;
-            localBadConnectionCount++;
-            conn = null;
-            if (localBadConnectionCount > (poolMaximumIdleConnections + poolMaximumLocalBadConnectionTolerance)) {
-              if (log.isDebugEnabled()) {
-                log.debug("PooledDataSource: Could not get a good connection to the database.");
-              }
-              throw new SQLException("PooledDataSource: Could not get a good connection to the database.");
-            }
-          }
+      } else if (child.getNode().getNodeType() == Node.ELEMENT_NODE) { // issue #628
+        // the second node carries the foreach; it is a nested element node
+        String nodeName = child.getNode().getNodeName();
+        // look up the handler by nodeName
+        NodeHandler handler = nodeHandlerMap.get(nodeName);
+        if (handler == null) {
+          throw new BuilderException("Unknown element <" + nodeName + "> in SQL statement.");
         }
-      } finally {
-        lock.unlock();
+        // hand the node to the handler
+        handler.handleNode(child, contents);
+        isDynamic = true;
       }
-
     }
-
-    if (conn == null) {
-      if (log.isDebugEnabled()) {
-        log.debug("PooledDataSource: Unknown severe error condition.  The connection pool returned a null connection.");
-      }
-      throw new SQLException("PooledDataSource: Unknown severe error condition.  The connection pool returned a null connection.");
+    // finally return the mixed sql node
+    return new MixedSqlNode(contents);
+  }
+
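The isDynamic() check above is easy to reproduce on its own: a TextSqlNode only reports dynamic when the text contains a ${} token, which is why plain #{} text lands in a StaticTextSqlNode (a minimal sketch, assuming mybatis on the classpath):

import org.apache.ibatis.scripting.xmltags.TextSqlNode;

public class IsDynamicDemo {
    public static void main(String[] args) {
        // #{} placeholders alone do not make a text node dynamic
        System.out.println(new TextSqlNode("select * from student where id = #{id}").isDynamic());   // false
        // ${} substitution does
        System.out.println(new TextSqlNode("select * from student order by ${column}").isDynamic()); // true
    }
}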

Now look at the handleNode logic

+
    @Override
+    public void handleNode(XNode nodeToHandle, List<SqlNode> targetContents) {
+      // recursively runs parseDynamicTags again, one level down
+      MixedSqlNode mixedSqlNode = parseDynamicTags(nodeToHandle);
+      String collection = nodeToHandle.getStringAttribute("collection");
+      Boolean nullable = nodeToHandle.getBooleanAttribute("nullable");
+      String item = nodeToHandle.getStringAttribute("item");
+      String index = nodeToHandle.getStringAttribute("index");
+      String open = nodeToHandle.getStringAttribute("open");
+      String close = nodeToHandle.getStringAttribute("close");
+      String separator = nodeToHandle.getStringAttribute("separator");
+      ForEachSqlNode forEachSqlNode = new ForEachSqlNode(configuration, mixedSqlNode, collection, nullable, index, item, open, close, separator);
+      targetContents.add(forEachSqlNode);
     }
+// this time the logic takes a different path
+protected MixedSqlNode parseDynamicTags(XNode node) {
+    List<SqlNode> contents = new ArrayList<>();
+    // these are the children inside the foreach, so a text node
+    NodeList children = node.getNode().getChildNodes();
+    for (int i = 0; i < children.getLength(); i++) {
+      XNode child = node.newXNode(children.item(i));
+      // a pure text node, like the first one, lands here
+      if (child.getNode().getNodeType() == Node.CDATA_SECTION_NODE || child.getNode().getNodeType() == Node.TEXT_NODE) {
+        String data = child.getStringBody("");
+        TextSqlNode textSqlNode = new TextSqlNode(data);
+        // whether it counts as dynamic depends on whether the text contains ${}
+        if (textSqlNode.isDynamic()) {
+          contents.add(textSqlNode);
+          isDynamic = true;
+        } else {
+          // so it still ends up here
+          // add this node to contents
+          contents.add(new StaticTextSqlNode(data));
+        }
+// finally it is wrapped into a MixedSqlNode again
+// and back to here
+    @Override
+    public void handleNode(XNode nodeToHandle, List<SqlNode> targetContents) {
+      MixedSqlNode mixedSqlNode = parseDynamicTags(nodeToHandle);
+      // pull out each of the foreach attributes
+      String collection = nodeToHandle.getStringAttribute("collection");
+      Boolean nullable = nodeToHandle.getBooleanAttribute("nullable");
+      String item = nodeToHandle.getStringAttribute("item");
+      String index = nodeToHandle.getStringAttribute("index");
+      String open = nodeToHandle.getStringAttribute("open");
+      String close = nodeToHandle.getStringAttribute("close");
+      String separator = nodeToHandle.getStringAttribute("separator");
+      ForEachSqlNode forEachSqlNode = new ForEachSqlNode(configuration, mixedSqlNode, collection, nullable, index, item, open, close, separator);
+      targetContents.add(forEachSqlNode);
+    }
+

Coming back around

+
public SqlSource parseScriptNode() {
+  MixedSqlNode rootSqlNode = parseDynamicTags(context);
+  SqlSource sqlSource;
+  // since handling the foreach node set isDynamic straight to true
+  if (isDynamic) {
+    // the result is a DynamicSqlSource
+    sqlSource = new DynamicSqlSource(configuration, rootSqlNode);
+  } else {
+    sqlSource = new RawSqlSource(configuration, rootSqlNode, parameterType);
+  }
+  return sqlSource;
+}
+

That completes the pre-processing; at actual execution time, further resolution is still required

+

Much of the earlier path has been covered already, so we jump straight to here

+
  @Override
+  public <T> T selectOne(String statement, Object parameter) {
+    // Popular vote was to return null on 0 results and throw exception on too many.
+    // as we already know, this is where it goes in
+    List<T> list = this.selectList(statement, parameter);
+    if (list.size() == 1) {
+      return list.get(0);
+    } else if (list.size() > 1) {
+      throw new TooManyResultsException("Expected one result (or null) to be returned by selectOne(), but found: " + list.size());
+    } else {
+      return null;
+    }
+  }
 
-    return conn;
-  }
-

Its entry point is not a get method but pop, which already signals different semantics
org.apache.ibatis.datasource.pooled.PooledDataSource#getConnection()

-
@Override
-public Connection getConnection() throws SQLException {
-  return popConnection(dataSource.getUsername(), dataSource.getPassword()).getProxyConnection();
-}
-

How the connection is actually obtained can be covered in detail in the next post

-]]>
- Java
- Mybatis
-
- Java
- Mysql
- Mybatis
-
- mybatis series - how mybatis initializes its mappers
- /2022/12/04/mybatis%E6%98%AF%E5%A6%82%E4%BD%95%E5%88%9D%E5%A7%8B%E5%8C%96mapper%E7%9A%84/
- The previous post covered bootstrapping mybatis. Reading that getting-started guide for the first time, the puzzling part is that after merely configuring a mapper, selectOne plus a statement id is enough to run the sql. So the first question: how is the mapper parsed, where is it stored, and how is it fetched back out

-

Adding mapper parsing

org.apache.ibatis.session.SqlSessionFactoryBuilder#build(java.io.InputStream)
-public SqlSessionFactory build(InputStream inputStream) {
-  return build(inputStream, null, null);
-}
- -

The SqlSessionFactory is built by reading mybatis-config.xml,

-
public SqlSessionFactory build(InputStream inputStream, String environment, Properties properties) {
-  try {
-    // create the xml parser
-    XMLConfigBuilder parser = new XMLConfigBuilder(inputStream, environment, properties);
-    // parse first, then build
-    return build(parser.parse());
-  } catch (Exception e) {
-    throw ExceptionFactory.wrapException("Error building SqlSession.", e);
-  } finally {
-    ErrorContext.instance().reset();
+  @Override
+  public <E> List<E> selectList(String statement, Object parameter) {
+    return this.selectList(statement, parameter, RowBounds.DEFAULT);
+  }
+  @Override
+  public <E> List<E> selectList(String statement, Object parameter, RowBounds rowBounds) {
+    return selectList(statement, parameter, rowBounds, Executor.NO_RESULT_HANDLER);
+  }
+  private <E> List<E> selectList(String statement, Object parameter, RowBounds rowBounds, ResultHandler handler) {
     try {
-       if (inputStream != null) {
-         inputStream.close();
-       }
-    } catch (IOException e) {
-      // Intentionally ignore. Prefer previous error.
+      // this was covered earlier as well,
+      MappedStatement ms = configuration.getMappedStatement(statement);
+      return executor.query(ms, wrapCollection(parameter), rowBounds, handler);
+    } catch (Exception e) {
+      throw ExceptionFactory.wrapException("Error querying database.  Cause: " + e, e);
+    } finally {
+      ErrorContext.instance().reset();
     }
-  }
- -

Creating the XMLConfigBuilder

-
public XMLConfigBuilder(InputStream inputStream, String environment, Properties props) {
-    // --------> create the XPathParser
-  this(new XPathParser(inputStream, true, props, new XMLMapperEntityResolver()), environment, props);
-}
-
-public XPathParser(InputStream inputStream, boolean validation, Properties variables, EntityResolver entityResolver) {
-    commonConstructor(validation, variables, entityResolver);
-    this.document = createDocument(new InputSource(inputStream));
   }
-
-private XMLConfigBuilder(XPathParser parser, String environment, Properties props) {
-  super(new Configuration());
-  ErrorContext.instance().resource("SQL Mapper Configuration");
-  this.configuration.setVariables(props);
-  this.parsed = false;
-  this.environment = environment;
-  this.parser = parser;
-}
- -

The main thing here is that the Builder is created wrapping the Parser,
and then the parse method is called

-
public Configuration parse() {
-  if (parsed) {
-    throw new BuilderException("Each XMLConfigBuilder can only be used once.");
+  // including here, where the call goes to org.apache.ibatis.executor.CachingExecutor#query(org.apache.ibatis.mapping.MappedStatement, java.lang.Object, org.apache.ibatis.session.RowBounds, org.apache.ibatis.session.ResultHandler)
+  @Override
+  public <E> List<E> query(MappedStatement ms, Object parameterObject, RowBounds rowBounds, ResultHandler resultHandler) throws SQLException {
+    BoundSql boundSql = ms.getBoundSql(parameterObject);
+    CacheKey key = createCacheKey(ms, parameterObject, rowBounds, boundSql);
+    return query(ms, parameterObject, rowBounds, resultHandler, key, boundSql);
   }
-  // mark it as parsed; though one may wonder about thread safety here
-  parsed = true;
-  // --------> parse the configuration
-  parseConfiguration(parser.evalNode("/configuration"));
-  return configuration;
-}
+// then the BoundSql is obtained
+  public BoundSql getBoundSql(Object parameterObject) {
+    BoundSql boundSql = sqlSource.getBoundSql(parameterObject);
+    List<ParameterMapping> parameterMappings = boundSql.getParameterMappings();
+    if (parameterMappings == null || parameterMappings.isEmpty()) {
+      boundSql = new BoundSql(configuration, boundSql.getSql(), parameterMap.getParameterMappings(), parameterObject);
+    }
-

The actual parsing branches over the various tags

-
private void parseConfiguration(XNode root) {
-  try {
-    // issue #117 read properties first
-    // parse properties; this is not provided out of the box, it needs extra configuration, and it should come first in the config file
-    propertiesElement(root.evalNode("properties"));
-    Properties settings = settingsAsProperties(root.evalNode("settings"));
-    loadCustomVfs(settings);
-    loadCustomLogImpl(settings);
-    typeAliasesElement(root.evalNode("typeAliases"));
-    pluginElement(root.evalNode("plugins"));
-    objectFactoryElement(root.evalNode("objectFactory"));
-    objectWrapperFactoryElement(root.evalNode("objectWrapperFactory"));
-    reflectorFactoryElement(root.evalNode("reflectorFactory"));
-    settingsElement(settings);
-    // read it after objectFactory and objectWrapperFactory issue #631
-    environmentsElement(root.evalNode("environments"));
-    databaseIdProviderElement(root.evalNode("databaseIdProvider"));
-    typeHandlerElement(root.evalNode("typeHandlers"));
-    // ----------> what we care about is the mapper handling
-    mapperElement(root.evalNode("mappers"));
-  } catch (Exception e) {
-    throw new BuilderException("Error parsing SQL Mapper Configuration. Cause: " + e, e);
-  }
-}
+    // check for nested result maps in parameter mappings (issue #30)
+    for (ParameterMapping pm : boundSql.getParameterMappings()) {
+      String rmId = pm.getResultMapId();
+      if (rmId != null) {
+        ResultMap rm = configuration.getResultMap(rmId);
+        if (rm != null) {
+          hasNestedResultMaps |= rm.hasNestedResultMaps();
+        }
+      }
+    }
-

Then the mapperElement method is invoked

-
private void mapperElement(XNode parent) throws Exception {
-  if (parent != null) {
-    for (XNode child : parent.getChildren()) {
-      if ("package".equals(child.getName())) {
-        String mapperPackage = child.getStringAttribute("name");
-        configuration.addMappers(mapperPackage);
+    return boundSql;
+  }
+// since, as explained above, a DynamicSqlSource was generated, it is this class's getBoundSql that gets called
+  @Override
+  public BoundSql getBoundSql(Object parameterObject) {
+    DynamicContext context = new DynamicContext(configuration, parameterObject);
+    // the key line to focus on
+    rootSqlNode.apply(context);
+    SqlSourceBuilder sqlSourceParser = new SqlSourceBuilder(configuration);
+    Class<?> parameterType = parameterObject == null ? Object.class : parameterObject.getClass();
+    SqlSource sqlSource = sqlSourceParser.parse(context.getSql(), parameterType, context.getBindings());
+    BoundSql boundSql = sqlSource.getBoundSql(parameterObject);
+    context.getBindings().forEach(boundSql::setAdditionalParameter);
+    return boundSql;
+  }
+// next comes apply on the MixedSqlNode built earlier
+  public boolean apply(DynamicContext context) {
+    contents.forEach(node -> node.apply(context));
+    return true;
+  }
+// see the image below
+

image

+

We focus on the foreach logic

+
@Override
+  public boolean apply(DynamicContext context) {
+    Map<String, Object> bindings = context.getBindings();
+    final Iterable<?> iterable = evaluator.evaluateIterable(collectionExpression, bindings,
+      Optional.ofNullable(nullable).orElseGet(configuration::isNullableOnForEach));
+    if (iterable == null || !iterable.iterator().hasNext()) {
+      return true;
+    }
+    boolean first = true;
+    // the opening token
+    applyOpen(context);
+    int i = 0;
+    for (Object o : iterable) {
+      DynamicContext oldContext = context;
+      if (first || separator == null) {
+        context = new PrefixedContext(context, "");
       } else {
-        String resource = child.getStringAttribute("resource");
-        String url = child.getStringAttribute("url");
-        String mapperClass = child.getStringAttribute("class");
-        if (resource != null && url == null && mapperClass == null) {
-          ErrorContext.instance().resource(resource);
-          try(InputStream inputStream = Resources.getResourceAsStream(resource)) {
-            XMLMapperBuilder mapperParser = new XMLMapperBuilder(inputStream, configuration, resource, configuration.getSqlFragments());
-            // --------> 我们这没有指定package,所以是走到这
-            mapperParser.parse();
-          }
-        } else if (resource == null && url != null && mapperClass == null) {
-          ErrorContext.instance().resource(url);
-          try(InputStream inputStream = Resources.getUrlAsStream(url)){
-            XMLMapperBuilder mapperParser = new XMLMapperBuilder(inputStream, configuration, url, configuration.getSqlFragments());
-            mapperParser.parse();
-          }
-        } else if (resource == null && url == null && mapperClass != null) {
-          Class<?> mapperInterface = Resources.classForName(mapperClass);
-          configuration.addMapper(mapperInterface);
-        } else {
-          throw new BuilderException("A mapper element may only specify a url, resource or class, but not more than one.");
-        }
+        context = new PrefixedContext(context, separator);
+      }
+      int uniqueNumber = context.getUniqueNumber();
+      // Issue #709
+      if (o instanceof Map.Entry) {
+        @SuppressWarnings("unchecked")
+        Map.Entry<Object, Object> mapEntry = (Map.Entry<Object, Object>) o;
+        applyIndex(context, mapEntry.getKey(), uniqueNumber);
+        applyItem(context, mapEntry.getValue(), uniqueNumber);
+      } else {
+        applyIndex(context, i, uniqueNumber);
+        applyItem(context, o, uniqueNumber);
+      }
+      // rewrite the variable names, producing this form: select * from student where id in
+      //   (  
+      //  #{__frch_id_0}
+      //   )
+      contents.apply(new FilteredDynamicContext(configuration, context, index, item, uniqueNumber));
+      if (first) {
+        first = !((PrefixedContext) context).isPrefixApplied();
       }
+      context = oldContext;
+      i++;
     }
+    applyClose(context);
+    context.getBindings().remove(item);
+    context.getBindings().remove(index);
+    return true;
   }
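Putting it together: for a two-element list the foreach expands the statement to select * from student where id in ( #{__frch_id_0} , #{__frch_id_1} ). An end-to-end sketch of triggering it (the selectStudents id and the ids parameter name are the same assumptions as in the mapper sketch earlier):

try (SqlSession session = sqlSessionFactory.openSession()) {
    Map<String, Object> param = new HashMap<>();
    param.put("ids", Arrays.asList(1L, 2L)); // iterated by ForEachSqlNode#apply
    List<StudentDO> students = session.selectList("com.nicksxs.mybatisdemo.StudentMapper.selectStudents", param);
}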
-}
- -

The core is this parse() method

-
public void parse() {
-  if (!configuration.isResourceLoaded(resource)) {
-    // -------> execution then reaches here, handling the mapper node of the config xml
-    configurationElement(parser.evalNode("/mapper"));
-    configuration.addLoadedResource(resource);
-    bindMapperForNamespace();
-  }
-
-  parsePendingResultMaps();
-  parsePendingCacheRefs();
-  parsePendingStatements();
-}
- -

The concrete handling logic

-
private void configurationElement(XNode context) {
-  try {
-    String namespace = context.getStringAttribute("namespace");
-    if (namespace == null || namespace.isEmpty()) {
-      throw new BuilderException("Mapper's namespace cannot be empty");
+// back in the outer layer, the parse method is called to replace each #{} segment with ?
+public SqlSource parse(String originalSql, Class<?> parameterType, Map<String, Object> additionalParameters) {
+    ParameterMappingTokenHandler handler = new ParameterMappingTokenHandler(configuration, parameterType, additionalParameters);
+    GenericTokenParser parser = new GenericTokenParser("#{", "}", handler);
+    String sql;
+    if (configuration.isShrinkWhitespacesInSql()) {
+      sql = parser.parse(removeExtraWhitespaces(originalSql));
+    } else {
+      sql = parser.parse(originalSql);
     }
-    builderAssistant.setCurrentNamespace(namespace);
-    cacheRefElement(context.evalNode("cache-ref"));
-    cacheElement(context.evalNode("cache"));
-    parameterMapElement(context.evalNodes("/mapper/parameterMap"));
-    resultMapElements(context.evalNodes("/mapper/resultMap"));
-    sqlElement(context.evalNodes("/mapper/sql"));
-    // ------->  reaching here, statements are built from the context
-    buildStatementFromContext(context.evalNodes("select|insert|update|delete"));
-  } catch (Exception e) {
-    throw new BuilderException("Error parsing Mapper XML. The XML location is '" + resource + "'. Cause: " + e, e);
+    return new StaticSqlSource(configuration, sql, handler.getParameterMappings());
+  }
+
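GenericTokenParser is usable on its own, so the "#{} to ?" step can be reproduced directly (a minimal sketch, assuming mybatis on the classpath; TokenHandler has a single handleToken method, so a lambda works):

import org.apache.ibatis.parsing.GenericTokenParser;

public class TokenParserDemo {
    public static void main(String[] args) {
        GenericTokenParser parser = new GenericTokenParser("#{", "}", content -> "?");
        // prints: select * from student where id in ( ? , ? )
        System.out.println(parser.parse("select * from student where id in ( #{__frch_id_0} , #{__frch_id_1} )"));
    }
}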

image

+

Here you can see it, before the substitution is applied

+

image

+

The actual binding of concrete values onto the ? placeholders happens here:
org.apache.ibatis.executor.SimpleExecutor#doQuery
which calls

+
private Statement prepareStatement(StatementHandler handler, Log statementLog) throws SQLException {
+    Statement stmt;
+    Connection connection = getConnection(statementLog);
+    stmt = handler.prepare(connection, transaction.getTimeout());
+    handler.parameterize(stmt);
+    return stmt;
   }
-}
- -

The code is here: statements are built from the context, merely distinguished by databaseId

-
private void buildStatementFromContext(List<XNode> list) {
-  if (configuration.getDatabaseId() != null) {
-    buildStatementFromContext(list, configuration.getDatabaseId());
+  @Override
+  public void parameterize(Statement statement) throws SQLException {
+    parameterHandler.setParameters((PreparedStatement) statement);
   }
-  // -----> check the databaseId
-  buildStatementFromContext(list, null);
-}
- -

Checking the databaseId

-
private void buildStatementFromContext(List<XNode> list, String requiredDatabaseId) {
-  for (XNode context : list) {
-    final XMLStatementBuilder statementParser = new XMLStatementBuilder(configuration, builderAssistant, context, requiredDatabaseId);
-    try {
-      // -------> parse the statement node
-      statementParser.parseStatementNode();
-    } catch (IncompleteElementException e) {
-      configuration.addIncompleteStatement(statementParser);
+    @Override
+  public void setParameters(PreparedStatement ps) {
+    ErrorContext.instance().activity("setting parameters").object(mappedStatement.getParameterMap().getId());
+    List<ParameterMapping> parameterMappings = boundSql.getParameterMappings();
+    if (parameterMappings != null) {
+      for (int i = 0; i < parameterMappings.size(); i++) {
+        ParameterMapping parameterMapping = parameterMappings.get(i);
+        if (parameterMapping.getMode() != ParameterMode.OUT) {
+          Object value;
+          String propertyName = parameterMapping.getProperty();
+          if (boundSql.hasAdditionalParameter(propertyName)) { // issue #448 ask first for additional params
+            value = boundSql.getAdditionalParameter(propertyName);
+          } else if (parameterObject == null) {
+            value = null;
+          } else if (typeHandlerRegistry.hasTypeHandler(parameterObject.getClass())) {
+            value = parameterObject;
+          } else {
+            MetaObject metaObject = configuration.newMetaObject(parameterObject);
+            value = metaObject.getValue(propertyName);
+          }
+          TypeHandler typeHandler = parameterMapping.getTypeHandler();
+          JdbcType jdbcType = parameterMapping.getJdbcType();
+          if (value == null && jdbcType == null) {
+            jdbcType = configuration.getJdbcTypeForNull();
+          }
+          try {
+            // --------------------------> 
+            // bind the variable's value
+            typeHandler.setParameter(ps, i + 1, value, jdbcType);
+          } catch (TypeException | SQLException e) {
+            throw new TypeException("Could not set parameters for mapping: " + parameterMapping + ". Cause: " + e, e);
+          }
+        }
+      }
     }
-  }
-}
- -

Next comes the real processing of the xml statement content, node by node

-
public void parseStatementNode() {
-  String id = context.getStringAttribute("id");
-  String databaseId = context.getStringAttribute("databaseId");
-
-  if (!databaseIdMatchesCurrent(id, databaseId, this.requiredDatabaseId)) {
-    return;
-  }
-
-  String nodeName = context.getNode().getNodeName();
-  SqlCommandType sqlCommandType = SqlCommandType.valueOf(nodeName.toUpperCase(Locale.ENGLISH));
-  boolean isSelect = sqlCommandType == SqlCommandType.SELECT;
-  boolean flushCache = context.getBooleanAttribute("flushCache", !isSelect);
-  boolean useCache = context.getBooleanAttribute("useCache", isSelect);
-  boolean resultOrdered = context.getBooleanAttribute("resultOrdered", false);
-
-  // Include Fragments before parsing
-  XMLIncludeTransformer includeParser = new XMLIncludeTransformer(configuration, builderAssistant);
-  includeParser.applyIncludes(context.getNode());
-
-  String parameterType = context.getStringAttribute("parameterType");
-  Class<?> parameterTypeClass = resolveClass(parameterType);
-
-  String lang = context.getStringAttribute("lang");
-  LanguageDriver langDriver = getLanguageDriver(lang);
-
-  // Parse selectKey after includes and remove them.
-  processSelectKeyNodes(id, parameterTypeClass, langDriver);
+  }
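For our query, the loop above boils down to ordinary JDBC parameter binding: typeHandler.setParameter ends up calling the matching PreparedStatement setter (a sketch of the equivalent, with placeholder values):

PreparedStatement ps = connection.prepareStatement("select * from student where id in ( ? , ? )");
ps.setLong(1, 1L); // roughly what a LongTypeHandler does for the first value
ps.setLong(2, 2L);
ResultSet rs = ps.executeQuery();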
+]]>
+
+  Java
+  Mybatis
+
+  Java
+  Mysql
+  Mybatis
+
+
+  mybatis series - dissecting the connection pool
+  /2023/02/19/mybatis%E7%B3%BB%E5%88%97-connection%E8%BF%9E%E6%8E%A5%E6%B1%A0%E8%A7%A3%E6%9E%90/
+  The pool comes down to two pieces of logic; the first is how a connection is obtained, walked through here alongside the code

+
private PooledConnection popConnection(String username, String password) throws SQLException {
+    boolean countedWait = false;
+    PooledConnection conn = null;
+    long t = System.currentTimeMillis();
+    int localBadConnectionCount = 0;
 
-  // Parse the SQL (pre: <selectKey> and <include> were parsed and removed)
-  KeyGenerator keyGenerator;
-  String keyStatementId = id + SelectKeyGenerator.SELECT_KEY_SUFFIX;
-  keyStatementId = builderAssistant.applyCurrentNamespace(keyStatementId, true);
-  if (configuration.hasKeyGenerator(keyStatementId)) {
-    keyGenerator = configuration.getKeyGenerator(keyStatementId);
-  } else {
-    keyGenerator = context.getBooleanAttribute("useGeneratedKeys",
-        configuration.isUseGeneratedKeys() && SqlCommandType.INSERT.equals(sqlCommandType))
-        ? Jdbc3KeyGenerator.INSTANCE : NoKeyGenerator.INSTANCE;
-  }
-
-  // parse the statement's main attributes
-  SqlSource sqlSource = langDriver.createSqlSource(configuration, context, parameterTypeClass);
-  StatementType statementType = StatementType.valueOf(context.getStringAttribute("statementType", StatementType.PREPARED.toString()));
-  Integer fetchSize = context.getIntAttribute("fetchSize");
-  Integer timeout = context.getIntAttribute("timeout");
-  String parameterMap = context.getStringAttribute("parameterMap");
-  String resultType = context.getStringAttribute("resultType");
-  Class<?> resultTypeClass = resolveClass(resultType);
-  String resultMap = context.getStringAttribute("resultMap");
-  String resultSetType = context.getStringAttribute("resultSetType");
-  ResultSetType resultSetTypeEnum = resolveResultSetType(resultSetType);
-  if (resultSetTypeEnum == null) {
-    resultSetTypeEnum = configuration.getDefaultResultSetType();
-  }
-  String keyProperty = context.getStringAttribute("keyProperty");
-  String keyColumn = context.getStringAttribute("keyColumn");
-  String resultSets = context.getStringAttribute("resultSets");
-
-  // --------> add the mapped statement
-  builderAssistant.addMappedStatement(id, sqlSource, statementType, sqlCommandType,
-      fetchSize, timeout, parameterMap, parameterTypeClass, resultMap, resultTypeClass,
-      resultSetTypeEnum, flushCache, useCache, resultOrdered,
-      keyGenerator, keyProperty, keyColumn, databaseId, langDriver, resultSets);
-}
- - -

The adding logic is worth a closer look

-
public MappedStatement addMappedStatement(
-    String id,
-    SqlSource sqlSource,
-    StatementType statementType,
-    SqlCommandType sqlCommandType,
-    Integer fetchSize,
-    Integer timeout,
-    String parameterMap,
-    Class<?> parameterType,
-    String resultMap,
-    Class<?> resultType,
-    ResultSetType resultSetType,
-    boolean flushCache,
-    boolean useCache,
-    boolean resultOrdered,
-    KeyGenerator keyGenerator,
-    String keyProperty,
-    String keyColumn,
-    String databaseId,
-    LanguageDriver lang,
-    String resultSets) {
-
-  if (unresolvedCacheRef) {
-    throw new IncompleteElementException("Cache-ref not yet resolved");
-  }
-
-  id = applyCurrentNamespace(id, false);
-  boolean isSelect = sqlCommandType == SqlCommandType.SELECT;
-
-  MappedStatement.Builder statementBuilder = new MappedStatement.Builder(configuration, id, sqlSource, sqlCommandType)
-      .resource(resource)
-      .fetchSize(fetchSize)
-      .timeout(timeout)
-      .statementType(statementType)
-      .keyGenerator(keyGenerator)
-      .keyProperty(keyProperty)
-      .keyColumn(keyColumn)
-      .databaseId(databaseId)
-      .lang(lang)
-      .resultOrdered(resultOrdered)
-      .resultSets(resultSets)
-      .resultMaps(getStatementResultMaps(resultMap, resultType, id))
-      .resultSetType(resultSetType)
-      .flushCacheRequired(valueOrDefault(flushCache, !isSelect))
-      .useCache(valueOrDefault(useCache, isSelect))
-      .cache(currentCache);
-
-  ParameterMap statementParameterMap = getStatementParameterMap(parameterMap, parameterType, id);
-  if (statementParameterMap != null) {
-    statementBuilder.parameterMap(statementParameterMap);
-  }
-
-  MappedStatement statement = statementBuilder.build();
-  // ------>  right here the mapped statement is added to the configuration
-  configuration.addMappedStatement(statement);
-  return statement;
-}
- -

Inside, it simply puts the statement into a map

-
public void addMappedStatement(MappedStatement ms) {
-  mappedStatements.put(ms.getId(), ms);
-}
- -

Fetching the mapper

StudentDO studentDO = session.selectOne("com.nicksxs.mybatisdemo.StudentMapper.selectStudent", 1);
+    // loop until a connection is obtained
+    while (conn == null) {
+      // take the lock
+      lock.lock();
+      try {
+        // if the idle list is not empty
+        if (!state.idleConnections.isEmpty()) {
+          // Pool has available connection
+          // the pool has an idle connection to hand out
+          conn = state.idleConnections.remove(0);
+          if (log.isDebugEnabled()) {
+            log.debug("Checked out connection " + conn.getRealHashCode() + " from pool.");
+          }
+        } else {
+          // Pool does not have available connection
+          // this branch means no idle connection, but the active count is still below the maximum, so a new one can be created
+          if (state.activeConnections.size() < poolMaximumActiveConnections) {
+            // Can create new connection
+            // creating the connection was covered in an earlier post
+            conn = new PooledConnection(dataSource.getConnection(), this);
+            if (log.isDebugEnabled()) {
+              log.debug("Created connection " + conn.getRealHashCode() + ".");
+            }
+          } else {
+            // Cannot create new connection
+            // at this point no new connection can be created; poolMaximumCheckoutTime caps how long a connection may be used at a time, and once exceeded the connection gets invalidated and reclaimed
+            PooledConnection oldestActiveConnection = state.activeConnections.get(0);
+            long longestCheckoutTime = oldestActiveConnection.getCheckoutTime();
+            if (longestCheckoutTime > poolMaximumCheckoutTime) {
+              // Can claim overdue connection
+              // +1 on the count of overdue connections claimed from the pool
+              state.claimedOverdueConnectionCount++;
+              // accumulated checkout time of overdue connections + this connection's checkout time
+              state.accumulatedCheckoutTimeOfOverdueConnections += longestCheckoutTime;
+              // accumulated checkout time of all connections + this connection's checkout time
+              state.accumulatedCheckoutTime += longestCheckoutTime;
+              // remove this connection from the active list
+              state.activeConnections.remove(oldestActiveConnection);
+              // if the connection is not auto-commit, try to roll it back
+              if (!oldestActiveConnection.getRealConnection().getAutoCommit()) {
+                try {
+                  oldestActiveConnection.getRealConnection().rollback();
+                } catch (SQLException e) {
+                  /*
+                     Just log a message for debug and continue to execute the following
+                     statement like nothing happened.
+                     Wrap the bad connection with a new PooledConnection, this will help
+                     to not interrupt current executing thread and give current thread a
+                     chance to join the next competition for another valid/good database
+                     connection. At the end of this loop, bad {@link @conn} will be set as null.
+                   */
+                  log.debug("Bad connection. Could not roll back");
+                }
+              }
+              // wrap the real connection of the overdue one in a new PooledConnection and carry its timestamps over
+              conn = new PooledConnection(oldestActiveConnection.getRealConnection(), this);
+              conn.setCreatedTimestamp(oldestActiveConnection.getCreatedTimestamp());
+              conn.setLastUsedTimestamp(oldestActiveConnection.getLastUsedTimestamp());
+              oldestActiveConnection.invalidate();
+              if (log.isDebugEnabled()) {
+                log.debug("Claimed overdue connection " + conn.getRealHashCode() + ".");
+              }
+            } else {
+              // Must wait
+              // still no connection, so the only option is to wait
+              try {
+                // mark the state, then bump the had-to-wait counter
+                if (!countedWait) {
+                  state.hadToWaitCount++;
+                  countedWait = true;
+                }
+                if (log.isDebugEnabled()) {
+                  log.debug("Waiting as long as " + poolTimeToWait + " milliseconds for connection.");
+                }
+                long wt = System.currentTimeMillis();
+                // wait up to poolTimeToWait milliseconds
+                condition.await(poolTimeToWait, TimeUnit.MILLISECONDS);
+                // record the wait time
+                state.accumulatedWaitTime += System.currentTimeMillis() - wt;
+              } catch (InterruptedException e) {
+                // set interrupt flag
+                Thread.currentThread().interrupt();
+                break;
+              }
+            }
+          }
+        }
+        // if we got a connection
+        if (conn != null) {
+          // ping to server and check the connection is valid or not
+          if (conn.isValid()) {
+            if (!conn.getRealConnection().getAutoCommit()) {
+              // roll back anything uncommitted
+              conn.getRealConnection().rollback();
+            }
+            conn.setConnectionTypeCode(assembleConnectionTypeCode(dataSource.getUrl(), username, password));
+            // stamp the checkout and last-used times
+            conn.setCheckoutTimestamp(System.currentTimeMillis());
+            conn.setLastUsedTimestamp(System.currentTimeMillis());
+            // add it to the active list
+            state.activeConnections.add(conn);
+            state.requestCount++;
+            state.accumulatedRequestTime += System.currentTimeMillis() - t;
+          } else {
+            if (log.isDebugEnabled()) {
+              log.debug("A bad connection (" + conn.getRealHashCode() + ") was returned from the pool, getting another connection.");
+            }
+            // the connection is invalid, bump the bad-connection counters
+            state.badConnectionCount++;
+            localBadConnectionCount++;
+            conn = null;
+            // if bad connections exceed the tolerance, throw
+            if (localBadConnectionCount > (poolMaximumIdleConnections + poolMaximumLocalBadConnectionTolerance)) {
+              if (log.isDebugEnabled()) {
+                log.debug("PooledDataSource: Could not get a good connection to the database.");
+              }
+              throw new SQLException("PooledDataSource: Could not get a good connection to the database.");
+            }
+          }
+        }
+      } finally {
+        // release the lock
+        lock.unlock();
+      }
+    }

This calls org.apache.ibatis.session.defaults.DefaultSqlSession#selectOne(java.lang.String, java.lang.Object)

-
public <T> T selectOne(String statement, Object parameter) {
-  // Popular vote was to return null on 0 results and throw exception on too many.
-  List<T> list = this.selectList(statement, parameter);
-  if (list.size() == 1) {
-    return list.get(0);
-  } else if (list.size() > 1) {
-    throw new TooManyResultsException("Expected one result (or null) to be returned by selectOne(), but found: " + list.size());
-  } else {
-    return null;
-  }
-}
+ } -

which calls through to the actual implementation

-
public <E> List<E> selectList(String statement, Object parameter) {
-  return this.selectList(statement, parameter, RowBounds.DEFAULT);
-}
- -

There is one more layer here

-
public <E> List<E> selectList(String statement, Object parameter, RowBounds rowBounds) {
-  return selectList(statement, parameter, rowBounds, Executor.NO_RESULT_HANDLER);
-}
+    if (conn == null) {
+      // still no connection
+      if (log.isDebugEnabled()) {
+        log.debug("PooledDataSource: Unknown severe error condition.  The connection pool returned a null connection.");
+      }
+      // throw
+      throw new SQLException("PooledDataSource: Unknown severe error condition.  The connection pool returned a null connection.");
+    }
+    // return the connection
+    return conn;
+  }
+

Then comes returning a connection to the pool

+
protected void pushConnection(PooledConnection conn) throws SQLException {
+    // take the lock
+    lock.lock();
+    try {
+      // remove this connection from the active list
+      state.activeConnections.remove(conn);
+      if (conn.isValid()) {
+        // the current idle count is below the pool's maximum idle connections
+        if (state.idleConnections.size() < poolMaximumIdleConnections && conn.getConnectionTypeCode() == expectedConnectionTypeCode) {
+          // record the checkout time
+          state.accumulatedCheckoutTime += conn.getCheckoutTime();
+          if (!conn.getRealConnection().getAutoCommit()) {
+            // roll back, same as before
+            conn.getRealConnection().rollback();
+          }
+          // wrap the real connection in a new PooledConnection
+          PooledConnection newConn = new PooledConnection(conn.getRealConnection(), this);
+          // add it to the idle list
+          state.idleConnections.add(newConn);
+          newConn.setCreatedTimestamp(conn.getCreatedTimestamp());
+          newConn.setLastUsedTimestamp(conn.getLastUsedTimestamp());
+          // invalidate the old wrapper
+          conn.invalidate();
+          if (log.isDebugEnabled()) {
+            log.debug("Returned connection " + newConn.getRealHashCode() + " to pool.");
+          }
+          // wake up anyone waiting in popConnection
+          condition.signal();
+        } else {
+          // same as above, except the idle list has hit its cap, so the connection is closed instead
+          state.accumulatedCheckoutTime += conn.getCheckoutTime();
+          if (!conn.getRealConnection().getAutoCommit()) {
+            conn.getRealConnection().rollback();
+          }
+          conn.getRealConnection().close();
+          if (log.isDebugEnabled()) {
+            log.debug("Closed connection " + conn.getRealHashCode() + ".");
+          }
+          conn.invalidate();
+        }
+      } else {
+        if (log.isDebugEnabled()) {
+          log.debug("A bad connection (" + conn.getRealHashCode() + ") attempted to return to the pool, discarding connection.");
+        }
+        state.badConnectionCount++;
+      }
+    } finally {
+      lock.unlock();
+    }
+  }
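All the knobs these two methods consult are plain setters on PooledDataSource, with the defaults shown below (a sketch; the connection parameters are placeholders):

PooledDataSource ds = new PooledDataSource(driver, url, username, password);
ds.setPoolMaximumActiveConnections(10); // cap on checked-out connections before popConnection reclaims or waits
ds.setPoolMaximumIdleConnections(5);    // cap on the idle list pushConnection returns to
ds.setPoolMaximumCheckoutTime(20000);   // ms after which a checkout counts as overdue
ds.setPoolTimeToWait(20000);            // ms popConnection waits before retrying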
+]]>
+
+  Java
+  Mybatis
+
+  Java
+  Mysql
+  Mybatis
+
+
+  mybatis series - getting started
+  /2022/11/27/mybatis%E7%B3%BB%E5%88%97-%E5%85%A5%E9%97%A8%E7%AF%87/
+  mybatis is one of the orm frameworks we use most often; below is the official introduction

+
+

MyBatis is a first-class persistence framework with support for custom SQL, stored procedures and advanced mappings. MyBatis eliminates almost all of the JDBC code and the manual setting of parameters and retrieval of results. MyBatis can use simple XML or annotations for configuration, mapping primitives, interfaces and Java POJOs (Plain Old Java Objects) to database records.

+
+

A defining trait of mybatis, or at least what it is best known for, is being lighter-weight than hibernate, which has made it a favorite orm framework among Chinese developers. Hibernate has not been dissected in depth here yet; that may come later. In day-to-day use mybatis feels like a neatly crafted framework whose code is easy to read, hence this series; this first post introduces basic usage.
Following the official documentation, let's try the simplest setup.
First there is a small configuration file, mybatis-config.xml

+
<?xml version="1.0" encoding="UTF-8" ?>
+<!DOCTYPE configuration
+        PUBLIC "-//mybatis.org//DTD Config 3.0//EN"
+        "https://mybatis.org/dtd/mybatis-3-config.dtd">
+<configuration>
+    <!-- the properties file to pull in -->
+    <properties resource="application-development.properties"/>
+    <!-- which environment to use; the default is development -->
+    <environments default="development">
+        <environment id="development">
+        <!-- the transaction manager type -->
+            <transactionManager type="JDBC"/>
+            <!-- the dataSource type -->
+            <dataSource type="POOLED">
+                <!-- below are the concrete parameter placeholders -->
+                <property name="driver" value="${driver}"/>
+                <property name="url" value="${url}"/>
+                <property name="username" value="${username}"/>
+                <property name="password" value="${password}"/>
+            </dataSource>
+        </environment>
+    </environments>
+    <mappers>
+        <!-- the location of the mapper xml file(s) -->
+        <mapper resource="mapper/StudentMapper.xml"/>
+    </mappers>
+</configuration>
+
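The ${driver}-style placeholders above are filled from application-development.properties, which would look something like this (the values are placeholders for illustration):

driver=com.mysql.cj.jdbc.Driver
url=jdbc:mysql://localhost:3306/test?useUnicode=true&characterEncoding=utf8
username=root
password=123456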

In code, build mybatis's key entry point

+
String resource = "mybatis-config.xml";
+InputStream inputStream = Resources.getResourceAsStream(resource);
+SqlSessionFactory sqlSessionFactory = new SqlSessionFactoryBuilder().build(inputStream);
+

Then the StudentMapper.xml referenced above

+
<?xml version="1.0" encoding="UTF-8" ?>
+<!DOCTYPE mapper
+        PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN"
+        "https://mybatis.org/dtd/mybatis-3-mapper.dtd">
+<mapper namespace="com.nicksxs.mybatisdemo.StudentMapper">
+    <select id="selectStudent" resultType="com.nicksxs.mybatisdemo.StudentDO">
+        select * from student where id = #{id}
+    </select>
+</mapper>
+
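The resultType above implies a POJO whose properties line up with the selected columns (a minimal sketch of the assumed class):

public class StudentDO {
    private Long id;
    private String name;

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}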

Now, to use this mapper,

+
String resource = "mybatis-config.xml";
+InputStream inputStream = Resources.getResourceAsStream(resource);
+SqlSessionFactory sqlSessionFactory = new SqlSessionFactoryBuilder().build(inputStream);
+try (SqlSession session = sqlSessionFactory.openSession()) {
+    StudentDO studentDO = session.selectOne("com.nicksxs.mybatisdemo.StudentMapper.selectStudent", 1);
+    System.out.println("id is " + studentDO.getId() + " name is " +studentDO.getName());
+} catch (Exception e) {
+    e.printStackTrace();
+}
+

The sqlSessionFactory is a factory for sqlSession objects, and a SqlSession provides every method needed to run SQL against the database, so mapped SQL statements can be executed directly through a SqlSession instance. Since mapper.xml defines the mapper's namespace, session.selectOne() can be called with namespace+id to invoke the statement.
What is less satisfying about calling it this way, or rather what later versions of mybatis improved, is that a mapper interface can be specified instead

+
public interface StudentMapper {
 
-

根本的就是从configuration里获取了mappedStatement

-
private <E> List<E> selectList(String statement, Object parameter, RowBounds rowBounds, ResultHandler handler) {
-  try {
-    // 这里进行了获取
-    MappedStatement ms = configuration.getMappedStatement(statement);
-    return executor.query(ms, wrapCollection(parameter), rowBounds, handler);
-  } catch (Exception e) {
-    throw ExceptionFactory.wrapException("Error querying database.  Cause: " + e, e);
-  } finally {
-    ErrorContext.instance().reset();
-  }
-}
+    public StudentDO selectStudent(Long id);
+}
+

Methods can then be obtained through the mapper interface, which avoids surprises like unchecked parameter conversions

+
try (SqlSession session = sqlSessionFactory.openSession()) {
+    StudentMapper mapper = session.getMapper(StudentMapper.class);
+    StudentDO studentDO = mapper.selectStudent(1L);
+    System.out.println("id is " + studentDO.getId() + " name is " +studentDO.getName());
+} catch (Exception e) {
+    e.printStackTrace();
+}
+

This post covered just the basic usage; later posts can dig into the principles behind it.

]]>
Java @@ -8210,111 +8425,6 @@ location ~*openresty
-
- mybatis series - the typeAliases system
- /2023/01/01/mybatis%E7%B3%BB%E5%88%97-typeAliases%E7%B3%BB%E7%BB%9F/
- This concept actually came up earlier: the mybatis configuration, and parts of the bootstrap logic, all go through typeAliases,

-
<typeAliases>
-  <typeAlias alias="Author" type="domain.blog.Author"/>
-  <typeAlias alias="Blog" type="domain.blog.Blog"/>
-  <typeAlias alias="Comment" type="domain.blog.Comment"/>
-  <typeAlias alias="Post" type="domain.blog.Post"/>
-  <typeAlias alias="Section" type="domain.blog.Section"/>
-  <typeAlias alias="Tag" type="domain.blog.Tag"/>
-</typeAliases>
-

Type aliases can be registered here, then used in mybatis configuration to shorten those type references; underneath it is essentially a map,

-
public class TypeAliasRegistry {
-
-  private final Map<String, Class<?>> typeAliases = new HashMap<>();
-

keyed by string with a Class object as the value. Take the config file used at the very beginning

-
<dataSource type="POOLED">
-    <property name="driver" value="${driver}"/>
-    <property name="url" value="${url}"/>
-    <property name="username" value="${username}"/>
-    <property name="password" value="${password}"/>
-</dataSource>
-

The dataSource here is POOLED, so that must be an alias, or at least something needing resolution.
And that alias is initialized in the Configuration constructor

-
public Configuration() {
-    typeAliasRegistry.registerAlias("JDBC", JdbcTransactionFactory.class);
-    typeAliasRegistry.registerAlias("MANAGED", ManagedTransactionFactory.class);
-
-    typeAliasRegistry.registerAlias("JNDI", JndiDataSourceFactory.class);
-    typeAliasRegistry.registerAlias("POOLED", PooledDataSourceFactory.class);
-    typeAliasRegistry.registerAlias("UNPOOLED", UnpooledDataSourceFactory.class);
-
-    typeAliasRegistry.registerAlias("PERPETUAL", PerpetualCache.class);
-    typeAliasRegistry.registerAlias("FIFO", FifoCache.class);
-    typeAliasRegistry.registerAlias("LRU", LruCache.class);
-    typeAliasRegistry.registerAlias("SOFT", SoftCache.class);
-    typeAliasRegistry.registerAlias("WEAK", WeakCache.class);
-
-    typeAliasRegistry.registerAlias("DB_VENDOR", VendorDatabaseIdProvider.class);
-
-    typeAliasRegistry.registerAlias("XML", XMLLanguageDriver.class);
-    typeAliasRegistry.registerAlias("RAW", RawLanguageDriver.class);
-
-    typeAliasRegistry.registerAlias("SLF4J", Slf4jImpl.class);
-    typeAliasRegistry.registerAlias("COMMONS_LOGGING", JakartaCommonsLoggingImpl.class);
-    typeAliasRegistry.registerAlias("LOG4J", Log4jImpl.class);
-    typeAliasRegistry.registerAlias("LOG4J2", Log4j2Impl.class);
-    typeAliasRegistry.registerAlias("JDK_LOGGING", Jdk14LoggingImpl.class);
-    typeAliasRegistry.registerAlias("STDOUT_LOGGING", StdOutImpl.class);
-    typeAliasRegistry.registerAlias("NO_LOGGING", NoLoggingImpl.class);
-
-    typeAliasRegistry.registerAlias("CGLIB", CglibProxyFactory.class);
-    typeAliasRegistry.registerAlias("JAVASSIST", JavassistProxyFactory.class);
-
-    languageRegistry.setDefaultDriverClass(XMLLanguageDriver.class);
-    languageRegistry.register(RawLanguageDriver.class);
-  }
-

It is exactly the line typeAliasRegistry.registerAlias("POOLED", PooledDataSourceFactory.class); that registers
POOLED with PooledDataSourceFactory.class as its aliased type.
The registration method itself is

-
public void registerAlias(String alias, Class<?> value) {
-  if (alias == null) {
-    throw new TypeException("The parameter alias cannot be null");
-  }
-  // issue #748
-  // lower-case it,
-  String key = alias.toLowerCase(Locale.ENGLISH);
-  // check whether it is already registered
-  if (typeAliases.containsKey(key) && typeAliases.get(key) != null && !typeAliases.get(key).equals(value)) {
-    throw new TypeException("The alias '" + alias + "' is already mapped to the value '" + typeAliases.get(key).getName() + "'.");
-  }
-  // put it into the map
-  typeAliases.put(key, value);
-}
-

And the lookup logic is here

-
public <T> Class<T> resolveAlias(String string) {
-    try {
-      if (string == null) {
-        return null;
-      }
-      // issue #748
-      // lower-cased in the same way
-      String key = string.toLowerCase(Locale.ENGLISH);
-      Class<T> value;
-      if (typeAliases.containsKey(key)) {
-        value = (Class<T>) typeAliases.get(key);
-      } else {
-        // there is also a fallback that loads the class by name
-        value = (Class<T>) Resources.classForName(string);
-      }
-      return value;
-    } catch (ClassNotFoundException e) {
-      throw new TypeException("Could not resolve type alias '" + string + "'.  Cause: " + e, e);
-    }
-  }
-
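The two methods combine into the lookup used while the config is parsed (a minimal sketch, assuming a standalone TypeAliasRegistry):

TypeAliasRegistry registry = new TypeAliasRegistry();
registry.registerAlias("POOLED", PooledDataSourceFactory.class);
// the map hit is case-insensitive
Class<?> viaAlias = registry.resolveAlias("pooled"); // PooledDataSourceFactory.class
// unknown aliases fall through to Resources.classForName
Class<?> viaName = registry.resolveAlias("org.apache.ibatis.datasource.pooled.PooledDataSourceFactory");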

The logic is simple, yet in mybatis it is an indispensable concept

-]]>
- - Java - Mybatis - - - Java - Mysql - Mybatis - -
pcre-intro-and-a-simple-package /2015/01/16/pcre-intro-and-a-simple-package/ @@ -8365,1169 +8475,1038 @@ int pcre_exec(const pcre *code, const pcre_extra *extra, const char *subject, in - php-abstract-class-and-interface - /2016/11/10/php-abstract-class-and-interface/ - PHP抽象类和接口
  • Abstract classes vs. interfaces
  • An abstract class may contain non-abstract methods, i.e. methods with an implementation
  • A class containing at least one abstract method must itself be declared abstract; neither abstract classes nor interfaces can be instantiated
  • Abstract class members can carry access modifiers; interface members are public by default
  • A class can implement multiple interfaces, but it can extend only one abstract class
  • A concrete class must implement the abstract methods of its abstract parent or interfaces, but need not override the parent's non-abstract methods
  • An interface cannot declare member variables, but it can declare constants
-

Sample code

<?php
-interface int1{
-    const INTER1 = 111;
-    function inter1();
-}
-interface int2{
-    const INTER1 = 222;
-    function inter2();
-}
-abstract class abst1{
-    public function abstr1(){
-        echo 1111;
-    }
-    abstract function abstra1(){
-        echo 'ahahahha';
-    }
-}
-abstract class abst2{
-    public function abstr2(){
-        echo 1111;
-    }
-    abstract function abstra2();
-}
-class normal1 extends abst1{
-    protected function abstr2(){
-        echo 222;
-    }
-}
+ mybatis系列-第一条sql的更多细节
+ /2022/12/18/mybatis%E7%B3%BB%E5%88%97-%E7%AC%AC%E4%B8%80%E6%9D%A1sql%E7%9A%84%E6%9B%B4%E5%A4%9A%E7%BB%86%E8%8A%82/
+ Execution details
First, the default languageDriver is set, in Configuration's constructor:
org/mybatis/mybatis/3.5.11/mybatis-3.5.11-sources.jar!/org/apache/ibatis/session/Configuration.java:215

+
languageRegistry.setDefaultDriverClass(XMLLanguageDriver.class);
-

result

PHP Fatal error:  Abstract function abst1::abstra1() cannot contain body in new.php on line 17
+

Then in
org.apache.ibatis.builder.xml.XMLStatementBuilder#parseStatementNode
the sqlSource is created, and the concrete sqlSource is chosen according to the LanguageDriver implementation picked above:

+
SqlSource sqlSource = langDriver.createSqlSource(configuration, context, parameterTypeClass);
-Fatal error: Abstract function abst1::abstra1() cannot contain body in php on line 17
-]]>
- Categories: php; Tags: php
- mybatis系列-sql 类的简要分析
- /2023/03/19/mybatis%E7%B3%BB%E5%88%97-sql-%E7%B1%BB%E7%9A%84%E7%AE%80%E8%A6%81%E5%88%86%E6%9E%90/
- Last time we only covered the basic usage. This part is also fairly simple, since the wrapper is not very complex; let's take select as the entry point and look at the concrete implementation:

-
String selectSql = new SQL() {{
-            SELECT("id", "name");
-            FROM("student");
-            WHERE("id = #{id}");
-        }}.toString();
-

The implementation of the SELECT method:

-
public T SELECT(String... columns) {
-  sql().statementType = SQLStatement.StatementType.SELECT;
-  sql().select.addAll(Arrays.asList(columns));
-  return getSelf();
-}
-

statementType is an enum:

-
public enum StatementType {
-  DELETE, INSERT, SELECT, UPDATE
-}
-

So this marks the statement as a SELECT, and the column arguments are turned into a list and appended to the select field.
Then comes FROM, which, as you can probably guess, simply records the table name:

-
public T FROM(String table) {
-  sql().tables.add(table);
-  return getSelf();
-}
-

It adds the table to tables. So what exactly is tables?
Here is the full list of fields:

-
StatementType statementType;
-List<String> sets = new ArrayList<>();
-List<String> select = new ArrayList<>();
-List<String> tables = new ArrayList<>();
-List<String> join = new ArrayList<>();
-List<String> innerJoin = new ArrayList<>();
-List<String> outerJoin = new ArrayList<>();
-List<String> leftOuterJoin = new ArrayList<>();
-List<String> rightOuterJoin = new ArrayList<>();
-List<String> where = new ArrayList<>();
-List<String> having = new ArrayList<>();
-List<String> groupBy = new ArrayList<>();
-List<String> orderBy = new ArrayList<>();
-List<String> lastList = new ArrayList<>();
-List<String> columns = new ArrayList<>();
-List<List<String>> valuesList = new ArrayList<>();
-

As you can see, a pile of Lists temporarily holds these sql fragments, which are later assembled into the final statement,
because the class overrides toString:

+

createSqlSource then calls:

@Override
-public String toString() {
-  StringBuilder sb = new StringBuilder();
-  sql().sql(sb);
-  return sb.toString();
-}
-

The sql method being called is:

-
public String sql(Appendable a) {
-      SafeAppendable builder = new SafeAppendable(a);
-      if (statementType == null) {
-        return null;
+public SqlSource createSqlSource(Configuration configuration, XNode script, Class<?> parameterType) {
+  XMLScriptBuilder builder = new XMLScriptBuilder(configuration, script, parameterType);
+  return builder.parseScriptNode();
+}
+ +

The next step lives in parseScriptNode, i.e. org.apache.ibatis.scripting.xmltags.XMLScriptBuilder#parseScriptNode:

+
public SqlSource parseScriptNode() {
+  MixedSqlNode rootSqlNode = parseDynamicTags(context);
+  SqlSource sqlSource;
+  if (isDynamic) {
+    sqlSource = new DynamicSqlSource(configuration, rootSqlNode);
+  } else {
+    sqlSource = new RawSqlSource(configuration, rootSqlNode, parameterType);
+  }
+  return sqlSource;
+}
+ +

First the dynamic tags have to be parsed, via org.apache.ibatis.scripting.xmltags.XMLScriptBuilder#parseDynamicTags:

+
protected MixedSqlNode parseDynamicTags(XNode node) {
+    List<SqlNode> contents = new ArrayList<>();
+    NodeList children = node.getNode().getChildNodes();
+    for (int i = 0; i < children.getLength(); i++) {
+      XNode child = node.newXNode(children.item(i));
+      if (child.getNode().getNodeType() == Node.CDATA_SECTION_NODE || child.getNode().getNodeType() == Node.TEXT_NODE) {
+        String data = child.getStringBody("");
+        TextSqlNode textSqlNode = new TextSqlNode(data);
+        // ---------> the key logic is here
+        if (textSqlNode.isDynamic()) {
+          contents.add(textSqlNode);
+          isDynamic = true;
+        } else {
+          contents.add(new StaticTextSqlNode(data));
+        }
+      } else if (child.getNode().getNodeType() == Node.ELEMENT_NODE) { // issue #628
+        String nodeName = child.getNode().getNodeName();
+        NodeHandler handler = nodeHandlerMap.get(nodeName);
+        if (handler == null) {
+          throw new BuilderException("Unknown element <" + nodeName + "> in SQL statement.");
+        }
+        handler.handleNode(child, contents);
+        isDynamic = true;
       }
+    }
+    return new MixedSqlNode(contents);
+  }
- String answer; +

Whether the sql is dynamic is decided by org.apache.ibatis.scripting.xmltags.TextSqlNode#isDynamic:

+
public boolean isDynamic() {
+  DynamicCheckerTokenParser checker = new DynamicCheckerTokenParser();
+  // ----------> the key call is here
+  GenericTokenParser parser = createParser(checker);
+  parser.parse(text);
+  return checker.isDynamic();
+}
- switch (statementType) { - case DELETE: - answer = deleteSQL(builder); - break; +

Looking at how the parser is created shows what it actually does: it simply scans for ${ and }:

+
private GenericTokenParser createParser(TokenHandler handler) {
+  return new GenericTokenParser("${", "}", handler);
+}
- case INSERT: - answer = insertSQL(builder); - break; +
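As a side illustration, here is a small sketch of my own that drives GenericTokenParser with a custom TokenHandler to reproduce the ${ } check described above (the parser and handler types are the real mybatis ones; the rest is made up):

```java
import org.apache.ibatis.parsing.GenericTokenParser;
import org.apache.ibatis.parsing.TokenHandler;

public class DynamicCheckDemo {
    public static void main(String[] args) {
        final boolean[] dynamic = {false};
        // the handler is invoked once per ${...} token the parser finds
        TokenHandler handler = content -> {
            dynamic[0] = true;
            return content; // the return value replaces the token in the output
        };
        GenericTokenParser parser = new GenericTokenParser("${", "}", handler);

        parser.parse("select * from student where id = ${id}");
        System.out.println(dynamic[0]); // true -> the TextSqlNode would count as dynamic

        dynamic[0] = false;
        parser.parse("select * from student where id = #{id}");
        System.out.println(dynamic[0]); // false -> treated as a static text node
    }
}
```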

If such a token is found, isDynamic is set to true up above, and in that case a DynamicSqlSource is created:

+
sqlSource = new DynamicSqlSource(configuration, rootSqlNode);
- case SELECT: - answer = selectSQL(builder); - break; +

Otherwise a RawSqlSource is created:

+
sqlSource = new RawSqlSource(configuration, rootSqlNode, parameterType);
+```
 
-        case UPDATE:
-          answer = updateSQL(builder);
-          break;
+But this is not yet a directly usable `sqlSource`;
+the real construction goes through:
+```java
+public RawSqlSource(Configuration configuration, SqlNode rootSqlNode, Class<?> parameterType) {
+    this(configuration, getSql(configuration, rootSqlNode), parameterType);
+  }
 
-        default:
-          answer = null;
-      }
+  public RawSqlSource(Configuration configuration, String sql, Class<?> parameterType) {
+    SqlSourceBuilder sqlSourceParser = new SqlSourceBuilder(configuration);
+    Class<?> clazz = parameterType == null ? Object.class : parameterType;
+    sqlSource = sqlSourceParser.parse(sql, clazz, new HashMap<>());
+  }
- return answer; - }
-

Based on the statementType above we know what kind of statement this is; ours is a select, so the SELECT branch into selectSQL is taken:

-
private String selectSQL(SafeAppendable builder) {
-  if (distinct) {
-    sqlClause(builder, "SELECT DISTINCT", select, "", "", ", ");
+

The concrete sqlSource is created by org.apache.ibatis.builder.SqlSourceBuilder#parse.
The code logic is:

+
public SqlSource parse(String originalSql, Class<?> parameterType, Map<String, Object> additionalParameters) {
+  ParameterMappingTokenHandler handler = new ParameterMappingTokenHandler(configuration, parameterType, additionalParameters);
+  GenericTokenParser parser = new GenericTokenParser("#{", "}", handler);
+  String sql;
+  if (configuration.isShrinkWhitespacesInSql()) {
+    sql = parser.parse(removeExtraWhitespaces(originalSql));
   } else {
-    sqlClause(builder, "SELECT", select, "", "", ", ");
+    sql = parser.parse(originalSql);
   }
+  return new StaticSqlSource(configuration, sql, handler.getParameterMappings());
+}
-  sqlClause(builder, "FROM", tables, "", "", ", ");
-  joins(builder);
-  sqlClause(builder, "WHERE", where, "(", ")", " AND ");
-  sqlClause(builder, "GROUP BY", groupBy, "", "", ", ");
-  sqlClause(builder, "HAVING", having, "(", ")", " AND ");
-  sqlClause(builder, "ORDER BY", orderBy, "", "", ", ");
-  limitingRowsStrategy.appendClause(builder, offset, limit);
-  return builder.toString();
-}
-

From the above you can see the clauses are emitted in the order we normally read a sql statement,
i.e. select ... from ... where ... and so on.
Now look at the sqlClause code:

-
private void sqlClause(SafeAppendable builder, String keyword, List<String> parts, String open, String close,
-                           String conjunction) {
-      if (!parts.isEmpty()) {
-        if (!builder.isEmpty()) {
-          builder.append("\n");
-        }
-        builder.append(keyword);
-        builder.append(" ");
-        builder.append(open);
-        String last = "________";
-        for (int i = 0, n = parts.size(); i < n; i++) {
-          String part = parts.get(i);
-          if (i > 0 && !part.equals(AND) && !part.equals(OR) && !last.equals(AND) && !last.equals(OR)) {
-            builder.append(conjunction);
-          }
-          builder.append(part);
-          last = part;
-        }
-        builder.append(close);
-      }
-    }
-

The concatenation here additionally has to handle AND and OR; beyond that there is nothing special. The only puzzle is the lastList field involved with the WHERE clause: it only ever seems to get appended to and reassigned, so if any expert knows what it is for, please point it out in the comments.
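To see the AND/OR handling in action, here is a usage sketch of my own against the same org.apache.ibatis.jdbc.SQL class:

```java
import org.apache.ibatis.jdbc.SQL;

public class SqlBuilderOrDemo {
    public static void main(String[] args) {
        String sql = new SQL()
                .SELECT("id", "name")
                .FROM("student")
                .WHERE("id = #{id}")
                .OR()
                .WHERE("name = #{name}")
                .toString();
        // prints roughly: SELECT id, name / FROM student / WHERE (id = #{id}) OR (name = #{name})
        System.out.println(sql);
    }
}
```

And judging from the WHERE/AND()/OR() sources, WHERE appears to point lastList at the condition list it just touched so that AND() and OR() know where to append their markers; that seems to be all lastList exists for.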

-]]>
- Categories: Java / Mybatis; Tags: Java / Mysql / Mybatis
- mybatis系列-第一条sql的细节
- /2022/12/11/mybatis%E7%B3%BB%E5%88%97-%E7%AC%AC%E4%B8%80%E6%9D%A1sql%E7%9A%84%E7%BB%86%E8%8A%82/
- First, two points to fill in.
The first: earlier we said that
org.apache.ibatis.builder.xml.XMLConfigBuilder is used to create the parser, so what does the parsing actually produce?
Look at this method's return value:

-
public Configuration parse() {
-  if (parsed) {
-    throw new BuilderException("Each XMLConfigBuilder can only be used once.");
-  }
-  parsed = true;
-  parseConfiguration(parser.evalNode("/configuration"));
-  return configuration;
-}
- -

It returns org.apache.ibatis.session.Configuration, and this Configuration is one of the most important core classes in mybatis. Here are its member fields:

-
public class Configuration {
-
-  protected Environment environment;
-
-  protected boolean safeRowBoundsEnabled;
-  protected boolean safeResultHandlerEnabled = true;
-  protected boolean mapUnderscoreToCamelCase;
-  protected boolean aggressiveLazyLoading;
-  protected boolean multipleResultSetsEnabled = true;
-  protected boolean useGeneratedKeys;
-  protected boolean useColumnLabel = true;
-  protected boolean cacheEnabled = true;
-  protected boolean callSettersOnNulls;
-  protected boolean useActualParamName = true;
-  protected boolean returnInstanceForEmptyRow;
-  protected boolean shrinkWhitespacesInSql;
-  protected boolean nullableOnForEach;
-  protected boolean argNameBasedConstructorAutoMapping;
-
-  protected String logPrefix;
-  protected Class<? extends Log> logImpl;
-  protected Class<? extends VFS> vfsImpl;
-  protected Class<?> defaultSqlProviderType;
-  protected LocalCacheScope localCacheScope = LocalCacheScope.SESSION;
-  protected JdbcType jdbcTypeForNull = JdbcType.OTHER;
-  protected Set<String> lazyLoadTriggerMethods = new HashSet<>(Arrays.asList("equals", "clone", "hashCode", "toString"));
-  protected Integer defaultStatementTimeout;
-  protected Integer defaultFetchSize;
-  protected ResultSetType defaultResultSetType;
-  protected ExecutorType defaultExecutorType = ExecutorType.SIMPLE;
-  protected AutoMappingBehavior autoMappingBehavior = AutoMappingBehavior.PARTIAL;
-  protected AutoMappingUnknownColumnBehavior autoMappingUnknownColumnBehavior = AutoMappingUnknownColumnBehavior.NONE;
-
-  protected Properties variables = new Properties();
-  protected ReflectorFactory reflectorFactory = new DefaultReflectorFactory();
-  protected ObjectFactory objectFactory = new DefaultObjectFactory();
-  protected ObjectWrapperFactory objectWrapperFactory = new DefaultObjectWrapperFactory();
-
-  protected boolean lazyLoadingEnabled = false;
-  protected ProxyFactory proxyFactory = new JavassistProxyFactory(); // #224 Using internal Javassist instead of OGNL
+

What actually gets created here is a StaticSqlSource. A side note: the parser above has rewritten the original sql, select * from student where id = #{id}, into select * from student where id = ?, and then the StaticSqlSource is built:

+
public StaticSqlSource(Configuration configuration, String sql, List<ParameterMapping> parameterMappings) {
+  this.sql = sql;
+  this.parameterMappings = parameterMappings;
+  this.configuration = configuration;
+}
-  protected String databaseId;
-  /**
-   * Configuration factory class.
-   * Used to create Configuration for loading deserialized unread properties.
-   *
-   * @see <a href='https://github.com/mybatis/old-google-code-issues/issues/300'>Issue 300 (google code)</a>
-   */
-  protected Class<?> configurationFactory;
+

Why walk through so much seemingly unrelated code? Because of the very first piece of code we used to execute the sql:

+
@Override
+  public <E> List<E> query(MappedStatement ms, Object parameterObject, RowBounds rowBounds, ResultHandler resultHandler) throws SQLException {
+    BoundSql boundSql = ms.getBoundSql(parameterObject);
+    CacheKey key = createCacheKey(ms, parameterObject, rowBounds, boundSql);
+    return query(ms, parameterObject, rowBounds, resultHandler, key, boundSql);
+  }
-  protected final MapperRegistry mapperRegistry = new MapperRegistry(this);
-  protected final InterceptorChain interceptorChain = new InterceptorChain();
-  protected final TypeHandlerRegistry typeHandlerRegistry = new TypeHandlerRegistry(this);
-  protected final TypeAliasRegistry typeAliasRegistry = new TypeAliasRegistry();
-  protected final LanguageDriverRegistry languageRegistry = new LanguageDriverRegistry();
+

Here a BoundSql is obtained. So where does the BoundSql come from? First, org.apache.ibatis.mapping.MappedStatement#getBoundSql is called:

+
public BoundSql getBoundSql(Object parameterObject) {
+    BoundSql boundSql = sqlSource.getBoundSql(parameterObject);
+    List<ParameterMapping> parameterMappings = boundSql.getParameterMappings();
+    if (parameterMappings == null || parameterMappings.isEmpty()) {
+      boundSql = new BoundSql(configuration, boundSql.getSql(), parameterMap.getParameterMappings(), parameterObject);
+    }
 
-  protected final Map<String, MappedStatement> mappedStatements = new StrictMap<MappedStatement>("Mapped Statements collection")
-      .conflictMessageProducer((savedValue, targetValue) ->
-          ". please check " + savedValue.getResource() + " and " + targetValue.getResource());
-  protected final Map<String, Cache> caches = new StrictMap<>("Caches collection");
-  protected final Map<String, ResultMap> resultMaps = new StrictMap<>("Result Maps collection");
-  protected final Map<String, ParameterMap> parameterMaps = new StrictMap<>("Parameter Maps collection");
-  protected final Map<String, KeyGenerator> keyGenerators = new StrictMap<>("Key Generators collection");
+    // check for nested result maps in parameter mappings (issue #30)
+    for (ParameterMapping pm : boundSql.getParameterMappings()) {
+      String rmId = pm.getResultMapId();
+      if (rmId != null) {
+        ResultMap rm = configuration.getResultMap(rmId);
+        if (rm != null) {
+          hasNestedResultMaps |= rm.hasNestedResultMaps();
+        }
+      }
+    }
 
-  protected final Set<String> loadedResources = new HashSet<>();
-  protected final Map<String, XNode> sqlFragments = new StrictMap<>("XML fragments parsed from previous mappers");
+    return boundSql;
+  }
-  protected final Collection<XMLStatementBuilder> incompleteStatements = new LinkedList<>();
-  protected final Collection<CacheRefResolver> incompleteCacheRefs = new LinkedList<>();
-  protected final Collection<ResultMapResolver> incompleteResultMaps = new LinkedList<>();
-  protected final Collection<MethodResolver> incompleteMethods = new LinkedList<>();
+

And from the parsing above we can see that this sqlSource is wrapped in a RawSqlSource, whose getBoundSql simply delegates to the inner sqlSource:

+
@Override
+public BoundSql getBoundSql(Object parameterObject) {
+  return sqlSource.getBoundSql(parameterObject);
+}
-

That is a lot of member fields, and we will not explain each one now, but several of them we have already met. The first is mappedStatements: the previous post showed that parsed mappers end up there. resultMaps and parameterMaps, also frequently used, are the maps for parameter and result mappings (I have an earlier post explaining why the usage of some variables, such as list, is special; see that one). keyGenerators is used when we need to define a primary key generator.
The second point: which org.apache.ibatis.session.SqlSessionFactory did we actually create?

-
public SqlSessionFactory build(Configuration config) {
-  return new DefaultSqlSessionFactory(config);
-}
+

The inner sqlSource is the StaticSqlSource:

+
@Override
+public BoundSql getBoundSql(Object parameterObject) {
+  return new BoundSql(configuration, sql, parameterMappings, parameterObject);
+}
-

It is this DefaultSqlSessionFactory, one of the SqlSessionFactory implementations.
Next, let's see what openSession does:

-
public SqlSession openSession() {
-  return openSessionFromDataSource(configuration.getDefaultExecutorType(), null, false);
-}
+

The content of BoundSql is also fairly simple:

+
public BoundSql(Configuration configuration, String sql, List<ParameterMapping> parameterMappings, Object parameterObject) {
+  this.sql = sql;
+  this.parameterMappings = parameterMappings;
+  this.parameterObject = parameterObject;
+  this.additionalParameters = new HashMap<>();
+  this.metaParameters = configuration.newMetaObject(additionalParameters);
+}
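A quick way to observe that rewrite from user code; a sketch of my own, assuming a configured mybatis-config.xml and a mapper that declares a statement under the placeholder id selectStudent:

```java
import java.io.InputStream;
import org.apache.ibatis.io.Resources;
import org.apache.ibatis.mapping.BoundSql;
import org.apache.ibatis.mapping.MappedStatement;
import org.apache.ibatis.session.SqlSessionFactory;
import org.apache.ibatis.session.SqlSessionFactoryBuilder;

public class BoundSqlDemo {
    public static void main(String[] args) throws Exception {
        InputStream in = Resources.getResourceAsStream("mybatis-config.xml");
        SqlSessionFactory factory = new SqlSessionFactoryBuilder().build(in);
        // "selectStudent" is a placeholder; use the id your own mapper declares
        MappedStatement ms = factory.getConfiguration().getMappedStatement("selectStudent");
        BoundSql boundSql = ms.getBoundSql(1);
        System.out.println(boundSql.getSql()); // select * from student where id = ?
    }
}
```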
-

There are a few parameters here. The first is the default executor type; scrolling back up to the Configuration fields pasted above, the default is
protected ExecutorType defaultExecutorType = ExecutorType.SIMPLE;

-

Since no special execution behavior was specified, we simply use the simple type by default. The second parameter is the transaction isolation level, and the third is whether to auto-commit:

-
private SqlSession openSessionFromDataSource(ExecutorType execType, TransactionIsolationLevel level, boolean autoCommit) {
-  Transaction tx = null;
+

And last time, in org.apache.ibatis.executor.SimpleExecutor#doQuery, one thing was left out: the StatementHandler logic:

+
@Override
+public <E> List<E> doQuery(MappedStatement ms, Object parameter, RowBounds rowBounds, ResultHandler resultHandler, BoundSql boundSql) throws SQLException {
+  Statement stmt = null;
   try {
-    final Environment environment = configuration.getEnvironment();
-    final TransactionFactory transactionFactory = getTransactionFactoryFromEnvironment(environment);
-    tx = transactionFactory.newTransaction(environment.getDataSource(), level, autoCommit);
-    // --------> focus on this line first
-    final Executor executor = configuration.newExecutor(tx, execType);
-    return new DefaultSqlSession(configuration, executor, autoCommit);
-  } catch (Exception e) {
-    closeTransaction(tx); // may have fetched a connection so lets call close()
-    throw ExceptionFactory.wrapException("Error opening session.  Cause: " + e, e);
+    Configuration configuration = ms.getConfiguration();
+    StatementHandler handler = configuration.newStatementHandler(wrapper, ms, parameter, rowBounds, resultHandler, boundSql);
+    stmt = prepareStatement(handler, ms.getStatementLog());
+    return handler.query(stmt, resultHandler);
   } finally {
-    ErrorContext.instance().reset();
+    closeStatement(stmt);
   }
-}
+}
-

Concretely, it calls this method on Configuration:

-
public Executor newExecutor(Transaction transaction, ExecutorType executorType) {
-  executorType = executorType == null ? defaultExecutorType : executorType;
-  Executor executor;
-  if (ExecutorType.BATCH == executorType) {
-    executor = new BatchExecutor(this, transaction);
-  } else if (ExecutorType.REUSE == executorType) {
-    executor = new ReuseExecutor(this, transaction);
-  } else {
-    // ---------> this branch is taken
-    executor = new SimpleExecutor(this, transaction);
-  }
-  if (cacheEnabled) {
-    executor = new CachingExecutor(executor);
+

It uses the statementType to decide which statementHandler to use; in our case that is the PreparedStatementHandler:

+
public RoutingStatementHandler(Executor executor, MappedStatement ms, Object parameter, RowBounds rowBounds, ResultHandler resultHandler, BoundSql boundSql) {
+
+  switch (ms.getStatementType()) {
+    case STATEMENT:
+      delegate = new SimpleStatementHandler(executor, ms, parameter, rowBounds, resultHandler, boundSql);
+      break;
+    case PREPARED:
+      delegate = new PreparedStatementHandler(executor, ms, parameter, rowBounds, resultHandler, boundSql);
+      break;
+    case CALLABLE:
+      delegate = new CallableStatementHandler(executor, ms, parameter, rowBounds, resultHandler, boundSql);
+      break;
+    default:
+      throw new ExecutorException("Unknown statement type: " + ms.getStatementType());
   }
-  executor = (Executor) interceptorChain.pluginAll(executor);
-  return executor;
-}
-

The executorType passed in above is Configuration's default, i.e. the simple type, and since cacheEnabled defaults to true in Configuration, the executor gets wrapped in a CachingExecutor. After that come the plugins, which we will not expand on here.
Our openSession call then returns a newly created DefaultSqlSession:

-
public DefaultSqlSession(Configuration configuration, Executor executor, boolean autoCommit) {
-    this.configuration = configuration;
-    this.executor = executor;
-    this.dirty = false;
-    this.autoCommit = autoCommit;
-  }
+}
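As an aside, the executor type can also be chosen per session instead of relying on the SIMPLE default; a minimal sketch of my own using the standard openSession overload:

```java
import org.apache.ibatis.session.ExecutorType;
import org.apache.ibatis.session.SqlSession;
import org.apache.ibatis.session.SqlSessionFactory;

public class ExecutorTypeDemo {
    // factory obtained elsewhere, e.g. from SqlSessionFactoryBuilder
    static SqlSession openBatchSession(SqlSessionFactory factory) {
        // newExecutor(tx, BATCH) then builds a BatchExecutor instead of a SimpleExecutor,
        // and it is still wrapped in a CachingExecutor while cacheEnabled is true
        return factory.openSession(ExecutorType.BATCH);
    }
}
```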
-

Then selectOne is invoked; since that part of the code has been covered before, we jump straight to
org.apache.ibatis.session.defaults.DefaultSqlSession#selectList(java.lang.String, java.lang.Object, org.apache.ibatis.session.RowBounds, org.apache.ibatis.session.ResultHandler):

-
private <E> List<E> selectList(String statement, Object parameter, RowBounds rowBounds, ResultHandler handler) {
+

So here is a detail worth adding from last time: handler.query inside this doQuery ends up calling PreparedStatementHandler's query method:

+
@Override
+public <E> List<E> doQuery(MappedStatement ms, Object parameter, RowBounds rowBounds, ResultHandler resultHandler, BoundSql boundSql) throws SQLException {
+  Statement stmt = null;
   try {
-    MappedStatement ms = configuration.getMappedStatement(statement);
-    return executor.query(ms, wrapCollection(parameter), rowBounds, handler);
-  } catch (Exception e) {
-    throw ExceptionFactory.wrapException("Error querying database.  Cause: " + e, e);
+    Configuration configuration = ms.getConfiguration();
+    StatementHandler handler = configuration.newStatementHandler(wrapper, ms, parameter, rowBounds, resultHandler, boundSql);
+    stmt = prepareStatement(handler, ms.getStatementLog());
+    return handler.query(stmt, resultHandler);
   } finally {
-    ErrorContext.instance().reset();
+    closeStatement(stmt);
   }
-}
+}
-

Because, as said above, the executor is wrapped in a CachingExecutor, this gets called first:

+ +

And because getConnection inside prepareStatement above hands back a connection built by com.mysql.cj.jdbc.ConnectionImpl#ConnectionImpl(com.mysql.cj.conf.HostInfo):

@Override
-public <E> List<E> query(MappedStatement ms, Object parameterObject, RowBounds rowBounds, ResultHandler resultHandler) throws SQLException {
-  BoundSql boundSql = ms.getBoundSql(parameterObject);
-  CacheKey key = createCacheKey(ms, parameterObject, rowBounds, boundSql);
-  return query(ms, parameterObject, rowBounds, resultHandler, key, boundSql);
+public <E> List<E> query(Statement statement, ResultHandler resultHandler) throws SQLException {
+  PreparedStatement ps = (PreparedStatement) statement;
+  ps.execute();
+  return resultSetHandler.handleResultSets(ps);
 }
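Stripped of all the mybatis layers, what prepareStatement plus handler.query boil down to is the classic JDBC sequence; a hand-written sketch of my own (url and credentials are placeholders):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class PlainJdbcDemo {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://localhost:3306/test"; // placeholder
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             PreparedStatement ps = conn.prepareStatement("select * from student where id = ?")) {
            ps.setInt(1, 1); // roughly what the ParameterHandler does for #{id}
            ps.execute();
            try (ResultSet rs = ps.getResultSet()) { // roughly what the ResultSetHandler consumes
                while (rs.next()) {
                    System.out.println(rs.getLong("id") + " " + rs.getString("name"));
                }
            }
        }
    }
}
```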
-

Then the real query method is called:

-
@Override
-public <E> List<E> query(MappedStatement ms, Object parameterObject, RowBounds rowBounds, ResultHandler resultHandler, CacheKey key, BoundSql boundSql)
-    throws SQLException {
-  Cache cache = ms.getCache();
-  if (cache != null) {
-    flushCacheIfRequired(ms);
-    if (ms.isUseCache() && resultHandler == null) {
-      ensureNoOutParams(ms, boundSql);
-      @SuppressWarnings("unchecked")
-      List<E> list = (List<E>) tcm.getObject(cache, key);
-      if (list == null) {
-        list = delegate.query(ms, parameterObject, rowBounds, resultHandler, key, boundSql);
-        tcm.putObject(cache, key, list); // issue #578 and #116
-      }
-      return list;
-    }
-  }
-  return delegate.query(ms, parameterObject, rowBounds, resultHandler, key, boundSql);
-}
+

But why this one? Look back up: in mybatis-config.xml we configured

+
<transactionManager type="JDBC"/>
-

This is the first query, so nothing is cached yet and we fall through to the last line, which continues into org.apache.ibatis.executor.BaseExecutor#queryFromDatabase:

-
@Override
-  public <E> List<E> query(MappedStatement ms, Object parameter, RowBounds rowBounds, ResultHandler resultHandler, CacheKey key, BoundSql boundSql) throws SQLException {
-    ErrorContext.instance().resource(ms.getResource()).activity("executing a query").object(ms.getId());
-    if (closed) {
-      throw new ExecutorException("Executor was closed.");
-    }
-    if (queryStack == 0 && ms.isFlushCacheRequired()) {
-      clearLocalCache();
-    }
-    List<E> list;
+

Hence, when the environment is configured in parseConfiguration:

+
private void parseConfiguration(XNode root) {
     try {
-      queryStack++;
-      list = resultHandler == null ? (List<E>) localCache.getObject(key) : null;
-      if (list != null) {
-        handleLocallyCachedOutputParameters(ms, key, parameter, boundSql);
-      } else {
-        // -----------> execution reaches here
-        list = queryFromDatabase(ms, parameter, rowBounds, resultHandler, key, boundSql);
-      }
-    } finally {
-      queryStack--;
+      // issue #117 read properties first
+      propertiesElement(root.evalNode("properties"));
+      Properties settings = settingsAsProperties(root.evalNode("settings"));
+      loadCustomVfs(settings);
+      loadCustomLogImpl(settings);
+      typeAliasesElement(root.evalNode("typeAliases"));
+      pluginElement(root.evalNode("plugins"));
+      objectFactoryElement(root.evalNode("objectFactory"));
+      objectWrapperFactoryElement(root.evalNode("objectWrapperFactory"));
+      reflectorFactoryElement(root.evalNode("reflectorFactory"));
+      settingsElement(settings);
+      // read it after objectFactory and objectWrapperFactory issue #631
+      // ----------> right here
+      environmentsElement(root.evalNode("environments"));
+      databaseIdProviderElement(root.evalNode("databaseIdProvider"));
+      typeHandlerElement(root.evalNode("typeHandlers"));
+      mapperElement(root.evalNode("mappers"));
+    } catch (Exception e) {
+      throw new BuilderException("Error parsing SQL Mapper Configuration. Cause: " + e, e);
     }
-    if (queryStack == 0) {
-      for (DeferredLoad deferredLoad : deferredLoads) {
-        deferredLoad.load();
-      }
-      // issue #601
-      deferredLoads.clear();
-      if (configuration.getLocalCacheScope() == LocalCacheScope.STATEMENT) {
-        // issue #482
-        clearLocalCache();
+  }
+ +

The method called here reads the transactionManager type configured in the xml, namely JDBC:

+
private void environmentsElement(XNode context) throws Exception {
+  if (context != null) {
+    if (environment == null) {
+      environment = context.getStringAttribute("default");
+    }
+    for (XNode child : context.getChildren()) {
+      String id = child.getStringAttribute("id");
+      if (isSpecifiedEnvironment(id)) {
+        // -------> found here
+        TransactionFactory txFactory = transactionManagerElement(child.evalNode("transactionManager"));
+        DataSourceFactory dsFactory = dataSourceElement(child.evalNode("dataSource"));
+        DataSource dataSource = dsFactory.getDataSource();
+        Environment.Builder environmentBuilder = new Environment.Builder(id)
+            .transactionFactory(txFactory)
+            .dataSource(dataSource);
+        configuration.setEnvironment(environmentBuilder.build());
+        break;
       }
     }
-    return list;
-  }
- -

Then:

-
private <E> List<E> queryFromDatabase(MappedStatement ms, Object parameter, RowBounds rowBounds, ResultHandler resultHandler, CacheKey key, BoundSql boundSql) throws SQLException {
-  List<E> list;
-  localCache.putObject(key, EXECUTION_PLACEHOLDER);
-  try {
-    list = doQuery(ms, parameter, rowBounds, resultHandler, boundSql);
-  } finally {
-    localCache.removeObject(key);
   }
-  localCache.putObject(key, list);
-  if (ms.getStatementType() == StatementType.CALLABLE) {
-    localOutputParameterCache.putObject(key, parameter);
+}
+ +

and it is obtained through the following methods:

+
// fully qualified name: org.apache.ibatis.builder.xml.XMLConfigBuilder#transactionManagerElement
+private TransactionFactory transactionManagerElement(XNode context) throws Exception {
+    if (context != null) {
+      String type = context.getStringAttribute("type");
+      Properties props = context.getChildrenAsProperties();
+      TransactionFactory factory = (TransactionFactory) resolveClass(type).getDeclaredConstructor().newInstance();
+      factory.setProperties(props);
+      return factory;
+    }
+    throw new BuilderException("Environment declaration requires a TransactionFactory.");
   }
-  return list;
-}
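What environmentsElement assembles from the xml can also be built by hand; a minimal sketch of my own of the equivalent programmatic setup (connection details are placeholders):

```java
import org.apache.ibatis.datasource.pooled.PooledDataSource;
import org.apache.ibatis.mapping.Environment;
import org.apache.ibatis.session.Configuration;
import org.apache.ibatis.transaction.jdbc.JdbcTransactionFactory;

public class EnvironmentDemo {
    public static void main(String[] args) {
        // the equivalent of a POOLED dataSource element with its four usual properties (placeholders)
        PooledDataSource dataSource = new PooledDataSource(
                "com.mysql.cj.jdbc.Driver", "jdbc:mysql://localhost:3306/test", "user", "password");
        // the equivalent of <transactionManager type="JDBC"/>
        Environment environment = new Environment.Builder("development")
                .transactionFactory(new JdbcTransactionFactory())
                .dataSource(dataSource)
                .build();
        Configuration configuration = new Configuration(environment);
        System.out.println(configuration.getEnvironment().getId()); // development
    }
}
```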
-

Then comes simpleExecutor's execution flow:

-
@Override
-public <E> List<E> doQuery(MappedStatement ms, Object parameter, RowBounds rowBounds, ResultHandler resultHandler, BoundSql boundSql) throws SQLException {
-  Statement stmt = null;
-  try {
-    Configuration configuration = ms.getConfiguration();
-    StatementHandler handler = configuration.newStatementHandler(wrapper, ms, parameter, rowBounds, resultHandler, boundSql);
-    stmt = prepareStatement(handler, ms.getStatementLog());
-    return handler.query(stmt, resultHandler);
-  } finally {
-    closeStatement(stmt);
+// fully qualified name: org.apache.ibatis.builder.BaseBuilder#resolveClass
+protected <T> Class<? extends T> resolveClass(String alias) {
+    if (alias == null) {
+      return null;
+    }
+    try {
+      return resolveAlias(alias);
+    } catch (Exception e) {
+      throw new BuilderException("Error resolving class. Cause: " + e, e);
+    }
   }
-}
-

From here on it is essentially interaction with jdbc:

-
@Override
-public <E> List<E> query(Statement statement, ResultHandler resultHandler) throws SQLException {
-  PreparedStatement ps = (PreparedStatement) statement;
-  ps.execute();
-  return resultSetHandler.handleResultSets(ps);
-}
+// fully qualified name: org.apache.ibatis.builder.BaseBuilder#resolveAlias
+  protected <T> Class<? extends T> resolveAlias(String alias) {
+    return typeAliasRegistry.resolveAlias(alias);
+  }
+// fully qualified name: org.apache.ibatis.type.TypeAliasRegistry#resolveAlias
+  public <T> Class<T> resolveAlias(String string) {
+    try {
+      if (string == null) {
+        return null;
+      }
+      // issue #748
+      String key = string.toLowerCase(Locale.ENGLISH);
+      Class<T> value;
+      if (typeAliases.containsKey(key)) {
+        value = (Class<T>) typeAliases.get(key);
+      } else {
+        value = (Class<T>) Resources.classForName(string);
+      }
+      return value;
+    } catch (ClassNotFoundException e) {
+      throw new TypeException("Could not resolve type alias '" + string + "'. Cause: " + e, e);
+    }
+  }
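A quick check, in a sketch of my own, that the JDBC alias really lands on JdbcTransactionFactory:

```java
import org.apache.ibatis.session.Configuration;

public class JdbcAliasDemo {
    public static void main(String[] args) {
        // resolveAlias lower-cases its argument, so "jdbc" and "JDBC" behave the same
        Class<?> type = new Configuration().getTypeAliasRegistry().resolveAlias("JDBC");
        System.out.println(type.getName()); // org.apache.ibatis.transaction.jdbc.JdbcTransactionFactory
    }
}
```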
+

And what does the JDBC alias resolve to? Exactly the JdbcTransactionFactory written into Configuration's constructor:

+
public Configuration() {
+  typeAliasRegistry.registerAlias("JDBC", JdbcTransactionFactory.class);
-

com.mysql.cj.jdbc.ClientPreparedStatement#execute

-
public boolean execute() throws SQLException {
-        try {
-            synchronized(this.checkClosed().getConnectionMutex()) {
-                JdbcConnection locallyScopedConn = this.connection;
-                if (!this.doPingInstead && !this.checkReadOnlySafeStatement()) {
-                    throw SQLError.createSQLException(Messages.getString("PreparedStatement.20") + Messages.getString("PreparedStatement.21"), "S1009", this.exceptionInterceptor);
-                } else {
-                    ResultSetInternalMethods rs = null;
-                    this.lastQueryIsOnDupKeyUpdate = false;
-                    if (this.retrieveGeneratedKeys) {
-                        this.lastQueryIsOnDupKeyUpdate = this.containsOnDuplicateKeyUpdate();
-                    }
+

So here, in:

+
private SqlSession openSessionFromDataSource(ExecutorType execType, TransactionIsolationLevel level, boolean autoCommit) {
+  Transaction tx = null;
+  try {
+    final Environment environment = configuration.getEnvironment();
+    final TransactionFactory transactionFactory = getTransactionFactoryFromEnvironment(environment);
-                    this.batchedGeneratedKeys = null;
-                    this.resetCancelledState();
-                    this.implicitlyCloseAllOpenResults();
-                    this.clearWarnings();
-                    if (this.doPingInstead) {
-                        this.doPingInstead();
-                        return true;
-                    } else {
-                        this.setupStreamingTimeout(locallyScopedConn);
-                        Message sendPacket = ((PreparedQuery)this.query).fillSendPacket(((PreparedQuery)this.query).getQueryBindings());
-                        String oldDb = null;
-                        if (!locallyScopedConn.getDatabase().equals(this.getCurrentDatabase())) {
-                            oldDb = locallyScopedConn.getDatabase();
-                            locallyScopedConn.setDatabase(this.getCurrentDatabase());
-                        }

the TransactionFactory obtained is the JdbcTransactionFactory, and then:

+
tx = transactionFactory.newTransaction(environment.getDataSource(), level, autoCommit);
+```
 
-                        CachedResultSetMetaData cachedMetadata = null;
-                        boolean cacheResultSetMetadata = (Boolean)locallyScopedConn.getPropertySet().getBooleanProperty(PropertyKey.cacheResultSetMetadata).getValue();
-                        if (cacheResultSetMetadata) {
-                            cachedMetadata = locallyScopedConn.getCachedMetaData(((PreparedQuery)this.query).getOriginalSql());
-                        }
+The transaction created is a JdbcTransaction:
+```java
+  @Override
+  public Transaction newTransaction(DataSource ds, TransactionIsolationLevel level, boolean autoCommit) {
+    return new JdbcTransaction(ds, level, autoCommit, skipSetAutoCommitOnClose);
+  }
- locallyScopedConn.setSessionMaxRows(this.getQueryInfo().getFirstStmtChar() == 'S' ? this.maxRows : -1); - rs = this.executeInternal(this.maxRows, sendPacket, this.createStreamingResultSet(), this.getQueryInfo().getFirstStmtChar() == 'S', cachedMetadata, false); - if (cachedMetadata != null) { - locallyScopedConn.initializeResultsMetadataFromCache(((PreparedQuery)this.query).getOriginalSql(), cachedMetadata, rs); - } else if (rs.hasRows() && cacheResultSetMetadata) { - locallyScopedConn.initializeResultsMetadataFromCache(((PreparedQuery)this.query).getOriginalSql(), (CachedResultSetMetaData)null, rs); - } +

Now let's go back up to the getConnection code:

+
protected Connection getConnection(Log statementLog) throws SQLException {
+  // -------> the transaction here is the JdbcTransaction
+  Connection connection = transaction.getConnection();
+  if (statementLog.isDebugEnabled()) {
+    return ConnectionLogger.newInstance(connection, statementLog, queryStack);
+  } else {
+    return connection;
+  }
+}
- if (this.retrieveGeneratedKeys) { - rs.setFirstCharOfQuery(this.getQueryInfo().getFirstStmtChar()); - } +

which calls:

+
  @Override
+  public Connection getConnection() throws SQLException {
+    if (connection == null) {
+      openConnection();
+    }
+    return connection;
+  }
 
-                        if (oldDb != null) {
-                            locallyScopedConn.setDatabase(oldDb);
-                        }
+  protected void openConnection() throws SQLException {
+    if (log.isDebugEnabled()) {
+      log.debug("Opening JDBC Connection");
+    }
+    connection = dataSource.getConnection();
+    if (level != null) {
+      connection.setTransactionIsolation(level.getLevel());
+    }
+    setDesiredAutoCommit(autoCommit);
+  }
+  @Override
+  public Connection getConnection() throws SQLException {
+    return popConnection(dataSource.getUsername(), dataSource.getPassword()).getProxyConnection();
+  }
 
-                        if (rs != null) {
-                            this.lastInsertId = rs.getUpdateID();
-                            this.results = rs;
-                        }
+private PooledConnection popConnection(String username, String password) throws SQLException {
+    boolean countedWait = false;
+    PooledConnection conn = null;
+    long t = System.currentTimeMillis();
+    int localBadConnectionCount = 0;
 
-                        return rs != null && rs.hasRows();
-                    }
+    while (conn == null) {
+      lock.lock();
+      try {
+        if (!state.idleConnections.isEmpty()) {
+          // Pool has available connection
+          conn = state.idleConnections.remove(0);
+          if (log.isDebugEnabled()) {
+            log.debug("Checked out connection " + conn.getRealHashCode() + " from pool.");
+          }
+        } else {
+          // Pool does not have available connection
+          if (state.activeConnections.size() < poolMaximumActiveConnections) {
+            // Can create new connection
+            // ------------> reaching here creates a PooledConnection, but dataSource.getConnection() is called first inside
+            conn = new PooledConnection(dataSource.getConnection(), this);
+            if (log.isDebugEnabled()) {
+              log.debug("Created connection " + conn.getRealHashCode() + ".");
+            }
+          } else {
+            // Cannot create new connection
+            PooledConnection oldestActiveConnection = state.activeConnections.get(0);
+            long longestCheckoutTime = oldestActiveConnection.getCheckoutTime();
+            if (longestCheckoutTime > poolMaximumCheckoutTime) {
+              // Can claim overdue connection
+              state.claimedOverdueConnectionCount++;
+              state.accumulatedCheckoutTimeOfOverdueConnections += longestCheckoutTime;
+              state.accumulatedCheckoutTime += longestCheckoutTime;
+              state.activeConnections.remove(oldestActiveConnection);
+              if (!oldestActiveConnection.getRealConnection().getAutoCommit()) {
+                try {
+                  oldestActiveConnection.getRealConnection().rollback();
+                } catch (SQLException e) {
+                  /*
+                     Just log a message for debug and continue to execute the following
+                     statement like nothing happened.
+                     Wrap the bad connection with a new PooledConnection, this will help
+                     to not interrupt current executing thread and give current thread a
+                     chance to join the next competition for another valid/good database
+                     connection. At the end of this loop, bad {@link @conn} will be set as null.
+                   */
+                  log.debug("Bad connection. Could not roll back");
+                }
+              }
+              conn = new PooledConnection(oldestActiveConnection.getRealConnection(), this);
+              conn.setCreatedTimestamp(oldestActiveConnection.getCreatedTimestamp());
+              conn.setLastUsedTimestamp(oldestActiveConnection.getLastUsedTimestamp());
+              oldestActiveConnection.invalidate();
+              if (log.isDebugEnabled()) {
+                log.debug("Claimed overdue connection " + conn.getRealHashCode() + ".");
+              }
+            } else {
+              // Must wait
+              try {
+                if (!countedWait) {
+                  state.hadToWaitCount++;
+                  countedWait = true;
                 }
+                if (log.isDebugEnabled()) {
+                  log.debug("Waiting as long as " + poolTimeToWait + " milliseconds for connection.");
+                }
+                long wt = System.currentTimeMillis();
+                condition.await(poolTimeToWait, TimeUnit.MILLISECONDS);
+                state.accumulatedWaitTime += System.currentTimeMillis() - wt;
+              } catch (InterruptedException e) {
+                // set interrupt flag
+                Thread.currentThread().interrupt();
+                break;
+              }
             }
-        } catch (CJException var11) {
-            throw SQLExceptionsMapping.translateException(var11, this.getExceptionInterceptor());
+          }
         }
-    }
+        if (conn != null) {
+          // ping to server and check the connection is valid or not
+          if (conn.isValid()) {
+            if (!conn.getRealConnection().getAutoCommit()) {
+              conn.getRealConnection().rollback();
+            }
+            conn.setConnectionTypeCode(assembleConnectionTypeCode(dataSource.getUrl(), username, password));
+            conn.setCheckoutTimestamp(System.currentTimeMillis());
+            conn.setLastUsedTimestamp(System.currentTimeMillis());
+            state.activeConnections.add(conn);
+            state.requestCount++;
+            state.accumulatedRequestTime += System.currentTimeMillis() - t;
+          } else {
+            if (log.isDebugEnabled()) {
+              log.debug("A bad connection (" + conn.getRealHashCode() + ") was returned from the pool, getting another connection.");
+            }
+            state.badConnectionCount++;
+            localBadConnectionCount++;
+            conn = null;
+            if (localBadConnectionCount > (poolMaximumIdleConnections + poolMaximumLocalBadConnectionTolerance)) {
+              if (log.isDebugEnabled()) {
+                log.debug("PooledDataSource: Could not get a good connection to the database.");
+              }
+              throw new SQLException("PooledDataSource: Could not get a good connection to the database.");
+            }
+          }
+        }
+      } finally {
+        lock.unlock();
+      }
-]]>
- Categories: Java / Mybatis; Tags: Java / Mysql / Mybatis
- mybatis系列-第一条sql的更多细节
- /2022/12/18/mybatis%E7%B3%BB%E5%88%97-%E7%AC%AC%E4%B8%80%E6%9D%A1sql%E7%9A%84%E6%9B%B4%E5%A4%9A%E7%BB%86%E8%8A%82/
- Execution details
First, the default languageDriver is set, in Configuration's constructor:
org/mybatis/mybatis/3.5.11/mybatis-3.5.11-sources.jar!/org/apache/ibatis/session/Configuration.java:215

-
languageRegistry.setDefaultDriverClass(XMLLanguageDriver.class);
+ } -

Then in
org.apache.ibatis.builder.xml.XMLStatementBuilder#parseStatementNode
the sqlSource is created, and the concrete sqlSource is chosen according to the LanguageDriver implementation picked above:

-
SqlSource sqlSource = langDriver.createSqlSource(configuration, context, parameterTypeClass);
+    if (conn == null) {
+      if (log.isDebugEnabled()) {
+        log.debug("PooledDataSource: Unknown severe error condition. The connection pool returned a null connection.");
+      }
+      throw new SQLException("PooledDataSource: Unknown severe error condition. The connection pool returned a null connection.");
+    }
-

createSqlSource then calls:

-
@Override
-public SqlSource createSqlSource(Configuration configuration, XNode script, Class<?> parameterType) {
-  XMLScriptBuilder builder = new XMLScriptBuilder(configuration, script, parameterType);
-  return builder.parseScriptNode();
-}
+    return conn;
+  }
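The thresholds popConnection compares against are all settable on the pool; a small sketch of my own with placeholder connection details:

```java
import org.apache.ibatis.datasource.pooled.PooledDataSource;

public class PoolTuningDemo {
    public static void main(String[] args) {
        PooledDataSource ds = new PooledDataSource(
                "com.mysql.cj.jdbc.Driver", "jdbc:mysql://localhost:3306/test", "user", "password");
        ds.setPoolMaximumActiveConnections(10); // limit checked by the "cannot create new connection" branch
        ds.setPoolMaximumIdleConnections(5);    // how many idle connections the pool keeps around
        ds.setPoolMaximumCheckoutTime(20000);   // past this, an active connection counts as overdue
        ds.setPoolTimeToWait(20000);            // how long popConnection parks in condition.await
    }
}
```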
-

The next step lives in parseScriptNode, i.e. org.apache.ibatis.scripting.xmltags.XMLScriptBuilder#parseScriptNode:

-
public SqlSource parseScriptNode() {
-  MixedSqlNode rootSqlNode = parseDynamicTags(context);
-  SqlSource sqlSource;
-  if (isDynamic) {
-    sqlSource = new DynamicSqlSource(configuration, rootSqlNode);
-  } else {
-    sqlSource = new RawSqlSource(configuration, rootSqlNode, parameterType);
+

which in fact just calls:

+
// org.apache.ibatis.datasource.unpooled.UnpooledDataSource#getConnection()
+  @Override
+  public Connection getConnection() throws SQLException {
+    return doGetConnection(username, password);
   }
-  return sqlSource;
-}
+```

First the dynamic tags have to be parsed, via org.apache.ibatis.scripting.xmltags.XMLScriptBuilder#parseDynamicTags:

-
protected MixedSqlNode parseDynamicTags(XNode node) {
-    List<SqlNode> contents = new ArrayList<>();
-    NodeList children = node.getNode().getChildNodes();
-    for (int i = 0; i < children.getLength(); i++) {
-      XNode child = node.newXNode(children.item(i));
-      if (child.getNode().getNodeType() == Node.CDATA_SECTION_NODE || child.getNode().getNodeType() == Node.TEXT_NODE) {
-        String data = child.getStringBody("");
-        TextSqlNode textSqlNode = new TextSqlNode(data);
-        // ---------> 主要是这边的逻辑
-        if (textSqlNode.isDynamic()) {
-          contents.add(textSqlNode);
-          isDynamic = true;
-        } else {
-          contents.add(new StaticTextSqlNode(data));
-        }
-      } else if (child.getNode().getNodeType() == Node.ELEMENT_NODE) { // issue #628
-        String nodeName = child.getNode().getNodeName();
-        NodeHandler handler = nodeHandlerMap.get(nodeName);
-        if (handler == null) {
-          throw new BuilderException("Unknown element <" + nodeName + "> in SQL statement.");
-        }
-        handler.handleNode(child, contents);
-        isDynamic = true;
-      }
+And then:
+```java
+private Connection doGetConnection(String username, String password) throws SQLException {
+    Properties props = new Properties();
+    if (driverProperties != null) {
+      props.putAll(driverProperties);
     }
-    return new MixedSqlNode(contents);
-  }
- -

Whether the sql is dynamic is decided by org.apache.ibatis.scripting.xmltags.TextSqlNode#isDynamic:

-
public boolean isDynamic() {
-  DynamicCheckerTokenParser checker = new DynamicCheckerTokenParser();
-  // ----------> 主要是这里的方法
-  GenericTokenParser parser = createParser(checker);
-  parser.parse(text);
-  return checker.isDynamic();
-}
+ if (username != null) { + props.setProperty("user", username); + } + if (password != null) { + props.setProperty("password", password); + } + return doGetConnection(props); + }
-

Looking at how the parser is created shows what it actually does: it simply scans for ${ and }:

-
private GenericTokenParser createParser(TokenHandler handler) {
-  return new GenericTokenParser("${", "}", handler);
-}
+

Continuing this logic:

+
  private Connection doGetConnection(Properties properties) throws SQLException {
+    initializeDriver();
+    Connection connection = DriverManager.getConnection(url, properties);
+    configureConnection(connection);
+    return connection;
+  }
+    @CallerSensitive
+    public static Connection getConnection(String url,
+        java.util.Properties info) throws SQLException {
 
-

If such a token is found, isDynamic is set to true up above, and in that case a DynamicSqlSource is created:

-
sqlSource = new DynamicSqlSource(configuration, rootSqlNode);
+ return (getConnection(url, info, Reflection.getCallerClass())); + } +private static Connection getConnection( + String url, java.util.Properties info, Class<?> caller) throws SQLException { + /* + * When callerCl is null, we should check the application's + * (which is invoking this class indirectly) + * classloader, so that the JDBC driver class outside rt.jar + * can be loaded from here. + */ + ClassLoader callerCL = caller != null ? caller.getClassLoader() : null; + synchronized(DriverManager.class) { + // synchronize loading of the correct classloader. + if (callerCL == null) { + callerCL = Thread.currentThread().getContextClassLoader(); + } + } -

Otherwise a RawSqlSource is created:

-
sqlSource = new RawSqlSource(configuration, rootSqlNode, parameterType);
-```java
+        if(url == null) {
+            throw new SQLException("The url cannot be null", "08001");
+        }
 
-但是这不是一个真实可用的 `sqlSource` ,
-实际创建的时候会走到这
-```java
-public RawSqlSource(Configuration configuration, SqlNode rootSqlNode, Class<?> parameterType) {
-    this(configuration, getSql(configuration, rootSqlNode), parameterType);
-  }
+        println("DriverManager.getConnection(\"" + url + "\")");
 
-  public RawSqlSource(Configuration configuration, String sql, Class<?> parameterType) {
-    SqlSourceBuilder sqlSourceParser = new SqlSourceBuilder(configuration);
-    Class<?> clazz = parameterType == null ? Object.class : parameterType;
-    sqlSource = sqlSourceParser.parse(sql, clazz, new HashMap<>());
-  }
+ // Walk through the loaded registeredDrivers attempting to make a connection. + // Remember the first exception that gets raised so we can reraise it. + SQLException reason = null; -

The concrete sqlSource is created by org.apache.ibatis.builder.SqlSourceBuilder#parse.
The code logic is:

-
public SqlSource parse(String originalSql, Class<?> parameterType, Map<String, Object> additionalParameters) {
-  ParameterMappingTokenHandler handler = new ParameterMappingTokenHandler(configuration, parameterType, additionalParameters);
-  GenericTokenParser parser = new GenericTokenParser("#{", "}", handler);
-  String sql;
-  if (configuration.isShrinkWhitespacesInSql()) {
-    sql = parser.parse(removeExtraWhitespaces(originalSql));
-  } else {
-    sql = parser.parse(originalSql);
-  }
-  return new StaticSqlSource(configuration, sql, handler.getParameterMappings());
-}
+ for(DriverInfo aDriver : registeredDrivers) { + // If the caller does not have permission to load the driver then + // skip it. + if(isDriverAllowed(aDriver.driver, callerCL)) { + try { + // ----------> driver[className=com.mysql.cj.jdbc.Driver@64030b91] + println(" trying " + aDriver.driver.getClass().getName()); + Connection con = aDriver.driver.connect(url, info); + if (con != null) { + // Success! + println("getConnection returning " + aDriver.driver.getClass().getName()); + return (con); + } + } catch (SQLException ex) { + if (reason == null) { + reason = ex; + } + } -

What actually gets created here is a StaticSqlSource. A side note: the parser above has rewritten the original sql, select * from student where id = #{id}, into select * from student where id = ?, and then the StaticSqlSource is built:

-
public StaticSqlSource(Configuration configuration, String sql, List<ParameterMapping> parameterMappings) {
-  this.sql = sql;
-  this.parameterMappings = parameterMappings;
-  this.configuration = configuration;
-}
+ } else { + println(" skipping: " + aDriver.getClass().getName()); + } -

Why walk through so much seemingly unrelated code? Because of the very first piece of code we used to execute the sql:

-
@Override
-  public <E> List<E> query(MappedStatement ms, Object parameterObject, RowBounds rowBounds, ResultHandler resultHandler) throws SQLException {
-    BoundSql boundSql = ms.getBoundSql(parameterObject);
-    CacheKey key = createCacheKey(ms, parameterObject, rowBounds, boundSql);
-    return query(ms, parameterObject, rowBounds, resultHandler, key, boundSql);
-  }
+ } -

Here a BoundSql is obtained. So where does the BoundSql come from? First, org.apache.ibatis.mapping.MappedStatement#getBoundSql is called:

-
public BoundSql getBoundSql(Object parameterObject) {
-    BoundSql boundSql = sqlSource.getBoundSql(parameterObject);
-    List<ParameterMapping> parameterMappings = boundSql.getParameterMappings();
-    if (parameterMappings == null || parameterMappings.isEmpty()) {
-      boundSql = new BoundSql(configuration, boundSql.getSql(), parameterMap.getParameterMappings(), parameterObject);
-    }
+        // if we got here nobody could connect.
+        if (reason != null)    {
+            println("getConnection failed: " + reason);
+            throw reason;
+        }
 
-    // check for nested result maps in parameter mappings (issue #30)
-    for (ParameterMapping pm : boundSql.getParameterMappings()) {
-      String rmId = pm.getResultMapId();
-      if (rmId != null) {
-        ResultMap rm = configuration.getResultMap(rmId);
-        if (rm != null) {
-          hasNestedResultMaps |= rm.hasNestedResultMaps();
+        println("getConnection: no suitable driver found for "+ url);
+        throw new SQLException("No suitable driver found for "+ url, "08001");
+    }
+ + +

The driver above is driver[className=com.mysql.cj.jdbc.Driver@64030b91]:

+
// com.mysql.cj.jdbc.NonRegisteringDriver#connect
+public Connection connect(String url, Properties info) throws SQLException {
+        try {
+            try {
+                if (!ConnectionUrl.acceptsUrl(url)) {
+                    return null;
+                } else {
+                    ConnectionUrl conStr = ConnectionUrl.getConnectionUrlInstance(url, info);
+                    switch (conStr.getType()) {
+                        case SINGLE_CONNECTION:
+                            return ConnectionImpl.getInstance(conStr.getMainHost());
+                        case FAILOVER_CONNECTION:
+                        case FAILOVER_DNS_SRV_CONNECTION:
+                            return FailoverConnectionProxy.createProxyInstance(conStr);
+                        case LOADBALANCE_CONNECTION:
+                        case LOADBALANCE_DNS_SRV_CONNECTION:
+                            return LoadBalancedConnectionProxy.createProxyInstance(conStr);
+                        case REPLICATION_CONNECTION:
+                        case REPLICATION_DNS_SRV_CONNECTION:
+                            return ReplicationConnectionProxy.createProxyInstance(conStr);
+                        default:
+                            return null;
+                    }
+                }
+            } catch (UnsupportedConnectionStringException var5) {
+                return null;
+            } catch (CJException var6) {
+                throw (UnableToConnectException)ExceptionFactory.createException(UnableToConnectException.class, Messages.getString("NonRegisteringDriver.17", new Object[]{var6.toString()}), var6);
+            }
+        } catch (CJException var7) {
+            throw SQLExceptionsMapping.translateException(var7);
         }
-      }
-    }
+    }
- return boundSql; - }
+

This is a SINGLE_CONNECTION, so the call taken is return ConnectionImpl.getInstance(conStr.getMainHost());.
And here the proxy is set up:

+
public PooledConnection(Connection connection, PooledDataSource dataSource) {
+    this.hashCode = connection.hashCode();
+    this.realConnection = connection;
+    this.dataSource = dataSource;
+    this.createdTimestamp = System.currentTimeMillis();
+    this.lastUsedTimestamp = System.currentTimeMillis();
+    this.valid = true;
+    this.proxyConnection = (Connection) Proxy.newProxyInstance(Connection.class.getClassLoader(), IFACES, this);
+  }
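PooledConnection itself is the InvocationHandler behind that proxy. The pattern, reduced to a simplified sketch of my own (the real class additionally intercepts close() so the connection is returned to the pool instead of being closed):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.sql.Connection;

public class LoggingConnectionHandler implements InvocationHandler {
    private final Connection real;

    private LoggingConnectionHandler(Connection real) {
        this.real = real;
    }

    static Connection wrap(Connection real) {
        return (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[] { Connection.class },
                new LoggingConnectionHandler(real));
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        // PooledConnection checks for "close" at this point and recycles itself instead
        System.out.println("connection call: " + method.getName());
        return method.invoke(real, args);
    }
}
```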
-

And from the parsing above we can see that this sqlSource is wrapped in a RawSqlSource, whose getBoundSql simply delegates to the inner sqlSource:

+

Combine that with this:

@Override
-public BoundSql getBoundSql(Object parameterObject) {
-  return sqlSource.getBoundSql(parameterObject);
+public Connection getConnection() throws SQLException {
+  return popConnection(dataSource.getUsername(), dataSource.getPassword()).getProxyConnection();
 }
-

The inner sqlSource is the StaticSqlSource:

-
@Override
-public BoundSql getBoundSql(Object parameterObject) {
-  return new BoundSql(configuration, sql, parameterMappings, parameterObject);
-}
+

So the final connection is com.mysql.cj.jdbc.ConnectionImpl@358ab600.
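From user code this can be observed through SqlSession; a sketch of my own, assuming a configured factory with the POOLED datasource:

```java
import java.sql.Connection;
import org.apache.ibatis.session.SqlSession;
import org.apache.ibatis.session.SqlSessionFactory;

public class ConnectionPeekDemo {
    static void peek(SqlSessionFactory factory) {
        try (SqlSession session = factory.openSession()) {
            Connection conn = session.getConnection();
            // prints a java.sql Connection proxy class; the delegate underneath is
            // com.mysql.cj.jdbc.ConnectionImpl, handed out by PooledDataSource
            System.out.println(conn.getClass());
        }
    }
}
```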

+]]>
+ Categories: Java / Mybatis; Tags: Java / Mysql / Mybatis
+ php-abstract-class-and-interface
+ /2016/11/10/php-abstract-class-and-interface/
+ PHP abstract classes and interfaces
  • Abstract classes vs. interfaces
  • An abstract class may contain non-abstract methods, i.e. methods with an implementation
  • A class containing at least one abstract method must itself be declared abstract; neither abstract classes nor interfaces can be instantiated
  • Abstract class members can carry access modifiers; interface members are public by default
  • A class can implement multiple interfaces, but it can extend only one abstract class
  • A concrete class must implement the abstract methods of its abstract parent or interfaces, but need not override the parent's non-abstract methods
  • An interface cannot declare member variables, but it can declare constants
+

Sample code

<?php
+interface int1{
+    const INTER1 = 111;
+    function inter1();
+}
+interface int2{
+    const INTER1 = 222;
+    function inter2();
+}
+abstract class abst1{
+    public function abstr1(){
+        echo 1111;
+    }
+    abstract function abstra1(){
+        echo 'ahahahha';
+    }
+}
+abstract class abst2{
+    public function abstr2(){
+        echo 1111;
+    }
+    abstract function abstra2();
+}
+class normal1 extends abst1{
+    protected function abstr2(){
+        echo 222;
+    }
+}
-

The content of BoundSql is also fairly simple:

-
public BoundSql(Configuration configuration, String sql, List<ParameterMapping> parameterMappings, Object parameterObject) {
-  this.sql = sql;
-  this.parameterMappings = parameterMappings;
-  this.parameterObject = parameterObject;
-  this.additionalParameters = new HashMap<>();
-  this.metaParameters = configuration.newMetaObject(additionalParameters);
-}
+

result

PHP Fatal error:  Abstract function abst1::abstra1() cannot contain body in new.php on line 17
 
-

And last time, in org.apache.ibatis.executor.SimpleExecutor#doQuery, one thing was left out: the StatementHandler logic:

-
@Override
-public <E> List<E> doQuery(MappedStatement ms, Object parameter, RowBounds rowBounds, ResultHandler resultHandler, BoundSql boundSql) throws SQLException {
-  Statement stmt = null;
-  try {
-    Configuration configuration = ms.getConfiguration();
-    StatementHandler handler = configuration.newStatementHandler(wrapper, ms, parameter, rowBounds, resultHandler, boundSql);
-    stmt = prepareStatement(handler, ms.getStatementLog());
-    return handler.query(stmt, resultHandler);
-  } finally {
-    closeStatement(stmt);
+Fatal error: Abstract function abst1::abstra1() cannot contain body in php on line 17
+]]>
+ Categories: php; Tags: php
+ mybatis系列-第一条sql的细节
+ /2022/12/11/mybatis%E7%B3%BB%E5%88%97-%E7%AC%AC%E4%B8%80%E6%9D%A1sql%E7%9A%84%E7%BB%86%E8%8A%82/
+ First, two points to fill in.
The first: earlier we said that
org.apache.ibatis.builder.xml.XMLConfigBuilder is used to create the parser, so what does the parsing actually produce?
Look at this method's return value:

+
public Configuration parse() {
+  if (parsed) {
+    throw new BuilderException("Each XMLConfigBuilder can only be used once.");
   }
-}
+  parsed = true;
+  parseConfiguration(parser.evalNode("/configuration"));
+  return configuration;
+}
-

It uses the statementType to decide which statementHandler to use; in our case that is the PreparedStatementHandler:

-
public RoutingStatementHandler(Executor executor, MappedStatement ms, Object parameter, RowBounds rowBounds, ResultHandler resultHandler, BoundSql boundSql) {
+

It returns org.apache.ibatis.session.Configuration, and this Configuration is one of the most important core classes in mybatis. Here are its member fields:

+
public class Configuration {
 
-  switch (ms.getStatementType()) {
-    case STATEMENT:
-      delegate = new SimpleStatementHandler(executor, ms, parameter, rowBounds, resultHandler, boundSql);
-      break;
-    case PREPARED:
-      delegate = new PreparedStatementHandler(executor, ms, parameter, rowBounds, resultHandler, boundSql);
-      break;
-    case CALLABLE:
-      delegate = new CallableStatementHandler(executor, ms, parameter, rowBounds, resultHandler, boundSql);
-      break;
-    default:
-      throw new ExecutorException("Unknown statement type: " + ms.getStatementType());
-  }
+  protected Environment environment;
 
-}
+  protected boolean safeRowBoundsEnabled;
+  protected boolean safeResultHandlerEnabled = true;
+  protected boolean mapUnderscoreToCamelCase;
+  protected boolean aggressiveLazyLoading;
+  protected boolean multipleResultSetsEnabled = true;
+  protected boolean useGeneratedKeys;
+  protected boolean useColumnLabel = true;
+  protected boolean cacheEnabled = true;
+  protected boolean callSettersOnNulls;
+  protected boolean useActualParamName = true;
+  protected boolean returnInstanceForEmptyRow;
+  protected boolean shrinkWhitespacesInSql;
+  protected boolean nullableOnForEach;
+  protected boolean argNameBasedConstructorAutoMapping;

所以上次有个细节可以补充,这边的doQuery里面的handler.query 应该是调用了PreparedStatementHandler 的query方法

-
@Override
-public <E> List<E> doQuery(MappedStatement ms, Object parameter, RowBounds rowBounds, ResultHandler resultHandler, BoundSql boundSql) throws SQLException {
-  Statement stmt = null;
-  try {
-    Configuration configuration = ms.getConfiguration();
-    StatementHandler handler = configuration.newStatementHandler(wrapper, ms, parameter, rowBounds, resultHandler, boundSql);
-    stmt = prepareStatement(handler, ms.getStatementLog());
-    return handler.query(stmt, resultHandler);
-  } finally {
-    closeStatement(stmt);
-  }
-}
+  protected String logPrefix;
+  protected Class<? extends Log> logImpl;
+  protected Class<? extends VFS> vfsImpl;
+  protected Class<?> defaultSqlProviderType;
+  protected LocalCacheScope localCacheScope = LocalCacheScope.SESSION;
+  protected JdbcType jdbcTypeForNull = JdbcType.OTHER;
+  protected Set<String> lazyLoadTriggerMethods = new HashSet<>(Arrays.asList("equals", "clone", "hashCode", "toString"));
+  protected Integer defaultStatementTimeout;
+  protected Integer defaultFetchSize;
+  protected ResultSetType defaultResultSetType;
+  protected ExecutorType defaultExecutorType = ExecutorType.SIMPLE;
+  protected AutoMappingBehavior autoMappingBehavior = AutoMappingBehavior.PARTIAL;
+  protected AutoMappingUnknownColumnBehavior autoMappingUnknownColumnBehavior = AutoMappingUnknownColumnBehavior.NONE;
+  protected Properties variables = new Properties();
+  protected ReflectorFactory reflectorFactory = new DefaultReflectorFactory();
+  protected ObjectFactory objectFactory = new DefaultObjectFactory();
+  protected ObjectWrapperFactory objectWrapperFactory = new DefaultObjectWrapperFactory();
-

因为上面 prepareStatement 中 getConnection 拿到的 connection 就是 com.mysql.cj.jdbc.ConnectionImpl#ConnectionImpl(com.mysql.cj.conf.HostInfo) 构造出来的实例

-
@Override
-public <E> List<E> query(Statement statement, ResultHandler resultHandler) throws SQLException {
-  PreparedStatement ps = (PreparedStatement) statement;
-  ps.execute();
-  return resultSetHandler.handleResultSets(ps);
-}
+  protected boolean lazyLoadingEnabled = false;
+  protected ProxyFactory proxyFactory = new JavassistProxyFactory(); // #224 Using internal Javassist instead of OGNL
-

那又为什么是这个呢,可以往上翻,我们在 mybatis-config.xml 里配置的是

-
<transactionManager type="JDBC"/>
+  protected String databaseId;
+  /**
+   * Configuration factory class.
+   * Used to create Configuration for loading deserialized unread properties.
+   *
+   * @see <a href='https://github.com/mybatis/old-google-code-issues/issues/300'>Issue 300 (google code)</a>
+   */
+  protected Class<?> configurationFactory;
-

因此在parseConfiguration中配置environment时

-
private void parseConfiguration(XNode root) {
-    try {
-      // issue #117 read properties first
-      propertiesElement(root.evalNode("properties"));
-      Properties settings = settingsAsProperties(root.evalNode("settings"));
-      loadCustomVfs(settings);
-      loadCustomLogImpl(settings);
-      typeAliasesElement(root.evalNode("typeAliases"));
-      pluginElement(root.evalNode("plugins"));
-      objectFactoryElement(root.evalNode("objectFactory"));
-      objectWrapperFactoryElement(root.evalNode("objectWrapperFactory"));
-      reflectorFactoryElement(root.evalNode("reflectorFactory"));
-      settingsElement(settings);
-      // read it after objectFactory and objectWrapperFactory issue #631
-      // ----------> 就是这里
-      environmentsElement(root.evalNode("environments"));
-      databaseIdProviderElement(root.evalNode("databaseIdProvider"));
-      typeHandlerElement(root.evalNode("typeHandlers"));
-      mapperElement(root.evalNode("mappers"));
-    } catch (Exception e) {
-      throw new BuilderException("Error parsing SQL Mapper Configuration. Cause: " + e, e);
-    }
-  }
+  protected final MapperRegistry mapperRegistry = new MapperRegistry(this);
+  protected final InterceptorChain interceptorChain = new InterceptorChain();
+  protected final TypeHandlerRegistry typeHandlerRegistry = new TypeHandlerRegistry(this);
+  protected final TypeAliasRegistry typeAliasRegistry = new TypeAliasRegistry();
+  protected final LanguageDriverRegistry languageRegistry = new LanguageDriverRegistry();
-

调用的这个方法通过获取xml中的transactionManager 配置的类型,也就是JDBC

-
private void environmentsElement(XNode context) throws Exception {
-  if (context != null) {
-    if (environment == null) {
-      environment = context.getStringAttribute("default");
-    }
-    for (XNode child : context.getChildren()) {
-      String id = child.getStringAttribute("id");
-      if (isSpecifiedEnvironment(id)) {
-        // -------> 找到这里
-        TransactionFactory txFactory = transactionManagerElement(child.evalNode("transactionManager"));
-        DataSourceFactory dsFactory = dataSourceElement(child.evalNode("dataSource"));
-        DataSource dataSource = dsFactory.getDataSource();
-        Environment.Builder environmentBuilder = new Environment.Builder(id)
-            .transactionFactory(txFactory)
-            .dataSource(dataSource);
-        configuration.setEnvironment(environmentBuilder.build());
-        break;
-      }
-    }
-  }
-}
+  protected final Map<String, MappedStatement> mappedStatements = new StrictMap<MappedStatement>("Mapped Statements collection")
+      .conflictMessageProducer((savedValue, targetValue) ->
+          ". please check " + savedValue.getResource() + " and " + targetValue.getResource());
+  protected final Map<String, Cache> caches = new StrictMap<>("Caches collection");
+  protected final Map<String, ResultMap> resultMaps = new StrictMap<>("Result Maps collection");
+  protected final Map<String, ParameterMap> parameterMaps = new StrictMap<>("Parameter Maps collection");
+  protected final Map<String, KeyGenerator> keyGenerators = new StrictMap<>("Key Generators collection");
-

是通过以下方法获取的,

-
// 方法全限定名 org.apache.ibatis.builder.xml.XMLConfigBuilder#transactionManagerElement
-private TransactionFactory transactionManagerElement(XNode context) throws Exception {
-    if (context != null) {
-      String type = context.getStringAttribute("type");
-      Properties props = context.getChildrenAsProperties();
-      TransactionFactory factory = (TransactionFactory) resolveClass(type).getDeclaredConstructor().newInstance();
-      factory.setProperties(props);
-      return factory;
-    }
-    throw new BuilderException("Environment declaration requires a TransactionFactory.");
-  }
+  protected final Set<String> loadedResources = new HashSet<>();
+  protected final Map<String, XNode> sqlFragments = new StrictMap<>("XML fragments parsed from previous mappers");
 
-// 方法全限定名 org.apache.ibatis.builder.BaseBuilder#resolveClass
-protected <T> Class<? extends T> resolveClass(String alias) {
-    if (alias == null) {
-      return null;
-    }
-    try {
-      return resolveAlias(alias);
-    } catch (Exception e) {
-      throw new BuilderException("Error resolving class. Cause: " + e, e);
-    }
-  }
+  protected final Collection<XMLStatementBuilder> incompleteStatements = new LinkedList<>();
+  protected final Collection<CacheRefResolver> incompleteCacheRefs = new LinkedList<>();
+  protected final Collection<ResultMapResolver> incompleteResultMaps = new LinkedList<>();
+  protected final Collection<MethodResolver> incompleteMethods = new LinkedList<>();
-// 方法全限定名 org.apache.ibatis.builder.BaseBuilder#resolveAlias
-  protected <T> Class<? extends T> resolveAlias(String alias) {
-    return typeAliasRegistry.resolveAlias(alias);
-  }
-// 方法全限定名 org.apache.ibatis.type.TypeAliasRegistry#resolveAlias
-  public <T> Class<T> resolveAlias(String string) {
-    try {
-      if (string == null) {
-        return null;
-      }
-      // issue #748
-      String key = string.toLowerCase(Locale.ENGLISH);
-      Class<T> value;
-      if (typeAliases.containsKey(key)) {
-        value = (Class<T>) typeAliases.get(key);
-      } else {
-        value = (Class<T>) Resources.classForName(string);
-      }
-      return value;
-    } catch (ClassNotFoundException e) {
-      throw new TypeException("Could not resolve type alias '" + string + "'. Cause: " + e, e);
-    }
-  }
-

而通过 JDBC 这个别名获取到的是啥呢,就是在 Configuration 的构造方法里注册好的 JdbcTransactionFactory

-
public Configuration() {
-  typeAliasRegistry.registerAlias("JDBC", JdbcTransactionFactory.class);
+

这么多成员变量,先不一一解释作用,但是其中几个我们应该已经认识了:第一个就是 mappedStatements,上一篇我们知道被解析的 mapper 语句就是放在这里;后面的 resultMaps 跟 parameterMaps 也比较常用,分别是结果和参数的映射 map;我之前有一篇解释过为啥一些变量(比如 list)的使用会比较特殊,可以参考那篇;keyGenerators 则是在我们需要定义主键生成器的时候使用。
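这里可以顺手验证一下 mappedStatements 的 key 就是「namespace.id」这种形式,比如拿入门篇里的那条查询来取(下面这小段只是示意,假设手上已经有解析好的 configuration):

// 入门篇中 StudentMapper.xml 的 namespace 加上 select 的 id,就是这条语句的全限定 key
MappedStatement ms = configuration.getMappedStatement(
        "com.nicksxs.mybatisdemo.StudentMapper.selectStudent");
System.out.println(ms.getSqlCommandType()); // 输出 SELECT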
然后第二点是我们创建的 org.apache.ibatis.session.SqlSessionFactory 是哪个,

+
public SqlSessionFactory build(Configuration config) {
+  return new DefaultSqlSessionFactory(config);
+}
-

所以我们在这

+

是这个 DefaultSqlSessionFactory ,这是其中一个 SqlSessionFactory 的实现
接下来我们看看 openSession 里干了啥

+
public SqlSession openSession() {
+  return openSessionFromDataSource(configuration.getDefaultExecutorType(), null, false);
+}
+ +

这边有几个参数,第一个是默认的执行器类型,往上找找上面贴着的 Configuration 的成员变量里可以看到默认是
protected ExecutorType defaultExecutorType = ExecutorType.SIMPLE;

+

因为没有指明特殊的执行逻辑,所以默认我们也就用简单类型的;第二个参数是事务隔离级别,第三个是是否自动提交
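顺带一提,如果不想全用默认值,SqlSessionFactory 上也有对应的重载可以显式指定这几个参数(下面的取值只是示意,sqlSessionFactory 假设就是前面 build 出来的那个):

// 指定执行器类型
SqlSession s1 = sqlSessionFactory.openSession(ExecutorType.REUSE);
// 指定事务隔离级别
SqlSession s2 = sqlSessionFactory.openSession(TransactionIsolationLevel.READ_COMMITTED);
// 指定是否自动提交
SqlSession s3 = sqlSessionFactory.openSession(true);

回到 openSessionFromDataSource 本身: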

private SqlSession openSessionFromDataSource(ExecutorType execType, TransactionIsolationLevel level, boolean autoCommit) {
   Transaction tx = null;
   try {
     final Environment environment = configuration.getEnvironment();
-    final TransactionFactory transactionFactory = getTransactionFactoryFromEnvironment(environment);
- -

获取到的 TransactionFactory 就是 JdbcTransactionFactory,而后

-
tx = transactionFactory.newTransaction(environment.getDataSource(), level, autoCommit);
-```java
-
-创建的transaction就是JdbcTransaction 
-```java
-  @Override
-  public Transaction newTransaction(DataSource ds, TransactionIsolationLevel level, boolean autoCommit) {
-    return new JdbcTransaction(ds, level, autoCommit, skipSetAutoCommitOnClose);
-  }
- -

然后我们再回上去看 getConnection 这段代码,

-
protected Connection getConnection(Log statementLog) throws SQLException {
-  // -------> 这里的transaction就是JdbcTransaction
-  Connection connection = transaction.getConnection();
-  if (statementLog.isDebugEnabled()) {
-    return ConnectionLogger.newInstance(connection, statementLog, queryStack);
-  } else {
-    return connection;
-  }
-}
- -

即调用了

-
  @Override
-  public Connection getConnection() throws SQLException {
-    if (connection == null) {
-      openConnection();
-    }
-    return connection;
+    final TransactionFactory transactionFactory = getTransactionFactoryFromEnvironment(environment);
+    tx = transactionFactory.newTransaction(environment.getDataSource(), level, autoCommit);
+    // --------> 先关注这里
+    final Executor executor = configuration.newExecutor(tx, execType);
+    return new DefaultSqlSession(configuration, executor, autoCommit);
+  } catch (Exception e) {
+    closeTransaction(tx); // may have fetched a connection so lets call close()
+    throw ExceptionFactory.wrapException("Error opening session.  Cause: " + e, e);
+  } finally {
+    ErrorContext.instance().reset();
   }
+}
-  protected void openConnection() throws SQLException {
-    if (log.isDebugEnabled()) {
-      log.debug("Opening JDBC Connection");
-    }
-    connection = dataSource.getConnection();
-    if (level != null) {
-      connection.setTransactionIsolation(level.getLevel());
-    }
-    setDesiredAutoCommit(autoCommit);
+

具体是调用了 Configuration 的这个方法

+
public Executor newExecutor(Transaction transaction, ExecutorType executorType) {
+  executorType = executorType == null ? defaultExecutorType : executorType;
+  Executor executor;
+  if (ExecutorType.BATCH == executorType) {
+    executor = new BatchExecutor(this, transaction);
+  } else if (ExecutorType.REUSE == executorType) {
+    executor = new ReuseExecutor(this, transaction);
+  } else {
+    // ---------> 会走到这个分支
+    executor = new SimpleExecutor(this, transaction);
   }
-  @Override
-  public Connection getConnection() throws SQLException {
-    return popConnection(dataSource.getUsername(), dataSource.getPassword()).getProxyConnection();
+  if (cacheEnabled) {
+    executor = new CachingExecutor(executor);
   }
+  executor = (Executor) interceptorChain.pluginAll(executor);
+  return executor;
+}
-private PooledConnection popConnection(String username, String password) throws SQLException {
-  boolean countedWait = false;
-  PooledConnection conn = null;
-  long t = System.currentTimeMillis();
-  int localBadConnectionCount = 0;
+

上面传入的 executorTypeConfiguration 的默认类型,也就是 simple 类型,并且 cacheEnabledConfiguration 默认为 true,所以会包装成CachingExecutor ,然后后面就是插件了,这块我们先不展开
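虽然插件这块先不展开,这里可以放一个最小的 Interceptor 示意,感受下 pluginAll 包装进去的是什么(这个打印插件是随手写的演示,并不是必须的配置):

import java.util.Properties;
import org.apache.ibatis.executor.Executor;
import org.apache.ibatis.mapping.MappedStatement;
import org.apache.ibatis.plugin.*;
import org.apache.ibatis.session.ResultHandler;
import org.apache.ibatis.session.RowBounds;

// 拦截 Executor.query 的四参数重载,在执行前打印语句 id
@Intercepts({@Signature(type = Executor.class, method = "query",
        args = {MappedStatement.class, Object.class, RowBounds.class, ResultHandler.class})})
public class LogPlugin implements Interceptor {
    @Override
    public Object intercept(Invocation invocation) throws Throwable {
        MappedStatement ms = (MappedStatement) invocation.getArgs()[0];
        System.out.println("executing: " + ms.getId());
        return invocation.proceed(); // 继续原有调用链
    }

    @Override
    public Object plugin(Object target) {
        return Plugin.wrap(target, this); // 用动态代理把自己包进去
    }

    @Override
    public void setProperties(Properties properties) {
    }
}

写好之后在 mybatis-config.xml 的 plugins 节点里注册一下就会生效。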
然后我们的 openSession 返回的就是新创建的 DefaultSqlSession

+
public DefaultSqlSession(Configuration configuration, Executor executor, boolean autoCommit) {
+    this.configuration = configuration;
+    this.executor = executor;
+    this.dirty = false;
+    this.autoCommit = autoCommit;
+  }
-  while (conn == null) {
-    lock.lock();
-    try {
-      if (!state.idleConnections.isEmpty()) {
-        // Pool has available connection
-        conn = state.idleConnections.remove(0);
-        if (log.isDebugEnabled()) {
-          log.debug("Checked out connection " + conn.getRealHashCode() + " from pool.");
-        }
-      } else {
-        // Pool does not have available connection
-        if (state.activeConnections.size() < poolMaximumActiveConnections) {
-          // Can create new connection
-          // ------------> 走到这里会创建PooledConnection,但是里面会先调用dataSource.getConnection()
-          conn = new PooledConnection(dataSource.getConnection(), this);
-          if (log.isDebugEnabled()) {
-            log.debug("Created connection " + conn.getRealHashCode() + ".");
-          }
-        } else {
-          // Cannot create new connection
-          PooledConnection oldestActiveConnection = state.activeConnections.get(0);
-          long longestCheckoutTime = oldestActiveConnection.getCheckoutTime();
-          if (longestCheckoutTime > poolMaximumCheckoutTime) {
-            // Can claim overdue connection
-            state.claimedOverdueConnectionCount++;
-            state.accumulatedCheckoutTimeOfOverdueConnections += longestCheckoutTime;
-            state.accumulatedCheckoutTime += longestCheckoutTime;
-            state.activeConnections.remove(oldestActiveConnection);
-            if (!oldestActiveConnection.getRealConnection().getAutoCommit()) {
-              try {
-                oldestActiveConnection.getRealConnection().rollback();
-              } catch (SQLException e) {
-                /*
-                   Just log a message for debug and continue to execute the following
-                   statement like nothing happened.
-                   Wrap the bad connection with a new PooledConnection, this will help
-                   to not interrupt current executing thread and give current thread a
-                   chance to join the next competition for another valid/good database
-                   connection. At the end of this loop, bad {@link @conn} will be set as null.
-                 */
-                log.debug("Bad connection. Could not roll back");
-              }
-            }
-            conn = new PooledConnection(oldestActiveConnection.getRealConnection(), this);
-            conn.setCreatedTimestamp(oldestActiveConnection.getCreatedTimestamp());
-            conn.setLastUsedTimestamp(oldestActiveConnection.getLastUsedTimestamp());
-            oldestActiveConnection.invalidate();
-            if (log.isDebugEnabled()) {
-              log.debug("Claimed overdue connection " + conn.getRealHashCode() + ".");
-            }
-          } else {
-            // Must wait
-            try {
-              if (!countedWait) {
-                state.hadToWaitCount++;
-                countedWait = true;
-              }
-              if (log.isDebugEnabled()) {
-                log.debug("Waiting as long as " + poolTimeToWait + " milliseconds for connection.");
-              }
-              long wt = System.currentTimeMillis();
-              condition.await(poolTimeToWait, TimeUnit.MILLISECONDS);
-              state.accumulatedWaitTime += System.currentTimeMillis() - wt;
-            } catch (InterruptedException e) {
-              // set interrupt flag
-              Thread.currentThread().interrupt();
-              break;
-            }
-          }
-        }
-      }
-      if (conn != null) {
-        // ping to server and check the connection is valid or not
-        if (conn.isValid()) {
-          if (!conn.getRealConnection().getAutoCommit()) {
-            conn.getRealConnection().rollback();
-          }
-          conn.setConnectionTypeCode(assembleConnectionTypeCode(dataSource.getUrl(), username, password));
-          conn.setCheckoutTimestamp(System.currentTimeMillis());
-          conn.setLastUsedTimestamp(System.currentTimeMillis());
-          state.activeConnections.add(conn);
-          state.requestCount++;
-          state.accumulatedRequestTime += System.currentTimeMillis() - t;
-        } else {
-          if (log.isDebugEnabled()) {
-            log.debug("A bad connection (" + conn.getRealHashCode() + ") was returned from the pool, getting another connection.");
-          }
-          state.badConnectionCount++;
-          localBadConnectionCount++;
-          conn = null;
-          if (localBadConnectionCount > (poolMaximumIdleConnections + poolMaximumLocalBadConnectionTolerance)) {
-            if (log.isDebugEnabled()) {
-              log.debug("PooledDataSource: Could not get a good connection to the database.");
-            }
-            throw new SQLException("PooledDataSource: Could not get a good connection to the database.");
-          }
-        }
-      }
-    } finally {
-      lock.unlock();
-    }
+

然后就是调用 selectOne, 因为前面已经把这部分代码说过了,就直接跳转过来
org.apache.ibatis.session.defaults.DefaultSqlSession#selectList(java.lang.String, java.lang.Object, org.apache.ibatis.session.RowBounds, org.apache.ibatis.session.ResultHandler)

+
private <E> List<E> selectList(String statement, Object parameter, RowBounds rowBounds, ResultHandler handler) {
+  try {
+    MappedStatement ms = configuration.getMappedStatement(statement);
+    return executor.query(ms, wrapCollection(parameter), rowBounds, handler);
+  } catch (Exception e) {
+    throw ExceptionFactory.wrapException("Error querying database.  Cause: " + e, e);
+  } finally {
+    ErrorContext.instance().reset();
+  }
+}
-  }
+

因为前面说了 executor 包装了 CachingExecutor ,所以会先调用

+
@Override
+public <E> List<E> query(MappedStatement ms, Object parameterObject, RowBounds rowBounds, ResultHandler resultHandler) throws SQLException {
+  BoundSql boundSql = ms.getBoundSql(parameterObject);
+  CacheKey key = createCacheKey(ms, parameterObject, rowBounds, boundSql);
+  return query(ms, parameterObject, rowBounds, resultHandler, key, boundSql);
+}
-  if (conn == null) {
-    if (log.isDebugEnabled()) {
-      log.debug("PooledDataSource: Unknown severe error condition.  The connection pool returned a null connection.");
+

然后是调用的真实的query方法

+
@Override
+public <E> List<E> query(MappedStatement ms, Object parameterObject, RowBounds rowBounds, ResultHandler resultHandler, CacheKey key, BoundSql boundSql)
+    throws SQLException {
+  Cache cache = ms.getCache();
+  if (cache != null) {
+    flushCacheIfRequired(ms);
+    if (ms.isUseCache() && resultHandler == null) {
+      ensureNoOutParams(ms, boundSql);
+      @SuppressWarnings("unchecked")
+      List<E> list = (List<E>) tcm.getObject(cache, key);
+      if (list == null) {
+        list = delegate.query(ms, parameterObject, rowBounds, resultHandler, key, boundSql);
+        tcm.putObject(cache, key, list); // issue #578 and #116
       }
-      throw new SQLException("PooledDataSource: Unknown severe error condition.  The connection pool returned a null connection.");
+      return list;
     }
-
-    return conn;
-  }
- -

其实就是调用的

-
// org.apache.ibatis.datasource.unpooled.UnpooledDataSource#getConnection()
-  @Override
-  public Connection getConnection() throws SQLException {
-    return doGetConnection(username, password);
   }
-```java
+  return delegate.query(ms, parameterObject, rowBounds, resultHandler, key, boundSql);
+}
-然后就是
-```java
-private Connection doGetConnection(String username, String password) throws SQLException {
-  Properties props = new Properties();
-  if (driverProperties != null) {
-    props.putAll(driverProperties);
+

这里是第一次查询,二级缓存里还没有数据,就会走到最后一行的 delegate.query,继而调用到 org.apache.ibatis.executor.BaseExecutor#query,并在其中走到 queryFromDatabase

+
@Override
+  public <E> List<E> query(MappedStatement ms, Object parameter, RowBounds rowBounds, ResultHandler resultHandler, CacheKey key, BoundSql boundSql) throws SQLException {
+    ErrorContext.instance().resource(ms.getResource()).activity("executing a query").object(ms.getId());
+    if (closed) {
+      throw new ExecutorException("Executor was closed.");
     }
-    if (username != null) {
-      props.setProperty("user", username);
+    if (queryStack == 0 && ms.isFlushCacheRequired()) {
+      clearLocalCache();
     }
-    if (password != null) {
-      props.setProperty("password", password);
+    List<E> list;
+    try {
+      queryStack++;
+      list = resultHandler == null ? (List<E>) localCache.getObject(key) : null;
+      if (list != null) {
+        handleLocallyCachedOutputParameters(ms, key, parameter, boundSql);
+      } else {
+        // ----------->会走到这里
+        list = queryFromDatabase(ms, parameter, rowBounds, resultHandler, key, boundSql);
+      }
+    } finally {
+      queryStack--;
     }
-    return doGetConnection(props);
-  }
+    if (queryStack == 0) {
+      for (DeferredLoad deferredLoad : deferredLoads) {
+        deferredLoad.load();
+      }
+      // issue #601
+      deferredLoads.clear();
+      if (configuration.getLocalCacheScope() == LocalCacheScope.STATEMENT) {
+        // issue #482
+        clearLocalCache();
+      }
+    }
+    return list;
+  }
-

继续这个逻辑

-
  private Connection doGetConnection(Properties properties) throws SQLException {
-    initializeDriver();
-    Connection connection = DriverManager.getConnection(url, properties);
-    configureConnection(connection);
-    return connection;
+

然后是

+
private <E> List<E> queryFromDatabase(MappedStatement ms, Object parameter, RowBounds rowBounds, ResultHandler resultHandler, CacheKey key, BoundSql boundSql) throws SQLException {
+  List<E> list;
+  localCache.putObject(key, EXECUTION_PLACEHOLDER);
+  try {
+    list = doQuery(ms, parameter, rowBounds, resultHandler, boundSql);
+  } finally {
+    localCache.removeObject(key);
   }
-    @CallerSensitive
-    public static Connection getConnection(String url,
-        java.util.Properties info) throws SQLException {
+  localCache.putObject(key, list);
+  if (ms.getStatementType() == StatementType.CALLABLE) {
+    localOutputParameterCache.putObject(key, parameter);
+  }
+  return list;
+}
-    return (getConnection(url, info, Reflection.getCallerClass()));
-  }
-private static Connection getConnection(
-    String url, java.util.Properties info, Class<?> caller) throws SQLException {
-    /*
-     * When callerCl is null, we should check the application's
-     * (which is invoking this class indirectly)
-     * classloader, so that the JDBC driver class outside rt.jar
-     * can be loaded from here.
-     */
-    ClassLoader callerCL = caller != null ? caller.getClassLoader() : null;
-    synchronized(DriverManager.class) {
-      // synchronize loading of the correct classloader.
-      if (callerCL == null) {
-        callerCL = Thread.currentThread().getContextClassLoader();
-      }
-    }
+

然后就是 simpleExecutor 的执行过程

+
@Override
+public <E> List<E> doQuery(MappedStatement ms, Object parameter, RowBounds rowBounds, ResultHandler resultHandler, BoundSql boundSql) throws SQLException {
+  Statement stmt = null;
+  try {
+    Configuration configuration = ms.getConfiguration();
+    StatementHandler handler = configuration.newStatementHandler(wrapper, ms, parameter, rowBounds, resultHandler, boundSql);
+    stmt = prepareStatement(handler, ms.getStatementLog());
+    return handler.query(stmt, resultHandler);
+  } finally {
+    closeStatement(stmt);
+  }
+}
-    if(url == null) {
-      throw new SQLException("The url cannot be null", "08001");
-    }
+

接下去其实就是跟jdbc交互了

+
@Override
+public <E> List<E> query(Statement statement, ResultHandler resultHandler) throws SQLException {
+  PreparedStatement ps = (PreparedStatement) statement;
+  ps.execute();
+  return resultSetHandler.handleResultSets(ps);
+}
-    println("DriverManager.getConnection(\"" + url + "\")");
-
-    // Walk through the loaded registeredDrivers attempting to make a connection.
-    // Remember the first exception that gets raised so we can reraise it.
-    SQLException reason = null;
-
-    for(DriverInfo aDriver : registeredDrivers) {
-      // If the caller does not have permission to load the driver then
-      // skip it.
-      if(isDriverAllowed(aDriver.driver, callerCL)) {
-        try {
-          // ----------> driver[className=com.mysql.cj.jdbc.Driver@64030b91]
-          println("    trying " + aDriver.driver.getClass().getName());
-          Connection con = aDriver.driver.connect(url, info);
-          if (con != null) {
-            // Success!
-            println("getConnection returning " + aDriver.driver.getClass().getName());
-            return (con);
-          }
-        } catch (SQLException ex) {
-          if (reason == null) {
-            reason = ex;
+

com.mysql.cj.jdbc.ClientPreparedStatement#execute

+
public boolean execute() throws SQLException {
+        try {
+            synchronized(this.checkClosed().getConnectionMutex()) {
+                JdbcConnection locallyScopedConn = this.connection;
+                if (!this.doPingInstead && !this.checkReadOnlySafeStatement()) {
+                    throw SQLError.createSQLException(Messages.getString("PreparedStatement.20") + Messages.getString("PreparedStatement.21"), "S1009", this.exceptionInterceptor);
+                } else {
+                    ResultSetInternalMethods rs = null;
+                    this.lastQueryIsOnDupKeyUpdate = false;
+                    if (this.retrieveGeneratedKeys) {
+                        this.lastQueryIsOnDupKeyUpdate = this.containsOnDuplicateKeyUpdate();
                     }
-                }
 
-            } else {
-                println("    skipping: " + aDriver.getClass().getName());
-            }
+                    this.batchedGeneratedKeys = null;
+                    this.resetCancelledState();
+                    this.implicitlyCloseAllOpenResults();
+                    this.clearWarnings();
+                    if (this.doPingInstead) {
+                        this.doPingInstead();
+                        return true;
+                    } else {
+                        this.setupStreamingTimeout(locallyScopedConn);
+                        Message sendPacket = ((PreparedQuery)this.query).fillSendPacket(((PreparedQuery)this.query).getQueryBindings());
+                        String oldDb = null;
+                        if (!locallyScopedConn.getDatabase().equals(this.getCurrentDatabase())) {
+                            oldDb = locallyScopedConn.getDatabase();
+                            locallyScopedConn.setDatabase(this.getCurrentDatabase());
+                        }
 
-        }
+                        CachedResultSetMetaData cachedMetadata = null;
+                        boolean cacheResultSetMetadata = (Boolean)locallyScopedConn.getPropertySet().getBooleanProperty(PropertyKey.cacheResultSetMetadata).getValue();
+                        if (cacheResultSetMetadata) {
+                            cachedMetadata = locallyScopedConn.getCachedMetaData(((PreparedQuery)this.query).getOriginalSql());
+                        }
 
-        // if we got here nobody could connect.
-        if (reason != null)    {
-            println("getConnection failed: " + reason);
-            throw reason;
-        }
+                        locallyScopedConn.setSessionMaxRows(this.getQueryInfo().getFirstStmtChar() == 'S' ? this.maxRows : -1);
+                        rs = this.executeInternal(this.maxRows, sendPacket, this.createStreamingResultSet(), this.getQueryInfo().getFirstStmtChar() == 'S', cachedMetadata, false);
+                        if (cachedMetadata != null) {
+                            locallyScopedConn.initializeResultsMetadataFromCache(((PreparedQuery)this.query).getOriginalSql(), cachedMetadata, rs);
+                        } else if (rs.hasRows() && cacheResultSetMetadata) {
+                            locallyScopedConn.initializeResultsMetadataFromCache(((PreparedQuery)this.query).getOriginalSql(), (CachedResultSetMetaData)null, rs);
+                        }
 
-        println("getConnection: no suitable driver found for "+ url);
-        throw new SQLException("No suitable driver found for "+ url, "08001");
-    }
+                        if (this.retrieveGeneratedKeys) {
+                            rs.setFirstCharOfQuery(this.getQueryInfo().getFirstStmtChar());
+                        }
+
+                        if (oldDb != null) {
+                            locallyScopedConn.setDatabase(oldDb);
+                        }
-

上面的driver就是driver[className=com.mysql.cj.jdbc.Driver@64030b91]

-
// com.mysql.cj.jdbc.NonRegisteringDriver#connect
-public Connection connect(String url, Properties info) throws SQLException {
-        try {
-            try {
-                if (!ConnectionUrl.acceptsUrl(url)) {
-                    return null;
-                } else {
-                    ConnectionUrl conStr = ConnectionUrl.getConnectionUrlInstance(url, info);
-                    switch (conStr.getType()) {
-                        case SINGLE_CONNECTION:
-                            return ConnectionImpl.getInstance(conStr.getMainHost());
-                        case FAILOVER_CONNECTION:
-                        case FAILOVER_DNS_SRV_CONNECTION:
-                            return FailoverConnectionProxy.createProxyInstance(conStr);
-                        case LOADBALANCE_CONNECTION:
-                        case LOADBALANCE_DNS_SRV_CONNECTION:
-                            return LoadBalancedConnectionProxy.createProxyInstance(conStr);
-                        case REPLICATION_CONNECTION:
-                        case REPLICATION_DNS_SRV_CONNECTION:
-                            return ReplicationConnectionProxy.createProxyInstance(conStr);
-                        default:
-                            return null;
+                        if (rs != null) {
+                            this.lastInsertId = rs.getUpdateID();
+                            this.results = rs;
+                        }
+
+                        return rs != null && rs.hasRows();
                     }
                 }
-            } catch (UnsupportedConnectionStringException var5) {
-                return null;
-            } catch (CJException var6) {
-                throw (UnableToConnectException)ExceptionFactory.createException(UnableToConnectException.class, Messages.getString("NonRegisteringDriver.17", new Object[]{var6.toString()}), var6);
             }
-        } catch (CJException var7) {
-            throw SQLExceptionsMapping.translateException(var7);
+        } catch (CJException var11) {
+            throw SQLExceptionsMapping.translateException(var11, this.getExceptionInterceptor());
         }
-    }
- -

这是个 SINGLE_CONNECTION ,所以调用的就是 return ConnectionImpl.getInstance(conStr.getMainHost());
然后在这里设置了代理类

-
public PooledConnection(Connection connection, PooledDataSource dataSource) {
-    this.hashCode = connection.hashCode();
-    this.realConnection = connection;
-    this.dataSource = dataSource;
-    this.createdTimestamp = System.currentTimeMillis();
-    this.lastUsedTimestamp = System.currentTimeMillis();
-    this.valid = true;
-    this.proxyConnection = (Connection) Proxy.newProxyInstance(Connection.class.getClassLoader(), IFACES, this);
-  }
- -

结合这个

-
@Override
-public Connection getConnection() throws SQLException {
-  return popConnection(dataSource.getUsername(), dataSource.getPassword()).getProxyConnection();
-}
+ }
-

所以最终的connection就是com.mysql.cj.jdbc.ConnectionImpl@358ab600
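到这里整条链路就算通了,如果把 mybatis 这一层整个抽掉,上面做的事情本质上就相当于下面这段裸 JDBC(url、账号密码都是示意值):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class RawJdbcDemo {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://127.0.0.1:3306/test"; // 示意值
        try (Connection conn = DriverManager.getConnection(url, "root", "password");
             PreparedStatement ps = conn.prepareStatement("select * from student where id = ?")) {
            ps.setLong(1, 1L);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getLong("id") + " " + rs.getString("name"));
                }
            }
        }
    }
}

mybatis 帮我们做的,就是把配置解析、连接获取、参数设置和结果映射这些样板代码都收进了框架里。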

]]>
    Java
@@ -9585,6 +9564,92 @@ Starting node rabbit@rabbit2
 SDS 简单动态字符串

先从Strings开始说,了解过 C 语言的应该知道,C 语言中的字符串其实是个 char[] 字符数组,redis 也不例外,只是最开始的版本就对这个做了一丢丢的优化,而正是这一丢丢的优化,让这个 redis 的使用效率提升了数倍

+
struct sdshdr {
+    // 字符串长度
+    int len;
+    // 字符串空余字符数
+    int free;
+    // 字符串内容
+    char buf[];
+};
+

这里引用了 redis 在 github 上最早的 2.2 版本的代码,代码路径是https://github.com/antirez/redis/blob/2.2/src/sds.h,可以看到这个结构体里只有仨元素,两个 int 型和一个 char 型数组,两个 int 型其实就是我说的优化,因为 C 语言本身的字符串数组有两个问题,一个是要知道它实际已被占用的长度,需要去遍历这个数组,第二个比较容易踩坑的是遍历的时候要注意它以\0作为结尾的特点;通过上面的两个 int 型参数,一个是知道字符串目前的长度,一个是知道字符串还剩余多少位空间,这样这两个操作就从 O(N) 简化到了 O(1);第二个参数 free 还有个比较重要的作用,就是能防止 C 字符串的溢出问题,在存储之前可以先判断 free 长度,如果长度不够就先扩容,先介绍到这,这个系列可以写蛮多的,慢慢介绍吧
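在往下讲之前,可以用 Java 粗略模拟一下这个思路(仅作示意,redis 原版是 C 结构体,真实的扩容策略也比这个讲究):

// 一个极简的 SDS 模拟:len、free 让取长度和判断剩余空间都是 O(1)
class Sds {
    private int len;      // 已使用长度
    private int free;     // 剩余空间
    private char[] buf;

    Sds(int capacity) {
        buf = new char[capacity];
        len = 0;
        free = capacity;
    }

    void append(char[] s) {
        if (s.length > free) {
            // 空间不够先扩容,避免 C 字符串那种溢出问题(这里简单翻倍,仅示意)
            int newCap = (len + s.length) * 2;
            char[] newBuf = new char[newCap];
            System.arraycopy(buf, 0, newBuf, 0, len);
            buf = newBuf;
            free = newCap - len;
        }
        System.arraycopy(s, 0, buf, len, s.length);
        len += s.length;
        free -= s.length;
    }

    int length() {
        return len;  // 不需要像 C 那样遍历到 \0
    }
}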

+

链表

链表是比较常见的数据结构了,但是因为 redis 是用 C 写的,所以在不依赖第三方库的情况下只能自己写一个了,redis 的链表是个有头的链表,而且是无环的,具体的结构我也找了 github 上最早版本的代码

+
typedef struct listNode {
+    // 前置节点
+    struct listNode *prev;
+    // 后置节点
+    struct listNode *next;
+    // 值
+    void *value;
+} listNode;
+
+typedef struct list {
+    // 链表表头
+    listNode *head;
+    // 当前节点,也可以说是最后节点
+    listNode *tail;
+    // 节点复制函数
+    void *(*dup)(void *ptr);
+    // 节点值释放函数
+    void (*free)(void *ptr);
+    // 节点值比较函数
+    int (*match)(void *ptr, void *key);
+    // 链表包含的节点数量
+    unsigned int len;
+} list;
+

代码地址是这个https://github.com/antirez/redis/blob/2.2/src/adlist.h
可以看下节点是由listNode承载的,包括值和一个指向前节点跟一个指向后一节点的两个指针,然后值是 void 指针类型,所以可以承载不同类型的值
然后是 list结构用来承载一个链表,包含了表头,和表尾,复制函数,释放函数和比较函数,还有链表长度,因为包含了前两个节点,找到表尾节点跟表头都是 O(1)的时间复杂度,还有节点数量,其实这个跟 SDS 是同一个做法,就是空间换时间,这也是写代码里比较常见的做法,以此让一些高频的操作提速。

+

字典

字典也是个常用的数据结构,其实只是叫法不同,数据结构中叫 hash 散列,Java 中叫 Map,PHP 中是数组 array,Python 中也叫字典 dict,因为纯 C 语言本身不带这些数据结构,所以这也是个痛并快乐着的过程,享受 C 语言的高性能的同时也要接受它只提供了语言的基本功能的现实,各种轮子都需要自己造,redis 同样实现了自己的字典
下面来看看代码

+
typedef struct dictEntry {
+    void *key;
+    void *val;
+    struct dictEntry *next;
+} dictEntry;
+
+typedef struct dictType {
+    unsigned int (*hashFunction)(const void *key);
+    void *(*keyDup)(void *privdata, const void *key);
+    void *(*valDup)(void *privdata, const void *obj);
+    int (*keyCompare)(void *privdata, const void *key1, const void *key2);
+    void (*keyDestructor)(void *privdata, void *key);
+    void (*valDestructor)(void *privdata, void *obj);
+} dictType;
+
+/* This is our hash table structure. Every dictionary has two of this as we
+ * implement incremental rehashing, for the old to the new table. */
+typedef struct dictht {
+    dictEntry **table;
+    unsigned long size;
+    unsigned long sizemask;
+    unsigned long used;
+} dictht;
+
+typedef struct dict {
+    dictType *type;
+    void *privdata;
+    dictht ht[2];
+    int rehashidx; /* rehashing not in progress if rehashidx == -1 */
+    int iterators; /* number of iterators currently running */
+} dict;
+

看了下这个 2.2 版本的代码跟最新版的其实也差的不是很多,所以还是照旧用老代码,可以看到上面四个结构体中,其实只有三个是存储数据用的,dictType 是用来放操作函数的,那么三个存放数据的结构体分别是干嘛的,这时候感觉需要一个图来说明比较好,稍等,我去画个图~

这个图看着应该比较清楚这些都是用来干嘛的了,dict 是我们的主体结构,它有一个指向 dictType 的指针,这里面包含了字典的操作函数,然后是一个私有数据指针,接下来是一个 dictht 的数组,包含两个dictht,这个就是用来存数据的了,然后是 rehashidx 表示重哈希的状态,当是-1 的时候表示当前没有重哈希,iterators 表示正在遍历的迭代器的数量。
首先说说为啥需要有两个 dictht,这是因为字典 dict 这个数据结构随着数据量的增减,会需要在中途做扩容或者缩容操作,如果只有一个的话,对它进行扩容缩容时会影响正常的访问和修改操作,或者说保证正常查询、修改的正确性会比较复杂,并且因为需要高效利用空间,不能一下子申请一个非常大的空间来存很少的数据。当 dictht 中存放的数据量超过 size 的时候,负载就超过了 1,就需要进行扩容,这里其实跟 Java 中的 HashMap 比较类似,超过一定的负载之后进行扩容。这里为啥负载会超过 1 呢,可能有部分不了解这类结构的同学会比较奇怪,其实就是上图中画的,在数据结构中对于散列的冲突有几类解决方法,比如转换成链表、二次散列、找下个空槽等,这里就使用了链表法,或者说拉链法。当一个新元素通过 hashFunction 得出的 key 跟 sizemask 运算之后落到了跟已有元素相同的槽,那就将其放在原来的节点之前,变成链表挂在数组 dictht.table下面,放在原有节点前是考虑到可能会优先访问。
忘了说明下 dictht 跟 dictEntry 的关系了,dictht 就是个哈希表,它里面是个dictEntry 的二维数组,而 dictEntry 是个包含了 key-value 结构之外还有一个 next 指针,因此可以将哈希冲突的以链表的形式保存下来。
在重点说下重哈希,可能同样写 Java 的同学对这个比较有感觉,跟 HashMap 一样,会以 2 的 N 次方进行扩容,那么扩容的方法就会比较简单,每个键重哈希要不就在原来这个槽,要不就在原来的槽加原 dictht.size 的位置;然后是重头戏,具体是怎么做扩容呢,其实这里就把第二个 ht 用上了,其实这两个hashtable 的具体作用有点类似于 jvm 中的两个 survival 区,但是又不全一样,因为 redis 在扩容的时候是采用的渐进式地重哈希,什么叫渐进式的呢,就是它不是像 jvm 那种标记复制的模式直接将一个 eden 区和原来的 survival 区存活的对象复制到另一个 survival 区,而是在每一次添加,删除,查找或者更新操作时,都会额外的帮忙搬运一部分的原 dictht 中的数据,这里会根据 rehashidx 的值来判断,如果是-1 表示并没有在重哈希中,如果是 0 表示开始重哈希了,然后rehashidx 还会随着每次的帮忙搬运往上加,但全部被搬运完成后 rehashidx 又变回了-1,又可以扯到Java 中的 Concurrent HashMap, 他在扩容的时候也使用了类似的操作。
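渐进式 rehash 里「每次操作顺带搬一个桶」的意思,用 Java 写个极简示意大概是这样(不是 redis 源码,增删查改本身都省略了):

import java.util.LinkedList;

class TinyDict {
    LinkedList<long[]>[] ht0, ht1; // 两个哈希表,每个桶是一条链,元素为 {key, value}
    int rehashidx = -1;            // -1 表示当前没有在 rehash

    @SuppressWarnings("unchecked")
    void startRehash() {
        ht1 = new LinkedList[ht0.length * 2]; // 按 2 的 N 次方扩容
        rehashidx = 0;
    }

    // 每次 put/get/delete 时顺带调用一次,把 ht0 中一个桶搬到 ht1
    void rehashStep() {
        if (rehashidx == -1) return;
        LinkedList<long[]> bucket = ht0[rehashidx];
        if (bucket != null) {
            for (long[] entry : bucket) {
                int idx = (int) (entry[0] & (ht1.length - 1)); // 相当于跟 sizemask 做与运算
                if (ht1[idx] == null) ht1[idx] = new LinkedList<>();
                ht1[idx].addFirst(entry); // 跟 redis 一样挂到链表头
            }
            ht0[rehashidx] = null;
        }
        if (++rehashidx == ht0.length) { // 全部搬完,ht1 转正,rehashidx 复位
            ht0 = ht1;
            ht1 = null;
            rehashidx = -1;
        }
    }
}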

+]]>
+
+    Redis
+    数据结构
+    C
+    源码
+    Redis
+
+
+    redis
+    数据结构
+    源码
+
+
+
+    redis数据结构介绍三-第三部分 整数集合
+    /2020/01/10/redis%E6%95%B0%E6%8D%AE%E7%BB%93%E6%9E%84%E4%BB%8B%E7%BB%8D%E4%B8%89/
@@ -9660,92 +9725,6 @@
int zslRandomLevel(void) {
    return (level<ZSKIPLIST_MAXLEVEL) ? level : ZSKIPLIST_MAXLEVEL;
}

当随机值跟0xFFFF进行与操作小于ZSKIPLIST_P * 0xFFFF时才会增大 level 的值,因此保持了一个相对递减的概率
可以简单分析下,当 random() 的值小于 0xFFFF 的 1/4,才会 level + 1,就意味着当有 1 - 1/4也就是3/4的概率是直接跳出,所以一层的概率是3/4,也就是 1-P,二层的概率是 P*(1-P),三层的概率是 P² * (1-P) 依次递推。
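按这个概率模型用 Java 仿写一个 randomLevel,大概是这样(P 取 1/4,最大层数这里随手取 32,仅作示意):

import java.util.concurrent.ThreadLocalRandom;

class RandomLevel {
    static int randomLevel() {
        int level = 1;
        // 每一轮只有 1/4 的概率继续加一层,3/4 的概率直接停下
        while (level < 32 && ThreadLocalRandom.current().nextInt(4) == 0) {
            level++;
        }
        return level;
    }
}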

-]]>
-
-    Redis
-    数据结构
-    C
-    源码
-    Redis
-
-
-    redis
-    数据结构
-    源码
-
-
-
-    redis数据结构介绍-第一部分 SDS,链表,字典
-    /2019/12/26/redis%E6%95%B0%E6%8D%AE%E7%BB%93%E6%9E%84%E4%BB%8B%E7%BB%8D/
-    redis是现在服务端很常用的缓存中间件,其实原来还有memcache之类的竞品,但是现在貌似 redis 快一统江湖,这里当然不是在吹,只是个人角度的一个感觉,不权威只是主观感觉。
redis 主要有五种数据结构,StringsListsSetsHashesSorted Sets,这五种数据结构先简单介绍下,Strings类型的其实就是我们最常用的 key-value,实际开发中也会用的最多;Lists是列表,这个有些会用来做队列,因为 redis 目前常用的版本支持丰富的列表操作;还有是Sets集合,这个主要的特点就是集合中元素不重复,可以用在有这类需求的场景里;Hashes是叫散列,类似于 Python 中的字典结构;还有就是Sorted Sets这个是个有序集合;一眼看这些其实没啥特别的,除了最后这个有序集合,不过去了解背后的实现方式还是比较有意思的。

-

SDS 简单动态字符串

先从Strings开始说,了解过 C 语言的应该知道,C 语言中的字符串其实是个 char[] 字符数组,redis 也不例外,只是最开始的版本就对这个做了一丢丢的优化,而正是这一丢丢的优化,让这个 redis 的使用效率提升了数倍

-
struct sdshdr {
-    // 字符串长度
-    int len;
-    // 字符串空余字符数
-    int free;
-    // 字符串内容
-    char buf[];
-};
-

这里引用了 redis 在 github 上最早的 2.2 版本的代码,代码路径是https://github.com/antirez/redis/blob/2.2/src/sds.h,可以看到这个结构体里只有仨元素,两个 int 型和一个 char 型数组,两个 int 型其实就是我说的优化,因为 C 语言本身的字符串数组,有两个问题,一个是要知道它实际已被占用的长度,需要去遍历这个数组,第二个就是比较容易踩坑的是遍历的时候要注意它有个以\0作为结尾的特点;通过上面的两个 int 型参数,一个是知道字符串目前的长度,一个是知道字符串还剩余多少位空间,这样子坐着两个操作从 O(N)简化到了O(1)了,还有第二个 free 还有个比较重要的作用就是能防止 C 字符串的溢出问题,在存储之前可以先判断 free 长度,如果长度不够就先扩容了,先介绍到这,这个系列可以写蛮多的,慢慢介绍吧

-

链表

链表是比较常见的数据结构了,但是因为 redis 是用 C 写的,所以在不依赖第三方库的情况下只能自己写一个了,redis 的链表是个有头的链表,而且是无环的,具体的结构我也找了 github 上最早版本的代码

-
typedef struct listNode {
-    // 前置节点
-    struct listNode *prev;
-    // 后置节点
-    struct listNode *next;
-    // 值
-    void *value;
-} listNode;
-
-typedef struct list {
-    // 链表表头
-    listNode *head;
-    // 当前节点,也可以说是最后节点
-    listNode *tail;
-    // 节点复制函数
-    void *(*dup)(void *ptr);
-    // 节点值释放函数
-    void (*free)(void *ptr);
-    // 节点值比较函数
-    int (*match)(void *ptr, void *key);
-    // 链表包含的节点数量
-    unsigned int len;
-} list;
-

代码地址是这个https://github.com/antirez/redis/blob/2.2/src/adlist.h
可以看下节点是由listNode承载的,包括值和一个指向前节点跟一个指向后一节点的两个指针,然后值是 void 指针类型,所以可以承载不同类型的值
然后是 list结构用来承载一个链表,包含了表头,和表尾,复制函数,释放函数和比较函数,还有链表长度,因为包含了前两个节点,找到表尾节点跟表头都是 O(1)的时间复杂度,还有节点数量,其实这个跟 SDS 是同一个做法,就是空间换时间,这也是写代码里比较常见的做法,以此让一些高频的操作提速。

-

字典

字典也是个常用的数据结构,其实只是叫法不同,数据结构中叫 hash 散列,Java 中叫 Map,PHP 中是数组 array,Python 中也叫字典 dict,因为纯 C 语言本身不带这些数据结构,所以这也是个痛并快乐着的过程,享受 C 语言的高性能的同时也要接受它只提供了语言的基本功能的现实,各种轮子都需要自己造,redis 同样实现了自己的字典
下面来看看代码

-
typedef struct dictEntry {
-    void *key;
-    void *val;
-    struct dictEntry *next;
-} dictEntry;
-
-typedef struct dictType {
-    unsigned int (*hashFunction)(const void *key);
-    void *(*keyDup)(void *privdata, const void *key);
-    void *(*valDup)(void *privdata, const void *obj);
-    int (*keyCompare)(void *privdata, const void *key1, const void *key2);
-    void (*keyDestructor)(void *privdata, void *key);
-    void (*valDestructor)(void *privdata, void *obj);
-} dictType;
-
-/* This is our hash table structure. Every dictionary has two of this as we
- * implement incremental rehashing, for the old to the new table. */
-typedef struct dictht {
-    dictEntry **table;
-    unsigned long size;
-    unsigned long sizemask;
-    unsigned long used;
-} dictht;
-
-typedef struct dict {
-    dictType *type;
-    void *privdata;
-    dictht ht[2];
-    int rehashidx; /* rehashing not in progress if rehashidx == -1 */
-    int iterators; /* number of iterators currently running */
-} dict;
-

看了下这个 2.2 版本的代码跟最新版的其实也差的不是很多,所以还是照旧用老代码,可以看到上面四个结构体中,其实只有三个是存储数据用的,dictType 是用来放操作函数的,那么三个存放数据的结构体分别是干嘛的,这时候感觉需要一个图来说明比较好,稍等,我去画个图~

这个图看着应该比较清楚这些都是用来干嘛的了,dict 是我们的主体结构,它有一个指向 dictType 的指针,这里面包含了字典的操作函数,然后是一个私有数据指针,接下来是一个 dictht 的数组,包含两个dictht,这个就是用来存数据的了,然后是 rehashidx 表示重哈希的状态,当是-1 的时候表示当前没有重哈希,iterators 表示正在遍历的迭代器的数量。
首先说说为啥需要有两个 dictht,这是因为字典 dict 这个数据结构随着数据量的增减,会需要在中途做扩容或者缩容操作,如果只有一个的话,对它进行扩容缩容时会影响正常的访问和修改操作,或者说保证正常查询,修改的正确性会比较复杂,并且因为需要高效利用空间,不能一下子申请一个非常大的空间来存很少的数据。当 dict 中 dictht 中的数据量超过 size 的时候负载就超过了 1,就需要进行扩容,这里的其实跟 Java 中的 HashMap 比较类似,超过一定的负载之后进行扩容。这里为啥 size 会超过 1 呢,可能有部分不了解这类结构的同学会比较奇怪,其实就是上图中画的,在数据结构中对于散列的冲突有几类解决方法,比如转换成链表,二次散列,找下个空槽等,这里就使用了链表法,或者说拉链法。当一个新元素通过 hashFunction 得出的 key 跟 sizemask 取模之后的值相同了,那就将其放在原来的节点之前,变成链表挂在数组 dictht.table下面,放在原有节点前是考虑到可能会优先访问。
忘了说明下 dictht 跟 dictEntry 的关系了,dictht 就是个哈希表,它里面是个dictEntry 的二维数组,而 dictEntry 是个包含了 key-value 结构之外还有一个 next 指针,因此可以将哈希冲突的以链表的形式保存下来。
在重点说下重哈希,可能同样写 Java 的同学对这个比较有感觉,跟 HashMap 一样,会以 2 的 N 次方进行扩容,那么扩容的方法就会比较简单,每个键重哈希要不就在原来这个槽,要不就在原来的槽加原 dictht.size 的位置;然后是重头戏,具体是怎么做扩容呢,其实这里就把第二个 ht 用上了,其实这两个hashtable 的具体作用有点类似于 jvm 中的两个 survival 区,但是又不全一样,因为 redis 在扩容的时候是采用的渐进式地重哈希,什么叫渐进式的呢,就是它不是像 jvm 那种标记复制的模式直接将一个 eden 区和原来的 survival 区存活的对象复制到另一个 survival 区,而是在每一次添加,删除,查找或者更新操作时,都会额外的帮忙搬运一部分的原 dictht 中的数据,这里会根据 rehashidx 的值来判断,如果是-1 表示并没有在重哈希中,如果是 0 表示开始重哈希了,然后rehashidx 还会随着每次的帮忙搬运往上加,但全部被搬运完成后 rehashidx 又变回了-1,又可以扯到Java 中的 Concurrent HashMap, 他在扩容的时候也使用了类似的操作。

]]>
Redis @@ -9823,187 +9802,208 @@ typedef struct redisObject {
- mybatis系列-入门篇 - /2022/11/27/mybatis%E7%B3%BB%E5%88%97-%E5%85%A5%E9%97%A8%E7%AF%87/ - mybatis是我们比较常用的orm框架,下面是官网的介绍

-
-

MyBatis 是一款优秀的持久层框架,它支持自定义 SQL、存储过程以及高级映射。MyBatis 免除了几乎所有的 JDBC 代码以及设置参数和获取结果集的工作。MyBatis 可以通过简单的 XML 或注解来配置和映射原始类型、接口和 Java POJO(Plain Old Java Objects,普通老式 Java 对象)为数据库中的记录。

-
-

mybatis一大特点,或者说比较为人熟知的应该就是比 hibernate 是更轻量化,为国人所爱好的orm框架,对于hibernate目前还没有深入的拆解过,后续可以也写一下,在使用体验上觉得是个比较精巧的框架,看代码也比较容易,所以就想写个系列,第一篇先是介绍下使用
根据官网的文档上我们先来尝试一下简单使用
首先我们有个简单的配置,这个文件是mybatis-config.xml

-
<?xml version="1.0" encoding="UTF-8" ?>
-<!DOCTYPE configuration
-        PUBLIC "-//mybatis.org//DTD Config 3.0//EN"
-        "https://mybatis.org/dtd/mybatis-3-config.dtd">
-<configuration>
-    <!-- 需要加入的properties-->
-    <properties resource="application-development.properties"/>
-    <!-- 指出使用哪个环境,默认是development-->
-    <environments default="development">
-        <environment id="development">
-        <!-- 指定事务管理器类型-->
-            <transactionManager type="JDBC"/>
-            <!-- 指定数据源类型-->
-            <dataSource type="POOLED">
-                <!-- 下面就是具体的参数占位了-->
-                <property name="driver" value="${driver}"/>
-                <property name="url" value="${url}"/>
-                <property name="username" value="${username}"/>
-                <property name="password" value="${password}"/>
-            </dataSource>
-        </environment>
-    </environments>
-    <mappers>
-        <!-- 指定mapper xml的位置或文件-->
-        <mapper resource="mapper/StudentMapper.xml"/>
-    </mappers>
-</configuration>
-

在代码里创建mybatis里重要入口

-
String resource = "mybatis-config.xml";
-InputStream inputStream = Resources.getResourceAsStream(resource);
-SqlSessionFactory sqlSessionFactory = new SqlSessionFactoryBuilder().build(inputStream);
-

然后我们上面的StudentMapper.xml

-
<?xml version="1.0" encoding="UTF-8" ?>
-<!DOCTYPE mapper
-        PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN"
-        "https://mybatis.org/dtd/mybatis-3-mapper.dtd">
-<mapper namespace="com.nicksxs.mybatisdemo.StudentMapper">
-    <select id="selectStudent" resultType="com.nicksxs.mybatisdemo.StudentDO">
-        select * from student where id = #{id}
-    </select>
-</mapper>
-

那么我们就要使用这个mapper,

-
String resource = "mybatis-config.xml";
-InputStream inputStream = Resources.getResourceAsStream(resource);
-SqlSessionFactory sqlSessionFactory = new SqlSessionFactoryBuilder().build(inputStream);
-try (SqlSession session = sqlSessionFactory.openSession()) {
-    StudentDO studentDO = session.selectOne("com.nicksxs.mybatisdemo.StudentMapper.selectStudent", 1);
-    System.out.println("id is " + studentDO.getId() + " name is " +studentDO.getName());
-} catch (Exception e) {
-    e.printStackTrace();
-}
-

sqlSessionFactory是sqlSession的工厂,我们可以通过sqlSessionFactory来创建sqlSession,而SqlSession 提供了在数据库执行 SQL 命令所需的所有方法。你可以通过 SqlSession 实例来直接执行已映射的 SQL 语句。可以看到mapper.xml中有定义mapper的namespace,就可以通过session.selectOne()传入namespace+id来调用这个方法
但是这样调用比较不合理的点,或者说按后面mybatis优化之后我们可以指定mapper接口

-
public interface StudentMapper {
+    redis数据结构介绍六 快表
+    /2020/01/22/redis%E6%95%B0%E6%8D%AE%E7%BB%93%E6%9E%84%E4%BB%8B%E7%BB%8D%E5%85%AD/
+    这应该是 redis 系列的最后一篇了,讲下快表,其实最前面讲的链表在早先的 redis 版本中也作为 list 的数据结构使用过,但是单纯的链表的缺陷之前也说了,插入便利,但是空间利用率低,并且不能进行二分查找等,检索效率低,ziplist 压缩表的产生也是同理,希望获得更好的性能,包括存储空间和访问性能等,原来我也不懂这个快表要怎么快,然后明白了一个道理,其实并没有什么银弹,只是大牛们会在适合的时候使用最适合的数据结构来实现性能的最大化,这里面有一招就是不同数据结构的组合调整,比如 Java 中的 HashMap,在链表节点数大于 8 时会转变成红黑树,以此提高访问效率,不费话了,回到快表,quicklist,这个数据结构主要使用在 list 类型中,如果我说其实这个 quicklist 就是个链表,可能大家不太会相信,但是事实上的确可以认为 quicklist 是个双向链表,看下代码

+
/* quicklistNode is a 32 byte struct describing a ziplist for a quicklist.
+ * We use bit fields keep the quicklistNode at 32 bytes.
+ * count: 16 bits, max 65536 (max zl bytes is 65k, so max count actually < 32k).
+ * encoding: 2 bits, RAW=1, LZF=2.
+ * container: 2 bits, NONE=1, ZIPLIST=2.
+ * recompress: 1 bit, bool, true if node is temporarry decompressed for usage.
+ * attempted_compress: 1 bit, boolean, used for verifying during testing.
+ * extra: 10 bits, free for future use; pads out the remainder of 32 bits */
+typedef struct quicklistNode {
+    struct quicklistNode *prev;
+    struct quicklistNode *next;
+    unsigned char *zl;
+    unsigned int sz;             /* ziplist size in bytes */
+    unsigned int count : 16;     /* count of items in ziplist */
+    unsigned int encoding : 2;   /* RAW==1 or LZF==2 */
+    unsigned int container : 2;  /* NONE==1 or ZIPLIST==2 */
+    unsigned int recompress : 1; /* was this node previous compressed? */
+    unsigned int attempted_compress : 1; /* node can't compress; too small */
+    unsigned int extra : 10; /* more bits to steal for future usage */
+} quicklistNode;
 
-    public StudentDO selectStudent(Long id);
-}
-

就可以可以通过mapper接口获取方法,这样就不用涉及到未知的变量转换等异常

-
try (SqlSession session = sqlSessionFactory.openSession()) {
-    StudentMapper mapper = session.getMapper(StudentMapper.class);
-    StudentDO studentDO = mapper.selectStudent(1L);
-    System.out.println("id is " + studentDO.getId() + " name is " +studentDO.getName());
-} catch (Exception e) {
-    e.printStackTrace();
-}
-

这一篇咱们先介绍下简单的使用,后面可以先介绍下这些的原理。

-]]>
- - Java - Mybatis - - - Java - Mysql - Mybatis - - - - redis淘汰策略复习 - /2021/08/01/redis%E6%B7%98%E6%B1%B0%E7%AD%96%E7%95%A5%E5%A4%8D%E4%B9%A0/ - 前面复习了 redis 的过期策略,这里再复习下淘汰策略,淘汰跟过期的区别有时候会被混淆了,过期主要针对那些设置了过期时间的 key,应该说是一种逻辑策略,是主动的还是被动的加定时的,两种有各自的取舍,而淘汰也可以看成是一种保持系统稳定的策略,因为如果内存满了,不采取任何策略处理,那大概率会导致系统故障,之前其实主要从源码角度分析过redis 的 LRU 和 LFU,但这个是偏底层的实现,抠得比较细,那么具体的系统层面的配置是有哪些策略,来看下 redis labs 的介绍

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Policy | Description
noeviction(不逐出) | Returns an error if the memory limit has been reached when trying to insert more data. 插入更多数据时,如果内存达到上限了,返回错误
allkeys-lru(所有 key 中用 lru 逐出) | Evicts the least recently used keys out of all keys. 在所有 key 中逐出最近最少使用的
allkeys-lfu(所有 key 中用 lfu 逐出) | Evicts the least frequently used keys out of all keys. 在所有 key 中逐出最近最不频繁使用的
allkeys-random(所有 key 中随机逐出) | Randomly evicts keys out of all keys. 在所有 key 中随机逐出
volatile-lru | Evicts the least recently used keys out of all keys with an “expire” field set. 在设置了过期时间的 key 空间 expire 中使用 lru 策略逐出
volatile-lfu | Evicts the least frequently used keys out of all keys with an “expire” field set. 在设置了过期时间的 key 空间 expire 中使用 lfu 策略逐出
volatile-random | Randomly evicts keys with an “expire” field set. 在设置了过期时间的 key 空间 expire 中随机逐出
volatile-ttl | Evicts the shortest time-to-live keys out of all keys with an “expire” field set. 在设置了过期时间的 key 空间 expire 中逐出更早过期的
-

而在这其中默认使用的策略是 volatile-lru,对 lru 跟 lfu 想有更多的了解可以看下我之前的文章redis系列介绍八-淘汰策略
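对应到开源版 redis 的配置上,就是 redis.conf 里的这两个参数(数值只是示意):

maxmemory 2gb
maxmemory-policy volatile-lru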

-]]>
- - redis - - - redis - 淘汰策略 - 应用 - Evict - -
- - redis数据结构介绍四-第四部分 压缩表 - /2020/01/19/redis%E6%95%B0%E6%8D%AE%E7%BB%93%E6%9E%84%E4%BB%8B%E7%BB%8D%E5%9B%9B/ - 在 redis 中还有一类表型数据结构叫压缩表,ziplist,它的目的是替代链表,链表是个很容易理解的数据结构,双向链表有前后指针,有带头结点的有的不带,但是链表有个比较大的问题是相对于普通的数组,它的内存不连续,碎片化的存储,内存利用效率不高,而且指针寻址相对于直接使用偏移量的话,也有一定的效率劣势,当然这不是主要的原因,ziplist 设计的主要目的是让链表的内存使用更高效

-
-

The ziplist is a specially encoded dually linked list that is designed to be very memory efficient.
这是摘自 redis 源码中ziplist.c 文件的注释,也说明了原因,它的大概结构是这样子

-
-
<zlbytes> <zltail> <zllen> <entry> <entry> ... <entry> <zlend>
-

其中
<zlbytes>表示 ziplist 占用的字节总数,类型是uint32_t,32 位的无符号整型,当然表示的字节数也包含自己本身占用的 4 个
<zltail> 类型也是是uint32_t,表示ziplist表中最后一项(entry)在ziplist中的偏移字节数。<zltail>的存在,使得我们可以很方便地找到最后一项(不用遍历整个ziplist),从而可以在ziplist尾端快速地执行push或pop操作。
<uint16_t zllen> 表示ziplist 中的数据项个数,因为是 16 位,所以当数量超过所能表示的最大的数量,它的 16 位全会置为 1,但是真实的数量需要遍历整个 ziplist 才能知道
<entry>是具体的数据项,后面解释
<zlend> ziplist 的最后一个字节,固定是255。
再看一下<entry>中的具体结构,

-
<prevlen> <encoding> <entry-data>
-

首先这个<prevlen>有两种情况,一种是前面的元素的长度,如果是小于等于 253的时候就用一个uint8_t 来表示前一元素的长度,如果大于的话他将占用五个字节,第一个字节是 254,即表示这个字节已经表示不下了,需要后面的四个字节帮忙表示
<encoding>这个就比较复杂,把源码的注释放下面先看下

-
* |00pppppp| - 1 byte
-*      String value with length less than or equal to 63 bytes (6 bits).
-*      "pppppp" represents the unsigned 6 bit length.
-* |01pppppp|qqqqqqqq| - 2 bytes
-*      String value with length less than or equal to 16383 bytes (14 bits).
-*      IMPORTANT: The 14 bit number is stored in big endian.
-* |10000000|qqqqqqqq|rrrrrrrr|ssssssss|tttttttt| - 5 bytes
-*      String value with length greater than or equal to 16384 bytes.
-*      Only the 4 bytes following the first byte represents the length
-*      up to 32^2-1. The 6 lower bits of the first byte are not used and
-*      are set to zero.
-*      IMPORTANT: The 32 bit number is stored in big endian.
-* |11000000| - 3 bytes
-*      Integer encoded as int16_t (2 bytes).
-* |11010000| - 5 bytes
-*      Integer encoded as int32_t (4 bytes).
-* |11100000| - 9 bytes
-*      Integer encoded as int64_t (8 bytes).
-* |11110000| - 4 bytes
-*      Integer encoded as 24 bit signed (3 bytes).
-* |11111110| - 2 bytes
-*      Integer encoded as 8 bit signed (1 byte).
-* |1111xxxx| - (with xxxx between 0000 and 1101) immediate 4 bit integer.
-*      Unsigned integer from 0 to 12. The encoded value is actually from
-*      1 to 13 because 0000 and 1111 can not be used, so 1 should be
-*      subtracted from the encoded 4 bit value to obtain the right value.
-* |11111111| - End of ziplist special entry.
-

首先如果 encoding 的前两位是 00 的话代表这个元素是个 6 位的字符串,即直接将数据保存在 encoding 中,不消耗额外的<entry-data>,如果前两位是 01 的话表示是个 14 位的字符串,如果是 10 的话表示encoding 块之后的四个字节是存放字符串类型的数据,encoding 的剩余 6 位置 0。
如果 encoding 的前两位是 11 的话表示这是个整型,具体的如果后两位是00的话,表示后面是个2字节的 int16_t 类型,如果是01的话,后面是个4字节的int32_t,如果是10的话后面是8字节的int64_t,如果是 11 的话后面是 3 字节的有符号整型,这些都要最后 4 位都是 0 的情况噢
剩下当是11111110时,则表示是一个1 字节的有符号数,如果是 1111xxxx,其中xxxx在0000 到 1101 表示实际的 1 到 13,为啥呢,因为 0000 前面已经用过了,而 1110 跟 1111 也都有用了。
看个具体的例子(上下有点对不齐,将就看)

-
[0f 00 00 00] [0c 00 00 00] [02 00] [00 f3] [02 f6] [ff]
-|**zlbytes***|  |***zltail***|  |*zllen*|  |entry1 entry2|  |zlend|
-

第一部分代表整个 ziplist 有 15 个字节,zlbytes 自己占了 4 个 zltail 表示最后一个元素的偏移量,第 13 个字节起,zllen 表示有 2 个元素,第一个元素是00f3,00表示前一个元素长度是 0,本来前面就没元素(不过不知道这个能不能优化这一字节),然后是 f3,换成二进制就是11110011,对照上面的注释,是落在|1111xxxx|这个类型里,注意这个其实是用 0001 到 1101 也就是 1到 13 来表示 0到 12,所以 f3 应该就是 2,第一个元素是 2,第二个元素呢,02 代表前一个元素也就是刚才说的这个,占用 2 字节,f6 展开也是刚才的类型,实际是 5,ff 表示 ziplist 的结尾,所以这个 ziplist 里面是两个元素,2 跟 5

+/* quicklistLZF is a 4+N byte struct holding 'sz' followed by 'compressed'. + * 'sz' is byte length of 'compressed' field. + * 'compressed' is LZF data with total (compressed) length 'sz' + * NOTE: uncompressed length is stored in quicklistNode->sz. + * When quicklistNode->zl is compressed, node->zl points to a quicklistLZF */ +typedef struct quicklistLZF { + unsigned int sz; /* LZF size in bytes*/ + char compressed[]; +} quicklistLZF; + +/* quicklist is a 40 byte struct (on 64-bit systems) describing a quicklist. + * 'count' is the number of total entries. + * 'len' is the number of quicklist nodes. + * 'compress' is: -1 if compression disabled, otherwise it's the number + * of quicklistNodes to leave uncompressed at ends of quicklist. + * 'fill' is the user-requested (or default) fill factor. */ +typedef struct quicklist { + quicklistNode *head; + quicklistNode *tail; + unsigned long count; /* total count of all entries in all ziplists */ + unsigned long len; /* number of quicklistNodes */ + int fill : 16; /* fill factor for individual nodes */ + unsigned int compress : 16; /* depth of end nodes not to compress;0=off */ +} quicklist;
+

粗略看下,quicklist 里有 head,tail, quicklistNode里有 prev,next 指针,是不是有链表的基本轮廓了,那么为啥这玩意要称为快表呢,快在哪,关键就在这个unsigned char *zl;zl 是不是前面又看到过,就是 ziplist ,这是什么鬼,链表里用压缩表,这不套娃么,先别急,回顾下前面说的 ziplist,ziplist 有哪些特点,内存利用率高,可以从表头快速定位到尾节点,节点可以从后往前找,但是有个缺点,就是从中间插入的效率比较低,需要整体往后移,这个其实是普通数组的优化版,但还是有数组的一些劣势,所以要真的快,是不是可以将链表跟数组真的结合起来。

+

ziplist

这里有两个 redis 的配置参数,list-max-ziplist-sizelist-compress-depth,先来说第一个,既然快表是将链表跟压缩表数组结合起来使用,那么具体怎么用呢,比如我有一个 10 个元素的 list,那具体怎么放,每个 quicklistNode 里放多大的 ziplist,假如每个快表节点的 ziplist 只放一个元素,那么其实这就退化成了一个链表,如果 10 个元素放在一个 quicklistNode 的 ziplist 里,那就退化成了一个 ziplist,所以有了这个 list-max-ziplist-size,而且它还比较牛,能取正负值,当是正值时,对应的就是每个 quicklistNode 的 ziplist 中的元素个数,比如配置了 list-max-ziplist-size = 5,那么我刚才的 10 个元素的 list 就是一个两个 quicklistNode 组成的快表,每个 quicklistNode 中的 ziplist 包含了五个元素,当 list-max-ziplist-size取负值的时候,它限制了 ziplist 的字节数

+
size_t offset = (-fill) - 1;
+if (offset < (sizeof(optimization_level) / sizeof(*optimization_level))) {
+    if (sz <= optimization_level[offset]) {
+        return 1;
+    } else {
+        return 0;
+    }
+} else {
+    return 0;
+}
+
+/* Optimization levels for size-based filling */
+static const size_t optimization_level[] = {4096, 8192, 16384, 32768, 65536};
+
+/* Create a new quicklist.
+ * Free with quicklistRelease(). */
+quicklist *quicklistCreate(void) {
+    struct quicklist *quicklist;
+
+    quicklist = zmalloc(sizeof(*quicklist));
+    quicklist->head = quicklist->tail = NULL;
+    quicklist->len = 0;
+    quicklist->count = 0;
+    quicklist->compress = 0;
+    quicklist->fill = -2;
+    return quicklist;
+}
+

这个 fill 就是传进来的 list-max-ziplist-size, 具体对应的就是

+
  • -5: 每个quicklist节点上的ziplist大小不能超过64 Kb。(注:1kb => 1024 bytes)
  • -4: 每个quicklist节点上的ziplist大小不能超过32 Kb。
  • -3: 每个quicklist节点上的ziplist大小不能超过16 Kb。
  • -2: 每个quicklist节点上的ziplist大小不能超过8 Kb。(-2是Redis给出的默认值)也就是上面的 quicklist->fill = -2;
  • -1: 每个quicklist节点上的ziplist大小不能超过4 Kb。
+

压缩

list-compress-depth这个参数呢是用来配置压缩的,等等压缩是为啥,不是里面已经是压缩表了么,大牛们就是为了性能殚精竭虑,这里考虑到的是一个场景,一般状况下,list 都是两端的访问频率比较高,那么是不是可以对中间的数据进行压缩,那么这个参数就是用来表示

+
/* depth of end nodes not to compress;0=off */
+
  • 0,代表不压缩,默认值
  • 1,两端各一个节点不压缩
  • 2,两端各两个节点不压缩
  • … 依次类推

压缩后的 ziplist 就会变成 quicklistLZF,然后替换 zl 指针,这里使用的是 LZF 压缩算法,压缩后的 quicklistLZF 中的 compressed 也是个柔性数组,压缩后的 ziplist 整个就放进这个柔性数组
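这两个参数在 redis.conf 里大概长这样(取值只是示意):

list-max-ziplist-size -2
list-compress-depth 1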

插入过程

简单说下插入元素的过程

+
/* Wrapper to allow argument-based switching between HEAD/TAIL pop */
+void quicklistPush(quicklist *quicklist, void *value, const size_t sz,
+                   int where) {
+    if (where == QUICKLIST_HEAD) {
+        quicklistPushHead(quicklist, value, sz);
+    } else if (where == QUICKLIST_TAIL) {
+        quicklistPushTail(quicklist, value, sz);
+    }
+}
+
+/* Add new entry to head node of quicklist.
+ *
+ * Returns 0 if used existing head.
+ * Returns 1 if new head created. */
+int quicklistPushHead(quicklist *quicklist, void *value, size_t sz) {
+    quicklistNode *orig_head = quicklist->head;
+    if (likely(
+            _quicklistNodeAllowInsert(quicklist->head, quicklist->fill, sz))) {
+        quicklist->head->zl =
+            ziplistPush(quicklist->head->zl, value, sz, ZIPLIST_HEAD);
+        quicklistNodeUpdateSz(quicklist->head);
+    } else {
+        quicklistNode *node = quicklistCreateNode();
+        node->zl = ziplistPush(ziplistNew(), value, sz, ZIPLIST_HEAD);
+
+        quicklistNodeUpdateSz(node);
+        _quicklistInsertNodeBefore(quicklist, quicklist->head, node);
+    }
+    quicklist->count++;
+    quicklist->head->count++;
+    return (orig_head != quicklist->head);
+}
+
+/* Add new entry to tail node of quicklist.
+ *
+ * Returns 0 if used existing tail.
+ * Returns 1 if new tail created. */
+int quicklistPushTail(quicklist *quicklist, void *value, size_t sz) {
+    quicklistNode *orig_tail = quicklist->tail;
+    if (likely(
+            _quicklistNodeAllowInsert(quicklist->tail, quicklist->fill, sz))) {
+        quicklist->tail->zl =
+            ziplistPush(quicklist->tail->zl, value, sz, ZIPLIST_TAIL);
+        quicklistNodeUpdateSz(quicklist->tail);
+    } else {
+        quicklistNode *node = quicklistCreateNode();
+        node->zl = ziplistPush(ziplistNew(), value, sz, ZIPLIST_TAIL);
+
+        quicklistNodeUpdateSz(node);
+        _quicklistInsertNodeAfter(quicklist, quicklist->tail, node);
+    }
+    quicklist->count++;
+    quicklist->tail->count++;
+    return (orig_tail != quicklist->tail);
+}
+
+/* Wrappers for node inserting around existing node. */
+REDIS_STATIC void _quicklistInsertNodeBefore(quicklist *quicklist,
+                                             quicklistNode *old_node,
+                                             quicklistNode *new_node) {
+    __quicklistInsertNode(quicklist, old_node, new_node, 0);
+}
+
+REDIS_STATIC void _quicklistInsertNodeAfter(quicklist *quicklist,
+                                            quicklistNode *old_node,
+                                            quicklistNode *new_node) {
+    __quicklistInsertNode(quicklist, old_node, new_node, 1);
+}
+
+/* Insert 'new_node' after 'old_node' if 'after' is 1.
+ * Insert 'new_node' before 'old_node' if 'after' is 0.
+ * Note: 'new_node' is *always* uncompressed, so if we assign it to
+ *       head or tail, we do not need to uncompress it. */
+REDIS_STATIC void __quicklistInsertNode(quicklist *quicklist,
+                                        quicklistNode *old_node,
+                                        quicklistNode *new_node, int after) {
+    if (after) {
+        new_node->prev = old_node;
+        if (old_node) {
+            new_node->next = old_node->next;
+            if (old_node->next)
+                old_node->next->prev = new_node;
+            old_node->next = new_node;
+        }
+        if (quicklist->tail == old_node)
+            quicklist->tail = new_node;
+    } else {
+        new_node->next = old_node;
+        if (old_node) {
+            new_node->prev = old_node->prev;
+            if (old_node->prev)
+                old_node->prev->next = new_node;
+            old_node->prev = new_node;
+        }
+        if (quicklist->head == old_node)
+            quicklist->head = new_node;
+    }
+    /* If this insert creates the only element so far, initialize head/tail. */
+    if (quicklist->len == 0) {
+        quicklist->head = quicklist->tail = new_node;
+    }
+
+    if (old_node)
+        quicklistCompress(quicklist, old_node);
+
+    quicklist->len++;
+}
+

The first step picks the push function by end: quicklistPushHead for the head, quicklistPushTail for the tail. Taking quicklistPushHead as the example, it first checks whether the current head quicklistNode still allows another element into its ziplist; if so, the element is appended there, otherwise a new quicklistNode is created and spliced in via _quicklistInsertNodeBefore. The node-level insertion itself is ordinary doubly-linked-list splicing.
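To make that control flow concrete, here is a minimal, self-contained C sketch of the same head-insert logic. It is not Redis code: the ziplist is replaced by a small fixed-capacity int array (NODE_CAP playing the role of a positive list-max-ziplist-size), and the names toylist, push_head and insert_node_before are invented for illustration.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* A toy "quicklist": a doubly linked list whose nodes hold small arrays.
 * NODE_CAP stands in for a positive list-max-ziplist-size. */
#define NODE_CAP 4

typedef struct node {
    struct node *prev, *next;
    int items[NODE_CAP];
    int count;
} node;

typedef struct {
    node *head, *tail;
    long count; /* total elements */
    long len;   /* number of nodes */
} toylist;

/* Mirrors __quicklistInsertNode(..., after=0): link new_node before old_node. */
static void insert_node_before(toylist *l, node *old_node, node *new_node) {
    new_node->next = old_node;
    if (old_node) {
        new_node->prev = old_node->prev;
        if (old_node->prev) old_node->prev->next = new_node;
        old_node->prev = new_node;
    }
    if (l->head == old_node) l->head = new_node;
    if (l->len == 0) l->head = l->tail = new_node;
    l->len++;
}

/* Mirrors quicklistPushHead: reuse the head node while it has room,
 * otherwise create a fresh node and splice it in before the old head. */
static int push_head(toylist *l, int value) {
    node *orig_head = l->head;
    if (l->head && l->head->count < NODE_CAP) {
        memmove(l->head->items + 1, l->head->items,
                l->head->count * sizeof(int));
        l->head->items[0] = value;
    } else {
        node *n = calloc(1, sizeof(*n));
        n->items[0] = value;
        insert_node_before(l, l->head, n);
    }
    l->count++;
    l->head->count++;
    return orig_head != l->head; /* 1 if a new head node was created */
}

int main(void) {
    toylist l = {0};
    for (int i = 0; i < 10; i++)
        printf("push %d -> new head? %d\n", i, push_head(&l, i));
    printf("elements=%ld nodes=%ld\n", l.count, l.len);
    return 0;
}

Run it and a new head node is reported once every NODE_CAP pushes, mirroring how quicklistPushHead only allocates a node when the head's ziplist is full.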

]]>
Redis @@ -10019,192 +10019,409 @@ typedef struct redisObject {
-
-    redis系列介绍七-过期策略
-    /2020/04/12/redis%E7%B3%BB%E5%88%97%E4%BB%8B%E7%BB%8D%E4%B8%83/
-    This post is no longer about data structures; the main ones have all been covered. Instead it fills gaps: important, fundamental concepts that are easy to overlook, both in redis tutorial series and while studying redis, because it is tempting to equate source-level study with just the core data structures and "algorithms".
redis is mainly used as a high-performance cache, so what does a cache need to care about? First, access speed: if reads were as slow as the database, the cache would be pointless. Second, as the word implies, cached data only exists to make reads faster, and in most scenarios it must eventually be refreshed or expired. Which brings me to the first topic (phew):

-

redis expiration strategy

How does redis actually expire cached keys? The most naive guess would be a timer for every key that has an expiry set: deletion is perfectly prompt, but the overhead is clearly too high. What redis actually uses is a combination of lazy expiration and periodic expiration.
The lazy strategy checks at access time: on a get, redis first checks whether the key has expired; if so, it deletes the key and returns nothing, otherwise it returns the value as usual.
The main code is

-
/* This function is called when we are going to perform some operation
- * in a given key, but such key may be already logically expired even if
- * it still exists in the database. The main way this function is called
- * is via lookupKey*() family of functions.
- *
- * The behavior of the function depends on the replication role of the
- * instance, because slave instances do not expire keys, they wait
- * for DELs from the master for consistency matters. However even
- * slaves will try to have a coherent return value for the function,
- * so that read commands executed in the slave side will be able to
- * behave like if the key is expired even if still present (because the
- * master has yet to propagate the DEL).
- *
- * In masters as a side effect of finding a key which is expired, such
- * key will be evicted from the database. Also this may trigger the
- * propagation of a DEL/UNLINK command in AOF / replication stream.
- *
- * The return value of the function is 0 if the key is still valid,
- * otherwise the function returns 1 if the key is expired. */
-int expireIfNeeded(redisDb *db, robj *key) {
-    if (!keyIsExpired(db,key)) return 0;
-
-    /* If we are running in the context of a slave, instead of
-     * evicting the expired key from the database, we return ASAP:
-     * the slave key expiration is controlled by the master that will
-     * send us synthesized DEL operations for expired keys.
-     *
-     * Still we try to return the right information to the caller,
-     * that is, 0 if we think the key should be still valid, 1 if
-     * we think the key is expired at this time. */
-    if (server.masterhost != NULL) return 1;
-
-    /* Delete the key */
-    server.stat_expiredkeys++;
-    propagateExpire(db,key,server.lazyfree_lazy_expire);
-    notifyKeyspaceEvent(NOTIFY_EXPIRED,
-        "expired",key,db->id);
-    return server.lazyfree_lazy_expire ? dbAsyncDelete(db,key) :
-                                         dbSyncDelete(db,key);
-}
-
-/* Check if the key is expired. */
-int keyIsExpired(redisDb *db, robj *key) {
-    mstime_t when = getExpire(db,key);
-    mstime_t now;
-
-    if (when < 0) return 0; /* No expire for this key */
-
-    /* Don't expire anything while loading. It will be done later. */
-    if (server.loading) return 0;
-
-    /* If we are in the context of a Lua script, we pretend that time is
-     * blocked to when the Lua script started. This way a key can expire
-     * only the first time it is accessed and not in the middle of the
-     * script execution, making propagation to slaves / AOF consistent.
-     * See issue #1525 on Github for more information. */
-    if (server.lua_caller) {
-        now = server.lua_time_start;
-    }
-    /* If we are in the middle of a command execution, we still want to use
-     * a reference time that does not change: in that case we just use the
-     * cached time, that we update before each call in the call() function.
-     * This way we avoid that commands such as RPOPLPUSH or similar, that
-     * may re-open the same key multiple times, can invalidate an already
-     * open object in a next call, if the next call will see the key expired,
-     * while the first did not. */
-    else if (server.fixed_time_expire > 0) {
-        now = server.mstime;
-    }
-    /* For the other cases, we want to use the most fresh time we have. */
-    else {
-        now = mstime();
-    }
-
-    /* The key expired if the current (virtual or real) time is greater
-     * than the expire time of the key. */
-    return now > when;
-}
-/* Return the expire time of the specified key, or -1 if no expire
- * is associated with this key (i.e. the key is non volatile) */
-long long getExpire(redisDb *db, robj *key) {
-    dictEntry *de;
+    redis数据结构介绍四-第四部分 压缩表
+    /2020/01/19/redis%E6%95%B0%E6%8D%AE%E7%BB%93%E6%9E%84%E4%BB%8B%E7%BB%8D%E5%9B%9B/
+    There is another list-like data structure in redis called the ziplist. Its goal is to replace the linked list: a linked list is easy to understand, and a doubly linked list has prev/next pointers (with or without a dummy head), but its big drawback versus a plain array is non-contiguous, fragmented storage, so memory utilization is poor, and pointer chasing is somewhat slower than plain offset arithmetic. That is not even the main point, though: the ziplist is designed above all to make list storage more memory-efficient

+
+

The ziplist is a specially encoded dually linked list that is designed to be very memory efficient.
This line, taken from the comment in redis's ziplist.c, states the motivation. The overall layout looks roughly like this

+
+
<zlbytes> <zltail> <zllen> <entry> <entry> ... <entry> <zlend>
+

where
<zlbytes> is the total number of bytes the ziplist occupies, a uint32_t (32-bit unsigned integer); the count includes the 4 bytes of this field itself
<zltail> is also a uint32_t and holds the byte offset of the last entry in the ziplist. Thanks to <zltail>, the last entry can be located without traversing the whole ziplist, so push/pop at the tail are fast.
<zllen> (uint16_t) is the number of entries in the ziplist. Being 16 bits, once the count exceeds what it can represent all 16 bits are set to 1, and the real count can only be obtained by walking the entire ziplist
<entry> is an actual data item, explained below
<zlend> is the final byte of the ziplist, fixed at 255.
Now look at the concrete layout of an <entry>,

+
<prevlen> <encoding> <entry-data>
+

First, <prevlen> has two forms: if the previous entry is at most 253 bytes long, a single uint8_t holds its length; if it is longer, the field takes five bytes, the first byte being 254 (meaning the length no longer fits in one byte) and the following four bytes holding it
<encoding> is more involved; read the source comment below first

+
* |00pppppp| - 1 byte
+*      String value with length less than or equal to 63 bytes (6 bits).
+*      "pppppp" represents the unsigned 6 bit length.
+* |01pppppp|qqqqqqqq| - 2 bytes
+*      String value with length less than or equal to 16383 bytes (14 bits).
+*      IMPORTANT: The 14 bit number is stored in big endian.
+* |10000000|qqqqqqqq|rrrrrrrr|ssssssss|tttttttt| - 5 bytes
+*      String value with length greater than or equal to 16384 bytes.
+*      Only the 4 bytes following the first byte represents the length
+*      up to 2^32-1. The 6 lower bits of the first byte are not used and
+*      are set to zero.
+*      IMPORTANT: The 32 bit number is stored in big endian.
+* |11000000| - 3 bytes
+*      Integer encoded as int16_t (2 bytes).
+* |11010000| - 5 bytes
+*      Integer encoded as int32_t (4 bytes).
+* |11100000| - 9 bytes
+*      Integer encoded as int64_t (8 bytes).
+* |11110000| - 4 bytes
+*      Integer encoded as 24 bit signed (3 bytes).
+* |11111110| - 2 bytes
+*      Integer encoded as 8 bit signed (1 byte).
+* |1111xxxx| - (with xxxx between 0000 and 1101) immediate 4 bit integer.
+*      Unsigned integer from 0 to 12. The encoded value is actually from
+*      1 to 13 because 0000 and 1111 can not be used, so 1 should be
+*      subtracted from the encoded 4 bit value to obtain the right value.
+* |11111111| - End of ziplist special entry.
+

First, if the top two bits of encoding are 00, the entry is a string of up to 63 bytes whose length lives directly in the remaining 6 bits, consuming no extra <entry-data> header; 01 means a string with a 14-bit length; 10 means the four bytes after the encoding byte hold the string length, with the remaining 6 bits of the first byte set to 0.
If the top two bits are 11, the entry is an integer: if the next two bits are 00, a 2-byte int16_t follows; 01, a 4-byte int32_t; 10, an 8-byte int64_t; 11, a 3-byte signed integer; in all these cases the low 4 bits must be 0.
Then 11111110 denotes a 1-byte signed integer, and 1111xxxx with xxxx between 0001 and 1101 encodes the immediate values 0 to 12. Why that range? Because 0000 is already taken above, and 1110 and 1111 are also in use, so 1 is subtracted from the stored 4-bit value.
A concrete example (the alignment is a bit off, bear with it)

+
[0f 00 00 00] [0c 00 00 00] [02 00] [00 f3] [02 f6] [ff]
+|**zlbytes***|  |***zltail***|  |*zllen*|  |entry1 entry2|  |zlend|
+

The first field says the whole ziplist is 15 bytes (zlbytes itself takes 4 of them). zltail is the offset of the last entry, which starts at byte 13. zllen says there are 2 entries. The first entry is 00 f3: 00 means the previous entry's length is 0, since nothing precedes it (arguably that byte could be optimized away), and f3 is 11110011 in binary, which falls under the |1111xxxx| pattern; since the stored 0001 to 1101 represent 0 to 12, f3 decodes to 2. For the second entry, 02 says the previous entry occupied 2 bytes, and f6 expands the same way to 5. ff marks the end of the ziplist. So this ziplist holds two elements, 2 and 5
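As a sanity check on that walkthrough, here is a small, self-contained C program that decodes exactly this 15-byte ziplist. It is a toy, not ziplist.c: it only handles the two layouts the example uses, a one-byte prevlen and the |1111xxxx| immediate-integer encoding.

#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* The example ziplist: [zlbytes][zltail][zllen][entry1][entry2][zlend] */
    const uint8_t zl[] = {0x0f, 0x00, 0x00, 0x00,  /* zlbytes = 15 */
                          0x0c, 0x00, 0x00, 0x00,  /* zltail  = 12 */
                          0x02, 0x00,              /* zllen   = 2  */
                          0x00, 0xf3,              /* entry1: prevlen=0, 1111 0011 */
                          0x02, 0xf6,              /* entry2: prevlen=2, 1111 0110 */
                          0xff};                   /* zlend */

    uint32_t zlbytes = zl[0] | zl[1] << 8 | zl[2] << 16 | (uint32_t)zl[3] << 24;
    uint32_t zltail  = zl[4] | zl[5] << 8 | zl[6] << 16 | (uint32_t)zl[7] << 24;
    uint16_t zllen   = (uint16_t)(zl[8] | zl[9] << 8);
    printf("zlbytes=%u zltail=%u zllen=%u\n",
           (unsigned)zlbytes, (unsigned)zltail, (unsigned)zllen);

    const uint8_t *p = zl + 10;                 /* first entry */
    for (int i = 0; i < zllen; i++) {
        uint8_t prevlen = *p++;                 /* example only needs the 1-byte form */
        uint8_t enc = *p++;
        if ((enc & 0xf0) == 0xf0 && enc != 0xfe && enc != 0xff)
            /* |1111xxxx|: stored 0001..1101 encodes the values 0..12 */
            printf("entry %d: prevlen=%u value=%d\n",
                   i + 1, prevlen, (enc & 0x0f) - 1);
    }
    return 0;
}

It prints zlbytes=15 zltail=12 zllen=2 and then the two values 2 and 5, matching the hand decoding above.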

+]]>
+
+    redis淘汰策略复习
+    /2021/08/01/redis%E6%B7%98%E6%B1%B0%E7%AD%96%E7%95%A5%E5%A4%8D%E4%B9%A0/
+    Having reviewed redis's expiration strategy earlier, let's now review the eviction strategy. The two are easy to conflate: expiration targets keys that were given an expiry time and is more of a logical policy (active, passive, or timer-driven, each with trade-offs), while eviction is a policy for keeping the system stable, because if memory fills up and nothing is done, a failure is the likely outcome. I previously analyzed redis's LRU and LFU from the source code, which is the low-level, fine-grained view; for the concrete system-level policies, here is the overview from redis labs

+
Policy: Description
noeviction: Returns an error if the memory limit has been reached when trying to insert more data
allkeys-lru: Evicts the least recently used keys out of all keys
allkeys-lfu: Evicts the least frequently used keys out of all keys
allkeys-random: Randomly evicts keys out of all keys
volatile-lru: Evicts the least recently used keys out of all keys with an "expire" field set
volatile-lfu: Evicts the least frequently used keys out of all keys with an "expire" field set
volatile-random: Randomly evicts keys with an "expire" field set
volatile-ttl: Evicts the shortest time-to-live keys out of all keys with an "expire" field set
+

Among these, the default in the redis labs (Redis Enterprise) docs quoted above is volatile-lru, while the open-source redis.conf default is noeviction, the first row. For a deeper look at lru and lfu, see my earlier post redis系列介绍八-淘汰策略
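The lru/lfu policies above are approximated by sampling rather than by exact bookkeeping. The following self-contained C sketch shows the idea behind allkeys-lru: pick a few random candidates (CANDIDATES standing in for maxmemory-samples) and evict the stalest one. The arrays and numbers are invented for illustration; this is not redis source.

#include <stdio.h>
#include <stdlib.h>

#define KEYS 1000
#define CANDIDATES 5   /* plays the role of maxmemory-samples */

static unsigned last_access[KEYS]; /* fake per-key access clock */

/* Approximate allkeys-lru: sample a few random keys and evict the one
 * with the oldest access time, instead of keeping a perfect LRU list. */
static int pick_victim(void) {
    int victim = rand() % KEYS;
    for (int i = 1; i < CANDIDATES; i++) {
        int k = rand() % KEYS;
        if (last_access[k] < last_access[victim]) victim = k;
    }
    return victim;
}

int main(void) {
    srand(7);
    for (int k = 0; k < KEYS; k++) last_access[k] = rand() % 100000;
    printf("evict key %d\n", pick_victim());
    return 0;
}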

+]]>
+ + redis + + + redis + 淘汰策略 + 应用 + Evict + +
+
+    mybatis系列-typeAliases系统
+    /2023/01/01/mybatis%E7%B3%BB%E5%88%97-typeAliases%E7%B3%BB%E7%BB%9F/
+    This concept has come up before: mybatis's configuration, as well as some of its initialization logic, relies on typeAliases,

+
<typeAliases>
+  <typeAlias alias="Author" type="domain.blog.Author"/>
+  <typeAlias alias="Blog" type="domain.blog.Blog"/>
+  <typeAlias alias="Comment" type="domain.blog.Comment"/>
+  <typeAlias alias="Post" type="domain.blog.Post"/>
+  <typeAlias alias="Section" type="domain.blog.Section"/>
+  <typeAlias alias="Tag" type="domain.blog.Tag"/>
+</typeAliases>
+

Type aliases can be registered here and then used in mybatis configuration to shorten references to these types. The underlying mechanism is essentially a map,

+
public class TypeAliasRegistry {
 
-    /* No expire? return ASAP */
-    if (dictSize(db->expires) == 0 ||
-       (de = dictFind(db->expires,key->ptr)) == NULL) return -1;
+  private final Map<String, Class<?>> typeAliases = new HashMap<>();
+

with a string as the key and a Class object as the value. Take the configuration file we used at the very beginning:

+
<dataSource type="POOLED">
+    <property name="driver" value="${driver}"/>
+    <property name="url" value="${url}"/>
+    <property name="username" value="${username}"/>
+    <property name="password" value="${password}"/>
+</dataSource>
+

The dataSource here is POOLED, so it must be an alias, or at least something that needs resolving.
That alias is initialized in Configuration's constructor

+
public Configuration() {
+    typeAliasRegistry.registerAlias("JDBC", JdbcTransactionFactory.class);
+    typeAliasRegistry.registerAlias("MANAGED", ManagedTransactionFactory.class);
 
-    /* The entry was found in the expire dict, this means it should also
-     * be present in the main dict (safety check). */
-    serverAssertWithInfo(NULL,key,dictFind(db->dict,key->ptr) != NULL);
-    return dictGetSignedIntegerVal(de);
-}
-

A few points to note here. First, with lazy deletion the lazyfree_lazy_expire parameter decides whether the delete is executed synchronously or asynchronously. Second, a slave does not perform the deletion itself, because the master sends the slave a DEL when the key expires.
What is the problem with this strategy alone? If some keys are never accessed again, they never expire and keep occupying memory, so redis pairs lazy expiration with periodic expiration. How does the periodic strategy run?

-
/* This function handles 'background' operations we are required to do
- * incrementally in Redis databases, such as active key expiring, resizing,
- * rehashing. */
-void databasesCron(void) {
-    /* Expire keys by random sampling. Not required for slaves
-     * as master will synthesize DELs for us. */
-    if (server.active_expire_enabled) {
-        if (server.masterhost == NULL) {
-            activeExpireCycle(ACTIVE_EXPIRE_CYCLE_SLOW);
-        } else {
-            expireSlaveKeys();
-        }
-    }
+    typeAliasRegistry.registerAlias("JNDI", JndiDataSourceFactory.class);
+    typeAliasRegistry.registerAlias("POOLED", PooledDataSourceFactory.class);
+    typeAliasRegistry.registerAlias("UNPOOLED", UnpooledDataSourceFactory.class);
 
-    /* Defrag keys gradually. */
-    activeDefragCycle();
+    typeAliasRegistry.registerAlias("PERPETUAL", PerpetualCache.class);
+    typeAliasRegistry.registerAlias("FIFO", FifoCache.class);
+    typeAliasRegistry.registerAlias("LRU", LruCache.class);
+    typeAliasRegistry.registerAlias("SOFT", SoftCache.class);
+    typeAliasRegistry.registerAlias("WEAK", WeakCache.class);
 
-    /* Perform hash tables rehashing if needed, but only if there are no
-     * other processes saving the DB on disk. Otherwise rehashing is bad
-     * as will cause a lot of copy-on-write of memory pages. */
-    if (!hasActiveChildProcess()) {
-        /* We use global counters so if we stop the computation at a given
-         * DB we'll be able to start from the successive in the next
-         * cron loop iteration. */
-        static unsigned int resize_db = 0;
-        static unsigned int rehash_db = 0;
-        int dbs_per_call = CRON_DBS_PER_CALL;
-        int j;
+    typeAliasRegistry.registerAlias("DB_VENDOR", VendorDatabaseIdProvider.class);
 
-        /* Don't test more DBs than we have. */
-        if (dbs_per_call > server.dbnum) dbs_per_call = server.dbnum;
+    typeAliasRegistry.registerAlias("XML", XMLLanguageDriver.class);
+    typeAliasRegistry.registerAlias("RAW", RawLanguageDriver.class);
 
-        /* Resize */
-        for (j = 0; j < dbs_per_call; j++) {
-            tryResizeHashTables(resize_db % server.dbnum);
-            resize_db++;
-        }
+    typeAliasRegistry.registerAlias("SLF4J", Slf4jImpl.class);
+    typeAliasRegistry.registerAlias("COMMONS_LOGGING", JakartaCommonsLoggingImpl.class);
+    typeAliasRegistry.registerAlias("LOG4J", Log4jImpl.class);
+    typeAliasRegistry.registerAlias("LOG4J2", Log4j2Impl.class);
+    typeAliasRegistry.registerAlias("JDK_LOGGING", Jdk14LoggingImpl.class);
+    typeAliasRegistry.registerAlias("STDOUT_LOGGING", StdOutImpl.class);
+    typeAliasRegistry.registerAlias("NO_LOGGING", NoLoggingImpl.class);
 
-        /* Rehash */
-        if (server.activerehashing) {
-            for (j = 0; j < dbs_per_call; j++) {
-                int work_done = incrementallyRehash(rehash_db);
-                if (work_done) {
-                    /* If the function did some work, stop here, we'll do
-                     * more at the next cron loop. */
-                    break;
-                } else {
-                    /* If this db didn't need rehash, we'll try the next one. */
-                    rehash_db++;
-                    rehash_db %= server.dbnum;
-                }
-            }
-        }
-    }
-}
-/* Try to expire a few timed out keys. The algorithm used is adaptive and
- * will use few CPU cycles if there are few expiring keys, otherwise
- * it will get more aggressive to avoid that too much memory is used by
- * keys that can be removed from the keyspace.
- *
- * Every expire cycle tests multiple databases: the next call will start
- * again from the next db, with the exception of exists for time limit: in that
- * case we restart again from the last database we were processing. Anyway
- * no more than CRON_DBS_PER_CALL databases are tested at every iteration.
- *
- * The function can perform more or less work, depending on the "type"
- * argument. It can execute a "fast cycle" or a "slow cycle". The slow
- * cycle is the main way we collect expired cycles: this happens with
- * the "server.hz" frequency (usually 10 hertz).
- *
- * However the slow cycle can exit for timeout, since it used too much time.
- * For this reason the function is also invoked to perform a fast cycle
- * at every event loop cycle, in the beforeSleep() function. The fast cycle
- * will try to perform less work, but will do it much more often.
- *
- * The following are the details of the two expire cycles and their stop
- * conditions:
- *
- * If type is ACTIVE_EXPIRE_CYCLE_FAST the function will try to run a
- * "fast" expire cycle that takes no longer than EXPIRE_FAST_CYCLE_DURATION
- * microseconds, and is not repeated again before the same amount of time.
- * The cycle will also refuse to run at all if the latest slow cycle did not
- * terminate because of a time limit condition.
- *
- * If type is ACTIVE_EXPIRE_CYCLE_SLOW, that normal expire cycle is
- * executed, where the time limit is a percentage of the REDIS_HZ period
- * as specified by the ACTIVE_EXPIRE_CYCLE_SLOW_TIME_PERC define. In the
- * fast cycle, the check of every database is interrupted once the number
- * of already expired keys in the database is estimated to be lower than
+    typeAliasRegistry.registerAlias("CGLIB", CglibProxyFactory.class);
+    typeAliasRegistry.registerAlias("JAVASSIST", JavassistProxyFactory.class);
+
+    languageRegistry.setDefaultDriverClass(XMLLanguageDriver.class);
+    languageRegistry.register(RawLanguageDriver.class);
+  }
+

It is precisely the line typeAliasRegistry.registerAlias("POOLED", PooledDataSourceFactory.class); that registers
the alias POOLED as corresponding to PooledDataSourceFactory.class.
The registration method itself is

+
public void registerAlias(String alias, Class<?> value) {
+  if (alias == null) {
+    throw new TypeException("The parameter alias cannot be null");
+  }
+  // issue #748
+  // normalize the alias to lower case
+  String key = alias.toLowerCase(Locale.ENGLISH);
+  // reject re-registration of the same alias with a different type
+  if (typeAliases.containsKey(key) && typeAliases.get(key) != null && !typeAliases.get(key).equals(value)) {
+    throw new TypeException("The alias '" + alias + "' is already mapped to the value '" + typeAliases.get(key).getName() + "'.");
+  }
+  // store the mapping in the map
+  typeAliases.put(key, value);
+}
+

And the lookup logic lives here

+
public <T> Class<T> resolveAlias(String string) {
+    try {
+      if (string == null) {
+        return null;
+      }
+      // issue #748
+      // lower-cased the same way
+      String key = string.toLowerCase(Locale.ENGLISH);
+      Class<T> value;
+      if (typeAliases.containsKey(key)) {
+        value = (Class<T>) typeAliases.get(key);
+      } else {
+        // otherwise fall back to loading the class by its qualified name
+        value = (Class<T>) Resources.classForName(string);
+      }
+      return value;
+    } catch (ClassNotFoundException e) {
+      throw new TypeException("Could not resolve type alias '" + string + "'.  Cause: " + e, e);
+    }
+  }
+

The logic is simple, but it is an indispensable piece of mybatis.
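For the shape of this mechanism outside Java, here is a hedged, self-contained C sketch of the same idea: a lowercase-normalized alias table with a conflict check on re-registration and a fall-through for unknown names. All names here (alias, register_alias, resolve_alias) are invented for illustration; mybatis's real implementation is the Java code above.

#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_ALIASES 64

typedef struct { char key[64]; const char *type; } alias;
static alias table[MAX_ALIASES];
static int n_aliases;

static void lower(char *dst, const char *src, size_t cap) {
    size_t i;
    for (i = 0; src[i] && i + 1 < cap; i++)
        dst[i] = (char)tolower((unsigned char)src[i]);
    dst[i] = '\0';
}

static void register_alias(const char *a, const char *type) {
    char key[64];
    lower(key, a, sizeof key);            /* normalize, like issue #748 */
    for (int i = 0; i < n_aliases; i++)
        if (!strcmp(table[i].key, key)) {
            if (strcmp(table[i].type, type)) {
                fprintf(stderr, "alias '%s' already mapped to '%s'\n",
                        a, table[i].type);
                exit(1);
            }
            return;                        /* same mapping: no-op */
        }
    if (n_aliases == MAX_ALIASES) { fprintf(stderr, "registry full\n"); exit(1); }
    strcpy(table[n_aliases].key, key);
    table[n_aliases++].type = type;
}

static const char *resolve_alias(const char *a) {
    char key[64];
    lower(key, a, sizeof key);
    for (int i = 0; i < n_aliases; i++)
        if (!strcmp(table[i].key, key)) return table[i].type;
    return a;  /* fall back to treating it as a fully qualified name */
}

int main(void) {
    register_alias("POOLED", "PooledDataSourceFactory");
    register_alias("UNPOOLED", "UnpooledDataSourceFactory");
    printf("%s\n", resolve_alias("pooled"));  /* case-insensitive lookup */
    printf("%s\n", resolve_alias("domain.blog.Author"));
    return 0;
}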

+]]>
+ + Java + Mybatis + + + Java + Mysql + Mybatis + +
+
+    redis系列介绍七-过期策略
+    /2020/04/12/redis%E7%B3%BB%E5%88%97%E4%BB%8B%E7%BB%8D%E4%B8%83/
+    This post is no longer about data structures; the main ones have all been covered. Instead it fills gaps: important, fundamental concepts that are easy to overlook, both in redis tutorial series and while studying redis, because it is tempting to equate source-level study with just the core data structures and "algorithms".
redis is mainly used as a high-performance cache, so what does a cache need to care about? First, access speed: if reads were as slow as the database, the cache would be pointless. Second, as the word implies, cached data only exists to make reads faster, and in most scenarios it must eventually be refreshed or expired. Which brings me to the first topic (phew):

+

redis expiration strategy

How does redis actually expire cached keys? The most naive guess would be a timer for every key that has an expiry set: deletion is perfectly prompt, but the overhead is clearly too high. What redis actually uses is a combination of lazy expiration and periodic expiration.
The lazy strategy checks at access time: on a get, redis first checks whether the key has expired; if so, it deletes the key and returns nothing, otherwise it returns the value as usual.
The main code is

+
/* This function is called when we are going to perform some operation
+ * in a given key, but such key may be already logically expired even if
+ * it still exists in the database. The main way this function is called
+ * is via lookupKey*() family of functions.
+ *
+ * The behavior of the function depends on the replication role of the
+ * instance, because slave instances do not expire keys, they wait
+ * for DELs from the master for consistency matters. However even
+ * slaves will try to have a coherent return value for the function,
+ * so that read commands executed in the slave side will be able to
+ * behave like if the key is expired even if still present (because the
+ * master has yet to propagate the DEL).
+ *
+ * In masters as a side effect of finding a key which is expired, such
+ * key will be evicted from the database. Also this may trigger the
+ * propagation of a DEL/UNLINK command in AOF / replication stream.
+ *
+ * The return value of the function is 0 if the key is still valid,
+ * otherwise the function returns 1 if the key is expired. */
+int expireIfNeeded(redisDb *db, robj *key) {
+    if (!keyIsExpired(db,key)) return 0;
+
+    /* If we are running in the context of a slave, instead of
+     * evicting the expired key from the database, we return ASAP:
+     * the slave key expiration is controlled by the master that will
+     * send us synthesized DEL operations for expired keys.
+     *
+     * Still we try to return the right information to the caller,
+     * that is, 0 if we think the key should be still valid, 1 if
+     * we think the key is expired at this time. */
+    if (server.masterhost != NULL) return 1;
+
+    /* Delete the key */
+    server.stat_expiredkeys++;
+    propagateExpire(db,key,server.lazyfree_lazy_expire);
+    notifyKeyspaceEvent(NOTIFY_EXPIRED,
+        "expired",key,db->id);
+    return server.lazyfree_lazy_expire ? dbAsyncDelete(db,key) :
+                                         dbSyncDelete(db,key);
+}
+
+/* Check if the key is expired. */
+int keyIsExpired(redisDb *db, robj *key) {
+    mstime_t when = getExpire(db,key);
+    mstime_t now;
+
+    if (when < 0) return 0; /* No expire for this key */
+
+    /* Don't expire anything while loading. It will be done later. */
+    if (server.loading) return 0;
+
+    /* If we are in the context of a Lua script, we pretend that time is
+     * blocked to when the Lua script started. This way a key can expire
+     * only the first time it is accessed and not in the middle of the
+     * script execution, making propagation to slaves / AOF consistent.
+     * See issue #1525 on Github for more information. */
+    if (server.lua_caller) {
+        now = server.lua_time_start;
+    }
+    /* If we are in the middle of a command execution, we still want to use
+     * a reference time that does not change: in that case we just use the
+     * cached time, that we update before each call in the call() function.
+     * This way we avoid that commands such as RPOPLPUSH or similar, that
+     * may re-open the same key multiple times, can invalidate an already
+     * open object in a next call, if the next call will see the key expired,
+     * while the first did not. */
+    else if (server.fixed_time_expire > 0) {
+        now = server.mstime;
+    }
+    /* For the other cases, we want to use the most fresh time we have. */
+    else {
+        now = mstime();
+    }
+
+    /* The key expired if the current (virtual or real) time is greater
+     * than the expire time of the key. */
+    return now > when;
+}
+/* Return the expire time of the specified key, or -1 if no expire
+ * is associated with this key (i.e. the key is non volatile) */
+long long getExpire(redisDb *db, robj *key) {
+    dictEntry *de;
+
+    /* No expire? return ASAP */
+    if (dictSize(db->expires) == 0 ||
+       (de = dictFind(db->expires,key->ptr)) == NULL) return -1;
+
+    /* The entry was found in the expire dict, this means it should also
+     * be present in the main dict (safety check). */
+    serverAssertWithInfo(NULL,key,dictFind(db->dict,key->ptr) != NULL);
+    return dictGetSignedIntegerVal(de);
+}
+

A few points to note here. First, with lazy deletion the lazyfree_lazy_expire parameter decides whether the delete is executed synchronously or asynchronously. Second, a slave does not perform the deletion itself, because the master sends the slave a DEL when the key expires.
What is the problem with this strategy alone? If some keys are never accessed again, they never expire and keep occupying memory, so redis pairs lazy expiration with periodic expiration. How does the periodic strategy run?

+
/* This function handles 'background' operations we are required to do
+ * incrementally in Redis databases, such as active key expiring, resizing,
+ * rehashing. */
+void databasesCron(void) {
+    /* Expire keys by random sampling. Not required for slaves
+     * as master will synthesize DELs for us. */
+    if (server.active_expire_enabled) {
+        if (server.masterhost == NULL) {
+            activeExpireCycle(ACTIVE_EXPIRE_CYCLE_SLOW);
+        } else {
+            expireSlaveKeys();
+        }
+    }
+
+    /* Defrag keys gradually. */
+    activeDefragCycle();
+
+    /* Perform hash tables rehashing if needed, but only if there are no
+     * other processes saving the DB on disk. Otherwise rehashing is bad
+     * as will cause a lot of copy-on-write of memory pages. */
+    if (!hasActiveChildProcess()) {
+        /* We use global counters so if we stop the computation at a given
+         * DB we'll be able to start from the successive in the next
+         * cron loop iteration. */
+        static unsigned int resize_db = 0;
+        static unsigned int rehash_db = 0;
+        int dbs_per_call = CRON_DBS_PER_CALL;
+        int j;
+
+        /* Don't test more DBs than we have. */
+        if (dbs_per_call > server.dbnum) dbs_per_call = server.dbnum;
+
+        /* Resize */
+        for (j = 0; j < dbs_per_call; j++) {
+            tryResizeHashTables(resize_db % server.dbnum);
+            resize_db++;
+        }
+
+        /* Rehash */
+        if (server.activerehashing) {
+            for (j = 0; j < dbs_per_call; j++) {
+                int work_done = incrementallyRehash(rehash_db);
+                if (work_done) {
+                    /* If the function did some work, stop here, we'll do
+                     * more at the next cron loop. */
+                    break;
+                } else {
+                    /* If this db didn't need rehash, we'll try the next one. */
+                    rehash_db++;
+                    rehash_db %= server.dbnum;
+                }
+            }
+        }
+    }
+}
+/* Try to expire a few timed out keys. The algorithm used is adaptive and
+ * will use few CPU cycles if there are few expiring keys, otherwise
+ * it will get more aggressive to avoid that too much memory is used by
+ * keys that can be removed from the keyspace.
+ *
+ * Every expire cycle tests multiple databases: the next call will start
+ * again from the next db, with the exception of exists for time limit: in that
+ * case we restart again from the last database we were processing. Anyway
+ * no more than CRON_DBS_PER_CALL databases are tested at every iteration.
+ *
+ * The function can perform more or less work, depending on the "type"
+ * argument. It can execute a "fast cycle" or a "slow cycle". The slow
+ * cycle is the main way we collect expired cycles: this happens with
+ * the "server.hz" frequency (usually 10 hertz).
+ *
+ * However the slow cycle can exit for timeout, since it used too much time.
+ * For this reason the function is also invoked to perform a fast cycle
+ * at every event loop cycle, in the beforeSleep() function. The fast cycle
+ * will try to perform less work, but will do it much more often.
+ *
+ * The following are the details of the two expire cycles and their stop
+ * conditions:
+ *
+ * If type is ACTIVE_EXPIRE_CYCLE_FAST the function will try to run a
+ * "fast" expire cycle that takes no longer than EXPIRE_FAST_CYCLE_DURATION
+ * microseconds, and is not repeated again before the same amount of time.
+ * The cycle will also refuse to run at all if the latest slow cycle did not
+ * terminate because of a time limit condition.
+ *
+ * If type is ACTIVE_EXPIRE_CYCLE_SLOW, that normal expire cycle is
+ * executed, where the time limit is a percentage of the REDIS_HZ period
+ * as specified by the ACTIVE_EXPIRE_CYCLE_SLOW_TIME_PERC define. In the
+ * fast cycle, the check of every database is interrupted once the number
+ * of already expired keys in the database is estimated to be lower than
  * a given percentage, in order to avoid doing too much work to gain too
  * little memory.
  *
@@ -10447,261 +10664,6 @@ timelimit = config_cycle_slow_time_perc*1000000/server.hz/100;源码
       
   
-  
-    redis数据结构介绍六 快表
-    /2020/01/22/redis%E6%95%B0%E6%8D%AE%E7%BB%93%E6%9E%84%E4%BB%8B%E7%BB%8D%E5%85%AD/
-    This should be the last post of the redis series, on the quicklist. The linked list covered at the very start actually served as the list type in early redis versions, but its flaws were mentioned before: insertion is convenient, yet space utilization is poor, binary search is impossible, and lookups are slow. The ziplist arose for the same reason, chasing better performance in both storage and access. I used to wonder what makes the quicklist quick; the lesson is that there is no silver bullet, only experts choosing the most suitable structure at the right moment, and one of their moves is combining different structures, the way Java's HashMap converts a bucket's linked list into a red-black tree above 8 nodes to speed up access. Enough talk, back to the quicklist: it backs the list type, and although you may not believe me if I say a quicklist is just a linked list, it really can be regarded as a doubly linked list. Look at the code

-
/* quicklistNode is a 32 byte struct describing a ziplist for a quicklist.
- * We use bit fields keep the quicklistNode at 32 bytes.
- * count: 16 bits, max 65536 (max zl bytes is 65k, so max count actually < 32k).
- * encoding: 2 bits, RAW=1, LZF=2.
- * container: 2 bits, NONE=1, ZIPLIST=2.
- * recompress: 1 bit, bool, true if node is temporarry decompressed for usage.
- * attempted_compress: 1 bit, boolean, used for verifying during testing.
- * extra: 10 bits, free for future use; pads out the remainder of 32 bits */
-typedef struct quicklistNode {
-    struct quicklistNode *prev;
-    struct quicklistNode *next;
-    unsigned char *zl;
-    unsigned int sz;             /* ziplist size in bytes */
-    unsigned int count : 16;     /* count of items in ziplist */
-    unsigned int encoding : 2;   /* RAW==1 or LZF==2 */
-    unsigned int container : 2;  /* NONE==1 or ZIPLIST==2 */
-    unsigned int recompress : 1; /* was this node previous compressed? */
-    unsigned int attempted_compress : 1; /* node can't compress; too small */
-    unsigned int extra : 10; /* more bits to steal for future usage */
-} quicklistNode;
-
-/* quicklistLZF is a 4+N byte struct holding 'sz' followed by 'compressed'.
- * 'sz' is byte length of 'compressed' field.
- * 'compressed' is LZF data with total (compressed) length 'sz'
- * NOTE: uncompressed length is stored in quicklistNode->sz.
- * When quicklistNode->zl is compressed, node->zl points to a quicklistLZF */
-typedef struct quicklistLZF {
-    unsigned int sz; /* LZF size in bytes*/
-    char compressed[];
-} quicklistLZF;
-
-/* quicklist is a 40 byte struct (on 64-bit systems) describing a quicklist.
- * 'count' is the number of total entries.
- * 'len' is the number of quicklist nodes.
- * 'compress' is: -1 if compression disabled, otherwise it's the number
- *                of quicklistNodes to leave uncompressed at ends of quicklist.
- * 'fill' is the user-requested (or default) fill factor. */
-typedef struct quicklist {
-    quicklistNode *head;
-    quicklistNode *tail;
-    unsigned long count;        /* total count of all entries in all ziplists */
-    unsigned long len;          /* number of quicklistNodes */
-    int fill : 16;              /* fill factor for individual nodes */
-    unsigned int compress : 16; /* depth of end nodes not to compress;0=off */
-} quicklist;
-

At a rough glance, quicklist has head and tail, and quicklistNode has prev and next pointers: the outline of a linked list. So why call this a quick list, and where does the speed come from? The key is unsigned char *zl. Seen zl before? It is a ziplist. A ziplist inside a linked list sounds like nesting dolls, but hold on and recall the ziplist's traits: high memory density, fast access to the tail via the header, backwards traversal, yet inserting in the middle is expensive, since everything behind the insertion point must shift. That is essentially an optimized array, with an array's remaining weaknesses; to be genuinely fast, the linked list and the array should be truly combined.

-

ziplist

Two redis configuration parameters matter here: list-max-ziplist-size and list-compress-depth. Start with the first. Since the quicklist combines a linked list with ziplist arrays, how exactly should, say, a 10-element list be laid out, and how large should each node's ziplist be? If every quicklistNode's ziplist held a single element, the structure would degenerate into a plain linked list; if all 10 elements sat in one node's ziplist, it would degenerate into a single ziplist. Hence list-max-ziplist-size, which neatly accepts both positive and negative values. A positive value caps the number of elements in each quicklistNode's ziplist: with list-max-ziplist-size = 5, the 10-element list becomes two quicklistNodes whose ziplists hold five elements each. A negative value instead caps the ziplist's size in bytes

-
size_t offset = (-fill) - 1;
-if (offset < (sizeof(optimization_level) / sizeof(*optimization_level))) {
-    if (sz <= optimization_level[offset]) {
-        return 1;
-    } else {
-        return 0;
-    }
-} else {
-    return 0;
-}
-
-/* Optimization levels for size-based filling */
-static const size_t optimization_level[] = {4096, 8192, 16384, 32768, 65536};
-
-/* Create a new quicklist.
- * Free with quicklistRelease(). */
-quicklist *quicklistCreate(void) {
-    struct quicklist *quicklist;
-
-    quicklist = zmalloc(sizeof(*quicklist));
-    quicklist->head = quicklist->tail = NULL;
-    quicklist->len = 0;
-    quicklist->count = 0;
-    quicklist->compress = 0;
-    quicklist->fill = -2;
-    return quicklist;
-}
-

This fill is the list-max-ziplist-size value passed in; the negative values mean:

-
  • -5: each quicklist node's ziplist may not exceed 64 KB (note: 1 KB => 1024 bytes)
  • -4: each quicklist node's ziplist may not exceed 32 KB
  • -3: each quicklist node's ziplist may not exceed 16 KB
  • -2: each quicklist node's ziplist may not exceed 8 KB (redis's default, hence quicklist->fill = -2 above)
  • -1: each quicklist node's ziplist may not exceed 4 KB
-

Compression

The list-compress-depth parameter configures compression. Wait, compression? Isn't each node already a compressed list (ziplist)? The Redis authors really did rack their brains for performance: the observation is that a list is usually accessed most frequently at its two ends, so the data in the middle can be compressed further, and this parameter states how many end nodes to leave alone

-
/* depth of end nodes not to compress;0=off */
  • 0: no compression at all (the default)
  • 1: leave one node at each end uncompressed
  • 2: leave two nodes at each end uncompressed
  • … and so on

Once a node is compressed, its ziplist becomes a quicklistLZF, which replaces the zl pointer. The LZF algorithm is used; the compressed field of quicklistLZF is again a flexible array member, and the compressed ziplist is stored into it whole.
-

Insertion

A quick walkthrough of how an element is pushed onto the list

-
/* Wrapper to allow argument-based switching between HEAD/TAIL pop */
-void quicklistPush(quicklist *quicklist, void *value, const size_t sz,
-                   int where) {
-    if (where == QUICKLIST_HEAD) {
-        quicklistPushHead(quicklist, value, sz);
-    } else if (where == QUICKLIST_TAIL) {
-        quicklistPushTail(quicklist, value, sz);
-    }
-}
-
-/* Add new entry to head node of quicklist.
- *
- * Returns 0 if used existing head.
- * Returns 1 if new head created. */
-int quicklistPushHead(quicklist *quicklist, void *value, size_t sz) {
-    quicklistNode *orig_head = quicklist->head;
-    if (likely(
-            _quicklistNodeAllowInsert(quicklist->head, quicklist->fill, sz))) {
-        quicklist->head->zl =
-            ziplistPush(quicklist->head->zl, value, sz, ZIPLIST_HEAD);
-        quicklistNodeUpdateSz(quicklist->head);
-    } else {
-        quicklistNode *node = quicklistCreateNode();
-        node->zl = ziplistPush(ziplistNew(), value, sz, ZIPLIST_HEAD);
-
-        quicklistNodeUpdateSz(node);
-        _quicklistInsertNodeBefore(quicklist, quicklist->head, node);
-    }
-    quicklist->count++;
-    quicklist->head->count++;
-    return (orig_head != quicklist->head);
-}
-
-/* Add new entry to tail node of quicklist.
- *
- * Returns 0 if used existing tail.
- * Returns 1 if new tail created. */
-int quicklistPushTail(quicklist *quicklist, void *value, size_t sz) {
-    quicklistNode *orig_tail = quicklist->tail;
-    if (likely(
-            _quicklistNodeAllowInsert(quicklist->tail, quicklist->fill, sz))) {
-        quicklist->tail->zl =
-            ziplistPush(quicklist->tail->zl, value, sz, ZIPLIST_TAIL);
-        quicklistNodeUpdateSz(quicklist->tail);
-    } else {
-        quicklistNode *node = quicklistCreateNode();
-        node->zl = ziplistPush(ziplistNew(), value, sz, ZIPLIST_TAIL);
-
-        quicklistNodeUpdateSz(node);
-        _quicklistInsertNodeAfter(quicklist, quicklist->tail, node);
-    }
-    quicklist->count++;
-    quicklist->tail->count++;
-    return (orig_tail != quicklist->tail);
-}
-
-/* Wrappers for node inserting around existing node. */
-REDIS_STATIC void _quicklistInsertNodeBefore(quicklist *quicklist,
-                                             quicklistNode *old_node,
-                                             quicklistNode *new_node) {
-    __quicklistInsertNode(quicklist, old_node, new_node, 0);
-}
-
-REDIS_STATIC void _quicklistInsertNodeAfter(quicklist *quicklist,
-                                            quicklistNode *old_node,
-                                            quicklistNode *new_node) {
-    __quicklistInsertNode(quicklist, old_node, new_node, 1);
-}
-
-/* Insert 'new_node' after 'old_node' if 'after' is 1.
- * Insert 'new_node' before 'old_node' if 'after' is 0.
- * Note: 'new_node' is *always* uncompressed, so if we assign it to
- *       head or tail, we do not need to uncompress it. */
-REDIS_STATIC void __quicklistInsertNode(quicklist *quicklist,
-                                        quicklistNode *old_node,
-                                        quicklistNode *new_node, int after) {
-    if (after) {
-        new_node->prev = old_node;
-        if (old_node) {
-            new_node->next = old_node->next;
-            if (old_node->next)
-                old_node->next->prev = new_node;
-            old_node->next = new_node;
-        }
-        if (quicklist->tail == old_node)
-            quicklist->tail = new_node;
-    } else {
-        new_node->next = old_node;
-        if (old_node) {
-            new_node->prev = old_node->prev;
-            if (old_node->prev)
-                old_node->prev->next = new_node;
-            old_node->prev = new_node;
-        }
-        if (quicklist->head == old_node)
-            quicklist->head = new_node;
-    }
-    /* If this insert creates the only element so far, initialize head/tail. */
-    if (quicklist->len == 0) {
-        quicklist->head = quicklist->tail = new_node;
-    }
-
-    if (old_node)
-        quicklistCompress(quicklist, old_node);
-
-    quicklist->len++;
-}
-

The first step picks the push function by end: quicklistPushHead for the head, quicklistPushTail for the tail. Taking quicklistPushHead as the example, it first checks whether the current head quicklistNode still allows another element into its ziplist; if so, the element is appended there, otherwise a new quicklistNode is created and spliced in via _quicklistInsertNodeBefore. The node-level insertion itself is ordinary doubly-linked-list splicing.

-]]>
- - Redis - 数据结构 - C - 源码 - Redis - - - redis - 数据结构 - 源码 - -
- - redis过期策略复习 - /2021/07/25/redis%E8%BF%87%E6%9C%9F%E7%AD%96%E7%95%A5%E5%A4%8D%E4%B9%A0/ - redis过期策略复习

I have written about the internals of redis expiration before; this post just records the practical concepts. redis expires keys through lazy expiration plus periodic cleanup. Lazy expiration is simple: whenever a key is accessed, redis checks in passing whether it has expired, and if so the key is not returned. The loophole is that keys never accessed again would sink to the bottom of the pool forever, so a periodic mechanism randomly samples keys from the pool of keys with an expiry set (expires). The official site describes the exact strategy

-
  1. Test 20 random keys from the set of keys with an associated expire.
  2. Delete all the keys found expired.
  3. If more than 25% of keys were expired, start again from step 1.
-

Grab 20 random keys from the pool, delete the expired ones among them, and if more than 25% of them had expired, do another round, thereby keeping the share of expired keys at (roughly) no more than 25%. The frequency of this periodic task can be set in the redis configuration file

-
# Redis calls an internal function to perform many background tasks, like
-# closing connections of clients in timeout, purging expired keys that are
-# never requested, and so forth.
-#
-# Not all tasks are performed with the same frequency, but Redis checks for
-# tasks to perform according to the specified "hz" value.
-#
-# By default "hz" is set to 10. Raising the value will use more CPU when
-# Redis is idle, but at the same time will make Redis more responsive when
-# there are many keys expiring at the same time, and timeouts may be
-# handled with more precision.
-#
-# The range is between 1 and 500, however a value over 100 is usually not
-# a good idea. Most users should use the default of 10 and raise this up to
-# 100 only in environments where very low latency is required.
-hz 10
- -

via the hz value, the number of runs per second; the default is 10, consistent with the usual meaning of hertz. If you are interested, see my earlier post redis系列介绍七-过期策略

-]]>
- - redis - - - redis - 应用 - 过期策略 - -
redis系列介绍八-淘汰策略 /2020/04/18/redis%E7%B3%BB%E5%88%97%E4%BB%8B%E7%BB%8D%E5%85%AB/ @@ -11232,22 +11194,97 @@ uint8_t LFULogIncr(uint8_t counter) { - rust学习笔记-所有权二 - /2021/04/18/rust%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0-%E6%89%80%E6%9C%89%E6%9D%83%E4%BA%8C/ - 这里需要说道函数和返回值了
可以看书上的这个例子

对于这种情况,当进入函数内部时,会把传入的变量的所有权转移进函数内部,如果最后还是要返回该变量,但是如果此时还要返回别的计算结果,就可能需要笨拙地使用元组

-

引用

此时我们就可以用引用来解决这个问题

-
fn main() {
-    let s1 = String::from("hello");
-    let len = calculate_length(&s1);
-
-    println!("The length of '{}' is {}", s1, len);
-}
-fn calculate_length(s: &String) -> usize {
-    s.len()
-}
-

The & symbol here is reference syntax: references allow you to use a value without taking ownership of it

Since a reference does not hold ownership of the value, the value it points to is not dropped when the reference leaves the current scope

-

可变引用

And when we try to modify the string through the reference

-
fn main() {
+    redis过期策略复习
+    /2021/07/25/redis%E8%BF%87%E6%9C%9F%E7%AD%96%E7%95%A5%E5%A4%8D%E4%B9%A0/
+    redis过期策略复习

I have written about the internals of redis expiration before; this post just records the practical concepts. redis expires keys through lazy expiration plus periodic cleanup. Lazy expiration is simple: whenever a key is accessed, redis checks in passing whether it has expired, and if so the key is not returned. The loophole is that keys never accessed again would sink to the bottom of the pool forever, so a periodic mechanism randomly samples keys from the pool of keys with an expiry set (expires). The official site describes the exact strategy

+
  1. Test 20 random keys from the set of keys with an associated expire.
  2. Delete all the keys found expired.
  3. If more than 25% of keys were expired, start again from step 1.
+

Grab 20 random keys from the pool, delete the expired ones among them, and if more than 25% of them had expired, do another round, thereby keeping the share of expired keys at (roughly) no more than 25%. The frequency of this periodic task can be set in the redis configuration file

+
# Redis calls an internal function to perform many background tasks, like
+# closing connections of clients in timeout, purging expired keys that are
+# never requested, and so forth.
+#
+# Not all tasks are performed with the same frequency, but Redis checks for
+# tasks to perform according to the specified "hz" value.
+#
+# By default "hz" is set to 10. Raising the value will use more CPU when
+# Redis is idle, but at the same time will make Redis more responsive when
+# there are many keys expiring at the same time, and timeouts may be
+# handled with more precision.
+#
+# The range is between 1 and 500, however a value over 100 is usually not
+# a good idea. Most users should use the default of 10 and raise this up to
+# 100 only in environments where very low latency is required.
+hz 10
+ +

via the hz value, the number of runs per second; the default is 10, consistent with the usual meaning of hertz. If you are interested, see my earlier post redis系列介绍七-过期策略
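A self-contained C toy that simulates the documented 20-sample/25% loop, to show why it bounds the share of expired-but-undeleted keys. The pool size, the initial 60% expired ratio and all names are invented for illustration; this is not redis source.

#include <stdio.h>
#include <stdlib.h>

#define POOL 10000        /* keys with a TTL set */
#define SAMPLE 20

static int expired[POOL]; /* 1 = logically expired, not yet deleted */
static int live_expired;  /* expired keys still present */

/* One cycle of the documented algorithm: sample 20 keys, delete the
 * expired ones, repeat while more than 25% of the sample was expired. */
static void expire_cycle(void) {
    int rounds = 0, hits;
    do {
        hits = 0;
        for (int i = 0; i < SAMPLE; i++) {
            int k = rand() % POOL;
            if (expired[k]) { expired[k] = 0; live_expired--; hits++; }
        }
        rounds++;
    } while (hits * 4 > SAMPLE); /* > 25% of the sample expired */
    printf("cycle ran %d rounds, expired keys remaining: %d\n",
           rounds, live_expired);
}

int main(void) {
    srand(42);
    for (int i = 0; i < POOL; i++)        /* start with roughly 60% expired */
        if (rand() % 100 < 60) { expired[i] = 1; live_expired++; }
    printf("initially expired: %d\n", live_expired);
    for (int c = 0; c < 5; c++) expire_cycle();
    return 0;
}

Each cycle keeps sampling until a round finds at most 25% expired keys, which is how the expected fraction of dead keys left in the TTL pool stays bounded at about a quarter.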

+]]>
+
+    rust学习笔记-所有权一
+    /2021/04/18/rust%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0/
+    I have recently been reading 《rust 权威指南》 (the Chinese edition of The Rust Programming Language). It is fairly demanding; several of its concepts differ substantially from anything I had used before
Compared with GC'd VM languages on one side and languages like C and C++ with manual memory release on the other, rust takes its own path, built on three rules

+
  • Every value in Rust has a variable that serves as its owner.
  • At any given time, a value has one and only one owner.
  • When the owner leaves its own scope, the value it holds is released.

Two points stand out here:

  • s becomes valid only after it enters scope
  • it keeps its validity until it leaves its scope
+

Now look at an example

+
let x = 5;
+let y = x;
+

Two behaviors are conceivable here. Many implementations would use something like copy-on-write, letting both names point at the same storage for the 5 and only copying for real once a modification happens; but given how convenient memory handling is for simple types like this, the value is simply copied outright.
For non-primitive types, however,

+
let s1 = String::from("hello");
+let s2 = s1;
+
+println!("{}, world!", s1);
+

one might imagine two possible memory layouts.
First look at the memory structure of a String (figure omitted: a stack-side record of pointer, length and capacity, plus the heap buffer holding the characters)

The first possibility is copying only the stack record, sharing one heap buffer (figure omitted)

the second is a deep copy of the heap data as well (figure omitted)

Let's try to compile it

and the compiler reports an error (screenshot omitted): in rust, the real nature of let y = x is a move, and after the assignment to y, x is no longer valid

That way, when both names leave the scope, the same memory region cannot be freed a second time. When a real copy is needed, the clone method is available

+
let s1 = String::from("hello");
+let s2 = s1.clone();
+
+println!("s1 = {}, s2 = {}", s1, s2);
+

A natural point of confusion: why do x and y behave differently from s1 and s2? Mainly because fixed-size primitives and variable-size types like String are allocated differently. Integers like x and y have a size known at compile time and can live directly on the stack, whereas String and other growable structures have sizes that cannot be determined at compile time and must be allocated on the heap
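For contrast, this is what the failure mode Rust rules out looks like in C, where nothing enforces a single owner. A deliberately broken, self-contained sketch; the second free is left commented out precisely because running it is undefined behavior.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    /* like `let s1 = String::from("hello")`: a heap buffer with one owner */
    char *s1 = strdup("hello");

    /* `let s2 = s1` in C terms: a shallow copy of the pointer.
     * Now two names claim ownership of one heap buffer. */
    char *s2 = s1;

    printf("%s %s\n", s1, s2);

    free(s1);
    /* free(s2);  <- double free: undefined behavior. Rust prevents this
     * statically by invalidating s1 at the point of the move. */
    return 0;
}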

+]]>
+ + 语言 + Rust + + + Rust + 所有权 + 内存分布 + 新语言 + +
+ + rust学习笔记-所有权二 + /2021/04/18/rust%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0-%E6%89%80%E6%9C%89%E6%9D%83%E4%BA%8C/ + 这里需要说道函数和返回值了
Take this example from the book (figure omitted)

In this situation, when control enters the function, ownership of the passed-in variable is moved into the function. If the variable must ultimately be returned to the caller while the function also needs to return some other computed result, you may be forced into clumsily returning a tuple

+

References

This is exactly the problem references solve

+
fn main() {
+    let s1 = String::from("hello");
+    let len = calculate_length(&s1);
+
+    println!("The length of '{}' is {}", s1, len);
+}
+fn calculate_length(s: &String) -> usize {
+    s.len()
+}
+

The & symbol here is reference syntax: references allow you to use a value without taking ownership of it

Since a reference does not hold ownership of the value, the value it points to is not dropped when the reference leaves the current scope
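The same discipline can be imitated, though not enforced, in C: a borrowed value is just a pointer the callee agrees not to free. A small self-contained sketch mirroring the calculate_length example above; unlike Rust, nothing stops a C callee from breaking the agreement.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* "Borrowing" in C terms: the function receives a pointer it does not own,
 * so it must not free it; the caller keeps ownership and frees exactly once. */
static size_t calculate_length(const char *s) {
    return strlen(s);
}

int main(void) {
    char *s1 = strdup("hello");
    size_t len = calculate_length(s1);              /* lend s1 out */
    printf("The length of '%s' is %zu\n", s1, len); /* s1 still usable */
    free(s1);                                       /* the one owner frees */
    return 0;
}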

+

Mutable references

And when we try to modify the string through the reference

+
fn main() {
     let s1 = String::from("hello");
     change(&s1);
 }
@@ -11318,6 +11355,30 @@ uint8_t LFULogIncr(uint8_t counter) {
         不可变引用
       
   
+  
+    spark-little-tips
+    /2017/03/28/spark-little-tips/
+    spark 的一些粗浅使用经验

工作中学习使用了一下Spark做数据分析,主要是用spark的python接口,首先是pyspark.SparkContext(appName=xxx),这是初始化一个Spark应用实例或者说会话,不能重复,
返回的实例句柄就可以调用textFile(path)读取文本文件,这里的文本文件可以是HDFS上的文本文件,也可以普通文本文件,但是需要在Spark的所有集群上都存在,否则会
读取失败,parallelize则可以将python生成的集合数据读取后转换成rdd(A Resilient Distributed Dataset (RDD),一种spark下的基本抽象数据集),基于这个RDD就可以做
数据的流式计算,例如map reduce,在Spark中可以非常方便地实现

+

简单的mapreduce word count示例

textFile = sc.parallelize([(1,1), (2,1), (3,1), (4,1), (5,1),(1,1), (2,1), (3,1), (4,1), (5,1)])
+data = textFile.reduceByKey(lambda x, y: x + y).collect()
+for _ in data:
+    print(_)
+ + +

Result

(3, 2)
+(1, 2)
+(4, 2)
+(2, 2)
+(5, 2)
+]]>
+ + data analysis + + + spark + python + +
rust学习笔记-所有权三之切片 /2021/05/16/rust%E5%AD%A6%E4%B9%A0%E7%AC%94%E8%AE%B0-%E6%89%80%E6%9C%89%E6%9D%83%E4%B8%89%E4%B9%8B%E5%88%87%E7%89%87/ @@ -11382,30 +11443,6 @@ uint8_t LFULogIncr(uint8_t counter) { 切片 - - spark-little-tips - /2017/03/28/spark-little-tips/ - spark 的一些粗浅使用经验

At work I picked up Spark for data analysis, mainly through spark's python interface. The entry point is pyspark.SparkContext(appName=xxx), which initializes a Spark application instance, or session, and must not be created twice;
the returned handle can then call textFile(path) to read a text file, which may sit on HDFS or be an ordinary file, though an ordinary file must exist on every node of the Spark cluster or the read will
fail. parallelize converts a collection produced in python into an rdd (A Resilient Distributed Dataset (RDD), spark's basic abstract dataset), and on top of that RDD you can run
streaming-style computation such as map reduce, which Spark makes very convenient to express

-

A simple mapreduce word-count-style example

textFile = sc.parallelize([(1,1), (2,1), (3,1), (4,1), (5,1),(1,1), (2,1), (3,1), (4,1), (5,1)])
-data = textFile.reduceByKey(lambda x, y: x + y).collect()
-for _ in data:
-    print(_)
- - -

Result

(3, 2)
-(1, 2)
-(4, 2)
-(2, 2)
-(5, 2)
-]]>
- - data analysis - - - spark - python - -
spring event 介绍 /2022/01/30/spring-event-%E4%BB%8B%E7%BB%8D/ @@ -11700,163 +11737,31 @@ b 1