
Elasticsearch 7.x fails to elect a master

2023-10-30 09:20:20

In Elasticsearch 7.x the election algorithm was reworked into a Raft-based implementation. The biggest departure from standard Raft is that a voter may cast more than one vote: if multiple masters emerge, the last one elected wins, which lets a master be chosen faster. The trade-off is that elections become more contentious, and in clusters with many nodes, candidates can keep preempting one another for a long time.

When a cluster is stuck in this state, nodes log messages like:

master not discovered or elected yet, an election requires at least XX nodes with ids from [] which is a quorum; discovery will continue using [] from hosts providers and [] from last-known cluster state; node term 14, last-accepted version 71 in term 5

or

failed to join{…}
CoordinationStateRejectedException: incoming term 4996 does not match current term

 

These errors, however, say little about the root cause: the discovered nodes already form a quorum, yet discovery just keeps running, which is puzzling. To dig further, turn the relevant loggers up to DEBUG:

{
    "persistent": {
        "logger.org.elasticsearch.cluster.service.MasterService": "DEBUG",
        "logger.org.elasticsearch.cluster.coordination": "DEBUG"
    }
}
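
For instance (assuming a node reachable at localhost:9200), the body above can be applied through the cluster settings API:

curl -X PUT "http://localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
    "persistent": {
        "logger.org.elasticsearch.cluster.service.MasterService": "DEBUG",
        "logger.org.elasticsearch.cluster.coordination": "DEBUG"
    }
}'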

Tracing the error message "which is a quorum; discovery will" through the code shows that it is only emitted when the node becomes a candidate. Searching the DEBUG logs for "coordinator becoming CANDIDATE" turns up the following:

[2021-02-17T10:47:07,331][DEBUG][o.e.c.c.Coordinator] [C02D36G1MD6R] joinLeaderInTerm: coordinator becoming CANDIDATE in term 2513 (was FOLLOWER, lastKnownLeader was [Optional[{C02D36G1MD6R}{HgU8AzfASnqv5ijcDe0vFg}{fw2V24P5Stmhl6pEBu3xxg}{127.0.0.1}{127.0.0.1:9314}{dilmrt}{ml.machine_memory=34359738368, ml.max_open_jobs=20, xpack.installed=true, transform.node=true}]])
[2021-02-17T10:47:07,368][DEBUG][o.e.d.PeerFinder] [C02D36G1MD6R] Peer{transportAddress=[::1]:9301, discoveryNode=null, peersRequestInFlight=false} connection failed
org.elasticsearch.transport.ConnectTransportException: [][[::1]:9301] connect_exception
    at org.elasticsearch.transport.TcpTransport$ChannelsConnectedListener.onFailure(TcpTransport.java:998) ~[elasticsearch-7.7.0.jar:7.7.0]
    at org.elasticsearch.action.ActionListener.lambda$toBiConsumer$2(ActionListener.java:198) ~[elasticsearch-7.7.0.jar:7.7.0]

 

Note that its previous state was FOLLOWER. In Elasticsearch's election state machine, a node only switches to FOLLOWER by joining a cluster, so a master must have been elected at some point. Searching for "coordinator becoming FOLLOWER" locates the leader it was following when it made that switch:

[2021-02-17T10:47:07,153][DEBUG][o.e.c.c.PublicationTransportHandler] [C02D36G1MD6R] received full cluster state version [93] with size [13793]
[2021-02-17T10:47:07,177][DEBUG][o.e.c.c.Coordinator] [C02D36G1MD6R] onFollowerCheckRequest: coordinator becoming FOLLOWER of [{C02D36G1MD6R}{HgU8AzfASnqv5ijcDe0vFg}{fw2V24P5Stmhl6pEBu3xxg}{127.0.0.1}{127.0.0.1:9314}{dilmrt}{ml.machine_memory=34359738368, ml.max_open_jobs=20, xpack.installed=true, transform.node=true}] in term 2511 (was CANDIDATE, lastKnownLeader was [Optional.empty])

 

Then check the leader node's own logs, which contain both:

elected-as-master (.. nodes joined)
and
failing [elected-as-master

 

[2021-02-17T11:33:40,305][WARN ][o.e.c.s.MasterService] [C02D36G1MD6R] failing [elected-as-master ([..] nodes joined)[.. elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_]]: failed to commit cluster state version [103]
org.elasticsearch.cluster.coordination.FailedToCommitClusterStateException: publication failed
at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication$4.onFailure(Coordinator.java:1431) ~[elasticsearch-7.7.0.jar:7.7.0]
at org.elasticsearch.action.ActionRunnable.onFailure(ActionRunnable.java:88) ~[elasticsearch-7.7.0.jar:7.7.0]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) ~[elasticsearch-7.7.0.jar:7.7.0]
at org.elasticsearch.common.util.concurrent.EsExecutors$DirectExecutorService.execute(EsExecutors.java:225) ~[elasticsearch-7.7.0.jar:7.7.0]
at org.elasticsearch.common.util.concurrent.ListenableFuture.notifyListener(ListenableFuture.java:106) ~[elasticsearch-7.7.0.jar:7.7.0]
at org.elasticsearch.common.util.concurrent.ListenableFuture.addListener(ListenableFuture.java:68) ~[elasticsearch-7.7.0.jar:7.7.0]
at org.elasticsearch.cluster.coordination.Coordinator$CoordinatorPublication.onCompletion(Coordinator.java:1354) ~[elasticsearch-7.7.0.jar:7.7.0]
at org.elasticsearch.cluster.coordination.Publication.onPossibleCompletion(Publication.java:125) [elasticsearch-7.7.0.jar:7.7.0]
at org.elasticsearch.cluster.coordination.Publication.cancel(Publication.java:89) [elasticsearch-7.7.0.jar:7.7.0]
at org.elasticsearch.cluster.coordination.Coordinator.cancelActivePublication(Coordinator.java:1129) ~[elasticsearch-7.7.0.jar:7.7.0]
at org.elasticsearch.cluster.coordination.Coordinator.becomeCandidate(Coordinator.java:544) ~[elasticsearch-7.7.0.jar:7.7.0]
at org.elasticsearch.cluster.coordination.Coordinator.joinLeaderInTerm(Coordinator.java:457) [elasticsearch-7.7.0.jar:7.7.0]
at org.elasticsearch.cluster.coordination.JoinHelper.lambda$new$2(JoinHelper.java:141) [elasticsearch-7.7.0.jar:7.7.0]

 

 

So the node was initially elected master, but then a RequestVote from another canvassing node arrived (joinLeaderInTerm is the handler for such election requests), it cancelled its cluster-state publication, and it switched back to candidate. Had the publication succeeded, the new master would have taken office.
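
A minimal, self-contained sketch of that sequence (my own simplification with assumed names, not the actual Elasticsearch classes):

class ElectedMaster {
    enum Mode { LEADER, CANDIDATE }

    Mode mode = Mode.LEADER;
    long currentTerm = 2511;
    boolean publicationInFlight = true;   // cluster state not yet committed

    // Stand-in for Coordinator.joinLeaderInTerm: a vote request carrying a
    // higher term makes the freshly elected master abandon its in-flight
    // publication and fall back to candidate, so the election never completes.
    void joinLeaderInTerm(long requestTerm) {
        if (requestTerm > currentTerm) {
            currentTerm = requestTerm;
            if (publicationInFlight) {
                publicationInFlight = false;  // -> FailedToCommitClusterStateException
            }
            mode = Mode.CANDIDATE;            // "coordinator becoming CANDIDATE"
        }
    }

    public static void main(String[] args) {
        ElectedMaster m = new ElectedMaster();
        m.joinLeaderInTerm(2513);  // a rival's RequestVote arrives mid-publication
        System.out.println(m.mode + " in term " + m.currentTerm);  // CANDIDATE in term 2513
    }
}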

Next, observe the cadence of each candidate's RequestVote rounds and whether any of them succeed:

 

grep -E "starting election with|elected-as-master" logs/my-debug.log |less

It turns out the node sometimes is not even elected master, never reaching the cluster-state publication step at all:

[2021-02-18T15:58:19,541][DEBUG][o.e.c.c.Coordinator] [C02D36G1MD6R] starting election with StartJoinRequest{term=10424, node={C02D36G1MD6R}{ahl6F9fBQ06WQVQQ3s5i5w}{7ERiDnWHRD-akA9aCUL9wA}{127.0.0.1}{127.0.0.1:9316}{dilmrt}{ml.machine_memory=34359738368, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}}
[2021-02-18T15:58:24,909][DEBUG][o.e.c.c.Coordinator] [C02D36G1MD6R] starting election with StartJoinRequest{term=10433, node={C02D36G1MD6R}{ahl6F9fBQ06WQVQQ3s5i5w}{7ERiDnWHRD-akA9aCUL9wA}{127.0.0.1}{127.0.0.1:9316}{dilmrt}{ml.machine_memory=34359738368, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}}
[2021-02-18T15:58:33,470][DEBUG][o.e.c.c.Coordinator] [C02D36G1MD6R] starting election with StartJoinRequest{term=10448, node={C02D36G1MD6R}{ahl6F9fBQ06WQVQQ3s5i5w}{7ERiDnWHRD-akA9aCUL9wA}{127.0.0.1}{127.0.0.1:9316}{dilmrt}{ml.machine_memory=34359738368, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}}

 

 

And the reason it never gets that far is that rival candidates' vote requests keep interrupting it:

 

[2021-02-18T15:53:27,194][DEBUG][o.e.c.c.CoordinationState] [C02D36G1MD6R] handleStartJoin: ignoring [StartJoinRequest{term=9854, node={C02D36G1MD6R}{m47W97RlSKirn4cEHabkQw}{Tgn9Ag5wS0eAwPRO0cw-bA}{127.0.0.1}{127.0.0.1:9314}{dilmrt}{ml.machine_memory=34359738368, ml.max_open_jobs=20, xpack.installed=true, transform.node=true}}] as term provided is not greater than current term [9855]
[2021-02-18T15:53:27,194][DEBUG][o.e.c.c.PreVoteCollector ] [C02D36G1MD6R] accpeting prevote for PreVoteRequest{sourceNode={C02D36G1MD6R}{BFhjJetNRoqViiREg8mPsg}{0AAm8XblTGiCR5oXqTtoMg}{127.0.0.1}{127.0.0.1:9319}{dilmrt}{ml.machine_memory=34359738368, ml.max_open_jobs=20, xpack.installed=true, transform.node=true}, currentTerm=9854}
[2021-02-18T15:53:27,194][DEBUG][o.e.c.c.PreVoteCollector ] [C02D36G1MD6R] accpeting prevote for PreVoteRequest{sourceNode={C02D36G1MD6R}{jFZFK70bQy6jJUtAdG58MQ}{zJ_Jmw5fSOujYUxUDsXBzw}{127.0.0.1}{127.0.0.1:9307}{dilmrt}{ml.machine_memory=34359738368, ml.max_open_jobs=20, xpack.installed=true, transform.node=true}, currentTerm=9852}
[2021-02-18T15:53:27,194][DEBUG][o.e.c.c.PreVoteCollector ] [C02D36G1MD6R] accpeting prevote for PreVoteRequest{sourceNode={C02D36G1MD6R}{J2P_QC8dSGGp5C6OJJP5nw}{CJqGAk5WQH6_VZlHRA5zOA}{127.0.0.1}{127.0.0.1:9304}{dilmrt}{ml.machine_memory=34359738368, ml.max_open_jobs=20, xpack.installed=true, transform.node=true}, currentTerm=9853}
[2021-02-18T15:53:27,195][DEBUG][o.e.c.c.CoordinationState] [C02D36G1MD6R] handleStartJoin: leaving term [9855] due to StartJoinRequest{term=9856, node={C02D36G1MD6R}{5TeQB34ARHG2tHCb2DR7QQ}{Yul-rl3iS4SUMy5Vos9g5w}{127.0.0.1}{127.0.0.1:9313}{dilmrt}{ml.machine_memory=34359738368, ml.max_open_jobs=20, xpack.installed=true, transform.node=true}}
    at org.elasticsearch.cluster.coordination.CoordinationState.handleStartJoin(CoordinationState.java:181) ~[elasticsearch-7.7.0.jar:7.7.0-SNAPSHOT]
[2021-02-18T15:53:27,340][TRACE][o.e.c.c.PreVoteCollector ] [C02D36G1MD6R] updating with preVoteResponse=PreVoteResponse{currentTerm=9856, lastAcceptedTerm=8597, lastAcceptedVersion=371}, leader=null

[2021-02-18T15:53:27,606][DEBUG][o.e.c.c.CoordinationState] [C02D36G1MD6R] handleStartJoin: ignoring [StartJoinRequest{term=9854, node={C02D36G1MD6R}{J2P_QC8dSGGp5C6OJJP5nw}{CJqGAk5WQH6_VZlHRA5zOA}{127.0.0.1}{127.0.0.1:9304}{dilmrt}{ml.machine_memory=34359738368, ml.max_open_jobs=20, xpack.installed=true, transform.node=true}}] as term provided is not greater than current term [9856]
[2021-02-18T15:53:27,633][DEBUG][o.e.c.c.Coordinator] [C02D36G1MD6R] starting election with StartJoinRequest{term=9858, node={C02D36G1MD6R}{ahl6F9fBQ06WQVQQ3s5i5w}{7ERiDnWHRD-akA9aCUL9wA}{127.0.0.1}{127.0.0.1:9316}{dilmrt}{ml.machine_memory=34359738368, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}}
[2021-02-18T15:53:27,632][DEBUG][o.e.c.c.PreVoteCollector ] [C02D36G1MD6R] accpeting prevote for PreVoteRequest{sourceNode={C02D36G1MD6R}{vY2rqLL9SbO3x_qg-nkJ9w}{Idvlwl9ySQqxAw43ANiRzQ}{127.0.0.1}{127.0.0.1:9310}{dilmrt}{…}, currentTerm=…}

 

 

In Elasticsearch's election algorithm, voters may vote multiple times and the last leader to emerge wins. This requires that (see the sketch after this list):

  • while a candidate is collecting votes, receiving another candidate's RequestVote clears the votes it has gathered so far
  • a leader that receives a RequestVote steps down to candidate
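
A small runnable sketch of the first rule (simplified, with assumed names; the real logic lives in CoordinationState.handleStartJoin):

import java.util.HashSet;
import java.util.Set;

class CandidateVotes {
    long currentTerm = 9855;
    final Set<String> joinVotes = new HashSet<>();

    // A StartJoinRequest with a higher term wipes all votes collected so far.
    void handleStartJoin(long term) {
        if (term > currentTerm) {
            currentTerm = term;  // "leaving term [9855] due to StartJoinRequest{term=9856...}"
            joinVotes.clear();   // progress in the old term is discarded
        }
    }

    void handleJoin(String nodeId, long term) {
        if (term == currentTerm) {
            joinVotes.add(nodeId);
        }
    }

    public static void main(String[] args) {
        CandidateVotes c = new CandidateVotes();
        c.handleJoin("node-1", 9855);
        c.handleJoin("node-2", 9855);
        c.handleStartJoin(9856);                 // a rival candidate preempts the election
        System.out.println(c.joinVotes.size());  // 0 -- the count starts over
    }
}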

How contentious elections get is governed by a few timeout-controlled election periods:

  • gracePeriod: a fixed bound on how long one election attempt lasts, configured by cluster.election.duration, default 500ms
  • maxTimeout: the maximum wait before an election, configured by cluster.election.max_timeout, default 10s
  • backoffTime: on retries after a failed election, the retry count is multiplied by backoffTime, configured by cluster.election.back_off_time, default 100ms

A candidate's election attempts run periodically; each period is cluster.election.duration plus a random value capped by the maximum wait time.

 

// retry counter, incremented by 1 on every attempt
final long thisAttempt = attempt.getAndIncrement();
// the maximum delay grows by cluster.election.back_off_time per attempt,
// capped by the cluster.election.max_timeout setting
final long maxDelayMillis = Math.min(maxTimeout.millis(), initialTimeout.millis() + thisAttempt * backoffTime.millis());
// the scheduling delay adds a random value on top of cluster.election.duration
final long delayMillis = toPositiveLongAtMost(random.nextLong(), maxDelayMillis) + gracePeriod.millis();
threadPool.scheduleUnlessShuttingDown(TimeValue.timeValueMillis(delayMillis), Names.GENERIC, runnable);

 

This delay starts out fairly small. Sampling two points shows the growth: over roughly 4 minutes, the election period grew from under 1 second to about 7.6 seconds, and the growth is linear.

[2021-02-18T00:13:48,869][DEBUG]scheduleNextElection{ delayMillis=658}
[2021-02-18T00:17:53,743][DEBUG]scheduleNextElection{ delayMillis=7597}
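
As a rough cross-check, here is a throwaway sketch (my own code, not from Elasticsearch) that plugs in the default settings named above (initial_timeout=100ms, back_off_time=100ms, max_timeout=10s, duration=500ms):

class ElectionDelayGrowth {
    public static void main(String[] args) {
        long initial = 100, backoff = 100, max = 10_000, grace = 500;
        for (int attempt = 0; attempt <= 80; attempt += 20) {
            // the cap grows linearly with the attempt count, as in the snippet above
            long maxDelay = Math.min(max, initial + attempt * backoff);
            // the random part is roughly uniform in [0, maxDelay], so ~maxDelay/2 on average
            System.out.printf("attempt %2d: delay in [%d, %d] ms, expected ~%d ms%n",
                    attempt, grace, maxDelay + grace, maxDelay / 2 + grace);
        }
    }
}

With these defaults, a delay of ~7.6s implies the cap had grown to around 7s, i.e. roughly 70 failed attempts, which fits the 4-minute window observed above.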

From this implementation it follows that increasing cluster.election.duration makes elections less contentious.

One question remains: what was PreVote doing, and why didn't it stop this? There are two situations:

  • A voter only considers the cluster to have a leader after receiving its first cluster state; only then do subsequent PreVotes return false. Under heavy contention, though, no node ever gets elected leader in the first place.
  • A voter did receive a first cluster state, but had already received another node's RequestVote before that, so its own term was higher and the cluster state was ignored for carrying a stale term, as the following log excerpt shows:
[2021-02-18T12:53:02,797][DEBUG][o.e.c.c.PreVoteCollector ] [C02D36G1MD6R] accpeting prevote for PreVoteRequest{sourceNode={C02D36G1MD6R}{Y80RYqNyRiK_rE38TbpZFw}{LFwjK-FlRHKiPnrsIGoYFQ}{127.0.0.1}{127.0.0.1:9317}{dilmrt}{ml.machine_memory=34359738368, ml.max_open_jobs=20, xpack.installed=true, transform.node=true}, currentTerm=8592}
[2021-02-18T12:53:03,464][DEBUG][o.e.c.c.PreVoteCollector ] [C02D36G1MD6R] accpeting prevote for PreVoteRequest{sourceNode={C02D36G1MD6R}{5TeQB34ARHG2tHCb2DR7QQ}{9u5fcoDBTZKMsPlqGKntVw}{127.0.0.1}{127.0.0.1:9309}{dilmrt}{ml.machine_memory=34359738368, ml.max_open_jobs=20, xpack.installed=true, transform.node=true}, currentTerm=8592}
[2021-02-18T12:53:03,470][DEBUG][o.e.c.c.CoordinationState] [C02D36G1MD6R] handleStartJoin: leaving term [8593] due to StartJoinRequest{term=8594, node={C02D36G1MD6R}{HgU8AzfASnqv5ijcDe0vFg}{WyVP-LtOTtiUtCo6ilgiIQ}{127.0.0.1}{…}{dilmrt}{ml.machine_memory=34359738368, ml.max_open_jobs=20, xpack.installed=true, transform.node=true}}
[2021-02-18T12:53:04,118][TRACE][o.e.c.c.PreVoteCollector ] [C02D36G1MD6R] updating with preVoteResponse=PreVoteResponse{currentTerm=8594, lastAcceptedTerm=8470, lastAcceptedVersion=366}, leader=null
[2021-02-18T12:53:06,380][DEBUG][o.e.c.c.PreVoteCollector ] [C02D36G1MD6R] accpeting prevote for PreVoteRequest{sourceNode={C02D36G1MD6R}{joJil3nEQGCbLeXQVrE2Mw}{zKHyGdq0Q3e0bnG9tt3NLQ}{127.0.0.1}{127.0.0.1:9304}{dilmrt}{ml.machine_memory=34359738368, ml.max_open_jobs=20, xpack.installed=true, transform.node=true}, currentTerm=8593}
[2021-02-18T12:53:06,600][DEBUG][o.e.c.c.PreVoteCollector ] [C02D36G1MD6R] accpeting prevote for PreVoteRequest{sourceNode={C02D36G1MD6R}{hDfYOFUSqyWqwm0TqyaCg}{gBGf5AAkQYervsfPBr6W9g}{127.0.0.1}{127.0.0.1:9308}{dilmrt}{ml.machine_memory=34359738368, ml.max_open_jobs=20, xpack.installed=true, transform.node=true}, currentTerm=8593}
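
A runnable sketch of the second failure mode (my own simplified model of the pre-vote gate, not Elasticsearch source):

class PreVoter {
    long currentTerm = 8593;
    boolean leaderKnown = false;  // set once a first cluster state is accepted

    // Pre-vote gate: once a leader is known, new elections are rejected.
    boolean handlePreVote() {
        return !leaderKnown;
    }

    // A cluster state only counts if its term is not stale.
    void receiveClusterState(long publishTerm) {
        if (publishTerm >= currentTerm) {
            leaderKnown = true;
        }
        // else: ignored for carrying a lower term, so the gate never closes
    }

    // A rival's StartJoinRequest bumps our term first.
    void handleStartJoin(long term) {
        if (term > currentTerm) {
            currentTerm = term;
        }
    }

    public static void main(String[] args) {
        PreVoter v = new PreVoter();
        v.handleStartJoin(8594);      // term raised to 8594 by a competing election
        v.receiveClusterState(8593);  // the elected leader publishes in the older term
        System.out.println(v.handlePreVote());  // true: pre-votes are still granted
    }
}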

 

To wrap up: although the analysis is involved, the fix is simple (if not perfect): deploy dedicated master-eligible nodes, and consider moderately increasing the cluster.election.duration setting.
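
As an illustration (the values are assumptions to adapt, not measurements from the analysis above):

# elasticsearch.yml on a dedicated master-eligible node (7.x legacy role
# settings; from 7.9 on, node.roles: [ master ] is the preferred form)
node.master: true
node.data: false
node.ingest: false

# Static setting, applied at node startup; the default is 500ms
cluster.election.duration: 2s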
