KRaft Usage Guide (translated)


KRaft (aka KIP-500) Mode Preview Release

Introduction

You can now run Apache Kafka without Apache ZooKeeper! We call this the Kafka Raft metadata mode, typically abbreviated as KRaft mode.
KRaft is intended to be pronounced like craft (as in craftsmanship). It is currently a preview release and should not be used in production,
but it is available for testing in the Kafka 3.1 release.

When a Kafka cluster is in KRaft mode, it does not store its metadata in ZooKeeper. In fact, you do not need to run ZooKeeper at all, because the metadata is stored in a KRaft quorum of controller nodes.

KRaft mode has many benefits, some obvious and some not so obvious. Clearly, it is nicer to manage and configure one service rather than two. You can also now run a single-process Kafka cluster.
Most important of all, KRaft mode is more scalable: we expect to be able to support many more topics and partitions in this mode.

Quickstart

Warning

KRaft mode in Kafka 3.1 is provided for testing only, not for production. We do not yet support upgrading existing ZooKeeper-based Kafka clusters to this mode.
There may be bugs, including serious ones. If you try the KRaft mode preview, you should assume that your data can be lost at any time.

Generate a Cluster ID

The first step is to generate an ID for your new cluster, using the kafka-storage tool:

$ ./bin/kafka-storage.sh random-uuid
xtzWWN4bTjitpL3kfd9s5g

Format Storage Directories

The next step is to format your storage directories. If you are running in single-node mode, you can do this with one command:

$ ./bin/kafka-storage.sh format -t <uuid> -c ./config/kraft/server.properties
Formatting /tmp/kraft-combined-logs

If you are using multiple nodes, you should run the format command on each node. Be sure to use the same cluster ID on every node.
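
For example, with the cluster ID generated above and two hypothetical hosts kafka1 and kafka2 (the host names are illustrative), you would run the same command on both, passing the same -t value:

# run on kafka1, then repeat on kafka2 with the same cluster ID
$ ./bin/kafka-storage.sh format -t xtzWWN4bTjitpL3kfd9s5g -c ./config/kraft/server.properties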

Start the Kafka Server

Finally, you are ready to start the Kafka server on each node.

$ ./bin/kafka-server-start.sh ./config/kraft/server.properties
[2021-02-26 15:37:11,071] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2021-02-26 15:37:11,294] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
[2021-02-26 15:37:11,466] INFO [Log partition=__cluster_metadata-0, dir=/tmp/kraft-combined-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2021-02-26 15:37:11,509] INFO [raft-expiration-reaper]: Starting (kafka.raft.TimingWheelExpirationService$ExpiredOperationReaper)
[2021-02-26 15:37:11,640] INFO [RaftManager nodeId=1] Completed transition to Unattached(epoch=0, voters=[1], electionTimeoutMs=9037) (org.apache.kafka.raft.QuorumState)
...

Just as with a ZooKeeper-based broker, you can connect to port 9092 (or whichever port you configured) to perform administrative operations or to produce and consume data.

$ ./bin/kafka-topics.sh --create --topic foo --partitions 1 --replication-factor 1 --bootstrap-server localhost:9092
Created topic foo.
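
As a quick sanity check, you can also produce and consume a record on the new topic with the standard console tools (a minimal sketch; the consumer keeps running until you interrupt it and should print the record back):

$ echo "hello kraft" | ./bin/kafka-console-producer.sh --topic foo --bootstrap-server localhost:9092
$ ./bin/kafka-console-consumer.sh --topic foo --from-beginning --bootstrap-server localhost:9092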

Deployment

Controller Servers

In KRaft mode, only a small group of specially selected servers can act as controllers (unlike the ZooKeeper-based mode, where any server can become the
controller). The specially selected controller servers participate in the metadata quorum. Each controller server is either active or a
hot standby for the current active controller.

You will typically pick 3 or 5 servers for this role, depending on factors such as cost and the number of concurrent failures your system should be able to withstand
without losing availability. Just as with ZooKeeper, you must keep a majority of the controllers alive in order to stay available. So with 3
controllers you can tolerate 1 failure, and with 5 controllers you can tolerate 2 failures.

Process Roles

Each Kafka server now has a new configuration key called process.roles, which can take the following values:

  • If process.roles is set to broker, the server acts as a broker in KRaft mode.
  • If process.roles is set to controller, the server acts as a controller in KRaft mode.
  • If process.roles is set to broker,controller, the server acts as both a broker and a controller in KRaft mode.
  • If process.roles is not set at all, the server is assumed to be in ZooKeeper mode. As mentioned earlier, you currently cannot switch back and forth between ZooKeeper mode and KRaft mode without reformatting.

Nodes that act as both brokers and controllers are called "combined" nodes. For simple use cases, combined nodes are easier to operate and let you avoid
some of the fixed memory overhead associated with the JVM. The main drawback is that the controller is less isolated from the rest of the system. For example, if activity on the broker causes an
out-of-memory condition, the controller part of the server is not isolated from that OOM condition.
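
For illustration, a combined node's configuration might look roughly like the config/kraft/server.properties sample shipped with Kafka (a sketch only; the ports follow the quickstart above, and additional listener and log-directory settings may be needed for a real deployment):

# one combined node: clients connect on 9092, quorum traffic uses 9093
process.roles=broker,controller
node.id=1
listeners=PLAINTEXT://:9092,CONTROLLER://:9093
controller.listener.names=CONTROLLER
controller.quorum.voters=1@localhost:9093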

Quorum Voters

All nodes in the system must set the controller.quorum.voters configuration. This identifies the quorum controller servers that should be used. All the controllers must be enumerated.
This is similar to how, when using ZooKeeper, the zookeeper.connect configuration must contain all the ZooKeeper servers. Unlike with the ZooKeeper config, however, controller.quorum.voters
also has IDs for each node. The format is id1@host1:port1,id2@host2:port2, etc.

So if you have 10 brokers and 3 controllers named controller1, controller2, controller3, you might have the following configuration on controller1:

process.roles=controller
node.id=1
listeners=CONTROLLER://controller1.example.com:9093
controller.quorum.voters=1@controller1.example.com:9093,2@controller2.example.com:9093,3@controller3.example.com:9093

Each broker and each controller must set controller.quorum.voters. Note that the node ID supplied in the controller.quorum.voters configuration must match that supplied to the server.
So on controller1, node.id must be set to 1, and so forth. Note that there is no requirement for controller IDs to start at 0 or 1. However, the easiest and least confusing way to allocate
node IDs is probably just to give each server a numeric ID, starting from 0.

Note that clients never need to configure controller.quorum.voters; only servers do.
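
For the brokers in the example above, a corresponding configuration sketch might look like the following (the node ID and host name are illustrative assumptions; depending on your listener and security setup, further settings such as listener.security.protocol.map may also be required):

process.roles=broker
node.id=4
listeners=PLAINTEXT://broker1.example.com:9092
controller.listener.names=CONTROLLER
controller.quorum.voters=1@controller1.example.com:9093,2@controller2.example.com:9093,3@controller3.example.com:9093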

Kafka Storage Tool

As described above in the QuickStart section, you must use the kafka-storage.sh tool to generate a cluster ID for your new cluster, and then run the format command on each node before starting the node.

This is different from how Kafka has operated in the past. Previously, Kafka would format blank storage directories automatically, and also generate a new cluster UUID automatically. One reason for the change
is that auto-formatting can sometimes obscure an error condition. For example, under UNIX, if a data directory can't be mounted, it may show up as blank. In this case, auto-formatting would be the wrong thing to do.

This is particularly important for the metadata log maintained by the controller servers. If two controllers out of three controllers were able to start with blank logs, a leader might be able to be elected with
nothing in the log, which would cause all metadata to be lost.

Missing Features

We do not currently support any kind of upgrade, either to or from KRaft mode. This is an important gap that we are working on.

The following Kafka features have not yet been fully implemented:

  • Support for certain security features: configuring a KRaft-based authorizer, setting up SCRAM, delegation tokens, and so on
    (note, however, that you can use the kafka.security.authorizer.AclAuthorizer authorizer with KRaft clusters even though
    it is ZooKeeper-based: just define authorizer.class.name and configure the authorizer as usual; see the sketch after this list).
  • Support for certain configurations, such as enabling unclean leader election by default or dynamically changing broker endpoints
  • Support for KIP-112 "JBOD" modes
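
A minimal sketch of the AclAuthorizer workaround mentioned above, assuming a separate ZooKeeper ensemble is still available to hold the ACLs (the ZooKeeper address is a placeholder):

# AclAuthorizer keeps its ACLs in ZooKeeper, so it still needs a ZooKeeper connection
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
zookeeper.connect=zk1.example.com:2181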

We would like to hear about any problems you run into that turn out to be caused by features not yet supported in the preview release. We will close these feature gaps incrementally on the trunk branch.

Debugging

If you run into problems, you may want to take a look at the metadata log.

kafka-dump-log

One way to view the metadata log is with the kafka-dump-log.sh tool, like so:

$ ./bin/kafka-dump-log.sh  --cluster-metadata-decoder --skip-record-metadata --files /tmp/kraft-combined-logs/__cluster_metadata-0/*.log
Dumping /tmp/kraft-combined-logs/__cluster_metadata-0/00000000000000000000.log
Starting offset: 0
baseOffset: 0 lastOffset: 0 count: 1 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 1 isTransactional: false isControl: true position: 0 CreateTime: 1614382631640 size: 89 magic: 2 compresscodec: NONE crc: 1438115474 isvalid: true

baseOffset: 1 lastOffset: 1 count: 1 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 1 isTransactional: false isControl: false position: 89 CreateTime: 1614382632329 size: 137 magic: 2 compresscodec: NONE crc: 1095855865 isvalid: true
 payload: {"type":"REGISTER_BROKER_RECORD","version":0,"data":{"brokerId":1,"incarnationId":"P3UFsWoNR-erL9PK98YLsA","brokerEpoch":0,"endPoints":[{"name":"PLAINTEXT","host":"localhost","port":9092,"securityProtocol":0}],"features":[],"rack":null}}
baseOffset: 2 lastOffset: 2 count: 1 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 1 isTransactional: false isControl: false position: 226 CreateTime: 1614382632453 size: 83 magic: 2 compresscodec: NONE crc: 455187130 isvalid: true
 payload: {"type":"UNFENCE_BROKER_RECORD","version":0,"data":{"id":1,"epoch":0}}
baseOffset: 3 lastOffset: 3 count: 1 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 1 isTransactional: false isControl: false position: 309 CreateTime: 1614382634484 size: 83 magic: 2 compresscodec: NONE crc: 4055692847 isvalid: true
 payload: {"type":"FENCE_BROKER_RECORD","version":0,"data":{"id":1,"epoch":0}}
baseOffset: 4 lastOffset: 4 count: 1 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 2 isTransactional: false isControl: true position: 392 CreateTime: 1614382671857 size: 89 magic: 2 compresscodec: NONE crc: 1318571838 isvalid: true

baseOffset: 5 lastOffset: 5 count: 1 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 2 isTransactional: false isControl: false position: 481 CreateTime: 1614382672440 size: 137 magic: 2 compresscodec: NONE crc: 841144615 isvalid: true
 payload: {"type":"REGISTER_BROKER_RECORD","version":0,"data":{"brokerId":1,"incarnationId":"RXRJu7cnScKRZOnWQGs86g","brokerEpoch":4,"endPoints":[{"name":"PLAINTEXT","host":"localhost","port":9092,"securityProtocol":0}],"features":[],"rack":null}}
baseOffset: 6 lastOffset: 6 count: 1 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 2 isTransactional: false isControl: false position: 618 CreateTime: 1614382672544 size: 83 magic: 2 compresscodec: NONE crc: 4155905922 isvalid: true
 payload: {"type":"UNFENCE_BROKER_RECORD","version":0,"data":{"id":1,"epoch":4}}
baseOffset: 7 lastOffset: 8 count: 2 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 2 isTransactional: false isControl: false position: 701 CreateTime: 1614382712158 size: 159 magic: 2 compresscodec: NONE crc: 3726758683 isvalid: true
 payload: {"type":"TOPIC_RECORD","version":0,"data":{"name":"foo","topicId":"5zoAlv-xEh9xRANKXt1Lbg"}}
 payload: {"type":"PARTITION_RECORD","version":0,"data":{"partitionId":0,"topicId":"5zoAlv-xEh9xRANKXt1Lbg","replicas":[1],"isr":[1],"removingReplicas":null,"addingReplicas":null,"leader":1,"leaderEpoch":0,"partitionEpoch":0}}

The Metadata Shell

Another tool for examining the metadata log is the Kafka metadata shell. Just like the ZooKeeper shell, it lets you inspect the cluster's metadata.

$ ./bin/kafka-metadata-shell.sh  --snapshot /tmp/kraft-combined-logs/__cluster_metadata-0/00000000000000000000.log
>> ls /
brokers  local  metadataQuorum  topicIds  topics
>> ls /topics
foo
>> cat /topics/foo/0/data
{
  "partitionId" : 0,
  "topicId" : "5zoAlv-xEh9xRANKXt1Lbg",
  "replicas" : [ 1 ],
  "isr" : [ 1 ],
  "removingReplicas" : null,
  "addingReplicas" : null,
  "leader" : 1,
  "leaderEpoch" : 0,
  "partitionEpoch" : 0
}
>> exit

Source: https://www.cnblogs.com/jiangdewen/p/15924340.html
