
Multi-threaded Kafka consumer with manual offset commits: KafkaConsumer is not safe for multi-threaded access

How do I fix a multi-threaded Kafka consumer with manual offset commits that fails with "KafkaConsumer is not safe for multi-threaded access"?

I use an ArrayBlockingQueue to decouple the Kafka consumers from the sinks:

  1. Kafka is consumed by multiple threads, with one KafkaConsumer per thread;
  2. Each Kafka consumer manages its offsets manually;
  3. The Kafka consumer wraps the message value and an ack callback holding the offset into a Record object and puts it on an ArrayBlockingQueue;
  4. The sink takes Records from the ArrayBlockingQueue and processes them. Only after the sink has processed a Record successfully does it invoke the Record's callback, telling the Kafka consumer to commitSync(). (A minimal sketch of this handoff follows this list.)
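
To make the pattern concrete before the full code below, here is a minimal, self-contained sketch of the handoff. The class and variable names are illustrative, not from my project, and a plain Thread stands in for the Kafka consumer and sink threads:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Callable;

public class HandoffSketch {
    // A message value paired with the callback that acknowledges it.
    static class Record {
        final String value;
        final Callable<Boolean> ackCallback;
        Record(String value, Callable<Boolean> ackCallback) {
            this.value = value;
            this.ackCallback = ackCallback;
        }
    }

    public static void main(String[] args) throws Exception {
        BlockingQueue<Record> queue = new ArrayBlockingQueue<>(4);

        // Sink side: take, process, then ack.
        Thread sink = new Thread(() -> {
            try {
                while (true) {
                    Record r = queue.take();         // blocks until a record arrives
                    System.out.println("processed " + r.value);
                    r.ackCallback.call();            // ack only after successful processing
                }
            } catch (Exception e) {
                Thread.currentThread().interrupt();
            }
        });
        sink.setDaemon(true);
        sink.start();

        // Source side: in the real code this is the Kafka consumer thread.
        queue.put(new Record("message-0", () -> {
            System.out.println("ack message-0");     // stands in for the offset commit
            return true;
        }));
        Thread.sleep(100);                           // let the sink drain the queue
    }
}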

While running it, I hit an error that has puzzled me for several days, and I cannot tell which part is wrong:

11:44:10.794 [pool-2-thread-1] ERROR com.alibaba.kafka.source.KafkaConsumerRunner - [pool-2-thread-1] ConcurrentModificationException: KafkaConsumer is not safe for multi-threaded access
java.util.ConcurrentModificationException: KafkaConsumer is not safe for multi-threaded access
    at org.apache.kafka.clients.consumer.KafkaConsumer.acquire(KafkaConsumer.java:1824)
    at org.apache.kafka.clients.consumer.KafkaConsumer.acquireAndEnsureOpen(KafkaConsumer.java:1808)
    at org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1255)
    at com.alibaba.kafka.source.KafkaConsumerRunner$1.call(KafkaConsumerRunner.java:75)
    at com.alibaba.kafka.source.KafkaConsumerRunner$1.call(KafkaConsumerRunner.java:71)
    at com.alibaba.kafka.sink.Sink.run(Sink.java:25)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

Code

Queues.java

public class Queues {
    public static volatile BlockingQueue[] queues;

    /**
     * Create multiple queues.
     * @param count The number of queues to create.
     * @param capacity The capacity of each queue.
     */
    public static void createQueues(final int count, final int capacity) {
        Queues.queues = new BlockingQueue[count];
        for (int i = 0; i < count; ++i) {
            // Fair mode, so waiting producers/consumers are served in FIFO order.
            Queues.queues[i] = new ArrayBlockingQueue(capacity, true);
        }
    }
}

Record.java

@Builder
@Getter
public class Record {
    private final String value;
    private final Callable<Boolean> ackCallback;
}

Sink.java

public class Sink implements Runnable {
    private final int queueId;

    public Sink(int queueId) {
        this.queueId = queueId;
    }

    @Override
    public void run() {
        while (true) {
            try {
                Record record = (Record) Queues.queues[this.queueId].take();
                // (1) Handler: write to the database (simulated here by a short sleep)
                Thread.sleep(10);
                // (2) ACK: notify the Kafka consumer to commit the offset manually
                record.getAckCallback().call();
            } catch (Exception e) {
                e.printStackTrace();
                System.exit(1);
            }
        }
    }
}

KafkaConsumerRunner.java

@Slf4j
public class KafkaConsumerRunner implements Runnable {
    private final String topic;
    private final KafkaConsumer<String, String> consumer;

    public KafkaConsumerRunner(String topic, Properties properties) {
        this.topic = topic;
        this.consumer = new KafkaConsumer<>(properties);
    }

    @Override
    public void run() {
        // Offsets to commit
        Map<TopicPartition, OffsetAndMetadata> offsetsToCommit = new HashMap<>();
        // Subscribe to the topic
        this.consumer.subscribe(Collections.singletonList(this.topic));
        // Consume Kafka messages
        while (true) {
            try {
                ConsumerRecords<String, String> consumerRecords = this.consumer.poll(10000L);
                for (TopicPartition topicPartition : consumerRecords.partitions()) {
                    for (ConsumerRecord<String, String> consumerRecord : consumerRecords.records(topicPartition)) {
                        // (1) Update the [partition -> offset] map. Kafka's convention is to
                        // commit offset + 1, i.e. the offset of the next record to read.
                        offsetsToCommit.put(topicPartition, new OffsetAndMetadata(consumerRecord.offset() + 1));
                        // (2) Put the record into the queue
                        int queueId = topicPartition.partition() % Queues.queues.length;
                        Queues.queues[queueId].put(Record.builder()
                                .value(consumerRecord.value())
                                .ackCallback(this.getAckCallback(offsetsToCommit))
                                .build());
                    }
                }
            } catch (ConcurrentModificationException | InterruptedException e) {
                log.error("[{}] {}", Thread.currentThread().getName(), ExceptionUtils.getMessage(e), e);
                System.exit(1);
            }
        }
    }

    private Callable<Boolean> getAckCallback(Map<TopicPartition, OffsetAndMetadata> offsets) {
        // Each callback gets its own snapshot of the offsets map.
        return new AckCallback<Boolean>(this.consumer, new HashMap<>(offsets)) {
            @Override
            public Boolean call() throws Exception {
                try {
                    this.getConsumer().commitSync(this.getOffsets());
                    return true;
                } catch (Exception e) {
                    log.error(String.format("[%s] %s", Thread.currentThread().getName(), ExceptionUtils.getMessage(e)), e);
                    return false;
                }
            }
        };
    }

    @Getter
    @AllArgsConstructor
    abstract class AckCallback<T> implements Callable<T> {
        private final KafkaConsumer<String, String> consumer;
        private final Map<TopicPartition, OffsetAndMetadata> offsets;
    }
}

Application.java

public class Application {
    private static final String TOPIC = "YEWEI_TOPIC";
    private static final int QUEUE_COUNT = 1;
    private static final int QUEUE_CAPACITY = 4;

    private static void createQueues() {
        Queues.createQueues(QUEUE_COUNT, QUEUE_CAPACITY);
    }

    private static void startupSource() {
        if (null == System.getProperty("java.security.auth.login.config")) {
            System.setProperty("java.security.auth.login.config", "jaas.conf");
        }

        Properties properties = new Properties();
        properties.put(ConsumerConfig.GROUP_ID_CONFIG, "ConsumerGroup1");
        properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "cdh1:9092,cdh2:9092,cdh3:9092");
        properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, org.apache.kafka.common.serialization.StringDeserializer.class);
        properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, org.apache.kafka.common.serialization.StringDeserializer.class);
        properties.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 2);
        properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        properties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        properties.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_PLAINTEXT");
        properties.put(SaslConfigs.SASL_MECHANISM, "PLAIN");

        ExecutorService executorService = Executors.newFixedThreadPool(QUEUE_COUNT);
        for (int queueId = 0; queueId < QUEUE_COUNT; ++queueId) {
            executorService.execute(new KafkaConsumerRunner(TOPIC, properties));
        }
    }

    private static void startupSinks() {
        ExecutorService executorService = Executors.newFixedThreadPool(QUEUE_COUNT);
        for (int queueId = 0; queueId < QUEUE_COUNT; ++queueId) {
            executorService.execute(new Sink(queueId));
        }
    }

    public static void main(String[] args) {
        Application.createQueues();
        Application.startupSource();
        Application.startupSinks();
    }
}

Solution

I found the problem: the KafkaConsumer runs in its own thread, but it was also being called back from the sink thread. KafkaConsumer methods such as poll and commitSync may only be invoked from a single thread at a time; see org.apache.kafka.clients.consumer.KafkaConsumer#acquireAndEnsureOpen.
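
Internally, every public KafkaConsumer method first calls acquire(), which records the owning thread's id and fails fast if a different thread enters concurrently. The following is a simplified sketch of that check, adapted from the Kafka client source (field and method names follow the real implementation, but details vary by client version, and the wrapper class name here is illustrative):

import java.util.ConcurrentModificationException;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicLong;

public class SingleThreadGuard {
    private static final long NO_CURRENT_THREAD = -1L;
    private final AtomicLong currentThread = new AtomicLong(NO_CURRENT_THREAD);
    private final AtomicInteger refcount = new AtomicInteger(0);

    // Called on entry to poll(), commitSync(), etc.
    public void acquire() {
        long threadId = Thread.currentThread().getId();
        // If another thread already holds the consumer, throw instead of blocking.
        if (threadId != currentThread.get() && !currentThread.compareAndSet(NO_CURRENT_THREAD, threadId))
            throw new ConcurrentModificationException("KafkaConsumer is not safe for multi-threaded access");
        refcount.incrementAndGet();
    }

    // Called on exit; frees the consumer once all nested acquires are released.
    public void release() {
        if (refcount.decrementAndGet() == 0)
            currentThread.set(NO_CURRENT_THREAD);
    }
}

So when the sink thread invoked commitSync while the consumer thread was inside poll, the compareAndSet failed and the exception above was thrown.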

The change: the sink callback no longer touches the consumer object directly; instead it puts the ACK message on a LinkedTransferQueue. On every iteration of its poll loop, KafkaConsumerRunner drains the LinkedTransferQueue and commits the pending ACKs in batch:

@Slf4j
public class KafkaConsumerRunner implements Runnable {
    private final String topic;
    private final BlockingQueue<Map<TopicPartition, OffsetAndMetadata>> ackQueue;
    private final KafkaConsumer<String, String> consumer;

    public KafkaConsumerRunner(String topic, Properties properties) {
        this.topic = topic;
        this.ackQueue = new LinkedTransferQueue<>();
        this.consumer = new KafkaConsumer<>(properties);
    }

    @Override
    public void run() {
        // Subscribe to the topic
        this.consumer.subscribe(Collections.singletonList(this.topic));
        // Consume Kafka messages
        while (true) {
            // Commit pending ACKs here, on the consumer's own thread, where it is safe.
            while (!this.ackQueue.isEmpty()) {
                try {
                    Map<TopicPartition, OffsetAndMetadata> offsets = this.ackQueue.take();
                    this.consumer.commitSync(offsets);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }

            ...
        }
    }

    private Callable<Boolean> getAckCallback(Map<TopicPartition, OffsetAndMetadata> offsets) {
        return new AckCallback<Boolean>(new HashMap<>(offsets)) {
            @Override
            public Boolean call() throws Exception {
                try {
                    // Hand the offsets snapshot to the consumer thread instead of committing here.
                    ackQueue.put(this.getOffsets());
                    return true;
                } catch (Exception e) {
                    log.error(String.format("[%s] %s", Thread.currentThread().getName(), ExceptionUtils.getMessage(e)), e);
                    System.exit(1);
                    return false;
                }
            }
        };
    }

    ...
}
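
One possible refinement (my sketch, not part of the original fix): instead of committing once per queued ACK, drain everything that has accumulated and merge it, so each partition is committed once per loop iteration at its highest offset. This method would live inside the revised KafkaConsumerRunner and replace the ack-draining while loop above; it assumes ackQueue is declared as BlockingQueue<Map<TopicPartition, OffsetAndMetadata>> as in that class, plus java.util.List and java.util.ArrayList imports:

private void drainAndCommit() {
    List<Map<TopicPartition, OffsetAndMetadata>> pending = new ArrayList<>();
    this.ackQueue.drainTo(pending);                  // non-blocking bulk removal
    if (pending.isEmpty()) {
        return;
    }
    Map<TopicPartition, OffsetAndMetadata> merged = new HashMap<>();
    for (Map<TopicPartition, OffsetAndMetadata> offsets : pending) {
        offsets.forEach((tp, oam) -> merged.merge(tp, oam,
                (a, b) -> a.offset() >= b.offset() ? a : b));   // keep the later offset
    }
    this.consumer.commitSync(merged);                // one commit per poll iteration
}

This also avoids the blocking take() inside the isEmpty() loop and keeps commit traffic bounded no matter how many Records were acknowledged between polls.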
