Class DeadLetterPublishingRecoverer

    • Field Summary

      Fields 
      Modifier and Type Field Description
      protected org.springframework.core.log.LogAccessor logger  
    • Constructor Summary

      Constructors 
      Constructor Description
      DeadLetterPublishingRecoverer​(java.util.function.Function<org.apache.kafka.clients.producer.ProducerRecord<?,​?>,​KafkaOperations<?,​?>> templateResolver, boolean transactional, java.util.function.BiFunction<org.apache.kafka.clients.consumer.ConsumerRecord<?,​?>,​java.lang.Exception,​org.apache.kafka.common.TopicPartition> destinationResolver)
Create an instance with a template resolving function that receives the failed record and the exception and returns a KafkaOperations, and a flag indicating whether publishing from this instance will be transactional.
      DeadLetterPublishingRecoverer​(java.util.Map<java.lang.Class<?>,​KafkaOperations<? extends java.lang.Object,​? extends java.lang.Object>> templates)
      Create an instance with the provided templates and a default destination resolving function that returns a TopicPartition based on the original topic (appended with ".DLT") from the failed record, and the same partition as the failed record.
      DeadLetterPublishingRecoverer​(java.util.Map<java.lang.Class<?>,​KafkaOperations<? extends java.lang.Object,​? extends java.lang.Object>> templates, java.util.function.BiFunction<org.apache.kafka.clients.consumer.ConsumerRecord<?,​?>,​java.lang.Exception,​org.apache.kafka.common.TopicPartition> destinationResolver)
      Create an instance with the provided templates and destination resolving function, that receives the failed consumer record and the exception and returns a TopicPartition.
      DeadLetterPublishingRecoverer​(KafkaOperations<? extends java.lang.Object,​? extends java.lang.Object> template)
      Create an instance with the provided template and a default destination resolving function that returns a TopicPartition based on the original topic (appended with ".DLT") from the failed record, and the same partition as the failed record.
      DeadLetterPublishingRecoverer​(KafkaOperations<? extends java.lang.Object,​? extends java.lang.Object> template, java.util.function.BiFunction<org.apache.kafka.clients.consumer.ConsumerRecord<?,​?>,​java.lang.Exception,​org.apache.kafka.common.TopicPartition> destinationResolver)
      Create an instance with the provided template and destination resolving function, that receives the failed consumer record and the exception and returns a TopicPartition.
    • Field Detail

      • logger

        protected final org.springframework.core.log.LogAccessor logger
    • Constructor Detail

      • DeadLetterPublishingRecoverer

        public DeadLetterPublishingRecoverer​(KafkaOperations<? extends java.lang.Object,​? extends java.lang.Object> template)
        Create an instance with the provided template and a default destination resolving function that returns a TopicPartition based on the original topic (appended with ".DLT") from the failed record, and the same partition as the failed record. Therefore the dead-letter topic must have at least as many partitions as the original topic.
        Parameters:
        template - the KafkaOperations to use for publishing.
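        The default destination resolution described above can be sketched in plain Java. The `TopicPartition` record below is an illustrative stand-in for `org.apache.kafka.common.TopicPartition`, not the actual spring-kafka internals:

```java
import java.util.function.BiFunction;

public class DefaultDltResolution {

    // Illustrative stand-in for org.apache.kafka.common.TopicPartition.
    record TopicPartition(String topic, int partition) { }

    // Sketch of the documented default: original topic + ".DLT", same partition.
    static final BiFunction<String, Integer, TopicPartition> DEFAULT_RESOLVER =
            (failedTopic, failedPartition) ->
                    new TopicPartition(failedTopic + ".DLT", failedPartition);

    public static void main(String[] args) {
        TopicPartition dlt = DEFAULT_RESOLVER.apply("orders", 3);
        System.out.println(dlt.topic() + ":" + dlt.partition()); // orders.DLT:3
    }
}
```

        Because the partition is carried over unchanged, the dead-letter topic needs at least as many partitions as the original topic, as the note above states.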
      • DeadLetterPublishingRecoverer

        public DeadLetterPublishingRecoverer​(KafkaOperations<? extends java.lang.Object,​? extends java.lang.Object> template,
                                             java.util.function.BiFunction<org.apache.kafka.clients.consumer.ConsumerRecord<?,​?>,​java.lang.Exception,​org.apache.kafka.common.TopicPartition> destinationResolver)
        Create an instance with the provided template and destination resolving function, that receives the failed consumer record and the exception and returns a TopicPartition. If the partition in the TopicPartition is less than 0, no partition is set when publishing to the topic.
        Parameters:
        template - the KafkaOperations to use for publishing.
        destinationResolver - the resolving function.
      • DeadLetterPublishingRecoverer

        public DeadLetterPublishingRecoverer​(java.util.Map<java.lang.Class<?>,​KafkaOperations<? extends java.lang.Object,​? extends java.lang.Object>> templates)
Create an instance with the provided templates and a default destination resolving function that returns a TopicPartition based on the original topic (appended with ".DLT") from the failed record, and the same partition as the failed record. Therefore the dead-letter topic must have at least as many partitions as the original topic. The template map's keys are classes and its values are the corresponding templates to use for objects (producer record values) of that type. A LinkedHashMap is recommended when there is more than one template, to ensure the map is traversed in order. To send records with a null value, add a template with the Void class as a key; otherwise the first template from the map values iterator will be used.
        Parameters:
templates - the KafkaOperations to use for publishing.
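        The template selection rule described above can be sketched in plain Java. Strings stand in for the KafkaOperations values here; falling back to the first template for an unmatched non-null value type is an assumption for the sketch, the documented fallback only covers null values without a Void key:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TemplateSelection {

    // Sketch: pick the first entry whose key class matches the record value's
    // type; for a null value, prefer a Void key, else the first template.
    static String selectTemplate(Map<Class<?>, String> templates, Object value) {
        if (value == null) {
            String forNull = templates.get(Void.class);
            return forNull != null ? forNull : templates.values().iterator().next();
        }
        for (Map.Entry<Class<?>, String> entry : templates.entrySet()) {
            if (entry.getKey().isAssignableFrom(value.getClass())) {
                return entry.getValue();
            }
        }
        // Assumed fallback for value types with no matching key.
        return templates.values().iterator().next();
    }

    public static void main(String[] args) {
        // LinkedHashMap, as recommended, so iteration order is deterministic.
        Map<Class<?>, String> templates = new LinkedHashMap<>();
        templates.put(String.class, "stringTemplate");
        templates.put(byte[].class, "bytesTemplate");
        templates.put(Void.class, "nullValueTemplate");
        System.out.println(selectTemplate(templates, "payload"));   // stringTemplate
        System.out.println(selectTemplate(templates, new byte[0])); // bytesTemplate
        System.out.println(selectTemplate(templates, null));        // nullValueTemplate
    }
}
```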
      • DeadLetterPublishingRecoverer

        public DeadLetterPublishingRecoverer​(java.util.Map<java.lang.Class<?>,​KafkaOperations<? extends java.lang.Object,​? extends java.lang.Object>> templates,
                                             java.util.function.BiFunction<org.apache.kafka.clients.consumer.ConsumerRecord<?,​?>,​java.lang.Exception,​org.apache.kafka.common.TopicPartition> destinationResolver)
Create an instance with the provided templates and destination resolving function, that receives the failed consumer record and the exception and returns a TopicPartition. If the partition in the TopicPartition is less than 0, no partition is set when publishing to the topic. The template map's keys are classes and its values are the corresponding templates to use for objects (producer record values) of that type. A LinkedHashMap is recommended when there is more than one template, to ensure the map is traversed in order. To send records with a null value, add a template with the Void class as a key; otherwise the first template from the map values iterator will be used.
        Parameters:
templates - the KafkaOperations to use for publishing.
        destinationResolver - the resolving function.
      • DeadLetterPublishingRecoverer

        public DeadLetterPublishingRecoverer​(java.util.function.Function<org.apache.kafka.clients.producer.ProducerRecord<?,​?>,​KafkaOperations<?,​?>> templateResolver,
                                             boolean transactional,
                                             java.util.function.BiFunction<org.apache.kafka.clients.consumer.ConsumerRecord<?,​?>,​java.lang.Exception,​org.apache.kafka.common.TopicPartition> destinationResolver)
Create an instance with a template resolving function that receives the failed record and the exception and returns a KafkaOperations, and a flag indicating whether publishing from this instance will be transactional. Also receives a destination resolving function that works similarly but returns a TopicPartition instead. If the partition in the TopicPartition is less than 0, no partition is set when publishing to the topic.
        Parameters:
templateResolver - the function that resolves the KafkaOperations to use for publishing.
transactional - whether or not publishing by this instance should be transactional.
        destinationResolver - the resolving function.
        Since:
        2.7
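        A per-record template resolver can be sketched in plain Java. The `OutRecord` record stands in for the outgoing `ProducerRecord`, strings stand in for the KafkaOperations templates, and routing by topic prefix is an invented example, not spring-kafka behavior:

```java
import java.util.function.Function;

public class PerRecordTemplateResolver {

    // Illustrative stand-in for ProducerRecord; only the topic matters here.
    record OutRecord(String topic) { }

    public static void main(String[] args) {
        // Sketch of a template-resolving function: route records destined for
        // hypothetical "audit." topics to a different (stand-in) template.
        Function<OutRecord, String> templateResolver = rec ->
                rec.topic().startsWith("audit.") ? "auditTemplate" : "defaultTemplate";

        System.out.println(templateResolver.apply(new OutRecord("audit.orders.DLT"))); // auditTemplate
        System.out.println(templateResolver.apply(new OutRecord("orders.DLT")));       // defaultTemplate
    }
}
```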
    • Method Detail

      • setRetainExceptionHeader

        public void setRetainExceptionHeader​(boolean retainExceptionHeader)
        Set to true to retain a Java serialized DeserializationException header. By default, such headers are removed from the published record, unless both key and value deserialization exceptions occur, in which case, the DLT_* headers are created from the value exception and the key exception header is retained.
        Parameters:
retainExceptionHeader - true to retain the header.
        Since:
        2.5
      • setHeadersFunction

        public void setHeadersFunction​(java.util.function.BiFunction<org.apache.kafka.clients.consumer.ConsumerRecord<?,​?>,​java.lang.Exception,​org.apache.kafka.common.header.Headers> headersFunction)
        Set a function which will be called to obtain additional headers to add to the published record. If a Header returned is an instance of DeadLetterPublishingRecoverer.SingleRecordHeader, then that header will replace any existing header of that name, rather than being appended as a new value.
        Parameters:
        headersFunction - the headers function.
        Since:
        2.5.4
        See Also:
        addHeadersFunction(BiFunction)
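        A headers function can be sketched in plain Java. The real signature takes a ConsumerRecord and the Exception and returns Kafka Headers; here a topic string and a Map stand in for both, and the header names are invented for illustration:

```java
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.BiFunction;

public class ExtraHeadersFunction {

    // Sketch of a headers function adding application headers to the
    // published record (hypothetical header names).
    static final BiFunction<String, Exception, Map<String, byte[]>> HEADERS_FUNCTION =
            (failedTopic, exception) -> {
                Map<String, byte[]> headers = new LinkedHashMap<>();
                headers.put("x-failed-topic",
                        failedTopic.getBytes(StandardCharsets.UTF_8));
                headers.put("x-error-class",
                        exception.getClass().getName().getBytes(StandardCharsets.UTF_8));
                return headers;
            };

    public static void main(String[] args) {
        Map<String, byte[]> extra =
                HEADERS_FUNCTION.apply("orders", new IllegalStateException("boom"));
        System.out.println(new String(extra.get("x-error-class"),
                StandardCharsets.UTF_8)); // java.lang.IllegalStateException
    }
}
```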
      • setVerifyPartition

        public void setVerifyPartition​(boolean verifyPartition)
        Set to false to disable partition verification. When true, verify that the partition returned by the resolver actually exists. If not, set the ProducerRecord.partition() to null, allowing the producer to determine the destination partition.
        Parameters:
        verifyPartition - false to disable.
        Since:
        2.7
        See Also:
        setPartitionInfoTimeout(Duration)
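        The verification described above can be sketched in plain Java: if the resolved partition does not exist on the destination topic, the partition is dropped (null) so the producer picks one. In the real class the partition count would come from partition metadata looked up within the configured timeout:

```java
public class PartitionVerification {

    // Sketch of the documented check: keep the resolved partition only if it
    // exists on the destination topic, otherwise return null.
    static Integer verifiedPartition(int resolvedPartition, int partitionCount) {
        return resolvedPartition >= 0 && resolvedPartition < partitionCount
                ? resolvedPartition
                : null;
    }

    public static void main(String[] args) {
        System.out.println(verifiedPartition(3, 10)); // 3 (partition exists)
        System.out.println(verifiedPartition(7, 5));  // null (producer decides)
    }
}
```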
      • setPartitionInfoTimeout

        public void setPartitionInfoTimeout​(java.time.Duration partitionInfoTimeout)
        Time to wait for partition information when verifying. Default is 5 seconds.
        Parameters:
        partitionInfoTimeout - the timeout.
        Since:
        2.7
        See Also:
        setVerifyPartition(boolean)
      • setReplaceOriginalHeaders

        @Deprecated
        public void setReplaceOriginalHeaders​(boolean replaceOriginalHeaders)
Deprecated, in favor of setAppendOriginalHeaders(boolean).
        Set to false if you don't want to append the current "original" headers (topic, partition etc.) if they are already present. When false, only the first "original" headers are retained.
        Parameters:
        replaceOriginalHeaders - set to false not to replace.
        Since:
        2.7
      • setAppendOriginalHeaders

        public void setAppendOriginalHeaders​(boolean appendOriginalHeaders)
        Set to false if you don't want to append the current "original" headers (topic, partition etc.) if they are already present. When false, only the first "original" headers are retained.
        Parameters:
appendOriginalHeaders - set to false not to append.
        Since:
        2.7.9
      • setThrowIfNoDestinationReturned

        public void setThrowIfNoDestinationReturned​(boolean throwIfNoDestinationReturned)
        Set to true to throw an exception if the destination resolver function returns a null TopicPartition.
        Parameters:
        throwIfNoDestinationReturned - true to enable.
        Since:
        2.7
      • setFailIfSendResultIsError

        public void setFailIfSendResultIsError​(boolean failIfSendResultIsError)
Set to true to wait for the send result and throw an exception if it fails. It will wait up to the time specified by setWaitForSendResultTimeout(Duration) for the result.
        Parameters:
        failIfSendResultIsError - true to enable.
        Since:
        2.7
        See Also:
        setWaitForSendResultTimeout(Duration)
      • isFailIfSendResultIsError

        protected boolean isFailIfSendResultIsError()
If true, wait for the send result and throw an exception if it fails, waiting up to the time specified by setWaitForSendResultTimeout(Duration).
        Returns:
        true to wait.
        Since:
        2.7.14
        See Also:
        setWaitForSendResultTimeout(Duration)
      • setTimeoutBuffer

        public void setTimeoutBuffer​(long buffer)
        Set the number of milliseconds to add to the producer configuration delivery.timeout.ms property to avoid timing out before the Kafka producer. Default 5000.
        Parameters:
        buffer - the buffer.
        Since:
        2.7
        See Also:
        setWaitForSendResultTimeout(Duration)
      • getTimeoutBuffer

        protected long getTimeoutBuffer()
        The number of milliseconds to add to the producer configuration delivery.timeout.ms property to avoid timing out before the Kafka producer.
        Returns:
        the buffer.
        Since:
        2.7.14
      • setStripPreviousExceptionHeaders

        public void setStripPreviousExceptionHeaders​(boolean stripPreviousExceptionHeaders)
Set to false to retain previous exception headers as well as headers for the current exception. Default is true, which means only the current headers are retained; setting it to false can cause record size to grow when a record is republished many times.
        Parameters:
        stripPreviousExceptionHeaders - false to retain all.
        Since:
        2.7.9
      • setSkipSameTopicFatalExceptions

        public void setSkipSameTopicFatalExceptions​(boolean skipSameTopicFatalExceptions)
Set to false if you want to forward the record to the same topic even when the exception is considered fatal by this class's classification, e.g. to handle this scenario in a different layer.
        Parameters:
        skipSameTopicFatalExceptions - false to forward in this scenario.
      • isTransactional

        protected boolean isTransactional()
        True if publishing should run in a transaction.
        Returns:
        true for transactional.
        Since:
        2.7.14
      • addHeadersFunction

        public void addHeadersFunction​(java.util.function.BiFunction<org.apache.kafka.clients.consumer.ConsumerRecord<?,​?>,​java.lang.Exception,​org.apache.kafka.common.header.Headers> headersFunction)
        Add a function which will be called to obtain additional headers to add to the published record. Functions are called in the order that they are added, and after any function passed into setHeadersFunction(BiFunction). If a Header returned is an instance of DeadLetterPublishingRecoverer.SingleRecordHeader, then that header will replace any existing header of that name, rather than being appended as a new value.
        Parameters:
        headersFunction - the headers function.
        Since:
        2.8.4
        See Also:
        setHeadersFunction(BiFunction)
      • accept

        public void accept​(org.apache.kafka.clients.consumer.ConsumerRecord<?,​?> record,
                           @Nullable
                           org.apache.kafka.clients.consumer.Consumer<?,​?> consumer,
                           java.lang.Exception exception)
        Description copied from interface: ConsumerAwareRecordRecoverer
        Recover the record.
        Specified by:
        accept in interface ConsumerAwareRecordRecoverer
        Parameters:
        record - the record.
        consumer - the consumer.
        exception - the exception.
      • send

        protected void send​(org.apache.kafka.clients.producer.ProducerRecord<java.lang.Object,​java.lang.Object> outRecord,
                            KafkaOperations<java.lang.Object,​java.lang.Object> kafkaTemplate,
                            org.apache.kafka.clients.consumer.ConsumerRecord<?,​?> inRecord)
        Send the record.
        Parameters:
        outRecord - the record.
        kafkaTemplate - the template.
        inRecord - the consumer record.
        Since:
        2.7
      • createProducerRecord

        protected org.apache.kafka.clients.producer.ProducerRecord<java.lang.Object,​java.lang.Object> createProducerRecord​(org.apache.kafka.clients.consumer.ConsumerRecord<?,​?> record,
                                                                                                                                 org.apache.kafka.common.TopicPartition topicPartition,
                                                                                                                                 org.apache.kafka.common.header.Headers headers,
                                                                                                                                 @Nullable
                                                                                                                                 byte[] key,
                                                                                                                                 @Nullable
                                                                                                                                 byte[] value)
        Subclasses can override this method to customize the producer record to send to the DLQ. The default implementation simply copies the key and value from the consumer record and adds the headers. The timestamp is not set (the original timestamp is in one of the headers). IMPORTANT: if the partition in the TopicPartition is less than 0, it must be set to null in the ProducerRecord.
        Parameters:
        record - the failed record
        topicPartition - the TopicPartition returned by the destination resolver.
        headers - the headers - original record headers plus DLT headers.
        key - the key to use instead of the consumer record key.
        value - the value to use instead of the consumer record value.
        Returns:
        the producer record to send.
        See Also:
        KafkaHeaders
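        The IMPORTANT note above (a negative resolved partition must become null in the ProducerRecord) can be sketched in plain Java. The `OutRecord` record is an illustrative stand-in for `ProducerRecord`; headers and timestamp handling are omitted:

```java
public class DltProducerRecordSketch {

    // Illustrative stand-in for ProducerRecord; a null partition lets the
    // producer choose the destination partition.
    record OutRecord(String topic, Integer partition, byte[] key, byte[] value) { }

    // Sketch of the documented default: copy key and value, set no timestamp,
    // and map a negative resolved partition to null.
    static OutRecord createProducerRecord(String topic, int resolvedPartition,
            byte[] key, byte[] value) {
        Integer partition = resolvedPartition < 0 ? null : resolvedPartition;
        return new OutRecord(topic, partition, key, value);
    }

    public static void main(String[] args) {
        OutRecord rec = createProducerRecord("orders.DLT", -1, null, "v".getBytes());
        System.out.println(rec.partition()); // null (producer chooses)
    }
}
```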
      • publish

        protected void publish​(org.apache.kafka.clients.producer.ProducerRecord<java.lang.Object,​java.lang.Object> outRecord,
                               KafkaOperations<java.lang.Object,​java.lang.Object> kafkaTemplate,
                               org.apache.kafka.clients.consumer.ConsumerRecord<?,​?> inRecord)
        Override this if you want more than just logging of the send result.
        Parameters:
        outRecord - the record to send.
        kafkaTemplate - the template.
        inRecord - the consumer record.
        Since:
        2.2.5
      • verifySendResult

        protected void verifySendResult​(KafkaOperations<java.lang.Object,​java.lang.Object> kafkaTemplate,
                                        org.apache.kafka.clients.producer.ProducerRecord<java.lang.Object,​java.lang.Object> outRecord,
                                        @Nullable
                                        org.springframework.util.concurrent.ListenableFuture<SendResult<java.lang.Object,​java.lang.Object>> sendResult,
                                        org.apache.kafka.clients.consumer.ConsumerRecord<?,​?> inRecord)
        Wait for the send future to complete.
        Parameters:
        kafkaTemplate - the template used to send the record.
        outRecord - the record.
        sendResult - the future.
        inRecord - the original consumer record.
      • determineSendTimeout

        protected java.time.Duration determineSendTimeout​(KafkaOperations<?,​?> template)
        Determine the send timeout based on the template's producer factory and setWaitForSendResultTimeout(Duration).
        Parameters:
        template - the template.
        Returns:
        the timeout.
        Since:
        2.7.14
      • getHeaderNames

        protected DeadLetterPublishingRecoverer.HeaderNames getHeaderNames()
        Override this if you want different header names to be used in the sent record.
        Returns:
        the header names.
        Since:
        2.7