package kafka
Type Members
- sealed abstract class Acks extends AnyRef
The available options for ProducerSettings#withAcks.
Available options include:
- Acks#Zero to not wait for any acknowledgement,
- Acks#One to wait for acknowledgement only from the leader,
- Acks#All to wait for acknowledgement from all in-sync replicas.
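For example, a minimal sketch of requiring acknowledgement from all in-sync replicas (the broker address is an assumption for illustration):

```scala
import cats.effect.IO
import fs2.kafka._

// A minimal sketch: producer settings requiring acknowledgement
// from all in-sync replicas before a produce is considered complete.
val ackAllSettings: ProducerSettings[IO, String, String] =
  ProducerSettings[IO, String, String]
    .withBootstrapServers("localhost:9092") // assumed local broker
    .withAcks(Acks.All)
```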
- sealed abstract class AdminClientSettings extends AnyRef
AdminClientSettings contain settings necessary to create a KafkaAdminClient. Several convenience functions are provided so that you don't have to work with String values and keys from AdminClientConfig. It's still possible to set AdminClientConfig values with functions like withProperty.
AdminClientSettings instances are immutable and all modification functions return a new AdminClientSettings instance.
Use AdminClientSettings#apply for the default settings, and then apply any desired modifications on top of that instance.
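For example, a minimal sketch of creating settings (the broker address and property value are assumptions for illustration):

```scala
import fs2.kafka._

// A minimal sketch: default admin client settings for an assumed local
// broker, with one raw AdminClientConfig property set via withProperty.
val adminClientSettings: AdminClientSettings =
  AdminClientSettings("localhost:9092")
    .withProperty("request.timeout.ms", "30000")
```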
- sealed abstract class AutoOffsetReset extends AnyRef
The available options for ConsumerSettings#withAutoOffsetReset.
Available options include:
- AutoOffsetReset#Earliest to reset to the earliest offsets,
- AutoOffsetReset#Latest to reset to the latest offsets,
- AutoOffsetReset#None to fail if no offsets are available.
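For example, a minimal sketch of resetting to the earliest offsets (broker address and group ID are assumptions for illustration):

```scala
import cats.effect.IO
import fs2.kafka._

// A minimal sketch: consumer settings that start from the earliest
// available offsets when no committed offsets exist for the group.
val earliestSettings: ConsumerSettings[IO, String, String] =
  ConsumerSettings[IO, String, String]
    .withBootstrapServers("localhost:9092") // assumed local broker
    .withGroupId("example-group")           // hypothetical group ID
    .withAutoOffsetReset(AutoOffsetReset.Earliest)
```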
- abstract class CommitRecovery extends AnyRef
CommitRecovery describes how to recover from exceptions raised while trying to commit offsets. See CommitRecovery#Default for the default recovery strategy. If you do not wish to recover from any exceptions, you can use CommitRecovery#None.
To create a new CommitRecovery, create a new instance and implement the recoverCommitWith function with the desired recovery strategy. To use the CommitRecovery, set it with ConsumerSettings#withCommitRecovery.
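For example, a minimal sketch of opting out of recovery (settings values are assumptions for illustration):

```scala
import cats.effect.IO
import fs2.kafka._

// A minimal sketch: disable commit recovery, so commit exceptions
// are raised directly instead of being retried.
val noRecoverySettings: ConsumerSettings[IO, String, String] =
  ConsumerSettings[IO, String, String]
    .withBootstrapServers("localhost:9092") // assumed local broker
    .withGroupId("example-group")           // hypothetical group ID
    .withCommitRecovery(CommitRecovery.None)
```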
- sealed abstract class CommitRecoveryException extends KafkaException
CommitRecoveryException indicates that offset commit recovery was attempted attempts times for offsets, but that it wasn't able to complete successfully. The last encountered exception is provided as lastException.
Use CommitRecoveryException#apply to create a new instance.
- sealed abstract class CommitTimeoutException extends KafkaException
CommitTimeoutException indicates that offset commit took longer than the configured ConsumerSettings#commitTimeout. The timeout and offsets are included in the exception message.
- sealed abstract class CommittableConsumerRecord[F[_], +K, +V] extends AnyRef
CommittableConsumerRecord is a Kafka record along with an instance of CommittableOffset, which can be used to commit the record offset to Kafka. Offsets are normally committed in batches, either using CommittableOffsetBatch or via pipes, like commitBatchWithin. If you are not committing offsets to Kafka then you can use record to get the underlying record and also discard the offset.
While normally not necessary, CommittableConsumerRecord#apply can be used to create a new instance.
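For example, a minimal sketch of processing a record while keeping its offset for a later commit (process is a hypothetical user-defined effect):

```scala
import cats.effect.IO
import cats.syntax.all._
import fs2.kafka._

// A minimal sketch: run a hypothetical processing effect on the
// underlying record, then return the offset for batch committing.
def handle(
  committable: CommittableConsumerRecord[IO, String, String]
)(process: ConsumerRecord[String, String] => IO[Unit]): IO[CommittableOffset[IO]] =
  process(committable.record).as(committable.offset)
```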
- sealed abstract class CommittableOffset[F[_]] extends AnyRef
CommittableOffset represents an offsetAndMetadata for a topicPartition, along with the ability to commit that offset to Kafka with commit. Note that offsets are normally committed in batches for performance reasons. Pipes like commitBatchWithin use CommittableOffsetBatch to commit the offsets in batches.
While normally not necessary, CommittableOffset#apply can be used to create a new instance.
- sealed abstract class CommittableOffsetBatch[F[_]] extends AnyRef
CommittableOffsetBatch represents a batch of Kafka offsets which can be committed together using commit. An offset, or another batch, can be added to an existing batch using updated. Note that this requires the offsets per topic-partition to be included in-order, since offset commits in general require it.
Use CommittableOffsetBatch#empty to create an empty batch. The CommittableOffset#batch function can be used to create a batch from an existing CommittableOffset.
If you have some offsets in-order per topic-partition, you can fold them together using CommittableOffsetBatch#empty and updated, or you can use CommittableOffsetBatch#fromFoldable. Generally, prefer to use fromFoldable, as it has better performance. Provided pipes like commitBatchWithin are also to be preferred, as they also achieve better performance.
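For example, a minimal sketch of folding in-order offsets into a single batch:

```scala
import cats.effect.IO
import fs2.kafka._

// A minimal sketch: fold a list of in-order offsets (per topic-partition)
// into a single batch and commit them together.
def commitAll(offsets: List[CommittableOffset[IO]]): IO[Unit] =
  CommittableOffsetBatch.fromFoldable(offsets).commit
```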
- sealed abstract class CommittableProducerRecords[F[_], +K, +V] extends AnyRef
CommittableProducerRecords represents zero or more ProducerRecords and a CommittableOffset, used by TransactionalKafkaProducer to produce the records and commit the offset atomically.
CommittableProducerRecords can be created using one of the following options:
- CommittableProducerRecords#apply to produce zero or more records within the same transaction as the offset is committed,
- CommittableProducerRecords#one to produce exactly one record within the same transaction as the offset is committed.
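For example, a minimal sketch of pairing one produced record with a consumed offset (the output topic is an assumption for illustration):

```scala
import cats.effect.IO
import fs2.kafka._

// A minimal sketch: wrap exactly one output record together with the
// offset of the consumed record, for atomic transactional produce.
def toTransactional(
  committable: CommittableConsumerRecord[IO, String, String]
): CommittableProducerRecords[IO, String, String] =
  CommittableProducerRecords.one(
    ProducerRecord("output-topic", committable.record.key, committable.record.value),
    committable.offset
  )
```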
- sealed abstract class ConsumerGroupException extends KafkaException
Indicates that one or more of the following conditions occurred while attempting to commit offsets:
- There were CommittableOffsets without a consumer group ID.
- There were CommittableOffsets for multiple consumer group IDs.
- sealed abstract class ConsumerRecord[+K, +V] extends AnyRef
ConsumerRecord represents a record which has been consumed from Kafka. At the very least, this includes a key of type K, a value of type V, and the topic, partition, and offset of the consumed record.
To create a new instance, use ConsumerRecord#apply.
- sealed abstract class ConsumerSettings[F[_], K, V] extends AnyRef
ConsumerSettings contain settings necessary to create a KafkaConsumer. At the very least, this includes key and value deserializers.
The following consumer configuration defaults are used:
- auto.offset.reset is set to none to avoid the surprise of the otherwise default latest setting,
- enable.auto.commit is set to false since offset commits are managed manually.
Several convenience functions are provided so that you don't have to work with String values and ConsumerConfig for configuration. It's still possible to specify ConsumerConfig values with functions like withProperty.
ConsumerSettings instances are immutable and all modification functions return a new ConsumerSettings instance.
Use ConsumerSettings#apply to create a new instance.
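For example, a minimal sketch (broker address, group ID, and client ID are assumptions for illustration):

```scala
import cats.effect.IO
import fs2.kafka._

// A minimal sketch: consumer settings using implicit String deserializers,
// with one raw ConsumerConfig property set via withProperty.
val consumerSettings: ConsumerSettings[IO, String, String] =
  ConsumerSettings[IO, String, String]
    .withBootstrapServers("localhost:9092")      // assumed local broker
    .withGroupId("example-group")                // hypothetical group ID
    .withProperty("client.id", "example-client") // raw property
```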
- sealed abstract class ConsumerShutdownException extends KafkaException
ConsumerShutdownException indicates that a request could not be completed because the consumer has already shutdown.
- sealed abstract class DeserializationException extends KafkaException
Exception raised with Deserializer#failWith when deserialization was unable to complete successfully.
- type Deserializer[F[_], A] = GenericDeserializer[KeyOrValue, F, A]
- sealed abstract class GenericDeserializer[-T <: KeyOrValue, F[_], A] extends AnyRef
Functional composable Kafka key- and record deserializer with support for effect types.
- sealed abstract class GenericSerializer[-T <: KeyOrValue, F[_], A] extends AnyRef
- sealed abstract class Header extends org.apache.kafka.common.header.Header
Header represents a String key and Array[Byte] value which can be included as part of Headers when creating a ProducerRecord. Headers are included together with a record once produced, and can be used by consumers.
To create a new Header, use Header#apply.
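For example, a minimal sketch of attaching a header to a record (header key, value, and topic are assumptions for illustration):

```scala
import fs2.kafka._

// A minimal sketch: create a Header, append it to empty Headers,
// and include the Headers on a record to be produced.
val headers: Headers =
  Headers.empty.append(Header("correlation-id", "abc-123".getBytes("UTF-8")))

val recordWithHeaders: ProducerRecord[String, String] =
  ProducerRecord("example-topic", "key", "value").withHeaders(headers)
```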
- sealed abstract class HeaderDeserializer[A] extends AnyRef
HeaderDeserializer is a functional deserializer for Kafka record header values. It's similar to Deserializer, except it only has access to the header bytes, and it does not interoperate with the Kafka Deserializer interface.
- sealed abstract class HeaderSerializer[A] extends AnyRef
HeaderSerializer is a functional serializer for Kafka record header values.
It's similar to Serializer, except it only has access to the value, and it does not interoperate with the Kafka Serializer interface.
- sealed abstract class Headers extends AnyRef
Headers represent an immutable append-only collection of Headers. To create a new Headers instance, you can use Headers#apply or Headers#empty and add an instance of Header using append.
- type Id[+A] = A
- sealed abstract class IsolationLevel extends AnyRef
The available options for ConsumerSettings#withIsolationLevel.
Available options include:
- IsolationLevel#ReadCommitted to only read committed records,
- IsolationLevel#ReadUncommitted to also read uncommitted records.
- sealed abstract class Jitter[F[_]] extends AnyRef
Jitter represents the ability to apply jitter to an existing value n, effectively multiplying n with a pseudorandom value between 0 and 1 (both inclusive, although implementation dependent).
The default Jitter#default uses java.util.Random for pseudorandom values and always applies jitter with a value between 0 (inclusive) and 1 (exclusive). If no jitter is desired, use Jitter#none.
- sealed abstract class KafkaAdminClient[F[_]] extends AnyRef
KafkaAdminClient represents an admin client for Kafka, which is able to describe queries about topics, consumer groups, offsets, and other entities related to Kafka.
Use KafkaAdminClient.resource or KafkaAdminClient.stream to create an instance.
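For example, a minimal sketch of listing topic names (the broker address is an assumption for illustration):

```scala
import cats.effect.IO
import fs2.kafka._

// A minimal sketch: acquire an admin client as a resource
// and list the names of all topics.
val topicNames: IO[Set[String]] =
  KafkaAdminClient
    .resource[IO](AdminClientSettings("localhost:9092")) // assumed local broker
    .use(_.listTopics.names)
```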
- type KafkaByteConsumer = Consumer[Array[Byte], Array[Byte]]
Alias for Java Kafka Consumer[Array[Byte], Array[Byte]].
- type KafkaByteConsumerRecord = org.apache.kafka.clients.consumer.ConsumerRecord[Array[Byte], Array[Byte]]
Alias for Java Kafka ConsumerRecord[Array[Byte], Array[Byte]].
- type KafkaByteConsumerRecords = ConsumerRecords[Array[Byte], Array[Byte]]
Alias for Java Kafka ConsumerRecords[Array[Byte], Array[Byte]].
- type KafkaByteProducer = Producer[Array[Byte], Array[Byte]]
Alias for Java Kafka Producer[Array[Byte], Array[Byte]].
- type KafkaByteProducerRecord = org.apache.kafka.clients.producer.ProducerRecord[Array[Byte], Array[Byte]]
Alias for Java Kafka ProducerRecord[Array[Byte], Array[Byte]].
- sealed abstract class KafkaConsumer[F[_], K, V] extends KafkaConsume[F, K, V] with KafkaConsumeChunk[F, K, V] with KafkaAssignment[F] with KafkaOffsetsV2[F] with KafkaSubscription[F] with KafkaTopicsV2[F] with KafkaCommit[F] with KafkaMetrics[F] with KafkaConsumerLifecycle[F]
KafkaConsumer represents a consumer of Kafka records, with the ability to subscribe to topics, start a single top-level stream, and optionally control it via the provided fiber instance.
The following top-level streams are provided.
- stream provides a single stream of records, where the order of records is guaranteed per topic-partition.
- partitionedStream provides a stream with elements as streams that continually request records for a single partition. Order is guaranteed per topic-partition, but all assigned partitions will have to be processed in parallel.
- partitionsMapStream provides a stream where each element contains a current assignment. The current assignment is a Map where keys are TopicPartitions and values are streams with records for that particular TopicPartition.
For the streams, records are wrapped in CommittableConsumerRecords which provide CommittableOffsets with the ability to commit record offsets to Kafka. For performance reasons, offsets are usually committed in batches using CommittableOffsetBatch. Provided Pipes, like commitBatchWithin, are available for batch committing offsets. If you are not committing offsets to Kafka, you can simply discard the CommittableOffset and only make use of the record.
While it's technically possible to start more than one stream from a single KafkaConsumer, it is generally not recommended, as there is no guarantee which stream will receive which records, and there might be an overlap, in terms of duplicate records, between the two streams. If a first stream completes, possibly with error, there's no guarantee the stream has processed all of the records it received, and a second stream from the same KafkaConsumer might not be able to pick up where the first one left off. Therefore, only create a single top-level stream per KafkaConsumer, and if you want to start a new stream after the first one finishes, let the KafkaConsumer shutdown and create a new one.
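For example, a minimal sketch of a single top-level stream (broker address, group ID, and topic are assumptions for illustration):

```scala
import cats.effect.{IO, IOApp}
import cats.syntax.all._
import fs2.kafka._

// A minimal sketch: subscribe to a topic, process each record, and
// commit its offset. Prefer batched commits in real code, e.g. via
// commitBatchWithin (see the commitBatchWithin entry below).
object ConsumerExample extends IOApp.Simple {
  val settings: ConsumerSettings[IO, String, String] =
    ConsumerSettings[IO, String, String]
      .withBootstrapServers("localhost:9092") // assumed local broker
      .withGroupId("example-group")           // hypothetical group ID

  val run: IO[Unit] =
    KafkaConsumer
      .stream(settings)
      .subscribeTo("example-topic") // hypothetical topic
      .records
      .evalMap { committable =>
        IO.println(committable.record.value).as(committable.offset)
      }
      .evalMap(_.commit)
      .compile
      .drain
}
```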
- type KafkaDeserializer[A] = org.apache.kafka.common.serialization.Deserializer[A]
Alias for Java Kafka Deserializer[A].
- type KafkaHeader = org.apache.kafka.common.header.Header
Alias for Java Kafka Header.
- type KafkaHeaders = org.apache.kafka.common.header.Headers
Alias for Java Kafka Headers.
- abstract class KafkaProducer[F[_], K, V] extends AnyRef
KafkaProducer represents a producer of Kafka records, with the ability to produce ProducerRecords using produce.
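For example, a minimal sketch of producing a single record (broker address and topic are assumptions for illustration):

```scala
import cats.effect.{IO, IOApp}
import cats.syntax.all._
import fs2.kafka._

// A minimal sketch: produce one record and wait for broker acknowledgement.
// The outer effect sends the record; the inner effect awaits the result.
object ProducerExample extends IOApp.Simple {
  val settings: ProducerSettings[IO, String, String] =
    ProducerSettings[IO, String, String]
      .withBootstrapServers("localhost:9092") // assumed local broker

  val run: IO[Unit] =
    KafkaProducer.resource(settings).use { producer =>
      producer
        .produce(ProducerRecords.one(ProducerRecord("example-topic", "key", "value")))
        .flatten
        .void
    }
}
```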
- sealed abstract class KafkaProducerConnection[F[_]] extends AnyRef
KafkaProducerConnection represents a connection to a Kafka broker that can be used to create KafkaProducer instances. All KafkaProducer instances created from a given KafkaProducerConnection share a single underlying connection.
- type KafkaSerializer[A] = org.apache.kafka.common.serialization.Serializer[A]
Alias for Java Kafka Serializer[A].
- sealed trait Key extends KeyOrValue
- type KeyDeserializer[F[_], A] = GenericDeserializer[Key, F, A]
- sealed trait KeyOrValue extends AnyRef
Phantom types to indicate whether a Serializer/Deserializer is for keys, values, or both.
- type KeySerializer[F[_], A] = GenericSerializer[Key, F, A]
- sealed abstract class NotSubscribedException extends KafkaException
NotSubscribedException indicates that a Stream was started in KafkaConsumer even though the consumer had not been subscribed to any topics or assigned any partitions before starting.
- sealed abstract class ProducerRecord[+K, +V] extends AnyRef
ProducerRecord represents a record which can be produced to Kafka.
ProducerRecord represents a record which can be produced to Kafka. At the very least, this includes a key of type K, a value of type V, and to which topic the record should be produced. The partition, timestamp, and headers can be set by using the withPartition, withTimestamp, and withHeaders functions, respectively.
To create a new instance, use ProducerRecord#apply.
- type ProducerRecords[K, V] = Chunk[ProducerRecord[K, V]]
- type ProducerResult[K, V] = Chunk[(ProducerRecord[K, V], RecordMetadata)]
- sealed abstract class ProducerSettings[F[_], K, V] extends AnyRef
ProducerSettings contain settings necessary to create a KafkaProducer. At the very least, this includes a key serializer and a value serializer.
Several convenience functions are provided so that you don't have to work with String values and ProducerConfig for configuration. It's still possible to specify ProducerConfig values with functions like withProperty.
ProducerSettings instances are immutable and all modification functions return a new ProducerSettings instance.
Use ProducerSettings#apply to create a new instance.
- sealed abstract class SerializationException extends KafkaException
Exception raised with Serializer#failWith when serialization was unable to complete successfully.
- type Serializer[F[_], A] = GenericSerializer[KeyOrValue, F, A]
- sealed abstract class Timestamp extends AnyRef
Timestamp is an optional timestamp value representing a createTime, logAppendTime, unknownTime, or no timestamp at all.
- abstract class TransactionalKafkaProducer[F[_], K, V] extends AnyRef
Represents a producer of Kafka records specialized for 'read-process-write' streams, with the ability to atomically produce ProducerRecords and commit corresponding CommittableOffsets using produce.
Records are wrapped in TransactionalProducerRecords, which is a chunk of CommittableProducerRecords, each of which wraps zero or more records together with a CommittableOffset.
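For example, a minimal sketch of creating a transactional producer (transactional ID and broker address are assumptions for illustration):

```scala
import cats.effect.IO
import fs2.kafka._

// A minimal sketch: settings with a transactional ID on top of
// regular producer settings, and a stream of the producer itself.
val transactionalSettings: TransactionalProducerSettings[IO, String, String] =
  TransactionalProducerSettings(
    "example-transactional-id", // hypothetical transactional ID
    ProducerSettings[IO, String, String]
      .withBootstrapServers("localhost:9092") // assumed local broker
  )

val transactionalProducer: fs2.Stream[IO, TransactionalKafkaProducer[IO, String, String]] =
  TransactionalKafkaProducer.stream(transactionalSettings)
```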
- type TransactionalProducerRecords[F[_], +K, +V] = Chunk[CommittableProducerRecords[F, K, V]]
- sealed abstract class TransactionalProducerSettings[F[_], K, V] extends AnyRef
TransactionalProducerSettings contain settings necessary to create a TransactionalKafkaProducer. This includes a transactional ID and any other ProducerSettings.
TransactionalProducerSettings instances are immutable and modification functions return a new TransactionalProducerSettings instance.
Use TransactionalProducerSettings.apply to create a new instance.
- sealed abstract class UnexpectedTopicException extends KafkaException
UnexpectedTopicException is raised when serialization or deserialization occurred for an unexpected topic which isn't supported by the Serializer or Deserializer.
- sealed trait Value extends KeyOrValue
- type ValueDeserializer[F[_], A] = GenericDeserializer[Value, F, A]
- type ValueSerializer[F[_], A] = GenericSerializer[Value, F, A]
Value Members
- val Deserializer: GenericDeserializer.type
- val Serializer: GenericSerializer.type
- def commitBatchWithin[F[_]](n: Int, d: FiniteDuration)(implicit F: Temporal[F]): Pipe[F, CommittableOffset[F], Unit]
Commits offsets in batches of every n offsets or time window of length d, whichever happens first. If there are no offsets to commit within a time window, no attempt will be made to commit offsets for that time window.
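For example, a minimal sketch of batch committing a stream of offsets (the batch size and window are arbitrary):

```scala
import scala.concurrent.duration._
import cats.effect.IO
import fs2.kafka._

// A minimal sketch: commit batches of up to 500 offsets,
// or whatever has accumulated after 15 seconds.
def commitOffsets(
  offsets: fs2.Stream[IO, CommittableOffset[IO]]
): fs2.Stream[IO, Unit] =
  offsets.through(commitBatchWithin[IO](500, 15.seconds))
```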
- object Acks
- object AdminClientSettings
- object AutoOffsetReset
- object CommitRecovery
- object CommitRecoveryException extends Serializable
- object CommittableConsumerRecord
- object CommittableOffset
- object CommittableOffsetBatch
- object CommittableProducerRecords
- object ConsumerRecord
- object ConsumerSettings
- object GenericDeserializer
- object GenericSerializer
Functional composable Kafka key- and record serializer with support for effect types.
- object Header
- object HeaderDeserializer
- object HeaderSerializer
- object Headers
- object IsolationLevel
- object Jitter
- object KafkaAdminClient
- object KafkaConsumer
- object KafkaProducer
- object KafkaProducerConnection
- object ProducerRecord
- object ProducerRecords
- object ProducerSettings
- object Timestamp
- object TransactionalKafkaProducer
- object TransactionalProducerRecords
- object TransactionalProducerSettings
- object instances