package kafka

Source: package.scala

Package Members

  1. package admin
  2. package consumer
  3. package producer
  4. package security
  5. package vulcan

Type Members

  1. sealed abstract class Acks extends AnyRef

    The available options for ProducerSettings#withAcks.

    Available options include:

    • Acks#Zero to not wait for any acknowledgement from the server,
    • Acks#One to only wait for acknowledgement from the leader node,
    • Acks#All to wait for acknowledgement from all in-sync replicas.
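
    As a minimal sketch (the ProducerSettings value and its String type parameters are illustrative), an option is applied like any other setting:

      import cats.effect.IO
      import fs2.kafka._

      // a sketch: wait for acknowledgement from all in-sync replicas
      def withAllAcks(
        settings: ProducerSettings[IO, String, String]
      ): ProducerSettings[IO, String, String] =
        settings.withAcks(Acks.All)
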
  2. sealed abstract class AdminClientSettings extends AnyRef

    AdminClientSettings contain settings necessary to create a KafkaAdminClient. Several convenience functions are provided so that you don't have to work with String values and keys from AdminClientConfig. It's still possible to set AdminClientConfig values with functions like withProperty.

    AdminClientSettings instances are immutable and all modification functions return a new AdminClientSettings instance.

    Use AdminClientSettings#apply for the default settings, and then apply any desired modifications on top of that instance.
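
    A minimal sketch (the bootstrap servers value is a placeholder, and the exact shape of apply can differ between library versions):

      import fs2.kafka.AdminClientSettings

      // a sketch: start from the default settings and apply modifications;
      // each withX call returns a new immutable instance
      val adminClientSettings: AdminClientSettings =
        AdminClientSettings("localhost:9092")          // placeholder bootstrap servers
          .withProperty("request.timeout.ms", "30000") // raw AdminClientConfig key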

  3. sealed abstract class AutoOffsetReset extends AnyRef

    The available options for ConsumerSettings#withAutoOffsetReset.

    Available options include:

    • AutoOffsetReset#Earliest to start consuming from the earliest available offsets,
    • AutoOffsetReset#Latest to start consuming from the latest available offsets,
    • AutoOffsetReset#None to fail if there are no previously committed offsets for the group.

  4. abstract class CommitRecovery extends AnyRef

    CommitRecovery describes how to recover from exceptions raised while trying to commit offsets. See CommitRecovery#Default for the default recovery strategy. If you do not wish to recover from any exceptions, you can use CommitRecovery#None.

    To create a new CommitRecovery, create a new instance and implement the recoverCommitWith function with the desired recovery strategy. To use the CommitRecovery, set it with ConsumerSettings#withCommitRecovery.
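
    For example (a sketch, with placeholder bootstrap servers and illustrative String type parameters), the default recovery strategy can be replaced with CommitRecovery#None like this:

      import cats.effect.IO
      import fs2.kafka._

      // a sketch: raise commit exceptions directly instead of recovering from them
      val settings: ConsumerSettings[IO, String, String] =
        ConsumerSettings[IO, String, String]
          .withBootstrapServers("localhost:9092")
          .withCommitRecovery(CommitRecovery.None)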

  5. sealed abstract class CommitRecoveryException extends KafkaException

    CommitRecoveryException indicates that offset commit recovery was attempted attempts times for offsets, but that it wasn't able to complete successfully. The last encountered exception is provided as lastException.

    Use CommitRecoveryException#apply to create a new instance.

  6. sealed abstract class CommitTimeoutException extends KafkaException

    CommitTimeoutException indicates that offset commit took longer than the configured ConsumerSettings#commitTimeout. The timeout and offsets are included in the exception message.

  7. sealed abstract class CommittableConsumerRecord[F[_], +K, +V] extends AnyRef

    CommittableConsumerRecord is a Kafka record along with an instance of CommittableOffset, which can be used to commit the record offset to Kafka. Offsets are normally committed in batches, either using CommittableOffsetBatch or via pipes, like commitBatchWithin. If you are not committing offsets to Kafka then you can use record to get the underlying record and also discard the offset.

    While normally not necessary, CommittableConsumerRecord#apply can be used to create a new instance.
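
    As a sketch (the input stream is assumed to come from a KafkaConsumer, and processRecord is a hypothetical effectful function), the usual pattern is to use the record, keep the offset, and commit offsets in batches:

      import cats.effect.IO
      import fs2.kafka._
      import scala.concurrent.duration._

      // a sketch: process each record, keep only its offset, commit offsets in batches
      def handle(
        records: fs2.Stream[IO, CommittableConsumerRecord[IO, String, String]],
        processRecord: ConsumerRecord[String, String] => IO[Unit] // hypothetical
      ): fs2.Stream[IO, Unit] =
        records
          .evalMap(committable => processRecord(committable.record).as(committable.offset))
          .through(commitBatchWithin(500, 15.seconds))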

  8. sealed abstract class CommittableOffset[F[_]] extends AnyRef

    CommittableOffset represents an offsetAndMetadata for a topicPartition, along with the ability to commit that offset to Kafka with commit. Note that offsets are normally committed in batches for performance reasons. Pipes like commitBatchWithin use CommittableOffsetBatch to commit the offsets in batches.

    While normally not necessary, CommittableOffset#apply can be used to create a new instance.

  9. sealed abstract class CommittableOffsetBatch[F[_]] extends AnyRef

    CommittableOffsetBatch represents a batch of Kafka offsets which can be committed together using commit. An offset, or another batch, can be added to an existing batch using updated. Note that this requires the offsets per topic-partition to be included in-order, since offset commits in general require it.

    Use CommittableOffsetBatch#empty to create an empty batch. The CommittableOffset#batch function can be used to create a batch from an existing CommittableOffset.

    If you have some offsets in-order per topic-partition, you can fold them together using CommittableOffsetBatch#empty and updated, or you can use CommittableOffsetBatch#fromFoldable. Generally, prefer fromFoldable, as it has better performance. Provided pipes, like commitBatchWithin, are also preferable, as they achieve better performance as well.
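
    As a sketch (offsets is an assumed list of CommittableOffset values, in-order per topic-partition), the two approaches look like this:

      import cats.effect.IO
      import fs2.kafka._

      // a sketch: fold offsets into a batch using empty and updated
      def batchByFolding(offsets: List[CommittableOffset[IO]]): CommittableOffsetBatch[IO] =
        offsets.foldLeft(CommittableOffsetBatch.empty[IO])(_ updated _)

      // generally preferred, as it has better performance
      def batchFromFoldable(offsets: List[CommittableOffset[IO]]): CommittableOffsetBatch[IO] =
        CommittableOffsetBatch.fromFoldable(offsets)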

  10. sealed abstract class CommittableProducerRecords[F[_], +K, +V] extends AnyRef

    CommittableProducerRecords represents zero or more ProducerRecords and a CommittableOffset, used by TransactionalKafkaProducer to produce the records and commit the offset atomically.

    CommittableProducerRecords instances can be created using one of the following options:

    • CommittableProducerRecords#apply to produce zero or more records within the same transaction as the offset is committed.
    • CommittableProducerRecords#one to produce exactly one record within the same transaction as the offset is committed.
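
    A small sketch (the record and offset parameters are assumed to already exist; the offset typically comes from a consumed CommittableConsumerRecord):

      import cats.effect.IO
      import fs2.kafka._

      // a sketch: pair a single record with the offset to commit in the same transaction
      def toCommittable(
        record: ProducerRecord[String, String],
        offset: CommittableOffset[IO]
      ): CommittableProducerRecords[IO, String, String] =
        CommittableProducerRecords.one(record, offset)
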
  11. sealed abstract class ConsumerGroupException extends KafkaException

    Indicates that one or more of the following conditions occurred while attempting to commit offsets.

  12. sealed abstract class ConsumerRecord[+K, +V] extends AnyRef

    ConsumerRecord represents a record which has been consumed from Kafka. At the very least, this includes a key of type K, value of type V, and the topic, partition, and offset of the consumed record.

    To create a new instance, use ConsumerRecord#apply.

  13. sealed abstract class ConsumerSettings[F[_], K, V] extends AnyRef

    ConsumerSettings contain settings necessary to create a KafkaConsumer. At the very least, this includes key and value deserializers.

    The following consumer configuration defaults are used.

    • auto.offset.reset is set to none to avoid the surprise of the otherwise default latest setting.
    • enable.auto.commit is set to false since offset commits are managed manually.

    Several convenience functions are provided so that you don't have to work with String values and ConsumerConfig for configuration. It's still possible to specify ConsumerConfig values with functions like withProperty.

    ConsumerSettings instances are immutable and all modification functions return a new ConsumerSettings instance.

    Use ConsumerSettings#apply to create a new instance.
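
    A typical construction might look like the following sketch (bootstrap servers and group ID are placeholders; the implicit String deserializers are provided by the library):

      import cats.effect.IO
      import fs2.kafka._

      // a sketch: key and value Deserializer instances for String are picked up implicitly
      val consumerSettings: ConsumerSettings[IO, String, String] =
        ConsumerSettings[IO, String, String]
          .withAutoOffsetReset(AutoOffsetReset.Earliest) // override the none default
          .withBootstrapServers("localhost:9092")
          .withGroupId("my-group")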

  14. sealed abstract class ConsumerShutdownException extends KafkaException

    ConsumerShutdownException indicates that a request could not be completed because the consumer has already shutdown.

  15. sealed abstract class DeserializationException extends KafkaException

    Exception raised with Deserializer#failWith when deserialization was unable to complete successfully.

  16. type Deserializer[F[_], A] = GenericDeserializer[KeyOrValue, F, A]
  17. sealed abstract class GenericDeserializer[-T <: KeyOrValue, F[_], A] extends AnyRef

    Functional composable Kafka key- and record deserializer with support for effect types.

  18. sealed abstract class GenericSerializer[-T <: KeyOrValue, F[_], A] extends AnyRef
  19. sealed abstract class Header extends org.apache.kafka.common.header.Header

    Header represents a String key and Array[Byte] value which can be included as part of Headers when creating a ProducerRecord. Headers are included together with a record once produced, and can be used by consumers.

    To create a new Header, use Header#apply.

  20. sealed abstract class HeaderDeserializer[A] extends AnyRef

    HeaderDeserializer is a functional deserializer for Kafka record header values. It's similar to Deserializer, except it only has access to the header bytes, and it does not interoperate with the Kafka Deserializer interface.

  21. sealed abstract class HeaderSerializer[A] extends AnyRef

    HeaderSerializer is a functional serializer for Kafka record header values. It's similar to Serializer, except it only has access to the value, and it does not interoperate with the Kafka Serializer interface.

  22. sealed abstract class Headers extends AnyRef

    Headers represent an immutable append-only collection of Headers. To create a new Headers instance, you can use Headers#apply or Headers#empty and add an instance of Header using append.
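
    A brief sketch (the header key and value are illustrative):

      import fs2.kafka.{ Header, Headers }

      // a sketch: each append returns a new Headers instance
      val headers: Headers =
        Headers.empty.append(Header("correlation-id", "abc-123"))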

  23. type Id[+A] = A
  24. sealed abstract class IsolationLevel extends AnyRef

    The available options for ConsumerSettings#withIsolationLevel.

    Available options include:

    • IsolationLevel#ReadCommitted to only read records from committed transactions,
    • IsolationLevel#ReadUncommitted to also read records from transactions which have not yet been committed.

  25. sealed abstract class Jitter[F[_]] extends AnyRef

    Jitter represents the ability to apply jitter to an existing value n, effectively multiplying n with a pseudorandom value between 0 and 1 (both inclusive, although implementation dependent).

    The default Jitter#default uses java.util.Random for pseudorandom values and always applies jitter with a value between 0 (inclusive) and 1 (exclusive). If no jitter is desired, use Jitter#none.

  26. sealed abstract class KafkaAdminClient[F[_]] extends AnyRef

    KafkaAdminClient represents an admin client for Kafka, which is able to describe queries about topics, consumer groups, offsets, and other entities related to Kafka.

    Use KafkaAdminClient.resource or KafkaAdminClient.stream to create an instance.
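
    For example (a sketch, assuming listTopics exposes the topic names; the settings value is built as described under AdminClientSettings):

      import cats.effect.IO
      import fs2.kafka._

      // a sketch: acquire the admin client as a Resource and list topic names
      def topicNames(settings: AdminClientSettings): IO[Set[String]] =
        KafkaAdminClient.resource[IO](settings).use(_.listTopics.names)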

  27. type KafkaByteConsumer = Consumer[Array[Byte], Array[Byte]]

    Alias for Java Kafka Consumer[Array[Byte], Array[Byte]].

  28. type KafkaByteConsumerRecord = org.apache.kafka.clients.consumer.ConsumerRecord[Array[Byte], Array[Byte]]

    Alias for Java Kafka ConsumerRecord[Array[Byte], Array[Byte]].

  29. type KafkaByteConsumerRecords = ConsumerRecords[Array[Byte], Array[Byte]]

    Alias for Java Kafka ConsumerRecords[Array[Byte], Array[Byte]].

  30. type KafkaByteProducer = Producer[Array[Byte], Array[Byte]]

    Alias for Java Kafka Producer[Array[Byte], Array[Byte]].

  31. type KafkaByteProducerRecord = org.apache.kafka.clients.producer.ProducerRecord[Array[Byte], Array[Byte]]

    Alias for Java Kafka ProducerRecord[Array[Byte], Array[Byte]].

  32. sealed abstract class KafkaConsumer[F[_], K, V] extends KafkaConsume[F, K, V] with KafkaConsumeChunk[F, K, V] with KafkaAssignment[F] with KafkaOffsetsV2[F] with KafkaSubscription[F] with KafkaTopicsV2[F] with KafkaCommit[F] with KafkaMetrics[F] with KafkaConsumerLifecycle[F]

    KafkaConsumer represents a consumer of Kafka records, with the ability to subscribe to topics, start a single top-level stream, and optionally control it via the provided fiber instance.

    The following top-level streams are provided.

    • stream provides a single stream of records, where the order of records is guaranteed per topic-partition.
    • partitionedStream provides a stream with elements as streams that continually request records for a single partition. Order is guaranteed per topic-partition, but all assigned partitions will have to be processed in parallel.
    • partitionsMapStream provides a stream where each element contains the current assignment. The current assignment is a Map, where keys are TopicPartitions and values are streams of records for that TopicPartition.

    For all of these streams, records are wrapped in CommittableConsumerRecords which provide CommittableOffsets with the ability to commit record offsets to Kafka. For performance reasons, offsets are usually committed in batches using CommittableOffsetBatch. Provided Pipes, like commitBatchWithin, are available for batch committing offsets. If you are not committing offsets to Kafka, you can simply discard the CommittableOffset and only make use of the record.

    While it's technically possible to start more than one stream from a single KafkaConsumer, it is generally not recommended as there is no guarantee which stream will receive which records, and there might be an overlap, in terms of duplicate records, between the two streams. If a first stream completes, possibly with error, there's no guarantee the stream has processed all of the records it received, and a second stream from the same KafkaConsumer might not be able to pick up where the first one left off. Therefore, only create a single top-level stream per KafkaConsumer, and if you want to start a new stream if the first one finishes, let the KafkaConsumer shutdown and create a new one.
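
    Putting the above together, a single top-level stream might look like the following sketch (the topic name is a placeholder, processRecord is hypothetical, and settings are built as described under ConsumerSettings):

      import cats.effect.IO
      import fs2.kafka._
      import scala.concurrent.duration._

      // a sketch of a single top-level stream: subscribe, consume, process, commit in batches
      def consume(
        settings: ConsumerSettings[IO, String, String],
        processRecord: ConsumerRecord[String, String] => IO[Unit] // hypothetical
      ): fs2.Stream[IO, Unit] =
        KafkaConsumer
          .stream(settings)
          .evalTap(_.subscribeTo("topic"))
          .flatMap(_.stream) // the single top-level stream
          .mapAsync(25)(committable => processRecord(committable.record).as(committable.offset))
          .through(commitBatchWithin(500, 15.seconds))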

  33. type KafkaDeserializer[A] = org.apache.kafka.common.serialization.Deserializer[A]

    Alias for Java Kafka Deserializer[A].

  34. type KafkaHeader = org.apache.kafka.common.header.Header

    Alias for Java Kafka Header.

  35. type KafkaHeaders = org.apache.kafka.common.header.Headers

    Alias for Java Kafka Headers.

  36. abstract class KafkaProducer[F[_], K, V] extends AnyRef

    KafkaProducer represents a producer of Kafka records, with the ability to produce ProducerRecords using produce.
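
    A short sketch (topic, key, and value are placeholders; settings are built as described under ProducerSettings):

      import cats.effect.IO
      import fs2.kafka._

      // a sketch: the outer effect enqueues the record, the inner effect
      // completes once the record has been acknowledged
      def produceOne(
        settings: ProducerSettings[IO, String, String]
      ): IO[ProducerResult[String, String]] =
        KafkaProducer.resource(settings).use { producer =>
          producer
            .produce(ProducerRecords.one(ProducerRecord("topic", "key", "value")))
            .flatten
        }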

  37. sealed abstract class KafkaProducerConnection[F[_]] extends AnyRef

    KafkaProducerConnection represents a connection to a Kafka broker that can be used to create KafkaProducer instances. All KafkaProducer instances created from a given KafkaProducerConnection share a single underlying connection.

  38. type KafkaSerializer[A] = org.apache.kafka.common.serialization.Serializer[A]

    Alias for Java Kafka Serializer[A].

  39. sealed trait Key extends KeyOrValue
  40. type KeyDeserializer[F[_], A] = GenericDeserializer[Key, F, A]
  41. sealed trait KeyOrValue extends AnyRef

    Phantom types to indicate whether a Serializer/Deserializer is for keys, values, or both.

  42. type KeySerializer[F[_], A] = GenericSerializer[Key, F, A]
  43. sealed abstract class NotSubscribedException extends KafkaException

    NotSubscribedException indicates that a Stream was started in KafkaConsumer even though the consumer had not been subscribed to any topics or assigned any partitions before starting.

  44. sealed abstract class ProducerRecord[+K, +V] extends AnyRef

    ProducerRecord represents a record which can be produced to Kafka. At the very least, this includes a key of type K, a value of type V, and to which topic the record should be produced. The partition, timestamp, and headers can be set by using the withPartition, withTimestamp, and withHeaders functions, respectively.

    To create a new instance, use ProducerRecord#apply.
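
    For instance (all values are illustrative):

      import fs2.kafka.{ Header, Headers, ProducerRecord }

      // a sketch: create a record and optionally set partition and headers
      val record: ProducerRecord[String, String] =
        ProducerRecord("topic", "key", "value")
          .withPartition(0)
          .withHeaders(Headers(Header("source", "example")))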

  45. type ProducerRecords[K, V] = Chunk[ProducerRecord[K, V]]
  46. type ProducerResult[K, V] = Chunk[(ProducerRecord[K, V], RecordMetadata)]
  47. sealed abstract class ProducerSettings[F[_], K, V] extends AnyRef

    ProducerSettings contain settings necessary to create a KafkaProducer. At the very least, this includes a key serializer and a value serializer.

    Several convenience functions are provided so that you don't have to work with String values and ProducerConfig for configuration. It's still possible to specify ProducerConfig values with functions like withProperty.

    ProducerSettings instances are immutable and all modification functions return a new ProducerSettings instance.

    Use ProducerSettings#apply to create a new instance.
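
    A typical construction might look like the following sketch (bootstrap servers are a placeholder; the implicit String serializers are provided by the library):

      import cats.effect.IO
      import fs2.kafka._

      // a sketch: key and value Serializer instances for String are picked up implicitly
      val producerSettings: ProducerSettings[IO, String, String] =
        ProducerSettings[IO, String, String]
          .withBootstrapServers("localhost:9092")
          .withProperty("delivery.timeout.ms", "120000") // raw ProducerConfig key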

  48. sealed abstract class SerializationException extends KafkaException

    Exception raised with Serializer#failWith when serialization was unable to complete successfully.

  49. type Serializer[F[_], A] = GenericSerializer[KeyOrValue, F, A]
  50. sealed abstract class Timestamp extends AnyRef

    Timestamp is an optional timestamp value representing a createTime, logAppendTime, unknownTime, or no timestamp at all.

  51. abstract class TransactionalKafkaProducer[F[_], K, V] extends AnyRef

    Represents a producer of Kafka records specialized for 'read-process-write' streams, with the ability to atomically produce ProducerRecords and commit corresponding CommittableOffsets using produce.

    Records are wrapped in TransactionalProducerRecords, which is a chunk of CommittableProducerRecords, each wrapping zero or more records together with a CommittableOffset.
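
    A rough sketch (the transactional ID and the upstream stream of CommittableProducerRecords are assumed; producerSettings is built as described under ProducerSettings):

      import cats.effect.IO
      import fs2.kafka._
      import scala.concurrent.duration._

      // a sketch: each produced chunk and its offsets are committed in one transaction
      def produceTransactionally(
        producerSettings: ProducerSettings[IO, String, String],
        committables: fs2.Stream[IO, CommittableProducerRecords[IO, String, String]]
      ): fs2.Stream[IO, ProducerResult[String, String]] =
        TransactionalKafkaProducer
          .stream(TransactionalProducerSettings("my-transactional-id", producerSettings))
          .flatMap { producer =>
            committables
              .groupWithin(100, 5.seconds) // Chunk[CommittableProducerRecords] = TransactionalProducerRecords
              .evalMap(producer.produce)
          }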

  52. type TransactionalProducerRecords[F[_], +K, +V] = Chunk[CommittableProducerRecords[F, K, V]]
  53. sealed abstract class TransactionalProducerSettings[F[_], K, V] extends AnyRef

    TransactionalProducerSettings contain settings necessary to create a TransactionalKafkaProducer. This includes a transactional ID and any other ProducerSettings.

    TransactionalProducerSettings instances are immutable and modification functions return a new TransactionalProducerSettings instance.

    Use TransactionalProducerSettings.apply to create a new instance.

  54. sealed abstract class UnexpectedTopicException extends KafkaException

    UnexpectedTopicException is raised when serialization or deserialization occurred for an unexpected topic which isn't supported by the Serializer or Deserializer.

  55. sealed trait Value extends KeyOrValue
  56. type ValueDeserializer[F[_], A] = GenericDeserializer[Value, F, A]
  57. type ValueSerializer[F[_], A] = GenericSerializer[Value, F, A]

Value Members

  1. val Deserializer: GenericDeserializer.type
  2. val Serializer: GenericSerializer.type
  3. def commitBatchWithin[F[_]](n: Int, d: FiniteDuration)(implicit F: Temporal[F]): Pipe[F, CommittableOffset[F], Unit]

    Commits offsets in batches of every n offsets or time window of length d, whichever happens first. If there are no offsets to commit within a time window, no attempt will be made to commit offsets for that time window.

  4. object Acks
  5. object AdminClientSettings
  6. object AutoOffsetReset
  7. object CommitRecovery
  8. object CommitRecoveryException extends Serializable
  9. object CommittableConsumerRecord
  10. object CommittableOffset
  11. object CommittableOffsetBatch
  12. object CommittableProducerRecords
  13. object ConsumerRecord
  14. object ConsumerSettings
  15. object GenericDeserializer
  16. object GenericSerializer

    Functional composable Kafka key- and record serializer with support for effect types.

  17. object Header
  18. object HeaderDeserializer
  19. object HeaderSerializer
  20. object Headers
  21. object IsolationLevel
  22. object Jitter
  23. object KafkaAdminClient
  24. object KafkaConsumer
  25. object KafkaProducer
  26. object KafkaProducerConnection
  27. object ProducerRecord
  28. object ProducerRecords
  29. object ProducerSettings
  30. object Timestamp
  31. object TransactionalKafkaProducer
  32. object TransactionalProducerRecords
  33. object TransactionalProducerSettings
  34. object instances
