What is KPOps?

With a couple of easy commands in the shell and a pipeline.yaml of under 30 lines, KPOps can not only deploy a Kafka pipeline¹ to a Kubernetes cluster, but also reset, clean, or destroy it!
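
This lifecycle maps onto the KPOps CLI roughly as follows; a minimal sketch, assuming a pipeline.yaml in the working directory (exact arguments and flags can differ between KPOps versions):

# Preview the changes KPOps would make, without touching the cluster
kpops deploy pipeline.yaml --dry-run

# Deploy the pipeline: apps, connectors, topics, and schemas
kpops deploy pipeline.yaml

# Tear down again: remove the deployments, reset state, or wipe everything
kpops destroy pipeline.yaml
kpops reset pipeline.yaml
kpops clean pipeline.yaml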

Key features

  • Deploy Kafka apps to Kubernetes: KPOps lets you deploy consecutive Kafka Streams applications and producers using an easy-to-read and -write pipeline definition.
  • Manage Kafka Connectors: KPOps connects with your Kafka Connect cluster and deploys, validates, and deletes your connectors.
  • Configure multiple pipelines and steps: KPOps provides abstractions that simplify configuring multiple pipelines and steps by sharing common configuration between components, such as producers or streaming applications (see the defaults sketch after this list).
  • Handle your topics and schemas: KPOps not only creates and deletes your topics but also registers and deletes your schemas.
  • Clean termination of Kafka components: KPOps removes your pipeline components (e.g., Kafka Streams applications) from the Kubernetes cluster and cleans up the component-related state (e.g., deleting or resetting the offsets of Kafka consumer groups).
  • Preview your pipeline changes: With the KPOps dry run, you can verify that your pipeline definition is set up correctly. This helps minimize downtime and prevent errors or issues that could impact your production environment.
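
The shared configuration mentioned above lives in a defaults file: settings defined per component type are inherited by every matching step, so each pipeline.yaml stays short. A minimal sketch of such a defaults.yaml, assuming it is keyed by component type (the concrete keys and substitution variables here are illustrative):

# defaults.yaml (sketch): every streams-app in every pipeline inherits
# these settings unless a step overrides them
streams-app:
  app:
    streams:
      brokers: ${brokers}                        # illustrative variable
      schemaRegistryUrl: ${schema_registry_url}  # illustrative variable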

Example

An overview of the word-count pipeline shown in Streams Explorer:

Word-count pipeline.yaml
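# Produces demo sentences that feed the word-count app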
- type: producer-app
  name: data-producer
  app:
    image: bakdata/kpops-demo-sentence-producer

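# Kafka Streams app that counts words and writes the counts
# to the compacted output topic configured below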
- type: streams-app
  name: word-counter
  to:
    topics:
      ${output_topic_name}:
        type: output
        configs:
          cleanup.policy: compact
  app:
    image: bakdata/kpops-demo-word-count-app
    replicaCount: 1

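# Kafka Connect sink that writes the word counts to Redis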
- type: kafka-sink-connector
  name: redis-sink-connector
  app:
    connector.class: com.github.jcustenborder.kafka.connect.redis.RedisSinkConnector
    redis.hosts: redis-headless:6379
    redis.database: 0
    tasks.max: 1
    key.converter: org.apache.kafka.connect.storage.StringConverter
    value.converter: org.apache.kafka.connect.storage.StringConverter
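
The ${output_topic_name} placeholder above is not a literal topic name; KPOps resolves it from its topic name configuration. A hedged sketch of what the corresponding config.yaml section can look like (the exact variable names depend on the KPOps version in use):

# config.yaml (sketch): derive topic names from pipeline metadata;
# the variable names here are assumptions, not verified for your version
topic_name_config:
  default_output_topic_name: ${pipeline_name}-${component_type}
  default_error_topic_name: ${pipeline_name}-${component_type}-error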

  1. A Kafka pipeline can consist of consecutive streaming applications, producers, and connectors.