Components Catalog

Use the following table to search for available inputs, outputs, and processors. Components are grouped by type: processor, input, output, scanner, metric, cache, tracer, rate limit, and buffer.

a2a_message: Transform and process message data. (Processor; Certified)
amqp_0_9: Connects to an AMQP (0.9.1) queue. AMQP is a messaging protocol used by various message brokers, including RabbitMQ. (Input, Output; Certified)
archive: Archives all the messages of a batch into a single message according to the selected archive format. (Processor; Certified)
avro: Performs Avro-based operations on messages based on a schema. (Processor, Scanner; Community)
aws_bedrock_chat: Generates responses to messages in a chat conversation, using the AWS Bedrock API. (Processor; Certified)
aws_bedrock_embeddings: Generates vector embeddings from text prompts, using the AWS Bedrock API. (Processor; Certified)
aws_dynamodb: Stores key/value pairs as a single document in a DynamoDB table. (Cache, Output; Community)
aws_dynamodb_cdc: Streams change data capture (CDC) events from a DynamoDB table. (Input; Certified)
aws_dynamodb_partiql: Executes a PartiQL expression against a DynamoDB table for each message. (Processor; Certified)
aws_kinesis: Receives messages from one or more Kinesis streams. (Input, Output; Certified)
aws_kinesis_firehose: Sends messages to a Kinesis Firehose delivery stream. (Output; Certified)
aws_lambda: Invokes an AWS Lambda function for each message. (Processor; Certified)
aws_s3: Stores each item in an S3 bucket as a file, where an item ID is the path of the item within the bucket. (Cache, Input, Output; Certified)
aws_sns: Sends messages to an AWS SNS topic. (Output; Community)
aws_sqs: Consumes messages from an AWS SQS URL. (Input, Output; Certified)
azure_blob_storage: Downloads objects within an Azure Blob Storage container, optionally filtered by a prefix. (Input, Output; Certified)
azure_cosmosdb: Executes a SQL query against Azure Cosmos DB and creates a batch of messages from each page of items. (Input, Output, Processor; Certified)
azure_data_lake_gen2: Sends message parts as files to an Azure Data Lake Gen2 file system. Each file is uploaded with the file name specified in the path field. (Output; Certified)
azure_queue_storage: Dequeues objects from an Azure Storage Queue. (Input, Output; Certified)
azure_table_storage: Queries an Azure Storage Account table, optionally with multiple filters. (Input, Output; Certified)
batched: Consumes data from a child input and applies a batching policy to the stream. (Input; Certified)
benchmark: Logs throughput statistics for processed messages, and provides a summary of those statistics over the lifetime of the processor. (Processor; Certified)
bloblang: Executes a Bloblang mapping on messages. (Processor; Certified)
bounds_check: Removes messages (and batches) that do not fit within certain size boundaries. (Processor; Certified)
branch: Allows you to create a new request message via a Bloblang mapping, execute a list of processors on the request messages, and, finally, map the result back into the source message using another mapping. (Processor; Certified)
broker: Allows you to combine multiple inputs into a single stream of data, where each input is read in parallel. (Input, Output; Certified)
cache: Stores each message in a cache. (Output, Processor; Certified)
cached: Caches the result of applying one or more processors to messages identified by a key. (Processor; Certified)
catch: Applies a list of child processors only when a previous processing step has failed. (Processor; Certified)
chunker: Splits an input stream into chunks of a given number of bytes. (Scanner; Certified)
cohere_chat: Generates responses to messages in a chat conversation, using the Cohere API and external tools. (Processor; Certified)
cohere_embeddings: Generates vector embeddings to represent input text, using the Cohere API. (Processor; Certified)
cohere_rerank: Sends document strings to the Cohere API, which returns them ranked by their relevance to a specified query. (Processor; Certified)
compress: Compresses messages according to the selected algorithm. Supported compression algorithms are: flate, gzip, lz4, pgzip, snappy, zlib. (Processor; Certified)
csv: Reads one or more CSV files as structured records following the format described in RFC 4180. (Input, Scanner; Certified)
cyborgdb: Inserts items into a CyborgDB encrypted vector index. (Output; Community)
decompress: Decompresses messages according to the selected algorithm.
Supported decompression algorithms are: bzip2, flate, gzip, lz4, pgzip, snappy, zlib. (Processor, Scanner; Certified)
dedupe: Deduplicates messages by storing a key value in a cache using the add operator. If the key already exists within the cache, the message is dropped. (Processor; Certified)
drop: Drops all messages. (Output; Certified)
drop_on: Attempts to write messages to a child output, and if the write fails for one of a list of configurable reasons the message is dropped (acked) instead of being reattempted (or nacked). (Output; Certified)
elasticsearch_v8: Publishes messages into an Elasticsearch index. If the index does not exist, this output creates it using dynamic mapping. (Output; Certified)
fallback: Attempts to send each message to a child output, starting from the first output on the list. (Output; Certified)
for_each: A processor that applies a list of child processors to messages of a batch as though they were each a batch of one message. (Processor; Certified)
gateway: Receives data from external sources. (Input; Certified)
gcp_bigquery: Inserts message data as new rows in a Google Cloud BigQuery table. (Output; Certified)
gcp_bigquery_select: Executes a SELECT query against BigQuery and creates a message for each row received. (Input, Processor; Certified)
gcp_cloud_storage: Uses a Google Cloud Storage bucket as a cache. (Cache, Input, Output; Certified)
gcp_cloudtrace: Sends tracing events to Google Cloud Trace. (Tracer; Certified)
gcp_pubsub: Consumes messages from a GCP Cloud Pub/Sub subscription. (Input, Output; Certified)
gcp_spanner_cdc: Creates an input that consumes from a Spanner change stream. (Input; Certified)
gcp_vertex_ai_chat: Generates responses to messages in a chat conversation, using the Vertex AI API. (Processor; Certified)
gcp_vertex_ai_embeddings: Generates vector embeddings to represent a text string, using the Vertex AI API. (Processor; Certified)
generate: Generates messages at a given interval using a Bloblang mapping executed without a context. (Input; Certified)
git: Clones a Git repository, reads its contents, then polls for new commits at a configurable interval. Any updates are emitted as new messages. (Input; Certified)
google_drive_download: Downloads files from Google Drive that contain matching file IDs. (Processor; Certified)
google_drive_list_labels: Lists labels for files on a Google Drive. (Processor; Certified)
google_drive_search: Searches Google Drive for files that match a specified query and emits the results as a batch of messages. (Processor; Certified)
group_by: Splits a batch of messages into N batches, where each resulting batch contains a group of messages determined by a Bloblang query. (Processor; Certified)
group_by_value: Splits a batch of messages into N batches, where each resulting batch contains a group of messages determined by a function interpolated string evaluated per message. (Processor; Certified)
http: Performs an HTTP request using a message batch as the request body, and replaces the original message parts with the body of the response. (Processor; Certified)
http_client: Connects to a server and continuously requests single messages. (Input, Output; Certified)
http_server: Receives messages sent over HTTP using POST requests. HTTP 2.0 is supported when using TLS, which is enabled when key and cert files are specified. (Input, Output; Certified)
inproc: Directly connects to an output within a Redpanda Connect process by referencing it by a chosen ID. (Input, Output; Certified)
insert_part: Inserts a new message into a batch at an index. If the specified index is greater than the length of the existing batch, the message is appended to the end. (Processor; Certified)
jira: Queries Jira resources and returns structured data. (Processor; Certified)
jmespath: Executes a JMESPath query on JSON documents and replaces the message with the resulting document. (Processor; Certified)
jq: Transforms and filters messages using jq queries. (Processor; Certified)
json_array: Consumes a stream of one or more JSON elements within a top-level array. (Scanner; Community)
json_documents: Consumes a stream of one or more JSON documents. (Scanner; Certified)
json_schema: Checks messages against a provided JSON Schema definition but does not change the payload under any circumstances. (Processor; Certified)
kafka: Connects to Kafka brokers and consumes one or more topics. (Input, Output; Certified)
kafka_franz: A Kafka input using the Franz Kafka client library. (Input, Output; Certified)
legacy_redpanda_migrator: Use this connector in conjunction with the legacy_redpanda_migrator output to migrate topics between Apache Kafka brokers. (Input, Output; Certified)
legacy_redpanda_migrator_offsets: Reads consumer group offsets for a specified set of topics using the Franz Kafka client library. (Input, Output; Certified)
lines: Splits an input stream into a message per line of data.
(Scanner; Certified)
local: A simple X-every-Y rate limit that can be shared across any number of components within the pipeline, but does not support distributed rate limits across multiple running instances of Redpanda Connect. (Rate limit; Certified)
log: Prints a log event for each message. (Processor; Certified)
lru: Stores key/value pairs in an LRU in-memory cache. This cache is therefore reset every time the service restarts. (Cache; Community)
mapping: Executes a Bloblang mapping on messages, creating a new document that replaces (or filters) the original message. (Processor; Certified)
memcached: Connects to a cluster of memcached services. A prefix can be specified to allow multiple cache types to share a memcached cluster under different namespaces. (Cache; Community)
memory: Stores consumed messages in memory and acknowledges them at the input level. (Buffer, Cache; Certified)
metric: Emits custom metrics by extracting values from messages. (Processor; Certified)
microsoft_sql_server_cdc: Enables change data capture by consuming from Microsoft SQL Server's change tables. (Input; Certified)
mongodb: Uses a MongoDB instance as a cache. (Cache, Input, Output, Processor; Certified)
mongodb_cdc: Streams data changes from a MongoDB replica set, using MongoDB's change streams to capture data updates. (Input; Certified)
mqtt: Subscribes to topics on MQTT brokers. (Input, Output; Certified)
multilevel: Combines multiple caches as levels, performing read-through and write-through operations across them. (Cache; Certified)
mutation: Executes a Bloblang mapping and directly transforms the contents of messages, mutating (or deleting) them. (Processor; Certified)
mysql_cdc: Streams data changes from a MySQL database, using MySQL's binary log to capture data updates. (Input; Certified)
nats: Subscribes to a NATS subject. (Input, Output; Certified)
nats_jetstream: Reads messages from NATS JetStream subjects. (Input, Output; Certified)
nats_kv: Caches key/value pairs in a NATS key-value bucket. (Cache, Input, Output, Processor; Certified)
nats_request_reply: Sends a message to a NATS subject and expects a reply back from a NATS subscriber acting as a responder. (Processor; Certified)
none: Does not buffer messages. This is the default and most resilient configuration. (Buffer, Metric, Tracer; Certified)
noop: A cache that stores nothing; all gets return not found. Why? Sometimes doing nothing is the braver option. (Cache, Processor; Certified)
openai_chat_completion: Generates responses to messages in a chat conversation, using the OpenAI API and external tools. (Processor; Certified)
openai_embeddings: Generates vector embeddings to represent input text, using the OpenAI API. (Processor; Certified)
openai_image_generation: Generates an image from a text description and other attributes, using the OpenAI API. (Processor; Certified)
openai_speech: Generates audio from a text description and other attributes, using the OpenAI API. (Processor; Certified)
openai_transcription: Generates a transcription of spoken audio in the input language, using the OpenAI API. (Processor; Certified)
openai_translation: Translates spoken audio into English, using the OpenAI API. (Processor; Certified)
opensearch: Publishes messages into an Elasticsearch index. If the index does not exist, it is created with a dynamic mapping. (Output; Certified)
otlp_grpc: Receives OpenTelemetry traces, logs, and metrics via the OTLP/gRPC protocol. (Input, Output; Certified)
otlp_http: Receives OpenTelemetry traces, logs, and metrics via the OTLP/HTTP protocol. (Input, Output; Certified)
parallel: A processor that applies a list of child processors to messages of a batch as though they were each a batch of one message (similar to the for_each processor), but where each message is processed in parallel. (Processor; Certified)
parquet_decode: Decodes Parquet files into a batch of structured messages. (Processor; Certified)
parquet_encode: Encodes Parquet files from a batch of structured messages. (Processor; Certified)
parse_log: Parses common log formats into structured data. (Processor; Community)
pg_stream: Receives data from external sources. (Input; Certified)
pinecone: Inserts items into a Pinecone index. (Output; Certified)
postgres_cdc: Streams data changes from a PostgreSQL database using logical replication. (Input; Certified)
processors: A processor grouping several sub-processors. (Processor; Certified)
prometheus: Hosts endpoints (/metrics and /stats) for Prometheus scraping. (Metric; Certified)
qdrant: Adds items to a Qdrant collection. (Output, Processor; Certified)
questdb: Pushes messages to a QuestDB table. (Output; Certified)
rate_limit: Throttles the throughput of a pipeline according to a specified rate_limit resource. (Processor; Certified)
re_match: Splits an input stream into segments matching against a regular expression. (Scanner; Certified)
read_until: Reads messages from a child input until a consumed message passes a Bloblang query, at which point the input closes. (Input; Certified)
redis: Uses a Redis instance as a cache. The expiration can be set to zero or an empty string in order to set no expiration. (Cache, Processor, Rate limit; Certified)
redis_hash: Sets Redis hash objects using the HMSET command.
(Output; Certified)
redis_list: Pops messages from the beginning of a Redis list using the BLPop command. (Input, Output; Certified)
redis_pubsub: Consumes from a Redis publish/subscribe channel using either the SUBSCRIBE or PSUBSCRIBE commands. (Input, Output; Certified)
redis_scan: Scans the set of keys in the currently selected database and gets their values, using the Scan and Get commands. (Input; Certified)
redis_script: Performs actions against Redis using Lua scripts. (Processor; Certified)
redis_streams: Pulls messages from Redis (v5.0+) streams with the XREADGROUP command. The client_id should be unique for each consumer of a group. (Input, Output; Certified)
redpanda: A Kafka cache using the Franz Kafka client library (https://github.com/twmb/franz-go). (Cache, Input, Output, Tracer; Certified)
redpanda_common: Consumes data from a Redpanda (Kafka) broker, using credentials from a common redpanda configuration block. (Input, Output; Certified)
redpanda_migrator: Unified Kafka consumer for migrating data between Kafka/Redpanda clusters. (Input, Output; Certified)
redpanda_migrator_bundle: The redpanda_migrator_bundle input reads messages and schemas from an Apache Kafka or Redpanda cluster. (Input, Output; Certified)
reject: Rejects all messages, treating them as though the output destination failed to publish them. (Output; Certified)
reject_errored: Rejects messages that have failed their processing steps, resulting in nack behavior at the input level; otherwise sends them to a child output. (Output; Certified)
resource: An input type that channels messages from a resource input, identified by its name. (Input, Output, Processor; Certified)
retry: Attempts to write messages to a child output, and if the write fails for any reason the message is retried either until success or, if the retries or max elapsed time fields are non-zero, until either is reached. (Output, Processor; Certified)
ristretto: Stores key/value pairs in a map held in the memory-bound Ristretto cache. (Cache; Community)
schema_registry: Reads schemas from a schema registry. (Input, Output; Certified)
schema_registry_decode: Automatically decodes and validates messages with schemas from a Confluent Schema Registry service. (Processor; Certified)
schema_registry_encode: Automatically encodes and validates messages with schemas from a Confluent Schema Registry service. (Processor; Certified)
select_parts: Cherry-picks a set of messages from a batch by their index. Indexes larger than the number of messages are simply ignored. (Processor; Certified)
sequence: Reads messages from a sequence of child inputs, starting with the first; once that input gracefully terminates, consumption starts from the next, and so on. (Input; Certified)
sftp: Consumes files from an SFTP server. (Input, Output; Certified)
skip_bom: Skips one or more byte order marks for each opened child scanner. (Scanner; Certified)
slack: Connects to Slack using Socket Mode, and can receive events, interactions (automated and user-initiated), and slash commands. (Input; Certified)
slack_post: Posts a new message to a Slack channel using the Slack API method chat.postMessage. (Output; Certified)
slack_reaction: Adds or removes an emoji reaction on a Slack message. (Output; Certified)
slack_thread: Reads a Slack thread using the Slack API method conversations.replies. (Processor; Certified)
slack_users: Returns the full profile of all users in your Slack organization using the API method users. (Input; Certified)
sleep: Sleeps for a period of time specified as a duration string for each message. (Processor; Certified)
snowflake_put: Sends messages to Snowflake stages and, optionally, calls Snowpipe to load this data into one or more tables. (Output; Certified)
snowflake_streaming: Allows Snowflake to ingest data from your data pipeline using Snowpipe Streaming. (Output; Certified)
spicedb_watch: Consumes messages from the Watch API of a SpiceDB instance. (Input; Community)
split: Breaks message batches (synonymous with multiple part messages) into smaller batches. (Processor; Certified)
splunk: Consumes messages from Splunk. (Input; Certified)
splunk_hec: Publishes messages to a Splunk HTTP Event Collector (HEC). (Output; Certified)
sql: Uses an SQL database table as a destination for storing cache key/value items.
(Cache, Output, Processor; Certified or Community, depending on the database driver)
sql_driver_clickhouse: Executes SQL queries across multiple database engines. (SQL driver; Certified: MySQL, Oracle, PostgreSQL, SQLite; Community: ClickHouse, Azure Cosmos DB, Microsoft SQL Server, Snowflake, Trino)
sql_driver_mysql: Executes SQL queries across multiple database engines. (SQL driver; Certified: MySQL, Oracle, PostgreSQL, SQLite; Community: ClickHouse, Azure Cosmos DB, Microsoft SQL Server, Snowflake, Trino)
sql_driver_oracle: Executes SQL queries across multiple database engines. (SQL driver; Certified: MySQL, Oracle, PostgreSQL, SQLite; Community: ClickHouse, Azure Cosmos DB, Microsoft SQL Server, Snowflake, Trino)
sql_driver_postgres: Executes SQL queries across multiple database engines. (SQL driver; Certified: MySQL, Oracle, PostgreSQL, SQLite; Community: ClickHouse, Azure Cosmos DB, Microsoft SQL Server, Snowflake, Trino)
sql_driver_sqlite: Executes SQL queries across multiple database engines. (SQL driver; Certified: MySQL, Oracle, PostgreSQL, SQLite; Community: ClickHouse, Azure Cosmos DB, Microsoft SQL Server, Snowflake, Trino)
sql_insert: Inserts a row into an SQL database for each message. (Output, Processor; Certified: MySQL, Oracle, PostgreSQL, SQLite; Community: ClickHouse, Azure Cosmos DB, Microsoft SQL Server, Snowflake, Trino)
sql_raw: Executes a select query and creates a message for each row received. (Input, Output, Processor; Certified: MySQL, Oracle, PostgreSQL, SQLite; Community: ClickHouse, Azure Cosmos DB, Microsoft SQL Server, Snowflake, Trino)
sql_select: Executes a select query and creates a message for each row received. (Input, Processor; Certified: MySQL, Oracle, PostgreSQL, SQLite; Community: ClickHouse, Azure Cosmos DB, Microsoft SQL Server, Snowflake, Trino)
switch: Allows you to route messages to different outputs based on their contents. (Output, Processor, Scanner; Certified)
sync_response: Returns the final message payload back to the input origin of the message, where it is dealt with according to that specific input type. (Output, Processor; Certified)
system_window: Chops a stream of messages into tumbling or sliding windows of fixed temporal size, following the system clock. (Buffer; Certified)
tar: Consumes a tar archive file by file. (Scanner; Certified)
text_chunker: Breaks down text-based message content into manageable chunks using a configurable strategy. (Processor; Certified)
timeplus: Executes a streaming or table query on Timeplus Enterprise (Cloud or Self-Hosted) or the timeplusd component, and creates a structured message for each table row received. (Input, Output; Community)
to_the_end: Reads the input stream all the way until the end and delivers it as a single message. (Scanner; Certified)
try: Executes a list of child processors on messages only if no prior processors have failed (or the errors have been cleared). (Processor; Certified)
ttlru: Stores key/value pairs in a TTLRU in-memory cache. This cache is therefore reset every time the service restarts. (Cache; Community)
unarchive: Unarchives messages according to the selected archive format into multiple messages within a batch. (Processor; Certified)
while: A processor that checks a Bloblang query against each batch of messages and executes child processors on them for as long as the query resolves to true.
(Processor; Certified)
workflow: Executes a topology of branch processors, performing them in parallel where possible. (Processor; Certified)
xml: Parses messages as an XML document, performs a mutation on the data, and then overwrites the previous contents with the new value. (Processor; Community)

About Components

Every Redpanda Connect pipeline has at least one input, an optional buffer, an output, and any number of processors:

```yaml
input:
  kafka:
    addresses: [ TODO ]
    topics: [ foo, bar ]
    consumer_group: foogroup

buffer:
  type: none

pipeline:
  processors:
    - mapping: |
        message = this
        meta.link_count = links.length()

output:
  aws_s3:
    bucket: TODO
    path: '${! meta("kafka_topic") }/${! json("message.id") }.json'
```

These are the main components within Redpanda Connect and they provide the majority of useful behavior.

Observability components

There are also the observability components: logger, metrics, and tracing, which allow you to specify how Redpanda Connect exposes observability data:

```yaml
http:
  address: 0.0.0.0:4195
  enabled: true
  debug_endpoints: false

logger:
  format: json
  level: WARN

metrics:
  statsd:
    address: localhost:8125
    flush_period: 100ms

tracer:
  jaeger:
    agent_address: localhost:6831
```

Resource components

Finally, there are caches and rate limits. These are components that are referenced by core components and can be shared:

```yaml
input:
  http_client: # This is an input
    url: TODO
    rate_limit: foo_ratelimit # This is a reference to a rate limit

pipeline:
  processors:
    - cache: # This is a processor
        resource: baz_cache # This is a reference to a cache
        operator: add
        key: '${! json("id") }'
        value: "x"
    - mapping: root = if errored() { deleted() }

rate_limit_resources:
  - label: foo_ratelimit
    local:
      count: 500
      interval: 1s

cache_resources:
  - label: baz_cache
    memcached:
      addresses: [ localhost:11211 ]
```

It's also possible to configure inputs, outputs, and processors as resources, which allows them to be reused throughout a configuration with the resource input, resource output, and resource processor respectively.

For more information about any of these component types, see their sections: inputs, processors, outputs, buffers, metrics, tracers, logger, caches, and rate limits.
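As a minimal sketch of that resource pattern, a processor can be defined once under a label and reused by name with the resource processor. The label and mapping below are illustrative, not taken from this page; the top-level fields (processor_resources, and likewise input_resources and output_resources) follow the same convention as the cache_resources and rate_limit_resources blocks shown above:

```yaml
# Define a reusable processor under a label of your choosing.
processor_resources:
  - label: scrub_internal # hypothetical label
    mapping: |
      root = this
      root.internal = deleted() # drop a field before delivery

pipeline:
  processors:
    # Reference the shared processor by its label.
    - resource: scrub_internal
```

Defining shared processors this way keeps a single definition in one place, so every pipeline that references the label picks up changes without duplicating the mapping.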