Track 3: Data Management (SPL2 Pipelines)
Tech Stack / Tools
Splunk Show instances are provided for testing pipelines against the Data Management processors.
Submissions must be SPL2 pipelines that run on the Data Management Ingest Processor.
SPL2 Pipeline Requirements
The submission must be a fully functional SPL2 pipeline that effectively processes, transforms, or optimizes the provided data sources.
The pipeline should provide clear value to Splunk users, addressing challenges such as data ingestion efficiency, transformation accuracy, storage optimization, or query performance.
It should be designed for scalability and reusability, allowing users to easily adapt it to similar data sources or use cases.
The pipeline must be syntactically correct and executable within Splunk's Ingest Processor or Search Processing workflows.
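To make the requirements above concrete, here is a minimal sketch of a syntactically valid Ingest Processor pipeline. The sourcetype and field name are hypothetical placeholders; $source and $destination are the standard pipeline parameters that get bound when the pipeline is applied to a data source and destination.

```spl2
/* Minimal sketch of an Ingest Processor pipeline.
   The sourcetype "syslog" and the field "severity" are hypothetical examples. */
$pipeline = | from $source
    // Keep only the events this pipeline is meant to process
    | where sourcetype == "syslog"
    // Normalize a hypothetical field so downstream searches are consistent
    | eval severity = lower(severity)
    // Send the transformed events to the configured destination
    | into $destination;
```

A pipeline of this shape can be validated with the Ingest Processor's preview feature against sample data before submission.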
Pipeline Submission
SPL2 pipelines should be submitted through the Splunk Ideas portal (see the attached step-by-step process for submitting through the portal).
The SPL2 pipelines must include clear, inline comments explaining the purpose of each function, transformation step, or key logic.
Participants should reference existing production SPL2 pipeline templates or follow established best practices for efficiency, readability, and maintainability.
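For illustration only, a pipeline commented to the standard described above might look like the sketch below. The sourcetype, field names, and masking pattern are all hypothetical, and the regex-string form of replace() is an assumption to verify against the SPL2 eval function reference.

```spl2
/* Purpose: reduce storage footprint and mask sensitive values before indexing.
   All field names and patterns below are hypothetical examples. */
$pipeline = | from $source
    // Filtering step: drop low-value debug events to cut license and storage use
    | where log_level != "DEBUG"
    // Masking step: redact digit runs that could be card numbers in the raw event
    | eval _raw = replace(_raw, "\\d{13,16}", "****")
    // Route the cleaned, masked events to the configured destination
    | into $destination;
```

Each transformation carries a one-line comment stating its purpose, which is the level of inline documentation judges will expect in submissions.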
Documentation
A short document (1 page) explaining the issue the SPL2 pipeline addresses and its impact on data management, processing, or analysis.
A description of the expected outcome, including:
How the processed data improves usability, efficiency, or data optimization.
Example outputs, such as transformed logs, metrics, optimized queries, or reduced storage footprint.
Brief instructions on how to apply the SPL2 pipeline in the Ingest Processor, including any required configurations.