Data Engineer (Elastic SME) TS/SCI Required

  • Huntsville, Alabama, United States
  • Full-Time
  • On-Site
  • 160,000-220,000 USD / Year

Job Description:

The Opportunity

We are seeking a highly skilled Data Engineer to serve as a Subject Matter Expert (SME) in designing, implementing, and maintaining large-scale log ingestion architectures. This role focuses on building robust ingestion pipelines from multiple heterogeneous data sources and supporting high-availability production environments on air-gapped and restricted networks. You will be primarily responsible for ensuring data ingested into Elastic Security is identified, categorized, processed, and transformed in a reliable, scalable, and secure manner.

This position centers on integrating the Elastic Stack within a Managed Security Services (MSS) framework. You will be responsible for ensuring that security data is efficiently ingested and enriched to support real-time threat detection and analysis.

Core Responsibilities

  • Pipeline Architecture: Design and manage multi-pipeline Logstash architectures, including pipeline-to-pipeline routing and output isolator patterns.
  • Data Normalization: Normalize incoming data into Elastic Common Schema (ECS) compliant formats.
  • Performance Tuning: Tune Logstash JVM performance and troubleshoot ingestion bottlenecks to ensure mission-critical uptime.
  • Strategic Engineering: Work directly with security analysts and customers to prioritize high-value data, efficiently archiving less valuable data and eliminating zero-value noise.
  • Secure Data Flow: Apply data processing routines at the most efficient point as data flows through the pipelines, and use dedicated collection devices or DMZs so that protected networks are never directly exposed.
  • System Maintenance: Maintain the technical baseline of Logstash nodes deployed as VMs and Kubernetes Pods.
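To illustrate the pipeline-to-pipeline routing and output isolator patterns mentioned above, a minimal Logstash sketch might look like the following. This is an illustration only, not a production baseline; all pipeline IDs, file paths, and hosts are hypothetical:

```yaml
# pipelines.yml — one intake pipeline fans out to two isolated output
# pipelines, so a slow or unreachable destination blocks only its own queue.
- pipeline.id: intake
  path.config: "/etc/logstash/conf.d/intake.conf"
- pipeline.id: out-es
  path.config: "/etc/logstash/conf.d/out-es.conf"
  queue.type: persisted          # buffer each output independently
- pipeline.id: out-archive
  path.config: "/etc/logstash/conf.d/out-archive.conf"
  queue.type: persisted
```

```
# intake.conf — route every event to both downstream pipelines
output {
  pipeline { send_to => ["out-es", "out-archive"] }
}

# out-es.conf — deliver to Elasticsearch (host is a placeholder)
input  { pipeline { address => "out-es" } }
output { elasticsearch { hosts => ["https://es01.internal:9200"] } }
```

With a persisted queue on each output pipeline, backpressure from the archive target does not stall delivery to Elasticsearch, which is the point of the isolator pattern.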

Technical Requirements

  • Elastic Stack Expertise: Deep experience with Elasticsearch, Logstash, Kibana, and Elastic Agent/Fleet.
  • Parsing & Transformation: Expert proficiency with Grok, Dissect, KV, JSON decoding, and Translate filters.
  • Environment Experience: Proven ability to support air-gapped artifact and package repositories and implement ingestion resiliency/failover strategies.
  • Data Sources: Experience ingesting logs from endpoints, network devices, cloud-native resources, Linux Audit, and Windows Event Logs.
  • Team Leadership: Ability to mentor team members by providing specialized data engineering training.
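As a concrete illustration of the parsing and ECS-normalization skills listed above, a filter chain for a hypothetical key=value firewall syslog line might look like this (the log format, vendor keys, and renames are assumptions, not a customer specification):

```
filter {
  # Split the syslog envelope; targets use ECS-style nested fields.
  grok {
    match => {
      "message" => "%{SYSLOGTIMESTAMP:[tmp][ts]} %{HOSTNAME:[host][hostname]} %{DATA:[process][name]}: %{GREEDYDATA:[tmp][kv]}"
    }
  }

  # Parse "src=10.0.0.1 dst=10.0.0.2 action=allow" style pairs.
  kv { source => "[tmp][kv]" target => "[tmp][fields]" }

  # Map vendor keys onto ECS field names.
  mutate {
    rename => {
      "[tmp][fields][src]"    => "[source][ip]"
      "[tmp][fields][dst]"    => "[destination][ip]"
      "[tmp][fields][action]" => "[event][action]"
    }
  }

  # Promote the parsed timestamp to @timestamp, then drop scratch fields.
  date   { match => ["[tmp][ts]", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss"] }
  mutate { remove_field => ["tmp"] }
}
```

A Dissect or Translate filter could replace the grok and mutate steps where higher throughput or lookup-based enrichment is needed, respectively.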

Additional Information

  • Work Environment: This position requires being on-site 4–5 days per week.
  • Clearance Growth: While a Secret clearance is required for certain tasks, there may be opportunities for clearance upgrades to the TS/SCI level based on mission requirements.
  • Benefits: Very strong 401(k), family medical benefits, a multi-thousand-dollar training budget, and more.
  • Interstate relocation package