Software Engineer II - Analytics Data Engineering
At Klaviyo, we value the unique backgrounds, experiences and perspectives each Klaviyo (we call ourselves Klaviyos) brings to our workplace each and every day. We believe everyone deserves a fair shot at success and appreciate the experiences each person brings beyond the traditional job requirements.
What this role actually needs.
Klaviyo is a publicly traded marketing automation and CDP company powering email, SMS, and AI personalization for ecommerce.
Day-to-day expectations
Klaviyo lists these responsibilities for the Software Engineer II - Analytics Data Engineering role.
- Build Production-Grade Foundations: Develop and maintain scalable data pipelines and core tables using PySpark, Airflow, and dbt. You will implement the foundational datasets that power our AI, ML, and Analytics products.
- Optimize for Enterprise Performance: Tune Spark jobs and storage patterns to ensure low-latency data retrieval. You will help implement materialized views and efficient partitioning strategies to support high-performance reporting at scale.
- Treat Data as a Product: Contribute to the full lifecycle of datasets. This includes defining clear data contracts with upstream teams, writing maintainable code via peer reviews, and ensuring every asset is well-documented and trusted by downstream users.
- Drive Operational Excellence: Ensure the reliability of our data engine by monitoring for freshness, volume anomalies, and schema changes. You will be responsible for ensuring that when a customer loads a dashboard, the data is accurate and on time.
- Partner Cross-Functionally: Collaborate with Product, Engineering, and AI/ML teams to define consistent metrics that align with business goals. You will act as a bridge to ensure new features land with robust data support.
- Innovate with AI: Look for opportunities to put AI at the center of your workflow, whether it is using AI to generate tests, detect data anomalies, or accelerate complex analysis.
What a strong candidate brings
These requirements are extracted from the source listing and normalized for UpJobz readers.
- Experience: 2+ years of experience in data engineering or a data-intensive software engineering role. You’ve moved past the "beginner" phase and are comfortable taking a project from a design doc to a production deployment.
- Fluent in SQL and Python: You have a solid grasp of SQL for high-performance querying and are comfortable using Python for data manipulation and automation. You focus on writing code that balances speed with reliability.
- Distributed Systems Knowledge: You have hands-on experience with Spark (PySpark/SparkSQL) and understand how to tune jobs for performance in a cloud environment (AWS/EMR).
- Modeling Intuition: You understand the difference between a "raw table" and a "semantic layer." You’ve worked with modern modeling tools (like dbt) and understand partitioning, schema evolution, and lakehouse concepts (Iceberg/Delta).
- Performance Minded: You care about latency. You enjoy the challenge of making a query run faster and understand how to use materialized views and caching effectively.
- Collaborative & Curious: You’re an inclusive collaborator who enjoys working with Product and Data Science. You’re excited to experiment with AI tools to make your own engineering workflow more efficient.
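The "Performance Minded" point above hinges on one idea: a materialized view is a precomputed, cached query result, so repeat reads skip the expensive scan. A toy illustration of that trade-off in pure Python (the raw events and metric are invented for the example, not from the listing):

```python
from functools import lru_cache

# Fake "raw table": (customer_id, revenue) event rows.
RAW_EVENTS = [("c1", 10.0), ("c2", 5.0), ("c1", 7.5), ("c3", 2.0)]

@lru_cache(maxsize=None)
def revenue_by_customer() -> tuple[tuple[str, float], ...]:
    """Aggregate the raw events once; later callers hit the cache.

    This mirrors what a materialized view does for a warehouse query:
    pay the scan/aggregation cost once, serve subsequent reads cheaply.
    """
    totals: dict[str, float] = {}
    for customer, amount in RAW_EVENTS:
        totals[customer] = totals.get(customer, 0.0) + amount
    # Return an immutable, deterministic result so it is safely cacheable.
    return tuple(sorted(totals.items()))

first = revenue_by_customer()   # computes the aggregate
second = revenue_by_customer()  # served from the cache, same object
```

The catch, as with real materialized views, is invalidation: the cache must be refreshed when the underlying raw data changes, which is exactly the freshness problem the responsibilities section describes.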
Why this listing is more than a copied job post.
This Software Engineer II - Analytics Data Engineering listing is framed against UpJobz source checks, country scope, compensation visibility, and work-authorization signals so candidates can make a faster go/no-go decision.
United States tech market
United States roles on UpJobz are filtered for high-tech relevance, source freshness, and actionable employer detail before they are allowed into SEO surfaces.
Compensation read
The employer source does not expose a reliable salary range, so candidates should ask for compensation early instead of waiting until late-stage interviews.
Work authorization read
Current extracted signal: Open to TN, H-1B, and OPT candidates already in the United States. UpJobz treats this as a search signal, not legal advice, and links visa-sensitive roles back to the relevant visa hub where possible.
Location read
On-site roles in Boston should be compared against commute, local salary bands, and nearby employer demand.
Turn this listing into an application plan.
This is the first pass at the premium UpJobz layer: a fast brief that helps serious applicants move with more clarity.
Next moves
- Tailor your resume around AI and LLM experience instead of sending a generic application.
- Use the first two bullets of your application to connect your background directly to this high-signal, on-site Software Engineer II - Analytics Data Engineering role in Boston; it is most realistic for TN, H-1B, and OPT candidates already in the United States.
- Open the role quickly if it fits and bookmark three similar jobs before you leave the page.
Watchouts
- Compensation is hidden, so get range clarity in the first recruiter conversation.
- State up front that you are already in the United States on TN, H-1B, or OPT status so the recruiter does not have to infer your work authorization.
- Show concrete examples of succeeding in on-site environments.
Keywords to match against your background
Use these terms to decide whether your resume, portfolio, and recent projects line up with the role.
Apply through the employer source
Open the source listing from klaviyo.com, confirm the role is still active, then apply on the employer or ATS page.
Source: klaviyo.com · Source ID: 7669291003 · Confidence: 88/100 · Last checked: May 7, 2026