Senior Data Engineer

Campaign Monitor

Data Science
Multiple locations
Posted on Friday, May 24, 2024

The Company

Marigold is the largest sender of personalized email on the planet. But we’re so much more than an email provider or cross-channel marketing hub. We’re committed to creating true partnerships with our clients, not just being another vendor. Working with some of the biggest names in ecommerce and publishing, we help deliver personalized email, mobile messaging, and onsite experiences to billions of consumers every year.

The Role

Marigold Engage by Sailthru is putting together a team to support and operate our data engineering platform, including our data warehouse, pipelines, and machine learning systems.

This job requires a high level of technical competency and a desire to own and evolve the data platform that our product relies upon. If you’re passionate about building cutting-edge data solutions, we want you!

Responsibilities

  • Driving the technical direction of our data platform’s architecture while modernizing legacy components.

  • Ensuring reliable and cost-effective operation of our data pipeline and warehouses. This makes up a critical component of our product and is a key production platform for us.

  • Helping our customer success and support teams with escalations, and working with the team to diagnose and fix rare and interesting problems.

  • Being part of our regular on-call rotation with the other team members (a team of approximately four).

  • Driving our data governance and security practices.

Requirements

  • This isn’t your first swim in the data lake. You have experience with technologies such as Databricks, Airflow, Spark, Snowflake, and AWS, and you can hit the ground running to help us grow and develop our architecture.

  • 5+ years of experience in data engineering or another relevant technical field.

  • Experience with Databricks and AWS is strongly desired.

  • Experience with Unity Catalog, Snowflake, and Spark is helpful.

  • Comfort writing and reviewing code in Python and Java.

  • An understanding of applications that contribute to and consume from the data lake, including event-driven architectures, Kafka, and a conventional SaaS stack.

  • Enough AWS knowledge to understand S3, IAM, and compute workloads, and to keep costs under control.

  • An interest in moving further into the data science side of the field.

  • (Nice to have) Experience with machine learning, including training models.