Leveraging your experience in building and maintaining complex data pipelines, you will drive the development of our analytics platform, currently built on AWS using Apache Spark.
We are looking for someone who is eager to:
- Lead and own the technical vision and architecture for the data team
- Hire great software engineers to join the team
- Collaborate with other developers to ship new features
- Be in charge of the overall architecture of applications you build
- Ensure that we have the right tests and structure in place so that we can move quickly without breaking things
- Share your knowledge of development principles and best practices with the team
- Keep learning new technologies and be on the lookout for new ideas that we should try out
What we are looking for
- A Spark expert
- Experience with complex data pipelines in the Cloud
- Experience designing Data Lakes and Data Warehouses
- Quality-oriented mindset: testing, code reviews, code quality, etc.
- Awareness of performance considerations
- A passion for simple, maintainable, and readable code that balances pragmatism and performance
- Experience with AWS
- Experience with Kafka
- Experience with Airflow
How do we build our products?
We process hundreds of millions of requests per day and are building our analytics platform on Kinesis Firehose, AWS S3, EMR jobs, and TimescaleDB to provide performant analytics for our clients. Internally we use Athena and Redash.
For the frontend, we have adopted a micro-frontend architecture for building our user interface. Each team has the freedom to pick its own framework (Angular/React) for the frontend it needs to build. All SPAs we build use our UI library and get plugged into the shell. We also build a multitude of SDKs, ranging from Web and Mobile (iOS, Android) to CTV and Unity.
We rely on a multitude of AWS/GCP services for building, deploying, serving, monitoring, and scaling our services. We use GitLab for our code and CI/CD, and Jira to manage our issues.
Our vision as a team
We are building a product and engineering team that is strongly committed to a high level of quality in our products and code. We believe that automation is the key to consistently achieving that along with velocity of development, joy, and pride in what we deliver.
At Didomi we are organized into feature teams and work in 2-week sprints. We do our best to avoid pointless meetings. The majority of the engineering team works remotely from all over the world; the only hard requirement is a 4-hour overlap with CET working hours.
We rely on automated tests of all sorts (unit, integration, linters, you name it!) and continuous integration/delivery to build flexible applications that can evolve without breaking. We trust that this enables engineers to focus on the quality of their code and iterate quickly without fear of breaking things. And when we do break something, we fix it, write a post-mortem, and learn from our mistakes.
Our interview process
- An intro call with a Tech Lead or the CTO
- A code challenge to build a simple Spark job. This is used as the basis of discussion for the next step. You can find our challenges on our GitHub page (https://github.com/didomi/challenges). We also accept suitable open source projects in place of the challenge.
- A 75-minute code review session and architecture discussion with 3-4 Didomi engineers
- A set of 1:1 30-minute calls with the CTO, engineers, and (occasionally) a product manager
For the architecture discussion, we plan to sketch an architecture (think of different data sources, data lakes, data warehouses, etc.) and discuss options and tradeoffs as we would on a normal day at Didomi.
We understand that you already have a job and obligations (and maybe a personal life!), so we'll work with you to make sure the process doesn't take up too much of your time while still providing a good basis for a very concrete discussion.