Senior Data Engineer (Tumblr Core Data Engineering) Worldwide
We are the company behind WordPress.com, Jetpack, WooCommerce, and Tumblr! We are looking for a Senior Data Engineer to join our team of data scientists and engineers to build, deploy, and iterate on large-scale data pipelines and applications.
How we work
We’re kind to each other and our users – we strive to build a positive, supportive, and inclusive culture of cohesive teams focused on delivering value to our customers.
We work as a global and distributed workforce resulting in a unique way of working built around our creed.
We offer flexible work arrangements allowing our team members to work when they feel best.
We welcome collaboration, and you can be involved in any discussion across our many communication channels.
Enough about us, let’s talk more about you. The Senior Data Engineer position might be a good fit if you:
- Are familiar with the basics of Docker and have proficiency in Bash and a JVM language, Go, or Python. We don’t expect you to be an expert in all of the technologies we use, but you should be willing to learn.
- Are comfortable with site reliability work and an on-call rotation; Tumblr runs its own servers, so this position includes both.
- Care about architecture, unit testing, and building reliable infrastructure and pipelines.
- Have experience managing, monitoring, and tuning services in production.
- Are motivated to propose technical solutions, own software architecture, evaluate technologies and infrastructure, develop rollout plans, and bring your solutions into production with a focus on service availability, reliability, and performance.
Some other programming languages you will encounter include:
Some other technologies that we use include:
- Apache Flink
You are open and able to travel 3-4 weeks per year to meet your teammates in person. We hold an all-company meeting every year and meet up with our teams for a week once or twice per year. Important note: at the moment all company travel has been suspended due to COVID-19. Automattic is monitoring government and health agency reports closely and responding however possible to prioritize safety and well-being for our team and communities.
A day in the life of a data engineer at Tumblr
These are some examples of the projects and tasks that data engineers work on, spanning advising engineering teams, system reliability, and project work.
Advising engineering teams examples:
- Where can I find a specific dataset, and where does the data originate?
- Can we make a dataset in MySQL queryable in Hive or Druid?
- We need to retain a certain dataset for 3 years. Is this possible given the volume of data that is created each day? Work with consumers of the data to identify efficiencies.
Data Engineering examples:
- Examine the data in HDFS and propose any solutions that could help us use the resources of the cluster more efficiently.
- Create a data retention policy for datasets in HDFS and communicate it to teams responsible for creating the datasets.
- Users have reported that their ETL jobs are running slowly; find out what is causing this and propose solutions.
- Users want to create a new topic on Kafka. What kind of questions should you ask before creating the new topic? If they don’t know the answers, propose steps that would help them.
- We’d like to build a near real-time system for counting application crashes by various dimensions, such as operating system and app release version. Propose a technical solution.
- We send tens of thousands of push notifications per second, and the existing queue we use has trouble scaling. Can we make it scale, or can you propose an alternative solution?
- Build a stream processing pipeline that signals when users are online and expose this as a service that product teams can use to power their applications.
- A client wants to use Hive/MySQL/Druid/HBase as a datastore. Assist them in determining which is the most appropriate for their project's needs and explain to them the pros and cons of each option.
- A Hadoop datanode is reporting that it cannot write to a volume. Investigate and coordinate with the appropriate teams to resolve the issue.
- Create an upgrade plan for our Druid cluster.
- We have gaps in monitoring both storage usage and resource utilisation (memory/CPU) on our Hadoop cluster. While we have a high-level tactical view, we’d like to be able to break down utilisation to the user level. How might we achieve this?
- You’ve just received a page that a Kafka broker is down. What are the next steps?
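To give a flavour of the crash-counting task above: the core of such a system is a windowed aggregation keyed by the dimensions of interest. In production this would likely run on a stream processor such as Apache Flink, but the logic can be sketched in plain Python. This is a minimal illustration only; the event shape (a dict with `ts`, `os`, and `app_version` fields) is an assumption, not a real Tumblr schema.

```python
from collections import Counter, defaultdict

def count_crashes(events, window_seconds=60):
    """Count crash events per (os, app_version) within tumbling windows.

    Each event is assumed to be a dict with 'ts' (epoch seconds),
    'os', and 'app_version'. Returns a mapping of
    window start time -> Counter keyed by (os, app_version).
    """
    windows = defaultdict(Counter)
    for e in events:
        # Align the timestamp to the start of its tumbling window.
        window_start = e["ts"] - (e["ts"] % window_seconds)
        windows[window_start][(e["os"], e["app_version"])] += 1
    return dict(windows)

# Hypothetical events: two iOS crashes in the first minute, one Android crash in the next.
events = [
    {"ts": 10, "os": "ios", "app_version": "24.1"},
    {"ts": 45, "os": "ios", "app_version": "24.1"},
    {"ts": 70, "os": "android", "app_version": "24.0"},
]
print(count_crashes(events))
```

A real deployment would also have to handle late and out-of-order events, which is where a framework with watermarking support earns its keep.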
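The online-presence task above usually comes down to heartbeats with a time-to-live: a user is "online" if they have been seen within the last N seconds. Here is a minimal single-process sketch in Python; a production service would keep this state in a shared store and consume heartbeats from a stream, and the class and parameter names here are illustrative assumptions.

```python
import time

class PresenceTracker:
    """Track which users are 'online' based on recent heartbeat events."""

    def __init__(self, ttl_seconds=30):
        self.ttl = ttl_seconds
        self.last_seen = {}  # user_id -> timestamp of the last heartbeat

    def heartbeat(self, user_id, ts=None):
        # Record a heartbeat; ts defaults to the current wall clock.
        self.last_seen[user_id] = ts if ts is not None else time.time()

    def is_online(self, user_id, now=None):
        # A user is online if their last heartbeat is within the TTL.
        now = now if now is not None else time.time()
        seen = self.last_seen.get(user_id)
        return seen is not None and now - seen <= self.ttl

tracker = PresenceTracker(ttl_seconds=30)
tracker.heartbeat("user-1", ts=100)
print(tracker.is_online("user-1", now=120))  # True: seen 20s ago
print(tracker.is_online("user-1", now=200))  # False: heartbeat expired
```

The TTL is the key design knob: too short and flaky connections flap between online and offline; too long and departed users linger as "online".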
We’re a distributed company with more than 2000 Automatticians in 96 countries speaking 120+ different languages. We democratize publishing and commerce so anyone with a story can tell it, and anyone with a product can sell it, regardless of income, gender, politics, language, or country.
We believe in Open Source and the vast majority of our work is available under the GPL.
Diversity, Equity, and Inclusion at Automattic
We’re improving diversity, equity, and inclusion in the tech industry. At Automattic, we want people to love their work and show respect and empathy to all. We welcome differences and strive to increase participation from traditionally underrepresented groups. Our DEI committee involves Automatticians across the company and drives grassroots change. For example, this group has helped facilitate private online spaces for affiliated Automatticians to gather and helps run a monthly DEI People Lab series for further learning. Diversity, Equity and Inclusion is a priority at Automattic, though our dedication influences far more than just Automatticians: we make our products freely available, translate them into numerous languages, and offer customer support in many of those languages. We require unconscious bias training for our hiring teams and ensure our products are accessible across different bandwidths and devices. Learn more about our dedication to diversity, equity, and inclusion and our Employee Resource Groups.