For over 80 years, GfK has been a reliable and trusted insight partner for the world’s biggest companies and leading brands that make a difference in every consumer’s life - and we will continue to build on this. We connect data, science and innovative digital research solutions to answer key business questions about consumers, markets, brands and media. With our headquarters in Germany and a presence in around 60 countries worldwide, you benefit from a global company with a diverse community of around 9,000 employees.
Our people are our greatest asset, and as part of GfK you can take your future into your own hands. We value talent, skills and responsibility, and we support your development within our international teams. We are proud of our heritage and our future: we are currently in the latter stages of a transformational journey from a traditional market research company to a trusted provider of prescriptive data analytics powered by innovative technology. This is only possible with extraordinary people, and that is why we are looking for YOU to help create our future. For our employees as well as for our clients, we pursue one goal: Growth from Knowledge!
You are a highly experienced DevOps or Site Reliability Engineer with a passion for cloud infrastructure and automation.
As a Principal SRE, you feel you have the depth of skills and competencies required to take on the responsibility of Chapter Lead for our London SRE team whilst remaining hands-on technically and delivering for our product development teams.
You may have line management experience already, or you may be at a point in your career where you feel ready to begin leading and developing people.
You’re a self-starter and you love keeping up to date with the latest developments in cloud, configuration management and container technologies. You understand the benefits of an immutable infrastructure and you enjoy enabling self-service deployments through continuous delivery pipelines.
However, you have a head for operations and you know that security, backups, scalability and resilience are paramount concerns. In fact, you have a keen interest in security, and you’re always looking for ways to use infrastructure-as-code to achieve security and compliance at scale. You understand that cost is a concern for the modern infrastructure engineer, and you’re always looking for ways to reduce it.
You thrive in a collaborative DevOps culture and you love to work as part of a cross-functional agile product delivery team alongside developers, data scientists, product owners and technical architects.
You will be...
You will be working in our newly established London Development Centre. Working in partnership with our software engineering teams, the Site Reliability Engineering (SRE) org is responsible for building and operating the next-generation infrastructure to support the GfK product portfolio.
Working extensively with Google Cloud Platform (GCP) and related cloud technologies, you will be embedded within the product development squads to enable them to test and deploy software rapidly whilst ensuring the highest standards of reliability and security.
You will automate the build and deployment of infrastructure using tools such as Terraform, Puppet, Docker, Kubernetes and other orchestration technologies in a hybrid-cloud environment. You will influence architectural decisions with a focus on security, scalability, cost and high performance.
Part of your role will be to set up and maintain monitoring, metrics and reporting systems for fine-grained observability and actionable alerting, and to create and maintain appropriate backup, restore and redundancy solutions for business-critical data.
- Expert Linux administration skills and deep understanding of networking and TCP/IP.
- Strong experience with Google Cloud Platform and Terraform.
- Experience with Docker and Kubernetes.
- Excellent knowledge of technical architecture and modern design patterns, including micro-services, serverless functions, NoSQL, RESTful APIs, etc.
- Demonstrable skills in a Configuration Management tool such as Puppet or Ansible.
- Experience setting up and supporting CI/CD pipelines and tooling using either GitLab or Jenkins.
- Proficiency in a high-level programming language such as Python, Ruby or Go.
- Experience with monitoring, log aggregation and alerting tooling (Stackdriver, Prometheus, ELK).
It would be a great addition if you had experience with…
- Big data technologies such as Hadoop, HDFS, Spark.
- Data streaming technologies such as Kafka, Flume, and Kinesis.
- Team leadership.
We are a technology company on a mission to enable our customers to make confident business decisions. We provide insights and actionable recommendations by combining our data, market research heritage, and technical prowess in data analytics and machine learning.
Many say they have big data problems. Well, we do have big data, and we also have big problems to solve. We are a group of diverse individuals who work together to apply modern technology and proven practices to problems in big data storage, big data processing, machine learning, and complex data analysis and visualisation.
We love technology, and our tech stack stands as a testament to that: we are building cloud-native applications with Kubernetes, Elasticsearch, Kafka, Java, Spring and more. We are agile, and our focus is on constant improvement of our processes, our ways of working and, above all, ourselves. We embrace change, we put our egos aside and we take advice from others. We foster an engaged, open mindset within a non-hierarchical structure. We analyse, create, innovate and improve.
We offer an exciting work environment that brings people together. We encourage an entrepreneurial and innovative spirit. We make use of the latest digital technologies. We are looking for self-starters, who accept challenges and create solutions.
Can there be a better place to take centre stage in the digital revolution? We are excited to get to know you!