Supplier call 01/09/2019
The three things you must have strong experience in:
1. CI/CD pipeline creation (preferably Jenkins)
2. IAM (Identity and Access Management) within AWS
3. Python coding
Project Description:
Senior DevOps engineer with AWS cloud infrastructure experience supporting a big data platform
Senior DevOps Engineer
We are looking for a senior DevOps engineer with AWS cloud infrastructure experience to join our Big Data Platform Operations team. We provide cloud infrastructure to over 500 internal engineering and technical business users (Data Scientists and Analysts). This role will provide advanced operations support, contribute to automation and system improvements, and work directly with user teams to support platform adoption.
· Support key operational aspects of our AWS cloud-based infrastructure services and ensure SLAs are met or exceeded.
· Improve our infrastructure by developing and refining CI/CD workflows and pipelines that provide stable, repeatable infrastructure and builds (a brief Python sketch follows this list).
· Provide advanced engineering support services to end users:
o Gather technical details.
o Troubleshoot and resolve issues.
o Provide status updates to users and stakeholders.
o Track all details in the issue tracking system (JIRA).
· Review and triage new service and support requests.
· Assist with training and onboarding for new end users.
· Contribute to Agile / Kanban workflows and team process work.
· Other operations and support duties as needed.
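Illustrative only: below is a minimal Python sketch of the kind of helper a Jenkins pipeline stage might call to keep infrastructure builds repeatable, validating and deploying a CloudFormation template with boto3. The stack name, template path, and update/create fallback are hypothetical assumptions, not details of our actual pipelines.

    """Hypothetical pipeline-stage helper: validate and deploy a CloudFormation template."""
    import boto3
    from botocore.exceptions import ClientError

    STACK_NAME = "bigdata-platform-example"   # hypothetical stack name
    TEMPLATE_PATH = "infrastructure.yaml"     # hypothetical template file

    def deploy(stack_name: str, template_path: str) -> None:
        cfn = boto3.client("cloudformation")
        with open(template_path) as f:
            body = f.read()

        # Fail the pipeline stage early if the template does not parse.
        cfn.validate_template(TemplateBody=body)

        try:
            # Try to update an existing stack first; fall back to creating it.
            cfn.update_stack(StackName=stack_name, TemplateBody=body,
                             Capabilities=["CAPABILITY_NAMED_IAM"])
            waiter = cfn.get_waiter("stack_update_complete")
        except ClientError as err:
            if "does not exist" not in str(err):
                raise
            cfn.create_stack(StackName=stack_name, TemplateBody=body,
                             Capabilities=["CAPABILITY_NAMED_IAM"])
            waiter = cfn.get_waiter("stack_create_complete")

        waiter.wait(StackName=stack_name)
        print(f"Stack {stack_name} deployed")

    if __name__ == "__main__":
        deploy(STACK_NAME, TEMPLATE_PATH)

A Terraform-based pipeline would use terraform plan/apply for the same step; the pattern (validate, then apply, then wait for completion) is what matters here.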
Required Job Qualifications:
· AWS: Working experience with and a good understanding of AWS best practices. Good knowledge of the AWS ecosystem, including EC2, S3, VPC, CloudFormation, Athena, Lambda, etc. Advanced experience with IAM policy and role management is especially important (see the IAM sketch after this list). AWS certification helpful.
· Infrastructure Operations: 3+ years supporting systems infrastructure operations, upgrades, deployments, and monitoring. Ability to keep systems running at peak performance and to perform operating system upgrades, patching, and version upgrades as required.
· CI/CD: Experience with CI/CD methodologies and environments. Experience with tools such as Terraform, Jenkins or CircleCI, and GitHub/Bitbucket.
· Programming: 2+ years of experience programming in Python. Experience with Bash, REST APIs, and JSON encoding (see the short REST/JSON example after this list).
· Linux: 5+ years in Unix systems engineering with experience in Red Hat Linux, CentOS, or Ubuntu.
· Security: Experience implementing role-based security methodologies, including AD integration, security policies, and auditing/logging in a Linux/AWS environment.
· Networking: Working knowledge of TCP/IP networking, HTTPS, load-balancers (ELB, HAProxy) and high availability architecture.
· Design and Implementation: Coordinate with other administrators and platform engineers on design and implementation decisions, balancing strategic design against tactical needs.
· Innovate: History of adopting new technologies quickly. Research and deploy new tools and frameworks to build a sustainable big data platform.
· Agile/Scrum/Kanban experience. Collaborate with Project Managers, Product Managers, QA and Engineering teams to deliver results.
· Demonstrated communication and interpersonal skills.
· Proven track record of success in fast-moving organizations with complex technology applications.
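To illustrate the IAM policy and role management called out in the AWS bullet above, here is a minimal boto3 sketch that creates a role with an EC2 trust policy and attaches a least-privilege inline S3 read-only policy. The role name and bucket ARN are hypothetical placeholders, not resources from our environment.

    """Minimal sketch of IAM role/policy management with boto3 (illustrative only)."""
    import json
    import boto3

    iam = boto3.client("iam")

    # Trust policy letting EC2 instances assume the role (e.g., platform nodes).
    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }

    # Least-privilege inline policy: read-only access to one hypothetical bucket.
    s3_read_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-analytics-bucket",
                "arn:aws:s3:::example-analytics-bucket/*",
            ],
        }],
    }

    role = iam.create_role(
        RoleName="analytics-readonly-example",      # hypothetical role name
        AssumeRolePolicyDocument=json.dumps(trust_policy),
        Description="Example read-only role for platform users",
    )

    iam.put_role_policy(
        RoleName="analytics-readonly-example",
        PolicyName="s3-readonly",
        PolicyDocument=json.dumps(s3_read_policy),
    )

    print("Created role:", role["Role"]["Arn"])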
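And to illustrate the Python/REST/JSON bullet, a short sketch that fetches an issue's status from JIRA's REST API (the issue tracker used by the team) and decodes the JSON response. The base URL, issue key, and credential environment variables are hypothetical placeholders.

    """Minimal Python/REST/JSON sketch: look up a JIRA issue's status (illustrative only)."""
    import os
    import requests

    JIRA_BASE_URL = "https://example.atlassian.net"   # hypothetical JIRA instance
    ISSUE_KEY = "OPS-123"                             # hypothetical issue key

    def get_issue_status(issue_key: str) -> str:
        resp = requests.get(
            f"{JIRA_BASE_URL}/rest/api/2/issue/{issue_key}",
            auth=(os.environ["JIRA_USER"], os.environ["JIRA_TOKEN"]),
            headers={"Accept": "application/json"},
            timeout=10,
        )
        resp.raise_for_status()
        issue = resp.json()  # decode the JSON response body
        return issue["fields"]["status"]["name"]

    if __name__ == "__main__":
        print(ISSUE_KEY, "is", get_issue_status(ISSUE_KEY))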
Nice-to-Have Job Qualifications
· AWS Certification: An AWS certification would be a significant bonus.
· Containers: Experience working with containers and orchestration tools such as Docker, AWS ECS, and Kubernetes.
· Orchestration Tools: Experience with orchestration and configuration management tools (Ansible, Chef, Puppet, Salt).
· Monitoring: Hands-on experience with monitoring tools such as AWS CloudWatch, Nagios, or New Relic.
· ETL: Experience with job schedulers such as Oozie or Airflow (Airflow preferred).
· ELK Stack: Experience setting up the ELK stack for analytics.
· MS/BS in Computer Science or equivalent experience.