Position Details: DevOps/Data Engineer - (829582S)

Location: Beaverton, OR
Openings: 1
Job Number: 829582S

Description:

Key Responsibilities

· Serve as a lead in supporting and operating key aspects of our infrastructure services.

· Improve our infrastructure by developing and enhancing automation tools.

· Provide advanced engineering support services to end users:

  - Gather technical details.
  - Troubleshoot issues to resolve problems.
  - Provide status updates to users and stakeholders.
  - Track all details in the issue tracking system (JIRA).

· Review and triage issues for new service/support requests.

· Assist with training and onboarding for new end users.

· Use DevOps automation tools, including Jenkins build jobs and Puppet manifests.

· Contribute to Agile/Kanban workflows and team process work.

· Other operations and support duties as needed.

Required Job Qualifications

· Linux: 5+ years of Unix systems engineering experience, including Red Hat Linux, CentOS, or Ubuntu.

· AWS: Working experience with and a good understanding of the AWS environment, including VPC, EC2, EBS, S3, RDS, SQS, CloudFormation, Lambda, and Elasticsearch. Advanced experience with IAM policy and role management.

· Infrastructure Operations: 3+ years supporting systems infrastructure operations, upgrades, deployments, and monitoring.

· Demonstrated ability to learn new technologies quickly.

· Programming: 2+ years of Python programming experience. Experience with Bash, REST APIs, and JSON encoding.

· Hadoop: Experience with Hadoop (Hive, Spark, Sqoop) and/or AWS EMR.

· DevOps: Experience with DevOps automation, including orchestration/configuration management and CI/CD tools (Ansible, Chef, Puppet, Salt, Jenkins, Troposphere).

· Version Control: Working experience with one or more version control platforms (Git, TFS); Git experience preferred.

· ETL: Experience with a job scheduler such as Oozie or Airflow; Airflow experience preferred.

· Data Science tools (nice to have): R, RStudio, JupyterHub, Zeppelin, TensorFlow.

· Security: Experience implementing role-based security, including AD integration, security policies, and auditing, in a Linux/Hadoop/AWS environment.

· Monitoring: Hands-on experience with monitoring tools such as AWS CloudWatch, Nagios, and New Relic.

· ELK stack: Experience setting up the ELK stack (Elasticsearch, Logstash, Kibana) for analytics.

· Networking: Working knowledge of TCP/IP networking, SMTP, HTTP, load balancers (ELB, HAProxy), and high-availability architecture.

· Ability to keep systems running at peak performance and to apply operating system upgrades, patches, and version upgrades as required.

· Ability to lead other admins and platform engineers through design and implementation decisions, balancing strategic design with tactical needs.

· Research and deploy new tools and frameworks to build a sustainable big data platform.

· Agile/Scrum/Kanban experience.

· Demonstrated communication and interpersonal skills.

· Proven track record of success in fast-moving organizations with complex technology applications.

· Ability to collaborate with Project Managers, Product Managers, QA, and Engineering teams to deliver results.

Educational Qualifications

· MS/BS in Computer Science or a related field.
