Position Details: Senior Data Engineer (876854E)

Location: Beaverton, OR
Openings: 1
Job Number: 876854E

Description:

Client has embraced big data technologies to enable data-driven decisions. We're looking to expand our Data Engineering team to keep pace. As a Senior Data Engineer, you will work with a variety of talented Client teammates and be a driving force in building first-class solutions for Client Technology and its business partners, working on development projects related to supply chain, commerce, consumer behavior, and web analytics, among others.

Role responsibilities:

· Design and implement features in collaboration with product owners, data analysts, and business partners using Agile/Scrum methodology

· Contribute to overall architecture, frameworks, and patterns for processing and storing large data volumes

· Design and implement distributed data processing pipelines using Spark, Hive, Sqoop, Python, and other tools and languages prevalent in the Hadoop ecosystem

· Build utilities, user-defined functions, and frameworks to better enable data flow patterns

· Research, evaluate, and utilize new technologies, tools, and frameworks centered on high-volume data processing

· Define and apply appropriate data acquisition and consumption strategies for given technical scenarios

· Build and incorporate automated unit tests and participate in integration testing efforts

· Work with architecture/engineering leads and other teams to ensure quality solutions are implemented and engineering best practices are defined and adhered to

· Work across teams to resolve operational and performance issues

The following qualifications and technical skills will position you well for this role:

· MS/BS in Computer Science or a related technical discipline

· 4+ years of experience in large-scale software development and 2+ years of big data experience

· Strong programming experience, Python preferred

· Extensive experience working with Hadoop and related processing frameworks such as Spark, Hive, Sqoop, etc.

· Experience with relational database systems, SQL, and SQL analytical functions

· Experience with workflow orchestration tools like Apache Airflow

· Experience with performance and scalability tuning

The following skills and experience are also relevant to our overall environment and are nice to have:

· Experience with Scala or Java

· Experience working in a public cloud environment, particularly AWS

· Familiarity with cloud warehouse tools like Snowflake

· Experience with messaging, streaming, and complex event processing tools and frameworks such as Kinesis, Kafka, Spark Streaming, Flink, NiFi, etc.

· Experience working with NoSQL data stores such as HBase, DynamoDB, etc.

· Experience building RESTful APIs to enable data consumption

· Familiarity with infrastructure-as-code tools such as Terraform or CloudFormation and automation tools such as Jenkins or CircleCI

· Familiarity with practices like Continuous Development, Continuous Integration, and Automated Testing

· Experience in Agile/Scrum application development

These are the characteristics that we strive for in our own work. We would love to hear from candidates who embody the same:

· Desire to work collaboratively with your teammates to come up with the best solution to a problem

· Demonstrated experience and ability to deliver results on multiple projects in a fast-paced, agile environment

· Excellent problem-solving and interpersonal communication skills

· Strong desire to learn and share knowledge with others
