Position Details: Senior Data Engineer - 957990F

Location: Beaverton, OR
Openings: 1

Description:

Become a Part of the client Team

Client does more than outfit the world’s best athletes. It is a place to explore potential, obliterate boundaries and push out the edges of what can be. The company looks for people who can grow, think, dream and create. Its culture thrives by embracing diversity and rewarding imagination. The brand seeks achievers, leaders and visionaries. At client, it’s about each person bringing skills and passion to a challenging and constantly evolving game.
Do you have a passion for digital technology, innovation, and problem solving? Are you curious about how to turn billions of events and signals into meaningful information that not only provides insights into the present but also helps predict the future? Are you interested in applying data engineering, data streaming, and big data technology to help deliver personalized experiences? If so, come join the talented team of engineers driving data engineering solutions on the client's Consumer Services Data Engineering Team.

The Consumer Services Data Engineering Team is building a new data warehouse and needs a talented engineer to help drive this major milestone across the line. The data engineer will have experience sourcing social media data into our newly designed data warehouse.

Requirements:

  • MS/BS in Computer Science, or related technical discipline
  • 5+ years of industry experience, including 3+ years of relevant big data, dimensional, and relational database experience
  • 5+ years of experience with Python and Snowflake, including strong Python programming skills
  • Demonstrated experience using APIs to extract data with consideration of authentication and authorization methods to adhere to privacy and security policies
  • Ability to architect, design, and implement solutions with AWS Virtual Private Cloud, EC2, AWS Data Pipeline, AWS CloudFormation, Auto Scaling, Amazon Simple Storage Service (S3), EMR, and other AWS products
  • Data modeling experience; the engineer will source the data and build the data model
  • Extensive experience working with Hadoop and related processing frameworks such as Spark, Hive, Sqoop, etc.
  • Experience with workflow orchestration tools like Apache Airflow (a minimal sketch follows this list)
  • Experience with performance and scalability tuning
  • Experience in Agile/Scrum application development using JIRA
  • Experience working in a public cloud environment, particularly AWS
  • Experience with Bitbucket
  • Experience implementing coding standards and long-term best practices
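
Since Airflow orchestration sits alongside the AWS and Python requirements above, here is a minimal sketch (Airflow 2.x idiom) of the kind of daily extract-and-load DAG this role might own. The DAG id, task ids, and callables are illustrative assumptions, not an actual client pipeline.

```python
# Minimal Airflow DAG sketch: a daily extract -> load dependency chain.
# All names here (dag_id, task ids, callables) are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_source_data():
    """Placeholder: pull raw records from a source system."""


def load_to_warehouse():
    """Placeholder: load staged records into the warehouse."""


with DAG(
    dag_id="social_media_ingest",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_source_data)
    load = PythonOperator(task_id="load", python_callable=load_to_warehouse)
    extract >> load  # load runs only after extract succeeds
```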

Required Soft Skills:

  • Demonstrated experience and ability to deliver results on multiple projects in a fast-paced, agile environment
  • Excellent problem-solving and interpersonal communication skills
  • Strong desire to learn and share knowledge with others
  • Passionate about data and striving for excellence
  • Desire to learn the business and communicate with business stakeholders to implement business-rule transformations and data validation while coding
  • Desire and ability to work collaboratively with teammates, especially architects, product managers, scrum masters, and engineers, to arrive at the best solution to a problem

Nice to Have:

  • Familiarity with practices like Continuous Integration, Continuous Delivery, and Automated Testing
  • Familiarity with infrastructure tools such as CloudFormation and automation tools such as Jenkins or CircleCI
  • Experience with global call center data
  • Understanding of social media language and sourcing social media data (e.g., Twitter, Facebook, Instagram)
  • ERwin experience is a plus
  • Experience configuring data, custom fields, and reporting in the Sprinklr social application is a plus

Role responsibilities:

  • Understand the technical details of the Sprinklr platform to extract data using its APIs (see the sketch after this list)
  • Interrogate Sprinklr’s authentication and authorization methods to make sure they adhere to the client’s privacy and security policies
  • Work with architects, product managers, scrum masters to deliver sprint goals every two weeks
  • Work with the data enablement team, our business stakeholders, to understand data requirements pertaining to metrics and quality
  • Design and implement features in collaboration with team engineers, product owners, data analysts, and business partners using Agile / Scrum methodology
  • Model the social media data elements and make sure they integrate with the existing data model
  • Work with the Salesforce engineer to determine how to integrate Sprinklr social media data with the Salesforce application
  • Contribute to overall architecture, frameworks and patterns for processing and storing large data volumes
  • Design and implement distributed data processing pipelines using Spark, Hive, Sqoop, Python, and other tools and languages prevalent in the Hadoop ecosystem
  • Build utilities, user defined functions, and frameworks to better enable data flow patterns
  • Research, evaluate and utilize new technologies/tools/frameworks centered around high-volume data processing
  • Define and apply appropriate data acquisition and consumption strategies for given technical scenarios
  • Build and incorporate automated unit tests and participate in integration testing efforts
  • Work with architecture/engineering leads and other teams to ensure quality solutions are implemented, and engineering best practices are defined and adhered to
  • Work across teams, including third-party vendors, to resolve operational and performance issues
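
To make the first two responsibilities concrete, below is a generic sketch of token-authenticated, paginated REST extraction. The base URL, endpoint, parameters, and response shape are placeholders for illustration, not Sprinklr's actual API; the real contract comes from Sprinklr's API documentation.

```python
# Generic sketch of token-authenticated REST extraction of the sort the
# Sprinklr work would involve. The endpoint, parameters, and field names
# below are placeholders, NOT Sprinklr's actual API.
import os

import requests

API_BASE = "https://api.example.com"           # placeholder, not a real Sprinklr URL
ACCESS_TOKEN = os.environ["SOCIAL_API_TOKEN"]  # keep credentials out of code


def fetch_messages(since: str) -> list[dict]:
    """Page through a hypothetical /messages endpoint using bearer auth."""
    headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}
    results, page = [], 0
    while True:
        resp = requests.get(
            f"{API_BASE}/messages",
            headers=headers,
            params={"since": since, "page": page},
            timeout=30,
        )
        resp.raise_for_status()  # surface auth/permission errors immediately
        batch = resp.json().get("data", [])
        if not batch:
            return results
        results.extend(batch)
        page += 1
```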

The candidates coming through haven’t been technical enough with the stack we are using.

The top three requirements are:

  • Python
  • ETL and data warehouse design concepts (Snowflake a plus)
  • Understanding of APIs

We are NOT looking for a data scientist with Python experience.

We are looking for a data engineer with Python experience. Someone who understands the ETL process…

Loading data from source systems to a data lake.
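
As a concrete illustration of that pattern, here is a minimal sketch of landing raw records in an S3 data lake and then bulk-loading them into Snowflake. The bucket, stage, table, and connection parameters are all assumptions for illustration, not real infrastructure.

```python
# Sketch of the extract-and-land pattern: raw JSON lines to an S3 data
# lake, then a bulk COPY INTO Snowflake. Bucket, stage, table, and
# credentials below are placeholders.
import json
from datetime import date

import boto3
import snowflake.connector

# In practice this would be the output of the API extraction step.
records = [{"id": 1, "text": "example message"}]

# Land raw JSON lines in the data lake, partitioned by load date.
s3 = boto3.client("s3")
key = f"raw/social/{date.today():%Y/%m/%d}/messages.json"
s3.put_object(
    Bucket="example-data-lake",  # hypothetical bucket
    Key=key,
    Body="\n".join(json.dumps(r) for r in records),
)

# Bulk-load from an external stage pointing at the bucket.
conn = snowflake.connector.connect(
    account="example_account",   # placeholder credentials; use a
    user="etl_user",             # secrets manager in practice
    password="***",
    warehouse="etl_wh",
    database="analytics",
)
conn.cursor().execute(
    "COPY INTO raw.social_messages "
    "FROM @social_stage/raw/social/ "
    "FILE_FORMAT = (TYPE = 'JSON')"
)
```

Keeping an immutable raw copy in the lake before the warehouse load means the Snowflake table can be rebuilt from source files at any time.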

Required

  • DATA MODEL
  • PYTHON
  • API
  • SNOWFLAKE SCHEMA
  • DATA WAREHOUSE DESIGN/ARCHITECTURE

Additional

  • EMR
  • ENGINEER
  • HADOOP
  • HIVE
  • INTEGRATION
  • METRICS
  • PROBLEM-SOLVING
  • SECURITY POLICIES
  • SQOOP
  • AMAZON ELASTIC COMPUTE CLOUD
  • AMAZON SIMPLE STORAGE SERVICE
  • APACHE
  • APACHE HADOOP SQOOP
  • APPLICATION DEVELOPMENT
  • ARCHITECTURE
  • CODING
  • DATA ACQUISITION
  • DATA VALIDATION
  • DATABASE
  • ERWIN
  • INTEGRATION TESTING
  • INTEGRATOR
  • JENKINS
  • JIRA
  • STREAMING
  • UNIT TESTS
  • WORKFLOW

 
