
SR PROGRAMMER/ANALYST

Description

Data Engineer

Georgia-Pacific’s Packaging & Cellulose and Building Products BI and Data Analytics group is looking for a Data Engineer to join our team. We are looking for entrepreneurial-minded innovators who can help us further develop this high-value service for our business.

A successful candidate will bring advanced knowledge of best-in-class development methodologies, a passion for scalable and high-reliability data systems, and a working knowledge of public cloud services (preferably AWS). You must be enthusiastically collaborative, value-seeking, open to challenge, and comfortable with both new ideas and established approaches, with an appetite for learning and innovation.

A Day In The Life Typically Includes:

  • Implementing and supporting data pipelines and services

  • Contributing to architectural discussions about new data system designs

  • Prioritizing, organizing, and coordinating simultaneous tasks and projects

  • Experimenting with new technologies and solutions, and identifying ways to use technology to create superior value for our customers

  • Driving rapid technical advancement with high initiative and passion

  • Collaborating with a diverse IT team, including business analysts, project managers, architects, developers, and vendors, to create or optimize innovative technologies and solutions

  • Communicating complex solutions to stakeholders and other team members

  • Applying strong conceptual, analytical, and problem-solving abilities

What You Will Need:

Basic Qualifications:

  • Experience in one or more common scripting languages (Python or Node.js preferred)

  • 3+ years’ experience writing SQL queries against relational databases (SQL Server, PostgreSQL, etc.)

  • 4+ years’ experience with CI/CD tools and concepts (Azure DevOps, Bash, PowerShell, Terraform)

  • 2+ years’ experience building multi-dimensional databases, cubes, and data warehouses

  • 2+ years’ experience with Git or other version control technologies

  • 3+ years’ experience building data pipelines or ETL processes

  • 3+ years’ experience tuning queries for performance and scalability

  • 5+ years’ experience with the native AWS tool set (S3, Redshift, Aurora, DynamoDB, Lambda, Kinesis, or similar tools)

  • 2+ years’ experience with Apache Spark development (preferably PySpark)

What Will Put You Ahead?

Preferred Qualifications:

  • Understanding of metadata and master data management concepts and practices

  • Experience working with enterprise data stores, data lakes, external data sources and APIs

  • Experience working with remote teams spread across the globe

  • Experience with a broad set of analytics use cases, in one or more of supply chain, transportation, or sales and operations

  • Bachelor’s degree in Computer Science, Engineering, or Mathematics

  • 1-3 years’ experience with visualization tools such as Tableau or Power BI
