Job Description:
Position Description:
Develops PL/SQL and Extract, Transform, and Load (ETL) code using Python. Delivers and releases source code to applications across teams according to Agile methodologies. Builds and deploys applications using Continuous Integration (CI) pipelines, including Jenkins, GitHub, Liquibase, and Amazon Web Services (AWS) CodeCommit. Performs unit and integration testing using SQL queries, Python, and Spark. Provides ETL solutions by developing complex or multiple software applications. Crafts and implements operational data stores in multi-site High Availability environments using AWS.
Primary Responsibilities:
- Prepares and maps technical design documents by capturing business and application requirements.
- Participates in sprint meetings and tracks feature feasibility and deployment.
- Validates, builds, and deploys analytical solutions across business functions.
- Extracts data from Oracle Analytics Server (OAS) to Amazon Simple Storage Service (Amazon S3) using Python scripts.
- Designs and develops framework-based pipelines to transform and cleanse data.
- Coordinates Airflow tasks in production by monitoring and analyzing them through the Airflow Console.
- Crafts and implements operational data stores and lakes in production environments.
- Develops and coordinates the data life cycle in Python.
- Deploys database and applications through established Continuous Integration and Continuous Delivery (CI/CD) pipelines, using GitHub and Jenkins.
Education and Experience:
Bachelor’s degree (or foreign education equivalent) in Computer Science, Engineering, Information Technology, Information Systems, Business Administration, Mathematics, Physics, or a closely related field and five (5) years of experience as a Principal Data Engineer (or closely related occupation) designing and building Web-based transaction processing applications and solutions in a financial services environment, using Python.
Or, alternatively, Master’s degree (or foreign education equivalent) in Computer Science, Engineering, Information Technology, Information Systems, Business Administration, Mathematics, Physics, or a closely related field and three (3) years of experience as a Principal Data Engineer (or closely related occupation) designing and building Web-based transaction processing applications and solutions in a financial services environment, using Python.
Skills and Knowledge:
Candidate must also possess:
- Demonstrated Expertise (“DE”) developing software applications and tools in a Linux environment, using Python and SQL; and developing product roadmaps in support of data improvement using Collibra.
- DE performing database virtualization and coordinating test data management, data governance, lineage, and analytics, using Apache Kafka, Tableau, or Power BI; and defining, developing, and implementing solutions to collect and deliver organizational metrics via dashboards, using Power BI.
- DE developing ELT pipelines to migrate data between the Oracle Business Intelligence Enterprise Edition (OBIEE) catalog and Postgres; staging data on AWS and orchestrating ETL jobs by developing end-to-end (E2E) ETL/ELT data pipelines using Apache Airflow on AWS; and automating the deployment of databases and applications through established CI/CD pipelines, using GitHub, Liquibase, AWS CodeCommit, or Jenkins.
- DE automating the build and release of software applications using Python and shell scripting; and designing, developing, and deploying E2E infrastructure to host scalable business software in AWS, Azure cloud (IaaS and PaaS), or on-premises environments.
Category:
Information Technology

Fidelity’s hybrid working model blends the best of both onsite and offsite work experiences. Working onsite is important for our business strategy and our culture. We also value the benefits that working offsite offers associates. Most hybrid roles require associates to work onsite every other week (all business days, M-F) in a Fidelity office.