Synechron is a leading Digital Transformation consulting firm working to accelerate digital initiatives for banks, asset managers, and insurance companies around the world. Synechron uniquely delivers end-to-end digital, consulting, and technology capabilities to these firms, with expertise in wholesale banking, wealth management, and insurance, as well as emerging technologies like Blockchain, Artificial Intelligence, and Data Science. Based in New York, the company has 22 offices around the globe, with over 10,000 employees producing more than $650M in annual revenue.
Responsibilities:
- Design, architect, and support new and existing data and ETL pipelines, and recommend improvements and modifications.
- Create optimal data pipeline architecture and systems using Apache Airflow.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Be responsible for ingesting data into our data lake and providing frameworks and services for operating on that data, including the use of Spark and Databricks.
- Analyze, debug, and correct issues with data pipelines.
- Operate on or build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, Spark and Azure technologies.
Required Skills:
- 8+ years of experience building high-performance, scalable enterprise analytics or data-centric solutions
- At least 5 years of experience implementing complex ETL pipelines, preferably with Hadoop or Spark
- Experience working with Spark data pipelines and/or streaming
- Exceptional coding and design skills in Java/Scala or C#.
- Expert in Python, or a demonstrated ability to readily learn new languages and an affinity for them.
- Solid understanding of distributed systems design.
- Hands-on experience with Azure.
- Strong understanding and use of algorithms and data structures.
- Ability to work well in a team environment and be a self-starter.
- Ability to lead teams, or prior experience leading them.
- Expertise building and managing Python libraries.
Nice to have Skills:
- Azure Data Lake experience
- Machine Learning expertise
- Experience with Apache Airflow
- Experience with C# .NET Core
- Experience with traditional data marts/warehouses