Big Data/Hadoop Architect

Company: Synechron Inc.
Location: Charlotte, North Carolina, United States
Type: Full-time
Posted: 16.SEP.2018

Summary

Synechron is one of the fastest-growing digital, business consulting & technology firms in the world, specializing in financial services.

Description

Synechron is one of the fastest-growing digital, business consulting & technology firms in the world. Specializing in financial services, the business's focus on embracing the most cutting-edge innovations, combined with expert knowledge and technical expertise, has allowed Synechron to reach $500+ million in annual revenue, 8,000 employees, and 18 offices worldwide. Synechron is agile enough to invest R&D in the latest technologies and help financial services firms stand at the cutting edge of innovation, yet large enough to scale any global project. Learn more at www.synechron.com.

Synechron draws on over 15 years of financial services IT consulting experience to provide expert systems integration and technical development work in highly complex areas of financial services. This includes Enterprise Architecture & Strategy, Application Development & Maintenance, Quality Assurance, Infrastructure Management, Data & Analytics, and Cloud Computing. Synechron is one of the world's leading systems integrators for specialist technology solutions including Murex, Calypso, Pega, and others, and also provides traditional offshoring capabilities with offshore development centres located in Pune, Bangalore, Hyderabad, and Chennai, as well as near-shoring capabilities for European banks with development centres in Serbia. Synechron's technology team works with traditional technologies and platforms like Java, C++, and Python, as well as the most cutting-edge technologies, from blockchain to artificial intelligence. Learn more at www.synechron.com.

On behalf of our client, a leading financial corporation, Synechron is seeking a Big Data/Hadoop Architect in the financial services space in Charlotte, NC.

Job Responsibilities:

  • Develop new reference architectures and service models.
  • Lead designers and other developers on the team, guiding them and helping provide the right technical solutions to the business.
  • Set up the Cloudera platform and its associated tools (Sqoop, Hive, Impala, Spark).
  • Create Hive tables and Impala metadata to access data in HDFS (see the sketch after this list).
  • Lead Big Data projects for the client.
  • Coordinate flexibly across multiple onsite and offshore teams.
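
For the Hive/Impala item above, a minimal sketch, assuming PySpark with Hive support on the Cloudera stack; the staging database, trades table, columns, and HDFS path are all hypothetical placeholders:

    from pyspark.sql import SparkSession

    # Session wired to the shared Hive metastore, so both Hive and Impala
    # can see the tables registered here.
    spark = (SparkSession.builder
             .appName("hive-table-setup")
             .enableHiveSupport()
             .getOrCreate())

    spark.sql("CREATE DATABASE IF NOT EXISTS staging")

    # External table: Hive/Impala own only the metadata; the data files
    # stay where the loads landed them in HDFS.
    spark.sql("""
        CREATE EXTERNAL TABLE IF NOT EXISTS staging.trades (
            trade_id   BIGINT,
            symbol     STRING,
            quantity   INT,
            price      DOUBLE,
            trade_date STRING
        )
        STORED AS PARQUET
        LOCATION 'hdfs:///data/staging/trades'
    """)

Because Impala shares the Hive metastore, running INVALIDATE METADATA staging.trades on the Impala side is enough to make the same table queryable there.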

Skills Required:

Staging/Load area:

  • Hadoop HDFS
  • ETL Staging & Load for legacy systems (a staging-load sketch follows this list)
  • S3 as Object data store where possible (evaluate Scality or Hitachi)
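
A minimal sketch of the staging load mentioned above, assuming Spark's JDBC source against a legacy Oracle system (Sqoop would be an alternative route for the same step); the connection URL, credentials, table, and HDFS path are hypothetical:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("legacy-staging-load").getOrCreate()

    # Pull the legacy table over JDBC; in practice the password comes
    # from a vault, not a literal.
    legacy_df = (spark.read.format("jdbc")
                 .option("url", "jdbc:oracle:thin:@//legacy-host:1521/ORCL")
                 .option("dbtable", "ACCOUNTS")
                 .option("user", "etl_user")
                 .option("password", "***")
                 .option("fetchsize", "10000")
                 .load())

    # Stamp each extract with its load date and land it in the shared
    # HDFS staging area as Parquet, partitioned for incremental reloads.
    staged = legacy_df.withColumn("load_date", F.current_date())
    (staged.write.mode("overwrite")
           .partitionBy("load_date")
           .parquet("hdfs:///data/staging/accounts"))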

Operational & Reporting Data Marts:

  • Shared HDFS storage for all Data Marts
  • S3 for Object data stores where applicable, especially for supporting OLTP or CRUD application capabilities
  • Apache Parquet or ORC for Columnar data stores
  • Apache Spark for data processing (Scala, Python, or Java)
  • Apache Spark for data access through Spark SQL and DataFrames (see the sketch after this list)
  • Dedicated Spark clusters for each Data Mart, with capacity sized to that mart's usage
  • Oracle or SQL Server for legacy Data Marts
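
A minimal sketch of the two Spark access paths listed above, over a Parquet-backed mart; the mart location, table, and columns are hypothetical:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("datamart-access").getOrCreate()

    # Columnar (Parquet) mart data on shared HDFS storage.
    trades = spark.read.parquet("hdfs:///marts/trading/trades")
    trades.createOrReplaceTempView("trades")

    # Access path 1: Spark SQL, convenient for analysts and reporting tools.
    daily_sql = spark.sql("""
        SELECT trade_date, symbol, SUM(quantity * price) AS notional
        FROM trades
        GROUP BY trade_date, symbol
    """)

    # Access path 2: the DataFrame API, convenient for pipeline code.
    daily_df = (trades.groupBy("trade_date", "symbol")
                      .agg(F.sum(F.col("quantity") * F.col("price"))
                            .alias("notional")))

Both produce the same aggregate, so teams can pick whichever interface fits the consumer.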

Reporting Workbench/Portal:

  • HTML5 for UI
  • OBIEE for standard reporting
  • Tableau for self-service reporting

- provided by Dice: Big Data, Hadoop, Sqoop, Hive, Impala, Spark, HDFS, Scality, ETL, Scala, Data Mart, Python, SQL Server

 