Big Data / Hadoop Developer

Company: Synechron Inc.
Location: Charlotte, North Carolina, United States
Type: Full-time
Posted: 15.DEC.2018


Description

Synechron is one of the fastest-growing digital, business consulting, and technology firms in the world. Specializing in financial services, the business's focus on embracing the most cutting-edge innovations, combined with expert knowledge and technical expertise, has allowed Synechron to reach $500+ million in annual revenue, 8,000 employees, and 18 offices worldwide. Synechron is agile enough to invest R&D in the latest technologies to help financial services firms stand at the cutting edge of innovation, yet also large enough to scale any global project. Learn more at:

Synechron draws on over 15 years of financial services IT consulting experience to provide expert systems integration and technical development work in highly complex areas within financial services. This includes: Enterprise Architecture & Strategy, Application Development & Maintenance, Quality Assurance, Infrastructure Management, Data & Analytics, and Cloud Computing. Synechron is one of the world's leading systems integrators for specialist technology solutions, including Murex, Calypso, Pega, and others. It also provides traditional offshoring capabilities with offshore development centres located in Pune, Bangalore, Hyderabad, and Chennai, as well as near-shoring capabilities for European banks with development centres in Serbia. Synechron's technology team works with traditional technologies and platforms like Java, C++, Python, and others, as well as the most cutting-edge technologies from blockchain to artificial intelligence. Learn more at:

Synechron, on behalf of our client, a leading financial corporation, is seeking a Big Data / Hadoop Developer in the financial services space in Charlotte, NC.

Job Responsibilities:

  • Exposure to object storage (Scality, Hitachi, etc.)
  • Coding experience (data wrangling, aggregation, etc.) in Scala using Spark
  • Exposure to Kubernetes instances for compute requirements
  • Strong big data fundamentals and data life cycle exposure
  • Experience in Java and a strong programming background
Provided by Dice. Keywords: Hadoop, Big Data, Spark, Scala, Hive, HDFS, PySpark
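The Scala/Spark requirement above centers on data wrangling and aggregation. As a minimal, hypothetical sketch of that pattern, using plain Scala collections rather than a Spark cluster, and with invented trade data and field names (symbol, qty, price), a group-and-aggregate analogous to Spark's groupBy(...).agg(sum(...)) might look like:

```scala
// Hypothetical illustration of the wrangling/aggregation pattern the posting
// mentions; plain Scala collections stand in for a Spark DataFrame/Dataset.
case class Trade(symbol: String, qty: Long, price: Double)

object TradeAgg {
  // Group trades by symbol and total the notional (qty * price) per symbol,
  // the collection analogue of Spark's groupBy("symbol").agg(sum(...)).
  def notionalBySymbol(trades: Seq[Trade]): Map[String, Double] =
    trades
      .groupBy(_.symbol)
      .view
      .mapValues(_.map(t => t.qty * t.price).sum)
      .toMap

  def main(args: Array[String]): Unit = {
    val trades = Seq(
      Trade("AAPL", 100, 150.0),
      Trade("AAPL", 50, 152.0),
      Trade("MSFT", 200, 90.0)
    )
    println(notionalBySymbol(trades)) // one total per symbol
  }
}
```

On a real cluster the same shape would be expressed against a Dataset[Trade], letting Spark distribute the groupBy and sum across executors.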

 
Apply Now
