

Spark/Scala Developer - AD1203

Toronto, ON · Banking/Loans

 

We are looking for a hands-on Big Data Engineer to join our Data Engineering team. The Big Data Engineer on our Datahub team is accountable for delivering the infrastructure solutions for assigned big data applications throughout the complete use case lifecycle. Responsibilities include identifying and documenting big data use case requirements; leading the design and development of solutions; owning the implementation and production rollout of those solutions; and training production staff for steady-state support. The solutions delivered need to be adaptable across markets, resilient, scalable, secure, and high-performing, meeting all functional and non-functional requirements.
 

  • Design and develop data ingestion and processing/transformation frameworks leveraging open-source Hadoop-ecosystem tools.
  • Lead the architectural design and solution implementation of large-scale big data use cases.
  • Design, develop, and integrate ETL/ELT data pipelines.
  • Actively participate in addressing non-functional requirements such as performance, security, scalability, continuous integration, migration, and compatibility.
  • Take ownership of each feature from design through the first lines of code to how it performs in production ("You build it, you run it").
  • Ensure fully automated testing by designing and writing automated unit, integration, and acceptance tests.
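As a flavour of the ingestion/transformation work above, here is a minimal sketch in plain Scala: the collection operators (map, filter, groupBy) mirror the Spark Dataset API the role uses, but the example runs without a Spark cluster. All names and sample data are illustrative, not part of any actual Datahub pipeline.

```scala
// Hypothetical ETL-style aggregation: parse raw "user,amount" records,
// drop malformed rows, and total the amounts per user. The same
// map/filter/groupBy shape translates directly to a Spark Dataset job.
object EtlSketch {
  def totalsByUser(lines: Seq[String]): Map[String, Double] =
    lines
      .map(_.split(","))
      .filter(_.length == 2)                     // skip malformed rows
      .flatMap { case Array(user, amt) =>
        // Keep only rows whose amount parses as a number
        scala.util.Try(amt.trim.toDouble).toOption.map(user.trim -> _)
      }
      .groupBy(_._1)
      .map { case (user, pairs) => user -> pairs.map(_._2).sum }

  def main(args: Array[String]): Unit = {
    val raw = Seq("alice,10.5", "bob,3", "alice,4.5", "badline")
    println(totalsByUser(raw)) // totals: alice 15.0, bob 3.0
  }
}
```

In a real Spark job the `Seq` would be a `Dataset[String]` read from HDFS or Kafka, but the transformation logic reads the same way.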
     

What do you bring to the role?
 

  • BS/MS in computer science or equivalent technical experience.
  • Strong coding skills in Scala with the Spark framework, plus some experience with shell scripting.
  • A coding background in either Java or R.
  • Experience in API development, API product expertise, API design patterns, and API security (API key validation, authentication, authorization, and identity).
  • At least 2 years of hands-on Hadoop and data engineering experience with Apache Hadoop distributions.
  • Strong knowledge of open-source big data technologies such as Hadoop, workflow managers (e.g., NiFi), Kafka, Druid, Hive, Storm, Ignite, and Kudu.
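As a toy illustration of the API-security bullet (API key validation), a constant-time key comparison in Scala might look like the sketch below. The object and method names are hypothetical and not part of any real product API; they only show the general idea.

```scala
// Hypothetical API key check. Comparing keys in constant time (rather
// than with ==, which exits at the first mismatch) avoids leaking key
// prefixes through response latency.
object ApiKeyCheck {
  def constantTimeEquals(a: String, b: String): Boolean = {
    if (a.length != b.length) false
    else a.zip(b).foldLeft(0) { case (acc, (x, y)) => acc | (x ^ y) } == 0
  }

  // A presented key is authorized if it matches any valid key.
  def isAuthorized(presented: String, validKeys: Set[String]): Boolean =
    validKeys.exists(constantTimeEquals(_, presented))
}
```

In production this check would sit behind an authentication gateway and the keys would come from a secrets store, not an in-memory `Set`.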
