DevOps Engineer - TI157453

Edmonton, AB
Our Client is building the TV Experience of the future!

This is what they have to say:
If you want to be part of the most exciting journey, join our team! We are going to reinvent how entertainment is brought to people through technology, content, and an exciting user experience.
We will use the most modern and advanced technologies, creating small, independent, and cohesive teams with diverse skills, all working toward a commonly agreed target: rebuilding entertainment!

Here’s the impact you’ll make and what we’ll accomplish together:
  • We need you to design, build, and deploy the technology behind the new TV experience, contributing your knowledge, enthusiasm, and creativity to our teams.
  • The technology we build with you will set the reference for the TV of the future, and we will leverage the full power of our network to build an immersive, complete multimedia environment.
  • What we plan to build does not yet exist in the market, and we want to be the landmark for innovation in this exciting space.
  • We will partner with the most advanced technology players to redesign the TV architecture, and we will leverage the cloud for virtually unlimited computing power.
  • We plan to reach users across all devices and to create a unique, personal entertainment experience, starting from the TV. We are already collaborating with the leaders in this area, and many others plan to join.
  • We will create a dynamic, open work environment where creativity is supercharged by technology, in a continuous quest for innovation.
Responsibilities and Required Skills:
Big data engineers are responsible for developing, maintaining, evaluating, and testing big data solutions, and they are generally involved in designing those solutions as well.
  • Responsible for Hadoop development and implementation, including loading from disparate data sets and preprocessing using Hive and Pig.
  • Scope and deliver various Big Data solutions.
  • Ability to design solutions independently based on high-level architecture.
  • Manage the technical communication between the survey vendor and internal systems.
  • Maintain the production systems (Kafka, Hadoop, Cassandra, Elasticsearch).
  • Collaborate with other development and research teams.
  • Build a cloud-based platform that allows easy development of new applications.
  • Proficient understanding of distributed computing principles.
  • Manage the Hadoop cluster and all of its included services.
  • Ability to resolve ongoing operational issues with the cluster.
  • Proficiency with Hadoop v2, MapReduce, HDFS.
  • Experience with building stream-processing systems, using solutions such as Storm or Spark Streaming.
  • Good knowledge of Big Data querying tools, such as Pig, Hive, and Impala.
  • Experience with Spark.
  • Experience with integration of data from multiple data sources.
  • Experience with NoSQL databases, such as HBase, Cassandra, MongoDB.
  • Knowledge of various ETL techniques and frameworks, such as Flume.
  • Experience with various messaging systems, such as Kafka or RabbitMQ.
  • Experience with Big Data ML toolkits, such as Mahout, Spark MLlib, or H2O.
  • Good understanding of Lambda Architecture, along with its advantages and drawbacks.
  • Experience with a Hadoop distribution such as Cloudera, MapR, or Hortonworks.