Data Engineer

Experience: 3+ years

Requirements:

  • Implement the tools and processes required for a data processing pipeline
  • Primary responsibilities include building ETL (extract, transform, load) pipelines and monitoring/maintaining pipeline performance
  • Familiarity with key data architectures, including the Lambda and Kappa architectures
  • Broad experience across SQL and NoSQL data stores (e.g., PostgreSQL, MongoDB, InfluxDB, Neo4j, Redis)
  • Experience in messaging systems (e.g., Apache Kafka, RabbitMQ)
  • Experience with cloud data warehouses (e.g., BigQuery, Redshift, Snowflake)
  • Experience building solutions that collect, process, store, and analyze high-volume data, fast-moving data, or data with significant schema variability

Technologies

  • Data stores: SQL, PostgreSQL, BigQuery, Redshift, Snowflake, MariaDB, MongoDB, InfluxDB, Neo4j, Redis, Elasticsearch
  • Programming/scripting languages and frameworks: Scala, SQL, Python, MapReduce
  • Cloud: AWS / Azure / GCP

    Apply Now to Join Us
