Big Data Engineer

Driving Digital Transformation through Big Data

Job details

Apply now

Sign up to apply

Or sign up to refer and earn a reward of €800


Big Data is not just a buzzword. It's a reality that is helping transform lives, companies and the entire world. We are among the few companies officially certified in Spark. We are breaking down barriers in the development of Big Data technologies, and we are building an unbeatable team of experts.

We are Stratians. What does that mean? It means we make mistakes because we are not afraid to try new things. We are grateful to each other. We listen, we share and we learn. At Stratio, every opinion counts. We dream of making the impossible possible. We dare to dream.

Why is it interesting to work with us? Because we work with the latest technologies, with responsibility, flexibility and a fantastic work environment. Once you walk through Stratio's door, you realize there is a line separating the outside world from a new ecosystem: one that is creative, technological and geeky.

As you walk along the hallways, you sense the enthusiasm of its inhabitants, engaged with everything around them in unexpected spaces: from the entertainment area with its ping-pong table and arcade machine, through the meeting rooms, to our very particular terrace with stunning views of Madrid, a foosball table and the 'zen' zone where you can reorganize your ideas.

You are not interested in counting the hours your teammates spend on their work, and neither are we. We will not shadow you to see whether you stay late at the office. The only thing that matters to us is the quality of the final result.

We're looking for the best Big Data Engineer to join one of the pioneering companies in Big Data development. We are looking for one of our own. Do you dare to join us?

Main requirements

  • Experience with Java/Scala
  • Parallel distributed processing based on the MapReduce paradigm with Spark or Apache Hadoop
  • Experience or knowledge of any of the main Big Data distributions: Cloudera, Hortonworks, MapR, etc.
  • Distributed processing of real-time data with Storm or Spark Streaming
  • Knowledge of NoSQL databases: Cassandra, MongoDB, HBase, Redis, Aerospike, etc.
  • Cluster resource management: Mesos, YARN
  • Experience with recommendation engines and machine learning algorithms: Apache Mahout, MLlib, etc.
  • CEP engines such as Esper CEP or Siddhi
  • Experience with virtualization and cloud deployment: AWS, JCloud, Docker
  • Experience with search engines based on Lucene such as Elasticsearch

Nice to have

  • Familiarity with Git
  • Experience or knowledge of the tools available in the Hadoop ecosystem: Hive, Pig, Flume, Zookeeper, Sqoop, etc.

What we offer

  • On your first day you will receive a welcome pack, and you can sign up for soccer tournaments, ping-pong, padel or Nerf battles, or demonstrate your culinary skills in our small MasterChef contests
  • We carry out R+D+i in collaboration with universities on an 80/20 basis. That's how we are: always looking for an excuse to stay connected with technology
  • Flexible working hours: we believe a flexible schedule helps you balance your personal and professional life. Working from 8:00 to 15:00, for example, is possible
  • Ticket Restaurant, an allowance to use if you prefer to eat outside the office
  • Transport cards
  • Help with getting certifications
  • Do you have children? We also offer nursery vouchers
  • You will have a health insurance that even Dr. House would approve of
  • Training: you will be able to improve your English and technology through classes taught at our offices
  • Access to the best training events (Big Data Spain)
  • Fresh fruit in the office
  • Telecommuting
