Cloud Data Engineer
At Galeo we develop projects for our clients end to end: we generally design, develop, deploy and maintain them. Herein lies one of our main added values: having profiles of all kinds, so we can cover the entire delivery chain of a project.
We focus on the public cloud (AWS, Azure and GCP) and, although we do a bit of everything, our forte is IoT projects and Big Data platforms in all their breadth.
We like to be daring and innovative, to try new technologies, and to automate every component of our projects.
We are looking for all-round data engineers with experience in more than one cloud. At Galeo we work regularly with Spark on EMR or Databricks, with Kafka, Redshift, BigQuery, Airflow... We are confident that data engineers, sitting at the core of complex platforms, end up being among the most versatile engineers in the market: over their careers they acquire knowledge of many different cloud services, as well as networking, IaC tools, containers and security, gradually becoming naturally prepared to design complex architectures.
The common working language for all engineers at Galeo is Python, although clients sometimes request Scala projects for this position.
We will highly value your communication skills and your ability to work in a team and to communicate with our clients; both will be important for the correct evolution of your career plan.
The facts are clear:
- A competitive salary, complemented by an attractive performance-based bonus.
- A training and professional growth plan. All GALEO engineers have annual training and certification objectives, customized and designed together with the company.
- Real access to certifications: AWS, Azure, GCP, Kubernetes, Terraform, Confluent, Databricks…
- An informal, flexible and friendly work environment. Work is done in small teams, 100% remote or from our new office in Madrid, so you can work from the location that best suits you. We hold two face-to-face get-togethers a year in a festive setting.
- Other financial incentives, such as rewards for referring that brilliant friend of yours to Galeo, or for mentoring people more junior than you. More than 70% of Galeo’s employees come from internal referrals – would that be the case if it weren’t a great place to work?
Roles and responsibilities
These will be your main functions:
- Work on projects in small teams, with the support of and direct contact with a larger, very experienced team.
- Work with Spark and other batch technologies.
- Work with Python in the creation of small programs and scripts, e.g. for streaming data processing.
- Work with all kinds of databases and engines: Postgres, Redshift, BigQuery, Athena, MongoDB…
- Work on the deployment of infrastructure or project components using IaC.
We do not expect you to master everything; we also value what is not written here, and what matters most to us is your ability to absorb new knowledge:
- 3 to 5 years of experience as a Data Engineer.
- Experience in the public cloud (AWS and Azure); you should be strong in at least one of the two.
- A solid command of Spark, as well as of building data lakes/delta lakes.
- Experience with Docker and containers is valuable.
- Experience with batch process orchestration tools is valuable.
- Experience in projects related to Kafka is a plus.
- Experience with Terraform or CloudFormation is a plus.
- Basic knowledge of Django, Flask or FastAPI for specific needs is valuable.
- Fundamental: the right personality to work in a team and deal directly with our customers. We work remotely, so communication with the rest of the world is essential for the formula to work.
- The most important thing is that you have experience with different technologies and are motivated to keep learning from each technological ecosystem.
- Residence in Spain; we don’t care where.
Selection process
- Videoconference with Galeo Tech’s Head of People.
- Videoconference with Galeo Tech’s Tech Manager/CTO.
- Request for references (if not done before or during the selection process).
- Job offer.