

Big Data Lead
Our partner is a well-established software development center in Bucharest, part of one of the world's leading financial groups.
The hub continues to expand by strengthening teams, developing new products, and enhancing the technical architecture of its key enterprise platforms through re-engineering efforts.
The role is part of the technical structure supporting the Treasury division, whose Big Data & BI ecosystem provides relevant, real-time data analysis to financial departments and ensures regulatory compliance.
As a Big Data Lead, you'll help shape data strategy, optimize complex architectures, and mentor a talented team within one of the largest financial institutions in the world.
The technology stack mixes modern and legacy technologies, with the aim of harmonizing them and transitioning towards a more unified ecosystem: Hadoop, GCP, HDFS, Oracle, Flume, Spark, Scala, Hive, Impala, Elasticsearch, etc.
Your key responsibilities will revolve around:
Tech Leadership & Hands-on Coding (60-70% of your time)
Collaborate with business analysts and data scientists to translate business requirements into scalable data solutions.
Design end-to-end data workflows and ensure data lakes meet business and regulatory needs.
Design and develop scalable, real-time data pipelines using Scala, Spark, and Hadoop.
Build distributed and reliable ETL workflows for streaming and batch processing.
Optimize Spark performance, fine-tune applications, and enhance data processing throughput.
Work with Hive, Impala, Elasticsearch, Flume, and Kibana to manage structured and unstructured data.
Ensure high-quality data ingestion from diverse sources into the Hadoop ecosystem.
Develop advanced SQL-based transformations and efficient joins for Big Data analytics.
People Management & Team Leadership (30-40% of your time)
Lead, mentor, and grow a team of approx. 10 data engineers (Big Data & BI).
Set best practices for coding, performance tuning, and data governance.
Provide technical direction and drive team innovation in Big Data solutions.
Work cross-functionally with stakeholders, architects, and product owners to deliver key projects.
To thrive in this role, you'll have:
At least 5 years of hands-on experience with Hadoop, Scala, and Spark.
Deep understanding of concurrency, multi-threading, and distributed computing.
Strong SQL and database expertise, with experience handling complex data structures.
Proven leadership skills – ability to mentor, guide, and manage a development team.
Experience in designing scalable Big Data architectures and implementing data workflows.
Ability to troubleshoot, optimize, and fine-tune Hadoop ecosystem performance.
Capable of working in virtual, cross-functional teams within a matrix organization.
Your benefits:
Annual bonuses based on performance.
Medical care, life insurance, meal vouchers, and other discounts.
Hybrid work (2-3 days/week onsite) in Pipera, close to the metro station, in some of the coolest offices in town.
24 days of holiday, plus extra days off.
A culture of continuous learning.
If you feel you still have some gaps to fill for this position, don't give up: let's talk first.
Hybrid, Bucharest


