Full time

Senior Software Data Engineer

Apply Now

Job description

Vaurse is an early-stage startup on a mission to transform how we recruit, work, and get paid.

Our first product, Vaurse talent, is an AI-based job matching platform that matches candidates with their dream jobs. We leverage intelligent technology to match people with the right opportunities based on their qualifications, skills, and interests. By eliminating the need to send a CV and cover letter for every job, providing real-time information, and promoting transparency, we make hiring simple, fast, and stress-free.

Vaurse talent is committed to ensuring objective hiring decisions. We use unbiased technology and objective assessments to showcase everyone’s potential and help minimize unconscious bias in the hiring process.

We make it super easy for employers to quickly find the best talent and fill job roles without hassle. Think of it as the “Google for Candidates.”

Joining as our Senior Software Data Engineer…

  • You will shape the core product and its tech stack from the ground up.
  • You will participate in the overall design, architecture, and integration of our data systems.
  • You will build, maintain, and run secure cloud-native data collection, ETL, and analytics pipelines.
  • You will provide APIs to build user-facing applications on top of our data backends.
  • You will plan, gather requirements, collaborate, and communicate with stakeholders.
  • You will be a vital part of the technical decision-making process.
  • You will mentor other engineers and strive for best practices.
  • Ultimately, you will use data to help accelerate our mission impact and make a real difference.


What you will bring…

  • 5+ years’ experience as a data engineer or backend engineer with a focus on data pipelines;
  • Fluency in Python or an equivalent language for modern data engineering;
  • Strong experience with industrial-grade relational and non-relational data storage (e.g., PostgreSQL, Elasticsearch), as well as writing and optimizing complex SQL queries and transformations;
  • Strong experience building robust ETL processes, data pipelines, and orchestrators (e.g., Airflow, DVC);
  • Experience with API frameworks (e.g., FastAPI, Django);
  • Experience with Deployment technologies (e.g., Docker, Kubernetes);
  • Comfortable with Git and building CI/CD pipelines (e.g., GitHub, GitHub Actions);
  • Experience with Agile Methodologies and working in Sprints;
  • You are curious and pragmatic, learn new domains quickly, and bring a fresh perspective to our team.

Apply for this Job