
Senior Data Platform Engineer

11/24/2025

The Senior Data Platform Engineer will design, build, and scale Darrow’s next-generation data platform, ensuring its architecture and reliability. This role involves collaborating with various teams to create a robust platform that supports complex data ingestion and AI-powered analytics.

Working Hours

40 hours/week

Company Size

51-200 employees

Language

English

Visa Sponsorship

No

About The Company
Darrow is transforming the legal ecosystem by turning scattered legal data into actionable insights that drive faster, fairer outcomes. We uncover hidden patterns, risks, and opportunities - helping legal professionals move quickly, assess cases more accurately, and resolve disputes more efficiently. Justice Through Intelligence means cutting through complexity and empowering legal professionals to drive real change.
About the Role

You are looking for a job that will truly engage you. You have an entrepreneurial spirit and can make things happen in a fast-paced startup environment. You want to grow and be challenged, but above all you want to work towards a mission, and for your work to have meaning.

We are Darrow – a fast-growing, mission-driven LegalTech startup working to uncover legal wrongdoing and secure justice for impacted parties. Founded in 2020 in Tel Aviv, Israel, Darrow is revolutionizing the justice system. Our team of world-class legal experts and technologists has built an intelligence platform that uncovers egregious violations across legal domains such as privacy and data breaches, consumer protection, securities and financial fraud, environment, and employment.


We are looking for a Senior Data Platform Engineer to design, build, and scale Darrow’s next-generation data platform, the backbone powering our AI-driven insights.

This role sits at the intersection of data engineering, infrastructure, and MLOps, owning the architecture and reliability of our data ecosystem end-to-end.

You’ll work closely with data scientists, R&D teams, and analysts to create a robust platform that supports varying use cases, complex ingestion, and AI-powered analytics.


Responsibilities:   

  • Architect and evolve a scalable, cloud-native data platform that supports batch, streaming, analytics, and AI/LLM workloads across R&D.
  • Help define and implement standards for how data is modeled, stored, governed, and accessed.
  • Design and build data lakes and data warehouses.
  • Develop and maintain complex, reliable, and observable data pipelines.
  • Implement data quality, validation, and monitoring frameworks.
  • Collaborate with ML and data science teams to connect AI/LLM workloads to production data pipelines, enabling RAG, embeddings, and feature engineering flows.
  • Manage and optimize relational and non-relational datastores (Postgres, Elasticsearch, vector DBs, graph DBs).
  • Build internal tools and self-service capabilities that enable teams to easily ingest, transform, and consume data.
  • Contribute to data observability, governance, documentation, and platform visibility.
  • Drive strong engineering practices.
  • Evaluate and integrate emerging technologies that enhance scalability, reliability, and AI integration in the platform.





Requirements


  • 7+ years of experience building and operating data platforms.
  • Strong Python programming skills.
  • Proven experience with cloud data lakes and warehouses (Databricks, Snowflake, or equivalent).
  • Data orchestration experience (Airflow).
  • Solid understanding of AWS services.
  • Proficiency with relational databases and search/analytics stores.
  • Experience designing complex data pipelines, managing data quality, lineage, and observability in production.
  • Familiarity with CI/CD, GitOps, and IaC.
  • Excellent understanding of distributed systems, data partitioning, and schema evolution.
  • Strong communication skills, with the ability to document and present technical designs clearly.

Advantages

  • Experience with vector databases and graph databases.
  • Experience integrating AI/LLM workloads into data pipelines (feature stores, retrieval pipelines, embeddings).
  • Familiarity with event streaming and CDC patterns.
  • Experience with data catalog, lineage, or governance tools.
  • Knowledge of monitoring and alerting stacks.
  • Hands-on experience with multi-source data product architectures.
Key Skills
Data Engineering, Cloud Computing, Python Programming, Data Orchestration, AWS Services, Relational Databases, Data Quality, CI/CD, Distributed Systems, Data Governance, MLOps, Data Pipelines, Data Lakes, Data Warehouses, AI Integration, Monitoring
Categories
Technology, Data & Analytics, Engineering, Legal, Software
Apply Now

Please let Darrow know you found this job on PrepPal. This helps us grow!
