Full-Time
Remote
5-10

Senior Full Stack Data Engineer

1/29/2026

The Senior Full Stack Data Engineer will design, develop, and maintain end-to-end data pipelines and scalable lakehouse architectures. They will also collaborate with cross-functional teams to ensure data quality and optimize performance.

Working Hours

40 hours/week

Company Size

11-50 employees

Language

English

Visa Sponsorship

No

About the Company
We are a team of highly experienced sustainability experts, bringing over 20 years of wide-ranging technical, sectoral, and cultural expertise to your projects in Central Asia. This includes providing services around renewable energy supply, energy efficiency, ‘green’ technology and finance, climate change mitigation, water sustainability, green tourism, nature-based solutions, and sustainable consumption and production (SCP). This makes Unison one of the leading organisations contributing to the circular economy in the Central Asia region.

We collaborate with clients to support their efforts to innovate and achieve long-term, sustainable performance improvements. We advise governments, financial institutions, donors and civil society organisations on the most critical issues and provide sustainable solutions for Central Asia. Our partners include renowned international donor organisations and finance institutions, nonprofit organisations, and state authorities at the national, regional and local levels. We aim for large-scale, long-term positive changes in the energy, climate, government and environmental sectors.
About the Role

We are looking for a Senior Full Stack Data Engineer to design, build, and maintain scalable data platforms and pipelines. The ideal candidate has strong hands-on experience across data ingestion, transformation, orchestration, and cloud-based analytics, with a focus on modern lakehouse architectures.

Key Responsibilities
  • Design, develop, and maintain end-to-end data pipelines using Python, PySpark, and SQL
  • Build and optimize data transformation workflows using dbt on Snowflake
  • Develop scalable lakehouse architectures for structured and semi-structured data
  • Implement reliable data ingestion frameworks using Kafka, AWS Glue, and custom connectors
  • Orchestrate workflows and manage dependencies using Apache Airflow (see the illustrative sketch after this list)
  • Manage cloud infrastructure on AWS (S3, Glue, EMR, Redshift/Snowflake integrations)
  • Implement Infrastructure as Code (IaC) using Terraform
  • Collaborate with cross-functional teams to deliver analytics-ready datasets
  • Ensure data quality, performance optimization, and cost efficiency
  • Use GitLab for version control, CI/CD, and collaborative development
  • Monitor, troubleshoot, and resolve data pipeline issues in production environments
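
For a flavor of the orchestration work described above, here is a minimal, hypothetical Airflow DAG that wires a custom ingestion step into a dbt run. Every name in it (the DAG id, the S3 path, the dbt selector) is an assumption for illustration, not a detail of Unison's stack, and the schedule argument assumes Airflow 2.4 or later.

    # Hypothetical DAG: names, paths, and the dbt selector are invented for this sketch.
    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.bash import BashOperator
    from airflow.operators.python import PythonOperator

    def extract_orders():
        # Stand-in for a custom ingestion step (Kafka consumer, API pull, etc.)
        # that would land raw files in S3.
        print("extracting raw orders to s3://example-bucket/raw/orders/")

    with DAG(
        dag_id="orders_pipeline",
        start_date=datetime(2026, 1, 1),
        schedule="@daily",  # argument name used from Airflow 2.4 onward
        catchup=False,
        default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
    ) as dag:
        extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
        dbt_run = BashOperator(task_id="dbt_run", bash_command="dbt run --select orders")
        extract >> dbt_run  # downstream models only run after ingestion succeeds

The explicit extract >> dbt_run dependency is the point of orchestration here: transformation never runs against partially landed data.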

Required Skills & Qualifications
  • 4+ years of experience with AWS data services and cloud-based data engineering
  • Strong programming skills in Python and PySpark (see the sketch after this list)
  • Hands-on experience with Snowflake and dbt for data modeling and transformations
  • Solid understanding of SQL for complex analytical queries
  • Experience with Apache Airflow for workflow orchestration
  • Proficiency in Kafka for real-time/streaming data ingestion
  • Experience with AWS Glue and Apache Airflow for ETL development
  • Experience with Terraform for infrastructure automation
  • Strong experience with GitLab and CI/CD pipelines
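
In the same illustrative spirit, here is a short, hypothetical PySpark job of the kind these requirements imply: read raw semi-structured JSON from S3, deduplicate and type it, and write date-partitioned Parquet in a common lakehouse layout. The bucket, paths, and column names are invented for the example, not taken from this posting.

    # Hypothetical job: bucket, paths, and columns are assumptions for illustration.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders_daily_batch").getOrCreate()

    # Read semi-structured JSON events landed by the ingestion layer.
    raw = spark.read.json("s3://example-bucket/raw/orders/")

    # Light cleanup and typing before the data reaches dbt/Snowflake models.
    clean = (
        raw.dropDuplicates(["order_id"])
           .withColumn("order_ts", F.to_timestamp("order_ts"))
           .withColumn("order_date", F.to_date("order_ts"))
           .filter(F.col("amount") > 0)
    )

    # Columnar output partitioned by date: a common lakehouse layout on S3.
    clean.write.mode("overwrite").partitionBy("order_date").parquet(
        "s3://example-bucket/curated/orders/"
    )
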
Key Skills
Python, PySpark, SQL, Snowflake, dbt, Apache Airflow, Kafka, AWS Glue, Terraform, GitLab
Categories
Technology, Data & Analytics, Software
Apply Now

Please let Unison Group know you found this job on InterviewPal. This helps us grow!
