Data Engineer – Snowflake and DBT | 2025HP12003/#2Zdafega

12/3/2025

The Data Engineer will design and implement scalable ELT pipelines using dbt on Snowflake and build ingestion pipelines from various sources into Snowflake. They will collaborate with stakeholders to deliver clean, validated datasets and support consulting engagements with clear documentation and client-ready solutions.

Working Hours

40 hours/week

Company Size

2-10 employees

Language

English

Visa Sponsorship

No

About The Company
Predictable outcomes through innovation, built on mutual trust! With the emergence of new technologies, many organizations face significant challenges such as increased stakeholder expectations, static or reduced budgets, and the need to do more with less. This has led many of them to turn to IT to enable their future strategies, and this is where we come in: supporting your business to get value from its investment in information technology. As a cloud service provider, we bring a wealth of experience from a team of professionals who understand technology, particularly the Microsoft cloud, and the positive impact it can have on business outcomes, helping businesses meet their objectives, succeed, and grow. Every day, our consultants interact with our customers across time zones, countries, cultures, and languages to deliver innovative solutions and establish a trustworthy relationship as your managed service provider. We operate from Kolkata, India and London, United Kingdom. Innovatively yours!

About the Role

Job Summary

We are looking for an experienced, results-driven Data Engineer to join our growing Data Engineering team. The ideal candidate will be proficient in building scalable, high-performance data transformation pipelines on Snowflake with dbt or Matillion, and able to work effectively in a consulting setup. In this role, you will be instrumental in ingesting, transforming, and delivering high-quality data to enable data-driven decision-making across the client’s organization.

Job Responsibilities

1. Design and implement scalable ELT pipelines using dbt on Snowflake, following industry-accepted best practices.

 

2. Build ingestion pipelines from various sources including relational databases, APIs, cloud storage and flat files into Snowflake.

 

3. Implement data modelling and transformation logic to support a layered architecture (e.g., staging, intermediate, and mart layers, or a medallion architecture), enabling reliable and reusable data assets (a minimal model sketch follows this list).

 

4. Leverage orchestration tools (e.g., Airflow, dbt Cloud, or Azure Data Factory) to schedule and monitor data workflows.

 

5. Apply dbt best practices: modular SQL development, testing, documentation, and version control.

 

6. Perform performance optimizations in dbt/Snowflake through clustering, query profiling, materialization, partitioning, and efficient SQL design.

 

7. Apply CI/CD and Git-based workflows for version-controlled deployments.

 

8. Contribute to the growing internal knowledge base of dbt macros, conventions, and testing frameworks.

 

9. Collaborate with multiple stakeholders such as data analysts, data scientists, and data architects to understand requirements and deliver clean, validated datasets.

 

10. Write well-documented, maintainable code using Git for version control and CI/CD processes.

 

11. Participate in Agile ceremonies including sprint planning, stand-ups, and retrospectives.

 

12. Support consulting engagements through clear documentation, demos, and delivery of client-ready solutions.
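
For illustration only (not part of the requirements): a minimal dbt sketch of the layering, configuration, and materialization choices referred to in points 3, 5, and 6 above. The source, model, and column names (a raw sales orders source, stg_orders, fct_orders) are hypothetical.

-- models/staging/stg_orders.sql (staging layer: rename, cast, light cleanup only)
{{ config(materialized='view') }}

with source as (
    select * from {{ source('sales', 'orders') }}
),
renamed as (
    select
        order_id,
        customer_id,
        cast(order_ts as timestamp_ntz) as ordered_at,
        lower(trim(order_status))       as order_status,
        amount
    from source
)
select * from renamed

-- models/marts/fct_orders.sql (mart layer: incremental materialization, clustered by day)
{{ config(
    materialized='incremental',
    unique_key='order_id',
    cluster_by=['to_date(ordered_at)']
) }}

select order_id, customer_id, ordered_at, order_status, amount
from {{ ref('stg_orders') }}
{% if is_incremental() %}
where ordered_at > (select max(ordered_at) from {{ this }})
{% endif %}

Generic tests (unique, not_null) and column documentation for such models would normally be declared in an accompanying schema.yml.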

Essential Skills

Required Qualifications

 

• 3 to 5 years of experience in data engineering roles, with 2+ years of hands-on experience in Snowflake and dbt or Matillion (Matillion-DPC is highly preferred, not mandatory).

 

• Experience building and deploying DBT models in a production environment.

 

• Expert-level SQL and a strong understanding of ELT patterns and data modelling (Kimball/dimensional modelling preferred).

 

• Familiarity with data quality and validation techniques: dbt tests, dbt docs, etc.

 

• Experience with Git, CI/CD, and deployment workflows in a team setting.

 

• Familiarity with orchestrating workflows using tools like dbt Cloud, Airflow, or Azure Data Factory.

 

Core Competencies:

 

o Data Engineering and ELT Development:

 

Building robust and modular data pipelines using dbt.

Writing efficient SQL for data transformation and performance tuning in Snowflake.

Managing environments, sources, and deployment pipelines in dbt.

 

o Cloud Data Platform Expertise:

 

Strong proficiency with Snowflake: warehouse sizing, query profiling, data loading, and performance optimization.

Experience working with cloud storage (Azure Data Lake, AWS S3, or GCS) for ingestion and external stages.
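
For illustration (the stage, storage integration, table, and bucket names below are hypothetical): ingestion from cloud storage into Snowflake typically goes through an external stage and COPY INTO.

create stage if not exists raw.sales.orders_stage
  url = 's3://example-bucket/orders/'
  storage_integration = s3_int
  file_format = (type = 'csv' skip_header = 1 field_optionally_enclosed_by = '"');

-- Load files matching the pattern; fail fast on bad records
copy into raw.sales.orders
  from @raw.sales.orders_stage
  pattern = '.*orders_.*[.]csv'
  on_error = 'abort_statement';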

 

Technical Toolset:

 

o Languages & Frameworks:

 

Python: for data transformation, notebook development, and automation.

SQL: Strong grasp of SQL for querying and performance tuning.

 

 

Best Practices and Standards:

 

o Knowledge of modern data architecture concepts, including layered architecture (e.g., staging → intermediate → marts, or medallion architecture).

 

o Familiarity with data quality, unit testing (dbt tests), and documentation (dbt docs).
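
As an illustrative sketch (model and column names are hypothetical): generic dbt tests such as unique and not_null are declared in YAML, while a singular test is simply a SELECT that returns failing rows.

-- tests/assert_no_negative_order_amounts.sql
-- The test fails if this query returns any rows.
select order_id, amount
from {{ ref('fct_orders') }}
where amount < 0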

 

Security & Governance:

 

o Access and Permissions:

 

Understanding of access control within Snowflake (RBAC), role hierarchies, and secure data handling.

Familiar with data privacy policies (GDPR basics), encryption at rest/in transit.
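
A minimal Snowflake RBAC sketch for illustration; the role, database, schema, and warehouse names are hypothetical.

create role if not exists analytics_reader;

-- Read-only access to the marts schema, including future tables
grant usage on database analytics to role analytics_reader;
grant usage on schema analytics.marts to role analytics_reader;
grant select on all tables in schema analytics.marts to role analytics_reader;
grant select on future tables in schema analytics.marts to role analytics_reader;
grant usage on warehouse reporting_wh to role analytics_reader;

-- Role hierarchy: the reader role rolls up to a broader analyst role
grant role analytics_reader to role analyst;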

 

Deployment & Monitoring:

 

o DevOps and Automation:

 

Version control using Git, experience with CI/CD practices in a data context.

Monitoring and logging of pipeline executions, alerting on failures.
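
For illustration, one way to surface failed statements from Snowflake's account usage views (which lag real time by up to a few hours) as input to an alerting job:

select query_id,
       user_name,
       warehouse_name,
       error_code,
       error_message,
       start_time
from snowflake.account_usage.query_history
where execution_status = 'FAIL'
  and start_time >= dateadd('hour', -24, current_timestamp())
order by start_time desc;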

 

Soft Skills:

 

o Communication & Collaboration:

 

Ability to present solutions and handle client demos/discussions.

Work closely with onshore and offshore teams of analysts, data scientists, and architects.

Ability to document pipelines and transformations clearly.

Basic Agile/Scrum familiarity – working in sprints and logging tasks.

Comfort with ambiguity, competing priorities, and fast-changing client environments.

 

 

 

Education:

 

o Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.

 

o Certifications such as Snowflake SnowPro or dbt Certified Developer are a plus.

 

Please note the mandatory or most preferred skills for this role:

Must have experience in Snowflake

Must have experience in DBT or Matillion (Matillion-DPC is highly preferred)

Must have experience in SSIS

Background Check required

No criminal record

Others

• Interview process: 2-3 technical rounds

• This is a 5-day-a-week, work-from-office role based in Hyderabad

• You must be open to relocation or travel

• You must be able to join immediately



Key Skills
Data Engineering, Snowflake, dbt, Matillion, SQL, ELT, Data Modelling, Data Quality, Git, CI/CD, Airflow, Azure Data Factory, Python, Documentation, Agile, Collaboration
Categories
Technology, Data & Analytics, Consulting