Data Analyst & Engineer
- Location: Quito, Ecuador
- Employment Type
- Industry
- Job Family: Technology
- Career Level: Experienced
MAKE STRATEGY A REALITY | ACCELERATE YOUR GROWTH | CHOOSE YOUR PATH
As the world's leading change and transformation consultancy, we're helping businesses move from strategy to reality by taking a pragmatic and practical approach to building solutions that last.
We're seeking a Data Analyst & Engineer in Quito to help us take vision to value and create lasting impact.
YOU WILL:
- Design and implement data ingestion pipelines using dlt (data load tool) to migrate SQL Server and API data sources into Snowflake, ensuring data quality and reliability (see the illustrative sketch after this list)
- Develop and maintain dbt models following medallion architecture (bronze/silver/gold layers) to transform raw data into business-ready analytics models across multiple location schemas
- Analyze legacy Python automation scripts to extract business logic and reimplement them as modern, scalable data transformations in dbt
- Deliver code across three integrated repositories (legacy automation, Snowflake ingestion, dbt transformations) while maintaining version control best practices and documentation
- Participate in daily standups to communicate progress, blockers, and technical decisions to cross-functional teams
- Implement data quality tests and monitoring to ensure transformation accuracy and maintain trust in analytics outputs
- Collaborate with stakeholders to understand reporting requirements and translate them into efficient data models
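For context, here is a minimal sketch of the kind of dlt-to-Snowflake ingestion described above. The resource name, endpoint, and dataset name are illustrative assumptions rather than details of the role, and Snowflake credentials would come from dlt's secrets configuration, not from code.

```python
import dlt
import requests


# Hypothetical API resource: the real sources for this role would be SQL Server
# tables and vendor APIs, but the loading pattern is the same.
@dlt.resource(name="orders", write_disposition="merge", primary_key="id")
def orders(api_url: str = "https://example.com/api/orders"):
    yield requests.get(api_url, timeout=30).json()


# Load into a "bronze" (raw) dataset in Snowflake. Account, user, and warehouse
# details are read from dlt's secrets.toml / environment variables, not this file.
pipeline = dlt.pipeline(
    pipeline_name="raw_ingestion",
    destination="snowflake",
    dataset_name="bronze",
)

load_info = pipeline.run(orders())
print(load_info)  # per-load summary, a simple starting point for monitoring
```

Downstream dbt models would then read these bronze tables and promote them through the silver and gold layers.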
IDEALLY, WE'D LIKE:
- Hands-on experience with the dlt (data load tool) framework or similar data ingestion tools (Airbyte, Fivetran, etc.)
- Experience migrating legacy systems to modern cloud-based data platforms, particularly from on-premise to Snowflake
- Familiarity with hospitality or retail analytics, including POS systems, revenue management, or business intelligence reporting
- Knowledge of data quality frameworks, testing strategies, and data observability best practices
- Experience with multi-tenant or multi-location data architectures requiring schema isolation and standardization
- Understanding of agile development methodologies and experience working in sprint-based delivery teams
- Bachelor's degree in Computer Science, Engineering, or a related field
- 5+ years of experience as a Data Engineer or in a similar role
- Hands-on experience with Azure Data Factory for building and managing data pipelines
- Proficiency in Power BI, with hands-on experience creating dashboards, data models, and visual reports
- Proficiency in Databricks for data transformation, analytics, and collaborative data science workflows
- Experience with Azure and Microsoft Fabric, including data integration, pipeline creation, and working with cloud-based data architecture
- Experience integrating Azure services for scalable data solutions
- Strong understanding of data lake architecture and ETL/ELT processes using Azure tools
- Proficiency in programming languages such as Python
- Extensive experience with SQL
- Expertise in cloud platforms (Azure) and data warehousing solutions (e.g., Snowflake, BigQuery)
- Familiarity with big data technologies (e.g., Spark)
- Solid understanding of data security, compliance, and governance principles
- Experience with version control systems (e.g., Git) and CI/CD practices
- Strong analytical, problem-solving, and communication skills
- Expertise in identifying complex data challenges, evaluating options, and implementing scalable and effective solutions
- Proficiency in translating technical concepts into business value and engaging effectively with both technical and non-technical stakeholders
REQUIRED SKILLS:
- 3+ years of experience with SQL and data modeling, with a strong understanding of dimensional modeling and analytics schema design
- Hands-on experience with dbt (data build tool), including model development, testing, documentation, and deployment across environments (dev/test/prod) (see the sketch after this list)
- Proficiency in Python for data engineering tasks, including experience with data pipeline frameworks and working with APIs and database connections
- Experience with cloud data warehouses (Snowflake preferred) and understanding of modern data stack architectures
- Strong understanding of ETL/ELT patterns and medallion architecture (bronze/silver/gold layers)
- Ability to read and reverse-engineer legacy code to extract business logic and data transformation requirements
- Excellent communication skills with the ability to articulate technical concepts and provide clear status updates in standup meetings
- Experience with Git version control and collaborative development workflows
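As a rough illustration of the dbt point above (model development, tests, and deployment across dev/test/prod), dbt-core 1.5+ can be driven programmatically. The target names and the "silver" selector below are assumptions standing in for whatever a real profiles.yml and project would define.

```python
from dbt.cli.main import dbtRunner, dbtRunnerResult

# Assumed targets defined in profiles.yml; names are illustrative only.
TARGETS = ["dev", "test", "prod"]


def build_models(target: str) -> bool:
    """Run models and their tests ("dbt build") against one environment."""
    runner = dbtRunner()
    result: dbtRunnerResult = runner.invoke(
        ["build", "--select", "silver", "--target", target]
    )
    return result.success


if __name__ == "__main__":
    for env in TARGETS:
        ok = build_models(env)
        print(f"{env}: {'passed' if ok else 'failed'}")
        if not ok:
            break  # do not promote further if an earlier environment fails
```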
DESIRABLES:
- Experience with containerization tools (e.g., Docker, Kubernetes)
- Knowledge of machine learning frameworks (e.g., TensorFlow, PyTorch)
Applicants must be authorized to work in Ecuador, without the need for visa sponsorship by North Highland. Work visa sponsorship will not be provided, either now or in the future, for this position.
North Highland is an equal opportunity employer, and we adhere to all applicable laws and regulations to ensure a fair and equitable workplace. All qualified applicants will receive fair and impartial consideration without regard to race, color, sex, gender identity, religion, national origin, age, sexual orientation, disability, veteran status, or any other characteristic protected by law. We handle all information in accordance with local privacy standards and maintain strict confidentiality.
Reference: 48861