Capgemini
A global leader in partnering with companies to transform and manage their business by harnessing the power of technology.
4 Rounds • ~21 Days • Medium Difficulty
The Interview Loop
Recruiter Screen (30 min)
Standard fit check, behavioral questions, and resume overview.
Technical Loop (3-4 Rounds)
Deep dive into domain knowledge, coding, and system design.
Interview Question Bank
Data Engineer • Behavioral • Medium
Tell me about a time you had to explain a complex technical data pipeline issue to a non-technical client stakeholder.
#Communication #Consulting
Data Engineer • Behavioral • Medium
Describe a situation at a previous client where you had a very tight deadline for a data migration project. How did you prioritize your tasks?
#Time Management #Agile
Data Engineer • Behavioral • Medium
How do you handle changing data requirements from a client midway through an Agile sprint?
#Agile #Adaptability #Client Management
Data Engineer • Behavioral • Hard
Tell me about a time you disagreed with a senior architect's design choice for a data pipeline. How did you resolve the disagreement?
#Conflict Resolution #Technical Leadership
Data Engineer • Behavioral • Medium
Describe a time you proactively identified a performance bottleneck in a production data pipeline and fixed it without being asked.
#Proactivity #Problem Solving
Data Engineer • Coding • Medium
Write a SQL query to find the nth highest salary for each department from an Employee table.
#Window Functions #DENSE_RANK #CTEs
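One possible answer, sketched here with SQLite as a stand-in engine (the Employee schema and sample data below are illustrative assumptions, not part of the question):

```python
import sqlite3

# Illustrative schema and data; SQLite 3.25+ supports window functions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Employee (name TEXT, department TEXT, salary INTEGER);
INSERT INTO Employee VALUES
  ('Alice', 'Eng', 120), ('Bob', 'Eng', 110), ('Carol', 'Eng', 110),
  ('Dan', 'Eng', 100), ('Eve', 'Sales', 90), ('Frank', 'Sales', 80);
""")

N = 2  # the "nth" highest salary to find
rows = conn.execute("""
WITH ranked AS (
  SELECT department, salary,
         DENSE_RANK() OVER (PARTITION BY department ORDER BY salary DESC) AS rnk
  FROM Employee
)
SELECT DISTINCT department, salary FROM ranked WHERE rnk = ?
""", (N,)).fetchall()
print(rows)  # 2nd highest salary per department
```

DENSE_RANK (rather than ROW_NUMBER) ensures that tied salaries count as a single rank, so the "2nd highest" is a distinct salary value.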
Data Engineer • Coding • Hard
Write a SQL query to identify users who have logged in for 3 or more consecutive days.
#Advanced SQL #Self Joins #LEAD/LAG #Date Functions
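A sketch of the classic "gaps and islands" approach, run on SQLite for illustration; the `logins` table and its columns are assumed:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE logins (user_id TEXT, login_date TEXT);
INSERT INTO logins VALUES
  ('u1','2024-01-01'), ('u1','2024-01-02'), ('u1','2024-01-03'),
  ('u2','2024-01-01'), ('u2','2024-01-03'), ('u2','2024-01-04');
""")

streaks = conn.execute("""
WITH d AS (
  SELECT DISTINCT user_id, login_date FROM logins
),
grp AS (
  SELECT user_id, login_date,
         -- consecutive dates share the same (date - row_number) anchor
         julianday(login_date)
           - ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY login_date) AS anchor
  FROM d
)
SELECT user_id FROM grp
GROUP BY user_id, anchor
HAVING COUNT(*) >= 3
""").fetchall()
print(streaks)  # users with a streak of 3+ consecutive login days
```

Subtracting the row number from each date collapses every run of consecutive days onto a single constant, so a streak is just a group with COUNT(*) >= 3.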
Data Engineer • Coding • Medium
Write a SQL MERGE statement to implement Slowly Changing Dimension (SCD) Type 2 logic for a customer dimension table.
#Data Warehousing #SCD Type 2 #MERGE Statement
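SQLite has no MERGE, so the SCD Type 2 logic is sketched below as the equivalent UPDATE (expire the current row) plus INSERT (add the new version); warehouses such as Snowflake express the same two branches in one MERGE. The table and column names are illustrative assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_customer (
  customer_id INTEGER, city TEXT,
  valid_from TEXT, valid_to TEXT, is_current INTEGER
);
INSERT INTO dim_customer VALUES (1, 'Paris', '2023-01-01', '9999-12-31', 1);
CREATE TABLE staging (customer_id INTEGER, city TEXT, load_date TEXT);
INSERT INTO staging VALUES (1, 'Lyon', '2024-06-01');
""")

-- this comment style is SQL; the two statements below are the MERGE branches
# 1) Expire the current row when a tracked attribute changed.
conn.execute("""
UPDATE dim_customer
SET valid_to = (SELECT load_date FROM staging s
                WHERE s.customer_id = dim_customer.customer_id),
    is_current = 0
WHERE is_current = 1
  AND EXISTS (SELECT 1 FROM staging s
              WHERE s.customer_id = dim_customer.customer_id
                AND s.city <> dim_customer.city)
""")
# 2) Insert the new version as the current row.
conn.execute("""
INSERT INTO dim_customer
SELECT s.customer_id, s.city, s.load_date, '9999-12-31', 1
FROM staging s
WHERE NOT EXISTS (SELECT 1 FROM dim_customer d
                  WHERE d.customer_id = s.customer_id
                    AND d.is_current = 1 AND d.city = s.city)
""")
rows = conn.execute(
    "SELECT city, is_current FROM dim_customer ORDER BY valid_from").fetchall()
print(rows)  # old version expired, new version current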
Data Engineer • Coding • Medium
Write a PySpark script to read a CSV file, drop rows with nulls in a specific column, and write the output to Parquet partitioned by a date column.
#PySpark #DataFrames #I/O Operations
Data Engineer • Coding • Hard
Write PySpark code to flatten a deeply nested JSON schema into a flat tabular DataFrame.
#PySpark #Complex Data Types #JSON
Data Engineer • Coding • Medium
Given a PySpark DataFrame, how do you find the second most frequent item in a specific column?
#PySpark #Aggregations #Window Functions
Data Engineer • Coding • Easy
Write a Python function to check if a given string is a valid palindrome, ignoring spaces, case, and special characters.
#Python #String Manipulation
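A minimal solution sketch:

```python
def is_palindrome(s: str) -> bool:
    """Palindrome check that keeps only alphanumeric characters, case-insensitive."""
    cleaned = [c.lower() for c in s if c.isalnum()]
    return cleaned == cleaned[::-1]

print(is_palindrome("A man, a plan, a canal: Panama"))  # True
print(is_palindrome("hello"))                           # False
```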
Data Engineer • Coding • Medium
Write a Python script to merge multiple large CSV files efficiently without loading them entirely into memory.
#Python #File I/O #Memory Management
Data Engineer • Coding • Medium
Implement a Python function to find the length of the longest substring without repeating characters.
#Python #Sliding Window
Data Engineer • System Design • Medium
Design a data model for a retail client migrating their legacy on-premise data warehouse to Snowflake.
#Data Modeling #Snowflake #Cloud Migration
Data Engineer • System Design • Medium
How would you implement Change Data Capture (CDC) in a modern cloud data stack?
#CDC #Data Architecture #Streaming
Data Engineer • System Design • Hard
Design a real-time streaming pipeline for IoT sensor data using Azure services.
#Azure #Streaming #IoT #Architecture
Data Engineer • Technical • Easy
Explain the exact differences between RANK(), DENSE_RANK(), and ROW_NUMBER() with a practical example.
#Window Functions #Data Ranking
Data Engineer • Technical • Medium
You have a slow-running SQL query with multiple joins on large tables. Walk me through your step-by-step approach to optimize it.
#Performance Tuning #Execution Plans #Indexing
Data Engineer • Technical • Hard
How does Apache Spark handle data skewness? Explain techniques like salting.
#PySpark #Performance Optimization #Data Skew
Data Engineer • Technical • Easy
What is the difference between narrow and wide transformations in Spark? Give examples of each.
#PySpark #Spark Architecture #Transformations
Data Engineer • Technical • Medium
Explain Broadcast Hash Join and Sort Merge Join in Spark. When would you use one over the other?
#PySpark #Joins #Optimization
Data Engineer • Technical • Medium
How do you manage memory in PySpark? Explain the difference between cache() and persist().
#PySpark #Memory Management
Data Engineer • Technical • Hard
Explain the Catalyst Optimizer in Spark. What are its main phases?
#Spark Architecture #Catalyst Optimizer
Data Engineer • Technical • Medium
How do you handle the 'small files problem' in Spark and HDFS/Cloud Storage?
#PySpark #Storage Optimization
Data Engineer • Technical • Easy
What is the difference between repartition() and coalesce() in PySpark?
#PySpark #Partitioning
Data Engineer • Technical • Easy
Explain the differences between a Star Schema and a Snowflake Schema. What are the pros and cons of each?
#Data Warehousing #Dimensional Modeling
Data Engineer • Technical • Medium
What are Slowly Changing Dimensions? Explain Type 1, Type 2, and Type 3 with examples.
#Data Warehousing #SCD
Data Engineer • Technical • Medium
How do you pass parameters dynamically between activities and pipelines in Azure Data Factory (ADF)?
#Azure Data Factory #Pipeline Orchestration
Data Engineer • Technical • Medium
Explain the architecture of Databricks. What is the difference between the control plane and the data plane?
#Databricks #Cloud Architecture
Data Engineer • Technical • Easy
How do you handle pipeline failures, retries, and alerting in Azure Data Factory?
#Azure Data Factory #Error Handling
Data Engineer • Technical • Medium
Describe the Medallion Architecture (Bronze, Silver, Gold) commonly used in Databricks.
#Databricks #Data Lakehouse #Medallion Architecture
Data Engineer • Technical • Medium
How do you secure data at rest and in transit in Azure Data Lake Storage Gen2?
#Azure #Data Security
Data Engineer • Technical • Medium
Explain generators and decorators in Python. Provide a practical use case for a data engineering pipeline.
#Python #Advanced Python
Data Engineer • Technical • Hard
What is the Global Interpreter Lock (GIL) in Python, and how does it affect multithreading in data processing tasks?
#Python #Concurrency
Difficulty Radar
Based on recent AI-sourced data.
Meet Your Interviewers
The "Standard" Interviewer
Senior Engineer
Focuses on core competencies, system constraints, and clear communication.
Unwritten Rules
Think Out Loud
Always explain your thought process before writing code or drawing architecture.