InterviewStack.io

Data Validation and Anomaly Detection Questions

Techniques for validating data quality and detecting anomalies with SQL: identifying nulls and missing values, finding duplicates and orphan records, range checks, sanity checks across aggregates, distribution checks, outlier-detection heuristics, reconciliation queries across systems, and building SQL-based alerts and integrity checks. Includes strategies for writing repeatable validation queries, comparing row counts and sums across pipelines, and documenting assumptions for investigative analysis.
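Several of the checks listed above (nulls, duplicates, orphan records) can be packaged as repeatable validation queries. A minimal sketch using Python's sqlite3, where the orders/customers tables and column names are hypothetical:

```python
import sqlite3

def run_validation_checks(conn: sqlite3.Connection) -> dict:
    """Run basic data-quality checks against hypothetical orders/customers tables."""
    checks = {
        # Rows missing a required field
        "null_customer_ids": "SELECT COUNT(*) FROM orders WHERE customer_id IS NULL",
        # order_id values that appear more than once
        "duplicate_order_ids": """
            SELECT COUNT(*) FROM (
                SELECT order_id FROM orders
                GROUP BY order_id HAVING COUNT(*) > 1
            )
        """,
        # Orphans: orders pointing at a customer that does not exist
        "orphan_orders": """
            SELECT COUNT(*) FROM orders o
            LEFT JOIN customers c ON o.customer_id = c.customer_id
            WHERE o.customer_id IS NOT NULL AND c.customer_id IS NULL
        """,
    }
    return {name: conn.execute(sql).fetchone()[0] for name, sql in checks.items()}
```

Returning a name-to-count dict makes the checks easy to assert on in tests or to feed into an alerting job (any nonzero count is a candidate alert).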

Hard · System Design
Design an architecture to run real-time anomaly detection on user event streams and detect conversion-rate drops within 60 seconds. Include components for ingestion, stateful stream processing, low-latency feature computation, model serving or in-process detectors, storage of time-series for analysis, and an alerting pipeline. Consider scale (millions of events per minute), fault tolerance and strategies to reduce false positives.
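Most of this question is architecture, but the in-process detector component can be sketched. One simple, false-positive-resistant approach is to compare each aggregated window's conversion rate against a trailing baseline of recent windows; the window count and 50% drop threshold below are illustrative assumptions, not part of the question:

```python
from collections import deque

class ConversionDropDetector:
    """Flag when the latest window's conversion rate falls more than
    `drop_threshold` (relative) below the mean of recent windows.

    Sketch of the detector component only; ingestion, state management,
    and alert routing would live elsewhere in the pipeline.
    """

    def __init__(self, baseline_windows: int = 10, drop_threshold: float = 0.5):
        self.baseline = deque(maxlen=baseline_windows)  # past per-window rates
        self.drop_threshold = drop_threshold            # 0.5 = alert on a 50% drop

    def observe_window(self, conversions: int, visits: int) -> bool:
        """Feed one aggregated window (e.g. 60s of events); return True on alert."""
        rate = conversions / visits if visits else 0.0
        alert = False
        if len(self.baseline) == self.baseline.maxlen:   # only alert once warm
            baseline_rate = sum(self.baseline) / len(self.baseline)
            if baseline_rate > 0 and rate < baseline_rate * (1 - self.drop_threshold):
                alert = True
        self.baseline.append(rate)
        return alert
```

Requiring a full baseline before alerting is one cheap way to reduce false positives during cold start; production systems would typically add minimum-traffic floors and seasonality-aware baselines on top.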
Medium · Technical
Write a SQL query that flags days with unusually high transaction counts using a rolling 14-day mean and standard deviation. Table: transactions(transaction_id, occurred_at date). Detect days where count > mean + 3 * stddev computed over the prior 14 days (exclude current day from baseline). Return date, count, rolling_mean, rolling_stddev, z_score, is_spike.
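The question asks for SQL, but the rolling-baseline logic it expects (mean and standard deviation over the prior 14 days, current day excluded) can be sketched as a reference implementation. This Python version assumes daily counts have already been aggregated; the output keys mirror the columns the prompt asks for:

```python
from statistics import mean, stdev

def flag_spikes(daily_counts: list[tuple[str, int]], window: int = 14,
                z_cut: float = 3.0) -> list[dict]:
    """For each day, compute mean/stddev over the prior `window` days
    (current day excluded from its own baseline) and flag counts above
    mean + z_cut * stddev."""
    rows = []
    for i, (day, count) in enumerate(daily_counts):
        baseline = [c for _, c in daily_counts[max(0, i - window):i]]
        if len(baseline) < 2:          # not enough history for a stddev
            rows.append({"date": day, "count": count, "is_spike": False})
            continue
        m, s = mean(baseline), stdev(baseline)
        z = (count - m) / s if s else 0.0
        rows.append({"date": day, "count": count, "rolling_mean": m,
                     "rolling_stddev": s, "z_score": z,
                     "is_spike": s > 0 and count > m + z_cut * s})
    return rows
```

In SQL the same exclusion of the current day is typically expressed with a window frame such as ROWS BETWEEN 14 PRECEDING AND 1 PRECEDING.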
Medium · Technical
Implement a Python function detect_mad_outliers(values: List[float], threshold: float = 3.5) -> List[int] that returns indices of outliers using Median Absolute Deviation (MAD). The function should ignore NaNs for computations but return indices relative to original list, and handle small sample sizes gracefully. Include a short docstring describing complexity and behavior.
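One possible shape for the requested function, using the conventional 0.6745 consistency constant so the modified z-score is comparable to a normal z-score. The small-sample and zero-MAD fallbacks here (report no outliers) are one reasonable choice, not the only one:

```python
import math
from typing import List

def _median(sorted_xs: List[float]) -> float:
    n = len(sorted_xs)
    mid = n // 2
    return sorted_xs[mid] if n % 2 else (sorted_xs[mid - 1] + sorted_xs[mid]) / 2

def detect_mad_outliers(values: List[float], threshold: float = 3.5) -> List[int]:
    """Return indices of outliers by Median Absolute Deviation.

    NaNs are ignored when computing the median and MAD, but returned
    indices are relative to the original list. O(n log n) time, O(n)
    space. With fewer than 3 non-NaN values, or MAD == 0 (constant
    data), no outliers are reported.
    """
    clean = [(i, v) for i, v in enumerate(values) if not math.isnan(v)]
    if len(clean) < 3:
        return []
    xs = sorted(v for _, v in clean)
    med = _median(xs)
    mad = _median(sorted(abs(v - med) for v in xs))
    if mad == 0:
        return []
    # 0.6745 scales MAD to estimate the standard deviation under normality
    return [i for i, v in clean if abs(0.6745 * (v - med) / mad) > threshold]
```

The 3.5 default threshold follows the common Iglewicz-Hoberg convention for modified z-scores.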
Hard · Technical
Design a synthetic data generation strategy to produce datasets exercising edge cases and anomalies for pipeline validation. Include methods to generate missing data patterns, duplicate bursts, time skew/latency, distribution shifts, and correlated feature anomalies. Explain how to parameterize generators and integrate them into unit/integration tests.
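A generator along these lines is usually parameterized with one knob per anomaly type, each defaulting to "off", so a test can enable exactly one failure mode. The `ts` field, row schema, and the specific knobs below are hypothetical:

```python
import random

def inject_anomalies(rows: list, *, missing_rate: float = 0.0,
                     duplicate_burst: int = 0, time_skew_s: int = 0,
                     seed: int = 0) -> list:
    """Return a copy of `rows` (dicts with a 'ts' timestamp field) with
    parameterized anomalies injected: randomly nulled fields, a burst of
    duplicated rows, and a constant timestamp skew."""
    rng = random.Random(seed)           # seeded, so test runs are repeatable
    out = []
    for row in rows:
        r = dict(row)
        if time_skew_s:
            r["ts"] = r["ts"] + time_skew_s          # simulate clock skew / latency
        for key in list(r):
            if key != "ts" and rng.random() < missing_rate:
                r[key] = None                        # simulate missing values
        out.append(r)
    if duplicate_burst and out:
        out.extend(dict(out[0]) for _ in range(duplicate_burst))  # duplicate burst
    return out
```

Seeding the RNG is the key integration point for unit tests: the same parameters always produce the same corrupted dataset, so a pipeline's validation checks can be asserted against exact expected counts.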
Hard · Technical
Design a production-ready approach to detect concept drift for a supervised model. Include which metrics to track (prediction distributions, label distributions, feature importance shifts), statistical tests, retraining triggers with human review and canary deployment, and strategies for storing labels and ground truth to support timely evaluation.
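One widely used signal for tracking prediction or feature distribution shift is the Population Stability Index. A minimal sketch, with the caveat that the equal-width binning and the usual 0.1/0.25 rule-of-thumb thresholds are conventions rather than guarantees:

```python
import math

def population_stability_index(expected: list, actual: list,
                               bins: int = 10) -> float:
    """PSI between a baseline sample and a recent sample using equal-width
    bins over the combined range. Rule of thumb: < 0.1 stable, 0.1-0.25
    moderate shift, > 0.25 major shift."""
    lo, hi = min(expected + actual), max(expected + actual)
    width = (hi - lo) / bins or 1.0     # guard against a degenerate range

    def bin_fractions(sample):
        counts = [0] * bins
        for x in sample:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        n = len(sample)
        # small epsilon keeps the log and division defined for empty bins
        return [max(c / n, 1e-6) for c in counts]

    e, a = bin_fractions(expected), bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

A PSI crossing the alert threshold would feed the retraining trigger the question describes, ideally gated by human review rather than firing retraining automatically.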
