
Omkar
Azure Data Engineer
11+ Years of Experience

Technical Skills
Azure Data Factory
Azure Data Lake Storage
Azure Stream Analytics
Azure Data Lake Analytics
Azure Data Catalog
Azure Data Share
Azure Cosmos DB
Azure Event Hubs
Azure Logic Apps
Power BI
Java for Data Engineering
.NET for Data Engineering
Hadoop (including HDFS, Hive, Pig)
NoSQL databases (e.g., MongoDB, Cassandra)
Apache Kafka for Data Streaming
Git for Version Control
Jenkins for Continuous Integration/Continuous Deployment (CI/CD)
DevOps Practices for Data Engineering

Professional Summary
Responsibilities
1. Design, develop, and manage data pipelines for ETL processes, ensuring data integration and transformation.
2. Implement data lake storage solutions, organize data hierarchies, and control access for efficient data storage and retrieval.
3. Create real-time data processing solutions, handling high-velocity data streams with low-latency processing.
4. Develop and optimize big data queries using U-SQL for data analysis and insight generation.
5. Establish and maintain a centralized metadata catalog to discover and understand data assets.
6. Collaborate with stakeholders by securely sharing datasets and insights across Azure subscriptions.
7. Design and manage globally distributed, highly responsive NoSQL databases for scalable applications.
8. Set up event ingestion and processing solutions for real-time analytics and telemetry data.
9. Create workflow automation for data integration and data movement across various services.
10. Develop interactive dashboards and reports for data visualization and business intelligence.
11. Leverage Java for developing custom data processing applications and services.
12. Utilize .NET languages like C# for building data-centric applications and services.
13. Implement and optimize Hadoop-based solutions for big data storage and batch processing.
14. Manage NoSQL databases for unstructured and semi-structured data storage and retrieval.
15. Set up Kafka clusters and implement data streaming solutions for real-time data processing (see the producer sketch after this list).
16. Use version control systems effectively to manage code changes and repositories.
17. Automate the build and deployment processes for data engineering solutions.
18. Implement DevOps principles for infrastructure as code, automated testing, and continuous delivery in data engineering projects.
19. Design and automate end-to-end data pipelines, ensuring data quality and reliability.
20. Cleanse and transform data from various sources to prepare it for analysis and reporting.
21. Create and maintain data models and schemas for efficient data storage and retrieval.
22. Implement data security measures, including encryption and access controls, to protect sensitive information.
23. Optimize data processing performance by fine-tuning configurations and queries.
24. Develop robust error handling and logging mechanisms for data pipelines.
25. Maintain comprehensive documentation for data engineering solutions, including architecture diagrams and data lineage.
26. Set up monitoring tools and alerts to proactively identify and address issues in data pipelines.
27. Optimize resource usage in Azure to control costs while maintaining performance.
28. Ensure data engineering solutions adhere to regulatory and compliance requirements.
29. Collaborate with cross-functional teams, including data scientists and business analysts, to deliver data-driven insights.
30. Provide training and knowledge-sharing sessions to empower colleagues with data engineering best practices.
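For illustration, a minimal sketch of the Kafka producer side of item 15, using the kafka-python client; the broker address, topic name, and payload fields are placeholders rather than details of any actual deployment.

```python
import json

from kafka import KafkaProducer  # kafka-python client

# Placeholder broker and topic; a real deployment would read these from configuration.
producer = KafkaProducer(
    bootstrap_servers=["broker-1:9092"],
    value_serializer=lambda record: json.dumps(record).encode("utf-8"),
)

# Publish a sample telemetry record for downstream real-time processing.
producer.send("telemetry-events", {"device_id": "sensor-42", "temperature_c": 21.7})
producer.flush()
producer.close()
```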
Projects
Real-time Data Processing and Analytics
1. Designed and developed Azure Data Factory pipelines to ingest data from Azure Event Hubs (see the ingestion sketch after this list).
2. Configured Azure Stream Analytics jobs to process and transform incoming streaming data in real time.
3. Leveraged Azure Data Lake Storage for storing raw and processed data efficiently.
4. Utilized Azure Logic Apps to trigger notifications and alerts based on specific data conditions.
5. Designed Power BI dashboards for visualizing real-time data insights and trends.
6. Collaborated with business analysts to define key performance indicators (KPIs) for monitoring.
7. Integrated Git for version control to manage code changes and track pipeline modifications.
8. Conducted performance tuning of Stream Analytics queries for optimal data processing speed.
9. Implemented CI/CD pipelines using Jenkins to automate the deployment of Azure resources and Stream Analytics jobs.
10. Documented the end-to-end architecture, data flow, and monitoring procedures for knowledge sharing.
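As a companion to the first bullet, a minimal sketch of event ingestion with the azure-eventhub Python SDK; the connection string, hub name, and payload are placeholders.

```python
from azure.eventhub import EventHubProducerClient, EventData

# Placeholder connection details; in practice these would come from Key Vault or app settings.
CONNECTION_STR = "<event-hubs-namespace-connection-string>"
EVENTHUB_NAME = "ingest-events"

producer = EventHubProducerClient.from_connection_string(
    conn_str=CONNECTION_STR, eventhub_name=EVENTHUB_NAME
)

# Send a small batch of events for Stream Analytics to process downstream.
with producer:
    batch = producer.create_batch()
    batch.add(EventData('{"sensor": "line-1", "reading": 98.6}'))
    producer.send_batch(batch)
```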
Multi-cloud Data Synchronization
1. Designed and deployed Azure Data Factory pipelines to extract, transform, and load (ETL) data from on-premises systems.
2. Implemented Azure Data Lake Storage as a central repository for synchronized data.
3. Utilized Java and .NET for data engineering to develop custom connectors for data synchronization between Azure and non-Azure cloud environments.
4. Integrated Apache Kafka for data streaming to facilitate near-real-time data replication (see the replication sketch after this list).
5. Configured Azure Data Share for secure data sharing with external cloud partners.
6. Implemented Git-based version control for managing code changes and pipeline configurations.
7. Developed monitoring and alerting using Azure Logic Apps for data synchronization workflows.
8. Set up CI/CD pipelines with Jenkins for continuous integration and deployment of data synchronization processes.
9. Documented architecture diagrams and data lineage for compliance and knowledge sharing.
10. Collaborated with cross-functional teams to ensure seamless data synchronization and collaboration across cloud platforms.
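A minimal sketch of the replication path from bullets 2 and 4: consume replicated records from Kafka and land them in Azure Data Lake Storage Gen2. The topic, storage account, container, and paths are placeholders, and the credential setup is assumed to use azure-identity.

```python
import json

from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient
from kafka import KafkaConsumer  # kafka-python client

# Placeholder storage account and container for the central synchronized repository.
service = DataLakeServiceClient(
    account_url="https://syncdatalake.dfs.core.windows.net",
    credential=DefaultAzureCredential(),
)
filesystem = service.get_file_system_client("synchronized")

# Placeholder topic and broker; records are assumed to be JSON-encoded.
consumer = KafkaConsumer(
    "replication-events",
    bootstrap_servers=["broker-1:9092"],
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

# Land each replicated record as a small JSON file in the data lake.
for message in consumer:
    path = f"incoming/{message.topic}/{message.offset}.json"
    file_client = filesystem.get_file_client(path)
    file_client.upload_data(json.dumps(message.value), overwrite=True)
```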
Migration to Cloud-native Data Platform
1. Led migration of on-premises data systems to Azure cloud-native platform.
2. Utilized Azure Data Factory for seamless data migration from on-premises databases.
3. Implemented Azure SQL Database Managed Instances for relational data storage.
4. Transformed data using Azure Databricks and PySpark for migration and analysis (see the transformation sketch after this list).
5. Automated ETL processes with Azure Logic Apps and Azure Functions.
6. Ensured data consistency and security during migration and transformation.
7. Reduced infrastructure costs by 20% and improved data availability for analytics.
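A minimal PySpark sketch of the transformation step from bullet 4, showing the migrate-cleanse-write pattern on Databricks; the JDBC source, credentials, column names, and Delta output path are illustrative placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("migration-transform").getOrCreate()

# Hypothetical JDBC source representing an on-premises database table;
# credentials would normally come from a Databricks secret scope.
source_df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://onprem-host:1433;databaseName=sales")
    .option("dbtable", "dbo.orders")
    .option("user", "etl_user")
    .option("password", "<secret>")
    .load()
)

# Example cleansing: drop duplicates, standardize the date column, remove null keys.
clean_df = (
    source_df.dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_date"))
    .filter(F.col("order_id").isNotNull())
)

# Write curated data to a Delta table in the cloud-native platform (placeholder path).
clean_df.write.format("delta").mode("overwrite").save(
    "abfss://curated@datalakeacct.dfs.core.windows.net/sales/orders"
)
```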
Education

BE in Computer Science, DBIT - Mumbai University
Certifications
Microsoft Certified: Azure Data Engineer Associate
Microsoft Certified: Azure Data Fundamentals
Databricks Certified Associate Developer for Apache Spark 3.0