Hire the best Apache Spark Engineers in Delhi, IN
Check out Apache Spark Engineers in Delhi, IN with the skills you need for your next job.
- $18 hourly
- 4.0/5
- (5 jobs)
As a passionate Data Engineer with expertise in the Azure ecosystem, I specialize in building scalable data pipelines, cloud-based data solutions, and interactive dashboards. I have hands-on experience with Azure Data Factory, Databricks (PySpark), and Power BI, helping clients turn complex data into actionable insights. I design efficient ETL processes, develop data models, and ensure data integration across platforms. My work focuses on delivering high-quality, scalable solutions that meet business needs and provide real-time data visualizations.
Apache Spark, Microsoft Power BI, Fabric, PySpark, Databricks Platform, SQL Programming, Scala, ETL Pipeline, Azure DevOps, Python, Microsoft Azure
- $25 hourly
- 0.0/5
- (0 jobs)
Senior Data Engineer | Azure | ETL | Big Data | SQL | Python | Apache Spark
🚀 Transforming Data into Insights with Scalable Azure Solutions
With 5+ years of experience designing and optimizing cloud-based data solutions on Microsoft Azure, I specialize in building efficient, secure, and high-performance data pipelines. My expertise spans ETL development, data modeling, big data processing, and real-time analytics using tools like Azure Data Factory, Databricks, and Data Lake.
🔹 Tech Stack: SQL, Python, Apache Spark, Azure Data Services
🔹 Cloud Expertise: Azure Data Factory, Databricks, Data Lake, Synapse Analytics
🔹 Data Engineering Skills: ETL/ELT Pipelines, Data Modeling, Streaming Data Processing
🔹 Best Practices: Data Governance, Security, Performance Optimization
🔹 Automation & CI/CD: DevOps, GitHub Actions, Terraform
I have a strong track record of mentoring junior engineers, leading Agile teams, and implementing enterprise-grade data platforms that drive business insights. Whether you need data pipelines, performance tuning, or a scalable cloud solution, I can help you achieve your goals efficiently.
📌 Let's discuss how I can add value to your project!
Apache Spark, Apache Hadoop, Hive, Big Data, PySpark, Python
- $15 hourly
- 0.0/5
- (0 jobs)
SUMMARY: Data Engineer with 5 years of experience in software development and data engineering. Proficient in Scala, Java, and SQL, with expertise in designing and maintaining data pipelines, ETL processes, and data warehousing. Committed to delivering high-quality solutions that optimize performance and enhance security.
Apache Spark, Python, Java, Scala, SQL, ETL Pipeline, Data Extraction, ETL
- $35 hourly
- 0.0/5
- (0 jobs)
I'm an experienced IT professional with nearly 14 years of expertise in software development and system architecture, particularly in core banking solutions, ERP systems, and data management platforms. I specialize in designing and supporting Windows-based and web applications using technologies like C# .NET, ASP.NET MVC, WCF, Web API, Oracle 12c, and PL/SQL. I have gained significant experience in data integration, ETL processes, performance optimization, and reporting tools like Crystal Reports and SAP BusinessObjects. My technical strengths include proficiency with ORM frameworks, database design, and data warehousing, alongside hands-on expertise in Oracle and SQL Server performance tuning. Throughout my career, I have contributed to major ERP implementation projects, including IFS applications, and developed solutions for complex business functions such as leave management, invoicing, and tax savings. I have also worked in various sectors including banking, financial services, and manufacturing, showcasing my ability to customize solutions that meet the specific needs of different industries.
Key Skills: C# (.NET), PL/SQL, JavaScript, AngularJS, WCF, Web API, Data Analytics, ETL, Power BI, Tableau, Oracle 12c, SQL Server, Crystal Reports, Agile, Database Tuning, Azure Service Fabric, Unit Testing Frameworks, Azure Event Grid, Azure Function App, DevOps Pipeline, GitHub, TFS, Data Warehousing, Facts and Dimension Modeling
Apache Spark, Web API, Microsoft SQL Server, Oracle Database, Microsoft Azure, Azure DevOps, Azure Service Fabric, C#, Problem Solving, DevOps, ETL Pipeline, ETL, Data Extraction, Data Analysis
- $40 hourly
- 0.0/5
- (0 jobs)
I am Anurag, and I have been working in NLP and conversational AI.
- I'm an AI researcher and have published 10 research papers, especially in NLP and machine learning, in reputed journals such as Springer and the Swiss Chemical Society.
- Before that, I gained a wide range of experience working in startups. In my career, I have led several projects focused on applying machine learning to solve industrial problems.
- I have also presented my work at various international conferences.
- I believe in chatbots; we can build things to make human-machine interactions more realistic.
- I also have deep knowledge of Python libraries such as pandas, NumPy, SciPy, Dask, etc.
- My specialties include: LLM fine-tuning, AI agent development, model development, exploratory analysis, regression analysis, machine learning modelling, Cassandra, Apache Spark implementation, and the end-to-end engineering cycle.
- Connect with me for any of your data science needs so that we can discuss how I can help your team.
- Thank you for your time; feel free to reach out to me.
- I am always happy to support you in every possible way.
Apache Spark, Data Visualization, Apache Cassandra, Machine Learning, Natural Language Processing, Python, Data Science, Scala
- $25 hourly
- 4.5/5
- (10 jobs)
Highly creative and multi-talented Data Scientist with extensive experience in scraping, data cleaning, and visualization. Exceptional collaborative and interpersonal skills; dynamic team player with well-developed written and verbal communication abilities. Highly skilled in client and vendor relations and negotiations; talented at building and maintaining “win-win” partnerships. Passionate and inventive creator of innovative marketing strategies and campaigns; accustomed to performing in deadline-driven environments with an emphasis on working within budget requirements.
Apache Spark, SEO Keyword Research, Mathematical Modeling, WordPress, SEO Backlinking, Data Analysis, Visual Basic for Applications, Statistical Computing, SEO Audit, Flask, Data Science, Machine Learning, Data Mining, pandas, Python, Chatbot
- $25 hourly
- 0.0/5
- (0 jobs)
As a passionate Senior Software Engineer at Warner Bros. Discovery, I manage the data platform behind HBO Max, a globally renowned streaming app. My experience spans scaling and optimizing data pipelines, designing robust data architectures, and creating high-performance platforms that power critical business decisions.
I specialize in:
✅ Scalable data pipelines with Apache Spark & Databricks
✅ Data modeling and architecture for actionable insights
✅ Cloud-native solutions using AWS (S3, Redshift, Lambda, EMR, and more)
✅ Performance tuning & optimization for large-scale data platforms
✅ Streamlined workflows and cost-efficient solutions
Data engineering is not just my profession; it's my passion. I thrive on solving complex problems, pushing the boundaries of data scalability, and building solutions that deliver impact. If you're looking for someone who truly loves working on Spark internals, designing resilient architectures, or optimizing platforms for efficiency, I’m here to help. Let’s work together to turn your data into a competitive advantage.
Apache Spark, Snowflake, AWS Lambda, Amazon Redshift, Amazon S3, ETL Pipeline, Data Lake, Data Engineering, Apache Flink, Apache Airflow, Databricks Platform, PySpark, Python
- $30 hourly
- 0.0/5
- (0 jobs)
I am a Data Engineer with 7+ years of industry experience in SQL, big data, Hive, Python, Hadoop, data lakes, and AWS.
Apache Spark, Apache Hadoop, Python Script, Jira, SQL Programming, Data Lake, Hive, Amazon Athena, Amazon Web Services, Python, Big Data
- $22 hourly
- 0.0/5
- (0 jobs)
I am a seasoned professional with over 8 years of hands-on experience in Big Data and Cloud Computing. My expertise spans technologies and platforms including Apache Spark, Hive, Hadoop, multi-cloud environments (AWS, GCP), Scala, PySpark, Couchbase, NoSQL databases, and orchestration tools like Apache Airflow. Throughout my career, I have successfully designed, developed, and implemented robust data pipelines and analytics solutions for diverse business needs.
Key Skills and Experiences:
- Big Data Technologies: Proficient in Apache Spark, Hive, and Hadoop, with extensive experience in processing and analyzing large datasets efficiently.
- Cloud Computing: Skilled in multi-cloud environments, particularly AWS (Amazon Web Services) and GCP (Google Cloud Platform); adept at deploying scalable and resilient data solutions leveraging cloud-native services.
- Programming Languages: Expertise in Scala and PySpark for developing data processing applications and analytics algorithms, enabling high-performance computations on distributed systems.
- NoSQL Databases: Experienced with Couchbase and other NoSQL databases; proficient in designing and optimizing data models for non-relational data storage and retrieval.
- Data Orchestration: Utilized Apache Airflow and other orchestration tools to automate and schedule data workflows, ensuring efficient and reliable execution of data pipelines.
- Data Engineering: Strong background in data engineering principles and best practices, including data ingestion, transformation, cleansing, and enrichment, to support advanced analytics and machine learning initiatives.
- Solution Architecture: Demonstrated ability to design end-to-end data solutions tailored to specific business requirements, considering factors like scalability, performance, security, and cost-effectiveness.
- Collaboration and Communication: Effective communicator with cross-functional teams, capable of translating business requirements into technical solutions and driving alignment toward project goals.
Throughout my career, I have consistently delivered high-quality solutions that drive actionable insights and enable data-driven decision-making for organizations across various industries. My passion for exploring emerging technologies and commitment to continuous learning keeps me at the forefront of advancements in Big Data and Cloud Computing, enabling me to deliver innovative solutions that address evolving business challenges.
Apache Spark, Scala, Java, Hive, Apache Airflow, BigQuery, PySpark, Apache Hadoop
- $20 hourly
- 0.0/5
- (0 jobs)
A data-driven professional motivated to solve business problems using the latest tools and technologies, with 1+ years of demonstrated experience in data-driven solutions and 3+ years of experience in the design and development of IT solutions. Good understanding of end-to-end big data pipeline design and implementation to industry standards. Experienced in data crunching and data warehousing, with knowledge of the latest cloud tech stacks such as AWS and GCP. A problem solver ready to explore the tech sky.
Apache Spark, Data Analysis, JavaScript, Flask, Amazon Web Services, Apache Kafka, Apache Airflow, Sqoop, MongoDB, Microsoft Power BI, Python, AWS Lambda, SQL, Hive, PySpark
- $15 hourly
- 0.0/5
- (0 jobs)
🎓 𝗖𝗲𝗿𝘁𝗶𝗳𝗶𝗲𝗱 𝗗𝗮𝘁𝗮 𝗣𝗿𝗼𝗳𝗲𝘀𝘀𝗶𝗼𝗻𝗮𝗹 with 5+ 𝘆𝗲𝗮𝗿𝘀 of experience and hands-on expertise in designing and implementing data solutions.
🔥 4+ Startup Tech Partnerships
⭐️ 100% Job Success Score
🏆 In the top 3% of all Upwork freelancers with Top Rated Plus 🏆
✅ Excellent communication skills and fluent English
If you’re reading my profile, you’ve got a challenge to solve and you’re looking for someone with a broad skill set, minimal oversight, and an ownership mentality. I’m your go-to expert.
📞 Connect with me today and let's discuss how we can turn your ideas into reality through a creative and strategic partnership. 📞
⚡️ Invite me to your job on Upwork to schedule a complimentary consultation call to discuss in detail the value and strength I can bring to your business, and how we can create a tailored solution for your exact needs.
𝙄 𝙝𝙖𝙫𝙚 𝙚𝙭𝙥𝙚𝙧𝙞𝙚𝙣𝙘𝙚 𝙞𝙣 𝙩𝙝𝙚 𝙛𝙤𝙡𝙡𝙤𝙬𝙞𝙣𝙜 𝙖𝙧𝙚𝙖𝙨, 𝙩𝙤𝙤𝙡𝙨 𝙖𝙣𝙙 𝙩𝙚𝙘𝙝𝙣𝙤𝙡𝙤𝙜𝙞𝙚𝙨:
► BIG DATA & DATA ENGINEERING: Apache Spark, Hadoop, MapReduce, YARN, Pig, Hive, Kudu, HBase, Impala, Delta Lake, Oozie, NiFi, Kafka, Airflow, Kylin, Druid, Flink, Presto, Drill, Phoenix, Ambari, Ranger, Cloudera Manager, ZooKeeper, Spark Streaming, StreamSets, Snowflake
► CLOUD: AWS -- EC2, S3, RDS, EMR, Redshift, Lambda, VPC, DynamoDB, Athena, Kinesis, Glue; GCP -- BigQuery, Dataflow, Pub/Sub, Dataproc, Cloud Data Fusion; Azure -- Data Factory, Synapse, HDInsight
► ANALYTICS, BI & DATA VISUALIZATION: Tableau, Power BI, SSAS, SSMS, Superset, Grafana, Looker
► DATABASE: SQL, NoSQL, Oracle, SQL Server, MySQL, PostgreSQL, MongoDB, PL/SQL, HBase, Cassandra
► OTHER SKILLS & TOOLS: Docker, Kubernetes, Ansible, Pentaho, Python, Scala, Java, C, C++, C#
𝙒𝙝𝙚𝙣 𝙮𝙤𝙪 𝙝𝙞𝙧𝙚 𝙢𝙚, 𝙮𝙤𝙪 𝙘𝙖𝙣 𝙚𝙭𝙥𝙚𝙘𝙩:
🔸 Outstanding results and service
🔸 High-quality output on time, every time
🔸 Strong communication
🔸 Regular & ongoing updates
Your complete satisfaction is what I aim for, so the job is not complete until you are satisfied!
Whether you are a 𝗦𝘁𝗮𝗿𝘁𝘂𝗽, an 𝗘𝘀𝘁𝗮𝗯𝗹𝗶𝘀𝗵𝗲𝗱 𝗕𝘂𝘀𝗶𝗻𝗲𝘀𝘀, or 𝗹𝗼𝗼𝗸𝗶𝗻𝗴 𝗳𝗼𝗿 your next 𝗠𝗩𝗣, you will get 𝗛𝗶𝗴𝗵-𝗤𝘂𝗮𝗹𝗶𝘁𝘆 𝗦𝗲𝗿𝘃𝗶𝗰𝗲𝘀 at an 𝗔𝗳𝗳𝗼𝗿𝗱𝗮𝗯𝗹𝗲 𝗖𝗼𝘀𝘁, 𝗚𝘂𝗮𝗿𝗮𝗻𝘁𝗲𝗲𝗱. I hope you become one of my many happy clients. Reach out by inviting me to your project. I look forward to it!
All the best,
Srajan
Apache Spark, JavaScript, ETL Pipeline, Data Modeling, Database Architecture, Database Management, API Integration, MongoDB, PostgreSQL, Elasticsearch, Amazon Web Services, Google Cloud Platform, Hive, Apache Airflow, Python
Want to browse more freelancers?
Sign up
How hiring on Upwork works
1. Post a job
Tell us what you need. Provide as many details as possible, but don’t worry about getting it perfect.
2. Talent comes to you
Get qualified proposals within 24 hours, and meet the candidates you’re excited about. Hire as soon as you’re ready.
3. Collaborate easily
Use Upwork to chat or video call, share files, and track project progress right from the app.
4. Payment simplified
Receive invoices and make payments through Upwork. Only pay for work you authorize.
How do I hire an Apache Spark Engineer near Delhi on Upwork?
You can hire an Apache Spark Engineer near Delhi on Upwork in four simple steps:
- Create a job post tailored to your Apache Spark Engineer project scope. We’ll walk you through the process step by step.
- Browse top Apache Spark Engineer talent on Upwork and invite them to your project.
- Once the proposals start flowing in, create a shortlist of top Apache Spark Engineer profiles and interview.
- Hire the right Apache Spark Engineer for your project from Upwork, the world’s largest work marketplace.
At Upwork, we believe talent staffing should be easy.
How much does it cost to hire an Apache Spark Engineer?
Rates charged by Apache Spark Engineers on Upwork can vary with a number of factors, including experience, location, and market conditions. See hourly rates for in-demand skills on Upwork.
Why hire an Apache Spark Engineer near Delhi on Upwork?
As the world’s work marketplace, we connect highly skilled freelance Apache Spark Engineers and businesses and help them build trusted, long-term relationships so they can achieve more together. Let us help you build the dream Apache Spark Engineer team you need to succeed.
Can I hire an Apache Spark Engineer near Delhi within 24 hours on Upwork?
Depending on availability and the quality of your job post, it’s entirely possible to sign up for Upwork and receive Apache Spark Engineer proposals within 24 hours of posting a job description.