Hire the best Apache Spark Engineers in Gurgaon, IN
Check out Apache Spark Engineers in Gurgaon, IN with the skills you need for your next job.
- $25 hourly
- 4.9/5
- (7 jobs)
Hello, I’m Aditya Johar, a Data Scientist and Full Stack Developer with 9+ years of experience delivering innovative, tech-driven solutions. I focus on identifying areas where technology can reduce manual tasks, streamline workflows, and optimize resources. By implementing smart automation solutions tailored to your specific needs, I can help your business cut costs, improve efficiency, and free up valuable time for more strategic, growth-focused initiatives.
TOP SOLUTIONS DEVELOPED:
✅ Custom Software using Python (Django, Flask, FastAPI), MERN/MEAN/MEVN Stacks
✅ Interactive Data Visualization Dashboards: Power BI, Tableau, ETL, etc.
✅ Intelligent Document Processing (IDP), RAG, LLMs, ChatGPT APIs
✅ NLP: Sentiment Analysis, Text Summarization, Chatbots, and Language Translation
✅ COMPUTER VISION: Image and Video Classification, Object Detection, Face Recognition, Medical Image Analysis
✅ RECOMMENDATION SYSTEMS: Product Recommendations (e.g., e-commerce), Content Recommendations (e.g., streaming services), Personalized Marketing
✅ PREDICTIVE ANALYTICS: Sales and Demand Forecasting, Customer Churn Prediction, Stock Price Prediction, Equipment Maintenance Prediction
✅ E-COMMERCE OPTIMIZATION: Dynamic Pricing, Inventory Management, Customer Lifetime Value Prediction
✅ TIME SERIES ANALYSIS: Financial Market Analysis, Energy Consumption Forecasting, Weather Forecasting
✅ SPEECH RECOGNITION: Virtual Call Center Agents, Voice Assistants (e.g., Siri, Alexa)
✅ AI IN FINANCE: Credit Scoring, Algorithmic Trading, Fraud Prevention
✅ AI IN HR: Candidate Screening, Employee Performance Analysis, Workforce Planning
✅ CONVERSATIONAL AI: Customer Support Chatbots, Virtual Shopping Assistants, Voice Interfaces
✅ AI IN EDUCATION: Personalized Learning Paths, Educational Chatbots, Plagiarism Detection
✅ AI IN MARKETING: Customer Segmentation, Content Personalization, A/B Testing
✅ SUPPLY CHAIN OPTIMIZATION: Demand Forecasting, Inventory Optimization, Route Planning
And many more use cases that we can discuss when we connect.
"Ready to turn these possibilities into realities? I'm just a click away! Simply click the 'Invite to Job' or 'Hire Now' button in the top right corner of your screen."
Apache Spark, Django, Apache Airflow, Apache Hadoop, Terraform, PySpark, Apache Kafka, Flask, BigQuery, BERT, Python Scikit-Learn, pandas, Python, TensorFlow, Data Science
- $30 hourly
- 5.0/5
- (7 jobs)
Highly skilled IT professional with over 3 years of experience as a Cloud Data Engineer on GCP, covering ML pipeline development, analysis, testing, data warehousing, and business intelligence tools. Experienced working within an Agile delivery methodology, with production implementations delivered in iterative sprints.
My Skills:
Cloud:
* Google Cloud Platform
* BigQuery
* Airflow
* Dataproc, Vertex AI
* Composer
* Google Cloud Storage
* Google Cloud Functions
* Compute Engine
Programming Languages:
* Python
* SQL
* Shell Scripting
Apache Spark, Vertex AI, Cloud Migration, Data Engineering, Google Sheets, SAS, Data Scraping, Data Warehousing, Python, SQL, Big Data, Data Migration, Apache Airflow, Google Cloud Platform, BigQuery
- $6 hourly
- 4.9/5
- (2 jobs)
Achievement-driven and innovative professional with over 4 years of extensive experience in software development, currently working as a Senior Software Engineer with Park+.
- Understanding of the Software Development Lifecycle (SDLC), from requirement analysis and documentation (functional specifications, technical design) through coding and testing (preparation and implementation of test cases) to maintenance of the proposed applications.
- Expertise in end-to-end implementation of projects, including design, development, coding, integration, and deployment of software applications.
- Capable of independently executing initiatives and ensuring their timely, smooth completion; skilled at communicating complex technical requirements to non-technical stakeholders effectively.
- Accomplished at thriving in fast-paced environments, readily adapting to evolving business and technology challenges; experienced in designing, developing, testing, and debugging software, and in reviewing code and designs.
- Effective communicator with excellent relationship-building and interpersonal skills; strong analytical, problem-solving, and organizational capabilities with a flexible, detail-oriented attitude.
Apache Spark, Google, Amazon, Terraform, NoSQL Database, FastAPI, Django Stack, Product Development, Software Development, Azure DevOps, Ansible, pandas, SQL, Flask, Python
- $25 hourly
- 5.0/5
- (2 jobs)
A highly motivated professional with strong technical, problem-solving, and time-management skills who strives to make an impact on the organization he is part of, and who loves to socialize and experience new things in life. My hunger for new challenges makes me unique.
Apache Spark, MySQL, Scala, Microsoft Azure, Data Analytics, Snowflake, SQL Programming, Data Engineering, Data Warehousing & ETL Software, ETL Pipeline, PySpark, SQL, Databricks Platform, Python
- $45 hourly
- 0.0/5
- (0 jobs)
I’m Priyam, a results-driven Backend & Data Engineer with 12 years of experience and a proven track record of delivering end-to-end data solutions, from ETL/ELT pipelines to data lakes, data warehousing, BI dashboards, and AI integrations. I specialize in architecting high-performance data platforms using modern tools and frameworks such as Apache Airflow, Apache Spark, Hadoop, DBT, FiveTran, Snowflake, Kafka, Glue, Redshift, BigQuery, Power BI, Tableau, Looker, and more. Whether you need to migrate legacy systems to the cloud, design a secure data warehouse, or implement real-time streaming pipelines, I can help you build with confidence.
💡 Core Expertise:
✅ Design and deploy cloud-native data architectures (AWS | Azure | GCP)
✅ Build real-time and batch ETL/ELT pipelines with Airflow, Kafka, DBT, and Glue
✅ Implement data lake and data warehouse solutions (Snowflake, Redshift, BigQuery)
✅ Develop BI dashboards and reporting solutions (Tableau, Looker, Power BI, QuickSight)
✅ Enable data governance, lineage, and security best practices
✅ Integrate AI workflows and ML pipelines using modern cloud tools
✅ Lead data migration and modernization projects with zero downtime
✅ Architect scalable data infrastructure for enterprise-grade use cases (SQL → NoSQL, on-prem → cloud)
Why Clients Choose Me:
🌐 12+ years in data engineering and backend systems, with a deep understanding of cloud ecosystems and distributed systems
📊 Strong grasp of business needs and the ability to translate data into action
🛡️ Commitment to security, reliability, and quality assurance
🤝 Transparent communication and on-time delivery
🔧 Tools & Technologies:
✔️ Cloud: AWS (Glue, Redshift, Athena, Lambda), Azure (ADF, Data Lake), GCP (BigQuery, Cloud Functions)
✔️ Data Tools: Airflow, DBT, FiveTran, Airbyte, Kafka, Trino, PrestoDB, Hive, Cloudera
✔️ Warehousing: Snowflake, BigQuery, Redshift
✔️ Languages: Python, SQL, PySpark, Scala
✔️ Databases: MySQL, PostgreSQL, MongoDB, Druid, Cassandra
✔️ BI & Visualization: Power BI, Tableau, Looker, QuickSight, Qlik Sense
✔️ Other: Data Governance, Security, Data Modeling (OLTP/OLAP), AI Automation & Integration
📈 Industries Served: Finance & FinTech | E-commerce & Retail | Healthcare & Pharma | AI/ML Startups | SaaS & Enterprise Platforms
Are you looking for a senior-level Data Engineer who can build robust, scalable, and secure data platforms across cloud ecosystems like AWS, Azure, and GCP? Click "Send Message" to start a conversation with me!
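By way of illustration only (not this freelancer's actual code), a batch ETL pipeline of the kind described above is often orchestrated with an Airflow DAG. Here is a minimal sketch using the Airflow 2.4+ TaskFlow API; the DAG id, file paths, and business logic are all hypothetical placeholders:

```python
# A minimal Airflow 2.4+ TaskFlow DAG sketch of a daily batch ETL pipeline.
# All names (dag_id, file paths, business rules) are illustrative placeholders.
from datetime import datetime

import pandas as pd
from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def example_batch_etl():
    @task
    def extract() -> str:
        # A real pipeline might pull from an API, a Kafka topic, or S3.
        df = pd.DataFrame({"order_id": [1, 2], "amount": [10.0, 25.5]})
        path = "/tmp/orders_raw.csv"
        df.to_csv(path, index=False)
        return path

    @task
    def transform(path: str) -> str:
        df = pd.read_csv(path)
        df["amount_with_tax"] = df["amount"] * 1.18  # example business rule
        out = "/tmp/orders_clean.csv"
        df.to_csv(out, index=False)
        return out

    @task
    def load(path: str) -> None:
        # A real load step would COPY into Snowflake, Redshift, or BigQuery.
        print(f"Loading {path} into the warehouse would happen here.")

    # Chain the tasks: extract -> transform -> load.
    load(transform(extract()))


example_batch_etl()
```

Airflow infers the task dependencies from the function-call chain, so the scheduler runs extract, transform, and load in order each day.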
Apache Spark, Amazon Web Services, Looker, Microsoft Power BI, Python, Data Visualization, Business Intelligence, DevOps Engineering, BigQuery, API Development, Tableau, SQL, Data Lake, ETL Pipeline, Data Engineering
- $25 hourly
- 4.8/5
- (8 jobs)
Experienced Senior Software Developer with over 13 years of expertise in designing, developing, and deploying scalable software solutions. Experienced Technical Architect driving innovation in IT solutions, with deep expertise in system design, integration, scalability, and performance optimization.
• Backend Development: Expertise in Python with frameworks like Django and FastAPI to build scalable web applications.
• Big Data Processing: Proficient in PySpark for handling large-scale data processing and analytics.
• Search Technologies: Experience in integrating and optimizing search platforms like Elasticsearch and Apache Solr.
• Go Development: Strong skills in Go (Golang) for high-performance backend services.
• Real-time Data Streaming: Hands-on experience with Apache Kafka for distributed event streaming and data pipelines.
• Full-Stack Development: Skilled in combining PHP/Python with HTML, CSS, and React.js to deliver seamless, interactive user interfaces and efficient server-side logic.
• Efficient APIs: Skilled in designing and building RESTful APIs with a focus on performance and security.
• End-to-End Solutions: Capable of providing end-to-end development, from backend logic to search integration.
• Database Management: Strong experience with SQL and NoSQL databases, including PostgreSQL, MySQL, MongoDB, and Redis.
• Performance Optimization: Focus on delivering high-performing, scalable applications tailored to client needs.
• Expert in Linux Servers: Proficient in configuring, maintaining, and optimizing Linux-based environments (Ubuntu, CentOS, Red Hat).
• AWS Cloud Services: Experience with EC2, S3, RDS, Lambda, VPC, IAM, and other AWS services for deployment, scaling, and security management.
• Microsoft Azure Services: Skilled in Azure Virtual Machines, Azure Storage, Virtual Networks, and resource management with Azure CLI & PowerShell.
• CI/CD Pipelines: Experience integrating Linux servers with Jenkins, GitLab CI, CodeDeploy, GitHub, and Docker for automated deployments on cloud platforms.
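For a flavor of the kind of PySpark batch processing described above, here is a minimal, self-contained sketch (illustrative only; the event data and column names are invented):

```python
# A minimal PySpark batch sketch: count events per user from an in-memory
# DataFrame. Data and column names are invented for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-aggregation").getOrCreate()

events = spark.createDataFrame(
    [("u1", "click"), ("u1", "view"), ("u2", "click")],
    ["user_id", "event_type"],
)

# Aggregate: total events per user, most active users first.
summary = (
    events.groupBy("user_id")
    .agg(F.count("*").alias("total_events"))
    .orderBy(F.desc("total_events"))
)

summary.show()
spark.stop()
```

The same groupBy/agg pattern scales from this toy DataFrame to billions of rows read from a data lake, which is what makes Spark attractive for large-scale analytics.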
Apache Spark, Golang, FastAPI, Elasticsearch, CodeIgniter, Django, PHP, PySpark, MySQL, Apache Solr, Laravel, API, Apache Airflow, Python, Data Scraping
- $3 hourly
- 5.0/5
- (1 job)
I have been working as a data engineer for nearly 3 years. I have experience with Python, SQL, PySpark, and ETL; AWS experience with services such as Glue, EMR, Redshift, S3, and RDS; and some Snowflake experience as well.
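As a rough sketch of the S3-based ETL work this profile describes (not this freelancer's code), a PySpark job on EMR or a Glue Spark runtime might look like the following; the bucket paths are placeholders, and the "amount" and "order_date" columns are assumed to exist in the raw CSVs:

```python
# A minimal PySpark ETL sketch in the S3-to-S3 style (runnable on EMR, or on
# AWS Glue with a Spark runtime, given S3 credentials). Paths and column
# names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-s3-etl").getOrCreate()

# Extract: read raw CSV files from S3 (placeholder path).
raw = spark.read.option("header", True).csv("s3://example-bucket/raw/orders/")

# Transform: type the amount column and drop rows where it is missing.
clean = (
    raw.withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount").isNotNull())
)

# Load: write partitioned Parquet back to S3 (placeholder path; assumes an
# order_date column to partition by).
clean.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/orders/"
)

spark.stop()
```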
Apache Spark, Selenium WebDriver, Linux, Amazon EC2, Amazon Athena, AWS Lambda, Amazon S3, Amazon Redshift, AWS Glue, PySpark, SQL, Python
- $15 hourly
- 0.0/5
- (0 jobs)
A Big Data Developer with ~3 years of experience and honed expertise across a diverse spectrum of data stacks, with a comprehensive skill set that spans warehousing, ETL, analytics, and cloud services.
You have the data? Great!! I can help you analyze it using Python, performing exploratory data analysis, hypothesis testing, and data visualization.
You have Big Data? Even better!! I can help you clean, transform, store, and analyze it using big data technologies, and productionize it using cloud services like AWS and GCP.
You want to track business KPIs and metrics? Consider it done!! I can also help you develop reports using Power BI and Tableau to keep you ahead in your business.
Specialized in the following data solutions:
✔️ Data cleaning, processing, and machine learning models
✔️ Building data warehouses using modern cloud platforms and technologies
✔️ Creating and automating data pipelines, real-time streaming, and ETL processes
✔️ Building highly intuitive, interactive dashboards
✔️ Data migration (heterogeneous and homogeneous)
Below are tools and technologies I have worked with:
- Cloud: GCP (Google Cloud Platform), AWS (Amazon Web Services)
- Databases: SQL Server, Snowflake, PostgreSQL, MySQL, S3
- Languages & Libraries: Python, Pandas, NumPy, Matplotlib
- Data Engineering: Spark, AWS Glue, PySpark, BigQuery, Snowflake, ETL, Data Warehouse, Databricks, Data Lake
- Orchestration Tools: Apache Airflow, CronJobs, etc.
- Reporting: Power BI, Tableau, Excel
Let's collaborate and transform your idea into reality!!
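To illustrate the exploratory-analysis workflow mentioned above (summary statistics, a hypothesis test, a plot), here is a minimal sketch; the dataset, group labels, and column names are invented placeholders:

```python
# A minimal EDA sketch with pandas, SciPy, and Matplotlib.
# The dataset, group labels, and column names are invented placeholders.
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "group": rng.choice(["A", "B"], size=200),
    "revenue": rng.normal(loc=100, scale=15, size=200),
})

# Exploratory summary statistics per group.
print(df.groupby("group")["revenue"].describe())

# Hypothesis test: do groups A and B differ in mean revenue?
a = df.loc[df["group"] == "A", "revenue"]
b = df.loc[df["group"] == "B", "revenue"]
t_stat, p_value = stats.ttest_ind(a, b)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")

# Visualization: overall revenue distribution.
df["revenue"].plot(kind="hist", bins=20, title="Revenue distribution")
plt.show()
```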
Apache Spark, TensorFlow, Matplotlib, Seaborn, Python Scikit-Learn, pandas, Analytics, Data Lake, Deep Learning, Data Analytics, Machine Learning, Amazon Web Services, Snowflake, SQL, Python
- $15 hourly
- 0.0/5
- (0 jobs)
Experienced data engineer skilled in harnessing big data technologies such as Hadoop, Spark, Snowflake, Databricks, Airflow, and Python to proficiently orchestrate data processing pipelines and analytical workflows. Demonstrated advanced expertise in infrastructure deployment, utilizing Docker and Kubernetes. Possesses a solid foundation in general computing principles and development methodologies. Additionally, excels in DevOps practices including CI/CD workflows, Jenkins automation, Git version control, and navigating cloud platforms for seamless deployment and scalability.
Apache Spark, Amazon S3, AWS Lambda, Data Warehousing & ETL Software, Apache Kafka, Apache Hive, Apache Hadoop, SQL, Big Data, BigQuery, Jenkins, Apache Airflow, Databricks Platform, Snowflake, Python
How hiring on Upwork works
1. Post a job
Tell us what you need. Provide as many details as possible, but don’t worry about getting it perfect.
2. Talent comes to you
Get qualified proposals within 24 hours, and meet the candidates you’re excited about. Hire as soon as you’re ready.
3. Collaborate easily
Use Upwork to chat or video call, share files, and track project progress right from the app.
4. Payment simplified
Receive invoices and make payments through Upwork. Only pay for work you authorize.
How do I hire an Apache Spark Engineer near Gurgaon, IN on Upwork?
You can hire an Apache Spark Engineer near Gurgaon, IN on Upwork in four simple steps:
- Create a job post tailored to your Apache Spark Engineer project scope. We’ll walk you through the process step by step.
- Browse top Apache Spark Engineer talent on Upwork and invite them to your project.
- Once the proposals start flowing in, create a shortlist of top Apache Spark Engineer profiles and interview them.
- Hire the right Apache Spark Engineer for your project from Upwork, the world’s largest work marketplace.
At Upwork, we believe talent staffing should be easy.
How much does it cost to hire an Apache Spark Engineer?
Rates charged by Apache Spark Engineers on Upwork can vary with a number of factors including experience, location, and market conditions. See hourly rates for in-demand skills on Upwork.
Why hire an Apache Spark Engineer near Gurgaon, IN on Upwork?
As the world’s work marketplace, we connect highly skilled freelance Apache Spark Engineers with businesses and help them build trusted, long-term relationships so they can achieve more together. Let us help you build the dream Apache Spark Engineer team you need to succeed.
Can I hire an Apache Spark Engineer near Gurgaon, IN within 24 hours on Upwork?
Depending on availability and the quality of your job post, it’s entirely possible to sign up for Upwork and receive Apache Spark Engineer proposals within 24 hours of posting a job description.