Hire the best Big Data Engineers in Faridabad, IN

Check out Big Data Engineers in Faridabad, IN with the skills you need for your next job.
Clients rate Big Data Engineers 4.8/5, based on 294 client reviews.
  • $20 hourly
    ✓ Technology executive specializing in architecting and implementing highly scalable solutions to drive brand awareness, increase revenue, optimize productivity, and improve margins.
    ✓ Overseeing a company's data, security, maintenance, and network.
    ✓ Implementing the business's technical strategy and managing its overall technology roadmap.
    ✓ Involved in talent acquisition and the onboarding, training, and management of project managers, product managers, developers, DevOps engineers, and designers.
    ✓ Setting the technical strategy that enables the company to achieve its goals.
    ✓ Seeking out the current and future technology that will drive the company's success.
    ✓ Focused on strategic alignment of technology goals with organizational vision.
    ✓ Passionately committed to technology team development, empowering people to accomplish their goals and coaching them to realize their individual potential.
    ✓ Proven track record of success in technology product development, cloud infrastructure, data platforms, ETL pipelines, streaming pipelines, e-commerce, CRM, mobile strategy, and social media integration.

    I have been working for the last 8 years with Apache Spark, Lucene, Elasticsearch/Kibana, Amazon EC2, RDBMSs (SQL, MySQL, Aurora, PostgreSQL, Oracle), NoSQL engines (Hadoop/HBase, Cassandra, DynamoDB, MongoDB), graph databases (Neo4j, Neptune), in-memory databases (Hazelcast, GridGain), Apache Spark/MLlib, Weka, Kafka, clustered file systems, and general-purpose computing on GPUs, including deploying ML/DL models on NVIDIA GPU instances. I have strong experience in query optimization, application profiling, and troubleshooting.

    My areas of expertise include:
    - Python scripting
    - Jira, Trello, Azure DevOps
    - Web scraping
    - AWS (Redshift, Glue, ECS, EC2, EMR, Kinesis, S3, RDS, VPC, IAM, DMS)
    - GCP (BigQuery, Dataflow, SnowFlow)
    - Microsoft Azure
    - Hadoop big data
    - Elasticsearch/Kibana/Logstash (ELK)
    - Hadoop setup on standalone, Cloudera, and Hortonworks
    - SQL databases such as MySQL and PostgreSQL
    - NoSQL databases such as HBase and MongoDB
    - Machine learning
    - Deep learning
    - Spark with MLlib and GraphX
    - Sphinx
    - Memcached
    - MS BI/Tableau/GDS
    Featured Skill Big Data
    Kibana
    Apache Cassandra
    AWS CodeDeploy
    Apache NiFi
    MongoDB
    Golang
    Elasticsearch
    Apache Kafka
    Apache Hive
    Apache Pig
    MapReduce
    Machine Learning
    Python
    Apache Spark
  • $20 hourly
    Overview: With over 5 years of experience in IT recruitment and HR management, I am a highly accomplished global talent sourcer specializing in identifying and sourcing exceptional candidates for technical positions across the USA, Canada, Asia, and Europe. Leveraging tools like LinkedIn Recruiter and X-Ray Boolean search, I excel at sourcing for a wide variety of roles. My approach includes reaching out to candidates on LinkedIn, connecting with them, and inviting them to book an interview with the client via Google Calendar.

    My expertise spans various job portals, including LinkedIn, Jobstreet, GitHub, Monster, and CareerBuilder. I excel at sourcing talent from diverse channels, headhunting, and securing valuable references. Using advanced techniques such as Boolean strings and root-word and stem-word searches, I ensure no talent remains undiscovered. As a proactive candidate sourcer, I am well versed in both traditional and non-traditional sourcing methodologies, leaving no stone unturned to identify the best available talent.

    I conduct thorough research to analyze the available talent pool, screen and shortlist the most promising candidates, and conduct initial outreach through emails, InMails, and LinkedIn invites to gauge interest. Maintaining fruitful relationships with on-project candidates and potential talent is my priority, ensuring lasting partnerships that benefit both parties. As you embark on your search for top-tier candidates, I offer unparalleled recruitment expertise, unmatched dedication, and prompt delivery of results. Together, let's elevate your business and find the perfect talent to fuel your success!

    Top Skills: Talent Sourcing, Recruiting, Headhunting, LinkedIn Recruiting, Internet Recruiting, Recruitment, Resourcing, Boolean Searches, Initial Screening, LinkedIn Research, Database Management

    Platforms and Tools:
    - Searching candidates: LinkedIn Recruiter, LinkedIn Sales Navigator, Indeed, GitHub, Glassdoor, ZipRecruiter, Hiretual, Naukri.com, Monster
    - Other tools: Google Drive, Google Docs, Google Sheets, Dropbox, Slack, ChatGPT

    Let's connect and discuss how I can add value to your recruitment endeavors on Upwork. I look forward to partnering with you on this exciting journey!

    Example Boolean string: "Technical Sourcer" OR "IT Sourcer" OR Sourcer OR "LinkedIn Recruiter" OR "LinkedIn Recruiting" OR "Candidate Sourcer" OR "Candidate Sourcing" OR "Candidates Sourcer" OR "Candidates Sourcing" OR "Tech Sourcer" OR "Talent Sourcer" OR "Talent Sourcing" OR "Technical Recruiting" OR "Resume Sourcer" OR "Resumes Sourcer" OR "Resume Sourcing" OR "Resumes Sourcing" OR Sourcing OR Recruiting OR Recruitment OR Headhunter
    Featured Skill Big Data
    Recruiting
    Oracle BRM
    ServiceNow
    Golang
    Oracle
    .NET Framework
    GUI Design
    Python
    Node.js
    Java
    Angular
    JavaScript
    iOS
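The Boolean OR string shown in the profile above simply matches a candidate if any quoted phrase or bare term appears in their profile text. As a rough, hypothetical illustration of that matching logic (not any actual recruiter tooling), a minimal Python sketch:

```python
import re

def parse_or_query(query):
    # Split a flat OR-only Boolean string into its terms: quoted
    # phrases keep their internal spaces, bare words stand alone.
    tokens = re.findall(r'"[^"]+"|\S+', query)
    return [t.strip('"').lower() for t in tokens if t.upper() != "OR"]

def matches(profile_text, query):
    # A profile matches if any term from the OR-query occurs in it.
    text = profile_text.lower()
    return any(term in text for term in parse_or_query(query))

# Hypothetical shortened query and profile snippets for illustration.
query = '"Talent Sourcer" OR "Technical Recruiting" OR Headhunter'
print(matches("Senior talent sourcer, 5 yrs exp", query))   # True
print(matches("Backend engineer with Go experience", query))  # False
```

Real boolean search on LinkedIn or Indeed also supports AND, NOT, and parentheses; this sketch covers only the flat OR form used in the string above.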
  • $25 hourly
    I bring extensive hands-on experience in data science, with proficiency in Hadoop components such as MapReduce, Hive, and Pig, alongside a deep understanding of AWS cloud services. Over the course of my career, I have successfully delivered numerous projects using machine learning techniques for in-depth data analysis. In particular, I leverage Apache Spark to efficiently process vast datasets for analytical purposes. My expertise extends to the full spectrum of Spark's capabilities, including Spark Streaming, Spark MLlib, and Spark GraphX, which have proven instrumental in improving the speed and scalability of data processing across projects. I have used Spark MLlib to develop machine learning models tailored to specific client requirements, focusing on prediction and classification tasks.

    In my current role, I work closely with Hadoop components and continue to harness Spark's advanced features, such as Spark Streaming, MLlib, and GraphX, for real-time data processing. I also incorporate DevOps practices into my workflow to ensure seamless collaboration between development and operations teams, including continuous integration/continuous deployment (CI/CD) pipelines, automated testing, and infrastructure-as-code (IaC) principles. Embracing a DevOps mindset improves the overall efficiency and reliability of the software development lifecycle.

    I take pride in my ability to align machine learning methodologies with data processing workflows to meet client demands effectively, leveraging Spark MLlib for predictive modeling and classification to address client requirements and business objectives holistically. Throughout my journey in data science, I have remained dedicated to staying at the forefront of technology, constantly adapting to new tools and methodologies. I am enthusiastic about bringing this multifaceted expertise in data science and DevOps to new challenges and future projects.
    Featured Skill Big Data
    Data Scraping
    Google Analytics
    AWS Lambda
    Apache Kafka
    Amazon DynamoDB
    Apache Hadoop
    BigQuery
    Amazon ECS
    SQL
    Sentiment Analysis
    Machine Learning
    NLTK
    Apache Spark MLlib
    Apache Spark
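The MapReduce model referenced in the profile above (and which Spark generalizes) can be illustrated with a toy word count. This is a stdlib-only Python sketch of the map, shuffle, and reduce phases, not Hadoop or Spark itself; the sample documents are made up:

```python
from itertools import groupby

def map_phase(document):
    # Map: emit a (word, 1) pair for every word in a document.
    return [(word.lower(), 1) for word in document.split()]

def reduce_phase(pairs):
    # Shuffle: group pairs by key; Reduce: sum the counts per word.
    pairs.sort(key=lambda kv: kv[0])  # groupby needs sorted input
    return {
        word: sum(count for _, count in group)
        for word, group in groupby(pairs, key=lambda kv: kv[0])
    }

documents = ["big data big pipelines", "data pipelines at scale"]
# Flatten all mapper outputs, then reduce them to the final counts.
mapped = [pair for doc in documents for pair in map_phase(doc)]
counts = reduce_phase(mapped)
print(counts)  # {'at': 1, 'big': 2, 'data': 2, 'pipelines': 2, 'scale': 1}
```

In Hadoop or Spark the same two phases run in parallel across a cluster, with the framework handling partitioning, shuffling, and fault tolerance.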
  • $30 hourly
    A seasoned data engineer and certified AWS Solutions Architect Associate with 8+ years of data engineering experience. I have designed and implemented multiple ETL pipelines on cloud platforms such as AWS and Azure as well as on-prem ecosystems.

    💡 If you want to turn data into actionable insights, or you're planning to put the 5 V's of big data to work:

    👋 Hi. My name is Sushant and I'm a data engineering professional.

    💡 My true passion is creating robust, scalable, and cost-effective big data solutions, mainly using Apache Spark, Hadoop, open-source technologies, and cloud platforms such as AWS, Azure, or GCP.

    💡 During the last 9 years, I have worked with tech stacks including:
    - Programming languages: Java, Scala, Python
    - Big data technologies: Hadoop, Apache Spark, Hive, HBase, Kafka, Airflow, Oozie, Elasticsearch, etc.
    - Hadoop distributions: Cloudera
    - AWS: EMR, EC2, S3, RDS, Data Pipeline, Glue, Kinesis, SNS, Lambda, DynamoDB, SQS, etc.
    - Azure: Data Factory, Function Apps, Azure Data Lake Storage (Gen1/Gen2), Databricks, Service Bus, Event Hub, Logic Apps, Virtual Machines, HDInsight
    - UI technologies: HTML5, JavaScript, CSS
    - Data visualization tools: Grafana, Kibana, Graphite
    - Databases: MySQL, Oracle, PostgreSQL
    - Version control tools: Git, SVN
    and much more.

    5-step approach 👣: Requirements Discussion + Prototyping + Visual Design + Backend Development + Support = Success! Usually, we customize that process depending on the project's needs and final goals.

    How to start? 🏁 Every product requires a clear roadmap and meaningful discussion to keep everything in check. But first, we need to understand your needs. Let's talk!

    💯 Working with me, you will receive a modern, good-looking application that meets all guidelines with easy navigation, and of course, you will have unlimited revisions until you are 100% satisfied with the result.
    Featured Skill Big Data
    Docker
    Microsoft Azure
    Data Warehousing & ETL Software
    Scala
    Apache Hadoop
    Amazon Web Services
    Apache Airflow
    Data Modeling
    Apache Kafka
    Hive
    Linux
    Apache Spark
    PySpark
    Python
    SQL
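The extract-transform-load pattern described in the profile above can be sketched in a few lines. This is a stdlib-only Python illustration, not the author's actual AWS/Azure pipelines; the CSV payload, table, and column names are all hypothetical, with SQLite standing in for a warehouse such as Redshift or BigQuery:

```python
import csv
import io
import sqlite3

# Extract: read raw records (here from an in-memory CSV; in a real
# pipeline the source might be S3, Kinesis, or an upstream database).
raw = "order_id,amount\n1,19.99\n2,bad-data\n3,5.00\n"
rows = list(csv.DictReader(io.StringIO(raw)))

# Transform: cast types and drop records that fail validation.
def transform(row):
    try:
        return (int(row["order_id"]), float(row["amount"]))
    except ValueError:
        return None  # a real pipeline would quarantine these records

clean = [r for r in (transform(row) for row in rows) if r is not None]

# Load: write the cleaned records to the target store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", clean)
total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
```

Production pipelines built on Spark, Glue, or Data Factory follow the same three stages; the frameworks add distribution, scheduling, retries, and monitoring on top.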

How hiring on Upwork works

1. Post a job

Tell us what you need. Provide as many details as possible, but don’t worry about getting it perfect.

2. Talent comes to you

Get qualified proposals within 24 hours, and meet the candidates you’re excited about. Hire as soon as you’re ready.

3. Collaborate easily

Use Upwork to chat or video call, share files, and track project progress right from the app.

4. Payment simplified

Receive invoices and make payments through Upwork. Only pay for work you authorize.


How do I hire a Big Data Engineer near Faridabad on Upwork?

You can hire a Big Data Engineer near Faridabad on Upwork in four simple steps:

  • Create a job post tailored to your Big Data Engineer project scope. We’ll walk you through the process step by step.
  • Browse top Big Data Engineer talent on Upwork and invite them to your project.
  • Once the proposals start flowing in, create a shortlist of top Big Data Engineer profiles and interview them.
  • Hire the right Big Data Engineer for your project from Upwork, the world’s largest work marketplace.

At Upwork, we believe talent staffing should be easy.

How much does it cost to hire a Big Data Engineer?

Rates charged by Big Data Engineers on Upwork can vary based on a number of factors, including experience, location, and market conditions. See hourly rates for in-demand skills on Upwork.

Why hire a Big Data Engineer near Faridabad on Upwork?

As the world’s work marketplace, we connect highly skilled freelance Big Data Engineers with businesses and help them build trusted, long-term relationships so they can achieve more together. Let us help you build the dream Big Data Engineer team you need to succeed.

Can I hire a Big Data Engineer near Faridabad within 24 hours on Upwork?

Depending on availability and the quality of your job post, it’s entirely possible to sign up for Upwork and receive Big Data Engineer proposals within 24 hours of posting a job description.