Hire the best Apache Hive Developers in Islamabad, PK

Check out Apache Hive Developers in Islamabad, PK with the skills you need for your next job.
  • $40 hourly
    Hiya, and thanks for dropping by my Upwork profile! You can call me Zee. I'm a Microsoft-certified developer with a strong track record of job success in software engineering. I take pride in communicating with global clients like you promptly and at the right level of detail, technical or non-technical, to give you peace of mind that your project will get done. For technical folks, I have working competency with the following: Big data: Hive, Sqoop, NiFi, HDFS, Cloudera. ETL: Teradata, Oracle, SQL/PL-SQL, PostgreSQL, Oracle Data Integrator. BI tools: Power BI. Web development: WordPress, osCommerce. Languages: SQL, PHP, CSS, jQuery. Certification: Microsoft Azure Data Fundamentals. Non-technical? Don't worry, drop me a message and we can discuss how to get what you need accomplished today!
    Apache Hive
    Talend Open Studio
    ETL Pipeline
    Data Warehousing
    Microsoft Power BI
    Oracle Data Integrator
    Business Intelligence
    Data Warehousing & ETL Software
    Sqoop
    Google Cloud Platform
    Data Integration
    Microsoft Azure SQL Database
    ETL
    PostgreSQL
    SQL
  • $30 hourly
    Hi, programming is my passion. I have more than 2 years of experience in big data application development and am skilled in extract, transform, load (ETL) work using Apache technologies, with plenty of experience in data processing and data wrangling using Spark, Hadoop, and Kafka. I have worked with companies to maintain their data flow from products to warehouses. My offering (big data development): 1) Apache Spark (Scala and Python) 2) Hadoop 3) Hive 4) Kafka 5) Azure Databricks (from creating a Spark app through its CI/CD to deployment) 6) Data ingestion pipelines in Azure Data Factory V2 (ADF) 7) Azure technologies (Azure Batch, Logic Apps, Azure Functions, Azure Data Lake Store, Azure Blob Storage, Azure Data Warehouse) 8) ETL using Azure Data Factory (pipelines that move data from different cloud storage into a data warehouse) 9) Snowflake integration 10) Octopus deployment (deploying applications across environments from DEV to PROD). My team offers additional services, including: 1) Development in TypeScript using the Node.js framework 2) Unit testing using Mocha and Chai with a shift-left approach 3) Scripting in Python, Batch, and PowerShell 4) Version control systems such as Azure DevOps Repos, Mercurial, and Git 5) Creation and maintenance of build (both classic and YAML) and release pipelines on Azure DevOps 6) InnerSourcing npm packages to Azure Artifacts or open-sourcing them on npmjs 7) Continuous integration (CI) and continuous deployment (CD) for faster releases 8) Setting up a Jenkins server and defining build and release processes 9) Report generation with CanvasJS, Power BI, etc.
    Apache Hive
    Data Scraping
    Azure DevOps
    Snowflake
    Microsoft Azure
    Big Data
    Data Mining
    BigQuery
    Machine Learning
    Data Science
    Data Entry
    Databricks Platform
    Apache Spark
    Apache Hadoop
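    The ETL pattern this profile offers (pipelines that extract data from storage, transform it, and load it into a warehouse) can be sketched in miniature with standard-library Python. This is an illustrative sketch only: the orders.csv feed is hypothetical, and an in-memory SQLite database stands in for the real warehouse (Snowflake, Azure Data Warehouse, etc.).

    ```python
    import csv
    import io
    import sqlite3

    # Hypothetical source feed; in a real pipeline this would be a file
    # landing in cloud storage and picked up by Spark or Azure Data Factory.
    RAW_CSV = """order_id,amount,currency
    1,19.99,usd
    2,5.50,usd
    3,100.00,eur
    """

    def run_pipeline(raw_csv: str, conn: sqlite3.Connection) -> int:
        """Extract rows from CSV, transform them, and load a warehouse table."""
        conn.execute(
            "CREATE TABLE IF NOT EXISTS orders "
            "(order_id INTEGER, amount REAL, currency TEXT)"
        )
        rows = []
        for rec in csv.DictReader(io.StringIO(raw_csv)):
            # Transform step: cast types and normalize currency codes.
            rows.append((int(rec["order_id"]),
                         float(rec["amount"]),
                         rec["currency"].upper()))
        conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
        conn.commit()
        return len(rows)

    conn = sqlite3.connect(":memory:")
    loaded = run_pipeline(RAW_CSV, conn)
    print(loaded, "rows loaded")
    ```

    The same extract/transform/load split holds at scale; only the engines change (Spark for the transform, a cloud warehouse for the load).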
  • $20 hourly
    Hello! I am a skilled data engineer and software developer with expertise in several databases, including MySQL, PostgreSQL, Oracle, and MS SQL Server. I am also proficient in programming languages such as Java and Python and in data analysis libraries like Pandas, NumPy, and Matplotlib. Additionally, I have experience in web scraping using Beautiful Soup and have worked with big data technologies such as the Hadoop Distributed File System (HDFS), Apache Hive, and Apache HBase. I have extensive experience in data visualization using well-known tools like Power BI, FineBI, and Tableau. I have worked on projects ranging from database design and development to data analysis and visualization. My experience includes developing and optimizing SQL queries, creating ETL pipelines, and designing data models for large-scale systems. I also have experience developing web applications using Java and Python frameworks. I am a quick learner and always strive to keep up to date with the latest technologies and best practices. I am comfortable working with Linux and have experience in server administration and automation using shell scripting. If you are looking for a data engineer or software developer who can deliver quality work, look no further. Let's discuss your project and see how I can help you achieve your goals.
    Apache Hive
    Apache HBase
    Web Server
    Data Analysis
    Microsoft SQL Server
    MySQL
    Oracle
    PostgreSQL
    Linux
    SQLite
    Web Scraping
    Java
    SQL Programming
    Apache Hadoop
    Python
  • $25 hourly
    I have been working as a Cloudera administrator in the telecommunications and financial industries, installing, configuring, and monitoring production clusters of around 15 to 18 nodes. I am available to install, configure, tune, and fix issues on your clusters. My skills and experience include: Cloudera administration on production clusters, including installation from scratch, adding and configuring services, cluster upgrades, commissioning and decommissioning data nodes in a secure way, NameNode recovery, capacity planning, and slots configuration. Linux system administration on RHEL (hands-on with Red Hat 7.5), shell scripting, and crontab job scheduling, including scheduling Spark and Sqoop jobs via shell scripts. Strong hands-on experience with Impala, Hive, HDFS, Spark, and YARN: performance monitoring and troubleshooting of Impala, Spark, and Hive jobs, resolving bad and concerning cluster health issues, HDFS replication management, rebalancing HDFS data across all hosts, and enabling HA on master nodes and HDFS. Cloudera Manager work: custom dashboards for service health and memory charts, CM user access management, and email alerts on bad service health. LDAP configuration on Cloudera Manager and Hue for business-user login; Hue, Sentry, MySQL/MariaDB, and Linux user management; Cloudera Navigator configuration for audit logs. Databases and data tooling: MySQL, MariaDB, Teradata, RDBMS, NoSQL (MongoDB), SQL, ETL, data warehousing, SSRS, and data migration from one source to another. Big data stack: Cloudera Hadoop, Sqoop, Flume, HDFS, Impala, and configuring a dedicated cluster for Kafka. ELK: installation and configuration of Elasticsearch, Kibana, and Logstash on test and production environments, log extraction using Grok patterns, Kibana dashboard creation, and integrating Elasticsearch with the Hadoop cluster for fast performance. I also have experience implementing and administering infrastructure on an ongoing basis, including performance tuning and troubleshooting Spark jobs. I can do all of this at very reasonable cost. Feel free to discuss your project.
    Apache Hive
    Linux System Administration
    Informatica
    Big Data
    Apache Kafka
    Apache Spark
    Apache Hadoop
    Cloudera
    SQL
    Hive Technology
    YARN
    ETL Pipeline
    Apache Impala
    Cluster Computing
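    The cron-scheduled Sqoop jobs this profile mentions typically come down to a shell wrapper assembling a `sqoop import` command with connection and target parameters. A minimal sketch of that assembly step in Python follows; the JDBC URL, table name, and target directory are hypothetical placeholders, not values from this profile.

    ```python
    def build_sqoop_import(jdbc_url: str, table: str, target_dir: str,
                           num_mappers: int = 4) -> list:
        """Assemble a `sqoop import` command as an argument list.
        A scheduler (cron, or a shell wrapper it invokes) would supply
        the real connection details and run the result."""
        return [
            "sqoop", "import",
            "--connect", jdbc_url,          # JDBC URL of the source RDBMS
            "--table", table,               # source table to import
            "--target-dir", target_dir,     # HDFS directory for the output
            "--num-mappers", str(num_mappers),  # parallel map tasks
        ]

    cmd = build_sqoop_import(
        "jdbc:mysql://db.example.com/sales",       # hypothetical source DB
        "transactions",                            # hypothetical table
        "/user/hive/warehouse/transactions",       # hypothetical HDFS path
    )
    print(" ".join(cmd))
    ```

    In practice the command would be executed (e.g. via subprocess or a shell script) and its exit status checked before the cron run is considered successful.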
  • $30 hourly
    Greetings! I am a dynamic data professional, adept at bridging the realms of data science and data engineering. Armed with an MS in Data Science and over 4 years of industry experience, I offer a potent mix of theoretical prowess and hands-on expertise. Expertise: — Data Science Mastery: Orchestrating end-to-end workflows encompassing data extraction, cleaning, and normalization, employing cutting-edge techniques in transformation, feature extraction, and selection using Pandas, NumPy, Matplotlib, Seaborn, and SciPy. — Machine Learning Wizardry: Proficient in deploying scikit-learn, Keras, TensorFlow, and PyTorch for crafting predictive models that translate data into actionable insights. — Visualization Virtuoso: Designing compelling visual narratives with Matplotlib, Seaborn, Plotly, R Shiny, Power BI, Tableau, Google Data Studio, Klipfolio, and Qlik Sense. — Textual Alchemy: Expert in leveraging NLTK, Gensim, Polyglot, TextBlob, and spaCy for advanced text processing. — Time Series Sorcery: Applying sophisticated methods like LSTM, GRU, and ARIMA for precise time series analysis. — Automation Architect: Pioneering automated A/B testing processes to drive informed decision-making. — Data Engineering Excellence: Proficient in Azure, AWS, PySpark, Hadoop, Hive, and Airflow, specializing in ETL processes, data warehousing, data modeling, and pipeline development. — Notebook Maestro: Crafting clear and concise solutions for complex tasks in Jupyter notebooks. I am passionate about transforming raw data into actionable insights and architecting robust data infrastructures. Let's collaborate to elevate your data strategy to unprecedented heights!
    Apache Hive
    Data Engineering
    AWS Glue
    Data Lake
    Amazon Web Services
    Microsoft Azure
    Apache Airflow
    PySpark
    Data Visualization
    Microsoft Power BI
    ETL Pipeline
    Matplotlib
    Tableau
    Databricks Platform
    Python
  • $40 hourly
    I'm Muhammad Umair, a data engineering consultant who has served the enterprise telecom industry for over 6 years. My role is to propose architectures, build scalable and secure data pipelines, and deliver the quality data and insights the business needs for continuous growth, using the in-demand tools and technologies of the industry. I have good domain understanding of core systems that are key sources of information in any enterprise solution (CRM, AX, OCS) and of payment gateways such as Ericsson Wallet Platform (EWP), Adyen Payments, and PayPal. Technically, I'm a Databricks- and Cloudera-certified data engineer with expertise across the big data and data engineering stack: - Cloudera Data Platform (CDP) - Hortonworks Data Platform (HDP) - Azure Cloud (Data Lake, Data Factory, Synapse, Databricks Lakehouse, Azure DevOps) - Strong coding and problem-solving skills in Python/PySpark and Java - A good ability to debug problems in existing code - Writing unit test cases for quality delivery - Coordinating with QA to ensure effective production deployments - Strong problem-solving skills writing ad hoc SQL queries - Big data analytics and data warehousing - I believe in Agile and follow the Scrum process to meet deadlines and business deliverables - I'm a team player and always believe in teamwork and support. Certifications on my profile: - Databricks Certified Data Engineer - Databricks Fundamentals of the Lakehouse Accreditation - Cloudera Certified Spark & Hadoop Developer - AWS Cloud Practitioner
    Apache Hive
    Database
    Microsoft Azure
    Data Warehousing
    Big Data
    Apache NiFi
    Data Integration
    Sqoop
    Hive
    Data Lake
    Apache Spark
    Apache Hadoop
    Databricks Platform
    Cloudera
    Python

How hiring on Upwork works

1. Post a job (it’s free)

Tell us what you need. Provide as many details as possible, but don’t worry about getting it perfect.

2. Talent comes to you

Get qualified proposals within 24 hours, and meet the candidates you’re excited about. Hire as soon as you’re ready.

3. Collaborate easily

Use Upwork to chat or video call, share files, and track project progress right from the app.

4. Payment simplified

Receive invoices and make payments through Upwork. Only pay for work you authorize.


How do I hire an Apache Hive Developer near Islamabad, PK on Upwork?

You can hire an Apache Hive Developer near Islamabad, PK on Upwork in four simple steps:

  • Create a job post tailored to your Apache Hive Developer project scope. We’ll walk you through the process step by step.
  • Browse top Apache Hive Developer talent on Upwork and invite them to your project.
  • Once the proposals start flowing in, create a shortlist of top Apache Hive Developer profiles and interview.
  • Hire the right Apache Hive Developer for your project from Upwork, the world’s largest work marketplace.

At Upwork, we believe talent staffing should be easy.

How much does it cost to hire an Apache Hive Developer?

Rates charged by Apache Hive Developers on Upwork can vary based on a number of factors, including experience, location, and market conditions. See hourly rates for in-demand skills on Upwork.

Why hire an Apache Hive Developer near Islamabad, PK on Upwork?

As the world’s work marketplace, we connect highly skilled freelance Apache Hive Developers with businesses and help them build trusted, long-term relationships so they can achieve more together. Let us help you build the dream Apache Hive Developer team you need to succeed.

Can I hire an Apache Hive Developer near Islamabad, PK within 24 hours on Upwork?

Depending on availability and the quality of your job post, it’s entirely possible to sign up for Upwork and receive Apache Hive Developer proposals within 24 hours of posting a job description.