Hire the best HBase specialists

Check out HBase specialists with the skills you need for your next job.
  • $40 hourly
    I am a developer focused on providing highly efficient software solutions.
    - Full-stack developer (web applications and websites)
    - Big data engineer (Hadoop, PySpark, Hive, MapReduce)
    - E-commerce developer: Magento 2, WooCommerce
    - Install, set up, and manage VPSs and big data clusters
    - Web application security checks and advanced security testing (penetration testing)
    - Expert PHP developer
    - Experienced PHP Laravel developer
    I am ready to take on new challenges and to engage in serious work.
    HBase
    Laravel
    Python
    Apache Spark
    Cloudera
    MongoDB
    Apache HBase
    Apache Hadoop
    PHP
    JavaScript
    CakePHP
  • $38 hourly
    💡 If you want to turn data into actionable insights, apply the 5 V's of big data, or turn your idea into a complete web product, I can help.
    👋 Hi, my name is Prashant and I'm a computer engineer.
    💡 My true passion is creating robust, scalable, and cost-effective solutions, mainly with Java and open-source technologies.
    💡 During the last 11 years, I have worked with:
    - 💽 Big data: Apache Spark, Hadoop, HBase, Hive, Impala, Flume, Sqoop
    - 🔍 Searching: ElasticSearch, Logstash, Kibana, Lucene, Apache Solr, Filebeat, Winlogbeat
    - ☁️ Cloud services: AWS EMR, AWS S3, AWS EC2, AWS RDS, AWS ElasticSearch, AWS Lambda, AWS Redshift
    5-step approach 👣: requirements discussion + prototyping + visual design + backend development + support = success! We usually customize that process depending on the project's needs and final goals.
    How to start? 🏁 Every product requires a clear roadmap and a meaningful discussion to keep everything in check. But first, we need to understand your needs. Let's talk!
    💯 Working with me, you will receive a modern, good-looking application that meets all guidelines, with easy navigation, and of course unlimited revisions until you are 100% satisfied with the result.
    Keywords that you can use to find me: Java Developer, ElasticSearch Developer, Big Data Developer, Team Lead for Big Data Applications, Corporate, IT, Tech, Technology.
    HBase
    Big Data
    ETL
    Data Visualization
    SQL
    Amazon Web Services
    Amazon EC2
    ETL Pipeline
    Data Integration
    Data Migration
    Logstash
    Elasticsearch
    Apache Kafka
    Apache Spark
    Apache Hadoop
    Core Java
  • $70 hourly
    Certified Data Professional with 6+ years of experience and hands-on expertise in big data, data engineering, data warehousing, and data analytics. If you are looking for someone with a broad skill set, minimal oversight, and an ownership mentality, contact me to discuss in detail the value and strength I can bring to your company.
    I have experience in the following areas, tools, and technologies:
    ► BIG DATA & DATA ENGINEERING: Apache Spark, Hadoop, MapReduce, YARN, Pig, Hive, Kudu, HBase, Impala, Delta Lake, Oozie, NiFi, Kafka, Airflow, Kylin, Druid, Flink, Presto, Drill, Phoenix, Ambari, Ranger, Cloudera Manager, ZooKeeper, Spark Streaming, StreamSets, Snowflake
    ► CLOUD: AWS (EC2, S3, RDS, EMR, Redshift, Lambda, VPC, DynamoDB, Athena, Kinesis, Glue); GCP (BigQuery, Dataflow, Pub/Sub, Dataproc, Cloud Data Fusion); Azure (Data Factory, Synapse, HDInsight)
    ► ANALYTICS, BI & DATA VISUALIZATION: Tableau, Power BI, SSAS, SSMS, Superset, Grafana, Looker
    ► DATABASE: SQL, NoSQL, Oracle, SQL Server, MySQL, PostgreSQL, MongoDB, PL/SQL, HBase, Cassandra
    ► OTHER SKILLS & TOOLS: Docker, Kubernetes, Ansible, Pentaho, Python, Scala, Java, C, C++, C#
    Some of my major projects have included:
    - Designing big data architectures for the financial and telecom sectors to power their data-driven digital transformation.
    - Implementing data lake and data warehousing solutions using big data tools.
    - Developing ETL workflows using Apache Spark, Apache NiFi, StreamSets, Apache Airflow, etc.
    - Implementation and architectural design of data lakes and data warehouses with big data and cloud technologies.
    - Working with Cloudera, Hortonworks, AWS, GCP, and other big data and cloud platforms.
    When you hire me, you can expect:
    - Outstanding results and service
    - High-quality output on time, every time
    - Strong communication
    - Regular, ongoing updates
    Your complete satisfaction is what I aim for, so the job is not complete until you are satisfied!
    Warm regards,
    Anas
    HBase
    Solution Architecture Consultation
    AWS Lambda
    Apache NiFi
    ETL Pipeline
    Data Management
    Data Warehousing
    AWS Glue
    Apache Spark
    Amazon Redshift
    Apache Hadoop
    ETL
    Python
    SQL
    Marketing Analytics
    Big Data
    Data Visualization
  • $45 hourly
    As a highly experienced data engineer with over 10 years of expertise in the field, I have built a strong foundation in designing and implementing scalable, reliable, and efficient data solutions for a wide range of clients. I specialize in developing complex data architectures that leverage the latest technologies, including AWS, Azure, Spark, GCP, SQL, Python, and other big data stacks. My extensive experience includes designing and implementing large-scale data warehouses, data lakes, and ETL pipelines, as well as data processing systems that process and transform data in real time. I am also well versed in distributed computing and data modeling, having worked extensively with Hadoop, Spark, and NoSQL databases. As a team leader, I have successfully managed and mentored cross-functional teams of data engineers, data scientists, and data analysts, providing guidance and support to ensure the delivery of high-quality data-driven solutions that meet business objectives. If you are looking for a highly skilled data engineer with a proven track record of delivering scalable, reliable, and efficient data solutions, please do not hesitate to contact me. I am confident that I have the skills, experience, and expertise to meet your data needs and exceed your expectations.
    HBase
    Snowflake
    ETL
    PySpark
    MongoDB
    Unix Shell
    Data Migration
    Scala
    Microsoft Azure
    Amazon Web Services
    SQL
    Apache Hadoop
    Cloudera
    Apache Spark
  • $40 hourly
    I have been a Linux DevOps engineer and cloud architect since 2002. Most of my professional career has involved the design, setup, and DevOps of medium- and high-load web farms and NoSQL databases that are time-critical and require 24/7/365 uptime. Over the last several years, I have concentrated on the architecture and administration of the Hadoop ecosystem, big data systems (Cassandra, ElasticSearch, Riak ...), and the Ceph distributed storage system. I have extensive experience with a variety of web servers and load balancers (Apache, Nginx, HAProxy, Tomcat, Jetty, etc.) as well as with cloud services such as AWS, Azure, and GCP.
    HBase
    Big Data
    Apache HBase
    Linux System Administration
    Apache Cassandra
    Ceph
    Golang
    Nomad
    CI/CD Platform
    Apache Hadoop
    Consul
    Kubernetes
    Elasticsearch
    Google Cloud Platform
    Python
    Amazon Web Services
    Linux
  • $35 hourly
    🔍🚀 Welcome to a world of data-driven excellence! 🌐📊 Greetings, fellow professionals! I am thrilled to introduce myself as a dedicated Data Consultant / Engineer, leveraging years of honed expertise across a diverse spectrum of data stacks 🌍. My journey has been enriched by a wealth of experience, empowering me with a comprehensive skill set that spans Warehousing📦, ETL⚙, Analytics📈, and Cloud Services☁. Having earned the esteemed title of GCP Certified Professional Data Engineer 🛠, I am your partner in navigating the complex data landscape. My mission is to unearth actionable insights from raw data, shaping it into a strategic asset that fuels growth and innovation. With a deep-rooted passion for transforming data into valuable solutions, I am committed to crafting intelligent strategies that empower businesses to flourish. Let's embark on a collaborative journey to unlock the full potential of your data. Whether it's architecting robust data pipelines ⛓, optimizing storage solutions 🗃, or designing analytics frameworks 📊, I am dedicated to delivering excellence that transcends expectations. Reach out to me, and together, let's sculpt a future where data powers success. Thanks!
    HBase
    PySpark
    Machine Learning
    Natural Language Processing
    Informatica
    Data Science
    Data Warehousing
    Snowflake
    Data Analysis
    Big Data
    BigQuery
    ETL
    Apache Airflow
    Apache Hadoop
    Apache Spark
    Databricks Platform
    Python
    Apache Hive
  • $95 hourly
    Hey, my name is Dima. I am a cybersecurity specialist and turnkey infrastructure expert for big data solutions and data analysis. I'm passionate about building SOC, SOAR, and SIEM solutions for enterprise clients, in the cloud as well as on-premises. My main interest is building data ingestion, enrichment, and analysis pipelines on highly available, fault-tolerant servers. I have years of experience working with AWS and Google Cloud as well as setting up secure and reliable bare-metal servers. Delivering solutions that ease control over data, centralize security and threat intelligence, and save on infrastructure costs is what gives me satisfaction.
    => Technological summary
    CyberSecurity: Wazuh, ElasticSearch, Suricata, pfSense
    BigData: Cloudera CDP/CDF (Kafka, Apache NiFi), Apache Metron, Kylo, OpenSearch
    Infra as Code: Ansible, Terraform
    Clouds: AWS, Google Cloud, DigitalOcean, Linode
    Virtualization: VMware, Proxmox, KVM
    Automation: Jenkins, GitLab
    Monitoring: Zabbix, Grafana, Kibana
    DBs: Cassandra, DataStax DSE, Snowflake
    Mail: MailCow SMTP/IMAP server
    VPN: OpenVPN server
    Programming languages: Bash, Python
    Operating systems: CentOS/RHEL, Ubuntu
    => Personal skills
    • Lead by example and put team success above all
    • High thoroughness and endurance
    • Quick adaptability; eager to learn cutting-edge technologies
    • Experienced troubleshooter
    • Process improver
    • Technical documentation writer
    • Visionary
    HBase
    Elasticsearch
    Linux System Administration
    Apache Kafka
    Apache Hadoop
    Email Security
    Machine Learning
    ELK Stack
    Cloudera
    Zabbix
    MySQL
    Big Data
    Apache NiFi
    PfSense
    Red Hat Administration
    Proxmox VE
    Amazon Web Services
  • $65 hourly
    A big data architect with more than 11 years of experience building high-accuracy search engines. I can help you with:
    - JEE: Spring Boot/Batch/REST, Hibernate, Lucene, JPA
    - Search: Lucene, Solr, ElasticSearch (ELK), HBase Indexer, Lily
    - Big data: Hadoop, HBase, ZooKeeper, MapReduce, Spark, Kafka
    - Reporting: JasperReports, iText, ...
    - DB: Postgres, Oracle, MySQL, Firebird
    - Servers: WAS 6.1 & 7, WebLogic, JBoss Seam, Tomcat
    HBase
    Kerberos
    Change Management
    Apache Spark
    Java EE
    Spring Framework
    Elasticsearch
    Apache Lucene
    Apache Solr
    Apache HBase
    Apache Hadoop
  • $15 hourly
    -- Cloud Big Data Engineer
    I am an Azure-certified data engineer with professional experience in Databricks, Data Factory, Stream Analytics, Event Hubs, and Data Lake Store. I have developed API-driven and Data Factory orchestration, as well as Databricks job orchestration, cluster creation, and job management through the Databricks REST API. I have successfully delivered around three full-scale enterprise solutions on the Microsoft cloud (Databricks, Data Factory, Stream Analytics, Data Lake Store, Blob Storage). I have built Databricks orchestration and cluster management mechanisms in .NET C#, Java, and Python. I hope to serve you well thanks to my experience and knowledge. The big data and cloud tools in which I have expertise:
    - Apache Spark
    - Scala
    - Python
    - Kafka
    - Data Factory
    - Stream Analytics
    - Event Hubs
    - Spark Streaming
    - Azure Data Lake Store
    - Azure Blob Storage
    - Parquet files
    - Snowflake MPP
    - Databricks
    - .NET C#
    -- Web Scraping and Data Mining
    I have professional experience in data mining and web scraping with Selenium and Python, including scraping many e-commerce sites such as Amazon, AliExpress, eBay, and Walmart, and social sites such as Facebook, Twitter, LinkedIn, and many others. I will provide the required scraped data and scripts, as well as support. I hope to serve you well thanks to my relevant professional experience and knowledge.
    HBase
    Google Cloud Platform
    Apache Airflow
    Apache Spark
    Data Management
    Microsoft Azure
    Selenium
    Snowflake
    Big Data
    Data Scraping
    Python
  • $55 hourly
    ⭐ Top-rated DevOps expert with 60+ completed projects on Upwork. I lead a team of highly motivated DevOps experts who help SaaS businesses build scalable, highly available, and secure infrastructures on AWS/GCP/Azure according to best practices. What will you get with me and my team?
    ✅ Price & value balance. With us, you get top-quality DevOps assistance at a reasonable price. We are sensitive to customers' needs and use a proactive mindset and an efficient workflow to exceed your expectations.
    ✅ Long-term support. We build reliable long-term relationships with our customers, supporting them throughout their product lifecycle.
    ✅ Relevant expertise. Our team members use all the modern DevOps tools and technologies to cover a variety of challenges. We have AWS/GCP-certified experts on the team and an internal DevOps training program that helps grow strong talent in-house.
    ✅ Business-oriented approach. We work with startups and businesses of different scales every day and know how to enable the efficient use of resources and save up to 80% of your infrastructure costs, how to improve deployment time from weeks to minutes, and how to make your setup secure.
    =====
    Technologies that I adopt and am an expert at: AWS, GCP, Azure; Docker; Kubernetes; Terraform; Ansible; Jenkins, CircleCI, GitHub/Bitbucket pipelines; Prometheus; Grafana; Kafka; Datadog; etc.
    My areas of expertise include:
    DevOps • CI/CD • Infrastructure as Code • Container Orchestration • Configuration Management
    SysOps • Remote Infrastructure Management Services • Monitoring • Application Management • Infrastructure Upgrades/Migration
    CloudOps • Cloud Administration (AWS, GCP, Azure, DigitalOcean) • Cloud Monitoring & Maintenance • Cloud Cost Optimization • Architecture Consulting • Cloud Migration
    DbOps • Remote Database Administration Services • Performance Optimization • Orchestrated Migration across Platforms/Databases
    Industries I have experience with: Fintech, Healthcare, Security, Marketing, Blockchain, Delivery, etc.
    =====
    Case studies of customers I have worked with:
    🔸 Safe and cost-effective infrastructure bringing one-click deployment to a cybersecurity provider: deployment time decreased from 8 hours to 10 minutes (48x faster deployment). Fewer resources (time, costs, employee-hours) are now required, as code deployment has moved from a tedious, manual process to a codified, automated mechanism.
    🔸 Backend infrastructure design and development for an all-in-one connectivity platform for restaurants and stores: product provisioning time shortened from 1.5 weeks to 1 hour. The company can now drive much more value to its customers, being able to provision the product 200-300 times a month instead of 5-6 monthly deployments. System redesign and automation saved 30-35% of the budget and freed internal resources for more important tasks.
    =====
    Feel free to contact me or book a consultation. I'd love to discuss your project and help you reach your business goals.
    HBase
    System Monitoring
    System Administration
    DevOps
    Solution Architecture
    Terraform
    Ansible
    Docker
    Infrastructure as Code
    Azure DevOps
    Jenkins
    CI/CD
    Kubernetes
    Google Cloud Platform
    DevOps Engineering
    Amazon Web Services
  • $25 hourly
    Good day! I have over 9 years of extensive hands-on experience in big data technologies, from the core Hadoop ecosystem to GCP and AWS cloud-based platforms, with expertise in cloud (GCP, AWS) and on-premises (Hadoop) systems and their various components. I have worked with Google Cloud Platform technologies such as BigQuery, Dataflow, Dataproc, Pub/Sub, and Composer, and AWS services such as EMR, Redshift, Lambda, Step Functions, and EKS, alongside the open-source ecosystem: Hadoop, HDFS, MapReduce, Kafka, Spark, Hive.
    - Design and development of ingestion frameworks on Google Cloud, AWS, and Hadoop clusters.
    - Good knowledge of Hadoop cluster architecture and monitoring.
    - Extensive experience importing and exporting data using Kafka.
    - Strong hands-on experience in ETL processing using Spark, Scala/Python, and Kafka.
    - Integration of various data science models into data engineering platforms, in the cloud and on-premises.
    - End-to-end big data platform setup, from on-premises to cloud.
    - Migration of traditional data systems to cost-friendly, reliable, scalable data systems.
    - Developing and scheduling ETL workflows in Hadoop using Oozie, Airflow, and Google Cloud Composer.
    - Setup, management, and optimization of distributed data warehouses such as Hive, BigQuery, and Redshift.
    - Managing different queues over Pub/Sub and Kafka.
    - Handling ingestion at various frequencies: real-time, near-real-time, and scheduled batch flows.
    - Integration with RDBMSs such as MSSQL and MySQL, and NoSQL stores such as MongoDB and Elasticsearch.
    - Complete data-driven system builds, from data ingestion, transformation, and storage to analytics on BI platforms such as Power BI and Data Studio.
    - Managing logging using ELK (Elasticsearch, Logstash, and Kibana).
    - Setting up various web server configurations.
    - Setting up and managing DevOps pipelines with Kubernetes, Docker, Azure DevOps, and GitHub.
    HBase
    AI Content Creation
    Database
    Python Script
    Machine Learning
    Flask
    API
    Data Migration
    Apache Airflow
    ETL
    Apache Kafka
    Google Cloud Platform
    Python
    SQL
    Amazon Web Services
    JavaScript
  • $30 hourly
    I did Linux administration for the first 10 years of my career, then worked 5 years as a DevOps engineer, and then added about 3 years of cloud (AWS, Google, DO) experience as well. I work with Docker, Kubernetes, logging and monitoring (Datadog, ELK, Prometheus/Grafana), CI/CD pipelines, Helm charts, Terraform, ArgoCD, and automation daily. The numbers showing on my profile here reflect only 10 percent of my career.
    Certifications:
    Google Cloud Professional Cloud Architect
    Google Cloud Professional Data Engineer
    Top skills:
    Linux system administration (Debian, Ubuntu, CentOS, Red Hat)
    DevOps, CI/CD pipelines, and automation (ArgoCD, Jenkins, GitLab CI, GitHub Actions)
    AWS EKS, Fargate, Pulumi, Terraform
    AWS ECS, Fargate, Pulumi/Python
    Google Kubernetes Engine (GKE)
    DigitalOcean Kubernetes
    Kubernetes experience in production
    Scripting using Bash and Python
    Cloud system administration and troubleshooting: AWS, GCP, DigitalOcean
    TCP/IP networking
    Version control: Git, GitHub, GitLab, Bitbucket
    Elasticsearch (ECK on Kubernetes), Logstash, Kibana administration, ELK stack, data pipelines
    Infrastructure as code with Terraform and Pulumi
    Atlassian Jira and Confluence administration
    Traffic managers: Brocade vTM, Nginx, ingress controllers, TLS certificate management and automation
    HBase
    AWS Fargate
    Cloud Migration
    Docker
    GitLab
    Elasticsearch
    Kubernetes
    Linux System Administration
    DevOps
    GitHub
    CI/CD
    Terraform
    Python
    Google Cloud Platform
  • $15 hourly
    I have 4+ years of experience in AI and analytics, with a full understanding of the practices and processes followed in this domain. I have supported business intelligence teams from different operational backgrounds, such as life sciences, retail, hospitality, and consumer goods. I also have knowledge of the cybersecurity domain, so I can work effectively in providing data-based decisions for securing your networks. I graduated in electronics and communication engineering, and while pursuing that degree I developed a wireless robotic system using a Raspberry Pi. In my free time, I play CTFs and solve cybersecurity puzzles. Most recently, I have been working in PySpark to implement data quality checks and email alerts for any data issues in the client's data warehouse. Some of my previous projects include:
    - Designing a cloud-based data warehouse for BI teams
    - Automating manual Excel tasks in Python/pandas
    - Automating daily dashboard refreshes in Tableau
    - Designing a Databricks job submission API
    - Data collection using Scrapy
    I am flexible in learning new technologies and frameworks. Please reach out if you have any questions about what I can do for you, or if I can provide you with any consultation.
    HBase
    Big Data
    Microsoft Azure
    Amazon S3
    Data Warehousing
    Databricks Platform
    Data Science
    ETL
    Scripts & Utilities
    Apache Hadoop
    Microsoft Azure SQL Database
    Automation
    Database
    Amazon Web Services
    Apache Spark
    SQL
    Python
  • $50 hourly
    JNCIE-SP #2975, JNCIS-SEC, JNCIS-DevOps, CCNA R&S, LPI-1. Network expert with more than 10 years of experience in the telecommunications industry. During my career I have worked for the two biggest ISPs in my country and one Tier 1 ISP as part of a central network support team (Level 3 support). Besides this, I have also had many other projects involving network design, configuration, implementation, and network automation. I have deep knowledge of Juniper Networks (Junos), Cisco Systems (IOS, IOS-XR), Huawei (VRP), and Mikrotik (RouterOS) products. As a network engineer, I was responsible for all kinds of networking tasks, from customer services and device configuration to the development and extension of IP/MPLS networks, traffic engineering, and other technologies used in large ISP networks. I was also usually responsible for testing and integrating new technologies into the existing network without interrupting services. Last but not least, I was responsible for deploying and integrating network automation tools and monitoring systems.
    Experienced with Juniper routers and switches: MX, EX, and SRX.
    Experienced with Cisco IOS routers and switches: 2900, 6500, and 7200 series.
    Experienced with Cisco IOS-XR routers, especially the ASR9k.
    Experienced with Huawei and Raisecom PON network devices (OLT/ONT).
    Experienced with Foundry/Brocade/EdgeCore/Arista switches.
    Experienced with Ericsson SmartEdge series routers.
    Experienced with Mikrotik routers.
    Experienced with NMSs such as U2000, Zabbix, Cacti, and The Dude.
    Experienced with automation tools such as Ansible and Python scripting.
    Experienced with Linux systems: CentOS/Ubuntu.
    Expert knowledge of xSTP, PIM, IS-IS, OSPF, BGP, MPLS, LDP/RSVP, MPLS VPNs (L2/L3/VPLS), etc.
    HBase
    Extreme Networks
    Junos OS
    Cisco Certified Internetwork Expert
    Network Analysis
    Python
    Multiprotocol BGP
    Multiprotocol Label Switching
    OSPF
    Juniper
    Network Engineering
    Cisco Certified Network Professional
    Cisco Certified Network Associate
    Cisco Router
    Cisco IOS
  • $25 hourly
    I have been working as a Cloudera administrator in the telecommunications and financial industries, installing, configuring, and monitoring production clusters of around 15 to 18 nodes. I am available to install, configure, fix issues in, and tune your clusters. My skills and experience include:
    - Cloudera administration and Linux system administration on production servers (RHEL; hands-on experience with Red Hat 7.5)
    - Crontab job scheduling and shell scripting, including scheduling Spark and Sqoop jobs with shell scripts
    - MySQL/MariaDB for service configuration, plus MySQL, Sentry, Hue, Linux, and LDAP user management and role assignment
    - HDFS, Impala, Hadoop, SQL, ETL, Teradata, RDBMS, NoSQL (MongoDB), warehousing, SSRS, and data migration from one source to another
    - Cloudera Hadoop, Sqoop, Flume, HDFS, and other big data technologies
    - Performance monitoring of Impala, Spark, and Hive jobs; troubleshooting Spark jobs
    - HDFS replication management and rebalancing HDFS data across all hosts
    - Enabling HA on master nodes and HDFS
    - Cluster upgrades, commissioning and decommissioning of data nodes (in a secure way), NameNode recovery, capacity planning, and slot configuration
    - Cluster installation from scratch; adding and configuring services; adding and configuring new data nodes; adding and removing hosts from the cluster; enabling and disabling nodes for hardware activity
    - Cloudera Manager dashboards for navigating services, including custom dashboards for service health and memory charts
    - Cloudera Manager user access management; resolving bad and concerning health issues
    - Configuring Cloudera Navigator for audit logs
    - LDAP configuration on Cloudera Manager and Hue for business-user login
    - Configuring email alerts on bad service health
    - Strong hands-on experience with Impala, Hive, HDFS, Spark, and YARN
    - Configuring a dedicated cluster for Kafka
    - ELK use case: installing and configuring Elasticsearch, Kibana, and Logstash in test and production environments; extracting logs using Grok patterns; creating Kibana dashboards; integrating ELK with the Hadoop cluster for fast performance
    - Implementing and administering infrastructure on an ongoing basis, including performance tuning
    I can do all of this at very reasonable cost. Feel free to discuss your project.
    HBase
    Linux System Administration
    Informatica
    Big Data
    Hive Technology
    ETL Pipeline
    Apache Kafka
    Apache Hive
    Cluster Computing
    YARN
    Apache Hadoop
    Apache Spark
    Cloudera
    SQL
    Apache Impala
  • $20 hourly
    I am a passionate software developer who enjoys his job. I see every project as a means of growth, both professionally and intellectually. I enjoy giving my clients exactly what they ask for and, most importantly, value for money. I strive not only to meet deadlines but also to deliver results that actually work. I have experience with PHP (Laravel/Lumen/CodeIgniter), Vue.js, Node.js, GCP, AWS, and RESTful APIs, as well as experience with CI/CD and GitHub Actions.
    HBase
    System Administration
    Android App Development
    Virtual Private Server
    Web Design
    Core PHP
    Web Hosting
    Lumen Micro Framework
    JavaScript
    PHP
    Vue.js
    Laravel
    MySQL
    CSS
    CodeIgniter
    Web Development
  • $25 hourly
    I'm a professional from Pakistan working as a technical team lead in a US-based organization, with more than 5 years of professional experience in analysis and development. I'm comfortable with any web scraping project: web crawler development, web extraction, and conversion projects, mainly PDF or XML to MS Word, Excel, etc., as well as PDF extraction automation and XML-related applications. Every challenge is a new lesson for me, so I like to tackle difficult tasks with hours of coding. I enjoy coding and finding new opportunities to build up my career, and I am always looking forward to a creative challenge.
    HBase
    MySQL
    C#
    MongoDB
    Selenium
    Instagram
    Instagram API
    Automation
    Web Development
    Big Data
    Selenium WebDriver
    Java
    TensorFlow
    Python
    pandas
  • $15 hourly
    I am a highly skilled and experienced data scientist with a diverse skill set covering the tools and technologies used in the field. I am adept at designing, building, and maintaining data pipelines, databases, and data warehousing solutions to store, process, and analyze large volumes of data. As a data scientist with experience across different tools, I have a deep understanding of the data science process and have honed my skills in extracting insights from data. I can take on complex data problems, design and implement models, and deliver business value through data-driven decision-making. As a data engineer, I am skilled in designing, developing, and optimizing large-scale data infrastructure, with a deep understanding of the data management tools, programming languages, and data architectures required to build efficient and reliable data pipelines. I have expertise in data modeling, schema design, and ETL processes using tools such as Apache Flume, Apache Airflow, and Apache Kafka, and I am also proficient in working with both relational and NoSQL databases, data warehouses, and data lakes.
    HBase
    Data Analysis
    Data Mining
    ChatGPT
    Natural Language Processing
    TensorFlow
    Artificial Neural Network
    Machine Learning
    Data Science
    Apache Spark
    Deep Learning
    Python
  • $80 hourly
    Hi, I am a data architect / senior data engineer with 10 years of experience with RDBMS/NoSQL databases and processing large amounts of data. In the past my experience was with enterprise-level, high-profile projects, but now I'm helping a lot of startups and small to mid-sized companies. My core competencies are data modeling, data architecture on cloud platforms, database development, ETL and business intelligence, and database administration.
    Modeling of OLTP and data warehouse systems: design of new schemas, normalization/denormalization of existing models, enterprise data warehouse design based on Kimball/Inmon, Data Lake and Data Vault architectures, and modernization of existing data models.
    DBA activities: DB migrations, backup & recovery, upgrades, instance configuration, DB monitoring, horizontal scaling, streaming/BDR replication, and sharding with PostgreSQL extensions.
    Data integration and ETL:
    - Traditional batch ETL: Informatica, Talend, AWS Data Pipeline, Matillion ETL
    - Serverless ETL: AWS Lambda, Glue, Batch, AWS DMS, Google Cloud Functions
    - Streaming ETL: Apache NiFi, Kafka, Kinesis streams
    - SaaS ETL: Stitch, Alooma, Fivetran
    - Direct loading with DBMS tools and scripting
    Building the BI layer with Crystal Reports, Tableau/Qlik Sense, or other modern BI SaaS tools.
    Cloud containerization and deployment: Docker, Mesos/Kubernetes.
    Java development: EE/SE, Spring, Hibernate, RESTful APIs, Maven.
    Clouds:
    - Cloud migrations (AWS, Azure, GCP)
    - Cloud infrastructure (VPCs, EC2, load balancing, autoscaling, security in AWS/GCP/Azure)
    - Processing in EMR Hadoop / HDInsight / Azure Data Factory / Google Pub/Sub
    - Athena, DynamoDB/Cosmos DB, Amazon Aurora
    - Development and administration of RDS / Azure SQL / GCP databases
    - Building analytics solutions in Amazon Redshift / Azure PDW / Google BigQuery / Snowflake with end-to-end BI implementations
    Thank you for getting to the end of these boring details; I look forward to working on exciting projects together :)
    Best regards,
    Yegor
    HBase
    Amazon EC2
    Amazon Web Services
    Oracle PLSQL
    Tableau
    Oracle Administration
    Amazon RDS
    Oracle Performance Tuning
    ETL
    Amazon Redshift
    PostgreSQL Programming
  • $20 hourly
My motto is to build and innovate together! My clientele is consistently impressed by my creativity, dedication, and reliability. Here is why I believe I am the best candidate and a great asset to your project: 🅐 Local Development Team Leader 🅑 Certified Full Stack Developer | React | Angular | Next | Node | MERN | MEAN | SaaS Developer 🅒 Proactive communicator with a keen eye for detail and a focus on client satisfaction ➢ Frontend Stack: React/Redux, JavaScript, TypeScript, GraphQL, Gatsby, HTML/CSS, jQuery, Bootstrap, ECMAScript 6, Next.js, REST APIs, Three.js, React Hooks, MobX, Webpack. ➢ Backend Stack: NodeJS, Express, GraphQL, TypeScript, JavaScript, NestJS, Firebase. ➢ Database: MySQL, Firebase, SQLite, PostgreSQL, MongoDB. ➢ Cloud Services: Firebase, Heroku, AWS (EC2, S3, Amplify, Lambda, DynamoDB) ➢ Payment Providers: PayPal, HyperPay, Stripe. ➢ Version Control: Git, Gitflow, SVN, SmartGit, Bitbucket, GitLab, GitHub. I have completed over 50 projects. I value long-term partnerships with clients who can rely on me at any moment. Let's connect so I can build you the best web software product you can't get anywhere else. I hope to hear from you soon.
    Web Development
    Amazon Web Services
    Redux
    TypeScript
    Back-End Development
    JavaScript
    CSS
    SaaS Development
    Next.js
    SQL
    DevOps
    Full-Stack Development
    MongoDB
    Node.js
    React
  • $25 hourly
    🏆 M.S. in Big Data Analytics & Artificial Intelligence 🏆 M.S. in Business Consulting 🏆 50+ projects completed I'm Rohan - Your Tech Bestie. With my technical skills and business acumen, I'll understand your needs, solve the problems, and scale your business. I follow a professional, friendly, and collaborative approach to achieve the outcomes with the highest quality. Here is what my valuable clients have to say about me: ✅ "Rohan is a very good freelancer! He knows his job, has very quick communication and nice quality. Recommended!" ✅ "Rohan was very responsive and easy to work with." ✅ "Great to work with Rohan. He is completed all the tasks on time and is a good in tech skills." ✅ "Rohan works quickly and communicates his limits efficiently." ✅ "Rohan was easy to work with, he communicated well and did an outstanding job. He was efficient and great at trouble shooting to get the job done. I would definitely recommend him highly and I will also rehire him for future jobs." ✅ "Fast turn around time and good quality of work and good english writing skills. Easy to communicate with. Recommended." 👉 Send me a message about your vision, and I'll reply with my best advice or refer you in the right direction.
    Neural Network
    Data Visualization
    Model Optimization
    Model Tuning
    Quantitative Analysis
    Research Papers
    Business Intelligence
    Artificial Intelligence
    pandas
    Analytics
    Natural Language Processing
    Keras
    Python
    Machine Learning
  • $350 hourly
I am a full-stack data scientist/data engineer with 14,000+ hours on Upwork and many more offline. I am familiar with almost all major tech stacks in data science/engineering and app development. Front end: UI/UX, Node.js, React, Angular. Back end: microservices, REST APIs, database performance optimization. CI/CD: Jenkins, GitLab, Kubernetes. Security: secure file transfer, OAuth, etc. ETL: Scriptella, Informatica, NiFi. Search engine: Elasticsearch. Software design documentation: Graphviz, Mermaid. Web scraping: Scrapy, rotating proxies, Selenium, Beautiful Soup. Data Science: Python, Java, R, C/C++, NLP/NLG, AIGC, GPT-3, ChatGPT, Hugging Face, ML predictive modeling, reinforcement learning, knowledge graphs (Neo4j), recommendation engines, deep learning, computer vision, OCR, GANs, DeepFake, Stable Diffusion, signal processing, voice cloning, chatbots, sports betting, price optimization, time series analysis/forecasting, crypto, Solidity, tokenomics, etc. Research: I have published around 100 papers in top-tier ML conferences and journals and hold 7 patents. I worked as a research scientist on machine learning and algorithms at the IBM T.J. Watson Research Center, Industry Solution Group, from 2012 to 2017, and as a J2EE software engineer from 2006 to 2008. I obtained a Ph.D. in Computer Science from the University of California, Los Angeles, with a major in Machine Learning and minors in Artificial Intelligence and Data Mining. I have been awarded the Most Outstanding Ph.D. Graduate Award, the Northrop Grumman Outstanding Graduate Student Research Award, and the Chancellor's Award for Most Outstanding Applicants, all from the Computer Science Department, UCLA, as well as the Chinese Government Award for Outstanding Chinese Students Overseas, 2010. I have worked as a consultant for many start-ups on various projects and have a solid background in both research and development.
    Data Scraping
    Smart Contract
    Binance Coin
    Blockchain
    Bioinformatics
    Predictive Analytics
    Python
    Machine Learning
    Recommendation System
    Computer Vision
    Chatbot
    Natural Language Processing
    Deep Learning
  • $40 hourly
I am a Senior Big Data Engineer with 15 years of experience designing, developing, and implementing data analytics applications. For the last 10 years I have been working on big data platforms using Spark with Scala/Java, Python, Hive, Pig, Sqoop, Oozie, etc., in AWS and Cloudera environments. I have expert-level proficiency in creating data pipelines using various ETL methodologies, very good experience writing Unix shell scripts and Python scripts, and strong skills in designing database/data warehouse models and in SQL/PL/SQL programming.
    Cloud Computing
    Apache Kafka
    ETL Pipeline
    SQL
    RESTful API
    Oracle Applications
    Unix Shell
    Python
    Scala
    Java
    SQL Programming
    Apache Spark
    Big Data
  • $12 hourly
- With over 10 years of experience in the IT industry and a record as a top decision-maker on several popular IT projects globally, I am the Sr. Technical Architect at Oodles Technologies. - More than 3000 hours on Upwork - Worked with Fortune 500 companies - My team has developed more than 700 customized web solutions, websites, and applications. - Our areas of expertise are SaaS, web/mobile application, CRM, and e-commerce application development. Our technical expertise includes: - APP DEVELOPMENT: Hybrid app development using React Native and Flutter, native iOS & Android, Swift, Objective-C, Kotlin. - FRONTEND: Angular, ReactJS, Vue.js, Bootstrap, JavaScript, jQuery, Ajax, HTML/CSS. - BACKEND: Java, Spring Boot, Hibernate, Python, Django, Flask, Node.js, .NET. - DATABASE: MySQL, MS SQL Server, PostgreSQL, MongoDB, Cassandra, Redis. Our SaaS expertise includes: - SaaS Product Development Services & Consultation - SaaS Mobile App Development - Multi-Tenant SaaS Software Development - Microservices Application Development - Third-Party API Integration Different types of platforms we have developed so far: - E-commerce Platforms - Analytics Platforms - Logistics Platforms - Management Platforms - Healthcare Platforms - Lifestyle Platforms - Real Estate Platforms - Travel and Tourism Applications - CRM - Booking Platforms - Subscription-Based Platforms We maintain a collaborative and agile working environment while delivering on deadlines for global clients.
    Tesseract OCR
    OCR Algorithm
    Recommendation System
    Machine Learning Model
    Chatbot Development
    YAML
    Information Architecture
    Competitive Analysis
    Artificial Intelligence
    SaaS
    Mobile App Development
    UX Design
    Technical Analysis
    Enterprise Resource Planning
    Agile Project Management
  • $25 hourly
✅ 24/7 Availability ✅ 5-Star Rating ✅ 520+ Projects ✅ 350+ Chrome Extensions I am a senior full-stack web app and big data developer with a track record of delivering large-scale, high-performance products, and I have professional experience across many frontend and backend technologies. ⭐Frontend: AngularJS, Vue.js, ReactJS/Redux, Bootstrap, JavaScript, TypeScript, jQuery, Ajax, HTML5, Material UI, Tailwind CSS. ⭐Backend: NodeJS, ExpressJS, PHP, Laravel, CakePHP, Yii, WordPress, Python, Django, Flask, RESTful API, GraphQL. ⭐Mobile App: React Native, Ionic, Flutter, Apache Cordova, IoT, Socket.io, iOS & Android. ⭐Browser Extension Development: Chrome, Edge, Firefox, Opera, Brave & Safari. ⭐Database: MongoDB, PostgreSQL, MySQL. ⭐Design: UI/UX, Photoshop, Adobe XD, Adobe Illustrator, CorelDRAW. ⭐Python scripting; Jira, Trello, and Azure DevOps for project management; web scraping. ⭐AWS (Redshift, Glue, ECS, EC2, EMR, Kinesis, S3, RDS, VPC, IAM, DMS). ⭐GCP (BigQuery, Dataflow, SnowFlow). ⭐Microsoft Azure, Hadoop big data, Elasticsearch/Kibana/Logstash (ELK). ⭐Hadoop setup on standalone, Cloudera, and Hortonworks. ⭐SQL (MySQL, PostgreSQL), NoSQL databases like HBase and MongoDB, machine learning, deep learning, Spark with MLlib, GraphX, Sphinx, Memcached, and MS BI/Tableau/GDS for business intelligence and data visualization. ⭐Project management tools: Asana | Jira | Trello | Slack | Basecamp, etc. **************************************************************************************************** ✅I am a technology executive with expertise in architecting and implementing highly scalable solutions that drive brand awareness, increase revenue, optimize productivity, and improve margins. ✅I oversee a company's data, security, maintenance, and network, implementing the business's technical strategy and managing the overall technology roadmap.
✅Additionally, I am involved with talent acquisition and onboarding, training, and managing a team of Project Managers, Product Managers, Developers, DevOps professionals, and Designers. ✅My focus is on setting the technical strategy for the company to enable it to achieve its goals. I seek out current and future technology that will drive the company's success and ensure that technology goals are aligned with the organizational vision. ✅I am passionately committed to developing technology teams, empowering people to accomplish their goals, and coaching them to realize their individual potential. ✅I have a proven track record of success in technology product development, cloud infrastructure, building data platforms, ETL pipelines, streaming pipelines, e-commerce, CRM, mobile strategy, and social media integration. ✅Over the last 10+ years, I have worked extensively with Apache Spark, Lucene, Elasticsearch/Kibana, Amazon EC2, and a range of RDBMS (SQL, MySQL, Aurora, PostgreSQL, Oracle), NoSQL engines (Hadoop/HBase, Cassandra, DynamoDB, MongoDB), graph databases (Neo4j, Neptune), in-memory databases (Hazelcast, GridGain), Apache Spark MLlib, Weka, Kafka, clustered file systems, and general-purpose computing on GPUs. ✅I am experienced in deploying ML/DL models on GPU instances (Nvidia), query optimization, application profiling, and troubleshooting. Top 5 Reasons to Hire: ================ ✅ High-Quality Work ✅ Cost-Efficient Services ✅ Verified Fluent English ✅ 10+ Years of Experience ✅ Long-Term Working Relationships Let's connect and proceed further. We would be happy to discuss every detail of your valued project and define the scope of work to achieve the best outcome in line with your expectations. Thanks!
    RESTful API
    AI Model Integration
    DevOps Engineering
    Machine Learning
    API Development
    Android
    Angular
    Laravel
    jQuery
    Python
    .NET Framework
    Desktop Application
    Google Chrome Extension
    PHP
    React
  • $29 hourly
*Experience* • Hands-on experience upgrading HDP or CDH clusters to the Cloudera Data Platform Private Cloud [CDP Private Cloud]. • Extensive experience installing, deploying, configuring, supporting, and managing Hadoop clusters using Cloudera (CDH) distributions and HDP, hosted on Amazon Web Services (AWS) and Microsoft Azure. • Experience upgrading Kafka, Airflow, and CDSW. • Configured various components such as HDFS, YARN, Sqoop, Flume, Kafka, HBase, Hive, Hue, Oozie, and Sentry. • Implemented Hadoop security. • Deployed production-grade Hadoop clusters and their components through Cloudera Manager/Ambari in virtualized environments (AWS/Azure cloud) as well as on-premises. • Configured HA for Hadoop services with backup & disaster recovery. • Set up Hadoop prerequisites on Linux servers. • Secured clusters using Kerberos and Sentry, as well as Ranger and TLS. • Experience designing and building scalable infrastructure and platforms to collect and process very large amounts of structured and unstructured data. • Experience adding and removing nodes, monitoring critical alerts, configuring high availability, configuring data backups, and purging data. • Cluster management and troubleshooting across the Hadoop ecosystem. • Performance tuning and resolving Hadoop issues via the CLI, the Cloudera Manager UI, and the Apache web UIs. • Generated reports on running nodes using various benchmark operations. • Worked with AWS services such as EC2 instances, S3, VPCs, and security groups, and with Microsoft Azure services such as resource groups, resources (VMs, disks, etc.), Azure Blob Storage, and Azure storage replication. • Configured private and public IP addresses, network routes, network interfaces, subnets, and virtual networks on AWS/Microsoft Azure. • Troubleshooting, diagnosing, performance tuning, and resolving Hadoop issues. • Linux installation and administration. • Fault finding, analysis, and logging information for reports.
• Expert in Kafka administration and in deploying UI tools to manage Kafka. • Implemented HA for MySQL. • Installed/configured Airflow for job orchestration.
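HBase cluster work like the above often has to deal with region hotspotting, where monotonically increasing row keys (timestamps, sequence IDs) pile all writes onto one region. A standard mitigation is key salting; here is a minimal pure-Python sketch of the idea (the bucket count and key format are illustrative assumptions, not from this profile):

```python
import hashlib

NUM_BUCKETS = 16  # assumed to match the table's region pre-split count

def salted_row_key(key: str) -> str:
    """Prefix a row key with a deterministic salt bucket so that
    sequential keys scatter across HBase regions instead of
    all landing in one."""
    bucket = int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16) % NUM_BUCKETS
    return f"{bucket:02d}|{key}"

# Trade-off: point gets stay cheap (the salt is recomputable from
# the key), but range scans must fan out across all buckets.
```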
    Apache Kafka
    Hortonworks
    Apache Hive
    Apache Airflow
    YARN
    Apache Hadoop
    Apache Zookeeper
    Apache Spark
    Cloudera
    Apache Impala
  • $40 hourly
    With more than 25 years of diverse experience in software development for internet and client/server applications under various environments, I have a proven track record of increasing responsibility in design, systems analysis/development, and complete life cycle project management. My skillset makes me an ideal candidate for both data science and data engineering tasks. I have experience with cutting-edge technologies in databases, search engines, in-memory computing, analytics, language and scripting, and servers. As a big data engineer, I have experience working with a variety of tools and technologies. One of my primary skills is working with PySpark and Spark to manage and analyze large datasets. I have used these tools to design and implement scalable data pipelines that can process terabytes of data efficiently. In addition to PySpark and Spark, I am proficient in Python programming, which has been useful for developing custom code to manipulate and analyze data. I have also worked extensively with Amazon's Elastic MapReduce (EMR) and Elastic Compute Cloud (EC2) to set up and manage large-scale distributed computing environments for big data applications. My experience also includes expertise in shell scripting, which I have used to automate various tasks such as data ingestion, processing, and transformation. I have used shell scripts to interact with APIs and data sources, extract data, and load it into databases or data lakes. As part of my big data engineering work, I have also gained experience with machine learning (ML) libraries such as Scikit-Learn, TensorFlow, and PyTorch. I have used these libraries to develop and deploy ML models for various use cases, including predictive maintenance, fraud detection, and recommendation systems. Other technologies that I have worked with include data storage technologies such as Hadoop, PostgreSQL, PostGIS, MySQL, and Percona. 
I have experience with NoSQL databases such as Cassandra, Riak, and CrateDB, as well as big data processing and search frameworks such as Hive, Sqoop, Sphinx, and Solr. I am also proficient in using Redshift for data warehousing and TokuDB and HandlerSocket for NoSQL data management. Overall, my experience as a big data engineer has given me a broad range of skills and expertise across various technologies, which I can use to design, implement, and maintain complex big data solutions for different industries and use cases.
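The batch-ETL pattern profiles like this one describe (extract rows, drop malformed records, aggregate and load) can be illustrated in a few lines of plain Python. The data and field names below are made up for the example:

```python
import csv
import io
from collections import defaultdict

RAW_CSV = """region,amount
east,100
east,not_a_number
west,250
"""

def run_etl(raw: str) -> dict:
    """Tiny batch ETL step: extract CSV rows, skip records whose
    amount fails to parse, and load per-region totals into a dict."""
    totals = defaultdict(int)
    for row in csv.DictReader(io.StringIO(raw)):
        try:
            totals[row["region"]] += int(row["amount"])  # transform
        except ValueError:
            continue  # skip/quarantine the malformed record
    return dict(totals)

# run_etl(RAW_CSV) -> {'east': 100, 'west': 250}
```

Production pipelines in Spark or an ETL tool add partitioning, retries, and schema enforcement, but this extract-validate-aggregate shape is the core of most batch jobs.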
    Python
    Scala
    ETL Pipeline
    Data Modeling
    NoSQL Database
    BigQuery
    Apache Spark
    Sphinx
    Linux System Administration
    Amazon Redshift
    PostgreSQL
    ETL
    MySQL
    Database Optimization
    Apache Cassandra
  • Want to browse more freelancers?
    Sign up

How it works


1. Post a job (it’s free)

Tell us what you need. Provide as many details as possible, but don’t worry about getting it perfect.

2. Talent comes to you

Get qualified proposals within 24 hours, and meet the candidates you’re excited about. Hire as soon as you’re ready.

3. Collaborate easily

Use Upwork to chat or video call, share files, and track project progress right from the app.

4. Payment simplified

Receive invoices and make payments through Upwork. Only pay for work you authorize.

Trusted by 5M+ businesses

How do I hire an HBase Specialist on Upwork?

You can hire an HBase Specialist on Upwork in four simple steps:

  • Create a job post tailored to your HBase Specialist project scope. We’ll walk you through the process step by step.
  • Browse top HBase Specialist talent on Upwork and invite them to your project.
  • Once the proposals start flowing in, create a shortlist of top HBase Specialist profiles and interview.
  • Hire the right HBase Specialist for your project from Upwork, the world’s largest work marketplace.

At Upwork, we believe talent staffing should be easy.

How much does it cost to hire an HBase Specialist?

Rates charged by HBase Specialists on Upwork can vary with a number of factors including experience, location, and market conditions. See hourly rates for in-demand skills on Upwork.

Why hire an HBase Specialist on Upwork?

As the world’s work marketplace, we connect highly skilled freelance HBase Specialists with businesses and help them build trusted, long-term relationships so they can achieve more together. Let us help you build the dream HBase Specialist team you need to succeed.

Can I hire an HBase Specialist within 24 hours on Upwork?

Depending on availability and the quality of your job post, it’s entirely possible to sign up for Upwork and receive HBase Specialist proposals within 24 hours of posting a job description.

Schedule a call