Hire the best HBase Specialists in India
Check out HBase Specialists in India with the skills you need for your next job.
- $50 hourly
- 5.0/5
- (2 jobs)
Principal Engineer at an eminent financial firm with extensive experience in data processing pipeline development, software development, and architecture. I have worked with Python, data processing pipelines (Hadoop, Spark, Spark Streaming), cloud platforms (AWS, Azure, GCP), DevOps (Docker, Kubernetes, PCF), Java backends, and more. I have developed cloud-based distributed software systems that run at scale. Some of my achievements include building a stream processing pipeline that handles 8 billion events per month, a sensor data ingestion pipeline that processes millions of sensor events per minute, and APIs serving millions of active users. I have extensive experience in data modeling for different domains and in building data analysis tools with Python and related technologies. I have also developed Android apps that are live on the Play Store with very good ratings.
Apache HBase, Android App, Big Data, Apache Kafka, Software Design, Kubernetes, Docker, MongoDB, Amazon DynamoDB, Akka, Amazon Web Services, SQL, Python, Apache Spark, Java
- $30 hourly
- 4.9/5
- (19 jobs)
6+ years of experience in architecting, designing, and developing software across large, scalable distributed systems and web applications. In past roles, I was responsible for end-to-end feature development for Paytm Mall (ecommerce), Paytm Smart Retail (B2B), and Paytm for Business (merchant platform). I am currently developing an in-house analytics platform for Flipkart, as Adobe Analytics no longer scales to Flipkart's needs. Languages: Java, Scala, Python, JS. Technologies: Spring, Spring Boot, Apache Flink, Spark, Django, Node.js, Express, Flask. Data: Hibernate, Hadoop, Hive, HBase, Druid, MySQL, SQLite, PostgreSQL, Elasticsearch, Redis, SQLAlchemy. Others: Kafka, RabbitMQ, Jenkins, Kibana, Nginx, Gunicorn, Celery, Supervisor, Datadog, JIRA, Git, CI/CD, TDD.
Amazon Web Services, Google Cloud Platform, Java, Big Data, Apache Hive, Apache Hadoop, Apache Spark, Apache HBase, Apache Flink, Apache Kafka, Django, Elasticsearch, JavaScript, Python, SQL
- $75 hourly
- 0.0/5
- (0 jobs)
Passionate and driven Product Strategist with 12+ years of experience building and managing enterprise-grade B2B and B2C products such as Epsilon People Cloud Customer Data Platform, Oracle PLM Cloud Fusion Supply Chain Management, a logistics platform for Royal Mail Group (United Kingdom), and a catering and recipe management platform for Wawa (USA). Proven track record of working with cross-functional teams across all phases of the product life cycle, with a Professional Scrum Product Owner certificate and an MBA from IIM Bangalore with a focus on New Product Development, Strategy, and Marketing.
Product Development, Oracle Fusion Applications, Apache HBase, GDPR, Data Cleaning, Marketing Advertising, Generative AI, Scaled Agile Framework, Agile Software Development, Tech & IT, Supply Chain & Logistics, Scrum, Ecommerce, Product Backlog
- $200 hourly
- 0.0/5
- (0 jobs)
Objective: to succeed in a stimulating and challenging environment, expanding my talents and skills while contributing to the success of the company. Profile: interested in advancing a career as a software engineer at a reputed organization, offering experience in software design and development, knowledge of hardware and coding, and integration of innovative software solutions per client specifications. Skillset and applications worked with: Cloudera Hadoop, Apache Hadoop HDFS, Apache Hive, Apache Sqoop, Apache Flume, Apache Oozie, Agile working model, Kanban JIRA board, My Service and ServiceNow, Confluence updates, CA Workload Automation, GitHub, TeamCity, AutoSys/PuTTY, IntelliJ IDEA, WinSCP, Cloudera HUE, SQL, Teradata. Hobbies: mountain climbing, motorcycle road trips, quote writing, drawing.
Data Extraction, ETL Pipeline, Data Analysis, HDFS, MapReduce, GitHub, Apache Flume, Apigee, Sqoop, Apache HBase, Hive Technology, Apache Hadoop, Big Data
- $55 hourly
- 0.0/5
- (0 jobs)
A software developer with broad experience who knows how to engineer for scale, resilience, and performance at Flipkart, using Java, Kubernetes, Kafka, Elasticsearch, Redis, and more to drive innovation.
Elasticsearch, Node.js, Apache Storm, Apache Kafka, Apache HBase, Java, React, Scripting, Script, Web Development, Web Application, Product Development
- $30 hourly
- 5.0/5
- (5 jobs)
Big Data Hadoop: Cloudera, Hive, Hue, Impala, Pig, Flume, Sqoop, Hortonworks, Apache Hadoop, HBase. Cloud: AWS (VPC, EMR, S3, EC2), GCP, Azure. Security: MIT Kerberos, Active Directory Kerberos, Sentry. Video editing and YouTube thumbnail design: VN video editing, iMovie editing, Canva for YouTube video thumbnail design.
Amazon EC2, Kerberos, YouTube Thumbnail, Amazon S3, Apache Flume, Apache HBase, Apache Pig, Cloudera, Apache Hadoop, Microsoft Active Directory, Apache Hive
- $12 hourly
- 0.0/5
- (0 jobs)
Greetings! I'm Akshay, a seasoned web developer with a passion for crafting stunning and functional websites. With 1.5 years of hands-on experience, I specialize in both frontend and backend development, ensuring a seamless and engaging user experience.
Data Warehousing & ETL Software, ETL, Google Cloud Platform, AWS Cloud9, Apache HBase, Big Data, Hive, Oracle, SQL, Apache Impala, PySpark, Data Structures, Database Management System, Scala, Python
- $15 hourly
- 0.0/5
- (0 jobs)
Total 6+ years of experience in Big Data and cloud environments. Experienced in production-grade Big Data cluster deployment using open source, Cloudera, Hortonworks, and CDP distributions. Hands-on experience with big data solutions such as Hadoop, HDFS, MapReduce, Spark, Hive, Hue, HBase, Sqoop, YARN, Kafka, AWS cloud computing, Sentry, Flume, Kerberos, Active Directory, Oozie, Zookeeper, and MongoDB. Strong experience in Hadoop administration; consulting and collaboration with BI teams and data scientists; optimizing and maintaining Scala-based Spark applications; ensuring service high availability (HA) and SLAs. SQL (MySQL, PostgreSQL, Oracle), Scala, shell scripting. Database management, log analysis, IntelliJ. Linux (RHEL, CentOS, Unix), troubleshooting, RCA. Experienced with AWS services: EC2, VPC, VPN, IAM, ELB, EMR, S3, RDS, Cloud9, CloudWatch. Familiar with GCP services: Compute Engine, Cloud Storage, Cloud SQL, Dataproc, Cloud NAT, BigQuery. Knowledge of ITIL, Azure, Git.
Data Warehousing, Linux, Apache Spark, Apache HBase, Cloudera, Apache Hadoop, ETL
- $12 hourly
- 0.0/5
- (0 jobs)
Professional Summary: Experienced Data Engineer adept in leveraging Big Data technologies such as Hadoop, PySpark, MySQL, Hive, Sqoop, HBase, and Microsoft Azure. Skilled in Python and SQL programming, ETL pipeline development, data quality assurance, problem-solving, and Git version control. Seeking a challenging role to drive insights and business value in an innovative organization.
Git, PySpark, Data Engineering, MySQL, Sqoop, Python, Apache HBase, Hive, Apache Hadoop, Big Data, ETL Pipeline, ETL
- $3 hourly
- 0.0/5
- (0 jobs)
Experienced Data Engineer skilled in Hadoop, PySpark, and AWS, with a strong focus on optimizing big data pipelines and data integration. Seeking to contribute expertise and drive innovation in collaborative projects. 5+ years of hands-on experience with Spark, SQL, Python, and the Hadoop ecosystem. Solid understanding of Spark architecture, including Spark Core. Proficient in designing data extraction logic and optimizing big data pipelines. Skilled in processing large volumes of structured and semi-structured data from diverse sources. Strong scripting abilities in Python and SQL for data manipulation.
Databricks Platform, Apache HBase, Hive, GitHub, Apache Airflow, Amazon S3, Amazon Athena, AWS Glue, Apache Hadoop, Python, SQL, PySpark, Big Data
- $6 hourly
- 5.0/5
- (1 job)
6 years of work experience with Core Java, JSP, Struts, Spring (IoC, MVC, DAO), Hibernate, JavaScript, Ajax, jQuery, CSS, MySQL, Big Data, Hadoop, Solr, Nutch, MongoDB, HBase, Phoenix, Hive, Pig, crawling, data extraction, Kafka, Storm, Spark, AngularJS, Sqoop, Flume, and Hue, plus exploring GitHub projects and implementing them in my own applications. Strengths: + Ability to efficiently and precisely solve the problem at hand. + Flexible character with inexhaustible stamina for work. + Capability to follow procedures and guidelines to meet deadlines. + Keen interest in learning and adopting new technologies. + Belief in a win-win strategy. + Hardworking, sincere, and optimistic. + Self-belief.
Apache Solr, Apache Nutch, Apache Hive, Apache Flume, Apache Hadoop, Core Java, Apache HBase, Apache Kafka, MongoDB
- $20 hourly
- 3.3/5
- (1 job)
Technologies experienced in: Java, Scala. Big Data components experienced in: deep knowledge of the Big Data ecosystem, Spark, Spark Streaming, Kafka, Kafka Streaming, HBase, Hive, Zookeeper, YARN, MapReduce, Docker, Sqoop, MongoDB, JDBC, JSON, XML, Google Protocol Buffers, etc. Hadoop distributions experienced in: Cloudera, Hortonworks. AWS Cloud experience: * Fit AWS solutions inside a Big Data ecosystem * Leverage Apache Hadoop in the context of Amazon EMR * Identify the components of an Amazon EMR cluster, then launch and configure one * Use common programming frameworks available for Amazon EMR * Improve the ease of use of Amazon EMR with Hadoop User Experience (Hue) * Use in-memory analytics with Apache Spark on Amazon EMR * Use S3 for storage * Identify the benefits of using Amazon Kinesis for near real-time Big Data processing * Leverage Amazon Redshift to efficiently store and analyze data. Technical expertise: Languages - Java SE.
RESTful API, Apache Kafka, RESTful Architecture, Apache Spark, Perforce, Big Data, Apache Hadoop, Amazon, DevOps, Apache HBase, Java, Docker, Git, MongoDB, Amazon Web Services
- $18 hourly
- 4.2/5
- (1 job)
Data engineer: Hadoop, Java, Python, AWS, Azure, GCP, Ambari, Kafka, HBase, Cassandra, Elastic, MySQL.
Elasticsearch, Scala, Qlik Sense, ETL, Amazon Redshift, MySQL, Java, Amazon Athena, AWS Glue, Apache HBase, PySpark, Databricks Platform, Apache Spark, Apache Cassandra, Apache Hadoop
- $20 hourly
- 0.0/5
- (0 jobs)
Hi, I am a Big Data developer with hands-on experience in Spark, Scala, Python, SQL, Hadoop, Hive, HDFS, Airflow, AWS, IAM, Lambda, Glue, S3, EMR, DynamoDB, and Redshift.
Apache Kafka, Apache Airflow, Amazon S3, AWS Glue, Apache HBase, Apache Hive, SQL, PySpark, Apache Hadoop, Python, Data Engineering, Big Data, Apache Spark
- $30 hourly
- 0.0/5
- (0 jobs)
I'm interested in designing systems that solve the problems companies face in their projects. I have 8 years of experience in the industry and am enthusiastic about picking up interesting work. Meanwhile, I'm looking for real-life problems to solve.
Apache HBase, Website, AWS Development, Docker Swarm Mode, GitHub, Ansible, Docker, Java, Apache Tomcat, Spring Data, Spring Boot
- $10 hourly
- 0.0/5
- (0 jobs)
Big Data Developer & Data Engineering Specialist. With 7+ years of experience in software development and data engineering, I specialize in building scalable big data solutions using Apache Spark, the Hadoop ecosystem, and cloud technologies. I've successfully delivered multiple enterprise-level projects, including data warehouse migrations, real-time data processing pipelines, and large-scale ETL implementations.
Core expertise: Apache Spark development (Java/Scala); data pipeline design and implementation; cloud computing and distributed systems; Hadoop ecosystem (HDFS, Hive, HBase); real-time data processing; performance optimization; ETL/ELT workflows; data warehouse solutions.
Recent achievements: migrated legacy C/C++ based scoring models to modern Spark-based solutions, improving processing efficiency by implementing automated data cleaning and validation pipelines; designed and implemented automated data processing pipelines using Apache NiFi for handling daily banking transactions; built complex Spark data transformations for financial data processing with strict validation requirements; developed optimization algorithms for campaign performance using Apache Spark in the AdTech domain.
Technical skills: Languages: Scala, Java. Big Data: Apache Spark, Hadoop, Hive, Impala, HBase. Data processing: ETL/ELT, data warehousing. Cloud platforms: experience with distributed computing environments. Tools: Apache NiFi, Eclipse, IntelliJ. Version control: Git.
While I'm new to Upwork, I have successful freelancing experience on other platforms such as freelancer.com (profile name: ubfapps) and bring a proven track record of delivering high-quality solutions in the big data space. I'm known for my attention to detail, strong communication skills, and ability to translate complex requirements into efficient, scalable solutions.
I'm particularly interested in projects involving big data pipeline development, data warehouse modernization, ETL/ELT implementation, performance optimization, and real-time data processing solutions. Let's discuss how I can help you leverage big data technologies to solve your business challenges efficiently and effectively.
Apache Hadoop, Apache Airflow, Cloud Computing, Google Cloud Platform, Microsoft Azure, AWS Development, Apache HBase, Hive, Apache Spark, Data Engineering, Big Data, ETL Pipeline, Data Extraction, ETL
- $25 hourly
- 4.0/5
- (1 job)
Mr. Prafulla has over 12 years of experience working on enterprise BI software products. He has worked on development of IBM Netezza Performance Server (a widely used data warehouse appliance), the Cognos Business Intelligence reporting (data analysis) product, and the Data Manager ETL tool of IBM Cognos. Currently, Prafulla works as an independent Big Data consultant, owning multiple responsibilities for setting up big data projects at several organizations. Some of his recent big data engagements include: an opinion mining engine used to derive sentiment scores from large volumes of news-site data; a product comparison engine that lets online shoppers choose the seller offering a product at a lower price than other online retailers; and design and development of distributed search and selective replication. Technologies Prafulla deals with frequently include Golang, Docker, Kubernetes, Hadoop, HBase, Hive, Apache Nutch, Couchbase, Java, C, and C++. In the past, he was recognized among the top 15 percent of mentors in IBM for mentoring university graduates under IBM's University Relations program, and he received an IBM Bravo award for setting up the team for the India Development Center of Cognos in the early days after its acquisition by IBM.
Apache HBase, VMware ESX Server, Apache Hadoop, Docker, Apache Nutch, Apache Hive, Golang, Kubernetes
- $20 hourly
- 0.0/5
- (0 jobs)
Seasoned Data Engineer with a robust background in data integration, transformation, and analytics, equipped with extensive hands-on experience in industry-leading tools and platforms. Demonstrated expertise in designing and implementing data solutions that drive business insights and operational efficiency. Technical proficiencies: Integration tools: SSIS, Azure Data Factory. Analytics services: SSAS, Azure Synapse Analytics, HDInsight. Visualization: Power BI. Languages: Python, Java, SQL, DAX, PySpark. Databases/data warehouses: SQL Server, MySQL, SQL Pool, Spark Pool. Cloud platforms: Microsoft Azure (Azure Data Factory, Synapse Analytics, HDInsight, Azure Databricks). Certifications: Databricks Certified Data Engineer Associate; Microsoft Certified: Azure Data Fundamentals; Microsoft Certified: Azure Fundamentals; TrendyTech Certified Cloud Big Data Engineer.
AWS Lambda, Microsoft Azure, Data Visualization, Data Integration, Data Extraction, Data Modeling, Apache Kafka, Apache Spark, Apache HBase, SQL Server Integration Services, Microsoft SQL SSAS, Business Intelligence, Microsoft Power BI, Python, SQL
- $8 hourly
- 0.0/5
- (0 jobs)
Data Engineer | IIIT Bangalore Graduate | 7+ Years of IT Experience. Hello! I'm Saurabh, a seasoned Data Engineer with over 7 years of experience in the IT industry, specializing in data engineering and advanced analytics. I hold a degree from the prestigious IIIT Bangalore and am currently pursuing a Master's in Data Science from Liverpool John Moores University, further honing my skills and expanding my knowledge. Expertise: PySpark & SQL: proficient in designing and optimizing data pipelines for large-scale data processing. AWS: experienced with cloud technologies, including AWS Glue, Lambda, Athena, Step Functions, S3, IAM, SNS, SQS, CloudWatch, EC2, and Redshift. Python: web-based software, desktop applications, websites. Big data tools: hands-on experience with PySpark, Hadoop, and Hive. Secondary skills: Java: solid understanding of Java for application development. Airflow: expertise in scheduling and orchestrating data workflows. Professional achievements: led a team migrating from Java & Oracle to PySpark & PostgreSQL, resulting in a 60% improvement in data pipeline efficiency and a 25% reduction in operational costs; designed and implemented a scalable data pipeline using PySpark on AWS EMR for personalized product recommendations, processing over 10 TB of data and boosting customer engagement by 35%. What I bring to the table: a problem-solving mindset capable of tackling complex technical challenges and delivering innovative solutions; a commitment to continuous learning and staying up to date with the latest advancements in backend development and cloud technologies; excellent communication skills, ensuring clear and effective collaboration with clients and team members. Let's collaborate to bring your vision to life with top-notch backend solutions for your desktop and web applications. Feel free to reach out to discuss your project requirements and how I can contribute to your success.
Website Optimization, Web API, Website Integration, React, Java, Apache Airflow, AWS Lambda, AWS Glue, PySpark, Apache HBase, Sqoop, MySQL, Apache Hadoop, Python
- $6 hourly
- 0.0/5
- (0 jobs)
I’m a software developer experienced in building web applications for small and medium-sized businesses. Whether you’re trying to solve issues, list your services, or need consultancy, I can help. Knows Java, Unix, databases, and frameworks.
Hibernate, Spring Boot, Unix Shell, Apache HBase, PostgreSQL, Oracle, SQL, Unix, Java, Architecture, Problem Solving
How hiring on Upwork works
1. Post a job
Tell us what you need. Provide as many details as possible, but don’t worry about getting it perfect.
2. Talent comes to you
Get qualified proposals within 24 hours, and meet the candidates you’re excited about. Hire as soon as you’re ready.
3. Collaborate easily
Use Upwork to chat or video call, share files, and track project progress right from the app.
4. Payment simplified
Receive invoices and make payments through Upwork. Only pay for work you authorize.