Hire the best Hadoop Developers & Programmers in Arizona

Check out Hadoop Developers & Programmers in Arizona with the skills you need for your next job.
  • $40 hourly
    I am a highly skilled data engineer with a strong background in data warehousing, ETL, data modeling, and data governance. I have experience with big data platforms such as Hadoop, Spark, and Hive, and I am proficient in programming languages such as Python, SQL, and Java. I have worked with cloud computing platforms including AWS, Azure, and Google Cloud, and I am well-versed in data visualization and reporting tools like Tableau, Power BI, and Looker. I have a strong understanding of database management systems such as MySQL, PostgreSQL, and MongoDB, hands-on experience with machine learning and artificial intelligence concepts, and a solid grounding in data governance, security, and compliance best practices. I am known for strong problem-solving and analytical skills, and for my ability to communicate effectively and work well in team environments.
    I am the lead developer of "What's Poppin," a native iOS event-finding app that uses API calls and geohashing for efficient location-based event queries against a Google Cloud Platform database (see the sketch below). I implemented its MVC architecture and applied data normalization techniques to optimize data organization and enhance query performance. I have a proven track record of delivering high-quality work on time and on budget, and I am committed to helping my clients achieve their goals through data engineering.
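    To make the geohashing mention concrete: a geohash encodes a latitude/longitude pair as a short base32 string in which nearby points share a prefix, so "events near me" becomes an indexed prefix query instead of a full distance scan. Below is a minimal sketch of the standard encoding in Python; it is illustrative only and is not code from the "What's Poppin" app, whose implementation is not shown here.

```python
# Minimal geohash encoder (illustrative sketch, not app code).
BASE32 = "0123456789bcdefghjkmnpqrstuvwxyz"  # geohash alphabet: no a, i, l, o

def geohash_encode(lat: float, lon: float, precision: int = 6) -> str:
    """Encode a latitude/longitude pair as a geohash of `precision` characters."""
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    chars, bits, bit_count, even = [], 0, 0, True
    while len(chars) < precision:
        if even:                       # even bit positions bisect longitude
            mid = (lon_lo + lon_hi) / 2
            if lon >= mid:
                bits, lon_lo = (bits << 1) | 1, mid
            else:
                bits, lon_hi = bits << 1, mid
        else:                          # odd bit positions bisect latitude
            mid = (lat_lo + lat_hi) / 2
            if lat >= mid:
                bits, lat_lo = (bits << 1) | 1, mid
            else:
                bits, lat_hi = bits << 1, mid
        even = not even
        bit_count += 1
        if bit_count == 5:             # every 5 bits become one base32 character
            chars.append(BASE32[bits])
            bits, bit_count = 0, 0
    return "".join(chars)

# Two points in the same ~1.2 km x 0.6 km cell share all 6 characters,
# so proximity filtering reduces to a prefix match on an indexed column.
print(geohash_encode(33.4484, -112.0740))  # downtown Phoenix, AZ
```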
    Hadoop
    PostgreSQL
    ETL
    Data Warehousing
    Amazon S3
    Data Modeling
    MongoDB
    MySQL
    Microsoft Azure
    Google Cloud Platform
    Hive
    PySpark
    Apache Hadoop
    SQL
    Java
    Python
  • $70 hourly
    Innovative technical architect with 14+ years of experience pioneering solutions in large-scale distributed systems, cloud architectures, and data engineering. As a patent-pending inventor, I've architected novel solutions for petabyte-scale data platforms, consistently transforming complex technical challenges into efficient, scalable solutions that drive business value.
    Technical Innovation & Leadership:
    • Led development of the ClaimIt framework (patent pending) for orphan file management in big data platforms
    • Architected and managed 4000+ node MapR Hadoop clusters processing petabytes of data daily
    • Pioneered a first-of-its-kind Hunter utility for cluster-wide duplicate detection, saving millions in storage costs (a sketch of the general idea follows below)
    • Spearheaded a SWAT team handling critical performance optimization challenges across the enterprise
    • Guided 200+ teams in solution design and implementation of distributed systems
    Cloud & Infrastructure Excellence:
    • Engineered hybrid cloud solutions reducing infrastructure costs by 30% while improving performance
    • Implemented automated deployment pipelines reducing setup time from weeks to hours
    • Designed scalable proxy solutions for high-traffic microservices with 40% latency reduction
    • Enhanced Apache Tomcat with HTTP/2 and NIO, improving request processing by 50%
    • Built zero-touch deployment frameworks with automated compliance checks
    Data Engineering & Performance Optimization:
    • Optimized big data workloads, achieving 40% cost reduction across MapReduce, Hive, and Spark jobs
    • Engineered data pipelines processing millions of messages daily with 99.99% reliability
    • Reduced query latency by 65% through innovative caching solutions and architecture optimization
    • Improved message throughput by 50% using heterogeneous disk combinations
    • Designed a Spark event log analyzer reducing bottleneck identification time by 70%
    Security & Compliance:
    • Implemented comprehensive security frameworks including Kerberos and MapR security
    • Designed automated ticket management systems reducing CLDB load by 40%
    • Built custom solutions for enterprise SSO integration and access management
    • Achieved 100% compliance with enterprise security standards
    • Maintained zero security incidents while managing petabyte-scale data
    Technical Expertise:
    • Distributed Systems: Hadoop, MapR, Kubernetes, Docker, Kafka, ZooKeeper
    • Languages: Java, Python, Shell, Rust, Golang
    • Databases: MapRDB, HBase, Redis, PostgreSQL, MySQL, ClickHouse
    • Cloud & Infrastructure: AWS, GCP, Terraform, Ansible, SaltStack
    • Monitoring: Prometheus, Grafana, ELK Stack, custom solutions
    Leadership & Mentorship:
    • Led the Big Data Platform Architecture Review Board guiding enterprise-wide initiatives
    • Mentored 50+ engineers in distributed systems and cloud architecture
    • Established best practices and coding standards adopted company-wide
    • Reduced team onboarding time by 60% through comprehensive documentation
    • Built cross-functional relationships improving project delivery efficiency by 40%
    Innovation Philosophy: I believe in approaching technical challenges with a combination of innovation and pragmatism. My focus is on:
    • Designing scalable solutions that grow with business needs
    • Optimizing performance while maintaining system reliability
    • Automating processes to reduce operational overhead
    • Building secure systems from the ground up
    • Creating maintainable and well-documented architectures
    Future Vision: Passionate about pushing technological boundaries while maintaining practical, business-focused solutions. Committed to:
    • Exploring cutting-edge technologies for enterprise applications
    • Developing innovative solutions for complex technical challenges
    • Contributing to open-source communities and knowledge sharing
    • Mentoring the next generation of technical architects
    • Driving digital transformation through technical excellence
    I aim to leverage my expertise in distributed systems, cloud architecture, and data engineering to drive innovation and technical excellence, while contributing to the organization's growth through scalable, efficient solutions and technical leadership.
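    As a concrete illustration of the duplicate-detection idea referenced above: at its core, such a tool groups files by a content checksum, using file size as a cheap pre-filter. The sketch below is a minimal single-node Python version; it is not the Hunter utility (whose cluster-wide design is not described here), and the "/data" root is a hypothetical placeholder.

```python
# Checksum-based duplicate detection: a single-node illustrative sketch.
# NOT the Hunter utility; names and the "/data" root are hypothetical.
import hashlib
import os
from collections import defaultdict

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large files never load fully into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def find_duplicates(root: str) -> dict[str, list[str]]:
    """Group files under `root` by size, then by digest; groups of 2+ are duplicates."""
    by_size = defaultdict(list)  # size pre-filter: different sizes can't be equal
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                by_size[os.path.getsize(path)].append(path)
            except OSError:
                continue                   # skip unreadable or vanished files
    duplicates = {}
    for paths in by_size.values():
        if len(paths) < 2:
            continue                       # a unique size cannot be a duplicate
        by_digest = defaultdict(list)
        for path in paths:
            by_digest[sha256_of(path)].append(path)
        duplicates.update({d: g for d, g in by_digest.items() if len(g) > 1})
    return duplicates

if __name__ == "__main__":
    for digest, group in find_duplicates("/data").items():
        print(digest[:12], "->", group)
```

    At cluster scale the same group-by-digest shape maps naturally onto MapReduce or Spark (checksum as key, file paths as values), which is presumably what makes the approach suit platforms like those described above.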
    Hadoop
    REST API
    Flask
    J2EE
    Java
    Apache Spark
    PySpark
    MapReduce
    MapR
    Apache HBase
    Apache Hive
    Hortonworks
    Cloudera
    Apache Hadoop
    ETL
    Data Extraction

How hiring on Upwork works

1. Post a job

Tell us what you need. Provide as many details as possible, but don’t worry about getting it perfect.

2. Talent comes to you

Get qualified proposals within 24 hours, and meet the candidates you’re excited about. Hire as soon as you’re ready.

3. Collaborate easily

Use Upwork to chat or video call, share files, and track project progress right from the app.

4. Payment simplified

Receive invoices and make payments through Upwork. Only pay for work you authorize.

Trusted by 5M+ businesses