Hire the best PySpark Developers in Hyderabad, IN
Check out PySpark Developers in Hyderabad, IN with the skills you need for your next job.
- $40 hourly
- 5.0/5
- (8 jobs)
• Azure Data Factory, Azure Logic Apps, Databricks, Azure Synapse Analytics, Azure SQL, Power BI, Azure Data Storage, Azure Data Warehouse, Fusion Middleware technologies (Oracle SOA, OSB, and Oracle BPM), and MuleSoft
• Certified Azure Data Engineer (DP-203) and Azure Developer (AZ-204); also certified in MuleSoft and Oracle SOA, OSB, and Oracle BPM
• Extensive and diverse experience in the analysis, design, development, implementation, and testing of data storage and data integration tools and technologies
Oracle WebLogic Server, XSLT, Oracle SOA Suite, Business Process Execution Language, Microsoft Azure SQL Database, Azure IoT Hub, MuleSoft, Business Process Management, Microsoft Azure, Postman, Git, Azure App Service, Azure DevOps, PySpark, SOAP, Databricks Platform
- $50 hourly
- 5.0/5
- (8 jobs)
I am from Hyderabad, India, with 9+ years of experience in IT. I work as a data engineer on the Azure cloud and the Snowflake database. Expert in Snowflake, Azure Data Factory, and Matillion development, as well as SQL development; good with Azure Functions, Databricks, and Synapse.
SQL Server Integration Services, Microsoft SQL Server, Microsoft Azure, Microsoft Azure SQL Database, Amazon S3, Microsoft SQL Server Administration, Transact-SQL, dbt, Databricks Platform, PySpark, Snowflake, Data Migration, Python
- $60 hourly
- 4.7/5
- (17 jobs)
*Data Scientist specialized in Exploratory Data Analysis, Predictive Modeling, Data Mining, and Machine Learning
*Data Engineer specialized in the design and development of databases, ETL pipelines, deployment, and DevOps
*Data Visualization expert in the design and development of dashboards, interactive visualizations, reports, and graphs
Key Skills:
*Advanced Statistics & Algorithms, Regression, Classification
*Dimension Reduction, Neural Networks & Deep Learning
*Text Mining, Natural Language Processing
Programming:
*Python, R, PySpark, Spark Scala, MS SQL, Shell Scripting
Data Engineering:
*Big Data, Apache Spark, Hadoop, Hive
Data Visualization:
*Python Dash, R Shiny, ggplot, Plotly, Matplotlib, Bokeh
*Tableau, Microsoft Power BI
Project Management:
*Agile, Git, Bitbucket, JIRA, Confluence
Amazon Web Services, Google Cloud Platform, Artificial Intelligence, Natural Language Processing, Data Science, Data Visualization, Predictive Analytics, Machine Learning, PySpark, R, Data Mining, Scala, Python, Database Design
- $50 hourly
- 5.0/5
- (3 jobs)
With around 13 years of IT experience on data-driven applications, I excel in building robust data foundations for both structured and unstructured data from diverse sources. Additionally, I possess expertise in efficiently migrating data lakes and pipelines from on-premises to cloud environments. My skills include designing and developing scalable ETL/ELT pipelines using technologies such as Spark, Kafka, PySpark, Hadoop, Hive, dbt, and Python, and leveraging cloud services like AWS, Snowflake, dbt Cloud, Airbyte, BigQuery, and Metabase. I also have a good understanding of containerization frameworks like Kubernetes and Docker.
Apache Airflow, Apache Hive, Databricks Platform, Apache Spark, Python, Apache Hadoop, PySpark, Snowflake, Amazon S3, dbt, Database, Oracle PLSQL, Unix Shell
- $39 hourly
- 0.0/5
- (0 jobs)
I have 3 years of experience. Expert in Python and SQL, with a strong grip on big data technologies: ADF, Databricks, PySpark, Kafka, data warehousing, and many more. I build frameworks and ETL pipelines to consume data from all kinds of sources.
RESTful API, Algorithm Development, Big Data, Microsoft Azure, Google Cloud Platform, BigQuery, PySpark, Databricks Platform, Apache Spark, Apache Kafka, SQL, Python, Data Engineering
- $20 hourly
- 5.0/5
- (1 job)
With a background steeped in Azure data engineering, I am a dedicated professional known for optimizing data processes and driving innovation in the field. As an Azure Data Engineer, I developed a streamlined framework for migrating from Teradata SQL to PySpark, achieving marked improvements in data transformation efficiency.
I manage CI/CD pipelines, orchestrating deployments for critical components such as Synapse SQL, Azure Data Factory, and Databricks. My proficiency extends to a broad suite of Azure data services, including SQL Database, Data Lake Storage, Databricks, Data Factory, and Synapse Analytics, which I use to craft data solutions that help organizations harness the full potential of their data.
Within ETL processes, I efficiently extract data from diverse sources, apply sophisticated transformations with Azure Databricks, and load the refined data into Azure SQL Database and Data Lake Storage. My approach places a strong emphasis on data quality, incorporating key performance indicator (KPI) validations and end-to-end pipeline monitoring to keep data accurate, reliable, and trustworthy throughout its journey.
One standout achievement is the successful migration of a project to the Azure platform, where I optimized the ETL process and improved execution time by 365%. I also coordinate tasks using Azure DevOps, aligning development efforts with the overarching project objectives.
In short, my career as an Azure Data Engineer is characterized by a dedication to efficiency, data integrity, and innovative data management, and I continue to push the boundaries of what is possible within the Azure ecosystem.
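As an aside for readers unfamiliar with this kind of migration: a very common Teradata-to-PySpark rewrite is replacing the QUALIFY ROW_NUMBER() idiom with a window function. The sketch below is purely illustrative — the table and column names are invented, not taken from this freelancer's framework.

```python
# Hypothetical sketch: translating Teradata's QUALIFY ROW_NUMBER() idiom
# into a PySpark window function. Table and column names are invented.
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("teradata-to-pyspark-sketch").getOrCreate()

# Stand-in for a table migrated from Teradata.
orders = spark.createDataFrame(
    [(1, "2024-01-01", 100.0), (1, "2024-02-01", 120.0), (2, "2024-01-15", 80.0)],
    ["customer_id", "order_date", "amount"],
)

# Teradata: SELECT ... FROM orders
#           QUALIFY ROW_NUMBER() OVER (PARTITION BY customer_id
#                                      ORDER BY order_date DESC) = 1
w = Window.partitionBy("customer_id").orderBy(F.col("order_date").desc())
latest_per_customer = (
    orders.withColumn("rn", F.row_number().over(w))
          .filter(F.col("rn") == 1)
          .drop("rn")
)
latest_per_customer.show()
```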
- 5.0/5
- (2 jobs)
• IT professional with around 6.1 years of experience in software development and maintenance of big data projects
• Possess in-depth working knowledge across all areas of big data development
• Worked extensively with technologies such as Apache Spark, Databricks, Hive, Sqoop, MapReduce, and Apache Kafka
Sqoop, Hive, Apache Spark, Apache Kafka, SQL, Python, PySpark
- $7 hourly
- 4.7/5
- (2 jobs)
I am a highly skilled and experienced Data Scientist with a diverse background in mathematics, statistics, and computer science. My expertise extends to Natural Language Processing (NLP), utilizing a wide range of libraries and frameworks to analyze and model complex data. I hold a Master's in Business Administration, further enhancing my ability to bridge the technical and business aspects of data science.
I have successfully applied machine learning and deep learning techniques to solve complex business problems, fine-tuned models for optimal performance and customized solutions, enhanced data-processing efficiency through PySpark implementations, and developed and integrated APIs to facilitate seamless communication between different systems.
Technical Skills:
Languages: Python, SQL, Java
Libraries/Frameworks: Numpy, Pandas, Matplotlib, Seaborn, Scikit-learn, NLTK, Spacy, Textblob, Gensim, Tensorflow, Keras, Symspell, Hugging Face, Flask, PySpark, Rapidfuzz, Allen NLP
Models: Decision Tree, SVM, Random Forest, Gradient Boosting, CNN, RNN, LLM, LSTM, BERT, XLM, Albert, Zero-shot learning, Few-shot learning, Transformers
APIs: Flask for building and integrating APIs
Data Analysis: Statistical Modeling, Descriptive and Inferential Statistics
Other Tools: Postman for API testing
I'm committed to staying updated with the latest advancements in the field of AI through continuous learning. With my expertise, I can develop and implement AI models for various applications, analyze complex data sets, create visualizations, design efficient AI architectures, and optimize models for performance and accuracy. I'm excited to bring my skills and passion for AI to your projects, delivering high-quality results and driving innovation.
Microsoft Azure, AI Image Generator, YOLO, TensorFlow, Snowflake, PySpark, Generative AI, Computer Vision, SQL, Deep Learning, Artificial Intelligence, Data Science, Natural Language Processing, Machine Learning, Python
- $20 hourly
- 5.0/5
- (1 job)
Experienced Data Engineer proficient in SQL, Python, and PySpark, and in developing web apps with Streamlit. Skilled in designing and implementing ETL processes using Azure Data Factory and Databricks, ensuring seamless data integration and transformation. Expertise in job scheduling with Control-M for efficient workflow orchestration. Passionate about leveraging technology to drive business insights and optimize data operations. Let's collaborate and unlock the full potential of your data!
Streamlit, HTML, PySpark, MySQL, SQL, Python
- $11 hourly
- 4.6/5
- (0 jobs)
Dynamic and results-driven Full Stack Data Scientist with a proven track record in predictive modeling, data pipeline optimization, and large-scale data analysis.
- Expert in leveraging advanced analytics to fuel innovation and boost efficiency in data-driven environments.
- Passionate about turning complex data into actionable insights and strategic solutions.
- Open to freelance opportunities as a Data Scientist or Machine Learning Engineer.
PySpark, JupyterLab, Git, R, MySQL, PostgreSQL, Docker, Python Scikit-Learn, TensorFlow, PyTorch, Python
- $50 hourly
- 0.0/5
- (0 jobs)
I am a highly skilled Generative AI Engineer and Solution Architect with 14+ years of experience in building AI and machine learning solutions across diverse domains. My expertise lies in developing end-to-end AI applications with a strong focus on Retrieval-Augmented Generation (RAG), Azure OpenAI, model fine-tuning, and advanced NLP technologies.
Key Areas of Expertise:
Generative AI Solutions: Designed RAG-based Q&A systems, knowledge-retrieval pipelines, and AI-powered semantic search to enhance information accuracy and retrieval speed.
Azure AI Ecosystem: Architected scalable platforms leveraging Azure OpenAI, Azure ML, Kubernetes (AKS), and monitoring tools to optimize performance and scalability.
Model Fine-Tuning & Optimization: Fine-tuned large models like LLaMA 2-7B/13B, GPT-3.5/4, and multimodal models using LoRA, QLoRA, and quantization techniques to improve domain-specific performance.
Deployment & Automation: Containerized models using Docker and deployed them on Kubernetes, integrating CI/CD pipelines for seamless testing and deployment with tools like GitHub Actions.
AI-Powered Chatbots: Built chatbots using FastAPI and LangChain, enhancing user engagement and delivering real-time insights.
CI/CD, Kubernetes, Docker, PySpark, SQL, Python, Multimodal Large Language Model, Machine Learning Algorithm, AI Consulting, AI Model Development, AI Agent Development, LLM Prompt Engineering, Generative AI, Machine Learning, Artificial Intelligence
- $35 hourly
- 0.0/5
- (0 jobs)
Professional Summary:
* With 12+ years of expertise in the analysis, design, development, testing, and implementation of data migration, ETL, data warehousing, data modelling, and Data Vault 2.0, I bring a wealth of experience to the table.
* I have deep experience with Snowflake services, including stages, Time Travel, Fail-safe, cloning, caching, clustering, data sharing, warehouses, materialized views, masking policies, and role-based security and access control.
* One of my notable accomplishments is creating a metadata-driven ingestion framework in Snowflake, enabling seamless data movement across zones while incorporating essential transformations. I have also leveraged Snowflake streams and tasks to efficiently capture change data capture (CDC) data (a minimal sketch of this pattern follows below).
* My proficiency extends to Azure Databricks, where I have utilized PySpark and Spark SQL for data processing.
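For context, the stream-and-task CDC pattern mentioned above can be wired up from Python roughly as follows. This is a hedged sketch using the snowflake-connector-python client; the credentials, warehouse, and table names are all hypothetical, and the DDL follows standard Snowflake stream/task syntax.

```python
# Hedged sketch of Snowflake streams + tasks for CDC, driven from Python.
# All connection details and object names below are hypothetical.
import snowflake.connector  # pip install snowflake-connector-python

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",  # hypothetical credentials
    warehouse="ETL_WH", database="ANALYTICS", schema="RAW",
)
cur = conn.cursor()

# A stream records inserts/updates/deletes on the source table since last read.
cur.execute("CREATE OR REPLACE STREAM orders_stream ON TABLE raw_orders")

# A task drains the stream on a schedule, but only when there is new data.
cur.execute("""
    CREATE OR REPLACE TASK load_orders
      WAREHOUSE = ETL_WH
      SCHEDULE = '5 MINUTE'
      WHEN SYSTEM$STREAM_HAS_DATA('ORDERS_STREAM')
    AS
      INSERT INTO curated_orders
      SELECT order_id, amount, METADATA$ACTION
      FROM orders_stream
""")
cur.execute("ALTER TASK load_orders RESUME")  # tasks are created suspended
conn.close()
```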
ETL Pipeline, Artificial Intelligence, Machine Learning Model, ETL, SQL, Python, PySpark, Databricks Platform, Microsoft Azure, Data Extraction
- $50 hourly
- 0.0/5
- (0 jobs)
Professional Summary: Experienced Data Engineer with 3+ years of expertise in designing and implementing robust data pipelines using big data technologies such as PySpark, Spark, Scala, Hadoop, Hive, and Snowflake. Proficient in SQL and cloud platforms, with 1+ year of experience in AWS Glue, Redshift, and EMR. Passionate about solving complex problems through data-driven insights and fostering business success through innovative solutions.
ETL Pipeline, Data Extraction, Data Analysis, AWS Glue, Unix Shell, HDFS, Big Data, Hive, Scala, PySpark, SQL
- $60 hourly
- 0.0/5
- (0 jobs)
PROFESSIONAL SUMMARY
* Results-driven Senior Data Engineer with 7 years of experience in Big Data, Data Engineering, and Cloud Technologies, specializing in Spark, Python, Scala, SQL, and AWS.
* Strong expertise in designing, developing, and optimizing ETL/ELT pipelines, handling structured and semi-structured data across cloud-based and on-premises environments.
* Hands-on experience in migrating large-scale data ecosystems from legacy databases to modern AWS, Snowflake, and Databricks platforms, improving scalability, efficiency, and cost-effectiveness.
* Proficient in building real-time data processing applications using Apache Kafka, Spark Streaming, and Kinesis, enabling seamless data ingestion and transformation for analytical workloads.
* Expert in SCD Type-1 and Type-2 implementations, data modeling, and performance tuning for efficient query execution and optimized storage solutions (see the SCD Type-2 sketch below).
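For readers unfamiliar with the term, an SCD Type-2 load closes out the current version of a changed dimension row and appends a new current version. Below is a minimal PySpark sketch of that pattern; the schema, sample rows, and the '9999-12-31' sentinel end date are illustrative assumptions, not this freelancer's code.

```python
# Minimal PySpark sketch of an SCD Type-2 upsert. Schema and data are invented.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("scd2-sketch").getOrCreate()

# Current dimension: one open (is_current = true) row per business key.
dim = spark.createDataFrame(
    [(1, "Alice", "Hyderabad", "2023-01-01", "9999-12-31", True)],
    ["id", "name", "city", "valid_from", "valid_to", "is_current"],
)
# Incoming batch: Alice has moved, effective 2024-06-01.
incoming = spark.createDataFrame(
    [(1, "Alice", "Pune", "2024-06-01")],
    ["id", "name", "city", "valid_from"],
)

# Rows whose tracked attribute changed.
changed = (
    dim.filter("is_current").alias("d")
       .join(incoming.alias("i"), F.col("d.id") == F.col("i.id"))
       .filter(F.col("d.city") != F.col("i.city"))
)

# Close the old version at the new row's effective date.
closed = changed.select(
    F.col("d.id").alias("id"), F.col("d.name").alias("name"),
    F.col("d.city").alias("city"), F.col("d.valid_from").alias("valid_from"),
    F.col("i.valid_from").alias("valid_to"), F.lit(False).alias("is_current"),
)
# Append the new current version.
new_rows = changed.select(
    F.col("i.id").alias("id"), F.col("i.name").alias("name"),
    F.col("i.city").alias("city"), F.col("i.valid_from").alias("valid_from"),
    F.lit("9999-12-31").alias("valid_to"), F.lit(True).alias("is_current"),
)
# Keep every dimension row that was not superseded.
untouched = dim.join(changed.select(F.col("d.id").alias("id")), "id", "left_anti")

result = untouched.unionByName(closed).unionByName(new_rows)
result.orderBy("id", "valid_from").show()
```

In production this logic is usually expressed as a Delta Lake MERGE rather than unions, but the row-versioning idea is the same.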
Data Migration, Amazon Athena, Amazon S3, AWS Glue, Amazon Redshift, Databricks Platform, Snowflake, Apache Hadoop, PySpark, Big Data, Data Engineering, ETL Pipeline, ETL, Data Extraction
- $100 hourly
- 0.0/5
- (0 jobs)
AREAS OF EXPERTISE
* I have 5.7 years of working experience in Data Engineering on event-driven architecture for data acquisition from internal and external data sources.
* Create, develop, and maintain optimal data pipeline architecture.
* Strong development experience in Python and its common libraries.
* Design, develop, and maintain data solutions for data generation, collection, and processing using PySpark.
* Create data pipelines, ensure data quality, and implement ETL processes for large-scale datasets.
* Strong analytical experience with the MySQL database, writing complex queries.
* Applied PySpark SQL to run complex queries and aggregate large datasets using SQL-based transformations for data analysis and reporting (a minimal sketch follows below).
* Strong experience with source-control systems such as Git and Bitbucket, and with Jenkins CI/CD automation tools.
* Experience with the AWS cloud for data integration with S3, Lambda, CloudWatch, ECS, EventBridge, and Step Functions.
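The PySpark SQL workflow mentioned above typically means registering a DataFrame as a temporary view and aggregating it with plain SQL. A minimal sketch, with hypothetical sales data and column names:

```python
# Minimal sketch of SQL-based aggregation in PySpark over a temp view.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pyspark-sql-sketch").getOrCreate()

# Hypothetical sales data standing in for a large production dataset.
sales = spark.createDataFrame(
    [("2024-01", "south", 120.0), ("2024-01", "north", 95.0), ("2024-02", "south", 150.0)],
    ["month", "region", "revenue"],
)
sales.createOrReplaceTempView("sales")

# Run a plain SQL aggregation over the registered view.
report = spark.sql("""
    SELECT month, region, SUM(revenue) AS total_revenue
    FROM sales
    GROUP BY month, region
    ORDER BY month, region
""")
report.show()
```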
SQL, Amazon S3, AWS Lambda, PySpark, Python, Data Extraction, ETL Pipeline, ETL
- $35 hourly
- 0.0/5
- (0 jobs)
Specializing in full-stack web development, cloud infrastructure, and DevOps, I focus on AWS (EC2, S3, RDS, Lambda, SNS, SQS, SES, CloudFront) and GCP for designing and deploying scalable backend systems. I have expertise in Docker for containerization and Kubernetes for microservices orchestration. I develop dynamic frontends using React and TypeScript while managing data with SQLAlchemy, Snowflake, PostgreSQL, and BigQuery.
My experience spans ETL pipelines, data engineering, and distributed computing with Apache Airflow, Spark, and Kafka for real-time data processing. I design scalable data architectures, optimize ETL workflows, and implement data lake and warehouse solutions. I architect cloud-native solutions using AWS Glue, Data Pipeline, and GCP’s Pub/Sub, Cloud Functions, and Dataflow.
I automate infrastructure with Terraform and manage DevOps workflows, CI/CD pipelines, and infrastructure-as-code using Jenkins, AWS CodePipeline, and GitHub Actions. Additionally, I streamline monitoring and logging with Prometheus, Grafana, and the ELK stack. I work with Git, GitHub, and Linux (Ubuntu) to enhance development efficiency. With expertise in Selenium and PyTest for testing and event-driven architectures, I ensure scalability, reliability, and performance in cloud-based environments.
Cloud Run, Amazon EC2, Amazon S3, Amazon Redshift, Databricks Platform, Snowflake, Apache Spark, Apache Kafka, PySpark, Apache Airflow, Python, Data Analysis, ETL, ETL Pipeline, Data Extraction
- $50 hourly
- 0.0/5
- (0 jobs)
Enthusiastic and results-oriented Databricks Developer with a strong foundation in data engineering and Apache Spark. I am eager to apply my skills to real-world projects and contribute to your data initiatives. My areas of expertise include:
* Spark Development: Proficient in PySpark, Scala, and Spark SQL for efficient data processing.
* Cloud Integration: Seamlessly integrating Databricks with cloud platforms (AWS).
* Azure Databricks administration and deployment.
* Building and optimizing data pipelines using Databricks.
* Working with Databricks notebooks and clusters.
* Quick learner, dedicated to providing excellent results.
I am a certified AWS Cloud Practitioner and Databricks Apache Spark Associate Developer, demonstrating my commitment to cloud and data engineering best practices. I am committed to continuous learning and delivering high-quality work. Let's collaborate to transform your data into valuable insights.
Java, Python, SQL, PySpark, Databricks Platform
- $55 hourly
- 0.0/5
- (0 jobs)
PROFILE
* 5.7+ years of experience in data science and engineering, creating and managing robust data pipelines that drive business objectives.
* Designed and implemented data pipelines to ingest 1 million records per day from 15+ data sources using PySpark.
* Redesigned the architecture and reduced the overall ingestion time by 60%.
* Implemented the Databricks multi-hop architecture in the Delta Lakehouse design.
* Certified and experienced in the Databricks Lakehouse platform, using Apache Spark and Spark SQL to build end-to-end ETL pipelines.
* Built and managed customers' ETL pipelines and cloud platform administration on Databricks and AWS.
* Designed various AWS solutions based on clients' requirements.
* Part of the team designing and implementing an end-to-end solution, including architecture design, data ingestion, transformations, ML/MLOps, and predictive analytics.
Microsoft Power BI, Data Engineering, Data Analytics, Architectural Design, MLOps, SQL, Python, PySpark, Databricks Platform, Machine Learning, ETL
- $35 hourly
- 0.0/5
- (0 jobs)
NarsaReddy — Data Engineer | Big Data & Cloud Solutions | Data Engineering Specialist
With over 7 years of hands-on experience in data engineering, I specialize in designing and implementing robust data solutions that leverage both on-premises big data technologies and cloud platforms like Azure and AWS. My expertise spans data warehousing, ETL processes, and large-scale data processing, utilizing tools such as Snowflake, SSIS, Informatica, and cutting-edge cloud technologies. I have a proven track record of developing complex, performance-optimized queries that reduce execution time, ensuring quicker application delivery and efficient data processing pipelines.
Throughout my career, I’ve built scalable, high-performance solutions using technologies like Azure Data Factory, Airflow, Azure Databricks, StreamSets, and Azure Functions, delivering impactful results for clients in industries such as pharmaceuticals, education, and logistics.
Certifications & Skills:
Microsoft Certifications: AZ-900, DP-900, DP-203 (Azure Data Engineer Associate)
Snowflake Data Warehouse Certification
Core Technical Skills:
Big Data Platforms: Apache Spark, Databricks, Hadoop
Cloud Technologies: AWS (Redshift, EMR, Lambda, S3), Azure (Synapse Analytics, Data Factory)
ETL & Data Warehousing: Snowflake, SSIS, Informatica, Redshift
Programming Languages: SQL, Python, PySpark
Data Formats: JSON, CSV, Parquet, relational databases
In my current role, I focus on ETL and data processing using Databricks, specifically working with Python and PySpark. I am responsible for designing and optimizing data pipelines that manage student performance data for a data warehousing application.
Current Project Overview:
Data Extraction: Extracting data from both on-premises and cloud sources (primarily in JSON format) and loading it into a cloud-based data lake.
Kafka Integration: Streaming data into Kafka topics via a Python application.
Transformation: Flattening complex, deeply nested JSON files into a relational format while applying business rules, data aggregations, and transformations. This process is crucial for handling incremental data loads and slowly changing dimensions (a minimal flattening sketch follows below).
Data Warehousing: Loading the transformed data into the Databricks Lakehouse and Redshift for efficient warehousing.
Reporting: Enabling the analytics team to generate reports from the data warehouse using Power BI.
Orchestration: Managing the scheduling and orchestration of ETL pipelines using Apache Airflow.
I pride myself on being pragmatic, dependable, and results-oriented, with a continuous drive to learn and innovate. I am always seeking new ways to optimize processes and contribute to the success of my team and organization.
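The JSON-flattening step described above usually comes down to exploding nested arrays and promoting struct fields to flat columns. A minimal PySpark sketch, with a hypothetical student-score record standing in for the real feed:

```python
# Minimal sketch: flattening nested JSON in PySpark. The schema and field
# names are hypothetical stand-ins for the student-performance feed above.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("json-flatten-sketch").getOrCreate()

# One nested JSON record, parsed the same way a file from the lake would be.
raw = spark.read.json(spark.sparkContext.parallelize(["""
  {"student_id": 7, "name": "Ravi",
   "scores": [{"subject": "math", "score": 88}, {"subject": "science", "score": 92}]}
"""]))

# explode() turns each element of the nested array into its own row,
# and dotted paths pull struct fields up into flat relational columns.
flat = (raw
        .withColumn("s", F.explode("scores"))
        .select("student_id", "name",
                F.col("s.subject").alias("subject"),
                F.col("s.score").alias("score")))
flat.show()
```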
Data Extraction, Artificial Intelligence, Database, Core Java, Big Data, AWS Application, Microsoft Azure, PySpark, Python, Machine Learning, ETL
- $20 hourly
- 5.0/5
- (1 job)
• 13+ years of experience in software application development.
• 3.5 years of experience as a Data Scientist, implementing machine learning, deep learning, neural networks, and NLP using Python, TensorFlow, PyTorch, and Keras.
• 3 years of experience implementing Hadoop ecosystem components like MapReduce, HDFS, Spark, Hive, Sqoop, Pig, Kafka, and Flume.
• Good exposure to MapReduce programming using Java, Pig Latin, and Hive.
• Experience in developing customized UDFs in Java to extend Hive and Pig Latin functionality.
• Good understanding of HDFS design, daemons, and HDFS high availability (HA).
• Expertise in data transformation and analysis using Pig, Hive, and Sqoop.
• Experience in importing and exporting data using Sqoop between HDFS/Hive and relational database systems.
• Worked on NoSQL databases like HBase.
• Experience in Spark components like Spark Core and Spark SQL.
• Involved in capturing BODS generated from various sources to HDFS for further processing through Flume.
• Familiar with Hadoop architecture, data modeling, data mining, machine learning, and advanced data processing.
• Experience in implementing open-source frameworks like Java and Struts.
• Being an enthusiastic individual, I am always open to new ideas and strive to better myself.
Apache Hive, Apache Hadoop, Apache Flume, PySpark, Sqoop, Apache HBase, Python, Keras, Apache Spark, TensorFlow, Scala, PyTorch, Java
- $20 hourly
- 0.0/5
- (1 job)
As a data engineer, I possess a strong technical skillset that includes expertise in Python, SQL, and data management technologies such as AWS Snowflake and MongoDB. I specialize in designing and building robust data pipelines that can efficiently handle large volumes of data and provide valuable insights to business users. I have extensive experience in managing databases and data warehousing, as well as optimizing data workflows for maximum efficiency and scalability. My work helps organizations leverage their data assets to make informed, data-driven decisions that drive success.
PySpark, Metabase, Apache Superset, AWS Cloud9, Data Modeling, AWS Lambda, Snowflake, Amazon Redshift, SQL, MongoDB, Python
- $20 hourly
- 0.0/5
- (0 jobs)
• Highly skilled and dedicated IT professional with over 6 years of extensive experience in development, specializing in MS SQL Server and Azure Data Factory.
• Proficient in designing and developing robust solutions using MS SQL Server 2016 for diverse applications ranging from OLTP to data warehousing systems, across industries including healthcare, banking, and insurance.
• Strong expertise in Azure Data Factory, SQL Server, Azure Analysis Services, Azure storage (SQL, Blob), and Power BI. Known for delivering high-quality solutions that meet and exceed client expectations while adhering to best practices and industry standards.
• Highly adept in all phases of the software development life cycle (SDLC), from requirement analysis to deployment and maintenance, ensuring the delivery of scalable and reliable solutions.
• Proven track record of successfully implementing complex ETL processes and data integration pipelines using Azure Data Factory, facilitating seamless data movement and transformation across heterogeneous environments.
• Skilled in designing and implementing data models and dimensional schemas for data warehousing solutions, enabling efficient reporting and analytics capabilities.
• Extensive experience working with stakeholders to gather requirements, define project scope, and provide technical guidance and support throughout the development lifecycle.
• Strong communication and interpersonal skills, with a collaborative approach to problem-solving and a proven ability to work effectively in both independent and team-based environments.
Databricks Platform, MySQL, Microsoft Excel, Microsoft Azure, Microsoft Power BI Data Visualization, Microsoft Power BI Development, Microsoft Power BI, pandas, PySpark, Microsoft SQL SSAS, Microsoft Azure SQL Database, Microsoft SQL Server
- $10 hourly
- 5.0/5
- (3 jobs)
In my 3+ years as an Azure Data Engineer, I have delivered transformative solutions across the Azure stack, from Synapse Analytics to Databricks, Data Factory, and Power Automate, orchestrating data migrations and crafting workflows that improve efficiency. Whether bridging MySQL, SQL Server, and Salesforce, or optimizing batch and streaming processes with PySpark and Azure Data Factory, I thrive on turning complexity into clarity.
My impact doesn't end with data movement. I advocate for automation, building unit testing into Databricks workflows and championing DevOps practices that ensure resilience and agility. I also work across the Power Platform, combining Power Apps and Power Automate so teams can innovate quickly, and I write KQL queries and build dashboards that surface actionable insights. With a relentless commitment to transparency and a passion for cost-effective solutions, I continue to reshape the Azure landscape, one solution at a time.
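For a flavor of the streaming side of such work, here is a minimal Spark Structured Streaming sketch. It uses Spark's built-in rate test source so it is self-contained and runs anywhere; a production job like those described would typically read from Kafka or ADF-landed storage instead.

```python
# Minimal Structured Streaming sketch using the built-in "rate" test source.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("streaming-sketch").getOrCreate()

# The rate source emits (timestamp, value) rows at a fixed pace — handy for demos.
stream = spark.readStream.format("rate").option("rowsPerSecond", 5).load()

# Windowed count over the event-time column, with a watermark for late data.
counts = (stream
          .withWatermark("timestamp", "10 seconds")
          .groupBy(F.window("timestamp", "5 seconds"))
          .count())

query = (counts.writeStream
         .outputMode("update")
         .format("console")
         .option("truncate", "false")
         .start())
query.awaitTermination(30)  # run briefly for the sketch
query.stop()
```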
Apache Spark, Azure Cosmos DB, Apache Kafka, Scala, Microsoft Azure, Data Engineering, pytest, Azure DevOps, Data Lake, Apache Hadoop, Microsoft Azure SQL Database, PySpark, Python, Databricks Platform, SQL
- $15 hourly
- 5.0/5
- (1 job)
I am a proactive and achievement-oriented professional with nearly 4 years of experience in the IT industry, specializing in data warehousing, data modelling, Microsoft Azure, Python, Databricks, and business intelligence tools, with skills in designing and implementing systems to collect and ingest data from various sources, including databases, APIs, files, and web scraping, leveraging cloud architectures. I am a forward-thinking person with experience across manufacturing, private equity, health care, consumer durables, satellite broadcasting, and SaaS companies, and have successfully provided comprehensive data management solutions using platforms like Microsoft Azure, Snowflake, and Databricks.
Apache Spark, Amazon Web Services, Databricks Platform, PySpark, Microsoft Azure, API, Snowflake, Looker Studio, Google Analytics, Python, SQL, Data Analysis, Data Interpretation, Tableau, Microsoft Power BI
- $50 hourly
- 0.0/5
- (0 jobs)
Hi, I'm Anil, a Data Scientist and Software Developer with over 5 years of experience. Throughout my career, I've tackled projects in various sectors, applying my skills to businesses, freelance work, and personal endeavors. While I specialize in process automation and data science, I also have expertise in front-end technologies and database management. I'm a strong researcher and highly self-sufficient: if I encounter something unfamiliar, I'm confident in my ability to independently figure it out, quickly finding effective solutions. Python is my preferred language, and I truly love the work I do.
Here’s a snapshot of my skills:
Languages: Python, SQL
Frameworks/Libraries: Flask, Django, Pandas, PySpark, Selenium, Playwright, Requests, BeautifulSoup, Scrapy, Pyautogui, Pywinauto, OCR character recognition, data scraping
Front-end: CSS, HTML
Other tools: Git versioning, Docker, Google Spreadsheets, Excel
Experience: Backend database analysis, machine learning APIs, LLMs, retail companies' online product management systems, and Amazon seller logistics
I'm passionate about learning, problem-solving, and always pushing to improve my skills.
LinkedIn URL: linkeddin.com/in/anil-kumar-yadav-839a97100
Check out my work at my YT channel: @zestbotzsolutions
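The Requests-plus-BeautifulSoup scraping pattern listed in the skills above looks roughly like this minimal sketch; the URL and CSS selector are hypothetical placeholders.

```python
# Minimal scraping sketch with requests + BeautifulSoup.
# The URL and the ".product-title" selector are illustrative only.
import requests
from bs4 import BeautifulSoup

resp = requests.get("https://example.com/products", timeout=10)
resp.raise_for_status()

soup = BeautifulSoup(resp.text, "html.parser")
# Collect the text of every product-title node on the page.
titles = [node.get_text(strip=True) for node in soup.select(".product-title")]
print(titles)
```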
Matplotlib, Microsoft Power BI, Selenium, Django, Flask, Web Scraping, PySpark, Databricks Platform, pandas, SQL, Robotic Process Automation, Automation, GitHub, PostgreSQL, Python
- $35 hourly
- 0.0/5
- (0 jobs)
I'm a developer with hands-on experience in Hive, PySpark, and SQL. I can apply these skills to provide good solutions and help accordingly.
Hive, PySpark, Big Data, Microsoft Excel, AWS Lambda, SQL, Python
- $8 hourly
- 0.0/5
- (0 jobs)
I have 1+ years of experience as a Java Backend Developer. My skills are Java, Spring Boot, Microservices, Hibernate, MySQL, HTML, CSS, and JavaScript.
PySpark, Python, PostgreSQL, AWS IoT Core
How hiring on Upwork works
1. Post a job
Tell us what you need. Provide as many details as possible, but don’t worry about getting it perfect.
2. Talent comes to you
Get qualified proposals within 24 hours, and meet the candidates you’re excited about. Hire as soon as you’re ready.
3. Collaborate easily
Use Upwork to chat or video call, share files, and track project progress right from the app.
4. Payment simplified
Receive invoices and make payments through Upwork. Only pay for work you authorize.
How do I hire a PySpark Developer near Hyderabad on Upwork?
You can hire a PySpark Developer near Hyderabad on Upwork in four simple steps:
- Create a job post tailored to your PySpark Developer project scope. We’ll walk you through the process step by step.
- Browse top PySpark Developer talent on Upwork and invite them to your project.
- Once the proposals start flowing in, create a shortlist of top PySpark Developer profiles and interview them.
- Hire the right PySpark Developer for your project from Upwork, the world’s largest work marketplace.
At Upwork, we believe talent staffing should be easy.
How much does it cost to hire a PySpark Developer?
Rates charged by PySpark Developers on Upwork can vary with a number of factors, including experience, location, and market conditions. See hourly rates for in-demand skills on Upwork.
Why hire a PySpark Developer near Hyderabad on Upwork?
As the world’s work marketplace, we connect highly skilled freelance PySpark Developers with businesses and help them build trusted, long-term relationships so they can achieve more together. Let us help you build the dream PySpark Developer team you need to succeed.
Can I hire a PySpark Developer near Hyderabad within 24 hours on Upwork?
Depending on availability and the quality of your job post, it’s entirely possible to sign up for Upwork and receive PySpark Developer proposals within 24 hours of posting a job description.