Best Talent Reach (BTR)
6 jobs found for "databricks"



DATABRICKS EXPERT @ EMPEREN TECHNOLOGIES

Salary: Negotiable / not mentioned · Location: India · Posted: 5 hours ago · Source: emperentech.com · 60 views

Emperen Technologies is seeking elite Databricks talent to join our global team of experts. As an Official Databricks Partner, we specialize in helping enterprises scale their data transformation initiatives faster, smarter, and more cost-efficiently. We are looking for professionals who can hit the ground running on a contract or hourly basis to meet urgent delivery needs for our diverse portfolio of enterprise clients.

Candidates will be responsible for leveraging Azure Databricks, Spark, and PySpark to build robust data pipelines and architectures. The role involves deep involvement in data migration, modernization, and the integration of AI/ML models into existing business analytics frameworks. You will work closely with Data Engineers and Architects to enable outcomes that drive business value. If you possess deep technical capability and a proven track record in data initiatives, we encourage you to apply.

Key Requirements

- Proficiency in Azure Databricks and the Apache Spark ecosystem.
- Strong experience with PySpark for large-scale data processing.
- Solid background in Data Engineering and Data Architecture principles.
- Expertise in Data Migration and Modernization of legacy systems.
- Ability to integrate AI/ML and Analytics into production data pipelines.
- Available to work on a contract and hourly basis for urgent delivery.
- Strong communication skills for collaborating with CTOs and Heads of Data.
- Experience with cloud infrastructure and security best practices.
- Proven ability to deliver high-quality outcomes in fast-paced environments.
- Knowledge of Spark optimization and performance tuning techniques.
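The data-migration pipelines this listing describes typically revolve around upsert ("merge") semantics into a lakehouse table. A minimal sketch of that logic, using plain-Python dictionaries as a stand-in for Delta tables (a real Azure Databricks job would use `MERGE INTO` or the Delta Lake API; all names here are hypothetical):

```python
# Illustrative upsert ("merge") semantics used in lakehouse pipelines.
# Tables are modeled as {key: row} dicts purely for demonstration.

def upsert(target: dict, updates: list[dict], key: str = "id") -> dict:
    """Insert-or-update: rows with matching keys are overwritten,
    new keys are appended. Returns a new table (no in-place mutation)."""
    merged = dict(target)
    for row in updates:
        merged[row[key]] = row
    return merged

target = {1: {"id": 1, "amount": 100}, 2: {"id": 2, "amount": 250}}
updates = [{"id": 2, "amount": 300},   # update existing row
           {"id": 3, "amount": 75}]    # insert new row

result = upsert(target, updates)
print(result[2]["amount"])  # 300
print(sorted(result))       # [1, 2, 3]
```

Returning a new table rather than mutating in place mirrors how Delta's transaction log keeps the previous version readable until the merge commits.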

SENIOR AWS DATA ENGINEER @ CAREERNET

Salary: Negotiable / not mentioned · Location: Bangalore, India · Posted: 6 days ago · Source: careernet.in · 477 views

My client within the pharmaceutical sector is looking to expand its technology hub in Bangalore. We are seeking high-impact Senior AWS Data Engineers ready to build scalable data platforms and implement cutting-edge solutions. This role is crucial for managing the infrastructure that supports data-driven decision-making in the pharmaceutical industry and for ensuring that large-scale data assets remain accessible and reliable.

The successful candidate will work extensively with AWS Glue, Lambda, and Databricks, and will be responsible for data modelling and processing using Python, PySpark, and SQL. This is a 100% work-from-office position in Bangalore; candidates must either be currently serving their notice period or able to join within 30 days. Your expertise will directly contribute to the innovation of data architectures in a fast-paced environment.

Key Requirements

- 6–12 years of professional experience in data engineering
- Expertise in AWS Glue and AWS Lambda for serverless computing
- Proficiency in Databricks for unified analytics and data processing
- Strong programming skills in Python for data manipulation
- Advanced knowledge of PySpark for big data processing tasks
- Hands-on experience with SQL for complex database queries
- Proven track record in Data Modelling and architectural design
- Experience in the pharmaceutical or life sciences sector
- Ability to build and maintain scalable data platforms
- Strong analytical and problem-solving skills in a cloud environment

AZURE TECHNICAL LEAD @ TEKVO

Salary: Negotiable / not mentioned · Location: India (Remote) · Posted: 6 days ago · Source: tekvo.io · 579 views

Tekvo is looking for a seasoned and highly motivated Azure Technical Lead to spearhead our cloud data engineering projects. In this critical role, you will be the driving force behind large-scale Azure analytics initiatives, overseeing the end-to-end development of high-impact data platforms. You will be responsible for defining the solution design, ensuring technical excellence across the delivery lifecycle, and providing strategic guidance to engineering teams. Your expertise will directly contribute to the creation of scalable, robust, and efficient data architectures that empower our clients to make data-driven decisions.

As an Azure Technical Lead, you must demonstrate mastery over core Azure data services such as Data Factory, Synapse, and Databricks. The role demands a blend of deep technical proficiency in SQL and Python along with the leadership skills required to mentor developers and manage complex stakeholder expectations. This remote position offers a unique opportunity for professionals based in India to work on cutting-edge cloud technologies within a collaborative environment. Successful candidates will be expected to maintain high standards of code quality and architectural integrity while driving innovation in the data engineering space.

Key Requirements

- 10–14 years of professional experience in data engineering and cloud platforms.
- Expert-level proficiency in designing and implementing Azure Data Factory pipelines.
- Hands-on experience with Azure Synapse Analytics for enterprise data warehousing.
- Strong technical expertise in Azure Databricks for big data processing.
- Advanced knowledge of SQL for complex data manipulation and performance tuning.
- Proficiency in Python for automating data workflows and engineering tasks.
- Track record of leading and delivering large-scale analytics initiatives on Azure.
- Strong solution design skills with the ability to create scalable data architectures.
- Ability to guide, mentor, and manage high-performing technical teams.
- Experience with cloud security best practices and data governance frameworks (highly preferred).
- Excellent communication skills to interact with stakeholders and translate business needs into technical solutions.
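The Data Factory pipeline work described above commonly uses watermark-based incremental loads: each run copies only rows modified since the last successful run, then advances the watermark. A hedged plain-Python sketch of that decision logic (all names hypothetical; a real pipeline would persist the watermark in a control table and filter in ADF or Databricks):

```python
# Watermark-based incremental load: pick up only rows modified after the
# last processed timestamp, then advance the watermark. ISO-8601 strings
# compare correctly lexicographically, so no datetime parsing is needed.

def incremental_load(rows: list[dict], watermark: str) -> tuple[list[dict], str]:
    """Return (new_rows, new_watermark) for one pipeline run."""
    new_rows = [r for r in rows if r["modified"] > watermark]
    new_watermark = max((r["modified"] for r in new_rows), default=watermark)
    return new_rows, new_watermark

source = [
    {"id": 1, "modified": "2024-05-01T10:00:00"},
    {"id": 2, "modified": "2024-05-02T08:30:00"},
    {"id": 3, "modified": "2024-05-03T12:15:00"},
]
batch, wm = incremental_load(source, "2024-05-01T23:59:59")
print(len(batch))  # 2
print(wm)          # 2024-05-03T12:15:00
```

A second run against the same source with the advanced watermark returns an empty batch, which is what makes the pattern idempotent and safe to retry.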

MLOPS ENGINEER @ TALENTQUELL

Salary: ~₹140,000/month · Location: India (Remote) · Posted: 6 days ago · Source: talentquell.com · 577 views

We are seeking a highly skilled MLOps Engineer to spearhead our cross-cloud machine learning model migration initiatives, specifically moving from GCP to Azure Databricks. The successful candidate will be responsible for building and optimizing production-grade MLOps workflows and CI/CD pipelines while implementing MLflow for meticulous model tracking and lifecycle management. You will develop scalable pipelines using Databricks and PySpark, ensuring seamless data movement and high model reliability. The budget for this position is ₹1.4 LPM (lakh per month).

In this role, you will also focus on performance and cost optimization of machine learning infrastructures. You will work closely with Data Science teams to enable efficient model deployment and monitoring across cloud environments. This position operates on a UK time shift (8 AM – 5 PM) and offers a remote working arrangement within India. Candidates should have a strong foundation in data engineering and pipeline orchestration to succeed in this dynamic environment.

Key Requirements

- Minimum 6–8 years of professional experience in MLOps or a related technical role.
- Strong expertise in Databricks and PySpark for large-scale data processing.
- Hands-on experience with MLflow for tracking experiments and managing model lifecycles.
- Proven proficiency in CI/CD practices and workflows using tools like GitHub Actions.
- Extensive experience with cloud platforms, specifically Microsoft Azure and GCP.
- Demonstrated ability to perform cross-cloud data movement and model migration.
- In-depth knowledge of model deployment strategies and continuous monitoring systems.
- Strong background in data engineering and the orchestration of complex pipelines.
- Excellent communication and collaboration skills for working with Data Science teams.
- Availability to work UK shift timings, 8 AM to 5 PM.

LEAD DATA ENGINEER @ SMARTREFERHUB

Salary: ~₹200,000/month · Location: Bangalore, India · Posted: 6 days ago · Source: smartreferhub.in · 577 views

SmartReferHub is looking for a Lead Data Engineer with 6 to 8 years of experience to join its team in Bangalore. This high-impact hybrid role involves working on advanced Databricks and AWS Lakehouse architecture to lead large-scale data transformations. You will be responsible for driving enterprise-level analytics for global operations and accelerating the company's data strategy. The successful candidate will work on cutting-edge data technologies and lead impactful projects that shape the future of data engineering within the organization. Joining is expected within 30 days. The offered salary ranges from ₹24 to ₹28 LPA, an excellent opportunity for career growth in the data analytics sector.

Key Requirements

- Minimum 6 to 8 years of professional experience in data engineering roles.
- Strong hands-on experience with Databricks and AWS Lakehouse architecture.
- Proven track record of leading large-scale data transformations in an enterprise environment.
- Ability to drive analytics solutions for global operations and cross-functional teams.
- Deep expertise in Big Data technologies and cloud-based data ecosystems.
- Strong proficiency in programming languages such as Python or Scala for data processing.
- Expertise in writing complex SQL queries and optimizing data performance.
- Solid understanding of ETL and ELT pipeline design and maintenance.
- Experience with data modeling, data warehousing, and lakehouse concepts.
- Strong leadership skills with the ability to manage technical projects and mentor team members.

DATA ENGINEERING LEAD @ NIHARIKA SAHU

Salary: Negotiable / not mentioned · Location: Bengaluru, India · Posted: 8 days ago · Source: huemot.com · 629 views

We are seeking a highly experienced Data Engineering Lead to spearhead a critical engagement within our Capital Markets practice. Based in Bengaluru, this role involves supporting a prominent Private Equity firm headquartered in New York. The successful candidate will oversee the development and maintenance of high-impact data pipelines and lakehouse architectures using cutting-edge technologies. You will work closely with stakeholders to translate business requirements into technical specifications, ensuring high data quality and system reliability across the enterprise.

You will be responsible for leading an offshore team of 5 to 7 engineers, ensuring the delivery of production-grade data solutions through mentorship and technical oversight. This position requires deep expertise in Azure Databricks and PySpark, along with a solid understanding of data governance through Unity Catalog. Candidates must possess a strong background in U.S. Capital Markets or Private Equity to effectively meet the complex data needs of our clients. Successful applicants will demonstrate a history of architectural excellence and the ability to navigate complex financial data landscapes.

Key Requirements

- 15+ years of enterprise data engineering experience
- Databricks Certified Data Engineer (mandatory certification)
- 5+ years of hands-on experience specifically on Azure Databricks
- 5+ years of hands-on PySpark experience with production-grade pipelines
- Strong knowledge of Unity Catalog and data governance frameworks
- Proven experience leading offshore teams of 5–7 engineers
- Domain experience in U.S. Capital Markets, Private Equity, or Investment Management
- Expertise in lakehouse architecture and modern data stack design
- Advanced proficiency in SQL for complex data transformations
- Strong understanding of CI/CD practices for automated data pipelines
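The Unity Catalog governance expertise listed above centers on privilege resolution over a three-level `catalog.schema.table` namespace, where a grant at a higher level is inherited below it. A hypothetical plain-Python sketch of that resolution order (the real semantics live inside Databricks; this only illustrates the inheritance idea):

```python
# Grant resolution over a catalog.schema.table hierarchy: a privilege
# granted on a catalog or schema is inherited by every object beneath it.
# Simplified model for illustration only.

def has_select(grants: dict[str, set[str]], principal: str, table: str) -> bool:
    """True if `principal` holds SELECT on the table itself or on any
    ancestor (schema or catalog) in the dotted name."""
    parts = table.split(".")                      # ["main", "finance", "trades"]
    scopes = [".".join(parts[:i]) for i in range(1, len(parts) + 1)]
    return any(principal in grants.get(scope, set()) for scope in scopes)

grants = {"main.finance": {"analysts"},          # schema-level grant
          "main.finance.trades": {"auditors"}}   # table-level grant

print(has_select(grants, "analysts", "main.finance.trades"))  # True (inherited)
print(has_select(grants, "auditors", "main.finance.pnl"))     # False
```

Checking from the catalog down to the table mirrors how centralized governance lets one schema-level grant cover every table an offshore team delivers into it.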