Negotiable or Not Mentioned
India
20 days ago
se-mentor.com
1359 Views
We are looking for an experienced Databricks Engineer to enhance our data engineering capabilities at SE Mentor Solutions. In this role, you will leverage Azure services, PySpark, and Databricks to build high-performance data frameworks. Your contributions will help drive our data-driven decision-making processes by ensuring the reliability and scalability of our cloud-based data systems. This position offers the flexibility to work from either Cochin or Bengaluru.
Candidates should possess at least 5 years of experience in the data engineering field with a focus on Azure technologies. You will be responsible for designing and implementing efficient data processing logic and managing complex data architectures within the Azure cloud environment. As part of our team, you will participate in the full lifecycle of data projects, from requirements gathering to deployment and monitoring. Salary information was not provided in the original job post.
Key Requirements
Over 5 years of experience in data engineering or related technical roles.
Strong expertise in SQL and PySpark for data processing tasks.
Hands-on experience with Azure Databricks and Azure Data Factory.
Familiarity with Azure Cloud infrastructure and related services.
Experience in building and optimizing large-scale data architectures.
Ability to design and implement automated data pipelines.
Understanding of data warehousing concepts and technologies.
Strong troubleshooting skills for cloud-based data environments.
Effective teamwork and communication skills.
Commitment to maintaining high standards of data security and integrity.
Negotiable or Not Mentioned
India
20 days ago
pyxidiatech.com
1233 Views
We are seeking a highly skilled Senior Data Analyst to join our dynamic team and play a pivotal role in designing, building, and scaling innovative data solutions across our products and client implementations. The successful candidate will be responsible for developing scalable data pipelines, optimizing ETL workflows, and ensuring the highest standards of data quality and reliability. You will work closely with cross-functional teams, including Product, Engineering, and Data Science, to drive data architecture decisions and deliver actionable insights that solve complex business challenges. This role offers the opportunity to provide technical guidance and mentorship to junior analysts while working on impactful, data-driven projects.
This position is available in multiple locations across India, specifically Mumbai, Bangalore, and Pune. The ideal candidate will have over four years of experience in data-focused roles and a deep understanding of the AWS ecosystem, including Redshift, Athena, and EMR. Experience with US Healthcare Data is considered a significant advantage. Candidates must be proficient in Python, PySpark, and SQL, and possess a strong grasp of data modeling and performance optimization. If you are passionate about big data and looking to make a significant impact in a collaborative environment, we encourage you to apply.
Key Requirements
Minimum of 4 years of professional experience in Data Analytics or Data Engineering.
Demonstrated expertise in SQL and both relational and NoSQL database management.
Hands-on proficiency with Python and PySpark for processing large-scale datasets.
Proven experience in building and optimizing ETL/ELT pipelines using Airflow or AWS Glue.
Strong conceptual understanding of data modeling and performance tuning.
Advanced technical knowledge of AWS data services including S3, EMR, Redshift, and Athena.
Ability to design and manage complex data architectures across structured and unstructured sources.
Competency in maintaining high standards for data quality, validation, and monitoring.
Strategic thinking skills to perform analysis and generate actionable business insights.
Strong collaborative skills to work effectively with Product, Engineering, and Client teams.
Negotiable or Not Mentioned
India
16 days ago
nishtechnologies.com
933 Views
Nish Technologies is seeking a skilled Data Engineer to join a prestigious Big4 MNC client on a full-time basis. The ideal candidate will have between 5 and 8 years of professional experience in data engineering and will be responsible for designing and implementing efficient data solutions. The primary work locations for this role are Hyderabad and Bangalore. This position requires a strong technical background and the ability to work in a fast-paced, high-impact environment.
Candidates must be proficient in Python, SQL, and PySpark to handle complex data sets and pipelines. We are conducting a virtual weekend recruitment drive on Saturday, April 4th. This role is intended for immediate joiners or those with a notice period of up to 15 days. Interested professionals are encouraged to share their profiles for consideration in this expedited hiring process.
Key Requirements
5 to 8 years of relevant experience in Data Engineering.
Advanced proficiency in Python programming for data processing.
Expertise in writing complex SQL queries for database management.
In-depth knowledge of PySpark and its application in big data projects.
Ability to join immediately or within a maximum notice period of 15 days.
Experience working within a Big4 MNC or similar large-scale environment.
Strong analytical skills to solve complex data-related problems.
Familiarity with ETL processes and data pipeline orchestration.
Excellent communication skills for cross-functional team collaboration.
Availability to participate in the virtual weekend recruitment drive on April 4th.
Negotiable or Not Mentioned
India, Remote
14 days ago
sapphiresoftwaresolutions.com
1297 Views
We are seeking a skilled Data Engineer to join a fast-growing team supporting major global brands like KFC, Pizza Hut, and Taco Bell. This is a fully remote role based in India, with a shift schedule of 12 PM to 9 PM IST. The initial contract duration is three months, with a high probability of extension based on performance and project needs. You will be responsible for building and optimizing data pipelines using Informatica IICS and Snowflake, focusing on scalable data integration frameworks within an AWS cloud environment.
The ideal candidate should have at least 2 years of experience in data engineering, with strong technical skills in Python scripting, SQL, and event-driven architectures. You will work on impactful global projects, supporting advanced analytics and AI/ML initiatives while collaborating with dedicated DevOps teams. Exposure to Airflow and streaming pipelines such as Kafka or AWS Streaming is highly desirable. This is an excellent opportunity to work on modern data platforms and drive data-driven decisions for world-class organizations.
Key Requirements
2+ years of professional Data Engineering experience.
Proficiency in Informatica Cloud (IICS) as the primary ETL tool.
Hands-on experience with Snowflake as a target data platform.
Strong expertise in AWS (Amazon Web Services) cloud environment.
Advanced knowledge of SQL for complex data queries and manipulation.
Solid programming skills in Python for scripting and automation.
Experience handling both structured and semi-structured data formats.
Familiarity with REST APIs and AWS Lambda functions.
Ability to work the 12 PM to 9 PM IST shift.
Capacity to collaborate effectively with cross-functional teams and DevOps.
Negotiable or Not Mentioned
India, Remote
6 days ago
tekvo.io
547 Views
Tekvo is looking for a seasoned and highly motivated Azure Technical Lead to spearhead our cloud data engineering projects. In this critical role, you will be the driving force behind large-scale Azure analytics initiatives, overseeing the end-to-end development of high-impact data platforms. You will be responsible for defining the solution design, ensuring technical excellence across the delivery lifecycle, and providing strategic guidance to engineering teams. Your expertise will directly contribute to the creation of scalable, robust, and efficient data architectures that empower our clients to make data-driven decisions.
As an Azure Technical Lead, you must demonstrate mastery over core Azure data services such as Data Factory, Synapse, and Databricks. The role demands a blend of deep technical proficiency in SQL and Python along with the leadership skills required to mentor developers and manage complex stakeholder expectations. This remote position offers a unique opportunity for professionals based in India to work on cutting-edge cloud technologies within a collaborative environment. Successful candidates will be expected to maintain high standards of code quality and architectural integrity while driving innovation in the data engineering space.
Key Requirements
Possess 10-14 years of professional experience in data engineering and cloud platforms.
Demonstrate expert-level proficiency in designing and implementing Azure Data Factory pipelines.
Have hands-on experience with Azure Synapse Analytics for enterprise data warehousing.
Show strong technical expertise in using Azure Databricks for big data processing.
Maintain advanced knowledge of SQL for complex data manipulation and performance tuning.
Exhibit proficiency in Python programming for automating data workflows and engineering tasks.
Prove a track record of leading and delivering large-scale analytics initiatives on Azure.
Possess strong solution design skills with the ability to create scalable data architectures.
Demonstrate the ability to guide, mentor, and manage high-performing technical teams.
Experience in cloud security best practices and data governance frameworks is highly preferred.
Excellent communication skills to interact with stakeholders and translate business needs into technical solutions.
Negotiable or Not Mentioned
India, Remote
18 days ago
idtsolution.in
1764 Views
We are looking for a skilled and passionate Data Engineer to join our advanced analytics team. The ideal candidate will have hands-on experience in IICS, Snowflake, SQL, and Cloud platforms, with a strong foundation in building scalable data pipelines and modern data integration frameworks. You will play a key role in developing data-driven solutions to support AI/ML initiatives and enhance customer experience across global operations. The role requires working from 12:00 PM to 9:00 PM IST to align with team requirements.
Your core responsibilities will include designing ETL/ELT processes and managing cloud-based data solutions on platforms like AWS, Azure, or GCP. You will handle both structured and semi-structured data while implementing essential data quality and monitoring processes. Collaboration with cross-functional teams is vital to support business growth initiatives. The package for this role is up to ₹14 LPA, reflecting the expertise required to handle complex data architectures and DevOps practices within a modern data stack environment.
Key Requirements
2+ years of experience in Data Engineering.
Strong hands-on experience with Snowflake.
Proficiency in Cloud platforms (AWS, Azure, or GCP).
Expert knowledge of SQL and Python.
Extensive experience with Informatica Cloud (IICS) and ETL tools.
Proven ability to build scalable data pipelines and cloud-based data solutions.
Knowledge of serverless architectures and APIs.
Familiarity with DevOps practices including CI/CD and IaC.
Experience with streaming tools like Kafka or Spark Streaming.
Bachelor’s degree in a relevant field such as Computer Science or Engineering.
Negotiable or Not Mentioned
India
20 days ago
se-mentor.com
1042 Views
SE Mentor Solutions is seeking a skilled ETL Developer to join our technical team. This role is responsible for designing, developing, and maintaining robust ETL processes to support our data integration needs. You will work closely with data architects and analysts to ensure data quality and system efficiency across various platforms. Possible work locations for this role include Cochin and Bengaluru.
The ideal candidate will have over 4 years of experience with a strong background in Advanced SQL and various ETL tools. You will be expected to optimize database performance, troubleshoot complex data issues, and contribute to the continuous improvement of our data infrastructure. This is an excellent opportunity to work on scalable solutions in a high-growth environment focused on technological innovation and data excellence. No salary was mentioned in the original posting.
Key Requirements
Minimum of 4 years of professional experience in ETL development.
Proficiency in Advanced SQL for complex query writing and optimization.
Experience with data modeling and database design principles.
Proven ability to build and maintain scalable data pipelines.
Strong analytical and problem-solving skills for troubleshooting data issues.
Knowledge of performance tuning for ETL processes and database systems.
Familiarity with version control systems such as Git.
Excellent communication skills for collaborating with cross-functional teams.
Ability to document technical processes and data workflows clearly.
Experience working in an Agile development environment.
Negotiable or Not Mentioned
India
18 days ago
compugra.com
1102 Views
Compugra is urgently seeking a seasoned Java Full Stack Developer for a full-time engagement. This role is designed for a highly skilled professional with 7 to 10 years of experience who can hit the ground running in a fast-paced environment. The position offers the opportunity to work in dynamic technical hubs with possible work locations in Hyderabad and Bangalore.
The successful candidate will be responsible for end-to-end software development, involving both complex backend logic and responsive frontend design. You will work within a collaborative team to build scalable applications, maintain code quality, and implement modern software engineering practices. No specific salary was mentioned in the original posting.
Key Requirements
Minimum of 7 to 10 years of professional experience in full stack development.
In-depth knowledge of Java and the Spring Framework ecosystem.
Proficiency in frontend technologies such as React, Angular, or Vue.js.
Experience building and consuming RESTful web services.
Strong understanding of relational databases like MySQL, Oracle, or PostgreSQL.
Familiarity with Microservices architecture and containerization.
Proven experience with version control systems, specifically Git.
Ability to work effectively in an Agile/Scrum development environment.
Solid understanding of object-oriented programming principles and design patterns.
Excellent analytical, debugging, and problem-solving capabilities.
Negotiable or Not Mentioned
India
31 days ago
gspann.com
1639 Views
We are looking for an experienced GenAI Engineer to join the GSPANN team to work on cutting-edge generative technologies. This position is available in several key Indian locations including Gurugram, Hyderabad, and Pune. As a senior member of the team with over 6 years of experience, you will lead the development of innovative generative AI applications and frameworks that drive business value.
The role requires deep expertise in Python, SQL, and MLOps to manage the entire lifecycle of generative models. You will be responsible for designing and implementing AI solutions, working closely with data scientists and software engineers to create impactful products. This is an opportunity to be at the forefront of the AI revolution and contribute to high-visibility projects within a global organization.
Key Requirements
At least 6 years of relevant experience in software or data engineering.
Expert-level proficiency in Python programming.
Strong command of SQL for data manipulation and analysis.
Extensive knowledge of Generative AI concepts and frameworks.
Proven experience with MLOps for model deployment and monitoring.
Solid background in Data Science methodologies.
General expertise in Artificial Intelligence and Machine Learning.
Experience with Large Language Models (LLMs) and fine-tuning.
Understanding of neural network architectures and transformers.
Ability to collaborate with cross-functional teams on complex projects.
Negotiable or Not Mentioned
India
6 days ago
huquo.com
408 Views
Huquo is currently looking for talented professionals for a General Analytics role based across India. The position is categorized under Band C1/C2 and is specifically looking for candidates who can join immediately. The role involves working with data analytics frameworks and requires a strong grasp of SQL and foundational concepts of Generative AI to provide meaningful insights and data-driven solutions for the organization.
Candidates should have over 4 years of relevant experience in the analytics field. You will be expected to utilize your analytical expertise to handle various data sets and contribute to the company's strategic goals. This is an excellent opportunity for experienced professionals to advance their careers in a fast-paced and innovative environment where data analytics is at the forefront of business operations.
Key Requirements
Minimum of 4 years of professional experience in analytics.
Proficiency in SQL for data manipulation and querying.
Basic understanding and knowledge of Generative AI (Gen AI).
Strong expertise in General Data Analytics and methodologies.
Ability to meet the criteria for professional bands C1 or C2.
Must be an immediate joiner or have a very short notice period.
Strong analytical and problem-solving capabilities.
Ability to work in a fast-paced corporate environment across India.
Experience with data visualization tools and reporting techniques.
Excellent communication skills to present data findings to stakeholders.