Negotiable or Not Mentioned
India, Remote
18 days ago
idtsolution.in
1871 Views
We are looking for a skilled and passionate Data Engineer to join our advanced analytics team. The ideal candidate will have hands-on experience in IICS, Snowflake, SQL, and Cloud platforms, with a strong foundation in building scalable data pipelines and modern data integration frameworks. You will play a key role in developing data-driven solutions to support AI/ML initiatives and enhance customer experience across global operations. The role requires working from 12:00 PM to 9:00 PM IST to align with team requirements.
Your core responsibilities will include designing ETL/ELT processes and managing cloud-based data solutions on platforms like AWS, Azure, or GCP. You will handle both structured and semi-structured data while implementing essential data quality and monitoring processes. Collaboration with cross-functional teams is vital to support business growth initiatives. The package for this role is up to ₹14 LPA, reflecting the expertise required to handle complex data architectures and DevOps practices within a modern data stack environment.
Key Requirements
2+ years of experience in Data Engineering.
Strong hands-on experience with Snowflake.
Proficiency in Cloud platforms (AWS, Azure, or GCP).
Expert knowledge of SQL and Python.
Extensive experience with Informatica Cloud (IICS) and ETL tools.
Proven ability to build scalable data pipelines and cloud-based data solutions.
Knowledge of serverless architectures and APIs.
Familiarity with DevOps practices including CI/CD and IaC.
Experience with streaming tools like Kafka or Spark Streaming.
Bachelor’s degree in a relevant field such as Computer Science or Engineering.
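As a rough illustration of the streaming-pipeline skills this listing asks for, here is a minimal, hypothetical sketch in plain Python (no Kafka dependency) of tumbling-window aggregation over an event stream, the core logic a Kafka or Spark Streaming consumer implements; the event shapes and field choices are illustrative, not from the employer:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    """Aggregate (timestamp, key) events into fixed, non-overlapping windows.

    `events` is an iterable of (epoch_seconds, key) tuples; returns
    {window_start: {key: count}}. A real Kafka/Spark Streaming job adds
    watermarking and state checkpointing on top of this idea.
    """
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        window_start = ts - (ts % window_seconds)  # snap to window boundary
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in windows.items()}

# Events at t=0 and t=10 land in window 0; t=65 and t=70 in window 60.
events = [(0, "click"), (10, "view"), (65, "click"), (70, "click")]
result = tumbling_window_counts(events, window_seconds=60)
```

In production the same grouping would be expressed declaratively (e.g., a windowed aggregation in Spark Structured Streaming), but the bucketing arithmetic is the same.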
Negotiable or Not Mentioned
India
20 days ago
pyxidiatech.com
1233 Views
We are seeking a highly skilled Senior Data Analyst to join our dynamic team and play a pivotal role in designing, building, and scaling innovative data solutions across our products and client implementations. The successful candidate will be responsible for developing scalable data pipelines, optimizing ETL workflows, and ensuring the highest standards of data quality and reliability. You will work closely with cross-functional teams, including Product, Engineering, and Data Science, to drive data architecture decisions and deliver actionable insights that solve complex business challenges. This role offers the opportunity to provide technical guidance and mentorship to junior analysts while working on impactful, data-driven projects.

This position is available in multiple locations across India, specifically Mumbai, Bangalore, and Pune. The ideal candidate will have over four years of experience in data-focused roles and a deep understanding of the AWS ecosystem, including Redshift, Athena, and EMR. Experience with US Healthcare Data is considered a significant advantage. Candidates must be proficient in Python, PySpark, and SQL, and possess a strong grasp of data modeling and performance optimization. If you are passionate about big data and looking to make a significant impact in a collaborative environment, we encourage you to apply.
Key Requirements
Minimum of 4 years of professional experience in Data Analytics or Data Engineering.
Demonstrated expertise in SQL and both relational and NoSQL database management.
Hands-on proficiency with Python and PySpark for processing large-scale datasets.
Proven experience in building and optimizing ETL/ELT pipelines using Airflow or AWS Glue.
Strong conceptual understanding of data modeling and performance tuning.
Advanced technical knowledge of AWS data services including S3, EMR, Redshift, and Athena.
Ability to design and manage complex data architectures across structured and unstructured sources.
Competency in maintaining high standards for data quality, validation, and monitoring.
Strategic thinking skills to perform analysis and generate actionable business insights.
Strong collaborative skills to work effectively with Product, Engineering, and Client teams.
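To make the "data quality, validation, and monitoring" requirement above concrete, here is a small, hypothetical sketch of the kind of pre-load checks an ETL pipeline might run; the field names and rules are illustrative assumptions, not any specific client schema:

```python
def validate_rows(rows, required_fields, unique_key):
    """Basic batch data-quality checks: required-field completeness and
    key uniqueness. Returns a dict of issue lists; empty lists mean the
    batch is clean. Field names here are illustrative."""
    missing = [
        r for r in rows
        if any(r.get(f) in (None, "") for f in required_fields)
    ]
    seen, duplicates = set(), []
    for r in rows:
        k = r.get(unique_key)
        if k in seen:
            duplicates.append(r)  # keep the offending row for the report
        seen.add(k)
    return {"missing_required": missing, "duplicate_keys": duplicates}

rows = [
    {"id": 1, "name": "a"},
    {"id": 2, "name": ""},   # fails the completeness check
    {"id": 1, "name": "c"},  # fails the uniqueness check
]
report = validate_rows(rows, required_fields=["name"], unique_key="id")
```

At scale the same assertions would typically run inside the pipeline (e.g., as Airflow task checks or PySpark filters) rather than in driver-side Python.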
Negotiable or Not Mentioned
India
31 days ago
gspann.com
1832 Views
GSPANN is currently seeking five dedicated AI Ops Engineers to join our team across several locations in India, including Gurugram, Hyderabad, Pune, and Delhi NCR. This role focuses on optimizing AI operations and ensuring the seamless integration of machine learning models into production environments through robust automation and monitoring strategies. As part of a forward-thinking team, you will play a crucial role in maintaining the reliability and efficiency of our AI infrastructure to support large-scale enterprise deployments. Candidates should have at least 3 years of relevant experience and demonstrate a strong background in SQL, Python, and major cloud platforms like AWS, GCP, or Azure. You will work extensively with DevOps methodologies and CI/CD pipelines to build impactful AI-driven solutions that solve complex business challenges. The ideal candidate is passionate about the intersection of artificial intelligence and systems engineering and is eager to build something impactful.
Key Requirements
Minimum of 3 years of professional experience in technical roles.
Proficiency in AIOps methodologies and operational frameworks.
Strong database management skills using SQL.
Advanced programming abilities in Python.
Hands-on experience with Cloud Platforms including AWS, GCP, or Azure.
Solid understanding of DevOps practices and principles.
Expertise in designing and implementing CI/CD pipelines.
Experience with system monitoring and observability tools.
Familiarity with containerization technologies such as Docker or Kubernetes.
Strong analytical and troubleshooting skills for production systems.
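As a toy stand-in for the monitoring and observability work described above, the sketch below flags outliers in a metric series with a z-score test, one of the simplest statistical checks an AIOps alerting pipeline might apply before paging an engineer; the metric values and threshold are illustrative:

```python
import statistics

def flag_anomalies(values, threshold=2.0):
    """Return indices of points whose z-score against the series mean
    exceeds `threshold`. A deliberately simple stand-in for the checks
    observability tools run on latency or error-rate metrics."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # flat series: nothing to flag
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > threshold]

latencies = [100, 102, 98, 101, 99, 500]  # ms; the last point is a spike
spikes = flag_anomalies(latencies)
```

Real systems would use rolling windows and seasonality-aware baselines rather than a global mean, but the decision step (deviation vs. tolerance) is the same.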
Negotiable or Not Mentioned
India, Remote
14 days ago
sapphiresoftwaresolutions.com
1297 Views
We are seeking a skilled Data Engineer to join a fast-growing team supporting major global brands like KFC, Pizza Hut, and Taco Bell. This is a fully remote role based in India, with a shift schedule of 12 PM to 9 PM IST. The initial contract duration is three months, with a high probability of extension based on performance and project needs. You will be responsible for building and optimizing data pipelines using Informatica IICS and Snowflake, focusing on scalable data integration frameworks within an AWS cloud environment. The ideal candidate should have at least 2 years of experience in data engineering, with strong technical skills in Python scripting, SQL, and event-driven architectures. You will work on impactful global projects, supporting advanced analytics and AI/ML initiatives while collaborating with dedicated DevOps teams. Exposure to Airflow and streaming pipelines such as Kafka or AWS streaming services is highly desirable. This is an excellent opportunity to work on modern data platforms and drive data-driven decisions for world-class organizations.
Key Requirements
2+ years of professional Data Engineering experience.
Proficiency in Informatica Cloud (IICS) as the primary ETL tool.
Hands-on experience with Snowflake as a target data platform.
Strong expertise in AWS (Amazon Web Services) cloud environment.
Advanced knowledge of SQL for complex data queries and manipulation.
Solid programming skills in Python for scripting and automation.
Experience handling both structured and semi-structured data formats.
Familiarity with REST APIs and AWS Lambda functions.
Ability to work the 12 PM to 9 PM IST shift.
Capacity to collaborate effectively with cross-functional teams and DevOps.
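The semi-structured-data requirement above often amounts to flattening nested JSON into columns. Here is a minimal, hypothetical Python sketch of that transformation, roughly what Snowflake's dot-notation access over VARIANT data gives you; the payload keys are invented for illustration:

```python
import json

def flatten_record(record, parent_key="", sep="."):
    """Flatten nested JSON into dot-separated column names, e.g.
    {"customer": {"id": 42}} -> {"customer.id": 42}."""
    items = {}
    for key, value in record.items():
        full_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            items.update(flatten_record(value, full_key, sep))
        else:
            items[full_key] = value
    return items

payload = json.loads('{"order_id": 7, "customer": {"id": 42, "region": "IN"}}')
flat = flatten_record(payload)
```

In an IICS-to-Snowflake pipeline this shaping would usually happen in SQL on the VARIANT column rather than in Python, but the mapping is the same.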
Negotiable or Not Mentioned
India
31 days ago
gspann.com
1295 Views
GSPANN is hiring an AI Developer with over 5 years of experience to contribute to our sophisticated artificial intelligence projects. The role is based in India, with offices located in Gurugram, Hyderabad, and Pune. You will be at the forefront of developing AI/ML solutions and managing complex data processes through efficient ETL and ELT pipelines. Candidates must be proficient in Azure AI Services, Python, and modern data integration processes. You will collaborate with cross-functional teams to build and deploy intelligent software systems, ensuring high performance and scalability of AI-driven applications. This role is ideal for a developer who enjoys solving difficult problems and wants to work on the latest technologies in the cloud AI space.
Key Requirements
Minimum of 5 years of experience in AI or software development.
In-depth knowledge of Azure AI Services and cloud environments.
Strong proficiency in Python development.
Extensive experience with ETL and ELT data processes.
Solid understanding of AI/ML algorithms and their application.
Experience in designing and maintaining scalable data pipelines.
Ability to work with both structured and unstructured data sources.
Familiarity with software development lifecycles (SDLC).
Strong problem-solving skills and attention to detail.
Excellent communication and teamwork capabilities.
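Since the listing above distinguishes ETL from ELT experience, here is a minimal sketch of the ELT pattern: load raw rows first, then transform with SQL where the data already lives. SQLite stands in for a cloud warehouse, and the schema is an illustrative assumption:

```python
import sqlite3

def elt_demo(raw_rows):
    """Load raw rows into the database, then transform in-place with SQL —
    the ELT ordering, as opposed to ETL's transform-before-load."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE raw_events (user_id INTEGER, amount REAL)")
    con.executemany("INSERT INTO raw_events VALUES (?, ?)", raw_rows)
    # Transform step runs inside the engine holding the data.
    con.execute(
        "CREATE TABLE user_totals AS "
        "SELECT user_id, SUM(amount) AS total FROM raw_events GROUP BY user_id"
    )
    return dict(con.execute("SELECT user_id, total FROM user_totals"))

totals = elt_demo([(1, 10.0), (2, 5.0), (1, 2.5)])
```

The practical appeal of ELT is that the warehouse's own compute handles the heavy transformation, so raw data stays available for re-deriving tables later.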
Negotiable or Not Mentioned
India, Remote
6 days ago
tekvo.io
316 Views
Tekvo is looking for a seasoned and highly motivated Azure Technical Lead to spearhead our cloud data engineering projects. In this critical role, you will be the driving force behind large-scale Azure analytics initiatives, overseeing the end-to-end development of high-impact data platforms. You will be responsible for defining the solution design, ensuring technical excellence across the delivery lifecycle, and providing strategic guidance to engineering teams. Your expertise will directly contribute to the creation of scalable, robust, and efficient data architectures that empower our clients to make data-driven decisions.
As an Azure Technical Lead, you must demonstrate mastery over core Azure data services such as Data Factory, Synapse, and Databricks. The role demands a blend of deep technical proficiency in SQL and Python along with the leadership skills required to mentor developers and manage complex stakeholder expectations. This remote position offers a unique opportunity for professionals based in India to work on cutting-edge cloud technologies within a collaborative environment. Successful candidates will be expected to maintain high standards of code quality and architectural integrity while driving innovation in the data engineering space.
Key Requirements
Possess 10-14 years of professional experience in data engineering and cloud platforms.
Demonstrate expert-level proficiency in designing and implementing Azure Data Factory pipelines.
Have hands-on experience with Azure Synapse Analytics for enterprise data warehousing.
Show strong technical expertise in using Azure Databricks for big data processing.
Maintain advanced knowledge of SQL for complex data manipulation and performance tuning.
Exhibit proficiency in Python programming for automating data workflows and engineering tasks.
Prove a track record of leading and delivering large-scale analytics initiatives on Azure.
Possess strong solution design skills with the ability to create scalable data architectures.
Demonstrate the ability to guide, mentor, and manage high-performing technical teams.
Experience in cloud security best practices and data governance frameworks is highly preferred.
Excellent communication skills to interact with stakeholders and translate business needs into technical solutions.
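One pattern a candidate for the pipeline-design requirements above would be expected to know is watermark-based incremental loading, which Azure Data Factory implements with a high-watermark column. The sketch below is a minimal, hypothetical version with illustrative field names:

```python
def incremental_load(source_rows, watermark):
    """Pull only rows modified after the last successful run, then
    advance the watermark to the newest timestamp seen."""
    new_rows = [r for r in source_rows if r["modified_at"] > watermark]
    new_watermark = max((r["modified_at"] for r in new_rows), default=watermark)
    return new_rows, new_watermark

rows = [
    {"id": 1, "modified_at": 10},
    {"id": 2, "modified_at": 25},
    {"id": 3, "modified_at": 30},
]
# Last run committed watermark 20, so only ids 2 and 3 are extracted.
loaded, wm = incremental_load(rows, watermark=20)
```

The key design property is idempotence: re-running with the same committed watermark re-extracts the same delta, which makes failed runs safe to retry.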
Negotiable or Not Mentioned
India, Remote
16 days ago
e-solutionsinc.com
845 Views
This is a specialized remote opportunity for an Electrical Engineering Pod Lead to join a dynamic project for a duration of 22 weeks. The role requires a high level of technical expertise, necessitating a Master's degree or PhD and at least five years of deep hands-on electrical engineering design experience. As a Pod Lead, you will be responsible for owning designs from the initial specification phase through to validated implementation, providing technical leadership and team coordination to ensure project milestones are met effectively within the PST overlap hours.
Candidates must demonstrate advanced proficiency in Python for scripting and automation, alongside hands-on experience with Docker for managing containerized environments. Proficiency in at least one major cloud platform and various open-source EE simulation tools is required. You must have deep expertise in specialized subdomains such as Analog/Mixed-Signal IC Design, Power Electronics, RF Engineering, or Embedded Systems. The role demands a commitment of at least 30 to 40 hours per week, with a critical requirement for a 4-hour daily overlap with Pacific Standard Time.
Key Requirements
Master's degree or PhD in Electrical Engineering or a closely related field.
Minimum 5 years of hands-on electrical engineering design experience.
Proven track record of owning designs from specification through validated implementation.
At least 1–2 years of experience in a technical lead, senior engineer, or team coordination role.
Proficiency with Python for scripting, automation, and simulation workflows.
Hands-on experience with Docker, including building images and managing containerized environments.
Familiarity with cloud platforms such as AWS, GCP, or Azure for running workloads and monitoring jobs.
Proficiency with open-source EE simulation tools like ngspice, PySpice, or OpenEMS.
Deep expertise in Analog/Mixed-Signal IC Design or Power Electronics.
Strong understanding of RF/Microwave Engineering or Digital Systems/FPGA design.
Ability to work a minimum of 30-40 hours per week with a 4-hour PST overlap.
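As a tiny illustration of the simulation-scripting skills this listing combines (Python plus EE fundamentals), here is the closed-form step response of an RC charging circuit, the analytic counterpart of a transient analysis a tool like ngspice performs numerically; component values are illustrative:

```python
import math

def rc_step_response(v_source, r_ohms, c_farads, t_seconds):
    """Capacitor voltage while charging through a resistor:
    v(t) = Vs * (1 - e^(-t / RC)), with time constant tau = R * C."""
    tau = r_ohms * c_farads
    return v_source * (1.0 - math.exp(-t_seconds / tau))

# 5 V source, 1 kΩ, 100 µF -> tau = 0.1 s; after one time constant the
# capacitor reaches about 63.2 % of the source voltage.
v_at_tau = rc_step_response(5.0, 1_000, 100e-6, 0.1)
```

In a SPICE workflow the same check (does the simulated waveform hit ~63.2 % of Vs at t = tau?) is a standard sanity test for a transient setup.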
Negotiable or Not Mentioned
India
16 days ago
nishtechnologies.com
933 Views
Nish Technologies is seeking a skilled Data Engineer to join a prestigious Big4 MNC client on a full-time basis. The ideal candidate will have between 5 and 8 years of professional experience in data engineering and will be responsible for designing and implementing efficient data solutions. The primary work locations for this role are Hyderabad and Bangalore. This position requires a strong technical background and the ability to work in a fast-paced, high-impact environment.
Candidates must be proficient in Python, SQL, and PySpark to handle complex data sets and pipelines. We are conducting a virtual weekend recruitment drive on Saturday, April 4th. This role is intended for immediate joiners or those with a notice period of up to 15 days. Interested professionals are encouraged to share their profiles for consideration in this expedited hiring process.
Key Requirements
5 to 8 years of relevant experience in Data Engineering.
Advanced proficiency in Python programming for data processing.
Expertise in writing complex SQL queries for database management.
In-depth knowledge of PySpark and its application in big data projects.
Ability to join immediately or within a maximum notice period of 15 days.
Experience working within a Big4 MNC or similar large-scale environment.
Strong analytical skills to solve complex data-related problems.
Familiarity with ETL processes and data pipeline orchestration.
Excellent communication skills for cross-functional team collaboration.
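For the PySpark requirement above, the core mental model is shuffle-and-aggregate. The pure-Python sketch below mirrors the `df.groupBy(key).agg(sum(value))` pattern without needing a Spark cluster; the field names are illustrative:

```python
def group_sum(records, key_field, value_field):
    """Pure-Python analogue of PySpark's groupBy/sum aggregation:
    bucket rows by key, then reduce each bucket to a running total."""
    totals = {}
    for r in records:
        k = r[key_field]
        totals[k] = totals.get(k, 0) + r[value_field]
    return totals

sales = [
    {"city": "Hyderabad", "amount": 5},
    {"city": "Bangalore", "amount": 3},
    {"city": "Hyderabad", "amount": 2},
]
city_totals = group_sum(sales, "city", "amount")
```

On Spark the same logic is distributed: partial sums are computed per partition, then merged after a shuffle, which is why associative aggregations like `sum` scale well.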
Availability to participate in the virtual weekend recruitment drive on April 4th.
Negotiable or Not Mentioned
India
14 days ago
sysmind.com
1173 Views
We are looking for an experienced Senior Power BI Developer to lead the design and maintenance of critical dashboards and reports for our business operations. This position is available in both Hyderabad and Pune as a contract-to-hire opportunity. The role involves working closely with stakeholders to understand their data needs and delivering high-quality analytical solutions that drive decision-making through intuitive data visualization and robust reporting structures.
The candidate must demonstrate strong expertise in data modeling, ETL processes, and writing optimized SQL, DAX, and Power Query (M) code. Practical experience with cloud data integration tools such as Informatica IICS, Snowflake, and Azure Blob storage is highly preferred. You will be responsible for ensuring the integrity and performance of the data pipeline, from source systems to final visual reports, while collaborating with cross-functional teams in a fast-paced environment.
Key Requirements
6–8 years of professional experience as a Power BI Developer or in a similar Business Intelligence role.
Advanced proficiency in designing and maintaining complex Power BI dashboards and reports.
Strong expertise in data modeling and creating efficient relationships between diverse data sets.
Proven ability to write and optimize complex DAX queries for advanced calculations.
Deep understanding of Power Query (M) for data transformation and cleaning processes.
Excellent SQL skills for data extraction, manipulation, and performance tuning.
Hands-on experience with ETL tools, specifically Informatica IICS, for data integration.
Familiarity with cloud data warehousing solutions such as Snowflake.
Knowledge of Azure cloud services, including Azure Blob storage for data management.
Strong analytical and problem-solving skills with the ability to work in a contract-to-hire capacity.
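The DAX skills listed above center on filtered aggregation, the `CALCULATE(SUM(...), <filters>)` pattern. As a language-neutral sketch (Python here, with an invented fact table), the same measure logic looks like this:

```python
def calculate_measure(rows, value_field, **filters):
    """Sum a value column under column-equality filters — a plain-Python
    analogue of a DAX CALCULATE(SUM(...), col = val) measure. The fact
    table and columns below are illustrative."""
    return sum(
        r[value_field]
        for r in rows
        if all(r.get(col) == val for col, val in filters.items())
    )

fact_sales = [
    {"region": "West", "year": 2023, "amount": 100},
    {"region": "East", "year": 2023, "amount": 50},
    {"region": "West", "year": 2024, "amount": 70},
]
west_2023 = calculate_measure(fact_sales, "amount", region="West", year=2023)
```

In Power BI the filters would normally come from slicers and row context rather than explicit arguments, but the evaluation idea (restrict the table, then aggregate) is the same.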
Negotiable or Not Mentioned
India
8 days ago
gmail.com
483 Views
SearchMate is partnering with an elite client to recruit a Senior SDET specialized in ETL environments and Backend Automation. This is a high-impact role aimed at selection-ready professionals capable of managing complex data pipelines and developing robust automation frameworks. The candidate will work under a hybrid model with office locations available in Chennai, Hyderabad, and Pune. The selected candidate will join the client's payroll for an initial duration of 6 months, with the possibility of extension based on performance and project needs. Responsibilities include ensuring the architectural integrity of massive data migrations and performing automated validation for a global enterprise. This role offers a strategic path into high-tier US product environments.
Key Requirements
7+ Years of overall IT experience in Quality Engineering.
Proven hands-on experience working as a QA Tester in ETL environments.
Strong proficiency in SQL.
Deep experience in RDBMS databases such as Oracle or SQL Server.
Hands-on expertise in Java-Selenium or Python automation testing.
Fluent in Agile workflows and cross-functional team collaboration.
Exceptional English communication skills for stakeholder interfacing.
Ability to bridge gaps between complex data pipelines and automation frameworks.
Experience in architectural integrity for massive data migrations.
Proactive mindset for high-tier US product environment standards.
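A concrete instance of the ETL-validation work this SDET role describes is source-vs-target reconciliation. The hypothetical sketch below builds an order-insensitive table checksum so row ordering differences between systems do not cause false failures; the columns are illustrative:

```python
import hashlib

def table_checksum(rows, column_order):
    """Hash each row's values in a fixed column order, then XOR the
    digests together: equal row sets produce equal checksums regardless
    of row order, so source and target extracts can be compared cheaply."""
    total = 0
    for row in rows:
        digest = hashlib.sha256(
            "|".join(str(row[c]) for c in column_order).encode()
        ).digest()
        total ^= int.from_bytes(digest, "big")
    return total

source = [{"id": 1, "amt": 10}, {"id": 2, "amt": 20}]
target = [{"id": 2, "amt": 20}, {"id": 1, "amt": 10}]  # same rows, new order
cols = ["id", "amt"]
```

In practice this runs alongside row-count checks and sampled field-level diffs, since a checksum tells you *that* the tables diverge but not *where*.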