Salary: Negotiable or Not Mentioned
India, Bangalore
10 days ago
refex.co.in
962 Views
RGML is seeking a proactive and experienced DevOps Engineer to manage and scale our new AWS-based cloud architecture. This role is central to building a secure, fault-tolerant, and highly available environment that supports our Sun, Drive, and Comm platforms. You will be responsible for designing and implementing infrastructure using various AWS services across multiple Availability Zones, ensuring the platform remains robust and scalable. In this position, you'll play a critical role in automation, deployment pipelines, monitoring, and cloud cost optimization.
Key tasks include maintaining EC2 Auto Scaling Groups, managing API Gateways, and optimizing Aurora SQL Clusters with multi-AZ failover strategies. You will also enforce infrastructure-as-code practices and collaborate with engineering teams to enable DevSecOps best practices, driving the transformation of legacy systems into modern, scalable infrastructure. You will be working on mission-critical mobility platforms with a growing user base, offering a collaborative and fast-paced environment where you can drive automation and shape future DevSecOps practices.
Key Requirements
Strong hands-on experience with AWS core services: EC2 (Linux and Windows), ALB, VPC, S3, Aurora, CloudWatch, API Gateway, IAM, and VPN.
Deep understanding of multi-AZ, high availability, and auto-healing architectures.
Experience with CI/CD tools such as GitHub Actions, Jenkins, or CodePipeline and scripting in Bash, Python, or Shell.
Working knowledge of networking and cloud security best practices including Security Groups, NACLs, and IAM roles.
Experience with Bastion architecture, Client VPNs, Route 53, and VPC peering.
Familiarity with backup and restore strategies and monitoring/logging pipelines.
Proven ability to implement and maintain infrastructure-as-code practices using Terraform or CloudFormation.
Ability to design and manage infrastructure across multiple Availability Zones to ensure fault tolerance.
Experience maintaining and scaling EC2 Auto Scaling Groups and Application Load Balancers.
Proficiency in setting up and optimizing Aurora SQL Clusters with multi-AZ active-active failover strategies.
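The multi-AZ fault-tolerance requirement above can be illustrated with a small, hedged sketch in plain Python (not tied to any real AWS account; the AZ names are placeholders): spreading instances round-robin so that losing any single Availability Zone removes at most a balanced share of capacity.

```python
from itertools import cycle

def spread_across_azs(instance_count: int, azs: list[str]) -> dict[str, int]:
    """Distribute instances round-robin across Availability Zones so that
    the loss of any one AZ removes at most ceil(n / len(azs)) instances."""
    placement = {az: 0 for az in azs}
    for _, az in zip(range(instance_count), cycle(azs)):
        placement[az] += 1
    return placement

# Placeholder AZ names for illustration only.
print(spread_across_azs(7, ["ap-south-1a", "ap-south-1b", "ap-south-1c"]))
# → {'ap-south-1a': 3, 'ap-south-1b': 2, 'ap-south-1c': 2}
```

In a real Auto Scaling Group this balancing is done by AWS itself when multiple subnets are attached; the sketch only shows the invariant the requirement is after.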
Salary: Negotiable or Not Mentioned
India, Coimbatore
15 days ago
infolexus.com
1754 Views
Infolexus is currently recruiting on behalf of a prominent client in the Information Technology sector. This role is designed for a DevOps Engineer who is eager to contribute to a dynamic team environment and work with cutting-edge cloud technologies. The successful candidate will be responsible for building, maintaining, and optimizing scalable systems, ensuring high availability and performance across various platforms. This position offers an excellent opportunity for both freshers and experienced professionals to grow their careers within a forward-thinking organization.
As a DevOps Engineer based in Coimbatore, you will work closely with development and operations teams to streamline deployment processes and enhance infrastructure. You will utilize your knowledge of Linux systems, cloud environments like AWS or Azure, and automation tools to drive efficiency. The role involves managing CI/CD pipelines, writing scripts to automate repetitive tasks, and troubleshooting complex technical issues. Join a team that values innovation and technical excellence while pushing the boundaries of modern IT infrastructure.
Key Requirements
Basic knowledge of Linux/Unix systems
Familiarity with cloud platforms (AWS, Azure, or GCP)
Understanding of CI/CD tools (Jenkins, GitHub Actions)
Knowledge of scripting (Python, Bash, or similar)
0–3 years of experience in a relevant technical role
Ability to build and maintain scalable systems
Eagerness to work with modern cloud technologies
Strong analytical and problem-solving skills
Excellent communication and collaboration abilities
Proactive approach to learning new DevOps tools and methodologies
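The "writing scripts to automate repetitive tasks" duty above can be sketched with a minimal, hedged example: a generic retry helper for flaky operations such as deployment steps or remote API calls. The function and parameter names are illustrative, not from any specific toolchain.

```python
import time
from functools import wraps

def retry(attempts: int = 3, delay: float = 1.0):
    """Re-run a flaky operation up to `attempts` times,
    sleeping `delay` seconds between tries."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(1, attempts + 1):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == attempts:
                        raise          # out of retries: surface the error
                    time.sleep(delay)
        return wrapper
    return decorator

calls = {"n": 0}

@retry(attempts=3, delay=0.0)
def flaky_deploy():
    """Stand-in for a deployment step that fails twice, then succeeds."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "deployed"

print(flaky_deploy())  # → deployed
```

In practice the bare `except Exception` would be narrowed to the transient error types of the tool being wrapped.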
Salary: Negotiable or Not Mentioned
India, Bengaluru
9 days ago
scienstechnologies.com
650 Views
Sciens Technologies is seeking a dedicated Database Support Engineer with specialized expertise in AWS database services to join our team in Bengaluru. This role is pivotal in managing and supporting a variety of AWS databases, including RDS, Aurora, DynamoDB, DocumentDB, and ElastiCache. As part of a global "follow-the-sun" model, you will be responsible for ensuring the high availability, reliability, and performance of mission-critical systems. This involves active participation in incident management, root cause analysis (RCA), and providing 24/7 production support to maintain seamless operations for our global clients.
Beyond routine maintenance, the ideal candidate will drive performance optimization through query tuning and strategic indexing. You will also leverage automation using Python or Bash to streamline database operations and enhance system efficiency. Collaboration is a key component of this role, as you will work closely with global teams to maintain comprehensive documentation and runbooks. Monitoring system health using tools like CloudWatch, Prometheus, and Grafana will be part of your daily activities to ensure proactive issue resolution and disaster recovery preparedness. Candidates with a background in Kubernetes and Terraform are highly encouraged to apply.
Key Requirements
3–5 years of professional Database Administration experience.
Strong hands-on experience specifically with AWS RDS and Aurora services.
Profound knowledge of backup/recovery, High Availability (HA), and Disaster Recovery (DR) strategies.
Proficiency in scripting languages such as Python or Bash for operational automation.
Hands-on experience with monitoring and alerting tools like CloudWatch, Prometheus, or Grafana.
Proven ability to handle incident management and root cause analysis (RCA) in a production environment.
Expertise in SQL query tuning, indexing strategies, and database performance optimization.
Familiarity with AWS Database Migration Service (DMS) and general migration strategies.
Knowledge of Kubernetes (EKS) and managing containerized database environments.
Understanding of Infrastructure as Code (IaC) principles using Terraform or CloudFormation.
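The query-tuning and strategic-indexing requirement can be illustrated with a self-contained sketch; SQLite stands in for Aurora here, and the table and index names are invented. It shows the effect a covering index has on the query plan: a full scan becomes an index search.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)],
)

query = "SELECT * FROM orders WHERE customer_id = 42"

# Before indexing: the planner must scan every row.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[3]
print(plan_before)   # SCAN orders (exact wording varies by SQLite version)

conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

# After indexing: the planner searches the index instead.
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchone()[3]
print(plan_after)    # SEARCH orders USING INDEX idx_orders_customer
```

On Aurora/PostgreSQL the equivalent diagnostic is `EXPLAIN ANALYZE`; the tuning reasoning (scan vs. index search) carries over.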
Salary: Negotiable or Not Mentioned
India, Bangalore
18 days ago
bridgetownresearch.org
1247 Views
Bridgetown Research is currently looking for talented SDE2 and SDE3 engineers to join our small and focused team in Bangalore. We are dedicated to building AI-native products that address real-world challenges through innovative technology. Our team operates with a low-ego, high-ownership culture where every member is expected to think like an owner and take full responsibility for the outcomes of their work, not just the tasks assigned. This hybrid role is based out of our modern office in Koramangala, offering a collaborative environment for engineers who thrive on solving complex problems and working on cutting-edge software solutions.
In this role, you will be responsible for developing robust backend systems using Node.js, TypeScript, and Python. You will play a critical role in building and scaling distributed systems on AWS, managing various database technologies such as Postgres, DynamoDB, and Elasticsearch, and ensuring the overall security and reliability of our platforms. Candidates should be comfortable working with infrastructure tools like Docker, Kubernetes, and Terraform to maintain a scalable and efficient environment. Compensation for this position is competitive and is offered at or above current market standards for experienced engineers in the region.
Key Requirements
5–8+ years of strong backend engineering experience in the industry.
Proficiency in Node.js and TypeScript for building scalable applications.
Excellent programming skills in Python.
Familiarity with AI systems and AI-native product development.
Extensive experience building and scaling systems on AWS cloud infrastructure.
Comfortable working with relational and NoSQL databases like Postgres and DynamoDB.
Experience with search and indexing tools such as Elasticsearch.
Hands-on experience with containerization using Docker and orchestration with Kubernetes.
Knowledge of Infrastructure as Code using Terraform.
Ability to handle data at scale and implement performance improvements.
Solid understanding of software security best practices and building reliable systems.
A low-ego mindset with a high degree of ownership and responsibility.
Salary: Negotiable or Not Mentioned
India, Chennai
11 days ago
nazztec.com
813 Views
As a Cloud Platform Engineer at Nazztec, you will be responsible for designing, building, and scaling cloud solutions that seamlessly integrate with complex hybrid infrastructures. You will play a pivotal role in translating business requirements into robust technical specifications and ensuring that all deployments meet the highest performance and security standards. This role requires a professional who enjoys solving complex technical problems and thrives in a collaborative environment where they can shape the overall cloud architecture.
You will take full ownership of technical decisions and act as a Subject Matter Expert (SME) within the organization. Daily responsibilities include collaborating across cross-functional teams, defining best practices for deployment, and continuously optimizing existing cloud environments to improve performance and reliability. This position is based in Chennai under a Work From Office (WFO) model, requiring a dedicated individual with 4 to 6 years of professional experience and a strong background in AWS Architecture to drive innovation and technical excellence.
Key Requirements
Strong expertise in AWS Architecture and associated cloud services.
Solid understanding of IaaS, PaaS, and SaaS deployment models.
Hands-on experience with cloud security principles and best practices.
Proficiency with containerization technologies including Docker and Kubernetes.
Knowledge of Infrastructure as Code using tools like Terraform or CloudFormation.
Minimum of 3+ years of specialized experience in AWS Architecture.
Completion of full-time education totaling at least 15 years.
Ability to design, develop, test, and deploy scalable cloud-based solutions.
Experience integrating cloud and on-premise systems for hybrid operations.
Capability to act as a Subject Matter Expert (SME) and lead technical discussions.
Excellent communication skills for collaborating with cross-functional teams.
Proven ability to translate business requirements into technical specifications.
Salary: Negotiable or Not Mentioned
India, Bangalore
5 days ago
careernet.in
439 Views
My client within the Pharmaceutical sector is looking to expand its technology hub located in Bangalore. We are seeking high-impact Senior AWS Data Engineers who are ready to build scalable data platforms and implement cutting-edge solutions. This role is crucial for managing the infrastructure that supports data-driven decision-making in the pharmaceutical industry and ensuring that large-scale data assets are accessible and reliable. The successful candidate will work extensively with AWS Glue, Lambda, and Databricks. You will be responsible for data modelling and processing using Python, PySpark, and SQL. This is a 100% work-from-office position in Bangalore, requiring candidates who are either currently serving their notice period or can join immediately within 30 days. Your expertise will directly contribute to the innovation of data architectures in a fast-paced environment.
Key Requirements
6–12 years of professional experience in data engineering
Expertise in AWS Glue and AWS Lambda for serverless computing
Proficiency in Databricks for unified analytics and data processing
Strong programming skills in Python for data manipulation
Advanced knowledge of PySpark for big data processing tasks
Hands-on experience with SQL for complex database queries
Proven track record in Data Modelling and architectural design
Experience in the pharmaceutical or life sciences sector
Ability to build and maintain scalable data platforms
Strong analytical and problem-solving skills in a cloud environment
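As a hedged illustration of the Glue/PySpark-style extract-transform-load flow this role centres on, here is a sketch in plain Python generators (standing in for the actual AWS services; the field names are invented):

```python
def extract(source):
    """Extract: yield raw records from a source
    (a list stands in for S3 / a Glue catalog table here)."""
    yield from source

def transform(records):
    """Transform: derive a total and drop malformed rows."""
    for rec in records:
        if rec.get("qty") is None or rec.get("unit_price") is None:
            continue  # a real job would quarantine bad records instead
        yield {**rec, "total": rec["qty"] * rec["unit_price"]}

def load(records):
    """Load: materialise into the target store (a list here)."""
    return list(records)

raw = [
    {"sku": "A1", "qty": 2, "unit_price": 10.0},
    {"sku": "B2", "qty": None, "unit_price": 5.0},   # malformed row
    {"sku": "C3", "qty": 1, "unit_price": 7.5},
]
result = load(transform(extract(raw)))
print([r["total"] for r in result])  # → [20.0, 7.5]
```

In PySpark the same shape would be a `withColumn` plus a `filter` on a DataFrame; the generator pipeline just makes the stages explicit.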
Salary: Negotiable or Not Mentioned
India, Bangalore
5 days ago
careernet.in
439 Views
We are looking for an experienced AI/ML Developer to join our growing technology hub in Bangalore. This role is tailored for professionals with a strong background in artificial intelligence and machine learning, particularly within the pharmaceutical sector. You will be at the forefront of building cutting-edge AI solutions and integrating advanced generative models into our data ecosystems. The role requires a high degree of expertise in Python, TensorFlow, and Generative AI frameworks. Mandatory skills include FastAPI, with a strong preference for candidates who have experience in Agentic AI and modern Large Language Models such as OpenAI or Claude. This position is a 100% office-based role in Bangalore, and we are specifically looking for candidates who are on their notice period and can join within 30 days. You will be instrumental in deploying scalable AI models that drive technological advancement for our global clients.
Key Requirements
5–10 years of overall experience in software development
Minimum 4 years of dedicated experience in AI/ML projects
Strong proficiency in Python for machine learning workflows
Hands-on experience with TensorFlow or similar deep learning frameworks
Proven expertise in Generative AI technologies and applications
Proficiency in building and deploying APIs using FastAPI
Working knowledge of AWS cloud services for AI infrastructure
Experience with Agentic AI or AI agents development
Familiarity with OpenAI or Claude LLM integrations
Ability to optimize machine learning models for production performance
Salary: Negotiable or Not Mentioned
India, Bangalore
10 days ago
fxconsulting.in
808 Views
We are seeking a highly skilled Technical Lead for Data Engineering to join our dynamic team in Bangalore. This role is centered on building and scaling high-performance data systems that support our product-driven initiatives. As a lead, you will be at the forefront of designing scalable ETL pipelines and leveraging technologies such as Spark, Hadoop, and Kafka for large-scale data processing. Your expertise will ensure that our data infrastructure is robust, efficient, and capable of handling complex data workloads.
In addition to your technical responsibilities, you will provide leadership to the engineering team and work collaboratively with Data Scientists to optimize data models and ensure top-tier data quality and security. You will be expected to monitor and troubleshoot data pipelines while maintaining high standards for data governance. The ideal candidate brings 6 to 9 years of experience, a strong background in Python or Scala, and a deep understanding of cloud platforms like AWS, Azure, or GCP. This is a fantastic opportunity for a professional looking to lead engineering excellence in a fast-paced environment.
Key Requirements
6 to 9 years of professional experience in Data Engineering.
Proven expertise in Spark and other Big Data technologies.
Proficiency in coding with Python, Scala, or Java.
Extensive experience in developing and optimizing ETL pipelines.
Hands-on experience with cloud platforms such as AWS, Azure, or GCP.
Strong knowledge of Hadoop and Kafka for large-scale data processing.
Demonstrated experience in team handling and leadership roles.
Ability to design and optimize complex data models.
Understanding of data quality, governance, and security principles.
Exceptional problem-solving skills and ability to work in fast-paced environments.
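The Spark-style processing model this listing references can be sketched, with heavy hedging, in plain Python: a toy map/reduce word count over "partitions", standing in for the distributed version Spark or Hadoop would run.

```python
from collections import Counter
from functools import reduce

def map_phase(partition: str) -> Counter:
    """Map: count tokens within one partition independently."""
    return Counter(partition.split())

def reduce_phase(left: Counter, right: Counter) -> Counter:
    """Reduce: merge partial counts (associative and commutative,
    so partitions can be combined in any order, on any worker)."""
    return left + right

# Three "partitions" of a dataset; real ones would live on HDFS or S3.
partitions = ["kafka spark hadoop", "spark spark etl", "kafka etl"]
counts = reduce(reduce_phase, map(map_phase, partitions))
print(counts["spark"])  # → 3
```

The associativity of the reduce step is exactly what lets Spark parallelise the merge; the sketch shows the contract, not the engine.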
Salary: ~200,000 (mentioned)
India, Bangalore
6 days ago
smartreferhub.in
506 Views
SmartReferHub is looking for a Lead Data Engineer with 6 to 8 years of experience to join their team in Bangalore. This high-impact hybrid role involves working on advanced Databricks and AWS Lakehouse architecture to lead large-scale data transformations. You will be responsible for driving enterprise-level analytics for global operations and accelerating the company's data strategy. The successful candidate will work on cutting-edge data technologies and lead impactful projects that shape the future of data engineering within the organization. Joining is expected within 30 days. The offered salary for this position ranges from ₹24 to ₹28 LPA, providing an excellent opportunity for career growth in the data analytics sector.
Key Requirements
Minimum 6 to 8 years of professional experience in data engineering roles.
Strong hands-on experience with Databricks and AWS Lakehouse architecture.
Proven track record of leading large-scale data transformations in an enterprise environment.
Ability to drive analytics solutions for global operations and cross-functional teams.
Deep expertise in Big Data technologies and cloud-based data ecosystems.
Strong proficiency in programming languages such as Python or Scala for data processing.
Expertise in writing complex SQL queries and optimizing data performance.
Solid understanding of ETL and ELT pipeline design and maintenance.
Experience with data modeling, data warehousing, and lakehouse concepts.
Strong leadership skills with the ability to manage technical projects and mentor team members.
Salary: Negotiable or Not Mentioned
India, Bengaluru
8 days ago
huemot.com
591 Views
We are seeking a highly experienced Data Engineering Lead to spearhead a critical engagement within our Capital Markets practice. Based in Bengaluru, this role involves supporting a prominent Private Equity firm headquartered in New York. The successful candidate will oversee the development and maintenance of high-impact data pipelines and lakehouse architectures using cutting-edge technologies. You will work closely with stakeholders to translate business requirements into technical specifications, ensuring high data quality and system reliability across the enterprise.
You will be responsible for leading an offshore team of 5 to 7 engineers, ensuring the delivery of production-grade data solutions through mentorship and technical oversight. This position requires deep expertise in Azure Databricks and PySpark, along with a solid understanding of data governance through Unity Catalog. Candidates must possess a strong background in U.S. Capital Markets or Private Equity to effectively meet the complex data needs of our clients. Successful applicants will demonstrate a history of architectural excellence and the ability to navigate complex financial data landscapes.
Key Requirements
15+ years of enterprise data engineering experience
Databricks Certified Data Engineer (mandatory certification)
5+ years of hands-on experience specifically on Azure Databricks
5+ years of hands-on PySpark experience with production-grade pipelines
Strong knowledge of Unity Catalog and data governance frameworks
Proven experience leading offshore teams of 5–7 engineers
Domain experience in U.S. Capital Markets, Private Equity, or Investment Management
Expertise in lakehouse architecture and modern data stack design
Advanced proficiency in SQL for complex data transformations
Strong understanding of CI/CD practices for automated data pipelines