Salary: Negotiable or Not Mentioned
India, Bangalore
5 days ago
careernet.in
358 Views
We are looking for an experienced AI/ML Developer to join our growing technology hub in Bangalore. This role is tailored for professionals with a strong background in artificial intelligence and machine learning, particularly within the pharmaceutical sector. You will be at the forefront of building cutting-edge AI solutions and integrating advanced generative models into our data ecosystems. The role requires a high degree of expertise in Python, TensorFlow, and Generative AI frameworks. Mandatory skills include FastAPI, with a strong preference for candidates who have experience in Agentic AI and modern Large Language Models such as OpenAI's GPT models or Anthropic's Claude. This is a 100% office-based role in Bangalore, and we are specifically looking for candidates who are serving their notice period and can join within 30 days. You will be instrumental in deploying scalable AI models that drive technological advancement for our global clients.
Key Requirements
5–10 years of overall experience in software development
Minimum 4 years of dedicated experience in AI/ML projects
Strong proficiency in Python for machine learning workflows
Hands-on experience with TensorFlow or similar deep learning frameworks
Proven expertise in Generative AI technologies and applications
Proficiency in building and deploying APIs using FastAPI
Working knowledge of AWS cloud services for AI infrastructure
Experience with Agentic AI or AI agents development
Familiarity with integrating LLMs such as OpenAI's GPT models or Anthropic's Claude
Ability to optimize machine learning models for production performance
Salary: ~₹5,000 Mentioned
India, Vellore
18 days ago
nafter.in
1345 Views
Nafter Web Technologies is offering an exciting opportunity for an AI Intern to join their team. This role is designed for individuals who are passionate about Artificial Intelligence, ChatGPT, and the development of real-world technology solutions. Working within a fast-paced startup environment, the intern will contribute to live AI projects, gaining invaluable hands-on experience and professional growth. The position is structured as a hybrid role based in Vellore, offering both in-office and remote flexibility. The stipend is ₹5,000–₹15,000 per month.
Key responsibilities include working on AI Chatbots, automation tasks, and prompt engineering using ChatGPT and other Large Language Models (LLMs). Interns will also contribute to SaaS product features and participate in real client projects. This 3–6 month internship provides exposure to the full lifecycle of AI product development, with a strong potential for transition into a full-time role upon successful completion. Candidates are expected to have a problem-solving mindset and a foundational understanding of modern programming languages.
Key Requirements
Basic proficiency in Python programming language.
Foundational knowledge of JavaScript.
Deep interest in Artificial Intelligence and Machine Learning concepts.
A strong problem-solving mindset and analytical capabilities.
Familiarity with ChatGPT and Large Language Models (LLMs).
Understanding of Prompt Engineering techniques.
Knowledge of or interest in LangChain and other AI development tools.
Ability to integrate and work with various API services.
Willingness to work in a high-energy startup environment.
Good communication skills for team collaboration.
Capacity to commit to a 3–6 month internship duration.
Salary: Negotiable or Not Mentioned
India, Bangalore
5 days ago
careernet.in
358 Views
My client within the Pharmaceutical sector is looking to expand its technology hub in Bangalore. We are seeking high-impact Senior AWS Data Engineers who are ready to build scalable data platforms and implement cutting-edge solutions. This role is crucial for managing the infrastructure that supports data-driven decision-making in the pharmaceutical industry and ensuring that large-scale data assets are accessible and reliable. The successful candidate will work extensively with AWS Glue, Lambda, and Databricks. You will be responsible for data modelling and processing using Python, PySpark, and SQL. This is a 100% work-from-office position in Bangalore, requiring candidates who are currently serving their notice period or can otherwise join within 30 days. Your expertise will directly contribute to the innovation of data architectures in a fast-paced environment.
Key Requirements
6–12 years of professional experience in data engineering
Expertise in AWS Glue and AWS Lambda for serverless computing
Proficiency in Databricks for unified analytics and data processing
Strong programming skills in Python for data manipulation
Advanced knowledge of PySpark for big data processing tasks
Hands-on experience with SQL for complex database queries
Proven track record in Data Modelling and architectural design
Experience in the pharmaceutical or life sciences sector
Ability to build and maintain scalable data platforms
Strong analytical and problem-solving skills in a cloud environment
Salary: Negotiable or Not Mentioned
India, Bangalore
18 days ago
bridgetownresearch.org
1285 Views
Bridgetown Research is currently looking for talented SDE2 and SDE3 engineers to join our small and focused team in Bangalore. We are dedicated to building AI-native products that address real-world challenges through innovative technology. Our team operates with a low-ego, high-ownership culture where every member is expected to think like an owner and take full responsibility for the outcomes of their work, not just the tasks assigned. This hybrid role is based out of our modern office in Koramangala, offering a collaborative environment for engineers who thrive on solving complex problems and working on cutting-edge software solutions.
In this role, you will be responsible for developing robust backend systems using Node.js, TypeScript, and Python. You will play a critical role in building and scaling distributed systems on AWS, managing various database technologies such as Postgres, DynamoDB, and Elasticsearch, and ensuring the overall security and reliability of our platforms. Candidates should be comfortable working with infrastructure tools like Docker, Kubernetes, and Terraform to maintain a scalable and efficient environment. Compensation for this position is competitive and is offered at or above current market standards for experienced engineers in the region.
Key Requirements
Strong backend experience with 5-8+ years in the industry.
Proficiency in Node.js and TypeScript for building scalable applications.
Excellent programming skills in Python.
Familiarity with AI systems and AI-native product development.
Extensive experience building and scaling systems on AWS cloud infrastructure.
Comfortable working with relational and NoSQL databases like Postgres and DynamoDB.
Experience with search and indexing tools such as Elasticsearch.
Hands-on experience with containerization using Docker and orchestration with Kubernetes.
Knowledge of Infrastructure as Code using Terraform.
Ability to handle data at scale and implement performance improvements.
Solid understanding of software security best practices and building reliable systems.
A low-ego mindset with a high degree of ownership and responsibility.
Salary: Negotiable or Not Mentioned
India, Bangalore
10 days ago
fxconsulting.in
774 Views
We are seeking a highly skilled Technical Lead for Data Engineering to join our dynamic team in Bangalore. This role is centered on building and scaling high-performance data systems that support our product-driven initiatives. As a lead, you will be at the forefront of designing scalable ETL pipelines and leveraging technologies such as Spark, Hadoop, and Kafka for large-scale data processing. Your expertise will ensure that our data infrastructure is robust, efficient, and capable of handling complex data workloads.
In addition to your technical responsibilities, you will provide leadership to the engineering team and work collaboratively with Data Scientists to optimize data models and ensure top-tier data quality and security. You will be expected to monitor and troubleshoot data pipelines while maintaining high standards for data governance. The ideal candidate brings 6 to 9 years of experience, a strong background in Python or Scala, and a deep understanding of cloud platforms like AWS, Azure, or GCP. This is a fantastic opportunity for a professional looking to lead engineering excellence in a fast-paced environment.
Key Requirements
6 to 9 years of professional experience in Data Engineering.
Proven expertise in Spark and other Big Data technologies.
Proficiency in coding with Python, Scala, or Java.
Extensive experience in developing and optimizing ETL pipelines.
Hands-on experience with cloud platforms such as AWS, Azure, or GCP.
Strong knowledge of Hadoop and Kafka for large-scale data processing.
Demonstrated experience in team handling and leadership roles.
Ability to design and optimize complex data models.
Understanding of data quality, governance, and security principles.
Exceptional problem-solving skills and ability to work in fast-paced environments.
Salary: Negotiable or Not Mentioned
India, Coimbatore
15 days ago
infolexus.com
1779 Views
Infolexus is currently recruiting on behalf of a prominent client in the Information Technology sector. This role is designed for a DevOps Engineer who is eager to contribute to a dynamic team environment and work with cutting-edge cloud technologies. The successful candidate will be responsible for building, maintaining, and optimizing scalable systems, ensuring high availability and performance across various platforms. This position offers an excellent opportunity for both freshers and experienced professionals to grow their careers within a forward-thinking organization.
As a DevOps Engineer based in Coimbatore, you will work closely with development and operations teams to streamline deployment processes and enhance infrastructure. You will utilize your knowledge of Linux systems, cloud environments like AWS or Azure, and automation tools to drive efficiency. The role involves managing CI/CD pipelines, writing scripts to automate repetitive tasks, and troubleshooting complex technical issues. Join a team that values innovation and technical excellence while pushing the boundaries of modern IT infrastructure.
Key Requirements
Basic knowledge of Linux/Unix systems
Familiarity with cloud platforms (AWS, Azure, or GCP)
Understanding of CI/CD tools (Jenkins, GitHub Actions)
Knowledge of scripting (Python, Bash, or similar)
0–3 years of experience in a relevant technical role
Ability to build and maintain scalable systems
Eagerness to work with modern cloud technologies
Strong analytical and problem-solving skills
Excellent communication and collaboration abilities
Proactive approach to learning new DevOps tools and methodologies
Salary: Negotiable or Not Mentioned
India, Bangalore
10 days ago
refex.co.in
873 Views
RGML is seeking a proactive and experienced DevOps Engineer to manage and scale our new AWS-based cloud architecture. This role is central to building a secure, fault-tolerant, and highly available environment that supports our Sun, Drive, and Comm platforms. You will be responsible for designing and implementing infrastructure using various AWS services across multiple Availability Zones, ensuring the platform remains robust and scalable. In this position, you'll play a critical role in automation, deployment pipelines, monitoring, and cloud cost optimization.
Key tasks include maintaining EC2 Auto Scaling Groups, managing API Gateways, and optimizing Aurora SQL Clusters with multi-AZ failover strategies. You will also enforce infrastructure-as-code practices and collaborate with engineering teams to enable DevSecOps best practices, driving the transformation of legacy systems into modern, scalable infrastructure. You will be working on mission-critical mobility platforms with a growing user base, offering a collaborative and fast-paced environment where you can drive automation and shape future DevSecOps practices.
Key Requirements
Strong hands-on experience with AWS core services: EC2 (Linux and Windows), ALB, VPC, S3, Aurora, CloudWatch, API Gateway, IAM, and VPN.
Deep understanding of multi-AZ, high availability, and auto-healing architectures.
Experience with CI/CD tools such as GitHub Actions, Jenkins, or CodePipeline and scripting in Bash, Python, or Shell.
Working knowledge of networking and cloud security best practices including Security Groups, NACLs, and IAM roles.
Experience with Bastion architecture, Client VPNs, Route 53, and VPC peering.
Familiarity with backup and restore strategies and monitoring/logging pipelines.
Proven ability to implement and maintain infrastructure-as-code practices using Terraform or CloudFormation.
Ability to design and manage infrastructure across multiple Availability Zones to ensure fault tolerance.
Experience maintaining and scaling EC2 Auto Scaling Groups and Application Load Balancers.
Proficiency in setting up and optimizing Aurora SQL Clusters with multi-AZ active-active failover strategies.
Salary: Negotiable or Not Mentioned
India, Bengaluru
9 days ago
scienstechnologies.com
650 Views
Sciens Technologies is seeking a dedicated Database Support Engineer with specialized expertise in AWS database services to join our team in Bengaluru. This role is pivotal in managing and supporting a variety of AWS databases, including RDS, Aurora, DynamoDB, DocumentDB, and ElastiCache. As part of a global "follow-the-sun" model, you will be responsible for ensuring the high availability, reliability, and performance of mission-critical systems. This involves active participation in incident management, root cause analysis (RCA), and providing 24/7 production support to maintain seamless operations for our global clients.
Beyond routine maintenance, the ideal candidate will drive performance optimization through query tuning and strategic indexing. You will also leverage automation using Python or Bash to streamline database operations and enhance system efficiency. Collaboration is a key component of this role, as you will work closely with global teams to maintain comprehensive documentation and runbooks. Monitoring system health using tools like CloudWatch, Prometheus, and Grafana will be part of your daily activities to ensure proactive issue resolution and disaster recovery preparedness. Candidates with a background in Kubernetes and Terraform are highly encouraged to apply.
Key Requirements
3–5 years of professional Database Administration experience.
Strong hands-on experience specifically with AWS RDS and Aurora services.
Profound knowledge of backup/recovery, High Availability (HA), and Disaster Recovery (DR) strategies.
Proficiency in scripting languages such as Python or Bash for operational automation.
Hands-on experience with monitoring and alerting tools like CloudWatch, Prometheus, or Grafana.
Proven ability to handle incident management and root cause analysis (RCA) in a production environment.
Expertise in SQL query tuning, indexing strategies, and database performance optimization.
Familiarity with AWS Database Migration Service (DMS) and general migration strategies.
Knowledge of Kubernetes (EKS) and managing containerized database environments.
Understanding of Infrastructure as Code (IaC) principles using Terraform or CloudFormation.
Salary: ~₹200,000 Mentioned
India, Bangalore
6 days ago
smartreferhub.in
428 Views
SmartReferHub is looking for a Lead Data Engineer with 6 to 8 years of experience to join their team in Bangalore. This high-impact hybrid role involves working on advanced Databricks and AWS Lakehouse architecture to lead large-scale data transformations. You will be responsible for driving enterprise-level analytics for global operations and accelerating the company's data strategy. The successful candidate will work on cutting-edge data technologies and lead impactful projects that shape the future of data engineering within the organization. Joining is expected within 30 days. The offered salary for this position ranges from ₹24 to ₹28 LPA, providing an excellent opportunity for career growth in the data analytics sector.
Key Requirements
Minimum 6 to 8 years of professional experience in data engineering roles.
Strong hands-on experience with Databricks and AWS Lakehouse architecture.
Proven track record of leading large-scale data transformations in an enterprise environment.
Ability to drive analytics solutions for global operations and cross-functional teams.
Deep expertise in Big Data technologies and cloud-based data ecosystems.
Strong proficiency in programming languages such as Python or Scala for data processing.
Expertise in writing complex SQL queries and optimizing data performance.
Solid understanding of ETL and ELT pipeline design and maintenance.
Experience with data modeling, data warehousing, and lakehouse concepts.
Strong leadership skills with the ability to manage technical projects and mentor team members.
Salary: Negotiable or Not Mentioned
India, Andhra Pradesh
17 days ago
necg.ac.in
918 Views
Narayana Engineering College, located in Gudur, Andhra Pradesh, is seeking a dedicated and passionate academic professional to join its faculty as an Assistant Professor in the Department of Computer Science & Engineering, with a specific focus on Artificial Intelligence and Machine Learning (AIML). The institution is committed to providing high-quality technical education and is looking for candidates who can contribute significantly to the academic growth of the students and the department's research initiatives.
The successful candidate will be responsible for delivering lectures, mentoring students in specialized AI and ML projects, and actively participating in departmental research and innovation activities. This role offers an opportunity to work in a dynamic academic environment that values emerging technologies and encourages faculty members to stay at the forefront of their field. Candidates should be prepared to handle curriculum development and engage in collaborative research efforts to enhance the college's standing in engineering education.
Key Requirements
M.Tech / M.E in Computer Science Engineering or relevant specialization.
Strong domain knowledge in Artificial Intelligence.
Proficiency in Machine Learning algorithms and applications.
Expertise in Data Science methodologies.
Demonstrated passion for undergraduate and postgraduate teaching.
Strong interest in conducting and publishing academic research.
Excellent verbal and written communication skills.
Ability to mentor students in both academic and career development.
Interest in staying updated with emerging technologies and innovation.
Capacity to work collaboratively within a multidisciplinary academic department.