Salary: Negotiable or Not Mentioned
USA, Philadelphia, PA
16 days ago
apptadinc.com
Apptad Inc is seeking a highly skilled Sr. Full Stack Developer to join our team in Philadelphia, PA, in a hybrid capacity. This role is ideal for a veteran developer with over 10 years of experience looking to lead complex technical initiatives. You will be responsible for building advanced data pipelines and ETL processes using Airflow and Snowflake, while also supporting the development of sophisticated web applications using React, Material UI, and AngularJS. The position involves collaborating across multiple teams to ensure the delivery of high-quality software solutions and the resolution of intricate technical problems.
In this hybrid role, you will play a key part in implementing process improvements and driving automation across the development lifecycle. Candidates should possess a deep understanding of cloud environments, specifically AWS, and be proficient in containerization technologies such as Docker and Kubernetes. Given the nature of our projects, experience within the financial services or asset management industry is considered a significant advantage. You will also have the opportunity to utilize AI-enabled development tools like Copilot and Claude to enhance productivity and innovation within our tech stack.
Key Requirements
Minimum of 10 years of professional experience in full-stack software development.
Expert proficiency in Python and frameworks such as Django.
Extensive experience with front-end technologies including React, Angular, and Material UI.
Strong background in AWS services including EC2, S3, Lambda, SNS, and SQS.
Demonstrated expertise in containerization using Docker and orchestration with Kubernetes.
Proven experience with CI/CD tools like Jenkins, GitLab, or Bamboo.
Deep knowledge of database systems including Snowflake, Redshift, and SQL.
Hands-on experience building and managing data pipelines with Apache Airflow.
Strong understanding of REST API design and DevOps best practices.
Familiarity with AI-enabled development tools such as Copilot or Claude.
Experience in the Financial Services or Asset Management domain is highly preferred.
Excellent collaborative skills and the ability to troubleshoot complex technical issues.
Salary: Negotiable or Not Mentioned
USA, Jersey City
24 days ago
esharpedge.com
Sharpedge Inc is seeking a highly skilled Senior Rancher Platform Engineer to join our team in Jersey City, NJ. In this role, you will be responsible for managing and optimizing Rancher-managed Kubernetes clusters, including RKE and RKE2 environments. You will leverage the Rancher UI, APIs, and automation workflows to ensure robust and scalable infrastructure. The ideal candidate will have extensive experience in networking and observability stacks, utilizing tools like Prometheus, Grafana, and ELK to monitor system health and performance.
Additionally, you will play a key role in designing and implementing CI/CD and GitOps workflows using Helm, Jenkins, GitHub Actions, and Argo CD. As a senior member of the team, you will contribute to the continuous improvement of our deployment strategies and container orchestration. This position requires employment directly on the Sharpedge payroll. If you have a passion for Kubernetes and infrastructure automation, we encourage you to apply and help drive our platform's evolution.
Key Requirements
Experience with Rancher-managed Kubernetes clusters including RKE and RKE2.
Proficiency in Rancher UI, APIs, and automation workflows.
Solid understanding of networking concepts in a containerized environment.
Hands-on experience with observability stacks including Prometheus and Grafana.
Experience with centralized logging systems such as EFK or ELK stacks.
Proven track record with CI/CD and GitOps workflows using Helm and Jenkins.
Expertise in GitHub Actions and Argo CD for deployment automation.
Must be eligible to work on Sharpedge Inc payroll.
Strong knowledge of infrastructure as code (IaC) principles.
Excellent troubleshooting skills in cloud-native environments.
Salary: Negotiable or Not Mentioned
USA, Albany
26 days ago
systech.com
We are seeking an experienced OpenShift Administrator to manage, maintain, and support our OpenShift container platform. The ideal candidate will be responsible for cluster administration, deployment support, monitoring, and ensuring the stability and performance of containerized applications in Albany, NY. This role involves day-to-day administration, monitoring, and maintenance of OpenShift environments to ensure high availability and optimal performance. The candidate must be comfortable working on-site for the duration of this contract.
The OpenShift Administrator will work closely with development and DevOps teams to support seamless application deployments. Responsibilities include managing user access through Role-Based Access Control (RBAC), performing cluster upgrades, patches, and backups, as well as troubleshooting complex networking and application issues. This is a 6+ month contract position requiring a proactive approach to system reliability and documentation of configurations and processes.
Key Requirements
Strong experience with OpenShift and Kubernetes
Good knowledge of Linux system administration
Experience with container technologies like Docker or CRI-O
Understanding of networking concepts including DNS, load balancing, and firewalls
Experience with monitoring tools such as Prometheus and Grafana
Basic scripting knowledge in Bash or Python
Familiarity with CI/CD tools like Jenkins or GitLab
Ability to perform day-to-day administration, monitoring, and maintenance of OpenShift environments
Proficiency in troubleshooting cluster, application, and networking issues
Experience managing user access, roles, and permissions through RBAC
Salary: Negotiable or Not Mentioned
USA, New York City
24 days ago
amaglobaltech.com
Ama Global Tech is seeking a skilled Data Engineer for a hybrid role located in New York City, NY. This position requires a professional who can design, build, and maintain scalable data pipelines and architectures. You will work closely with cross-functional teams to ensure data accessibility and quality, focusing on high-performance computing and cloud-based environments. The role involves a mix of remote work and onsite presence, specifically requiring local candidates capable of attending face-to-face interviews.
The ideal candidate will demonstrate mastery over the AWS ecosystem and the Databricks platform. You will be responsible for implementing data processing solutions using Spark and Python, while managing containerized applications with Docker and Kubernetes. We are looking for a proactive problem-solver who can navigate the complexities of data warehousing and data lakes to provide actionable insights for the business. A certification in Databricks Engineering is a significant plus for this position.
Key Requirements
Strong experience with AWS services including S3, Lambda, and EMR.
Proficiency in Spark and Python for complex data engineering tasks.
Solid understanding of data warehousing and data lake (DW/DL) concepts.
Hands-on experience with Docker and Kubernetes for containerized environments.
Certified Databricks Engineer is highly preferred.
Excellent troubleshooting and debugging skills to resolve technical issues.
Ability to attend a mandatory Face-to-Face (F2F) interview in New York City.
Must be a local candidate currently residing in or near New York City.
Eligible for C2C with H1 or W2 with GC/USC status.
Strong communication skills for effective team collaboration.
Salary: Negotiable or Not Mentioned
USA, Pennsylvania
22 days ago
jpstechsolutions.com
This is a senior-level Backend Engineering position focusing on the development and optimization of Microservices using Golang and .NET frameworks. The role is critical for building robust payment systems and managing complex REST APIs within a cloud-native environment. You will work closely with cross-functional teams to integrate enterprise platforms such as SAP and Microsoft Dynamics, ensuring seamless data flow and system interoperability.
The position is based in Pennsylvania and follows a hybrid work model, requiring the candidate to be onsite for 3 to 4 days per month. With over 8 years of professional experience, the ideal candidate will lead infrastructure initiatives using Docker and Kubernetes while maintaining high standards for CI/CD pipelines. This role offers an excellent opportunity to work on cutting-edge financial technologies and scalable Azure-based architectures.
Key Requirements
8+ years of professional experience in backend software development.
Expertise in programming with Golang and the .NET framework.
Proven experience designing and implementing Microservices architectures.
Strong knowledge of building and consuming REST APIs.
Hands-on experience with Payment Systems and financial transaction logic.
Proficiency in managing cloud infrastructure within Microsoft Azure.
Solid experience with containerization tools specifically Docker.
Practical knowledge of orchestration using Kubernetes.
Expertise in setting up and maintaining CI/CD pipelines for automated delivery.
Demonstrated ability to integrate systems with SAP and Microsoft Dynamics.
Salary: Negotiable or Not Mentioned
USA, Pennsylvania
22 days ago
jpstechsolutions.com
This Backend Engineer position at JPSTech Solutions focuses on developing robust and scalable microservices using Golang and .NET. The successful candidate will be responsible for designing and implementing REST APIs, integrating payment systems, and working within an Azure cloud environment. This role requires 8+ years of experience and is based in Pennsylvania on a hybrid schedule, requiring onsite attendance 3 to 4 days per month to ensure effective team collaboration and project alignment.
In this role, you will leverage containerization tools like Docker and Kubernetes to manage deployments and maintain efficient CI/CD pipelines. Additionally, you will be involved in complex integrations with enterprise systems like SAP and Microsoft Dynamics. The position offers flexible employment types including W2 and C2C options. If you have a strong background in backend systems, high-scale application development, and a passion for modern software architecture, this is an excellent opportunity to join a dynamic team in a technical lead capacity.
Key Requirements
Minimum of 8 years of experience in software engineering.
Strong proficiency in Golang and .NET frameworks.
Proven experience with Microservices architecture and design patterns.
Proficiency in developing and consuming RESTful APIs.
Extensive experience with Payment Systems and financial platform integrations.
Hands-on experience with Azure cloud services and infrastructure.
Expertise in containerization using Docker and Kubernetes.
Professional experience with CI/CD pipelines and DevOps best practices.
Integration experience with enterprise systems such as SAP and Microsoft Dynamics.
Willingness to work in a hybrid environment with 3-4 onsite days per month in Pennsylvania.
Salary: Negotiable or Not Mentioned
USA, Pittsburgh
24 days ago
skilzmatrix.com
PNC is currently seeking a highly experienced Super Senior Data Engineer with over 10 years of professional experience to join their team in Pittsburgh, PA. The successful candidate will play a critical role in designing, building, and maintaining scalable data pipelines leveraging the full suite of AWS cloud services. This position involves developing and optimizing sophisticated ETL and ELT workflows to handle both structured and semi-structured data, ensuring that high-performance analytics are available for business decision-making. Set within an agile environment, the role demands an expert-level understanding of data processing jobs using Python and PySpark.
In addition to pipeline construction, the engineer will be responsible for integrating and managing data within the Snowflake cloud data warehouse. This includes writing complex SQL queries for data transformation and validation, as well as supporting Power BI dashboards by delivering curated, analytics-ready datasets. Candidates must demonstrate a strong commitment to data quality, governance, performance, and security best practices. This role is offered on a W2 basis and is ideal for individuals with prior experience in the financial services or banking domain who are looking to apply their technical leadership in a dynamic corporate environment.
Key Requirements
Minimum of 10 years of professional experience in Data Engineering or a related field.
Advanced proficiency in SQL, including complex querying and performance tuning.
Extensive experience designing and maintaining scalable data pipelines on AWS.
Expert knowledge of Python and PySpark for large-scale data processing.
Hands-on experience with Snowflake cloud data warehouse management and integration.
Proven ability to develop and optimize ETL/ELT workflows for various data formats.
Experience supporting Power BI through data modeling and performance optimization.
Familiarity with AWS services such as S3, Glue, EMR, Lambda, and Redshift.
Strong understanding of data quality frameworks, governance, and security best practices.
Ability to work effectively in an Agile/Scrum environment with cross-functional teams.
Salary: Negotiable or Not Mentioned
USA, McLean, VA
27 days ago
S3Connections.com
We are seeking a highly skilled Senior Data Engineer to design, develop, and optimize scalable data platforms that transform complex data into meaningful business insights. The ideal candidate will have strong expertise in SQL, Python, and ETL development, along with experience supporting cloud-based data migration and modern data ecosystems. You will be responsible for building and maintaining scalable ETL/data pipelines for structured and unstructured data while ensuring high-performance data solutions through advanced techniques. The role requires a presence onsite in McLean, VA, for five days a week to ensure close collaboration with team members and stakeholders.
The role involves collaborating with cross-functional teams to enhance data quality, accessibility, and system performance. You will implement best practices for data engineering, code quality, testing, and deployment. Additionally, the candidate will support cloud data migration initiatives, including data mapping, transformation, validation, and optimization. This position is critical for optimizing data workflows and ensuring high availability and reliability of data systems within an enterprise environment. Candidates should be prepared to create and maintain comprehensive technical documentation and data flow diagrams to support the platform's evolution.
Key Requirements
8+ years of experience as a Data Engineer
Strong expertise in SQL and Python
Hands-on experience building and maintaining ETL pipelines in enterprise environments
Experience working with large datasets and complex data architectures
Experience with cloud platforms such as AWS, Azure, or GCP
Strong understanding of data modeling, data warehousing, and data transformation techniques
Experience in data migration and integration projects
Excellent problem-solving, analytical, and communication skills
Familiarity with orchestration tools like Airflow
Experience with CI/CD tools such as GitHub or Jenkins
Salary: Negotiable or Not Mentioned
USA, Stamford
30 days ago
aetalentsgroup.com
We are seeking a highly skilled Snowflake Developer to join our dynamic team for a contract duration of 6 or more months. This role is designed for a technical expert with a customer-focused mindset who can deliver excellent service to clients, partners, and stakeholders. You will be responsible for managing and resolving support tickets within SLA guidelines while working with critical integrations like Active Directory, LDAP, Outlook, Word, Excel, and Salesforce. The position requires a candidate who can handle customer calls professionally and track issue resolutions effectively to ensure high levels of client satisfaction.
In addition to development tasks, you will support software licensing and installations, perform routine server installations, and conduct necessary maintenance. Maintaining accurate documentation of all customer interactions is a key part of the role, as is the ability to troubleshoot and escalate complex technical issues to the appropriate channels. The environment is fast-paced, demanding a high attention to detail and the ability to multitask across various web-based technologies and enterprise tools. This opportunity allows for work based in Stamford or remotely, providing flexibility for the right candidate with the necessary experience and skills.
Key Requirements
Minimum of 8 years of professional experience in technical development roles.
Proven expertise as a Snowflake Developer with deep platform knowledge.
Strong troubleshooting and analytical skills to resolve complex technical issues.
Excellent verbal and written communication skills for stakeholder interaction.
Ability to multitask and maintain organization in a fast-paced environment.
High attention to detail regarding technical documentation and ticket tracking.
Customer-centric mindset with a proactive problem-solving attitude.
Familiarity with web-based technologies and standard enterprise software tools.
Hands-on experience with Active Directory and LDAP integrations.
Proficiency in Microsoft Office Suite including Word, Excel, and Outlook.
Ability to manage software licensing and perform server maintenance tasks.
Experience working with Salesforce and other CRM integrations.
Salary: Negotiable or Not Mentioned
USA, Harrisburg
28 days ago
apexon.com
Apexon is currently seeking a talented Midlevel DevOps Engineer to join our dedicated team located in Harrisburg, Pennsylvania. This onsite role is ideal for a professional looking to contribute to a collaborative environment where technology and innovation drive business success. As a key member of the operations team, you will be responsible for maintaining the integrity of our development and production environments, ensuring that our systems are both scalable and resilient. Your daily activities will focus on bridging the gap between development and operations to foster a culture of efficiency and continuous delivery.
The successful candidate will manage complex CI/CD pipelines and oversee the automation of deployment processes using industry-standard tools. You will work on optimizing build processes, managing source code repositories, and troubleshooting infrastructure issues to minimize downtime. This position requires a proactive individual who can communicate effectively with cross-functional teams and is comfortable working in a fast-paced onsite setting. By joining Apexon, you will have the opportunity to work with cutting-edge technologies and play a pivotal role in our ongoing digital transformation efforts.
Key Requirements
Proven experience as a DevOps Engineer or in a similar software engineering role.
Deep understanding of CI/CD methodologies and tools like Jenkins.
Proficiency in version control systems, particularly GitHub.
Experience with build automation tools such as Maven.
Familiarity with web server configuration and management using Apache.
Strong knowledge of scripting languages like Python or Bash for automation.
Experience with cloud infrastructure platforms such as AWS or Azure.
Ability to manage and monitor production environments effectively.
Familiarity with containerization technologies like Docker and Kubernetes.
Strong analytical and problem-solving skills to address complex system issues.