0 Negotiable or Not Mentioned
USA, New York City
24 days ago
amaglobaltech.com
1893 Views
Ama Global Tech is seeking a skilled Data Engineer for a hybrid role located in New York City, NY. This position requires a professional who can design, build, and maintain scalable data pipelines and architectures. You will work closely with cross-functional teams to ensure data accessibility and quality, focusing on high-performance computing and cloud-based environments. The role involves a mix of remote work and onsite presence, specifically requiring local candidates capable of attending face-to-face interviews.
The ideal candidate will demonstrate mastery over the AWS ecosystem and the Databricks platform. You will be responsible for implementing data processing solutions using Spark and Python, while managing containerized applications with Docker and Kubernetes. We are looking for a proactive problem-solver who can navigate the complexities of data warehousing and data lakes to provide actionable insights for the business. A certification in Databricks Engineering is a significant plus for this position.
Key Requirements
Strong experience with AWS services including S3, Lambda, and EMR.
Proficiency in Spark and Python for complex data engineering tasks.
Solid understanding of data warehouse and data lake (DW/DL) concepts.
Hands-on experience with Docker and Kubernetes for containerized environments.
A Databricks Engineering certification is highly preferred.
Excellent troubleshooting and debugging skills to resolve technical issues.
Ability to attend a mandatory Face-to-Face (F2F) interview in New York City.
Must be a local candidate currently residing in or near New York City.
Eligible for Corp-to-Corp (C2C) with H-1B status, or W2 with GC/USC status.
Strong communication skills for effective team collaboration.
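The pipeline work this role describes, cleaning raw records before loading them, can be sketched in plain Python; in practice it would run over Spark DataFrames on EMR. All names here are illustrative assumptions, not from the posting.

```python
def clean_events(rows):
    """Drop records missing required fields and normalize types."""
    cleaned = []
    for row in rows:
        if not row.get("user_id") or row.get("amount") is None:
            continue  # data-quality gate: skip incomplete records
        cleaned.append({
            "user_id": str(row["user_id"]).strip(),
            "amount": round(float(row["amount"]), 2),
        })
    return cleaned

raw = [
    {"user_id": " 42 ", "amount": "19.999"},
    {"user_id": None, "amount": "5.00"},   # rejected: no user_id
    {"user_id": "7", "amount": None},      # rejected: no amount
]
clean = clean_events(raw)
```

The same gate in Spark would typically be a `filter` plus `withColumn` casts; the logic is identical.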
~11,666 Mentioned
United States, New York
7 days ago
gmail.com
1086 Views
We are actively seeking a highly skilled Senior Data Engineer to build and scale modern data infrastructure for a fast-growing organization within the Financial Services and Data & Analytics industry. In this role, you will play a critical part in designing, developing, and optimizing data pipelines and architectures that support advanced analytics and critical business intelligence initiatives. You will be responsible for ensuring the scalability and performance of data systems while maintaining the highest standards of data quality and governance.
The ideal candidate will have extensive experience in building scalable ETL/ELT pipelines and maintaining robust data warehouses and data lakes. You will work with large-scale structured and unstructured datasets, collaborating closely with data scientists and analysts to provide the foundational data structures needed for complex modeling. The position offers a competitive package ranging from $140,000 to $200,000 annually, plus bonuses and full benefits, based in New York.
Key Requirements
5+ years of professional experience in data engineering roles.
Strong proficiency in programming languages, particularly Python.
Advanced knowledge of SQL for complex data manipulation and querying.
Hands-on experience with Apache Spark for large-scale data processing.
Extensive experience with cloud platforms such as AWS, Azure, or GCP.
Proven track record with data warehousing solutions and architecture.
Strong understanding of big data technologies and distributed systems.
Ability to design and build scalable ETL and ELT pipelines.
Proficiency in maintaining and optimizing data lakes for performance.
Excellent collaboration skills for working with data scientists and analysts.
Experience in ensuring data quality, integrity, and corporate governance.
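The ELT pattern the posting refers to loads raw data first, then transforms it with SQL inside the engine. A minimal stdlib sketch with sqlite3 standing in for a warehouse (table and column names are hypothetical):

```python
import sqlite3

# Load step: land raw rows in the database untransformed.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE raw_orders (id INTEGER, amount REAL, status TEXT)")
con.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?)",
    [(1, 10.0, "paid"), (2, 25.0, "paid"), (3, 40.0, "refunded")],
)

# Transform step: aggregate inside the engine (the "T" after the "L").
total_paid = con.execute(
    "SELECT SUM(amount) FROM raw_orders WHERE status = 'paid'"
).fetchone()[0]
```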
0 Negotiable or Not Mentioned
USA, Philadelphia, PA
16 days ago
apptadinc.com
1165 Views
Apptad Inc is seeking a highly skilled Sr. Full Stack Developer to join our team in Philadelphia, PA, in a hybrid capacity. This role is ideal for a veteran developer with over 10 years of experience looking to lead complex technical initiatives. You will be responsible for building advanced data pipelines and ETL processes using Airflow and Snowflake, while also supporting the development of sophisticated web applications using React, Material UI, and AngularJS. The position involves collaborating across multiple teams to ensure the delivery of high-quality software solutions and the resolution of intricate technical problems.
In this hybrid role, you will play a key part in implementing process improvements and driving automation across the development lifecycle. Candidates should possess a deep understanding of cloud environments, specifically AWS, and be proficient in containerization technologies such as Docker and Kubernetes. Given the nature of our projects, experience within the financial services or asset management industry is considered a significant advantage. You will also have the opportunity to utilize AI-enabled development tools like Copilot and Claude to enhance productivity and innovation within our tech stack.
Key Requirements
Minimum of 10 years of professional experience in full-stack software development.
Expert proficiency in Python and frameworks such as Django.
Extensive experience with front-end technologies including React, Angular, and Material UI.
Strong background in AWS services including EC2, S3, Lambda, SNS, and SQS.
Demonstrated expertise in containerization using Docker and orchestration with Kubernetes.
Proven experience with CI/CD tools like Jenkins, GitLab, or Bamboo.
Deep knowledge of database systems including Snowflake, Redshift, and SQL.
Hands-on experience building and managing data pipelines with Apache Airflow.
Strong understanding of REST API design and DevOps best practices.
Familiarity with AI-enabled development tools such as Copilot or Claude.
Experience in the Financial Services or Asset Management domain is highly preferred.
Excellent collaborative skills and the ability to troubleshoot complex technical issues.
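Airflow, mentioned above, models a pipeline as a DAG of tasks executed in dependency order. A stdlib sketch of that idea (this is not Airflow's API; the task names are invented):

```python
from graphlib import TopologicalSorter

ran = []
tasks = {
    "extract": lambda: ran.append("extract"),
    "transform": lambda: ran.append("transform"),
    "load": lambda: ran.append("load"),
}
# Each key depends on the tasks in its set, as in an Airflow DAG.
deps = {"transform": {"extract"}, "load": {"transform"}}

for name in TopologicalSorter(deps).static_order():
    tasks[name]()  # run each task only after its upstreams
```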
0 Negotiable or Not Mentioned
USA, McLean, VA
27 days ago
S3Connections.com
2163 Views
We are seeking a highly skilled Senior Data Engineer to design, develop, and optimize scalable data platforms that transform complex data into meaningful business insights. The ideal candidate will have strong expertise in SQL, Python, and ETL development, along with experience supporting cloud-based data migration and modern data ecosystems. You will be responsible for building and maintaining scalable ETL/data pipelines for structured and unstructured data while ensuring high-performance data solutions through advanced techniques. The role requires a presence onsite in McLean, VA, for five days a week to ensure close collaboration with team members and stakeholders.
The role involves collaborating with cross-functional teams to enhance data quality, accessibility, and system performance. You will implement best practices for data engineering, code quality, testing, and deployment. Additionally, the candidate will support cloud data migration initiatives, including data mapping, transformation, validation, and optimization. This position is critical for optimizing data workflows and ensuring high availability and reliability of data systems within an enterprise environment. Candidates should be prepared to create and maintain comprehensive technical documentation and data flow diagrams to support the platform's evolution.
Key Requirements
8+ years of experience as a Data Engineer
Strong expertise in SQL and Python
Hands-on experience building and maintaining ETL pipelines in enterprise environments
Experience working with large datasets and complex data architectures
Experience with cloud platforms such as AWS, Azure, or GCP
Strong understanding of data modeling, data warehousing, and data transformation techniques
Experience in data migration and integration projects
Excellent problem-solving, analytical, and communication skills
Familiarity with orchestration tools like Airflow
Experience with CI/CD tools such as GitHub Actions or Jenkins
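Data migration projects like those above commonly validate a cutover by comparing row counts and order-insensitive checksums between source and target. A hedged stdlib sketch (helper names are hypothetical):

```python
import hashlib

def table_checksum(rows):
    """Order-insensitive checksum over a table's rows."""
    digests = sorted(hashlib.sha256(repr(r).encode()).hexdigest() for r in rows)
    return hashlib.sha256("".join(digests).encode()).hexdigest()

source = [(1, "a"), (2, "b"), (3, "c")]
target = [(3, "c"), (1, "a"), (2, "b")]  # same data, different load order

counts_match = len(source) == len(target)
checksums_match = table_checksum(source) == table_checksum(target)
```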
0 Negotiable or Not Mentioned
USA, McLean, VA
14 days ago
momentousa.com
1355 Views
Momentousa is hiring a Senior Data Engineer for an onsite W2 position located in McLean, VA. This role is ideal for a seasoned professional with deep expertise in big data technologies and data warehousing. You will be responsible for designing, building, and maintaining scalable data pipelines that process vast amounts of information to support our analytics and business intelligence initiatives.
The successful candidate will work extensively with AWS services, Spark, and PySpark to transform raw data into actionable insights. You will leverage your advanced SQL skills and Python knowledge to model data and optimize database performance. This role demands a high level of technical proficiency in Hive and general data modeling principles to ensure our data architecture is robust, efficient, and capable of supporting complex business queries.
Key Requirements
Significant experience with AWS cloud data services
Expert-level knowledge of Spark and PySpark for data processing
Advanced proficiency in SQL, from basic queries through complex query optimization
Strong backend development skills using Python
Practical experience with Hive for data warehousing and querying
Proven ability in data modeling and architecture design
Experience building and maintaining robust ETL pipelines
Knowledge of performance tuning for big data applications
Ability to work onsite in McLean, VA on a regular basis
Strong analytical skills to interpret complex data sets
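The data modeling this role calls for often means star schemas: fact tables joined to dimension tables. A small sketch with stdlib sqlite3 standing in for Hive (the schema and data are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
CREATE TABLE fact_sales  (product_id INTEGER, qty INTEGER);
INSERT INTO dim_product VALUES (1, 'books'), (2, 'games');
INSERT INTO fact_sales  VALUES (1, 3), (2, 5), (1, 2);
""")

# Typical star-schema query: aggregate the fact table by a dimension attribute.
rows = con.execute("""
    SELECT d.category, SUM(f.qty)
    FROM fact_sales f JOIN dim_product d USING (product_id)
    GROUP BY d.category ORDER BY d.category
""").fetchall()
```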
0 Negotiable or Not Mentioned
USA, Jersey City
24 days ago
esharpedge.com
1408 Views
Sharpedge Inc is seeking a highly skilled Senior Rancher Platform Engineer to join our team in Jersey City, NJ. In this role, you will be responsible for managing and optimizing Rancher-managed Kubernetes clusters, including RKE and RKE2 environments. You will leverage the Rancher UI, APIs, and automation workflows to ensure robust and scalable infrastructure. The ideal candidate will have extensive experience in networking and observability stacks, utilizing tools like Prometheus, Grafana, and ELK to monitor system health and performance.
Additionally, you will play a key role in designing and implementing CI/CD and GitOps workflows using Helm, Jenkins, GitHub Actions, and Argo CD. As a senior member of the team, you will contribute to the continuous improvement of our deployment strategies and container orchestration. This position requires working directly on Sharpedge Inc's payroll. If you have a passion for Kubernetes and infrastructure automation, we encourage you to apply and help drive our platform's evolution.
Key Requirements
Experience with Rancher-managed Kubernetes clusters including RKE and RKE2.
Proficiency in Rancher UI, APIs, and automation workflows.
Solid understanding of networking concepts in a containerized environment.
Hands-on experience with observability stacks including Prometheus and Grafana.
Experience with centralized logging systems such as EFK or ELK stacks.
Proven track record with CI/CD and GitOps workflows using Helm and Jenkins.
Expertise in GitHub Actions and Argo CD for deployment automation.
Must be eligible to work on Sharpedge Inc payroll.
Strong knowledge of infrastructure as code (IaC) principles.
Excellent troubleshooting skills in cloud-native environments.
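Prometheus, part of the observability stack above, exposes metrics as plain text. A minimal parser for simple `name value` lines (real scrapes also carry labels and timestamps, which this sketch ignores):

```python
def parse_metrics(text):
    """Parse bare 'name value' lines from a Prometheus-style scrape."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and HELP/TYPE comment lines
        name, value = line.rsplit(None, 1)
        metrics[name] = float(value)
    return metrics

scrape = """
# HELP node_load1 1m load average.
node_load1 0.42
kube_pod_status_ready 12
"""
m = parse_metrics(scrape)
```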
0 Negotiable or Not Mentioned
USA, Albany
26 days ago
systech.com
1559 Views
We are seeking an experienced OpenShift Administrator to manage, maintain, and support our OpenShift container platform in Albany, NY. The ideal candidate will handle day-to-day cluster administration, deployment support, monitoring, and maintenance of OpenShift environments to ensure high availability, stability, and optimal performance of containerized applications. The candidate must be comfortable working on-site for the duration of this contract.
The OpenShift Administrator will work closely with development and DevOps teams to support seamless application deployments. Responsibilities include managing user access through Role-Based Access Control (RBAC), performing cluster upgrades, patches, and backups, as well as troubleshooting complex networking and application issues. This is a 6+ month contract position requiring a proactive approach to system reliability and documentation of configurations and processes.
Key Requirements
Strong experience with OpenShift and Kubernetes
Good knowledge of Linux system administration
Experience with container technologies like Docker or CRI-O
Understanding of networking concepts including DNS, load balancing, and firewalls
Experience with monitoring tools such as Prometheus and Grafana
Basic scripting knowledge in Bash or Python
Familiarity with CI/CD tools like Jenkins or GitLab
Ability to perform day-to-day administration, monitoring, and maintenance of OpenShift environments
Proficiency in troubleshooting cluster, application, and networking issues
Experience managing user access, roles, and permissions through RBAC
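RBAC, as used in OpenShift, grants verbs on resources via roles and attaches roles to users via bindings. A conceptual stdlib sketch of that check (the users, roles, and resources are invented):

```python
# Roles grant (verb, resource) pairs; bindings attach roles to users.
roles = {
    "viewer": {("get", "pods"), ("list", "pods")},
    "deployer": {("get", "pods"), ("create", "deployments")},
}
bindings = {"alice": ["viewer"], "bob": ["viewer", "deployer"]}

def allowed(user, verb, resource):
    """A request is allowed if any bound role grants the verb on the resource."""
    return any((verb, resource) in roles[r] for r in bindings.get(user, []))
```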
0 Negotiable or Not Mentioned
USA, Arden
13 days ago
flexontechnologies.com
1416 Views
Flexon Technologies is currently hiring for the position of Databricks Python Engineer to join their team for a long-term project in Arden, Delaware. This is a day-one onsite role, requiring the candidate to be physically present at the location from the start of the engagement. The position is tailored for senior professionals with over 10 years of experience who can provide high-level consultancy and technical expertise. The focus of the role is within the Retail Digital domain, specifically integrating complex data solutions to improve business operations.
The technical requirements include deep proficiency in Python, Databricks, and PL/SQL, alongside experience with ServiceNow and the Aptos Store Inventory Management System. Candidates are expected to have a strong background in retail digital environments, as resumes without this domain expertise will not be considered. The role is offered at an hourly rate of $55/hr on a C2C basis. The successful candidate will be responsible for developing and managing data pipelines, optimizing store inventory systems, and ensuring seamless integration across various enterprise platforms.
Key Requirements
10+ years of hands-on experience as a technical consultant.
Deep expertise in Python programming for data engineering tasks.
Advanced proficiency with Databricks for processing large datasets.
Strong skills in writing and optimizing PL/SQL queries.
Prior experience working with ServiceNow platform.
Hands-on experience with Aptos Store Inventory Management System.
Essential domain experience in Retail Digital.
Ability to work onsite in Arden, DE from the first day of the contract.
Familiarity with Azure cloud infrastructure and services.
Understanding of big data ecosystems and Hadoop environments.
0 Negotiable or Not Mentioned
USA, Whippany
27 days ago
gvrinfotek.com
1682 Views
GVR Infotek is seeking a highly skilled and experienced Senior Java Developer to join our team in a hybrid capacity based in Whippany, NJ. This senior-level role requires a professional with at least 12 years of experience in backend development, specifically focusing on building scalable and high-performance systems. The successful candidate will be responsible for designing and implementing microservices architectures and leveraging the latest Java versions to meet complex business requirements.
The position involves working extensively with Apache Kafka and other streaming platforms to manage data flow and system communication. You will be expected to demonstrate expertise in Spring Boot, multithreading, and SQL performance tuning. Additionally, you will play a key role in optimizing system concurrency and managing relational databases. This is an excellent opportunity for a seasoned developer to contribute to distributed systems and work with modern containerization technologies in a dynamic environment.
Key Requirements
Minimum of 12 years of experience in Java development.
Expertise in Java 11+ with strong backend development skills.
Extensive hands-on experience with Spring Boot and Spring Framework.
Solid understanding of REST APIs and Microservices Architecture.
Proven experience with Apache Kafka or similar streaming platforms.
Advanced knowledge of Multithreading and Concurrency management.
Proficiency in SQL and relational databases like PostgreSQL or SQL Server.
Ability to perform system performance tuning and optimization.
Experience working with Docker and Kubernetes for container orchestration.
Familiarity with Redis and high-performance distributed systems.
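The Kafka and concurrency requirements above reduce to the producer/consumer pattern: writers and readers decoupled by a buffer. A stdlib sketch with threads and a thread-safe queue (this is not Kafka's API):

```python
import queue
import threading

buf = queue.Queue()   # stands in for a topic/partition
results = []

def producer():
    for i in range(5):
        buf.put(i)
    buf.put(None)  # sentinel: end of stream

def consumer():
    # Consume until the sentinel, doubling each message as the "processing".
    while (msg := buf.get()) is not None:
        results.append(msg * 2)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
```

With real Kafka the buffer is durable and partitioned, but the decoupling of producers from consumers is the same.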
0 Negotiable or Not Mentioned
USA, Pennsylvania
22 days ago
jpstechsolutions.com
1312 Views
This is a senior-level Backend Engineering position focusing on the development and optimization of Microservices using Golang and .NET frameworks. The role is critical for building robust payment systems and managing complex REST APIs within a cloud-native environment. You will work closely with cross-functional teams to integrate enterprise platforms such as SAP and Microsoft Dynamics, ensuring seamless data flow and system interoperability.
The position is based in Pennsylvania and follows a hybrid work model, requiring the candidate to be onsite for 3 to 4 days per month. With over 8 years of professional experience, the ideal candidate will lead infrastructure initiatives using Docker and Kubernetes while maintaining high standards for CI/CD pipelines. This role offers an excellent opportunity to work on cutting-edge financial technologies and scalable Azure-based architectures.
Key Requirements
8+ years of professional experience in backend software development.
Expertise in programming with Golang and the .NET framework.
Proven experience designing and implementing Microservices architectures.
Strong knowledge of building and consuming REST APIs.
Hands-on experience with Payment Systems and financial transaction logic.
Proficiency in managing cloud infrastructure within Microsoft Azure.
Solid experience with containerization tools specifically Docker.
Practical knowledge of orchestration using Kubernetes.
Expertise in setting up and maintaining CI/CD pipelines for automated delivery.
Demonstrated ability to integrate systems with SAP and Microsoft Dynamics.
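Payment systems like the ones this role covers typically make charge endpoints idempotent: a retry with the same key must not double-charge. A minimal in-memory sketch (the function and key names are hypothetical, not a real gateway's API):

```python
processed = {}  # idempotency key -> stored result

def charge(idempotency_key, amount):
    """Return the stored result if this key was already processed."""
    if idempotency_key in processed:
        return processed[idempotency_key]  # retry: no double charge
    result = {"status": "captured", "amount": amount}
    processed[idempotency_key] = result
    return result

first = charge("key-123", 50.0)
retry = charge("key-123", 50.0)  # network retry of the same request
```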