0 Negotiable or Not Mentioned
USA, McLean, VA
14 days ago
momentousa.com
1326 Views
Momentousa is hiring a Senior Data Engineer for an onsite W2 position located in McLean, VA. This role is ideal for a seasoned professional with deep expertise in big data technologies and data warehousing. You will be responsible for designing, building, and maintaining scalable data pipelines that process vast amounts of information to support our analytics and business intelligence initiatives.
The successful candidate will work extensively with AWS services, Spark, and Pyspark to transform raw data into actionable insights. You will leverage your advanced SQL skills and Python knowledge to model data and optimize database performance. This role demands a high level of technical proficiency in Hive and general data modeling principles to ensure our data architecture is robust, efficient, and capable of supporting complex business queries.
Key Requirements
Significant experience with AWS cloud data services
Expert-level knowledge of Spark and Pyspark for data processing
Advanced proficiency in SQL, including optimization of both simple and complex queries
Strong backend development skills using Python
Practical experience with Hive for data warehousing and querying
Proven ability in data modeling and architecture design
Experience building and maintaining robust ETL pipelines
Knowledge of performance tuning for big data applications
Ability to work onsite in McLean, VA on a regular basis
Strong analytical skills to interpret complex data sets
0 Negotiable or Not Mentioned
USA, McLean, VA
27 days ago
S3Connections.com
2232 Views
We are seeking a highly skilled Senior Data Engineer to design, develop, and optimize scalable data platforms that transform complex data into meaningful business insights. The ideal candidate will have strong expertise in SQL, Python, and ETL development, along with experience supporting cloud-based data migration and modern data ecosystems. You will be responsible for building and maintaining scalable ETL/data pipelines for structured and unstructured data while ensuring high-performance data solutions through advanced techniques. The role requires a presence onsite in McLean, VA, for five days a week to ensure close collaboration with team members and stakeholders.
The role involves collaborating with cross-functional teams to enhance data quality, accessibility, and system performance. You will implement best practices for data engineering, code quality, testing, and deployment. Additionally, the candidate will support cloud data migration initiatives, including data mapping, transformation, validation, and optimization. This position is critical for optimizing data workflows and ensuring high availability and reliability of data systems within an enterprise environment. Candidates should be prepared to create and maintain comprehensive technical documentation and data flow diagrams to support the platform's evolution.
Key Requirements
8+ years of experience as a Data Engineer
Strong expertise in SQL and Python
Hands-on experience building and maintaining ETL pipelines in enterprise environments
Experience working with large datasets and complex data architectures
Experience with cloud platforms such as AWS, Azure, or GCP
Strong understanding of data modeling, data warehousing, and data transformation techniques
Experience in data migration and integration projects
Excellent problem-solving, analytical, and communication skills
Familiarity with orchestration tools like Airflow
Experience with CI/CD tools such as GitHub Actions or Jenkins
~11,666 Mentioned
United States, New York
7 days ago
gmail.com
390 Views
We are actively seeking a highly skilled Senior Data Engineer to build and scale modern data infrastructure for a fast-growing organization within the Financial Services and Data & Analytics industry. In this role, you will play a critical part in designing, developing, and optimizing data pipelines and architectures that support advanced analytics and critical business intelligence initiatives. You will be responsible for ensuring the scalability and performance of data systems while maintaining the highest standards of data quality and governance.
The ideal candidate will have extensive experience in building scalable ETL/ELT pipelines and maintaining robust data warehouses and data lakes. You will work with large-scale structured and unstructured datasets, collaborating closely with data scientists and analysts to provide the foundational data structures needed for complex modeling. The position offers a competitive package ranging from $140,000 to $200,000 annually, plus bonuses and full benefits, based in New York.
Key Requirements
5+ years of professional experience in data engineering roles.
Strong proficiency in programming languages, particularly Python.
Advanced knowledge of SQL for complex data manipulation and querying.
Hands-on experience with Apache Spark for large-scale data processing.
Extensive experience with cloud platforms such as AWS, Azure, or GCP.
Proven track record with data warehousing solutions and architecture.
Strong understanding of big data technologies and distributed systems.
Ability to design and build scalable ETL and ELT pipelines.
Proficiency in maintaining and optimizing data lakes for performance.
Excellent collaboration skills for working with data scientists and analysts.
Experience in ensuring data quality, integrity, and governance.
0 Negotiable or Not Mentioned
USA, Pittsburgh
24 days ago
skilzmatrix.com
2095 Views
PNC is currently seeking a highly experienced Super Senior Data Engineer with over 10 years of professional experience to join their team in Pittsburgh, PA. The successful candidate will play a critical role in designing, building, and maintaining scalable data pipelines leveraging the full suite of AWS cloud services. This position involves developing and optimizing sophisticated ETL and ELT workflows to handle both structured and semi-structured data, ensuring that high-performance analytics are available for business decision-making. Working within an agile environment, the engineer must demonstrate an expert-level understanding of data processing jobs using Python and PySpark.
In addition to pipeline construction, the engineer will be responsible for integrating and managing data within the Snowflake cloud data warehouse. This includes writing complex SQL queries for data transformation and validation, as well as supporting Power BI dashboards by delivering curated, analytics-ready datasets. Candidates must demonstrate a strong commitment to data quality, governance, performance, and security best practices. This role is offered on a W2 basis and is ideal for individuals with prior experience in the financial services or banking domain who are looking to apply their technical leadership in a dynamic corporate environment.
Key Requirements
Minimum of 10 years of professional experience in Data Engineering or a related field.
Advanced proficiency in SQL, including complex querying and performance tuning.
Extensive experience designing and maintaining scalable data pipelines on AWS.
Expert knowledge of Python and PySpark for large-scale data processing.
Hands-on experience with Snowflake cloud data warehouse management and integration.
Proven ability to develop and optimize ETL/ELT workflows for various data formats.
Experience supporting Power BI through data modeling and performance optimization.
Familiarity with AWS services such as S3, Glue, EMR, Lambda, and Redshift.
Strong understanding of data quality frameworks, governance, and security best practices.
Ability to work effectively in an Agile/Scrum environment with cross-functional teams.
0 Negotiable or Not Mentioned
USA, Malvern, PA
22 days ago
judge.com
1212 Views
This position is for an AWS Data Analytics Engineer located in Malvern, Pennsylvania. The role follows a hybrid model requiring the candidate to be onsite from day one. The initial contract length is for one year, with a strong likelihood of being extended for multiple years. The recruitment process includes a video interview, and candidates are welcome to apply via C2C arrangements.
Technical responsibilities focus on utilizing Python and SQL for complex data queries and manipulation. The successful candidate will also be responsible for creating data visualizations and dashboards using Tableau and leveraging various AWS cloud services to manage and analyze large datasets. Applicants must submit their resume along with a copy of their work authorization for consideration.
Key Requirements
Proficiency in Python programming for data engineering tasks.
Strong expertise in SQL for performing complex data queries.
Extensive experience with Tableau for data visualization and reporting.
In-depth knowledge of AWS services related to data analytics.
Ability to work onsite in Malvern, PA following a hybrid model.
Minimum of 1 year of experience in a similar data analytics role.
Experience with cloud-based data warehousing and architecture.
Strong analytical and problem-solving skills.
Ability to participate and perform well in video interviews.
Must provide valid work authorization documentation.
Experience with C2C project delivery models.
Excellent communication and collaboration skills.
0 Negotiable or Not Mentioned
USA, New York City
24 days ago
amaglobaltech.com
1709 Views
Ama Global Tech is seeking a skilled Data Engineer for a hybrid role located in New York City, NY. This position requires a professional who can design, build, and maintain scalable data pipelines and architectures. You will work closely with cross-functional teams to ensure data accessibility and quality, focusing on high-performance computing and cloud-based environments. The role involves a mix of remote work and onsite presence, specifically requiring local candidates capable of attending face-to-face interviews.
The ideal candidate will demonstrate mastery over the AWS ecosystem and the Databricks platform. You will be responsible for implementing data processing solutions using Spark and Python, while managing containerized applications with Docker and Kubernetes. We are looking for a proactive problem-solver who can navigate the complexities of data warehousing and data lakes to provide actionable insights for the business. A certification in Databricks Engineering is a significant plus for this position.
Key Requirements
Strong experience with AWS services including S3, Lambda, and EMR.
Proficiency in Spark and Python for complex data engineering tasks.
Solid understanding of data warehousing and data lake (DW/DL) concepts.
Hands-on experience with Docker and Kubernetes for containerized environments.
Certified Databricks Engineer is highly preferred.
Excellent troubleshooting and debugging skills to resolve technical issues.
Ability to attend a mandatory Face-to-Face (F2F) interview in New York City.
Must be a local candidate currently residing in or near New York City.
Eligible for C2C with H1 or W2 with GC/USC status.
Strong communication skills for effective team collaboration.
0 Negotiable or Not Mentioned
USA, Harrisburg, PA
23 days ago
dsiginc.com
1644 Views
DSIG Inc is seeking a highly skilled and experienced Senior Data Engineer to join our team for a direct client project located in Harrisburg, PA. This role is a hybrid position, requiring the candidate to be local to the area and possess a valid Pennsylvania Driver’s License. The successful candidate will be responsible for designing, building, and maintaining robust data architectures and pipelines that support large-scale data processing. Experience with the State of Pennsylvania is a mandatory requirement for this role, as the candidate will be working closely with government-related data systems and processes.
In this position, you will leverage your expertise in SQL, ETL processes, and various data warehousing technologies to ensure data integrity and availability. You will also participate in face-to-face interviews and collaborate with multi-functional teams to translate business requirements into technical solutions. We are looking for a professional who is not only technically proficient but also an excellent communicator. If you have a passion for data engineering and meet the residency and licensing requirements, we encourage you to apply by sending your resume to the provided contact.
Key Requirements
Must possess a valid Pennsylvania Driver’s License.
Mandatory experience working with the State of Pennsylvania.
Candidate must be local to Harrisburg, PA for hybrid work.
Proven experience as a Senior Data Engineer or similar role.
Expertise in designing and maintaining scalable data pipelines.
Strong proficiency in SQL and relational database management.
Experience with ETL tools and data integration techniques.
Ability to attend face-to-face interviews in Harrisburg.
Experience with cloud-based data platforms (e.g., AWS, Azure, GCP).
Bachelor's degree in Computer Science, Information Technology, or related field.
0 Negotiable or Not Mentioned
USA, Stamford
30 days ago
aetalentsgroup.com
1822 Views
We are seeking a highly skilled Snowflake Developer to join our dynamic team for a contract duration of 6 or more months. This role is designed for a technical expert with a customer-focused mindset who can deliver excellent service to clients, partners, and stakeholders. You will be responsible for managing and resolving support tickets within SLA guidelines while working with critical integrations such as Active Directory, LDAP, Outlook, Word, Excel, and Salesforce. The position requires a candidate who can handle customer calls professionally and track issue resolutions effectively to ensure high levels of client satisfaction.
In addition to development tasks, you will support software licensing and installations, perform routine server installations, and conduct necessary maintenance. Maintaining accurate documentation of all customer interactions is a key part of the role, as is the ability to troubleshoot and escalate complex technical issues to the appropriate channels. The environment is fast-paced, demanding high attention to detail and the ability to multitask across various web-based technologies and enterprise tools. This opportunity allows for work based in Stamford or remotely, providing flexibility for the right candidate with the necessary experience and skills.
Key Requirements
Minimum of 8 years of professional experience in technical development roles.
Proven expertise as a Snowflake Developer with deep platform knowledge.
Strong troubleshooting and analytical skills to resolve complex technical issues.
Excellent verbal and written communication skills for stakeholder interaction.
Ability to multitask and maintain organization in a fast-paced environment.
High attention to detail regarding technical documentation and ticket tracking.
Customer-centric mindset with a proactive problem-solving attitude.
Familiarity with web-based technologies and standard enterprise software tools.
Hands-on experience with Active Directory and LDAP integrations.
Proficiency in Microsoft Office Suite including Word, Excel, and Outlook.
Ability to manage software licensing and perform server maintenance tasks.
Experience working with Salesforce and other CRM integrations.
0 Negotiable or Not Mentioned
USA, Philadelphia
24 days ago
keypixelusa.com
1588 Views
We are seeking a highly skilled and experienced Data Architect with specific expertise in the Health Care Payer Domain to join our team in Philadelphia, PA. The ideal candidate will have over 15 years of professional experience in data architecture and a deep understanding of how to manage and structure data within the healthcare insurance sector. You will be responsible for designing, creating, deploying, and managing our organization's data architecture, ensuring that it is robust, scalable, and meets the specific needs of the payer domain.
In this hybrid role, you will leverage your expertise in various database technologies and cloud platforms such as Azure, GCP, and AWS to build modern data environments. Your work will involve collaborating with cross-functional teams to integrate disparate data sources, improve data quality, and support analytical initiatives. You must be adept at translating business requirements into technical specifications and have a proven track record of delivering high-quality data solutions in complex environments.
Key Requirements
Minimum of 15 years of professional experience in Data Architecture.
Must have deep expertise specifically within the Health Care Payer Domain.
Advanced proficiency with cloud platforms including Azure, GCP, and AWS.
Strong background in both relational and non-relational database design.
Extensive experience in building and optimizing scalable data pipelines.
Mastery of various data modeling tools and architectural methodologies.
Comprehensive knowledge of healthcare data standards and security regulations.
Proven ability to lead and deliver large-scale data transformation projects.
Excellent stakeholder management and technical communication skills.
Ability to work effectively in a hybrid office environment in Philadelphia.
0 Negotiable or Not Mentioned
USA, Malvern
28 days ago
judge.com
1308 Views
We are seeking a highly skilled AI/ML Data Scientist to join our team for a long-term engagement in Malvern, Pennsylvania. This is a hybrid role that requires the candidate to be onsite from day one to collaborate effectively with the local team. The successful applicant will be responsible for designing and implementing advanced machine learning algorithms and artificial intelligence models to extract valuable insights from complex datasets. You will work closely with cross-functional teams to identify business opportunities where AI and ML can drive significant impact and efficiency.
This contract position has an expected duration of over one year, providing a stable opportunity to contribute to high-impact projects. The selection process will include video interviews for shortlisted candidates. Applicants are required to submit a comprehensive resume along with a copy of their work authorization and a link to their professional LinkedIn profile. This role is ideal for individuals who thrive in a data-driven environment and possess a strong technical foundation in modern data science practices.
Key Requirements
Proficiency in Python, R, or similar programming languages for data analysis.
Proven experience in developing and deploying Machine Learning and AI models.
Ability to work onsite in Malvern, PA on a hybrid schedule from the first day.
Strong understanding of statistical concepts and probability theory.
Experience with data visualization tools such as Tableau, PowerBI, or Matplotlib.
Ability to clean, preprocess, and manage large-scale datasets efficiently.
A valid work authorization copy must be provided with the application.
Provision of a professional LinkedIn profile URL for background verification.
Excellent communication skills to translate technical findings to stakeholders.
Master’s or PhD degree in Computer Science, Data Science, or a related quantitative field.