Best Talent Reach (BTR) 11 Jobs Found for "data pipeline"



SOFTWARE ENGINEER (DATA ENGINEERING) @ MANPOWERGROUP

~10,833 Mentioned · USA · 10 days ago · gmail.com · 237 Views

We are seeking a Software Engineer specializing in Data Engineering to join our dynamic team. The primary focus of this role is to build and maintain robust data pipelines and essential infrastructure that supports our large-scale data operations. You will be responsible for developing efficient ETL processes and managing big data systems to ensure data availability and integrity across the organization. This position offers the opportunity to work with cutting-edge data tools and contribute to the foundation of our data-driven decision-making processes.

This role is available in multiple locations, including New York, NY, Chicago, IL, and Boston, MA. The compensation package for this role is between $130,000 and $175,000 per year. We are looking for candidates who are passionate about data architecture and eager to solve complex engineering challenges in a collaborative environment.

Key Requirements

- Proficiency in SQL for complex data queries and manipulation.
- Strong programming skills in Python for automation and scripting.
- Hands-on experience with big data tools and technologies.
- Ability to develop and maintain scalable ETL pipelines.
- Proven experience in managing and optimizing big data systems.
- Knowledge of data infrastructure and architectural patterns.
- Familiarity with cloud platforms such as AWS, Azure, or GCP.
- Strong problem-solving and analytical capabilities.
- Degree in Computer Science, Engineering, or a related technical field.
- Effective communication skills for collaborating with technical teams.

DATA ENGINEER @ VIVID TECHNOLOGIES

Negotiable or Not Mentioned · USA · 15 days ago · vivid-technologies.com · 817 Views

Join Vivid Technologies as a Data Engineer where we bridge the gap between talented professionals and top-tier W2 contracting opportunities in the USA. We focus on marketing candidates with strong technical backgrounds, offering a robust support system that includes resume polishing and technical interview coaching. Our goal is to place you in high-impact projects quickly while providing the tools needed for long-term career sustainability.

Candidates will benefit from a hybrid work structure, allowing for flexibility while engaging with projects in various regions. We offer continuous guidance from the initial marketing phase through the entire project lifecycle. This role is ideal for those who thrive in data-centric environments and are looking for a reliable partner to manage their professional marketing and placement in the United States.

Key Requirements

- Must possess valid GC, GC-EAD, H4-EAD, or USC work authorization status.
- Strong proficiency in Python, Scala, or Java programming languages.
- Advanced knowledge of SQL and database management systems.
- Experience with ETL tools and designing efficient data pipelines.
- Familiarity with Big Data technologies such as Hadoop or Spark.
- Hands-on experience with cloud platforms like AWS, Azure, or GCP.
- Understanding of data modeling and data warehousing concepts.
- Ability to troubleshoot complex data issues and optimize performance.
- Capability to work effectively in a hybrid team setting.
- Proactive approach to learning new technologies and project requirements.

SR. FULL STACK DEVELOPER @ APPTAD INC

Negotiable or Not Mentioned · USA, Philadelphia, PA · 16 days ago · apptadinc.com · 1125 Views

Apptad Inc is seeking a highly skilled Sr. Full Stack Developer to join our team in Philadelphia, PA, in a hybrid capacity. This role is ideal for a veteran developer with over 10 years of experience looking to lead complex technical initiatives. You will be responsible for building advanced data pipelines and ETL processes using Airflow and Snowflake, while also supporting the development of sophisticated web applications using React, Material UI, and AngularJS. The position involves collaborating across multiple teams to ensure the delivery of high-quality software solutions and the resolution of intricate technical problems.

In this hybrid role, you will play a key part in implementing process improvements and driving automation across the development lifecycle. Candidates should possess a deep understanding of cloud environments, specifically AWS, and be proficient in containerization technologies such as Docker and Kubernetes. Given the nature of our projects, experience within the financial services or asset management industry is considered a significant advantage. You will also have the opportunity to utilize AI-enabled development tools like Copilot and Claude to enhance productivity and innovation within our tech stack.

Key Requirements

- Minimum of 10 years of professional experience in full-stack software development.
- Expert proficiency in Python and frameworks such as Django.
- Extensive experience with front-end technologies including React, Angular, and Material UI.
- Strong background in AWS services including EC2, S3, Lambda, SNS, and SQS.
- Demonstrated expertise in containerization using Docker and orchestration with Kubernetes.
- Proven experience with CI/CD tools like Jenkins, GitLab, or Bamboo.
- Deep knowledge of database systems including Snowflake, Redshift, and SQL.
- Hands-on experience building and managing data pipelines with Apache Airflow.
- Strong understanding of REST API design and DevOps best practices.
- Familiarity with AI-enabled development tools such as Copilot or Claude.
- Experience in the Financial Services or Asset Management domain is highly preferred.
- Excellent collaborative skills and the ability to troubleshoot complex technical issues.
Join 1000+ Job Seekers: BTR Pro Seeker

Become part of a growing community. Get 20 applications daily, ad-free, and 5 AI letters. Boost your visibility in BTR's talent search and connect with top recruiters.

Starting $0.99/mo Fast Hire Boost

ORACLE DATA ENGINEER @ TROOBELL

Negotiable or Not Mentioned · USA · 17 days ago · troobell.com · 1193 Views

Troobell is currently seeking a highly skilled Oracle Data Engineer for a C2C role based in the United States. This position is a 6-month contract with the possibility of extension and follows a hybrid work model. Candidates have the flexibility to work from either Merrimack, New Hampshire, or Durham, North Carolina, and the company is open to candidates who are willing to relocate to these areas to fulfill the hybrid requirements. The role focuses on developing and maintaining robust data architectures using Oracle technologies to support business intelligence and reporting needs.

As an Oracle Data Engineer, you will be responsible for designing ETL processes, optimizing database performance, and ensuring the seamless flow of data across various systems. You will collaborate with cross-functional teams to translate business requirements into technical specifications. Applicants must hold either US Citizen (USC) or Green Card (GC) status to be considered. Additionally, a professional LinkedIn profile with a photo is required for the screening process to verify the candidate's professional background and experience.

Key Requirements

- Must be a US Citizen (USC) or Green Card (GC) holder.
- Possess a valid LinkedIn profile with a professional photo.
- Extensive experience working with Oracle Database environments.
- Proven proficiency in SQL development and performance tuning.
- Experience in designing and implementing complex ETL processes.
- Ability to work in a hybrid setting in Merrimack, NH or Durham, NC.
- Solid understanding of data modeling and data warehousing concepts.
- Strong analytical skills for troubleshooting data-related issues.
- Experience with shell scripting or Python for automation tasks.
- Excellent communication skills to work with technical and non-technical stakeholders.

AWS DATA SOLUTIONS LEAD @ IT VISION GROUP

Negotiable or Not Mentioned · USA, Seattle, WA · 17 days ago · itvisiongroup.com · 755 Views

IT Vision Group is seeking a dedicated AWS Data Solutions Lead to join our innovative team in Seattle, WA. The successful candidate will be responsible for designing, implementing, and maintaining scalable data solutions on Amazon Web Services (AWS). You will leverage various AWS services to build and optimize critical data pipelines, data lakes, and ETL processes, ensuring that our data-driven applications operate at peak efficiency. This is a 12-month W2 position with an onsite work arrangement. The compensation for this role is $50/hr on a W2 basis with no benefits included.

You will work closely with data scientists, analysts, and key stakeholders to ensure the availability, reliability, and security of our data assets for our data-driven applications. We are looking for a professional who can lead technical initiatives and provide strategic direction for our cloud-based data infrastructure. If you have a strong background in data engineering and are passionate about cloud innovation, we encourage you to apply for this exciting opportunity to grow your career with a forward-thinking organization.

Key Requirements

- Proven experience in data engineering and cloud-based solutions.
- Expertise in designing, implementing, and maintaining scalable AWS data solutions.
- Advanced proficiency in leveraging AWS services for optimization.
- Hands-on experience building and optimizing data pipelines and data lakes.
- Strong understanding of ETL processes and implementation.
- Ability to collaborate effectively with data scientists and analysts.
- Strong focus on ensuring data availability, reliability, and security.
- Excellent problem-solving skills and analytical thinking.
- Proficiency in SQL and scripting languages such as Python.
- Knowledge of cloud infrastructure best practices and data governance.

AWS DATA ARCHITECT @ DATA WAREHOUSE LABS INC

Negotiable or Not Mentioned · USA, Torrance, CA · 20 days ago · dwlabs.com · 1159 Views

Data Warehouse Labs Inc is seeking a highly skilled AWS Data Architect to join our team in Torrance, CA. This position requires an onsite presence four days a week and is designed for a professional with at least 15 years of industry experience. You will be responsible for leading the design and implementation of robust data architectures, ensuring that all solutions are scalable, performant, and maintain high data quality standards.

In this role, you will work extensively with the AWS data stack, including MWAA/Airflow, Athena, Aurora, S3, Redshift, Glue, and Lambda. Your daily tasks will include hands-on data pipeline and ETL development, as well as complex data modeling and warehousing. You will also utilize QuickSight for business intelligence and collaborate closely with stakeholders to deliver optimized end-to-end data solutions that meet business objectives.

Key Requirements

- Minimum of 15 years of professional experience in data architecture or engineering.
- Strong hands-on expertise in the AWS data stack, including MWAA/Airflow, Athena, and Aurora.
- Proven experience with AWS S3, Redshift, Glue, and Lambda services.
- Expertise in data modeling and building scalable data warehousing solutions.
- Proficiency in Business Intelligence tools, specifically AWS QuickSight.
- Demonstrated experience in hands-on data pipeline and ETL development.
- Ability to lead architecture projects and optimize complex data workflows.
- Strong focus on performance tuning, scalability, and maintaining data quality.
- Excellent communication skills for collaborating with stakeholders on technical solutions.
- Willingness to work onsite in Torrance, CA at least 4 days per week.

AWS DATA ANALYTICS ENGINEER @ JUDGE

Negotiable or Not Mentioned · USA, Malvern, PA · 22 days ago · judge.com · 1212 Views

This position is for an AWS Data Analytics Engineer located in Malvern, Pennsylvania. The role follows a hybrid model requiring the candidate to be onsite from day one. The initial contract length is for one year, with a strong likelihood of being extended for multiple years. The recruitment process includes a video interview, and candidates are welcome to apply via C2C arrangements.

Technical responsibilities focus on utilizing Python and SQL for complex data queries and manipulation. The successful candidate will also be responsible for creating data visualizations and dashboards using Tableau and leveraging various AWS cloud services to manage and analyze large datasets. Applicants must submit their resume along with a copy of their work authorization for consideration.

Key Requirements

- Proficiency in Python programming for data engineering tasks.
- Strong expertise in SQL for performing complex data queries.
- Extensive experience with Tableau for data visualization and reporting.
- In-depth knowledge of AWS services related to data analytics.
- Ability to work onsite in Malvern, PA following a hybrid model.
- Minimum of one year of experience in a similar data analytics role.
- Experience with cloud-based data warehousing and architecture.
- Strong analytical and problem-solving skills.
- Ability to participate and perform well in video interviews.
- Must provide valid work authorization documentation.
- Experience with C2C project delivery models.
- Excellent communication and collaboration skills.

SENIOR DATA ENGINEER @ DSIG INC

Negotiable or Not Mentioned · USA, Harrisburg, PA · 23 days ago · dsiginc.com · 1644 Views

DSIG Inc is seeking a highly skilled and experienced Senior Data Engineer to join our team for a direct client project located in Harrisburg, PA. This role is a hybrid position, requiring the candidate to be local to the area and possess a valid Pennsylvania Driver's License. The successful candidate will be responsible for designing, building, and maintaining robust data architectures and pipelines that support large-scale data processing. Experience with the State of Pennsylvania is a mandatory requirement for this role, as the candidate will be working closely with government-related data systems and processes.

In this position, you will leverage your expertise in SQL, ETL processes, and various data warehousing technologies to ensure data integrity and availability. You will also participate in face-to-face interviews and collaborate with multi-functional teams to translate business requirements into technical solutions. We are looking for a professional who is not only technically proficient but also an excellent communicator. If you have a passion for data engineering and meet the residency and licensing requirements, we encourage you to apply by sending your resume to the provided contact.

Key Requirements

- Must possess a valid Pennsylvania Driver's License.
- Mandatory experience working with the State of Pennsylvania.
- Candidate must be local to Harrisburg, PA for hybrid work.
- Proven experience as a Senior Data Engineer or similar role.
- Expertise in designing and maintaining scalable data pipelines.
- Strong proficiency in SQL and relational database management.
- Experience with ETL tools and data integration techniques.
- Ability to attend face-to-face interviews in Harrisburg.
- Experience with cloud-based data platforms (e.g., AWS, Azure, GCP).
- Bachelor's degree in Computer Science, Information Technology, or a related field.

SOFTWARE ENGINEER II (DATA + AUTOMATION) @ ARTECH

Negotiable or Not Mentioned · USA, Bellevue, WA · 24 days ago · artech.com · 1307 Views

Artech is seeking a data-focused Software Engineer II for a Data and Automation role in Bellevue, WA. This hybrid position involves working on complex data flows, focusing on pattern identification and automation at scale. The role is designed for candidates who can thrive in a fast-paced environment and are ready for immediate interviews. The compensation for this role is between $77 and $82 per hour on a W2 basis.

The primary responsibilities include analyzing data to identify patterns, building datasets, and tracking data flows. You will be expected to automate manual processes using a combination of scripting and AI technologies. Successful candidates will have at least 2 years of experience and possess strong SQL and scripting skills in languages like Python, JavaScript, or PHP. This is a great opportunity for individuals with experience in data pipelines and large datasets.

Key Requirements

- Minimum of 2 years of professional software engineering experience.
- Strong proficiency in SQL for data querying and management.
- Expertise in scripting languages such as Python, JavaScript, or PHP.
- Experience building and managing data pipelines and large datasets.
- Ability to analyze complex data to identify patterns and trends.
- Proven track record of building datasets and tracking data flows.
- Experience automating manual processes using scripting or AI technologies.
- Ability to thrive in a fast-moving environment and meet tight deadlines.
- Willingness to work in a hybrid environment in Bellevue, WA.
- Eligibility to work under a W2 contract arrangement.

SUPER SENIOR DATA ENGINEER (1 POSITION) @ PNC

Negotiable or Not Mentioned · USA, Pittsburgh, PA · 24 days ago · skilzmatrix.com · 1997 Views

PNC is currently seeking a highly experienced Super Senior Data Engineer with over 10 years of professional experience to join their team in Pittsburgh, PA. The successful candidate will play a critical role in designing, building, and maintaining scalable data pipelines leveraging the full suite of AWS cloud services. This position involves developing and optimizing sophisticated ETL and ELT workflows to handle both structured and semi-structured data, ensuring that high-performance analytics are available for business decision-making. Working within an agile environment, the role demands an expert-level understanding of data processing jobs using Python and PySpark.

In addition to pipeline construction, the engineer will be responsible for integrating and managing data within the Snowflake cloud data warehouse. This includes writing complex SQL queries for data transformation and validation, as well as supporting Power BI dashboards by delivering curated, analytics-ready datasets. Candidates must demonstrate a strong commitment to data quality, governance, performance, and security best practices. This role is offered on a W2 basis and is ideal for individuals with prior experience in the financial services or banking domain who are looking to apply their technical leadership in a dynamic corporate environment.

Key Requirements

- Minimum of 10 years of professional experience in Data Engineering or a related field.
- Advanced proficiency in SQL, including complex querying and performance tuning.
- Extensive experience designing and maintaining scalable data pipelines on AWS.
- Expert knowledge of Python and PySpark for large-scale data processing.
- Hands-on experience with Snowflake cloud data warehouse management and integration.
- Proven ability to develop and optimize ETL/ELT workflows for various data formats.
- Experience supporting Power BI through data modeling and performance optimization.
- Familiarity with AWS services such as S3, Glue, EMR, Lambda, and Redshift.
- Strong understanding of data quality frameworks, governance, and security best practices.
- Ability to work effectively in an Agile/Scrum environment with cross-functional teams.
Page 1 of 2 (11 results)