Best Talent Reach (BTR): 4 Jobs Found for "etl"



DATABRICKS EXPERT @ EMPEREN TECHNOLOGIES

Negotiable or Not Mentioned · USA · 4 hours ago · emperentech.com · 31 Views

Emperen Technologies is looking for high-quality Databricks talent based in the USA to support our enterprise scaling efforts. As an Official Databricks Partner, we provide specialized services in data transformation, engineering, and architecture. This role is designed for experts who prefer working on a contract or hourly basis and can provide immediate impact on urgent project delivery needs.

Successful applicants will engage in complex data migration and modernization projects, utilizing Azure Databricks and PySpark. You will be tasked with building scalable data architectures and integrating advanced AI/ML analytics. We focus on enabling outcomes rather than just providing resources, so we require candidates who are results-oriented and technically proficient in the Databricks ecosystem.

Key Requirements

- Proficiency in Azure Databricks and Apache Spark ecosystems.
- Strong experience with PySpark for large-scale data processing.
- Solid background in Data Engineering and Data Architecture principles.
- Expertise in Data Migration and Modernization of legacy systems.
- Ability to integrate AI/ML and Analytics into production data pipelines.
- Availability to work on a contract and hourly basis for urgent delivery.
- Strong communication skills for collaborating with CTO and Head of Data roles.
- Experience with cloud infrastructure and security best practices.
- Proven ability to deliver high-quality outcomes in fast-paced environments.
- Knowledge of Spark optimization and performance tuning techniques.

DATA ENGINEER @ TECHLEAN IT

Negotiable or Not Mentioned · USA · 19 hours ago · techleanit.com · 132 Views

This role is designed for Data Engineering professionals looking for W2 marketing opportunities with top clients throughout the USA. Techlean IT provides a supportive environment for those on H1B Transfer or H4EAD, focusing on placing experts in high-impact data projects. Our placement services include specialized resume marketing and interview coaching to help you secure positions that match your technical expertise and career goals.

As a Data Engineer within our network, you will work on building and optimizing data pipelines and architectures. You will have the opportunity to leverage modern data tools and cloud platforms while working for prestigious clients. We offer long-term career support and a dedicated recruitment team to assist you in every step of the marketing and hiring process, ensuring a stable and rewarding career path in the United States.

Key Requirements

- Valid H1B Transfer or H4EAD visa status.
- Extensive experience with Python or Scala for data processing.
- Proficiency in writing complex SQL queries and performance tuning.
- Hands-on experience with ETL/ELT toolsets and data modeling.
- Knowledge of Big Data technologies such as Spark, Hadoop, or Kafka.
- Experience with cloud data platforms like AWS Redshift, Snowflake, or Azure Synapse.
- Ability to build and maintain scalable data pipelines.
- Understanding of data governance and security best practices.
- Strong organizational skills to manage multiple data projects.
- Degree in Computer Science, Information Technology, or a related field.
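The ETL/ELT skills listed above can be sketched with a minimal, standard-library-only example. The CSV feed, table name, and column layout are invented for illustration; a production pipeline would read from a real source and load into a platform like Redshift, Snowflake, or Synapse rather than an in-memory SQLite database.

```python
import csv
import io
import sqlite3

# Hypothetical raw feed; a real extract step would pull from an API, S3, etc.
RAW_CSV = """order_id,region,amount
1,US,120.50
2,EU,
3,US,75.00
"""

def extract(text):
    """Extract: parse the raw CSV into dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: drop rows with missing amounts and cast types."""
    return [
        (int(r["order_id"]), r["region"], float(r["amount"]))
        for r in rows
        if r["amount"]
    ]

def load(rows, conn):
    """Load: write cleaned rows into a warehouse-style table."""
    conn.execute("CREATE TABLE orders (order_id INT, region TEXT, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
total = conn.execute(
    "SELECT SUM(amount) FROM orders WHERE region = 'US'"
).fetchone()[0]
print(total)  # 195.5
```

The same extract/transform/load shape carries over to Spark or cloud toolsets; only the engines behind each step change.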

GOVT INFORMATICA DEVELOPER (1 POSITION) @ IIT LABS

Negotiable or Not Mentioned · Austin, TX, USA · 1 day ago · iitlabs.com · 94 Views

We are seeking a highly experienced Government Informatica Developer for a 12-month contract position located in Austin, TX. This role requires a professional with over 15 years of experience to join a dedicated team working on high-impact data infrastructure. The position is a hybrid role, meaning the candidate must be local to Austin or willing to work on-site as required. You will be responsible for developing and managing complex data pipelines and integration strategies within a government-focused context.

The successful candidate will utilize a modern tech stack including Informatica PowerCenter, IICS, IDMC, and Snowflake. Proficiency in Python and experience in the healthcare sector are crucial for success in this role. You will also be involved with Master Data Management (MDM) and Informatica Axon to ensure data governance and quality across the enterprise. This contract provides a significant opportunity to contribute to large-scale data systems for a duration of one year.

Key Requirements

- Minimum of 15 years of professional experience in Informatica Development.
- Deep technical expertise in Informatica PowerCenter and ETL development.
- Proven experience with Informatica Intelligent Data Management Cloud (IDMC) and IICS.
- Strong working knowledge of Informatica MDM and Axon for data governance.
- Hands-on experience with Snowflake data warehousing and cloud data architecture.
- Advanced proficiency in Python for data manipulation and scripting.
- Prior experience working within the healthcare industry or related government projects.
- Must be located in Austin, TX, or able to work in a hybrid capacity on-site.
- Ability to commit to a 12-month contract duration.
- Strong analytical skills to solve complex data integration and migration challenges.

DATA ENGINEER @ CONVEX TECH INC.

Negotiable or Not Mentioned · New York, USA · 3 days ago · convextech.com · 448 Views

Convex Tech Inc. is seeking a skilled Data Engineer for a hybrid role based in New York. This position requires the candidate to work onsite three days a week and participate in an onsite interview process. The successful candidate will focus on designing and implementing scalable data pipelines within the Azure ecosystem, specifically utilizing Azure Databricks and Azure Data Factory. The role involves developing robust ETL/ELT workflows using Apache Spark and PySpark DataFrames to process large datasets efficiently while ensuring optimal performance and scalability.

Beyond core pipeline development, the Data Engineer will be responsible for maintaining data governance, security, and compliance. Key tasks include implementing data quality frameworks, managing data lineage, and supporting modern Lakehouse architectures. Candidates must possess a deep understanding of SQL-based transformations and Master Data Management (MDM) concepts to ensure data consistency and integrity across the organization. This is a contract-based opportunity for 6 months or more, specifically looking for USC or GC holders ready to work in a hybrid environment.

Key Requirements

- Design and implement scalable data pipelines using Azure Databricks and Azure Data Factory.
- Develop and maintain robust ETL/ELT workflows using Apache Spark and PySpark DataFrames.
- Build and optimize data pipelines for efficient data ingestion and processing of large datasets.
- Utilize data governance tools to manage data access, security, compliance, and data lifecycle.
- Implement data quality frameworks and maintain data lineage across enterprise data platforms.
- Design and support modern data architecture using Lakehouse and distributed data processing.
- Develop high-performance Spark and SQL-based data transformation procedures.
- Apply Master Data Management (MDM) concepts to ensure data consistency and standardization.
- Must be a US Citizen or Green Card holder (USC/GC only).
- Willingness to work onsite in New York 3 days a week and attend an onsite interview.
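As a rough illustration of the data quality frameworks this role calls for, the sketch below runs two invented rules (a not-null check and a numeric range check) over hypothetical records. Production teams would typically express these as Delta Live Tables expectations or Great Expectations suites rather than hand-rolled functions.

```python
# Minimal rule-based data quality checks; rule names and thresholds are
# invented for the example.
def check_not_null(rows, column):
    """Return rows that fail the rule: column is missing or empty."""
    return [r for r in rows if r.get(column) in (None, "")]

def check_range(rows, column, lo, hi):
    """Return rows whose numeric value falls outside [lo, hi]."""
    return [r for r in rows if not (lo <= float(r[column]) <= hi)]

records = [
    {"id": 1, "amount": "120.5"},
    {"id": 2, "amount": ""},      # fails the not-null rule
    {"id": 3, "amount": "-7"},    # fails the range rule
]

# Range checks only run on rows that passed the not-null rule.
valid = [r for r in records if r["amount"] not in (None, "")]
null_failures = check_not_null(records, "amount")
range_failures = check_range(valid, "amount", 0, 1_000_000)

print(len(null_failures), len(range_failures))  # 1 1
```

Quarantining the failing rows (instead of dropping them silently) is what preserves the data lineage the posting asks for: every record that leaves the pipeline is accounted for.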