Best Talent Reach (BTR): 6 jobs found for "databricks"

SENIOR DATA MODELER / DATA ARCHITECT @ ALPHOSOFT

Negotiable or Not Mentioned · USA, Columbus · 5 hours ago · alphosoft.com · 72 Views

Alphosoft is hiring a Senior Data Modeler and Data Architect to lead our data strategy in Columbus, OH. This role, which requires over 13 years of experience, focuses on designing the conceptual, logical, and physical data models that support our large-scale enterprise systems. You will work closely with stakeholders to understand business requirements and translate them into scalable, efficient data architectures.

The role involves utilizing Snowflake and AWS for cloud data warehousing and leveraging tools like DBT and Databricks for transformation. You will use Erwin for advanced data modeling and Informatica for ETL processes, while applying Data Vault 2.0 methodologies to ensure data integrity. Knowledge of data governance tools like Collibra and reporting through Power BI is highly valued. The position is based in Columbus, OH, and we welcome applicants who are open to relocating.

Key Requirements

- Minimum of 13 years of experience in data modeling and architecture.
- Expertise in cloud data warehousing, specifically with Snowflake and AWS.
- Proficiency with DBT and Databricks for data transformation.
- Advanced skills in using Erwin for enterprise data modeling.
- Strong experience with Informatica for ETL development.
- Deep understanding of Data Vault 2.0 architecture and methodologies.
- Experience with data governance and metadata management tools like Collibra.
- Proficiency in Power BI for developing advanced data visualizations.
- Familiarity with industry-specific platforms such as Guidewire.
- Excellent analytical skills to design complex enterprise-level data schemas.
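To make the Data Vault 2.0 requirement above concrete, here is a minimal sketch of the hash-key convention those models typically use: a hub's surrogate key is derived deterministically from the normalized business key. This is illustrative only, not Alphosoft's actual implementation, and the business-key values are invented.

```python
import hashlib

def hub_hash_key(*business_key_parts: str) -> str:
    """Data Vault 2.0-style hub hash key: normalize each part, join
    with a fixed delimiter, and hash, so the same business key always
    maps to the same hub record regardless of casing or whitespace."""
    normalized = "||".join(p.strip().upper() for p in business_key_parts)
    return hashlib.md5(normalized.encode("utf-8")).hexdigest()

# Hypothetical business key: the same entity loaded from two source
# systems with different formatting yields the same hash key.
a = hub_hash_key("ACME Corp", " US ")
b = hub_hash_key("acme corp", "us")
print(a == b)  # True
```

Deterministic keys like this are what let hubs, links, and satellites be loaded in parallel without lookups against a sequence generator.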

SENIOR DATA ENGINEER @ ALPHOSOFT

Negotiable or Not Mentioned · USA, Columbus · 5 hours ago · alphosoft.com · 81 Views

Alphosoft is currently seeking a highly experienced Senior Data Engineer to join our technical team in Columbus, OH. This role is designed for a professional with over a decade of hands-on experience in building and optimizing large-scale data systems. The successful candidate will be responsible for designing, constructing, and maintaining high-performance data pipelines that enable the business to leverage complex datasets for strategic decision-making.

The position requires extensive expertise in the modern data stack, specifically using PySpark, Databricks, and Snowflake to process and store data efficiently. You will work with cloud-native technologies including AWS Glue, Azure Data Factory, and Airflow for orchestration. The role involves implementing robust ETL processes and managing real-time data streaming through Kafka. This position is based in Columbus, OH, and we are open to candidates who are willing to relocate to the area.

Key Requirements

- Minimum of 10 years of professional experience in Data Engineering.
- Deep proficiency in PySpark and Databricks for data processing.
- Hands-on experience with Delta Lake and Snowflake data warehousing.
- Expertise in cloud services including AWS Glue and Azure Data Factory.
- Proven experience with workflow orchestration tools such as Airflow.
- Strong knowledge of real-time data streaming technologies like Kafka.
- Proficiency in Infrastructure as Code using Terraform.
- Experience with data visualization tools like Power BI.
- Strong SQL skills and experience with relational database management.
- Ability to design and maintain scalable and reliable data architectures.
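Pipelines built on the stack above typically hinge on idempotent merge (upsert) logic, so a failed or retried run never duplicates rows. A hedged sketch of that pattern, shown with the standard-library sqlite3 module rather than Snowflake or Delta Lake; the table and rows are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (event_id TEXT PRIMARY KEY, amount REAL)")

def upsert_batch(rows):
    # MERGE-style upsert keyed on event_id: re-running the same batch
    # leaves the table unchanged, which makes the pipeline safe to retry.
    conn.executemany(
        "INSERT INTO events (event_id, amount) VALUES (?, ?) "
        "ON CONFLICT(event_id) DO UPDATE SET amount = excluded.amount",
        rows,
    )

upsert_batch([("e1", 10.0), ("e2", 20.0)])
upsert_batch([("e1", 10.0), ("e2", 25.0)])  # retried batch with one correction
count, = conn.execute("SELECT COUNT(*) FROM events").fetchone()
print(count)  # 2, not 4: the merge is idempotent
```

Warehouse engines express the same idea with `MERGE INTO`; the retry-safety property is what matters for orchestrated pipelines.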

DATA ENGINEER (10+YRS) @ AT&T

Negotiable or Not Mentioned · USA, Texas · 5 hours ago · cloudvare.com · 108 Views

We are seeking a highly skilled and experienced Data Engineer to join our team for a long-term contract engagement with AT&T. The role is a 2+ year contract based in Texas; candidates must be able to work in a hybrid onsite capacity in Plano, Dallas, or Richardson. The successful candidate will be responsible for building and optimizing scalable data pipelines and automating complex ETL processes to support large-scale enterprise data initiatives.

Key responsibilities include executing data migrations between Palantir and Snowflake, as well as conducting comparative analyses between these platforms. You will utilize Databricks for automation and pipeline development while collaborating with various stakeholders on data modeling and engineering solutions. The ideal candidate must be eligible for W2 employment and possess a deep technical background in SQL, data transformation, and optimization within cloud environments.

Key Requirements

- Minimum of 10 years of professional experience in data engineering roles.
- Expertise in working with Snowflake for data warehousing and management.
- Advanced proficiency in SQL, including table building and complex query statements.
- Hands-on experience with Palantir for data migration and comparative analysis.
- Strong skills in Databricks for automation and data pipeline development.
- Proven ability to perform data transformation and performance optimization.
- Must be willing and eligible to work on a W2 contract basis.
- Experience with Alteryx or similar ETL tools like FME is highly preferred.
- Ability to build and maintain scalable data pipelines in a production environment.
- Exposure to GIS or ESRI technologies is considered a significant advantage.
- Strong analytical skills for troubleshooting complex data sets and migrations.
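The Palantir-to-Snowflake comparative analysis described above often reduces to reconciling row counts and content fingerprints between source and target. A minimal, hedged sketch of that idea in pure Python, with invented data; real migrations would compute this per table inside each platform:

```python
import hashlib

def table_fingerprint(rows):
    """Order-independent fingerprint of a table: hash each row, then
    XOR the digests together so row order does not affect the result."""
    acc = 0
    for row in rows:
        digest = hashlib.sha256("|".join(map(str, row)).encode()).hexdigest()
        acc ^= int(digest, 16)
    return len(rows), acc

source = [(1, "alice"), (2, "bob")]
target = [(2, "bob"), (1, "alice")]    # same rows, different order
drifted = [(1, "alice"), (2, "bobby")] # one value changed in flight

print(table_fingerprint(source) == table_fingerprint(target))   # True
print(table_fingerprint(source) == table_fingerprint(drifted))  # False
```

Matching counts plus matching fingerprints give a cheap first pass before any row-by-row diff of mismatched tables.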

DATABRICKS EXPERT @ EMPEREN TECHNOLOGIES

Negotiable or Not Mentioned · USA · 15 hours ago · emperentech.com · 168 Views

Emperen Technologies is looking for high-quality Databricks talent based in the USA to support our enterprise scaling efforts. As an Official Databricks Partner, we provide specialized services in data transformation, engineering, and architecture. This role is designed for experts who prefer working on a contract or hourly basis and can provide immediate impact on urgent project delivery needs.

Successful applicants will engage in complex data migration and modernization projects, utilizing Azure Databricks and PySpark. You will be tasked with building scalable data architectures and integrating advanced AI/ML analytics. We focus on enabling outcomes rather than just providing resources, so we require candidates who are results-oriented and technically proficient in the Databricks ecosystem.

Key Requirements

- Proficiency in Azure Databricks and Apache Spark ecosystems.
- Strong experience with PySpark for large-scale data processing.
- Solid background in Data Engineering and Data Architecture principles.
- Expertise in data migration and modernization of legacy systems.
- Ability to integrate AI/ML and analytics into production data pipelines.
- Availability to work on a contract and hourly basis for urgent delivery.
- Strong communication skills for collaborating with CTOs and Heads of Data.
- Experience with cloud infrastructure and security best practices.
- Proven ability to deliver high-quality outcomes in fast-paced environments.
- Knowledge of Spark optimization and performance tuning techniques.

DATA ENGINEER @ CONVEX TECH INC.

Negotiable or Not Mentioned · USA, New York · 4 days ago · convextech.com · 583 Views

Convex Tech Inc. is seeking a skilled Data Engineer for a hybrid role based in New York. This position requires the candidate to work onsite three days a week and participate in an onsite interview process. The successful candidate will focus on designing and implementing scalable data pipelines within the Azure ecosystem, specifically utilizing Azure Databricks and Azure Data Factory. The role involves developing robust ETL/ELT workflows using Apache Spark and PySpark DataFrames to process large datasets efficiently while ensuring optimal performance and scalability.

Beyond core pipeline development, the Data Engineer will be responsible for maintaining data governance, security, and compliance. Key tasks include implementing data quality frameworks, managing data lineage, and supporting modern Lakehouse architectures. Candidates must possess a deep understanding of SQL-based transformations and Master Data Management (MDM) concepts to ensure data consistency and integrity across the organization. This is a contract-based opportunity for 6 months or more, specifically looking for USC or GC holders ready to work in a hybrid environment.

Key Requirements

- Design and implement scalable data pipelines using Azure Databricks and Azure Data Factory.
- Develop and maintain robust ETL/ELT workflows using Apache Spark and PySpark DataFrames.
- Build and optimize data pipelines for efficient ingestion and processing of large datasets.
- Utilize data governance tools to manage data access, security, compliance, and the data lifecycle.
- Implement data quality frameworks and maintain data lineage across enterprise data platforms.
- Design and support modern data architecture using Lakehouse and distributed data processing.
- Develop high-performance Spark and SQL-based data transformation procedures.
- Apply Master Data Management (MDM) concepts to ensure data consistency and standardization.
- Must be a US Citizen or Green Card holder (USC/GC only).
- Willingness to work onsite in New York 3 days a week and attend an onsite interview.
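The MDM concepts named above usually come down to two steps: standardize matching fields so the same real-world entity compares equal, then apply a survivorship rule to pick the golden record. A minimal sketch under invented data and a hypothetical "most recently updated wins" rule, not Convex Tech's actual process:

```python
def standardize(record):
    """Canonical matching key: trimmed, lowercased name and email,
    so formatting differences between source systems don't create
    duplicate master records."""
    return (record["name"].strip().lower(), record["email"].strip().lower())

def merge_masters(records):
    # Survivorship rule (illustrative): for each standardized key,
    # keep the record with the highest "updated" timestamp.
    masters = {}
    for rec in records:
        key = standardize(rec)
        if key not in masters or rec["updated"] > masters[key]["updated"]:
            masters[key] = rec
    return list(masters.values())

records = [
    {"name": "Ada Lovelace ", "email": "ADA@example.com", "updated": 1},
    {"name": "ada lovelace", "email": "ada@example.com", "updated": 2},
]
golden = merge_masters(records)
print(len(golden), golden[0]["updated"])  # 1 2
```

Production MDM adds fuzzy matching and per-field survivorship, but the standardize-then-survive shape is the same.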

SYSTEMS ADMINISTRATOR – DATABRICKS PLATFORM @ VERAZ INC

Negotiable or Not Mentioned · USA, Austin, TX · 8 days ago · verazinc.com · 696 Views

Veraz Inc is seeking a highly experienced Systems Administrator specializing in the Databricks Platform for a hybrid role located in Austin, Texas. This position requires a professional with over ten years of experience, preferably one who has previously worked with state clients. The successful candidate will be responsible for managing Databricks workspaces and ensuring optimal platform performance through effective cluster management and job scheduling.

Key responsibilities include implementing robust security measures using IAM, SCIM, and RBAC, as well as managing cloud integrations such as Amazon S3. Applicants should possess deep knowledge of Spark and Databricks SQL for governance and monitoring purposes, and proficiency in automation tools such as Terraform and CI/CD pipelines is essential for maintaining and scaling the environment efficiently.

Key Requirements

- 10+ years of professional experience in systems administration.
- Prior experience working with state clients is highly preferred.
- Strong hands-on experience with Databricks workspace administration.
- Proven ability in cluster management and performance tuning.
- Expertise in Identity and Access Management (IAM) and SCIM.
- Proficiency in Role-Based Access Control (RBAC) implementation.
- Experience with cloud integrations, specifically Amazon S3.
- Solid knowledge of Apache Spark and Databricks SQL.
- Familiarity with security, governance, and monitoring best practices.
- Proficiency in automation tools including Terraform and CI/CD workflows.
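Workspace administration of the kind listed above often means enforcing guardrails on cluster specs before automation (Terraform, CI/CD) submits them. A hedged pure-Python sketch of such a policy check; the policy limits and spec field names are hypothetical, not the Databricks API:

```python
# Hypothetical guardrails an administrator might enforce on requested
# cluster specs before they reach the provisioning pipeline.
POLICY = {"max_workers": 8, "max_autotermination_minutes": 60}

def validate_cluster_spec(spec):
    """Return a list of policy violations; an empty list means the
    spec may proceed to provisioning."""
    errors = []
    if spec.get("num_workers", 0) > POLICY["max_workers"]:
        errors.append("too many workers")
    at = spec.get("autotermination_minutes")
    if at is None or at > POLICY["max_autotermination_minutes"]:
        errors.append("autotermination missing or too long")
    return errors

ok = {"num_workers": 4, "autotermination_minutes": 30}
bad = {"num_workers": 16}
print(validate_cluster_spec(ok))   # []
print(validate_cluster_spec(bad))  # ['too many workers', 'autotermination missing or too long']
```

In practice Databricks cluster policies express these limits declaratively, but the validation logic they encode looks like this.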