Data Analyst | SQL, Python, Cloud ETL tools

Company: F1 Technical Solutions
Location: United States
Type: Full-time
Posted: 13.MAR.2021

Summary

We are looking to fill a new Data Analyst role with our direct health insurer client in NYC. The position is remote for now but will require being local to NYC once the pandemic is over.

Description

We are looking to fill a new Data Analyst role with our direct health insurer client in NYC. This is a remote position for now; however, the candidate would need to be local to NYC once the pandemic is over. The candidate will focus on building and optimizing our marketing data infrastructure, working alongside a wide variety of data, product, and business teams. This person will work closely with marketing management, data science, data visualization, and analytics teams, building the data sets and data pipelines (using multiple data tools) that those teams consume to produce business-driven insights, data solutions, and campaigns.
The right candidate will have strong experience with data infrastructure, data architecture, ETL, SQL, automation, and data frameworks, along with processes to rapidly integrate disconnected and disparate data sources into automated datasets for analytics consumption. The candidate will also have a proven track record of working with enterprise metrics, strong operational skills to drive efficiency and speed, expertise in building repeatable data engineering processes, strong project management skills, and a vision for how to deliver data products. Experience with marketing data sets is a huge plus.

Responsibilities:

    • Work closely with marketing and business analysts to understand business requirements.
    • Write sophisticated yet optimized data transformations in SQL and Python.
    • Design and implement data pipelines, both batch and real-time, that produce reliable data for various data consumption use cases.
    • Help manage marketing data platforms, including Segment, Amazon Web Services, and traditional data warehouses.
    • Manage all aspects of dataset design, creation, and curation, including the frameworks used to derive metrics and deliver data products for KPI, visualization, data science, analyst, and stakeholder teams.
    • Build and own the automation and monitoring frameworks that surface reliable, accurate, easy-to-understand metrics and operational KPIs for data pipeline quality and performance to stakeholders.
    • Drive the design, build, and launch of new data models and data pipelines in production systems.
    • Serve as the subject matter expert for data, pipeline design, and related big data and programming technologies.
    • Proactively identify reliability and data quality problems, and drive the triage and remediation process.
    • Partner with data producers to understand data sources, enable data contracts, and define the data models that drive analytics.
    • Partner with analysts and data scientists to deliver reliable data that powers actionable insights.
    • An understanding of data governance practices such as metadata management, data lineage, and data glossaries is a huge plus.
    • Foster strong collaboration among globally distributed team members.
    • Champion operational excellence and continuous improvement with a can-do attitude.

Job Requirements

Qualifications and Skills:

    • B.S./M.S. in Computer Science or an equivalent field, with 5+ years of total experience and 3+ years of relevant experience in the data/data warehousing domain.

    • Solid understanding of RDBMS concepts, data structures, and distributed data processing patterns.

    • Excellent SQL and Python knowledge, plus strong hands-on data modeling and data warehousing skills.

    • Expertise in programming pipelines in languages such as Scala and Java.

    • Expertise in big data technologies such as Hadoop and Spark.

    • Power user and specialist in building scalable data warehouses and pipelines using cloud platforms such as AWS and cloud ETL tools such as Databricks (Spark/Azure).

    • Experience with version control systems (GitHub) and CI/CD tools.

    • Experience with data orchestration and scheduling tools such as Control-M, AutoSys, Tidal, etc.

    • Experience with data visualization tools and packages (e.g., Tableau, Power BI).

    • Strong attention to detail to highlight and address data quality issues.

    • Self-starter: a motivated, responsible, innovative, and technology-driven individual who performs well both independently and as a team member.

If interested, send your resume in Word format to
