We are looking to fill a new Data Analyst role with our direct health insurer client in NYC. The position is remote for now but will require being local to NYC once the pandemic is over. The candidate will focus on optimizing and building our marketing data infrastructure, working alongside a wide variety of data, product, and business teams. This person will work closely with marketing management, data science, data visualization, and analytics teams, building the data sets and data pipelines (using multiple data tools) that those teams consume to produce business-driven insights, data solutions, and campaigns.
The right candidate will have strong experience with data infrastructure, data architecture, ETL, SQL, automation, and data frameworks and processes for rapidly integrating disconnected and disparate data sources into automated datasets for analytics consumption. The candidate will also have a proven track record working with enterprise metrics, strong operational skills to drive efficiency and speed, expertise in building repeatable data engineering processes, strong project management skills, and a vision for how to deliver data products. Experience with marketing data sets is a huge plus.
Responsibilities:
Job Requirements, Qualifications, and Skills:
B.S./M.S. in Computer Science or an equivalent field, with 5+ years of total experience and 3+ years of relevant experience in the data/data warehousing domain.
Solid understanding of RDBMS concepts, data structures, and distributed data processing patterns.
Excellent SQL and Python knowledge; strong hands-on data modeling and data warehousing skills.
Expertise in building data pipelines in languages such as Scala and Java.
Expertise in big data technologies such as Hadoop and Spark.
Power user and specialist in building scalable data warehouses and pipelines using cloud platforms such as AWS and cloud ETL tools such as Databricks (Spark/Azure).
Experience with version control systems (e.g., GitHub) and CI/CD tools.
Experience with data orchestration and scheduling tools such as Control-M, AutoSys, and Tidal.
Experience with data visualization tools and packages (e.g., Tableau, Power BI).
Strong attention to detail to identify and address data quality issues.
Self-starter: a motivated, responsible, innovative, and technology-driven individual who performs well both independently and as part of a team.
If interested, send your resume in Word format to