Have you been following the crypto craze of the last few years? Have you been itching to break into this industry? Well, this company has built one of the world's leading compliant cryptocurrency platforms, serving over 30 million accounts in more than 100 countries. Through multiple successful products and vocal advocacy for blockchain technology, they have played a major part in the mainstream awareness and adoption of cryptocurrency. They are proud to offer an entire suite of products that help build the cryptoeconomy and increase economic freedom around the world.
The Analytics team is a full-scale vertical team. They own blockchain data storage, analytics platforms, and the products that enable deep understanding of blockchain data, making it easy to use both internally and externally. As an engineer you will contribute to the full spectrum of their systems: from foundational processing and data storage, through scalable pipelines, to the frameworks, tools, and applications that make that data available to other teams and systems.
Required Skills & Experience
- Exhibit the company's core cultural values: positive energy, clear communication, efficient execution, continuous learning
- A data-oriented, product-focused mindset
- Experience building backend data systems at scale with parallel/distributed compute
- Experience building API layers and microservices
- Experience with Python, Go, and/or Java/Scala
- Knowledge of SQL

Desired Skills & Experience
- Experience with data tools and frameworks such as Airflow, Spark, Flink, Hadoop, Presto, Hive, or Kafka
- A good understanding of common blockchain structures and features
- Experience with AWS, especially EMR, S3, Glue, Kinesis, and IAM
- A Computer Science or related engineering degree

What You Will Be Doing
- Build out and operate foundational data infrastructure: storage (cloud data warehouse, S3 data lake), orchestration (Airflow), processing (Spark, Flink), streaming services (AWS Kinesis and Kafka), BI tools (Looker and Redash), a graph database, and a real-time, large-scale event aggregation store.
- Build the next iteration of the ingestion pipeline for scale, speed, and reliability. Read from a variety of upstream systems (MongoDB, Postgres, DynamoDB, MySQL, APIs) in both batch and streaming fashion, including change data capture, and make the pipeline self-service for non-engineers.
- Build and evolve the tools that empower colleagues across the company to access data and build reliable, scalable transformations. This includes UIs and simple frameworks for derived tables and dimensional modeling, APIs and caching layers for high-throughput serving, and SDKs for orchestrating complex Spark and Flink pipelines.
- Build systems that secure and govern data end to end: control access across multiple storage and access layers (including BI tools), track data quality, catalogue datasets and their lineage, detect duplication, audit usage, and ensure correct data semantics.
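To make the orchestration responsibility concrete: tools like Airflow schedule tasks as a directed acyclic graph, running each step only after its upstreams finish. A minimal sketch of that dependency ordering, using Python's standard-library `graphlib` with invented task names (these are illustrative, not the team's actual pipeline):

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline stages mapped to their upstream dependencies --
# a toy stand-in for the kind of DAG an orchestrator like Airflow manages.
pipeline = {
    "extract_events": set(),
    "extract_accounts": set(),
    "transform_joined": {"extract_events", "extract_accounts"},
    "load_warehouse": {"transform_joined"},
}

# A valid execution order: both extracts run before the transform,
# and the warehouse load runs last.
order = list(TopologicalSorter(pipeline).static_order())
print(order)
```

In a real Airflow deployment the same shape is expressed as operators and `>>` dependencies, but the scheduling guarantee is identical: no task starts before its parents complete.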
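The ingestion bullet mentions batch reads with change data capture. One common pattern is watermark-based incremental ingestion: each batch pulls only rows modified since the last run, upserts them by primary key, then advances the watermark. A toy sketch, where the table shape and the `id`/`updated_at` columns are assumptions for illustration, not the actual schema:

```python
# Hypothetical source rows; the third is a later update to row 1.
source_table = [
    {"id": 1, "updated_at": 10, "balance": 100},
    {"id": 2, "updated_at": 20, "balance": 250},
    {"id": 1, "updated_at": 30, "balance": 175},
]

def ingest_batch(source, warehouse, watermark):
    """Upsert rows newer than `watermark`; return the advanced watermark."""
    new_rows = [r for r in source if r["updated_at"] > watermark]
    for row in new_rows:
        warehouse[row["id"]] = row          # upsert keyed on primary key
    return max([watermark] + [r["updated_at"] for r in new_rows])

warehouse = {}
wm = ingest_batch(source_table, warehouse, watermark=0)
print(wm, warehouse[1]["balance"])  # watermark advanced; row 1 holds latest update
```

Production CDC (e.g. reading a database's replication log) replaces the timestamp filter with a log offset, but the upsert-and-advance loop is the same core idea.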
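The governance bullet mentions cataloguing datasets and their lineage. Lineage is naturally a graph problem: each dataset records its direct parents, and an audit walks that graph transitively. A toy illustration with hypothetical dataset names (not the team's real catalogue):

```python
# Each dataset maps to the set of datasets it is derived from.
lineage = {
    "raw_events": set(),
    "clean_events": {"raw_events"},
    "daily_volumes": {"clean_events"},
    "exec_dashboard": {"daily_volumes", "clean_events"},
}

def upstream(dataset, catalogue):
    """All transitive ancestors of `dataset` -- what a lineage audit inspects."""
    seen = set()
    stack = list(catalogue.get(dataset, ()))
    while stack:
        parent = stack.pop()
        if parent not in seen:
            seen.add(parent)
            stack.extend(catalogue.get(parent, ()))
    return seen

print(sorted(upstream("exec_dashboard", lineage)))
```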
- Competitive, above-market compensation and benefits
Applicants must be currently authorized to work in the United States on a full-time basis now and in the future. - provided by Dice