Remote | Full-time
ScaleCapacity is looking for a savvy Data Engineer to join our growing team of analytics experts. The hire
will be responsible for expanding and optimizing our data and data pipeline architecture, as well as
optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data
pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up.
The Data Engineer will support our software developers, database architects, data analysts, and data scientists on data initiatives and will ensure that data delivery architecture remains consistent and optimal across ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams,
systems and products. The right candidate will be excited by the prospect of optimizing or even re-designing
our company’s data architecture to support our next generation of products and data initiatives.
Responsibilities:
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing
data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a
wide variety of data sources using SQL and AWS ‘big data’ technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer
acquisition, operational efficiency, and other key business performance metrics.
- Create data tools for analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.
Qualifications:
- Experience building and optimizing ‘big data’ data pipelines, architectures, and data sets.
- AWS or Snowflake certifications.
- Experience with big data tools: Hadoop, Spark, Kafka, Storm, Spark-Streaming etc.
- Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
- Experience with AWS cloud services: EC2, EMR, RDS, and Redshift.
- Experience with relational SQL and NoSQL databases.
- Experience with object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.
- Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
- BA/BS degree in Computer Science, Engineering, Management Information Systems, or equivalent practical experience.
- Minimum of three (3) years of experience in operational/security cloud-based environments (AWS and SaaS experience).
- Knowledge of risk assessment tools, technologies, and methods.
- Strong technical proficiency in large-scale systems, Linux/server operating systems, databases, cloud computing, big data, Hadoop, and networking.
- Experience designing secure networks, systems, and application architectures, including basic encryption theory, key management (PKI), and compliance automation.