Out of the fires that forged Strawhouse, Canada’s fastest-growing startup of 2017, comes Uncoil, a startup taking the confusion out of data visibility and interpretation for eCommerce brands. Leveraging the power of machine learning and real-time data pipelines, we unify disparate data sources, turning them into actionable insights for digital brands. Uncoil combines robust data visualization and recommendation tools to solve marketing and business problems. Goodbye spreadsheets, goodbye pivot tables, goodbye not knowing what to act on.
- We are a sister company to Strawhouse, one of the largest Canadian-based Facebook advertising partners and Canada’s fastest-growing startup in 2017.
- We encourage everyone here to ‘be themselves’. Our strength comes from our diversity, which means a casual office dress code and environment.
- We are headquartered in
downtown Kelowna, BC, with an office filled with high performing peers.
- We offer snacks, drinks, team lunches on Fridays, and all the other tech-company accoutrements, including billiards and ping pong.
You enjoy rolling up your sleeves and getting right into the code. You strive to build platforms and products people love. You have a proven track record as a reliable development-team contributor across the stack, with a keen interest and skills in backend development, infrastructure, scalability, performance, and data science.
In this role, you will be a key part of the engineering team and a major contributor to our sprint velocity. You will bridge the Data Science department and the Engineering team, accountable for the deployment and implementation of the data science models that feed our insights engine. You enjoy being a key resource for data-related decisions and for pushing our adoption of new data sources and data science models to new heights.
- Work on a data pipeline that fetches data from third-party APIs, leveraging Apache Kafka, Apache Parquet, and Apache Spark SQL
- Develop new Apache Spark queries as needed using Spark SQL
- Work with our data science team to productionize machine learning logic using Spark MLlib, deep learning libraries, and in-house models built in Python
- Improve the data pipeline for scale
- Deploy the data pipeline on Kubernetes using Google Cloud Platform
- Continuously learn the latest best practices and technology standards to ensure we are always using the highest-leverage tools and techniques
- Work directly with the Product team
- Assist in gathering requirements for the deployment, implementation, and iteration of the in-house data science models
- Assist with ensuring the Data Science department has the required data for exploration and experimentation
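To give candidates a feel for the unification work described above, here is a minimal sketch in plain Python of the kind of normalization step such a pipeline performs before records reach Kafka or Parquet. The two source formats and all field names are hypothetical illustrations, not the actual partner APIs; in production this logic would run inside the Spark-based pipeline rather than over in-memory dicts.

```python
from datetime import datetime, timezone

def normalize_record(source: str, raw: dict) -> dict:
    """Map one raw record from a third-party API into a unified schema.

    Both source formats below are hypothetical examples used only to
    illustrate schema unification across disparate data sources.
    """
    if source == "ads_api":
        return {
            "source": source,
            "campaign_id": str(raw["campaignId"]),
            "spend_usd": raw["cost"] / 1_000_000,  # cost reported in micros
            "ts": datetime.fromtimestamp(raw["epoch"], tz=timezone.utc).isoformat(),
        }
    if source == "shop_api":
        return {
            "source": source,
            "campaign_id": raw["utm_campaign"],
            "spend_usd": float(raw["spend"]),
            "ts": raw["created_at"],  # already an ISO-8601 string
        }
    raise ValueError(f"unknown source: {source}")
```

Once records share one schema, downstream Spark SQL queries and ML features can treat every source uniformly.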
- 5+ years of experience in the engineering field, with a track record of making strong architectural decisions
- Advanced working knowledge of
web frameworks and libraries in Python
- Knowledge of Apache Kafka and Apache Parquet (data storage) is a big plus
- Strong knowledge of GCP
Products, Kubernetes, and Docker
- Working knowledge of
- Comfort with a culture of code reviews and close collaboration with others
- Experience with both relational
and non-relational databases
- Degree in Computer Science or a related field, or equivalent experience
- Experience with Agile software development
- Detail-oriented, comfortable with ambiguity, and able to prioritize work for self and others efficiently
- Experience in working with Data
Science projects is an asset.
- Experience in Ad-tech or
Mar-tech is an asset.
- Have at least 5 hours of work
overlap with Pacific Time Zone (PST)
How to Apply
Please APPLY by visiting our careers page: