Data Engineer

Job Description

We’re looking for a thoughtful, curious, and resourceful Data Engineer to join our growing engineering team. You can work effectively both independently and as part of a team. You’ll need to collaborate with multiple teams, including Operations, Product, Frontend Engineering, and Marketing. This position is based in Pittsburgh, PA.

Core duties include:
  • Building and maintaining pipelines to gather, enrich, and merge public and private data sources, making them ready to sing harmony to a customer dataset’s melody
  • Redesigning and further automating our customer dataset analysis and delivery process, handling everything from custom scoring algorithms and anomaly detection to pipeline health monitoring and machine learning solutions
  • Extending our pipelines to work with occasionally quirky client sources, mapping schemas and working within their security requirements
  • Improving data storage, indexing, and the API-based delivery of data to user interfaces
  • Working in an agile team in a scrum process, collaborating closely with other software engineers and product management
Professional Requirements
  • Bachelor’s degree or equivalent experience in mathematics, statistics, economics, computer science, or similar
  • Excellent communication skills
  • 2+ years of experience in data engineering; likely other software engineering experience as well
  • Very comfortable with Python and common accompanying tools, including Pandas
  • Understanding of statistics
  • Experience with using and maintaining relational database systems
  • Familiarity with the AWS ecosystem
  • Authorized to work in the United States
Preferred Skills and Experience
  • Experience in or exposure to the constraints of a startup environment
  • Highly attentive to detail, with a skeptical sixth sense about data quality
  • Ability to work independently in a challenging, fast-paced environment with several ongoing concurrent projects
  • A can-do mentality, with the willingness to roll up your sleeves and take initiative to solve something when necessary
  • Basic knowledge of common machine learning techniques
  • Some experience with ETL processes and data prep tools (we are not dogmatic about tooling, but some conceptual background helps)
  • A curious and eager problem solver, able to teach yourself new skills when needed, with a hunger for building well-designed, high-quality solutions
  • Recognition that there are always multiple answers to a problem, and the ability to engage in constructive dialogue to find the best path forward
  • Ability to commute to the Pittsburgh, PA office daily (once current pandemic constraints are lifted)

CLICK HERE TO APPLY NOW


Principal Engineer

Job Description 

As a principal engineer at BlastPoint, you’ll be responsible for the architecture, design, implementation and deployment of our software-as-a-service platform. You’ll create solutions that allow our customers to understand their large datasets through data visualization and geospatial tools. You’ll work closely with the executive team to translate product vision and business needs into clean and scalable code. We are a small startup, so you will have a lot of responsibility, but also autonomy and flexibility in how you choose to accomplish your goals. As we grow, you’ll collaborate with our data science team to integrate machine learning tasks into our platform.

Professional Requirements
  • 3+ years of professional programming experience
  • Proven experience as a full stack developer or similar role
  • Expert knowledge of Python (strongly preferred) or other object-oriented language
  • Expert knowledge of a web framework, such as Django, Flask or Rails
  • Understanding of REST principles, service-oriented architecture and API development
  • A history of writing clean code, refactoring and participating in code review
  • Knowledge of Unit Testing and Test-Driven Development
  • Strong MySQL or PostgreSQL skills
  • 1+ years of experience with React/Redux
  • Understanding of Linux, particularly Ubuntu
  • Experience with the AWS ecosystem (EC2, RDS, S3, CodePipeline)
  • Familiarity with Docker, Docker Compose and best practices
  • Experience deploying code to cloud-based and on-prem infrastructure
  • Good written and oral communication skills
  • Authorized to work in the United States
  • Ability to commute to the Pittsburgh, PA office daily
Preferred Skills and Experience
  • Experience or familiarity with Django Rest Framework and Swagger
  • Knowledge of Pandas and/or noted experience with ETL operations
  • Experience with Mapbox (experience with vector tiles is amazing!)
  • An interest in geospatial technologies and knowledge (can you name all 50 state capitals?)
  • Experience with CI/CD pipeline tooling and deployments
  • A love of data from all sources, including the Bigfoot Field Researchers Organization (BFRO) and the NUFORC database of UFO sightings in the US and Canada

CLICK HERE TO APPLY NOW