Jobs at Bidgely!
What can Artificial Intelligence do to reinvent the energy industry? This is your opportunity to join a world-class team and help us figure that out.
A growth-phase start-up headquartered in the heart of Silicon Valley, Bidgely is transforming the way utility customers use energy. By combining the power of machine learning and behavioral insights, Bidgely provides a suite of enterprise solutions that help customers save energy and enable utilities to build enduring customer relationships worldwide.
It’s an exciting time in the company’s evolution. We just closed our Series C funding round, have won some of the biggest names in the industry as customers, and are breaking ground on new products and markets. In addition to our headquarters in Mountain View, we have a global presence in Europe, India, and Asia Pacific.
Data Science is the blood that runs in our veins at Bidgely. It is the centerpiece of our technology, our products and our business.
As a Data Engineer, you will work as part of the data science team at Bidgely and play a key role in data crunching as well as the development of our groundbreaking algorithms. You will work with a plethora of data sources and storage systems to help a number of teams achieve important business objectives. We are looking for someone who is a self-starter and enjoys swimming in an ocean of datasets of all kinds. You should expect to learn new technologies and start contributing from day one. You should also be a great communicator, as you will work closely with our product and customer success teams.
Responsibilities:
- Own key datasets used to analyze business objectives and to develop data science algorithms
- Develop tools to help with data retrieval and analysis
- Procure and curate datasets for key initiatives
- Perform ad-hoc data analysis to help debug or improve existing products
- Work with data science, product and customer success teams on key business priorities
- Develop a keen understanding of the business problems being solved and their relationship to the datasets under analysis
Requirements:
- BS/MS in an engineering discipline
- Fluent in Python, Java, and SQL; a clean coding style is a must
- Ability to quickly transform data in common formats such as CSV, XML, and JSON
- Comfortable with AWS pipeline components such as S3, Redshift, EMR, RDS, Firehose, and SQS
- Experience with data visualization / plotting libraries
- Fluent in data analysis using Excel
- Knowledge of data analysis tools such as Looker or Tableau is a plus
- Practical experience crunching large amounts of data
- Strong communication and collaboration skills, with a keen aptitude for large-scale data analysis and a knack for identifying key insights in data
- Ability to produce clean documentation
To apply for this position, please email your resume to email@example.com.