
Your Role

OPIS, part of IHS Markit, is currently looking to expand its Romanian Data Science and Machine Learning Engineering team, based in Bucharest. For this role, we are looking for a unique mix of solid technical capabilities blended with strong interpersonal skills.

You enjoy working with data at every level, from the infrastructure up to the most complex data processing pipelines, with a strong preference for Big Data, Data Science, and Machine Learning.

Your Main Responsibilities

  • Design and develop data pipelines to support data science and ML projects
  • Write complex, efficient SQL or Python code to optimize data pipelines that ingest, cleanse, transform, and integrate large volumes of data in preparation for ML analysis and modeling
  • Write Python code that turns our model algorithms into production-ready code
  • Work extensively with AWS services to deploy and run our ML code via CI/CD pipelines and containers
  • Be part of an Agile team using the company’s latest ML Operations Lifecycle Framework, working closely with team members from Romania and the US
  • Adhere to best-practice development standards for MLOps design and architecture
  • Believe in high quality, creativity, initiative and self-management in everything you work on

Your Team

OPIS, part of IHS Markit, is one of the world's most comprehensive sources for petroleum pricing and news information. OPIS provides real-time and historical spot, wholesale/rack and retail fuel prices for the refined products, renewable fuels, and natural gas and gas liquids (LPG) industries.

Your team focuses on developing the data infrastructure that supports our large-volume data analytics processing, data science, and machine learning projects. This is a unique opportunity to join a fast-growing team that will become an important part of global OPIS IT, working closely with OPIS IT headquarters in Rockville, US.

Your Expertise

You Have

  • 5 years of experience in data management technologies (Microsoft SQL Server, Postgres, advanced SQL coding, relational database design, data warehousing)
  • 3 years of experience developing with Big Data technologies (Spark, AWS EMR) to handle ML model processing (training, evaluation, and inference)
  • 3 years of experience writing Python or PySpark
  • 3 years of experience writing ETL processes
  • Experience working in Linux and Windows environments

It’s a Bonus If You Also Have

  • 1 year of experience in writing infrastructure as code for AWS cloud services (Terraform, Step Functions, CloudWatch)
  • 1 year of experience working with the Snowflake cloud data warehouse, Parquet file formats, and AWS S3 storage
  • 1 year of experience creating CI/CD deployment pipelines in Azure DevOps
  • 1 year of experience working with containers (Docker, Kubernetes)
  • 1 year of experience with ML workflow software (AWS SageMaker)
  • 1 year of experience with BI software (Microsoft Power BI)

You Are

  • Someone who enjoys implementing new technologies and coming up with innovative solutions
  • A team player who values teamwork and takes pride in both your own work and your team’s success

What We Offer

  • Attractive benefits package (medical services, special discounts for gyms, meal vouchers)
  • Ongoing education (participation in conferences and training)
  • Access to the most interesting information technologies
  • Flexible working hours
  • Work from home
  • Three days for charity/volunteering
  • Chill-out & fun room (pool table, PlayStation)
  • Fruit days; coffee, tea, and chocolate
  • New and modern office, easy to access (near the Aurel Vlaicu metro station), with spacious desks and the latest technologies and equipment
