This posting has expired and is no longer open for applications.

Job description

The position:

We are looking for someone who can move between the big picture one day and tinkering with code the next. We want go-getters who never stop learning, and people who like to experiment as much as they like to deliver. We want passionate, thoughtful technologists who spread knowledge, bring empathy, and solve problems with technology at every opportunity.


Responsibilities:


  • Creating multiple graph environments to draw visualisations and extract data efficiently
  • Applying machine learning algorithms to monitor changes in the graph and report updates to risk scores and alerts
  • Time-lining the data in the graph to provide an ‘event’ view of policies and the book
  • Improving the company’s entity resolution process in the graph; ideas are welcome on how to take imperfect graph data and drive multi-factor identity assignment that reduces duplication (e.g. when someone changes a date of birth or driver’s licence number)
  • Enabling mass scaling and simplifying additions and changes to the graph in a production environment: speed to add nodes/features, speed to load/change, speed to analysis, speed to reporting
  • Providing user-friendly analytical tools that let users model risk scenarios, discern how common they are, and report back quickly
  • Fusing structured and graph data into useful event- and transaction-level reporting at policy and book level
  • Supporting efforts to gamify idFusion and the graph toolkit to make it more appealing and engaging for users
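To make the entity-resolution bullet above concrete, here is a minimal sketch of multi-factor identity assignment, where no single field decides identity and a changed date of birth or licence number can be outvoted by the remaining factors. The field names, weights, and threshold are illustrative assumptions, not part of the posting:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PartyRecord:
    name: str
    dob: Optional[str] = None        # ISO date string, e.g. "1980-04-02"
    dl_number: Optional[str] = None  # driver's licence number
    address: Optional[str] = None

# Hypothetical weights: each factor contributes partial evidence of identity.
WEIGHTS = {"name": 0.35, "dob": 0.25, "dl_number": 0.25, "address": 0.15}
MATCH_THRESHOLD = 0.6

def _field_match(a: Optional[str], b: Optional[str]) -> float:
    """Exact match after normalisation; missing fields contribute nothing."""
    if a is None or b is None:
        return 0.0
    return 1.0 if a.strip().lower() == b.strip().lower() else 0.0

def identity_score(a: PartyRecord, b: PartyRecord) -> float:
    """Weighted multi-factor similarity between two party records."""
    return sum(w * _field_match(getattr(a, f), getattr(b, f))
               for f, w in WEIGHTS.items())

def is_same_entity(a: PartyRecord, b: PartyRecord) -> bool:
    return identity_score(a, b) >= MATCH_THRESHOLD

# Same person with a changed DOB: name + DL# + address still carry 0.75.
a = PartyRecord("Jane Doe", "1980-04-02", "D1234567", "1 Main St")
b = PartyRecord("Jane Doe", "1980-04-20", "D1234567", "1 Main St")
print(is_same_entity(a, b))  # True: 0.35 + 0.25 + 0.15 = 0.75 >= 0.6
```

In a production graph, the exact-match comparator would typically be replaced with fuzzy string similarity and the merge decision would collapse duplicate nodes rather than just return a boolean, but the weighted multi-factor scoring is the core idea.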


Requirements:


  • At least 5 years’ experience designing, implementing, and operationalising graph data models with DataStax or similar platforms
  • Hands-on development experience with the Neo4j graph database platform or similar
  • Understanding of graph data import/export approaches, tooling, and data quality issues
  • Strong skills in graph query languages (preferably Gremlin), search/indexing (Solr, Lucene, or Elasticsearch), Cassandra, and Hadoop tooling
  • Fluency in Java or Scala and related microservice frameworks such as Spring Boot or Lagom
  • Experience with stream processing technologies such as Kafka, Spark, Storm, or Hadoop MapReduce
  • Proven experience with multiple back-end systems using both SQL and NoSQL document databases (Couchbase, MongoDB, RethinkDB)
  • Experience with both on-premise and cloud platforms (AWS, Azure, Google Cloud)
  • Experience working within agile development teams, methodologies, and toolsets
  • Degree emphasis in Computer Science, IT, Business/Tech Management, Computer Engineering, MIS, or Mathematics desired
  • Strong written and verbal communication, presentation, client service, and technical writing skills, coupled with a strong interest in further integrating enterprise business processes with technology
Seniority level

Mid-level experience

Employment type

Full-time

Occupation

Information technology

Industries

Information technology and services
