Data Engineer [Poland]


 


Vertex Solutions has been providing excellent services to our clients in Enterprise Software, FinTech, Insurance, Mobile, R&D, Finance, and IT since 1997. We work for some of the most innovative names in their respective markets.


We operate as partners from a position of trust, developed through our delivery of essential talent time and time again. We have helped to build talented teams of scientific staff and engineers (in Software, Hardware, IT, and Network Support disciplines) from graduate through to board level, allowing our clients to create value and develop their brands.


We provide some of the best technology talent in Europe to a mix of global brand names and cool technology start-ups.


Are you looking for a new challenge as a Data Engineer?

(Candidate must be physically based in Poland)


Our client is one of the largest Financial Institutions and Financial Services organizations in the world, with operations in 64 countries and territories.


This is a job for the boldest problem solvers in the tech industry: using technology to transform one of the world’s leading financial institutions.


About Project:

We are developing cloud-based solutions for all data needs: supporting decision-making, measuring internal and external business, process, and team performance, and enabling a wide range of visualizations. Within a short period, we expect more than 1,000 daily active end users of our data products and supported visualizations.


About Role:

We are looking for a GCP Data Engineer who can design, develop, test, and deploy a series of new data pipelines, data models, and optimized reporting views aggregated into various time intervals and organizational ‘buckets’. You will work closely with our Qlik and UI developers and our core data team in Google BigQuery to prepare data for time-series calculation, aggregation, and visualization, and to integrate our read-write data store with data-interaction functionality such as annotation, attestation, personalized thresholds, and alerts.
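
As a rough, minimal sketch (assuming BigQuery Standard SQL; the dataset, table, and column names below are illustrative, not taken from the client’s environment), a time-bucketed reporting view of this kind might look like:

    -- Illustrative only: aggregate raw events into hourly buckets for visualization.
    CREATE OR REPLACE VIEW reporting.hourly_team_metrics AS
    SELECT
      TIMESTAMP_TRUNC(event_ts, HOUR) AS bucket_hour,
      team_id,
      AVG(latency_ms) AS avg_latency_ms,
      COUNT(*) AS event_count
    FROM analytics.raw_events
    GROUP BY bucket_hour, team_id;

Views like this are what the Qlik or UI layers typically query directly, so the aggregation interval and grouping columns are chosen to match the visualizations they feed.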


Responsibilities:

  • Design, build, test, and deploy Google Cloud data models and transformations in BigQuery and Data Fusion environments (e.g., SQL, stored procedures, indexes, clusters, partitions, triggers).
  • Optimize data views for specific visualization use cases, making use of schema design, partitions, indexes, down-sampling, archiving, etc., to manage trade-offs such as performance versus flexibility (see the table sketch after this list).
  • Review, refine, interpret, and implement business and technical requirements.
  • Contribute to ongoing planning and prioritization by refining User Stories, Epics, and Backlogs in Jira.
  • Onboard new data sources and design, build, test, and deploy cloud data ingestion, pipelines, warehouses, and data models/products (GCP Data Fusion, Spark, etc.).
  • Manage code artefacts and CI/CD using tools like Git, Jenkins, Google Secrets Manager, etc.
  • Estimate, commit, and deliver requirements to scope, quality, and time expectations.
  • Protect the solution with appropriate Authorization and Authentication models, data encryption, and other security components.
  • Deliver non-functional requirements, IT standards, and developer and support tools to ensure our applications are secure, compliant, scalable, reliable, and cost-effective.
  • Ensure a consistent approach to logging, monitoring, error handling, and automated recovery as per organization standards.
  • Write automated unit and regression tests as part of a test-centric development approach.
  • Write well-commented, maintainable, and self-documenting code.
  • Deliver a data warehouse and pipelines that follow API, abstraction, and ‘database refactoring’ best practices in order to support evolutionary development and continual change.
  • Develop procedures and scripts for data migration, back-population, and initialization.
  • Fix defects and provide enhancements during the development period and hand over knowledge, expertise, code, and support responsibilities to the support team.
  • Protect the solution with relevant Data Governance, Security, Sovereignty, Masking, and Lineage capabilities.
  • Maintain good quality and up-to-date knowledge base, wiki, and admin pages of the solution.
  • Peer-review colleagues’ changes.
  • Speak up and help shape how we do things better.
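
As a rough sketch of the partitioning and clustering trade-offs mentioned in the list above (again assuming BigQuery Standard SQL; all names are hypothetical):

    -- Illustrative only: partition by date and cluster by organizational bucket
    -- so dashboard queries scan less data and stay fast as volumes grow.
    CREATE TABLE IF NOT EXISTS reporting.daily_kpi
    (
      event_date  DATE,
      org_bucket  STRING,
      kpi_name    STRING,
      kpi_value   FLOAT64
    )
    PARTITION BY event_date
    CLUSTER BY org_bucket, kpi_name;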

Must-Have Skillset:

  • Proven experience and expertise (4+ years) in the design, development, and administration of traditional and cloud databases, data warehouses, procedures, and products.
  • 1+ years of recent, proven experience developing, refactoring, and optimizing SQL/T-SQL procedures in BigQuery or equivalent cloud databases.
  • Good understanding of GCP core and data products, architecture, and design patterns.
  • Data preparation, wrangling, and refactoring skills, for example as part of a Data Science pipeline.
  • Expert in the preparation, usage, visualization, and editing of data in web, dashboard, or other user interfaces (3-tier architecture, CRUD procedures, etc.)
  • IT methodology/practices knowledge and solid experience in Agile/Scrum
  • Experience in Collaboration tools usage such as JIRA/Confluence/Various board types
  • BS/MS degree in Computer/Data Science, Engineering, Data, or a related subject
  • Excellent communication and interpersonal skills in English. Proficiency in verbal, listening, and written English is crucial.
  • Enthusiastic willingness to learn and develop technical and soft skills rapidly and independently, as needs require.
  • Strong organizational and multi-tasking skills.
  • Good team player who embraces teamwork and mutual support.
  • Interested in working in a fast-paced environment.

Nice-to-Have Skillset:

  • Experience developing BI/MI reports and dashboards in popular tools like Qlik (VizLib Library, VizLib Collaboration, Mashups, etc.), Tableau, Looker, etc.
  • Experience with GCP-based big data / ETL solutions in a DevOps / DataOps model.
  • Experience in deploying and operating Datafusion/CDAP-based solutions.
  • Experience building and operating CI/CD lifecycle management with Git, Jenkins, Groovy, Checkmarx, Nexus, Sonar IQ, etc.
  • Expertise in Java, Python, and DataFlow
  • Broad experience with IT development and collaboration tools.
  • An understanding of IT Security and Application Development best practices.
  • Understanding of and interest in various investment products and life cycles and the nature of the investment banking business.
  • Experience in working with infrastructure teams to deliver the best architecture for applications.
  • Experience working in a global team with different cultures.

Employment Type – B2B


The way we work:

  • An opportunity to broaden and deepen knowledge and expertise on global projects.
  • Be part of a dynamic IT environment
  • Stable job in a professional team.
  • Interesting career path in an international organization.
  • Consistent scope of responsibilities.
  • Contact with top IT technologies available in the market.
  • An environment where everyone has a voice.
  • Work and Learn from teams with mature processes and tools to ensure the best-in-class deliverables.

Benefits:

  • Private medical care and life insurance

Other benefits:

  • A highly skilled tech team that is always ready to help, collaborate, and share knowledge.
  • A clear engineering career path and the possibility to rotate between projects and teams (for the longer term).
  • Occasional hybrid working model after the pandemic (we miss each other a lot!).

Have we sparked your interest?

Get in touch! We are looking forward to speaking to you.


Reach out to me at s.surela@vertex-solutions.com



 
