SQL Developer

Job Role:

  • Design and implement data solutions using the data lakehouse architecture best suited to our needs and use cases, spanning social media sources (such as LinkedIn and Glassdoor), data lakes and Delta Lakes built on the medallion architecture, and analytics across verticals such as IT, HR, and Finance, all on a continually evolving technical stack
  • Integrate data using Synapse pipelines that ingest from disparate sources, including REST APIs (handling complex JSON payloads) and Office 365 applications such as SharePoint and Microsoft Teams, and prepare it for use by data analysts and other data systems (a brief sketch follows this list)
  • Write complex queries as part of data modeling and design databases for applications
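To ground the lakehouse and medallion terminology above, the following is a minimal, hypothetical PySpark sketch of a bronze-to-silver step: it reads raw JSON landed by an ingestion pipeline, flattens a nested payload, and writes a Delta table. The storage paths, container names, and payload schema are illustrative assumptions, not part of the actual stack.

```python
# Minimal bronze-to-silver medallion step (illustrative only; paths, container
# names, and the payload schema below are assumptions, not the real stack).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("bronze_to_silver_example").getOrCreate()

# Bronze layer: raw, nested JSON as landed by an ingestion pipeline.
bronze_df = spark.read.json(
    "abfss://bronze@examplelake.dfs.core.windows.net/linkedin/jobs/"
)

# Silver layer: flatten the nested payload, standardize types, de-duplicate.
silver_df = (
    bronze_df.select(
        F.col("id").alias("job_id"),
        F.col("company.name").alias("company_name"),   # assumed nested field
        F.to_date("postedDate").alias("posted_date"),  # assumed date field
        F.explode_outer("skills").alias("skill"),      # assumed array field
    ).dropDuplicates(["job_id", "skill"])
)

# Persist the cleansed layer as a Delta table (requires a Delta-enabled Spark
# environment such as Synapse or Databricks).
(
    silver_df.write.format("delta")
    .mode("overwrite")
    .save("abfss://silver@examplelake.dfs.core.windows.net/linkedin/jobs/")
)
```

Gold-layer aggregates for verticals such as IT, HR, and Finance would typically be built on top of such silver tables, but that step is beyond this sketch.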

Key Responsibilities:

  • Develops and operationalizes data pipelines to make data available for consumption (BI, advanced analytics, services).
  • Works in tandem with data architects and data/BI engineers to design data pipelines and recommends ongoing optimization of data storage, data ingestion, data quality and orchestration.
  • Designs, develops, and implements ETL/ELT processes using Azure services such as Azure Synapse Analytics and ADLS to improve and speed up delivery of our data products and services.
  • Implements solutions by developing scalable data processing platforms that deliver high-value insights to the organization.
  • Identifies, designs, and implements internal process improvements, such as automating manual processes and optimizing data delivery.
  • Identifies ways to improve the reliability, efficiency, and quality of data management.
  • Communicates technical concepts to non-technical audiences in both written and verbal form.
  • If acting as a lead, performs peer reviews of other data engineers' work

Required Skills:

  • Good understanding of data integration: onboarding and integrating data from external and internal sources through API management, SFTP processes, and other methods using Synapse pipelines
  • Deep expertise in core data platforms: Azure, data lakehouse design, and big data concepts using the Spark architecture
  • Strong knowledge of:
    o Integration technologies: PySpark.
    o Conceptual, logical, and physical database modeling.
    o T-SQL and relational databases: query authoring, stored procedure development, and debugging and optimizing SQL queries (see the sketch after this list).
  • Proven success as a technical lead and individual contributor
  • Familiarity with project management methodologies: Agile, DevOps.
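To make the T-SQL expectations above concrete, here is a small, hypothetical Python sketch using pyodbc to author and run a parameterized T-SQL query against an Azure SQL database. The connection string, schema, tables, and column names are illustrative assumptions only.

```python
# Hypothetical example of running a parameterized T-SQL query from Python with
# pyodbc; the server, database, credentials, and tables are placeholders only.
import pyodbc

conn_str = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=example-server.database.windows.net;"
    "DATABASE=ExampleDW;"
    "UID=example_user;PWD=example_password"
)

# Parameterized T-SQL avoids injection and keeps the cached plan reusable.
query = """
    SELECT   d.department_name,
             COUNT(*)      AS headcount,
             AVG(e.salary) AS avg_salary
    FROM     dbo.employees e
    JOIN     dbo.departments d ON d.department_id = e.department_id
    WHERE    e.hire_date >= ?
    GROUP BY d.department_name
    ORDER BY headcount DESC;
"""

with pyodbc.connect(conn_str) as conn:
    cursor = conn.cursor()
    cursor.execute(query, ("2023-01-01",))
    for department_name, headcount, avg_salary in cursor.fetchall():
        print(department_name, headcount, avg_salary)
```

In practice, much of this logic would live in stored procedures and be tuned against execution plans, in line with the query authoring and optimization responsibilities listed above.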

Qualifications:

  • Bachelor’s degree (or equivalent) in computer science, information technology, engineering, or related discipline
  • Experience in building or maintaining ETL processes
  • Professional certification