KEY SKILLS REQUIRED
· Proficiency in SQL for querying, transforming, and managing data within databases.
· Experience in developing and optimising ETL/ELT pipelines and using dbt for data transformation and modelling.
· Knowledge of data modelling techniques, including star and snowflake schemas, to structure data for efficient analysis.
· Familiarity with cloud platforms such as AWS or GCP, and with data platforms including Databricks, Redshift, BigQuery, and Snowflake.
· Strong Python skills for data manipulation, scripting, and automating tasks using libraries like Pandas and NumPy.
· Expertise in managing data architecture and processing within data warehouse and lakehouse platforms such as Databricks, Redshift, and Snowflake.
· Experience using Git for version control and managing changes to code and data models.
· Ability to automate tasks and processes using Python or workflow orchestration tools like Apache Airflow.
· Skill in integrating data from various sources, including APIs, databases, and third-party systems.
· Ability to ensure data quality by monitoring and validating data throughout the pipeline.
· Strong troubleshooting skills for resolving data pipeline issues and optimising performance.
· Familiarity with analytics tools such as Tableau and ThoughtSpot, and the ability to ensure data compatibility with analytical platforms.
What you’ll get in return
· Competitive base salary
· Up to 20% bonus
· 25 days holiday
· BAYE, SAYE & Performance share schemes
· 7% pension
· Life Insurance
· Work Away Scheme
· Flexible benefits package
· Excellent staff travel benefits
Apply
Complete your application on our careers site.
We encourage individuality, empower our people to seize the initiative, and never stop learning. We see people first and foremost for their performance and potential, and we are committed to building a diverse and inclusive organisation that supports the needs of all. As such, we will make reasonable adjustments for our candidates from interview through to employment.