Big Data Engineer (Software Engineering) x2, Open to flexible working

Bristol
£58,941 - £65,490 per annum + Good Package
15 Mar 2021
08 Apr 2021
044088
Lloyds Banking Group
IT
Permanent

Agile Working Options

Other Agile Working Arrangements / Open to Discussion

Job Description

Big Data Engineer (Software Engineering) x2 positions

Company: Lloyds Banking Group, Data Content Services (DCS), Group Transformation

Location: Bristol (please only apply if able to work 2-3 days per week in this location post-covid)

Salary & Benefits: £58,941 to £65,490, plus an annual personal bonus, up to 15% employer pension contribution, a 4% flexible cash pot, private medical insurance, and 30 days' holiday plus bank holidays. We also offer flexible working hours and agile working practices. Working for Lloyds is great!

Who are Lloyds Banking Group?

As the UK's largest retail and commercial bank, we have a footprint that touches nearly every community and household in the UK. That gives us a big responsibility to support the UK economy, and we have a clear strategy to put customers first and achieve our vision of being the best bank for our customers.

Who are Data Content Services & Group Transformation?

We are responsible for Hadoop-based Systems of Insight within Lloyds Banking Group, the largest retail bank in the UK. Whilst currently on-prem, we are moving to the cloud, making this a fantastic time to join our journey. We work closely with strategic suppliers and partners to provide the data solutions, technology and processes that enable the Group's Business Value Streams to process, analyse and understand the data they use day to day. This lab therefore plays a key role in helping the Group deliver its stated mission of being the Best Bank for Customers and Helping Britain Prosper.

We are the owners of the Enterprise Data Hub - a Hadoop implementation based upon Hive and Spark, currently favouring Scala. Lloyds Banking Group has also signed up to a strategic partnership with Google to use Google Cloud Platform (GCP) services to create a new strategic platform for the Group, preparing our Bank of the Future service for customers. We anticipate significant opportunities to review our systems, data processing methods and approaches, and you would be a key part of that work.
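For illustration only, here is a minimal sketch (in Scala, the language we currently favour) of the kind of Hive-plus-Spark batch processing this role involves; the table and column names below are hypothetical and not real Enterprise Data Hub objects:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object DailyBalanceSummary {
  def main(args: Array[String]): Unit = {
    // Enabling Hive support lets Spark read warehouse tables directly.
    val spark = SparkSession.builder()
      .appName("daily-balance-summary")
      .enableHiveSupport()
      .getOrCreate()

    // Hypothetical source table, purely for illustration.
    val transactions = spark.table("edh.transactions")

    // Aggregate one day's postings per account.
    val dailyTotals = transactions
      .filter(col("posting_date") === "2021-03-15")
      .groupBy("account_id")
      .agg(sum("amount").alias("daily_total"))

    // Write the result back to a (hypothetical) Hive table.
    dailyTotals.write.mode("overwrite").saveAsTable("edh.daily_balance_summary")

    spark.stop()
  }
}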

What exactly will I be responsible for?

As a Big Data Engineer you will deliver the highest quality data processing and analytical systems, drawing upon your engineering and coding expertise, whilst being open-minded to the opportunities the cloud provides.

You'll understand key business and lab priorities, deliver code, facilitate solution designs and identify potential technical impediments then work to resolve them.

As a senior engineer you will help your team be successful by embedding engineering best practices, making sure your team are focused on quality as well as being aligned with the wider Lab. You will help to troubleshoot problems as well as coach and mentor others in the team.

You will be innovative - constantly looking for better ways of solving problems and delivering more efficient solutions, especially using automation and DevOps methodologies. We would like to see you "mastering the art of laziness" and implementing automation and re-use wherever suitable and advantageous to do so.

We are a huge organisation with lots of moving parts, complex systems, processes and procedures, so you should be comfortable with ambiguity and have experience of delivering in both traditional Waterfall and fast-path Agile methods.

What do we want to see from applicants?

We like to see applicants from diverse organisational, cultural and technological backgrounds, and believe this is critical to success. In terms of minimum criteria, you will need to demonstrate prior experience with big data systems (large-scale Hadoop, Spark, Beam, Flume or similar data processing paradigms), along with associated data transformation and ETL experience.

This is a wholly hands-on role, so you will need Big Data experience within Hadoop technologies and specific prior experience of coding in Java or Scala. We also use HiveQL and Pig, so experience of these would be a bonus. Going forward we anticipate more Python and potentially Beam, so familiarity with these may help, but we will provide training as required.

On a personal level we would want to see a passion for what can be done with big data systems, excellent people and communication skills, including the ability to influence and effectively communicate with a wide variety of technical and non-technical staff.

What support will I get and how will my career grow?

As a multi-brand, multi-channel business, we have the scale and breadth to provide you with a diverse range of training and development opportunities, helping you achieve a rewarding and fulfilling career. The sheer size of Lloyds provides many opportunities.

If you succeed, there will be opportunities to grow within the role or to move into other disciplines within Enterprise Data, across Group Transformation, or in other parts of Lloyds Banking Group.

Join us and be part of an inclusive, values-based culture focused on making a difference.

Together we make it possible.