Lead Platform Engineer
- Employer
- Hays
- Location
- United Kingdom, Manchester
- Salary
- Competitive
- Closing date
- 30 Jan 2023
- Sector
- Consultancy
- Job Role
- Cloud Security
- Job Type
- Permanent
Your new company
Your new company is a global leader in digital transformation with 110,000 employees in 73 countries and annual revenue of €12 billion. The European number one in Cloud, Cybersecurity and High-Performance Computing, the Group provides end-to-end Orchestrated Hybrid Cloud, Big Data, Business Applications and Digital Workplace solutions.
Your new role
• Manage FCAPS scenarios in production environment
• Track and manage platform tickets to meet SLA requirements
• Monitor and manage cluster capacity based on customer count and events per second
• Perform Hadoop administration tasks
• Guide platform engineers in fixing day-to-day operational issues
• Set up and manage the HDP platform, handling all Hadoop environment builds, performance tuning and ongoing monitoring
• Develop scripts/tools to automate platform maintenance activities
• Work with sustenance engineering on emergency fixes
• Debug day-to-day job issues on the Hadoop platform and provide solutions
• Perform software release management tasks
• Monitor the health of multiple HDP clusters via centralised dashboards, covering Hadoop services, overall server health, and custom applications running on the clusters
• Troubleshoot log collection and ingestion into our MDR platform via Apache NiFi from network devices (firewalls, switches, routers, proxies, IPS, WAF, etc.), servers, and cloud resources
• Coordinate with Network, Infrastructure, and other organizations as required
• Perform root cause analysis on failed components and implement corrective measures
• Configure high-level and low-level HDP parameters to fine-tune cluster performance
• Manage escalations on FCAPS issues
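To illustrate the health-monitoring and automation side of these duties, here is a minimal, self-contained sketch of the kind of check a platform engineer might script. It parses a service-status payload shaped like the JSON an Ambari-style management API returns and flags services that are not running; the cluster payload, service names and states here are illustrative assumptions, not details from this role.

```python
import json

# Hypothetical service-status payload, shaped like the JSON an
# Ambari-style management API returns; names/states are illustrative.
SAMPLE_STATUS = json.dumps({
    "items": [
        {"ServiceInfo": {"service_name": "HDFS", "state": "STARTED"}},
        {"ServiceInfo": {"service_name": "YARN", "state": "STARTED"}},
        {"ServiceInfo": {"service_name": "KAFKA", "state": "INSTALLED"}},
    ]
})

def unhealthy_services(payload: str) -> list:
    """Return the names of services that are not in the STARTED state."""
    doc = json.loads(payload)
    return [
        item["ServiceInfo"]["service_name"]
        for item in doc["items"]
        if item["ServiceInfo"]["state"] != "STARTED"
    ]

print(unhealthy_services(SAMPLE_STATUS))  # prints ['KAFKA']
```

In practice a script like this would sit behind a scheduler or dashboard and raise a ticket or alert for each flagged service.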
What you'll need to succeed
• Experience designing and operationalising FCAPS (Fault, Configuration, Accounting, Performance, Security) management for Hadoop clusters
• Experience designing automated Hadoop installations
• Deep expertise in managing Hadoop ecosystem components in large production clusters
• Expertise in HDP platform/Cloudera
• Application deployment using Java & Python APIs
• Good scripting knowledge of Bash, Python, Anaconda and Ansible
• Knowledge of automation/DevOps tools: GitHub, Jenkins, Docker, Kubernetes
• Data ingestion, data access and data storage using Hadoop big data tools such as HBase, Flume, Kafka, NiFi and Elasticsearch
• Good hands-on experience with Linux, its commands and scripting is a must
• General operational excellence, including good troubleshooting skills and an understanding of system capacity and bottlenecks, memory management, and performance tuning and optimisation for Linux and Hadoop
• Configuration management and deployment exposure in open-source environments
• Knowledge of Kerberos and Apache Ranger for configuring security
• Excellent communication skills.
• Critical thinker and good problem-solver.
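As a sketch of the capacity-planning and performance-tuning knowledge the list above asks for, the snippet below sizes YARN containers for a single worker node. It follows the general shape of the well-known Hortonworks yarn-utils heuristic; the reserved-memory and minimum-container defaults, and the example node spec, are illustrative assumptions rather than figures from this posting.

```python
def yarn_container_settings(cores, disks, ram_gb,
                            reserved_gb=8, min_container_gb=2):
    """Rough YARN memory sizing for one worker node.

    Loosely follows the Hortonworks yarn-utils heuristic; the
    reserved-memory and minimum-container defaults are assumptions.
    """
    available_gb = ram_gb - reserved_gb          # leave headroom for OS/daemons
    containers = min(2 * cores,                  # CPU-bound ceiling
                     int(1.8 * disks),           # disk-bound ceiling
                     available_gb // min_container_gb)
    mem_per_container_gb = max(min_container_gb, available_gb // containers)
    total_mb = containers * mem_per_container_gb * 1024
    return {
        "yarn.nodemanager.resource.memory-mb": total_mb,
        "yarn.scheduler.minimum-allocation-mb": mem_per_container_gb * 1024,
        "yarn.scheduler.maximum-allocation-mb": total_mb,
    }

# Illustrative node: 16 cores, 8 data disks, 128 GB RAM
print(yarn_container_settings(16, 8, 128))
```

A candidate would be expected to adjust such defaults per workload rather than apply a formula blindly.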
What you'll get in return
• Attractive salary
• 25 days of annual leave + an option to purchase more through our Flexible Benefits
• Flex benefits system - exciting opportunity to choose your own benefits
• Retail discounts
• Pension - matching contribution up to 10%
• Private Medical Scheme
• Life Assurance
• Enrolment in our Share scheme - subject to scheme eligibility criteria
• Unlimited opportunities to learn on our training platforms
What you need to do now
If you're interested in this role, click 'apply now' to forward an up-to-date copy of your CV, or call us now.
If this job isn't quite right for you but you are looking for a new position, please contact us for a confidential discussion about your career.