The rise of cloud-based technology has enabled a fast-paced expansion of external attack surfaces, and as cybersecurity rises up the C-level priority list, it has become clear that organisations must think about risk mitigation in new ways. Josh Neame, CTO at BlueFort, tells us how stripping back these systems and taking an intelligence-led approach to visibility is the most effective route to avoiding future threats.
What are the biggest challenges organisations face in the current cybersecurity climate?
Looking across all of the organisations we support, visibility is one of the primary cybersecurity challenges we see security teams facing today. It’s not a new issue, but the changing nature of the landscape means the scope of the challenge is far wider. The shift to remote working during the pandemic pushed the workforce away from the office, but we’re now seeing the tooling move beyond the four walls of the traditional office too, with enterprises taking on more Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS) and Infrastructure-as-a-Service (IaaS) solutions. As a consequence, IT is becoming increasingly sprawled out; where security teams traditionally needed to focus visibility on a single location, now they face an ever-changing and expanding list of locations, users, devices and services.
BlueFort conducts an annual survey of UK CISOs and the visibility challenge was a common theme in our 2022 study. More than half of those surveyed don’t know where all their organisation’s data is or how it is protected. Many have also lost track of corporate devices and left legacy systems unmonitored.
The key takeaway is that security teams are essentially losing track of their workforce, so regaining visibility is one of the key challenges facing organisations. You can’t build intelligence and put controls in place unless you have visibility, so this is a fundamental first step in a long-term security strategy.
Why is it vital for decision-makers to understand their users, data and assets while protecting their infrastructure?
We are starting to see a positive change in how people think about cybersecurity and this is being driven at a number of different levels. The board is more engaged with cybersecurity, particularly due to coverage of cybersecurity in the press, high-profile fines and new legislation. But equally, people in general are paying more attention to security, both in their personal lives and as employees. Even something as simple as people setting up multi-factor authentication (MFA) on their Facebook accounts shows a mainstream security-driven culture is starting to gain traction.
Traditionally, security teams worried about the infrastructure – the focus was centralised, everything was under lock and key in your data centre (or your hosted data centre) and you knew this was where all of your important data was. The IT sprawl we’re seeing means understanding your infrastructure is no longer enough. IT teams must now look beyond infrastructure to try and understand their users: who are the users, where are they, what are they doing, what devices are they on, what assets and services do they need to do their job? These questions now play a far more important role in protecting data in a modern IT environment; infrastructure alone can no longer tell you what data you have or where it is.
It’s frightening to think that more than half of the CISOs we surveyed recently don’t know where all their data is or how it’s protected. If your organisation works with sensitive data – whether that’s personally identifiable information (PII) or PCI data – it’s really critical that you don’t just understand where your infrastructure is, but you know how your users interact with it, where your data is being stored, how securely it’s being stored, and whether it can be audited.
Fundamental to this is understanding what assets your organisation has. Assets have changed – we’ve moved on from the days of restricted company laptops, staff are now using iPads, working in cafes and want access to data from their personal devices. We are seeing a shift to a Zero Trust access model, but essentially you need to have that overall visibility – of infrastructure, assets, users, data – to give you a true understanding of how the business is running, what’s working well and where the gaps are. Then you apply intelligence to fill those gaps, which could mean bringing in new controls, processes, people or technology. You need to understand the entire sphere to put effective strategies and policies in place.
In your opinion, why are external attack surfaces increasing?
This goes back to the changing nature of the business IT landscape and the rapid adoption of cloud. Everyone is navigating their own cloud journey right now and the two are interconnected; external attack surfaces are expanding as a result of increased cloud adoption.
Traditionally, most things were behind a firewall, with perhaps a few select services on the internet – everyone knew what they were and how to put the right controls around them. Now, it’s common for organisations to have disparate teams that aren’t talking to each other. For example, the DevOps team could be running wild in Azure or AWS, while the security team chases around trying to plug the gaps they’re creating. Then, on top of this, the workforce is heavily distributed across the country, or even the globe, and all the many SaaS, PaaS and IaaS services are hosted outside the organisation in the public cloud.
Consequently, the external attack surface starts to sprawl. But the challenge goes beyond simply being able to account for what the attack surface actually looks like; the business risk and attribution factors are equally as important. Organisations need to be able to understand what the risk is to the business for any one individual service, or which part of the business is responsible for managing and securing it.
The way cloud services work means organisations may not always realise they are growing the attack surface by using them. A good example is if you were to start pushing public key infrastructure (PKI) certificates out to some of these public cloud services. It could be easy not to realise that something you own now sits outside your perimeter and – if it’s visible to the public – is a potential threat to the business.
Fundamentally, managing the external attack surface is about understanding and classifying business risk. We know cybersecurity never ends – it’s a constant cycle of gaining visibility, identifying and mitigating potential risks, and then putting controls in place.
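The attribution gap described above – public-facing services nobody can account for – can be surfaced with a very simple inventory triage. This is a minimal sketch, not any particular tool: the asset fields, hostnames and team names are all hypothetical, and a real inventory would come from an attack surface management platform rather than a hard-coded list.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExternalAsset:
    hostname: str
    owner: Optional[str]  # team responsible for the service, if known
    exposure: str         # "public" or "internal"

def triage(assets):
    """Flag assets that are publicly visible but have no clear owner --
    the unattributed sprawl that quietly grows the attack surface."""
    return [a for a in assets if a.exposure == "public" and a.owner is None]

# Hypothetical inventory for illustration
inventory = [
    ExternalAsset("app.example.com", "platform-team", "public"),
    ExternalAsset("staging-api.example.com", None, "public"),  # forgotten?
    ExternalAsset("build.internal.example.com", "devops", "internal"),
]

for asset in triage(inventory):
    print(f"UNATTRIBUTED PUBLIC ASSET: {asset.hostname}")
```

Even this crude split forces the two questions the text raises: what does this service expose, and which part of the business is responsible for securing it?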
Why is a lack of visibility a root cause for these challenges?
It’s important to realise that intelligence and information are very different things; you can have a pile of information but if it provides no value to you then you will derive zero intelligence from it. For security teams, it’s easy to get caught in a situation where you have too much information with very little context. This creates noise and prevents you from gaining true visibility.
Visibility enables you to move to a proactive mindset – you know where the holes are first and this allows your security team to be ahead of the curve. Having visibility into potential issues before anyone is able to leverage them means you can avoid trying to remediate them reactively, once an incident has occurred. Even if you do encounter negative outcomes, and an attacker gets in, if you have been continuously validating your existing security stack, your tools and your processes, then you are in a much better place to understand what to do next, make the right decisions and mitigate those vulnerabilities.
Visibility is not a panacea – there are no silver bullets in cybersecurity – but it is a key factor in solving these challenges. Many of the issues we see could have been solved with visibility, had the organisation anticipated the problem ahead of time.
It’s critical to know where the problems are – whether that’s the external attack surface, internal network, or the intelligence services you’re getting specific to your industry – as well as having a clear understanding of your internal tooling, processes and people, so you can put the right mitigations in place and put things on a risk register. Visibility isn’t going to fix all of your problems, but without that you’re feeling around in the dark for a problem you can’t see. Greater visibility stacks the odds of success in your favour.
Can you define ‘Intelligence Led Visibility’ and explain its role in combating cyberthreats?
This goes back to the difference between intelligence and information. Intelligence is the product of information, so intelligence-led visibility is shifting from a reactive information-fed model to a more meaningful and curated standpoint. Rather than having tools that simply send us information, we look at the cost-benefit analysis of this information and measure it against our time and energy.
A good example of intelligence-led information would be if your organisation was using data from a third party that’s scanning potential threats on the Dark Web. Rather than a stream of information, this data would also likely include context on what risk certain hacking and attack groups pose to your specific industry. It’s this intelligence that lets you start to build a picture of how attackers are targeting your organisation, which then means you can put the right mitigations in place and ensure you are training staff against the correct processes and playbooks.
Intelligence is about drawing meaning from information. With intelligence and visibility, we are shining a light on the right areas and spending time and energy on these. This stems from a place of efficiency and one of the main challenges across the industry is a major skills gap. Ensuring that you are using intelligence to drive the right visibility means focusing your team and organisation on the policies and the processes that you are creating for the areas that matter. This results in an efficiency-led approach.
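The curation step described above – turning a raw feed into a prioritised shortlist – can be sketched in a few lines. Everything here is illustrative: the sector, stack and feed entries are invented, and a real scoring model would weigh far more signals than industry and tooling overlap.

```python
# Hypothetical organisation profile (assumptions for illustration)
OUR_SECTOR = "finance"
OUR_STACK = {"azure", "saas-crm"}

# Raw feed: information, not yet intelligence
feed = [
    {"group": "GroupA", "sectors": ["finance"], "targets": {"azure"}, "severity": 3},
    {"group": "GroupB", "sectors": ["retail"], "targets": {"pos"}, "severity": 5},
    {"group": "GroupC", "sectors": ["finance", "legal"], "targets": {"saas-crm"}, "severity": 4},
]

def relevance(item):
    """Score an intel item by how much it matters to *this* organisation."""
    score = 0
    if OUR_SECTOR in item["sectors"]:
        score += 2                               # attacker targets our industry
    score += len(OUR_STACK & item["targets"])    # overlap with our estate
    return score * item["severity"]

# Drop the noise, rank what remains: information -> intelligence
shortlist = sorted((i for i in feed if relevance(i) > 0),
                   key=relevance, reverse=True)
print([i["group"] for i in shortlist])  # GroupB is filtered out as noise
```

The point is not the arithmetic but the shape: the same feed yields a different shortlist for every organisation, which is what makes it intelligence rather than information.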
Why is it important for organisations to test and validate their cybersecurity and why is an ‘Intelligence Led Visibility’ approach good for this?
Internally, we have a term called ‘zombie renewals’ – meaning people have bought a piece of technology and they simply renew it yearly, even if the technology doesn’t meet the same requirements as it did five years ago.
You can avoid this by constantly testing your infrastructure, making sure it stands up against today’s threats and realising that what may have worked five years ago isn’t the right solution for your business today. Cyber threat actors are constantly innovating and defences need to keep pace with that.
Ransomware is a prime example – new campaigns are constantly being launched by different groups using new tools, tactics and techniques to access your network. If you’re not continuously validating, you won’t know if your systems can stand up to new attacks coming down the pipeline. Fail fast, find the issues and mitigate – testing on a regular basis is the only way to have real confidence in your solutions.
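The fail-fast validation cycle described above can be sketched as a harness of small checks, each encoding one assumption about a control. This is a toy illustration with invented check names and configuration fields; real continuous validation tooling would probe live systems rather than a dictionary.

```python
# Each check asserts one assumption about a security control.
def mfa_enforced(state):
    return state.get("mfa") is True

def backups_recent(state):
    return state.get("days_since_backup", 999) <= 1

CHECKS = [
    ("MFA enforced", mfa_enforced),
    ("Backups recent", backups_recent),
]

def validate(state):
    """Run every check and return the names of the controls that failed --
    surfacing gaps before an attacker finds them."""
    return [name for name, check in CHECKS if not check(state)]

# Hypothetical snapshot of the environment
environment = {"mfa": True, "days_since_backup": 7}
print(validate(environment))
```

Run on a schedule, a harness like this turns “we bought the tool” into “the control still works today”, which is the difference between a zombie renewal and a validated defence.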
You need to be taking an intelligence-led approach. It’s the intelligence that will help you focus on the right threats, curate a list of potential loopholes and downfalls within your systems and allow you to prioritise and tackle the most critical business risks.
For organisations that lack this visibility, how would you recommend they approach their security strategy going forward?
I would say to people to go back to the beginning. This may appear obvious, but it’s very common to see organisations taking a reactive approach to threats, by putting controls in place as soon as they realise they have a problem. Quick fixes and knee-jerk decisions will not work in the long run – you need to start with a comprehensive understanding of where all of the gaps are.
The threats will never stop but an effective strategy starts with understanding where your gaps are – which may be skills, technology, people or processes. By understanding what you’re doing well and what you’re not doing well you will see the areas requiring the investment of time, effort and budget.
It’s a journey through visibility, intelligence and control. If you’re putting controls in place while still trying to establish visibility and intelligence through the noise, those controls will not effectively protect your organisation. For people realising they don’t know where their data is or how their users are connecting to the organisation, it can be easy to rush to a technical control point because that is a tangible and easy-to-understand action. However, my advice is always to ask people to take a step back, review the information they’ve got and consider any gaps. Then we can start to build on what’s intelligent information and what’s not, before making a move toward implementing controls.
What advice would you give IT professionals who are looking to start their visibility journey?
Take a step back and ask yourself why this is an issue for you. Most people will struggle to answer this question, so this should be the starting point. The visibility journey will look different at every level – from CISOs to technical managers – but in all cases it’s best to take a joined-up approach that goes from board level to the people putting the solution in place. Conversations will be more productive this way and really help everyone understand the objectives and provide feedback on their perspective of the organisation’s cybersecurity strategy. The goal is to ensure a coherent visibility journey, so ask basic questions first, formulate the desired end result and then start working through those stages logically.