Tips on maximising existing cyber security features from our free webinar with Microsoft on Thursday 17 August 2023.
KEY: Unable to decipher = (inaudible + timecode), Phonetic spelling = (ph + timecode), Missed word = (mw + timecode), Talking over each other = (talking over each other + timecode).
Moderator: Okay, it's gone half-past now, so I think I'll make a start if that's okay with everyone. So, thanks, everyone, for being here today for this LGA and Microsoft event. For those of you that don't know me, I'm Ellie, I'm a cyber security advisor in the cyber, digital and technology team here at the LGA. So, today, we're joined by Arron, Kam, and Adam from Microsoft, who have a range of expertise between them, and lots of first-hand council experience as well. So, this is the first in a short series of Microsoft and LGA events. Sorry, I'm just opening the lobby, and these events are primarily for IT professionals. They're slightly more technical in content and they aim to respond to, kind of, timely issues and topics, such as Sentinel, Copilot, and other issues that you'll see coming up. They aim to help council IT teams get the most out of their existing E3 and E5 licences, and to share best practice with council peers. I'm going to share a link in the chat as well for the other events that we've got coming up, so please register for those, if you're interested. Our events programme is just one of the areas of support we offer to councils in cyber security and IT. Please do take a look at our web hub, if you're not familiar with it, for more information about the work we do in policy, the guidance videos and e-learning we offer as part of the general support offer, and the bespoke offer, which includes the cyber 360s as well. So, just a couple of housekeeping bits before we get started.
This event is being recorded and will be published on the LGA website soon. If you do have any issues with that, please do let me know, as soon as possible. The event chat is open today. We're a bit of a smaller group, so I really do encourage you to ask any questions you have as you go along, and the team can pick up when they're ready. So, that's everything from me. I'm going to hand over to the Microsoft Team now. Thank you very much.
Arron Kerai: Oh, thanks. Thank you, Ellie. Appreciate that. Morning, everyone. Thanks for your time this morning. Yes, appreciate that. So, just quick introduction. I am Arron Kerai. I'm a cyber specialist at Microsoft, been working with local authority for several years now, about security posture and all that kind of good stuff. You may actually remember me from a session we did, back in November last year, and that was around things like key security and compliance capabilities in E3 and E5. So, I thought I'd come along again, just to, kind of, share an update to that, and also, how we can view the monitoring through security, through a single pane of glass. That's really the purpose of the session today, and what we're going to go through. I'm actually joined by two colleagues as well. I've got Adam and Kam on. Guys, do you want to do a quick one-liner? Kam, if you want to crack on first. You're on mute, Kam.
Kam Hussain: There you go. It helps if I come off mute. Using a bigger screen as well. Yes, so, I'm Kam. I am a technology strategist. I'm fairly new to the organisation, five months in, but I am aligned to Central London and North London, sort of, councils.
Adam Fielder: So, hi, I'm Adam Fielder. I'm also a technology specialist with Microsoft, I've been here for around eighteen months. Before that, I was the lead architect for the City of London and City of London police and I've only ever worked in local authorities before joining MS. Nice to meet you.
Arron Kerai: Brilliant. Cheers, chaps, thank you for that. So, as we, kind of, move forward, we want to talk about how that single pane of glass can actually happen, and the main purpose of that is actually through monitoring and logging. So, the purpose of this session, and the first few bits will be around why that's really critical, why security operations are the, kind of, crux of how you get that single pane of glass, and moving forward from then. Again, as Kam and Adam have shared just now, how their opinion has, kind of, formulated with that in mind, how they've done it before in the past, with their experience. That's probably a really key thing to explain today, just so it gives you the real-world example of how to actually do it in best practice and whatnot as well. So, we'll go through that today, and at the end, we'll do some Q&A as well, kind of, go forward from there, and again, to be fair, in the meantime, ask your questions in the chat, we'll try and get to them and we go forward. Adam and Kam can help me to work out those questions as we go. So, with that in mind, we'll crack on, and start here, and I quite like starting here, because it just paints the picture, a little bit. So, you know, we traditionally build defences in this way.
We try to make a solid wall, we try to stop attacks as they're, kind of, fired in, if you will, and of course, as defensive tactics around that wall, we have things like, 'How is that positioned? How many bodies are in that wall, for example? Are they the appropriate height, as part of that defence?' Right, all those different things come into play. They're all about limiting how the free kick taker can actually get the ball in the goal, right? That's the whole point of it, but, right, the big but is, with enough practice, in the case of at least Ronaldo, you know, with enough practice, the ball can go in the net, but how do we then limit that as much as possible, to help the defensive team to then, obviously, defend against that particular attack? The key thing is though, what are we doing outside that wall, right? Right, because that's all well and fine, we're all in place. What are the key things around that? So, the runners behind the wall, the side entry points, the angles of exposure between how the ball flies into the net, all that sort of stuff, that's really key. Of course, the goalkeeper, right, massively key, that's all outside that wall. How do we then get a view around all of that sort of stuff, and build that bigger picture about what's happening? That's the key point of this slide here, is how do we get that bigger picture and that proper response across everything that we can do, to at least limit or at least stop, right, that damage from happening?
That's, I suppose, the big question. Those exposure points are something to really, really lock down and understand, and that's where logging comes in. If we can understand the bits around the entire piece, side entry points, yes, the wall, but what else is happening, we get a much bigger picture, and we can respond appropriately, and that's what we're trying to get to, at this point. The problem is though, those exposure points are massive, and that's what you guys have to be, kind of, dealing with and understanding as you go throughout your, you know, task at a local authority, otherwise known as the attack surface, right? That's massive, and it's always exploding, but the point, kind of, in the middle there, would be the firewall, in this case, right? You've got that perimeter, you've got the firewall in place, around the corporate network, but the wider attack surface also goes beyond that, right? So, cloud services and SaaS applications that are maybe not quite secured successfully, with misconfiguration, for example, IoT devices, this day and age, right, massive attack vectors for threats to come into the network and steal data. Customers, of course, partners as well, right, and that is all part of the supply chain risk associated to a local authority, but how is that understood? That's probably the key question there. That's where logging is really, really key, to get that picture, to at least collect the data about what's going on. We want to build that picture, across that entire estate, where possible, and with a balance as well.
The problem then is though, it's all well and fine collecting it, which is what a lot of local authorities would do, collect it, fine, but actioning it, doing something with it, that is where a typical challenge will come. Right, what's the next step? How do you make sense of that? How do you make use of that? That's a key challenge, and that's what we, kind of, see, right? We see lots of challenges around security operations, how to make use of those logs, how to actually investigate and get better with them, but really, it's also responding in that timely manner. It's all well and fine having some visibility, right, but how do we actually use that appropriately, in a timely fashion? As we all know, right, time in security is really key. Time to remediation, time to detection, key metrics about how we respond to a threat, really, really key. So, we've seen that loads of data's been generated by all these security tools and products and whatnot, and that's fine, but how do we get more accurate with that? How do we get more useful with that information? That's a really key point that we need to understand and do something about. How do we reduce the false positives? The last thing we need is all these alerts and all these logs coming in, but maybe some of them aren't quite correct, based on what's being seen. So, getting a reduction in that is quite key. Even if an alert is generated as part of that, how does the security team have the time to investigate it? You can see there, right? 44% of alerts are never investigated.
Great to bring it in, great to have some visibility, but again, if it's not actioned, what's the point, really? What's the use of it? Where's the return, and then, of course, with the availability and, you know, the price of security expertise and analysts and whatnot, always in high demand. How do we get something that's safe and secure, for the residents, as part of the local authority, right, with that resource challenge and constraint in mind? It all can be quite difficult without a plan. However, there is some good news. The NCSC have come in, and this is a while ago, to be fair, you may have seen this in the past, and have shared some best practices and guidance around logging. Around why it's foundational to security monitoring, why it's very key, and what you do with that, and this is from their handy-dandy post on the website, 'To SOC or not to SOC.' So, Security Operations Centre, but that gives you the foundational operating model required to create a logging platform, a centralised logging platform, for security. So, again, great resource to go to, and we'll put these links in the chat afterwards, but the key thing is, you know, what's happened, what's the impact, what do we do next, all that kind of stuff, that's really key. The key theme that I see, in my day-to-day, in London local authority, is actually, 'What's the next step after getting those logs in?' as, kind of, mentioned.
Centralising it is great, doing the alerts, investigation is great, but the response is also quite key, and there are varying degrees of (TC 00:10:00) maturity across London for me as well, and Kam and Adam will talk about that a bit later, but again, that's quite important to, kind of, realise where your maturity level would be and go from there. Again, NCSC, great resource to have a look, when you get a chance. What we also see is the evolution of that, right? So, the NCSC have said, 'This is what you should do. This is some guidance around SOCs. Is it worthwhile for your organisation?' Great to have a look at that. What we also see in terms of the evolution trajectory of an actual SOC platform would be this. So, again, in the early stages, it was all about gathering that intelligence from a vast, you know, variety of places and sources and increasing that field of view, really observing what's going on. That's the number one thing to do from a security operations and model, as part of the organisation. That's typically based on a solution, like a security information and event management platform, or a SIEM, for short. You may have heard of that terminology, it's industry-based, but that's typically what it's based on and what's been used there, and it's all about collecting the sources of data into one place, observing, right? Great, but as that operational security part and model evolves, the next bit is about getting the orientation of that, right?
Extracting the real value from that vast amount of data, by using things that are built into the associated platform, whatever it is, right? Using the AI engines, using the machine-learning bits, using the human expertise as well. All part of how you can get the actual context from that information. Really, really key, and that helps to raise the accuracy of what you see. That's how you start to improve the maturity with security operations and logging. The hard part though is actually next, it's about getting the guidance. Now, the guidance is challenging, because as you can imagine, it's different for every organisation, right? The context to your local authority, to your organisation would be again, you know, slightly different at least, to another one, but again, how do you get the information, I suppose, at your fingertips, to make a proper decision quickly and easily, because it's all about speed, as mentioned, right? We need to get quick. We need to get accurate. That's a key part of it, as well, but that really requires proper deep integration, because without that, it's hard to give embedded guidance. That all depends on what you're putting into the security operations model and operating system, right? It's how do you get the view that's very, very deep, integration-wise, so you can get the intelligence, and then, the recommendation about what to do next.
That's very key, as the progression goes, and then, with that guidance, once you have that in place, you can start to automate, and that is where the real benefits come about, as the maturity increases. Those noisy tasks, those day-to-day activities, those bits that take up time, those can all be automated, at a certain point. That's where automation comes in, to start to reduce the need of a human to do that, get the machine to do that, great, the human then can focus on other things that are more appropriate, more high-risk, more priority. Right, that's how the, kind of, structure would go. In local authority, the trend that I've seen, and you may be familiar with this, is, at this point, it's a look towards a managed (ph 13.18) service because, of course, resources, at this point, challenging, expensive, constrained. Typically, this is where a managed service comes in, through a partner of sorts, to help to, kind of, you know, we can call it Copilot, sure, but to help you with the resource constraints and get you to that point where it's accurate and automation is in place, right? That's typically what I've seen across my stint in local authority and move from there. What we see though, in the future, now, future is a key thing to understand, because that's where we're going to go, as you move forward, and that's really from the assistance of AI bots and augmented reality. Actually, speaking about that, it's actually not too far away, as you can probably imagine from us presenting today, here to you guys.
Assistance from AI bots is really not far away. You've seen all the hype around generative AI, all that sort of stuff's been out in the news and the industry, right? All that sort of stuff is going to come in many platforms as we go forward in the industry and in the timeline. You've seen many vendors out there who have announced generative AI abilities, you know, Microsoft being one of those. Again, this is not a distant future anymore, this is actually quite close, as a near-future element, to get assistance from that, but what it really means is, rather than maybe a managed SOC or yourselves having to dig through all these portals and find out what you need, ask the question. Sentences, English, whatever language, I suppose, but ask the question, type it out, and get the response back that's contextual, sure, it's got all the sources in mind, and it's giving the guidance appropriate to the context of your local authority and organisation, and even, potentially, with the automation rules and play-books that are executed for you as well, on your behalf. So, all that sort of stuff can come into play, by asking the right question, through things like AI bots. Again, that's not far away in the future, and you'll see some of that stuff coming to fruition, I suppose, very soon on any vendor's site. If we have time at the end, I'll go through a little bit about how we're doing things and how we approach things, but that's quite important to, kind of, realise, but before we get to that advanced AI bots, all that kind of stuff, right, there's a stepladder, as you can see here, with that.
How will you get the basics done right? How do you get the foundations set properly before you get to that stage? Really, it's all about understanding what you're bringing in. Hence that piece around architecture, hence that piece around the gaps, and hence, the piece around the wall and things around the wall. What are you bringing into your logging platform of choice, whatever it might be, to give you the value as part of that? So, I've decided to include this, and you can see a break-down here. You can see that, by log category, the log source, the log volume, and then, the value of threat detection. So, for example, you can see, okay, network infrastructure, bringing in logs such as Allowed/Denied Traffic, right, using ACLs, for example, Access Control Lists and whatnot, the log volume is actually very high, right? In some cases, that's something that you would be paying for, as part of a service. I mean, more so, because it's high log volume, but actually, the value of that, maybe, you know, medium to low. So, that's not really a great return, in a cost-benefit risk analysis scenario, for that particular log category. However, when you start to look at things like, let's say, email security gateway, for example, log volume is quite low, great, things that are coming in are, you know, good to ingest, but the value is also very high. Great return.
So, it gives you the guidance about what you should be prioritising as part of your logging strategy and platform, this gives you that, kind of, you know, as I said, guidance here to realise what you need to do, and that's all about how you can do the cost-benefit analysis, and that's worth doing as part of improving your visibility, and then, of course, your accuracy, when you're understanding the attack surface, okay? What that would, kind of, look like, in terms of the story, like, what does it get up to, how do you fit the context to all of that sort of stuff? It looks a bit like this. So, you can start with the raw data, right? Pure logging, pure logs, no particular analysis or context applied to these. This is what the raw data would look like, right? Time, date, right? Not very human readable, shall we say? What we need to do is get to a place where we can start to contextualise that, as mentioned in the previous few slides, right? Actually, okay, we see now the person is called Jeff, that's his display name, great, his email is this, his title is this, his IP address range is this. What is his device? What is he on? What is he using? What's the IP address of that? Is it a high-value asset? Yes, it is, it looks like it is. Is the device managed or unmanaged? All these bits feed into the context about what's going on for that particular user and the device. Actually, you can also see a geolocation, based in China.
If that's a place you don't typically do business in, and at least a local authority in the UK probably shouldn't, that's again, something to be flagged, as part of that, but that's contextual information. That's useful as part of your investigation and what a logging platform should do. Feeding that into behaviour now, right, we're getting real, you know, own-organisation context about what's going on, it's actually the first time Jeff has accessed that finance server. Okay, none of his peers have done that. It's only used by four users of the organisation. It's the first time Jeff has done that from China, okay? Getting very worried now. No other user in the organisation has connected from China before. Okay, so you can see that, actually, the more context we have, associated to your particular own environment, it gives us the picture, and then, bringing it together, what does it look like from an anomaly and insights point of view? Okay, Jeff is an IT help-desk technician, fine. He was recently dormant, but now, he's obviously doing some activity. He has a high blast radius, so he's impactful to the organisation, based on privileges or whatnot, so you can see that. The MITRE attack framework tactics are these, there's enough (ph 19.26) lateral movement from initial access. Okay, so we've now built the full picture about, okay, this looks to be malicious activity from Jeff here, maybe a camp (ph 19.34) compromise, in this case. It's on a device that's potentially unmanaged and unsecure.
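The enrichment chain described here, from raw log line to identity context to behavioural anomaly, can be sketched in a few lines of Python. Everything below (the field names, the directory and baseline structures, the composite rule) is a hypothetical illustration for this walkthrough, not Sentinel's actual schema or API:

```python
# Minimal sketch of log enrichment: raw event -> identity/device context -> behavioural flags.
# All field names, data, and rules are hypothetical illustrations, not a real SIEM schema.

RAW_EVENT = {"timestamp": "2023-08-17T09:12:03Z", "user": "jeff", "src_ip": "203.0.113.7",
             "resource": "finance-server-01", "geo": "CN"}

# Identity context, as pulled from a directory: who is this, what device, is it managed?
DIRECTORY = {"jeff": {"display_name": "Jeff", "title": "IT help-desk technician",
                      "device_managed": False, "high_value_asset": True}}

# Behavioural baseline: what this user has historically touched, and from where.
BASELINE = {"jeff": {"resources": {"helpdesk-portal"}, "geos": {"GB"}}}

def enrich(event, directory, baseline):
    """Attach identity context, then flag behaviour that deviates from the baseline."""
    user = event["user"]
    enriched = dict(event, **directory.get(user, {}))
    history = baseline.get(user, {"resources": set(), "geos": set()})
    enriched["first_access_to_resource"] = event["resource"] not in history["resources"]
    enriched["new_geolocation"] = event["geo"] not in history["geos"]
    # A simple composite: several weak signals together suggest a possible compromise.
    enriched["suspicious"] = (enriched["first_access_to_resource"]
                              and enriched["new_geolocation"]
                              and not enriched.get("device_managed", True))
    return enriched

result = enrich(RAW_EVENT, DIRECTORY, BASELINE)
print(result["suspicious"])  # prints True
```

The point of the sketch is the layering: the raw event alone says very little, but each added layer of context (title, device state, first-time access, new geolocation) narrows the interpretation, which is exactly the progression described above for Jeff.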
Great, let's do something about that now. We either start to automate that response to say, 'Okay, if these triggers are hit, right, isolate the device, maybe remove privileges for that user, or (inaudible 19.50), but at least now we have the picture,' and that is where we get to from a step-by-step process of getting in the right (TC 00:20:00) data across the architecture and building that picture, and then, coming up with a result that says yes or no, basically. Then, potentially, again, the response to that being automated. That's, again, the future of what we're trying to do with logging. What that, kind of, brings us to is how that efficiency can be realised with the security operations and logging with that in mind. So, these bits around the circle there represent the security architecture, right? The bits not circled in red would be the controls you have in place right now. Whatever antivirus, whatever email security gateway, whatever web security gateway, whatever it might be, those will be around protecting and blocking threats, detecting anything suspicious, giving you that security posture, all that sort of good stuff. Fine. Where the security logging comes in and why it's so important is because it gives you that investigation across the full attack chain, right, because it pulls in everything from the sources-wise, it gives you the view across the whole board, as you've just seen in the example with Jeff, right? Not just the device or the end-point or the user itself, but again, email information, web information, all that kind of good stuff along with it.
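The automated response just described ('if these triggers are hit, isolate the device, maybe remove privileges') can be sketched as a simple decision rule in Python. This is purely illustrative: the signal names and action names are hypothetical, and a real Sentinel deployment would express this as analytics rules plus logic-app play-books rather than hand-written code:

```python
# Hypothetical sketch of an automated-response rule: map triggered signals to actions.
# Signal and action names are invented for illustration; real deployments would use
# Sentinel analytics rules and logic-app play-books instead of code like this.

def respond(incident):
    """Return the ordered list of response actions for a triggered incident."""
    actions = []
    if incident.get("device_unmanaged") and incident.get("anomalous_geo"):
        actions.append("isolate_device")
    if incident.get("lateral_movement"):
        actions.append("revoke_user_privileges")
    if not actions:
        actions.append("raise_for_analyst_review")  # never silently drop an alert
    return actions

incident = {"device_unmanaged": True, "anomalous_geo": True, "lateral_movement": True}
print(respond(incident))  # prints ['isolate_device', 'revoke_user_privileges']
```

The design point is the one made in the talk: the machine handles the unambiguous, high-confidence cases automatically, and anything that doesn't match a rule falls through to a human analyst rather than being ignored.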
It gives you the ability to start doing hunting, much better and more accurately. Again, some of the platforms do this for you, the built-in templates are great, all that stuff is fantastic, but it gives you the ability to start to do hunting, to start to get more accurate in how you then respond to a threat appropriately. Then, lastly, with that auto-healing and remediation of compromised assets, great, that's all about the automation, and building in those analytics rules, to then provide the automation and response, to basically, make your lives a lot easier. Reducing the need for, maybe, you know, a lot of cyber security expertise and resource, let the machine do what it's good at, which is identifying, and then, remediating, okay? So, with that, kind of, in mind, I've shown how logging is important, you've seen the NCSC guidance around that as well, you've seen how context is really, really important, as you build that up, and then, the benefits of having a logging platform in mind, as part of that. So, again, as you can probably imagine, from us being on the call today, there is, you know, no Microsoft presentation without a product pitch, but here you go, here's the one slider for that. That is our answer for it, it's Microsoft Sentinel. You may or may not have heard of this in the past, but again, it's our answer to that particular challenge around security operations and logging, our centralised view across that, and it's all cloud-hosted, cloud-native, as you can imagine.
It's feeding in the signals as appropriate, loads of built-in connectors, to help you guys with that job, making that as easy as possible, but again, you can see those five aspects here. Collecting information across many different sources and vendors, right, it's not just Microsoft, as you, obviously, you know, make sense of a logging platform. So, collecting is really key to get the visibility and, of course, detecting what's important from that, with analytics rules and hunting. How do we actually figure out what's important, and then, make sure we understand that appropriately, with context? How do we then investigate with the full attack picture, the full graph, about what's going on for an incident? Again, all part of Sentinel, and then, again, how do we respond appropriately using automation? You may have heard of things like logic apps before, in the past. Again, all in there to help with the workflow and the play-books to start to get more automated in what you do in response to a particular threat. Right, so I'll, kind of, pause there, because I want to just stop here for a second. I was thinking, rather than maybe show you a demo, because that's all well and fine and we can do that at a later stage, it's probably better if Kam and Adam come on next and just describe how they've done this in practice, because they've got experience. They've done this before, they've designed it, they've architected it, and they've used it. So, it's probably worth that real-world example for you guys, just to have a, kind of, view and a picture of that as we move forward from there. So, I think next is actually Kam, coming up. Kam, are you-,
Kam Hussain: Yes.
Arron Kerai: Yes? Yes?
Kam Hussain: Yes.
Arron Kerai: You want to talk through your example, please.
Kam Hussain: Yes. Sure thing. So, yes, hi, everybody, again. Yes, so, I guess, I mean, prior to joining Microsoft, as I, sort of, said, I'm new to the organisation, I was actually the CTO for Islington council, where I looked after the cyber security practice. So, that very much was under, sort of, my area of responsibility. Now, what I wanted to do was start off by painting a picture, a bit of a picture of the council and the, kind of, challenges we faced, when I first joined the organisation. Now, the council, as you know, like all other councils, we provide hundreds if not thousands of disparate types of services across departments, right? Now, in terms of those services, they were provided to about 230,000 plus residents, with a workforce of approximately 5,500 staff. Now, those 5,500 staff were really enabled with about 6,500 end user devices, which were primarily laptops, with a mix of some tablets and mobiles, but the end users then, in terms of those 6,500, sort of, end user devices, these were connected to services that the council was hosting on-prem at the time, in its own data centres, in Islington. Now, those data centres then housed about 800 plus servers and network infrastructure types of equipment. Now, that quickly starts giving you a picture of, there are a lot of things going on, right? There are a lot of residents consuming services, there are a lot of devices, there are lots of users connecting to those services as well.
Now, when I joined the council, the council's security approach, a bit like how Arron, sort of, mentioned, was very much the traditional, what I call a castle and moat topology, you've most probably heard of that, it's quite used across the industry. With users working primarily in the office, at the time, and like I said, data centres being hosted in either, you know, on-prem or colo, sort of, locations. Now, for Islington, that castle and moat was more of the office buildings where, you know, the castle, with its treasures being our data centre, was where our users, sort of, then operated, and the moat was very much the perimeter firewalls. You know, in the sense of trying to reduce attacks, or doing our best, as per that analogy Arron gave of the defence line. You know, trying to prevent attack as much as possible using IDS and IPS types of capabilities, but we also had some logging capabilities, at the time, where we ingested those data into SolarWinds for monitoring. That monitoring capability was really done by two security engineers, you know, as our knights in armour in that castle and moat topology, but the key thing here is, after I joined, a lot of things happened, in the sense that, over the next three years, we started to see a massive change in terms of trend, and we quickly realised that the age-old traditional castle and moat security approach wasn't really sustainable. It was to a point where it wasn't really viable anymore, if we were going to continue protecting our, sort of, users, our data and services the council provides, and really, it meant we needed to adapt and respond to these changes.
Now, in terms of, yes, what were those changes, those trends, we've, sort of, put them on the slide there. For us, it was first a case of COVID, right? When, kind of, with COVID, everybody, sort of, vanished, didn't they? The office floors became empty, and then, we started moving more towards a hybrid working model becoming the norm. So, that was one of the, sort of, trends that impacted us. Other things were, like, addressing legacy systems, because all councils have legacy systems, we wouldn't be a council otherwise. So, in turn, in responding to those legacy systems, there were security vulnerabilities that we had to address. Not least the PSN, sort of, vulnerabilities that were identified. So, we had, what? 800, 900 plus vulnerabilities that we had to address in that year, but also, you know, applications. So, in terms of addressing vulnerabilities as well, there was this notion of applications or actually, it was our principle at the time to move to SaaS as much as we can, in terms of remediating through upgrades, as well as programmes, like modernisation. Really, what that meant was, what we were seeing was data becoming more siloed as well. So, we started, you know, seeing these islands, these SaaS islands starting to pop up, left, right, and centre, in every direction, and all that needed securing. Furthermore, that's just the on-prem, right? In terms of the current state. Once we started looking at the future, what we started doing was looking at applications in terms of being cloud-native.
Kam Hussain: And, really, what all that meant was the level of scrutiny we had from auditors also increased, especially so with going to the cloud. It's that increased risk associated with state-sponsored and malicious threat actors, with sophisticated tools, and the money and time they had, compared to us, in that macro level geo-social, economical, political… that type of landscape, where we've seen, I believe, threat actors from, if I could say it, Russia, China. We were living in that sort of time. So, what am I saying? What I'm basically saying is our attack surface was growing, and it was quickly becoming larger than what we could manage or observe, because we were adopting new technologies, we were expanding our networks, we were connecting to all these islands in the third-party services, so the need to rethink-, and coming back to that point of rethinking, our security detection and response really became amplified. Now, initially, as per that NCSC bit that Arron covered, we looked at those NCSC best practices, and there were lots of things in there that we started to implement, like strong passwords, using multi-factor authentication. We started introducing, through our architecture, the notion of security by design, making sure that data encryption in transit was secure, looking at securing our web services, and so forth, including from a culture side, with our users, around phishing and simulations. So, yes, in the end, we were moving from a castle and moat model to more of a zero trust model.
I'll try and speed up a bit here. Going towards that new model, the big challenge we found was resourcing. The skills shortage in the market meant we couldn't just expand our security practice, and it was really a case of, 'How do we provide a constant monitoring and response capability?' So, to strengthen our security practice in the absence of resources, we said we would maximise our investment in Azure. In line with our strategy to adopt SaaS, we said we would adopt Sentinel as our SIEM tool, and that we would work with a Microsoft partner to provide a managed (audio distorts 32.12) and, collectively, provide security analytics, threat intelligence, and all those kinds of things as a response capability for the enterprise. Working with the partner, we also adopted the MITRE ATT&CK framework, and we developed a number of cyber response playbooks, to ensure we were being protected from modern attacks. Here's one thing I found amazing. Think about cyber criminals and the advancement they were making, not just in technology but in the business model as well. These weren't just some people in a dark room. These were people thinking at enterprise business level. They were offering ransomware as a service. Who'd have thought of that? We buy Sentinel as SIEM-as-a-service; now, attackers are buying capabilities in the cloud like ransomware-as-a-service.
So, for us as a council, it was a case of acknowledging that we couldn't compete internally, and there was no way we were going to maintain that level of intensity, given what we were seeing and hearing about attacks on the public sector and public services. Our business case argued that adopting a cloud-native SIEM would reduce our cost and our complexity compared to legacy systems. It would also eliminate the need for physical hardware and the maintenance that comes with it. At the end, what we wanted was to move away from a reactive service, and I think Adam will cover an element of that. We were consumed by firefighting and all sorts of BAU activities, and we really wanted to move to a more proactive IT department that would, and it sounds cheesy, enable the council to deliver services to residents and local businesses in a secure manner. I'm hoping that paints the picture of the challenges we faced and why we went towards Sentinel. Arron, if you click to the next slide. Now, on this slide, we've quickly tried to show an architecture for a Sentinel deployment. This is very much a representation of the deployment, rather than the deployment itself, but the key takeaway for me, and I think for yourselves, is that we decided to deploy Sentinel in Islington's tenancy.
So, this wasn't the managed SOC provider's tenancy. This is very much a SIEM that Islington owns, but it's managed by the partner. That's purely because, with any cloud service, there's always the challenge of how you exit and move on to another partner, and we thought this was the cleanest answer. A couple of key design decisions are worth considering here. In terms of Sentinel cost, think about cost control in terms of what logs you ingest. The other key point is, when you do ingest logs, don't only think about the native connectors; think about your entire attack surface, including third-party connectors, on-premises systems, and third-party data centres, and ultimately build a full bird's-eye view of the entire landscape. A tool is only as good as the information you feed into it. I'll stop there. I've taken a bit more time than I should have. Adam?
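Kam's point about cost control for log ingestion can be made concrete with a toy calculation. The sketch below ranks hypothetical log sources by estimated monthly ingestion cost, so high-volume, low-value feeds can be reviewed before they reach the SIEM. The table names, daily volumes, and flat per-GB price are invented for illustration and are not real Sentinel pricing.

```python
# Illustrative only: rank hypothetical log sources by estimated monthly
# ingestion cost. Figures and table names are invented, not real pricing.

PRICE_PER_GB = 2.0  # assumed flat ingestion price per GB

# (source, GB ingested per day, analyst-assigned security value)
sources = [
    ("NetworkFlowLogs", 120.0, "low"),
    ("DnsQueryLogs", 45.0, "medium"),
    ("SignInLogs", 3.5, "high"),
    ("EndpointDetections", 1.2, "high"),
]

def monthly_cost(gb_per_day: float) -> float:
    """Rough 30-day cost estimate for one source."""
    return gb_per_day * 30 * PRICE_PER_GB

# Review the most expensive sources first: often the low-value ones.
ranked = sorted(sources, key=lambda s: monthly_cost(s[1]), reverse=True)
for name, gb, value in ranked:
    print(f"{name}: ~{monthly_cost(gb):,.0f}/month, value={value}")
```

The point mirrors what the speakers say later about log value: network logs tend to be high volume but low-to-medium value, while identity and endpoint logs are low volume but high value.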
Arron Kerai: All good, thanks Kam. Over to you Adam.
Adam Fielder: Thank you, Kam. So, my slides are not as fancy, but they're very real, very raw, and some of them come straight from a presentation I did at the City of London. I arrived at the City of London in early summer 2019. On my very first day, the firewalls in the basement collapsed. The next day, there was an air conditioning issue, and the day after that, we realised the air conditioning issue had essentially baked the UPSs. So, three P1 issues in a row, following my first day. That, kind of, set the scene for the next six months I was there. Two weeks after joining, we were contacted by the NCSC about a precursor to ransomware that was on our estate. The NCSC had detected it remotely, as they monitor our external IP addresses; actually, they look at all local authority external IP addresses. They could see it before we could. We had lots of issues deploying devices. We couldn't get them out of the door fast enough. Our end users weren't happy. Our business users weren't happy. I mentioned it was the City of London, the local authority side; it was also the City of London Police, and a third organisation called London Councils. Historically, there was a gateway approach to allowing updates out to end users' devices and servers. The updates were gated, and a meeting took place to go through the list of updates and approve them for deployment across the environment.
All of these P1 and P2 issues, the security issues, and the device deployment delays led to a whole bunch of calls to the service desk, and the guys on the service desk were absolutely melting. They couldn't keep up, and the user experience of the IT service across those organisations wasn't as good as it could have been. That left little time to focus on business value initiatives. How is the organisation going to leverage its data better? How is it going to gain a single view? How are we going to go about digital transformation and process improvement, etc.? The reason for this is that everything was designed expertly, but in isolation. The network team looked after the firewalls and the WAN. The desktop support team looked after the configuration of the end user devices. The server team were looking at this (ph 39.16). It went on like that. There wasn't a holistic vision or a holistic approach, certainly not to security. If you hit the next slide for us, Arron. I'm very much data-led, and I wanted to paint this picture back to the IT and digital leadership team in a slightly different way. So, I went across my management team, aggregated all of their diaries together, and spent a couple of hours marking which meetings were business as usual versus business value. Where was our leadership team spending its time? The grey at the top represents all the hours they spent in meetings classified as business as usual: firefighting, device deployment, security, etc.
Down in the turquoise blue at the bottom is where they were spending their time on business value initiatives and transformation, etc. It was around 60%, I'd say 65%, of their time spent on business as usual activity, not growing the business, not getting off the treadmill. If you hit the next slide for us. So, this is a slide I put together. Well, actually, the image on the right-hand side, the 'golden thread', is something I put together while I was at the City. When it came to security, everyone had their own opinion, and it led to a broken approach. Like I say, the end user support guys had a view in their heads of what best practice was and how it applied to end user devices. The networking team had that mindset too, and even some of the security resources: 'This is best practice.' 'Can you tell me where this best practice is derived from?' The aim of showing this golden thread is to show that all local authorities currently have a PSN connection and go through an annual PSN accreditation. If you follow the guidance from the NCSC and Microsoft, which aligns to the UK national strategy, then you're going to have an easier time getting your PSN accreditation, as well as a better security posture. One thing I'll call out is that the NCSC publish security anti-patterns, which aim to eliminate some of these supposed best practices. Back-to-back multi-vendor firewalls, for example, is actually a security anti-pattern, and can harm your security posture.
So, that's a real-life example of common 'best practice' not being best practice at all; follow the guidance, follow the golden thread. That's why it exists. Yes, go for it Arron, next slide. As the months rolled by, I created a roadmap across various areas for improving the technology estate and creating those solid foundations. Three or four months in, we could see something else happening. In Azure Active Directory, we could see hundreds, if not thousands, of failed authentication attempts from all over the planet. We could get an idea that it seemed to be targeted at certain users, but we couldn't really get a view of how big and how broad it was. So, we enabled Sentinel mid-flight into the security issues. It was like, 'Right, we can't see what's happening across our estate. We're turning on the tooling that we need now, and we're going to get a view of what's happening.' It turned out we were under a globally distributed password spray attack targeted against our members. We wouldn't have been able to get to that level of depth, or been able to defend properly, without that information. And, actually, that password spray was against legacy authentication mechanisms, which should have been turned off via conditional access, as per the guidance from the NCSC and Microsoft on what we should have done.
It was, kind of like, 'Follow the guidance.' We would have been under this attack anyway, but now we had the single pane of glass to actually see it. That's essentially my slides done. I'm happy to answer any questions if you put them in the chat. Hopefully I've made up a little bit of time.
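Adam's password spray example has a simple analytic shape that can be sketched in a few lines. Assuming sign-in events have already been exported from the SIEM, the toy detection below flags accounts seeing failed logins from an unusually large number of distinct source IPs. The event format, account names, and threshold are invented for illustration; real detections run as SIEM analytics rules over the actual sign-in schema.

```python
from collections import defaultdict

# Toy detection: flag users with failed sign-ins from many distinct IPs,
# the classic shape of a globally distributed password spray.
# Events are (user, source_ip, succeeded) tuples; real schemas differ.
def flag_password_spray(events, distinct_ip_threshold=5):
    failed_ips = defaultdict(set)
    for user, ip, succeeded in events:
        if not succeeded:
            failed_ips[user].add(ip)
    return {user: len(ips) for user, ips in failed_ips.items()
            if len(ips) >= distinct_ip_threshold}

# One account sprayed from eight IPs; another user just mistyped once.
events = [("cllr.smith", f"203.0.113.{i}", False) for i in range(8)]
events += [("j.doe", "198.51.100.7", False), ("j.doe", "198.51.100.7", True)]
print(flag_password_spray(events))  # only cllr.smith crosses the threshold
```

The same aggregation, per user across distinct sources, is what made the single pane of glass valuable: no individual failed login looks alarming on its own.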
Arron Kerai: Yes, perfect. Thank you Adam. I've just responded to one question, but it's probably worth getting your opinion as well, from the City of London perspective. The question was from Wayne: how do you get visibility around monitoring and protecting the attack surface with most of the workforce at home now, on their own broadband as well? What would you advise?
Adam Fielder: So, the NCSC guidance essentially says, 'Stick with native tooling.' You have Microsoft Windows as your operating system; use Windows Defender. Use that native tooling wherever you can, and then, from a logging perspective, that is naturally fed into Sentinel. It doesn't matter where the user works from; you're going to get the insights. Following that golden thread approach takes you to a position where it doesn't matter where your users work, whether at home, in internet cafes, or in different countries, as opposed to having appliances that monitor traffic on your LAN and WAN, which are going to miss the remote users, right? It's very much a guidance-driven approach, and that definitely covers it.
Kam Hussain: If I was to add to that, Wayne, this was definitely the case for Islington; we had a similar challenge. The approach we took, in that zero trust model, was to say, 'Well, actually, your perimeter is no longer in your data centre. Your perimeter is where your end users are.' Every end user has a localised perimeter, right? As Adam alluded to, with that native capability, say Defender for Endpoint on your end user devices, you can have all those logs ingested back into Sentinel, and this can happen securely over the internet. You don't strictly need a VPN connection. Having all those logs ingested into Sentinel then starts giving you a view of where your staff are and what that threat vector looks like. Actually, with the XDR and SOAR capabilities that Sentinel has, you can almost automate it. You can enforce security posture as well, to say, 'If you're not on a compliant Windows version, or the latest security patch, you can't connect to our resources.' You could also say, 'If you're not in a safe geolocation area, we can block you.' There's lots of granular protection you can put in, well before an attack surfaces in Sentinel, to remediate.
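The automation Kam describes, where an alert triggers a response without an analyst in the loop, is the SOAR idea. A minimal sketch of that dispatch pattern follows; the alert types, device names, and actions are all invented for illustration (in Sentinel itself, playbooks are built as Logic Apps rather than Python functions).

```python
# Toy SOAR dispatcher: route incoming alerts to automated playbook actions.
# Alert types and actions are invented; this only illustrates the pattern.
def isolate_device(alert):
    return f"isolated device {alert['device']}"

def disable_account(alert):
    return f"disabled account {alert['user']}"

# Map alert types to the playbook that should fire automatically.
PLAYBOOKS = {
    "ransomware_precursor": isolate_device,
    "password_spray": disable_account,
}

def run_playbook(alert):
    action = PLAYBOOKS.get(alert["type"])
    # Anything without a mapped playbook still goes to a human.
    return action(alert) if action else "escalate to analyst"

print(run_playbook({"type": "ransomware_precursor", "device": "LAPTOP-042"}))
print(run_playbook({"type": "unknown_beacon", "device": "LAPTOP-099"}))
```

The design point is the fallback: automation handles the well-understood alert shapes, and everything else is escalated rather than silently dropped.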
Arron Kerai: Wayne, your hand is up. Do you want to come off mute and-,
Wayne C: Yes, just picking up on the points around this. I'm not quite clear where I am on this, but just on the home working thing. I hear what you guys are saying about Office 365 and so forth, but is there any concern around devices such as routers? If a home router was compromised in some way... a lot of people have got BT routers, and I have no idea how they're maintained. Well, I think they automatically patch them, but could that be a vector for an attack, or something like that? It's, kind of, outside the realms of 365 to some degree. I just don't fully understand, or have a feel for that, and wondered what your interpretation was.
Adam Fielder: Do you want me to try and answer that?
Kam Hussain: Yes, go for it. I've got a few things to say afterwards.
Adam Fielder: So, you're right. So, I'm at home, and working from home. I've got at least 30 different devices connected to my home network, and they've all got full lateral movement. I've got smart plugs, smart sensors, heating things. Everything's connected, even cars nowadays.
Arron Kerai: Your bill must be enormous.
Adam Fielder: (Laughter) Yes, right? So, it's not just compromised at the router, it's compromised at any of those systems, living behind that appliance, that firewall, that router. I think the aim is to ensure your end point is secure, regardless of where it's connected. That's the concept of zero trust. You consider no network to be secure.
Wayne C: Sorry, just on that point. So, can you consider all home broadbands as the, kind of, the dirty Wi-Fi, or whatever?
Kam Hussain: Yes. Assume the entire network is in a compromised state. Everything is dirty, everything is compromised. If that was the case, how do you securely give access to your users?
Adam Fielder: There's one more thing as well. Historically, local authorities, and many organisations, use certificates for authentication on VPNs. When you use certificates for authentication, it doesn't take into consideration the health of the device as part of that authentication. It just says, 'He's got a certificate. I'm going to allow him on the VPN, and on the network,' regardless of the health of that device. That's not aligned now to the NCSC zero trust principles, (TC 00:50:00) which say, 'Make sure you tie the health of the device into that authentication.' Essentially, the guidance is moving you towards using Entra ID, AKA Azure AD, for authentication, and tying in conditional access. Is the device healthy? Is the device compliant? Does the device have all of its updates? If it meets all of that criteria, then allow it onto your secure network. And, of course, with a SIEM, all of that information is fed into a central place, so you can see a holistic view.
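Adam's contrast between certificate-only VPN authentication and a health-aware zero trust check can be sketched as two decision functions. The field names and checks below are illustrative, not the actual Entra ID conditional access API; the point is only that the legacy model evaluates one signal while the zero trust model evaluates several.

```python
# Contrast: certificate-only auth ignores device health; a zero-trust style
# check ties compliance and update state into the same decision.
# All field names are invented for illustration.
def cert_only_auth(session: dict) -> bool:
    # Legacy VPN model: a valid certificate is the whole story.
    return session.get("valid_certificate", False)

def zero_trust_auth(session: dict) -> bool:
    # Health-aware model: certificate AND compliance AND current updates.
    return (session.get("valid_certificate", False)
            and session.get("device_compliant", False)
            and session.get("updates_current", False))

# A session from an unpatched, non-compliant device with a stolen certificate.
stolen_cert_session = {"valid_certificate": True,
                       "device_compliant": False,
                       "updates_current": False}
print(cert_only_auth(stolen_cert_session))   # True: legacy model lets it in
print(zero_trust_auth(stolen_cert_session))  # False: health checks block it
```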
Wayne C: Thank you.
Arron Kerai: Just to add, it's back to the point around network security. It is and has been there for a very long time, very traditional, good at stopping some things from getting through, but now, as part of the zero trust model and moving towards a future strategy and roadmap, identity is very, very key, and then the device as well. Those things feed together, and, as the guys have mentioned, any network is now assumed dirty, untrusted, already breached. So, secure where the data, the device, and the user are. That's the guidance now. It doesn't matter what network they're on; that's irrelevant now. It's about the user and the device, and you go from there. Okay, so, we've got seven minutes left or so. I wanted to touch on some of the AI things I alluded to earlier. I mentioned AI bots and AI assistants, and that they're in the very near future now, or at least in a (audio distorts 51.49). So, I'll touch on that quickly for the last five minutes or so, and then we'll close off. I'll share what we're doing, how that works, and just set the foundations a little. We'll start with this. AI is massively overused, almost a buzzword in this day and age, but let's try to understand it a little better and break it down. Machine learning is absolutely part of AI. They're not mutually exclusive; they're all part of the same thing.
The point would be, you've heard of things like ChatGPT, Google's Bard, and Facebook's implementation now. Where generative AI comes into play is using these three components, machine learning, natural language processing (NLP), and knowledge mining, to give an answer based on a question. That's what generative AI is in this day and age; that's what ChatGPT is, that's what Bard is, that's what Facebook's implementation is. Moving forward, with SOCs and SIEMs, and getting better at managing resources and security analysts, that will be a real turning point in the security field's future: reducing that need, but also empowering those users to do more with their time. Rather than spending ages digging through the portals, you ask the question and the response is given back. It saves that time to focus on what's really meaningful. That is the number one thing to take away. Microsoft are obviously doing this in high regard, and, speaking internally, there's a massive push, but the point is that it's what we call a copilot. It's not a pilot. It's not a replacement. It's a copilot, working with IT, security, and compliance across those areas, to empower you to do your job quicker, and therefore to achieve your goals faster and more easily, all with responsible ethics and principles in mind around how it's managed and governed too.
So, Microsoft's implementation, coming out very soon, in the next six months or so, is what we call Security Copilot, helping you across the Microsoft stack, at least for now: across Defender, Sentinel, compliance, and things like Entra and Intune. You ask a question and get a response back. You start with this screen here; you'd ask the question in the toolbox there, and you'd get an answer back. I've got two examples to share with you today. The first is this. Apologies if it's quite small; in fact, I can barely see it on my screen. The point is, it's asking about an IP address. What's going on with that IP address? The machine has said it's malicious. Okay, what does that actually mean? The response back would be, 'This is the IP address,' fine, but it's malicious for these three reasons. It's part of a cyber threat intelligence feed that comes in; it's associated with Cobalt Strike; severity is five; and it's part of Silk Typhoon as well, from our intelligence feeds. So, it's giving that information from the ask of one question. Then, the analyst has asked (ph 55.08), 'What's the autonomous system number as part of that particular threat?' It's giving you the context behind it as well. So, again, that's one question about why that IP address is malicious, rather than going to the portals and spending your time there. It saves time, and empowers you to use your time better.
Another key thing that we see across local authorities is this: investigating is all well and fine, that's what we like to do, that's our bread and butter, but the second part of the role, as you can imagine, is communicating what actually happened to senior management, in a written report format. Guess what, that's where it comes into play again. 'Write me a report about what you've seen, system, detail it with the appropriate recommendations about what to do next, and, potentially, as mentioned before, any automated playbooks you kicked off because of that particular incident.' 'Here's your report.' Bish, bash, bosh, off you go. Job done. The amount of time saved can be orders of magnitude, depending on how you do your role. Again, it's showing you an example of where things like generative AI come into play in real-life practice, in how you do your day-to-day job, around creating reports and summarising an incident. So, I hope that makes sense across this presentation today. We've got a couple of minutes left; I'll have a look at the chat in a second. We've covered things like logging, why that's key to security operations, and why understanding the risk analysis and cost benefit of what you ingest is obviously going to be very key as well. Network logs: great, but maybe low to medium value. Email, endpoint detection, identity: high value, and potentially a low volume of them as well.
So, again, doing that risk analysis is important for your own organisation and your budgets, so bear that in mind. We've seen Sentinel in play, we've seen some examples from Adam and Kam on how they've done it, and you've seen a little bit about Security Copilot, coming in the future from Microsoft, to help you be much more effective with your time and how you do things in your day-to-day job going forwards. Any questions off the back of that, before we close up? Just reviewing the chat now. Yes, Adam's put a bit in there, perfect, and Ellie has as well. I'll also put in the link to the NCSC guidance, so you've got that. Bear with me. Then, a bit about some free training as well; I know skills are a key thing. So, have a look at the links in the chat now: the NCSC guidance, free training on Sentinel, and a bit about what you've just seen around Security Copilot and how it can help you in your day-to-day roles.
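Arron's report-writing example relies on generative AI, but the skeleton of the output, an incident summary with a timeline and recommendations, can be shown with plain templating. This is a hedged sketch of the structure such a tool might produce; every field, timestamp, and recommendation below is invented for illustration.

```python
# Illustrative incident-report template: turn structured incident data into
# the kind of management summary described above. All fields are invented.
def incident_report(incident: dict) -> str:
    lines = [
        f"Incident: {incident['title']} (severity: {incident['severity']})",
        f"Affected: {', '.join(incident['affected'])}",
        "Timeline:",
    ]
    lines += [f"  - {time}: {event}" for time, event in incident["timeline"]]
    lines.append("Recommendations:")
    lines += [f"  - {rec}" for rec in incident["recommendations"]]
    return "\n".join(lines)

report = incident_report({
    "title": "Password spray against member accounts",
    "severity": "high",
    "affected": ["member mailboxes", "legacy auth endpoints"],
    "timeline": [("09:14", "spray detected"), ("09:20", "legacy auth disabled")],
    "recommendations": ["enforce MFA", "retire legacy authentication"],
})
print(report)
```

The value of the AI version is filling in this structure from raw alert data; the template only shows why a consistent report shape saves analyst time.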
Moderator: Brilliant, thanks so much everyone. That was a really, really interesting session, and I hope everyone got lots out of it. As I mentioned, this session has been recorded, and we'll put it on the website soon. If you have any further questions after the event, just send them across to me, and I can get in touch with the Microsoft team as well. So, if there aren't any further questions, I'm happy to wrap up there. Just a massive thank you to Kam, Arron, and Adam.
Arron Kerai: Thanks very much.
Moderator: Brilliant, thanks everyone.