Cyber Security Dispatch

Human-centric Security - An Interview with Richard Ford, Chief Scientist at Forcepoint

Show Notes:
In this episode of the Cyber Security Dispatch, we talk with Dr. Richard Ford, Chief Scientist of Forcepoint. Dr. Ford has been in the industry for quite a while and has seen it through the lens of many different job descriptions, which gives him a grounded perspective on the entire business. From that perspective, he talks about the current problems that plague the security space and how some of them are the exact same ones we had 25 years ago; Dr. Ford advises that before people get into complex security concepts such as resilience, we ought to nail down the basic problems that have been put off for 25 years. We continue on to how the industry demands too much in expecting the entire population to think in a security-oriented manner; rather, we should be moving toward security systems that accommodate human habits, not the other way around. We end on what human-centric implementations of security look like and even hear an example from Dr. Ford.

Key Points From This Episode:

  • Richard’s path in security that led to his current perspective on cyber-security

  • How should we really be defining “resilience”?

  • Why, in order to move on in security, a lot of us still need to master the basics

  • What are the origins of ransomware and are the problems we deal with nowadays really new?

  • Is the security space ready for things like resiliency?

  • The large execution gap between those who do security well and those who don’t

  • What cyber-security can learn from car safety features

  • How to take on a human-centric focus in cyber-security

  • Top five ways to look at cyber-security from a “safety” perspective

  • Human-centric security implementations created by Forcepoint

  • Richard’s experiment of how students react to security warnings

Links Mentioned in Today’s Episode:
Dr. Richard Ford — https://www.forcepoint.com/company/biographies/dr-richard-ford-0
Dr. Richard Ford LinkedIn — https://www.linkedin.com/in/dr-ford/
RSA — https://www.rsaconference.com/
Forcepoint — https://www.forcepoint.com/
Virus Bulletin — https://www.virusbulletin.com/
Sapir–Whorf Hypothesis — https://en.wikipedia.org/wiki/Linguistic_relativity
Morris Worm — https://en.wikipedia.org/wiki/Morris_worm

Introduction:
Welcome to another edition of Cyber Security Dispatch; this is your host, Andy Anderson. In this episode, Human-centric Security, we talk with Richard Ford, Chief Scientist of Forcepoint, about what human-centric security is and why we should move toward its implementation. We also talk about getting back to basics and why that movement is necessary. Here’s Dr. Richard Ford.

TRANSCRIPT
Andy Anderson: Just introduce yourself for those in the audience who don’t know you - they should, but maybe not yet.
Richard Ford: Sure my name is Dr. Richard Ford. I feel like I’ve been doing security forever; on my RSA badge they even gave me a little tag which says “seasoned” - and I’m not really sure how to take that. But I got into security around I don’t know 89-90 and that’s been my whole life - it’s been a lot of fun. A little bit of time on the offensive side of the house, a lot of time on the defensive side of the house. 
This is really my passion and I do this because I love it. So I’m the Chief Scientist at Forcepoint, and in that role I’m steering technology across the whole company. It’s a blast because I get to do the sort of fun part of research and then sort of hand it off to somebody else to implement and we all know it’s that last 20% that’s the hard bit.
AA: I was talking with somebody last night and they said: “My job is to create the dream and then it’s this dude’s to” - and he points to his friend - “make it happen.” You know? I wanted to be the first guy - I don’t wanna be the second.
RF: Absolutely correct - that last 20%, making it operational, that’s the hard part.
AA: But - I mean you’ve been in security for a long time, but not always on the vendors’ side right? You were a professor for a while, you’ve been a journalist as well. So talk about kind of some of those roles and the different perspective that kind of gives you.
RF: Yeah, actually - I really liked the way you phrased the question, because it really is about a different perspective. So my first job in the security industry was pulling viruses apart as a journalist - it was great. I was still a student and they’re like, “We’ll give you X pounds for every virus you disassemble.” So I thought, ‘This is free money, this is great’ - I’d do this for free. I went from there into being a journalist really - being the second editor of Virus Bulletin, which is a great publication still in business today. 
That experience of actually working on that side of the table was probably some of the most valuable time that I’ve had in the security industry. It taught me how to write, but it also taught me to always put the user first. Right? You take the user perspective - you don’t take the vendor perspective in that role - that’s really important. 
And, yeah, it’s been a long and varied career, and one of the high points for me - working at IBM Research - is a time in my life that I’ll never forget; IBM Research is an awesome place to go and hang out. It was like being a kid in a candy store - we’d sit around, and you’d just bump into people in the corridor and find out that they were working on the coolest thing you could imagine - that was a fantastic role for me. 
And, yeah, you know, we hit it fairly well at one point and I retired into academia. I’d spent all of those years in commercial - started off as a journalist, went into commercial and the research side of the house. And then I moved into academia, and that was a very rewarding time in my life. I’m still in touch with so many of my students - if any of my students see this: I’m easy to find on LinkedIn, and I probably still remember you. I think they’ve left more of an impression on me than I have on them. 
Eventually it was one of my former students who called me and said, “Dr. Ford, how would you like to be Chief Scientist at this company we’re standing up?” I said, “Sure, Brian, but you’re going to have to stop calling me Dr. Ford.” And he was like, “Okay, Dr. Ford, I will.” So that’s sort of how I ended up at Forcepoint, and the reason that was attractive was that Forcepoint was trying to do things a little bit differently. I mean, you’ve been around the show floor a lot, and I mean this with no disrespect to the folks down there, but it’s pretty samey, right? In some ways. There’s a lot of buzz in security and there’s clearly a lot of money flowing around, but it’s not very well differentiated - everyone is going to stop you and go, “We can solve your blah-blah-blah problem.” So we’ve got a universe full of point products, and that’s not very exciting to me - I went into academia because I wanted to think deep thoughts. And the reason Forcepoint was a fit for me was that we were going to shake things up a little bit, so that’s been kind of fun. 
AA: Yeah, I mean, I think there are a number of reasons you’ve got all these different solutions, but I’d love to talk about what’s out there - even if it hasn’t been realized in actual usable products that are well-known and utilized throughout the space. It does sound like there are some interesting things, particularly coming out of academia and the defense community. Cyber resiliency in particular is an area that I’m excited about, since it does seem to be talking, at least thematically and strategically, about doing things differently. Right?
RF: Yeah so I think that resilience is a really important topic and it has been woefully underexplored, right? Especially in the commercial world; there are a few vendors that have been wandering around the show floor talking about resilience. 
But often the way we talk about it is not well-formed, because we don’t define the word very well. So often you’ll hear people start talking about resilience when what they’re actually talking about is robustness. If something is very strong - you can’t bend it, you can’t move it, you can’t break it - that’s robust. But if something is like a blade of grass - you tread on it, you take your foot up, and it springs back - that process of recovery, of coming back - you can bend me but I don’t break; I spring back into shape - that’s resilience. And I think one of the challenges in this industry is that we are very sloppy in how we use our words - this is something I was a pain about with my students. 
AA: Well you’re an Oxford grad. You have the OED right - Oxford English Dictionary. You care about words, right?
RF: I do care about words. There’s also the Sapir–Whorf hypothesis, which says, basically, that the words you use shape your thought - linguistic relativity. Am I totally sold on that? Not necessarily, but there’s some truth to it. If you can’t express it in words, there’s a chance that you don’t think about it cleanly, because words are the language of thought. And so, yeah, words really matter. 
And so being really crisp around the words that you use, so you and I can communicate, is really important. So, yes, resilience is interesting but only when we talk about it in the context of: I took a hit, I got hit, and then I came back up; that’s resilience, and that’s interesting.
AA: Yeah. I just use the phrase “hard-to-kill”, right? Three words, yeah.
RF: Yeah I like that. “Hard-to-kill”; It’s visceral. 
AA: Because it’s not only about recovery - although that makes sense - it makes me think of cockroaches and rats, right? And it’s not just one. You can step on one, right? But there are also millions of them. So that - dynamism, diversity, all of these other things that fall under resiliency. Where are you seeing that - I know academically it’s been talked about a lot - but where are you starting to see resiliency begin to form in the sort of -?
RF: Right, so of course, in the defense world there’s been considerable interest in resilience - the idea that, yeah, you know, you survive. The system survives, possibly in a degraded state, but it will come back up. There’s also been very good academic research. The challenge of moving resiliency into the real world or the commercial world, sometimes, is: we’re still messing up the basics. Right? 
I mean, we’ve still got people with bad passwords out there. We’ve still got bad password reset policies. We’ve still got companies that will send emails saying, “Your password will expire in seven days. Please click on this link to reset it.” And it’s a real email - it’s not a phishing attack. So resiliency is important, but the problem is that you’ve got this massive skill difference between the agencies that do cyber extremely well and the agencies that just make the most basic mistakes. And some of these problems that we’re still finding today have been around forever. The first piece of ransomware - when was it? Take a guess. Is it a new problem? I mean, it seems like one.
AA: I don’t know; like Morris Worm era?
RF: Yeah. You’re exactly right. We’re going back to-
AA: Like early 90’s or-
RF: AIDS Trojan-[inaudible]-gap, which was a hand-mailed, snail-mail piece of malware that was on a disk-
AA: Like with your AOL disks, right? Those CDs?
RF: No, no. They actually sent you a disk. And it was sent around to - now I’m going from fallible memory - I think about 5,000 people in the first round. And what it would do was, somewhere in the EULA, it said if you don’t pay money we will make your system unusable, or words to that effect. And we’re still dealing with that problem. Wanna-
AA: It’s the newest thing that’s come out.
RF: -WannaCry was exactly that problem right? It just didn’t use five and a quarter inch disks. It was a little bit faster. 
But it’s exactly the same problem set. So if we’re still fighting the battles we were fighting - what, almost 30 years ago, 25 years ago - my question to you is: is this community really ready to step into the complexity that resiliency brings? Because you pay for resiliency - if you study biological systems, you will see that the most diverse system is usually the most resilient; that diversity of species leads you to resiliency. And there is a real challenge there, because another word for diversity, when we’re talking about differences between systems, is complexity. Systems that can move and adapt are potentially more complex systems, and currently we’re dealing with a world where people are still getting nailed by 25-year-old-style attacks. 
So there is a tension that I don’t think we’re honest enough about in the industry: there is a huge difference in capability between the top end and the bottom end - and I say bottom end with no disrespect to the people or organizations I would put at that bottom end of security, because they shouldn’t have to care about security. Right? 

When you drive your car, you don’t sit there and go, “Huh, I know exactly how the timing chain is working, or the ignition.” You don’t think about the mechanics of it; you just drive your car. And so this idea that suddenly everybody has to be part of the security solution - to me, that doesn’t speak to human nature. I think we have to build much more human-centric systems - systems that actually accommodate how you and I actually work, rather than going: “Well, Richard is going to be completely logical at all times. He will step into the security world.”
AA: Right. I mean, it’s sort of bananas that we’re still pointing out that humans are fallible - is this news to anyone? I think the car analogy is a very good one. Humans are intricately involved in the driving of cars - although maybe not as much heading forward - but the systems around them have gotten much better at helping that person stay safe, and when they make mistakes there are airbags and seatbelts and all of those sorts of things. And I’m not seeing that tolerance for mistakes in the security space as much-
RF: Ah, and it’s the word, right? If we change that word from security to a much nicer word, to me, which is “safety”, you start to design things in a different way. You don’t design a car going, “I will make my car secure” - although, having seen some of the talks, maybe we should be doing more of that. You design a car going, “How do I make it safe?” How do I accommodate the real, fallible nature of people - like how people get distracted when they drive, so maybe I’ll make the steering wheel rumble when they get to the edge of the lane. 
That’s designing for safety rather than designing for security, and I like the difference in mindset very much, because I think when you switch to a safety mindset you become more human. You go, “What’s the human really going to do?” rather than saying, “You must be sitting and thinking about security at all times” - because guess what, you don’t. And you shouldn’t have to, right? Security is a means to an end.
AA: So you’ve been in this a long time thinking about it deeply on all kinds of levels when you see people starting to think from that cyber safety perspective, what are they doing? What are the of top three, top five things that if you’re thinking from a safety mindset they’re doing?
RF: Well, the first thing is that you recognize that people are people, and - bluntly - the next four things are all: recognize that people are people. 
AA: Look back at number one, right?
RF: Yeah, exactly. The key to a safety-based system is that you should make the default safe, and you should make the default usable for what the person is trying to accomplish, too. If the car wouldn’t start because it’s “so safe”, that’s not very helpful. So it’s really this very human-centered design that looks at how you and I will naturally operate those machines, and how we can use that as a way of accomplishing a task. 
Remember, again - we’ve talked about this - security is a means to an end; it is not an end in and of itself. I don’t do security because I’m doing security; I do security because I’m trying to keep my people and my data safe. And again, there is a lot of that security culture - it’s like the cult of security, where security becomes “the thing”. I’d like to remind people that security is a way of getting something done. 
You don’t sit down at your computer to do security, typically, unless you’re maybe one of ten people in an organization. You sit down at a computer to do business, and security is an enabler to that business, but it’s not your primary focus.
AA: Yeah. I’ve heard security talked about a lot as a measure of quality, just the way we measure other things, particularly in the software development world. Security should sit right there next to performance or the ability to do the task, right? Security should be right there in the mindset - sort of pulling security into the business, right? The business should really be the owner of the security. 
RF: Right, because security is a business function; it is a business enabler. And what it does is not remove risk, because business is risk. That’s what we do: we go out and sell a product on the market - that’s risk; you get behind the wheel of your car - that’s risk. You accept it, and what you do is mitigate that risk. 
So we sometimes get into this “We’re going to make it completely secure. We’re going to make it completely safe.” - Nah it’s all risk. It’s about managing risk in an intelligent way to let you do what you want to do.
AA: So, you know, just to bring it down to a concrete level, on this show we like to talk about things that we liked and things that we thought were cool. Where have you encountered somebody really using that mindset, coming up with something that made you think, ‘That’s a cool way to think about safety,’ or human-centric kind of - 
RF: Right. So I mean, of course, us. Right, I mean that’s the reason that I’m here. But-
AA: Well, how do you do it then? What are the ways that you do it in your own product? 
RF: Sure. So a lot of the design work that we do starts from the user interface. It starts with: ‘How is this product going to engage with the user?’ 
So one of the pieces of research that I did with a colleague back at university - we bought an eye tracker, and eye trackers are so much fun, right? It’s a little dual camera that sits at the bottom of the screen and shows you exactly where the eyes are looking on the screen. We got a bunch of students - I think we paid them in pizza, which is like the universal student currency - and we had a little competition. 
They had to complete some tasks against a timer - how fast and how accurately could they do the tasks - but that wasn’t the experiment. The experiment was that, half-way through, some security warnings came up from the machine, and we could see exactly how long they looked at the warning and what they were looking for - and what they were looking for was the cancel button.
They didn’t generally read what the warning was - it was, “Let me get rid of this annoying box so I can keep on with my task,” right? “Is there a close button?” No, really - the eyes went straight to the top right; it was a scary piece of research. So you have to design your interactions with a security product with the human in mind. Here’s an example. Say I’m going to interrupt you in a task with a security warning like, “Your Flash is out-of-date. It needs updating.” If you’re not on the web, wait until you switch tasks - that’s a much more human way of doing it, rather than: you’re working on a script or a Word document or whatever, and suddenly this box pops up. 
It’s getting in your flow. If I can put that distraction off to later in a way that provides you the same level of protection - make it human-centric; it’s a very simple example. Also, start looking at risk-adaptive rather than risk-static. So one of the announcements we made at the show was dynamic data protection, or risk-adaptive protection. 
The basic idea there is: in the human world, we don’t just let somebody in the door of our house and then pay no more attention to them. You keep an eye on what they’re doing, and you adapt based on what they’re doing. It’s not: ‘Here are the rules for entering my house,’ and, you know, that’s that. 
So this idea of adapting to user risk, so we can better protect the user, is really important. If you’re not behaving like you, maybe I should be putting protection around that. And it’s not just about maybe you’re bad; it’s also maybe you’re compromised. Part of the mindset here is that you have to change the lens; when people start talking about those kinds of systems, it’s all about “the bad user”. No - it’s all about user protection. Maybe your machine is compromised by malware, and we should take care of that for you. What else -
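[Editor’s note: the risk-adaptive idea Dr. Ford describes can be pictured as a mapping from an observed user-risk score to a graduated enforcement action, instead of one static policy fixed at login. The sketch below is purely illustrative - the function name, thresholds, and actions are invented for this note and are not Forcepoint’s actual implementation.]

```python
# Illustrative sketch of risk-adaptive protection: enforcement tightens
# as a user's observed risk score rises, rather than applying one
# static rule set when they "enter the house". All thresholds and
# action names are hypothetical.

def protection_action(risk_score: float) -> str:
    """Map a 0-100 behavioral risk score to an enforcement action."""
    if risk_score < 30:
        return "monitor"             # normal behavior: just observe
    if risk_score < 60:
        return "require_mfa"         # unusual behavior: step-up authentication
    if risk_score < 85:
        return "block_exfiltration"  # risky behavior: restrict data movement
    return "suspend_session"         # likely compromised: cut access

# A user drifting away from their behavioral baseline gets progressively
# stronger protection - not because they are "bad", but because their
# account or machine may be compromised.
for score in (10, 45, 70, 90):
    print(score, "->", protection_action(score))
```

The point of the graduated mapping is exactly the lens shift discussed above: the stronger actions read as protecting the user from a possibly compromised account, not as punishing a “bad user”.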
AA: Unfortunately, I think we’ve got to leave it there based on time. But Dr. Ford this was awesome.
RF: It’s a pleasure.
AA: Now, I’m like a student too. I’m still going to call you Dr. Ford. 
RF: Well thank you very much. 
AA: Awesome. Appreciate it!
RF: Thanks.